September 3, 2021
Back, deep in the history of humankind, there were darker times: living conditions were sub-optimal, hygiene was poor, and the knowledge of “health” was laughable by modern standards. Danger was all around the people who lived then (if you could actually call it living). Knowledge was hard to come by, and it was guarded very closely and carefully by the few who clung to it. Of course, the astute historians amongst you will know immediately that I refer to that period known as “the 1990s”.
When speaking about technology and incident management history, I’ll begin this journey at the dawn of the adoption of distributed computing (a.k.a. the 1990s). For those of you unfamiliar with the near-vestigial term, distributed computing refers to the point where systems were broken into backends running on one or more servers, which interacted with applications running on PCs and workstations that were computers in their own right. I start here because this is the point where complexity in computing took an almighty step up. This, in my humble opinion, is when different technology teams really needed to begin interacting to resolve the issues that arose, and hence “fixing a user issue” evolved into “incident management”. Sure, there will be those of you who will argue with my timeframe or point out with great aplomb that there were mainframes long before that, but you are forgetting a few critical factors: this is my blog series, you are welcome to write your own on your personal home mainframe, and you should probably go enjoy retirement to the fullest while you still can.
To understand some of the challenges of managing ‘tech issues’ back then, it’s important to set the scene a bit – so, for context, here’s what I remember as the backdrop:
Without exaggeration, it was literally the Wild West and I was pretty much Doc Holliday.
But to bring us back to the main topic: what was the equivalent back then of modern-day incident management? Given the backdrop I’ve painted above, you may expect that the diagnosis and resolution of IT issues at the time was chaotic. To be honest, it was far worse than you could ever imagine. In many cases, the issues were caused by the technology team itself. With relatively unfettered access to all the different environments (dev, test, live), it wasn’t uncommon for someone to accidentally take an action in live when they were meant to be working in the test environment. Also, not dissimilar to today, system changes almost always led to some form of unintended consequence.
If I think about the method of managing issues back then, in many ways my Wild West analogy holds - sorting out an issue meant gathering a posse, giving them a loose set of directions, and having them go at it. It was messy, it was eventually effective, and you could almost always expect some form of collateral damage.
But if you look at the lifecycle of an incident in a bit more detail, the stark contrast to today begins to take shape:
To say that in the early days technology incident management was a chaotic dark art is not an overstatement. It’s not surprising, really, that the natural evolution from this point was toward a more structured and formulaic approach, which I’ll explore in the next part of this series - The evolution of Incident Management part 2: the advent of ITIL. Nonetheless, I look back on those early days fondly. Maybe it was a sense of being an insider in a special club, perhaps it was the jingling of my spurs as I walked through the halls, or maybe it’s the memory of the mainframe guy sitting in the dark corner of the room in his sandals and socks, yelling over to our posse, “this is why we should just stick with the mainframe”.
Jim Korchak is a twenty-five-year technology veteran whose career has centered on automation, application service management, and the software delivery lifecycle. He has extensive experience in the financial services sector both in the UK and the USA where he has held a number of senior positions helping to shape technology strategy and execution.
He is currently a Principal Consultant for Resilient Technology Specialists, where he advises companies on how to drive improvement in the application lifecycle, from requirements management to production operations, by leveraging best practices and intelligent automation.
A recognized thought leader in Application Management and Technology, Jim has been quoted by Forrester Research, has held advisory positions for several technology start-ups, and has spoken publicly as a lecturer for Hult International Business School as well as at a number of industry events as keynote speaker.