Managed services are an accident of the history of IT

The history of managed services sits right at the centre of the history of IT. The concept of outsourcing a high-level business or technical function, provided remotely over a telecommunications network, is taken for granted; we rely on it every time we open a Web browser or pick up a smartphone. But only 25 years ago, even regional connectivity was a luxury enjoyed only by the very largest corporations. It was expensive, demanding millions in capital outlay and deep technical skill to keep running. It also required a close working relationship with whoever ran each portion of the network.

Since then, the number of networks worldwide has increased a million-fold and most of them now support the Internet Protocol. The number of devices connected to those networks has also increased a billion-fold. How we got to where we are today is a combination of the history of telecommunications, including the telephone network, and of computers in general. Sometimes that history overlaps: AT&T, the original US public telephone provider, founded a research arm in 1925 called Bell Labs, which has been responsible for an extraordinary number of technical innovations, including many that make modern technology possible.


Managed services have their roots in the late 1980s. At this time, the Internet is a fledgling curiosity. Personal computers are rare. Data networks are proliferating thanks to the 1984 break-up of AT&T into regional operating companies. And for large companies, demand for connecting computers over networks is exploding.

Data networks in the late 1980s were a hodgepodge of different protocols and hardware standards. Conceptually, though, many of them worked the same way: the network operator multiplexed cheap local connections on to its backbone and off to other local connections. In his classic book The Cuckoo’s Egg, Clifford Stoll describes hunting down a foreign hacker who had broken into his network over an international link from Germany. The hacker then used Stoll’s local network to try to break into sensitive military and contractor networks across the US. In the academic environment, these sorts of international connections were common; outside academia, only the very richest companies could afford them.

By the early 1990s, each of the global Fortune 500 companies was spending substantial sums managing its networks. The Simple Network Management Protocol (SNMP) was also maturing rapidly, and it proved invaluable for maintaining geographically dispersed networks. Tools like HP’s OpenView could gather information from SNMP sources and present a real-time graphical view of the state of a network. If the network went down, it needed fixing. This methodology – called break-fix, because if it broke, someone (hopefully) fixed it – was almost entirely an internal function, but it began to be adopted by companies that ran no infrastructure of their own and instead specialised in maintaining their clients’.
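The break-fix workflow can be sketched in a few lines. This is an illustrative simulation only – the device names and poll results below are invented, and a real network management system such as HP OpenView would gather this state over SNMP rather than from a hard-coded dictionary:

```python
# A minimal break-fix poll loop (hypothetical devices and statuses).
# In practice the "poll" would be SNMP queries to each device; here it
# is simulated with a plain dictionary of up/down results.

def find_broken(statuses):
    """Return, in sorted order, the devices whose last poll reported a failure."""
    return sorted(dev for dev, up in statuses.items() if not up)

# Simulated poll results: True means the device answered its health check.
poll = {
    "router-nyc-01": True,
    "router-lon-02": False,    # down -> needs a break-fix visit
    "mux-fra-07": True,
    "leased-line-tky": False,  # down -> needs a break-fix visit
}

for dev in find_broken(poll):
    print(f"DISPATCH: {dev} is down; open a break-fix ticket")
```

The point of the sketch is how reactive the model was: nothing happens until a poll reports a failure, and the only output is a dispatch – someone still has to go and fix the box.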

From 1990 to 1995, a huge wave of ‘down-and-out’ (downsize and outsource) swept the industry, prompted by the fad for business process re-engineering. Competition to look after third parties’ resources, including their networks, was fierce.


The market was still only for those who could afford it. The cost and skills required to manage even a country-wide network were considerable, to say nothing of the high-level diplomatic skills needed to deal with telcos on a daily basis.

Around this time, something interesting happened. The Internet, which had been growing extremely rapidly, began to present itself as a viable online alternative. Its reliability and functionality were poor compared to something like a dedicated line into the New York Stock Exchange, but its barriers to entry were incredibly low. In 1996, social and Internet commentator Clay Shirky, then a lone consultant, explained to senior executives at AT&T how he provided Internet hosting services.

“They thought AT&T’s famous ‘five 9s’ reliability (services that work 99.999% of the time) would be valuable, but they couldn’t figure out how $20 a month could cover the costs for good Web hosting, much less leave a profit.”

Shirky’s methodology – editing code and uploading it to the live server – horrified his listeners.

“Oh yeah, it was horrible,” he said.

“Sometimes the servers would crash, and we’d just have to reboot and start from scratch.”

He concluded: “The AT&T guys correctly understood that the income from $20-a-month customers wouldn’t pay for good Web hosting. What they hadn’t understood – were in fact professionally incapable of understanding – was that the industry solution, circa 1996, was to offer hosting that wasn’t very good.”

It would take another 10 years for the combination of Internet connectivity and easy-to-implement hosted services to make managed services possible for all.