On the Cover

Computing architecture inspired by the brain

IBM’s intelligent supercomputer, Watson, has come a long way since it became the champion of the popular quiz show, Jeopardy.

In developing Watson to become the ultimate supercomputer to help professionals make better-informed decisions in a world of big data, IBM has now turned its attention to building an entirely new underlying architecture for cognitive computing – inspired by the human brain.

Speaking at an open lecture held at Wits University recently, IBM senior VP and director of Research Dr John Kelly said: “We think it is possible to build a very interesting architecture that will be more human-like and more biologically inspired than what we’ve built in the past with brute force.”

According to Kelly, the intention behind Watson is not to purely try and recreate the human brain: “But we must produce computer systems that will allow human beings to live in a big data world. If we don’t – if we don’t extract this information – we’re starting to leave a lot of knowledge on the floor, and I think we’ll just be completely overwhelmed by data and actually start to make bad decisions.”

Watson recently completed its “medical residency” and has produced its first commercially available applications for doctors and health insurance companies.

Despite Watson’s achievements thus far, it still has a long way to go, as Kelly explains: “The brain consumes roughly 20W of power; it’s a very efficient machine considering what it does. By comparison, Watson in Jeopardy consumed 85 000W – it is still leaps away from what the human mind and brain can do.”
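The power gap Kelly describes can be checked with simple arithmetic, using only the two figures quoted in the article:

```python
# Rough power comparison between the human brain and the
# Jeopardy-era Watson, using the figures quoted in the article.
brain_watts = 20        # human brain, approx.
watson_watts = 85_000   # Watson during Jeopardy, approx.

ratio = watson_watts / brain_watts
print(f"Watson used ~{ratio:,.0f}x the power of a human brain")
# → Watson used ~4,250x the power of a human brain
```

That factor of a few thousand is what Kelly means by "leaps away": closing it is the point of the brain-inspired architecture work described below.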


Dharmendra Modha, manager of cognitive computing systems and master inventor at IBM, says for over half a century, computers have been “little better than calculators with storage structures and programmable memory”. On the other hand, the human brain remains the world’s most sophisticated computer. “[The brain] can perform complex tasks rapidly and accurately using the same amount of energy as a 20W light bulb in a space equivalent to a two-litre soda bottle.”

[Image: Dr John Kelly, IBM]

“Making sense of real-time input flowing in at a dizzying rate is a Herculean task for today’s computers, but would be natural for a brain-inspired system,” says Modha.

As a result, IBM’s researchers (and a team of collaborators from multiple universities) have been working on a major cognitive computing project called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE).

“We are trying to bring together neuroscience, supercomputing and nanotechnology to create a radically different computer architecture that mimics the function, low power, small size and real-time [operation] of the human brain,” says Modha.

Phase two of the SyNAPSE project has been awarded about $21 million by the Defense Advanced Research Projects Agency (DARPA). The project involves the use of advanced algorithms and silicon circuitry to enable cognitive computers to learn through experience, find correlations, create hypotheses, and remember and learn from the outcomes.

So far, Modha says, they have successfully created breakthrough chips on the scale of a worm brain, and now they are on the path to creating a new chip on the scale of the human brain. The chips are currently able to recognise simple images, numbers and letters, all through pure learning and zero programming.
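The “pure learning and zero programming” Modha describes rests on adjusting synaptic weights from activity rather than writing explicit rules. A classic toy version of this idea is the Hebbian update rule (“neurons that fire together wire together”); the sketch below is purely illustrative and is not IBM’s actual SyNAPSE chip logic:

```python
# Toy Hebbian learning rule: synapses strengthen in proportion to the
# joint activity of their pre- and post-synaptic neurons.
# Illustrative only - not IBM's SyNAPSE chip design.

def hebbian_update(weights, pre, post, lr=0.1):
    """Return updated weights: rows are output neurons, columns are inputs.
    Each synapse grows by lr * (input activity) * (output activity)."""
    return [
        [w + lr * pre_i * post_j for w, pre_i in zip(row, pre)]
        for row, post_j in zip(weights, post)
    ]

# One update step: two input neurons, two output neurons.
w = [[0.0, 0.0], [0.0, 0.0]]
pre = [1.0, 0.0]    # input neuron 0 is active, input neuron 1 is silent
post = [1.0, 1.0]   # both output neurons fire
w = hebbian_update(w, pre, post)
print(w)  # → [[0.1, 0.0], [0.1, 0.0]]
```

Only the synapses from the active input strengthen, so repeated exposure to a pattern (an image, a digit, a letter) carves it into the weights without any task-specific programming.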

“In the quest for cognitive computing, we have mapped the largest long-distance wiring diagram from the monkey brain. Each bundle of wire represents a physical connection the brain makes over long distances. Taken as a whole, this is the most comprehensive glimpse that we’ve had into the structure of the brain and how the structure’s modular dynamics give rise to behaviour,” says Modha.


Kelly says the intention is to try and understand the patterns within the brain: “Neurons and synapses are not a clear network compared to the way we structure today’s computers with 1s and 0s on a bunch of layers of capabilities.

[Image: IBM says it is possible to build a new computing architecture that is more human-like and biologically inspired than traditional systems.]

“We are trying to physically produce a new underlying structure for a computer system that will be truly cognitive. We will still be off by about three orders of magnitude in terms of the density of synapses and neurons in the human brain; we’ll also still be off by about two orders of magnitude in terms of power consumption.

“But this will give us, literally by the end of this year, a new platform upon which to experiment in learning techniques and new architectures that are based on these massive networks rather than traditional computing architectures.”

IBM envisions being able to package the computational power of a human brain in a container the size of a shoe box. Kelly says by the end of the year, the company will have an extremely powerful chip. “We will put hundreds of these devices together into something like a box and hopefully we will have a device that will be a truly learning system, and one that is the next step in the era of cognitive computing.”

According to Kelly, the next-generation Watson is “a really big deal”. “The first-generation Watson in Jeopardy took a single question and presented a single answer. But that is not the way most complex problems present themselves; they certainly don’t present themselves in healthcare that way.

“In healthcare, as in many situations, you are presented with many different pieces of information. Some of it is contradictory and some of it incomplete; what you want to do is get those different pieces of information down to a set of possible causes and some statistical weighting of those. This new technology does that.”
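The reduction Kelly describes – many noisy pieces of evidence distilled into a weighted set of possible causes – is, in spirit, Bayesian hypothesis ranking. The sketch below uses invented priors and likelihoods and is not Watson’s actual algorithm, only a minimal illustration of statistical weighting:

```python
# Naive Bayesian ranking of hypothetical causes given observed evidence.
# All numbers are invented for illustration; this is not Watson's algorithm.

priors = {"cause_A": 0.5, "cause_B": 0.3, "cause_C": 0.2}

# P(evidence_i | cause) for two independent pieces of evidence.
likelihoods = {
    "cause_A": [0.9, 0.2],
    "cause_B": [0.4, 0.8],
    "cause_C": [0.1, 0.1],
}

# Unnormalised posterior: prior times the product of the likelihoods.
scores = {}
for cause, prior in priors.items():
    score = prior
    for l in likelihoods[cause]:
        score *= l
    scores[cause] = score

# Normalise so the weights sum to 1, then rank.
total = sum(scores.values())
posterior = {c: s / total for c, s in scores.items()}
for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.2f}")
```

Note how the evidence overturns the prior: cause_A starts as the most likely, but the second piece of evidence fits cause_B far better, so cause_B ends up ranked first – the kind of contradictory, incomplete input Kelly describes.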

Kelly says that images will be the next big thing for Watson in the medical field: “The goal here is to produce a system that will be as accurate, or more accurate, than a radiologist.”

“Where we want to go with Watson is not just question and answer, and not just using paths to find things. We have very dense research going on in complex analysis and complex interactions with Watson,” says Kelly.


When asked if he foresees supercomputers such as Watson ever actually replacing humans in certain situations, Kelly says: “There will be an explosion in machine-to-machine interactions, and I can see a Watson-like machine taking over decision making in that type of non-critical situation where the risk level is reasonable. Where I do not see Watson or cognitive systems going is in replacing the final human judgment – that is a very difficult thing to do.”

Kelly says an important factor is that, thus far, they have found no evidence to suggest that computers are capable of being creative. “Humans have this ability that no matter how much data and experience we have, we can still create outside of our knowledge. While Watson has surprised us and we’ve wondered how it knew something, we’ve always been able to trace back to where it found the information, we’ve never found it being creative.”

In healthcare, Kelly says Watson will never replace a doctor: “You will always want a doctor as a final decision maker. But we will reach a point where as a patient you will demand that your doctor has access to a Watson, because the amount of information that is out there is simply beyond what a human being can possibly know.

“We’re not trying to replace humans, we’re trying to bring a new set of tools to the party that will allow humans to be much more effective in this world of enormous data. As long as in those critical situations which require judgment and creativity, we have a human being involved, it will be just fine,” says Kelly.

According to Kelly, the way best-practice medical protocols are currently produced in large medical institutions involves a group of highly experienced doctors and experts sitting around a table, discussing and sharing different protocols and outcomes, and then deciding on best practice based on that.

“Think about having a very intelligent Watson at that table, not only as a resource to search massive amounts of data and statistical rankings of different protocols, but actually being able to say: ‘You decided that this was the best protocol, but let me tell you why you would probably want to reconsider that’.

“It might actually be able to debate and present new information to decision makers. That will have a profound impact on the role of IT in literally every industry.”


Chief innovation officer at IBM Bernard Meyerson says Watson marks a turning point in computing. “But while Watson can understand all manner of things and learns from its interactions with data and humans, it is just a first step into a new era of computing that’s going to produce machines as distinct from today’s computers as those computers are from the mechanical tabulating devices that preceded them. A host of technologies is coming that will help us overcome our limitations and transform the way we interact with machines and with each other.”

Meyerson says over the coming years, computers will become increasingly adept at dealing with complexity: “Rather than depending on humans to write software programs that tell them what to do, they will program themselves so they can adapt to changing realities and expectations. They’ll learn by interacting with data in all of its forms – numbers, text, video, etc. And, increasingly, they’ll be designed so they think more like humans.

“This isn’t about replacing human thinking with machine thinking,” says Meyerson, adding that such a thing is unnecessary. “Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results – each bringing their own superior skills to the partnership. The machines will be more rational and analytic. We’ll provide the judgment, empathy, moral compass and creativity.

“In my view, cognitive systems will help us overcome the ‘bandwidth’ limits of the individual human.”