Thursday, August 8, 2013

IBM Develops Programming Language Inspired By The Human Brain

Alex Knapp, Forbes Staff
I write about the future of science, technology, and culture.
8/08/2013 @ 2:57AM
Two years ago this month, IBM announced that it had developed a cognitive computing chip inspired by human neural architecture. That chip was developed as part of the SyNAPSE project, which has a long-term goal of building a computing system that can handle tasks that are relatively easy for human brains but hard for computers. Today, the company announced that it has created a programming architecture for those chips, so that developers can design applications for them once the chips are a reality.
Why a new programming architecture? Because once IBM’s “cognitive computers” are a reality, they’ll require a kind of programming far different from that of today’s computers, which still trace their lineage to FORTRAN, a programming language IBM developed in the 1950s for its 704 mainframe.
“We have developed a whole new architecture,” project leader Dr. Dharmendra S. Modha told me. “So we can’t use the language from the previous era. We had to develop a new programming model.”
The eventual hardware for IBM’s cognitive computers is built around small “neurosynaptic cores.” The cores are modeled on the brain and feature 256 “neurons” (processors), 256 “axons” (memory) and 64,000 “synapses” (connections between neurons and axons). In the long term, IBM hopes to build a cognitive computer scaled to 100 trillion synapses.
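To put those fixed dimensions and the scale of that goal in concrete terms, here is a minimal Python sketch; the class and field names are hypothetical, not IBM’s actual design:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NeurosynapticCore:
        # Fixed dimensions of one core, as described in the article.
        neurons: int = 256       # "neurons" act as processors
        axons: int = 256         # "axons" act as memory
        synapses: int = 64_000   # connections between neurons and axons

    # Rough scale of the long-term goal: 100 trillion synapses divided by
    # 64,000 synapses per core comes to roughly 1.6 billion cores.
    cores_needed = 100_000_000_000_000 // NeurosynapticCore.synapses

At well over a billion cores, that target helps explain why the programming model treats cores as composable building blocks rather than individually programmed units.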
That computer still lies in the future, but a simulation of it exists on IBM’s “Blue Gene” supercomputer at the Argonne National Laboratory. Using that simulator, the IBM team has developed the programming architecture that it expects eventually to run on real cognitive computers.
That language is built around “corelets” – object-oriented abstractions of the neurosynaptic cores. In the programming architecture, each corelet exposes just 256 inputs and 256 outputs, which are used to connect the cores to one another.
“Traditional architecture is very sequential in nature, from memory to processor and back,” Dr. Modha said. “Our architecture is like a bunch of LEGO blocks with different features. Each corelet has a different function, then you compose them together.”
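IBM’s actual Corelet language was not published in detail in this article, so the following Python sketch only illustrates the composition idea Dr. Modha describes – each corelet as a black box with a fixed set of pins that gets wired to other corelets. All names here are hypothetical:

    class Corelet:
        # Illustrative stand-in: the internals (neurons, axons, synapses)
        # are hidden; only the 256 input and 256 output pins are visible.
        N_PINS = 256

        def __init__(self, name):
            self.name = name
            # Each output pin can fan out to several (corelet, pin) targets.
            self.wiring = {pin: [] for pin in range(self.N_PINS)}

        def connect(self, out_pin, other, in_pin):
            # Wire one of this corelet's outputs to another corelet's input.
            self.wiring[out_pin].append((other, in_pin))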
As an example, if you wanted to use a cognitive computer to find a face in a crowd, one corelet might look for colors, another for nose shape, still another for cheekbones, and so on. Each corelet by itself would run quite slowly, but all of the processing would happen in parallel.
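Continuing the hypothetical sketch above, that face-finding example might wire independent feature detectors into a combiner like this:

    # Each detector corelet runs slowly on its own, but all run in parallel.
    color = Corelet("color-detector")
    nose = Corelet("nose-shape-detector")
    cheekbones = Corelet("cheekbone-detector")
    face = Corelet("face-combiner")

    # Fan each detector's first output into a separate input of the combiner.
    for pin, detector in enumerate((color, nose, cheekbones)):
        detector.connect(out_pin=0, other=face, in_pin=pin)

Because each detector would occupy its own core (or set of cores), all of them fire at once – which is where the parallelism in Dr. Modha’s LEGO analogy comes from.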
This means that once a cognitive computer is a reality, it can be used for pattern recognition and other problems that involve sifting through big data – tasks that traditional computers just aren’t very good at.
Of course, the flip side is also true – the cognitive computers won’t be as good at things that today’s computers are good at. That’s why Modha sees computers of the future as being a hybrid of SyNAPSE chips and traditional computers.
“Today’s computers are very good at analytics and number crunching,” Dr. Modha told me. “Think of today’s computers as left brained and SyNAPSE as right brained.”
Of course, even those hybrid computers won’t be a replacement for the human brain. The IBM chips and architecture may be inspired by the human brain, but they don’t quite operate like it.
“We can’t build a brain,” Dr. Modha told me. “But the world is being populated every day with data. What we want to do is to make sense of that data and extract value from it, while staying true to what can be built on silicon. We believe that we’ve found the best architecture to do that in terms of power, speed and volume to get as close as we can to the brain while remaining feasible.”
So what kinds of applications might this programming language be useful for? The team already has several in mind. For example, the chips in wearable computing glasses might process visual data for those with impaired vision. Search and rescue robots might be equipped with the chips in order to find injured people during emergencies. According to Dr. Modha, the possibilities are endless.
“What will it enable?” he said. “The march of time will tell. We’re creating the platform for a huge community that we hope will get involved. Progress never stops – once you’ve scaled one peak, the next one manifests.”
