Intelligent Processes, Tech and Business

Brain vs. Computer

Can computers do everything our brains do? Not yet, but AI has allowed computing to replicate more than 75% of our nervous system.

Part of a series on how going digital means the gradual substitution of neurons by transistors.

There are several ways to categorize the brain anatomically and functionally. The typical anatomical split distinguishes the spinal cord and peripheral nervous system, the cerebellum, and the cerebrum with its lobes. Although our knowledge of how the brain works is still partial at best, the functions assigned to each area under this anatomical split would be roughly as follows:

  • Brainstem, spinal cord, and peripheral nervous system. Input/output for the brain coordinating the sending of motor signals and the receiving of sensory information from organs, skin, and muscle.
  • Cerebellum. Complex movement, posture and balance.
  • Occipital Lobe. Vision, from basic perception to complex recognition.
  • Temporal Lobe. Auditory processing and language.
  • Parietal Lobe. Movement, orientation, recognition, and integration of perception.
  • Frontal Lobe. Reasoning, planning, executive function, parts of speech, emotions, and problem-solving. It also hosts the primary motor cortex, which initiates movement together with the parietal lobe and the cerebellum.
  • Memory is apparently distributed throughout the whole brain and cerebellum, and potentially even in parts of the brain stem and beyond.

We now take the top ten functions and look at how computers hold up against the brain in each of them. We will see that computers already win easily in two of them. There are four areas in which computers have been able to catch up in the last decade and are fairly close or even slightly ahead. Finally, there are four areas in which human brains are still holding their own, among other things because we don't really understand how they work in our own brains.

Areas in which computers win already

Sensory and motor inputs and outputs (Brainstem and spinal cord).

Sensory and motor inputs and outputs coordinate, process, and carry electrical signals originating in the brain to muscles or organs, and carry sensory inputs originating in the periphery to the brain, where they are integrated as sensory stimuli. This goes beyond pure transmission, with some adjustment such as setting the "gain" or blocking some pathways (e.g. while asleep).

This functioning has been replicated for quite some time with both effector systems like motors ("actuators") and sensory systems ("sensors"). We may not yet have managed to replicate the detailed function of every human effector and sensory system, but we have replicated most of them and extended beyond what they can do.
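The basic loop can be sketched in a few lines. The sensor reading, "gain", and actuator command below are simulated stand-ins (a hypothetical thermostat), not a real hardware API; the sketch only illustrates the transmit-and-adjust role described above.

```python
# Minimal sketch of a sense-process-act loop, the computing analogue of the
# brainstem's input/output role. Sensor, gain, and actuator are simulated.

def thermostat_step(reading_c, target_c, gain=0.5, asleep=False):
    """Turn a sensory reading into an actuator command.

    Like the nervous system, the loop can adjust its "gain" or block
    the pathway entirely (e.g. while "asleep").
    """
    if asleep:            # pathway blocked: no motor output
        return 0.0
    error = target_c - reading_c
    return gain * error   # actuator command proportional to the error

# One pass of the loop: room is at 18 °C, target is 21 °C.
command = thermostat_step(18.0, 21.0)
print(command)  # → 1.5 (heater drive signal)
```

A real deployment would replace the simulated reading with a sensor driver and the returned value with a command to an actuator, but the coordinate-adjust-transmit pattern is the same.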

The next frontier is the universalization of actuators and sensors through the "Internet of Things", which connects them wirelessly over the mobile internet, and the integration of neural and computing processes, already achieved in some prosthetic limbs.

Basic information processing and memory (Frontal Lobe and the brain as a whole)

Memory refers to the storing of information in a reliable long-term substrate. Basic information processing refers to executing operations (e.g. mathematical operations and algorithms) on the information stored in memory.

Basic information processing and memory were the initial reason for creating computers. The human brain has adapted only with difficulty to these tasks and is not particularly good at them. It was only with the development of writing as a way to store information and support its processing that humans were able to take information processing and record keeping to an initial level of proficiency.

Currently, computers are able to process and store information at levels far beyond what humans are capable of. The last decades have seen an explosion in the capability to store forms of information, like video or audio, in which the human brain previously had an advantage over computers. There are still mechanisms of memory that are unknown to us and that promise even greater efficiency in computers if we can copy them (e.g. our ability to remember episodes); however, they have to do with the efficient processing of those memories rather than with information storage itself.

Areas in which computing is catching up quickly with the brain

Complex Movement (Cerebellum, Parietal and Frontal Lobes).

Complex movement is the orchestration of different muscles, coordinating them through space and time and minutely balancing their relative strengths to achieve a specific outcome. This requires a detailed understanding of the body's state (proprioception) and the integration of the information coming from the senses into a coherent picture of the world. Some of the most complex examples of movement are driving, riding a bicycle, walking, or feats of athletic or artistic performance like figure skating or dancing.

Repetitive and routine movement has been possible for a relatively long time, with industrial robots available since the 1980s. Human complex movement, on the other hand, seemed beyond our ability to recreate. Even relatively mundane tasks like walking were extremely challenging, while complex ones like driving appeared outright impossible.

However, over the last two years, we have seen the deployment of the latest AI techniques, together with increased sensory and computing power, make complex movement feasible. There are now reasonably competent walking robots, and autonomous cars are already on the streets of some cities. Consequently, we can expect some non-routine physical tasks like driving or deliveries to be at least partially automated.

Of course, we are still far away from a general system like the brain that can learn and adapt to new complex motor behaviors. This is what we see robots in fiction being able to do. After recent progress, this seems closer and potentially feasible, but it still requires significant work.

Visual processing (Occipital Lobe)

Vision refers to the capture and processing of light-based stimuli to create a picture of the world around us. It starts by distinguishing light from dark and basic forms (the V1 and V2 visual cortices), but extends all the way up to recognizing complex stimuli (e.g. faces, emotion, writing).

Vision is another area in which we had been able to do simple detection for a long time and have made great strides in the last decade. Basic vision tasks like perceiving light or darkness were feasible some time ago, but even simple object recognition proved extremely challenging.

The development of neural network-based object recognition has transformed our capacity for machine vision. Starting in 2012, when a Google algorithm learned to recognize cats through deep learning, we have seen a veritable explosion of machine vision. Now it is routine to recognize writing (OCR), faces, and even emotion.
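The underlying idea can be illustrated at toy scale: a single artificial neuron that learns the logical AND function from labeled examples by gradient descent. Real vision networks stack millions of such units into deep layers; this sketch only shows the learning mechanism, with made-up training data.

```python
# A single artificial neuron (logistic unit) learning AND from examples.
# This is the smallest possible illustration of the mechanism behind
# neural-network recognition, not a vision system.
import math

def train(samples, epochs=5000, lr=0.5):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # sigmoid
            grad = out - target          # cross-entropy error signal
            w1 -= lr * grad * x1         # nudge each weight to reduce error
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(data)
predict = lambda x1, x2: 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b))) > 0.5
print([predict(x1, x2) for (x1, x2), _ in data])  # → [False, False, False, True]
```

The 2012-era breakthrough was essentially discovering that stacking enormous numbers of these units, fed with enormous amounts of data, makes recognition of cats, faces, and writing tractable.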

Again, we are still far from a general system which recognizes a wide variety of objects like a human, but we have seen that the components are feasible. We will see machine vision take over tasks that require visual recognition with increasing accuracy.

Auditory processing and language (Temporal Lobe, including Wernicke’s area, and Broca’s area in the frontal lobe)

Auditory processing and language refer to the processing of sound-based stimuli, especially those of human language. This includes identifying the type of sound and the position and relative movement of its source, and separating specific sounds and speech from ambient noise. In terms of language, it includes both the understanding and the generation of language.

Sound processing and language have experienced a similar transformation after years of being stuck. Sound processing has been available for a long time, but with only limited capability in terms of position identification and noise cancellation. In language, the expert systems used in the past could do only limited speech-to-text, text understanding, and text generation, with generally poor accuracy.

The move to brute-force processing through deep learning has made a huge difference across the board. In the last decade, speech-to-text accuracy has reached human levels, as demonstrated by professional programs like Nuance's Dragon or the emerging virtual assistants. At the same time, speech comprehension and generation have improved dramatically. Translators like Google Translate or DeepL are able to almost equal the best human translators. Chatbots are increasingly gaining ground in understanding and producing language for day-to-day interactions. Sound processing has also improved dramatically, with noise cancellation increasingly comparable to human levels.
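For a feel of machine language generation at its very simplest, a bigram model can be sketched in a few lines: count which word follows which in a corpus, then generate by sampling. Modern systems replace the counting with deep networks trained on vastly larger corpora; the toy corpus and seed below are purely illustrative.

```python
# A deliberately tiny bigram language model: learn word-to-word
# transitions from a toy corpus, then generate text by sampling them.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words have been observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    rng = random.Random(seed)   # fixed seed so the output is reproducible
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:         # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Every pair of adjacent words in the output was seen in the corpus, which is why the text is locally plausible but globally meaningless; scaling this idea up with deep networks is what closed the gap with human fluency.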

Higher-order comprehension of language is still challenging, as it requires a wide corpus and eventually the higher-order functions we will see in the frontal lobe. However, domain-specific language seems closer and closer to being automatable in most settings. This development will allow the automation of a wide variety of language-related tasks, from translating and editing to answering the phone in call centers, which currently represent a significant portion of the workforce.

Reasoning and problem solving (Frontal Lobe)

Reasoning and problem solving refer to the ability to process information at a higher level to come up with intuitive or deductive solutions to problems beyond the rote application of basic information processing capabilities.

As we have seen, basic information processing at the brute-force level was the first casualty of automation. The human brain is not designed for routine symbolic information processing such as basic math, so computers were able to take over that department quickly. However, non-routine tasks like reasoning and problem solving seemed to be beyond silicon.

It took years of hard work to take over structured but non-routine problem-solving. First with chess, where Deep Blue eventually managed to beat the human champion. Later with less structured or more complex games like Jeopardy, Go, or even Breakout, where neural networks and eventually deep learning had to be recruited to prevail.
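The recursive search idea behind chess programs can be shown on a much simpler game. The sketch below applies exhaustive minimax to Nim (each player removes one to three sticks; whoever takes the last stick wins); real chess engines add evaluation functions and pruning on top, but the core recursion is the same.

```python
# Exhaustive game-tree search (minimax) for Nim: a position is winning
# if some legal move leaves the opponent in a losing position.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(sticks):
    """True if the player to move can force a win from this position."""
    if sticks == 0:
        return False  # the previous player took the last stick and won
    return any(not wins(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    """Pick a move that leaves the opponent in a losing position, if any."""
    for take in (1, 2, 3):
        if take <= sticks and not wins(sticks - take):
            return take
    return 1  # every move loses: take the minimum and hope

print([n for n in range(1, 13) if not wins(n)])  # → [4, 8, 12]
```

The search rediscovers Nim's known theory: multiples of four are losing positions. Chess has far too many positions for this brute force alone, which is why Deep Blue needed heuristics, and why Go eventually needed neural networks.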

We are still further away from human capacities here than in the other domains in this section, even if we are making progress. Humans are incredibly adept at reasoning and problem solving in poorly defined, changing, and multidimensional domains such as love, war, business, and interpersonal relations in general. However, we are starting to see machine learning and deep learning find complex relationships that are difficult for the human brain to tease out. A new kind of science is being proclaimed in which humans work in concert with algorithms to tease out even deeper regularities of our world.

Areas in which the brain is still dominant

Creativity (Frontal Lobe)

Creativity can be defined as the production of new ideas, artistic creations or scientific theories beyond the current paradigms.

There has been ample attention in the news to what computers can do in terms of "small c" creativity. They can flawlessly create pieces in the style of Mozart, Bach, or even modern pop music. They can find regularities in data beyond what humans can, proving or disproving existing scientific ideas. They can even generate ideas randomly, putting together existing concepts and coming up with interesting new combinations.
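That random recombination of concepts is trivially easy to mechanize. The concept lists and seed below are invented for illustration; judging which of the resulting combinations is actually worth pursuing remains the hard, human part.

```python
# "Small c" creativity by brute force: randomly recombine existing
# concepts into candidate ideas. The lists and seed are illustrative.
import random

adjectives = ["self-driving", "solar-powered", "foldable", "shared"]
objects = ["bicycle", "greenhouse", "library", "kitchen"]

def propose(n, seed=42):
    """Generate n candidate idea combinations (reproducible via the seed)."""
    rng = random.Random(seed)
    return [f"{rng.choice(adjectives)} {rng.choice(objects)}" for _ in range(n)]

for idea in propose(3):
    print(idea)
```

The generator will happily emit a "solar-powered library" alongside nonsense; filtering signal from noise is exactly the capability the next paragraph argues computers still lack.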

However, computers still lack the capability to decide what really amounts to a new idea worth pursuing, or a new art form worth creating. They have also failed to produce a new scientific synthesis that overturns existing paradigms. So it seems we still have to understand much better how our own brain goes about this process before we can replicate the creation of new concepts like Marx's Communist Manifesto, the creation of new art forms like Gaudí's architectural style or Frida Kahlo's painting style, or the discovery of new scientific concepts like radiation or relativity.

Emotion and empathy (Frontal Lobe)

Emotion and empathy are still only partially understood. However, their centrality to human reason and decision making is clear. Emotions not only serve as in-the-moment regulators, but also allow us to predict the future very effectively by simulating scenarios on the basis of the emotions they evoke. Emotion is also one of the latest developments in the brain, with spindle cells, one of the last types of neurons to appear in evolution, apparently playing a substantial role.

Reading emotion from text or facial expressions through computing is increasingly accurate. There are also some attempts to create chatbots that support humans with proven psychological therapies (e.g. Woebot), or physical robots that provide some companionship, especially in aging societies like Japan. Attempts to create emotion in computing, like Pepper the robot, are still far from generating actual emotion or real empathy. Maybe emotion, and responding to emotion, will remain a solely human endeavor; or maybe emotion will prove key to creating truly autonomous Artificial Intelligence capable of directed action.

Planning and executive function (Frontal Lobe)

Planning and executive function are also at the apex of what the human brain can do. They are mostly based in the prefrontal cortex, an area of the brain that is the result of the latest evolutionary steps from Australopithecus to Homo Sapiens Sapiens. Planning and executive function allow us to plan, predict, create scenarios, and decide.

Computers are a lot better than humans at making "rational" choices. However, the complex interplay of emotion and logic that allows for planning and executive function has so far been beyond them. Activities like entrepreneurship, with their detailed envisioning and planning of future scenarios, are beyond what computers can do right now. In planning and self-control, speed is for the most part not so important, so humans might still enjoy an advantage. There is also ample room for computer-human symbiosis in this area, with computers supporting humans in complex planning and executive function exercises.

Consciousness (Frontal lobe)

The final great mystery is consciousness. Consciousness is the self-referential experience of our own existence and decisions that each of us feels every waking moment. It is also the driving phenomenon behind our spirituality and sense of awe. Neither neuroanatomy nor psychology nor philosophy has been able to make sense of it. We don't know what consciousness is, how it comes to happen, or what would be required to replicate it.

We can't even start to think about what generating consciousness through computing would mean. Probably it would need to start with emotion and executive function. We don't even know whether creating a powerful AI would require replicating consciousness in some way to make it really powerful. Consciousness would also create important ethical challenges, as we typically assign rights to organisms with consciousness, and computer-based consciousness could even allow us to "port" a replica of our conscious experience to the cloud, raising many questions. So consciousness is probably the phenomenon that requires the most study to understand, and the most care in deciding whether we want to replicate it.

Conclusion

Overall, it is impressive how computers have closed the gap with brains for many of the key functions of our nervous system. In the last decade, they have surpassed what our spinal cord, brainstem, cerebellum, occipital lobe, parietal lobe, and temporal lobe can do. It is only in parts of the frontal lobe that humans still keep the advantage over computers. Given the speed advantage of transistors over neurons, this will make many of the tasks that humans currently perform uncompetitive. Only frontal lobe tasks seem to remain dominant for humans at this point, making creativity, emotion and empathy, planning and executive function, and consciousness itself the key characteristics of the "jobs of the future". Jobs like entrepreneurship, high-touch professions, and the performing arts seem to be the future for neurons at this point. There might also be opportunities in centaurs (human-computer teams) or in consumer preference for "human-made" goods and services.

This will require a severe transformation of the workforce. Many jobs currently depend mostly on other areas, like complex motor skills (e.g. driving, item manipulation, delivery), vision (e.g. physical security), or purely transactional information processing (e.g. cashiers, administrative staff). People who have maximized those skills will need time and support to retrain for more frontal-lobe-focused work. At the same time, technology continues to progress. As we understand emotion, creativity, executive function, and even consciousness, we might be able to replicate or supplement parts of them, taking the race even further. The "new work has always emerged" argument made sense when just basic functions had been transitioned to computing, but with probably more than 75% of brain volume already effectively digitized, it might be difficult to keep it going. So this is something we will have to consider seriously.
