Bioprogramming, Blockchain, Digital Governance, Energy and Transportation, Integrated Reality, Intelligent Processes, Neurogamification, Tech and Business

Beyond Digital: 6 Exponential Revolutions – The Book

I have put together my explorations of Exponential Technologies in my new book “Beyond Digital: Six Exponential Revolutions that are changing our world” (in Spanish: “Más Allá de Digital: Seis Revoluciones Exponenciales que están cambiando el mundo”), which you can find on Amazon in both physical and digital formats.

The book is my attempt to give anyone who wants to understand what is happening a window onto six new waves of change coming our way, through an accessible explanation of the technological underpinnings and plenty of real-world examples. The six technological revolutions I cover are:

  1. Intelligent Processes. The application of AI to information processing and the transformation it will represent in software, business, and government processes. Many processes that now require human intervention will be digitized through AI, allowing cheaper, faster, and higher-quality outcomes. This could be the end of drudge work and lousy customer experience, but it might bring significant technological unemployment and inequality
  2. Integrated Reality. How IoT, Virtual and Augmented Reality, robots, and 3D printing are blurring the lines between the physical and digital worlds, allowing us to interact with the physical world with the same ease we enjoy in digital, and to embody ourselves in the digital world with the same subjective experience as in the physical one. This will bend our physical world even more to our will, but it could create alienation and escapism as in Ready Player One, or a techno-controlled police state that makes 1984 seem liberal.
  3. The New Energy and Transportation Matrix. How solar, electric, and autonomous technologies will change how we produce energy and transport ourselves, potentially bringing an age of free, clean energy and swift, safe transportation. We could overcome global warming, ecological impact, and the toll our current transportation system takes on human lives and time. At the same time, this new matrix will tear down the energy and transportation infrastructure jobs on which many of us depend.
  4. Digital Governance. How Blockchain technologies, together with cryptography and the cloud, are ushering in a new age of financial markets, trust, and law. Digitizing money, trust, contracts, and the law gives them the same speed and quality we have grown used to in the digital world. Still in its early stages, it holds the promise of making our world freer and fairer, with the parallel dangers a bug or a virus could pose if computer code runs our financial, legal, and even democratic systems.
  5. Bioprogramming. Understanding the code in which life is written, and learning to manipulate it, is giving us surprising power and flexibility in using and changing life for our own purposes. The ability to edit, program, and even build living organisms from scratch allows us to change living beings like we change computer programs. This has amazing potential in terms of healthcare, human augmentation, and biofabrication, but unexpected risks as we play Mother Nature at an accelerated rate.
  6. Neuroprogramming. Our understanding of neurobiology and neuroeconomics is decoding how our brain, the most complex structure we know of in the Universe, operates and thinks. Being able to understand our neural circuits is opening new paths in creating technology that replicates the best design principles of our brain and interacts with it effectively. It will be used to further accelerate our technology, augment human capabilities, and cure the human suffering linked to brain disease; at the same time, it has the potential to take digital manipulation even further, robbing us of free choice.

The book would not have been possible without the help of my wife, my family, my friends, my colleagues at Deloitte and McKinsey, the readers of my blog, and some dear readers of the beta version who painstakingly read and helped me improve the English and Spanish versions of the book. I am really grateful to all of them. As Mario Vargas Llosa says: “Escribir no es un pasatiempo, un deporte. Es una servidumbre que hace de sus víctimas unos esclavos” (“Writing is not a hobby or a sport. It is a bondage that makes slaves out of its victims”). That bondage is mostly borne by those around the slave, as he happily bangs on the keyboard.

Tech and Business, Values

Technology for People and Planet

We have a unique opportunity to improve billions of lives and the whole planet through the technological revolution. However, myopic profit- and power-maximizing behavior is creating more problems and existential risks rather than solving the ones already on our plate. We need to put values at the center and use technology in the service of people and planet. In Europe, we have the opportunity to show the world that this is viable.

Technology is changing our world faster than ever, and new trends like AI look poised to transform it even further. At the same time, we seem to have lost sight of the values and ethical underpinnings of the technological transformation. Democracies have been manipulated, personal data sold, employment and wages put at risk, and China, the US, and Russia seem locked in a new cyber cold war. We need to remember what we value: liberty and the pursuit of happiness, giving to each according to her needs, making our little blue dot sustainable. If not, we risk ending up in some dystopia created by mindless technology.

There has never been a time of such opportunity through technology. Since the industrial revolution, we have gotten used to a breakthrough every handful of decades: the steam engine, the internal combustion engine, electricity… Now, without having finished the implementation phase of the digital revolution of the 1990s and 2000s, we have many different world-changing revolutions in the making. Artificial Intelligence (AI) has demonstrated its capacity to do more and more with less and less human intervention. Distributed Ledger Technologies like Blockchain have unleashed the early stages of a new wave of technological, financial, and governance revolution with the likes of Bitcoin. The connection of billions of sensors and actuators is changing the fabric of our world through the IoT. New energy technologies like solar are promising to change our energy equation. Biotechnology is increasingly able to manipulate genes beyond our wildest dreams. Cybersecurity and cyber attacks are locked in a deadly struggle that plays out in milliseconds. Neurobiology is showing us how the brain works behind the curtains, and creating the tools for incredible enhancement and irresistible manipulation.

Just one of these technological revolutions could have the potential to significantly improve the lives of billions of people around the world, and repair much of the harm we have done to our planet. We have the opportunity to accelerate progress on the Millennium Development Goals even further, reducing hunger, poverty, disease, and war, while giving everyone a common base of education. We can transform energy consumption and tackle Global Warming before it is too late. In essence, we have the opportunity to bring an age of abundance and prosperity for all. We need to start focusing technology on people and the planet, instead of thinking just about profits and power.

Up to now, technology has been mainly viewed as an asset for financial gain or a weapon for hegemonic advantage at the nation-state level, rather than as a tool to help solve our problems. We have digital monopolies that have brought free applications we use broadly, but that have monopolized the internet through network effects and the aggressive disposal of competitors, manipulated human brains into addiction through careful design, and trafficked in citizen data to achieve previously unheard-of valuations. We have an AI and cyber-attack race between the US, China, and to a lesser extent Russia, with an understanding that the winner takes all. And sadly, we have much less progress than is needed or possible on reducing existing problems.

This attitude has also created three additional looming threats that might materialize, challenging humanity further, if we continue down the path of profits and power at all costs:

  • AI-induced job destruction might happen to different degrees. Some claim that this is business as usual, and we will “only” face 5 to 7 decades of declining wages for those who can’t adapt fast enough, as in the first and second industrial revolutions. Others claim that we are like horses seeing the combustion engine arrive and will be mostly or totally out of jobs, even if that could be a good thing. In both cases, we will have to deal with substantial social disruption and increasing inequality.
  • Techno-authoritarian dictatorships can combine traditional authoritarian methods with technology to create a police state that is almost impossible to overthrow. China is already deploying pervasive face and person tracking and an integrated “citizen score” to reward the good and keep a close watch on those with less stellar scores. This can be used for good, but it has the potential to take Stalinist Russia to a whole new level. Combined with autonomous weapon systems, it could allow a very narrow group to control a state, or even the whole Earth, with little recourse. AI-induced job destruction accelerates this by creating authoritarian leanings among the “digital have-nots” and threatening the logic of democracy.
  • General AI could render humans obsolete. While there is a strong debate on the possibility and timing of General AI, figures like Bill Gates or the late Stephen Hawking have considered it important enough to warrant attention. How AI is developed, and the consequences of the AI arms race that China and the US seem to have embarked on, could be earth-shaking. Scenarios considered by thinkers like Nick Bostrom and Max Tegmark run the gamut from human annihilation to paradise on earth, including AI-enhanced techno-authoritarian dictatorships.

It will be key to bring technology back to values and ethics, so that the current and looming threats are managed in a way that is positive for humanity. We have seen what unbridled technology brings: user data breaches, manipulated election results, smartphone addiction, seemingly benevolent monopolies, reduced opportunity for most, and increasing inequality. At the same time, technology holds the power to improve the human condition and give incredible opportunities to everyone. Each of us, and the organizations we represent, need to place people and the planet first, even if that reduces profits and power slightly. Of course, only regulation will ensure a level playing field in which those playing for people and planet don’t get outcompeted by those who are willing to destroy in order to get more.

In Europe, we are in a unique situation. We don’t have the tech giants to “win” the unwinnable race, but we have a respected international position, a strong and large economy, and a very developed talent base. We need to be the example for the rest of the world of how technology can be used for the people’s and the planet’s good. We need to put regulation in place so tech players can bring innovation while respecting the values and rules of the game we cherish and hold true, and while contributing financially to sustaining a just society. If we manage to do it, we might show that there is a viable alternative to unchecked techno-capitalism with its masses of dispossessed citizens, and to techno-authoritarianism through a police state 3.0. This alternative will put people and the planet before other goals, like profit and power, which are really instrumental rather than fundamental.

Neurogamification, Tech and Business

3 ways in which a Neuron is better than a computer

Neurons are one of the most remarkable inventions of evolution. Modern neuroscience is allowing us to understand how neurons and the brain work at increasingly deep levels. This understanding has led to an appreciation of how optimized neurons are for information processing. Some of the brain’s neatest tricks are already being copied by technologists (e.g. self-organization), with more to come (e.g. information and energy efficiency).

Anatomy of a Neuron

We have known since the early drawings of Santiago Ramón y Cajal, made possible by the neuron-staining techniques of Camillo Golgi (they shared the Nobel Prize in 1906), that all neurons share four key elements. The dendrites are the input: they listen to the electrical signals of other neurons or to the outside world. The axon is the output: it transmits the electrical signal from the neuron to other neurons or to muscles. The cell body (soma) is the power supply, providing the energy and chemicals the neuron needs to function. The synapse is the connection, where an axon transmits the electrical signal to the dendrites of another neuron.

Dendrites are the input terminals, like the keyboard, mouse, or network connection of a computer. Most are internal, processing the signals from other neurons. A few connect directly to the outside world, allowing us to sense it. Our body has dendrites that are sensitive to a wide variety of stimuli, including light (eyes), vibration, sound, and movement (ears), physical contact and heat (skin), stretch and force (muscles), and chemicals (nose and tongue). Neurons can have thousands of dendrites connected to other neurons and combine those signals with different weights and time delays. This signal processing is done analogically and is very versatile, making scientists think that a lot of our learning could reside there.

The axon is the output terminal: it takes the electrical signal the neuron computes and transmits it to the neurons that are listening. An axon is digital: it either fires an action potential or it doesn’t. The action potential is a voltage spike from -70mV to +40mV that lasts milliseconds and is followed by a refractory period of ~10ms. The initial segment of the axon is where the input from the dendrites determines whether the action potential happens or not, through an extremely ingenious electrochemical process characterized by Hodgkin and Huxley (Nobel Prize 1963). Consequently, the axon “speaks” to the dendrites through trains of electrical pulses at frequencies in the hundreds of Hz.
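
To make this dendrite-and-axon arithmetic concrete, here is a minimal sketch in Python of a “leaky integrate-and-fire” neuron, a textbook simplification rather than the full Hodgkin-Huxley model; all weights and constants are illustrative assumptions:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative constants,
# not a biophysical Hodgkin-Huxley model).
V_REST, V_THRESHOLD = -70.0, -55.0   # membrane potentials in mV
TAU, DT = 10.0, 1.0                  # membrane time constant and step, in ms
REFRACTORY = 10                      # refractory period in ms

def simulate(weights, spike_trains):
    """weights: synaptic strengths; spike_trains: 0/1 input per synapse per step."""
    v, spikes, cooldown = V_REST, [], 0
    for t in range(spike_trains.shape[1]):
        if cooldown > 0:                              # refractory: cannot fire
            cooldown -= 1
            continue
        # Dendrites: analog weighted sum of incoming spikes.
        current = np.dot(weights, spike_trains[:, t])
        # Leaky integration: drift back toward rest, pushed up by input.
        v += (-(v - V_REST) + current) * DT / TAU
        if v >= V_THRESHOLD:                          # axon: all-or-nothing spike
            spikes.append(t)
            v, cooldown = V_REST, REFRACTORY
    return spikes

rng = np.random.default_rng(0)
weights = rng.uniform(0, 30, size=8)                  # 8 excitatory synapses
inputs = (rng.random((8, 100)) < 0.2).astype(float)   # random presynaptic spikes
print("spike times (ms):", simulate(weights, inputs))
```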

The cell body or “soma” is the power supply. Like any other cell, it has the instructions (DNA), the power supply (mitochondria) and the protein factories. It supplies axons and dendrites with the significant amount of energy and chemicals they need to function and keeps the inside of the cell in the right working order to be able to generate and receive the pulses.

The synapses are responsible for communication: they are chemical junctions between neurons that transmit instructions from axon to dendrites whenever the axon fires an action potential. The instructions are mainly about transmitting the electrical signal, but they also include many others that determine whether the synapse strengthens or disappears. The synapses are in a way the “software” of the neuron, as they determine the strength and type of communication the dendrites of a neuron receive.

Advantages of neurons over electrical circuits

Contrary to what might be thought, neurons are much better than the computers we have built at many processing tasks. There are three main advantages, all of which technology companies are trying to copy to make computing more powerful.

Self-organization and learning.

The first advantage has to do with the plasticity of neurons and the brain. Brain development is extraordinarily complex, but it is based on less than one gigabyte of information in human DNA. The rest is self-organization based on external information and adaptation during development. For comparison purposes, Windows 98 was the last operating system that would have fit in the human genome, with Windows 10 already 20x that size and current “intelligent” systems like Watson many orders of magnitude larger.
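
As a back-of-the-envelope check of the one-gigabyte figure, assuming the commonly cited ~3.2 billion base pairs in the human genome:

```python
# Back-of-the-envelope: information capacity of the human genome.
base_pairs = 3.2e9                     # commonly cited genome size (assumption)
bits_per_base = 2                      # 4 possible bases (A, C, G, T) = log2(4) bits
gigabytes = base_pairs * bits_per_base / 8 / 1e9
print(f"~{gigabytes:.2f} GB")          # ~0.80 GB uncompressed, under 1 GB
```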

The self-organization and learning advantage of the brain makes great sense in the context of evolution. Without any explicit designer or ample instruction-storage capacity, whatever emerged had to be specified very sparsely.

Self-organization and learning are starting to be copied, which has led to the boom of artificial intelligence around machine learning and deep learning, techniques that borrow self-organization and learning principles from neurons in artificial “neural networks”. These techniques are still in their early days, with the most complex networks currently in use probably in the thousands of neurons, compared to the brain’s 100 billion.
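
For a flavor of how such artificial “neural networks” learn, here is a minimal two-layer network trained on XOR with plain numpy; the layer sizes, seed, and iteration count are illustrative choices, not a recipe from the text:

```python
import numpy as np

# A tiny two-layer artificial neural network learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR truth table

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)        # input -> 4 hidden "neurons"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)        # hidden -> output neuron

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                         # weighted sums + nonlinearity
    out = sigmoid(h @ W2 + b2)
    delta2 = (out - y) * out * (1 - out)             # backpropagated error signals
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= h.T @ delta2; b2 -= delta2.sum(axis=0)     # nudge "synaptic" weights
    W1 -= X.T @ delta1; b1 -= delta1.sum(axis=0)

print(out.round(2).ravel())                          # converges toward [0, 1, 1, 0]
```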

Information efficiency.

A second, related advantage is the much lower amount of information neurons require to learn. Human and animal brains are able to learn very quickly compared to machine learning models. A human needs just limited experience with words or driving to perform very accurately, while computers need to “drive” millions of miles or go through billions of words, something a human would be incapable of doing. The root cause of this advantage is still speculative, but it could lie in the fact that neurons integrate computing, communication, and information storage together, without separating data, transmission, and computation explicitly.

Information efficiency is also deeply necessary for evolution and highly selected for. The organism that can learn to identify a threat or an opportunity more quickly will have a definitive advantage over slower learners.

There is frantic research into more information-efficient machine learning models, both in terms of software (e.g. new techniques like deep learning) and hardware (e.g. IBM’s neuromorphic chips).

Energy efficiency.

Finally, brains also have an incredible advantage in terms of energy efficiency. Our brain runs on approximately 20 W of power, about 20% of the roughly 100 W resting energy budget of a human. Even so, it is extremely energy-efficient compared to a computer of comparable processing power. According to Forbes, a smartphone consumes about 1 kWh per year (roughly 0.1 W on average), and a laptop, according to some sources, around 200 Wh per day (roughly 8 W on average). Yet a supercomputer approaching brain-like processing power consumes megawatts, on the order of a million times more than the brain’s 20 W.
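
A rough order-of-magnitude comparison, with every figure an explicit assumption (estimates of the brain’s synaptic event rate in particular vary by orders of magnitude):

```python
# Order-of-magnitude energy comparison; every figure here is a rough assumption.
brain_watts = 20                 # ~20 W of continuous power for the brain
body_watts = 100                 # ~100 W resting power budget for the whole body
supercomputer_watts = 20e6       # ~20 MW for a large modern supercomputer

print(f"brain share of body budget: {brain_watts / body_watts:.0%}")            # 20%
print(f"supercomputer vs brain:     {supercomputer_watts / brain_watts:,.0f}x") # 1,000,000x
# Assuming ~1e15 synaptic events per second (a widely varying estimate):
print(f"energy per synaptic event:  {brain_watts / 1e15:.0e} J")                # 2e-14 J
```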

Energy efficiency is another cardinal design principle of nature. Brains are already very expensive energetically at a biological level, so the brain has optimized itself as much as possible while keeping its processing power.

Energy efficiency is a key design principle for computers, especially for mobile phones and the Internet of Things. Combining analog and digital processing, as the brain does, could apparently be a key to increased energy efficiency in our own digital tools.

Long-term electrical advantages

However, electrical digital computers have other advantages over neurons that will probably allow them to dominate long term.

First, electrical circuits currently run at clock speeds in the gigahertz range, with some circuits reaching tens of GHz. This is roughly 10 million times faster than what neurons can muster with their millisecond action potentials and ~10ms refractory periods. A 10-million-fold difference is equivalent to the difference between evolutionary time scales (i.e. how long DNA takes to evolve new species) and the speed at which we live our lives and improve our economy and technology.

Second, digital storage allows for perfect recall of data. Our brains are not optimized for exact data, which is quite useless in real life. So computers have an advantage in terms of storing, retrieving and processing detailed information, while the brain is quite adept at extracting patterns and getting the gist of an issue.

Finally, digital computers are almost infinitely scalable. Our brains are famously limited by the breadth of the birth canal, with our heads being as big as they can biologically get. Computers, on the other hand, are being stacked in greater and greater numbers through cloud technologies with potentially limitless processing power and storage.

Will Moore’s law take electronics beyond what neurons can do? It is a distinct possibility, and it could bring an upheaval in how the world is organized. Neuron-only organisms are already at a clear disadvantage against digitally-enabled ones. If that enablement is made more direct through brain-machine interfaces, we could have a race of super-cyborgs like the one Yuval Noah Harari describes in Homo Deus. If, alternatively, we manage to manufacture consciousness in electronics, it might decide it has no use for the outdated neuron-based monkeys that brought it here.

In any case, the power of electronics and neurons has to be harnessed towards our values and goals. Ethics, human rights, global development, freedom and the pursuit of happiness have to be front and center in what we do with our neurons and electronics.

Energy and Transportation, Intelligent Processes, Tech and Business

Move fast and break (other people’s) things: The Uber fatality and Facebook data breach

The Uber autonomous car fatality and the Facebook data breach are signposts of Big Tech hubris. Big Techs don’t take into account the broader society’s resources, interests, and rules. They move fast in their race for super-profitable monopolies, even if they break other people’s lives or privacy in the process. Big Tech companies should learn from their mistakes and change their attitude to avoid regulation or antitrust action.

Apparently, the Uber autonomous car fatality and the Facebook data breach do not have much in common. However, on closer examination, they are both indicative of the hubris and arrogance that make Big Tech companies think they are beyond working together with the broader economy or following regulations. Breaking things is OK as long as you move fast to win the race, even if other people are hurt by the breaking. Let’s examine each incident in turn.

Uber autonomous car fatality

March 18th, 2018 might pass into history as the first time an autonomous robot killed a person without human intervention. It was not the Terminator or Agent Smith; the footage looks like many other traffic accidents. A badly illuminated road at night, where suddenly a figure appears too fast for the car to stop. It is all over in less than a couple of seconds.

First, my condolences to the loved ones of the pedestrian. Traffic accidents are a sad and sudden way to lose someone. A single second can make the difference between life and death. According to the WHO, there are more than a million traffic-related deaths per year, one every 25 seconds, a large number of them in the developing world. Even in the US, traffic injuries are among the top one or two accidental causes of death for almost all demographics.

The causes of the accident are clear and seem to exonerate Uber from the traditional perspective. Looking at the video of the accident, it seems it would have been hard for any human driver to avoid. The failsafe driver in the Uber only saw the pedestrian approximately one second before impact and could do nothing. The pedestrian was crossing at night, in the middle of a busy road, with no illumination, and more than 100m from the closest pedestrian crosswalk. A human driver would probably have ended up with the same outcome, and would likely not have been considered at fault, based on the police chief’s statement.

However, from another perspective, the accident is inexplicable and inexcusable. Given the power of technology for focused and precise action with all available information, it seems negligent to have this type of accident occur. The investigation will ultimately determine whether the systems didn’t pick up the signal or there was another, deeper failure. However, it is clear that there are simple ways in which the pedestrian could have been detected. Couldn’t Uber be informed of which smartphones are nearby? The pedestrian was most probably carrying a smartphone, and her carrier could have relayed that information to the Uber car, at least making it pause and slow down because of the undetected obstacle. Hasn’t some glitch like this happened before to one of the other autonomous car companies? It probably has, but each company is developing the technology in secret to try to outmaneuver the others, so best practices are not shared, putting lives in danger.

Big Tech companies are competing among themselves in a race to make autonomous vehicles possible. The goal is laudable, as it could drastically reduce the more than 1 million deaths a year and the untold suffering traffic accidents cause through death and injury. However, the race is being run for profits, so there is little collaboration with other players that could help, like telcos, or with the other participants in the race. Given that human lives and human safety are at stake, the race should be run with society in mind. It would take a change of attitude: from Big Techs focused on winning and creating super-profitable monopolies, to Big Techs focused on winning against traffic deaths and drudgery to create a better world. The current attitude will be self-defeating in the long run, while a win-win attitude would be rewarded by regulators and the public at large.

The Facebook data breach

Mark Zuckerberg is famous for having a poster in the Facebook offices that says “Move Fast and Break Things”. That is an embodiment of the hacker and agile culture, and something very commendable when the breaking costs your own money. However, when you move fast and break things with your users’ data and trust, “Facebook made mistakes” sounds like an understatement. Especially when that data breach was allegedly used to change the results of a US election (admittedly not in a way that Zuckerberg or most Facebook executives would have wanted) and was kept secret while it went on for almost four years.

First, let’s understand what the data breach was. Apparently, Facebook app developers could download and access detailed information about their users and their users’ friends. As usual, the app required user permission to access that information. That permission was construed to mean that all the information could be downloaded and mined without limit, outside the context in which it was originally granted. Cambridge Analytica, the company at the center of the scandal, obtained its data through a seemingly innocuous personality test app called “thisisyourdigitallife” that promised to read out your personality if you gave it access to all the data. The company then downloaded all the information from the users and their friends and used it to fuel its data-mining campaigns.

The data breach was really no breach at all, as Facebook willingly handed the information wholesale to its developers. Facebook afterwards took no responsibility for what the developers did with the information and didn’t audit or police it in any way. Cambridge Analytica has come out into the open because of its signature role in the US election, but there are probably hundreds of other developers that were mining personal information from Facebook for less high-profile direct marketing and sales efforts.

Facebook has only apologized as the pressure has mounted, and the proposed solutions are to clamp down on developers. Maybe Facebook is not legally at fault and has all the necessary legal permissions from its users. However, this highlights that the current system gives unregulated powers to firms like Google and Facebook in terms of handling information. Their business models are predicated on capturing untold amounts of data about their users and then turning around and selling it to advertisers or whoever is willing to pay. Should they be able to accumulate information without regulation or responsibility? Do we want information that can turn an election to be sold indiscriminately and unaccountably to the highest bidder?

Facebook is another example of a Big Tech that needs to mend its ways. It needs to take to heart its users’ interests, and not just the amount of advertising dollars. Information should only be sold with user consent and with the utmost care. Users should be informed about the economic transactions that are happening with their data, and maybe get a cut of them. Data portability should be available, making sure the data belongs to the user, who can migrate to a different service without losing all his or her history. Number portability was necessary to create a competitive telecom marketplace, and user data portability will be necessary to limit the internet monopolies.

Neurogamification, Tech and Business

Centaurs: Transistors and Neurons working together

Centaurs represent the alliance of neuron and transistor, combining the best of each world. In the Centaur world, some activities will be totally automated, while others rely on two flavors of neuron-transistor combination. Humans in the Centaur world will face a different paradigm, in which working with transistors becomes the new literacy, and disruptions to labor markets, wealth distribution, and human existence need to be managed.


Since Deep Blue’s victory over Kasparov in 1997, most people believe computers are dominant in chess. However, when a no-holds-barred tournament was held, it was human-computer teams that won, not computers on their own. The combination of human thought plus computer analysis of moves was more powerful than any computer alone. At the same time, even purely human chess has been transformed by computers. During matches, human players can’t use digital aids; however, computers play an ever-increasing role in training, match analysis, and practice. This has led to earlier and earlier grandmaster-level players, with the record currently standing at twelve years of age.

The concept of the centaur, combining the best of both worlds, is not new. Centaurs were Greek mythological beings with the body of a horse and the torso and head of a human. They combined the speed and stamina of a horse with the skill and intelligence of a human. Similarly, human civilization is not based only on neurons but relies heavily on the natural DNA-based world. We continue to use animals and plants to produce food. Natural reserves continue to be some of the most beautiful sights in the world. In the same way, it will not make sense to reinvent with transistors what neurons can already do reasonably well, at least for some time.

The Centaur World

Neurons will be substituted by transistors in many undertakings. As seen previously, transistors are much better in areas where speed, accuracy, exact memory, and brute-force processing predominate. We will see transistors continue to quickly take over activity in those domains. This will be mostly a good thing, as those characteristics don’t come easily to humans and the work can be alienating to perform. The farmer toiling in the field or the worker in Henry Ford’s factory might be romanticized a posteriori. However, it was hard, repetitive, backbreaking work that is much more efficiently performed by machines.

Neurons do very well in other areas, for which speed and accuracy are not so relevant. Areas like creativity, emotion, common sense, lateral thinking, empathy, creating meaning, and even consciousness are very well handled by the human brain, and exceptional speed or accuracy is less relevant to them. Consequently, we can expect transistors to extend these capabilities rather than substitute for them.

There is also a premium for humans, as there is a premium for organic food or handcrafted goods. Using humans to deliver a service or product is becoming a mark of quality and exclusivity. And the centaur concept will make sure that the level of quality delivered is higher than if there were only a computer involved. A Michelin-star dining experience is the combination of the cooking and waiting staff with the technology that makes everything run smoothly. Beyond the shock value of an “automated restaurant”, it wouldn’t make sense to eliminate staff completely.

The centaur world will be a three-tiered world in which transistors take over some tasks, while neurons and transistors work side by side in others:

  • Radical digitalization of speed, accuracy, and exactness. Any activity in which speed, accuracy, and exactness are the key outputs will be completely digitized, with cursory human supervision. The same way the tractor eliminated more than 95% of farm employment, we can expect the digitization of most “utility” tasks in which we just need to get the job done. Whole areas like transportation or administrative work will disappear into datacenters and autonomous vehicles. Parts of highly skilled jobs, like analyzing medical images, will be digitized to improve speed, accuracy, and exactness. Anything in which “the job just needs to get done” and speed, accuracy, and exactness count for most of the value will be lost to humans.
  • Centaur service for a premium experience and superior results. In parallel, a new world of premium centaur service will emerge. A computer might know the diagnosis and how to treat an illness, but a doctor will do a much better job of connecting emotionally and getting the patient to understand and act on the treatment. Most wealth management, tax records, and filings might be digitized, but a financial advisor will help the client understand what they want and how to get it. A child could learn from a computer system with all the knowledge and gamification, but it is the teacher who motivates and engages the imagination to create the future scientist. Maybe the computer can reproduce a violin piece exactly, but it is the performer who creates a vibrant and unique experience by imbuing it with emotion. Most customer service could be digitized to online channels and chatbots, but sometimes there is no substitute for a helpful human to eliminate confusion and create loyalty.
  • Digitally aided human creativity. Finally, there will be a domain of human creativity, innovation, and meaning creation. This domain will be fully supported by computers that enable much-increased performance of those uniquely human skills. Artists will create new works of art, with transistors supporting the ideation, design, and execution. Entrepreneurs will conceive of new business ideas, with transistors enabling them to test, build, and deploy them at blinding speed and almost no cost. Writers and film-makers will create new masterpieces of fiction and non-fiction, with transistors expediting the research, production, and distribution process to their eager audiences. Scientists will conceive groundbreaking new hypotheses to explain the world, with transistors doing the fact-checking, running experiments, and working out the possible hypotheses to help create an even greater synthesis. Engineers will design the next great technologies, software programs, devices, and structures, with transistors simulating the underlying systems and doing most of the grunt work.

Humans in a Centaur World

So neurons will not disappear, substituted by transistors, but they will live in a very different world. First, interacting with transistors will be a central skill, both for consumers and professionals. Second, there will be a tectonic shift in the tasks and skills that are demanded, with many of today’s job categories being quickly digitized. Third, the rules of the market that determine careers, incomes, and competition will need to be rethought and adapted to the new Centaur world. Fourth, a wave of digital addiction is sweeping the world and needs to be managed. And finally, the dangers of the transistor transition will need to be considered and managed.

Digital Literacy. The only life away from interacting with transistors will be that of the hermit or the religious fundamentalist. All jobs will have substantial digital support to improve productivity, and interacting effectively with technology will be a key differentiator. The “digital divide” that already exists between those who get computers and those who don’t will expand into a chasm in income, quality of life, and opportunity. This doesn’t mean everyone needs to be a programmer, data scientist, or hardware designer, but everyone needs to understand what technology can do for them and how to harness it. Digital literacy will become a basic human right and should be a central part of school curricula, much like math and reading are today. Societies should also work to eliminate digital illiteracy, much like the campaigns to eliminate classical illiteracy during the 20th century. It could be argued that technology should come even before math and reading, because technology will be able to do math and reading for people.

The new job tectonic shift. The 20th century saw jobs shift from farms to factories and then to services. The 21st will see them shift from factories and services to premium Centaur experiences and creative undertakings such as entrepreneurship, science, art, entertainment, and engineering. People need to be helped through this transition, much as World War II, the GI Bill, and the Marshall Plan helped the farm-to-factory-and-services transition. If not, we will face the angry and destitute farmers of Steinbeck’s The Grapes of Wrath. The skill shift required is a full 180 degrees: from routine jobs that value consistency and accuracy in a structured environment, to creative jobs that require improvisation and agility in an ever-changing environment. Some will not be able to adapt. Others will need substantial help to do so. The only way forward will be to support workers through the shift, knowing that a substantial group will need to be taken care of until their retirement.

The New Digital Deal. Changes in the basis of work and society always create substantial changes in wealth distribution. Those who have the newly required skills or own the newly relevant assets accumulate wealth rapidly, while those with the old skills and assets are suddenly left destitute and without hope. Franklin Delano Roosevelt had to embark on measures that were termed communist when they were implemented, but they allowed redistribution and prosperity for decades afterwards. There will need to be a New Digital Deal, with elements like universal income, progressive taxation, and antitrust action, to tackle the current transition. The added challenge is that this New Digital Deal will have to be global in concept, to deal with the migratory flows and tensions it would otherwise create.

Manage the digital addiction crisis. The world is facing a wave of addiction comparable to the opioid and tobacco crises of the second half of the twentieth century. Human brains are susceptible to many manipulations that can hijack their reward systems and create irresistible compulsions. The more areas like the nucleus accumbens are understood, the more we see how substances or experiences can interact directly with the reward machinery of the brain, making it physiologically extremely challenging to resist. Alcohol and opium have been with humans since ancient times. Chemically-based drugs created a wave of addiction and public health problems during the twentieth century. Digital compulsion is the key problem of the 21st. Understanding of the brain has been used to design absolutely irresistible experiences that are tested scientifically to create ever more addictive dopamine highs. Thus we have millions compulsively addicted to digital experiences, from social media to gaming. This is especially worrisome with children, who find it difficult to resist with only partially developed frontal lobes. Digital addiction will have to be regulated since, as we have seen, the pressure of competition drives players to implement maximum addiction in all cases.

Dangers of Digital. Uncontrolled AI could have dire consequences for the human race. Maybe it will not be the fantastic scenarios of The Matrix or Terminator, but relevant figures in science and business see the dangers. The current “AI Cold War” between the US, China, and Russia could have unintended consequences and lead to preemptive action if the topic is not addressed.

Intelligent Processes, Tech and Business

Brain vs. Computer

Can computers do everything our brains do? Not yet, but AI has allowed computing to replicate the functions of more than 75% of our nervous system.

Part of the series on how digital means the gradual substitution of neurons by transistors.

There are several ways to categorize the brain anatomically and functionally. The typical anatomical split distinguishes the spinal cord and peripheral nervous system, the cerebellum, and the cerebrum with its brain lobes. Although our knowledge of how the brain works is still partial at best, the functions assigned to each area under this anatomical split would be as follows:

  • Brainstem, spinal cord, and peripheral nervous system. Input/output for the brain coordinating the sending of motor signals and the receiving of sensory information from organs, skin, and muscle.
  • Cerebellum. Complex movement, posture and balance.
  • Occipital Lobe. Vision, from basic perception to complex recognition.
  • Temporal Lobe. Auditory processing and language.
  • Parietal Lobe. Movement, orientation, recognition, and integration of perception.
  • Frontal Lobe. Reasoning, planning, executive function, parts of speech, emotions, and problem-solving. Also home to the primary motor cortex, which fires movement together with the parietal lobe and the cerebellum.
  • Memory is apparently distributed throughout the whole brain and cerebellum, and potentially even in parts of the brain stem and beyond.

We will now take the top ten functions and see how computers hold up vs. the brain in each of them. We will see that computers already win easily in two of them. There are four areas in which computers have been able to catch up over the last decade and are fairly close or even slightly ahead. Finally, there are four areas in which human brains are still holding their own, among other things because we don’t really understand how they work in our own brains.

Areas in which computers win already

Sensory and motor inputs and outputs (Brainstem and spinal cord).

Sensory and motor inputs and outputs coordinate, process, and carry electrical signals originated in the brain to muscles or organs, or take sensory inputs originated in the periphery to the brain to be integrated as sensory stimuli. This goes beyond pure transmission, with some adjustment like setting the “gain” or blocking some paths (e.g. while asleep).

This functioning has been replicated for quite some time, with both effector systems like motors (“actuators”) and sensory systems (“sensors”). We might not yet have replicated the detailed function of every human effector and sensory system, but we have replicated most and extended beyond what they can do.

The next frontier is the universalization of actuators and sensors through the “Internet of Things”, which connects them wirelessly through the mobile internet, and the integration of neural and computing processes, already achieved in some prosthetic limbs.

Basic information processing and memory (Frontal lobe and the brain)

Memory refers to the storing of information in a reliable long-term substrate. Basic information processing refers to executing operations (e.g. mathematical operations and algorithms) on the information stored in memory.

Basic information processing and memory were the initial reason for creating computers. The human brain has been adapted only with difficulty to these tasks and is not particularly good at them. It was only with the development of writing, as a way to store information and support its processing, that humans were able to take information processing and record keeping to an initial level of proficiency.

Currently, computers are able to process and store information at levels far beyond what humans are capable of. The last decades have seen an explosion in the capability to store different forms of information in computers, like video or audio, where before the human brain had an advantage. There are still mechanisms of memory that are unknown to us and promise even greater efficiency if we can copy them (e.g. our ability to remember episodes); however, they have to do with the efficient processing of those memories rather than with the information storage itself.

Areas in which computing is catching up quickly with the brain

Complex Movement (Cerebellum, Parietal and Frontal Lobes).

Complex movement is the orchestration of different muscles, coordinating them through space and time and minutely balancing their relative strengths to achieve a specific outcome. This requires a detailed understanding of the body’s state (proprioception) and the integration of the information coming from the senses into a coherent picture of the world. Some of the most complex examples of movement are driving, riding a bicycle, walking, and feats of athletic or artistic performance like figure skating or dancing.

Repetitive and routine movement has been possible for a relatively long time, with industrial robots available since the 1980s. On the other hand, complex human movement seemed beyond the reach of what we were able to recreate. Even relatively mundane tasks like walking were extremely challenging, while complex ones like driving appeared beyond the possible.

However, over the last two years, we have seen the deployment of the latest AI techniques and increased sensory and computing power make complex movement feasible. There are now reasonably competent walking robots, and autonomous cars are already on the streets of some cities. Consequently, we can expect some non-routine physical tasks like driving or deliveries to be at least partially automated.

Of course, we are still far away from a general system like the brain that can learn and adapt to new complex motor behaviors. This is what we see robots in fiction being able to do. After recent progress, this seems closer and potentially feasible, but it still requires significant work.

Visual processing (Occipital Lobe)

Vision refers to the capture and processing of light-based stimuli to create a picture of the world around us. It starts with distinguishing light from dark and basic forms (the V1 and V2 visual cortices), but extends all the way up to recognizing complex stimuli (e.g. faces, emotion, writing).

Vision is another area in which we had been able to do simple detection for a long time and have made great strides in the last decade. Basic vision tasks like perceiving light or darkness were feasible some time ago, while even simple object recognition proved extremely challenging.

The development of neural-network-based object recognition has transformed our capacity for machine vision. Starting in 2012, when a Google algorithm learned to recognize cats through deep learning, we have seen a veritable explosion of machine vision. Now it is routine to recognize writing (OCR), faces, and even emotion.
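
As an illustration of how routine this has become, here is a minimal recognition sketch using a pretrained network from PyTorch’s torchvision library; the image file name is a placeholder, and the `pretrained=True` flag follows the older torchvision API:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Classify an image with an ImageNet-pretrained network ("cat.jpg" is a placeholder).
model = models.resnet18(pretrained=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    logits = model(image)
print("predicted ImageNet class index:", logits.argmax(dim=1).item())
```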

Again, we are still far from a general system that recognizes as wide a variety of objects as a human, but we have seen that the components are feasible. We will see machine vision take over tasks that require visual recognition with increasing accuracy.

Auditory processing and language (Temporal Lobe, including Wernicke’s area, and Broca’s area in the frontal lobe)

Auditory processing and language refer to the processing of sound-based stimuli, especially human language. This includes identifying the type of sound, the position and relative movement of its source, and separating specific sounds and language from ambient noise. In terms of language, it includes both the understanding and the generation of language.

Sound processing and language have experienced a similar transformation after years of being stuck. Sound processing has been available for a long time, with only limited capability in terms of position identification and noise cancellation. In language, the expert systems used in the past could do only limited speech-to-text, text understanding, and text generation, with generally poor accuracy.

The move to brute-force processing through deep learning has made a huge difference across the board. In the last decade, speech-to-text accuracy has reached human levels, as demonstrated by professional programs like Nuance’s Dragon or the emerging virtual assistants. At the same time, speech comprehension and generation have improved dramatically. Translators like Google Translate or DeepL are able to almost equal the best human translators. Chatbots are increasingly gaining ground in understanding and producing language for day-to-day interactions. Sound processing has also improved dramatically, with noise cancellation increasingly comparable to human levels.

Higher-order comprehension of language is still challenging, as it requires a wide corpus and eventually the higher-order functions we will see in the frontal lobe. However, domain-specific language seems closer and closer to being automatable in most settings. This development will allow the automation of a wide variety of language-related tasks, from translating and editing to answering the phone in call centers, which currently represent a significant portion of the workforce.

Reasoning and problem solving (Frontal Lobe)

Reasoning and problem solving refer to the ability to process information at a higher level to come up with intuitive or deductive solutions to problems beyond the rote application of basic information processing capabilities.

As we have seen, basic information processing at the brute-force level was the first casualty of automation. The human brain is not designed for routine symbolic information processing such as basic math, so computers were able to take over that department quickly. However, non-routine tasks like reasoning and problem solving seemed to be beyond silicon.

It took years of hard work to take over structured but non-routine problem-solving. First with chess, where Deep Blue eventually managed to beat the human champion. Later with less structured or more complex games like Jeopardy, Go, or even Breakout, where neural networks and eventually deep learning had to be recruited to prevail.
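
To give a flavor of the structured search that chess programs pioneered, here is a minimal minimax sketch for tic-tac-toe (exhaustive search is feasible there; engines like Deep Blue added pruning, handcrafted evaluation functions, and massive hardware):

```python
# Exhaustive game-tree search (minimax) on tic-tac-toe: the brute-force idea
# behind early chess engines. The board is a tuple of 9 cells: 'X', 'O' or None.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every continuation: +1 X win, -1 O win, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + (player,) + board[i+1:], nxt) for i in moves]
    return max(scores) if player == 'X' else min(scores)

print(minimax((None,) * 9, 'X'))   # 0: perfect play by both sides is a draw
```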

We are still further away from human capacities here than in the other domains in this section, even if we are making progress. Humans are incredibly adept at reasoning and problem solving in poorly defined, changing, and multidimensional domains such as love, war, business, and interpersonal relations in general. However, we are starting to see machine learning and deep learning find complex relationships that are difficult for the human brain to tease out. A new science is being proclaimed in which humans work in concert with algorithms to tease out even deeper regularities of our world.

Areas in which the brain is still dominant

Creativity (Frontal Lobe)

Creativity can be defined as the production of new ideas, artistic creations or scientific theories beyond the current paradigms.

There has been ample attention in the news to what computers can do in terms of “small c” creativity. They can flawlessly create pieces in the style of Mozart, Bach, or even modern pop music. They can find regularities in data beyond what humans can, proving or disproving existing scientific ideas. They can even generate ideas by randomly putting together existing concepts and coming up with interesting new combinations.

However, computers still lack the capability to decide what really amounts to a new idea worth pursuing, or a new art form worth creating. They have also failed to produce a new scientific synthesis that overturns existing paradigms. So it seems we still have to understand much better how our own brain goes about this process before we can replicate the creation of new concepts like Marx’s Communist Manifesto, new art forms like Gaudí’s architectural style or Frida Kahlo’s painting style, or the discovery of new scientific concepts like radiation or relativity.

Emotion and empathy (Frontal Lobe)

Emotion and empathy are still only partially understood. However, their centrality to human reason and decision-making is clear. Emotions not only serve as in-the-moment regulators, but also allow us to predict the future very effectively by simulating scenarios on the basis of the emotions they evoke. Emotion is also one of the latest developments and innovations in the brain and neurons, with spindle cells, one of the last types of neurons to appear in evolution, apparently playing a substantial role.

Reading emotion from text or facial expressions through computing is increasingly accurate. There are also attempts to create chatbots that support humans with proven psychological therapies (e.g. Woebot) and physical robots that provide some companionship, especially in aging societies like Japan. Attempts to create emotion in computing, like Pepper the robot, are still far from generating actual emotion or real empathy. Maybe emotion and responding to emotion will remain a solely human endeavor, or maybe emotion will prove key to creating truly autonomous Artificial Intelligence capable of directed action.

Planning and executive function (Frontal Lobe)

Planning and executive function are also at the apex of what the human brain can do. They are mostly based in the prefrontal cortex, an area of the brain that is the result of the latest evolutionary steps from Australopithecus to Homo Sapiens Sapiens. Planning and executive function allow us to plan, predict, create scenarios, and decide.

Computers are a lot better than humans at making “rational” choices. However, the complex interplay of emotion and logic that allows for planning and executive function has so far been beyond them. Activities like entrepreneurship, with their detailed envisioning and planning of future scenarios, are beyond what computers can do right now. In planning and self-control, speed is not so important for the most part, so humans might still enjoy an advantage. There is also ample room for computer-human symbiosis in this area, with computers supporting humans in complex planning and executive function exercises.

Consciousness (Frontal lobe)

The final great mystery is consciousness. Consciousness is the self-referential experience of our own existence and decisions that each of us feels every waking moment. It is also the driving phenomenon behind our spirituality and sense of awe. Neither neuroanatomy nor psychology nor philosophy has been able to make sense of it. We don’t know what consciousness is, how it comes to happen, or what would be required to replicate it.

We can’t even begin to think what generating consciousness through computing would mean. Probably it would need to start with emotion and executive function. We don’t even know whether creating a powerful AI would require replicating consciousness in some way to make it really powerful. Consciousness would also create important ethical challenges: we typically assign rights to organisms with consciousness, and computer-based consciousness could even allow us to “port” a replica of our conscious experience to the cloud, raising many questions. So consciousness is probably the phenomenon that requires the most study to understand, and the most care in deciding whether we want to replicate it.

Conclusion

Overall, it is impressive how computers have closed the gap with brains in terms of their potential for many of the key functions of our nervous system. In the last decade, they have surpassed what our spinal cord, brainstem, cerebellum, occipital lobe, parietal lobe, and temporal lobe can do. It is only in parts of the frontal lobe that humans still keep the advantage. Given the speed advantage of transistors over neurons, this will make many of the tasks that humans currently perform uncompetitive. Only frontal lobe tasks seem to remain dominated by humans at this point, making creativity, emotion and empathy, planning and executive function, and consciousness itself the key characteristics of the “jobs of the future”. Jobs like entrepreneurship, high-touch professions, and the performing arts seem to be the future for neurons at this point. There might also be opportunity in centaurs (human-computer teams) or in consumer preference for “human-made” goods or services.

This will require a severe transformation of the workforce. Many jobs currently depend mostly on other areas, like complex motor skills (e.g. driving, item manipulation, delivery), vision (e.g. physical security), or purely transactional information processing (e.g. cashiers, administrative staff). People who have maximized those skills will need time and support to retrain toward a more frontal-lobe-focused working life. At the same time, technology continues to progress. As we understand emotion, creativity, executive function, and even consciousness, we might be able to replicate or supplement parts of them, taking the race even further. The “new work has always emerged” argument made sense when just basic functions had been transitioned to computing, but with probably more than 75% of brain volume already effectively digitized, it might be difficult to keep it going. So this is something we will have to consider seriously.

Digital Governance, Intelligent Processes, Tech and Business

Digital: From Neurons to Transistors

Digital captures the transition we have embarked on: from neurons to transistors as the dominant substrate of information processing.

Neurons are exquisite creations of natural evolution. Through self-organization and evolution, they have achieved many of the properties we have since engineered into electrical systems, such as logical operations, signal regeneration for reliable information transmission, and binary (1/0) information processing. Combined in the circuits that constitute the human brain, they allow amazing complexity and functionality: the roughly 100 billion neurons in a human brain form well over 100 trillion connections, leading to consciousness, creativity, moral judgment and much more.
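To make the logic-gate parallel concrete, here is a minimal sketch (my illustration, not from the book) of the classic McCulloch-Pitts model, which treats a neuron as a threshold unit over 1/0 inputs:

```python
# A McCulloch-Pitts neuron: sum weighted 1/0 inputs and fire if a
# threshold is crossed. With the right weights and threshold it
# reproduces basic logic gates, illustrating how neurons can perform
# logical operations.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# AND gate: both inputs must be active to fire.
assert neuron([1, 1], [1, 1], threshold=2) == 1
assert neuron([1, 0], [1, 1], threshold=2) == 0

# OR gate: any active input is enough to fire.
assert neuron([0, 1], [1, 1], threshold=1) == 1
assert neuron([0, 0], [1, 1], threshold=1) == 0
```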

Neurons have also given us transistors, an information-processing technology that replicates many of their properties while being approximately 10 million times faster. A neuron typically fires in the millisecond range (around 1 kHz), while transistors operate comfortably in the nanosecond-to-picosecond range (1-100 GHz).
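As a quick back-of-the-envelope check on that ratio, using the round numbers above (actual figures vary widely by neuron type and chip):

```python
# Rough speed comparison between a neuron and a transistor.
neuron_rate = 1e3        # ~1 kHz: a neuron fires about once per millisecond
transistor_rate = 1e10   # ~10 GHz: transistors switch in the nano-to-picosecond range

print(f"Speed advantage: {transistor_rate / neuron_rate:,.0f}x")  # 10,000,000x
```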

A 10-million-fold speed advantage makes transistors dominate neurons as an information-processing medium, even if they are still capable of less complexity and require more energy than a brain. That is why we have seen the progressive substitution of transistors for neurons in many information-processing tasks. This is fundamentally different from technologies like writing that extend neurons. It is only comparable to the use of the steam engine, electricity and internal combustion to replace muscles.

This absurdly great speed advantage creates the Digital Paradox: when neurons are substituted by transistors in a process, you get lower cost, higher quality and higher speed, without trade-offs. Thus there is no going back; once we have engineered sufficient complexity in transistors to tackle a process, there is no reason to use neurons anymore. Of course, neurons and transistors are often combined. Neurons still dominate for some tasks, and they benefit greatly from being supported by transistors.

What we now call generically “Digital” is one more stage in this gradual substitution of neurons by transistors. In that sense, those who claim that Digital is not new are right. At the same time, the processes that are now being substituted have a wider impact, so the use of a new term is understandable. Finally, we can expect the substitution to continue after the term digital has faded from use. In that sense, there is a lot that will happen Beyond Digital.

Early stages of substitution

Transistors, and their earlier cousins vacuum tubes, started by replacing neurons in the areas where their advantage was greatest: complex brute-force calculations and extensive data collection and archiving. This was epitomized by calculating missile trajectories and breaking codes during World War II, and by tabulating census data since the early 20th century.
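To get a feel for why this work suited machines, here is a toy version (my illustration; real wartime computations were far more elaborate) of a ballistic trajectory worked out by stepwise numerical integration, the kind of repetitive calculation once done by rooms full of human "computers":

```python
# Step a projectile's flight forward in small time increments until it
# lands. Each step is trivial arithmetic, but thousands are needed per
# trajectory, and firing tables required thousands of trajectories.
import math

g = 9.81           # gravity, m/s^2
dt = 0.01          # time step, s
v, angle = 800.0, math.radians(45)    # muzzle velocity and elevation
x, y = 0.0, 0.0
vx, vy = v * math.cos(angle), v * math.sin(angle)

while y >= 0:      # integrate until the shell returns to ground level
    x += vx * dt
    y += vy * dt
    vy -= g * dt

print(f"Range: {x / 1000:.1f} km")
```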

Over time, this extended to databases storing large amounts of information for almost any purpose, and to programs automating repetitive information management. This enabled important advances but still had limited impact on most people’s lives. Only very specialized functions were affected: detailed memory, long since relegated to writing, and rote calculation, the domain of a minuscule fraction of the workforce.

The next step was to use these technologies to manage economic flows, inventory, and accounting within organizations. So-called Enterprise Resource Planning (ERP) systems made it possible to replace complex neuron-plus-writing processing systems that were at the limit of their capacity. This displaced some human jobs, but mainly enabled a level of complexity and performance that was not attainable before.

Substitution only began to penetrate the popular consciousness with Personal Computers (PCs). PCs first allowed individuals to leverage the power of transistors for tasks such as creating documents, doing their accounting, or entertainment.

Finally, the internet moved most information transmission from neurons to transistors. We went from person-to-person telephone calls and printed encyclopedias to email, web pages, and Wikipedia.

In this first stage, transistors mostly replaced written records and a few specialized jobs, such as people performing calculations, record keeping, or information transmission. They also enabled new activities, like complex ERPs, computer games, or electronic chat, that were not possible before. In that sense, the transition was mostly additive for humans.

Digital: Mainstream substitution

The term Digital coincides with the moment when many mainstream neuron-based processes started to be affected by transistors. This greater disruption of the supremacy of neurons is being felt beyond specialized roles and is becoming widespread. It is also more substitutive: transistor-based information processing can now completely replace neurons in areas where we thought neurons were reasonably well adapted to perform.

First went media and advertising. We used to have an industry that created, edited, curated and distributed news and delivered advertising on top of that. Most of these functions in the value chain have been taken over by transistors either in part or in full.

Then eCommerce and eServices moved the age-old process of selling and distributing products to transistors. Buying has stayed in human hands for now, but we are close to being able to buy a book from Amazon without any human touching it from printing to delivery. On the eServices side, no one goes to the bank teller anymore for anything that can be done instantly on the Internet.

Then the Cloud handed the management of computers to transistors themselves. Instead of depending on neurons for the deployment, scaling, and management of server capacity, services like Amazon Web Services or Microsoft Azure let transistors manage themselves for the most part.
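A conceptual sketch of that self-management, with invented function names and thresholds rather than any real AWS or Azure API, might look like this: transistors watch the load and add or remove capacity without a human in the loop.

```python
# Hypothetical autoscaling rule: the fleet grows under load and shrinks
# when idle, entirely without human intervention.

def autoscale(current_servers, avg_cpu_percent,
              scale_up_at=70, scale_down_at=30,
              min_servers=1, max_servers=100):
    """Return the new fleet size given average CPU load across the fleet."""
    if avg_cpu_percent > scale_up_at and current_servers < max_servers:
        return current_servers + 1   # add capacity before users feel the load
    if avg_cpu_percent < scale_down_at and current_servers > min_servers:
        return current_servers - 1   # shed idle capacity to save cost
    return current_servers

print(autoscale(current_servers=4, avg_cpu_percent=85))  # -> 5
```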

Our social lives and gossip might have seemed totally suited to neurons. However, services like Facebook, WhatsApp or LinkedIn have allowed transistors to manage a large part of them at much higher speeds.

Smartphones made transistors much more mobile and accessible, readily available in any context and at any moment. They have started taking over tasks our brains used to perform with neurons, like remembering phone numbers, navigating a city, or knowing where we have to go next.

Finally, platforms and marketplaces like Airbnb and Uber turned to transistors for tasks that had been entirely in the hands of neurons, like hailing a cab or renting an apartment.

This encroachment of transistors on the daily tasks of neurons has woken all of us up to the change. Now it is not just obscure professions or processes but a big chunk of our daily life that is being handed over to transistors. It creates mixed feelings. On the one hand, we love the Digital Paradox and its improvements in speed, quality, and cost; it would be difficult to convince us to forsake Amazon or return to the bank branch. On the other hand, transistors replacing neurons have left many humans without jobs and are causing social disruption. Transistors are also speeding the world up toward their native pace, which is inaccessible to humans: there are almost no human stock traders left because they cannot compete with transistors operating 10 million times faster.

Beyond Digital: Transistors take over

The final stages of the transition can be mapped to how the core functions of the human brain might become substitutable by transistors. The process is already well underway, with some question marks around the ultimate reach of transistors. In any case, we can expect it to keep accelerating, taking us to the limits of what our sluggish neuron-based brains can endure.

Substituting the sensory cortex. Machine vision, Augmented Reality, text-to-speech, Natural Language Processing and chatbots are just some of the technologies calling into question the neuron’s dominance in sight, sound, and language. Roughly 15-25% of the brain is devoted to processing sensory signals and language at various levels. Transistors are getting very good at this, and have recently become able to recognize many items in images and to process and create language effectively.

Substituting the motor cortex. The same is happening with unscripted, adaptive movement. Robots are increasingly capable: they can cook pizzas, walk through a forest, and help you in a retail store. Autonomous driving promises that transistors will be able to navigate a vehicle, one of the most demanding sensory-motor tasks we humans undertake.

Substituting non-routine cognitive and information processing. We have already seen basic calculation and data processing taken over by transistors; now we are starting to see “frontal lobe” tasks follow. Chess, Go and Jeopardy are games in which AIs have already bested human champions. Professional fields like medicine, education and law are seeing transistors encroach on neurons through Artificial Intelligence, in which transistors mimic the brain’s “neural networks”, as sketched below.
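For a flavor of what those artificial “neural networks” are, here is a minimal sketch of a forward pass through a two-layer network. The weights here are arbitrary numbers chosen for illustration; real systems learn them from data.

```python
# Layers of artificial neurons: each neuron computes a weighted sum of
# its inputs and passes it through a squashing (sigmoid) function.
import math

def layer(inputs, weights, biases):
    """One layer of artificial neurons with a sigmoid activation."""
    return [
        1 / (1 + math.exp(-(sum(i * w for i, w in zip(inputs, ws)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, 0.8]                                       # input signals
hidden = layer(x, [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.1])
output = layer(hidden, [[1.5, -2.0]], [0.3])
print(output)  # a single score between 0 and 1
```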

Encoding morality, justice, and cooperation. Another set of capabilities representing some of the highest complexity of the human brain and its frontal lobe is moral judgment and our capacity for cooperation. Blockchain promises to encode morality, justice, and cooperation digitally and make them work automatically through “smart contracts”.
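A toy sketch of the smart-contract idea, in ordinary Python rather than real blockchain code (actual smart contracts run on chains like Ethereum, typically written in languages like Solidity; this class and its names are purely illustrative):

```python
# An agreement encoded as a program that enforces itself, instead of
# relying on courts or trusted intermediaries.

class EscrowContract:
    """Hold a buyer's payment and release it only when delivery is confirmed."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.settled = False

    def confirm_delivery(self):
        self.delivered = True

    def settle(self, balances):
        # The rule executes automatically: money moves only if the
        # delivery condition encoded in the contract is met.
        if self.delivered and not self.settled:
            balances[self.buyer] -= self.amount
            balances[self.seller] += self.amount
            self.settled = True

balances = {"alice": 100, "bob": 0}
contract = EscrowContract("alice", "bob", 40)
contract.confirm_delivery()
contract.settle(balances)
print(balances)  # {'alice': 60, 'bob': 40}
```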

Connecting brains with transistors. Finally, we are seeing neurons become increasingly connected to transistors. There are already examples of humans interfacing with transistors directly through the brain. This could bring a time of integration in which neurons take care of non-time-sensitive tasks while in continuous interaction with transistors moving at much higher speeds.

Will all this substitution leave anything to neurons? There are capabilities, like empathy, creating goals, creativity, compassion, or teaching humans, that might be beyond the reach of transistors. On the other hand, it might be reasonable to believe that any task a human brain can do, an equivalently complex processor could do much faster. There are no right answers, but roles like entrepreneur, teacher, caregiver or artist might remain dominated by neurons longer than many other callings. And there might be some roles we will always want other humans to take on with us, even if transistors could handle them more quickly and efficiently.