Energy and Transportation, Intelligent Processes, Tech and Business

Move fast and break (other people’s) things: The Uber fatality and Facebook data breach

The Uber autonomous car fatality and the Facebook data breach are signposts of Big Tech hubris. Big Techs don't take into account the broader society's resources, interests, and rules. They move fast in their race for super-profitable monopolies even if they break other people's lives or privacy in the process. Big Tech companies should learn from their mistakes and change their attitude to avoid regulation or antitrust action.

On the surface, the Uber autonomous car fatality and the Facebook data breach do not have much in common. However, on closer examination, they are both indicative of the hubris and arrogance that make Big Tech companies think they are beyond working together with the broader economy or following regulations. Breaking things is ok as long as you move fast to win the race, and it is other people who get hurt by the breaking. Let's examine each incident in turn.

Uber autonomous car fatality

March 18th, 2018 might go down in history as the first time an autonomous robot killed a person without human intervention. It was not the Terminator or Agent Smith; the footage looks like many other traffic accidents. A badly illuminated road at night, when suddenly a figure appears, too fast for the car to stop. It is all over in less than a couple of seconds.

First, my condolences to the loved ones of the pedestrian. Traffic accidents are a sad and sudden way to lose someone. A single second can make the difference between life and death. According to the WHO, there are more than a million traffic-related deaths per year, one every 25 seconds, a large number of them in the developing world. Even in the US, traffic injuries are among the top one or two accidental causes of death for almost all demographics.

The causes of the accident are clear and seem to exonerate Uber from the traditional perspective. Looking at the video of the accident, it seems it would have been hard to avoid for any human driver. The failsafe driver in the Uber saw the pedestrian only approximately one second before impact and could do nothing. The pedestrian was crossing a busy road at night, on an unilluminated stretch some 100m from the closest pedestrian crosswalk. A human driver would probably have ended up with the same outcome, and would likely not have been considered at fault, based on the police chief's statement.

However, from another perspective, the accident is inexplicable and inexcusable. Given the power of technology for focused and precise action with all available information, it seems negligent to let this type of accident occur. The investigation will ultimately determine whether the systems didn't pick up the signal or there was another, deeper failure. However, it is clear that there are simple ways in which the pedestrian could have been detected. Couldn't Uber be informed of which smartphones are nearby? The pedestrian was most probably carrying a smartphone, and her carrier could have relayed that information to the Uber car, at least making it slow down due to the undetected obstacle. Hasn't some glitch like this happened before to one of the other autonomous car companies? It probably has, but each company is developing the technology in secret to try to outmaneuver the others, so best practices are not shared, putting lives in danger.

Big Tech companies are competing among themselves in a race to make autonomous vehicles possible. The goal is laudable, as it could drastically reduce the more than 1 million deaths a year and the untold suffering traffic accidents cause through death and injury. However, the race is being run for profits, so there is little collaboration with outside players that could help, like telcos, or among the participants in the race. Given that human lives and human safety are at stake, the race should be run with society in mind. It would take a change of attitude, from Big Techs being focused on winning and creating super-profitable monopolies, to being focused on winning against traffic deaths and drudgery to create a better world. Their current attitude will be self-defeating in the long run, while a win-win attitude would be rewarded by regulators and the public at large.

The Facebook data breach

Mark Zuckerberg is famous for having a poster in the Facebook offices that says "Move Fast, Break Things". That is an embodiment of the hacker and agile culture, and something very commendable when you pay for the breaking with your own money. However, when you move fast and break things with your users' data and trust, "Facebook made mistakes" sounds like an understatement. Especially when that data breach was allegedly used to change the result of a US election (admittedly not in a way that Zuckerberg or most Facebook executives would have wanted) and was kept secret while it went on for almost 4 years.

First, let's understand what the data breach was. Apparently, Facebook app developers could download and access detailed information about their users and their users' friends. As usual, the app required user permission to access that information. That permission was construed to mean that all the information could be downloaded and mined without limit and outside the context in which it was originally granted. Cambridge Analytica, the company at the center of the scandal, harvested its data through a seemingly innocuous personality test app called "thisisyourdigitallife" that promised to read out your personality if you gave it access to all the data. The company then downloaded all the information from the users and their friends and used it to fuel its different data mining campaigns.

The data breach was really no data breach, as Facebook willingly handed the information wholesale to its developers. Facebook afterward took no responsibility for what the developers did with the information and didn't audit or police it in any way. Cambridge Analytica has come out into the open because of its signature role in the US election, but there are probably hundreds of other developers that were mining personal information from Facebook for less high-profile direct marketing and sales efforts.

Facebook has only apologized as the pressure has mounted, and its proposed solutions are to clamp down on developers. Maybe Facebook is not legally at fault and has all the necessary legal permissions from its users. However, this highlights that the current system gives unregulated powers to firms like Google and Facebook in terms of handling information. Their business models are predicated on capturing untold amounts of data about their users and then turning around and selling it to advertisers or whoever is willing to pay. Should they be able to accumulate information without regulation or responsibility? Do we want information that can turn an election to be sold indiscriminately and unaccountably to the highest bidder?

Facebook is another example of a Big Tech that needs to mend its ways. It needs to take to heart its users' interests and not just the amount of advertising dollars. Information should only be sold with user consent and with the utmost care. Users should be informed about the economic transactions that are happening with their data and maybe get a cut of them. Data portability should be available, making sure the data belongs to the user and the user can migrate to a different service without losing all his or her history. Number portability was necessary to create a competitive telecom marketplace, and user data portability will be necessary to limit the internet monopolies.

Neurogamification, Tech and Business

Centaurs: Transistors and Neurons working together

Centaurs represent the alliance of neuron and transistor combining the best of each world. In the Centaur world, some activities will be totally automated while others rely on two flavors of neuron-transistor combination. Humans in the Centaur World will face a different paradigm in which working with transistors becomes the new literacy and disruptions to labor markets, wealth distributions, and human existence need to be managed.


Since Deep Blue's victory over Kasparov in 1997, most people believe computers are dominant in chess. However, when a no-holds-barred tournament was held, it was human-computer teams that won, not computers on their own. The combination of human thought plus computer analysis of moves was more powerful than any computer alone. At the same time, even human chess has been transformed by computers. During matches human players can't use digital aids; however, computers play an ever-increasing role in training, match analysis and practice. This has led to earlier and earlier grandmaster-level players, with the record currently standing at twelve years of age.

The concept of the centaur that combines the best of both worlds is not new. Centaurs were Greek mythological beings with the body of a horse and the torso and head of a human. They combined the speed and stamina of a horse with the skill and intelligence of a human. Similarly, human civilization is not based only on neurons but relies heavily on the natural DNA-based world. We continue to use animals and plants to produce food. Natural reserves continue to be some of the most beautiful sights in the world. In the same way, it will not make sense to reinvent with transistors what neurons can already do reasonably well, at least for some time.

The Centaur World

Neurons will be substituted by transistors in many undertakings. As seen previously, transistors are much better in areas in which speed, accuracy, exact memory and brute-force processing predominate. We will see transistors continue to quickly take over activity in those domains. That will be mostly a good thing, as those characteristics don't come easily to humans and the work can be alienating to perform. The farmer toiling in the field or the worker in Henry Ford's factory might be romanticized a posteriori. However, it was hard, repetitive and backbreaking work that is much more efficiently performed by machines.

Neurons do very well in other areas, for which speed and accuracy are not so relevant. Areas like creativity, emotion, common sense, lateral thinking, empathy, creating meaning and even consciousness are handled very well by the human brain, and exceptional speed or accuracy is less relevant to them. Consequently, we can expect transistors to extend them rather than substitute them.

There is also a premium for humans, as there is a premium for organic food or handcrafted goods. Using humans to deliver a service or product is becoming a mark of quality and exclusivity. And the centaur concept will make sure that the level of quality delivered is higher than if only a computer were involved. A Michelin-star dining experience is the combination of the cooking and the waiting staff with the technology that makes everything run smoothly. Beyond the shock value of an "automated restaurant", it wouldn't make sense to eliminate staff completely.

The centaur world will be a three-tiered world in which transistors take over some tasks, and neurons and transistors work side by side in others:

  • Radical digitalization of speed, accuracy, and exactness. Any activity in which speed, accuracy, and exactness are the key outputs will be completely digitized, with cursory human supervision. The same way the tractor mechanized away more than 95% of farm employment, we can expect the digitization of most "utility" tasks in which we just need to get the job done. Whole areas like transportation or administrative work will disappear into datacenters and autonomous vehicles. Parts of highly skilled jobs like analyzing medical images will be digitized to improve speed, accuracy, and exactness. Anything in which "the job just needs to be done" and speed, accuracy and exactness count for most of the value will be lost to humans.
  • Centaur service for a premium experience and superior results. In parallel, a new world of premium centaur service will emerge. A computer might know the diagnosis and treat an illness, but a doctor will do a much better job of connecting emotionally and getting the patient to understand and act on the treatment. Most wealth management, tax records and filing might be digitized, but a financial advisor will help the client understand what they want and how to get it. A child could learn from a computer system with all the knowledge and gamification, but it is the teacher who motivates and engages the imagination to create the future scientist. Maybe the computer can reproduce the violin piece exactly, but it is the performer who creates a vibrant and unique experience by imbuing it with emotion. Most customer service could be digitized to online channels and chatbots, but sometimes there is no substitute for a helpful human to eliminate confusion and create loyalty.
  • Digitally aided human creativity. Finally, there will be a domain of human creativity, innovation, and meaning creation. This domain will be fully supported by computers that enable much-increased performance of those uniquely human skills. Artists will create new works of art, with transistors supporting the ideation, design, and execution. Entrepreneurs will conceive of new business ideas, with transistors enabling them to test, build and deploy them at blinding speed and almost no cost. Writers and film-makers will create new masterpieces of fiction and non-fiction, with transistors expediting the research, production and distribution process to their eager audiences. Scientists will conceive of groundbreaking new hypotheses to explain the world, with transistors doing the fact-checking and experiments and working out the possible hypotheses to help create an even greater synthesis. Engineers will design the next great technologies, software programs, devices, and structures, with transistors simulating the underlying structures and doing most of the grunt work.

Humans in a Centaur World

So neurons will not disappear, substituted by transistors, but they will live in a very different world. First, interacting with transistors will be a central skill, both for consumers and professionals. Second, there will be a tectonic shift in the tasks and skills that are demanded, with many of today's job categories being quickly digitized. Third, the rules of the market that determine careers, incomes and competition will need to be rethought and adapted to the new Centaur World. Fourth, a wave of digital addiction is sweeping the world and needs to be managed. And finally, the dangers of the transistor transition will need to be considered and managed.

Digital Literacy. The only life away from interacting with transistors will be that of the hermit or the religious fundamentalist. All jobs will have substantial digital support to improve productivity, and interacting effectively with technology will be a key differentiator. The "digital divide" that already exists between those who get computers and those who don't will expand and become a chasm in income, quality of life and opportunity. This doesn't mean everyone needs to be a programmer, data scientist or hardware designer, but everyone needs to understand what technology can do for them and how to harness it. Digital literacy will become a basic human right and should be a central part of school curriculums, much like math and reading are today. Societies should also work to eliminate digital illiteracy, much like the campaigns to eliminate classical illiteracy during the 20th century. It could be argued that technology should come even before math and reading, because technology will be able to do math and reading for people.

The new job tectonic shift. The 20th century saw jobs shift from farms to factories and then to services. The 21st will see them shift from factories and services to premium Centaur experiences and creative undertakings such as entrepreneurship, science, art, entertainment, and engineering. People need to be helped in this transition, much as World War II, the GI Bill, and the Marshall Plan helped the farm-to-factory-and-services transition. If not, we will face the angry and destitute farmers of Steinbeck's The Grapes of Wrath. The skill shift required is 180 degrees, from routine jobs that value consistency and accuracy in a structured environment, to creative jobs that require improvisation and agility in an ever-changing environment. Some will not be able to adapt. Others will need substantial help to do so. The only way out will be to support workers through the shift, knowing that a substantial group will need to be taken care of until their retirement.

The New Digital Deal. Changes in the basis of work and society always create substantial changes in wealth distribution. Those who have the skills required or own the newly relevant assets accumulate wealth rapidly, while those with the old skills and assets are suddenly left destitute and without hope. Franklin Delano Roosevelt had to embark on measures that were termed communist when they were implemented but that allowed redistribution and prosperity for decades after. There will need to be a New Digital Deal, with elements like universal income, progressive taxation, and antitrust action, to tackle the current transition. The added challenge is that this New Digital Deal will have to be global in concept to deal with the migratory flows and tensions it would otherwise create.

Manage the digital addiction crisis. The world is facing a wave of addiction comparable to the opioid and tobacco crises of the second half of the twentieth century. Human brains are susceptible to many manipulations that can hijack their reward systems and create irresistible compulsions. The more areas like the nucleus accumbens are understood, the more we see how substances or experiences can interact directly with the reward machinery of the brain, making it physiologically extremely challenging to resist. Alcohol and opium have been with humans since ancient times. Chemically-based drugs created a wave of addiction and public health problems during the twentieth century. Digital compulsion is the key problem for the 21st. Understanding of the brain has been used to design absolutely irresistible experiences that are tested scientifically to create ever more addictive dopamine highs. Thus we have millions compulsively addicted to digital experiences, from social media to gaming. This is especially worrisome with children, who find it difficult to resist with only partially developed frontal lobes. Digital addiction will have to be regulated, as we have seen that the pressure of competition drives players to implement maximum addictiveness in all cases.

Dangers of Digital. Uncontrolled AI could have dire consequences for the human race. Maybe it will not be the fantastic scenarios of The Matrix or Terminator, but relevant figures in science and business see the dangers. The current "AI Cold War" between the US, China and Russia could have unintended consequences and lead to preemptive action if the topic is not addressed.

Intelligent Processes, Tech and Business

Brain vs. Computer

Can computers do everything our brains do? Not yet, but AI has allowed computing to replicate the functions of more than 75% of our nervous system by volume.

Part of the series on how digital means the gradual substitution of neurons by transistors.

There are several ways to categorize the brain anatomically and functionally. The typical anatomical split distinguishes the spinal cord and peripheral nervous system, the cerebellum, and then the cerebrum with the brain lobes. Our knowledge of how the brain works is still partial at best, but the functions assigned to each area under this anatomical split would be as follows:

  • Brainstem, spinal cord, and peripheral nervous system. Input/output for the brain coordinating the sending of motor signals and the receiving of sensory information from organs, skin, and muscle.
  • Cerebellum. Complex movement, posture and balance.
  • Occipital Lobe. Vision, from basic perception to complex recognition.
  • Temporal Lobe. Auditory processing and language.
  • Parietal Lobe. Movement, orientation, recognition, and integration of perception.
  • Frontal Lobe. Reasoning, planning, executive function, parts of speech, emotions, and problem-solving. Also the primary motor cortex, which fires movement together with the parietal lobe and the cerebellum.
  • Memory is apparently distributed throughout the whole brain and cerebellum, and potentially even in parts of the brain stem and beyond.

We now take the top ten functions and look at how computers hold up vs. the brain in each of them. We will see that computers already win easily in two of them. There are four areas in which computers have been able to catch up in the last decade and are fairly close or even slightly ahead. Finally, there are four areas in which human brains are still holding their own, among other things because we don't really understand how they work in our own brains.

Areas in which computers win already

Sensory and motor inputs and outputs (Brainstem and spinal cord).

Sensory and motor inputs and outputs coordinate, process and carry electrical signals originating in the brain to muscles or organs, and carry sensory inputs originating in the periphery to the brain to be integrated as sensory stimuli. This goes beyond pure transmission, with some adjustment like setting the "gain" or blocking some paths (e.g. while asleep).

This functioning has been replicated for quite some time with both effector systems, like motors ("actuators"), and sensory systems ("sensors"). We might still not have managed to replicate the detailed function of every human effector and sensory system, but we have replicated most and extended beyond what they can do.

The next frontier is the universalization of actuators and sensors through the "Internet of Things", which connects them wirelessly through the mobile internet, and the integration of neural and computing processes, already achieved in some prosthetic limbs.

Basic information processing and memory (Frontal lobe and the whole brain)

Memory refers to the storing of information in a reliable long-term substrate. Basic information processing refers to executing operations (e.g. mathematical operations and algorithms) on the information stored in memory.

Basic information processing and memory were the initial reason for creating computers. The human brain has adapted to these tasks only with difficulty and is not particularly good at them. It was only with the development of writing as a way to store information and support its processing that humans were able to take information processing and record keeping to an initial level of proficiency.

Currently, computers are able to process and store information at levels far beyond what humans are capable of. The last decades have seen an explosion in the capability to store forms of information, like video or audio, in which the human brain previously had an advantage over computers. There are still mechanisms of memory that are unknown to us and that promise even greater efficiency in computers if we can copy them (e.g. our ability to remember episodes); however, they have to do with the efficient processing of those memories rather than the information storage itself.

Areas in which computing is catching up quickly with the brain

Complex Movement (Cerebellum, Parietal and Frontal Lobes).

Complex movement is the orchestration of different muscles coordinating them through space and time and balancing minutely their relative strengths to achieve a specific outcome. This requires a minute understanding of the body state (proprioception) and the integration of the information coming from the senses into a coherent picture of the world. Some of the most complex examples of movement are driving, riding a bicycle, walking or feats of athletic or artistic performance like figure skating or dancing.

Repetitive and routine movement has been possible for a relatively long time, with industrial robots available since the 1980s. On the other hand, complex human movement seemed beyond what we were able to recreate. Even relatively mundane tasks like walking were extremely challenging, while complex ones like driving seemed outright impossible.

However, over the last two years, we have seen the deployment of the latest AI techniques and increased sensory and computing power make complex movement feasible. There are now reasonably competent walking robots, and autonomous cars are already on the streets of some cities. Consequently, we can expect some non-routine physical tasks like driving or deliveries to be at least partially automated.

Of course, we are still far away from a general system like the brain that can learn and adapt new complex motor behaviors, which is what we see robots in fiction being able to do. After recent progress this seems closer and potentially feasible, but it still requires significant work.

Visual processing (Occipital Lobe)

Vision refers to the capture and processing of light-based stimuli to create a picture of the world around us. It starts by distinguishing light from dark and basic forms (the V1 and V2 visual cortex), but extends all the way up to recognizing complex stimuli (e.g. faces, emotion, writing).

Vision is another area in which we had been able to do simple detection for a long time and have made great strides in the last decade. Basic vision tasks like perceiving light or darkness were feasible some time ago, while even simple object recognition proved extremely challenging.

The development of neural network-based object recognition has transformed our capacity for machine vision. Starting in 2012, when a Google algorithm learned to recognize cats through deep learning, we have seen a veritable explosion of machine vision. Now it is routine to recognize writing (OCR), faces and even emotion.

Again, we are still far from a general system that recognizes as wide a variety of objects as a human, but we have seen that the components are feasible. We will see machine vision take over tasks that require visual recognition with increasing accuracy.

Auditory processing and language (Temporal Lobe, including Wernicke’s area, and Broca’s area in the frontal lobe)

Auditory processing and language refer to the processing of sound-based stimuli, especially human language. It includes identifying the type of sound, the position and relative movement of its source, and separating specific sounds and language from ambient noise. In terms of language, it includes both the understanding and the generation of language.

Sound processing and language have experienced a similar transformation after years of being stuck. Sound processing has been available for a long time, but with only limited capability in terms of position identification and noise cancellation. In language, the expert systems used in the past were able to do only limited speech-to-text, text understanding and text generation, with generally poor accuracy.

The movement to brute-force processing through deep learning has made a huge difference across the board. In the last decade, speech-to-text accuracy has reached human levels, as demonstrated by professional programs like Nuance's Dragon or the emerging virtual assistants. At the same time, speech comprehension and generation have improved dramatically. Translators like Google Translate or DeepL are able to almost equal the best human translators. Chatbots are increasingly gaining ground in being able to understand and produce language for day-to-day interactions. Sound processing has also improved dramatically, with noise cancellation increasingly comparable to human levels.

Higher-order comprehension of language is still challenging, as it requires a wide corpus and eventually the higher-order functions we will see in the frontal lobe. However, domain-specific language seems closer and closer to being automatable in most settings. This development will allow the automation of a wide variety of language-related tasks, from translating and editing to answering the phone in call centers, which currently represent a significant portion of the workforce.

Reasoning and problem solving (Frontal Lobe)

Reasoning and problem solving refer to the ability to process information at a higher level to come up with intuitive or deductive solutions to problems beyond the rote application of basic information processing capabilities.

As we have seen, basic information processing at the brute-force level was the first casualty of automation. The human brain is not designed for routine symbolic information processing such as basic math, so computers were able to take over that department quickly. However, non-routine tasks like reasoning and problem solving seemed to be beyond silicon.

It took years of hard work to take over structured but non-routine problem-solving. First with chess, where Deep Blue eventually managed to beat the human champion. Later with less structured or more complex games like Jeopardy, Go or even Breakout, where neural networks and eventually deep learning had to be recruited to prevail.

We are still further away from human capacities here than in the other domains in this section, even if we are making progress. Humans are incredibly adept at reasoning and problem solving in poorly defined, changing and multidimensional domains such as love, war, business and interpersonal relations in general. However, we are starting to see machine learning and deep learning find complex relationships that are difficult for the human brain to tease out. A new science is being proclaimed in which humans work in concert with algorithms to tease out even deeper regularities of our world.

Areas in which the brain is still dominant

Creativity (Frontal Lobe)

Creativity can be defined as the production of new ideas, artistic creations or scientific theories beyond the current paradigms.

There has been ample attention in the news to what computers can do in terms of "small c" creativity. They can flawlessly create pieces in the style of Mozart, Bach or even modern pop music. They can find regularities in the data beyond what humans can, proving or disproving existing scientific ideas. They can even generate ideas randomly, putting together existing concepts and coming up with interesting new combinations.

However, computers still lack the capability of deciding what really amounts to a new idea worth pursuing, or a new art form worth creating. They have also failed to produce a new scientific synthesis that overturns existing paradigms. So it seems we still have to understand much better how our own brain goes about this process before we can replicate the creation of new concepts like Marx's Communist Manifesto, new art forms like Gaudí's architectural style or Frida Kahlo's painting style, or the discovery of new scientific concepts like radiation or relativity.

Emotion and empathy (Frontal Lobe)

Emotion and empathy are still only partially understood. However, their centrality to human reason and decision making is clear. Emotions not only serve as in-the-moment regulators, but also allow us to predict the future very effectively by simulating scenarios on the basis of the emotions they evoke. Emotion is also one of the latest developments and innovations in the brain, with spindle cells, one of the last types of neurons to appear in evolution, apparently playing a substantial role.

Reading emotion from text or from facial expressions through computing is increasingly accurate. There are also some attempts to create chatbots that support humans with proven psychological therapies (e.g. Woebot) or physical robots that provide some companionship, especially in aging societies like Japan. Attempts to create emotion in computing, like Pepper the robot, are still far from producing actual emotion or generating real empathy. Maybe emotion and responding to emotion will remain a solely human endeavor, or maybe emotion will prove key to creating really autonomous Artificial Intelligence that is capable of directed action.

Planning and executive function (Frontal Lobe)

Planning and executive function are also at the apex of what the human brain can do. They are mostly based in the prefrontal cortex, an area of the brain that is the result of the latest evolutionary steps from Australopithecus to Homo Sapiens Sapiens. Planning and executive function allow us to plan, predict, create scenarios, and decide.

Computers are a lot better than humans at making "rational" choices. However, the complex interplay of emotion and logic that allows for planning and executive function has so far been beyond them. Activities like entrepreneurship, with their detailed envisioning and planning of future scenarios, are beyond what computers can do right now. In planning and self-control, speed is not so important for the most part, so humans might still enjoy an advantage. There is also ample room for computer-human symbiosis in this area, with computers supporting humans in complex planning and executive function exercises.

Consciousness (Frontal lobe)

The final great mystery is consciousness. Consciousness is the self-referential experience of our own existence and decisions that each of us feels every waking moment. It is also the driving phenomenon behind our spirituality and sense of awe. Neither neuroanatomy, psychology nor philosophy has been able to make sense of it. We don't know what consciousness is about, how it comes to happen or what would be required to replicate it.

We can't even start to think what generating consciousness through computing would mean. It would probably need to start with emotion and executive function. We don't even know whether creating a powerful AI would require replicating consciousness in some way to make it really powerful. Consciousness would also create important ethical challenges, as we typically assign rights to organisms with consciousness, and computer-based consciousness could even allow us to "port" a replica of our conscious experience to the cloud, raising many questions. So consciousness is probably the phenomenon that requires the most study to understand, and the most care in deciding whether we want to replicate it.

Conclusion

Overall, it is impressive how computers have closed the gap with brains in terms of their potential for many of the key functions of our nervous system. In the last decade, they have surpassed what our spinal cord, brainstem, cerebellum, occipital lobe, parietal lobe and temporal lobe can do. It is only in parts of the frontal lobe that humans still keep the advantage over computers. Given the speed advantage of transistors vs. neurons, this will make many of the tasks that humans currently perform uncompetitive. Only frontal lobe tasks seem to remain dominant for humans at this point, making creativity, emotion and empathy, planning and executive function, and consciousness itself the key characteristics of the "jobs of the future". Jobs like entrepreneurship, high-touch professions, and the performing arts seem to be the future for neurons at this point. There might also be opportunity in centaurs (human-computer teams) or in consumer preference for "human-made" goods or services.

This will require a severe transformation of the workforce. Many jobs currently depend mostly on other areas like complex motor skills (e.g. driving, item manipulation, delivery), vision (e.g. physical security) or purely transactional information processing (e.g. cashiers, administrative staff). People who have maximized those skills will need time and support to retrain toward more frontal-lobe-focused work. At the same time, technology continues to progress. As we understand emotion, creativity, executive function and even consciousness, we might be able to replicate or supplement parts of them, taking the race even further. The "new work has always emerged" argument made sense when just basic functions had been transitioned to computing, but with probably more than 75% of brain volume already effectively digitized it might be difficult to keep it going. So this is something we will have to consider seriously.

Digital Governance, Intelligent Processes, Tech and Business

Digital: From Neurons to Transistors

Digital captures the transition we are embarked on from neurons to transistors as the dominant substrate of information processing.

Neurons are exquisite creations of natural evolution. Through self-organization and evolution they have achieved many of the properties we have managed to engineer in electrical systems, such as logical operations, signal regeneration for information transmission and 1/0 information processing. When combined in the circuits that constitute the human brain, they allow amazing complexity and functionality. The roughly 100 billion neurons in a human brain form well over 100 trillion connections, leading to consciousness, creativity, moral judgment and much more.

They have also led to the transistor, an information processing system that has managed to replicate many of the properties of neurons while being approximately 10 million times faster. A neuron typically fires in the millisecond range (a rate on the order of 1 kHz), while transistors switch comfortably in the nanosecond-to-picosecond range (1-100 GHz).
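
To sanity-check that ratio, here is a quick back-of-the-envelope calculation in Python; the exact frequencies are illustrative assumptions, not measurements:

```python
# Rough speed comparison between neurons and transistors (illustrative figures).
neuron_hz = 1e3        # ~1 kHz: a neuron fires at most ~1,000 times per second
transistor_hz = 1e10   # ~10 GHz: a switching rate in the middle of the 1-100 GHz range

speedup = transistor_hz / neuron_hz
print(f"Transistors are roughly {speedup:,.0f}x faster")  # ~10,000,000x
```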

A 10-million-fold speed advantage makes transistors dominate neurons as an information processing medium, even if they are still capable of less complexity and require more energy than a brain. That is why we have seen the progressive substitution of neurons by transistors in many information processing tasks. This is fundamentally different from technologies like writing that extend neurons. It is only comparable to the use of the steam engine, electricity and internal combustion to substitute for muscles.

This absurdly large speed advantage produces the Digital Paradox: when neurons are substituted by transistors in a process, you get lower cost, higher quality and higher speed without trade-offs. Thus there is no going back; once we have engineered sufficient complexity in transistors to tackle a process, there is no reason to use neurons anymore. Of course, neurons and transistors are often combined. Neurons still dominate for some tasks, and they benefit greatly from being supported by transistors.

What we now call generically “Digital” is one more stage in this gradual substitution of neurons by transistors. In that sense, those who claim that Digital is not new are right. At the same time, the processes that are now being substituted have a wider impact, so the use of a new term is understandable. Finally, we can expect the substitution to continue after the term digital has faded from use. In that sense, there is a lot that will happen Beyond Digital.

Early stages of substitution

Transistors and their earlier cousins, vacuum tubes, started by substituting neurons in areas in which their advantage was greatest: complex brute-force calculations and extensive data collection and archiving. This was epitomized by calculating missile trajectories and code-breaking during World War II, and by tabulating census data since the early 20th century.

Over time, this extended to databases that store large amounts of information for almost any purpose and to programs for the repetitive management of information. This enabled important advances but still had limited impact on most people's lives. Only very specialized functions like detailed memory, long relegated to writing, and rote calculation, the domain of only a minuscule fraction of the workforce, were affected.

The next step was to use these technologies to manage economic flows, inventory, and accounting within organizations. So-called Enterprise Resource Planning (ERP) systems allowed the substitution of complex neuron-plus-writing processing systems that were at the limit of their capacity. This substituted some human jobs, but mainly made possible a level of complexity and performance that was not attainable before.

Substitution only started to penetrate the popular consciousness with Personal Computers (PCs). PCs first allowed individuals to start leveraging the power of transistors for tasks such as creating documents, doing their accounting or entertainment.

Finally, the internet allowed most information transmission to move from neurons to transistors. We went from person-to-person telephone calls and printed encyclopedias to email, web pages, and Wikipedia.

In this first stage, transistors mostly substituted written records and some specialized jobs, such as people performing calculations, record keeping or information transmission. They also enabled new activities like complex ERPs, computer games or electronic chats that were not possible before. In that sense, the transition was mostly additive for humans.

Digital: Mainstream substitution

The use of the term Digital coincides with the moment when many mainstream neuron-based processes started to be affected by transistors. This greater disruption of the supremacy of neurons is being felt beyond specialized roles and is becoming widespread. It is also more substitutive, with transistor-based information processing able to completely replace neurons in areas in which we thought neurons were reasonably well adapted to perform.

First went media and advertising. We used to have an industry that created, edited, curated and distributed news and delivered advertising on top of that. Most of these functions in the value chain have been taken over by transistors either in part or in full.

Then eCommerce and eServices moved the age-old process of selling and distributing products to transistors. The buying has stayed in human hands for now, but you are close to being able to buy a book from Amazon without any human touching it from printing to delivery. On the eServices side, no one goes to the bank teller anymore if the task can be done instantly on the Internet.

Then the Cloud took the management of computers to transistors themselves. Instead of depending on neurons for deployment, scaling, and management of server capacity, services like Amazon Web Services or Microsoft Azure give transistors the capacity to manage themselves for the most part.

Our social lives and gossip might have seemed totally suited to neurons. However, services like Facebook, WhatsApp or LinkedIn have allowed transistors to manage a large part of them at much higher speeds.

Smartphones made transistors much more mobile and accessible, making them readily available in any context at any moment. Smartphones have started substituting for tasks our brains used to perform with neurons, like remembering phone numbers, navigating through a city or knowing where we have to go next.

Finally, platforms and marketplaces like Airbnb and Uber turned to transistors for tasks that were totally in the hands of neurons, like getting hold of a cab or renting an apartment.

This encroachment of transistors on the daily tasks of neurons has woken all of us up to the change. Now it is not just obscure professions or processes but a big chunk of our daily life that is being handed over to transistors. It creates mixed feelings. On the one hand, we love the Digital Paradox and its improvement of speed, quality, and cost. It would be difficult to convince us to forsake Amazon or return to the bank branch. On the other hand, transistors substituting neurons have left many humans without jobs and are causing social disruption. Transistors are also speeding up the world towards their native speed, which is inaccessible to humans. There are almost no human stock traders left because they cannot compete with the 10-million-times-faster speed of transistors.

Beyond Digital: Transistors take over

The final stages of the transition can be mapped to how the core functions of the human brain might become substitutable by transistors. The process is already well underway, with some question marks around the ultimate reach of transistors. In any case, we can expect it to continue accelerating, taking us to the limits of what is endurable for our sluggish neuron-based brains.

Substituting the sensory cortex. Machine vision, Augmented Reality, Text-to-Speech, Natural Language Processing and Chatbots are just some of the technologies that are putting in question the neuron's dominance in sight, sound, and language. Roughly 15-25% of the brain is devoted to processing sensory signals and language at various levels. Transistors are getting very good at it, and have recently become able to recognize many items in images and to process and create language effectively.

Substituting the motor cortex. The same is happening with unscripted and adaptive movement. Robots are increasingly powerful; they are able to cook pizzas, walk through a forest and help you in a retail store. Autonomous driving promises transistors that can navigate a vehicle, one of the most demanding motor-sensory tasks we humans undertake.

Substituting non-routine cognitive and information processing. We have seen basic calculation and data processing taken over by transistors; now we are starting to see "frontal lobe" tasks go over to them as well. Chess, Go and Jeopardy are games in which AIs have already bested human champions. Other, more professional fields like medicine, education or law are already seeing transistors start to encroach on neurons through Artificial Intelligence in which transistors mimic "neural networks".

Encoding morality, justice, and cooperation. Another set of capabilities that represent some of the highest complexity of the human brain and the frontal lobe are moral judgments and our capacity for cooperation. Blockchain promises to be able to encode morality, justice, and cooperation digitally and make them work automatically through "smart contracts".

Connecting brains with transistors. Finally, we are seeing neurons get increasingly connected to transistors. There are now examples of humans interfacing with transistors directly through the brain. This could bring a time of integration in which neurons take care of non-time-sensitive tasks while in continuous interaction with transistors that move at much higher speeds.

Will all this substitution leave anything to neurons? There are capabilities like empathy, creating goals, creativity, compassion or teaching humans that might be beyond the reach of transistors. On the other hand, it might be reasonable to believe that any task that can be done with a human brain can be done much quicker with an equivalently complex processor. There are no right answers, but it seems that roles like entrepreneur, teacher, caregiver or artist might have more time left dominated by neurons than many other callings. Also, there might be some roles that we always want other humans to take with us, even if transistors could take care of them much more quickly and efficiently.

Blockchain, Digital Governance, Tech and Business

Bitcoin: Bubble or S-Curve?

You can find more about cryptocurrencies and other Exponential Revolutions that will shape the future in my book: Beyond Digital (here in Spanish).

I have been writing about Bitcoin and cryptocurrencies for over a year now. The jump in prices in 2017 has been staggering, an order of magnitude. Now, with crypto worth between half and three-quarters of a trillion USD, the question on everyone's mind is the same. Is it a bubble? What should I do about it?

I don't have the answer, and no one does. We are looking at an unprecedented phenomenon. It will be easy to explain in hindsight, but right now we are completely at a loss to predict the future. There are two compelling and competing explanations out there about what is happening. They make testable predictions that lead to diametrically opposed advice. The two theories are the Bubble and the Adoption Curve.

Nasdaq Bubble and burst. Source: FedPrimeRate

The Bubble: Internet Boom all over again

The Bubble is the most widespread explanation. It says this has happened before, many times. A new asset class is created, it starts to rise fueled by speculation and, at some point, everyone buys into the game. Fear of missing out gets the better of caution and more and more people start to invest. The scarcity of the asset class drives high apparent valuations that are not real, but rather predicated on the transaction prices set by the few people selling vs. the crowd trying to get in. First it is just the techies, then the financiers jump in, then the broader public, and then there is no one left to jump in and prices collapse. Afterwards, the technology takes its time to develop and a small part of the asset class becomes very valuable over time.

The first bubble of this kind was the famous Tulipmania in 17th century Holland; then there was the South Sea Bubble, the 1929 stock market peak, the go-go years of the 1960s, the internet boom of the late 1990s and several real estate bubbles, the latest ending with the 2008 crash. It is pure human mass psychology and begets stock phrases like "prices can never fall", "this time is different" or "we have reached a permanently high plateau of valuation".

The facts are also consistent with this explanation, but with a much more radical speed and depth than other bubbles. The NASDAQ 1990s bubble took around a decade to form, with a roughly tenfold price increase from the ~500 trough in October 1990 to the ~5000 peak in March 2000. The crypto craze has done much more in 2017 alone, going from a ~20 billion valuation at the beginning of the year to a ~600 billion valuation at the end. Approximately an x30 during 2017.
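
A minimal sketch of that comparison, taking the rough valuations above as given:

```python
# Comparing the two run-ups with the approximate figures quoted in the text.
nasdaq_trough, nasdaq_peak = 500, 5_000   # index level, ~Oct 1990 -> ~Mar 2000
crypto_start, crypto_end = 20e9, 600e9    # total crypto market cap in USD, during 2017

print(f"NASDAQ run-up: x{nasdaq_peak / nasdaq_trough:.0f} over ~9.5 years")  # x10
print(f"Crypto run-up: x{crypto_end / crypto_start:.0f} over 1 year")        # x30
```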

As more and more people have gotten into crypto prices have skyrocketed, leading to more people to get into crypto. Most people are buying and holding crypto, so there is scarcity to enter the asset class, a very small door to enter Bitcoin that bids prices ever upward. In the western world, we are living the start of the “financiers” and “everyone else” phase, with still plenty of people left to enter the cryptocurrency craze. However, in Korea, China or Japan we have been in the “everyone else” phase for a while, with governments deeply concerned about the speculation’s impact on their elderly or young.

If the bubble theory is correct, there are three questions worth answering: When? How much? And how long? When will the crash come? That is what all speculators are thinking about, and it is impossible to answer, as it depends on crowd psychology. The Rockefeller anecdote about selling all his stocks when a shoeshine boy gave him a stock tip, thus avoiding the 1929 crash, seems a good warning sign. In some countries, taxi drivers are already recommending bitcoin investment, which could be a modern-day equivalent. How much will it collapse? That is another great question. The NASDAQ bubble collapsed to ~1200-1500 (around -75%) twice, once in 2002 and again in 2009. Of course, cryptocurrencies have no bottom at all, as there is almost no intrinsic value behind them, while the NASDAQ had real companies with real earnings. How long could it take to recover? The NASDAQ only confidently recouped its peak and went beyond it last year, 17 years after the 2000 peak. This puts into perspective how much risk there really is. This is how long the internet's real value took to catch up with its hype, even though there has obviously been a lot of real value. Blockchain will be a game-changing technology, but real applications are still few and far between.
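
The recovery arithmetic shows why such a collapse takes so long to undo; the figures below are illustrative midpoints of the ranges above:

```python
# Recovery arithmetic after a bubble-style drawdown (illustrative figures).
peak, trough = 5_000, 1_350          # ~NASDAQ 2000 peak and mid post-crash range

drawdown = 1 - trough / peak         # fraction of value lost from the peak
recovery_multiple = peak / trough    # gain needed just to get back to the peak

print(f"Drawdown: {drawdown:.0%}")                                   # ~73%
print(f"Gain needed to recoup the peak: x{recovery_multiple:.1f}")   # ~x3.7
```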

4 famous S-curves. Source: Quora

The S-Curve: A New World of Value

Of course, there is an alternative explanation to the Bubble: the Adoption Curve (or S-curve, given its shape). The adoption curve explanation is very widespread among starry-eyed crypto enthusiasts. It also has precedent. Eternal September refers to September 1993, when internet usage started growing significantly thanks to AOL (that is the bottom of the S), and it has only grown exponentially since. Of course, once the whole of the world uses the internet, growth flattens (the top of the S) and it stops at that permanently high plateau.

Adoption curves or S-curves are prevalent in the adoption of technology, and for the most part they have been tried and true ways of predicting it. Initial adoption is slow (bottom of the S) with innovators and enthusiasts; once the majority comes in, it grows fast (slope of the S); finally, the last laggards take a long time to adopt as they are anti-technology (top of the S). TVs, trains, electricity, cars, and many other technologies can be explained with the S-curve.

The rationale behind the S-curve for cryptocurrencies is that crypto is a new asset class that is being adopted, not a stock or bond that is being subjected to irrational euphoria. It took a long time to get started (~9 years to reach a ~20 billion market cap), but now it is in the explosion phase and will continue to grow until it flattens out once it reaches saturation. Where it flattens will depend on what percentage crypto attains as an asset class. Total wealth in the world is ~250-300 trillion USD depending on the estimate. So we are now at approximately 0.2% penetration; if it gets to 1% we still have an x5 run, and if it gets to 10% we still have an x50 run, to close to $1 million per Bitcoin.
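
A small sketch of those penetration scenarios, taking the text's rough estimates as assumptions:

```python
# S-curve scenarios: crypto as a share of total world wealth (rough estimates).
world_wealth = 300e12   # ~300 trillion USD, the upper end of the estimate
crypto_cap = 600e9      # ~600 billion USD market cap at the end of 2017

print(f"Current penetration: {crypto_cap / world_wealth:.1%}")          # ~0.2%

for target in (0.01, 0.10):          # the 1% and 10% penetration scenarios
    upside = target * world_wealth / crypto_cap
    print(f"At {target:.0%} of world wealth: x{upside:.0f} from here")  # x5, x50
```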

The Adoption Curve raises a number of important questions: How much? What path? And which ones? How much penetration? Will it be 0.1%, 1% or 10% of the wealth? Depending on what you believe, there is a big difference in potential. Real estate is ~60% of total wealth, while hedge funds are ~1%. What path will it take to the final penetration? S-curves are about usage, not value, so a crash or correction could be consistent with the theory as long as usage and ownership continue to grow. Which cryptocurrencies will be used in the long term? Are Bitcoin and Ethereum Webvan and Pets.com, or Amazon and Google? Crypto might reach 10% of the wealth and you might still lose everything if you choose the wrong cryptocurrencies. Choosing between Friendster and Facebook is easy in hindsight, but very hard in advance.

So what should I do?

Don't invest more than you are willing to lose; diversifying and hedging your bets are always good paths to follow if you feel you don't have an edge over others. It hurts when you hedge and miss the bull run, but it hurts more when you plunge in and lose what you cannot afford to lose. 2% of your assets or one month's salary will be substantial but will not take you to bankruptcy in most situations. Go beyond that at your peril.

The bar for beating the crypto markets is really high. People who are seriously investing in cryptocurrencies are dedicating a significant amount of their time to them, doing things like participating in Slack groups, trying out every new token out there and talking to founders. If you are not doing that, you don't have an edge and will probably be better off with a diversified portfolio and limited exposure.

Bioprogramming, Tech and Business

Future Scenario: 120-year-old youths

Death is the great equalizer, and one of the two aspects of God in most religious systems. Some, like Singularity University expert Jose Luis Cordeiro, claim that the "death of death" is near. We don't need to go that far. What if we could reach 120 years of age with reasonable health and well-being? Wouldn't that be great?

Life expectancy from antiquity to today hasn't really changed, and yet it has changed very much at the same time. It hasn't really changed in the sense that a genetically and environmentally lucky human specimen could reach 90 years in Ancient Greece; today that same specimen might reach 100+, not much of a change. At the same time, the average lifespan has multiplied. In the past most humans were unlucky, and they died as infants, children or adults, mainly through infectious disease. The 20th century decisively won the battle against infectious disease, and now life expectancies routinely match that of the lucky Athenian.
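
A toy calculation makes the point: cutting early deaths multiplies the average lifespan even if the maximum barely moves. The mortality figures below are invented purely for illustration:

```python
# Toy model: average lifespan vs. maximum lifespan (invented figures).
def avg_lifespan(early_death_rate, early_age=5, lucky_age=80):
    """Population average when a fraction dies young and the rest grow old."""
    return early_death_rate * early_age + (1 - early_death_rate) * lucky_age

print(f"40% dying young: average {avg_lifespan(0.40):.0f} years")  # 50
print(f" 1% dying young: average {avg_lifespan(0.01):.0f} years")  # 79
```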

However, we seem to have hit a stumbling block in terms of maximum life expectancy and, even more importantly, quality of old age. Our main killers nowadays depend on lifestyle and decay, and they are mostly chronic; our health system and technology are much less capable of dealing with them. Cancer, the other main killer, is a product of decay but looks more like an infectious disease in terms of its severity and outcomes. We are getting better with cancer, but it is still largely beyond our control.

Achieving the dream of reaching 120 with significant quality of life requires overcoming both chronic disease and cancer. It probably also requires that we somehow reverse the cellular decay processes that take place throughout the body and take even the luckiest of us to the grave. There are only between 150 and 600 supercentenarians alive who have passed 110, and the oldest person ever verified reached 122. 120-year-old youths would be a real transformation for us as a species.

If we start with our chronic diseases, taking care of those could be embarrassingly easy. Books like Ray Kurzweil’s Transcend cover the nine steps required to overcome most of the current killers. They are fairly low tech: talking to your doctor regularly, relaxation, regular health assessments, nutrition, supplements, calorie reduction, exercise, some new technologies, and detoxification. Mostly common sense backed by truckloads of scientific evidence. Still, most of us don’t exercise, live stressed-out lives, skip health assessments, eat too much of the wrong things, and ignore our genetic weaknesses. This first step might be more about Neurogamification than Bioprogramming.

A regimen like the one prescribed above would produce a step improvement over current health outcomes. The top 10 causes of death in high-income countries today, according to the WHO, are: ischaemic heart disease, stroke, Alzheimer’s and other dementias, trachea, bronchus and lung cancers, chronic obstructive pulmonary disease, lower respiratory infections, colon and rectal cancers, diabetes mellitus, kidney disease, and breast cancer. From the list, more than half are mostly attributable to lifestyle choices (lack of exercise, obesity, smoking, drinking, pollution), and breast cancer has an important genetic component. Only colon and rectal cancer can be considered a true “non-lifestyle, non-genetic” killer.

If we look next at cancer, it is tougher, but we are making progress. First, we have traditional treatments like chemotherapy and radiotherapy, which have progressively improved survival rates. Second, the genomics-based treatments covered earlier have much potential for reducing the overall impact of rarer cancers. Third, new treatments are coming out all the time, like a cancer treatment awaiting FDA approval that harnesses the body’s own immune system. Finally, either very precise artificial life or nanobots will at some point be designed to detect and eliminate cancer right away. We might still be decades away, but after all, most of us are decades away from being 120.

The final frontier is cell degeneration. Cells are programmed to die, and they degenerate. Our genes don’t care much about us once we have outlived our reproductive capability, so we decay and die. This is not true of all organisms. Single-cell organisms like bacteria continue endlessly; it could even be argued that the original bacteria live on, even if in a highly mutated state, after close to three billion years. Replicating this in the human cell requires a lot of basic science work. Silicon Valley billionaires are helping here, donating significant amounts to research and creating a new industry dubbed the “Immortality Industry”.

Considering all of this, getting to 120 in quite healthy condition might require some work, but for those who value their time on Earth highly, it might be feasible. The impact of healthy 120-year-olds would require us to change many of the ways we organize society. If 65 becomes the age of your midlife crisis, it is probably a little early for retirement. Healthcare costs might rise to 20, 30, or 40% of total GDP; after all, there is little more precious than life. Finally, inheritance might take on an entirely new meaning when you can reasonably expect to be a great-great-grandparent.

If you want to try to get there, you just need to start changing your habits in very common-sense ways. Remember: that cigarette, that rerun of Game of Thrones that kept you out of the gym, or those extra pounds might cost you decades of healthy living.

Bioprogramming, Tech and Business

Exponential Revolution #5 – Bioprogramming

Bioprogramming is the capacity to understand, modify, and create from scratch the functioning of living organisms by manipulating their underlying genetic code as if it were digital code. It will allow the reprogramming of existing organisms or the creation ex-novo of new ones. This reprogramming and creation will potentially be as easy and fast as with digital code. The reprogrammed life forms will be used to heal, fabricate, and much more.

Now we enter completely different terrain, one that impinges one step further into what we consider our core. Bioprogramming doesn’t only have great economic impact; it also touches the substrate of our own bodies. It promises to change our relationship with them, giving us knowledge that seemed an unattainable mystery and perhaps impacting some of the deepest human mysteries like health, disease, reproduction, or death itself.

Like the New Energy Matrix, Bioprogramming combines digital technologies with very physical and “wet” technologies. It is based strongly on the breakthroughs in genetic engineering and biology that have improved our understanding of how living organisms are coded and how they work. It has also required substantial advances in chemistry and physics to create the physical capability of manipulating micro-scale genetic material at very low cost. The latest in a long list of advances is CRISPR/Cas9 technology, which allows permanent modification of the genetic information of organisms. Of course, it has also required the advance of digital technology to allow all of this to be done in an automated and cost-effective way.

Bioprogramming allows reading, writing, and modifying genetic instructions in living cells. This can be used in three main ways: reading the genetic instructions of existing organisms, writing new organisms ex-novo based on previously understood genetic sequences, and modifying the genetic instructions of existing individuals. This can be done in micro-scale organisms, in human-scale animals and plants, or in our own genetic information.
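As a loose software analogy, and emphatically not real bioinformatics tooling, the read/write/modify framing maps naturally onto string operations; the sequences and the “edit” below are entirely made up for illustration.

    # Toy analogy: treating a DNA sequence like digital code (illustrative only)
    genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"

    # "Read": locate a motif of interest, much like grepping source code
    print("motif at index", genome.find("GGGCCGC"))

    # "Modify": a CRISPR-like edit, swapping one subsequence for another
    edited = genome.replace("TGTAATG", "TGCAATG", 1)   # a single-base change

    # "Write": compose a new sequence ex-novo from known building blocks
    promoter, payload = "TATAAT", "ATGAAATAG"
    synthetic = promoter + payload
    print(edited, synthetic)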

Manipulation of micro-scale organisms can be used for several purposes. On one side, it can be used to create natural “fabrication” organisms that produce a given compound that is difficult or expensive to synthesize through non-organic means. On the other, it can be used to understand and control dangerous bacteria or viruses.

Manipulation of human-scale animals and plants can be used to improve agricultural output, and it is increasingly being used to add new functionality to organisms for new uses. The level of manipulation available goes far beyond what could be achieved through previous methods of genetic modification, as it can be done almost from scratch.

Finally, manipulating human genetic information can reduce the incidence of known genetic diseases, and it is increasingly being used beyond that to understand and treat diseases like cancer. Eventually, it could also be used to improve humans dramatically by eliminating shortcomings (e.g. aging) or adding new functionalities.

Overall, Bioprogramming will have a tremendous and transformative impact on our lives, and it brings forth deep and worrisome ethical challenges. In terms of economic activity, however, it will affect only some sectors. It will probably change healthcare and agriculture completely. It might also have an impact on other industries in which some organic use case can be found. But most industries will see little direct impact from bioprogramming.

Even though most sectors will not change, we should all be concerned about bioprogramming. The changes we will see in humans could be breathtaking, and the potential for change in our society is massive. Receding old age, important healthcare differences between haves and have-nots, designer babies, eugenics and re-genics, super-cattle and super-dogs… The decisions we need to take as a society are innumerable, and they deal with the core of our legends and myths: life, death, and rebirth.

We are seeing the dawn of the bioprogramming revolution with innovations that seemed beyond science fiction not so long ago:

Facts

  • Artificial organisms. Creating new organisms.
  • Targeted medicine. Medicines for individuals.
  • Improved animals and plants. Superfarming.
  • Genetic therapy. Getting the bugs out of the (genetic) code.
  • Protein drugs. Taking pharma research to the next level down.

Speculations

  • Leather and meat fabrication. No cow was hurt preparing this hamburger.
  • Genetic modification. Designer babies.
  • Genetic mapping. Decoding the spaghetti code of life.

Future Scenario: 120-year-old youths