
Exponential Technology Revolution #3 – Integrated Reality Facts

The best way to illustrate the power of Exponential Technology Revolution #3 – Integrated Reality is to look at its real-world effects.

We will start with the examples that are already real and “in the wild”, covering six areas: digitized physical shops, smart cities, 3D printing, physical robots, smart vehicles and voice interfaces.

Digitized physical shops. Google Analytics for shops.

Shopping technology has taken an incredible step forward in the last five years. If a store hasn’t changed over this period, it is missing a big part of the value that IoT can bring to it. What has mostly happened is a sensorization of stores. Stores used to be knowable only through their sales associates, but now we can make them sense and see customers.

Sensing customers is done mostly through WiFi or Bluetooth, tracking the phones that enter and exit the store, or through “person counters”, which show how many individuals walk past a storefront or into a particular part of the store. Sensing allows drawing the funnel of where customers are and have been in a store and what movements they make. There are countless vendors of this type of technology. They can also use WiFi information to try to identify the person. Gigya, a leading omnichannel identity management provider, was recently purchased by SAP for 350 million USD.
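
As an illustration, here is a minimal sketch of the WiFi-sensing side, assuming the scapy library and a wireless adapter in monitor mode (the interface name is a placeholder); commercial products add MAC de-randomization, dwell-time analysis and privacy controls on top of this basic counting.

```python
# Minimal sketch of WiFi presence sensing: count unique phones near a store
# by sniffing 802.11 probe requests. Assumes a WiFi adapter in monitor mode
# ("mon0" is a placeholder) and the scapy library.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

seen_devices = set()

def handle_packet(pkt):
    # Probe requests are sent by phones looking for known networks;
    # addr2 is the (possibly randomized) MAC address of the sender.
    if pkt.haslayer(Dot11ProbeReq) and pkt.addr2:
        seen_devices.add(pkt.addr2)

# Sniff for 10 minutes, then report an approximate visitor count.
sniff(iface="mon0", prn=handle_packet, timeout=600)
print(f"Approximate devices detected in the last 10 minutes: {len(seen_devices)}")
```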

Seeing customers is about cameras plus artificial intelligence to analyze them. You can look at the customers who enter and understand demographics and sentiment. It is like having a smart associate following every customer and seeing what they are up to. The technology is relatively straightforward, so hundreds of companies are vying for position in this new area.

Of course, a concrete and impressive implementation of all this is the Amazon Go store. It tracks where you are and who you are through a “log in” at the entrance of the store and cameras. Those same cameras, along with shelf sensors, let it know what you have picked up and charge it to you as you walk out. The most obvious use case from the demonstration video was customer convenience, but, as always, Amazon also gets an incredible amount of information about the customers in its store. It can be even more than the equivalent on the web.

Retail spaces are being transformed into the first digitized physical spaces. They will show us the potential of what can be done as the cost of the technology plummets and the skills to design these kinds of environments are developed at scale.

Smart cities.

Smart cities have been a buzzword for a long time already. They also represent an increasingly large investment category for governments. After all, cities are where most of the world lives and where almost all economic activity happens. Improving cities has great returns and a lot of possibilities.

At the same time, the results haven’t lived up to the hype. Smart cities were going to change how we live very quickly, but for the most part, cities continue to be very much what they have been for the last five decades. This is to be expected. Change in the public sector takes even longer than in the private sector. A city, being one of the most complex entities in existence and marrying public and private, can be expected to take even longer to change.

Now we are starting to see the first fruits of smart cities deployed in the real world and changing how we live. As usual, when those changes happen they become commonplace and stop being called smart cities, but they represent substantial improvements.

There are many cities in the world vying for the title of the world’s smartest city. Santander in Spain has invested for a long time. Tampere in Finland is trying to leverage the Finnish technological legacy. As always, there is a ranking for that, and the IESE ranking has many of the usual suspects at the top: New York, Berlin, San Francisco, Tokyo. What is important is not the smartest city, but rather the smart things that seem to make sense for everyone and bring Integrated Reality to the fore. There are four very basic but smart things that seem to make sense for most cities.

Lighting. Smart lighting means LED lights and digitally controlled lighting times for the city. This might seem like small potatoes, but there are more than 300 million streetlamps in the world according to some estimates, and they represent an important part of cities’ energy consumption and greenhouse gas emissions. Sydney has managed to reduce its energy bill by more than 30% since 2012 thanks to smart lighting, and there are many other examples across the world.
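
A back-of-the-envelope sketch of why this matters; every figure below (fleet size, wattages, dimming factor) is an illustrative assumption, not measured data.

```python
# Back-of-the-envelope estimate of smart streetlight savings.
# All figures are illustrative assumptions, not measured data.
lamps = 100_000                      # streetlamps in a mid-sized city
sodium_watts, led_watts = 150, 60    # typical high-pressure sodium vs. LED lamp
hours_per_night = 11
dimming_factor = 0.85                # digital controls trim ~15% of lamp-hours

old_kwh = lamps * sodium_watts * hours_per_night * 365 / 1000
new_kwh = lamps * led_watts * hours_per_night * 365 * dimming_factor / 1000

print(f"Annual consumption before: {old_kwh:,.0f} kWh")
print(f"Annual consumption after:  {new_kwh:,.0f} kWh")
print(f"Reduction: {(1 - new_kwh / old_kwh):.0%}")
```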

Parking. Parking is one of the most human time-wasting activities in the world. Drivers spend anywhere from 17 hours to 4 days a year looking for parking, depending on the study. This is an incredible amount of wasted time. Parking sensors and displays can help drivers find parking quicker, or avoid taking the car if the parking situation is too bad. The city of Santander, for example, has sensorized the parking spots in its city center to do just that. It is also a great way to boost revenue, with smart parking companies claiming a 20 to 30% revenue uplift.
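
Conceptually, each sensorized spot just publishes its occupancy so displays and routing apps can guide drivers. Here is a hedged sketch using MQTT, a publish/subscribe protocol common in city IoT deployments; the broker address, topic layout and payload fields are hypothetical.

```python
# Sketch of a parking-spot occupancy sensor reporting over MQTT.
# Broker address, topic layout and payload fields are hypothetical examples.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also requires a CallbackAPIVersion argument
client.connect("mqtt.city.example", 1883)   # placeholder broker

def report_spot(spot_id, occupied):
    payload = json.dumps({
        "spot_id": spot_id,
        "occupied": occupied,
        "timestamp": int(time.time()),
    })
    # Guidance displays and routing apps subscribe to parking/<district>/<spot>.
    client.publish(f"parking/centre/{spot_id}", payload, qos=1)

report_spot("A-017", occupied=False)   # a freed spot shows up on street displays
```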

Garbage. Garbage collection is one of the most unglamorous but important tasks in a modern city. The impact of Integrated Reality on garbage collection covers the whole process. In waste collection, smart garbage containers allow for collection on demand and trash-type inspection. Cities like Seoul had to turn to this after even daily collection became insufficient. In waste processing, the highly intensive and back-breaking manual labor required for trash sorting is being replaced by robots, which can handle sorting for recycling automatically, 24×7, and don’t require waste separation at origin.
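
A minimal sketch of the collection-on-demand logic, assuming fill-level readings between 0 and 1; the container IDs, threshold and readings are made-up examples.

```python
# Sketch of "collection on demand": schedule pickups only for containers whose
# fill-level sensors pass a threshold. Container IDs and readings are made up.
FILL_THRESHOLD = 0.8   # schedule a pickup above 80% full

def containers_to_collect(readings):
    """Return the container IDs that need a pickup, fullest first."""
    due = [cid for cid, level in readings.items() if level >= FILL_THRESHOLD]
    return sorted(due, key=lambda cid: readings[cid], reverse=True)

readings = {"bin-001": 0.95, "bin-002": 0.40, "bin-003": 0.82}
print(containers_to_collect(readings))   # ['bin-001', 'bin-003']
```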

Traffic. Life has changed significantly for all of us, but, as usual, we take it for granted. I still remember the nerve-wracking planning you had to undergo to get to Madrid’s historic city center by car in the Christmas period. It could take anywhere from 30 minutes to 3 hours to get in, and the wrong choice of route could easily add an hour. I first heard of Waze six years ago, when it was still an upstart. I got to meet the CEO in 2016, after they had already sold to Google. This simple application has made Google Maps smarter and lets us know, through digital means, what physical reality looks like. Do you really want to take the car if Google predicts it will take longer than public transit? Probably not.

Cities will continue to get smarter by integrating reality more and more. We will slowly get to the point at which the physical city has a virtual overlay that allows optimizing its every function. This might seem like a developed-world thing only, but as costs drop through Moore’s Law we will see “smart-first” cities in the developing world which will leapfrog into a completely new way of doing things. The smartest city in the world might be in Africa soon enough.

3D Printing. Turning digital into reality.

Everyone who is old enough will probably remember the magic of the first home printers. You could create something on the computer and then have this wonderful machine put it on paper. Now it seems commonplace and has lost its magic, but at that point in time it was amazing.

3D printing is an old technology, with its initial patents and industrial successes in the 1980s; however, it is still in the amazing-but-clunky phase for the most part. You can design a 3D model, scan one using a phone and some software or a 3D scanner, or just download one from a site like Thingiverse, and then make it real. You will make it real in orange plastic, and it won’t look very polished, but still, it is incredibly cool. Suddenly your computer and reality are connected in a very real sense.
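
To make the digital-to-physical handoff concrete, here is a small sketch of the file-format side: a 3D model is ultimately just a list of triangles written to an STL file, which slicer software then turns into printer instructions. The single-triangle “model” is a toy example.

```python
# Sketch of the digital-to-physical handoff in 3D printing: a mesh is a list of
# triangles saved as an STL file, which a slicer turns into G-code for the printer.
def write_ascii_stl(path, name, triangles):
    """Write triangles (each a tuple of three (x, y, z) vertices) as ASCII STL."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v1, v2, v3 in triangles:
            f.write("  facet normal 0 0 0\n")   # most slicers recompute normals
            f.write("    outer loop\n")
            for x, y, z in (v1, v2, v3):
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# A single flat triangle: the simplest possible "design" to hand to a slicer.
write_ascii_stl("triangle.stl", "demo", [((0, 0, 0), (10, 0, 0), (0, 10, 0))])
```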

Consumer 3D printers are getting cheaper and better all the time. There is now a host of companies in the market and you can get one for anywhere from $199 to $2,500. Quality and speed are improving, but they are far away from the dream of “build everything at home”. There might be times when they are useful, like building some gear for your Halloween costume, but for the most part they are a curiosity.

The real deal in 3D printing is in industrial settings. Here the longer history of 3D printing and the availability of higher-quality models are taking some use cases by storm. It is still not the mainstream manufacturing use cases, as plastic injection molding is difficult to beat for anything at scale. It is mundane uses like spare parts, which you need in unit batches and where a lot of the cost currently sits in inventory. It is consumer-focused uses like personalization, where a small bit of 3D printing on top of a traditional industrial fabrication process can go a long way. Or really transformative cases like AI-based generative design for high tech (e.g. GE turbines, Boeing airplane frames) that is then 3D printed because it is too complex for traditional industrial fabrication.

The current 3D printing craze will pass, but the technology will get better over time. More and more use cases will become economic, with traditional methods only remaining economic at higher and higher volumes. Eventually we will end up in the paper-printing scenario, in which only extremely high-volume runs make sense in an industrial setting anymore.

Physical robots. Embodied Intelligent Processes.

The robots are coming. Up to now most robots have been virtual robots that have automated the tasks we call routine cognitive (e.g. data processing) and are little by little encroaching on non-routine cognitive tasks through robotic process automation and cognitive technologies. These robots live in computers and servers, and they are not exciting to see. However, there is a whole new wave of robots coming for the routine physical jobs, and they are a lot more like Star Wars characters.

Industrial robots have been with us for a long time, and many of the highest-productivity countries and companies use them intensively. The automotive industry is the poster child of assembly-line robotization. The most advanced countries, like Korea, already have more than 800 industrial robots per 10,000 workers according to the International Federation of Robotics. This is continuing, and high-profile M&A, like China’s Midea acquiring Germany’s Kuka, shows it is still an area of rapid development and high international strategy.

However, the wave that is coming is taking robots beyond extremely high-volume industrial processes. We can expect robots in many more mundane environments. We have the robotic waste sorters from the smart city section. We have drones, which are well suited to both small-weight delivery and sensing from above. We have Zume, the automated pizza restaurant on wheels. We have the 100% automated sushi restaurants in Japan. And we have the retail robots that greet customers, show the merchandise and even do inventory counts at Lowe’s in the US and in the SoftBank stores in Japan.

Advances in machine intelligence are making locomotion, sensing and physical interaction a lot more mundane. Talking to the CEO of the robotics company that is helping Lowe’s, he told me that the big change is speed and safety. Before, you had to keep an industrial robot in a cage; now it can interact with humans. Before, sensing took too long for real time; now it is quickly approaching workable real time for most applications.

There is no end to the variety of tasks that could be automated in this way, even without rethinking form factors and ways of doing current tasks. The important point from the Integrated Reality perspective is that all those robots will have virtual twins that are constantly controlled digitally. And the physical robots will be able to sense and act in the physical world, digitizing the physical environment.

Smart vehicles. I need you, KITT.

Getting into a Tesla might not look like Integrated Reality at first, but it is extremely cool. The first time I got into one, what really excited me was the big tablet in the dashboard. That might look like the Integrated Reality in a Tesla, but when you talk to Tesla owners it is just the tip of the iceberg.

A Tesla is continuously reporting almost everything back to Tesla Motors: its performance, health, location, issues, etc. Effectively, Tesla Motors is always seeing the virtual Tesla, the digital twin of the physical one. This has tremendous implications in terms of design and optimization. Tesla can see and report any problem developing, solving it for you and solving it forever in the design. It also has great safety features. If you have an accident, you will get help as quickly as possible. If your physical car is stolen, its digital twin will know where it is. Finally, the Tesla is a computer: you can update it over the air and get a progressively smarter car each day.
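
Conceptually, the car keeps its digital twin up to date by reporting state continuously. The sketch below is not Tesla’s actual telemetry API; the endpoint and fields are hypothetical, chosen only to illustrate the pattern.

```python
# Conceptual sketch of a connected car keeping its "digital twin" up to date.
# NOT Tesla's actual telemetry API: the endpoint and fields are hypothetical.
import json
import time
import urllib.request

def report_state(vehicle_id, state):
    payload = json.dumps({"vehicle_id": vehicle_id, "ts": int(time.time()), **state}).encode()
    req = urllib.request.Request(
        "https://fleet.example.com/twin",   # placeholder manufacturer endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# The car periodically reports health, location and performance; the
# manufacturer's copy (the digital twin) is what engineers query and update.
report_state("VIN-12345", {
    "battery_pct": 76,
    "odometer_km": 40231,
    "location": {"lat": 40.4168, "lon": -3.7038},
    "fault_codes": [],
})
```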

While Tesla is the flashiest in terms of these improvements, the rest of the industry is following quickly. Almost all cars will come with this functionality sooner rather than later. More futuristic capabilities, like cars that rent themselves out while not in use or come to get you with autonomous driving, don’t seem too far out either.

Voice interfaces. Alexa, I want to talk with Cortana and Siri.

Voice interfaces are booming, and they probably deserve a book, or at least an article, to themselves. Voice could be a new interface that supersedes point-and-click, and even touch, for a lot of use cases. The number of voice devices and applications is staggering: more than 20 million voice devices were expected to ship in 2017, after more than 10 million shipped across 2015 and 2016. Applications already number in the tens of thousands.

However, the jury is still out with regard to real usage. My personal experience with Alexa and Google points to music as the killer application, and apparently few of the other applications are gaining traction. Habits die hard, and even small failures in performance (e.g. my device understands me two out of three times) can hamper adoption and addiction. Touchscreens also took time to be adopted, and they were a very small step from point-and-click.
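
A quick worked example of why “understands me two out of three times” is so damaging: the chance that a multi-step voice task completes without a single misrecognition shrinks fast. The accuracies below are illustrative, not measured figures.

```python
# How per-utterance accuracy compounds over a multi-step voice task.
# The accuracy values are illustrative assumptions.
for accuracy in (0.67, 0.90, 0.99):
    for steps in (1, 3, 5):
        success = accuracy ** steps
        print(f"accuracy={accuracy:.2f}, steps={steps}: task succeeds {success:.0%} of the time")
```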

Voice is very relevant for Integrated Reality because it marries physical reality and the digital world. Speech is a very natural human act, a key part of human physical interaction. When you touch a smartphone, you take yourself outside the physical world. Talking could integrate digital into our daily physical reality more than any number of touchscreens can. Voice is also the language of magic; we have always fantasized about magic spells that change the world. Now that can start to happen.

The final important point about voice interfaces is their sensor nature. Like the Amazon Go store, Alexa is really a data play. My Alexa and my Google are listening to every conversation (except when my youngest son disconnects them, which is quite often) and gathering a huge corpus of data to analyze. The amount of customer knowledge is really amazing and frightening. The corpus for machine learning of voice processing is also enormous. Voice biometrics can be relatively secure, especially when the prompted text is dynamic, making it difficult to synthesize a false positive. This could add another layer of security to our reality.
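
A conceptual sketch of the dynamic-text idea: the prompt changes every time, so a replayed recording of an earlier session fails. The voiceprint check below is a hypothetical placeholder for a real voice-biometrics model, and transcription is assumed to happen elsewhere.

```python
# Conceptual sketch of dynamic-text voice verification. The voiceprint_match
# flag stands in for a real speaker-verification model (hypothetical here);
# speech-to-text of the user's response is assumed to happen elsewhere.
import secrets

WORDS = ["orange", "river", "seven", "window", "cloud", "piano", "tiger", "stone"]

def new_challenge(n=4):
    # A fresh random phrase each time, so old recordings cannot be replayed.
    return " ".join(secrets.choice(WORDS) for _ in range(n))

def check_response(challenge, transcript, voiceprint_match):
    said_the_right_words = transcript.strip().lower() == challenge
    return said_the_right_words and voiceprint_match

challenge = new_challenge()
print("Please say:", challenge)
# In a real system, voiceprint_match would come from comparing the audio
# against the user's enrolled voice profile.
print(check_response(challenge, transcript=challenge, voiceprint_match=True))
```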

With voice working as an interface, we would only need to create workable gestural interfaces to have full digital immersion in our physical reality, with all our devices able to understand and obey us. We are only in the initial stages of voice and gesture. They might take their time to catch on, but the only real risk of them not becoming relevant is if we leapfrog them with mind-machine interfaces or the virtual/augmented reality that we will see in the Integrated Reality speculations.
