Ambient Intelligence: When Computing Is All Around You
We have digital assistants that can make a restaurant reservation, lock the garage door, or check whether our laundry is dry when asked. We have tablets with cameras that automatically track us as we move around to make our video calls feel more like being there. We even have home automation hubs that use ultrasonic sound waves to locate us in a room and make their displays more readable from a distance.
The next logical step will be not needing to interact deliberately with our devices at all.
As the objects around us become instrumented, interconnected, and intelligent, they are enfolding us in a subtle but ubiquitous web of connectivity. Soon, all we’ll have to do to interact with them is speak or gesture. In some situations, we might not even have to do that; our environment will respond automatically to cues like respiration, heartbeat, or body temperature.
All around us and always on, this ambient intelligence will embed computing in every aspect of our daily lives. In addition to handling mundane tasks for us, it will increase accessibility and comfort for people with physical disabilities (for example, by locating an object that’s out of place or scheduling a medical appointment when it detects worrisome symptoms). In the process, ambient intelligence will improve quality of life for everyone.
The internet of not-things
The technology driving this seamless layer of interactivity is becoming increasingly sophisticated and immersive. Wearable computing, familiar to most of us now as smartwatches and fitness trackers, will be an essential enabler. Although the Teslasuit, an experimental full-body haptic suit, isn't something most people are likely to slip on under their daily clothes, its technology could eventually make its way into clothing capable of anything from adjusting the thermostat to alerting your doctor to a health emergency based on the data it continuously gathers.
In a less obtrusive way, “smart rings” like those being developed right now by Apple and Amazon will be functional yet decorative ways to control other devices with a gesture. Imagine bringing in a heavy box without fumbling by unlocking the trunk of your car with a wave and opening the door to your house with a wag of one finger. We’ll also have smart contact lenses with eye tracking and a built-in augmented reality/virtual reality (AR/VR) display to let you receive information or send a command to a device with a blink.
Meanwhile, though we can’t send an angry email with an eyeroll (yet), motion technology that doesn’t require a wearable interface is becoming available. Google’s Pixel 4 smartphone can detect hand gestures up to 18 inches away and translate them into simple commands like answering calls, playing music, or shutting off the screen.
Eventually, ambient intelligence will enable us to knit complex commands and gestures into intricate, interconnected sets of actions. We can already do this in small ways with speech. Amazon’s Alexa Guard feature uses its Alexa assistant and Echo smart speakers to arm your home alarm system when you tell it you’re going out. While you’re gone, it adjusts the lights to make it look like someone is there and listens for the sound of a fire alarm, carbon monoxide alarm, or breaking glass. If it hears anything amiss, it alerts you, lets you listen in, and can call the police or fire department automatically or on request. And in the more likely event that nothing goes wrong while you’re away, it stands down when you tell it you’ve returned.
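The away-mode behavior described above is essentially a small state machine: arm on a spoken cue, listen for trouble while armed, stand down on return. A minimal sketch in Python, with event names and responses that are purely illustrative (this is not Alexa's actual API):

```python
# Toy state machine mirroring the away-mode flow described above.
# Event names and alert sounds are hypothetical stand-ins.

class GuardMode:
    ALERT_SOUNDS = {"glass_break", "smoke_alarm", "co_alarm"}

    def __init__(self):
        self.away = False
        self.alerts: list[str] = []

    def handle(self, event: str) -> None:
        if event == "owner_says_leaving":
            self.away = True            # arm: vary lights, start listening
        elif event == "owner_says_home":
            self.away = False           # stand down
        elif self.away and event in self.ALERT_SOUNDS:
            self.alerts.append(event)   # notify owner, offer to call for help

guard = GuardMode()
for event in ["owner_says_leaving", "glass_break", "owner_says_home"]:
    guard.handle(event)
print(guard.alerts)  # → ['glass_break']
```

The same sound heard while the owner is home is ignored, which is the point of the arming cue: context determines whether an event is an alert or just life happening.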
Combining that capability with a video doorbell and facial recognition would create entirely new ways to improve home safety, security, and convenience. Imagine you lose your keys on your way home from work. Instead of making you wait on the porch until you can reach a locksmith or another family member arrives to let you in, the house could recognize you, disarm the alarm system, and unlock the door. And then, to save you from having to go out again, it could tell you what you could make for dinner based on what’s already in the refrigerator and pantry, especially if you need to use up ingredients that are about to go bad.
In a business context, ambient intelligence could manage your to-do list throughout the day, brief you on who you’re meeting with next and about what, deliver the right data to you at the moment you need it, tell you where to find necessary equipment or how to have it delivered to you, and even alert security that you’re authorized to be in the building after hours.
Surroundings that are smarter than we are
Ambient intelligence can create an entire ecosystem of sensors and devices communicating among themselves. When appliances, switches, devices, and other infrastructure can all share data and intelligence, our homes, offices, factories, and any other place people occupy will be safer, more comfortable, and cheaper to run. These are just a few things that this collective intelligence might do:
- Adjust air quality (filtration, humidity, temperature, and more) predictively to adapt to outdoor conditions and personal preferences
- Change ambient lighting based on time of day and who is in the room
- Switch off electrical outlets, appliances, and other potential hazards depending on who is nearby
- Monitor the location, vital signs, and more of children, the ill, the elderly, and other vulnerable residents or visitors
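Behaviors like those listed above amount to a rule engine: sensors report the current state of the space, and rules map that state to actions. A minimal sketch, where the sensor names, thresholds, and action strings are all invented for illustration:

```python
# Illustrative rule engine for the ambient behaviors listed above.
# Sensor keys, thresholds, and action names are hypothetical.

def ambient_actions(state: dict) -> list[str]:
    actions = []
    # Predictive air quality: pre-filter when outdoor air is poor.
    if state.get("outdoor_aqi", 0) > 100:
        actions.append("increase_filtration")
    # Lighting keyed to occupancy and time of day.
    if state.get("occupants") and state.get("hour", 12) >= 20:
        actions.append("dim_lights_warm")
    # Cut power to potential hazards when a small child is nearby.
    if "child" in state.get("nearby", []):
        actions.append("disable_stove_outlet")
    return actions

print(ambient_actions({"outdoor_aqi": 150, "nearby": ["child"]}))
# → ['increase_filtration', 'disable_stove_outlet']
```

In a real deployment these rules would be learned or configured per household rather than hard-coded, but the shape is the same: shared state in, coordinated actions out.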
Eventually, ambient intelligence may enable buildings to adapt to the needs of the people within them. Researchers at the University of Colorado Boulder’s ATLAS Institute, Keio University, and the University of Tokyo recently created a set of modular building blocks that can be controlled by a computer to change the shape of a room. Dubbed LiftTiles, they can be programmed to make work surfaces spring up from floors and make walls shift into different configurations. MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has gone even further with SprayableTech, a system that lets users create room-sized interactive surfaces embedded with sensors and displays. SprayableTech uses conductive inks and stencils to airbrush sensor-embedded displays onto any surface, from exterior and interior walls to appliances and furniture. The surface connects to a processor that runs code for sensing and visual output, essentially making almost anything a smart device.
Combining these technologies with artificial intelligence (AI) could someday allow buildings to reshape themselves on a schedule, on command, or in real time in response to events—for example, by spontaneously generating a seat if someone’s posture and breathing indicate that they’re having trouble standing, then using sensors within that seat to determine whether the person needs medical attention or is just wearing uncomfortable shoes.
Why ambient intelligence isn’t everywhere yet
Gartner has predicted the imminent rise of what it calls “multiexperience,” or the evolution of computing beyond a single point of interaction to include multisensory and multi-touchpoint interfaces in wearables, advanced sensors, consumer appliances, and vehicles. Much of this technology already exists; connecting it into truly ambient experiences depends on having more bandwidth to gather and transmit data, as well as settled standards for how smart devices will interact. In addition, consumers will need assurances that these devices won’t be used against them.
First, realizing the promise of ambient intelligence is going to require massive amounts of bandwidth. Most devices and sensors are too small to pack enough processing power to do their own data-crunching. They’ll need ultra-fast connectivity so they can stream data to a remote server, receive the results, and act on them in near-real time. The new generations of Wi-Fi and cellular broadband will help solve this problem with much faster ambient connectivity, both indoors and outdoors, and the ability to support more devices with less power consumption.
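That round trip is the crux: a constrained sensor streams a reading, a remote service does the data-crunching, and the device acts on the result, all within a latency budget tight enough to feel seamless. A sketch of the loop, with the inference call simulated locally and a purely illustrative budget:

```python
# Sketch of the sensor-to-server round trip described above. The
# remote call is simulated; the 50 ms latency budget is illustrative.

import time

def remote_inference(reading: float) -> str:
    # Stands in for a cloud or edge model the tiny sensor can't run itself.
    return "fan_on" if reading > 30.0 else "fan_off"

def sensor_loop(readings: list[float], budget_ms: float = 50.0) -> list[str]:
    actions = []
    for reading in readings:
        start = time.perf_counter()
        action = remote_inference(reading)  # a network round trip in reality
        elapsed_ms = (time.perf_counter() - start) * 1000
        # An ambient response only feels seamless if the round trip fits
        # the budget; otherwise fall back to a safe local default.
        actions.append(action if elapsed_ms <= budget_ms else "fan_off")
    return actions

print(sensor_loop([25.0, 31.5]))  # → ['fan_off', 'fan_on']
```

The fallback branch is why bandwidth and latency matter so much: when the network can't keep up, the device must degrade to a safe default rather than act on stale results.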
Next, we need a resolution to the conflicting goals being pursued by Apple, Google, Microsoft, and Amazon, which provide most of the digital assistants in smartphones and smart speakers. On the one hand, they want to create a common standard for smart home devices that enables more seamless interactivity and interoperability. On the other hand, they want to lock in customers and capture as much of this emerging market as possible. The market turbulence created as these vendors jockey for position and try to out-innovate each other will slow widespread adoption.
Finally, privacy is an obvious issue. We must be confident that our deliberate and even incidental actions aren’t being used to manipulate us against our best interests or act outside of our desired parameters. When everything around us is constantly monitoring and measuring almost everything about us so it can respond autonomously, data about us will live on every device, and it will be able to travel anywhere. Anyone with access to it could use it to predict what we might do next, suggest actions likely to appeal to us, or even pretend to be us. As of right now, vendors don’t seem to be doing enough to mitigate these concerns, but as the public becomes increasingly educated and insistent about data privacy and transparency, businesses and governments will be under increasing pressure to consider the ramifications of systems that can sense, reason, act, and interact for us.
We may think that we’re already surrounded by information and connectivity, but we’re still in the most rudimentary stages of a world in which almost everything around us, from walls and furniture to the very air, will be capable of reacting to everything we do. In this future world, we won’t need to initiate contact with technology; it will initiate and maintain contact with us. And this all-encompassing web of awareness will be able to identify and satisfy our wants and needs before we express them—possibly even before we know that we have them.