Spatial computing is an umbrella term for digital experiences that reference objects and locations in the physical world. It includes augmented reality, mixed reality and virtual reality recreations of real-world places.
Although the term “spatial computing” has been around for decades, it gained traction following the 2023 announcement of the Apple Vision Pro, a headset with AR, VR and mixed reality capabilities that Apple describes as its first spatial computer.
Spatial Computing Definition
Spatial computing describes digital experiences that incorporate real-world locations and objects, typically taking the form of augmented reality, mixed reality or virtual reality that references real-world places.
Creating a device that effectively blends the real environment with virtual elements is a significant technical challenge. But the use cases — like a four-monitor workspace that packs away in a drawer or an immersive video call that doesn’t fully block out your surroundings — align more closely with how most people want to engage with their devices than traditional screens do.
How Does Spatial Computing Work?
Spatial computing devices use data captured by cameras and other sensors to map the user’s surrounding area. While a headset like the Apple Vision Pro uses 12 cameras, five sensors and a specialized chip to achieve its mixed reality effect, a smartphone might only use its front-facing camera or a combination of a camera and LiDAR for spatial mapping.
That data is then typically fed into an algorithm, or several algorithms, to identify the shape of the objects around the user. More advanced devices may also use image recognition to classify the objects in the field of view.
Once the device has a spatial map and understanding of the objects within a space, it can superimpose virtual objects that blend with the physical environment — like a preview of a new couch in your living room, or a virtual animal racing up and down your hallway.
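The steps above can be sketched as a toy pipeline: take depth samples, estimate the floor plane, then anchor a virtual object on it. This is a minimal illustration under assumed inputs, not any vendor's actual algorithm; the sample points and the median-based plane estimate are simplifications made for the sake of the example.

```python
# Toy spatial-mapping pipeline: estimate a floor plane from depth
# samples, then place a virtual object so it rests on that plane.
# All values are illustrative; a real device fuses many camera and
# sensor streams in real time.

def estimate_floor_height(points):
    """Estimate the floor height from (x, y, z) depth samples.

    A crude stand-in for plane fitting: the floor is assumed to be
    the lowest dense cluster of points, approximated here by the
    median y-value of the lowest third of samples.
    """
    ys = sorted(p[1] for p in points)
    lowest = ys[: max(1, len(ys) // 3)]
    return lowest[len(lowest) // 2]

def anchor_object(points, obj_height):
    """Place a virtual object so its base sits on the estimated floor."""
    floor_y = estimate_floor_height(points)
    return {"base_y": floor_y, "top_y": floor_y + obj_height}

# Simulated depth samples: floor points near y=0.0, a table near y=0.7.
samples = [(0.1, 0.01, 1.2), (0.3, 0.02, 1.5), (-0.2, 0.0, 0.9),
           (0.5, 0.72, 1.1), (0.4, 0.70, 1.0), (-0.1, 0.03, 1.3)]

# Anchor a 0.8-meter-tall virtual couch on the detected floor.
couch = anchor_object(samples, obj_height=0.8)
print(couch)
```

A production system would replace the median heuristic with something like RANSAC plane fitting over tens of thousands of points per frame, but the shape of the computation is the same: raw depth in, a geometric model of the room out.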
Spatial Computing Examples
Spatial computing can be used in a manufacturing environment to provide three-dimensional step-by-step instructions for a specific process, or allow a worker to call a supervisor who can see what they’re doing and provide guidance through sketches and diagrams. (An early demonstration of the Microsoft HoloLens 2 for The Verge placed the reporter in front of an ATV with a missing bolt, providing holographic instructions for fixing it.)
Spatial computing can also be used to display relevant data about the surrounding environment, like a machine’s temperature or how long an automated process will take to complete. These applications become particularly useful when presented through an AR or mixed reality headset, since this form factor frees up the wearer’s hands.
Spatial computing can help warehouse workers track to-do lists, navigate warehouses and identify items on a shelf. It can also be used for safety purposes, like alerting wearers if multiple people — or forklifts — are approaching the same corner.
AR and mixed reality headsets can provide checklists, instructions and other contextual information to healthcare professionals in a hands-free format. The technology can also be used to provide three-dimensional, interactive educational materials for medical students — a use case demonstrated in Apple’s 2023 WWDC Vision Pro announcement.
Home theaters have been a popular virtual reality application for a while, with streaming services like Amazon Prime Video and Netflix offering VR versions of their apps. Mixed reality headsets can take this concept to the next level, allowing the user to project a large screen in their living room while still being able to see what’s going on around them. A sports fan can also watch multiple full-screen games at once without making their living room look like a Buffalo Wild Wings.
Games like Pokémon Go and its predecessor, Ingress, use GPS and augmented reality technology to create gaming experiences that are specific to the user’s location and incorporate aspects of the surrounding environment. Given the recent advances in spatial computing, it seems likely that we’ll see a wider range of games that employ the technology in novel ways in years to come.
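The location side of such games boils down to a simple check: is the player's GPS fix close enough to a point of interest to interact with it? Here is a hedged sketch of that check using the haversine great-circle distance. The coordinates and the 40-meter radius are illustrative assumptions, not Niantic's actual values.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_range(player, poi, radius_m=40.0):
    """True if the player can interact with the point of interest."""
    return haversine_m(*player, *poi) <= radius_m

player = (42.3601, -71.0589)   # player's GPS fix (near Boston Common)
poi = (42.3603, -71.0590)      # a nearby in-game landmark

print(in_range(player, poi))   # prints True: roughly 24 meters away
```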
Fitness applications like Supernatural and Beat Saber have been popular among early VR adopters. We might also soon see mixed reality fitness applications that incorporate free weights and other equipment that’s difficult to handle without looking at it. High-end headsets can also track the wearer’s hand movements, eliminating the need for handheld controllers and making a wider range of exercises possible.
With headsets like the Apple Vision Pro, users can surround themselves with large virtual screens, simulating the multi-monitor battlestations favored by many software developers. Unlike a real four-monitor setup, however, a virtual workspace can be set up instantaneously in a coffee shop or at a dinner table and packed away in a drawer or carry-on case.
Spatial computing can be used for enhanced video calls that make it feel like the other person is in the room with you.
The History of Spatial Computing
The term “spatial computing” was coined in 2003 by MIT graduate researcher Simon Greenwold, whose early work in the field involved a combination of early augmented reality prototypes, input devices that allowed users to control computers through real-world actions, and a cheap 3D scanner.
In the years since, spatial computing has shown up in a number of ways.
In 2005, Google launched a mobile version of Google Maps. Although different from the AR and mixed reality applications that often come to mind when hearing the term, it is perhaps the foremost example of spatial computing: a continuously updated digital model of the physical world that tracks the user’s place within it.
In 2006, Israeli startup PrimeSense showed off a depth-sensing device that would allow users to control video games through gestures, without a physical controller. PrimeSense partnered with Microsoft to create Kinect, an Xbox 360 accessory that aimed to capitalize on the wave of motion-controlled gaming kicked off by the Nintendo Wii.
In 2013, Apple acquired PrimeSense, with co-founder and CTO Alexander Shpunt joining the company as a distinguished engineer. Shpunt would later file dozens of patents related to Apple’s emerging work in spatial computing.
In 2016, Niantic launched Pokémon Go, an augmented reality game that encouraged users to move around in the real world to catch and train virtual monsters. The game earned a world-record $206.5 million in its first month after launch and was downloaded 130 million times during that period.
In 2019, Microsoft announced a new version of the HoloLens, with an increased focus on enterprise applications. Key improvements included a larger field of view and brighter, higher-resolution images that make holograms “feel more real,” according to The Verge.
In 2023, Apple unveiled the Apple Vision Pro, marketing the device as its “first spatial computer.”
Greenwold, for one, is enthusiastic about the Apple Vision Pro in a way he hasn’t been about other headsets.
“They seem to be AR-, rather than VR-focused with it,” he told Built In, “which seems like a much better application if you can get it right.”
Differences Between AR, VR and Spatial Computing
Augmented reality and mixed reality, both of which superimpose digital elements onto the physical world in real time, are common use cases for spatial computing.
Virtual reality, which places the user in an immersive, simulated environment — normally through the use of a virtual reality headset — can employ spatial computing, but doesn’t always. A VR simulation of a real-world place, or of a virtual space that contains real-world objects, would be considered spatial computing under Greenwold’s definition. But an entirely fictional VR world would not.
The Future of Spatial Computing
Greenwold, whose first augmented reality application predates Pokémon Go by 15 years, said the field has progressed much more slowly than he and his colleagues at the MIT lab expected when he was working on his thesis 20 years ago.
But he believes we may be approaching an inflection point due to a number of recent technological advancements. For one, pixel density is important for any device that’s held up close or worn as a headset, as lower resolutions increase eye strain and break immersion. (The Vision Pro’s postage-stamp-sized displays have a resolution equivalent to a 4K display for each eye.) Improved sensors, more powerful chips and advances in machine learning have also been essential to improving graphical fidelity and making interactions between virtual elements, users and real-world objects feel more natural.
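A quick back-of-the-envelope calculation shows why pixel density hits headsets harder than handheld screens: the same pixel budget has to cover a far wider field of view. The metric that matters for near-eye displays is pixels per degree, and the resolution and field-of-view numbers below are assumed round figures for illustration, not official specifications.

```python
# Pixels per degree (PPD): how many display pixels span one degree
# of the viewer's field of view. Higher is sharper; roughly 60 PPD
# is often cited as the limit of typical human visual acuity.

def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
    return horizontal_pixels / horizontal_fov_deg

# A phone held at arm's length might span ~20 degrees with ~1170 px.
phone_ppd = pixels_per_degree(1170, 20)

# A headset stretches its pixels across a much wider field of view,
# so even a high-resolution panel yields far fewer pixels per degree.
headset_ppd = pixels_per_degree(3600, 100)

print(round(phone_ppd), round(headset_ppd))  # prints 58 36
```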
Greenwold also sees the rise of generative AI as a critical development for the field, as it can dramatically reduce the cost of animating virtual objects and environments, making new experiences cheaper and more ubiquitous.
He’s also optimistic that the direction Apple is taking with the Vision Pro might push the field forward in a way other devices haven’t — thanks to the decision to make a device that can do all the things people might want to do with a mixed reality headset, rather than make substantial sacrifices to bring the price point low enough for ordinary consumers.
“They’re not going to sell very many of these for $3,500, but it’s not going to be a flop because they’re not expecting to,” he said. “What they’re going to do is make something that’s amazing and inaccessible, but over time they will bring the cost down at some point to where people can get something they couldn’t have before.”
That said, he considers himself among the ordinary consumers who won’t shell out the $3,500.
“But at the right price point, I would buy one.”