Many of the projects that emerged from Google's X Labs seemed absolutely fantastic. Google Glass promised wearables that would layer technology over our view of the world, but its reality fell short of that promise. Another X Labs project, the self-driving car, did not disappoint: true to the fantastic promise of a driverless vehicle, these cars are a reality. This remarkable achievement depends on SLAM technology.
SLAM: simultaneous localization and mapping
SLAM is an acronym for Simultaneous Localization and Mapping, a technology whereby a robot or device can build a map of its environment and locate itself within that map in real time. This is not an easy task, and it currently sits at the forefront of technological research and design. A major hurdle to implementing SLAM is its chicken-and-egg problem, which pairs two interdependent challenges: to map an environment accurately, you must know your own position and orientation within it; yet that position can only be determined from a pre-existing map of the environment.
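The interdependence can be made concrete with a toy sketch. The code below is a hypothetical 1-D illustration (not any real SLAM implementation): a robot on a line alternates between correcting its own position using a landmark estimate, and correcting the landmark estimate using its position. The function name, the blending weight `alpha`, and all numbers are illustrative assumptions.

```python
# Toy 1-D SLAM loop (illustrative sketch, not a production algorithm):
# each step interleaves localization (where am I, given the map?) with
# mapping (where is the landmark, given where I am?).

def slam_step(pose_est, landmark_est, odometry, range_reading, alpha=0.5):
    """One SLAM iteration on a 1-D line.

    pose_est      -- current estimate of the robot's position
    landmark_est  -- current estimate of the landmark's position
    odometry      -- reported distance moved since the last step
    range_reading -- measured distance from robot to the landmark
    alpha         -- blending weight between prediction and measurement
    """
    # Localization: predict from odometry, then correct using the map.
    predicted_pose = pose_est + odometry
    pose_from_map = landmark_est - range_reading
    new_pose = (1 - alpha) * predicted_pose + alpha * pose_from_map

    # Mapping: refine the landmark position using the corrected pose.
    landmark_from_pose = new_pose + range_reading
    new_landmark = (1 - alpha) * landmark_est + alpha * landmark_from_pose
    return new_pose, new_landmark

# Usage: true landmark at 10.0; robot starts at 0 and moves 1.0 per step.
pose, landmark = 0.0, 9.0   # landmark estimate starts off by 1.0 metre
for step in range(1, 6):
    true_pose = float(step)
    pose, landmark = slam_step(pose, landmark, 1.0, 10.0 - true_pose)
```

After a few iterations, the pose and landmark estimates become mutually consistent: the estimated distance between them matches the measured range, even though both may retain a shared absolute offset. That residual offset reflects a well-known property of SLAM: without an absolute reference such as GPS, only the relative geometry is observable.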
How SLAM works
SLAM technology typically overcomes this chicken-and-egg problem by building an initial environmental map from GPS data. That map is then refined as the robot or device moves through the environment. The real difficulty lies in accuracy. Measurements must be taken continuously as the device moves through space, and the technology must account for the “noise” introduced both by the device's motion and by the inaccuracy of the measurement method itself. This makes SLAM largely a matter of measurement and mathematics.
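One standard way to handle such noise is to weight each new reading by how uncertain it is relative to the current estimate. The sketch below is a minimal 1-D Kalman-style update, offered as an illustration under assumed numbers (the 5 m GPS uncertainty, the 1 m sensor noise, and the readings are all hypothetical), not a description of any specific system.

```python
# Minimal 1-D sketch of refining a rough GPS-based estimate with noisy
# sensor readings, weighting each source by its variance (a simple
# Kalman-filter measurement update).

def fuse(estimate, est_var, measurement, meas_var):
    """Blend a position estimate with a noisy measurement.

    Returns the new estimate and its (reduced) variance.
    """
    gain = est_var / (est_var + meas_var)   # how much to trust the reading
    new_estimate = estimate + gain * (measurement - estimate)
    new_var = (1 - gain) * est_var
    return new_estimate, new_var

# Usage: GPS suggests 100.0 m with ~5 m uncertainty (variance 25);
# repeated sensor readings with ~1 m noise pull the estimate toward
# the true position of 103.0 m while shrinking the uncertainty.
estimate, var = 100.0, 25.0
for reading in [102.8, 103.3, 102.9, 103.1]:
    estimate, var = fuse(estimate, var, reading, 1.0)
```

Note how each fusion step both moves the estimate toward the measurements and reduces the variance, which is exactly the refinement-while-moving behavior described above.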
Measurement and mathematics
Google’s self-driving car is an example of measurement and mathematics in action. The vehicle takes its primary measurements with a roof-mounted LIDAR (light detection and ranging) unit that can build a three-dimensional map of its surroundings up to 10 times per second. This measurement frequency is critical for a vehicle moving at speed. The readings complement existing GPS maps, which Google maintains as part of its Google Maps service. They also generate an enormous amount of data, and turning that data into driving decisions is the job of statistics. The vehicle's software uses advanced statistical methods, including Monte Carlo models and Bayesian filters, to model the environment accurately.
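To give a flavor of the Monte Carlo approach, the toy sketch below shows particle-filter localization in one dimension. This is purely illustrative and in no way the vehicle's actual software: the landmark position, the road segment, the noise model, and every constant are assumptions made for the example.

```python
import math
import random

# Toy Monte Carlo localization (illustrative only): particles are
# position hypotheses, each weighted by how well a reading of the
# signed offset to a mapped landmark fits that hypothesis.

random.seed(0)
LANDMARK = 50.0   # landmark position from the prior map
TRUE_POS = 42.0   # the vehicle's actual (unknown to it) position

# Start with hypotheses spread uniformly over a 100 m road segment.
particles = [random.uniform(0.0, 100.0) for _ in range(1000)]

def likelihood(particle, reading, noise=2.0):
    """Gaussian likelihood of the reading under this hypothesis."""
    expected = LANDMARK - particle
    return math.exp(-((reading - expected) ** 2) / (2 * noise ** 2))

for _ in range(5):
    reading = LANDMARK - TRUE_POS        # noiseless here, for simplicity
    weights = [likelihood(p, reading) for p in particles]
    # Resample: hypotheses that explain the reading survive more often.
    particles = random.choices(particles, weights=weights, k=len(particles))
    # Small jitter keeps diversity in the surviving particle set.
    particles = [p + random.gauss(0.0, 0.5) for p in particles]

estimate = sum(particles) / len(particles)   # clusters near TRUE_POS
```

After a few weight-and-resample rounds, the surviving particles cluster around the true position. A Bayesian filter in a real vehicle does conceptually the same thing at far greater scale, fusing many sensors at once.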
Implications for augmented reality
Autonomous vehicles are the most obvious application of SLAM technology. A less obvious use, however, may be in wearable technology and augmented reality. While Google Glass can use GPS data to determine a user’s approximate location, a similar future device could use SLAM to build a far more detailed map of the user’s surroundings, including an understanding of what the user is looking at. Such a device could recognize when the user is looking at a landmark, storefront, or advertisement and use that information to generate an augmented reality overlay. Although these features may seem far off, an MIT project has already developed one of the first examples of a wearable device that uses SLAM technology.