Invented by Ivan Poupyrev, Patrick M. Amihood, Google LLC

The market for fine-motion virtual reality or augmented reality using radar has been steadily growing in recent years. This technology, which combines radar sensors with virtual or augmented reality systems, allows for a more immersive and interactive user experience. Fine-motion virtual reality or augmented reality using radar involves the use of radar sensors to track the movements of the user’s hands, fingers, or body. This tracking data is then used to manipulate virtual objects or interact with augmented reality elements in real time. The result is a more natural and intuitive user interface, where users can use their own movements to control and interact with the virtual or augmented world.

One of the key advantages of using radar for fine-motion tracking is its ability to accurately capture even the smallest movements. Unlike other tracking technologies, such as cameras or motion sensors, radar is not affected by lighting conditions or occlusions. This makes it ideal for applications where precise and reliable tracking is crucial, such as virtual surgery simulations, industrial training, or gaming.

The market for fine-motion virtual reality or augmented reality using radar is expected to grow significantly in the coming years. According to a report by MarketsandMarkets, the global market for virtual reality in healthcare alone is projected to reach $3.8 billion by 2025. This growth is driven by the increasing adoption of virtual reality in medical training and therapy, where fine-motion tracking is essential for realistic simulations and precise interactions.

In addition to healthcare, other industries such as gaming, automotive, and aerospace are also expected to drive demand for fine-motion virtual reality or augmented reality using radar. In gaming, for example, the ability to accurately track hand and finger movements can enhance the gameplay experience and enable more immersive interactions with virtual objects.
In the automotive industry, radar-based fine-motion tracking can be used for driver training or to develop more intuitive and safer human-machine interfaces. Several companies are already at the forefront of developing and commercializing fine-motion tracking for virtual and augmented reality. For example, Ultraleap, a UK-based company, has developed a hand-tracking technology, Leap Motion, that precisely tracks hand and finger movements. This technology has been integrated into various virtual reality headsets and is used in applications ranging from gaming to industrial training. Another company, Nod, has developed a gesture-control system that allows users to interact with augmented reality elements using simple hand gestures. This technology has applications in areas such as retail, where customers can use hand gestures to virtually try on clothes or accessories.

As the technology continues to advance and become more affordable, the market for fine-motion virtual reality or augmented reality using radar is expected to expand further. The potential applications are vast, ranging from healthcare and gaming to education and entertainment. With its ability to provide precise and reliable tracking, radar-based fine-motion virtual reality or augmented reality has the potential to revolutionize how we interact with virtual and augmented worlds.

The Google LLC invention works as follows

This document describes techniques for controlling fine-motion virtual reality or augmented reality using radar. These techniques track small movements and displacements, even at millimeter scale, so that user actions can provide control even when those actions are small, fast, or obscured by darkness or changing light. Unlike conventional optical or RF tracking techniques, they also allow real-time control at fine resolution.

Background for Fine-motion virtual reality or augmented reality using radar

Visual tracking is used in virtual reality and augmented reality to control the user’s VR or AR environment. Visual tracking uses optical or infrared cameras to track large body movements. These cameras are limited by their low spatial resolution and their sensitivity to lighting and darkness.

Some VR and AR systems rely on hand-held controllers. Because of their limited number of buttons and the orientation of their sensors, these controllers cannot provide the wide range of control often required in VR/AR environments. Hand-held controllers are also of little use in VR because they do not allow the user to see their own hand and body orientation within the VR environment.

Radio-frequency (RF) tracking, which follows a single point on a moving object, is a partial solution. These RF techniques cannot detect small movements without expensive, complex, or physically large radar systems, because the hardware limits the achievable resolution.


This summary introduces simplified concepts concerning fine-motion virtual reality or augmented reality control using radar. These concepts are further described in the Detailed Description. This summary is not intended to identify essential features of the claimed subject matter or to determine the scope of the claimed subject matter.


The techniques described in this document enable fine-motion VR/AR control using radar. They allow small movements and displacements, even at millimeter scale or below, to be tracked so that the user can control actions within the VR/AR environment.

Consider, for instance, a tracking system that uses IR or optical cameras for VR/AR environments. Many users prefer to control the VR/AR environment in mid-air, without a physical object; track pads, game controllers, mice, keyboards, and similar devices therefore interfere with the VR/AR experience. A real-world controller, no matter how simple, also reminds the user that the experience is simulated, and it requires the user to have the controller in hand at all. In an AR environment, many users do not want to carry around objects as controllers; they just want the system to work.

Partial solutions that avoid physical controllers rely on in-the-air gestures. Current techniques, however, recognize only large-body movements and offer little ability to control fine motion. They are also sensitive to background movement, lighting variations, occlusions, and differences in clothing or users. These partial solutions do not allow fine control. In the VR world, a user represented by such conventional systems cannot be shown with finger orientation or movement, or with clothing detail or movement.

Also consider, in addition to optical techniques, a conventional RF system for tracking objects and motions. These conventional RF systems are limited by the antenna-beam size and hardware bandwidth of a conventional radar system. Multiple antennas can provide slightly better resolution, but they add complexity and cost while also increasing the lag between an action being performed and its display or control in a game or VR/AR environment. Even with 12 antennas, the resolution is inferior to that achieved with a single antenna using the disclosed techniques.

Now consider techniques that overcome these limitations of conventional in-air motion-recognition and radar systems, illustrated by the two examples in FIG. 1. In the first, a user 102 is shown using fine-motion controls 104 to play a virtual reality game through a VR computing device. The user 102 is manipulating VR/AR controller 108, which is illustrated as the user 102 might see it when looking through the VR display 110 of the VR computing device. The user’s hands 112 are also depicted; they are shown to the user 102 virtually. The user 102 can see his or her finger turning a small, rotating wheel, and as the wheel is rotated, its appearance changes. The fine-motion VR/AR techniques show these appearance changes, and the finger’s movement, in real time.

In the second example, a user 114 is shown using AR computing glasses 118 to view an augmented reality environment. The user 114 is manipulating VR/AR controller 120, shown as it would appear to the user 114 when looking through the viewport 122 of the AR computing glasses 118. The viewport 122 does not show the user’s real hands 124. The user 114 can see his or her fingers tapping on a number pad; as a finger presses each number, the number’s color changes (shown partially) to indicate that the AR environment correctly received the selection. If the numbers represent a telephone number, the AR computing glasses can then initiate a call without the user having to touch anything.

This document now turns to a computing device in which fine-motion VR/AR using radar can be embodied. It follows with an example computing device, an example radar field and an occluded portion of a user’s hand, example techniques, example RF wave propagation, and finishes with an example computing system.

Example computing device

FIG. 2 illustrates an example implementation of the computing devices shown in FIG. 1 in greater detail. Computing device 202 can be any type of computing device suitable for implementing the various embodiments; smart glasses 202-1 and virtual-reality goggles 202-2 are the examples used here. These examples are for illustration only, and other suitable computing devices can be used without departing from the scope of the claimed subject matter, such as a laptop, smartphone, smartwatch, desktop, or netbook with an associated VR or AR screen, or a dedicated gaming console.

The computing device 202 comprises one or more computer processors 204 and computer-readable media 206. The processors 204 can execute applications 208 or an operating system (not shown), embodied as computer-readable instructions stored on the computer-readable media 206, to interface with or invoke some or all of the functionalities described, for example through VR/AR APIs 210 and user controls. The applications 208 can include games, virtual reality, augmented reality, or other applications; other programs could be used instead, for example to control media, browse the web, and so on.

The user control and VR/AR application programming interfaces 210 (APIs 210) provide programming access to various routines, functionality, and features incorporated into VR/AR radar system 212 (radar system 212). The APIs 210 can provide high-level access to the radar system 212, allowing a calling application to request notifications of identified events, query results, and so on. The APIs 210 can also provide low-level control of the radar system 212, allowing a program to configure the hardware directly or partially, for instance through programmatic input parameters that configure transmit signals or select VR/AR algorithms. These APIs enable programs, such as the applications 208, to incorporate the functionality provided by the radar system 212 into executable code. The applications 208, for example, can invoke the APIs 210 to request or register an event notification when a fine-motion control is detected, to enable or disable wireless gesture recognition on the computing device 202, and so forth. The APIs 210 may include or access low-level drivers that interact with the hardware implementations of the radar system 212. The APIs 210 may also be used to access algorithms on the radar system 212, for example to configure algorithms, to extract information (such as 3D tracking information, angular extent, reflectivity profiles from different aspects, or correlations between transforms/features from different channels), or to change the operating mode of the radar system 212.
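To make the two access levels concrete, here is a minimal sketch of what such an API layer might look like. All names (`RadarSystem`, `register`, `configure`) are illustrative inventions for this sketch, not identifiers from the patent:

```python
# Hypothetical sketch of the API layer described above: high-level event
# notifications plus low-level configuration. Names are illustrative only.

class RadarSystem:
    """Stand-in for radar system 212: dispatches named events to listeners."""

    def __init__(self):
        self._listeners = {}
        self.config = {"transmit_signal": "default", "algorithm": "fine-motion"}

    def register(self, event, callback):
        # High-level access: an application asks to be notified when an
        # identified event (e.g., detection of a fine-motion control) occurs.
        self._listeners.setdefault(event, []).append(callback)

    def configure(self, **params):
        # Low-level access: set transmit-signal or algorithm parameters.
        self.config.update(params)

    def _emit(self, event, payload):
        # Internal: deliver an event to every registered listener.
        for cb in self._listeners.get(event, []):
            cb(payload)


# An application (like applications 208) registering for a notification:
radar = RadarSystem()
events = []
radar.register("fine_motion_detected", events.append)
radar.configure(algorithm="millimeter-tracking")

# Simulate the radar system detecting a small finger movement:
radar._emit("fine_motion_detected", {"displacement_mm": 1.2})
print(events)  # → [{'displacement_mm': 1.2}]
```

The point of the sketch is the split the passage describes: applications never touch the hardware, they either subscribe to identified events or push configuration parameters down through the API.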

The radar system 212 is shown separate from the computer-readable media 206, though it could contain computer-readable instructions. The radar system 212 may be implemented as a chip integrated within the computing device 202, such as a System-on-Chip (SoC), one or more integrated circuits, a processor configured to access instructions embedded in memory or hardware, a printed circuit board with hardware components, or any combination of these. The radar system 212 comprises a radar-emitting element 214, one or more antennas 216, a digital signal processor 218, a machine-learning component 220, and a VR/AR library 222. The radar system 212, in conjunction with a fine-motion tracking module 224, a VR/AR control module 226, and/or a user-representation module 228 (all described below), can enable advanced VR/AR controls, even for millimeter-scale movements of small objects (e.g., fingers, lips, or tongue).

Generally, the radar-emitting element 214 is configured to provide a radar field. The radar field can be configured to reflect off at least a portion of a target object; in particular, it may be configured to penetrate fabric or other obstructions while reflecting from human tissue. These fabrics and obstructions can include wood, glass, plastic, cotton, wool, nylon, and similar fibers, with the field reflecting from human tissue, such as a person’s hand, beneath them. Radar fields can also be reflected by objects such as a stylus or a ring worn on a finger.

A radar field can be small, such as 1 millimeter to 15 centimeters; moderate, such as 10 centimeters to 1.5 meters; or moderately large, such as 0.5 to 8 meters (or even larger). These sizes are for discussion only; any other suitable range can be used. The modules 224 or 226, or the radar system 212, can receive and process reflections of the radar field to determine large-body gestures based on reflections from human tissue caused by arm or leg movements. As noted below, a single radar field or multiple fields can be used to determine both small and large motions, for example determining the position of a person in three dimensions and detecting large movements of an object or body. Combined with small movements, such as those of a finger, these can produce realistic VR representations or user control actions, for example moving an arm while rubbing two fingers together.

The antennas 216 receive RF signals. These antennas (or a single antenna) can receive various types of reflections, for example a radar signal representing a superposition of reflections from two or more points within the radar field provided by the radar-emitting element 214. Often, one of these points is optically occluded, for example by food, gloves, clothing, books, or electronic devices. One of the points may also be visually obscured: the lighting on it can be dark or otherwise difficult to capture optically, for example when the point’s lighting is darker than that of the other points or of the ambient lighting in the radar field. Consider, for instance, the user 102 in FIG. 1: if the user is in a darkened room, or if some fingers are shaded or obscured by other fingers, it may be difficult to capture them optically or with conventional RF techniques.
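The idea of a received signal being a superposition of point reflections can be illustrated with a toy model. This is our own sketch, not the patent's signal chain: we assume a 60 GHz carrier (a common millimeter-wave band) and model each point only by the phase delay its round-trip distance introduces:

```python
# Toy model of a received radar signal as a superposition of reflections
# from several points. Assumed, not from the patent: a 60 GHz carrier and
# a pure cosine reflection per point, delayed by its round-trip time.
import math

C = 3e8      # speed of light, m/s
FREQ = 60e9  # assumed 60 GHz millimeter-wave carrier

def reflection(distance_m, t):
    """Reflection from one point: the carrier delayed by the round trip."""
    delay = 2 * distance_m / C
    return math.cos(2 * math.pi * FREQ * (t - delay))

def received(t, points):
    """Antennas 216 see the sum (superposition) of all point reflections."""
    return sum(reflection(d, t) for d in points)

# Two scatter points on a hand, 2 mm apart — one could be optically
# occluded behind the other, yet both still contribute to the signal:
points = [0.300, 0.302]
sample = received(1e-9, points)
print(sample)
```

Because each millimeter of displacement shifts a point's phase, the combined signal changes measurably even for millimeter-scale motion, which is why the superposition carries information about occluded points that a camera would simply miss.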
