Invented by Christian Holz, Eyal Ofek, Andrew D. Wilson, Lung-Pan Cheng, Junrui Yang, Microsoft Technology Licensing LLC

The market for reality-guided virtual roaming is rapidly expanding as technology continues to advance. This innovative concept combines the immersive experience of virtual reality (VR) with real-world navigation, allowing users to explore new places without leaving the comfort of their homes.

Virtual reality has already made significant strides in various industries, including gaming, entertainment, and education. However, the idea of reality-guided virtual roaming takes VR to a whole new level. By integrating real-world data and mapping technology, users can now navigate through real locations and experience them as if they were physically present.

One of the key drivers behind the growth of this market is the increasing demand for virtual travel experiences. With the ongoing COVID-19 pandemic restricting international travel, people are looking for alternative ways to explore new destinations. Reality-guided virtual roaming offers a safe and accessible solution, allowing users to visit famous landmarks, historical sites, and natural wonders from the comfort of their homes.

Another factor contributing to the market’s growth is the advancements in VR technology. Headsets and devices are becoming more affordable and user-friendly, making them accessible to a wider audience. Additionally, the development of high-resolution displays and realistic graphics enhances the immersive experience, making virtual roaming feel more lifelike and engaging.

The market for reality-guided virtual roaming is not limited to individual consumers. Businesses in the tourism and hospitality industry are also recognizing the potential of this technology. Virtual tours of hotels, resorts, and vacation destinations can help attract potential guests and provide a preview of what they can expect. Travel agencies can offer virtual travel packages, allowing customers to explore multiple destinations before making their final booking decisions.

Furthermore, reality-guided virtual roaming has the potential to revolutionize the real estate industry. Prospective buyers can take virtual tours of properties, giving them a realistic sense of the space and layout. This technology also enables architects and designers to showcase their projects in a more interactive and immersive manner.

However, like any emerging market, reality-guided virtual roaming also faces challenges. One of the main obstacles is the need for accurate and up-to-date mapping data. To provide a seamless experience, the technology relies on precise location information and realistic representations of the environment. Therefore, partnerships with mapping and navigation companies will be crucial to ensure the accuracy and quality of the virtual roaming experience.

Privacy and security concerns are also important considerations. As users navigate through real-world locations, their personal data and information may be collected. It is essential for companies in this market to prioritize user privacy and implement robust security measures to protect sensitive data.

In conclusion, the market for reality-guided virtual roaming is poised for significant growth as technology continues to advance. The demand for virtual travel experiences, coupled with the increasing accessibility of VR technology, creates a fertile ground for innovation. With the potential to transform industries such as tourism, real estate, and entertainment, reality-guided virtual roaming offers a unique and immersive way to explore the world without leaving home.

The Microsoft Technology Licensing LLC invention works as follows

In various embodiments, computerized systems and methods are provided for dynamically updating an immersive virtual environment using tracked data from the physical environment. Sensor data is received by a computing device coupled to a head-mounted display (HMD). The computing device generates a virtual scene based on the received sensor data, and the virtual scene can include at least part of a virtual route that corresponds to at least part of a navigable path determined from the received sensor data. Based on additional sensor data, the computing device may modify the virtual scene to include a virtual obstacle that corresponds to a physical object detected by the sensors. The modified virtual scene is displayed to the user, allowing them to safely navigate the physical environment while remaining fully immersed in the virtual world.
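
As a rough mental model of that claim language, the following Python sketch shows the data flow: sensor data in, a routed virtual scene out, and virtual obstacles spliced in as physical objects are detected. All of the names (SensorData, VirtualScene, and so on) are hypothetical illustrations, not the patent's own implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalObject:
    position: tuple[float, float]  # (x, z) location on the floor plane
    is_moving: bool

@dataclass
class SensorData:
    navigable_path: list[tuple[float, float]]  # walkable waypoints
    detected_objects: list[PhysicalObject]

@dataclass
class VirtualScene:
    route: list[tuple[float, float]] = field(default_factory=list)
    obstacle_positions: list[tuple[float, float]] = field(default_factory=list)

def generate_scene(data: SensorData) -> VirtualScene:
    # The virtual route mirrors (at least part of) the physically navigable path.
    return VirtualScene(route=list(data.navigable_path))

def update_scene(scene: VirtualScene, data: SensorData) -> VirtualScene:
    # Each detected physical object gets a co-located virtual obstacle.
    for obj in data.detected_objects:
        if obj.position not in scene.obstacle_positions:
            scene.obstacle_positions.append(obj.position)
    return scene
```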

Background for Reality-guided virtual roaming

Virtual reality technology uses specialized computer hardware and software to create virtual environments that are perceptually real and immersive, which users can explore and interact with. Virtual reality technologies allow users to enter computer-generated virtual environments where they can perceive and interact with virtual objects. Virtual environments and virtual objects may be present in a user’s perception of a virtual world, but they are not usually present in the user’s immediate physical environment. The same is true in reverse: objects in a user’s immediate physical environment are not typically present in the perceived virtual world.

Virtual environments and virtual objects are rendered graphically for stereoscopic display and can be viewed by users wearing fully immersive virtual reality gear, such as head-mounted displays. By virtue of its fully immersive nature, virtual reality technology restricts the user’s ability to view the physically surrounding environment, in other words, the user’s real-world surroundings. There is thus a distinct disconnect between the user’s actual environment and the fully immersive virtual environment that the user perceives within it.

Embodiments described herein provide systems and techniques that dynamically render and update a fully immersive virtual environment in order to guide safe real-world roaming. A computing device connected to a head-mounted display (HMD) receives sensor data from multiple sensors. The sensors, in essence, track the physical environment surrounding the HMD and the obstacles within it, generating sensor data that corresponds to physical objects or obstructions present in the physical environment. The computing device processes the sensor data in real time, allowing it to dynamically render a fully immersive virtual environment that is at least partially based on the tracked physical environment. The virtual environment is adjusted dynamically to influence the user’s course and avoid static and moving obstacles detected in the real world, and it can be updated in real time to prevent collisions by redirecting the user’s path. This allows a person wearing the HMD to safely navigate (e.g., walk, roam) a real-world setting while remaining completely immersed in (i.e., perceiving only) the virtual world.
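
One plausible way to picture this real-time processing is a fixed-budget per-frame loop: poll the sensors, adapt the scene, render to the HMD. The sketch below assumes hypothetical hmd, sensors, and renderer interfaces and reuses the generate_scene/update_scene helpers from the earlier sketch; the patent does not prescribe this particular structure.

```python
import time

def roaming_loop(hmd, sensors, renderer, hz: float = 90.0) -> None:
    """Per-frame loop: poll sensors, adapt the scene, render to the HMD."""
    scene = generate_scene(sensors.read())  # initial scene from first reading
    frame_budget = 1.0 / hz
    while hmd.is_worn():
        start = time.monotonic()
        data = sensors.read()               # fresh tracking + environmental data
        scene = update_scene(scene, data)   # insert/move virtual obstacles
        renderer.draw_stereo(scene, hmd.pose())  # fully immersive stereo render
        # Sleep off any leftover frame budget so updates stay "on-the-fly".
        time.sleep(max(0.0, frame_budget - (time.monotonic() - start)))
```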

This summary is intended to introduce a number of concepts that are further explained in the detailed description. It does not aim to identify the key features or essential elements of the claimed subject matter, nor is it meant to be used alone to determine the scope of the claimed subject matter.

Immersive technologies are perceptual and interactive technologies that blur the line between the real world and the virtual world. Perceptual technologies trick a user’s brain into accepting digital information in simulated space as real. Interactive technologies, in turn, recognize the user’s outputs (e.g., speech, gestures, and movements) in physical space and respond to them in virtual space. Combined, perceptual and interactive technologies can give users the illusion of an immersive virtual world, or “virtual environment”: the sense that the virtual world in which they exist is just as real as the physical world.

Virtual reality (VR), a fully immersive technology, lets users see only the rendered virtual world and the virtual objects within it, as though the visual information they perceive were their current reality. When immersed in a virtual world (typically while wearing an HMD), the user becomes visually detached from the real world. While users can still physically move around the real world, they can only see the displayed virtual world. This disconnect between the perceived virtual world and the hidden physical world is a major drawback: it limits the potential of virtual reality experiences and poses a danger to users, who could easily collide with physical objects without being aware of them.

Many efforts have been undertaken to overcome the disadvantages of this sensory disconnect. To achieve a smooth user experience, conventional VR systems rely on large, empty spaces, for example an empty room or warehouse. Some conventional VR systems use optical scanners to scan a static space in order to create a model from which a virtual environment can be reconstructed. These systems, however, require that the environment remain unchanged from one use to the next. Such implementations are impractical because they allow no portability into unfamiliar environments and do not account for the dynamic nature of real life. Conventional VR systems therefore do not protect users from collisions with physical obstacles when the users are fully immersed in virtual worlds while physically moving in dynamic or new environments.

Several pairs of terms are used interchangeably throughout this disclosure. The terms “real-world” and “physical” both refer to non-virtual, tangible environments or objects. The terms “on-the-fly” and “real-time” refer to responsive behavior, such as performing an operation upon receiving data or a message (e.g., from a sensor); while responsive behaviors can be limited by response time or speed, in most situations the responsive behavior is performed substantially instantly (e.g., in less than one second). The terms “render”, “generate”, and “create” refer to the digital creation of a virtual object or virtual environment, such as one that can be provided for display on an HMD. The terms “object” and “obstruction” both correspond to perceivable “things” that can exist in both virtual and physical environments: a wall, a person, an animal, furniture, a plant, or anything tangible that can potentially obstruct a user’s movement is considered an object or obstruction. In some cases, sensor data can detect objects or obstructions as anything taller than a reference height. A movement (e.g., traversal, walking, roaming) in the physical environment can be perceived as a movement in the virtual environment; as a user moves (e.g., takes a step), a corresponding change in location occurs in the virtual environment. In some cases, the dimensions (e.g., width, height, length, relative distance) of an object in the physical environment are reflected in a corresponding virtual object within the virtual world.

At the highest level, embodiments of this disclosure generally provide systems and methods that dynamically render and update a fully immersive virtual environment, or “scene”, in real time to guide users safely through an unfamiliar or dynamic (e.g., not pre-scanned) environment. Other embodiments can track a user’s physical environment to enable on-the-fly scene adaptations that prevent collisions. In some embodiments, a computing device connected to an HMD receives sensor data from a plurality of sensors that track, among other things, the position, orientation, and physical environment of the HMD. Depending on the aspect, certain sensors generate tracking data (e.g., orientation, position, and movement) for the HMD based on detected movements, while other sensors produce environmental data (e.g., depth frames) by detecting physical objects and obstructions in the physical environment. On the fly, the computing device processes the tracking data and environmental sensor data to render a responsive, fully immersive virtual environment. As the HMD moves about the physical environment, the computing device updates the rendered virtual environment, emulating the user’s movements in the virtual world using newly received tracking and environmental data. The virtual environment adapts dynamically to the user’s travel path, rendering or moving virtual objects within the virtual world that correspond to physical objects in the real world. The virtual environment is constantly updated to prevent collisions between the user and physical objects. This allows an HMD user to safely navigate (e.g., walk, roam) the real world while remaining completely immersed in (i.e., perceiving only) the virtual environment.
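
The environmental half of that pipeline, turning depth frames into detected obstructions, can be illustrated with the disclosure's own heuristic from the terminology above: anything taller than a reference height counts as an obstruction. A minimal sketch, assuming the depth frames have already been projected into a top-down height map (that projection step is omitted here):

```python
import numpy as np

def obstacle_mask(height_map: np.ndarray,
                  reference_height_m: float = 0.15) -> np.ndarray:
    """Flag grid cells whose height above the floor exceeds a reference height.

    `height_map` is assumed to be a top-down grid of height-above-floor
    values (in meters) reconstructed from the HMD-mounted depth frames.
    The threshold value is an illustrative assumption.
    """
    return height_map > reference_height_m

# Example: a 1 m-tall object in an otherwise flat 3x3 area.
cells = np.zeros((3, 3))
cells[1, 1] = 1.0
print(obstacle_mask(cells))  # only the center cell is flagged as an obstruction
```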

The following is a brief overview of various embodiments described herein, with reference to FIGS. 1-5. Referring first to FIG. 1, in accordance with certain embodiments of this disclosure, a first illustration depicts a user wearing an HMD while roaming a physical environment. The HMD gives the user a fully immersive virtual reality experience, meaning that the HMD displays only the virtual environment rendered stereoscopically by the computing device. A second illustration, 110A, shows an example of what the user might see if they were not wearing the HMD and not fully immersed in the virtual environment: a variety of static objects, such as couches, chairs, and walls, along with dynamic objects, like people or push carts.

In contrast to illustration 110A, illustration 110B depicts a visual image demonstrating what a user might perceive while wearing the HMD, immersed in the virtual environment. According to various embodiments of the present invention, the computing device may employ sensors that continuously track, among other things, the HMD’s position, orientation, and location. Using sensor data from these sensors, the computing device can render and update a virtual world that includes virtual objects corresponding to physical objects detected in the physical world. Illustration 110B shows a virtual world (e.g., a dungeon) with various virtual objects therein (e.g., knights, spikes, and walls). Based on received sensor data corresponding to the physical environment, the computing device can determine that physical objects are present (e.g., rise above the ground) and obstruct a physically navigable path. In certain aspects, the computing device can determine that some physical objects are moving (e.g., people) while others are not (e.g., couches, chairs, or plants). The computing device may select a moving virtual object (e.g., a knight) for a physical object determined to be moving, and a non-moving virtual object (e.g., spikes on the ground) for an object determined to be non-moving. Other techniques, such as visual classification, feature detection (e.g., height, speed, other characteristics), or the like, can also be used to select a moving or non-moving virtual object.
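
A minimal sketch of that moving-versus-static selection follows; the speed threshold and the virtual object names are illustrative assumptions, since the patent names no specific cutoff.

```python
import math

MOVING_SPEED_M_S = 0.2  # hypothetical cutoff between "moving" and "static"

def estimate_speed(prev_pos: tuple[float, float],
                   cur_pos: tuple[float, float],
                   dt_s: float) -> float:
    """Speed of a tracked object from two successive floor-plane positions."""
    return math.dist(prev_pos, cur_pos) / dt_s

def pick_stand_in(speed_m_s: float) -> str:
    """Map a physical object to a theme-consistent virtual stand-in:
    moving obstacles (e.g., people) become moving virtual objects,
    while static ones (e.g., furniture) become static hazards."""
    return "knight" if speed_m_s > MOVING_SPEED_M_S else "floor_spikes"
```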

With reference now to FIG. 2, various illustrations are provided to depict exemplary implementations of rendering and updating a virtual environment and/or virtual objects corresponding to a physical environment and/or physical objects on-the-fly, in accordance with some embodiments of the present disclosure. Illustration 210A depicts an exemplary visual image of what the user may perceive from his/her field of view if not wearing the HMD and not fully immersed in the virtual environment. In contrast to illustration 210A, illustration 210B depicts an exemplary visual image of what the user may perceive from the same field of view while wearing the HMD and fully immersed in the virtual environment. As can be seen in the virtual environment presented in illustration 210B, the virtual environment is bounded by a first virtual wall that corresponds to a physical obstruction (e.g., wall, structure) in the physical environment. The virtual environment is also bounded by a second virtual wall to the left and a third virtual wall to the right. While illustration 210A does not depict physical obstructions to the left or right, it is contemplated that the physical areas beyond the second and third virtual walls could be physically navigable or have other physical objects therein. The second and third virtual walls depicted in 210B are rendered by the computing device to effectively postpone a rendering and updating of the virtual environment for those corresponding physical areas until more sensor data corresponding to such areas is received (e.g., the user is closer to these physical areas). To this end, in some embodiments, the computing device can generate virtual rooms (e.g., virtual wall-bounded areas) or virtual corridors (e.g., virtual hallways) on-the-fly to effectively limit the amount of rendering and/or updating of a virtual environment based on the physical environment at any given time, thereby providing a smoother and more efficient physical-to-virtual experience.
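
One way to realize this deferred rendering is to close off every unscanned side of the current area with a provisional virtual wall. The sketch below is an assumption about structure, not the patent's method:

```python
from dataclasses import dataclass

@dataclass
class Wall:
    side: str          # "north" | "south" | "east" | "west"
    provisional: bool  # True if placed only to defer rendering of unscanned space

def bound_virtual_room(sensed_walls: dict[str, Wall]) -> dict[str, Wall]:
    """Ensure every side of the current virtual room is bounded.

    Sides with a sensed physical wall keep it; sides the sensors have not
    yet resolved get a provisional virtual wall, postponing rendering of
    the space beyond until more sensor data arrives.
    """
    return {
        side: sensed_walls.get(side, Wall(side, provisional=True))
        for side in ("north", "south", "east", "west")
    }
```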

In some cases, sensor data can indicate that the physical space (as shown in illustration 210A) is sufficient to render a virtual room (as shown in illustration 210B). In other cases, however, the amount of available space cannot be determined. If the amount of space beyond a certain physical area cannot be determined (e.g., it is outside the sensors’ view, or insufficient data has been collected), the computing device may use one or more virtual navigation techniques to direct the user closer to that physical location so that additional sensor data can be collected. A virtual door, or another virtual obstruction, can be placed on a wall or in a corridor to create a realistic obstacle that prevents users from seeing what lies ahead. Virtual obstructions can be rendered to correspond to physical boundaries that are detected, programmatically defined, or yet to be reconstructed, based on limited sensor information or processing state.

Illustration 220A, for example, shows a visual representation of what a user might see from their field of vision when not wearing the HMD and not immersed in the virtual environment. Illustration 220B depicts the visual image the user might perceive while wearing the HMD, fully immersed in the virtual world. In this example, the computing device can determine from received sensor data that physical walls are present (shown in illustration 220A) because they are close to the sensors, but it cannot determine what lies ahead due to sensor resolution, lighting conditions, or other limitations. The computing device can then render virtual walls that closely match the actual walls in illustration 220A, along with a virtual door that directs the user down the hallway until more sensor data is collected. The virtual door may be opened automatically when the user reaches the physical location corresponding to the virtual door, or it can be opened upon detecting a user input, such as a sensed interaction. The computing device can render a new virtual environment or virtual object in response to new sensor data collected at that physical location.
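
The door-gating logic described here can be sketched as a small per-frame decision, with the "unknown" state doing the work of drawing the user closer; the tri-state encoding and the open radius are illustrative assumptions:

```python
import math

def update_virtual_door(door_pos: tuple[float, float],
                        user_pos: tuple[float, float],
                        beyond_navigable: bool | None,
                        open_radius_m: float = 1.0) -> bool:
    """Decide whether a virtual door should open this frame.

    `beyond_navigable` is None while the sensors cannot yet resolve the
    space behind the door; the closed door then serves to lure the user
    closer so more sensor data can be collected. The door opens only once
    the space is confirmed navigable and the user is within reach.
    """
    near = math.dist(door_pos, user_pos) < open_radius_m
    return beyond_navigable is True and near
```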

In some cases, sensor data may indicate that the user is unable to leave a certain physical space. Illustration 230A, for example, depicts a visual image that a user might perceive when not wearing the HMD and not immersed in the simulated environment. Illustration 230B, in contrast, depicts the visual image the user might perceive while wearing the HMD and fully immersed. In this example, the computing device can determine where physical walls exist within a room and which objects block the user’s path. The computing device can then render virtual walls that closely match the physical walls or physical objects (as shown in illustration 230B), creating a virtual room that the user cannot escape.

In some aspects, a computing device can limit the dimensions of a virtual room or virtual corridor. As briefly described above, in some aspects a virtual room or virtual corridor may be generated when received sensor data indicates that one or more walls or impassable obstructions surround the user within the physical environment. In other aspects, a virtual room or virtual corridor can be generated based on the above together with predefined virtual dimensions that limit one or more dimensions of the generated virtual room or corridor. The predefined virtual dimensions, in essence, cap the dimensions of any generated virtual room even if sensor data indicates that the physical dimensions beyond the corresponding virtual dimensions are at least partially unobstructed. Predefined virtual dimensions can facilitate many benefits, including computing efficiency and user experience.
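
A sketch of how predefined virtual dimensions might cap a generated room; the maximum sizes are arbitrary illustrative values, not from the patent:

```python
MAX_ROOM_WIDTH_M = 8.0   # hypothetical predefined virtual dimensions
MAX_ROOM_DEPTH_M = 8.0

def clamp_room(sensed_width_m: float, sensed_depth_m: float) -> tuple[float, float]:
    """Cap a generated virtual room at predefined dimensions, even when the
    sensors report a larger unobstructed physical area. This bounds the
    per-frame rendering and updating work at any given time."""
    return (min(sensed_width_m, MAX_ROOM_WIDTH_M),
            min(sensed_depth_m, MAX_ROOM_DEPTH_M))
```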

Turning now to FIG. 3, various illustrations demonstrate exemplary techniques for reducing lag and maintaining a realistic virtual experience while rendering virtual objects and virtual environments on the fly, in accordance with certain embodiments of this disclosure. Illustration 310 shows an example of what a user might see (e.g., a virtual door) while wearing the HMD, fully immersed in a virtual environment. The virtual door may be rendered to indicate that there is navigable space beyond the user’s current location. In various embodiments, the virtual door can be rendered to direct the user toward the physical location where the virtual door appears. In some embodiments, the virtual door may lead to a virtual room or space that the user has not yet discovered. The virtual door may be opened in some cases if, using sensor data, the computing device determines that physically navigable space lies beyond the door’s location. In other aspects, the computing device may determine from sensor data that the area is not physically navigable and prevent the virtual door from opening.

Assuming the virtual door is opened, opening it can reveal other virtual rooms or corridors, among other things. Illustration 320A shows an example of what a user might see when not wearing the HMD and not immersed in the virtual environment: a physically navigable space with a pedestrian in front of the user. In certain embodiments, the computing device can determine, based on sensor data, that a physical object (e.g., the pedestrian in the physical environment) has been detected (e.g., standing in front of the user). The computing device can then render a virtual object corresponding to the detected object. Illustration 320B shows an example of what the user might see from the same field of vision while wearing the HMD and immersed in the virtual environment: a room with a consistent theme (e.g., a dungeon) and a navigable route partially blocked by a thematically consistent virtual obstruction. The virtual location of the virtual obstruction corresponds to the physical location of the physical object (e.g., the pedestrian in illustration 320A).

In some embodiments, a computing device can determine, based on received sensor data, one or more characteristics of a detected object, including, for example, its physical movement (e.g., relative to the sensor(s)), the velocity of that movement, one or more of its physical dimensions, or its relative physical distance from the sensor. Based on one or more of these characteristics, the computing device can choose a virtual object from a plurality of virtual objects to insert into the virtual environment. To maintain thematic consistency and avoid the awkward or sudden appearance of certain virtual items, a thematically consistent virtual object can be inserted into the virtual environment (e.g., spikes rising from the virtual floor in illustration 320B). In this example, a rising object (e.g., virtual spikes) is thematically consistent and preserves a realistic virtual experience even when the physical object has only just been detected (e.g., the object stepped into the user’s navigational area, or the user turned toward it). The computing device in this case selects the virtual spikes as the object’s representation because the object was determined to be within a certain distance of the HMD and/or sensor.
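
Distance can thus dominate the selection when a sudden appearance must stay plausible. A sketch of such a characteristics-to-object mapping, with illustrative thresholds and object names:

```python
def select_obstacle(distance_m: float, speed_m_s: float) -> str:
    """Pick a dungeon-themed virtual obstacle from object characteristics.

    A nearby, just-detected object needs a representation whose sudden
    appearance stays plausible (spikes rising from the floor); a more
    distant object can instead be introduced as an approaching character
    or a static prop. Thresholds and names are illustrative only.
    """
    if distance_m < 1.5:                 # object already close to the HMD/sensor
        return "rising_floor_spikes"
    return "knight" if speed_m_s > 0.2 else "stone_pillar"
```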

Illustrations 330A and 330B provide further examples of visual imagery, showing a physical environment with physical objects and a virtual environment with virtual objects, as perceived by a user in his/her respective field of view in the non-immersed and fully immersed states. Illustration 330B shows a virtual “lava land”, which can include a virtual representation of a navigable physical path as well as virtual representations of detected obstructions, such as walls or other obstacles. Illustrations 340A and 340B similarly depict a physical environment, such as an office hallway (illustration 340A), and a virtual environment with a “Tron” effect (illustration 340B), which in essence presents the determined navigable path or area along with any detected obstructions.

Referring now to FIG. 4, according to some embodiments of the disclosure, illustration 410A shows a user wearing an HMD coupled to a computing device while roaming a physical outdoor environment (not bounded by walls). As in the embodiments described with reference to the earlier figures, the HMD gives the user a fully immersive virtual reality experience, meaning that the HMD displays only the virtual environment stereoscopically rendered by the computing device. In illustration 410A, the physical environment in which the user roams is outdoors, in other words, an area not bounded by the physical walls of a building. The illustration shows a variety of exemplary static objects (e.g., buildings, tables, benches) as well as exemplary dynamic objects (e.g., people walking). Illustration 410B, in contrast, depicts a visual example of what the user of illustration 410A might perceive while wearing the HMD and fully immersed.

Click here to view the patent on Google Patents.