Invented by Nova Spivack and Matthew Hoerl of Magical Technologies LLC

User Preferences

In recent years, there has been a significant rise in the use of digital assistants across technological environments. With the advent of augmented reality (AR), these digital assistants have found a new medium to enhance user experiences and provide personalized assistance. The market for systems, methods, and apparatuses of digital assistants in an augmented reality environment is growing rapidly, fueled by increasing demand for seamless integration of the virtual and physical worlds.

Augmented reality refers to technology that overlays digital information onto the real world, creating an immersive and interactive experience. Digital assistants, on the other hand, are AI-powered software programs designed to perform tasks and provide information to users. Combining these two technologies opens up a world of possibilities, allowing users to interact with virtual objects and receive real-time assistance in their physical surroundings.

One of the key drivers of this market is the local determination of user preferences. With the help of augmented reality, digital assistants can gather real-time data about user behavior, preferences, and needs. By analyzing this data, they can provide personalized recommendations, suggestions, and assistance tailored to individual users. For example, a digital assistant in an AR environment can suggest nearby restaurants based on a user’s dietary preferences, previous dining experiences, and current location.

The local determination of user preferences also allows digital assistants to adapt to changing environments and provide context-aware assistance. For instance, a digital assistant can recognize a user’s location in a shopping mall and provide information about nearby stores, ongoing sales, and product recommendations based on the user’s previous purchases or browsing history. This level of personalization enhances the user experience and increases engagement with the digital assistant.
Furthermore, the market for systems, methods, and apparatuses of digital assistants in an augmented reality environment is not limited to consumer applications. Industries such as healthcare, manufacturing, and logistics are also exploring the potential of this technology. In healthcare, for instance, digital assistants in an AR environment can assist doctors during surgeries by providing real-time patient data, medical records, and procedural guidance. In manufacturing and logistics, digital assistants can streamline operations by providing workers with real-time instructions, inventory management, and quality control assistance.

However, the market for these systems, methods, and apparatuses is not without its challenges. Privacy and security concerns are paramount, as digital assistants in an AR environment gather and analyze vast amounts of user data. It is crucial for companies to implement robust privacy measures and ensure data protection to gain user trust and comply with regulations.

In conclusion, the market for systems, methods, and apparatuses of digital assistants in an augmented reality environment and the local determination of user preferences is witnessing rapid growth. The ability of digital assistants to provide personalized, real-time assistance based on user behavior and preferences is changing the way we interact with technology. As this market continues to evolve, it is essential for companies to prioritize privacy and security to build user trust and ensure the long-term success of these innovative technologies.

The Magical Technologies LLC invention works as follows

Systems and methods of digital assistants in an augmented reality environment and local determination of virtual object placement are disclosed. Apparatuses for single- or multi-directional lenses as portals to a digital component of an augmented reality environment are also disclosed. In one embodiment, a method of presenting a digital assistant, which can be implemented on a computer system, is disclosed. The digital assistant, in response to receiving a request, can perform an operation within the augmented reality environment so that the user may interact with the augmented reality environment through the user interface. The method may also include training the digital assistant to learn from activities that occur in the augmented reality environment, and/or from behaviors of users drawn from their actions or interactions with the real-world environment.

Background for Systems, Methods and Apparatuses of Digital Assistants in an Augmented Reality Environment and Local Determination of Virtual Object Placement, and Apparatuses of Single- or Multi-Directional Lenses as Portals Between the Physical World and the Digital Component of the Augmented Reality Environment

The advent and proliferation of the World Wide Web in the 1990s changed the way people conduct business, communicate information, and interact or relate with one another. A new wave of technologies is now poised to transform our digitally immersed lives.

The following description and drawings are illustrative and should not be taken as restrictive. Numerous specific details are provided to give a thorough understanding of the disclosure. In some instances, however, well-known or conventional details are omitted to avoid obscuring the description. A reference to one embodiment in the present disclosure is not necessarily a reference to the same embodiment as another such reference, and such references mean at least one embodiment.

Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Some features may be exhibited by certain embodiments and not by others. Similarly, various requirements may apply to some embodiments but not to others.

The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure and in the specific context in which each term is used. Certain terms are discussed below or elsewhere in the specification to provide additional guidance to the practitioner regarding the description of the disclosure. For convenience, some terms may be highlighted, for example using italics or quotation marks. The use of highlighting has no influence on the scope or meaning of a term, and the same thing can be said in more than one way.

Consequently, alternative language and synonyms may be used for any of the terms discussed in this document, and no special significance should be placed on whether or not a particular term is discussed or elaborated. Synonyms are provided for some terms; the recital of one or more synonyms does not exclude the use of others. Examples used in this specification merely illustrate the meaning and scope of the disclosure, and the disclosure is not limited to the various embodiments described herein.

Below are examples of instruments, apparatuses, methods, and results corresponding to embodiments of this disclosure. Note that the titles and subtitles in the examples are for the reader’s convenience only and do not limit the scope of the disclosure. Unless otherwise defined, all technical and scientific terms used in this document have the meaning commonly understood by a person of ordinary skill in the art. In case of conflict, the present document, including its definitions, will control.

Embodiments of the present disclosure include, for example, systems, methods, and apparatuses that enable digital assistants in an augmented reality environment and the local determination of virtual object placement. Other embodiments include single- or multi-directional lenses as portals that connect a physical environment to a digital component of an augmented reality environment (e.g., as hosted by the host server 100 in the example shown in FIG. 1).

VR/AR that is always on

One embodiment of the present disclosure further includes a system (e.g., any one or more of client device 102 of FIG. 1, client device 402 of FIG. 4A, server 100 of FIG. 1, or server 300 of FIG. 3A) with an AR/VR hardware module/software agent having sensors (e.g., camera, microphone, and other sensors) that are always on. The system is always aware of what is going on, unless the user presses a button to stop recording/stop listening, or to continue recording but temporarily stop processing the data. While the button is held, the system pauses.

To save resources, the system can save and record a predefined window of recording (e.g., several milliseconds or seconds, 10-30 seconds, 30-60 seconds, a few minutes, etc.) at each location, flushing any older data. The system may charge a fee for older feeds and data, and can offer or implement tiered storage services for such data, either for users (e.g., what is happening from a user’s point of view) or for fixed locations in the real world. The more data or historical data is stored, the more refined or advanced the virtual or augmented experiences for the user or location.
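The rolling-capture behavior described above, where only a recent window of sensor data is kept and older data is flushed, can be sketched as a simple retention buffer. The class and parameter names below are illustrative assumptions, not part of the patent.

```python
import time
from collections import deque

class RollingCaptureBuffer:
    """Keeps only the most recent `window_seconds` of sensor frames,
    flushing older data to conserve storage (hypothetical sketch)."""

    def __init__(self, window_seconds=30.0):
        self.window = window_seconds
        self.frames = deque()  # (timestamp, payload) pairs, oldest first

    def record(self, payload, now=None):
        now = time.time() if now is None else now
        self.frames.append((now, payload))
        self._flush(now)

    def _flush(self, now):
        # Drop frames older than the retention window.
        while self.frames and now - self.frames[0][0] > self.window:
            self.frames.popleft()

    def snapshot(self):
        return [payload for _, payload in self.frames]

buf = RollingCaptureBuffer(window_seconds=30.0)
buf.record("frame-old", now=0.0)
buf.record("frame-new", now=40.0)  # 40 s later: the old frame is flushed
print(buf.snapshot())  # → ['frame-new']
```

A tiered storage service, as described, would divert flushed frames to paid archival storage instead of discarding them.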

On-Demand Sensors

One embodiment of the present disclosure includes a system (e.g., any one or more of client device 102 of FIG. 1, client device 402 of FIG. 4A, server 100 of FIG. 1, or server 300 of FIG. 3A) to rent out sensors at certain locations (e.g., imaging devices, microphones, or cameras of various devices) to record the events happening there.

For instance, Sue is in San Francisco and wants to know what’s happening at a specific location in New York City. She uses the disclosed system and types in the place (e.g., an address or other identifier) and/or the event/phenomenon/activity/scene she wants to see (e.g., a live concert or basketball game at Madison Square Garden, or a solar eclipse in the Hamptons). The system detects users with devices in the network who are nearby or at the location and alerts them.
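The "detect nearby devices and alert them" step could be implemented as a simple proximity filter over the devices in the network. The function, fleet data, and radius below are illustrative assumptions; a real system would use proper geospatial indexing.

```python
import math

def nearby_devices(devices, target, radius_km=1.0):
    """Return the ids of devices within `radius_km` of `target`.

    `devices` maps device_id -> (lat, lon); distance uses a simple
    equirectangular approximation, adequate for city-scale radii."""
    tlat, tlon = target
    hits = []
    for dev_id, (lat, lon) in devices.items():
        x = (lon - tlon) * math.cos(math.radians((lat + tlat) / 2.0))
        y = lat - tlat
        if 111.32 * math.hypot(x, y) <= radius_km:  # ~km per degree
            hits.append(dev_id)
    return hits

fleet = {"alice_phone": (40.7505, -73.9934),   # near Madison Square Garden
         "bob_phone": (34.0522, -118.2437)}    # Los Angeles
print(nearby_devices(fleet, target=(40.7500, -73.9930)))  # → ['alice_phone']
```

Each returned device id would then receive an alert inviting its owner to capture the requested scene.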

When users agree to capture the event or phenomenon, the disclosed system books Sue a “ride” on their devices (e.g., phones and other devices). Users can be paid or rewarded for pointing their devices in the direction Sue desires, or for taking their devices along the path Sue desires, and streaming live recordings (e.g., audio, video, or others) to the system. Supply and demand can drive the price of each “ride” or experience: some locations may have more “sensor drivers” already installed, and prices can also change depending on the time of day or location.

The disclosed system (e.g., any one or more of client device 102 of FIG. 1, client device 402 of FIG. 4A, server 100 of FIG. 1, or server 300 of FIG. 3A), in one embodiment, includes a marketplace that connects parties requesting sensor data at particular locations and times, in real time or near real time (video and audio from smartphones, but potentially other sensor data from any type of sensor as well), with parties who have devices that can supply that sensor data. For example, if there were sudden breaking news somewhere, there would be a spike in demand for live feeds, recordings, or video of that situation. The disclosed marketplace can broker the supply of sensors to meet the demand for sensors in real time or near real time. The system can make the connection and provide the user interface to buyers and sellers of sensor data. Historical sensor data can also be stored, managed, archived, and sold.
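The brokering described above — pairing parties who want sensor data with devices that can supply it — can be sketched as a greedy matching over offers and requests. The data classes and field names are illustrative assumptions, not the patent's terminology.

```python
from dataclasses import dataclass

@dataclass
class SensorOffer:
    """A device willing to supply a live feed."""
    device_id: str
    location: str
    sensor_type: str      # e.g. "video", "audio"

@dataclass
class SensorRequest:
    """A buyer asking for a live feed."""
    buyer_id: str
    location: str
    sensor_type: str

def broker(requests, offers):
    """Greedy matching: pair each request with the first unused offer
    at the same location that supplies the requested sensor type."""
    used, matches = set(), []
    for req in requests:
        for off in offers:
            if (off.device_id not in used
                    and off.location == req.location
                    and off.sensor_type == req.sensor_type):
                matches.append((req.buyer_id, off.device_id))
                used.add(off.device_id)
                break
    return matches

demand = [SensorRequest("sue", "MSG", "video")]
supply = [SensorOffer("d1", "MSG", "audio"), SensorOffer("d2", "MSG", "video")]
print(broker(demand, supply))  # → [('sue', 'd2')]
```

A production marketplace would add pricing, streaming sessions, and real-time re-matching as supply and demand shift.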

One embodiment also includes a marketplace that generates and updates live 3D maps of physical places, both indoors and outdoors. There is a frequent requirement for 3D surface and live video data in places with a high level of activity or change; the frequency of feeds is proportional to the amount of movement, traffic, etc. Remote users can view and even take part in VR/AR content at the location remotely, and the system can save the history of events that took place there. This is a way of crowdsourcing audio, video, 3D sensor, and temperature data about the world.
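The "feed frequency proportional to movement" rule can be sketched as a mapping from an activity score to a refresh interval. The function name, score range, and constants are illustrative assumptions.

```python
def refresh_interval(activity, base_interval=60.0, min_interval=1.0):
    """Map an activity score (0 = static scene, 1 = constant motion)
    to a 3D-map refresh interval in seconds: more movement means the
    map and live feeds are refreshed more often."""
    activity = max(0.0, min(1.0, activity))  # clamp to [0, 1]
    return max(min_interval, base_interval * (1.0 - activity))

print(refresh_interval(0.0))  # → 60.0 (quiet location: refresh every minute)
print(refresh_interval(0.5))  # → 30.0 (moderate traffic: refresh twice as often)
```

A linear mapping is the simplest choice; a real system might instead trigger refreshes directly from detected scene changes.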

The algorithm could result in periods of low activity or quiet being paid little or nothing; it would favor popular times and locations that it could monetize. In situations with a small audience, you might not earn enough to make it worth your while. The system uses cameras already in place to record sessions. Imagine people standing at a location offering to act as cameras for apps or for people who want to view what is happening.

For instance, if someone installs a stationary camera, there will always be earnings. Dynamic pricing makes sure that any location can be covered from all angles. The system can dynamically award points, rewards, and/or payments for places depending on their popularity, demand, activity level, rarity, accessibility, etc. The system is distributed so that each device, sensor, or camera is its own stream, creating a large live-cam network.
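The supply-and-demand pricing described above can be sketched as a simple dynamic pricing function. The formula, parameters, and base rate are illustrative assumptions; the patent does not specify a pricing model.

```python
def ride_price(base_rate, demand, supply, time_multiplier=1.0):
    """Dynamic price for one sensor 'ride': scale a base rate by the
    demand/supply ratio, with an optional time-of-day multiplier, so
    scarce coverage at popular moments costs more."""
    if supply <= 0:
        raise ValueError("no sensor drivers available at this location")
    return round(base_rate * (demand / supply) * time_multiplier, 2)

# Ten viewers competing for five cameras at a base rate of $1.00/min:
print(ride_price(1.00, demand=10, supply=5))  # → 2.0
```

The same function could feed the points/rewards engine, with payouts weighted by rarity or accessibility of the location.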

In one embodiment of the system, a points engine rewards users for what they do, starting with lending or renting out a camera, sensor, or device. The system shows the user where the money is going and what earnings it is projecting. Users can simply hang around and rent out or sell time to people who wish to view through their cameras, and can aim the sensors at what those viewers wish to see. Surprise demand is driven by, for example, breaking news, celebrity sightings, concerts, sports events, natural phenomena (e.g., volcanic eruptions, solar or lunar eclipses, etc.), scenic places, and live performances and happenings. Ticketed events (e.g., concerts, theater, the Golden Globe Awards, live TV, talk shows, etc.) may have only a limited number of tickets available.

Search and Register Augmented Reality and Virtual Reality Content

When an entity publishes a VOB (virtual object), it can notify a registry by using the registry’s API. In one embodiment, registration is the act of providing metadata to a registry (e.g., as hosted by the server 100 in FIG. 1): the content’s address (URL) and unique identifier, as well as other information about the content (tags and type of content; publisher identity; date and time created; content creators; price; target audience; current locations; policies and descriptions; as well as preview images and 3D object previews). The registry can then index the content.
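The registration step above amounts to assembling a metadata record and submitting it to the registry for indexing. The function, field names, and in-memory registry below are illustrative assumptions; the patent describes the metadata but not a concrete API.

```python
import json

def register_vob(registry, url, uid, **metadata):
    """Build the metadata record a publisher would send to the
    registry's API on publication, then index it by unique id."""
    record = {"url": url, "id": uid, **metadata}
    registry[uid] = record        # the registry indexes the content
    return json.dumps(record)     # the payload sent over the API

registry = {}
register_vob(registry, "https://example.com/vob/42", "vob-42",
             tags=["concert", "nyc"], content_type="3d-object",
             publisher="acme", price=0.99)
print(registry["vob-42"]["publisher"])  # → acme
```

In practice the registry would be a networked service, and the record would also carry policies, target audience, and preview assets as listed above.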
