Invented by Alicia C. Jones-McFadden, Matthew Hsieh, Scott R. Enscoe, Raghav Anand, Michael Anderson, Elizabeth Ann Miller, Daniel Shilov, Alicia Anli Salvino, Bank of America Corp
The Bank of America Corp. invention works as follows: systems provide a virtual-reality experience of a predicted future state based on the occurrence or contemplation of an event. Other systems described herein provide an augmented-reality experience that replaces an object in view with an enhanced display of the same object based on its predicted future state.
Background for System for predicting future situations and generating a real-time interactive virtual experience
Virtual reality is a computer-generated artificial environment presented to the user in a multimedia format, usually as a three-dimensional presentation. Virtual reality allows the user to suspend disbelief and experience an artificial environment as if it were real. The user can interact with it by performing actions or manipulating objects in the virtual environment. Virtual reality can be displayed on any computer display, but the experience is enhanced when it is displayed on wearable computers such as Optical Head-Mounted Displays (OHMDs) or similar devices.
Augmented reality, also known as AR, is a direct or indirect live view of a real-world environment in which some elements or objects have been enhanced or supplemented with computer-generated sensory information (e.g., graphics, video, sound, or the like). The computer modifies the view of reality, enhancing the user's perception of the world. Augmented-reality experiences can be provided by an image-capturing device and are enhanced through the use of Optical Head-Mounted Displays or similar devices.
Therefore, there is a need to improve the capabilities of virtual and augmented reality and, more specifically, to provide a virtual- or augmented-reality experience that allows the user to gain insight into future circumstances.
The following is a simplified overview of one or more embodiments, intended to give a basic understanding. This summary is not an exhaustive overview of all contemplated embodiments. It is not intended to identify critical or key elements of all embodiments or to define their scope. Its purpose is to present some concepts of one or more embodiments in simplified form as a precursor to the detailed description presented later.
Systems described herein provide a virtual-reality experience of the user's predicted future state based on the occurrence or contemplation of an event. Other systems described herein provide an augmented-reality experience that replaces a visible object with an augmented version of that object based on its predicted future state. The user can thus easily see what the future will look like, based on the event that occurs and/or the attributes of the user.
First embodiments of the invention are “a system that provides an augmented-reality display of a predicted future state of an object.” The system comprises a wearable computing device with a memory and at least one processor in communication with the memory. It also includes an image-capturing device in communication with the processor, as well as an Optical Head-Mounted Display (OHMD). The system further includes a database configured to store attributes associated with the user of the wearable computing device.
The system also includes a predicted future-state module that is stored in memory and executed by the processor. The predicted future-state module is designed to capture an image of an object via the image-capturing device and then identify the object. From the database it retrieves one or more attributes of the user. It then predicts a future state of the object based on the identification of the object and those attributes.
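The capture–identify–retrieve–predict flow described above can be sketched as a simple pipeline. The patent does not specify any implementation, so all class, method, and attribute names below (`PredictedFutureStateModule`, `identify`, `get_attributes`, the projection rule) are illustrative assumptions:

```python
from dataclasses import dataclass


@dataclass
class UserAttributes:
    """Attributes retrieved from the database for the device's user (assumed fields)."""
    savings_amount: float
    monthly_net_inflow: float


class PredictedFutureStateModule:
    """Sketch of the predicted future-state module: capture an image,
    identify the object in it, retrieve user attributes, and predict
    the object's future state."""

    def __init__(self, camera, object_identifier, attribute_db):
        self.camera = camera                # image-capturing device
        self.identifier = object_identifier  # object-identification technique
        self.db = attribute_db               # database of user attributes

    def predict(self, user_id: str) -> dict:
        image = self.camera.capture()                 # step 1: capture an image
        obj = self.identifier.identify(image)         # step 2: identify the object
        attrs = self.db.get_attributes(user_id)       # step 3: retrieve attributes
        return self.predict_future_state(obj, attrs)  # step 4: predict future state

    def predict_future_state(self, obj: str, attrs: UserAttributes) -> dict:
        # Toy rule standing in for the patent's unspecified prediction logic:
        # project the user's savings forward over a fixed horizon.
        months = 12
        projected = attrs.savings_amount + months * attrs.monthly_net_inflow
        return {"object": obj, "horizon_months": months,
                "projected_savings": projected}
```

The point of the sketch is the separation of concerns the claim implies: image capture, object identification, attribute retrieval, and prediction are distinct components wired together by the module.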
In specific embodiments, the predicted future-state module is further configured to identify the object using one or more object-identification techniques. In other specific embodiments, the predicted future-state module can also be configured to identify the current state of the object (i.e., its current condition). In such embodiments, the module predicts the future state of the object based on its identification, its current condition, and one or more attributes of the user.
In other embodiments, the predicted future-state module is configured to determine the current state of the object by identifying its current value. In such embodiments, the module predicts the future state of the object based on its identification, its current value, and one or more attributes of the user.
In still other specific embodiments, the predicted future-state module is further configured to predict the state of the object at a predetermined future period of time, based on the identification of the object and one or more attributes associated with the user. In related specific embodiments, the module allows the user to set the predetermined period of time, either before or after the image of the object is captured (i.e., dynamically, on the fly).
In still other specific embodiments, the predicted future-state module is further configured to predict the future state of the object, where the future state is a substitute object.
In other embodiments, the database configured to store the attributes further defines the attributes as financial-performance attributes, such as, but not limited to, savings amounts, historical savings performance, loan amounts, historical loan-repayment performance, and current inflow and outflow of financial resources. In such embodiments, the predicted future-state module predicts the future state of the object using the identification of the object and one or more of the financial attributes.
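One plausible reading of a financial-attribute-driven prediction is an affordability projection for the identified object. The function below is a stand-in for the patent's unspecified prediction logic; the formula and parameter names are assumptions for illustration only:

```python
def predict_affordability(object_value: float,
                          savings: float,
                          monthly_inflow: float,
                          monthly_outflow: float,
                          months: int) -> bool:
    """Toy prediction: will the user's projected savings cover the
    identified object's current value after `months` months?

    Uses the kinds of financial-performance attributes the patent lists
    (savings, inflow/outflow of financial resources)."""
    projected_savings = savings + months * (monthly_inflow - monthly_outflow)
    return projected_savings >= object_value
```

The same shape of computation could drive the augmented display, e.g., rendering the object differently depending on whether the prediction comes back true or false.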
Second embodiments of the invention are “a system that provides a user with a virtual-reality experience of their predicted future state.” The system comprises a mobile computing device having a memory and at least one processor in communication with the memory. The system also includes a database that stores attributes associated with the user of the mobile computing device. The system further includes a virtual-reality experience module, which is stored in memory and executed by the processor. The virtual-reality experience module is designed to receive input indicating the occurrence, or contemplation, of an event related to the user, retrieve from the database one or more attributes associated with the user, and generate at least one virtual-reality experience of the user's future state based on the occurrence or contemplation of the event and those attributes.
In specific embodiments, the mobile computing device is further defined as a wearable computing device that includes an Optical Head-Mounted Display (OHMD) configured to show the virtual-reality experience to the user.
In other specific embodiments, the virtual-reality experience module is configured to generate at least one virtual-reality experience of the future state of the user in accordance with the occurrence or contemplation of the event and the one or more attributes associated with the user, where the future state is defined by a predetermined future time period. In related specific embodiments, the module allows the user to set the predetermined time period, either before or after the contemplation or occurrence of the event.
In still further specific embodiments, the virtual-reality experience module is configured to generate two virtual-reality experiences of the future state of the user: a first virtual-reality experience based on the contemplated event occurring, and a second virtual-reality experience based on the contemplated event not occurring.
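The two-experience idea amounts to branching one projection on whether the contemplated event happens. A minimal sketch, assuming the event is a purchase with a cost and the projection is a simple savings forecast (both assumptions; the patent leaves the model unspecified):

```python
def generate_scenarios(event: dict, attrs: dict, months: int) -> dict:
    """Project the user's savings over `months` months with and without
    the contemplated event, yielding the inputs for the two
    virtual-reality experiences."""
    baseline = attrs["savings"] + months * attrs["monthly_net_inflow"]
    return {
        "event_occurs": baseline - event["cost"],   # basis for the first VR experience
        "event_does_not_occur": baseline,           # basis for the second VR experience
    }
```

Each branch's projected figure would then parameterize the rendering of the corresponding virtual-reality experience, letting the user compare the two futures side by side.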
In other embodiments, the database configured to store attributes associated with the user further defines the attributes as profile attributes and financial attributes. Financial attributes in such embodiments include historical transaction records and one or more other financial attributes, such as current loan amounts, historical loan-repayment performance, current financial flows, and financial goals. In related embodiments, the virtual-reality experience module is configured to receive input indicating the occurrence or contemplation of an event associated with the user, where the event is defined as a transaction.
As described below in greater detail, the present invention allows for a virtual-reality experience of the predicted future state of a user based on the occurrence or contemplation of an event and attributes associated with that user. In addition, the systems described below provide an augmented-reality experience that replaces a real, viewed object with an augmented display based on attributes associated with the user. As such, embodiments allow a user to easily comprehend what their future will or could be, based on the occurrence of an event and the attributes associated with them, and/or the future state of a physical object, based on the attributes associated with them.
To achieve the foregoing and related goals, the one or more embodiments include the features described in detail below and particularly pointed out in the claims. The description and drawings that follow illustrate certain features of one or more embodiments. These features are only a few examples of the ways in which the principles of the various embodiments may be employed, and this description is intended to cover all such embodiments and their equivalents.
Embodiments of the present invention, not all of which are shown in the drawings, will be described hereinafter. The invention can be implemented in many forms and should not be construed as limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure satisfies applicable legal requirements. For purposes of explanation, many specific details are provided in order to give a complete understanding of one or more embodiments. However, it may be evident that the embodiment(s) can be implemented without these details. Like numbers refer to like elements throughout.
Various embodiments or features will be presented as systems that may include devices, components, modules, and the like. The various systems may include additional devices, components, modules, etc., and the figures may not include all of the devices, components, modules, etc. discussed. A combination of these approaches may also be used.