Invented by W. Daniel Hillis, Kjerstin I. Williams, Thomas A. Tombrello, James W. Sarrett, Luke W. Khanlian, Adrian L. Kaehler, Russel Howe; assigned to Applied Invention LLC
The Applied Invention LLC invention works as follows. At least one embodiment includes a communication method that allows an autonomous vehicle to interact with external observers. The method comprises: receiving a task at the autonomous vehicle; collecting data describing the surrounding environment from a sensor attached to the vehicle; determining the intended course of action for the autonomous vehicle based on both the task and the collected data; and delivering a human-understandable output to an observer via an output device.
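The four claimed steps (receive a task, collect sensor data, plan a course of action, present a human-understandable output) can be sketched in Python. This is a minimal illustration only; the class names, the toy planner, and the message format are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Mission:
    destination: str

class Sensor:
    """Stand-in for the vehicle-mounted sensors (cameras, lidar, etc.)."""
    def collect(self) -> dict:
        return {"obstacles": ["pedestrian ahead"]}

class OutputDevice:
    """Stand-in for a display, projector, speaker, or light."""
    def present(self, message: str) -> str:
        return message  # a real device would render or play the message

def plan_course(mission: Mission, env: dict) -> str:
    # Toy planner: yield if a pedestrian is detected, else proceed.
    if any("pedestrian" in obstacle for obstacle in env["obstacles"]):
        return f"yielding, then proceeding to {mission.destination}"
    return f"proceeding to {mission.destination}"

def communicate_intent(mission: Mission, sensor: Sensor, out: OutputDevice) -> str:
    env = sensor.collect()              # collect data about the surroundings
    course = plan_course(mission, env)  # determine the intended course of action
    return out.present(f"Vehicle intent: {course}")  # human-understandable output

print(communicate_intent(Mission("the depot"), Sensor(), OutputDevice()))
```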
Background for Communication of autonomous vehicles with external observers
Operating any vehicle (such as a car or truck) is a difficult task. A driver must learn how to drive the vehicle, navigate within the limits of the traffic laws, and interact with other drivers in a shared physical environment (such as a road or parking lot). To meet the last requirement, drivers signal their intentions through a variety of conscious and unconscious actions. Some of these signals come from devices purposely built into vehicles, such as brake lights and turn signals. Others rely on innate human characteristics: intentional actions such as waving a hand to indicate that another driver should proceed through an intersection, and subconscious or reactionary actions such as turning one's head before merging lanes. When all other methods of signaling fail, as when a car breaks down or a driver is lost, human drivers can usually still communicate their intentions and ask for help directly.
External observers such as pedestrians or other drivers can perceive and interpret these signals, gaining insight into the driver's intentions. These insights are crucial for a safe and efficient flow of vehicular traffic, as well as for providing assistance when necessary. A central feature of these methods is that an observer needs no special equipment to interpret a signal, and most signals require no training.
Advances in autonomous vehicle technology allow computers or other electronic devices to drive vehicles. At least two types of autonomous vehicles exist today: fully autonomous vehicles without human passengers, and semi-autonomous vehicles that can operate in an autonomous mode even while carrying passengers. As autonomous vehicles proliferate, better communication between autonomous vehicles and outside observers will become necessary. Unfortunately, signaling systems built into cars, such as turn signals and brake lights, can communicate only a limited subset of information to external observers. At present, no autonomous vehicle is known to provide a comprehensive way of signaling external observers.
In general, the embodiments described herein present various designs and methods that enable autonomous vehicles to communicate navigation-related intents to external observers. In one aspect, this disclosure describes ways to convey information to external observers using images displayed on a vehicle; the images can be words, symbols, or pictures. The disclosure also describes ways to alert external observers that the vehicle intends to transmit information, for example by using lights or sound.
The disclosure describes other ways to convey information to observers outside the vehicle using images projected onto the ground near the vehicle. These projections can include information about the intended trajectory as well as the anti-trajectory.
In addition, the disclosure describes ways to convey information to external observers using an anthropomorphic object. The anthropomorphic object may be purpose-built, integrated into a sensor package already installed on the vehicle, or take the form of a picture. The disclosure also describes ways to convey information to external observers using a movement-state indicator (e.g., a three-state indicator).
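One possible reading of the three-state movement indicator mentioned above is a small state machine distinguishing stopped, about-to-move, and moving. The state names and the transition rule below are assumptions for illustration, not the patent's definitions.

```python
from enum import Enum

class MovementState(Enum):
    STOPPED = "stopped"
    ABOUT_TO_MOVE = "about to move"
    MOVING = "moving"

def indicator_state(speed_mps: float, departing: bool) -> MovementState:
    """Map the vehicle's speed and a departure flag to an indicator state."""
    if speed_mps > 0.0:
        return MovementState.MOVING
    return MovementState.ABOUT_TO_MOVE if departing else MovementState.STOPPED
```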
This overview is intended to provide a simplified introduction to a number of concepts that are further explained in the detailed description. It does not aim to define key features or essential elements of the claimed subject matter, nor is it meant to limit the scope of the claims. The following description, illustrated by the accompanying drawings and defined by the appended claims, provides a more comprehensive presentation of the features, details, utility, and advantages of this disclosure. In some embodiments, the disclosure may include other elements, steps, aspects, or features in addition to those described above; the specification describes these potential additions and substitutions.
According to various embodiments, FIG. 1 shows a navigational system 100 of a vehicle 102 that can indicate its navigational intentions externally. Current autonomous vehicles provide little, if anything, in the way of a readable notification to an outside observer, making it difficult for systems or people near an autonomous vehicle to predict its behavior or react to it. The vehicle 102 and the other disclosed autonomous vehicles address this challenge by expressing the vehicle's intentions in various human-understandable forms using sound or light.
The vehicle 102 is an autonomous vehicle, either fully autonomous or semi-autonomous, that has a specific task. The vehicle 102 has a first control system 104 that can be implemented using a computing device such as a computer or a field-programmable gate array. The vehicle 102 can attempt to accomplish a task related to navigation. The task could require the vehicle to navigate from a starting point to an ending point (e.g., a destination address); the ending point can be flexible, meaning that it could be altered or changed over time. The task may instead require the vehicle 102 to pass through a series of waypoints or follow a route. The vehicle 102 has sensors that collect data about its surroundings. The vehicle plans its intended course of action based on the data it gathers and the task assigned to it, then communicates this information in a way that is easily understood by external observers.
The vehicle 102 can use a variety of forms of communication to convey its intentions to external observers. In one example, communication could take the form of a human-driver communication 106, involving both the first control system 104 of the vehicle and a human driver 108. The human-driver communication 106 may take the form of a light, sound, display, animation, or other output (e.g., a beam, flash, or other radiating pattern). The vehicle 102 can generate the human-driver communication 106 from a device that is not a brake light or signal light.
In another instance, the communication could take the form of an inter-device communication 112, involving the first vehicle control system 104 and the second vehicle control system 116 of another vehicle 114, e.g., another autonomous vehicle. This type of communication can include wireless digital or analog communications. Another example of inter-device communication 112 is steganographic communication, where digital information is encoded within a signal that can also be understood by humans (e.g., an image or sound). Inter-device communication can be optical, radio-frequency, or acoustic, with encoded data.
In a third example, communication could take the form of a pedestrian communication 120, involving the first control system 104 and a pedestrian 122. The pedestrian 122 could be an external observer who is not driving a car. In yet another example, communication can take the form of a traffic-control communication 126 between the first control system 104 and a traffic controller 128. The traffic controller 128 could be someone with authority to control traffic, such as a road worker or police officer. Traffic control devices or agent terminal devices in the navigational environment can also allow a remote controller to direct traffic. The pedestrian communication 120 and the traffic-control communication 126 can be similar to the human-driver communication 106; they may include a light, sound, display, or animation. In some embodiments, the first control system can simultaneously generate or present any of the pedestrian communication 120, the traffic-control communication 126, and the inter-device communication 112.
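The observer types above suggest a simple dispatch: machine-readable payloads for other vehicles, human-perceivable signals for everyone else. A hedged sketch follows; the enum values and message shapes are illustrative, not from the patent.

```python
from enum import Enum

class Observer(Enum):
    HUMAN_DRIVER = "driver"
    OTHER_VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"
    TRAFFIC_CONTROLLER = "controller"

def build_communication(observer: Observer, intent: str) -> dict:
    if observer is Observer.OTHER_VEHICLE:
        # Inter-device communication: an encoded digital payload
        # (carried over an optical, RF, or acoustic channel).
        return {"channel": "digital", "payload": intent}
    # Driver, pedestrian, and traffic-controller communications are
    # human-perceivable: a light, sound, display, or animation.
    return {"channel": "display", "payload": f"Intent: {intent}"}
```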
FIG. 2 shows an example autonomous vehicle 200 according to various embodiments. The autonomous vehicle 200 can be either fully autonomous or semi-autonomous. The autonomous vehicle 200 has a control system 202 (e.g., the first control system 104 of FIG. 1). The control system determines (e.g., calculates and plans) a trajectory for the autonomous vehicle 200 and executes the trajectory according to the traffic laws and the physical limitations of the surrounding environment. The autonomous vehicle 200 also includes an intention communication subsystem 204. In some embodiments, the intention communication subsystem 204 is physically integrated into the control system. In other embodiments, the subsystem is coupled to the control system via wires or wirelessly. In certain embodiments, either the control system or the intention communication subsystem can be removable from the autonomous vehicle and/or portable.
The intention communication subsystem 204 can generate a human-understandable output to convey the intended course of action of the autonomous vehicle (e.g., the trajectory). The intended course can be illustrated explicitly or implicitly (e.g., by acknowledging traffic laws, obstacles, or physical borders beyond which the autonomous vehicle 200 will not cross). The control system 202, for example, can determine an intended course of action based on a user's configuration (e.g., a set destination) via a configuration interface 206. Sensor inputs from one or more sensors 210 can be used by the control system 202 to take into account the environment surrounding the autonomous vehicle 200. The control system 202 can then determine, using the sensor inputs as well as the user configurations for the autonomous vehicle 200, the trajectory, route, and any other actions required to reach the destination.
The sensors 210 may include one or more cameras 212, one or more microphones 214, a radar 216, a sonar 218, a lidar 220, a radio-frequency (RF) antenna 222 serving as a sensor or transceiver (for machine-readable communication), a tactile sensor 224, and other passive or active sensors. The tactile sensors 224 can detect contact between a person or other external object and the autonomous vehicle 200; they may be force sensors or accelerometers. The sensors 210 can collect data that describes the environment around the autonomous vehicle and/or provides self-diagnostic feedback (e.g., in relation to the environment). The control system 202 can record metadata from the sensors 210 for further analysis. Using the collected data, the control system 202 can detect various conditions and objects, such as traffic events, accidents, pedestrians, active or passive traffic signals, road conditions, and weather conditions. The control system can also detect other characteristics, such as the types of fellow vehicles or detected objects, and may predict the behavior of these objects or events.
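The detection step described above, pooling readings from the sensors 210 and flagging conditions, might look like the following sketch. The threshold values and dictionary keys are placeholders, not taken from the patent.

```python
def detect_conditions(readings: dict) -> list:
    """Flag conditions from pooled sensor readings (illustrative rules only)."""
    events = []
    if readings.get("camera_person_in_path"):
        events.append("pedestrian")
    if readings.get("tactile_force_newtons", 0.0) > 5.0:
        # Tactile sensors detect contact between an external object and the vehicle.
        events.append("contact")
    if readings.get("lidar_min_range_m", float("inf")) < 1.0:
        events.append("obstacle")
    return events
```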
The intention communication subsystem 204 expresses or conveys the human-understandable outputs via one or more output devices 230. The output devices 230 can include one or more projectors, display devices, speakers, lights, mechanically actuated devices, or any combination of these. The output devices can be bidirectional, omnidirectional, unidirectional, or multidirectional.
When conveying a human-understandable output, the intention communication subsystem 204 may direct one or more of the output devices 230 toward a detected external observer, or toward an area the detected external observer can perceive. In different embodiments, the intention communication subsystem can present detected objects (e.g., detected based upon the sensor data) in the human-understandable output, to acknowledge the control system 202's awareness of them.
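Directing an output device toward a detected observer reduces to computing a bearing from the vehicle to the observer's position. A minimal sketch in a vehicle-centred coordinate frame; the frame convention and function name are assumptions, not the patent's method.

```python
import math

def bearing_to_observer(obs_x_m: float, obs_y_m: float) -> float:
    """Bearing in degrees: 0 = straight ahead, positive = to the vehicle's left.

    obs_x_m: observer's distance ahead of the vehicle, in metres.
    obs_y_m: observer's distance to the vehicle's left, in metres.
    """
    return math.degrees(math.atan2(obs_y_m, obs_x_m))

# An observer 10 m ahead and 10 m to the left sits at a 45-degree bearing;
# a steerable projector or display would be rotated to that angle.
print(bearing_to_observer(10.0, 10.0))
```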
The configuration interface 206 allows the user to further customize the human-understandable output of the autonomous vehicle, including its form. The user, for example, can select the best communication method to express the intended course of action determined by the control system 202. The configuration interface can provide two or more options for conveying the intended course of action through the output devices. Below are some examples of outputs that can express, explicitly or implicitly, the intended course of action of the autonomous vehicle 200.
Display on Vehicle
The display devices 234 can convey the human-understandable output in a variety of display formats. The display devices 234 may be mounted on the autonomous vehicle 200 as a surface display, or in other forms; they can also be retrofitted to the autonomous vehicle 200 and detachable. In one format, images convey the intentions of the control system 202. The intention communication subsystem 204 may select the images from a standard list of transportation or traffic symbols, which can appear on the windshield of the autonomous vehicle 200 or another surface. If no standard symbol is available to convey the required information, words can be displayed instead. As an example, FIG. 3 shows an autonomous vehicle 300, similar to the autonomous vehicle 200, that displays a standard traffic symbol 302 on an electronic display 304 visible through the windshield of the autonomous vehicle 300, in accordance with different embodiments.
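The fallback described above, preferring a standard traffic symbol and otherwise displaying words, can be sketched with a lookup table. The table entries below are hypothetical; a real system would draw on an actual list of standardized signage.

```python
# Hypothetical symbol table; keys are intents, values name the glyph to render.
STANDARD_SYMBOLS = {
    "stop": "octagon glyph",
    "yield": "inverted-triangle glyph",
    "turn_left": "left-arrow glyph",
}

def display_message(intent: str) -> str:
    """Return a standard symbol if one matches the intent, else plain words."""
    return STANDARD_SYMBOLS.get(intent, intent.replace("_", " "))
```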