Invented by Rene Seeber, Ingo Seebach, Henning Meyer, Markus Schoeler, Kai Baumgart, Christian Scheibe, and David Prantl; assigned to Dedrone Holdings Inc.
The Dedrone Holdings Inc. invention, “Systems, Methods, and Apparatus for Identifying and Tracking UAVs,” works as follows: a plurality of sensors is connected via a network to configuration software or hardware. The sensors monitor an environment and transmit their data to the configuration software or hardware, where the data from each sensor can be used to determine whether a UAV has entered or is approaching the monitored area. The system allows a UAV to be tracked and its behavior predicted over time. Sensor information and results generated by the systems and methods can be stored in databases to help improve UAV identification and tracking.
Background for Systems and methods for identifying, tracking and managing unmanned aircraft
Unmanned aerial vehicles (UAVs), or “drones,” are aircraft that operate without an onboard pilot. UAVs can be remotely controlled or autonomously operated, and they come in a variety of sizes and configurations. The introduction and increasing popularity of UAVs has raised questions about government regulation and UAV usage.
The anonymous nature of UAVs is a problem in areas where accountability and identity are highly sensitive. UAVs can compromise the safety and regulation of airspace around airports, prisons, and sporting venues. Not all UAVs are flown maliciously; they can perform a variety of tasks, such as delivering consumer products, and as regulations change such uses could become commonplace. There is therefore a long-felt but unresolved need for a device, system, apparatus, or method that can detect, identify, monitor, and track UAVs to protect airspace and the surrounding areas, and to monitor appropriate UAV operation.
Briefly described, and according to one embodiment, aspects of the present disclosure relate generally to systems, methods, and apparatuses for identifying, monitoring, and managing unmanned aerial vehicles (UAVs) using a plurality of sensors, hardware, and software. According to one embodiment and aspects of the disclosure, a number of sensors, including video, audio, Wi-Fi, and radio frequency (RF) sensors, collect data from their environment so that UAVs can be detected, identified, tracked, and managed.
In one embodiment, the video sensor is configured to “see” approaching objects. In various embodiments, the video sensor can record high-definition video and detect objects approaching from up to 100 meters away. According to various aspects of the disclosure, the audio sensor is configured to “listen” for noise and/or various frequencies or frequency ranges emitted by UAVs. In various embodiments, the Wi-Fi sensor is configured to detect Wi-Fi signals and, more specifically, information transmitted in those signals, such as SSIDs, MAC addresses, and other identifying data. In one embodiment, the RF sensors are configured to monitor frequencies between 1 MHz and 6 GHz; in some embodiments, however, the RF sensors may be configurable outside of this range. Some embodiments include a software-defined radio in the RF sensors, which allows the sensor to dynamically configure itself to monitor any radio frequency or range.
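The dynamically configurable SDR described above can be pictured as stepping a tunable receiver across a band. The sketch below is purely illustrative: the `SoftwareDefinedRadio` class and its `tune()` method are hypothetical stand-ins, not a real driver API.

```python
# Hypothetical sketch of sweeping a software-defined radio across part
# of the 1 MHz - 6 GHz range described in the disclosure.

class SoftwareDefinedRadio:
    """Illustrative stand-in for an SDR driver (not a real API)."""

    def __init__(self, min_hz=1_000_000, max_hz=6_000_000_000):
        self.min_hz = min_hz
        self.max_hz = max_hz
        self.center_hz = None

    def tune(self, center_hz):
        """Retune the receiver, enforcing the configurable range."""
        if not self.min_hz <= center_hz <= self.max_hz:
            raise ValueError("frequency outside configurable range")
        self.center_hz = center_hz
        return self.center_hz


def sweep(radio, start_hz, stop_hz, step_hz):
    """Yield each center frequency the radio is tuned to during a sweep."""
    f = start_hz
    while f <= stop_hz:
        yield radio.tune(f)
        f += step_hz


radio = SoftwareDefinedRadio()
# Sweep the 2.4 GHz ISM band (common for consumer UAV control links)
# in 20 MHz steps.
centers = list(sweep(radio, 2_400_000_000, 2_480_000_000, 20_000_000))
```

The step size and band edges here are arbitrary demonstration values; a deployed sensor would choose them based on the protocols being monitored.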
The systems, methods, and apparatuses described herein can be arranged in various ways to collect and process large quantities of sensor data, which allows the system not only to identify and track UAVs but also to manage and maintain a catalog of the UAVs it “knows” and monitors.
In certain embodiments, the sensors collect their respective data and process it locally in circuitry within the sensor. In other embodiments, the sensors simply collect data and send it to a server that processes the data.
The method includes the following steps: receiving video data from a camera, where the video data contains at least one image of an object that may be a UAV; analyzing the video data to determine a first confidence measure that the object is a UAV; receiving audio data from an audio sensor, where the audio data includes frequency information indicating the possible presence of a UAV in the airspace; analyzing the audio data to determine a second confidence measure that the frequency data represents a UAV; and aggregating the first and second confidence measures into a combined confidence measure indicating the likelihood that a UAV is present in the airspace.
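The aggregation step above can be sketched as a weighted combination of per-sensor confidences. The patent does not specify an aggregation formula, so the weighted average below is an assumption for illustration only.

```python
# Minimal sketch: combining per-sensor confidence measures into one
# combined confidence. The weighting rule is an illustrative assumption;
# the disclosure does not prescribe a specific formula.

def combined_confidence(measures, weights=None):
    """Weighted average of per-sensor confidence values in [0, 1]."""
    if weights is None:
        weights = [1.0] * len(measures)
    total = sum(weights)
    return sum(m * w for m, w in zip(measures, weights)) / total


video_conf = 0.8   # first confidence measure (video analysis)
audio_conf = 0.6   # second confidence measure (audio analysis)
overall = combined_confidence([video_conf, audio_conf])
```

A real system might instead use Bayesian fusion or learned weights per sensor type; the equal-weight average is just the simplest concrete instance.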
In one aspect, analyzing the RF data to determine a third confidence measure further comprises filtering the RF data to remove one or more unwanted frequencies; decoding the filtered RF data to generate a pattern of one or more frequencies and one or more amplitudes representing the RF data; comparing that pattern with known patterns of frequencies and amplitudes associated with UAVs; determining the third confidence measure if the pattern substantially matches one of the known patterns; and aggregating the third confidence measure into the combined confidence measure. The method also includes receiving Wi-Fi data from a Wi-Fi sensor located near the airspace, the Wi-Fi data indicating a potential presence of a UAV in the airspace; analyzing the Wi-Fi data to determine a fourth confidence measure that it corresponds to a UAV; and aggregating the fourth confidence measure into the combined confidence measure.
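The RF steps (filter, reduce to a frequency/amplitude pattern, match against known UAV signatures) can be sketched as below. The tolerance values, data shapes, and the example signature are all assumptions for demonstration, not values from the patent.

```python
# Illustrative sketch of the RF analysis: drop unwanted frequencies,
# keep a (frequency, amplitude) pattern, and compare it against known
# UAV signatures. All thresholds and signatures here are hypothetical.

def filter_unwanted(samples, unwanted_hz, tol_hz=1e6):
    """Drop (freq_hz, amplitude) pairs near any unwanted frequency."""
    return [(f, a) for f, a in samples
            if all(abs(f - u) > tol_hz for u in unwanted_hz)]


def matches_signature(pattern, signature, freq_tol_hz=2e6, amp_tol=0.1):
    """True if every peak in the signature has a close match in the pattern."""
    for sig_f, sig_a in signature:
        if not any(abs(f - sig_f) <= freq_tol_hz and abs(a - sig_a) <= amp_tol
                   for f, a in pattern):
            return False
    return True


# Hypothetical stored signature for a consumer UAV control link.
KNOWN_UAV_SIGNATURES = [[(2.44e9, 0.9), (2.46e9, 0.7)]]

samples = [(2.44e9, 0.85), (2.46e9, 0.72), (5.0e9, 0.3)]
pattern = filter_unwanted(samples, unwanted_hz=[5.0e9])
third_confidence = 1.0 if any(
    matches_signature(pattern, sig) for sig in KNOWN_UAV_SIGNATURES) else 0.0
```

In practice the confidence would likely be graded by match quality rather than binary, but the structure (filter, decode, compare, score) mirrors the claimed steps.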
According to an aspect of the disclosure, the method includes extracting a media access control (MAC) address from the Wi-Fi signal data and comparing the extracted MAC address with one or more MAC addresses known to be associated with UAVs. Upon determining that the extracted MAC address substantially matches at least one known MAC address, the fourth confidence measure is determined. In some embodiments, a service set identifier (SSID) is extracted from the Wi-Fi signal data and compared with one or more SSIDs known to be associated with UAVs; upon determining that the extracted SSID substantially matches at least one known SSID, the fourth confidence measure is determined.
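The MAC/SSID matching described above can be sketched as simple lookups. The OUI prefixes and SSID keywords below are illustrative placeholders, not a real vendor database, and the 0.9 confidence value is an arbitrary demonstration choice.

```python
# Sketch of the Wi-Fi identifier check: compare an extracted MAC address
# and SSID against known UAV identifiers. The prefix set, keyword set,
# and confidence value are hypothetical.

KNOWN_UAV_MAC_PREFIXES = {"60:60:1f", "a0:14:3d"}      # hypothetical OUIs
KNOWN_UAV_SSID_KEYWORDS = {"drone", "uav", "phantom"}  # hypothetical


def mac_matches_uav(mac):
    """Match on the 3-byte vendor prefix (OUI) of the MAC address."""
    return mac.lower()[:8] in KNOWN_UAV_MAC_PREFIXES


def ssid_matches_uav(ssid):
    """Substring match against SSID keywords known for UAVs."""
    return any(kw in ssid.lower() for kw in KNOWN_UAV_SSID_KEYWORDS)


fourth_confidence = 0.0
if mac_matches_uav("60:60:1F:12:34:56"):
    fourth_confidence = max(fourth_confidence, 0.9)
if ssid_matches_uav("Phantom-4-Pro"):
    fourth_confidence = max(fourth_confidence, 0.9)
```

Matching on the OUI (the first three octets) is a common way to tie a MAC address to a manufacturer, which is why the sketch compares only the prefix.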
According to an aspect of the disclosure, analyzing the Wi-Fi data to determine the fourth confidence measure also comprises extracting a received signal strength indicator (RSSI) from the Wi-Fi signal data. The RSSI is used to estimate the physical distance between the Wi-Fi sensor and the object emitting the Wi-Fi data, and that estimated distance is compared against a predetermined threshold value as part of determining whether a UAV is present.
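One common way to implement the RSSI-to-distance step is a log-distance path loss model; the patent does not name a specific model, so treat this as a sketch. The reference power and path loss exponent below are assumed calibration values.

```python
# Sketch: estimating distance from RSSI with a log-distance path loss
# model. ref_dbm (expected RSSI at 1 m) and path_loss_exp are assumed
# calibration constants, not values from the disclosure.

def rssi_to_distance_m(rssi_dbm, ref_dbm=-40.0, path_loss_exp=2.5):
    """Estimate transmitter distance in meters from a measured RSSI.

    path_loss_exp is roughly 2 in free space and higher in cluttered
    environments; both parameters would be calibrated per deployment.
    """
    return 10 ** ((ref_dbm - rssi_dbm) / (10 * path_loss_exp))


# Example: a -65 dBm reading maps to 10**((-40 + 65) / 25) = 10 m.
distance = rssi_to_distance_m(-65.0)
```

The estimated distance can then be compared against the system's threshold when deciding whether the emitter plausibly sits inside the monitored airspace.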
According to an aspect of the disclosure, analyzing the video data to determine the first confidence measure comprises identifying, in at least one video frame of the video data, at least one region of interest (ROI) containing an image of an object that may be a UAV in the airspace. The method also includes performing an object classification process on the at least one ROI to determine whether the object is a UAV. The object classification process includes extracting image data from the at least one ROI, comparing it to prior image data of UAVs, determining whether the probability that the object is a UAV exceeds a predetermined threshold, and determining the first confidence measure.
According to an aspect of the disclosure, analyzing the audio data to determine the second confidence measure further comprises converting the audio signal data into the frequency domain so that it can be represented as one or more frequencies. The method also includes determining whether a volume-to-noise ratio for each of the one or more frequencies falls within a predetermined range and, after determining that the volume-to-noise ratio for a respective frequency is within the predetermined range, comparing that frequency with one or more frequencies known to be associated with UAVs. The method also comprises determining the second confidence measure if the determined frequency substantially matches at least one of the frequencies known to be associated with UAVs.
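The audio pipeline above (frequency-domain conversion, volume-to-noise check, comparison against known UAV frequencies) can be sketched end to end. The 200 Hz "rotor tone," the sample rate, and the thresholds are all invented demonstration values; a naive DFT stands in for a production FFT.

```python
# Sketch of the audio analysis: DFT a short signal, find the dominant
# frequency, check its volume-to-noise ratio, and compare it against
# known UAV frequencies. All constants here are illustrative.
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (adequate for a short sketch)."""
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(samples)))
            for k in range(n // 2)]


KNOWN_UAV_HZ = [200.0]   # hypothetical rotor-noise frequency
SAMPLE_RATE = 1000
N = 100

# Synthesize a 200 Hz rotor-like tone for demonstration.
signal = [math.sin(2 * math.pi * 200.0 * t / SAMPLE_RATE) for t in range(N)]

mags = dft_magnitudes(signal)
peak_bin = max(range(1, len(mags)), key=lambda k: mags[k])
peak_hz = peak_bin * SAMPLE_RATE / N
noise = (sum(mags) - mags[peak_bin]) / (len(mags) - 1)

snr_ok = mags[peak_bin] > 10 * noise          # assumed volume-to-noise gate
second_confidence = 1.0 if snr_ok and any(
    abs(peak_hz - f) < 5 for f in KNOWN_UAV_HZ) else 0.0
```

Gating on the volume-to-noise ratio before the frequency comparison, as the claim describes, avoids matching spurious low-level spectral bins against the UAV frequency list.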
According to an aspect of the disclosure, the method further includes storing the video and audio signal data in a database, along with the indication that a UAV was identified in the particular airspace. The method also includes transmitting an alert to a system user that a UAV was detected in the airspace. The predetermined threshold value can be expressed as a percentage, and the video sensor and the audio sensor are enclosed within a single housing.
In one embodiment, a system for identifying unmanned aerial vehicles in a particular airspace comprises a video sensor and an audio sensor located near the airspace, as well as a processor operatively connected to both sensors and to a database. In one embodiment, the video sensor is configured to collect and transmit video data that includes at least one image of an object that may be a UAV flying in the airspace. The audio sensor, in one embodiment, is configured to collect and transmit audio signal data that includes at least frequency information indicating the possible presence of a UAV within the airspace. In various embodiments, the processor is configured to: analyze the video data to determine a first confidence measure that the object in the at least one image is a UAV; analyze the audio signal data to determine a second confidence measure that the frequency data represents a UAV; aggregate the first and second confidence measures into a combined confidence measure indicating the possible presence of a UAV; and store in the database an indication that a UAV was identified in the particular airspace.
In one embodiment, the method includes the following steps: obtaining a video frame from a video feed captured by a video sensor located near the airspace; identifying a region of interest (ROI) in the video frame, where the ROI contains an image of a possible UAV in the airspace; and performing an object classification process to determine whether the object in the image is a UAV.
In one embodiment, the object classification process comprises extracting image data from the image of the at least one ROI, comparing it to prior image data to determine the probability that the object in the image is a UAV, and, if that probability exceeds a predetermined threshold, indicating that the object is a UAV.
According to an aspect of the disclosure, the method for identifying UAVs via one or more video sensors further comprises performing background subtraction on the video frame before identifying the ROI. The method also confirms, prior to performing the object classification process on the at least one ROI, that the ROI is within a predefined attention area indicated as likely to contain a UAV, and that the object in the at least one ROI's image is not part of the learned scene represented by the video frame. After indicating that the object in the image is a UAV, an attention area encompassing the at least one ROI is created and used in processing subsequent frames to indicate a region likely to contain a UAV.
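The background subtraction and attention-area steps can be sketched with simple frame differencing. Frames here are tiny 2-D lists of pixel intensities, and the difference threshold is an arbitrary demonstration value; a real system would operate on full video frames with an adaptive background model.

```python
# Illustrative sketch: frame differencing as background subtraction,
# a rectangular ROI around the changed pixels, and an "attention area"
# retained for later frames. All sizes and thresholds are hypothetical.

def background_subtract(frame, background, threshold=30):
    """Return (row, col) pixels that differ from the background model."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if abs(v - background[r][c]) > threshold]


def bounding_box(pixels):
    """Rectangular ROI (min_r, min_c, max_r, max_c) around changed pixels."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (min(rows), min(cols), max(rows), max(cols))


background = [[0] * 4 for _ in range(4)]
frame = [[0] * 4 for _ in range(4)]
frame[1][1] = frame[1][2] = 200  # bright moving object enters the scene

changed = background_subtract(frame, background)
roi = bounding_box(changed)      # rectangular ROI, as in the claims
attention_area = roi             # reused when processing subsequent frames
```

Carrying the attention area forward lets the classifier focus on the region where a UAV was last seen instead of re-scanning the whole frame.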
According to an aspect of the disclosure, the method comprises performing a scene-learning process with respect to the at least one ROI to determine whether the ROI is part of a learned scene represented by the video frame. The method further comprises comparing the ROI with one or more characteristics of stored ROIs associated with substantially static objects in the particular airspace, and performing the object classification process on the ROI.
According to an aspect of the disclosure, the extracted image data includes RGB color values at one or more locations inside the at least one ROI, and background subtraction is performed at one or more locations in the ROI. The object classification process also compares the extracted image data with prior image data of objects known not to be UAVs when determining the probability that the object is a UAV. The method further includes identifying the object as a UAV by assigning it a confidence rating, a statistical measure of whether the object is a UAV, and storing the video frame in a database.
According to an aspect of the disclosure, the video frame stored in the database is associated with an indication that the object in the image is a UAV. The ROI may have a rectangular form. The method also includes transmitting an alert to a system user that a UAV was detected in the particular airspace.
In one embodiment, a system for identifying unmanned aerial vehicles in a particular airspace comprises one or more video sensors and a processor operatively connected to them. In one embodiment, the one or more video sensors are located near the airspace and are configured to collect and transmit a video frame from a video feed. In one embodiment, the processor is configured to: identify a region of interest (ROI) comprising an image of an object that may be a UAV flying in the airspace, and perform an object classification process on the ROI to determine whether the object in the image is a UAV. During object classification, the processor may also: extract image data from the at least one ROI image; compare the extracted data with prior image data of objects known to represent UAVs to determine the probability that the object in the image is a UAV; and, if that probability exceeds a predetermined threshold, indicate that the object in the image is a UAV.
Click here to view the patent on Google Patents.