Patenting innovations in autonomous vehicle road surface recognition centers on a system that lets autonomous vehicle (AV) systems identify roadway elements by coding predetermined directional aspects of pavement marking materials, eliminating the dependence on vision-based camera sensors.

Notably, this approach does not require vehicles to carry heavy and costly sensors, a benefit that is especially valuable given the difficulty of reaching SAE Level 5 autonomous driving.

Understanding Patents

When we talk about patents, we’re discussing exclusive rights granted by the government to inventors. They allow inventors to protect their intellectual property from being used, made, or sold by others for a specified period, typically 20 years from the filing date. This exclusivity serves as a powerful incentive for inventors to push the boundaries of technology.

Why Patent Autonomous Vehicle Innovations?

Patents play a pivotal role in protecting and promoting innovation. In the realm of autonomous vehicles, where breakthroughs happen daily, securing your ideas is a necessity. It ensures that you reap the rewards of your innovation, encourages further advancements, and allows you to establish a competitive edge.

Types of Patents

Before diving into the patenting process, it’s essential to know that not all patents are the same. The three primary types you’ll encounter in the autonomous vehicle space are:

a. Utility Patents: These cover new and useful processes, machines, articles of manufacture, or compositions of matter.

b. Design Patents: These protect the ornamental design or aesthetics of a product.

c. Plant Patents: These apply to new, asexually reproduced plant varieties.

The Patenting Process

a. Invention Disclosure:

This is where it all begins. You must document your innovation thoroughly. Describe the problem your invention solves and how it works.

b. Patent Search:

Before filing, you must search existing patents and publications. This helps confirm your innovation is novel. The search is often complex, as patents are written in dense legal jargon.

c. Choosing the Right Type:

Depending on your invention, you’ll need to decide whether you’re applying for a utility, design, or plant patent.

d. Drafting the Patent Application:

Here’s where it gets intricate. You have two options: hire a patent attorney or write the application yourself. Be warned, writing your application can be challenging due to legal language and format requirements.

e. Filing the Application:

Filing means submitting your application to the United States Patent and Trademark Office (USPTO). It’s important to remember that not all applications are approved.

f. Examination:

The USPTO will examine your application, checking for novelty, utility, and non-obviousness. This stage can take many months, sometimes years.

g. Publication:

Your patent application is typically published about 18 months after filing, placing it in a public database regardless of whether examination has finished.

h. Allowance or Rejection:

If your application is allowed, you pay an issue fee and the patent is granted; if it is rejected, you can respond to the examiner’s objections or appeal. Once granted, you’ll need to pay periodic maintenance fees to keep the patent in force.

i. Enforcing Your Patent:

Once your patent is granted, it’s up to you to protect it. If someone infringes, you may need to take legal action to enforce your rights.

Common Pitfalls and Challenges

a. Complexity:

The patenting process is intricate and time-consuming. Understanding legal jargon can be a challenge.

b. Cost:

The costs of patenting can add up, especially if you hire an attorney.

c. Rejection:

Not all patent applications are approved. It’s possible that your hard work might not be rewarded with a granted patent.

d. Maintenance:

To maintain your patent, you need to pay fees regularly. Missing these payments can result in the loss of your patent rights.

The Role of Patent Attorneys

Enlisting the help of a patent attorney can be a wise decision. These professionals are well-versed in patent law and can navigate the complexities of the patenting process, increasing your chances of success.

The Global Aspect

It’s important to note that patents are territorial. An approved patent in one country doesn’t protect your invention worldwide. If your autonomous vehicle innovation has international potential, you’ll need to file patents in various countries, which can be an intricate and costly process.

Innovations in Road Surface Recognition

LiDAR Sensors

Inventors developing autonomous vehicle systems are working to incorporate sensors that recognize road surfaces and objects, providing the information an autonomous car needs to navigate a three-dimensional world safely. These sensors identify vehicles, pedestrians, buildings, and other structures that could pose risks, and they help determine the optimal path through the drivable area.

LiDAR devices give sensor systems the ability to quickly gather 3D data about their environment and produce 3D maps. A LiDAR unit emits laser pulses; receivers record the return pulses, and the timing of each return is converted into distance, building a digital impression of the surrounding area, often faster and with greater accuracy than traditional camera-based imaging.
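
The core of this measurement is a time-of-flight calculation. Here is a minimal sketch assuming an idealized single-return model; the function names are illustrative, not taken from any particular device API:

```python
import math

# Speed of light in m/s.
C = 299_792_458.0

def pulse_delay_to_range(delay_s: float) -> float:
    """Range to target: the pulse travels out and back, so halve the path."""
    return C * delay_s / 2.0

def scan_to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert a polar LiDAR measurement into Cartesian (x, y, z)."""
    horizontal = range_m * math.cos(elevation_rad)
    return (horizontal * math.cos(azimuth_rad),
            horizontal * math.sin(azimuth_rad),
            range_m * math.sin(elevation_rad))
```

A 1-microsecond round trip corresponds to roughly 150 m of range, which is why return-pulse timing must be measured at nanosecond resolution.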

LiDAR can sense through gaps in obstacles such as tree canopies and works independently of ambient light, making it a good fit for urban environments where sunlight and rain levels shift frequently, though heavy rain or fog can still degrade returns. Modern LiDARs also offer very accurate sensing with wide fields of view, providing more effective results than older photogrammetric techniques.

Sensor systems may be combined with camera or image data to produce a representation of the drivable road region, and may include a computer programmed to identify traffic signs by both signal intensity and position. Traffic signs typically have higher reflectivity than surrounding surfaces; the sensor system detects this higher-intensity return and compares its location with an existing map of traffic signs to determine whether additional action is required.
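
As an illustration of that intensity-plus-position idea (not the patented implementation; the threshold and tolerance values are assumptions), high-reflectivity returns can be flagged and matched against a known sign map:

```python
import math

SIGN_INTENSITY_THRESHOLD = 0.8   # assumed normalized reflectivity cutoff
MATCH_RADIUS_M = 2.0             # assumed tolerance when matching to the map

def find_sign_candidates(points):
    """points: iterable of (x, y, intensity). Signs reflect more strongly
    than surrounding surfaces, so filter on return intensity."""
    return [(x, y) for x, y, i in points if i >= SIGN_INTENSITY_THRESHOLD]

def match_to_map(candidates, sign_map):
    """Pair each candidate with the first mapped sign within tolerance."""
    matches = []
    for cx, cy in candidates:
        for mx, my in sign_map:
            if math.hypot(cx - mx, cy - my) <= MATCH_RADIUS_M:
                matches.append(((cx, cy), (mx, my)))
                break
    return matches
```

A candidate with no nearby mapped sign could indicate a new or relocated sign, which is where the "additional action" decision would come in.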

Beyond the automotive industry, LiDAR is widely used by construction companies for land surveying and site preparation, while agriculture uses it to map crops and forests and monitor how efficiently those resources grow and are managed.


Drivable Road Surface Detection

Road surface recognition is an integral component of autonomous driving, allowing vehicles to sense their environment and navigate safely. Unfortunately, environmental conditions such as sun glare, rain, fog, and snow make camera-based recognition challenging, so this research focuses on techniques for optimizing camera performance in adverse weather.

The patented method combines camera images and LiDAR data to generate a dense three-dimensional map of drivable road surfaces, which is then refined using filters and smoothers to remove noise associated with non-road surfaces. The output is used to train deep convolutional neural networks (CNNs) for road surface detection, ultimately enabling the vehicle to detect and avoid obstacles without human input.

The patented technology offers one distinct advantage for autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) at low automation levels: the ability to identify slippery road surfaces such as ice, sand, and snow. By pinpointing these hazards, the system can help ADAS reduce accidents caused by slippery roadways.

The system can quickly separate drivable from non-drivable areas within a camera image frame by keeping only the points near the road in the LiDAR point cloud, then applying a secondary filter based on a density-based spatial clustering algorithm such as DBSCAN to reduce noise from non-road surfaces.
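
The secondary filter can be sketched with a simplified, pure-Python stand-in for DBSCAN’s core idea, keeping only points that have enough close neighbours; a real system would use a library implementation such as scikit-learn’s DBSCAN, and the eps and min_pts values here are assumed:

```python
import math

def density_filter(points, eps=0.5, min_pts=3):
    """Keep points with at least min_pts neighbours within eps,
    discarding sparse noise, the core idea behind DBSCAN."""
    kept = []
    for px, py in points:
        neighbours = sum(
            1 for qx, qy in points
            if (px, py) != (qx, qy) and math.hypot(px - qx, py - qy) <= eps
        )
        if neighbours >= min_pts:
            kept.append((px, py))
    return kept

# A dense cluster of road points survives; an isolated stray return does not.
road = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
noise = [(5.0, 5.0)]
```

Unlike full DBSCAN, this sketch does not expand clusters through border points, but it illustrates why low-density returns from non-road surfaces get discarded.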

Once the drivable area of a road has been determined, the surface map, together with ground-truth labels, is used to train deep CNNs for road surface detection. The networks learn to recognize various road conditions and obstacles, allowing vehicles to navigate safely across a range of weather conditions.

Apart from its safety advantages, drivable road surface recognition offers other benefits. Its detection capability can identify damage on the road and alert maintenance teams for fast repairs, and it helps optimize energy consumption by decreasing fuel use and extending battery range for electric vehicles.

Intensity Correction

Before autonomous vehicle technology became widely available, road surface detection systems relied heavily on visual information from cameras. Camera images can be impaired by environmental objects, weather, lighting, and other factors, so performance in real-world driving environments was limited.

To enhance the quality of this data, the intensity of scanned data points is corrected using a linear model of the intensity distribution. The correction removes systematic intensity variations that would otherwise distort the signal, providing a more accurate representation of the driving surface.

Intensity correction is governed by several parameters, including the polynomial order, which defines the kind of curve used to model and remove intensity variations, and edge sensitivity, which controls how aggressively the algorithm filters out high- and low-intensity patterns. Edge sensitivity can be set to zero to ignore edges in the scanned point cloud entirely, or increased to filter points near the edges of drivable surfaces more aggressively.
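
A minimal sketch of the range-based correction idea, assuming a first-order (linear) intensity trend as described above; the closed-form least-squares fit and guard value are illustrative, not taken from the patent:

```python
def correct_intensity(ranges, intensities):
    """Fit a line intensity = a * range + b by least squares and divide
    it out, so corrected intensity no longer trends with distance."""
    n = len(ranges)
    mean_r = sum(ranges) / n
    mean_i = sum(intensities) / n
    cov = sum((r - mean_r) * (i - mean_i) for r, i in zip(ranges, intensities))
    var = sum((r - mean_r) ** 2 for r in ranges)
    a = cov / var
    b = mean_i - a * mean_r
    # Guard against division by zero where the fitted trend crosses zero.
    return [i / max(a * r + b, 1e-9) for r, i in zip(ranges, intensities)]
```

After correction, a uniformly reflective surface should yield roughly constant values regardless of distance, which is what makes downstream thresholding reliable.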

Once drivable road surface points have been extracted from the RPS, a second filter can be applied to remove non-road points from the remaining point cloud. This is typically done with a well-known density-based clustering algorithm such as DBSCAN, which groups points that lie close together while discarding those in low-density regions, creating a more coherent representation of the drivable road surface.

The multimodal drivable surface detection module 200 creates a drivable road surface map that can be used by other vehicle subsystems (for instance, vehicle drive subsystem 142 and vehicle sensor subsystem 144), to train the deep CNNs used in vehicle control systems, and to conduct experiments and tests of new sensor technologies and applications, providing an effective alternative to conventional road surface recognition systems.

Data Processing

Road surface detection is a crucial function of many advanced driver assistance systems and autonomous vehicles, yet conventional visual detection systems are limited by factors like weather and lighting. Tools are therefore needed to optimize these systems’ performance in real-world driving conditions.

Accordingly, this invention provides a method and system for producing a drivable road surface representation from multimodal sensor data. In one embodiment, LiDAR point clouds are first filtered to remove points that are not close to or in front of the vehicle; a density-based clustering algorithm such as DBSCAN is then applied to the remaining points; finally, the resulting set of drivable points is formed into a road surface map that can be fed into the deep CNNs of vehicle control systems.
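
The first stage of that pipeline, keeping only points close to and in front of the vehicle, can be sketched as follows; the corridor thresholds are assumptions for illustration, not values from the patent:

```python
MAX_FORWARD_M = 50.0   # ignore returns far ahead of the vehicle
MAX_LATERAL_M = 6.0    # ignore returns far to either side
MAX_HEIGHT_M = 0.3     # road surface points sit near ground level

def filter_drivable_candidates(points):
    """points: iterable of (x_forward, y_lateral, z_up) in the vehicle
    frame. Keep only returns inside a forward-facing ground corridor."""
    return [
        (x, y, z) for x, y, z in points
        if 0.0 < x <= MAX_FORWARD_M
        and abs(y) <= MAX_LATERAL_M
        and abs(z) <= MAX_HEIGHT_M
    ]
```

The surviving points would then go to the density-based clustering stage before being assembled into the road surface map.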

The drivable point map can also be used to generate a list of driving conditions, such as ice, snow, or puddles with aquaplaning risk, which helps an autonomous or driver-assisted system determine the appropriate actions and control the vehicle effectively.
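
One hypothetical way such a condition list could feed vehicle control is to pick the most conservative limits across all detected conditions; every condition name and limit below is invented for illustration:

```python
# Hypothetical condition-to-action table; not from the patent.
CONDITION_ACTIONS = {
    "ice":         {"max_speed_kph": 30,  "following_gap_s": 4.0},
    "snow":        {"max_speed_kph": 40,  "following_gap_s": 3.5},
    "aquaplaning": {"max_speed_kph": 50,  "following_gap_s": 3.0},
    "dry":         {"max_speed_kph": 100, "following_gap_s": 2.0},
}

def plan_for_conditions(detected):
    """Combine detected conditions into one conservative driving plan:
    lowest speed limit, longest following gap."""
    plans = [CONDITION_ACTIONS[c] for c in detected if c in CONDITION_ACTIONS]
    if not plans:
        return CONDITION_ACTIONS["dry"]
    return {
        "max_speed_kph": min(p["max_speed_kph"] for p in plans),
        "following_gap_s": max(p["following_gap_s"] for p in plans),
    }
```

Taking the worst case across conditions is a common safety pattern: a stretch flagged as both snowy and at aquaplaning risk is driven under the snow speed limit with the snow following gap.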

Additionally, the control system 150 and multimodal drivable road surface detection module 200 may communicate with various network-based data sources and networks 120. These could include, but are not limited to, cellular data networks, AM/FM radio networks, pager and UHF networks, gaming and WiFi networks, peer-to-peer networks, voice over IP (VoIP) networks, and web-based data and content networks.