A deep dive into Generative AI with Sneha Sudhir Shetiya, Staff Software Engineer at TORC Robotics

Sneha Sudhir Shetiya, Staff Software Engineer at TORC Robotics, explores the integration of generative AI into Advanced Driver Assistance Systems (ADAS). 

Generative AI, including techniques like GANs, VAEs, and Transformer models, is revolutionizing ADAS by creating realistic synthetic data for training, enhancing safety and reliability. Sneha discusses the challenges of implementing generative AI in ADAS, particularly in ensuring robustness and data quality through advanced analysis techniques. Additionally, this interview examines the future role of generative AI in ADAS, highlighting emerging technologies like unsupervised learning and explainable AI.

1. Can you describe how generative AI (Artificial Intelligence) might be integrated into ADAS solutions? What specific aspects of ADAS benefit most from generative AI techniques? 

Generative Artificial Intelligence (GenAI) is a concept that has evolved from Artificial Intelligence. Earlier, neural networks were built on the basic mathematical principle of backpropagation. Over time, these networks were used to train machine learning models for prediction tasks, with use cases ranging from studying crop withering to object detection for basic self-driving features. Eventually, Natural Language Processing gained prominence and paved the way for Large Language Models; popular examples today are OpenAI’s GPT-4 (ChatGPT) and Google’s Gemini. It is now commonplace to use these technologies for code optimization and for routine work such as drafting emails and generating images.

GenAI, in a nutshell, can be defined as technology focused on creating new data that often resembles existing data. This is the main use case that can be leveraged in ADAS, since data collection plays a major role in training the models that will eventually drive vehicles on their own with various Level 2, 3 and 4 features. Some common types of this technology are elaborated below:

  1. Generative Adversarial Networks (GANs)

    This design has two neural networks: a generator and a discriminator. The generator’s job is to create new data, while the discriminator, as the name suggests, tries to distinguish real data from the generator’s creations (a minimal sketch appears after this list).

    The adversarial interplay between the two yields the desired creative result.

    It is mainly used for creating product designs, composing music and developing novel materials.

  2. Variational Autoencoders (VAEs)

    VAEs also generate new data, using encoding/decoding techniques. A neural network compresses the training data when encoding and then decodes it back to reconstruct it. The VAE introduces randomness when decoding, which helps it create varied outputs that are similar to the input data (a minimal sketch appears after this list).

    It is mainly used to create variations in existing artwork, generate realistic 3D models and detect anomalies.

  3. Autoregressive models

    These ML models generate data sequentially, predicting the next element based on the previously generated elements. This can be text, code or any other sequential data type. 

    They are mainly used in creative writing, chatbot responses and music composition.

  4. Transformer-based models

    These are another set of powerful neural network-based models that deal with sequential data. They are used for generative tasks by conditioning the model on a specific prompt or theme to generate new, relevant data.

    They are mainly used for generating creative text formats such as poems and scripts, translating languages and summarizing text.

  5. Reinforcement learning for generative tasks

    Here, a reinforcement learning agent interacts with its environment and receives positive feedback (a reward) for generating the desired types of data. Over time, the agent learns to create increasingly better outputs based on that reward signal.

    This has major applications in robotics, game-level design and autonomous systems.
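To make the generator/discriminator interplay concrete, here is a minimal, illustrative PyTorch sketch, not a production ADAS pipeline: the “real” data is a stand-in Gaussian for sensor-derived feature vectors, and the network sizes and training schedule are arbitrary assumptions.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce toy
# "sensor feature vectors" that a discriminator cannot tell apart from real
# samples. Dimensions and data are placeholder assumptions.
import torch
import torch.nn as nn

LATENT_DIM, FEATURE_DIM = 16, 32

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, FEATURE_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),  # single real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for real sensor-derived features (e.g. embeddings of camera
    # or LiDAR frames); here just a shifted Gaussian for illustration.
    return torch.randn(n, FEATURE_DIM) * 0.5 + 1.0

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT_DIM))

    # 1) Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make the discriminator output "real" (1).
    g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, generator(torch.randn(k, LATENT_DIM)) yields synthetic samples.
```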
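Similarly, a minimal VAE sketch, again with placeholder data and dimensions, showing the encode, reparameterise and decode steps and how new samples are drawn after training:

```python
# Minimal VAE sketch (illustrative only): an encoder compresses toy feature
# vectors into a small latent space; the decoder reconstructs them, and the
# injected randomness lets us sample new, similar data after training.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEATURE_DIM, LATENT_DIM = 32, 8

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(FEATURE_DIM, 64)
        self.mu = nn.Linear(64, LATENT_DIM)
        self.logvar = nn.Linear(64, LATENT_DIM)
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, FEATURE_DIM))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation: sample a latent code around the encoded mean.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(64, FEATURE_DIM)          # placeholder training data
    recon, mu, logvar = model(x)
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + kl
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling new data: decode random latent vectors.
with torch.no_grad():
    new_samples = model.dec(torch.randn(10, LATENT_DIM))
```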

 2. What are the main challenges you’ve encountered while implementing generative AI in ADAS, particularly in terms of safety and reliability? How are these challenges addressed to ensure the robustness of the systems? 

In data collection strategies using GenAI, there is a high risk of redundant data, unlike the raw data that is fetched through manual collection with sensors. This can be mitigated with better algorithms trained on a diverse dataset.

In terms of safety, there is a need for thorough FMEA (Failure Mode and Effects Analysis) and FTA (Fault Tree Analysis) covering the various use cases in which sensor data can go wrong for different ADAS features. GenAI-generated data can then serve as input to these processes to validate its accuracy, which also helps in determining its reliability. For safety, one must determine the parameters of severity, exposure and controllability in every scenario. Unlike functional safety, which enables us to do HARA (Hazard Analysis and Risk Assessment) and assign the appropriate ASIL levels for hardware development, for ADAS features one has to rely more on the SOTIF concepts of ISO 21448 (formerly ISO/PAS 21448). Here the analysis is done solely on the basis of the S, E and C parameters discussed above.
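As a rough illustration of how the severity (S), exposure (E) and controllability (C) parameters combine, the toy function below uses the commonly cited sum shortcut for the ISO 26262 HARA/ASIL table. SOTIF itself does not assign ASILs, and the example scenarios and their S/E/C values are invented, so treat this purely as an intuition aid.

```python
# Toy rating of a scenario by Severity (S1-S3), Exposure (E1-E4) and
# Controllability (C1-C3). The sum shortcut below mirrors the ISO 26262
# HARA/ASIL lookup; SOTIF (ISO 21448) reuses the S/E/C idea but does not
# assign ASILs, so this is for illustration only.

def risk_class(s: int, e: int, c: int) -> str:
    """s in 1..3, e in 1..4, c in 1..3 -> 'QM' or 'ASIL A'..'ASIL D'."""
    total = s + e + c
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

# Hypothetical scenarios (values are illustrative, not from any real HARA):
scenarios = {
    "highway lane-keep, sensor dropout": (3, 4, 2),
    "parking-lot auto-brake, false positive": (1, 3, 1),
}
for name, (s, e, c) in scenarios.items():
    print(f"{name}: {risk_class(s, e, c)}")
```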

 3. Generative AI models require vast amounts of data for training. How is data collection, labeling, and processing managed to train generative AI models effectively for ADAS applications? 

Generative AI models for ADAS applications require a robust data pipeline for collection, labeling, and processing. Here’s how it’s managed: 

Data Collection: 

  • Sensor data: Cameras, LiDAR, radar, and ultrasonic sensors in autonomous vehicles capture vast amounts of real-world driving data, including road scenes, objects, and traffic behavior. 
  • Simulated data: Simulations can generate diverse and controlled scenarios to augment real-world data. This helps address rare events or edge cases difficult to capture in real-world driving. 

Data Labeling: 

  • Human labeling: Specialized teams label the collected data, identifying objects (cars, pedestrians, traffic signs), actions (lane changes, braking), and environmental conditions (weather, lighting). This is crucial for supervised learning. 
  • Generative Pre-labeling: Generative AI models can be used to pre-label data, suggesting classifications for human reviewers. This can significantly speed up the labeling process. 
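A minimal sketch of the pre-labeling triage idea just described, assuming a hypothetical upstream model that emits labels with confidence scores: confident suggestions become draft labels, uncertain ones go to human reviewers. The class names and threshold are illustrative assumptions.

```python
# Sketch of model-assisted pre-labeling (hypothetical model output and
# threshold): confident predictions become draft labels, uncertain ones are
# queued for human review.
from dataclasses import dataclass

@dataclass
class PreLabel:
    frame_id: str
    label: str
    confidence: float

def triage(predictions: list[PreLabel], accept_at: float = 0.9):
    auto, review = [], []
    for p in predictions:
        (auto if p.confidence >= accept_at else review).append(p)
    return auto, review

# Example with made-up model output:
preds = [PreLabel("frame_001", "pedestrian", 0.97),
         PreLabel("frame_002", "traffic_sign", 0.62)]
auto_accepted, needs_review = triage(preds)
```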

Data Processing: 

  • Data cleaning: The collected data may contain errors or inconsistencies. Techniques like filtering and normalization are used to ensure data quality. 
  • Data augmentation: Techniques like random cropping, flipping, and adding noise can artificially increase the size and diversity of the labeled data, improving the model’s generalization ability. 
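A short sketch of the augmentation techniques listed above (random crop, horizontal flip, added noise) on a single image array; the crop size and noise level are illustrative values, not tuned parameters.

```python
# Simple data-augmentation sketch with NumPy: random crop, random horizontal
# flip, and additive Gaussian noise on an image array (illustrative values).
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray, crop: int = 200) -> np.ndarray:
    h, w = img.shape[:2]
    # Random crop to a crop x crop window.
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    out = img[y:y + crop, x:x + crop].astype(np.float32)
    # Random horizontal flip.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # Additive Gaussian noise (std in pixel-value units).
    out += rng.normal(0.0, 5.0, size=out.shape)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example on a fake 240x320 RGB frame:
frame = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
augmented = augment(frame)
```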

By effectively managing these steps, developers can ensure their generative AI models for ADAS are trained on high-quality, diverse data, leading to robust and reliable performance in real-world driving scenarios. 

4. Looking forward, how do you envision the role of generative AI evolving in the next generation of ADAS? Are there any emerging technologies or methodologies being explored to enhance the capabilities of generative AI in this field? 

Generative AI will move beyond mimicking real-world data to create entirely new and even more diverse scenarios. This allows for training on rare events, extreme weather conditions, and unexpected situations, making ADAS systems more prepared for the unknown. 

Generative AI could personalize the driving experience by tailoring ADAS functionalities to individual driver behavior and preferences. Some technologies are listed below: 

  • Unsupervised and Federated Learning: Current approaches rely heavily on labeled data. Advancements in unsupervised learning will allow generative AI models to learn directly from raw sensor data, reducing the need for extensive manual labeling. Federated learning involves collaborative training on data distributed across multiple vehicles, improving data privacy and addressing geographical variations in driving scenarios (a minimal averaging sketch follows this list). 
  • Explainable AI (XAI): XAI techniques will help us understand how generative AI models make decisions, leading to increased trust and transparency in their operation within ADAS systems. This can also be used to help prevent cybersecurity-related attacks.
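As a sketch of the federated learning point above, the following FedAvg-style loop uses a toy least-squares objective and made-up per-vehicle data to show the key property: only model weights, never raw sensor data, are shared and averaged by the server.

```python
# Minimal federated-averaging (FedAvg-style) sketch: each "vehicle" trains
# locally on its own private data and only model weights are shared and
# averaged. The objective, data, and round count are made-up for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_VEHICLES, DIM = 5, 10

def local_update(w: np.ndarray, data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    # One gradient step of a toy least-squares objective (1/n)||Xw - 1||^2 on
    # the vehicle's private data; the raw data never leaves the vehicle.
    grad = 2 * data.T @ (data @ w - np.ones(len(data))) / len(data)
    return w - lr * grad

global_w = np.zeros(DIM)
vehicle_data = [rng.normal(size=(100, DIM)) for _ in range(N_VEHICLES)]

for round_ in range(20):
    local_weights = [local_update(global_w.copy(), d) for d in vehicle_data]
    global_w = np.mean(local_weights, axis=0)   # server aggregates weights only
```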

5. You joined a panel on Future-Proofing Functional Safety Strategies at AutoSens USA in May. Can you tell us how GenAI might support safety systems for ADAS and AVs? Why are platforms like AutoSens important for facilitating conversations on topics such as this?

As mentioned above, GenAI can be used heavily for FMEA and FTA analysis across various safety scenarios of ADAS features. Although GenAI technology helps improve our ML models today, leveraging this software on the vehicle requires robust sensors. I had the privilege of checking out some amazing sensor variants at the conference in May 2024. There were companies providing datasets for specific locations and weather conditions tailored to OEM needs, as well as companies offering 8-megapixel cameras, thermal cameras and compute platforms that deliver the high computing power needed to run complex software with GenAI features. This is not only high in complexity, but the package size also increases significantly as more and more safety features are added. For features like Over-The-Air (OTA) updates, as well as security and authentication, to work efficiently, the compute platform plays a pivotal role. 

At the conference, there were open discussions on the safety of ADAS features and on how autonomous software can be leveraged for the trucking industry. These discussions are essential for different ideas to evolve in the autonomous world. 
