Research

nxtAIM will leverage the potential of Generative AI for the development of autonomous driving. The project aims to overcome current limitations and provide innovative solutions using new generative methods to master the significant challenges in this field.

Project goal: What does nxtAIM want to achieve?

The hurdles on the path to higher levels of automated driving remain significant. Particularly with regard to scalability (data and costs), transferability (expansion of Operational Design Domains, ODD), and traceability (safety and acceptance), current AI methods and tools are reaching their limits. Against this background, Generative AI offers promising new opportunities and alternative research approaches. The technology has reached a high level of maturity and has impressively demonstrated its capabilities over the last two years with text-to-image generators and large language models.

nxtAIM seeks to harness the immense potential of Generative AI for the development of autonomous driving. The re-purposing of existing methods and tools could significantly accelerate development. Central to the project are generative, self-supervised learning foundation models as a new paradigm. The partners are leveraging AI for both situational interpretation and planning, as well as for system optimization within a differentiable, bidirectional chain of effects.

Which methods are used?

In the project, Generative AI is used in four research fields of autonomous driving:

1. Generation of multimodal sensor data for training and validation; offline and during operation
2. Generation of traffic scenarios for prediction and planning
3. Development of foundation models and their conditioning
4. System integration and bidirectional information flow: feedback loop for the chain of effects

nxtAIM research fields and their teams

The work of nxtAIM is organized in six sub-projects. Sub-projects 1, 2 and 3 focus on different parts of the chain of effects. Sub-project 1 aims to generate sensor data from the environment model for individual time steps. Sub-project 2 extends this to include the dynamic development of the situation and aims to create generative models for sequences of sensor data.

Generative models enable the sampling not only of sensor data, but of time series in general. Sub-project 3 uses this to generate traffic scenarios in geometrically abstracted environment models and thus to improve prediction and planning algorithms.
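As a toy illustration of sampling a traffic scenario as a time series, the sketch below draws a 2-D vehicle trajectory autoregressively, one step conditioned on the last. The Gaussian step model is a stand-in for the learned generative models the sub-project develops, not the project's actual method.

```python
import numpy as np

def sample_trajectory(n_steps, rng, step_std=0.5):
    """Sample a 2-D vehicle trajectory autoregressively.

    Each position is drawn conditioned on the previous one: nominal
    forward motion plus Gaussian noise. This toy step model stands in
    for a learned autoregressive generative model p(x_t | x_{<t}).
    """
    positions = np.zeros((n_steps, 2))
    velocity = np.array([1.0, 0.0])  # nominal forward motion per step
    for t in range(1, n_steps):
        noise = rng.normal(scale=step_std, size=2)
        positions[t] = positions[t - 1] + velocity + noise
    return positions

rng = np.random.default_rng(0)
traj = sample_trajectory(20, rng)  # one sampled scenario rollout
```

Repeated calls yield different but statistically plausible rollouts, which is exactly the property prediction and planning algorithms exploit when evaluating candidate futures.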

Generative models are typically based on a latent feature space. Structuring and interpretation of this space is the task of sub-project 4. Sub-project 5 deals with how the new approaches can be implemented in a system that can be executed in the vehicle. Finally, sub-project 6 will evaluate the systems developed in sub-projects 1, 2 and 3 in terms of their degree of realism.

The sub-projects at a glance:

SP 1

Generative models for sensor technology


The goal of SP1 is the development of a generative foundation model for use in the automotive sector. This model is intended to be multimodal, enabling the generation of data for various sensors, such as cameras or LiDAR, particularly for scenarios that are often underrepresented in real-world datasets.


Björn Ommer

(SP CO-LEAD)

“The innovation lies in adapting existing models for camera data generation and extending them to other sensor types such as LiDAR and radar. This approach helps to overcome bottlenecks in data availability. The synthetic data is particularly valuable for critical domains such as nighttime and adverse weather scenarios, as real-world measurements in these situations are difficult to obtain and costly.
By defining semantic metrics, the plausibility and realism of the generated data become accessible and ultimately ensured. Integrating these metrics into learned models enables error detection and correction, while deep sensor fusion supports more comprehensive and precise environmental perception. These approaches enhance the robustness and reliability of perception functions in challenging situations and expand the Operational Design Domain (ODD). This has far-reaching implications, not only for the automotive industry but also for other sectors that rely on accurate and diverse sensor data.”

SP 2

Generative autoregressive models for image sequences


In SP2, we will use extensive data to create a comprehensive visual world model for the domain of driving.


Thomas Brox

(SP CO-LEAD)

“Up to now, every situation in automated driving, no matter how rare, has to be modeled explicitly. With this project, we aim to change that.”

SP 3

Generative models for abstract scenarios and planning


In the sub-project ‘Generative models for abstract scenarios and planning,’ we are expanding the application of generative AI in automated driving to the generation of road layouts, future scenarios, and control commands.


Hanno Gottschalk

(SP CO-LEAD)

“Generative AI can do more than just generate language or images.”

SP 4

Automotive Foundation Models and Latent Space

SP4 enables the widespread use of foundation models in autonomous driving through scaling and the utilization of large datasets. The key components are the integration of extensive data and the associated connection to the immense computing resources of the supercomputer in Jülich.


Julian Wiederer

(SP LEAD)

“The scaling of data, models, and computing resources has led to impressive results in language models. We are leveraging this potential for the further development of autonomous driving.”

SP 5

Automotive Scalability


For large AI systems to run in the vehicle, the on-board computing units must provide sufficient resources, such as processing power and memory. This sub-project therefore develops methods that transform AI systems before deployment, ensuring that their resource requirements stay within the vehicle's limits.
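A minimal example of such a resource-reducing transformation is post-training weight quantization. The numpy sketch below is an illustration of the general idea, not the sub-project's actual tooling: it compresses float32 weights to int8, cutting their memory footprint by 4x at a small, bounded accuracy cost.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of a weight tensor to int8.

    Returns the quantized integers plus the scale needed to dequantize.
    One of the simplest model transformations used to shrink memory
    footprint before deployment on resource-constrained hardware.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and scale."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
error = np.abs(dequantize(q, s) - w).max()  # bounded by ~scale / 2
```

In practice such transformations are combined with pruning, distillation, and hardware-aware compilation, and the tolerable quantization error is validated per driving function.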


Arun Thirunavukkarasu

(SP CO-LEAD)

“We will jointly create a development platform with all partners that enables automotive developers to design their AI applications with hardware and resource awareness from the very beginning. In doing so, we contribute to bringing better AI-based driving functions into vehicles faster and more efficiently.”

Saqib Bukhari

(SP LEAD)

“Some of our main focuses in this subproject are:
1) the development of an optimization framework for AI applications that makes them suitable for execution on resource-constrained automotive hardware, and
2) the adaptation of AI algorithm optimization to meet the specific requirements of different vehicle types.”

SP 6

Plausibility check


This SP centralizes validation for the various data types using tailored plausibility tests, since the diversity of the data rules out a one-size-fits-all solution. Regulatory bodies in the automotive sector permit virtual testing; to assess its suitability, the differences between real and AI-generated data will be analyzed, and metrics to measure and describe this domain gap will be researched and developed.
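One standard family of sample-based metrics for such a domain gap is the maximum mean discrepancy (MMD). The sketch below is an illustration of the concept, not the metrics the sub-project will develop: it scores how far a set of generated feature vectors lies, distributionally, from a set of real ones.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy (MMD^2) with an RBF kernel.

    A sample-based estimate of the distance between the distributions
    that produced feature sets x and y: near zero when the two sets
    are statistically indistinguishable, larger under a domain gap.
    """
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))               # "real" features
same = rng.normal(size=(200, 8))               # same distribution
shifted = rng.normal(loc=1.0, size=(200, 8))   # domain shift

gap_small = rbf_mmd2(real, same)
gap_large = rbf_mmd2(real, shifted)
```

In a validation pipeline, `x` and `y` would be perception features extracted from real and generated sensor data; the sub-project's point is precisely that a single scalar like this is not sufficient on its own.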


Leonard Schroven

(SP LEAD)

“Data generation with generative AI is not perfect, so methods for validity testing are necessary. Our core innovation: plausibility is not a one-dimensional metric. It is essential to research and develop methods that not only assess data plausibility but also provide meaningful insights relevant to the application’s context.”
