Autonomous Grand Challenge 2025

In conjunction with

Features

  • Prize

    A total cash pool of up to USD 100,000.

    Over USD 15,000 in cash awards for the winner of a single track (Innovation Award + Outstanding Champion).

  • Travel Grant

    A USD 1,500 travel grant for each selected team across all tracks.

    For travel grant eligibility, please refer to the general rules.

Participate

  • To participate in the challenge, you must register your team by filling out this Google Form. Registration information can be modified until May 10 [CVPR 2025]. For more details, please check the general rules.

Contact

Timeline

  • Challenge Start

  • Test Server Open

  • [CVPR 2025] Submission Deadline

  • [CVPR 2025] Technical Report Submission Deadline

  • [CVPR 2025] Travel Grant Receiver Notification

  • [CVPR 2025] Winner Notification

  • [ICCV 2025] Test Server Close / Submission Deadline

  • [ICCV 2025] Technical Report Submission Deadline

  • [ICCV 2025] Travel Grant Receiver Notification

  • [ICCV 2025] Winner Notification

  • [ICCV 2025] Winner Presentation

  • Winner Presentation at Sibling Event in China

General Rules

Eligibility

  • Each participant must belong to a team and cannot be a member of multiple teams.
  • Participants can form teams of up to 10 members.
  • A team must be registered for participation.
  • A team is limited to one submission account.
  • A team can participate in multiple tracks.
  • An entity can have multiple teams.
  • Attempting to hack the test set or engaging in similar behaviors will result in disqualification.

Technical

  • All publicly available datasets and pretrained weights are allowed. The use of private datasets or pretrained weights is prohibited.
  • The use of future frames is prohibited except where explicitly stated.
  • The use of data must be described explicitly in the technical report.
  • All technical reports will be made public after the challenge is concluded.

Award & Travel Grant

  • Participants will receive certificates.
  • To be eligible for awards and travel grants:
    • Teams must make their results public on the leaderboard before the submission deadline and continue to keep them public thereafter;
    • Teams must submit a technical report for each track via OpenReview (TBD), as a PDF of at most 4 pages (excluding references);
    • If requested, teams must provide their code, docker image, or necessary materials to the award committee for verification;
    • Organizers of a track cannot claim any award of that track. However, they may participate in other tracks, since they do not have access to those tracks' test data either.
  • Innovation Awards will be decided by the award committee through a double-blind review of the technical reports. Winners of Innovation Awards can overlap with the Outstanding Champion and Honorable Runner-up.
  • Winners will be invited to present during the workshops or sibling event.
  • The organizers reserve the right to update the rules in response to unforeseen circumstances in order to better serve the mission of the challenge. The organizers reserve the right to disqualify teams that violate the rules, and retain the right of final interpretation.

FAQ

How do we/I download the data?

For each track, we provide download links for the data in the repository. The repository, which may also contain the dataset API, a baseline model, and other helpful information, is a good place to begin your participation.

How many times can we/I make a submission?

Each track has its own submission limit. Please refer to the test server for each track.

Can we/I use future frames during inference?

Future frames are not allowed unless explicitly noted for a track.

Which template of a technical report should we/I follow? How to submit the technical report?

Technical reports must be submitted as PDF files; we recommend using the CVPR 2025 template. Submission will be via OpenReview (TBD).

  • [CVPR 2025]
    • Outstanding Champion: USD 3,000
  • [ICCV 2025]
    • Innovation Award: USD 9,000
    • Outstanding Champion: USD 6,000

Test Server

To be released on Mar. 01.

Task Description

A world model is a computer program that can imagine how the world evolves in response to an agent's behavior. It has the potential to solve general-purpose simulation and evaluation, enabling robots that are safe, reliable, and intelligent in a wide variety of scenarios. To help accelerate progress towards solving world models for robotics, our challenge invites participants to develop a robot world model that can accurately simulate the future interactions of humanoid robots with their surroundings based on past observations and agent actions. Participants will demonstrate the compression rate, efficiency, and evaluation ability of their approaches.

Contact

  • [CVPR 2025]
    • Outstanding Champion: USD 3,000
  • [ICCV 2025]
    • Innovation Award: USD 9,000
    • Outstanding Champion: USD 6,000

Test Server

To be released on Mar. 01.

Task Description

Benchmarking sensorimotor driving policies with real data is challenging due to the misalignment between open- and closed-loop metrics. The NAVSIM framework aims to address this with simulation-based metrics, by unrolling simplified bird's eye view abstractions of scenes for a short simulation horizon. Going beyond evaluating driving systems in states observed during human data collection, this year's NAVSIM v2 challenge introduces reactive background traffic participants and realistic synthetic novel view inputs to better assess robustness and generalization.

The private test set is provided by nuPlan from Motional.

Contact

Citation

If the challenge helps your research, please consider citing our work with the following BibTeX:

@article{Dauner2024ARXIV,
    title = {NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking},
    author = {Daniel Dauner and Marcel Hallgarten and Tianyu Li and Xinshuo Weng and Zhiyu Huang and Zetong Yang and Hongyang Li and Igor Gilitschenski and Boris Ivanovic and Marco Pavone and Andreas Geiger and Kashyap Chitta},
    journal = {arXiv},
    volume = {2406.15349},
    year = {2024}
}
  • Innovation Award: TBD
  • Outstanding Champion: TBD
  • Honorable Runner-up: TBD

Task Description

Most existing robot learning benchmarks fall short when it comes to addressing real-world challenges, particularly those arising from low-quality data and limited sensing capabilities. In contrast, this competition offers a fresh perspective on broad areas of humanoid robotics. The challenge aims to thoroughly evaluate and further advance the core capabilities of robots equipped with dexterous hands and tactile sensors in the realm of embodied intelligence. Through a diverse set of tasks within multi-agent real-world scenarios, the competition explores the instruction comprehension, manipulation, and generalization capabilities of robotic systems.

Please stay tuned for more details!

Organizers

General

Huijie Wang

OpenDriveLab

Yihang Qiu

The University of Hong Kong

Shijia Peng

OpenDriveLab / Shenzhen University

World Model Challenge by 1X

Eric Jang

1X

Jack Monas

1X

Daniel Ho

1X

NAVSIM v2 End-to-End Driving Challenge

Marcel Hallgarten

University of Tübingen

Kashyap Chitta

University of Tübingen

Daniel Dauner

University of Tübingen

Tianyu Li

OpenDriveLab

Wei Cao

Bosch

Pat Karnchanachari

Motional