Overview


Why these Challenges?

The field of autonomous driving (AD) is rapidly advancing. While cutting-edge algorithms remain a crucial component, achieving high mean average precision (mAP) for object detection or conventional segmentation for lane recognition is no longer the paramount goal. Rather, we posit that the future of AD algorithms lies in the integration of perception and planning. In light of this, we propose four newly curated challenges that embody this philosophy.

Motivation of the Challenges: bind perception more closely with planning.


Track 1
OpenLane Topology Challenge
Go beyond conventional lane-line detection as segmentation: recognize lanes as an abstraction of the scene (centerlines) and build the topology among lanes and between lanes and traffic elements. Such a topology facilitates planning and routing.
Track 2
Online HD Map Construction Challenge
Traditional mapping pipelines require a vast amount of human effort to maintain, which limits their scalability. This task aims to dynamically construct local maps with rich semantics based on onboard sensors. The vectorized map can be further utilized by downstream tasks.
Track 3
3D Occupancy Prediction Challenge
The representation of 3D bounding boxes is not enough to describe general objects (obstacles). Instead, inspired by the occupancy-grid concept in robotics, we frame general object detection as an occupancy representation that covers irregularly shaped objects (e.g., protruding structures). The output can also be fed as a cost volume for planning. This idea was also endorsed by Mobileye at CES 2023 and Tesla AI Day 2022.
Track 4
nuPlan Planning Challenge
To verify the effectiveness of newly designed perception modules, we need a holistic planning framework with a closed-loop setting. Previous motion planning benchmarks focus on short-term motion forecasting and are limited to open-loop evaluation. nuPlan introduces long-term planning of the ego vehicle and corresponding metrics.


Contact

Contact us via [email protected] with the prefix [CVPR 2023 E2EAD].

Join Slack to chat with Challenge organizers. Please follow the guidelines in #general and join the track-specific channels.

Summary

The Autonomous Driving Challenge at CVPR 2023 has just wrapped up! We witnessed intensive engagement from the community: numerous minds from universities and corporations across China, Germany, France, Singapore, the United States, the United Kingdom, and beyond joined to tackle these challenging tasks in autonomous driving. With over 270 teams from 15 countries and regions, the challenge has been a true showcase of global talent and innovation. Across more than 2,300 submissions, the top spots were fiercely contested. We received a few inquiries regarding eligibility, challenge rules, and technical reports; rest assured that all concerns have been appropriately addressed. The fairness and integrity of the Challenge have always been our highest priority.

Track 1
OpenLane Topology

GitHub · EvalAI

We are happy to announce an important update to the OpenLane family, featuring two sets of additional data and annotations, namely Standard-definition (SD) Map and Map Element Bucket. Check out our GitHub repository for more details on the update, leaderboard, and upcoming challenge in 2024.


Leaderboard (Server remains active)

  • # Participating Teams: 34
  • # Countries and Regions: 4
  • # Submissions: 700+
  • Most methods achieved an OLS in the range of 30 to 40. Notably, one method emerged as the clear frontrunner, demonstrating remarkably superior performance with an OLS of 55.

    [Leaderboard table: Rank, Country / Region, Institution, OLS (primary), Team Name, $\text{DET}_{l}$, $\text{DET}_{t}$, $\text{TOP}_{ll}$, $\text{TOP}_{lt}$]

    Innovation Award goes to "PlatypusWhisperers" for

    The team PlatypusWhisperers introduces TopoMask, an innovative approach for predicting centerlines in road topology. By utilizing an instance-mask-based formulation and a quad-direction label representation, TopoMask effectively addresses the overlap of centerlines and extends the segmentation-based paradigm to scene-understanding tasks.



    Task Description

    The OpenLane-V2 dataset is a perception and reasoning benchmark for scene structure in autonomous driving. Given multi-view images covering the whole panoramic field of view, participants are required to deliver not only perception results for lanes and traffic elements but also the topology relationships among lanes and between lanes and traffic elements.
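
    For concreteness, a per-frame result could be organized as in the sketch below. The field names are illustrative assumptions rather than the authoritative submission schema; consult the GitHub repository for the exact format.

    ```python
    # Hypothetical per-frame result (field names are illustrative only;
    # see the OpenLane-V2 GitHub repository for the authoritative schema).
    result = {
        "lane_centerline": [
            # each centerline is an ordered list of 3D points plus a confidence
            {"points": [[5.0, -1.8, 0.0], [15.0, -1.7, 0.1]], "confidence": 0.92},
            {"points": [[5.0, 1.8, 0.0], [15.0, 1.9, 0.1]], "confidence": 0.88},
        ],
        "traffic_element": [
            # 2D boxes in the front-view image with a category attribute
            {"bbox": [870, 400, 910, 480], "attribute": 4, "confidence": 0.81},
        ],
        # topology_lclc[i][j] = 1 if centerline i connects to centerline j
        "topology_lclc": [[0, 1], [0, 0]],
        # topology_lcte[i][k] = 1 if centerline i is governed by traffic element k
        "topology_lcte": [[1], [0]],
    }
    ```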


    Participation

    The primary metric is the OpenLane-V2 Score (OLS), which comprises the evaluation of three sub-tasks. On the website, we provide tools for data access, model training, evaluation, and visualization. To submit your results on EvalAI, please follow the submission instructions.
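
    As a rough sketch of how the sub-metrics listed on the leaderboard combine into OLS, assuming the formulation described in the OpenLane-V2 paper (the two topology scores are square-rooted before averaging with the two detection mAPs):

    ```python
    import math

    def ols(det_l: float, det_t: float, top_ll: float, top_lt: float) -> float:
        """OpenLane-V2 Score, assuming the paper's formulation: the average
        of the two detection mAPs and the square-rooted topology scores.
        All inputs are in [0, 1]."""
        return (det_l + det_t + math.sqrt(top_ll) + math.sqrt(top_lt)) / 4
    ```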


    Award

    Outstanding Champion USD $15,000
    Honorable Runner-up USD $5,000
    Innovation Award USD $5,000

    Contact

  • Huijie Wang (OpenDriveLab), [email protected]
  • Slack channel: #openlane-challenge-2023


    Related Literature

  • Topology Reasoning for Driving Scenes
  • OpenLane-V2: A Topology Reasoning Benchmark for Scene Understanding in Autonomous Driving
  • PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark
  • Structured Bird's-Eye-View Traffic Scene Understanding From Onboard Images
  • MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction






Track 2
Online HD Map Construction

    GitHub · EvalAI


    Leaderboard

  • # Participating Teams: 42
  • # Countries and Regions: 3
  • # Submissions: 500+
  • Competition for the second through sixth positions on the leaderboard was highly intense, with less than 3 points in mAP separating them. In contrast, the first-place team demonstrated a significant lead of almost 10 points in mAP.

    [Leaderboard table: Rank, Country / Region, Institution, mAP (primary), Team Name, Ped Crossing, Divider, Boundary]

    Innovation Award goes to "MACH" for

    The team MACH introduces the MaskDino method into map detection tasks, combining the advantages of both vectorization and rasterization as two different map representations. They also propose a simple and practical post-processing-based method for model ensembling. These contributions exhibit strong novelty.


    Task Description

    Compared to conventional lane detection, the constructed HD map provides richer semantic information with multiple categories. Vectorized polyline representations are adopted to handle complicated and even irregular road structures. Given inputs from onboard sensors (cameras), the goal is to construct the complete local HD map.


    Participation

    The primary metric is mAP based on Chamfer distance over three categories, namely lane divider, boundary, and pedestrian crossing. Please refer to our GitHub for details on data and evaluation. Submission is conducted on EvalAI.
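
    The matching criterion can be illustrated with the minimal sketch below. This is a simplification under stated assumptions, not the official implementation: the official evaluation resamples polylines to a fixed resolution and averages AP over several distance thresholds (such as 0.5, 1.0, and 1.5 m).

    ```python
    import numpy as np

    def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
        """Bidirectional Chamfer distance between two polylines, each an
        (N, 2) array of points sampled in BEV meters. A simplified sketch;
        the official evaluation interpolates polylines to a fixed resolution."""
        d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    # A prediction counts as a true positive when its Chamfer distance to an
    # unmatched ground-truth instance of the same class falls below the
    # current threshold; AP is then averaged over thresholds and classes.
    ```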


    Award

    Outstanding Champion USD $15,000
    Honorable Runner-up USD $5,000
    Innovation Award USD $5,000

    Contact

  • Tianyuan Yuan (MARS Lab), [email protected]
  • Slack channel: #map-challenge-2023


    Related Literature

  • HDMapNet: An Online HD Map Construction and Evaluation Framework
  • VectorMapNet: End-to-end Vectorized HD Map Learning
  • InstaGraM: Instance-level Graph Modeling for Vectorized HD Map Learning






Track 3
3D Occupancy Prediction

    GitHub · EvalAI

    We are happy to announce the Largest 3D Occupancy Prediction Benchmark in autonomous driving. Check out our GitHub repository for more details on the dataset, leaderboard, and upcoming challenge in 2024.


    Leaderboard (Server remains active)

  • # Participating Teams: 149
  • # Countries and Regions: 10
  • # Submissions: 400+
  • This was one of the most fiercely contested tracks, with almost 150 participating teams. The difference in scores between the top 20 teams was less than 10 points.

    [Leaderboard table: Rank, Country / Region, Institution, mIoU (primary), Team Name]

    Innovation Award goes to "NVOCC" for

    The innovation of team NVOCC deviates from conventional 3D-to-2D or 2D-to-3D priors and offers fresh insights into the design of view-transformation modules. The FB-OCC method demonstrates substantially improved performance compared to prior approaches.

    Innovation Award goes to "occ_transformer" for

    The innovation by team occ_transformer sheds light on the disparities between detection models and occupancy models, and introduces an initial strategy for transforming bounding boxes (bboxes) into occupancy representations.




    Task Description

    Unlike previous perception representations, which depend on predefined geometric primitives or the perceived data modalities, occupancy enjoys the flexibility to describe entities of arbitrary shape. In this track, we provide a large-scale occupancy benchmark. Given multi-view images covering the whole panoramic field of view, participants are required to provide the occupancy state and semantics of each voxel in 3D space for the complete scene.


    Participation

    The primary metric of this track is mIoU. On the website, we provide detailed information for the dataset, evaluation, and submission instructions. The test server is hosted on EvalAI.
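
    A minimal sketch of the mIoU computation over per-voxel class labels is given below. It is a simplification: the official evaluation additionally restricts scoring to voxels covered by the provided visibility mask, which is omitted here.

    ```python
    import numpy as np

    def miou(pred: np.ndarray, gt: np.ndarray, num_classes: int,
             ignore_index: int = 255) -> float:
        """Mean IoU over semantic classes for voxelized predictions.
        `pred` and `gt` are integer arrays of per-voxel class labels.
        A minimal sketch; the official evaluation also applies the
        dataset's visibility mask before scoring."""
        valid = gt != ignore_index
        pred, gt = pred[valid], gt[valid]
        ious = []
        for c in range(num_classes):
            inter = np.sum((pred == c) & (gt == c))
            union = np.sum((pred == c) | (gt == c))
            if union > 0:
                ious.append(inter / union)
        return float(np.mean(ious))
    ```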


    Award

    Outstanding Champion USD $15,000
    Honorable Runner-up USD $5,000
    Innovation Award (×2) USD $5,000 each

    Contact

  • Chonghao Sima (OpenDriveLab), [email protected]
  • Xiaoyu Tian (MARS Lab), [email protected]
  • Slack channel: #occupancy-challenge-2023


    Related Literature

  • Scene as Occupancy
  • Convolutional Occupancy Networks
  • Occupancy Flow Fields for Motion Forecasting in Autonomous Driving
  • MonoScene: Monocular 3D Semantic Scene Completion
  • Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data






Track 4
nuPlan Planning

    GitHub · EvalAI


    Leaderboard

  • # Participating Teams: 52
  • # Countries and Regions: 11
  • # Submissions: 600+
  • This track featured the greatest diversity in the challenge, with participation from teams representing 11 countries and regions. The difference in scores between the top 5 teams was less than 10 points.

    [Leaderboard table: Rank, Country / Region, Institution, Overall Score (primary), Team Name, CH1 Score, CH2 Score, CH3 Score]

    Innovation Award goes to "AID" for

    GameFormer is a Transformer-based model that utilizes hierarchical game theory for interactive prediction and planning. The approach incorporates novel level-k decoders in the prediction model that iteratively refine the future trajectories of interacting agents, as well as a learning process that regulates the predicted behaviors of agents given the prediction results.

    Honorable Mention for Innovation Award goes to "raphamas" for

    MBAPPE leverages Monte Carlo Tree Search (MCTS) on a partially learned environment of nuPlan. The method infers trajectories by integrating consecutive actions that maximize the cumulative reward, measured through exploration and evaluation within MBAPPE's internal simulation. The decisions and choices of MBAPPE are explainable, reliable, and reproducible, since each step of the internal search process is accessible.



    Task Description

    Previous benchmarks focus on short-term motion forecasting and are limited to open-loop evaluation. nuPlan introduces long-term planning of the ego vehicle and corresponding metrics. Submissions are provided as Docker containers and are deployed for simulation and evaluation.


    Participation

    The primary metric is the mean score over three increasingly complex modes: open-loop, closed-loop non-reactive agents, and closed-loop reactive agents. Participants can follow the steps to begin the competition. To submit your results on EvalAI, please follow the submission instructions.
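
    For clarity, the aggregation can be sketched as a plain average of the three mode scores, assuming equal weighting of the CH1/CH2/CH3 columns shown on the leaderboard:

    ```python
    def overall_score(ch1: float, ch2: float, ch3: float) -> float:
        """Primary-metric sketch: the mean over the three challenge modes,
        open-loop (CH1), closed-loop non-reactive (CH2), and closed-loop
        reactive (CH3). Equal weighting is assumed here; each per-mode
        score itself aggregates nuPlan's scenario-level metrics."""
        return (ch1 + ch2 + ch3) / 3
    ```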


    Award

    Outstanding Champion USD $10,000
    Honorable Runner-up (2nd) USD $8,000
    Honorable Runner-up (3rd) USD $5,000
    Innovation Award USD $5,000

    Contact

  • GitHub issue
  • Motional, [email protected]
  • Slack channel: #nuplan-challenge-2023


    Related Literature

  • Driving in Real Life with Inverse Reinforcement Learning
  • Importance Is in Your Attention: Agent Importance Prediction for Autonomous Driving






General Rules

    Note Regarding Certificate (June 25, 2023)

    Thanks for your participation! If you require a certificate of participation, please send the names of all team members, the institution, the method name (optional), the team name, and the participating track to [email protected].


    Note Regarding Submission (May 24, 2023)

    Only PUBLIC results shown on the leaderboard will be valid. Please ensure your result is made public on the leaderboard before the deadline and remains public afterwards.


    Statement Regarding Submission Information (May 15, 2023)

    Regarding submissions to all tracks, please make sure the appended information is correct, especially the email address. After the submission deadlines, we will contact participants via email to request further information for qualification and certificate preparation. Late requests to claim ownership of a particular submission will not be considered. Incorrect email addresses will lead to disqualification.


    Statement Regarding Leaderboard and Award (April 14, 2023)

    The primary objective of this Autonomous Driving Challenge is to facilitate all aspects of autonomous driving. Despite the current trend toward data-driven research, we strive to provide opportunities for participants without access to massive data or computing resources. To this end, we would like to reiterate the following rules:

    Leaderboard
    Certificates will be provided to all participants.
    All publicly available datasets and pretrained weights are allowed, including Objects365, Swin-T, DD3D-pretrained VoVNet, InternImage, etc.
    However, the use of private datasets or pretrained weights is prohibited.

    Award
    To claim a cash award, all participants are required to submit a technical report.
    Cash awards for the first three places will be distributed based on the rankings on leaderboards. However, other factors, such as model sizes and data usage, will be taken into consideration.
    As we set up the Innovation Award to encourage novel and innovative ideas, winners of this award are encouraged to use only ImageNet and COCO as external data.

    The challenge committee reserves all rights for the final explanation of the cash award.


    Rules

    Please refer to the rules.







    FAQ



    How do we/I download the data?
    For each track, we provide download links for the data in the GitHub repository. The repository, which may also contain the dataset API, baseline models, and other helpful information, is a good starting point for your participation.

    How many times can we/I make a submission?
    Each track has its own submission limit; please refer to the EvalAI page for each track. Submissions that error out do not count against this limit.

    How many tracks can we/I take part in?
    A team can participate in multiple tracks. An entity cannot be affiliated with more than one team unless the entity is an academic entity (e.g., a university).

    Can we/I use future frames during inference?
    No future frames are allowed unless explicitly noted otherwise.