
Overview


Why these Challenges?

Autonomous driving is developing fast. While object detection and lane recognition are still deemed important, the bar for cutting-edge algorithms is no longer achieving the highest mAP for object detectors or recognizing lanes as a conventional segmentation task. We believe the future of autonomous driving algorithms lies in coupling perception closely with planning. As such, we introduce four curated, brand-new challenges following this philosophy.

[Figure] Motivation of the Challenges: coupling perception more closely with planning.


Track 1
OpenLane Topology Challenge
Go beyond conventional lane line detection as segmentation: recognize lanes as an abstraction of the scene (centerlines), and build the topology among lanes and between lanes and traffic elements. Such a topology facilitates planning and routing.
Track 2
Online HD Map Construction Challenge
Traditional mapping pipelines require a vast amount of human effort to maintain, which limits their scalability. This task aims to dynamically construct local maps with rich semantics based on onboard sensors. The vectorized map can be further utilized by downstream tasks.
Track 3
3D Occupancy Prediction Challenge
The representation of 3D bounding boxes is not enough to describe general objects (obstacles). Instead, inspired by concepts in robotics, we treat general object detection as an occupancy representation that covers irregularly shaped (e.g., protruding) objects. The output can also be fed as a cost volume to planning. This idea was also endorsed by Mobileye at CES 2023 and Tesla AI Day 2022.
Track 4
nuPlan Planning Challenge
To verify the effectiveness of newly designed perception modules, we ultimately need a planning framework with a closed-loop setting. Previous motion planning benchmarks focus on short-term motion forecasting and are limited to open-loop evaluation. nuPlan introduces long-term planning of the ego vehicle and corresponding metrics.

Participation

To participate, create a team on EvalAI and then make a submission. A valid submission automatically counts as successful participation. A team may include no more than ten individuals. For more details, please refer to the FAQ and the description of each track.


Get Started

Baseline models are provided. Please check out the GitHub repository for each track.
Moreover, we provide InternImage, a multi-modal, multi-task foundation model, to serve as a strong backbone. Check here for implementation details.


Contact

E-mail: contact us via workshop-e2e-ad@googlegroups.com with the subject prefix [CVPR 2023 E2EAD].

Slack: join our Slack to chat with Challenge organizers. Please follow the guidelines in #general and join the track-specific channels.

WeChat: join the WeChat group to chat with Challenge organizers.


Host

The 2023 edition of the Challenge is hosted by:

Track 1
OpenLane Topology

Links: GitHub · EvalAI


Task Description

The OpenLane-V2 dataset is a perception and reasoning benchmark for scene structure in autonomous driving. Given multi-view images covering the whole panoramic field of view, participants are required to deliver not only perception results for lanes and traffic elements but also, simultaneously, the topology relationships among lanes and between lanes and traffic elements.
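
To make the topology output concrete, the relationships can be pictured as binary adjacency matrices. The sketch below is a hypothetical illustration, not the official submission schema, which is defined in the OpenLane-V2 devkit.

```python
import numpy as np

# Hypothetical illustration of scene topology as adjacency matrices.
# The official submission schema is defined in the OpenLane-V2 devkit.
num_lanes, num_traffic_elements = 4, 3

# Lane-to-lane connectivity: entry (i, j) = 1 if lane i flows into lane j.
top_ll = np.zeros((num_lanes, num_lanes), dtype=np.int8)
top_ll[0, 1] = 1  # lane 0 connects to lane 1

# Lane-to-traffic-element relation: entry (i, k) = 1 if traffic element k
# (e.g., a traffic light) governs lane i.
top_lt = np.zeros((num_lanes, num_traffic_elements), dtype=np.int8)
top_lt[1, 2] = 1  # traffic element 2 applies to lane 1
```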


Participation

The primary metric is the OpenLane-V2 Score (OLS), which aggregates evaluations of three sub-tasks. On the website, we provide tools for data access, model training, evaluation, and visualization. To submit your results on EvalAI, please follow the submission instructions.
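
As a rough sketch of how such a composite metric can be aggregated, the snippet below averages the detection scores with square-rooted topology scores; this follows the formula reported in the OpenLane-V2 paper, but the devkit implementation is the authoritative definition.

```python
import numpy as np

def ols(det_lane, det_traffic, top_ll, top_lt):
    """Sketch of OpenLane-V2 Score aggregation: the mean of the two
    detection scores and the square-rooted topology scores, each in
    [0, 1]. Treat the devkit as the authoritative implementation."""
    return np.mean([det_lane, det_traffic, np.sqrt(top_ll), np.sqrt(top_lt)])

print(ols(0.30, 0.45, 0.04, 0.09))  # 0.3125
```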


Important dates

Challenge Period Open: March 15, 2023
Challenge Period End: June 01, 2023
Technical Report Deadline: June 09, 2023
Winner Announcement: June 13, 2023

Leaderboard


Award

1st Place: USD $15,000
2nd Place: USD $5,000
Innovation Prize: USD $5,000

Contact

  • Huijie Wang (OpenDriveLab), wanghuijie@pjlab.org.cn
  • Slack channel: #openlane-challenge-2023


Related Literature

  • Road Genome: A Topology Reasoning Benchmark for Scene Understanding in Autonomous Driving
  • PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark
  • Structured Bird's-Eye-View Traffic Scene Understanding From Onboard Images
  • MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction






    Track 2
    Online HD Map Construction

    Links: GitHub · EvalAI


    Task Description

    Compared to conventional lane detection, the constructed HD map provides richer semantic information across multiple categories. Vectorized polyline representations are adopted to handle complicated and even irregular road structures. Given inputs from onboard sensors (cameras), the goal is to construct the complete local HD map.
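
    To make the representation concrete, a map element can be thought of as a typed polyline. The structure below is illustrative only; the actual submission format is specified in the track's GitHub repository.

```python
import numpy as np

# Illustrative (not official) structure of one vectorized map element:
# a category label plus an ordered polyline of (x, y) points in the ego
# frame. The real submission schema is defined in the track repository.
map_element = {
    "category": "ped_crossing",      # one of the three classes, e.g.
                                     # divider / boundary / ped_crossing
    "points": np.array([[10.0, 2.0],
                        [12.5, 2.1],
                        [15.0, 2.2]]),
}
```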


    Participation

    The primary metric is mAP based on Chamfer distance over three categories, namely lane divider, boundary, and pedestrian crossing. Please refer to our GitHub for details on data and evaluation. Submission is conducted on EvalAI.
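
    For intuition, one common symmetric variant of the Chamfer distance between two point sets can be written as below; the challenge uses its own matching thresholds and implementation, so treat this as a sketch.

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point sets pred (N, 2) and
    gt (M, 2): the sum of the mean nearest-neighbor distances in both
    directions. For AP, a predicted polyline typically counts as a true
    positive when this distance falls below a matching threshold."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```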


    Important dates

    Challenge Period Open: March 15, 2023
    Challenge Period End: May 26, 2023
    Notification: May 27, 2023
    Technical Report Deadline: June 05, 2023
    Winner Announcement: June 11, 2023

    Leaderboard

    Above is the final leaderboard. A notification was sent from online-hd-map-construction@googlegroups.com. If you did not receive it, please send your email address and team name to wanghuijie@pjlab.org.cn.


    Award

    1st Place: USD $15,000
    2nd Place: USD $5,000
    Innovation Prize: USD $5,000

    Contact

  • Tianyuan Yuan (MARS Lab), yuantianyuan01@gmail.com
  • Slack channel: #map-challenge-2023


    Related Literature

  • HDMapNet: An Online HD Map Construction and Evaluation Framework
  • VectorMapNet: End-to-end Vectorized HD Map Learning
  • InstaGraM: Instance-level Graph Modeling for Vectorized HD Map Learning






    Track 3
    3D Occupancy Prediction

    Links: GitHub · EvalAI


    Task Description

    Unlike previous perception representations, which depend on predefined geometric primitives or perceived data modalities, occupancy enjoys the flexibility to describe entities of arbitrary shape. In this track, we provide a large-scale occupancy benchmark. Given multi-view images covering the whole panoramic field of view, participants are required to provide the occupancy state and semantics of each voxel in 3D space for the complete scene.
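
    As a mental model of the expected output, think of a dense voxel grid around the ego vehicle where each voxel stores one semantic label. The grid shape and class ids below are placeholder assumptions; the devkit defines the actual specification.

```python
import numpy as np

# Placeholder illustration of an occupancy output: one semantic label per
# voxel on a fixed grid around the ego vehicle. Grid shape and class ids
# are assumptions here; consult the devkit for the actual specification.
FREE = 0                                  # assumed id for unoccupied space
occupancy = np.full((200, 200, 16), FREE, dtype=np.uint8)
occupancy[100:110, 95:105, 2:6] = 7       # mark a block of voxels as class 7
```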


    Participation

    The primary metric of this track is mIoU. On the website, we provide detailed information on the dataset, the evaluation, and the submission instructions. The test server is hosted on EvalAI.
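
    A minimal mIoU computation over voxel labels could look like the sketch below; the class list and any ignore-label handling follow the devkit.

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean intersection-over-union over semantic classes. pred and gt are
    integer label arrays of identical shape; classes absent from both the
    prediction and the ground truth are skipped. Simplified sketch only."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```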


    Important dates

    Dataset and Devkit Release: February 20, 2023
    Challenge Period Open: Opened
    Challenge Period End: June 01, 2023
    Notification: June 03, 2023
    Technical Report Deadline: June 10, 2023
    Winner Announcement: June 12, 2023

    Leaderboard

    For the real-time leaderboard, please refer to EvalAI.


    Award

    1st Place: USD $15,000
    2nd Place: USD $5,000
    Innovation Prize: USD $5,000

    Contact

  • Chonghao Sima (OpenDriveLab), chonghaosima@gmail.com
  • Xiaoyu Tian (MARS Lab), cntxy001@gmail.com
  • Slack channel: #occupancy-challenge-2023


    Related Literature

  • Convolutional Occupancy Networks
  • Occupancy Flow Fields for Motion Forecasting in Autonomous Driving
  • MonoScene: Monocular 3D Semantic Scene Completion
  • Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data






    Track 4
    nuPlan Planning

    Links: GitHub · EvalAI


    Task Description

    Previous benchmarks focus on short-term motion forecasting and are limited to open-loop evaluation. nuPlan introduces long-term planning of the ego vehicle and corresponding metrics. Submissions are provided as Docker containers and are deployed for simulation and evaluation.


    Participation

    The primary metric is the mean score over three increasingly complex modes: open-loop, closed-loop non-reactive agents, and closed-loop reactive agents. Participants can follow the steps to begin the competition. To submit your results on EvalAI, please follow the submission instructions.
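
    Since the final ranking score is the mean over the three simulation modes, its aggregation is trivial; the sketch below assumes equal weighting per the description above, with each per-mode score computed by the nuPlan simulator.

```python
def challenge_score(open_loop, closed_nonreactive, closed_reactive):
    """Mean of the three mode scores, each assumed to lie in [0, 1].
    The per-mode scores themselves come from the nuPlan simulator."""
    return (open_loop + closed_nonreactive + closed_reactive) / 3.0

print(challenge_score(0.9, 0.6, 0.6))  # 0.7
```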


    Important dates

    Test Phase End: May 26, 2023
    Winner Announcement: June 02, 2023
    Winner Presentation: June 18, 2023

    Leaderboard

    For the real-time leaderboard, please refer to EvalAI.


    Award

    1st Place: USD $10,000
    2nd Place: USD $8,000
    3rd Place: USD $5,000
    Innovation Prize: USD $5,000

    Contact

  • GitHub issue
  • Motional, nuScenes@motional.com
  • Slack channel: #nuplan-challenge-2023


    Related Literature

  • Driving in Real Life with Inverse Reinforcement Learning
  • Importance Is in Your Attention: Agent Importance Prediction for Autonomous Driving






    General Rules

    Note Regarding Submission (May 24, 2023)

    Only PUBLIC results shown on the leaderboard will be considered valid. Please ensure your result is made public on the leaderboard before the deadline and kept public afterward.


    Statement Regarding Submission Information (May 15, 2023)

    Regarding submissions to all tracks, please make sure the appended information is correct, especially the email address. After the submission deadlines, we will ask participants via email to provide further information for qualification and certificate preparation. Late requests to claim ownership of a particular submission will not be considered. Incorrect email addresses will lead to disqualification.


    Statement Regarding Leaderboard and Award (April 14, 2023)

    The primary objective of this Autonomous Driving Challenge is to facilitate all aspects of autonomous driving. Despite the current trend toward data-driven research, we strive to provide opportunities for participants without access to massive data or computing resources. To this end, we would like to reiterate the following rules:

    Leaderboard
    Certificates will be provided to all participants. For the top three participants on the leaderboard, rankings will be included on the certificates.
    All publicly available datasets and pretrained weights are allowed, including Objects365, Swin-T, DD3D-pretrained VoVNet, InternImage, etc.
    The use of private datasets or pretrained weights is prohibited.

    Award
    To claim a cash award, all participants are required to submit a technical report.
    Cash awards for the first three places will be distributed based on the rankings on leaderboards. However, other factors, such as model sizes and data usage, will be taken into consideration.
    As we set up the Innovation Prize to encourage novel and innovative ideas, winners of this award are only allowed to use ImageNet and COCO as external data and need to follow the rules of each track strictly.

    The challenge committee reserves all rights for the final explanation of the cash award.


    Rules

    Please refer to the rules.







    FAQ



    How do we/I download the data?
    For each track, we provide links for downloading the data in the GitHub repository. The repository, which may also contain the dataset API, a baseline model, and other helpful information, is a good starting point for your participation.

    How many times can we/I make a submission?
    Each track has its own submission limit. Please refer to the EvalAI page for each track. Submissions that error out do not count against this limit.

    How many tracks can we/I take part in?
    A team can participate in multiple tracks. An entity cannot be affiliated with more than one team unless the entity is an academic entity (e.g., a university).

    Should we/I use future frames during inference?
    No future frames are allowed unless explicitly noted otherwise.