Autonomous driving is developing fast. While accuracy remains important, the bar for cutting-edge algorithms is no longer simply to push mAP for object detectors or to treat lane recognition as conventional segmentation.
We believe the tracks below, spanning scene topology, online HD map construction, 3D occupancy prediction, and end-to-end planning, better reflect what modern autonomous driving systems need to solve.
To participate, create a team on EvalAI and then make a submission.
A valid submission automatically counts as successful participation.
A team may include no more than ten individuals.
For more details, please refer to the FAQ and the description of each track.
Baseline models are provided. Please check out the GitHub repository of each track.
Moreover, we introduce InternImage, a multimodal, multitask general large model, to serve as a strong backbone.
Check here for implementation details.
Contact us via
workshop-e2e-ad@googlegroups.com
with the subject prefix [CVPR 2023 E2EAD].
Join Slack to chat with Challenge organizers. Please follow the guidelines in
#general
and join the track-specific channels.
Join the WeChat group to chat with Challenge organizers.
The 2023 edition of the Challenge is hosted by:
The OpenLane-V2 dataset is the perception and reasoning benchmark for scene structure in autonomous driving. Given multi-view images covering the whole panoramic field of view, participants are required to deliver not only perception results for lanes and traffic elements but also the topology relationships among lanes and between lanes and traffic elements.
The primary metric is the OpenLane-V2 Score (OLS), which aggregates evaluations of the three sub-tasks: lane detection, traffic element recognition, and topology reasoning. On the website, we provide tools for data access, model training, evaluation, and visualization. To submit your results on EvalAI, please follow the submission instructions.
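As a rough illustration of how the sub-task scores roll up into a single number, the sketch below simply averages them. The authoritative weighting and any score scaling are defined by the official OpenLane-V2 devkit, so treat this only as a reading aid, not as the exact metric.

```python
# Minimal sketch, assuming OLS aggregates the sub-task scores by plain averaging.
# The real definition (weighting, scaling) lives in the OpenLane-V2 devkit.

def openlane_v2_score(det_lane: float, det_traffic: float, topology: float) -> float:
    """Combine the three sub-task scores (each assumed to lie in [0, 1])."""
    sub_scores = [det_lane, det_traffic, topology]
    return sum(sub_scores) / len(sub_scores)

# Example usage with made-up sub-task scores.
print(openlane_v2_score(det_lane=0.35, det_traffic=0.48, topology=0.21))
```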
Challenge Period Open | March 15, 2023 |
Challenge Period End | June 01, 2023 |
Technical Report Deadline | June 09, 2023 |
Winner Announcement | June 13, 2023 |
1st Place | USD $15,000 |
2nd Place | USD $5,000 |
Innovation Prize | USD $5,000 |
wanghuijie@pjlab.org.cn
#openlane-challenge-2023
Compared to conventional lane detection, the constructed HD map provides richer semantic information with multiple categories. Vectorized polyline representations are adopted to handle complicated and even irregular road structures. Given inputs from onboard sensors (cameras), the goal is to construct the complete local HD map.
The primary metric is mAP based on Chamfer distance, averaged over three categories: lane divider, boundary, and pedestrian crossing. Please refer to our GitHub for details on data and evaluation. Submissions are made on EvalAI.
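To make the matching criterion concrete, here is a minimal NumPy sketch of the symmetric Chamfer distance between a predicted and a ground-truth map element, each sampled as a 2D point set. The sampling, matching procedure, distance thresholds, and AP integration are fixed by the official devkit; the 1.0 m threshold below is purely illustrative.

```python
# Hedged sketch: Chamfer-distance matching between vectorized map elements.
# The official evaluation defines the actual sampling, thresholds, and AP computation.
import numpy as np

def chamfer_distance(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets of shape (N, 2) and (M, 2)."""
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def is_true_positive(pred_pts: np.ndarray, gt_pts: np.ndarray, threshold: float = 1.0) -> bool:
    # A prediction matches a ground-truth element of the same category
    # when their Chamfer distance falls below the threshold (illustrative value).
    return chamfer_distance(pred_pts, gt_pts) < threshold
```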
Challenge Period Open | March 15, 2023 |
Challenge Period End | May 26, 2023 |
Notification | May 27, 2023 |
Technical Report Deadline | June 05, 2023 |
Winner Announcement | June 11, 2023 |
The final leaderboard is shown above. Notifications were sent from online-hd-map-construction@googlegroups.com; if you did not receive one, please send your email address and team name to wanghuijie@pjlab.org.cn.
1st Place | USD $15,000 |
2nd Place | USD $5,000 |
Innovation Prize | USD $5,000 |
yuantianyuan01@gmail.com
#map-challenge-2023
Unlike previous perception representations, which depend on predefined geometric primitives or perceived data modalities, occupancy has the flexibility to describe entities of arbitrary shape. In this track, we provide a large-scale occupancy benchmark. Given multi-view images covering the whole panoramic field of view, participants are required to provide the occupancy state and semantics of each voxel in 3D space for the complete scene.
The primary metric of this track is mIoU. On the website, we provide detailed information on the dataset, evaluation, and submission instructions. The test server is hosted on EvalAI.
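For reference, a minimal sketch of the per-class IoU bookkeeping behind mIoU over a voxel grid is shown below. Which classes (e.g. a free-space class) and which voxels (e.g. a visibility mask) enter the average is defined by the official benchmark, not by this sketch.

```python
# Hedged sketch: per-class IoU and mIoU over a semantic voxel grid.
import numpy as np

def occupancy_miou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """pred, gt: integer label arrays with the same voxel-grid shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return float(np.mean(ious))
```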
Dataset and Devkit Release | February 20, 2023 |
Challenge Period Open | Opened |
Challenge Period End | June 01, 2023 |
Notification | June 03, 2023 |
Technical Report Deadline | June 10, 2023 |
Winner Announcement | June 12, 2023 |
For the real-time leaderboard, please refer to EvalAI.
1st Place | USD $15,000 |
2nd Place | USD $5,000 |
Innovation Prize | USD $5,000 |
chonghaosima@gmail.com
cntxy001@gmail.com
#occupancy-challenge-2023
Previous benchmarks focus on short-term motion forecasting and are limited to open-loop evaluation. nuPlan introduces long-term planning of the ego vehicle and corresponding metrics. Submissions are provided as Docker containers and deployed for simulation and evaluation.
The primary metric is the mean score over three increasingly complex modes: open-loop, closed-loop non-reactive agents, and closed-loop reactive agents. Participants can follow the steps to begin the competition. To submit your results on EvalAI, please follow the submission instructions.
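As a small arithmetic illustration, the sketch below averages the three mode scores into the overall ranking score. The per-mode scores themselves come from the nuPlan simulation metrics, and the exact aggregation used for ranking is defined by the organizers; this only shows the averaging step.

```python
# Hedged sketch: overall score as the mean over the three evaluation modes.

def nuplan_overall_score(open_loop: float,
                         closed_loop_nonreactive: float,
                         closed_loop_reactive: float) -> float:
    """Average the three mode scores (each assumed to lie in [0, 1])."""
    return (open_loop + closed_loop_nonreactive + closed_loop_reactive) / 3.0

print(nuplan_overall_score(0.72, 0.55, 0.50))  # made-up example values
```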
Test Phase End | May 26, 2023 |
Winner Announcement | June 02, 2023 |
Winner Presentation | June 18, 2023 |
For the real-time leaderboard, please refer to EvalAI.
1st Place | USD $10,000 |
2nd Place | USD $8,000 |
3rd Place | USD $5,000 |
Innovation Prize | USD $5,000 |
nuScenes@motional.com
#nuplan-challenge-2023
Only PUBLIC results shown on the leaderboard are valid. Please ensure your result is made public before the deadline and remains public on the leaderboard afterwards.
For submissions to all tracks, please make sure the attached information is correct, especially the email address. After the submission deadlines, we will contact participants via email for the further information needed for qualification and certificates. Late requests to claim ownership of a particular submission will not be considered, and an incorrect email address will lead to disqualification.
The primary objective of this Autonomous Driving Challenge is to facilitate all aspects of autonomous driving.
Despite the current trend toward data-driven research, we strive to provide opportunities for participants without access to massive data or computing resources.
To this end, we would like to reiterate the following rules:
Leaderboard
Certificates will be provided to all participants. For the top three participants on the leaderboard, rankings will be included on the certificates.
All publicly available datasets and pretrained weights are allowed, including Objects365, Swin-T, DD3D-pretrained VoVNet, InternImage, etc.
However, the use of private datasets or pretrained weights is prohibited.
Award
To claim a cash award, all participants are required to submit a technical report.
Cash awards for the first three places will be distributed based on leaderboard rankings; however, other factors, such as model size and data usage, will also be taken into consideration.
Since the Innovation Prize is intended to encourage novel and innovative ideas, winners of this award may only use ImageNet and COCO as external data and must strictly follow the rules of each track.
The challenge committee reserves the right of final interpretation regarding the cash awards.
Please refer to the rules.