The field of autonomous driving (AD) is advancing rapidly. While cutting-edge algorithms remain a crucial component, pushing for ever-higher mean average precision (mAP) in object detection or conventional segmentation accuracy in lane recognition is no longer the sole priority. Rather, we posit that the future of AD algorithms lies in the integration of perception and planning. In light of this, we propose four newly curated challenges that embody this philosophy.
The Autonomous Driving Challenge at CVPR 2023 has just wrapped up! We witnessed intense engagement from the community: researchers and engineers from universities and companies across China, Germany, France, Singapore, the United States, the United Kingdom, and beyond joined to tackle these challenging autonomous driving tasks. With over 270 teams from 15 countries (regions), the challenge has been a true showcase of global talent and innovation. Over 2,300 submissions were made, and the top spot was fiercely contested. We received a few inquiries on eligibility, challenge rules, and technical reports; rest assured that all concerns have been appropriately addressed. The fairness and integrity of the Challenge have always been our highest priority.
Rank | Institution | OLS (primary) | Team Name | DET_l | DET_t | TOP_ll | TOP_lt
---|---|---|---|---|---|---|---
1 | MEGVII Technology 旷视科技 | 55.19 | MFV [paper] [arXiv] | 35.77 | 79.70 | 22.52 | 33.48
- | QCraft * 轻舟智航 | 46.54 | qcraft2 * | 41.68 | 63.74 | 6.57 | 30.37
2 | AMD | 44.56 | Victory [paper] | 21.84 | 72.45 | 13.24 | 22.61
- | QCraft * 轻舟智航 | 41.14 | qcraft-team * | 29.60 | 63.74 | 4.63 | 24.69
3 | Middle East Technical University | 39.22 | PlatypusWhisperers [paper] [arXiv] | 22.09 | 70.61 | 6.02 | 15.70
4 | MeiTuan 美团 | 38.54 | gavin [paper] | 17.90 | 70.28 | 4.01 | 21.12
5 | Beihang University 北京航空航天大学 | 38.53 | qwertyczx (e110_r) | 26.43 | 66.07 | 3.17 | 19.18
6 | Turing Inc. | 34.93 | turing-machine | 13.35 | 78.64 | 1.48 | 12.66
7 | - | 34.66 | Haoqing | 8.63 | 71.87 | 3.69 | 15.16
8 | - | 33.22 | TopoNet-Anonymous | 19.50 | 58.42 | 2.27 | 15.91
The Innovation Award goes to "PlatypusWhisperers".
If you use the challenge dataset in your paper, please consider citing the following BibTeX:
The OpenLane-V2 dataset is the perception and reasoning benchmark for scene structure in autonomous driving. Given multi-view images covering the whole panoramic field of view, participants are required to simultaneously deliver not only perception results of lanes and traffic elements but also the topology relationships among lanes and between lanes and traffic elements.
The primary metric is the OpenLane-V2 Score (OLS), which comprises evaluations of three sub-tasks. On the website, we provide tools for data access, model training, evaluation, and visualization. To submit your results on EvalAI, please follow the submission instructions.
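The four right-hand columns of the leaderboard above are the sub-task scores: DET_l and DET_t for lane and traffic-element detection, TOP_ll and TOP_lt for the two topology relationships. As a rough reconstruction, the published OLS values are reproduced, up to rounding, by averaging the two detection scores with the square roots of the two topology scores. The sketch below is an assumption inferred from those numbers, not the official devkit code; consult the OpenLane-V2 toolkit for the authoritative definition.

```python
import math

def ols(det_l: float, det_t: float, top_ll: float, top_lt: float) -> float:
    """Reconstructed OpenLane-V2 Score (all inputs and the output in [0, 100]).

    The square root on the topology terms is an assumption that reproduces
    the leaderboard values above; the official definition lives in the
    OpenLane-V2 devkit.
    """
    terms = [det_l / 100, det_t / 100,
             math.sqrt(top_ll / 100), math.sqrt(top_lt / 100)]
    return 100 * sum(terms) / len(terms)

# Top entry (MFV): prints ~55.2, matching the reported 55.19 up to rounding
# of the individual sub-scores.
print(round(ols(35.77, 79.70, 22.52, 33.48), 2))
```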
Prizes: USD $15,000 / USD $5,000 / USD $5,000
Contact: wanghuijie@pjlab.org.cn
Rank | Institution | mAP (primary) | Team Name | Pedestrian Crossing | Divider | Boundary
---|---|---|---|---|---|---
1 | Mach Drive 迈驰智行 | 83.50 | MACH [paper] | 86.66 | 81.54 | 82.29
2 | Independent Researcher | 73.65 | MapNeXt | 68.94 | 76.66 | 75.34
3 | Shanghai Jiao Tong University 上海交通大学 | 73.39 | SJTUCR [paper] | 70.37 | 75.08 | 74.73
4 | Lotus NYO | 72.56 | LTS (MapLTS2) [paper] | 72.67 | 73.20 | 71.80
5 | University of Science and Technology of China 中国科学技术大学 | 71.02 | ustc_vgg | 69.05 | 73.24 | 70.76
6 | Xi'an Jiaotong University 西安交通大学 | 70.43 | XJTU-IAIR | 64.69 | 74.52 | 72.07
7 | - | 69.71 | MapVision | 65.58 | 73.48 | 70.06
8 | - | 68.28 | Qml | 63.90 | 71.73 | 69.21
9 | - | 67.34 | SAITAD | 68.56 | 64.02 | 69.45
10 | GACRD 广汽研究院 | 67.10 | MapSeg [paper] [code] | 63.52 | 70.33 | 67.46
The Innovation Award goes to "MACH".
If you use the challenge dataset in your paper, please consider citing the following BibTeX:
Compared to conventional lane detection, the constructed HD map provides richer semantic information across multiple categories. Vectorized polyline representations are adopted to handle complicated and even irregular road structures. Given inputs from onboard sensors (cameras), the goal is to construct the complete local HD map.
The primary metric is mAP based on the Chamfer distance over three categories: lane divider, boundary, and pedestrian crossing. Please refer to our GitHub for details on data and evaluation. Submission is conducted on EvalAI.
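For intuition, below is a minimal sketch of the symmetric Chamfer distance between two vectorized polylines, which underlies this kind of prediction-to-ground-truth matching. The point sampling, matching rule, and distance threshold sketched here are illustrative assumptions, not the official evaluation protocol; see the challenge devkit for the exact AP computation.

```python
import numpy as np

def chamfer_distance(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Chamfer distance between two polylines, each an (N, 2) array
    of BEV points (in metres) sampled along the curve."""
    # Pairwise Euclidean distances between predicted and ground-truth points.
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    # Average nearest-neighbour distance in both directions.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Illustrative use: a prediction would count as a true positive when its
# Chamfer distance to an unmatched ground-truth polyline of the same category
# falls below a threshold; AP is then averaged over several thresholds
# (hypothetical values here, not the official ones).
pred = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0]])
gt = np.array([[0.0, 0.1], [1.0, 0.0], [2.0, 0.1]])
print(chamfer_distance(pred, gt))  # small value -> likely a match at, say, a 0.5 m threshold
```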
Prizes: USD $15,000 / USD $5,000 / USD $5,000
Contact: yuantianyuan01@gmail.com
Rank | Institution | mIoU (primary) | Team Name
---|---|---|---
1 | NVIDIA | 54.19 | NVOCC (FB-OCC) [paper]
2 | 42dot | 52.45 | 42dot (MiLO) [paper]
3 | Xiaomi Car, Peking University 小米汽车, 北京大学 | 51.27 | UniOcc (final) [paper]
4 | SAIC AI Lab 上汽 AI LAB | 49.36 | occ-heiheihei [paper]
5 | Harbin Institute of Technology 哈尔滨工业大学 | 49.23 | occ_transformer [paper]
6 | Huawei Noah's Ark Lab 华为诺亚方舟实验室 | 49.21 | CakeCake (Noah CV Lab - POP)
7 | University of Electronic Science and Technology of China 电子科技大学 | 49.12 | sdada (TEST)
8 | Zhejiang University, University of Glasgow 浙江大学 | 49.02 | LSS-Query
9 | Xi'an Jiaotong University 西安交通大学 | 48.69 | JUST Coding
10 | Institute of Computing Technology, Chinese Academy of Sciences 中国科学院计算技术研究所 | 48.58 | Simple Occ
The Innovation Award goes to "NVOCC".
The Innovation Award goes to "occ_transformer".
If you use the challenge dataset in your paper, please consider citing the following BibTeX:
Unlike previous perception representations, which depend on predefined geometric primitives or perceived data modalities, occupancy enjoys the flexibility to describe entities of arbitrary shape. In this track, we provide a large-scale occupancy benchmark. Given multi-view images covering the whole panoramic field of view, participants are required to provide the occupancy state and semantics of each voxel in 3D space for the complete scene.
The primary metric of this track is mIoU. On the website, we provide detailed information on the dataset, evaluation, and submission instructions. The test server is hosted on EvalAI.
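As a reference point, here is a minimal sketch of per-class IoU averaged into mIoU over a semantic voxel grid. The class list, free-space handling, and any camera-visibility masking used by the official evaluation are deliberately left out; the devkit defines the exact protocol.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int,
             ignore_index: int = 255) -> float:
    """mIoU over a semantic voxel grid.

    pred, gt: integer arrays of identical shape holding one class id per voxel.
    Voxels labelled `ignore_index` in the ground truth are excluded.
    """
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return float(np.mean(ious))
```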
Prizes: USD $15,000 / USD $5,000 / USD $5,000
Contacts: chonghaosima@gmail.com, cntxy001@gmail.com
Rank | Institution | Overall Score (primary) | Team Name | CH1 Score | CH2 Score | CH3 Score
---|---|---|---|---|---|---
1 | University of Tübingen | 89.52 | CS_Tu [paper] [arXiv] [code] | 82.89 | 92.76 | 92.91
2 | Horizon Robotics 地平线 | 87.45 | autoHorizon2023 (hoplan) [paper] | 85.23 | 88.99 | 88.13
3 | Pegasus 云骥智行 | 84.77 | pegasus_weitao (pegasus_multi_path) [paper] | 87.58 | 81.65 | 85.06
4 | Nanyang Technological University | 82.88 | AID (GameFormer Planner) [paper] [arXiv] | 84.00 | 80.87 | 83.76
5 | Xi'an Jiaotong University 西安交通大学 | 82.20 | wheeljack | 82.86 | 81.46 | 82.29
6 | - | 82.14 | xg_test (ltp-planner) | 87.85 | 78.76 | 79.81
7 | - | 79.83 | Forecast_MAE (planning, preliminary) | 90.72 | 76.09 | 72.67
8 | Motional (Baseline) | 74.67 | Host_68305_Team (UrbanDriver) | 86.29 | 68.21 | 69.52
9 | The Hong Kong University of Science and Technology 香港科技大学 | 74.42 | HatOff2JuiceWRLD (& Haaland) | 81.56 | 66.51 | 75.19
10 | Mines Paris | 74.10 | raphamas (MBAPPE) [paper] | 78.61 | 69.96 | 73.72
The Innovation Award goes to "AID".
An Honorable Mention for the Innovation Award goes to "raphamas".
If you use the challenge dataset in your paper, please consider citing the following BibTeX:
Previous benchmarks focus on short-term motion forecasting and are limited to open-loop evaluation. nuPlan introduces long-term planning of the ego vehicle and corresponding metrics. Submissions are provided as Docker containers and deployed for simulation and evaluation.
The primary metric is the mean score over three increasingly complex modes: open-loop, closed-loop with non-reactive agents, and closed-loop with reactive agents. Participants can follow the steps to begin the competition. To submit your results on EvalAI, please follow the submission instructions.
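Consistent with the leaderboard above (where CH1, CH2, and CH3 presumably correspond to the three modes in that order), the overall score is simply the arithmetic mean of the three per-mode scores. The per-mode scores themselves come from nuPlan's internal driving metrics, which are not reproduced in this sketch.

```python
def overall_score(ch1: float, ch2: float, ch3: float) -> float:
    """Arithmetic mean of the three per-mode scores, as on the leaderboard."""
    return (ch1 + ch2 + ch3) / 3

# Top entry (CS_Tu): (82.89 + 92.76 + 92.91) / 3 = 89.52
print(round(overall_score(82.89, 92.76, 92.91), 2))
```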
Prizes: USD $10,000 / USD $8,000 / USD $5,000 / USD $5,000
Contact: nuScenes@motional.com
Thanks for your participation! If you need a certificate of participation, please email the names of all team members, the institution, the method name (optional), the team name, and the participating track to wanghuijie@pjlab.org.cn.
Only PUBLIC results shown on the leaderboard are valid. Please make sure your result is public before the deadline and remains public afterwards.
For submissions to all tracks, please make sure the attached information is correct, especially the email address. After the submission deadlines, we will contact participants via email for the further information needed for qualification and certificates. Late requests to claim ownership of a particular submission will not be considered, and incorrect email addresses will lead to disqualification.
The primary objective of this Autonomous Driving Challenge is to facilitate research across all aspects of autonomous driving. Despite the current trend toward data-driven research, we strive to provide opportunities for participants without access to massive data or computing resources. To this end, we would like to reiterate the following rules:
Leaderboard
- Certificates will be provided to all participants.
- All publicly available datasets and pretrained weights are allowed, including Objects365, Swin-T, DD3D-pretrained VoVNet, InternImage, etc.
- The use of private datasets or pretrained weights is prohibited.
Award
- To claim a cash award, all participants are required to submit a technical report.
- Cash awards for the first three places will be distributed based on the leaderboard rankings; however, other factors, such as model size and data usage, will be taken into consideration.
- The Innovation Award is set up to encourage novel and innovative ideas; winners of this award are encouraged to use only ImageNet and COCO as external data.
- The challenge committee reserves the right of final interpretation of the cash awards.
Please refer to the rules.