Overview

This workshop aims to promote the real-world impact of ML research on self-driving technology. While ML-based components of modular stacks have been a huge success, integration strategies and intermediate representations still leave room for progress. To empower the next generation of autonomous vehicles, we invite contributions on the following topics:

  • Representation learning for perception, prediction, planning, simulation, etc.
  • Approaches that account for interactions between traditional sub-components (e.g., joint perception and prediction, end-to-end driving)
  • ML / statistical learning approaches to facilitate safety / interpretability / generalization
  • Driving environments / datasets for benchmarking ML algorithms
  • New perspectives on the future of autonomous driving

    Schedule

    Time (May 5)  | Speaker            | Theme
    8:50          | Li Chen            | Introduction and opening remarks
    9:00          | Hang Qiu           | Scene Understanding beyond the Visible
    9:30          | Hang Zhao          | Vision-Centric Autonomous Driving: Perception, Prediction and Mapping
    10:00         | Li Chen            | Contributions
    11:00         | Coffee Break       | -
    11:30         | Dengxin Dai        | Robust Visual Perception for All Domains: Domain Synthesis, Adaptation, and Generalization
    12:00         | Yiyi Liao          | Towards Generative Photorealistic Simulation
    12:30         | Lunch Break        | -
    13:30         | Jamie Shotton      | Learning a Globally Scalable Driving Intelligence
    14:00         | Mengye Ren         | Contributions
    15:00         | Coffee Break       | -
    15:30         | Christos Sakaridis | Optimizing Internal Network Representations for Geometric and Semantic Perception
    16:00         | Bo Li              | Secure and Safe Autonomous Driving in Adversarial Environments
    16:30-17:30   | Mengye Ren         | Panel Discussion with Hongyang Li, Hang Zhao, Yiyi Liao, Christos Sakaridis, Bo Li

    Call for Contributions

    There will be a best paper award for each track!
    Selected submissions will be given oral presentations onsite.


    Important Dates

  • Deadline for first-round submission of contributions: Feb 15, 2023, 23:59 Anywhere on Earth.
    1. Submission link
    2. Notification of first-round acceptance: Mar 03, 2023
  • Deadline for second-round submission of contributions: Mar 1, 2023, 23:59 Anywhere on Earth.
    1. Submission link
    2. Notification of second-round acceptance: Mar 15, 2023


    Submission Tracks

    To promote a diversity of content, contributions can be in the form of (1) blog posts, (2) GitHub repositories, or (3) PDF documents (2-8 pages). Further, there will be two submission tracks:

  • Track 1: Research Insight
    1. Submit your own analysis or reimplementation of others' work. In this track, you act as a reporter / critic.
    2. Authors must declare any conflicts of interest (whether positive or negative) with the paper (and authors) they write about (e.g., recent collaboration, same institute, challenge competitor).
    3. Submissions will be reviewed for significant added value in comparison to the cited paper(s).
    4. The analysis must be supported by accurate and clear arguments.
  • Track 2: Original Contribution
    1. Submit original work of your own, which has not been published previously.
    2. Submissions will be reviewed for claims supported by convincing evidence.


    Format Guidelines

  • For blog posts, please submit a URL containing the content to be reviewed. There is no specific size or formatting template enforced for this track. Please refer to the ICLR 2022 blog post track website for some content examples.
  • For repositories, please submit a URL to the primary README.md file that describes how to run the code.
  • For PDFs, please use the ICLR template. Submissions can be in the extended abstract (2-4 pages) or full paper (4-8 pages) formats. The page limits do not include references or appendices.


    Review

  • Submissions will be through OpenReview.
  • The review process will be single-blind.
  • Submissions and reviews will be private. Only accepted contributions will be made public.
  • The workshop is a non-archival venue and will not have official proceedings. PDF submissions can be subsequently or concurrently submitted to other venues.
  • All accepted contributions will be presented as posters.
  • At least one co-author of each accepted contribution is expected to register for ICLR 2023 and attend the poster session. Remote attendance is permitted.
  • All accepted contributions will be available on our workshop website, though authors may explicitly opt out.

  • Contact us with prefix [ICLR 2023 SR4AD] at [email protected].


    Accepted Contributions

  • Track 1: Research Insight
    1. How Do Vision Transformers See Depth in Single Images?
       Peter Mortimer
       Best Paper Award (Track 1)
    2. Image Reconstruction from Event Cameras for Autonomous Driving
       Daniel Dauner
  • Track 2: Original Contribution
    1. Benchmarking 3D Perception Robustness to Common Corruptions and Sensor Failure
       Lingdong Kong, Youquan Liu, Xin Li, Runnan Chen, Wenwei Zhang, Jiawei Ren, Liang Pan, Kai Chen, Ziwei Liu
       Best Paper Award (Track 2)
    2. Pedestrian Cross Forecasting with Hybrid Feature Fusion
       Meng Dong
    3. Benchmarking Bird's Eye View Detection Robustness to Real-World Corruptions
       Shaoyuan Xie, Lingdong Kong, Wenwei Zhang, Jiawei Ren, Liang Pan, Kai Chen, Ziwei Liu
    4. Improving Data Augmentation for Multi-Modality 3D Object Detection
       Wenwei Zhang, Zhe Wang, Chen Change Loy
    5. aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception
       Tamas Matuszka
    6. SaFormer: A Conditional Sequence Modeling Approach to Offline Safe Reinforcement Learning
       Qin Zhang, Linrui Zhang, Li Shen, Haoran Xu, Bowen Wang, Xueqian Wang, Bo Yuan, Yongzhe Chang
    7. Prototypical Context-aware Dynamics Generalization for High-dimensional Model-based Reinforcement Learning
       Junjie Wang, Yao Mu, Dong Li, Qichao Zhang, Dongbin Zhao, Yuzheng Zhuang, Ping Luo, Bin Wang, Jianye Hao
    8. Neural MPC-based Decision-making Framework for Autonomous Driving in Multi-Lane Roundabout
       Yao Mu, Zhiqian Lan, Chang Liu, Ping Luo, Shengbo Eben Li
    9. Label Calibration for Semantic Segmentation Under Domain Shift
       Ondrej Bohdal, Da Li, Timothy Hospedales
    10. Semi-Supervised LiDAR Semantic Segmentation with Spatial Consistency Training
        Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu
    11. CRN: Camera Radar Net for Accurate, Robust, Efficient 3D Perception
        Youngseok Kim, Sanmin Kim, Juyeb Shin, Jun Won Choi, Dongsuk Kum
    12. Use TensorRT network definition API and plugin to deploy 3D object detection algorithm SE-SSD
        Jingyue Guo

    Organizers