Learning Manipulation by Predicting Interaction

Jia Zeng*1    Qingwen Bu*2,1    Bangjun Wang*2,1    Wenke Xia*3,1   
Li Chen1    Hao Dong4    Haoming Song1    Dong Wang1    Di Hu3    Ping Luo1   
Heming Cui1    Bin Zhao1,5    Xuelong Li1,6    Yu Qiao1    Hongyang Li1,2
1Shanghai AI Lab          2Shanghai Jiao Tong University          3Renmin University of China
4Peking University          5Northwestern Polytechnical University          6TeleAI, China Telecom Corp Ltd

RSS 2024

MPI is an interaction-oriented representation learning pipeline for robotic manipulation. Diverging from prior art grounded in (a) Contrastive Learning, (b) Masked Signal Modeling, or (c) Video Prediction using random frames, our approach in (d) directs the model to predict transition frames and detect the manipulated object, given keyframes as input. As such, the model develops a better comprehension of “how-to-interact” and “where-to-interact”. MPI acquires more informative representations during pre-training and achieves clear improvements across downstream tasks.
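
To make the two pre-training signals concrete, the sketch below shows what a single training sample could look like under this formulation: a pair of keyframes and a language instruction as input, with a transition frame (“how-to-interact”) and an interaction-object box (“where-to-interact”) as targets. This is a minimal illustration with hypothetical field and function names, not the actual MPI data pipeline.

# One pre-training sample as implied by the pipeline above; all names are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class InteractionSample:
    initial_frame: np.ndarray      # (H, W, 3) keyframe at the start of the interaction
    final_frame: np.ndarray        # (H, W, 3) keyframe at the end of the interaction
    instruction: str               # e.g. "take the spatula off the shelf"
    transition_frame: np.ndarray   # (H, W, 3) target frame between the two keyframes
    object_box: np.ndarray         # (4,) target box (x1, y1, x2, y2) of the manipulated object

def make_sample(clip: np.ndarray, boxes: np.ndarray, instruction: str) -> InteractionSample:
    """Build one sample from a clip (T, H, W, 3) with per-frame object boxes (T, 4)."""
    t_mid = len(clip) // 2  # illustrative choice of the transition frame
    return InteractionSample(
        initial_frame=clip[0],
        final_frame=clip[-1],
        instruction=instruction,
        transition_frame=clip[t_mid],
        object_box=boxes[t_mid],
    )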

Abstract

Representation learning approaches for robotic manipulation have boomed in recent years. Due to the scarcity of in-domain robot data, prevailing methodologies tend to leverage large-scale human video datasets to extract generalizable features for visuomotor policy learning. Despite the progress achieved, prior endeavors disregard the interactive dynamics that capture behavior patterns and physical interaction during the manipulation process, resulting in an inadequate understanding of the relationship between objects and the environment. To this end, we propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction (MPI) and enhances the visual representation. Given a pair of keyframes representing the initial and final states, together with a language instruction, our algorithm predicts the transition frame and detects the interaction object. These two learning objectives yield superior comprehension of “how-to-interact” and “where-to-interact”. We conduct a comprehensive evaluation on several challenging robotic tasks. The experimental results demonstrate that MPI achieves remarkable improvements of 10% to 64% over previous state-of-the-art methods on real-world robot platforms as well as in simulation environments.

Model Overview


MPI comprises a multi-modal transformer encoder together with transformer decoders designed for predicting the image of the target interaction state and for detecting the interaction object, respectively. We achieve synergistic modeling and optimization of the two tasks through information transition between the prediction and detection transformers. The decoders are engaged only during the pre-training phase and are discarded for downstream adaptation.
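
The sketch below mirrors this layout at a high level: a shared multi-modal encoder over the two keyframes and the instruction, a prediction decoder that reconstructs the transition frame, and a detection decoder that regresses the interaction-object box, with the prediction tokens fed into the detection branch as a simple stand-in for the information transition between the two. Module names, sizes, and the exact form of that information exchange are assumptions for illustration, not the released implementation.

# A minimal, hypothetical encoder-decoder sketch of the layout described above.
import torch
import torch.nn as nn

class MPIStyleModel(nn.Module):
    def __init__(self, dim=256, patches=196, vocab=1000):
        super().__init__()
        self.patch_embed = nn.Linear(16 * 16 * 3, dim)           # toy patch embedding
        self.text_embed = nn.Embedding(vocab, dim)                # toy language embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=6)
        self.predict_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
        self.detect_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
        self.frame_head = nn.Linear(dim, 16 * 16 * 3)             # reconstruct transition-frame patches
        self.box_head = nn.Linear(dim, 4)                         # (x1, y1, x2, y2) of the interaction object
        self.query = nn.Parameter(torch.zeros(1, patches, dim))   # queries for the prediction branch
        self.obj_query = nn.Parameter(torch.zeros(1, 1, dim))     # single object query for detection

    def forward(self, initial_patches, final_patches, text_ids):
        # Multi-modal encoding of the two keyframes and the instruction.
        tokens = torch.cat([self.patch_embed(initial_patches),
                            self.patch_embed(final_patches),
                            self.text_embed(text_ids)], dim=1)
        memory = self.encoder(tokens)
        b = memory.size(0)
        # Prediction branch: transition-frame patches ("how-to-interact").
        pred_tokens = self.predict_decoder(self.query.expand(b, -1, -1), memory)
        transition = self.frame_head(pred_tokens)
        # Detection branch reuses the prediction tokens as extra context (a stand-in
        # for the information transition), then predicts the object box ("where-to-interact").
        det_memory = torch.cat([memory, pred_tokens], dim=1)
        obj_token = self.detect_decoder(self.obj_query.expand(b, -1, -1), det_memory)
        box = self.box_head(obj_token).sigmoid().squeeze(1)
        return transition, box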

Real-world Visuomotor Control Tasks

Take the spatula off the shelf (2x speed)

Lift up the pot lid (2x speed)

Close the drawer (2x speed)

Put pot into sink (2x speed)

Success Rates of Real-World Experiments

To comprehensively evaluate the effectiveness of different pre-trained encoders, we design two distinct scenarios with varying levels of complexity. The first scenario consists of ten diverse manipulation tasks against a clean background, requiring fundamental manipulation skills such as Pick & Place and articulated object manipulation. In addition, we construct a more challenging kitchen environment that incorporates various interfering objects and backgrounds relevant to the target tasks. In this environment, we present five tasks: 1) taking the spatula off the shelf, 2) putting the pot into the sink, 3) putting the banana into the drawer, 4) lifting the lid, and 5) closing the drawer. As shown in Fig. 3(a), the complexity of these scenarios requires the visual encoder to possess both the “how-to-interact” and “where-to-interact” abilities to handle these tasks effectively.


Analysis: Real-World Experiment Details


Robustness to Real-World Distractions

To genuinely reflect the capabilities of various encoder architectures in data-efficient robotic learning within real-world environments, we develop a series of complex manipulation tasks in both a kitchen environment (5 tasks) and a clean background (10 tasks). These complex scenarios require the visual encoder to possess not only the “how-to-interact” but also the “where-to-interact” ability. Some examples are shown below.

Task 1: Background Distraction (Put banana into drawer)

Original Setting (Real-time)

R3M (Real-time)

MPI (Ours) (Real-time)

Task 2: Object Variation (Lift up the pot lid)

Original Setting (Real-time)

R3M (Real-time)

MPI (Ours) (Real-time)

Generalization Experiment Results


More Generalization Evaluation in the Real World


Simulation Visuomotor Control Tasks

Previous studies have established imitation learning for visuomotor control in simulation as the standard evaluation method. This enables direct comparisons with prior works and focuses on assessing the sample-efficient generalization of visual representations and their impact on learning policies from limited demonstrations. We conduct this evaluation to compare the capabilities of different representations in acquiring both the knowledge of “where-to-interact” and “how-to-interact” in complex simulation environments.
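
As a rough illustration of this protocol, the sketch below freezes a pre-trained visual encoder and fits only a small policy head on a handful of demonstrations via behavior cloning. The encoder interface, feature size, and loss are assumptions chosen for brevity rather than the exact benchmark setup.

# A minimal behavior-cloning sketch of the frozen-representation evaluation; names are illustrative.
import torch
import torch.nn as nn

def train_policy(encoder: nn.Module, demos, action_dim=7, epochs=100, lr=1e-3):
    """demos: iterable of (image_batch, action_batch) tensors from a few demonstrations."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)  # the representation stays fixed; only the policy adapts

    feat_dim = 512  # assumed output size of the frozen encoder
    policy = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, action_dim))
    optim = torch.optim.Adam(policy.parameters(), lr=lr)

    for _ in range(epochs):
        for images, actions in demos:
            with torch.no_grad():
                feats = encoder(images)  # visual features from the frozen encoder
            loss = nn.functional.mse_loss(policy(feats), actions)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return policy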


BibTeX

If you find the project helpful for your research, please consider citing our paper:
@inproceedings{zeng2024mpi,
  title={Learning Manipulation by Predicting Interaction},
  author={Zeng, Jia and Bu, Qingwen and Wang, Bangjun and Xia, Wenke and Chen, Li and Dong, Hao and Song, Haoming and Wang, Dong and Hu, Di and Luo, Ping and Cui, Heming and Zhao, Bin and Li, Xuelong and Qiao, Yu and Li, Hongyang},
  booktitle={Proceedings of Robotics: Science and Systems (RSS)},
  year={2024}
}