Sim-to-Real Transfer for Quadrupedal Locomotion via Terrain Transformer

Hang Lai1,2, Weinan Zhang1, Xialin He1, Chen Yu2,3, Zheng Tian4, Yong Yu1, Jun Wang2,5
1Dept. of Computer Sci. and Eng., Shanghai Jiao Tong University, China, 2Digital Brain Lab, Shanghai, China, 3School of Info. Sci. and Tech., ShanghaiTech University, China, 4School of Creativity and Art, ShanghaiTech University, China, 5Centre for Artificial Intelligence, University College London, UK

Abstract

    Deep reinforcement learning has recently emerged as an appealing alternative for legged locomotion over multiple terrains by training a policy in physical simulation and then transferring it to the real world (i.e., sim-to-real transfer). Despite considerable progress, the capacity and scalability of traditional neural networks are still limited, which may hinder their application in more complex environments. In contrast, the Transformer architecture has shown its superiority in a wide range of large-scale sequence-modeling tasks, including natural language processing and decision-making problems. In this paper, we propose Terrain Transformer (TERT), a high-capacity Transformer model for quadrupedal locomotion control on various terrains. Furthermore, to better leverage the Transformer in sim-to-real scenarios, we present a novel two-stage training framework consisting of an offline pretraining stage and an online correction stage, which naturally integrates the Transformer with privileged training. Extensive experiments in simulation demonstrate that TERT outperforms state-of-the-art baselines on different terrains in terms of return, energy consumption, and control smoothness. In further real-world validation, TERT successfully traverses nine challenging terrains, including a sand pit and descending stairs, which strong baselines cannot accomplish.
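To make the two-stage framework concrete, below is a minimal PyTorch sketch of how such a policy could be trained: a causal Transformer over a short observation history is first behavior-cloned offline against a privileged teacher policy, then corrected online on the states it visits itself (DAgger-style supervision). All names and interfaces here (TerrainTransformer, train_stage, teacher(obs, priv), and the env API) are illustrative assumptions based on the abstract, not the authors' released implementation.

import torch
import torch.nn as nn

class TerrainTransformer(nn.Module):
    """Hypothetical causal Transformer policy: a window of past
    observations is mapped to per-step actions; the last one is executed."""
    def __init__(self, obs_dim, act_dim, d_model=128, n_heads=4,
                 n_layers=4, context_len=16):
        super().__init__()
        self.context_len = context_len
        self.obs_embed = nn.Linear(obs_dim, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, context_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, obs_seq):                      # (B, T, obs_dim)
        T = obs_seq.size(1)
        x = self.obs_embed(obs_seq) + self.pos_embed[:, :T]
        # Upper-triangular -inf mask so each step attends only to the past.
        causal = torch.triu(torch.full((T, T), float('-inf'),
                                       device=obs_seq.device), diagonal=1)
        return self.head(self.encoder(x, mask=causal))  # (B, T, act_dim)

def train_stage(student, teacher, env, student_acts, steps, lr=3e-4):
    """One stage of the assumed two-stage scheme.
    student_acts=False: offline pretraining, the privileged teacher drives
    the robot and the student imitates its actions.
    student_acts=True: online correction, the student drives the robot and
    the teacher only labels the states the student actually visits."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    obs, priv = env.reset()                    # priv: sim-only privileged state
    window = [obs] * student.context_len       # pad history with the first obs
    for _ in range(steps):
        obs_seq = torch.stack(window).unsqueeze(0)   # (1, T, obs_dim)
        target = teacher(obs, priv).detach()         # teacher sees privileged info
        pred = student(obs_seq)[:, -1]               # student's action for "now"
        loss = nn.functional.mse_loss(pred, target.unsqueeze(0))
        opt.zero_grad(); loss.backward(); opt.step()
        act = pred.detach().squeeze(0) if student_acts else target
        obs, priv, done = env.step(act)
        window = window[1:] + [obs]
        if done:
            obs, priv = env.reset()
            window = [obs] * student.context_len

A full run would invoke the sketch twice: train_stage(student, teacher, env, student_acts=False, steps=N) for offline pretraining, then the same call with student_acts=True for online correction. For brevity the loop updates on single transitions; a practical implementation would batch stored trajectory windows.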

System Overview

[Figure: system overview of TERT.]

BibTeX

@misc{lai2023simtoreal,
      title={Sim-to-Real Transfer for Quadrupedal Locomotion via Terrain Transformer}, 
      author={Hang Lai and Weinan Zhang and Xialin He and Chen Yu and Zheng Tian and Yong Yu and Jun Wang},
      year={2023},
      eprint={2212.07740},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}