CMoE: Contrastive Mixture of Experts for Motion Control and Terrain Adaptation of Humanoid Robots

Shihao Ma 1 , Hongjin Chen 1 , Zijun Xu 1 , Yi Zhao 1 , Ke Wu 1 , Ruichen Yang 1 , Leyao Zou 1 , Zhongxue Gan 1,† , Wenchao Ding 1,†
1 Fudan University
† Corresponding authors

Overview

Abstract


For effective deployment in real-world environments, humanoid robots must autonomously traverse a diverse range of complex terrains with abrupt transitions. While the vanilla mixture-of-experts (MoE) framework is theoretically capable of modeling diverse terrain features, in practice the gating network exhibits nearly uniform expert activations across different terrains, weakening expert specialization and limiting the model's expressive power. To address this limitation, we introduce CMoE, a novel single-stage reinforcement learning framework that integrates contrastive learning to refine expert activation distributions. By imposing contrastive constraints, CMoE maximizes the consistency of expert activations within the same terrain while minimizing their similarity across different terrains, thereby encouraging experts to specialize in distinct terrain types. We validate our approach on the Unitree G1 humanoid robot through a series of challenging experiments. Results demonstrate that CMoE enables the robot to traverse continuous steps up to 20 cm high and gaps up to 80 cm wide while achieving robust, natural gaits across diverse mixed terrains, surpassing the limits of existing methods. To support further research and foster community development, we will release our code publicly.
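The contrastive constraint described above can be sketched as an InfoNCE-style loss over the gating network's expert-activation vectors: activations from the same terrain are treated as positive pairs and pulled together, while activations from different terrains are pushed apart. The snippet below is a minimal illustrative sketch in NumPy, not the paper's exact formulation; the function name, the cosine-similarity choice, and the temperature value are our own assumptions.

```python
import numpy as np

def contrastive_gating_loss(gates, terrain_ids, temperature=0.1):
    """InfoNCE-style contrastive loss over expert-gating vectors.

    Pulls together gating distributions produced on the same terrain and
    pushes apart those from different terrains. `gates` is an (N, E) array,
    each row a distribution over E experts; `terrain_ids` is (N,).
    Illustrative sketch only -- not the paper's exact objective.
    """
    g = gates / np.linalg.norm(gates, axis=1, keepdims=True)  # L2-normalize rows
    sim = (g @ g.T) / temperature                             # pairwise cosine similarities
    n = len(terrain_ids)
    self_mask = np.eye(n, dtype=bool)
    # positives: same terrain, excluding the sample itself
    pos = (terrain_ids[:, None] == terrain_ids[None, :]) & ~self_mask

    # row-wise log-softmax, masking out self-similarity
    sim_masked = np.where(self_mask, -np.inf, sim)
    log_prob = sim_masked - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))

    # negative mean log-probability of positive pairs
    losses = [-log_prob[i, pos[i]].mean() for i in range(n) if pos[i].any()]
    return float(np.mean(losses))
```

Under this loss, a batch whose gating vectors agree within each terrain scores lower than one where the gating network activates experts uniformly regardless of terrain, which is the failure mode the abstract attributes to vanilla MoE.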

Framework


Our main contributions are summarized as follows:

BibTeX citation

If you find our work useful, please consider citing our paper:

@article{ma2026cmoe,
  title={CMoE: Contrastive Mixture of Experts for Motion Control and Terrain Adaptation of Humanoid Robots},
  author={Shihao Ma and Hongjin Chen and Zijun Xu and Yi Zhao and Ke Wu and Ruichen Yang and Leyao Zou and Zhongxue Gan and Wenchao Ding},
  journal={arXiv preprint arXiv:2603.03067},
  year={2026},
  url={https://arxiv.org/abs/2603.03067}
}