Chen Tessler (1) Yifeng Jiang (1) Erwin Coumans (1) Zhengyi Luo (1) Gal Chechik (1) Xue Bin Peng (1, 2)
(1) NVIDIA (2) Simon Fraser University
Abstract
We tackle the challenges of synthesizing versatile, physically
simulated human motions for full-body object manipulation.
Unlike prior methods focused on detailed motion
tracking, trajectory following, or teleoperation, our framework
lets users specify high-level objectives such
as target object poses or body poses. To achieve this, we
introduce MaskedManipulator, a generative control policy
distilled from a tracking controller trained on large-scale
human motion capture data. This two-stage learning process
allows the system to perform complex interaction behaviors,
while providing intuitive user control over both character
and object motions. MaskedManipulator produces goal-directed
manipulation behaviors that expand the scope of interactive
animation systems beyond task-specific solutions.
@inproceedings{tessler2025maskedmanipulator,
  author    = {Tessler, Chen and Jiang, Yifeng and Coumans, Erwin and Luo, Zhengyi and Chechik, Gal and Peng, Xue Bin},
  title     = {MaskedManipulator: Versatile Whole-Body Manipulation},
  year      = {2025},
  booktitle = {ACM SIGGRAPH Asia 2025 Conference Proceedings}
}