(1) University of California, Berkeley (2) NVIDIA (3) University of Toronto
Abstract
The incredible feats of athleticism demonstrated by humans are
made possible in part by a vast repertoire of general-purpose
motor skills, acquired through years of practice and experience.
These skills not only enable humans to perform complex tasks,
but also provide powerful priors for guiding their behaviors
when learning new tasks. This stands in stark contrast to
common practice in physics-based character animation, where
control policies are typically trained from scratch for
each task. In this work, we present a large-scale data-driven
framework for learning versatile and reusable skill embeddings
for physically simulated characters. Our approach combines
techniques from adversarial imitation learning and unsupervised
reinforcement learning to develop skill embeddings that produce
life-like behaviors, while also providing an easy-to-control
representation for use on new downstream tasks. Our models can
be trained using large datasets of unstructured motion clips,
without requiring any task-specific annotation or segmentation
of the motion data. By leveraging a massively parallel GPU-based
simulator, we are able to train skill embeddings using over a
decade of simulated experience, enabling our model to learn a
rich and versatile repertoire of skills. We show that a single
pre-trained model can be effectively applied to perform a diverse
set of new tasks. Our system also allows users to specify tasks
through simple reward functions, and the skill embedding then
enables the character to automatically synthesize complex and
naturalistic strategies to achieve the task objectives.
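To make the two-stage structure described above concrete, the following is a minimal, hypothetical sketch of how a frozen pre-trained skill embedding might be reused on a downstream task: a low-level policy decodes a latent skill code into motor actions, while a small task-specific high-level controller only outputs latent commands. All names (`LowLevelPolicy`, `HighLevelPolicy`, the dimensions, and the unit-hypersphere latent normalization) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: how a frozen skill embedding can be driven by a
# task-specific high-level controller. All classes and weights are
# hypothetical placeholders, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)


class LowLevelPolicy:
    """Pre-trained, frozen skill decoder: (state, latent z) -> action.

    In the framework described above, this policy would be trained once on a
    large unstructured motion dataset via adversarial imitation and
    unsupervised RL. Here its weights are random placeholders.
    """

    def __init__(self, state_dim, latent_dim, action_dim):
        self.W_s = rng.standard_normal((action_dim, state_dim)) * 0.1
        self.W_z = rng.standard_normal((action_dim, latent_dim)) * 0.1

    def act(self, state, z):
        # The latent z selects which skill the character performs.
        return np.tanh(self.W_s @ state + self.W_z @ z)


class HighLevelPolicy:
    """Task-specific controller trained on a new downstream task.

    Instead of producing joint torques directly, it outputs a latent skill
    command z for the frozen low-level policy, so a simple task reward can
    be translated into life-like full-body behavior.
    """

    def __init__(self, state_dim, latent_dim):
        self.W = np.zeros((latent_dim, state_dim))  # trained with task reward

    def command(self, state):
        z = self.W @ state
        # Normalizing latents to the unit hypersphere is one common design
        # choice for skill embeddings; assumed here for illustration.
        return z / (np.linalg.norm(z) + 1e-8)


# Downstream use: only the high-level policy sees the task reward.
state_dim, latent_dim, action_dim = 8, 4, 6
low = LowLevelPolicy(state_dim, latent_dim, action_dim)  # frozen
high = HighLevelPolicy(state_dim, latent_dim)            # trainable

state = rng.standard_normal(state_dim)
z = high.command(state)       # high level picks a skill
action = low.act(state, z)    # low level realizes it as motor commands
print(action.shape)           # (6,) action for the simulated character
```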