With the rapid advances of digital avatars, generating human-object interactions (HOIs) has become a critical task. However, existing datasets are typically limited to humans interacting with a single object, neglecting the ubiquitous manipulation of multiple objects.
Thus, we propose HIMO, a large-scale MoCap dataset of full-body humans interacting with multiple objects, containing 3.3K 4D HOI sequences and 4.08M 3D HOI frames. We further annotate HIMO with detailed textual descriptions and temporal segments, benchmarking two novel tasks: HOI synthesis conditioned on the whole text prompt, and HOI synthesis conditioned on segmented text prompts for fine-grained timeline control.
To address these novel tasks, we propose a dual-branch conditional diffusion model with a mutual interaction module for HOI synthesis. In addition, we design an auto-regressive generation pipeline to obtain smooth transitions between consecutive HOI segments. Experimental results demonstrate that our method generalizes to unseen object geometries and temporal compositions.
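To make the dual-branch idea concrete, below is a minimal PyTorch sketch of one plausible instantiation: two transformer branches denoise the human and object motion tokens, and a mutual interaction module exchanges information between them via bidirectional cross-attention. The feature dimensions (263-D human features, 12-D object features), the conditioning scheme, and all layer sizes are illustrative assumptions, not the exact architecture described in the paper.

```python
# A minimal sketch (assumed design, not the paper's exact architecture).
import torch
import torch.nn as nn

class MutualInteraction(nn.Module):
    """Bidirectional cross-attention between the two branches (assumption)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.h2o = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.o2h = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, h, o):
        h_upd, _ = self.o2h(h, o, o)  # human tokens attend to object tokens
        o_upd, _ = self.h2o(o, h, h)  # object tokens attend to human tokens
        return h + h_upd, o + o_upd

class DualBranchDenoiser(nn.Module):
    """Predicts the noise on human and object motion tokens, conditioned on
    the diffusion timestep and a text embedding (e.g. from a CLIP encoder)."""
    def __init__(self, h_dim=263, o_dim=12, dim=256, n_layers=4):
        super().__init__()
        self.h_in, self.o_in = nn.Linear(h_dim, dim), nn.Linear(o_dim, dim)
        self.t_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.text_proj = nn.Linear(512, dim)  # 512-D text embedding assumed
        self.h_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, 4, batch_first=True) for _ in range(n_layers)])
        self.o_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, 4, batch_first=True) for _ in range(n_layers)])
        self.interact = nn.ModuleList([MutualInteraction(dim) for _ in range(n_layers)])
        self.h_out, self.o_out = nn.Linear(dim, h_dim), nn.Linear(dim, o_dim)

    def forward(self, h_noisy, o_noisy, t, text_emb):
        # Prepend one (timestep + text) condition token to each branch.
        cond = (self.t_emb(t[:, None].float()) + self.text_proj(text_emb))[:, None]
        h = torch.cat([cond, self.h_in(h_noisy)], dim=1)
        o = torch.cat([cond, self.o_in(o_noisy)], dim=1)
        for h_layer, o_layer, mix in zip(self.h_layers, self.o_layers, self.interact):
            h, o = h_layer(h), o_layer(o)
            h, o = mix(h, o)  # mutual interaction after every layer
        return self.h_out(h[:, 1:]), self.o_out(o[:, 1:])  # drop condition token

# Example: denoise 60-frame sequences for a batch of 2 prompts.
model = DualBranchDenoiser()
eps_h, eps_o = model(torch.randn(2, 60, 263), torch.randn(2, 60, 12),
                     torch.randint(0, 1000, (2,)), torch.randn(2, 512))
```

The auto-regressive pipeline can likewise be sketched as generating one segment per text prompt while seeding each segment's first frames with the previous segment's tail, an inpainting-style trick commonly used for smooth transitions in motion diffusion. The sampler interface `sample_fn`, the segment length, and the overlap length below are hypothetical.

```python
# Hypothetical auto-regressive stitching between segments. `sample_fn` is an
# assumed inpainting-capable sampler: it generates one segment while keeping
# the frames marked by `mask` fixed to the values in `known`.
@torch.no_grad()
def generate_timeline(sample_fn, text_embs, seg_len=60, overlap=10):
    h_prev, o_prev, h_out, o_out = None, None, [], []
    for emb in text_embs:  # one embedding per segmented text prompt
        known_h = torch.zeros(1, seg_len, 263)
        known_o = torch.zeros(1, seg_len, 12)
        mask = torch.zeros(1, seg_len, 1)
        if h_prev is not None:  # seed the start with the previous tail
            known_h[:, :overlap] = h_prev[:, -overlap:]
            known_o[:, :overlap] = o_prev[:, -overlap:]
            mask[:, :overlap] = 1.0
        h_seg, o_seg = sample_fn(emb, (known_h, known_o), mask)
        start = 0 if h_prev is None else overlap  # avoid duplicating the overlap
        h_out.append(h_seg[:, start:])
        o_out.append(o_seg[:, start:])
        h_prev, o_prev = h_seg, o_seg
    return torch.cat(h_out, dim=1), torch.cat(o_out, dim=1)
```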
@misc{lv2024himonewbenchmarkfullbody,
  title={HIMO: A New Benchmark for Full-Body Human Interacting with Multiple Objects},
  author={Xintao Lv and Liang Xu and Yichao Yan and Xin Jin and Congsheng Xu and Shuwen Wu and Yifan Liu and Lincheng Li and Mengxiao Bi and Wenjun Zeng and Xiaokang Yang},
  year={2024},
  eprint={2407.12371},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.12371},
}