Optimizing Active Perception for Learning Simultaneous Viewpoint Selection and Manipulation with Diffusion Policy

Xiatao Sun, Francis Fan, Yinxing Chen, Daniel Rakita

Published: 2024/9/22

Abstract

Robotic manipulation tasks often rely on static cameras for perception, which can limit flexibility, particularly in scenarios like robotic surgery and cluttered environments where mounting static cameras is impractical. Ideally, robots could jointly learn a policy for dynamic viewpoint control and manipulation. However, dynamic viewpoint control requires additional degrees of freedom and intricate coordination with manipulation, which makes policy learning more challenging than for single-arm manipulation alone. To address this complexity, we propose an integrated learning framework that combines diffusion policy with a novel look-at inverse kinematics solver for active perception. Our framework helps better coordinate perception and manipulation: it automatically optimizes camera orientation for viewpoint selection, allowing the policy to focus on essential manipulation and positioning decisions. We demonstrate that our integrated approach achieves superior performance and learning efficiency compared to directly applying diffusion policies to configuration space or end-effector space with various rotation representations. Further analysis suggests that these performance differences are driven by inherent variations in the high-frequency components across different state-action spaces.
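The abstract does not detail the look-at inverse kinematics solver, but the underlying idea of automatically orienting the camera toward a point of interest can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's solver: it assumes the camera's optical axis is its local +z axis, uses a fixed world up vector to resolve roll, and introduces a hypothetical helper `look_at_rotation`.

```python
import numpy as np

def look_at_rotation(camera_pos, target_pos, world_up=np.array([0.0, 0.0, 1.0])):
    """Return a rotation matrix whose +z (optical) axis points from
    camera_pos toward target_pos. Conventions here are illustrative
    assumptions, not the paper's exact formulation."""
    forward = target_pos - camera_pos
    forward = forward / np.linalg.norm(forward)
    # Right axis: orthogonal to the viewing direction and the world up vector.
    right = np.cross(world_up, forward)
    if np.linalg.norm(right) < 1e-8:  # viewing direction parallel to up: pick a fallback axis
        right = np.cross(np.array([0.0, 1.0, 0.0]), forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)  # completes the right-handed camera frame
    # Columns are the camera frame axes expressed in the world frame.
    return np.column_stack([right, up, forward])

# Example: a wrist camera at (0.5, 0.0, 0.4) oriented toward a point on the table.
R = look_at_rotation(np.array([0.5, 0.0, 0.4]), np.array([0.2, 0.1, 0.0]))
```

In a full solver, an orientation like this would be converted to joint targets by inverse kinematics, so the learned policy only needs to output where the camera and end-effector should be, not how the camera should be oriented.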
