Robotic Arm Orchestration
An advanced interactive environment designed to explore the frontier of human-machine collaboration in hybrid physical-digital spaces. The installation integrates 15 robot arms, each carrying a 4K display, with a unified software platform that enables natural and dynamic interaction across multiple modalities.

Orchestration System

The orchestration system coordinates the precise real-time movement, positioning, and content synchronization of all 15 robotic displays simultaneously, transforming a physical space into a dynamic, responsive canvas. The software stack handles:

  • Inverse kinematics computation for all 15 arms in real time
  • Collision avoidance through a hierarchical safety system operating at strategic, tactical, and reactive levels
  • Smooth trajectory planning with bezier curve interpolation and predictive rendering
  • Frame-perfect display synchronization with sub-10 ms latency from intent to physical motion
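The trajectory-planning step above can be sketched with a cubic Bézier evaluator; the function below is an illustrative minimal form, not the installation's actual planner, and the control-point layout is an assumption.

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    p0..p3 are same-dimension tuples (e.g. XYZ waypoints for one arm's
    end effector); p1 and p2 shape the easing between the endpoints.
    """
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Sample a smooth path between two display poses (hypothetical waypoints).
start, end = (0.0, 0.0, 1.0), (1.0, 0.5, 1.2)
ctrl_a, ctrl_b = (0.2, 0.0, 1.4), (0.8, 0.5, 1.4)  # lift mid-motion
path = [bezier_point(start, ctrl_a, ctrl_b, end, i / 10) for i in range(11)]
```

Sampling the curve at a fixed rate gives the motion controller evenly parameterized setpoints; in practice the samples would be re-timed for velocity and acceleration limits.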

Perception and Interaction

The architecture is built around a real-time event bus that connects perception systems (depth cameras, LiDAR, presence sensors) to the motion planning pipeline and content rendering engine. Users interact with the installation through gestures, voice commands, touch, and spatial presence, and the system responds by reconfiguring the physical layout of displays in three-dimensional space.
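A minimal synchronous sketch of such an event bus is shown below; topic names, payload shapes, and the single-threaded dispatch are assumptions for illustration, whereas a real-time system would dispatch asynchronously with bounded queues.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal topic-based pub/sub bus (illustrative, synchronous)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

# Hypothetical wiring: a perception event triggers a motion replan.
bus = EventBus()
replan_requests = []
bus.subscribe("perception/presence", lambda e: replan_requests.append(e))
bus.publish("perception/presence", {"sensor": "lidar", "zone": "near"})
```

Decoupling perception from planning through topics lets new sensors publish into the pipeline without the motion planner knowing about them.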

Invisible spatial zones around and between displays, called "interaction volumes," respond to different types of human presence: walking near a display triggers orientation toward the user; reaching out activates direct manipulation mode; stepping between two displays causes them to part and reveal a third.
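The zone logic can be sketched as a distance-based classifier; the threshold radii below are assumed values for illustration, not the installation's calibrated parameters.

```python
import math

# Assumed thresholds (metres) for this sketch.
REACH_RADIUS = 0.6      # within arm's reach: direct manipulation
PRESENCE_RADIUS = 1.8   # walking nearby: display orients toward user

def classify_presence(user_pos, display_pos):
    """Classify a user's position relative to one display's interaction volume."""
    distance = math.dist(user_pos, display_pos)
    if distance <= REACH_RADIUS:
        return "direct_manipulation"
    if distance <= PRESENCE_RADIUS:
        return "orient_toward_user"
    return "idle"
```

In the full system the volumes would also cover the space *between* displays (the part-and-reveal behavior), which needs a test against pairs of poses rather than a single distance.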

Content System

A virtual canvas system renders content in a continuous coordinate space, dynamically projecting it onto each physical display based on its real-time pose. The system accounts for perspective correction, edge blending between adjacent displays, and content priority when displays overlap from certain viewing angles. Predictive rendering pre-computes content for the next 200 ms of planned motion, so no frame is ever rendered in which the content disagrees with the physical layout.
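The predictive step can be sketched as pre-computing the canvas window each future pose will show; the 2-D canvas, linear motion model, and function names here are assumptions for illustration.

```python
def predicted_windows(center, velocity, width, height, horizon_s=0.2, steps=4):
    """Pre-compute the canvas rectangles a display will show over the horizon.

    center, velocity: (x, y) in canvas units (and units/s); motion is
    assumed linear over the short horizon. Returns one (left, bottom,
    right, top) window per step, ready to render ahead of the motion.
    """
    windows = []
    for i in range(1, steps + 1):
        t = horizon_s * i / steps
        cx = center[0] + velocity[0] * t
        cy = center[1] + velocity[1] * t
        windows.append((cx - width / 2, cy - height / 2,
                        cx + width / 2, cy + height / 2))
    return windows
```

Rendering these windows into a small buffer ahead of time means the compositor only ever samples frames that already match a planned pose, which is what keeps content and motion in lockstep.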

Applications

Use cases span immersive data visualization, collaborative design review, artistic installations, and accessibility-focused interaction research. The entire system is reconfigurable through a visual choreography editor, allowing researchers to design complex multi-robot behaviors without writing code.
