Frame-accurate spatial annotations, action keyframes, and episode management — export directly to LeRobot, COCO, RLDS, and HDF5. Built for teams training embodied AI.

Draw bounding boxes, polygons, and keypoints with frame-accurate precision. SAM 2 integration segments objects with a single click — no cloud dependency, runs entirely in-browser.
Define task episodes with start/end frame markers, then annotate action keyframes with schema-driven properties — gripper state, joint angles, and custom robot configurations.
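Conceptually, an episode is a frame range plus a list of schema-validated keyframes. A minimal sketch in Python (class and field names are illustrative, not the product's internal model):

```python
from dataclasses import dataclass, field

@dataclass
class ActionKeyframe:
    """One annotated action sample; property names follow the active schema."""
    frame: int
    properties: dict  # e.g. {"gripper_state": "closed", "joint_angles": [...]}

@dataclass
class Episode:
    """A task episode bounded by start/end frame markers."""
    task: str
    start_frame: int
    end_frame: int
    keyframes: list = field(default_factory=list)

    def add_keyframe(self, frame: int, **properties) -> ActionKeyframe:
        # Keyframes must land inside the episode's marked range.
        if not (self.start_frame <= frame <= self.end_frame):
            raise ValueError("keyframe outside episode bounds")
        kf = ActionKeyframe(frame, properties)
        self.keyframes.append(kf)
        return kf

# Mark an episode, then annotate a gripper-close keyframe inside it.
ep = Episode(task="pick_cube", start_frame=120, end_frame=480)
ep.add_keyframe(240, gripper_state="closed",
                joint_angles=[0.0, -0.5, 1.2, 0.0, 0.3, 0.0])
```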
Export annotated datasets directly to the formats consumed by leading robot learning frameworks. A Temporal-orchestrated pipeline handles validation, collection, and packaging.
Concurrent editing with exclusive lock acquisition. Every annotation action is logged to an append-only audit trail for compliance and reproducibility.
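The locking model is simple to state: one editor per video at a time, and every state change appended to an immutable log. A minimal in-process sketch (names are illustrative; the real system persists the trail server-side):

```python
import json
import threading
import time

class AnnotationSession:
    """Sketch: exclusive per-video locks plus an append-only audit trail."""

    def __init__(self):
        self._locks = {}              # video_id -> user holding the lock
        self._mutex = threading.Lock()
        self.audit_log = []           # append-only; entries are never mutated

    def acquire(self, video_id: str, user: str) -> bool:
        with self._mutex:
            if video_id in self._locks:
                return False          # someone else is already editing
            self._locks[video_id] = user
            self._append("lock_acquired", video_id, user)
            return True

    def release(self, video_id: str, user: str) -> None:
        with self._mutex:
            if self._locks.get(video_id) == user:
                del self._locks[video_id]
                self._append("lock_released", video_id, user)

    def _append(self, action: str, video_id: str, user: str) -> None:
        self.audit_log.append(json.dumps(
            {"ts": time.time(), "action": action,
             "video": video_id, "user": user}))

s = AnnotationSession()
s.acquire("vid-001", "alice")   # succeeds
s.acquire("vid-001", "bob")     # rejected: alice holds the lock
```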

Choose a robot configuration preset or define a custom action schema.

Drag and drop MP4, MOV, or WebM files. Automatic VFR detection and transcoding.
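The idea behind VFR detection: if the intervals between consecutive frame timestamps (e.g. packet PTS values from a probe of the container) are not uniform, the file is variable frame rate and needs transcoding before frame-accurate labeling. A hedged sketch, with an illustrative function name and tolerance:

```python
def is_vfr(frame_timestamps, tolerance=1e-3):
    """Flag variable frame rate: inter-frame intervals differing by more
    than `tolerance` seconds indicate VFR footage."""
    deltas = [b - a for a, b in zip(frame_timestamps, frame_timestamps[1:])]
    return max(deltas) - min(deltas) > tolerance

cfr = [i / 30 for i in range(10)]            # steady 30 fps
vfr = [0.0, 0.033, 0.070, 0.095, 0.140]      # uneven intervals
```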

Label frames with spatial tools, mark episodes, then export to LeRobot or COCO.
Start labeling immediately with action schemas designed for common robot embodiments. Each preset defines joints, end-effectors, cameras, and export dimensions.
6-DOF arm, parallel-jaw gripper, 1 camera
2x 6-DOF arms, 3 cameras
6-DOF arm, vacuum end-effector, 1 camera
Define your own actions and properties
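A custom schema declares the same things a preset does: joints, end-effector, cameras, and the per-keyframe properties the labeler may fill in. An illustrative shape (the field names here are assumptions, not the product's exact schema format):

```python
# Hypothetical custom schema for a 4-axis SCARA palletizer.
CUSTOM_SCHEMA = {
    "name": "scara_palletizer",
    "joints": ["j1", "j2", "z", "wrist"],
    "end_effector": {"type": "vacuum", "states": ["on", "off"]},
    "cameras": ["overhead"],
    "properties": {
        "joint_angles": {"type": "float[]", "length": 4},
        "suction": {"type": "enum", "values": ["on", "off"]},
    },
}

def validate_keyframe(schema: dict, properties: dict) -> bool:
    """Reject keyframe properties that the schema does not declare."""
    unknown = set(properties) - set(schema["properties"])
    if unknown:
        raise ValueError(f"unknown properties: {sorted(unknown)}")
    return True

validate_keyframe(CUSTOM_SCHEMA, {"suction": "on"})   # passes
```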
Every tool, mode switch, and navigation action is mapped to a keyboard shortcut. Frame-accurate J/K/L shuttle control and single-key tool switching keep your hands on the keyboard.

Parquet + trimmed video clips for Hugging Face LeRobot training pipelines
Instance segmentation format for object detection and segmentation models
TFRecord sequential data for Google RT-X and TensorFlow agents
Hierarchical scientific format for custom training loops and analysis
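To make the target formats concrete, here is a minimal sketch of what a COCO-style instance export contains: an `images` list, an `annotations` list with `bbox`/`area`/`iscrowd` fields, and a `categories` list. This is a simplified illustration; a real exporter also emits segmentation masks and the full category taxonomy:

```python
import json

def to_coco(video_name, frames, annotations):
    """Pack per-frame boxes into a minimal COCO-style dict.

    frames: {frame_index: (width, height)}
    annotations: list of (frame_index, category_id, [x, y, w, h]) tuples
    """
    images = [{"id": idx, "file_name": f"{video_name}_{idx:06d}.jpg",
               "width": w, "height": h}
              for idx, (w, h) in sorted(frames.items())]
    anns = [{"id": i + 1, "image_id": f, "category_id": c,
             "bbox": box, "area": box[2] * box[3], "iscrowd": 0}
            for i, (f, c, box) in enumerate(annotations)]
    return {"images": images, "annotations": anns,
            "categories": [{"id": 1, "name": "object"}]}

coco = to_coco("vid-001", {0: (1280, 720)}, [(0, 1, [100, 50, 40, 80])])
json.dumps(coco)  # serializes cleanly for a <video>_coco.json export
```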
Tell us about your robot learning project and we'll get you set up with the labeling engine.
Or email us directly at contact@jnana.info