Jnana Labeling Engine

Label video data
for robot learning.

Frame-accurate spatial annotations, action keyframes, and episode management — export directly to LeRobot, COCO, RLDS, and HDF5. Built for teams training embodied AI.

SAM 2 In-Browser | Multi-Format Export | Collaborative Editing | Robot Config Presets
Jnana Labeling Engine workspace showing video annotation editor with timeline, canvas tools, and track management
Capabilities

Everything you need to label
robot training data.

ANNOTATE

Spatial Annotation Tools

Draw bounding boxes, polygons, and keypoints with frame-accurate precision. SAM 2 integration segments objects with a single click — no cloud dependency, runs entirely in-browser.

  • Bounding boxes, polygons, keypoints
  • SAM 2 in-browser segmentation
  • Magic wand flood-fill selection
  • Linear interpolation across frames
  • Track-based object identity
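Linear interpolation across frames works by filling every frame between two keyframes with a blend of their values. A minimal sketch, assuming boxes are `(x, y, w, h)` tuples; the function names and representation are illustrative, not the engine's internal API:

```python
def lerp_box(box_a, box_b, t):
    """Linearly interpolate two (x, y, w, h) boxes; t in [0, 1].
    Illustrative sketch -- not the engine's actual representation."""
    return tuple(a + (b - a) * t for a, b in zip(box_a, box_b))

def interpolate_track(kf_a, kf_b):
    """Fill every frame between two (frame, box) keyframes."""
    (f0, box0), (f1, box1) = kf_a, kf_b
    span = f1 - f0
    return {f: lerp_box(box0, box1, (f - f0) / span) for f in range(f0, f1 + 1)}

# A box drifting 50px right between frame 10 and frame 20:
frames = interpolate_track((10, (0, 0, 100, 100)), (20, (50, 0, 100, 100)))
```

At frame 15, halfway between the keyframes, the box sits at x = 25 with size unchanged.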
EPISODES

Action & Episode System

Define task episodes with start/end frame markers, then annotate action keyframes with schema-driven properties — gripper state, joint angles, and custom robot configurations.

  • Episode bracketing (Shift+I/O)
  • Action keyframe properties
  • Schema-driven: boolean, numeric, enum
  • Robot arm presets (SO-100, bimanual)
  • Three-mode editor: Annotate / Episodes / Keyframes
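To make "schema-driven: boolean, numeric, enum" concrete, here is a hypothetical action schema and a minimal validator in that spirit. The action and property names (`grasp`, `gripper_width_mm`, etc.) are made up for illustration; the shipped schema format may differ.

```python
# Hypothetical action schema using the three property types above.
PICK_AND_PLACE_SCHEMA = {
    "actions": [
        {
            "name": "grasp",
            "properties": {
                "gripper_closed": {"type": "boolean"},
                "gripper_width_mm": {"type": "numeric", "min": 0, "max": 80},
                "approach": {"type": "enum", "values": ["top", "side", "angled"]},
            },
        }
    ]
}

def validate(prop_schema, value):
    """Minimal validation of one keyframe property against its schema."""
    t = prop_schema["type"]
    if t == "boolean":
        return isinstance(value, bool)
    if t == "numeric":
        return isinstance(value, (int, float)) and \
            prop_schema["min"] <= value <= prop_schema["max"]
    if t == "enum":
        return value in prop_schema["values"]
    return False
```

Each action keyframe then carries a value per property, checked against the schema at save time.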
EXPORT

Export to Training Formats

Export annotated datasets directly to formats consumed by leading robot learning frameworks. A Temporal-orchestrated pipeline handles validation, collection, and packaging.

  • LeRobot v3 (Parquet + trimmed clips)
  • COCO instance segmentation
  • RLDS (TFRecord sequential)
  • HDF5 scientific format
  • Progress tracking & download
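As one concrete target, COCO instance segmentation is a plain JSON document with `images`, `categories`, and `annotations` arrays. The layout below follows the standard COCO format; the specific image and category values are made up for illustration:

```python
import json

# Minimal COCO-style instance-segmentation document (standard layout;
# the specific values here are illustrative, not real export output).
coco = {
    "images": [{"id": 1, "file_name": "frame_000042.jpg", "width": 1280, "height": 720}],
    "categories": [{"id": 1, "name": "gripper", "supercategory": "robot"}],
    "annotations": [{
        "id": 1,
        "image_id": 1,
        "category_id": 1,
        "bbox": [100, 200, 50, 80],  # [x, y, width, height]
        "area": 50 * 80,
        "segmentation": [[100, 200, 150, 200, 150, 280, 100, 280]],  # polygon
        "iscrowd": 0,
    }],
}
doc = json.dumps(coco)
```

Polygon annotations from the editor map directly onto the `segmentation` field; boxes map onto `bbox`.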
COLLAB

Collaboration & Audit

Safe concurrent editing via exclusive lock acquisition. Every annotation action is logged to an append-only audit trail for compliance and reproducibility.

  • Exclusive editing locks with heartbeat
  • Lock timeout & auto-release
  • Append-only audit event log
  • PaperTrail version history
  • Role-based access (annotator/reviewer/admin)
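The lock-with-heartbeat pattern above can be sketched as follows. This is a single-process illustration of the idea (acquire, periodic heartbeat, expiry-based auto-release); the class and method names are assumptions, not the engine's actual model:

```python
import time

class EditLock:
    """Sketch of an exclusive editing lock with heartbeat-based expiry."""
    def __init__(self, timeout_s=30.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock  # injectable for testing
        self.holder = None
        self.last_beat = 0.0

    def _expired(self):
        return self.holder is not None and \
            self.clock() - self.last_beat > self.timeout_s

    def acquire(self, user):
        """Grant the lock if it is free, already ours, or stale."""
        if self.holder is None or self.holder == user or self._expired():
            self.holder, self.last_beat = user, self.clock()
            return True
        return False  # someone else holds a live lock

    def heartbeat(self, user):
        """Keep the lock alive while the holder is actively editing."""
        if self.holder == user:
            self.last_beat = self.clock()

    def release(self, user):
        if self.holder == user:
            self.holder = None
```

If a client crashes and its heartbeats stop, the lock becomes acquirable again after `timeout_s`, which is the auto-release behavior described above.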
Workflow

From upload to export.

Create project dialog with robot configuration presets

1. Create Project

Choose a robot configuration preset or define a custom action schema.

Project detail view with uploaded video ready for annotation

2. Upload Videos

Drag and drop MP4, MOV, or WebM files. Automatic variable-frame-rate (VFR) detection and transcoding.

Video annotation workspace with timeline and canvas tools

3. Annotate & Export

Label frames with spatial tools, mark episodes, then export to LeRobot or COCO.

Robot Configurations

Pre-built schemas for popular robots.

Start labeling immediately with action schemas designed for common robot embodiments. Each preset defines joints, end-effectors, cameras, and export dimensions.

SO-100 Single Arm

6-DOF, parallel-jaw gripper, 1 camera

SO-ARM101 Bimanual

2x 6-DOF arms, 3 cameras

SO-100 + Vacuum

6-DOF, vacuum end-effector, 1 camera

Custom Schema

Define your own actions and properties
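A preset bundles the joints, end-effector, cameras, and export dimensions in one place. A sketch of what the "SO-100 Single Arm" preset might encode; the keys and joint names are assumptions drawn from the card above, not the shipped schema:

```python
# Illustrative preset; keys and joint names are assumptions.
SO100_SINGLE_ARM = {
    "name": "SO-100 Single Arm",
    "joints": ["shoulder_pan", "shoulder_lift", "elbow",
               "wrist_flex", "wrist_roll", "gripper"],
    "end_effector": "parallel_jaw",
    "cameras": ["front"],
    "action_dim": 6,  # export dimension: one value per joint
}
```

A custom schema replaces this dict with your own joint list and properties while keeping the same shape.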

Efficiency

Keyboard-first workflow.

Every tool, mode switch, and navigation action is mapped to a keyboard shortcut. Frame-accurate J/K/L shuttle control and single-key tool switching keep your hands on the keyboard.

V: Select
B: Bounding Box
P: Polygon
K: Keypoint
S: SAM Segment
W: Magic Wand
G: Ghost Frames
Space: Play/Pause
Multi-lane timeline with episode, action keyframe, and track lanes
Export

Your format. Your framework.

LeRobot v3

Parquet + trimmed video clips for HuggingFace LeRobot training pipelines

COCO

Instance segmentation format for object detection and segmentation models

RLDS

TFRecord sequential data for Google RT-X and TensorFlow Agents

HDF5

Hierarchical scientific format for custom training loops and analysis

Get Started

Ready to label your data?

Tell us about your robot learning project and we'll get you set up with the labeling engine.

Or email us directly at contact@jnana.info