The Robotics Domain Lab
Humanoids are real.
Five or so humanoid programs are now close enough to shipping a real robot that every consumer-facing AI conversation is about to become a robotics conversation. The base models get better every quarter. The hardware gets better every quarter. The bottleneck is somewhere else.
The bottleneck is people teaching the robot what to do.
This is why we built a Domain Lab for robotics. Not a repackaging of general-purpose annotation. A workflow for the exact shape of the problem.
What a robotics team actually needs
Talk to a head of data collection at any of the humanoid companies and the list is the same.
Demonstrations at volume. Thousands of hours of operators doing real tasks — folding a towel, stocking a shelf, handing a fragile object to a person. The model learns from the demonstrations. Without them, there is no model.
Clean demonstrations. Not every captured clip is usable. A jerky trajectory, a safety near-miss, an operator who let attention drift — these are the clips that poison a fine-tune. The team needs scoring, review, and a release decision on every clip before it reaches training.
A regression record of what went wrong. When an operator demonstrates the wrong thing, a demonstration that is technically valid but violates a rule the team has defined, the clip should become a test case that blocks a future trained policy from repeating the pattern. Without that record, the team rebuilds its safety lessons from scratch every release.
The tuned model at the end. A team does not want to pay for demonstrations and receive only a dataset. The team wants the demonstrations, the review record, the rejected-clip regression set, and a fine-tuned model built from the reviewed set. Portable. Customer-owned.
Nobody in the market ships all four. A capture vendor ships the first. A labeling vendor ships a polish on the second. An open dataset covers the first and third for generic tasks but not for a team's specific policy. Internal data-collection ops handle the whole thing on Slack and Dropbox, which works until a safety auditor asks for the record.
What the Domain Lab does
One screen. One record. Every demonstration, every review, every release decision, every fine-tune.
Capture. iPhone LiDAR, stereo camera rigs, wearable motion-capture vests, full teleoperation cells. Any kit. One session format. A LiveKit session server when live operation is needed. Chunked resumable upload when the capture is offline. On-device privacy blur before anything leaves the kit.
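For the technically curious, a minimal sketch of what resumable upload can look like from the capture side. The endpoint, chunk size, and session-ID scheme are illustrative assumptions, not the Lab's actual API.

```python
import os
import requests

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk (assumed)

def upload_capture(path: str, session_id: str, resume_from: int = 0) -> None:
    """Upload a capture file in chunks, resuming from a byte offset."""
    total = os.path.getsize(path)
    url = f"https://lab.example.com/v1/sessions/{session_id}/chunks"  # hypothetical endpoint
    with open(path, "rb") as f:
        f.seek(resume_from)
        offset = resume_from
        while offset < total:
            chunk = f.read(CHUNK_SIZE)
            resp = requests.put(
                url,
                data=chunk,
                headers={"Content-Range": f"bytes {offset}-{offset + len(chunk) - 1}/{total}"},
                timeout=60,
            )
            resp.raise_for_status()  # on failure, retry from the last acknowledged offset
            offset += len(chunk)
```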
Score. Safety, smoothness, and quality metrics run the moment the clip lands. Pose continuity checked. Trajectory bounds enforced. Safety near-misses flagged. The team reviewing clips starts with the ones the scorer has already triaged.
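A sketch of the kind of checks an automated scorer runs, assuming poses arrive as a fixed-rate (T, J, 3) array of joint positions. The thresholds here are illustrative, not the Lab's defaults.

```python
import numpy as np

def score_clip(poses: np.ndarray, hz: float = 30.0,
               max_jump_m: float = 0.05, max_speed_mps: float = 1.5) -> dict:
    """Score one clip. `poses` is a (T, J, 3) array of joint positions in meters."""
    deltas = np.linalg.norm(np.diff(poses, axis=0), axis=-1)  # per-joint motion per frame
    return {
        # Pose continuity: no joint teleports between consecutive frames.
        "pose_continuous": bool((deltas < max_jump_m).all()),
        # Trajectory bounds: peak joint speed stays under the safety limit.
        "within_speed_bounds": bool((deltas * hz < max_speed_mps).all()),
        # Smoothness: second difference of per-frame motion, a jerk-like proxy.
        "jerk_proxy": float(np.abs(np.diff(deltas, n=2, axis=0)).mean()),
    }
```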
Review. The safety lead and the training lead both see the same queue. The review decision is recorded with the reviewer's credential, the data they reviewed, and the rationale for the call. This is the record a regulator will eventually ask about.
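A sketch of the shape that record might take. The field names are assumptions; the point is that the reviewer's credential, a hash of exactly what they reviewed, and the rationale travel together in one immutable record.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ReviewDecision:
    clip_id: str
    reviewer_credential: str  # who made the call
    clip_sha256: str          # exactly what they reviewed
    decision: str             # "approve" | "reject"
    rationale: str            # why, in the reviewer's words
    decided_at: str           # when, in UTC

def record_decision(clip_id: str, clip_bytes: bytes, reviewer: str,
                    decision: str, rationale: str) -> ReviewDecision:
    return ReviewDecision(
        clip_id=clip_id,
        reviewer_credential=reviewer,
        clip_sha256=hashlib.sha256(clip_bytes).hexdigest(),
        decision=decision,
        rationale=rationale,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
```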
Release. Approved clips ship to training. Rejected clips ship to the regression bank. The release is explicit. A clip that was approved once does not disappear into a folder nobody can find again.
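One way the routing rule could look, assuming each reviewed clip carries a decision field. Names are illustrative.

```python
def release(reviewed: list[dict]) -> tuple[list[dict], list[dict]]:
    """Route every reviewed clip exactly once: approve -> training, reject -> regression bank."""
    training_set = [c for c in reviewed if c["decision"] == "approve"]
    regression_bank = [c for c in reviewed if c["decision"] == "reject"]
    # Every clip must land somewhere; an unknown decision state is a bug, not a silent drop.
    assert len(training_set) + len(regression_bank) == len(reviewed)
    return training_set, regression_bank
```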
Train. Fine-tune OpenVLA on the reviewed set. Tenant-isolated training. Customer-owned weights. The team walks out of every engagement with a model that understands their task better than the base model did.
Keep. The tuned checkpoint is the customer's. Forever. Exported with the training record. Portable to any infrastructure the team chooses.
Six nodes. One record. That is the Robotics Domain Lab.
Two audiences, one page
Robotics is the only Domain Lab where AuraOne sells both sides of the market.
For the team building the robot. A workflow product. Capture through training. The lab that runs behind the robot. The team walks in with an OpenVLA checkpoint and a data-collection program, and walks out with a tuned model and a record their safety case can cite.
For the operator willing to teach the robot. A paid-capture program. Record demonstrations on an iPhone. Get scored. Get paid. Higher tiers get kits shipped — a stereo camera rig, a wearable motion-capture vest, eventually a full teleoperation cell for operators who scale with the program. Clean clips get full pay. Rejected clips get a reviewer note explaining why, so the operator's next session scores higher.
Both audiences are on the same page because both are part of the same workflow. Separating them into different products would be a marketing choice that obscures the real shape of the market. The team building the robot is paying for the demonstrations the operators produce. The record tying the two sides together is the product.
Why this matters now
Humanoids are going to be the most visible consumer AI story of the next two years. Every target on our hitlist has open data-collection roles today. Every one of them is buying or about to buy from a capture vendor. The stack they are buying is thin enough that the first public Domain Labs case study probably happens in robotics.
Getting this right is not a marketing project. It is the difference between shipping a humanoid that behaves well on task and shipping a humanoid that behaves well in the lab and poorly in a warehouse. The demonstrations the operator produced this week decide which of those two products ships next quarter.
What is under the hood
Three implementation details that matter for a technical buyer reading this.
Capture schema. Pose, trajectory, safety signals, environmental metadata. A structured representation that every tool in the stack reads. Export formats include HDF5, Parquet, MCAP, and ROSbag: the formats a training stack already ingests.
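A sketch of a per-frame record and a Parquet export under that schema, assuming pyarrow. The field names mirror the description above but are illustrative, not the Lab's wire format.

```python
import pyarrow as pa
import pyarrow.parquet as pq

schema = pa.schema([
    ("session_id", pa.string()),
    ("t_ns", pa.int64()),                         # capture timestamp, nanoseconds
    ("joint_positions", pa.list_(pa.float32())),  # flattened pose
    ("ee_trajectory", pa.list_(pa.float32())),    # end-effector xyz
    ("safety_flags", pa.list_(pa.string())),      # e.g. "near_miss"
    ("environment", pa.string()),                 # scene metadata as JSON
])

def export_parquet(rows: list[dict], path: str) -> None:
    """Write captured frames to Parquet; the same rows could target HDF5, MCAP, or ROSbag."""
    table = pa.Table.from_pylist(rows, schema=schema)
    pq.write_table(table, path)
```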
Training pipeline. OpenVLA fine-tuning, native. Customer tenant. Customer GPUs or ours. The weight retention guarantee is explicit — the tuned checkpoint stays with the customer at contract end. Always.
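A minimal sketch of a LoRA fine-tune over the reviewed set, following the publicly documented OpenVLA loading path. The hyperparameters and dataset wiring are illustrative; a real run would plug in the team's own loader.

```python
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import LoraConfig, get_peft_model

# Load the base policy the way the OpenVLA project documents it.
processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Train low-rank adapters on the linear layers; the base weights stay frozen.
lora = LoraConfig(r=32, lora_alpha=16, lora_dropout=0.0, target_modules="all-linear")
vla = get_peft_model(vla, lora)
vla.print_trainable_parameters()

# ... training loop over the approved clips elided ...

vla.save_pretrained("tuned-checkpoint")  # the artifact the customer keeps
```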
Review engine. Per-reviewer agreement tracked on safety calls. Drift detection on a weekly cadence. A reviewer whose agreement slips gets re-calibrated inside a session. A reviewer whose agreement stays steady routes to the hardest cases. This is the same pattern that runs under the AI Labs product — applied to demonstrations instead of preference pairs.
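A sketch of one reasonable way to track agreement, using Cohen's kappa against a consensus label. The statistic and the thresholds are assumptions, not necessarily what the engine ships.

```python
from sklearn.metrics import cohen_kappa_score

def weekly_agreement(reviewer_calls: list[str], consensus: list[str]) -> float:
    """Chance-corrected agreement between one reviewer and the consensus label."""
    return cohen_kappa_score(reviewer_calls, consensus)

def route_reviewer(kappa: float, recalibrate_below: float = 0.6,
                   hard_cases_above: float = 0.85) -> str:
    if kappa < recalibrate_below:
        return "recalibrate"   # agreement slipped: schedule a calibration session
    if kappa > hard_cases_above:
        return "hard_queue"    # agreement steady: route the hardest cases
    return "standard_queue"
```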
None of this is speculative. It is the same architecture that runs fifteen other Domain Labs.
Where this leads
Every humanoid company will eventually face the same choice a frontier AI lab faced two years ago.
Run the workflow on five vendors — a capture shop, an annotation tool, a recruitment platform, an observability stack, and a training harness — and spend eighteen months building the integration layer that ties them together. Or run the workflow on one record and spend eighteen months making the robot better.
The first option is how most humanoid companies are operating in 2026. The second is what the Robotics Domain Lab was built for.
Humanoids are real. The models that run them need people to show them what to do.
One lab runs that workflow.
---
Building robots? → Tour the Robotics Lab
Want to teach them? → Apply through AI Jobs