AURAONE OPEN · LOCAL FIRST · MIT

Open source. Local first.

Four IDEs and a library of engines for agent reliability, rubric authoring, robotics review, and dataset trust. Public source. No account. Your data stays on disk.

REPOS
30+ public

Engines, IDEs, actions, packets — every one public, every one MIT.

TOOLS
4 local IDEs

Agent Studio, Rubric Studio, Robotics Studio, and the Open v2 set.

ADOPTERS
Labs · platforms · CI

Frontier labs, agent platforms, and PR review workflows are already on board.

FOUR SURFACES

Pick the IDE for the work.

Each one runs on disk, ships MIT, and exports a portable artifact a reviewer can pick up without a hosted account.

AGENT STUDIO OPEN
01

Local-first IDE for MCP and A2A agents.

Connect, inspect, record, replay, compare model behavior, ingest OTEL spans, and export regression suites to CI.

STACK
MIT · Desktop · Browser · CLI · VS Code
OPEN PAGE →
RUBRIC STUDIO OPEN
02

The IDE for the rubric.

Local, file-based, git-friendly authoring for criterion-level evaluations. Author, test, calibrate, diff, and export.

STACK
MIT · Desktop · Browser · CLI
OPEN PAGE →
ROBOTICS STUDIO OPEN
03

The IDE for reviewed teleop and VLA datasets.

Open LeRobot, RLDS, OpenX, HDF5, ROS bag, and mp4/jsonl captures. Scrub, tag, cluster, probe, export.

STACK
MIT · Desktop · CLI
OPEN PAGE →
OPEN V2
04

Trust gates for agentic and embodied AI.

Twelve installable packages for MCP/A2A review, trace replay, robotics data quality, VLA diagnostics, and review packets.

STACK
MIT · pip · GitHub Actions
OPEN PAGE →
PUBLIC ENGINES

Read the source. Install in a venv.

The libraries underneath every Open IDE. Each runs from a notebook, a CLI, or a GitHub Action.

REPO · PURPOSE · INSTALL · SOURCE
mcp-risk-linter

Risk taxonomy and lint pass for MCP server manifests.

pip install mcp-risk-linter
GITHUB →
a2a-contract-test

Offline contract tests for A2A agent cards and task lifecycles.

pip install a2a-contract-test
GITHUB →
tool-call-replay

Deterministic replay harness for failed agent tool calls.

pip install tool-call-replay
GITHUB →
agent-trace-card

Portable Markdown and JSON card for one agent run.

pip install agent-trace-card
GITHUB →
otel-eval-bridge

Bridge OTEL and Phoenix GenAI spans into eval regression cases.

pip install otel-eval-bridge
GITHUB →
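A minimal sketch of the venv workflow described above, using mcp-risk-linter as the example. The package name and pip command come from the table; the mcp-risk-linter CLI invocation, its subcommand, and the manifest path are assumptions for illustration, not documented commands, so check the repo's README for the real entry point.

# create an isolated environment and install one engine
python -m venv .venv
source .venv/bin/activate
pip install mcp-risk-linter

# hypothetical invocation: lint an MCP server manifest from the CLI
mcp-risk-linter lint path/to/mcp-server-manifest.json

The same install step works inside a notebook cell or a GitHub Actions run block, which is how these engines plug into CI.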
AURAONE OPEN

Your work. Your data. Your tools.

Read the source. Run it locally. Bring AuraOne in when shared state is the actual problem.
