Training & Exports

Train it. Prove it. Ship it.

Datasets, reviewers, and export artifacts stay linked to the same run timeline — so repeatability is not an afterthought.

Truth check

This page describes the workflow shape and governance hooks. Specific training methods, model support, and export formats depend on your deployment and security requirements.

Why it matters

Own the training loop. Keep it auditable.

Training is not just compute. It is data governance, review, and evidence.

Reviewable from start to finish.

Datasets, rubrics, and approvals stay attached to the run.

Artifacts you can keep and audit.

Export weights, datasets, and evidence packs into your own environment.

Proof ships with the model.

The workflow produces an evidence trail. Not a slide deck.

Retention and redaction built in.

Consent, lineage, and deletion policies as first-class inputs.

How it works

Four steps. A run you can defend.

The workflow is designed to make the run reproducible, the decisions explainable, and the exports defensible.

Step 1
Define the objective

Start with what you need to improve: a rubric, a failure class, or a domain-specific workflow.

Step 2
Collect and review signals

Route uncertain samples to reviewers when required. Keep decisions and notes attached to the same run timeline.

Step 3
Train with constraints

Run training inside defined budgets and policies. Log configuration so the run can be reproduced later.

Step 4
Export and attach proof

Export artifacts and evidence packs that map back to the run inputs, reviewers, and evaluation outcomes.
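Step 3's requirement — log the configuration so the run can be reproduced — can be sketched as a run fingerprint: hash a canonical serialization of the config and attach the digest to the run timeline. A minimal illustration in Python; every field name here is an assumption for illustration, not AuraOne's actual schema.

```python
import hashlib
import json

def run_fingerprint(config: dict) -> str:
    """Hash a canonical JSON serialization of the run config.

    Logging this digest alongside the artifacts lets a later audit
    confirm that an export maps back to exactly this configuration.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical run configuration -- illustrative fields only.
config = {
    "objective": "rubric-improvement-v2",
    "dataset": "reviewed-signals-2024-06",
    "budget": {"gpu_hours": 48, "max_steps": 10_000},
    "seed": 1234,
}

fp = run_fingerprint(config)
print(fp[:12])  # short fingerprint to attach to the run timeline
```

Because the serialization sorts keys, the same configuration always yields the same fingerprint regardless of field order — which is what makes the digest usable as reproducibility evidence.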

RLHF

Human feedback. Structured and tracked.

Pairwise ranking, rewrite tasks, and preference collection — each with reviewer assignment, calibration, and audit trails.

Pairwise comparison

Present two outputs side-by-side. Reviewers rank by rubric. Disagreements route to calibration.

Rewrite tasks

Reviewers edit model outputs directly. The original and revision are stored together for training signal.

Preference collection

Collect structured preference data at scale. Route to domain experts when the task requires specialized knowledge.

RLHF workflows connect to the same reviewer pool, calibration system, and evidence pipeline as the rest of AuraOne. Feedback data is exported alongside model artifacts so the training loop stays auditable.
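As an illustration of what a tracked pairwise record might carry — outputs, preference, reviewer, rubric, and a calibration flag for disagreements — here is a minimal sketch in Python. Every field name is an assumption for illustration, not AuraOne's actual export schema.

```python
from dataclasses import dataclass, field, asdict
import datetime

@dataclass
class PairwiseRecord:
    """Hypothetical shape of one pairwise-comparison judgment."""
    run_id: str
    prompt: str
    output_a: str
    output_b: str
    preferred: str              # "a" or "b", ranked by rubric
    reviewer_id: str
    rubric_id: str
    needs_calibration: bool = False   # set when reviewers disagree
    reviewed_at: str = field(
        default_factory=lambda: datetime.datetime.now(
            datetime.timezone.utc).isoformat())

record = PairwiseRecord(
    run_id="run-0042",
    prompt="Summarize the incident report.",
    output_a="(model output A)",
    output_b="(model output B)",
    preferred="b",
    reviewer_id="rev-17",
    rubric_id="clarity-v3",
)
print(asdict(record)["preferred"])
```

The point of the sketch is the linkage: each judgment carries its run, reviewer, and rubric identifiers, so exported feedback data maps back to the same run timeline as the model artifacts.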

Exports

Artifacts you can keep.

The UI below is illustrative. The point is the workflow: exports are treated as first-class artifacts.

Ownership

Export is a button, not a project.

Download weights, datasets, and proofs as signed artifacts. Keep them. Host them. Audit them.

export.sig (expires in 15m)
Weights export
auraone_private_model_v1.safetensors
Signed weights you can deploy anywhere: VPC, on‑prem, edge.
Size
7.8 GB
curl -L \
  'https://exports.auraone.ai/weights.safetensors?sig=…' \
  -o auraone_private_model_v1.safetensors

sha256  2f4c…a19e  auraone_private_model_v1.safetensors
No vendor lock‑in: export is a first‑class feature.
Checksums included for integrity and audit trails.
Promotion gates can require the export artifact.
Signed URL
https://exports.auraone.ai/weights?sig=…
Download
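The published checksum can be verified locally before the artifact is promoted. A minimal Python sketch; the file name matches the mockup above, and the commented comparison at the end is illustrative.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks so large weight files
    never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published with the export, e.g.:
# expected = "2f4c...a19e"  # full digest from the export manifest
# assert sha256_of("auraone_private_model_v1.safetensors") == expected
```

A promotion gate that requires the export artifact can run exactly this comparison as its check.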
Next step

Map a training run to your evidence requirements.

We will shape a workflow that fits your evidence requirements, retention constraints, and deployment architecture.