The model layer

Models.
Not one model. A system.

NeuraWrite’s multi-model intelligence layer sits behind every workflow. The platform selects the right model for each task, orchestrates them across steps, and continuously improves the ones we tune ourselves. Users never pick a model — the platform does.

Model flexibility without complexity

One call. The right model. Every time.

// User intent
aura.run({
  task: "draft a thought-leadership post on FedRAMP",
  project: "acme-marketing",
})
// Aura selects, calls, fans out
Aura Research   → web sources, claim-checked
Aura Academic   → first draft with brand book
Aura Humanize   → detection-aware rewrite
Aura Brand Voice → final voice pass
// You get
A finished post + a trace of what Aura did.

You don’t pick a model. Aura does.

Every Aura Model is a complete capability — not just an LLM call. Each one ships with its own system prompt, scoring loop, and governance. Aura routes each step of a workflow to the profile best suited for it.

  • One API across reasoning, generation, retrieval, and tools
  • Automatic fallback when a provider degrades
  • Swap or upgrade engines with no contract change
  • Same governance, telemetry, and audit on every call
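The routing and fallback behavior above can be sketched in a few lines. This is an illustrative sketch only: the type names, provider names, and the boolean health flag are assumptions, not Aura's internal API.

```typescript
// Illustrative sketch of deterministic routing with automatic fallback.
// Provider names and the health-check shape are assumptions, not Aura's API.
type Provider = { name: string; healthy: boolean };

const chains: Record<string, Provider[]> = {
  // Ordered fallback chain: the first healthy provider handles the call.
  humanize: [
    { name: "aura-humanize-finetune", healthy: true },
    { name: "fast-eval-backbone", healthy: true },
  ],
};

function route(capability: string): string {
  const chain = chains[capability];
  if (!chain) throw new Error(`unknown capability: ${capability}`);
  const pick = chain.find((p) => p.healthy);
  if (!pick) throw new Error(`all providers degraded for: ${capability}`);
  return pick.name; // same contract regardless of which engine answers
}
```

Because the chain is ordered and deterministic, a provider outage changes which engine answers, never whether the call succeeds or how you invoke it.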

Capabilities

What Aura Models can do.

Aura doesn’t list models — it lists capabilities. Each one is backed by one or more Aura Models, with deterministic routing rules and live retrain pipelines for the ones we own.

Humanize

Pass detection. Keep citations.

Humanize

live

Rewrite AI- or stiff-sounding drafts into natural, academic-quality prose.

Use this when you need drafts to read like a skilled human wrote them, especially student papers, reports, and long-form assignments. Aura Humanize runs a NeuraWrite-orchestrated pipeline (structure, vocabulary, voice, validation) so output is not locked to a single model fingerprint. Technical stages are summarized under Pipeline in Studio; day-to-day users just pick Humanize and choose strength in the product.

GPTZero target: ≥ 85
Sapling target: ≥ 85
Pipeline stages: 3 always-on, 2 conditional
Training pairs (humanizer-tune): 199 (v1)
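The staged pipeline described above can be sketched as a filtered sequence of transforms. Stage names follow the description on this page; which stages are conditional, the fifth stage's name, and the strength-based trigger are assumptions for illustration.

```typescript
// Sketch of a staged rewrite pipeline. "structure", "vocabulary", "voice",
// and "validation" come from the page; "recheck" and the conditional
// trigger are assumptions.
type Stage = {
  name: string;
  alwaysOn: boolean;
  apply: (text: string) => string;
};

// Placeholder transforms: each stage just records that it ran.
const mark = (name: string) => (text: string) => `${text}|${name}`;

const stages: Stage[] = [
  { name: "structure", alwaysOn: true, apply: mark("structure") },
  { name: "vocabulary", alwaysOn: true, apply: mark("vocabulary") },
  { name: "voice", alwaysOn: true, apply: mark("voice") },
  { name: "validation", alwaysOn: false, apply: mark("validation") },
  { name: "recheck", alwaysOn: false, apply: mark("recheck") }, // assumed
];

function humanizePipeline(draft: string, strength: "light" | "strong"): string {
  // In this sketch, conditional stages only run at higher strength.
  const active = stages.filter((s) => s.alwaysOn || strength === "strong");
  return active.reduce((text, s) => s.apply(text), draft);
}
```

Running multiple stages (rather than one model call) is what keeps the output from carrying a single model's fingerprint.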

Humanize 2

preview

Next-generation humanize profile, training on real approved edits from production.

Preview successor to Humanize 1. While the new fine-tune is finishing training, traffic may use a fast evaluation backbone under the same Aura humanize prompts. When the pair threshold is met, weights swap with no change to how you use the product.

Pairs collected: live counter on /aura
Threshold to ship: 500 approved pairs

Academic

Long-form drafting with discipline.

Academic

live

Long-form drafting that respects structure, discipline tone, and your sources.

Best for theses, journal-style sections, methods, literature reviews, and formal RFP language. Aura Academic uses a high-quality long-context backbone with NeuraWrite academic-voice prompts: it preserves citation markers, keeps section intent stable, and avoids casual or marketing tone where it does not belong.

Avg paper length: 4–8k words
Citation preservation: 100%

Research

Web-grounded, claim-checked.

Research

live

Grounded answers with live web search, your knowledge base, and cited sources.

NeuraWrite’s answer-engine profile: it pulls fresh context from the web (e.g. via Tavily), merges in your uploaded or connected knowledge base, and synthesizes a single response with explicit source links. It is similar in spirit to consumer “ask with sources” products, but wired into your workspace, playbooks, and chat. A claim-check pass flags statements that are weakly supported so you can tighten wording before publishing.

Source attribution: every claim
Claim-check: auto
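The claim-check pass can be sketched as a filter over claims and their supporting sources. The `Claim` shape and the one-source threshold are assumptions; the real pass presumably weighs source quality, not just source count.

```typescript
// Minimal sketch of a claim-check pass: flag claims whose source support
// falls below a threshold. The Claim shape and threshold are assumptions.
type Claim = { text: string; sources: string[] };

function flagWeaklySupported(claims: Claim[], minSources = 1): Claim[] {
  // Anything returned here would be surfaced for the writer to tighten
  // or source before publishing.
  return claims.filter((c) => c.sources.length < minSources);
}
```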

Brand Voice

Style guide on every word.

Brand Voice

live

On-brand copy that follows your brand book, banned phrases, and tone rules.

Loads your active NeuraWrite brand book (voice, terminology, words to avoid, examples) into every request. Uses a low-temperature drafting profile so marketing, support, and social copy stay consistent without sounding generic.

Voice adherence: auto-evaluated
Banned-phrase enforcement: pre + post
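"Pre + post" enforcement means the same banned-phrase check runs on the draft before the model call and on the output after it. A minimal sketch, with function names and a simple substring match as illustrative assumptions:

```typescript
// Sketch of "pre + post" banned-phrase enforcement. Function names and
// the substring-match rule are illustrative, not NeuraWrite's implementation.
function bannedPhraseHits(text: string, banned: string[]): string[] {
  const haystack = text.toLowerCase();
  return banned.filter((phrase) => haystack.includes(phrase.toLowerCase()));
}

function enforce(stage: "pre" | "post", text: string, banned: string[]): void {
  // Run once on the input draft ("pre") and again on the model
  // output ("post") so off-brand phrasing can't slip through either way.
  const hits = bannedPhraseHits(text, banned);
  if (hits.length > 0) {
    throw new Error(`${stage}-check failed, banned phrases: ${hits.join(", ")}`);
  }
}
```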

Detect

In-house detection pre-predictor.

Score

roadmap

Fast, cheap pre-check for AI-detection risk before you run full scans.

Roadmap: a small NeuraWrite classifier trained on text labeled with external detector scores. When live, it will estimate detection risk so we only call expensive external detectors on borderline drafts, saving time and credits on obviously human-like output.

Target accuracy vs GPTZero: ≥ 90%
Inference cost: ~$0.0001 / call
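The gating idea behind this pre-check is simple: only drafts whose cheap local risk score lands in a borderline band pay for a full external scan. The band edges below are illustrative; real thresholds would come from calibration against detector data.

```typescript
// Sketch of pre-check gating: a cheap local risk score (0 = clearly human,
// 1 = clearly AI) decides whether to call an expensive external detector.
// The band edges are illustrative assumptions.
function needsFullScan(risk: number, low = 0.2, high = 0.8): boolean {
  // Clearly-human and clearly-AI drafts skip the external call entirely;
  // only borderline drafts incur the full-scan cost.
  return risk >= low && risk <= high;
}
```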

Performance + safety

Built for enterprise from the first call.

Privacy

Your content is never used to train third-party models. Aura’s capture loop only feeds NeuraWrite-owned retrains, scoped to your tenant.

Governance

Every Aura Model call runs inside your project policies, content guardrails, and audit log. Same controls across every provider.

Reliability

Each Aura Model has a deterministic fallback. Provider outages degrade quality, never break workflows.

Latency

Aura routes light steps to fast small models and heavy steps to flagship models, so you don’t pay 200B-param latency for a 5-line summary.

Capture loop

Every successful Aura Humanize run becomes a training pair. Usage compounds into the next retrain. The longer you use NeuraWrite, the better Aura gets.

Observability

Every Aura Model call writes telemetry: provider, latency, fallback flag, tokens. We can answer “is Aura Humanize 2 actually better?” with data.

Use Aura outside NeuraWrite

Call Aura from your stack.

Public REST API

Issue an Aura API key in Settings, then call any Aura capability directly from your services.

# Humanize via Aura
curl https://neurawrite.ai/api/v1/aura/humanize \
-H "Authorization: Bearer nw_live_…" \
-H "Content-Type: application/json" \
-d '{"text": "your AI draft…"}'
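The same call from TypeScript, as a sketch: the endpoint and headers mirror the curl example above, but the helper names and the response shape are assumptions, not a published client.

```typescript
// TypeScript equivalent of the curl call above. Endpoint and headers come
// from the example; helper names and response shape are assumptions.
function buildHumanizeRequest(text: string, apiKey: string) {
  return {
    url: "https://neurawrite.ai/api/v1/aura/humanize",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text }),
    },
  };
}

async function humanize(text: string, apiKey: string): Promise<unknown> {
  const { url, init } = buildHumanizeRequest(text, apiKey);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Aura request failed: ${res.status}`);
  return res.json(); // response shape depends on your plan/version
}
```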

MCP server

Drop Aura into Claude Desktop, Cursor, or any MCP-compatible agent. Same Aura key, same governance, same capabilities.

# Claude Desktop config
{
  "mcpServers": {
    "neurawrite": {
      "command": "npx",
      "args": ["@neurawrite/mcp-server"],
      "env": { "NEURAWRITE_API_KEY": "nw_live_…" }
    }
  }
}

One model is a tool.
Aura is a system.

Stop wiring providers together. Let Aura route, retry, evaluate, and improve every workflow you ship.