Models.
Not one model. A system.
NeuraWrite’s multi-model intelligence layer sits behind every workflow. The platform selects the right model for each task, orchestrates them across steps, and continuously improves the ones we tune ourselves. Users never pick a model — the platform does.
Model flexibility without complexity
One call. The right model. Every time.
aura.run({
task: "draft a thought-leadership post on FedRAMP",
project: "acme-marketing",
})

Aura Research → web sources, claim-checked
Aura Academic → first draft with brand book
Aura Humanize → detection-aware rewrite
Aura Brand Voice → final voice pass
A finished post + a trace of what Aura did.
You don’t pick a model. Aura does.
Every Aura Model is a complete capability — not just an LLM call. Each one ships with its own system prompt, scoring loop, and governance. Aura routes each step of a workflow to the profile best suited for it.
- One API across reasoning, generation, retrieval, and tools
- Automatic fallback when a provider degrades
- Swap or upgrade engines with no contract change
- Same governance, telemetry, and audit on every call
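The capability-routing and fallback behavior described above can be sketched roughly as follows. This is an illustrative sketch only: the engine names, capability keys, and `degraded()` health check are hypothetical stand-ins, not the real Aura internals.

```typescript
// Sketch only: engine names and the degraded() check are hypothetical.
type Capability = "humanize" | "academic" | "research" | "brandVoice";

interface Route {
  primary: string;  // preferred engine for this capability
  fallback: string; // deterministic fallback if the primary degrades
}

const routes: Record<Capability, Route> = {
  humanize:   { primary: "humanize-2-preview", fallback: "humanize-1" },
  academic:   { primary: "longctx-flagship",   fallback: "longctx-base" },
  research:   { primary: "grounded-flagship",  fallback: "grounded-base" },
  brandVoice: { primary: "lowtemp-draft",      fallback: "lowtemp-base" },
};

// Stand-in health check; in production this would read live provider status.
const degraded = (engine: string): boolean => engine.endsWith("-preview");

// One call, the right engine: callers name a capability, never a model.
function selectEngine(cap: Capability): string {
  const { primary, fallback } = routes[cap];
  return degraded(primary) ? fallback : primary;
}
```

Here `selectEngine("humanize")` falls back to `"humanize-1"` because the preview engine is flagged degraded, while `selectEngine("academic")` stays on its primary. The point of the shape is the contract: swapping an engine changes one entry in the routing table, not any caller.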
Capabilities
What Aura Models can do.
Aura doesn’t list models — it lists capabilities. Each one is backed by one or more Aura Models, with deterministic routing rules and live retrain pipelines for the ones we own.
Humanize
Pass detection. Keep citations.
Humanize
live
Rewrite AI- or stiff-sounding drafts into natural, academic-quality prose.
Use this when you need drafts to read like a skilled human wrote them, especially student papers, reports, and long-form assignments. Aura Humanize runs a NeuraWrite-orchestrated pipeline (structure, vocabulary, voice, validation) so output is not locked to a single model fingerprint. Technical stages are summarized under Pipeline in Studio; day-to-day users just pick Humanize and choose strength in the product.
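The staged pipeline named above (structure, vocabulary, voice, validation) can be pictured as a simple sequential composition. The stage names come from the copy; the function shapes are hypothetical, since the real stages are model calls, not string transforms.

```typescript
// Hypothetical sketch of a staged rewrite pipeline; real Aura stages
// are model calls, these are toy string transforms for illustration.
type Stage = (draft: string) => string;

const structure: Stage = (d) => d.trim();                      // reorder / tighten
const vocabulary: Stage = (d) => d.replace(/utilize/g, "use"); // de-stiffen word choice
const voice: Stage = (d) => d;                                 // match the target voice
const validate: Stage = (d) => {
  if (d.length === 0) throw new Error("empty draft"); // reject degenerate output
  return d;
};

// Running distinct stages in order is why the output is not locked
// to a single model fingerprint.
const humanize = (draft: string): string =>
  [structure, vocabulary, voice, validate].reduce((text, stage) => stage(text), draft);
```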
Humanize 2
preview
Next-generation humanize profile, training on real approved edits from production.
Preview successor to Humanize 1. While the new fine-tune finishes training, traffic may run on a fast evaluation backbone under the same Aura humanize prompts. Once the training-pair threshold is met, the weights swap with no change to how you use the product.
Academic
Long-form drafting with discipline.
Academic
live
Long-form drafting that respects structure, discipline tone, and your sources.
Best for theses, journal-style sections, methods, literature reviews, and formal RFP language. Aura Academic uses a high-quality long-context backbone with NeuraWrite academic-voice prompts: it preserves citation markers, keeps section intent stable, and avoids casual or marketing tone where it does not belong.
Research
Web-grounded, claim-checked.
Research
live
Grounded answers with live web search, your knowledge base, and cited sources.
NeuraWrite’s answer-engine profile: it pulls fresh context from the web (e.g. via Tavily), merges in your uploaded or connected knowledge base, and synthesizes a single response with explicit source links. It is similar in spirit to consumer “ask with sources” products, but wired into your workspace, playbooks, and chat. A claim-check pass flags weakly supported statements so you can tighten wording before publishing.
Brand Voice
Style guide on every word.
Brand Voice
live
On-brand copy that follows your brand book, banned phrases, and tone rules.
Loads your active NeuraWrite brand book (voice, terminology, words to avoid, examples) into every request. Uses a low-temperature drafting profile so marketing, support, and social copy stay consistent without sounding generic.
Detect
In-house detection pre-predictor.
Score
roadmap
Fast, cheap pre-check for AI-detection risk before you run full scans.
Roadmap: a small NeuraWrite classifier trained on text labeled with external detector scores. When live, it will estimate detection risk up front so the platform only calls expensive external detectors on borderline drafts, saving time and credits on obviously human-like output.
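The gating idea behind Score reads as: estimate risk cheaply first, and escalate only the borderline band to the expensive external detector. A minimal sketch, with made-up thresholds and function names:

```typescript
// Thresholds and names are illustrative, not NeuraWrite's actual values.
// risk is the cheap classifier's output: 0 = human-like, 1 = AI-like.
function needsFullScan(risk: number, low = 0.2, high = 0.8): boolean {
  // Obviously human-like (< low) and obviously AI-like (> high) drafts
  // skip the external detector; only the borderline band escalates.
  return risk >= low && risk <= high;
}
```

Under these assumed thresholds, a draft scored at 0.05 or 0.95 never touches the external detector, which is where the credit savings come from.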
Performance + safety
Built for enterprise from the first call.
Privacy
Your content is never used to train third-party models. Aura’s capture loop only feeds NeuraWrite-owned retrains, scoped to your tenant.
Governance
Every Aura Model call runs inside your project policies, content guardrails, and audit log. Same controls across every provider.
Reliability
Each Aura Model has a deterministic fallback. Provider outages degrade quality, never break workflows.
Latency
Aura routes light steps to fast small models and heavy steps to flagship models — you don’t pay 200B-param latency for a 5-line summary.
Capture loop
Every successful Aura Humanize run becomes a training pair. Usage compounds into the next retrain. The longer you use NeuraWrite, the better Aura gets.
Observability
Every Aura Model call writes telemetry: provider, latency, fallback flag, tokens. We can answer “is Aura Humanize 2 actually better?” with data.
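A telemetry record like the one described might look like this, with a trivial aggregate behind the "actually better?" question. The field names are assumptions for illustration; the real schema is internal.

```typescript
// Field names are illustrative; the real telemetry schema is internal.
interface AuraCall {
  model: string;        // e.g. "humanize-1" or "humanize-2"
  provider: string;
  latencyMs: number;
  usedFallback: boolean;
  tokens: number;
  qualityScore: number; // whatever score the model's scoring loop emits
}

// Mean quality per model: the data behind "is Humanize 2 actually better?"
function meanQuality(calls: AuraCall[], model: string): number {
  const hits = calls.filter((c) => c.model === model);
  return hits.reduce((sum, c) => sum + c.qualityScore, 0) / hits.length;
}
```

Comparing `meanQuality(calls, "humanize-1")` against `meanQuality(calls, "humanize-2")` over the same window is the simplest version of that answer; a real rollout would also want sample sizes and confidence intervals.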
Use Aura outside NeuraWrite
Call Aura from your stack.
Public REST API
Issue an Aura API key in Settings, then call any Aura capability directly from your services.
curl https://neurawrite.ai/api/v1/aura/humanize \
-H "Authorization: Bearer nw_live_…" \
-H "Content-Type: application/json" \
-d '{"text": "your AI draft…"}'
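The same call from Node (18+, where `fetch` is built in). The endpoint, headers, and payload shape are copied from the curl example above; the function names and the split into a pure request builder are assumptions for illustration.

```typescript
// Endpoint and payload shape from the curl example above; function
// names and the builder/wrapper split are illustrative assumptions.
interface AuraRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildHumanizeRequest(text: string, apiKey: string): AuraRequest {
  return {
    url: "https://neurawrite.ai/api/v1/aura/humanize",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text }),
    },
  };
}

// Thin network wrapper around the builder.
async function humanize(text: string, apiKey: string): Promise<unknown> {
  const { url, init } = buildHumanizeRequest(text, apiKey);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Aura call failed: ${res.status}`);
  return res.json();
}
```

Keeping the request construction pure makes it easy to unit-test auth headers and payloads without hitting the API.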
MCP server
Drop Aura into Claude Desktop, Cursor, or any MCP-compatible agent. Same Aura key, same governance, same capabilities.
{
  "mcpServers": {
    "neurawrite": {
      "command": "npx",
      "args": ["@neurawrite/mcp-server"],
      "env": { "NEURAWRITE_API_KEY": "nw_live_…" }
    }
  }
}
One model is a tool.
Aura is a system.
Stop wiring providers together. Let Aura route, retry, evaluate, and improve every workflow you ship.