DOCUMENTATION
This page reflects the current implementation of MARIA OS (MARIA CODE). It connects doctrine → build → operations in one language, so teams can run a self-evolving loop with reproducible evidence.
Overview (MARIA CODE vs MARIA OS)
One philosophy, two products: personal developer execution (MARIA CODE) and an enterprise organizational-intelligence OS (MARIA OS).
MARIA is a Structural AGI operating system designed to explicitly model the structure of the world and organizations—OS, rules, flows, and causality—and to help you design, change, and invent those structures.
The goal is not “plausible answers,” but structures that are reproducible, operable, and evolvable. That is why this repo is designed as an OS across code (src/), configuration (config/), and doctrine & operations (docs/).
Universe / EVOLVE / doctor (what makes MARIA OS “enterprise”)
Not “more AI” — an operating system that keeps decisions reproducible, auditable, and improving.
Principles (Structural AGI doctrine)
Essence before Solution / Safety by Structure / Human-first — plus enterprise requirements: determinism, traceability, and explicit gates.
- Essence before Solution: define “what structural problem is this?” in 1–3 lines before discussing solutions.
- Safety by Structure: safety must be enforced by boundaries, responsibilities, detection, redundancy, and fail-safe design—not by “good intentions.”
- Human-first: AI extends humans; final decisions and accountability remain with humans.
- Determinism: same state → same conclusion (especially for doctor and gates).
- Traceability: every decision must be explainable and link back to evidence and boundaries.
- Explicit gates: safe/guarded/risky classification + approval when needed, with rollback conditions (see the sketch after this list).
- No heuristics: do not hardcode fuzzy judgments. Delegate ambiguity to an LLM layer (e.g., ai-proxy) with explicit contracts and logs.
  - If the flow exists, improve the system prompt/contract first.
  - If the flow does not exist, improve the flow before tuning prompts.
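To make the determinism and explicit-gates principles concrete, a gate can be modeled as a pure function from observed state to a labeled decision. A minimal sketch; the names below are illustrative, not MARIA's actual API:

```ts
// Illustrative sketch; these names are NOT MARIA's actual API.
type RiskLabel = "safe" | "guarded" | "risky";

interface GateDecision {
  label: RiskLabel;
  approvalRequired: boolean;
  rollbackCondition: string; // every gated change carries an explicit rollback condition
}

// Deterministic by construction: the same inputs always yield the same decision.
function classifyChange(touchesBoundary: boolean, blastRadius: number): GateDecision {
  if (!touchesBoundary && blastRadius <= 1) {
    return { label: "safe", approvalRequired: false, rollbackCondition: "git revert on failed tests" };
  }
  if (blastRadius <= 5) {
    return { label: "guarded", approvalRequired: true, rollbackCondition: "restore pre-apply snapshot" };
  }
  return { label: "risky", approvalRequired: true, rollbackCondition: "STOP; human-led rollback plan" };
}
```

Because the function contains no randomness, clock reads, or hidden state, the same project state always produces the same gate outcome, which is what makes the decision auditable.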
Architecture (where things live)
CLI + slash commands + manifest + config + docs work together as one OS.
- src/: core implementation (CLI, commands, services, agents)
- config/: OS-layer configuration (agents, domains, brain profiles)
- docs/: doctrine & operations (meta layer)
- tests/: Vitest suites (unit/integration/contract/e2e)
- /develop: goal → spec → design → tasks → initial steps
- /code: plan-first safe code generation (dry-run/rollback/git-guard)
- /code-review: GitHub Code Review Universe helper (review diff + deliverables from webhook runId)
- /doctor: project/OS health checks (entry to improvement loop)
- /agents: design/run/monitor multi-agent squads
- /knowledge: knowledge packs + HOT KNOWLEDGE + HITL operations
- /image /video: multimodal generation for docs, demos, and design
- /workflow/resume: safe pause/resume for long-running work
- Main entry (LLM JSON diagnosis + deep mode): `src/services/doctor/ProjectDoctorService.ts`
- Deterministic check runner (non-LLM checks): `src/services/doctor/DoctorCore.ts` (see the sketch after this list)
- Universe init/validate/versioning: `src/services/ecosystem/UniverseLifecycleService.ts`
- Event sourcing (audit trail / replay): `src/services/memory-system/event-sourcing/*`
- Universe OS POC (local-only store; enterprise aligned): `src/services/universe-os-poc/UniverseOsPocService.ts`
- LLM-based boundary judgment (no heuristics in host code): `src/services/safety/BoundaryGuardService.ts`
- Role policy gate (STOP / HITL required / required artifacts): `src/services/decision-os/RolePolicy.ts`
- Command-level RBAC guard: `src/services/security/RBACCommandGuard.ts`
- Autonomous plan policy + approval requirement: `src/services/autonomous-agent/security/PolicyEngine.ts`
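For orientation, a deterministic (non-LLM) check can be modeled as a pure function over project state. A hypothetical sketch, assuming nothing about the exported types of `DoctorCore.ts`:

```ts
// Hypothetical shapes for a deterministic doctor check; NOT the exported
// types of src/services/doctor/DoctorCore.ts.
interface CheckResult {
  id: string;
  status: "pass" | "warn" | "fail";
  evidence: string[]; // file paths, log excerpts, etc. that justify the status
}

type Check = (projectRoot: string) => Promise<CheckResult>;

// Deterministic runner: same project state in, same ordered results out.
async function runChecks(projectRoot: string, checks: Check[]): Promise<CheckResult[]> {
  const results: CheckResult[] = [];
  for (const check of checks) {
    results.push(await check(projectRoot));
  }
  // Stable ordering keeps reports reproducible across runs.
  return results.sort((a, b) => a.id.localeCompare(b.id));
}
```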
Universe prototypes (latest)
Concrete, auditable workflows that demonstrate what “Universe” means in practice.
- Inputs: PR metadata + diff + repo context + config (YAML) + optional graph/doctor context
- Outputs: inline findings + summary comment + ReviewReport + DecisionTrace + GateReport
- Determinism: same inputs → same findings (an idempotency marker avoids duplicate comments; see the sketch after the commands below)
/code-review review --diff artifacts/pr.diff --repo acme/repo --pr 123 --base abc --head def --no-llm
/code-review deliver --run-id 12345678:abcd --repo acme/repo --pr 123 --tenant tenant_demo_a
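To make the inputs/outputs contract concrete, the deliverables can be pictured as plain data. A hypothetical sketch of two of the shapes (field names are illustrative, not the actual artifact types):

```ts
// Illustrative data shapes for review deliverables; NOT the actual types
// emitted by /code-review.
interface Finding {
  file: string;
  line: number;
  severity: "info" | "warn" | "error";
  message: string;
  fingerprint: string; // idempotency marker: a stable hash so re-runs do not duplicate comments
}

interface ReviewReport {
  repo: string;      // e.g. "acme/repo"
  pr: number;
  base: string;      // base commit SHA
  head: string;      // head commit SHA
  findings: Finding[];
}
```

Because a `Finding`'s fingerprint is derived only from the inputs, re-running the review on the same base/head pair yields the same set of fingerprints, which is what makes the determinism guarantee checkable.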
Recommended workflow (structure → build → evolve)
Enterprise flow: diagnosis-first, gated execution, and safe learning into Universe.
- Structure: define OS/boundaries/responsibilities/failure modes first
- Design: turn goals into spec/tasks with clear acceptance criteria
- Build: /code in plan-only → apply (rollback/guard as default)
- Diagnose: /doctor + quality gates to keep “evidence”
- Sync: update docs/knowledge so the OS stays consistent
- doctor: produce a diagnosis with evidence (boundaries, blast radius, risk)
- Decision: classify safe/guarded/risky; request approval when required
- Envelope: issue an explicit work order (constraints, do-not-touch, required tests, stop conditions; see the sketch after this list)
- Execution: agents act as roles (implementation/testing/review/ops) and publish Artifacts
- Verification: GateReport + rollback readiness; then DoctorDelta updates long-term memory
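The Envelope step above can be pictured as explicit data; the field names below are hypothetical, mapped one-to-one from the bullet list, not MARIA's actual contract:

```ts
// Hypothetical shape of an execution envelope (work order).
// Field names mirror the workflow bullets above; they are NOT MARIA's real types.
interface Envelope {
  goal: string;
  constraints: string[];    // e.g. "no public API changes"
  doNotTouch: string[];     // paths/globs agents must not modify
  requiredTests: string[];  // tests that must pass before the gate opens
  stopConditions: string[]; // conditions that force STOP / HITL escalation
  approvedBy?: string;      // set when the decision was classified guarded/risky
}
```

The quick-start commands below then exercise this loop end to end: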
# 1) List available commands (only READY are shown)
maria /help

# 2) Turn a goal into spec/design/tasks
maria /develop "<your goal>"

# 3) Preview first (safe-by-default)
maria /code "<what to build>" --plan-only

# 4) Apply (non-interactive if needed)
maria /code "<what to build>" --apply --yes --rollback on

# 5) Health check
maria /doctor
Specs (practical flags & contracts)
Details live in /help. This section highlights the “patterns” developers/operators use daily.
# Preview (safe default)
maria /code "requirements..." --plan-only

# Apply (non-interactive)
maria /code "requirements..." --apply --yes --rollback on

# Git-guarded (leave evidence)
maria /code "requirements..." --apply --yes --git-guard on --git-commit on
# Example: limit scope and attempts
maria /auto-dev run --goal "small fix" --target-files "src/..." --max-attempts 2
# Resume latest (summary mode)
maria /workflow/resume --latest --rehydrate summary

# Resume a specific task id (and pass flags to /code)
maria /workflow/resume <taskId> --tests --fix --apply
- BoundaryGuard (Safety Court): evaluates output risk and decides allow / warn / block (see the sketch after this list). Reference: `src/services/safety/BoundaryGuardService.ts`
- Role policy gate: determines STOP/HITL and required artifacts/scopes. Reference: `src/services/decision-os/RolePolicy.ts`
- RBAC command guard: centralized authorization for commands. Reference: `src/services/security/RBACCommandGuard.ts`
- Deterministic risk labeling (safe/guarded/risky) for change planning. Reference: `src/services/evolve-ecosystem/doctor-to-task-spec.ts`
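As an illustration of how these gates compose, a boundary judgment can be treated as a structured result that host code merely validates and logs; the shapes below are illustrative, and the real interface in `BoundaryGuardService.ts` may differ:

```ts
// Illustrative contract for a boundary judgment; the real interface lives in
// src/services/safety/BoundaryGuardService.ts and may differ.
type Verdict = "allow" | "warn" | "block";

interface BoundaryJudgment {
  verdict: Verdict;
  reason: string;     // traceability: every verdict must be explainable
  evidence: string[]; // links back to the boundary definitions that apply
}

// Ambiguity is delegated to an LLM behind an explicit contract; the host code
// only validates the structured result (no hardcoded heuristics) and fails closed.
function validateJudgment(raw: unknown): BoundaryJudgment {
  const j = raw as Partial<BoundaryJudgment>;
  if (j?.verdict !== "allow" && j?.verdict !== "warn" && j?.verdict !== "block") {
    throw new Error("LLM returned an out-of-contract verdict; blocking by default");
  }
  return { verdict: j.verdict, reason: j.reason ?? "", evidence: j.evidence ?? [] };
}
```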
Command catalog (auto-generated from READY.manifest.json)
This list is generated at build time from the current READY manifest.
- Enterprise org doctor: `maria doctor-enterprise --models ...` (implementation: `src/cli/doctor-enterprise.ts`, service: `src/services/enterprise-os/EnterpriseOrgDoctorService.ts`)
- Project doctor: `maria /doctor` (entry: `src/services/doctor/ProjectDoctorService.ts`)
- BoundaryGuard: enforced boundary checks for enterprise outputs (reference: `src/services/safety/BoundaryGuardService.ts`)
- Approval gates: role policy + RBAC command authorization (references: `src/services/decision-os/RolePolicy.ts`, `src/services/security/RBACCommandGuard.ts`)
Deployment & operations (priority)
Never commit secrets. Absorb env differences via config. Enterprise runs locally.
- Never commit secrets (API keys, OAuth credentials, JWT secrets).
- Use Secret Manager (or equivalent) and avoid plaintext secrets in env/config files (a fail-fast sketch follows).
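A minimal fail-fast sketch for the second point, assuming a Node runtime; `requireSecret` is an illustrative helper, not part of MARIA:

```ts
// Illustrative fail-fast pattern for secrets; adapt to your secret manager.
function requireSecret(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast rather than falling back to a plaintext default in config files.
    throw new Error(`Missing required secret: ${name}`);
  }
  return value;
}

const jwtSecret = requireSecret("NEXTAUTH_SECRET"); // keep this value stable across deploys
```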
Local LLM Setup Guide (Ultra–Enterprise)
Run Ollama / LM Studio / vLLM on your own hardware (no cloud dependency) and connect MARIA to your local inference server.
- This guide targets the Local LLM Infrastructure feature for Ultra–Enterprise.
- Enterprise is designed for local execution by default (behavior equivalent to LOCAL_MODE=1).
# Prefer *_API_BASE (OpenAI-compatible base). *_API_URL is legacy compatibility (may be removed).
LMSTUDIO_API_BASE=http://localhost:1234/v1
OLLAMA_API_BASE=http://localhost:11434
VLLM_API_BASE=http://localhost:8000/v1

# Compatibility (deprecated)
# LMSTUDIO_API_URL=http://localhost:1234
# OLLAMA_API_URL=http://localhost:11434
# VLLM_API_URL=http://localhost:8000

# Recommended: force local mode (Enterprise-equivalent)
LOCAL_MODE=1

# Default provider/model (optional)
MARIA_PROVIDER=lmstudio  # or: ollama / vllm
MARIA_MODEL=gpt-oss-20b  # example (LM Studio)
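As a sketch of the precedence just described (prefer `*_API_BASE`, fall back to legacy `*_API_URL`), host-side resolution might look like the following; `resolveBase` is illustrative, not MARIA's actual config loader:

```ts
// Illustrative only; NOT MARIA's actual config loader.
// Precedence: <PROVIDER>_API_BASE first, legacy <PROVIDER>_API_URL as fallback.
function resolveBase(provider: "lmstudio" | "ollama" | "vllm"): string {
  const key = provider.toUpperCase();
  const base = process.env[`${key}_API_BASE`] ?? process.env[`${key}_API_URL`];
  if (!base) {
    throw new Error(`No API base configured for ${provider}`);
  }
  // Legacy *_API_URL values may lack the /v1 suffix; real code would normalize that.
  return base.replace(/\/+$/, "");
}
```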
# 1) Start (skip if already running)
ollama serve

# 2) Pull models (examples)
ollama pull llama3.2:3b
ollama pull mistral:7b
ollama pull mixtral:8x7b
ollama pull deepseek-coder:6.7b
ollama pull phi3.5:3.8b

# 3) Confirm installation
ollama list

# 4) Verify API
curl http://localhost:11434/api/version
curl http://localhost:11434/api/tags
# 1) (GUI) Download a model (e.g., gpt-oss-120b / gpt-oss-20b)
# 2) (GUI) Start Local Server in "OpenAI Compatible" mode (default: http://localhost:1234/v1)
# If you have the CLI (lms)
lms ls
lms server start
# Verify
curl http://localhost:1234/v1/models
curl http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer lm-studio" \
-d '{"model":"gpt-oss-20b","messages":[{"role":"user","content":"ping"}],"stream":false}'
# Example: start an OpenAI-compatible server (follow vLLM's setup guide for dependencies)
python -m vllm.entrypoints.openai.api_server \
--model mistralai/Mistral-7B-Instruct-v0.2 \
--host 0.0.0.0 \
--port 8000
# Verify
curl http://localhost:8000/v1/models
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"mistralai/Mistral-7B-Instruct-v0.2","messages":[{"role":"user","content":"ping"}],"stream":false}'
# Example: run with LM Studio explicitly
maria /ceo --provider lmstudio --model gpt-oss-20b "Summarize the requirements"

# Example: run with Ollama explicitly
maria /ceo --provider ollama --model llama3.2:3b "Summarize the requirements"
Next steps (how to stay aligned with “latest”)
Because the repository is not public, the safest “source of truth” is what the product exposes at runtime.
- Use /help for the latest available commands (READY-only, manifest-backed).
- Use this page’s Command catalog section (auto-generated at build time from the READY manifest).
- For details on a specific command, run /help <command>.
- Keep secrets out of Git; use a secret manager; keep NEXTAUTH_SECRET stable.
- Prefer deterministic flows: preview → apply, and keep evidence (logs/manifests).
- Enterprise policy: run locally (LOCAL_MODE-aligned); avoid heuristics; route ambiguity via LLM contracts.