# Deployment
The reference deployment splits Enoch across two machines:
- Control VM — runs the FastAPI control plane, dashboard, queue state, paper review APIs, timers, and optional corpus/export tooling.
- Worker machine — runs the wake-gate API used by Codex/OMX jobs, tracks processes and telemetry, and stores project workspaces and evidence.
You can also run both roles on one host for development.
## Prerequisites

Control VM:

- Linux with systemd
- Python 3.11+
- uv
- git
- network access to the worker API

Worker machine:

- Linux with systemd or an equivalent process manager
- Python 3.11+
- uv
- git
- the Codex/OMX stack used by your dispatch script
- NVIDIA telemetry libraries if you want GPU visibility
## Install the control plane

```bash
git clone https://github.com/alias8818/enoch-agentic-research-system.git
cd enoch-agentic-research-system
uv venv --python /usr/bin/python3 .venv
uv pip install --python .venv/bin/python -e .
uv run pytest -q
```
When run as root, the install helper can copy the checkout into /opt, create config/state directories, install dependencies, and write systemd units:

```bash
sudo scripts/install-control-plane.sh \
  --prefix /opt/enoch-agentic-research-system \
  --config-dir /etc/enoch \
  --state-dir /var/lib/enoch-control-plane \
  --user enoch
```
Edit `/etc/enoch/config.json` before enabling the service, and replace every placeholder token and URL.
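A quick sanity check before starting the service can catch missed placeholders. This is an illustrative sketch, not a repo tool; the markers it scans for (`<...>`, `REPLACE`, `CHANGEME`) are assumptions about how placeholders are written in the example config:

```python
import json
import re

def find_placeholders(config_text: str) -> list[str]:
    """Return config keys whose values still look like unfilled placeholders."""
    config = json.loads(config_text)
    suspicious = []
    for key, value in config.items():
        # Flag values containing <angle-bracket> stubs or common placeholder words.
        if isinstance(value, str) and re.search(r"<[^>]+>|REPLACE|CHANGEME", value):
            suspicious.append(key)
    return suspicious

sample = '{"omx_inbound_bearer_token": "<fill-me>", "worker_url": "http://10.0.0.2:8787"}'
print(find_placeholders(sample))  # ['omx_inbound_bearer_token']
```

Run it against the live config path before `systemctl enable` and refuse to start if the list is non-empty.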
## Configure required secrets

Generate distinct values for `omx_inbound_bearer_token`, `completion_callback_token`, and `worker_wake_gate_bearer_token`:

```bash
python3 -c "import secrets; print(secrets.token_urlsafe(48))"
```
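If you prefer to generate all three tokens at once, a short Python sketch (the key names come from this guide; the script itself is just a convenience) can emit a JSON fragment to merge into the config:

```python
import json
import secrets

# The three bearer tokens this guide requires; each must be distinct.
KEYS = [
    "omx_inbound_bearer_token",
    "completion_callback_token",
    "worker_wake_gate_bearer_token",
]

fragment = {key: secrets.token_urlsafe(48) for key in KEYS}
assert len(set(fragment.values())) == len(KEYS)  # distinct by construction
print(json.dumps(fragment, indent=2))
```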
Never commit live config files, Notion tokens, Pushover credentials, provider API keys, private hostnames, or production logs.
## Run the control service

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now enoch-control-plane.service
sudo systemctl status enoch-control-plane.service
curl -fsS http://127.0.0.1:8787/healthz
```

Open the dashboard and authenticate with the configured inbound token:

```
http://<control-vm>:8787/dashboard
```
## Configure the worker

On the worker host:

```bash
git clone https://github.com/alias8818/enoch-agentic-research-system.git
cd enoch-agentic-research-system
scripts/install-worker.sh
```

The worker can run the same app with a worker-focused config:

```bash
OMX_WAKE_GATE_CONFIG=$HOME/.config/enoch-worker/config.json \
  uv run uvicorn omx_wake_gate.app:app --host 0.0.0.0 --port 8787
curl -fsS -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8787/control/health
```
## Optional timers

The repo includes systemd units for optional workflows:

- `enoch-queue-alert-check.timer` — periodically checks queue health and can send Pushover alerts when configured.
- `enoch-notion-sync.timer` — syncs Notion intake/projection data when Notion environment variables are configured.
- `enoch-paper-draft-next.timer` — drafts the next eligible paper without dispatching new work.

Enable only the timers you have configured and tested.
## Smoke-test before live dispatch

```bash
export ENOCH_BASE_URL=http://<control-vm>:8787
export ENOCH_CONTROL_TOKEN=<omx_inbound_bearer_token>
scripts/smoke-test-local.sh
```
Then test worker preflight:

```bash
curl -fsS -H "Authorization: Bearer $ENOCH_CONTROL_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"wake_gate_url":"http://<worker>:8787","bearer_token":"<worker-token>","require_paused":false,"strict":false}' \
  "$ENOCH_BASE_URL/control/api/preflight" | python3 -m json.tool
```
Use dry-run dispatch first:

```bash
curl -fsS -H "Authorization: Bearer $ENOCH_CONTROL_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"dry_run":true,"requested_by":"operator-smoke-test"}' \
  "$ENOCH_BASE_URL/control/dispatch-next" | python3 -m json.tool
```
## Paper artifact workflow

Paper generation is optional and depends on evidence and paper rows. The default `paper_writer_provider` is deterministic. The code also supports an OpenAI-compatible synthetic.new provider via the `paper_writer_base_url`, `paper_writer_model`, and `paper_writer_api_key` settings.
Do not publish generated artifacts until corpus packaging/provenance checks pass and a human understands the limitations.
## Review the dispatch flow

Every live dispatch request passes through the following checks in order. Understanding this sequence helps you diagnose failures at each stage:

- No conflicting active GPU lane exists (single active lane enforced).
- A queue item exists.
- Live dispatch is enabled (`live_dispatch_enabled` must be true in config).
- The control plane is not paused and maintenance mode is not active (checked together).
- Worker preflight is healthy.
- The dispatch script launches the agent run.
- The wake gate tracks process and telemetry truth.
- The completion callback or status update is emitted only after the gate is satisfied.
Always use dry-run dispatch first when testing a new deployment. It exercises the core dispatch guards — pause checks, lane safety (single active GPU lane), and candidate selection — without launching a real agent run. Note that worker preflight is only performed during live dispatch, not dry-run.
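The precedence above can be sketched as a single guard function. The state field names here are illustrative assumptions, not the actual implementation:

```python
def dispatch_guard(state: dict) -> str:
    """Return 'ok' or the first failing guard, checked in documented order."""
    if state.get("active_gpu_lane"):
        return "lane_conflict"        # single active GPU lane enforced
    if not state.get("queue_item"):
        return "queue_empty"
    if not state.get("live_dispatch_enabled"):
        return "live_dispatch_disabled"
    if state.get("paused") or state.get("maintenance"):
        return "paused_or_maintenance"
    if not state.get("dry_run") and not state.get("worker_preflight_ok"):
        return "preflight_failed"     # preflight is skipped for dry runs
    return "ok"

print(dispatch_guard({"queue_item": True, "live_dispatch_enabled": True,
                      "worker_preflight_ok": True}))  # ok
```

The first failing check short-circuits, which is why, for example, a paused control plane masks a broken worker preflight until you unpause.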
## Configure the paper artifact workflow

The control plane can rewrite and package generated research artifacts when paper rows and evidence are present. This path depends on provider credentials and remains an operator-reviewed path, not a publication guarantee. Add the following settings to `/etc/enoch/config.json` only when you are ready to test the provider-backed writer:

```json
{
  "paper_writer_provider": "synthetic.new",
  "paper_writer_base_url": "https://api.synthetic.new/openai/v1",
  "paper_writer_model": "hf:zai-org/GLM-5.1",
  "paper_writer_api_key": "your-provider-key",
  "paper_writer_fallback_enabled": true,
  "paper_evidence_sync_enabled": true
}
```
Do not publish generated artifacts until corpus import, index building, packaging/provenance lint, strict claim/evidence audit status, and human review have all explicitly completed. The packaging/provenance checks scan for placeholder citations, missing provenance, and required metadata files. The stricter claim/evidence audit is separate and currently reports blocked audit gaps rather than full deep auditability.
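As a rough illustration of the kind of scanning the packaging lint performs (the real checks live in the repo; the marker patterns and file names below are assumptions for the example):

```python
import re

REQUIRED_FILES = {"manifest.json", "provenance.json"}  # assumed metadata file names
PLACEHOLDER = re.compile(r"\[(?:CITATION NEEDED|TODO|\?\?)\]", re.IGNORECASE)

def lint_artifact(text: str, files: set[str]) -> list[str]:
    """Return human-readable problems found in a packaged artifact."""
    problems = []
    if PLACEHOLDER.search(text):
        problems.append("placeholder citation present")
    missing = REQUIRED_FILES - files
    if missing:
        problems.append(f"missing metadata files: {sorted(missing)}")
    return problems

print(lint_artifact("Results improve by 3% [CITATION NEEDED].", {"manifest.json"}))
```

An empty list from a scan like this is necessary but not sufficient; the claim/evidence audit and human review remain separate gates.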
## What is not included
This repository does not include live secrets, private production config, generated paper corpus artifacts, old workflow-tool exports, private run state databases, or production logs. Those are intentionally excluded. Use the example config and this guide to recreate a clean deployment from scratch.