
Autonomous AI research runs produce scattered evidence: notes written during execution, intermediate metrics, result summaries, and claim records. Without a system that collects and preserves that evidence before artifact generation begins, generated reports have no stable grounding; they become prose detached from the work that produced them. Enoch therefore treats evidence collection as a required pipeline stage, not an optional export.

Evidence sync runs before paper generation and rewrite work, but it is configuration-gated: paper_evidence_sync_enabled defaults to false in config.example.json. When enabled, the control plane attempts to copy high-signal run evidence from the worker before rewriting artifacts. When disabled, or when no local evidence is present, generated drafts must be treated as review-required and bounded by whatever run record, evidence bundle, or claim ledger is available. Enoch keeps generated prose separate from the evidence that grounded it; that separation is the main reason the generated corpus is reviewable.
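A minimal sketch of reading that gate, assuming the control plane loads a JSON config file; the field name and its default of false come from this page, while the helper and the config path are illustrative:

```python
import json

def evidence_sync_enabled(config_path: str = "config.json") -> bool:
    """Return True only when evidence sync is explicitly enabled.

    paper_evidence_sync_enabled defaults to false in config.example.json,
    so a missing key must read as "sync disabled".
    """
    with open(config_path) as f:
        config = json.load(f)
    return bool(config.get("paper_evidence_sync_enabled", False))

if __name__ == "__main__":
    if evidence_sync_enabled():
        print("gate open: control plane may copy run evidence from the worker")
    else:
        print("gate closed: treat generated drafts as review-required")
```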

Evidence files

The corpus snapshot contains per-artifact folders with files such as:
  • paper.md
  • metadata.json
  • evidence_bundle.json
  • claim_ledger.json
  • paper_manifest.json
The Hugging Face export flattens those artifacts into data/artifacts.jsonl and reports fields including paper_markdown, metadata, evidence_bundle, claim_ledger, paper_manifest, github_url, ai_generated, human_authorship_claimed, and review_status.

Before the artifact writer runs, and only when paper_evidence_sync_enabled is true, Enoch syncs evidence from the worker project workspace to the control VM. The primary sync method is HTTP: the control plane calls the worker wake gate API to retrieve evidence files. If the HTTP sync is unavailable or returns an error, an optional SSH fallback uses paper_evidence_sync_ssh_host and paper_evidence_sync_remote_root from your config to copy files directly; paper_evidence_sync_timeout_sec bounds each attempt. Docs should not imply evidence exists for a run unless artifact files or metadata show it. A sketch of the fallback flow appears below.
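This sketch shows the HTTP-first, SSH-fallback shape of the sync. The wake gate URL, the scp invocation, and the payload format are assumptions; only the paper_evidence_sync_* field names come from this page:

```python
import json
import subprocess
import urllib.request

def sync_evidence(config: dict, run_id: str, dest_dir: str) -> str:
    """Copy run evidence to the control VM: HTTP first, SSH fallback.

    The endpoint path and payload shape are illustrative; a real worker
    wake gate API may differ.
    """
    timeout = config.get("paper_evidence_sync_timeout_sec", 60)
    try:
        # Primary path: ask the worker wake gate API for evidence files
        # (hypothetical URL; substitute your worker's address).
        url = f"http://worker.internal/wake-gate/evidence/{run_id}"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            files = json.load(resp)  # assumed {filename: content} payload
        for name, content in files.items():
            with open(f"{dest_dir}/{name}", "w") as f:
                f.write(content)
        return "http"
    except OSError:
        # Fallback path: copy directly over SSH from the configured root.
        host = config["paper_evidence_sync_ssh_host"]
        root = config["paper_evidence_sync_remote_root"]
        subprocess.run(
            ["scp", "-r", f"{host}:{root}/{run_id}", dest_dir],
            check=True,
            timeout=timeout,
        )
        return "ssh"
```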

Artifact writing

The default writer is deterministic, which avoids an external model call. The code also supports aliases for a synthetic.new OpenAI-compatible provider. Configure provider URL, model, API key, timeout, temperature, max tokens, and fallback behavior through paper_writer_* fields. Provider support does not mean generated papers are validated or peer reviewed.
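A hedged sketch of how the paper_writer_* knobs might select a writer. Every field spelling below is an assumption extrapolated from the prefix, since this page names only the prefix and the configurable settings:

```python
from dataclasses import dataclass, field

@dataclass
class WriterChoice:
    kind: str                      # "provider" or "deterministic"
    settings: dict = field(default_factory=dict)

def choose_writer(config: dict) -> WriterChoice:
    """Use the synthetic.new-style provider when fully configured;
    otherwise fall back to the deterministic writer, which is the
    default and makes no external model call.
    """
    url = config.get("paper_writer_provider_url")   # assumed spelling
    model = config.get("paper_writer_model")        # assumed spelling
    api_key = config.get("paper_writer_api_key")    # assumed spelling

    if url and model and api_key:
        return WriterChoice("provider", {
            "url": url,
            "model": model,
            "api_key": api_key,
            "timeout_sec": config.get("paper_writer_timeout_sec", 120),
            "temperature": config.get("paper_writer_temperature", 0.0),
            "max_tokens": config.get("paper_writer_max_tokens", 4096),
        })
    # Fallback behavior: deterministic writing needs no provider at all.
    return WriterChoice("deterministic")
```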

Corpus gates

The corpus packaging/provenance policy requires generated artifacts to avoid fake citations, placeholder markers, implied human authorship, and peer-review claims. It also expects provenance metadata plus evidence-bundle and claim-ledger files. The stricter claim/evidence audit separately checks whether ledgers contain evidence-linked claims and whether referenced result files are public or explicitly unavailable.
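An illustrative reading of the packaging/provenance policy as a lint pass. The file names match the corpus snapshot layout above; the marker list and check details are assumptions:

```python
import json
from pathlib import Path

# Phrases a placeholder scan might flag; the exact list is an assumption.
PLACEHOLDER_MARKERS = ("[citation needed]", "todo:", "lorem ipsum")

def lint_artifact(folder: str) -> list[str]:
    """Return packaging/provenance problems for one artifact folder."""
    problems = []
    root = Path(folder)

    # Required provenance companions to paper.md.
    for required in ("metadata.json", "evidence_bundle.json", "claim_ledger.json"):
        if not (root / required).exists():
            problems.append(f"missing required file: {required}")

    paper_path = root / "paper.md"
    if paper_path.exists():
        paper = paper_path.read_text().lower()
        for marker in PLACEHOLDER_MARKERS:
            if marker in paper:
                problems.append(f"placeholder marker found: {marker!r}")
    else:
        problems.append("missing paper.md")

    meta_path = root / "metadata.json"
    if meta_path.exists():
        meta = json.loads(meta_path.read_text())
        if meta.get("human_authorship_claimed"):
            problems.append("artifact implies human authorship")
    return problems
```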

Current local corpus snapshot

Do not publish or finalize a paper until packaging/provenance lint, strict claim/evidence audit status, and human review are explicit. A paper with placeholder citations or missing provenance artifacts misrepresents the evidence behind its claims. A paper can pass packaging/provenance lint while still failing the stricter claim/evidence audit if its claim ledger is empty or its result-file references are not public.
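Tooling can make those three preconditions explicit as a single gate. The record keys below are hypothetical; approved_for_finalization reuses a ReviewStatus value documented later on this page:

```python
def ready_to_finalize(record: dict) -> bool:
    """A paper is releasable only when all three signals are explicit:
    packaging/provenance lint passed, the stricter claim/evidence audit
    passed, and a human reviewer approved finalization.

    Record keys are hypothetical; the three gates are the ones this
    page requires before publishing.
    """
    return (
        record.get("lint_status") == "pass"
        and record.get("claim_evidence_audit_status") == "pass"
        and record.get("review_status") == "approved_for_finalization"
    )
```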

Paper status lifecycle

Papers move through the following PaperStatus values as they progress from generation to release:
| Status | Meaning |
| --- | --- |
| eligible | Run complete; paper row eligible for generation or review |
| draft_generating | Artifact writer is generating the draft |
| draft_review | Draft generated; awaiting operator packaging/provenance review |
| publication_generating | Publication-targeted rewrite in progress |
| publication_draft | Publication draft ready for finalization review |
| human_review_required | Packaging/provenance check flagged issues requiring human judgment |
| archived | Paper archived; removed from active review queue |
The review workflow (ReviewStatus) tracks the operator’s progress through the checklist: unreviewed → triage_ready → in_review → changes_requested or approved_for_finalization → finalized. Papers can also be rejected or placed in blocked status if a blocker is recorded.
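Both lifecycles map naturally onto enums. The values below mirror the documented statuses, while the transition map is an illustrative reading of the checklist order, not a verified state machine from the codebase:

```python
from enum import Enum

class PaperStatus(Enum):
    ELIGIBLE = "eligible"
    DRAFT_GENERATING = "draft_generating"
    DRAFT_REVIEW = "draft_review"
    PUBLICATION_GENERATING = "publication_generating"
    PUBLICATION_DRAFT = "publication_draft"
    HUMAN_REVIEW_REQUIRED = "human_review_required"
    ARCHIVED = "archived"

class ReviewStatus(Enum):
    UNREVIEWED = "unreviewed"
    TRIAGE_READY = "triage_ready"
    IN_REVIEW = "in_review"
    CHANGES_REQUESTED = "changes_requested"
    APPROVED_FOR_FINALIZATION = "approved_for_finalization"
    FINALIZED = "finalized"
    REJECTED = "rejected"
    BLOCKED = "blocked"

# Checklist order as described above; rejected/blocked can be entered
# when a blocker is recorded (illustrative, not verified).
REVIEW_TRANSITIONS = {
    ReviewStatus.UNREVIEWED: {ReviewStatus.TRIAGE_READY},
    ReviewStatus.TRIAGE_READY: {ReviewStatus.IN_REVIEW},
    ReviewStatus.IN_REVIEW: {
        ReviewStatus.CHANGES_REQUESTED,
        ReviewStatus.APPROVED_FOR_FINALIZATION,
    },
    ReviewStatus.APPROVED_FOR_FINALIZATION: {ReviewStatus.FINALIZED},
}
```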

Provenance framing

The reports Enoch produces are AI-generated research artifacts, not human-authored or peer-reviewed papers. They are built from run notes, evidence bundles, claim ledgers, and reproducibility traces produced during autonomous agent runs. The provenance chain is:
agent run → run notes + metrics + results
         → evidence_bundle.json + claim_ledger.json (synced to control VM; strict audit is separate)
         → artifact writer (consumes evidence context)
         → Markdown report + LaTeX + paper_manifest.json
         → packaging/provenance scan
         → corpus release
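A downstream consumer can verify that framing before release; a minimal sketch, assuming the Hugging Face export at data/artifacts.jsonl with the fields listed earlier:

```python
import json

def check_framing(path: str = "data/artifacts.jsonl") -> list[str]:
    """Flag export records whose framing fields contradict the
    provenance policy. Field names (ai_generated,
    human_authorship_claimed) come from the export description on
    this page; the messages are illustrative.
    """
    issues = []
    with open(path) as f:
        for i, line in enumerate(f):
            record = json.loads(line)
            if not record.get("ai_generated", False):
                issues.append(f"record {i}: not marked ai_generated")
            if record.get("human_authorship_claimed", False):
                issues.append(f"record {i}: claims human authorship")
    return issues
```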
The authorship statement for all artifacts produced by Enoch runs is:
The reports produced by this system are AI-generated research artifacts created from automated run notes, evidence bundles, claim ledgers, and reproducibility traces. The maintainer releases the corpus for inspection and critique but does not claim personal authorship of the generated papers, arguments, or prose.
When citing a paper produced by an Enoch run, credit the system operator as the release maintainer, not as the human author of the generated content. See the authorship and provenance reference for recommended citation language.