
When a research run has a paper row and evidence context, the Enoch control plane can rewrite and package generated paper artifacts. You control which provider performs that rewrite through the paper_writer_provider field. The deterministic mode produces a fixed review-required template with no external dependencies, while synthetic.new connects to a hosted OpenAI-compatible API using the configured model. Treat provider-backed rewriting as an operator-reviewed path, not a publication guarantee. If claim extraction is unavailable, the writer must preserve existing ledgers or write a blocked empty ledger that cannot pass strict claim/evidence audit.

Providers

deterministic

The default provider. It produces template-based output without calling an external LLM. Use it for local testing and predictable behavior.

synthetic.new

Optional provider-backed rewriting path. Enoch sends paper context to the Synthetic.new API endpoint using the OpenAI-compatible chat-completion interface, using GLM-5.1 (hf:zai-org/GLM-5.1) as the default model in config.example.json. The response replaces the draft paper body. If the provider request fails and paper_writer_fallback_enabled is true, Enoch falls back to the deterministic template rather than failing the paper-write step.
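As a sketch of what such an OpenAI-compatible chat-completion request body looks like (the helper name and message framing are illustrative assumptions; only the model identifier and the defaults come from this page):

```python
# Hypothetical payload builder; Enoch's actual internals are not shown here.
# The shape follows the standard OpenAI-compatible chat-completion format.
def build_paper_write_request(paper_body: str,
                              model: str = "hf:zai-org/GLM-5.1",
                              temperature: float = 0.2,
                              max_tokens: int = 12000) -> dict:
    """Build an OpenAI-compatible chat-completion payload for a paper rewrite."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Rewrite the research paper draft below."},
            {"role": "user", "content": paper_body},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_paper_write_request("Draft paper text...")
```

The response's first choice replaces the draft paper body, so anything beyond a single completion is ignored.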

Fields

  • paper_writer_provider
  • paper_writer_base_url
  • paper_writer_model
  • paper_writer_api_key
  • paper_writer_timeout_sec
  • paper_writer_temperature
  • paper_writer_max_tokens
  • paper_writer_fallback_enabled

If paper_writer_api_key is empty, the provider code can fall back to the SYNTHETIC_API_KEY environment variable. Do not commit API keys.
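The documented lookup order can be sketched as follows (the function name is an assumption, not Enoch's actual API):

```python
import os

# Sketch of the documented key-resolution order: an explicit
# paper_writer_api_key wins; otherwise fall back to the SYNTHETIC_API_KEY
# environment variable; an empty result means "not configured".
def resolve_api_key(config: dict) -> str:
    key = config.get("paper_writer_api_key", "")
    if key:
        return key
    return os.environ.get("SYNTHETIC_API_KEY", "")
```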

Fallback behavior

When fallback is enabled, provider failure can fall back to deterministic output. That is operationally useful, but reviewers should still inspect the draft and its evidence before finalization.
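The fallback contract can be sketched like this (function names are illustrative, not Enoch's internals; only the behavior is taken from this page):

```python
# When fallback is enabled, a provider failure degrades to the deterministic
# template instead of raising; when disabled, the failure surfaces explicitly.
def deterministic_template(paper_body: str) -> str:
    # Fixed review-required template with no external dependencies.
    return "[REVIEW REQUIRED]\n" + paper_body

def rewrite_with_fallback(paper_body: str, provider_call,
                          fallback_enabled: bool = True) -> str:
    try:
        return provider_call(paper_body)
    except Exception:
        if fallback_enabled:
            return deterministic_template(paper_body)
        raise  # surface provider failures when fallback is off

def failing_provider(_body: str) -> str:
    raise RuntimeError("provider timeout")
```

Either way, the resulting draft still needs reviewer inspection before finalization.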

Field reference

paper_writer_timeout_sec
number, default: 180
Maximum seconds to wait for a response from the provider before treating the request as failed. Minimum value is 10. Increase this for very long papers or slow network links.

paper_writer_temperature
number, default: 0.2
Sampling temperature passed to the model. Range 0.0 to 2.0. Lower values produce more deterministic output; the default of 0.2 is appropriate for structured research writing.

paper_writer_max_tokens
number, default: 12000
Maximum number of output tokens the model may generate per paper-write request. Minimum value is 512. Increase this if your papers are truncated.

paper_writer_fallback_enabled
boolean, default: true
When true and paper_writer_provider is synthetic.new, a failed provider request causes Enoch to fall back to the deterministic template instead of raising an error. Set to false if you want provider failures to surface explicitly.
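The documented minimums and ranges can be checked with a small validator (a sketch; the function name and defaults handling are assumptions, not Enoch's API):

```python
# Validate paper-writer config values against the documented bounds:
# timeout >= 10, temperature in [0.0, 2.0], max_tokens >= 512.
def validate_paper_writer_config(cfg: dict) -> list:
    errors = []
    if cfg.get("paper_writer_timeout_sec", 180) < 10:
        errors.append("paper_writer_timeout_sec must be >= 10")
    temp = cfg.get("paper_writer_temperature", 0.2)
    if not 0.0 <= temp <= 2.0:
        errors.append("paper_writer_temperature must be in [0.0, 2.0]")
    if cfg.get("paper_writer_max_tokens", 12000) < 512:
        errors.append("paper_writer_max_tokens must be >= 512")
    return errors
```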

Synthetic.new config

Add the following fields to your control VM’s config file to test provider-backed paper rewriting with Synthetic.new and GLM-5.1. Confirm corpus packaging/provenance scans and human review before treating any output as release-ready:
{
  "paper_writer_provider": "synthetic.new",
  "paper_writer_base_url": "https://api.synthetic.new/openai/v1",
  "paper_writer_model": "hf:zai-org/GLM-5.1",
  "paper_writer_api_key": "your-provider-key",
  "paper_writer_fallback_enabled": true,
  "paper_evidence_sync_enabled": true
}
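The snippet above can be parsed and sanity-checked before a test run (the checked keys come from this page; the check itself is just an illustrative sketch):

```python
import json

# Parse the example config fragment and confirm the provider-backed
# rewrite path and fallback are actually enabled.
snippet = """
{
  "paper_writer_provider": "synthetic.new",
  "paper_writer_base_url": "https://api.synthetic.new/openai/v1",
  "paper_writer_model": "hf:zai-org/GLM-5.1",
  "paper_writer_api_key": "your-provider-key",
  "paper_writer_fallback_enabled": true,
  "paper_evidence_sync_enabled": true
}
"""
cfg = json.loads(snippet)
assert cfg["paper_writer_provider"] == "synthetic.new"
assert cfg["paper_writer_fallback_enabled"] is True
```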
Do not publish generated paper artifacts until corpus packaging/provenance lint, strict claim/evidence audit status, and human review are explicit. The paper-writer output is a draft; a packaging/provenance pass is not a deep claim audit.