When a research run has a paper row and evidence context, the Enoch control plane can rewrite and package generated paper artifacts. You control which provider performs that rewrite through the paper_writer_provider field. The deterministic mode produces a fixed review-required template with no external dependencies, while synthetic.new connects to a hosted OpenAI-compatible API using the configured model. Treat provider-backed rewriting as an operator-reviewed path, not a publication guarantee. If claim extraction is unavailable, the writer must preserve existing ledgers or write a blocked empty ledger that cannot pass strict claim/evidence audit.
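The ledger rule above can be sketched roughly as follows. The function and dictionary shapes are hypothetical illustrations of the rule, not the actual Enoch implementation:

```python
def write_claim_ledger(existing_ledger, claims):
    """Hypothetical sketch of the ledger rule: if claim extraction is
    unavailable, keep the existing ledger or emit a blocked empty one."""
    if claims is None:  # claim extraction unavailable
        if existing_ledger is not None:
            return existing_ledger  # preserve what we already have
        # A blocked empty ledger cannot pass strict claim/evidence audit.
        return {"status": "blocked", "claims": []}
    return {"status": "ok", "claims": claims}
```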
Providers
deterministic
The default provider. It produces template-based output without calling an external LLM. Use it for local testing and predictable behavior.
synthetic.new
Optional provider-backed rewriting path. Enoch sends paper context to the Synthetic.new API endpoint using the OpenAI-compatible chat-completion interface, using GLM-5.1 (hf:zai-org/GLM-5.1) as the default model in config.example.json. The response replaces the draft paper body. If the provider request fails and paper_writer_fallback_enabled is true, Enoch falls back to the deterministic template rather than failing the paper-write step.
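The request Enoch sends can be sketched as a standard OpenAI-compatible chat-completion payload. The endpoint path, system prompt, and base URL below are assumptions based on that interface, not confirmed internals:

```python
import json

def build_paper_write_request(base_url: str, model: str, paper_body: str,
                              temperature: float = 0.2,
                              max_tokens: int = 4096):
    """Build an OpenAI-compatible chat-completion request (hypothetical payload)."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    payload = {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system",
             "content": "Rewrite the draft research paper for clarity."},
            {"role": "user", "content": paper_body},
        ],
    }
    return url, json.dumps(payload)

# Example call with the default model from config.example.json:
url, body = build_paper_write_request(
    "https://api.synthetic.new/v1", "hf:zai-org/GLM-5.1", "Draft body...")
```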
Fields
paper_writer_provider
paper_writer_base_url
paper_writer_model
paper_writer_api_key
paper_writer_timeout_sec
paper_writer_temperature
paper_writer_max_tokens
paper_writer_fallback_enabled
If paper_writer_api_key is empty, provider code can look for the SYNTHETIC_API_KEY environment variable. Do not commit API keys.
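The key-resolution order can be sketched like this; `resolve_api_key` is an illustrative helper name, not a documented Enoch function:

```python
import os

def resolve_api_key(config: dict):
    """Prefer the configured key; otherwise fall back to the
    SYNTHETIC_API_KEY environment variable. Returns None if neither is set."""
    key = config.get("paper_writer_api_key", "")
    return key or os.environ.get("SYNTHETIC_API_KEY")
```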
Fallback behavior
When fallback is enabled, a provider failure falls back to deterministic output rather than failing the paper-write step. That is operationally useful, but reviewers should still inspect the draft and its evidence before finalization.
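The fallback behavior can be sketched as follows. Both function names and the template shape are hypothetical; the sketch only illustrates the try-provider-then-fall-back control flow described above:

```python
def deterministic_template(paper_body: str) -> str:
    # Fixed review-required template; no external calls (hypothetical shape).
    return "STATUS: REVIEW REQUIRED\n\n" + paper_body

def rewrite_paper(paper_body: str, provider_call,
                  fallback_enabled: bool = True) -> str:
    """Provider-first rewrite with optional deterministic fallback."""
    try:
        return provider_call(paper_body)
    except Exception:
        if fallback_enabled:
            return deterministic_template(paper_body)
        raise  # surface the provider failure explicitly
```

With fallback_enabled set to False, a provider outage fails the step loudly instead of silently producing template output, which some operators prefer for auditability.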
paper_writer_timeout_sec
Maximum seconds to wait for a response from the provider before treating the request as failed. Minimum value is 10. Increase this for very long papers or slow network links.

paper_writer_temperature
Sampling temperature passed to the model. Range 0.0–2.0. Lower values produce more deterministic output; the default 0.2 is appropriate for structured research writing.

paper_writer_max_tokens
Maximum number of output tokens the model may generate per paper-write request. Minimum value is 512. Increase this if your papers are truncated.

paper_writer_fallback_enabled
When true and paper_writer_provider is synthetic.new, a failed provider request causes Enoch to fall back to the deterministic template instead of raising an error. Set to false if you want provider failures to surface explicitly.

Synthetic.new config
Add the following fields to your control VM's config file to test provider-backed paper rewriting with Synthetic.new and GLM-5.1. Confirm corpus packaging/provenance scans and human review before treating any output as release-ready.

Publication rule
Do not publish generated paper artifacts until corpus packaging/provenance lint, strict claim/evidence audit status, and human review are explicit. The paper-writer output is a draft; a packaging/provenance pass is not a deep claim audit.
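The Synthetic.new fields described above can be combined into a config fragment like the following. The model id matches config.example.json; the base URL, timeout, and token values are illustrative assumptions, so check them against your deployment:

```json
{
  "paper_writer_provider": "synthetic.new",
  "paper_writer_base_url": "https://api.synthetic.new/v1",
  "paper_writer_model": "hf:zai-org/GLM-5.1",
  "paper_writer_api_key": "",
  "paper_writer_timeout_sec": 120,
  "paper_writer_temperature": 0.2,
  "paper_writer_max_tokens": 4096,
  "paper_writer_fallback_enabled": true
}
```

Leaving paper_writer_api_key empty lets the key come from the environment instead of the committed config file.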