v0.9 · Commercial beta · Now accepting applications

From a novel
to a finished reel,
agent-driven.

ArcReel is an open-source AI video workspace. Drop in a novel, and an agent orchestrates scriptwriting, character design, storyboarding and final video synthesis — keeping characters, scenes and props visually consistent across every shot.

Apply for commercial beta · Star on GitHub (1.9k)
AGPL-3.0 license
Runs anywhere: Docker · WSL2 · macOS
4 providers, 12+ models: Gemini · 火山方舟 (Volcengine Ark) · Grok · OpenAI
Multi-agent orchestration: subagent architecture
pipeline.reel · v0.9
06 stages · agent-orchestrated
01 · Upload novel
Drop in the source text. Chinese or English, any length.

02 · Build asset library
Agents scan the full work and index every character, scene, and prop.

03 · Plan & split episodes
Progressive human-in-the-loop episode breakdown with AI-suggested cut points.

04 · Generate script JSON
Normalize prose into structured scene/shot JSON — narration or drama mode.

05 · Character & storyboard frames
Reference sheets first; then every shot with cross-scene consistency.

06 · Synthesize video
Image-to-video per shot, then FFmpeg-composed final reel or Jianying export.
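As a sketch of what stage 04 ("Generate script JSON") might emit, here is a hypothetical scene/shot structure. The field names and nesting are illustrative assumptions, not ArcReel's published schema:

```python
import json

# Hypothetical scene/shot structure for the "Generate script JSON" stage.
# Every field name here is an illustrative assumption.
script = {
    "episode": 1,
    "mode": "drama",  # or "narration"
    "scenes": [
        {
            "id": "E1S01",
            "location": "riverside teahouse",
            "characters": ["Lin Yan"],
            "shots": [
                {
                    "id": "E1S01-01",
                    "type": "establishing",
                    "description": "Wide shot of the teahouse at dusk.",
                    "dialogue": None,
                },
            ],
        },
    ],
}

# Serialize for the downstream storyboard stage.
script_json = json.dumps(script, indent=2)
```

The point of normalizing to a structure like this is that every later stage (storyboard frames, video synthesis) can be driven per-shot, with asset references resolved against the library built in stage 02.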

Built for consistency across every shot.

Characters keep their faces. Props keep their shapes. Scenes keep their mood. ArcReel's agent graph treats continuity as a first-class artifact — not an afterthought.

Agent workflow

Orchestration Skill + focused Subagents

A state-aware orchestrator detects your project phase and dispatches subagents for each task. Large context (novel text, references) stays inside subagents — only distilled summaries reach the main thread.
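The dispatch pattern described above can be sketched in a few lines. The phase names, state fields, and `run_subagent` helper are all assumptions for illustration, not ArcReel's actual API:

```python
from enum import Enum, auto

class Phase(Enum):
    ANALYZE = auto()   # build asset library
    SPLIT = auto()     # episode breakdown
    SCRIPT = auto()    # scene/shot JSON
    RENDER = auto()    # frames and video

def detect_phase(state: dict) -> Phase:
    # State-aware dispatch: the first missing artifact decides the phase.
    if not state.get("assets"):
        return Phase.ANALYZE
    if not state.get("episodes"):
        return Phase.SPLIT
    if not state.get("script"):
        return Phase.SCRIPT
    return Phase.RENDER

def run_subagent(phase: Phase, novel_text: str) -> str:
    # In the real system this would spawn a subagent whose context window
    # holds the full novel; only a distilled summary string returns to the
    # main thread, keeping the orchestrator's context small.
    return f"{phase.name.lower()}: processed {len(novel_text)} chars"

state = {"assets": ["Lin Yan"], "episodes": None, "script": None}
summary = run_subagent(detect_phase(state), "..." * 1000)
```

The design choice worth noting is the return type: the subagent hands back a short string, never the raw novel, which is what keeps large context out of the main thread.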

main-agent · manga-workflow · analyze · split · script · render
Character DNA

Cross-shot consistency

Reference sheets are generated first; every downstream storyboard and video clip is conditioned on them. Characters, scenes, and props are tracked as persistent assets across every cut.

Shots: E1S01 · E1S02 · E1S03 · E1S04
Queue

Async task engine

RPM-rate-limited, lease-based scheduling with independent image / video channels. Resumable.
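A minimal sketch of what lease-based, RPM-limited scheduling with independent channels could look like. The class names, lease duration, and RPM numbers are illustrative assumptions, not ArcReel's implementation:

```python
from collections import deque

class Channel:
    """Per-modality RPM budget (image and video channels are independent)."""
    def __init__(self, rpm: int):
        self.rpm = rpm
        self.stamps = deque()  # timestamps of recent dispatches

    def try_acquire(self, now: float) -> bool:
        # Drop dispatch records older than one minute, then check the budget.
        while self.stamps and now - self.stamps[0] >= 60.0:
            self.stamps.popleft()
        if len(self.stamps) < self.rpm:
            self.stamps.append(now)
            return True
        return False

class TaskQueue:
    LEASE_SECONDS = 300.0  # a crashed worker's lease simply expires: resumable

    def __init__(self):
        self.tasks = []  # [task_id, channel_name, lease_expiry]

    def add(self, task_id: str, channel: str):
        self.tasks.append([task_id, channel, 0.0])

    def lease(self, channels: dict, now: float):
        for task in self.tasks:
            task_id, name, expiry = task
            if expiry > now:
                continue  # currently leased by another worker
            if channels[name].try_acquire(now):
                task[2] = now + self.LEASE_SECONDS
                return task_id
        return None

channels = {"image": Channel(rpm=2), "video": Channel(rpm=1)}
q = TaskQueue()
q.add("E1S01-frame", "image")
q.add("E1S01-clip", "video")
first = q.lease(channels, now=0.0)
second = q.lease(channels, now=0.0)
third = q.lease(channels, now=0.0)  # everything leased: nothing to hand out
```

Leasing instead of dequeueing is what makes the engine resumable: a task is never removed on dispatch, only re-offered once its lease lapses.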

Versioning

Every regen is history

One-click rollback. Compare variants side by side. Nothing is ever lost.

Export

Jianying-ready drafts

Ship per-episode ZIPs into Jianying 5.x / 6+ for human finishing. FFmpeg pipeline for automated cuts.
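For the automated path, an FFmpeg concat pass over per-shot clips might look like this sketch. The file names and helper are hypothetical; only the `ffmpeg -f concat` invocation itself is standard FFmpeg usage:

```python
from pathlib import Path

def build_concat_cmd(shot_clips: list, list_path: str = "shots.txt") -> list:
    # FFmpeg's concat demuxer reads a text file of `file '<path>'` lines.
    Path(list_path).write_text(
        "".join(f"file '{clip}'\n" for clip in shot_clips)
    )
    # -c copy stitches without re-encoding, assuming uniform codec settings.
    return [
        "ffmpeg", "-f", "concat", "-safe", "0",
        "-i", list_path, "-c", "copy", "episode01.mp4",
    ]

cmd = build_concat_cmd(["E1S01.mp4", "E1S02.mp4"])
# run with subprocess.run(cmd) once the shot clips exist
```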

Bring your own model stack.

A unified backend protocol across image / video / text. Four preset providers out of the box — or plug in any OpenAI-compatible or Google-compatible endpoint, including self-hosted Ollama and vLLM.
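Because every OpenAI-compatible backend shares the same HTTP shape, pointing ArcReel (or anything else) at a self-hosted endpoint reduces to one request template. This sketch uses only the standard library; the base URL and model name are placeholders, and the helper is not ArcReel's actual client:

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str, api_key: str = ""):
    # Standard OpenAI-style chat payload, served identically by OpenAI,
    # Ollama, vLLM, and other compatible endpoints.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Example: a local Ollama instance on its default port.
req = chat_request("http://localhost:11434", "llama3", "Summarize chapter 1.")
# send with urllib.request.urlopen(req) against a live endpoint
```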

| Modality | Gemini                 | 火山方舟                   | Grok (xAI)           | OpenAI                |
|----------|------------------------|---------------------------|----------------------|-----------------------|
| Image    | Nano Banana 2, Pro     | Seedream 5.0 / Lite / 4.5 | Grok Image           | GPT Image 2 / Mini    |
| Video    | Veo 3.1 · Fast · Lite  | Seedance 2.0 / 1.5 Pro    | Grok Imagine Video   | Sora 2 · Sora 2 Pro   |
| Text     | Gemini 3.1 Pro / Flash | Doubao Seed series        | Grok 4.20 / 4.1 Fast | GPT-5.5 / Mini / Nano |
4 preset providers · 12+ models supported · custom endpoints · 2 content modes (narration / drama)

Apply for the commercial beta.

The open-source build is free forever. The commercial tier adds hosted infrastructure, pooled provider credits, priority queues, SSO, white-label branding, and an SLA. Limited spots in this cohort.

We reply within 2 business days.

Join the community.

Swap tips, show reels, troubleshoot, and shape the roadmap with other creators and devs building on ArcReel.

Get early access to the reel.
We post release notes, prompt recipes, and sneak peeks of unreleased agents in the group. Join before the next cohort closes.
  • Weekly model & feature digests
  • Direct access to core maintainers
  • Showcase channel for community reels
  • First look at commercial features before GA
Feishu · Chinese-first
ArcReel Feishu community QR code
Scan with Feishu / Lark

Frequently asked.

Is it really free?
The open-source build is AGPL-3.0 and free forever — self-host with Docker Compose on Linux, macOS or WSL2. You only pay for upstream AI provider usage (Gemini, 火山方舟, Grok, OpenAI, or your own). The commercial tier is an optional managed offering for teams that want hosted infra, pooled credits, and SLAs.

How does it keep characters consistent?
Before any storyboard is rendered, a subagent scans the novel and builds a library of characters, scenes, and props. Reference sheets are generated first, and every downstream image/video generation is conditioned on them. Continuity is designed in, not a prompt hack.

Which models does it support?
Four preset providers ship out of the box: Gemini (Nano Banana / Veo 3.1), 火山方舟 (Seedream / Seedance), Grok, and OpenAI (GPT Image / Sora 2). You can also add any OpenAI-compatible or Google-compatible endpoint — including self-hosted Ollama and vLLM — and ArcReel will auto-discover models.

Can I finish the edit in Jianying?
Yes. Per-episode drafts export as a ZIP compatible with Jianying desktop 5.x and 6+. Import, fine-tune the cut, add music, done.

What does it run on?
A POSIX-like environment: Linux, macOS, or Windows with WSL2 / Docker Desktop. A few low-level dependencies are POSIX-only, so native Windows isn't supported yet.

What does the commercial tier add?
Same core pipeline, but hosted — no setup, pooled provider credits at better rates, priority GPU queues, team workspaces with SSO, white-label branding for agency use, and a support SLA.