Agent 01 — Screenwriter

Stop guessing at AI video prompts.

Most people type a sentence into Veo or Runway and pray. The Screenwriter agent uses a four-layer framework called V.M.C.A. to give generators exactly the information they need — so you get the shot you actually imagined.

AI video generators don't think like directors.

You type "a woman walks through a rainy street at night" and you get... something. Maybe it's okay. Maybe the camera is static when you wanted a dolly. Maybe the mood is bright when you wanted noir. Maybe the motion is all wrong.

The issue isn't the generator. It's the prompt. These models are trained on visual data, but they need structured visual language to produce structured results. That's what V.M.C.A. does.

V.M.C.A. — four layers, one prompt.

Every shot has four dimensions. The Screenwriter agent breaks your concept into all four, so nothing gets lost in translation.

V: Visual

Subject, environment, lighting quality, color palette, textures, wardrobe. Everything the camera sees in a still frame.

M: Motion

How things move within the frame. Walk speed, hair in wind, smoke drift, traffic flow. AI generators need explicit motion cues or they'll guess.

C: Camera

Lens choice, angle, movement type. A slow push-in on an 85mm tells a completely different story than a wide static shot on a 24mm. This layer controls that.

A: Atmosphere

The intangible layer — mood, tone, genre references, music energy. This is what separates a "cool shot" from a shot that makes people feel something.
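For readers who want to script the framework themselves, the four layers above can be sketched as a simple container that renders one generator-ready prompt. This is a minimal illustration, not the agent's actual implementation — the `VMCAPrompt` class and `render` method are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class VMCAPrompt:
    """Hypothetical container for the four V.M.C.A. layers."""
    visual: str      # V: what the camera sees in a still frame
    motion: str      # M: how things move within the frame
    camera: str      # C: lens, angle, camera movement
    atmosphere: str  # A: mood, tone, genre references

    def render(self) -> str:
        # Join the four layers into a single prompt string
        # in V -> M -> C -> A order.
        return " ".join([self.visual, self.motion, self.camera, self.atmosphere])

shot = VMCAPrompt(
    visual="A young woman in a long dark coat, wet pavement reflecting neon signage.",
    motion="She walks at a deliberate pace; steam rises from a grate to her left.",
    camera="Medium shot, 50mm, slow lateral tracking at hip height, shallow depth of field.",
    atmosphere="Melancholic neo-noir; the city feels alive around her.",
)
print(shot.render())
```

Keeping the layers as separate fields makes it easy to swap one dimension (say, the camera move) while leaving the rest of the shot untouched.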

From vague concept to production-ready prompt.

Your input

"A woman walks through Tokyo at night. Cyberpunk mood. Make it cinematic."

V.M.C.A. output

Visual

A young woman in a long dark coat, wet pavement reflecting neon signage in pink and cyan. Narrow alley in Shinjuku, vending machines casting warm pools of light. Rain-slicked surfaces, kanji signage slightly out of focus in the background. Low-key lighting, deep shadows with neon rim light on her silhouette.

Motion

She walks at a deliberate pace, heels clicking. Steam rises from a grate to her left. Rain falls at a steady medium rate. Background pedestrians move faster, slightly blurred. A bicycle passes behind her, splashing a puddle.

Camera

Medium shot, 50mm equivalent. Slow tracking shot following her from the side at hip height. Subtle handheld drift. Shallow depth of field — she's sharp, background melts into bokeh circles of neon. 24fps, 1/48 shutter for natural motion blur.

Atmosphere

Blade Runner meets Lost in Translation. Melancholic but not sad — there's a quiet confidence. Synth pad underneath, lo-fi texture. The city feels alive around her but she's in her own world. Noir-adjacent, modern.

That's the difference between "generate something" and "direct something." The agent handles all four layers automatically — you just describe what you see in your head.

Works with every major AI video platform.

The Screenwriter agent outputs prompts optimized for Veo, Kling, Runway, Pika, and Sora. Same framework, adapted to each platform's strengths. You tell it which platform you're targeting, and it adjusts the prompt structure accordingly.

It also works with ChatGPT, Claude, and Gemini — paste the agent prompt into any of them and start directing.

The Screenwriter is one of 9 agents in the toolkit.

Budget breakdowns. Call sheets. Location scouts. Script analysis. All from a single download.

Get the Full Toolkit — $49. Instant download. No subscription. Three languages included.