Exit Strategy Productions is a Wyoming-based storytelling company and technology incubator focused on feature films in the $1M–$10M budget range with global digital distribution. We operate under a simple philosophy: Stories by Humans, Made with a Digital Edge.
We champion original narratives grounded in human experience and creative vision, leveraging advanced technology to deliver high-production-value projects to a global audience. Simultaneously, we develop filmmaking software and AI tools—solving problems directly in the field to create products that truly meet the needs of modern filmmakers.
Our dual focus: Storytelling and Tools for Storytelling.
We do not seek to replace the magic of human collaboration in filmmaking with generative AI. Instead, we use technology to unlock potential for a new generation of storytellers. Our focus remains on non-generative support, unless we are utilizing an LLM built on our own legally procured data. Human creativity is non-negotiable: technology assists the creative process; it never replaces it.
We are moving away from the traditional, top-heavy indie model. We run leaner crews who are compensated better, empowering them to do their best work, while producers take an active, hands-on role in production. We integrate AI-driven support tools into operational workflows to maximize efficiency without encroaching on the creative process or the human touch. Film is a collaboration. Fundamentally, we rely on the expertise and artistic expression of every team member to create a final product that everyone is proud to stand behind.
By strategically deploying—and often developing—cutting-edge technology throughout the pipeline, we dismantle the traditional barriers that keep ambitious, original narratives trapped in development hell. We break the equation that links low budgets to low quality, allowing more people to tell better stories.
Everything we do follows a single operating model: Build → Use → Sell. We build tools for our own productions, use them on our own films to prove they work, then sell them to other creators. Revenue from tool sales funds the next production, and the films themselves validate and market the tools. Every plugin we build eventually folds into our platform. Every project we produce stress-tests the pipeline.
ExitRig is our first shipped product: a professional character-rigging addon for Blender that places 56 visual markers on any humanoid mesh in 10–15 minutes, with 88 blendshapes for Wonder Studio and Flow Studio compatibility. Shipping now on Blender Market and Gumroad at $19.99. This is proof the model works.
Animagic is the Ableton of animation: a timeline-based storytelling tool that orchestrates the entire production pipeline—from script breakdown to final render—in a single Blender-based interface. ExitRig, text-to-motion, world building, LoRA styling, and compositing all live under one roof. Currently in active development.
We train custom LoRA models on our artist Loren Erdrich's paintings, then apply those learned styles to AI-generated 3D environments via ComfyUI and WorldLabs. Each project gets its own visual DNA—a style that no competitor can replicate without the source artist. Tested and operational.
Text-to-motion runs on a Mac Mini server serving MDM inference via FastAPI. Type what you want a character to do, and get animation back. Walk, wave, jump, dance, fight—generated as SMPL-24 motion data, converted to Blender armatures, and applied to ExitRig characters. Operational on local LAN, scalable to cloud.
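A minimal sketch of the data shape involved: SMPL-24 motion comes back as per-frame rotations for 24 joints, which then become Blender keyframes. The function name, field names, and payload layout here are illustrative assumptions, not the actual server API.

```python
# Hypothetical sketch of converting SMPL-24 motion output into a
# Blender-friendly keyframe structure. Field names are assumptions.

SMPL_24_JOINTS = 24  # the SMPL body model uses 24 joints

def motion_to_keyframes(prompt: str, frames: list, fps: int = 30) -> dict:
    """Wrap raw motion data (n_frames x 24 joints x 3 axis-angle
    components) as an ordered list of keyframes."""
    assert all(len(f) == SMPL_24_JOINTS for f in frames)
    return {
        "prompt": prompt,
        "fps": fps,
        "keyframes": [
            {"frame": i, "rotations": joints}
            for i, joints in enumerate(frames)
        ],
    }

# Two dummy frames of zeroed rotations stand in for real model output.
dummy = [[[0.0, 0.0, 0.0]] * SMPL_24_JOINTS for _ in range(2)]
clip = motion_to_keyframes("walk forward", dummy)
```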
Animagic connects five production stages into a single workflow. Each stage is operational today, built on Blender and open-source tools, with Claude Max (Anthropic) as our development partner.
Screenplay → breakdown → shot list & asset manifest
3D mesh → ExitRig rigging → Wonder Studio validation
Real geography + WorldLabs AI + LoRA styling
Text-to-motion via MDM server + keyframe editing
Scene assembly, effects, rendering & style transfer
World building draws from three sources: Google Earth API for real geography and terrain, WorldLabs for AI-generated 3D panoramas with collision mesh, and Blosm combined with BagaPie for OpenStreetMap geometry and vegetation scattering. LoRA styling is applied through ComfyUI workflows using IP-Adapter nodes, producing environments that look painted rather than rendered.
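The three sources above can be pictured as one build manifest feeding the styling stage. This is a hedged illustration only; the keys, values, and helper function are placeholders, not a real tool configuration.

```python
# Illustrative world-build manifest combining the three sources named
# in the text. Structure and key names are assumptions.

world_build = {
    "geography": {"source": "Google Earth API", "use": "terrain"},
    "panorama":  {"source": "WorldLabs", "collision_mesh": True},
    "geometry":  {"source": "Blosm + BagaPie",
                  "use": "OpenStreetMap buildings + vegetation scatter"},
    "styling":   {"workflow": "ComfyUI", "node": "IP-Adapter",
                  "lora": "lorenstyle_gretas.safetensors"},
}

def sources(build: dict) -> list:
    """List the upstream data sources feeding the environment build."""
    return [stage["source"] for stage in build.values() if "source" in stage]
```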
The vision comes in two phases. Phase 1 (now): orchestrate the best available tools into one interface. Phase 2 (future): hardware plus software—the way Ableton built Push after proving the software.
Low-Rank Adaptation (LoRA) is a training technique that teaches an AI model to recognize and reproduce a specific visual style. Each LoRA is approximately 150MB of learned style weights—small, portable, and artist-specific. It learns visual properties like color palette, texture behavior, and edge quality, not content. The result: a style that belongs to the artist and can be applied at production scale.
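The "small and portable" claim follows from the math of low-rank adaptation: instead of storing a full weight update, a LoRA stores two thin matrices B and A whose product approximates it (W' = W + BA). A back-of-the-envelope sketch, using illustrative layer sizes rather than actual Flux or SDXL dimensions:

```python
# Why LoRA files are small: parameter count of a full weight update
# vs. a rank-r low-rank update. Layer sizes here are illustrative.

def lora_params(d_in: int, d_out: int, rank: int):
    """Parameters in a full update (d_out x d_in) vs. a LoRA update,
    which learns B (d_out x r) and A (r x d_in) with W' = W + B @ A."""
    full = d_out * d_in
    lora = d_out * rank + rank * d_in
    return full, lora

full, lora = lora_params(4096, 4096, 16)
# at rank 16, this layer's LoRA stores 128x fewer weights than a
# full fine-tune of the same layer
```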
Our LoRA pipeline is currently built on the paintings of Loren Erdrich (okloren.com), who creates fluid works using organic dyes and water. Her art has properties that digital filters cannot replicate—translucent color migration, soft diffused edges, luminous depth, unpredictable pigment behavior. These qualities require manual curation and hand-written captions, not automated processing.
The training process follows four steps: curate a dataset of 50 images at a 40/40/20 crop ratio (full paintings, detail textures, edge behaviors), write individual captions describing style properties, train via kohya_ss or ai-toolkit on Flux or SDXL foundation models for approximately 60 minutes on a 24GB GPU, then test and validate outputs at multiple epoch checkpoints.
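The 40/40/20 crop ratio above can be sketched as a simple dataset plan. The category names follow the text; the helper itself is an illustrative assumption, not part of the kohya_ss or ai-toolkit tooling.

```python
# Sketch of the 40/40/20 dataset split described in the text.

def crop_plan(total: int = 50) -> dict:
    """Split a curated dataset into full paintings, detail textures,
    and edge-behavior crops at a 40/40/20 ratio."""
    full = round(total * 0.40)
    detail = round(total * 0.40)
    edges = total - full - detail  # remainder keeps the sum exact
    return {"full_paintings": full,
            "detail_textures": detail,
            "edge_behaviors": edges}

plan = crop_plan()
```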
Each project gets its own trained model with a unique trigger word. lorenstyle_gretas captures cool blues, grays, muted greens, and ice—the palette of Nordic winter. lorenstyle_terry targets warm contrast, neon bleeds, dark grounds, and grit—the heat of Philadelphia and New York. The pipeline flows from .safetensors through ComfyUI with IP-Adapter nodes, into WorldLabs 3D environment generation, and out to Blender for character placement.
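In practice, a trigger word is just a token prefixed to the generation prompt so the model applies the learned style. The trigger words below come from the text; the prompt template and lookup are illustrative assumptions.

```python
# Illustrative: building a styled prompt from a per-project trigger
# word. The prompt format is an assumption, not the actual workflow.

STYLES = {
    "lorenstyle_gretas": "cool blues, grays, muted greens, ice",
    "lorenstyle_terry": "warm contrast, neon bleeds, dark grounds, grit",
}

def styled_prompt(trigger: str, scene: str) -> str:
    """Prefix a scene description with its LoRA trigger word."""
    if trigger not in STYLES:
        raise KeyError(f"unknown style model: {trigger}")
    return f"{trigger}, {scene}"
```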
IP ownership is absolute. Your art. Your model. Your output. Always. Training data is scrubbed from compute instances after each training run. We only train on art we own or have explicit permission to use. The resulting .safetensors file is a derivative of the client’s art and belongs to them.
Our current focus is The Gretas—a Scandinavian crime drama set in Upplands Väsby, Stockholm County. It’s the first full Animagic production and the proof-of-concept for the entire pipeline. Every tool in the stack gets exercised on this project: ExitRig rigs the characters, text-to-motion generates walk cycles and fight sequences, WorldLabs and Blosm build the streets and apartments, and our LoRA "lorenstyle_gretas" paints it all based on Loren Erdrich's real-world dye and fabric work.
Simultaneously, we’re developing the Terry content pipeline—an AI-powered social media presence built on a fully developed character. Terry isn’t invented for social media; he has 20 chapters of backstory, a complete character arc, and an authentic voice. The influencer revenue funds production while building an audience that de-risks distribution. Terry will become a feature once the world has been exposed to him through this platform.
ExitRig continues to expand across marketplaces while feeding directly into Animagic development. We don’t build the platform in a vacuum—we build individual tools, sell them standalone, use the revenue to build more, and fold each one into the platform. The tools prove the technology. The films prove the tools. The audience proves the films.
“Stories by humans, with a digital edge” is our methodology. We ensure that innovative voices—whether quiet or loud—have the resources to compete in a marketplace hungry for authentic, bold, and fresh storytelling.