Exit Strategy Productions builds in-house AI-powered tools using vibe coding to solve real production problems. Every tool starts as a challenge we've faced in our own work—when we hit a wall, we build the solution. This problem-first approach means our tools are battle-tested in actual productions before they ever reach another creator's hands.

Our platforms prioritize ethical AI development, creator ownership, and lowering barriers to high-quality production. Our tools are open source at their core—the community can inspect, modify, and contribute to everything we build. For professionals working within established editing suites, we offer polished plugin versions for a fee, integrating our AI tools directly into existing workflows with dedicated support. Revenue from plugin sales funds continued development of the open source ecosystem—a sustainable model in which transparency and professional-grade tools reinforce each other.

Animation Suite

A unified platform that takes original artwork and stories through a complete animation pipeline using AI-powered tools—from script breakdown to character animation controlled by natural language to living, breathing environments.

The Vision

We propose building a single platform that lets artists retain full ownership of their IP while democratizing animation production for the 500,000+ creators worldwide who can't afford today's complex technical pipeline, or who lack the full range of skills needed to animate without a studio team.

The global animation software market, valued at $2.8 billion in 2023 and projected to reach $5.4 billion by 2030, presents a significant opportunity for disruption. Current professional pipelines cost $715/month minimum per artist—we're building to change that.

Platform Features

1. Script Intelligence & Scene Breakdown

Automatic shot generation, emotional beat mapping, and AI-powered pacing suggestions based on genre conventions. No animation-specific intelligence exists today—we're building it. Reduces script breakdown from 1-2 weeks to hours.

2. Character Creation & IP Ownership

Transform hand-drawn artwork into animated characters while preserving artistic style. Blockchain-verified ownership from sketch to screen. Enable comic artists (500,000+ professionals globally) and fine artists to enter animation—imagine watching a story animated in the style of your favorite painter.

3. Advanced Rigging and Animation

Natural language control of rigged 3D characters—prompt "Make him walk like he's carrying the weight of the world" and get editable keyframe data. No tool allows this today. Export to Maya, Blender, or game engines. Studios currently spend $5,000-15,000 per character for custom rigging.

4. Living Environment System

Transform LiDAR scans into dynamic worlds with wind systems, ambient crowds, traffic patterns, and weather dynamics. Every LiDAR scan today is frozen and lifeless—we make them breathe. Accepts scans from iPhone to professional Leica scanners, with AI enhancement to standardize quality.

5. Temporal AI Transformation

Feed in a modern scan of a New York street and our AI renders it as 1970s Times Square, Victorian-era London, or 2050 cyberpunk dystopia. Period transformation analyzes architectural styles, signage conventions, vehicle types, and environmental details from historical references.

6. Natural Language Cinematography

Director-style shot composition in 3D animated environments using prompts like "Hitchcock-style vertigo zoom." True 3D cinematography that integrates with the entire animation pipeline. Directors give performance and staging notes to animated characters in plain language, just as they would on set.

DeInk

DeInk is an AI-powered tattoo removal plugin for film and television production. One-time LiDAR actor scanning creates persistent profiles—then automatically remove tattoos from any footage, any angle, any lighting. Cuts per-shot VFX work from dozens of hours to minutes.

The Challenge

In contemporary film and television production, tattoo removal represents a significant post-production challenge. Actors frequently have visible tattoos that conflict with character requirements, period settings, or continuity needs. Traditional approaches rely on either practical makeup (time-consuming and inconsistent) or frame-by-frame VFX work (expensive and labor-intensive).

A single scene requiring tattoo removal can consume dozens of VFX artist hours, with costs ranging from $500 to $2,000 per shot depending on complexity. DeInk introduces a paradigm shift: tattoo detection happens once during a controlled scanning session, while tattoo removal happens automatically during post-production.

System Architecture

Scan Phase: Actor Profile Creation

LiDAR capture uses an iPhone 12 Pro or newer: the operator walks around the actor, held in a T-pose, over 30-60 seconds. Point cloud reconstruction fuses the captured frames using ICP alignment, and Poisson surface reconstruction then generates a watertight triangle mesh with UV coordinates.
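In practice the fusion step runs on a library like Open3D, but the core of each ICP iteration is a small closed-form solve. The pure-NumPy sketch below shows only that solve—the SVD-based (Kabsch) estimate of the best rigid transform—and assumes point correspondences are already known; full ICP recomputes correspondences by nearest-neighbor search each iteration.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation R and translation t mapping source onto
    target -- the solve performed inside each ICP iteration. Both arrays
    are (N, 3), with row i of source assumed to correspond to row i of
    target; full ICP finds these pairs by nearest-neighbor search."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # proper rotation, det = +1
    t = tgt_c - R @ src_c
    return R, t

# Demo: recover a known pose relating two frames of the same point cloud
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
R_est, t_est = rigid_align(pts, pts @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

The reflection guard matters: without forcing det(R) = +1, noisy or degenerate scans can produce a mirror-image "alignment" that silently flips the actor mesh.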

Tattoo Detection & Atlas Generation

Tattoo segmentation operates on UV-space texture maps using SAM (Segment Anything Model). Supports interactive mode (operator clicks on tattoos) and automatic mode (color anomaly detection). Results are stored as a grayscale atlas, enabling projection onto any pose.
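The automatic mode's color-anomaly idea can be illustrated with a toy sketch. The function name, thresholds, and `k` parameter below are hypothetical, and the production pipeline uses SAM rather than this heuristic; the sketch only shows the concept of flagging pixels that deviate from the dominant skin tone of the UV texture and storing the result as a grayscale atlas.

```python
import numpy as np

def tattoo_atlas(uv_texture, k=6.0):
    """Color-anomaly sketch: flag pixels of a UV-space texture (H, W, 3,
    floats in [0, 1]) whose color deviates strongly from the dominant
    skin tone, returning a grayscale atlas in [0, 1] where 1 = tattoo.
    k is a hypothetical sensitivity knob, not a tuned value."""
    skin = np.median(uv_texture.reshape(-1, 3), axis=0)  # robust skin tone
    dist = np.linalg.norm(uv_texture - skin, axis=-1)
    med = np.median(dist)
    mad = np.median(np.abs(dist - med)) + 1e-8           # robust scale
    thresh = med + k * mad
    return np.clip((dist - thresh) / (k * mad), 0.0, 1.0)

# Demo: a skin-toned synthetic texture with one dark inked patch
rng = np.random.default_rng(1)
tex = np.full((64, 64, 3), [0.80, 0.60, 0.50])
tex += rng.normal(scale=0.005, size=tex.shape)
tex[20:30, 20:30] = [0.10, 0.10, 0.15]          # "tattoo" region
atlas = tattoo_atlas(tex)
print(atlas[25, 25] > 0.9, atlas[:10, :10].mean() < 0.05)  # True True
```

Median and median absolute deviation are used instead of mean and standard deviation so the tattoo pixels themselves don't skew the skin-tone estimate.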

Process Phase: Automated Removal

Pose estimation detects body pose in each frame using MediaPipe or HMR2. Mesh fitting aligns stored actor mesh to detected pose. Atlas projection maps UV-space tattoo mask onto image coordinates. Temporal smoothing prevents mask flickering.
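The temporal smoothing step can be as simple as an exponential moving average over per-frame masks. The sketch below uses a hypothetical smoothing weight `alpha` (not a tuned production value) to show how a single-frame detection dropout is damped rather than appearing as a visible flicker.

```python
import numpy as np

def smooth_masks(masks, alpha=0.6):
    """Exponential moving average over per-frame tattoo masks.
    masks: (T, H, W) floats in [0, 1]. A one-frame detection dropout
    is damped instead of causing the mask to pop in and out."""
    out = np.empty_like(masks)
    state = masks[0]
    for i, m in enumerate(masks):
        state = alpha * m + (1.0 - alpha) * state
        out[i] = state
    return out

# Demo: the detector misses the tattoo entirely on frame 2
masks = np.ones((5, 4, 4))
masks[2] = 0.0                       # detector dropout on one frame
smoothed = smooth_masks(masks)
print(smoothed[2].mean())            # ~0.4 rather than 0.0
```

An EMA trades a small amount of lag for stability; shots with fast motion would want a higher `alpha` so the smoothed mask keeps up with the fitted mesh.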

Video Inpainting

Generated masks are processed through ProPainter for temporally consistent video inpainting. Unlike single-image methods, ProPainter draws on neighboring frames to produce seamless results. Maintains broadcast-quality output across any footage.
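One practical detail: before handing masks to a video-inpainting model, they are typically dilated a few pixels so the filled region fully covers soft tattoo edges and compression halos. The sketch below is a generic pure-NumPy version of that preprocessing step, not part of ProPainter's actual interface.

```python
import numpy as np

def dilate_mask(mask, r=2):
    """Grow a binary mask by r rounds of 4-connected dilation so the
    inpainted region fully covers soft edges around the tattoo."""
    out = mask.astype(bool)
    for _ in range(r):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # neighbor above
        grown[:-1, :] |= out[1:, :]   # neighbor below
        grown[:, 1:] |= out[:, :-1]   # neighbor left
        grown[:, :-1] |= out[:, 1:]   # neighbor right
        out = grown
    return out

# Demo: a single flagged pixel grows into a diamond of radius 2
m = np.zeros((7, 7), dtype=bool)
m[3, 3] = True
d = dilate_mask(m, r=2)
print(int(d.sum()))   # 13 pixels: all cells with |dx| + |dy| <= 2
```

In production this would be done with a morphology routine from an image library; the loop above just makes the behavior explicit.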

Technology Stack

LiDAR Capture: Record3D + iPhone. 3D Processing: Open3D, PyTorch3D. Segmentation: SAM (Meta). Pose Estimation: MediaPipe, HMR2. Inpainting: ProPainter. Complete pipeline from scan to processed footage.

Production Benefits

Eliminates manual rotoscoping. Actor profiles work across entire productions. Handles any pose, lighting, or camera angle automatically. Replaces $500-2,000 of per-shot artist time with minutes of compute. Consistent results across all footage.

PlaceRing

PlaceRing transforms modern location scans into historically accurate film environments using AI-powered temporal transformation. Scan a contemporary street with LiDAR and instantly generate period-accurate versions across any era—from Victorian London to 1970s New York to distant futures—complete with authentic architecture, vehicles, signage, and atmospheric details.

The Challenge

Period filmmaking traditionally requires either costly location shoots in preserved historical areas, expensive set construction, or extensive digital matte painting and VFX work. Even when suitable locations exist, they often require significant modification to remove modern elements like power lines, contemporary signage, and anachronistic architecture. This limits creative possibilities and inflates production budgets, particularly for independent filmmakers.

PlaceRing introduces a paradigm shift: capture any modern location with accessible LiDAR technology (iPhone 12 Pro or higher), then let AI transform it into any historical period or imagined future. The system analyzes architectural styles, transportation, urban planning, signage conventions, and environmental details from historical references to generate photorealistic, temporally accurate environments ready for cinematography.

System Capabilities

Temporal AI Transformation

Feed in a modern scan of any location and PlaceRing renders it as 1920s Chicago, Medieval Paris, 1980s Tokyo, or 2075 cyberpunk megacity. The AI analyzes period-specific architectural elements, street furniture, vehicle types, signage styles, and environmental characteristics to ensure historical authenticity.

Accessible LiDAR Capture

Works with consumer devices (iPhone 12 Pro+) through professional Leica scanners. AI enhancement standardizes quality across capture methods, democratizing location scanning for independent productions. Walk around a location for 60 seconds and have a production-ready 3D environment.

Living Environment System

Transform static scans into dynamic worlds with period-appropriate ambient life: pedestrian traffic patterns matching historical density, era-specific vehicles, weather dynamics, and atmospheric effects. Every scan becomes a breathing, cinematic environment rather than a frozen digital model.

Natural Language Cinematography

Direct camera movement and shot composition using plain language: "Hitchcock-style vertigo zoom on the protagonist" or "tracking shot following the carriage through cobblestone streets." True 3D cinematography that integrates seamlessly with the transformed environment.

Historical Accuracy Database

Built on extensive reference libraries of period-specific architectural details, vehicle designs, signage typography, street layouts, and cultural elements. Ensures transformations maintain historical authenticity while allowing artistic license where needed for storytelling.

Production Benefits

Eliminates costly period location scouting and rental. No need for extensive set construction or location modification. Enables period storytelling at independent film budgets. Generate multiple time periods from a single scan. Complete creative control over historical environments.