A breakdown of Ben Affleck's stealth AI startup InterPositive, Netflix's acquisition, and what it means for production workflows.

By Hardeep Gambhir
10 March 2026


Four years. Ben Affleck ran an AI company for four years without a single person in Hollywood or tech finding out. He registered it under a shell company called Fin Bone LLC, filed patents under his own name, and somehow kept a 16-person team of engineers and researchers completely off the radar. Netflix just acquired the whole operation.

The company is called InterPositive. It builds post production AI tools that train on a film's own dailies. Relighting, reframing, background replacement, continuity fixes, VFX work. Not text to video. Not Sora. Not Runway. This is a completely different thing and I think most people are going to miss that.
This also dropped days after Netflix walked away from the $83 billion Warner Bros. Discovery deal and right as SAG-AFTRA (the American labor union representing approximately 160,000 actors, journalists, recording artists, and media professionals worldwide) starts new contract negotiations around AI. The timing here is loaded and probably intentional.
InterPositive trains on a specific production's dailies. Your footage, your movie, your visual language. The AI doesn't know anything else. It doesn't pull from the internet, it doesn't generate from a text prompt, it doesn't hallucinate shots that never existed on your set.
Instead of using footage generated by AI, InterPositive searches your existing repository of footage and lighting options and applies what it finds to the final output. It works more like a semantic search engine over your own footage, locating specific elements from what you already captured and applying them to the main timeline. Whether this technically qualifies as an agentic workflow or a more traditional model pipeline isn't entirely clear from what's been disclosed, but the behavior described (autonomously searching production assets and making context-aware decisions about what to apply) certainly sounds like it's heading in that direction.
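To make the retrieval idea concrete, here's a minimal sketch of what semantic search over a production's own dailies could look like. Everything in it is hypothetical, the clip names, the stand-in encoder, the whole pipeline; InterPositive's actual internals haven't been disclosed.

```python
import numpy as np

# Hypothetical sketch: embed every dailies clip into a shared vector space,
# then retrieve the clips whose look best matches what the editor needs.
# Nothing here reflects InterPositive's real internals, which are undisclosed.

rng = np.random.default_rng(0)
EMBED_DIM = 512

def embed(text_or_clip: str) -> np.ndarray:
    # Placeholder for a visual encoder trained on this production's footage.
    # A real system returns learned embeddings; this returns random unit vectors.
    vec = rng.standard_normal(EMBED_DIM)
    return vec / np.linalg.norm(vec)

# Index the production's own footage -- nothing pulled from outside the project.
dailies = ["scene12_takeB", "scene12_takeC", "scene14_wide", "scene14_closeup"]
index = {clip: embed(clip) for clip in dailies}

def search(query: str, k: int = 2) -> list[str]:
    """Return the k clips most similar to the query by cosine similarity."""
    q = embed(query)
    scores = {clip: float(q @ v) for clip, v in index.items()}  # unit vectors -> cosine
    return sorted(scores, key=scores.get, reverse=True)[:k]

# e.g. hunting for footage whose lighting can patch a continuity gap in scene 12
print(search("warm tungsten key light, scene 12 interior"))
```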
Affleck and his team filmed a proprietary dataset on a controlled soundstage that looked and felt like a real production. That dataset became the training foundation. The models they built from it focus on filmmaking techniques, not performances or faces. Lens behavior, lighting physics, how edits flow together. They kept the models small on purpose. Small means controllable, and controllable is the whole point when you're talking about tools that filmmakers actually need to trust.
In practice it works like this. You shoot your movie. You feed the dailies into the system. The model learns the specific visual decisions baked into your project, how your DP lit the scene, what lenses you used, the color palette, all of it. Then in post you can use the model to fix a missing shot, correct bad lighting, replace a background, pull wires off a stunt, reframe a composition, do color and VFX work. Everything is grounded in footage that already exists.
Affleck said it plainly in the Netflix announcement video. You have to create your movie first before you can build your model. This isn't about generating something from nothing.
To be more specific about what the tool handles, here's the list of capabilities that came out of the various press statements and the Netflix video: color mixing, relighting scenes, adding visual effects, correcting lighting issues, removing wires from stunt work, reframing shots, replacing or enhancing backgrounds, addressing continuity issues, and filling in missing shots. All within the visual language and creative intent of what was already shot. Based on what's been shared publicly, this is our best understanding of the tool's capabilities, but without a working demo we can't independently verify the quality or scope of what it delivers.
Affleck started paying attention to AI in production around 2022. Same year he launched Artists Equity with Matt Damon, their production company that shares profits with every crew member from the A-listers down to the PAs. He had connections in the VFX world and got early looks at what was being built with generative AI video. He said it scared him at first, but he quickly realized it was mostly an illusion: impressive looking but falling apart the second you put it under real production pressure.
He described being shocked by the level of engineering, math and science expertise behind generative AI and at the same time being struck by how much the technology was lacking on the actual artistic filmmaking front. The models were built by engineers who didn't understand how a set works, how a DP makes decisions, how an editor thinks about continuity. That gap is what he decided to fill.
What actually pushed him to build InterPositive was watching tech companies try to remove the human from the process entirely. He went the opposite direction. Put together a small team, set up on a soundstage, and started building tools that use the vocabulary cinematographers and directors already know.
The first model they trained was focused on visual logic and editorial consistency. Think of it as teaching the AI the rules of filmmaking, not the content. How light behaves on set, how edits maintain continuity, how to preserve the creative intent of what was already shot. They deliberately built in constraints so the tool can only work within boundaries the filmmaker sets. Affleck called these "restraints to protect creative intent," which is an interesting design philosophy: you build the guardrails before you build the capabilities.
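Translated into code, guardrails-first might look something like the sketch below. The stage names, the operation list, and the whole API are my invention; the only thing grounded in the article is the shape of the flow: learn the project's look from its own dailies, then refuse any operation the filmmaker hasn't sanctioned.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a per-production pipeline. The real system is
# undisclosed; this only encodes the publicly described flow: dailies in,
# project-specific model out, operations constrained to filmmaker-set bounds.

@dataclass
class ProductionModel:
    project: str
    allowed_ops: set[str] = field(default_factory=set)  # the guardrails

    def apply(self, op: str, shot: str) -> str:
        if op not in self.allowed_ops:  # constraint checked before capability
            raise PermissionError(f"{op!r} is outside this project's creative bounds")
        return f"{shot} [{op} applied in {self.project}'s visual language]"

def train_on_dailies(project: str, dailies: list[str]) -> ProductionModel:
    # Placeholder for per-production training on the project's own footage.
    print(f"learning {project}'s look from {len(dailies)} clips")
    return ProductionModel(project, allowed_ops={"relight", "wire_removal", "reframe"})

model = train_on_dailies("example_feature", ["day01_cardA", "day01_cardB", "day02_cardA"])
print(model.apply("relight", "scene 7, take 3"))  # inside the guardrails
# model.apply("generate_new_scene", ...) would raise PermissionError -- by design
```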
The resulting models were deliberately small, trained on datasets focused on filmmaking techniques rather than performances. That word "deliberately" keeps coming up. This wasn't a resource constraint. It was a product decision. Small models trained on techniques give you control.
**Large models trained on everything give you chaos.**
He kept all of this completely quiet. Even while doing interviews about AI on Joe Rogan and at CNBC conferences. Never mentioned InterPositive once. Started showing it to outsiders toward the end of 2025 and Netflix picked up on it almost immediately.

This part is worth paying attention to. Affleck has been one of the more vocal Hollywood figures talking about AI over the past two years, and when you go back and read what he said, it maps almost perfectly onto what InterPositive was being designed to do. He just never told anyone he was already building it.
At CNBC's Delivering Alpha summit in 2024 he said AI "cannot write you Shakespeare" and that "nothing new is created" by large language models. He drew a clear line between AI as a "creative" (which he rejected) and AI as a "craftsman" (which he embraced). He said AI would disintermediate the more laborious, less creative and more costly aspects of filmmaking, lowering barriers to entry and allowing more voices to be heard. In other words, the expensive, time consuming parts of post production like color correction, VFX cleanup, continuity fixes, and background work are exactly what he was already building tools to automate. He was laying out the business case for InterPositive on a public stage.
On Joe Rogan he said the writing quality from LLMs like Claude and ChatGPT was terrible. But he also said if you can shoot a scene in a studio and then make it realistically look like the North Pole using AI instead of actually going to the North Pole, that saves money, saves time, and lets you focus on the performances. That's essentially InterPositive's pitch, use AI to handle the environment, the lighting, the technical polish, while the humans focus on story and performance.
He was basically doing market validation in public interviews without anyone realizing it.
Affleck brought it up to Netflix execs last fall. Months of conversations later, Netflix bought the company outright. All 16 employees move to Netflix, Affleck becomes a senior advisor.
The tech is exclusive to Netflix now. They have no plans to sell it commercially or license it to other studios. Compare that to Disney, who went the licensing route with OpenAI, trading character IP access for an investment. Or compare it to the broader vendor model that has been standard in Hollywood for decades where studios license tools from third party companies. Netflix is choosing to own this internally and keep it behind closed doors. That exclusivity means Netflix productions will have access to post production capabilities that no other studio can use.
The deal deepens an already strong relationship between Affleck and Netflix. Artists Equity, the production company he runs with Matt Damon, just signed a streaming first-look deal with Netflix. His next directorial feature, Animals, starring Affleck, Kerry Washington and Gillian Anderson, is set for Netflix later this year. And The Rip, an action movie starring both Affleck and Damon, came out on Netflix in January. The acquisition grew out of a multi-year working relationship, not a cold pitch.
Netflix has been building toward this for a while. In their Q3 2025 shareholder letter they said they were "all in" on leveraging AI and described generative AI as a "significant opportunity" across content production, recommendations, and advertising. Ted Sarandos said publicly that they're "not worried about AI replacing creativity" but are "very excited about AI creating tools to help creativity."
The company also has its own set of production guidelines around AI. Filmmakers and vendors working on Netflix productions are required to disclose any planned AI use to their Netflix contact, ensure AI outputs don't infringe on copyrighted material, use enterprise-secured AI tools whenever possible, and get consent before using AI to replace performances or union covered work.
But the biggest piece of the puzzle is Eyeline. This is Netflix's VFX and virtual production division and it's a serious operation. It came together by merging Scanline VFX with Eyeline Studios, their virtual production and research unit founded in 2019.
Scanline VFX is one of the most respected VFX houses in the industry. Netflix acquired them in 2021. They hold a Scientific and Technical Academy Award for Flowline, their proprietary fluid simulation software. Their work is on some of the biggest productions in recent years: the Upside Down environments and creature effects in Stranger Things, the elaborate VFX in Wednesday Season 2, the water and creature simulations in Avatar: The Last Airbender and Godzilla x Kong: The New Empire, the ocean sequences in The Woman in Cabin 10, and the visual effects in Andor Season 2, which won them an Emmy for Outstanding Special Visual Effects.
Eyeline Studios brought virtual production and volumetric capture to the table. Their Light Dome is a first-of-its-kind virtual production stage that replicates real-world lighting conditions with measurable precision, used on Happy Gilmore 2 and A House of Dynamite. They won the Visual Effects Society's inaugural Groundbreaking Technology Award for their volumetric capture stage, which was used to create the disembodied head of Christopher Lloyd's Professor Orloff in Wednesday Season 2. They also used generative AI combined with volumetric capture to digitally de-age actors in Happy Gilmore 2 for a flashback scene.
On the research side, Eyeline Labs has been publishing at top tier academic venues. Their paper Go-with-the-Flow (CVPR 2025, Oral) introduced motion controllable video diffusion using real-time warped noise, which allows precise control over how objects and cameras move in generated video. FlashDepth (ICCV 2025) tackles real-time depth estimation at 2K resolution, critical for compositing and virtual production pipelines. VChain (ICCV 2025 Workshop, Outstanding Paper Award) introduced chain-of-visual-thought reasoning for video generation decisions. Virtually Being (SIGGRAPH Asia 2025) enables customizable camera-controllable video diffusion from multi-view performance captures. And DEGS (SIGGRAPH Asia 2025) advanced detail-enhanced Gaussian splatting for large-scale volumetric capture. They also built DifFRelight, a diffusion based framework for relighting complex facial performances with precise control over eye reflections and self-shadowing, which was Netflix's first published work using 3D Gaussian Splatting.
These aren't theoretical; they're production tools being developed for actual Netflix shows and films. The research feeds directly into the Eyeline pipeline.
And then there's the Eternaut case, which is probably the most concrete proof of concept. On Netflix's Argentine sci-fi series El Eternauta, the production needed a VFX-heavy building collapse sequence, but the budget was already committed elsewhere and time was running out. They turned to Eyeline, which used generative AI to complete the sequence at roughly a tenth of the cost of traditional VFX methods. That single case demonstrated to Netflix that AI tools could deliver real, measurable production value.
InterPositive slots into a different layer of this stack. Eyeline handles heavy VFX, virtual production, volumetric capture and generative research. InterPositive is about production-aware post production tools that work with the creative decisions already made on a specific project. Eyeline gives you the big VFX capabilities. InterPositive gives you the day to day post production intelligence trained on your actual footage. They complement each other.
Netflix's Q4 2025 shareholder letter projected $50.7 to $51.7 billion in revenue for 2026, up 12 to 14% year over year, with a target operating margin of 31.5%. They also noted approximately $275 million in acquisition related expenses, which likely includes this deal. Ted Sarandos' framing from 2024 sums up the whole strategy: there is a better business and a bigger business in making content 10% better than making it 50% cheaper.
Netflix showed zero demos. No footage of the tools working. No before and after. No comparisons to existing post production AI tools or anything else currently on the market. Financial terms undisclosed. And when you consider Luma AI and Runway are valued in the billions, what Netflix paid here is a real question.
The model is early. Netflix said they need interested productions to help refine and scale it. How that opt in works, what integration looks like, how much creative control the tools actually give you in practice, whether there are latency issues, how it handles edge cases in footage quality, all unanswered.
There's also the question of how InterPositive stacks up against what's already shipping. AI-powered post production is not new. DaVinci Resolve has been building AI tools into its color and compositing pipeline for several versions now, and it's worth understanding what already exists.
DaVinci Resolve Studio ($295, one-time purchase) includes Relight FX, an AI-powered relighting tool available in the Color page. It uses the DaVinci Neural Engine to analyze your footage, estimate depth, build a rough 3D surface map of faces and objects in the scene, and then let you place virtual light sources that interact with that map. You can use directional lights (simulating sunlight), spotlights (focused cones of light), and point sources (omnidirectional). The results are genuinely useful for adding depth and drama to flat or poorly lit footage, though the tool has limitations with complex scenes and can't generate shadows convincingly yet.
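For the curious, the core idea behind depth-based relighting is standard graphics math, and a toy version fits in a few lines. To be clear, this is not Blackmagic's implementation (that's proprietary); it's just the textbook pattern the feature description implies: derive surface normals from an estimated depth map, then shade against a virtual light.

```python
import numpy as np

# Toy sketch of depth-based relighting, the general idea behind tools like
# Relight FX. Not Blackmagic's code -- just the standard graphics math.

H, W = 4, 6
depth = np.linspace(1.0, 2.0, H * W).reshape(H, W)  # stand-in for an AI depth estimate

# Normals from depth gradients: the surface tilts where depth changes fastest.
dzdy, dzdx = np.gradient(depth)
normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# A virtual directional light (simulated sunlight) coming from the upper left.
light_dir = np.array([-0.5, -0.5, 1.0])
light_dir /= np.linalg.norm(light_dir)

# Lambertian shading: brightness follows the angle between surface and light.
shade = np.clip(normals @ light_dir, 0.0, 1.0)

relit = 0.3 + 0.7 * shade  # mix an ambient base with the new light's contribution
print(relit.round(2))
```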
Beyond relighting, DaVinci Resolve 20 ships with a full suite of AI features: Magic Mask for AI-powered subject isolation, UltraNR for neural engine driven noise reduction, DeHaze for atmospheric correction, IntelliTrack for AI object tracking that can auto-generate audio panning to match on-screen movement, and AI-powered depth maps. Blackmagic's president Dan May has said their goal is to "make life easier for creators without replacing creators."

Third party tools push this even further. Beeble's SwitchLight integrates with DaVinci Resolve via API to provide more advanced 3D relighting using AI-generated normal maps and surface passes. It produces more realistic results than Resolve's built-in tool, particularly for dramatic lighting changes, though it comes at a much higher price (starting at $504/year for indie, $3,000/year for standard).
So what would make InterPositive different from all of this? Based on what's been described, the key differentiator is the dailies-based training. Existing tools like DaVinci Relight work with generic AI models that estimate depth and surfaces from any footage. InterPositive supposedly builds a custom model from your specific production's footage, meaning it understands the exact visual language, lighting setup, and creative intent of your project. In theory that would produce more contextually accurate results. But without seeing it in action, we can't confirm whether the output quality actually justifies a different approach. We're taking Affleck and Netflix at their word for now.
SAG-AFTRA is negotiating new contracts right now. AI protections are the big issue. The 2023 strikes happened partly because of this exact fear. The strikes lasted over 100 days before SAG-AFTRA reached a tentative agreement with the Alliance of Motion Picture and Television Producers that included the first ever contractual AI protections for film and TV performers. Anything involving AI is still radioactive in Hollywood.
Affleck knows this. His messaging has been extremely careful. Everything says empowerment, not replacement. Tools that keep the filmmaker in charge. Technology that protects creative intent. All the right words. Netflix also released a video of Affleck in conversation with Chief Content Officer Bela Bajaria and Chief Technology Officer Elizabeth Stone to reinforce this framing.
But at a CNBC conference in 2024 he also said AI will disintermediate the more laborious, less creative and more costly aspects of filmmaking. Read that again. That points directly at below the line workers. Colorists. VFX artists. Compositors. The people doing the exact work InterPositive is designed to handle.
Newsshooter (a filmmaking and camera technology publication) made a sharp observation about this. Even if InterPositive only reduces post production costs by 10%, at Netflix's scale that could translate to hundreds of millions of dollars in annual savings. And they noted the irony that Affleck was on Joe Rogan just a few weeks before the announcement saying AI won't replace artists, while simultaneously finalizing the sale of an AI company to Netflix whose tools could potentially reduce the need for exactly those artists.
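That back-of-envelope is easy to sanity check. Both inputs below are my assumptions, not disclosed figures: Netflix's annual content spend has been reported in the high teens of billions, and the share of a budget going to post varies widely by production, so treat the output as an order-of-magnitude estimate.

```python
# Order-of-magnitude check on the Newsshooter scenario. Both inputs are
# assumptions for illustration, not disclosed figures.
content_spend = 18e9   # assumed annual content budget (~$18B widely reported)
post_share = 0.20      # assumed fraction of budgets spent on post production
savings_rate = 0.10    # the 10% cost reduction in Newsshooter's scenario

annual_savings = content_spend * post_share * savings_rate
print(f"${annual_savings / 1e9:.2f}B per year")  # ~$0.36B -- hundreds of millions
```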
No Film School (one of the largest online publications for independent filmmakers) pointed out that both Netflix and Affleck are working hard to make clear that this company's mission is to create tools by and for filmmakers, but then added that talk is cheap in this fast-moving industry and that the real test will be what Netflix actually does with the technology over the coming months and years.
You can frame it however you want. The question of whether this protects those jobs or just makes cutting them easier to justify isn't going away.
Netflix isn't the only studio moving on AI but they're taking a distinctly different approach.
Disney struck a licensing deal with OpenAI, giving access to select character IP in exchange for an investment. That's a partnership model where Disney maintains control over its IP but doesn't own the underlying technology. Disney also showed it's staying vigilant by firing off a cease-and-desist letter to Google over AI efforts involving its properties.
The broader industry has mostly followed a vendor model. Studios license tools from companies like Runway, Luma, or traditional VFX houses and use them as needed on specific productions. Nobody owns the tech, everyone has access to the same tools, and competitive advantage comes from how you use them rather than exclusive access.
Netflix is doing something different by buying InterPositive outright. The technology can only be used by Netflix. Combined with Eyeline's internal VFX and research capabilities, Netflix is building a vertically integrated AI production stack that no other studio can replicate by simply licensing the same tools.
Their last acquisition before InterPositive (setting aside the Warner Bros. bid) was Ready Player Me, an avatar creation platform they bought in December. That pattern tells you where Netflix's head is at. They're assembling in-house capabilities across the AI production pipeline.
Luma Labs launched Luma Agents on the exact same day as this acquisition. It's an agentic system built on their Uni-1 model that orchestrates generation across image, video, audio and text with built-in self-evaluation loops. The agents can maintain persistent context across assets, evaluate and refine their own outputs through iterative self-critique, and coordinate with external models like Veo 3, Seedream, and ElevenLabs. CEO Amit Jain described the current state of AI creative tools as "here are 100 models, learn how to prompt them" and positioned Luma Agents as the layer that actually manages the workflow for you.
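Mechanically, a self-evaluation loop is a simple pattern: generate, score the result, feed the critique back in, repeat until good enough. The sketch below is purely illustrative; the function names and scoring are mine, and Luma hasn't published Uni-1's internals.

```python
# Minimal generate -> critique -> refine loop, the general pattern behind
# "self-evaluation" in agentic creative tools. Illustrative only; not Luma's code.

def generate(prompt: str, feedback: str = "") -> str:
    suffix = f" (revised per: {feedback})" if feedback else ""
    return f"draft of [{prompt}]{suffix}"

def critique(asset: str, iteration: int) -> tuple[float, str]:
    # Stand-in for a learned evaluator scoring the asset against the brief.
    score = 0.4 + 0.2 * iteration  # pretend quality improves with each pass
    return score, "tighten continuity with the previous shot"

def run_agent(prompt: str, threshold: float = 0.75, max_iters: int = 5) -> str:
    asset = generate(prompt)
    for i in range(max_iters):
        score, feedback = critique(asset, i)
        if score >= threshold:  # the self-evaluation gate: good enough, stop
            break
        asset = generate(prompt, feedback)  # refine using the critique
    return asset

print(run_agent("establishing shot, rainy street, dusk"))
```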
The trajectory is clear. These workflows are getting more abstracted and more autonomous, and how a piece of media actually gets made is going to become harder and harder to see from the outside. Black box territory.
We're LocalHost. Last year we ran the Mumbai AI Film Festival at the Royal Opera House, where over 1,200 teams applied, 15 were flown in from across the world, and 14 AI short films premiered on the red carpet in front of 600 people, judged by directors like Ram Madhvani and Shakun Batra, with Tanmay Bhat, Ritesh Deshmukh, and teams from Netflix India and Google in attendance. These were some of the biggest names in Bollywood. 80% of the attendees left their traditional jobs to work in AI film adjacent fields, with job offers from top studios. In February 2026 we followed that up with the India AI Film Festival at Qutub Minar (a UNESCO World Heritage site) during the India AI Impact Summit, in collaboration with the Government of India and sponsored by NVIDIA, screening films for 300+ investors, policymakers, and AI leaders from around the world.

This year we're going global: five more AI film festivals in Los Angeles, San Francisco, Paris, Tokyo (in collaboration with the Tokyo Metropolitan Government), and Mumbai. If you're making things with these tools or are interested in collaborating, shoot me a DM or email me at hardeep\[at\]localhosthq\[dot\]com.

Our team is young and lean, and we move fast. The Mumbai AI Film Festival was pulled off end-to-end in 25 days because I needed to get to Canada for my driver's license test. The India AI Film Festival in Delhi was pulled off in 17 days. Join us if you're driven and want to be a pioneer in the AI filmmaking space. We are a global team with deep connections in the USA, Japan, India and Europe. We have a writing culture in the company and recently raised a round.
- By Hardeep and Sanchay

Applications are reviewed on a rolling basis. We back young people from all backgrounds, regardless of credentials.

