Lights, Camera, Algorithm: The Day Hollywood Lost to the Machine
Written by Misha Solodovnikov on June 30, 2025
Artificial Intelligence isn’t pushing boundaries anymore. It’s smashing them. The film and music industries are bracing for impact. Studios are nervous. What once took armies of artists, writers, and directors can now be done in minutes by machines. And the machines are getting really good.
Will systems like Vertex AI Media Studio[1], Llama[2], Sora by OpenAI[3], and Runway Gen-2[4] replace American giants like Disney or the legendary French studio EuropaCorp? In the United States, every individual owns the right to their own image as a form of intellectual property. AI has complicated the very definition of what one’s image is.
In the past, it took visionary minds like Luc Besson and designer Jean-Paul Gaultier to imagine entirely new worlds. Worlds that felt both foreign and strangely familiar. Crafting the 23rd-century New York in The Fifth Element wasn’t just about spectacle. It was a colossal fusion of fashion, culture, and cinematic talent. Gaultier’s futuristic costumes, rooted in contemporary style yet radically inventive, are a testament to the immense time and genius once required to envision the future. James Cameron’s Avatar followed in this tradition, with entire ecosystems, languages, and cultures meticulously built from scratch. And in Valerian and the City of a Thousand Planets, Besson once again pushed the limits of imagination, demanding years of creative effort and hard work.
For a time, AI tools were simply convenient and affordable solutions for post-production tasks like rotoscoping[5], sound design, and editing. But these systems have hit the creative ceiling and broken through it. Today, they’re not just refining footage but generating entire video sequences, scenes, and even worlds from a single line of text (“text-to-video”). The rise of text-to-video platforms means that visionary directors like Luc Besson or James Cameron no longer need to wait for concept artists: they can type an idea and see it visualized in minutes. What we’re witnessing isn’t just a tech revolution. It’s the start of a creative collapse, unless the world acts.
There is no real regulation. No universally agreed-upon standard for how these AI systems train, what they use, or who owns the result.
Major Hollywood figures are pushing back against OpenAI and Google’s appeals to the U.S. government to allow their AI models to train on copyrighted works.
Film, television, and music figures, including Ron Howard, Cate Blanchett, and Paul McCartney, have signed on to a letter expressing alarm at the tech giants’ suggestions in recent submissions to a White House office that they should be able to access publicly available intellectual property. “America’s global AI leadership must not come at the expense of our essential creative industries,” the letter states, adding that the arts and entertainment industry provides more than 2.3 million jobs and bolsters America’s democratic values abroad. “But AI companies are asking to undermine this economic and cultural strength by weakening copyright protections for the films, television series, artworks, writing, music, and voices used to train AI models at the core of multi-billion-dollar corporate valuations.”
But rising AI giants say they need access to everything. Scripts. Songs. Faces. Voices. Why?
Big Tech says it can’t compete with China under existing U.S. copyright law and that it needs unfettered access to art, from Tom and Jerry to Scarlett Johansson’s Lucy, Iron Man, and James Bond, to train its AI models as a matter of national security.
Google and OpenAI want the U.S. government to designate the training of AI on copyrighted art, movies, and TV shows as “fair use,” arguing that without the exception they will lose the race for AI dominance to China.
But at what cost? U.S. law currently states that every person owns the right to their likeness. But what is a likeness when a neural network can copy your face, your voice, your soul? On the other side of the pond, in the United Kingdom, lawmakers are considering a new “right of personality” to protect public figures from unauthorized use of their voice or image by artificial intelligence systems. Scarlett Johansson expressed outrage after OpenAI allegedly imitated her voice without permission, calling the act “shocking.”[6] Similarly, actor Paul Skye Lehrman has initiated legal action after alleging his voice was used without consent by an AI firm.[7]
Moreover, the 2024 SAG-AFTRA strike underscored a growing fear: AI isn’t just a tool. It’s a threat. Actors worry not about symbolic replacement, but literal displacement. Deepfakes and AI-generated performers can now replicate micro-expressions and vocal nuances with uncanny precision, all without human involvement.[8]
Writers also face unprecedented risk. The Writers Guild of America (WGA) has expressed concern that AI-generated scripts could dilute originality and raise serious questions about authorship and compensation.[9] Even when AI is used as a production tool (for tasks like rotoscoping, color correction, or object removal), questions of copyrightable authorship become increasingly murky.[10]
This is not a fringe issue. Studios are investing heavily in AI to reduce production costs. Where a blockbuster might once have required a thousand people, it could soon be done by fifty. AI slashes timelines and overhead.
The U.S. Copyright Office has attempted to clarify the boundaries. In 2024, it published an updated report confirming that AI-generated content may be eligible for copyright protection, but only if a human has made a substantial creative contribution.[11] However, the guidance is still vague. While it affirms that selecting and arranging AI-generated material can qualify as authorship, the threshold of “sufficient creativity” remains undefined.
The rise of AI presents a double-edged sword: it offers extraordinary creative possibilities and operational efficiency, but it also introduces complex legal uncertainties. Industry stakeholders—artists, unions, studios, technologists, and legislators—must collectively forge a new balance between innovation and intellectual property rights.
Legal precedent is beginning to emerge. In Andersen v. Stability AI Ltd., artists alleged that AI platforms including Stability AI and Midjourney trained their systems on billions of images scraped from the internet, including copyrighted content, without permission or compensation. The court found the allegations sufficient to proceed, particularly concerning claims of direct infringement through use of compressed image copies.[12]
Globally, regulators are beginning to respond. The European Union’s AI Act, passed in 2024, takes a risk-based approach, classifying certain AI systems as high-risk and emphasizing transparency, safety, and fundamental rights, with particular implications for high-impact sectors like the creative industries.[13]
AI doesn’t have to be the villain. If developed responsibly, it could enhance digital security, assist human creators, and even strengthen intellectual property protections. But that requires firm boundaries like privacy safeguards, consent mechanisms, and fair compensation models. It won’t take hundreds of tools to upend the creative economy. Just a few truly original, never-before-seen AI actors.
Once those digital stars rise, economic logic will be impossible to resist.
Cheaper and Unstoppable.
Studios won’t just use AI.
They’ll become AI.
[1] Warren Barkley, Expanding Generative Media for Enterprise on Vertex AI, Google Cloud Blog (Apr. 9, 2025), https://cloud.google.com/blog/products/ai-machine-learning/expanding-generative-media-for-enterprise-on-vertex-ai.
[2] Llama by Meta, https://www.llama.com.
[3] Sora is here, OpenAI (Dec. 9, 2024), https://openai.com/index/sora-is-here/.
[4] Gen-2: Generate novel videos with text, images or video clips, Runway, https://runwayml.com/research/gen-2.
[5] Rotoscoping is a traditionally tedious, frame-by-frame animation technique used to isolate subjects and apply effects.
[6] Chris Gardner, Scarlett Johansson “Shocked” and “Angered” by ChatGPT Voice That Sounds “Eerily Similar” to Hers, Hollywood Reporter (May 2024), https://www.hollywoodreporter.com/business/business-news/scarlett-johansson-openai-chatgpt-voice-1236182168/.
[7] Ibid.
[8] Hollywood Pushes Back on OpenAI, Google Argument for Copyright Exception, Hollywood Reporter (Apr. 2024), https://www.hollywoodreporter.com/business/business-news/hollywood-pushes-back-openai-google-argument-copyright-1236166626/.
[9] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16244 (Mar. 16, 2023), https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence.
[10] Copyright and Artificial Intelligence: Part II, U.S. Copyright Office (Mar. 2024), https://www.copyright.gov/reports/ai/part2.pdf.
[11] Ibid.
[12] Andersen v. Stability AI Ltd., 700 F. Supp. 3d 853 (N.D. Cal. 2023); see also Kevin Madigan, Top Takeaways from Order in the Andersen v. Stability AI Copyright Case, Copyright Alliance (Aug. 2024), https://copyrightalliance.org/andersen-v-stability-ai-copyright-case/.
[13] EU AI Act: First Regulation on Artificial Intelligence, European Parliament Topics (June 2024), https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.