
The Synthesis: We Must Protect Human Creativity from Being Flattened by AI
By shirin anlen and Katerina Cizek

Image generated by ChatGPT
The Synthesis is a monthly column exploring the intersection of Artificial Intelligence and documentary practice. Co-authors Kat Cizek and shirin anlen will synthesize the latest intelligence on synthetic media and AI tools—alongside their implications for nonfiction mediamaking. Balancing ethical, labor, and creative concerns, they will engage Documentary readers with interviews, analysis, and case studies. The Synthesis is part of an ongoing collaboration between the Co-Creation Studio at MIT’s Open Doc Lab and WITNESS.
In 2025, we find ourselves at a crossroads. Artificial intelligence—holding the powerful potential to be a tool of imaginative possibility—is increasingly reshaping the creative world in ways that demand urgent scrutiny. While AI promises new forms of expression and problem-solving, it also threatens to erode the very foundations of human creativity: ethics, truth, labor, and diversity.
In a recent joint submission to a call for contributions on AI and Creativity at the United Nations Human Rights Council Advisory Committee, WITNESS, the Co-Creation Studio at MIT, and the Archival Producers Alliance (APA) outlined these pressing dangers. Drawing from years of frontline research, workshops, and advocacy with creative communities and human rights defenders around the world, we identified seven core threats AI poses to human creativity:
First, the absence of ethical standards leaves artists, journalists, and advocates navigating a media landscape saturated with synthetic content—without the safeguards needed to protect their work and dignity. Without clear rules, trust in media and storytelling will continue to collapse.
Second, AI’s power to seamlessly fabricate and alter media jeopardizes the evidentiary integrity of archives and historical records. If we cannot distinguish truth from fabrication, we risk losing not only our past but our ability to fight for justice and tell the stories of today.
Third, the extraction of human labor without consent or compensation is an old story with new technology. AI models are often trained on millions of pieces of art, music, and writing—created by real people who are neither acknowledged nor paid. Meanwhile, the resulting outputs are registered as "originals," commodifying and erasing the human effort behind them.
Fourth, the explosion of low-effort AI-generated content is flooding platforms, overwhelming human-made work, and shifting the context in which creative pieces are consumed. Meaningful, original voices risk being drowned out in a sea of derivative material.
Fifth, bias baked into AI systems—largely trained on Western, male-centric datasets—flattens global creativity. It reproduces systemic inequalities, sidelines marginalized voices, and promotes sameness over the rich complexity of human culture.
Sixth, the disappearing human hand in creative interpretation—especially in documentary filmmaking—threatens the very nature of storytelling. Instead of grappling with gaps in memory or footage, AI now "fills in" with synthetic guesses, risking a future where history is not discovered but manufactured.
Finally, the threat to consent and human dignity cannot be overstated. Synthetic media often uses people's likenesses without permission, enabling new forms of exploitation and abuse.
But this is not an argument against AI. It is a call for a different future—one in which AI serves human creativity, rather than replacing or undermining it.
Solutions are already emerging. The European Union’s AI Act has begun to mandate transparency around copyrighted materials used in training data. Filmmakers and archivists are setting field-driven ethical standards, like APA’s Best Practices for GenAI in Documentaries. Technologists are creating opt-out tools for artists, while activists advocate for the protection of satire, freedom of expression, and evidentiary records.
Yet much more is needed.
We must embed consent, attribution, and human dignity at the heart of AI development. We must build legally enforceable standards for transparency and labeling of synthetic media. We must create incentives—market-based and policy-driven—that value authentic, human-made archives and storytelling. And we must educate the public to navigate a world where synthetic and real media collide, and to make conscious decisions about when to adopt, shape, or resist AI.
Above all, we must reclaim the space for spontaneity, imperfection, and resistance—the very things that make human creativity not only valuable, but vital.
shirin anlen is an award-winning creative technologist, artist, and researcher. She is a media technologist for WITNESS, which helps people use video and technology to defend human rights. WITNESS’s “Prepare, Don’t Panic” Initiative has focused on global, equitable preparation for deepfakes.
Katerina Cizek is a Peabody- and Emmy-winning documentarian, author, producer, and researcher working with collective processes and emergent technologies. She is a research scientist and co-founder of the Co-Creation Studio at MIT Open Documentary Lab. She is lead author (with Uricchio et al.) of Collective Wisdom: Co-Creating Media for Equity and Justice, published by MIT Press in 2022.