Deepfakes have become increasingly common, flooding the internet with fake videos that are often misleading or harmful. According to ID verification platform Sumsub, deepfakes accounted for 7% of all fraud globally in 2024, marking a fourfold increase compared to the previous year. To tackle this growing problem, Meta has introduced a new AI watermarking tool called Video Seal, which embeds an invisible watermark in AI-generated videos.
Meta’s move aims to help identify and trace the origins of AI-generated content, a critical step in fighting misinformation and fraud online. The tool is open-source and built for integration into existing platforms, adding resilience against common video edits.
How Does Video Seal Work?
Video Seal applies an imperceptible watermark to every frame of an AI-generated video. This watermark not only preserves video quality but is also resistant to:
- Cropping
- Blurring
- Compression algorithms (commonly used on social platforms)
Meta says Video Seal can also embed a hidden message into a video, which can be decoded later to verify authenticity and trace the video's source.
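To make the idea of a decodable hidden message concrete, here is a minimal Python sketch using naive least-significant-bit (LSB) substitution. This is an illustration of the general concept only, not Meta's actual method: Video Seal uses a learned neural embedder designed to survive edits, and the function names here are invented for the example.

```python
import numpy as np

def embed_message(frame: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a bit string in the least-significant bits of the first pixels.
    Toy illustration only; Video Seal's embedder is a trained model."""
    out = frame.copy()
    flat = out.reshape(-1)          # view into the copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # overwrite the lowest bit
    return out

def extract_message(frame: np.ndarray, n_bits: int) -> list[int]:
    """Read the hidden bits back out of a watermarked frame."""
    flat = frame.reshape(-1)
    return [int(p & 1) for p in flat[:n_bits]]

# Like Video Seal, apply the same message to every frame of the video.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(3, 4, 4), dtype=np.uint8)  # 3 tiny frames
message = [1, 0, 1, 1, 0, 1, 0, 0]
watermarked = np.stack([embed_message(f, message) for f in video])

assert all(extract_message(f, len(message)) == message for f in watermarked)
```

Because every frame carries the full message, the watermark can still be recovered even if parts of the video are cut out.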
Speaking about the tool, Pierre Fernandez, AI Research Scientist at Meta, explained:
“We developed Video Seal to provide a more effective video watermarking solution, particularly for detecting AI-generated videos and protecting originality. While other tools exist, they often lack resilience to compression and edits or aren’t scalable.”
Why Deepfakes Are a Growing Concern
The explosion of generative AI tools has made creating fake videos easier than ever. From impersonations to scams and misinformation campaigns, deepfakes pose significant risks:
- Fraud: Account takeovers, identity theft, and financial scams are often carried out using manipulated videos.
- Misinformation: Fake content about public figures, events, or news can mislead audiences and cause real-world consequences.
- Privacy Violations: Deepfakes have been used to produce inappropriate or harmful content without consent.
A survey conducted by Ofcom highlighted that 2 in 5 respondents had come across at least one deepfake in the past six months. Meanwhile, internal data from Sumsub revealed staggering increases in deepfake-related fraud:
- 1,740% increase in North America (2022)
- 1,530% increase in Asia-Pacific (2022)
These numbers underscore how much of a threat deepfakes have become worldwide.
How Does Video Seal Compare to Other Tools?
Meta’s Video Seal is not the first watermarking technology. Companies like Google and Microsoft have already introduced tools to detect and label AI-generated content. For example:
- Google’s SynthID: Embeds imperceptible watermarks in AI-generated images, audio, text, and video
- Microsoft’s Watermarking Solutions: Focus on identifying manipulated media
However, Fernandez from Meta points out key limitations of existing tools:
- Insufficient resilience: Watermarks can be lost during compression or video edits.
- Scalability: Not all tools can handle large-scale content creation.
- Optimization for videos: Many tools are based on image watermarking methods, which are less effective for videos.
Video Seal aims to address these challenges with a more robust solution. While heavy compression or extensive edits may still degrade the watermark, the tool offers greater resistance than existing alternatives.
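The fragility that Fernandez describes is easy to demonstrate. Extending the naive LSB scheme from earlier, the sketch below simulates lossy compression with crude pixel quantization (a stand-in for a real codec; all names are invented for the example) and shows the hidden message being wiped out, which is exactly the failure mode robust watermarks like Video Seal are trained to resist.

```python
import numpy as np

def embed_lsb(frame: np.ndarray, bits: list[int]) -> np.ndarray:
    """Naive LSB watermark: hide bits in the lowest bit of the first pixels."""
    out = frame.copy().reshape(-1)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return out.reshape(frame.shape)

def extract_lsb(frame: np.ndarray, n_bits: int) -> list[int]:
    return [int(p & 1) for p in frame.reshape(-1)[:n_bits]]

def quantize(frame: np.ndarray, step: int = 8) -> np.ndarray:
    """Crude stand-in for lossy compression: snap pixels to multiples of `step`."""
    return ((frame.astype(np.int32) // step) * step).astype(np.uint8)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 1, 0, 0]

marked = embed_lsb(frame, message)
assert extract_lsb(marked, len(message)) == message   # survives lossless storage

degraded = quantize(marked)
# Every pixel is now a multiple of 8, so every low bit reads as 0:
assert extract_lsb(degraded, len(message)) != message  # watermark destroyed
```

A compression-resilient scheme instead spreads the signal across many pixels and frequencies so that quantizing individual values does not erase it.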
Adoption: The Real Challenge
The effectiveness of tools like Video Seal depends largely on adoption. Unless industry players and developers integrate the technology into their platforms, its benefits will remain limited.
To encourage adoption, Meta is:
- Launching the Meta Omni Seal Bench, a public leaderboard to compare watermarking methods.
- Hosting workshops at ICLR, a major AI research conference, to promote collaboration in watermarking technology.
Meta hopes that by making Video Seal open-source and engaging with the academic community, more developers will build watermarking into their workflows, enabling better traceability for AI-generated content.
The Broader Impact of Watermarking Tools
The release of Video Seal is part of a larger effort to address the misuse of AI-generated content. Watermarking tools can help news organizations, content creators, and fact-checkers identify manipulated videos more effectively.
However, watermarks alone cannot solve the problem of deepfakes. While tools like Video Seal add an important layer of transparency, they must be paired with:
- Education: Raising awareness about the dangers of deepfakes and how to identify them.
- Regulation: Implementing stricter laws to penalize the misuse of AI-generated content.
- Collaboration: Companies, researchers, and policymakers must work together to develop comprehensive solutions.
Conclusion
Deepfakes are a serious and growing concern, threatening to undermine trust and spread misinformation across the internet. Meta’s Video Seal provides a promising solution, offering invisible and resilient watermarks to trace the origins of AI-generated videos. By encouraging adoption through open-source tools, workshops, and public benchmarks, Meta hopes to build a stronger defense against manipulated content.
While Video Seal is not a silver bullet, it represents an important step toward ensuring authenticity in a world where fake content is increasingly difficult to detect. As AI technologies continue to advance, tools like Video Seal will be essential in protecting truth and transparency online.