OpenAI Unveils Sora 2 with Enhanced Safety Features
- Sora 2 incorporates C2PA metadata and dynamic watermarks for clear AI content identification
- New character feature grants users full control and revocable consent over their digital likeness
- Multi-layered moderation filters block harmful content, including sexual material and unauthorized voice imitation
OpenAI has introduced Sora 2, a significant evolution in its video generation ecosystem that prioritizes safety and creator control from the ground up. Unlike earlier experimental phases, this release emphasizes provenance signals, ensuring that every generated video is tagged with industry-standard C2PA metadata. This digital signature serves as a nutrition label for content, allowing platforms and users to verify whether a video was synthesized by AI or captured by a camera.
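The provenance check described above can be pictured as inspecting a video's embedded C2PA manifest. The sketch below is purely illustrative: real validation requires a C2PA SDK and cryptographic verification, and the field names used here (`c2pa_manifest`, `claim_generator`, `assertions`) are assumptions modeled loosely on the C2PA specification, not Sora's actual metadata layout.

```python
# Illustrative sketch: does a clip's metadata claim a C2PA provenance manifest?
# Field names are hypothetical stand-ins for the real C2PA manifest structure.

def has_provenance(metadata: dict) -> bool:
    """Return True if the metadata carries a plausible C2PA manifest."""
    manifest = metadata.get("c2pa_manifest")
    if not manifest:
        return False
    # A manifest should at least name its generator and carry one assertion.
    return bool(manifest.get("claim_generator")) and bool(manifest.get("assertions"))

# Example: an AI-synthesized clip vs. a camera capture with no manifest.
ai_clip = {"c2pa_manifest": {
    "claim_generator": "example-video-model",        # hypothetical value
    "assertions": [{"label": "c2pa.actions", "action": "c2pa.created"}],
}}
camera_clip = {}

print(has_provenance(ai_clip))      # True
print(has_provenance(camera_clip))  # False
```

In practice a platform would verify the manifest's signature chain rather than trust these fields at face value; the point here is only that provenance becomes a machine-checkable property of the file.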
One of the most innovative additions is the "characters" feature, which addresses the growing concern over digital identity theft and deepfakes. Users can now define specific characters and maintain absolute authority over their likeness, including the ability to revoke access at any time. This shift towards a consent-based model is further bolstered by stricter moderation for videos featuring minors and a requirement for users to attest they have permission when uploading real-life references.
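The consent model behind the characters feature amounts to a grant/revoke access check. The following is a minimal sketch of that idea; the class and method names are invented for illustration and do not reflect OpenAI's actual API.

```python
# Hypothetical sketch of consent-based likeness control: the owner of a
# character can grant other users permission to use it and revoke that
# permission at any time. Names are illustrative, not OpenAI's API.

class CharacterConsent:
    def __init__(self, owner: str):
        self.owner = owner
        self._granted: set[str] = set()  # user ids currently permitted

    def grant(self, user: str) -> None:
        self._granted.add(user)

    def revoke(self, user: str) -> None:
        self._granted.discard(user)      # revocable at any time

    def may_generate(self, user: str) -> bool:
        return user == self.owner or user in self._granted

consent = CharacterConsent(owner="alice")
consent.grant("bob")
print(consent.may_generate("bob"))   # True
consent.revoke("bob")
print(consent.may_generate("bob"))   # False
```

The key property is that permission is checked at generation time, so a revocation takes effect immediately rather than only for future grants.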
Beyond visual safety, Sora 2 implements robust audio and text guardrails to prevent the creation of harmful or infringing content. The system proactively scans speech transcripts for policy violations and blocks attempts to mimic the voices of living artists without authorization. By layering these defenses—from visible watermarks to proactive filtering—OpenAI aims to balance the creative potential of high-fidelity video generation with the necessity of a safe digital environment for all users.
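The layered-defense idea can be sketched as a chain of independent filters, each of which can reject a request. Everything below is an assumption for illustration: the filter names, the stand-in policy terms, and the request fields (`transcript`, `voice_target`, `voice_authorized`) are hypothetical, not Sora's actual moderation pipeline.

```python
# Illustrative sketch of multi-layered moderation: each filter returns a
# rejection reason or None, and a request passes only if every layer passes.
from typing import Callable, Optional

Filter = Callable[[dict], Optional[str]]

def transcript_filter(request: dict) -> Optional[str]:
    blocked = {"violence", "explicit"}  # stand-in policy terms
    words = set(request.get("transcript", "").lower().split())
    return "policy violation in transcript" if words & blocked else None

def voice_filter(request: dict) -> Optional[str]:
    # Block voice imitation unless authorization is attested.
    if request.get("voice_target") and not request.get("voice_authorized"):
        return "unauthorized voice imitation"
    return None

def moderate(request: dict, filters: list[Filter]) -> Optional[str]:
    for f in filters:  # first rejection wins; all layers must pass
        reason = f(request)
        if reason:
            return reason
    return None

filters = [transcript_filter, voice_filter]
print(moderate({"transcript": "a calm beach scene"}, filters))           # None
print(moderate({"transcript": "hi", "voice_target": "artist"}, filters)) # unauthorized voice imitation
```

Structuring moderation as independent layers means each check can evolve separately, and a request is only generated once no layer objects.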