New AMP tool unveiled to help fight deepfake media

ID Crypt Global’s Authenticated Media Protection (AMP) product aims to tackle the rise of fake media by embedding an invisible cryptographic watermark that can validate videos, photos, and other media files. The watermark links each file directly to a verifiable digital identity, making it difficult to re-publish files anonymously. Any modification or re-publication breaks the watermark, instantly signaling that the file should not be trusted. Users can verify authenticity with a freely available browser extension for Chrome or Microsoft Edge.
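ID Crypt Global has not published AMP’s internals, so the sketch below only illustrates the general tamper-evidence property the company describes, using an ordinary detached digital signature rather than an embedded watermark: a file signed with a key tied to an identity verifies only while it remains byte-for-byte unchanged. The Python `cryptography` package and the Ed25519 keypair standing in for a “verifiable digital identity” are assumptions made for the example, not AMP’s actual design.

```python
# Illustrative only: a detached Ed25519 signature, not AMP's embedded
# watermark. The keypair stands in for a publisher's verifiable identity.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

publisher_key = ed25519.Ed25519PrivateKey.generate()  # hypothetical identity key

def sign_media(media: bytes) -> bytes:
    """Bind the media bytes to the publisher's identity."""
    return publisher_key.sign(media)

def verify_media(media: bytes, signature: bytes) -> bool:
    """True only while the file is byte-identical to what was signed."""
    try:
        publisher_key.public_key().verify(signature, media)
        return True
    except InvalidSignature:
        return False

original = b"\x00\x01...video bytes..."
sig = sign_media(original)
print(verify_media(original, sig))         # True: untouched file
print(verify_media(original + b"!", sig))  # False: any edit breaks the check
```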

The proliferation of deepfake media poses a significant challenge, with celebrities and public figures increasingly falling victim to manipulated audio and video content. The tools for creating such fakes are becoming more sophisticated, with a range of apps for altering and producing this material freely available online. Lauren Wilson-Smith, CEO of ID Crypt Global, highlights the concerning rise of deepfake media and the severe consequences of sharing falsified images and videos. However, she also notes the substantial investment going into detecting fake media, which helps level the playing field.

Products like AMP, along with Intel’s FakeCatcher and Microsoft’s Video Authenticator, are crucial for identifying false material at its source, particularly during election cycles plagued by misinformation. Major democracies worldwide have faced election interference through fake audio recordings, counterfeit AI avatars, and even AI-generated versions of deceased politicians. The fraud detection sector has grown substantially, now valued at over £1.1 billion, and the anticipated spread of generative AI tools is expected to exacerbate the problem.

The use of generative AI to create synthetic media carries a range of risks, which the US National Security Agency groups into categories. Shallow or “cheap” fakes involve crude media manipulation without AI, while deepfakes proper use computing power and machine learning to create fully or partially synthetic content. Such content can include compromising videos of politicians or celebrities, highly convincing voice clones, or entirely fabricated online conference calls.

Real-world examples show how severe the consequences of synthetic media manipulation can be: in one case, a clerk was conned out of $25 million after a faked video conference call in which an AI construct posed as the company’s CFO. To mitigate such risks, the NSA recommends verification technologies such as digital watermarking, together with real-time checks such as multi-factor authentication and biometrics to confirm that the person behind a communication is genuine.
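The NSA guidance names the controls without prescribing an implementation, so the following is only a minimal sketch of what one such real-time check could look like in practice: a standard time-based one-time password (RFC 6238) provisioned out of band, which a caller on a video conference must read back before a sensitive request is approved. The shared secret, six-digit format, and 30-second window are illustrative defaults, not a prescribed protocol.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float | None = None, step: int = 30) -> str:
    """6-digit time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# Hypothetical flow: the clerk's app and the real CFO's authenticator were
# provisioned with the same secret over a trusted channel in advance. A
# deepfake caller can mimic a face and a voice, but not the current code.
shared_secret = b"provisioned-out-of-band"          # illustrative value only
expected = totp(shared_secret)
claimed = totp(shared_secret)                       # what the caller reads back
assert hmac.compare_digest(expected, claimed)       # approve only on a match
```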