The rise of artificial intelligence has transformed image creation as well. Increasingly, we find ourselves wondering whether an image is genuine or AI-generated.
Of course, there are various telltale signs that an image was created by AI. Sometimes, creators openly acknowledge the use of AI in the description. In other instances, anomalies such as missing details, an extra finger, or unnaturally smooth backgrounds give away AI's involvement.
Nonetheless, there is a category of synthetically generated images that seamlessly blend with reality. The images are rendered so well that they are virtually indistinguishable from genuine ones. To tackle this challenge, various AI detection tools have emerged. Their primary function is to determine the degree of AI’s contribution to an image’s inception.
Joining this wave of AI detection tools, Google DeepMind, the AI arm of the tech giant, has introduced SynthID, a dual-purpose technology. The tool can not only recognize AI-generated images with remarkable accuracy but also watermark them.
How is SynthID different?
What sets SynthID apart is its unwavering commitment to maintaining image quality. SynthID relies on two deep learning models, one for watermarking and another for identification, both of which have undergone comprehensive training on a diverse array of images.
Speaking of watermarking, it is a long-established technique used by photographers and online media companies to safeguard their visual content from copyright infringement. However, unlike traditional visible watermarks, SynthID's watermarks are exceptionally granular and embedded directly within the image's pixels.
To the naked eye, the watermarks remain invisible, yet SynthID can detect them effortlessly. Even post-processing measures like adding filters, altering color composition, or employing lossy compression do not diminish their effectiveness.
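To build intuition for what "embedded within the pixels" means, here is a deliberately simplified sketch of classic least-significant-bit (LSB) watermarking. To be clear, this is not SynthID's actual algorithm: SynthID uses learned deep models, and unlike this toy approach, its watermark survives filters and lossy compression. The function names and the flat pixel-list representation here are illustrative assumptions only.

```python
def embed_watermark(pixels, bits):
    """Hide a bit string in the least significant bits of the first pixels.

    NOTE: toy LSB illustration, NOT SynthID's method. `pixels` is assumed
    to be a flat list of 0-255 intensity values.
    """
    out = list(pixels)  # leave the original image untouched
    for i, b in enumerate(bits):
        # Clear the lowest bit, then set it to the watermark bit.
        # The pixel value changes by at most 1, invisible to the eye.
        out[i] = (out[i] & ~1) | b
    return out


def extract_watermark(pixels, n_bits):
    """Read the hidden bits back out of the lowest bit of each pixel."""
    return [p & 1 for p in pixels[:n_bits]]


# Example: hide the bits 1,0,1,1 in a tiny "image".
image = [120, 35, 200, 77, 14, 250]
marked = embed_watermark(image, [1, 0, 1, 1])
recovered = extract_watermark(marked, 4)  # -> [1, 0, 1, 1]
```

The key limitation of this naive scheme is exactly what SynthID is designed to overcome: re-encoding the image with lossy compression would scramble the low-order bits and destroy the mark, whereas SynthID's learned watermark is engineered to persist through such transformations.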
At present, SynthID is available exclusively in beta form to Vertex AI customers, specifically those who use Imagen, Google’s text-to-image AI generator. Vertex AI, Google’s unified AI platform, encompasses all of the company’s cloud services within a single ecosystem.
Not completely foolproof
Google DeepMind emphasizes the importance of identifying AI-generated content to combat the proliferation of misinformation. Though SynthID is not infallible against extreme image manipulations, it does empower individuals and organizations to engage responsibly with AI-generated content.
The blog post does not explicitly mention whether SynthID, as a standalone tool, can identify AI-generated images lacking its digital watermark. However, it can successfully identify AI-generated images created using Imagen if the creator has incorporated the digital watermark.
Looking ahead, Google DeepMind has ambitious plans to expand this technology’s application to detect AI-generated text, audio, and video content.
With Google Images hosting over 136 billion images and the growing threat of deepfakes and their creation tools, it's evident that Google is committed to enhancing image authenticity. Earlier this year, Google introduced the 'About this image' feature, which provides users with additional information about images, including their initial indexing date and the source website. This initiative aims to help users trace the original source of images, particularly in the fight against photorealistic AI-generated visuals.
In a world filled with deepfakes, like the images of Pope Francis in an oversized Balenciaga jacket or Elon Musk kissing a robot, both deceptively realistic but entirely fictional, Google's deployment of SynthID and similar technologies holds the promise of effectively distinguishing genuine from fabricated content.