Google tests watermark to identify AI images
To fight disinformation, Google is experimenting with digital watermarks to identify artificial intelligence (AI)-generated images.
Developed by Google’s AI arm DeepMind, SynthID will identify images generated by machines.
It works by embedding changes in individual pixels in images so that watermarks are invisible to the human eye, but detectable by computers.
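SynthID's actual method is proprietary and far more robust than anything shown here, but the general idea of a watermark that is invisible to the eye yet readable by a computer can be illustrated with a classic toy technique: hiding a bit pattern in each pixel's least significant bit. All function names below are hypothetical.

```python
import numpy as np

# Toy sketch only: SynthID does NOT work this way in detail; this just shows
# how tiny per-pixel changes can carry a machine-readable mark.

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite each pixel's least significant bit with a watermark bit."""
    flat = image.flatten()
    pattern = np.resize(bits, flat.shape)        # repeat the pattern to cover the image
    return ((flat & 0xFE) | pattern).reshape(image.shape)

def detect_watermark(image: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the image's least significant bits match the pattern."""
    flat = image.flatten()
    pattern = np.resize(bits, flat.shape)
    return bool(np.array_equal(flat & 1, pattern))

# Usage: an 8-bit greyscale image and a short watermark pattern
image = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
mark = np.array([1, 0, 1, 1], dtype=np.uint8)
stamped = embed_watermark(image, mark)
print(detect_watermark(stamped, mark))                          # True
print(int(np.max(np.abs(stamped.astype(int) - image.astype(int)))))  # 0 or 1
```

Because only the lowest bit of each pixel changes, no pixel value moves by more than 1 out of 255, which is imperceptible to a viewer, yet the detector recovers the pattern exactly. A scheme this simple is also trivially destroyed by re-encoding, which is why production systems are built to survive manipulation.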
But DeepMind said it’s “not foolproof against extreme image manipulation.”
As technology continues to evolve, it’s becoming increasingly difficult to tell the difference between real images and artificially generated images – as BBC Bitesize’s AI or Real quiz shows.
AI image generators have become mainstream, with the popular tool Midjourney boasting over 14.5 million users.
They allow people to create images in seconds by entering simple text instructions, raising copyright and ownership questions around the world.
Google has its own image generator called Imagen, and its system for creating and checking watermarks will only apply to images created using that tool.
Watermarks are usually a logo or text added to an image to indicate ownership, and to make the image harder to copy and use without permission.
Images used on the BBC News website, for example, usually include a copyright watermark in the bottom left-hand corner.
But these types of watermarks are not suitable for identifying AI-generated images because they can easily be edited out or cropped.
Tech companies use a technique called hashing to create digital “fingerprints” of known abuse videos, so they can quickly spot and remove them if they start spreading online. But these fingerprints, too, can be broken if the video is cropped or edited.
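In practice, companies use perceptual hashing systems (such as Microsoft's PhotoDNA) that tolerate small changes, but the basic matching idea can be sketched with an ordinary cryptographic hash. The database and function names here are hypothetical.

```python
import hashlib

# Illustrative sketch, assuming an exact-match hash database. Real systems use
# perceptual hashes that survive minor edits; this one deliberately does not.

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the content's digital fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of known abuse material
known_abuse_hashes = {fingerprint(b"known-bad-video-bytes")}

def is_known_abuse(data: bytes) -> bool:
    """Flag content whose fingerprint matches the database."""
    return fingerprint(data) in known_abuse_hashes

print(is_known_abuse(b"known-bad-video-bytes"))   # True
# Changing even one byte (a crop or re-encode) changes the hash completely,
# which is why edited copies can slip past exact-hash matching:
print(is_known_abuse(b"known-bad-video-byteZ"))   # False
```

The second call shows the weakness the article describes: the fingerprint identifies only exact copies, so any edit to the file defeats it unless a more tolerant perceptual hash is used.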
Google’s system creates an effectively invisible watermark, allowing people using its software to instantly determine whether an image is real or machine-generated.