This isn't a foolproof way to identify AI-generated content, because the tool only works on images made with Imagen, and whoever prompts the generation has to opt into adding the watermark. That means AI content produced in other popular image generators, such as OpenAI's DALL-E, won't be identifiable with SynthID.
Still, for identifying Imagen content, the tool seems robust. Google notes that SynthID's two deep learning models can detect the watermark even if the image has been filtered, color-adjusted, or compressed. DeepMind acknowledges that the tool isn't 100% accurate, but says its internal testing yielded favorable results. The company adds that the tool, currently in beta, may eventually be extended to identify audio, text, and video content.
SynthID is another example of Google fully embracing AI since bringing its co-founder in to work closely on these advancements. If it's as capable as the tech giant claims, the tool is an important next step in combating misinformation online. And with the White House's outline for the safe development of AI, it wouldn't be a surprise if other AI companies adopted similar tools.