Google has launched "Nano Banana Pro", a new AI model for image generation, and it is now genuinely hard to tell what is artificial intelligence and what is reality
Google's artificial intelligence research company DeepMind launched the "Nano Banana Pro" model (officially called Gemini 3 Pro Image) late last month for generating realistic AI images.
An AI application that can visualize almost anything a person can imagine is nothing new on the internet, but this model marks a significant step in the technology's development, one that further blurs the border between AI and reality.
How deepfakes took over social media
Recently, videos and photos generated with various AI tools have been appearing more and more often on social networks, and it is increasingly difficult to tell whether they are real.
Most often these are harmless memes on Instagram and TikTok, such as clips of Messi and Ronaldo working the cash register at McDonald's and arguing over who is the "GOAT" of football, or ones in which Donald Trump, on Joe Rogan's podcast, encourages young people to go to the casino.
In many other cases, however, AI-generated images and videos can be misused to spread fake news, fabricate scandals or even falsify court evidence.
The war in Ukraine showed how serious deepfakes can be: in 2022, when AI was not yet nearly as developed, videos of Volodymyr Zelensky calling on his soldiers to lay down their arms began to circulate on the internet.
Since then, AI tools have become incomparably more advanced, to the point that in 2025 it is easy to fool even the most technologically literate among us.
How to recognize AI
Leading tech companies are already moving towards regulating AI content on their platforms, but for now enforcement comes down to the goodwill of content creators. The problem with these rules is that anyone who intends to mislead with AI-generated images certainly won't disclose that they are using them.
Platforms can also display a warning on a post, like the labels X attaches to tweets containing unverified information; however, a disputed post can be seen by millions of people before the warning appears.
Tools such as OpenAI's Sora leave a "Sora" watermark in the corner of a generated video, along with "invisible" codes through which a computer can recognize fake content. However, many people miss such markings.
In parallel with generative AI tools, tools for detecting AI-created content, such as Google's SynthID, have also been developed. However, as image and video generators advance ever more rapidly, these detection tools are proving less and less reliable.
“It’s no longer enough to just notice six fingers on a hand”
Hany Farid, a University of California, Berkeley computer science professor and video forensics expert, told the BBC that AI-generated content could initially be recognized by poor image or video quality and unnatural scenes. Today, however, even that is not enough.
"Leading generators like Veo and Sora still leave minor mistakes in their images. But they're no longer six fingers on a hand or jumbled letters in text. Now they're much more subtle," he says.
Following the launch of Nano Banana Pro, whose images are considered probably the most realistic to date, AI ethics expert Katarina Doria shared several tips on her Instagram profile for recognizing AI-generated content.
She notes, among other things, that we can no longer trust our eyes alone, nor can we rely on AI content detection platforms.
"Do your research, check credible sources and look at the context; don't assume something is real just because it appears to be real," she said.
Source: Vreme


