
A resident of Chernogolovka used a neural network to create a video of the science city decked out for the New Year. In it, the buildings are strung with lights and the streets are filled with hurrying passersby. The result is festive, but not everyone appreciated the AI's creation.
Soon after it was published, the clip gained views and comments in local groups. Many users admired the author's creativity and the capabilities of neural networks, noting how lively and festive the town appeared.
“If only it were actually this beautiful!” wrote resident Anna S.
However, some viewers were skeptical. In the comments, subscribers actively discussed the "unnaturalness" and "lifelessness" of what was shown on screen. Particular attention was drawn to details such as "buses that run off their routes," "strange movements of pedestrians," and the general "unreality" of the world crafted by artificial intelligence.
"It looks like Chernogolovka, and yet not quite; everything feels somehow lifeless," users lamented.
Maria Petukhova, a specialist in digital technologies and visualization, noted that many have become accustomed to AI creations, but a considerable number of people still reject or even fear them.
“Such a reaction from viewers is quite predictable at the current stage of neural network development. Artificial intelligence excels at generating impressive visual imagery and creating a general ambiance, but it often stumbles on small, yet fundamental details that the human brain instantly registers as incorrect or untrue,” the expert explained.
The specialist said that neural networks train on vast amounts of data but currently lack "common sense" and a deep comprehension of real-world physics and social conventions. This produces the so-called "uncanny valley" effect, where an image closely resembles reality yet evokes discomfort or distrust because of slight discrepancies.
"Buses running a red light, or odd movements of pedestrians, are not artistic errors but signs that the AI lacks a full understanding of cause-and-effect relationships and real-world norms of behavior. It renders what it has seen in the data but doesn't grasp why things occur or what consequences follow," emphasized Petukhova.
Nevertheless, the expert believes such trials hold immense value.
"It is precisely this user criticism that helps neural network developers refine their algorithms, making them ever more precise and plausible. The video by the Chernogolovka resident is not just entertainment; it is valuable material for advancing the technology and for understanding how humans perceive virtual reality," concluded Maria Petukhova.
Overall, the expert advises Chernogolovka residents to experiment more with AI to help it learn to avoid obvious blunders.