Cybercriminals are increasingly employing advanced voice spoofing technology to defraud their victims and steal money. Sergey Kuzmenko, head of the Roskachestvo Digital Expertise Center, described this alarming trend in an interview with RT.
According to Kuzmenko, fraudsters are leveraging AI-powered neural networks to create highly convincing replicas of the voices of victims' acquaintances or relatives. To produce these sophisticated fakes, criminals need only a short audio fragment, which they often obtain from publicly available audio and video recordings on social media platforms and messaging apps. Some even manage to record snippets of telephone conversations to gather voice samples.
When an unsuspecting individual hears a familiar voice during a call, it can be profoundly disorienting. This manipulation causes the victim to lower their guard and, without much hesitation, transfer money to unknown accounts. Kuzmenko emphasized that the deception is often uncovered only after the fraudulent transactions have already been completed.
Criminal schemes are continuously evolving, and deepfake technology is now being integrated into video calls. "It's not just a voice; it's a live image of a close person, complete with their facial expressions and movements. This powerful sense of presence erases the last barriers of caution, compelling the brain to believe in the reality of what's happening," the expert concluded.
Earlier reports pointed to a sharp rise in the number of Russian children falling victim to fraudsters: according to data from the Ministry of Internal Affairs, the count of young victims surged by 120 percent in the first nine months of 2025.
