The Influence of AI and Deepfakes on Reality and the Future

In partnership with ExpressVPN

Recent advances in artificial intelligence are making compelling changes to everyday life. Today, AI can effortlessly handle coding, essay writing, and, most of all, content generation in a matter of seconds. These same impressive capabilities, however, are also AI's curse. According to VPN Providers, many are using AI to generate synthetic media for misinformation, and deepfakes are now spreading like wildfire around the globe.

Experts have been experimenting with AI for decades, but Microsoft's recent big push for the technology sparked the industry's interest in investing more heavily in such creations. Right after the unveiling of its ChatGPT-powered Bing search engine, Google struck back with Bard. Unlike its competitor, however, Google is still strictly limiting test access to Bard, with experts speculating that the company fears what AI can do in the wrong hands.

The same is true of other AI products still in development. For instance, Microsoft's VALL-E language model, which is not yet publicly available, can imitate a person's voice and emotions to synthesize personalized speech. It requires only a three-second recording as an acoustic prompt, yet it can produce an entirely different message in the original speaker's voice.
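To make that workflow concrete, here is a minimal, purely illustrative Python sketch of how zero-shot voice cloning of this kind is generally structured. Every name in it (AcousticPrompt, synthesize) is a hypothetical placeholder: Microsoft has not released VALL-E or any public API for it, so this is an assumption-laden outline of the general technique rather than the model's actual interface.

    # Hypothetical sketch of the zero-shot voice-cloning pipeline described above.
    # All names (AcousticPrompt, synthesize) are illustrative placeholders;
    # VALL-E's real interface has never been published.

    from dataclasses import dataclass

    @dataclass
    class AcousticPrompt:
        waveform: bytes    # roughly three seconds of the target speaker's audio
        sample_rate: int   # e.g. 16_000 Hz

    def synthesize(prompt: AcousticPrompt, text: str) -> bytes:
        """Outline of a neural-codec language model pipeline:

        1. Encode the short prompt into discrete acoustic tokens that capture
           the speaker's timbre, prosody, and emotion.
        2. Generate new acoustic tokens for `text`, conditioned on the prompt
           tokens, so the output sounds like the same speaker.
        3. Decode the generated tokens back into an audio waveform.
        """
        raise NotImplementedError("No public model or weights exist for this step.")

    if __name__ == "__main__":
        prompt = AcousticPrompt(waveform=b"...", sample_rate=16_000)
        try:
            synthesize(prompt, "A message the speaker never actually said.")
        except NotImplementedError as note:
            print(f"Illustration only: {note}")

The point is the pattern, not the names: a few seconds of recorded audio are enough to condition a generative model, which is precisely what makes the phone scams described below possible.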

While the general public still cannot access the tools mentioned above, counterparts offered by smaller tech firms are already available. This has allowed not just ordinary users but also malicious actors to use such tools however they want. It is therefore unsurprising that reports have been surfacing recently about people being fooled and scammed with the help of AI.

These reports specifically stress the use of AI-generated voices that mimic the victims' loved ones. In a story shared with Business Insider this month, a mother reportedly received a call from someone claiming to be a kidnapper demanding a $50,000 ransom for her 15-year-old daughter. Describing the call, the mother said it was “100%” her daughter’s voice.

“It was completely her voice. It was her inflection. It was the way she would have cried. I never doubted for one second it was her,” said the mother, who later discovered the call was a deception and that her daughter was actually with her husband.

A couple from Canada experienced a similar AI voice scam, unfortunately losing $21,000 to a scammer on the phone. According to The Washington Post, the scammer used an AI-generated voice to pose as both a lawyer and the couple’s son, claiming the son had killed a diplomat in a car accident and that the money was needed for legal fees.

Aside from voices, other forms of AI-generated media can also trick anyone, such as fake images and deepfake videos. While there are no reports yet of scammers using them for financial gain, their effect on the general public can be widespread. Recently, AI-generated images of famous personalities circulated on the web, including photos showing Pope Francis in a fashionable puffer jacket, former President Donald Trump getting arrested, and Elon Musk holding the hand of his rival, GM CEO Mary Barra, on a date. Meanwhile, in March 2022, a deepfake video of Ukrainian President Volodymyr Zelensky surfaced, asking Ukrainian citizens to surrender to Russia.

Although the materials were quickly identified as fake, their presence undeniably fooled the public and caused temporary confusion for many. Even model and author Chrissy Teigen fell for the pictures of the pope. Yet the effect of such synthetic media could become far more serious as AI continues to develop, especially now that more tech companies are spending billions of dollars to fashion the perfect AI creation. When that time comes, AI could be exploited to twist the reality everyone knows. Worse, it could be used to control and influence the public, resulting in political, social, and moral problems across the globe.

This was evident in a deepfake video circulated during the 2018 midterm elections that showed Barack Obama maligning Donald Trump. The content was originally intended to warn the online world about the dangers of fake news, but it backfired: it caused outrage among Trump supporters and unintentionally harmed the image of the individuals depicted in the material.

Now, imagine the effect of such generative media if it were specifically designed to influence and manipulate opinion and to promote propaganda. The results could be drastic. This is especially true in countries where media and information are censored by restrictive governments, such as Belarus, China, Egypt, Russia, Iran, North Korea, Turkmenistan, the UAE, Uganda, Iraq, Turkey, and Oman, among others. Some residents resort to VPNs to access geo-blocked content, websites, and services in order to stay informed, but even VPN access is not guaranteed in these places, since governments deploy anti-circumvention measures and block websites related to VPN services. With this in mind, picture the future of such areas: limited access to international news, and a government that decides which online content to allow. Add the possibility of an even more flawless content-generating AI, and the public will find it increasingly difficult to discern what is true and what is not.

Industry groups and AI tech companies are already moving to outline policies that will guide the use of AI tools. The Partnership on AI, for instance, provides recommendations for institutions and individuals building synthetic media tools, as well as for those distributing such materials. However, private companies and non-profit organizations are not the only ones needed here. Legislators must also create a concrete set of rules that AI builders and end users will be required to observe. Will these future laws be effective enough to harness the power of AI and prevent people from exploiting it? We will see soon enough.
