North Koreans using ChatGPT to scam LinkedIn users; "attacks are getting very sophisticated"


Key notes

  • North Korea is using AI tools like ChatGPT to mount sophisticated cyberattacks against US workers.
  • Targets include employees in cybersecurity, defense, and crypto sectors on platforms like LinkedIn.
  • AI helps create fake recruiter profiles, craft messages, and build trust with targets.

Recent reports indicate that North Korean hacking groups are employing artificial intelligence tools like ChatGPT to launch intricate cyberattacks against American white-collar workers. This development raises concerns about the evolving landscape of online threats and the potential misuse of AI for malicious purposes.

North Korean actors are leveraging AI-powered large language models (LLMs) to generate content likely used in spear-phishing campaigns. These campaigns typically involve impersonating legitimate entities, such as recruiters, to trick individuals into revealing sensitive information or clicking on malicious links.

The targets of these attacks appear to be concentrated in specific sectors, including global cybersecurity, defense, and cryptocurrency companies. Social media platforms like LinkedIn, Facebook, WhatsApp, Discord, and Telegram serve as the primary battlegrounds for these operations, with LinkedIn emerging as the platform of choice for phishing scams.

According to a UN panel of experts, the funds acquired through these cyberattacks are channeled into financing North Korea’s ballistic missile and nuclear programs. The attacks themselves rely on meticulous social engineering.

Hackers craft convincing fake recruiter profiles on LinkedIn and engage in extended conversations to build trust with their targets. Generative AI plays a crucial role in this process, assisting with content creation, message crafting, and identity fabrication.

“The attacks are getting very sophisticated. We are not talking about a badly worded email that says ‘click on this link,’” says Erin Plante, vice president at the cybersecurity company Chainalysis.

North Korea’s use of AI in cyberattacks marks a significant advancement in its capabilities; the country has been pursuing cyber programs since the 1980s and 1990s.

Despite North Korea’s substantial investment in AI, its attempts are not without vulnerabilities. Language barriers often challenge the hacking groups, leading to inconsistencies in communication and cultural misunderstandings. Instances of poorly written English, unusual reluctance to join video calls, and scripted responses have served as red flags for potential victims.

More here.

