Microsoft used AI to defend against AI-powered scams worth $4 billion in 2024-25

As detailed in its latest Cyber Signals report, Microsoft announced it thwarted over $4 billion in fraud attempts from April 2024 to April 2025. Alongside the milestone, the company issued a warning about AI-enhanced fraud schemes. Scams that use generative AI include deepfake job interviews, fake e-commerce sites, impersonation scams, and more.
Microsoft highlighted a troubling rise in tech support scams, including attacks by the cybercriminal group Storm-1811. The group exploited Windows' Quick Assist tool to gain unauthorized access to devices. In response, Microsoft strengthened safeguards and now blocks over 4,400 suspicious Quick Assist connections daily.
The report showcases Microsoft's multipronged defense strategy, which includes AI-powered scam detection in Microsoft Edge, domain impersonation protection, and machine learning-driven fraud prevention in Azure. Microsoft's Digital Crimes Unit (DCU) continues to collaborate with global law enforcement to dismantle malicious infrastructure.
Corporate VP Kelly Bissell stressed the need for global collaboration. “Tech companies, governments, and users must unite against rising AI threats,” he said.
Microsoft recommends multifactor authentication on job platforms, AI-based detection tools, and public awareness campaigns.