A new Microsoft report reveals that foreign adversaries are increasingly employing AI for disinformation, phishing, and other cyberattacks targeting the United States and its allies.
What Microsoft Found
- Escalating Threats: According to Microsoft’s latest annual digital threats report, countries such as Russia, China, Iran, and North Korea are ramping up their use of artificial intelligence for hostile cyber operations.
- Scale of the Surge: In July 2025 alone, Microsoft detected more than 200 instances of AI-generated disinformation and fake content from foreign adversaries, more than double the figure from July 2024 and over ten times the volume seen in 2023.
How AI Is Being Used
- Disinformation & Deepfakes: Voice cloning, fake videos, and impersonations are being used to spread false narratives.
- Phishing & Social Engineering: Attackers use AI to craft more convincing phishing emails and manipulate targets.
- Espionage & Data Theft: State-linked cyber groups are using AI to probe sensitive sectors such as government, defense, and critical infrastructure.
Microsoft’s Warnings & Recommendations
- Microsoft calls on governments, businesses, and individuals to improve their cybersecurity measures.
- The company stresses that many U.S. organizations still rely on outdated security defenses, leaving them more vulnerable.
- Microsoft urges increased investment in prevention, detection, and rapid response to AI-amplified threats.
Broader Implications
- As AI tools become more accessible and powerful, the attack surface for cyber espionage and influence operations grows.
- The escalation may heighten geopolitical tensions; AI misuse is likely to be a central issue in national security debates.
- International norms and regulation around AI and disinformation may receive renewed attention.