Warning: Artificial intelligence doubles the risks of cyberattacks



Anthropic, a US-based artificial intelligence developer, said in its threat intelligence report on Wednesday that cybercriminals are increasingly using artificial intelligence to launch cyberattacks.

The company added that its chatbot, Claude, was being used illegally to penetrate networks, steal and analyze data, and formulate "psychologically targeted" extortion demands.

In some cases, the attackers threatened to release stolen information unless they received amounts exceeding $500,000.

The company said that during the past month alone, 17 organizations across the healthcare, government, and religious sectors were targeted.

Claude helped the attackers identify security vulnerabilities and determine the target network and the data that should be extracted.

Anthropic's Jacob Klein told the tech news site The Verge that such operations previously required specialized teams of experts, but artificial intelligence now allows a single person to launch sophisticated attacks.

Anthropic also documented cases of North Korean operatives using Claude while impersonating remote programmers employed by American companies in order to "finance North Korean weapons programs." Artificial intelligence helped them communicate with employers and perform tasks they lacked the skills to accomplish on their own.

Historically, North Korean workers would have had to go through years of training for this purpose, Anthropic said, but “Claude and other models have effectively removed this restriction.”

Criminals have also devised AI-powered fraud schemes that are offered for sale online, including a bot on the Telegram application used in romance scams, which emotionally manipulates victims in multiple languages to extort money from them.

Anthropic said it has already implemented preventive measures to limit abuse, but attackers continue to look for ways to circumvent them.

Anthropic said lessons learned from these incidents are being used to strengthen protection against AI-powered cybercrime.