
Mohammed Mohsen Ramadan, head of the Artificial Intelligence and Cybersecurity Unit at the Arab Center for Research and Studies, believes that “the UN warning is not just a political statement, but an early warning of a new phase of an arms race built on algorithms capable of making lethal decisions independently. The shift towards autonomous weapons represents a complex challenge that combines cybersecurity, military technology, international humanitarian law, and the stability of the international order.”
He added: “Firstly, the real threat lies in the nature of artificial intelligence itself. The artificial intelligence used in modern weapons systems relies not only on direct programming but also on neural networks that learn automatically, algorithms that make decisions in unstable environments, and sensing systems that depend on data that may be misleading.”
He noted that with the shift from “human support” models to models of “increasing autonomy,” there is a risk of a weapon making an offensive decision without precise human supervision, which may produce military outcomes that do not accord with political intentions or rules of engagement. He also pointed to the technical and security risks of autonomous weapons, above all their susceptibility to hacking, since intelligent combat systems rely entirely on a digital infrastructure of control algorithms, navigation systems, databases, and communication networks.
He explained that “any breach or manipulation of these components may alter the weapon’s path and redirect it toward civilian targets, disable its self-protection system, or trigger unauthorized offensive operations; the weapon thereby turns from a defensive tool into an offensive platform serving the adversary.”
Turning to deception attacks, Ramadan pointed out that adversaries can fool artificial intelligence systems with images modified by precisely crafted algorithms, misleading electronic signals, and false data injected into the system during operation. It has been scientifically demonstrated, he added, that such algorithms can be driven to incorrect decisions at high rates by slight changes invisible to the human eye, a failure mode that feeds into what is called “loss of human control.”
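The effect Ramadan describes matches the well-documented adversarial-example phenomenon in machine learning. A minimal sketch of the fast gradient sign method (FGSM) follows, assuming PyTorch; the tiny linear model and the random input image are illustrative placeholders only, not drawn from any real system:

```python
import torch
import torch.nn as nn

# Stand-in classifier; in practice this would be any trained image-recognition
# network (the adversarial effect is largely architecture-agnostic).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Fast Gradient Sign Method: nudge every pixel by +/- epsilon in the
    direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), true_label)
    loss.backward()
    # At small epsilon the change is invisible to a human, yet it often
    # flips a trained model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative input: a random 32x32 RGB "image" labeled as class 3.
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = fgsm_perturb(x, y)

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:      ", (x_adv - x).abs().max().item())  # == epsilon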
Ramadan said that “the autonomous weapon compresses the decision-making cycle from ‘monitoring, analysis, situation assessment, decision, and launch’ into a single momentary automated decision. This shift may lead to unintended escalation, strikes on unauthorized targets, loss of the ability to intervene during an operational malfunction, and offensive decisions that violate international humanitarian law.”
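To make the compressed cycle concrete, here is a deliberately schematic sketch contrasting a human-in-the-loop engagement pipeline with a fully automated one; every name is hypothetical and stands in for an entire subsystem:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Track:
    """Hypothetical sensor track; stands in for a whole sensing subsystem."""
    identity: str       # e.g. "combatant", "civilian", "unknown"
    confidence: float   # classifier confidence in that identity

def human_in_the_loop(track: Track,
                      operator_approves: Callable[[Track], bool]) -> bool:
    """Classic cycle: monitor -> analyze -> assess -> human decision -> launch.
    The human gate can veto a high-confidence but wrong classification."""
    if track.identity != "combatant" or track.confidence < 0.9:
        return False
    return operator_approves(track)   # the step that full autonomy removes

def fully_autonomous(track: Track) -> bool:
    """Compressed cycle: one momentary automated decision. A misleading input
    that pushes confidence past the threshold fires immediately, with no
    opportunity for human intervention."""
    return track.identity == "combatant" and track.confidence >= 0.9

# A misclassified track (e.g. after an adversarial perturbation like the
# FGSM sketch above):
spoofed = Track(identity="combatant", confidence=0.97)
print(human_in_the_loop(spoofed, operator_approves=lambda t: False))  # False: vetoed
print(fully_autonomous(spoofed))                                      # True: fires
```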
He concluded: “There is one important axis remaining, the legal accountability gap. When an intelligent weapon errs in distinguishing a combatant from a civilian, the question remains: who is responsible? The military commander, the developing company, the programmer, or the system itself? This gap represents a direct threat to the international justice system.” (Al Arabiya)