
Lebanon Today
A team of researchers at Datadog's cybersecurity lab has revealed a serious security issue in Microsoft's "Copilot" application.
The issue allows hackers to carry out sophisticated, hard-to-detect scams to steal user information.
According to a report from the tech site Bleeping Computer, the problem lies in the "Copilot Studio" feature, which Microsoft introduced to improve productivity through AI-powered chatbots.
However, this feature has been exploited maliciously to create fake chatbots that ask users to enter their personal data or passwords and then send that information directly to the attackers.
Attackers exploit users' trust in genuine Microsoft links: the links used in these attacks are actually hosted on the company's own servers, making them appear completely legitimate.
This means that any user, even if they have limited privileges, can fall victim to these attacks without suspecting anything.
Microsoft has confirmed that it is aware of the issue and is currently developing additional protections to reduce the risk of this type of fraud. It noted that the problem is not purely technical, but also stems from user behavior and excessive trust in official-looking interfaces.
It is worth noting that the company recently launched "Copilot Studio" as part of its strategy to integrate artificial intelligence into its future products, but this incident raises questions about the safety of the AI tools available to users.
Source: 961 Today