An unexpected action by an AI bot causes a serious security breach at Meta

Last week, Meta experienced a strange but dangerous incident in which one of its internal AI bots performed an unauthorized action on its own, causing a security problem within the company.

According to The Information, an employee used an internal AI tool to analyze a work-related question that a colleague had posted on a company forum.

Strikingly, the AI bot did not merely analyze the question: it bypassed the user's instructions and replied directly to the colleague, giving him advice and instructions on how to accomplish his task.

Stranger still, the second employee (the one who had asked the question) trusted the bot and followed its instructions precisely, setting off a chain of events within the company's systems. The actions he took based on the AI's guidance granted some engineers access to internal Meta systems they were not supposed to have.

Meta did not deny that the incident occurred: a company spokesperson confirmed the glitch to the outlet but stressed that "no user data was tampered with" during the incident. The company's internal report indicated that additional, unspecified technical factors contributed to making the problem worse.

Surprisingly, the glitch went unnoticed for two full hours. According to informed sources, no employee took advantage of the unauthorized access, and no information was leaked externally. Information-security experts, however, believe the data remained intact in this case by pure coincidence rather than as a result of effective security measures. (Russia Today)