An AI agent’s advice led to internal exposure of sensitive Meta data, highlighting ongoing AI challenges at tech firms.
After a Meta employee internally asked for help with an engineering problem, an AI agent responded with a solution. Following this advice exposed a significant amount of confidential user and company data to engineers for two hours.
A Meta spokesperson said no user data was misused, emphasizing that a human could also have given erroneous advice. Even so, the incident, first reported by The Information, triggered a major security alert at Meta. According to the company, that response demonstrates how seriously it takes data protection, underscoring both the gravity of the breach and Meta’s focus on accountability.
This breach follows several recent high-profile AI-related incidents in US tech companies. For instance, last month, the Financial Times reported two Amazon outages tied to its internal AI tools.
More than six Amazon employees later told The Guardian that the company’s rushed effort to embed AI in every part of its operations led to major mistakes, messy code, and lower productivity.
The technology driving these incidents, called agentic AI, meaning AI systems that can take actions, make decisions, and operate with a level of autonomy to accomplish goals, has changed quickly in recent months. In December, new features in Anthropic’s AI coding tool, Claude Code, drew attention for their ability to book theater tickets, manage personal finances, and even grow plants on their own.
Soon after, OpenClaw appeared: a viral AI personal assistant that worked with agents like Claude Code and could act on its own. For example, it could trade millions of dollars in cryptocurrency or handle large volumes of user emails. These abilities fueled a wave of talk about AGI, or artificial general intelligence, meaning AI capable of performing many tasks usually done by humans.
In the weeks since, stock markets have been volatile amid concerns that AI agents could harm software companies, disrupt the economy, and take over jobs from people.
Tarek Nseir, a co-founder of a consulting company, said incidents at Meta and Amazon indicate that both are still testing how agentic AI fits into their organizations.
“They are not really stepping back from these things and actually taking an appropriate risk assessment. If you put an entry-level intern on this stuff, you would never give that entry-level intern access to all your critical, high-severity HR data,” he said.
He added, “The vulnerability was clear in hindsight. This is Meta being bold and experimenting at scale.”
Jameson O’Reilly, a security specialist who focuses on offensive AI, said AI agents introduce a kind of error that humans do not, which may explain the incident at Meta.
A human knows the context of a task: the implicit knowledge that one should not, for example, set the sofa on fire to trigger a run, delete a little-used but critical file, or take an action that would expose user data downstream.
For AI agents, this is more complicated. They have context windows, a sort of working memory that holds instructions, but that memory is limited: earlier instructions can fall out of it as new information comes in, and those lapses lead to errors.
“A human engineer who has worked somewhere for two years or so has an accumulated sense of what matters, what breaks at 2 am, what the cost of downtime is, and which systems touch customers. That context lives in them, in their long-term memory, even if it is not at the front of their mind,” O’Reilly said.
“The agent, on the other hand, has none of that unless you explicitly include it in the prompt, and even then, it fades unless it is in the training data.”
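To make that limitation concrete, here is a minimal sketch of the idea, assuming a toy agent whose context window holds only a handful of messages; the names and sizes are hypothetical, and real systems measure the window in tokens rather than whole messages:

```python
from collections import deque

# Toy illustration: an agent's context window as a bounded buffer.
# The size and message names are hypothetical, chosen only to show
# how early instructions can silently fall out of working memory.

CONTEXT_WINDOW = 4  # the agent "remembers" only the last 4 messages

context = deque(maxlen=CONTEXT_WINDOW)

def tell_agent(message: str) -> None:
    """Add a message; the oldest one is silently evicted when full."""
    context.append(message)

tell_agent("Safety rule: never touch the production user-data tables.")
tell_agent("Task: investigate the slow query in the billing service.")
tell_agent("Log excerpt: SELECT ... took 4200 ms")
tell_agent("Follow-up: try adding an index and rerun the query.")
tell_agent("Follow-up: also check the replica lag.")

# The safety rule was the first message in, so it has already been
# evicted -- the agent now plans its next action without it.
print("Safety rule still in context?",
      any("Safety rule" in m for m in context))  # -> False
```

The point, in O’Reilly’s framing, is that anything not restated within that window simply does not inform the agent’s next decision.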
Nseir said more mistakes are inevitable as AI agent use expands.
Source: Meta AI agent’s instruction causes large sensitive data leak to employees