Meta has ended its partnership with Mercor, a $10 billion AI data startup, after a supply chain attack exposed some of the AI industry’s most closely guarded secrets. The breach, which began with a compromised open-source library in late March 2026, revealed not only personal data but also the training methods behind top language models. Hackers used a tampered version of the LiteLLM open-source library, leading to investigations at OpenAI and Anthropic and a class-action lawsuit involving over 40,000 people.  

In March 2026, hackers targeted a popular open-source library and stole more than personal data. Wired reports that they may also have taken the blueprints for building some of the world’s most advanced AI models.  

This disruption follows a major cybersecurity breach. Meta has put its partnership with Mercor, a San Francisco AI data company, on hold after a March 5, 2026, cyberattack exposed private details about Mercor and possibly its clients. The pause has no set end date and has made many in the industry nervous, since companies have spent billions developing these secret methods.  

The Startup Behind the Curtain 

Mercor may not be widely known, but it plays a key role in the AI industry. Founded in 2023 by Brendan Foody, Adarsh Hiremath, and Surya Midha, three friends from the Bay Area, the company brings together contractors, engineers, lawyers, doctors, bankers, and journalists to create high-quality training data for AI labs. Its clients include Meta, OpenAI, Anthropic, and Google.  

Mercor’s growth has been remarkable, even by Silicon Valley standards. In October 2025, it raised $350 million in a Series C round, reaching a $10 billion valuation and making its three founders, at 22, the world’s youngest self-made billionaires. By September 2025, the company’s annual revenue hit $150 million, up from $100 million just three months earlier. Its focus on creating fine-tuning and reinforcement learning data for AI labs has made it one of the most valuable private companies in the AI supply chain.  

However, Mercor’s position at the center of the AI supply chain has come with risks.  

A Poison Package, a Cascade of Exposure 

The attack on Mercor started further up the supply chain in late March 2026. Wiz, Snyk, and Datadog Security Labs found that a hacker group called TeamPCP broke into the CI/CD pipeline of LiteLLM, an open-source Python library. LiteLLM is used by millions of developers, is downloaded about 97 million times a month, and is found in roughly 36% of cloud environments.  

Earlier, TeamPCP had used a supply-chain attack against Trivy, a popular security scanner, to steal a LiteLLM maintainer’s credentials. On March 27, 2026, the attackers used those credentials to upload two malicious versions of LiteLLM, 1.82.7 and 1.82.8, to the Python Package Index (PyPI). The harmful packages were available for about 40 minutes before they were found and taken down.  

The attack was complex. Version 1.82.7 hid base64-encoded malware in the library’s proxy server code, which ran as soon as the library was imported. Version 1.82.8 used a malicious .pth configuration file that executed every time a Python process started. Both versions were made to steal environment variables, API keys, SSH keys, cloud credentials for AWS, Google Cloud, and Azure, CI/CD secrets, and database authentication, and to send all the stolen data to a server at app.models.litellm[.]cloud.  
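The harvesting step is trivial once an attacker has code running inside a victim’s process, whether at import time or at interpreter startup via a .pth file. A minimal benign sketch of that idea (the function name and the credential markers are illustrative assumptions, not the actual malware logic):

```python
import os

def enumerate_credential_names(environ):
    """Return the names of environment variables that look like secrets --
    the kind of data the tampered packages harvested. Illustrative sketch
    only; the markers and matching rules here are assumptions."""
    markers = ("KEY", "TOKEN", "SECRET", "PASSWORD", "CREDENTIAL")
    return sorted(k for k in environ if any(m in k.upper() for m in markers))

# Any code that runs at import time (version 1.82.7's vector) or at every
# interpreter startup via a .pth file (version 1.82.8's vector) has full
# read access to os.environ:
sample = {"AWS_SECRET_ACCESS_KEY": "...", "PATH": "/usr/bin", "GITHUB_TOKEN": "..."}
print(enumerate_credential_names(sample))  # → ['AWS_SECRET_ACCESS_KEY', 'GITHUB_TOKEN']
```

This is why the breach swept up cloud credentials and CI/CD secrets wholesale: anything exported into a process environment is one dictionary lookup away from arbitrary code.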

Mercor confirmed it was hit by the attack. Later, it discovered nearly 4 terabytes of exposed data, including platform source code, a large user database, and video interviews with identity documents. The breach may have revealed the names and Social Security numbers of over 40,000 contractors and customers.  

The Secrets That Matter Most 

The exposure of personal data is serious, but Meta and other AI labs are more concerned about leaked proprietary information.  

Because Mercor manages data pipelines for multiple AI companies, the breach may have exposed confidential data-processing and training strategies built up over years of investment. Unlike raw datasets, these methods confer durable competitive advantages. Several AI labs are investigating the extent of the leak.  

OpenAI is reviewing the incident but hasn’t stopped working with Mercor. Anthropic hasn’t commented publicly. Google is also believed to be assessing the impact of the breach.  

This breach reveals a major industry risk: when many companies rely on a single supplier, a single breach can compromise the top AI methods across the sector.  

Extortion and Legal Fallout 

The hacker group Lapsus$, known for past attacks on big companies, claimed responsibility for the Mercor breach and started selling the stolen data on dark web forums. Security experts believe Lapsus$ is working with TeamPCP, which has become a major threat in the AI and enterprise software worlds. The group is also believed to be behind a series of supply chain attacks that hit over 1,000 enterprise SaaS environments, including a breach of the European Commission linked to the same campaign by CERT-EU.  

On April 1, 2026, plaintiff Lisa Gil, a resident of Wahiawa, Hawaii, filed a class action complaint against Mercor.io Corp. in the U.S. District Court for the Northern District of California. The suit alleges that Mercor failed to maintain adequate cybersecurity protections, leaving more than 40,000 people exposed to identity theft and fraud. The complaint states that the March 27 LiteLLM compromise was the entry point. It also claims that Mercor’s reliance on a compromised open-source dependency, without sufficient monitoring, created dangerous conditions that led to the breach.  

Meta has not made any public statements about the breach. In March 2026, the company signed a $27 billion AI infrastructure deal with Nebius Group, and experts estimate it will spend between $135 billion and $150 billion this year, making its AI training pipeline extremely important. Stopping work with a key data vendor is a decision Meta would make only if the risk to its secret methods outweighed the cost of halting operations.  

A Cautionary Tale for the AI Supply Chain 

The Mercor breach highlights how modern supply chain attacks can expose both credentials and unique intellectual property when AI companies depend on the same data vendors and open-source tools.  

Security companies have warned about this exact problem. Aikido Security, which became a unicorn in January 2026, is built on the idea that open-source dependency risk is a major threat to enterprise software. The Mercor breach shows that this risk may be even greater for the AI training pipeline.  
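One standard defense against this class of attack is hash-pinning: pip’s `--require-hashes` mode refuses to install any artifact whose SHA-256 digest differs from the one recorded in the requirements file. The underlying check is just a digest comparison; a minimal sketch (the function name is an illustrative assumption, not pip’s internal API):

```python
import hashlib

def artifact_matches(path, expected_sha256):
    """Return True if the downloaded package artifact at `path` has the
    expected SHA-256 digest -- the comparison pip performs for each
    requirement when run with --require-hashes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream the file in chunks so large wheels don't load into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Version pinning alone would have kept locked environments from pulling the new 1.82.7 and 1.82.8 uploads; hash pinning additionally guards against a tampered re-upload of an already-pinned version, failing the build loudly instead of installing it.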

The next few months of 2026 will show whether Mercor’s rapid growth can continue after a March breach that compromised both user data and clients’ most closely guarded secrets. The AI industry’s rapid pace in 2025 was driven by the belief that this infrastructure was secure. Now, that belief is being questioned.  

Source:  Meta freezes AI data work after breach puts training secrets at risk