The Information reported that an AI agent inside Meta took unauthorized action that led to an employee causing a security breach at the social media company last week. According to the publication, an employee used an in-house agentic AI to investigate a question from a second employee on an internal forum. The AI agent posted a response to the second employee with advice even though the first person didn't direct it to do so.
The second employee took the agent's recommended action, sparking a domino effect that led to some engineers gaining access to Meta systems they shouldn't have had permission to see. A representative from the company confirmed the incident to The Information and said that "no user data was mishandled." Meta's internal report indicated that there were unspecified additional issues that led to the breach. A source said there was no evidence that anyone took advantage of the unexpected access or that the data was made public during the two hours the security breach was active. However, that may be the result of dumb luck more than anything else.
While many tech leaders and companies have touted the benefits of artificial intelligence, this is just the latest incident in which human employees have lost control of an AI agent. Amazon Web Services experienced a 13-hour outage earlier this year that also (apparently coincidentally) involved its Kiro agentic AI coding tool. Moltbook, the social network for AI agents recently acquired by Meta, had a security flaw that exposed user information due to an oversight in the vibe-coded platform.
This text initially appeared on Engadget at https://www.engadget.com/ai/a-meta-agentic-ai-sparked-a-security-incident-by-acting-without-permission-224013384.html?src=rss