Signal President Meredith Whittaker raised concerns about the security and privacy implications of agentic AI during her talk at the SXSW 2025 conference, highlighting how these AI-driven systems could compromise user data.
Whittaker explained that agentic AI, designed to perform tasks autonomously such as booking tickets, scheduling events, and messaging contacts, requires deep access to user systems. The AI must interact with web browsers, payment methods, calendars, and messaging apps, creating significant privacy risks.
She warned that for AI agents to function effectively, they would require something close to root access to devices, pulling data from multiple applications, often in unencrypted form. That processing is unlikely to happen on-device and would instead run on cloud servers, exposing sensitive information to security threats.

She also highlighted how AI agents could undermine end-to-end encryption in messaging apps like Signal. If an AI assistant can access and summarize messages, it weakens privacy protections by design, since the agent must read the message data itself.
Her concerns align with broader critiques of AI’s reliance on vast data collection. She argued that the prevailing bigger-is-better AI paradigm, which thrives on mass surveillance and extensive datasets, poses long-term risks to user security.
Whittaker concluded by urging the tech industry to reconsider the push toward agentic AI, warning that while such technology promises convenience, it may come at the cost of fundamental privacy and security principles.