Rakuten and Cisco Highlight Deepfake and Agent Security Risks
- •Cisco showcases real-time deepfake impersonation during keynote at Rakuten Technology Conference
- •Experts warn of AI agents being manipulated into making unauthorized financial commitments
- •Rakuten utilizes red teaming and guardrails to prevent prompt injection and data leaks
The Rakuten Technology Conference 2025 recently spotlighted the shifting landscape of digital threats, where AI has transitioned from simple content generation to autonomous decision-making. Cisco Systems Principal Architect Tiju Johnson opened the session with a startling demonstration: a live video participant revealed as a real-time deepfake impersonation. This jarring start illustrated how modern AI can now navigate territory once reserved for human interaction—impersonation, persuasion, and complex negotiation.
As companies deploy autonomous systems to handle customer interactions, the risk of exploitation grows. One shared example involved a user manipulating an AI agent into agreeing to a legally binding car sale for just one dollar. These vulnerabilities highlight the distinction between AI Safety—preventing human harm—and AI security, which focuses on protecting the underlying infrastructure from data poisoning or unauthorized access.
To combat these risks, Rakuten’s Cyber Security Defense Department is utilizing red teaming, a method of simulating attacks to identify weaknesses like Prompt Injection before they reach the public. They are also closely monitoring the Model Context Protocol (MCP), which connects AI systems to external tools. Because a compromised server could allow attackers to hijack agent behavior, Rakuten emphasizes that security must be integrated at every layer of the AI lifecycle.