Google Proposes "Privacy by Innovation" Framework for AI
- Google reports AI efficiency increased 300-fold over a two-year period.
- Personal Intelligence integrates contextual data from Gmail and Photos into Google Search.
- Ukraine launches Diia.AI national assistant for tailored government services and document processing.
Google’s President of Global Affairs, Kent Walker, recently addressed the IAPP Global Summit, outlining a fundamental shift in how the industry must approach data protection. As AI models become roughly 300 times more efficient than those from just two years ago, the focus is transitioning from simple chatbots to proactive "Personal Intelligence." These assistants do not just provide links; they connect contextual dots across personal applications like Gmail and Photos to execute tasks autonomously.
A prime example of this evolution is Ukraine’s Diia.AI, a national assistant that handles government services through a chat interface. Instead of navigating complex web portals, citizens can request documents like income certificates directly within the conversation. This level of utility requires a high degree of user trust, which Walker argues must be built through "privacy by innovation." This approach suggests that developers should compete on privacy techniques—such as robust toggle controls and sensitive topic guardrails—as much as they do on raw model quality.
To support this, the proposal advocates for the broader adoption of Privacy-Enhancing Technologies (PETs). These are specialized tools that allow AI to learn from data patterns without ever seeing or exposing the raw, underlying information. By shifting away from overwhelming "consent fatigue" toward intuitive, context-aware safeguards, the goal is to create a digital environment where AI acts as a trusted partner. Governments are encouraged to provide regulatory incentives for companies that prioritize these advanced safety standards.
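One widely cited PET is differential privacy, which answers aggregate queries after adding calibrated random noise so no single person's record can be inferred from the result. The sketch below is a minimal, illustrative example of the Laplace mechanism for a count query; it is not drawn from Google's systems, and the function names and epsilon value are hypothetical choices for demonstration.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF on a uniform draw."""
    u = random.random() - 0.5  # uniform in (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise.

    A counting query changes by at most 1 when one person is added or
    removed (sensitivity = 1), so noise scale 1/epsilon gives
    epsilon-differential privacy for this single release.
    """
    return true_count + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical aggregate: how many users enabled a feature.
    noisy = private_count(true_count=42, epsilon=1.0)
    print(f"noisy count: {noisy:.2f}")
```

The analyst sees only the noisy aggregate, never the underlying records; smaller epsilon means stronger privacy but noisier answers, which is the kind of tunable trade-off the "privacy by innovation" framing asks developers to compete on.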