9 Sources
9 Sources
[1]
Google details security measures for Chrome's agentic features | TechCrunch
An increasing number of browsers are experimenting with agentic features that will take actions on your behalf, such as booking tickets or shopping for different items. However, these agentic capabilities also come with security risks that could lead to loss of data or money. Google detailed its approach to handling user security on Chrome using observer models and consent for user action. The company previewed agentic capabilities on Chrome in September and said these features will roll out in the coming months. The company said it is using the help of a few models to keep agentic actions in check. Google said it built a User Alignment Critic using Gemini to scrutinize the action items built by the planner model for a particular task. If the critic model thinks that the planned tasks don't serve the user's goal, it asks the planner model to rethink the strategy. Google noted that the critic model only sees the metadata of the proposed action and not the actual web content. What's more, to prevent agents from accessing disallowed or untrustworthy sites, Google is using Agent Origin Sets, which restrict the model to access read-only origins and read-writeable origins. Read-only origin is data that Gemini is permitted to consume content from. For instance, on a shopping site, the listings are relevant to the task, but banner ads aren't. Similarly, Google said the agent is only allowed to click or type on certain iframes of a page. "This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins. This bounds the threat vector of cross-origin data leaks. This also gives the browser the ability to enforce some of that separation, such as by not even sending to the model data that is outside the readable set," the company said in a blog post. Google is also keeping a check on page navigation by investigating URLs through another observer model. This can prevent navigation to harmful model-generated URLs, the company said. The search giant said that it is also handing over the reins to users for sensitive tasks. For instance, when an agent tries to navigate to a sensitive site with information like banking or your medical data, it first asks the user. For sites that require sign-in, it'll ask the user for permission to let Chrome use the password manager. Google said that the agent's model doesn't have exposure to password data. The company added that it will ask users before taking actions like making a purchase or sending a message. Google said that, in addition to this, it also has a prompt-injection classifier to prevent unwanted actions and is also testing agentic capabilities against attacks created by researchers. AI browser makers are also paying attention to security. Earlier this month, Perplexity released a new open-source content detection model to prevent prompt injection attacks against agents.
[2]
Google says Chrome's AI creates risks only more AI can fix
'User Alignment Critic' will review agentic actions so bots don't do things like emptying your bank account Google plans to add a second Gemini-based model to Chrome to address the security problems created by adding the first Gemini model to Chrome. In September, Google added a Gemini-powered chat window to its browser and promised the software would soon gain agentic capabilities that allow it to interact with browser controls and other tools in response to a prompt. Allowing error-prone AI models to browse the web without human intervention is dangerous, because the software can ingest content - perhaps from a maliciously crafted web page - that instructs it to ignore safety guardrails. This is known as "indirect prompt injection." Google knows about the risks posed by indirect prompt injection, and in a Monday blog post Chrome security engineer Nathan Parker rated it as "the primary new threat facing all agentic browsers." "It can appear in malicious sites, third-party content in iframes, or from user-generated content like user reviews, and can cause the agent to take unwanted actions such as initiating financial transactions or exfiltrating sensitive data," Parker wrote. The seriousness of the threat recently led IT consultancy Gartner to recommend that companies block all AI browsers. The Chocolate Factory, having invested billions in AI infrastructure and services, would prefer that people embrace AI rather than shun it. So the ad biz is adding a second model to keep its Gemini-based agent in line. Parker refers to the oversight mechanism "a User Alignment Critic." "The User Alignment Critic runs after the planning is complete to double-check each proposed action," he explains. "Its primary focus is task alignment: determining whether the proposed action serves the user's stated goal. If the action is misaligned, the Alignment Critic will veto it." According to Parker, Google designed the Critic so attackers cannot poison it by exposing the model to malicious content. Enlisting one machine learning model to moderate another has become an accepted pattern among AI firms. Suggested by developer Simon Willison in 2023, it was formalized in a Google DeepMind paper published this year. The technique is called "CaMeL," which stands for "CApabilities for MachinE Learning." Parker adds that Google is also bringing Chrome's origin-isolation abilities to agent-driven site interactions. The web's security model is based on the same-origin policy - sites should not have access to data that comes from different origins (e.g. domains). And Chrome tries to enforce Site Isolation, which puts cross-site data in different processes, away from the web page process, unless allowed by CORS. Google extended this design to agents using tech called Agent Origin Sets that aims to prevent Chrome-based AI from interacting with data from arbitrary origins. The Register understands that Chrome devs have incorporated some of this work, specifically the origin isolation extension, into current builds of the browser, and that other agentic features will appear in future releases. Additionally, Google aims to make Chrome's agentic interactions more transparent, so user directives to tackle some complicated task don't end in tears when things go awry. The model/agent will seek user confirmation before navigating to sites that deal with sensitive data (e.g. banks, medical sites). Also, the robo-browser will also seek confirmation before letting Chrome sign-in to a site using the Google Password Manager. And for sensitive web actions like online purchases, sending messages, or other unspecified consequential actions, the agent will either ask for permission or just tell the user to complete the final step. To ensure that security researchers put Chrome's agentic safeguards to the test, Parker says Google has revised its Vulnerability Rewards Program (aka bug bounties) to offer payouts for folks who find flaws. "We want to hear about any serious vulnerabilities in this system and will pay up to $20,000 for those that demonstrate breaches in the security boundaries," said Parker. ®
[3]
Google Adds Layered Defenses to Chrome to Block Indirect Prompt Injection Threats
Google on Monday announced a set of new security features in Chrome, following the company's addition of agentic artificial intelligence (AI) capabilities to the web browser. To that end, the tech giant said it has implemented layered defenses to make it harder for bad actors to exploit indirect prompt injections that arise as a result of exposure to untrusted web content and inflict harm. Chief among the features is a User Alignment Critic, which uses a second model to independently evaluate the agent's actions in a manner that's isolated from malicious prompts. This approach complements Google's existing techniques, like spotlighting, which instruct the model to stick to user and system instructions rather than abiding by what's embedded in a web page. "The User Alignment Critic runs after the planning is complete to double-check each proposed action," Google said. "Its primary focus is task alignment: determining whether the proposed action serves the user's stated goal. If the action is misaligned, the Alignment Critic will veto it." The component is designed to view only metadata about the proposed action and is prevented from accessing any untrustworthy web content, thereby ensuring that it is not poisoned through malicious prompts that may be included in a website. With the User Alignment Critic, the idea is to provide safeguards against any malicious attempts to exfiltrate data or hijack the intended goals to carry out the attacker's bidding. "When an action is rejected, the Critic provides feedback to the planning model to re-formulate its plan, and the planner can return control to the user if there are repeated failures," Nathan Parker from the Chrome security team said. Google is also enforcing what's called Agent Origin Sets to ensure that the agent only has access to data from origins that are relevant to the task at hand or data sources the user has opted to share with the agent. This aims to address site isolation bypasses where a compromised agent can interact with arbitrary sites and enable it to exfiltrate data from logged-in sites. This is implemented by means of a gating function that determines which origins are related to the task and categorizes them into two sets - "This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins," Google explained. "This bounds the threat vector of cross-origin data leaks." Similar to the User Alignment Critic, the gating function is not exposed to untrusted web content. The planner is also required to obtain the gating function's approval before adding new origins, although it can use context from the web pages a user has explicitly shared in a session. Another key pillar underpinning the new security architecture relates to transparency and user control, allowing the agent to create a work log for user observability and request their explicit approval before navigating to sensitive sites, such as banking and healthcare portals, permitting sign-ins via Google Password Manager, or completing web actions like purchases, payments, or sending messages. Lastly, the agent also checks each page for indirect prompt injections and operates alongside Safe Browsing and on-device scam detection to block potentially suspicious content. "This prompt-injection classifier runs in parallel to the planning model's inference, and will prevent actions from being taken based on content that the classifier determined has intentionally targeted the model to do something unaligned with the user's goal," Google said. To further incentivize research and poke holes in the system, the company said it will pay up to $20,000 for demonstrations that result in a breach of the security boundaries. These include indirect prompt injections that allow an attacker to - "By extending some core principles like origin-isolation and layered defenses, and introducing a trusted-model architecture, we're building a secure foundation for Gemini's agentic experiences in Chrome," Google said. "We remain committed to continuous innovation and collaboration with the security community to ensure Chrome users can explore this new era of the web safely." The announcement follows research from Gartner that called on enterprises to block the use of agentic AI browsers until the associated risks, such as indirect prompt injections, erroneous agent actions, and data loss, can be appropriately managed. The research also warns of a possible scenario where employees "might be tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting." This could cover cases where an individual dodges mandatory cybersecurity training by instructing the AI browser to complete it on their behalf. "Agentic browsers, or what many call AI browsers, have the potential to transform how users interact with websites and automate transactions while introducing critical cybersecurity risks," the advisory firm said. "CISOs must block all AI browsers in the foreseeable future to minimize risk exposure." The development comes as the U.S. National Cyber Security Centre (NCSC) said that large language models (LLMs) may suffer from a persistent class of vulnerability known as prompt injection and that the problem can never be resolved in its entirety. "Current large language models (LLMs) simply do not enforce a security boundary between instructions and data inside a prompt," said David C, NCSC technical director for Platforms Research. "Design protections need to therefore focus more on deterministic (non-LLM) safeguards that constrain the actions of the system, rather than just attempting to prevent malicious content reaching the LLM."
[4]
Google Chrome adds new security layer for Gemini AI agentic browsing
Google is introducing in the Chrome browser a new defense layer called 'User Alignment Critic' to protect upcoming agentic AI browsing features powered by Gemini. Agentic browsing is an emerging mode in which an AI agent is configured to autonomously perform for the user multi-step tasks on the web, including navigating sites, reading their content, clicking buttons, filling forms, and carrying out a sequence of actions. User Alignment Critic is a separate LLM model isolated from untrusted content that acts as a "high-trust system component." Gemini is Google's AI assistant, that can generate text, media, and code. It is used on Android and various Google services, and integrated into Chrome since September. At the time, Google announced plans to add agentic browsing capabilities in Chrome via Gemini and now the company is introducing a new security architecture to protect it. The new architecture, presented in an announcement from Google's engineer Nathan Parker, mitigates the risk of indirect prompt injection, where malicious page content manipulates AI agents into performing unsafe actions that lead to user data exposure or fraudulent transactions. Parker explains that the new security system involves a layered defense approach combining deterministic rules, model-level protections, isolation boundaries, and user oversight. The main pillars of the new architecture are: Google's layered defense approach towards agentic browsing shows that the company is more careful about giving its LLMs access to the browser than vendors of similar products, who researchers showed to be vulnerable to phishing, prompt injection, and purchasing from fake shops through prompt injection attacks. Google has also developed automated red-teaming systems that generate test sites and LLM-driven attacks to continuously test defenses and develop new ones where required, pushed quickly to users via Chrome's auto-update mechanism. Finally, Google has announced bounty payments of up to $20,000 for security researchers who can break the new system, calling the community to join in the effort to build a robust agentic browsing framework on Chrome.
[5]
Google adds prompt injection defenses to Chrome
Agents now log activity and seek approval before accessing sensitive sites Google is adding new defenses to the Chrome browser, to make sure its agentic capabilities cannot be abused through indirect prompt injection. Indirect prompt injection is a type of attack in which the AI agent reads third-party content (for example, an incoming email) and executes it. An example would be a prompt to execute a crypto transaction from a browser wallet plugin written into an email. The text is in white color and in font size 0, so the victim can't see it, but if they run the email through the AI for any reason, the agent might act on the prompt. To make sure this doesn't happen, Google now introduced additional security layers, including the User Alignment Critic, and Agent Origin Sets. User Alignment Critic is a feature that monitors the agent's actions in an environment isolated from untrusted content. "The User Alignment Critic runs after the planning is complete to double-check each proposed action," Google explained. "Its primary focus is task alignment: determining whether the proposed action serves the user's stated goal. If the action is misaligned, the Alignment Critic will veto it. This component is architected to see only metadata about the proposed action and not any unfiltered untrustworthy web content, thus ensuring it cannot be poisoned directly from the web. It has less context, but it also has a simpler job -- just approve or reject an action." Agent Origin Sets, on the other hand, makes sure the agent can only access data from origins that are related to the task it's currently doing, or data that the user chose to share with the agent. "This prevents a compromised agent from acting arbitrarily on unrelated origins," Google added. "For each task on the web, a trustworthy gating function decides which origins proposed by the planner are relevant to the task. The design is to separate these into two sets, tracked for each session." Finally, agents are now also allowed to create a work log for user observability and will ask explicit approval before navigating to sensitive sites such as banking or healthcare portals. Via The Hacker News
[6]
Google details security guardrails for Chrome's new AI agents
Google detailed its security approach for agentic features within its Chrome browser, which can perform actions on behalf of users. The company previously previewed these capabilities in September, and a wider rollout will occur in coming months. The company has implemented observer models and user consent mechanisms to manage these agentic actions. This strategy addresses potential security risks, which could include data loss or financial implications associated with such features. Google employs several models to monitor agentic actions. It has developed a User Alignment Critic, powered by Gemini, which evaluates action items generated by a planner model for a specific task. If the critic model deems the proposed tasks misaligned with the user's objective, it instructs the planner model to revise its strategy. The critic model accesses only the metadata of the proposed action, not the actual web content. To prevent agents from accessing unauthorized or untrustworthy websites, Google utilizes Agent Origin Sets. These sets restrict models to designated read-only and read-writeable origins. Read-only origins permit Gemini to consume content, such as product listings on a shopping site, while excluding irrelevant elements like banner ads. Similarly, the agent can only interact with specific iframes on a page. "This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins," the company said in a blog post. "This bounds the threat vector of cross-origin data leaks. This also gives the browser the ability to enforce some of that separation, such as by not even sending to the model data that is outside the readable set." Google also monitors page navigation using another observer model, which scrutinizes URLs. This mechanism aims to prevent navigation to harmful model-generated URLs. For sensitive tasks, Google requires user permission. When an agent attempts to access sites containing banking or medical information, it first prompts the user for consent. For sites requiring sign-in, Chrome requests user permission to utilize the password manager, ensuring the agent's model does not access password data. The company will also seek user approval before actions such as making a purchase or sending a message. Additionally, Google has a prompt-injection classifier to prevent unwanted actions and continuously tests its agentic capabilities against attacks developed by researchers. Other AI browser developers have also concentrated on security; Perplexity released a new open-source content detection model earlier this month to counter prompt injection attacks against agents.
[7]
Google Built an AI Watchdog to Stop Chrome's Gemini From Going Rogue - Phandroid
Google just built an AI to babysit another AI, and honestly, it makes a lot of sense. The company announced something called the User Alignment Critic in December 2025, which basically acts as a watchdog for Chrome's Gemini agent to stop it from doing anything stupid or dangerous while browsing for you. But why does this matter? For starters, Chrome is getting more AI-powered features that can handle entire tasks on their own, like shopping, booking flights, or filling out forms. Sounds convenient, right? The problem is that sketchy websites could trick your AI assistant into doing things you definitely don't want, like draining your bank account or handing over your personal info. It's like phishing scams, but targeting your AI instead of you directly. So Google's solution is pretty clever. Before Chrome's AI does anything, this second AI checks it first. Let's say you ask Gemini to help you book a hotel. The main AI figures out what buttons to click and forms to fill. But before it actually does anything, the User Alignment Critic jumps in and asks, "Wait, does this actually match what the user wanted?" If something looks sketchy, it blocks the action or hands control back to you. The watchdog AI doesn't even look at the actual webpage content, which is smart. Malicious sites could hide instructions in the text trying to hijack your AI. Instead, the critic only reviews clean metadata about what the AI wants to do. That way, it can't be fooled by hidden prompts buried in reviews or comments. Google is piling on other safety layers too. Chrome AI safety features limit which websites the AI can even interact with based on your task. There's real-time scam detection running in the background. Plus, anything sensitive like payments still needs your explicit approval before going through. Google is so confident this system works that they're offering a $20,000 bounty to security researchers who can break it. They updated their bug bounty program specifically to cover agentic AI in Chrome, paying up to $20,000 for anyone who finds a serious vulnerability. For most people, this just means you can let Chrome's AI handle boring tasks without worrying it'll get tricked by a fake website. For businesses that have been nervous about AI browsers potentially leaking company data, this addresses a major concern. Google is basically using AI to keep AI in check, and that might become the new normal as browsers get smarter.
[8]
Google Shares Safety Guardrails for Chrome Browser's Agentic Capabilities
User confirmation will be required before the agent takes critical steps Google, on Monday, shared the safety measures it is implementing to protect users and their data from bad actors while they use the agentic features in Google Chrome. These agentic features were added to the browser recently, and has not been widely rolled out yet. However, in recent times, several experts have raised concerns over the vulnerabilities in the operation of artificial intelligence (AI) agents in online environments. The Mountain View-based tech giant has now shared its multi-layer security architecture that will help mitigate any prompt injection or other forms of cyberattacks. Google Shares Its Security Architecture for Chrome's Agentic Features In a blog post, the tech giant introduced several new safety measures for Google Chrome, such as user alignment critic, improved origin-isolation capabilities, user confirmations, and real-time detection of threats. These separate layers are mainly to safeguar the AI agents from indirect prompt injections, where malicious content in websites or third-party iframes could trick the AI agent into unwanted actions. One of the core component is the user alignment critic, a separate AI model isolated from untrusted content. This critic reviews every planned agent action and checks if it matches the user's original intent before allowing it to proceed. Only metadata about the proposed action is exposed to the critic, not the full untrusted web content, helping prevent content-based prompt hijacking. Another measure is enhanced origin isolation. The agent's interactions are constrained to specific web origins relevant to the user's task. This limits the scope in which a compromised or malicious site can influence agentic operations, ensuring that the end user's data on the device is protected from any origin-based attacks. For critical steps, such as filling out forms, submitting payments or handling sensitive data, the system automatically triggers user confirmation prompts. This layer was added so that even if an agent suggests a high-impact action, the final approval rests with the user. Additionally, real-time threat detection and red-teaming exercises (using ethical hackers to identify vulnerabilities in the system) are also implemented by Google to continuously test and refine the safety guardrails. The company says these safety layers build on prior security and privacy work in Chrome and are meant to adapt the browser for a new "agentic web" era, where AI agents are first-class participants rather than passive assistants.
[9]
Google's increasing Chrome security for agentic AI actions with User Alignment Critic model
User Alignment Critic protects against AI misuse and prompt injection attacks Google is gearing up for a major shift in how people interact with the web. As its Gemini assistant moves beyond simple suggestions and gains the ability to perform actions directly inside Chrome, the browser is being reshaped to handle a new category of risk. Letting an AI click, navigate, and complete tasks on a live website introduces threats that traditional browser protections were never built to manage. To prepare for this next phase, Google is rolling out a new safety layer called the User Alignment Critic model, a system designed to keep AI behaviour predictable, supervised, and firmly tied to user intent. Also read: ChatGPT as a grocery store: From recipe to goods delivery, OpenAI's chatbot is evolving Agentic browsing allows an AI assistant to click buttons, fill forms, navigate menus, and complete tasks online without the user handling each step manually. This convenience brings a significant vulnerability known as indirect prompt injection. In simple terms, a malicious site can hide instructions inside text or code that attempts to steer the AI toward unwanted actions. These actions can be as mild as opening more pages or as serious as authorising payments or requesting sensitive data. Google is treating this threat as a structural challenge rather than a minor glitch. The company understands that as AI grows more autonomous in the browser, attacks will target the assistant instead of the user. Chrome's answer is a multilayered defence system meant to supervise and restrict AI behaviour. At the centre of this system is the User Alignment Critic model. It functions as a reviewer that evaluates each action the AI proposes before the browser carries it out. Instead of giving the model full access to a webpage, Google only provides metadata about the action. This reduces the possibility that the model itself could be influenced by harmful content. Also read: Google wants you to wear AI: The 2026 glasses plan explained The critic checks whether the action fits the user's request and whether it stays within allowed boundaries. If the action seems risky, confusing, or unrelated to the task, the model can block it. This creates an internal feedback loop where the AI must justify every step. Google is adding further control through something it calls Agent Origin Sets. When a user gives Gemini a task, Chrome will restrict the agent to a defined set of domains. This prevents the AI from drifting to unrelated sites or following links that attackers might place deliberately. It keeps the task focused and easier to supervise. The browser will also require human approval for any high risk operation. Payments, logins, form submissions involving personal data, and interaction with banking or government portals will remain under user control. Action logs will be accessible so users can track what the AI attempted, allowed, or rejected. Google's strategy signals a shift toward more autonomous AI features inside Chrome. But the company is aware that trust cannot be assumed. With this new system, Google aims to demonstrate that convenience and safety can coexist. The protections are not meant to slow down agentic AI. They are designed to ensure that every action taken on behalf of the user truly serves the user. As AI driven browsing becomes mainstream, such checks will likely become a standard part of how browsers are built. Google's approach suggests that autonomy must always come with accountability, and that the safest AI is the one that knows when to stop.
Share
Share
Copy Link
Google unveiled new security measures for Chrome's agentic features, including a User Alignment Critic model that monitors AI actions and Agent Origin Sets that restrict data access. The company is offering up to $20,000 through its bug bounty program for researchers who find vulnerabilities in these defenses against indirect prompt injection attacks.
Google is rolling out comprehensive security measures for Chrome as the browser prepares to launch agentic features powered by Gemini AI integration. The agentic browsing features, first previewed in September, will enable AI agents to autonomously perform multi-step tasks like booking tickets, shopping, and navigating websites on behalf of users
1
. However, these capabilities introduce serious risks, with Chrome security engineer Nathan Parker identifying indirect prompt injection as "the primary new threat facing all agentic browsers"2
. This threat occurs when AI models ingest malicious content from web pages that instructs them to ignore safety guardrails, potentially leading to unauthorized financial transactions or data leaks3
.
Source: BleepingComputer
At the core of Google Chrome's new AI security architecture is the User Alignment Critic, a separate Gemini-based LLM model that functions as a "high-trust system component" isolated from untrusted content
4
. This oversight mechanism runs after the planner model completes its work, double-checking each proposed action to determine whether it serves the user's stated goal1
. The User Alignment Critic sees only metadata about proposed actions, never accessing unfiltered web content, which prevents attackers from poisoning it through malicious prompts embedded in websites3
. When the critic identifies misaligned actions, it vetoes them and provides feedback to the planner model to reformulate its strategy, returning control to the user after repeated failures3
.
Source: Digit
Google extended Chrome's Site Isolation capabilities through Agent Origin Sets, which ensure AI agents only access data from origins relevant to the current task or explicitly shared by users
3
. A trustworthy gating function categorizes origins into read-only and read-writeable sets for each session1
. Read-only origins contain content Gemini can consume, like product listings on shopping sites while excluding banner ads, whereas read-writeable origins allow the agent to click or type on specific page elements1
. This separation bounds the threat vector of data leaks by ensuring only limited origin data reaches the agent and can only be passed to authorized writable origins1
. The gating function operates independently from untrusted web content and requires planner approval before adding new origins3
.
Source: Hacker News
Related Stories
Google is implementing strict user control measures for sensitive operations within its agentic browsing features. The AI agent will request explicit approval before navigating to banking or healthcare portals that handle sensitive data
5
. For sites requiring authentication, Chrome asks permission before using Google Password Manager credentials, with the agent's model having no exposure to password data1
. Users must also approve consequential actions like making purchases, sending messages, or completing financial transactions2
. Agents create work logs for user observability, allowing people to monitor what actions are being planned and executed on their behalf4
.Google revised its Vulnerability Rewards Program to incentivize security researchers to probe Chrome's agentic safeguards, offering payouts up to $20,000 for demonstrations that breach security boundaries
2
. The company developed automated red-teaming systems that generate test sites and LLM-driven attacks to continuously evaluate defenses, with new protections deployed quickly through Chrome's auto-update mechanism4
. Additional security measures include a prompt-injection classifier running parallel to the planner model's inference, checking each page for indirect prompt injection attempts alongside Safe Browsing and on-device scam detection3
. Google also investigates URLs through an observer model to prevent navigation to harmful model-generated addresses1
. This comprehensive approach comes as IT consultancy Gartner recently recommended enterprises block AI browsers until associated risks can be appropriately managed, highlighting concerns about employees potentially using AI agents to bypass mandatory cybersecurity training3
. The technique of using one machine learning model to moderate another, formalized in a Google DeepMind paper this year as "CaMeL" (CApabilities for MachinE Learning), represents an industry pattern for addressing AI safety challenges2
.Summarized by
Navi
[2]
[4]
[5]
1
Business and Economy

2
Business and Economy

3
Policy and Regulation
