Microsoft is laying out how Windows 11 will secure AI agents with a layered model that keeps Copilot Actions off by default, isolates agent activity, and gives users continuous visibility and control over what the agent does on their PC. The approach debuts in an upcoming preview for Windows Insiders via Copilot Labs and aligns with broader security commitments Microsoft highlighted alongside the latest wave of AI features in Windows 11.
What’s new: Copilot Actions

Copilot Actions is an AI agent that completes tasks by interacting with apps and files, using vision and reasoning to click, type, and scroll like a human. With your permission, it moves the assistant from passive chat to active collaboration on your PC. Windows will preview Copilot Actions in an experimental mode that can operate on local files, building on the earlier web version that handled actions like booking or ordering with explicit user approval. The preview centers on a new agent workspace that contains the agent’s activity in a separate, controlled environment distinct from the user’s desktop.
Why it matters
Agentic AI can streamline real work—updating documents, organizing files, or sending emails—but it also introduces risks like hallucinations and cross‑prompt injection, where malicious content can try to override instructions and trigger unintended actions such as data exfiltration or malware installs. Microsoft’s model aims to constrain agent capabilities, require clear consent, and provide transparent oversight so users stay informed and in charge at every step. These safeguards align with Windows’ broader commitments to roll out agentic features responsibly in preview and iterate with real‑world feedback before wider release.
Security principles
Windows is establishing durable security and privacy principles for agentic features, including distinct agent accounts, limited privileges, operational trust via signing and validation, and a privacy‑preserving design aligned to Microsoft’s published standards. Agents will only gain access to resources explicitly granted by the user and can have that access revoked at any time within well‑defined boundaries. Agents integrating with Windows must be signed by trusted sources so poorly behaved agents can be revoked and blocked using defense‑in‑depth measures.
Four security controls
- User control: Copilot Actions is disabled by default and can only be enabled through Settings > System > AI components > Agent tools > Experimental agentic features on Windows 11.
- Agent accounts: Actions execute under separate standard agent accounts to enable agent-level authorization and clear separation from the user’s account.
- Agent workspace: A contained environment with runtime isolation and granular permissions provides the agent its own desktop while limiting its visibility into the user’s activities.
- User transparency: Users can authorize, monitor, and take over agent actions at any time, with prompts for additional approval on sensitive steps.
When enabled, Copilot Actions runs under the agent account so agent activity is clearly distinguished from user actions across the system, aiding auditing and control. During the experimental preview, the agent can access only limited known folders—Documents, Downloads, Desktop, Pictures—and any broader access requires explicit user authorization enforced by standard Windows access control lists. Throughout execution, users can watch progress, intervene, and must approve sensitive or important operations before they proceed.
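The preview’s file-access model can be sketched as an allow-list check: access is permitted only inside the known folders, plus any folders the user explicitly grants later. The `AgentFileBroker` class below is a hypothetical illustration of that policy, not Microsoft’s implementation; in Windows the enforcement happens via standard access control lists on the agent account.

```python
from pathlib import Path

# Hypothetical sketch of the preview's file-access model: the agent may
# touch only a small set of known folders unless the user explicitly
# grants more. AgentFileBroker is illustrative, not a real Windows API.
KNOWN_FOLDERS = [
    Path.home() / name for name in ("Documents", "Downloads", "Desktop", "Pictures")
]

class AgentFileBroker:
    def __init__(self) -> None:
        self.user_grants: list[Path] = []  # extra folders the user approved

    def grant(self, folder: Path) -> None:
        """Record an explicit user authorization for an additional folder."""
        self.user_grants.append(folder.resolve())

    def is_allowed(self, target: Path) -> bool:
        """Permit access only inside known folders or user-granted ones."""
        target = target.resolve()
        allowed = KNOWN_FOLDERS + self.user_grants
        return any(target == root or root in target.parents for root in allowed)

broker = AgentFileBroker()
assert broker.is_allowed(Path.home() / "Documents" / "report.docx")
assert not broker.is_allowed(Path.home() / "AppData" / "secrets.txt")
broker.grant(Path.home() / "Projects")  # broader access needs explicit consent
assert broker.is_allowed(Path.home() / "Projects" / "notes.txt")
```

Note the default-deny shape: anything outside the allow-list is refused until the user opts in, mirroring the "explicit user authorization" requirement described above.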

Data and consent
Windows’ privacy‑preserving design ensures agents collect and process data only for clear purposes with transparency consistent with Microsoft’s privacy and responsible AI standards. The model emphasizes informed consent at activation and step‑up approvals for impactful actions, balancing productivity benefits with stringent oversight. These controls are intended to mitigate risks from novel attack classes like cross‑prompt injection while enabling useful real‑world automation scenarios.
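The step-up approval flow described above can be sketched as a gate that lets routine steps proceed while pausing sensitive ones until the user approves. The action names and the `ask_user` callback are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical sketch of step-up consent: routine agent steps run
# directly, while actions tagged as sensitive block until the user
# approves. These action names are illustrative examples.
SENSITIVE_ACTIONS = {"send_email", "delete_file", "install_app"}

def run_action(name: str, ask_user) -> str:
    """Execute an agent step, requiring explicit approval for sensitive ones."""
    if name in SENSITIVE_ACTIONS and not ask_user(f"Allow agent to {name}?"):
        return "blocked"
    return "executed"

# A routine step runs without a prompt; a sensitive step is gated on consent.
assert run_action("resize_image", lambda prompt: False) == "executed"
assert run_action("send_email", lambda prompt: True) == "executed"
assert run_action("send_email", lambda prompt: False) == "blocked"
```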
Enterprise and roadmap
Microsoft will expand identity support with Entra and Microsoft account integration and make Windows platform controls available in private preview for developers to build and validate agent experiences. The company plans to evolve defenses as agentic capabilities expand and will share more details at Microsoft Ignite 2025 in November. These efforts complement the broader Windows 11 security posture and the commitments announced with the latest AI updates, including user control, visibility, and a responsible preview‑first rollout.
How to enable the preview
Once available to Windows Insiders in Copilot Labs, Copilot Actions can be toggled on in Settings > System > AI components > Agent tools > Experimental agentic features, which will provision the separate agent account and workspace on the device. Users remain in full control to pause, take over, or disable the feature at any time, with clear prompts for sensitive actions and granular consent enforced by Windows security boundaries. Microsoft will refine the experience during preview with more granular security and privacy controls before broader availability.