Azure AI Foundry brings collectively safety, security, and governance in a layered course of enterprises can comply with to construct belief of their brokers.
This weblog publish is the sixth out of a six-part weblog collection referred to as Agent Manufacturing unit which shares greatest practices, design patterns, and instruments to assist information you thru adopting and constructing agentic AI.
Belief as the subsequent frontier
Belief is quickly turning into the defining problem for enterprise AI. If observability is about seeing, then safety is about steering. As brokers transfer from intelligent prototypes to core enterprise methods, enterprises are asking a tougher query: how can we maintain brokers secure, safe, and underneath management as they scale?
The reply isn’t a patchwork of level fixes. It’s a blueprint. A layered strategy that places belief first by combining id, guardrails, evaluations, adversarial testing, knowledge safety, monitoring, and governance.
Why enterprises have to create their blueprint now
Throughout industries, we hear the identical considerations:
- CISOs fear about agent sprawl and unclear possession.
- Safety groups want guardrails that hook up with their present workflows.
- Builders need security inbuilt from day one, not added on the finish.
These pressures are driving the shift left phenomenon. Safety, security, and governance duties are transferring earlier into the developer workflow. Groups can’t wait till deployment to safe brokers. They want built-in protections, evaluations, and coverage integration from the beginning.
Knowledge leakage, immediate injection, and regulatory uncertainty stay the highest blockers to AI adoption. For enterprises, belief is now a key deciding consider whether or not brokers transfer from pilot to manufacturing.
What secure and safe brokers appear to be
From enterprise adoption, 5 qualities stand out:
- Distinctive id: Each agent is thought and tracked throughout its lifecycle.
- Knowledge safety by design: Delicate data is assessed and ruled to cut back oversharing.
- Constructed-in controls: Hurt and danger filters, risk mitigations, and groundedness checks scale back unsafe outcomes.
- Evaluated towards threats: Brokers are examined with automated security evaluations and adversarial prompts earlier than deployment and all through manufacturing.
- Steady oversight: Telemetry connects to enterprise safety and compliance instruments for investigation and response.

These qualities don’t assure absolute security, however they’re important for constructing reliable brokers that meet enterprise requirements. Baking these into our merchandise displays Microsoft’s strategy to reliable AI. Protections are layered throughout the mannequin, system, coverage, and person expertise ranges, constantly improved as brokers evolve.
How Azure AI Foundry helps this blueprint

Azure AI Foundry brings collectively safety, security, and governance capabilities in a layered course of enterprises can comply with to construct belief of their brokers.
- Entra Agent ID
Coming quickly, each agent created in Foundry shall be assigned a singular Entra Agent ID, giving organizations visibility into all lively brokers throughout a tenant and serving to to cut back shadow brokers. - Agent controls
Foundry gives trade first agent controls which can be each complete and inbuilt. It’s the solely AI platform with a cross-prompt injection classifier that scans not simply immediate paperwork but additionally software responses, e mail triggers, and different untrusted sources to flag, block, and neutralize malicious directions. Foundry additionally offers controls to stop misaligned software calls, excessive danger actions, and delicate knowledge loss, together with hurt and danger filters, groundedness checks, and guarded materials detection.

- Danger and security evaluations
Evaluations present a suggestions loop throughout the lifecycle. Groups can run hurt and danger checks, groundedness scoring, and guarded materials scans each earlier than deployment and in manufacturing. The Azure AI Crimson Teaming Agent and PyRIT toolkit simulate adversarial prompts at scale to probe habits, floor vulnerabilities, and strengthen resilience earlier than incidents attain manufacturing. - Knowledge management with your personal sources
Commonplace agent setup in Azure AI Foundry Agent Service permits enterprises to carry their very own Azure sources. This contains file storage, search, and dialog historical past storage. With this setup, knowledge processed by Foundry brokers stays inside the tenant’s boundary underneath the group’s personal safety, compliance, and governance controls. - Community isolation
Foundry Agent Service helps personal community isolation with customized digital networks and subnet delegation. This configuration ensures that brokers function inside a tightly scoped community boundary and work together securely with delicate buyer knowledge underneath enterprise phrases. - Microsoft Purview
Microsoft Purview helps lengthen knowledge safety and compliance to AI workloads. Brokers in Foundry can honor Purview sensitivity labels and DLP insurance policies, so protections utilized to knowledge carry by into agent outputs. Compliance groups may also use Purview Compliance Supervisor and associated instruments to evaluate alignment with frameworks just like the EU AI Act and NIST AI RMF, and securely work together together with your delicate buyer knowledge underneath your phrases. - Microsoft Defender
Foundry surfaces alerts and suggestions from Microsoft Defender instantly within the agent atmosphere, giving builders and directors visibility into points equivalent to immediate injection makes an attempt, dangerous software calls, or uncommon habits. This similar telemetry additionally streams into Microsoft Defender XDR, the place safety operations middle groups can examine incidents alongside different enterprise alerts utilizing their established workflows. - Governance collaborators
Foundry connects with governance collaborators equivalent to Credo AI and Saidot. These integrations permit organizations to map analysis outcomes to frameworks together with the EU AI Act and the NIST AI Danger Administration Framework, making it simpler to show accountable AI practices and regulatory alignment.
Blueprint in motion
From enterprise adoption, these practices stand out:
- Begin with id. Assign Entra Agent IDs to determine visibility and stop sprawl.
- Constructed-in controls. Use Immediate Shields, hurt and danger filters, groundedness checks, and guarded materials detection.
- Repeatedly consider. Run hurt and danger checks, groundedness scoring, protected materials scans, and adversarial testing with the Crimson Teaming Agent and PyRIT earlier than deployment and all through manufacturing.
- Defend delicate knowledge. Apply Purview labels and DLP so protections are honored in agent outputs.
- Monitor with enterprise instruments. Stream telemetry into Defender XDR and use Foundry observability for oversight.
- Join governance to regulation. Use governance collaborators to map analysis knowledge to frameworks just like the EU AI Act and NIST AI RMF.
Proof factors from our prospects
Enterprises are already creating safety blueprints with Azure AI Foundry:
- EY makes use of Azure AI Foundry’s leaderboards and evaluations to match fashions by high quality, value, and security, serving to scale options with better confidence.
- Accenture is testing the Microsoft AI Crimson Teaming Agent to simulate adversarial prompts at scale. This permits their groups to validate not simply particular person responses, however full multi-agent workflows underneath assault circumstances earlier than going stay.
Be taught extra
Did you miss these posts within the Agent Manufacturing unit collection?