The Rise of AI Agents: A New Era in Workforce Collaboration

In recent years, frontier organizations have been at the forefront of redefining how work is accomplished, with humans and AI agents working collaboratively to enhance human endeavor. Data from Microsoft reveals that these human-agent teams are expanding rapidly and are being broadly adopted on a global scale.

The swift deployment of AI agents is outpacing some companies’ ability to monitor them, which presents a significant business risk. Organizations are now in urgent need of robust governance and security frameworks to adopt agents safely, foster innovation, and mitigate risks. Much like human users, AI agents require protection through observability, governance, and robust security protocols anchored in Zero Trust principles. Enterprises that will thrive in the next phase of AI adoption will be those that swiftly integrate business, IT, security, and developer teams to observe, govern, and secure their AI transformation.

Across Microsoft’s ecosystem, customers are building and deploying agents on diverse platforms—from Fabric and Foundry to Copilot Studio and Agent Builder—indicating a significant shift toward AI-powered automation in the workflow. Interestingly, agent-building is no longer confined to technical roles; employees across various positions are now creating and utilizing agents in their everyday work. Microsoft data shows over 80% of the Fortune 500 companies are deploying active agents using low-code/no-code tools.

With the expansion of agent use and the multiplication of transformation opportunities, it is crucial to establish foundational controls. Zero Trust principles for agents, similar to those for human employees, involve:

– Least privilege access: Providing every user, AI agent, or system with only what they require—nothing more.
– Explicit verification: Continuously verifying who or what is requesting access using identity, device health, location, and risk level.
– Assuming compromise is possible: Designing systems with the expectation that attackers might infiltrate.

The rapid deployment of agents can surpass security and compliance controls, elevating the risk of shadow AI. Malicious actors could exploit agents’ access and privileges, converting them into unintended “double agents.” Similar to human employees, an agent with excessive access—or erroneous instructions—can become a vulnerability.

The threat of double agents being exploited if left unmanaged, mis-permissioned, or manipulated by untrusted input is not merely theoretical. Recently, Microsoft’s Defender team uncovered a fraudulent campaign with multiple actors exploiting an AI attack technique known as “memory poisoning” to manipulate AI assistants’ memory persistently, subtly influencing future responses and undermining the system’s accuracy.

In independent, secure testbed research conducted by Microsoft’s AI Red Team, researchers documented instances where agents were misled by deceptive interface elements, such as following harmful instructions embedded in everyday content. The Red Team also found that agents’ reasoning could be subtly redirected through manipulated task framing. These findings underscore the necessity for enterprises to have full observability and management of all agents within their enterprise, enabling centrally enforced controls and integrated risk management.

Frontier firms are leveraging the AI wave to modernize governance, minimize unnecessary data exposure, and implement enterprise-wide controls. They are coupling this with a cultural shift: business leaders may own the AI strategy, but IT and security teams are now essential partners in observability, governance, and safe experimentation. For these organizations, securing agents is not a limitation—it is a competitive advantage, built on treating AI agents like humans and applying the same Zero Trust principles.

It begins with observability, as it is impossible to protect what is unseen, and management is unfeasible without understanding. Observability involves having a control plane across all layers of the organization—IT, security, developers, and AI teams—to comprehend:

– What agents exist
– Who owns them
– What systems and data they access
– How they behave

Cybersecurity expert Vasu Jakkal from Microsoft discusses how AI-powered agents reshape cyber risk and what leaders must prioritize.

The path to mitigating AI risks is clear: treat AI agents with the same diligence as any employee or software service account. Organizations that succeed with AI agents will be those that prioritize observability, governance, and security. Achieving this necessitates collaboration across all teams and observability of AI agents at every organizational level: IT professionals, security teams, AI teams, and developers, all of which can be managed and observed through a unified central control platform.

Agent 365 is Microsoft’s unified control plane for managing AI agents across an organization. It provides a centralized, enterprise-grade system to register, govern, secure, observe, and operate AI agents—whether they are built on Microsoft platforms, open-source frameworks, or third-party systems.

Note: This article is inspired by content from https://www.microsoft.com/en-us/security/security-insider/emerging-trends/cyber-pulse-ai-security-report. It has been rephrased for originality. Images are credited to the original source.