The democratization of artificial intelligence (AI) is rapidly reshaping enterprise operations, but as promising as it sounds, it introduces significant and often overlooked risks. Tenable’s latest research highlights these dangers by successfully jailbreaking Microsoft Copilot Studio, illustrating how AI’s widespread accessibility could lead to severe security vulnerabilities.
Organizations are embracing ‘no-code’ platforms to empower employees to create their own AI agents. These platforms promise increased efficiency without the need for developers. However, this well-intentioned automation lacks strict governance, potentially leading to catastrophic failures.
Summary of Research
Tenable Research embarked on a mission to showcase how easily AI agents can be manipulated. They crafted an AI travel agent using Microsoft Copilot Studio to manage customer travel reservations autonomously, handling tasks such as creating and modifying reservations.
The AI travel agent was equipped with demo data, including customer names, contact information, and credit card details. It was instructed to verify customer identities before sharing information or modifying bookings. Despite these safeguards, Tenable Research successfully hijacked the AI agent’s workflow using a technique known as prompt injection. This allowed them to book a free vacation and extract sensitive credit card information.
Business Implications of the Findings
The research findings have profound business implications, including:
– Data Breaches and Regulatory Exposure: Tenable Research demonstrated how the AI agent could bypass identity verification, leaking other customers’ payment card information. The agent, designed to handle sensitive data, was manipulated into exposing full customer records.
– Revenue Loss and Fraud: The agent’s broad ‘edit’ permissions, intended for updating travel dates, could also be exploited to change critical financial fields. Tenable Research instructed the agent to alter a trip’s price to $0, effectively granting unauthorized free services.
Keren Katz, Senior Group Manager of AI Security Product and Research at Tenable, stated, ‘AI agent builders, like Copilot Studio, democratize the ability to build powerful tools, but they also democratize the ability to execute financial fraud, thereby creating significant security risks without even knowing it.’ She emphasized that this power can easily transform into tangible security risks.
The Importance of AI Governance and Enforcement
A critical takeaway from this research is that AI agents often possess excessive permissions that are invisible to non-developers. To mitigate these risks, business leaders must implement robust governance and enforce strict security protocols before deploying these tools.
Tenable offers several recommendations to prevent data leakage:
– Preemptive Visibility: Map out exactly which systems and data stores an agent can interact with before deployment.
– Least Privilege Access: Minimize write and update capabilities to only what is necessary for the agent’s core use case.
– Active Monitoring: Continuously track agent actions for signs of data leakage or deviations from intended business logic.
The findings from Tenable’s research serve as a stark reminder of the potential security pitfalls associated with the democratization of AI. As organizations continue to adopt these technologies, a balanced approach that includes robust governance and security enforcement is crucial to prevent unintended consequences.
Note: This article is inspired by content from https://www.smehorizon.com/no-code-agentic-ai-can-be-used-for-financial-fraud-and-workflow-hijacking/. It has been rephrased for originality. Images are credited to the original source.

Leave a Reply