Zenity CTO Warns of Major Security Flaws in Microsoft’s Copilot at Black Hat Conference

Zenity CTO Warns of Major Security Flaws in Microsoft's Copilot at Black Hat Conference
Zenity CTO Warns of Major Security Flaws in Microsoft's Copilot at Black Hat Conference

Zenity CTO Michael Bargury recently highlighted critical security flaws in Microsoft’s Copilot at the Black Hat conference. Bargury revealed that creating a secure Copilot Studio bot is challenging due to insecure default settings. He warned that these vulnerabilities could be exploited to gain unauthorized access to sensitive company data, especially given the widespread use of Copilot bots in large enterprises.

Microsoft Copilot, a tool that allows non-technical users to create conversational bots, uses internal business data to provide answers through a natural-language interface. However, the default settings in Copilot Studio are inadequate, making many bots easily discoverable and potentially vulnerable to data breaches.

Zenity found that a significant portion of these bots, particularly in large enterprises, are publicly accessible, creating a risk for data exfiltration if exploited.

Bargury’s team discovered thousands of Copilot bots exposed online, primarily due to Copilot Studio’s default settings, which previously allowed bots to be published on the web without authentication. Although Microsoft has since updated these settings to be more secure, the changes only apply to new installations. Businesses using older setups are advised to review and update their deployments to avoid potential security breaches.

Zenity CTO Warns of Major Security Flaws in Microsoft's Copilot at Black Hat Conference
Zenity CTO Warns of Major Security Flaws in Microsoft’s Copilot at Black Hat Conference

Zenity has also released a tool called CopilotHunter to help organizations detect and exploit vulnerabilities in Copilot bots, emphasizing the ease with which Copilot can be manipulated through indirect prompt injection attacks. Bargury compared these attacks to remote code execution (RCE), underscoring their severity when targeting enterprises with sensitive data.

Copilot’s vulnerabilities are not limited to data breaches; they can also be exploited to gain initial access to a network through common phishing tactics. Bargury demonstrated how an email could be used to alter banking information during a financial transaction, highlighting the potential for significant damage with minimal user interaction.

Bargury believes that the AI industry’s infancy is reflected in these vulnerabilities, as AI’s integration with enterprise data creates new attack surfaces, particularly through prompt injection. He noted that securing AI platforms like Copilot is challenging due to the need for flexibility, which often compromises security.

To address these concerns, Zenity has introduced another testing tool, “LOLCopilot,” for organizations to assess their vulnerability to Copilot exploits. Bargury stressed the importance of real-time monitoring and tracking potential RCEs in AI systems, recognizing that while Copilot offers valuable capabilities, ensuring its security is paramount to prevent significant breaches. Microsoft has acknowledged Zenity’s findings and continues to work on improving security for Copilot users.

Nate O'Hara
Nathan is a seasoned commerce writer with a passion for unraveling the intricacies of the business world and distilling them into engaging narratives. During his academic journey, he delved deep into subjects like economics, marketing, and entrepreneurship, honing his analytical skills and developing a keen understanding of market dynamics.