Microsoft Copilot Exposes Thousands of Private GitHub Repositories, Raising Security Concerns

Curated by THEOUTPOST

On Thu, 27 Feb, 12:04 AM UTC

5 Sources


Security researchers discover that Microsoft's AI assistant Copilot can access and expose data from over 20,000 private GitHub repositories, affecting major tech companies and posing significant security risks.

AI-Powered Security Breach: Copilot Exposes Private GitHub Repositories

In a startling revelation, security researchers have uncovered a significant vulnerability in Microsoft's AI assistant, Copilot, which has been exposing data from thousands of private GitHub repositories. This discovery, made by Israeli cybersecurity firm Lasso, has sent shockwaves through the tech industry, highlighting the potential risks associated with AI-powered tools and data caching mechanisms 1.

The Scope of the Breach

Lasso's investigation revealed that over 20,000 GitHub repositories, which had been set to private in 2024, were still accessible through Copilot. This security breach affected more than 16,000 organizations, including major technology companies such as IBM, Google, PayPal, Tencent, Microsoft, and Amazon Web Services 2.

How the Breach Occurred

The root cause of this breach lies in the caching mechanism of Microsoft's Bing search engine. When repositories were temporarily made public, Bing indexed and cached their contents. Even after these repositories were switched back to private, Copilot retained access to the cached data, making it potentially accessible to anyone using the AI assistant 3.
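The failure mode described here — a cache that captures content while it is public and never re-checks visibility — can be sketched with a toy model. All class and repository names below are illustrative, not Bing's or GitHub's actual internals:

```python
# Toy model of the stale-cache failure mode: the index snapshots content
# at crawl time and serves that snapshot regardless of later visibility
# changes. Illustrative only, not Bing's actual implementation.

class Repo:
    def __init__(self, name, content, private=False):
        self.name = name
        self.content = content
        self.private = private

class SearchCache:
    """Indexes whatever is public at crawl time; never re-checks visibility."""
    def __init__(self):
        self._snapshots = {}

    def crawl(self, repo):
        if not repo.private:                 # only public repos get indexed...
            self._snapshots[repo.name] = repo.content

    def lookup(self, name):
        # ...but lookups read the stored snapshot, not the live repo, so
        # flipping the repo back to private does not purge the cached copy.
        return self._snapshots.get(name)

repo = Repo("acme/internal-tools", "API_KEY=sk-example-not-real")
cache = SearchCache()
cache.crawl(repo)          # repo is public at crawl time: content is cached
repo.private = True        # owner later flips it to private
print(cache.lookup("acme/internal-tools"))  # cached copy is still served
```

The point of the sketch is that the vulnerability lives in the cache's lookup path, not in the repository's access controls: correcting the repository's visibility fixes future crawls but does nothing about snapshots already taken.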

Sensitive Information at Risk

The exposed repositories contained highly sensitive data, including:

  1. Intellectual property
  2. Confidential corporate information
  3. Access keys and security tokens
  4. Tools for bypassing AI safety measures 4
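Credentials of the kind listed above tend to follow well-known formats, which is part of why cached repository contents are so immediately exploitable. A minimal scanner for two publicly documented token shapes — AWS access key IDs (`AKIA` plus 16 uppercase alphanumerics) and GitHub personal access tokens (`ghp_` plus 36 alphanumerics) — might look like this; the sample text is fabricated:

```python
import re

# Regexes for two publicly documented credential formats:
#   - AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics
#   - GitHub personal access tokens: "ghp_" followed by 36 alphanumerics
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan(text):
    """Return (kind, match) pairs for every credential-shaped string found."""
    return [(kind, m.group())
            for kind, rx in PATTERNS.items()
            for m in rx.finditer(text)]

# Fabricated sample resembling a leaked config file:
sample = "aws_key = AKIAABCDEFGHIJKLMNOP\ntoken = ghp_" + "a" * 36
for kind, value in scan(sample):
    print(kind, value)
```

Dedicated tools (GitHub's own secret scanning, or open-source scanners) cover far more formats, but even a two-pattern sweep like this illustrates how quickly exposed cache snapshots can be mined for usable keys.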

This breach has raised concerns about the potential for cybercriminals to manipulate Copilot into revealing confidential information, posing a significant security threat to affected organizations.

Microsoft's Response and Mitigation Efforts

When informed about the issue in November 2024, Microsoft initially classified it as a "low-severity" problem, describing the caching behavior as "acceptable." However, the company took some steps to address the situation:

  1. Removed links to Bing's cache from search results in December 2024
  2. Disabled public access to a special Bing user interface at cc.bingj.com 5

Despite these measures, Lasso researchers found that Copilot could still access the cached data, indicating that the fix was incomplete: the links were removed from search results, but the underlying cache itself had not been purged.

Implications and Recommendations

This incident highlights the challenges of managing data privacy and security in the age of AI-powered tools. It also underscores the importance of proper security practices when handling sensitive information in code repositories.

Experts recommend that affected organizations take the following steps:

  1. Rotate or revoke any compromised security credentials
  2. Review and update their data handling practices
  3. Implement stricter controls on repository visibility
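For the third step, repository visibility can be audited and corrected programmatically through GitHub's REST API (the documented `PATCH /repos/{owner}/{repo}` endpoint). The sketch below only builds the request rather than sending it — a real call needs a token with admin rights on the repository, and the owner/repo names here are placeholders:

```python
# Builds (but does not send) a GitHub REST API request that would make a
# repository private. The endpoint is the documented PATCH /repos/{owner}/{repo};
# the owner and repo values below are placeholders.

def make_visibility_patch(owner, repo):
    return {
        "method": "PATCH",
        "url": f"https://api.github.com/repos/{owner}/{repo}",
        "json": {"private": True},
        # A real call also needs headers such as:
        #   {"Authorization": "Bearer <token-with-admin-rights>",
        #    "Accept": "application/vnd.github+json"}
    }

req = make_visibility_patch("acme", "internal-tools")
print(req["method"], req["url"])
```

As the Lasso findings show, however, flipping visibility after an exposure is not sufficient on its own: any credential that was ever public should be treated as compromised and rotated, per the first recommendation above.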

As AI technologies continue to evolve, this incident serves as a stark reminder of the need for robust security measures and careful consideration of the potential risks associated with these powerful tools.

Continue Reading

DeepSeek's Cybersecurity Woes: Exposed Database Raises Serious Concerns

A cybersecurity firm discovers an unprotected DeepSeek database, exposing sensitive information and raising questions about the AI startup's security practices. (3 Sources)

ChatGPT macOS Vulnerability: Long-Term Data Exfiltration Risk Discovered

A critical vulnerability in ChatGPT's macOS app could have allowed hackers to plant false memories, enabling long-term data exfiltration. The flaw, now patched, highlights the importance of AI security. (2 Sources)

New AI Attack 'Imprompter' Covertly Extracts Personal Data from Chatbot Conversations

Security researchers have developed a new attack method called 'Imprompter' that can secretly instruct AI chatbots to gather and transmit users' personal information to attackers, raising concerns about the security of AI systems. (3 Sources)

Slack AI Vulnerability Raises Privacy Concerns

A security flaw in Slack's AI feature exposed private information, including login details. The issue highlights the potential risks of AI integration in workplace communication tools. (3 Sources)

ChatGPT Crawler Vulnerability: Potential for DDoS Attacks and Prompt Injection

A security researcher has uncovered a vulnerability in ChatGPT's crawler that could potentially be exploited for DDoS attacks and prompt injection, raising concerns about AI security and OpenAI's response to the issue. (4 Sources)
