4 Sources
[1]
Employees regularly paste company secrets into ChatGPT
Employees could be opening up to OpenAI in ways that put sensitive data at risk. According to a study by security biz LayerX, a large number of corporate users paste Personally Identifiable Information (PII) or Payment Card Industry (PCI) numbers right into ChatGPT, even if they're using the bot without permission.

In its Enterprise AI and SaaS Data Security Report 2025, LayerX blames the growing, largely uncontrolled usage of generative AI tools for exfiltrating personal and payment data from enterprise environments. With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy and paste operations include PII/PCI.

"With 82 percent of pastes coming from unmanaged personal accounts, enterprises have little to no visibility into what data is being shared, creating a massive blind spot for data leakage and compliance risks," the report says.

About 40 percent of file uploads to generative AI sites include PII/PCI data, it's claimed, with 39 percent of these uploads coming from non-corporate accounts. LayerX monitors data in the browser via an enterprise browser extension, meaning that the company sees only web-based AI interaction and not API calls from apps.

Or Eshed, CEO of LayerX, in response to a question from The Register about whether AI data leakage has caused actual harm, pointed to Samsung's decision in 2023 to temporarily ban staff usage of ChatGPT after an employee reportedly uploaded sensitive code to the chatbot. He said that having enterprise data leak via AI tools can raise geopolitical issues (e.g. with Chinese AI models like Qwen), regulatory and compliance concerns, and lead to corporate data being inappropriately used for training if exposed through personal AI tool usage.

The LayerX report says that app usage through non-corporate accounts (shadow IT) is common not only for generative AI (67 percent), but also for chat/instant messaging (87 percent), online meetings (60 percent), Salesforce (77 percent), Microsoft Online (68 percent), and Zoom (64 percent). In a surprising endorsement of shadow IT, Microsoft recently said it will support personal Copilot account usage in corporate Microsoft 365 accounts. That may be a reflection of Microsoft's discomfort with the dominance of OpenAI's ChatGPT, which LayerX says has become the de facto enterprise standard AI tool.

"Amongst all AI apps, ChatGPT dominates enterprise AI usage, with over 9 in 10 employees accessing it compared to far lower adoption of alternatives like Google Gemini (15 percent), Claude (5 percent), and Copilot (~2-3 percent)," the report says, adding that most people (83.5 percent) use just one AI tool.

"We see that users have a preferred AI platform and even if the business has an 'official' AI or a licensed one, users pick whatever they want," Eshed told The Register in an email. "In this case, it is overwhelmingly ChatGPT. In other words, users prefer ChatGPT."

Asked about the survey's figures on Microsoft Copilot adoption in enterprises, Eshed cited a report claiming that Microsoft had "a 1.81 percent conversion rate across the 440 million Microsoft 365 subscribers" and noted that number "is almost identical to our findings (about 2 percent)."
ChatGPT's enterprise penetration comes to 43 percent, LayerX's report says, approaching the popularity of applications like Zoom (75 percent penetration) and Google services (65 percent) while surpassing the penetration of Slack (22 percent), Salesforce (18 percent), and Atlassian (15 percent). Overall, the LayerX report finds AI usage in the enterprise is growing rapidly, accounting for 11 percent of all application usage, just behind email (20 percent), online meetings (20 percent), and office productivity applications (14 percent).

Employee affinity for generative AI, the security firm argues, means that CISOs have to get serious about enforcing Single Sign-On (SSO) across every business-critical application if they want to have visibility into data flows.

Asked to provide specifics about the number of customers contributing data for the report, a LayerX spokesperson replied that the company did not want to reveal exact figures on its customer base. Eshed said LayerX's client base consists of "dozens of global enterprises and large enterprises (1,000-100,000 users) primarily in financial services, healthcare, services and semiconductors. Most of our customers are in North America but we have customers in all 5 continents and any vertical." ®
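The Register piece notes that LayerX's telemetry comes from an enterprise browser extension, which is why it sees web-based AI interactions but not API calls from apps. As a rough sketch of how that kind of in-browser monitoring can work in general (this is not LayerX's implementation; the domain list, the PII/PCI regexes, and the reporting endpoint below are all hypothetical), a content script might listen for paste events on known GenAI sites and flag ones that look like they contain sensitive data:

```typescript
// content-script.ts -- illustrative sketch only, not LayerX's implementation.
// Runs on pages matched by the extension manifest; the host list is hypothetical.

const GENAI_HOSTS = ["chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"];

// Very rough PII/PCI heuristics; a production DLP engine would use validated detectors.
const SSN_PATTERN = /\b\d{3}-\d{2}-\d{4}\b/;
const CARD_PATTERN = /\b(?:\d[ -]?){13,16}\b/;

function looksSensitive(text: string): boolean {
  return SSN_PATTERN.test(text) || CARD_PATTERN.test(text);
}

document.addEventListener("paste", (event: ClipboardEvent) => {
  if (!GENAI_HOSTS.includes(window.location.hostname)) return;

  const pasted = event.clipboardData?.getData("text") ?? "";
  if (pasted && looksSensitive(pasted)) {
    // Report only metadata about the event (not the pasted text itself)
    // to a hypothetical internal audit endpoint.
    void fetch("https://dlp.example.internal/api/paste-events", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        host: window.location.hostname,
        length: pasted.length,
        flagged: true,
        timestamp: new Date().toISOString(),
      }),
    });
  }
});
```

Reporting only metadata (host, length, a flag) rather than the pasted content keeps the monitoring pipeline from becoming a second copy of the sensitive data.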
[2]
New Research: AI Is Already the #1 Data Exfiltration Channel in the Enterprise
For years, security leaders have treated artificial intelligence as an "emerging" technology, something to keep an eye on but not yet mission-critical. A new Enterprise AI and SaaS Data Security Report by AI & Browser Security company LayerX proves just how outdated that mindset has become. Far from a future concern, AI is already the single largest uncontrolled channel for corporate data exfiltration -- bigger than shadow SaaS or unmanaged file sharing.

The findings, drawn from real-world enterprise browsing telemetry, reveal a counterintuitive truth: the problem with AI in enterprises isn't tomorrow's unknowns, it's today's everyday workflows. Sensitive data is already flowing into ChatGPT, Claude, and Copilot at staggering rates, mostly through unmanaged accounts and invisible copy/paste channels. Traditional DLP tools -- built for sanctioned, file-based environments -- aren't even looking in the right direction.

In just two years, AI tools have reached adoption levels that took email and online meetings decades to achieve. Almost one in two enterprise employees (45%) already use generative AI tools, with ChatGPT alone hitting 43% penetration. Compared with other SaaS tools, AI accounts for 11% of all enterprise application activity, rivaling file-sharing and office productivity apps. The twist? This explosive growth hasn't been accompanied by governance. Instead, the vast majority of AI sessions happen outside enterprise control. 67% of AI usage occurs through unmanaged personal accounts, leaving CISOs blind to who is using what, and what data is flowing where.

Perhaps the most surprising and alarming finding is how much sensitive data is already flowing into AI platforms: 40% of files uploaded into GenAI tools contain PII or PCI data, and employees are using personal accounts for nearly four in ten of those uploads. Even more revealing: files are only part of the problem. The real leakage channel is copy/paste. 77% of employees paste data into GenAI tools, and 82% of that activity comes from unmanaged accounts. On average, employees perform 14 pastes per day via personal accounts, with at least three containing sensitive data. That makes copy/paste into GenAI the #1 vector for corporate data leaving enterprise control. It's not just a technical blind spot; it's a cultural one. Security programs designed to scan attachments and block unauthorized uploads miss the fastest-growing threat entirely.

Security leaders often assume that "corporate" accounts equate to secure access. The data proves otherwise. Even when employees use corporate credentials for high-risk platforms like CRM and ERP, they overwhelmingly bypass SSO: 71% of CRM and 83% of ERP logins are non-federated. That makes a corporate login functionally indistinguishable from a personal one. Whether an employee signs into Salesforce with a Gmail address or with a password-based corporate account, the outcome is the same: no federation, no visibility, no control.

While AI is the fastest-growing channel of data leakage, instant messaging is the quietest. 87% of enterprise chat usage occurs through unmanaged accounts, and 62% of users paste PII/PCI into them. The convergence of shadow AI and shadow chat creates a dual blind spot where sensitive data constantly leaks into unmonitored environments. Together, these findings paint a stark picture: security teams are focused on the wrong battlefields. The war for data security isn't in file servers or sanctioned SaaS.
It's in the browser, where employees blend personal and corporate accounts, shift between sanctioned and shadow tools, and move sensitive data fluidly across both. The report's recommendations are clear, and unconventional: The surprising truth revealed by the data is this: AI isn't just a productivity revolution, it's a governance collapse. The tools employees love most are also the least controlled, and the gap between adoption and oversight is widening every day. For security leaders, the implications are urgent. Waiting to treat AI as "emerging" is no longer an option. It's already embedded in workflows, already carrying sensitive data, and already serving as the leading vector for corporate data loss. The enterprise perimeter has shifted again, this time into the browser. If CISOs don't adapt, AI won't just shape the future of work, it will dictate the future of data breaches. The new research report from LayerX provides the full scope of these findings, offering CISOs and security teams unprecedented visibility into how AI and SaaS are really being used inside the enterprise. Drawing on real-world browser telemetry, the report details where sensitive data is leaking, which blind spots carry the greatest risk, and what practical steps leaders can take to secure AI-driven workflows. For organizations seeking to understand their true exposure and how to protect themselves, the report delivers the clarity and guidance needed to act with confidence.
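Since the report singles out copy/paste of PCI data as the top leakage vector, it may help to see how little code a first-pass payment-card check actually requires. The sketch below assumes nothing about LayerX's detection logic; it simply scans free text for digit runs that pass the standard Luhn checksum used to validate card numbers:

```typescript
// luhn.ts -- minimal sketch of PCI detection in pasted text (illustrative only).

// Luhn checksum: the standard validity check for payment card numbers.
function passesLuhn(digits: string): boolean {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48; // '0' -> 0
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

// Scan free text for 13-19 digit runs (allowing spaces/dashes) that pass Luhn.
function containsLikelyCardNumber(text: string): boolean {
  const candidates = text.match(/\b(?:\d[ -]?){13,19}\b/g) ?? [];
  return candidates.some((c) => {
    const digits = c.replace(/[ -]/g, "");
    return digits.length >= 13 && digits.length <= 19 && passesLuhn(digits);
  });
}

// Example: a well-known test card number passes, ordinary prose does not.
console.log(containsLikelyCardNumber("card: 4111 1111 1111 1111")); // true
console.log(containsLikelyCardNumber("quarterly revenue grew 11%")); // false
```

A real DLP engine would combine this with issuer-prefix checks and surrounding context to cut false positives, since some arbitrary 16-digit strings (order numbers, tracking IDs) can pass the Luhn check by chance.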
[3]
Employees are unknowingly leaking company secrets through ChatGPT, new report warns
A disturbing new report finds ChatGPT and Copilot are already the biggest source of workplace data leaks -- here's what we know.

As more companies adopt generative AI like ChatGPT, Microsoft Copilot and Claude to improve productivity and workflow, they are discovering these tools are exposing company secrets at an alarming rate. A new Cyera report highlights that AI chats are now the No. 1 cause of data leaks in the workplace, surpassing both cloud storage and email for the first time. And the scariest part? Most of it is happening so far under the radar that companies aren't even noticing.

The research shows that nearly 50% of enterprise employees are using generative AI at work, often pasting sensitive information such as financial data, personally identifiable data and even strategy docs directly into AI chatbots. This type of information should never be shared with AI, so why are users doing it? In most cases, these interactions happen through personal, unmanaged AI accounts like ChatGPT or Gemini, making them invisible to corporate security systems -- yet 77% of these interactions involve real company data.

Because the data is shared through copy/paste actions within chat windows rather than direct file uploads, it bypasses traditional data-loss prevention tools entirely. Most security platforms are built to catch file attachments, suspicious downloads or outbound emails. But AI conversations look like normal web traffic -- yes, even when they contain confidential info. A 2025 LayerX enterprise report found that 67% of AI interactions happen on personal accounts, which means IT teams can't monitor or restrict them. Because IT teams are unable to monitor personal logins or provide oversight on personal accounts, AI becomes a blind spot.

The reports aren't suggesting companies ban AI outright; instead, they are a wake-up call to companies and users to tighten controls, improve visibility and provide critical oversight. Here's what the researchers suggest: this may seem obvious, but if you are an employee, do not paste anything into an AI chat that you wouldn't post publicly on the internet.

AI is still fairly new in the workplace setting, so employees are learning how to use the tools while also juggling what they should be used for. This can get sticky because most employees would not intentionally leak data. For instance, a simple prompt like "Summarize this report for me" could seem to an employee as if they are simply using AI to get ahead and be more productive, yet it could put an entire company at risk if the wrong document is pasted into the chat.

In the race to boost productivity with AI, one innocent copy-paste could be all it takes to expose your company's secrets. Knowing the threat is there is the first step in boosting security.
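The Tom's Guide piece makes the point that AI conversations look like ordinary web traffic to most security tooling. One coarse mitigation, sketched below under the assumption of a proxy or secure web gateway that can log outbound requests (the host list is hypothetical and incomplete, and this is not taken from any of the cited reports), is simply to classify egress by destination hostname so GenAI traffic at least becomes visible:

```typescript
// genai-egress.ts -- illustrative sketch: label outbound requests to known GenAI hosts.
// The host list is hypothetical and would need ongoing maintenance in practice.

const KNOWN_GENAI_HOSTS = new Set([
  "chat.openai.com",
  "chatgpt.com",
  "gemini.google.com",
  "claude.ai",
  "copilot.microsoft.com",
]);

interface EgressEvent {
  url: string;
  user: string;
  bytesSent: number;
}

// Returns a label a proxy or secure web gateway could attach to its request log.
function classifyEgress(event: EgressEvent): "genai" | "other" {
  try {
    const host = new URL(event.url).hostname;
    return KNOWN_GENAI_HOSTS.has(host) ? "genai" : "other";
  } catch {
    return "other"; // unparsable URL
  }
}

// Example: a chatbot request vs. an ordinary intranet page load.
console.log(classifyEgress({ url: "https://chatgpt.com/", user: "a.user", bytesSent: 4096 })); // "genai"
console.log(classifyEgress({ url: "https://intranet.example.com/reports", user: "a.user", bytesSent: 512 })); // "other"
```

Hostname classification alone says nothing about what data was sent, but it restores the basic visibility that the reports say most enterprises currently lack.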
[4]
Watch out - your workers might be pasting company secrets into ChatGPT
Enterprises face major blind spots in data leakage and compliance due to unmonitored GenAI use, experts say.

ChatGPT and other Generative Artificial Intelligence (GenAI) tools are transforming what "risk of Shadow IT" means, new research has found, as employees are becoming a little too open - and many have even provided the tool with Personally Identifiable Information (PII) or Payment Card Industry (PCI) numbers willingly.

Shadow IT is the practice of using programs and apps in a business environment that weren't approved or otherwise vetted by the IT security department. Employees often use apps they're not allowed to because it's easy and convenient - things like web-based image-to-pdf converters, WhatsApp, personal cloud storage solutions like Dropbox, and similar. But research from LayerX claims this is opening up companies to all sorts of cyber-risks, from introducing malware and ransomware to corporate infrastructure, to leaking sensitive data via unprotected cloud storage, or uploading classified documents to shady services.

The company's latest Enterprise AI and SaaS Data Security Report 2025 found almost half (45%) of enterprise employees are now using generative AI in one form or another. Of those, more than three-quarters (77%) have been copying and pasting data into the tool, and more than a fifth (22%) have done the same with PII/PCI.

"With 82 percent of pastes coming from unmanaged personal accounts, enterprises have little to no visibility into what data is being shared, creating a massive blind spot for data leakage and compliance risks," the report says.

Furthermore, roughly two in five files uploaded to generative AI sites also contain this type of information, while 39% of these uploads came from non-corporate accounts. ChatGPT is by far the most popular GenAI tool, with more than 90% of employees using it. The vast majority (around 83%) use just one tool. Other notable mentions include Gemini (15%), Claude (5%), and Copilot (around 3%).

Via The Register
Recent studies reveal that employees are inadvertently leaking sensitive company information through generative AI tools, particularly ChatGPT, posing significant security and compliance risks for enterprises.
Generative AI tools, particularly ChatGPT, have emerged as the leading channel for corporate data exfiltration, surpassing traditional security concerns like shadow IT and unmanaged file sharing. Recent studies by LayerX and Cyera have shed light on this growing issue, revealing alarming trends in enterprise AI usage and its associated risks [1][2].

According to LayerX's Enterprise AI and SaaS Data Security Report 2025, approximately 45% of enterprise employees now use generative AI tools, with ChatGPT dominating the landscape at over 90% usage [1][4]. This rapid adoption has outpaced governance measures, resulting in 67% of AI usage occurring through unmanaged personal accounts [2].

The most concerning finding is the prevalence of sensitive data being shared with AI tools. The study reveals that 77% of AI users have been copying and pasting data into their chatbot queries, with 22% of these operations including Personally Identifiable Information (PII) or Payment Card Industry (PCI) data [1]. Additionally, about 40% of file uploads to generative AI sites contain PII/PCI data, with 39% of these uploads coming from non-corporate accounts [2].
Traditional data loss prevention tools are ill-equipped to handle this new threat vector. Copy-paste actions within chat windows bypass conventional security measures, appearing as normal web traffic even when they contain confidential information [3]. This creates a significant blind spot for enterprise security teams.
The findings underscore the urgent need for enterprises to adapt their security strategies. Recommendations include:
1. Enforcing Single Sign-On (SSO) across business-critical applications to regain visibility into data flows [1].
2. Training employees not to paste anything into an AI chat that they wouldn't post publicly on the internet [3].
3. Tightening controls and improving oversight of AI usage, including unmanaged personal accounts [3].
As AI tools become increasingly integrated into enterprise workflows, addressing these security challenges is crucial to prevent data breaches and maintain regulatory compliance.
Summarized by
Navi