Curated by THEOUTPOST
On Fri, 28 Feb, 4:02 PM UTC
2 Sources
[1]
57% of employees expose sensitive data to GenAI
More than half of enterprise employees who use generative AI (GenAI) assistants at work admit to entering sensitive company data into publicly available tools, according to a recent survey by TELUS Digital Experience. The survey, conducted in January 2025, gathered responses from 1,000 employees at U.S.-based companies with at least 5,000 staff members.

It found that 57% of these employees input confidential information into AI platforms such as ChatGPT, Google Gemini, and Microsoft Copilot. Furthermore, 68% reported accessing GenAI assistants through personal accounts rather than company-sanctioned platforms, a pattern known as 'shadow AI': AI usage that occurs without IT and security oversight, heightening the risk of data exposure and compliance violations.

Employees disclosed entering several categories of sensitive data into public GenAI tools: 31% reported entering personal details such as names and email addresses, 29% shared project-specific information including unreleased product details, 21% acknowledged inputting customer data such as order histories, and 11% admitted to entering financial information such as revenue figures and budgets.

Although 29% of respondents confirmed that their organizations have clear AI usage policies prohibiting the input of sensitive information into GenAI tools, enforcement of these policies is inconsistent. Only 24% of employees said they had received mandatory training on AI usage, while 44% were unsure whether an AI policy exists at their company at all. Furthermore, 50% were unsure whether they were adhering to AI-related guidelines, and 42% said there were no consequences for failing to follow these policies.
In terms of productivity, 60% of employees said AI assistants help them work faster, 57% said the tools improve efficiency, and 49% reported enhanced work performance. A significant 84% want to continue using AI in their roles, with 51% crediting AI with improving their creativity and 50% noting that it helps automate repetitive tasks.

Bret Kinsella, General Manager of Fuel iX™ at TELUS Digital, emphasized the dual nature of AI adoption, stating, "Generative AI is proving to be a productivity superpower for hundreds of business tasks. If their company doesn't provide AI tools, they'll bring their own, which is problematic." He warned that organizations remain largely unaware of the risks associated with shadow AI even as they benefit from its productivity gains. The survey found that 22% of employees with access to a company-provided GenAI assistant still opted to use personal accounts. Kinsella stressed the importance of aligning AI solutions with security and compliance requirements, urging organizations to implement structured AI policies and employee training programs, and to develop secure AI platforms that close these security gaps.

TELUS Digital will present the survey results at the 2025 Mobile World Congress (MWC25), March 3-6 in Barcelona, where it will showcase its Fuel iX platform, which focuses on letting organizations provide AI access while maintaining data security and compliance. Hesham Fahmy, Chief Information Officer at TELUS, highlighted the need for secure AI solutions that enable safe experimentation while protecting customer trust. The TELUS Digital AI at Work survey polled 1,000 adults aged 18 and older who are employed by large enterprises and confirmed that they use AI assistants at work.
[2]
89% of Enterprise GenAI Usage Is Invisible to Organizations Exposing Critical Security Risks, New Report Reveals
Organizations are either already adopting GenAI solutions, evaluating strategies for integrating these tools into their business plans, or both. Informed decision-making and effective planning require hard data, yet such data remains surprisingly scarce. The "Enterprise GenAI Data Security Report 2025" by LayerX delivers unprecedented insight into how AI tools are actually used in the workplace, while highlighting critical vulnerabilities. Drawing on real-world telemetry from LayerX's enterprise clients, the report is one of the few reliable sources documenting actual employee use of GenAI. For instance, it reveals that nearly 90% of enterprise AI usage occurs outside the visibility of IT, exposing organizations to significant risks such as data leakage and unauthorized access. Below are some of the report's key findings; read the full report to refine your security strategies, support data-driven risk management, and make the case for resources to strengthen GenAI data protection.

Use of GenAI in the Enterprise is Casual at Most (for Now)

While the GenAI hype may make it seem like the entire workforce has moved its office operations to GenAI, LayerX finds actual use a tad more lukewarm: approximately 15% of users access GenAI tools on a daily basis. That is not a percentage to be ignored, but it is not a majority. Yet. Here at The New Stack we concur with LayerX's analysis, predicting this trend will accelerate quickly, especially since 50% of users already use GenAI every other week. LayerX also finds that 39% of regular GenAI tool users are software developers, meaning the data most at risk of leaking through GenAI is source and proprietary code, alongside the risk of AI-generated, potentially insecure code entering your codebase.

How is GenAI Being Used? Who Knows?
Because LayerX sits in the browser, it has visibility into the use of shadow SaaS: employees using tools that were not approved by the organization's IT, or accessing tools through non-corporate accounts. While GenAI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through personal accounts. Even when employees do use corporate accounts, only about 12% of that access is done with SSO. As a result, nearly 90% of GenAI usage is invisible to the organization, leaving it blind to 'shadow AI' applications and the unsanctioned sharing of corporate information with AI tools.

50% of Pasting Activity into GenAI Includes Corporate Data

Remember the Pareto principle? In this case, while not all users use GenAI daily, those who do paste into GenAI applications do so frequently, and often with potentially confidential information. LayerX found that among users who submit data to GenAI tools, pasting of corporate data occurs almost four times a day on average. That data can include business information, customer data, financial plans, source code, and more.

How to Plan for GenAI Usage: What Enterprises Must Do Now

The report's findings signal an urgent need for new security strategies to manage GenAI risk. Traditional security tools fail to address the modern AI-driven workplace, where applications are browser-based; they lack the ability to detect, control, and secure AI interactions at the source: the browser. Browser-based security provides visibility into access to AI SaaS applications, unknown AI applications beyond ChatGPT, AI-enabled browser extensions, and more. That visibility can drive DLP controls for GenAI, allowing enterprises to safely include GenAI in their plans and future-proof their business.
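As a toy illustration of the kind of paste-time check a browser-based DLP control might apply (this is not LayerX's actual implementation; the pattern names, regexes, and blocking policy below are illustrative assumptions), the sketch scans outbound text for common sensitive markers such as email addresses and card-like numbers before allowing it through to a GenAI tool:

```python
import re

# Illustrative patterns only; real DLP engines use far richer detectors
# (exact-data matching, document fingerprinting, ML classifiers, etc.).
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key_like_token": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_paste(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in pasted text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_paste(text: str) -> bool:
    """Block the paste (return False) if any sensitive pattern matches."""
    return not scan_paste(text)

if __name__ == "__main__":
    safe = "Summarize the attached meeting notes in three bullet points."
    risky = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
    print(allow_paste(safe))   # safe text passes
    print(allow_paste(risky))  # flagged: email_address, card_like_number
```

A production control would sit in the browser (e.g. as an extension intercepting paste events) and would likely warn or redact rather than hard-block, but the scan-then-decide shape is the same.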
Recent surveys and reports highlight alarming trends in enterprise GenAI usage, with a majority of employees exposing sensitive data and organizations largely unaware of the extent of AI tool usage, posing significant security and compliance risks.
A recent survey by TELUS Digital Experience has revealed that 57% of enterprise employees using generative AI (GenAI) assistants at work have admitted to entering sensitive company data into publicly available tools [1]. The survey, conducted in January 2025, gathered responses from 1,000 employees in U.S.-based companies with at least 5,000 staff members.
Employees disclosed various categories of sensitive information entered into public GenAI tools:
- 31% entered personal details such as names and email addresses
- 29% shared project-specific information, including unreleased product details
- 21% input customer data such as order histories
- 11% entered financial information such as revenue figures and budgets
The survey also revealed that 68% of employees accessed GenAI assistants through personal accounts instead of company-sanctioned platforms, indicating a trend of 'shadow AI' [1]. This usage occurs without IT and security oversight, heightening risks related to data exposure and compliance violations.
Despite 29% of respondents confirming clear AI usage policies prohibiting the input of sensitive information, enforcement is inconsistent. Only 24% of employees indicated they had received mandatory training on AI usage, while 44% expressed uncertainty regarding the existence of AI policies at their companies [1].
The "Enterprise GenAI Data Security Report 2025" by LayerX provides further insights into the practical application of AI tools in the workplace. The report reveals that nearly 90% of enterprise AI usage occurs outside the visibility of IT, exposing organizations to significant risks such as data leakage and unauthorized access [2].
LayerX's report highlights several key findings:
- Approximately 15% of users access GenAI tools daily, while 50% use them every other week
- 39% of regular GenAI tool users are software developers, putting source code and proprietary code at particular risk
- Nearly 72% of employees access GenAI tools through personal accounts, and only about 12% of corporate-account access uses SSO [2]
The report found that among users who submit data to GenAI tools, pasting of corporate data occurs almost four times a day on average. This could include sensitive information such as business data, customer information, financial plans, and source code [2].
Despite the security risks, the TELUS Digital survey reported significant productivity benefits:
- 60% of employees said AI assistants help them work faster
- 57% said the tools improve efficiency
- 49% reported enhanced work performance
- 84% want to continue using AI in their roles [1]
Experts emphasize the need for new security strategies to manage GenAI risks, since traditional security tools fail to address the modern AI-driven workplace where applications are browser-based. Recommendations include:
- Implementing structured AI usage policies and mandatory employee training
- Providing secure, company-sanctioned AI platforms so employees don't bring their own tools
- Deploying browser-based security for visibility into AI SaaS applications and AI-enabled extensions
- Applying DLP controls to GenAI interactions to prevent unsanctioned sharing of corporate data [1][2]
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved