3 Sources
[1]
Major survey finds most people use AI regularly at work - but almost half admit to doing so inappropriately
Have you ever used ChatGPT to draft a work email? Perhaps to summarise a report, research a topic or analyse data in a spreadsheet? If so, you certainly aren't alone.

Artificial intelligence (AI) tools are rapidly transforming the world of work. Released today, our global study of more than 32,000 workers from 47 countries shows that 58% of employees intentionally use AI at work - with a third using it weekly or daily. Most employees who use it say they've gained real productivity and performance benefits from adopting AI tools.

However, a concerning number are using AI in highly risky ways - such as uploading sensitive information into public tools, relying on AI answers without checking them, and hiding their use of it. There's an urgent need for policies, training and governance on responsible use of AI, to ensure it enhances - not undermines - how work is done.

Our research

We surveyed 32,352 employees in 47 countries, covering all global geographical regions and occupational groups. Most employees report performance benefits from AI adoption at work. These include improvements in:

- efficiency (67%)
- information access (61%)
- innovation (59%)
- work quality (58%)

These findings echo prior research demonstrating AI can drive productivity gains for employees and organisations. We found general-purpose generative AI tools, such as ChatGPT, are by far the most widely used. About 70% of employees rely on free, public tools, rather than AI solutions provided by their employer (42%).

However, almost half the employees we surveyed who use AI say they have done so in ways that could be considered inappropriate (47%), and even more (63%) have seen other employees using AI inappropriately.

Sensitive information

One key concern surrounding AI tools in the workplace is the handling of sensitive company information - such as financial, sales or customer information.
Nearly half (48%) of employees have uploaded sensitive company or customer information into public generative AI tools, and 44% admit to having used AI at work in ways that go against organisational policies. This aligns with other research showing 27% of the content employees put into AI tools is sensitive.

Check your answer

We found complacent use of AI is also widespread, with 66% of respondents saying they have relied on AI output without evaluating it. It is unsurprising, then, that a majority (56%) have made mistakes in their work due to AI. Younger employees (aged 18-34) are more likely to engage in inappropriate and complacent use than older employees (aged 35 or older).

This carries serious risks for organisations and employees. Such mistakes have already led to well-documented cases of financial loss, reputational damage and privacy breaches. About a third (35%) of employees say the use of AI tools in their workplace has increased privacy and compliance risks.

'Shadow' AI use

When employees aren't transparent about how they use AI, the risks become even more challenging to manage. We found most employees have avoided revealing when they use AI (61%), presented AI-generated content as their own (55%), and used AI tools without knowing whether doing so is allowed (66%).

This invisible or "shadow AI" use doesn't just exacerbate risks - it also severely hampers an organisation's ability to detect, manage and mitigate them.

A lack of training, guidance and governance appears to be fuelling this complacent use. Despite the prevalence of these tools, only a third of employees (34%) say their organisation has a policy guiding the use of generative AI tools, and 6% say their organisation bans them. Pressure to adopt AI may also fuel complacent use, with half of employees fearing they will be left behind if they do not.
Better literacy and oversight

Collectively, our findings reveal a significant gap in the governance of AI tools and an urgent need for organisations to guide and manage how employees use them in their everyday work. Addressing this will require a proactive and deliberate approach.

Investing in responsible AI training and developing employees' AI literacy is key. Our modelling shows self-reported AI literacy - including training, knowledge and efficacy - predicts not only whether employees adopt AI tools but also whether they critically engage with them: how well they verify the tools' output and consider their limitations before making decisions.

We found AI literacy is also associated with greater trust in AI use at work and more performance benefits from its use. Despite this, less than half of employees (47%) report having received AI training or related education.

Organisations also need to put in place clear policies, guidelines and guardrails; systems of accountability and oversight; and data privacy and security measures. There are many resources to help organisations develop robust AI governance systems and support responsible AI use.

The right culture

On top of this, it's crucial to create a psychologically safe work environment, where employees feel comfortable sharing how and when they are using AI tools. The benefits of such a culture go beyond better oversight and risk management: it is also central to developing shared learning and experimentation that support the responsible diffusion of AI use and innovation.

AI has the potential to improve the way we work. But realising that potential takes an AI-literate workforce, robust governance and clear guidance, and a culture that supports safe, transparent and accountable use. Without these elements, AI becomes just another unmanaged liability.
[2]
Major survey finds most people use AI regularly at work -- but almost half admit to doing so inappropriately
[3]
Nearly half of workers using AI at work admit to doing so inappropriately
A major global study finds that 58% of employees use AI at work, with many reporting productivity gains. However, nearly half admit to using AI inappropriately, raising concerns about data security and governance.
A comprehensive global study involving over 32,000 workers from 47 countries has revealed that 58% of employees intentionally use AI at work, with one-third using it on a weekly or daily basis [1][2]. This widespread adoption of AI tools is rapidly transforming the work environment, offering significant productivity and performance benefits to many users.
The survey found that employees report several improvements from AI adoption at work:

- efficiency (67%)
- information access (61%)
- innovation (59%)
- work quality (58%)
General-purpose generative AI tools, such as ChatGPT, are the most widely used, with about 70% of employees relying on free, public tools rather than AI solutions provided by their employers (42%) [1][2].
Despite the benefits, the study uncovered concerning trends in AI usage:

- 47% of employees who use AI admit to using it in ways that could be considered inappropriate
- 48% have uploaded sensitive company or customer information into public generative AI tools
- 44% admit to having used AI at work in ways that go against organizational policies
The survey also highlighted issues with uncritical AI use:

- 66% of respondents have relied on AI output without evaluating it
- 56% have made mistakes in their work due to AI
These practices have led to documented cases of financial loss, reputational damage, and privacy breaches. About 35% of employees believe that AI tool use has increased privacy and compliance risks in their workplace [1].
The study revealed a significant lack of transparency in AI use:

- 61% have avoided revealing when they use AI
- 55% have presented AI-generated content as their own
- 66% have used AI tools without knowing whether doing so is allowed
This "shadow AI" use exacerbates risks and hinders organizations' ability to manage and mitigate potential issues. Only 34% of employees report that their organization has a policy guiding the use of generative AI tools, while 6% say their organization bans it [1].
The research emphasizes the urgent need for organizations to implement:

- responsible AI training and AI literacy development
- clear policies, guidelines and guardrails
- systems of accountability and oversight
- data privacy and security measures
The study found that AI literacy is associated with greater trust in AI use at work and more performance benefits. However, less than half of employees (47%) report having received AI training or related education [1].
To harness the potential of AI while mitigating risks, organizations need to foster a psychologically safe work environment where employees feel comfortable sharing how and when they use AI tools. This approach supports the responsible diffusion of AI use and innovation, going beyond mere risk management [1][2].