2 Sources
[1]
How GenAI complacency is becoming cybersecurity's silent crisis
The reliance on GenAI tools has inadvertently fostered a dangerous sense of complacency within organizations.

GenAI tools such as ChatGPT, Gemini, and Copilot have become essential components of modern workflows, saving countless hours and revolutionizing tasks across almost every industry. Some 42% of enterprises have actively deployed AI, another 40% are experimenting with it, and 59% of those using or exploring AI have accelerated their investments over the past two years. Their widespread adoption has demonstrably boosted efficiency and productivity, making them indispensable for many organizations. However, the rapid integration of and reliance on GenAI tools have inadvertently fostered a dangerous sense of complacency. Because these tools are easy to use and offer widespread benefits, the consequences of misuse, and even malicious use, are often ignored, leading to a serious underestimation of the inherent risks tied to their deployment and management and creating fertile ground for potential vulnerabilities.

While typical users may not consider the vulnerabilities that GenAI tools introduce, many CISOs and AI leaders are increasingly concerned about the misuse unfolding quietly beneath the surface. What often appears to be innovation and efficiency can, in reality, mask significant security blind spots. By 2027, it is estimated that over 40% of breaches will originate from the improper cross-border use of GenAI. For CISOs, this isn't a distant concern but an urgent and growing risk that demands immediate attention and action.

The exploitation of everyday AI users isn't just a scary headline or a cautionary tale from IT; it's a rapidly growing reality. These emerging attacks are sweeping across industries, catching many off guard. Just recently, researchers disclosed a Microsoft Copilot vulnerability that could have enabled sensitive data exfiltration via prompt injection attacks. The ongoing underestimation of basic AI usage risks within organizations is a key driver of this emerging danger: the lack of awareness and robust policies surrounding the secure deployment and ongoing management of GenAI tools is creating critical blind spots that malicious actors are increasingly exploiting.

The evolving GenAI landscape presents a critical inflection point for cybersecurity leaders. It's imperative that CISOs and industry professionals move beyond the initial excitement and acknowledge the inherent risks introduced by the widespread adoption of these powerful tools. The current situation, marked by rapid integration and security oversight mixed with dangerous complacency, demands a fundamental shift in how organizations perceive and manage their digital defenses, especially with AI.

The future of network security hinges on intelligent, comprehensive monitoring systems capable of learning normal behavioral patterns and rapidly identifying deviations. This approach is paramount for detecting sophisticated threats that bypass traditional defenses, and tools meant to protect against such threats need these capabilities at their core, particularly in scenarios where a seemingly innocuous action, like using a basic GenAI chatbot, could lead to the silent exfiltration of sensitive corporate data without user interaction or explicit warnings. In these instances, traditional signature-based detection methods would likely prove ineffective.
Therefore, it's imperative to begin leveraging advanced pattern recognition and behavioral analysis to combat threats specifically designed to evolve and evade detection.

With increasingly sophisticated threats pressing closer to the enterprise perimeter, organizations must take decisive, actionable steps. This begins with addressing internal distrust of AI: roughly three-quarters of AI experts think the technology will benefit them personally, while only a quarter of the public says the same. Fostering an environment where employees understand both the advantages and the risks of GenAI is essential to bridging this gap in perception, and promoting responsible usage across the organization lays the groundwork for more secure adoption.

While traditional human error remains a threat, the widespread adoption of GenAI has created a new, more subtle class of behavioral risks. Equipping employees to use GenAI tools securely requires comprehensive training, clear usage guidelines, and robust policies tailored to defend against AI-driven attack vectors. As the AI landscape changes, security frameworks must be continuously updated to keep pace with evolving threats and to ensure appropriate safeguards are in place.

Despite technological advancements, attackers continue to exploit human error. Today's most significant data exposure isn't necessarily a phishing link, though that remains a prime point of entry for threat actors; it's an employee pasting proprietary source code, draft financial reports, or sensitive customer data into a public AI chatbot to work more efficiently. In an attempt to boost productivity, employees inadvertently externalize intellectual property. Companies must therefore adopt strategies that address human behavior and decision-making, evolving beyond periodic training toward continuous engagement focused on GenAI-specific scenarios: teaching employees to recognize the difference between a safe, internal AI sandbox and a public tool, and creating a culture where asking "Can I put this data in this AI?" becomes as instinctual as locking your computer screen. Employees must be equipped to understand these new risks and feel accountable for using AI responsibly.

Demonizing AI usage, even basic use, will never solve the problem at hand. Instead, embracing a holistically secure approach to GenAI empowers employees to leverage these powerful tools with confidence, maximizing their operational advantages while minimizing exposure to risk. By leading with clear guidance and highlighting potential warning signs and operational risks, organizations can significantly reduce the chances of data breaches related to improper AI usage, ultimately protecting critical assets and preserving organizational integrity.
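As a rough illustration of that behavioral approach, the sketch below baselines each user's normal volume of data sent to GenAI sites and flags sharp deviations. The event format, history window, and three-sigma threshold are illustrative assumptions, not any specific product's method.

```python
from statistics import mean, stdev

# Hypothetical event format: (user, day, bytes_uploaded_to_genai_domains).
# A real deployment would pull these from proxy or browser telemetry.
def build_baselines(events, min_days=14):
    """Learn each user's normal daily upload volume to GenAI sites."""
    per_user = {}
    for user, _day, nbytes in events:
        per_user.setdefault(user, []).append(nbytes)
    return {
        user: (mean(volumes), stdev(volumes))
        for user, volumes in per_user.items()
        if len(volumes) >= min_days  # need enough history to baseline
    }

def flag_deviation(baselines, user, todays_bytes, sigmas=3.0):
    """Flag a user whose upload volume deviates sharply from their norm."""
    if user not in baselines:
        return True  # no baseline yet: route to review rather than trust
    mu, sd = baselines[user]
    return todays_bytes > mu + sigmas * max(sd, 1.0)
```

The point of the sketch is that detection keys off a deviation from each user's own history rather than a fixed signature, which is what lets it catch exfiltration that looks like ordinary chatbot use.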
[2]
Menlo Security research finds use of shadow AI is booming - SiliconANGLE
As generative artificial intelligence tools become more prevalent in the workplace, employees are accessing them via personal accounts on company devices, pasting in sensitive data, and downloading content, all of which creates potential security risks. Meanwhile, cybercriminals are capitalizing on this trend by weaponizing AI and impersonating trusted tools.

Menlo Security Inc. recently released a report that takes a closer look at how gen AI is shaping today's workplace. The data was collected over 30 days (May-June 2025) using Menlo's telemetry; during this period, web traffic and gen AI interactions were analyzed from hundreds of global organizations. Since most gen AI tools are accessed via a browser, Menlo was able to observe browser traffic to gen AI sites and regional adoption trends. To frame its findings in a broader context, Menlo also cites Similarweb data showing that between February 2024 and January 2025, traffic to gen AI sites jumped from 7 billion visits to more than 10.5 billion, a 50% increase in less than a year.

About 80% of gen AI use still happens in the browser, a convenient option for most users because it works across virtually all devices and operating systems. ChatGPT, unsurprisingly, tops the list: it now has about 400 million weekly users. Yet the vast majority, 95%, are on the free tier. The benefit of the free tier is that it's free, but as the saying goes, you don't get what you don't pay for. The paid tiers use better models and give more accurate responses, which matters in a business context. Also, OpenAI's privacy policy states it may use the data provided to train its models; users can opt out of this, but many shadow AI users may not be aware of it. For business or sensitive data, using a paid tier such as ChatGPT Enterprise or the API ensures data is not used for training models by default.

There's no doubt that gen AI adoption has skyrocketed globally. While the Americas saw the most total traffic, gen AI use is growing fastest in the Asia-Pacific region: in China, 75% of organizations are implementing gen AI in some way, and nearly as many, 73%, are doing the same in India. Europe and the Middle East are adopting gen AI more slowly, which is attributed to stricter data protection laws and regulatory frameworks.

Given the popularity of gen AI tools, organizations are increasingly seeing them in the workplace. According to a TELUS Digital survey cited in Menlo's report, 68% of employees are using public tools such as ChatGPT through personal accounts. What's even more concerning: 57% admitted to pasting sensitive company information into these tools. In just one month, Menlo observed more than 155,000 copy attempts and more than 313,000 paste attempts involving gen AI. Many organizations flagged this content as sensitive or restricted, including personal information, financial data, login credentials, and intellectual property. Employees may unintentionally leak data while using gen AI to summarize a report or write an email, according to Menlo.

But sharing information isn't the only problem. Employees download PDFs and text files from gen AI tools, which may contain embedded malware or phishing links. It's also becoming more difficult to distinguish between legitimate and fake AI tools, with malicious browser extensions on the rise. Menlo tracked nearly 600 phishing sites pretending to be legitimate gen AI services, often masking themselves as ChatGPT or Copilot in their domain names.
Between December 2024 and February 2025, researchers tracked more than 2,600 lookalike domain names and impersonation websites. Cybercriminals are jumping on the bandwagon like everyone else, using gen AI to make their phishing attacks more convincing and tailored to specific individuals; for example, they're combining AI-written phishing emails with other tactics that exploit browser flaws. This has resulted in a 130% year-over-year increase in zero-hour phishing attacks, which hit before security systems know they exist.

The use of "shadow" tools among workers is nothing new and should come as no surprise with gen AI. Ever since workers have had computers, consumer-grade tools have been the norm: mobile devices, internet accounts, e-mail, and cloud services are just a few examples. When workers have a way of making their lives easier, they will use whatever tools they have at their disposal, and if the company does not give them a viable option, that's when the use of "shadow" apps and tools booms. Right now, many companies are reviewing AI policies and trying to determine the best path forward, while the report clearly shows that users are charging ahead.

Going forward, organizations need to take control of how gen AI is used. Menlo stresses the importance of eliminating shadow AI by limiting access to consumer-facing gen AI tools via personal accounts in the workplace; approved AI tools should be the only ones employees are allowed to use. On top of that, organizations should enforce data loss prevention policies to restrict actions such as copy/paste, file uploads, and downloads; DLP is necessary to apply the right level of protection. Menlo also recommends inspecting gen AI browser traffic and focusing closely on high-risk file types such as PDFs and DOCX, which may appear harmless but often hide malware or phishing links. Adopting zero-trust security, particularly on unmanaged devices used by contractors and third parties, is another important safeguard: it lets organizations verify every user and device before granting access to the corporate network.

Finally, Menlo emphasizes educating users about the risks of public gen AI tools. Once employees turn to tools outside the IT department's control, it becomes easy for sensitive company data to end up in the hands of cybercriminals. Banning gen AI outright is impossible given its popularity, but if employees understand the risks and use only company-approved tools, organizations can create a work environment where gen AI is helpful instead of harmful. Although the use of alternate tools is not new, it has come to AI faster than any other technology I have seen. IT leaders need to get out in front of this and ensure the proper controls and safeguards are in place before employees unknowingly put company data at risk.
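To make the traffic-inspection recommendation concrete, here is a minimal sketch that routes downloads of high-risk file types from gen AI sites to a sandbox scan. The domain list, extension list, and function name are assumptions for illustration, not Menlo's implementation.

```python
from urllib.parse import urlparse

# Assumed lists for illustration; real policy would come from a secure web gateway.
GENAI_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai",
                 "copilot.microsoft.com"}
HIGH_RISK_EXTENSIONS = {".pdf", ".docx", ".xlsm", ".zip"}

def should_sandbox(download_url: str) -> bool:
    """Hold downloads of risky file types from gen AI sites for a sandbox
    scan instead of delivering them directly to the endpoint."""
    parsed = urlparse(download_url)
    host = parsed.hostname or ""
    from_genai = any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS)
    risky_type = any(parsed.path.lower().endswith(ext)
                     for ext in HIGH_RISK_EXTENSIONS)
    return from_genai and risky_type

# Example: a PDF generated by a chatbot gets held for scanning.
print(should_sandbox("https://chatgpt.com/files/report-summary.pdf"))  # True
```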
As generative AI tools become increasingly prevalent in workplaces, a dangerous sense of complacency is emerging, leading to significant cybersecurity risks. This story explores the rapid adoption of GenAI, its associated security challenges, and the urgent need for organizations to address these issues.
Generative AI (GenAI) tools such as ChatGPT, Gemini, and Copilot have become integral components of modern workflows, revolutionizing various tasks and significantly boosting efficiency. According to recent data, 42% of enterprises have actively deployed AI, while 40% are experimenting with it [1]. The rapid adoption of these tools has led to a 50% increase in traffic to GenAI sites, jumping from 7 billion visits to more than 10.5 billion visits between February 2024 and January 2025 [2].
While the widespread adoption of GenAI tools has undoubtedly improved productivity, it has also fostered a dangerous sense of complacency within organizations. This complacency has led to a serious underestimation of the inherent risks associated with GenAI deployment and management [1]. By 2027, it is estimated that over 40% of breaches will originate from the improper cross-border use of GenAI, highlighting the urgent need for attention to this growing risk [1].
The concept of "shadow AI" – the use of unauthorized AI tools in the workplace – has emerged as a significant concern. A TELUS Digital survey revealed that 68% of employees are using public GenAI tools through personal accounts, with 57% admitting to pasting sensitive company information into these tools [2]. This behavior has led to potential data leaks, including personal information, financial data, login credentials, and intellectual property [2].
Cybercriminals are capitalizing on the GenAI trend by weaponizing AI and impersonating trusted tools. Menlo Security tracked nearly 600 phishing sites pretending to be legitimate GenAI services, often masquerading as ChatGPT or Copilot [2]. Between December 2024 and February 2025, researchers identified more than 2,600 lookalike domain names and impersonation websites [2].
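As an illustration of how such impersonation domains can be screened, the sketch below flags domains that embed or closely resemble a GenAI brand name without being an official domain. The brand list, official-domain list, and similarity threshold are assumptions for illustration, not the researchers' methodology.

```python
from difflib import SequenceMatcher

# Hypothetical lists; a real screen would cover far more gen AI tools.
GENAI_BRANDS = ["chatgpt", "openai", "copilot", "gemini", "claude"]
LEGIT_DOMAINS = {"chatgpt.com", "openai.com", "copilot.microsoft.com",
                 "gemini.google.com", "claude.ai"}

def looks_like_genai_impersonation(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that embed or closely resemble a gen AI brand name
    without being an official domain."""
    if domain in LEGIT_DOMAINS:
        return False
    label = domain.split(".")[0]  # e.g. 'chatgpt-free' from 'chatgpt-free.app'
    for brand in GENAI_BRANDS:
        if brand in label:  # brand embedded in the name, e.g. 'chatgpt-login'
            return True
        if SequenceMatcher(None, brand, label).ratio() >= threshold:
            return True  # near-miss spelling, e.g. 'chatqpt' for 'chatgpt'
    return False

# Example: both of these hypothetical domains would be flagged.
print(looks_like_genai_impersonation("chatgpt-free-access.app"))  # True
print(looks_like_genai_impersonation("copilott.net"))             # True
```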
To address these emerging threats, organizations must implement comprehensive security measures:
Eliminate shadow AI: Limit access to consumer-facing GenAI tools via personal accounts in the workplace and provide approved AI tools for employee use [2].
Enforce data loss prevention policies: Restrict actions such as copy/paste, file uploads, and downloads to protect sensitive information [2] (a minimal sketch of such a check appears after this list).
Implement advanced monitoring systems: Develop intelligent, comprehensive monitoring capable of understanding normal behavioral patterns and rapidly identifying deviations [1].
Adopt zero-trust security: Verify every user and device before granting access to the corporate network, particularly for unmanaged devices used by contractors and third parties [2].
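As referenced in the DLP item above, here is a minimal sketch of a paste-time content check. The detection patterns, domain list, and block decision are illustrative assumptions rather than any vendor's policy engine.

```python
import re

# Illustrative patterns only; production DLP uses far richer detectors.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

GENAI_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai"}  # assumed list

def check_paste(destination_domain: str, clipboard_text: str) -> list[str]:
    """Return the sensitive-data types found in a paste bound for a gen AI
    site; an empty list means the paste can proceed."""
    if destination_domain not in GENAI_DOMAINS:
        return []  # this policy only covers gen AI destinations
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(clipboard_text)]

# Example: block and log a paste containing a hard-coded secret.
hits = check_paste("chatgpt.com", "debug this: key = 'AKIAABCDEFGHIJKLMNOP'")
if hits:
    print(f"Paste blocked; detected: {', '.join(hits)}")
```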
Organizations must foster an environment where employees understand both the advantages and risks associated with GenAI use. This includes:
Comprehensive training: Equip employees with the knowledge to use GenAI tools securely [1].
Clear usage guidelines: Implement robust policies tailored to defend against AI-driven attack vectors [1].
Continuous engagement: Focus on GenAI-specific scenarios, teaching employees to recognize the difference between safe, internal AI sandboxes and public tools [1].
Cultural change: Create a culture where questioning the safety of data input into AI tools becomes instinctual [1].
While GenAI adoption has skyrocketed globally, regional differences exist. The Americas lead in total traffic, but the Asia-Pacific region is experiencing the fastest growth, with 75% of organizations in China and 73% in India implementing GenAI [2]. Europe and the Middle East are adopting GenAI more slowly, attributed to stricter data protection laws and regulatory frameworks [2].
As the GenAI landscape continues to evolve, organizations must remain vigilant and adaptive in their approach to cybersecurity. The current situation, marked by rapid integration and security oversight mixed with dangerous complacency, demands a fundamental shift in how organizations perceive and manage their digital defenses, especially concerning AI [1].
Summarized by Navi