Curated by THEOUTPOST
On Fri, 25 Oct, 4:02 PM UTC
2 Sources
[1]
AI could empower and proliferate social engineering cyberattacks
Countering this requires a coordinated approach with cooperation across sectors at its heart.

Generative AI (GenAI), and Large Language Models (LLMs) in particular, have taken the world by storm. The technology has shown tremendous potential to automate day-to-day tasks, ranging from basic IT helpdesk requests to sophisticated user behaviour analysis. This automation is typically carried out by AI agents: autonomous software designed to perform tasks and execute actions. Businesses across industries are increasingly adopting AI tools to increase efficiency and reduce costs.

However, the rise of AI models has also led to the emergence of new cyberattacks that exploit them, known as AI-based attacks. These attacks are automated, adaptive and tailored to their targets, and their rise opens a new arena in cybersecurity. MITRE has introduced the MITRE ATLAS framework as an extension of the widely used MITRE ATT&CK framework to address adversarial tactics against AI systems.

Despite the exciting and rapid advancement of AI technologies, their misuse has raised significant concerns. The World Economic Forum's Global Risks Report 2024 ranks misinformation and disinformation as the top risk associated with AI technology. AI has transformed the cybersecurity threat landscape and can cause a devastating impact. The ability of AI-based attacks to reason and act renders traditional defense techniques ineffective, paving the way for sophisticated threats at a speed and scale far beyond human capabilities.

AI-based attacks have been grabbing headlines in recent years. Deepfake technology, for instance, is known for generating deceptive content using AI.
In Hong Kong, it was used to scam a finance worker into paying $25 million to fraudsters who impersonated the company's chief financial officer during a video conference call. Deepfake-driven scams like this, which cost threat actors next to nothing, are expected to accelerate, threatening businesses at large.

With an ever-growing online footprint of personal data and the increasing sophistication of AI-based attacks, threat actors can now develop attacks that are more personalized and deceitful. One such attack is the social engineering attack: the art of manipulating individuals into revealing confidential information or performing actions that compromise their security. The availability of powerful AI models, particularly LLMs, puts social engineering attacks within reach of historically less capable threat actors. In one case, scammers used voice imitation to convince a mother that her 15-year-old daughter had been kidnapped, when the girl was in fact safe. Digital deception has advanced significantly, contributing to this new frontier of social engineering attacks.

The implications of social engineering attacks on digital assets are serious, including financial loss and privacy breaches. These attacks can be carried out through different mediums such as emails, phishing websites, text messages, voice or video calls and social media platforms. They generally rely on exploiting human-centric vulnerabilities rather than shortcomings in digital infrastructure security.

While solid defenses against AI-based attacks are still in the making, the number of cybersecurity incidents involving AI techniques has risen significantly. AI-based attacks have rapidly grown more powerful and often highly successful, making the task of securing environments more difficult.
But that does not mean the scammers and hackers have won. Concerns over AI-based attacks can be addressed in three ways.

First, it is essential to fully understand how effective current cybersecurity controls are against emerging AI-based cyberattacks. This will help establish global countermeasures through immediate dialogue and information exchange between organizations before AI-based attacks evolve further.

Second, defending our critical assets depends on improving current security measures, developing solid defenses against these emerging attacks, and educating the community and raising its awareness of the new techniques. Revisiting existing frameworks and updating them in response to new AI-based attacks is a significant step towards safeguarding valuable assets.

Finally, the entire ecosystem, including governments and leading technology players, should work together to support research centers, startups and small and medium-sized enterprises focused on the intersection of AI and cybersecurity. This investment could uncover groundbreaking solutions, just as OpenAI revolutionized AI before any of the established players did. There is an urgent need for more cooperative initiatives on the global stage to ensure that cybersecurity defenses evolve faster than these new AI-based attacks.
[2]
AI isn't the cyber future - it's the present
It's easier to talk about the areas of life that AI won't affect than those it will. Businesses are at the forefront of that adoption. But where businesses go, bad actors often follow; sometimes they're even ahead. Whilst GenAI is being used positively by businesses, speeding up admin tasks and acting as an assistant to many, it has already fallen into the 'wrong hands'. More and more GenAI offerings are available on the dark web to assist would-be hackers and bad actors in their endeavours.

The commodification of AI can help cybercriminals make phishing attacks seem more personable and realistic, increasing the likelihood of successful intrusions that could lead to ransomware attacks. Ransomware is one of the biggest threats to businesses today, putting operations, reputations and careers at risk, and it is here to stay. In the face of these evolving threats, the onus is on businesses to engage all their stakeholders, including the C-suite, and prioritize cyber resilience to ensure business continuity. It is not a case of if an attack happens, but when. Data is every organization's most important asset, and if your data is secure, your business is resilient.

Typically, we associate AI with large language models such as OpenAI's ChatGPT and Google's Bard, not with the cybercrime threats that tools like WormGPT and FraudGPT can bring. However, cybercriminals are known to focus on the biggest return and reward for the lowest investment of time and effort, and generative AI represents a perfect fit in this respect in the cybercrime underworld. AI can be used by adversaries to optimize and expand the reach of their threat campaigns far more efficiently than ever before, resulting in attacks that narrow the window for defenders to respond and mitigate. As AI technology advances, the sophistication of scams is following suit.
In the future, AI threats could include autonomous systems capable of deciding how to modify their attack strategies in real time, analyzing the effectiveness of their own campaigns. Such systems could use data sets to evolve and improve automatically, becoming adept at bypassing traditional security measures in a way we have not seen in the history of cybersecurity.

For stretched CISOs and IT teams, however, AI can appear to be an additional strain on their workloads. In the UK, 92% of senior IT and security leaders reported changes to their emotional and/or psychological state as a direct result of a cyberattack, with 36% worrying about job security. That doesn't have to be the case, however. Generative AI companions, for example, can help stretched teams simplify and automate cyber incident response, and therefore recovery.

Despite the threats, it does pay to get ahead. Businesses should leverage AI in controlled environments where they are confident of its benefits, which typically include automating admin tasks, supporting data compilation, and providing creative inspiration. When used by CISOs and IT teams to support cyber resilience, AI can assist with analysis, investigation and threat modelling to understand potential attack vectors and enhance anomaly detection capabilities. This not only relieves some of the strain on stretched teams; it reduces their admin time and allows them to focus on 'bigger fish' activity and their cyber resilience strategies.

Investing in AI tools should include training employees on its use cases in controlled environments, shining a company-wide light on cyber resilience. However, IT teams and CISOs must continue to closely monitor its use, govern access to training data, and set guardrails.
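The automation of routine incident-response work can be illustrated without any AI at all. The sketch below is a hypothetical, rule-based triage step that flags source IPs with repeated failed logins; the log format, field names and threshold are illustrative assumptions, not details from the article:

```python
from collections import Counter

def triage_failed_logins(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed logins,
    a common first step in automated incident triage."""
    failures = Counter()
    for line in log_lines:
        # Hypothetical log format: "<timestamp> FAILED_LOGIN user=<u> ip=<addr>"
        if "FAILED_LOGIN" in line:
            failures[line.rsplit("ip=", 1)[-1].strip()] += 1
    return sorted(ip for ip, n in failures.items() if n >= threshold)

sample = ["2024-05-01T10:00 FAILED_LOGIN user=root ip=203.0.113.9"] * 6 \
       + ["2024-05-01T10:01 FAILED_LOGIN user=alice ip=198.51.100.4"]
print(triage_failed_logins(sample))  # ['203.0.113.9']
```

A generative AI companion would sit on top of steps like this, summarizing the flagged activity and suggesting next actions, rather than replacing the underlying checks.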
It is imperative that the C-suite is heavily involved in cyber resilience, as ultimate responsibility for adopting and implementing compliant AI functions will always lie with an organization's executive leadership. Simply relying on prevention is not enough. To help ensure uninterrupted business operations in the face of threats, IT teams and CISOs must build cyber recovery and resilience strategies that proactively safeguard data integrity, identify sensitive data and threats, and enable a clean, rapid recovery.

The newly proposed Cyber Security and Resilience Act introduces expanded reporting requirements for ransomware attacks, providing government agencies with valuable new information on the scale of attacks and the ability to increase support to affected businesses. The aim of the bill is that mandatory reporting will give policymakers and threat intelligence agencies valuable data on the prevalence of cyberattacks, currently seen as a "known unknown." With all of this additional data to hand, it is critical that it is managed effectively, with law enforcement and cybersecurity companies involved, to mitigate threats. When it comes to the impact on businesses, a balanced approach is important: one that combines regulatory measures with practical support for affected organizations.

To meet these growing threats, CISOs and IT teams must fight fire with fire or risk losing the AI cyber arms race.
As AI revolutionizes cybersecurity, it presents both unprecedented threats and powerful defensive tools. This story explores the evolving landscape of AI-based attacks and the strategies businesses and cybersecurity professionals are adopting to counter them.
Artificial Intelligence (AI) has emerged as a transformative force in the cybersecurity landscape, presenting both unprecedented challenges and innovative solutions. Generative AI (GenAI) and Large Language Models (LLMs) have demonstrated remarkable potential in automating various tasks, from basic IT helpdesk requests to sophisticated user behavior analysis [1]. However, this technological advancement has also given rise to a new breed of cyber threats: AI-based attacks.
AI-powered attacks are characterized by their automation, adaptability, and tailored approach to targets. These sophisticated threats operate at a speed and scale far beyond human capabilities, rendering traditional defense techniques ineffective [1]. The cybercrime underworld has quickly recognized the potential of AI, with tools like WormGPT and FraudGPT emerging on the dark web to assist aspiring hackers [2].
One of the most alarming developments is the rise of AI-enhanced social engineering attacks. Deepfake technology, for instance, has been used to scam a finance worker in Hong Kong out of $25 million by impersonating the company's chief financial officer during a video conference call [1]. Voice imitation techniques have also been employed in terrifying scams, such as fake kidnapping claims [1].
As the threat landscape evolves, cybersecurity professionals are adopting new strategies to counter AI-based attacks:
Understanding Current Defenses: It's crucial to assess the effectiveness of existing cybersecurity controls against emerging AI-based cyberattacks. This knowledge facilitates the establishment of global countermeasures through immediate dialogues and information exchange between organizations [1].
Improving Security Measures: Developing robust defenses against AI-powered attacks involves enhancing current security protocols, educating the community about new threat techniques, and updating existing frameworks to address these emerging risks [1].
Leveraging AI for Defense: CISOs and IT teams are now using AI to support cyber resilience efforts. AI assists in analysis, investigation, and threat modeling, enhancing anomaly detection capabilities and allowing security teams to focus on high-priority tasks [2].
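As a deliberately simple illustration of the anomaly detection idea, the sketch below flags hosts whose event counts deviate sharply from the fleet median using a modified z-score; the host names, counts and 3.5 cutoff are illustrative assumptions, and production systems would use far richer features and models:

```python
from statistics import median

def flag_anomalies(event_counts, cutoff=3.5):
    """Flag hosts whose count is a modified z-score outlier, using the
    median absolute deviation (MAD), which is robust to extreme values."""
    counts = list(event_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:  # all hosts behave identically; nothing stands out
        return []
    return sorted(h for h, c in event_counts.items()
                  if 0.6745 * abs(c - med) / mad > cutoff)

# Hypothetical daily outbound-connection counts per host
baseline = {"web-01": 120, "web-02": 115, "db-01": 98,
            "jump-01": 110, "dev-07": 2400}
print(flag_anomalies(baseline))  # ['dev-07']
```

Statistical baselining like this is the floor, not the ceiling: AI-assisted tooling layers learned behavioral models on top of such checks to cut false positives and surface subtler deviations.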
The proposed Cyber Security and Resilience Act introduces expanded reporting requirements for ransomware attacks, aiming to provide government agencies with valuable data on the scale of attacks and increase support for affected businesses [2]. This regulatory approach, combined with practical support for organizations, is crucial in addressing the evolving threat landscape.
Furthermore, there's a growing call for collaboration across the entire ecosystem. Governments, leading technology players, research centers, startups, and SMEs focusing on the intersection of AI and cybersecurity must work together to uncover groundbreaking solutions [1].
References
[1] AI could empower and proliferate social engineering cyberattacks
[2] AI isn't the cyber future - it's the present
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved