Curated by THEOUTPOST
On Thu, 8 Aug, 4:02 PM UTC
2 Sources
[1]
Kaspersky Highlights Challenges Posed by AI in Cyber Threat Landscape
Alexey Antonov, Lead Data Scientist at Kaspersky, stated, "We analysed this massive data leak and found that 32% of user passwords are not strong enough and can be reverted from encrypted hash form using a simple brute-force algorithm and a modern GPU 4090 in less than 60 minutes." He added, "We also trained a language model on the password database and tried to check passwords with the obtained AI method. We found that 78% of passwords could be cracked this way, which is about three times faster than using a brute-force algorithm. Only 7% of those passwords are strong enough to resist a long-term attack." AI can also be used for social engineering to generate plausible-looking content, including text, images, audio, and video. Threat actors can use large language models like ChatGPT-4o for generating scam text, such as sophisticated phishing messages. AI-generated phishing can overcome language barriers and create personalised emails based on users' social media information. It can even mimic specific individuals' writing styles, making phishing attacks potentially harder to detect. Deepfakes present another cybersecurity challenge. What was once just scientific research has now become a widespread issue. Criminals have tricked many people with celebrity impersonation scams, leading to significant financial losses. Deepfakes are also used to steal user accounts and send audio money requests using the account owner's voice to friends and relatives. Sophisticated romance scams involve criminals creating fake personas and communicating with victims on dating sites. One of the most elaborate attacks occurred in February in Hong Kong, where scammers simulated a video conference call using deepfakes to impersonate company executives, convincing a finance worker to transfer approximately $25 million.
[2]
Kaspersky Highlights the Challenges of Offensive AI in a Brave New World
Kaspersky emphasises potential offensive applications of AI by cybercriminals and the need for developing proactive cybersecurity defences The rapid proliferation of advanced AI systems in the public domain has resulted in widespread usage by individuals and businesses. These systems are exceptionally adaptable to various tasks, including content generation and code creation through natural language prompts. However, this accessibility has opened the door for threat actors to use AI for sophisticated attacks. Adversaries can leverage AI to automate attacks, speed up routines, and execute more complex operations to achieve their goals. AI as a Powerful Tool We have observed several ways cybercriminals are using AI: 1. ChatGPT can be used for writing malicious software and automating attacks against multiple users. 2. AI programs can log users' smartphone inputs by analysing acceleration sensor data, potentially capturing messages, passwords, and bank codes. 3. Swarm intelligence can operate autonomous botnets that communicate with each other to restore malicious networks after damage. Kaspersky recently conducted another comprehensive research on using AI for password cracking. Most passwords are stored encrypted with cryptographic hash functions like MD5 and SHA. While it's simple to convert a text password to an encrypted line, reversing the process is challenging. Unfortunately, password database leaks occur regularly, affecting both small companies and tech leaders. In July 2024, the largest leaked password compilation to date was published online, containing about 10 billion lines with 8.2 billion unique passwords. Alexey Antonov, Lead Data Scientist at Kaspersky, stated, "We analysed this massive data leak and found that 32% of user passwords are not strong enough and can be reverted from encrypted hash form using a simple brute-force algorithm and a modern GPU 4090 in less than 60 minutes." He added, "We also trained a language model on the password database and tried to check passwords with the obtained AI method. We found that 78% of passwords could be cracked this way, which is about three times faster than using a brute-force algorithm. Only 7% of those passwords are strong enough to resist a long-term attack." Social Engineering with AI AI can also be used for social engineering to generate plausible-looking content, including text, images, audio, and video. Threat actors can use large language models like ChatGPT-4o for generating scam text, such as sophisticated phishing messages. AI-generated phishing can overcome language barriers and create personalised emails based on users' social media information. It can even mimic specific individuals' writing styles, making phishing attacks potentially harder to detect. Deepfakes present another cybersecurity challenge. What was once just scientific research has now become a widespread issue. Criminals have tricked many people with celebrity impersonation scams, leading to significant financial losses. Deepfakes are also used to steal user accounts and send audio money requests using the account owner's voice to friends and relatives. Sophisticated romance scams involve criminals creating fake personas and communicating with victims on dating sites. One of the most elaborate attacks occurred in February in Hong Kong, where scammers simulated a video conference call using deepfakes to impersonate company executives, convincing a finance worker to transfer approximately $25 million. AI Vulnerabilities Besides using AI for harmful purposes, adversaries can also attack AI algorithms themselves. These attacks include: 1. Prompt injection attacks on large language models, where attackers create requests that bypass previous prompt restrictions. 2. Adversarial attacks on machine learning algorithms, where hidden information in images or audio can confuse AI and force incorrect decisions. As AI becomes more integrated into our lives through products like Apple Intelligence, Google Gemini, and Microsoft Copilot, addressing AI vulnerabilities becomes crucial. Kaspersky's Use of AI Kaspersky has been using AI technologies to protect customers for many years. We employ various AI models to detect threats and continuously research AI vulnerabilities to make our technologies more resistant. We also actively study different harmful techniques to provide reliable protection against offensive AI.
Share
Share
Copy Link
Kaspersky, a leading cybersecurity firm, highlights the growing concerns surrounding the use of AI in cyber threats. The company emphasizes the need for proactive measures to combat AI-powered attacks in an increasingly complex digital environment.
In a recent report, Kaspersky, a renowned cybersecurity company, has shed light on the emerging challenges posed by artificial intelligence (AI) in the cyber threat landscape. As AI technology continues to advance, it presents both opportunities and risks in the realm of digital security 1.
Kaspersky's experts have identified a concerning trend: the increasing use of AI for malicious purposes. Cybercriminals are leveraging AI to enhance their attack strategies, making them more sophisticated and harder to detect. This development has been termed "offensive AI," and it poses a significant threat to organizations and individuals alike 2.
The report highlights several ways in which AI is being employed in cyber attacks:
The rise of AI-powered threats presents unique challenges for cybersecurity experts. Traditional security measures may not be sufficient to combat these advanced attacks. Kaspersky emphasizes the need for continuous learning and adaptation in the cybersecurity field to keep pace with evolving AI-driven threats 1.
To address these challenges, Kaspersky recommends a multi-faceted approach:
As AI continues to evolve, the cybersecurity landscape is expected to become increasingly complex. Kaspersky's experts predict that the arms race between attackers and defenders will intensify, with both sides leveraging AI to gain an edge 2.
The report emphasizes the importance of collaboration within the cybersecurity community. Sharing information about new AI-driven threats and defense strategies is crucial for staying ahead of cybercriminals. Kaspersky encourages organizations to participate in threat intelligence sharing platforms and industry forums to collectively combat the evolving threat landscape.
Reference
[1]
As AI revolutionizes cybersecurity, it presents both unprecedented threats and powerful defensive tools. This story explores the evolving landscape of AI-based attacks and the strategies businesses and cybersecurity professionals are adopting to counter them.
2 Sources
Malicious AI models like FraudGPT and WormGPT are becoming the latest tools for cybercriminals, posing significant risks to online security. These AI systems are being used to create sophisticated phishing emails, malware, and other cyber threats.
2 Sources
CERN scientists have discovered a new type of cyber attack that uses AI to exploit vulnerabilities in computer systems. This breakthrough highlights the evolving landscape of cybersecurity threats and the need for advanced defense mechanisms.
2 Sources
As AI technologies advance, cybersecurity faces new challenges and opportunities. This story explores the intersection of AI and cybersecurity, highlighting NVIDIA's role and the broader implications for system protection in the age of generative AI.
2 Sources
Cybersecurity experts have identified malware attacks using AI-generated code, marking a significant shift in the landscape of digital threats. This development raises concerns about the potential for more sophisticated and harder-to-detect cyberattacks.
6 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2024 TheOutpost.AI All rights reserved