Curated by THEOUTPOST
On Mon, 5 Aug, 4:01 PM UTC
2 Sources
[1]
These experts believe AI can help us win the cybersecurity battle
The arrival of artificial intelligence (AI) in many cybersecurity products can't come too soon, according to the founder of prominent cybersecurity vendor Palo Alto Networks, who sees the spiraling threat landscape as too complex to be managed by human effort alone. "They are going to try a million ways to get in," said Nir Zuk, the chief technologist and co-founder of Palo Alto Networks, regarding malicious actors. As for the threat hunters, he said: "You can't be correct a million out of a million times -- that doesn't scale." That's where AI comes in. Zuk and Palo Alto's chief product officer, Lee Klarich, sat down with ZDNET recently to discuss how AI is changing cybersecurity.

Palo Alto began almost 18 years ago as a network security vendor competing with numerous firewall specialists and intrusion detection and prevention companies, and eventually moved into cloud security and managed services. Zuk, a mathematician by training, has a long history running technology for cybersecurity outfits, having previously served as CTO at Juniper Networks and, before that, founding cybersecurity startup OneSecure (later sold to NetScreen Technologies, which was in turn sold to Juniper). Klarich was previously director of product management for Juniper, and head of firewall technology at NetScreen before that.

The flash point for AI and security, said Zuk, is the security operations center, or SOC, which watches what happens on the network and tries to detect and stop malicious behavior. The chief information security officer (CISO) and their team are outgunned. "If you look at the numbers for respond, recover, remediate" -- the main things a CISO does following a breach -- "those numbers are horrible," said Zuk. "When the SEC [US Securities and Exchange Commission] announced that it expects public companies to report within four days about a major breach, everybody had an 'Oh, crap' moment," he said. He noted the security team can't even close routine IT tickets from that day: "They're looking for a needle in a haystack."

Because there aren't enough engineers, or hours in the day, "the idea of AI in the SOC is to do the things that humans do," but in "the most scalable way and faster," said Zuk, reducing the "mean time to detect" a breach to minutes. "I think that there's an opportunity where AI effectively automates a majority of how cybersecurity is deployed, configured, and operationalized," said Zuk, because "it's become so complex for people to do."

Automation is a broad term; in the SOC, the aim is for the AI model to discover what "normal" means. The CISO and their teams spend their time hunting for traces of suspicious behavior, said Zuk, an effort that takes hours, days, and weeks. It would be better if the machine could learn what normal looks like in the enterprise, he said, so that anything malicious stands out. "Let's use AI to learn what's not normal in the organization, irrespective of which attack technique is used," said Zuk.
"I don't care how they broke in and I don't care how they move laterally and so on; if I can detect the abnormal within the organization, which humans cannot, and AI can in a scalable way, it gives me an advantage that they don't have today." Zuk and Klarich see an advantage in using software's breadth to find the normal. The training and the generation of predictions in AI require the integration of sensor data from many sources. Also: As AI agents spread, so do the risks, scholars say "You can't collect data into data silos and then expect to run AI on it. It works much better when the sensors and the AI come from the same vendor," said Klarich. "The more data sources it has, the more accurate the picture of normal is going to be in order to be able to determine what unusual activity looks like." The concentration of data means that Palo Alto believes AI may fuel consolidation in the cybersecurity industry, which is classically fragmented across vendors. "Cybersecurity is largely toward one end of the extreme in terms of having a huge number of smaller point product vendors," said Klarich. "It's not that you need to go from a hundred different security solutions to one, it's that you need to go from a hundred to a lot less. You can't expect to collect data into silos, and then expect to run AI on it." Also: The best VPN services for iPhone and iPad (yes, you need to use one) Complexity is rising, of course, as the attacks from errant actors become automated. "We do assume, in terms of how we think about our technologies, that there will be new attack techniques that they will come up with, and, increasingly, automated attacks," said Zuk. "That dramatically changes the scale with which attacks can be carried out because they'll no longer be limited by their human resources in terms of their capacity, but rather they'll also be able to use AI to carry out attacks in parallel." The legitimate use of AI also increases the "threat surface", according to Zuk and Klarich. A programmer who uses a programming "copilot" to write code exposes more of their company's source files to a remote service. "That's intellectual property that just left the control of the enterprise, right?", said Zuk. "And that's one of hundreds of new AI applications that exist that run the same risk." The good news is Zuk said he believes the forces of good can win out in the battle of AI. "I personally think that AI is going to help the defenders more than it's going to help the attackers," he said.
[2]
Cybersecurity QA fails: Social media's a part of it too
Malware is often described as biology. It should be the other way around

Opinion In Neal Stephenson's 1992 novel Snow Crash, he invents malware that can leap species from silicon to the human brain. That's a great metaphor for so much of our online lives, but it raises one question of particular interest. If humans can be damaged by our own technology, should we protect not just our data but ourselves through cybersecurity?

There are many areas where this makes sense. Our struggles to define what generative AI can safely do, and how the results fit into law and ethics, are as much about protecting our culture as our commerce. Likewise, the power of algorithmic content selection and social media to amplify disinfo and hate has a direct effect on how people behave towards others. These are real-world problems with frightening potential for actual harm. They exist at the interface between our technology and ourselves, and they must have a technical component in their remedy. Cybersecurity is all about detecting and disabling damaging agents at interfaces. It feels like a good conceptual fit.

If only cybersecurity wasn't so frequently awful. A lot of it is fine; the stuff that works never makes the headlines. We've built a global network of billions of devices that, most of the time, doesn't actually attack us. But when cybersecurity goes wrong, oh boy. Occasionally this is because people discover things that are genuinely hard to foresee, such as glitching chips with noise spikes to expose data that's supposed to be uninspectable. Mostly, though, it's the trifecta of human vice: stupidity, greed, and laziness.

This is particularly obnoxious in security software. PC antivirus software, so often forced on users as bundled bloatware, slowed down and crashed computers far more often than viruses ever did. Without consumer choice in the loop, it fed corporate greed without demanding the high-quality user experience that proper design and QA could provide.

Software design may have improved over the years, but humans haven't. CrowdStrike lived up to its name recently when it BSODed the world by pushing a broken update. How did that happen? Not enough QA. Was this stupidity in thinking QA wasn't necessary? Greed in minimizing costs to plump up the bottom line? Laziness in doing a half-arsed job before knocking off for a beer? Whatever the case may be, the result was, in effect, a global cyberattack that touched millions.

Meanwhile, Meta made a richly amusing clown of itself when its machine-learning-powered prompt injection detector could itself be skewered by a prompt injection attack -- one enabled, moreover, by simply typing some spaces. In general, failures in brand-new software can be seen as part of the QA process; moving from release candidate to production just means you probably haven't found all the bugs yet. That this team didn't find this bug in this tool, though, will overload even the best-protected irony meter. Thanks, Zuck, we needed that. It's quite possible that by the time you read this, that bug will be fixed.

Meta, like all its peers in big tech, is proud of its AI and predicts huge things for it. There's not an area of human activity untouched by the analysis of massive datasets and the synthesis of useful output. The end results may be uncertain, but the process is well under way. Well, almost.
At the same time as Meta and Google are proclaiming their AI expertise in performing human-like tasks at massive scale, they claim an inability to solve one particular and highly dangerous technology/human interface problem: the way content-selection algorithms on social media and content delivery services hack human psychology to drive engagement.

You can see this for yourself if you show any sort of interest in invective online. People engage very strongly with content that angers them, even more so if it assures the content consumer that they're the victim and should be angry too. The result is an explosion in online populist politics, conspiracy-centered culture warcraft, and pathways to radicalization. It is symbiotic with disinformation, hate speech, and unevidenced claims. It is not difficult to spot; rather, it is very difficult to avoid. It is eating the world at a terrifying rate.

If we can spot this human malware, it should be bread and butter to large language model-driven machine learning, especially if deployed within the networks that deliver it. Detecting the attack at the interface, observing the behavioral changes it engenders, at scale, without affecting legitimate usage? Exactly the mix of classic cybersecurity and new LLM technology that Meta and friends promote as the future. Except, it seems, for the algorithms that push engagement and profit.

You can't have it both ways. You can't say that your AI expertise and security chops are the finest in the business, and also say that there's no solution to hate speech and disinfo online. Perhaps machine learning can't solve this problem, in which case they've got some explaining to do about how it's so "good" at everything else. Or perhaps they'd rather break the world than clean up their act. It's a massive failure in QA, not technology. Stupidity, greed, laziness.

Let's call their bluff. Let's ask about the projects to use their prize technologies to solve their most heinous antisocial behavior. Let's ask our politicians why they accept excuses in place of action. And let's ask ourselves, who know and use good cybersecurity when we see it, how we can start to protect the human endpoints of the network. We are, after all, under attack. ®
Experts discuss the potential of AI in bolstering cybersecurity defenses. While AI shows promise in detecting threats, concerns about its dual-use nature and the need for human oversight persist.
As cyber threats continue to evolve and become more sophisticated, experts are turning to artificial intelligence (AI) as a potential game-changer in the field of cybersecurity. According to recent discussions, AI technologies could significantly enhance our ability to detect and respond to cyber threats more efficiently than ever before [1].
One of the primary advantages of AI in cybersecurity is its ability to process and analyze vast amounts of data at unprecedented speeds. This capability allows AI systems to identify patterns and anomalies that might indicate potential security breaches or malicious activities. Experts believe that AI-powered tools can dramatically improve threat detection rates and response times, potentially outperforming traditional security measures [1].
Despite the optimism surrounding AI in cybersecurity, experts also acknowledge several challenges. One significant concern is the dual-use nature of AI technology. While it can be used to strengthen defenses, it can also be exploited by malicious actors to create more sophisticated attacks [2].
Another limitation is the potential for false positives and the need for human oversight. AI systems, while powerful, are not infallible and may misinterpret benign activities as threats. This underscores the importance of maintaining a balance between AI automation and human expertise in cybersecurity operations [1].
The application of AI in cybersecurity extends beyond traditional network protection. Experts are exploring its potential in safeguarding social media platforms, which have become prime targets for disinformation campaigns and cyber attacks. Machine learning algorithms are being developed to detect and counter malicious activities on these platforms, although challenges remain in distinguishing between legitimate and harmful content [2].
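As a rough illustration of the kind of screening involved -- a toy sketch with invented examples, not any platform's production moderation system -- the following trains a simple bag-of-words classifier to score posts for inflammatory, engagement-bait language. Real systems would rely on far larger labeled corpora, modern LLM-based classifiers, and human review of borderline cases.

```python
# Toy sketch of ML-based content screening. The example posts, labels, and
# scoring threshold are invented for illustration and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Breaking: THEY don't want you to see this, share before it's deleted!",
    "You should be furious -- this group is ruining everything you love",
    "City council meeting moved to Thursday, agenda attached",
    "New photos from the weekend hiking trip",
]
labels = [1, 1, 0, 0]  # 1 = inflammatory / engagement-bait, 0 = benign

# Bag-of-words features (unigrams and bigrams) feeding a logistic regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score an unseen post; anything above a chosen threshold would go to human review
new_post = ["Wake up! The truth about what they're hiding from you"]
score = model.predict_proba(new_post)[0][1]
print(f"harmful-content score: {score:.2f}")
```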
As AI technologies continue to advance, their role in cybersecurity is expected to grow. Experts predict that AI will become an integral part of future security strategies, working alongside human analysts to create more robust and adaptive defense systems. However, they also emphasize the need for ongoing research and development to stay ahead of evolving threats and to address the ethical implications of AI in cybersecurity [1][2].
While AI shows great promise, experts stress that it should not be seen as a silver bullet. The human element remains crucial in cybersecurity. AI tools are most effective when used to augment human expertise rather than replace it entirely. Cybersecurity professionals will need to adapt their skills to work effectively with AI systems, interpreting their outputs and making informed decisions based on both AI-generated insights and human judgment [1][2].