Curated by THEOUTPOST
On Tue, 27 Aug, 4:03 PM UTC
2 Sources
[1]
AI Could One Day Engineer a Pandemic, Experts Warn
Chatbots are not the only AI models to have advanced in recent years. Specialized models trained on biological data have similarly leapt forward, and could help to accelerate vaccine development, cure diseases, and engineer drought-resistant crops. But the same qualities that make these models beneficial introduce potential dangers. For a model to be able to design a vaccine that is safe, for instance, it must first know what is harmful.

That is why experts are calling for governments to introduce mandatory oversight and guardrails for advanced biological models, in a new policy paper published Aug. 22 in the peer-reviewed journal Science. While today's AI models probably do not "substantially contribute" to biological risk, the authors write, future systems could help to engineer new pandemic-capable pathogens. "The essential ingredients to create highly concerning advanced biological models may already exist or soon will," write the authors, who are public health and legal professionals from Stanford School of Medicine, Fordham University, and the Johns Hopkins Center for Health Security. "Establishment of effective governance systems now is warranted."

"We need to plan now," says Anita Cicero, deputy director at the Johns Hopkins Center for Health Security and a co-author of the paper. "Some structured government oversight and requirements will be necessary in order to reduce risks of especially powerful tools in the future."

Read More: The Researcher Trying to Glimpse the Future of AI

Humans have a long history of weaponizing biological agents. In the 14th century, Mongol forces are thought to have catapulted plague-infested corpses over enemy walls, potentially contributing to the spread of the Black Death in Europe. During the Second World War, several major powers experimented with biological weapons such as plague and typhoid, which Japan used on several Chinese cities. And at the height of the Cold War, both the United States and the Soviet Union ran expansive biological weapons programs. But in 1972, both sides -- along with much of the rest of the world -- agreed to dismantle such programs and ban biological weapons, resulting in the Biological Weapons Convention.

This international treaty, while largely considered effective, did not fully dispel the threat of biological weapons. As recently as the early 1990s, the Japanese cult Aum Shinrikyo repeatedly tried to develop and release bioweapons such as anthrax. These efforts failed because the group lacked technical expertise. But experts warn that future AI systems could compensate for this gap. "As these models get more powerful, it will lower the level of sophistication a malicious actor would need in order to do harm," Cicero says.

Not all pathogens that have been weaponized can spread from person to person, and those that can tend to become less lethal as they become more contagious. But AI might be able to "figure out how a pathogen could maintain its transmissibility while retaining its fitness," Cicero says. A terror group or other malicious actor is not the only concern. Even a well-intentioned researcher, without the right protocols in place, could accidentally develop a pathogen that gets "released and then spreads uncontrollably," says Cicero.

Bioterrorism continues to attract global concern, including from the likes of Bill Gates and U.S. Commerce Secretary Gina Raimondo, who has been leading the Biden administration's approach to AI.
Read More: U.K.'s AI Safety Summit Ends With Limited, but Meaningful, Progress

The gap between a virtual blueprint and a physical biological agent is surprisingly narrow. Many companies allow you to order biological material online, and while there are some measures to prevent the purchase of dangerous genetic sequences, they are applied unevenly both within the U.S. and abroad, making them easy to circumvent. "There's a lot of little holes in the dam, with water spurting out," Cicero explains. She and her co-authors encourage mandatory screening requirements, but note that even these are insufficient to fully guard against the risks of biological AI models.

To date, 175 people -- including researchers, academics, and industry professionals from Harvard, Moderna, and Microsoft -- have signed a set of voluntary commitments contained in the Responsible AI x Biodesign community statement, published earlier this year. Cicero, who is one of the signatories, says she and her co-authors agree that while these commitments are important, they are insufficient to protect against the risks. The paper notes that we do not rely on voluntary commitments alone in other high-risk biological domains, such as work with live Ebola virus in a lab.

The authors recommend that governments work with experts in machine learning, infectious disease, and ethics to devise a "battery of tests" that biological AI models must undergo before they are released to the public, with a focus on whether they could pose "pandemic-level risks." Cicero explains: "There needs to be some kind of floor. At the very minimum, the risk-benefit evaluations and the pre-release reviews of biological design tools and highly capable large language models would include an evaluation of whether those models could lead to pandemic-level risks, in addition to other things."

Because testing for such abilities in an AI system can be risky in itself, the authors recommend creating proxy assessments -- for example, whether an AI can synthesize a new benign pathogen as a proxy for its ability to synthesize a deadly one. On the basis of these tests, officials can decide whether access to a model should be restricted, and to what extent. Oversight policies will also need to address the fact that open-source systems can be modified after release, potentially becoming more dangerous in the process.

Read More: Republicans' Vow to Repeal Biden's AI Executive Order Has Some Experts Worried

The authors also recommend that the U.S. create a set of standards to guide the responsible sharing of large-scale datasets on "pathogenic characteristics of concern," and that a federal agency be empowered to work with the recently created U.S. AI Safety Institute. The U.K. AI Safety Institute, which works closely with its U.S. counterpart, has already conducted safety testing, including for biological risks, on leading AI models; however, this testing has largely focused on assessing the capabilities of general-purpose large language models rather than biology-specific systems.

"The last thing we want to do is cut the industry off at the knees and hobble our progress," Cicero says. "It's a balancing act."
To avoid hampering research through over-regulation, the authors recommend that regulators initially focus on just two kinds of models: those trained with very large amounts of computing power on biological data, and models of any size trained on especially sensitive biological data that is not widely accessible, such as new information linking viral genetic sequences to their potential for causing pandemics. Over time, the scope of concerning models may widen, particularly if future AIs become capable of doing research autonomously, Cicero says. Imagine "100 million Chief Science Officers of Pfizer working round the clock at 100 times the speed of the real one," says Cicero, pointing out that while this could lead to incredible breakthroughs in drug design and discovery, it would also greatly increase risk.

The paper emphasizes the need for international collaboration to manage these risks, particularly given that they endanger the entire globe. Even so, the authors note that while harmonizing policies would be ideal, "countries with the most advanced AI technology should prioritize effective evaluations, even if they come at some cost to international uniformity."

Given predicted advances in AI capabilities and the relative ease of both procuring biological material and hiring third parties to perform experiments remotely, Cicero thinks that biological risks from AI could manifest "within the next 20 years, and maybe even much less," unless there is proper oversight. "We need to be thinking not just of the current version of all of the available tools, but the next versions, because of the exponential growth that we see. These tools are going to be getting more powerful," she says.
[2]
Checking In on the AI Doomers
Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. The year was 2016. Toner hadn't yet joined OpenAI's board and hadn't yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. "It was, like, 50 people," she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline.

But things were changing. The deep-learning revolution was drawing new converts to the cause. AIs had recently started seeing more clearly and doing advanced language translation. They were developing fine-grained notions about what videos you, personally, might want to watch. Killer robots weren't crunching human skulls underfoot, but the technology was advancing quickly, and the number of professors, think tankers, and practitioners at big AI labs concerned about its dangers was growing. "Now it's hundreds or even thousands of people," Toner said. "Some of them seem smart and great. Some of them seem crazy."

After ChatGPT's release in November 2022, that whole spectrum of AI-risk experts -- from measured philosopher types to those convinced of imminent Armageddon -- achieved a new cultural prominence. People were unnerved to find themselves talking fluidly with a bot. Many were curious about the new technology's promise, but some were also frightened by its implications. Researchers who worried about AI risk had been treated as pariahs in elite circles. Suddenly, they were able to get their case across to the masses, Toner said. They were invited onto serious news shows and popular podcasts. The apocalyptic pronouncements that they made in these venues were given due consideration.

But only for a time. After a year or so, ChatGPT ceased to be a sparkly new wonder. Like many marvels of the internet age, it quickly became part of our everyday digital furniture. Public interest faded. In Congress, bipartisan momentum for AI regulation stalled. Some risk experts -- Toner in particular -- had achieved real power inside tech companies, but when they clashed with their overlords, they lost influence. Now that the AI-safety community's moment in the sun has come to a close, I wanted to check in on them -- especially the true believers. Are they licking their wounds? Do they wish they'd done things differently?

The ChatGPT moment was particularly heady for Eliezer Yudkowsky, the 44-year-old co-founder of the Machine Intelligence Research Institute, an organization that seeks to identify potential existential risks from AI. Yudkowsky is something of a fundamentalist about AI risk; his entire worldview orbits around the idea that humanity is hurtling toward a confrontation with a superintelligent AI that we won't survive. Last year, Yudkowsky was named to Time's list of the world's most influential people in AI. He'd given a popular TED Talk on the subject; he'd gone on the Lex Fridman Podcast; he'd even had a late-night meetup with Altman. In an essay for Time, he proposed an indefinite international moratorium on developing advanced AI models like those that power ChatGPT. If a country refused to sign and tried to build computing infrastructure for training, Yudkowsky's favored remedy was air strikes.
Anticipating objections, he stressed that people should be more concerned about violations of the moratorium than about a mere "shooting conflict between nations." The public was generally sympathetic, if not to the air strikes, then to broader messages about AI's downsides -- and understandably so. Writers and artists were worried that the novels and paintings they'd labored over had been strip-mined and used to train their replacements. People found it easy to imagine slightly more accurate chatbots competing seriously for their job. Robot uprisings had been a pop-culture fixture for decades, not only in pulp science fiction but also at the multiplex. "For me, one of the lessons of the ChatGPT moment is that the public is really primed to think of AI as a bad and dangerous thing," Toner told me.

Politicians started to hear from their constituents. Altman and other industry executives were hauled before Congress. Senators from both sides of the aisle asked whether AIs might pose an existential risk to humanity. The Biden administration drafted an executive order on AI, possibly its "longest ever."

Read: The White House is preparing for an AI-dominated future

AI-risk experts were suddenly in the right rooms. They had input on legislation. They'd even secured positions of power within each of the big-three AI labs. OpenAI, Google DeepMind, and Anthropic all had founders who emphasized a safety-conscious approach. OpenAI was famously formed to benefit "all of humanity." Toner was invited to join its board in 2021 as a gesture of the company's commitment to that principle. During the early months of last year, the company's executives insisted that it was still a priority. Over coffee in Singapore that June, Altman himself told me that OpenAI would allocate a whopping 20 percent of the company's computing power -- the industry's coin of the realm -- to a team dedicated to keeping AIs aligned with human goals. It was to be led by OpenAI's risk-obsessed chief scientist, Ilya Sutskever, who also sat on the company's board.

That might have been the high-water mark for members of the AI-risk crowd. They were dealt a grievous blow soon thereafter. During OpenAI's boardroom fiasco last November, it quickly became clear that whatever nominal titles these people held, they wouldn't be calling the shots when push came to shove. Toner had by then grown concerned that it was becoming difficult to oversee Altman, because, according to her, he had repeatedly lied to the board. (Altman has said that he does not agree with Toner's recollection of events.) She and Sutskever were among those who voted to fire him.

For a brief period, Altman's ouster seemed to vindicate the company's governance structure, which was explicitly designed to prevent executives from sweeping aside safety considerations -- to enrich themselves or participate in the pure exhilaration of being at the technological frontier. Yudkowsky, who had been skeptical that such a structure would ever work, admitted in a post on X that he'd been wrong. But the moneyed interests that funded the company -- Microsoft in particular -- rallied behind Altman, and he was reinstated. Yudkowsky withdrew his mea culpa. Sutskever and Toner subsequently resigned from OpenAI's board, and the company's superalignment team was disbanded a few months later. Young AI-safety researchers were demoralized.

From the September 2023 issue: Does Sam Altman know what he's creating?

Yudkowsky told me that he is in despair about the way these past few years have unfolded.
He said that when a big public-relations opportunity had suddenly materialized, he and his colleagues weren't set up to handle it. Toner told me something similar. "There was almost a dog-that-caught-the-car effect," she said. "This community had been trying so long to get people to take these ideas seriously, and suddenly people took them seriously, and it was like, 'Okay, now what?'"

Yudkowsky did not expect an AI that works as well as ChatGPT this soon, and it concerns him that its creators don't know exactly what's happening underneath its hood. If AIs become much more intelligent than us, their inner workings will become even more mysterious. The big labs have all formed safety teams of some kind. It's perhaps no surprise that some tech grandees have expressed disdain for these teams, but Yudkowsky doesn't like them much either. "If there's any trace of real understanding [on those teams], it is really well hidden," he told me. The way he sees it, it is ludicrous for humanity to keep building ever more powerful AIs without a clear technical understanding of how to keep them from escaping our control. It's "an unpleasant game board to play from," he said.

Read: Inside the chaos at OpenAI

ChatGPT and bots of its ilk have improved only incrementally so far. Without seeing more big, flashy breakthroughs, the general public has been less willing to entertain speculative scenarios about AI's future dangers. "A lot of people sort of said, 'Oh, good, I can stop paying attention again,'" Toner told me. She wishes more people would think about longer trajectories rather than the near-term dangers posed by today's models. It's not that GPT-4 can make a bioweapon, she said. It's that AI is getting better and better at medical research, and at some point it is surely going to get good at figuring out how to make bioweapons too.

Toby Ord, a philosopher at Oxford University who has worked on AI risk for more than a decade, believes the impression that progress has stalled is an illusion. "We don't have much evidence of that yet," Ord told me. "It's difficult to appropriately calibrate your intuitive responses when something moves forward in these big lurches." The leading AI labs sometimes take years to train new models, and they keep them out of sight for a while after they're trained, to polish them up for consumer use. As a result, there is a bit of a staircase effect: Massive changes are followed by a flatline. "You can find yourself incorrectly oscillating between the sensation that everything is changing and nothing is changing," Ord said.

In the meantime, the AI-risk community has learned a few things. They have learned that solemn statements of purpose drafted during a start-up's founding aren't worth much. They have learned that promises to cooperate with regulators can't be trusted either. The big AI labs initially advertised themselves as being quite friendly to policy makers, Toner told me. They were surprisingly prominent in conversations, in both the media and on Capitol Hill, about AI potentially killing everyone, she said. Some of this solicitousness might have been self-interested -- to distract from more immediate regulatory concerns, for instance -- but Toner believes that it was in good faith. When those conversations led to actual regulatory proposals, things changed.
A lot of the companies no longer wanted to riff about how powerful and dangerous this tech would be, Toner said: "They sort of realized, 'Hang on, people might believe us.'"

The AI-risk community has also learned that novel corporate-governance structures cannot constrain executives who are hell-bent on acceleration. That was the big lesson of OpenAI's boardroom fiasco. "The governance model at OpenAI was supposed to prevent financial pressures from overrunning things," Ord said. "It didn't work. The people who were meant to hold the CEO to account were unable to do so." The money won.

No matter what the initial intentions of their founders, tech companies tend to eventually resist external safeguards. Even Anthropic -- the safety-conscious AI lab founded by a splinter cell of OpenAI researchers who believed that Altman was prioritizing speed over caution -- has recently shown signs of bristling at regulation. In June, the company joined an "innovation economy" trade group that is opposing a new AI-safety bill in California, although Anthropic also recently said that the bill's benefits would outweigh its costs. Yudkowsky told me that he's always considered Anthropic a force for harm, based on "personal knowledge of the founders." They want to be in the room where it happens, he said. They want a front-row seat to the creation of a greater-than-human intelligence. They aren't slowing things down; they've become a product company. A few months ago, they released a model that some have argued is better than ChatGPT.

Yudkowsky told me that he wishes AI researchers would all shut down their frontier projects forever. But if AI research is going to continue, he would slightly prefer for it to take place in a national-security context -- in a Manhattan Project setting, perhaps in a handful of rich, powerful countries. There would still be arms-race dynamics, of course, and considerably less public transparency. But if some new AI proved existentially dangerous, the big players -- the United States and China in particular -- might find it easier to form an agreement not to pursue it, compared with a teeming marketplace of 20 to 30 companies spread across several global markets. Yudkowsky emphasized that he wasn't absolutely sure this was true. This kind of thing is hard to know in advance.

The precise trajectory of this technology is still so unclear. For Yudkowsky, only its conclusion is certain. Just before we hung up, he compared his mode of prognostication to that of Leo Szilard, the physicist who in 1933 first beheld a fission chain reaction, not as an experiment in a laboratory but as an idea in his mind's eye. Szilard chose not to publish a paper about it, despite the great acclaim that would have flowed to him. He understood at once how a fission reaction could be used in a terrible weapon. "He saw that Hitler, specifically, was going to be a problem," Yudkowsky said. "He foresaw mutually assured destruction." He did not, however, foresee that the first atomic bomb would be dropped on Japan in August 1945, nor did he predict the precise conditions of its creation in the New Mexico desert. No one can know in advance all the contingencies of a technology's evolution, Yudkowsky said. No one can say whether there will be another ChatGPT moment, or when it might occur. No one can guess what particular technological development will come next, or how people will react to it.
The end point, however, he could predict: If we keep on our current path of building smarter and smarter AIs, everyone is going to die.
Exploring the potential of AI in combating pandemics while addressing concerns about its misuse in bioterrorism. Experts weigh in on the delicate balance between technological advancement and global security.
Artificial Intelligence (AI) has emerged as a powerful tool in the fight against global health crises. Experts are increasingly recognizing its potential to revolutionize pandemic prevention and response strategies. According to recent reports, AI systems can analyze vast amounts of data to predict disease outbreaks, accelerate vaccine development, and optimize resource allocation during health emergencies [1].
While AI offers significant benefits in public health, it also raises alarming concerns about potential misuse. Security experts warn that the same technologies that can help prevent pandemics could be exploited by malicious actors to engineer more dangerous pathogens or plan bioterrorist attacks. This dual-use nature of AI in the biological realm has sparked intense debate among policymakers and scientists [1].
The challenge lies in harnessing AI's potential for good while mitigating its risks. Helen Toner, a prominent figure in AI policy, emphasizes the need for a nuanced approach. She argues that while the risks are real, they should not paralyze progress in beneficial AI applications. Toner advocates for responsible development and governance of AI technologies to ensure they remain a net positive for society [2].
Experts stress the importance of international collaboration in addressing the dual-use dilemma of AI in biotechnology. There are calls for establishing global frameworks and regulations to guide the development and deployment of AI in sensitive areas like pathogen research. Such measures aim to promote transparency and prevent the misuse of these powerful technologies [1].
One of the most promising applications of AI in pandemic prevention is its use in early warning systems. Machine learning algorithms can analyze diverse data sources, including social media, satellite imagery, and health records, to detect early signs of disease outbreaks. This capability could provide crucial lead time for public health responses, potentially saving countless lives [1].
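To make the idea concrete, here is a minimal sketch, not drawn from either source article, of the kind of rolling-baseline anomaly check an early-warning pipeline might start from. The case counts, window size, and alert threshold below are hypothetical assumptions chosen purely for illustration.

```python
# Minimal, hypothetical sketch of an outbreak early-warning signal.
# Real systems fuse many noisy data streams (clinic visits, search trends,
# wastewater surveillance) with far richer models; the numbers here are
# invented for illustration only.
from statistics import mean, stdev

def flag_unusual_days(daily_counts, window=14, threshold=3.0):
    """Flag days whose count sits more than `threshold` standard deviations
    above a rolling baseline built from the previous `window` days."""
    alerts = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: a z-score is undefined
        z_score = (daily_counts[day] - mu) / sigma
        if z_score > threshold:
            alerts.append((day, round(z_score, 1)))
    return alerts

# Hypothetical syndromic-surveillance counts: a stable baseline, then a spike.
counts = [20, 22, 19, 21, 23, 20, 18, 22, 21, 19, 20, 24, 21, 22,
          23, 25, 31, 44, 67, 95]
print(flag_unusual_days(counts))  # prints the flagged days with their z-scores
```

Even this toy detector illustrates the trade-off such systems face: a lower threshold buys earlier warning at the cost of more false alarms.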
As AI becomes more integrated into public health and biosecurity measures, ethical considerations come to the forefront. Issues of data privacy, algorithmic bias, and public trust in AI-driven decisions are critical challenges that need to be addressed. Experts emphasize the need for transparent and accountable AI systems to maintain public confidence in these technologies [2].
Looking ahead, the integration of AI into global health security strategies seems inevitable. However, the path forward requires careful navigation of the complex interplay between technological advancement, ethical considerations, and security concerns. As the world continues to grapple with these challenges, the responsible development and deployment of AI in pandemic prevention and biosecurity will likely shape the future of global health resilience.