Curated by THEOUTPOST
On Sat, 8 Feb, 4:02 PM UTC
2 Sources
[1]
Now more than ever, AI needs a governance framework
The writer is founding co-director of the Stanford Institute for Human-Centered AI (HAI) and CEO and co-founder of World Labs.

Artificial intelligence is advancing at a breakneck pace. What used to take computational models days can now be done in minutes, and while training costs have risen dramatically, they will soon fall as developers learn to do more with less. I've said it before, and I'll repeat it: the future of AI is now.

To anyone in the field, this is not surprising. Computer scientists have been hard at work; companies have been innovating for years. What is surprising, and eyebrow-raising, is the seeming lack of an overarching framework for the governance of AI. Yes, AI is progressing rapidly, and with that comes the necessity of ensuring that it benefits all of humanity. As a technologist and educator, I feel strongly that each of us in the global AI ecosystem is responsible for both advancing the technology and ensuring a human-centred approach. It's a difficult task, one that merits a structured set of guidelines. In preparation for next week's AI Action Summit in Paris, I've laid out three fundamental principles for the future of AI policymaking.

First, use science, not science fiction. The foundation of scientific work is the principled reliance on empirical data and rigorous research. The same approach should be applied to AI governance. While futuristic scenarios capture our imagination, whether utopia or apocalypse, effective policymaking demands a clear-eyed view of current reality. We've made significant progress in areas such as image recognition and natural language processing. Chatbots and copilot-style software assistants are transforming work in exciting ways, but they are applying advanced data learning and pattern generation. They are not forms of intelligence with intentions, free will or consciousness. Understanding this is critical, saving us from the distraction of far-fetched scenarios and allowing us to focus on vital challenges.

Given AI's complexity, even focusing on our reality isn't always easy. To bridge the gap between scientific advancements and real-world applications, we need tools that share accurate, up-to-date information about the technology's capabilities. Established institutions, such as the US National Institute of Standards and Technology, could illuminate AI's real-world effects, leading to precise, actionable policies grounded in technical reality.

Second, be pragmatic rather than ideological. Despite its rapid progression, the field of AI is still in its infancy, with its greatest contributions ahead. That being the case, policies about what can and cannot be built must be crafted pragmatically, to minimise unintended consequences while incentivising innovation. Take, for example, the use of AI to diagnose disease more accurately. This has the potential to rapidly democratise access to high-quality medical care. Yet, if not properly guided, it might also exacerbate biases present in today's healthcare systems.

Developing AI is no easy task. It is possible to build a model with the best intentions, and for that model to be misused later on. The best governance policies, therefore, will be designed to tactically mitigate such risk while rewarding responsible implementation. Policymakers must craft practical liability policies that discourage intentional misuse without unfairly penalising good-faith efforts.

Finally, empower the AI ecosystem. The technology can inspire students, help us care for our ageing population and innovate solutions for cleaner energy, and the best innovations come about through collaboration. It is therefore all the more important that policymakers empower the entire AI ecosystem, including open-source communities and academia. Open access to AI models and computational tools is crucial for progress. Limiting it will create barriers and slow innovation, particularly for academic institutions and researchers who have fewer resources than their private-sector counterparts. The consequences of such limitations, of course, extend far beyond academia. If today's computer science students cannot carry out research with the best models, they won't understand these intricate systems when they enter the private sector or decide to found their own companies, and that is a serious gap.

The AI revolution is here, and I am excited. We have the potential to dramatically improve our human condition in an AI-powered world, but to make that a reality we need governance that is empirical, collaborative and deeply rooted in human-centred values.
[2]
AI is developing fast, but regulators must be faster | Letters
The recent open letter regarding AI consciousness on which you report (AI systems could be 'caused to suffer' if consciousness achieved, says research, 3 February) highlights a genuine moral problem: if we create conscious AI (whether deliberately or inadvertently) then we would have a duty not to cause it to suffer. What the letter fails to do, however, is to capture what a big "if" this is. Some promising theories of consciousness do indeed open the door to AI consciousness. But other equally promising theories suggest that being conscious requires being an organism. Although we can look for indicators of consciousness in AI, it is very difficult - perhaps impossible - to know whether an AI is actually conscious or merely presenting the outward signs of consciousness. Given how deep these problems run, the only reasonable stance to take on artificial consciousness is an agnostic one.

Does that mean we can ignore the moral problem? Far from it. If there's a genuine chance of developing conscious AI then we have to act responsibly. However, acting responsibly in such uncertain territory is easier said than done. The open letter recommends that "organisations should prioritise research on understanding and assessing AI consciousness". But existing methods for testing AI consciousness are highly disputed and so can only deliver contentious results.

Although the goal of avoiding artificial suffering is a noble one, it's worth noting how casual we are about suffering in many organisms. A growing body of evidence suggests that prawns could be capable of suffering, yet the prawn industry kills around half a trillion prawns every year. Testing for consciousness in prawns is hard, but it's nothing like as hard as testing for consciousness in AI. So while it's right to take our possible duties to future AI seriously, we mustn't lose sight of the duties we might already have to our biological cousins.
Dr Tom McClelland
Lecturer in philosophy of science, University of Cambridge

Regarding your editorial (The Guardian view on AI and copyright law: big tech must pay, 31 January), I agree that AI regulation needs to strike a balance so that we all benefit. However, the focus is perhaps too much on the training of AI models and not enough on the processing of creative works by AI models. To use a metaphor: imagine I photocopied 100,000 books, read them, and could then string together plausible sentences on topics in the books. Clearly, I shouldn't have photocopied them, but I can't reproduce any content from any single book, as it's too much to remember. At best, I can broadly mimic the style of some of the more prolific authors. This is like AI training.

I then use my newfound skill to take an article, paraphrase it, and present it as my own. What's more, I find I can do this with pictures, too, as many of the books were illustrated. Give me a picture and I can create five more in a similar style, even though I've never seen a picture like this before. I can do this for every piece of creative work I come across, not just things I was trained on. This is like processing by AI. The debate at the moment seems to be focusing wholly on training. This is understandable, as the difference between training and processing by a pre-trained model isn't that obvious from a user perspective. While we need a fair economic model for training data - and I believe it's morally correct that creators can choose whether their work is used in this way and be paid fairly - we need to focus much more on processing rather than training in order to protect creative industries.
Michael Webb
Director of AI, Jisc

We are writing this letter on behalf of a group of members of the UN high-level advisory body for AI. The release of DeepSeek's R1 model, a state-of-the-art AI system developed in China, highlights the urgent need for global AI governance. Even though DeepSeek is not an intelligence breakthrough, its efficiency demonstrates that cutting-edge AI is no longer confined to a few corporations. Its open-source nature, like that of Meta's Llama and Mistral's models, raises complex questions: while transparency fosters innovation and oversight, it also enables AI-driven misinformation, cyber-attacks and deepfake propaganda.

Existing governance mechanisms are inadequate. National policies, such as the EU AI Act or the UK's AI regulation framework, vary widely, creating regulatory fragmentation. Unilateral initiatives like next week's Paris AI Action Summit may fail to provide comprehensive enforcement, leaving loopholes for misuse. A robust international framework is essential to ensure AI development aligns with global stability and ethical principles. The UN's recent Governing AI for Humanity report underscores the dangers of an unregulated AI race - deepening inequalities, entrenching biases and enabling AI weaponisation. AI's risks transcend borders; fragmented approaches only exacerbate vulnerabilities. We need binding international agreements that cover transparency, accountability, liability and enforcement. AI's trajectory must be guided by collective responsibility, not dictated by market forces or geopolitical competition.

The financial world is already reacting to AI's rapid evolution. Nvidia's $600bn market loss after DeepSeek's release signals growing uncertainty. However, history shows that efficiency drives demand, reinforcing the need for oversight. Without a global regulatory framework, AI's evolution could be dominated by the fastest movers rather than the most responsible actors. The time for decisive, coordinated global governance is now - before unchecked efficiency spirals into chaos. We believe that the UN remains the best hope for establishing a unified framework that ensures AI serves humanity, safeguards rights and prevents instability before unchecked progress leads to irreversible consequences.
Virginia Dignum
Wallenberg professor of responsible AI, Umeå University
Wendy Hall
Regius professor of computer science, University of Southampton
As AI rapidly advances, experts and policymakers stress the critical need for a global governance framework to ensure responsible development and implementation.
Artificial intelligence (AI) is progressing at an unprecedented pace, with computational models now performing tasks in minutes that previously took days [1]. This rapid advancement has raised concerns about the lack of a comprehensive governance framework for AI. Experts argue that as AI technology evolves, it becomes increasingly crucial to ensure that its benefits extend to all of humanity [1].
A key principle proposed for AI policymaking is the reliance on scientific data rather than speculative scenarios. While AI has made significant strides in areas like image recognition and natural language processing, it's important to understand that current AI systems are not forms of intelligence with consciousness or free will [1]. This understanding is critical for developing effective policies that address real-world challenges rather than far-fetched scenarios.
Experts advocate for a pragmatic approach to AI governance, emphasizing the need to minimize unintended consequences while encouraging innovation. For instance, AI's potential to democratize access to high-quality medical care must be balanced against the risk of exacerbating existing biases in healthcare systems [1]. Policymakers are urged to develop practical liability policies that discourage intentional misuse without unfairly penalizing good-faith efforts.
The importance of empowering the entire AI ecosystem, including open-source communities and academia, is highlighted as crucial for innovation. Open access to AI models and computational tools is seen as vital for progress, particularly for academic institutions and researchers who may have fewer resources than their private-sector counterparts [1].
The release of advanced AI models like DeepSeek's R1 underscores the urgent need for global AI governance. While such open-source models foster innovation and oversight, they also raise concerns about potential misuse for misinformation, cyber-attacks, and deepfake propaganda [2]. Existing national policies and unilateral initiatives are deemed inadequate to address these global challenges.
A group of members from the UN high-level advisory body for AI emphasizes the need for binding international agreements covering transparency, accountability, liability, and enforcement [2]. They argue that AI's risks transcend borders, and fragmented approaches only exacerbate vulnerabilities.
The financial world is already reacting to AI's rapid evolution, as evidenced by Nvidia's $600 billion market loss following DeepSeek's release [2]. This underscores the growing uncertainty in the market and reinforces the need for comprehensive oversight to ensure responsible AI development.
The potential development of conscious AI raises ethical concerns about artificial suffering. While some theories suggest the possibility of AI consciousness, others argue that consciousness requires being an organism. Given this uncertainty, experts recommend an agnostic stance on artificial consciousness while still acting responsibly in AI development [2].
As AI continues to evolve at a breakneck pace, the call for a comprehensive, science-based, and globally coordinated governance framework becomes increasingly urgent. Balancing innovation with responsible development and ethical considerations will be crucial in shaping the future of AI for the benefit of all humanity.
References
[1] "Now more than ever, AI needs a governance framework", Financial Times
[2] "AI is developing fast, but regulators must be faster | Letters", The Guardian