© 2024 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On September 5, 2024
11 Sources
[1]
The US, UK, EU and other major nations have signed a landmark global AI treaty
The Council of Europe's Framework Convention aims to align AI with human rights and democracy. The United States, United Kingdom, European Union, and several other countries have signed an AI safety treaty laid out by the Council of Europe (COE), an international standards and human rights organization. This landmark treaty, known as the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, opened for signature in Vilnius, Lithuania. It is the first legally binding international agreement aimed at ensuring that AI systems align with democratic values.

The treaty focuses on three main areas: protecting human rights (including privacy and preventing discrimination), safeguarding democracy, and upholding the rule of law. It also provides a legal framework covering the entire lifecycle of AI systems, promoting innovation and managing potential risks. Besides the US, UK, and the EU, the treaty's other signatories include Andorra, Georgia, Iceland, Norway, Moldova, San Marino, and Israel. Notably absent are Russia and many major countries from Asia and the Middle East, but any country will be eligible to join in the future as long as it commits to complying with the treaty's provisions, according to a statement from the Council of Europe.

"We must ensure that the rise of AI upholds our standards, rather than undermining them," said COE secretary general Marija Pejčinović Burić in the statement. "The Framework Convention is designed to ensure just that. It is a strong and balanced text - the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives."

The treaty will enter into force three months after five signatories, including at least three Council of Europe member states, ratify it. The COE's treaty joins other recent efforts to regulate AI, including the UK's AI Safety Summit, the G7-led Hiroshima AI Process, and the UN's AI resolution.
[2]
US and UK sign legally enforceable AI treaty
The US, UK, and the European Union have signed the first "legally binding" treaty on AI, which is supposed to ensure its use aligns with "human rights, democracy and the rule of law," according to the Council of Europe. The treaty, called the Framework Convention on Artificial Intelligence, lays out key principles AI systems must follow, such as protecting user data, respecting the law, and keeping practices transparent. Each country that signs the treaty must "adopt or maintain appropriate legislative, administrative or other measures" that reflect the framework. Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and Israel also signed the framework, which has been in the works since 2019.

Over the past several months, we've seen a swath of other AI safety agreements emerge -- but the majority don't have consequences for signatories who break their commitments. Even though this new treaty is supposed to be "legally binding," the Financial Times points out that "compliance is measured primarily through monitoring, which is a relatively weak form of enforcement." Still, the treaty could serve as a blueprint for countries developing their own laws surrounding AI. The US has bills in the works related to AI, the EU has already passed landmark regulations on AI, and the UK is considering its own. California is also getting close to passing an AI safety law that giants like OpenAI have pushed back against.

"We must ensure that the rise of AI upholds our standards, rather than undermining them," Council of Europe Secretary General Marija Pejčinović Burić says in a statement. "The Framework Convention is designed to ensure just that. It is a strong and balanced text -- the result of the open and inclusive approach." The treaty will come into force three months after five signatories ratify it.
[3]
US, Britain, and EU sign first international AI treaty for responsible development By Invezz
Invezz.com - The European Union, the US, and the UK have signed the world's first legally binding international treaty on artificial intelligence (AI) and related systems, known as the AI Convention. Adopted in May after years of negotiations among 57 countries, the treaty aims to address the risks posed by AI while promoting responsible innovation. Some experts argue, however, that the treaty's broad language and caveats could undermine its effectiveness. While this development marks a significant milestone in global efforts to regulate AI, questions remain about its practical impact and enforcement.

The AI Convention, the first of its kind, focuses on protecting the human rights of those affected by AI systems. The agreement is separate from the EU's AI Act, which came into force last month and imposes strict regulations on AI development and deployment within the EU. The Council of Europe, an international organisation distinct from the EU, spearheaded the treaty; with a mandate to safeguard human rights, the Council includes 46 member countries, comprising all 27 EU member states. The treaty's adoption follows years of discussions, beginning with a feasibility study in 2019 and culminating in the establishment of a Committee on Artificial Intelligence in 2022 to draft the text.

The AI Convention allows signatories to adopt or maintain legislative, administrative, or other measures to implement its provisions. While the treaty's primary focus is on ensuring AI systems align with human rights protections, critics argue that its broad language and numerous exemptions could limit its effectiveness. Francesca Fanucci, a legal expert at the European Center for Not-for-Profit Law Stichting (ECNL) who contributed to the treaty's drafting process, has expressed concerns about its enforceability.
Fanucci noted that the "formulation of principles and obligations" in the convention is "overbroad and fraught with caveats," raising questions about legal certainty and effective enforcement. One major criticism centres on the exemptions allowed for AI systems used for national security purposes and the perceived disparity in scrutiny between private companies and the public sector. The treaty reflects an attempt to balance the need for innovation with the imperative to protect human rights and uphold ethical standards. Britain's justice minister, Shabana Mahmood, described the convention as a "major step" in ensuring AI technologies can be harnessed without eroding fundamental values such as human rights and the rule of law. The UK government has indicated it will work with regulators, devolved administrations, and local authorities to appropriately implement the treaty's new requirements. The newly signed AI Convention is distinct from the EU AI Act, which already imposes comprehensive regulations on AI systems within the EU's internal market. The AI Act categorises AI applications based on their risk levels -- unacceptable, high, limited, and minimal risk -- each with corresponding requirements for compliance, transparency, and governance. In contrast, the AI Convention provides a framework for international cooperation and guidance, but with a broader set of principles that some argue lack specificity. While the AI Convention has been hailed as a significant step towards a more regulated AI landscape, the criticism around its perceived loopholes and generalised principles suggests that further refinement may be necessary. Fanucci and other legal experts argue that without more robust and clear provisions, the treaty may struggle to enforce meaningful protections against potential abuses of AI technologies. 
The need for international cooperation in AI governance is evident, but the challenge lies in creating a legally binding framework that effectively balances innovation with accountability. As AI technologies continue to evolve rapidly, the effectiveness of treaties like the AI Convention will likely depend on future amendments, more stringent guidelines, and the political will of its signatories to enforce them. The treaty's impact will largely depend on how signatory countries implement its provisions and address the criticisms raised. The global AI landscape is constantly evolving, and the need for adaptive and enforceable regulations is crucial. As the UK and other nations work towards embedding the treaty's principles into national law, the effectiveness of this pioneering effort will be closely watched by policymakers, businesses, and civil society groups worldwide.
[4]
US, UK, And EU Sign Legally Binding Treaty On AI To Protect 'Human Rights, Democracy And The Rule Of Law'
The U.S., U.K., and the EU have come together to sign the first legally binding treaty on artificial intelligence, to ensure that AI use aligns with human rights, democracy, and the rule of law.

What Happened: The treaty, known as the Framework Convention on AI, sets out key principles for AI systems, encompassing data protection, adherence to the law, and transparency in practices. Countries that sign the treaty are obligated to adopt or maintain measures that reflect this framework. "It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral," the Council of Europe stated in the announcement. Other countries that have signed the treaty include Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and Israel. The treaty has been in the works since 2019. "We must ensure that the rise of AI upholds our standards, rather than undermining them," said Council of Europe Secretary General Marija Pejčinović Burić. The treaty will come into effect three months after ratification by five signatories.

Why It Matters: The signing of this treaty comes at a time when AI safety has been a hot topic. Earlier this year, it was reported that ChatGPT-parent OpenAI has expanded its lobbying team to shape AI regulations amid growing safety concerns. Last month, the California legislature approved a controversial AI safety bill, which faced resistance from the tech industry. According to prediction market Polymarket, the bill, backed by Elon Musk, has a 57% chance of being signed by Governor Gavin Newsom.
[5]
US, UK, EU and others sign landmark AI safety treaty - SiliconANGLE
More than a dozen countries have signed a treaty designed to ensure that artificial intelligence models are used in a safe manner. The development was announced today at an event in Vilnius, Lithuania.

The treaty in question is known as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. It's the fruit of a four-year initiative that involved dozens of experts. The document is the first international, legally binding treaty designed to ensure that AI systems are used in a manner consistent with human rights, democracy and the rule of law. At today's event in Vilnius, the treaty was officially opened for signature. It has so far been signed by the U.S., the UK and the European Union, as well as Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova and San Marino.

The treaty outlines a set of principles that "activities within the lifecycle of AI systems" must uphold: human dignity and individual autonomy, equality and nondiscrimination, respect for privacy and personal data protection, transparency and oversight, accountability and responsibility, and reliability and safe innovation. The treaty also specifies a set of steps that signatories should take to ensure that AI projects adhere to those principles. Countries should perform assessments to map out how AI systems may impact human rights, democracy and the rule of law; if a signatory identifies potential risks, it's expected to take steps to mitigate them. Additionally, signatories must provide a way for authorities to ban harmful applications of AI.

The treaty also lists a number of other approaches to ensuring that neural networks aren't misused. In one section, the document states that signatories should provide a way for individuals to challenge decisions made using an AI system or "based substantially on it."
If a person requires more information about the AI system in question or the way it's used to file a challenge, officials must share relevant data. Transparency is another focus of the treaty. In some situations, AI systems will be expected to display a notice informing users that they're interacting with an algorithm and not a human. "In order to stand the test of time, the Framework Convention does not regulate technology and is essentially technology-neutral," the Council of Europe stated today.
[6]
World's First AI Treaty Set for Signing by US, UK, and EU Amid Concerns | PYMNTS.com
The world's first legally binding treaty on artificial intelligence (AI) will soon be open for signing by countries that played a key role in its negotiation, including the United States, the United Kingdom and European Union member states. The Council of Europe, a prominent human rights organization, announced that the treaty will be available for signatures starting Thursday, signaling a major step in regulating AI technology, according to Reuters. The AI Convention, years in the making and formally adopted in May after negotiations involving 57 nations, is designed to address the potential risks AI poses while encouraging innovation that aligns with societal values.

"This Convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," said British justice minister Shabana Mahmood in a statement.

Focused primarily on protecting human rights for individuals affected by AI systems, the AI Convention is distinct from the European Union's AI Act. The latter, which came into effect last month, sets forth comprehensive rules governing the development and application of AI technologies within the EU's internal market. The AI Convention takes a broader human rights-based approach but lacks the detailed regulatory framework of the EU's act. The Council of Europe, founded in 1949, is an organization separate from the European Union, dedicated to protecting human rights and fostering democracy across its 46 member states, which include all 27 EU countries.

Work on the treaty dates back to 2019, when an ad hoc committee began exploring the feasibility of such an agreement. This eventually led to the creation of a Committee on Artificial Intelligence in 2022, tasked with drafting and negotiating the final text.
The treaty allows signatories to implement legislative or administrative measures to ensure its provisions are followed, per Reuters. However, not all parties are satisfied with the final version of the AI Convention. Francesca Fanucci, a legal expert from the European Center for Not-for-Profit Law Stichting (ECNL) who was involved in the drafting process, expressed concern that the treaty has been "watered down" into a vague set of principles. According to Fanucci, the wording of the convention lacks specificity and includes too many caveats, raising doubts about its legal clarity and enforceability. She also criticized the treaty for allowing exemptions for AI systems used in national security and for applying less scrutiny to private companies compared to public institutions. "This double standard is disappointing," she told Reuters. Despite the criticisms, the U.K. government has indicated its commitment to implementing the treaty. It plans to collaborate with regulators, devolved administrations and local authorities to ensure that the treaty's requirements are met appropriately within its jurisdiction.
[7]
US, Britain and Brussels to sign agreement on AI standards
The three major western jurisdictions building technologies for artificial intelligence are set to sign the first international treaty on the use of AI that is legally binding, as companies worry that a patchwork of national regulations could hinder innovation. The US, EU and UK are all expected to sign the Council of Europe's convention on AI on Thursday, which emphasises human rights and democratic values in its approach to the regulation of public and private-sector systems.

The convention was drafted over two years by more than 50 countries, also including Canada, Israel, Japan and Australia. It requires signatories to be accountable for any harmful and discriminatory outcomes of AI systems. It also requires that outputs of such systems respect equality and privacy rights, and that victims of AI-related rights violations have legal recourse.

"[With] innovation that is as fast-moving as AI, it is really important that we get to this first step globally," said Peter Kyle, the UK's minister for science, innovation and technology. "It's the first [agreement] with real teeth globally, and it's bringing together a very disparate set of nations as well." "The fact that we hope such a diverse group of nations are going to sign up to this treaty shows that actually we are rising as a global community to the challenges posed by AI," he added.

While the treaty is billed as "legally enforceable", critics have pointed out that it has no sanctions such as fines. Compliance is measured primarily through monitoring, which is a relatively weak form of enforcement.
Hanne Juncher, the director in charge of the negotiations for the council, said 10 participants are expected to be among the first to approve it when the convention opens for signatures on Thursday. She said: "This is confirmation that [the convention] goes beyond Europe and that these signatories were super invested in the negotiations and . . . satisfied with the outcome."

A senior Biden administration official told the FT the US was "committed to ensuring that AI technologies support respect for human rights and democratic values" and saw "the key value-add of the Council of Europe in this space".

The treaty comes as governments develop a host of new regulations, commitments and agreements to oversee fast-evolving AI software. These include Europe's AI Act, the G7 deal agreed last October, and the Bletchley Declaration, which was signed last November by 28 countries, including the US and China. While the US Congress has not passed any broad framework for AI regulation, lawmakers in California, where many AI start-ups are based, did so last week. That bill, which has split opinion in the industry, is awaiting the state governor's signature.

The EU regulation, which came into force last month, is the first major regional law, but the UK's Kyle points out that it remains divisive among companies building AI software. "Companies like Meta, for example, are refusing to roll out their latest Llama product in the EU because of it. So it's really good to have a baseline which goes beyond just individual territories," he said. Although the EU's AI Act was seen as an attempt to set a precedent for other countries, the signing of the new treaty illustrates a more cohesive, international approach, rather than relying on the so-called Brussels effect.

Věra Jourová, vice-president of the European Commission for values and transparency, said: "I am very glad to see so many international partners ready to sign the convention on AI.
The new framework sets important steps for the design, development and use of AI applications, which should bring trust and reassurance that AI innovations are respectful of our values -- protecting and promoting human rights, democracy and rule of law." "This was the basic principle of . . . the European AI Act and now it serves as a blueprint around the globe," she added.
[8]
US, Britain, EU to sign agreement on AI standards: report
The U.S., Britain and the EU are expected to sign the first international treaty on the use of AI that is legally binding, the Financial Times reported on Thursday. The three would sign the Council of Europe's convention on AI on Thursday, the newspaper said, adding that the convention was drafted over two years by more than 50 countries including Canada, Israel, Japan and Australia.

The U.S. was "committed to ensuring that AI technologies support respect for human rights and democratic values" and saw "the key value-add of the Council of Europe in this space," a senior Biden administration official told the newspaper. "It's the first [agreement] with real teeth globally, and it's bringing together a very disparate set of nations as well," Britain's technology minister Peter Kyle said, according to the report.

The Council of Europe's framework convention on AI, which addresses human rights, democracy, and the rule of law, was drafted by the Committee on Artificial Intelligence (CAI). The CAI finalised the draft of the convention in March. Following this, the convention was adopted by the Committee of Ministers of the Council of Europe on May 17 and was opened for signature in Vilnius on September 5.
[9]
UK signs first international treaty to implement AI safeguards
Also signed by the EU, US and Israel, the declaration aims to mitigate the threats that AI may pose to human rights, democracy and the rule of law The UK government has signed the first international treaty on artificial intelligence in a move that aims to prevent misuses of the technology, such as spreading misinformation or using biased data to make decisions. Under the legally binding agreement, states must implement safeguards against any threats posed by AI to human rights, democracy and the rule of law. The treaty, called the framework convention on artificial intelligence, was drawn up by the Council of Europe, an international human rights organisation, and was signed on Thursday by the EU, UK, US and Israel. The justice secretary, Shabana Mahmood, said AI had the capacity to "radically improve" public services and "turbocharge" economic growth, but that it must be adopted without affecting basic human rights. "This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," she said. Here is an outline of the convention and its impact on AI use. According to the Council of Europe, its goal is to "fill any legal gaps that may result from rapid technological advances". Recent breakthroughs in AI - the term for computer systems that can perform tasks typically associated with intelligent beings, such as learning and problem-solving - have triggered a regulatory scramble around the world to mitigate the technology's potential flaws. It means there is a patchwork of regulations and agreements covering the technology, from the EU AI Act to last year's Bletchley declaration at the inaugural global AI safety summit - and a voluntary testing regime signed by a host of countries and companies at the same gathering. Thursday's agreement is an attempt to create a global framework. 
The treaty states that AI systems must comply with a set of principles including: protecting personal data; non-discrimination; safe development; and human dignity. As a result, governments are expected to introduce safeguards such as stemming AI-generated misinformation and preventing systems from being trained on biased data, which could result in wrongful decisions in situations such as job or benefits applications. It covers the use of AI by public authorities and the private sector.

Any company or body using relevant AI systems must assess their potential impact on human rights, democracy and the rule of law - and make that information available to the public. People must be able to challenge decisions made by AI systems and be able to lodge complaints with authorities. Users of AI systems must also be given notice that they are dealing with an AI and not a human being.

The UK now needs to assess whether the treaty's various provisions are already covered by existing legislation, such as the laws implementing the European convention on human rights and other human rights protections. The government is drawing up a consultation on a new AI bill. "Once the treaty is ratified and brought into effect in the UK, existing laws and measures will be enhanced," said the government. In terms of imposing sanctions, the convention refers to authorities being able to ban certain uses of AI. For instance, the EU AI Act bans systems that use facial recognition databases scraped from CCTV or the internet. It also bans systems that categorise humans based on their social behaviour.
[10]
Global powers sign AI pact focused on democratic values
The US, EU, UK, and other nations have signed a legally binding treaty on the implementation of AI, underpinned by human rights and democratic values. The agreement follows two years of talks involving more than 50 countries, including Canada, Israel, Japan, and Australia. It sets out accountability for harm and discrimination resulting from the application of AI in business and society.

Speaking to the Financial Times, a Biden administration official said the US was "committed to ensuring that AI technologies support respect for human rights and democratic values." The new framework agreed by the Council of Europe commits parties to collective action to manage AI products and protect the public from potential misuse.

The agreement was signed against a backdrop of high expectations from governments, which see AI as likely to boost productivity and, for example, increase cancer detection rates - despite concurrent concerns from industry over hallucinations and inaccuracy. On the regulatory side, fears persist that AI could also risk the spread of misinformation or create biased automated decision-making.

The UK's lord chancellor and justice secretary, Shabana Mahmood, who signed the agreement, said the technology has the capacity to radically improve the responsiveness and effectiveness of public services and "turbocharge" economic growth. "This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," she said. Representatives including European Commission vice-president for values and transparency Věra Jourová signed the Framework Convention on Artificial Intelligence during a conference of ministers of justice in Vilnius, Lithuania.
The European Commission, the executive arm of the EU, said the new convention was consistent with the recently introduced EU AI Act, including a number of overlapping concepts such as a risk-based approach and key principles for trustworthy AI. The Commission said the convention is set to apply to activities within the life cycle of AI systems undertaken by public authorities or the commercial sector acting on their behalf. "As regards private sector actors, while they still must address risks and impacts from AI systems in a way that aligns with the Convention's goals, they have the option to either apply the Convention's obligations directly, or implement alternative, appropriate measures," it said in a statement.
[11]
First international AI safety treaty signed by U.S., Britain, EU - Fast Company
The AI Convention, which has been in the works for years and was adopted in May after discussions between 57 countries, addresses the risks AI may pose, while promoting responsible innovation. "This Convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," Britain's justice minister, Shabana Mahmood, said in a statement. The AI Convention mainly focuses on the protection of human rights of people affected by AI systems and is separate from the EU AI Act, which entered into force last month.
The United States, United Kingdom, European Union, and other major nations have signed a legally binding international treaty on artificial intelligence. This landmark agreement aims to ensure responsible AI development while protecting human rights, democracy, and the rule of law.
In a historic move, major nations including the United States, United Kingdom, and the European Union have signed the first-ever legally binding international treaty on artificial intelligence [1]. This landmark agreement, aimed at ensuring responsible AI development, marks a significant step towards global cooperation in managing the rapidly evolving field of artificial intelligence [2].
The primary focus of this treaty is to protect human rights, democracy, and the rule of law in the context of AI development and deployment [4]. By establishing a common framework, the signatories aim to address potential risks associated with AI while fostering innovation and economic growth.
The treaty covers various aspects of AI development and use, including:

- protection of personal data and privacy
- non-discrimination and equality
- transparency and oversight of AI systems
- accountability for harmful or discriminatory outcomes
- the right to challenge decisions made by AI systems
As a legally binding agreement, the treaty provides a foundation for holding nations accountable for their AI practices [2], although compliance is measured primarily through monitoring rather than sanctions such as fines.

While the treaty prioritizes responsible AI development, it also recognizes the economic potential of AI technologies. The agreement aims to strike a balance between regulation and innovation, ensuring that AI can contribute to economic growth while adhering to ethical standards [3].

In addition to the US, UK, and EU, several other nations have signed the treaty, demonstrating a broad international commitment to responsible AI development [5]. This global participation is expected to create a more unified approach to AI governance and foster collaboration in addressing challenges posed by emerging AI technologies.
The treaty establishes mechanisms for ongoing monitoring and assessment of AI developments. This includes:

- assessments of how AI systems may impact human rights, democracy, and the rule of law
- steps to mitigate any risks those assessments identify
- the ability for authorities to ban or restrict harmful applications of AI
These measures aim to ensure that the treaty remains relevant and effective as AI technology continues to evolve rapidly.
The AI industry has shown mixed reactions to the treaty. While many companies welcome the clarity provided by international guidelines, some express concerns about potential limitations on innovation. However, proponents argue that the treaty will create a more stable and trustworthy environment for AI development, ultimately benefiting both industry and society [1].