Curated by THEOUTPOST
On Fri, 26 Jul, 4:01 PM UTC
12 Sources
[1]
Apple Agrees To Manage AI Risks Via The US Govt Scheme: Know More - News18
WASHINGTON: Apple Inc has signed U.S. President Joe Biden's voluntary commitments governing artificial intelligence (AI), joining 15 other firms that have committed to ensuring that AI's power is not used for destructive purposes, the White House said on Friday. The original commitments, announced in July 2023, were signed on to by firms including Google and OpenAI partner Microsoft. In September, eight more firms including Adobe, IBM, Nvidia signed on. Apple announced its AI ambitions with Apple Intelligence at WWDC 2024 last month. The company seems to have taken a different path with its AI roadmap and is not ready to compromise on user privacy. In fact, Apple says that most AI processing will be done on-device, with more demanding tasks handled by Apple silicon chips powering its own data centres. However, Apple has signed a deal with OpenAI that brings ChatGPT to iOS 18, iPadOS 18 and macOS later this year. The company has reportedly decided against working with Meta because of Meta's chequered history with user data and privacy. Having said that, Apple is reportedly open to a deal with Google in the near future, which would most likely bring Gemini AI to its devices. Google is hosting its own Pixel launch event next month, where new AI features and plans will be revealed. Apple, meanwhile, is holding to its regular September launch timeline for the iPhone 16 series this year, which has the industry excited about the future of AI and the overall demand for the technology in the years to come.
[2]
Apple signs on to voluntary US scheme to manage AI risks, White House says
WASHINGTON: Apple Inc has signed U.S. President Joe Biden's voluntary commitments governing artificial intelligence (AI), joining 15 other firms that have committed to ensuring that AI's power is not used for destructive purposes, the White House said on Friday. The original commitments, announced in July 2023, were signed on to by firms including Google and OpenAI partner Microsoft. In September, eight more firms including Adobe, IBM, Nvidia signed on.
[3]
Apple Signs on to Voluntary US Scheme to Manage AI Risks, White House Says
WASHINGTON (Reuters) - Apple Inc has signed U.S. President Joe Biden's voluntary commitments governing artificial intelligence (AI), joining 15 other firms that have committed to ensuring that AI's power is not used for destructive purposes, the White House said on Friday. The original commitments, announced in July 2023, were signed on to by firms including Google and OpenAI partner Microsoft. In September, eight more firms including Adobe, IBM, Nvidia signed on.
[4]
Apple signs on to voluntary US scheme to manage AI risks, White House says
WASHINGTON (Reuters) - Apple Inc has signed U.S. President Joe Biden's voluntary commitments governing artificial intelligence (AI), joining 15 other firms that have committed to ensuring that AI's power is not used for destructive purposes, the White House said on Friday. The original commitments, announced in July 2023, were signed on to by firms including Google and OpenAI partner Microsoft. In September, eight more firms including Adobe, IBM, Nvidia signed on.
[5]
Apple signs on to voluntary US scheme to manage AI risks, White House says
WASHINGTON (Reuters) - Apple Inc has signed U.S. President Joe Biden's voluntary commitments governing artificial intelligence (AI), joining 15 other firms that have committed to ensuring that AI's power is not used for destructive purposes, the White House said on Friday. The original commitments, announced in July 2023, were signed on to by firms including Google and OpenAI partner Microsoft. In September, eight more firms including Adobe, IBM, Nvidia signed on.
[6]
Apple agrees to abide by White House AI safeguards
The White House on Friday said that Apple has joined more than a dozen tech firms committed to following safeguards intended to curb the risks of artificial intelligence. Amazon, Google, Microsoft and OpenAI were among the AI sector rivals who joined US officials in unveiling the voluntary pact a year ago. President Joe Biden's administration said at the time that it had secured commitments from the companies "to help move toward safe, secure, and transparent development of AI technology." The safeguards include tech companies simulating attacks on AI models in a testing technique referred to as "red teaming" to expose flaws or vulnerabilities. Testing of AI models or systems is to include societal risks and national security concerns such as cyber assaults and developing biological weapons, according to the White House. Companies that signed on to the commitment are to work on sharing information with each other and the government about AI dangers and attempts to circumvent defenses. Apple in June unveiled "Apple Intelligence," its suite of AI features for its coveted devices as it looks to reassure users that it is not falling behind on the AI frenzy. Apple's announcement included a partnership with OpenAI that would make ChatGPT available to iPhone users on request. Biden late last year signed an executive order setting new safety standards for AI systems and requiring developers to share results of safety tests with the US government. Biden's executive order was touted by the White House as "the most sweeping actions ever taken to protect Americans from the potential risks of AI systems." Shortly after it was issued, Vice President Kamala Harris gave a major AI policy speech at a gathering of politicians, tech industry figures and academics. The event focused on growing fears about the implications of advanced AI models that have prompted concerns around everything from job losses and cyber-attacks to humankind losing control of the very systems it created.
[7]
Apple Agrees to Follow President Biden's AI Safety Guidelines
Apple has committed to a set of voluntary AI safeguards established by President Joe Biden's administration, joining other tech giants in a move to ensure responsible AI development (via Bloomberg). Apple is now part of a group of influential technology companies agreeing to the Biden administration's voluntary safeguards for artificial intelligence. The safeguards, announced by the White House last year as part of an Executive Order, aim to guide the development of AI systems, ensuring they are tested for discriminatory tendencies, security vulnerabilities, and potential national security risks. The principles outlined in the guidelines call for companies to share the results of AI system tests with governments, civil society, and academia. This level of transparency is intended to foster an environment of accountability and peer review, promoting the development of safer and more reliable AI technologies. The safeguards Apple and other tech companies have agreed to also include commitments to test their AI systems for biases and security concerns. Although these guidelines are not legally binding, they signify a collective effort by the tech industry to self-regulate and mitigate the potential risks associated with AI technologies. The executive order signed by President Biden last year also requires AI systems to undergo testing before being eligible for federal procurement. Apple's participation in the initiative coincides with its plans to introduce its own cohesive AI system and deep integration with OpenAI's ChatGPT. Apple Intelligence will be supported by the iPhone 15 Pro and iPhone 15 Pro Max, as well as all upcoming iPhone 16 models. For the Mac and iPad, all devices equipped with M-series Apple silicon chips will support Apple Intelligence. While Apple Intelligence is not yet available in beta for iOS 18, iPadOS 18, or macOS Sequoia, the company has pledged to release some features in beta soon, with a public release expected by the end of the year. Further enhancements, including an overhaul to Siri that leverages in-app actions and personal context, are anticipated to roll out in the spring of 2025.
[8]
Apple adopts Biden administration's AI safeguards
Apple Intelligence is coming soon, and Apple is pledging to make it safe. Apple has agreed to adopt a set of artificial intelligence safeguards, set forth by the Biden-Harris administration. The move was announced by the administration on Friday. Bloomberg was the first to report on the news. By adopting the guidelines, Apple has joined the ranks of OpenAI, Google, Microsoft, Amazon, and Meta, to name a few. The news comes ahead of Apple's much-awaited launch of Apple Intelligence (Apple's name for AI), which will become widely available in September, with the public launch of iOS 18, iPadOS 18, and macOS Sequoia. The new features, unveiled by Apple in June, aren't even available in beta right now, but the company is expected to slowly roll them out in the months to come. Apple is one of the signees of the Biden-Harris administration's AI Safety Institute Consortium (AISIC), which was created in February. But now the company has pledged to abide by a set of safeguards, which include testing AI systems for security flaws and sharing the results of those tests with the U.S. government, developing mechanisms that would allow users to know when content is AI-generated, as well as developing standards and tools to make sure AI systems are safe. The safeguards are voluntary and not enforceable, meaning the companies won't suffer consequences for not abiding by them. The European Union's AI Act - a set of regulations designed to protect citizens against high-risk AI - will be legally binding when it becomes effective on August 2, 2026, though some of its provisions will apply from February 2, 2025. Apple's upcoming set of AI features includes integration with OpenAI's powerful AI chatbot, ChatGPT. The announcement prompted X owner and Tesla and xAI CEO Elon Musk to warn he would ban Apple devices at his companies, deeming them an "unacceptable security violation." Musk's companies are notably absent from the AISIC signee list.
[9]
Apple to Adopt Voluntary AI Safeguards Established by Biden
Apple Inc. is the latest company to agree to a set of voluntary safeguards for artificial intelligence crafted by President Joe Biden's administration as it tries to guide the development of the emerging technology and encourage firms to protect consumers. The administration announced Friday that the technology giant is joining the ranks of OpenAI Inc., Amazon.com Inc., Alphabet Inc., Meta Platforms Inc., Microsoft Corp. and others in committing to test their AI systems for any discriminatory tendencies, security flaws or national security risks.
[10]
Ahead of Apple Intelligence launch, Apple agrees to AI safety guidelines established by Biden administration - 9to5Mac
Apple has joined other tech giants like OpenAI, Amazon, Google, Meta and Microsoft in agreeing to a set of voluntary AI safety rules established by the Biden administration (via Bloomberg). The safeguards are a first step in the US government having oversight of how artificial intelligence technology develops. Apple's cooperation comes ahead of the launch of its suite of Apple Intelligence features, starting with iOS 18 this fall. Apple Intelligence is not yet available in the beta channels for iOS 18, iPadOS 18 or macOS Sequoia. However, the company has pledged that some Apple Intelligence features will be available in beta soon, with the first set of features launching to customers by the end of the year. Other improvements, like an overhaul to Siri taking advantage of in-app actions and personal context, are expected to roll out in the spring of 2025. Apple Intelligence will be supported by iPhone 15 Pro and Pro Max, and is expected to include all iPhone 16 models launching this fall. For Mac and iPad platforms, if your device runs on an Apple Silicon chip, it will support Apple Intelligence. The administration's safety rules outline commitments for the companies to test the behavior of AI systems, ensuring they do not exhibit discriminatory tendencies or raise national security concerns. The results of conducted tests must be shared with governments and academia for peer review. At least for now, the White House AI guidelines are not enforceable in law. Congress has yet to pass a bill for formal AI regulation, but these tech companies are agreeing voluntarily in good faith.
[11]
United States on MacRumors
Apple Agrees to Follow President Biden's AI Safety Guidelines Apple has committed to a set of voluntary AI safeguards established by President Joe Biden's administration, joining other tech giants in a move to ensure responsible AI development (via Bloomberg). Apple is now part of a group of influential technology companies agreeing to the Biden administration's voluntary safeguards for artificial intelligence. The safeguards, announced by the...
[12]
Apple Intelligence will adhere to new and vague federal artificial intelligence safeguards
Future expansions to Apple Intelligence may involve more AI partners and paid subscriptions. Apple, amongst other big tech companies, has agreed to a voluntary and toothless initiative that asks for fairness in developing artificial intelligence models, and monitoring of potential security or privacy issues as development continues. In an announcement on Friday morning, the White House said that Apple has agreed to adhere to a set of voluntary artificial intelligence safeguards that the Biden administration has crafted. Other big tech firms have already signed onto the effort, including Alphabet, Amazon, Meta, Microsoft, and OpenAI. As they stand, the guidelines aren't hard lines in the sand against behaviors. The executive order that Apple has agreed to in principle has six key tenets. Under the executive order, companies are asked to share results of compliance testing with each other, and the federal government. There is also a voluntary security risk assessment called for in the order, the results of which are also asked to be shared far and wide. As of yet, there are no penalties for non-compliance, nor is there any kind of enforcement framework. In order to be eligible for federal purchase, AI systems must be tested in advance of submittal to any requests for proposal. There is no monitoring for compliance with the terms of the order, and it's not clear if any actual legally enforceable guardrails will be applied. What's also not clear is what might happen to the agreement under a future administration. There have been bipartisan efforts at regulation of AI development, but there has been little movement or discussion after initial lip service. A resumption of that debate isn't expected any time soon, and certainly not before the November 2024 elections. The Executive Order in October that preceded Apple's commitment was itself launched before a multinational effort in November 2023 to lay out safe frameworks for AI development. It's not yet clear how the two efforts dovetail, or if they do at all. The White House is expected to hold a further briefing on the matter on July 26.
Apple has agreed to participate in a voluntary US government initiative aimed at managing the risks associated with artificial intelligence. This move aligns Apple with other major tech companies in addressing AI safety concerns.
Apple Inc. has officially signed on to a voluntary US government scheme designed to manage the risks associated with artificial intelligence (AI) [1]. This decision, announced by the White House, marks a significant step in the tech giant's approach to AI development and deployment.
The initiative, which complements President Joe Biden's executive order on AI, aims to promote responsible innovation in artificial intelligence [2]. Companies participating in this scheme commit to measures such as conducting security testing of their AI systems before public release, sharing information about managing AI risks, and developing mechanisms, such as watermarking, that help users know when content is AI-generated.
Apple's participation aligns it with other major tech companies that have already joined the initiative. These include Alphabet's Google, Amazon, Anthropic, Inflection, Meta Platforms, Microsoft, and OpenAI [3]. The collective involvement of these industry leaders underscores the growing importance of addressing AI safety concerns.
While Apple had been relatively quiet about its AI projects compared to its peers, this move, coming shortly after the unveiling of Apple Intelligence at WWDC 2024, signals a more active stance in the AI landscape [4]. The company's participation in this scheme may indicate plans for more significant AI integrations in future products and services.
The voluntary nature of this scheme highlights the collaborative approach between the US government and the tech industry in addressing AI challenges. It represents a proactive step towards establishing guidelines and best practices for AI development without resorting to strict regulations [5].
This US initiative comes amid growing global efforts to regulate AI. The European Union has been working on comprehensive AI regulations, while other countries are also considering similar measures. Apple's participation in the US scheme may influence its approach to AI development and deployment on a global scale.
References: [3], [5]
Apple and Meta have declined to join a voluntary AI pact in the European Union, despite support from major tech companies like OpenAI, Google, and Microsoft. This decision highlights the growing divide in the tech industry's approach to AI regulation.
3 Sources
Apple introduces on-device AI capabilities for iPhones, iPads, and Macs, promising enhanced user experiences while maintaining privacy. The move puts Apple in direct competition with other tech giants in the AI race.
6 Sources
Apple releases a free software update introducing AI features to iPhone 16 lineup and select older models, marking its entry into the AI-powered smartphone market.
12 Sources
As major AI companies like OpenAI and Google encounter obstacles in improving their models, Apple's gradual rollout of AI features through Apple Intelligence is gaining momentum, potentially reshaping the competitive landscape.
3 Sources
Apple's upcoming AI platform, Apple Intelligence, is set to launch with iOS 18, bringing new features to iPhones, iPads, and Macs. This article explores the platform's capabilities, rollout strategy, and how it compares to competitors.
23 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2024 TheOutpost.AI All rights reserved