Curated by THEOUTPOST
On Fri, 25 Oct, 12:06 AM UTC
17 Sources
[1]
Apple is challenging hackers to break into the company's servers
Apple is taking its server security very seriously. The $3.5 trillion tech giant is challenging hackers to break into the company's tech -- and a $1 million cheque is up for grabs to those who succeed. The "security research challenge" coincides with Apple's rollout of its new AI-powered Apple Intelligence offering, as part of iOS 18.1. The server on which many of the Intelligence commands are run is called the Private Cloud Compute (PCC) server -- and Apple is determined to protect that server from any cyber attacks, hacks, or security breaches. The company swiftly sent out a call to amateur hackers and security experts alike to attempt to poke holes in its PCC: "Today we're making these resources publicly available to invite all security and privacy researchers -- or anyone with interest and a technical curiosity -- to learn more about PCC and perform their own independent verification of our claims," Apple wrote in a statement last week. "And we're excited to announce that we're expanding Apple Security Bounty to include PCC, with significant rewards for reports of issues with our security or privacy claims." The tech firm also supplied a security guide for the server, detailing how it functions, how it authenticates requests, and how it's built to protect against break-ins. It even released the source code for some parts of PCC on GitHub. Then it outlined the rewards for anyone willing to give it a whirl: anywhere from $50,000 up to $1 million, depending on the difficulty and gravity of the hack. So just how much could you take home if you manage to break into Apple's servers? "We award maximum amounts for vulnerabilities that compromise user data and inference request data outside the PCC trust boundary," Apple explained, before breaking down the bug bounty on offer. In the blog post explaining the challenge, Apple wrote that it considers PCC to be the "most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time." What's more, if a hacker spots a security issue not covered by Apple's outline, the company still promises to consider providing a bounty, it wrote. And the big-ticket ask? If a hacker is able to pull off "arbitrary execution of code without the user's permission or knowledge with arbitrary entitlements," they'll be awarded $1,000,000.
[2]
Apple Offers $1 Million Bug Bounty to Anyone Who Can Hack Its AI Servers
Apple is offering a reward of up to $1 million to anyone who can hack its new fleet of AI-focused servers meant for Apple Intelligence, which is slated to launch next week. Apple is asking researchers to test the security of "Private Cloud Compute," the servers that will receive and process user requests for Apple Intelligence when the AI task is too complex for the on-device processing of an iPhone, iPad, or Mac. To address privacy concerns, Apple designed Private Cloud Compute servers to immediately delete a user's request once the task is fulfilled. In addition, the system features end-to-end encryption, meaning Apple cannot uncover the user requests made through Apple Intelligence, even though it controls the server hardware. Still, Apple has invited the security community to vet the privacy claims around Private Cloud Compute. Cupertino started with a select group of researchers, but on Thursday, the company opened the door to any interested members of the public. Apple is offering access to the source code for key components of Private Cloud Compute, giving researchers an easy way to analyze the technology's software side. The company also created a "virtual research environment" for macOS that can run the Private Cloud Compute software. Another helpful tool is a security guide that covers more technical details about the company's server system for Apple Intelligence. "To further encourage your research in Private Cloud Compute, we're expanding Apple Security Bounty to include rewards for vulnerabilities that demonstrate a compromise of the fundamental security and privacy guarantees of PCC," the company added. Rewards include $250,000 for discovering a way to remotely hack Private Cloud Compute into exposing a user's data request. Apple is also offering $1 million if you can remotely attack the servers to execute rogue computer code with privileges. Lower rewards will be granted for security research that uncovers how to attack Private Cloud Compute from a "privileged network position." Apple says it'll also consider rewards for reported vulnerabilities "even if it doesn't match a published category." "We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time," it says.
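The end-to-end encryption claim above is the heart of PCC's privacy model: a request is encrypted on the device so that only the specific compute node chosen to handle it can decrypt it, and the operator of the server hardware never sees plaintext in transit. The snippet below is a minimal conceptual sketch of that idea in Python using the `cryptography` package; the key handling, function names, and message format are illustrative assumptions for this article, not Apple's actual protocol or code.

```python
# Conceptual sketch (not Apple's implementation): a client encrypts a request so that
# only the specific compute node holding the matching private key can read it.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# Node side: each hypothetical PCC-style node publishes a per-node public key, which in
# the real system would only be trusted after the node's software attestation is verified.
node_private = X25519PrivateKey.generate()
node_public = node_private.public_key()

def encrypt_request(node_public: X25519PublicKey, plaintext: bytes) -> dict:
    """Client side: ephemeral ECDH plus AEAD, so only the target node can decrypt."""
    eph_private = X25519PrivateKey.generate()
    shared = eph_private.exchange(node_public)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo-request").derive(shared)
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
    return {"eph_pub": eph_private.public_key(), "nonce": nonce, "ct": ciphertext}

def decrypt_request(node_private: X25519PrivateKey, msg: dict) -> bytes:
    """Node side: derive the same key from its private key and the ephemeral public key."""
    shared = node_private.exchange(msg["eph_pub"])
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo-request").derive(shared)
    return ChaCha20Poly1305(key).decrypt(msg["nonce"], msg["ct"], None)

msg = encrypt_request(node_public, b"summarize this email thread")
print(decrypt_request(node_private, msg))  # only the holder of node_private can do this
```

In PCC, per Apple's description, a device only sends requests to nodes whose software has been attested and publicly logged; the sketch skips that verification step entirely.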
[3]
Apple will pay you up to $1 million if you can hack into Apple Intelligence servers
The company's bug bounty is designed to test the security of the servers that process Apple Intelligence requests. Think you can hack your way into an Apple server? If so, you could score as much as $1 million courtesy of a new bug bounty. On Thursday, Apple revealed a challenge to test the security of the servers that will play a major role in its Apple Intelligence service. As Apple preps for the official launch of its AI-powered service next week, the company is naturally focused on security. Though much of the processing for Apple Intelligence requests will occur on your device, certain ones will have to be handled by Apple servers. Known collectively as Private Cloud Compute (PCC), these servers need to be hardened against any type of cyberattack or hack to guard against data theft and compromise. Apple has already been proactive about protecting PCC. After initially announcing Apple Intelligence, the company invited security and privacy researchers to inspect and verify the end-to-end security and privacy of the servers. Apple even gave certain researchers and auditors access to a Virtual Research Environment and other resources to help them test the security of PCC. Now the company is opening the door for anyone who wants to attempt to hack into its server collection. To give people a head start, Apple has published a Private Cloud Compute Security Guide. This guide explains how PCC works, with a particular focus on how requests are authenticated, how to inspect the software running in Apple's data centers, and how PCC's privacy and security are designed to withstand different types of cyberattacks. The Virtual Research Environment (VRE) is also open to anyone vying for the bug bounty. Running on a Mac, the VRE lets you inspect the PCC's software releases, download the files for each release, boot up a release in a virtual environment, and debug the PCC software to investigate it further. Apple has even published the source code for certain key components of PCC, which is accessible on GitHub. Now how about that bug bounty? The program is designed to uncover vulnerabilities across three major areas: accidental data disclosure, external compromise from user requests, and physical or internal access. Breaking it down further, the amounts Apple will pay out range from $50,000 for accidental or unexpected data disclosure due to a deployment or configuration issue, through $100,000 for executing unattested code and $150,000 for accessing a user's request data from a privileged network position, up to $250,000 for accessing a user's request data or sensitive information about the user's requests outside the trust boundary, and $1,000,000 for arbitrary code execution with arbitrary entitlements. However, Apple promises to consider awarding money for any security issue that significantly impacts PCC even if it doesn't match a published category. Here, the company will evaluate your report based on the quality of your presentation, proof of what can be exploited, and the impact on users. To learn more about Apple's bug bounty program and how to submit your own research, browse the Apple Security Bounty page. "We hope that you'll dive deeper into PCC's design with our Security Guide, explore the code yourself with the Virtual Research Environment, and report any issues you find through Apple Security Bounty," Apple said in its post. "We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time."
[4]
Apple will pay you $1 million if you can hack into their systems
Apple is offering up to $1 million to anyone who is able to hack its AI cloud, referred to as Private Cloud Compute (PCC). The company in a blog post said it is expanding Apple Security Bounty to include rewards for vulnerabilities that demonstrate a compromise of the fundamental security and privacy guarantees of PCC. Are you an ethical hacker? If yes, then here's your chance to get a reward from Apple. Yes you heard it right! Apple has opened up its private cloud compute to researchers, offering up to $1 million to anyone who finds a hole in the secure cloud platform that supports its Apple Intelligence features. Put in simple, the company has expanded its bug bounty program. It will offer upto Rs 8 crore to anyone who can hack its new fleet of AI-focused servers meant for Apple Intelligence. According to a report in Zdnet, Apple recently unveiled a challenge to test the security of the servers that will play a major role in its Apple intelligence services. ALSO READ: Digital condom: German company launches Camdom app. Know how it works The company is offering a massive bug bounty of up to $1 million to anyone who is able to hack its AI cloud, referred to as Private Cloud Compute (PCC). With the Apple gearing up to roll out the first set of features for its AI-driven Apple Intelligence in the next few days, the company has also shifted its focus on security. These new servers will be launched next week. This comes at a time as Apple Intelligence is about to launch on iphones next week with the arrival of its major point upgrade iOS 18.1. This will include iPhone AI features for the first time, such as enhancements to its voice assistant Siri. As per a recent blog post by Apple, the company for the first time has created a virtual research environment and opened the doors to the public to let everyone take a peek at the code and judge its security. The PCC was initially only available to a group of security researchers and auditors, but now, anyone can take a shot at trying to hack Apple's AI cloud. Apple calls PCC "the most advanced security architecture ever deployed for cloud AI compute at scale." ALSO READ: Why is Elon Musk endorsing Donald Trump? Here's the reason In the security research Blog, Apple has added that it "provided third-party auditors and select security researchers early access" to Private Cloud Compute resources to enable inspection. "Today we're making these resources publicly available to invite all security and privacy researchers -- or anyone with interest and a technical curiosity -- to learn more about PCC and perform their own independent verification of our claims. And we're excited to announce that we're expanding Apple Security Bounty to include PCC, with significant rewards for reports of issues with our security or privacy claims," it said in the blog post. Under the bug bounty program, Apple has categorised vulnerabilities into three main areas, each with distinct reward levels based on risk and complexity. Accidental data disclosure: vulnerabilities leading to unintended data exposure due to configuration flaws or system design issues. External compromise from user requests: vulnerabilities enabling external actors to exploit user requests to gain unauthorized access to PCC. Physical or internal access: vulnerabilities where access to internal interfaces enables a compromise of the system. 
Apple will offer access to the source code for key components of PCC, giving researchers an easy way to analyze the technology's software side. Breaking it down further, here are the amounts Apple will pay out for different kinds of hacks and discoveries: Accidental or unexpected disclosure of data due to a deployment or configuration issue -- $50,000. Ability to execute code that has not been attested -- $100,000. Access to a user's request data or other sensitive user details outside the trust boundary (the area where the level of trust changes because of the sensitive nature of the data being handled) from a privileged network position -- $150,000. Access to a user's request data or sensitive information about the user's requests outside the trust boundary -- $250,000. Arbitrary execution of code without the user's permission or knowledge with arbitrary entitlements -- $1,000,000. "We hope that you'll dive deeper into PCC's design with our Security Guide, explore the code yourself with the Virtual Research Environment, and report any issues you find through Apple Security Bounty," Apple said in its post. "We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time."
[5]
Think you can hack Apple Intelligence? The company might give you $1M
When Apple is publicly dishing out dosh, it's usually related to one of the company's compensation schemes. Some users who were afflicted by the infamously failure-prone Butterfly keyboard, for example, were recently handed $400 each. But if you think you can hack into Apple's AI servers, the company might give you considerably more than that. Apple has made a big deal about the security and privacy of Apple Intelligence, and is now putting its money where its mouth is by offering up to $1M to security researchers who can hack Private Cloud Compute, the company's new AI servers. Apple has revealed a list of bounty amounts for security researchers, including $1M for those who can achieve "arbitrary code execution with arbitrary entitlements" and $250,000 for "access to a user's request data or sensitive information about the user's requests outside the trust boundary". In a blog post introducing Private Cloud Compute, Apple explained how "to build public trust in the system," it has taken the "extraordinary step of allowing security and privacy researchers to inspect and verify the end-to-end security and privacy promises of PCC." "We designed Private Cloud Compute as part of Apple Intelligence to take an extraordinary step forward for privacy in AI," Apple adds. "This includes providing verifiable transparency -- a unique property that sets it apart from other server-based AI approaches."
[6]
Apple will pay up to $1M to anyone who hacks its AI cloud | Digital Trends
Apple just made an announcement that shows it means business when it comes to keeping Apple Intelligence secure. The company is offering a massive bug bounty of up to $1 million to anyone who will be able to hack its AI cloud, referred to as Private Cloud Compute. These servers will take over Apple Intelligence tasks when the on-device AI capabilities just aren't good enough -- but there are downsides, which is why Apple's bug-squashing mission seems like a good idea. As per a recent Apple Security blog post, Apple has created a virtual research environment and opened the doors to the public to let everyone take a peek at the code and judge its security. The PCC was initially only available to a group of security researchers and auditors, but now, anyone can take a crack at trying to hack Apple's AI cloud. A lot of Apple Intelligence tasks are said to be done on-device, but for more complex demands, the PCC steps in. Apple offers end-to-end encryption and only makes the data available to the user to ensure that your private requests remain just that -- private. However, with sensitive data like what AI might handle, be it on Macs or iPhones, users are right to feel concerned about the potential of the data leaving their device and ending up in the wrong hands. That's presumably partly why Apple is now reaching out to anyone who's interested and offering up to $1 million for hacking the Private Cloud Compute. The company provides access to the source code for some of the most important parts of PCC, which will make it possible for researchers to dig into its flaws. The $1 million bounty is not universal. That's the highest reward for the person or the team who manages to run malicious code on the PCC servers. The next bounty sits at $250,000 and covers exploits that might allow hackers to extract user data from Apple's AI cloud. There are also smaller rewards, starting at $150,000, which will be paid out to anyone who accesses user data from a "privileged network position." Apple's bug bounty program has previously helped it spot exploits ahead of time while rewarding the researchers involved. A couple of years ago, Apple paid a student $100,000 for successfully hacking a Mac. Let's hope that if there are any bugs to be found in Apple's AI cloud, they'll be spotted before Apple Intelligence becomes widely available.
[7]
Apple will pay security researchers up to $1 million to hack its private AI cloud | TechCrunch
Ahead of the debut of Apple's private AI cloud next week, dubbed Private Cloud Compute, the technology giant says it will pay security researchers up to $1 million to find vulnerabilities that can compromise the security of its private AI cloud. In a post on Apple's security blog, the company said it would pay up to the maximum $1 million bounty to anyone who reports exploits capable of remotely running malicious code on its Private Cloud Compute servers. Apple said it would also award researchers up to $250,000 for privately reporting exploits capable of extracting users' sensitive information or the prompts that customers submit to the company's private cloud. Apple said it would "consider any security issue that has a significant impact" outside of a published category, including up to $150,000 for exploits capable of accessing sensitive user information from a privileged network position. "We award maximum amounts for vulnerabilities that compromise user data and inference request data outside the [private cloud compute] trust boundary," Apple said. This is the latest logical extension of Apple's bug bounty program, which offers hackers and security researchers financial rewards for privately reporting flaws and vulnerabilities that could be used to compromise its customers' devices or accounts. In recent years, Apple has opened up the security of its flagship iPhones by creating a special researcher-only iPhone designed for hacking, in an effort to improve the security of a device that has been frequently targeted by spyware makers. Apple revealed more about the security of its Private Cloud Compute service in a blog post, as well as its source code and documentation. Apple bills Private Cloud Compute as an online extension of its customers' on-device AI model, dubbed Apple Intelligence, which can handle far heavier-lift AI tasks in a way that Apple says preserves the customers' privacy.
[8]
Apple offers $1 million bounty for uncovering security flaws in private AI cloud
In a nutshell: Apple says that Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale. By inviting scrutiny from the security research community, Cupertino hopes to build trust in its system and improve its security measures. Apple has announced a significant expansion of its bug bounty program to improve the security of its upcoming Private Cloud Compute service, designed as an extension of its on-device AI model, Apple Intelligence. This cloud-based service aims to handle more complex AI tasks while maintaining user privacy. The expanded bounty program focuses on three main threat categories: accidental data disclosure, external compromise from user requests, and physical or internal access vulnerabilities. Specifically, it is zeroing in on remote code execution, data extraction, and network-based attacks, with the maximum bounty of $1 million awarded to researchers who can identify exploits that allow malicious code to run remotely on its Private Cloud Compute servers. Researchers can also earn up to $250,000 for reporting vulnerabilities that enable the extraction of sensitive user information or submitted prompts. Exploits that access sensitive user data from a privileged network position could net researchers up to $150,000. Apple pointed out that the rewards for Private Cloud Compute vulnerabilities are comparable to those offered for iOS, given the critical nature of the service's security and privacy guarantees. To support this initiative, Apple is providing researchers with extensive resources to inspect and verify the end-to-end security and privacy promises of Private Cloud Compute. These include a comprehensive security guide detailing the architecture and security measures, a Virtual Research Environment (VRE) that allows direct analysis of the system on Mac computers, and access to source code for key components under a limited-use license agreement. Providing these tools is an unprecedented step for Cupertino, Apple said, aimed at building trust in the system. It has already provided third-party auditors and select security researchers early access to these resources. "Today we're making these resources publicly available to invite all security and privacy researchers - or anyone with interest and a technical curiosity - to learn more about PCC and perform their own independent verification of our claims." The VRE, which runs on Macs with Apple silicon and at least 16GB of unified memory, offers powerful tools for examining and verifying PCC software releases, booting releases in a virtualized environment, performing inference against demonstration models, and modifying and debugging PCC software for deeper investigation. Apple has made source code available for several components of Private Cloud Compute that cover various aspects of PCC's security and privacy implementation. These include the CloudAttestation project, Thimble project, splunkloggingd daemon, and srd_tools project. Apple also said it will consider providing rewards for any security issue uncovered with a significant impact, even if it falls outside the published categories. "We'll evaluate every report according to the quality of what's presented, the proof of what can be exploited, and the impact to users," it said.
[9]
Apple will pay up to $1 million to anyone who finds a privacy flaw inside Apple Intelligence
Apple made a very big deal about Apple Intelligence's privacy credentials when it launched the AI suite earlier this year. There has been some skepticism about those claims, especially from people like Elon Musk, who took particular offense to Apple's partnership with ChatGPT. But now Apple is putting its money where its mouth is, launching the first Apple Intelligence bug bounty. Specifically, Apple is inviting hackers to investigate the Private Cloud Compute (PCC) feature. While on-device AI is inherently more private because all the data stays on the phone, cloud computing is a different matter. PCC is Apple's attempt to fix that issue and offer cloud-based AI processing without compromising on data security and user privacy. But clearly Apple isn't expecting us all to take its word for it, and is actively inviting security researchers and "anyone with interest and a technical curiosity" to independently verify the company's claims about PCC. It would be a huge blow to Apple if this system were somehow compromised and bad actors got access to supposedly secure user data. The point of bug bounties is to incentivize hackers and other security professionals. Hackers are an intrepid bunch, and can often find ways to stress test systems that in-house developers never thought of. And by reporting any problems they come across, they form a mutually beneficial arrangement with Apple. Apple gets to fix security flaws quietly, without user data being exposed to the wrong people, and the hackers get paid for their effort. In the case of PCC, Apple is offering various rewards depending on the issue reported, but the maximum has now been increased to $1 million. That sum is only available for "arbitrary code execution with arbitrary entitlements" during a "remote attack on request data." That should tell you how seriously Apple is taking this, or how confident it is that PCC is secure. Overall, a million dollars is a small price to pay to avoid the PR disaster that would occur if criminals found a way in. To facilitate this, Apple is offering various tools and resources to aid bug bounty hunters in their work. They include a security guide with PCC's technical details, source code for "certain key components of PCC that help to implement its security and privacy requirements," and a "Virtual Research Environment" for doing security analysis of PCC. The latter requires you to have a Mac with Apple Silicon, at least 16GB of RAM, and access to the macOS Sequoia 15.1 developer preview. Privacy is always a concern when you're using online services, and cloud AI is absolutely no different. Thankfully, Apple does still seem to be sticking with its usual privacy-centric mandate, and is openly looking for ways to ensure things are secure. It won't please everyone, but it's better than nothing.
[10]
Apple Dares Anyone to Find a Problem With Its Darling AI, Offers $1 Million Bounty
If you find anything wrong with Apple Intelligence's Private Cloud Compute you can net between $50,000 and $1 million. Apple is very proud of the privacy apparatus surrounding Apple Intelligence, so proud that it's offering princely sums to anyone who finds any privacy issue or attack vector in its code. Apple's first bug bounty program for its AI is offering a hefty sum of $50,000 for anybody who finds any accidental data disclosure, but the real prize is $1 million for a remote attack on Apple's newfangled cloud processing. Apple first announced its Private Cloud Compute back in June, at the same time it detailed all the new AI features coming to iOS, iPadOS, and, eventually, macOS. The most important aspect of Apple's AI was the reinvigorated Siri that's capable of working across apps. As presented, Siri could go into your texts to pull up some information about a cousin's birthday your mom sent you, then pull extra information from your emails to make a calendar event. This also required processing the data through Apple's internal cloud servers. Apple, in turn, would be managing a treasure trove of user data that most people would want kept private. To keep up its reputation as a stickler for privacy, Apple says that Private Cloud Compute is an extra layer of both software and hardware security. Simply put, Apple claims your data will be secure, and that it won't -- and can't -- retain your data. Which brings us to the security bounty program. In a Thursday blog post, Apple's security team said it's inviting "all security researchers -- or anyone with interest and a technical curiosity... [to] perform their own independent verification of our claims." So far, Apple said it has allowed third-party auditors inside to root around, but this is the first time it's opening it up to the public. It supplies a security guide and access to a virtual research environment to analyze PCC inside the macOS Sequoia 15.1 developer preview. You'll need a Mac with an M-series chip and at least 16GB of RAM for access. The Cupertino company is supplying the cloud compute source code in a GitHub repository. Beyond calling all hackers and script kiddies to the table, Apple is offering a wide variety of payouts for any bugs or security issues. The base $50,000 is only for "accidental or unexpected data disclosure," but you could get a sweet $250,000 for "access to users' request data or sensitive information about the users' request." The top $1 million bounty is for "arbitrary code execution with arbitrary entitlements." It's indicative of how confident Apple is in this system, but at the very least the open invite could allow more people to go under the hood with Apple's cloud processes. The initial rollout of iOS 18.1 is set to hit iPhones on Oct. 28. There's already a beta for iOS 18.2, which gives users access to the ChatGPT integration. Apple forces users to grant permission to ChatGPT before it can see any of your requests or interact with Siri. OpenAI's chatbot is merely a stopgap before Apple has a chance to get its own AI fully in place. Apple touts its strong track record on privacy issues, though it has a penchant for tracking users within its own software ecosystems. In PCC's case, Apple is claiming it won't have any ability to check your logs or requests with Siri. Perhaps anybody accessing the source code can fact-check the tech giant on its privacy claims before Siri finally gets her upgrade, likely sometime in 2025.
[11]
Think you can hack into Apple's private AI clouds? There's $1,000,000 in it if you can
Key Takeaways: Apple is offering a bug bounty with rewards up to $1,000,000 for finding vulnerabilities. The top tier involves a remote attack on request data for a massive payout. Even the lowest tier bounty offers a significant sum, making ethical hacking lucrative. Hacking a company, not getting into legal trouble, and scoring a huge amount of money as payment sounds like fiction, but it's anything but. Ethical hackers are always taking on challenges set by companies, which award people money in exchange for finding flaws in their security, in a system called "bug bounties." Now, Apple has set up one for its new private AI clouds, and you can make some serious bank if you manage to crack open its security. Apple sets up a bug bounty with a $1,000,000 reward. You can read all about the bug bounty over on the Apple Security website. The company has some huge bounties up for grabs, with even the smallest tier earning the hacker some serious life-changing money:
Remote attack on request data -- arbitrary code execution with arbitrary entitlements: up to $1,000,000
Remote attack on request data -- access to a user's request data or sensitive information about the user's requests outside the trust boundary: up to $250,000
Attack on request data from a privileged network position -- access to a user's request data or other sensitive information about the user outside the trust boundary: up to $150,000
Attack on request data from a privileged network position -- ability to execute unattested code: up to $100,000
Accidental or unexpected data disclosure due to deployment or configuration issue: up to $50,000
$1,000,000 may seem like a lot of money for finding a bug, but the circumstances are extremely dire if such a hack is possible. If someone, without any administrative power, can run any code they like within Apple's private AI cloud, it would cost the company a lot more than $1,000,000 to get everything fixed and regain consumer trust. So, feeling up to the task? If you don't feel like you can take on one of the biggest tech companies' security, but you're still interested in ethical hacking, why not check out the Flipper Zero instead?
[12]
Apple opens Private Cloud Compute to public scrutiny
In June, Apple used its Worldwide Developer Conference to announce the creation of the Private Cloud Compute platform to run its Apple Intelligence applications, and now it's asking people to stress test the system for security holes. Apple has revealed that the platform (PCC) runs on custom-built server hardware and a specially hardened operating system derived from the same code base as iOS and macOS. It has also issued a security guide to the system, and pentesters can set up a Virtual Research Environment to examine the platform's strengths and weaknesses. "In the weeks after we announced Apple Intelligence and PCC, we provided third-party auditors and select security researchers early access to the resources we created to enable this inspection, including the PCC Virtual Research Environment (VRE)," the Apple Security Engineering and Architecture team wrote in a blog post on Thursday. "Today we're making these resources publicly available to invite all security and privacy researchers - or anyone with interest and a technical curiosity - to learn more about PCC and perform their own independent verification of our claims." Apple is also releasing the full source code for some elements of the PCC, namely the CloudAttestation project, the Thimble project, the splunkloggingd daemon, and the srd_tools project. To further incentivize white-hat hackers, the fruit cart is also offering serious money for flaws. If you can remotely pull off arbitrary code execution with arbitrary entitlements there's up to a million dollars to be had, with $250,000 on offer for remotely extracting a user's request data or other sensitive information. There are also bounties between $50,000 and $150,000 if you can hack the system from a privileged network position. "We hope that you'll dive deeper into PCC's design with our Security Guide, explore the code yourself with the Virtual Research Environment, and report any issues you find through Apple Security Bounty," the team declared. "We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time." ®
[13]
Apple creates Private Cloud Compute VM to let researchers find bugs
Apple created a Virtual Research Environment to allow public access to testing the security of its Private Cloud Compute system, and released the source code for some "key components" to help researchers analyze the privacy and safety features of the architecture. The company also seeks to improve the system's security and has expanded its security bounty program to include rewards of up to $1 million for vulnerabilities that could compromise "the fundamental security and privacy guarantees of PCC." Private Cloud Compute (PCC) is a cloud intelligence system for complex AI processing of data from user devices in a way that does not compromise privacy. This is achieved through end-to-end encryption, to ensure that personal data from Apple devices sent to PCC is accessible only to the user and not even Apple can observe it. Shortly after Apple announced PCC, the company gave early access to select security researchers and auditors so they could verify the privacy and security promises for the system. In a blog post today, Apple announces that access to PCC is now public and anyone curious can inspect how it works and check whether it lives up to the promised claims. The company makes available the Private Cloud Compute Security Guide, which explains the architecture and technical details of the components and the way they work. Apple also provides a Virtual Research Environment (VRE), which replicates the cloud intelligence system locally and allows inspecting it as well as testing its security and hunting for issues. "The VRE runs the PCC node software in a virtual machine with only minor modifications. Userspace software runs identically to the PCC node, with the boot process and kernel adapted for virtualization," Apple explains, sharing documentation on how to set up the Virtual Research Environment on your device. The VRE is present in the macOS Sequoia 15.1 Developer Preview and it needs a device with Apple silicon and at least 16GB of unified memory. The tools available in the virtual environment allow booting a PCC release in an isolated environment, modifying and debugging the PCC software for more thorough scrutiny, and performing inference against demonstration models. To make it easier for researchers, Apple decided to release the source code for some PCC components that implement security and privacy requirements: the CloudAttestation project, the Thimble project, the splunkloggingd daemon, and the srd_tools project. Apple also incentivizes research with new PCC categories in its security bounty program for accidental data disclosure, external compromise from user requests, and physical or internal access. The highest reward is $1 million for a remote attack on request data, which achieves remote code execution with arbitrary entitlements. For showing how to obtain access to a user's request data or sensitive info, a researcher can get a bounty of $250,000. Demonstrating the same type of attack from a privileged network position comes with a payment between $50,000 and $150,000. However, Apple says that it considers for rewards any issues that have a significant impact on PCC, even if they are outside the categories in its bug bounty program. The company believes that its "Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale" but still hopes to improve it further in terms of security and privacy with the help of researchers.
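The attestation and transparency components mentioned above (CloudAttestation and Thimble) exist so that a device can refuse to send a request to a node running software that is not publicly logged. The sketch below is a deliberately simplified, hypothetical illustration of that pattern in Python; it is not Apple's code, and the plain hash set standing in for a "transparency log" is a stand-in for the real cryptographically verifiable log and hardware-backed measurements.

```python
# Conceptual sketch (illustrative only, not Apple's CloudAttestation/Thimble code):
# a client accepts a node only if the software measurement the node attests to
# appears in a published log of known-good release hashes.
import hashlib

def measure(release_image: bytes) -> str:
    """Stand-in for a firmware/OS measurement: a hash of the release image."""
    return hashlib.sha256(release_image).hexdigest()

# Hypothetical transparency log: measurements of every release the operator has published
# for public inspection (the releases researchers can also boot in a VRE-style environment).
transparency_log = {
    measure(b"pcc-release-1.0"),
    measure(b"pcc-release-1.1"),
}

def verify_attestation(attested_measurement: str, log: set[str]) -> bool:
    """Client-side check: only send requests to nodes running logged, inspectable software."""
    return attested_measurement in log

good_node = measure(b"pcc-release-1.1")
rogue_node = measure(b"pcc-release-1.1-with-backdoor")
print(verify_attestation(good_node, transparency_log))   # True  -> request may be sent
print(verify_attestation(rogue_node, transparency_log))  # False -> refuse to send any data
```

The design point this illustrates is why publishing the releases and the verification tooling matters: if every node must attest to software that is publicly logged, any modified build fails the check before it ever receives user data.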
[14]
Apple Shares Private Cloud Compute Virtual Research Environment, Provides Bounties for Vulnerabilities
Private Cloud Compute is a cloud intelligence system that Apple designed for private artificial intelligence processing, and it's what Apple is using to keep Apple Intelligence requests secure when they need to be processed in the cloud. Apple promised to allow security and privacy researchers to verify the end-to-end security and privacy promises that Apple made with Private Cloud Compute, and today, Apple made its Private Cloud Compute Virtual Research Environment (VRE) and other materials publicly available to all security researchers. Apple has a Private Cloud Compute (PCC) Security Guide that details all of the components of PCC and how they work to provide privacy for cloud-based AI processing. Apple released the source code for select components of PCC that help implement its security and privacy requirements, which allows for a deeper dive into PCC. Along with these tools, Apple is expanding its Apple Security Bounty to include rewards for vulnerabilities that demonstrate a compromise of the fundamental privacy and security guarantees of Private Cloud Compute. Security researchers who locate a vulnerability can earn up to $1 million.
[15]
Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security
Apple has publicly made available its Private Cloud Compute (PCC) Virtual Research Environment (VRE), allowing the research community to inspect and verify the privacy and security guarantees of its offering. PCC, which Apple unveiled earlier this June, has been marketed as the "most advanced security architecture ever deployed for cloud AI compute at scale." With the new technology, the idea is to offload computationally complex Apple Intelligence requests to the cloud in a manner that doesn't sacrifice user privacy. Apple said it's inviting "all security and privacy researchers -- or anyone with interest and a technical curiosity -- to learn more about PCC and perform their own independent verification of our claims." To further incentivize research, the iPhone maker said it's expanding the Apple Security Bounty program to include PCC by offering monetary payouts ranging from $50,000 to $1,000,000 for security vulnerabilities identified in it. This includes flaws that could allow execution of malicious code on the server, and exploits capable of extracting users' sensitive data, or information about the user's requests. The VRE aims to offer a suite of tools to help researchers carry out their analysis of PCC from the Mac. It comes with a virtual Secure Enclave Processor (SEP) and leverages built-in macOS support for paravirtualized graphics to enable inference. Apple also said it's making the source code associated with some components of PCC accessible via GitHub to facilitate a deeper analysis. This includes CloudAttestation, Thimble, splunkloggingd, and srd_tools. "We designed Private Cloud Compute as part of Apple Intelligence to take an extraordinary step forward for privacy in AI," the Cupertino-based company said. "This includes providing verifiable transparency - a unique property that sets it apart from other server-based AI approaches." The development comes as broader research into generative artificial intelligence (AI) continues to uncover novel ways to jailbreak large language models (LLMs) and produce unintended output. Earlier this week, Palo Alto Networks detailed a technique called Deceptive Delight that involves mixing malicious and benign queries together to trick AI chatbots into bypassing their guardrails by taking advantage of their limited "attention span." The attack requires a minimum of two interactions, and works by first asking the chatbot to logically connect several events - including a restricted topic (e.g., how to make a bomb) - and then asking it to elaborate on the details of each event. Researchers have also demonstrated what's called a ConfusedPilot attack, which targets Retrieval-Augmented Generation (RAG) based AI systems like Microsoft 365 Copilot by poisoning the data environment with a seemingly innocuous document containing specifically crafted strings. "This attack allows manipulation of AI responses simply by adding malicious content to any documents the AI system might reference, potentially leading to widespread misinformation and compromised decision-making processes within the organization," Symmetry Systems said. Separately, it has been found that it's possible to tamper with a machine learning model's computational graph to plant "codeless, surreptitious" backdoors in pre-trained models like ResNet, YOLO, and Phi-3, a technique codenamed ShadowLogic. 
"Backdoors created using this technique will persist through fine-tuning, meaning foundation models can be hijacked to trigger attacker-defined behavior in any downstream application when a trigger input is received, making this attack technique a high-impact AI supply chain risk," Hidden Layer researchers Eoin Wickens, Kasimir Schulz, and Tom Bonner said. "Unlike standard software backdoors that rely on executing malicious code, these backdoors are embedded within the very structure of the model, making them more challenging to detect and mitigate."
[16]
Apple Intelligence bug bounty invites researchers to test its privacy claims
Many AI applications from other companies also rely on servers to complete more difficult requests. Still, users don't have much line of sight into how secure those server-based operations are. Apple, of course, has made a big deal over the years about how much it cares about user privacy, so poorly designed cloud servers for AI could poke a hole in that image. To prevent that, Apple said it designed the PCC so that the company's security and privacy guarantees are enforceable and that security researchers can independently verify those guarantees.
[17]
Apple offers Private Cloud Compute up for a security probe
The virtual environment for testing Private Cloud Compute - Image credit: Apple
Apple advised at launch that Private Cloud Compute's security would be inspectable by third parties. On Thursday, it fulfilled its promise. In June, Apple introduced Apple Intelligence and its cloud-based processing facility, Private Cloud Compute. It was pitched as being a secure and private way to handle in-cloud processing of Siri queries under Apple Intelligence. As well as insisting that it used cryptography and didn't store user data, Apple also insisted that the features could be inspected by independent experts. On October 24, it offered an update on that plan. In a Security Research blog post titled "Security research in Private Cloud Compute," Apple explains that it provided third-party auditors and some security researchers with early access. This included access to resources created for the project, including the PCC Virtual Research Environment (VRE). The post also says that the same resources are being made publicly available from Thursday. Apple says this allows all security and privacy researchers, "or anyone with interest and a technical curiosity," to learn about Private Cloud Compute's workings and to make their own independent verification. The release includes a new Private Cloud Compute Security Guide, which explains how the architecture is designed to meet Apple's core requirements for the project. It includes technical details of PCC components and their workings, how authentication and routing of requests occur, and how the security holds up to various forms of attack. The VRE is Apple's first ever for any of its platforms. It consists of tools to run the PCC node software on a virtual machine. This isn't exactly the same code as used on servers, as there are "minor modifications" for it to work locally. Apple insists the software runs identically to the PCC node, with changes only to the boot process and the kernel. The VRE also includes a virtual Secure Enclave Processor, and takes advantage of the built-in macOS support for paravirtualized graphics. Apple is also making the source code for some key components available for inspection. Offered under a limited-use license intended for analysis, the source code includes the CloudAttestation project for constructing and validating PCC node attestations. There's also the Thimble project, which includes a daemon for a user's device that works with CloudAttestation to verify transparency. Furthermore, Apple is expanding its Apple Security Bounty. It promises "significant rewards" for reports of issues with security and privacy in Private Cloud Compute. The new categories in the bounty directly align with critical threats from the Security Guide. This includes accidental data disclosure, external compromise from user requests, and physical or internal access vulnerabilities. The prize scale starts at $50,000 for the accidental or unexpected disclosure of data due to a deployment or configuration issue. At the top end of the scale, demonstrating arbitrary code execution with arbitrary entitlements can earn participants up to $1 million. Apple adds that it will consider any security issue that has a "significant impact" on PCC for a potential award, even if it doesn't line up with one of the defined categories.
"We hope that you'll dive deeper into PCC's design with our Security Guide, explore the code yourself with the Virtual Research Environment, and report any issues you find through Apple Security Bounty," the post states. In closing, Apple says it designed PCC "to take an extraordinary step forward for privacy in AI," including verifiable transparency. The post concludes "We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time."
Apple challenges hackers and security researchers to test the security of its new AI-powered servers, offering up to $1 million in bug bounties as part of its commitment to privacy and security for the upcoming Apple Intelligence service.
In an unprecedented move, Apple has thrown down the gauntlet to hackers and security researchers worldwide, offering up to $1 million to anyone who can successfully breach its new AI-focused servers. This bold initiative comes as the tech giant prepares to launch Apple Intelligence, its latest AI-powered offering, as part of iOS 18.1 [1].
At the heart of this challenge is Apple's Private Cloud Compute (PCC) server, which will handle complex AI tasks that exceed the processing capabilities of iPhones, iPads, and Macs. Apple has designed PCC with stringent privacy measures, including immediate deletion of user requests post-processing and end-to-end encryption [2].
To demonstrate its commitment to transparency, Apple has taken several unprecedented steps: publishing a Private Cloud Compute Security Guide that details the architecture and how requests are authenticated, releasing a Virtual Research Environment that lets researchers run and inspect the PCC software on a Mac, and making the source code for key PCC components available on GitHub under a limited-use license.
Apple's expanded Security Bounty program offers rewards for various levels of system compromise, ranging from $50,000 for accidental or unexpected data disclosure, through $100,000 for executing unattested code and $150,000 to $250,000 for accessing user request data outside the trust boundary, up to $1 million for arbitrary code execution with arbitrary entitlements.
The company has also expressed willingness to consider rewards for significant security issues not explicitly listed in its published categories [5].
This initiative underscores Apple's commitment to maintaining robust security and privacy standards in the rapidly evolving field of AI. By inviting public scrutiny, Apple aims to build trust in its system and potentially set a new benchmark for transparency in AI development.
As the tech industry grapples with concerns over AI safety and data privacy, Apple's approach could influence how other companies address these issues, potentially leading to more open and secure AI systems across the board.