5 Sources
[1]
OpenAI tightens the screws on security to keep away prying eyes | TechCrunch
OpenAI has reportedly overhauled its security operations to protect against corporate espionage. According to the Financial Times, the company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using "distillation" techniques. The beefed-up security includes "information tenting" policies that limit staff access to sensitive algorithms and new products. For example, during development of OpenAI's o1 model, only verified team members who had been read into the project could discuss it in shared office spaces, according to the FT. And there's more. OpenAI now isolates proprietary technology in offline computer systems, implements biometric access controls for office areas (it scans employees' fingerprints), and maintains a "deny-by-default" internet policy requiring explicit approval for external connections, per the report, which further adds that the company has increased physical security at data centers and expanded its cybersecurity personnel. The changes are said to reflect broader concerns about foreign adversaries attempting to steal OpenAI's intellectual property, though given the ongoing poaching wars among American AI companies and increasingly frequent leaks of CEO Sam Altman's comments, OpenAI may be attempting to address internal security issues, too.
[2]
OpenAI clamps down on security after foreign spying threats
OpenAI has overhauled its security operations to protect its intellectual property from corporate espionage, following claims of having been targeted by Chinese rivals. The changes in recent months include stricter controls on sensitive information and enhanced vetting of staff, according to several people close to the $300bn artificial intelligence company. The San Francisco-based start-up has been bolstering its security efforts since last year, but the clampdown was accelerated after Chinese AI start-up DeepSeek released a rival model in January. OpenAI claimed that DeepSeek had improperly copied the California-based company's models, using a technique known as "distillation", to release a rival AI system. It has since added security measures to guard against these tactics. DeepSeek has not commented on the claims. The episode "prompted OpenAI to be much more rigorous", said one person close to its security team, who added that the company, led by Sam Altman, had been "aggressively" expanding its security personnel and practices, including cyber security teams. A global AI arms race has led to greater concerns about attempts to steal the technology, which could threaten economic and national security. US authorities warned tech start-ups last year that foreign adversaries, including China, had increased efforts to acquire their sensitive data. OpenAI insiders said the start-up had been implementing stricter policies in its San Francisco offices since last summer to restrict staff access to crucial information about technologies such as its algorithms and new products. The policies -- known as information "tenting" -- significantly reduced the number of people who could access the novel algorithms being developed, insiders said. 
For example, when OpenAI was developing its new o1 model last year, codenamed "Strawberry" internally, staff working on the project were told to check that other employees were also part of the "Strawberry tent" before discussing it in communal office spaces. The strict approach made work difficult for some staff. "It got very tight -- you either had everything or nothing," one person said. They added that over time "more people are being read in on the things they need to be, without being read in on others". The company now keeps a lot of its proprietary technology in isolated environments, meaning computer systems are kept offline and separate from other networks, according to people familiar with the practices. It also had biometric checks in its offices, where individuals could only access certain rooms by scanning their fingerprints, they added. In order to protect model weights -- parameters that influence how a model responds to prompts -- OpenAI adopts a "deny-by-default egress policy", meaning nothing is allowed to connect to the internet unless explicitly approved. OpenAI had also increased physical security at its data centres, the people said. It was one of a number of Silicon Valley companies that stepped up their screening of staff and potential recruits because of an increased threat of Chinese espionage, the Financial Times reported last year. Washington and Beijing are locked in a growing strategic competition, with the US imposing export controls to make it harder for China to obtain and develop cutting-edge technologies. However, concerns have also been raised about a rise in xenophobia at US tech companies given the prevalence of skilled workers of Asian descent. OpenAI hired Dane Stuckey last October as its new chief information security officer from the same role at Palantir, the data intelligence group known for its extensive military and government work. Stuckey works alongside Matt Knight, OpenAI's vice-president of security products. 
Knight has been developing ways to use OpenAI's large language models to improve its defences against cyber attacks, according to a person with knowledge of the matter. Retired US army general Paul Nakasone was appointed to OpenAI's board last year to help oversee its defences against cyber security threats. OpenAI said it was investing heavily in its security and privacy programs, as it wants to lead the industry. The changes were not made in response to any particular incident, it added.
[3]
OpenAI is reportedly upping security following rumored foreign threats
The ChatGPT maker is already funding further AI security research
ChatGPT-maker OpenAI has reportedly intensified its security operations to combat corporate espionage, amid rumors that foreign companies could be looking to the AI giant for inspiration. The move follows Chinese startup DeepSeek's release of a competing AI model, which reportedly uses distillation to copy OpenAI's technology. Distillation is where a third party transfers knowledge from a large, complex 'teacher' model to a smaller, more efficient 'student' model, allowing the third party to create a smaller model with improved inferencing speed. OpenAI has reportedly introduced new policies to restrict employee access to sensitive projects and discussions, similar to how it handled the development of the o1 model - according to a TechCrunch report, only pre-approved staff could discuss the o1 model in shared office areas. Moreover, proprietary technologies are now being kept on offline systems to reduce the chances of a breach, while offices now use fingerprint scans for access to strengthen physical security. Strict network policies also center around a deny-by-default approach, with external connections requiring additional approval. The reports also indicate that OpenAI has added more personnel to strengthen its cybersecurity teams and to enhance physical security at important sites like its data centers. Being at the forefront of AI innovation comes with added cost for OpenAI - its Cybersecurity Grant Program has funded 28 research initiatives that explore the concepts of prompt injection, secure code generation and autonomous cybersecurity defenses, with the company acknowledging that AI has the power to democratize cyberattackers' access to more sophisticated technologies. TechRadar Pro has asked OpenAI for more context surrounding the reports, but the company did not respond to our request.
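The teacher-student transfer described in this report can be illustrated with a toy loss function. This is a minimal sketch of the general distillation technique, not OpenAI's or DeepSeek's actual pipeline; the function names and the temperature value are illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    # Softened probabilities: a higher temperature flattens the distribution,
    # exposing more of the teacher's relative preferences between classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # training the student to minimize this transfers the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

When the student's logits match the teacher's exactly, the loss is zero; any divergence produces a positive penalty, which is what a third party would drive down by querying a model's outputs at scale.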
[4]
OpenAI is Limiting Employees from Accessing its Top AI Algorithms: Report | AIM
The company is taking stringent measures to prevent corporate espionage.
OpenAI, the AI startup behind ChatGPT, has 'overhauled' its security operations to protect intellectual property from corporate espionage, the Financial Times reported on July 8. The development took place after the company claimed that Chinese AI startup DeepSeek had copied its models using distillation techniques. The report noted that OpenAI is implementing tighter restrictions on sensitive data and strengthening staff vetting processes. The company's policies, referred to as information 'tenting', have limited the number of personnel who can access the new algorithms being developed at OpenAI, according to insiders quoted by FT. In addition, employees are only permitted to enter certain rooms by scanning their fingerprints. OpenAI safeguards its model weights by implementing a 'deny-by-default egress policy', which means that no connections to the internet are permitted unless they are explicitly authorised. The company has also increased physical security at its data centres. Earlier this year, Microsoft security researchers believed that individuals potentially connected to DeepSeek were 'exfiltrating a significant amount of data' through OpenAI's API, as per a report by Bloomberg. Furthermore, OpenAI also told FT that it had seen "some evidence of distillation", which is a technique to improve the performance of an AI model by using outputs from another one. In April, Business Insider reported that OpenAI required developers seeking access to the company's advanced AI models to verify their identity with a government ID. China's DeepSeek, a subsidiary of HighFlyer, launched the R1 reasoning model a few months ago. It generated significant industry buzz by being open source and providing capabilities comparable to OpenAI's o1 reasoning model, but at a fraction of the training cost. Since then, there has been a lot of buzz around the threat that models from China potentially pose.
[5]
OpenAI thinks it's being watched
OpenAI has reportedly revised its security protocols in response to perceived corporate espionage, according to information obtained by the Financial Times. This intensification of security measures followed the January release of a competing model by Chinese startup DeepSeek, which OpenAI claims improperly replicated its models through "distillation" techniques. The enhanced security framework incorporates "information tenting" policies, which restrict employee access to sensitive algorithms and new product developments. For instance, during the development phase of OpenAI's o1 model, only designated team members with explicit project clearance were authorized to discuss it within shared office environments, as detailed in the Financial Times report. Further modifications include the isolation of proprietary technology within offline computer systems. The company has also implemented biometric access controls, utilizing fingerprint scans for entry into specific office areas. A "deny-by-default" internet policy is now in place, mandating explicit authorization for all external network connections. The report further indicates that OpenAI has augmented physical security measures at its data centers and expanded its cybersecurity staffing. These revisions are understood to address broader concerns regarding foreign adversaries attempting to acquire OpenAI's intellectual property. However, the ongoing recruitment competition among American AI companies and frequent leaks of CEO Sam Altman's statements suggest OpenAI may also be addressing internal security vulnerabilities. OpenAI has been contacted for comment regarding these developments.
OpenAI, the company behind ChatGPT, has significantly enhanced its security measures to protect its intellectual property from potential corporate espionage. This move comes in response to growing concerns about foreign adversaries, particularly Chinese competitors, attempting to steal sensitive AI technology [1][2].
The security clampdown was accelerated after Chinese AI startup DeepSeek released a competing model in January 2025. OpenAI alleged that DeepSeek had improperly copied its models using "distillation" techniques [1][4]. This incident prompted OpenAI to become "much more rigorous" in its security practices, according to sources close to the company [2].
OpenAI has implemented a range of new security protocols:
Information "Tenting": This policy limits staff access to sensitive algorithms and new products. For example, during the development of OpenAI's o1 model (codenamed "Strawberry"), only verified team members could discuss the project in shared office spaces [1][2].
Isolated Environments: Proprietary technology is now kept in offline computer systems, separate from other networks [2].
Biometric Access Controls: Employees must scan their fingerprints to access certain office areas [1][2][4].
"Deny-by-Default" Internet Policy: External connections require explicit approval, particularly to protect model weights [2][3].
Enhanced Physical Security: The company has increased security measures at its data centers [1][2].
Expanded Cybersecurity Personnel: OpenAI has been "aggressively" expanding its security teams [2].
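The deny-by-default egress approach described above can be sketched as a simple allowlist check. This is a toy illustration of the general principle, not OpenAI's actual network controls; the hostnames and ports are invented:

```python
# A toy deny-by-default egress check: every outbound destination is
# blocked unless it appears on an explicit allowlist.
# The destinations below are illustrative, not real endpoints.
APPROVED_DESTINATIONS = {
    ("updates.internal.example", 443),
    ("telemetry.internal.example", 443),
}

def egress_allowed(host: str, port: int) -> bool:
    # Deny-by-default: no implicit exceptions, no wildcard fallback.
    # Anything not explicitly approved is refused.
    return (host, port) in APPROVED_DESTINATIONS
```

The design choice is that the failure mode is closed: an unlisted destination, a typo, or a new service all fail safe until someone explicitly approves the connection.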
The heightened security measures at OpenAI reflect broader concerns in the AI industry about protecting intellectual property. The global AI arms race has led to increased efforts by foreign entities to acquire sensitive data from tech startups [2]. This situation has prompted US authorities to warn companies about the rising threat of foreign espionage [2].
To bolster its security efforts, OpenAI has made strategic appointments [2]:
Dane Stuckey: Hired last October as chief information security officer, a role he previously held at Palantir [2].
Matt Knight: OpenAI's vice-president of security products, who has been developing ways to use the company's large language models to improve its cyber defenses [2].
Paul Nakasone: The retired US Army general was appointed to OpenAI's board last year to help oversee its defenses against cybersecurity threats [2].
While these measures are crucial for protecting OpenAI's intellectual property, they have also created challenges for some staff members. The strict approach has made work difficult for some employees, with one insider noting, "It got very tight -- you either had everything or nothing" [2]. However, the company is working to refine its approach, ensuring that employees have access to the information they need without compromising security [2].
OpenAI's security overhaul is part of a larger trend in Silicon Valley, where companies are stepping up their screening of staff and potential recruits due to increased threats of espionage [2]. However, this has also raised concerns about potential xenophobia, given the prevalence of skilled workers of Asian descent in the tech industry [2].
As the AI industry continues to evolve rapidly, the balance between innovation, collaboration, and security remains a critical challenge for companies like OpenAI. The company's aggressive approach to security underscores the high stakes in the global AI race and the increasing importance of protecting intellectual property in this fast-paced, competitive field.
Summarized by Navi