7 Sources
[1]
OpenAI tightens the screws on security to keep away prying eyes | TechCrunch
OpenAI has reportedly overhauled its security operations to protect against corporate espionage. According to the Financial Times, the company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using "distillation" techniques. The beefed-up security includes "information tenting" policies that limit staff access to sensitive algorithms and new products. For example, during development of OpenAI's o1 model, only verified team members who had been read into the project could discuss it in shared office spaces, according to the FT. And there's more. OpenAI now isolates proprietary technology in offline computer systems, implements biometric access controls for office areas (it scans employees' fingerprints), and maintains a "deny-by-default" internet policy requiring explicit approval for external connections, per the report, which further adds that the company has increased physical security at data centers and expanded its cybersecurity personnel. The changes are said to reflect broader concerns about foreign adversaries attempting to steal OpenAI's intellectual property, though given the ongoing poaching wars among American AI companies and increasingly frequent leaks of CEO Sam Altman's comments, OpenAI may be attempting to address internal security issues, too.
[2]
OpenAI clamps down on security after foreign spying threats
OpenAI has overhauled its security operations to protect its intellectual property from corporate espionage, following claims of having been targeted by Chinese rivals. The changes in recent months include stricter controls on sensitive information and enhanced vetting of staff, according to several people close to the $300bn artificial intelligence company. The San Francisco-based start-up has been bolstering its security efforts since last year, but the clampdown was accelerated after Chinese AI start-up DeepSeek released a rival model in January. OpenAI claimed that DeepSeek had improperly copied the California-based company's models, using a technique known as "distillation", to release a rival AI system. It has since added security measures to guard against these tactics. DeepSeek has not commented on the claims. The episode "prompted OpenAI to be much more rigorous", said one person close to its security team, who added that the company, led by Sam Altman, had been "aggressively" expanding its security personnel and practices, including cyber security teams. A global AI arms race has led to greater concerns about attempts to steal the technology, which could threaten economic and national security. US authorities warned tech start-ups last year that foreign adversaries, including China, had increased efforts to acquire their sensitive data. OpenAI insiders said the start-up had been implementing stricter policies in its San Francisco offices since last summer to restrict staff access to crucial information about technologies such as its algorithms and new products. The policies -- known as information "tenting" -- significantly reduced the number of people who could access the novel algorithms being developed, insiders said. For example, when OpenAI was developing its new o1 model last year, codenamed "Strawberry" internally, staff working on the project were told to check that other employees were also part of the "Strawberry tent" before discussing it in communal office spaces. The strict approach made work difficult for some staff. "It got very tight -- you either had everything or nothing," one person said. They added that over time "more people are being read in on the things they need to be, without being read in on others". The company now keeps a lot of its proprietary technology in isolated environments, meaning computer systems are kept offline and separate from other networks, according to people familiar with the practices. It also had biometric checks in its offices, where individuals could only access certain rooms by scanning their fingerprints, they added. In order to protect model weights -- parameters that influence how a model responds to prompts -- OpenAI adopts a "deny-by-default egress policy", meaning nothing is allowed to connect to the internet unless explicitly approved. OpenAI had also increased physical security at its data centres, the people said. It was one of a number of Silicon Valley companies that stepped up their screening of staff and potential recruits because of an increased threat of Chinese espionage, the Financial Times reported last year. Washington and Beijing are locked in a growing strategic competition, with the US imposing export controls to make it harder for China to obtain and develop cutting-edge technologies. However, concerns have also been raised about a rise in xenophobia at US tech companies given the prevalence of skilled workers of Asian descent. 
OpenAI hired Dane Stuckey last October as its new chief information security officer from the same role at Palantir, the data intelligence group known for its extensive military and government work. Stuckey works alongside Matt Knight, OpenAI's vice-president of security products. Knight has been developing ways to use OpenAI's large language models to improve its defences against cyber attacks, according to a person with knowledge of the matter. Retired US army general Paul Nakasone was appointed to OpenAI's board last year to help oversee its defences against cyber security threats. OpenAI said it was investing heavily in its security and privacy programs, as it wants to lead the industry. The changes were not made in response to any particular incident, it added.
[3]
OpenAI is reportedly upping security following rumored foreign threats
The ChatGPT maker is already funding further AI security research. ChatGPT-maker OpenAI has reportedly intensified its security operations to combat corporate espionage, amid rumors that foreign companies could be looking to the AI giant for inspiration. The move follows Chinese startup DeepSeek's release of a competing AI model, which reportedly uses distillation to copy OpenAI's technology. Distillation is where a third party transfers knowledge from a large, complex 'teacher' model to a smaller, more efficient 'student' model, allowing the third party to create a smaller model with improved inference speed. OpenAI has reportedly introduced new policies to restrict employee access to sensitive projects and discussions, similar to how it handled the development of the o1 model - according to a TechCrunch report, only pre-approved staff could discuss the o1 model in shared office areas. Moreover, proprietary technologies are now being kept on offline systems to reduce the chances of a breach, while offices now use fingerprint scans for access to strengthen physical security. Strict network policies also center around a deny-by-default approach, with external connections requiring explicit approval. The reports also indicate that OpenAI has added more personnel to strengthen its cybersecurity teams and to enhance physical security at important sites like its data centers. Being at the forefront of AI innovation comes with added cost for OpenAI - its Cybersecurity Grant Program has funded 28 research initiatives exploring prompt injection, secure code generation and autonomous cybersecurity defenses, with the company acknowledging that AI has the power to democratize cyberattackers' access to more sophisticated technologies. TechRadar Pro has asked OpenAI for more context surrounding the reports, but the company did not respond to our request.
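To make the distillation concept concrete, here is a minimal, illustrative PyTorch sketch of the standard textbook formulation: a student model is trained to match the temperature-softened output distribution of a teacher model. This is a generic technique sketch, not a description of DeepSeek's or OpenAI's actual pipelines; all tensor shapes and values below are assumptions.

```python
# Minimal sketch of knowledge distillation (generic formulation, not any
# specific company's pipeline): a small "student" learns to match the
# temperature-softened output distribution of a larger "teacher".
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

# Toy usage: a batch of 4 examples over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)                      # would come from the large model
student_logits = torch.randn(4, 10, requires_grad=True)  # would come from the small model
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student
```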
[4]
Increasingly Paranoid OpenAI Has Installed Fingerprint Scanners and Airgapped Systems to Prevent Secrets Escaping
As the United States embroils itself in a self-inflicted "arms race" with China, tech companies are ratcheting up the paranoia to extreme levels. Take ChatGPT's creator OpenAI, which is reportedly clamping down hard on physical security after it says it was "targeted" by Chinese AI rivals. Per the Financial Times, the company has gone as far as installing fingerprint "biometric access controls" around its offices, as well as electronically-dependent security airlocks, similar to the kind found in industrial cleanrooms. It likewise beefed up "physical security" -- presumably meaning security guards -- in its datacenters, and shored up its cybersecurity team for good measure. The physical moves come along with some draconian crackdowns on digital security, or as OpenAI calls it, "information tenting." Per the FT, OpenAI has been increasingly limiting the number of employees with access to sensitive information and spaces where such information is allowed to be discussed, such as new product developments or algorithms. The company's computers are likewise cut off from the wider internet, with a "deny-by-default" system requiring approval for external connections. Given the tech giant's vague explanation, it's tough to say what really prompted the security crackdown. Broadly, it could be the case that the Trump administration is imposing stricter computer standards on the industry, as billion-dollar tech companies like Palantir and SpaceX become interlocked with US military and foreign policy interests. For example, the news comes nearly a month after OpenAI signed a $200 million contract with the US Department of Defense to develop "national security and defense systems," and over half a year after Chinese company DeepSeek released its rival large language model (LLM), which strengthened the narrative that Chinese AI development was catching up to the US. OpenAI had previously used DeepSeek -- and China more broadly -- as an excuse to call for sweeping bans on the scary foreign tech, which would serve to lessen the pressure on ChatGPT to actually compete on the open market. However, OpenAI could also be responding to a domestic threat: specifically, the risk of corporate espionage by rival AI company Meta, which has gone on a humongous poaching spree targeting OpenAI developers in recent weeks. A number of OpenAI employees have reportedly been offered sign-on bonuses as high as nine figures to defect to Meta, ostensibly taking their AI knowhow with them. CEO Sam Altman responded to that rival threat by shutting the company down for a week so executives could assess the damages and stop the bleeding.
[5]
OpenAI is Limiting Employees from Accessing its Top AI Algorithms: Report | AIM
The company is taking stringent measures to prevent corporate espionage. OpenAI, the AI startup behind ChatGPT, has 'overhauled' its security operations to protect intellectual property from corporate espionage, the Financial Times reported on July 8. The development took place after the company claimed that Chinese AI startup DeepSeek had copied its models using distillation techniques. The report noted that OpenAI is implementing tighter restrictions on sensitive data and strengthening staff vetting processes. The company's policies, referred to as information 'tenting', have limited the number of personnel who can access the new algorithms being developed at OpenAI, according to insiders quoted by FT. Additionally, employees are only permitted to enter certain rooms by scanning their fingerprints. OpenAI safeguards its model weights by implementing a 'deny-by-default egress policy', which means that no connections to the internet are permitted unless they are explicitly authorised. The company has also increased physical security at its data centres. Earlier this year, Microsoft security researchers believed that individuals potentially connected to DeepSeek were 'exfiltrating a significant amount of data' through OpenAI's API, as per a report by Bloomberg. Furthermore, OpenAI also told FT that it had seen "some evidence of distillation", a technique used to improve the performance of an AI model by using outputs from another one. In April, Business Insider reported that OpenAI required developers seeking access to the company's advanced AI models to verify their identity with a government ID. China's DeepSeek, a subsidiary of HighFlyer, launched the R1 reasoning model a few months ago. It generated significant industry buzz by being open source and providing capabilities comparable to OpenAI's o1 reasoning model at a fraction of the training cost. Since then, there has been much discussion of the threat that models from China potentially pose.
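The 'deny-by-default egress policy' described above is a standard network-security pattern: every outbound connection is refused unless its destination is explicitly approved. As a purely conceptual Python sketch (OpenAI's actual controls are not public and would be enforced at the network layer by firewalls or proxies, not application code; the hostnames below are hypothetical):

```python
# Conceptual sketch of a "deny-by-default" egress policy, illustrative only.
# Real enforcement happens in network infrastructure; hostnames are invented.
ALLOWED_EGRESS = {
    ("updates.example.internal", 443),  # hypothetical approved endpoint
    ("mirror.example.internal", 443),   # hypothetical approved endpoint
}

def egress_permitted(host: str, port: int) -> bool:
    """Deny by default: only explicitly approved (host, port) pairs pass."""
    return (host, port) in ALLOWED_EGRESS

# Any destination absent from the allowlist is refused.
for dest in [("updates.example.internal", 443), ("api.example.com", 443)]:
    verdict = "ALLOW" if egress_permitted(*dest) else "DENY"
    print(f"{verdict} {dest[0]}:{dest[1]}")
```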
[6]
OpenAI tightens internal security amid fears of IP theft by Chinese AI rivals - SiliconANGLE
OpenAI is reportedly upping its internal security to protect its intellectual property from corporate espionage amid claims that it has been targeted by Chinese artificial intelligence companies. According to the Financial Times, which cites several unnamed people close to OpenAI, the recent changes have included stricter controls on sensitive information and enhanced vetting of staff. The decision to ramp up security is also said to have accelerated after Chinese AI startup DeepSeek released a rival AI model in January that is alleged to have used ChatGPT data to train its R1 large language model, a process known as model "distillation." The move angered OpenAI in an ironic twist, considering that the company trains its models on vast swaths of public internet data, much of it used without direct permission. OpenAI has put in place safeguards to stop a repeat of the DeepSeek situation and also implemented physical safeguards on the ground to protect its IP. The company's internal projects are now being developed under a system of "tenting," which limits access to information only to team members who are read into specific projects. Key initiatives, like the o1 model that was developed last year, have been subject to these extreme compartmentalization practices, effectively walling off code, data and even conversations between teams. Other new measures include the implementation of biometric authentication, such as fingerprint scans for sensitive lab access, as well as a hardened "deny-by-default" approach to internet connectivity within internal systems. Portions of the company's infrastructure have been air-gapped to ensure critical data remains physically isolated from external networks. The company has also beefed up its cybersecurity and governance team, hiring former Palantir Technologies Inc. security head Dane Stuckey as chief information security officer and appointing retired U.S. Army General Paul Nakasone to its board. While the security measures are meant to shield OpenAI's IP from prying eyes, they have allegedly introduced new frictions internally. The increased compartmentalization has made cross-team collaboration more difficult and slowed development workflows. "It got very tight - you either had everything or nothing," one person told FT, before adding that over time "more people are being read in on the things they need to be, without being read in on others." The shift comes as part of a broader industry trend: As generative AI becomes more strategically and commercially valuable, protecting the models that power it is becoming just as important as building them.
[7]
OpenAI thinks it's being watched
OpenAI has reportedly revised its security protocols in response to perceived corporate espionage, according to information obtained by the Financial Times. This intensification of security measures followed the January release of a competing model by Chinese startup DeepSeek, which OpenAI claims improperly replicated its models through "distillation" techniques. The enhanced security framework incorporates "information tenting" policies, which restrict employee access to sensitive algorithms and new product developments. For instance, during the development phase of OpenAI's o1 model, only designated team members with explicit project clearance were authorized to discuss it within shared office environments, as detailed in the Financial Times report. Further modifications include the isolation of proprietary technology within offline computer systems. The company has also implemented biometric access controls, utilizing fingerprint scans for entry into specific office areas. A "deny-by-default" internet policy is now in place, mandating explicit authorization for all external network connections. The report further indicates that OpenAI has augmented physical security measures at its data centers and expanded its cybersecurity staffing. These revisions are understood to address broader concerns regarding foreign adversaries attempting to acquire OpenAI's intellectual property. However, the ongoing recruitment competition among American AI companies and frequent leaks of CEO Sam Altman's statements suggest OpenAI may also be addressing internal security vulnerabilities. OpenAI has been contacted for comment regarding these developments.
OpenAI has significantly enhanced its security protocols to protect its intellectual property from potential corporate espionage, particularly in response to alleged threats from foreign competitors.
OpenAI, the company behind ChatGPT, has implemented a comprehensive security overhaul to safeguard its intellectual property from corporate espionage. This move comes in response to growing concerns about foreign adversaries, particularly Chinese competitors, attempting to steal sensitive AI technology [1].
The security clampdown, which had been in progress since last year, was accelerated after Chinese AI startup DeepSeek released a competing model in January 2025. OpenAI alleged that DeepSeek had improperly copied its models using "distillation" techniques [2].
OpenAI has introduced several stringent security measures to protect its proprietary technology:
Isolated Computer Systems: Proprietary technology is now kept on offline systems, separate from other networks [2].
"Deny-by-Default" Internet Policy: External connections require explicit approval, maintaining strict control over data flow [1].
Enhanced Data Center Security: Physical security at data centers has been increased to protect critical infrastructure [4].
Expanded Cybersecurity Team: OpenAI has aggressively expanded its security personnel, including cybersecurity teams [2].
OpenAI has implemented a policy known as information "tenting" to restrict access to sensitive information:
Limited Access: The number of employees with access to crucial information about technologies, algorithms, and new products has been significantly reduced [5].
Project-Specific Clearance: During the development of OpenAI's o1 model, codenamed "Strawberry," only verified team members could discuss the project in shared office spaces [2].
Strict Vetting Process: The company has enhanced its vetting process for staff and potential recruits to mitigate potential security risks [2].
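As a toy illustration of the compartmentalization that "tenting" describes (the membership model, project, and employee names below are invented for illustration and are not OpenAI's actual system), access is granted only to people explicitly read into a project's tent:

```python
# Toy illustration of information "tenting": material is compartmentalized
# per project, and only staff explicitly "read into" a tent may access it.
# Project and employee names are hypothetical.
TENTS = {
    "strawberry": {"alice", "bob"},  # hypothetical tent for a codenamed project
}

def read_into(employee: str, project: str) -> bool:
    """Deny unless the employee is explicitly on the project's tent roster."""
    return employee in TENTS.get(project, set())

assert read_into("alice", "strawberry")           # read in -> access
assert not read_into("carol", "strawberry")       # not read in -> no access
assert not read_into("alice", "unknown-project")  # unknown tent -> no access
```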
The heightened security measures at OpenAI reflect broader concerns in the AI industry:
Global AI Arms Race: The security overhaul is part of a larger trend in the ongoing competition between the United States and China in AI development [2].
Balancing Security and Collaboration: Some staff members have reported difficulties working under such tight restrictions, highlighting the challenge of maintaining security while fostering innovation [2].
Concerns About Xenophobia: The increased scrutiny of foreign workers, particularly those of Asian descent, has raised concerns about potential discrimination in the tech industry [2].
Domestic Competition: The security measures may also be aimed at preventing corporate espionage by domestic rivals, as evidenced by recent poaching attempts by companies like Meta [4].
As AI technology continues to advance and its strategic importance grows, companies like OpenAI are likely to maintain and potentially increase their security measures to protect their intellectual property and maintain their competitive edge in the global AI market.