Curated by THEOUTPOST
On Mon, 10 Feb, 4:04 PM UTC
21 Sources
[1]
3nm TSMC node, HBM, AI-focussed: No, it's not Nvidia's latest GPU but probably its worst nightmare to date
Rising costs and a worrying reliance on AI behemoth Nvidia have led tech giants such as Microsoft, Google, and Meta to look at building their own artificial intelligence chips. OpenAI, which is involved in the recently announced $500 billion Stargate initiative, is also reportedly developing its own AI hardware to give it some freedom from Team Green. A report from Reuters claims the company is in the final stages of producing its first chip - which could have cost OpenAI upwards of $500 million to design - and expects to send it for fabrication at TSMC in the coming months, with mass production likely to begin in 2026. OpenAI's chip development has been ongoing for a while. We first reported in July 2024 that Sam Altman's company was in discussions with Broadcom to design and build its own silicon and, more recently, that the AI firm was edging closer to this becoming a reality. The report claims, "If the initial tape-out goes smoothly, it would enable the ChatGPT maker to mass-produce its first in-house AI chip and potentially test an alternative to Nvidia's chips later this year. OpenAI's plan to send its design to TSMC this year demonstrates the startup has made speedy progress on its first design, a process that can take other chip designers years longer." Led by Richard Ho, who joined OpenAI over a year ago and previously played a key role in developing Google's own custom AI processors, the team developing the chip is reportedly relatively small, consisting of just 40 engineers. While the in-house AI chip will be capable of both training and inference tasks, Reuters' sources say that it will initially be "deployed on a limited scale, and primarily for running AI models." It will also have a limited role within the company's infrastructure. According to the news outlet, OpenAI views its custom AI chip as a way of improving its negotiating position with existing suppliers, including Nvidia.
The chip is being produced using TSMC's advanced 3-nanometer process and will feature a commonly used systolic array architecture, HBM, and advanced networking capabilities. OpenAI and TSMC declined to comment on the report.
[2]
Watch out, Nvidia. OpenAI's proprietary AI chip is coming along
Reports say OpenAI is almost finished with its in-house AI chip design and will begin mass production sometime in 2026. According to a new report from Reuters (spotted by Thurrott), OpenAI could finalize the design of its first 3nm AI chip in the coming months, with the goal of starting mass production at TSMC in 2026. The chip is being developed by a team of 40 OpenAI employees in collaboration with Broadcom. The project is being led by Richard Ho, OpenAI's new head of hardware, who previously worked on solutions for Google's infrastructure and cloud services. According to Reuters, OpenAI's chip will be able to both train and run AI models, but initially it'll be used mainly for inference (running AI models) and to a limited extent within the company's infrastructure. Demand for Nvidia's AI chips remains extremely high right now, with companies like OpenAI, Microsoft, Meta, and Google investing billions in AI data centers. But developing AI chips that can compete with Nvidia's own will be a challenge. According to Reuters, OpenAI may need to invest $500 million to develop the first version of its 3nm chip, and the cost could potentially double as software and peripherals are developed.
[3]
OpenAI plans to launch its custom AI processor by 2026: goodbye to Nvidia? - Softonic
OpenAI is ready to finalize the design of its custom AI processor: the goal is to stop relying on Nvidia. OpenAI plans to finalize its first design of a custom artificial intelligence processor in the coming months, sending it to TSMC for production, with the goal of achieving large-scale manufacturing by 2026. This move aims to ensure the company's competitiveness against giants like Google, Meta, and Microsoft, which have already made progress in the development of their own custom silicon. The design of OpenAI's processor will include a systolic array, a structure that allows for efficient matrix or vector calculations by connecting processing elements in a pipeline configuration. This processor is expected to use HBM memory and be manufactured using TSMC's N3 process, which belongs to the 3nm class. OpenAI is collaborating with Broadcom: the AI company will develop the key intellectual property, while Broadcom will handle the assembly of the final design. Richard Ho, who has experience developing TPUs at Google, leads the project. Although the OpenAI team has grown to 40 engineers, it is still considerably smaller than the teams at Amazon or Google, suggesting that its internal contribution to the design may be limited. The development of high-performance processors is a very costly project, with estimates suggesting that each model could cost hundreds of millions of dollars, not including the necessary infrastructure that could double that figure. However, the growing demand for AI chips has led companies like Meta and Microsoft to allocate multi-billion dollar investments in infrastructure, which could encourage OpenAI to follow a similar path. If OpenAI manages to achieve mass production of its processor by mid-2026, it could begin its deployment in the second half of that year.
[4]
OpenAI is building its own AI chip to reduce reliance on Nvidia
OpenAI is nearing completion of its first in-house AI chip, designed to reduce reliance on Nvidia, and is set to send the finalized design to Taiwan Semiconductor Manufacturing Company (TSMC) for production. The report comes from Reuters, which indicated that testing could occur ahead of mass production, targeted for 2026. In October 2024, Fortune reported that OpenAI is collaborating with Broadcom to create its own chip. This initiative aims to help OpenAI manage infrastructure costs and diversify its supply chain. Once the design is complete, TSMC will fabricate the chip utilizing its advanced 3-nanometer process technology. The development team, led by Richard Ho, a former leader in Google's custom AI chip program, has expanded significantly from 20 to 40 members in recent months, according to Reuters. TSMC's production will incorporate "high-bandwidth memory" and "extensive networking capabilities" to enhance performance. Using its own AI chip will allow OpenAI to decrease its dependence on Nvidia's chips for training and running AI models. At launch, OpenAI plans to deploy the chip on a limited scale, primarily for AI model operations, and aims to create future iterations with more advanced processors and features. Major tech companies such as Amazon Web Services, Microsoft, Google, and Meta Platforms are also investing in their own silicon to optimize performance and efficiency as they build out their AI capabilities. This strategic push follows a previous report indicating OpenAI's partnership with Broadcom to develop a custom chip, highlighting an ongoing trend among AI firms investing billions in hardware to meet the demands of their data-intensive models.
[5]
OpenAI to Produce First AI Chip with TSMC's 3nm Technology
OpenAI is progressing in its plan to develop custom AI chips to reduce the reliance on NVIDIA. According to a report, the company is preparing to finalise the design of its first in-house chip in the coming months and intends to send it for fabrication at TSMC (Taiwan Semiconductor Manufacturing Company). The process, known as "taping out," marks a key step before production. OpenAI expects to begin mass production in 2026. "The training-focused chip is viewed as a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers," the report stated. The company plans to improve its processors over multiple iterations. Richard Ho, formerly of Google, leads OpenAI's chip team, which has grown to 40 members. The project also involves collaboration with Broadcom. Tech companies such as Microsoft and Meta have faced challenges in producing AI chips. OpenAI's initiative follows broader industry efforts to reduce dependence on NVIDIA, which controls around 80% of the AI chip market. Microsoft and Meta have announced AI infrastructure investments of $80 billion and $60 billion, respectively, for the coming year. The chip is designed for training and running AI models but will be initially deployed on a limited scale. Expanding to the scale of Google or Amazon's AI chip programs would require OpenAI to hire significantly more engineers. TSMC will manufacture OpenAI's chip using 3-nm process technology. The design, similar to NVIDIA's chips, includes a systolic array architecture, high-bandwidth memory, and advanced networking features. OpenAI, along with Oracle, SoftBank, NVIDIA, and Microsoft, recently announced the Stargate Project to build AI infrastructure in the US with a planned investment of up to $500 billion over four years, starting with an initial deployment of $100 billion.
[6]
OpenAI custom chip project aims to challenge Nvidia's dominance
In context: Big tech companies and AI startups still largely rely on Nvidia's chips to train and operate the most advanced AI models. However, that could change fast. OpenAI is spearheading a massive industry-wide effort to bring cheaper custom AI accelerators to market. If successful, this push could weaken Nvidia's dominance in the AI hardware space, pushing the company into a tougher market. OpenAI is nearing the launch of its first custom-designed AI chip. Reuters expects the company to send the chip design to TSMC in the coming months for validation before mass production begins in 2026. The chip is nearing the tape-out stage, but OpenAI will likely need a significantly larger workforce to achieve full self-reliance in the AI accelerator market. The custom chip was designed by a "small" in-house team led by Richard Ho, who left Google to join OpenAI over a year ago. The 40-person team collaborated with Broadcom, a controversial company with a well-known track record for creating custom ASIC solutions. The two companies began negotiating a chip-focused partnership in 2024, with the ultimate goal of building new AI chips. Industry sources said OpenAI's design can both train and run AI models, but the company will initially use it in limited quantities for AI inferencing tasks only. TSMC will manufacture the final chip on its 3nm technology node, and OpenAI expects it to include a certain amount of high-bandwidth memory, like any other major AI (or GPU) silicon design. Despite playing a minor role in the company's infrastructure for the next few months, OpenAI's chip could become a significant disruptive force in the near future. The new design will need to pass the tape-out stage with flying colors first, and Ho's team will need to fix any hardware bugs discovered during the initial manufacturing tests.
Many tech companies are actively working to replace Nvidia products with their own custom solutions for AI acceleration, but the GPU maker still holds around 80 percent of the market. Microsoft, Google, Meta, and other Big Tech giants are employing hundreds of engineers to solve the silicon problem, with OpenAI coming in last both in timing and workforce size. Simply put, OpenAI will need much more than its small in-house team led by Richard Ho currently working on its AI chip prototype. Internally, the chip project is seen as a crucial tool for future strategic moves in the growing AI sector. While still waiting for design validation from TSMC, OpenAI engineers are already planning more advanced iterations for broader adoption.
[7]
OpenAI's secret weapon against Nvidia dependence takes shape
OpenAI is entering the final stages of designing its long-rumored AI processor with the aim of decreasing the company's dependence on Nvidia hardware, according to a Reuters report released Monday. The ChatGPT creator plans to send its chip designs to Taiwan Semiconductor Manufacturing Co. (TSMC) for fabrication within the next few months, but the chip has not yet been formally announced. The OpenAI chip's full capabilities, technical details, and exact timeline are still unknown, but the company reportedly intends to iterate on the design and improve it over time, giving it leverage in negotiations with chip suppliers -- and potentially granting the company future independence with a chip design it controls outright. In the past, we've seen other tech companies, such as Microsoft, Amazon, Google, and Meta, create their own AI acceleration chips for reasons that range from cost reduction to relieving shortages of AI chips supplied by Nvidia, which enjoys a near-market monopoly on high-powered GPUs (such as the Blackwell series) for data center use. In October 2023, we covered a report about OpenAI's intention to create its own AI accelerator chips for similar reasons, so OpenAI's custom chip project has been in the works for some time. In early 2024, OpenAI CEO Sam Altman also began spending considerable time traveling around the world trying to raise up to a reported $7 trillion to increase world chip fabrication capacity.
[8]
OpenAI and Broadcom to finalize custom AI processor in the coming months say industry sources
OpenAI expects to finalize its first custom AI processor design in the coming months and send it to TSMC for production, aiming for large-scale manufacturing by 2026, reports Reuters. OpenAI follows its rivals from Google, Meta, and Microsoft, so to remain competitive in terms of costs, it needs its own custom processors sooner rather than later. The custom silicon OpenAI will create is expected to feature a so-called systolic array design, a grid of identical processing elements (PEs) that perform matrix or vector computations, arranged in rows and columns and connected in such a way that data 'pulses' through the array in a pipeline-like fashion. The processor is said to use HBM memory, though it is unclear whether OpenAI plans to use HBM3E or HBM4. As for process technology, OpenAI reportedly aims at TSMC's proven N3-series (3nm-class) fabrication process. OpenAI is reportedly working with Broadcom on its custom AI processor project. Typically, companies that work with Broadcom on custom processors develop key differentiating intellectual property (IP) in-house (or at least define it with Broadcom), and then Broadcom adds the remaining parts, such as general-purpose CPU cores, memory and I/O controllers, and physical interfaces, as well as assembling the final design. On the OpenAI side, the effort is led by Richard Ho, who previously worked on Google's TPUs. Ho's team reportedly doubled to 40 engineers recently, but it is still considerably smaller than those at Amazon Web Services or Google. Expanding the initiative to match the scale of Google or Amazon would require hiring hundreds more engineers. That said, it is reasonable to expect OpenAI's in-house contribution to the design to be relatively small. Internally, OpenAI sees its custom processor as a way to improve its bargaining power with existing suppliers, mainly Nvidia.
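The "pulsing" dataflow described above can be sketched in a few lines of Python. The snippet below is a cycle-by-cycle simulation of a generic output-stationary systolic array computing a matrix product; it illustrates the general technique only, not OpenAI's actual microarchitecture, and every name in it is invented for the example:

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-level sketch of an output-stationary systolic array computing C = A @ B.

    Each processing element (PE) at grid position (i, j) owns one
    accumulator, C[i, j]. Every cycle, values of A pulse one PE to the
    right along the rows while values of B pulse one PE down the columns,
    and each PE multiply-accumulates the pair passing through it.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))        # per-PE accumulators (output-stationary)
    a_reg = np.zeros((n, m))    # A operand currently held by each PE
    b_reg = np.zeros((n, m))    # B operand currently held by each PE
    # Row i of A / column j of B enter i / j cycles late (input skew),
    # so A[i, s] and B[s, j] meet at PE (i, j) on cycle i + j + s.
    for t in range(n + m + k - 2):
        a_reg[:, 1:] = a_reg[:, :-1]   # A pulses one step right
        b_reg[1:, :] = b_reg[:-1, :]   # B pulses one step down
        for i in range(n):             # feed the left edge (skewed)
            s = t - i
            a_reg[i, 0] = A[i, s] if 0 <= s < k else 0.0
        for j in range(m):             # feed the top edge (skewed)
            s = t - j
            b_reg[0, j] = B[s, j] if 0 <= s < k else 0.0
        C += a_reg * b_reg             # all PEs multiply-accumulate in parallel
    return C
```

Checking the simulation against NumPy's own matrix multiply with `np.allclose(systolic_matmul(A, B), A @ B)` confirms the dataflow. The hardware appeal is that each PE only ever talks to its immediate neighbors, so wiring stays short and the array can be clocked fast.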
However, if successful, the company intends to refine and upgrade its custom silicon over time, expanding functionality with each iteration. The first version is expected to be produced in small quantities, primarily for running inference workloads on AI models rather than training them. Developing a high-performance AI processor is an expensive undertaking. Industry estimates suggest a single model could cost hundreds of millions of dollars, and supporting infrastructure, including necessary hardware and software, could double that amount. However, when more than one processor is developed per platform, those costs per processor typically drop. Historically, even companies like AWS, Google, Meta, and Microsoft have struggled to create competitive in-house processors that could beat Nvidia's GPUs in terms of performance. However, they have managed to build much cheaper processors with higher energy efficiency tailored for their workloads, which enables them to more than offset development costs. Also, these custom processors make running AI models cheaper for their cloud customers, which is good for the market. The demand for AI chips continues to surge as Big Tech companies need vast quantities of processors to train and then run their increasingly sophisticated models. Meta has allocated $60 billion for AI infrastructure this year, while Microsoft plans to invest $80 billion in 2025. OpenAI has not formally announced its 2025 spending plan -- which is not surprising as it is not a public company -- but, likely, it will also spend tens of billions of dollars on hardware, software, and infrastructure this year. If OpenAI manages to tape out its first custom processor in the coming months, then it will be able to mass produce it sometime a year after that, in mid-2026. If it is lucky, it will begin deployment in the second half of 2026.
[9]
NVIDIA grip loosens as OpenAI nears completion of custom AI chip
TL;DR: OpenAI is finalizing its first in-house AI chip design, aiming for mass production by 2026, challenging NVIDIA's dominance in AI hardware. NVIDIA is currently the dominant supplier of high-end AI hardware, but that could change, as OpenAI is reportedly nearing completion of its first-generation, in-house AI-dedicated silicon. The news comes from an exclusive report from Reuters that states the ChatGPT maker is currently in the midst of finalizing the design for its first in-house AI chip, and over the next few months, the design will be locked and sent over to the Taiwan Semiconductor Manufacturing Co (TSMC) for fabrication. OpenAI previously set a goal of hitting mass production of its own AI hardware sometime in 2026, and according to the report, these plans coincide with that timeline and illustrate that the ChatGPT maker is on track to hit it. Despite OpenAI nearing the design completion for the chip, the challenges don't stop there, as the first chip design will take approximately six months to produce, cost tens of millions of dollars, and there's no guarantee the silicon will function as intended. A failure in the first generation of silicon will mean OpenAI will need to take the design back to the drawing board, assess where it went wrong, and repeat the whole process all over again. According to sources that spoke with Reuters, OpenAI's chip will be training-focused and is internally viewed as a tool to strengthen OpenAI's negotiating leverage with other chip suppliers, such as NVIDIA. Notably, OpenAI's AI-chip development team is being helmed by Richard Ho, who joined OpenAI more than a year ago after transitioning from Google, where he assisted in the company's custom AI chip program.
[10]
OpenAI's custom chip design is near completion
OpenAI is poised to enter the custom component business in 2025. The brand is currently in talks with Taiwan Semiconductor Manufacturing Co. (TSMC) to fabricate the first generation of its in-house AI-based silicon, according to an exclusive report by Reuters. The AI start-up has plans to create custom chips to lessen its dependence on AI chips from Nvidia. Sources told Reuters that OpenAI is wrapping up the final design of its chip, which should be complete in the coming months. The company will design the chip in collaboration with Broadcom. The 40-person in-house team is spearheaded by former Google engineering lead Richard Ho. Once designs are complete, the company will send them to TSMC for an initial fabrication process called "taping out," using its 3-nanometer process technology, to ensure that the chip is viable for mass production. If successful, the chip could begin mass production at TSMC in 2026. While reports of these plans are circulating, neither OpenAI nor TSMC has confirmed that they are designing or collaborating on an AI component. Even so, Reuters noted that OpenAI striving to mass-produce an in-house AI chip in such a short time frame is an ambitious goal. Such a feat is extremely expensive and time-consuming. There is also the possibility an initial tape-out could fail, meaning the company would have to test for faults and repeat the process. However, if a tape-out is successful, and OpenAI is able to develop its first in-house AI chip on such a snappy schedule, the company will have completed a feat similar organizations have not been able to achieve. The brand would have "a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers," sources told the publication. After the first chip rollout, the company plans to create more powerful components, with greater abilities version after version, Reuters added. OpenAI has long had heavy competition with American contemporaries in the AI space.
However, the introduction of the Chinese start-up DeepSeek, with its open-source platform, has really shaken up the industry for all involved. CEO Sam Altman recently indicated that the company's former strategy of being a closed-source business is a thing of the past. The brand has moved swiftly in response to the dark horse DeepSeek, aligning itself with U.S. President Donald Trump's $500 billion Stargate infrastructure program. Reports also indicate OpenAI is close to closing a $40 billion deal with the Japanese investment firm SoftBank. The company also recently announced a new visual rebrand. In addition to recent product launches, including the o3-mini reasoning model and the Deep Research feature, the brand aired its first television commercial at the 59th Super Bowl on Sunday. Despite its efforts to develop AI chips quickly, the company remains very much in start-up mode. Sources told Reuters even if OpenAI is successful at developing and mass-producing its own chips, the components would have a limited function within the company. They would mainly be used for running AI models, whereas the brand also requires chips for training AI models.
[11]
OpenAI Is In The Process Of Finalizing The Design Of Its Custom AI Chip In The Next Few Months, Tape-Out Process At TSMC Could Commence From H1 2025
The journey to reduce OpenAI's reliance on NVIDIA and its GPUs starts with designing its custom AI chip, and the company behind ChatGPT has made some decent progress in this regard, assuming the latest report is to be believed. The artificial intelligence startup is said to be in the middle of finalizing the design of the silicon, with completion expected to take a few months. If this route has few potholes, OpenAI could send its in-house chip to TSMC for tape-out in the first half of the year. While there can be more than a few obstacles standing in the way of OpenAI's plans, the company seems determined to see its goal to fruition. As reported by CNBC, the tape-out process will take six months to complete and will cost tens of millions of dollars. However, if OpenAI is willing to pay TSMC a premium, the latter may produce the AI chip faster. Sadly, there is no guarantee that the first tape-out will be successful, as a failure will mean that another tape-out is required to repeat the process and isolate the problem. OpenAI was previously reported to leverage TSMC's A16 Angstrom process for its Sora video generator, but it is unconfirmed if this is the same AI chip whose design is set to be finalized in the coming months or a different in-house solution. This specific division is currently being led by OpenAI's Richard Ho, with the headcount increased to 40 talented individuals. The in-house chip design process is being assisted by Broadcom, though the exact capacity of the company's contribution is unknown. The name of OpenAI's custom AI chip has yet to be revealed, but its functionality will revolve around training and running artificial intelligence models. The silicon will initially have a limited role, which can grow depending on how many units OpenAI intends to deploy in the future.
If everything goes according to plan, mass production is estimated to start in 2026, with TSMC utilizing its 3nm technology for the chip, along with a systolic array architecture paired with the same High Bandwidth Memory (HBM) that NVIDIA uses for its own AI GPUs.
[12]
OpenAI set to finalise first custom chip design this year
The update shows that OpenAI is on track to meet its ambitious goal of mass production at TSMC in 2026. A typical tape-out costs tens of millions of dollars and will take roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. OpenAI is pushing ahead on its plan to reduce its reliance on Nvidia for its chip supply by developing its first generation of in-house artificial-intelligence silicon. The ChatGPT maker is finalising the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co, sources told Reuters. The process of sending a first design through a chip factory is called "taping out." OpenAI and TSMC declined to comment. There is no guarantee the silicon will function on the first tape out and a failure would require the company to diagnose the problem and repeat the tape-out step. Inside OpenAI, the training-focused chip is viewed as a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers, the sources said. After the initial chip, OpenAI's engineers plan to develop increasingly advanced processors with broader capabilities with each new iteration. If the initial tape out goes smoothly, it would enable the ChatGPT maker to mass-produce its first in-house AI chip and potentially test an alternative to Nvidia's chips later this year. OpenAI's plan to send its design to TSMC this year demonstrates the startup has made speedy progress on its first design, a process that can take other chip designers years longer.
Big tech companies such as Microsoft and Meta have struggled to produce satisfactory chips despite years of effort. The recent market rout triggered by Chinese AI startup DeepSeek has also raised questions about whether fewer chips will be needed in developing powerful models in the future. The chip is being designed by OpenAI's in-house team led by Richard Ho, which had doubled in the past months to 40 people, in collaboration with Broadcom. Ho joined OpenAI more than a year ago from Alphabet's Google where he helped lead the search giant's custom AI chip program. Reuters first reported OpenAI's plans with Broadcom last year. Ho's team is smaller than the large-scale efforts at tech giants such as Google or Amazon. A new chip design for an ambitious, large-scale program could cost $500 million for a single version of a chip, according to industry sources with knowledge of chip design budgets. Those costs could double to build the necessary software and peripherals around it. Generative AI model makers like OpenAI, Google and Meta have demonstrated that ever-larger numbers of chips strung together in data centers make models smarter, and as a result, they have an insatiable demand for the chips. Meta has said it will spend $60 billion on AI infrastructure in the next year and Microsoft has said it will spend $80 billion in 2025. Currently, Nvidia's chips are the most popular and hold a market share of roughly 80%. OpenAI is itself participating in the $500 billion Stargate infrastructure program announced by U.S. President Donald Trump last month. But rising costs and dependence on a single supplier have led major customers such as Microsoft, Meta and now OpenAI to explore in-house or external alternatives to Nvidia's chips. OpenAI's in-house AI chip, while capable of both training and running AI models, will initially be deployed on a limited scale, and primarily for running AI models, the sources said. 
The chip will have a limited role within the company's infrastructure. To build out an effort as comprehensive as Google or Amazon's AI chip program, OpenAI would have to hire hundreds of engineers. TSMC is manufacturing OpenAI's AI chip using its advanced 3-nanometer process technology. The chip features a commonly used systolic array architecture with high-bandwidth memory (HBM) - also used by Nvidia for its chips - and extensive networking capabilities, sources said.
[13]
OpenAI To Mass-Produce Its In-House AI Chips By 2026; Expected To Reach Tape-Out Stage Within This Year
OpenAI is reportedly planning to send its first in-house chip for tape-out to TSMC in the upcoming months in an attempt to reduce its reliance on NVIDIA. Well, it seems like OpenAI is planning to join the likes of Google and Amazon in having an arsenal of its custom-built AI accelerators, and it might not be long before we see the company's AI chips debuting in the market. In a new report by Reuters, it is now claimed that OpenAI is proceeding towards finalizing the design of its project, after which it will be sent out to TSMC to manufacture the initial batch, which we call the "tape-out" stage. This move is said to be part of OpenAI's plan to influence its chip supply, pushing the likes of NVIDIA to negotiate better prices or supply. While details surrounding OpenAI's first in-house chip are uncertain for now, the project is expected to utilize TSMC's 3nm process. Since OpenAI gave up on self-manufacturing a while ago, the venture will likely have partners like Broadcom. However, OpenAI still needs a lot of work since the first tape-out attempt often requires design revisions, which means re-evaluating the whole chip and pushing the release timeline even further. Interestingly, OpenAI's in-house project is said to be led by Richard Ho, a former Google engineer who is a key figure in the company's AI chip ambitions. While the firm doesn't have a large employee base for its custom chip project, it plans to scale up massively if it sees success from its first venture. This chip will likely give OpenAI access to a wider computing portfolio and, at the same time, act as a negotiating tool as well. The custom ASIC race is definitely on for all the mainstream AI giants, since it translates into a more effective price-to-performance ratio, and since these chips are tailored for custom workloads, it ultimately gives companies the ability to squeeze out more performance per dollar than chips from NVIDIA.
And since chip suppliers can't afford to lose customers, they ultimately show ease in the negotiating process, creating a win-win situation for firms like OpenAI.
[14]
OpenAI is reportedly getting closer to launching its in-house chip
OpenAI remains on track to start producing its in-house AI chip next year, according to a report from Reuters. Sources tell the outlet that OpenAI plans to finalize its design over the next few months before sending it to the Taiwan Semiconductor Manufacturing Co (TSMC) for fabrication. By making a chip of its own, OpenAI won't have to use Nvidia's chips as much to train and run AI models. TSMC will produce the chip using the more efficient 3-nanometer technology, with "high-bandwidth memory" and "extensive networking capabilities," according to Reuters. At launch, OpenAI will deploy its in-house chip on a "limited scale" and will mostly use it for running AI models, Reuters reports. The company also reportedly plans to develop future versions of the chip with more advanced processors and capabilities. This follows last year's report from Reuters, which suggested OpenAI is working with Broadcom to develop a custom chip. OpenAI's chip design team is led by former Google TPU engineer Richard Ho and has increased from 20 to 40 people in recent months as efforts ramp up, according to Reuters. Tech giants like OpenAI have poured billions into building out AI infrastructure and buying up chips to power their data-hungry AI models. It doesn't look like the spending will slow down anytime soon, despite the AI startup DeepSeek calling into question whether companies really need to purchase thousands of chips to power their systems.
[15]
OpenAI reportedly finalizing design for in-house AI chip ahead of TSMC fabrication - SiliconANGLE
OpenAI reportedly finalizing design for in-house AI chip ahead of TSMC fabrication OpenAI is reportedly getting closer to producing its own in-house artificial intelligence chip as it seeks to gain leverage over existing manufacturers and continues to expand its data centers. According to sources cited by Reuters, the company expects to finalize the design for the in-house chip within the next few months, with plans for the chip to be sent for fabrication at Taiwan Semiconductor Manufacturing Co. Ltd. this year. The report says OpenAI is aiming to have the chip in mass production by 2026. Exactly what the OpenAI chip design will look like is unknown, but previous reports, including one in October, said that OpenAI was working with both Broadcom Inc. and TSMC on the chip. Earlier reporting has suggested that OpenAI is not looking to replace GPUs such as those provided by Nvidia Corp., but instead to design a specialized chip for inference, the process of applying trained models to make predictions or decisions on new data in real-time applications. Chips that support inference are needed as more tech companies use AI models to undertake more complex tasks. Currently, OpenAI relies heavily on Nvidia's GPUs for training its models, a process that requires immense computational power to refine algorithms using vast datasets. Inference, however, requires chips optimized for speed and energy efficiency rather than raw computational power. While the technical idea behind the chip is solid, supplier politics also seems to be playing a role in its development, with Reuters noting that the chip "is viewed as a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers."
Those motivations aside, the exercise will not be a cheap one for OpenAI, even if it is a company where a $500 million investment -- the estimated cost of designing the chip -- is not much more than a rounding error. OpenAI has raised $17.9 billion to date, including a record-setting round of $6.6 billion in October. It was also reported in late January that OpenAI was seeking to raise a new round of $40 billion on a $340 billion valuation.
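The training-versus-inference distinction the report draws can be made concrete with a toy model: training runs many forward and backward passes that update weights (a throughput-bound workload), while inference is a single forward pass with frozen weights, where latency and energy per query matter more. The following NumPy sketch is purely illustrative of that general distinction; the model, sizes, and learning rate are arbitrary assumptions and have nothing to do with OpenAI's actual hardware or software:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([[1.0], [-2.0], [0.5], [3.0]])
W = rng.normal(size=(4, 1))            # model weights, randomly initialized
X = rng.normal(size=(32, 4))           # training inputs
y = X @ true_w                         # targets from a known linear model

# Training: many forward AND backward passes, each updating W.
# This is the compute-heavy, throughput-bound phase.
for _ in range(500):
    pred = X @ W                       # forward pass
    grad = X.T @ (pred - y) / len(X)   # gradient of mean squared error
    W -= 0.1 * grad                    # weight update (gradient descent)

# Inference: a single forward pass with frozen weights.
# Speed and energy efficiency per query dominate here.
x_new = np.ones((1, 4))
prediction = x_new @ W                 # approaches 1 - 2 + 0.5 + 3 = 2.5
```

An inference-focused chip optimizes only the last two lines' workload, which is why its design targets differ from those of a training accelerator.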
[16]
OpenAI may be trying to wean itself off Nvidia
Like the broader artificial intelligence sector, OpenAI relies on Nvidia's (NVDA+2.83%) costly training chips to build tools like ChatGPT, but that could change -- if OpenAI's in-house chip design somehow pans out. The AI-software maker is close to finalizing its first design and intends to have Taiwan Semiconductor Manufacturing Company (TSM+0.48%) (TSMC) fabricate it, according to Reuters. Citing unnamed sources, the news outlet reported Monday that OpenAI intends to send its design to TSMC "in the next few months" and aims to mass-produce an AI-training chip with the fabricator in 2026. If OpenAI's alternative design works out, the company would be a step closer to weaning itself off Nvidia's coveted training chips, and in-house alternatives could empower OpenAI to negotiate better deals with suppliers, according to Reuters. Quartz reached out to OpenAI, TSMC, and Nvidia for comment on the report, but did not immediately hear back. The initial testing process alone could cost OpenAI tens of millions of dollars -- a small price to pay relative to the $5 billion the company reportedly expected to burn in 2024 alone. Much of OpenAI's expenses go to cloud-computing providers such as Microsoft (MSFT+0.84%). Nvidia's chips power Microsoft's AI cloud offerings, and both tech firms have invested in OpenAI. OpenAI is also working with Oracle (ORCL+1.91%) and Softbank (SFTBY+0.27%) on Stargate, a Trump-backed, Musk-bashed project to spend as much as $500 billion on AI data centers. Early in the day, Nvidia was up by more than 3% -- to around $133 per share. Microsoft, meanwhile, was up 1.15% -- to around $414 per share.
[17]
OpenAI expects to have its first custom AI chip soon
As the industry embraces artificial intelligence, the need for chips ready for AI tasks has increased considerably in recent years. While Nvidia is currently the main supplier of AI chips, OpenAI (the company behind ChatGPT) wants to have its first custom chip ready by the end of the year. As reported by Reuters on Monday, OpenAI has been "pushing ahead" with its plans to design and build its own AI-ready silicon by the end of this year. Sources familiar with the matter say the company is finalizing the design and plans to send it to manufacturing in the next few months. TSMC, which already supplies chips to Apple, is rumored to produce OpenAI's first chip. The report notes that "tape-out," which is the final stage in the process of designing a new chip, costs "tens of millions of dollars" and can take up to six months before production begins. This is a critical stage, as the first chips produced at scale could fail - and this would require redoing the whole process. At first, the OpenAI chip would be used to run AI models in a "limited role," but it is also capable of training AI models and may be used for this purpose in the future. If all goes well, OpenAI engineers already have plans to develop even more powerful chips. If OpenAI succeeds in building its own chip, the company will not only have more control over how it processes data for AI training, but will also reduce its reliance on Nvidia. Apple, for instance, currently uses Amazon chips to pre-train Apple Intelligence models. However, the company is also rumored to be working on its first custom server chip for AI tasks in partnership with Broadcom. Other companies such as Meta and Microsoft have also been spending billions of dollars on AI infrastructure. At the same time, the latest AI model introduced by Chinese startup DeepSeek has shown the world that it's possible to develop powerful AI models with fewer hardware resources.
[18]
Exclusive: OpenAI set to finalize first custom chip design this year
SAN FRANCISCO/NEW YORK, Feb 10 (Reuters) - OpenAI is pushing ahead on its plan to reduce its reliance on Nvidia (NVDA.O) for its chip supply by developing its first generation of in-house artificial-intelligence silicon. The ChatGPT maker is finalizing the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co (2330.TW), sources told Reuters. The process of sending a first design through a chip factory is called "taping out." OpenAI and TSMC declined to comment. The update shows that OpenAI is on track to meet its ambitious goal of mass production at TSMC in 2026. A typical tape-out costs tens of millions of dollars and will take roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. There is no guarantee the silicon will function on the first tape-out and a failure would require the company to diagnose the problem and repeat the tape-out step. Inside OpenAI, the training-focused chip is viewed as a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers, the sources said. After the initial chip, OpenAI's engineers plan to develop increasingly advanced processors with broader capabilities with each new iteration. If the initial tape-out goes smoothly, it would enable the ChatGPT maker to mass-produce its first in-house AI chip and potentially test an alternative to Nvidia's chips later this year. OpenAI's plan to send its design to TSMC this year demonstrates the startup has made speedy progress on its first design, a process that can take other chip designers years longer. Big tech companies such as Microsoft (MSFT.O) and Meta (META.O) have struggled to produce satisfactory chips despite years of effort.
The recent market rout triggered by Chinese AI startup DeepSeek has also raised questions about whether fewer chips will be needed in developing powerful models in the future. The chip is being designed by OpenAI's in-house team led by Richard Ho, which had doubled in the past months to 40 people, in collaboration with Broadcom (AVGO.O). Ho joined OpenAI more than a year ago from Alphabet's (GOOGL.O) Google where he helped lead the search giant's custom AI chip program. Reuters first reported OpenAI's plans with Broadcom last year. Ho's team is smaller than the large-scale efforts at tech giants such as Google or Amazon (AMZN.O). A new chip design for an ambitious, large-scale program could cost $500 million for a single version of a chip, according to industry sources with knowledge of chip design budgets. Those costs could double to build the necessary software and peripherals around it. Generative AI model makers like OpenAI, Google and Meta have demonstrated that ever-larger numbers of chips strung together in data centers make models smarter, and as a result, they have an insatiable demand for the chips. Meta has said it will spend $60 billion on AI infrastructure in the next year and Microsoft has said it will spend $80 billion in 2025. Currently, Nvidia's chips are the most popular and hold a market share of roughly 80%. OpenAI is itself participating in the $500 billion Stargate infrastructure program announced by U.S. President Donald Trump last month. But rising costs and dependence on a single supplier have led major customers such as Microsoft, Meta and now OpenAI to explore in-house or external alternatives to Nvidia's chips. OpenAI's in-house AI chip, while capable of both training and running AI models, will initially be deployed on a limited scale, and primarily for running AI models, the sources said. The chip will have a limited role within the company's infrastructure.
To build out an effort as comprehensive as Google or Amazon's AI chip program, OpenAI would have to hire hundreds of engineers. TSMC is manufacturing OpenAI's AI chip using its advanced 3-nanometer process technology. The chip features a commonly used systolic array architecture with high-bandwidth memory (HBM) - also used by Nvidia for its chips - and extensive networking capabilities, sources said. Reporting by Anna Tong and Max A. Cherney in San Francisco and Krystal Hu in New York; Editing by Kenneth Li, Sam Holmes and Matthew Lewis
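The "systolic array architecture" the Reuters report mentions refers to a grid of multiply-accumulate cells through which operands flow in lockstep, with each cell holding a partial sum of the output. This NumPy sketch simulates that dataflow for a matrix multiply - it illustrates the general technique only, not OpenAI's or Nvidia's actual designs, and real hardware pipelines the steps rather than looping:

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing A (m x k) times B (k x n).

    Each grid cell (i, j) holds a running partial sum of output element (i, j).
    At step t, operand column A[:, t] streams in from the left and row B[t, :]
    from the top, and every cell performs one multiply-accumulate in parallel.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    acc = np.zeros((m, n))                 # one accumulator per cell in the grid
    for t in range(k):                     # one wavefront of operands per cycle
        acc += np.outer(A[:, t], B[t, :])  # all m*n cells do a MAC at once
    return acc

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

The appeal for AI accelerators is that each operand is fetched once and reused across a whole row or column of cells, keeping the arithmetic units fed without repeated memory traffic - which is also why such arrays are paired with high-bandwidth memory.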
[19]
OpenAI Might Be Planning to Manufacture Its First Native AI Chipsets
A recent trademark filing by OpenAI hinted at chipset manufacturing OpenAI is reportedly planning to manufacture its first custom artificial intelligence (AI) chipset this year. As per the report, the San Francisco-based AI firm has begun the design process internally and is on track to complete the design of the processor in the next few months. The company's main reasons for fabricating custom AI chipsets are said to be reducing its reliance on Nvidia and strengthening its negotiating position with other chip suppliers. Notably, a recent trademark filing by OpenAI revealed that the company is planning to manufacture a wide range of hardware, including chipsets. According to a Reuters report, OpenAI is currently finalising the design for the in-house chipset and is expected to finish the process in the next few months. Citing sources familiar with the matter, the publication said the AI firm will then reportedly tape out the chipset (send the first design through a chip factory) at Taiwan Semiconductor Manufacturing Company (TSMC), which will handle the fabrication process for OpenAI. The proposed chipset is said to be built on a 3-nanometer process technology, featuring a systolic array architecture with high-bandwidth memory (HBM) and extensive networking capabilities. Notably, HBM-based designs are also used in Nvidia's chipsets. OpenAI reportedly believes that building its own chipsets will give it a strategic advantage when negotiating deals with chip suppliers, and will also reduce its reliance on Nvidia, whose chips it has been using extensively. The publication stated that the AI firm plans to develop "increasingly advanced processors with broader capabilities" with future iterations of the chipset. Citing the sources, Reuters claimed that the chipset is being designed by the AI firm's in-house team led by Richard Ho, the head of hardware at OpenAI.
Interestingly, Ho has previously worked at Lightmatter and Google and specialises in semiconductor engineering. The team under Ho has reportedly doubled in recent months and now numbers 40 employees. Notably, the report claims that OpenAI's first chipset will initially be used on a limited scale, with its primary function being to run some of the company's AI models. It is said to have a limited role in the company's infrastructure for now, though that could grow in the future. Ultimately, the AI firm intends to use the chips for both inference and training of AI models.
[20]
Sam Altman's OpenAI Set To Finalize In-House AI Chip, Reducing Dependence On Nvidia: Report - Microsoft (NASDAQ:MSFT), Meta Platforms (NASDAQ:META)
Sam Altman-led OpenAI is almost ready to finalize the design for its first proprietary artificial intelligence (AI) chip. This development could reduce the company's reliance on Nvidia NVDA and enhance its bargaining power with other chip suppliers. What Happened: OpenAI is making headway in its endeavor to create its first in-house AI silicon. The company is expected to submit its design to Taiwan Semiconductor Manufacturing Co TSM for fabrication in the upcoming months, a step known as "taping out," as per a Reuters report released today. OpenAI is on track to achieve its ambitious goal of mass production at TSMC by 2026. However, the process is costly and time-consuming, with no guarantee of success on the first attempt. In the event of a failure, the company would have to identify the problem and tape out the chip design again. A large-scale program like this may cost $500 million for a single version of the chip, and those costs could potentially double once the required software and peripherals are factored in. The in-house AI chip, developed by a team at OpenAI led by Richard Ho, is considered a strategic asset within the company. Despite the hurdles encountered by tech behemoths like Microsoft Corporation MSFT and Meta Platforms Inc. META in producing satisfactory chips, OpenAI's rapid progress on its first design underscores the startup's potential in the chip design sector. The engineers at OpenAI aim to develop increasingly advanced processors with each new version. If the initial tape-out proves successful, it could enable the company to mass-produce its first proprietary AI chip and potentially test an alternative to Nvidia's chips later this year. Why It Matters: Nvidia's chips currently dominate the market, holding nearly 80% share, allowing it to command industry pricing.
However, reliance on a single supplier and escalating costs have prompted major companies like Microsoft, Meta, and OpenAI to seek in-house or external alternatives to Nvidia's chips. Also, DeepSeek's rise as an AI competitor, leveraging cost-effective, lower-end chips has raised questions about the tech sector's dependence on Nvidia products. OpenAI's move towards producing its own AI silicon is a significant step in reducing its dependency on Nvidia. This development comes amid shifts in semiconductor dynamics and could potentially disrupt the chip supply chain. OpenAI is also involved in the $500 billion Stargate infrastructure initiative, backed by U.S. President Donald Trump. Reports suggest that the ChatGPT-maker is searching across the U.S. for locations to establish a network of massive data centers to support its artificial intelligence technology. In addition to its flagship site in Texas, the company is exploring options in 16 states to advance the Stargate project. Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
[21]
ChatGPT maker OpenAI is taking bold steps to send stern message to Nvidia, DeepSeek
Inside OpenAI, the training-focused chip is viewed as a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers, the sources said.

OpenAI is pushing ahead on its plan to reduce its reliance on Nvidia for its chip supply by developing its first generation of in-house artificial-intelligence silicon, as per a report. The ChatGPT maker is finalizing the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co, sources told Reuters. The process of sending a first design through a chip factory is called "taping out." OpenAI and TSMC declined to comment. The update shows that OpenAI is on track to meet its ambitious goal of mass production at TSMC in 2026. A typical tape-out costs tens of millions of dollars and will take roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. There is no guarantee the silicon will function on the first tape-out and a failure would require the company to diagnose the problem and repeat the tape-out step, Reuters reported. Inside OpenAI, the training-focused chip is viewed as a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers, the sources said. After the initial chip, OpenAI's engineers plan to develop increasingly advanced processors with broader capabilities with each new iteration. If the initial tape-out goes smoothly, it would enable the ChatGPT maker to mass-produce its first in-house AI chip and potentially test an alternative to Nvidia's chips later this year. OpenAI's plan to send its design to TSMC this year demonstrates the startup has made speedy progress on its first design, a process that can take other chip designers years longer. Big tech companies such as Microsoft and Meta have struggled to produce satisfactory chips despite years of effort.
The recent market rout triggered by Chinese AI startup DeepSeek has also raised questions about whether fewer chips will be needed in developing powerful models in the future. The chip is being designed by OpenAI's in-house team led by Richard Ho, which had doubled in the past months to 40 people, in collaboration with Broadcom. Ho joined OpenAI more than a year ago from Alphabet's Google where he helped lead the search giant's custom AI chip program. Reuters first reported OpenAI's plans with Broadcom last year. Ho's team is smaller than the large-scale efforts at tech giants such as Google or Amazon. A new chip design for an ambitious, large-scale program could cost $500 million for a single version of a chip, according to industry sources with knowledge of chip design budgets. Those costs could double to build the necessary software and peripherals around it. Generative AI model makers like OpenAI, Google and Meta have demonstrated that ever-larger numbers of chips strung together in data centers make models smarter, and as a result, they have an insatiable demand for the chips. Meta has said it will spend $60 billion on AI infrastructure in the next year and Microsoft has said it will spend $80 billion in 2025. Currently, Nvidia's chips are the most popular and hold a market share of roughly 80%. OpenAI is itself participating in the $500 billion Stargate infrastructure program announced by U.S. President Donald Trump last month. But rising costs and dependence on a single supplier have led major customers such as Microsoft, Meta and now OpenAI to explore in-house or external alternatives to Nvidia's chips. OpenAI's in-house AI chip, while capable of both training and running AI models, will initially be deployed on a limited scale, and primarily for running AI models, the sources said. The chip will have a limited role within the company's infrastructure. 
To build out an effort as comprehensive as Google or Amazon's AI chip program, OpenAI would have to hire hundreds of engineers. TSMC is manufacturing OpenAI's AI chip using its advanced 3-nanometer process technology. The chip features a commonly used systolic array architecture with high-bandwidth memory (HBM) - also used by Nvidia for its chips - and extensive networking capabilities, sources said.

Q1. Which are generative AI model makers?
A1. Generative AI model makers include OpenAI, Google and Meta.

Q2. What is OpenAI's plan?
A2. OpenAI's in-house AI chip, while capable of both training and running AI models, will initially be deployed on a limited scale, and primarily for running AI models, the sources said.
OpenAI is finalizing the design of its first in-house AI chip, aiming to reduce reliance on Nvidia. The chip, set for TSMC production using 3nm technology, is expected to enter mass production by 2026.
OpenAI, the company behind ChatGPT, is on the verge of finalizing its first custom AI chip design. This move aims to reduce the company's dependence on Nvidia, the current dominant player in the AI chip market. The project, which could cost OpenAI upwards of $500 million, is expected to reach a crucial milestone in the coming months [1][2].
The chip is being developed using TSMC's advanced 3-nanometer process technology and will feature a systolic array architecture, high-bandwidth memory (HBM), and extensive networking capabilities.
OpenAI plans to send the design to TSMC for fabrication soon, with mass production targeted for 2026 [2][4].
The project is led by Richard Ho, a former Google executive who played a key role in developing Google's custom AI processors. OpenAI's chip development team has grown to about 40 engineers, though this is still relatively small compared to teams at companies like Amazon or Google [1][5].
OpenAI is collaborating with Broadcom on this project, with OpenAI developing the key intellectual property while Broadcom handles the assembly of the final design [3].
This initiative is part of a broader trend among tech giants to develop their own AI chips, with companies such as Google, Amazon, Microsoft, and Meta all pursuing custom silicon to cut costs and reduce supplier dependence.
The move comes amid rising costs and concerns about reliance on Nvidia, which currently controls about 80% of the AI chip market [5]. Major tech companies are allocating significant resources to AI infrastructure: Meta has said it will spend $60 billion over the next year, and Microsoft $80 billion in 2025.
While OpenAI's first chip will have a limited role within the company's infrastructure, it represents a significant step towards technological independence. The company plans to improve its processors over multiple iterations, potentially expanding its chip development program in the future [4][5].
This development is part of the larger Stargate Project, a $500 billion initiative involving OpenAI, Oracle, SoftBank, Nvidia, and Microsoft to build AI infrastructure in the US [5].
© 2025 TheOutpost.AI All rights reserved