Meta unveils four MTIA chip generations to power AI inference and reduce Nvidia dependence
Meta announced four generations of its custom MTIA chips on March 11, with deployments scheduled through 2027. The MTIA 300, 400, 450, and 500 are optimized for AI inference workloads, with HBM bandwidth increasing 4.5x across the lineup. Meta has already deployed hundreds of thousands of MTIA chips in production, joining Google, AWS, and Microsoft in building custom silicon to reduce reliance on Nvidia.