
OpenAI has decided to reduce its dependency on Nvidia for AI chips with the help of Google.
As reported by Reuters, a source close to the development has shared that the ChatGPT maker has started using Google's artificial intelligence chips to power its chatbot and other products.
This move by the Microsoft-backed AI startup is intended to diversify its chip suppliers beyond Nvidia.
OpenAI has been known as one of the largest purchasers of Nvidia's graphics processing units (GPUs), which it uses for both model training and inference computing. Inference is the process in which an AI model applies its trained knowledge to new information to make predictions or decisions.
Earlier this month, the outlet reported that OpenAI intended to add Google's Cloud service to meet its increasing demand for computing capacity.
Google appears to have given the green light to the OpenAI deal as it expands external access to its proprietary tensor processing units (TPUs), which were previously reserved mainly for internal operations.
This shift has attracted clients such as Apple, along with startups and ChatGPT rival Anthropic, which was founded by former OpenAI executives.
OpenAI's decision to lease Google's TPUs marks its first significant use of non-Nvidia chips and reflects a move away from depending entirely on Microsoft's data centres.