OpenAI co-founder raises $1 billion for new safety-focused AI startup

Safe Superintelligence aims to develop safe artificial intelligence that surpasses human capabilities

OpenAI co-founder and former chief scientist Ilya Sutskever has raised $1 billion for his new startup Safe Superintelligence (SSI).

According to Reuters, Sutskever, who left OpenAI in May, aims to develop safe artificial intelligence (AI) systems that significantly exceed human capabilities.

SSI, in a post on X (formerly Twitter), explained, “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus.”

It further added, “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

The computer scientist, who announced the new venture earlier this year after leaving OpenAI, wrote on X, “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”

The company’s investors include Andreessen Horowitz, Sequoia Capital, NFDG, DST Global, and SV Angel.

SSI will use the $1 billion in funding to buy computing power and hire top experts in the field. The company plans to build a small but highly trusted team of engineers and researchers split between Palo Alto, California, and Tel Aviv, Israel.