Safe Superintelligence (SSI), a new AI startup co-founded by Ilya Sutskever, has raised $1 billion to develop advanced “safe” AI systems. The investment, led by top venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, underscores continued faith in Sutskever’s vision despite growing doubts about the profitability of large-scale AI investments.
Launched just three months ago, SSI is valued at $5 billion, according to sources, despite having no publicly known products. The company, which currently has 10 employees, plans to expand its research teams in Palo Alto, California, and Tel Aviv, Israel. The new funds will go toward acquiring computing power and attracting top talent.
SSI’s focus on “AI safety” stems from the belief that superintelligent AI—technology that could surpass human intelligence—may soon become a reality and could pose existential risks to humanity. Sutskever, who played a key role in OpenAI’s success, co-founded SSI after leaving that company; his departure followed dissatisfaction with the resources allocated to his superalignment research and his involvement in the temporary removal of OpenAI CEO Sam Altman.
Much like Anthropic, another AI firm founded by former OpenAI employees, SSI plans to spend several years on research and development before bringing a product to market. The company’s emphasis on AI safety aligns with ongoing debates in the tech industry, particularly in light of proposed regulations such as California’s SB 1047, which has sparked considerable controversy.