Ilya Sutskever’s Safe Superintelligence company announced

About a month ago, Ilya Sutskever announced his departure from OpenAI. Considering what had happened in November when OpenAI CEO Sam Altman was fired and then promptly rehired following all the outrage, this wasn’t a surprise.
Sutskever was a cofounder and chief scientist at OpenAI. He was also the man who was supposed to make ChatGPT and other AI products safe for us, and was instrumental in both Altman’s firing and the CEO’s return. That’s what the public was told, at least.


Sutskever chose to depart OpenAI at a time when the newest ChatGPT developments were turning lots of heads. I’m talking about the GPT-4o model OpenAI unveiled a day before Google’s big I/O 2024 event, which was focused almost entirely on AI.
I said at the time I couldn’t wait to see what Sutskever would be working on next, as he teased “a project that is very personally meaningful to me about which I will share details in due time.” Fast-forward to mid-June, and we now have the name and purpose of said project.
The new AI company is called Safe Superintelligence Inc. (SSI), and we probably won’t hear any specifics about its products anytime soon.


Safe Superintelligence has two other cofounders: former Apple AI lead Daniel Gross and former OpenAI engineer Daniel Levy. Sutskever announced the new venture on X, where he outlined the purpose of Safe Superintelligence. Here’s part of the announcement:
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
That sounds great, at least in theory. A company that develops safe superintelligence is the dream we, humanity, didn’t know we needed until recently. The announcement itself opens like this:
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…
We want superintelligence to be safe. We wouldn’t want AI to take over the world and eventually destroy humanity. That’s the worry out there, and it’s why it was so important that Sutskever was working on ensuring the safety of ChatGPT. It’s also why the massive changes at OpenAI’s superalignment team were so disturbing.


I can’t help but wonder whether what Sutskever wants to deliver is even possible. In an interview with Bloomberg, Sutskever said the company won’t launch any products anytime soon. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” he said.
Sutskever didn’t reveal the names of his backers or how much money he has raised for the Safe Superintelligence startup. Those backers will want some sort of return, no matter how aligned they might be with Sutskever’s interests, and a return is harder to deliver without a commercial product.


Safe superintelligence, the product, should be artificial intelligence that goes far beyond what humans can do. The expected next step in AI is AGI (artificial general intelligence), or AI that can reason as well as humans, but that’s not necessarily the path we’ll take. We could reach superintelligence before AGI.
Safe Superintelligence will not be the only player in the industry trying to achieve superintelligence. It’s very likely that many other tech giants and startups, including OpenAI, are also pursuing that goal.


Also, reputation aside, Sutskever doesn’t have a monopoly on AI safety. It’s not like OpenAI, Google, Microsoft, Meta, and Apple want to develop unsafe AI. But all these companies will be able to test their AI innovations via commercial products, whereas Safe Superintelligence will not launch anything anytime soon.
I’ll also say that Sutskever’s work on safe AI doesn’t guarantee that his AI or another AI won’t go rogue. Anyone with the right resources can train AI models. The theoretical risk remains that someone will reach AGI or superintelligence without properly aligning AI with humanity’s interests.
Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2008. When he’s not writing about the most recent tech news for BGR, he brings his entertainment expertise to Marvel’s Cinematic Universe and other blockbuster franchises.
Outside of work, you’ll catch him streaming almost every new movie and TV show release as soon as it’s available.