Ilya Sutskever and Jan Leike gone

Have to write a quick update, as just today two of the main figures I mentioned in November in my post “Who decides the values of the coming superintelligence” left OpenAI.

“Much of all this rests on the shoulders of individual people and their teams, among whom at least OpenAI’s Chief Scientist Ilya Sutskever and Head of Alignment Jan Leike deserve mention.”

Ilya was an obvious leaver after his part in the “coup”, but Leike resigning at the same time suggests something bigger is changing in the company’s policies on AI safety and alignment. Most likely Altman wants to accelerate progress, and Leike saw that as going against his core values (as Sutskever did earlier). It seems Sutskever and Leike are genuinely very worried about AGI development moving too fast, which Altman is driving. Tellingly, the resignations came right as GPT-4o was released.

Btw, now that I’ve had some time to use GPT-4o, it seems like a real improvement. The names mean nothing anymore; it’s in a totally different league than even three-month-old models, and it can’t even be compared to the GPT-4 of a year ago. I think I’ve had conversations at this level with a model maybe twice before, and I was very impressed. There’s a very notable difference in the intelligence level of this model. I’m not sure I’m better at anything in the digital world anymore, but of course I still have these magic writing skills that GPT-4o loses to 100-0 🙂

Well... at least the future looks interesting. What a time to be alive; grateful for this experience already.

Timo Nieminen
https://selko.ai
