Do we really know that gpt-5-codex is a finetune of gpt-5(-thinking)? The article doesn't clearly say that, right?
I suspect that this is smaller than gpt-5 or at least a quantized version. Similar to what I suspect Opus 4.1 is. That would also explain why it's faster.
OpenAI say:
"Today, we’re releasing GPT‑5-Codex—a version of GPT‑5 further optimized for agentic coding in Codex."
So yeah, simplifying that to a "fine-tune" is likely incorrect. I just added a correction note about that to my article.
Thank you for your work, Simon.
It's annoying to see a link to a Theo video -- same guy who went with Simon to OpenAI's GPT-5 glazefest and had to backpedal when everyone realized what a shill he is.
I know neither of them are journalists -- I'm probably expecting too much -- but Simon should know better.
This seems to me like a very harsh take on Theo’s motivations. I don’t know him beyond what I’ve learned from his videos, but given Occam’s razor I’m inclined to believe him: GPT-5 seemed much better during the private demo than in the public release. There are many possible explanations, but jumping to ‘shill’ (implying deception) seems uncalled for.
While not a journalist, Simon definitely has a background in journalism.
He was one of the original authors of Django, back when it was a “web framework for journalists with deadlines”.
Exactly. That's why I said he should know better. He never should have gone to that event to hype GPT-5 under the guise of "testing" it out.
I did actually consider that quite a bit when I got invited to OpenAI's mysterious recorded launch event (they didn't tell us it was GPT-5 until we got there) - would it damage my credibility as an independent voice in the AI space?
I decided to risk it. Crucially, OpenAI at no point asked for any influence over my content at all, aside from sticking to their embargo (which I've done with other companies before).
Is it possible that OpenAI let you test a private version of GPT-5 that was better than what was released to the public, like the previous commenter claimed?
They changed the model ID we were using multiple times in the two weeks we had access to - so clearly they were still iterating on the model during that time.
They weren't deceptive about that - the new model IDs were clearly communicated - but with hindsight it did mean that those early impressions weren't an exact match for what was finally released.
My biggest miss was that I didn't pay attention to the ChatGPT router while I was previewing the models. I think a lot of the early disappointment in GPT-5 was caused by the router sending people to the weaker model.
For what it's worth, the GPT-5 I'm using today feels as impressive to me as the one I had during the preview. It's great at code and great at search, the two things I care most about.
Literally the only channel I've ever blocked on YouTube.
> "We find that comments by GPT‑5-Codex are less likely to be incorrect or unimportant" -- less unimportant comments in code is definitely an improvement!
This seems to be a misunderstanding. In the original OpenAI article, the "comments" in question are code review comments, not comments in code.
The pelican is not very good
But probably fast
Would be faster if it got on the bike