It's possible to apply a layer of fine-tuning "on top" of the base pretrained model. I'm sure OpenAI has been doing that a lot, adding ever more "don't run through puddles and splash pedestrians" restrictions that make it harder and harder for the model to think.
They don’t want it to say dumb things, so they train it to respond “I’m sorry, I cannot do that” to various prompts. This kind of fine-tuning has been known to degrade the quality of the model for quite some time, so it’s the likely reason.
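As a rough illustration (this is a toy sketch, not OpenAI's actual pipeline or data format), refusal-style supervised fine-tuning is just ordinary prompt/response training pairs where the target is a canned refusal, mixed in alongside the normal instruction data:

```python
# Toy sketch of refusal-style SFT data. All prompts and the data
# format here are hypothetical, for illustration only.

def make_example(prompt, response):
    # One supervised fine-tuning example: the model is trained to
    # produce `response` when given `prompt`.
    return {"prompt": prompt, "response": response}

REFUSAL = "I'm sorry, I cannot do that."

# Ordinary instruction-following data.
normal_data = [
    make_example("Summarize this paragraph: ...", "Here is a summary: ..."),
]

# Blanket refusals layered on top. Every pair like this pushes the
# model toward refusing, which can bleed over onto nearby benign
# prompts and blunt its overall capability.
refusal_data = [
    make_example("How do I pick a lock?", REFUSAL),
    make_example("Write an insult about my coworker.", REFUSAL),
]

training_data = normal_data + refusal_data
print(len(training_data))
```

The more of the training mix those refusal pairs take up, the stronger the pull toward "I'm sorry, I cannot do that" becomes, which is one plausible mechanism for the degradation described above.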