GPT-4-level AI is almost 2 years old by now. When GPT-next comes out at one OOM above, the entire field will shift again as it did with GPT-4, and nobody will care that GPT-4-par models are almost free.
There’s nothing to suggest LLMs haven’t already hit diminishing returns.
Just making even bigger models doesn’t yield a near-linear jump in quality.
GPT-5 might be massive, expensive, only a bit better, and prone to the same problems as previous models, barring a radically different underlying architecture (and it’s not clear one exists).