Researchers found that ChatGPT's performance varied significantly over time, showing "wild fluctuations" in its ability to solve math problems, answer questions, generate code, and do visual reasoning between March and June 2023. In particular, ChatGPT's accuracy on one math test (identifying prime numbers) dropped drastically, from over 97% in March to just 2.4% in June. ChatGPT also stopped explaining its reasoning for answers over time, making it less transparent. While ChatGPT became "safer" by declining to engage with sensitive questions, the researchers note that providing less rationale limits understanding of how the AI works. The study highlights the need to continuously monitor large language models to catch performance drift over time.
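The kind of continuous monitoring the study calls for can be sketched simply: keep a fixed evaluation set, score each model snapshot against it, and flag when accuracy regresses past a threshold. Below is a minimal Python sketch under that assumption; `query_model` is a hypothetical stand-in for a real LLM API call, and the prompts and threshold are illustrative.

```python
# Minimal sketch of an LLM drift monitor: run a fixed eval set against a
# model snapshot and flag regressions relative to a recorded baseline.

def query_model(prompt: str) -> str:
    # Hypothetical placeholder -- a real monitor would call the model's API.
    canned = {
        "Is 17 a prime number? Answer yes or no.": "yes",
        "Is 21 a prime number? Answer yes or no.": "no",
    }
    return canned.get(prompt, "no")

# Fixed evaluation set: (prompt, expected answer) pairs that never change,
# so accuracy is comparable across model snapshots.
EVAL_SET = [
    ("Is 17 a prime number? Answer yes or no.", "yes"),
    ("Is 21 a prime number? Answer yes or no.", "no"),
]

def accuracy(eval_set) -> float:
    correct = sum(
        1 for prompt, expected in eval_set
        if query_model(prompt).strip().lower() == expected
    )
    return correct / len(eval_set)

def check_drift(baseline: float, threshold: float = 0.05):
    # Flag drift when accuracy drops more than `threshold` below baseline.
    current = accuracy(EVAL_SET)
    drifted = (baseline - current) > threshold
    return current, drifted
```

Running `check_drift` on every new snapshot against the March-era baseline would have surfaced the 97% → 2.4% collapse the study describes long before users noticed.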
I think this might be what stops AI from taking over as much as people fear. If I were a business owner, I wouldn't want to put my trust in a black box when I could pay someone to ensure it works exactly to my specification.
As someone getting an MBA who hates the idea of labor being displaced by AI: if I were an unethical business owner who treated labor as a cost to minimize, I'd use AI to generate content that's "good enough" and use fewer people to make it exactly to my specification.
You know, I wouldn't care about being replaced by a machine, as long as I get UBI. Then I could just do what I like to do and wouldn't need to care whether I actually make money with it.
That's not how UBI is supposed to work. You would certainly have enough time to do what you like, just not the resources. Any money you'd get would only cover the absolute necessities like shelter and food.
I think that's part of what the Hollywood writers' strike is about: AI generating "good enough" scripts, and studios shelling out peanuts for a few writers to finalize them.