It happens. I once asked ChatGPT what rpm a spinning wheel needs to produce 1 g. It worked through a couple of calculations and in the end answered that to get 1 g, the diameter should be 7 times the radius. I answered back "this does not make sense, a diameter is by definition 2 times the radius", it apologized and redid the calculation correctly :)
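For reference, the actual calculation is short: centripetal acceleration is a = ω²r, so for a = g you need ω = √(g/r). A minimal Python sketch (the 2 m radius is just an assumed example value, not from my original question):

```python
import math

# Centripetal acceleration: a = omega^2 * r, so for a = g the required
# angular velocity is omega = sqrt(g / r).
g = 9.81          # m/s^2
radius = 2.0      # m (assumed example radius)

omega = math.sqrt(g / radius)        # rad/s needed for 1 g at the rim
rpm = omega * 60 / (2 * math.pi)     # convert rad/s to revolutions per minute

print(f"{rpm:.1f} rpm")  # ~21.1 rpm for a 2 m radius
```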
This is because LLMs do not inherently understand math. They string together tokens that are statistically likely to go together based on the content they were trained on. They're literally just glorified autocorrect.
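To make that concrete, here's a toy sketch of the idea: a bigram model, nowhere near a real LLM, but the "sample a statistically likely next token" principle is the same.

```python
import random
from collections import defaultdict

# Toy next-token prediction: pick each next word based only on how often
# it followed the previous word in the training text. Real LLMs are
# vastly larger and use learned weights, but the output is still "what
# tends to come next", not "what is true".
training_text = "the wheel spins fast the wheel turns the rim moves fast"

follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

word = "the"
output = [word]
for _ in range(5):
    word = random.choice(follows[word])  # a likely continuation, not a fact
    output.append(word)

print(" ".join(output))  # fluent-looking, but no understanding involved
```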
If you want a tool that can actually do math from natural language input, try WolframAlpha.
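If you want to script it, WolframAlpha also has a Short Answers API that takes a natural-language query and returns a plain-text result. A minimal sketch, assuming you've registered for an AppID on their developer portal (the one below is a placeholder):

```python
import urllib.parse
import urllib.request

# Placeholder AppID; get a real one from developer.wolframalpha.com
APP_ID = "YOUR-APPID-HERE"

def ask_wolfram(query: str) -> str:
    # Short Answers API: natural-language input in, plain text out.
    url = "https://api.wolframalpha.com/v1/result?" + urllib.parse.urlencode(
        {"appid": APP_ID, "i": query}
    )
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Exactly the kind of question an LLM might fumble:
print(ask_wolfram("rpm for 1g of centripetal acceleration at radius 2 meters"))
```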
I paid like $5 for the Android app (now WolframAlpha Classic) maybe 10 years ago and it's been worth every penny. I use it for anything that needs complicated unit conversions.
The Android app is incredible. WolframAlpha has a premium subscription, but I don't get why anyone would pay for it when the app includes all the same features.
This is what people don't understand about ChatGPT. It's not a tool for accuracy; even the company that made it says that. Then idiots come in and say "see, it does math wrong! And it can't get a fact right! Only a moron would say this is the wave of the future!" And don't get me wrong: Google added it into their search engine because of the hype and, lo and behold, the LLM was inaccurate.
What is ChatGPT good for? Creativity and abstraction. An LLM is really good when you need a list of 10 terrible names for a dog food company to get your brain thinking. It's good at helping someone outline a story they want to write or make their email sound more professional. It's useful as an aid to help someone plan a schedule. The best use of ChatGPT is to work with it, not try to get it to do a task without you. An LLM works best with piecemeal feedback and someone knowledgeable in the subject who can vet the answers it's giving.