Meta has shown that huge amounts of training data can produce great results with a model that's much simpler than what OpenAI uses, and it looks like they're taking a more open approach to LLMs because of that.
Twitter has shitloads of potential training data, but it's Twitter, so the quality isn't great.
Elon is known to be afraid of AGIs turning hostile, which would explain the decision.
I don't think it'll slow AI development down much. New Llama-based models come out every month, each better than the last.
Reddit is a much better source of data, and as long as they don't want to lose their SEO, their data can still be gathered by scraping even after the API changes take effect.