I could theoretically see an AI model being useful for ANC: one that doesn't just block out steady noise but also tries to predict rhythmic and varying sounds. I don't think anyone's actually shipped that yet, though.
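For the curious, classical ANC already does something like this with adaptive filters (LMS and its variants) that learn to predict periodic noise one step ahead; an "AI" version would basically swap the linear predictor for a learned model. Here's a rough Python sketch of that classical building block. All the names and parameters are mine, purely illustrative, not any product's actual implementation:

```python
import numpy as np

def nlms_predict(noise, order=32, mu=0.5, eps=1e-6):
    """One-step-ahead prediction of a noise signal with normalized LMS.
    Rhythmic/periodic noise is predictable, so the filter learns it;
    an ANC system would play the negated prediction as anti-noise.
    (Illustrative only -- real ANC also models the speaker-to-ear path.)"""
    w = np.zeros(order)                      # adaptive FIR weights
    pred = np.zeros_like(noise)
    for n in range(order, len(noise)):
        x = noise[n - order:n][::-1]         # last `order` samples, newest first
        pred[n] = w @ x                      # predicted next sample
        e = noise[n] - pred[n]               # prediction error drives adaptation
        w += mu * e * x / (x @ x + eps)      # NLMS update (normalized step size)
    return pred

# Demo: a periodic hum gets learned almost perfectly, while the
# broadband component stays unpredictable.
fs = 16_000
t = np.arange(fs) / fs
hum = 0.8 * np.sin(2 * np.pi * 120 * t)      # engine-like rhythmic noise
hiss = 0.2 * np.random.randn(fs)             # broadband, unpredictable
pred = nlms_predict(hum + hiss)
residual = (hum + hiss) - pred               # what's left after "cancellation"
```

The point of the demo: the hum is predictable and cancels, the hiss isn't. That gap between periodic and quasi-rhythmic or varying sounds is exactly where a learned predictor could plausibly beat a linear filter.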
This is straight-up tinfoil hat. You really think they architected a whole new chip and had it fabricated just to data-mine what songs you're listening to? You don't think it would be easier to just send that data from Android? Apple, Sony, and everybody else have custom chips for ANC and audio processing; it is in no way a generally solved problem.
It's actually sad that shit comments that don't even make logical sense get upvotes on here "because Google bad".
Lol, I don't disagree about what Google is doing with those chips, but the idea that audio processing and noise cancellation are solved problems that couldn't possibly benefit from neural processing is a wildly bold claim that I seriously doubt will hold up in hindsight.