On the one hand, this could be filed under "yeah, no shit, we all know stuff in the cloud is forever".
On the other hand, it's something that's easy to forget given the omnipresence of compute in our lives. We become numb to it, and everyone has moments of crisis or weakness where they may let their guard down.
The US needs better privacy and consumer protection laws. But we're always behind Europe, and way behind technology, when it comes to our crappy legal system.
I mean, just look at the way Microsoft is trying to ram "AI" into every interaction with every app right now. As the big players make it more and more non-optional, people are going to have to work really hard not to put anything into, say, Word that they don't want sent back for analysis.
You make an important point; it is definitely being layered into all sorts of apps. Some of it is box-checking bullshit, so that a marketing underling can tell the c-suite "we have implemented AI". But some of it is semi-sophisticated bossware type shit. It's going to get smarter and it's going to be everywhere.
This is my big concern. Right now Gemini is an option you can switch on to replace the existing assistant, which I expect has similar terms. But how long will it be until Google just integrates this with their email, search, and online office suite with no options to disable it? They'll tout it as an improvement and new features.
Microsoft at least has to cater to business customers, so there will be options for systems administrators to opt out for longer. With their government contracts they will have to prove adequate security. I still don't like the AI push, or Microsoft as a whole, but I trust them not to have a data leak, or to sell business data to whoever. They don't have overwhelming financial incentives in advertising or data collection for it, just normal-sized incentives.
On the other hand, Google's biggest revenue stream is advertising, and that works because of the absurd number of non-paying users their free services attract. They have no business or financial incentives whatsoever to not just offer all this data they collect up on a silver platter. No incentives not to train horrible dystopian AI to maximize advertising effectiveness through A/B testing specific market/interest groups on an unimaginable scale.
Google also has a history of collecting more data than they were allowed to, pinning it on a "rogue employee enabling a feature they were told to disable" when they are caught, and then proceeding to use that data anyway for their projects after the news dies down.
I've always wanted to see a true "AI" personal assistant, leveraging tech to make lives easier, but this shit is not the way.
As much as the tech-savvy folks on here can espouse trying to protect your own privacy by doing this or avoiding that, it's just not a reasonable expectation, and the burden to do better should be on the companies collecting data. The vast, vast majority of users won't even be aware of what's happening, and that means it's everyone's problem, or will be, whenever this blows up someday. You can try your best to avoid giving up your data, but none of it matters because everyone else in your life gave it up already. It's all a villainous enterprise, and I do believe it will blow up someday, maybe not even too far in the future.
I think most of that advice is given with good intentions, but it does ultimately feed into the establishment preference for punching down. "Climate change? Paper straws. AI violating your privacy? Nord VPN."
Yes, especially because Gemini is used (now, optionally) in place of Google assistant. You give personal information to Google assistant for convenience, but Gemini would use the information more, most likely in unexpected ways too.
I hope some people with the means set up bot farms that just pump garbage or subversive stuff like you said into these things until they lose all usefulness to the corpos. It does seem like one of the best ways to counter them.
Because it already knows everything personal about you from your google account, chrome browser, search history, emails & files, and even your keyboard. Gemini wants to guess, because it's more exciting that way! 🤩 /s
Heck, these LLMs are really good at summarization. Now they can summarize all your disparate data, including your weird interactions with Gemini (and associated apps), for advertisers' and governments' convenience!
The most likely reason for this is how AI model training works. Depending on the model's complexity, training data size, etc., it can take enormous amounts of time to finish a training run. The initial training probably takes at least 2-4 weeks at Google, but that's just a huge assumption.
After that they probably continue training this base model on newly acquired data (e.g., 1 week of data), which won't take nearly as much time as starting from zero all over again.
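The base-train-then-update pattern described above can be sketched with a toy model. This is only an illustration of the general idea, not anything resembling Google's actual pipeline: a tiny one-feature linear model is trained from scratch with SGD (the slow "base training"), and then a short continued-training run on the saved weights folds in a small batch of new data. All names and numbers here are made up for the demo.

```python
import random

def train(weights, data, epochs, lr=0.1):
    """Plain per-sample SGD on a one-feature linear model y = w*x + b."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y  # prediction error on this sample
            w -= lr * err * x
            b -= lr * err
    return (w, b)

random.seed(0)

# "Base" dataset: noisy samples of y = 2x + 1.
base = [(0.1 * i, 2 * (0.1 * i) + 1 + random.uniform(-0.01, 0.01))
        for i in range(10)]

# Training from scratch: many passes over the data to converge.
base_model = train((0.0, 0.0), base, epochs=200)

# A "new week of data" from the same distribution: warm-starting
# from the base weights means a handful of epochs is enough.
new = [(0.05 + 0.1 * i, 2 * (0.05 + 0.1 * i) + 1) for i in range(10)]
updated = train(base_model, new, epochs=5)
```

The updated model ends up close to the true parameters (w ≈ 2, b ≈ 1) after just a few extra epochs, which is the whole appeal of incremental updates over full retraining.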