I think the average person simply doesn't care about their privacy.
In some of the music communities I'm in, the content creators are already telling their userbase to go follow them on Threads. They're all talking about some kind of beef between Elon and Mark and the possibility of a boxing match... Mark was right to call the people he's leeching off of fucking idiots.
I really think this thread is a great example of why the average person doesn't care that much.
The whole thread is full of comments like "the issues caused by giving away all your data are too abstract, too far away, or too difficult to understand". This is true by the way, I completely agree.
But I haven't seen a single comment trying to explain those possible issues in an easily understandable way. The average person (or, at least me) reading threads like this won't learn anything new. Give me a practical issue that I might face, and if I agree that it's an issue, I'll focus more on avoiding that issue.
In other words, an example:
Let's say I'm a person using Lemmy/Mastodon, only using privacy-focused search engines, etc.
If I now switched to using Facebook/Threads, started using Chrome as my browser, and so on with the usual mainstream tracking stuff, what problems could this cause for me in the future?
PS. I do agree with the notion of "minimize the data you give away", which is one reason I'm here, but I really don't have an answer for these questions. I'm like "I understand the point of privacy, but can't explain the reasons".
This isn't a plug, but it is a link to an article I wrote for exactly this reason. I tried to succinctly explain why privacy matters with real-world examples and precedent.
If an algorithm knows exactly who you are, then it knows how you think, and it knows what sort of content will manipulate you politically. And right-wing political content is profitable. It's called the alt-right pipeline. For most people, there's some argument out there that would manage to radicalise them toward just about any position you can name. Through correlative learning, an algorithm will look at how people like you changed their views, and it'll send you down the same path. It's easy.
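To make that "people like you" logic concrete, here's a toy sketch. It is not any real platform's system; the users, topics, and the simple set-overlap similarity are all invented for illustration. The point is only that the algorithm never has to understand the content, it just has to notice what similar users engaged with next.

```python
# Toy "people like you" recommender. Illustrative only; all names and
# topics are made up, and no real platform works exactly this way.
from collections import Counter
import math

# Each user's engagement history, in order.
histories = {
    "you":    ["fitness", "self_improvement", "anti_feminism"],
    "user_a": ["fitness", "self_improvement", "anti_feminism", "great_replacement"],
    "user_b": ["fitness", "self_improvement", "anti_feminism", "great_replacement"],
    "user_c": ["cooking", "gardening", "diy"],
}

def similarity(a, b):
    """Cosine-style similarity between two users' engagement sets."""
    sa, sb = set(a), set(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / math.sqrt(len(sa) * len(sb))

def recommend(user, k=1):
    """Suggest whatever the most similar users engaged with that you haven't."""
    seen = set(histories[user])
    votes = Counter()
    for other, hist in histories.items():
        if other == user:
            continue
        weight = similarity(histories[user], hist)
        for item in hist:
            if item not in seen:
                votes[item] += weight
    return [item for item, _ in votes.most_common(k)]

print(recommend("you"))  # -> ['great_replacement']: the path similar users took
```

No one designed the "pipeline" explicitly in this sketch; it falls out of following whatever kept similar users engaged.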
Cambridge Analytica. Not only did they influence the elections and the general political attitude of the Philippines, they affected US elections as well. I think there was a genocide that was fuelled by targeted campaigns too, not sure where it happened, though.
Don't forget that Facebook knew their programs increased teen suicide rates but still stayed the course, because changing that negative content would lower revenue.
Don't you feel like awareness of it can be the number one thing to protect you from the manipulation that is rampant? I look at everything and say, "that is trying to change my mind." Unfortunately, with that comes this cynicism that I'm being sold to all the time and whatnot. But if I happen to hop on a browser that doesn't have AdBlock, I don't walk away having spent money on snake oil, nor do I go sign up for my local right-wing political action committee, because I've been made aware consistently that everything is aimed at getting me to give something.
And let's be real, we all remember the constant references to the Reddit hivemind. If we're saying there wasn't some sort of external influence that landed everyone on the same wavelength, that feels naive. Or I'm a cynic and can't enjoy anything anymore.
No, awareness isn't the number 1 protection, it's the number 3 protection. The number 1 protection is actually having a nuanced understanding of the issues so that when propaganda tries to prey on your misconceptions, it can't find any. And the number 2 protection is avoiding propaganda, because anything starts to sound persuasive if it's repeated enough. Awareness is important, but it's just not as effective as those two other protections. It has too many weaknesses.
I think that's pretty well put and I'll agree with it.
In spite of everything telling me not to, I still pop over to Reddit because I think there is still value there, even if I have to wade through the bullshit and the bots and whatever. A couple of subs aren't infested with shit yet. But for sure it's a risk assessment, and I think I've got it, but probably I'm not as smart as I think.
I've always felt like data gathering is kind of like lobbying. It isn't directed at you personally. It is used to shift the way people think and their opinions on topics.
A company / non-profit / movement / whatever lobbying towards a goal might be buying lunches or putting on seminars and talking about their point with a selected group of people who have a say in a topic. Or the group might not have a direct say, but they're in the vicinity of the topic, or perhaps they're a group the company feels doesn't know what the fuck they're talking about, and that needs to change.
These efforts are not directed toward you but toward a group of people whom you most likely have nothing to do with. This group has the power to change something. Whether for good or for bad depends on who is doing the lobbying, for what purpose, and how you think about the topic.
Data gathering is similar. The data being gathered is not identifiable to you (or it can be, but that's not what I'm talking about); it gets clumped together with a buuuuunch of other people's data. This bunch might be people from country x, or Christians, or people who like McDonald's, or who are against gun rights, or pro-abortion, or people who think that companies should not be pushing climate change responsibility onto the consumer. This clump of people is the same bunch that the lobbyists are targeting. But they do not have direct power over a subject, in general. Point being that even if most of the people have no power over a topic, some of them might (they might hold power over a person or company deciding whether to do more against climate change). And even if they do not, they will converse about the topic, and this will shift the general consensus around it.
And this bunch of people can be very accurately targeted. People in their 20s-30s, who graduated (or soon will) from a university, are most likely to go work in high-tech companies or in government, and have people around them (family, friends) who are against gun rights but still own guns and hunt? Ezpz. Or perhaps they own a car and drive a lot, and have relatives far enough away that a car is a necessity, but have shifted their thinking toward being more against cars? Np.
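To show how trivial that selection step is once the data exists, here's a toy sketch. Every field, inferred attribute, and person in it is invented for illustration; real ad platforms expose this kind of filtering through audience-building tools rather than raw code, but the idea is the same.

```python
# Toy audience segmentation. Illustrative only; all attributes here are
# hypothetical inferences from collected data, not a real platform's schema.
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    age: int
    education: str            # e.g. inferred from pages followed, check-ins
    likely_employer: str      # e.g. inferred from location history, connections
    network_gun_owners: bool  # e.g. inferred from friends' group memberships
    network_anti_gun: bool

profiles = [
    Profile("u1", 27, "university", "high_tech", True, True),
    Profile("u2", 24, "university", "government", True, True),
    Profile("u3", 45, "high_school", "retail", False, False),
]

def segment(profiles):
    """Select the cohort described above: young graduates headed for tech or
    government, surrounded by gun owners who are against gun rights."""
    return [
        p.user_id for p in profiles
        if 20 <= p.age <= 35
        and p.education == "university"
        and p.likely_employer in ("high_tech", "government")
        and p.network_gun_owners and p.network_anti_gun
    ]

print(segment(profiles))  # -> ['u1', 'u2']: a ready-made audience for targeted content
```

None of those filters needs your name; the cohort is the product.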
The problem is that this can't easily be used against you in particular. But it can be used against a group of people that you are a part of. It is used to shift the way we think as a community. It is used to push ads and news articles (or just the headlines, because glancing at them also works) to you, comments on Twitter, posts on Facebook, and to change the search results that you might see. Kind of like ads in general: ads work really well, even though lots of (most?) people would say that ads don't make them buy a product and only annoy them. Advertisers aren't dumb, they know exactly what people think and how they function, and ads work.
And again, to reiterate: it has nothing to do with you. You are a blip. But you are part of a larger community, and in order to shift that community toward something, its little bits and pieces need to be moved toward that target. Not all of them need to move. Just enough.
This got a bit rambly I think but anyhooo it's kinda how I see it.
So, if I understand correctly, and please correct me if I'm wrong, the simplified version of this is: data collection allows massive corporations to target Communities of Interest (CoI) and manipulate them by collectively altering their digital perception via a barrage of targeted advertisements, promoted articles, and suggested social media posts?
And all of this leads to an eventual shift in the opinions and desires of said CoIs, leading to what the company would deem desirable behavior, be it growing apathetic to digital privacy, buying their product or growing more engaged with their platform?
Bad actors may use it to manipulate you or cause problems in other aspects of your life (HELLO data breaches!).
This is a hypothetical. Think about all of the normal stuff people could see about you on Facebook. Would you also want those strangers to have your other personal information and possibly passwords? How about your boss? School? Insurance agency? Bank? Someone who works at one of those places, and still remembers that information after they clock out?
Let's say there isn't a data breach. They also use that information to try to get you to click ads, even if those ads might be unsafe to click.
Please answer something for me. What is it that makes you think that Zuckerberg would act in your best interests? What would stop him from turning around, selling data again? How can you know that he will keep that data in trustworthy spaces, and away from bad actors?
I wouldn't even give my own parents access to that level of information unless I absolutely had to. I'm certainly not happier about a stranger having access to it.
I remember back when Snowden first leaked all of that info about government tracking. One show, either The Daily Show or The Colbert Report, did an episode about it. Almost no one they talked to cared until they mentioned the government can also track your dick pics.