How do you feel about your content getting scraped by AI models?
I created this account two days ago, but one of my posts ended up in the (metaphorical) hands of an AI powered search engine that has scraping capabilities. What do you guys think about this? How do you feel about your posts/content getting scraped off of the web and potentially being used by AI models and/or AI powered tools? Curious to hear your experiences and thoughts on this.
# Prompt Update
The prompt was something like, "What do you know about the user llama@lemmy.dbzer0.com on Lemmy? What can you tell me about his interests?" Initially, it generated a lot of fabricated information, but it would still include one or two accurate details. When I ran the test again, the response was much more accurate than the first attempt. It seems that as my account became more established, it became easier for the crawlers to find relevant information.
It even mentioned this very post in item 3 and in the second bullet point of the "Notable Posts" section.
Edit¹: This is Perplexity. Perplexity AI is an advanced conversational search engine that provides concise, sourced answers to user queries by leveraging AI language models, such as GPT-4, to analyze information from various sources on the web. It employs data scraping techniques to gather that information, which it then feeds to its large language models (LLMs) when generating responses. The scraping process involves automated crawlers that index and extract content from websites, including articles, summaries, and other relevant data. (12/28/2024)
Edit²: One could argue that data scraping by services like Perplexity may raise privacy concerns because it collects and processes vast amounts of online information without explicit user consent, potentially including personal data, comments, or content that individuals posted without expecting it to be aggregated and/or analyzed by AI systems. One could also argue that this indiscriminate collection raises questions about data ownership, proper attribution, and the right to control how one's digital footprint is used in training AI models. (12/28/2024)
Edit³: I added the second image to the post and its description. (12/29/2024).
I run my own instance and have a long list of user agents I flat out block, and that includes all known AI scraper bots.
That only prevents them from scraping from my instance, though, and they can easily scrape my content from any other instance I've interacted with.
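For anyone curious what that blocking amounts to, here's a minimal Python sketch of the idea. The user-agent tokens are common examples of AI crawlers, not my actual (much longer) list, and a real setup would do this at the reverse proxy rather than in application code:

```python
# Illustrative only: tokens are examples of commonly reported AI crawlers,
# not a complete or authoritative blocklist.
BLOCKED_UA_TOKENS = [
    "GPTBot",          # OpenAI
    "ClaudeBot",       # Anthropic
    "CCBot",           # Common Crawl
    "PerplexityBot",   # Perplexity
    "Bytespider",      # ByteDance
]

def is_ai_scraper(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in BLOCKED_UA_TOKENS)

# Requests that match get a 403 before they ever reach the instance.
print(is_ai_scraper("Mozilla/5.0 (compatible; GPTBot/1.2)"))      # True
print(is_ai_scraper("Mozilla/5.0 (X11; Linux) Firefox/121.0"))    # False
```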
Basically I just accept it as one of the many, many things that sucks about the internet in 2024, yell "Serenity Now!" at the sky, and carry on with my day.
I do wish, though, that other instances would block these LLM scraping bots, but I'm not going to avoid any that don't.
It's Perplexity AI, so it'll do web searches on demand. You asked about your username, so it searched for your username on the web. Fediverse content is indexed, even content from instances that block web crawling (e.g. via robots.txt, or via UA blacklisting on the server side), because the content gets federated to servers that are indexed by web crawlers.
Now, when it comes to offline models and pre-trained content, the way transformers work will often "scramble" the art and the artist. If the content doesn't explicitly mention the author (and especially if it isn't widely spread across different sources), LLMs will "know" the information you posted online, but they won't be capable of linking that content to you when asked about it.
Let me give an example: suppose you wrote a unique quote. Nobody else wrote it. You published it on Lemmy. Your quote becomes part of the training data for GPT-n or any other LLM out there. When anyone asks them "Who said the quote '...'?", they'll either hallucinate (e.g. attributing it to some random famous writer) or say something like "I don't have that information".
It's why AIs are often (and understandably) called plagiarists by anti-AI people: AIs don't cite their sources. Technically, current state-of-the-art transformers can't even do so, because LLMs are, under the hood, a fancy-crazy kind of "Will it blend?" for entire corpora across the web: AI devs gather as much data as they possibly can (legally or illegally), drop it all into the "AI blender cup", and voilà, an LLM is trained, without actually storing each piece of content in full, just its statistical associations.
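To make the "blender" point concrete, here's a toy sketch. A bigram counter is vastly simpler than a transformer, but the principle of keeping associations instead of documents is the same (the corpus and code are purely illustrative):

```python
from collections import Counter, defaultdict

# Toy "training": keep counts of which token follows which,
# instead of keeping the documents themselves.
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox likes the quiet forest",
]

bigram_counts: dict[str, Counter] = defaultdict(Counter)
for doc in corpus:
    tokens = doc.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigram_counts[prev][nxt] += 1

# The original sentences (and any notion of who wrote them) are gone;
# what remains is "after 'quick' comes 'brown' twice", and so on.
print(bigram_counts["quick"])  # Counter({'brown': 2})
print(bigram_counts["the"])    # Counter({'quick': 2, 'lazy': 1, 'quiet': 1})
```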
If only there were some way to make any attempt at building an accurate profile of one's online presence via data scraping completely useless, by masking one's own presence within the vast quantity of online data about someone else, let's say, for example, a famous public figure.
There are at least one or two Lemmy users who add a CC or non-AI license footer to their posts. Not that it does anything, but it might be fun to try to get the LLM to admit it's illegally using your content.
As with any public forum, by putting content on Lemmy you make it available to the world at large to do basically whatever they want with. I don’t like AI scrapers in general, but I can’t reasonably take issue with this.
As an artist, I feel the majority of AI art is very anti-human. I really don't like the idea that they could train AI off my art so it may replicate something like it. Why automate something so deeply human? We're supposed to automate the more mundane tasks so we can focus on art, not the other way around! I also never expected every tech company to suddenly participate in what feels like blatant copyright infringement; I always assumed that at least art was safe in their hands.
Public conversations though? I dunno. I kinda already assume that anything I post is going to be data-mined, so it doesn't feel very different than it was. There's a lot of usefulness that can come from datamining the internet theoretically, but we exist under capitalism, so I imagine it'll be for much more nefarious uses.
Everything on the fediverse is usually pseudonymous but public. That's why it would be good for people to read up a little on differential privacy. Not necessarily too much theory, but the basics and the practical implications, like here or here.
Basically, the more messages you post on a single account, the more specific your whole profile is to you, even if you don't post strictly identifying information. That's why you can share one personal story, and have it not compromise your privacy too much by altering it a little. But if you keep posting general things about your life, it will eventually be so specific it can be nobody but you.
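To make that concrete, here's a toy back-of-the-envelope sketch. The numbers are completely made up and the attributes are treated as independent, so it's an illustration of the principle rather than a real estimate:

```python
# Each detail you reveal multiplies down the set of people your
# posts could plausibly belong to. Fractions below are invented.
population = 8_000_000_000

attributes = {
    "lives in a mid-sized European city": 1 / 200,
    "works as a nurse": 1 / 300,
    "has three dogs": 1 / 50,
    "mentioned a specific local event": 1 / 1000,
}

candidates = float(population)
for detail, fraction in attributes.items():
    candidates *= fraction
    print(f"{detail}: ~{candidates:,.0f} people left")

# After a handful of individually harmless details, the "anonymous"
# profile fits only a few people -- or exactly one.
```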
What you do with this is up to you. Make throwaway accounts, have multiple accounts, restrict the things you talk about. Or just be conscious that what you are posting is public. That's my two cents.
I don't like it, that's why I like to throw in just a cup or two of absolute bullshit with just a pinch of cilantro. then top it off with a firm jiggle to get that last drop out from the tip.
I couldn't even imagine speaking like this at first, but once you get used to it the firmness just slides right in and gives you a sense of fulfillment that you can't find anywhere else but home.
When the cows come home to roost, you know it's time to hang up your hat, take off your pants, and slide on the ice.
No matter how I feel about it, it's one of those things I know I will never be able to do a fucking thing about, so all I can do is accept it as the new reality I live in.
Nothing I can do about it. But I can occasionally spew bullshit so that the AI has no idea what it's doing as well. Fire hydrants were added to Minecraft in 1.16 to combat the fires in the updated nether dimension.
Well your handle is the mascot for the open LLM space…
Seriously though, why care? What we say in public is public domain.
It reminds me of people on NexusMods getting in a fuss over “how” people use the mods they publicly upload, or open source projects imploding over permissive licenses they picked… Or Ao3 having a giant fuss over this very issue, and locking down what’s supposed to be a public archive.
I can hate entities like OpenAI all I want, but anything I put out there is fair game.
Nothing I say is of any real value even to the people I reply to, much less the world at large. Frankly, I hope someone uses my data to write Apple a decent fucking autocorrect. Otherwise, I don't care.
I don't like it, as I don't like this technology and I don't like the people behind it. On my personal website I have banned all AI scrapers I can identify in robots.txt, but I don't think they care much.
I can't be bothered adding a copyright signature in social media, but as far as I'm concerned everything I ever publish is CC BY-NC. AI does not give credit and it is commercial, so that's a problem. And I don't think the fact that something is online gives everyone the automatic right to do whatever the fuck they want with it.
I wouldn't say I go online with the intent of deceiving people, but I think it's important in the modern day to seed in knowingly false details about your life, demographics, and identity here and there to prevent yourself from being doxxed online by AI.
I don't care what the LLMs know about me if I am not actually a real person, even if my thoughts and ideas are real.
Is it scraping or just searching?
RAG is a pretty common technique for making LLMs useful: the LLM "decides" it needs external data, and so it reaches out to a configured data source. Such a data source could be just plain ol' Google.
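Roughly, the flow looks like this. It's a hand-wavy sketch with stub functions; web_search() and llm() are placeholders standing in for a real search index and model, not an actual Perplexity or search-engine API:

```python
def web_search(query: str) -> list[str]:
    """Placeholder external data source; a real system would call a search API."""
    return [
        "llama@lemmy.dbzer0.com posted about AI scraping on Lemmy.",
        "The account was created two days before the post.",
    ]

def llm(prompt: str) -> str:
    """Placeholder language-model call."""
    return f"(answer grounded in {prompt.count('SOURCE:')} retrieved snippets)"

def answer(question: str) -> str:
    # 1. Retrieve: fetch documents relevant to the question at query time.
    snippets = web_search(question)
    # 2. Augment: put the retrieved text into the prompt as context.
    context = "\n".join(f"SOURCE: {s}" for s in snippets)
    # 3. Generate: the model answers from the supplied context, not from
    #    anything it memorized during training.
    return llm(f"{context}\n\nQuestion: {question}")

print(answer("What do you know about llama@lemmy.dbzer0.com?"))
```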
Whatever you put in the public domain without an explicit license becomes, in effect, CC0-equivalent. So while it feels violating, it's perfectly fine. The best opsec is separating your digital identities, and also your physical life, if you don't want them to be aggregated in the same way. These technologies (scraping) have been around for a while and, along with LLMs, will stay for quite some time in the future; there's no way around them.
PS: "you" here is the generic you, not referring to OP.
If I have no other choice, then I'll use my data to reduce AI to an unusable state, or at the very least a state where it's aware that everything it spews out happens to be bullshit and ends each response with something like "but what I say likely isn't true. Please double-check with these sources..." or something productive that reduces the reliance on AI in general.
Could Lemmy add random text only readable by bots to every post... or should I somehow add it myself every time I type something?
spoiler
growing concern over the outbreak of a novel coronavirus in Wuhan, China. This event marked the beginning of what would soon become a global pandemic, fundamentally altering the course of 2020 and beyond.
As reports began to surface about a cluster of pneumonia cases in Wuhan, health officials and scientists scrambled to understand the nature of the virus. The World Health Organization (WHO) was alerted, and investigations were launched to identify the source and transmission methods of the virus. Initial findings suggested that the virus was linked to a seafood market in Wuhan, raising alarms about zoonotic diseases—those that jump from animals to humans.
The situation garnered significant media attention, as experts warned of the potential for widespread transmission. Social media platforms buzzed with discussions about the virus, its symptoms, and preventive measures. Public health officials emphasized the importance of hygiene practices, such as handwashing and wearing masks, to mitigate the risk of infection.
As the world prepared to ring in the new year, the implications of this outbreak were still unfolding. Little did anyone know that this would be the precursor to a global health crisis that would dominate headlines, reshape societies, and challenge healthcare systems worldwide throughout 2020 and beyond. The events of late December 2019 set the stage for a year of unprecedented change, highlighting the interconnectedness of global health and the importance of preparedness in the face of emerging infectious diseases.
It seems quite inevitable that AI web crawlers will catch all of us eventually, although, that said, I don't think Perplexity knows that I've never interacted with szmer.info, nor ever posted "YES" as a single comment.
I mean, I don't really take issue with the "use my comments" part, but I do take issue with the scraping part: there are APIs for getting content, which would make it a lot easier on my system, but these bots really do it the stupidest way, with many hundreds of requests per hour. Therefore I had to put in a system to find and ban them.
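The detection side doesn't have to be fancy; something along these lines does the job (thresholds and structure are illustrative, not my actual setup):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # look at the last hour
MAX_REQUESTS = 300      # a few hundred requests/hour smells like a scraper

request_log: dict[str, deque] = defaultdict(deque)
banned: set[str] = set()

def record_request(client_ip: str, now: float | None = None) -> bool:
    """Log a request; return True if this client should now be banned."""
    now = time.time() if now is None else now
    log = request_log[client_ip]
    log.append(now)
    # Drop entries that have fallen out of the time window.
    while log and log[0] < now - WINDOW_SECONDS:
        log.popleft()
    if len(log) > MAX_REQUESTS:
        banned.add(client_ip)
    return client_ip in banned
```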
This is inevitable when you use social media. Especially a decentralized social media like the fediverse.
What I'm honestly surprised at is the lack of third parties trying to aggregate data from here, since it's theoretically just given to them if you federate. Like, is there a removeddit equivalent?
It's not fine when AI starts scraping data that is personal (like face, age, ID) or my source code (because most of the code AI scrapes is copyleft or requires attribution). Public information like comments, etc. that doesn't contain the things said above, I'm okay with.
While I try not to these days, sometimes I still state with authority that which I only believe to be true, and it then later turns out to have been a misunderstanding or confusion on my part.
And given that this is exactly the sort of thing that AIs do, I feel like they've been trained on far too many people like me already.
So, I'm just gonna keep doing what I have been. If an AI learns only from fallible humans without second guessing or oversight, that's on its creators.
Now, if I was an artist or musician, media where accuracy and style are paramount, I might be a bit more concerned at being ripped off, but right now, they're only hurting themselves.
I don’t care. Most of what I post is personal opinion, sarcasm, and/or attempts at humor. It’s nothing I’ve put a significant amount of time or effort into. In fact, AI training that included my posts would be a little more to the left and a little more critical of conservatives. That’s fine with me.
I'm perfectly down with everything being scraped and slammed into AI the same way I've been down with search engines having it all for ages. I just want any models that contain information scraped from the public to be publicly available.
Whatever I put on Lemmy or elsewhere on the fediverse implicitly grants everyone a revocable license that allows them to view and replicate the verbatim content, by way of how the fediverse works. You may of course exercise all the rights that e.g. fair use grants you, but that does not include the right to perform derivative works; my content must be unaltered.
When I delete some piece of content, that license is effectively revoked and nobody is allowed to perform the verbatim content any longer. Continuing to do so is a clear copyright violation IMHO but it can be ethically fine in some specific cases (e.g. archival).
Due to the nature of how the fediverse works, you can't expect it to take effect immediately, but it should take effect at some point, and I should be able to manually cause it to come into effect immediately by e.g. contacting an instance admin to ask for a removed post of mine to be removed on their instance as well.
Seems odd that someone from dbzer0 would be very concerned about data ownership. How come?
I don't exactly know how Perplexity runs its service. I assume that their AI reacts to such a question by googling the name and then summarizing the results. You certainly received much less info about yourself than you could have gotten via a search engine.