Artificial Intelligence - Ethics | Law | Philosophy
- Pulitzer Winner Michael Chabon Sues Meta for Allegedly Using His Copyrighted Material to Train AI (gizmodo.com)
The author of The Amazing Adventures of Kavalier & Clay joined a group of writers who say Meta's AI efforts are ripping them off.
- “Will AI Destroy Us?”: Roundtable with Coleman Hughes, Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson
[Embedded YouTube video]
A written-out transcript is available on Scott Aaronson's blog: https://scottaaronson.blog/?p=7431
-------------------------
My takes:
> ELIEZER: What strategy can a like 70 IQ honest person come up with and invent themselves by which they will outwit and defeat a 130 IQ sociopath?
Physically attack them. That might seem like a non-sequitur, but what I'm getting at is that Yudkowsky seems to underestimate how powerful and unpredictable meatspace can be over the short-to-medium term. I really don't think you could conquer the world over wifi either, unless maybe you can break encryption.
> SCOTT: Look, I can imagine a world where we only got one try, and if we failed, then it destroys all life on Earth. And so, let me agree to the conditional statement that if we are in that world, then I think that we’re screwed.
Also agreed, with the caveat that there are wide differences between failure scenarios, although we're probably getting a random one at this rate.
> ELIEZER: I mean, it’s not presently ruled out that you have some like, relatively smart in some ways, dumb in some other ways, or at least not smarter than human in other ways, AI that makes an early shot at taking over the world, maybe because it expects future AIs to not share its goals and not cooperate with it, and it fails. And the appropriate lesson to learn there is to, like, shut the whole thing down. And, I’d be like, “Yeah, sure, like wouldn’t it be good to live in that world?”
> And the way you live in that world is that when you get that warning sign, you shut it all down.
I suspect small but reversible incidents are going to happen more and more, if we keep being careful and talking about risks the way we have been. I honestly have no clue where things go from there, but I imagine the tenor and consistency of the response will be pandemic-ish.
> GARY: I’m not real thrilled with that. I mean, I don’t think we want to leave what their objective functions are, what their desires are to them, working them out with no consultation from us, with no human in the loop, right?
Gary has a far better impression of human leadership than I do. Like, we're not on track for a benevolent AI, if such a thing even makes sense (see his next paragraph), but if we had one it would blow human governments out of the water.
> ELIEZER: Part of the reason why I’m worried about the focus on short-term problems is that I suspect that the short-term problems might very well be solvable, and we will be left with the long-term problems after that. Like, it wouldn’t surprise me very much if, in 2025, there are large language models that just don’t make stuff up anymore.
> GARY: It would surprise me.
Hey, so there's a prediction to watch!
> SCOTT: We just need to figure out how to delay the apocalypse by at least one year per year of research invested.
That's a good way of looking at it. Maybe that will be part of whatever the response to smaller incidents is.
> GARY: Yeah, I mean, I think we should stop spending all this time on LLMs. I don’t think the answer to alignment is going to come from through LLMs. I really don’t. I think they’re too much of a black box. You can’t put explicit, symbolic constraints in the way that you need to. I think they’re actually, with respect to alignment, a blind alley. I think with respect to writing code, they’re a great tool. But with alignment, I don’t think the answer is there.
Yes, agreed. I don't think we can un-invent them at this point, though.
> ELIEZER: I was going to name the smaller problem. The problem was having an agent that could switch between two utility functions depending on a button, or a switch, or a bit of information, or something. Such that it wouldn’t try to make you press the button; it wouldn’t try to make you avoid pressing the button. And if it built a copy of itself, it would want to build a dependency on the switch into the copy.
> So, that’s an example of a very basic problem in alignment theory that is still open.
Neat. I suspect it's impossible with a reasonable cost function, if the thing actually sees all the way ahead.
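To make the difficulty a bit more concrete, here's a toy sketch (my own simplification in Python, not Yudkowsky's formalization) of why a naive expected-utility maximizer isn't indifferent to its own off-switch:

```python
# Toy sketch of the "switchable utility" problem (my own simplification).
# The agent currently maximizes U1; pressing the button would switch it to U2
# (here: "shut down"). A naive agent scores every future with its CURRENT
# utility function, so unless the two branches happen to score exactly the
# same, it is not indifferent to the button: it will block it (or press it).

def u1(outcome: str) -> float:
    """Utility the agent is maximizing right now (e.g. 'keep optimizing')."""
    return {"keep_going": 10.0, "shut_down": 0.0}[outcome]

def naive_choice() -> str:
    # Futures the agent can steer toward:
    #   block_button -> it keeps maximizing U1      -> outcome "keep_going"
    #   allow_button -> it gets switched over to U2 -> outcome "shut_down"
    futures = {"block_button": "keep_going", "allow_button": "shut_down"}
    # Scored by the *current* U1, blocking strictly wins.
    return max(futures, key=lambda action: u1(futures[action]))

print(naive_choice())  # -> block_button
```

The open part is getting genuine indifference, and getting the agent to build that indifference into any copies it makes, without wrecking its other incentives.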
> So, before GPT-4 was released, [the Alignment Research Center] did a bunch of evaluations of, you know, could GPT-4 make copies of itself? Could it figure out how to deceive people? Could it figure out how to make money? Open up its own bank account?
> ELIEZER: Could it hire a TaskRabbit?
> SCOTT: Yes. So, the most notable success that they had was that it could figure out how to hire a TaskRabbit to help it pass a CAPTCHA. And when the person asked, ‘Well, why do you need me to help you with this?’–
> ELIEZER: When the person asked, ‘Are you a robot, LOL?’
> SCOTT: Well, yes, it said, ‘No, I am visually impaired.’
I wonder who got the next-gen AI cold call, haha!
- Dr Stephen Wolfram on Technical and Philosophical Aspects of AI Risk
[Embedded YouTube video]
- ‘Very wonderful, very toxic’: how AI became the culture war’s new frontier (www.theguardian.com)
While the far right claims artificial intelligence has become too ‘woke’, experts argue it’s not a sentient being with its own viewpoints
- George Hotz and Eliezer Yudkowsky AI Safety Debate - 8/15/23 5pm ET
https://twitter.com/i/broadcasts/1nAJErpDYgRxL
https://www.youtube.com/watch?v=6yQEA18C-XI
- Experts want to give AI human ‘souls’ so they don’t kill us all (cointelegraph.com)
Some experts believe the alignment problem can be fixed by making AIs more human — but others say that will just make things much worse.
- “AI” Hurts Consumers and Workers -- and Isn’t Intelligent (techpolicy.press)
Researchers Alex Hanna and Emily M. Bender call on businesses not to succumb to this artificial “intelligence” hype.
cross-posted from: https://lemmy.ml/post/2811405
> "We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "
- Studios Quietly Go on Hiring Spree for AI Specialist Jobs Amid Picket Line Anxiety (www.hollywoodreporter.com)
Netflix is hiring a $900,000-per-year AI product manager, Disney is looking for generative AI specialists, and Sony seeks an AI ethics expert, while the tech becomes a staple of SAG-AFTRA and Writers Guild picket signs.
- A new partnership to promote responsible AI (blog.google)
Today, Google, Microsoft, OpenAI and Anthropic published a joint announcement establishing the Frontier Model Forum.
- Generative AI and the Law
https://www.lexisnexis.com/html/lexisnexis-generative-ai-story/
Generative AI and the Law: AI is here already – with the power to change the legal profession
Author: Suzanne McGee
Word count: 2,209 words
Estimated read time: 9 minutes
Supporting links:
- Generative AI & The Legal Profession Survey, LexisNexis 2023: https://www.lexisnexis.com/en-us/about-us/media/press-release.page?id=1667497591151179 ↗
- Mike Walsh, CEO, LexisNexis Legal & Professional: https://www.lexisnexis.com/en-us/about-us/our-people/leadership/mike-walsh.page ↗
- Greg Lambert, Chief Knowledge Services Officer, Jackson Walker LLP: https://www.jw.com/people/greg-lambert ↗
- Danielle Benecke, Founder & Global Head, Baker McKenzie Machine Learning: https://www.bakermckenzie.com/en/people/b/benecke-danielle ↗
- Ashley Armstrong, Assistant Clinical Professor of Law, University of Connecticut: https://www.law.uconn.edu/faculty/full-time-faculty/ashley-b-armstrong ↗
- Jamie Buckley, Chief Product Officer, LexisNexis Legal & Professional: https://www.lexisnexis.com/en-us/about-us/our-people/leadership/jamie-buckley.page ↗
- Joel F. Murray, Attorney, McKean Smith LLC: https://www.mckeansmithlaw.com/joel-f-murray.html ↗
Summary:
The article discusses the potential impact of generative AI like ChatGPT on the legal profession. It notes that while AI tools have been used in law for over a decade, recent advances like ChatGPT have renewed interest in how AI can transform legal work. Potential applications include drafting documents, analyzing large datasets, and leveling the playing field for smaller firms. However, risks include AI generating inaccurate or fictional information. Custom models trained on relevant legal data, like LexisNexis' 144 billion document repository, can mitigate this. Lawyers believe AI will increase efficiency and change practice, but not wholly replace human skills like judgment and creativity. Concerns around copyright, IP, and confidentiality exist regarding training data. Experts say AI will augment lawyers' work rather than replace them, allowing focus on high-value tasks. AI-proficient lawyers are expected to replace those who don't adopt new tech. Overall, AI has immense potential to transform legal services.
Evaluation:
This article provides a balanced overview of the potential impact of large language models like ChatGPT on the legal profession. It highlights several promising applications in areas like drafting, research, and analysis where these models can increase efficiency and capabilities, and it also covers the risks around inaccurate output, copyright issues, and confidentiality that need to be addressed. It notes that experts believe AI will augment rather than replace lawyers, allowing them to focus on high-judgment tasks, and the sources cited from legal industry executives, law firm partners, and academics lend credibility. Overall this is a strong, realistic analysis of how large language models could transform legal services if applied judiciously with proper training data, and it would be a helpful read for those exploring use cases for the technology in the legal industry.
- Is there another denialism campaign starting about AI risks?
I have no real evidence, or even an idea of who would fund it, but I've seen a couple of BBC articles now where Meta alone is pitted against everyone else as if it were an equal match, which is a pretty familiar phenomenon from climate and public health issues.
- US judge finds flaws in artists' lawsuit against AI companies (www.reuters.com)
U.S. District Judge William Orrick said during a hearing in San Francisco on Wednesday that he was inclined to dismiss most of a lawsuit brought by a group of artists against generative artificial intelligence companies, though he would allow them to file a new complaint.
- This AI Watches Millions Of Cars And Tells Cops If You’re Driving Like A Criminal (lemmy.world)
Artificial intelligence is helping American cops look for “suspicious” patterns of movement using license plate databases.
- Interview: The Ethical Puzzle of Sentient AI (undark.org)
Philosophy professor Jonathan Birch says the potential for conscious AI raises a host of moral concerns.
- Artificial Intelligence Is Making The Housing Crisis Worse (www.levernews.com)
Landlords are using AI to screen their tenants, heightening errors and discrimination.
- Five Worlds of AI (Boaz Barak and Scott Aaronson)
I have to say, the way they describe the AI-Fizzle scenario is a weird one to me. Do they realise how many people are employed doing something existing chatbots could (and probably will) replace? The real fizzle scenario would be in between that and Futurama (since ChatGPT can't advance math as of yet, as described).
They did say they were ignoring probability, I guess.
- CEO replaces 90% of support staff with AI, praises the system on Twitter (www.techspot.com)
Shah, the 31-year-old CEO and founder of Bengaluru-based Duukan, which helps merchants to set up online stores and sell products digitally, posted that "We had to layoff...
- Are guaranteed-income programs working? (16.06.2023 article) (chicago.suntimes.com)
Thousands of Chicago and Cook County residents got $500 a month the past year from programs that are aiming to give people a little financial cushion.
- Copyright lawsuits against Meta and OpenAI mention shadow libraries, including Library Genesis, as sources of training data
cross-posted from: https://lemmy.world/post/1330998
> cross-posted from: https://lemmy.world/post/1330512
>
> Below are direct quotes from the filings.
>
> OpenAI
>
> > As noted in Paragraph 32, supra, the OpenAI Books2 dataset can be estimated to contain about 294,000 titles. The only “internet-based books corpora” that have ever offered that much material are notorious “shadow library” websites like Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Bibliotik. The books aggregated by these websites have also been available in bulk via torrent systems. These flagrantly illegal shadow libraries have long been of interest to the AI-training community: for instance, an AI training dataset published in December 2020 by EleutherAI called “Books3” includes a recreation of the Bibliotik collection and contains nearly 200,000 books. On information and belief, the OpenAI Books2 dataset includes books copied from these “shadow libraries,” because those are the sources of trainable books most similar in nature and size to OpenAI’s description of Books2.
>
> Meta
>
> > Bibliotik is one of a number of notorious “shadow library” websites that also includes Library Genesis (aka LibGen), Z-Library (aka B-ok), and Sci-Hub. The books and other materials aggregated by these websites have also been available in bulk via torrent systems. These shadow libraries have long been of interest to the AI-training community because of the large quantity of copyrighted material they host. For that reason, these shadow libraries are also flagrantly illegal.
>
> This article from Ars Technica covers a few more details. Filings are viewable at the law firm's site here.
- Why everyone is mad about New York’s AI hiring law (www.technologyreview.com)
The law is a first step in regulating AI, but critics aren’t happy
cross-posted from: https://lemmy.ca/post/1338661
> What do you think about this regulation? I personally feel it’s a step in the right direction towards regulating AI use, but think it could be stricter.
- IATSE Announces Core Principles for the Application of AI Technology in Entertainment (variety.com)
IATSE has announced its core principles for the application of AI technologies in the entertainment industry. In a statement, IATSE, a labor union which represents over 160,000 technicians, artisan…
- "AI-generated content farms designed to rake in cash are cropping up at an alarming rate"
cross-posted from: https://lemmy.intai.tech/post/44133
> cross-posted from: https://lemmy.world/post/955996
>
> > Prominent international brands are unintentionally funding low-quality AI content platforms. Major banks, consumer tech companies, and a Silicon Valley platform are some of the key contributors. Their advertising efforts indirectly fund these platforms, which mainly rely on programmatic advertising revenue.
> >
> > * NewsGuard identified hundreds of Fortune 500 companies unknowingly advertising on these sites.
> > * The financial support from these companies boosts the financial incentive of low-quality AI content creators.
> >
> > Emergence of AI Content Farms: AI tools are making it easier to set up and fill websites with massive amounts of content. OpenAI's ChatGPT is a tool used to generate text on a large scale, which has contributed to the rise of these low-quality content farms.
> >
> > * The scale of these operations is significant, with some websites generating hundreds of articles a day.
> > * The low quality and potential for misinformation do not deter these operations, and the ads from legitimate companies could lend undeserved credibility.
> >
> > Google's Role: Google and its advertising arm play a crucial role in the viability of the AI spam business model. Over 90% of ads on these low-quality websites were served by Google Ads, which indicates a problem in Google's ad policy enforcement.
> >
> > Source (Futurism)
> >
> > PS: I run an ML-powered news aggregator that uses an AI to summarize the best tech news from 50+ media outlets (The Verge, TechCrunch…). If you liked this analysis, you'll love the content you'll receive from this tool!
- OpenAI Being Sued for "Stealing" People's Content Online (www.firstpost.com: “ChatGPT in trouble: OpenAI sued for stealing everything anyone’s ever written on the Internet”)
OpenAI's ChatGPT and Sam Altman are in massive trouble. OpenAI is getting sued in the US for illegally using content from the internet to train their LLM or large language models
cross-posted from: https://lemmy.world/post/949452
> OpenAI's ChatGPT and Sam Altman are in massive trouble. OpenAI is getting sued in the US for illegally using content from the internet to train their LLM or large language models
- Probably a controversial topic, but A.I. and how I feel corporate would abuse it
cross-posted from: https://lemmy.world/post/846055
> Tldr:
>
> A.I. should be held to the same ethical standard we hold humans to, because humans will find ways to abuse A.I. for potentially unethical means.
>
> Mildly infuriated at the potential of A.I. manipulation by bad actors.
>
> Tldr end
>
> I firstly want to say that I believe A.I. development has a place to be worked on and has potential for human growth, but I also feel mildly infuriated at the potential money sink out-of-control corporations could develop.
>
> I've seen arguments increase about A.I., and usually the heated arguments are very specific to a particular aspect. It does make me feel frustrated, and I am trying to maybe express an aspect of A.I. use too, I guess.
>
> I am not trying to start any wars on the ethics of the use of A.I., but I feel that there should be some form of ethics implemented. I guess that line of thought falls in with calls for regulation.
>
> If I glance at how social media and games go when A.I. is used as a means to figure out how to make the "factory must grow", it feels like things will only get worse, as there will be an ever-increasing drive to get one more currency. The more the algorithm grows and refines, the more lifeless things seem to get. All this with just "basic" A.I. models.
>
> Efficiency increases, but what is the cost?
>
> If I compare something like Reddit to Lemmy... Lemmy feels more "alive" because social interaction without the "corporate machine interface" trying to analyze you feels organic at the moment, as you know there is a human and not a bot trying to make you engage.
>
> My anger is not at the A.I., but more at the way A.I. can provide unethical actors a means to push a questionable agenda. One can already see it with things like influencers and targeted advertising with human actors, and once said unethical actors figure out how to train and develop A.I. to successfully mimic a human, I fear for the control said actors will push towards an unsustainable precipice, towards a desired state of consciousness.
>
> Maybe it is fear of a dystopian future, but I fear the reality of said future is more real if A.I. doesn't have ethics either.
>
> If said topic is not in line with forum discussion, please let me know and I will remove it, and if possible please direct me to a more appropriate instance.
>
> Thank you for your time.
- What The Inventor of Chatbots Knew About The Danger of AI
[Embedded YouTube video]
- Stop talking about tomorrow’s AI doomsday when AI poses risks today (www.nature.com)
Talk of artificial intelligence destroying humanity plays into the tech companies’ agenda, and hinders effective regulation of the societal harms AI is causing right now.
cross-posted from: https://beehaw.org/post/849870
> Article Link from Nature
- The Vatican Releases Its Own AI Ethics Handbook (gizmodo.com)
Rather than sit around and wait for the apocalypse, the Pope and company partnered with Santa Clara University on a guidebook with AI ethics principles that tech companies can use right now.
- No, ‘AI’ Will Not Fix Accessibility (adrianroselli.com)
In recent years, a series of new technologies have provided better experiences and outcomes for disabled users. Collectively branded “Artificial Intelligence”, the two biggest breakthroughs have been in computer vision and large language models (LLM). The former, computer vision, allows a computer t...
cross-posted from: https://beehaw.org/post/528711
> Via @aardrian@toot.cafe:
>
> > …consider tools like GitHub Copilot, which claims to be “your AI pair programmer”. These work by leaning on the code of thousands and thousands of projects to build its code auto-complete features.
> >
> > When you copy broken patterns you get broken patterns. And I assure you, GitHub, Google, Apple, Facebook, Amazon, stacks of libraries and frameworks, piles of projects, and so on, are rife with accessibility barriers.
- AI machines aren’t ‘hallucinating’. But their makers are (amp.theguardian.com)
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
cross-posted from: https://kbin.social/m/technology@beehaw.org/t/98592
> Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
- Social Context of LLMs - the BigScience Approach, Part 3: Data Governance and Representation (montrealethics.ai)
✍️ Original article by Yacine Jernite, Suhas Pai, Giada Pistilli, and Margaret Mitchell from HuggingFace. Yacine Jernite is a researcher at Hugging Face working on exploring the social and legal…
- Voices from Beyond: How AI Could Offer a Path to Immortality (www.psychologytoday.com)
A Personal Perspective: The ethical complexities of a loved one living on digitally.
- Nick Land / Accelerationism (knowyourmeme.com)
Nick Land is a philosopher best known for his popularization of the term Accelerationism as a critical theory for analyzing the effects of capitalism and technological progress. Land was part of the Cybernetic Culture Research Unit (CCRU), a group that also included Mark Fisher, the author of Capita...
- How do you rationalize building artificial superintelligence that may exterminate humanity?
https://twitter.com/gregkieser/status/1671720819142139904
- Social Context of LLMs - the BigScience Approach, Part 2: Project Ethical and Legal Grounding (montrealethics.ai)
✍️ Original article by Yacine Jernite, Giada Pistilli, and Carlos Muñoz Ferrandis from HuggingFace. Yacine Jernite is a researcher at Hugging Face working on exploring the social and legal context of…
- Why AI Will Save the World (a16z.com)
There's a full-blown moral panic about AI right now. But the real risk is losing the race to global AI technological superiority.
cross-posted from: https://lemmy.ml/post/1354017
> cross-posted from: https://lemmy.ml/post/1353899
>
> > Article summarized by AI below:
> >
> > The article argues that artificial intelligence (AI) is not a threat to humanity, but a powerful tool to solve global challenges such as climate change, poverty, disease, and inequality. It gives examples of how AI is already being used to improve health care, education, agriculture, and energy efficiency. It also discusses the ethical and social implications of AI, and how we can ensure that it is aligned with human values and goals. The article concludes that AI will save the world if we use it wisely and responsibly.
- Future LLMs will be progressively worse - and possibly change how humans write
cross-posted from: https://beehaw.org/post/616101
> I was thinking about this after a discussion at work about large language models (LLMs) - the initial scrape of the internet before ChatGPT became publicly usable was probably the last truly high-quality scrape of human-made content any model will get. The second ChatGPT went public, the data pool became tainted with people publishing information from it. Future language models will have increasingly large percentages of their data tainted by AI-generated content, skewing the results away from how humans actually write. To get actual human content, they may need to turn to transcriptions of audio recordings or phone calls for training, and even that wouldn't be quite correct because people write differently than they speak.
>
> I sort of wonder if eventually people will start being influenced in how they choose to write based on seeing this AI content. If teachers use AI-generated texts in school lessons, especially at lower levels, will that affect how kids end up writing and formatting their work? It's weird to think about the wider implications of how this AI stuff will ultimately impact society.
>
> What are your predictions? Is there a future where AI can get a clean, human-made scrape? Are we doomed to start writing like AIs?
- Social Context of LLMs - the BigScience Approach, Part 1: Overview of the Governance, Ethics, and Legal Work (montrealethics.ai)
✍️ Original article by Yacine Jernite, Suzana Ilić, Giada Pistilli, Sasha Luccioni, and Margaret Mitchell from HuggingFace. Yacine Jernite is a researcher at Hugging Face working on exploring the…
- Robot souls and the junk code of life (rse.org.uk)
Dr Eve Poole considers the importance of reflecting human agency, values, and our 'junk code' in developing advanced artificial intelligence.
- Can AI ever be conscious? OpenAI co-founder Ilya Sutskever explains a simple experiment to test for AI sentience
https://twitter.com/thealexker/status/1668390891705311232?s=20
- The Everything Everywhere Censorship of China (www.freiheit.org)
The Internet is facing its biggest crisis to date. A significant part of this is that state-related actors attempt to turn the Internet into a realm of tools for the surveillance of Internet users as well as societies. This publication analyses China’s model of Internet governance and the complex to...
cross-posted from: https://feddit.de/post/820900
> Beijing utilizes a sophisticated and evolving set of legal, technical, and operational apparatuses to perfect a near real-time censorship system across all platforms and channels to conduct propaganda and disinformation campaigns and collect massive amounts of data. The publication formulates the most important lessons learned from history so that democracies can find new ways or improve their strategy to preserve our liberal values and protect the rights of Internet users.
>
> Here is the study (pdf): https://shop.freiheit.org/download/P2@1489/733787/The%20Everything%20Everywhere%20Censorship%20of%20China_EN.pdf