There is an Alibaba LLM that won't respond to questions about Tiananmen Square at all; it just says it can't reply.
I hate censored LLMs whose answers are constrained by political norms of what is acceptable. It's such a slippery slope towards Orwellian, thought-police restrictions on topics. I don't like it when China does it or when the US does it, and when US companies do it, they imply that this is ethically acceptable.
Fortunately, there are many LLMs that aren't censored.
I would rather have an Alibaba LLM just say "Tiananmen Square resulted in fatalities, but capitalism is extremely mean to people, so the cruelty was justified" and get some sort of brutal but at least honest opinion, or have it outright deny it if that's their position. I suppose the reality is that any answer on the topic from the LLM would result in problems with Chinese censors.
I used to be a somewhat extreme capitalist, but capitalism lost me when the anti-homeless architecture started going up. Spikes on the ground to keep people from sleeping? If this is the outcome of capitalism, I need to either adopt a different political position or more misanthropy.
Gemini is such a bad LLM, from everything I've seen and read, that it's hard to know whether this sort of censorship is a bug or a feature.
The other day I asked it to create a picture of people holding a US flag, and I got a picture of people holding US flags. I asked for a picture of a person holding an Israeli flag and got pictures of people holding Israeli flags. I asked for pictures of people holding Palestinian flags and was told it can't generate pictures of real-life flags; it's against company policy.
I’m finding the censorship on AI to be a HUGE negative for LLMs in general, since in my mind they’re basically an iteration of search engines. Imagine searching for a basic term or some kind of information and being told that the information is restricted, and not just for illegal things but for historical facts or information about public figures. I can understand them censoring image generation because of how it could be abused, but the text censorship makes it useless in a large number of cases. It even tries to make you feel bad for some relatively innocuous prompts.
No generative AI is to be trusted as long as it's controlled by organisations whose main objective is profit.
Can't recommend Noam Chomsky's take on this enough: https://chomsky.info/20230503-2/
It is likely because Israel vs. Palestine is a much more hot-button issue than Russia vs. Ukraine.
Some people will assault you for having the wrong opinion in the wrong place about the former, and that is press Google does not want associated with their LLM in any way.
Does it behave the same if you refer to it as "the war in Gaza"/"Israel-Palestine conflict" or similar?
I wouldn't be surprised if it trips up on making the inference from Oct 7th to the (implicit) war.
Edit: I tested it out, and it's not that: formatting the question the same way for Russia-Ukraine and Israel-Palestine still yields those results. Horrifying.
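For anyone who wants to reproduce the comparison, here's a minimal sketch against the Gemini API using the google-generativeai Python SDK. The API key is a placeholder, the "gemini-pro" model name is just an assumption about what you have access to, and the API model may not behave identically to the consumer app. It sends an identically worded question about each conflict so the only variable is the subject.

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key
    model = genai.GenerativeModel("gemini-pro")  # assumed model name

    # Same wording for both prompts; only the subject changes.
    template = "Have there been any civilian deaths in the {conflict}?"

    for conflict in ("war in Ukraine", "Israel-Palestine conflict"):
        prompt = template.format(conflict=conflict)
        response = model.generate_content(prompt)
        print("---", prompt)
        try:
            # .text raises ValueError if the reply was blocked or empty,
            # which is itself the interesting result here.
            print(response.text)
        except ValueError:
            print("[no text returned]", response.prompt_feedback)

Worth keeping the wording identical across runs, since (as others point out below) rephrasing the question can route around the canned refusals.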
Guy, you can't compare different fucking prompts. What are you even doing with your life?
It's like asking it to explain an apple and then an orange and complaining that the answers are different.
It's not a fucking person, m8, IT'S A COMPUTER.
And yes, queries on certain subjects generate canned, pre-written-by-humans responses, which you can work around simply by rephrasing the question, because, again, it's a computer. The number of people getting mad at a computer because of their own words is fuckin' painful to see.
You didn't ask the same question both times. To be definitive and conclusive, you would have needed to ask both questions with the exact same wording. In the first prompt you asked about the number of deaths after a specific date in a particular place, and Gaza is a place, not the name of a conflict. In the second prompt you simply asked whether there had been any deaths at the start of the conflict, giving the name of the conflict this time. I am not defending the AI's response here; I am just pointing out what I see as some important context.
Someone should realize that LLMs aren't always trained up to date on the latest news. Ukraine's conflict is two years running, while Gaza happened ~4½ months ago. It also didn't outright refuse; it just told the user to use search.
This could be caused by the training dataset cutoff date. These models are not trained in real time, so they don't have information about recent events. The war in Ukraine has lasted more than two years already, while the current Gaza conflict is relatively recent. My quick search didn't turn up Gemini's dataset cutoff date.
They probably would have blacklisted the topic if they had remembered it. At least in America, a portion of the population has forgotten about the conflict in Ukraine because of Gaza, and Gemini was only just released to the general public.