It sure is lovely that the "AI" "Revolution" has given hacks a bunch of hard-to-audit but scientific-sounding metrics to apply however they want.
Armchair peer review time: I'd love to see them introduce a control group for their "toxicity" model by including subs from their other identified clusters. How can you know what it means for tankies to be millions of billions toxic if you don't have baselines? I do like how they agree with r/liberal and r/conservative being in the same cluster, though. On the domain analysis, I'd require them to also include the total number of articles and not just the percentages, which I'd bet would make for a fun graph.
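To make the baseline complaint concrete, here's a minimal sketch of what a control comparison would look like. The cluster names and per-comment scores are entirely invented for illustration; nothing here comes from the paper's data:

```python
# Hypothetical sketch: why a "toxicity" mean is meaningless without baselines.
# Cluster names and scores are made up for illustration only.
from statistics import mean

# Imagined per-comment toxicity scores (0-1) from some classifier
clusters = {
    "target_cluster":  [0.42, 0.55, 0.38, 0.61],
    "control_cluster": [0.40, 0.52, 0.36, 0.58],
}

means = {name: mean(scores) for name, scores in clusters.items()}
for name, m in means.items():
    print(f"{name}: mean toxicity {m:.2f}")

# The raw number for one cluster tells you nothing; only the
# difference against the controls carries any information.
print(f"difference vs control: {means['target_cluster'] - means['control_cluster']:+.3f}")
```

The point being: without the control row, the first print is just a number with no scale attached.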
Overall, I've read less funny and more informative parody papers. For the AI nerds, this one might be fun.
Ironically(?), the funding is to develop a machine learning algorithm not to spot and moderate racism, but to spot the less racist of any two examples. Which means the project is to develop a comparative model, yet they haven't thought about using comparison within the research itself. Meanwhile, real scholars get fired all over the place for being in unions and demanding a living wage.
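For what it's worth, the "comparative" setup described above is easy to sketch: a model that, given two texts, only says which one is *less* racist, never assigning either an absolute score. The scoring function below is a crude stand-in, not the funded project's actual model:

```python
# Hypothetical sketch of a purely comparative judgment.
# toxicity_score is a made-up stand-in (uppercase ratio as "shouting"),
# NOT the actual model from the funded project.
def toxicity_score(text: str) -> float:
    return sum(1 for c in text if c.isupper()) / max(len(text), 1)

def less_racist(a: str, b: str) -> str:
    # Comparative only: no absolute threshold, no standalone number reported.
    return a if toxicity_score(a) <= toxicity_score(b) else b

print(less_racist("calm reply", "ANGRY SHOUTING"))  # → calm reply
```

Which is exactly the irony: the whole deliverable is a pairwise comparison, yet the paper's own toxicity claims are absolute numbers with no comparison group.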
I'm slogging through it right now and coming to similar assessments. "With enough Machine Learning shenanigans, I can arrive at whatever conclusion I want!"
They're tired of gaslighting people into becoming liberals, so now they're doing it with machines. Whoever thought of letting misinformation giants like Google "teach" "AI" should be fired.