What are your thoughts on an on-device model that "fact-checks/determines bias" on comments and posts?
I've had a few pet projects in the past around RSS aggregation/news reading: an app that could fact-check the sources/article while you read, and also infer the journalist's biases from the grammar and linguistic patterns used in the article. The same could be applied to comments.
I wonder if such a feature would have value in a reader app for Lemmy? I feel a definitive score is toxic. But if it simply displayed the variables to look out for, it could help you make an objective decision yourself?
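To make the "display the variables, not a verdict" idea concrete, here's a minimal sketch. The lexicons and metrics are purely illustrative placeholders I made up, not a validated linguistic model:

```python
# Illustrative only: toy word lists standing in for a real linguistic model.
import re
from collections import Counter

HEDGES = {"reportedly", "allegedly", "apparently", "supposedly", "arguably"}
LOADED = {"outrageous", "shocking", "disgraceful", "heroic", "disastrous"}

def bias_signals(text: str) -> dict:
    """Return raw linguistic variables for the reader, not a single score."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return {
        "hedging_per_100_words": 100 * sum(counts[w] for w in HEDGES) / total,
        "loaded_language_per_100_words": 100 * sum(counts[w] for w in LOADED) / total,
        "adverb_density_per_100_words": 100 * sum(1 for w in words if w.endswith("ly")) / total,
        "word_count": total,
    }

print(bias_signals(
    "The allegedly shocking bill was reportedly rushed through in a disgraceful late-night session."
))
```

The reader app would just render these numbers alongside the article and leave the conclusion to the reader.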
Another application of this is pulling out just the objective statements in an article for faster reading.
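For the objective-statements idea, here's a rough sketch of sentence-level fact/opinion tagging with an off-the-shelf zero-shot model. It assumes the Hugging Face `transformers` package; the model choice, labels, and threshold are my assumptions, not a tested setup:

```python
# Sketch: keep only sentences the model tags as fact-like.
# Assumes `pip install transformers torch`; facebook/bart-large-mnli is
# one publicly available NLI model, not a specific recommendation.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
LABELS = ["statement of fact", "statement of opinion"]

def objective_sentences(sentences, threshold=0.7):
    for sent in sentences:
        result = classifier(sent, candidate_labels=LABELS)
        # The pipeline returns labels sorted by score, highest first.
        if result["labels"][0] == LABELS[0] and result["scores"][0] >= threshold:
            yield sent

article = [
    "The bill passed the Senate 62-38 on Tuesday.",
    "Frankly, it is the worst legislation in a generation.",
]
print(list(objective_sentences(article)))
```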
No aggression. I'm just being sort of aggressively nice about telling you that your software idea is neat but not really a good idea. You are the asker, by the way. I'm just being generous and answering. No hard feelings.
Figure out what the truth is for you and leave that shit alone.
This actually got me thinking quite a bit, and I was hoping you'd expand on it. Is it more about building things that aren't driven by a personal truth?
It's more about how you can create a computer program to see an absolute truth when that absolute truth is called into question. Your computer program is limited by you and your mind. That's why, in the end, people are trying to create artificial intelligence and they're not getting it right, and perhaps they won't. Your idea of truth may work for you in your life, personally. Does it work for everyone else in the world? Again, philosophy is a nice thing to learn about. It really helps. Cheap science fiction movies also help; they always sort of get at what I'm saying somehow.

On the bright side, artificial intelligence is quite capable of making itself smart and then stupid, according to the headlines. So it is capable of making itself dumber. Why is that? What truth did it find? What does it know that we do not? That's the problem, you see. Perhaps it knows nothing. Or perhaps it doesn't care. Or perhaps it sees that a simple math problem can be answered wrong and it doesn't care. That's where we are with this type of thing you want to create that reads for us.

I can find plagiarism, but the software can't. Or rather, the software can sometimes find plagiarism, and most of the time it's wrong. That's where we are. If you can find a way to make it better, that's great. But don't make it about computers reading things in place of humans. That's not possible right now. In the future, maybe humans and software will be able to read things similarly. In place of? I don't know, man. My next-door neighbor can read. Do I want my next-door neighbor reading things for me? No, not really. I barely know my next-door neighbor. I prefer to read things myself.
It's not a bad idea. It just needs fine-tuning. Think about the plagiarism software I'm talking about: it's utterly useless most of the time, but it isn't a bad thing. At the same time, it is indeed a bad thing. Think about what would happen to my students if I were lazy. I'd just not read the papers that were flagged as plagiarized and have those students expelled from the university, not bothering to read the papers at all, saving myself some work and screwing over people in my class. I could, and actually can, use the results of the plagiarism detector as my evidence, have the student thrown out, and never have to read any of their writing. It would save me so much time. I'd have more time for me. I could drink too much. Try out some fancy new illegal drugs. Party at the club. Lose my job.
Slowly removing passion. It's interesting seeing how things I'd expect to increase passion (simply because they create/save more time) may have the complete opposite effect, going against the whole intention. I ignore this side a lot of the time.
Passion as in spending the time to look into the thoughts in the paper, and observing each student's work fairly to help them improve their writing. Maybe the plagiarism checker is wrong about something and makes you skip reading that section, when in fact the student may have laid out some interesting thoughts that should have received positive reinforcement.
Our overall discussion reminded me of a piece by Aaron Swartz as well; I thought it would be a nice read to suggest: http://www.aaronsw.com/weblog/anders. The specific piece is called "Confront reality".
OK, so you're looking for a way to figure out punditry: what the pundits say that is fact and what is just their opinion. I think this type of goal is entertaining. What you're looking for is to create software that singles out journalists (they are usually the pundits). It looks easy watching TV; it's harder to do with software. But you're right in that regard. Journalists aren't what they used to be. They are free to have an opinion, yet they are viewed as fact reporters. It's problematic. For now, humans are better at figuring that out than AI. But if you can figure it out, that's great.