The change is that the blanket ban on military use has been replaced with a more general ban on causing harm.
So for example, the Army could use it to do their accounting, but not to generate a disinformation campaign against a hostile nation.
If anyone had actually read the article, we could have a productive conversation about whether any military usage is truly harmless, about how useful a military ban even is in a world where so much military labor is outsourced to private corporations that could 'launder' terms compliance, or about the general inability of terms of service to preemptively prevent harmful use at all.
Instead, we have people taking the headline only and discussing AI being put in charge of nukes.
Lemmy seems to care a lot more about debating straw-man arguments about how terrible AI is than engaging with reality.
The point is that it's a purposeful slow walk: the entire "non-profit" framing and these "limitations" are a very calculated marketing play to soften the justified fears of unregulated, for-profit (i.e., endless-growth) AI development. It will find its way to full evil through 1000 small cuts, with folks like you arguing for them at every step along the way, "IT'S JUST A SMALL CUT!!!"
While I do think AI development isn't going in the direction you think it is, if you read my comment carefully you'll notice I'm actually not saying anything about whether this is "a small cut" or not; I'm simply laying out the key nuance of the article that no one is reading.
My point isn't "OpenAI changing the scope of their military ban is a good thing"; it's "people should read the fucking article before commenting if we want to have a productive discussion."
If I did accounting (or even just cooking, really) for the Mafia, it would be less bad than actually going out with a gun to threaten or kill people, but it would still be bad.
Why? Because it still helps an organisation whose core mission is hurting people.
And it's purely out of greed, because it's not as if ChatGPT desperately needs this application to avoid going bankrupt.
I guess, but I never got hooked on any of the big social media sites, and on the few I did (reddit mostly), I limited myself to rather non-political subjects like jokes and specific kinds of content. I'm new to Lemmy, and this is most of what I've been seeing, which is why I said that.
Obviously I know that this is what all social media looks like these days. I hoped Lemmy would have at least some noticeable vocal minority of balanced people, but nah.