Sailor Sega Saturn (@sailor_sega_saturn@awful.systems)

Speaking of imposters, there's a screenshot of a fake manifesto substack post (since deleted) which has been linked a couple times on reddit.
The only problem: according to the post's own JSON-LD metadata, it was first published to Substack over four hours after the arrest was reported. People be trying to stir things up.
Amateurs. Can't even publish the post early and edit it later for that extra bit of plausible deniability.
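(For anyone who wants to check this sort of thing themselves, here's a minimal sketch that digs the publish timestamp out of a saved copy of a page. The filename is a placeholder, and it assumes you have beautifulsoup4 installed:)

```python
import json
from datetime import datetime, timezone

from bs4 import BeautifulSoup  # pip install beautifulsoup4

# "post.html" is a placeholder for a locally saved copy of the page.
with open("post.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

# Substack (like many sites) embeds JSON-LD metadata in script tags.
for script in soup.find_all("script", type="application/ld+json"):
    data = json.loads(script.string or "{}")
    if isinstance(data, dict) and data.get("datePublished"):
        # JSON-LD dates are ISO 8601, e.g. "2024-12-10T01:30:00Z"
        ts = data["datePublished"].replace("Z", "+00:00")
        print("Published:", datetime.fromisoformat(ts).astimezone(timezone.utc))
```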
Friends don't let friends OSINT. That said... people have found his Twitter (still up), Goodreads (deleted), GitHub (still up, already full of troll issues), LinkedIn (I guess deleted), an interview about his school's games club (deleted), and his game development studio, which published one iPhone game (Facebook profile deleted).
His Twitter account links to his Linktree, but that only contains some inscrutable emoji rather than any links, so it hasn't really been reported on:
(a string of emoji that got mangled in re-encoding; the decipherable bits include a ninja 🥷, a cow next to a judge 🐄👨‍⚖️, and a yin-yang ☯️)
(I'm sure his inevitable groupies will be puzzling over the meaning of cow judge for years to come)
The YouTube page you found is less talked about, though a reddit comment on one of its videos said "anyone else thinking burntbabylon is Luigi?".
NYTimes also reports a Steam account, Facebook account, and Instagram account (I couldn't find any of these).
https://www.nytimes.com/2024/12/09/nyregion/uhc-suspect-video-games.html
Other NYTimes articles are now investigating his health issues (back surgery, etc.).
Also, this wasn't necessarily a DMCA request.
itchio said this on the Hacker News thread (bolding mine):
The BrandShield software is probably instructed to eradicate all "unauthorized" use of their trademark, so they sent reports independently to our host and registrar claiming there was "fraud and phishing" going on, likely to cause escalation instead of doing the expected DMCA/cease-and-desist.
And BrandShield's response / nonpology (bolding mine):
BrandShield serves as a trusted partner to many global brands. Our AI-driven platform detects potential threats and provides analysis; then our team of Cybersecurity Threat hunters and IP lawyers decide on what actions should be taken. In this case, an abuse was identified from the itchio subdomain. BrandShield remains committed to supporting our clients by identifying potential digital threats and infringements and we encourage platforms to implement stronger self-regulation systems that prevent such issues from occurring.
Which, translated into English, is possibly something like: "We would be very happy if the general public thought this was a normal DMCA takedown. Our chatbot said the website was a phishing page. Our overworked cybersecurity threat hunter agreed after looking at it for zero milliseconds. We encourage itchio to get wrecked."
This difference matters because site hosts and domain registrars can be extremely trigger-happy about any possibility of fraud / abuse / hacks, and unlike the DMCA (which at least has a formal counter-notice process) there's not much of a standard legal process for contesting these reports.
Dear Funko please do not call my mom.
On the third day of OpenAI my ~~true love~~ enemy gave to me ~~three french hens~~ Sora.
The version of Sora we are deploying has many limitations. It often generates unrealistic physics and struggles with complex actions over long durations.
"12 days of OpenAI" lol. Such marketing.
Big eye roll to this part too:
We're introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it's used responsibly as the field advances.
if you're benefiting from some particular way of drawing a boundary around and thinking about AI, I'd really like to hear about it.
A bit of a different take than their post, but since they asked:
I've noticed a lot of people use "AI" when they really mean "LLM and/or diffusion model". I can't count the number of times someone at my job has said "AI" while solely describing LLMs. At this point I've given up on clarifying or correcting it.
This isn't entirely because LLM is a mouthful to say, but also because it's convenient for tech companies if people don't look at the algorithm behind the curtain (flawed, as all algorithms are) and instead see it as magic.
It's blindingly obvious to anyone who's looked that LLMs and generative image models cannot reason or exhibit actual creativity (cf. the post about poetry here). Throw enough training data and compute at one and it may be able to multiply better (holy smokes, stop the presses, a neural network being able to multiply numbers???), or produce obviously bad output x% less of the time, but by this point we've more or less reached the bounds of what the technology can do. The industry's answer is stuff like RAG or manual blacklists, which just serves to hide its limitations behind a curtain.
Everyone wants AI money, but classic chatbots don't make money unless they're booking vacations for customers, writing up doctor's notes, or selling you cars.
But LLMs can't actually do any of that reliably, so any tool in the space has to remain uninterrogated enough both to give customers plausible deniability and to keep the bubble going before they figure it out.
Look at my widget! It's an ✨AI✨! A magical mystery box that makes healthcare, housing, hiring, organ donation, and grading decisions with maybe no bias at all... who can say? Look buster, if you hire a human they'll definitely be biased!
If you use "statistical language model" instead of "AI" in this sentence then people start asking uncomfortable questions about how appropriate it is to expect a mad-libs algorithm trained on 4chan to not be racist.
… an insurance pricing formula, for example, might be considered AI if it was developed by having the computer analyze past claims data, but not if it was a direct result of an expert's knowledge, even if the actual rule was identical in both cases. [page 13]
This is an interesting quote indeed, as expert systems used to be at the forefront of AI; now, apparently, they're not considered AI at all.
Eventually LLMs will just be considered LLMs, and image generators will just be considered image generators, and people will stop ascribing ✨magic✨ to them; they will join the ranks of expert systems, tree search algorithms, logic programming, and everything else we just take for granted as another tool in the toolbox. The bubble people will then have to come up with some shinier, newer system to attract money.
Asking students to pay good money for LLM slop is bold.
Also, did the press release really need an AI-generated image? "Ah yes, this image will make my press release look nice and professional and reassure the audience that AI is being used with due care."
Of neriacular latin to: an evoolitun on nance langusages.
Nanolu.age languga, Lgugar lanilan, pachnans, NlbN, Latolcean, Framen, ArpianhCATifN, Dvnutalmnk's, Sgiaiviaesgn, Italian, Ioveimneilaeiawepoew, Pfowchance -> Vullaq Luainles, Leawenowas, Laohaixisahh Aimwyvrnestrattidn, Frooidangs $Chha
Not everyone can respond to consultations about wanting to die, but a robot can accept anything you say.
I didn't really understand just how absurd this is until I looked up the robot.
It is essentially a Furby on wheels. It has extremely slick marketing, makes weird cooing sounds, has a weird camera sprouting out of its head like a fungus, has big LED eyes, scoots around randomly, stores your face in the cloud ("remembers up to 1000 people"), and you can (as the kids say) boop the snoot. That's about it.
I'm trying to imagine someone going "Lovot, sometimes I don't want to go on. I'm sorry, I didn't mean that. Thank you for always listening" and it being all "coo chirp gigigi tweeeee" while wiggling its stupid little Lovot arms... and I just can't.
Also:
Mr. Tetzloff contacted Defendants to inquire about the reason for denying his claim. Defendant refused to provide any reason, stating that it is confidential.
wtf?
Insurance companies have played doctor for far too long in the US. It's so gross.
I don't think private insurance should be necessary at all, but given that it exists it should be much more regulated. The doctor, not the insurer, should decide what conditions a patient has and what care is necessary; and if the insurer said something is covered, they should have to pay up without arguing.
Like here in the lawsuit: they said an old guy in the last year of his life, who had muscle atrophy after breaking a leg and had just started PT, should go home, despite the doctor saying "shit's weak and paralyzed yo":
Defendants explained that there were no acute medical issues because the patient was self-feeding and required minimal help for hygiene and grooming. This determination went against the physical therapist's recommendation and notes describing Mr. Lokken's muscle functions as paralyzed and weak.
As if people are just hanging out in hospice care for fun or something! When I was in the hospital with pancreatitis, I was about ready to start flipping tables by the end, I wanted to go home so bad.
Asking employees to "bend" their perfectly sensible values like "I don't like homophobes" or "members of the KKK suck" is insane to me, but exactly the sort of thing a tech CEO would think would resonate with his workers.
I stay at my job not because I have molded my soul into a perfect vessel for my company's values (which, TBH, kind of suck), but because I have a mortgage payment.
(Also, as the header graphic points out, "love is at our core" and "inclusive environment" are apparently among their values, so maybe it's DigitalOcean that needs to bend to DigitalOcean's values.)
At least there's a happy ending:
A month after the all-hands meeting, in August 2023, DigitalOcean announced that it was conducting a search for a new CEO, but did not say why.
Patrick Soon-Shiong adding AI "bias meter" to the LA Times to convince readers it's not biased
Not AI in the sense of making up stories -- but [...] a bias meter
Saying the quiet part out loud.
"Listen guys I'm not going to make up stories. I'm going to make up numbers! That way it's harder for the public to spot the errors!"
Well maybe tech should stop taking over perfectly good words grumble grumble.
Well, half joking; there are much worse culprits than Godot (looking at you, Apple), and something like "Still Waiting For Godot" might have made the reference easier to get.
Someone looked at the "10,000-year-old dragon in the body of a thirteen-year-old" trope and thought "wait wait! I can make that worse!"
And if you think about it, you can literally get one of these PCs for one month, then win a Fortnite tournament or something and have enough money to buy your own PC.
Marketing to pre-teen gamers must be like shooting fish in a barrel.
whatever rot13ed word they used for cult.
It's impossible to read a post here without going down some weird internet rabbit hole, isn't it? This is totally off topic, but I was reading the comments on this old phyg post, and one of the comments said (seemingly seriously):
It's true that lots of Utilitarianisms have corner cases where they support action that would normally considered awful. But most of them involve highly hypothetical scenarios that seldom happen, such as convicting an innocent man to please a mob.
And I'm just thinking, riight, highly hypothetical.
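(And if you're wondering about the rot13 thing: "phyg" really is just "cult" rot13ed. Quick sanity check in Python:)

```python
import codecs

# rot13 shifts each letter 13 places; applying it twice round-trips.
print(codecs.encode("cult", "rot13"))  # -> phyg
```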
Wait, that's a strategy? I thought Mr. Musk was just in a sort of perpetual horny/lonely midlife crisis mode or something like that.
Ah yes Africa, the small country on the northern coast of Africa.
If I mathed right, that'd be one Waymo every 350 feet of road on average. Is that a lot? It sounds like it might be a lot. Especially since self-driving cars' greatest weakness appears to be driving in the vicinity of other self-driving cars.
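(For the shape of that estimate, here's the back-of-envelope math; the fleet size and road mileage below are placeholder numbers, not the article's figures:)

```python
# Back-of-envelope: average feet of road per robotaxi.
# Both inputs below are hypothetical placeholders, not reported figures.
FEET_PER_MILE = 5280

def feet_of_road_per_vehicle(road_miles: float, fleet_size: int) -> float:
    return road_miles * FEET_PER_MILE / fleet_size

# e.g. 1,000 miles of road and a 15,000-car fleet -> one car every ~352 feet
print(round(feet_of_road_per_vehicle(1_000, 15_000)))
```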
I woke up and immediately read about something called "Defense Llama". The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/
Scale AI advertised their chatbot as being able to:
apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities
However their marketing material, as is tradition, included an example of terrible advice. Which is not great, given that it's about blowing up a building "while minimizing collateral damage".
Scale AI's response to the news pointing this out -- complaining that everyone took their murderbot marketing material seriously:
The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.