There's a pretty big difference between ChatGPT and the science/medicine AIs.
And keep in mind that for LLMs and other chatbots, it's not that they aren't useful at all but that they aren't useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot add-ons in Microsoft 365, and this is while AI companies are still in their "sell below cost and light VC money on fire to survive long enough to gain market share" phase. What happens when the VC money dries up and AI companies have to double their prices (or more) to make enough revenue to cover their costs?
Good luck, I'm hoping that I can get a worker's visa through my employer to move to the UK myself.
Okay, but consider: The Des Moines Register came out with a poll from Selzer today showing Kamala up by three in Iowa. For context, Selzer is considered an S-tier pollster; in 2020 they had Trump beating Biden by 7 points in Iowa, foreshadowing that the Rust Belt states would be much closer than other polls were showing, and they were only one point off the actual result (Trump +8).
They got the 2008 caucus right, surprising everyone by showing that some upstart named Barack Obama was mounting a real challenge to Clinton. They called 2008, 2012, and 2016 correctly too. You have to go back to 2004 for a sizable miss in a presidential election, when their poll showed Kerry up by five and Bush won by less than a percent. They also had a 5-point miss in a state-level race in 2018, but that's as bad as it gets.
Even if there's a similar 5-point miss here, going from Trump +8 to Trump +2 in Iowa is catastrophic for his campaign. Hell, even a 7 or 8-point polling miss by Selzer would still seal Trump's fate. Iowa votes pretty similarly to Michigan, Wisconsin, and Pennsylvania, so even if Selzer missed by more than she ever has before, a 3-4 point swing away from Trump in those three states would be more than enough to all but guarantee victory for Harris.
And if the results are anywhere close to this poll, you start seeing knock-on effects elsewhere and really crazy shit starts happening, like Harris winning every swing state with Florida or Texas potentially turning blue, giving Harris over 400 electoral votes. While it's unlikely to actually happen this election, the fact it's even in the realm of possibility is giving actual legitimate hope (instead of industrial-strength cognitive dissonance) for the first time since the Biden debate. It also bodes well for down-ballot races--we may see Allred finally kicking Cruz out of the Senate (and good fucking riddance).
This is gonna turn into the gamer version of "this is extremely dangerous to our democracy" isn't it
Cool story, too bad it's inaccurate. In every state in the USA, if you are waiting in line when polls close you have the right to remain in line and cast your vote, regardless of how long it takes.
If you were told to go home because the polls were closed and they weren't accepting further voters, then you were lied to.
That first GIF is from Team America: World Police, it's a Bush-era film from the guys who made South Park. It aged pretty badly in a lot of ways since it's lampooning the War on Terror, but it's still hilarious IMO. Worth a watch if you like the idea of South Park as an R-rated puppet movie.
Yup, being nice and polite to the people helping you is the single biggest way to get them to look the other way or have them bend the rules for you. The instant you start playing the asshole card, you usually get strict by-the-letter policy.
It's amazing how many people forgot about the classical "get a rise out of everyone with shitty arguments" troll, or forgot that the way to deal with them was to ignore and ban on sight. Fuck, I was practically in diapers when Usenet and BBSes were a thing and I still remember "don't feed the troll."
As others have said, it's a very snowbally game. The various characters all grow naturally stronger over the course of the game through gold (to buy items) and experience that you earn by killing minions. The problem is that killing an enemy player and destroying enemy towers grants a lot of gold and experience, so if you fuck up and die (or if you get ganged up on by the enemy team) you can end up making your opponent much stronger. Even if you live and are forced to return to base to heal, the opportunity for free farm or destroying your tower (which also makes it riskier for you to push forward) can make your opponent a lot stronger than you, which lets him kill you easier, which makes him stronger. This can also spill over to other lanes, where the opponent you made stronger starts killing your teammates and taking their towers.
There are ways to overcome this snowball--players on killstreaks are worth more gold when they die, you can gang up on a fed opponent and catch them out to nullify their stat advantage, and you can try to help other lanes to strengthen your team. The champions also scale differently: some get a lot of front-loaded baseline damage while others scale better with items, and a select few have theoretically infinite scaling (but are generally much weaker in other areas to compensate). Worst case, this means your team can play super defensively and try to wait out the advantage until they catch up, then win from there. The problem is that all of this requires A) communication and the ability to quickly adapt from your teammates, B) the opposing team screwing up and failing to press their advantage, and C) your team being willing to try (which may mean dragging the game out for over an hour). Needless to say, this is not always the case, and this design makes it very easy to blame another player for the loss (warranted or not).
Brought to you by the American National Automation Laboratory Corp?
I mean, you don't have to go full-blown fursuit and conventions if you don't want to. Most furries never actually bother with fursuiting--speaking from personal experience, it's hot as shit (especially outdoors or in summer), you can barely see or hear anything, and if you wear glasses they're prone to getting knocked off your nose or fogging up so badly that you can't see anything. Many fursonas exist exclusively in artwork or stories--either commissioned or self-drawn--and even that's optional.
You don't even have to actively participate in the community if you don't want to. Many furries are passive members who just follow artists, lurk in streams or group chats, occasionally leave a comment on a submission, and generally exist in furry spaces. Literally the only requirement to be a furry is to say you're a furry!
Honestly, don't stress yourself out over it, and keep an open mind. It might not be your cup of tea, and that's perfectly fine--there undoubtedly is a large sexual aspect to furry, and lots of folks (especially folks who are cisgender, heterosexual, have a less relaxed view of sexuality, etc.--not to say that you can't be a straight male furry, but there are a LOT of gay/bi furries) may find that a dealbreaker. Ultimately, furry has its roots in the nerd and geek communities, back when being nerdy or geeky was something to be bullied over, and it still shows today.
Furry is a community with a disproportionate number of LGBT+ folks, neurodivergent folks (especially people on the ADHD/autism spectrum), and other marginalized groups. Among other things, this means it revels in being proudly and unabashedly weird, both as a celebration of itself and as a defense mechanism against the kinds of business interests that would love nothing more than to push out all the sexuality and weirdness to provide a safe space for advertisers to shovel their slop down our throats.
If that sounds like something you'd enjoy being a part of, then I'd suggest checking out some places like the furry_irl subreddit, looking up streamers under the furry tag on Twitch (Skaifox, WhiskeyDing0, etc.), maybe make an account on FurAffinity, and look up furmeets or conventions in your area you can attend. You might not like it, or you might find yourself joining the best community I've ever been part of.
Yeah, definitely. Furry encompasses basically anything that's a non-human anthropomorphic creature. I've seen fursonas based on birds, sharks, dolphins, turtles, rhinos, dinos, frogs, hippos, orcas, dragons, reptiles, plant creatures... hell, there are alien species like sergals and avalis, anthro/machine hybrids like protogens, and even entirely robotic characters.
It's just called furry because furred species are the most common, and the original community that splintered off from sci-fi conventions in the 70s and 80s and grew through fanzines pre-Internet largely used furred species for their characters. ("Fun" fact, the early community had a lot of skunk characters, which is why one of the first derogatory terms for furries was "skunkfucker.")
Oh yes, let me just contact the manufacturer for this appliance and ask them to update it to support automated certificate renewa--
What's that? "Device is end of life and will not receive further feature updates?" Okay, let me ask my boss if I can replace i--
What? "Equipment is working fine and there is no room in the budget for a replacement?" Okay, then let me see if I can find a workaround with existing equipme--
Huh? "Requested feature requires updating subscription to include advanced management capabilities?" Oh, fuck off...
Believe it or not this is exactly how most furries make their fursona
I keep thinking of the anticapitalist manifesto that a spinoff team from the Disco Elysium developers dropped, and this part in particular stands out to me and helps crystallize exactly why I don't like AI art:
All art is communication — dialogue across time, space and thought. In its rawest, it is one mind’s ability to provoke emotion in another. Large language models — simulacra, cold comfort, real-doll pocket-pussy, cyberspace freezer of an abandoned IM-chat — which are today passed off for “artificial intelligence”, will never be able to offer a dialogue with the vision of another human being.
Machine-generated works will never satisfy or substitute the human desire for art, as our desire for art is in its core a desire for communication with another, with a talent who speaks to us across worlds and ages to remind us of our all-encompassing human universality. There is no one to connect to in a large language model. The phone line is open but there’s no one on the other side.
Yeah, suuuuure you weren't.
Note that the proof also generalizes to any form of creating an AI by training it on a dataset, not just LLMs. But sure, we'll absolutely develop an entirely new approach to cognitive science in a few years, we're definitely not boiling the planet and funneling enough money to end world poverty several times over into a scientific dead end!
You literally were LMAO
Other than that, we will keep incrementally improving our technology and it's only a matter of time untill we get there. May take 5 years, 50 or 500 but it seems pretty inevitable to me.
Literally a direct quote. In what world is this not talking about LLMs?
Did you read the article, or the actual research paper? They present a mathematical proof that any hypothetical method of training an AI that produces an algorithm performing better than random chance could also be used to solve a problem known to be intractable, which no known method can do efficiently. This means that any algorithm we can produce by training an AI would run in exponential time or worse.
The paper's authors point out that this has severe implications for current AI, too--since the AI-by-learning method that underpins all LLMs is fundamentally NP-hard and can't run in polynomial time, "the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n." They present a thought experiment of an AI handling a 15-minute conversation at 60 words per minute (keep in mind the average is roughly 160). The input size n for that conversation would be 60 * 15 = 900 words. The authors then conclude:
"Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼10^81). Imagine us sampling this super-astronomical space of possible situations using so-called 'Big Data'. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a miniscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n."
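The quoted arithmetic is easy to sanity-check. A few lines of Python (just a back-of-the-envelope sketch of the magnitudes in the quote, not anything from the paper itself) reproduce the numbers:

```python
import math

# Conversation from the thought experiment: 15 minutes at 60 words per minute.
n = 60 * 15  # n = 900 words

# Order of magnitude of 2^n: log10(2^900) = 900 * log10(2) ≈ 270.9,
# matching the quote's "2^900 ~ 10^270".
exponent = int(n * math.log10(2))
print(f"2^{n} is on the order of 10^{exponent}")

# Even 10^21 "Big Data" samples cover only about 10^(21 - 270) = 10^-249
# of that space -- a vanishingly small fraction.
print(f"fraction of the space sampled: ~10^{21 - exponent}")
```

Even granting the paper's generous assumptions, the sample space dwarfs any conceivable dataset by hundreds of orders of magnitude.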
That's why LLMs are a dead end.
Or they'll do shit like put Harris on full blast for not providing "detailed policies," then move the goalposts to "but how do you pay for it" when she does, and nitpick every word of every sentence she says. Meanwhile, Trump will cancel interviews, go up on stage at a rally, spew a word-salad response, and the NYT will bend over backwards to reword the salad to make him look better, while casting his decision to dodge a second debate as "smart" and his avoiding any form of scrutiny as "efficient use of campaign funds." At best, they'll halfheartedly throw in a fact check like "his plan to fix inflation by levying tariffs will increase inflation," but they don't dare portray him as the senile, hate-filled lunatic he is because they're terrified of angering their right-wing audience (who are already shifting away from legacy media anyway to reinforce their bubble). They also do this because virtually all forms of legacy media have been co-opted by the billionaire sociopaths who would very much like a second Trump term to give them another tax cut and the "freedom" to pollute our world and grind the heel of their boot into the face of the working class so that they can race to become the first trillionaire.
This is a fairly persistent issue that appears to be exclusive to Connect, and extremely annoying.
Any time a post accumulates a very large number of comments (say, 300 or more), Connect will eventually just... stop loading additional comments. At first, scrolling down will load a few more top-level comments, but eventually it gives up and acts like there are no more comments to load, even though Connect has loaded fewer than 50 comments out of a 1,000+ comment megathread. Worse yet, if a user direct-links to a comment in one of those megathreads, Connect will load a completely empty comment thread. This issue doesn't occur on Voyager or Jerboa, nor on the web UI.
I won't link specific posts for obvious reasons, but there were multiple posts here that were removed by community moderators but were still visible in the Connect app: https://lemmy.world/post/1468971
Needless to say, getting blasted with a bunch of rhetoric vile enough to warrant moderation is not the way I wanted to start my day.