Sure. I think being honest is a solid choice, generally speaking. There is some etiquette. If you're way too direct, you might be perceived as a creep. But you certainly have to do something, or it won't lead anywhere.
Telling people you want to stay in contact, or you think they're attractive, or you like their outfit, or whatever people do for flirting seems to be alright. Some people crack jokes and try to be funny, or interesting... Whatever floats your boat. I think the one important thing is to read the room. See if they're comfortable. And if they enjoy talking to you, or if you've just cornered them and are monologuing. Most (not all) people can do that. And I'd say as long as everyone is comfortable, it's the right thing. I mean you have to send some signals for them to know what's up with you. So yeah, that kind of directness might be helpful. And after that, spending time together (and not just in a larger group) is a signal, too, in my opinion.
I don't think there is any general, correct way of doing it. It just depends on the situation, on who you are, and especially what the other person likes.
I don't know why everyone else here says "No." Maybe it's down to preference. I usually like people not just for their outer appearance, but to a greater degree for their intelligence, wit, humor, a similar perspective on life... And it just takes time to talk about all of that. So I'd rather tone down the suggestiveness and just let things play out. Took me a long time. But everyone is different.
I'm not sure I have a good definition of flirting. I'm more of a problem-oriented person; I do whatever gets the job done. If I want to meet someone again, I just tell them that, as you said. And I usually don't have any ulterior motives. I'm currently not in the dating game, so I'm pretty relaxed at parties and social events in that regard. But I think I've always gone to social events to have fun, and not so much to date.
It depends a bit on who your target audience is. I think it's usually a good idea to be roughly who you are and not play some role. But I'm not a dating expert, so I might be wrong.
I'd say yes. That'd be a clear sign, and bordering on what I'd call flirting. If you say "Hey, I really enjoyed that conversation, let's meet for a coffee some day, how can I text you?", I'd say that's polite and does the job. And there's no need to be super explicit, unless you want to initiate a one-night stand.
Isn't flirting the accepted way of signaling to another person that you're interested in them in a certain way? I mean, I talk to lots of different people of different genders in my life. And I'm mostly very nice to people and find interesting topics to talk about. But how are they supposed to find out whether it's just a nice conversation, or whether I want to meet them again, or go on a date with them?
Maybe iTunes? I haven't used it, so I don't know if it has algorithms.
Ah yeah, I forgot about watches and jewelry. I guess you can buy a lot of them and they won't take up that much space. I'd stick with one or two, though. Make it a very nice one you really like and wear it all the time. IMO it doesn't really help to buy 20 half-nice watches and keep 19 of them in one of your multiple wardrobes; that's just hoarding stuff... The same applies to shoes, though you might be allowed a few more pairs of those. But what do I know...
There has to be more to this story, though. Chartering an entire private jet costs a few thousand to 15,000 dollars per hour. You could do that twice a week on that budget. Or buy lots of fancy food, electronic gadgets and Gucci bags, maybe even cars. But don't you quickly run out of space to put them? So how would someone spend 100k?
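Back-of-the-envelope, just to show the charter figures line up (everything here is a loose assumption, including reading the 100k as a monthly budget):

```python
# Rough check on how fast jet charters eat through the money.
# Assumptions: ~$12k per flight hour, two one-hour flights a week.
cost_per_hour = 12_000
flights_per_week = 2
weeks_per_month = 4.3

monthly = cost_per_hour * flights_per_week * weeks_per_month
print(f"~${monthly:,.0f} per month")  # ≈ $103,200, so about 100k gone
```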
That laptop should be a bit faster than mine. It's a few generations newer, has DDR5 RAM and maybe even proper dual-channel memory. As far as I know, LLM inference is almost always memory-bound. That means the bottleneck is your RAM speed (and how wide the bus between CPU and memory is). So whether you use SYCL, Vulkan or even the CPU cores shouldn't have a dramatic effect. The main thing limiting speed is that the computer has to transfer gigabytes worth of numbers from memory to the processor on each step. So the iGPU or processor spends most of its time waiting for memory transfers. I haven't kept up with development, so I might be wrong here, but I don't think more than single-digit tokens/sec is possible on such a computer. It'd take a workstation or server with multiple separate memory banks, or something like a MacBook with Apple silicon and its unified memory. Or a GPU with fast VRAM. Though you might be able to do a bit more than 3 t/s.
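As a rough sanity check (the numbers below are assumptions, not measurements): if every generated token has to stream roughly the whole model from RAM once, then memory bandwidth divided by model size gives a hard ceiling on tokens/sec. A minimal sketch:

```python
# Upper bound for memory-bound LLM inference: each token requires
# streaming (roughly) the entire model from RAM to the processor.
# Both numbers below are ballpark assumptions.
model_size_gb = 4.5    # e.g. an 8B model at a ~4.5 bits/weight quant
bandwidth_gb_s = 80.0  # rough dual-channel DDR5 throughput

ceiling_tps = bandwidth_gb_s / model_size_gb
print(f"theoretical ceiling: ~{ceiling_tps:.0f} tokens/sec")
# Real systems reach only a fraction of this ceiling, which is why
# single-digit tokens/sec on an iGPU laptop is plausible.
```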
Maybe keep trying the different computation backends. Have a look at your laptop's power settings as well. Mine is a bit slow on the default "balanced" power profile; it speeds up once I set it to "performance" or gaming mode. And if you can't get llama.cpp compiled, maybe try Ollama or Koboldcpp instead. They use the same framework underneath and might be easier to install. SYCL might prove to be a bit of a letdown, though. It's nice, but it seems few people use it, so it might not be very polished or optimized.
What is there to do?
I'd say a national strike would be in order. All people in the country should refuse to work for like a week and spend that time protesting on the streets. And demand democracy and food for their children.
I'm not sure what kind of laptop you own. Mine does about 2-3 tokens/sec running an 8B-parameter model, so your last try seems about right. Concerning the memory: llama.cpp can load models "memory-mapped". That means the system decides which parts to load into memory as they're needed. The model might be all in there, but it doesn't count as active memory usage; I believe it counts towards the "cached" value in the statistics. If you want to make sure, you have to force it not to memory-map the model. In llama.cpp that's the `--no-mmap` parameter.
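If you'd rather check the mmap behavior from code than from the CLI, here's a minimal sketch using the llama-cpp-python bindings (the model path is a placeholder; as far as I know, `use_mmap=False` is the binding's equivalent of `--no-mmap`):

```python
from llama_cpp import Llama

# use_mmap=False forces the whole model into resident memory up front,
# the equivalent of llama.cpp's --no-mmap flag. With the default
# use_mmap=True the file is memory-mapped and mostly shows up as
# "cached" rather than as the process's own memory usage.
llm = Llama(
    model_path="./model.gguf",  # placeholder path
    use_mmap=False,
    n_ctx=2048,
)

out = llm("The capital of France is", max_tokens=8)
print(out["choices"][0]["text"])
```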
I have no idea how to do that in gpt4all-chat. But I'd say the model is already loaded in your case; it just doesn't show up as used memory because of the mmap behavior.
Maybe try a few other programs as well, like Ollama, Koboldcpp or llama.cpp, and see how they do. And I wouldn't run full-precision models on an iGPU. Keep to quantized models: Q8 or Q5... or Q4...
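To put rough numbers on why quantization matters on an iGPU (the bits-per-weight figures are approximate averages for common GGUF quant types, not exact):

```python
# Approximate GGUF file size: parameter count * bits_per_weight / 8.
# Real files carry some extra overhead on top of this.
params = 8e9  # an 8B-parameter model
for quant, bpw in [("F16", 16.0), ("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.8)]:
    print(f"{quant}: ~{params * bpw / 8 / 1e9:.1f} GB")
# F16 lands around 16 GB, which crowds out the OS on a 16 GB laptop,
# while Q4/Q5 keeps the same model under ~6 GB.
```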
Update? Seems hexbear is online again.
Couldn't agree more. And a phone number is kind of important. I don't want to hand it out to 50 random companies for "security" and tracking, only for them to sell it to advertisers. Or lose it to hackers, which also happens regularly. And I really don't like pulling down my pants for Discord (or whoever) to inspect my private parts.
Btw, the cross-post still leads to an error page for me.
I think interoperability works with centralized services as well. They can offer an API for other services to hook into. Like Reddit had different apps, bots, tools... You can connect your software to the Google cloud, even if it's made by a different company... I think interoperability works just fine with both models, at least on the technical side.
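As a concrete illustration of the Reddit example (hedged: this uses the old public JSON endpoints, which may be rate-limited or restricted these days), any third-party client could hook in with a plain HTTP call, no federation needed:

```python
import json
import urllib.request

# Reddit exposes most listings as JSON by appending .json to the URL;
# this is the kind of API that third-party apps, bots and tools built on.
req = urllib.request.Request(
    "https://www.reddit.com/r/programming/top.json?limit=3",
    headers={"User-Agent": "interop-demo/0.1"},  # placeholder UA string
)
with urllib.request.urlopen(req) as resp:
    listing = json.load(resp)

for post in listing["data"]["children"]:
    print(post["data"]["title"])
```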
Yeah, I hope we someday manage to transition to renewables and get cheap and relatively clean energy. I'm living in a country which isn't sitting on huge oil reserves, so I'd say it'd be clever if we made an effort... And we kind of do. But it's probably a bit uncoordinated. And there are people lobbying for the opposite... (And seems it's a big undertaking.)
I hope AI is going to get a bit more democratized in the future. And, as you said, more efficient. It'll probably be a combination of factors: more efficient hardware, custom-built LLMs tailored to specific use cases, scientific progress... I'd like more affordable hardware to run LLMs at home. I think something like Apple processors with their "unified memory" might be promising. I've heard LLMs run pretty well on modern MacBooks, without any separate gaming graphics card.
And I'm not even sure how it'll turn out. Sure, the AI companies predict a high demand for AI, and they're building datacenters and need new power plants to power all of that. But I'm not totally convinced. Maybe that's part of the big AI hype, and it'll turn out there is far less demand than what they tell their investors. Or they're unable to keep up the pace and it'll take longer until AI is intelligent enough to do all the things they envisioned. AI will be some part of the world's electricity bill, though.
I'd say this is unlikely to work out. It mainly combines the downsides of the two approaches: the centralization makes it less free and diverse and gives power to a few people, while the decentralization adds unnecessary complexity. At that point it's mainly one large instance that has to send out loads of network traffic just to keep a few people elsewhere in the loop. Why not make it 100% centralized, then? That'd make programming and maintenance way easier.
Maybe Google reverse image search helps? Or you just report them and let someone else check on this.
Yeah, I just think it's dumb that it's a mandatory fee. And unfair, too, because it's not tied to income or anything like that. I'd handle it through taxes. And trim it down a bit; nobody needs, for example, the same election-night show with the same opinions, just produced once by ARD and once by ZDF. For me the limit would be 15€ a month; I also cancelled Netflix when they got even more expensive. Only here I don't get that choice. And they should cooperate more with other public broadcasters, e.g. buy Dr. Who from the BBC. And if we're looking to America anyway, I think we should take a lesson from the idea that things financed collectively also belong to the citizens. I think we'd be entitled to usage rights, seeing as we're the ones paying for it...
But yes, in general we're much better off with our media than some other countries, especially those two. And here people do mostly make an effort to practice actual journalism. There are actually plenty of nice shows on our public broadcasters. And something like the Aktuelle Stunde or regional programming would probably be hard to finance any other way.
Lol, "Kein Selbstbedienungsladen", sagen grad die Richtigen, die sich jedes Jahr einfach mal so 220€ aus meinem Portemonnaie gönnen...
Uh, idk. There's also a list detailing that: https://join.piefed.social/features/
It's different software that connects you with the same communities and people. It just has slightly different features, a bit more control here and there, a few perks, a different design philosophy, and it's written in an entirely different programming language, which affects participation, maintainability, resource usage... You can see how it looks, for example, at https://piefed.social/ I always struggle to describe the detailed differences, because there are a lot of them, and it depends a lot on what's important to you and what you're used to. It's a bit like describing how a banana tastes, IMO. You'd better have a look yourself.
Yes, it's currently being worked on. And in the meantime, it can already be used as a "progressive web app": https://join.piefed.social/docs/piefed-mobile/
A proper(?) app is on the 2025 roadmap, and development of the API and related things already started.

Is there a working Spotify downloader that actually downloads from Spotify?