Nah, you're just not good with maths. Programming languages are mathematical objects and denotational semantics is merely treating languages as categories and looking for functors leading out of them.
Semantics was originally studied as model theory, and today is phrased with category theory. You use this every day when you imagine what a program does in terms of machine effects.
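If you want the functorial flavor without the jargon, here is a toy compositional semantics in Python (my own sketch, not any standard library): each piece of syntax gets a meaning, and the meaning of a whole expression is assembled from the meanings of its parts.

```python
# A toy denotational semantics: map syntax to meanings, where a meaning
# is a function from environments to numbers. The mapping is
# compositional, which is exactly what makes it (loosely) functorial.
def num(n):      return lambda env: n
def ref(name):   return lambda env: env[name]
def add(e1, e2): return lambda env: e1(env) + e2(env)
def mul(e1, e2): return lambda env: e1(env) * e2(env)

# The denotation of x * (y + 2):
meaning = mul(ref("x"), add(ref("y"), num(2)))
print(meaning({"x": 3, "y": 4}))  # 18
```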
Incorrect. The hidden gold is Factor. You were close!
Extension modules are implemented in C because the interpreter is written in C. If it were written in another language, folks would write extension modules for that language instead. Also, it would be less relevant if people used portable C bindings like cffi, which are portable to PyPy and other interpreters… but they don't.
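For the record, a cffi binding can be as small as this sketch (ABI mode; I'm assuming a Unix-ish system where the C math library is discoverable as "m"), and it runs unchanged on CPython and PyPy:

```python
# A minimal cffi binding in ABI mode; no compiler needed at runtime.
from cffi import FFI

ffi = FFI()
ffi.cdef("double cbrt(double x);")  # declare the C function we want
libm = ffi.dlopen("m")              # let cffi locate the math library
print(libm.cbrt(27.0))              # 3.0
```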
You tried to apply far too much pressure over too large a surface area. Either take a more focused approach by not chasing Free Software and XMPP supremacy at the same time, or find ambient ways to give people options without forcing them to make choices in the direction you want. In particular, complaining about bridges usually doesn't get the discussion anywhere useful; instead, try showing people on the other side of the bridge how wonderful your experience is.
Also, I get that you might not personally like IRC, but you need to understand its place in high-reliability distributed systems before trying to replace it; the majority of them use IRC instead of XMPP for their disaster recovery precisely because its protocol jankiness makes it easier to wield in certain disaster situations.
At some point, reading kernel code is easier than speculating. The answer is actually (3): there are multiple semantics for filesystems in the VFS layer of the kernel. For example, XFS is the most prominent user of the "async" semantics; all transactions in XFS are fundamentally asynchronous. By comparison, something like ext4 uses the "standard" semantics, where actions are synchronous. These correspond to filling out different parts of the VFS structs and registering different handlers for different actions; they might as well be two distinct APIs. It is generally suspected that all filesystem semantics are broken in different ways.
Also, "hobby" is the wrong word; the lieutenant doing the yelling is paid to work on Linux and Debian. There are financial aspects to the situation; it's not solely politics or machismo, although those are both on display.
Watch the video. Wedson is being yelled at by Ted Ts'o. If the general doesn't yell, but his lieutenants yell, is that really progress? I will say that last time I saw Linus, he was very quiet and courteous, but that was likely because it was early morning and the summit-goers were just starting on their breakfast and coffee.
How much more? When it comes to whether I'd write GPU drivers for money, I can tell you that LF doesn't pay enough, Collabora doesn't pay enough, Red Hat doesn't pay enough, and Google illegally depressed my salary. Due to their shitty reputations, Apple, Meta, and nVidia cannot pay enough. And AMD only hires one open-source developer every few years. Special shoutout to Intel, who is too incompetent to consider as an employer.
I want to run PipeWire as a system user and have multiple login users access it. My current hack is to run it as one login user and then do something like:
export XDG_RUNTIME_DIR=/run/user/1001
where 1001 is the user ID. Is there a cleaner approach?
This is basically what you said here, and it's still wrong: social dynamics, not money, are the main reason why young hackers (don't) work on Linux. I'm starting to suspect that you've not hacked on the kernel before.
Well, I don't want to pull the kernel-hacker card, but it sounds like you might not have experienced being yelled at by Linus during a kernel summit. It's not fun and not worth the money. Also, it's well known that LF can't compete with e.g. Collabora or Red Hat on salary, so the only folks who stick around and focus on Linux infrastructure for the sake of Linux are bureaucrats, in the sense of Pournelle's Iron Law of Bureaucracy.
I already helped build a language, Monte, based on Python and E. Guido isn't invited, because he doesn't understand capabilities; I've had dinner with him before, and he's a nice guy, but he's not really deep into theory.
Sounds like it's time to start training code-writing models on leaked Microsoft source code. Don't worry, it's not like it'll "emit memorized code".
Feel free to say anything on-topic. Right now you're in Reddit mode: nothing to contribute, but really eager to put words into the box. This is a Wolfram article; you could be on-topic with as little as "lol wolfram sux".
I don't think that this critique is focused enough to be actionable. Explaining why a neural network made a particular decision isn't conceptually hard, but the effort scales with the size of the network, and LLMs are quite large, so the total effort is high. See recent posts by (in increasing disreputability of sponsoring institution) folks at MIT and the University of Cambridge; Cynch.ai; Apart Research and the University of Cambridge; and LessWrong. (Yep, even the LW cultists have figured out neural-net haruspicy!)
I was hoping that your complaint would be more like Evan Miller's Transformers note, which lays out a clear issue in Transformer arithmetic and offers a possible solution. If this seems over your head right now, then I'd encourage you to take it slowly and carefully study the maths.
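For reference, Miller's proposed fix, as I read it, is a one-character change to softmax so that an attention head is allowed to attend to nothing; here's a numpy sketch (mine, so blame me for any bugs):

```python
# Standard softmax forces the weights to sum to 1; Miller's variant adds
# 1 to the denominator so a head can "abstain" when every score is low.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # numerically stabilized
    return e / e.sum()

def softmax1(x):
    e = np.exp(x - x.max())
    return e / (np.exp(-x.max()) + e.sum())  # the "+1", pre-stabilization

scores = np.array([-4.0, -4.0, -4.0])
print(softmax(scores))   # sums to 1: attention is forced somewhere
print(softmax1(scores))  # sums to ~0.05: the head mostly abstains
```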
Dude, we all saw your anti-woke meltdown. Nobody is taking what you say seriously.
I think that the mistake is thinking that "smart" is a meaningful word. I'd encourage you to learn about the technology you're critiquing and not listen to memetic bullshit from articles like the one we're discussing. Consider:
- AI/cybernetics/robotics (same field, different perspectives) is only ever useful for specific tasks, never as a general replacement for humans
- Black-box treatments of machine learning are only done at the most introductory level; there are several ways to examine how e.g. a Transformers-based language model's weights contribute to its outputs (see the sketch after this list)
- We have many useful theories about how to learn functions in general, with machine learning as a special case
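To make the second point concrete, here's one non-black-box probe, sketched with the Hugging Face transformers library (GPT-2 because it's small; the idea is the same for larger models):

```python
# Pull the attention matrices out of GPT-2 and see how much weight each
# head puts on earlier tokens at the final query position.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_attentions=True)

text = "The cat sat on the"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions holds one (batch, heads, seq, seq) tensor per layer.
last = out.attentions[-1][0]        # final layer: (heads, seq, seq)
weights = last[:, -1, :].mean(0)    # average heads, final query position
for token, w in zip(tok.tokenize(text), weights.tolist()):
    print(f"{token!r}: {w:.3f}")
```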
This has happened before and it will happen again. I'm sure you've seen the phrase "AI winter" floating around.
This was a terrible article from a serial plagiarist who refuses to do work or cite sources.
> But at a fundamental level we still don’t really know why neural nets “work”—and we don’t have any kind of “scientific big picture” of what’s going on inside them.
Neural networks are Turing-complete just like any other spreadsheet-style formalism which evolves in time with loops. We've had several theories; the best framework is still PAC learning, which generalizes beyond neural networks.
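For concreteness, the (agnostic) PAC criterion: a hypothesis class $\mathcal{H}$ is learnable when some algorithm $A$ and sample-complexity function $m_{\mathcal{H}}$ satisfy, for every distribution $\mathcal{D}$ and every $\varepsilon, \delta \in (0,1)$,

$$\Pr_{S \sim \mathcal{D}^m}\left[ L_{\mathcal{D}}(A(S)) \le \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \varepsilon \right] \ge 1 - \delta \quad \text{whenever } m \ge m_{\mathcal{H}}(\varepsilon, \delta).$$

Note that nothing in the definition mentions neural networks.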
> And in a sense, therefore, the possibility of machine learning is ultimately yet another consequence of the phenomenon of computational irreducibility.
This is masturbatory; he just wants credit for Valiant's work and is willing to use his bullshit claims about computation as a springboard.
> Instead, the story will be much closer to the fundamentally computational “new kind of science” that I’ve explored for so long, and that has brought us our Physics Project and the ruliad.
The NKoS programme is dead in the water because — as has been known since the late 1960s — no discrete cellular automaton can possibly model quantum mechanics. Multiple experts in the field, including Aaronson in quantum computing and Shalizi in machine learning, have pointed out the utter futility of this line of research.
Show your list of packages or shut the fuck up already.
I'm the only one talking to you. You've convinced nobody else that you're even worth speaking to. Honestly, you sound like the weenie who tried to publish that bootlicking pro-military letter. Wanna go be the second person to sign it? You certainly aren't doing anything worthwhile with your time here on Lemmy.
[YouTube video]
I'm happy to finally release this flake; it's been on my plate for months but bigger things kept getting in the way.
Let me know here or @corbin@defcon.social if you successfully run any interpreter on any system besides amd64 Linux.
[YouTube video]
Thanks to Samantha Cole at 404 Media, we are now aware that Automattic plans to sell user data from Tumblr and WordPress.com (which is the host for my blog) for “AI” products. In respon…
The abstract:
> This paper presents μKanren, a minimalist language in the miniKanren family of relational (logic) programming languages. Its implementation comprises fewer than 40 lines of Scheme. We motivate the need for a minimalist miniKanren language, and iteratively develop a complete search strategy. Finally, we demonstrate that through sufficient user-level features one regains much of the expressiveness of other miniKanren languages. In our opinion its brevity and simple semantics make μKanren uniquely elegant.
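For anyone who'd rather read Python than Scheme, here's a rough transliteration of the core (my own sketch, with names following the paper; any bugs are mine). A goal maps a state to a stream of states, and the interleaving in disj is what makes the search complete:

```python
class Var:
    """A logic variable; identity, not structure, distinguishes them."""
    def __init__(self, c):
        self.c = c

def walk(u, s):
    # Chase a variable through the substitution s until it bottoms out.
    while isinstance(u, Var) and u in s:
        u = s[u]
    return u

def unify(u, v, s):
    u, v = walk(u, s), walk(v, s)
    if u is v:
        return s
    if isinstance(u, Var):
        return {**s, u: v}
    if isinstance(v, Var):
        return {**s, v: u}
    if isinstance(u, tuple) and isinstance(v, tuple) and len(u) == len(v):
        for a, b in zip(u, v):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return s if u == v else None

def eq(u, v):
    def goal(state):
        s, c = state
        s2 = unify(u, v, s)
        if s2 is not None:
            yield (s2, c)
    return goal

def fresh(f):
    def goal(state):
        s, c = state
        return f(Var(c))((s, c + 1))
    return goal

def disj(g1, g2):
    def goal(state):
        # Round-robin between the two streams so that one infinite
        # stream cannot starve the other.
        streams = [g1(state), g2(state)]
        while streams:
            stream = streams.pop(0)
            try:
                yield next(stream)
            except StopIteration:
                continue
            streams.append(stream)
    return goal

def conj(g1, g2):
    def goal(state):
        for st in g1(state):
            yield from g2(st)
    return goal

# A taste of usage:
g = fresh(lambda q: disj(eq(q, "tea"), eq(q, "coffee")))
for s, _ in g(({}, 0)):
    print(s)  # two substitutions, binding q to "tea" and to "coffee"
```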
Colored Functions and Monadic Effects (GitHub Gist)
Everybody's talking about colored and effectful functions again, so I'm resharing this short note about a category-theoretic approach to colored functions.
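The one-paragraph version: a function's "color" is a monad M, a colored function is an arrow a → M b, and colored functions compose in the Kleisli category of M. A toy illustration in Python (my own, not necessarily the gist's code), using Optional as the monad:

```python
# Treat "may fail" as a color: a colored function returns Optional, and
# composition happens in the Kleisli category, short-circuiting on None.
from typing import Callable, Optional, TypeVar

A, B, C = TypeVar("A"), TypeVar("B"), TypeVar("C")

def kleisli(f: Callable[[A], Optional[B]],
            g: Callable[[B], Optional[C]]) -> Callable[[A], Optional[C]]:
    def h(x: A) -> Optional[C]:
        y = f(x)
        return None if y is None else g(y)
    return h

def safe_div(x: float) -> Optional[float]:
    return None if x == 0 else 1.0 / x

def safe_sqrt(x: float) -> Optional[float]:
    return None if x < 0 else x ** 0.5

recip_sqrt = kleisli(safe_div, safe_sqrt)
print(recip_sqrt(4.0))  # 0.5
print(recip_sqrt(0.0))  # None: the color propagates through composition
```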