![programming_languages](https://programming.dev/pictrs/image/aa43d40c-a1ab-48e1-bc89-2f60606741b9.png?format=webp&thumbnail=48)
Programming Languages
- Bril: An Intermediate Language for Teaching Compilers (www.cs.cornell.edu)
> I created Bril, the Big Red Intermediate Language, to support the class's implementation projects. Bril isn't very interesting from a compiler engineering perspective, but I think it's pretty good for the specific use case of teaching compilers classes. Here's a factorial program:
>
> ```ruby
> @main(input: int) {
>   res: int = call @fact input;
>   print res;
> }
>
> @fact(n: int): int {
>   one: int = const 1;
>   cond: bool = le n one;
>   br cond .then .else;
> .then:
>   ret one;
> .else:
>   decr: int = sub n one;
>   rec: int = call @fact decr;
>   prod: int = mul n rec;
>   ret prod;
> }
> ```
> Bril is the only compiler IL I know of that is specifically designed for education. Focusing on teaching means that Bril prioritizes these goals:
>
> - It is fast to get started working with the IL.
> - It is easy to mix and match components that work with the IL, including things that fellow students write.
> - The semantics are simple, without too many distractions.
> - The syntax is ruthlessly regular.
>
> Bril is different from other ILs because it prioritizes those goals above other, more typical ones: code size, compiler speed, and performance of the generated code.
>
> Aside from that inversion of priorities, Bril looks a lot like any other modern compiler IL. It's an instruction-based, assembly-like, typed, ANF language. There's a quote from why the lucky stiff where he introduces Camping, the original web microframework, as "a little white blood cell in the vein of Rails." If LLVM is an entire circulatory system, Bril is a single blood cell.
- GLisp: Graphical Lisp (interactive)
> Glisp is a Lisp-based design tool that combines generative approaches with traditional design methods, empowering artists to discover new forms of expression.
> Glisp literally uses a customized dialect of Lisp as its project file format. Following Lisp's code-as-data concept, the project file is simultaneously a program that generates an output and a tree structure representing an SVG-like list of shapes. Even a large part of the app's built-in features is implemented in the same syntax as project files. Thanks to this homoiconicity, artists can dramatically hack the app and transform it into a tool specialized for various realms of graphics -- daily graphic design, illustration, generative art, flow-chart drawing, or whatever they want. I call this design concept "purpose-agnostic". Compared to most existing design tools, which are strictly optimized for a concrete genre of graphics such as print or smartphone-app UI, I believe that developers intentionally staying agnostic about how designers should use a tool makes it all the more powerful.
- The algebra (and calculus!) of algebraic data types
This blog post explains why algebraic data types are "algebraic" - how every algebraic data type corresponds to a mathematical equation - and describes some ways to use a type's corresponding equation to reason about the type itself.
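The core correspondence is easy to check by hand: a sum type has as many values as its alternatives combined, a product type as many as its components multiplied. Here's an illustrative Python sketch (mine, not the post's) that counts inhabitants of a few small finite types:

```python
# Sum types add inhabitant counts, product types multiply them, and
# Optional[A] corresponds to 1 + A (the extra value is None).
from itertools import product

BOOL = [False, True]                      # |Bool| = 2

def optional(t):                          # Optional[T] ~ 1 + T
    return [None] + list(t)

def either(a, b):                         # Either[A, B] ~ A + B
    return [("L", x) for x in a] + [("R", y) for y in b]

def pair(a, b):                           # (A, B) ~ A * B
    return list(product(a, b))

print(len(optional(BOOL)))                # 1 + 2 = 3
print(len(either(BOOL, optional(BOOL))))  # 2 + 3 = 5
print(len(pair(BOOL, optional(BOOL))))    # 2 * 3 = 6
```

The post goes further: differentiating a type's equation yields its "one-hole contexts" (zippers).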
- Abstract interpretation in the Toy Optimizer (bernsteinbear.com)
Intro:
> CF Bolz-Tereick wrote some excellent posts in which they introduce a small IR and optimizer and extend it with allocation removal. We also did a live stream together in which we did some more heap optimizations.
>
> In this blog post, I'm going to write a small abstract interpreter for the Toy IR and then show how we can use it to do some simple optimizations. It assumes that you are familiar with the little IR, which I have reproduced unchanged in a GitHub Gist.
>
> Abstract interpretation is a general framework for efficiently computing properties that must be true for all possible executions of a program. It's a widely used approach both in compiler optimizations as well as offline static analysis for finding bugs. I'm writing this post to pave the way for CF's next post on proving abstract interpreters correct for range analysis and known bits analysis inside PyPy.
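To give a flavor of the idea, here's a minimal sign-analysis sketch in Python (my illustration, not the post's Toy-IR code): each abstract value stands for a *set* of concrete numbers, and abstract multiplication must be sound for every pair it represents.

```python
# Abstract domain: "+" (positive), "-" (negative), "0" (zero), "T" (unknown).
def sign(n):
    return "0" if n == 0 else ("+" if n > 0 else "-")

def abs_mul(a, b):
    if a == "0" or b == "0":
        return "0"               # anything times zero is zero
    if a == "T" or b == "T":
        return "T"               # unknown sign stays unknown
    return "+" if a == b else "-"

# Soundness check: the abstract result must describe the concrete result.
for x in (-3, 0, 5):
    for y in (-2, 0, 7):
        assert abs_mul(sign(x), sign(y)) == sign(x * y)

print(abs_mul("+", "-"))  # -
print(abs_mul("T", "0"))  # 0
```

An optimizer can use facts like these to, say, fold a comparison `x * x >= 0` to true without knowing `x`.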
- Modal Effect Types (paper)
Abstract:
> We propose a novel type system for effects and handlers using modal types. Conventional effect systems attach effects to function types, which can lead to verbose effect-polymorphic types, especially for higher-order functions. Our modal effect system provides succinct types for higher-order first-class functions without losing modularity and reusability. The core idea is to decouple effects from function types and instead to track effects through relative and absolute modalities, which represent transformations on the ambient effects provided by the context.
>
> We formalise the idea of modal effect types in a multimodal System F-style core calculus `Met` with effects and handlers. `Met` supports modular effectful programming via modalities without relying on effect variables. We encode a practical fragment of a conventional row-based effect system with effect polymorphism, which captures most common use-cases, into `Met` in order to formally demonstrate the expressive power of modal effect types. To recover the full power of conventional effect systems beyond this fragment, we seamlessly extend `Met` to `Mete` with effect variables. We propose a surface language `Metel` for `Mete` with a sound and complete type inference algorithm inspired by `FreezeML`.

Modal logic and modal types have also been used to implement Rust-like "ownership", staged metaprogramming, distributed programming, and more.
- IELR(1) parser generator insights (branchtaken.com): Now Back to Our Regularly Scheduled Programming Language
IELR(1) is a niche LR(1) parser-generation algorithm. More well-known are LALR and Pager's "minimal" LR(1) algorithm (PGM(1)), but IELR(1) can generate a parser for certain grammars that those cannot. This post by the same authors goes into more detail about the problem IELR(1) solves.
The linked post is about implementing IELR(1), in particular the challenges the authors faced doing so.
They've implemented IELR(1) in their own parser generator, `hocc`, which they're writing for their own language, Hemlock: "a systems programming language that emphasizes reliable high performance parallel computation" that is (or is at least very similar to) an ML dialect.
- Bio: A Lisp dialect written in Zig (github.com: cryptocode/bio)
> Bio is an experimental Lisp dialect similar to Scheme, with an interpreter written in Zig.
>
> Features include macros, garbage collection, error handling, a module facility, destructuring, and a standard library.
>
> Example:
>
> ```lisp
> (filter
>     (quicksort '(5 40 1 -3 2) <)
>     (λ (x) (>= x 0)))
>
> (1 2 5 40)
> ```
- We need visual programming. No, not like that. (blog.sbensu.com)
Why do we keep building visual programming environments? Why do we never use them? What should we do instead?
> Most visual programming environments fail to get any usage. Why? They try to replace code syntax and business logic but developers never try to visualize that. Instead, developers visualize state transitions, memory layouts, or network requests.
>
> In my opinion, those working on visual programming would be more likely to succeed if they started with aspects of software that developers already visualize.
...
> Developers say they want "visual programming", which makes you think "oh, let's replace `if` and `for`". But nobody ever made a flow chart to read `for (i in 0..10) if even?(i) print(i)`. Developers familiar with code already like and understand textual representations to read and write business logic.
>
> Let's observe what developers do, not what they say.
>
> Developers do spend the time to visualize aspects of their code but rarely the logic itself. They visualize other aspects of their software that are important, implicit, and hard to understand.
>
> Here are some visualizations that I encounter often in serious contexts of use:
>
> - Various ways to visualize the codebase overall.
> - Diagrams that show how computers are connected in a network.
> - Diagrams that show how data is laid out in memory.
> - Transition diagrams for state machines.
> - Swimlane diagrams for request/response protocols.
>
> This is the visual programming developers are asking for. Developers need help with those problems and they resort to visuals to tackle them.
- mazeppa: An IR and supercompiler for functional languages (github.com: mazeppa-dev/mazeppa)
> Supercompilation is a program transformation technique that symbolically evaluates a given program, with run-time values as unknowns. In doing so, it discovers execution patterns of the original program and synthesizes them into standalone functions; the result of supercompilation is a more efficient residual program. In terms of transformational power, supercompilation subsumes both deforestation and partial evaluation, and even exhibits certain capabilities of theorem proving.
>
> Mazeppa is a modern supercompiler intended to be a compilation target for call-by-value functional languages. Unlike previous supercompilers, Mazeppa 1) provides the full set of primitive data types, 2) supports manual control of function unfolding, 3) is fully transparent in terms of what decisions it takes during transformation, and 4) is designed with efficiency in mind from the very beginning.
Supercompilation explained on Stack Overflow
Mazeppa example (https://github.com/mazeppa-dev/mazeppa/blob/master/examples/sum-squares/main.mz):
> ```mazeppa
> main(xs) := sum(mapSq(xs));
>
> sum(xs) := match xs {
>     Nil() -> 0i32,
>     Cons(x, xs) -> +(x, sum(xs))
> };
>
> mapSq(xs) := match xs {
>     Nil() -> Nil(),
>     Cons(x, xs) -> Cons(*(x, x), mapSq(xs))
> };
> ```
Mazeppa automatically translates the above program into the semantically-equivalent but more efficient:
> ```mazeppa
> main(xs) := f0(xs);
>
> f0(x0) := match x0 {
>     Cons(x1, x2) -> +(*(x1, x1), f0(x2)),
>     Nil() -> 0i32
> };
> ```
It also generates an SVG graph to visualize and debug the transformation.
Mazeppa is written in OCaml and can be used as an OCaml library.
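The fusion above is essentially deforestation: the intermediate `mapSq` list disappears. A quick Python transcription (mine, not Mazeppa output) shows the before/after agree:

```python
# Naive version: builds an intermediate list of squares, then sums it.
def map_sq(xs):
    return [x * x for x in xs]          # mapSq

def main_naive(xs):
    return sum(map_sq(xs))              # sum(mapSq(xs))

# Residual version, mirroring f0: one pass, no intermediate structure.
def main_residual(xs):
    acc = 0
    for x in xs:
        acc += x * x
    return acc

xs = [1, 2, 3, 4]
print(main_naive(xs), main_residual(xs))  # 30 30
```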
- Esolang Park - online interpreter and debugger for esoteric programming languages
> Think Repl.it, but a simpler version for esoteric languages, with a visual debugger catered to each language, that runs in your browser.
The linked example shows a Brainfuck "Hello world!" program. You can run it and visualize instructions executed in real time, the memory, and the output. You can pause/step, insert breakpoints, adjust the delay after each instruction, and enter user input. There's also syntax highlighting and checking.
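For a sense of what the debugger is stepping through, here's a minimal Brainfuck interpreter sketch in Python (my illustration; Esolang Park's per-language engines run in the browser and are more elaborate):

```python
# The eight Brainfuck commands move a data pointer (> <), adjust the
# current cell (+ -), do I/O (. ,), and loop ([ ]).
def brainfuck(code, data=""):
    tape, ptr, out, pc, inp = [0] * 30000, 0, [], 0, 0
    jumps, stack = {}, []
    for i, c in enumerate(code):        # pre-match brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = ord(data[inp]) if inp < len(data) else 0
            inp += 1
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

print(brainfuck("++++++++[>+++++++++<-]>."))  # 8 * 9 = 72 -> "H"
```

Esolang Park's visual debugger essentially exposes `tape`, `ptr`, and `pc` from a loop like this at every step.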
- Toit - a language for microcontrollers with live reloading (toitlang.org)
Toit is a modern high-level language designed specifically for microcontrollers
Some features:
- Ruby-like syntax, terse lambdas that look like regular control statements.
- Everything is an object.
- Integrated package manager.
Jaguar is a small app that wirelessly connects to an ESP32 and can load and live-reload Toit programs. This is opposed to something like connecting the device via USB and re-installing the program every time you make a change.
Example (https://github.com/toitlang/toit/blob/master/examples/wifi/scan.toit, see the GitHub link for syntax highlighting):
> ```toit
> // Copyright (C) 2022 Toitware ApS.
> // Use of this source code is governed by a Zero-Clause BSD license that can
> // be found in the examples/LICENSE file.
>
> // This example illustrates how to scan for WiFi access points.
>
> import net.wifi
>
> SCAN-CHANNELS := #[1, 2, 3, 4, 5, 6, 7]
>
> main:
>   access-points := wifi.scan
>       SCAN-CHANNELS
>       --period-per-channel-ms=120
>   if access-points.size == 0:
>     print "Scan done, but no APs found"
>     return
>
>   print """
>       $(%-32s "SSID") $(%-18s "BSSID") \
>       $(%-6s "RSSI") $(%-8s "Channel") \
>       $(%-8s "Author")\n"""
>
>   access-points.do: | ap/wifi.AccessPoint |
>     print """
>         $(%-32s ap.ssid) $(%-18s ap.bssid-name) \
>         $(%-6s ap.rssi) $(%-8s ap.channel) \
>         $(%-8s ap.authmode-name)"""
> ```
- Some tricks from the Scrapscript compiler (bernsteinbear.com)
> Scrapscript is a small, pure, functional, content-addressable, network-first programming language.
>
> ```scrapscript
> fact 5
> . fact =
>   | 0 -> 1
>   | n -> n * fact (n - 1)
> ```
>
> My previous post talked about the compiler that Chris and I built. This post is about some optimization tricks that we've added since.
>
> Pretty much all of these tricks are standard operating procedure for language runtimes (OCaml, MicroPython, Objective-C, Skybison, etc). We didn't invent them.
>
> They're also somewhat limited in scope; the goal was to be able to add as much as possible to the baseline compiler without making it or the runtime notably more complicated. A fully-featured optimizing compiler is coming soon™ but not ready yet.
- Solving a math problem with planner programming (buttondown.email)
More opportunities to mess with exotic technology
The blog explains how to solve the following problem using Picat and planner programming, a form of logic programming:
> Suppose that at the beginning there is a blank document, and a letter "a" is written in it. In the following steps, only the three functions of "select all", "copy" and "paste" can be used.
>
> Find the minimum number of steps to reach at least 100,000 a's (each of the three operations of "select all", "copy" and "paste" is counted as one step). If the target number is not specified, and I want to get the exact amount of "a"s, is there a general formula?
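The problem also yields to a brute-force search, which makes a nice contrast with the post's Picat planner. The key observation (my framing, not the post's): any useful plan is a sequence of rounds of "select all, copy, then p pastes", which multiplies the count of a's by p + 1 at a cost of p + 2 steps.

```python
# Memoized search over round shapes: most a's reachable from one 'a'
# in at most `steps` steps.
from functools import lru_cache

@lru_cache(maxsize=None)
def max_as(steps):
    best = 1
    for pastes in range(1, steps - 1):
        cost = pastes + 2               # select + copy + `pastes` pastes
        if cost <= steps:
            best = max(best, (pastes + 1) * max_as(steps - cost))
    return best

steps = 0
while max_as(steps) < 100_000:
    steps += 1
print(steps)  # 42
```

The optimum mixes "triple" rounds (cost 4, ×3) and "quadruple" rounds (cost 5, ×4), e.g. six ×4 rounds and three ×3 rounds give 4⁶·3³ = 110,592 a's in 42 steps.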
- Scoped Propagators (www.orionreed.com)
A programming model that is a graph, where code is written on the edges to add behavior and interactivity to the connected nodes.
There's a video demo on the linked page.
- An online playground for ArkScript
I wanted people to be able to try out my language online, and it's now possible with a VSCode-like interface, sending code to a Docker image running the interpreter!
It was easier than I thought to implement. And yes, security was a concern, but I have been able to harden the Docker container and restrict the WebSocket server, to avoid users escaping the Docker image and getting access to the VM it's running on.
- Exploring biphasic (multi-stage) programming (rybicki.io): Exploring biphasic programming: a new approach in language design
> This pattern [(multi-stage programming)], which I'll refer to as "biphasic programming," is characterized by languages and frameworks that enable identical syntax to express computations executed in two distinct phases or environments while maintaining consistent behavior (i.e., semantics) across phases. These phases typically differ temporally (when they run), spatially (where they run), or both.
An older (2017) page on multi-stage programming
Winglang ("a programming language for the cloud"), the author's language
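A tiny Python illustration of the two-phase idea (mine, not from the post): the same arithmetic syntax runs in phase one to *generate* specialized code that then runs in phase two. Real biphasic languages make the split seamless; here it's explicit.

```python
# Classic staging example: specialize x**n by unrolling at "generation time".
def gen_power(n):
    """Phase one: runs now, emitting a phase-two function for x**n."""
    expr = " * ".join(["x"] * n) if n > 0 else "1"
    return eval(f"lambda x: {expr}")    # the residual, phase-two code

cube = gen_power(3)        # staging happens once...
print(cube(2), cube(5))    # 8 125  ...the residual runs many times
```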
- Associated Effects: Flexible Abstractions for Effectful Programming (paper) (dl.acm.org)
Abstract:
> We present associated effects, a programming language feature that enables type classes to abstract over the effects of their function signatures, allowing each type class instance to specify its concrete effects.
>
> Associated effects significantly increase the flexibility and expressive power of a programming language that combines a type and effect system with type classes. In particular, associated effects allow us to (i) abstract over total and partial functions, where partial functions may throw exceptions, (ii) abstract over immutable data structures and mutable data structures that have heap effects, and (iii) implement adaptors that combine type classes with algebraic effects.
>
> We implement associated effects as an extension of the Flix programming language and refactor the Flix Standard Library to use associated effects, significantly increasing its flexibility and expressive power. Specifically, we add associated effects to 11 type classes, which enables us to add 28 new type class instances.
See also: the Flix programming language
- International Conference on Functional Programming 2024 Accepted Papers (icfp24.sigplan.org)
- Deriving Dependently-Typed OOP from First Principles (paper)
Abstract:
> The expression problem describes how most types can easily be extended with new ways to produce the type or new ways to consume the type, but not both. When abstract syntax trees are defined as an algebraic data type, for example, they can easily be extended with new consumers, such as `print` or `eval`, but adding a new constructor requires the modification of all existing pattern matches. The expression problem is one way to elucidate the difference between functional or data-oriented programs (easily extendable by new consumers) and object-oriented programs (easily extendable by new producers).
>
> This difference between programs which are extensible by new producers or new consumers also exists for dependently typed programming, but with one core difference: Dependently-typed programming almost exclusively follows the functional programming model and not the object-oriented model, which leaves an interesting space in the programming language landscape unexplored.
>
> In this paper, we explore the field of dependently-typed object-oriented programming by deriving it from first principles using the principle of duality. That is, we do not extend an existing object-oriented formalism with dependent types in an ad-hoc fashion, but instead start from a familiar data-oriented language and derive its dual fragment by the systematic use of defunctionalization and refunctionalization.
>
> Our central contribution is a dependently typed calculus which contains two dual language fragments. We provide type- and semantics-preserving transformations between these two language fragments: defunctionalization and refunctionalization. We have implemented this language and these transformations and use this implementation to explain the various ways in which constructions in dependently typed programming can be explained as special instances of the general phenomenon of duality.
Background:
Expression problem (wikipedia)
- A reckless introduction to Hindley-Milner type inference
> Several months ago I gave a talk at work about Hindley-Milner type inference. When I agreed to give the talk I didn't know much about the subject, so I learned about it. And now I'm writing about it, based on the contents of my talk but more fleshed out and hopefully better explained.
>
> I call this a reckless introduction, because my main source is wikipedia. A bunch of people on the internet have collectively attempted to synthesise a technical subject. I've read their synthesis, and now I'm trying to re-synthesise it, without particularly putting in the effort to check my own understanding. I'm not going to argue that this is a good idea. Let's just roll with it.
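The workhorse of Hindley-Milner is unification of type terms. In the same reckless spirit, here's a minimal Python sketch (mine; it omits the occurs check and generalization): types are strings (type variables) or tuples like `("->", arg, res)`, and `unify` returns a substitution mapping variables to types.

```python
def resolve(t, subst):
    # Follow variable bindings until we hit a non-variable or a free variable.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):              # a is a free type variable
        subst[a] = b                    # (occurs check omitted for brevity)
        return subst
    if isinstance(b, str):
        subst[b] = a
        return subst
    if a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):  # unify arguments pairwise
            subst = unify(x, y, subst)
        return subst
    raise TypeError(f"cannot unify {a} with {b}")

# Unifying 't -> bool with int -> 'u forces 't = int and 'u = bool:
s = unify(("->", "'t", ("bool",)), ("->", ("int",), "'u"))
print(resolve("'t", s), resolve("'u", s))
```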
- Writing an IR from Scratch and survive to write a post (long) (farena.in)
The following post will talk about the design of the first version of the Intermediate Representation of Kunai, the design decisions and how it was implemented.
> In this post, I will talk about the first version of the Intermediate Representation I designed for Kunai Static Analyzer, a Dalvik analysis library that I wrote as a project for my PhD, and as a way of learning about the Dalvik file format and improving my programming skills.
- The Pre-Scheme Restoration
Pre-Scheme is an interesting Scheme dialect which is being ported to modern systems. The language, why it's being ported, and the porting effort are described in the linked post.
> As described in the Pre-Scheme paper, the language offers a unique combination of features:
>
> - Scheme syntax, with full support for macros, and a compatibility library to run Pre-Scheme code in a Scheme interpreter. The compiler is also implemented in Scheme, enabling both interactive development and compile-time evaluation.
> - A static type system based on Hindley/Milner type reconstruction, as used in the ML family of languages (eg. Standard ML, OCaml, Haskell). Pre-Scheme supports parametric polymorphism, and has nascent support for algebraic data types and pattern matching, which are recently gaining popularity in mainstream languages.
> - An optimizing compiler targeting C, allowing for efficient native code generation and portable low-level machine access. C remains the common interface language for operating system facilities, and compatibility at this level is essential for modern systems languages.
>
> Due to the restrictions of static typing and the C runtime model, Pre-Scheme does not (currently) support many of Scheme's high-level features, such as garbage collection, universal tail-call optimization, heap-allocated runtime closures, first-class continuations, runtime type checks, heterogeneous lists, and the full numeric tower. Even with these limitations, Pre-Scheme enables a programming style that is familiar to Scheme programmers and more expressive than writing directly in C.
Ironically, Pre-Scheme's original purpose was to implement the interpreter for another Scheme dialect, Scheme 48 (Wikipedia). But the lower-level Pre-Scheme is now getting its own attention, in part due to the popularity of Rust and Zig. Pre-Scheme has the portability and speed of C (or at least close to it), but like Rust and Haskell it also has a static type system with ADTs and parametric polymorphism; and being a LISP dialect, like most other dialects it has powerful meta-programming and a REPL.
- Lady Deirdre: Unified compiler framework
From README:
> Lady Deirdre is a framework for incremental programming language compilers, interpreters, and source code analyzers.
>
> This framework helps you develop a hybrid program that acts as a language compiler or interpreter and as a language server for a code editor's language extension. It offers the necessary components to create an in-memory representation of language files, including the source code, their lexis and syntax, and the semantic model of the entire codebase. These components are specifically designed to keep the in-memory representation in sync with file changes as the codebase continuously evolves in real time.
>
> ### Key Features
>
> - Parser Generator Using Macros.
> - Hand-Written Parsers.
> - Error Resilience.
> - Semantics Analysis Framework.
> - Incremental Compilation.
> - Parallel Computations.
> - Web-Assembly Compatibility.
> - Source Code Formatters.
> - Annotated Snippets.
> - Self-Sufficient API.
- Staged compilation with dependent types (github.com: AndrasKovacs/staged)
Abstract from the ICFP 2024 paper:
> Many abstraction tools in functional programming rely heavily on general-purpose compiler optimization to achieve adequate performance. For example, monadic binding is a higher-order function which yields runtime closures in the absence of sufficient compile-time inlining and beta-reductions, thereby significantly degrading performance. In current systems such as the Glasgow Haskell Compiler, there is no strong guarantee that general-purpose optimization can eliminate abstraction overheads, and users only have indirect and fragile control over code generation through inlining directives and compiler options. We propose a two-stage language to simultaneously get strong guarantees about code generation and strong abstraction features. The object language is a simply-typed first-order language which can be compiled without runtime closures. The compile-time language is a dependent type theory. The two are integrated in a two-level type theory.
>
> We demonstrate two applications of the system. First, we develop monads and monad transformers. Here, abstraction overheads are eliminated by staging and we can reuse almost all definitions from the existing Haskell ecosystem. Second, we develop pull-based stream fusion. Here we make essential use of dependent types to give a concise definition of a concatMap operation with guaranteed fusion. We provide an Agda implementation and a typed Template Haskell implementation of these developments.
The repo also contains demo implementations in Agda and Haskell, and older papers/implementations/videos.
- F - A tiny functional concatenative language
F is written in K, another small language. In fact, the entire implementation is in one file: <http://www.nsl.com/k/f/f.k>.
Example program (defines factorial and computes `fac(6)`, from <http://www.nsl.com/k/f/f/fac.f>):

> ```f
> [[1=][][dup!pred!fac!*]cond!]fac
> ```
The main site, "no stinking loops", is full of links including to other languages (scroll down) and resources on K. Although many are broken 🙁.
- The design decisions and evolution of a method definition - Ruby case study (zverok.space)
Episode 01 of studying Ruby programming language design decisions, how they evolved with time, and how they look in a wider context.
A blog post on the evolution of method syntax in Ruby.
The author (Victor Shepelev AKA zverok) lives in Ukraine and is part of the Armed Forces. See his other posts (about Ruby, the war, and more) here.
- AUTOMAP: How to do NumPy-style broadcasting in Futhark (but better) (futhark-lang.org)
A high-performance and high-level purely functional data-parallel array programming language that can execute on the GPU and CPU.
NumPy-style broadcasting = being able to write code like

```futhark
[[1, 2, 3], [4, 5, 6], [7, 8, 9]] + [10, 11, 12]
```

and have it evaluate the same as

```futhark
map (map (+)) [[1, 2, 3], [4, 5, 6], [7, 8, 9]] (rep 3 [10, 11, 12])
```

(which evaluates to `[[11, 13, 15], [14, 16, 18], [17, 19, 21]]`). See the NumPy broadcasting docs. NumPy and most other languages do this at runtime, which is typically easier. But Futhark is statically typed, so the type of the result must be known at compile time, and this means the broadcasting must also be done at compile time.
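A pure-Python sketch (mine) of what the runtime version does: conceptually replicate the vector across the rows, then add elementwise (NumPy performs the same check and addition without materializing the replication):

```python
def broadcast_add(matrix, vector):
    # Runtime shape check, then elementwise addition of the replicated vector.
    if any(len(row) != len(vector) for row in matrix):
        raise ValueError("shape mismatch")
    return [[x + y for x, y in zip(row, vector)] for row in matrix]

m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(broadcast_add(m, [10, 11, 12]))
# [[11, 13, 15], [14, 16, 18], [17, 19, 21]]
```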
- Crossing the Impossible FFI Boundary, and My Gradual Descent Into Madness (Vale)
Adding Rust FFI into Vale, in particular the part where the Vale compiler determines the signature and proper overload of a Rust function.
> Our ultimate goal is to write this Vale code:
>
> ```vale
> import rust.std.vec.Vec;
>
> exported func main() {
>   vec = Vec<int>.with_capacity(42);
>   println("Length: " + vec.capacity());
> }
> ```
>
> ```stdout
> Length: 42
> ```
>
> ...which would successfully make a Rust Vec and call capacity on it.
- vvvv - visual live-programming for .NET (visualprogramming.net)
A visual live-programming environment that takes you from rapid prototyping to final production.
vvvv is a node-based visual programming environment whose programs run on the CLR (same virtual machine as C#). It has hot reloading (compilation happens in the background) and can be extended with custom nodes written in C#. It has built-in libraries for 2D and 3D rendering, image recognition, machine learning, networking, and IO for various devices (Kinect and cameras). It's been used for a lot of interactive art installations, as seen in the "showcase" section on the website.
- The Swift compiler is slow due to how types are inferred (danielchasehooper.com): Apple didn't fix Swift's biggest flaw
How a 10 year old design choice for Swift’s type checker still haunts us to this day
A notable case study for anyone designing a type system for their own language.
Swift's compiler can infer ambiguous expressions like enum case names based on their type, inferred from surrounding context. This makes the syntax less verbose, but at a cost; some seemingly benign expressions take several seconds to type-check, after which the compiler ultimately gives up with an unhelpful error message: "the compiler is unable to type-check this expression in reasonable time".
The linked post goes into more detail.
- Forsp: A Forth+Lisp Hybrid Lambda Calculus Language
> Forsp has:
>
> - An S-Expression syntax like Lisp
> - Function abstraction like Lisp
> - Function application like Forth
> - An environment structure like Lisp
> - Lexically-scoped closures like Lisp (Scheme)
> - Cons-cells / lists / atoms like Lisp
> - A value/operand stack like Forth
> - An ability to express the Lambda Calculus
> - A Call-By-Push-Value evaluation order
> - Only 3 syntax special forms: ' ^ $
> - Only 1 eval-time special form: quote
> - Only 10 primitive functions need to self-implement
> - Ability to self-implement in very little code
>
> Its evaluator is very simple. I suspect simpler than a McCarthy Lisp eval() function, but I haven't defined a "simplicity function", so you can be the judge.
>
> In contrast to Lisp, apply() is trivial in Forsp, and instead we have a core function called compute().
- How we test(ed) the Futhark compiler (futhark-lang.org)
A high-performance and high-level purely functional data-parallel array programming language that can execute on the GPU and CPU.
> In this post I will go through the evolution of the tools we use for testing the Futhark compiler. Along the way, these also became `futhark test`, the user-facing tool for testing Futhark programs, and although rather convenient, it is still pretty crude compared to the testing tools you will find in other languages. This post is perhaps most interesting to other language implementation hobbyists.

For reference, Futhark is an up-and-coming language for writing efficient parallel code, sort of like a more functional (Haskell-like) version of CUDA or OpenCL.
- TypeLoom: Gradual Typing with the LSP (github.com/frroossst/TypeLoom)
Weaving LSP-based Gradual and Optional Typing into Dynamic Languages
Abstract (emphasis added):
> This paper introduces TypeLoom, a tool utilising a novel approach to add gradual, optional typing into legacy code bases of dynamically typed languages. TypeLoom leverages the Language Server Protocol (LSP) to provide in-editor type information through inlay hints and to collect subsequent type information through code actions. This approach differs from the ones that exist in so far as it requires no syntactical changes to add type hints (like in Python, TypeScript) and it does not require syntactically correct comments to provide type information (like in @ts-docs and Ruby). TypeLoom utilises a graph-based data structure to provide type inference and type checking. This graph-based approach is particularly effective for gradual typing as it allows flexible representation of type relationships and dependencies.
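The abstract doesn't show what a graph-based approach looks like in practice, so here is a toy sketch (in Rust, not TypeLoom's actual implementation; all names are made up for illustration): each expression is a node, an edge means "these two must have the same type", and known annotations propagate along edges to a fixpoint, flagging clashes on the way.

```rust
use std::collections::HashMap;

// Types in our toy language.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Ty { Int, Str }

// Propagate known types along "must be equal" edges until nothing changes.
// Returns the inferred types, or an error naming the first clashing edge.
fn infer(
    known: &[(usize, Ty)],
    edges: &[(usize, usize)],
) -> Result<HashMap<usize, Ty>, String> {
    let mut types: HashMap<usize, Ty> = known.iter().cloned().collect();
    loop {
        let mut changed = false;
        for &(a, b) in edges {
            match (types.get(&a).copied(), types.get(&b).copied()) {
                (Some(t), None) => { types.insert(b, t); changed = true; }
                (None, Some(t)) => { types.insert(a, t); changed = true; }
                (Some(ta), Some(tb)) if ta != tb => {
                    return Err(format!("clash on edge {a}-{b}: {ta:?} vs {tb:?}"))
                }
                _ => {}
            }
        }
        if !changed { return Ok(types); }
    }
}

fn main() {
    // x = 1; y = x; z = y  — annotate only x, infer the rest.
    let types = infer(&[(0, Ty::Int)], &[(0, 1), (1, 2)]).unwrap();
    assert_eq!(types.get(&2).copied(), Some(Ty::Int));
    // Unconstrained nodes simply stay untyped — that's the "gradual" part.
    let partial = infer(&[], &[(0, 1)]).unwrap();
    assert!(partial.is_empty());
    println!("inferred: {:?}", types);
}
```

The appeal for legacy code bases is that partially-typed graphs are fine: anything unreachable from an annotation just stays dynamic instead of being an error.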
- MegaLibm: A DSL for Implementing Math Functions (blog.sigplan.org)
Numerical code needs to carefully balance accuracy and performance. A new DSL, MegaLibm, makes this easier by checking for numerical correctness and offering flexible compilation to efficient code.…
Excerpt:
> Typically, math library functions are written in a low-level language like C or raw assembler to maximize performance. But general purpose languages (like these) don't help developers avoid semantic errors in mathematical code.
>
> How many times has this happened to you: you're writing some math computation, and you accidentally write a plus sign instead of a minus sign, or put in the wrong constant? Your programming language can't catch these bugs for you because its types, like `float` and `double`, don't distinguish between `x` and `-x` or between different constants.
>
> But numerical code could really benefit from compiler assistance with precisely this task, especially since we expect the user to test out several different implementations of some mathematical expression and compare them for accuracy and performance. Numerical errors really throw a wrench in that process (through misleading performance or accuracy numbers) and MegaLibm therefore aims to prevent them.

- The Skew Programming Language
A language which was used by Figma until recently. It compiles to JavaScript or C#.
Why and how Figma migrated from Skew to TypeScript.
Differences between Skew and TypeScript
Example from the homepage:
> ```skew
> # Execution starts at the entry point
> @entry
> def fizzBuzz {
>   for i in 1..101 {
>     var text = mod(i, 3, "Fizz") + mod(i, 5, "Buzz")
>     document.write((text == "" ? i.toString : text) + "\n")
>   }
> }
>
> # The compiler inlines this function in release mode
> def mod(i int, n int, text string) string {
>   return i % n == 0 ? text : ""
> }
>
> # Imported declarations are just for type checking
> @import
> namespace document {
>   def write(text string)
> }
> ```
- A baseline scrapscript compiler (bernsteinbear.com)
> Scrapscript is a small, pure, functional, content-addressable, network-first programming language.
>
> ```
> fact 5
> . fact =
>   | 0 -> 1
>   | n -> n * fact (n - 1)
> ```
>
> My previous post introduced the language a bit and then talked about the interpreter that Chris and I built. This post is about the compiler that Chris and I built.

- The borrow checker within · baby steps
This is about Rust, but it's a great post and likely relevant to languages with Rust-style ownership. It describes four features planned for Rust to make its borrow checker more expressive and permissive:
- Allow a function to return a reference in one branch of a conditional and not consider it "borrowed" in the other branch (this is supported by the new borrow checker, Polonius, which should be in stable Rust very soon).
- Allow "place" lifetimes; that is, lifetimes that refer to other variables or their fields (e.g. `'foo.bar` is a lifetime that, when attached to a reference, means the reference is borrowing `foo.bar` or some data within it).
- View types and inter-procedural AKA "partial" borrows: define a function that can only borrow specific fields in a structure, so that the function can be called with a structure whose other fields (at the call-site) are mutably borrowed or moved.
- Self-referential structures: structures where one field borrows another owned field (e.g. in `Foo { foo: String, bar: &'self.foo str }`, `foo` is owned and `bar` borrows from it). Crates like self_cell and ouroboros provide this feature, but the goal is to have it in safe Rust with a cleaner syntax.
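The first bullet can be made concrete with the canonical get-or-insert example (a standard illustration in Polonius discussions, not taken from the linked post): returning a reference from the `Some` arm makes today's checker treat the map as still borrowed in the `None` arm, so real code reaches for the entry API instead.

```rust
use std::collections::HashMap;

// The commented-out version is the classic pattern today's borrow checker
// rejects but Polonius accepts: the `Some` arm returns a borrow, and the
// checker wrongly considers the map still borrowed in the `None` arm.
//
// fn get_default(map: &mut HashMap<u32, String>, key: u32) -> &String {
//     match map.get(&key) {
//         Some(v) => v,
//         None => { map.insert(key, String::new()); &map[&key] }
//     }
// }

// The workaround that compiles today: `entry` performs the lookup and the
// insertion through a single borrow of the map.
fn get_default(map: &mut HashMap<u32, String>, key: u32) -> &String {
    map.entry(key).or_insert_with(String::new)
}

fn main() {
    let mut m = HashMap::new();
    m.insert(1, "one".to_string());
    assert_eq!(get_default(&mut m, 1), "one");
    assert_eq!(get_default(&mut m, 2), ""); // inserted on demand
    println!("ok");
}
```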
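For the view-types bullet, a small sketch (my own example, not from the post) of the workaround used today: a `&mut self` method conceptually borrows every field, so while another field is separately borrowed you must mutate fields directly rather than through the method. View types would let the method itself declare which fields it touches.

```rust
// `Metrics` is a made-up struct for illustration.
struct Metrics {
    hits: u32,
    log: Vec<String>,
}

fn main() {
    let mut m = Metrics { hits: 0, log: Vec::new() };

    let log = &mut m.log; // borrow one field...
    m.hits += 1;          // ...while freely using a disjoint field;
                          // a `&mut self` method call here would be rejected
    log.push(format!("hits now {}", m.hits));

    assert_eq!(m.hits, 1);
    assert_eq!(m.log.len(), 1);
    println!("{}", m.log[0]);
}
```

The borrow checker already reasons about disjoint fields of a local; partial borrows would extend that reasoning across function boundaries.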
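And for the self-referential bullet, note that `&'self.foo str` is proposed syntax that does not compile today. A common safe-Rust workaround (my own sketch, with hypothetical names) stores an index range into the owned field instead of a reference, so nothing actually borrows the struct:

```rust
use std::ops::Range;

// `word` stands in for the proposed `&'self.text str` field: a range of
// byte indices into `text` instead of a reference into it.
struct Snippet {
    text: String,
    word: Range<usize>,
}

impl Snippet {
    // Rebuild the "reference" on demand from the owned data.
    fn word(&self) -> &str {
        &self.text[self.word.clone()]
    }
}

fn main() {
    let s = Snippet { text: String::from("hello world"), word: 6..11 };
    assert_eq!(s.word(), "world");
    println!("{}", s.word());
}
```

Crates like ouroboros generate the unsafe glue to hold a real reference instead; the planned language feature would make that unnecessary.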