Loon: A functional lang with invisible types, safe ownership, and alg. effects
by surprisetalk
I got really excited reading this! The docs site is very polished and hypes up lots of features which click with things I've been wanting out of a language. But then I went to the repository [0] and realized that this is a week-old project, with every single commit written by Claude. I went to the playground page [1] and tried the example for effects, a headline feature and what drew me in the most, but it threw an "unbound symbol" error. I thought maybe the example could just be out-of-date, so I tried the example under the "Algebraic effects" heading on the homepage, which shows a different syntax, but that threw a parse error. The "Pattern matching" example is supposed to return 78.5, but it returns 15.700000000000001 when run in the playground. The example for "Mutation" on the ownership docs page [2] throws "unbound symbol 'set!'". The "Type Signatures" example from the types guide [3] throws another parse error. That's where I stopped.
How much of this is actually real?
[0] https://github.com/ecto/loon
hey thank you, should all be fixed now!
Yea, kudos on the docs. It’s rare that something this new is this polished. Well done!
I love this! The docs are really good and I am looking forward to playing with this.
I think I disagree a little bit about the value of type and lifetime annotations; I think if I used this, I would put a lot of [sig ..] into my code.
I think having a similar syntax for lifetimes would be valuable so I as a developer can make sure that some value is not 'static (or has the lifetime of some other object that lives as long as my program). Basically I want to ensure that my values are actually dropped at some point.
One thing the docs could be more explicit about is where this runs. I saw that it has a Wasm backend, but can I also compile this to native code? Do the effects require some runtime (and I don't mean something like the JRE, more like what Go has)? Maybe I missed this, though.
Anyway, I hope I get to play with this soon!
Perhaps relevant: https://campedersen.com/loon
This looks like a really neat project/idea; seeing the road map is exciting too, nearly everything I'd want.
I don't love the brackets syntax, or the [op val1 val2] ([* x x]) style, but I appreciate the attempt at clarity and consistency and none of these things are dealbreakers.
I do wonder why they've leaned so hard into talking about the type system being out of sight. Again, not a dealbreaker, but I feel strongly that explicit typing has a place in codebases beyond "describe something because you have to".
Strongly typed languages strike me as providing detailed hints throughout the codebase about what "shape" I need my data in or what shape of data I'm dealing with (without needing to lean on an LSP). I find it makes things very readable, almost self-documenting when done right.
From their docs about their choices: "The reasoning is simple: types exist to help the compiler catch your mistakes. They do not exist to help you express intent, at least not primarily." This strikes me as unnecessarily pedantic; as someone who reads more code than I write (even my own), seeing a type distinctly, particularly as part of a function signature, helps me understand, or at least adds strong context to, the original author's goal before I even get to reading the implementation.
I find this doubly so when working through monadic types where I may get a typed error, a value, and have it all wrapped in an async promise of some kind (or perhaps an effect or two).
By the same token many languages allow you to leave out type annotations where they may be simple or clearly implied (and/or inferred by the compiler), so again, I'm not understanding the PoV (or need) for these claims. Perhaps Loon simply does it better? Am I missing something? Can I write return types to stub functions?
From the above blog post: "That's how good type inference feels! You write code. The types are just there. Because the language can see where it's going." Again, it feels strongly geared towards a world where we value writing code over reading/maintaining/understanding code, but maybe that's just my own bias/limitations.
Will follow it closely.
Good news, there's a line in the "Coming from Rust"[1] page that says
> You never annotate a function signature unless you want to for documentation purposes.
so it sounds like function annotation is still an option for the purposes of communication, just no longer required in all cases.
Aha, here's the syntax in case you're curious (using an example lifted from the playground)
[type Shape
  [Circle f64]
  [Rect f64 f64]
  Point
]
[sig test_sig : Shape -> Float]
[fn test_sig [shape]
  [match shape
    [Circle r] [* 3.14159 [* r r]]
    [Rect w h] [* w h]
    Point 0.0
  ]
]
Unfortunately it seems like this doesn't currently work as expected when I use it in the playground, so I'm going to go file an issue.
thank you <3 I will fix asap
Yeah, the idea that types exist just to help the compiler catch your mistakes shows a depressingly superficial understanding of the benefits of static typing.
Types exist so that the compiler can reason about your code better - but not incidentally, they also help you reason about your code better!
To wit: even when working in dynamic languages, it's often considered a good practice to write down in docstrings the types of objects a function can operate on, even without static enforcement. Thinking about types is helpful for humans, too.
And it's not even just a thing to help you read code in the future - types help me write code, because as I sit down to write a function I know the possible values and states and capabilities of the object I'm working with. In the best of cases I can analytically handle all the possible cases of the object, almost automatically - the code flows out of the structure of the type.
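For example, with a sum type like the Shape type quoted elsewhere in this thread, the match practically writes itself: one arm per constructor and nothing left to guess at. A rough sketch only, reusing that type and going purely by the playground syntax shown above (describe and the strings are made up):
[fn describe [shape]
  [match shape
    [Circle r] [str "a circle of radius " r]
    [Rect w h] [str "a rect of " w " by " h]
    Point "a point"]]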
> Types exist so that the compiler can reason about your code better - but not incidentally, they also help you reason about your code better!
THIS. So much. This observation is extremely intuitive to me.
Having inherited some large Python code bases before type annotations were common made me never want to personally read through heavily type-inferred code again.
It reminded me of a mathematician in my field who had rather brilliant ideas, but whose papers were largely unreadable due to idiosyncratic symbology and style. Fortunately in that case one can still leverage an information dense symbology that points to a well-specified formalism.
> Strongly typed languages strike me as providing detailed hints throughout the codebase about what "shape" I need my data in
I agree that seeing types is helpful, though typing them is also not necessary. Perhaps the solution is an IDE that shows you all the types inferred by the compiler or maybe a linter that adds comments with types on file save.
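Something like this, say, using the [sig] form from the playground example elsewhere in the thread (the function, the types, and the idea of a linter writing this back on save are all guesses on my part):
[sig scale : Float -> Float] ; inserted by the hypothetical linter
[fn scale [x]
  [* x 2.0]]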
> I agree that seeing types is helpful, though typing them is also not necessary. Perhaps the solution is an IDE that shows you all the types inferred by the compiler
see "The Editor as Type Viewer" section in the docs: https://loonlang.com/concepts/invisible-types
I think I am in love. Clojure + Rust: everything is typed, but I don't need to annotate. And algebraic effects, which I really wanted to explore in OCaml but can now do in a language with a way easier syntax. I might be missing a bit of Clojure's dynamic nature, but it looks like a bunch of really interesting ideas in one language.
Coming from Clojure, I like types being invisible. Square brackets feel like a needless change. If you want sexprs, just use sexprs. Interesting ideas, as you say.
Yeah. Clojure is by far my favorite dynamic language. But I love static types. At a quick glance, Loon looks like it could just flat out become my favorite language. Loon with a standard library that approaches Go’s would be :chefskiss:
<3
Looks amazing. However, I'm confused about some of the examples in the ownership section. Consider this example:
[fn consume [xs]
  [println [str "got " [len xs] " items"]]]
[let items #[1 2 3]]
[consume items]
; items has been moved — using it here is a compile error
This is described as an example of a setting where the variable 'items' has been moved, so it can no longer be used. However, the consume function does not mutate its argument, so I don't see why 'items' can no longer be used. In addition, it seems to me that the term 'moved' is being used to mean 'mutated'. Is that correct, or do I have it wrong?
Another example from the website is the following:
[fn length [xs] [len xs]]
[let items #[1 2 3]]
[println [length items]]
[println [length items]] ; items not consumed
The website explains this example as follows: "If the compiler can prove that a reference will not outlive the owner, it passes a borrow instead of moving." However, there are no references in this example, only the original variable 'items'. Am I missing something? I think the language is cool, but the explanations of the semantics could be improved.
This looks really nice! I'm excited to see it and am left with questions from perusing the site. Let me know if I missed it.
It's simple and also has an excellent choice of where to invest in powerful features. It looks like an elegant, minimal selection of things existing languages already do well, while cutting out a lot of cruft.
The site also mentions two differentiating and less established features that make it sound like more than yet another fp remix: type-based ownership and algebraic effects.
While the ownership stuff is well explored by Rust, and a less explicit variation by Mojo, this sounds like a meaningful innovation and deserves a good write-up! Ownership is an execution-centric idea, where fp usually tries to stay evaluation-centric (Turing vs. Church). It's hard to make these ideas work well together, and real progress is exciting.
I'm less familiar with algebraic effects, but it seems like a somewhat newer (in the broader consciousness) idea with a lot of variation. How does Loon approach it?
These seem like the killer features, and I'd love to see more details.
(The one technical choice I just can't agree with is multi-arity definitions. They make writing code easier and reading it harder, which is rarely or never the better choice. Teams discourage function overloading all the time for this reason.)
Thanks for sharing!
thanks for the kind words :)
You're right, ownership and effects are the real differentiators. The idea here is to let the compiler discover ownership rather than it being declared imperatively.
Regarding algebraic effects, Loon uses continuations similar to Koka but more restrictive. Effects are declared with operations, and every side effect (IO, failure, async, state) is an effect that propagates through the call graph. The interesting part is `handle`, which lets you intercept effects:
[handle [load-config "app.toml"]
  [IO.read-file path] [resume "mock contents"]
  [Fail.fail msg] "default"]
Handling an effect subtracts it from the function's effect set, so a function that handles all its IO internally is pure from the outside. This replaces exceptions, async/await, DI, and mocking with one mechanism. Testing is just handling effects with test data.
Multi-arity: get lost! fork me?
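If I'm reading the handle example right, a test is then just the same call run under a handler that feeds canned data instead of touching the filesystem. A rough sketch, my own guess going only by the snippet above (load-config is the same placeholder, and whether top-level [let]/[println] work exactly like this is an assumption):
; feed canned file contents instead of doing real IO
[let result
  [handle [load-config "app.toml"]
    [IO.read-file path] [resume "mock config contents"]
    [Fail.fail msg] "fallback"]]
[println result]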
The pattern matching example has a type Shape which is never referenced, and this seems to conflict with the idea that you never write a type. Am I missing something obvious?
I think they mean you never write types for your variables or functions. They don't mean you can't create types. That's the reference to the Hindley–Milner type system and type inference. You don't have to say
x : Nat
x = 5
You just say x = 5
I personally don't like that you don't seem to be able to manually describe the type for a fn/var, because it's very useful when prototyping to write stubs where you provide the typedef but then the actual variable/function is just marked as "todo"
Neat! I think the website could use a bit more information about how the "global" Effect handlers work, and whether it's possible to opt-in to that functionality yourself when writing Effects.
That being said I took a look at the roadmap and the next major release is the one that focuses on Effects, so perhaps I'm jumping the gun a tad. Maybe I'll whip this out for AoC this year!
Very cool! I’ve been flirting with the idea of biting the bullet and moving more towards language extensions around protobuf/grpc vs. just tools, so it’s really great to see projects on the other side of that kind of decision shipping, and to see what choices they made.
Why the square brackets in particular? Notation is such an annoying part of this stuff; I’m actually leaning towards pushing a lot of structure to the filesystem.
1. I don't know much about HM systems mathematically, but how do the effect handlers interact with type inference? I thought there were some issues with automatic inference there.
2. The macro examples on the website don't show binding situations. Are the macros hygienic like in Scheme?
3. Why the choice of [] over ()?
good questions
1. effects are tracked in the type system as row types, so they compose with HM inference pretty naturally. the tricky part is effect polymorphism. Loon handles that similarly to how koka does it, with row polymorphism. no ambiguity issues so far but idk
2. yes, macros are hygienic! documenting some binding situations would make a great first PR :)
3. easier to type!
I guess I'll have to read more about effect handling systems, I'm very much out of my element there.
> yes, macros are hygienic!
I'm glad. Too many lisps chicken out and don't add them.
> easier to type!
Fair enough.
This seems very interesting, I will check it out.
I enjoyed the "Coming from Clojure" section. I guess there seem to be more differences not yet outlined there. One that I spotted is that destructuring is also really different but not mentioned there.
I also tried the example destructuring from the tour page, and the ones below don't run correctly:
[let #[a b c] #[1 2 3]] ; this doesn't compile
[let {name age} {:name "Loon" :age 1}]
[println age] ; age is unbound
> Square brackets replace parentheses for a clean, uniform syntax.
Oh dear, why? Abrasive aesthetics aside, this is bad for people with certain non-English keyboard layouts. Not me, but many do exist.
Better for people with English keyboards though. And I prefer the aesthetics.
Point taken. I forgot that brackets are not shifted on my keyboard.
They do require worse acrobatics than a shift key on a German keyboard, though - one of the Alt keys is special and needed to trigger them, if memory serves.
Well, that's another argument for everyone to use an English layout for coding, I suppose.
Is it better for English keyboards because () are shifted and [] are not ?
Yes, and on a lower row.
It’s an oblique pinky move, though. I find them uncomfortable. Is it just me?
Not for me. Type annotations at API boundaries and bidirectional type checking are better. I don't know why people keep thinking Hindley-Milner is good.
Looks great! However, the website is really slow. Every page takes several seconds to load and trying to open the reference freezes my browser.
Apparently the site itself is written in Loon. The HTML is just a static shell that loads a `boot.js` script[1] that runs some WASM that compiles and evals the Loon source files. I found the source code here[2].
Definitely cool in concept, but very performance-intensive and slow.
I'm assuming the website is written in Loon, and according to the roadmap it's version 0.4 and compilation is planned for 0.7. So it demonstrates that the language works, but it's not optimised yet.
exactly! I didn't post this (thank u whoever did though) so wasn't ready to launch yet. but the idea is it will SSR and hydrate each page. I want to pull it all out into a framework congruent to Next.js
True beauty. Wow. My only ask would be optional type annotations for function and type parameters, so it's easy to fully describe your interfaces in code.
How is it related to the Standard ML (SML) family of languages?
This reminds me a lot of REBOL
No parenthesis no gain
It's such a lot of effort to make a language like this. I don't get why they don't just put in like 2% more effort and add syntax that makes it less awful for humans. Nobody really wants to write `[* 5 5]` do they?
[fn square [x] [* x x]]
Could very easily be fn square(x) = x * x;
Or something like that, which is much more readable.
Also:
> Hindley-Milner inference eliminates type annotations.
I think it's pretty widely agreed at this point that global type inference is a bad idea. The downsides outweigh the upsides. Specifically: much worse errors & much less readable code.
> Nobody really wants to write `[* 5 5]` do they?
I do. The advantage being that if you suddenly realize that you want to do super-duper-multiplication instead of regular multiplication, you can just change the name instead of having to rewrite the entire expression. And honestly, having a few random functions be called differently from others feels gross.
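Concretely (super-multiply is obviously made up):
[* x x]              ; regular multiplication
[super-multiply x x] ; swapping in the new operation changes only the head symbol
With infix, x * x would have to become super_multiply(x, x), which changes the shape of the whole expression.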
You never used Lisp-like languages, did you?
There’s no use arguing. As the ancient Lisp proverb says, when the programmer is ready, the parens will disappear. Until then, you’re just wasting your breath.
No because the syntax is so awful. Programming languages are consumed by machines but written by humans. You need to find a middle ground that works for both. That's (one of the reasons) why we don't all program in assembly any more.
Lisp and similar are just "hey it's really easy to write a parser if we just make all programmers write the AST directly!". Cool if the goal of your language is a really simple parser. Not so cool if you want to make it pleasant to read for humans.
I've never used a Lisp either, but I get the impression that "forcing you to write the AST" is sort of the secret sauce. That is, if your source code is basically an AST to begin with, then transforming that AST programmatically (i.e. macros) is much more ergonomic. So you do, which means that Lisp ends up operating at a higher level of abstraction than most languages because you can basically create DSL on the fly for whatever you're doing.
That's my impression, at least. Like I said, I've never actually used a Lisp. Maybe I'm put off by the smug superiority of so many Lisp people who presume that using Lisp makes them better at programming, smarter, and probably morally superior to me.
As someone who writes a lot of Scheme, I agree that the math syntax is not good. There have been proposals to add infix expressions (https://srfi.schemers.org/srfi-105/) but nobody seems to want them, or can agree on specifics.
However, code that is mostly function calls is fine for me, since those would have parentheses anyways in C++/Rust/whatever. In that case it makes the language more regular, which is nice for writing macros.
I'd be curious to hear your opinion on wisp (https://srfi.schemers.org/srfi-119/srfi-119.html) and the Readable project (https://srfi.schemers.org/srfi-110/srfi-110.html) which are significant indentation syntaxes for Lisp languages that are still closely related to the AST and allow for easy macro writing.
Earlier last year, I "quietly" introduced an infix support macro into TXR Lisp.
I devised a well-crafted macro expansion hooking mechanism (public, documented) in support of it.
It works by creating a lexical contour in which infix expressions are recognized without being delimited in any way (no curly brace read syntax translating to a special representation or anything), and transformed to ordinary Lisp.
A translation of the FFT routine from Numerical Recipes in C appears among the infix test cases:
https://www.kylheku.com/cgit/txr/tree/tests/012/infix.tl?id=...
The entire body is wrapped in the (ifx ...) macro and then inside it you can do things like (while (x < 2) ...).
In completing this work, I have introduced an innovation to operator precedence parsing, the "Precedence Demotion Rule" which allows certain kinds of expressions to be written intuitively without parentheses.
Everything is documented in detail:
Wisp looks pretty good! I'd have to try it (or see longer examples) to know for sure but it's exactly the sort of thing I meant.
This view is false because what is hard to parse for machines also presents difficulty for humans.
We deal with most languages (Lisp family and not) via indentation, to indicate the major organization, so that there isn't a lot left to parse in a line of code (unless someone wants to be "that" programmer).
> This view is false because what is hard to parse for machines also presents difficulty for humans.
Yes definitely to some extent, but they aren't perfectly aligned. Most languages make things a bit harder to parse for machines but easier for humans. Some get it wrong (e.g. I would say OCaml is hard to parse for humans, and some of C's syntax too like the mental type declaration syntax). I don't think you could say that e.g. Dart is harder to parse for humans than Lisp, even though it's clearly harder for machines.
I like the syntax with the parentheses, man. The consistency is the thing.