A point the authors didn't make is that Common Lisp compilers are quite fast compared to e.g. C++ compilers. So even in rare cases where you do need to recompile everything, the cycle time is short.
CCL's compiler is lightning-fast. I can recompile an entire system in CCL almost as fast as I can load the compiled object code. SBCL's compiler is slower (while often generating faster code because it does more work at compile time), but it's still much faster than a typical C++ compiler.
It does, in the form of a sane macro system with the full language available for use at compilation time - something that C++ never had and still doesn't have. No, consteval and constexpr aren't equivalent. No, CPP is not a sane macro system. No, don't even get me started about templates.
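To make that concrete, here's a throwaway sketch (hypothetical names; any CL will do) where ordinary run-time code - LOOP, SIN, COERCE - runs at macroexpansion time, so the table is computed while compiling rather than when the program runs:

    ;; the LOOP below executes at macroexpansion time, with the whole
    ;; language available
    (defmacro define-sin-table (name size)
      `(defparameter ,name
         ',(coerce (loop for i below size
                         collect (coerce (sin (/ (* 2 pi i) size))
                                         'single-float))
                   '(vector single-float))))

    (define-sin-table *sin-256* 256)   ; table built at compile time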
Yes. That's a consequence of language design - unboxed arrays of structs mean no object identity, which means that operators like SETF and EQ no longer have their invariants satisfied when performing assignment to such arrays.
Sure, there may be reasons for it, but it means you can’t really build zero cost abstractions. For example, you can’t make a simple 2D vector object with the standard operations defined over it and then store those vectors in flat arrays. This is something that can trivially be done in C++, Rust, etc.
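A minimal sketch of the limitation (assuming typical implementation behavior, where unrecognized element types are upgraded to T):

    (defstruct vec2
      (x 0.0 :type single-float)
      (y 0.0 :type single-float))

    ;; the :ELEMENT-TYPE is upgraded to T, so this is an array of
    ;; *references* to heap-allocated VEC2 objects, not a flat layout:
    (defvar *boxed* (make-array 10 :element-type 'vec2
                                   :initial-element (make-vec2)))

    ;; only primitive specialized types get flat storage, at the cost of
    ;; giving up the VEC2 abstraction (x at index 2i, y at 2i+1):
    (defvar *flat* (make-array 20 :element-type 'single-float
                                  :initial-element 0.0))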
Probably the best workaround for this is going into foreign-memory land - grab some raw memory, define operators over it, manage all that yourself. CFFI works across enough implementations for this to be feasible.
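Something along these lines (a sketch only; POINT and *POINTS* are made-up names, but the CFFI operators are real):

    ;; assumes CFFI is loaded
    (cffi:defcstruct point
      (x :float)
      (y :float))

    ;; a flat, C-style array of 10 points in foreign memory: no boxing,
    ;; but also no GC, so you free it yourself
    (defvar *points* (cffi:foreign-alloc '(:struct point) :count 10))

    (defun point-x (i)
      (cffi:foreign-slot-value (cffi:mem-aptr *points* '(:struct point) i)
                               '(:struct point) 'x))

    ;; ... and (cffi:foreign-free *points*) when done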
Yeah. Common Lisp has quirks and you need to adapt your abstractions to those. Sometimes it's easy, sometimes it's rewarding, sometimes it's just annoying.
This week I'm scaling back some abstractions, writing more Fortran-like code on specialized arrays than individual objects, for the sake of zero cost.
I appreciate a little bit of a headwind against inventing new abstractions too casually. But it does remind me of programming in C or Forth. That's not everyone's cup of tea.
This one isn’t a quirk, it’s a fundamental constraint on the kind of code that you can write in CL. C++, Rust, Go (and even Java to some extent, with all the wizardry in Hotspot) all allow you to build a 2D vector abstraction without requiring you to box your vectors. In CL you simply can’t do this. Just look at the kind of bonkers workarounds my comment gave rise to: https://news.ycombinator.com/item?id=35855576
I don't really see the problem. If you want to define the bit layout of your objects then you define them using FFI. Support for FFI objects is comprehensive. It is exactly the tool for the job in Common Lisp.
Same for LuaJIT. I've spent years happily writing high-level Lua code that's actually operating on objects whose bit layout is explicitly defined using FFI at the C level of abstraction. It doesn't feel much different to objects or dictionaries to me.
Sure, it is great that those other languages have native support for inlining storage of structs into various containers, but the lack of such in Common Lisp only makes me write quirky code and doesn't really hold me back.
It’s not a problem if you don’t need zero cost abstractions (which indeed you may not, depending on your domain). But if you do, then using the FFI to define unboxed arrays of a simple 2D point class is considerably less attractive than writing
struct Point { float x, y; };
Point points[10];
If you really can’t see this then I think we’re at an impasse.
It would be useful to have a more abstract, portable form of this (vaguely analogous to WebAssembly): a way to write code for a low level virtual machine that translates to native code.
Nothing in your link suggests a practical way of constructing unboxed arrays of structs in CL. And even if it did, it would presumably be one that worked only on a specific architecture.
The C++ that we have can only be used as an external tool: we write a character-level program into a pipe or file, which is read by an external program. This drops an object file that we have to process. I'm saying that we could have some Algol-like sublanguage with value semantics, and unboxed types. It could output code for a virtual machine, which could be further translated to native code. It would all be in Lisp, not requiring any external tools unrelated to Lisp to be installed.
The source file I linked to shows how you can write native code without leaving Lisp. In that native code, you can move the stack to allocate an unboxed array, and whatever else. But from that it's not a huge leap into having a similar thing but at a higher, and machine-independent level.
That's all pie in the sky isn't it? I was talking about CL. The only practical suggestion I can take from this is that one could drop down to inline assembly in some CL implementations. It's unclear to me how that would help with the problem of defining unboxed arrays of structs.
I am not sure what you mean. If I want to use unboxed arrays of structs I can easily do so via FFI and create some sort of DSL for working with them. If this sounds like too much work for you, then yes, we are at an impasse. I think CL is great in this way, in that writing FFIs is pretty natural. Also, if you need performant code, why would you worry too much about portability across implementations? Maybe I am missing something.
Anyway, in Haskell you use unboxed arrays but without many of the high-level benefits of the rest of the language. I fail to see how this is fundamentally different.
>If this sounds like too much work for you, then yes, we are at an impasse
It's not necessarily too much work (that depends on the context), but it is more work.
I am not sure why there is so much reluctance to acknowledge this one particular disadvantage of Common Lisp as compared to e.g. C++, Rust, etc. Common Lisp is not a perfect programming language. It does not need to be reflexively defended against all potential criticisms.
Please note that none of my posts says "Common Lisp sucks", or "you should use Language X instead of Common Lisp", or "the lack of proper support for unboxed arrays outweighs all the potential advantages of Common Lisp in all circumstances".
Not sure why you bring up Haskell, but yes, the same criticism applies to Haskell as well. The difference is really just a cultural one: even the most fanatical Haskell advocates would probably acknowledge that it's not a great language to use if you need fine control over memory layout.
I was more confused about the point you were making - more as an educational piece for myself, in the sense that maybe there is something I'm missing in my knowledge; you seem to know what you are talking about. That said, as far as I'm concerned, you made your point clear in this reply. Common Lisp is primarily a high-level language, albeit one with really good low-level capabilities. However, for tasks that require fine-grained stack memory control, or even abstractions over that, it would be hard to see how it could compete with the C family. I think this is perfectly fine.
I brought up Haskell as an example of a high-level language that does unboxing of arrays, but also because your handle[0] suggested to me that you know quite a bit about Haskell and will be able to inform me accordingly :)
For what it's worth, I personally think it is a very good thing for a programmer to have thorough knowledge of both high- and low-level languages, and a great thing if they are able to combine them. For me Lisp fits the latter purpose, but that's not important for everyone, and people are definitely free to dislike Lisp.
I wouldn't go that far, personally. It's convenient to be able to have unboxed arrays of arbitrary non-primitive datatypes, but you can work around it.
Given that I use Common Lisp for numerical computations including ML, I hope not :) Given that Python reigns supreme in this field, the clear answer is no. However, it is worth keeping in mind that Common Lisp is a high-level language with very good low-level features. You can, for example, use C data structures seamlessly if you need fine-grained memory control, or write inline assembly.
I'll grant you that's a kludge compared with your example. It wouldn't hold me back though. And I wouldn't consider trading in my lovely late-bound programming environment for an issue of this magnitude.
A design decision which needs to be made is at what level of abstraction should the data be "fixed", i.e. not to be manipulated with the full power of CL. This is often a flexibility vs. efficiency tradeoff.
In my 3D CL system [0], I have so far kept all geometric data as naive CLOS classes as the intent of the system is to provide a sandbox for experimentation. I have thought of, perhaps one day, representing the geometry as a foreign library for efficient passing to the GPU.
Sure, taken separately that looks more elegant. But, provided your whole program is complex enough, I think if you take a step back your language of choice is going to look like a pigsty compared to the same thing written in Common Lisp.
I think the amazing thing about Common Lisp as a high-level language is that it can be a low-level language too. IMO it is an unmatched balance of a high-/low-level language.
Are there any examples of elegant CL code that performs lots of vector geometry calculations? I'm skeptical that this sort of code would come out elegantly in CL (at least if it had to perform reasonably well).
This isn't a direct answer to your question (it's not a computational geometry codebase), but one option for implementing geometry might be to use MAGICL. [1] It is optionally and transparently accelerated by BLAS/LAPACK.
That isn't what Bjarne means by zero cost abstractions.
It means the abstraction produces the same code as writing the equivalent manually without the abstraction, e.g. having a class with virtual methods versus having a struct with function pointers as fields.
I interpret the term compositionally. But CL doesn't have zero cost abstractions in that sense either.
Take the example I mentioned. Say that you’re looping through an array of pairs of 2D vectors and calculating the dot product of each pair. In C++ you can use your 2D vector class without any additional cost. In CL you either need to remove that abstraction (and deal with flat arrays of scalars) or incur the cost of boxing.
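Concretely, the unabstracted CL version might look like this (a sketch with made-up names, using a structure-of-arrays layout and type declarations so a compiler like SBCL keeps everything unboxed):

    (defun dot-products (ax ay bx by out)
      (declare (type (simple-array single-float (*)) ax ay bx by out)
               (optimize (speed 3) (safety 1)))
      (dotimes (i (length out) out)
        (setf (aref out i)
              (+ (* (aref ax i) (aref bx i))
                 (* (aref ay i) (aref by i))))))

It's fast, but the 2D-vector abstraction is gone: the "vectors" are just parallel columns of floats.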
More generally, if every object with its own methods requires a boxed representation, then that severely limits the range of zero cost abstractions that you can create. If using the abstraction requires boxing then it’s not zero cost. (If Bjarne disagrees on that point, then I disagree with him!)
Anyway, I’m sure you know all this, so I’m not really sure what point you’re trying to make here. I don’t think anyone would suggest that CL is a good language for building zero cost abstractions, whatever the precise definition of the term.
I am interested though: how would you define an unboxed array of structs in Genera's dialect of Lisp?
Your quote clearly applies to my example. You can avoid the boxing by hand coding the dot product computation over flat arrays; you can't avoid the boxing if you use a 2D vector abstraction (in CL).
I don't think so? Judging by the documentation that's just the usual option (also available in CL) to have struct fields stored in an array, list, name value pair list, etc. There's no suggestion that it would accomplish unboxing in general. I guess it is possible that if each field of the struct had the same type (say, a float), then the backing storage for the struct would be an unboxed array of floats, and an array of such structs would then come out with its backing storage as an unboxed multidimensional array of floats. But:
* I'd really want to see this actually working to believe it. The documentation doesn't make clear what would happen in this circumstance. (Time to spin up OpenGenera? Ha!)
* At best this works for structures with homogeneous field types. My example of a 2D vector happens to fit that criterion, but it is also useful to have unboxed arrays of structures with heterogeneous field types.
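For reference, the closest standard-CL analogue (a sketch) makes one struct's fields unboxed, but not an array of such structs:

    (defstruct (point (:type (vector single-float)))
      (x 0.0 :type single-float)
      (y 0.0 :type single-float))

    ;; MAKE-POINT returns a specialized (VECTOR SINGLE-FLOAT 2):
    (defvar *p* (make-point :x 1.0 :y 2.0))   ; => #(1.0 2.0)
    (point-x *p*)                             ; => 1.0

    ;; ...but an array of such points is still an array of references:
    (defvar *ps* (make-array 10 :initial-element (make-point)))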
Accessors take care of reading/writing from the backing storage mechanism.
If you want to be really picky, don't forget the whole OS was written in Lisp, and even the C, Pascal and Ada compilers targeted it, and they mostly follow the same semantics. The same folder has the manuals for C and Pascal.
And if you want to be really sure how it goes, there are the low-level forms for assembly-like coding, e.g. sys:art-q, which is used to pack C structures into arrays.
(:type :array) is documented as the default. So there must be something more required to create an unboxed array of structs than just that.
If, for whatever reason, it is important to you to persuade the internet that the Lisp dialect of a long-defunct operating system was able to define unboxed arrays of structs, then I think you should at least show example code demonstrating this. The more salient point, however, is that Common Lisp can't do this.
> Say that you’re looping through an array of pairs of 2D vectors and calculating the dot product of each pair. In C++ you can use your 2D vector class without any additional cost. In CL you either need to remove that abstraction (and deal with flat arrays of scalars) or incur the cost of boxing.
Or maybe use compiler macros to remove the abstraction without the cost of boxing?
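Something like this sketch (made-up names; it only fires when both arguments are literal VEC2 constructor calls, and it glosses over evaluation order, so it's far from a general solution):

    (defstruct (vec2 (:constructor vec2 (x y)))
      (x 0.0 :type single-float)
      (y 0.0 :type single-float))

    (defun dot (a b)
      (+ (* (vec2-x a) (vec2-x b))
         (* (vec2-y a) (vec2-y b))))

    ;; rewrite (DOT (VEC2 x1 y1) (VEC2 x2 y2)) into open-coded float
    ;; arithmetic, so no VEC2 is ever allocated for that call
    (define-compiler-macro dot (&whole form a b)
      (if (and (consp a) (eq (first a) 'vec2)
               (consp b) (eq (first b) 'vec2))
          `(+ (* ,(second a) ,(second b))
              (* ,(third a) ,(third b)))
          form))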
Here is the relevant paragraph from Stroustrup, B (2013): The C++ programming language, 4th edition. Pearson Education, page 10:
"What you don’t use you don’t pay for. If programmers can hand-write reasonable code to
simulate a language feature or a fundamental abstraction and provide even slightly better
performance, someone will do so, and many will imitate. Therefore, a language feature and
a fundamental abstraction must be designed not to waste a single byte or a single processor
cycle compared to equivalent alternatives. This is known as the zero-overhead principle."
I don't know who came up with "zero cost" abstractions, but it's wrong since there is no zero cost. For the people chanting "zero cost" the cost might not be obvious though.
Not quite to the degree that C++ does, but it's pretty good. There are several places in the standard that specifically allow for making a dynamism/performance tradeoff.
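For example, via declarations (a sketch; an implementation is free to ignore them, but SBCL takes them seriously):

    (defun sum-floats (v)
      ;; high SPEED and low SAFETY license the compiler to trade away
      ;; dynamism: no runtime type checks, unboxed float arithmetic
      (declare (type (simple-array single-float (*)) v)
               (optimize (speed 3) (safety 0)))
      (let ((sum 0.0))
        (declare (type single-float sum))
        (loop for x across v do (incf sum x))
        sum))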
Here's a question the article doesn't answer: why Common Lisp? I.e. why not Scheme? Scheme is a plenty powerful language these days with all the libraries it has available. Granted, it doesn't have history going back to the 1950s, but it does date back to the 1970s. There are also multiple implementations, each with its own set of strengths. And there are some solid standards written for the language that have good implementations.
It seems to have all the exact same features they're saying make CL a good choice (e.g. it's hard for me to imagine that a language older than most working programmers doesn't have "longevity" or "staying power.") Besides, if you click through to https://stevelosh.com/blog/2018/08/a-road-to-common-lisp/#s8... (linked in the article), it shows a picture of the index of CLtL, where the entry for 'kludges' spans the entire book lol!
* Even the fastest of the schemes is slower than SBCL.
* The interactive development features in Common Lisp are there because they got baked into the spec. They aren’t really a thing with scheme.
* Common Lisp has a specification everyone follows. Scheme has THREE completely different specs everyone fights over. Version 6 is the LEAST common (though probably the best for real work), but it’s also the choice of Chez scheme which is the fastest implementation (last I checked).
* Related are the SRFIs. There are over 200 of them. The language isn't useful unless a bunch of these have been baked into the compiler. This is HORRIBLE for users. If you want to do something, you must search the SRFIs, find the correct one, look up whether it's actually supported, then read the spec, because implementations don't actually document this stuff in their own docs. https://srfi.schemers.org/
This all makes portability a disaster. Imagine trying to make a library that works across three versions of the language (all with different features and ways of doing basic things like importing code). Next, you have to work around which implementations support which basic SRFIs (e.g., can I use list operators beyond car/cdr? Can I use basic string operators? Mutable arrays? Hash tables? Records? Threads? Integer bitwise operators? Streams? The other 200+ SRFIs that cover basic stuff that should have been in the language itself?).
The result is everyone constantly reinventing the wheel in slightly different ways. R7RS-large was supposed to fix some of the worst of these issues, but we're 10 years out now and still waiting.
I'm not qualified to respond, but I switched from Scheme to Common Lisp for CL's condition system and REPL.
Other than Smalltalk and some very niche languages, I couldn't find any languages that, after hitting an error, let me reliably interactively inject new/replacement code from my code editor and then re-run the function (obviously assuming it is reasonable to do so).
That's a very good question and the same thing I ask myself whenever I consider using some lisp (for hobby purposes mostly).
I came to the conclusion that scheme is a better language from a technical standpoint and I enjoy using it more, but the primary issue is that lisp is already sort of niche, and scheme is like a niche inside of a niche, which in practice results in the "ecosystem" being very weak.
By ecosystem, I mean how much activity there is on the implementations, but more so the available libraries and the quality of those libraries!
An example of the ecosystem thing would be elisp. elisp has a ton of libraries available: there's "s.el" for string manipulation, dash for lists, "f.el" for files and paths, "ht" for hashtables, and a lot more. Vanilla Emacs comes with around 5 million lines worth of Elisp OOTB too; this includes stuff like regex, a custom sexpr-based regex DSL called "rx", json and xml parsers, sqlite bindings, etc. Very rarely do I feel like I'm missing something in elisp, with the exception of concurrency support!
When I used scheme, I was not able to find alternatives to many of these things and felt pretty limited. Schemes do tend to have better "native" support for some things like list manipulation, so it's at least usable without something like dash.
Anyways, Guile Scheme seems like the most usable Scheme to me right now and gets a bit of attention thanks to Guix, but I think Common Lisp gets a bit more attention, and neither can compare to something like Ruby or Python sadly! So this could be one reason one might prefer Common Lisp over Scheme, I guess.
So for me personally, common lisp feels too old and clunky to comfortably use, scheme is nice but lacks support in a lot of areas, so in practice I end up not using either of them. I have yet to try racket though and it looks like it addresses most of my issues.
I've never used it, but the big disadvantage seems to be that unless you're going to do some super advanced macrology, any function you run within a thread needs to explicitly call `thread-yield` in order to yield back to the scheduler so other threads can run. I don't know of any library or anything that makes it so you can just take a function that isn't thread-aware, run it concurrently with other possibly not thread-aware functions, and have it all work sensibly, but at least the building blocks are there.
---
If that doesn't suit you, Emacs 25 introduced generators!
Not to my knowledge. Apparently, according to https://www.emacswiki.org/emacs/NoThreading, dynamic scope makes it devilishly difficult to implement OS-level threading. It is, apparently, possible to automatically translate elisp into a lexically scoped lisp dialect and rebuild emacs on that foundation, but, again, to my knowledge, nobody's done that. And there are occasional mutterings about making a new, natively multithreaded emacs, but nothing's ever made it into the GNU emacs repo, AFAIK.
It does have process-based concurrency, which is basically where you spawn off a whole new emacs, then run your code within that. I've never used it, so I don't know how it works, other than the obvious disadvantages of a large startup time and potentially high memory use to start up another instance of emacs.
You are indeed right. I had understood that actual preemptive threads had been added, but it seems that for now it is still cooperative and requires explicit yields.
Note that elisp has lexical scoping now, but it will be years of course before all packages will have been updated (and for many things you still want dynamic scoping).
You might want to check out Racket. It's not immediately obvious that it's a Scheme, but it has a much more vast ecosystem of libraries than any other Scheme I know of.
Racket qua Racket (and PLT Scheme before the rename) didn't conform to any of the existing Scheme specs (it was something like R5RS minus some things it didn’t like, plus some things from R6RS, plus a whole bunch of its own stuff.)
After looking at the Racket website, from the POV of someone who wants to use or learn Scheme, that doesn't sound like a useful distinction. They have R5RS and R6RS implementations that look to have reasonable conformance with the corresponding standards. I don't see how it even matters that the language called "Racket" is a lispy language that implements (among other things) R5RS, R6RS, and mzscheme. It sounds an awful lot like saying "foo-scheme isn't a scheme because it's implemented partly in C."
I recently decided to try a scheme and did not find any that had supported Windows as a first class platform, so that's one limitation. By comparison SBCL just runs on Windows without any difficulties.
Racket has supported Windows since I tried it out in high school, which was a long, long time ago.
The thing with Schemes is that you can implement one in an afternoon, and you can probably add some neato feature in the same afternoon, so there are a ton of Schemes out there. But you can't build community, ecosystem of libraries, platform support, etc. that you would want for a language you're going to use in production.
Racket has been around for a long time, is used in various production systems, and has a solid community. It actually has the best standard library I know of for writing desktop GUIs, which might not be much of a claim to fame in 2023, but makes it super useful for writing small desktop tools. It definitely does some things better than others, like any language, but I think I'd be comfortable committing to it for a production project any day, which is more than I can say for any other Scheme.
I like Scheme, but prefer Common Lisp for its environments and the old-school feel of the language.
> And there are some solid standards
Good things don't come alone: R5RS, R6RS, R7RS, RNRS + SRFI X..Y, IEEE Scheme, Racket, ...
> entry for 'kludges'
That's a form of self-deprecating humor, if you missed it. There are many more funny index entries in CLtL. The core designers did not take themselves too seriously, especially Guy L. Steele.
For example Dave Moon said about the search for a name for the language:
"...whatever we call this common Lisp" and this time, among great sadness and consternation that a better name could not be had, it was selected."
> That's a form of self-deprecating humor, if you missed it.
No, of course, I didn't miss that. ;) Yes, it was funny! But the mere fact that it's a joke made by none other than Guy Steele, and that Lisp as a language goes back to 1960, which is old enough to make it eligible for a retirement pension in most countries, tells me that there's a bit of truth to it.
Scheme is from 1975 with huge influences from Lisp -> 48 years.
Sussman is 76, Steele is 68.
I don't think it's only a problem of Common Lisp, which, btw, is younger than Scheme (which wasn't itself created in a vacuum). The Scheme Report revisions up to R5RS, for example, ignored a lot of practical problems (no error handling, no namespaces, no object system, ...). The kludges were then outside of the language definition. But having no error handling in the language definition is not better than having one with less-than-ideal integration into the language. It's just a different way to deal with the problem of features which need deep language integration: write a larger standard with compromises, or just pretend that one defines a language primarily for teaching computer science concepts (and which lacks really necessary features like error handling). The language definition then should only have as many (very dense) pages as a student could handle in an introductory course.
R6RS was an attempt to get a more complete language, which then kind of failed and was controversial amongst users and implementors.
I'd think that defining a language with kludges, but one which tries to address practical problems, is a respectable and useful approach. Being self-aware that not everything is ideal is also nothing to look down on.
After learning Python, I wanted to take the next step and find what was better. I looked into a lot of languages, reading books on Haskell and Lisp and many others. At least for my use cases (desktop scripting, numerical work, etc.) I didn't find Lisp to be superior. Most of what I actually needed to do could be done more simply in Python. Python's REPL isn't near as good as CL's, but it's good enough. Then the batteries included were huge: for practically any task I could whip something up pretty quick. There are definitely some areas where Lisp is obviously superior to Python, such as performance of plain code, or things such as Grammarly that would likely have been harder in Python. I just think for the vast majority of programmers, enough good parts were pulled from Lisp.
As a Lisp fan who sometimes codes in Scheme and Common Lisp whenever I get the chance, I agree with you. The gap has certainly narrowed in the past 20 or so years between Lisp and widely-used programming languages. In addition, the rise of statically-typed functional programming languages like OCaml and Haskell provide another alternative for those who love functional programming but want Hindley-Milner types.
I still think Lisp, whether in the form of Scheme or Common Lisp (I haven’t tried Clojure), is quite enlightening due to the immense flexibility these languages provide. However, I’m reminded of the rationale of MIT’s decision back in 2009 to move away from SICP and Scheme in the intro CS course in favor of Python: the vast majority of developers aren’t building new ecosystems from the ground up, but are instead reliant on an ecosystem of libraries. Lisps are wonderful for creating whole worlds due to the powerful tools they provide. I’m currently working on a side project where Lisp’s flexibility comes in handy. But if I’m effectively gluing together APIs to build a solution (and this is not an insult), then do I need macros, MOP, multiple dispatch, and homoiconicity? Thus, many programmers do not need the full power of Lisp to get their jobs done. There’s also the fact that languages like Python and JavaScript have a lot more commercial and community backing than Scheme and Common Lisp.
If I’m writing something complex like a DBMS or web browser, then Common Lisp would be one of my first choices. However, if I’m writing a machine learning application or a web application, then I’ll most likely reach for Python or JavaScript, respectively, due to the library ecosystems.
I feel exactly the same. I wish I did something cool enough to require the full power of lisp, but generally speaking...I just need to glue some stuff together. Other people will have different needs though. I can still appreciate the beauty of lisp and I'm glad its out there in case I ever need it!
Oh, and I do use CL's super powers during development: interactive debugger, restarting a point in the stackframe, fast and incremental compilation, good type checks by SBCL… and I get a fast binary.
Learning didn't come without a few gotchas, but they're all better documented out there now.
If you have any interest in submitting a talk for a polyglot conference in South Carolina this August, you're talking about several languages that haven't yet been represented in the submissions.
I don't think Lisp, Scheme, OCaml or Haskell talks have been submitted yet actually.
> if I’m writing a machine learning application or a web application, then I’ll most likely reach for Python or JavaScript, respectively, due to the library ecosystems.
This sounds eminently sensible but it never pans out that way in practice for me:
How do you understand the ML algorithms if you haven't implemented them yourself?
And if you have implemented them for the sake of understanding, in whatever language you fancied, why not just keep using that implementation?
The reason I'd choose Python in practice would be as a social compromise with collaborators.
(I say this having recently switched from Julia to Common Lisp because I didn't feel the ecosystem was giving me much practical benefit.)
Do you drive a car you built from scratch? How can you drive from A to B without first understanding how a car works by building one? And since you already built one, why not just keep using that one?
Or maybe you have built a car. But did you type your message on a computer you built yourself using some silicon and a home-baked x-ray lithography machine?
I definitely think there can be value in reimplementing something for educational purposes, but often knowing how to use something without necessarily being able to build it yourself is just fine. And those things you do build for educational purposes should almost always be abandoned after serving their educational purpose.
My limited experience with probabilistic sorts of programming, though, is that the risk of misunderstanding/misuse is very high relative to the implementation complexity. Cars and X-ray machines don't have simple implementations.
Often very little code but a lot of opportunities to goof up one assumption or another.
But yeah I'm probably overgeneralizing from limited experience.
The one thing that I most wish Python had is Common Lisp's restartable conditions. They're like exceptions but they don't break your whole process if you don't handle them in the code. If you forgot to define a variable, you'll have the option to define it and resume computation. If you forgot to provide a value, you'll be able to provide one now and resume computation. As it is, Python just dies if something unforeseen happens, which feels like a missed opportunity given that Python is a dynamic language (for which it pays a price).
I work in ML, and launching a job on a cluster just to have it fail an hour later on a typo got old ten years ago. Being able to resume after silly mistakes would easily reduce debugging time by an order of magnitude, just because I need to run the job only once and not ten times.
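For anyone who hasn't seen it, a minimal sketch (made-up names) of what this looks like in CL:

    (defun fetch-config (table key)
      (or (gethash key table)
          (restart-case (error "No value for ~S" key)
            (use-value (v)
              :report "Supply a value to use this time."
              v)
            (store-value (v)
              :report "Supply a value and remember it."
              (setf (gethash key table) v)))))

If nothing handles the error, the debugger pops up offering those restarts; pick one, type in a value, and the computation resumes right where it stopped.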
> After learning Python, I wanted to take the next step and find what was better
Elixir and Golang. Both are learned quite easily and offer a lot (Golang's ecosystem is bigger but Elixir's is very focused and has all the standard stuff you will need).
If you are willing to spend more time and/or care about resource usage -- Rust.
What exactly is remarkable about that? hn serves a couple thousand requests/s at most and is text only content. Which language/runtime *couldn't* do the same?
Is Arc still built on top of (formerly PLT Scheme, now) Racket? The Wikipedia page[1] says so. I'm more familiar with Common Lisp (which blazes) than Racket, but these seem like impressive stats to me.
Is it? HN is mostly static content. Ten years ago I wrote a simple web application using twister (Python) that did well over a thousand requests per second. If we count just handling requests (and not churning out much data or doing any real processing), japronto for Python claims to handle something like 1.2 million req/s on a single thread.
The most frequently accessed parts of the site are always in flux, with users constantly adding, removing, and editing content.
I don’t know how you could statically generate that. At most, you could memoize the parts of the page that haven’t changed yet and return them without a complete lookup, but keeping that in sync with a database is not an easy problem.
You could regenerate the start page every 5s, and comment pages whenever needed (new, edited, or deleted comments), or on request if more than 30s have passed since the last update.
That would leave the serving to an application that's good at that (Varnish?).
Finding the right heuristic regarding page regeneration might take some tweaking, but the amount of generating you have to do is small enough not to bother any single threaded program.
I like lisps syntactically. The only thing stopping me from using it more frequently is the lack of static typing. I wasn't always a stickler for static typing, but I've spent a lot of time working with Scala and TS and the main advantage for me is ease of refactoring and avoiding NPEs.
EDIT: What I'm continuously evaluating for myself is Clojure specifically
Clojure is head and shoulders above other lisps. Watch Rich Hickey's Sermons From the Mount in the years following Clojure's release and you may never be the same again. At least that was my experience. David Nolen's Clojurescript videos are similarly riveting.
Clojure also has far more reach than any other lisp, with implementations for JS, JVM, .NET and recently Dart/Flutter. It also has a library - libpython-clj - for easily importing Python libraries.
Clojure makes simple code easy to write and maintain. It is more practical than most other Lisps by wisely choosing which giants to stand on. E.g.:
- it makes immutability the normal case, yet is performant enough that one rarely has to go back to mutability
- minimal syntax, uniformity, dynamic typing, macro-system, symbols from Lisp
- it extends Lisp with namespaces; even symbols get namespaces
- access to the whole ecosystem of the JVM, of Javascript and mostly to Python's ecosystem
The price for accessing those ecosystems was omitting first class continuations. Which one is more valuable depends on your use case, but for many use cases I am considering that was the right choice.
I would probably say that it is opinionated in a way that makes it good to write services in, and it integrates well with the JVM.
CL has packages, and most implementations have package-local nicknames (PLN), so you get most (all?) of what the Clojure namespaces offer.
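E.g. (a sketch; MY-APP is a made-up package, and PLN support exists in SBCL, CCL, ECL, ABCL and others):

    (defpackage :my-app
      (:use :cl)
      (:local-nicknames (:re :cl-ppcre)))

    (in-package :my-app)

    ;; RE: resolves to CL-PPCRE only inside MY-APP, much like a
    ;; Clojure namespace alias (assumes CL-PPCRE is loaded):
    (re:scan "\\d+" "abc123")   ; => 3, 6, ...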
I think that if anyone were rethinking Scheme, they would steal the immutability things from Clojure (something like HAMTs and RRB trees, or the Scala finger-tree vectors, are more exciting, I would say).
I don't think Clojure brought anything new, per se. It took some nice parts from Common Lisp and Scheme and added an immutability-first paradigm. Nothing new, but getting people to understand how nice immutability is probably involves making the happy path immutable from day 1.
It is not true that "any mutation" causes issues. In fact, most idiomatic usage of mutation is problem-free and completely sound.
One can find out more about polymorphic mutation as a "counterexample" to Hindley-Milner type inference on Wikipedia [1]. It's a well-studied problem with a space of solutions. Purity, weak references, the value restriction, etc. are all ways of dealing with this shortcoming of Hindley-Milner.
Interestingly, one of the first proposed solutions to this problem was suggested by Wright in the journal "LISP and Symbolic Computation" [2].
> When you have a running program you can compile functions, redefine classes, etc. all while the program is running. You are changing the internal state of the image.
This can surely be convenient in certain cases, however most of the times I find myself playing around with unit tests rather than with a single running process. Unit tests are small and fast so I don't even notice any inconvenience with the traditional compilation approach.
I've definitely come around to unit tests, but I think that whether they're good in all circumstances is a matter of taste. Obviously you want to have good test coverage by the time the codebase is starting to solidify- I assume there are people that would argue against that but they would be wrong.
Writing tests at more intermediary stages is not as much a matter of best practice, but I see the appeal now. At some point I started writing one-off functions to rerun the same code over and over in the REPL- that's when I realized it was time to start writing tests even if the API wasn't finalized. But full TDD is a matter of taste. There are people that like to have a really good idea of what they're getting into, and it seems like TDD works well for them. I myself have trouble. When I try to write tests first, I wind up paralyzed by making decisions without knowing the costs and benefits. That's where the REPL comes in handy for me- figuring out what shape I want my data to be, double checking edge cases faster than I could reference the documentation, and playing with new building blocks.
One never notices the inconvenience of not having something one is unaware of.
This is the biggest challenge for answering “why Lisp?” It’s different enough from the programming that most people are used to that it rises to the level of a radical novelty, with all the explanatory difficulties that entails.
In fact, it’s a defining characteristic of the radical novelty that it can only be understood experientially. And even then experience is merely necessary, but certainly not sufficient.
It's possible that you have experienced an enlightenment that I have yet to, but ultimately I like lisp because parentheses are pretty, I hate remembering syntax, keywords/symbols are nicer than immutable strings, and typing out commas makes me sad (not that you'd know it from the way I abuse them in this comment).
I mean, lots of languages have REPLs now- people get that they're really useful! This isn't secret knowledge, the pros and cons are on display. Macros are nice (though of course Ruby has them too), but I'm always surprised by how far people get with method call chaining (hell, they have monads over in JS land now). The experience of writing lisp is very similar to that of writing Ruby, or even Javascript- really any dynamically typed language with good enough lambda support. It's by no means as strange as an array or concatenative language. It would surprise me very much if the reason you and I like lisp, or other people don't like lisp, was anything other than taste!
Nothing you say is in any way incorrect and I agree with it. However, for me, the experience of writing Lisp is very different from that of writing Ruby, or Perl, despite the latter languages having a great many Lispy features. In fact I jokingly call Perl a Lisp-6ish, because of how typeglobs let you mess with the symbol table directly and it has separate namespaces for SCALAR, ARRAY, HASH, CODE (I think that's the subroutine namespace), and a few others if I recall correctly. It's been a bit since I wrote any XS, which is pretty much the best way to grok Perl internals. And that difference comes down to interactivity. I totally agree that REPLs are great and can bring a measure of interactivity to other languages. Except I barely ever use the REPL when I'm writing Lisp. I just evaluate the current form directly. Haven't you ever wondered why DEFPARAMETER and DEFVAR both exist? Having both makes no sense in a world where the Common Lisp REPL is equivalent to, say, irb. I'd say that notebooks are where non-Lisp languages are coming closest to a kind of Lispy/Smalltalky interactive programming environment. And yes you can do some very very clever things with the JS console in a browser that are in tune with the Lisp way. And that's no surprise since JavaScript was originally a Lisp DSL and that very much shaped its philosophy.
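To spell out the DEFVAR/DEFPARAMETER point, a quick sketch (hypothetical variables):

    ;; DEFVAR initializes only if the variable is not already bound, so
    ;; re-evaluating the form in a live image preserves accumulated state:
    (defvar *sessions* (make-hash-table))   ; untouched on re-evaluation

    ;; DEFPARAMETER always re-initializes, for values you want reset
    ;; every time you reload the defining code:
    (defparameter *page-size* 50)           ; reset on re-evaluation

The distinction only earns its keep in a long-lived image that you keep re-evaluating forms into.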
Interestingly enough the Lispiest non Lisp dev experience I've professionally had is writing java 8 using Eclipse on the JVM treating the unit test runner as a kind of C-x C-e. It's no wonder Clojure was such a good fit for the JVM.
I think this is mostly an issue of tooling! I admit I often find myself recompiling a full file because SBCL is just so goddamn quick it doesn't make a difference, so perhaps I'm missing something essential. But with tree sitter you could totally write a plugin to, say, run the literal content of a python expression in the REPL. It's just shuffling text around after all. The fact that people haven't done this may indicate, and I stress may, that this is not something that people are missing.
I personally can't use anything without vim bindings. I have evil mode in Emacs, vimium while I'm in Firefox, vim bindings in Nyxt, and an aversion to using any program that isn't one of the above. This is not just hipsterism- on 10 occasions while writing this comment I've had to delete a kj or jk as I tried to exit edit mode. That doesn't really mean non-vim bindings are defective, if you catch my drift. No judgement here! But it's a possibility you might consider.
Common Lisp borrowed parts of the smalltalk development strategy. You can start a process and gradually update it in very sophisticated ways as you live code against it.
You wouldn’t dream of packaging up your Python REPL and shipping it to users. You certainly wouldn’t open a Python REPL on your production server and start redefining functions and data structures on the fly.
This kind of experience is pretty much limited to Smalltalk, Forth, and Common Lisp (with Erlang having a more limited ability to do live updates).
I know, I enjoy CL quite a bit. I actually currently have REPLs running for a few services, because I like managing them from the inside. But while you can, if you want, deploy major changes by redefining a bunch of stuff on production servers without version control or backups, you...should not.
It's really nice to be able to redefine classes while you're playing around- but CL's handling of class redefinition just isn't up to snuff on a production server. It's just like doing live changes to your schema, only, you see, rather more so. Compared to scp->shutdown->mv->start if you can tolerate <10s of downtime, or the many existing solutions for gradual rollout if you can't, it just doesn't make sense.
If you had to pick between having REPL access in production and REPL access locally, would it be close? Because I value being able to mess around with a REPL while developing thousands of times as much as I like a neat toy in production. And that ability is exactly what Python or Ruby or Erlang give you. Technically lacking compared to the full suite? Perhaps. But we're talking 0.999, not 0.5.
> If you had to pick between having REPL access in production and REPL access locally, would it be close? Because I value being able to mess around with a REPL while developing thousands of times as much as I like a neat toy in production. And that ability is exactly what Python or Ruby or Erlang give you. Technically lacking compared to the full suite? Perhaps. But we're talking 0.999, not 0.5.
I can't speak to Ruby or Erlang, but the Python REPL is much less than 0.999 because of the semantics of "import" making it very challenging to change code once it's been imported.
> Common Lisp borrowed parts of the smalltalk development strategy. You can start a process and gradually update it in very sophisticated ways as you live code against it.
Lisp had interactive programming a decade before Smalltalk. The interactive PDP-1 Lisp is from 1963. BBN Lisp was based on it. Which then was taken over by Xerox PARC as Interlisp.
BBN Lisp / Interlisp already had a very sophisticated programming environment in 1970, including a built-in structure editor for Lisp.
My main problem is with the “driven” part of “test driven”. I’m fine with aiming for pervasive coverage etc., but I disagree that tests are this amazing software design tool that the test-driven ethos seems to put forth. There’s something off in the cart vs. horse equation with that.
I also work in a REPL. For me, design usually starts well before code. Then the REPL will take me towards defining behavior via understanding and exploring particulars around a solution or set of alternative solutions. Then as some final step, I will assert that it does what it should do, and guard against possible future changes violating some assumption.
There's a place for both even with a REPL I think. Sometimes when I'm tracking a bug down or writing a new function against an old one, I find that I'm writing little one-off functions or worse, replicating a function body line by line in a let block to simulate arguments. Once I opened up a project and realized I wanted one of them, but it was lost when the REPL closed. I literally got as far as opening another file to stow these little utilities before I realized I had reinvented bargain-bin testing.
My dream is still to come up with some ergonomic way of blending the two. I want to have tests written, but swap out the thing they test live in the REPL. Or register new one-off tests from the REPL for things I don't think will be useful once things are set in stone. I've heard one of the several billion testing frameworks for common lisp does something like this, but I haven't checked it out yet.
Ohhh, I wondered why people were so hyped about that! In my head I rounded it off to not wanting to recompile the whole file- I should have realized how dumb that was. This actually makes a lot of sense. When I helped a friend set up Calva, a part of me was really just assuming it was misconfigured- but no, it's designed to facilitate this kind of editing.
Though, I must ask, is it really that much better? I do keep functions around if I think I'll have a use for them again- the vast vast majority really are one-offs. Plus, most of my REPL usage is iterating on an expression. Obviously it's handy to edit it in place, but it's nice having multiple versions, even multiple trees, without having to relocate a cursor or undo unrelated changes.
> Though, I must ask, is it really that much better?
Yes, it is.
> Obviously it's handy to edit it in place, but it's nice having multiple versions, even multiple trees, without having to relocate a cursor or undo unrelated changes.
I almost always do my repl work in a source code file. That way I never lose it when the repl closes and I can copy it into a test when I know what’s going on.
Depends on what you mean by effective. TDD is great if you want to be sure you don’t break things in the future. The repl seems better for getting the next thing working right now.
The repl is great for understanding something quickly and easily. It works for big new parts of the code base and small things that you just need to be reminded about. The nice thing is, if you save your repl output, you can really easily create tests from it.
Saving repl output is a great way to make tests. It’s often the stuff that is confusing and likely to change that you exercise in that space so if you take what you find from there and make a test out of it someone will likely benefit later.
I'll have to agree here, with the caveat that this applies to functional code.
For side-effect heavy code, a repl is not nearly as productive as good tests and a debugger.
(What is interesting is that people tend not to test or check manually their code's side effects, so a repl is the best option for almost all the debugging that people actually do.)
I worked in Kotlin and Java for a while, and the closest thing I had to a repl was stopping the debugger and running arbitrary single lines of code in the context I paused in. It was awful compared to the Clojure repl.
>Unit tests are small and fast so I don't even notice any inconvenience with the traditional compilation approach.
Depends on the programming language, how much state they need to build up, and so on. And they're still slower than just running the actual live code anyway.
Best of both worlds: write unit tests with an interactive debugger o/
Tell the test runner to not catch errors and let them pass through up to the debugger. Have a test fail, get the debugger, go to the buggy line (without quitting the debugger), fix the bug, compile the function with a keystroke (yes, one function), come back to the debugger, restart not the full operation, but the function call, the debugger frame where the error happened, and see the test succeed.
You ran the test exactly once and fixed everything in the process, very quickly. This is exactly what we do everyday with the debugger, with trivial or complex code, with unit tests or regular code.
I find the repl invaluable for writing unit tests. I think about what I want to test, try it out in the repl, refine it for better generality, and then basically copy/paste it into a file full of unit tests.
Elixir is like a LISP with pure functions, only immutable values all the way down (which gets you a guarantee you wouldn't have otherwise in languages where this is optional), actor concurrency, pattern-matching, actual readable syntax, and of course macros (which do the same thing LISP macros do: accept and output an AST value at the compilation step, the only difference being LISP homoiconicity). It singlehandedly ended my Haskell envy, my LISP envy and my Erlang envy, while essentially killing off what remained of my Ruby romance.
I just wish I could 1) compile it directly to a single binary (so I still have some Rust and Go envy), 2) run it 100% in the browser (so I still have some Clojure envy via ClojureScript), 3) talk to ML stuff as well as Python can (although Elixir Nx and Livebooks are coming along!)
2) ClojureScript (already mentioned) for the browser.
3) For ML, Clojure in the last 2-3 years has built a really great internal ecosystem, but it hardly matters because libpython-clj exists, so you can run NumPy, PyTorch, etc. from Clojure. Getting data in and out of Python land is just as easy as Java interop. The reverse also works (calling out to Clojure from an arbitrary Python interpreter).
Plus a few other things you didn't mention:
4) ClojErl is Clojure on BEAM. You have full access to Erlang libraries (e.g OTP) and 98% of Clojure, missing only things that don't make sense on BEAM.
5) Write native iOS/Android apps with either React Native or Flutter (via ClojureDart). Develop them interactively just like you do browser stuff with ClojureScript.
6) Clojure now makes a great scripting platform with Babashka, which has completely replaced Node.js/Python/Bash for that use case for me.
7) Datomic/Datalog: there's still nothing like it after 10 years, and as of the last two weeks it's Apache 2-licensed and free to use and deploy.
Elixir/Erlang are really impressive, but being BEAM-only is a serious limitation. Clojure is a bit more practical (and I still get to run Clojure-on-BEAM when that makes sense).
ClojureDart is not ready. I would be surprised if ClojErl were. And GraalVM is not some drop-in native compiler; you might have to radically change your code to work with GraalVM.
Proposing these things as "ready for the average Clojure noob" is exactly the snake-oil nonsense that turns people off to Clojure
ClojureDart is very similar to ClojureScript (which the GP is already comfortable with). If you can create and deploy an app with ClojureScript, you can 100% deploy something with ClojureDart today.
Re: Clojure-not-for-n00bs, I 100% agree and haven't met anyone in the Clojure ecosystem who would disagree. Clojure developers have consistently the most experience of any language ecosystem I've worked in over the last 25 years.
I don't know of a rational basis for this other than simply being more comfortable with more syntax and fewer parentheses to trace visually. I did take a class in Scheme once so I'm not completely unfamiliar with coding in a Lisplike.
Yeah, taste is taste. I do get it- it's why I like Clojure's syntax more than CL's, but man, I don't get commas. Or even Elixir: I love the language but it's always painful to come back to the syntax. Pipe operator aside, that is.
Speaking of the pipe operator, do Lisplikes have anything like that, or like pattern-matching, or complex value deconstruction, yet?
One very nice thing in Elixir that ends up removing a ton of (possibly buggy) boilerplate logic on function entry is that the function heads pattern-match deeply on the structure of the input while also doing inline assignments.
I mean, depending on what you mean by "have", yes? But also that's been true since the 90s. Out of the box, less so. Clojure has destructuring assignment, but no matching. A lot of little lisps downstream of Clojure (Janet, Fennel I think) do have matching, though not for function calls. I think that destructuring is most of the value-add of pattern matching for me- that's not to say matching isn't great, just that destructuring is such a no-brainer value-add that it's hard to compete with. I do find myself missing the {:error, ...}, {:ok, ...} pattern in other languages. Nil punning doesn't give you anything close to as much, and at significant cost.
For pipes, the answer is definitely yes though. Clojure has arrow macros, which let you compose without nesting just like Elixir.
There's one for threading through the first argument, and another for threading through the last, and various others for specific situations. Because they're pretty simple macros to write, I also use similar macros extensively in my CL code. In fact, I actually learned CL after getting used to Elixir so the first entry to my utilities package was a threading macro.
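A minimal first-argument version looks something like this (a sketch, not the full Clojure semantics):

    (defmacro -> (x &rest forms)
      "Thread X through FORMS as the first argument, Clojure-style."
      (reduce (lambda (acc form)
                (if (consp form)
                    (list* (first form) acc (rest form))
                    (list form acc)))
              forms :initial-value x))

    ;; (-> 5 (+ 3) (* 2) 1-)  expands to  (1- (* (+ 5 3) 2))  =>  15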
I think Clojure has benefited from matching being kept out of the built-in stdlib. https://github.com/clojure/core.match is a plug-in, and as a result we've had lots of cool data traversal/matching DSLs come around and evolve user communities with time, such as Meander and Datascript, not to mention the parsing applications of the schema systems (spec & malli).
The "other editors" link[1] is welcome. I've never liked the implicit Emacs requirement associated w/ Lisp envs. The list includes Vim, NeoVim, & even IntelliJ!? Good stuff.
I'm curious about the interactivity of Janet as well. I saw that someone added support for opting into a level of indirection for resolving symbols, which would theoretically support re-defining functions interactively. There also appears to be Conjure support for editor integration.
In my case, I wasn't convinced Janet would support REPL-driven programming like Common Lisp, so I decided to try CL first. I very much like the concept of Janet (a Lua-sized Lisp), but I couldn't identify a compelling reason to choose Janet over CL in my case (just some hobby projects).
Janet has the pieces for this in the spork stdlib with netrepl combined with fibers, but there's no standard or pre-configured way to set it up. Its current approach to namespaces doesn't particularly allow for it either, so you have to choose one of a few various hacks for redefining things as you noticed.
People have been asking for it and demonstrating proofs of concept for the last year or so, so it wouldn't surprise me to see it formally addressed soon. But I don't know; I don't have any particular insight, I just like the language and keep half an eye on it.
I find most of it applies to Clojure, although being a younger language there is no code from 30 years ago to run. Based on the stability I’ve seen so far, it wouldn’t shock me if the Clojure code I write today still works 30 years from now.
Edit: There are some nuances of interactive support Common Lisp has that Clojure lacks, but I find Clojure supports interactive development much better than other languages.
Somewhat! The authors talk about the longevity of CL, which is less a fact about lisp than an accident of history. Janet, for instance, is obviously quite new. Elisp is quite old, but I wouldn't use it for non-emacs development. Schemes are old, and while there are lots of one-off schemes which seem to be educational projects for their developers there are also lots of very old and battle-tested dialects (Chicken scheme comes to mind, Racket is old-ish and backed by its use in education).
The interactivity is fairly universal. Scheme in its smallest nuts and bolts doesn't care that much about interactivity, giving eval but no other real support for a REPL, but due to cultural effects pretty much every specific scheme is very interactive. I've seen complaints about every specific lisp I've ever used that it's not a real lisp because it doesn't support [extremely specific interactive option user integrated into their workflow in '93 and can't live without], but everything the authors talked about is likely to be present in just about every lisp.
I have pg's list of nine Lisp ideas [1]. Which of these are missing in Janet? My guess is only the 9th one, but I would like to understand the Lisp (any) vs Janet point.
LISP originally came in an implementation (LISP I) and LISP is short for List Processor.
Janet looks like a really great language & implementation, but it is not similar to that specific List Processor; for example, it does not use linked lists at its core.
> I can implement a scheme that passes most test suites using vectors instead of linked lists, would you not consider it a lisp?
My Symbolics Lisp Machine has a Lisp implementation where some form of vectors are an optimization of lists (-> CDR coding). But that's an implementation detail. For most purposes the thing behaves as it uses linked lists - and even has primitive operators for linked lists as CPU instructions.
CLISP, another Lisp implementation, has its own virtual machine, written in C. There too, CAR, CDR, CONS are primitive operations in the VM.
For me Lisp means more than parentheses or some vague idea of a language "family". It's a language: a bunch of implementations which have a similar core (data structures, operators) and which share code. These languages tend to have 'Lisp' in their name.
Then there is this other meaning, that Lisp is a very diverse group of languages based on largely undefined criteria, the way C is a member of the ALGOL language family.
That's the "Lisp family": It includes JavaScript/ECMAScript, Dylan, Clojure, Scheme, Racket, R, SKILL... Some have s-expressions, some not. Some use linked lists, some not. Some are object-oriented, some not. And so on.
But for practical purposes it's meaningless: if I want to know about how a particular Scheme construct works, I would look into a Scheme documentation (or its particular implementation), not into a Lisp book.
Scheme, Clojure, .. all have their own language family by now: there are language definitions, a bunch of similar implementations, shared code, books, ...
It's also funny how people claim that language X is a Lisp, as if that would mean something "special" or even "better". There are lots of excellent and useful programming languages out there, which are not Lisp.
JavaScript and R are more Lisp-inspired to me. I tend to think of the Lisp family as containing Scheme, Clojure, and other languages with a fairly close resemblance. Common Lisp, its predecessors, and a few other languages like uLisp are more closely related still and are practically, if not strictly, the same language.
I’m not sure this informal taxonomy is “right,” but it seems to be useful and in line with common usage.
what is this fairly close resemblance? Parentheses?
There are a bunch of Lisp like languages without s-expression syntax: Lisp 2, Logo, MDL, RLISP, CLISP (not the CL implementation), Dylan, Racket with its new syntax (Racket2, Rhombus), Skill, ...
For example Dylan is based on Scheme & CLOS + a different syntax + some other influences. https://opendylan.org
Expression-based and supporting macros are two big features JavaScript lacks that the ones with the close resemblance have. Subjectively, they also have constructs in common that make coding relatively similar (although the presence of features like CLOS, continuations, and multimethods can have a pretty big effect). And yes, they use the same S-expression syntax. (Although you can be a Lisp without it.)
In fairness, R might be closer than I realized since I’ve never done extensive programming in it.
JavaScript started as a Scheme-inspired language. It has runtime evaluation, a serialized data structure format, first-class functions, garbage collection, dynamic typing, and functional/imperative/object-oriented features.
Many languages draw not just on the syntax or user-facing features of Lisp, but also on the implementation techniques of Lisp-like languages. An early example was garbage collection, which was invented for automatic and dynamic memory management of linked lists in the first Lisp implementation. R, for example, started as an attempt to reimplement S with a Scheme-like runtime.
So if Janet refactored the interpreter to use linked lists (CONS/CAR/CDR everywhere in the C code), but the language stayed the same (if that's possible), would it fit your definition of being a Lisp?
The code which is interpreted is Lisp code in the form of lists. Lists are processed in user programs, and the implementation itself processes Lisp code in the form of lists. Both use the same list data structure, the same representation of that data structure, and the same primitive operators for it.
For me that's the core of Lisp.
If the language and its implementation do not process lists, then don't call it a "List Processor".
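You can see that identity right at the REPL: the same list operators work on code and on data.

  (defvar *form* (list '* 6 7))  ; build code as an ordinary list
  (car *form*)    ; => *         ; inspect it with list operators
  (cdr *form*)    ; => (6 7)
  (eval *form*)   ; => 42        ; hand the very same list to the evaluator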
Do I understand right that Janet becomes a Lisp (in your understanding of Lisp) if two conditions are fulfilled: everything is reimplemented on linked lists, with CAR/CDR everywhere, and the bytecode interpreter somehow coerces all the C syntax, with the ability to recompile all C code, so as to fulfill a "no compile time" condition, and that's it?
Sorry for speaking about things I do not understand; I am just searching for a comprehensive answer to the "Lisp vs Janet" question. I have a feeling that there is something beautiful in Janet but I don't know what, so I started this tree of discussion.
there's obviously a strong family resemblance, but scheme's focus on purity over practicality lends to a much different feel.
one of the illustrations from this 20 year old discussion that someone linked here a bit ago [t] was that, while you might say c is a 'member of the algol family', you wouldn't say that c 'is an algol'.
perhaps an extreme example, but the idea is that lisp having a history of multiple implementations doesn't make scheme one of them.
put another way- while schemes extrapolate on a similar kind of purity to that which lisps are already famous for, giving them a kind of 'more lisp than lisp' aura, that shouldn't negate the language split.
This thread may be as good as any to ask: the code I write has to deal with a huge amount of state. To summarize, my code receives millions of "change commands", which update state in memory. Data structures are hash tables, lists, etc. And some commands, once in a while, are queries like "how many of these are there now?" or "what does this element now point to?".
I guess I'm describing some kind of custom in-memory database engine.
Anyway, I'm confused as to how the purity of Lisp functions would or would not prevent me from writing this code. Is this a use case that is bad for Lisp, or am I missing something?
As far as FP languages go, LISP is very much a free-for-all. It can be pure if you want it to be, or stateful and imperative if you prefer. Nothing is really enforced, unlike in, for example, Haskell.
For your use-case a mutable reference behind some type of synchronized accessor would probably suffice. If you want to get fancy you could design the system in such a way that any point in time could be replayed or rewound to, since you have the commands and the initial state.
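A minimal sketch of that shape, assuming the de facto standard bordeaux-threads library (the function names here are made up for illustration):

  ;; mutable state guarded by a lock
  (defvar *state* (make-hash-table :test #'equal))
  (defvar *state-lock* (bt:make-lock))

  (defun apply-change (key value)      ; one of the "change commands"
    (bt:with-lock-held (*state-lock*)
      (setf (gethash key *state*) value)))

  (defun query-size ()                 ; an occasional query
    (bt:with-lock-held (*state-lock*)
      (hash-table-count *state*)))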
Lisp functions are generally not pure! Common Lisp, the language this article is mostly talking about, is very comfortable with state. CL, like most Lisps, often encourages a functional style: everything is an expression, and usually you'll return data structures even if you mutate them. But big hash tables and arrays are definitely in Common Lisp's wheelhouse. Even Clojure, which is largely pure when dealing only with itself, has full support for diving into the messy practicalities of mutation on the JVM.
Lisp is not pure, in general. cons is "functional" because lists, if treated immutably, are functional. There's still setq (set! in Scheme), though, and operations on hash tables, arrays, and sets are typically mutating.
so Lisp is actually not a pure language, like, at all. Emacs is often written about as a big hairy ball of state, and Lisp itself can be as imperative or stateful as you want. So while I don't know much about what you're designing, Lisp wouldn't necessarily be a bad choice.
If you’re referring to Nyxt, I’ve been using guix to install it, and it takes care of all the dependencies, including getting them right so that connecting to Nyxt’s swank server can be done from emacs.
it's doable, on linux systems i don't mind doing that. but on my mac i don't want to use all that disk space, particularly for a for-funsies web browser (although i really do like the browser).
and for other programs, if CL CFFI bindings don't build on your platform then nix or guix can't save you
sucks because i love lisp but CL is just not good for program distribution
the fact that there are repeated statements like "why X", where X = some language, points to rationalization more than reality; if the language were really so good, there would be no need for "why" articles about it. I haven't seen a repeated series of "Why C#" or "Why Go" or "Why Swift" articles. I also think Lisp is way overrated. Swift is based on Miranda, which is based on Hope, which borrows heavily from SML... There's nothing Lisp can do that Swift can't do better.
I do not agree. I do not have time to familiarize myself with every language in existence to such a level that their strengths become obvious to me. It is perfectly reasonable for an expert in a language to write an article on why to use it.
There was a straight decade where HN was covered, nay choked, by "Why Go" articles. That only ended less than two years ago. Swift doesn't get the same coverage, but pretty clearly falls into roughly the same category for everyone but Apple UI developers.
Different languages are better and worse at different things. It's a really complicated design space. There probably are some which are better than others on every axis, but generally making one better at some things makes it worse at others.
> The developers of Lisp give you the full powers that they had to develop the language.
So someone joining the project doesn't just have to learn Lisp, they have to learn Inhouse Lisp? And the abstractions provided by modern languages are woefully incomplete?
> Lisp code written some 30 years ago will most of the time, without issue, work on a modern Common Lisp implementation
> So someone joining the project doesn't just have to learn Lisp, they have to learn Inhouse Lisp?
I think this tends to hold true for all complex systems past a certain point. Especially web frameworks. They tend to circumvent the language so much (via macros, the type system, etc.) that it's no longer the bare language it was before. But I appreciate your point that macros can add cognitive burden in many cases where less powerful (and more easily understandable) mechanisms would suffice.
> So someone joining the project doesn't just have to learn Lisp, they have to learn Inhouse Lisp? And the abstractions provided by modern languages are woefully incomplete?
Yes. To both.
All programming involves building up sets of abstractions that make sense to the domain. A new project usually means a whole new load of abstractions.
If the language is opinionated about that, there's a good chance they'll be similar to other projects. Python, Java come to mind. C++ is heading in that broad direction.
Most languages make application abstractions and language abstractions look different. Sometimes the standard library looks deliberately different as well. This seems to be the popular path.
Common Lisp has an inconsistent stdlib. Scheme has a small one. Both draw a distinction between builtins (which get weird indentation and syntax highlighting) and functions, but it's not as big a visual distinction as in other languages. Both let you add a function which looks like other functions at the call site but has weird semantics and the special name "macro". This is approximately making extensions look the same as builtins.
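For instance, the textbook while macro is indistinguishable from a builtin at the call site (a standard sketch, not from any particular library):

  (defmacro while (test &body body)
    "Loop over BODY while TEST stays true."
    `(loop (unless ,test (return))
           ,@body))

  ;; usage: (while (< i n) ...) reads exactly like native syntax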
Mixed with that, there's a credible risk of having reified continuations, or ideally delimited continuations, which are a complicated control flow construct. There's some risk of having first-class environments. Also, the character stream might get munged by reader macros before anything happens to it. You will have the compiler available at runtime and probably weird phase-separation hooks.
If you combine leaky syntax abstraction (the macros), reified continuations and environments you get something very expressive with interesting composition challenges for greater than zero developers.
The enterprisey Lisp, Clojure, drops a lot of power and adds a lot of convention.
Kernel fixes the inconsistency in macros. It also goes with first class environments, which are the sane thing to do for module systems. I don't remember if it has reified continuations.
The lambda calculus with environment style symbol lookup is a really nice core for a programming language. Lisp is about as close to that as any.
Qt and React have been used for years by lots of people, so there's a lot of knowledge out there about them, their designs are more general-purpose and refined, and you're likely to have encountered them before.
The biggest rock in the pond recently is the Great Renaming to Jakarta, but many containers still support the older package names.
The only real legacy stuff from 20 years ago that has truly died on the vine is Entity Beans. But even then, Entity Beans were mostly horrible, and not many folks used them.
But they may well still be supported in some modern containers.
That said, the old-style way of using XML for everything (notably declarations of Session Beans and such) still exists today. Annotations rule the day, but the old XML style still exists and has not been deprecated (or if it has, only quite recently).
There is in fact a lot of backward compatibility stuff still there in the modern containers because fundamentally the concepts have not changed, just how they are expressed in modern runtimes.
> Future proof is a term thrown around a lot in the tech industry.
This is the TXR Lisp interactive listener of TXR 286.
Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
Truly future-proof systems are rare; future-resistant ones are everywhere.
1> _
> The Lisp designers do not assume what syntax, features or functions will be necessary in the future. The developers of Lisp give you the full powers that they had to develop the language.
Isn't that ~ abdicating responsibility for language design?
Guy Steele was involved in the specs for C, JS, Fortran, Scheme, Common Lisp, and Java (among others). He gave a must-watch talk about Java called “Growing a Language”.
He makes a great case that a language must be able to deal with future problems and ideas the original designers didn’t or couldn’t foresee. He seems to have been one of the voices pushing for stuff like generics so users could design libraries that would feel more native.
And of course, I can’t mention him here without quoting his statement about Java:
> We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp.
A goal of Common Lisp was giving all the tools future programmers might need to create new tools. As a result, you’d be hard-pressed to come up with a language feature that hasn’t been ported to CL at some time.
May I quote the first sentences from Scheme's R7RS standard that relate to this topic:
"Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary. Scheme demonstrates that a very small number of rules for forming expressions, with no restrictions on how they are composed, suffice to form a practical and efficient programming language that is flexible enough to support most of the major programming paradigms in use today."
I can gloss over LISP's lack of static strong typing and, if I force myself a bit, I can also ignore it not producing small efficient native binaries... but the lack of (semi-)transparent concurrency / parallelism is my deal-breaker. Multicore CPUs have been a fact of life for a long time now.
Where is the stuff like Erlang/Elixir's green threads or Golang's goroutines/channels and Rust's async runtimes workers/channels, and OCaml 5.0's recent multicore runtime (and the emerging libraries around it)?
People have been quick to point out various old-school multithreading libraries in the past but as a guy who was working with `pthreads` almost instantly after he started his career 21+ years ago... yeah, nope, I want the higher-level stuff. I want to solve problems and not invent a new async orchestration runtime for every project I work on.
If LISP is so fantastic why isn't there a serious effort to make it multicore-friendly? Where's the transparent concurrency and parallelism? The actor runtime? The [stateful] effect handlers?
---
Believe it or not, I understand the value of inventing your own DSL, and that solving business problems with it is much easier compared to coding them in another programming language, but (a) nobody wants to pay me for that, because that way I make myself irreplaceable and business is extremely against this, and (b) that solution might end up being non-extendable, rigid and a dead-end. So it's a high-risk endeavor in more ways than one.
So OK, LISP is awesome. I tried it, liked it, couldn't find a way to make money with it. I suspect many other programmers are in my boat.
If you want more adoption then do the grunt work and bring it to 2023. Give us HTTP (1.x & 2.0) and WebSockets libraries, give us an actor runtime (or a goroutine-like one), maybe an optional type checker -- and I am sure the community can very easily pick it up afterwards and lift it to big heights.
No? Then it remains a niche curiosity. Enlightenment does not have to always go hand-in-hand with being a spartan. I want libraries.
Many Lisps (and Common Lisp) had 'green threads' decades ago. Green threads are not preemptively scheduled by the OS/hardware. Multi-processor Lisps were new and very rare (like the Lisp & Scheme for the BBN Butterfly https://en.wikipedia.org/wiki/BBN_Butterfly or the Lisp for the massively parallel Connection Machine). So the green threads then mostly ran on a single core.
Many Common Lisps moved to native threads in the last two decades because those provided preemptively scheduled threads on multi-core machines. Early examples were Corman Lisp (on Windows) and Scieneer CL (a commercial fork of CMUCL). Native threads also have advantages when interfacing with the OS.
Implementations like SBCL, LispWorks, Allegro CL and others provide native threads and support multi-core machines. In the LispWorks development environment I use, every IDE tool runs in its own thread on a multicore CPU.
The big problem most implementations face is getting a parallel and/or concurrent garbage collector. For the JVM such things exist in various forms. Allegro CL has something like a parallel GC.
The numbers are: with SBCL's core compression, a web app with dozens of dependencies will weigh roughly 30 to 40MB. This includes the compiler, the debugger, etc. Without core compression, we reach roughly 150MB.
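For the curious, the compression is just a flag at image-dump time, assuming an SBCL built with core-compression support (the entry point below is hypothetical):

  ;; Dump a standalone executable; :compression shrinks the image several-fold.
  (sb-ext:save-lisp-and-die "my-web-app"
                            :executable t
                            :toplevel #'my-app:start  ; hypothetical entry point
                            :compression t)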
This is why it's important to always look at the issues on a repo instead of just believing what's in the README. It fails to detect type errors in some of the most basic situations.
"Coalton doesn't actually work" is extremely disingenuous and easily misinterpreted to mean something that is not true. Coalton does work, and has been deployed for use in production on non-trivial, commercial problems. [1,2]
Mutation is an issue with any Hindley-Milner system, because mutation is inherently impure and non-functional, violating principles that the Hindley-Milner algorithm relies on. The Coalton developers have to decide what they want to do about it. Haskell chose pervasive functional purity and monads (but still gives you many type-unsafe escape hatches). OCaml chose weak polymorphism. [3]
This being a sensitive design choice, Coalton has not committed to a strategy for dealing with it, so as it stands it is possible to use a combination of impure features to produce a type error. However, this does not block productive development. One can:
- Avoid mutation altogether and rely on an otherwise sound type system. Use purely functional data structures built in to the language, or write your own.
- Use mutation in monomorphic scenarios where there will not be any issue.
- Mutate outside of Coalton (i.e., in Common Lisp).
- Be very careful, know that "Here Be Dragons", and carefully use mutation in a polymorphic context, understanding that one runs the risk of a type error.
Until Coalton reaches "1.0" and a language design choice is made around mutation and the type system, one of these strategies must be used. And they have been used, successfully, because idiomatic Coalton code isn't typically mutating polymorphic vectors left-and-right anyway.
With all that said, I do agree with examining a language or tool for caveats, known bugs, and limitations. I don't know how you'd make engineering trade-offs otherwise.
Honestly you're defending this a little too much. Nobody cares about an unfinished LISP spin-off that may be production-ready some day. The OP's point still stands: CL lacks static types.
This is frustratingly common with niche languages (and I am not saying Common Lisp is not productive, itself). If people are criticizing a missing feature in a language and the response is "Yeah, but we have some bespoke third-party solution available here," it's like the responder has missed the whole point.
Common Lisp, as defined by the standard, does not have static types. As such, Common Lisp will never have static types as a built-in language feature. For the people who care about built-in languages features alone, and not what the library ecosystem has to offer, the discussion could very well end there.
Most programmers seem to care what the library ecosystem has to offer, though, so long as said libraries are actually working and useful. Is Coalton such? It's a technology that has been in development for around 5 years, that is presently used for commercial purposes, and that offers an approach to static typing a la Haskell's type system within Common Lisp projects. It's not a slapdash weekend project that purports to do something, only to find it really only does 5% of what was advertised. All things considered, it seems reasonable to suggest it as an option for static typing.
Production-readiness is a gradient. (Or, less usefully, it's a binary quality determined by the question, "Is it used in production at a company?") I wouldn't personally use Coalton to build real-time rocket control software for many reasons, the most important of which is that it's not billed as a "1.0" product. But I also wouldn't dismiss it because it has a bug tracker with an exposition on an intrinsic limitation of Hindley-Milner type inference. Coalton is a tool built in Common Lisp that is used in production, today, now, as documented by those links in my GP comment. You can even be paid real money to develop Coalton and/or software in Coalton.
Look, I get that people are tired of this peculiar rhetoric, the
> Lisp has macros therefore it's always one step away from implementing any popular language feature.
kind. For a variety of reasons, it's usually not true in a practical sense, and ultimately leaves the would-be user holding the bag.
I suggest, however, that Coalton really isn't a smokescreen that just exists to duly check off a "Has Static Types?" feature checkbox for Common Lisp.
Common Lisp (CL) is not the entirety of the Lisp world. However, CL does have a lot of limitations given its 30-year old design.
1. The typing is vague and annoying, and its weird, half-assed relationship to CLOS is a constant source of trouble.
2. Code-walking is not completely deterministic and is not implementation independent.
3. The environment is not part of the standard.
CL was written in a completely different time, when there was money to be made selling various bespoke implementations, and therefore there was a big need for vagueness in the spec. Today that need is much diminished: there are 1-2 big open-source implementations and about 2-3 commercial vendors.
> CL was written in a completely different time, when there was money to be made selling various bespoke implementations, and therefore there was a big need for vagueness in the spec. Today that need is much diminished
Part of the issue here is that the spec has essentially been abandoned, and frozen for the last almost 30 years. The committee which produced it decided to dissolve itself.
If the surviving major commercial vendors and open source implementations thought it was in their interest, they could get together and update the standard for the 21st century, removing a lot of the vagueness and standardising the most widely implemented extensions. ANSI/INCITS is commonly criticised as an overly cumbersome and bureaucratic process, so maybe it would best be done by starting a new bespoke standardisation committee, like what the WHATWG did for HTML. However, I don't think enough of the major players see it as sufficiently worth their while to invest in, which is why I doubt it will ever actually happen.
Why not just something like (require 'cl-2029) to indicate one's code is written to the new “Common Lisp 2029” standard, which does its best to retain backward compatibility with existing code, but breaks compatibility whenever doing so is really worth it?
All that stuff you're worried about is just implemented in libraries. Lisp lets you do that. You don't have to wait around years for a ECMA committee or whatnot to provide you a "async" keyword, or any other keyword. The language is flexible enough to let you add it yourself. It's a whole change of mindset.
In practice however, several people probably implemented "async" independently and the community as a whole just settles around a favorite.
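To give a flavor of what "implemented in libraries" means here, a toy blocking channel over the bordeaux-threads primitives might look like this (a sketch with made-up names, not a published library):

  (defstruct chan
    (items '())
    (lock (bt:make-lock))
    (ready (bt:make-condition-variable)))

  (defun chan-send (ch v)
    (bt:with-lock-held ((chan-lock ch))
      (setf (chan-items ch) (nconc (chan-items ch) (list v)))
      (bt:condition-notify (chan-ready ch))))

  (defun chan-recv (ch)                ; blocks until a value arrives
    (bt:with-lock-held ((chan-lock ch))
      (loop while (null (chan-items ch))
            do (bt:condition-wait (chan-ready ch) (chan-lock ch)))
      (pop (chan-items ch))))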
Also in practice: no ready-baked solution and I am not gonna move the ecosystem when I just want to do some commercial work with it, because nobody is gonna pay me to evolve the ecosystem.
Maybe an article or two demonstrating step by step exactly how one invents their own async runtime would be hugely helpful in terms of credibility.
Otherwise it's just handwaving and I am sure you can see it.
> that solution might end up being non-extendable, rigid and a dead-end
This is definitely something businesses are concerned about, but my experience has been that bog-standard enterprise-style codebases are way more likely to calcify than custom DSLs... but, somehow, it's never seen as a problem with standard technologies and approaches.
I agree about high-level parallelism. Having done it every which way, from C to Python (bleh) to Rust to Java to Erlang: NodeJS is actually nice with throng, at least for a webserver. And the built-in async/await stuff is your greenthreading. Easy to avoid race conditions because you know the event loop stays in context until you hit an `await`.
Well, that's exactly what I don't care about. Most languages have native threads. I want an actor runtime with transparent green threads, for example. Like Erlang/Elixir.
> I don't think you understand what green threads are. Green threads are simulated threads that run on a single core.
I guess this is just semantics, but it seems to me that this is an overly narrow and stale definition of green threads. The Wikipedia article is not even internally consistent in its definition of green threads as necessarily being scheduled onto a single core (it includes goroutines as an example). The only reference to such a definition cites two sources from the very early 2000s.
I think it's safe to say that in 2023 "green threads" simply refers to user space threads rather than kernel threads, and that may include threads scheduled onto multiple cores.
Nothing implies that green threads need to run on a single core. The summary in your own link is pretty good: "In computer programming, a green thread (virtual thread) is a thread that is scheduled by a runtime library or virtual machine (VM) instead of natively by the underlying operating system (OS)."
Which is half of what we're looking for here. A thread that can be managed by the runtime, scheduled onto a native OS thread, and (here's the important part, where Common Lisp falls down on the job) transparently operates like a real thread; in other words, you can make normal blocking calls and the runtime just takes care of it. For example: goroutines.
If you want to be a pedant ¯\_(ツ)_/¯ then OK (and I am not sure how "native threads" == "green threads" but I won't argue) -- look up Erlang/Elixir's model, or Golang's, or Rust's async workers.
That's what I am looking for: basically an M:N parallel execution model where M can be 50_000 and N (the number of CPU cores/threads) is no more than 32-64.
The rest is a suboptimal mess and, as already mentioned above, I am not looking to invent my own async [task] orchestration runtime for every project I participate in.
> I am not sure how "native threads" == "green threads"
You are still confused. Green threads and native threads are mutually exclusive implementation strategies for threads. Native threads are provided by the O/S. Green threads are implemented at the user level without relying on any O/S capabilities. Green threads, by definition, run in a single O/S process and can therefore only use one core.
> look up Erlang/Elixir's model, or Golang's, or Rust's async workers
I'm familiar with those. Those are not "green threads". But whatever you call them, you can do those things in Lisp too.
FWIW, you are literally the first person I've seen who has insisted that green threads are incompatible with an M:N threading model. Every other definition I've seen for green threads is that they involve userspace scheduling, and there's no requirement that there be only one thing to schedule onto.
It's possible that the definition has changed; (human) language is malleable. But the original definition of "green thread", which goes back at least to the 1990s, was in contrast to "native thread" where the latter was implemented by the operating system and the former by the language implementation. The whole point of "green threads" was that they let you do something that looked like parallel processing (but was really just time-slicing under the hood) with no support from the O/S.
If there's a new accepted definition someone should update the Wikipedia page.
Traditional, purist definitions: 1:1 threading = native threads, N:1 threading = green threads, M:N threading = hybrid threads (since it is a hybrid of the native and green approaches)
However, a lot of people nowadays call M:N threading "green" instead of "hybrid". Just Google it and you will find lots of people using the term in this way. From a traditional purist viewpoint it is an incorrect usage, but it is also now very common. I guess part of the reason is that green threads in the original sense are less useful today – one of the major historical motivators was that they could run on OS platforms which lacked native threads, which was a common problem in the mid-1990s and earlier but nowadays almost never is – so it is unsurprising the term gets stolen for a closely related yet distinct technology with far greater contemporary usefulness.
Wikipedia's article – https://en.wikipedia.org/wiki/Green_thread – essentially contradicts itself. The start of the article gives the traditional definition of "green threads". But then the "Green threads in other languages" section ends up describing lots of things which are much closer to being "hybrid threads" than "green threads". This is the problem with an "encyclopaedia which anyone can edit", it is easy to unintentionally edit an article into contradicting itself, and on highly technical topics it is easy for the contradiction to go unnoticed. I myself am not sure I can fix it, because although I know the true story about the definitions (or at least I think I do), I don't know any reliable source to cite for it.
Their thread article – https://en.wikipedia.org/wiki/Thread_(computing)#Threading_m... – does define M:N as "hybrid threading", whereas it calls N:1 "user-level threading". It also notes "green threads" as a synonym for "user threads", but "user threads" aka "green threads" exist in both the M:N and N:1 models. It never defines "green threads" as a model, as opposed to a type of thread which exists in two out of the three models.
Thanks a lot for the context. I wasn't aware of the distinction between green vs. hybrid threads and was only using "green threads" I suppose as "suspendable coroutines" or "fibers"? Not sure about that either, gotta admit.
There's history here. Green threads are defined as opposed to native ones; you both agree on that. The original green threads were pretty weak and behaved as described above. People use the term today to mean a much more featured version, as you describe it. Lisp also has it. So really everyone is right, except for your statement that Lisp does not have green threads. It has native threads, old-style green threads, and new-style green threads.
The only implementation I know of that offers these features built-in is Allegro. But the cool thing about Lisp is that it's easy to add features like this at the user level even if they are not a native part of the implementation.
Not convinced. If it's so easy why hasn't anyone done it yet and made it easy to consume for everyone else?
To me this is a red flag. "Just roll it yourself" is not a constructive response in the eyes of a commercial programmer. We come to an ecosystem expecting certain basic building blocks. I am not paid to evolve LISP's ecosystem. Not to mention nobody will give me the time in a world where deadlines and milestones are a fact of life.
If I was doing programming purely as a hobby -- sure! But I don't. And I'd bet most commercial programmers don't as well.
> why hasn't anyone done it yet and made it easy to consume for everyone else?
Dunno. No demand? No standard?
I recently wrote a library like this for my job. It took me about a day. I haven't published it because it was a work for hire and so it's not mine to publish. But it's the basis for my confidence that it's not hard to do.
If you want to write up a spec, I'm available for contract work (and I'm sure you would have people lining up if you actually were willing to pay someone to do this work). But I suspect you would find that writing the spec is the hard part, and if you actually did it you would find that it's easier to implement yourself than to try to hire someone.
I mean, high-quality work should be paid. Obviously.
But now we're making a full U-turn to my initial argument: LISP doesn't have features that I find are (or should be) basic building blocks so I am just taking my business elsewhere.
I think you are being willfully disagreeable here. There is a Lisp with the style of threads you claimed doesn’t exist. So you’re simply wrong with your claim. Also even if that didn’t exist, no one can say that a day’s worth of work is a prohibitive barrier to getting things done.
I suppose part of the context got lost here, but no, I haven't gone out of my way to disagree specifically; more like I got frustrated because the other guy seemed to purposefully miss the point from where I was standing.
I don't want just a best-effort-with-what-we-have-in-our-runtime-that-was-never-designed-for-it implementation, however; I want proper preemptive scheduling and a transparent M:N threading model, very akin to what Erlang/Elixir have.
I was made aware of Clojure's core.async but after reviewing it, it's IMO not good enough. Though as others have said it can be used as a building block for something better. That seems to be true as well.
But it does. Allegro CL has it. Other implementation might have it too. It's hard to say because you've been very vague about exactly what it is that you want.
Was I vague? Or were you being overly pedantic about Wikipedia-sourced definitions that, as another commenter pointed out, are kind of outdated? And now trying to frame me as not knowing what I want? And pretending that the examples I gave from three other languages don't paint the picture well enough?
You seem to be arguing in bad faith from where I am standing.
...And okay -- I'd like an M:N green-threads runtime (or library if you prefer) with preemptive scheduling, or at least not completely manual cooperative scheduling. And yes, we're talking full 100% CPU usage on all cores when needed. Not single-threaded.
And to repeat, since you kind of pretended I never said anything about what other languages do, and if you are indeed aware of how Erlang's BEAM VM is doing it -- that is what should be present in more than one language / runtime IMO. Failing that, Golang's goroutines and Rust's async workers+channels are quite fine too.
To me, in 2023, no language has an excuse for not having something like that. Modern example: the OCaml team worked on it for years and recently delivered it.
> Where is the stuff like Erlang/Elixir's green threads or Golang's goroutines/channels and Rust's async runtimes workers/channels, and OCaml 5.0's recent multicore runtime (and the emerging libraries around it)?
> I want the higher-level stuff. I want to solve problems and not invent a new async orchestration runtime for every project I work on.
That seems pretty vague to me. You mentioned four different languages (Erlang, Go, Rust, OCaml). Do you want the intersection of the features of all those languages? The union? Some subset?
> You seem to be arguing in bad faith
I didn't even realize this was an argument, so I guess the problem must be that I'm just too dim to glean the correct meaning of your words. Sorry about that.
Well it is an argument insofar as you keep insisting that the LISP ecosystem has what I want, and I keep disagreeing with that claim.
But yep, I want Erlang-style concurrency / parallelism and, failing that, something like Golang builtins or Rust's libraries.
So not a subset or intersection, more like a priority-ordered wish-list: I want what I perceive as an ideal model (Erlang) but if that's not available, there is stuff 1-2 floors below that are good enough (Golang, Rust).
But native threads or fully cooperative opt-in parallelism are practically a drag (or were in the projects I used them) and aren't cutting it for the work I do.
I'll grant you that the building blocks for something like what I need are there but I am not willing to put the work to create the runtime / the library, nor pay anyone to do it for me. Hence my original post: I am commenting on the status quo, not on how it could theoretically change at any time. I find the latter of no consequence.
OK, but you must want something besides that because...
> an ideal model (Erlang) but if that's not available
It is available. In Erlang. So if that's what you want, why are you not just using Erlang? Why even bother with the contingency of "if that's available"? Why are we having this discussion at all?
This is the reason that Erlang-style concurrency is not available off-the-shelf in Lisp. There's no market for it. The people who really want Erlang-style concurrency just use Erlang. No language will ever do Erlang-style concurrency better than Erlang. Erlang-style concurrency is Erlang's defining feature. The whole point of Erlang is to do Erlang-style concurrency. You can't do better than Erlang at "Erlang-style concurrency" by definition.
The point of Lisp is not to do X-style-anything better than X. The main benefit of Lisp is that it allows you to explore a much larger space of possible solutions in a much shorter time, which is a big win when you don't know what you want, which, I submit, is most of the time.
> So if that's what you want, why are you not just using Erlang?
I do.
> Why are we having this discussion at all?
I saw an article praising LISP, so I decided to chime in with realism, because certain fandoms (LISP's included) seem very unaware of the realities of commercial programming outside of their niche hobby language. As a senior dev (who also worked as a CTO a few times) I have learned to evaluate technology and to never wear rose-tinted glasses, even for my favorite tech stacks. They became favorites based on merit and nothing else. (In fact I am starting to dislike working with Elixir for certain projects, even though I loved doing them with it in the past.)
LISP is not cutting it for commercial work in general, so I strive to bring nuance to articles (or discussions) that seem to me heavily tilted in the "I am a fan!" direction. And forgive me if "LISP has longevity" and "it's future-proof" sound like hand-waving to me. I don't see facts; I see people reinforcing their own positive feelings on top of a few genuine strengths (like the powerful macro system and the REPL, for example).
And we keep chatting because you seem to insist that either LISP has what I deem good (disagreed, it doesn't) or that it's not important / there's no market for it (disagreed).
> There's no market for it.
You're doing post-hoc rationalization. You don't know that for a fact. I know I would have coded much more Racket and Gerbil Scheme if they had a proper async runtime a la Erlang or Golang or Rust (OCaml these days as well, though the story there is still unfolding after the recent 5.0 release).
> You can't do better than Erlang at "Erlang-style concurrency" by definition.
Loose definitions then. Rust and OCaml are making very serious strides. I have hope they can surpass Erlang in the next 2-5 years. Both are faster, much stricter with types (lack of those is an endless pit of bugs) and more memory-safe than Erlang (though anything with a GC like OCaml is prone to some of the nastiest problems Erlang has, like cycles between big objects but... topic for another time).
> The point of Lisp is not to do X-style-anything better than X. The main benefit of Lisp is that it allows you to explore a much larger space of possible solutions in a much shorter time, which is a big win when you don't know what you want, which, I submit, is most of the time.
We finally got somewhere productive, thank you.
I use Elixir (lives in the Erlang's BEAM VM so has access to everything Erlang) for the same and I agree that being able to explore quickly is very valuable. I've only made the mistake to prototype stuff with Rust once. Never again. Nowadays I use Elixir and Golang for prototyping and the end products either remain that or get rewritten in Rust.
Also, the REPL story could be better with Clojure, but I'll admit it at least exists, unlike that of many other languages. Startup time is not ideal, but then again, neither is Erlang's, sadly. Editor support I haven't checked in a long time; it might be good. Library coverage is very hit and miss depending on which LISP you use. I keep hearing CL has a lot; maybe that's true, but I am 50/50 there. Judging by your attitude -- "it took me a day to roll it myself" -- I remain unconvinced that the library story is good; it's more like some pie-in-the-sky goodness that's eternally out of reach. Still, if I reach for LISP again in the future I'll evaluate that aspect in detail and will know for a fact.
So yeah, on the "LISP is quick to prototype stuff with" I agree completely. To me it doesn't go all the way however, hence my initial comment.
You can think of me as "dislikes anything that looks like shilling".
There is no place for feelings in our work. When I retire and if I still want to code then, maybe I'll make decisions based on feelings. Before that -- no.
> I saw an article praising LISP, so I decided to chime in with realism, because certain fandoms (LISP's included) seem very unaware of the realities of commercial programming outside of their niche hobby language.
Lisp's detractors often seem equally unaware. I've been using Lisp in a commercial setting for the last ten years. And before that I used it in a research setting for 15 years, and even got Lisp sent into space. It worked great.
> LISP is not cutting it for commercial work in general
Lisp is rarely tried for commercial work nowadays, in no small measure because people like you keep taking pot shots at it from the sidelines. Have you ever actually tried using Lisp in a commercial setting? I have. It works great.
> > There's no market for it.
> You're doing post-hoc rationalization. You don't know that for a fact.
Well, I pitched the idea to you and you didn't bite, so there's a data point.
Yes, it's possible that there's a huge untapped market for Lisp out there if only it had Erlang-style concurrency. But I'll give you long odds against that being the limiting factor. The limiting factor from where I sit is ignorance and prejudice.
> To me it doesn't go all the way however
That's fine. But please don't assume that because it doesn't go all the way for you that it can't go all the way for others. Different goals give rise to different requirements.
No, I do not think this particular feature can be added at the user level. It's something that requires significant support from the runtime to handle all the blocking -> thread-parking shenanigans. Attempts to do it at the user level are terribly limited in that respect and just end up like Clojure's core.async.
You're not wrong, but most modern CL implementations provide all the primitives you need even if they don't go all the way and provide a fully-fledged Erlang-style parallelism library out of the box.
The first two links, when it comes to green threads, have nothing that fit the bill. They are not transparent. They have all the problems of clojure's core.async. What you need is to be able to call existing blocking code and have the runtime take care of it. A macro based solution cannot do this, because macros are purely source to source transforms. If it can't access the source, it can't modify it. It must be done by the runtime.
i very much doubt that there is anything in lisp that can prevent that from being implemented at a library level (library because lisp is ANSI standardized). if you can prove that lisp is inherently unable to do what you call transparent threads im pretty sure it would be a significant cs journal paper
anyway it seems to me that this discussion evolved from "lisp doesn't support multicore/concurrency/parallelism" (false) to "lisp ain't erlang" (truism)
if you want erlang just use erlang, or a lisp version of it https://lfe.io/
Other commenters say it's nowhere close to what I'm looking for, btw.
And let me repeat that I am not looking for the old-school OS threads support. Almost all programming languages have that. It's nowhere nearly good enough.
It has Go style concurrency, look for `core.async`. Maybe it could have been said that it wasn't lightweight enough and that would have been due to the JVM not providing the primitives but they're here now under the name "Virtual threads".
IIRC somebody in this thread said it's not preemptive and doesn't have enough functionality. Quickly looking at it, it seems to be a fully opt-in / cooperative parallelism solution, which is IMO not good enough.
It's...close-ish? It could really use default non-blocking IO, but honestly the reason no one's made one integrated solution yet is that giving a callback that pushes to a channel is mostly fine. I'd like something a little more robust obviously, but it's by no means a toy.
I'd kill for common lisp with transparent green threads. It would be so great. As you noted once you use them you can never really go back. Of course, it will never happen because it can only be done in the compiler/runtime, not in a library, and the standard is frozen in stone, so that's that.
I've got some hope for clojure with that at least, thanks to it being jvm hosted.
Back in the 1990s, green threads were the usual thing in CL implementations. Everyone kinda wished for POSIX threads support instead, which most runtimes eventually gained.
You can still glimpse them as atavisms in old code and documentation.
I indeed saw this years ago mentioned in clozurecl's documentation. I was very upset that all the support for it got ripped out of the language, because as shown with https://blog.linuxplumbersconf.org/2013/ocw/proposals/1653 (which inspired what is now making its way into Java) this is the way to go for performant solutions with very very high thread counts.
*shrug* I personally hate cooperative multitasking, and so far I've never run into a problem that couldn't be scaled with system threads and use of select/poll. But I am sure Google sees its uses.
Web backends. BEAM VM's green threads work excellently for it, you can handle bursty workloads transparently and all you will pay for it is slightly increased latency (and I really mean slightly).
I know many people claim Ruby [on Rails] and Python [Django] do the same, but I've seen APM dashboards of such projects, and those of Elixir [Phoenix] and Golang [various, most recently GoFiber] apps, and the difference is pretty stark.
But if you mean the golden path of select/poll then yeah, hard to beat those on their own turf of course.
Probably the random missing documentation. To be fair, it's easy enough to navigate the source code but the real problem for me is discovering what's what and where to import it from.
Coalton doesn't really work (https://github.com/coalton-lang/coalton/issues/84?s=09) and Typed Racket is only typed outside macros (and I don't just mean that macros are hard to type, I mean the imperative code you write to generate code is not type checked). Any others?
Clojure's core.async channel concurrency is nothing like Go's. You need to wrap your concurrent block in a macro so it can code-walk to find explicit blocking points. That means these points need to be marked and can't be in already-compiled functions. That's an enormous limitation. Compare to Go's, which just works, no caveats.
Plus Go is just so nice with the automatic IO handoff. But in a functional language I really don't think it's as big a limitation: still large, but not as bad as it could be. The pattern of writing mostly small, pure functions and then piping them together with channels at the end is pretty workable.
Appreciating what Lisp is capable of (think macros), having worked through SICP some twenty years ago, and having tried a deep dive into CL and Emacs Lisp two years ago, I have come to the conclusion that there is no silver bullet in Lisp-land. Python is a good enough Lisp, as Peter Norvig has concluded. And it's got all batteries included. Ain't nothing it can't do: building websites, doing maths, automating... except building compilers and OSes. But Lisp won't compete in that field anyway.
Programming languages are there to express ideas and solve problems. I can think of more elegant ways to express ideas (functional languages for instance), but Python is an excellent ecosystem for solving problems.
> Python is a good enough Lisp, as Peter Norvig has concluded. And it's got all batteries included. Ain't nothing it can't do.
I almost agree, but Python fails in one area, which happens to be my current area of interest. It doesn't have sufficiently strict encapsulation to support object capabilities[0] at the language level.
JavaScript, on the other hand, does[1] - so you don't necessarily need Lisp for this, but I do enjoy the parentheses. :)
First, it has no macros. The ability to metaprogram in Python is practically nonexistent compared to Lisp.
Second, it has no usable lambda (it's limited to a single expression in a non-expression-oriented language). Most Lisps rely heavily on the ability to pass around arbitrary functions, whether named or not.
Third, it is slow. Your toy Scheme implementation is probably going to be about as fast as Python, and something like SBCL/Common Lisp will be literally 3x to 400x faster, and that's without mentioning that Python is limited to green threads. There's no reason to limit yourself to such a slow language.
Fourth, it lacks the interactive development environment of Common Lisp. Sure, it has a REPL, but you can't do all your coding against a live instance with fine-grained control over your updated code. This is hard to describe, but it's a massive differentiator.
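On the lambda point the contrast is stark: a CL lambda is a full function body, passed around like any other value.

  ;; multi-form, stateful lambdas are routine in CL
  (remove-if-not (lambda (x)
                   (let ((sq (* x x)))  ; several forms, local bindings
                     (> sq 10)))
                 '(1 2 3 4 5))
  ;; => (4 5)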
Python does have plenty of metaprogramming capabilities, they just are more complex to use than a macro system. One simple, but highly limited system is metaclasses. Similarly weak but existing features include creating classes at runtime, and adding new methods to classes at runtime. A far more powerful system is compiling source to AST, which can then be manipulated, and finally compiled to bytecode. This allows arbitrary metaprogramming, but is certainly fairly complex.
One could even use it to implement a macro-like system by coding up a new finder that uses a new SourceFileLoader subclass overriding the (accidentally undocumented!?) `source_to_code` method. In there, one can compile to AST, transform the AST, and then compile the AST to bytecode. (The method was documented back in 3.4 by way of being documented on an ancestor abstract base class, but that documented ancestor was changed to be static in 3.5, leaving this method seemingly accidentally undocumented.)
For the AST-transform step, one might look for apparent function calls to special macro functions, and instead call those macro functions at AST time, passing in the AST of their arguments and receiving back an AST that you place in the tree, replacing the function call. There is some trickiness here. For example, it is much easier if the macros are defined in another module, so that they are already compiled and thus available while importing a module that uses them.
It should also be possible to handle macros defined in the same file by stripping out non-macro function definitions, and non-imports from top level statements, compiling that, and then using the result while processing the whole (unstripped) AST, and then finally compiling the result.
Of course such a system is significantly more complicated than with a lisp, but it is still possible. Its load time performance is also not likely to be especially wonderful, but python is not exactly a performance powerhouse.
SBCL is written in Lisp, yes? Except the runtime, which is C + asm.
I've heard people wrote some OSes in the past, like Genera. Or if you prefer recent attempt, try https://github.com/froggey/Mezzano. Never tried it, though.
The true killer feature of lisp is s-expressions and the structural editing affordances they provide. “If I can’t slurp and barf, I don’t want to be part of your revolution.” ~Emma Goldman(ish)
I think Forth beats Lisp out in these domains. To program at all in Forth is to write a DSL and to metaprogram. It is extremely common to come into a Forth codebase in an enterprise environment, and it looks nothing like anything you've seen before. Hand an intermediate Lisp programmer a codebase where the first 1000 lines are completely replacing all the built-ins and then swapping to some non-sexpr language, you're going to get a lot of headscratches and frustration. A Forth programmer will have expected this and have a large bottle of APAP in their hand for the ensuing task ahead of them. In this same way, Forth is also an important lesson as to why these things are more "anti-features" when you base your entire language's identity around them.
It seems like, if anything, a language built around creating and traversing tree structures would be perfect for making compilers. But, hey, what do I know? :)
It depends on how you write the code and the compiler you use, but Lisp can be as fast as if not faster than C. But the main thing is that in Lisp it's easy to turn the dial between speed and other things you might want to optimize for like safety and debugability. Making C code memory safe is a lot of work at best, and actually impossible at worst.
Others have responded, but without much depth. Common Lisp on SBCL (maybe the proprietary compilers are similar or better, I haven't worked with them) has a remarkably good type inference system. You do need to declare types for speed, though declarations are optional; then again, you'd have to declare all your types anyway in Rust or C++. The result is extremely good for a highly dynamic language. Real number-crunchy code actually gets very close to bare-metal performance with type declarations, though if you're doing generic function calls it'll choke up fast.
So to get fast code, the compiler knows the types because you tell it. To get safe code, you don't need to do anything: CL is garbage collected. It's easy to have fast or safe in any language; having both requires type declarations and a good optimizing compiler.
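For a flavor of what those declarations look like, a small sketch (exact speedups vary, but SBCL compiles this to a tight machine-code loop):

  (defun dot (a b)
    (declare (type (simple-array double-float (*)) a b)
             (optimize (speed 3) (safety 0)))
    (let ((sum 0d0))
      (declare (type double-float sum))
      (dotimes (i (length a) sum)
        (incf sum (* (aref a i) (aref b i))))))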
That being said, none of that matters for this discussion. Nyxt uses WebKit under the hood because CL, even with SBCL pulling off genuine miracles, is not really intended to go toe to toe with C. I think of CL as occupying around the same niche as Java, which it's usually only a bit slower than.
SBCL (the implementation) gives us pretty useful type errors and warnings. It is not "fully typed" (see Coalton for a Haskell-in-Lisp), but it is very useful anyway: it warns when a function's arguments don't have the right types, and the like. The nice part is that we get these warnings instantly, since we compile the code function by function. We can add type declarations, which help the compiler infer more types, throw more warnings, and optimize the code.
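For example (an illustrative snippet; the warning text is paraphrased from memory, not copied from an SBCL session):

    (declaim (ftype (function (fixnum fixnum) fixnum) add))
    (defun add (a b) (+ a b))

    ;; Recompiling just this one function immediately produces a
    ;; compile-time warning along the lines of:
    ;;   caught WARNING: Constant "one" conflicts with its asserted type FIXNUM.
    (defun oops () (add "one" 2))

Because compilation is per-function, you get this feedback the moment you send the form to the REPL, not after a whole-project build.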
Funny story on Lisp's performance (compared to Java and Rust): https://renato.athaydes.com/posts/revisiting-prechelt-paper-... CL was blowing Rust out of the water (in speed) with its un-optimized algorithm, which the post author copied from a Lisp manual. After much sweating, Rust eventually beat CL's version.
I don't disagree. The post I was replying to effectively said macros are too hard to use outside of small teams.
They did have a point: it's not the kind of tool that you reach for on a whim. However, macros can be fantastically useful when planned and used with discipline.
I’ve worked through chapter one of SICP, and while I can admire the purity and get a kind of ‘belly feel’ for how good Lisp can be, it’s really painful translating expressions into prefix notation and having to do all the iteration using recursion.
You do get used to the syntax, at which point returning to languages that are prefix here, infix there, statements one way, expressions another, can wrinkle your nose a bit. But syntactic patterns are a question of emphasis; they establish a grain to go with. Lisp's syntax is so uniform as to have no grain. That puts more responsibility on you in exchange for high level flexibility—you can do things the way you want, and a creative method won't be against the grain like it might be in another language.
As for the recursion part, that's more true of Scheme, and it serves SICP's purpose of explaining that iterative and recursive processes can each be generated by recursive code. Common Lisp and friends do have iterative options for loops. But you will probably work in aggregate list operators more than touching a loop yourself.
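For instance, summing squares without writing any recursion yourself (a quick illustration; both forms return 55):

    (loop for x in '(1 2 3 4 5) sum (* x x))             ; explicit iteration
    (reduce #'+ '(1 2 3 4 5) :key (lambda (x) (* x x)))  ; aggregate operators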
FWIW, that's 100% true and virtually everyone experiences it.
It completely goes away, so much so that it's almost impossible to remember emotionally how difficult it once was. I can promise you won't struggle with it for more than a few weeks (at most) of daily use.
Source: someone who switched to Clojure recently, is loving it, and no longer has syntax or iteration anywhere on the list of things to think or worry about.
> Yeah, just like Perl. Or Latin. No one uses them, these are dead languages that belong to a museum.
The best analogy I have for Lisp is that it is the wheelbarrow of languages. It was invented long ago and is in no way obsolete. In many ways it achieved a level of mastery of its use domain that makes it almost impossible to think of a substitute.
When you need a wheelbarrow, a dump truck will not do, and neither will a bicycle or automobile. There is a reason for this rather famous quote: "Whoever does not understand LISP, is doomed to reinvent it."
Arm, AMD, and Intel are using a Common Lisp application called ACL2 to formally verify their main products. Google Flights' core is written in Common Lisp. It's far from dead.
I wish ITA did some talks, because I assume their experience of CL is very different from most. Even at the project-management layer they have deep, serious constraints... it's not just a simple product.
There are many, many more users of CL; I just listed some household names. If you don't hear about some technology every day, that doesn't mean it has to be "dead"; there are plenty of important and "alive" technologies that you are not aware of. And you just can't win this argument: if you try to prove the opposite you automatically lose, which is rather unreasonable.
>There are many many more users of CL, I just listed some household names.
I know. But it's still admitting defeat, as there aren't enough of them to make the language's liveliness self-evident and render a list of projects using it moot.
Nobody will ask for such proof from a language whose liveliness is not in dispute: one for which you see constant mentions on social media, jobs posted, major new projects being written in it, companies using it left and right, books aplenty being written about it all the time, one for which meeting people who use it is trivial, and so on...
>You just can't win this argument, if you try to prove the opposite you automatically lose, that's rather unreasonable.
This "inabillity to win" using such proof though, makes sense though if one understands "language X is dead" as not some kind of absolute statement that nobody uses it, but rather as it was meant: "it's not as lively as it was, nor it is particularly popular".
"But this company uses it somewhere" doesn't really answer it. Companies use all kinds of niche stuff here and there, on legacy projects, stuff they bought, or stuff done by some small team and used because "it works, so let's keep it", but that stuff remains niche. We can find companies using Eiffel, APL, and whatever too. Does mean they're not dead-dead either, doesn't mean there's much life in them.
If it were one of the handful of "Google-sanctioned official languages" that stuff is written in, for example, that would be a good argument (even if that was still just an individual company).
If you need to do your homework to tell whether a language is used, then it's not particularly alive.
The point isn't whether it's used by some; that's true for all languages. There are companies doing stuff in whatever niche language you want, and if you look hard enough you'll find this or that project in some FAANG using any old language (they might even use Oberon or Dylan).
But that's not what we usually mean by "dead" or "alive" (else all languages would be "alive", even Algol; I'm pretty sure there are still projects being done in Algol). So it is not about there being some use, but about whether the language is mass-adopted, or smaller but growing, or dwindling, past its heyday, and only used by the occasional outlier.
To drive the point home: there are swing bands playing 30s swing, and even clubs and events devoted to the genre. But swing music is pretty much dead in the sense we're talking about, whereas in the 20s and 30s it was very much alive.
Probably not the best example, because both the Catholic Church and Latin are likely to still be around when all of us, most programming languages, companies, and probably countries are in the dust.
Peak popularity in the <current year>, in a domain where most things have a half-life of less than 20 years, isn't a good indicator of longevity.