When I was younger, I imagined there would come along a true language that would solve my problems.
It took me a long time (and a lot of banging my head into bad environments) to realize that language matters a lot less when the IDE is well-configured and fluent with what I want to do, the toolchain allows for experimentation without ruining the ability to do large, collaborative work, and both debugging and profiling are possible. And those are things you can't really add to a language environment when the language itself (spec and implementation) is constantly in flux.
Maybe a language will appear one day that renders all those tools obsolete, but we still evaluate and run code on real machines, and it's still written by human beings, so I sincerely doubt it.
> when the language itself (spec and implementation) is constantly in flux.
Common Lisp and Clojure are legendary for their stability. So many times I have experienced it myself - I would pick up a 5-6 year old Clojure project, update a bunch of dependencies, and somehow most things work right off the bat. That trick never worked for me with pretty much any other language - C#, Java, Python, Ruby, Go, Lua, Javascript - you name it.
Why is that? Probably because Lisp dialects essentially have no syntax. Code is data and data is code.
Macros don't get written very often. Some Lispers (e.g., Clojurists) advocate against solving problems with macros when they can be solved with regular functions. But at the end of the day, macro or function, data remains data. In non-Lispy languages, syntax often complects meaning and order.
It's not about how often you write macros. Write just ten macros and you have ten syntax extensions. The Common Lisp implementation I use has around 700 predefined external macros.
> against solving problems with macros (if they can be solved with regular functions).
It's just that macros are for different things, and for those things they are widely used in Lisp.
A language is a tool that was built to solve some kind of problem, but no language is built to be the best at all problems. There are pros and cons to using it depending on the task, just like anything else. You have to consider what you want to do and pick a language that aligns with your goals.
Clojure is good for this because it's a parasitic language: it sits on top of and interops with JavaScript, Java, .NET, Erlang and their ecosystems (npm, Maven, etc.).
Learn the main language once and you get incredible reach; for every new runtime the core language is practically the same.
I'd love to see language design and language development treated as different disciplines.
Just like developers were the first web designers, developers have been the first language designers.
Ideally a language designer would only be tasked with how to express complex ideas in a simple, composable syntax.
Language developers would be tasked with everything else: basically making those ideas work as efficiently and consistently as possible.
Not to say a language designer and developer can't be the same person, but I think there might be some benefit to having a linguist (with some understanding of development) and a developer (with some understanding of linguistics) team up on a new language.
The amount of compiler and language theory needed to design a language that is also practical to use and implement is far greater than what a UI designer needs to know about web technology. If there was a language designer that wasn't also very knowledgeable about the implementation and use side of things, they would probably be pretty useless outside of syntactic sugar type stuff.
On the other hand, experts in their own field have a better chance of having a complete idea of what is needed.
For example, from the theoretical computer science perspective on programming languages, handling backwards-incompatible change is not a hot topic. Neither is build ergonomics (though, as a slight counterexample to my own point, stack, the Haskell build tool, has a shebang mode where you can turn a Haskell source file into an executable by prefixing some magic comments).
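Concretely, that stack shebang mode looks roughly like this (the resolver version here is just an example):

```haskell
#!/usr/bin/env stack
-- stack script --resolver lts-18.28
main :: IO ()
main = putStrLn "Hello from a Haskell script"
```

Mark the file executable (`chmod +x Hello.hs`) and it runs like any shell script; stack reads the magic comment and fetches the compiler snapshot it names.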
This might interest you (and anyone reading this thread): AnyDSL is a language framework that separates the concerns for both viewpoints (designing for expressivity, designing for abstracting 'complex ideas') but also that of the machine expert (designing for efficiency). I'm not affiliated with AnyDSL, I'm just parroting the blurb on its main site[0]:
> When developing a DSL, people from different areas come together:
> - the application developer who just wants to use the DSL,
> - the DSL designer who develops domain-specific abstractions, and
> - the machine expert who knows the target machine very well and how to massage the code in order to achieve good performance.
> AnyDSL allows a separation of these concerns using
Syntax is really not what makes most programming difficult at all. We have languages with really good syntax now, at least for the fairly low level most programming languages operate in (storing and retrieving variables, calling functions, etc).
The only way I could see what you're proposing making sense is if you got the linguist to design a much higher level language that operated on more concrete concepts. Some kind of domain specific language.
I agree, as the biggest test of a new language feature that involves syntax changes is not the superficial situations it makes easier, but how that feature interacts with the features that already exist.
I agree with this point of view. I’m a part-time MSc student in CS and have enrolled in a few courses in both linguistics and philosophy (how to convey ideas through language, non-CS) with the goal of designing a language for my thesis. At a theoretical level they seem connected but I’m mostly just interested in how they relate at the design and implementation level!
But you can't fill out the checklist "correctly", that's the entire point. A lot of these points are mutually exclusive, and which point to pick is a matter of conviction a lot of the time. Some of these points can be checked no matter the language (eg, you appear to believe that scaling to large projects will be easy).
This is particularly obvious in the "Your language has/lacks" section, where whichever choice you make can be argued against for almost every point.
The point is that there is no such thing as a perfect language, the success of a language rarely depends on the kind of criteria that programmers will judge it on at first glance, and success is a relative thing.
Bjarne Stroustrup, creator of C++, is often quoted with this statement: "There are only two kinds of languages: the ones people complain about and the ones nobody uses."
I suppose scathing criticism, in that respect, is the highest form of praise.
Define "successful language". Popularity of a programming language doesn't mean success. COBOL, Fortran and Pascal used to be hugely popular; now, with every passing year, it costs more and more money to support systems that were once considered "successfully built".
Sadly, it seems that in a few decades, that will be the fate of pretty much every single PL in TIOBE's top list. All popular PLs of today are morally outdated. Programming languages of the future will strive for:
a) Developer's productivity
b) Execution performance
c) Correctness of the program
And none of the popular languages today are making good progress in any of those directions.
Maybe JScript (Microsoft's proprietary JavaScript) and/or VBScript (VB sometimes shoehorned into a JavaScript-style role) would work for the first one, but to be honest I don't have enough experience with either to know if the "but worse" statement is fair.
Also, the "sweet spot" for Lua has some overlap with JavaScript, but I don't think the "but worse" complaint is quite fair, and Lua actually pre-dates JS (1993 vs. 1995).
But surely there are a bunch of HTML templating languages that would fit the category of "PHP but worse".
JScript is misunderstood. It isn't a proprietary JavaScript competitor. It's just Microsoft's implementation of JavaScript. They had to use a different name when talking about it in order to get around trademark issues.
There were some differences from Netscape's implementation, but they're pretty minor compared to the dialectal differences between competing implementations of pretty much any other language.
You misunderstand the real evil of JScript. Microsoft made it to be a complete clone of Netscape's implementation.
When Netscape decided to standardize the language, they went to ECMA. Microsoft didn't have ECMA completely under their thumb, but did have some control.
Netscape wanted to fix some fundamental issues with JS, but MS had just spent a ton of money making their clone, so they insisted all those problematic bits stay right where they were.
So, JScript is to blame for most of the bad parts of Javascript and is therefore worthy of all the hate you could possibly heap upon it.
That's all beside the point. The focus of the discussion was whether JScript could be described as a new language ("you have reinvented...") that's basically Javascript but worse. It can't, because it fails the "new language" test: it's just a different name for the same language.
document.write called on a loaded document? Reopen it and clear its contents. Yes because that makes sense and I wouldn’t want that to be an error state. Oh JavaScript.
Sure, something like a .clear() or similar, but the default behavior of clearing a loaded document by calling open on it is strange compared to what you would expect to happen with "normal" files/streams.
Edit to add:
I think calling write on a loaded document should have been an exceptional case and you should have had to explicitly call something to unload, replace or clear it.
IMO this is the inherent difficulty with Javascript. I would prefer that you have the "source code" as two separate things: the hard details of the page and the code to manipulate behaviour.
This [1] current HN front page link would illustrate what I mean. You have a colour palette (hard details) and the behaviour (code to plot the colour palette). Of course this is a simplification, but it shows the way of thinking and the way of optimising use of code and space.
That's not really a difficulty of javascript the language - you're supposed to have your javascript completely separate from the HTML and CSS, with everything in static files of that type. Separation of concerns was built into the web from the beginning.
It just happens to be a popular architectural pattern at the moment to reimplement the DOM, HTML and CSS all in javascript and have one big mess of code.
You're right that it's not javascript's fault per se. But the rest of your comment represents a narrow, dated and frankly ignorant viewpoint. Component-based front-end architecture has tremendous advantages. And if you want to see "one big mess", simply examine the CSS of virtually any moderately-sized website or web application.
I've been doing web-related development since the late 90's, and I make a living as a UI Architect. Increasingly my clients are looking for my help to unfk their nightmarish, effectively unmaintainable legacy codebases -- which typically have followed this same cargo-cult "wisdom" to their detriment.
I'll just vaguely gesture in the general direction of the multitudes of developers on HN who both work with front-end frameworks and complain ceaselessly about what a nightmare it is and stand by my statement.
It can have both tremendous advantages and be a mess. Both can be, and are, true.
Separation of concerns can be handled in one language. The popular MVC pattern does not require three separate languages, and in fact that's rare in my experience. So web frameworks moving towards the norm and allowing it all to be handled in the same language should not be surprising or a mess.
>The popular MVC pattern does not require three separate languages. And in fact that's rare in my experience.
ASP.NET MVC uses C#, (C)HTML and CSS for a total of three languages. WPF uses XAML and C# for a total of two. Android uses XML and Java for a total of two. I think Delphi only uses one, I vaguely remember the designer outputting plain Delphi instructions? I might be misremembering.
React is not exactly encouraging separation of concerns, and in some ways this was better in the times of JQuery. But I don't think this is directly related to the number of languages involved.
Of course when it comes to the web it's three languages underneath. Per the standard, you can't escape that. Now look at apps that are built to run on an OS using MVC. It's usually all in the same language. Why?
Separation of concerns doesn't imply separate languages. There might be some merit to the idea of a DSL for each concern? But if the end result of that experiment includes CSS, I would call the experiment a failure.
Android, WPF and Universal Windows all use XML to specify UI, and the programming language of choice to specify code. That's still two languages, even if it removes the need for a third language exclusively for styles. In each you could use pure code to build the UI but readability and maintainability would suffer a lot, so nobody does that. Once you have the view separated from the controller, writing the view in a language better suited for nested UI seems quite natural.
That's fair. My experience on the subject is admittedly pretty ancient. When I was doing it, back in the early 2000s, IB was how everyone I knew did it.
> Those who don’t learn from history are doomed to repeat it. Any day now, I suspect some fools are going to want to write web apps, but they won’t want to use raw javascript and they’ll create some ridiculous custom language to javascript compiler. I can only hope they have the good sense not to tell anybody about it.
What if I compile javascript... to javascript. So that I can have actually functioning javascript across the majority of fairly modern browsers, while using modern features.
All I'm saying is, honestly. I don't blame people for not writing straight javascript, when even if you do, you still need a build system in between if you want to support all browsers.
While I personally agree with you, I know that some people really hate 'everything is an expression', especially when assignment is treated as an expression. This has been a very controversial addition to Python 3 for example.
I would also mention that support for higher-order functions, in the sense of functions which take other functions as arguments, is even possible in C. I think a much more important feature that is only recently becoming mainstream is support for lambdas. Support for HoF without lambdas is relatively worthless, in my opinion.
Many things are not expressions in Python. My point was that the community is hostile to making (some of) them expressions, as was seen with the controversy over adding := as an assignment expression (so you can do `if x:=y`, for example).
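For context, the controversial form looks like this (the regex and variable names are just for illustration):

```python
import re

line = "error: disk full"

# Pre-3.8 style: statement assignment, then a separate test.
m = re.match(r"error: (.*)", line)
if m:
    detail = m.group(1)

# Python 3.8's assignment expression binds and tests in one step.
if (m := re.match(r"error: (.*)", line)) is not None:
    detail = m.group(1)

print(detail)  # disk full
```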
It's older than that. I won't say how I know, but bits and pieces of this were tumbling around in the undergrad hivemind of a certain Pittsburgh university for awhile before 2011. ;)
I remember seeing it back when I was interested in programming language design. I lost interest around 2010 (after concluding largely what the checklist was getting at, that there's no point developing a new programming language), so that would put it in the 2005-2010 period.
Ironically, most of the languages I've used in the last 5 years (Swift, Rust, Kotlin) post-date that period, so I was wrong in my conclusion.
Do you have a pointer to anything that predates the link I provide? From what I gather, that's the website of one of the listed authors. If you have an earlier link to something that looks like an earlier draft, that would be nice to have for us armchair internet historians.
I got an idea for a language where the code in the file format is some intermediary language, which you write with an editor plugin that makes it more human-readable.
The editor plugin (in contrast to existing language plugins) also translates the error msgs back to the human-readable code.
Just an idea, but I could not get it to fit in the questionnaire :)
>Unison is a language in which programs are not text. That is, the source of truth for a program is not its textual representation as source code, but its structured representation as an abstract syntax tree.
It has some further goals for doing this which are really exciting to me, such as content-addressable code, but the starting point is similar to the one you stated. :)
Projectional editors use that setup. Might be slightly different than what you have in mind since projectional editing is not great at text editing. But have a look at Jetbrains MPS, that's a language workbench for projectional editors.
In Smalltalk you had everything as part of a binary "image". The compiler/interpreter, debugger, IDE, the standard library, your source and everything else was in one big blob.
>> You appear to believe that: Syntax is what makes programming difficult
This is exactly the problem. People spend so much effort looking for better syntax that they miss the most important part of programming. It's about design and structure; the syntax doesn't really matter at all. I don't even care if it's dynamically typed or statically typed, functional or not. A bad developer will produce bad code no matter what the language is. I've seen this over and over.
It's not so different from human languages. It doesn't matter what the language is; if you know it well enough, you can write a great novel with it... But only if you're a good author to begin with.
A bad author is not going to start learning French because they couldn't write a successful novel in their native English... Instead, they will work on their storytelling because they know that they are at fault, not the language.
I've written code in many languages and I can produce good software with any of them. The language I'm most familiar with is the one which allows me to work faster, but it has basically no effect on quality of the code.
For example, in the past, I've written Golang code after learning it a couple of days before for a job interview, and the interviewer (a well known developer and founder of a popular startup at the time) commented that it was one of the best designed/structured samples he had seen. It took me a long time to write it because I had to look up stuff all the time, but that seems to have had little effect on the quality of the code.
People have to stop blaming tools and start blaming themselves. It takes over a decade of intense work (nights and weekends too) of constant self-blaming and adjustments to become good at coding. That's if you have natural talent for it. If you're not a natural then you have to be even more patient.
A master can make good work with shit tools, but they'll still prefer to work with good tools.
Syntax matters because it's part of the interface, and so it has the ability to clarify or confound. Destructuring assignment; async/await instead of CPS (technically not exactly equivalent due to variable scope, but close enough in actual usage); pipe operators/macros instead of (third-function (second-function (first-function the-data))). None of these make the language more powerful but they do aid clarity and concision, as well as just making the language nicer to use.
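As a toy illustration of that last point (the stage functions here are made up), compare nested calls with a small pipe helper:

```javascript
// Hypothetical pipeline stages, purely for illustration.
const trim = s => s.trim();
const upper = s => s.toUpperCase();
const exclaim = s => s + "!";

// Nested calls read inside-out:
const nested = exclaim(upper(trim("  hi ")));

// A pipe helper reads left-to-right, in execution order:
const pipe = (...fns) => x => fns.reduce((acc, f) => f(acc), x);
const piped = pipe(trim, upper, exclaim)("  hi ");

console.log(nested, piped); // HI! HI!
```

Neither version can do anything the other can't; the pipeline just reads in the order the data flows.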
I'm a big fan of async/await and I can appreciate what it achieves in terms of helping to prevent coding errors. I've rewritten an entire open source WebSocket framework in async/await, so I know exactly what the benefits are. But in spite of that, in the grand scheme of things, I still think it doesn't really matter. The architecture of the project is much more important.
I had been working on that project for 3 years but migrating it to async/await only took me about 2 months of casual after hours work. The reason it was so easy to migrate and that I didn't have to write the project again from scratch is precisely because the architecture was correct to begin with.
When you think about code quality in terms of how much (or rather, how little) time it takes to maintain it to adapt to industry trends, architecture is by far the most important factor. Because even if everything changes from underneath you, a good architecture will survive the test of time. Any other metric to measure code quality is arbitrary and meaningless IMO.
It's mind-boggling that my previous comment received downvotes. I guess it reinforces the point that the author of the article is making. The average developer is incapable of understanding what is important. If people can't articulate what their goal is and they don't know what is important in order to achieve this goal, then there is zero chance that they will be able to make the right tradeoffs. Because there is no free lunch, every decision is a tradeoff. If, like for the vast majority of projects, there is a good chance that the requirements will change significantly in a year or two, then you need to know what really counts, and architecture is one of those things that can stick if done correctly.
I would say a good developer can make the code clear: I can make my ES5 code clearer than some bad ES7 code. Most of the time, clear code depends on how you name things and split your code into logical, intuitive steps.
Somehow it does though: I find even j/k/apl easier to parse in my head than Ruby; somehow the ruby syntax gives me a headache. Not sure why that is, as I wrote a lot of code in it, but I find Python, C#, F#, Haskell or Kotlin far nicer to read and write. And, even though I understand the semantics and can write code in Ruby, if I have a choice, I would never touch it again. And that's only syntax/idioms. I find JS also quite rancid syntactically, but not as repulsive as Ruby.
And I know other people have this with different (or the same) languages as well; like someone else said, most people (even on HN) won't even try languages with a 'weird' syntax, even if they are proven to be more productive in some cases relevant to them. I know quite a lot of Lisp-y coders (most of them do Clojure these days) and they don't understand how/why you would ever use anything else. Most people here (maybe including you) have the inverse of that. As there are blazingly fast Lisp/Schemes on every platform, why are you using Go or another syntax than the basic AST? And yet you are and most of us are. If syntax does not matter, why have any syntax at all?
1. Other things matter more. How a language scales to 100 programmers working on 10 million lines over 20 years, say, matters more (in some environments) than the syntax does. Syntax contributes to that. But syntax contributes to that precisely by being pretty vanilla, uninteresting syntax. More, sexier syntax makes a language worse for that environment. (I'm talking about Go here. But I could make a similar argument for other languages in other environments.) Syntax matters as a means to an end; the end matters. Syntax where the end is syntax doesn't matter so much.
2. I suspect (and assert without proof) that peoples' brains work in different ways, and that a person finds languages easier or harder as those languages conform or conflict with the way the person thinks. Ruby syntax gives you a headache? And the problem isn't that you just need to learn Ruby better. But for every you, there's (at least one) someone who has the same issue with J/K/APL. And that's fine. People whose way of thinking matches APL should program in APL, and those whose way of thinking matches Ruby should program in Ruby. We don't need one language to rule them all. They each have their target niche and their target audience.
> I know quite a lot of Lisp-y coders (most of them do Clojure these days) and they don't understand how/why you would ever use anything else
Because Lisps emphasize data more than the syntax.
The following quote is from Rich Hickey's famous "Simple Made Easy" talk:

> Syntax, interestingly, complects meaning and order, often in a very unidirectional way. Professor Sussman made the great point about data versus syntax, and it's super true. I don't care how much you really love the syntax of your favorite language. It's inferior to data in every way.
Yeah, for sure, this is a big thing. I love Ruby’s syntax, and I’d happily write it all day if I were being paid the same as for another language. Psychology is also at play. JavaScript, I agree, can be quite rancid. But knowing it’s the only language that works in every browser, I internally whined about it less, and now rarely do at all.
I do not really whine about it (except when people are talking about it as I do feel it's important to have and hear different opinions) and of course JS is currently a necessary evil. Thinking about that too much makes no sense and serves no purpose, but when I step back, I know I just really don't like it. And then I just get back to coding. With Ruby I have a harder time doing that. Guess it's taste and 'first experience'; my first Ruby experience was inheriting a very large, badly written (very hacky) RoR codebase I had to migrate 2 versions up to the latest RoR. It was pure hell. So it would be a combination of things that gives me such PTSD for syntax.
> it’s the only language that works in every browser
Assembly is the only language that works on every machine, no matter how small or big. But that doesn't mean that we all have to write assembly. Or Javascript (if we're targeting Web). Purescript, Elm, ReasonML, Clojurescript - they all exist for good reasons, not some made-up bullshit points.
> It's about design and structure, the syntax doesn't really matter at all.
While it’s true that you could write any program in almost any syntax, that does not mean that syntax isn’t important. Syntax is the UI of a programming language, and as with any UI, UX matters. (You can accomplish tasks with bad UIs too, but that doesn’t mean it’s pleasant or efficient.) Syntax shapes how you think in a language and how you view or use the semantics. We have many languages which are semantically essentially the same and differ only in syntax, and people prefer one over the other, often for different purposes. Ergonomics are important.
Hell, many people won’t even give languages whose syntax they don’t like a chance... (don’t like lisp parentheses? Python significant whitespace? Forth’s stack shenanigans? J/k/apl/Perl’s keyboard-mash-symbols? Etc)
Public interfaces should ideally be formally typed, both to provide a behavioral contract between parties and to serve as user documentation. Whether the contract is enforced at compile- or run-time is secondary to having a formal mechanism for expressing it in the first place; ditto whether the code hidden behind those interfaces is explicitly typed, inferred, or untyped.
Two problems that tend to occur in practice:
* Untyped languages (e.g. Python, Ruby, JavaScript) typically lack a formal dialect for expressing parameter and result types, and most don’t even provide a formal mechanism for annotating interfaces. Without a common language for expressing interface types, it’s nigh impossible to develop the tooling and education that are the necessary prerequisites to popular adoption.
* Typed languages (e.g. C†, Java, Swift) frequently have poor/limited expressivity, creating friction for users, who end up fighting against the type system instead of working with it to describe their precise needs, while failing to provide the promised safety. e.g. Inability to describe numeric bounds means many functions with integer/double parameters will be partial, and passing inappropriate values will trigger runtime errors/aborts/undefined behaviors (e.g. divide-by-zero, bad array index). And that’s just the easy stuff.
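A quick Python sketch of the bounds problem (the function and names are invented for illustration): the annotated signature accepts any int, but only a narrow range of values is actually valid, so the function is partial and fails at runtime.

```python
def nth(items: list, i: int) -> str:
    # The signature says "any int is fine", but only 0 <= i < len(items)
    # actually is; the type system cannot express that bound.
    return items[i]

print(nth(["a", "b"], 1))  # b

try:
    nth(["a", "b"], 5)     # type-correct call, runtime failure
except IndexError:
    print("IndexError at runtime")
```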
Python3 is one exception to the former, in that function interfaces can be formally annotated, though the language itself doesn’t provide the mechanism to enforce those annotations. And Eiffel is something of an exception to the latter in that interfaces are annotated with a mix of compile- and run-time checks. And then, of course, there’s ongoing work on dependent type systems and no doubt other CS research that may someday filter down into the production languages in mainstream use. But it’s tough and slow, and a giant wildly-inconsistent ballache in the meantime, with more really badly reinvented wheels than it’s possible to count.
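To make the Python 3 point concrete: annotations are recorded on the function object, but CPython itself never enforces them (external checkers like mypy do).

```python
def greet(name: str) -> str:
    return f"Hello, {name}"

# The annotations are stored as plain metadata...
print(greet.__annotations__)

# ...so a type-violating call goes through without complaint at runtime.
print(greet(42))  # Hello, 42
```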
--
† Inasmuch as C can be said to have a “type system” at all. (Really, it’s just got a handful of annotations for allocating memory on the stack.)
I am still more productive with a type system, also on my own. When projects reach a certain size, it just helps a lot in keeping things working and clean long term. This 'contract' can be with your former or future self, and, in my experience, helps a lot.
If syntax doesn't matter at all, then why haven't more people switched to [insert name of esoteric programming language]? Programming is indeed about design and structure, so why create tools that gratuitously distract programmers from design and structure?
I've used enough graphical programming languages to have come to believe that syntax matters (iff you're constraining your definition of "programming" to "writing strings of ASCII or Unicode by hand that get interpreted into an executable format by an intermediary program" ;) ).
"but it has basically no effect on quality of the code" Well, if you include the number of bugs of a certain class as a criterion of quality, it would be pretty hard to argue that there is no difference between, say, Rust and C.
I think C has better syntax than Rust, by miles. Rust's syntax is the bastard child of C++ and OCaml and is more than the sum of its parts. Rust code having fewer bugs has absolutely nothing to do with its syntax.
After 3 years of Rust and 20 years of C, the syntax of C and C-like languages (C++, Java) looks awkward to me, because in C I need to write more code to get an uglier result.
I have coded C since 1984, and C++ since 1988. I found Rust syntax extremely easy to switch to, but I found switching back to C++ syntax extremely difficult.
Do not learn Rust if you think you might need to code C++ or C ever again.
Rust probably should have used "." in place of ";". Or anything, really.
It's true that some people can write extremely clean, beautiful Perl. Nevertheless, it's still a lot less effort to write clean, beautiful Python or Ruby.
This checklist comes across as incredibly ignorant. We push forward with research into all kinds of things: processors, hardware, mathematics, etc. The idea that PLT should somehow be exempt is silly. New languages coming out that do something different are a testing ground for features; some will become popular, most won't.
Sometimes there is a real advantage to writing real world applications in some particular language. Maybe you are writing something that needs to be performant, safe, and low level, so Rust is an attractive option. Maybe you are writing web front-ends and you see the benefit of a type system like what's available in Typescript. There are lots of reasons not to just write everything in C or Java. Don't be sour because we aren't all using the same language for 50 years; we've made advancements in computer science and that's reflected in our tools.
I think GP gets it. This checklist does a great job making several different valid points, and at the heart of them seems to be the fact that there is no Perfect Language. Discovering this checklist helped me a ton back when I was turning down language after language for silly reasons. For example, I'm glad I gave Rust a chance this past month, despite dismissing it a few years ago.
I make languages and I LOLed. Self-awareness, good knowledge of CS history, and a huge heaping helping of humility are all key prerequisites to inventing a language that is even marginally less appalling than all of its predecessors. “You will FAIL” is about the best advice a nascent language designer can receive, and they should embrace it.
I don't take this so much as making fun of everyone making languages, but as making fun of people who think they will upend the existing zeitgeist of languages because, although decades (nearly a century) of language designers have come before them, they alone have the key insight those minds lacked.
Some day, that may become true. But the safe-money bet is on "no" for the vast majority of languages one will encounter in academia and industry. Most will fail to catch on, a few might be remembered, and the ones that succeed will do so because of forces unrelated to the zen of their design as much as the new ideas (or old ideas done right) they bring to the table.
(They share that in common with startup companies ;) ).
Criticism of typed FP seems to be the main focus of this, as most of its criticisms sit at the top of the list. I think it's partly justified: typed FP has been cargo-culted to the point where I've seen hello-world implementations that definitely were not obvious and required a PhD in category theory. That said, I've seen a ton of Python code that took monumental effort to get working at runtime, and broke unpredictably in production too...
> You appear to believe that: [ ] Nobody really needs: [ ] concurrency [ ] a REPL [ ] debugger support [ ] IDE support [ ] I/O [ ] to interact with code not written in your language
This is funny for general purpose languages, but doesn't seem relevant for specialised languages. I wish the checklist clearly stated that it targets general purpose languages.
I think the joke-inside-the-joke is the assumption that every language seeks to be a GP language. Which is not true, but sure tends toward truth as time approaches infinity.
Man, I so need something like this filled out for Blazor, but since it's not expressly a language, it doesn't quite fit. Been arguing against it for a while, half the new projects (from other teams) where I work are using it.
My mate's a ‘die hard’ C# fan and has been using it (almost exclusively) for over 15 years. Recently he's been talking about Blazor a lot, and having worked on a few web apps myself, I can see its general appeal.
Why don’t you like it? I’ve not read any anti-Blazor opinions.
First off, I'm not really against it in concept... my only real complaints are that the latency for server actions over the internet is rather painful (look at the DevExpress or Telerik component demos) and that the wasm payload is around 2.2 MB for a hello-world app to start.
The component libraries themselves are relatively poor quality and have some weird JS integrations. There are no good open-source component libraries. Even a good library that utilizes bootstrap or material-design scripts would be nice.
Once these issues are generally resolved, I'd be more inclined to suggest it, but I do not want to be the first mover on this one. I've seen way too many X-to-browser libraries come and go without gaining any real traction, only to die on the vine, so to speak; then you're stuck rewriting or patching an application that looks and feels ancient.
Even then, one of the larger arguments in favor of Blazor is really "I can reuse my C# skills." Even there, you need a lot of browser application context knowledge in practice, and you've just created disconnects and extra work exactly where that knowledge is needed. That doesn't even get into the mirrored directory-tree structure of .NET MVC that Blazor repeats.
If I were to explore something similar, I'd probably favor Yew (rust) which seems to be around 220k for the todo mvc app.
In summary:
* Bulky payload or laggy response
* Lighter alternatives
* No network effect
* Dirty abstractions
* Messy structure by default
* Verbose
My preferred stack for web applications isn't particularly lightweight, but it's much more responsive, and imho offers much better developer velocity with a more consistent, better-functioning application...
It's a variation of the "checklist explanation of why your proposal is a failure" that was in vogue back in the day, and is less often seen these days. I first saw it in the 90s.
It's a joke. You don't "use" it except to diss a language you don't like.
I also chuckled at the '[ ] "multi-paradigm"' and immediately thought of lisp since it's one of the only languages that's ever had a reason to lay claim to that.
If you think about it, lisp is really just the smallest amount of higher-level PL primordial soup you can give to a programmer.
Almost every language is multi-paradigm (it takes only two paradigms, after all, so every imperative OOP language is multi-paradigm). All the most popular languages today mix imperative, functional, and OOP styles. Few languages (and certainly fewer popular ones) are paradigmatically “pure”.
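To make the point concrete, here's a toy sketch (not from the thread, and the function and class names are made up): the same small task written three ways in Python, a language that mixes paradigms freely.

```python
from functools import reduce

# Imperative: explicit loop and mutation of an accumulator.
def total_imperative(xs):
    acc = 0
    for x in xs:
        acc += x * x
    return acc

# Functional: a higher-order fold, no mutation.
def total_functional(xs):
    return reduce(lambda acc, x: acc + x * x, xs, 0)

# OOP: state and behavior bundled in a class.
class SquareSummer:
    def __init__(self):
        self.acc = 0

    def add(self, x):
        self.acc += x * x
        return self

    def total(self):
        return self.acc

nums = [1, 2, 3]
print(total_imperative(nums))                          # 14
print(total_functional(nums))                          # 14
print(SquareSummer().add(1).add(2).add(3).total())     # 14
```

All three styles coexist in the same file without ceremony, which is exactly what makes the "multi-paradigm" checkbox apply to nearly every mainstream language.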
Imperative is orthogonal to object-ness, that doesn’t make a language multi paradigm. C++ was considered multi paradigm because it supported both OOP and procedural programming for organizing code.
Of course, these days procedural is taken for granted and drops off the radar as a notable paradigm.
The dual of imperative is declarative. It is quite possible to have a functional imperative language, as Guy Steele points out in his "Lambda: The Ultimate Imperative" paper.
I am not sure how you can count imperative and OOP as two paradigms. OOP is an extension of imperative paradigm. I would like to see what language you consider to be OOP but not imperative and not functional.
In the spirit of a bit of fun, Erlang can arguably make that claim. The processes are easy to view as objects, in the sense that I've often thought that when I read what Alan Kay said about objects, it often seems to be describing Erlang moreso than Smalltalk. But the language those objects are implemented in is functional.
(That said, if you read real Erlang code, despite being immutable and nominally functional, the code often is in practice simply imperative code where the code author has manually done the SSA rewrite. This is one of my major criticisms of the language. If you really try to program it functionally, it's quite a pain compared to Haskell.)
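The "manual SSA rewrite" the comment describes means threading state through a chain of numbered bindings instead of mutating one variable. A minimal sketch of that pattern, written in Python for illustration since the style isn't Erlang-specific (the function and field names here are hypothetical):

```python
# Hand-done SSA style: each step produces a fresh, numbered binding
# rather than mutating a single `state` variable in place.
def handle_request(state0, request):
    state1 = {**state0, "count": state0["count"] + 1}   # bump the counter
    state2 = {**state1, "last": request}                # remember the request
    state3 = {**state2, "log": state2["log"] + [request]}  # append to the log
    return state3

s = handle_request({"count": 0, "last": None, "log": []}, "ping")
print(s)  # {'count': 1, 'last': 'ping', 'log': ['ping']}
```

It is immutable on paper, but structurally it reads as three sequential mutations with the renaming done by hand, which is the commenter's complaint.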
I hadn't heard of Pict, but there's occam-pi and JoCaml, which are based on the pi calculus. (JoCaml is based on the join calculus, which has been shown to be equivalent to the pi calculus.)
I'm sure the authors have heard of Lisp. And I suspect the authors are making a dig at the fact that Lisp is now over 60 years old and its popularity has been dwarfed by many, many "worse" languages (partially because Lisp has several properties that are checkboxes in this list ;) ).
You're making a classic mistake - Lisp is not a language. It's a set of ideas. Some brilliant ideas. Pretty much every single PL that in use today was influenced by those ideas. There's plenty of Lisp in Python and Javascript. And pretty much there's a Lisp dialect and compiler/transpiler/interpreter for every platform today. Lisp (despite being over 60 years old) never went out of fashion and I doubt it ever will.
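A toy illustration of the "code is data" idea in a non-Lisp language (everything here is invented for the example): a Lisp-style expression represented as nested Python lists, evaluated by an ordinary function.

```python
# The Lisp expression (+ 1 (* 2 3)) written as plain Python data.
expr = ["+", 1, ["*", 2, 3]]

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(e):
    """Walk the nested-list 'code' and compute its value."""
    if isinstance(e, list):
        op, *args = e
        return OPS[op](*[evaluate(a) for a in args])
    return e  # atoms (numbers) evaluate to themselves

print(evaluate(expr))  # 7
```

Because the program is just lists, any ordinary function can inspect, transform, or generate it before evaluation, which is the property Lisp dialects get for free from their lack of syntax.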
Indeed, no actual compiler or interpreter is needed. Or computer, really. Like the Lambda Calculus, it's a notation for expressing algorithmic ideas. The only right thing to do with an algorithmic idea is to analyze it. Executing just gets your hands dirty, and makes you smell bad.
Okay, so replace "Lisp" in the above with "implementations of Lisp." We're clearly not talking about the abstract, idealized, unexecuted language for discussing algorithmic theory in the context of implemented programming languages.