Ask HN: What are the nice features you need in a programming language?
15 points by Hashex129542 19 hours ago | 43 comments
I'm developing a programming language. The keywords and features are mostly based on Swift 5, but with some additional features like:

1. An async function can be called from a non-async function, with no async/await keywords. If you want to block the main thread, a block_main() function will be used: block_main() /* operations */ unblock_main()

2. A protocol can inherit another protocol (or protocols) & a protocol can conform to a class, like Swift.

3. No `let`, only `var`; the compiler can optimize further.

4. if (a == (10 || 20 || 30) || b == a) && c { }

5. The asterisk is replaced with an `x` operator for multiplication operations.

What are the features you have found, or that you need, in a programming language?






> no `let`. only `var`. compiler can optimize further.

This is not an "additional feature" so much as removing a feature. One reason a programmer might declare something to be a constant is to prevent the symbol from being rebound. You're right that one consequence of using `let` instead of `var` is that the compiler can more aggressively optimize constants, but the primary consequence is to capture and enforce the programmer's intent that the binding be immutable.


Drive-by language design tips:

Stop caring about syntax and start caring about semantics. For example:

> if (a == (10 || 20 || 30) || b == a) && c { }

I get what you're after, but don't optimize syntax for bad code. `a in [10, 20, 30, b] and c` makes a lot more sense to me conceptually than trying to parse nested binary operators, especially when you consider the impact on type checking those expressions.
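
For illustration, a rough Swift-flavoured sketch of that reading (since the proposed language is Swift-based; the variable values are made up):

    // Membership plus a boolean guard, instead of nested binary operators.
    let a = 20, b = 42, c = true
    if [10, 20, 30, b].contains(a) && c {
        print("matched")
    }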

> async function will be called from no async function but no async/await keyword. If you want to block main thread then block_main() function will be used. block_main() /* operations */ unblock_main()

Think very carefully about the concurrency model of your language before thinking about how the programmer is going to express it. In particular, this model is a footgun that can leave a program in an invalid state.

> What are the features you found or you need in a programming language?

The biggest misdesigns I see in languages are a result of PL designers pushing off uninteresting but critical work so they can explore interesting language semantics. For example, the compile/link/load model and module-resolution semantics have far more impact on the success and usability of any new language (because they empower package management, code reuse, etc.), and the details are hairy enough that taking shortcuts or pushing them off until later will kneecap your design.


For me, metaprogramming (code that can generate code based on other code). I used this in my current personal project to get serialization & tagged unions in a language that didn't have those in the standard feature set. It enables so many things, I can't go without it now!
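
For what it's worth, Swift gets part of the way there with compiler-synthesized Codable conformance, where the serialization boilerplate is generated for you; a minimal sketch:

    import Foundation

    // The compiler synthesizes the encode/decode code for Codable types --
    // the kind of boilerplate that otherwise has to be metaprogrammed.
    struct User: Codable {
        var name: String
        var age: Int
    }

    let data = try! JSONEncoder().encode(User(name: "Ada", age: 36))
    print(String(data: data, encoding: .utf8)!)   // e.g. {"name":"Ada","age":36}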

Nullables are a must for me these days - all variables that can be null should be marked as nullable, and the language should make it quick to handle null conditions and such.
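
A minimal Swift sketch of what "marked as nullable" looks like when it's part of the type system (illustrative values only):

    // Nullability is part of the type, and the compiler forces the nil case
    // to be handled before use.
    var nickname: String? = nil            // may be nil, and the type says so
    nickname = "Ada"
    let length = nickname?.count ?? 0      // handle the nil case inline
    if let nickname {                      // or unwrap explicitly (Swift 5.7+ shorthand)
        print("Hi, \(nickname) (\(length) chars)")
    }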

Observables/reactive programming are pretty important to me on FE too, especially if you have async threads all over the place (looking at you web tech).


* Quick syntax for getters/setters.

* Defining multidimensional indexing/enumeration for an instance of a class that isn't a sequential array, e.g. citizens['elbonia'][1234] mapped to indexer(country, id), and for loops: for (citizen of citizens['elbonia']) mapped to enumerator(country) (see the sketch after this list).

* Full observability: you can listen to any action, like assignment/reading, instantiation/destruction, calling/returning.
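
As a rough illustration of the indexer/enumerator point, here's what it might look like with Swift-style subscripts; the Census/Citizen types are made up:

    // Custom multidimensional indexing via subscripts (made-up types).
    struct Citizen { let id: Int; let name: String }

    struct Census {
        private var storage: [String: [Int: Citizen]] = [:]

        // indexer(country, id)
        subscript(country: String, id: Int) -> Citizen? {
            get { storage[country]?[id] }
            set { storage[country, default: [:]][id] = newValue }
        }

        // enumerator(country)
        func citizens(in country: String) -> [Citizen] {
            Array((storage[country] ?? [:]).values)
        }
    }

    var census = Census()
    census["elbonia", 1234] = Citizen(id: 1234, name: "Bob")
    for citizen in census.citizens(in: "elbonia") { print(citizen.name) }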


I dream of a programming language that's verbose enough to be extremely readable without relying on editor & tooling but concise enough to not be overwhelming for the programmer.

It'd have a small number of reserved keywords. It'd read top-to-bottom without hidden control flow. Editors & tooling won't be necessary, but will add progressive enhancements to the development experience when available. It'd be easy to tokenize and parse. No syntax sugar.

The compiler will enforce Hungarian notation for all variable names. `iSize` will be an integer the same way `strName` is a string. The type information is never hidden away. Declare constants the same way, but with SCREAMING names. The const-ness is never hidden away.

Functions are first-class and higher-order. Declare functions the same way, but with an fn suffix. Functions don't necessarily need to be pure, but pure ones will be optimized. Async-await is automatic and the compiler just optimizes it away when you don't need it. No function coloring.

The type system must be flexible enough for me to design complex algebraic data types and sum types. Pattern matching is a must-have in that case. I have no idea what the system would be like, but I want it invisible enough that it doesn't get in my way about 75% of the time. I want to make software, not write poetic programs.

No meta programming. I do not like the way it is done currently in almost all programming languages. But I do understand it's much needed. A new way of doing it must be figured out.

Everything should be explicit. Needs to allocate? Get an allocator. Have an exception? Return it. Set boundaries where compiler can step in to perform optimizations.

Backward compatibility of the language design and the compiler shouldn't get in the way of the language design's potential. The language design and the compiler will get better with time, but people won't rewrite their programs every year. The standard to be maintained is that programmers should be able to incrementally migrate to newer language versions one dependency at a time, one file at a time, heck even one function at a time. Most languages fail this. See JS with CJS/ESM mess for example.

Oh, also file paths should always be POSIX-y. No `use a::b::c;` for imports. No aliased `@a/b/c` paths. Just plain absolute or relative file paths that I could follow in any text editor without language-specific tooling.


Native wrangling of JSON objects. Instead of stringify, parse, convert to arrays, treat JSON as a primitive object with properties. When you assign a JSON object to a string, the language should have an official JSONtoString casting mechanism. Even better, it should have an official way of casting JSONtoStruct so you can easily insert it into a table.
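
Swift's Codable is probably the closest existing analogue to an official JSONtoStruct cast; a minimal sketch (Order is a made-up type):

    import Foundation

    // Decode JSON straight into a struct -- roughly the "JSONtoStruct" cast,
    // though via a library call rather than a built-in cast operator.
    struct Order: Codable {
        var id: Int
        var total: Double
    }

    let json = #"{"id": 42, "total": 19.99}"#
    let order = try! JSONDecoder().decode(Order.self, from: Data(json.utf8))
    print(order.total)   // 19.99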

- Sum types / discriminated unions

- Pattern matching (arguably a more flexible and general way to do the kind of thing in your number 4 `if` syntax; see the sketch below)
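
A small Swift-flavoured sketch of both bullets, including the "number 4" check expressed as a pattern:

    // Sum types as enums with associated values, consumed via pattern matching.
    enum Shape {
        case circle(radius: Double)
        case rect(width: Double, height: Double)
    }

    func area(of shape: Shape) -> Double {
        switch shape {
        case .circle(let radius):
            return .pi * radius * radius
        case .rect(let width, let height):
            return width * height
        }
    }

    // The "a == (10 || 20 || 30)" idea as an ordinary pattern.
    let a = 20
    switch a {
    case 10, 20, 30: print("matched")
    default:         print("no match")
    }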


I actually like javascript.

I want types, like typescript, but instead of compilation, there should be a "boot type check" which does about the same checks as tsc, but the types should be first-class, available at runtime, so I can pass them around, create, read, update and delete them at runtime.

I want runtime code creation that is more robust than creating source code as strings and then compiling it; I want first-class code-as-data like in LISP. I want to be able to define some blocks of code and conditionally combine them into new blocks which I can compile to functions. I want to be able to derive functions that are more, less or different from their parents (for example, removing or replacing a number of their statements). Basically, I want enough control that I can choose a pattern of generating ideal callbacks with no conditionals.

I want to be able to express (I use % for lack of a better idea right now; this is the type-safe version of the syntax):

const someBlock = (a: number, b:string)=%{ console.log(a+2+c); }

And pass it to a function: myFunc(1, someBlock, 3);

(and someFunc should be able to use it: function someFunc( aBlock: % ) { const a = 1; const c = 3; aBlock; })

I want better introspection. If I pass a function, there's no reasonable, robust and performant way to reason about that function: you can't access its parameter list to learn the names, types or number of its parameters; you can't access its body to learn what it does; you can't even see its return type to determine what to do with its result. Mind you, most of this metadata is already present in the JS runtime, just not exposed in a good way to the language. You can't even access the AST.


You can do a lot of the codeblock stuff with JavaScript's Function prototype (bind, apply, call):

const someBlock = function () { console.log(this.a + 2 + this.c); }

const myFunc = someBlock.bind({a: 1, c: 3})

myFunc(); // logs 6


I don't know how. And the answer may be Lisp (!) ... but a decent way to deal with tonnes of cross-cutting concerns out of the box for many use cases.

For example, a web server needs authn, authz, logging, metrics, DB retries, and all the usual gubbins. A DSL for this would rock.

Elm is an example of a language that went down the DSL road, and while it arguably made some rough choices, it proved that a DSL can work really nicely. One side effect is that dep management was perfect!


Apparently, x86-64 has:

    981  unique mnemonics
    3684 instruction variants
Since the architecture and instruction set have been evolving for decades, I often wonder whether the compiler is generating code for some lowest common denominator. If a sufficiently smart compiler were to compile code for the developer's most current CPU, the code would need to be recompiled for lesser systems.

ARM architectures are getting instruction set bloat and RISC-V is also tending that way with many variants being produced.

I prefer minimal syntax, e.g. Lisp, Smalltalk, Self. Then let the abstractions be plugged in with appropriate compiler support. I find that the idea of blessed built-in types constrains implementation designs due to their prevalence.


If the compiler were fast enough, code could be recompiled for every deployment. With a comprehensive profiling/benchmarking test suite, it could be even more advanced; all features could be used and quirks accounted for, code/data could be optimized for cache size and bus speeds, etc.

Language support protocol for IDEs/editors (aka Pygments[1]/Tree-sitter[2] grammars for Emacs, Neovim, Zed, etc.)

Because of the similarity to the Swift language, perhaps a Hazel[3]-style option to highlight differences between standard Swift & the Swift-like language.

-----

[1] Pygments, generic syntax highlighter: https://news.ycombinator.com/item?id=41324901

[2] Tree-sitter: https://news.ycombinator.com/item?id=39408195 / https://github.com/mingodad/plgh

[3] Hazel: A live functional programming environment featuring typed holes: https://news.ycombinator.com/item?id=42004133


A formatter with little to no configuration, similar to black.

A linter where the reasoning for all rules is well explained, similar to shellcheck or eslint.

Both ideally integrated in an LSP, which also has all the common features of a modern LSP.


Instead of going up in abstraction, I think it’d be nice to have a lang that tries to go down, more coherent with the hw, without “historical” assumptions.

My pet peeve is multiplication: u64*u64=u128. True on any modern HW, true in math, false in any programming lang that I know of. There are many others, like unnecessary assumptions about how easy it is to access memory.
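
For reference, in Swift the full 128-bit product is reachable, but only through a library call rather than plain `*`; a minimal sketch:

    // Full-width multiply: the 128-bit product comes back as a (high, low) pair.
    let a: UInt64 = .max
    let b: UInt64 = .max
    let product = a.multipliedFullWidth(by: b)   // (high: UInt64, low: UInt64)
    print(product.high, product.low)             // 18446744073709551614 1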

Vector and matrix ix should be another first class citizen.

The reasons for a lang vs. just writing in asm are: 1) I don't want to distinguish x86 vs ARM, 2) I want a compiler + optimizer.


Nice. I am not sure we need 128-bit data types; I think 64-bit is enough for now. A matrix collection is really essential; we'll discuss it. So we might ignore multi-dimensional array complexity.

If you can squeeze it in... a magical assignment like :== which declares that the left value is ALWAYS equal to the expression on the right after that point, so that if any part of the expression on the right changes, the left gets updated as a dependency.

I once saw this in a language called metamine and the demo was amazing: mixed imperative and declarative programming interwoven.
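
A very rough approximation in Swift, with a made-up Cart type: a computed property is re-evaluated on every read, so it always reflects its inputs (a true metamine-style :== would also push updates to dependents, which this does not):

    struct Cart {
        var price: Double = 10
        var quantity: Int = 2
        var total: Double { price * Double(quantity) }   // total :== price * quantity
    }

    var cart = Cart()
    print(cart.total)     // 20.0
    cart.quantity = 3
    print(cart.total)     // 30.0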


Features from other languages I wish were Python builtins:

- Haskell algebraic data types + syntactic sugar

- C/Lisp macros


I think Swift already does this, but I appreciate good type inference: writing functions in a Pythonic/dynamic-looking way (i.e. without a bunch of type declarations unless you want them) but still actually statically typed and enforced at compilation/runtime.
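
A small Swift sketch of that: no annotations anywhere, yet everything is statically typed and checked at compile time:

    // Types are inferred from the expressions, not declared.
    let scores = [80, 92, 67]                          // inferred as [Int]
    let average = scores.reduce(0, +) / scores.count   // inferred as Int
    let labels = scores.map { "score: \($0)" }         // inferred as [String]
    print(average, labels)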

REPL, Polish notation, extensible language, recompilation of software functions without stopping the software.

I want nominal typing. I don’t want to see structural typing again.

Strings that are automatically allocated, reference counted, copy on write, counted, can contain binary data, and are null terminated for compatibility. Like the strings in Free Pascal

Be flexible when I need it, but be strict when I need it.

TypeScript is currently the best language, syntax-wise, that fulfills that on the dynamic side.

Swift also has a decent DevX and is close to that on the "static" side of the universe.

Sorry to disappoint, but there are much more important problems begging for a solution in infra, package management, transpilation and other domains in all popular languages today.

All the syntax sugar and features are already good enough for your next Google or Facebook.


* functions as first class citizens

* procedures as first class citizens

* lexical scope

* strongly typed

* single character syntax and operators

* inheritance and poly-instantiation as a feature of language configuration, but removed from language instantiation

* event orientation via callbacks. many developers don’t like callbacks but they provide the most flexible and clearly identifiable flow control path (see the sketch after this list)

* single string format with interpolation
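
A tiny Swift sketch of the callback point above; Button/onTap are made-up names:

    // The event handler is an ordinary closure, and the flow of control is
    // visible at the call site.
    final class Button {
        var onTap: (() -> Void)?
        func simulateTap() { onTap?() }
    }

    let button = Button()
    button.onTap = { print("tapped") }
    button.simulateTap()   // prints "tapped"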


A vast open source community. If you want to go fast, go alone. If you want to go far, go together.

If you want to go deep?

you send bruce willis and ben affleck to drill a meteor

I'm going to save you time and describe the optimal programming language anyone actually wants, no matter what they say:

People want to be able to write either Python or JavaScript (i.e. the 2 most widely used languages), and have a compiler with a language model (doesn't have to be large) on the back end that spits out optimal assembly code, or IR code for LLVM.

It's already possible to do this with LLMs directly from the source code (although converting to C usually yields better results than going direct to assembly), but these models are overkill and slow for real compilation work. The actual compiler just needs a specifically trained model that reads in bytecode (or the output of the lexer) and does the conversion; it should be much smaller in size due to having a much smaller token space.

Not only do you get super easy adoption from not having to learn a new language, you also get the advantage of all the libraries in PyPI/npm that can easily be converted to optimal native code.

If you manage to get this working, and make it modular, its widespread use will inevitably result in the community copying this for other languages. Then you can just write in any language you want, and have it all be fast in the end.

And, with transfer learning, the compiler will only get better. For example, it will start to recognize parallel-processing work that it can offload to the GPU or speed up with AVX instructions. It can also automatically make things memory safe without the user having to manually specify it.


I find it dubious that an ML model would outperform existing compilers (AOT or JIT) for Python and JS, both of which exist and have many engineer years invested in their design and testing.

I find it even more dubious that someone would want something that could hallucinate generating machine code. The difficulty of optimizing compiler passes is not in writing code that appears to be "better" or "faster" but guaranteeing that it is correct in all possible contexts.


This would be a much different training task than LLMs. The reference to it being possible with large LLMs is just a proof that it can be done.

The reason it's different is that you are working with a finite set of token sequences, and you will be training the model on every value of that set, because it's fairly small. So hallucination won't be a problem.

Even without ML, it's a lengthy but P-hard task to really build a Python-to-C translator. Once you unroll things like classes, list comprehensions, generators, etc., you end up with basically the same rough structure of code minus memory allocation. And for the latter, it's a process of semantic analysis to figure out how to allocate memory, very deterministic. Then you have your C compiler code as it exists. Put the two together, and you basically have a much faster Python without any dynamic memory handling.

The advantage of doing it through ML is that once you do the initial setup of the training set, and set up the pipeline to train the compiler, integrating any pattern recognition into the compiler would be trivial.


> Once you unroll things like classes, list comprehensions, generators, e.t.c, you end up with basically the same rough structure of code minus memory allocation.

No, you don't, and that's why there are many engineer years invested into designing AoT and JIT compilers for JS and Python.

If you write C like Python you get Python but slower.

> The advantage of doing it through ML is that once you do the initial setup of the training set, and set up the pipeline to train the compiler, to integrate any pattern recognition into the compiler would be very trivial.

Except this has already been done, so what advantage does ML bring? Other than doing it again, but worse, and possibly incorrectly?


Confused by this approach, people want to write in interpreted languages and have it compiled with an LLM?

How would you do things like dynamic code execution or reflection? Lots of properties are stripped as part of the compilation that you wouldn't be able to refer back to.

Are you just saying write Python -> interpret it -> compile it -> convert to assembly? Because I believe that already exists, but it is difficult to do that all the time because of the compile step and having to convert to static typing.


>dynamic code execution

Run the code through actual Python or Node.js. Once you are happy with the result, compile it to native.

>reflection.

Reflection can be "unrolled" to static values during compilation.

>Are you just saying write python -> interpret it -> compile it -> convert to assembly? Because I believe that already exists,

It exists in the sense that you still have all the Python interpreter code for dynamic typing baked into the executable. This would remove all of that.


The same way C# used to do it. C# provided dynamic code generation at both the byte-code level and via AST/lambda implementations, and even provided an interactive C# "interpreter" that actually used dynamic code generation under the covers. All of which died with .NET Core. I rather suspect that Microsoft decided that dynamic code generation was far too useful for writing cloaked viruses, and not quite generally useful enough to justify the effort.

You'd have to generate reflection data at compile time. And LLVM supports dynamic code generation, so that's not a problem either.

Not really sure why anyone would want to do an interpreted language though.


Expression Trees and IQueryable<T> compilation did not die and remain fully supported features. For example, EF Core uses them for query compilation. 'dynamic' did not die either, even though it should not be used because there are usually better constructs for this.

Yes, it definitely saves a lot of time & effort. Great :)

n:m scheduled green threads in the style of goroutines or erlang/elixir processes. await is an abomination.

Make it easy to use multiple cores without forcing the user to think about it constantly.


I like the C-style for loop syntax. The Swift creator's decision to remove it doesn't make sense to me.

I've found that a set of features is not all that important, and in fact leads to languages like C++ or Java that are actively unpleasant.

Better to have some guiding principles or philosophy, and arrive at C, Lisp, APL, Lua, SML or Haskell.


Is

  a == (10 || 20 || 30)
really better than

  a in (10, 20, 30)
It seems the first is just ambiguous, and longer.

Wow great, yes good approach. Thanks!

The second is Python :)


