It'd be a fun exercise to write a tiny Forth in machine code (sans assembler) and use it to write enough of a C compiler to build tcc, or something along those lines. From there I think you can chain old (but accessible) gcc versions up to modern gcc.
I lived without X11/Wayland for several years. (These days, I live in GUI Emacs with EXWM instead, which has a similar feeling.)
Some tools I had luck with:
- mpv is your friend - it can play audio, video, and display images directly to the framebuffer without X11. It also makes a pretty decent PDF reader combined with ghostscript to render each page to a PNG.
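The PDF trick can be sketched as a pair of commands. This is just one way to wire it up (file names and the 150 dpi resolution are placeholders; the flags assume a reasonably recent gs and mpv on a Linux console):

```shell
# Render page 1 of a PDF to a PNG with ghostscript.
gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -r150 \
   -dFirstPage=1 -dLastPage=1 -sOutputFile=page-001.png document.pdf

# Display it straight on the framebuffer via mpv's DRM output,
# keeping the image up until you quit.
mpv --vo=drm --image-display-duration=inf page-001.png
```

Bump -dFirstPage/-dLastPage (or use -sOutputFile=page-%03d.png and drop the page limits) to render the whole document and page through it with mpv's playlist keys.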
- The Emacs ecosystem is very useful - there are many packages for interacting with common services, and they typically work in both terminal and GUI modes.
- For web stuff, the best solution I found was to push as much as possible into RSS feeds and/or IRC, which have good terminal clients. Things like rss-bridge and bitlbee are great here. When I needed a browser, I typically used Elinks but was never really satisfied with it. I believe if you need decent JS support you're out of luck.
"We are about to study the idea of a computational process. Computational processes are abstract beings that inhabit computers. As they evolve, processes manipulate other abstract things called data. The evolution of a process is directed by a pattern of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells."
In any language with multiple namespaces, anything inhabiting a non-primary namespace (i.e., not the matrix containing normal variables) feels second class. Emphasis on "feels", and hence "awkward" rather than "technically limiting".
Additionally, it makes the transition to lambda calculus more difficult to grok, which somewhat hinders understanding. It's pretty trivial to make some simple rewrite rules from (most of) Scheme to lambda calculus: it's a two-hour project at most. Doing this for CL is much less intuitive/elegant, possibly making it more difficult for those with that sort of theoretical background.
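As a toy illustration of what one such rewrite rule looks like (my own sketch, not from any real Scheme compiler): the classic `let`-to-`lambda` rule, over S-expressions modeled as nested Python lists of strings.

```python
# Rewrite rule: (let ((x e) ...) body)  =>  ((lambda (x ...) body) e ...)

def expand_let(expr):
    """Recursively rewrite `let` forms into lambda applications."""
    if not isinstance(expr, list):
        return expr                      # atoms pass through unchanged
    if expr and expr[0] == "let":
        bindings, body = expr[1], expand_let(expr[2])
        params = [name for name, _ in bindings]
        args = [expand_let(val) for _, val in bindings]
        return [["lambda", params, body]] + args
    return [expand_let(e) for e in expr]  # rewrite subforms

print(expand_let(["let", [["x", "1"]], ["+", "x", "x"]]))
# → [['lambda', ['x'], ['+', 'x', 'x']], '1']
```

Most of the other Scheme special forms (`cond`, `and`, `or`, named `let`, ...) fall to equally small rules, which is the sense in which the two-hour estimate is plausible.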
It is expressly not the goal of CL to be all things to all people, and yet to be many things to many people. Scheme has a kind of fixed set of things it definitely wants to be and sacrifices others; it just does so in a different shape, so that the particular examples you pick are easy.
One way I sometimes conceive of it is that in any given language there are a certain number of small expressions and a certain number of large ones. Differences in semantics don't make things non-computable (which is why Turing equivalence is boring), but they change which expressions are easily reachable. There are certain things Scheme wants to be able to say in not many characters, and CL wants different ones. Neither is a flawed design; they satisfy different needs. It's possible to dive into either and be fine.

As others have pointed out here, it's not as big a deal in practice as it seems in theory. What matters in practice is to have an intelligible and usable design, which both languages have. But to assume that the optimal way to say something in one language should stay constant even as you change the syntax and semantics of the language is to not understand why you would want to change the syntax and semantics in the first place.
In Lisp-1 dialects such as Scheme, macros are not in the "matrix containing normal variables".
You cannot say with a straight face that "all elements of a form are evaluated equally and then the rest of the values are applied as args to the first value", because the counterexample (let ((a 42)) a) doesn't work that way.
The Lisp-1 has to treat the leftmost position specially to determine whether let is a macro to be expanded or a special operator. That will not happen in a form like (list 3 let 4).
In the TXR Lisp dialect (which provides Lisp-1 and Lisp-2 style evaluation), I fix this. In the Lisp-1 style forms, macros are not allowed. So [let ((a 3)) a] is a priori nonsense. The let symbol's position is evaluated exactly like the other positions, without being considered a macro (other than a symbol macro, which all the other positions may be).
The combination of Lisp-2 and Lisp-1 in one dialect let me have a cleaner, purer Lisp-1 in which that half-truth about all positions of a Lisp-1 form being equally evaluated is literally true, always.
Lisp-2 for macros and special ops, Lisp-1 for HOF pushing: beautiful. (list list) works, no funcall anywhere, and even if let is a variable that holds a function, [let 42] calls the damn thing:
1> (let ((let (lambda (n) n))) [let 3])
1
The let operator is not shadowed:
2> (let ((let (lambda (n) n)))
     (let ((a 'b))
       [let a])) ;; let var visible across inner let
b
Basically, as far as I'm concerned, this whole Lisp-1 versus Lisp-2 squabbling is an obsolete debate and solved problem (by me).
There are reasons to think second class isn't always bad. Function objects aren't second class, but the privileged position of the function namespace means you can watch what's being stored there at storage time, which happens infrequently, and can jump to things faster, which happens comparatively more frequently.
But also, there was a very interesting proposal to ISO, which did not survive, in which variable numbers of arguments were handled by an alternate namespace with very specific operations that were understandable to a compiler. You could promote them to the regular namespace if you needed to, but the compiler could do nice things with the stack if you kept them in their more limited arena, where it could figure out what you meant to do with them.

That proposal got voted down, and I was one of those who didn't like it, but I came to think it was less of a bad idea than I had thought once I saw some of the confusions that come up with managing rest lists on the stack in CL, which are very hard to manipulate: it's hard to know for sure when they need copying and when not. First-class status implies that there's probably a halting problem in the most general case of a compiler trying to analyze what you're doing with a thing. Compilers can often recognize enough idioms that this doesn't come up in practice, but second-class spaces can nudge you toward doing things in ways that are better.
I find that site a bit misguided. Any amount of CSS might still break in some browsers, w3m for example. It seems more sensible to have pure HTML sites with client-side styling within the browser.
I have two pieces of advice. First, expand your horizons. Yes, Windows jumping games made with a game creator are a great start, but there is so much more. At 12, when I started off, I was writing very similar games, albeit using Python and Pygame rather than JS. Your route is equally valid, with its own unique challenges, and hey, in the end you get the same result. From here, though, the paths begin to branch. While the Windows world you've embraced will lead you to Visual Basic and eventually Visual C++ and C#, you could take another path. At 11, I installed Ubuntu Linux for the first time; at 14, I switched to Arch. This was probably a bad decision, given that I knew nothing about the OS and had to essentially re-learn everything. You might be intimidated. Don't be. Learning your way around Unix now will make a world of difference later. By 16, I had written my first compiler, a basic bare-metal assembly kernel, and an interpreted language with a decent optimizing compiler and VM. I am still not an amazing programmer, and I still have a lot to learn. Don't doubt yourself because of your age.

My second piece of advice ties into the first: don't try to use your age to gain an advantage. Not because it's unfair, but because you'll get more personal validation when people praise you for the quality of your work rather than for how young you are. It's the difference between making a good program and making a good program "for a teenager". Also, it makes it far easier to find jobs.
Source: My own experiences as the 16-year-old (breaking my own rule here for the first time) author of the Solid programming language.
P.S. I'm not trying to be arrogant. I just don't want you to repeat my mistakes. Don't be afraid to do things that seem difficult: with research and a little elbow grease, you can accomplish anything, and nothing anyone says can take that away from you.
The executable is 83KB without debug symbols (93KB with -g; the default Makefile includes them), less than half the size of Lua. The shared library is a bit larger, at 102KB.
Lua's whole table-metatable system always felt awkward to me, what with the magic names for operators. It's really not so bad; I just didn't see any reason to spend time learning something that felt awkward and archaic to me when I could make something that seemed a bit cleaner.
It's a nice little language, and I dislike Lua's 1-origin indexing a lot (no real problem with its object system otherwise).
However, despite weighing some 200K compiled, Lua does give you a lot of things in return for that dirtier object system and indexing origin (and whatever else you dislike).
Some of those things you have already planned, some you'll find you need only with enough usage, and others may be outside the scope of Solid. A partial list:
0. A reasonably efficient GC
1. A very fast portable interpreter - faster than most other portable interpreters (e.g. Ruby's, Python's) with comparable dynamic behaviour
2. A crazy-fast JIT compiler (LuaJIT2), fully compatible with that interpreter, that rivals compiled and optimized static languages and is available for many common architectures (x86, AMD64, various PPC, various ARM)
3. Useful built-in data types: a combined dictionary/array ("table"), strings, floating-point math, and user-definable C structures
4. A vast ecosystem, different in nature than Python's or Ruby's (geared towards embedding rather than direct use), but definitely there and definitely useful.
Once you implement the infrastructure required for these, you'll find out that, magically, you're not much smaller than Lua after all (if at all), and that some things just cannot be matched (JITting, unless you're one of the ten or so people who made PyPy happen, or you are Mike Pall).
Still, it's a nice little language, and seems quite well written (though the choice of vm.regs[255] for a return value is likely to bite you at some point - make a macro for that, at the very least, to guard you against a typo of 244 or 266 or 2555 or any other random number).
And if you are looking for inspiration for some features, another interesting and practical little language is Aardappel's Lobster: https://github.com/aardappel/lobster ; I really like the fresh approaches that he takes.
The VM, the AST parser, and the Bison/Flex frontend are built in a layered manner: you could (with the appropriate Makefile) build only the VM, the VM+the AST parser, or all three. I don't currently have Makefile options for this, but it would be pretty trivial to add.
How long did it take to write this once you had all the ideas? I looked at the GitHub history, but it only goes back 5 months, and there was already a significant amount written by then, so I couldn't tell how far back it went.