One can appreciate striving for simplicity (a programming language that can be taught and explained with pen and paper), but one must also consider that computers are meta-devices.
Before computers, we could write things only on paper, either by hand or with a typewriter. So, naturally, when computers came about, the way of thinking about programming was very text-driven, with an emphasis on what a typewriter could represent.
But then, code could be written directly on computers, opening up more typesetting possibilities, since keyboards were no longer bound by the mechanical limitations of typewriters. You could add keys and combinations to your heart's desire, and they would be natively digital and unlimited.
Now, with graphics, both 2D and 3D, and a myriad of other HIDs, shouldn't we try to make another cognitive jump?
It's very strange to see handwriting lumped in with typewriting and described as limited relative to screens! Iverson notation was a 2D format (both in handwriting and in typeset publications), making use of superscripts, subscripts, and vertical stacking, like mathematics. It was linearized to allow for computer execution, but the designers described this as making the language more general rather than less:
> The practical objective of linearizing the typography also led to increased uniformity and generality. It led to the present bracketed form of indexing, which removes the rank limitation on arrays imposed by use of superscripts and subscripts.
I think this is more true than they realized at that time. The paper describes the outer product, which in Iverson notation was written as a function with a superscript ∘ and in APL became ∘. followed by the function. In both cases only primitive functions were allowed, that is, single glyphs. However, APL's notation easily extends to any function used in an outer product, no matter how long. But Iverson notation would have you write it in the lower half of the line, which would quickly start to look bad.
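To make that concrete, here is a small sketch in the style of a modern APL session (the function name plusTwice is just an invented example): the same ∘. operator applies unchanged whether its operand is a single-glyph primitive or a longer, user-defined function.

          1 2 3 ∘.× 10 20            ⍝ outer product with a primitive function
    10 20
    20 40
    30 60
          plusTwice ← {⍺ + 2×⍵}      ⍝ a user-defined dyadic function with a long name
          1 2 3 ∘.plusTwice 10 20    ⍝ the same operator, unchanged, applied to it
    21 41
    22 42
    23 43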
I've long been fascinated by this question, probably spurred on by having read Hermann Hesse's _The Glass Bead Game_ (originally published as _Magister Ludi_) when I was impressionably young.
The problem, of course, is: "What does an algorithm look like?"
Depicting one usually leads into flowchart territory, and interestingly, efforts at that style of programming often strive for simplicity, e.g., the straight-down preference of Raptor or Drakon. Programs in systems which do not enforce that often become a visual metaphor for "spaghetti code".
As a person who uses https://www.blockscad3d.com/editor/ and https://github.com/derkork/openscad-graph-editor a fair bit, and who needs to get Ryven up and running again (or to fix the OpenSCAD layer in his current project, or to try https://www.nodebox.net/ again), this is something I'd really like to see someone succeed with. The most successful exemplar so far would be Scratch, which I've never seen described as innovatively expressive. I'd love to see such a tool which could make a traditional graphical application.
All those things can be specified in text. Fortress was a language that had the facility to use mathematical notation. Turned out to be not so compelling iirc.
We might also consider letting the language semantics invade the editor. Hazel integrates its parser into the text editor, so rather than getting a red squiggly when you break a rule, you're simply unable to break the rules. It represents code you haven't yet written as a "typed hole", so instead of
1 +
The + would cause the following to appear
1 + <int>
where <int> is the typed hole, reminding you to put an expression there which is an integer. It's perhaps a smaller leap than using shapes and space, but it's one I'd like to feel out a bit sometime.
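For comparison, GHC Haskell has a similar notion of typed holes, at compile time rather than live in the editor: leaving an underscore where an expression belongs makes the compiler report the type expected at that spot. A minimal sketch, not Hazel itself but the same idea:

    -- answer.hs: the '_' marks a hole; GHC reports: Found hole: _ :: Int
    answer :: Int
    answer = 1 + _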
We do have syntax highlighting these days. And our editors work like hypertext, where I can go to definitions, find usages, get inheritance hierarchies, etc. Quite a ways from your suggestion, but also a few steps removed from a typewriter.
I think any such leap would have to be a really big one to catch on though, due to inertia. Colorforth is not exactly popular, and I can't think of any other examples.
Can sketchpad do this? (relatively simple, but showing what an LLM can do with a sketch with very little prompting, full transcript of further typing included)
Yes, but there don't seem to be any current implementations which are more than academic exercises (I'd love to be wrong about that and be pointed to something which I could try).
The reason for this is that we've been trying to draw code by hand since 1963 and it doesn't really work out well except in limited domains. Maybe it'll work better with LLMs tho, I guess we'll see.
>Parametric CAD, in my view, is a perfect example of "visual programming": you have variables, iteration/patterning reducing repetition, composability of objects/sketches again reducing repetition, and modularity of design through a hierarchy of assemblies. The alignment between programming principles and CAD modelling principles, while not immediately obvious, is very much there. An elegantly designed CAD model is just as beautiful (in its construction) as elegantly written code.
Obviously, it is fitting that a visual product is amenable to a visual approach/solution, so my question is: what general-purpose programming environment is most like a parametric CAD system?
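To make the quoted analogy concrete, here is a minimal OpenSCAD-style sketch (the module and parameter names are invented for illustration): variables drive the model, a for loop gives patterning, and a module gives composability and reuse, exactly the programming principles listed above.

    // parameters (variables) that drive the whole model
    plate_w = 60;   // plate width, mm
    plate_t = 5;    // plate thickness, mm
    hole_d  = 4;    // hole diameter, mm

    // a reusable module: composability and modularity, like a function
    module drilled_plate(w, t, d, n) {
        difference() {
            cube([w, w, t]);
            // iteration/patterning: a row of n holes
            for (i = [1 : n])
                translate([i * w / (n + 1), w / 2, -1])
                    cylinder(h = t + 2, d = d, $fn = 32);
        }
    }

    drilled_plate(plate_w, plate_t, hole_d, 4);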
Yeah, I think CAD is a perfect domain for this kind of thing, and IIRC that was one of the original target applications for Sketchpad: Sutherland demonstrated constraint-based bridge design in which the constraints were sketched in.
I agree I don't think LLMs really change the equation much.
For another look at where drawing-based programming has gone, see Dynamicland by Bret Victor. No LLMs required.
Says who? Not all devices can have the same level of repairability by laypeople. What if I complained that today's CPUs are too miniaturized and that in my time I could swap the individual vacuum tubes in case something went wrong?
If CPU failure were a leading cause of device obsolescence, your argument would make sense. Next, the EU or other regulators should explicitly regulate software mechanisms that prevent owners of a device from installing an alternate OS, so that open-source or aftermarket OS developers can support devices that mainstream vendors no longer want to support.
Why is "normal" in quotes? Do you mean visible light vs. filtered monochrome with false-color output, or infrared/radio/X-ray like some other telescopes use? Would that be the abnormal you are referring to? The Apollo images were taken with Hasselblad film cameras that were "normal" cameras[0].
I am curious to see how being namedropped by NASA affects the sales of these cameras. It's probably not as much as I am imagining, but still not an insignificant number. Probably less than the Nutella that accidentally flew off the shelf, live on stream, straight across the shot.
When people talk about software or computers being "fun" in the past, it reminds me of how advertisements for children's food talk about how their cereal brings "fun" to breakfast.
What does that even mean? It seems like empty words to me, from people too accustomed to TV commercials.
There is the trivial meaning, where the subject of the sentence is apparently whiling away time, achieving nothing of note except perhaps pretending to be in an imaginary land.
Then there is another sense, one that includes the thrill of experimentation, the disappointment of failure, the doggedness of persistence, and the satisfaction of victory and success when the puzzle is complete, understood, and the whole thing is working as desired or expected. This is why we call programming "fun" and if you are having fun doing it for yourself, you should perhaps be very careful where you end up doing it for work, if you do.
You could do that on computers of the 1990s and still have the feeling of a broad system, but one which was not unfathomably deep. Those systems could be completely understood by one human brain, and striving to be able to do that was indeed enormously engaging. But people who waxed lyrical about such things were often seen as weirdos, and humans generally don't like that, so instead they reach for a word that has universal meaning: "fun". Of course, words that have universal meaning, but for which everyone has their own interpretation (though they may not be aware of it), ironically tend to lose all shared meaning in the strictest sense.
What's sometimes overlooked in the Smalltalk story is that Alan Kay was leading the "Learning Research Group", which is why he refers to educational theorists like Jean Piaget. In some of Alan's talks he goes into some detail showing how children can learn about calculus by watching and visualizing the acceleration of a ball as it falls and bounces. This sort of thing is a serious kind of fun because it actually has a positive benefit, much like sport does for many people.
On the other hand, the use of the word in "making breakfast fun for children", in the advertising sense, is a disgusting perversion, and is in no way reasonably comparable to the idea of "computers being fun in the past".
Now, if you'll excuse me, I'm going to have my breakfast consisting of dippy eggs and soldiers, and marvel at the viscosity.