Please stop bickering about Verilog vs VHDL: if you use NBAs, the scheduler works exactly the same in modern-day simulators. There is no crown jewel in VHDL anymore. Also, the type system is annoying. It's just in your way, not helping at all.
You're not wrong, but blocking assignments (and their VHDL equivalent, variables) are useful as local variables within a process/always block, for instance to factor out common sub-expressions instead of repeating them. Using only non-blocking assignments everywhere would lead to uglier code.
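A minimal Python model of the two-phase scheduling both comments describe (an illustrative sketch with made-up names like Signal and nba, not any real simulator's API). Non-blocking assignments are staged and committed at the end of the delta, while an ordinary local plays the role of a blocking assignment / VHDL variable, factoring a common sub-expression:

    # Two-phase scheduler model: reads see old values, NBAs commit at once.
    class Signal:
        def __init__(self, value=0):
            self.cur = value      # value visible to reads this delta
            self.nxt = None       # staged non-blocking assignment

        def nba(self, value):     # models "sig <= value"
            self.nxt = value

    def commit(signals):
        # End of delta: all staged NBAs become visible simultaneously.
        for s in signals:
            if s.nxt is not None:
                s.cur, s.nxt = s.nxt, None

    a, b, y, z = Signal(3), Signal(4), Signal(0), Signal(0)

    def clocked_process():
        s = a.cur + b.cur         # blocking-style local: computed once
        y.nba(s * 2)              # both NBAs read the same old values,
        z.nba(s * s)              # regardless of statement order

    clocked_process()
    commit([a, b, y, z])
    print(y.cur, z.cur)           # 14 49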
Both SystemVerilog and VHDL have AMS extensions for simulating analog circuits. They work pretty well, but you also pay a pretty penny for the simulator licenses that support them.
Don't know if trolling. An SR latch can be built from two NANDs or two NORs; there are plenty of *digital* circuits with that kind of feedback, and yes, there are rare cases where you construct them out of logic rather than using a library cell. A pulse circuit is AND(not(not(not(a))), a): also rarely used, but used nonetheless. To properly model/simulate these you would need delta cycles (see the sketch below).
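To illustrate, here is a toy delta-cycle evaluation of that pulse circuit in Python (my own sketch, not any simulator's API). Every gate evaluates against the previous delta's values and all outputs update at once, so the output glitches high for exactly three deltas after the input rises:

    # Pulse circuit y = AND(not(not(not(a))), a), evaluated delta by delta.
    a = 0
    n1, n2, n3 = 1, 0, 1    # inverter chain, settled for a = 0
    y = 0

    def step():
        # One delta cycle: read old values, update all outputs at once.
        global n1, n2, n3, y
        n1, n2, n3, y = (1 - a), (1 - n1), (1 - n2), (n3 & a)

    a = 1                   # rising edge on the input
    for delta in range(5):
        step()
        print(f"delta {delta}: y = {y}")
    # Prints y = 1 for deltas 0..2, then 0: a one-shot pulse that a
    # plain zero-delay evaluation (y = a AND not(a) = 0) would never show.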
I'm not sure if you are trolling. 99.999% of digital design is "if rising edge clk, new_state <= fn(old_state, input)", with an (a)sync reset. The language should make that the default and simple to do, and anything out of the ordinary hard. Right now it's more the other way around.
I always applaud homebrew CPU designs, but after doing so many myself I would reaaaaly advise staying away from DIP chips/breadboards/wire-wrap and any attempt to put it into the real physical world. Taking a build from Logisim/Verilog into real-world chips sucks away all the fun of CPU design: suddenly you have to deal with invisible issues like timing, a glitchy half-dead chip, a bad wire connection, etc. These are not challenges, just mundane, dull work.
The only exception to the "stay in the sim" rule is if you want to make an "art statement", e.g. BMOW (or my relay CPU https://github.com/artemonster/relay-cpu/blob/main/images/fr... /shamelessplug)
I'm totally with you personally, but sometimes doing the actually hard part is fun. Type 2 fun.
Long ago I took a CPU architecture class where we implemented designs in Verilog as a final project. Apparently people who took the class in the late 90s (before my time) could actually tape out their designs and pay a few hundred dollars to get fabbed chips as part of a multi-project wafer. I was always curious whether those chips actually worked, or just looked pretty.
My advice would be to consider the possibility, not necessarily to stay out of the physical world. For some, those physical details may be the fun part. Some hate verilog. Some want to put it on an FPGA, some don't. I, personally, moved away from FPGAs due to bad documentation (looking at you, Lattice).
An alternative to Verilog is RTL simulation in a higher-level language, or even higher-level simulation.
Just remember that you can't define what is "fun".
I'd take it further and say don't even design your own ISA, because it's super rewarding to watch your custom-designed CPU run real software from an actual compiler (all you need is RV32I minus the CSRs).
Couldn't disagree more. To the extent that building a homebrew CPU is interesting at all, for me it's _only_ the struggle of making it actually work, despite all the real-world hiccups, that makes it interesting. Designing it in the simulator is "easy".
I think the next step will be an isolated, invite-only version of the internet where you have to be physically present with your invitee to give them access. There will be a beautiful navigation widget where you can access a unified "addon" to any page: a community-moderated comment section, a version history of that page, backlinks, a carefully curated "related" section (so that you can continue browsing beautiful human-written content on 1910s-era steam locomotives, similar to 90s-era webrings), a donate button so that you can support the author, and much more! Oh, the dream.
Optional decentralized hosting, a unified cryptocurrency for payment tokens, a single open LLM as a summarization and search-indexing tool, specialized toolkits for journals and social networks (LiveJournal, early Twitter, early FB). Most importantly: you can post anonymously where it's allowed (there could be areas where anonymity is disallowed entirely, like a public square), but your account takes the punishment, so no edgy shitposting behind throwaways.
I find it interesting that we haven't invented a democratic way of policing a rule system. HN is dang: he is basically the dictator and guardian of its rules. If you replace him with some typical Reddit mod, HN dies. If you spread the role across democratically elected mods via a karma system, it falls apart just as quickly as Stack Overflow did, so HN dies too.
Very hot and edgy take: theoretical CS is vastly overrated and useless. As someone who actively studied the field, worked on contemporary CPU architectures, and still does some casual PL research: aside from VERY FEW contributions from theoretical CS on graphs/algorithms, it has had little to zero impact on practical developments in the overall field since the 80s. All the modern-day Dijkstras produce slop research about weaving dynamic context into Java programs, converting funding into garbage papers. The deeper CS research is totally lost in type gibberish or nonsense formalisms. IMO research and science overall are in a deep crisis, and I can clearly see it from the CS perspective.
Well, I think there is something to it. Computers were at some point newly invented, so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real-life mines, at some point they get depleted, and then the research becomes much less interesting unless you happen to be interested in niche topics. But of course the paper mill needs to keep running, and so does the production of PhDs.
You made a preposterous statement, got called out, and are now making excuses.
Anybody who claims to have studied "Theoretical Computer Science" can/will never make the statements that you did (and in a thread about Niklaus Wirth's achievements, no less; he was one of the most "practical" of "theoretical computer scientists"!).
I assume that you are talking about modern "theoretical CS", because among the "theoretical CS" papers from the fifties, sixties, and seventies, and even some more recent ones, I have found a lot that remains very valuable. And I have seen a lot of modern programmers who either make avoidable mistakes or implement very suboptimal solutions, just because they are no longer aware of ancient research results that were well known in the past.
I especially hate those who attempt to design new programming languages today but demonstrate a complete lack of awareness of the history of programming languages, introducing into their languages a lot of design errors that had been discussed decades ago and for which good solutions had been found at the time. Those solutions were implemented in languages that never reached the popularity of C and its descendants, so only few know about them today.
If you had really followed the research in type systems and seen how it *factually* intersects with practical reality, you wouldn't joke about it. What they do in "research" is bizarre nonsense, and it's the sane implementations (only loosely grounded in formalisms) that actually get used.
I do, and I hope that one day stuff like dependent types and formal proofs are everyday tools, alongside our AI masters, which also don't use any learnings from scientific research.
Every clueless person who suggests that we move entirely to GPUs has zero idea how things work and is basically suggesting we use Lambos to plow fields and tractors to race in NASCAR.
In past times, when the tsarist elite could be executed like cattle and French kings knew their heads could fly off under the guillotine, the elites were *behaving*. There was an unspoken social contract: you do shit for us, and we let you be yourselves, whatever you do. Nowadays we have wonderful laws, and nobody is responsible for anything, nobody is prosecuted, just fucking nothing. Time for pitchforks?
Don't make the mistake of idealizing the past. It took a decade of terrible winters and famine for the head of a single French king to be parted from his body. And it took one century more for a lasting Republic to be born.
We made the Norman nobles into CEOs, gave them protection, and removed responsibility for all of their actions, but let them continue to see themselves purely as 'value extractors', extracting from workers/markets/economies and doing nothing else.
At least Norman lords had to nominally provide housing on their holdings and show some care that their serfs survived. CEOs don't even do that (they literally build business models on lowest-wage, zero-hour jobs that their labor can't actually live on, or move labor from one desperate overseas country to the next).
Second, asymmetric multimethods give something up: symmetry is a desirable property -- it's more faithful to mathematics, for instance. There's a priori no reason to privilege the first argument over the second.
So why do I think they are promising?
1. You're not giving up that much. These are still real multimethods. The papers above show how they can still easily express things like multiplication of a band-diagonal matrix by a sparse matrix. The first paper (which focuses purely on binary operators) points out that it can handle set membership for arbitrary elements and sets.
2. Fidelity to mathematics is a fine thing, but it behooves us to remember we are designing a programming language. Programmers are already familiar with the notion that the receiver is special -- we even have a nice notation, UFCS, which makes this idea clear. (My language will certainly have UFCS.) So you're not asking the programmer to make a big conceptual leap to understand the mechanics of asymmetric multimethods.
3. The type checking of asymmetric multimethods is vastly simpler than that of symmetric multimethods. Your algorithm is essentially a sort among the various candidate multimethod instances. For symmetric multimethods, choosing which candidate "wins" requires PhD-level techniques, and the algorithms can explode exponentially with the arity of the function. Not so with asymmetric multimethods: a "winner" can be determined argument by argument, from left to right. It's literally a lexicographic sort, with each step being totally trivial -- which multimethod has a more specific argument at that position (having eliminated all the candidates ruled out by the prior argument positions). See the sketch after this list. So type checking now has two desirable properties. First, it follows a design principle espoused by Bjarne Stroustrup (my personal language-designer "hero"): the compiler implementation should use well-known, straightforward techniques. (This is listed as a reason for choosing a nominal type system in The Design and Evolution of C++ -- an excellent and depressing book to read. [Because anything you thought of, Bjarne already thought of in the 80s and 90s.]) Second, this algorithm has no polynomial or exponential explosion: it's fast as hell.
4. Aside from being faster and easier to implement, the asymmetry also "settles" ambiguities which would exist if you adopted symmetric multimethods. This is a real problem in languages, like Julia, with symmetric multimethods. The implementers of that language resort to heuristics, both to avoid undesired ambiguities and to avoid explosions in compile times. I anticipate that library implementers will be able to leverage this facility for disambiguation, in a manner similar to (but not quite the same as) how C++ distinguishes between forward and random-access iterators using empty marker types as the last argument. So while technically a disadvantage, I think it will actually be a useful device -- precisely because the type-checking mechanism is so predictable.
5. This predictability also makes the programmer's job easier: they can form an intuition of which candidate method will be selected much more readily with asymmetric multimethods than with symmetric ones. You already know the trick the compiler is using: it's just double dispatch, the trick used for "hit tests" of shapes against each other. Only here it can be extended to more than two arguments, and of course the compiler writes the overloads for you. (And it won't actually write overloads; it will do what I said above: form a lexicographic sort over the set of multimethods and lower it into a set of tables which can be traversed dynamically. When the types are concrete, the compiler can monomorphize: the series of "if arg1 extends Tk" etc. is done in the compiler instead of at runtime. But it's the same data structure.)
6. It's basically impossible to do separate compilation with symmetric multimethods. With asymmetric multimethods, it's trivial. To form an intuition, simply remember that double dispatch can easily be done under separate compilation. Separate compilation is mentioned as a feature in both of the cited papers. This is, in my view, a huge advantage. I admit I haven't quite figured out how generics will fit into this -- at least if you follow C++'s approach, you'll have to give up some aspects of separate compilation. My bet is that this won't matter much; the type checking ought to be so much faster that even when a template needs to be instantiated at a call site, the faster and simpler algorithm will mean the user experience is still very good -- certainly faster than C++ (which uses a symmetric algorithm for type checking of function overloads).
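To make the left-to-right elimination from point 3 concrete, here is a minimal Python sketch (my own illustration with made-up types like Shape/Circle/Rect, not code from the papers). Candidates are filtered argument by argument; at each position, only the ones whose parameter type is most specific for the actual argument survive:

    # Asymmetric multimethod dispatch as a lexicographic elimination.
    class Shape: pass
    class Circle(Shape): pass
    class Rect(Shape): pass

    def specificity(cls, arg_type):
        # Distance up the MRO: 0 = exact match, larger = less specific.
        return arg_type.__mro__.index(cls)

    def dispatch(candidates, args):
        # candidates: list of (param_types, fn), all of the same arity.
        live = [c for c in candidates
                if all(isinstance(a, t) for a, t in zip(args, c[0]))]
        for i, arg in enumerate(args):   # left-to-right elimination
            if len(live) <= 1:
                break
            best = min(specificity(c[0][i], type(arg)) for c in live)
            live = [c for c in live
                    if specificity(c[0][i], type(arg)) == best]
        if not live:
            raise TypeError("no applicable method")
        return live[0][1](*args)

    collide = [
        ((Circle, Shape), lambda a, b: "circle vs anything"),
        ((Circle, Rect),  lambda a, b: "circle vs rect"),
        ((Shape,  Shape), lambda a, b: "generic collision"),
    ]

    print(dispatch(collide, (Circle(), Rect())))    # circle vs rect
    print(dispatch(collide, (Circle(), Circle())))  # circle vs anything
    print(dispatch(collide, (Rect(),   Rect())))    # generic collision

Each step is a trivial min-and-filter, and the first argument always wins ties before the second is even consulted: that's the asymmetry (and the predictability) in action.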
To go a bit more into my "vision" -- the papers were written at a time when object orientation was the dominant paradigm. I'd like to relax this somewhat: instead of classes, there will only be structs. And there won't be instance methods; everything will be a multimethod. So instead of multimethods being "encapsulated" in their classes, they'll be encapsulated in the module in which they're defined. I'll adopt the Python approach where everything is public, so you don't need to worry about accessibility. Together with UFCS, this means there is no "privileging" of the writer of a library. It's not like C++ or Java, where only the writer of the library can leverage the succinct dot notation to access frequently used methods. An extension can import a library, write a multimethod providing new functionality, and that can be used with exactly the same notation as the methods of the library itself. (I always sigh when I see languages that, having made the mistake of distinguishing between free functions and instance methods, "fix" the problem that you can only extend a library from the outside using free functions -- which have a less convenient syntax -- by adding yet another type of function, an "extension function". In my language there are only structs and functions -- it has the same simplicity as Zig and C in this sense, only my functions are multimethods.)
Together with my ideas for how the parser will work, I think this language will offer -- much like Julia -- attractive opportunities to extend libraries -- and compose libraries that weren't designed to work "together".
And yeah, Claude Code and Gemini are going to implement it. Probably in Python first, just for initial testing, and then they'll port it to C++ (or possibly self-host).
Thanks for the elaborate reply; I've seen both papers too. I hold mostly the same views, but I really dislike that there is no clean solution for binary methods, e.g. add(float, int), where the symmetric add(int, float) ends up being boilerplate. Also, I think in the asymmetric case it's hard to handle dispatch when lookup on the first argument fails to produce a method: e.g. when dispatching "collide" on (Asteroid, Ship), if the collide method is found on Ship, how do you bind "this", and where does Asteroid get bound? Anyway, good luck with your experiments!