PC DOS Reimagined (pcjs.org)
115 points by ingve on Dec 23, 2020 | 73 comments


I remember being on a "compiler panel" AMA at a conference in the late 1980s. There were 5 of us on the panel, each representing a different C compiler. I was in the middle. We got asked "does your compiler system work on a floppy-only machine?"

Vendor1: "Yes. We have a special install for it, carefully distribute the files among floppies, and you can do it."

Vendor2: "Yes, we can do it. It has a special install, blah blah blah."

Me: "Yes. It costs $200 extra and comes with a hard disk."

The question never progressed to Vendors 4 and 5, the question never came up again from anybody. That marked the end of the floppy-only era as far as I was concerned :-)

It still took some more years before we could stop distributing the compiler on a stack of floppies and go with a CD.

P.S. The price of a "hard card" (a hard disk on an expansion plug-in card) was $200 at the time.


That's a brilliant answer.

As someone who has worked supporting a commercial compiler, it never ceases to amaze me how companies will pay 6 figures per seat for development tools, and then install them on a 5 year old machine that wasn't fit for the title "workstation" when it was brand new.


> As someone who has worked supporting a commercial compiler, it never ceases to amaze me how companies will pay 6 figures per seat for development tools, and then install them on a 5 year old machine that wasn't fit for the title "workstation" when it was brand new.

Well, because the dev shop buys the dev tools, but general desktop IT buys the computers, with policies that limit the number of different builds in the organization for IT convenience, and distribution policies for new machines based on internal politics (if you are in the chain of command above the head of the unit in charge of the hardware, or politically connected to someone in that position, you and your staff can get newer, better machines, whether or not you have a real business need, even if that means denying someone else who does have a real business need).

Or, at least, I've seen enterprise shops like that.


I wish companies would force developers to use much slower machines than they are normally given, so that they would be more efficient with what they do, since chances are the majority of their users won't be running extremely high-end hardware either.

...and I'm saying this as a developer myself.


I've visited customers where we hit "build" and then wait about 15 minutes for an incremental build of a project.

Every day I was there, they could have bought a faster machine with what they were paying me to wait for builds to complete. They were developing embedded software, so the "dogfooding their customers' machines" argument was nonsensical.


Very good point. I've thought that about Google for years. Look at the Wave fiasco. That would not have happened (at least not for reasons of unusable slowness) if your suggestion had been implemented there. And there are many other examples of their products needing more machine horsepower. Instead, they equip their developers with high-end machines. How dumb. It should at least be a mix of low- and high-end, and it should be compulsory to test products satisfactorily on low-end machines before release.


It still happens, see Android development experience with Studio and build times.


Yes, I saw it tangentially a while back, when I consulted for a startup using Android as one of its stacks. Happy I did not have to work on it, going by what I have read about its too-frequent changes. Like the JavaScript framework du jour :)

https://mobile.twitter.com/vasudevram/status/127075265940270...


Idk, waiting on a task kicks me out of the zone real fast, and as I age it takes longer for me to get back into it.


Building the dmd D compiler with itself takes a few seconds.


I always used an older, slower machine, and that was one of the reasons. I still do; my main machine is about 8 years old.


Not quite. Run your tests on an old machine. I remember testing some software over a slow dial-up line (1990s) and it was possible to see how slowly certain actions were painted on the screen. Being able to see that allowed me to make optimisations I wouldn't otherwise have considered.


I've tried a few times to get into Twitch streaming (as a viewer), though I don't play video games. I've come across a few programmers who've chosen to host streams there. Besides the other reasons that also contribute to the failure of those experiments, I come away thinking about how unusably bad the Twitch website is. It hit me for the first time a few weeks ago why that was.


Using a 4th-generation programming language is what we forced ourselves into. Instead of compiling in minutes, we do one run in 5 hours on a mainframe. And our boss said it is better than the 1960s because of the visual approach.


Tragedy of the commons.

If you do this, your programmers will leave your company. And your competitors will offer them better computers.

You'd need all software companies to sync and do this, or at least a large majority.


I think a good solution is to have a lowest common denominator machine, and rotate developers through using it for a day, maybe once a month.

Windows

Celeron

2-4 GB RAM

5400rpm hard drive

No administrative privileges

Shitty antivirus installed

1080p monitor


Or at least test on lower class machines.


True, that does happen, but at Zortech we discovered that the professional segment of the market was not price sensitive for just that reason - the price of the dev tools was noise compared to the cost of developer time.


> As someone who has worked supporting a commercial compiler, it never ceases to amaze me how companies will pay 6 figures per seat for development tools, and then install them on a 5 year old machine that wasn't fit for the title "workstation" when it was brand new.

Well, now you often see an overreaction to this problem, where web developers work on their JavaScript frontends and Electron applications on the latest, beefiest MacBooks with blazing-fast internet connections and get completely out of touch with what their end users might be using. (This comment was written on an i9, 64 GB RAM MacBook Pro provided by my employer.)


My first hard drive was from around that era. It was 40 MB and $900. Hooked it up to an IIgs. $200 doesn't sound like enough. :-)


Mine, as well. This was the late '80s, and a 30 MB hard drive (a "SupraDrive" for the Amiga) cost around $700 or $800.


I might have the year wrong.


If you want to match the font of the IBM PC technical documentation, it's ITC Garamond, not Times. Very characteristic of the era; Apple also adopted it (in its condensed variant).


This is very cool!

The IBM PC (and clone) hardware was progressing rapidly at the time: from an 8-bit 8088 @ 4.77 MHz with 64-256K RAM to much, much faster 16-bit 80286/80386 systems @ 16 MHz+ with 2-4 MB RAM in just a few short years.

As a consequence of this rapid progress, the software of the era didn't really exist for long enough to get heavily optimized and [with the exception of certain games] never pushed the PC/XT hardware to anywhere near its full capabilities.


Some nitpicks. (Mostly, “bitpicks”, I guess.)

> from an 8-bit 8088

The 8088 was a 16-bit processor choked by having to push data through an 8-bit data bus.

> 16-bit 80286/80386 systems

Depending on exactly what period and systems you mean, this should probably be:

16-bit 8086/80286 (the 8086, of which the 8088 was the 8-bit-bus version, was obviously earlier than the 8088, but it was used later in PC clones)

Or

16-/32-bit 80286/80386 (the 386 was a 32-bit processor.)


Yeah, "was the 8088 an 8-bit or a 16-bit processor?" doesn't have a straightforward answer...

The main speedups from the 8088 to the 80286 were the move from an 8-bit to a 16-bit external data bus, plus the 80286's dedicated address-generation unit, which made calculating operand effective addresses much, much faster.

And the 80386, when running DOS programs in 16-bit real mode, was mostly just a glorified but very fast 80286. Especially the 80386DX with its 32-bit memory bus. It took a while for software like Desqview, EMM386, and Windows 3.1 (in 386 enhanced mode) to take advantage of the 80386's advanced features.

The poor little 8088 had to squeeze both its instruction stream and the data stream through a tiny 8-bit bus, so it's really not that much different from say the Z80 (widely considered an 8-bit CPU) which nonetheless can do (limited) 16-bit operations on some of the internal registers.


> Yeah, "was the 8088 an 8-bit or a 16-bit processor?" doesn't have a straightforward answer...

"Bitness" is usually held to refer to the size of a processor's main register set, so the size of data units it is capable of processing internally most of the time, ignoring special cases. By that common definition the 8088 is definitely a 16-bit CPU. Especially as it was a modification of the 16-bit 8086 design and fully code compatible with it, the modification being that 8 bit data bus which slowed down talking to the outside world.

Similarly, the 386SX is a 32-bit unit. They took the original 386 and modified it in much the same way, the main defining change being a 16-bit data bus that allowed it to be used on much cheaper motherboard designs. The 386DX technically came first, because it was essentially the original 386 line renamed once the SX variants came along.

The Z80 could perform a limited set of 16-bit operations, such as additions on its register pairs, but was definitely an 8-bit unit overall, as the majority of its innards were 8-bit (whereas the majority of the 8088's innards were 16-bit, and the 386's (original/DX, or SX) were 32-bit). If we counted instruction outputs rather than general-purpose register size, the 6502 family would be 9-bit, not 8, as most of its arithmetic instructions output 9 bits (the main 8, plus the carry in the flags register).

The 486 would not be called 80-bit because its floating-point unit has 80-bit registers for intermediate operations, and early-ish Pentiums were still 32-bit when the MMX instructions, which chew on 64-bit data, were added. (In fact the very first Pentiums had 64-bit data buses so they could pump data into the on-die cache twice as fast, to try to keep up with the demands of the fast pipelined internals, but they were still considered 32-bit because said internals were.)

There are some CPU/GPU/other-PU designs where the distinction is rather muddy, but for Intel's main lines and those inheriting from them I'd say their bitness is pretty well defined.


Also, the 286 was the last time we had a truly synchronized, 0-wait-state CPU-RAM combination, with the 16 MHz 286 and 16 MHz RAM.


The 286 was really a weird beast. It was a nice speed uptick from the 8088 but was still a 16-bit system in most respects. (As, truth be told, was the 386 so long as it was still running DOS.)


The 286 was almost entirely still a 16-bit system. It had a 24-bit address space for 16 MB of memory, but then again the 8088 itself already had a 20-bit address space (for 1 MB), and both could only contiguously address 64 KB without segmentation.

It was also famously called "braindead", mostly because switching into 286 Protected Mode to make use of its new features pretty much meant turning off 8088/8086 compatibility (unless you used some horrible tricks).

When the 80386 came along with its real 32-bit register set and address space, and significantly increased compatibility even in Protected Mode through vm86 and a few other additions, it really made a tremendous impact in comparison.

And while DOS was still 16-bit, it would now often run under memory managers (e.g. QEMM, EMM386...) as a vm86 task in Protected Mode, making good use of some of the new 32-bit features.


I think you're being a bit harsh on the 286 with the benefit of hindsight. Remember that it hit the market in 1982.

For it to have finished up in 1982, it likely started development before or just as the original IBM PC hit the market. At the time, there wasn't a huge installed base of can't-afford-to-break 8086 code, or much of a precedent for backwards compatibility on a wholesale platform change for PCs. They may well have figured anyone trying to build a 286 system would be targeting a fully protected-mode OS, and the non-protected mode support was more of a stopgap for compatibility and bootstrap purposes.


Thanks for pointing that out. I should clarify: I'm not the one calling it "brain-dead"; that was famously Bill Gates (it's even noted on the 286's Wikipedia page).

I don't know or remember much about what the 286 was used for (besides IBM PS/2 systems), but I as well wouldn't be surprised if it was more targeted towards things like PBXs and industrial control, where Protected Mode without DOS compatibility made perfect sense...


It feels like there were a lot of missteps for the embedded market which got crushed beneath the juggernaut of PC-compatible x86 chips.

The big one was the 80186/188-- basically an 8086/8088 with a few onboard peripherals (timers, DMA and interrupt controllers) and a few enhanced instructions, but the new goodies were not in a PC-compatible arrangement. You do see them quite often in embedded devices.

The more interesting one was the 80376-- a 386 which was protected-mode only, but it also missed other key features, so it wasn't just "take a 386 OS and change the bootloader."


I'm familiar with both of those. In fact, I even own two 80186 PCs! Siemens PC-Ds, not IBM but DOS compatible. Marvelous machines. They even have MMUs made of custom logic chips, so they could not only run DOS, but also SINIX with full address space separation and supervisor mode (though no paging)...

The non-compatible goodies of the 80186 play a big role in making it not IBM compatible, but other things do as well. In many ways it was a bit better, e.g. high-resolution (but monochrome) text and graphics, and no 640K barrier.

The 80376 is an interesting beast because it got rid of 16-bit support entirely--any 16-bit segment descriptors were invalid.


Err, "custom MMUs made of individual logic chips" is what I meant. The MMUs are custom, the logic chips are just of the 74 variety.


Re. the i386, not exactly - there were numerous "DOS extenders" available, and some programs and games required one or included one on the install disk. DOOM, for example, used DOS/4G.


While the state of the art advanced fast, there was also a great deal of market spread. An early 386 would be several grand, but at the same time there were also floppy-only 8088 machines from about USD 500 on up, and a significant installed base of XTs and clones.

The software evolved far more slowly because of this-- you still had a lot of "It supports an 8088" or "it supports CGA" until the early 1990s.


>> As a consequence of this rapid progress

I still remember all the peer-group pressure we had due to hardware evolution: Bought a fast 486DX2? Well, I bought my Pentium 60. Got RAM? Well, I bought EDO RAM.

Memories... :D


Many other microcomputers of the era used BASIC as their shell; the Apple II, TI-99/4A, and Commodore 64 are all examples of this. So the machine would boot to BASIC, and then you would type something like this (on a Commodore 64) at the BASIC prompt:

LOAD "FILE",8

Computers with MS-DOS and CP/M were different in that you would load a proper shell and launch BASIC from there. I seem to recall (I was 9 years old at the time) that the original IBM PC had a less capable BASIC built into ROM, which could use a cassette tape for storage and would boot if you did not have a floppy drive or a disk in the boot drive when the machine started.


Some, like the Commodore 128, had both: a "legacy" C64 style BASIC-enabled frontend, and another one: CP/M, the predecessor of DOS, quite popular in the 70s. I loved this combination of two very different worlds.


Its BASIC frontend was not really legacy; rather, the CP/M environment was basically tacked on. It had an entire separate CPU, an actual Z80, for that purpose, but unfortunately running CP/M was almost unbearably slow (which was not really the fault of the Z80).

BASIC, in its upgraded version "BASIC 7.0" (compared to the C64's "BASIC 2.0") was quite obviously still the "main mode" and the one that supported all the C128's features.

However, there was also a third mode, a "C64 compatibility mode", which removed (almost) all of the new features and presented itself exactly like a C64 would, same ROM and all.


Why was running CP/M so slow? I never got to run it but I don't think of CP/M as being a particularly bloated/slow OS; but you say it wasn't the fault of the Z80. Something particular to the C128?


I haven’t done any independent research (other than trying it out in a C128 emulator and concluding that it indeed feels very slow 8) ), but there is a detailed answer here: https://retrocomputing.stackexchange.com/questions/2361/why-...

The issue does not seem entirely clear-cut, but from the looks of it, while the newer CP/M 3.0 used there does seem a bit slower than earlier versions, the majority of the reasons are indeed pretty specific to the C128.


This was so much more background than I'd dared hope for! Thanks, really interesting. What a mess.


The SuperPETs had a bunch of stuff in ROM: BASIC, Pascal, and Fortran... probably more I'm forgetting.


DOS as a 'proper shell' is missing the point.

BASIC as a shell was more capable than DOS. There is no reason to have DOS instead of BASIC other than that there are fewer features to implement.

For all the bashing BASIC gets for favoring unstructured programming, you could do a lot in it as a first-gear programming language. Think of the number of bash scripts that just shuffle strings around, and ask whether you would not rather write them in, e.g., Python if it gave immediate access to files.


First, it's cool that someone is trying to make a BASIC shell. What I was pointing to is that most same-era computers used BASIC as their shell.


The most non-sh shell I've seen in the last 10+ years is scsh, which is a Scheme shell. I don't know if anyone is working on a BASIC shell today, and you're right about your observation.

As a programming language, BASIC was more suited to being a shell than DOS was suited to being a programming language. :) That's probably why I avoided learning DOS batch programming like mad.


I had an IBM PC XT in the early '90s, just like in the picture. Green monochrome monitor, dual 5.25-inch floppies, 20 MB hard drive. Some company was throwing them out and my father brought one home.

At one point the hard drive failed and the PC booted into BASIC from ROM.


Unless you inserted a cartridge into the C64 ... in which case there was no typing required!


In the DOS era, my favorite DOS was DR-DOS, which was much better than MS-DOS in my opinion, although I cannot remember exactly why, since this was decades ago.

Unfortunately Microsoft did everything in its power to kill it.

> "If you're going to kill someone there isn't much reason to get all worked up about it and angry. Any discussions beforehand are a waste of time. We need to smile at Novell while we pull the trigger." - Jim Allchin, Microsoft Co-President, on a memo about DR-DOS.


I remember when I first learned about multitasking. I think I read an article in Compute! or PC Magazine about Desqview or Concurrent DOS, and I remember wondering "Who would ever want to run two programs at the same time? Nobody has a multitasking brain!"


Desqview was acceptable for running a two-line dial-up BBS on the same machine. It had stability issues beyond that, so when the need arose for a third line we moved to OS/2, which was wonderful for multitasking DOS applications. The internet killed our little BBS before we had a need for four lines and beyond :)


There was a cottage industry of software to get around DOS's single-tasking limitations. DoubleDOS was another program that let you have multiple sessions. And there were terminate-and-stay-resident (TSR) programs like SideKick.


I always thought that TSR stood for "trash system randomly".

I'll show myself out.


Yep, and then you get the 2000s brain that can't focus on a thing for more than a few minutes at a time.


Meanwhile I was clamoring to find a copy of DesqView as fast as I could, because my favorite MOD player had a DOS shell, but you had to exit your shelled-out program to return to the player and pick another song.

That was acceptable for a quick dial-up-and-toss-a-mail-packet session, but not for door games or chat, so the prospect of being able to run something in an _alternate_ task, not a _parent_ task, was incredibly attractive.


Speaking of multitasking, I remember writing code that hijacked the timer interrupt to surreptitiously perform background tasks. I would also hijack interrupt 9 (I think) to capture the keyboard, so that I could pop up a small text-mode window displaying the current state/status when you pressed the magic key combo (Ctrl+Shift+9, for example). The window would vanish and the background would be restored as soon as the keys were released. I miss that time. The size of my binary was a little over 1K (written in assembler).
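
For anyone who never wrote one of these, below is a minimal sketch of the same technique in C rather than assembler, assuming Borland Turbo C's dos.h routines (getvect, setvect, inportb, keep) and its interrupt keyword; the names and the popup placeholder are illustrative, not the original 1K assembler program described above.

    /* Minimal TSR sketch (Borland Turbo C style). It hooks the timer tick
       (INT 8) for background bookkeeping and the keyboard interrupt (INT 9)
       to peek at scancodes, then terminates and stays resident. */
    #include <dos.h>

    #define TIMER_INT    0x08
    #define KEYBOARD_INT 0x09

    static void interrupt (*old_timer)(void);
    static void interrupt (*old_keyboard)(void);
    static volatile unsigned long ticks;

    static void interrupt new_timer(void)
    {
        ticks++;            /* background work, fires ~18.2 times per second */
        old_timer();        /* chain so the BIOS clock keeps running */
    }

    static void interrupt new_keyboard(void)
    {
        unsigned char scancode = inportb(0x60);  /* peek at the raw scancode */
        /* a real TSR would check the shift state in the BIOS data area here
           and, on the magic key combo, save the screen and draw its window */
        (void)scancode;
        old_keyboard();     /* let the normal BIOS handler process the key */
    }

    int main(void)
    {
        old_timer    = getvect(TIMER_INT);
        old_keyboard = getvect(KEYBOARD_INT);
        setvect(TIMER_INT,    new_timer);
        setvect(KEYBOARD_INT, new_keyboard);

        /* terminate and stay resident; the size argument is in 16-byte
           paragraphs, measured here from the PSP up to the top of the stack */
        keep(0, (_SS + ((_SP + 15) >> 4)) - _psp);
        return 0;           /* never reached */
    }

The hook-chain-keep structure is the whole trick; hand-written assembler versions fit in well under 1K mostly because they skip the C startup code and runtime.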


TSR days, heh? Good times


As an OS nerd, I find this very interesting. Thanks for posting it.


This kinda happened in the UNIX world:

https://en.wikipedia.org/wiki/C_shell


Kinda, except that C shell syntax isn't very C-like.

I think 8 bit micros (Apple II, Commodore 64) are closer. They booted up into BASIC and their DOS was an extension to BASIC.


The DOS for Commodore computers, including the above-mentioned Commodore 64, was not an extension to BASIC. It instead ran on the floppy drive units themselves, which included their own RAM and 6502-family CPUs for that purpose.

The other 8-bit micro I used personally was the TRS-80 Model 1. It would boot directly to BASIC, but there was no DOS as part of this BASIC. To use a floppy drive, a DOS boot disk was required. There were a few different DOSes available for the machine; the official DOS available from Tandy was TRS-DOS.


Regardless of where it ran, to the C64 user, at a command level, it looked and acted like an extension to BASIC. You can see some examples of the BASIC code here: https://en.wikipedia.org/wiki/Commodore_DOS

Apple ProDOS had similar stuff...


I like to say their OS shell was also an IDE.


TempleOS is written in a custom language inspired by C and C++ called HolyC, and HolyC is used as the shell. So that's actually much closer to what you are talking about than any Unix shell is (C shell included).


Desqview is a name I have not heard in some time. It was pretty revolutionary at the time, for someone who only knew traditional DOS, and a sign of other multitasking window managers to come.


That could still be useful as a package for FreeDOS, I think.


Before OS/2, Microsoft had Xenix.

It was the last good OS they ever shipped.


https://en.wikipedia.org/wiki/Xenix

But Xenix does not run DOS or Windows programs like OS/2 does, and Microsoft needed that backwards compatibility to be able to sell the next DOS. When Microsoft split from IBM over OS/2, Microsoft's OS/2 NT 3.0 became Microsoft Windows NT 3.1.


There was also XEDOS, which never officially saw the light of day. But some of those features apparently made it into MS-DOS 2.x and appear as the *nix-like features that DOS had from 2.0 onward.

https://youtu.be/Vo8NG8T4rWs


> There was also XEDOS, which never officially saw the light of day

I wonder if XEDOS ever existed as actual code, as opposed to simply an item in a roadmap.


When Microsoft first released Xenix, DOS wasn't a thing and GUIs were only available in rich research labs. Not being compatible with PC-DOS was not really an issue.

As the IBM PC became Microsoft's passport to success, they ditched the Unixness of Xenix and embraced the CP/M-ness of PC-DOS. That's why people do dir in Windows instead of ls.



