Hacker News | dsand's comments

About 360 and pi: In the late 70's, "Two Pi Corporation" of Santa Clara made S/370-clone minicomputers named V/32. In early 1981, Two Pi was acquired by Four Phase Systems of Cupertino, a maker of early PMOS cpus. Four Phase was itself acquired in late 1981 by Motorola and withered away. Four Phase's campus was leveled and replaced by Apple's Infinite Loop campus, with nearly the same footprint.

My partner Elaine Gord was on VisiOn's C compiler team in 1982-1984 with two others. They experimented with having two instruction sets: the native 8088 code for best performance, and a C virtual machine bytecode for code density. The two modes were mixed at the function level, and shared the same call/return stack mechanism. This was terrible for speed, but was thought necessary because the target machines did not have enough ram for the total VisiOn functionality. I don't know if the bytecode scheme got into "production".
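The idea of mixing two instruction sets at the function level can be sketched as below. This is a hypothetical toy, not VisiOn's actual design: opcode names, the constant-pool layout, and the `call` helper are all invented for illustration. The key point it models is a uniform call mechanism through which a bytecode function can call a native one and vice versa.

```python
# Toy stack-machine bytecode; opcode names and encoding are invented.
PUSH, ADD, CALL, RET = range(4)

def run_bytecode(code, consts, args, call):
    """Interpret one bytecode function; `call` invokes any callee uniformly."""
    stack = list(args)
    pc = 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:
            stack.append(consts[code[pc]]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == CALL:
            fn, nargs = consts[code[pc]], code[pc + 1]; pc += 2
            callee_args = [stack.pop() for _ in range(nargs)][::-1]
            stack.append(call(fn, callee_args))
        elif op == RET:
            return stack.pop()

def call(fn, args):
    """Uniform call: native functions run directly, bytecode is interpreted."""
    if callable(fn):                      # "native" code path
        return fn(*args)
    code, consts = fn                     # bytecode = (code, constant pool)
    return run_bytecode(code, consts, args, call)

def native_double(x):                     # a "native" function
    return 2 * x

# bytecode function: f(x) = native_double(x) + 2
bc_func = ([CALL, 0, 1, PUSH, 1, ADD, RET], [native_double, 2])
print(call(bc_func, [20]))               # 42
```

In a real mixed-mode system the shared call/return stack means the transition between modes costs a few instructions per call rather than a full context switch, which is why mixing at function granularity was workable at all.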


All of Microsoft's applications used to do that! The 16-bit versions of Word, Excel, PowerPoint, etc. were all implemented using bytecode. That technology was only integrated into Microsoft C and made public in 1992 [1]; before that, the Microsoft applications group used its own private C toolchain. A copy of it can be found in the Word 1.1 source release [2] ("csl").

[1] https://sandsprite.com/vb-reversing/files/Microsoft%20P-Code... [2] https://github.com/danielcosta/MSWORD


I think Multiplan for 8-bit systems was implemented in a similar fashion, which enabled it to be widely ported.


A fount of knowledge about Microsoft's productivity application group history is Steve Sinofsky's blog, https://hardcoresoftware.learningbyshipping.com/

For p-code references, the relevant blog post is https://hardcoresoftware.learningbyshipping.com/p/003-klunde....

  There was a proprietary programming language called CSL, also named after CharlesS. This language, based on C, had a virtual machine, which made it easier to run on other operating systems (in theory) and also had a good debugger—attributes that were lacking in the relatively immature C product from Microsoft. 
CharlesS is Charles Simonyi, an ex-Xerox PARC employee hired away by Microsoft, who worked on MS Word and created the Hungarian naming system (the Apps version is the definitive version, not the bastard watered-down Systems version used in the Windows header files) - see https://en.wikipedia.org/wiki/Hungarian_notation.

The blog post included excerpts from internal MS docs for apps developers. An OCR version of one such page in the blog post follows:

  ====

  One of the most important decisions made in the development of Multiplan was the decision to use C compiled to pseudo-code (Pcode). This decision was largely forced by technological constraints. In early 1981, the microcomputer world was mainly composed of Apple II's and CP/M-80 machines; they had 8-bit processors, and 64K of memory was a lot; 128K was about the maximum. In addition, each of the CP/M-80 machines was a little different; programs that ran on one would not automatically run on another. Pcode made the development of ambitious applications possible; compiling to machine code would have resulted in programs too big to fit on the machines (even with Pcode it was necessary to do a goodly amount of swapping). It also allowed us to isolate machine dependencies in one place, the interpreter, making for very portable code (all that was necessary to port from one machine to another was a new interpreter). For Multiplan, this was an extremely successful strategy; it probably runs on more different kinds of machines than any other application ever written, ranging from the TI/99 to the AT&T 3B series.

  Of course, Pcode has its disadvantages as well, and we've certainly run into our share. One disadvantage is that it's slow; many of our products have a reputation for slowness for exactly that reason. There are of course ways to speed up the code, but to get a great deal of speed requires coding a goodly amount in tight hand-crafted assembly language. Another disadvantage is our Pcode's memory model. Since it was originally designed when most machines had very little memory, the original Pcode specification supported only 64K of data; it was not until Multiplan 1.1 was developed in early 1983 that Pcode was extended to support larger data spaces. A final disadvantage of Pcode is that we need our own special tools in order to develop with it; most obviously these include a compiler, linker, and debugger. In order to support these needs, there has been a Tools group within the Applications Development group almost from the beginning, and we have so far been largely unable to take advantage of development effort in other parts of the company in producing better compilers and debuggers. (It should be noted that the Tools group is responsible for considerably more than just Pcode support these days.)

  Although portability was one of the goals of using Pcode, it became apparent fairly early on that simply changing the interpreter was not sufficient for porting to all machines. The major problem lay in the different I/O environments available; for example, a screen-based program designed for use on a 24 by 80 does not adapt well to different screen arrangements. To support radically different environments requires radically rewriting the code; we decided the effort was worth it for two special cases: the TRS-80 Model 100 (first laptop computer) and the Macintosh. In retrospect, the Model 100 was probably not worth the effort we put into it, but the Macintosh proved to be an extremely important market.

  ====
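The portability strategy described in the excerpt ("all that was necessary to port from one machine to another was a new interpreter") can be sketched as below. This is an illustrative toy, not Microsoft's actual Pcode: the opcodes, the `Machine` interface, and the spreadsheet-flavored program are all invented. The point it models is that every machine dependency lives behind one small interface, so a port reimplements only that layer.

```python
# Invented mini instruction set; real Pcode was far richer.
LOAD, STORE, DISPLAY, HALT = range(4)

class Machine:
    """The only machine-dependent layer; each port supplies its own."""
    def display(self, text):
        print(text)   # a CP/M port might call BDOS; a Mac port, QuickDraw

def interpret(code, machine, memory=None):
    """Machine-independent interpreter loop shared by every port."""
    memory = memory if memory is not None else {}
    acc = None
    pc = 0
    while True:
        op, arg = code[pc]; pc += 1
        if op == LOAD:
            acc = arg
        elif op == STORE:
            memory[arg] = acc
        elif op == DISPLAY:
            machine.display(memory[arg])
        elif op == HALT:
            return memory

program = [(LOAD, "=A1*2"), (STORE, "B1"), (DISPLAY, "B1"), (HALT, None)]
interpret(program, Machine())   # prints: =A1*2
```

The same structure also explains the excerpt's caveat about I/O: the `Machine` layer can abstract a character display, but it cannot paper over a radically different interaction model like the Macintosh's, which is why those ports required rewriting the application code itself.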

Jon DeVaan's comment on the blog post (https://hardcoresoftware.learningbyshipping.com/p/003-klunde...) mentions:

  P-Code was a very important technology for Microsoft's early apps. Cross platform was one reason, as Steven writes. It was also very important for reducing the memory size of code. When I started, we were writing apps for 128k (K, not m or g) RAM Macs. There were no hard drives, only 400k floppy disks. (Did I mention we all had to live in a lake?)

  P-Code was much smaller than native code, so it saved RAM and disk space. Most Macs had only one floppy disk drive. A market risk for shipping Excel was requiring 512k Macs with two disk drives, which allowed the OS and Excel code to live on the first drive and the user's data on the second. Mac OS did not have code swapping functions; each app had to roll its own from memory manager routines, so the P-Code interpreter provided that function as well.

  On early Windows versions of Excel the memory savings aspect was extremely important. The size of programs grew as fast as typical RAM and hard disk sizes for many years so saving code size was a primary concern. Eventually Moore's Law won and compilers improved to where the execution trade-off was no longer worth it. When Windows 95 introduced 32 bit code these code size dynamics returned for a different reason – IO bandwidth. 16 bit Excel with P-Code outperformed 32 bit Excel in native code in any scenario where code swapping was needed. Waiting for the hard drive took longer than the ~7x execution overhead of the P-Code interpreter.

Another Jon DeVaan comment (https://hardcoresoftware.learningbyshipping.com/p/008-compet...):

   I am surprised to hear Steven say that the app teams and Excel in particular were looking in any serious way at the Borland tools. The reality was the CSL compiler had a raft of special features and our only hope of moving to a commercial tool was getting the Microsoft C team to add the features we needed. This was the first set of requirements that came from being the earliest GUI app developers. Because of the early performance constraints a lot of "tricks" were used that became barriers to moving to commercial tools. Eventually this was all ironed out, but it was thought to be quite a barrier at the time. About this time the application code size was starting to press the limits of the CSL P-Code system and we really needed commercial tools.
And Steve Sinofsky's reply:

  Technically it was the linker not the compiler. The Excel project was getting big and the apps Tools team was under resource pressure to stop investing in proprietary tools while at the same time the C Tools group was under pressure to win over the internal teams. It was *very* busy with the Systems team, particularly the NT team, on keeping them happy. We’re still 5 years away from Excel and Word getting rid of PCode. Crazy to think about. But the specter of Borland was definitely used by management to torment the Languages team who was given a mission to get Microsoft internally using its tools.


C was very much the JavaScript of its day, with people hand-rolling toolchains and compilers targeting the bytecode they knew and loved (or despised).

This is awesome history. Formal history only remembers the whats and the whens; rarely does it catalog the whys or the hows. The decision-making processes of those early programmers helped shape a whole industry.


Yes, all Tandem full-time employees got paid sabbaticals. And stock options.

My partner joined Tandem 2 years after me, so our eligibility for sabbaticals was out of sync. One of us would have to defer our next sabbatical for 2 years to get us in sync. Instead, we both took off together for 6 weeks every two years, using time off without pay when it wasn't our turn for a paid vacation. We took a lot of overseas trips.


The historic former building stood unchanged until 5-10 years ago, with a historical plaque on it. It has since been demolished and replaced by a nameless large 5-story office building.


The new building has an elaborate plaque facing the sidewalk. It includes a railroad diagram of the many corporate spinoffs from Shockley Semi.


There are no heirs for LCM. Allen's will specified what to do with a few of his many assets in particular. But for all the rest, he wrote to sell it all and donate the proceeds to charities. So the will's executor (his sister) does not have the latitude to divert some assets to other outcomes.

LCM was never self-sustaining via tickets. It always needed yearly infusions of cash from Allen. Re-opening it as it was would require similar levels of cash to burn. I wish that Allen had loved the LCM enough to design an endowment to keep it going, and had specified in the will how to treat LCM specially. But he did not. What he wanted instead for his legacy, was large cash donations to various charities.


The people railing here are ignoring the fact that, for whatever reason (he didn't actually care that much, or he was sick and didn't have the time), Allen didn't explicitly provide for this museum to be maintained into the foreseeable future. That may or may not be a bad outcome, but it's what happens when someone passes and they haven't made an explicit provision with funding for something they owned.

By and large, museums need to cover their operating costs and apparently this one didn't.


HP partnered with Intel to bring HP's PlayDoh VLIW architecture to market, because HP could not afford to continue investing in new leading-edge fabs. Compaq/DEC similarly killed Alpha shortly before getting acquired by HP, because Compaq could not afford its own new leading-edge fab either. SGI spun off its MIPS division and switched to Itanium for the same reason -- fabs were getting too expensive for low-volume parts. The business attraction wasn't Itanium's novel architecture. It was the prospect of using the highest-volume, most profitable fab lines in the world. But ironically, Itanium never worked well enough to sell in enough volume to pay its way in either fab investments or design teams.

The entire Itanium saga was based on the theory that dynamic instruction scheduling via OOO hardware could not be scaled up to high IPC with high clock rates. Lots of academic papers said so. VLIW was sold as a path to get high IPC with short pipelines and fast cycle times and less circuit area. But Intel's own x86 designers then showed that OOO would indeed work well in practice, better than the papers said. It just took huge design teams and very high circuit density, which the x86 product line could afford. That success doomed the Itanium product line, all by itself.

Intel did not want its future to lie with an extended x86 architecture shared with AMD. It wanted a monopoly. It wanted a proprietary, patented, complicated architecture that no one could copy, and whose software no one could easily retarget. That x86-successor architecture could not be yet another RISC, because RISC programs are too easy to retarget to another assembly language. So, way beyond RISC: every extra gimmick like rotating register files was a good thing, not a hindrance to clock speeds and pipelines and compilers.

HP's PlayDoh architecture came from HP Labs, as had the very successful PA-RISC before it. But the people involved were all different, and they could make their own reputations only by doing something very different from PA-RISC. They sold HP management on this adventure without proving that it would work for business and other non-numerical workloads.

VLIW had worked brilliantly in numerical applications like Floating Point Systems' vector coprocessor: very long loop counts, very predictable latencies, and all software written by a very few people. VLIW continues to thrive today in the DSP units inside all cell phone SoCs. Josh Fisher thought his compiler techniques could extract reliable instruction-level parallelism from normal software with short-running loops, dynamically changing branch probabilities, and unpredictable cache misses. Fisher was wrong. OOO was the technically best answer to all that, and upward compatible with massive amounts of existing software.

Intel planned to reserve the high-margin 64-bit server market for Itanium, so it deliberately held back its x86 team from going to market with their completed 64 bit extensions. AMD did not hold back, so Intel lost control of the market it intended for Itanium.

Itanium chips were targeted only for high-end systems needing lots of ILP concurrency. There was no economic way to make chips with less ILP (or much more ILP), so no Itanium chips cheap and low-power enough to be packaged as development boxes for individual open-source programmers like Torvalds. This was only going to market via top-down corporate edicts, not bottom-up improvements.

The first-gen Itanium chip, Merced, included a modest processor for directly executing x86 32-bit code. This ran much slower than Intel's contemporary cheap x86 chips, so no one wanted that migration route. It also ran slower than using static translation from x86 assembler code to Itanium native code. So HP dropped that x86 portion from future Itanium chips. Itanium had to make it on its own via its own native-built software. The large base of x86 software was of no help. In contrast, DEC designed Alpha and migration tools so that Alpha could efficiently run VAX object code at higher speeds than on any VAX.


> Intel planned to reserve the high-margin 64-bit server market for Itanium, so it deliberately held back its x86 team from going to market with their completed 64 bit extensions. AMD did not hold back, so Intel lost control of the market it intended for Itanium.

Is there anything I can read about what Intel planned for their x86 extension to 64 bits? I'm curious about this road not taken.

> Itanium chips were targeted only for high-end systems needing lots of ILP concurrency. There was no economic way to make chips with less ILP (or much more ILP), so no Itanium chips cheap and low-power enough to be packaged as development boxes for individual open-source programmers like Torvalds. This was only going to market via top-down corporate edicts, not bottom-up improvements.

One wonders why they have not learned from this mistake. They continue to make it again and again (AVX-512 and NVRAM are some more recent examples). If the ordinary joe can't get his hands on a box with the new stuff, he's not going to port his software to it or make use of its special features.

> The first-gen Itanium chip, Merced, included a modest processor for directly executing x86 32-bit code. This ran much slower than Intel's contemporary cheap x86 chips, so no one wanted that migration route. It also ran slower than using static translation from x86 assembler code to Itanium native code. So HP dropped that x86 portion from future Itanium chips. Itanium had to make it on its own via its own native-built software. The large base of x86 software was of no help. In contrast, DEC designed Alpha and migration tools so that Alpha could efficiently run VAX object code at higher speeds than on any VAX.

Seems like Apple learned from that. Both generations of Rosetta have top-notch performance. Hard-to-emulate bits were circumvented by just adding extra features to the CPU that directly implement the missing functionality (e.g. there's an x86-like parity flag on the Apple M1).
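To make the parity-flag point concrete: x86 defines PF as set when the low byte of a result has an even number of 1 bits. An emulator on a host without such a flag has to compute this in software after potentially every ALU operation, which is why a small hardware assist like the M1's helps. A minimal sketch of what that software fallback computes:

```python
def x86_parity_flag(result):
    """PF = 1 iff the low 8 bits of `result` contain an even number of set bits."""
    return bin(result & 0xFF).count("1") % 2 == 0

assert x86_parity_flag(0b0000_0011)       # two bits set  -> PF = 1
assert not x86_parity_flag(0b0000_0111)   # three bits set -> PF = 0
assert x86_parity_flag(0x1FF)             # only the low byte (0xFF) counts
```

Cheap per-instruction, but multiplied across every flag-setting instruction in a hot translated loop it adds up; a native flag reduces it to nothing.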


Ancient languages ran on ancient machines with 6-bit character sets. No available way to key in or print lowercase letters.


The 1401 was designed as a stored-program alternative to IBM's never-shipped WWAM (World Wide Accounting Machine), designed at IBM Europe in 1955. WWAM was to be a low-cost plugboard-programmed transistor computer to handle the same card tasks as IBM's existing and popular relay-based punch-card accounting equipment. WWAM was IBM's reaction to the threat of losing its many punch-card customers to the Gamma 3, a low-cost computer made by the French company Bull. The 1401 used the same ALU, etc., as WWAM, but with newer standard circuit modules. Both used arbitrary-length decimal fields, just like the punch-card machines before them.

Several incompatible IBM machines in the 7000 series also used arbitrary-length decimal numbers, not binary words.

At Burroughs, the "Medium Systems" B2500-B4800 series were similarly decimal-only, with arbitrary-length numbers. They got extended with fixed-length decimal accumulator and index registers, and competed with IBM's mid-range 360 and 4331 systems. Building a decimal-addressed memory out of binary-addressed chips got increasingly kludgy.
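Arbitrary-length decimal arithmetic of the kind these machines did can be sketched as below. This is a toy, not actual 1401 semantics: the 1401 delimited the high end of a field with a "word mark" in memory rather than using a fixed word size, and worked digit-serially from the low-order end, carrying as it went. Here a field is simply a list of decimal digits.

```python
def add_decimal_fields(a, b):
    """Add two decimal fields (lists of digits, most significant first)."""
    # pad the shorter field with leading zeros
    n = max(len(a), len(b))
    a = [0] * (n - len(a)) + a
    b = [0] * (n - len(b)) + b
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):   # low-order digit first
        s = da + db + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)   # the field just grows; there is no fixed width
    return result[::-1]

print(add_decimal_fields([9, 9, 9], [1]))   # [1, 0, 0, 0]
```

Since fields had no fixed width, a program could do exact arithmetic on numbers as long as memory allowed, which suited accounting workloads but made decimal addressing on top of binary-addressed memory chips the kludge described above.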

https://ibm-1401.info/1401Origins.html#Motivations-1


I came to say about the same. The IBM 1401 was really the next step after punch-card appliances, and its specifications owed much to those machines.

On the Bull Gamma 3: http://www.feb-patrimoine.com/english/gamma_3.htm

The Register once had a nice article on the occasion of the 1401's 50th anniversary: https://www.theregister.com/2009/11/17/ibm_1401_fiftieth_ann...


Peter Samson ran the Spacewar demo today on CHM's PDP-1.


That is a very interesting article, both for how they aim to run existing software via binary translation to an ISA optimized for such translations, and for how much they dread possibly being cut off from foreign foundries and from ARM and x86 chips.

