Ivan, I bought your book "Bulletproof SSL and TLS". It's excellent, and I recommend it to anyone who is charged with building, deploying, or managing a real-world system that includes TLS.
I have yet to buy it, but I've recommended it to five or six people who said they wanted to understand TLS, because I've never heard of someone being disappointed by it!
AFAIK, SABRE is/was an airline reservation system, not a GDS. SABRE evolved into TPF, which is not related to z/OS except inasmuch as they are both operating systems that run on IBM z-series mainframes.
This is a little mixed up. First, a trivial point: you meant "virtualization", not "visualization". Second, the only thing the 360/67 had (and that was carried forward to the 370 "Advanced Function" line) was a mode bit in the PSW for virtual mode, and the fact that supervisor-state instructions would cleanly trap to the kernel if executed in user mode. There was no "hypervisor hardware"; it was all software, specifically the Control Program or CP portion of CP/CMS.
The thing that the System/360 architecture got right was a clean exception mechanism when an attempt was made to execute a "privileged instruction" when in user mode. Privileged instructions were all those that operating systems used to manage and run applications. By having a clean exception/trap mechanism, the hypervisor was made feasible, because it could run the virtual machine in user mode, let the S/360 hardware do all the normal user-mode instructions natively, and trap out to the hypervisor when a privileged instruction was run.
Kudos to Andris Padegs and his original System/360 architecture team for having the foresight in the early 60s to implement the instruction set in this way.
If you looked at the source code of VM/370 CP, which all came from CP/67, you would see code which intercepts exceptions caused by user mode applications attempting to execute supervisor instructions such as SIO (Start I/O) or LPSW (Load PSW), and emulates them like the real S/370 hardware would do. Therefore, a user-mode application could actually be an operating system that thought it was running on real hardware.
SIO is the fundamental way that an OS communicates with I/O devices, all of which were virtualized by CP. LPSW is how an OS dispatches one of its tasks, so CP virtualizes the hardware state and switches the virtual CPU from virtual supervisor to virtual user mode. Of course it's all much more complex than that.
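As a toy illustration of that trap-and-emulate loop (Python, not real CP code; SIO and LPSW are real S/370 mnemonics, but the data structures and handlers here are invented for illustration):

```python
# Toy trap-and-emulate sketch. SIO and LPSW are real S/370 mnemonics;
# everything else is invented for illustration.

PRIVILEGED = {"SIO", "LPSW"}

class VirtualMachine:
    def __init__(self):
        self.log = []

def run_instruction(vm, insn):
    """Hardware-like dispatch: normal instructions run natively in user
    mode; privileged ones cleanly trap out to the hypervisor (CP)."""
    if insn in PRIVILEGED:
        return cp_trap_handler(vm, insn)
    vm.log.append(f"native:{insn}")
    return "native"

def cp_trap_handler(vm, insn):
    """CP emulates the privileged instruction, so the guest OS believes
    it ran on real hardware."""
    if insn == "SIO":
        vm.log.append("emulated:SIO (start virtual I/O)")
    elif insn == "LPSW":
        vm.log.append("emulated:LPSW (load virtual PSW, dispatch task)")
    return "emulated"
```

In this sketch, a non-privileged instruction like AR (add register) runs "natively", while SIO and LPSW trap into the handler and get emulated.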
In particular, CP could virtualize an operating system that itself did virtual memory and acted as a hypervisor. If the virtual OS was itself a hypervisor, it would run its second-level virtual OSes in (virtual) user mode. The first-level CP would get a privileged instruction exception, and would look at the virtual machine's virtual state and see that it was in virtual user mode. Thus, it would simulate a privileged-instruction execution exception in the virtual machine, which in turn would emulate that privileged instruction for the second-level virtual OS.
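The first-level CP's decision on each trap can be sketched like this (illustrative Python only; the state layout is invented, not real CP code):

```python
# Toy sketch of the first-level CP's decision when the real hardware
# traps on a privileged instruction and the guest may itself be a
# hypervisor. Illustrative only; field names are invented.

def handle_priv_trap(vm_state):
    """First-level CP: a privileged instruction trapped out of the VM."""
    if vm_state["virtual_mode"] == "supervisor":
        # The guest OS itself ran the instruction: CP emulates it.
        return "emulate-for-guest"
    # The guest was in *virtual* user mode, so the instruction must have
    # come from a second-level OS. CP reflects a privileged-instruction
    # exception into the guest; the guest's own CP then emulates the
    # instruction for its second-level virtual OS.
    return "reflect-exception-into-guest"
```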
The most difficult and compute-intensive work was in simulating the virtual memory hardware of a virtual machine that was itself using virtual memory for its user tasks, or, in the case of a second-level hypervisor, its virtual machines. We had to have code in CP that simulates the translation lookaside buffer for the virtual machine, and also does direct lookups within the virtual page and segment tables maintained by the virtual OS.
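The double translation CP had to simulate looks roughly like this (illustrative Python; the real virtual-TLB and shadow-table code in CP was far more involved and performance-critical):

```python
# Toy sketch of double address translation: the guest OS's tables map
# guest-virtual to guest-"real" addresses, and CP's own tables then map
# guest-"real" to actual host-real memory. Table layout is invented.

PAGE_SIZE = 4096

def translate(page_table, vaddr):
    """One level of lookup: map a virtual page to a frame, keep the offset."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset

def guest_to_host(guest_tables, host_tables, guest_vaddr):
    """Two levels: guest-virtual -> guest-'real' -> host-real."""
    guest_real = translate(guest_tables, guest_vaddr)
    return translate(host_tables, guest_real)
```

For example, if the guest maps its virtual page 0 to its "real" frame 5, and CP maps the guest's frame 5 to host frame 9, then guest address 100 lands at host address 9*4096 + 100.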
But it all worked, and we could happily run virtual OSes like OS/VS1, OS/VS2 (later called MVS), and CP itself, underneath CP as virtual machines.
However, as you can imagine, performance was not great for many workloads. So, the hardware engineers in Endicott and Poughkeepsie came up with "Virtual Machine Assist" microcode, which would step in and run the high-use hypervisor paths directly in microcode on the hardware, which was an order of magnitude faster than doing it in S/370 instructions. A good example is Load Real Address (LRA), which could be run very quickly in microcode.
I spent thousands of hours working on the CP source code, first as a user, then as a developer and design manager at IBM in the early days of VM/370 and VM/SP. I was too young to have been involved in the Cambridge Scientific Center's early work on the 360/44 and later the 360/67, but did get to meet and talk with some of the original people.
No expert here, but I would guess the important bit is the "root": value, which in the example points to rootfs. rootfs, I suppose, would be the directory on the host system containing the entire root filesystem that is to be mounted at /. This could be created using overlayfs, or a COW filesystem like btrfs, which would be how you'd achieve layers.
In this way, the notion of layers is external to the container spec.
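For illustration, a runtime-config fragment in the style of today's OCI runtime spec (hedged: I'm guessing at the spec under discussion; the field values here are illustrative, not taken from the thread):

```json
{
  "root": {
    "path": "rootfs",
    "readonly": false
  }
}
```

The runtime just mounts whatever is at that path as /; whether rootfs was assembled from layers via overlayfs, btrfs snapshots, or plain copying is invisible to it.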
I'm old enough that I've lived through these stages:
1) There were no answering machines and no cordless phones, much less mobile. If you called someone at home, and they didn't answer, you just tried later. If you called someone at a business, everyone had a secretary who could pick up if they didn't answer. She (and it was always "she") would write your message on a message slip (they were generally pink) and put it on the person's desk. Or, for smaller businesses, you called the main number, and the operator would connect you, and take a message if no answer.
2) Home answering machines happened. They used cassette tapes, and you could rewind and replay your messages. The outgoing greeting was on a second tape. When you got home, or if you were screening your calls, you'd see a blinking light indicating "new message", then hit buttons to rewind and play your messages. When the tape got full you could put in a new tape or write over the old messages. For businesses, phone systems became smarter and could take voicemail, but this was expensive; many businesses still took messages the old people-intensive way, and many preferred it because customers liked leaving a message with a real person.
3) Mobile phones with SMS became available to consumers. (We are into the 90s now.) I don't recall SMS being especially popular at first; most people still left voicemails, even on mobile phones. But SMS became more and more used over the next decade-plus, and then exploded in popularity once smartphones arrived.
4) Fast forward to today. People born after 1980 or so, I find, really don't do voicemail at all, text messaging is the thing. If you leave them a voicemail, it doesn't get listened to. The best you can expect is a callback or a text message because they saw you called, but the usual response is nothing. For businesses, some form of instant messaging (IM) is prevalent inside the company. The IM might say "Hey do you have a minute to talk OTP?" but no one just calls someone else cold, much less leaves a voicemail and expects a response.
IMO stage (1) actually worked fine, and now that we are at stage (4) that works fine too (as long as you don't pretend voicemail is a thing and use other means of communication). The intermediate stages were awkward at best.
I think Coca-Cola's action makes a ton of sense for any company. Maybe it was fear of legal discovery that prompted it, maybe it was what they said (streamlining operations), but it is a good thing, IMO.
Cloud Foundry also quietly forked Docker with Warden/Diego (edit: I meant Garden, thanks kapilvt), although in that case they remained compatible with Docker images.
Clearing up some facts: Warden predates Docker; it's a container implementation. Diego is something entirely different, more like Kubernetes or Mesosphere (scheduling & health, etc.). Garden, the Go implementation of Warden containers, does add filesystem compatibility for Docker.
Your edit is incomplete. Warden predates Docker and is an independent container system. Diego is a new controller/staging/allocating/health system for part of Cloud Foundry, which is a complete PaaS, of which Warden is a low-level component.
I attended Docker Global Hack Day #2 on Oct 30 from Austin. A talk was given on an active Docker project for host clustering and container management, which was non-pluggable, and made no reference to and used none of the code from CoreOS's etcd/fleet/flannel projects.
This was where I first started worrying about CoreOS and Docker divergence.
But since the hack day there has been a pretty reasonable (IMO) GitHub discussion about the tradeoffs between out-of-box ease of use and customizability.
I saw that same presentation at the same event, but came away with a very different impression: the container management they showed was implemented completely outside of docker itself, with no patches to the docker codebase needed. Also, IIRC it actually did use significant code from etcd for coordination.
It had no etcd in it and the POC was implemented as part of the Docker API/CLI, as best I recall. There were significant questions in the discussion about etcd not being there.