And you're stuck with whatever fucked up kernel the vendor gave you, assuming they even followed their obligations and gave you access to the source. The vast majority of x86 systems run mainline kernels because there's a sufficient level of abstraction. The number of Arm devices that's true for is a tiny percentage of the Arm devices out there running Linux.
What? You can build an entirely free UEFI. ACPI has a free compiler and a free interpreter. Neither implies or requires the existence of non-free blobs, and neither implies or requires any code running in a more privileged environment than the OS.
I've a bunch of devices running coreboot with a Tianocore payload, but they're largely either very weird and now unavailable, or I haven't upstreamed them, so it's not super helpful. Still, it's absolutely not impossible, and you can certainly buy Librebooted devices.
Let's say some hw manufacturer would open-source the required specs to implement it on its chips. (Very unlikely, but let's say they do...) So what? Dangerous capabilities remain.
Until UEFI and secure boot, SMM would run code provided by the BIOS. BIOS was updatable, moddable, replaceable. See coreboot and numerous BIOS mods such as wifi whitelist removal.
Trustzone usually runs code from eMMC. These chips are programmed in the factory with a secret key in the RPMB partition. It's a one-time operation - the user can't replace it. Without that key you can't update the code Trustzone executes. Only the manufacturer can update it.
Also, any ring -2 code can be used for secure boot locking the device to manufacturer approved OS, enforce DRM, lock hardware upgrades and repairs, spy, call home, install trojans by remote commands, you name it. And you can't audit what it does.
To respond in more detail: secure boot (as in the UEFI specification) does nothing to prevent a user from modifying their system firmware. Intel's Boot Guard and AMD's Platform Secure Boot do, to varying degrees of effectiveness, but they're not part of the UEFI spec and are not UEFI specific. I have replaced UEFI firmware on several systems with Coreboot (including having ported Coreboot to that hardware myself), I am extremely familiar with what's possible here.
> Trustzone usually runs code from eMMC.
This might be true in so far as the largest number of systems using Trustzone may be using eMMC, but there's nothing magical about eMMC here (my phone, which absolutely uses Trustzone, has no eMMC). But when you then go on to say:
> Without that key you can't update the code Trustzone executes. Only the manufacturer can update it.
you're describing the same sort of limitation that you decried with SMM. As commonly deployed, Trustzone is strictly worse for user freedom than SMM is. This isn't an advantage for Arm.
> Also, any ring -2 code can be used for secure boot locking the device to manufacturer approved OS
No, the secure boot code that implements cryptographic validation of the OS is typically running in an entirely normal CPU mode.
> enforce DRM
This is more typical, but only on Arm - on x86 it's typically running on the GPU in a more convoluted way.
> lock hardware upgrades and repairs
Typically no, because there's no need at all to do any sort of hardware binding at that level - you can implement it more easily in normal code, why make it harder?
> spy
When you're saying "can be used", what do you mean here? Code running in any execution environment is able to spy.
> call home
Code in SMM or Trustzone? That isn't literally impossible but it would be far from trivial, and I don't think we've seen examples of it that don't also involve OS-level components.
> install trojans by remote commands
Again, without OS support, I'm calling absolute bullshit on this. You're going to have an SMM trap on every network packet to check whether it's a remote command? You're going to understand a journaling filesystem and modify it in a way that remains consistent with whatever's in cache? This would be an absolute nightmare to implement in a reliable way.
> And you can't audit what it does.
Trustzone blobs do have a nasty habit of being encrypted, but SMM is just… sitting there. You can pull it out of your firmware. It's plain x86, an extremely well understood architecture with amazing reverse engineering tools. You can absolutely audit it, and in many ways it's easier to notice backdoors in binary than it is in source.
Trustzone is mostly deployed on Devicetree-based platforms. What saves you here isn't the choice of firmware interface, it's whether the platform depends on hostile code. If you don't care about secure boot (or if you do but don't care about updating the validation keys at runtime), you can implement a functional UEFI/ACPI platform on x86 with zero SMM.
I appreciate your detailed reply. I think we are looking from different perspectives. You are correct in an item-by-item way, but you need to put them all together and see the bigger picture. My comment may have made a mess of the technologies and their capabilities, but I was looking at the forest, not the trees.
There are only two viable firmware alternatives in the world right now: ring 0 U-boot* or the ones that use ring -2: UEFI* and various bootloaders +TrustZone in Android world (read the footnotes!). Manufacturers usually focus on only one of the two: either ring -2 (locked bootloaders, UEFI +ACPI +SMM +whatever crapware they may want to add) protected by secure boot or ring 0 U-boot +a device tree +their GPL source code. The ones interested in locked-down platforms choose the ring -2 option and they are not going to make it open source, nor provide the signing keys to allow it to be replaced by FOSS alternatives.
I appreciate freedom. The Linux kernel is free (ring 0). U-boot and coreboot are free (ring -2 if they include ACPI / SMM, else still ring 0). When I run a Linux kernel, I don't want it preempted and sabotaged by a ring -2 component. If that ring -2 includes proprietary blobs, then it's a hard "no" from me. You may argue that SMM (and ACPI) brings useful features such as overheating shutdown when the kernel froze/crashed or the system is stuck at the bootloader, but let's face it: practically there's no free alternative to the manufacturer's blobs when it comes to ring -2. The FOSS community barely keeps U-boot and the device tree working. Barely! An open source UEFI + all that complexity for every single board out there is a no-go from the start. If you ported Coreboot, I'm sure you know how difficult it is.
I recently learned that ACPI can be decompiled to source code, so that's an improvement, but not by much. Unlike a device tree, which is only a hardware description, ACPI is executable code. I see that as a risk and I'm not the only one. Even Linus had something to say about it - the quote is in the Wikipedia article. Some of that code executes in ring -2. It can also install components in the OS - spyware components - you can read about that in the Wikipedia article too. U-boot has the capability of creating files on some filesystems, and you can argue that a proprietary fork could maliciously install OS components by dropping something in init.d, but I've never heard of it being misused that way, and a manufacturer must publish the GPL source code, so it would be difficult to hide. A device tree can't do that at all. If you use UEFI, then every single blob published by the manufacturer must be decompiled and inspected. U-boot + ACPI is probably simpler than porting Coreboot, but it still won't happen. There are simply too many systems to support.
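For what "ACPI is executable code" looks like in practice: on Linux you can pull the tables with `sudo acpidump -b` and disassemble them with `iasl -d dsdt.dat` (both from the acpica-tools package). The result is ASL source, and unlike a devicetree node it contains methods the OS calls at runtime. A minimal illustrative sketch (hypothetical device, not from any real firmware):

```
// ASL (ACPI Source Language) sketch, illustrative only.
DefinitionBlock ("", "DSDT", 2, "VENDOR", "EXAMPLE", 1)
{
    Device (\_SB.BAT0)
    {
        Name (_HID, EisaId ("PNP0C0A"))  // declares a battery device
        Method (_STA)                    // executable method the OS invokes
        {
            Return (0x0F)                // reports "present and enabled"
        }
    }
}
```

A devicetree can only state facts like `compatible = "..."`; an ASL `Method` is bytecode the kernel's AML interpreter runs, which is exactly the auditing concern raised above.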
So, as a conclusion. I see ring -2 as a dangerous capability (even if the malware itself doesn't execute in ring -2) because there are no viable open source alternatives. For this reason I encourage you to not support or promote UEFI and ring -2.
> Trustzone is strictly worse for user freedom than SMM is. This isn't an advantage for Arm.
> Trustzone is mostly deployed on Devicetree-based platforms.
True, but ARM world still has unlocked CPUs that can boot unsigned firmware. There are none left in x86 world. (Or at least none that I know about.)
> No, the secure boot code that implements cryptographic validation of the OS is typically running in an entirely normal CPU mode.
OK, valid observation, I may have used "ring -2" to describe features that are not typically running in ring -2. I tried to avoid these technologies as much as possible and I don't have much hands-on experience about what runs where.
> you can implement a functional UEFI/ACPI platform on x86 with zero SMM.
One dev could probably implement and maintain one, or maybe 5-10 systems if they are related (same CPU, mostly same hardware). How many systems are there and how many devs? Not possible, except for the very few cases where some random dev got one of these systems for himself and does it as a pet project.
----
* When I say U-boot, I mean mainline U-boot plus a device tree, or forks with published GPL source code. I know U-boot can include ACPI and secure boot, but that's not what I mean in the context of this comment. Sure, you can set up secure boot with open source U-boot if you want to. There's nothing wrong with that.
* When I say UEFI, I mean all related technologies: ACPI, SMM, secure boot, signed firmware, etc. The whole forest.
There's a standard like ACPI for Arm devices - it's called ACPI, and it's a requirement for the non-Devicetree SystemReady spec (https://www.arm.com/architecture/system-architectures/system...). But it doesn't describe the huge range of weirdness that exists in the more embedded end of the Arm world, and it's functionally impossible for it to do so while Arm vendors see devices as integrated wholes rather than general purpose devices that can have commodity operating systems installed on them.
Fi launched with Sprint and T-Mobile roaming and added US Cellular, but is presently T-Mobile only. I don't think AT&T has ever been a supported carrier.
The article makes clear that the orientation of the lettering has changed over time, which counts against the idea that what it is now necessarily reflects the original intent.
To me the evidence in the article still suggests that “hard correctness” is probably not historically appropriate…hand lettering is not a typeface.
That’s really where I am coming from — the perspective of historical architecture, historical architectural practice, and historical methods of delivering buildings.
In particular, today's mythological Wright is not 1908's historical Wright on a commercial jobsite. And the contractual relationships of a 1908 construction project were not delineated like current construction projects.
And yet the article shows the original sketches Wright made for the building, which show the asymmetrical H's with the bars aligned with the bars on the E's (i.e. on the upper half) in a virtually identical font to what was eventually installed.
I don't really see how you can come away with the conclusion that this suggests lack of intent; at most, it seems like you had already formed the opinion that there was no intent, and you didn't find the evidence to the contrary convincing enough to change your mind. I don't think your take is necessarily wrong, but I don't think it's fair to characterize the evidence as suggesting what you're saying.
No, there's nothing special about the spec's secure boot variables as far as boot services go - you can modify those at runtime as well. We use boot service variables to protect the MOK key in Shim, but that's outside what the spec defines as secure boot.
I really don't understand why people keep misunderstanding this post so badly. It's not a complaint about C as a programming language. It's a complaint that, due to so much infrastructure being implemented in C, anyone who wants to interact with that infrastructure is forced to deal with some of the constraints of C. C has moved beyond merely being a programming language and become the most common interface for in-process interoperability between languages[1], and that means everyone working at that level needs to care about C even if they have no intention of writing C.
It's understandable how we got here, but it's an entirely legitimate question - could things be better if we had an explicitly designed interoperability interface? Given my experiences with cgo, I'd be pretty solidly on the "Fuck yes" side of things.
(Of course, any such interface would probably end up being designed by committee and end up embodying chunks of ALGOL ABI or something instead, so this may not be the worst possible world but that doesn't mean we have to like it)
[1] I absolutely buy the argument that HTTP probably wins out for out of process
I don't see that as a problem. C has been the bedrock of computing since the 1970s because it is the most minimal way of speaking to the hardware in a mostly portable way. Anything can be done in C, from writing hardware drivers, to GUI applications and scientific computing. In fact I deplore the day people stopped using C for desktop applications and moved to bloated, sluggish Web frameworks to program desktop apps. Today's desktop apps are slower than Windows 95 era GUI programs because of that.
Ok you're still missing the point. This isn't about C being good or bad or suitable or unsuitable. It's about whether it's good that C has, through no deliberate set of choices, ended up embodying the interface that lets us build rust that can be called by go.
Sure history is great and all, but in C it's hard to say reliably define this int is 64-bit wide, because of the wobbly type system. Plus, the whole historical baggage of not having 128-bit wide ints. Or sane strings (not null terminated).
> in C it's hard to say reliably define this int is 64-bit wide
That isn't really a problem any more (since c99). You can define it as uint64_t.
But we have a ton of existing APIs that are defined using the wobbly types, so we're kind of stuck with it. And even new APIs use the wobbly types because the author didn't use the fixed-width ones, for whatever reason.
But that is far from the only issue.
128-bit ints are definitely a problem though; you don't even get agreement between different compilers on the same OS on the same hardware.
You're still thinking of C as a programming language but the blogpost is not about this, it's about using C to describe interfaces between other languages.
> because it is the most minimal way of speaking to the hardware in a mostly portable way.
C is not really the most minimal way to do so, and a lot of C is not portable anyway unless you want to go mad. It's just the most minimal and portable thing that we settled on. It's "good enough", but it still has a ton of resolvable problems.
> could things be better if we had an explicitly designed interoperability interface?
Yes, we could define a language-agnostic binary interoperability standard with its own interface definition language, or IDL. Maybe call it something neutral like the component object model, or just COM[1]. :)
Of course things could be better. That doesn’t mean that we can just ignore the constraints imposed by the existing software landscape.
It’s not just C. There are a lot of things that could be substantially better in an OS than Linux, for example, or in client-server software and UI frameworks than the web stack. It nevertheless is quite unrealistic to ditch Linux or the web stack for something else. You have to work with what’s there.