A simpler, less breathless description from the Red Hat BZ[1]
An out-of-bounds memory access flaw was found in the way QEMU's virtual Floppy Disk Controller (FDC) handled FIFO buffer access while processing certain FDC commands. A privileged guest user could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the hosting QEMU process.
If you're using RHEL, then SELinux further confines the qemu process so although you can run arbitrary code in there (which is very bad) you cannot access any files on the host filesystem except ones which qemu has open. Also libvirt runs qemu as a separate user, and further confines it with some cgroups rules. Depending on the version of RHEL, seccomp may be involved too, which limits the type of syscalls that qemu can make.
Applying the qemu fix is still highly recommended of course.
An even simpler description of the root cause: a circular buffer used as a FIFO was not treated as being circular - the modulus operation for indices was omitted, allowing them to run off the end and not wrap back to the beginning as they should.
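To make that concrete, here is a minimal sketch of the bug class (hypothetical names and buffer size, not QEMU's actual FDC code): an index into a fixed-size FIFO that is advanced without being reduced modulo the buffer size.

```cpp
#include <cassert>
#include <stdint.h>

constexpr unsigned kFifoSize = 512;  // illustrative size only

struct Fifo {
    uint8_t data[kFifoSize] = {};
    unsigned pos = 0;
};

// Broken: once pos reaches kFifoSize, this writes past the end of data.
void fifo_push_broken(Fifo &f, uint8_t b) {
    f.data[f.pos++] = b;  // no wrap -> out-of-bounds write
}

// Fixed: every index is reduced modulo the buffer size, so accesses
// wrap back to the start of the buffer instead of running off the end.
void fifo_push(Fifo &f, uint8_t b) {
    f.data[f.pos % kFifoSize] = b;
    f.pos = (f.pos + 1) % kFifoSize;
}
```

A guest that can drive the index past the end of the buffer (here, by pushing more than kFifoSize bytes through the broken path) gets an out-of-bounds write into whatever follows the buffer in the emulator process's memory.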
Well, this lets the attacker jump to neighbouring VMs. So unless there's some extra SELinux magic going on, the protections get you so far as "OK, only all my virtual machines are compromised", which is very small consolation for a lot of setups. (And complete host compromise if it happens to be a bad week re local kernel privilege escalation bugs.)
You are wrong. SELinux does protect adjacent VMs, by putting each VM into its own separate context and controlling which host files/devices are assigned to each context label. VMs cannot access files or devices on the host which are assigned to other VMs.
That is the purpose of the cXXX,cYYY part of the label as seen in the example here:
Why indeed. In fact we tried to remove the FDC, but Windows needs it in order to do certain operations like installing some drivers, so there was resistance there.
Anyhow, security bugs happen (in all sorts of devices, old and new), and the important thing is that we fix them quickly and that SELinux mitigates the immediate effects.
This is one of my biggest rants as a VMWare admin. Every VM I build, I have to boot to BIOS and disable the drive there. Why is it even enabled? Do VM manufacturers have a large number of government or corporate customers for whom a floppy interface is necessary? I haven't had to use them in almost 10 years; even RAID drivers are downloadable for easy use with any CD or USB drive, and even slipstreamed into the OS install.
Unfortunately I'm always working with established systems, so I can't push this feedback during the sales process.
If that's disabling the floppy as a boot drive, we do that in KVM by default. :) But the device is still there, even if you disable booting it from there.
Yeah, I mean going into the BIOS setting for Floppy Drive and hitting + until it rotates around various disk dimensions, and finally says [Disabled]. :)
That's the only way I've found to remove it from the list of drives that WMI's win32_physicaldisk class presents.
Don't forget that Heartbleed was in a new feature of TLS... legacy or not doesn't matter. If anything I'd say that older code tends to mature and stabilise over time as bug fixes are applied - it's the "rip everything out and rewrite it" mentality that leads to more bugs.
This isn't a justification, but I've had to use the virtual floppy drive as recently as this year, to install drivers on proprietary OSes under virtualization.
The administrator can decide to leave the bay empty, but the drive and controller simply cannot be disabled. That's not a bug; there's simply no code and no option at all to do it.
>The article says that even if you turned off the option, Xen and QEMU have a bug which doesn't actually do that.
Incompetence on top of incompetence doesn't invalidate my argument. Minimizing your attack surface should be the norm, unfortunately here on HN it just leads to downvotes.
RHEL cuts tons of devices compared to upstream qemu. Go and grab the source RPM and see the number of '--disable-XXX' options and the additional patches we add to remove devices. We publish a whitelist of devices we allow [which unfortunately I cannot find now, but it's in the RHEL docs online], and anything else is cut.
My main use of QEMU is to run (in isolation, preferably) old software, whether it's some ancient game or some ancient accounting software or what have you. Said software is usually distributed as floppies (or, nowadays, in a world where virtualization is hot shit, floppy images). Thus, even in virtualized environments, there's still a use case, for me at least. I can use DOSBox for a lot of this, I'm sure, but not all the things I run on QEMU run on DOS (and some of the things which do run on DOS don't run on MS-DOS or FreeDOS).
In the physical world, I still maintain quite a few old computers (and I mean really old) that do things around the house or someone else's house. Many of these lack working CD-ROM drives and USB ports (let alone bootable USB ports), so the most surefire means to transfer data to/from them are either over a network (which depends on them having a NIC; this isn't always the case) or via floppies (which pretty much all of them have); floppy drives are also almost universally needed on these machines in order to boot OS installers (and, in some cases, even boot the main OS itself; I have at least one machine that boots off a floppy with GRUB in order to load an OS of choice off a USB thumbstick - one of the lucky few I have that has USB ports without supporting USB boot). Here, Linux having a floppy controller is incredibly useful (whether in virtualized or physical environments), since it makes it easier to create boot floppies and the like with `dd`.
It's why I try to deploy xen dom0s without any QEMU installed at all; reading through past xen CVEs was enough to convince me that HVM guests seem more exposed. If anybody knows a writeup on what you might lose in terms of isolation and protection from guest escapes by sticking to PV, please do share.
Paravirtualization loses instructions which are useful for high performance computing. That's not relevant to isolation/protection... but it is worth mentioning.
> If you're using RHEL, then SELinux further confines the qemu process so although you can run arbitrary code in there (which is very bad) you cannot access any files on the host filesystem except ones which qemu has open.
So, what you're saying is you don't expect attackers who can write exploits to escape a VM to be able to write a linux privilege escalation exploit when SELinux is enabled?
I don't like this trend of "marketing vulnerabilities" with a cute name and a startup-looking landing page. That entire page says nothing about the actual issue, the nice looking graphic just shows how this exploit (and really any exploit like this?) can give attackers access to things outside of a VM. Duh.
It makes sense for the ones that require mass mobilization and mass education to fix. This vulnerability isn't quite that... there are relatively few people managing VM infrastructure, and this only affects certain types of VMs.
That said, it's really hard to market security companies in ways that represent the hard work that they do, in ways that are not all snake oil and spin. So it's hard to blame folks for trying to turn excellent security investigative work into self-promotional opportunities.
(Edits: clarity and trying not to sound judgmental of the parent comment)
> require mass mobilization and mass education to fix
Except that Crowdstrike is heavily involved in 'threat intelligence' so this isn't really about patching vulnerabilities at the technical level but educating non-technical executives on threats and 'threat actors'. So corporate execs can be handed a dossier of recent events, like they were the US President evaluating their national security policy.
The only problem is that threat intelligence has marginal value - infosec changes so rapidly and is so diverse that at the end of the day it is very much simply emotional gratification, which Crowdstrike delivers at a very high price.
In terms of resource utilization, it doesn't seem like a good use of time/money to obsess over each bug as if it were an atypical event in a slow-moving environment. But hey, if it gets a few people at the top to start caring about security, maybe there is some value... I just hope it doesn't result in execs nagging the infosec team for updates on 'venom' and disrupting their work on real security measures for the company by focusing on the latest hot topic.
I see a benefit outside tech circles: I can readily share such a nice presentation form factor to explain to upper management why security and best practices matter. Later on, instead of important issues being hand-waved away at meetings, proper steps can be taken with management approval.
Please don't underestimate the human work needed to be done along with our tech jobs.
So many times have I tried to push things forward (internal system upgrades, new security policies, etc.) that did not have any immediate impact - and then something happens and we have to scramble.
Being able to show this to a non-technical person and have them at least somewhat understand that there is a problem that needs to be addressed is invaluable.
I like to have a name to refer to, instead of "you know this vulnerability CVE-2015-xxxx affecting this software we use on some machines in some of our data centers".
I find it much easier to talk about heartbleed or shellshock (which is really ~7 different bugs). But when googling for a bug to find out which versions/patches fix it, I'll still need the CVE number.
It does explain the FDC IO port buffer overrun under the Q&A heading "What is the vulnerability?"
What "the actual issue" is depends on your POV. One might argue the big-picture view given in the infographic is closer to providing a workable description of the problem for most people than the bit-twiddly details.
I like it. I found the site pretty informative from a layman standpoint. I find the whole marketing of vulnerabilities pretty fun. It's like the ASCII art and MIDI in those keygen software.
It means that folks with little to no technical experience (read: authors of WiReD articles) will latch onto the new buzzword and start regurgitating it left and right, eventually tricking other technologically-illiterate people to latch onto it and start pelting me with questions like "OMG DID YOU HEAR ABOUT VENOM???!!!" and "OMG U BUTTR PASH UR SURVURZ OMGOMGLOL!!!!111one" instead of letting me do my job. All because the publishers want to satisfy their attention-seeking desires ("LOOK AT ME I FOUND A SECURITY BUG AND GAVE IT A HIP COOL BUZZWORD I'M SO SPECIAL!!!!").
I personally don't like the trend for the same reason why I dislike terminology like "ninja" or "rockstar" or "badass" or "devops". It cheapens computer science/engineering into resembling something a bunch of hip middle schoolers yammer on about alongside their video games and their skateboards instead of the multi-billion-dollar professional field it actually is.
"You've been smoking something really mind altering, and I think you should share it.
x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit."
> Theo de Raadt's problem is that he views security the way cryptography experts view cyphers: as an absolute. But security isn't like math. It's not absolute. There are right and wrong ways of doing security.
Not an absolute but there is a right and wrong way to "do security?"
To be fair Boender is attacking the naive maxim that "virtualization is secure." It's just another layer if you isolate processes inside of virtualized run-time environments. Makes sense... it's not snake oil.
However there is no need for "right" and "wrong" in these discussions. The security of any given system exists in a continuum and it will only be a matter of time before the next vulnerability is discovered. I get the sense that all we can do is limit the damage that can be done by any particular system.
It seems that virtualization is just one path towards providing those limits just as chroot and other attempts have been.
I'm most interested in seeing how jitsu and unikernels can turn the tables... not only can a process be wrapped in a virtualization layer but it is short lived and only runs when it is requested. It puts the onus on us to set up the summoner properly and provide safe-guards... but it's just another layer of complexity for attackers to manage.
> I'm most interested in seeing how jitsu and unikernels can turn the tables
The useful aspect about the MirageOS unikernels is that they use the pure Xen PV interface, which has almost no dependency on qemu. No floppy or block/net emulation, timers through the direct Xen shared_info page, and generally as "native" to x86 as Xen permits.
HVM has supported PV timers (and interrupt controllers, and spinlocks) for quite some time now.
>generally as "native" to x86 as Xen permits.
I really don't know if I agree with this. With hardware extensions basic CPU performance is going to be significantly better on HVM instances (No longer having to bounce to the hypervisor every time you make a system call since you once again have three CPU protection rings with ring -1), and that's before getting into SR-IOV, etc.
Unikernels are sweet and all, but without PVH, modern "PVHVM" implementations will outperform PV, and with PVH you're still running in a partial HVM shell.
It's with those thoughts in mind that I usually deploy Xen dom0s without QEMU installed at all (so only PV guests). You just have to read through past Xen CVEs to see that HVM presents a lot of attack surface; admittedly I'm unsure whether PV's is strictly smaller (certainly you leak more info about dom0 to guests), but for guest escapes there just seem to be a lot more opportunities with QEMU in the mix.
A shame, too, because there's a (small, admittedly) lesson buried in the fix--one that can be applied even in low-level languages.
If you have a buffer with odd semantics (such as wrapping out-of-bounds addresses back into bounds), it should probably be wrapped in something that enforces that. In C++, it could be made to look like a normal buffer, except that operator[] is overloaded to wrap for you, and you can make the compiler scream at you if you try to escape that safety net--for an inline class, very likely without any performance cost over adding the wrapping computation to each access by hand.
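A sketch of what that C++ approach might look like (illustrative, not QEMU's actual code): a small class whose operator[] applies the wrap, so no caller can forget it.

```cpp
#include <cassert>
#include <cstddef>
#include <stdint.h>

// A fixed-size buffer whose indexing always wraps. The raw array is
// private, so the only way to touch the bytes is through operator[],
// which applies the modulus for you - escaping the safety net would
// require deliberately reaching around the class.
template <std::size_t N>
class WrappingBuffer {
public:
    uint8_t &operator[](std::size_t i) { return data_[i % N]; }
    const uint8_t &operator[](std::size_t i) const { return data_[i % N]; }

private:
    uint8_t data_[N] = {};
};
```

Because operator[] is trivially inlinable, a compiler should emit essentially the same code as writing `buf[i % N]` by hand at each call site, so the safety comes at little or no runtime cost.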
In C, your options are more limited. The safest is an opaque handle that has to be passed to an accessor function. You're more likely to take a performance hit (unless LTO fixes it for you), but in a floppy drive emulator, I doubt it matters.
If the overhead does matter, a macro or static inline function still makes the access convention easier to memorize, which should make it less likely for someone to forget, and should make code that does forget more suspicious on code review.
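The static-inline variant the comment describes might look like this (hypothetical names; plain C-style code that also compiles as C++): one accessor that is the sole sanctioned way to index the buffer.

```cpp
#include <cassert>
#include <stddef.h>
#include <stdint.h>

#define RINGBUF_SIZE 512 /* illustrative size */

typedef struct {
    uint8_t data[RINGBUF_SIZE];
} RingBuf;

/* The wrap lives in exactly one place. Any raw rb->data[i] elsewhere
 * in the code base then stands out as suspicious on review, and
 * static inline means there is normally no call overhead. */
static inline uint8_t *ringbuf_at(RingBuf *rb, size_t i) {
    return &rb->data[i % RINGBUF_SIZE];
}
```

Unlike the opaque-handle approach, this keeps the struct definition visible, so it documents the convention rather than enforcing it; the trade-off is zero overhead without LTO.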
"Q: How is this different from previous VM escape vulnerabilities?
A: Most VM escape vulnerabilities discovered in the past were only exploitable in non-default configurations or in configurations that wouldn’t be used in secured environments. Other VM escape vulnerabilities only applied to a single virtualization platform, or didn’t directly allow for arbitrary code execution."
Seriously, every vulnerability will have its own cool name and a website now? Even this? Not a single vendor has classified this as even critical.
Yes, like all security vulnerabilities it should be taken seriously, but when every clown out there goes all Heartbleed on you for every security vulnerability they find, because it's the smart marketing move today, you stop taking them seriously, which in the end is counterproductive to this whole "raising awareness" BS they are trying to do.
I don't disagree; however, with no actual exploit (in the wild or PoC, according to RH) and no confirmation of the ability to execute code on the actual host, it's important, yes, but it doesn't really justify the whole name-and-landing-page ordeal.
Not because it's not important, but because it just desensitizes the whole impact of vulnerabilities the caliber of Heartbleed or Shellshock which did affect a large chunk of the servers and machines connected to the internet at the time.
Now they claim it's bigger than Heartbleed, but with no exploit and no clear statement on which actual in-use implementations are affected. Amazon has already come out saying that VENOM never affected their implementation of Xen; if Digital Ocean and Rackspace come out with the same statement, it just makes this whole "bigger than HB" stance silly.
And as far as the corporate/enterprise world goes, well, VMware, Cisco, and MSFT hypervisors have a much bigger share out there and their hypervisors are not affected, so again not much of a bite there.
VirtualBox is affected in a different/partial way. Summary: Patch to 4.3.28, released today.
The vulnerability is not mentioned explicitly in the change log. It only shows up as one of 32 bullet points "Floppy: several fixes". The actual changes are recorded only as "2015-05-08 12:58 Changeset in vbox [55753] by vboxsync: FDC: Fixed DRIVE SPECIFICATION command".
There were some changes related to command buffers five days ago by Frank, but they only address FD_CMD_DRIVE_SPECIFICATION_COMMAND (in a slightly different way than QEMU's developers did it). The VirtualBox source code diffs are at:
The vulnerability does not affect the current VirtualBox FD_CMD_READ_ID, or the versions of the file going way back. Maybe because it might have been forked as far back as 2003? Crowdstrike did point out that the vulnerability was present from 2004. But the vulnerability manifests in two bugs, one of which appears to affect VirtualBox and the other not.
I think this serves as a good example of minimising one's attack surface.
The exploit is in the floppy disk controller, of a virtual machine, in an era when almost no physical machine includes a floppy disk drive, and those entering the field might never have seen a floppy disk other than the "File -> Save" icon; plus the exploit can be triggered even when the FDC is disabled.
Certainly a sobering thought for those using large, feature-filled applications 'just in case' some feature might be needed in the future.
To be fair, serial ports are actually really important and while we frequently don't have physical serial ports anymore, tons of devices still use serial emulation because it is such a simple technology to understand and the hard parts of the protocols are done in userland rather than drivers.
For managing virtual machines, it's more surprising that we give VMs VGA devices rather than just using serial: when using VGA emulation, you cannot trivially write code that reads text on the VM's screen, but if you configure the VM to use a serial console, you can trivially write a program which controls the VM. In a libvirt-managed qemu environment when the OS has its serial console enabled, you can run "virsh console MyVM" and instantly start executing commands and parsing their output. You can also have the OS write its log to serial so that if the OS crashes you can still read the full log. When all else fails, serial still works. Additionally, a virtual VGA device has an infinitely larger attack surface than a serial device.
When doing unattended windows installs, a lot of people use floppy drives to store the Autounattend.xml file. Floppy disk images are the most trivial and smallest images for automation tools to create. They're additionally useful for placing a linux bootloader on to boot a linux install CD with command line parameters.
Most people still use CD-ROM images to install operating systems, and it's basically required for windows. Virtual machine management software also tends to use the cd drive to install guest tools since it's the easiest way to let the guest see large files from outside the VM - nearly every OS can read CDs.
> To be fair, serial ports are actually really important and while we frequently don't have physical serial ports anymore, tons of devices still use serial emulation because it is such a simple technology to understand and the hard parts of the protocols are done in userland rather than drivers.
To be sure! But it's not always necessary.
> When doing unattended windows installs, a lot of people use floppy drives to store the Autounattend.xml file.
> Most people still use CD-ROM images to install operating systems,
Also both true, but they shouldn't be available unless you need them.
Not always necessary, but I think it's useful, if not important, to have by default in any infrastructure which uses long-term VMs and doesn't just replace "immutable" VMs every time a setting is changed. You always want some path to get data into the VM without networking or VGA, otherwise you have a big problem when something goes wrong with the network and you need to fix things in VMs which you don't want to reboot. This is a corner I'm sure enough sysadmins have found themselves in.
For extra-security-conscious deployments, most hypervisors let you remove most hardware, and qemu gives you enough flexibility to define nearly every device on the VM's motherboard at the command line rather than taking a pre-configured motherboard setup. The default settings in most hypervisors give you lots of unneeded hardware, but this hardware is really convenient for any user who is just trying to get a VM up.
I realize that, from a "secure defaults" perspective, the CD-ROM and unused serial port increase your attack surface, but I think the trade is worth it in most scenarios; it's a tough line to draw.
Nowadays security research looks like a lot of extra work. Having to do the research and also think about a logo, website design, choice of colors, pretty diagrams, a cool name, social media interaction, market research to make sure all that isn't too similar to previous bugs. Eww!
I'm guessing this. Why even make floppy emulation available? It has no use cases on AWS. Floppy support only exists really for Windows driver install at install time, if you can't slipstream the driver in. They probably tore out a lot of stuff while developing the EC2 infrastructure.
We're all equal, but some are more equal than others.
This is the endgame of your so-called "responsible disclosure". Those with profit loss exposure win, and the peasants get it whenever the PR company is done making the logo and infographics.
Distributors of operating systems with Xen support.
Here "provider", "vendor", and "distributor" are meant to include anyone who is making a genuine service available to the public, whether for a fee or gratis. For projects providing a service for a fee, the rule of thumb for "genuine" is that you are offering services which people are purchasing. For gratis projects, the rule of thumb for "genuine" is measured in terms of the amount of time committed to providing the service. For instance, a software project which has 2-3 active developers, each of whom spend 3-4 hours per week doing development, is very likely to be accepted; whereas a project with a single developer who spends a few hours a month will most likely be rejected.
----------
Basically, if you provide a service to the public which uses Xen (not restricted by size), or use Xen at large scale internally, you can get on the list. There are several small hosting providers that utilize Xen on that list.
Presumably if you use Xen at small scale internally you're less worried about security vulnerabilities as it is only your employees with root access to the machines - if external users have root access, you probably fall under one of the other definitions.
Or, AWS takes steps to mitigate their exposure, regardless of whether or not they receive embargoed disclosures.
On the Rackspace public cloud I've been through three full-fleet reboot cycles so far. Only one of those affected AWS customers, and AWS handled it in such a way that only a portion of their fleet was affected.
How could AWS do this when Rackspace and others couldn't?
For one, they could stratify guest placement based on instance type and guest OS. (Which I hear they do.) Most recent XSAs have only affected PV or HVM guests, not both. If you keep PV and HVM guests separate ...
AWS seems to be an example of good engineering, not an example of the perils of capitalism.
How is it bad that large providers have an opportunity to patch before the vulnerability is released to the wild?
Maybe next you'll insist that everyone's prevented from patching for a week after disclosure so that smaller companies that don't have the resources to react immediately are not unfairly left behind?
I'm not insisting anything. I'm just saying that lack of immediate and full disclosure is essentially crony capitalism where there are the Big Important Companies That Must Be Protected and then there is everybody else, including small startups and private individuals.
It is fundamentally unfair, and sets up a non-level playing field.
I think it is even simpler than that: The big companies that have thousands of customers doing millions of dollars of business on hundreds of thousands of machines need more time to patch because there is much more money / business to be lost. Not giving large companies time to patch would do more harm than good in the end.
It is fundamentally unfair, and is perfectly reasonable.
You seem to have twisted my "tell everyone and let the fittest survive and thrive" into some weird Harrison Bergeron thing which is the exact opposite of my point.
To answer your question directly, it is bad because it gives them a massive unfair advantage over their smaller competitors. It favors incumbents versus favoring efficiency.
Profit and PR are hardly the goal here -- community awareness and public safety are paramount. Vulnerabilities need to be divulged to the general populace at large.
Tell the entire market the information you have and let them do with it as they will, versus telling your friends first and letting everyone else go to hell.
As long as those friends don't misuse the information (spreading it to blackhats), what's the difference? If it's "hell" for everyone else on day N, it would be "hell" for everyone on day 0.
As other people mentioned, it seems more likely that they're running a Xen fork or have some other mitigation. The redhat notes state that once the host is patched, the VMs need to be shut down for it to take effect:
https://rhn.redhat.com/errata/RHSA-2015-0998.html
The vulnerability is in floppy drive emulator code. It isn't clear to me whether all users are vulnerable or only hosts that have floppy drive devices defined [in their guests] are vulnerable.
If the latter, perhaps Amazon was never vulnerable anyway?
If I were amazon, I would have done an audit of the hypervisor software and removed the floppy driver code entirely if unused, for precisely this reason. This strikes me as a basic, "no-brainer" hardening step for my billion dollar(s) hosting business.
Their PV domains are not affected anyway. Quite possibly they are running qemu in stub domains for HVM as well, rather than on dom0, but you may well be right about the floppy code too.
Given the complexity of QEMU and its pace of development, there is likely an endless supply of such bugs for punching through the QEMU emulation layer. The problem is that most of the time, the QEMU process is running in dom0, giving an attacker an opportunity to hijack the hypervisor. Xen offers a more general solution for this: running QEMU in a stub domain. The main problem with that solution right now is that Xen forked QEMU such that the stub domain only supports running QEMU 0.10 or such - the up-to-date version of QEMU (known in Xen as qemu-upstream) is somewhere around 2.2, but it runs the emulation layer as a process in dom0, which exposes the system as a whole to "VENOM" and related attacks on QEMU.
Can someone help me understand something? If a VM has no attached floppy drive, is it still vulnerable? The site says that disabling a virtual floppy drive is not enough, but if there's no floppy drive at all is that still a problem?
"For many of the affected virtualization products, a virtual floppy drive is added to new virtual machines by default. And on Xen and QEMU, even if the administrator explicitly disables the virtual floppy drive, an unrelated bug causes the vulnerable FDC code to remain active and exploitable by attackers."
Edit: This comment seems to indicate that even lacking a virtual floppy drive, the floppy drive controller is still present and thus the system is vulnerable: https://news.ycombinator.com/item?id=9539191
This is up to the VM provider. I haven't seen a list yet except that this doesn't affect VirtualBox (if no floppy is mounted, the exploit is not possible).
One would expect though that there is no issue if a floppy drive is not attached, and hope that there is not a separate security hole to mount a floppy from sandboxed code (unlikely).
Cloudways supports three cloud infrastructure providers for hosting applications. We asked our points of contact at Amazon AWS, Google Compute Engine and DigitalOcean, and here is what they responded:
DigitalOcean (DO): Being patched. (The DO staff are busy rolling out security updates. The patch will automatically be applied on DO servers inside the Cloudways Platform.)
Amazon Web Services: Officially confirmed to be Safe.
Google Compute Engine: Officially confirmed to be Safe. (A Google representative informed Cloudways, “Google Cloud Platform was never vulnerable to this flaw. We do not use the vulnerable software.”)
Oracle, which develops VirtualBox, said in an emailed statement that the company was "aware" of the problem, and fixed the code, adding that it will release a maintenance update soon.
"We will release a VirtualBox 4.3 maintenance release very soon. Apart from this, only a limited amount of users should be affected as the floppy device emulation is disabled for most of the standard virtual machine configurations," said software lead Frank Mehnert.
Since the vulnerability is in floppy drive emulator code, it isn't clear to me whether all deployments are vulnerable or only hosts that have floppy drive devices defined in their guests are vulnerable. Can someone please clarify?
> For many of the affected virtualization products, a virtual floppy drive is added to new virtual machines by default. And on Xen and QEMU, even if the administrator explicitly disables the virtual floppy drive, an unrelated bug causes the vulnerable FDC code to remain active and exploitable by attackers.
So I guess for KVM you're safe if you don't have a virtual floppy drive, unclear whether it's KVM default though. For the others, you're still vulnerable by an unrelated bug.
"For many of the affected virtualization products, a virtual floppy drive is added to new virtual machines by default. And on Xen and QEMU, even if the administrator explicitly disables the virtual floppy drive, an unrelated bug causes the vulnerable FDC code to remain active and exploitable by attackers."
If there's a solution to this problem, I don't know what it is. Trying to replace an underfunded monoculture with severely underfunded diverse implementations may not even work and may actually reduce security.
[...] security vulnerability in the virtual floppy drive code [...] For many of the affected virtualization products, a virtual floppy drive is added to new virtual machines by default.
That is a pretty simple mitigation. Make sure there are no (unnecessary) virtual floppy devices defined in your VMs.
I checked my VMs (Ubuntu/KVM) and, as expected, none of them have a virtual floppy - they are not added by default on that platform.
"For many of the affected virtualization products, a virtual floppy drive is added to new virtual machines by default. And on Xen and QEMU, even if the administrator explicitly disables the virtual floppy drive, an unrelated bug causes the vulnerable FDC code to remain active and exploitable by attackers."
> How do I protect myself from the VENOM vulnerability?
> If you administer a system running Xen, KVM, or the native QEMU client, review and apply the latest patches developed to address this vulnerability.
> If you have a vendor service or device using one of the affected hypervisors, contact the vendor’s support team to see if their staff has applied the latest VENOM patches.
Or you could, you know, search for "qemu disable floppy" in google, read a bit and apply this flag to the VM:
qemu -global isa-fdc.driveA=
or -nodefaults to only enable the devices you want to enable...
"And on Xen and QEMU, even if the administrator explicitly disables the virtual floppy drive, an unrelated bug causes the vulnerable FDC code to remain active and exploitable by attackers."
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1218611