They had a perfectly valid configuration, and it uncovered a bug in the Linux kernel. That wouldn't have happened if they had ignored the issue and simply tried again.
Of course, do whatever is necessary to get your production box producing, but a well-engineered server is worth the debugging time.
Not at all. Originally, the whole purpose of DragonFly was to serve as a testing ground for its scheduler. It moved on to being much more than that, though, refactoring the entire source tree and rewriting parts of the system. The whole purpose of DragonFly is DragonFly.
Only if you're lucky. Most of these exploits probably took weeks to find and analyze properly; it's not as though one person found more than one a day. They're found because whole teams are working with the Linux kernel at the same time and either happen upon them or actively look for them.
That's specifically incorrect: the results showed that the compressed ZFS was slower on pgbench and faster on TPC-B.
It's also generally an unwise assumption, because you have to know how fast the compression is compared to disk I/O. If, like almost all servers these days, you have more CPU than I/O capacity, the compression overhead will often be buried in the I/O latency, and if the data compresses well it's easy for it to come out faster, because the I/O savings exceed the CPU cost.
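A quick back-of-envelope model makes this concrete. The sketch below is a minimal serial model with made-up illustrative numbers (disk bandwidth, compression ratio, decompression throughput are all assumptions, not measurements); in practice decompression also overlaps with I/O, which tilts things further toward compression:

```python
# Back-of-envelope model: when does transparent compression win a read?
# All figures below are illustrative assumptions, not benchmark results.

def read_time_s(logical_gb, disk_gbps, ratio=1.0, decomp_gbps=None):
    """Time in seconds to deliver logical_gb of data from disk.

    ratio       -- compression ratio (logical/physical); 1.0 = uncompressed
    decomp_gbps -- decompression throughput in GB/s of *output* data
    """
    t = (logical_gb / ratio) / disk_gbps        # physical I/O time
    if decomp_gbps:
        t += logical_gb / decomp_gbps           # CPU time to decompress
    return t

# Assumed: 0.5 GB/s disk, 2x compression ratio, 2 GB/s decompression.
plain = read_time_s(1.0, 0.5)                                   # 2.0 s
compressed = read_time_s(1.0, 0.5, ratio=2.0, decomp_gbps=2.0)  # 1.5 s
print(plain, compressed)  # compression wins despite the CPU cost
```

Under these numbers the compressed read finishes in 1.5 s versus 2.0 s uncompressed, because halving the bytes read saves more time than decompression adds.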
Since LZ4 was designed to be very fast, that seems like a reasonable bar to hit — I see single-core performance on old desktops in the 1.8+GB/s range.