Fixing Mass Effect black blobs on modern AMD CPUs (cookieplmonster.github.io)
562 points by Macha on July 19, 2020 | 178 comments



> We fuzzed that function and found out that it does not give consistent results between Intel and AMD CPUs when SSE2 instructions are used.

I'm sad this wasn't investigated further. Was one of the implementations not standards compliant?


I’ve traced bugs back to similar issues. The underlying cause is that some floating point operations are underspecified, such that the behavior of some bits in the output of some operations is implementation-defined. The intent was to allow future microarchitectures to make small refinements in implementation as transistor budgets grew that might flip a bit in some high-precision edge cases.

In practice, some alternative implementations of these instructions treat the underspecification as license to do just enough to clear the threshold of being technically compliant with the standard and no more. This can show up as material differences in output in the least significant bits in some cases.

For floating point intensive algorithms, these small discrepancies and edge cases will occasionally bubble to the surface as material differences in high-level code behavior. My introduction to this class of bug many years ago was an application that was counting things for tax purposes based on geospatial relationships. On AMD, it produced a count that was off by one.


It's still not clear how a small rounding change results in NaN. It would be understandable if some triangle were off by one pixel on AMD, and probably not noticeable.

So far it looks like a CPU bug which produces NaN on some input.


Graphics uses a lot of stacked matrices. Small errors (more than just rounding, but within spec bounds) can add up to make the result non-invertible. That results in NaNs.
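To make that concrete, here's a tiny C++ sketch of the failure mode (the zero values are contrived just to show the mechanics):

    #include <cstdio>

    int main() {
        // Contrived: accumulated error has driven the determinant of a
        // nearly-singular matrix all the way to zero.
        float det = 0.0f;
        float viaDiv = 0.0f / det;           // 0/0 = NaN
        float viaRcp = 0.0f * (1.0f / det);  // 0 * inf = NaN as well
        printf("%g %g\n", viaDiv, viaRcp);   // prints nan twice
    }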


0/0 = NaN


IEEE floats are a complete clusterfuck. Sadly too much is already invested in them and transitioning to a new and better format... I don't see it happening without an industry-wide consortium sticking with it.


IEEE754 floats are completely fine. They are 100% defined and produce 100% bit identical results across all compliant implementations.

The problem is most implementations include a few extra instructions or modes that are not defined in the IEEE spec. The Fast Reciprocal Estimate instruction that caused the issue in this example is one such instruction.

It's not defined in the IEEE spec. It's only defined in the x86 spec (and corresponding similar instructions are defined for other architectures). As its name implies, it's not defined to give an exact result. It's designed to be fast and it's designed to be a rough estimate to some level of precision.

You should not use such instructions if you care about any kind of accuracy or cross-platform reproducibility. Stick to the instructions that are actually defined in the IEEE 754 spec. And stick to one of the recommended rounding modes, making sure all platforms use the same rounding mode.


> You should not use such instructions if you care about any kind of accuracy

This seems like an overly strong statement, as one reading of it is that "Fast Reciprocal Estimate" is just a bad rng function. I guess you mean that it's up to the user to provide their own error estimation?


Matrix inversion requires taking the reciprocal of a determinant. I don't have the D3D binary to disassemble it, but chances are that they used the RCPSS/RCPPS instructions to get an approximate reciprocal. The precision of the approximation is specified. Both Intel and AMD satisfy the specification, but with different results.
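For illustration, here's a small C++ sketch contrasting the approximate instruction with an exact IEEE divide (whether D3DX really uses RCPSS here is, as said, an educated guess):

    #include <immintrin.h>
    #include <cstdio>

    int main() {
        float det = 3.0f;  // stand-in for a matrix determinant
        float approx, exact;
        // RCPSS: relative error < 1.5*2^-12, low bits are vendor-specific
        _mm_store_ss(&approx, _mm_rcp_ss(_mm_set_ss(det)));
        // DIVSS: fully specified by IEEE 754, bit-identical on every CPU
        _mm_store_ss(&exact, _mm_div_ss(_mm_set_ss(1.0f), _mm_set_ss(det)));
        printf("approx: %.9g\nexact:  %.9g\n", approx, exact);
    }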


Here’s Intel versus AMD relative error of RCPPS instruction: http://const.me/tmp/vrcpps-errors-chart.png AMD is Ryzen 5 3600, Intel is Core i3 6157U.

Over the complete range of floats, AMD is more precise on average, 0.000078 versus 0.000095 relative error. However, Intel has 0.000300 maximum relative error, AMD 0.000315.

Both are well within the spec. The documentation says “maximum relative error for this approximation is less than 1.5*2^-12”, in human language that would be 3.6621E-4.

Source code that compares them by creating 16GB binary files with the complete range of floats: https://gist.github.com/Const-me/a6d36f70a3a77de00c61cf4f6c1...


This is great (I was actually just toying with the _mm_rcp_ss intrinsic)! It would also indicate that RCPSS is not the culprit.


> It would also indicate that RCPSS is not the culprit.

I’m not sure about that. I think it might be. Despite both being within spec, the results are numerically different.

Modern DirectXMath doesn't use these approximated instructions for inverting matrices; it's open source, here: https://github.com/microsoft/DirectXMath/blob/83634c742a85d1...


> Despite both being within spec, the results are numerically different.

True, but it's quite common that floating point ops' throwaway bits (i.e. beyond the epsilon) will be numerically different between vendors.

> Modern DirectXMath doesn't use these approximated instructions for inverting matrices; it's open source, here: https://github.com/microsoft/DirectXMath/blob/83634c742a85d1....

Good find, neither does Wine, actually: https://doxygen.reactos.org/dc/dd8/d3dx9math_8h.html#a3870c6...

Would be interesting to see if this bug also happens on Linux.


BTW, there’s much less reason to use these approximations on modern hardware.

The divps instruction (32-bit float divide, the precise version) was relatively slow back then, e.g. on AMD K8 it had 18-30 cycles of both latency and throughput, 21 cycles on AMD Jaguar; Core 2 Duo did it in 6-18 cycles.

Fortunately, they fixed the CPUs. Skylake has 11 cycles of latency for that instruction, Ryzen 10 cycles.

It’s similar for square root. Modern CPUs compute non-approximated 32-bit sqrt(x) in 9-12 cycles (Ryzen being the fastest at 9-10 but Skylake ain’t bad either at 12 cycles), old CPUs were spending like 20-40 cycles on that.


> it's quite common that floating point ops' throwaway bits (i.e. beyond the epsilon) will be numerically different between vendors.

I think these implementation-specific shenanigans only applied to legacy x87 instructions which used 80-bit registers. SSE and AVX instructions operate on 32 or 64-bit floats. The math there, including rounding behavior, is well specified in these IEEE standards.

I’ve made a small test app: https://github.com/Const-me/SimdPrecisionTest/blob/master/ma...

It abuses AES instructions to generate a long pseudo-random sequence of bits, then re-interprets these bits as floats, does some FP math on these numbers, and saves the output.

I’ve tested addition, multiplication, FMA, and float to integer conversions. All 4 output files, 1 GB / each, are bitwise equal between AMD desktop and Intel laptop.
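The core of the approach is roughly this (a simplified sketch rather than the actual test app; the seed constants and file name are made up):

    // Compile with -maes. Run on two different CPUs, then diff the files.
    #include <immintrin.h>
    #include <cstdio>

    int main() {
        FILE* f = fopen("mul-output.bin", "wb");
        __m128i state = _mm_set_epi32(0x243F6A88, 0x13198A2E, 0x03707344, 0x299F31D0);
        const __m128i key = _mm_set_epi32(0x082EFA98, 0x38D01377, 0x6A09E667, 0x71374491);
        for (int i = 0; i < 1 << 20; i++) {
            state = _mm_aesenc_si128(state, key);     // next 128 pseudo-random bits
            __m128 a = _mm_castsi128_ps(state);       // reinterpret the bits as 4 floats
            state = _mm_aesenc_si128(state, key);
            __m128 b = _mm_castsi128_ps(state);
            __m128 product = _mm_mul_ps(a, b);        // IEEE-defined multiply
            fwrite(&product, sizeof(product), 1, f);  // identical bytes expected everywhere
        }
        fclose(f);
    }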


> It abuses AES instructions to generate a long pseudo-random sequence of bits, then re-interprets these bits as floats, does some FP math on these numbers, and saves the output.

Nice work! I'd be extremely curious to see if this still holds on intrinsics like `_mm512_exp2a23_round_ps` or `_mm512_rsqrt14_ps` (I'd wager it probably won't).


I don’t have AVX512 capable hardware in this house. Lately, I mostly develop for desktop and embedded. For both use cases AVX512 is safe to ignore, as the market penetration is next to none.


> True, but it's quite common that floating point ops' throwaway bits (i.e. beyond the epsilon) will be numerically different between vendors.

This is a common sentiment, and it is perhaps a helpful way to look at things if you don't want to dig into the details (even if it's not quite right).

But it's worth understanding the magnitude of errors. rcpps will be 3-4 orders of magnitude "more wrong" compared to the typical operation (if you view the "epsilon" of most operations to be the error after rounding). Or said another way: it would take the cumulative error from many thousands of adds and multiplies to produce the same error as one rcpps operation.
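A quick back-of-the-envelope with the numbers from upthread (assuming the worst case of half an ulp of error per correctly rounded op):

    correctly rounded float op:  max relative error 2^-24      ~ 6.0e-8
    rcpps, per the spec bound:   max relative error 1.5*2^-12  ~ 3.7e-4
    ratio:                       ~ 6100

So one rcpps can be about as wrong as thousands of worst-case adds or multiplies combined.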


Related, LLVM developers evaluated the use of these instructions too:

https://bugs.llvm.org/show_bug.cgi?id=21385

On your graph, what’s red and what’s blue, and what is on the x axis?


AMD is red.

X axis is source value, I've extracted a small subset of the data to graph in Excel, there: https://gist.github.com/Const-me/a6d36f70a3a77de00c61cf4f6c1...

Also, I've just recalled Y is absolute error, not relative; that's why it follows the shape of 1/x.


> I've extracted a small subset of the data to graph in Excel

Thanks. If you still have the data (or if you can regenerate it) it would actually be possible to make a few small graphs that would cover the whole set:

Basically, the idea is to have as many x points as there are different exponents, which for 32-bit floats is at most 256. Then on the y axis one draws the number of bits of maximum distance between the truly correct value of the mantissa and the calculated mantissa over the whole interval of one exponent. That would allow comparing the implementations of Intel and AMD separately, and these graphs are what I'd be very interested to see. So the idea is to find the maximums within each interval, and there is then a limited number of intervals. Only if such graphs match between AMD and Intel would it be interesting to compare inside the intervals, but differences on that level are the ones I'd expect to be the obvious causes of the most problems, where results like in the article (full black instead of shadow) wouldn't be surprising.

That way you don't have to juggle huge files; it's only 256 values per CPU that have to be compared in that pass.


I can re-generate the data files, it's a couple pages of code to copy-paste from my gist and compile, but I'm not sure how to implement what you wrote.

The results are not guaranteed to have the same exponent, e.g. 1 / 0.499999 can be either 1.999999 or 2.0000001; both are correct within the precision, but they have different exponents.


The algorithm would be approximately:

1) Use the binary representation of the numbers! To do so, cast the resulting float to an unsigned integer, then use bit masks and shifts to extract the exponent and mantissa. Note that the leading 1 is not explicit but implicit in the IEEE format unless it's a denormal number (so make it always explicit during the extraction).

2) Use the exponent of the correct result as the "interval" reference.

If the exponents are the same, subtract the smaller mantissa from the bigger one; that's the absolute "distance" between the two numbers -- the goal is to find the biggest absolute distance in each interval.

3) If one of the 2 values being compared has a different exponent, they can be brought to the same exponent by a bit shift. Shift the mantissa of the one with the bigger exponent left accordingly. Again do the subtraction and use the result as the absolute distance. The goal is to figure out the biggest absolute distance in each interval (maintaining a maximum for each interval).

In short, think binary, not decimal, and measure using those values. Binary values are the only ones that matter; the decimal representation doesn't necessarily reflect the exact values of the bits.

Examples:

float 1.0 == unsigned 0x3f800000; here the exponent is 127 == 2^0 and the mantissa is 0 with the implicit 1 at the start, i.e. explicit: 0x800000

float 0.999999940395355224609375 == unsigned 0x3f7fffff; here the exponent is 126 == 2^-1 and the mantissa, made explicit: 0xffffff

The absolute distance between these two numbers is 1 (adding one to the lowest bit of the mantissa of the smaller number would produce the bigger number: 0xffffff + 1 = 0x1000000, the latter being the mantissa adjusted to the exponent of the smaller number (0x800000 << 1)). If the "correct" number was 0x3f800000, then even though a shift was needed to calculate the absolute distance, the interval is still 0 (i.e. 0 is the x axis value, as its exponent was 2^0, i.e. 127, and the value plotted on y is 1 until a bigger distance occurs).

For more examples of the format you can play here:

https://www.h-schmidt.net/FloatConverter/IEEE754.html

Also note that a few exponents are special, meaning infinity or NaN. Whenever the "correct" answer is not a NaN or infinity but the "incorrect" is, that should be treated specially, if it actually happens.
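In code, the idea would be roughly this (a scalar C++ sketch of the above, using the difference of the integer representations as the distance; positive normal numbers only, and slow since it is not vectorized):

    #include <immintrin.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        int maxBits[256] = {};  // per-exponent maximum error, as the highest differing bit
        for (uint32_t u = 0x00800000u; u <= 0x7F7FFFFFu; u++) {  // every positive normal float
            float x;
            memcpy(&x, &u, 4);
            float approx;
            _mm_store_ss(&approx, _mm_rcp_ss(_mm_set_ss(x)));
            float exact = 1.0f / x;
            uint32_t ua, ue;
            memcpy(&ua, &approx, 4);
            memcpy(&ue, &exact, 4);
            // For same-sign floats, the difference of the integer representations
            // is the distance measured in representable float values.
            uint32_t dist = ua > ue ? ua - ue : ue - ua;
            int bits = 0;
            while (bits < 32 && (dist >> bits)) bits++;  // position of the highest wrong bit
            int bucket = (ue >> 23) & 0xFF;              // biased exponent of the exact result
            if (bits > maxBits[bucket]) maxBits[bucket] = bits;
        }
        for (int e = 0; e < 256; e++)
            if (maxBits[e]) printf("2^%d: max error %d bits\n", e - 127, maxBits[e]);
    }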


Results from Zen 2: https://github.com/Const-me/SimdPrecisionTest/blob/master/vr... Results from Skylake: https://github.com/Const-me/SimdPrecisionTest/blob/master/vr...

The Total, exact, less, and greater columns contain counts of floats in a bucket. The sum of “Total” gives 2^32, the total count of unique floats.

Computing the max bit that differs is too slow for the use case; neither SSE nor AVX has a vector version of the BSR instruction. Instead, I’m re-interpreting floats as integers and computing the difference of the integers. The maxLess, maxGreater, and maxAbs columns have that maximum error, measured as a count of float values. The value 4989 means the mantissa had the lowest 12-13 bits incorrect.

Source code is here: https://github.com/Const-me/SimdPrecisionTest/blob/master/rc... Not particularly readable because I’ve used AVX2 and OpenMP, but this way it takes less than a second on a desktop, and maybe 1.5 seconds on a laptop, to process all of these floats.


Based on how I understand the numbers in the tables, it looks to me like both implementations behave the same at the critical points, and AMD obviously achieves less distance from the "exact", but has some kind of truncating instead of rounding logic which changes the distribution of the approximations, and you discovered that by counting "less" and "greater." Congratulations!


Really fascinating, thanks!


Funny, I think this graph is from a thread where I got roasted for arguing that differences in CPU implementations meant that Intel (or anyone else really) would need to be careful about shipping SIMD software on processors they don't test.


Rightfully so. Because they were replacing the SIMD code with non-SIMD code that was similarly untested, with the side effect of crippling performance on their competitor's chips. That's a really bad kind of "being careful".


I'm not aware of instructions that have this implementation defined behavior risk in non-SIMD code, and you can be sure that Intel has tested this. If Microsoft had shipped a non-SIMD version of D3DXMatrixInverse, this bug likely could not have existed (or it would have been caught on the developer's machines).

In this case (games), they probably made the right tradeoff despite the bug. But in general? I don't want to rehash the argument, I'm just really, really glad I'm not the one flipping that switch (or subject to the hordes of HNers who think I'm a monster for not flipping it).


Correct, I've scavenged the comment, and the data+code, from an old thread here on HN.


In case anyone is wondering why RCP[PS]S and RSQRT[PS]S are specified to give only a relatively small number of digits of accuracy (roughly 12 bits, per the 1.5*2^-12 bound), it's because they use the fast inverse square root algorithm in hardware.

The IEEE 754 floating point representation gives you an easy way to roughly approximate the log2 of a number. The exponent gives you exactly the integer log2 of the number, and the mantissa gives you a fractional linear term that you can drop onto the integer part to bring it closer to the actual log2. This lets you do some fun things fairly easily.

x -> log2(x) -> -log2(x) -> 1/x

x -> log2(x) -> -0.5 log2(x) -> 1/sqrt(x)

x -> log2(x) -> 0.5 log2(x) -> sqrt(x)

The lossiness of the conversion limits your precision, but there's a lot of times you don't give a shit. So the instructions are still valuable even if they're only approximately correct. For the reciprocal case, it will do something like this:

https://godbolt.org/z/n6M4az

I'm not sure where the nondeterminism comes in between AMD and Intel. As you can see, with the reciprocal, there's no magic constant like there is with inverse square root. Maybe they're fudging something, or have a different form of Newton's method, I dunno.
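For reference, the reciprocal version of the trick boils down to something like this in software (a sketch; the magic constant 0x7F000000 is just two copies of 1.0f's bit pattern, 2 * 0x3F800000):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Approximate 1/x by negating log2(x) in the integer domain: the bits of a
    // positive float are roughly 2^23 * (log2(x) + 127), so subtracting them
    // from 0x7F000000 corresponds to -log2(x), i.e. 1/x.
    float approx_rcp(float x) {
        uint32_t i;
        memcpy(&i, &x, 4);
        i = 0x7F000000u - i;
        float y;
        memcpy(&y, &i, 4);
        return y;  // crude; a Newton step y * (2 - x*y) would refine it
    }

    int main() {
        const float tests[] = {0.5f, 2.0f, 3.0f, 10.0f};
        for (float x : tests)
            printf("approx 1/%g = %g (exact %g)\n", x, approx_rcp(x), 1.0f / x);
    }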


Approximations implemented in HW generally use different techniques than ones in SW. They typically start with a lookup table, possibly followed by some other steps. Someone reverse-engineered the old AMD 3DNow implementations here:

https://pdf.sciencedirectassets.com/272990/1-s2.0-S157106610...


It is utterly bizarre to me that a table lookup is preferable to integer subtraction. The f32 registers are already hooked up to i32 ALUs. A table lookup requires die space, while mine requires none, and the i32 subtraction is already heavily optimized.

An initial thought I had was that their table lookup is to find a magic constant that might nudge the final value in the right direction (instead of the magic constant 0x7F000000 that my code boils down to). But that doesn't seem to be what my Kaby Lake is doing.

This is gonna bug me all night, I know it.


> The f32 registers are already hooked up to i32 ALUs.

Technically they are. Practically, many CPUs, especially older ones, have a non-trivial latency cost for passing a value between FP and integer ALUs, like a couple of cycles in each direction.

Ever wondered why there are 3 sets of bitwise instructions, e.g. pandn, andnps, andnpd, which do exactly the same thing with the bits in these registers? Now you know.


They're basically the same thing, just the integer aliasing approach is a lookup table with one element in it, while an actual lookup table will have more, giving better precision for roughly the same cost (or even better, since you don't need to load the constant into a register).


Yep, RCPSS is present in d3dx9_31!intelsse2_D3DXMatrixInverse.


That seems likely. Per the SDM the required precision is to within 1.5*2^-12, which is quite low and allows a lot of slop in the implementation.

One possibility is that there's an accidental equality test on the path of a runtime (AMD) value vs. a value that is computed at compile time (presumably on an Intel CPU).


deleted


I think you've lost sight of the fact that this entire discussion is constrained to the 4x4 matrices used in computer graphics. How to best invert a matrix of a particular small fixed size is a very different question from how to best invert matrices in general.


deleted


While true, there is a reason that the GLSL standard includes builtin data types for matnxm for all n,m in {2, 3, 4}. The most common matrix operations in computer graphics are related to linear transforms on color spaces and geometry. Both of those max out at transforms on 4D vectors, for RGBA and 3D projective geometry.


They seem to have assumed it was taking the SSE2 path, but never checked that, just replacing the function at a higher level for the final fix.

I’m guessing that the 3dnow! path was still being chosen based on a check like processor brand == AMD and processor generation >= x, assuming that 3dnow! would never be removed. This is something that has already been discovered in other games.


Just some months ago I was re-playing this game and I was looking around for fixes for this bug. This is incredible, and _very_ welcome. Thank you for the amazing write-up, it's super interesting.


Just to clarify, I am not the original author, I'm not sure if they read HN.


The "Rafael" mentioned in the article seems to have answered a comment above.


It's completely off topic, but articles like this one are the reason why I love HackerNews so much and don't tell anyone about it.


If you love hackernews why wouldn't you tell anyone about it?


HN is such a nice place that I'd feel bad telling people about it just to have them come and ruin it like many Internet forums I've seen in the past. Of all the people I know, there are maybe 5 that have a place here, and most are already long time readers.


Yep, I never link to HN on social media.

I'll talk about it in person with people I know at work or I'd mention it on my old blog - but I don't want to risk it to the "unwashed masses".

But maybe I'm being too cautious - it is pretty in-depth here. That alone might dissuade most of the shitposters. And the karma requirements for downvoting seem sufficient.


HN is, broadly speaking, ad free and has a high concentration of technical experts.

More people mean a broader dilution of the level of discourse and knowledge, and greater and greater incentives to market / spam / propagandize.

Ad free in the sense that there are no obviously in-your-face efforts at graphical marketing; HN is run by a startup incubator and has a startup focus; flogging your latest release is the raison d'etre of the site.


It’s frowned upon by true intellectuals because people discuss things they are not experts in: https://www.newyorker.com/news/letter-from-silicon-valley/th...

It’s similar to recommending slate star codex to someone. Unless that person is comfortable with rationally discussing uncomfortable topics, they will think you a nut job who supports everything on the site.


To be the only one who knows about this and not to cultivate the competition


Apparently one common fix for this, according to some Reddit post, is to use framecapping. It might be that I am too biased against delta timing in games (i.e. using the time delta between frames for animations) because I've seen way too many games break using it, and it is the reason I run any game older than ~7 years with framecapping enabled. But given the description on the site, I guess what is really happening is that since the game uses mostly baked lighting, they sample the environment lighting to apply to the entities, and then -since the sampled positions, be it probes or lightmap lumels, are few and in fixed locations- they interpolate them as the entities move, using the time delta between frames to make that interpolation look smooth. Then as the game runs on faster hardware, it ends up with smaller time deltas, which in turn breaks things, because time deltas are evil :-P.

Though that is obviously a guess (that this is what happens, not that time deltas are evil, that is fact :-P), I've done something similar in the past for getting light on dynamic entities from a static environment, so some things did click.


Just out of curiosity, what is the problem with delta timing and what is the alternative?


Numerical instability introduced by very small and very big and always varying delta values. At its core it is something like running

    pos += speed*delta
   
for every frame with delta having varying precision depending on what is rendered on the current frame.

The solution is to change this to run

    pos += speed
at fixed intervals while still rendering every frame. This will fix any issues with delta timing, but it will force your animations to run at the interval you choose regardless of framerate. If you use something like 60Hz, however, it will be fine for the vast majority of users since 60Hz screens are everywhere, but you'd still be wasting time rendering unnecessary frames. One solution for this is to cap the framerate (vsync at 60Hz will do that, but not everyone likes vsync for unrelated reasons); a better solution is to interpolate the previous and current states using the interval between updates when rendering, so something like

    visible_pos = prev_pos*(1.0-inbetween) + pos*inbetween
with inbetween being calculated using something like

    inbetween = (now - last_time)/(1000.0/interval)
(last_time becomes now in the next update - i.e. when pos += speed is called)

The drawback with this is that in the worst case (inbetween=0) the visible output is one frame behind, though without any form of framecapping (including no vsync) this should rarely be the case, and with 60Hz intervals it shouldn't be visible even on 120Hz+ monitors. But if you want to avoid this you can have some systems use the current position instead of interpolating it, e.g. have the camera's direction in first person games bypass the interpolation so that when the user moves the mouse they get the most instant feedback.

Personally I've used this approach in my last engine and it has buttery smooth and instant response without any perceivable lag or issues even on a 120Hz CRT monitor (pretty much all modern flat panel PC monitors have response time issues that can hide frame latency issues, but nothing stays hidden on a fast CRT :-P).
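Putting the snippets above together, the whole loop looks roughly like this (a C++ sketch with placeholder update/render functions, using the 60Hz interval from above):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    float prev_pos = 0, pos = 0;
    const float speed = 2.0f;  // units per update, not per second

    void update() {            // fixed 60Hz step, no deltas anywhere
        prev_pos = pos;
        pos += speed;
    }

    void render(float inbetween) {
        float visible_pos = prev_pos * (1.0f - inbetween) + pos * inbetween;
        (void)visible_pos;     // draw using visible_pos here
    }

    int main() {
        const auto interval = std::chrono::duration<double>(1.0 / 60.0);
        auto last_time = Clock::now();
        for (;;) {             // render as fast as the hardware allows
            auto now = Clock::now();
            while (now - last_time >= interval) {  // catch up in fixed steps
                update();
                last_time += std::chrono::duration_cast<Clock::duration>(interval);
            }
            // fraction of the way from the previous update to the next one
            float inbetween = (float)((now - last_time) / interval);
            render(inbetween);
        }
    }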


Could you just allow variable delta between two bounds that you control like 1fps and 600fps and make sure that computations work at both capped extremes to avoid degenerate behavior? Or are there downsides to that approach as well?


Well, considering I've seen several games break even at 120fps (they were probably never tested at such high framerates), I'd say there are. Though the lower the cap, the less likely you are to see issues in such games (probably because the games were tested with those framerates).

But the tiny deltas are one issue. Another is that you get variable deltas that can go from big to small (even capped) from one frame to another, and this can still introduce numerical instability and harder-to-spot heisenbugs.

It might sound like a PITA to keep the previous state around, but you really only need it for things that can't be calculated on the fly (e.g. a position that can arbitrarily change); others can be calculated (e.g. particles). In the long term you are making a more robust system and saving yourself from having to chase after weird bugs. By having everything run at fixed updates you know that once you see something working, it'll keep working regardless of the framerate.


It's hard to make your game physics simulation numerically stable and consistent across a wide range of timesteps, unless you make it really complicated. Things like collision detection are often implemented in a way that only works for a certain range of object speeds and timesteps—and that's just one of many glitches that commonly result from running the physics simulation at an unintended frequency. Dividing things by unexpectedly tiny numbers can produce surprisingly big results.

The correct solution is to run the simulation on a different thread from the rendering, so that the simulation can be run at an appropriate frequency and the rendering can proceed at whatever framerate the user's hardware is capable of. The more commonly used "solution" is for the game to run the simulation and rendering on the same thread, and cap the framerate as a way to indirectly cap the simulation update frequency. Occasionally you find a game that merely assumes that the framerate won't go over 60Hz, and if your monitor is faster, the game itself runs in fast-forward.


No, this is not how it's mostly done correctly in practice. Most physics calculations are run at a predetermined frequency, and then the appropriate number of iterations is done before drawing each frame (it may be zero). And then one can use the remainder of the delta to interpolate positions if needed.

I think very few games run the physics in a different thread, that sounds like asking for trouble.


> No, this is not how it's mostly done correctly in practice.

I don't believe anyone was asserting that how it's usually done in practice is the right way to do it. Developers obviously have a strong preference for easy over correct and flexible.

What you describe is a reasonable compromise for developers who are afraid of multithreading even with a straightforward producer-consumer data flow. It comes with its own complications, like having to buffer input events along with their timestamps to apply them during the right catch-up iteration of the simulation.


Interesting but:

1) How do you deal with hardware that can't run the simulation at the appropriate frequency?

2) How do you keep simulation and animation smooth and linear over time when facing processing oscillations, if not by using time deltas?

3) Is there a graphics/processing demanding game that doesn't use time deltas?


Usually a constant time step is used, and you throttle the rate at which you run physics ticks to make that match wall clock time. It’s common for game physics engines to also run iterative integrators for a fixed number of steps per tick instead of to convergence, which reduces variation in work per tick. If the system can’t keep up, you start dropping physics frames.

You keep animation smooth by decoupling it from physics; the output of your physics engine will generally include at least linear and angular velocities you can use for interpolation. This kind of thing is necessary anyway if you’re running your physics simulation on a server and have to communicate to the renderer over the network.


With the simulation as a separate thread, and presumably with a well defined interface, a less intensive (and lower TPS) simulation could be switched out at lower performance levels.


Running in another thread does not make it correct, it only increases performance.


I think you completely misunderstood that sentence. Let's try again:

> The correct solution is to run the simulation on a different thread from the rendering, so that the simulation can be run at an appropriate frequency and the rendering can proceed at whatever framerate the user's hardware is capable of.

The "running in another thread" is a way to enable running the simulation at a fixed frequency where it will produce "correct" results, while not imposing the same constraints on the rest of the game engine that are not as sensitive to timestep issues. I never said that moving the simulation to a thread of its own is the entire solution itself, and "increasing performance" isn't the goal or result.


Ignoring your condescending tone, there is no need for another thread to "enable" running the simulation at a fixed frequency.

In fact, a myriad of games (and other kinds of simulations) have been doing exactly that for a very long time.


Super tangential and subjective, but I found the writing in the Mass Effect games subpar. I know a lot of people think the opposite. I've spent a fair amount of time looking for games with writing I like, but it seems to be an uphill battle. Someone suggested to not play for writing, which somewhat makes sense and has led me to do more reading, which I've been thoroughly enjoying. I replaced looking for games with looking for books and find it more rewarding. That said, for the purposes of a survey, if nothing more, what game stories did you enjoy?

To start it off, Kingdom Come: Deliverance was a big deal for me (though not sure if I especially liked the story or just that it didn't get in the way), and Planescape: Torment is a favorite too (still remember the wonder with which I explored it the first time).


> Someone suggested to not play for writing

This is not correct nowadays (it was surely so 15+ years ago, very roughly before the introduction of walking simulators and/or the expansion of the indie market).

Regardless, games that are strongly centered on narrative (writing) don't require a huge investment, so I think you'll mostly (or even exclusively) find them in the indie area; action/RPG games also are not the best in this department.

I suggest starting with "What remains of Edith Finch" - you won't be disappointed, assuming you don't necessarily look for action/RPGs.


Fallout: New Vegas is essentially a big choose-your-own-adventure platform with several major choices of varying mutual exclusivity. The real heart and soul of FNV, though, is in the incidental stories found in side quests and companion quests--while they are rarely necessary to "win the game" (i.e., trigger the credits screen), they are often more compelling than the major quests (find the MacGuffin and pick A, B, C, or D) and allow much greater opportunity for forming the player character. I'm not sure that you'll find the total storytelling better than that of a good or great book, but I think New Vegas still leverages a type of narration that is unique to (but not guaranteed by) video games.

edit: Just remembered that Planescape Torment was created by Black Isle, the studio that also worked on the first two Fallout games. Black Isle later dissolved, but the key players later formed Obsidian, the studio that developed New Vegas on Bethesda's behalf. I imagine it's very likely you've already played FNV (in which case, I'd love to hear how your experience aligns with mine), but if not, it's probably up your alley.


The recent game The Outer Worlds is also very FNV-like. I really like it!


I, on the other hand, found it very lackluster. Most of the game's environments are filled with identical copy-paste monsters, it's almost entirely lacking all the incidental background quests and character material that make even the "bad" Fallout games feel like there's something new around every corner, and the game very visibly gets a simpler, less player-driven narrative as you go to each new world.


The 'side quest' side of all fallout games is usually far stronger and more interesting/rewarding for sure than the main trail which I think is almost deliberately made flat/weak to encourage you to explore the world more. The really good ones I think are the stories you piece together from terminal entries, particularly those in vaults (Vault 11!).


I had received very strong recommendations about the first two Fallout games.


They're all right. I think there are some nostalgia goggles there, same with Planescape: Torment.

Decent writing, for a game, but the isometric view isn't great and the gameplay is clunky. Plot and theme-wise it holds together better than Fallout 3 -- which has plot holes so big you can drive a bus through them -- but FO3 was a lot more fun to play, at least the first 2-3 playthroughs, and has so many freakin mods it's a different game.


The very first two, like from 1997?[1] or the first two mainstream ones (fallout 3 and NV)?

[1]https://en.m.wikipedia.org/wiki/Fallout_(video_game)


Yes, the actual first two.


I know people love games, but a lot of the suggestions have writing on par with if not worse than Mass Effect.

Give Disco Elysium a go. The writing's good, but how it uses Gameplay to explore that writing is what makes it great. Similar Amnesic setup to Planescape with interesting dialogue mechanics.


Interactive fiction games. I've never been a big gamer (I'm actually in a year-long hiatus from Mass Effect currently...) but I make a point of playing an interactive fiction game every few months. The older ones sometimes favor logic puzzles instead of a plot, but the ones listed below do not. They take some getting used to, though.

My favorites in terms of writing:

Newish:

  - Spider and Web
  - Babel
  - Photopia
  - Superluminal Vagrant Twin
  - Harmonia
  - The Dreamhold
Oldish:

  - Trinity
  - A Mind Forever Voyaging


The Internet often makes me feel old, but seeing Photopia called a "newish" game certainly put some spryness in my joints again. It's not that it's not a great game - it is - but even in its medium, I do think it should be called a classic by now. A lot of things have changed since it was released.

Over the last several years of very occasionally playing interactive fiction, I've been particularly impressed by:

- Cactus Blue Motel, by Astrid Dalmady. 2016. A coming-of-age story with a bit of magical realism, written in Twine. Highly accessible, and it takes just minutes to give the game a try: http://astriddalmady.com/cactusblue.html

- Chlorophyll, by Steph Cherrywell. 2015. Also a coming-of-age story, but mostly a rip-roaring scifi adventure. Could well make a good introduction to the more modern views on interactive fiction.

- Coloratura, by Lynnea Glasser. 2013. Carpenterian horror from an unusual perspective. Swept the awards in the IF community when it came out.

- Eat Me, by Chandler Groover. 2017. A twisted fairytale that's thoroughly obsessed with food - the richer and the more varied, the better. A great showing of how much a writer who's willing to go far enough with it can do with prose style.


The older I get the more I find all games to have horrible writing. Even games I used to love like ME are painful on replay. Most conversations boil down to “I don’t want to do a thing.” “But I’m Commander Shepard and I’m picking the blue option.” “OK, I will do the thing now!” Recent games with ‘good writing’ (pillars of eternity, divinity, MGS5) have all left me completely cold and wondering if I’m even playing the same game as reviewers are.

Maybe I can recommend games with a strong atmosphere instead. Like Morrowind, Deadly Premonition, Sleeping Dogs, Kentucky Route Zero, Silent Hill 2/3. Nier Automata if you like anime (which I don’t, so I didn’t find it as thrilling as many game reviewers seemed to).


Have you tried Planescape: Torment? The original Deus Ex (the prequels are definitely lackluster)? VtM: Bloodlines? Life is Strange? Dishonored?


+1 for VtM: Bloodlines. There is a review by a dude who goes by sseth (that's with 2 x s) that does a good job explaining why it's great.


What can change the nature of a man?


The higher the agency the less need for a plot. If you are looking for strong writing then you'll need to play games that are mostly based on a linear story with less agency. By linear story I don't just mean a half hearted chain of quests. Plot heavy games are almost indistinguishable from books. Sometimes you spend 10 minutes doing nothing but reading.


There is still some good writing out there for sure. Check out Spider-man on PS4 - it's a linear game so there aren't conversation options to choose, but the writing is up there with anything else in the spider-man/marvel universe in any format in my opinion.

The games you mention all seem to be games with options for dialog, which if you think about it is hard to write for; games are the only medium where you're essentially choosing your own adventure, and the writing has to be able to account for any/all choices and still end up leading you towards some kind of conclusion that makes some sense.


Hmm, I agree the writing in games can be on par with the writing in comic books and superhero movies, but I’m not sure that’s a compliment :)


Have you tried Disco Elysium?


Waiting for the switch version! I have a strange feeling I won’t like it though. All the dialogue I’ve seen is too self-consciously ‘wacky’. Although I don’t know to what extent that’s influenced by the fact that I see it via youtubers playing it and they’re picking the stupid options.


I think that Mass Effect had pretty good writing in the first game but the later games are pretty bad. Disco Elysium has great writing, you might check that out.


The atmosphere of the first Mass Effect was incredible. More than a decade later, I still have fond memories of that game.

The second one though, didn't really click with me, and I stopped playing the series there.

Maybe with EA games being on Steam now, I could give the third one a try.


I have a different take here: ME1 I loved, it did some incredible world-building and the reveal of the true bad guy will stick with me. However, ME2 is my favourite. It’s an appreciably smaller story, but it’s all about the characters, and they’re engaging people. The first visit to the nightclub is cracking.

ME3? Failure from start to finish. The opening scene just basically says “the last two games were completely irrelevant”. The ending is not only unsatisfying, it relies on stuff that was barely established even in the first game and not even mentioned in the others. It’s a textbook case of failing to stick the landing.

(And the less said about “choose your favourite primary colour” as an ending the better. Should have just had one ending and made it a good one.)


ME1 was groundbreaking for the time but really has not stood the test of time all that well. The gameplay mechanics are significantly rougher than ME2, ME2 feels like a shooter with strong RPG elements and ME1 feels like KOTOR where you get to aim the guns, it is very much an old-school bioware RPG title. The mako planet exploration is excruciatingly bad. Droid scavenger hunt on the citadel is excruciatingly bad. Endless elevator rides are excruciatingly bad.

There are some sequences that felt tedious but the story is definitely the best part overall.

ME2 is definitely the apex of the series by modern standards.

Also you forgot to mention ME:A. Maybe your face was tired? ;)


> ME2 feels like a shooter with strong RPG elements and ME1 feels like KOTOR where you get to aim the guns, it is very much an old-school bioware RPG title.

Funny, that difference is exactly what I most disliked about ME2.


I haven’t bothered playing it. Hardly even seems canonical and the reviews described it as worse than DA:I, which... did not impress me.


Frostbite engine is such an unwieldy monster that it has killed the last 3 games that tried to use it, all of them top-tier AAA franchises no less (BFV, Anthem, ME:A).

It is optimized for two things, Battlefield and FIFA, and it is even cumbersome for that (see: BFV's lifecycle and the numerous long-standing bugs that plagued it). It was never designed for an RPG and things like inventory systems and facial animation did not exist and had to be invented (badly). But EA is all-in on the "everything has to run frostbite company-wide".


This characterization is inaccurate and misleading. A game engine is a basic framework with which to build a game - responsible for things like platform abstraction, memory management, and the basic rendering pipeline - but much of what is identifiably part of any specific game lives on top of that. Unreal Engine and Unity don’t ship with “inventory systems” and neither are “designed for an RPG”.

Having worked on multiple Frostbite and multiple Unreal games, both engines are capable of building a wide variety of games. The discrepancies in my experience aren’t technical but organizational. It is hard to compete with the scale of Epic’s developer support organization and the wider industry inertia around their technology.


Make sure to get all of the DLCs, as they will make the third episode immensely more enjoyable


> I think that Mass Effect had pretty good writing in the first game but the later games are pretty bad.

I really enjoyed the first Mass Effect and its three-way balance between storytelling, exploration and combat.

I was really looking forward to playing Mass Effect 2, but its storytelling didn't seem as good as the first game and exploration was almost non-existent. ME2 is more focused on combat, which seems to be what most people want (ME2 gets fantastic reviews), but to me that's the least interesting part of a game.

I never played Mass Effect 3, but I hear it is even more combat-focused.


Mass Effect 1 spent a huge portion of its attention on world building. And I agree, it's an outstanding achievement. Mass Effect's world is a very compelling place.

But by the time you get to Mass Effect 2, that world is already built. So ME2 instead spends its time telling a more contained story in that world. It's a valid choice, even if not your cup of tea.

ME2's story works because it's a relatively well executed Seven Samurai-style plot. We've seen this story many times before. It's a classic. Sci-fi stuff threatens the galaxy, assemble the crew, watch them love/hate each other through tough challenges, some don't make it through the epic conclusion, etc. Standard. For a lot of people, it's fun to relive those story beats in an interesting new world (Mass Effect) in a novel format (video games).

I think of ME2 as a smart and enduring example of video game writing craft. They were juggling a lot of different requirements: player choices affecting the story (including from the first game); a huge, complicated new setting; making the story accessible to new players without needing to play the first game; developing interesting characters; supporting multiple protagonists (male/female, paragon/renegade) and varied character interactions based on those attributes; supporting high production values (all lines voiced by real actors); fast development timeline; on and on. Hanging all of that on a familiar plot structure probably brought a lot of structure to what was otherwise a pretty chaotic project. Furthermore, what ME2 does better than other games in the series is let characters drive the action, rather than dragging characters from event to event. The writers did an amazing job on ME2 in context.


> Mass Effect 1 spent a huge portion of its attention on world building. And I agree, it's an outstanding achievement. Mass Effect's world is a very compelling place.

It really is.

It's a tremendous shame that ME 3 ruins it.


I agree that there is a lot more world building in ME1, but in the last few days I've been playing it again, and I'm finding that it doesn't exactly hold up.

The game just feels like a chore. For one thing, the environments are too big. There's a lot of walking to get from one plot-point/action-sequence to the next. Second, the combat is bad. AI, controls, weapons that don't shoot where you point them until you level up several times to upgrade the skill. The conversation trees often feel like I'm just going through the paces of uncovering Codex entries for XP, and the long, pregnant pauses between dialog portions as the game loads up the animations for your responses are super annoying. It's an exercise in frustration and it really harms the storytelling aspect. I just never feel like I'm in any kind of flow of hearing the story. It feels dragged out (and I even have bug-fixing and fast-elevator mods installed).

I'm a fairly middling FPS player, and by that I mean I usually rank in the middle of online matches against humans, and I can usually finish single-player campaigns on "hard", if I have the patience for it on those days. But ME1 has been frustrating me so consistently that I've about decided to quit.


I think ME2's great reviews were mostly driven by the excellent cast of memorable characters you got to assemble throughout the game. Such an upgrade over the BioWare trademark mild soup of forgettables in ME1, and also the biggest reason ME3 was maligned after relegating most of those beloved characters to a 5-minute scene.


Also, the vastly upgraded combat system.


Everyone says ME2’s combat is much better than ME1’s, but as someone who rarely plays shooters, I didn’t really notice much difference.

What changes in ME2 made the combat vastly better than ME1?


I think the first game relied on weapon overheating to limit how fast you could fire.

The second game used ammo clips and that made combat far easier.


That’s right. I’ve seen arguments over which scheme is better, but the second game’s scheme is definitely more orthodox.

I also keep hearing ME1’s combat described as “clunky” and ME2’s as “smoother”. Without much experience with this type of game, I think I just don’t have the skills to feel the difference.


You're probably playing on a PC with mouse and keyboard. The Xbox 360 Mass Effect controls were not great, but that was the original release.


> Disco Elysium

Thanks for reminding me of it! I loved it. I didn't finish it though and might come back to it.


The greatest piece of games writing for me is the worldbuilding of Sid Meier's Alpha Centauri, through the informative texts for buildings and technologies.

Direct narrative in games is very tricky, because it can conflict with player agency.


Games aren't novels, if that is what you want. It's interactive storytelling. The interaction of story, gameplay, world building, relationships, choices, and immersion. Each individual element might not be extremely satisfying in isolation. But when combined into a cohesive experience it's something unique altogether. Mass Effect is likely revered because of how well it does that. Not because it is a good novel.


Mass effect is a CYOA novel. The moment-to-moment gameplay doesn't matter to the story; the only way in which the story is interactive is in the CYOA choices and the passive storytelling (unlocking codex info, environmental storytelling)


Maybe the game-play doesn't matter to the story (I'd argue it does because of immersion), but it definitely affects the experience. The experience is a collection of its parts. You can't look at one part in isolation (aka the story) and say that it's inadequate compared to an experience where that is the only part (a novel), because a game offers more than just the story, and if you ignore those other parts then you aren't appreciating a game for what it is. They all work together.


Sure, but a game would generally be improved by being less disjointed, right?


“428: Shibuya Scramble” _is_ writing, pretty much literally, as the game resides in the visual novel genre. However it manages to weave a complex tale of parallel interconnected narratives that is only possible in the digital medium; if you can approach it with an open mind I highly recommend it. One of the very few games that earned a 40/40 score in Famitsu magazine.


Is it actually filmed? Haven't played anything like it. Thanks.


I suffer with you here, as I mostly play to enjoy a story interactively. That makes gaming a really frustrating experience, as players in general seem to value action and farming over a good story... there are, of course, exceptions to this though: one of my all-time favourites is The Last Of Us 1 (haven’t bought the new one yet), which had incredible writing. Another was Witcher 3, maybe not for the main story line but all the detailed stories hidden in the woods of the game. Then there’s God Of War, A Plague Tale: Innocence, Red Dead Redemption 2, and Unravel, all of which I can heavily recommend.


What do you think of The Last Of Us 2?


Different person, I think Last of Us 2 really stretches the medium and that is commendable. But the overarching themes are not satisfying.

I think people forgot that Last of Us 1 wasn't a satisfying ending either, aside from being able to empathize with why people did what they did.

In that light, Last of Us 2 really doubles down on making you feel dissatisfied, and I like that such a possibility mirrors real life: irrational, striving for redemption, but not necessarily getting it.

I hate that I had/choose to experience this in someone else's shoes. I love that they choose to portray it. The feeling is almost analogous to watching The Road if it ended with the adult's perspective, or how Children of Men kept me entertained, on the edge of my seat and able to admire the technical prowess of the intense long scenes with no camera cuts, but not sure if the resolution was a resolution at all. And how that's life.


I found the story of the leaks, backlash, counter-backlash, counter-counter-backlash etc. very entertaining. FOOOORE!


I found the real world reaction to be perplexing and also ignorable.

I found the story of the game to be very nihilistic, and that just doesn't make for something that feels good, but portraying that took real effort which I can appreciate.

My litmus test is asking myself "If this game had a different name and wasn't part of a franchise I liked, would I appreciate it" and the answer is yes, a resounding yes.


Nier:Automata has great world building and story. Also I'd suggest checking the Zero Escape games: 999 and Virtue's Last Reward.


Seconding NieR: Automata. It's a pretty quirky game with tight controls and combat, which is a plus, but the story is interesting and has a lot of variety.

In a similar sci-fi vein I've just started on Detroit: Become Human, which I've got high hopes for, story-wise.


Star Control 2. Game lore is massive and deep, and is given out in small chunks, never overwhelming the player. Mass Effect actually borrows from this game a lot.

Tie Fighter. If I had any doubts about fighting for the Empire, they were all gone in just five or six missions. After that, I wanted to blast as many traitorous rebel scum from space as I could!

Betrayal at Krondor. One of the best RPGs I ever played. Writers could challenge R. Feist himself!

Full Throttle. The finest LucasArts quest ever, with great characters and story.


Visual Novels. Danganronpa.

Hyperdimension Neptunia games made after 2013.

I would love to see games with emotionally involving improvisational acting with A.I. characters, but we're not there yet.

Social interactions in virtual reality can put you in intense places: characters can invade your personal space (to the point of causing an adrenaline dump) or make you feel uncomfortable by keeping too far away. Will we see something that is halfway between "Frog Blender" and "Ender's Game"?


I liked:

Beneath a steel sky

Psychonauts

Ico (I don't think there was much of an explicitly stated story, it's mostly atmosphere)


Pillars of Eternity and if you haven’t, the Baldur’s Gate series.

The Shadowrun RPG setting is very 90s but if you like that sort of thing the Dragonfall game a couple years back was quite well written - I had no prior experience with the RPG world but enjoyed it because of the story.


I really recommend Kentucky Route Zero, in case you're looking for a superbly written game. It's one of the best narration-driven games of the last few years.


I really enjoyed Horizon Zero Dawn for the writing. Portal 2 and Planescape: Torment had really good writing too.

But I agree with you, it's very difficult to find really well written games. Otherwise, games that are closer to the old point and click adventure genre generally have better writing: Gone Home, To the Moon, The Book of Unwritten Tales, and Overclocked: A History of Violence, to name a few.


the Dishonored series for one.

Not sure if "worldbuilding" and "writing" are the same, or whether resources invested in the former lead to shortcuts in narrative, etc.

For Mass Effect, the main storyline seemed to be beholden to Video Game "Boss" requirements, but the side stories and character stories allowed whomever was assigned to those, to shine...


I recently finished Outer Wilds (not to be confused with Outer Worlds) where most of the story is told through "journal entry" style text. I thought it was a very poignant, bittersweet story. I also appreciated that there's no combat in the game.


Seconded. Outer Wilds was absolutely wonderful. The gameplay is lacking in places but after watching the making of documentary [1] my mind is still kind of blown by the technical side, even if it didn't transfer 100% smoothly to the player's experience.

[1] https://www.youtube.com/watch?v=LbY0mBXKKT0


This video has given me even more appreciation for the game. The exploration in their design process, yet still achieving very intentional design for guiding player motivation. Incredible.


There are only a few games I enjoyed for the writing. Heavy Rain and LA Noire come to mind.


God of War 4
The Last of Us
Uncharted 3 + 4
Shadow of the Colossus
Metal Gear Solid (I recommend watching one of those videos putting the story into chronological order)
GTA 4
Red Dead Redemption 2

To-do's on my list:

  - The Last of Us 2
  - Ghost of Tsushima


Psychonauts!


If story is your bag then Spiderweb Software's catalogue is worthy. Also the Project Eternity games: Tyranny, Pillars of Eternity 1&2.


I'm surprised no one has mentioned Metal Gear Solid in this thread (although I really enjoy all the indie suggestions).

Each MGS game is essentially an epic interactive movie[^], with individual cutscenes lasting up to an hour.

Granted, not everyone likes Kojima's style, but those who do won't regret their time. Best played sequentially from MGS2.

[^] MGS5 is a notable exception, being an open-world style game with an abrupt ending, ultimately leading to Kojima leaving Konami. Still good fun, and it brings some of the stories from earlier games together.


Erhm, you'd be missing out on a lot by not playing MGS1. And MGS2 is considered one of the weaker entries in the series.

As for "Kojima's style", it's basically "AAA Japanese Game Studio Style", which is "tell a coherent story through the first half, then go completely off the rails in the second."


Surprised nobody has mentioned the Metro series. It's based on a book, and I think it's pretty good story-wise...


Witcher 2 and Dragon Age Origins (also Bioware)


Second the Witcher series. They really bring out that Eastern Bloc cynicism / black humor, it is very refreshing compared to western games.


If you played and liked the games, read the books too. There are many things that can be written in the story, but are difficult to put into gameplay or movie.


Horizon: Zero Dawn was pretty awesome


Yes!

You might look at it and think "Robot dinosaurs in a post-apocalyptic world where humans hunt them with bows and arrows? Sounds like good dumb fun, even if the writing is preachy and nonsensical." Thing is, the writing isn't preachy and nonsensical. It's unexpectedly excellent. They did something unusual:

1. hired a good writer
2. early in development
3. listened to him

and the results are phenomenal. No caveats necessary -- the story has none of the Mass Effect "strong limbs, weak backbone" issues.

Also, while HZD used to be a PS exclusive, it will be on Steam in a few weeks.


TL;DR: Pyre and Divinity Original Sin 2

---

Disclaimer: I'm not a person to dwell too much on the specific wording in games, mostly valuing worldbuilding and the immersive aspects of storytelling, but I do pay some attention to it.

Divinity Original Sin 2 has some excellent worldbuilding, and while the overarching story might be a bit of a cliché "chosen one"/"zero to hero" type deal, the moment-to-moment narratives are pretty well made, and the quest lines are very well tied together, not only giving you stuff to do but also opportunities to learn more about the world.

And perhaps anything made by Supergiant is a good pick; in your case I would especially recommend giving Pyre a try. I held off on playing it for the longest time because I was skeptical about the in-story "sport" gameplay, but it is very well made and perfectly enhances the narrative.


What can change the nature of a man?


20 years later we finally got the answer. Drinking yourself half to death and losing most of your memory in the process can change the nature of a man.


">Since PIX does not “take screenshots” but instead captures the sequence of D3D commands and executes them on hardware, we can observe that executing the commands captured from an AMD box results in the same bug when executed on Intel."

Mass Effect bugs aside, this is interesting!

Before this article, I never knew that DirectX (D3D) commands could be proxied from PC to PC; I think that's a great capability!

Also, if that's the case, and apparently it is, then it would seem like you could do something like X-Windows/X11 but for PCs running Windows, by proxying D3D commands over a network... And of course, if Microsoft wants to be proprietary about that, then the same thing could probably be done with open-source software using OpenGL commands, that is, by proxying them over a network connection to get an X-Windows-like effect. Am I understanding the underlying technology correctly, or am I mistaken?


That is just one example among many of why most AAA studios favour proprietary APIs.

Khronos just does specifications and then lets its partners come up with the actual tooling, which means that you end up with OEM-specific SDKs, most of them very thin in capabilities.


I just finished Mass Effect (for the 342345th time) and I only encountered this on Ilos. I thought that it was just a temporary artifact on my machine and I haven't seen it since.


I really enjoyed this article! I'm not even a game dev, but I still found the article very approachable and engaging. Thanks!


Am I correct in understanding that there is a bug in `D3DXMatrixInverse`, or is it that some assumption is wrong?


No, it's not necessarily a bug, but rather multiple systems working as designed yet coming together to produce incorrect results. `D3DXMatrixInverse` makes use of hardware-implemented fast math routines by design, for performance reasons. These implementations may differ depending on the CPU model, but they are all valid, provided they stay within the precision bounds the instruction set specification guarantees.

What happened here is that a new implementation of these fast math routines appeared that returned results the game engine did not expect, and the engine was not robust enough to deal with the variation. This is not too surprising: these AMD CPUs did not exist yet when the game was developed, so QA could not have tested the game's compatibility with them.

The solution was to divert calls to `D3DXMatrixInverse` to another matrix inversion routine that makes use of more accurate floating point math, which produces identical results on all tested hardware.
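
To make this concrete, here is a minimal C sketch (my own illustration, not the code from the actual patch) contrasting the hardware reciprocal estimate with an exact IEEE division, plus the standard Newton-Raphson step that refines the estimate to near-full single precision if you still want most of the speed:

    #include <immintrin.h>
    #include <stdio.h>

    /* RCPSS returns an estimate accurate to roughly 12 bits; the
       exact result bits are allowed to differ between CPU vendors. */
    static float rcp_estimate(float a)
    {
        return _mm_cvtss_f32(_mm_rcp_ss(_mm_set_ss(a)));
    }

    /* One Newton-Raphson iteration, x' = x * (2 - a*x), refines the
       estimate to nearly full single precision. */
    static float rcp_refined(float a)
    {
        float x = rcp_estimate(a);
        return x * (2.0f - a * x);
    }

    int main(void)
    {
        float det = 3.0f; /* stand-in for a matrix determinant */
        printf("estimate: %a\n", rcp_estimate(det)); /* vendor-dependent bits */
        printf("refined:  %a\n", rcp_refined(det));
        printf("exact:    %a\n", 1.0f / det);        /* DIVSS, fully IEEE-defined */
        return 0;
    }

Only the "estimate" line is allowed to differ between an Intel and an AMD box; an engine that feeds that raw estimate into further matrix math inherits the vendor-dependent low bits.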


Yes, it evidently uses SSE2 instructions when doing the matrix inversion, resulting in the NaN values inside the matrix.


It's an awesome investigation, and well written!


How does Wine handle it on AMD CPUs?


It doesn't. You generally install the real D3DX runtimes to run games on Wine, and those then do the same thing on AMD CPUs.
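
For example, a typical recipe is to pull the native runtime in with winetricks and override Wine's builtin copy (the DLL version and the executable name below are guesses; use whichever d3dx9_NN your game actually links against):

    # install the native D3DX9 runtime into the current Wine prefix
    winetricks d3dx9_36

    # tell Wine to prefer the native DLL over its builtin one
    WINEDLLOVERRIDES="d3dx9_36=n" wine Game.exe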


For the past 3 years or so, there has been a native d3d9 implementation in Mesa which is used by Wine. There's also vkd3d, which is a Wine implementation of DX12 on top of Vulkan.

I don't think the real direct3d binaries are used by default anymore, unless you go out of your way to configure it that way.


D3DX is a user-space library, though, separate from D3D. So if Wine or vkd3d ship their own open-source version of D3DX9, you could use that, but you could also use the original Microsoft version of the .dll; the individual numbered versions (d3dx9_31.dll, etc.) exist to facilitate using the same version that your game was compiled and tested against.

The D3D shader compiler (D3DCompiler) DLLs are still versioned in this way for games to link against as well. If you look inside the SxS directory of a current Google Chrome installation, you'll find a D3DCompiler_47.dll sitting in there that they deploy instead of relying on whatever the OS has available.


I think dxvk is used a lot more commonly than Gallium Nine.


Wine does come with its own D3DX runtimes, for example [0]. You only need to install the Microsoft ones if the Wine implementation is missing something or has a bug.

[0] https://source.winehq.org/git/wine.git/tree/HEAD:/dlls/d3dx9...


You mean Wine doesn't support D3DX on its own? I think it should. Including through dxvk.


Buying AMD GPUs has been in my experience a terrible gamble. I still remember when I had to wait months to be able to play GTA V until they released working drivers.


This is an article about CPUs


Same here. AMD is still a much better choice for me since I spend 99% of my time on Linux, but the GTA V situation was something else. I ran into the same easily reproducible crash, it was reported by myself and many other users, and we waited something like 8 or 9 months for it to be fixed.


Do you know if the problem was in the driver or in the game? There are some games that worked by accident and years later fail to work with newer driver/Windows versions. I had such a bug with Oblivion, Nvidia and Win7, yet for some reason I could play the game under Wine on Linux.


Not the person you were responding to, but most of AMD's GPU troubles come down to the Windows driver being hot garbage. They opened the Linux driver enough that others can do a lot of the work for them. Things that don't work under the Windows driver usually work great under Linux.

Radeon Technologies Group's in-house software support has been abysmal since the days they were called ATI. It's been a chronic problem for both their drivers and their GPGPU ecosystem; NVIDIA can afford more and better engineers to develop libraries that support the ecosystem and to make sure that everything works properly on their hardware. AMD's greatest successes have come when they get the open-source community to maintain and develop something for them.

Yes, AMD is operating on a much smaller budget, but in the end it doesn't matter much to the consumer when they can't play Overwatch for 9 months because AMD has a driver bug that causes "render target lost" errors, leading to competitive bans for repeated disconnects, or... whatever the fuck happened with Navi.

Part of what you are paying for when you buy a graphics card is the ongoing software support, and AMD has always fallen flat on their face into a dumpster of rusty used syringe needles in that department.


> but most of AMD's GPU troubles come down to the Windows driver being hot garbage.

That definitely needs some evidence to back it up. In my experience, most game rendering code is hot garbage that has been hammered just enough to work on the tested platforms (read: mostly nVidia).

> They opened the Linux driver enough that others can do a lot of the work for them.

While there are outside contributions, most work on radeonsi (OpenGL) and the amdgpu kernel driver is done by AMD employees. The AMD Linux driver is better because it has less legacy code, can share more work with other drivers, benefits from users who are more used to filing detailed bug reports and testing development builds, and yes, because users and other interested parties (Valve, Red Hat, ...) can contribute fixes for their pet issues - but it is still AMD doing most of the work.

For Vulkan on Linux with AMD graphics the most popular driver is entirely community developed. But AMD's Vulkan driver also works from what I hear.


The need for good drivers was something NVidia 'figured out' really early on in the game.

Oddly, while I agree that ATI/AMD's 3D drivers have ranged from 'ok' to 'dumpster fire', I remember a time when their AIO (i.e. VIVO/tuner/3D) boards just plain worked (aside from not-very-good 3D performance). Perhaps their driver team couldn't adapt.


It was a bug in their Windows driver, right after the game was released on Windows, when all non-AMD users and some AMD users (depending on the card) were playing it just fine.


How do you know it was a driver bug, though, and not a game bug that was only affecting that driver? Even if it was "fixed" by a driver update, that doesn't mean the game didn't grossly violate the D3D/OpenGL/whatever API spec and just happened to work everywhere else.


Just out of curiosity, was this bug something related to an AMD GPU, or CPU, like it was the case in the article? And was it trying to play the game in Linux or Windows?


Sorry, I should have worded that message better. It was an AMD Windows driver bug with absolutely terrible response from them.


So the last PC I built was with the Intel 9700 and that was ~2 years ago. At that time Ryzen was pretty good but didn't have the proven track record or price-performance it does now.

Now, in terms of pure price-performance, I think I'd want to buy Ryzen, but... this sort of stuff is what scares me off AMD. I just want my crap to work. Reading some of the comments here, one commenter suggested there's an instruction that computes an approximate reciprocal (used when inverting a matrix) where both Intel and AMD are standards-compliant, yet the instruction produces different results on each chip.

Of course I don't know if this is true or not, but saving $200 on a PC build is just not something that justifies (to me) dealing with this kind of issue or, worse, potentially dealing with issues like these.

I buy NVidia for pretty much the same reason.


> saving $200 on a PC build is just not something that justifies (to me) dealing with this kind of issue

This kind of issue... you mean a visual glitch with limited scope and available workarounds in a more-than-decade-old video game?

Yes, what a serious issue. /s


Or the Destiny 2 bug (broken RDRAND implementation). Or the segfault bug (manufacturing error with broken uop cache). etc etc. There have been a fair number of teething problems affecting AMD users.

They are not incorrect that there is a certain turnkey nature to using Intel, and certain merits to using a core that has basically only been incrementally refined for the last 10 years.

And yes, Intel has processor errata too, but AMD had to work through some major ones because it was a brand-new architecture. They also chose not to take corrective action for some rather major ones: the fix for the segfault bug should have been disabling the uop cache, or a recall; instead they just let people go on thinking the intermittent crashes they experience (including in Windows) are software-related. They entirely declined to patch the Ryzen Take A Way bug, which leaks metadata about page table layouts and breaks KASLR on their processors, leaving users even more vulnerable to Spectre v2. Etc.


This bug upsets you but things like Meltdown and CSME vulnerabilities don't worry you?

I had a 3770k in a previous machine, which suffered noticeable performance hits with the software mitigations applied.


There's very little reason for home users to be running smeltdown mitigations, especially if you find the performance hit objectionable.

Browsers immediately mitigated via other measures, and I've never read about any criminal network choosing to crawl through memory for credentials (that might even be stored encrypted) as opposed to just dropping some malware and keylogging.

It's a big deal if you are a cloud host or your threat model includes state actors, not for joe public worrying about his bank credentials or his CS:GO knife skins.



