On several occasions I have heard "they said it couldn't be done" - only to discover that yes, it is technically correct, but "they" was one random person who had no clue, while anyone with any domain knowledge said it was reasonable.
Usually when I hear "they said it couldn't be done", it is being used to triumphantly downplay legitimate critique. If you dig deeper, that "couldn't be done" usually is in relation to some constraints or performance characteristics, which the "done" thing still does not meet, but the goalposts have already been moved.
It's extremely common for legitimate critique to be repurposed as an illegitimate attack on people doing things differently enough that the relative importance of several factors changes.
This is really, really common. And it's done both by mistake and in bad faith. In fact, it's a guarantee that once anybody tries anything different enough, they'll be constantly attacked this way.
> that "couldn't be done" usually is in relation to some constraints or performance characteristics, which the "done" thing still does not meet
I'd say theoretical proofs of impossibility tend to make valid logical deductions within the formal model they set up, but the issue is that the model often turns out to be a deficient representation of reality.
For instance, Minsky and Papert's Perceptrons book, credited in part with prompting the 1980s AI winter, gives a valid mathematical proof of the inability of networks within their framework to represent the XOR function. This function is easily solved by multilayer neural networks, but Minsky and Papert considered those a "sterile" extension and believed neural networks trained by gradient descent would fail to scale up.
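To make that concrete: one hidden layer is enough to represent XOR with hand-picked weights. A minimal sketch in numpy (the particular weights are my own illustrative choice, not anything from the book):

```python
import numpy as np

def step(z):
    # Heaviside step activation, as in classic perceptron units
    return (z > 0).astype(int)

# Hidden layer: unit 0 computes OR(x1, x2), unit 1 computes AND(x1, x2)
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

# Output unit: OR AND NOT AND, i.e. XOR
w_out = np.array([1.0, -1.0])
b_out = -0.5

def xor_net(x):
    h = step(W_hidden @ x + b_hidden)
    return step(w_out @ h + b_out)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(np.array(x)))  # prints 0, 1, 1, 0 respectively
```

The single-layer impossibility proof is about linear separability of the raw inputs; the hidden layer sidesteps it by remapping the inputs into a space where XOR is linearly separable.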
Or, for a more contemporary example, Gary Marcus has been outspoken since 2012 that deep learning is hitting a wall - giving the example that a dense network trained on just `1000 -> 1000`, `0100 -> 0100`, `0010 -> 0010` can't then reliably predict `0001 -> 0001`, because the fourth output neuron was never activated during training. Similarly, this function is easily solved by transformers, which represent the input/output as a sequence of tokens and thus don't need to light up an untrained neuron to give the answer (nor do humans when writing or speaking the answer).
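The dense-network half of that claim is easy to reproduce: if you train a single sigmoid layer on only the first three one-hot patterns, the weights feeding the fourth output never receive a gradient from an active input, so that unit only ever gets pushed toward 0. A rough sketch (my own toy setup - zero init and hyperparameters chosen just to make the effect deterministic and visible):

```python
import numpy as np

# Identity map over the first three one-hot patterns;
# the fourth pattern (0001) is held out entirely.
X = np.eye(4)[:3]
Y = np.eye(4)[:3]

W = np.zeros((4, 4))  # deterministic init; each output unit learns independently
b = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    P = sigmoid(X @ W.T + b)   # predictions, shape (3, 4)
    dZ = P - Y                 # per-unit cross-entropy gradient w.r.t. pre-activations
    W -= lr * dZ.T @ X         # column 3 of W never changes: input bit 4 is always 0
    b -= lr * dZ.sum(axis=0)   # but b[3] keeps drifting negative: unit 4's target is always 0

# The held-out pattern: the fourth output stays near 0 instead of firing
pred = sigmoid(np.array([0.0, 0.0, 0.0, 1.0]) @ W.T + b)
print(pred.round(3))
```

The network fits the three training patterns fine; it's specifically `pred[3]` that stays near 0, since nothing in training ever asked that unit to produce a 1.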
If I claimed that it was topologically impossible to drink a Capri-Sun, and then someone came along and punctured it with a straw (an unaccounted-for advancement from a blind spot of my model), I could maybe cling on and argue that my challenge remains technically true and unsolved because the straw violates one of the constraints I set out - but at the very least the relevance of my proof to reality has diminished, and it may no longer support the viewpoints/conclusions I intended it to ("don't buy Capri-Sun"). That's not to say theoretical results can't still be interesting in their own right - like the halting problem, which does not apply to real computers.