I'm reminded of how proofs work. We don't understand how mathematicians really think, and each mathematician might think differently, using different mental images. But a mathematician can write a proof, and other mathematicians can verify it.
Similarly, the output of an opaque and perhaps unreliable process could be not just an answer but also a justification. If the justification can be verified more easily than starting from scratch, we've made progress.
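To make that asymmetry concrete, here's a minimal sketch in Python (a toy SAT example of my own choosing, not anything specific to how an AI would do it): finding a satisfying assignment for a formula can require expensive search, but checking a proposed assignment is one linear pass over the clauses. The solver can stay completely opaque; we only need to trust the check.

```python
from typing import Dict, List

Clause = List[int]  # positive int = variable, negative int = its negation

def verify(clauses: List[Clause], assignment: Dict[int, bool]) -> bool:
    """Cheaply verify the justification: every clause must contain
    at least one literal made true by the proposed assignment."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# An opaque solver (an AI, a heuristic, anything) proposes an answer
# together with its justification:
formula = [[1, -2], [2, 3], [-1, -3]]    # (x1 or not x2) and (x2 or x3) and ...
proposed = {1: True, 2: True, 3: False}  # the claimed satisfying assignment

print(verify(formula, proposed))  # True: we trust the answer, not the solver
```

The point of the sketch is the division of labor: however the answer was produced, the verification step is simple enough to audit independently.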
Even in the humanities there is a difference between argument from authority (blind trust) and argument based on reason and justification. And sometimes we have both. Court opinions include not just the ruling but also an explanation of why the court decided that way. The explanation certainly doesn't eliminate bad decisions, but it's useful.
So the discipline of being able to explain yourself seems pretty important. I don't think AI will be able to fully participate in high-stakes decision-making until it can do that.