I also think our hands could do more while staying in keyboard position. Lift them off slightly and a camera could recognize gestures: even two-finger scrolling could work by raising the fingers a little and making the motion in the air.
Same with pinch-to-zoom. Or when working with a 3D model, you could shape your hand as if gripping a globe and twist it to rotate the object on screen.
I posted excitedly about this on Slashdot 15 years ago and it just came back to me. Basic detection should be easily achievable with modern libraries.
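For anyone curious, here's a rough sketch of the two-finger-scroll idea using MediaPipe's hand-landmark model and a webcam. The 0.05 closeness threshold is an arbitrary placeholder, and the print statement stands in for whatever would actually emit scroll events to the OS:

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    def fingertip_positions(frame, hands):
        """Return (index_tip, middle_tip) for the first detected hand, or None."""
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = hands.process(rgb)
        if not results.multi_hand_landmarks:
            return None
        hand = results.multi_hand_landmarks[0]
        # Landmark 8 is the index fingertip and 12 the middle fingertip
        # in MediaPipe's 21-point hand model.
        return hand.landmark[8], hand.landmark[12]

    def main():
        cap = cv2.VideoCapture(0)
        prev_y = None
        with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                tips = fingertip_positions(frame, hands)
                if tips:
                    index_tip, middle_tip = tips
                    # Treat two fingertips held close together as a
                    # "two-finger scroll" pose; 0.05 is an assumed
                    # closeness threshold, not a tuned value.
                    if abs(index_tip.x - middle_tip.x) < 0.05:
                        y = (index_tip.y + middle_tip.y) / 2
                        if prev_y is not None:
                            delta = y - prev_y
                            print(f"scroll delta: {delta:+.3f}")  # feed to the OS here
                        prev_y = y
                    else:
                        prev_y = None
                if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
                    break
        cap.release()

    if __name__ == "__main__":
        main()

Recognizing the "gripping a globe" pose for 3D rotation would take more work, but it's the same pipeline: classify a hand shape from the landmarks, then track its motion frame to frame.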
Some abhor the idea and would rather have more complex keystroke chords, but for me certain gestures are intuitive enough that I wouldn't really have to learn anything new.
Cameras and other sensors could be used to read our gestures:
https://www.cnbc.com/2018/07/16/ctrl-labss-neural-tech-lets-...
https://atap.google.com/soli/