
Analog computing is dramatically more energy-efficient than digital because it needs less hardware. For example, you can store an analog value in a single capacitor, while storing a digital value requires a multi-bit register built from multiple transistors per bit. Analog multiplication can be done with one amplifier, while digital multiplication requires thousands of transistors. The tradeoff is that digital computation is exact, while analog computation contains noise that can accumulate through the computation.
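A toy numerical analogy (my own illustration, not a circuit model; the gain and noise figures are invented) showing how per-operation noise compounds over a long chain of analog multiplies, while exact digital arithmetic does not drift:

    import numpy as np

    rng = np.random.default_rng(42)

    def analog_chain(x, gains, noise=1e-3):
        # Each "analog" stage multiplies and injects a small random
        # error, standing in for amplifier noise.
        for g in gains:
            x = g * x + rng.normal(0.0, noise)
        return x

    gains = [1.01] * 500

    exact = 1.0
    for g in gains:  # "digital" reference: exact arithmetic
        exact *= g

    # The analog result drifts from the exact one because early errors
    # get re-amplified by every later stage.
    print(exact, analog_chain(1.0, gains))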



So is there a particular difficulty in overcoming this noise, or is it more just unexplored territory?


Definitely not unexplored territory; analog computation came first. Notably, the way you program an (electronic) analog computer is by adjusting its circuitry, which is not exactly the friendliest programming model.

The problem with noise seems to be repeatability. Enough noise, and reliability becomes a huge problem. Given the same inputs and the same analog circuit, you want the same answer every time, don't you?

You cannot "overcome this noise" by removing it. The perfection of materials, process and environment required for that would be on the order of an experiment like this: https://www.simonsfoundation.org/quanta/20131010-neutrino-ex...

We overcame the noise via digitization. In fact, there is still noise in our current digital computers, since the components within them are fundamentally analog, but digital circuits quantize the analog signals, interpreting them as 1s and 0s despite their analog nature. (This is simplified, and I know next to nothing about digital circuit design.)
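A minimal sketch of that restoring effect (the voltage levels and noise figures are made up for illustration): because each stage only has to decide which side of a threshold the signal is on, the noise gets absorbed at every step instead of accumulating:

    import random

    VDD = 1.0  # illustrative logic-high voltage

    def quantize(v, threshold=0.5):
        # A logic stage restores a clean level from a noisy voltage.
        return VDD if v > threshold else 0.0

    def noisy_wire(v, sigma=0.1):
        # An analog stage that perturbs the signal.
        return v + random.gauss(0.0, sigma)

    bit = VDD
    for _ in range(1000):  # pass the bit through 1000 noisy stages
        bit = quantize(noisy_wire(bit))

    # Almost certainly still 1.0: a flip would need a 5-sigma noise
    # excursion at a single stage, so errors never accumulate.
    print(bit)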


I think the main potential in analogue computing is to create complex networks of feedback loops where different regions of stability correspond to different machine states. I've seen models of neural-network memory where the interconnection of neurons works like a combination of a symmetric linear transform, an amplifier, and vector normalisation. The transform maps the sensory input into a reduced-dimensional space (where each dimension corresponds to a possible memory). The reduced vector is amplified via the neural response function and then transformed back to the sensory input space through the inverse of the original transform. That creates a feedback loop where, because of how the neural response function works, whatever the input is, the system converges to a vector corresponding to exactly one of the memory vectors. It basically picks out and amplifies the memory closest to the sensory input.
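A rough sketch of that dynamic as I understand it (my own toy construction, not a model from the literature; I use a cubic nonlinearity as a stand-in for the neural response function, since any superlinear odd response gives the same winner-take-all behaviour):

    import numpy as np

    rng = np.random.default_rng(0)

    # Rows of M are the stored "memory" vectors, assumed roughly
    # orthonormal so that M.T acts as an approximate inverse of M.
    M = rng.standard_normal((3, 50))
    M /= np.linalg.norm(M, axis=1, keepdims=True)

    def recall(x, steps=30):
        # Iterate the project -> amplify -> reconstruct feedback loop.
        for _ in range(steps):
            a = M @ x                # into the reduced (memory) space
            a = a ** 3               # superlinear response favours the winner
            x = M.T @ a              # back to the sensory space
            x /= np.linalg.norm(x)   # vector normalisation keeps it stable
        return x

    # A corrupted version of memory 0 converges back toward memory 0.
    noisy = M[0] + 0.2 * rng.standard_normal(50)
    print(int(np.argmax(M @ recall(noisy))))  # expect 0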

That kind of system is a huge simplification, but similar things could be done with analogue computing. In particular, I think probabilistic computing could be done by setting up network feedback loops corresponding to underlying Bayesian networks, where stable points correspond to highest likelihood parameterisations. (I may actually do some work in this direction next year, because it's pretty cool stuff.)
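To make the "stable points correspond to likelihood maxima" idea concrete with a deliberately crude stand-in (this is not an analog Bayesian network, just gradient-ascent dynamics on a made-up two-mode log-density; the point is only that the equilibria of the feedback sit at the modes):

    import numpy as np

    def log_p(x):
        # Made-up two-mode log-density (mixture of Gaussians at -2, +2).
        return np.log(0.5 * np.exp(-(x + 2) ** 2 / 2)
                      + 0.5 * np.exp(-(x - 2) ** 2 / 2))

    def step(x, lr=0.1, eps=1e-4):
        # Feedback dynamics: move uphill on the log-likelihood, so the
        # fixed points are exactly the local likelihood maxima.
        g = (log_p(x + eps) - log_p(x - eps)) / (2 * eps)
        return x + lr * g

    x = 0.5
    for _ in range(200):
        x = step(x)
    print(round(x, 2))  # settles at the nearby mode, ~2.0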



