Training and optimization of hardware neural networks



Machine learning and artificial intelligence (AI) have advanced so rapidly that they now match or outperform humans at many tasks. However, the large models behind the most complex tasks consume enormous amounts of energy, particularly during the learning or training phase. Despite this rapid progress in AI performance, something fundamental is still missing from our understanding of how the brain performs these tasks natively: at very low energy, on noisy and variable biological components, and with an adaptability to new circumstances that artificial systems have not reproduced. This gap in our knowledge is especially apparent in the new, emerging bio-inspired hardware being developed for AI in the hope of reproducing such robust, energy-efficient operation. While large arrays of such devices have been built, their utility and scaling have been limited by the inability to simulate and program them the way digital hardware can be modeled and simulated. This has spawned a variety of training procedures tuned to particular hardware platforms and has generally limited the size and scope of the emerging hardware. The goal of this project is to develop and demonstrate a general training technique that can be natively implemented on a wide range of hardware neural networks, from feedforward crossbar arrays to recurrent physical networks to spiking neuromorphic hardware.
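
To give a sense of what "training natively on hardware without backpropagation" can look like, here is a minimal sketch of a perturbation-based (zeroth-order) update of the general kind that multiplexed gradient descent builds on. It is not the published algorithm: the tiny NumPy model, the function names, and all hyperparameters are placeholders standing in for a physical system that can only be perturbed and measured, not differentiated.

```python
# Sketch only: perturb-and-measure training that needs nothing but forward
# evaluations of a black-box "hardware" cost function.
import numpy as np

rng = np.random.default_rng(0)

def cost(theta, x, y):
    """Stand-in for a physical network: we can evaluate its cost for a given
    parameter setting, but we cannot compute gradients through its internals."""
    w = theta.reshape(len(y), -1)
    pred = np.tanh(w @ x)
    return np.mean((pred - y) ** 2)

def perturbative_step(theta, x, y, lr=0.1, eps=1e-3):
    """Perturb all parameters at once with a random +/-1 vector, measure the
    resulting change in cost, and correlate that change with the perturbation
    to form a gradient estimate (an SPSA-style update)."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    dC = cost(theta + eps * delta, x, y) - cost(theta - eps * delta, x, y)
    grad_est = dC / (2 * eps) * delta  # one-perturbation gradient estimate
    return theta - lr * grad_est

# Toy usage: fit reachable random targets by repeated perturb-and-measure steps.
x = rng.normal(size=4)
y = np.tanh(rng.normal(size=2))
theta = rng.normal(size=8)
for step in range(2000):
    theta = perturbative_step(theta, x, y)
print("final cost:", cost(theta, x, y))
```

Because each step requires only applying a perturbation and reading out the cost, an update of this form can in principle run on crossbar arrays, physical recurrent networks, or spiking hardware without a digital model of the device.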

Publications




Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation


Adam N. McCaughan, Bakhrom G. Oripov, Natesh Ganesh, Sae Woo Nam, Andrew Dienstfrey, Sonia M. Buckley

APL Machine Learning, vol. 1(2), 2023, p. 026118 (Featured Article; Cover of the Issue)

