Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation


Journal article


Adam N. McCaughan, Bakhrom G. Oripov, Natesh Ganesh, Sae Woo Nam, Andrew Dienstfrey, Sonia M. Buckley
APL Machine Learning, vol. 1(2), 2023, p. 026118 (Featured Article; Cover of the Issue)


Cite

APA
McCaughan, A. N., Oripov, B. G., Ganesh, N., Nam, S. W., Dienstfrey, A., & Buckley, S. M. (2023). Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation. APL Machine Learning, 1(2), 026118. https://doi.org/10.1063/5.0157645


Chicago/Turabian
McCaughan, Adam N., Bakhrom G. Oripov, Natesh Ganesh, Sae Woo Nam, Andrew Dienstfrey, and Sonia M. Buckley. “Multiplexed Gradient Descent: Fast Online Training of Modern Datasets on Hardware Neural Networks without Backpropagation.” APL Machine Learning 1, no. 2 (2023): 026118. https://doi.org/10.1063/5.0157645.


MLA
McCaughan, Adam N., et al. “Multiplexed Gradient Descent: Fast Online Training of Modern Datasets on Hardware Neural Networks without Backpropagation.” APL Machine Learning, vol. 1, no. 2, 2023, p. 026118, doi:10.1063/5.0157645.


BibTeX

@article{adam2023a,
  title = {Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation},
  year = {2023},
  number = {2},
  journal = {APL Machine Learning},
  pages = {026118},
  volume = {1},
  doi = {10.1063/5.0157645},
  author = {McCaughan, Adam N. and Oripov, Bakhrom G. and Ganesh, Natesh and Nam, Sae Woo and Dienstfrey, Andrew and Buckley, Sonia M.},
}

"...The neuromorphic community is incredibly diverse and spans across all stacks of computational abstractions. At the top layer, neuromorphic algorithms most commonly refer to spiking neural networks (SNNs)5,6 or training models via biologically plausible learning rules. What does it mean for a learning rule to be biologically plausible? A common criterion is the need for locality in both space and time. While error backpropagation routes a huge number of gradient signals to each model parameter, the synapses in the brain are thought to only update on the basis of signals that are immediately available to it. For example, the work by McCaughan et al. published in this issue of APL Machine Learning derives a method to broadcast a global loss signal to all parameters, where each weight in a network selectively “extracts” the component relevant to itself. While the technique is likened to wireless communication, it also bears similarities to global mechanisms in the brain, such as dopamine release. Fundamentally, the power savings it can have by reducing the amount of data routing are vast." excerpt from Adnan Mehonic, Jason Eshraghian; Brains and bytes: Trends in neuromorphic technology. APL Machine Learning 1 June 2023; 1 (2): 020401. https://doi.org/10.1063/5.0162712
