
Lund University Publications


TinyFoA: Memory Efficient Forward-Only Algorithm for On-Device Learning

Huang, Baichuan and Aminifar, Amir (2025) 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025. In Proceedings of the AAAI Conference on Artificial Intelligence 39, pp. 17377-17385.
Abstract

Forward-only algorithms offer a promising memory-efficient alternative to Backpropagation (BP) for on-device training. However, state-of-the-art forward-only algorithms, e.g., Forward-Forward (FF), still require a substantial amount of memory during the training process, often exceeding the limits of mobile edge and Internet of Things (IoT) devices. At the same time, existing memory-optimization techniques, e.g., binarizing parameters and activations, are mainly designed for BP, hence significantly degrading the classification performance when applied to state-of-the-art forward-only algorithms. In this paper, we propose a memory-efficient forward-only algorithm called TinyFoA, to reduce dynamic memory overhead in the training process. Our TinyFoA optimizes the memory efficiency not only by layer-wise training but also by partially updating each layer, as well as by binarizing the weights and the activations. We extensively evaluate our proposed TinyFoA against BP and other forward-only algorithms and demonstrate its effectiveness and superiority compared to state-of-the-art forward-only algorithms in terms of classification performance and training memory overhead, reducing the memory overheads by an order of magnitude.
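
The abstract names three memory-saving ingredients: layer-wise (forward-only) training, partial updates within each layer, and binarized weights and activations. The following minimal PyTorch sketch illustrates how such pieces can fit together in the spirit of Forward-Forward training; it is an illustration under stated assumptions, not the authors' implementation, and every name in it (binarize, BinaryLayer, goodness, theta, partial_update_mask, the 25% update fraction) is a hypothetical choice for exposition.

import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    # Sign binarization with a straight-through estimator: the forward
    # pass uses sign(x), the backward pass lets the gradient through.
    return x + (torch.sign(x) - x).detach()

def partial_update_mask(weight, frac=0.25):
    # Illustrative stand-in for "partially updating each layer": keep the
    # gradient for a random fraction of weights and zero out the rest.
    return (torch.rand_like(weight) < frac).float()

class BinaryLayer(nn.Module):
    # A fully connected layer with binarized weights and activations.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(d_out, d_in))

    def forward(self, x):
        h = F.relu(x @ binarize(self.weight).t())
        return binarize(h)

def goodness(h):
    # Forward-Forward-style "goodness": sum of squared activations.
    return (h ** 2).sum(dim=1)

def train_layer(layer, x_pos, x_neg, theta=2.0, lr=1e-3, steps=100):
    # Local, forward-only objective: push the goodness of positive samples
    # above theta and that of negative samples below it. No gradient ever
    # crosses a layer boundary, so activations of earlier layers need not
    # be stored -- the source of the dynamic-memory savings the abstract
    # describes.
    opt = torch.optim.SGD(layer.parameters(), lr=lr)
    for _ in range(steps):
        loss = (F.softplus(theta - goodness(layer(x_pos))).mean()
                + F.softplus(goodness(layer(x_neg)) - theta).mean())
        opt.zero_grad()
        loss.backward()
        # Partial update: only a subset of this layer's weights changes.
        layer.weight.grad *= partial_update_mask(layer.weight)
        opt.step()
    # Detach so the next layer trains on fixed inputs.
    return layer(x_pos).detach(), layer(x_neg).detach()

Layers would then be trained one at a time, each on the detached outputs of the previous one (h_pos, h_neg = train_layer(layer, h_pos, h_neg) in sequence), so at any moment only a single layer's activations and gradients are held in memory.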

author
Huang, Baichuan and Aminifar, Amir
organization
publishing date
2025-04
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Thirty-Ninth AAAI Conference on Artificial Intelligence, Thirty-Seventh Conference on Innovative Applications of Artificial Intelligence, Fifteenth Symposium on Educational Advances in Artificial Intelligence
series title
Proceedings of the AAAI Conference on Artificial Intelligence
volume
39
edition
16
pages
17377-17385 (9 pages)
conference name
39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
conference location
Philadelphia, United States
conference dates
2025-02-25 - 2025-03-04
external identifiers
  • scopus:105003908682
ISSN
2159-5399
ISBN
978-1-57735-897-8
DOI
10.1609/aaai.v39i16.33910
language
English
LU publication?
yes
id
3bf7096e-18fc-4f98-9012-2df44aaa3483
date added to LUP
2025-08-13 11:18:10
date last changed
2025-08-13 11:19:18
@inproceedings{3bf7096e-18fc-4f98-9012-2df44aaa3483,
  abstract     = {{Forward-only algorithms offer a promising memory-efficient alternative to Backpropagation (BP) for on-device training. However, state-of-the-art forward-only algorithms, e.g., Forward-Forward (FF), still require a substantial amount of memory during the training process, often exceeding the limits of mobile edge and Internet of Things (IoT) devices. At the same time, existing memory-optimization techniques, e.g., binarizing parameters and activations, are mainly designed for BP, hence significantly degrading the classification performance when applied to state-of-the-art forward-only algorithms. In this paper, we propose a memory-efficient forward-only algorithm called TinyFoA, to reduce dynamic memory overhead in the training process. Our TinyFoA optimizes the memory efficiency not only by layer-wise training but also by partially updating each layer, as well as by binarizing the weights and the activations. We extensively evaluate our proposed TinyFoA against BP and other forward-only algorithms and demonstrate its effectiveness and superiority compared to state-of-the-art forward-only algorithms in terms of classification performance and training memory overhead, reducing the memory overheads by an order of magnitude.}},
  author       = {{Huang, Baichuan and Aminifar, Amir}},
  booktitle    = {{Thirty-Ninth AAAI Conference on Artificial Intelligence, Thirty-Seventh Conference on Innovative Applications of Artificial Intelligence, Fifteenth Symposium on Educational Advances in Artificial Intelligence}},
  isbn         = {{978-1-57735-897-8}},
  issn         = {{2159-5399}},
  language     = {{eng}},
  month        = {{04}},
  pages        = {{17377--17385}},
  series       = {{Proceedings of the AAAI Conference on Artificial Intelligence}},
  title        = {{TinyFoA: Memory Efficient Forward-Only Algorithm for On-Device Learning}},
  url          = {{http://dx.doi.org/10.1609/aaai.v39i16.33910}},
  doi          = {{10.1609/aaai.v39i16.33910}},
  volume       = {{39}},
  year         = {{2025}},
}