Sparsifying dimensionality reduction of PDE solution data with Bregman learning
(2025) In SIAM Journal on Scientific Computing 47(5), p. 1033-1058
- Abstract
Classical model reduction techniques project the governing equations onto a linear subspace of the original state space. More recent data-driven techniques use neural networks to enable nonlinear projections. While those often enable stronger compression, they may have redundant parameters and lead to suboptimal latent dimensionality. To overcome these issues, we propose a multistep algorithm that induces sparsity in the encoder-decoder networks for effective reduction in the number of parameters and additional compression of the latent space. This algorithm starts with sparsely initializing a network and training it using linearized Bregman iterations. These iterations have been very successful in computer vision and compressed sensing tasks, but have not yet been used for reduced-order modeling. After the training, we further compress the latent space dimensionality by using a form of proper orthogonal decomposition. Last, we use a bias propagation technique to change the induced sparsity into an effective reduction of parameters. We apply this algorithm to three representative PDE models: 1D diffusion, 1D advection, and 2D reaction-diffusion. Compared to conventional training methods like Adam, the proposed method achieves similar accuracy with 30% fewer parameters and a significantly smaller latent space.
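Two of the algorithmic ingredients named in the abstract, sparsity-inducing linearized Bregman iterations and a POD-style truncation of the latent space, can be illustrated with the minimal NumPy sketch below. It uses a toy sparse least-squares problem and illustrative hyperparameters (lam, delta, tau, energy are assumptions, as is the helper pod_truncate); it is not the authors' encoder-decoder training code.

# Minimal sketch (assumptions noted above), not the paper's implementation:
# (i) linearized Bregman iterations promoting sparsity via soft-thresholding,
# (ii) POD-style truncation of latent snapshots via an SVD energy criterion.
import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding, the proximal map of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, b, lam=0.05, delta=10.0, tau=None, iters=2000):
    """Linearized Bregman iteration for min 0.5 * ||A theta - b||^2 with the
    sparsity-inducing regularizer J(theta) = lam*||theta||_1 + ||theta||^2/(2*delta)."""
    m, n = A.shape
    if tau is None:
        tau = 1.0 / (delta * np.linalg.norm(A, 2) ** 2)  # keeps tau*delta*||A||^2 <= 1
    v = np.zeros(n)      # dual variable accumulating (negative) gradients
    theta = np.zeros(n)  # sparse primal iterate
    for _ in range(iters):
        grad = A.T @ (A @ theta - b)              # gradient of the data-fit loss
        v -= tau * grad                           # Bregman (dual) update
        theta = delta * soft_threshold(v, lam)    # primal update: sparsify
    return theta

def pod_truncate(Z, energy=0.999):
    """Keep the smallest number of POD modes of latent snapshots Z
    (n_samples x latent_dim) that capture the requested energy fraction."""
    Zc = Z - Z.mean(axis=0)
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1)
    return Vt[:r]  # projection onto r reduced latent coordinates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    theta_true = np.zeros(100)
    theta_true[:5] = rng.standard_normal(5)       # sparse ground truth
    b = A @ theta_true
    theta = linearized_bregman(A, b)
    print("nonzeros recovered:", np.count_nonzero(np.abs(theta) > 1e-8))
    Z = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 20))  # rank-8 latents
    print("retained POD modes:", pod_truncate(Z).shape[0])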
- author
- Heeringa, Tjeerd Jan; Brune, Christoph and Guo, Mengwu (LU)
- publishing date
- 2025-09-17
- type
- Contribution to journal
- publication status
- published
- keywords
- linearized Bregman iterations, neural architecture search, nonlinear dimensionality reduction, scientific machine learning, sparsity
- in
- SIAM Journal on Scientific Computing
- volume
- 47
- issue
- 5
- pages
- 26 pages
- publisher
- Society for Industrial and Applied Mathematics
- external identifiers
- scopus:105018394641
- ISSN
- 1064-8275
- DOI
- 10.1137/24M1684566
- language
- English
- LU publication?
- yes
- additional info
- Publisher Copyright: © 2025 Society for Industrial and Applied Mathematics
- id
- d71fc158-8362-4b34-bad9-07f6c5901e14
- date added to LUP
- 2025-10-24 03:29:51
- date last changed
- 2025-12-18 16:21:02
@article{d71fc158-8362-4b34-bad9-07f6c5901e14,
abstract = {{Classical model reduction techniques project the governing equations onto a linear subspace of the original state space. More recent data-driven techniques use neural networks to enable nonlinear projections. While those often enable stronger compression, they may have redundant parameters and lead to suboptimal latent dimensionality. To overcome these issues, we propose a multistep algorithm that induces sparsity in the encoder-decoder networks for effective reduction in the number of parameters and additional compression of the latent space. This algorithm starts with sparsely initializing a network and training it using linearized Bregman iterations. These iterations have been very successful in computer vision and compressed sensing tasks, but have not yet been used for reduced-order modeling. After the training, we further compress the latent space dimensionality by using a form of proper orthogonal decomposition. Last, we use a bias propagation technique to change the induced sparsity into an effective reduction of parameters. We apply this algorithm to three representative PDE models: 1D diffusion, 1D advection, and 2D reaction-diffusion. Compared to conventional training methods like Adam, the proposed method achieves similar accuracy with 30\% fewer parameters and a significantly smaller latent space.}},
author = {{Heeringa, Tjeerd Jan and Brune, Christoph and Guo, Mengwu}},
issn = {{1064-8275}},
keywords = {{linearized Bregman iterations; neural architecture search; nonlinear dimensionality reduction; scientific machine learning; sparsity}},
language = {{eng}},
month = {{09}},
number = {{5}},
pages = {{1033--1058}},
publisher = {{Society for Industrial and Applied Mathematics}},
series = {{SIAM Journal on Scientific Computing}},
title = {{Sparsifying dimensionality reduction of PDE solution data with Bregman learning}},
url = {{http://dx.doi.org/10.1137/24M1684566}},
doi = {{10.1137/24M1684566}},
volume = {{47}},
year = {{2025}},
}