
Lund University Publications


Using machine learning hardware to solve linear partial differential equations with finite difference methods

Boulasikis, Michail; Gruian, Flavius and Szász, Robert-Zoltán (2025) In International Journal of Parallel Programming 53.
Abstract
This study explores the potential of utilizing hardware built for Machine Learning (ML) tasks as a platform for solving linear Partial Differential Equations via numerical methods. We examine the feasibility, benefits, and obstacles associated with this approach. Given an Initial Boundary Value Problem (IBVP) and a finite difference method, we directly compute stencil coefficients and assign them to the kernel of a convolution layer, a common component used in ML. The convolution layer’s output can be applied iteratively in a stencil loop to construct the solution of the IBVP. We describe this stencil loop as a TensorFlow (TF) program and use a Google Cloud instance to verify that it can target ML hardware and to profile its behavior and performance. We show that such a solver can be implemented in TF, creating opportunities in exploiting the computational power of ML accelerators for numerics and simulations. Furthermore, we discover that the primary issues in such implementations are under-utilization of the hardware and its low arithmetic precision. We further identify data movement and boundary condition handling as potential future bottlenecks, underscoring the need for improvements in the TF backend to optimize such computational patterns. Addressing these challenges could pave the way for broader applications of ML hardware in numerical computing and simulations.
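To make the abstract's idea concrete, here is a minimal sketch (not the authors' code) of a finite-difference stencil expressed as a convolution. It uses the standard FTCS scheme for the 1-D heat equation u_t = alpha * u_xx, whose update u_new[i] = r*u[i-1] + (1-2r)*u[i] + r*u[i+1] (with r = alpha*dt/dx^2) is exactly a convolution with the fixed kernel [r, 1-2r, r]. NumPy stands in for the convolution layer; on ML hardware the same kernel would be loaded into, e.g., a `tf.nn.conv1d` layer and the loop below would become the "stencil loop" the paper describes.

```python
import numpy as np

def step(u, r):
    """One stencil step as an explicit convolution; Dirichlet boundaries."""
    kernel = np.array([r, 1.0 - 2.0 * r, r])
    # "valid" convolution updates only the interior points
    interior = np.convolve(u, kernel, mode="valid")
    # re-attach the fixed boundary values (Dirichlet conditions)
    return np.concatenate(([u[0]], interior, [u[-1]]))

def solve(u0, r, n_steps):
    """Iterate the stencil step: this loop is the 'stencil loop'."""
    u = u0.copy()
    for _ in range(n_steps):
        u = step(u, r)
    return u

# Example: an initial heat spike in the middle of the domain diffuses
# outward; r = 0.25 satisfies the FTCS stability condition r <= 0.5.
u0 = np.zeros(11)
u0[5] = 1.0
u = solve(u0, r=0.25, n_steps=50)
```

The paper's point is that once the update is phrased this way, the per-step arithmetic maps onto the convolution units that ML accelerators already provide; the remaining work (boundary handling, data movement between steps) is exactly where the abstract locates the future bottlenecks.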
author
Boulasikis, Michail; Gruian, Flavius and Szász, Robert-Zoltán
organization
publishing date
type
Contribution to journal
publication status
published
subject
keywords
AI hardware, Partial differential equations, Finite difference methods
in
International Journal of Parallel Programming
volume
53
article number
15
pages
22 pages
publisher
Springer Nature
ISSN
1573-7640
DOI
10.1007/s10766-025-00791-6
project
Employing AI Hardware for General Purpose Computing
language
English
LU publication?
yes
id
02d12439-1202-4e7f-9483-cb9fe852c307
date added to LUP
2025-03-07 13:43:35
date last changed
2025-04-04 14:57:41
@article{02d12439-1202-4e7f-9483-cb9fe852c307,
  abstract     = {{This study explores the potential of utilizing hardware built for Machine Learning (ML) tasks as a platform for solving linear Partial Differential Equations via numerical methods. We examine the feasibility, benefits, and obstacles associated with this approach. Given an Initial Boundary Value Problem (IBVP) and a finite difference method, we directly compute stencil coefficients and assign them to the kernel of a convolution layer, a common component used in ML. The convolution layer’s output can be applied iteratively in a stencil loop to construct the solution of the IBVP. We describe this stencil loop as a TensorFlow (TF) program and use a Google Cloud instance to verify that it can target ML hardware and to profile its behavior and performance. We show that such a solver can be implemented in TF, creating opportunities in exploiting the computational power of ML accelerators for numerics and simulations. Furthermore, we discover that the primary issues in such implementations are under-utilization of the hardware and its low arithmetic precision. We further identify data movement and boundary condition handling as potential future bottlenecks, underscoring the need for improvements in the TF backend to optimize such computational patterns. Addressing these challenges could pave the way for broader applications of ML hardware in numerical computing and simulations.}},
  author       = {{Boulasikis, Michail and Gruian, Flavius and Szász, Robert-Zoltán}},
  issn         = {{1573-7640}},
  keywords     = {{AI hardware; Partial differential equations; Finite difference methods}},
  language     = {{eng}},
  month        = {{03}},
  publisher    = {{Springer Nature}},
  journal      = {{International Journal of Parallel Programming}},
  title        = {{Using machine learning hardware to solve linear partial differential equations with finite difference methods}},
  url          = {{http://dx.doi.org/10.1007/s10766-025-00791-6}},
  doi          = {{10.1007/s10766-025-00791-6}},
  volume       = {{53}},
  year         = {{2025}},
}