Lund University Publications

Thalassa : Transforming Symbolic PDEs into Tensor-Based Solvers Running on ML Accelerators

Boulasikis, Michail; Gruian, Flavius and Szász, Robert Zoltán (2025) 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025 pp. 463-472
Abstract

We introduce Thalassa, a framework designed to convert nonlinear systems of partial differential equations (PDEs) with a time-like component into tensor programs that solve these equations. These programs can run on GPUs as well as machine learning (ML) acceleration hardware, enabling scientific computing fields such as computational fluid dynamics, astrophysics, mechanics and biology to utilize any of these resources. Thalassa accepts as input a PDE system expressed in symbolic form and a discretization strategy for the system's derivatives, in the form of an explicit finite difference method (FDM). With these inputs, Thalassa generates a PyTorch program that solves the system of PDEs numerically using the specified FDM. The generated solver is expressed as a neural network via a subset of PyTorch operations commonly supported on ML accelerators. However, unlike other ML-based PDE solvers that learn the target PDE system via backpropagation, our approach does not involve training the network. Instead, we directly implement the given numerical method by fixing the weights of the network, making inference correspond to integration. Our results show that a wide variety of PDE systems can be solved by our generated solvers. In addition, solvers can be generated for a CPU, a GPU, or an ML accelerator by changing a single parameter in the framework. Finally, we explore the performance bottlenecks of our solvers and their efficiency in using the targeted hardware, identifying areas for optimization and hardware-software co-design as future research directions.
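To make "inference corresponds to integration" concrete, the following is a minimal, hypothetical sketch of the general technique the abstract describes; it is not code emitted by Thalassa, and the class name, parameters, and scheme are illustrative. An explicit finite-difference update is written as a PyTorch convolution with fixed, untrained weights, so each forward pass advances the solution by one time step. The example uses the forward-Euler scheme for the 1D heat equation u_t = alpha * u_xx.

import torch
import torch.nn as nn

class HeatStep(nn.Module):
    """One forward-Euler step of u_t = alpha * u_xx as a fixed-weight convolution.

    Hypothetical illustration only; Thalassa's generated code may differ.
    """
    def __init__(self, alpha: float, dx: float, dt: float):
        super().__init__()
        r = alpha * dt / dx ** 2          # this explicit scheme is stable for r <= 0.5
        self.conv = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)
        with torch.no_grad():
            # Stencil [r, 1 - 2r, r]: weights are set once and never trained.
            self.conv.weight.copy_(torch.tensor([[[r, 1.0 - 2.0 * r, r]]]))
        self.conv.weight.requires_grad_(False)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u has shape (batch, 1, grid_points); zero padding acts as a crude
        # Dirichlet boundary. One inference call == one integration step.
        return self.conv(u)

# Usage: integrate a point heat source forward in time by repeated inference.
u = torch.zeros(1, 1, 101)
u[0, 0, 50] = 1.0
step = HeatStep(alpha=1.0, dx=0.01, dt=4e-5)   # r = 0.4, within the stability bound
with torch.no_grad():
    for _ in range(1000):
        u = step(u)

Retargeting such a module would then amount to moving it and the state tensor to the desired device (e.g. step.to("cuda")), consistent with the abstract's claim that switching between CPU, GPU, and ML accelerators requires changing only a single parameter.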

author
Boulasikis, Michail; Gruian, Flavius and Szász, Robert Zoltán
organization
publishing date
2025
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
keywords
finite difference methods, heterogeneous computing, ml accelerators, partial differential equations, symbolic computation, tensor programs
host publication
Proceedings - 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025
pages
10 pages
publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
conference name
2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025
conference location
Milan, Italy
conference dates
2025-06-03 - 2025-06-07
external identifiers
  • scopus:105015572127
ISBN
9798331526436
DOI
10.1109/IPDPSW66978.2025.00072
language
English
LU publication?
yes
id
c233ce6d-09b6-401c-976c-2e2c69672f16
date added to LUP
2025-11-12 15:25:21
date last changed
2025-11-12 15:26:25
@inproceedings{c233ce6d-09b6-401c-976c-2e2c69672f16,
  abstract     = {{We introduce Thalassa, a framework designed to convert nonlinear systems of partial differential equations (PDEs) with a time-like component into tensor programs that solve these equations. These programs can run on GPUs as well as machine learning (ML) acceleration hardware, enabling scientific computing fields such as computational fluid dynamics, astrophysics, mechanics and biology to utilize any of these resources. Thalassa accepts as input a PDE system expressed in symbolic form and a discretization strategy for the system's derivatives, in the form of an explicit finite difference method (FDM). With these inputs, Thalassa generates a PyTorch program that solves the system of PDEs numerically using the specified FDM. The generated solver is described as a neural network via a subset of PyTorch operations commonly supported in ML accelerators. However, unlike other ML-based PDE solvers that learn the target PDE system via backpropagation, our approach does not involve training the network. Instead, we directly implement the given numerical method by fixing the weights of the network, making inference correspond to integration. Our results show that a wide variety of PDE systems can be solved by our generated solvers. In addition, the solvers can be generated for either a CPU, GPU or ML accelerators only by changing one parameter in the framework. Finally, we explore the performance bottlenecks of our solvers and their efficiency in using the targeted hardware, identifying areas for optimization and hardware-software co-design as future research directions.}},
  author       = {{Boulasikis, Michail and Gruian, Flavius and Szász, Robert Zoltán}},
  booktitle    = {{Proceedings - 2025 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2025}},
  isbn         = {{9798331526436}},
  keywords     = {{finite difference methods; heterogeneous computing; ml accelerators; partial differential equations; symbolic computation; tensor programs}},
  language     = {{eng}},
  pages        = {{463--472}},
  publisher    = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  title        = {{Thalassa : Transforming Symbolic PDEs into Tensor-Based Solvers Running on ML Accelerators}},
  url          = {{http://dx.doi.org/10.1109/IPDPSW66978.2025.00072}},
  doi          = {{10.1109/IPDPSW66978.2025.00072}},
  year         = {{2025}},
}