A generalized framework of neural networks for Hamiltonian systems

Philipp Horn (Corresponding author), Veronica Saz Ulibarrena, Barry Koren, Simon Portegies Zwart

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

When solving Hamiltonian systems with numerical integrators, preserving the symplectic structure can be crucial for many problems. At the same time, solving chaotic or stiff problems requires integrators that approximate the trajectories with extreme precision. Integrating Hamilton's equations to a level of scientific reliability, such that the answer can be used for scientific interpretation, may therefore be computationally expensive. A neural network, however, can be a viable alternative to numerical integrators, offering high-fidelity solutions orders of magnitude faster. To understand whether preserving symplecticity is also important when neural networks are used, we analyze three well-known neural network architectures that include the symplectic structure in the network's topology. These architectures share many similarities, which allows us to formulate a new, generalized framework that contains Symplectic Recurrent Neural Networks, SympNets and HénonNets as special cases. Additionally, the new framework enables us to find novel neural network topologies by transitioning between the established ones. We compare these new Generalized Hamiltonian Neural Networks (GHNNs) against the already established SympNets, HénonNets and physics-unaware multilayer perceptrons, using data for a pendulum, a double pendulum and a gravitational three-body problem. To achieve a fair comparison, the hyperparameters of the different neural networks are chosen such that all four architectures have the same prediction speed during inference. A special focus lies on the capability of the neural networks to generalize beyond the training data. The GHNNs outperform all other neural network architectures on the problems considered.
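The abstract's point about preserving the symplectic structure can be illustrated with a minimal sketch (not taken from the paper): for a simple pendulum with Hamiltonian H(q, p) = p²/2 − cos(q), a symplectic (semi-implicit) Euler step keeps the energy error bounded over long integrations, while a plain explicit Euler step of the same cost lets the energy drift. The step size, initial condition, and unit constants below are illustrative assumptions.

```python
import math

# Simple pendulum with unit mass, length, and gravity:
# H(q, p) = p^2 / 2 - cos(q)
def energy(q, p):
    return 0.5 * p * p - math.cos(q)

def explicit_euler(q, p, dt):
    # Both updates use the old state: not symplectic.
    return q + dt * p, p - dt * math.sin(q)

def symplectic_euler(q, p, dt):
    # Update the momentum first, then the position with the
    # new momentum: a symplectic map, so energy error stays bounded.
    p_new = p - dt * math.sin(q)
    return q + dt * p_new, p_new

dt, steps = 0.1, 1000
q_e, p_e = 1.0, 0.0   # explicit Euler trajectory
q_s, p_s = 1.0, 0.0   # symplectic Euler trajectory
for _ in range(steps):
    q_e, p_e = explicit_euler(q_e, p_e, dt)
    q_s, p_s = symplectic_euler(q_s, p_s, dt)

e0 = energy(1.0, 0.0)
print("explicit Euler energy drift: ", abs(energy(q_e, p_e) - e0))
print("symplectic Euler energy drift:", abs(energy(q_s, p_s) - e0))
```

Running this shows the explicit scheme's energy error growing steadily while the symplectic scheme's stays small, which is the structure-preservation property the paper builds into the neural network topologies.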

Original language: English
Article number: 113536
Number of pages: 18
Journal: Journal of Computational Physics
Volume: 521
DOIs
Publication status: Published - 15 Jan 2025

Bibliographical note

Publisher Copyright:
© 2024 The Author(s)

Keywords

  • Hamiltonian systems
  • Neural networks
  • Scientific machine learning
  • Structure preservation
  • Symplectic algorithms

