Port-Hamiltonian Neural Networks
Motivation
Standard neural ODEs learn dynamics without physical constraints — they can violate energy conservation, produce non-symplectic flows, and fail to generalize beyond training trajectories. Port-Hamiltonian systems (PHS) provide a principled framework for energy-consistent dynamics, covering both conservative and dissipative systems with external inputs. The question: can we enforce PH structure as an inductive bias in neural ODE architectures?
Formulation
A port-Hamiltonian system is governed by:
$$\dot{x} = (J(x) - R(x))\nabla H(x) + g(x)u$$
where $J(x)$ is skew-symmetric (lossless energy routing), $R(x)$ is positive semi-definite (dissipation), $H(x)$ is the Hamiltonian (total stored energy), and $g(x)$ is the input map through which external ports inject or extract power. I parameterize $J$, $R$, and $H$ as neural networks with structural constraints enforced via Cholesky decomposition and skew-symmetrization layers.
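A useful sanity check on this structure is the power balance. Since $J$ is skew-symmetric, $\nabla H^\top J \nabla H = 0$, so differentiating $H$ along trajectories gives

$$\dot{H} = \nabla H^\top \dot{x} = -\nabla H^\top R\, \nabla H + y^\top u, \qquad y := g^\top \nabla H$$

With $R \succeq 0$, energy can only decrease internally or flow through the port pair $(u, y)$. This is exactly the property the constrained parameterization is meant to guarantee by construction, rather than hoping a generic network learns it.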
Architecture
- Hamiltonian $H(x)$: MLP with positive output (softplus final layer)
- $J(x)$: skew-symmetric matrix via antisymmetric parameterization $A - A^T$
- $R(x)$: PSD matrix via Cholesky $LL^T$
- Integration: adjoint-based backprop through torchdiffeq / Diffrax
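As a minimal sketch of the constraint layers (in numpy rather than the actual torch/JAX modules, and with fixed random parameters standing in for the state-dependent network outputs $J(x)$, $R(x)$), the skew-symmetrization and Cholesky constructions look like:

```python
import numpy as np

def skew_symmetrize(a_flat, n):
    """Skew-symmetric J = A - A^T from an unconstrained n*n parameter vector."""
    A = a_flat.reshape(n, n)
    return A - A.T

def cholesky_psd(l_flat, n):
    """PSD R = L L^T from n*(n+1)/2 unconstrained lower-triangular entries."""
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = l_flat
    return L @ L.T

rng = np.random.default_rng(0)
n = 4
J = skew_symmetrize(rng.normal(size=n * n), n)
R = cholesky_psd(rng.normal(size=n * (n + 1) // 2), n)

# Structural guarantees hold for ANY parameter values, which is the point:
# the optimizer can move freely without ever leaving the PH system class.
assert np.allclose(J, -J.T)                     # skew-symmetric
assert np.all(np.linalg.eigvalsh(R) >= -1e-10)  # positive semi-definite
```

In the real architecture the `a_flat` / `l_flat` vectors would be produced by MLPs conditioned on the state $x$; the constraint layers themselves are unchanged.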
Experiments
Evaluated on pendulum, double pendulum, spring-mass-damper, and fluid systems. I compare energy drift over long rollouts across a vanilla Neural ODE, a Hamiltonian Neural Network (HNN), and the port-HNN. Metric: relative energy error $|H(x(t)) - H(x_0)| / H(x_0)$.
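To make the metric concrete, here is a sketch using the ground-truth pendulum Hamiltonian as a stand-in for the learned $H$ (unit mass, length, and gravity are my assumptions; the experiments evaluate the same quantity along learned-model rollouts):

```python
import numpy as np

def H(x):
    """Pendulum Hamiltonian H(q, p) = p^2/2 + (1 - cos q), unit constants."""
    q, p = x
    return 0.5 * p**2 + (1.0 - np.cos(q))

def f(x):
    """Canonical dynamics x_dot = J grad H with J = [[0, 1], [-1, 0]]."""
    q, p = x
    return np.array([p, -np.sin(q)])

def rk4_rollout(x0, dt, steps):
    """Fixed-step RK4 rollout; returns the (steps+1, 2) trajectory."""
    xs = [x0]
    x = x0
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(x)
    return np.array(xs)

traj = rk4_rollout(np.array([1.0, 0.0]), dt=0.01, steps=1000)
energies = np.array([H(x) for x in traj])
rel_err = np.abs(energies - energies[0]) / energies[0]
# For this conservative system the drift stays tiny over the whole rollout.
assert rel_err.max() < 1e-6
```

For the dissipative and port-coupled benchmarks, raw drift is not the target — there the rollout is checked against the power balance $\dot{H} = -\nabla H^\top R \nabla H + y^\top u$ instead of against constancy.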
Status
Ongoing. Conservative systems working well. Dissipative and port-coupled systems in progress.