This talk highlights recent advances in Physics-Informed Machine Learning (PIML), focusing on innovations in Kolmogorov-Arnold Networks (KANs). Since their introduction in 2017, Physics-Informed Neural Networks (PINNs) have become essential tools for solving differential equations from sparse data. Advances in network architectures, adaptive refinement, domain decomposition, and weighting strategies have further enhanced their performance. Among these, Physics-Informed Kolmogorov-Arnold Networks (PIKANs), based on Kolmogorov’s 1957 representation theorem, offer a robust alternative to traditional PINNs.
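The core PINN idea mentioned above can be illustrated with a toy sketch: a composite loss that combines a differential-equation residual at collocation points with a misfit on sparse observations. The exponential ansatz, the ODE, and all names below are illustrative assumptions, not the talk's actual setup; a real PINN uses a neural network and automatic differentiation rather than a one-parameter ansatz and finite differences.

```python
import numpy as np

# Toy sketch (assumed example, not from the talk): PINN-style loss for the
# ODE u'(t) = -u(t) with u(0) = 1, whose exact solution is u(t) = exp(-t).
# The "network" is a one-parameter ansatz u(t) = exp(a * t).

def u(t, a):
    return np.exp(a * t)

def pinn_loss(a, t_collocation, t_data, u_data, eps=1e-5):
    # Physics residual u'(t) + u(t); derivative via central finite differences
    du = (u(t_collocation + eps, a) - u(t_collocation - eps, a)) / (2 * eps)
    residual = du + u(t_collocation, a)
    loss_physics = np.mean(residual**2)
    # Data misfit on sparse "measurements"
    loss_data = np.mean((u(t_data, a) - u_data)**2)
    return loss_physics + loss_data

t_col = np.linspace(0.0, 1.0, 20)          # collocation points
t_obs = np.array([0.0, 0.5])               # sparse observation times
u_obs = np.exp(-t_obs)                     # observations of the true solution
print(pinn_loss(-1.0, t_col, t_obs, u_obs))  # near zero at the true a = -1
print(pinn_loss(0.5, t_col, t_obs, u_obs))   # larger for a wrong exponent
```

Minimizing this loss over the parameters recovers a solution consistent with both the equation and the data, which is the mechanism PINNs and PIKANs share; they differ in the function class used for the ansatz.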
A key application of these developments is the Artificial Intelligence Velocimetry-Thermometry (AIVT) method, which infers continuous, differentiable temperature fields in turbulent convection from sparse 3D velocity data. AIVT achieves high accuracy, validated against a unique experimental dataset combining Particle Image Thermometry and Lagrangian Particle Tracking. The method demonstrates fidelity consistent with theoretical predictions and performance comparable to direct numerical simulations, offering novel insights into turbulent flows.
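The hidden-field inference behind AIVT can be sketched in one dimension: temperature never appears in the data loss; it is recovered by penalizing the residual of a governing equation that couples it to the measured velocity. The steady 1D advection-diffusion equation, the parametric profile, and all names below are simplifying assumptions for illustration; the actual AIVT method operates on the 3D Boussinesq equations with neural networks.

```python
import numpy as np

# Toy sketch (assumed example, not the AIVT implementation): recover a hidden
# temperature field T(x) from the steady 1D energy equation
#     u * dT/dx = kappa * d2T/dx2,  u = 1, kappa = 1, T(0) = 0, T(1) = 1,
# whose exact solution is T(x) = (exp(x) - 1) / (e - 1).

def T(x, a):
    # Parametric ansatz for the hidden temperature field
    return (np.exp(a * x) - 1.0) / (np.exp(a) - 1.0)

def aivt_loss(a, x, u=1.0, kappa=1.0, eps=1e-3):
    # Finite-difference derivatives of the ansatz
    dT = (T(x + eps, a) - T(x - eps, a)) / (2 * eps)
    d2T = (T(x + eps, a) - 2 * T(x, a) + T(x - eps, a)) / eps**2
    residual = u * dT - kappa * d2T               # energy-equation residual
    bc = T(np.array([0.0]), a)[0]**2 + (T(np.array([1.0]), a)[0] - 1.0)**2
    return np.mean(residual**2) + bc

x = np.linspace(0.05, 0.95, 19)
print(aivt_loss(1.0, x))   # near zero: a = 1 matches the exact solution
print(aivt_loss(2.0, x))   # larger for a wrong temperature profile
```

Because the physics residual is differentiable in the field parameters, the inferred temperature is continuous and differentiable everywhere, not just at measurement points, which is what enables the comparisons with direct numerical simulation described above.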
Additionally, the talk introduces the Kurkova-Kolmogorov-Arnold Network (KKAN), a two-block architecture inspired by the Kolmogorov-Arnold representation theorem and Kůrková’s approximation principles. KKAN integrates multi-layer perceptrons with flexible basis functions, achieving superior performance in function approximation, physics-informed learning, and operator-learning tasks. Insights into KKAN’s learning dynamics, analyzed through information bottleneck theory, reveal consistent trends across representation models and tasks, highlighting correlations between geometric complexity and generalization capability. These findings provide valuable perspectives on the behavior and utility of advanced PIML architectures.
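The two-block structure described above can be sketched as an inner MLP that produces intermediate features, followed by an outer block in which each feature passes through a learnable combination of fixed basis functions. The layer sizes, the choice of Chebyshev polynomials, and all names below are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

# Toy sketch (assumed, not the paper's exact KKAN): two-block forward pass
#   inner block:  a small MLP maps the input to q intermediate features
#   outer block:  each feature goes through a learnable combination of
#                 fixed basis functions (here: Chebyshev polynomials)

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # One-hidden-layer MLP with tanh activation (inner block)
    return np.tanh(x @ W1 + b1) @ W2 + b2

def basis_expansion(z, degree=4):
    # Chebyshev basis T_0..T_degree evaluated at z (clipped to [-1, 1])
    z = np.clip(z, -1.0, 1.0)
    return np.stack([np.cos(k * np.arccos(z)) for k in range(degree + 1)],
                    axis=-1)

def kkan_forward(x, params):
    W1, b1, W2, b2, C = params
    z = mlp(x, W1, b1, W2, b2)            # inner block: (n, q) features
    B = basis_expansion(z)                # (n, q, degree+1) basis values
    phi = np.einsum("nqk,qk->nq", B, C)   # outer block: per-feature combo
    return phi.sum(axis=1)                # sum over the q outer functions

n_in, hidden, q, degree = 2, 8, 4, 4
params = (rng.normal(size=(n_in, hidden)), np.zeros(hidden),
          rng.normal(size=(hidden, q)), np.zeros(q),
          rng.normal(size=(q, degree + 1)))
x = rng.normal(size=(5, n_in))
print(kkan_forward(x, params).shape)  # one scalar output per input row: (5,)
```

This mirrors the Kolmogorov-Arnold form f(x) = Σ_q Φ_q(Σ_p ψ_qp(x_p)): the MLP plays the role of the inner functions while the basis combinations play the role of the outer functions, which is the hybrid the KKAN name refers to.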