SystemX is organising its first workshop on hybrid modeling combining simulation and machine learning, to be held on November 18.
The Deep Learning revolution of the last decade has gradually spread to every field of the digital sciences. Numerical simulation is no exception: Deep Learning techniques are increasingly used at every stage of the modeling, simulation, optimization, and control of physical systems.
The IA2 Program – Artificial Intelligence and Augmented Engineering – at IRT SystemX aims to combine Artificial Intelligence techniques with the methods already deployed in industrial engineering. The HSA project focuses on the hybridization of simulation with Machine Learning and is organizing its first workshop, featuring an international keynote by Maziar Raissi (University of Colorado Boulder) and an invited talk by Emmanuel Franck (Inria Nancy – Grand Est). The workshop will also give researchers in the field an opportunity to present their most recent work, and we solicit contributed presentations on topics including, but not limited to, the following:
- Handling and explaining the massive output data of heavy numerical simulations
- Accelerating numerical simulations with Deep / Machine Learning
- Improving the accuracy or robustness of simulations with Machine Learning
- Learning to solve ODEs and PDEs
- Discovering mechanistic/behavioral models from data
- Incorporating physical constraints in Deep Learning
Call for participation
To submit your abstract, please click here.
The detailed program will be announced soon.
Keynote #1: International Keynote: Data-Efficient Deep Learning using Physics-Informed Neural Networks – Maziar Raissi
Abstract: A grand challenge with great opportunities is to develop a coherent framework that enables blending conservation laws, physical principles, and/or phenomenological behaviors expressed by differential equations with the vast data sets available in many fields of engineering, science, and technology. At the intersection of probabilistic machine learning, deep learning, and scientific computations, this work is pursuing the overall vision to establish promising new directions for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data. To materialize this vision, this work is exploring two complementary directions: (1) designing data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and non-linear differential equations, to extract patterns from high-dimensional data generated from experiments, and (2) designing novel numerical algorithms that can seamlessly blend equations and noisy multi-fidelity data, infer latent quantities of interest (e.g., the solution to a differential equation), and naturally quantify uncertainty in computations.
Biography: I am currently an Assistant Professor of Applied Mathematics at the University of Colorado Boulder. I received my Ph.D. in Applied Mathematics & Statistics, and Scientific Computations from University of Maryland College Park. I then moved to Brown University to carry out my postdoctoral research in the Division of Applied Mathematics. I then worked at NVIDIA in Silicon Valley for a little more than one year as a Senior Software Engineer before moving to Boulder. My expertise lies at the intersection of Probabilistic Machine Learning, Deep Learning, and Data Driven Scientific Computing. In particular, I have been actively involved in the design of learning machines that leverage the underlying physical laws and/or governing equations to extract patterns from high-dimensional data generated from experiments.
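To make the physics-informed idea in the abstract above concrete, here is a minimal, illustrative sketch: the solution of the ODE u'(t) = -u(t), u(0) = 1 is approximated by minimizing the equation residual at collocation points. For simplicity the sketch uses a polynomial surrogate solved by linear least squares instead of a neural network trained by gradient descent, which is what an actual PINN would use; the names and setup here are our own, not the speaker's.

```python
import numpy as np

# Toy "physics-informed" fit for the ODE u'(t) = -u(t), u(0) = 1 on [0, 1].
# The surrogate u(t) = 1 + sum_k c_k t^(k+1) hard-codes the initial
# condition; the coefficients are chosen to minimise the ODE residual
# u' + u at collocation points. The residual is affine in c, so plain
# least squares suffices here (a real PINN uses a neural net instead).
def fit_pinn_poly(degree=5, n_colloc=50):
    t = np.linspace(0.0, 1.0, n_colloc)
    k = np.arange(degree)
    # Residual of u' + u stacked over collocation points: A @ c + 1.
    A = (k + 1) * t[:, None] ** k + t[:, None] ** (k + 1)
    c, *_ = np.linalg.lstsq(A, -np.ones(n_colloc), rcond=None)
    return lambda x: 1.0 + np.sum(c * np.asarray(x)[..., None] ** (k + 1), axis=-1)

u = fit_pinn_poly()
print(u(np.array(1.0)))  # close to the exact value exp(-1) ≈ 0.3679
```

The same residual-minimization principle carries over directly when the surrogate is a neural network: one simply replaces the least-squares solve with stochastic gradient descent on the squared residual loss.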
Keynote #2: Hybrid numerical methods and models for plasma physics and gas dynamics – Emmanuel Franck
Abstract: In this talk, we will present several examples of coupling classical PDE approaches with deep learning methods for physics applications. First, we will focus on the numerical resolution of compressible fluid equations in 1D, such as the Euler equations. We will introduce two related numerical methods and show how they can be improved by introducing deep learning based on neural networks. We will draw the link between the proposed methods and reinforcement learning before presenting preliminary results and comparing the two new methods obtained. We will also briefly discuss the extension to unstructured meshes. In a second step, using an example from plasma physics, we will propose a new model reduction based on learning techniques and discuss its advantages over classical methods.
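For context on the model-reduction theme of this abstract, the following sketch shows the classical linear baseline that learning-based reduction aims to improve: Proper Orthogonal Decomposition (POD), where simulation snapshots are compressed onto the leading SVD modes. The synthetic data below is our own toy construction (exact rank 3), not anything from the talk.

```python
import numpy as np

# Classical linear model reduction via Proper Orthogonal Decomposition (POD):
# each column of the snapshot matrix is the simulated state at one time
# instant; an SVD yields an orthonormal reduced basis, and states are
# approximated by their projection onto the leading r modes. Learning-based
# reduction replaces this linear projection with a nonlinear one (e.g. an
# autoencoder).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)

# Synthetic snapshots built from 3 spatial modes -> the data has exact rank 3.
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)], axis=1)  # (200, 3)
amplitudes = rng.normal(size=(3, 50))                                      # (3, 50)
snapshots = modes @ amplitudes                                             # (200, 50)

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3                          # number of retained POD modes
basis = U[:, :r]               # orthonormal reduced basis (200, r)
coeffs = basis.T @ snapshots   # reduced coordinates (r, 50)
reconstruction = basis @ coeffs

rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(rel_err)  # near machine precision, since the data has exact rank 3
```

On genuinely nonlinear dynamics (e.g. travelling fronts in plasma or fluid problems), the singular values decay slowly and many linear modes are needed, which is precisely the limitation that motivates nonlinear, learning-based reduced models.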
Registration is free but mandatory.