Matthieu Kirchmeyer

I am an ML researcher at Roche-Genentech, on the Prescient Design team.

I obtained my PhD from Sorbonne Université. My PhD research focused on improving the out-of-distribution generalization of deep learning models for classification and physical dynamics modelling. I hold a Master's degree from Mines Paris - PSL and from École Normale Supérieure Paris-Saclay (Master MVA).

Email  /  Google Scholar  /  Semantic Scholar  /  GitHub  /  LinkedIn  /  Twitter

News

  • 01/2023: Spotlight paper (notable-top-25%) @ ICLR 2023.

  • 10/2022: Paper @ NeurIPS 2022.

  • 07/2022: Paper @ ICML 2022.

  • 04/2022: Paper @ ICLR 2022, and I am a highlighted reviewer!

  • 04/2022: We are organizing a challenge on domain generalization for computational advertising at ECML-PKDD 2022. Check it out!

Publications
Continuous PDE Dynamics Forecasting with Implicit Neural Representations
Y. Yin*, M. Kirchmeyer*, J-Y. Franceschi*, A. Rakotomamonjy, P. Gallinari (*equal contribution)
ICLR 2023 - Spotlight (notable-top-25%). Preliminary versions at the NeurIPS 2022 AI4Science Workshop and the ICLR 2023 Neural Fields Workshop.
arXiv / OpenReview / code

We propose DINo, a novel continuous-time, continuous-space neural PDE forecaster with extensive spatiotemporal extrapolation capabilities, including generalization to unseen free-form sparse meshes and resolutions. DINo combines a neural ODE model with Implicit Neural Representations (INRs). Check out the videos on our GitHub page!
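For intuition only, here is a minimal sketch of the two ingredients in a DINo-style forecaster, with hypothetical module names and a plain Euler step standing in for a proper neural ODE solver (an illustrative simplification, not the paper's implementation):

    import torch
    import torch.nn as nn

    class INRDecoder(nn.Module):
        """Implicit neural representation: maps a spatial coordinate plus a latent code to the field value."""
        def __init__(self, coord_dim=2, latent_dim=64, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(coord_dim + latent_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, coords, z):
            # coords: (n_points, coord_dim); z: (latent_dim,), shared by all query points
            z_rep = z.expand(coords.shape[0], -1)
            return self.net(torch.cat([coords, z_rep], dim=-1))

    class LatentDynamics(nn.Module):
        """Continuous-time dynamics of the latent code, rolled out here with Euler steps."""
        def __init__(self, latent_dim=64, hidden=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))

        def rollout(self, z0, n_steps, dt):
            z, trajectory = z0, [z0]
            for _ in range(n_steps):
                z = z + dt * self.net(z)  # Euler step of dz/dt = f(z)
                trajectory.append(z)
            return torch.stack(trajectory)

Because the decoder is queried pointwise, the latent trajectory can be decoded on meshes, resolutions and time steps never seen during training, which is where the space-time continuity comes from.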

Diverse Weight Averaging for Out-of-Distribution Generalization
A. Ramé*, M. Kirchmeyer*, T. Rahier, A. Rakotomamonjy, P. Gallinari, M. Cord (*equal contribution)
NeurIPS 2022. Preliminary version at the ICML 2022 PODS Workshop.
arXiv / OpenReview / code / slides / poster

DiWA is a new weight averaging method for OOD generalization that promotes diversity among the averaged models. It achieves state-of-the-art results on the challenging DomainBed benchmark against recent OOD methods, without any additional inference overhead, and is backed theoretically by a bias-variance decomposition of weight averaging.
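The core operation is simple to state; below is a minimal illustrative sketch of averaging the weights of several independently fine-tuned models that share the same architecture (not the authors' code):

    import copy
    import torch

    def average_weights(models):
        """Average the parameters of several torch.nn.Module instances with identical architecture."""
        averaged = copy.deepcopy(models[0])
        state = averaged.state_dict()
        for key in state:
            # Mean of the corresponding tensor across all fine-tuned models.
            stacked = torch.stack([m.state_dict()[key].float() for m in models], dim=0)
            state[key] = stacked.mean(dim=0).to(state[key].dtype)
        averaged.load_state_dict(state)
        return averaged

In DiWA, the averaged models come from independent fine-tuning runs launched from a shared initialization with different hyperparameters, which is where the diversity comes from.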

Generalizing to New Physical Systems via Context-Informed Dynamics Model
M. Kirchmeyer*, Y. Yin*, J. Donà, N. Baskiotis, A. Rakotomamonjy, P. Gallinari (*equal contribution)
ICML 2022
arXiv / PMLR / code / slides / poster / video

We propose CoDA, a new context-informed framework based on hypernetworks that adapts neural forecasters quickly, parameter- and sample-efficiently, to new PDE and ODE parameters, thereby limiting the retraining cost on new systems.
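As a rough, hypothetical illustration of the hypernetwork idea (not the paper's exact parameterization), a layer can condition a low-rank update of shared weights on a small per-environment context vector:

    import torch
    import torch.nn as nn

    class ContextAdaptedLinear(nn.Module):
        """Linear layer whose shared weights are shifted by a rank-1, context-conditioned update."""
        def __init__(self, in_dim, out_dim, ctx_dim):
            super().__init__()
            self.weight = nn.Parameter(0.02 * torch.randn(out_dim, in_dim))  # shared across systems
            self.bias = nn.Parameter(torch.zeros(out_dim))
            # Hypernetwork: maps an environment context to a rank-1 weight update.
            self.ctx_to_u = nn.Linear(ctx_dim, out_dim, bias=False)
            self.ctx_to_v = nn.Linear(ctx_dim, in_dim, bias=False)

        def forward(self, x, ctx):
            # x: (batch, in_dim); ctx: (ctx_dim,), learned per environment
            delta = torch.outer(self.ctx_to_u(ctx), self.ctx_to_v(ctx))
            return x @ (self.weight + delta).T + self.bias

Adapting to a new system then amounts to fitting only the low-dimensional context while the shared weights stay frozen, which is what makes adaptation parameter- and sample-efficient.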

Mapping conditional distributions for domain adaptation under generalized target shift
M. Kirchmeyer, A. Rakotomamonjy, E. de Bézenac, P. Gallinari
ICLR 2022. Also presented at CAp 2022 (oral).
arXiv / OpenReview / code / slides / poster / video

We propose OSTAR, a new deep learning method that aligns pretrained representations for unsupervised domain adaptation under both conditional and label shift. OSTAR introduces useful regularization biases in neural networks through Optimal Transport.
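For intuition only, here is a tiny sketch of mapping source features onto target features with an optimal transport barycentric projection using the POT library (a simplified stand-in, not the method's actual learned mapping):

    import numpy as np
    import ot  # POT: Python Optimal Transport

    def ot_barycentric_map(source_feats, target_feats):
        """Map source features onto the target domain via the OT barycentric projection."""
        n_s, n_t = len(source_feats), len(target_feats)
        a = np.full(n_s, 1.0 / n_s)              # uniform weights on source samples
        b = np.full(n_t, 1.0 / n_t)              # uniform weights on target samples
        M = ot.dist(source_feats, target_feats)  # squared Euclidean cost matrix
        plan = ot.emd(a, b, M)                   # exact optimal transport coupling
        # Each source point moves to the plan-weighted average of target points.
        return (plan / plan.sum(axis=1, keepdims=True)) @ target_feats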

Unsupervised domain adaptation with non-stochastic missing data
M. Kirchmeyer, P. Gallinari, A. Rakotomamonjy, A. Mantrach
ECML-PKDD 2021 - Data Mining and Knowledge Discovery journal.
arXiv / journal link / pdf / code / slides

We propose a new deep learning model to impute non-stochastic missing data, as seen in cold-start problems in recommender systems or imputation problems in computer vision. It performs unsupervised domain adaptation by leveraging supervision from a fully observed domain.

Older
Conformal Robotic Stereolithography
A. Stevens*, R. Oliver*, M. Kirchmeyer, J. Wu, L. Chin, E. Polsen, C. Archer, C. Boyle, J. Garber and J. Hart (*equal contribution)
3D Printing and Additive Manufacturing journal, 2016.
journal link

We present a robotic system capable of maskless, layerwise photopolymerization on curved surfaces. This work on robotic 3D printing was done during a 5-month internship in the Mechanosynthesis Group of the Mechanical Engineering department at MIT.


Template modified from here