Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp0147429d27x
Title: Invariant Mechanisms in Transfer Learning and Causal Inference: Some Theoretical Perspectives and Algorithms.
Authors: Martinet, Guillaume Gaetan
Advisors: Engelhardt, Barbara E
Contributors: Operations Research and Financial Engineering Department
Keywords: Causal Discovery
Causal Inference
Machine Learning
Transfer Learning
Subjects: Statistics
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University
Abstract: This thesis presents several new ideas about transfer learning, treatment effect estimation, and causal discovery. Our contributions are partly theoretical and partly algorithmic, but all of them rely on a common property: the existence of an invariant mechanism, either between source and target distributions (in the case of transfer learning) or between observational and interventional data (in the case of causal inference). We clarify these notions below.

The first chapter introduces a new minimax analysis of transfer learning in a nonparametric classification setting. More precisely, we study the problem of learning a classifier that generalizes well under a target data distribution Q while most of the labeled data comes from a different but related source distribution P. The invariant mechanism is the conditional distribution of the label given the covariates, which remains identical between P and Q; this assumption is often termed covariate shift. We derive a new notion, the transfer exponent γ, that accurately characterizes the difficulty of transfer between P and Q in terms of achievable minimax rates. We also show that a recent semi-supervised k-NN algorithm can be refined to adapt to the unknown γ while requesting labels of target data only when beneficial.

In the second chapter, we show that balance-weighting approaches to treatment effect estimation can be restated as discrepancy minimization problems under the covariate-shift assumption. While balance-weighting methods have mainly focused on binary treatments, this perspective suggests how to generalize these approaches to continuous and multivariate treatments.

In the last chapter, we address the more qualitative problem of recovering the direct causes of a target variable using data from different experimental settings. We show that a recent and already influential method, invariant causal prediction (ICP), admits a much more efficient formulation. Our reformulation performs a series of nonparametric tests based on the minimization of a new loss function, the Wasserstein variance, which we derive from optimal transport theory; while ICP's runtime scales exponentially in the number of variables, our approach scales only linearly. We establish our method's performance both theoretically and empirically.
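For concreteness, the covariate-shift assumption invoked in the first two chapters can be written as follows; this is the standard formulation of the property stated in the abstract, and the notation is illustrative rather than quoted from the thesis:

    P(Y \mid X = x) \;=\; Q(Y \mid X = x) \quad \text{for every } x,
    \qquad \text{while in general } P_X \neq Q_X .

That is, the labeling mechanism is shared between source and target, and only the covariate marginals are allowed to differ.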
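The abstract does not define the Wasserstein variance explicitly. One natural reading, consistent with the notion of Fréchet variance in Wasserstein space, is the minimal average squared Wasserstein distance from the per-environment distributions to a common barycenter. The sketch below assumes that one-dimensional W2 formulation and is purely illustrative; the function and variable names are ours, not taken from the thesis.

    import numpy as np

    def wasserstein_variance_1d(samples_by_env, n_grid=200):
        """Illustrative Fréchet variance in 1-D Wasserstein-2 space.

        In one dimension, the W2 barycenter's quantile function is the
        pointwise mean of the environments' quantile functions, so the
        variance is the mean squared deviation of each environment's
        quantile function from that mean.
        """
        qs = np.linspace(0.0, 1.0, n_grid + 2)[1:-1]   # interior quantile levels
        quantiles = np.stack([np.quantile(x, qs) for x in samples_by_env])
        barycenter = quantiles.mean(axis=0)            # barycenter quantile function
        return np.mean((quantiles - barycenter) ** 2)  # avg. squared W2 distance

    # Toy usage: one sample per experimental setting.
    rng = np.random.default_rng(0)
    envs = [rng.normal(0.0, 1.0, 500),
            rng.normal(0.0, 1.0, 500),
            rng.normal(0.5, 1.0, 500)]
    print(wasserstein_variance_1d(envs))  # small value <=> distributions look invariant

Under this reading, a candidate parent set whose residual distributions are invariant across experimental settings would yield a small value of this loss, which is the kind of quantity the nonparametric tests described above would assess.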
URI: http://arks.princeton.edu/ark:/88435/dsp0147429d27x
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Operations Research and Financial Engineering

Files in This Item:
File: Martinet_princeton_0181D_13929.pdf
Size: 4.52 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.