Methods of transfer learning try to combine knowledge from several related tasks (or domains) to improve performance on a test task. Inspired by causal methodology, we relax …
Two apparently unrelated fields—normalizing flows and causality—have recently received considerable attention in the machine learning community. In this work, we highlight an …
Label noise generally degrades the performance of deep learning algorithms because deep neural networks easily overfit label errors. Let $X$ and $Y$ denote the instance and …
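Since this snippet is cut off before its noise model is introduced, the sketch below only illustrates one standard formalization consistent with the notation: clean labels corrupted into noisy labels through a class-conditional transition matrix. The number of classes, the flip rate, and the matrix itself are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch of class-conditional label noise, assuming the truncated
# snippet goes on to introduce a noisy-label variable. The number of classes,
# the flip rate, and the transition matrix T are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_samples, flip_rate = 3, 1000, 0.2

# Clean labels Y, drawn uniformly over the classes.
Y = rng.integers(n_classes, size=n_samples)

# Class-conditional noise: T[i, j] = P(noisy label = j | clean label = i).
T = np.full((n_classes, n_classes), flip_rate / (n_classes - 1))
np.fill_diagonal(T, 1.0 - flip_rate)

# Each noisy label is sampled from the row of T indexed by the clean label.
Y_noisy = np.array([rng.choice(n_classes, p=T[y]) for y in Y])

print("observed noise rate:", (Y_noisy != Y).mean())  # close to flip_rate
```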
Statistical learning relies upon data sampled from a distribution, and we usually do not care what actually generated it in the first place. From the point of view of causal modeling, the …
In many scientific fields, such as economics and neuroscience, we are often faced with nonstationary time series, and concerned with both finding causal relations and forecasting …
We study the problem of causal structure learning in linear systems from observational data given in multiple domains, across which the causal coefficients and/or the distribution of the …
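As a rough illustration of the setting described in that snippet, the sketch below generates observational data from a two-variable linear SEM in two domains whose causal coefficient and noise scale differ. The graph, the coefficients, and the sample size are illustrative assumptions, not anything taken from the paper.

```python
# A rough sketch of the multi-domain linear-SEM setting: the same causal
# direction X1 -> X2 in every domain, but the causal coefficient and the
# noise scale change across domains. Graph, coefficients, and sample size
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

def sample_domain(coef, noise_scale):
    """Observational data from X2 = coef * X1 + E, with E ~ N(0, noise_scale^2)."""
    x1 = rng.normal(size=n)
    x2 = coef * x1 + noise_scale * rng.normal(size=n)
    return np.column_stack([x1, x2])

domains = {"domain_1": sample_domain(coef=0.8, noise_scale=1.0),
           "domain_2": sample_domain(coef=2.0, noise_scale=0.5)}

for name, D in domains.items():
    # Within each domain the OLS slope recovers that domain's causal coefficient.
    slope = np.cov(D[:, 0], D[:, 1])[0, 1] / np.var(D[:, 0], ddof=1)
    print(name, "estimated coefficient:", round(slope, 2))
```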
Today's methods for uncovering causal relationships from observational data either constrain functional assignments (linearity/additive noise assumptions) or the data …
In practice, accurately annotating large-scale datasets can be difficult, so the datasets used to train deep learning models are likely to contain label noise. To make use of the …
To determine the causal relationship between two variables, approaches based on Functional Causal Models (FCMs) that properly restrict the model class have been proposed; however …
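One widely used restricted model class is the additive noise model, so the hedged sketch below shows that flavor of FCM-based direction determination: fit a regression in each direction and prefer the direction whose residuals appear independent of the input. The cubic mechanism, the polynomial regression, and the HSIC-style dependence score are illustrative choices, not necessarily the method this snippet refers to.

```python
# A hedged sketch of FCM-based cause-effect determination with one common
# restricted class, the additive noise model: regress each variable on the
# other and keep the direction whose residuals look independent of the input.
# The cubic mechanism, polynomial fit, and HSIC-style score are illustrative
# choices, not necessarily the model class the snippet refers to.
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(-1, 1, n)
y = x ** 3 + 0.1 * rng.uniform(-1, 1, n)   # ground truth: X -> Y with additive noise

def hsic(a, b):
    """Biased HSIC estimate with Gaussian kernels on standardized inputs."""
    def gram(v):
        v = (v - v.mean()) / v.std()
        return np.exp(-((v[:, None] - v[None, :]) ** 2) / 2.0)
    K, L = gram(a), gram(b)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def residual(inp, out, degree=5):
    """Residual of a polynomial regression of `out` on `inp` (the additive noise)."""
    return out - np.polyval(np.polyfit(inp, out, degree), inp)

score_xy = hsic(x, residual(x, y))   # causal direction: residual ~ independent of X
score_yx = hsic(y, residual(y, x))   # anticausal direction: dependence remains
print("inferred direction:", "X -> Y" if score_xy < score_yx else "Y -> X")
```

The asymmetry arises because inverting a nonlinear additive-noise mechanism generally cannot be rewritten as another additive-noise model, so the residuals in the backward direction remain dependent on their input.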