|Date|Fri 10 Mar|
|Time|13:00–14:00|
Title: Robustness and complexity tradeoffs in inference and learning
Inference and learning from data rely on assumptions about the generative process of the data. While some minimal assumptions are necessary, overly stringent ones compromise robustness. However, additional assumptions may allow for improvements in sample complexity, leading to a robustness-complexity tradeoff.
In this talk, we show how a better understanding of this tradeoff leads to improved inference and learning algorithms. Specifically, we discuss two problems where robustness can be gained at essentially no cost in sample complexity. First, we show that for the problem of active ranking from pairwise comparisons, popular parametric assumptions only limit robustness while failing to provide any significant reduction in sample complexity. Second, going "off the grid" in sparse linear models improves estimation accuracy without compromising sample complexity.
Reinhard Heckel is a postdoctoral researcher in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Before that, he spent a year in the Cognitive Computing & Computational Sciences Department at IBM Research, Zurich. He completed his Ph.D. in August 2014 in the Department of Information Technology and Electrical Engineering at ETH Zurich, advised by Helmut Bölcskei. In Fall 2013, he was a visiting Ph.D. student in the Statistics Department at Stanford University. Reinhard is interested in machine learning, statistics, and signal processing.