13 accepted papers and two spotlights at NeurIPS 2025
Congratulations to Professor Kasper Green Larsen and PhD students Natascha Schalburg and Mikael Møller Høgsgaard, whose papers have been selected for spotlight presentations at NeurIPS 2025, one of the world’s leading AI conferences. Spotlight papers are reserved for the top 3.55% of submissions, underscoring the significance of this achievement.
The spotlight papers
- Tight Generalization Bounds for Large-Margin Halfspaces
Authors: Kasper Green Larsen (AU), Natascha Schalburg (AU)
This paper resolves a decades-long open question by proving the first asymptotically tight generalization bound for large-margin halfspaces - one of the most fundamental models in machine learning. The result precisely characterizes how well such models can generalize from training data to unseen data. By strengthening the mathematical backbone of machine learning, the work supports the development of more robust AI systems.
- On Agnostic PAC Learning in the Small Error Regime
Authors: Julian Asilis (USC), Mikael Møller Høgsgaard (AU), Grigoris Velegkas (Yale University, now Google Research)
This work advances the understanding of optimal learning in one of the most well-known models — agnostic PAC learning. Specifically, it determines how well learning is possible when the data are only mildly noisy. Combined with previous results, it provides an almost complete characterization of the best achievable performance in this setting. In short, it brings us very close to a comprehensive theory of optimal machine learning under noise.
Strong representation from the department
In total, researchers from the department have contributed to 13 accepted papers at NeurIPS 2025, reflecting both theoretical advances and applied innovations. Topics include boosting, differentially private data analysis, explainability methods for time series and visual data, and detecting covert advertisements on social media.
📄 Full list of accepted papers from the department (with links where available):
- Tight Generalization Bounds for Large-Margin Halfspaces - Kasper Green Larsen, Natascha Schalburg (Spotlight) arXiv
- On Agnostic PAC Learning in the Small Error Regime - Julian Asilis, Mikael Møller Høgsgaard, Grigoris Velegkas (Spotlight) arXiv
- Revisiting Agnostic Boosting - Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice, Yuxin Sun arXiv
- CHASM: Unveiling Covert Advertisements on Chinese Social Media - Jingyi Zheng, Tianyi Hu, Yule Liu, Zhen Sun, Zongmin Zhang, Wenhan Dong, Zifan Peng, Xinlei He
- Automatic Auxiliary Task Selection and Adaptive Weighting Boost Molecular Property Prediction - Zhiqiang Zhong, Davide Mottin
- Simple and Optimal Sublinear Algorithms for Mean Estimation - Beatrice Bertolotti, Matteo Russo, Chris Schwiegelshohn, Sudarshan Shyam arXiv
- PREAMBLE: Private and Efficient Aggregation via Block Sparse Vectors - Hilal Asi, Vitaly Feldman, Hannah Keller, Guy N. Rothblum, Kunal Talwar arXiv
- Differentially Private Quantiles with Smaller Error - Jacob Imola, Fabrizio Boninsegna, Hannah Keller, Anders Aamand, Amrita Roy Chowdhury, Rasmus Pagh arXiv
- LeapFactual: Robust Visual Counterfactual Explanation Using Conditional Flow Matching - Zhuo Cao, Xuan Zhao, Lena Krieger, Hanno Scharr, Ira Assent
- Ultrametric Cluster Hierarchies: I Want ‘em All! - Andrew Draganov, Pascal Weber, Rasmus Skibdahl Melanchton Jørgensen, Anna Beer, Claudia Plant, Ira Assent arXiv
- MIX: A Multi-view Time-Frequency Interactive Explanation Framework for Time Series Classification - Viet-Hung Tran, Ngoc Phu Doan, Zichi Zhang, Tuan Dung Pham, Phi Hung Nguyen, Xuan Hoang Nguyen, Hans Vandierendonck, Ira Assent, Son T. Mai arXiv
- Explaining the Law of Supply and Demand via Online Learning - Stratis Skoulakis
- Optimism Without Regularization: Constant Regret in Zero-Sum Games - John Lazarsfeld, Georgios Piliouras, Ryann Sim, Stratis Skoulakis arXiv
This strong presence at NeurIPS highlights the department’s role in advancing both the theory and practice of machine learning - with impact that reaches far beyond academia. Learn more about NeurIPS 2025 at neurips.cc.