Publications

DEPARTMENTS

Empirical Inference

Haptic Intelligence

Modern Magnetic Systems

Perceiving Systems

Physical Intelligence

Robotic Materials

Social Foundations of Computation


Research Groups

Autonomous Vision

Autonomous Learning

Bioinspired Autonomous Miniature Robots

Dynamic Locomotion

Embodied Vision

Human Aspects of Machine Learning

Intelligent Control Systems

Learning and Dynamical Systems

Locomotion in Biorobotic and Somatic Systems

Micro, Nano, and Molecular Systems

Movement Generation and Control

Neural Capture and Synthesis

Physics for Inference and Optimization

Organizational Leadership and Diversity

Probabilistic Learning Group



Empirical Inference Conference Paper On the Emergence and Test-Time Use of Structural Information in Large Language Models Chen, M. C., Miller, M., Schölkopf, B., Guo, S. 64th Annual Meeting of the Association for Computational Linguistics (ACL 2026), July 2026 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper Estimating Joint Interventional Distributions from Marginal Interventional Data Garrido Mejia, S., Kirschbaum, E., Kekić, A., Schölkopf, B., Mastakouri, A. A. Proceedings of the Fifth Conference on Causal Learning and Reasoning, 323:1-23, PMLR, 5th Conference on Causal Learning and Reasoning, April 2026 (To be published) arXiv BibTeX

Social Foundations of Computation Conference Paper ROC-n-reroll: How Verifier Imperfection affects Test-Time Scaling Dorner, F. E., Chen, Y. C., Cruz, A. F., Yang, F. Y. The Fourteenth International Conference on Learning Representations (ICLR), April 2026 (Accepted)
Test-time scaling aims to improve language model performance by leveraging additional compute during inference. Many works have empirically studied techniques such as Best-of-N (BoN) and Rejection Sampling (RS) that make use of a verifier to enable test-time scaling. However, to date there is little theoretical understanding of how verifier imperfection affects performance -- a gap we address in this work. Specifically, we prove that the instance-level accuracy of these methods is precisely characterized by the geometry of the verifier's ROC curve. Our theory has two important takeaways, confirmed by experiments with Qwen and LLama models on GSM8K and MATH500. First, RS outperforms BoN for fixed compute, while both methods converge to the same accuracy in the infinite-compute limit. Second, it is generally impossible to predict the high-compute performance of either method based on observations in the low-compute regime.
arXiv BibTeX

Social Foundations of Computation Conference Paper Train-before-Test Harmonizes Language Model Rankings Zhang, G., Dominguez-Olmedo, R., Hardt, M. The Fourteenth International Conference on Learning Representations (ICLR), oral, top 1.18%, January 2026 (Accepted)
Existing language model benchmarks provide contradictory model rankings, even for benchmarks that aim to capture similar skills. This dilemma of conflicting rankings hampers model selection, clouds model comparisons, and adds confusion to a growing ecosystem of competing models. Recent work attributed ranking disagreement to the phenomenon of training on the test task: As released, different models exhibit a different level of preparation for any given test task. A candidate solution to the problem is train-before-test: Give each model the same benchmark-specific finetuning before evaluation. Our primary contribution is a broad empirical evaluation of train-before-test across 24 benchmarks and 61 models. We show that train-before-test significantly improves ranking agreement consistently across all benchmarks. Whereas rankings have little external validity to start with, they enjoy a significant degree of external validity when applying train-before-test: Model rankings transfer gracefully from one benchmark to the other. Even within the same model family, train-before-test reduces strong ranking disagreement to near-perfect agreement. In addition, train-before-test reduces the model-score matrix to essentially rank one, revealing new insights into the latent factors of benchmark performance. Our work supports the recommendation to make train-before-test a default component of LLM benchmarking.
arXiv BibTeX

Empirical Inference Conference Paper A data and task-constrained mechanistic model of the mouse outer retina shows robustness to contrast variations Kadhim, K. L., Beck, J., Huang, Z., Macke, J. H., Rieke, F., Euler, T., Deistler, M., Berens, P. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) bioRxiv BibTeX

Empirical Inference Conference Paper Are Language Models Efficient Reasoners? A Perspective from Logic Programming Opedal, A., Zengaffinen, Y., Shirakami, H., Pasti, C., Sachan, M., Saparov, A., Cotterell, R., Schölkopf, B. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper CauSciBench: Assessing LLM Causal Reasoning for Scientific Research Acharya, S., Zhang, T. J., Kim, A., Haghighat, A., Sun, X., Shrestha, R. B., Mordig, M., Danisman, F., Jose, C., Qi, Y., Cobben, P., Schölkopf, B., Sachan, M., Jin, Z. NeurIPS 2025: 5th Workshop on Mathematical Reasoning and AI (Math-AI) and CauScien Workshop, December 2025 (Published) URL BibTeX

Empirical Inference Conference Paper Counterfactual reasoning: an analysis of in-context emergence Miller, M., Schölkopf, B., Guo, S. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper Cultural Alien Sampler: Open-ended art generation balancing originality and coherence Hernandez, A., Yakura, H., Brinkmann, L., Sola, M. C., Alhaija, H. A., Serna, I., Rahaman, N., Schölkopf, B., Rahwan, I. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, Creative AI Track, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper Do-PFN: In-Context Learning for Causal Effect Estimation Robertson*, J., Reuter*, A., Guo, S., Hollmann, N., Hutter, F., Schölkopf, B. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025, *equal contribution (Accepted) arXiv BibTeX

Empirical Inference Conference Paper Effortless, Simulation-Efficient Bayesian Inference using Tabular Foundation Models Vetter, J., Gloeckler, M., Gedon, D., Macke, J. H. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper FNOPE: Simulation-based inference on function spaces with Fourier Neural Operators Moss, G., Muhle, L. S., Drews, R., Macke, J. H., Schröder, C. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper Identifying multi-compartment Hodgkin-Huxley models with high-density extracellular voltage recordings Tanoh, I. C., Deistler, M., Macke, J. H., Linderman, S. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper Reparameterized LLM Training via Orthogonal Equivalence Transformation Qiu, Z., Buchholz, S., Xiao, T., Dax, M., Schölkopf, B., Liu, W. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper Root Cause Analysis of Outliers with Missing Structural Knowledge Orchard, W. R., Okati, N., Garrido Mejia, S., Blöbaum, P., Janzing, D. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Empirical Inference Conference Paper SPARTAN: A Sparse Transformer World Model Attending to What Matters Lei, A., Schölkopf, B., Posner, I. Advances in Neural Information Processing Systems 38 (NeurIPS 2025), 39th Annual Conference on Neural Information Processing Systems, December 2025 (Accepted) arXiv BibTeX

Organizational Leadership and Diversity Conference Paper Inclusive Leadership in the Age of AI: A Dataset and Comparative Study of LLMs vs. Real-Life Leaders in Workplace Action Planning Singh, V., Schulte im Walde, S., Keplinger, K. Findings of the Association for Computational Linguistics: EMNLP 2025, 19732-19753, Association for Computational Linguistics, Suzhou, China, Empirical Methods in Natural Language Processing, November 2025 (Published)
Generative Large Language Models have emerged as useful tools, reshaping professional workflows. However, their efficacy in inherently complex and human-centric tasks such as leadership and strategic planning remains under-explored. In this interdisciplinary study, we present a novel dataset and compare LLMs and human leaders in the context of workplace action planning, specifically focusing on translating the abstract idea of inclusion into actionable SMART goals. We developed the Leader Success Bot, a script-based chatbot co-designed with domain experts, to guide more than 250 real-life leaders in generating inclusive workplace action plans. We systematically prompted seven state-of-the-art chat-based LLMs to perform the same task using the socio-demographic data of real-life leaders and instructions co-developed with domain experts. Our publicly released dataset enables direct comparison between human and LLM-generated workplace action plans, offering insights into their respective strengths, biases, and limitations. Our findings highlight critical gaps and opportunities for LLMs in leadership applications, fostering interdisciplinary collaboration and NLP applications.
DOI URL BibTeX

Empirical Inference Conference Paper Improving Large Language Model Safety with Contrastive Representation Learning Simko, S., Sachan, M., Schölkopf, B., Jin, Z. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), 28166-28194, (Editors: Christodoulopoulos, Christos and Chakraborty, Tanmoy and Rose, Carolyn and Peng, Violet), Association for Computational Linguistics, November 2025 (Published) arXiv DOI URL BibTeX

Social Foundations of Computation Conference Paper Strategic Hypothesis Testing Hossain, S., Chen, Y., Chen, Y. The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS), Spotlight Poster, top 3%, September 2025 (Accepted)
We examine hypothesis testing within a principal-agent framework, where a strategic agent, holding private beliefs about the effectiveness of a product, submits data to a principal who decides on approval. The principal employs a hypothesis testing rule, aiming to pick a p-value threshold that balances false positives and false negatives while anticipating the agent's incentive to maximize expected profitability. Building on prior work, we develop a game-theoretic model that captures how the agent's participation and reporting behavior respond to the principal's statistical decision rule. Despite the complexity of the interaction, we show that the principal's errors exhibit clear monotonic behavior when segmented by an efficiently computable critical p-value threshold, leading to an interpretable characterization of their optimal p-value threshold. We empirically validate our model and these insights using publicly available data on drug approvals. Overall, our work offers a comprehensive perspective on strategic interactions within the hypothesis testing framework, providing technical and regulatory insights.
arXiv BibTeX

Empirical Inference Deep Models and Optimization Conference Paper Generalized Interpolating Discrete Diffusion von Rütte, D., Fluri, J., Ding, Y., Orvieto, A., Schölkopf, B., Hofmann, T. Proceedings of the 42nd International Conference on Machine Learning (ICML), 267:61810-61843, Proceedings of Machine Learning Research, (Editors: Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry), PMLR, International Conference on Machine Learning, July 2025 (Published) arXiv URL BibTeX

Empirical Inference Conference Paper Generative Intervention Models for Causal Perturbation Modeling Schneider, N., Lorch, L., Kilbertus, N., Schölkopf, B., Krause, A. Proceedings of the 42nd International Conference on Machine Learning (ICML), 267:53388-53412, Proceedings of Machine Learning Research, (Editors: Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry), PMLR, International Conference on Machine Learning, July 2025 (Published) arXiv URL BibTeX

Empirical Inference Conference Paper Learning Joint Interventional Effects from Single-Variable Interventions in Additive Models Kekić, A., Garrido Mejia, S., Schölkopf, B. Proceedings of the 42nd International Conference on Machine Learning (ICML), 267:29651-29669, Proceedings of Machine Learning Research, (Editors: Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry), PMLR, International Conference on Machine Learning, July 2025 (Published) arXiv URL BibTeX

Empirical Inference Conference Paper Position: Probabilistic Modelling is Sufficient for Causal Inference Mlodozeniec, B. K., Krueger, D., Turner, R. E. Proceedings of the 42nd International Conference on Machine Learning (ICML), 267:81810-81840, Proceedings of Machine Learning Research, (Editors: Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry), PMLR, International Conference on Machine Learning, July 2025 (Published) URL BibTeX

Empirical Inference Conference Paper Progressive Tempering Sampler with Diffusion Rissanen*, S., OuYang*, R., He*, J., Chen, W., Heinonen, M., Solin, A., Hernández-Lobato, J. M. Proceedings of the 42nd International Conference on Machine Learning (ICML), 267:51724-51746, Proceedings of Machine Learning Research, (Editors: Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry), PMLR, International Conference on Machine Learning, July 2025, *equal contribution (Published) arXiv URL BibTeX

Empirical Inference Conference Paper Scalable Gaussian Processes with Latent Kronecker Structure Lin, J. A., Ament, A., Balandat, M., Eriksson, D., Hernández-Lobato, J. M., Bakshy, E. Proceedings of the 42nd International Conference on Machine Learning (ICML), 267:37730-37744, Proceedings of Machine Learning Research, (Editors: Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry), PMLR, International Conference on Machine Learning, July 2025 (Published) arXiv URL BibTeX

Social Foundations of Computation Conference Paper How Benchmark Prediction from Fewer Data Misses the Mark Zhang, G., Dorner, F. E., Hardt, M. The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS), June 2025 (Accepted)
Large language model (LLM) evaluation is increasingly costly, prompting interest in methods that speed up evaluation by shrinking benchmark datasets. Benchmark prediction (also called efficient LLM evaluation) aims to select a small subset of evaluation points and predict overall benchmark performance from that subset. In this paper, we systematically assess the strengths and limitations of 11 benchmark prediction methods across 19 diverse benchmarks. First, we identify a highly competitive baseline: Take a random sample and fit a regression model on the sample to predict missing entries. Outperforming most existing methods, this baseline challenges the assumption that careful subset selection is necessary for benchmark prediction. Second, we discover that all existing methods crucially depend on model similarity. They work best when interpolating scores among similar models. The effectiveness of benchmark prediction sharply declines when new models have higher accuracy than previously seen models. In this setting of extrapolation, none of the previous methods consistently beat a simple average over random samples. To improve over the sample average, we introduce a new method inspired by augmented inverse propensity weighting. This method consistently outperforms the random sample average even for extrapolation. However, its performance still relies on model similarity and the gains are modest in general. This shows that benchmark prediction fails just when it is most needed: at the evaluation frontier, where the goal is to evaluate new models of unknown capabilities.
arXiv BibTeX

Empirical Inference Conference Paper Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation Sanyal, A., Hu, Y., Yu, Y., Ma, Y., Wang, Y., Schölkopf, B. Proceedings of the 28th International Conference on Artificial Intelligence and Statistics (AISTATS), 258:2170-2178, Proceedings of Machine Learning Research, (Editors: Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz), PMLR, May 2025 (Published) URL BibTeX

Empirical Inference Conference Paper Training Neural Samplers with Reverse Diffusive KL Divergence He*, J., Chen*, W., Zhang*, M., Barber, D., Hernández-Lobato, J. M. Proceedings of the 28th International Conference on Artificial Intelligence and Statistics (AISTATS), 258:5167-5175, Proceedings of Machine Learning Research, (Editors: Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz), PMLR, May 2025, *equal contribution (Published) URL BibTeX

Empirical Inference Conference Paper Your Finetuned Large Language Model is Already a Powerful Out-of-distribution Detector Zhang, A., Xiao, T. Z., Liu, W., Bamler, R., Wischik, D. Proceedings of the 28th International Conference on Artificial Intelligence and Statistics (AISTATS), 258:2701-2709, Proceedings of Machine Learning Research, (Editors: Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz), PMLR, May 2025 (Published) URL BibTeX

Empirical Inference Autonomous Learning Conference Paper Advancing Out-of-Distribution Detection via Local Neuroplasticity Canevaro, A., Schmidt, J., Marvi, M. S., Yu, H., Martius, G., Jordan, J. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) arXiv BibTeX

Empirical Inference Perceiving Systems Conference Paper Can Large Language Models Understand Symbolic Graphics Programs? Qiu, Z., Liu, W., Feng, H., Liu, Z., Xiao, T. Z., Collins, K. M., Tenenbaum, J. B., Weller, A., Black, M. J., Schölkopf, B. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published)
Against the backdrop of enthusiasm for large language models (LLMs), there is a growing need to scientifically assess their capabilities and shortcomings. This is nontrivial in part because it is difficult to find tasks which the models have not encountered during training. Utilizing symbolic graphics programs, we propose a domain well-suited to test multiple spatial-semantic reasoning skills of LLMs. Popular in computer graphics, these programs procedurally generate visual data. While LLMs exhibit impressive skills in general program synthesis and analysis, symbolic graphics programs offer a new layer of evaluation: they allow us to test an LLM’s ability to answer semantic questions about the images or 3D geometries without a vision encoder. To semantically understand the symbolic programs, LLMs would need to possess the ability to “imagine” and reason how the corresponding graphics content would look with only the symbolic description of the local curvatures and strokes. We use this task to evaluate LLMs by creating a large benchmark for the semantic visual understanding of symbolic graphics programs, built procedurally with minimal human effort. Particular emphasis is placed on transformations of images that leave the image level semantics invariant while introducing significant changes to the underlying program. We evaluate commercial and open-source LLMs on our benchmark to assess their ability to reason about visual output of programs, finding that LLMs considered stronger at reasoning generally perform better. Lastly, we introduce a novel method to improve this ability – Symbolic Instruction Tuning (SIT), in which the LLM is finetuned with pre-collected instruction data on symbolic graphics programs. Interestingly, we find that SIT not only improves LLM’s understanding on symbolic programs, but it also improves general reasoning ability on various other benchmarks.
arXiv Paper BibTeX

Empirical Inference Conference Paper Compositional simulation-based inference for time series Gloeckler*, M., Toyota*, S., Fukumizu, K., Macke, J. H. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) arXiv BibTeX

Empirical Inference Robust Machine Learning Conference Paper Cross-Entropy Is All You Need to Invert the Data Generating Process Reizinger*, P., Bizeul*, A., Juhos*, A., Vogt, J. E., Balestriero, R., Brendel, W., Klindt, D. The Thirteenth International Conference on Learning Representations (ICLR), April 2025, *Joint first authorship (Published) arXiv BibTeX

Empirical Inference Conference Paper Differentially private steering for Large language model alignment Goel, A., Hu, Y., Gurevych, I., Sanyal, A. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) arXiv BibTeX

Empirical Inference Perceiving Systems Conference Paper Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed GFlowNets Liu, Z., Xiao, T. Z., Liu, W., Bengio, Y., Zhang, D. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published)
While one commonly trains large diffusion models by collecting datasets on target downstream tasks, it is often desired to align and finetune pretrained diffusion models with some reward functions that are either designed by experts or learned from small-scale datasets. Existing post-training methods for reward finetuning of diffusion models typically suffer from lack of diversity in generated samples, lack of prior preservation, and/or slow convergence in finetuning. Inspired by recent successes in generative flow networks (GFlowNets), a class of probabilistic models that sample with the unnormalized density of a reward function, we propose a novel GFlowNet method dubbed Nabla-GFlowNet (abbreviated as ∇-GFlowNet), the first GFlowNet method that leverages the rich signal in reward gradients, together with an objective called ∇-DB plus its variant residual ∇-DB designed for prior-preserving diffusion finetuning. We show that our proposed method achieves fast yet diversity- and prior-preserving finetuning of Stable Diffusion, a large-scale text-conditioned image diffusion model, on different realistic reward functions.
arXiv BibTeX

Empirical Inference Conference Paper Improving Probabilistic Diffusion Models With Optimal Covariance Matching Ou*, Z., Zhang*, M., Zhang, A., Xiao, T. Z., Li, Y., Barber, D. The Thirteenth International Conference on Learning Representations (ICLR), April 2025, *equal contribution (Published) arXiv BibTeX

Empirical Inference Conference Paper Influence Functions for Scalable Data Attribution in Diffusion Models Mlodozeniec, B. K., Eschenhagen, R., Bae, J., Immer, A., Krueger, D., Turner, R. E. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) arXiv BibTeX

Empirical Inference Robust Machine Learning Conference Paper Interaction Asymmetry: A General Principle for Learning Composable Abstractions Brady, J., von Kügelgen, J., Lachapelle, S., Buchholz, S., Kipf*, T., Brendel*, W. The Thirteenth International Conference on Learning Representations (ICLR), April 2025, *joint senior author (Published) arXiv BibTeX

Empirical Inference Conference Paper Language Model Alignment in Multilingual Trolley Problems Jin, Z., Kleiman-Weiner, M., Piatti, G., Levine, S., Liu, J., Gonzalez, F., Ortu, F., Strausz, A., Sachan, M., Mihalcea, R., Choi, Y., Schölkopf, B. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) arXiv BibTeX

Social Foundations of Computation Conference Paper Limits to Predicting Online Speech Using Large Language Models Remeli, M., Hardt, M., Williamson, R. C. April 2025 (Submitted)
We study the predictability of online speech on social media, and whether predictability improves with information outside a user's own posts. Recent work suggests that the predictive information contained in posts written by a user's peers can surpass that of the user's own posts. Motivated by the success of large language models, we empirically test this hypothesis. We define unpredictability as a measure of the model's uncertainty, i.e., its negative log-likelihood on future tokens given context. As the basis of our study, we collect a corpus of 6.25M posts from more than five thousand X (previously Twitter) users and their peers. Across three large language models ranging in size from 1 billion to 70 billion parameters, we find that predicting a user's posts from their peers' posts performs poorly. Moreover, the value of the user's own posts for prediction is consistently higher than that of their peers'. Across the board, we find that the predictability of social media posts remains low, comparable to predicting financial news without context. We extend our investigation with a detailed analysis about the causes of unpredictability and the robustness of our findings. Specifically, we observe that a significant amount of predictive uncertainty comes from hashtags and @-mentions. Moreover, our results replicate if instead of prompting the model with additional context, we finetune on additional context.
arXiv BibTeX

Empirical Inference Autonomous Learning Conference Paper On the Transfer of Object-Centric Representation Learning Didolkar, A. R., Zadaianchuk, A., Goyal, A., Mozer, M. C., Bengio, Y., Martius*, G., Seitzer*, M. The Thirteenth International Conference on Learning Representations (ICLR), April 2025, *equal contribution (Published) URL BibTeX

Empirical Inference Conference Paper Preference Elicitation for Offline Reinforcement Learning Pace, A., Schölkopf, B., Rätsch, G., Ramponi, G. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) arXiv BibTeX

Empirical Inference Conference Paper Standardizing Structural Causal Models Ormaniec*, W., Sussex*, S., Lorch*, L., Schölkopf, B., Krause, A. The Thirteenth International Conference on Learning Representations (ICLR), April 2025, *equal contribution (Published) arXiv BibTeX

Empirical Inference Conference Paper The Directionality of Optimization Trajectories in Neural Networks Singh, S. P., He, B., Hofmann, T., Schölkopf, B. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) URL BibTeX

Empirical Inference Conference Paper What Does It Mean to Be a Transformer? Insights from a Theoretical Hessian Analysis Ormaniec, W., Dangel, F., Singh, S. P. The Thirteenth International Conference on Learning Representations (ICLR), April 2025 (Published) arXiv BibTeX

Empirical Inference Conference Paper Why AI Is WEIRD and Should Not Be This Way: Towards AI For Everyone, With Everyone, By Everyone Mihalcea*, R., Ignat*, O., Bai, L., Borah, A., Chiruzzo, L., Jin, Z., Kwizera, C., Nwatu, J., Poria, S., Solorio, T. The Thirty-Ninth AAAI Conference on Artificial Intelligence, AAAI 2025 (Senior Member Presentation Track), (27)28657-28670, (Editors: Toby Walsh, Julie Shah, Zico Kolter), AAAI Press, April 2025, *equal contribution (Published)
This paper presents a vision for creating AI systems that are inclusive at every stage of development, from data collection to model design and evaluation. We address key limitations in the current AI pipeline and its WEIRD* representation, such as lack of data diversity, biases in model performance, and narrow evaluation metrics. We also focus on the need for diverse representation among the developers of these systems, as well as incentives that are not skewed toward certain groups. We highlight opportunities to develop AI systems that are for everyone (with diverse stakeholders in mind), with everyone (inclusive of diverse data and annotators), and by everyone (designed and developed by a globally diverse workforce). *WEIRD = an acronym coined by Joseph Henrich to highlight the coverage limitations of many psychological studies, referring to populations that are Western, Educated, Industrialized, Rich, and Democratic; while we do not fully adopt this term for AI, as its current scope does not perfectly align with the WEIRD dimensions, we believe that today’s AI has a similarly "weird" coverage, particularly in terms of who is involved in its development and who benefits from it.
arXiv DOI URL BibTeX

Empirical Inference Conference Paper MathGAP: Out-of-Distribution Evaluation on Problems with Arbitrarily Complex Proofs Opedal*, A., Shirakami*, H., Schölkopf, B., Saparov, A., Sachan, M. The Thirteenth International Conference on Learning Representations (ICLR), April 2025, *equal contribution (Published) arXiv BibTeX