
Scaling Open-Ended Reasoning To Predict the Future

Social Foundations of Computation
  • Doctoral Researcher
Moritz Hardt
Social Foundations of Computation
  • Director

High-stakes decision making involves reasoning under uncertainty about the future. In this work, we train language models to make predictions on open-ended forecasting questions. To scale up training data, we synthesize novel forecasting questions from global events reported in daily news, using a careful, fully automated curation recipe. We train the Qwen3 thinking models on our dataset, OpenForesight. To prevent leakage of future information during training and evaluation, we use an offline news corpus, both for data generation and for retrieval in our forecasting system. Guided by a small validation set, we show the benefits of retrieval and of an improved reward function for reinforcement learning (RL). Once we obtain our final forecasting system, we perform held-out testing from May to August 2025. Our specialized model, OpenForecaster 8B, matches much larger proprietary models, with our training improving the accuracy, calibration, and consistency of predictions. We find that the calibration improvements from forecasting training generalize across popular benchmarks. We open-source all our models, code, and data to make research on language model forecasting broadly accessible.
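The abstract mentions an improved RL reward function and gains in calibration but does not spell the reward out. As a minimal sketch only, assuming a proper scoring rule applied to binary-resolvable questions, a hypothetical brier_reward could look like the following Python (the function name and setup are illustrative, not the paper's method):

# Hypothetical illustration; the paper's actual reward function is not
# described in the abstract. The Brier score is a standard proper
# scoring rule for rewarding calibrated probability forecasts.
def brier_reward(predicted_prob: float, outcome: bool) -> float:
    """Negative Brier score: 0.0 for a confident correct forecast,
    -1.0 for a confident wrong one."""
    target = 1.0 if outcome else 0.0
    return -(predicted_prob - target) ** 2

# A 0.8 forecast scores -0.04 if the event occurs, -0.64 if it does not.
assert abs(brier_reward(0.8, True) - (-0.04)) < 1e-9
assert abs(brier_reward(0.8, False) - (-0.64)) < 1e-9

Because a proper scoring rule is maximized in expectation only by reporting one's true belief, rewarding forecasts this way is one plausible route to the calibration gains the abstract reports.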

Author(s): Chandak, Nikhil and Goel, Shashwat and Prabhu, Ameya and Hardt, Moritz and Geiping, Jonas
Links: arXiv
Year: 2026
Month: January
BibTeX Type: Miscellaneous (misc)
State: Submitted

BibTeX

@misc{chandak2026scaling,
  title = {Scaling Open-Ended Reasoning To Predict the Future},
  abstract = {High-stakes decision making involves reasoning under uncertainty about the future. In this work, we train language models to make predictions on open-ended forecasting questions. To scale up training data, we synthesize novel forecasting questions from global events reported in daily news, using a careful, fully automated curation recipe. We train the Qwen3 thinking models on our dataset, OpenForesight. To prevent leakage of future information during training and evaluation, we use an offline news corpus, both for data generation and for retrieval in our forecasting system. Guided by a small validation set, we show the benefits of retrieval and of an improved reward function for reinforcement learning (RL). Once we obtain our final forecasting system, we perform held-out testing from May to August 2025. Our specialized model, OpenForecaster 8B, matches much larger proprietary models, with our training improving the accuracy, calibration, and consistency of predictions. We find that the calibration improvements from forecasting training generalize across popular benchmarks. We open-source all our models, code, and data to make research on language model forecasting broadly accessible.},
  month = jan,
  year = {2026},
  author = {Chandak, Nikhil and Goel, Shashwat and Prabhu, Ameya and Hardt, Moritz and Geiping, Jonas}
}