FutureSearch Benchmarks

If you would like to run an LLM or agent on our benchmarks or would like to see a model added to the leaderboard, please contact us at evals@futuresearch.ai.

Deep Research Bench (DRB)

DRB benchmarks how well LLM agents do research on the web. Each of its diverse, real-world tasks provides 10-100k webpages stored offline for search and reasoning, accompanied by carefully curated answers.

Bench to the Future 2 (BTF-2)

BTF-2 evaluates agents on 1,417 hard forecasting questions. Agents research and forecast offline against a frozen 15M-document corpus. Rationales and reasoning traces are evaluated for strategic reasoning.


DRB Leaderboard


Agent    Score    Cost ($)    Runtime (s)

Scores are averaged first per task category (radar chart), then across all tasks (table). Runtime is estimated from ReAct steps, not wall-clock time.
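The two-stage averaging described above can be sketched as follows. The category names and scores are illustrative, not actual DRB data, and `headline_score` is a hypothetical helper, not the benchmark's real scoring code.

```python
from collections import defaultdict

def headline_score(results):
    """Average scores within each task category first, then across categories.

    results: list of (category, score) pairs.
    """
    by_category = defaultdict(list)
    for category, score in results:
        by_category[category].append(score)
    category_means = [sum(s) / len(s) for s in by_category.values()]
    return sum(category_means) / len(category_means)

# Made-up example: an over-represented category does not dominate the score.
results = [
    ("find_number", 0.8), ("find_number", 0.6),  # category mean: 0.7
    ("gather_evidence", 0.9),                    # category mean: 0.9
]
print(round(headline_score(results), 3))  # 0.8
```

Note the result differs from the naive per-task mean (about 0.767 here), which is why the radar chart and the table report different aggregates.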

Papers

BTF-2 Leaderboard

Last updated: 2026-04-20

Agent                  Brier (accuracy)   Calibration   Refinement
FutureSearch Agent     0.119              0.002         0.081
Opus 4.6 Agent         0.130              0.005         0.075
Gemini 3.1 Pro Agent   0.141              0.012         0.069
GPT-5.4 Agent          0.152              0.010         0.056
Grok 4.20 Beta Agent   0.165              0.003         0.039

Brier scores on 1,417 pastcasting questions (lower is better). The FutureSearch Agent is an ensemble significantly more accurate than any single frontier agent. Radar chart shows CHAMPS KNOW strategic emphasis (Borda scores, 8 of 10 dimensions).
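For reference, the Brier score in the table above is the mean squared error between probabilistic forecasts and binary outcomes (lower is better). A minimal sketch with made-up forecasts, not actual BTF-2 data:

```python
def brier(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster scores low; coin-flipping
# (always forecasting 0.5) would score 0.25 regardless of outcomes.
print(round(brier([0.9, 0.1, 0.8], [1, 0, 1]), 4))  # 0.02
```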

Papers

Best Accuracy per Dollar on DRB


Best Accuracy per Second on DRB


RetroSearch

DRB and BTF-2 use RetroSearch, a system that serves agents a frozen, previously scraped version of the internet instead of live pages. This allows reproducible runs even as the internet changes, and enables forecasting tasks to be run as "pastcasting".

RetroSearch aims to emulate Google search (specifically, the Serper search API) as closely as possible, minimizing differences between live and "retro" agent runs. A single RetroSearch query proceeds as follows:

  • Run a live Serper search for the query
  • Look up pages obtained from live search in the RetroSearch database and other archive sources
  • If the page is not found in the RetroSearch database, remove it from the results
  • Write new snippets from a sample of page content using a simple LLM
  • Return the results in the original format of the Google results
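The steps above can be sketched as a single function. `serper_search`, `retro_db`, and `summarize_snippet` are hypothetical stand-ins for the live search call, the frozen-page store, and the snippet-writing LLM; the real system's interfaces are not described on this page.

```python
def retrosearch(query, serper_search, retro_db, summarize_snippet):
    """Sketch of one RetroSearch query under the assumptions stated above."""
    live_results = serper_search(query)              # 1. run a live Serper search
    retro_results = []
    for result in live_results:
        page = retro_db.get(result["url"])           # 2. look up the frozen copy
        if page is None:
            continue                                 # 3. drop pages not in the archive
        result = dict(result)
        result["snippet"] = summarize_snippet(page)  # 4. rewrite the snippet via an LLM
        retro_results.append(result)
    return retro_results                             # 5. Google-style result format
```

With this shape, a page missing from the archive simply disappears from the results, so the agent only ever sees pages backed by a frozen copy.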

This approach gives agents a search experience consistent with real search, but backed exclusively by pages for which we hold a frozen copy. The following diagram from the paper illustrates the process:

Diagram showing how RetroSearch provides frozen web snapshots to agents
Illustration of the system architecture of Deep Research Bench using RetroSearch. This shows the flow from task definition through the scraping pipeline that populates the RetroSearch database prior to running the benchmark, and then how agents use RetroSearch via an API at the time of task evaluation.