LiteResearcher: A Scalable Agentic RL Training Framework for Deep Research Agent

Wanli Li1,2,*, Bince Qu1,2,*, Bo Pan1, Jianyu Zhang1, Zheng Liu3,†, Pan Zhang2, Wei Chen1, Bo Zhang2,†
1 Zhejiang University  •  2 Simplex AI  •  3 Beijing Academy of Artificial Intelligence
* Equal contribution. Work done during internship at Simplex AI.    † Corresponding authors
✉ {wanli_li@zju.edu.cn, tonyzhang@simplexai.com}

TL;DR

We train a 4B-parameter deep research agent using scalable agentic RL in a virtual world environment. LiteResearcher-4B achieves 71.3% on GAIA and 78% on Xbench — matching Claude-4.5-Sonnet and outperforming open-source models up to 8× larger.

Key Results

- 71.3% on GAIA-Text, matching Claude-4.5-Sonnet (71.2%)
- 78.0% on Xbench-DS, above Tongyi Deepsearch 30B (75.0%)
- 83.1% on Frames, above Claude-4-Sonnet (80.7%)
- 4B parameters, 8–32× smaller than peer models
Performance of LiteResearcher. Left: Accuracy comparison on the Xbench DeepSearch benchmark across models of various scales. Right: Average rollout latency and cost per turn.

Abstract

Reinforcement Learning (RL) has emerged as a powerful training paradigm for LLM-based agents. However, scaling agentic RL for deep research remains constrained by two coupled challenges: hand-crafted synthetic data fails to elicit genuine real-world search capabilities, and dependence on real-world search during RL training introduces instability and high cost, both of which limit the scalability of agentic RL.

LiteResearcher is a training framework that makes agentic RL scalable: by constructing a lite virtual world that mirrors real-world search dynamics, we enable a continuously improving training recipe that empowers a tiny search agent to outperform large-scale open-source and commercial models (e.g., Tongyi DeepResearch and Claude-4.5-Sonnet). On common benchmarks such as GAIA and Xbench, LiteResearcher-4B achieves open-source state-of-the-art results of 71.3% and 78.0% respectively, demonstrating that scalable RL training is essential for deep research agents.

Method Overview

LiteResearcher constructs a virtual world whose architecture is identical to the real web's, but whose execution is fully isolated from it. The framework consists of three key components:

(1) Co-constructed Training Data & Corpus: We scale up information sources (32M+ webpages, 1M+ domains) and identify five atomic search capabilities — direct retrieval, aggregation, enumeration, cross-verification, and statistics — to generate diverse, realistic training tasks.
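The five atomic capabilities can be thought of as task templates that a generation pipeline instantiates over the crawled corpus. The sketch below is a hypothetical illustration of that idea, not the actual pipeline; the template strings and slot names are invented for clarity.

```python
import random
from collections import defaultdict

# Hypothetical templates, one per atomic search capability named in the paper.
CAPABILITY_TEMPLATES = {
    "direct_retrieval": "What is the {attribute} of {entity}?",
    "aggregation": "Summarize {topic} by combining information from several pages.",
    "enumeration": "List all {items} associated with {entity}.",
    "cross_verification": "Do {source_a} and {source_b} agree on {claim}?",
    "statistics": "How many {items} satisfy {condition}?",
}

def sample_task(rng: random.Random, slots: dict) -> dict:
    """Sample one atomic capability and instantiate its template.

    Slots missing from `slots` are left as "<?>" placeholders.
    """
    capability = rng.choice(sorted(CAPABILITY_TEMPLATES))
    filled = defaultdict(lambda: "<?>", slots)
    question = CAPABILITY_TEMPLATES[capability].format_map(filled)
    return {"capability": capability, "question": question}
```

Composing several such atomic tasks over the same entities is one natural way to obtain the harder multi-hop questions the curriculum needs.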

(2) Stable Local Tool Environment: A local search engine (BGE-M3 + Milvus, ~0.15 s/query) and a local browse tool (PostgreSQL, ~0.17 s/page) together enable 73.2M tool calls during training at zero marginal cost.
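The production stack embeds webpages with BGE-M3 and serves nearest-neighbor queries from Milvus. As a minimal, dependency-free sketch of the same architectural role (an in-process search tool the agent can call during rollouts), the toy below substitutes a bag-of-words cosine index for the embedding model and vector database:

```python
import math
from collections import Counter

class ToyLocalSearch:
    """In-memory stand-in for the BGE-M3 + Milvus local search tool.

    Bag-of-words cosine similarity replaces dense embeddings here; only
    the call pattern (index once, query many times at zero marginal
    cost) mirrors the framework's design.
    """

    def __init__(self):
        self.docs = []  # list of (doc_id, text, term-frequency Counter)

    def index(self, doc_id: str, text: str) -> None:
        self.docs.append((doc_id, text, Counter(text.lower().split())))

    def search(self, query: str, k: int = 3):
        q = Counter(query.lower().split())
        q_norm = math.sqrt(sum(v * v for v in q.values()))
        scored = []
        for doc_id, text, tf in self.docs:
            dot = sum(q[t] * tf[t] for t in q)
            if dot == 0:
                continue
            d_norm = math.sqrt(sum(v * v for v in tf.values()))
            scored.append((dot / (q_norm * d_norm), doc_id, text))
        scored.sort(reverse=True)
        return [(doc_id, text) for _, doc_id, text in scored[:k]]
```

Because the whole index lives locally, each query costs only CPU time, which is what makes tens of millions of tool calls affordable during RL training.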

(3) Difficulty-Aware Curriculum RL: Multi-stage training that progressively increases task difficulty and context length, retaining only partially-solvable instances to maintain a consistent training signal.
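The "partially-solvable" filter can be sketched concretely: tasks the current policy solves in every rollout carry no learning signal (zero advantage in group-based RL objectives), and tasks it never solves yield no positive reward, so both extremes are dropped. The function below is an illustrative assumption about how such a filter looks, not the paper's exact implementation:

```python
def retain_partially_solvable(task_solve_counts: dict, n_rollouts: int) -> list:
    """Keep tasks the policy sometimes, but not always, solves.

    task_solve_counts maps task id -> number of successful rollouts
    out of n_rollouts. Always-solved and never-solved tasks are
    filtered out so every retained task contributes gradient signal.
    """
    return [task for task, solved in task_solve_counts.items()
            if 0 < solved < n_rollouts]
```

Re-running this filter between stages, as the difficulty distribution shifts, is what keeps the curriculum matched to the policy's current frontier.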

Overview of the LiteResearcher training framework.

Main Results

LiteResearcher-4B consistently outperforms open-source models up to 8× larger and matches or exceeds proprietary systems across eight benchmarks.

| Models | GAIA-Text | Browsecomp | Browsecomp (ZH) | HLE | Frames | Webwalker | Seal-0 | Xbench-DS |
|---|---|---|---|---|---|---|---|---|
| **Commercial Models** | | | | | | | | |
| Claude-4-Sonnet | 68.3 | 12.2 | 29.1 | 20.3 | 80.7 | 61.7 | - | 64.6 |
| Claude-4.5-Sonnet | 71.2 | 19.6 | 40.8 | 24.5 | 85.0 | - | 53.4 | 66.0 |
| Deepseek-V3.2 | 63.5 | 67.6 | 65.0 | 40.8 | 80.2 | - | 38.5 | 71.0 |
| DeepSeek-V3.1 | 63.1 | 30.0 | 49.2 | 29.8 | 83.7 | 61.2 | - | 71.0 |
| Minimax-M2 | 75.7 | 44.0 | 48.5 | 31.8 | - | - | - | 72.0 |
| OpenAI-GPT-5-high | 76.4 | 54.9 | 65.0 | 35.2 | - | - | 51.4 | 77.8 |
| GLM-4.6 | 71.9 | 45.1 | 49.5 | 30.4 | - | - | - | 70.0 |
| Kimi-Researcher | - | - | - | 26.9 | 78.8 | - | 36.0 | 69.0 |
| Kimi-K2-0905 | 60.2 | 7.4 | 22.2 | 21.7 | 58.1 | - | 25.2 | 61.0 |
| **Open-Source Models** | | | | | | | | |
| Mirothinker 8B | 66.4 | 31.1 | 40.2 | 21.5 | 80.6 | 60.6 | 40.4 | 60.6 |
| Tongyi Deepsearch 30B | 70.9 | **43.4** | **46.7** | **32.9** | **90.6** | 72.2 | - | 75.0 |
| ASearcher QWQ v2 32B | 58.7 | - | - | - | 74.5 | - | - | 51.1 |
| WebSailor 30B | 53.2 | - | - | - | - | - | - | 53.3 |
| WebDancer 32B (QwQ) | 51.5 | 3.8 | 18.0 | - | - | 47.9 | - | 38.3 |
| WebExplorer 8B | 50.0 | 15.7 | 32.0 | 17.3 | 75.7 | 62.7 | - | 53.7 |
| DeepMiner 32B | 58.7 | 33.5 | 40.1 | - | - | - | - | 62.0 |
| AFM-RL 32B | 55.3 | 11.1 | - | 18.0 | - | 63.0 | - | - |
| SFR-DeepResearch 20B | 66.0 | - | - | 28.7 | 82.8 | - | - | - |
| AgentCPM-Explore 4B | 63.9 | 24.1 | 29.1 | 19.1 | 82.7 | 68.1 | 40.5 | 70.0 |
| LiteResearcher-4B | **71.3** | 27.5* | 32.5* | 22.0 | 83.1 | **72.7** | **41.8** | **78.0** |

Best open-source results in bold. Results with * use a 64k context window with a memory mechanism.

Training Dynamics

Our difficulty-aware curriculum prevents training saturation: after Stage 1 plateaus, Stage 2 with adjusted difficulty yields a further +3.6% GAIA accuracy, demonstrating the importance of progressive curriculum design.

GAIA accuracy across training stages, showing continued improvement with curriculum learning.

BibTeX

@article{li2026literesearcher,
  title={LiteResearcher: A Scalable Agentic RL Training Framework for Deep Research Agent},
  author={Wanli Li and Bince Qu and Bo Pan and Jianyu Zhang and Zheng Liu and Pan Zhang and Wei Chen and Bo Zhang},
  year={2026}
}