Evaluation Sheet for Deep Research: A Use Case for Academic Survey Writing

Israel Abebe Azime, Tadesse Destaw Belay, Atnafu Lambebo Tonja

Published: 2025/9/30

Abstract

Large Language Models (LLMs) equipped with agentic capabilities can perform knowledge-intensive tasks without human involvement. A prime example of such a tool is Deep Research, which can browse the web, extract information, and generate multi-page reports. In this work, we introduce an evaluation sheet for assessing the capabilities of Deep Research tools. In addition, we selected academic survey writing as a use case and evaluated output reports against the evaluation sheet we introduced. Our findings show the need for carefully crafted evaluation standards. Our evaluation of OpenAI's Deep Research and Google's Deep Research in generating an academic survey revealed a large gap between search engines and standalone Deep Research tools, as well as shortcomings in representing the targeted research area.