* The table shows scores under the Zero-Shot Setting.
- The GPT models were evaluated on 03/15/2024, so the scores may differ slightly from those reported in the paper.
- Results from additional models will be added in the future.
Most existing Large Language Model (LLM) benchmarks on scientific problem reasoning focus on problems grounded in high school subjects and are confined to elementary algebraic operations. To systematically examine the reasoning capabilities required for solving complex scientific problems, we introduce an expansive benchmark suite, SciBench, for LLMs.
SciBench contains a carefully curated dataset featuring a range of collegiate-level scientific problems from mathematics, chemistry, and physics domains. Based on the dataset, we conduct an in-depth benchmarking study of representative open-source and proprietary LLMs with various prompting strategies. The results reveal that the current LLMs fall short of delivering satisfactory performance, with the best overall score of merely 48.96%. Furthermore, through a detailed user study, we categorize the errors made by LLMs into ten problem-solving abilities.
Our analysis indicates that no single prompting strategy significantly outperforms the others, and strategies that demonstrate improvements in certain problem-solving skills can result in declines in other skills. We envision that SciBench will catalyze further developments in the reasoning abilities of LLMs, thereby ultimately contributing to scientific research and discovery.
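As a rough illustration of the zero-shot evaluation referenced in the table notes above, the sketch below prompts a model with the bare problem statement and scores the extracted numeric answer within a relative tolerance. The `query_llm` helper, the record field names, and the tolerance value are illustrative assumptions, not the paper's actual evaluation code.

```python
import math
import re

def query_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call (hypothetical; not part of SciBench)."""
    raise NotImplementedError

def extract_number(text: str) -> float | None:
    # Take the last number that appears in the model's response.
    matches = re.findall(r"-?\d+\.?\d*(?:[eE][+-]?\d+)?", text)
    return float(matches[-1]) if matches else None

def zero_shot_accuracy(problems: list[dict], rel_tol: float = 0.05) -> float:
    # Prompt each problem with no examples (zero-shot) and check the numeric answer.
    correct = 0
    for p in problems:
        response = query_llm(f"Problem: {p['problem_text']}\nGive the final numeric answer.")
        pred = extract_number(response)
        if pred is not None and math.isclose(pred, p["answer_number"], rel_tol=rel_tol):
            correct += 1
    return correct / len(problems)
```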
SciBench is a carefully curated dataset of college-level scientific problems, collected from widely used textbooks in college-level Chemistry, Physics, and Mathematics courses. Distinct from existing benchmarks, all of the problems are open-ended, free-response questions that demand multi-step reasoning abilities, an understanding of scientific concepts, the retrieval of domain-specific knowledge (e.g., equations and theorems), and complex numeric computation capabilities (e.g., calculus or differential equations).
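To make the free-response format concrete, below is a minimal sketch of how a single problem record might be represented; the field names (`source`, `problem_text`, `solution`, `answer_number`, `unit`) and the example problem are illustrative assumptions, not the dataset's actual schema.

```python
import json

# Hypothetical layout for one open-ended, free-response problem.
# Field names and values are illustrative assumptions, not the official schema.
example_record = {
    "source": "atkins",  # textbook abbreviation
    "problem_text": (
        "Calculate the work done when 1.0 mol of an ideal gas expands "
        "isothermally and reversibly from 1.0 L to 2.0 L at 298 K."
    ),
    "solution": "w = -nRT ln(V2/V1) = -(1.0)(8.314)(298) ln(2) J ≈ -1.7 kJ",
    "answer_number": -1.7,  # numeric final answer
    "unit": "kJ",
}

with open("example_problem.json", "w", encoding="utf-8") as f:
    json.dump(example_record, f, ensure_ascii=False, indent=2)
```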
To evaluate the capabilities and analyze the limitations of Large Language Models (LLMs) in solving scientific computing problems, we collect a new dataset of problems drawn from college-level textbooks and course exams in a variety of domains. This section details the dataset construction process.
Data selection criteria. Our dataset aims to improve upon previous benchmarks by including more challenging problems. Specifically, the selected problems should fulfill the following requirements:
One example for each textbook in SciBench: fund, thermo, class, quan, chemmc, atkins, matter, calc, stat, diff.
One example in visual context from SciBench: calculus, fund.
@inproceedings{wang2024scibench,
author = {Wang, Xiaoxuan and Hu, Ziniu and Lu, Pan and Zhu, Yanqiao and Zhang, Jieyu and Subramaniam, Satyen and Loomba, Arjun R. and Zhang, Shichang and Sun, Yizhou and Wang, Wei},
title = {{SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models}},
booktitle = {Proceedings of the Forty-First International Conference on Machine Learning},
year = {2024},
}