SciCode is a challenging benchmark designed to evaluate the capabilities of language models (LMs) in generating code for solving realistic scientific research problems. It has a diverse coverage of <b>16</b> subdomains from <b>6</b> domains: Physics, Math, Material Science, Biology, and Chemistry. Unlike previous benchmarks that consist of exam-like question-answer pairs, SciCode is converted from real research problems. SciCode problems naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains <b>338</b> subproblems decomposed from <b>80</b> challenging main problems, and it offers optional descriptions specifying useful scientific background information, along with scientist-annotated gold-standard solutions and test cases for evaluation. Claude 3.5 Sonnet, the best-performing model among those tested, can solve only <b>4.6%</b> of the problems in the most realistic setting. Broadly, SciCode reflects the realistic everyday workflow of scientists: identifying critical science concepts and facts, then transforming them into computation and simulation code. We believe SciCode not only demonstrates contemporary LLMs' progress toward becoming helpful assistants for scientists but also helps shed light on the future building and evaluation of scientific AI.
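To make the evaluation setup concrete, here is a minimal sketch of what a subproblem-style check might look like. The function, task, and tolerances are hypothetical illustrations of the pattern, not taken from the benchmark; SciCode's actual harness and data format may differ.

```python
import numpy as np

# Hypothetical subproblem in the benchmark's spirit: the model must implement
# a small numerical routine, which is then scored against gold-standard tests.
def trapezoid_integral(f, a, b, n):
    """Numerically integrate f over [a, b] using n trapezoids."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

# Gold-standard-style test cases: compare against known analytical results,
# mirroring how each subproblem's output is validated.
assert np.isclose(trapezoid_integral(np.sin, 0.0, np.pi, 10_000), 2.0, atol=1e-6)
assert np.isclose(trapezoid_integral(lambda x: x**2, 0.0, 1.0, 10_000), 1.0 / 3.0, atol=1e-6)
```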
SciCode sources challenging and realistic research-level coding problems across these domains.
Among various coding necessities, SciCode mainly focuses on three task types: 1. numerical methods, 2. simulation of systems, and 3. scientific calculation. We believe these are the tasks that require intense scientific knowledge and reasoning, and thus best test an LM's science capability. The figure below shows an example combining 1 and 3.
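As a textual stand-in for that figure, the sketch below pairs a numerical method (a fourth-order Runge-Kutta step) with a scientific calculation (radioactive decay). It is a generic illustration under our own assumptions, not a problem drawn from the benchmark.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classic fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Scientific calculation: radioactive decay dN/dt = -lam * N, integrated
# over one half-life; the result should approach N0 / 2.
lam = np.log(2.0)                 # decay constant for a half-life of 1 time unit
decay = lambda t, n: -lam * n
t, n, h = 0.0, 1000.0, 0.01
for _ in range(100):              # 100 steps of 0.01 -> t = 1.0
    n = rk4_step(decay, t, n, h)
    t += h
print(n)                          # ~500.0, matching N0 * exp(-lam * t)
```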
In designing test cases for evaluation, we incorporate domain-specific test cases in addition to numerical ones. These tests are extracted from real scientific workflows: scientists must design domain-specific test cases to verify code accuracy by reproducing results published in papers or matching analytical solutions derived from theoretical models. Each problem goes through <b>3</b> rounds of validation (by in-domain scientists, out-of-domain scientists, and GPT-4) for quality control.
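For illustration, a domain-specific test of this kind might check a numerical solver against an analytical result, as in the sketch below. The problem and tolerance are hypothetical, chosen only to show the pattern: a finite-difference Hamiltonian for a 1D particle in a box should reproduce the analytical ground-state energy.

```python
import numpy as np

# Hypothetical domain-specific test: discretize H = -(1/2) d^2/dx^2 for a
# particle in a box (hbar = m = 1, L = 1) and compare the lowest eigenvalue
# with the analytical ground-state energy E1 = pi^2 / 2.
n = 500
dx = 1.0 / (n + 1)
diag = np.full(n, 1.0 / dx**2)        # kinetic term, main diagonal
off = np.full(n - 1, -0.5 / dx**2)    # nearest-neighbour coupling
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
e0 = np.linalg.eigvalsh(H)[0]

assert np.isclose(e0, np.pi**2 / 2.0, rtol=1e-4), e0
```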
<p style="text-align: center;">Left: Distribution of Main Problems. Right: Distribution of Subproblems.</p>
We include several research problems that are built upon or reproduce methods used in Nobel Prize-winning studies to highlight current trends in scientific research: the self-consistent field (SCF) method for density functional theory (DFT) calculations (<b>The Nobel Prize in Chemistry 1998</b>), the PMNS matrix for neutrino oscillation in matter (<b>The Nobel Prize in Physics 2015</b>), the Haldane model for the anomalous quantum Hall effect (<b>The Nobel Prize in Physics 2016</b>), optical tweezer simulations for microscopic thermodynamics (<b>The Nobel Prize in Physics 2018</b>), and the replica method for spin glasses (<b>The Nobel Prize in Physics 2021</b>).
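To give a flavor of the physics involved, the toy sketch below computes a two-flavor vacuum oscillation probability. This is far simpler than the benchmark's full PMNS-in-matter problem; it uses only the standard textbook formula, with example parameters we chose for illustration.

```python
import numpy as np

# Standard two-flavour vacuum oscillation probability:
# P = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
def oscillation_probability(theta, dm2, L, E):
    return np.sin(2.0 * theta) ** 2 * np.sin(1.267 * dm2 * L / E) ** 2

# Illustrative atmospheric-scale parameters (maximal mixing).
print(oscillation_probability(theta=np.pi / 4, dm2=2.5e-3, L=500.0, E=1.0))
```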