ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code

Yale University, Nanjing University, Peking University
* Contributed equally.

Our contributions:

1. ML-Bench provides a comprehensive benchmark for LLMs, focusing on repository-scale code interpretation and end-to-end execution. It addresses gaps in current benchmarking and challenges models with real-world programming tasks.

2. Featuring 9,641 examples across 18 GitHub repositories and two distinct setups, ML-LLM-Bench and ML-Agent-Bench, ML-Bench assesses the capability of LLMs to generate executable scripts and of autonomous agents to perform complex coding tasks in a Linux sandbox environment.

3. The benchmark reveals significant room for improvement in current LLMs, as demonstrated by hallucinated outputs and difficulties with bash script generation, while also confirming that iterative action and feedback can enhance performance on complex tasks.



Abstract

Despite Large Language Models (LLMs) such as GPT-4 achieving impressive results in function-level code generation, they struggle with repository-scale code understanding (e.g., determining the correct arguments for invoking routines), which requires a deeper comprehension of complex file interactions. More recently, LLM agents have been developed that attempt to interact with repository code (e.g., compiling it and evaluating its execution), prompting the need to evaluate their performance as well. These gaps have motivated our development of ML-Bench, a benchmark rooted in real-world programming applications that leverage existing code repositories to perform tasks. Addressing the need for LLMs to interpret long code contexts and translate instructions into precise, executable scripts, ML-Bench encompasses 9,641 annotated examples across 18 GitHub repositories, challenging LLMs to accommodate user-specified arguments and documentation intricacies effectively. To evaluate both LLMs and AI agents, two setups are employed: ML-LLM-Bench for assessing LLMs' text-to-code conversion within a predefined deployment environment, and ML-Agent-Bench for testing autonomous agents on end-to-end task execution within a Linux sandbox environment. Our findings indicate that while GPT-4o leads with a Pass@5 rate surpassing 50%, there remains significant scope for improvement, highlighted by issues such as hallucinated outputs and difficulties with bash script generation. Notably, in the more demanding ML-Agent-Bench, GPT-4o achieves a 76.47% success rate, reflecting the efficacy of iterative action and feedback in complex task resolution.
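
To make the execution-based evaluation concrete, the sketch below shows one way a harness could run a model-generated bash script inside a repository checkout and score Pass@5. This is a minimal illustration under our own assumptions; the function names, timeout, and scoring bookkeeping are hypothetical and do not reproduce the actual ML-Bench implementation.

```python
# Illustrative sketch (not the ML-Bench harness): execute a model-generated
# bash script inside a repository checkout and report success by exit code.
import subprocess
import tempfile
from pathlib import Path


def run_generated_script(script: str, repo_dir: Path, timeout: int = 300) -> bool:
    """Write the generated script to a temp file, run it from the repository
    root, and return True if it exits with status 0 within the time limit."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        script_path = f.name
    try:
        result = subprocess.run(
            ["bash", script_path],
            cwd=repo_dir,          # execute from the repository root
            capture_output=True,   # suppress script output in the harness log
            text=True,
            timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False


def pass_at_5(candidate_scripts: list[str], repo_dir: Path) -> bool:
    """Pass@5: the task counts as solved if any of five sampled scripts runs."""
    return any(run_generated_script(s, repo_dir) for s in candidate_scripts[:5])
```

In practice such a check would run inside an isolated sandbox (e.g., a container with the repository's dependencies preinstalled), since executing model-generated scripts on a host machine is unsafe.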