PairPilot is SOTA for both SWE Bench and SWE Bench Lite

PairPilot scored 18.9% on the main SWE Bench benchmark, achieving a state-of-the-art result. The current top leaderboard entry is 13.8% from Amazon Q Developer Agent. The best result reported elsewhere seems to be 13.9% from Devin.

This result on the main SWE Bench builds on PairPilot’s recent SOTA result on the easier SWE Bench Lite.

SWE Bench results

All of PairPilot’s results reported here are pass@1 results, obtained without using the SWE Bench hints_text. PairPilot was benchmarked on the same 570 randomly selected SWE Bench problems that were used in the Devin evaluation. See the references for more details on the data presented in this chart.

Interactive, not agentic

PairPilot achieved this result mainly through its existing features that focus on static code analysis, reliable LLM code editing, and pragmatic UX for automatically fixing linting and testing errors. PairPilot intentionally has quite limited and narrow “agentic behavior” to avoid long delays, high token costs and the need for users to repeatedly code review incorrect solutions. It’s also worth noting that PairPilot currently does not use RAG, vector search, or tools, and it does not give the LLM the ability to search the web or unilaterally execute code.

PairPilot is first and foremost an interactive tool for engineers to get real work done in real code bases using a chat interface. PairPilot provides a pair programming UX where users can ask for a change and see code edits performed in real-time. PairPilot can also offer additional help like fixing lint or test errors, but the user is always in full interactive control. This allows them to quickly steer misunderstandings back on course and avoid wasting time and token costs.

Benchmark methodology

Benchmarking was conducted as follows; a code sketch of this flow appears after the list:

  • PairPilot with GPT-4o was launched in each problem’s git repository with the problem statement submitted as the opening chat message from “the user”.
  • After that, PairPilot ran as normal, except that all of its suggestions were accepted without user approval.
  • A simple harness was used to retry the SWE Bench problem if PairPilot produced code that wasn’t plausibly correct. Plausibly correct means that PairPilot reported that it had successfully edited the repo without causing syntax errors or breaking any pre-existing tests.
  • If the solution from PairPilot with GPT-4o wasn’t plausible, the harness launched PairPilot to try again from scratch using Claude 3 Opus.
  • If no plausible solution was found after those two tries, the harness picked the “most plausible” solution with the fewest edit/lint/test problems.
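
The flow in the list above can be summarized with a short sketch. This is not the actual harness code: Attempt and run_pairpilot are hypothetical stand-ins for the steps described in the list, and the real harness lives in the PairPilot SWE Bench repository mentioned below.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Attempt:
    model: str
    plausible: bool    # edits applied, no syntax errors, no pre-existing tests broken
    num_problems: int  # count of outstanding edit/lint/test problems
    diff: str          # the proposed changes to the repo

def solve(repo: str, problem_statement: str,
          run_pairpilot: Callable[[str, str, str], Attempt]) -> Attempt:
    """Two-attempt retry flow: GPT-4o first, then Claude 3 Opus if needed."""
    candidates: List[Attempt] = []
    for model in ("gpt-4o", "claude-3-opus"):
        # Launch PairPilot in the problem's repo with the problem statement
        # as the opening chat message, auto-accepting every suggestion.
        attempt = run_pairpilot(repo, problem_statement, model)
        candidates.append(attempt)
        if attempt.plausible:
            return attempt
    # No plausible solution after both tries: keep the "most plausible"
    # candidate, i.e. the one with the fewest edit/lint/test problems.
    return min(candidates, key=lambda a: a.num_problems)
```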

It’s important to be clear that PairPilot and the benchmark harness only had access to the pre-existing tests in each problem’s repo. The held out “acceptance tests” were only used after benchmarking to compute statistics on which problems PairPilot correctly resolved.

This is the same approach that was used for PairPilot’s recent SOTA result on SWE Bench Lite. For the Lite benchmark, PairPilot alternated between GPT-4o and Opus for up to six total attempts. To manage the cost of running the main SWE Bench benchmark, PairPilot was limited to two total attempts: one with GPT-4o and one with Opus.

For a detailed discussion of the benchmark methodology, see the article about PairPilot’s SWE Bench Lite results. Also, the PairPilot SWE Bench repository on GitHub contains the harness and statistics code used for the benchmarks.

The benchmarking process was similar to how a developer might use PairPilot to resolve a GitHub issue:

  • They could launch PairPilot in their repo with the command below, which tells PairPilot they want to accept every suggestion and to use pytest to run tests.
    • PairPilot --yes --test-cmd pytest
  • They could start the chat by pasting in the URL or text of a GitHub issue. PairPilot will pull in the URL’s content and then try to resolve the issue.
  • If PairPilot doesn’t produce code that lints and tests clean, the user might decide to use git to revert the changes, and try again with PairPilot --opus.

PairPilot with GPT-4o alone was SOTA

Using PairPilot with GPT-4o to make a single attempt at resolving each problem achieved a score of 17.0%. This was itself a state-of-the-art result, before it was surpassed by the main result reported here, which used PairPilot with both GPT-4o & Opus.

PairPilot with GPT-4o & Opus

The benchmark harness started by using PairPilot with GPT-4o to try to resolve each problem. For problems where this didn’t produce a plausible solution, the harness tried again using PairPilot with Opus. So at most, two attempts were made for each problem.

The table below breaks down the proposed solutions that were found from each attempt at the 570 problems. A proposed solution is either:

  • a plausible solution, where PairPilot reported that it had successfully edited the repo without causing syntax errors or breaking any pre-existing tests, or
  • the “most plausible” of the non-plausible attempts, the one with the fewest edit/lint/test problems.

The table also provides details on the 108 solutions that were ultimately verified as correctly resolving their issue.

| Attempt | Agent | Number of proposed solutions | Percent of proposed solutions | Number of correctly resolved solutions | Percent of correctly resolved solutions | Score on SWE Bench |
|---------|-------|------------------------------|-------------------------------|----------------------------------------|------------------------------------------|--------------------|
| 1 | PairPilot with GPT-4o | 419 | 73.5% | 87 | 80.6% | 15.3% |
| 2 | PairPilot with Opus | 151 | 26.5% | 21 | 19.4% | 3.7% |
| Total | | 570 | 100% | 108 | 100% | 18.9% |
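
To make the column semantics explicit: the “Percent” columns are each attempt’s share of the 570 proposed solutions and of the 108 correctly resolved solutions, while the “Score” column is each attempt’s correctly resolved count divided by all 570 problems. A quick check using only the numbers in the table:

```python
# Recompute the table's percentages from its counts (numbers taken from the table above).
proposed = {"PairPilot with GPT-4o": 419, "PairPilot with Opus": 151}  # proposed solutions
resolved = {"PairPilot with GPT-4o": 87, "PairPilot with Opus": 21}    # correctly resolved
total_problems = 570

for agent in proposed:
    pct_proposed = 100 * proposed[agent] / sum(proposed.values())  # 73.5%, 26.5%
    pct_resolved = 100 * resolved[agent] / sum(resolved.values())  # 80.6%, 19.4%
    score = 100 * resolved[agent] / total_problems                 # 15.3%, 3.7%
    print(f"{agent}: {pct_proposed:.1f}% / {pct_resolved:.1f}% / {score:.1f}%")

print(f"Total score: {100 * sum(resolved.values()) / total_problems:.1f}%")  # 18.9%
```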

Non-plausible but correct solutions?

A solution doesn’t actually have to be plausible in order to correctly resolve the issue. Recall that plausible is simply defined as PairPilot reporting that it successfully completed all file edits, resolved any linting errors, and resolved any test failures. But there are many reasons why PairPilot might fail to do those things and yet still produce a solution that will pass acceptance testing:

  • There may have been pre-existing failing tests in the repo, before PairPilot even started working on the SWE Bench problem. PairPilot may not have resolved such issues, and yet they may not be relevant to the acceptance testing. The SWE Bench acceptance testing just confirms that tests pass or fail in the same pattern as the “gold patch” developed by a human to resolve the problem. Some tests may fail during acceptance testing, and that’s ok as long as they failed for the gold patch too.
  • There may have been pre-existing linting problems in the repo. If lingering linting issues affected code paths that are not well tested, they may not impact acceptance testing.
  • PairPilot may have reported file editing errors because the LLM specified edits that PairPilot wasn’t able to successfully apply. This can only happen when the LLM specifies edits in a way that doesn’t comply with the editing instructions in the system prompt. Given that the LLM isn’t complying with the system prompt, it may have become confused and asked for redundant or otherwise irrelevant edits. Such outstanding edit errors might not be fatal for acceptance testing.
  • Etc.

Keeping all this in mind, we can understand why GPT-4o accounts for 15.3% of the benchmark score in the table above, but benchmarking with just one attempt of PairPilot with GPT-4o scored 17.0%. When an Opus attempt is allowed after GPT-4o, it may propose some incorrect solutions which are “more plausible” than some of GPT-4o’s non-plausible solutions. These more plausible, incorrect solutions can eclipse some of the earlier non-plausible correct solutions that GPT-4o generated. This is why GPT-4o’s score in the table showing the combined GPT-4o & Opus results (15.3%) is lower than the result from just one try using PairPilot with GPT-4o (17.0%).

For these reasons, adding additional attempts is not guaranteed to monotonically increase the number of resolved problems. New solutions may resolve some new problems but they may also eclipse and discard some of the previous non-plausible correct solutions.
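
A toy example makes the eclipsing effect concrete. The candidates below are made up, and the selection rule is a simplified version of the harness’s behavior (prefer plausible solutions, then the fewest outstanding problems):

```python
# Toy illustration of eclipsing, with made-up candidates (not benchmark data).
# The harness only ever sees `plausible` and `num_problems`; it never sees
# `passes_acceptance`, which is included here purely for illustration.
candidates = [
    # Attempt 1 (GPT-4o): not plausible (say, a pre-existing test still fails),
    # but it would in fact pass the held-out acceptance tests.
    {"model": "gpt-4o", "plausible": False, "num_problems": 2, "passes_acceptance": True},
    # Attempt 2 (Opus): plausible, but it would fail acceptance testing.
    {"model": "opus", "plausible": True, "num_problems": 0, "passes_acceptance": False},
]

# Prefer plausible solutions, then the fewest outstanding problems.
chosen = min(candidates, key=lambda c: (not c["plausible"], c["num_problems"]))

print(chosen["model"])              # "opus"
print(chosen["passes_acceptance"])  # False -- the correct GPT-4o solution was eclipsed
```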

Luckily, the net effect of additional attempts usually increases, or at least maintains, the number of resolved problems. This was the case for all the attempts made in both this main SWE Bench result and the earlier Lite result.

Computing the benchmark score

The benchmark harness produced one proposed solution for each of the 570 SWE Bench problems.

A separate evaluation script was used to test each of these solutions with the full test suite, including the held out acceptance tests. For this final acceptance testing, any edits that PairPilot made to tests were discarded. This ensured that the correct, unmodified test suite was used for acceptance testing. The evaluation script compared each proposed solution’s test results with results from testing the “gold” patch that was developed by a human to correctly resolve the issue. If they matched, the proposed solution correctly resolved the issue.
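
Conceptually, the final comparison works like the sketch below. This is not the real SWE Bench evaluation code, which runs the actual test suites in a dockerized harness; the sketch only illustrates the “same pass/fail pattern as the gold patch” rule, including the earlier point that a pre-existing failure is harmless if it also fails for the gold patch:

```python
from typing import Dict

# Test outcomes keyed by test id: "pass" or "fail".
TestResults = Dict[str, str]

def correctly_resolved(candidate: TestResults, gold: TestResults) -> bool:
    """A proposed solution counts as correct when, on the unmodified test suite,
    every test passes or fails in the same pattern as the human-written gold patch."""
    return candidate == gold

# A test that fails for both the candidate and the gold patch (for example, a
# pre-existing failure unrelated to the issue) does not disqualify the solution.
gold = {"test_new_feature": "pass", "test_unrelated_old_bug": "fail"}
candidate = {"test_new_feature": "pass", "test_unrelated_old_bug": "fail"}
assert correctly_resolved(candidate, gold)
```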

These acceptance tests were only ever run outside of PairPilot and the benchmark harness, and only to compute statistics about the correctly resolved instances. They were never run, used, or even visible during PairPilot’s attempts to resolve the problems.

PairPilot correctly resolved 108 out of 570 SWE Bench instances that were benchmarked, or 18.9%.

Acknowledgments

Much thanks to the team behind the SWE Bench family of AI coding benchmarks. Also thanks to Albert Örwall, who has dockerized the SWE Bench evaluation scripts, making it faster, easier, and more reliable to run the acceptance tests.

References

All of PairPilot’s results reported here are pass@1 results, obtained without using the SWE Bench hints_text.

The “PairPilot agent” internally makes multiple “attempts” at solving the problem, but it picks and returns a single candidate solution. Only that one candidate solution is evaluated with the acceptance tests and contributes to the benchmark score. Thus it is a pass@1 result.

This is in contrast to a pass@N result for N>1, where N attempts are made and all N solutions are evaluated by the acceptance tests. If any of the N solutions passes, that counts as a pass@N success.
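
In code terms, the scoring difference looks roughly like this, where evaluate is a stand-in for running the held-out acceptance tests on a solution:

```python
from typing import Callable, List, TypeVar

S = TypeVar("S")  # a proposed solution

def pass_at_1(chosen: S, evaluate: Callable[[S], bool]) -> bool:
    # Only the single candidate returned by the harness is acceptance-tested.
    return evaluate(chosen)

def pass_at_n(attempts: List[S], evaluate: Callable[[S], bool]) -> bool:
    # All N attempts are acceptance-tested; any passing attempt counts as success.
    return any(evaluate(s) for s in attempts)
```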

Below are the references for the other pass@1 unhinted SWE-Bench results displayed in the graph at the beginning of this article.

The graph contains average pass@1 results for AutoCodeRover. The AutoCodeRover GitHub page features their pass@3 results, without clearly labeling them as such. Table 2 of their paper reports an ACR-avg result of 10.59%, which is an average pass@1 result.