Enhancing repository-level code completion with selective retrieval


Large language models for code are models pretrained on source code rather than natural-language texts. They’re remarkably good at completing the code for arbitrary program functions based solely on context. They struggle, however, with new, large software development projects, where correct code completion may depend on API calls or functions defined elsewhere in the code repository.

Retrieval-augmented generation (RAG) addresses this issue by fetching relevant context from the repository, enriching the model’s understanding and improving its outputs. But performing retrieval takes time and slows generation: is it always the best choice?

In a paper we presented at this year’s International Conference on Machine Learning (ICML), we investigated this question and found that, indeed, 80% of the time, retrieval does not improve the quality of the code generation.

The effect of context retrieval on model performance. Orange bars indicate no (0%) change.

To address this inefficiency, we fine-tuned an LLM to determine whether or not retrieval is likely to help and to emit one of two special tokens, depending on the answer.

Code completion with (right) and without (left) context retrieval.

For fine-tuning, we used a dataset constructed by sampling code from open-license repositories, randomly masking out lines of the code, and retrieving related code from elsewhere in the repository. We then compared an LLM’s reconstructions of the masked code both with and without the additional context and labeled each example according to whether or not retrieval improved generation.

In experiments, we found that on code completion tasks, a code LLM fine-tuned on our dataset performed even better than a model that always performed retrieval, while speeding up inference by 70% thanks to selective retrieval. In the paper, we also report extensive experiments demonstrating that our approach generalizes well to different models and different code completion tasks.

Method

All the steps in creating our dataset — sampling and masking code, retrieving related code, and code generation with and without retrieved context — can be automated, which makes our approach self-supervised: it requires no human annotation and can scale to arbitrarily large dataset sizes.
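A minimal sketch of what one labeling step could look like is shown below; the helpers `mask_random_lines`, `retrieve_context`, and `generate` are hypothetical placeholders for the actual implementation, and the simple text-similarity score stands in for whatever quality metric is actually used:

```python
import difflib

def similarity(a: str, b: str) -> float:
    """Simple text similarity, standing in for a metric such as edit similarity."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def label_example(file_code, repository, model):
    # 1. Randomly mask out a span of lines to form the completion target.
    #    (mask_random_lines is a hypothetical placeholder.)
    prefix, target, suffix = mask_random_lines(file_code)

    # 2. Retrieve related code from elsewhere in the repository.
    #    (retrieve_context is a hypothetical placeholder.)
    cross_file_context = retrieve_context(prefix + suffix, repository)

    # 3. Generate the missing code with and without the retrieved context.
    #    (generate is a hypothetical placeholder.)
    plain = generate(model, prefix, suffix)
    with_rag = generate(model, prefix, suffix, context=cross_file_context)

    # 4. Label the example by whether retrieval improved the generation.
    retrieval_helps = similarity(with_rag, target) > similarity(plain, target)
    return {"prefix": prefix, "suffix": suffix, "target": target,
            "context": cross_file_context, "retrieval_helps": retrieval_helps}
```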


We experimented with multiple methods for retrieving contextual information from the repository, including UniXCoder, which uses Transformer-based semantic embeddings to match code sequences, and CodeBLEU, which uses n-gram data, syntax trees, and code flow semantics. Neither, however, outperformed the much more efficient Jaccard similarity, which is the ratio of two symbol sequences’ intersection to their union. So for most of our experiments, we used Jaccard similarity for retrieval. We hypothesize that semantic retrieval could perform better with structure-aware chunking rather than chunking into fixed numbers of lines, but we leave that as future work.
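As an illustration, Jaccard-similarity retrieval over fixed-size line chunks can be implemented in a few lines. The tokenization, chunk size, and number of returned chunks below are illustrative choices, not the exact settings from the paper:

```python
import re

def tokens(code: str) -> set:
    """Split code into a set of symbols (identifiers, numbers, operators)."""
    return set(re.findall(r"\w+|[^\w\s]", code))

def jaccard(a: set, b: set) -> float:
    """Ratio of the intersection to the union of two symbol sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def retrieve(query_code: str, repo_files: dict, chunk_lines: int = 10, top_k: int = 3):
    """Return the top-k repository chunks most similar to the query code."""
    query = tokens(query_code)
    scored = []
    for path, text in repo_files.items():
        lines = text.splitlines()
        for i in range(0, len(lines), chunk_lines):
            chunk = "\n".join(lines[i:i + chunk_lines])
            scored.append((jaccard(query, tokens(chunk)), path, chunk))
    return sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]
```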

For model fine-tuning, we used the “fill-in-the-middle” mechanism, in which the masked code is excised from the code sequence, and the preceding and succeeding sections are identified with special tokens. The training target consists of the input string with the masked code appended at the end of the string, again identified with special tokens. This allows the model to make use of the contextual information both before and after the masked code; it has been shown to yield better results than training the model to insert the generated code between the preceding and succeeding sections.
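A sketch of this fill-in-the-middle format, assuming StarCoder-style special tokens (the exact tokens used in fine-tuning may differ):

```python
# Assumed StarCoder-style FIM special tokens; placeholders for illustration.
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def build_fim_example(prefix: str, suffix: str, masked_code: str) -> str:
    """The prefix and suffix are given as input, and the masked code is appended
    at the end as the training target, so the model can condition on context
    both before and after the missing span."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{masked_code}"
```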

During fine-tuning, we have two training objectives: correct reconstruction of the missing code and accurate assessment of when retrieved information will aid reconstruction.
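Schematically, the two objectives can be combined as a standard next-token loss on the masked code plus a loss on the retrieval-decision token. The decision-token names and the weighting term `alpha` below are illustrative assumptions, not the paper’s exact formulation:

```python
import torch
import torch.nn.functional as F

# Hypothetical names for the two special decision tokens.
RETRIEVE, NO_RETRIEVE = "<retrieval>", "<no_retrieval>"

def joint_loss(code_logits, code_targets, decision_logits, retrieval_helps, alpha=1.0):
    # Reconstruction objective: next-token prediction over the masked code span.
    reconstruction = F.cross_entropy(
        code_logits.view(-1, code_logits.size(-1)), code_targets.view(-1)
    )
    # Decision objective: choose between the two special tokens according to
    # whether retrieval improved generation for this example.
    decision = F.cross_entropy(decision_logits, retrieval_helps.long())
    return reconstruction + alpha * decision
```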

Accuracy evaluation

Compared to existing models such as StarCoder, our method — which we call Repoformer — improves accuracy and reduces inference latency across various benchmarks, including RepoEval and CrossCodeEval, a new benchmark targeted at cross-file code completion.

Model performance, measured according to exact match (EM), edit similarity (ES), and unit test pass rate (UT). SelectiveG (where the “G” stands for “greedy”) performs retrieval if the model’s most likely next token is the one indicating that retrieval will help; SelectiveT performs retrieval only if that token’s likelihood exceeds some threshold.

Latency evaluation

We illustrate Repoformer’s ability to reduce latency in a realistic “online serving” setting. We assume that the working repository has already been indexed. Given a code completion request containing the current file, the system initiates three processes at the same time (see the sketch after this list):

  • make a retrieval decision using Repoformer;
  • use a code LM to generate the code completion without the cross-file context;
  • retrieve the cross-file context and use it to generate the code completion.
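A hedged sketch of that control flow, with placeholder coroutine names for the three processes:

```python
import asyncio

async def serve_completion(current_file, repo_index):
    # Start all three processes concurrently. The three coroutines below are
    # hypothetical placeholders for the actual serving components.
    decision_task = asyncio.create_task(repoformer_decides_to_retrieve(current_file))
    plain_task = asyncio.create_task(generate_without_context(current_file))
    rag_task = asyncio.create_task(retrieve_and_generate(current_file, repo_index))

    if await decision_task:
        plain_task.cancel()   # Retrieval predicted to help: wait for the RAG output.
        return await rag_task
    rag_task.cancel()         # Retrieval predicted not to help: return early.
    return await plain_task
```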

Across a wide range of fixed selection thresholds, Repoformer’s selective retrieval improves both accuracy and inference speed.

Latency-accuracy trade-off of self-selective RAG for the billion-parameter Repoformer model.

Accuracy and latency of larger code LMs when the billion-parameter Repoformer is the policy model for selective RAG. “SU” stands for “speedup” (relative to always retrieving).

Analysis of instances in which Repoformer abstains from retrieval. Dark blue indicates that the model generates the correct output without RAG; light blue indicates that the model generates an incorrect output, but RAG does not improve performance; red indicates that the model generates an incorrect output, and RAG would have helped.

More interestingly, Repoformer can function as a plug-and-play policy model, reducing inference latency for a variety of strong code LLMs when they serve as the generation model in RAG.

With over 85% accuracy in retrieval decision making, Repoformer ensures that context retrieval is used only when it adds value.

Further analyses show that the proposed strategy improves Repoformer’s robustness to retrieval, with fewer harmful retrievals and more instances improved by retrieval.

Acknowledgements

We’re incredibly grateful to Wasi Uddin Ahmad and Dejiao Zhang for their contributions as the mentors for this project. Their guidance, from formulating the project to all their great suggestions in regular meetings, made a big difference. We’d also like to thank the other coauthors and the anonymous ICML reviewers for their valuable feedback, which really helped improve and refine the work.




