Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models
Information
Type:
inproceedings
Authors:
Vaithilingam, Priyan and Zhang, Tianyi and Glassman, Elena L.
Relevance:
Medium
Reference:
DOI:
10.1145/3491101.3519665
Keywords:
GitHub copilot, large language model
URL:
https://doi.org/10.1145/3491101.3519665
Publication date:
04/2022
Summary:
Copilot vs. IntelliSense (speed and preference test with a panel of 24 participants)
Abstract:
Recent advances in Large Language Models (LLM) have made automatic code generation possible for real-world programming tasks in general-purpose programming languages such as Python. However, there are few human studies on the usability of these tools and how they fit the programming workflow. In this work, we conducted a within-subjects user study with 24 participants to understand how programmers use and perceive Copilot, an LLM-based code generation tool. We found that, while Copilot did not necessarily improve the task completion time or success rate, most participants preferred to use Copilot in daily programming tasks, since Copilot often provided a useful starting point and saved the effort of searching online. However, participants did face difficulties in understanding, editing, and debugging code snippets generated by Copilot, which significantly hindered their task-solving effectiveness. Finally, we highlighted several promising directions for improving the design of Copilot based on our observations and participants’ feedback.
PDF:
PDF link
References
0 articles
No articles yet
Citations
0 articles
No articles yet
Keywords
2 keywords
Name  Article count
large language model 2
GitHub copilot 2
Authors
3 authors
Priyan Vaithilingam
Tianyi Zhang
Elena L. Glassman