"What It Wants Me To Say": Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models
Information
Type:
inproceedings
Authors:
Michael Xieyang Liu and Advait Sarkar and Carina Negreanu and Benjamin Zorn and Jack Williams and Neil Toronto and Andrew D. Gordon
Relevance:
Medium
Reference:
Liu_2023
DOI:
10.1145/3544548.3580817
Keywords:
URL:
https://doi.org/10.1145/3544548.3580817
Publication date:
04/2023
Summary:
Natural language programming (plus interpretation of the generated Python code) for users who do not know how to program. An interface for describing several steps and the expected result (debugging?)
Abstract:
Code-generating large language models translate natural language into code. However, only a small portion of the infinite space of naturalistic utterances is effective at guiding code generation. For non-expert end-user programmers, learning this is the challenge of abstraction matching. We examine this challenge in the specific context of data analysis in spreadsheets, in a system that maps the user's natural language query to Python code using the Codex generator, executes the code, and shows the result. We propose grounded abstraction matching, which bridges the abstraction gap by translating the code back into a systematic and predictable naturalistic utterance. In a between-subjects, think-aloud study (n=24), we compare grounded abstraction matching to an ungrounded alternative based on previously established query framing principles. We find that the grounded approach improves end-users' understanding of the scope and capabilities of the code-generating model, and the kind of language needed to use it effectively.
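For context, a minimal hypothetical sketch of the loop the abstract describes: the user's natural language query is sent to a code-generating model, the returned Python code is executed on the spreadsheet data, and the code is translated back into a systematic naturalistic utterance shown with the result. The `generate_code` stub (standing in for a call to a model such as Codex) and the template in `code_to_utterance` are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of grounded abstraction matching (not the authors' system).
import pandas as pd


def generate_code(query: str) -> str:
    """Placeholder for the code-generating model (e.g. Codex)."""
    # A real system would prompt the model with the query and the table schema.
    return "result = df.groupby('region')['sales'].sum()"


def code_to_utterance(code: str) -> str:
    """Translate generated code back into a predictable naturalistic utterance."""
    # A real system would map each operation to a templated phrase;
    # a single hard-coded pattern stands in for that mapping here.
    if ".groupby('region')['sales'].sum()" in code:
        return "Group the rows by 'region', then sum the 'sales' column."
    return "Run the generated code on the table."


def run_query(df: pd.DataFrame, query: str) -> None:
    code = generate_code(query)
    scope = {"df": df}
    exec(code, scope)  # execute the generated Python on the spreadsheet data
    print("You asked:     ", query)
    print("It understood: ", code_to_utterance(code))  # grounded utterance
    print(scope["result"])


if __name__ == "__main__":
    df = pd.DataFrame({"region": ["N", "N", "S"], "sales": [10, 20, 5]})
    run_query(df, "total sales per region")
```

Showing the query both as generated code and as a re-grounded utterance is what, per the abstract, helps end users learn which phrasings the model can act on.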