2 Sources
[1]
Student trust in AI coding tools grows briefly, then levels off with experience
How much do undergraduate computer science students trust chatbots powered by large language models like GitHub Copilot and ChatGPT? And how should computer science educators modify their teaching based on these levels of trust? These were the questions that a group of U.S. computer scientists set out to answer in a study that will be presented at the Koli Calling conference Nov. 11 to 16 in Finland.

Over the study's few weeks, researchers found that trust in generative AI tools increased in the short run for a majority of students. But in the long run, students said they realized they needed to be competent programmers without the help of AI tools, because these tools often generate incorrect code or fail to help with code comprehension tasks.

The study was motivated by the dramatic change in the skills required of undergraduate computer science students since the advent of generative AI tools that can create code from scratch. The work is published on the arXiv preprint server.

"Computer science and programming is changing immensely," said Gerald Soosairaj, one of the paper's senior authors and an associate teaching professor in the Department of Computer Science and Engineering at the University of California San Diego.

Today, students are tempted to rely too heavily on chatbots to generate code and, as a result, might not learn the basics of programming, researchers said. These tools also might generate code that is incorrect or vulnerable to cybersecurity attacks. Conversely, students who refuse to use chatbots miss out on the opportunity to program faster and be more productive.

But once they graduate, computer science students will most likely use generative AI tools in their day-to-day work and will need to be able to do so effectively. This means they will still need a solid understanding of the fundamentals of computing and of how programs work, so they can evaluate the AI-generated code they will be working with, researchers said.

"We found that student trust, on average, increased as they used GitHub Copilot throughout the study. But after completing the second part of the study, a more elaborate project, students felt that using Copilot to its full extent requires a competent programmer that can complete some tasks manually," said Soosairaj.

The study surveyed 71 junior and senior computer science students, half of whom had never used GitHub Copilot. After an 80-minute class in which researchers explained how GitHub Copilot works and had students use the tool, half of the students said their trust in the tool had increased, while about 17% said it had decreased.

Students then took part in a 10-day project in which they used GitHub Copilot to add a small new piece of functionality to a large open-source codebase. At the end of the project, about 39% of students said their trust in Copilot had increased, about 37% said it had decreased somewhat, and about 24% said it had not changed.

These results have important implications for how computer science educators should approach the introduction of AI assistants in introductory and advanced courses. The researchers make a series of recommendations for educators in an undergraduate setting.
* To help students calibrate their trust and expectations of AI assistants, computer science educators should provide opportunities for students to use AI programming assistants on tasks with a range of difficulty, including tasks within large codebases.
* To help students determine how much they can trust AI assistants' output, computer science educators should ensure that students can still comprehend, modify, debug, and test code in large codebases without AI assistants.
* Computer science educators should ensure that students are aware of how AI assistants generate output via natural language processing, so that students understand the assistants' expected behavior.
* Computer science educators should explicitly introduce and demonstrate key features of AI assistants that are useful for contributing to a large codebase, such as adding files as context while using the "explain code" feature and using keywords such as "/explain," "/fix," and "/docs" in GitHub Copilot.

"CS educators should be mindful that how we present and discuss AI assistants can impact how students perceive such assistants," the researchers write.

The researchers plan to repeat their experiment and survey with a larger pool of 200 students this winter quarter.
[2]
How Can Computer Science Educators Teach Students to Calibrate Their Trust in GenAI Programming Tools? | Newswise
Newswise -- How much do undergraduate computer science students trust chatbots powered by large language models like GitHub Copilot and ChatGPT? And how should computer science educators modify their teaching based on these levels of trust? These were the questions that a group of U.S. computer scientists set out to answer in a study that will be presented at the Koli Calling conference Nov. 11 to 16 in Finland.

Over the study's few weeks, researchers found that trust in generative AI tools increased in the short run for a majority of students. But in the long run, students said they realized they needed to be competent programmers without the help of AI tools, because these tools often generated incorrect code or failed to help with code comprehension tasks.

The study was motivated by the dramatic change in the skills required of undergraduate computer science students since the advent of generative AI tools that can create code from scratch.

"Computer science and programming is changing immensely," said Gerald Soosairaj, one of the paper's senior authors and an associate teaching professor in the Department of Computer Science and Engineering at the University of California San Diego.

Today, students are tempted to rely too heavily on chatbots to generate code and, as a result, might not learn the basics of programming, researchers said. These tools also might generate code that is incorrect or vulnerable to cybersecurity attacks. Conversely, students who refuse to use chatbots miss out on the opportunity to program faster and be more productive.

But once they graduate, computer science students will most likely use generative AI tools in their day-to-day work and will need to be able to do so effectively. This means they will still need a solid understanding of the fundamentals of computing and of how programs work, so they can evaluate the AI-generated code they will be working with, researchers said.

"We found that student trust, on average, increased as they used GitHub Copilot throughout the study. But after completing the second part of the study, a more elaborate project, students felt that using Copilot to its full extent requires a competent programmer that can complete some tasks manually," said Soosairaj.

The study surveyed 71 junior and senior computer science students, half of whom had never used GitHub Copilot. After an 80-minute class in which researchers explained how GitHub Copilot works and had students use the tool, half of the students said their trust in the tool had increased, while about 17% said it had decreased.

Students then took part in a 10-day project in which they used GitHub Copilot to add a small new piece of functionality to a large open-source codebase. At the end of the project, about 39% of students said their trust in Copilot had increased, about 37% said it had decreased somewhat, and about 24% said it had not changed.

These results have important implications for how computer science educators should approach the introduction of AI assistants in introductory and advanced courses. The researchers make a series of recommendations for educators in an undergraduate setting.

"CS educators should be mindful that how we present and discuss AI assistants can impact how students perceive such assistants," the researchers write.

The researchers plan to repeat their experiment and survey with a larger pool of 200 students this winter quarter.
Evolution of Programmers' Trust in Generative AI Programming Assistants. Anshul Shah, Elena Tomson, Leo Porter, William G. Griswold, and Adalbert Gerald Soosai Raj. Department of Computer Science and Engineering, University of California San Diego.
A UC San Diego study reveals that undergraduate computer science students initially develop increased trust in AI coding tools like GitHub Copilot, but this trust levels off as they gain experience and realize the importance of fundamental programming skills.

A comprehensive study conducted by researchers at the University of California San Diego has uncovered nuanced patterns in how undergraduate computer science students develop trust in AI-powered coding tools. The research, set to be presented at the Koli Calling conference in Finland from November 11-16, tracked 71 junior and senior computer science students as they worked with GitHub Copilot over several weeks [1].
The study was motivated by the dramatic transformation in computer science education following the emergence of generative AI tools capable of creating code from scratch. "Computer science and programming is changing immensely," explained Gerald Soosairaj, one of the paper's senior authors and an associate teaching professor in the Department of Computer Science and Engineering at UC San Diego [2].
The research methodology involved a two-phase approach to measure trust evolution. Initially, researchers conducted an 80-minute introductory class explaining how GitHub Copilot functions and allowing students to experiment with the tool. Following this session, approximately 50% of students reported increased trust in the AI assistant, while about 17% experienced decreased confidence [1].
The second phase proved more revealing. Students participated in a 10-day project working on a large open-source codebase, using GitHub Copilot throughout to add new functionality. This extended exposure to real-world programming scenarios produced more balanced results: 39% of students reported increased trust, 37% experienced decreased trust, and 24% saw no change in their confidence levels [2].
."We found that student trust, on average, increased as they used GitHub Copilot throughout the study. But after completing the second part of the study—a more elaborate project—students felt that using Copilot to its full extent requires a competent programmer that can complete some tasks manually," Soosairaj noted
1
This finding highlights a fundamental paradox in AI-assisted programming education. While students initially embrace these tools for their apparent ability to accelerate coding tasks, extended use reveals their limitations. The tools frequently generate incorrect code or fail to assist with code comprehension tasks, leading students to recognize the irreplaceable value of fundamental programming knowledge [2].
The study's findings carry significant implications for computer science curriculum design. Researchers identified a delicate balance educators must strike: students who over-rely on AI tools risk missing fundamental programming concepts, while those who completely avoid these technologies may find themselves unprepared for industry practices where AI assistance is increasingly standard [1].
The research team developed specific recommendations for educators. They suggest providing students with opportunities to use AI programming assistants across tasks of varying difficulty, including complex work within large codebases. Additionally, educators should ensure students maintain the ability to comprehend, modify, debug, and test code independently of AI assistance [2].
Furthermore, the study emphasizes the importance of helping students understand how AI assistants generate output through natural language processing, enabling them to better predict and evaluate tool behavior. Educators are also encouraged to demonstrate specific features useful for contributing to large codebases, such as adding files as context and using specialized keywords like "/explain," "/fix," and "/docs" in GitHub Copilot [1].
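To make that recommendation concrete, here is a brief, hypothetical sketch of the workflow (the function and values are illustrative, not taken from the study): a student selects a small buggy routine in an editor with GitHub Copilot Chat, asks "/explain" for a walkthrough, "/fix" for a proposed correction, and "/docs" for a docstring, then verifies the result manually.

```python
# Hypothetical snippet a student might select before typing "/explain",
# "/fix", or "/docs" in Copilot Chat; any files added as context would
# accompany the selection.

def average(values):
    # Bug: an empty list makes len(values) zero, so this raises an
    # unhelpful ZeroDivisionError instead of a clear error message.
    return sum(values) / len(values)

def average_fixed(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:  # guard the case the original version missed
        raise ValueError("average() requires a non-empty sequence")
    return sum(values) / len(values)

# The calibration step the study emphasizes: test the suggested fix
# yourself rather than accepting it blindly.
assert average_fixed([2, 4, 6]) == 4.0
```

The point is not the particular fix but the habit it builds: a student who can read, test, and debug this code without Copilot is in a far better position to judge when the assistant's output deserves trust.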
Recognizing the preliminary nature of their findings, the research team plans to expand their investigation. They intend to repeat the experiment with a larger cohort of 200 students during the upcoming winter quarter, potentially providing more robust data on trust patterns and educational outcomes [2].
Summarized by Navi • 04 Nov 2025 • Technology
