Computer Science Students' Trust in AI Coding Tools Peaks Early, Then Stabilizes as Experience Grows

A UC San Diego study finds that undergraduate computer science students' trust in AI coding tools such as GitHub Copilot rises at first but levels off as they gain experience and come to appreciate the importance of fundamental programming skills.

Study Reveals Complex Relationship Between Students and AI Coding Tools

A comprehensive study conducted by researchers at the University of California San Diego has uncovered nuanced patterns in how undergraduate computer science students develop trust in AI-powered coding tools. The research, set to be presented at the Koli Calling conference in Finland from November 11-16, tracked 71 junior and senior computer science students as they worked with GitHub Copilot over several weeks [1].

The study was motivated by the dramatic transformation in computer science education following the emergence of generative AI tools capable of creating code from scratch. "Computer science and programming is changing immensely," explained Gerald Soosairaj, one of the paper's senior authors and an associate teaching professor in the Department of Computer Science and Engineering at UC San Diego [2].

Initial Trust Surge Followed by Reality Check

The research methodology involved a two-phase approach to measure trust evolution. Initially, researchers conducted an 80-minute introductory class explaining how GitHub Copilot functions, allowing students to experiment with the tool. Following this session, approximately 50% of students reported increased trust in the AI assistant, while about 17% reported decreased trust [1].

The second phase proved more revealing. Students participated in a 10-day project working on a large open-source codebase, using GitHub Copilot throughout to add new functionality. This extended exposure to real-world programming scenarios produced more balanced results: 39% of students reported increased trust, 37% experienced decreased trust, and 24% saw no change in their confidence levels [2].

The Competency Paradox

"We found that student trust, on average, increased as they used GitHub Copilot throughout the study. But after completing the second part of the study—a more elaborate project—students felt that using Copilot to its full extent requires a competent programmer that can complete some tasks manually," Soosairaj noted

1

.

This finding highlights a fundamental paradox in AI-assisted programming education. While students initially embrace these tools for their apparent ability to accelerate coding tasks, extended use reveals their limitations. The tools frequently generate incorrect code or fail to assist with code comprehension tasks, leading students to recognize the irreplaceable value of fundamental programming knowledge [2].

Educational Implications and Recommendations

The study's findings carry significant implications for computer science curriculum design. Researchers identified a delicate balance educators must strike: students who over-rely on AI tools risk missing fundamental programming concepts, while those who completely avoid these technologies may find themselves unprepared for industry practices where AI assistance is increasingly standard [1].

The research team developed specific recommendations for educators. They suggest providing students with opportunities to use AI programming assistants across tasks of varying difficulty levels, including complex work within large codebases. Additionally, educators should ensure students maintain the ability to comprehend, modify, debug, and test code independently of AI assistance [2].

Furthermore, the study emphasizes the importance of helping students understand how AI assistants generate output through natural language processing, enabling them to better predict and evaluate tool behavior. Educators are also encouraged to demonstrate specific features useful for large codebase contributions, such as adding files as context and using specialized keywords like "/explain," "/fix," and "/docs" in GitHub Copilot [1].

Future Research Directions

Recognizing the preliminary nature of their findings, the research team plans to expand their investigation. They intend to repeat the experiment with a larger cohort of 200 students during the upcoming winter quarter, potentially providing more robust data on trust patterns and educational outcomes [2].
