2 Sources
[1]
Warm and friendly or competent and straightforward? What students want from AI chatbots in the classroom
University of Auckland, Waipapa Taumata Rau provides funding as a member of The Conversation NZ.

Artificial intelligence (AI) is rapidly transforming education, with schools and universities increasingly experimenting with AI chatbots to assist students in self-directed learning. These digital assistants offer immediate feedback, answer questions and guide students through complex material. For teachers, the chatbots can reduce workload by helping them provide scalable, personalised feedback to students.

But what makes an effective AI teaching assistant? Should it be warm and friendly, or professional and competent? And what are the potential pitfalls of integrating such technology into the classroom? Our ongoing research explores student preferences, highlighting the benefits and challenges of using AI chatbots in education.

Warm or competent?

We developed two AI chatbots, John and Jack. Both were designed to assist university students with self-directed learning tasks but differed in their personas and interaction styles.

John, the "warm" chatbot, featured a friendly face and casual attire. His communication style was encouraging and empathetic, using phrases like "spot on!" and "great progress! Keep it up!" When students faced difficulties, John responded with support: "It looks like this part might be tricky. I'm here to help!" His demeanour aimed to create a comfortable and approachable learning environment.

Jack, the "competent" chatbot, had an authoritative appearance and formal business attire. His responses were clear and direct, such as "correct" or "good! This is exactly what I was looking for." When identifying problems, he was straightforward: "I see some issues here. Let's identify where it can be improved." Jack's persona was intended to convey professionalism and efficiency.

We introduced the chatbots to university students during their self-directed learning activities.
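The contrast between the two personas can be thought of as a set of response templates keyed by persona and situation. The sketch below is purely illustrative, and the names and structure are our assumptions; the study's chatbots were built on Soul Machines technology, not this code:

```python
# Illustrative sketch: persona-conditioned feedback templates.
# The phrases are taken from the personas described above; the
# structure and function names are hypothetical.

PERSONAS = {
    "john": {  # warm, encouraging persona
        "correct": "Spot on! Great progress! Keep it up!",
        "struggling": "It looks like this part might be tricky. I'm here to help!",
    },
    "jack": {  # competent, direct persona
        "correct": "Correct. This is exactly what I was looking for.",
        "struggling": "I see some issues here. Let's identify where it can be improved.",
    },
}

def feedback(persona: str, outcome: str) -> str:
    """Return the persona-appropriate feedback line for a learning outcome."""
    return PERSONAS[persona][outcome]
```

In a real system the template choice would condition a language model's tone rather than return canned strings, but the core idea is the same: one underlying tutor, two switchable interaction styles.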
We then collected data through surveys and interviews about their experiences.

Distinct preferences

We found distinct preferences among the students. Those from engineering backgrounds tended to favour Jack's straightforward, concise approach. One engineering student commented: "Jack felt like someone I could take more seriously. He also pointed out a few additional things that John hadn't when asked the same question." This suggests a professional and efficient interaction style resonated with students who value precision and directness in their studies.

Other students appreciated John's friendly demeanour and thorough explanations. They found his approachable style helpful, especially when grappling with complex concepts. One student noted: "John's encouraging feedback made me feel more comfortable exploring difficult topics."

Interestingly, some students wanted a balance between the two styles: they valued John's empathy but also appreciated Jack's efficiency.

The weaknesses of Jack and John

While many students found the AI chatbots helpful, several concerns and potential weaknesses were highlighted. Some felt the chatbots occasionally gave superficial responses that lacked depth. As one student remarked: "Sometimes, the answers felt generic and didn't fully address my question."

There is also a risk of students becoming too dependent on AI assistance, potentially hindering the development of critical thinking and problem-solving skills. One student admitted: "I worry that always having instant answers could make me less inclined to figure things out on my own."

The chatbots also sometimes struggled to understand the context or nuances of complex questions. A student noted: "When I asked about a specific case study, the chatbot couldn't grasp the intricacies and gave a broad answer." This underscores AI's difficulty interpreting complex human language and specialised content.

Privacy and data security concerns were also raised.
Some students were uneasy about the data collected during interactions. Potential biases in AI responses were another significant concern: because AI systems learn from existing data, they can inadvertently perpetuate biases present in their training material.

Future-proofing classrooms

The findings highlight the need for a balanced approach to incorporating AI into education. Offering students options to customise their AI assistant's persona could cater to diverse preferences and learning styles. Enhancing the AI's ability to understand context and provide deeper, more nuanced responses is also essential.

Human oversight remains crucial. Teachers should continue to play a central role, guiding students and addressing areas where AI falls short. AI should be seen as a tool to augment, not replace, human educators. By collaborating with AI, educators can focus on fostering critical thinking and creativity, skills AI cannot replicate.

Another critical aspect is addressing privacy and bias. Institutions must implement robust data privacy policies and regularly audit AI systems to minimise bias and ensure ethical use. Transparent communication about how data is used and protected can also allay student concerns.

The nuances of AI in classrooms

Our study is ongoing, and we plan to expand it to include more students across different courses and educational levels. This broader scope will help us better understand the nuances of student interactions with AI teaching assistants. By acknowledging both the strengths and weaknesses of AI chatbots, we aim to inform the development of tools that enhance learning outcomes while addressing potential challenges. The insights from this research could significantly shape how universities design and implement AI teaching assistants in the future.
By tailoring AI tools to meet diverse student needs and addressing the issues identified, educational institutions can use AI to create more personalised and effective learning experiences.

This research was completed with Guy Bate and Shohil Kishore. The authors would also like to acknowledge the support of Soul Machines in providing the AI technology used in this research.
[2]
Warm and friendly or competent and straightforward? What students want from AI chatbots in the classroom
by Shahper Richter, Inna Piven and Patrick Dodd, The Conversation. (Same article as source [1], republished with American spelling.)
A research study explores student preferences for AI chatbots in education, comparing warm and friendly versus competent and straightforward approaches, while highlighting benefits and challenges of integrating AI in classrooms.
A recent study conducted by researchers at the University of Auckland has shed light on student preferences for AI chatbots in educational settings. As artificial intelligence continues to transform education, the research explores the effectiveness of different AI teaching assistant personas and their impact on student learning [1].
Researchers developed two distinct AI chatbots, John and Jack, to assist university students with self-directed learning tasks. Each chatbot was designed with a unique persona and interaction style: John, the "warm" chatbot, was friendly and encouraging, while Jack, the "competent" chatbot, was formal and direct.
The study revealed distinct preferences among students: those from engineering backgrounds tended to favor Jack's concise, professional style, while others preferred John's friendly, thorough explanations, and some wanted a balance of both.
While many students found the AI chatbots helpful, several concerns were highlighted: occasionally superficial or generic answers, the risk of over-dependence on AI assistance, difficulty with context and nuance, privacy and data security, and potential bias in responses.
The research findings emphasize the need for a balanced approach in integrating AI into education: customizable assistant personas, improved contextual understanding, continued human oversight, and robust privacy and bias safeguards.
The study is ongoing, with plans to expand to include more students across different courses and educational levels. By acknowledging both the strengths and weaknesses of AI chatbots, the researchers aim to inform the development of tools that enhance learning outcomes while addressing potential challenges [2].
As AI continues to play an increasingly significant role in education, the insights from this research could have a substantial impact on how universities design and implement AI teaching assistants in the future, potentially reshaping the landscape of digital learning tools and pedagogical approaches.