Are undergraduate students too reliant on artificial intelligence (AI) tools like ChatGPT and Copilot?

Neha Rani, Ph.D., assistant professor in the Department of Computer & Information Science & Engineering, may not be able to answer that question, but she’s certainly working to understand the factors that contribute to students’ patterns of AI reliance.
Ultimately, she and her team want to help students use AI tools more effectively.
In a study recently published in conjunction with the 27th International Conference on Human-Computer Interaction, Rani and her co-authors identified predictors for appropriate reliance, over-reliance and under-reliance behaviors.
To do so, the authors designed a controlled experiment in which students were asked to solve Python programming problems with an AI assistant that provided both accurate and misleading assistance. Students were also surveyed before and after their interactions with the system.
The results?
Students who were confident in their programming skills generally used AI tools appropriately, while less confident and less experienced students unknowingly accepted errors and wrongly believed the system was extremely helpful.
Put another way, AI systems can deliver incorrect information with confidence, which students with less experience might not be able to detect.
All is not lost, however.
One of the authors’ motivations was finding ways to promote appropriate reliance on AI tools, which are increasingly incorporated into curricula and educational technologies. Results from the controlled experiment revealed students are somewhat capable of recognizing their own over-reliance, providing an important opportunity for designing educational interventions.
The researchers hope that leveraging this tendency could foster a future where students use AI tools more appropriately and effectively.