
Will AI Change the Way Students Learn? New Research Uncovers the Truth

Writer: Yusuf Öç

A few years ago, the idea of students using artificial intelligence (AI) to write essays, create presentation slides, generate images, write code, or even complete entire assignments seemed like science fiction. Fast forward to today, and AI tools like ChatGPT, MidJourney, DALL-E, and Grammarly are becoming as common as textbooks in higher education.

But how do students really feel about AI? Are they embracing it as a learning tool without a second thought, or avoiding it out of concern about ethics and reliability?

In our latest research, published in the Journal of Marketing Education, we set out to understand how students interact with AI in their studies and why some eagerly use it while others hesitate.

A Mixed Reaction: Excitement, Fear, and Skepticism

Imagine three university students—Alex, Sarah, and James—all facing the same challenge: a big assignment deadline. Each of them has access to AI tools like ChatGPT, DALL-E, Gamma.ai, and Grammarly, but their approaches to using these tools are dramatically different.

  • Alex is tech-savvy and experimental. He loves new technology and sees AI as an assistant that can help him brainstorm, organize ideas, and refine his writing. He still double-checks AI-generated content and makes sure his work aligns with the module content.

  • Sarah is cautious and hardworking. She’s heard horror stories of students getting penalized for relying too much on AI, so she avoids it altogether. She’s worried about misinformation, plagiarism, and losing her critical thinking skills.

  • James is overly trusting. He copies AI-generated responses without verifying sources or checking accuracy. He assumes everything AI produces is correct and well-written, even when it’s not. As a result, his assignments often lack depth, originality, or connection to the course material.

So, what separates students like Alex from those like Sarah and James? Why do some students embrace AI responsibly, while others either reject it or misuse it? Our research found that it largely comes down to three key factors: risk perception, trust, and tech-savviness.


Me presenting the study at the Academy of Marketing Seminar.

Perceived Risk: The Fear of AI “Taking Over”

Sarah’s hesitation is a classic example of perceived risk: the fear that using AI could lead to negative consequences.

🔹 Will AI-generated content get flagged for plagiarism?

🔹 What if the information AI provides is wrong or biased?

🔹 Could relying on AI weaken my writing and critical thinking skills?

For students like Sarah, these concerns outweigh the potential benefits, leading them to avoid AI tools entirely.

📌 Finding: The more students feared AI-related risks, the less likely they were to use AI in their studies.

Trust: The Balancing Act Between Confidence and Caution

James, on the other hand, represents students who blindly trust AI, copying its outputs without fact-checking or refining them.

Trust plays a huge role in AI adoption. Some students, like Alex, trust AI but still exercise caution by verifying content before submission. Others, like James, trust AI too much, assuming all outputs are accurate and well-structured.

Another important factor? Trust in educators. When professors openly discuss AI and provide clear guidelines on its ethical use, students feel more comfortable integrating it into their work. But if educators are strictly anti-AI, students may either avoid it completely (like Sarah) or use it secretly, without guidance (like James).

📌 Finding: Balanced trust, both in AI tools and in clear guidelines from educators, leads to more responsible AI usage.

Tech-Savviness: The Confidence to Use AI Effectively

Alex’s approach to AI is what we call tech-savviness: he’s comfortable with digital tools, understands how AI works, and knows its limitations.

Tech-savvy students tend to:

✅ Use AI to enhance their own thinking rather than replace it.

✅ Edit and refine AI-generated text before submitting assignments.

✅ Adapt AI tools for different academic tasks, such as summarization, research, and content organization.

📌 Finding: Students with higher tech-savviness were more likely to see AI as a helpful tool rather than a risky shortcut.

You have to include an AI-generated image in this blog post, right? :)

How Should Universities Respond?

Our study highlights an important reality: AI is here to stay. The question isn’t whether students will use it, but how they will use it.

To help students make the most of AI while avoiding misuse, universities can:

✅ Provide Clear AI Guidelines. Instead of banning AI, educators should offer structured guidance on when and how it can be used responsibly.

✅ Encourage AI Literacy. Teaching students how to critically evaluate AI-generated content will help prevent blind trust or misuse.

✅ Create Ethical AI Assignments. Incorporating AI into coursework in a controlled, guided manner will help students learn how to integrate technology without replacing their own thinking.

The Future of AI in Higher Education

Alex, Sarah, and James represent three very different responses to AI in education. The ideal approach? A balance between trust and critical thinking, just like Alex’s.

As AI continues to evolve, students, educators, and institutions must work together to find ethical, effective, and educational ways to integrate AI into learning.

💡 What do you think? Should universities embrace AI in education, or should they introduce stricter regulations? Let’s discuss in the comments!
