One of the questions facing education today is what to do with AI. Where and how should it fit, and should it fit at all? To address these questions, we invited Sara, an Assistant Professor of Political Science and Law at Drexel University, to relate her real-world experience as an educator in the age of AI, both from her students' perspective and from her colleagues'. If you're looking for a thoughtful and rational exploration of how educators, today and moving forward, should think about navigating AI, this is the talk for you.
Transcript
Hi, everybody. So what happens when students think that they can learn faster from a machine than from their professors? And that’s what I’m going to talk about today. I’m an Assistant Professor of political science and law at Drexel University in Philadelphia.
Drexel grads? Woohoo! All right. So as most of you know, the rapid proliferation of artificial intelligence, or AI, platforms has posed fundamental challenges to the way both students and educators approach the learning process.
While students have been faster to incorporate AI technologies into their day-to-day lives, many faculty members fear that we've just been left behind. This has raised concerns about how much information is actually being retained by our students and fears that cheating is just rampant. On the faculty side, there's concern that professors are going to be outpaced by evolving AI platforms that can synthesize and tailor information to student queries.
Because of the short time period in which ChatGPT and other platforms have proliferated, most universities don't have a comprehensive policy in place to guide student and faculty usage of AI.
So ultimately there’s a need for better literacy for both faculty and students so that we can ensure that students are properly trained without sacrificing crucial elements of the learning process.
When ChatGPT was released at the end of November 2022, it landed in a university system that was still transitioning in the wake of the Covid pandemic. Faculty and students were just readapting to life back in the classroom after a two-year pivot to online teaching and learning. Now, just six days after ChatGPT entered the public domain, OpenAI announced that it had already reached over a million users, and a lot of them were students.
LLMs like ChatGPT, which was developed of course by OpenAI, are capable of understanding and generating higher-education-level text. As a result, higher education is being transformed as more students seek to incorporate what they see as essay-writing assistance. So understanding the nuances of generative AI adoption and use among students is critical for educators, for caring adults, for tech companies, and for policymakers as well. Universities now have to start critically assessing and adapting their missions, their curricula, their teaching methodologies and their assessment strategies.
At the same time, students have to master new competencies if they're going to be able to effectively harness the solutions offered by these AI platforms.
Now, if universities are going to adapt to this so-called AI disruption, they have to address issues of equity, ethics and faculty adoption in a timely manner, and that's not something that we're seeing happen.
So, how many college students are actively using AI platforms like ChatGPT? Well, because student usage of AI is still relatively new, it's hard to get a truly accurate sense of how many students are consulting AI platforms on a daily basis. Now, I just taught a summer introduction-to-politics class online that required students to write discussion board posts based on the textbook, and it sometimes felt like all of the students were using AI to generate at least part of their responses. For many of us faculty members, it feels this way when we check our email and every single student message begins with "Good afternoon, Dr. X," regardless of the time of day. Sometimes they don't even bother to fill our names in. "I trust this email finds you well." But what does the actual data tell us about student use of AI?
Well, at first glance, the data that we have on the topic is just all over the place.
We see figures that show those using AI include 86% of college students in a sample of 10 countries, 65% of Harvard students, and the numbers go on and on.
So to summarize, we can safely assume that half to probably all college students at this point are using AI in some way.
And when we look at the studies that have been done to date, we do see a number of trends emerge. Later studies obviously show a marked increase in student usage of generative AI platforms over earlier ones. Companies like OpenAI actually came to campuses and gave college students free access to premium ChatGPT right during the spring 2025 exam cycle. If you went to a bathroom anywhere on campus, there were ChatGPT flyers everywhere. The most widely cited current data comes from Harvard's Hopelab study, which concluded that half of young people are using generative AI on a weekly basis.

So now the bigger question: what are students actually using AI for? The most commonly reported use of generative AI among young adults is what students call brainstorming, which accounts for about 51% of their usage. Now, although most students are at least passively familiar with generative AI, they're still learning how to use this relatively new technology, especially in the social sciences and the law. So they're using it for inspiration and advice on assignments, for support in researching new topics, to find academic sources and then to help them cite those sources, to assist with clerical tasks like writing emails that find us all well, and to manage their schedules. They're also using it for help with math problems, and sometimes for help with coding. So students are turning to AI for a multitude of reasons, and the most common reason is to save time. Being a student is more expensive than ever.
More than 68% of students now have a part-time job and that’s the highest rate in the past two decades.
Many of my students have told me that they feel like they have less time than ever to just be students.
They’re constantly worried about working, about gaining work experience, and also with a multitude of other activities. Things like sports, honors societies, professional clubs, that they think are going to help to guarantee them a future career.
Now, at the same time, today's undergraduate students are also a product of that online pivot that took place during Covid, so they have really limited attention spans when it comes to actually sitting down and making sense of course readings. For these reasons, they see AI as the ultimate time-saving tool that enables them to fast-track the learning process.
At the same time, though, students’ willingness to experiment with AI varies widely from student to student.
Some adventurous students are diving in more readily than their peers, and these pathfinding students treat chat platforms like ChatGPT like any other social media site. They leave it on in the background and they talk to it like a buddy. It's not uncommon for a student to say, "Chat told me," before whatever point they're raising in class, as if it were a classmate. The conversational style of LLMs naturally encourages students to project human qualities onto them. Sometimes they even name their ChatGPT. One student kept telling me things her friend Shany had told her about our class, and I thought this was so weird, because there's no Shany on the class roster. It turned out that she had named her chatbot and was having in-depth conversations with it.
More than half of user interactions especially amongst students could be classified as collaborative, involving a dialogue between the student and the AI.
Students have indicated that they are more willing to use AI platforms to ask what they see as dumb questions than to ask those questions in class or to me as the professor.
They’re so sensitive to the critiques of their peers that they’re more comfortable asking questions to AI because, and I’m quoting a student, it just won’t judge me.
Students have shown in recent years that they're more comfortable receiving feedback from AI than from a professor as well. Recognizing this tendency, especially among younger students, to anthropomorphize AI is important because it shapes how they evaluate credibility, how they develop trust, and how they integrate these systems into their academic lives.
So now the big question: are they using it to cheat? The obvious answer feels like it should be a resounding yes. Cluely, which is a desktop AI assistant that monitors your screen and audio and then gives you real-time prompts, answers, suggestions and talking points, seems to be designed and marketed to help students cheat on exams and assignments. Cluely's motto claims it to be "the invisible AI that thinks for you."
Now, while there are certainly many students who cut and paste their professor's assignment prompt into ChatGPT and then submit the resulting answer, em dashes and all, not all students view their use of generative AI as cheating, per se. There are also some students who actively avoid generative AI because they're worried about accusations of cheating.
Students who use it for what they call brainstorming don't think of it as cheating. So while most students purport to use AI as a supportive tool, what counts as "support" and what it actually means is complicated. Now, students are pretty ambivalent when they're asked whether generative AI has been a positive element or a negative one, and their views are often deeply divided, even among those who are using AI on a weekly basis.
At the same time, 54% were also very concerned about how it could affect their own learning outcomes.
Most of the students that I've talked to in the past year are pretty excited about the potential of AI platforms like ChatGPT for changing how they access information, but they're also worried that they're going to be accused of cheating, and they're worried that they're not using these platforms correctly or effectively.
Many of the undergraduates have also expressed deep concerns about personal privacy when using AI. They worry about being disadvantaged if they don't use AI while their peers do. Some students have pointed out that paid versions like ChatGPT Plus and Claude create inequities between those who can pay for access and those who can't.
There’s a lot of uncertainty against students and faculties members over what counts as acceptable use in a university setting. Is it brainstorming in what about proofreading? Writing full first drafts? And so all of this uncertainty leads to a lot of anxiety about inconsistent policies that are in place across the university.
And so as a result, the only conclusion we can make about student views of AI is that a plurality of students think AI will have both positive and negative consequences.
Now, one of the major trends that we're noticing at the university level is that as students become more and more reliant on ChatGPT and other AI platforms for their assignments, they're starting to lose the ability to discuss key concepts related to their studies within the classroom setting. This points to something called the illusion of competence.
This is where students think they know more about a given subject than they actually do. Students think they understand the material because it starts to feel familiar. They may have typed some questions into ChatGPT and cut and pasted some answers into an assignment, but they're not able to recall or apply that same information independently. When learning something for the first time, genuine effort is required for our minds to incorporate concepts and skills into our long-term memory and systemic thinking processes.
When students utilize AI programs in an attempt to speed up the learning process, they may be engaging in something we call cognitive offloading, where we push mental work outside of our minds to reduce cognitive burden. Sometimes offloading menial tasks can be helpful, for example using a PDF search function to find specific keywords in a document, or looking for research articles to start a research project with. But offloading is problematic when we use tools like generative AI to draft a paper or to answer specific questions; the result is a cognitive debt. It causes an overreliance on outside sources, replacing our own thinking, and as students become reliant on AI for completing their assignments, a number of studies have shown correlations with decreased memory retention, critical thinking, and more…
For students who are trying to shortcut the learning process, AI may produce the appearance of learning while masking the costs. We know that deep learning comes from cognitive work that exercises and strengthens our minds' abilities, so a false sense of mastery among students can lead to further overconfidence, to shallow studying, and to poor performance when true understanding is actually tested.
It can also leave students unprepared for career-specific scenarios that arise outside of the university setting.
So I’ve talked a little bit about students and their interactions with AI, but what the faculty members?
There is a reason why many faculty members invoke Skynet as a metaphor when talking about LLMs like ChatGPT. In the Terminator movies, Skynet is a military-designed artificial intelligence system that becomes self-aware and perceives humanity as a threat that must be destroyed. In order to protect itself, it launches a nuclear apocalypse and creates machines to hunt down survivors, driving the plot of the rest of the movies.
Many faculty members, especially the older, already-tenured ones, view the arrival of generative AI as an uncontrollable disruption. They're experiencing a sense of loss of control and a fear of replacement.
And this narrative is fueled by enduring fears that AI might replace many of the core functions of professors, and this fear isn't new. For those of you familiar with the book Robot-Proof, it notes that more Americans fear robots taking over their jobs than death itself. So when ChatGPT was released in 2022, a lot of faculty members simply panicked.
Its release came in the middle of the school term, and so of course it was too late to introduce individual course policies or to change any syllabi.
Many saw the new tool as the homework killer. They feared that this new program was going to write a student's essay in seconds and that they wouldn't be able to detect the cheating. Now, in addition to a simmering fear that AI was going to replace all professors, there were also concerns about how these technologies could be used by repressive regimes, including outside of the university context.
So at least initially, many faculty members were more afraid of AI than they were curious.
Despite the existential dread brought about by the advent of AI, many faculty members do use AI tools to detect student plagiarism.
A recent survey found that 91% of teaching faculty have concerns about preventing academic dishonesty.
Now, when asked if AI was making their jobs easier or harder, 76% of respondents in the same study said that it was just deflating their job enthusiasm.
Professors stress the importance of protecting intellectual property and the need to maintain data privacy. Many also emphasized the breakdown of traditional teaching formulas that they saw as being eroded by the use of generative AI, and they expressed worries that students were going to continue to deprioritize critical thinking and research skills.
Now, behind this fear that generative AI is going to become some kind of Skynet lies a more personal fear, and that is the fear of looking dumb in front of our students.
After ten years or more of study, the completion of multiple degrees, and the struggle to publish articles and new research, professors have a deep-seated fear of looking stupid in front of their classes when faced with a disruptive technological innovation that many of us just don't understand. There's a sense that if we can't demonstrate that we know more than our students, we're going to suffer a loss of authority in the classroom.
Additionally, many professors are afraid that they won't be able to detect AI misuse; they're unsure how to use detection tools, and they worry that those tools are unreliable. What if the students figure out that we're all actually clueless? Are they going to check out totally?
So the fears that professors harbor include things like cheating: are students going to use AI to avoid thinking through the concepts that we're trying to teach in the classroom? And what about de-skilling: are students going to lose critical thinking, writing and research skills?
Many of the faculty worry that this powerful technology has been unleashed quickly with consequences that we may not fully understand.
Now, some of the faculty fear surrounding generative AI can be ascribed to the lack of teacher training that many professors received from their home institutions. When I first started teaching, I was hired as an Assistant Professor, and we had days of training on setting up a SCIF. Think Men in Black going to a conference. They did show us the library, but at the end of the day, a man walked in telling us that he was from the Center for Teaching and Learning, and he had a guitar. He started to sing: "Hey, good day, I'm here to say, that I'm Dr. Messier and it's super-cool to be a professor."
He talked for about 45 minutes about how students really enjoy rhyming lessons and musical theater.
OK.
So this 45-minute musical interlude was the full extent of teacher training that I received when I started my career as a faculty member.
And I’m not alone in this. Faculty members spend years receiving specialized training in their areas of expertise, but they get little to no instruction on how to present this information to students in an actual classroom. Many faculty have not received any guidance from their universities about using AI for teaching and learning.
Since many professors aren’t yet comfortable using AI themselves, they need to learn how to integrate technology into their classrooms.
Universities should be developing training seminars for faculty to show them how to incorporate LLM technologies into their research and teaching.
The next challenge, though, would be getting faculty to attend these educational training sessions. In an environment where it sometimes feels now like the university is under attack, there’s pressure on faculty members to prioritize our research over the introduction of new teaching practices.
In another survey that asked faculty members why they haven't engaged with AI platforms, the most common response was that they lacked the time to dedicate to learning new tools, and this was mostly the result of having an impossibly heavy workload. In a field that seems to reify the old-man-in-a-tweed-jacket trope, there's also extensive concern among faculty members that generative AI disrupts traditional learning models.
Following the shift to online teaching and learning that was brought about by Covid, the innovation of emerging technologies has intensified. Academia, however, is known for its institutional lag.
There is a slow process to change how things are traditionally done. We go from committee to white paper, back to committee, to a vote, and by the time we actually make any kind of policy or reach a final decision, the technology has changed again and everything we've done is obsolete.
So the arrival of generative AI has had a significant impact on traditional teaching models, so much so that many faculty initially reacted by banning its use in their classrooms. This tension between the speed of technological change and the deliberate pace of academic culture leaves a lot of professors struggling to balance their pedagogical concerns with the need to innovate and move forward.
Whether they realize it or not, many professors are actually already using AI technology daily.
Only 15% of faculty members in a recent study indicated that their university mandates the use of AI. But 81% of those questioned stated that they are absolutely required to use learning management systems, or LMSs, like Canvas, Blackboard and Moodle. All of these have predictive AI embedded in their systems, even when users opt out of the AI functions.
Within these systems, predictive analytics flag at-risk students based on their login patterns and their grades, and they come equipped with automated grading tools. What this suggests is that resistance to AI in higher education often comes down less to whether faculty are using AI at all and more to whether they recognize its presence and understand its implications.
I also think it’s important to note that not all faculty members are afraid of an AI-generated apocalypse. Some of us have already figured out that that experimenting with AI can make teaching fun again. Professors, especially those who I think are newer to the academy help to promote creativity or bring a sense of play back into the classroom setting. Some professors have also admitted that their use of AI has become pervasive.
In my own classroom, I use AI to break down complicated concepts, especially to translate them for PowerPoint slides. I also use it to compare the application of grading criteria across a number of papers to make sure it's fair. I don't use it to grade assignments, but I sometimes do use it to soften the blow of what I'm going to write on the actual paper. So it can help to kind of cushion that.
[laughter]
So the proposed use of AI for grading purposes, though, demonstrates this double standard that many students express when we’re thinking about generative AI. It’s acceptable for them to use it for brainstorming, but it’s not OK for their professors to use that same technology.
If you look at forums like Rate My Professors (and I hate this one), students complain about faculty use of AI. They also got rid of the chili pepper rating, which indicated whether your faculty member was attractive or not. That was a huge let-down.
The biggest problem here is that, for the most part, professors really don't care about student grades. Students, on the other hand, care a great deal about grades, and this creates a contradiction where students use AI to personalize their own learning, but they don't want faculty members to use AI in their grading strategies.
So what are the next steps we should take at the university level? Well, first, I think there's a need for colleges and universities to implement clear AI policies to guide faculty in policing the actions of their students. The arrival of LLMs allows students to bypass much of the learning process, and students need boundaries. But such measures tend to address the symptoms of our current educational crises, not the underlying cause.
So when we’re developing AI policies to guide faculty, universities have to realize that the issue really isn’t AI-assisted cheating, it’s the failure of the university to cultivate intrinsic motivation and to sustain attention in our students.
Many institutions, though, have tried to offload the responsibility for managing student AI use onto the professors, and as I’ve already discussed, this doesn’t always work, because there are many faculty who are still trying to figure out what AI actually is, as they adjust the plastic overhead projector.
Stark variations in AI policy that differ from class to class leave our students confused about the extent to which they can engage with the new tools that are now widely available to them. And so in a desperate bid to prevent cheating, many professors have actually gone back to blue-book-style exams where students have to hand write out answers. This would not work for me. I have the world’s worst handwriting, so I would end up having to type it out anyway. And the wider problem with this strategy is that these pen and paper exams disadvantage students who were encouraged to master typing in their younger years.
Meanwhile, faculty members have to accept that even if they rely on traditional assessments like final papers or take-home exams, they're going to receive submissions that are heavily shaped by AI. So a middle ground has to be found where students are encouraged to incorporate AI into the learning process but are not permitted to submit AI-generated assignments.
One strategy would be to implement mandatory digital literacy training for both students and faculty and this would focus on the importance of effective prompt construction.
Because this is an area where both faculty and students are struggling.
As you probably all know, the information, sentences or questions that are entered into a generative AI tool as a prompt have a huge influence on the quality of the outputs that you receive.
So generic prompts like "write me a story" are going to produce generic results, and your AI interactions and the quality of your outputs largely hinge on how you word these prompts. As educators, we have to do a much better job of encouraging students' creativity with things like AI prompting activities.
Prompting activities help faculty and students by encouraging them to iterate on their inquiries, receive deeper insights, and critically engage with information. Effective prompting also helps to highlight ethical concerns, things like AI bias and hallucinations, and it helps to promote a wider sense of digital literacy.
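To make that point concrete, here is a minimal sketch of the difference between a generic prompt and a structured one. It is only an illustration, not anything from the talk: it assumes the OpenAI Python client, and the model name, prompt wording, and the ask() helper are placeholders chosen for the example.

```python
# A minimal sketch: the same request as a generic prompt versus a structured
# prompt that spells out a role, task, constraints, and output format.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Generic prompt: little context, so the output tends to be generic too.
generic_prompt = "Write me a summary of judicial review."

# Structured prompt: role, task, constraints, and output format made explicit.
structured_prompt = (
    "You are a political science tutor for first-year undergraduates.\n"
    "Task: explain judicial review using Marbury v. Madison as the example.\n"
    "Constraints: 150 words or fewer, plain language, no direct quotations.\n"
    "Output format: two short paragraphs followed by one discussion question."
)

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("--- generic ---\n", ask(generic_prompt))
    print("--- structured ---\n", ask(structured_prompt))
```

The structured version names a role, a task, constraints, and an output format, which is the kind of prompt construction a digital literacy training for students and faculty might practice.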
Everyone at the university could use a digital literacy refresher. Students and some professors face difficulties discerning credible information from AI hallucinations and irregular sources. Recently a student cited a source that was very obviously AI-generated, and he said, "I know it's peer reviewed." I said, "How do you know it's peer reviewed?" He said, "All of my friends liked it on Facebook."
Oh, OK.
So AI literacy begins with mental habits that are already in use: clarifying assumptions, testing ideas and revising repeatedly. These skills lend themselves to exploring the possibilities of AI while remaining aware of its limitations. AI is obviously the future, so we have to use it safely and effectively so that as educators we can help to train the next generation.
Now, before I end my talk, I have a gift to deliver to our host from my colleague, Meg G. As Steve already mentioned, she's currently at her niece's wedding, but she asked that I give Steve a small gift in the form of a reminder. And that reminder is that Tufts men's lacrosse is the back-to-back Division III national champion. Go Jumbos!
[applause]