Along with the explosive growth of Artificial Intelligence (AI), there’s been an explosion of talk about it. Not just talk, but hope, hype, predictions, expectations and fears.
Here in the School of Communication, we feel the impact and import of AI every day, both in the classroom and through our faculty research. Along with the wider Ohio State community, we’re working to meet the substantial intellectual, pedagogical and ethical challenges these technologies pose.
In the classroom, some uses of AI are entirely practical and mundane. For example, students can now read what their professors are saying in real time, thanks to AI-enabled live captioning of lectures.
But educating students about how to use these powerful tools goes well beyond live captioning. For instance, AI can be put to work as a tireless and reliable tutor. In a class I’m teaching, I encourage students to use the technology to help them work with unfamiliar concepts. They can ask AI to define a term, offer an example, and generate practice questions to test their understanding.
Journalism professor Nicole Kraft encourages her students to use AI to help brainstorm story ideas. Many of the ideas AI generates are not particularly strong, but identifying the flaws in AI-generated pitches and trying to make them better can be a great learning opportunity. Students need to understand the limits of AI and become alert to the kinds of mistakes that the technology can make, from fabricating examples to plagiarizing the writing of others. This will help them become savvy users of AI in their professional lives.
Of course, for all its strengths, AI is also creating new challenges in the classroom. Perhaps most notably, it raises complicated questions about which uses are consistent with the learning objectives of a university and which constitute misconduct. This might seem like a simple distinction; it is anything but.
Consider:
The spelling and grammar checkers built into most major word processors can be used without concern about academic misconduct. But what if your word processor offers to rewrite a sentence so your ideas come through more clearly? Should that be allowed? And if so, must you disclose what the word processor did? What if a student who’s still learning English drafts an assignment in another language and then uses AI to translate it? Should that be allowed? Perhaps the answer depends on the subject matter. It’s easy to see why this would be a problem in a class focused on writing skills. But what if the class is focused on technical skills, like programming or accountancy?
It’s critical, of course, that we help students understand why letting AI do their work for them is a mistake. I favor a simple way of explaining this: If everything you do can be done as well or better by AI, why would anyone hire you? Our students need to make sure they have skills that make them valuable beyond AI—and that includes becoming critical users of AI.
One opportunity to do just that came last month from NBC Universal Academy, which invited our students to participate in The AI Generation: A Student Innovation Summit. This noteworthy conference included sessions on the future of work, tips for using generative AI effectively, and AI’s impact on journalism.
Meanwhile, our world-class faculty researchers are probing the social implications of these new technologies.
For example, Nic Matthews is investigating whether people’s feelings toward AI resemble their feelings about other machines or are closer to their feelings about living beings. Some research has suggested that when machines behave in human-like ways, people unconsciously begin treating them as if they were human. In one study, Dr. Matthews asks whether people feel pity for an AI-based system when a user treats it abusively. Preliminary findings suggest that most people don’t feel much pity for these machines. They do, however, think that cruelty to AI can be bad for the moral character of the person doing it.
Bingjie Liu’s work examines how people use ChatGPT and similar tools in their everyday interpersonal communications. Much has been written about the widespread use of generative AI to quickly and easily craft clear, professional email messages. But what does this do to the human relationships that are built and sustained through such communications?
And what about you? I’d love to hear how you are using AI in your lives and how it affects your communications, both personal and professional.
Kelly Garrett
Director, School of Communication