Paul L. Caron

Saturday, February 25, 2023

It’s Not Just Our Students: ChatGPT Is Coming For Faculty Scholarship

Chronicle of Higher Ed Op-Ed: It’s Not Just Our Students — ChatGPT Is Coming for Faculty Writing, by Ben Chrisinger (Oxford; Google Scholar):

Almost immediately after OpenAI released ChatGPT in late November, people began wondering what it would mean for teaching and learning. A widely read piece in The Atlantic that provided one of the first looks at the tool’s ability to put together high-quality writing concluded that it would kill the student essay. Since then, academics everywhere have done their own experimenting with the technology — and weighed in on what to do about it. Some have banned students from using it, while others have offered tips on how to create essay assignments that are AI-proof. Many have suggested that we embrace the technology and incorporate it into the classroom.

While we’ve been busy worrying about what ChatGPT could mean for students, we haven’t devoted nearly as much attention to what it could mean for academics themselves. And it could mean a lot. Critically, academics disagree on exactly how AI can and should be used. And with the rapidly improving technology at our doorstep, we have little time to deliberate.

Already some researchers are using the technology. Even within the small sample of my work colleagues, I’ve learned that it is being used for daily tasks such as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture.

Even this limited use is complicated. Different audiences — journal editors, grant panels, conference attendees, students — will have different expectations about originality for particular tasks. For example, while peer reviewers might accept translated statistical code, students might balk at AI-generated lecture slides.

But it’s in the realm of academic writing and research where ethical debates about transparency and fairness really come into play.

Recently, several leading academic journals and publishers updated their submission guidelines to explicitly ban researchers from listing ChatGPT as a co-author, or using text copied from a ChatGPT response. Some professors have criticized these bans as shortsightedly resistant to an inevitable technological change. We shouldn’t be surprised at the disagreement. This is a new ethical space that only roughly follows the outlines of our existing agreements on plagiarism, authorship criteria, and fraud. Precisely where to draw red lines is not clear. ...

Our academic systems rely on trust. As a peer reviewer for grants and journal articles, I’ve never used a plagiarism checker or directly questioned the accuracy of an author-contribution statement. Compare this to my students’ essays, which are automatically passed through plagiarism-checking software upon submission. Academics enjoy an environment where we might challenge claims and critique the novelty of ideas, but we rarely question the originality of each other’s written work.

For this system of trust to hold in academe, we must firmly and rapidly commit to transparency around the use of AI. Only then can we hope to have informed and reasoned discussions about what norms and rules should govern academic writing in the future.

Prior TaxProf Blog coverage:

Legal Ed News, Legal Ed Scholarship, Legal Ed Tech, Legal Education, Scholarship