GenAI Policy
Use of Generative Artificial Intelligence in A and B Exams
Purpose of the Policy
The field of Communication requires creative insight, rigorous research, and critical thinking. An excellent communication scholar should also be an excellent communicator. Generative AI (GenAI) tools can support idea generation, analysis, and writing, but they also raise concerns about integrity, authorship, validity, and transparency. Although GenAI can assist in the research process, it cannot replace a student’s ability to express complex ideas clearly and sensitively. This policy is intended to promote fairness, consistency, and clarity about acceptable uses of GenAI for the A and B exams, and to help students navigate new technologies while upholding the rigor and originality expected for doctoral research in Communication.
All work for the A and B exams must be the student’s own. Students may not use GenAI platforms such as ChatGPT to draft or write their A exam papers or dissertation.
The use of GenAI tools that suggest improvements to students’ language and grammar must be disclosed, including the prompts given, the suggestions offered, and the revisions made to the writing.
Students must discuss with their committee their use of GenAI in any formative process (such as brainstorming a literature review), reporting phase (such as producing data tables), or writing phase of their A or B exam. All students should write a reflexive commentary, in an appendix to their A exam papers and the dissertation, on their use of GenAI, including the prompts used, a summary of the suggestions received, and observations about how these tools shaped their work.
Students should also consider Cornell’s IRB rules for using GenAI in research with human participants. For example, those rules restrict the use of GenAI for transcription to the Live Meeting Transcription option within Cornell-licensed Zoom. The IRB is still developing its policies, so check the latest requirements here.
Because both A and B exams involve an oral defense, students must be able to verbally verify and explicate sources, claims, and arguments (review the oral exam expectations here: A exam and B exam).
Best Practices
Students must be attentive to the principles of confidentiality in all their scholarly enterprises, including in the preparation of their exams (see Cornell's Generative AI in Academic Research: Perspectives and Cultural Norms). They must understand the potential risks associated with inputting sensitive, private, confidential, or proprietary data into these tools, and that doing so may violate IRB, legal, or contractual requirements, including study participants’ expectations of privacy.
All researchers should be cautious about entering intellectual property into GenAI platforms. Unless a version with strict data privacy protections is used (such as an Enterprise, Team, or API offering), original research may be retained and used to further train the model. To safeguard unpublished work, avoid sharing original research with GenAI.
All GenAI output must be scrupulously verified against original sources. LLMs can produce “hallucinations,” including incorrect or fabricated information, fake references, and distorted facts. All researchers must treat GenAI query results as unverified and avoid relying on them for factual accuracy or citation. Using unverified material risks compromising research quality, academic integrity, and scholarly credibility.