Use of LLMs
Large Language Models (LLMs) can be useful for structuring ideas, drafting text, or refining language. At the same time, they raise important questions of academic integrity, transparency, and accountability.
We take a balanced view: LLMs can improve productivity and help democratize science, for example by reducing barriers for researchers whose first language is not English. They can also help researchers cope with growing demands: major conferences now receive 20,000+ submissions, and review workloads have risen accordingly. Still, all scientific content must remain human-authored, and in our lab, LLMs are limited to editing and support roles.
Below are the concrete rules I apply to myself and encourage my students and employees to follow:
1. Students & PhD Students
- Permitted: Brainstorming, outlining, and language polishing.
- Not permitted: Submitting AI-generated text, code, or analysis as original work. All scientific contributions must be your own.
- Compliance: Students must follow the regulations of their program or graduate school, share these with the PI/supervisor, and ensure they are respected.
2. Publications
- Permitted: Language refinement (grammar, style, conciseness).
- Accountability: Authors remain fully responsible for accuracy, originality, and ethical standards. Only verified content may be published.
- Compliance: LLM use must follow (a) graduate school rules if the work is part of a thesis, and (b) journal policies. Where required, LLM use should be declared.
3. Ad-hoc Reviews
- Permitted: Language editing of reviews.
- Not permitted: Sharing unpublished manuscripts with external tools or using AI to generate judgments.
- Compliance: Review practices must follow the rules of the respective journal or conference, as (hopefully) provided during the review invitation.
This policy may be subject to change as the field and the respective regulations evolve.