Use of LLMs
Large Language Models (LLMs) can be useful for structuring ideas or refining language. At the same time, they raise important questions of academic integrity, transparency, and accountability.
We take a balanced view: LLMs can be helpful for editing, sketching ideas, coding, and more; they also reduce barriers for researchers whose first language is not English. In addition, they help manage the growing demands on researchers, as major conferences now receive 20,000+ submissions and review workloads are correspondingly high. In our lab, LLMs are limited to editing and support roles.
Below are our concrete guidelines:
1. Students & PhD Candidates
- Permitted: Brainstorming, outlining, and language polishing.
- Compliance: Students and PhD candidates must follow the regulations of their program or graduate school, share them with the PI/supervisor, and ensure they are respected.
2. Publications
- Permitted: Language refinement (grammar, style, conciseness).
- Accountability: Authors remain fully responsible for accuracy, originality, and ethical standards. Only verified content may be published.
- Compliance: LLM use must follow (a) the policies imposed by a journal or conference and (b) the graduate school rules, if the work is part of a thesis. Where required, the use of LLMs should be declared.
3. Ad-hoc Reviews
- Permitted: Language editing of reviews.
- Not permitted: Sharing unpublished manuscripts with external tools or using AI to generate judgments.
- Compliance: Review practices must follow the rules of the respective journal or conference, as (hopefully) provided with the review invitation.
These guidelines may be subject to change as the field and the respective regulations continue to evolve.
