In authors or contributors

1 resource

  • Jooyoung Lee, Toshini Agrawal, Adaku Uch... | Jun 24th, 2024 | preprint

    Recent literature has highlighted potential risks to academic integrity associated with large language models (LLMs), as they can memorize parts of training instances and reproduce them in the generated texts without proper attribution. In addition, given their capabilities in generating high-quality texts, plagiarists can exploit LLMs to generate realistic paraphrases or summaries indistinguishable from original work. In response to possible malicious use of LLMs in plagiarism, we introduce...
