
1 resource

  • Masahiro Kaneko, Danushka Bollegala, Nao... | Jan 28th, 2024 | preprint

    There exist both scalable tasks, like reading comprehension and fact-checking, where model performance improves with model size, and unscalable tasks, like arithmetic reasoning and symbolic reasoning, where model performance does not necessarily improve with model size. Large language models (LLMs) equipped with Chain-of-Thought (CoT) prompting are able to make accurate incremental predictions even on unscalable tasks. Unfortunately, despite their exceptional reasoning abilities, LLMs tend...
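
    As background for the abstract's mention of Chain-of-Thought (CoT) prompting: CoT asks the model to produce intermediate reasoning steps before its final answer, rather than the answer alone. The sketch below is illustrative only and is not taken from the paper; `generate` is a hypothetical stand-in for any LLM completion API, and the prompt wording is an assumption.

        def generate(prompt: str) -> str:
            """Placeholder: call an LLM API of your choice and return its completion."""
            raise NotImplementedError

        QUESTION = ("A farmer has 15 apples, gives away 6, "
                    "then buys 4 more. How many apples are left?")

        # Direct prompting: ask for the answer outright.
        direct_prompt = f"Q: {QUESTION}\nA:"

        # CoT prompting: elicit intermediate steps before the final answer;
        # the abstract credits this with accurate incremental predictions
        # on "unscalable" tasks such as arithmetic reasoning.
        cot_prompt = f"Q: {QUESTION}\nA: Let's think step by step."

        # Expected reasoning trace: 15 - 6 = 9; 9 + 4 = 13 -> final answer "13".
        # answer = generate(cot_prompt)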
