Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Article Status
Published
Authors/contributors
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D.
Title
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Abstract
We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.
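The abstract describes the core mechanism: a few exemplars whose answers spell out intermediate reasoning steps before the final answer, prepended to a new question. A minimal sketch (the exemplar below paraphrases the tennis-ball example popularized by the paper; the helper name and formatting are illustrative, not the paper's actual prompt set):

```python
# Sketch of few-shot chain-of-thought prompting: each exemplar's answer
# walks through intermediate reasoning steps before stating the result.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(exemplars, question):
    """Concatenate chain-of-thought exemplars with a new question,
    leaving the trailing 'A:' for the model to complete."""
    return "\n".join(exemplars) + f"\nQ: {question}\nA:"

prompt = build_cot_prompt(
    [COT_EXEMPLAR],
    "A pack holds 4 pens. How many pens are in 3 packs?",
)
print(prompt)
```

The model is then expected to imitate the exemplar's step-by-step style when completing the final "A:", rather than emitting only a bare answer.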
Repository
arXiv
Archive ID
arXiv:2201.11903
Date
2023-01-10
Accessed
27/10/2023, 17:32
Extra
arXiv:2201.11903 [cs]
Citation
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2023). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (arXiv:2201.11903). arXiv. http://arxiv.org/abs/2201.11903