SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning

Article Status
Published
Authors/contributors
Miao, N., Teh, Y. W., & Rainforth, T.
Title
SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning
Abstract
The recent progress in large language models (LLMs), especially the invention of chain-of-thought prompting, has made it possible to automatically answer questions by stepwise reasoning. However, when faced with more complicated problems that require non-linear thinking, even the strongest LLMs make mistakes. To address this, we explore whether LLMs can recognize errors in their own step-by-step reasoning without resorting to external resources. To this end, we propose SelfCheck, a general-purpose zero-shot verification scheme for recognizing such errors. We then use the results of these checks to improve question-answering performance by conducting weighted voting on multiple solutions to the question. We test SelfCheck on three datasets (GSM8K, MathQA, and MATH) and find that it successfully recognizes errors and, in turn, increases final answer accuracies.
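As a rough illustration of the weighted-voting step mentioned in the abstract: each sampled solution receives a confidence score from the checker, and final answers are aggregated by summed confidence rather than by a plain majority. The sketch below is a minimal, assumption-laden rendering, not the paper's implementation; the `weighted_vote` helper and the [0, 1] confidence scale are illustrative.

```python
from collections import defaultdict

def weighted_vote(candidates):
    """Pick a final answer by confidence-weighted voting.

    `candidates` is a list of (answer, confidence) pairs, where each
    confidence is assumed to be a per-solution score in [0, 1] produced
    by a step-by-step checker such as SelfCheck. The answer whose
    candidates accumulate the highest total confidence wins.
    """
    totals = defaultdict(float)
    for answer, confidence in candidates:
        totals[answer] += confidence
    return max(totals, key=totals.get)

# Example: three sampled solutions to the same question. Two agree on
# "42" but with low checker confidence; one says "41" with high
# confidence, so weighted voting can overturn a simple majority vote.
print(weighted_vote([("42", 0.2), ("42", 0.3), ("41", 0.9)]))  # -> "41"
```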
Repository
arXiv
Archive ID
arXiv:2308.00436
Date
2023-10-05
Accessed
27/10/2023, 17:35
Short Title
SelfCheck
Library Catalogue
Extra
arXiv:2308.00436 [cs]
Citation
Miao, N., Teh, Y. W., & Rainforth, T. (2023). SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning (arXiv:2308.00436). arXiv. http://arxiv.org/abs/2308.00436
Technical methods