Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents
Article Status
Published
Authors/contributors
- Chae, Hyungjoo (Author)
- Song, Yongho (Author)
- Ong, Kai Tzu-iunn (Author)
- Kwon, Taeyoon (Author)
- Kim, Minjin (Author)
- Yu, Youngjae (Author)
- Lee, Dongha (Author)
- Kang, Dongyeop (Author)
- Yeo, Jinyoung (Author)
Title
Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents
Abstract
Human-like chatbots require commonsense reasoning to comprehend and respond to the implicit information present in conversations. Achieving such coherence and informativeness in responses, however, is a non-trivial task. Even for large language models (LLMs), identifying and aggregating key evidence is a substantial challenge, because such evidence is scattered across multiple turns of a conversation and cannot be captured in a single hop; it must instead be integrated over multiple hops. Hence, our focus is to facilitate such multi-hop reasoning over a dialogue context, namely dialogue chain-of-thought (CoT) reasoning. To this end, we propose a knowledge distillation framework that leverages LLMs as unreliable teachers and selectively distills consistent and helpful rationales via alignment filters. We further present DOCTOR, a DialOgue Chain-of-ThOught Reasoner that provides reliable CoT rationales for response generation. We conduct extensive experiments to show that enhancing dialogue agents with high-quality rationales from DOCTOR significantly improves the quality of their responses.
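The selective-distillation idea in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: a teacher LLM proposes several candidate CoT rationales per dialogue, and alignment filters keep only those that are both consistent with the dialogue context and helpful for producing the gold response. The two scoring functions and their thresholds below are hypothetical stand-ins for the paper's learned filters.

```python
# Hypothetical alignment filters for selective rationale distillation.
# consistency_score / helpfulness_score are toy word-overlap proxies,
# not the paper's actual filter models.

def consistency_score(rationale: str, dialogue: list[str]) -> float:
    """Toy proxy: fraction of rationale words grounded in the dialogue."""
    context = set(" ".join(dialogue).lower().split())
    tokens = rationale.lower().split()
    return sum(t in context for t in tokens) / max(len(tokens), 1)

def helpfulness_score(rationale: str, gold_response: str) -> float:
    """Toy proxy: fraction of rationale words shared with the gold response."""
    response = set(gold_response.lower().split())
    tokens = rationale.lower().split()
    return sum(t in response for t in tokens) / max(len(tokens), 1)

def filter_rationales(candidates: list[str], dialogue: list[str],
                      gold_response: str,
                      c_min: float = 0.5, h_min: float = 0.2) -> list[str]:
    """Keep only rationales passing both filters; the survivors form the
    distillation set used to train the student reasoner."""
    return [r for r in candidates
            if consistency_score(r, dialogue) >= c_min
            and helpfulness_score(r, gold_response) >= h_min]

dialogue = ["i stayed up all night studying for the exam",
            "you must be exhausted"]
gold = "you should rest before the exam"
candidates = [
    "the speaker is exhausted from studying all night for the exam and needs rest",
    "aliens probably abducted the speaker last weekend",
]
kept = filter_rationales(candidates, dialogue, gold)
```

Here the grounded rationale survives both filters while the inconsistent one is discarded; in the actual framework, the filtering criteria are far richer than word overlap, but the selection-before-distillation structure is the same.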
Repository
arXiv
Archive ID
arXiv:2310.09343
Date
2023-10-22
Accessed
08/11/2024, 21:12
Extra
arXiv:2310.09343 [cs]
Citation Key: chae2023
Citation
Chae, H., Song, Y., Ong, K. T., Kwon, T., Kim, M., Yu, Y., Lee, D., Kang, D., & Yeo, J. (2023). Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents (arXiv:2310.09343). arXiv. http://arxiv.org/abs/2310.09343