Enhancing long-form question answering via reflection with question decomposition

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Long-Form Question Answering (LFQA) requires multi-paragraph responses that explain, contextualize and justify an answer rather than returning a single fact. Large proprietary language models can meet this bar, but privacy, cost and hardware limits often force practitioners to rely on much smaller, locally hosted models — whose outputs are typically shallow or incomplete. We introduce Decomposition-Reflection, a training-free prompting framework that (i) decomposes a user question into complementary sub-questions, (ii) answers each one, and (iii) runs a lightweight self-reflection loop after every stage to enhance the comprehensiveness, entailment and factuality of the results before synthesizing the final response. Across three LFQA benchmarks, the proposed approach improves ROUGE and LLM-based factuality scores over strong chain-of-thought and self-refinement baselines. An ablation study confirms that removing either decomposition or reflection sharply degrades coverage and entailment, underscoring the importance of both components.
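The decompose–answer–reflect pipeline from the abstract can be sketched as follows. This is a minimal illustration only: the prompt wordings, the single-round reflection, and the `call_llm` stub are assumptions, not the paper's actual templates or model interface.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for any locally hosted language model.
    In practice this would call the model's generation API."""
    return f"[model output for: {prompt[:40]}]"

def reflect(text: str, goal: str, rounds: int = 1) -> str:
    """Lightweight self-reflection loop (assumed form): the model
    critiques its own draft, then revises it using that critique."""
    for _ in range(rounds):
        critique = call_llm(
            f"Critique this {goal} for comprehensiveness, entailment, "
            f"and factuality:\n{text}"
        )
        text = call_llm(
            f"Revise the {goal} to address the critique.\n"
            f"Draft:\n{text}\nCritique:\n{critique}"
        )
    return text

def decomposition_reflection(question: str) -> str:
    # (i) decompose into complementary sub-questions
    #     (newline-separated output is an assumption)
    subs = call_llm(f"Decompose into complementary sub-questions:\n{question}")
    subs = reflect(subs, "decomposition")
    # (ii) answer each sub-question, (iii) reflecting after every stage
    answers = [
        reflect(call_llm(f"Answer concisely and factually: {s}"), "answer")
        for s in subs.splitlines() if s.strip()
    ]
    # synthesize the final long-form response, with one last reflection pass
    draft = call_llm("Synthesize a long-form answer from:\n" + "\n".join(answers))
    return reflect(draft, "final answer")
```

Because the framework is training-free, swapping in a real model only requires replacing `call_llm`; the control flow above is unchanged.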

Original language: English
Article number: 104274
Number of pages: 17
Journal: Information Processing and Management
Volume: 62
Issue number: 6
DOIs
Publication status: Published - Nov 2025

Bibliographical note

Publisher Copyright:
© 2025 Elsevier Ltd

Keywords

  • Decomposition
  • Language models
  • Long-form question answering
  • Prompt
  • Reflection

