SerenQA

Assessing large language models for serendipitous discovery in knowledge graphs, with a focus on drug repurposing.

Abstract

Large Language Models (LLMs) have greatly advanced knowledge graph question answering (KGQA), yet existing systems are typically optimized for returning highly relevant but predictable answers. A missing yet desirable capability is to leverage LLMs to suggest surprising and novel ("serendipitous") answers. In this paper, we formally define the serendipity-aware KGQA task and propose the SerenQA framework to evaluate LLMs' ability to uncover unexpected insights in scientific KGQA tasks. SerenQA includes a rigorous serendipity metric based on relevance, novelty, and surprise, along with an expert-annotated benchmark derived from the Clinical Knowledge Graph and focused on drug repurposing. It also features a structured evaluation pipeline encompassing three subtasks: knowledge retrieval, subgraph reasoning, and serendipity exploration. Our experiments reveal that while state-of-the-art LLMs perform well on retrieval, they still struggle to identify genuinely surprising and valuable discoveries, underscoring significant room for future improvement. Our curated resources and extended version are released at: https://cwru-db-group.github.io/serenQA.
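To make the metric's structure concrete, the sketch below shows one way a serendipity score over relevance, novelty, and surprise could be assembled. This is an illustrative assumption only: the weights, the simple linear aggregation, and the names AnswerScores and serendipity_score are hypothetical and do not reproduce the paper's actual formulation.

# Illustrative sketch (not the paper's metric): assumes each answer has
# normalized relevance, novelty, and surprise scores in [0, 1] and combines
# them with a hypothetical weighted sum.

from dataclasses import dataclass

@dataclass
class AnswerScores:
    relevance: float  # how well the answer addresses the query
    novelty: float    # how new the answer is relative to known results
    surprise: float   # how unexpected the answer is given the KG context

def serendipity_score(s: AnswerScores,
                      w_rel: float = 0.4,
                      w_nov: float = 0.3,
                      w_sur: float = 0.3) -> float:
    """Hypothetical weighted aggregation of the three components."""
    return w_rel * s.relevance + w_nov * s.novelty + w_sur * s.surprise

if __name__ == "__main__":
    # A relevant but predictable answer vs. a relevant, novel, surprising one.
    predictable = AnswerScores(relevance=0.9, novelty=0.1, surprise=0.1)
    serendipitous = AnswerScores(relevance=0.8, novelty=0.7, surprise=0.8)
    print(serendipity_score(predictable))    # lower score
    print(serendipity_score(serendipitous))  # higher score

Under this assumed scheme, an answer that is merely relevant scores lower than one that is also novel and surprising, which mirrors the distinction the benchmark is designed to test.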

Citation

@inproceedings{wang2026assessing,
  title={Assessing LLMs for Serendipity Discovery in Knowledge Graphs: A Case for Drug Repurposing},
  author={Wang, Mengying and Ma, Chenhui and Jiao, Ao and Liang, Tuo and Lu, Pengjun and Hegde, Shrinidhi and Yin, Yu and Gurkan-Cavusoglu, Evren and Wu, Yinghui},
  booktitle={The Fortieth AAAI Conference on Artificial Intelligence},
  year={2026}
}
Figure: Overview of the SerenQA framework and its serendipity-aware KGQA pipeline.