UNCO: Towards Unifying Neural Combinatorial Optimization through Large Language Model

Research output: Working paper › Preprint › Professional


Abstract

Recently, applying neural networks to combinatorial optimization problems (COPs) has attracted considerable research attention. Prevailing methods typically train deep models independently on specific problems, and thus lack a unified framework for tackling various COPs concurrently. To this end, we propose a unified neural combinatorial optimization (UNCO) framework that solves different types of COPs with a single model. Specifically, we use natural language to formulate text-attributed instances of different COPs and encode them into the same embedding space with a large language model (LLM). The resulting embeddings are further processed by an encoder-decoder model without any problem-specific modules, enabling a unified solution-construction process. We adopt the conflict gradients erasing reinforcement learning (CGERL) algorithm to train the UNCO model, which delivers better performance across different COPs than vanilla multi-objective learning. Experiments show that the UNCO model can solve multiple COPs after a single training session, achieving performance comparable to several traditional and learning-based baselines. Rather than pursuing the best performance on each individual COP, we explore the synergy between tasks and LLM-based few-shot generalization to inspire future work.
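The abstract describes CGERL only at a high level. As a rough, non-authoritative sketch of the general idea, the snippet below trains one shared policy on several COP tasks and erases conflicting gradient components before each update, in the style of gradient surgery (PCGrad); embed_instance, task_loss, and the MLP policy are hypothetical placeholders for the paper's LLM encoder, reinforcement-learning objective, and encoder-decoder model, and the projection rule itself is an assumption, not the paper's algorithm.

    # Hedged sketch only: every name below is a placeholder, and the
    # conflict-erasing rule is assumed (PCGrad-style), not taken from the paper.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in for the UNCO encoder-decoder; the paper's model is not an MLP.
    policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
    params = list(policy.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-4)

    def embed_instance(batch_size: int) -> torch.Tensor:
        # Placeholder for LLM embeddings of text-attributed COP instances.
        return torch.randn(batch_size, 64)

    def task_loss(task_id: int) -> torch.Tensor:
        # Placeholder REINFORCE-style surrogate loss for one COP.
        scores = policy(embed_instance(32)).squeeze(-1)
        fake_advantage = torch.randn_like(scores)  # stands in for cost - baseline
        return (fake_advantage * scores).mean()

    def flat_grad(loss: torch.Tensor) -> torch.Tensor:
        # Flatten this task's gradient over all shared parameters.
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    def erase_conflicts(task_grads: list[torch.Tensor]) -> torch.Tensor:
        # If g_i conflicts with g_j (negative dot product), remove g_i's
        # component along g_j before averaging (gradient-surgery style).
        merged = []
        for i, g in enumerate(task_grads):
            g = g.clone()
            for j, other in enumerate(task_grads):
                if i != j and torch.dot(g, other) < 0:
                    g = g - torch.dot(g, other) / (other.norm() ** 2 + 1e-12) * other
            merged.append(g)
        return torch.stack(merged).mean(dim=0)

    NUM_TASKS = 3  # e.g. three COPs sharing one model (assumed; not in the abstract)
    for step in range(5):
        grads = [flat_grad(task_loss(t)) for t in range(NUM_TASKS)]
        update = erase_conflicts(grads)
        offset = 0  # scatter the merged flat gradient back into .grad fields
        for p in params:
            p.grad = update[offset:offset + p.numel()].view_as(p)
            offset += p.numel()
        optimizer.step()
        optimizer.zero_grad()

The intent is only to show where conflict erasing sits in the loop: per-task gradients are computed first, reconciled against one another, and only then applied as a single shared update.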
Original language: English
Publisher: arXiv.org
Number of pages: 8
Volume: 2408.12214
Publication status: Published - 22 Aug 2024

Keywords

  • cs.AI
