Paper Reading #9: T2T

Last updated on October 29, 2025

This post is a close reading of the paper "T2T: From Distribution Learning in Training to Gradient Search in Testing for Combinatorial Optimization" by Li et al., 2023 (link: OpenReview).

Paper Overview

This paper was published at NeurIPS 2023 and is a work from our lab, with Prof. Junchi Yan as the corresponding author. Starting from the idea that the true goal of combinatorial optimization is to find the optimal solution for each individual instance, the paper proposes a new framework called T2T (Training to Testing). It aims to design an efficient, gradient-based neural search paradigm that requires no model weight updates at test time, fully exploiting a pretrained generative model to directly optimize the objective function.

Abstract

Extensive experiments have gradually revealed the potential performance bottleneck of modeling Combinatorial Optimization (CO) solving as neural solution prediction tasks. The neural networks, in their pursuit of minimizing the average objective score across the distribution of historical problem instances, diverge from the core target of CO of seeking optimal solutions for every test instance. This calls for an effective search on each problem instance, while the model should serve to provide supporting knowledge that benefits the search. To this end, we propose T2T (Training to Testing) framework that first leverages the generative modeling to estimate the high-quality solution distribution for each instance during training, and then conducts a gradient-based search within the solution space during testing. The proposed neural search paradigm consistently leverages generative modeling, specifically diffusion, for graduated solution improvement. It disrupts the local structure of the given solution by introducing noise and reconstructs a lower-cost solution guided by the optimization objective. Experimental results on Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS) show the significant superiority of T2T, demonstrating an average performance gain of 49.15% for TSP solving and 17.27% for MIS solving compared to the previous state-of-the-art.
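To make the "disrupt and reconstruct" loop described in the abstract concrete, below is a minimal Python sketch of what such a test-time gradient search could look like. This is not the paper's implementation: the `denoiser`, `objective`, `forward_diffuse`, and `reverse_step` names, the Gaussian DDPM-style noise schedule, and all hyperparameters are assumptions for illustration only.

```python
import torch

# Illustrative DDPM-style schedule (an assumption, not the paper's setup).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def forward_diffuse(x0, t):
    # q(x_t | x_0): jump t noise steps forward in closed form.
    ab = alpha_bars[t - 1]
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * torch.randn_like(x0)

def reverse_step(x0_pred, x_t, t):
    # Mean of q(x_{t-1} | x_t, x_0), with the predicted x_0 plugged in.
    ab_t = alpha_bars[t - 1]
    ab_prev = alpha_bars[t - 2] if t > 1 else torch.tensor(1.0)
    beta_t = betas[t - 1]
    coef_x0 = ab_prev.sqrt() * beta_t / (1.0 - ab_t)
    coef_xt = alphas[t - 1].sqrt() * (1.0 - ab_prev) / (1.0 - ab_t)
    return coef_x0 * x0_pred + coef_xt * x_t

def t2t_style_search(x, denoiser, objective,
                     num_rounds=3, t_noise=50, guidance_scale=1.0):
    """Hypothetical test-time search: disrupt a solution with noise, then
    reconstruct it while steering each denoising step by the objective.

    x:         relaxed solution (e.g. a TSP edge heatmap), values in [0, 1]
    denoiser:  frozen pretrained model mapping (x_t, t) -> predicted x_0
    objective: differentiable surrogate cost of a relaxed solution
    """
    for _ in range(num_rounds):
        x_t = forward_diffuse(x, t_noise)          # 1) disrupt local structure
        for t in range(t_noise, 0, -1):            # 2) guided reconstruction
            x_t = x_t.detach().requires_grad_(True)
            x0_pred = denoiser(x_t, t)
            grad = torch.autograd.grad(objective(x0_pred), x_t)[0]
            # One reverse step, nudged against the objective's gradient.
            x_t = reverse_step(x0_pred, x_t, t) - guidance_scale * grad
        candidate = x_t.detach().clamp(0.0, 1.0)
        with torch.no_grad():                      # 3) keep only improvements
            if objective(candidate) < objective(x):
                x = candidate
    return x
```

The point this sketch illustrates is the paradigm stated in the abstract: the search happens entirely in solution space while the pretrained model's weights stay frozen, with noise injection breaking the local structure of the current solution and objective-guided denoising rebuilding a lower-cost one.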

