Nonparametric Teaching of Attention Learners ICLR 2026
- Chen Zhang* HKU
- Jianghui Wang* KAUST
- Bingyang Cheng HKU
- Zhongtao Chen HKU
- Wendong Xu HKU
- Cong Wang Ind. Res.
- Marco Canini KAUST
- Francesco Orabona KAUST
- Yik-Chung Wu HKU
- Ngai Wong HKU
Abstract
Attention learners, neural networks built on the attention mechanism, e.g., transformers, excel at learning the implicit relationships that relate sequences to their corresponding properties, e.g., mapping a given sequence of tokens to the probability of the next token. However, the learning process tends to be costly. To address this, we present a novel paradigm named Attention Neural Teaching (AtteNT) that reinterprets the learning process through a nonparametric teaching perspective. Specifically, the latter provides a theoretical framework for teaching mappings that are implicitly defined (i.e., nonparametric) via example selection. Such an implicit mapping is embodied through a dense set of sequence-property pairs, with the AtteNT teacher selecting a subset to accelerate convergence in attention learner training. By analytically investigating the role of attention in parameter-based gradient descent during training, and recasting the evolution of attention learners, shaped by parameter updates, as functional gradient descent in nonparametric teaching, we show for the first time that teaching attention learners is consistent with teaching importance-adaptive nonparametric learners. These findings readily enable AtteNT to enhance the learning efficiency of attention learners. Specifically, we observe training time reductions of 13.01% for LLMs and 20.58% for ViTs, spanning both fine-tuning and training-from-scratch regimes. Crucially, these gains are achieved without compromising accuracy; in fact, performance is consistently preserved and often enhanced across a diverse set of downstream tasks.
Implementations
We provide a plug-and-play package that can be applied broadly to improve the training efficiency of attention learners.
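The core idea, as described in the abstract, is a teacher that selects a subset of training examples to accelerate convergence. The sketch below illustrates one simple instantiation of such example selection, using per-example loss as a proxy for importance; the function name and the loss-based scoring rule are illustrative assumptions, not the authors' released package or their exact selection criterion.

```python
def select_teaching_subset(examples, loss_fn, keep_ratio=0.5):
    """Greedy example selection (hypothetical sketch).

    Keeps the examples with the largest current loss, a common proxy
    for the magnitude of the functional gradient contributed by each
    example. This is NOT the AtteNT algorithm itself, only a minimal
    illustration of teaching-by-example-selection.
    """
    scored = sorted(examples, key=loss_fn, reverse=True)
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]


# Toy demonstration: with a model that currently predicts 0.0,
# targets far from 0.0 incur larger squared loss and are kept.
examples = [0.1, 2.0, -3.0, 0.5]
squared_loss = lambda y: (y - 0.0) ** 2
subset = select_teaching_subset(examples, squared_loss, keep_ratio=0.5)
# subset -> [-3.0, 2.0]
```

In a real training loop, such a selector would run per batch (or per epoch) so that the learner spends its updates on the currently most informative examples.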
Poster
Related links
Related works (useful for a deeper understanding of AtteNT):
[ICML 2025 spotlight] Nonparametric Teaching for Graph Property Learners,
[ICML 2024] Nonparametric Teaching of Implicit Neural Representations,
[NeurIPS 2023] Nonparametric Teaching for Multiple Learners,
[ICML 2023] Nonparametric Iterative Machine Teaching.
Citation
Acknowledgments
We thank all anonymous reviewers for their constructive feedback, which helped improve our paper.
This work was supported in part by the Theme-based Research Scheme (TRS) project T45-701/22-R of the Research Grants Council of Hong Kong, and in part by the AVNET-HKU Emerging Microelectronics and Ubiquitous Systems (EMUS) Lab.
The website template was borrowed from Michaël Gharbi.
Send feedback and questions to Chen Zhang.