Nonparametric Teaching for Graph Property Learners
ICML 2025 Spotlight

* denotes equal contribution

Abstract

Inferring properties of graph-structured data, e.g., the solubility of molecules, essentially involves learning the implicit mapping from graphs to their properties. This learning process is often costly for graph property learners like Graph Convolutional Networks (GCNs). To address this, we propose a paradigm called Graph Neural Teaching (GraNT) that reinterprets the learning process from a novel nonparametric-teaching perspective. Specifically, nonparametric teaching offers a theoretical framework for teaching implicitly defined (i.e., nonparametric) mappings via example selection. Such an implicit mapping is realized by a dense set of graph-property pairs, from which the GraNT teacher selects a subset to promote faster convergence in GCN training. By analytically examining the impact of graph structure on parameter-based gradient descent during training, and recasting the evolution of GCNs—shaped by parameter updates—through functional gradient descent in nonparametric teaching, we show for the first time that teaching graph property learners (i.e., GCNs) is consistent with teaching structure-aware nonparametric learners. These findings enable GraNT to improve the learning efficiency of graph property learners, yielding significant reductions in training time for graph-level regression (-36.62%), graph-level classification (-38.19%), node-level regression (-30.97%), and node-level classification (-47.30%), all while maintaining generalization performance.

Implementations

We provide a plug-and-play package that generally improves the learning efficiency of graph property learners.
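To give a feel for teaching via example selection, below is a minimal, self-contained toy sketch (not the GraNT package API): a teacher repeatedly picks the examples on which the learner currently errs most, and the learner takes a gradient step only on that subset. The scalar "graphs", the linear learner, and the loss-based selection rule are all simplifying assumptions for illustration.

```python
# Hypothetical sketch of teaching-by-example-selection (not the GraNT API).
# Each round: the teacher ranks the pool by the learner's current squared
# error and hands over the top-k examples; the learner updates on those only.

# Toy "graphs": each graph is summarized by one scalar feature x, with
# property y = 2*x (the implicit mapping the learner must recover).
pool = [(0.1 * i, 0.2 * i) for i in range(1, 21)]

w = 0.0   # learner parameter: predicts y_hat = w * x
lr = 0.1  # learning rate
k = 4     # size of the teaching set selected per round

def teach_round(w):
    # Teacher: select the k examples with the largest current squared error.
    ranked = sorted(pool, key=lambda p: (w * p[0] - p[1]) ** 2, reverse=True)
    batch = ranked[:k]
    # Learner: one gradient-descent step on the selected subset.
    grad = sum(2 * (w * x - y) * x for x, y in batch) / k
    return w - lr * grad

for _ in range(50):
    w = teach_round(w)

print(round(w, 3))  # converges toward the true slope 2.0
```

The selection rule here (largest loss first) is only one plausible criterion; the point is that updating on a well-chosen subset, rather than the full pool, can drive the learner toward the target mapping with fewer examples per step.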

Poster

Related links

Related works that offer a deeper understanding of GraNT:

[ICML 2024] Nonparametric Teaching of Implicit Neural Representations,

[NeurIPS 2023] Nonparametric Teaching for Multiple Learners,

[ICML 2023] Nonparametric Iterative Machine Teaching.

Citation

Acknowledgements

We thank all anonymous reviewers for their constructive feedback, which helped improve our paper. We also thank Yikun Wang for his helpful discussions.
This work is supported in part by the Theme-based Research Scheme (TRS) project T45-701/22-R of the Research Grants Council (RGC), Hong Kong SAR, and in part by the AVNET-HKU Emerging Microelectronics & Ubiquitous Systems (EMUS) Lab.
The website template was borrowed from Michaël Gharbi.

Send feedback and questions to Chen Zhang