CORE, a novel data augmentation framework, leverages the Information Bottleneck principle to eliminate noisy and spurious edges while recovering missing edges in graphs, thereby enhancing the generalizability of link prediction models.
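The mechanics implied here — scoring every node pair, pruning the weakest existing edges, and restoring the strongest missing ones — can be sketched as follows. This is a minimal illustration in which a similarity-based score stands in for CORE's learned Information Bottleneck objective; `denoise_edges` and all its parameters are hypothetical.

```python
import numpy as np

def denoise_edges(adj, emb, drop_frac=0.1, add_frac=0.05):
    """Score node pairs, drop the weakest existing edges and recover the
    strongest missing ones. The similarity score is a hypothetical
    stand-in for CORE's learned Information Bottleneck objective."""
    n = adj.shape[0]
    scores = emb @ emb.T                       # proxy edge-relevance scores
    iu = np.triu_indices(n, k=1)               # undirected: upper triangle
    flat_adj = adj[iu]
    edges = np.flatnonzero(flat_adj > 0)
    non_edges = np.flatnonzero(flat_adj == 0)
    edge_scores = scores[iu][edges]
    non_edge_scores = scores[iu][non_edges]
    # drop the lowest-scoring existing edges (treated as noisy/spurious)
    drop = edges[np.argsort(edge_scores)[:int(drop_frac * len(edges))]]
    # add the highest-scoring missing edges (treated as recoverable)
    add = non_edges[np.argsort(non_edge_scores)[-int(add_frac * len(non_edges)):]]
    new_adj = adj.copy()
    rows, cols = iu
    new_adj[rows[drop], cols[drop]] = new_adj[cols[drop], rows[drop]] = 0
    new_adj[rows[add], cols[add]] = new_adj[cols[add], rows[add]] = 1
    return new_adj
```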
Personalized PageRank tokenization enables efficient mini-batch training of graph transformers.
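A minimal sketch of the idea: compute a personalized PageRank vector per node and keep only its top-k entries as that node's token sequence, so each mini-batch gathers short per-node sequences instead of the whole graph. `ppr_tokens` is a hypothetical helper using dense power iteration, not the paper's implementation.

```python
import numpy as np

def ppr_tokens(adj, alpha=0.15, top_k=8, iters=50):
    """Tokenize each node as its top-k personalized-PageRank neighbors,
    letting a graph transformer train on node mini-batches (sketch)."""
    n = adj.shape[0]
    deg = adj.sum(1, keepdims=True).clip(min=1)
    P = adj / deg                              # row-stochastic transition matrix
    pi = np.eye(n)                             # one restart distribution per node
    for _ in range(iters):                     # power iteration: pi = a*I + (1-a)*pi*P
        pi = alpha * np.eye(n) + (1 - alpha) * pi @ P
    return np.argsort(-pi, axis=1)[:, :top_k]  # token sequence per node

# mini-batching then only slices rows for the batch's node ids:
# tokens = ppr_tokens(adj); batch_tokens = tokens[batch_node_ids]
```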
Simplified PCNet is extended to improve both the robustness and the efficiency of graph representation learning.
GOOD is a novel graph neural network designed for out-of-domain link prediction in dynamic multi-relational graphs.
Polynormer is a polynomial-expressive graph transformer with linear complexity that balances expressivity with scalability and outperforms state-of-the-art GNN and GT baselines on various datasets.
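The linear complexity rests on softmax-free, kernelized attention. Below is a generic sketch of that building block, not Polynormer's exact layer; `linear_attention` and the feature map `phi` are assumptions for illustration.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized linear attention in O(n * d^2): associate (Q K^T) V as
    Q (K^T V), never materializing the n x n attention matrix. A generic
    sketch of the linear-attention idea, not Polynormer's exact layer."""
    phi = lambda x: np.maximum(x, 0) + 1e-2    # simple positive feature map
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                              # d x d summary of keys and values
    z = Qf @ Kf.sum(0)                         # per-query normalizer
    return (Qf @ kv) / (z[:, None] + eps)
```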
The authors explore the dual effects of edge perturbation on graph neural networks, proposing a unified formulation that bridges augmentation and attack methods by ranking which edges to perturb.
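One way to read the unified view: a single routine flips the top-priority edges under a budget, and whether the priority scores come from a benign or an adversarial objective determines whether the result is augmentation or attack. The sketch below is illustrative; `perturb_edges` and its arguments are hypothetical.

```python
import numpy as np

def perturb_edges(adj, scores, budget):
    """Unified edge perturbation: rank candidate flips by a priority score
    and flip the top `budget` node pairs. Benign scores yield augmentation,
    adversarial scores yield an attack (illustrative sketch)."""
    n = adj.shape[0]
    iu = np.triu_indices(n, k=1)               # undirected: upper triangle
    flips = np.argsort(-scores[iu])[:budget]   # highest-priority flips first
    new_adj = adj.copy()
    r, c = iu[0][flips], iu[1][flips]
    new_adj[r, c] = new_adj[c, r] = 1 - new_adj[r, c]   # toggle edge state
    return new_adj
```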
ChebNet's inferior performance stems from illegally learned Chebyshev coefficients that cause over-fitting; ChebNetII addresses this with Chebyshev interpolation and outperforms state-of-the-art graph neural networks.
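The interpolation step itself is standard: learn the filter's values at the Chebyshev nodes and recover the expansion coefficients from them, which keeps the coefficients legal by construction. A sketch of that reparameterization (the formula only, not the full layer):

```python
import numpy as np

def cheb_interp_coeffs(gamma):
    """Chebyshev interpolation: given learned filter values `gamma` at the
    K+1 Chebyshev nodes, recover expansion coefficients
    w_k = (2 / (K+1)) * sum_j gamma_j * T_k(x_j), with the k = 0 term halved.
    This is the reparameterization idea behind ChebNetII (sketch)."""
    K = len(gamma) - 1
    j = np.arange(K + 1)
    x = np.cos((j + 0.5) * np.pi / (K + 1))    # Chebyshev nodes on [-1, 1]
    T = np.ones((K + 1, K + 1))                # T[k, j] = T_k(x_j)
    if K >= 1:
        T[1] = x
    for k in range(2, K + 1):                  # three-term recurrence
        T[k] = 2 * x * T[k - 1] - T[k - 2]
    w = (2.0 / (K + 1)) * (T @ gamma)
    w[0] /= 2.0                                # halve the k = 0 term
    return w                                   # filter: sum_k w_k T_k(L_hat)
```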
The authors propose KuramotoGNN, a continuous-depth GNN inspired by the Kuramoto model, which leverages insights from synchronization to address over-smoothing and outperforms other GNN variants.
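The underlying dynamics are the classic Kuramoto equations, dx_i/dt = ω_i + K Σ_j A_ij sin(x_j − x_i): coupled phases synchronize without collapsing to a single value, which is the intuition against over-smoothing. A toy Euler-integration sketch with scalar states rather than the model's feature vectors; `kuramoto_gnn_step` is hypothetical.

```python
import numpy as np

def kuramoto_gnn_step(x, adj, omega, coupling=1.0, dt=0.1, steps=10):
    """Evolve scalar node states under Kuramoto dynamics,
    dx_i/dt = omega_i + K * sum_j A_ij * sin(x_j - x_i),
    a toy sketch of the continuous-depth update in KuramotoGNN."""
    for _ in range(steps):
        diff = x[None, :] - x[:, None]          # diff[i, j] = x_j - x_i
        dx = omega + coupling * (adj * np.sin(diff)).sum(1)
        x = x + dt * dx                          # explicit Euler step
    return x
```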