Orthogonal Bases for Equivariant Graph Learning is a framework for learning on graph-structured data with graph neural networks (GNNs). Because graph learning tasks must respect permutation symmetry, permutation-invariant and permutation-equivariant linear layers are the essential building blocks. Prior work characterized a maximal collection of such layers and built a simple deep model, k-IGN, for graph data defined on k-tuples of nodes; however, the high dimension of these layer spaces makes k-IGNs computationally infeasible for k >= 3. This framework shows that a much smaller space of linear layers suffices to achieve the same expressive power: it provides two sets of orthogonal bases for the linear layers, each with only 3(2^k-1)-k basis elements. On top of these layers, it develops the neural network models GNN-a and GNN-b, which match the expressive power of the k-WL and (k+1)-WL algorithms in graph isomorphism testing, respectively. On benchmark molecular prediction datasets, low-order models built from the proposed linear layers outperform competing models.
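To make the notion of a permutation-equivariant linear layer concrete, the following is a minimal sketch for the simplest case of first-order (node-level) features, where the equivariant linear maps are known to be spanned by just two basis elements, the identity I and the all-ones matrix J. The coefficients `a` and `b` and the function name `equivariant_layer` are illustrative choices, not part of the framework's API; the k-tuple bases described above generalize this construction.

```python
import numpy as np

def equivariant_layer(X, a=0.7, b=0.3):
    """Apply L(X) = a * X + b * J @ X, a linear combination of the two
    basis maps (identity and all-ones) for node-level features.
    The coefficients a, b are arbitrary illustrative weights."""
    n = X.shape[0]
    J = np.ones((n, n))  # all-ones matrix: sums features over all nodes
    return a * X + b * (J @ X)

rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.standard_normal((n, d))  # features for n nodes, d channels

# A random n x n permutation matrix P.
P = np.eye(n)[rng.permutation(n)]

# Equivariance: relabeling the nodes before or after the layer
# produces the same output, i.e. L(P X) == P L(X).
lhs = equivariant_layer(P @ X)
rhs = P @ equivariant_layer(X)
assert np.allclose(lhs, rhs)
```

Every linear combination of basis elements inherits this property, which is why a small orthogonal basis is enough to parameterize the full space of equivariant layers.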
Graph neural networks, orthogonal bases
Graph neural networks
Benchmark graph datasets
Expressive power, computational efficiency
Cloud-based, on-premises
Yes
Yes
Equivariant graph learning, orthogonal bases
Yes
GPU for training
Linux, Windows, macOS
Compatible with existing graph learning frameworks
None
None
None
No
Limited community support
Research team from leading universities
Varies depending on the dataset
Low
Moderate
None
Ensuring ethical use of graph learning models
Requires high-quality graph data
Chemistry, bioinformatics, data science
Molecular prediction, graph isomorphism tests
Research institutions, tech companies
Integrates with graph learning frameworks
Scalable to large graph datasets
Research team support
None
Command-line interface
Yes
English
Research grant funded
No
Academic collaborations
None
None
1.0
Research framework
No
Academic research
0.00
USD
Research license
01/01/2023
01/10/2023
+1-800-555-0199
Supports efficient graph learning with orthogonal bases
Yes