pyg_spectral.nn.conv
- class BaseMP(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]
Bases: `MessagePassing`
Base filter layer structure.
- Parameters:
  - num_hops (`int`, default: `0`) – total number of propagation hops.
  - hop (`int`, default: `0`) – current number of propagation hops of this layer.
  - cached (`bool`, default: `True`) – whether to cache the propagation matrix.
  - `**kwargs` – additional arguments of `torch_geometric.nn.conv.MessagePassing`.
- param: dict[str, ParamTuple] = {}
  (`ParamTuple = tuple[str, tuple, dict[str, Any], Callable[[Any], str]]`)
- classmethod register_classes(registry: dict[str, dict[str, Any]] | None = None) dict[source]
Register arguments for all subclasses.
- Parameters:
  - name (`dict[str, Callable[[Any], str]]`) – Conv class logging path name.
  - pargs (`dict[str, list[str]]`) – Conv arguments from argparse.
  - pargs_default (`dict[str, dict[str, Any]]`) – default values for model arguments. Not recommended.
  - param (`dict[str, dict[str, ParamTuple]]`) – Conv parameters to tune. Each `ParamTuple` holds: (`str`) the parameter type, (`tuple`) args for `optuna.trial.suggest_`, (`dict`) kwargs for `optuna.trial.suggest_`, and (`callable`) a format function to `str`. See the sketch below.
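As an illustration of the `ParamTuple` layout described above, here is a minimal sketch of how such an entry could be mapped onto `optuna.trial.suggest_*`. The `suggest` helper and the example `alpha` entry are illustrative assumptions, not part of the library.

```python
from typing import Any, Callable

import optuna

# ParamTuple layout: (suggest type, suggest args, suggest kwargs, formatter)
ParamTuple = tuple[str, tuple, dict[str, Any], Callable[[Any], str]]

# Hypothetical entry, in the style of the `param` class attributes below.
param: dict[str, ParamTuple] = {
    "alpha": ("float", (0.0, 1.0), {"step": 0.01}, lambda x: f"{x:.2f}"),
}

def suggest(trial: optuna.Trial, key: str, spec: ParamTuple) -> Any:
    """Dispatch to trial.suggest_<type>(key, *args, **kwargs)."""
    ptype, args, kwargs, _fmt = spec
    return getattr(trial, f"suggest_{ptype}")(key, *args, **kwargs)
```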
- get_propagate_mat(x: Tensor, edge_index: Tensor | SparseTensor) Tensor | SparseTensor[source]
Get matrices for `propagate()`. Called before each `forward()` with the same input.
- Parameters:
  - x (`Tensor`) – from `torch_geometric.data.Data`
  - edge_index (`Tensor | SparseTensor`) – from `torch_geometric.data.Data`
- Variables:
  - propagate_mat (`str`) – propagation schemes, separated by `,`. Each scheme starts with `A` or `L` for adjacency or Laplacian, optionally followed by `+[p*]I` or `-[p*]I` to scale the diagonal, where `p` can be a float or an attribute name (see the sketch below).
- Returns:
  - prop (`SparseTensor`) – propagation matrix
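The scheme grammar above can be illustrated with a small, hedged sketch: assuming `A` denotes the symmetrically normalized adjacency and `L = I - A`, a parser along the following lines would translate a scheme string into a dense matrix. The parser itself is illustrative; the library's actual handling of `propagate_mat` may differ.

```python
import torch

def scheme_to_matrix(scheme: str, A: torch.Tensor) -> torch.Tensor:
    """Translate one scheme (e.g. 'A', 'L-I', 'A+0.5*I') into a dense matrix."""
    n = A.size(0)
    I = torch.eye(n, dtype=A.dtype)
    L = I - A                                   # normalized Laplacian
    M = A if scheme[0] == "A" else L
    rest = scheme[1:]
    if rest:                                    # optional '+[p*]I' / '-[p*]I'
        sign = 1.0 if rest[0] == "+" else -1.0
        p = float(rest[1:-2]) if rest.endswith("*I") else 1.0
        M = M + sign * p * I
    return M

A = torch.rand(4, 4); A = (A + A.T) / 2
print(scheme_to_matrix("A+0.5*I", A))           # adjacency with scaled diagonal
print(scheme_to_matrix("L-I", A))               # Laplacian minus identity
```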
- _get_propagate_mat(x: Tensor, edge_index: Tensor | SparseTensor) Tensor | SparseTensor[source]
Shadow function for `get_propagate_mat()`.
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor, comp_scheme: str | None = None) dict[source]
Get matrices for `forward()`. Called during `forward()`.
- Parameters:
  - x (`Tensor`) – from `torch_geometric.data.Data`
  - edge_index (`Tensor | SparseTensor`) – from `torch_geometric.data.Data`
- Returns:
  - out (`Tensor`) – output tensor (shape: \((|\mathcal{V}|, F)\))
  - prop (`Adj`) – propagation matrix
- _forward_theta(**kwargs)[source]
- Variables:
  - theta (`nn.Parameter | nn.Module`) – transformation of the propagation result before applying it to the output.
- _forward_out(**kwargs) Tensor[source]
Shadow function for calling `_forward_theta()` and accumulating results.
- Returns:
  - out (`Tensor`) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))
- forward(**kwargs) dict[source]
Wrapper for distinguishing precomputed outputs. Args & Returns should match the output of `get_forward_mat()`.
- _forward(x: Tensor, prop: Tensor | SparseTensor) dict[source]
Shadow function for `forward()`, to be implemented in subclasses without calculating the output. If `self.supports_batch == True`, it should not contain derivable computations. The dicts of Args & Returns should match.
- Returns:
  - x (`Tensor`) – tensor for calculating `out`
- message_and_aggregate(adj_t: Tensor | SparseTensor, x: Tensor) Tensor[source]
Fuses computations of `message()` and `aggregate()` into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called when it is implemented and propagation takes place based on a `torch_sparse.SparseTensor` or a `torch.sparse.Tensor`.
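For reference, a minimal sketch of a fused `message_and_aggregate()` in the common PyG style is shown below; it assumes sum aggregation and a `torch_sparse.SparseTensor` input, and is not necessarily the exact implementation used by `BaseMP`.

```python
import torch
from torch import Tensor
from torch_sparse import SparseTensor, matmul

def message_and_aggregate(adj_t: SparseTensor, x: Tensor) -> Tensor:
    # One fused propagation step: out[i] = sum_j adj_t[i, j] * x[j].
    # No per-edge messages are materialized; the sparse matmul performs
    # message() and aggregate() in one shot.
    return matmul(adj_t, x, reduce="sum")
```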
- class AdjConv(num_hops: int = 0, hop: int = 0, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Linear filter using the normalized adjacency matrix for propagation.
- Parameters:
  - beta (`float | None`, default: `None`) – additional scaling for the self-loop in the adjacency matrix \(\mathbf{A} + \beta\mathbf{I}\), i.e. `improved` in `torch_geometric.nn.conv.GCNConv`.
- pargs: list[str] = ['beta']
- param: dict[str, ParamTuple] = {'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjConv.<lambda>>)}
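For intuition, a plain-PyTorch, dense reference of the monomial adjacency filter that a stack of such layers realizes over `num_hops` hops is sketched below. The per-hop scalars `theta` and the dense \(\hat{\mathbf{A}}\) are simplifying assumptions; the library operates hop by hop on sparse matrices.

```python
import torch

def adj_poly_filter(A_hat: torch.Tensor, x: torch.Tensor,
                    theta: torch.Tensor) -> torch.Tensor:
    """Dense reference: y = sum_{k=0}^{K} theta[k] * A_hat^k @ x."""
    out = theta[0] * x
    h = x
    for k in range(1, theta.numel()):
        h = A_hat @ h                  # one propagation hop
        out = out + theta[k] * h
    return out
```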
- class AdjDiffConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `AdjConv`
Linear filter using the normalized adjacency matrix for propagation. Preprocesses the feature by the distinguishing matrix \(\alpha\mathbf{L} + \mathbf{I}\).
- Parameters:
  - alpha (`float | None`, default: `None`) – scaling for the self-loop in the distinguishing matrix \(\alpha\mathbf{L} + \mathbf{I}\).
  - beta (`float | None`, default: `None`) – additional scaling for the self-loop in the adjacency matrix \(\mathbf{A} + \beta\mathbf{I}\), i.e. `improved` in `torch_geometric.nn.conv.GCNConv`.
- pargs: list[str] = ['alpha']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjDiffConv.<lambda>>)}
- class AdjiConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Iterative linear filter using the normalized adjacency matrix for augmented propagation.
- Parameters:
  - alpha (`float | None`, default: `None`) – decay factor \(\alpha(\mathbf{A} + \beta\mathbf{I})\). Can be \(\alpha < 0\).
  - beta (`float | None`, default: `None`) – scaling for the skip connection, i.e. the self-loop in the adjacency matrix; corresponds to `improved` in `torch_geometric.nn.conv.GCNConv` and `eps` in `torch_geometric.nn.conv.GINConv`. Can be \(\beta < 0\). Use `beta = 'var'` for a learnable beta parameter. See the sketch at the end of this entry.
- pargs: list[str] = ['alpha', 'beta']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjiConv.<lambda>>), 'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjiConv.<lambda>>)}
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
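A hedged sketch of one augmented propagation hop of this iterative filter, \(\mathbf{x} \leftarrow \alpha(\hat{\mathbf{A}} + \beta\mathbf{I})\mathbf{x}\), applied `num_hops` times. The dense \(\hat{\mathbf{A}}\) and the bare loop are illustrative assumptions, not the layer's sparse implementation.

```python
import torch

def adji_hop(A_hat: torch.Tensor, x: torch.Tensor,
             alpha: float, beta: float) -> torch.Tensor:
    # x <- alpha * (A_hat @ x + beta * x)
    return alpha * (A_hat @ x + beta * x)

x, A_hat = torch.rand(5, 8), torch.rand(5, 5)
for _ in range(3):                     # num_hops iterations
    x = adji_hop(A_hat, x, alpha=0.5, beta=1.0)
```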
- class Adji2Conv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `AdjiConv`
Iterative linear filter using the 2-hop normalized adjacency matrix for augmented propagation.
- Parameters:
  - num_hops (`int`, default: `0`) – total number of propagation hops. Note that there are only \(\text{num\_hops} / 2\) conv layers.
  - alpha (`float | None`, default: `None`) – decay factor \(\alpha(\mathbf{A} + \beta\mathbf{I})\). Can be \(\alpha < 0\).
  - beta (`float | None`, default: `None`) – scaling for the skip connection, i.e. the self-loop in the adjacency matrix; corresponds to `improved` in `torch_geometric.nn.conv.GCNConv` and `eps` in `torch_geometric.nn.conv.GINConv`. Can be \(\beta < 0\). Use `beta = 'var'` for a learnable beta parameter.
- class AdjSkipConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Iterative linear filter with skip connection.
- Parameters:
  - alpha (`float | None`, default: `None`) – decay factor \(\alpha(\mathbf{A} + \beta\mathbf{I})\). Can be \(\alpha < 0\).
  - beta (`float | None`, default: `None`) – scaling for the skip connection, i.e. the self-loop in the adjacency matrix; corresponds to `improved` in `torch_geometric.nn.conv.GCNConv` and `eps` in `torch_geometric.nn.conv.GINConv`. Can be \(\beta < 0\). Use `beta = 'var'` for a learnable beta parameter.
- pargs: list[str] = ['alpha', 'beta']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjSkipConv.<lambda>>), 'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjSkipConv.<lambda>>)}
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- class AdjSkip2Conv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `AdjSkipConv`
Iterative linear filter with 2-hop propagation and skip connection.
- Parameters:
  - alpha (`float | None`, default: `None`) – decay factor \(\alpha(\mathbf{A} + \beta\mathbf{I})\). Can be \(\alpha < 0\).
  - beta (`float | None`, default: `None`) – scaling for the skip connection, i.e. the self-loop in the adjacency matrix; corresponds to `improved` in `torch_geometric.nn.conv.GCNConv` and `eps` in `torch_geometric.nn.conv.GINConv`. Can be \(\beta < 0\). Use `beta = 'var'` for a learnable beta parameter.
- class AdjResConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Iterative linear filter with residual connection.
- Parameters:
  - alpha (`float | None`, default: `None`) – decay factor \(\alpha(\mathbf{A} + \beta\mathbf{I})\). Can be \(\alpha < 0\).
  - beta (`float | None`, default: `None`) – scaling for the skip connection, i.e. the self-loop in the adjacency matrix; corresponds to `improved` in `torch_geometric.nn.conv.GCNConv` and `eps` in `torch_geometric.nn.conv.GINConv`. Can be \(\beta < 0\). Use `beta = 'var'` for a learnable beta parameter.
- pargs: list[str] = ['alpha', 'beta']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjResConv.<lambda>>), 'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjResConv.<lambda>>)}
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- class LapiConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Iterative linear filter using the normalized adjacency matrix. Used in AdaGNN.
- Paper:
AdaGNN: Graph Neural Networks with Adaptive Frequency Response Filter
- Parameters:
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
- class HornerConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with adjacency propagation and explicit residual.
- Paper:
Clenshaw Graph Neural Networks
- Ref:
https://github.com/yuziGuo/ClenshawGNN/blob/master/layers/HornerConv.py
- Parameters:
- pargs: list[str] = ['alpha']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.01, 10), {'log': True}, <function HornerConv.<lambda>>)}
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- class ClenshawConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with Chebyshev Polynomials and explicit residual.
- Paper:
Clenshaw Graph Neural Networks
- Ref:
https://github.com/yuziGuo/ClenshawGNN/blob/master/models/ChebClenshawNN.py
- Parameters:
- pargs: list[str] = ['alpha']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.01, 10), {'log': True}, <function ClenshawConv.<lambda>>)}
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- _forward_out(**kwargs) Tensor[source]
- Returns:
  - out (`Tensor`) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))
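The underlying Clenshaw recurrence can be sketched as follows: it evaluates a Chebyshev series \(p(\hat{\mathbf{A}})\mathbf{x} = \sum_k a_k T_k(\hat{\mathbf{A}})\mathbf{x}\) without materializing the \(T_k\) terms. The actual ClenshawConv layer unrolls this recurrence across hops and adds an explicit residual, so this dense snippet only illustrates the algorithm, not the layer itself.

```python
import torch

def clenshaw_cheb(A_hat: torch.Tensor, x: torch.Tensor,
                  a: torch.Tensor) -> torch.Tensor:
    """Clenshaw evaluation of sum_k a[k] * T_k(A_hat) @ x."""
    K = a.numel() - 1
    b1 = torch.zeros_like(x)           # b_{k+1}
    b2 = torch.zeros_like(x)           # b_{k+2}
    for k in range(K, 0, -1):
        b1, b2 = a[k] * x + 2 * (A_hat @ b1) - b2, b1
    return a[0] * x + A_hat @ b1 - b2
```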
- class ChebConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with Chebyshev Polynomials.
- Paper:
Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited
- Ref:
https://github.com/ivam-he/ChebNetII/blob/main/main/Chebbase_pro.py
- Parameters:
- pargs: list[str] = ['alpha']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function ChebConv.<lambda>>)}
- class ChebIIConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with Chebyshev-II Polynomials.
- Paper:
Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited
- Ref:
https://github.com/ivam-he/ChebNetII/blob/main/main/ChebnetII_pro.py
- Parameters:
- coeffs_data = None
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
- Variables:
  - thetas (`Tensor`) – learnable/fixed (w.r.t. decoupled/iterative models) scalar parameters representing `cheb(x)`
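For context, the ChebNetII-style reparameterization behind these coefficients maps learnable values \(\gamma_j\) at the Chebyshev nodes to polynomial coefficients \(w_k = \frac{2}{K+1}\sum_{j=0}^{K} \gamma_j T_k(x_j)\), with \(x_j = \cos\big((j+1/2)\pi/(K+1)\big)\). The sketch below follows the referenced paper; whether `thetas` stores exactly these quantities in this layer is an assumption.

```python
import math
import torch

def cheb_ii_coeffs(gamma: torch.Tensor) -> torch.Tensor:
    """Map node values gamma_j to Chebyshev coefficients w_k (ChebNetII style)."""
    K = gamma.numel() - 1
    j = torch.arange(K + 1, dtype=gamma.dtype)
    x = torch.cos((j + 0.5) * math.pi / (K + 1))      # Chebyshev nodes
    w = torch.empty(K + 1, dtype=gamma.dtype)
    for k in range(K + 1):
        T_k = torch.cos(k * torch.acos(x))            # T_k evaluated at the nodes
        w[k] = (2.0 / (K + 1)) * torch.sum(gamma * T_k)
    return w    # conventionally, the k = 0 term is halved when the series is evaluated
```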
- class BernConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with Bernstein Polynomials. We propose a new implementation reducing the memory overhead from \(O(KFn)\) to \(O(3Fn)\); a naive reference computation is sketched at the end of this entry.
- Paper:
BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation
- Ref:
https://github.com/ivam-he/BernNet/blob/main/NodeClassification/Bernpro.py
- Parameters:
- _forward_theta(**kwargs)[source]
- Variables:
  - theta (`nn.Parameter | nn.Module`) – transformation of the propagation result before applying it to the output.
- _forward(x: Tensor, prop_0: Tensor | SparseTensor, prop_1: Tensor | SparseTensor) dict[source]
- Returns:
  - x (`Tensor`) – propagation result through \(2I-L\) (shape: \((|\mathcal{V}|, F)\))
  - prop_0 (`SparseTensor`) – \(L\)
  - prop_1 (`SparseTensor`) – \(2I-L\)
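A naive, dense reference of the Bernstein-basis filter helps explain the memory claim above: materializing all \(K+1\) basis terms costs \(O(KFn)\), which the layer's own scheme avoids. The snippet below is only a correctness reference under the assumption that \(L\) is the normalized Laplacian; it is not the library's implementation.

```python
import math
import torch

def bernstein_filter_reference(L: torch.Tensor, x: torch.Tensor,
                               theta: torch.Tensor) -> torch.Tensor:
    """y = sum_k theta[k] * C(K, k) / 2^K * (2I - L)^(K-k) @ L^k @ x."""
    K = theta.numel() - 1
    I = torch.eye(L.size(0), dtype=L.dtype)
    M = 2 * I - L
    out = torch.zeros_like(x)
    for k in range(K + 1):
        term = x
        for _ in range(k):
            term = L @ term                # L^k @ x
        for _ in range(K - k):
            term = M @ term                # (2I - L)^(K-k) @ (...)
        out = out + theta[k] * math.comb(K, k) / (2 ** K) * term
    return out
```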
- class LegendreConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with Legendre Polynomials.
- Paper:
How Powerful are Spectral Graph Neural Networks
- Paper:
Improved Modeling and Generalization Capabilities of Graph Neural Networks With Legendre Polynomials
- Parameters:
- class JacobiConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with Jacobi Polynomials.
- Paper:
How Powerful are Spectral Graph Neural Networks
- Parameters:
- pargs: list[str] = ['alpha', 'beta']
- param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function JacobiConv.<lambda>>), 'beta': ('float', (0.0, 1.0), {'step': 0.01}, <function JacobiConv.<lambda>>)}
- class FavardConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with basis in Favard's Theorem.
- Paper:
Graph Neural Networks with Learnable and Optimal Polynomial Bases
- Ref:
https://github.com/yuziGuo/FarOptBasis/blob/master/layers/FavardNormalConv.py
- Parameters:
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
  - alpha_1 – parameter for \(k-1\)
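A hedged sketch of generating a polynomial basis via a learnable three-term recurrence, the construction that Favard's theorem ties to orthogonal polynomial bases. The coefficient names `alpha`, `beta`, `gamma` and the exact normalization are illustrative assumptions; FavardConv's actual parameterization may differ.

```python
import torch

def three_term_basis(A_hat: torch.Tensor, x: torch.Tensor,
                     alpha: torch.Tensor, beta: torch.Tensor,
                     gamma: torch.Tensor) -> list[torch.Tensor]:
    """h_k = (A_hat @ h_{k-1} - beta[k-1] * h_{k-1} - gamma[k-1] * h_{k-2}) / alpha[k]."""
    K = alpha.numel() - 1
    basis = [x / alpha[0]]                       # h_0
    prev2 = torch.zeros_like(x)                  # h_{-1} := 0
    for k in range(1, K + 1):
        h = A_hat @ basis[-1] - beta[k - 1] * basis[-1] - gamma[k - 1] * prev2
        prev2 = basis[-1]
        basis.append(h / alpha[k])
    return basis
```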
- class OptBasisConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer with optimal adaptive basis.
- Paper:
Graph Neural Networks with Learnable and Optimal Polynomial Bases
- Ref:
https://github.com/yuziGuo/FarOptBasis/blob/master/layers/NormalBasisConv.py
- Parameters:
- class ACMConv(num_hops: int = 0, hop: int = 0, alpha: int | None = None, cached: bool = True, out_channels: int | None = None, **kwargs)[source]
Bases: `BaseMP`
Convolutional layer of FBGNN & ACMGNN (I & II).
- Paper:
Revisiting Heterophily For Graph Neural Networks
- Paper:
Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
- Ref:
https://github.com/SitaoLuan/ACM-GNN/blob/main/ACM-Geometric/layers.py
- Parameters:
- pargs: list[str] = ['alpha']
- _init_with_theta()[source]
- Variables:
  - theta (`torch.nn.ModuleDict`) – Linear transformation for each scheme.
- _get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()` when `self.comp_scheme == 'forward'`.
- Returns:
  - out (`Tensor`) – initial output tensor (shape: \((|\mathcal{V}|, F)\))
- _get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns should match the arg list of `forward()`.
- _forward_theta(x, scheme)[source]
- Variables:
  - theta (`torch.nn.ModuleDict`) – Linear transformation for each scheme.
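To ground the ModuleDict-per-scheme structure, here is a hedged sketch of ACM-style adaptive channel mixing as described in the referenced paper: low-pass, high-pass, and identity channels are transformed separately and combined with node-wise softmax weights. Module names and wiring are illustrative, not ACMConv's exact implementation.

```python
import torch
import torch.nn as nn

class ACMMixing(nn.Module):
    """Node-wise mixing of low-pass, high-pass and identity channels."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.ModuleDict({
            "low": nn.Linear(in_dim, out_dim),
            "high": nn.Linear(in_dim, out_dim),
            "id": nn.Linear(in_dim, out_dim),
        })
        self.gate = nn.ModuleDict({k: nn.Linear(out_dim, 1) for k in self.theta})

    def forward(self, x: torch.Tensor, A_hat: torch.Tensor,
                L_hat: torch.Tensor) -> torch.Tensor:
        h = {
            "low": self.theta["low"](A_hat @ x),    # low-pass channel
            "high": self.theta["high"](L_hat @ x),  # high-pass channel
            "id": self.theta["id"](x),              # identity channel
        }
        # Node-wise mixing weights over the three channels.
        w = torch.softmax(
            torch.cat([self.gate[k](h[k]) for k in h], dim=-1), dim=-1)
        return sum(w[:, i:i + 1] * h[k] for i, k in enumerate(h))
```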