pyg_spectral.nn.models

class BaseNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: Module

Base NN structure with MLP before and after convolution layers.

Parameters:
supports_edge_weight: Final[bool] = False
supports_edge_attr: Final[bool] = False
supports_norm_batch: Final[bool][source]
supports_batch: bool[source]
name: str[source]
conv_name(args)[source]
pargs: list[str] = ['conv', 'num_hops', 'in_layers', 'out_layers', 'in_channels', 'hidden_channels', 'out_channels', 'dropout_lin', 'dropout_conv']
param: dict[str, NewType(ParamTuple, tuple[str, tuple, dict[str, Any], Callable[[Any], str]])] = {'dropout_conv': ('float', (0.0, 0.9), {'step': 0.1}, <function BaseNN.<lambda>>), 'dropout_lin': ('float', (0.0, 0.9), {'step': 0.1}, <function BaseNN.<lambda>>), 'hidden_channels': ('categorical', ([16, 32, 64, 128, 256],), {}, <function BaseNN.<lambda>>), 'in_layers': ('int', (1, 3), {}, <function BaseNN.<lambda>>), 'num_hops': ('int', (2, 30), {'step': 2}, <function BaseNN.<lambda>>), 'out_layers': ('int', (1, 3), {}, <function BaseNN.<lambda>>)}
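
To make these entries concrete, the sketch below expands a few ParamTuple entries into optuna suggestions following the (type, args, kwargs, format) layout described under register_classes() below; the objective function itself is only illustrative.

    import optuna

    def objective(trial: optuna.Trial) -> float:
        # Each ParamTuple is (suggest type, args, kwargs, format fn); the first
        # three map onto trial.suggest_<type>(name, *args, **kwargs).
        # BaseNN.param['dropout_conv'] = ('float', (0.0, 0.9), {'step': 0.1}, ...):
        dropout_conv = trial.suggest_float('dropout_conv', 0.0, 0.9, step=0.1)
        # BaseNN.param['num_hops'] = ('int', (2, 30), {'step': 2}, ...):
        num_hops = trial.suggest_int('num_hops', 2, 30, step=2)
        # BaseNN.param['hidden_channels'] uses a categorical suggestion:
        hidden = trial.suggest_categorical('hidden_channels', [16, 32, 64, 128, 256])
        # Build and evaluate a model with these values here (placeholder).
        return 0.0
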
classmethod register_classes(registry: dict[str, dict[str, Any]] | None = None) dict[source]

Register arguments for all subclasses. See the usage sketch after the parameter list below.

Parameters:
  • name (dict[str, str]) – Model class logging path name.

  • conv_name (dict[str, Callable[[str, Any], str]]) – Wrap conv logging path name.

  • module (dict[str, str]) – Module for importing the model.

  • pargs (dict[str, list[str]]) – Model arguments from argparse.

  • pargs_default (dict[str, dict[str, Any]]) – Default values for model arguments. Not recommended.

  • param (dict[str, dict[str, ParamTuple]]) –

    Model parameters to tune.

    • (str) parameter type,

    • (tuple) args for optuna.trial.suggest_,

    • (dict) kwargs for optuna.trial.suggest_,

    • (callable) format function to str.
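
A minimal sketch of collecting the registry, assuming only the documented call signature; the inspected keys follow the parameter names listed above, and the specific lookups are illustrative.

    from pyg_spectral.nn.models import BaseNN

    # Collect per-class metadata for all registered subclasses into one
    # dict-of-dicts registry.
    registry = BaseNN.register_classes()

    # Illustrative inspection (assumes the registry is keyed by the names above
    # and then by model class name):
    print(registry['pargs'].get('DecoupledVar'))   # argparse arguments of one model
    print(registry['param'].get('DecoupledVar'))   # tunable hyperparameters of one model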

init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) list[int][source]
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
_set_conv_attr(key: str) Callable[source]
reset_cache()[source]
reset_parameters()[source]
get_optimizer(dct)[source]
preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Any[source]

Preprocessing step that is not counted in forward() overhead. Mainly transforms the graph adjacency into the actual propagation matrix.

convolute(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]

Decoupled propagation step for calling the convolutional module.

forward(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
Parameters:
  • x (Tensor) – from torch_geometric.data.Data

  • edge_index (Tensor | SparseTensor) – from torch_geometric.data.Data

  • batch (Tensor | None, default: None) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. Only needs to be passed in case the underlying normalization layers require the batch information.

  • batch_size (int | None, default: None) – The number of examples \(B\). Automatically calculated if not given. Only needs to be passed in case the underlying normalization layers require the batch information.
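
For orientation, a minimal end-to-end sketch of this interface using the DecoupledVar subclass documented below; the conv name 'BernConv', all sizes, and the theta_* values are placeholders rather than values confirmed by this page.

    import torch
    from torch_geometric.utils import erdos_renyi_graph
    from pyg_spectral.nn.models import DecoupledVar

    # Toy stand-ins for torch_geometric.data.Data fields.
    x = torch.randn(100, 16)                   # 100 nodes, 16 features
    edge_index = erdos_renyi_graph(100, 0.05)  # random edges for illustration

    # 'BernConv' is a placeholder for any conv registered in pyg_spectral.nn.conv;
    # theta_scheme/theta_param (assumed) initialize the learnable hop weights.
    model = DecoupledVar(conv='BernConv', num_hops=10,
                         in_channels=16, hidden_channels=64, out_channels=7,
                         in_layers=1, out_layers=1,
                         theta_scheme='ones', theta_param=1.0)

    out = model(x, edge_index)                 # node-level output, shape [100, 7]

When benchmarking, preprocess(x, edge_index) can be run once beforehand, since its cost is excluded from the forward() overhead by design.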

class BaseNNCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNN

Base NN structure with multiple conv channels.

Parameters:
pargs: list[str] = ['combine']
param: dict[str, ParamTuple] = {'combine': ('categorical', ['sum', 'sum_weighted', 'cat'], {}, <function BaseNNCompose.<lambda>>)}
init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) list[int][source]
Variables:

channel_list – width for each conv channel

_set_conv_attr(key: str) list[Callable][source]
reset_cache()[source]
reset_parameters()[source]
get_optimizer(dct)[source]
preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Any[source]

Preprocessing step that is not counted in forward() overhead. Mainly transforms the graph adjacency into the actual propagation matrix.

convolute(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]

Decoupled propagation step for calling the convolutional module.
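
A sketch of the extra combine argument for composed models. The comma-separated conv string below is an assumption about how multiple channels are specified, not something confirmed by this page; 'BernConv'/'ChebConv' and all sizes are placeholders.

    from pyg_spectral.nn.models import DecoupledVarCompose

    # Assumption: two conv channels given as a comma-separated string; their
    # outputs are merged according to `combine` ('sum', 'sum_weighted', or 'cat').
    model = DecoupledVarCompose(conv='BernConv,ChebConv', num_hops=10,
                                in_channels=16, hidden_channels=64, out_channels=7,
                                in_layers=1, out_layers=1,
                                combine='sum_weighted')
    # Per-channel kwargs (e.g. theta_scheme) are omitted here for brevity.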

class Iterative(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNN

Iterative structure with matrix transformation at each hop of propagation.

Parameters:
name: str = 'Iterative'
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class IterativeCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNNCompose

Iterative structure with matrix transformation at each hop of propagation.

Parameters:
name: str = 'Iterative'
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class IterativeFixed(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: Iterative

name: str = 'IterativeFixed'
class IterativeFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: IterativeCompose

name: str = 'IterativeFixed'
class DecoupledFixed(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.

Note

conv is applied on every forward() call. Not to be mixed with Precomputed models.

Parameters:
name: str = 'DecoupledFixed'
conv_name(args)[source]
pargs: list[str] = ['theta_scheme', 'theta_param']
param: dict[str, ParamTuple] = {'theta_param': <function DecoupledFixed.<lambda>>}
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
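
A construction sketch for the fixed-parameter variant; the conv name and sizes are placeholders, and it is assumed (not confirmed here) that theta_scheme/theta_param are forwarded to gen_theta(), documented further below, to produce the fixed hop coefficients.

    from pyg_spectral.nn.models import DecoupledFixed

    # 'BernConv' is a placeholder conv name; 'appr' with p=0.1 would fix the hop
    # weights to theta_k = 0.1 * 0.9**k (see gen_theta()).
    model = DecoupledFixed(conv='BernConv', num_hops=10,
                           in_channels=16, hidden_channels=64, out_channels=7,
                           in_layers=1, out_layers=1,
                           theta_scheme='appr', theta_param=0.1)
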
class DecoupledVar(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.

Note

conv is applied on every forward() call. Not to be mixed with Precomputed models.

Parameters:
name: str = 'DecoupledVar'
pargs: list[str] = ['theta_scheme', 'theta_param']
param: dict[str, ParamTuple] = {'theta_param': <function DecoupledVar.<lambda>>}
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
class DecoupledFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNNCompose

Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.

Parameters:
name: str = 'DecoupledFixed'
conv_name(args)[source]
pargs: list[str] = ['theta_scheme', 'theta_param']
param: dict[str, ParamTuple] = {'theta_param': <function DecoupledFixedCompose.<lambda>>}
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class DecoupledVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNNCompose

Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.

Parameters:
name: str = 'DecoupledVar'
pargs: list[str] = ['theta_scheme', 'theta_param']
param: dict[str, ParamTuple] = {'theta_param': <function DecoupledVarCompose.<lambda>>}
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
class PrecomputedFixed(in_layers: int | None = None, **kwargs)[source]

Bases: DecoupledFixed

Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters and accumulating precompute results.

Note

Propagation is only applied in convolute(). Not to be mixed with Decoupled models.

Parameters:
name: str = 'PrecomputedFixed'
param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedFixed.<lambda>>)}
convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – Precomputed node embeddings.

forward(x: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
Parameters:
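
Since forward() here takes no edge_index, the intended workflow appears to be: run convolute() once offline, then train or infer on the returned embedding. A hedged sketch with placeholder conv name and sizes:

    import torch
    from torch_geometric.utils import erdos_renyi_graph
    from pyg_spectral.nn.models import PrecomputedFixed

    x = torch.randn(100, 16)
    edge_index = erdos_renyi_graph(100, 0.05)

    model = PrecomputedFixed(conv='BernConv', num_hops=10,   # placeholder conv
                             in_channels=16, hidden_channels=64, out_channels=7,
                             out_layers=2, theta_scheme='appr', theta_param=0.1)

    embed = model.convolute(x, edge_index)   # one-off propagation; can be cached
    out = model(embed)                       # only the MLP runs per forward() call
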
class PrecomputedVar(in_layers: int | None = None, **kwargs)[source]

Bases: DecoupledVar

Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters and storing all intermediate precompute results.

Note

Propagation is only applied in convolute(). Not to be mixed with Decoupled models.

Parameters:
name: str = 'PrecomputedVar'
param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedVar.<lambda>>)}
convolute(x: Tensor, edge_index: Tensor | SparseTensor) list[source]

Decoupled propagation step for calling the convolutional module. _forward() should not contain differentiable computations.

Returns:

embed – List of precomputed node embeddings of each hop. Each shape is \((|\mathcal{V}|, F, |convs|+1)\).

forward(xs: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
Parameters:
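
The variable-parameter counterpart follows the same two-step pattern, except convolute() returns the per-hop embeddings, which are then passed to forward() as xs; the setup below reuses the placeholder x, edge_index, and conv name from the previous sketch.

    from pyg_spectral.nn.models import PrecomputedVar

    model = PrecomputedVar(conv='BernConv', num_hops=10,   # placeholder conv
                           in_channels=16, hidden_channels=64, out_channels=7,
                           out_layers=2, theta_scheme='ones', theta_param=1.0)

    xs = model.convolute(x, edge_index)   # per-hop embeddings, computed once
    out = model(xs)                       # learnable theta combines the hops
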
class PrecomputedFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: DecoupledFixedCompose

Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters and accumulating precompute results.

Parameters:
name: str = 'PrecomputedFixed'
param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedFixedCompose.<lambda>>)}
convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – Precomputed node embeddings. (shape: \((|\mathcal{V}|, F, Q)\))

forward(x: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
Parameters:
class PrecomputedVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: DecoupledVarCompose

Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters and storing all intermediate precompute results.

Parameters:
name: str = 'PrecomputedVar'
param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedVarCompose.<lambda>>)}
convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – List of precomputed node embeddings of each hop. Shape: \((|\mathcal{V}|, F, Q, |convs|+1)\).

forward(xs: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
Parameters:
class AdaGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure with diagonal transformation at each hop of propagation.

Paper:

AdaGNN: Graph Neural Networks with Adaptive Frequency Response Filter

Ref:

https://github.com/yushundong/AdaGNN

Parameters:
name: str = 'DecoupledVar'
pargs: list[str] = ['theta_scheme', 'theta_param']
param: dict[str, ParamTuple] = {'theta_param': <function AdaGNN.<lambda>>}
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
class ACMGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNN

Iterative structure for ACM conv.

Paper:

Revisiting Heterophily For Graph Neural Networks

Paper:

Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks

Ref:

https://github.com/SitaoLuan/ACM-GNN

Parameters:
name: str = 'Iterative'
conv_name(args)[source]
pargs: list[str] = ['theta_scheme']
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class ACMGNNDec(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure for ACM conv.

Paper:

Revisiting Heterophily For Graph Neural Networks

Paper:

Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks

Ref:

https://github.com/SitaoLuan/ACM-GNN

Parameters:
name: str = 'DecoupledVar'
conv_name(args)[source]
pargs: list[str] = ['theta_scheme']
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
gen_theta(num_hops: int, scheme: str, param: float | list[float] | None = None) Tensor[source]

Generate the list of hop parameters for the given scheme.

Parameters:
  • num_hops (int) – Total number of hops.

  • scheme (str) – Method to generate parameters:

    • 'zeros': all-zero, \(\theta_k = 0\).

    • 'ones': all-same, \(\theta_k = p\).

    • 'impulse': K-hop, \(\theta_K = p\), otherwise \(\theta_k = 0\).

    • 'inverse': Inverse, \(\theta_k = p/(k+1)\).

    • 'mono': Monomial, \(\theta_k = (1-p)/K\), \(\theta_0 = p\).

    • 'appr': Approximate PPR, \(\theta_k = p (1 - p)^k\).

    • 'nappr': Negative PPR, \(\theta_k = p^k\).

    • 'hk': Heat Kernel, \(\theta_k = e^{-p} p^k / k!\).

    • 'gaussian': Graph Gaussian Kernel, \(\theta_k = p^{k} / k!\).

    • 'log': Logarithmic, \(\theta_k = \log(p / (k+1) + 1)\).

    • 'chebyshev': Chebyshev polynomial.

    • 'uniform': Random uniform distribution.

    • 'normal_one': Gaussian distribution \(N(0, p)\) with sum normalized to 1.

    • 'normal': Gaussian distribution \(N(0, p)\).

    • 'custom': Custom list of hop parameters.

  • param (float, optional) – Hyperparameter for the scheme:

    • 'zeros': NA.

    • 'ones': Value.

    • 'impulse': Value.

    • 'inverse': Value.

    • 'mono': Decay factor, \(p \in [0, 1]\).

    • 'appr': Decay factor, \(p \in [0, 1]\).

    • 'nappr': Decay factor, \(p \in [-1, 1]\).

    • 'hk': Decay factor, \(p > 0\).

    • 'gaussian': Decay factor, \(p > 0\).

    • 'log': Decay factor, \(p > 0\).

    • 'chebyshev': NA.

    • 'uniform': Distribution bound.

    • 'normal_one': Distribution variance.

    • 'normal': Distribution variance.

    • 'custom': Float list of hop parameters.

Returns:

theta (Tensor) – Length-(num_hops+1) list of hop parameters.
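
For example, the 'appr' scheme with decay \(p = 0.1\) yields \(\theta_k = 0.1 \cdot 0.9^k\) for \(k = 0, \ldots, K\); a short sketch (the printed values assume no extra normalization inside the implementation):

    from pyg_spectral.nn.models import gen_theta

    theta = gen_theta(num_hops=4, scheme='appr', param=0.1)
    # expected: tensor([0.1, 0.09, 0.081, 0.0729, 0.06561]), length num_hops + 1

    # 'custom' takes the hop parameters directly as a float list.
    theta_custom = gen_theta(4, 'custom', [1.0, 0.5, 0.25, 0.125, 0.0625])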

class CppPrecFixed(in_layers: int | None = None, **kwargs)[source]

Bases: PrecomputedFixed

Decoupled structure with C++ propagation precomputation. Fixed scalar propagation parameters and accumulating precompute results.

preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Tensor | SparseTensor[source]

Preprocessing step that is not counted in forward() overhead. Mainly transforms the graph adjacency into the actual propagation matrix.

convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – Precomputed node embeddings.