pyg_spectral.nn.conv

class BaseMP(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]

Bases: MessagePassing

Base filter layer structure.

Parameters:
  • num_hops (int, default: 0) – total number of propagation hops.

  • hop (int, default: 0) – index of the current propagation hop handled by this layer.

  • cached (bool, default: True) – whether to cache the propagation matrix.

  • **kwargs – Additional arguments of torch_geometric.nn.conv.MessagePassing.

supports_batch: bool = True
supports_norm_batch: bool = True
name: Callable[[Any], str][source]
pargs: list[str] = []
param: dict[str, NewType(ParamTuple, tuple[str, tuple, dict[str, Any], Callable[[Any], str]])] = {}
classmethod register_classes(registry: dict[str, dict[str, Any]] | None = None) dict[source]

Register arguments for all subclasses.

Parameters:
  • name (dict[str, Callable[[Any], str]]) – Conv class logging path name.

  • pargs (dict[str, list[str]]) – Conv arguments from argparse.

  • pargs_default (dict[str, dict[str, Any]]) – Default values for model arguments. Not recommended.

  • param (dict[str, dict[str, ParamTuple]]) –

    Conv parameters to tune. Each ParamTuple holds the following fields (see the sketch after this list):

    • (str) parameter type,

    • (tuple) args for optuna.trial.suggest_,

    • (dict) kwargs for optuna.trial.suggest_,

    • (callable) format function to str.
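
For illustration, a single param entry following this layout might look as below. The key name 'alpha', its search range, and the formatter are assumptions for the sketch, not defaults of any particular conv class.

    # Hypothetical ParamTuple entry: (type string, suggest_* args, suggest_* kwargs, formatter).
    param = {
        'alpha': ('float', (0.0, 1.0), {'step': 0.01}, lambda x: f"{x:.2f}"),
    }
    # An optuna trial could consume such an entry roughly as:
    #   value = getattr(trial, 'suggest_' + param['alpha'][0])('alpha', *param['alpha'][1], **param['alpha'][2])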

_cache: Any | None[source]
reset_cache()[source]
get_propagate_mat(x: Tensor, edge_index: Tensor | SparseTensor) Tensor | SparseTensor[source]

Get matrices for propagate(). Called before each forward() call with the same input.

Parameters:
Variables:

propagate_mat (str) – propagation schemes, separated by ,. Each scheme starts with A or L for the adjacency or Laplacian matrix, optionally followed by +[p*]I or -[p*]I for scaling the diagonal, where p can be a float or an attribute name (see the example below).

Returns:

prop (SparseTensor) – propagation matrix
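
As an illustration of the scheme syntax (the attribute name alpha and the concrete combination are assumptions, not defaults of this class):

    # Two propagation schemes separated by ',':
    #   'A-I'        -> adjacency matrix with the diagonal shifted by -I
    #   'L+alpha*I'  -> Laplacian with the diagonal scaled by the layer attribute `alpha`
    propagate_mat = 'A-I,L+alpha*I'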

_get_propagate_mat(x: Tensor, edge_index: Tensor | SparseTensor) Tensor | SparseTensor[source]

Shadow function for get_propagate_mat().

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor, comp_scheme: str | None = None) dict[source]

Get matrices for forward(). Called during forward().

Parameters:
Returns:
  • out (Tensor) – output tensor (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix

_forward_theta(**kwargs)[source]
Variables:

theta (nn.Parameter | nn.Module) – transformation of propagation result before applying to the output.

_forward_out(**kwargs) Tensor[source]

Shadow function for calling _forward_theta() and accumulating results.

Returns:

out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

forward(**kwargs) dict[source]

Wrapper for distinguishing precomputed outputs. The arguments and returns should match the output of get_forward_mat().

_forward(x: Tensor, prop: Tensor | SparseTensor) dict[source]

Shadow function for forward(), to be implemented in subclasses, that does not calculate the final output. If self.supports_batch == True, it should not contain computations that require gradients (learnable operations). The dicts of arguments and returns should match each other.

Returns:

x (Tensor) – tensor for calculating out

message_and_aggregate(adj_t: Tensor | SparseTensor, x: Tensor) Tensor[source]

Fuses computations of message() and aggregate() into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called if it is implemented and propagation takes place based on a torch_sparse.SparseTensor or a torch.sparse.Tensor.
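
A minimal sketch of such a fused step, assuming the propagation matrix is held as a torch.sparse tensor (plain torch only; not the layer's actual code path):

    import torch

    adj_t = torch.eye(4).to_sparse()   # placeholder propagation matrix
    x = torch.randn(4, 8)              # node features
    out = torch.sparse.mm(adj_t, x)    # one hop, without materializing per-edge messages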

class AdjConv(num_hops: int = 0, hop: int = 0, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Linear filter using the normalized adjacency matrix for propagation.

Parameters:
name()[source]
pargs: list[str] = ['beta']
param: dict[str, ParamTuple] = {'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjConv.<lambda>>)}
_forward(x: Tensor, prop: Tensor | SparseTensor) tuple[source]
Returns:
  • x (Tensor) – current propagation result (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
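
A sketch of the hop-by-hop filtering this layer family builds up, with dense placeholders and assumed coefficients (the real layer uses sparse propagation and its own theta transformation):

    import torch

    A = torch.eye(4)                    # placeholder normalized adjacency
    x = torch.randn(4, 8)
    out, h = torch.zeros_like(x), x
    for theta_k in (0.5, 0.3, 0.2):     # assumed per-hop coefficients
        out = out + theta_k * h         # accumulate theta_k * A^k x
        h = A @ h                       # propagate to the next hop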

class AdjDiffConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: AdjConv

Linear filter using the normalized adjacency matrix for propagation. Preprocesses the features by the distinguishing matrix \(\alpha\mathbf{L} + \mathbf{I}\) (a sketch follows the parameter list below).

Parameters:
  • alpha (float | None, default: None) – scaling of the self-loop in the distinguishing matrix \(\alpha\mathbf{L} + \mathbf{I}\)

  • beta (float | None, default: None) – additional scaling of the self-loop in the adjacency matrix \(\mathbf{A} + \beta\mathbf{I}\), corresponding to improved in torch_geometric.nn.conv.GCNConv.

  • num_hops (int, default: 0) – args for BaseMP

  • hop (int, default: 0) – args for BaseMP

  • cached (bool, default: True) – args for BaseMP
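
A sketch of the \(\alpha\mathbf{L} + \mathbf{I}\) preprocessing described above, with dense placeholders (illustrative only; the alpha value is assumed):

    import torch

    L = torch.eye(4)          # placeholder normalized Laplacian
    x = torch.randn(4, 8)
    alpha = 0.5               # assumed value
    x = alpha * (L @ x) + x   # (alpha * L + I) x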

name()[source]
pargs: list[str] = ['alpha']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjDiffConv.<lambda>>)}
_forward(x: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • x (Tensor) – current propagation result (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix

class AdjiConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Iterative linear filter using the normalized adjacency matrix for augmented propagation.

Parameters:
supports_batch: bool = False
name()[source]
pargs: list[str] = ['alpha', 'beta']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjiConv.<lambda>>), 'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjiConv.<lambda>>)}
reset_parameters()[source]

Resets all learnable parameters of the module.

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

forward(out: Tensor, prop: Tensor | SparseTensor) dict[source]

Overrides the base forward() method.

Returns:
  • out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
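
A sketch of one augmented propagation step \(\mathbf{H} \leftarrow \alpha(\mathbf{A} + \beta\mathbf{I})\mathbf{H}\); the roles of alpha and beta follow the description of Adji2Conv below, and the concrete values are assumptions:

    import torch

    A = torch.eye(4)                      # placeholder normalized adjacency
    out = torch.randn(4, 8)
    alpha, beta = 0.9, 1.0                # assumed scalars
    out = alpha * (A @ out + beta * out)  # alpha * (A + beta * I) @ out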

class Adji2Conv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: AdjiConv

Iterative linear filter using the 2-hop normalized adjacency matrix for augmented propagation.

Parameters:
  • num_hops (int, default: 0) – total number of propagation hops. NOTE that there are only \(\text{num_hops} / 2\) conv layers.

  • alpha (float | None, default: None) – decay factor \(\alpha(\mathbf{A} + \beta\mathbf{I})\). Can be \(\alpha < 0\).

  • beta (float | None, default: None) – scaling of the skip connection, i.e., the self-loop in the adjacency matrix, corresponding to improved in torch_geometric.nn.conv.GCNConv and eps in torch_geometric.nn.conv.GINConv. Can be \(\beta < 0\). Set beta = 'var' to make beta a learnable parameter.

  • hop (int, default: 0) – args for BaseMP

  • cached (bool, default: True) – args for BaseMP

name()[source]
message_and_aggregate(adj_t: Tensor | SparseTensor, x: Tensor) Tensor[source]

Perform 2-hop propagation.
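
A sketch of the 2-hop step as two consecutive sparse matmuls, assuming adj_t is held as a torch.sparse tensor (illustrative only):

    import torch

    adj_t = torch.eye(4).to_sparse()    # placeholder normalized adjacency
    x = torch.randn(4, 8)
    out = torch.sparse.mm(adj_t, torch.sparse.mm(adj_t, x))   # A @ (A @ x) = A^2 @ x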

class AdjSkipConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Iterative linear filter with skip connection.

Parameters:
name()[source]
pargs: list[str] = ['alpha', 'beta']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjSkipConv.<lambda>>), 'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjSkipConv.<lambda>>)}
reset_parameters()[source]

Resets all learnable parameters of the module.

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward_out(**kwargs) Tensor[source]
Returns:

out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

_forward(out: Tensor, h: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix

class AdjSkip2Conv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: AdjSkipConv

Iterative linear filter with 2-hop propagation and skip connection.

Parameters:
name()[source]
message_and_aggregate(adj_t: Tensor | SparseTensor, x: Tensor) Tensor[source]

Perform 2-hop propagation.

class AdjResConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Iterative linear filter with residual connection.

Parameters:
name()[source]
pargs: list[str] = ['alpha', 'beta']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function AdjResConv.<lambda>>), 'beta': ('float', (0.01, 2.0), {'step': 0.01}, <function AdjResConv.<lambda>>)}
reset_parameters()[source]

Resets all learnable parameters of the module.

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward_out(**kwargs) Tensor[source]
Returns:

out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

_forward(out: Tensor, x_0: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

  • x_0 (Tensor) – initial input (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
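
A sketch of a propagate-plus-residual update that re-injects a scaled copy of the initial features x_0 at every hop; the residual weight alpha and the placement of the theta transformation are assumptions:

    import torch

    A = torch.eye(4)              # placeholder normalized adjacency
    out = torch.randn(4, 8)       # running output
    x_0 = torch.randn(4, 8)       # initial input kept for the residual
    alpha = 0.5                   # assumed residual weight
    out = A @ out + alpha * x_0   # propagate, then add the residual to x_0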

class LapiConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Iterative linear filter using the normalized Laplacian matrix. Used in AdaGNN.

Paper:

AdaGNN: Graph Neural Networks with Adaptive Frequency Response Filter

Ref:

https://github.com/yushundong/AdaGNN/blob/main/layers.py

Parameters:
supports_batch: bool = False
name()[source]
_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

forward(out: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
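
A sketch of an AdaGNN-style update, where a per-channel learnable vector rescales the Laplacian smoothing term before it is subtracted (plain torch, illustrative only):

    import torch

    L = torch.eye(4)                          # placeholder normalized Laplacian
    x = torch.randn(4, 8)
    phi = torch.nn.Parameter(torch.zeros(8))  # per-channel frequency response
    out = x - (L @ x) * phi                   # x - L x diag(phi)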

class HornerConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with adjacency propagation and explicit residual.

Paper:

Clenshaw Graph Neural Networks

Ref:

https://github.com/yuziGuo/ClenshawGNN/blob/master/layers/HornerConv.py

Parameters:
  • alpha (float | None, default: None) – transformation strength.

  • num_hops (int, default: 0) – args for BaseMP

  • hop (int, default: 0) – args for BaseMP

  • cached (bool, default: True) – args for BaseMP

name()[source]
pargs: list[str] = ['alpha']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.01, 10), {'log': True}, <function HornerConv.<lambda>>)}
reset_parameters()[source]

Resets all learnable parameters of the module.

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward_out(**kwargs) Tensor[source]
Returns:

out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

_forward(out: Tensor, x_0: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

  • x_0 (Tensor) – initial input (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
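
A sketch of a Horner-style evaluation with an explicit residual to the initial features; the per-hop coefficients and the residual weight alpha are assumptions, not the layer's actual parameterization:

    import torch

    P = torch.eye(4)                    # placeholder propagation matrix
    x_0 = torch.randn(4, 8)             # initial input
    alpha = 0.5                         # assumed residual strength
    out = torch.zeros_like(x_0)
    for theta_k in (0.2, 0.3, 0.5):     # assumed coefficients, highest order first
        out = P @ out + alpha * theta_k * x_0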

class ClenshawConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with Chebyshev Polynomials and explicit residual.

Paper:

Clenshaw Graph Neural Networks

Ref:

https://github.com/yuziGuo/ClenshawGNN/blob/master/models/ChebClenshawNN.py

Parameters:
  • alpha (float | None, default: None) – transformation strength.

  • num_hops (int, default: 0) – args for BaseMP

  • hop (int, default: 0) – args for BaseMP

  • cached (bool, default: True) – args for BaseMP

name()[source]
pargs: list[str] = ['alpha']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.01, 10), {'log': True}, <function ClenshawConv.<lambda>>)}
reset_parameters()[source]

Resets all learnable parameters of the module.

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward_out(**kwargs) Tensor[source]
Returns:

out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

_forward(out: Tensor, out_1: Tensor, x_0: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • out (Tensor) – output tensor for accumulating propagation results (shape: \((|\mathcal{V}|, F)\))

  • out_1 (Tensor) – propagation result of \(k-2\) (shape: \((|\mathcal{V}|, F)\))

  • x_0 (Tensor) – initial input (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
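
A sketch of the Clenshaw-style second-order recurrence used to accumulate a Chebyshev series on the propagation matrix; the coefficients are assumed and the boundary handling of the full Clenshaw evaluation is omitted:

    import torch

    P = torch.eye(4)                    # placeholder propagation matrix
    x_0 = torch.randn(4, 8)             # initial input re-injected at every hop
    out_km2 = torch.zeros_like(x_0)     # result two hops back
    out_km1 = torch.zeros_like(x_0)     # result one hop back
    for theta_k in (0.5, 0.3, 0.2):     # assumed coefficients, highest order first
        out = 2 * (P @ out_km1) - out_km2 + theta_k * x_0
        out_km2, out_km1 = out_km1, out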

class ChebConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with Chebyshev Polynomials.

Paper:

Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited

Ref:

https://github.com/ivam-he/ChebNetII/blob/main/main/Chebbase_pro.py

Parameters:
  • alpha (float | None, default: None) – decay factor for each hop \(1/k^\alpha\).

  • num_hops (int, default: 0) – args for BaseMP

  • hop (int, default: 0) – args for BaseMP

  • cached (bool, default: True) – args for BaseMP

name()[source]
pargs: list[str] = ['alpha']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function ChebConv.<lambda>>)}
_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward(x: Tensor, x_1: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • x (Tensor) – propagation result of \(k-1\) (shape: \((|\mathcal{V}|, F)\))

  • x_1 (Tensor) – propagation result of \(k-2\) (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
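
A sketch of the Chebyshev three-term recurrence \(T_k(\mathbf{P})x = 2\mathbf{P}\,T_{k-1}(\mathbf{P})x - T_{k-2}(\mathbf{P})x\) with dense placeholders (the \(1/k^\alpha\) decay and theta transformation of the actual layer are omitted):

    import torch

    P = torch.eye(4)                 # placeholder propagation matrix
    x = torch.randn(4, 8)
    x_km2, x_km1 = x, P @ x          # T_0(P) x and T_1(P) x
    x_k = 2 * (P @ x_km1) - x_km2    # T_2(P) x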

class ChebIIConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with Chebyshev-II Polynomials.

Paper:

Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited

Ref:

https://github.com/ivam-he/ChebNetII/blob/main/main/ChebnetII_pro.py

Parameters:
name()[source]
coeffs_data = None
reset_parameters()[source]

Resets all learnable parameters of the module.

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Variables:

thetas (Tensor) – learnable or fixed (for decoupled or iterative models, respectively) scalar parameters representing cheb(x)

_forward_theta(**kwargs)[source]
Variables:

theta (nn.Parameter | nn.Module) – transformation of propagation result before applying to the output.

_forward(x: Tensor, x_1: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • x (Tensor) – propagation result of \(k-1\) (shape: \((|\mathcal{V}|, F)\))

  • x_1 (Tensor) – propagation result of \(k-2\) (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
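
A sketch of the ChebNetII-style reparameterization, where the hop-k weight is obtained by Chebyshev interpolation of learnable values gamma_j placed at the Chebyshev nodes; variable names and the unconstrained gamma initialization are assumptions:

    import math
    import torch

    K, k = 4, 2
    gamma = torch.nn.Parameter(torch.ones(K + 1))                       # learnable node values
    nodes = torch.cos((torch.arange(K + 1) + 0.5) * math.pi / (K + 1))  # Chebyshev nodes
    w_k = 2 / (K + 1) * sum(
        gamma[j] * math.cos(k * math.acos(nodes[j].item())) for j in range(K + 1)
    )                                                                   # interpolated hop-k weight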

class BernConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with Bernstein Polynomials. We propose a new implementation reducing memory overhead from \(O(KFn)\) to \(O(3Fn)\).

Paper:

BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation

Ref:

https://github.com/ivam-he/BernNet/blob/main/NodeClassification/Bernpro.py

Parameters:
name()[source]
_forward_theta(**kwargs)[source]
Variables:

theta (nn.Parameter | nn.Module) – transformation of propagation result before applying to the output.

_forward(x: Tensor, prop_0: Tensor | SparseTensor, prop_1: Tensor | SparseTensor) dict[source]
Returns:
  • x (Tensor) – propagation result through \(2I-L\) (shape: \((|\mathcal{V}|, F)\))

  • prop_0 (SparseTensor) – \(L\)

  • prop_1 (SparseTensor) – \(2I-L\)
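
A sketch of how one Bernstein-basis term \(\tfrac{1}{2^K}\binom{K}{k}(2\mathbf{I}-\mathbf{L})^{K-k}\mathbf{L}^{k}x\) can be built by repeated propagation with \(L\) and \(2I-L\), so only a constant number of feature tensors needs to be kept (illustrative, dense placeholders):

    import math
    import torch

    K, k = 4, 2
    L = torch.eye(4)                    # placeholder normalized Laplacian
    x = torch.randn(4, 8)
    h = x
    for _ in range(k):
        h = L @ h                       # apply L, k times
    for _ in range(K - k):
        h = 2 * h - L @ h               # apply (2I - L), K - k times
    h = math.comb(K, k) / 2 ** K * h    # Bernstein scaling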

class LegendreConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with Legendre Polynomials.

Paper:

How Powerful are Spectral Graph Neural Networks

Ref:

https://github.com/GraphPKU/JacobiConv

Paper:

Improved Modeling and Generalization Capabilities of Graph Neural Networks With Legendre Polynomials

Ref:

https://github.com/12chen20/LegendreNet

Parameters:
name()[source]
_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward(x: Tensor, x_1: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • x (Tensor) – propagation result of \(k-1\) (shape: \((|\mathcal{V}|, F)\))

  • x_1 (Tensor) – propagation result of \(k-2\) (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
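
A sketch of the Legendre three-term recurrence \(k\,P_k(\mathbf{P})x = (2k-1)\mathbf{P}\,P_{k-1}(\mathbf{P})x - (k-1)P_{k-2}(\mathbf{P})x\) with dense placeholders (the layer's theta transformation is omitted):

    import torch

    P = torch.eye(4)            # placeholder propagation matrix
    x = torch.randn(4, 8)
    x_km2, x_km1 = x, P @ x     # P_0(P) x and P_1(P) x
    k = 2
    x_k = ((2 * k - 1) * (P @ x_km1) - (k - 1) * x_km2) / k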

class JacobiConv(num_hops: int = 0, hop: int = 0, alpha: float | None = None, beta: float | None = None, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with Jacobi Polynomials.

Paper:

How Powerful are Spectral Graph Neural Networks

Ref:

https://github.com/GraphPKU/JacobiConv

Parameters:
  • alpha (float | None, default: None) – hyperparameters in Jacobi polynomials.

  • beta (float | None, default: None) – hyperparameters in Jacobi polynomials.

  • num_hops (int, default: 0) – args for BaseMP

  • hop (int, default: 0) – args for BaseMP

  • cached (bool, default: True) – args for BaseMP

name()[source]
pargs: list[str] = ['alpha', 'beta']
param: dict[str, ParamTuple] = {'alpha': ('float', (0.0, 1.0), {'step': 0.01}, <function JacobiConv.<lambda>>), 'beta': ('float', (0.0, 1.0), {'step': 0.01}, <function JacobiConv.<lambda>>)}
_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward(x: Tensor, x_1: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • x (Tensor) – propagation result of \(k-1\) (shape: \((|\mathcal{V}|, F)\))

  • x_1 (Tensor) – propagation result of \(k-2\) (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix

class FavardConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with the basis given by Favard's theorem.

Paper:

Graph Neural Networks with Learnable and Optimal Polynomial Bases

Ref:

https://github.com/yuziGuo/FarOptBasis/blob/master/layers/FavardNormalConv.py

Parameters:
supports_batch: bool = False
name()[source]
_init_with_theta()[source]
reset_parameters()[source]

Resets all learnable parameters of the module.

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]
Returns:
  • out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

  • alpha_1 – parameter for \(k-1\)

static _mul_coeff(x, coeff, pos=True)[source]
static _div_coeff(x, coeff, pos=True)[source]
_forward(x: Tensor, x_1: Tensor, prop: Tensor | SparseTensor, alpha_1: Parameter | Module) dict[source]
Returns:
  • x (Tensor) – propagation result of \(k-1\) (shape: \((|\mathcal{V}|, F)\))

  • x_1 (Tensor) – propagation result of \(k-2\) (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix

  • alpha_1 – parameter for \(k-1\)
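
A sketch of a generic three-term recurrence with learnable per-hop scalars, as permitted by Favard's theorem; the exact normalization used by this layer (see the reference) is not reproduced here:

    import torch

    P = torch.eye(4)                                # placeholder propagation matrix
    x_km2 = torch.zeros(4, 8)                       # result two hops back
    x_km1 = torch.randn(4, 8)                       # result one hop back
    a_k = torch.nn.Parameter(torch.tensor(1.0))     # assumed learnable coefficients
    b_k = torch.nn.Parameter(torch.tensor(0.5))
    c_k = torch.nn.Parameter(torch.tensor(0.25))
    x_k = a_k * (P @ x_km1) + b_k * x_km1 + c_k * x_km2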

class OptBasisConv(num_hops: int = 0, hop: int = 0, cached: bool = True, **kwargs)[source]

Bases: BaseMP

Convolutional layer with optimal adaptive basis.

Paper:

Graph Neural Networks with Learnable and Optimal Polynomial Bases

Ref:

https://github.com/yuziGuo/FarOptBasis/blob/master/layers/NormalBasisConv.py

Parameters:
name()[source]
_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward(x: Tensor, x_1: Tensor, prop: Tensor | SparseTensor) dict[source]
Returns:
  • x (Tensor) – propagation result of \(k-1\) (shape: \((|\mathcal{V}|, F)\))

  • x_1 (Tensor) – propagation result of \(k-2\) (shape: \((|\mathcal{V}|, F)\))

  • prop (Adj) – propagation matrix
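
A sketch of the optimal-basis step: propagate, orthogonalize channel-wise against the two previous basis vectors, then renormalize (illustrative; not the layer's exact numerics):

    import torch

    P = torch.eye(4)                                    # placeholder propagation matrix
    x_km2 = torch.zeros(4, 8)                           # basis vector two hops back
    x_km1 = torch.randn(4, 8)                           # basis vector one hop back
    x_km1 = x_km1 / x_km1.norm(dim=0, keepdim=True)     # unit-norm per channel
    h = P @ x_km1
    h = h - (h * x_km1).sum(dim=0) * x_km1              # remove the k-1 component
    h = h - (h * x_km2).sum(dim=0) * x_km2              # remove the k-2 component
    x_k = h / h.norm(dim=0, keepdim=True).clamp(min=1e-8)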

class ACMConv(num_hops: int = 0, hop: int = 0, alpha: int | None = None, cached: bool = True, out_channels: int | None = None, **kwargs)[source]

Bases: BaseMP

Convolutional layer of FBGNN & ACMGNN(I & II).

Paper:

Revisiting Heterophily For Graph Neural Networks

Paper:

Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks

Ref:

https://github.com/SitaoLuan/ACM-GNN/blob/main/ACM-Geometric/layers.py

Parameters:
  • alpha (int | None, default: None) – variant I (propagate first) or II (act first)

  • num_hops (int, default: 0) – args for BaseMP

  • hop (int, default: 0) – args for BaseMP

  • cached (bool, default: True) – args for BaseMP

supports_batch: bool = False
name()[source]
pargs: list[str] = ['alpha']
_init_with_theta()[source]
Variables:

theta (torch.nn.ModuleDict) – Linear transformation for each scheme.

reset_parameters()[source]

Resets all learnable parameters of the module.

_get_forward_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward() when self.comp_scheme == 'forward'.

Returns:

out (Tensor) – initial output tensor (shape: \((|\mathcal{V}|, F)\))

_get_convolute_mat(x: Tensor, edge_index: Tensor | SparseTensor) dict[source]

The returned dict should match the argument list of forward().

_forward_theta(x, scheme)[source]
Variables:

theta (torch.nn.ModuleDict) – Linear transformation for each scheme.

forward(out: Tensor, prop_0: Tensor | SparseTensor, prop_1: Tensor | SparseTensor) dict[source]
Returns:
  • out (Tensor) – current propagation result (shape: \((|\mathcal{V}|, F)\))

  • prop_0, prop_1 (SparseTensor) – propagation matrices
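
A sketch of the ACM-style mixing of low-pass, high-pass, and identity channels, each with its own linear map, combined by row-wise softmax weights; the channel names, gating layout, and placeholder filters are assumptions for the sketch:

    import torch

    n, F = 4, 8
    x = torch.randn(n, F)
    A = torch.eye(n)                                   # placeholder low-pass filter
    L = torch.eye(n) - A                               # placeholder high-pass filter
    lin = torch.nn.ModuleDict({s: torch.nn.Linear(F, F) for s in ('low', 'high', 'id')})
    gate = torch.nn.ModuleDict({s: torch.nn.Linear(F, 1) for s in ('low', 'high', 'id')})
    h = {'low': lin['low'](A @ x), 'high': lin['high'](L @ x), 'id': lin['id'](x)}
    w = torch.softmax(torch.cat([gate[s](h[s]) for s in h], dim=1), dim=1)   # (n, 3) mixing weights
    out = sum(w[:, i:i + 1] * h[s] for i, s in enumerate(h))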