pyg_spectral.nn.models

class BaseNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: Module

Base NN structure with MLP before and after convolution layers.

supports_edge_weight: Final[bool] = False
supports_edge_attr: Final[bool] = False
supports_batch: Final[bool] = False
supports_norm_batch: Final[bool][source]
init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) List[int][source]
init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
_set_conv_attr(key: str) Callable[source]
reset_cache()[source]
reset_parameters()[source]
get_optimizer(dct)[source]
preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Any[source]

Preprocessing step that is not counted in the forward() overhead; it mainly transforms the graph adjacency into the actual propagation matrix.

convolute(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]

Decoupled propagation step for calling the convolutional module.

forward(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
Parameters:
  • x (Tensor) – Node features from torch_geometric.data.Data

  • edge_index (Tensor | SparseTensor) – Graph connectivity from torch_geometric.data.Data

  • batch (Tensor | None, default: None) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. Only needs to be passed in case the underlying normalization layers require the batch information.

  • batch_size (int | None, default: None) – The number of examples \(B\). Automatically calculated if not given. Only needs to be passed in case the underlying normalization layers require the batch information.
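The call pattern shared by BaseNN subclasses is: construct the model, run preprocess() once on the raw graph, then call forward(). A minimal sketch, assuming a hypothetical filter name 'AdjConv' (substitute any conv registered in pyg_spectral.nn.conv) and that preprocess() returns the propagation matrix to be fed back into forward():

```python
import torch
from pyg_spectral.nn.models import DecoupledVar  # any BaseNN subclass follows the same pattern

# Toy graph: 6 nodes, 8 input features, a ring of edges.
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])

# 'AdjConv' is a placeholder conv name; filter-specific options go through **kwargs.
model = DecoupledVar(
    conv='AdjConv', num_hops=4,
    in_channels=8, hidden_channels=16, out_channels=3,
    in_layers=1, out_layers=1)

prop = model.preprocess(x, edge_index)  # one-off step, not counted in forward() overhead
out = model(x, prop)                    # MLP-in -> propagation -> MLP-out
```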

class BaseNNCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNN

Base NN structure with multiple conv channels.

init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) List[int][source]
Variables:

channel_list – width for each conv channel

_set_conv_attr(key: str) List[Callable][source]
reset_cache()[source]
reset_parameters()[source]
get_optimizer(dct)[source]
preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Any[source]

Preprocessing step that is not counted in the forward() overhead; it mainly transforms the graph adjacency into the actual propagation matrix.

convolute(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]

Decoupled propagation step for calling the convolutional module.
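The compose variants run several conv channels side by side; the hidden width is split among them and recorded in channel_list (documented above under init_channel_list). A sketch, again with placeholder conv names assumed to be given as a comma-separated string:

```python
from pyg_spectral.nn.models import DecoupledFixedCompose

# Two parallel conv channels (placeholder names, comma-separated).
model = DecoupledFixedCompose(
    conv='AdjConv,AdjConv', num_hops=4,
    in_channels=8, hidden_channels=16, out_channels=3,
    in_layers=1, out_layers=1)

# channel_list records the width assigned to each conv channel.
print(model.channel_list)
```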

class Iterative(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNN

Iterative structure with a matrix transformation at each hop of propagation.

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class IterativeCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNNCompose

Iterative structure with a matrix transformation at each hop of propagation.

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class DecoupledFixed(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.

Note

The conv is applied on every forward() call. Not to be mixed with Precomputed models.

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
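Conceptually, a decoupled filter with fixed scalars computes \(\mathbf{y} = \sum_k \theta_k \hat{\mathbf{A}}^k \mathbf{x}\), where the \(\theta_k\) are constants (personalized-PageRank weights are used below as one common choice) rather than learned. A plain-PyTorch sketch of that recurrence, independent of the library's own conv implementations:

```python
import torch

def fixed_decoupled_propagate(x, adj, num_hops=4, alpha=0.1):
    """y = sum_k theta_k * A^k x with fixed, PPR-style hop weights theta_k."""
    theta = [alpha * (1 - alpha) ** k for k in range(num_hops + 1)]
    theta[-1] = (1 - alpha) ** num_hops  # fold the tail so the weights sum to 1
    out = theta[0] * x
    h = x
    for k in range(1, num_hops + 1):
        h = adj @ h                      # one hop of propagation, no feature transformation
        out = out + theta[k] * h
    return out
```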
class DecoupledVar(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.

Note

The conv is applied on every forward() call. Not to be mixed with Precomputed models.

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
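DecoupledVar follows the same recurrence but treats the hop coefficients as trainable parameters. A conceptual plain-PyTorch sketch of the difference, not the library's implementation:

```python
import torch
from torch import nn

class VarDecoupledPropagate(nn.Module):
    """y = sum_k theta_k * A^k x, with the hop weights theta_k learned by gradient descent."""
    def __init__(self, num_hops=4):
        super().__init__()
        self.theta = nn.Parameter(torch.full((num_hops + 1,), 1.0 / (num_hops + 1)))

    def forward(self, x, adj):
        out = self.theta[0] * x
        h = x
        for k in range(1, self.theta.numel()):
            h = adj @ h                  # propagation only; the MLPs live outside this module
            out = out + self.theta[k] * h
        return out
```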
class DecoupledFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNNCompose

Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class DecoupledVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNNCompose

Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
class PrecomputedFixed(in_layers: int | None = None, **kwargs)[source]

Bases: DecoupledFixed

Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters; precompute results are accumulated.

Note

Propagation is applied only in convolute(). Not to be mixed with Decoupled models.

convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – Precomputed node embeddings.

forward(x: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
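With the precomputed variants, propagation runs once in convolute(), outside the training loop, and forward() afterwards only sees the resulting embeddings (note that it no longer takes edge_index). A sketch of the intended split, with the same placeholder conv name and the assumption that preprocess() returns the propagation matrix:

```python
import torch
from pyg_spectral.nn.models import PrecomputedFixed

x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])

# Placeholder conv name; the MLP after propagation has out_layers layers.
model = PrecomputedFixed(
    conv='AdjConv', num_hops=4,
    in_channels=8, hidden_channels=16, out_channels=3, out_layers=2)

# One-off, non-differentiable propagation, done before/outside the training loop.
with torch.no_grad():
    prop = model.preprocess(x, edge_index)   # assumed to return the propagation matrix
    embed = model.convolute(x, prop)         # accumulated node embeddings

out = model(embed)   # training afterwards only touches the MLP part
```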
class PrecomputedVar(in_layers: int | None = None, **kwargs)[source]

Bases: DecoupledVar

Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters; all intermediate precompute results are stored.

Note

Propagation is applied only in convolute(). Not to be mixed with Decoupled models.

convolute(x: Tensor, edge_index: Tensor | SparseTensor) list[source]

Decoupled propagation step for calling the convolutional module. _forward() should not contain differentiable (learnable) computations.

Returns:

embed (list) – Precomputed node embeddings of each hop; each has shape \((|\mathcal{V}|, F, |convs|+1)\).

forward(xs: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
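Because its hop coefficients stay learnable, PrecomputedVar keeps every intermediate hop: convolute() returns the per-hop embeddings and forward() consumes that stack as xs, applying the coefficients inside the differentiable part. A sketch under the same placeholder assumptions:

```python
import torch
from pyg_spectral.nn.models import PrecomputedVar

x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])

model = PrecomputedVar(
    conv='AdjConv', num_hops=4,
    in_channels=8, hidden_channels=16, out_channels=3, out_layers=2)

with torch.no_grad():
    prop = model.preprocess(x, edge_index)
    xs = model.convolute(x, prop)   # embeddings of every hop (see the documented return shape)

out = model(xs)   # learnable hop coefficients are applied here, inside the differentiable part
```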
class PrecomputedFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: DecoupledFixedCompose

Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters; precompute results are accumulated.

convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – Precomputed node embeddings, shape \((|\mathcal{V}|, F, Q)\).

forward(x: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
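In the composed variant each of the Q conv channels contributes its own precomputed embedding, hence the extra Q dimension in the return shape. A shape-oriented sketch with two placeholder channels (so Q = 2):

```python
import torch
from pyg_spectral.nn.models import PrecomputedFixedCompose

x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])

# Two conv channels (placeholder names), hence Q = 2 in the embedding shape.
model = PrecomputedFixedCompose(
    conv='AdjConv,AdjConv', num_hops=4,
    in_channels=8, hidden_channels=16, out_channels=3, out_layers=2)

with torch.no_grad():
    prop = model.preprocess(x, edge_index)
    embed = model.convolute(x, prop)

print(embed.shape)   # (|V|, F, Q) per the documented return shape; Q = 2 conv channels here
```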
class PrecomputedVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: DecoupledVarCompose

Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters; all intermediate precompute results are stored.

convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – Precomputed node embeddings of each hop, stacked into shape \((|\mathcal{V}|, F, Q, |convs|+1)\).

forward(xs: Tensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
class AdaGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure with a diagonal (diag) transformation at each hop of propagation.

Paper:

AdaGNN: Graph Neural Networks with Adaptive Frequency Response Filter

Ref:

https://github.com/yushundong/AdaGNN

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
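The per-hop diagonal transformation means each propagation step scales every feature channel by its own learnable coefficient (a diagonal filter) instead of a dense weight matrix. A conceptual plain-PyTorch sketch of one such hop, not the library's exact implementation:

```python
import torch
from torch import nn

class DiagHop(nn.Module):
    """One hop whose filter is a learnable per-feature (diagonal) coefficient."""
    def __init__(self, num_features):
        super().__init__()
        self.phi = nn.Parameter(torch.full((num_features,), 0.5))

    def forward(self, x, lap):
        # x: (N, F) node features; lap: (N, N) normalized graph Laplacian.
        # Each feature channel gets its own frequency response via phi.
        return x - (lap @ x) * self.phi
```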
class ACMGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNN

Iterative structure for ACM conv.

Paper:

Revisiting Heterophily For Graph Neural Networks

Paper:

Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks

Ref:

https://github.com/SitaoLuan/ACM-GNN

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
class ACMGNNDec(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]

Bases: BaseNN

Decoupled structure for ACM conv.

Paper:

Revisiting Heterophily For Graph Neural Networks

Paper:

Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks

Ref:

https://github.com/SitaoLuan/ACM-GNN

init_conv(conv: str, num_hops: int, lib: str, **kwargs) MessagePassing[source]
reset_parameters()[source]
class CppCompFixed(in_layers: int | None = None, **kwargs)[source]

Bases: PrecomputedFixed

Decoupled structure with C++ propagation precomputation. Fixed scalar propagation parameters and accumulating precompute results.

preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Tensor | SparseTensor[source]

Preprocessing step that is not counted in the forward() overhead; it mainly transforms the graph adjacency into the actual propagation matrix.

convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]

Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().

Returns:

embed (Tensor) – Precomputed node embeddings.