pyg_spectral.nn.models
- class BaseNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: Module
Base NN structure with MLP before and after convolution layers.
- Parameters:
  - conv (str) – Name of pyg_spectral.nn.conv module.
  - num_hops (int, default: 0) – Total number of conv hops.
  - in_channels (int | None, default: None) – Size of each input sample.
  - hidden_channels (int | None, default: None) – Size of each hidden sample.
  - out_channels (int | None, default: None) – Size of each output sample.
  - in_layers (int | None, default: None) – Number of MLP layers before conv.
  - out_layers (int | None, default: None) – Number of MLP layers after conv.
  - dropout_lin (float | list[float], default: 0.0) – Dropout probability for both MLPs.
  - dropout_conv (float, default: 0.0) – Dropout probability before conv.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - lib_conv – Parent module library other than pyg_spectral.nn.conv.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- pargs: list[str] = ['conv', 'num_hops', 'in_layers', 'out_layers', 'in_channels', 'hidden_channels', 'out_channels', 'dropout_lin', 'dropout_conv']
- param: dict[str, ParamTuple] = {'dropout_conv': ('float', (0.0, 0.9), {'step': 0.1}, <function BaseNN.<lambda>>), 'dropout_lin': ('float', (0.0, 0.9), {'step': 0.1}, <function BaseNN.<lambda>>), 'hidden_channels': ('categorical', ([16, 32, 64, 128, 256],), {}, <function BaseNN.<lambda>>), 'in_layers': ('int', (1, 3), {}, <function BaseNN.<lambda>>), 'num_hops': ('int', (2, 30), {'step': 2}, <function BaseNN.<lambda>>), 'out_layers': ('int', (1, 3), {}, <function BaseNN.<lambda>>)}, where ParamTuple = tuple[str, tuple, dict[str, Any], Callable[[Any], str]]
- classmethod register_classes(registry: dict[str, dict[str, Any]] | None = None) dict[source]
Register args for all subclasses.
- Parameters:
  - name (dict[str, str]) – Model class logging path name.
  - conv_name (dict[str, Callable[[str, Any], str]]) – Wrap conv logging path name.
  - module (dict[str, str]) – Module for importing the model.
  - pargs (dict[str, list[str]]) – Model arguments from argparse.
  - pargs_default (dict[str, dict[str, Any]]) – Default values for model arguments. Not recommended.
  - param (dict[str, dict[str, ParamTuple]]) – Model parameters to tune. Each ParamTuple holds: (str) the parameter type, (tuple) args for optuna.trial.suggest_, (dict) kwargs for optuna.trial.suggest_, and (callable) a format function to str. A sketch of how such a tuple drives an Optuna call is shown below.
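For illustration, a minimal sketch of unpacking a ParamTuple into an Optuna suggest call. The dispatcher and objective are hypothetical glue written for this example; only the optuna.trial.suggest_* methods are real API:

```python
import optuna

def suggest_from_param(trial: optuna.Trial, name: str,
                       ptype: str, args: tuple, kwargs: dict):
    # ('float', (0.0, 0.9), {'step': 0.1}) -> trial.suggest_float(name, 0.0, 0.9, step=0.1)
    suggest = getattr(trial, f'suggest_{ptype}')
    return suggest(name, *args, **kwargs)

def objective(trial: optuna.Trial) -> float:
    dropout_conv = suggest_from_param(trial, 'dropout_conv', 'float', (0.0, 0.9), {'step': 0.1})
    num_hops = suggest_from_param(trial, 'num_hops', 'int', (2, 30), {'step': 2})
    hidden = suggest_from_param(trial, 'hidden_channels', 'categorical', ([16, 32, 64, 128, 256],), {})
    # ... build the model with these values, train, and return a validation metric ...
    return 0.0
```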
- init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) list[int][source]
- preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Any[source]
Preprocessing step that is not counted in forward() overhead; it mainly transforms the graph adjacency into the actual propagation matrix.
- convolute(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
Decoupled propagation step for calling the convolutional module.
- forward(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor[source]
- Parameters:
  - x (Tensor) – from torch_geometric.data.Data
  - edge_index (Tensor | SparseTensor) – from torch_geometric.data.Data
  - batch (Tensor | None, default: None) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. Only needs to be passed in case the underlying normalization layers require the batch information.
  - batch_size (int | None, default: None) – The number of examples \(B\). Automatically calculated if not given. Only needs to be passed in case the underlying normalization layers require the batch information.
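As a usage illustration, the following sketch builds a concrete subclass and runs preprocess() plus a forward pass. The conv name 'AdjConv', the exact keyword set, and the preprocess/forward wiring are assumptions based on the signatures above; substitute any module registered under pyg_spectral.nn.conv:

```python
import torch
from pyg_spectral.nn.models import DecoupledFixed  # any BaseNN subclass

# Toy graph: 4 nodes on a cycle, 64 input features, 3 classes.
x = torch.randn(4, 64)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])

model = DecoupledFixed(
    conv='AdjConv', num_hops=4,
    in_channels=64, hidden_channels=32, out_channels=3,
    in_layers=1, out_layers=1,
    theta_scheme='appr', theta_param=0.5)

model.preprocess(x, edge_index)  # adjacency -> propagation matrix, not timed in forward()
out = model(x, edge_index)       # logits, shape [4, 3]
```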
- class BaseNNCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNN
Base NN structure with multiple conv channels.
- Parameters:
  - combine (str) – How to combine different channels of convs (sum, sum_weighted, or cat; illustrated below).
  - conv (str) – Name of pyg_spectral.nn.conv module.
  - num_hops (int, default: 0) – Total number of conv hops.
  - in_channels (int | None, default: None) – Size of each input sample.
  - hidden_channels (int | None, default: None) – Size of each hidden sample.
  - out_channels (int | None, default: None) – Size of each output sample.
  - in_layers (int | None, default: None) – Number of MLP layers before conv.
  - out_layers (int | None, default: None) – Number of MLP layers after conv.
  - dropout_lin (float | list[float], default: 0.0) – Dropout probability for both MLPs.
  - dropout_conv (float, default: 0.0) – Dropout probability before conv.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - lib_conv – Parent module library other than pyg_spectral.nn.conv.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- pargs: list[str] = ['combine']
- param: dict[str, ParamTuple] = {'combine': ('categorical', ['sum', 'sum_weighted', 'cat'], {}, <function BaseNNCompose.<lambda>>)}
- init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) list[int][source]
- Variables:
  - channel_list – Width of each conv channel.
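To make the three combine modes concrete, a self-contained sketch of merging the outputs of three conv channels (the tensor shapes are illustrative assumptions):

```python
import torch

# Outputs of three conv channels, each of shape [num_nodes, F].
outs = [torch.randn(4, 16) for _ in range(3)]

combined_sum = torch.stack(outs).sum(dim=0)              # 'sum'          -> [4, 16]
gamma = torch.ones(3)                                    # learnable weights in the model
combined_wsum = sum(g * o for g, o in zip(gamma, outs))  # 'sum_weighted' -> [4, 16]
combined_cat = torch.cat(outs, dim=-1)                   # 'cat'          -> [4, 48]
```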
- class Iterative(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNN
Iterative structure with a matrix transformation at each hop of propagation.
- Parameters:
  - bias (bool | None) – Whether to learn an additive bias in conv.
  - weight_initializer (str | None) – The initializer for the weight matrix ("glorot", "uniform", "kaiming_uniform", or None).
  - bias_initializer (str | None) – The initializer for the bias vector ("zeros" or None).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'Iterative'
- class IterativeCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNNCompose
Iterative structure with a matrix transformation at each hop of propagation.
- Parameters:
  - bias (bool | None) – Whether to learn an additive bias in conv.
  - weight_initializer (str | None) – The initializer for the weight matrix ("glorot", "uniform", "kaiming_uniform", or None).
  - bias_initializer (str | None) – The initializer for the bias vector ("zeros" or None).
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'Iterative'
- class IterativeFixed(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: Iterative
- name: str = 'IterativeFixed'
- class IterativeFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: IterativeCompose
- name: str = 'IterativeFixed'
- class DecoupledFixed(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNN
Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.
Note
Applies conv on every forward() call. Not to be mixed with Precomputed models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (float, optional) – Hyperparameter for the scheme.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'DecoupledFixed'
- pargs: list[str] = ['theta_scheme', 'theta_param']
- param: dict[str, ParamTuple] = {'theta_param': <function DecoupledFixed.<lambda>>}
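The decoupled propagation that DecoupledFixed performs amounts to a fixed polynomial filter \(\sum_k \theta_k \hat{A}^k x\). A sketch of that computation, assuming a dense normalized adjacency for brevity (the library operates on PyG edge indices or sparse tensors instead):

```python
import torch

def decoupled_propagate(x: torch.Tensor, adj: torch.Tensor,
                        theta: torch.Tensor) -> torch.Tensor:
    # sum_k theta[k] * A^k x, with no weight matrix between hops.
    out = theta[0] * x
    h = x
    for k in range(1, len(theta)):
        h = adj @ h
        out = out + theta[k] * h
    return out

adj = torch.full((4, 4), 0.25)                            # toy normalized adjacency
theta = torch.tensor([0.5 * 0.5 ** k for k in range(5)])  # 'appr' scheme, p = 0.5
y = decoupled_propagate(torch.randn(4, 8), adj, theta)
```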
- class DecoupledVar(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNN
Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.
Note
Applies conv on every forward() call. Not to be mixed with Precomputed models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (float, optional) – Hyperparameter for the scheme.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'DecoupledVar'
- pargs: list[str] = ['theta_scheme', 'theta_param']
- param: dict[str, ParamTuple] = {'theta_param': <function DecoupledVar.<lambda>>}
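Relative to the DecoupledFixed sketch above, DecoupledVar only changes the status of the hop parameters: they start from an initialization (e.g. via gen_theta(), documented below) and are then trained. Conceptually (a sketch, not the library's code):

```python
import torch
from torch import nn

num_hops = 10
theta = nn.Parameter(torch.full((num_hops + 1,), 1.0 / (num_hops + 1)))
# Inside forward(): out = sum(theta[k] * A^k x for k), so gradients reach
# theta through every hop on every forward() call.
```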
- class DecoupledFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNNCompose
Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.
- Parameters:
  - theta_scheme (list[str]) – Method to generate decoupled parameters.
  - theta_param (list[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'DecoupledFixed'
- pargs: list[str] = ['theta_scheme', 'theta_param']
- param: dict[str, ParamTuple] = {'theta_param': <function DecoupledFixedCompose.<lambda>>}
- class DecoupledVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNNCompose
Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.
- Parameters:
  - theta_scheme (list[str]) – Method to generate decoupled parameters.
  - theta_param (list[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'DecoupledVar'
- pargs: list[str] = ['theta_scheme', 'theta_param']
- param: dict[str, ParamTuple] = {'theta_param': <function DecoupledVarCompose.<lambda>>}
- class PrecomputedFixed(in_layers: int | None = None, **kwargs)[source]
Bases: DecoupledFixed
Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters, accumulating precompute results.
Note
Only applies propagation in convolute(). Not to be mixed with Decoupled models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (float | None) – Hyperparameter for the scheme.
  - conv – args for BaseNN.
  - num_hops – args for BaseNN.
  - in_channels – args for BaseNN.
  - hidden_channels – args for BaseNN.
  - out_channels – args for BaseNN.
  - out_layers – args for BaseNN.
  - dropout_lin – args for BaseNN.
  - dropout_conv – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act – args for torch_geometric.nn.models.MLP.
  - act_first – args for torch_geometric.nn.models.MLP.
  - act_kwargs – args for torch_geometric.nn.models.MLP.
  - norm – args for torch_geometric.nn.models.MLP.
  - norm_kwargs – args for torch_geometric.nn.models.MLP.
  - plain_last – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'PrecomputedFixed'
- param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedFixed.<lambda>>)}
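A hedged sketch of the intended precompute-then-train workflow: propagation runs once, gradient-free, and training afterwards only touches the post-conv MLP. The constructor keywords and the conv name 'AdjConv' are assumptions; convolute() is the documented entry point:

```python
import torch
from pyg_spectral.nn.models import PrecomputedFixed

x = torch.randn(100, 64)
edge_index = torch.randint(0, 100, (2, 400))

model = PrecomputedFixed(conv='AdjConv', num_hops=10,
                         in_channels=64, hidden_channels=64, out_channels=7,
                         out_layers=2, theta_scheme='appr', theta_param=0.1)

with torch.no_grad():
    embed = model.convolute(x, edge_index)  # one-time propagation, reused every epoch
# The training loop then feeds the accumulated `embed` through the MLP part only.
```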
- class PrecomputedVar(in_layers: int | None = None, **kwargs)[source]
Bases: DecoupledVar
Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters, storing all intermediate precompute results.
Note
Only applies propagation in convolute(). Not to be mixed with Decoupled models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (float | None) – Hyperparameter for the scheme.
  - conv – args for BaseNN.
  - num_hops – args for BaseNN.
  - in_channels – args for BaseNN.
  - hidden_channels – args for BaseNN.
  - out_channels – args for BaseNN.
  - out_layers – args for BaseNN.
  - dropout_lin – args for BaseNN.
  - dropout_conv – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act – args for torch_geometric.nn.models.MLP.
  - act_first – args for torch_geometric.nn.models.MLP.
  - act_kwargs – args for torch_geometric.nn.models.MLP.
  - norm – args for torch_geometric.nn.models.MLP.
  - norm_kwargs – args for torch_geometric.nn.models.MLP.
  - plain_last – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'PrecomputedVar'
- param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedVar.<lambda>>)}
- convolute(x: Tensor, edge_index: Tensor | SparseTensor) list[source]
Decoupled propagation step for calling the convolutional module. _forward() should not contain derivable computations.
- Returns:
  - embed – List of precomputed node embeddings of each hop. Each has shape \((|\mathcal{V}|, F, |convs|+1)\).
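Since PrecomputedVar stores every hop's embedding, training can recombine them with the learnable scalars without re-propagating. A sketch under the shape convention above, dropping the trailing conv axis for the single-conv case:

```python
import torch
from torch import nn

num_hops, F = 10, 64
# embed[k]: detached hop-k node embedding of shape [num_nodes, F].
embed = [torch.randn(100, F) for _ in range(num_hops + 1)]

theta = nn.Parameter(torch.ones(num_hops + 1) / (num_hops + 1))
h = sum(t * e for t, e in zip(theta, embed))  # differentiable in theta only
```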
- class PrecomputedFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: DecoupledFixedCompose
Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters, accumulating precompute results.
- Parameters:
  - theta_scheme (list[str]) – Method to generate decoupled parameters.
  - theta_param (list[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'PrecomputedFixed'
- param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedFixedCompose.<lambda>>)}
- class PrecomputedVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: DecoupledVarCompose
Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters, storing all intermediate precompute results.
- Parameters:
  - theta_scheme (list[str]) – Method to generate decoupled parameters.
  - theta_param (list[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'PrecomputedVar'
- param: dict[str, ParamTuple] = {'in_layers': ('int', (0, 0), {}, <function PrecomputedVarCompose.<lambda>>)}
- convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor[source]
Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().
- Returns:
  - embed (Tensor) – Stacked precomputed node embeddings of each hop. Shape: \((|\mathcal{V}|, F, Q, |convs|+1)\).
- class AdaGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNN
Decoupled structure with a diagonal transformation at each hop of propagation.
- Paper: AdaGNN: Graph Neural Networks with Adaptive Frequency Response Filter
- Parameters:
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'DecoupledVar'
- pargs: list[str] = ['theta_scheme', 'theta_param']
- param: dict[str, ParamTuple] = {'theta_param': <function AdaGNN.<lambda>>}
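The "diagonal transformation" means each hop applies a learnable per-feature filter instead of a full weight matrix. A sketch following the AdaGNN paper's update \(X \leftarrow X - \tilde{L} X \,\mathrm{diag}(\phi_k)\), with a toy stand-in for the normalized Laplacian (not the library's implementation):

```python
import torch
from torch import nn

num_hops, F, N = 4, 16, 100
phi = nn.Parameter(torch.ones(num_hops, F))        # one diagonal filter per hop
lap = torch.eye(N) - torch.full((N, N), 1.0 / N)   # toy normalized Laplacian

def adagnn_propagate(x: torch.Tensor) -> torch.Tensor:
    h = x
    for k in range(num_hops):
        # (lap @ h) * phi[k] broadcasts phi over nodes: equivalent to h @ diag(phi[k]).
        h = h - (lap @ h) * phi[k]
    return h

out = adagnn_propagate(torch.randn(N, F))
```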
- class ACMGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNN
Iterative structure for ACM conv.
- Paper: Revisiting Heterophily For Graph Neural Networks
- Paper: Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
- Parameters:
  - theta_scheme (str) – Channel list. "FBGNN" = "low-high", "ACMGNN" = "low-high-id" ("ACMGNN+" = "low-high-id-struct" is not implemented).
  - weight_initializer (str, optional) – The initializer for the weight.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'Iterative'
- pargs: list[str] = ['theta_scheme']
- class ACMGNNDec(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | list[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: dict[str, Any | None] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: dict[str, Any | None] | None = None, plain_last: bool = False, bias: bool | list[bool] = True, **kwargs)[source]
Bases: BaseNN
Decoupled structure for ACM conv.
- Paper: Revisiting Heterophily For Graph Neural Networks
- Paper: Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
- Parameters:
  - theta_scheme (str) – Channel list. "FBGNN" = "low-high", "ACMGNN" = "low-high-id" ("ACMGNN+" = "low-high-id-struct" is not implemented).
  - weight_initializer (str, optional) – The initializer for the weight.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | list[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (dict[str, Any | None] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | list[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- name: str = 'DecoupledVar'
- pargs: list[str] = ['theta_scheme']
- gen_theta(num_hops: int, scheme: str, param: float | list[float] | None = None) Tensor[source]
Generate a list of hop parameters based on the given scheme.
- Parameters:
  - num_hops (int) – Total number of hops.
  - scheme (str) – Method to generate parameters.
    - 'zeros': all-zero, \(\theta_k = 0\).
    - 'ones': all-same, \(\theta_k = p\).
    - 'impulse': K-hop, \(\theta_K = p\), else 0.
    - 'inverse': Inverse, \(\theta_k = p/(k+1)\).
    - 'mono': Monomial, \(\theta_k = (1-p)/K\), \(\theta_0 = p\).
    - 'appr': Approximate PPR, \(\theta_k = p (1 - p)^k\).
    - 'nappr': Negative PPR, \(\theta_k = p^k\).
    - 'hk': Heat Kernel, \(\theta_k = e^{-p} p^k / k!\).
    - 'gaussian': Graph Gaussian Kernel, \(\theta_k = p^k / k!\).
    - 'log': Logarithmic, \(\theta_k = \log(p / (k+1) + 1)\).
    - 'chebyshev': Chebyshev polynomial.
    - 'uniform': Random uniform distribution.
    - 'normal_one': Gaussian distribution \(N(0, p)\) with sum normalized to 1.
    - 'normal': Gaussian distribution \(N(0, p)\).
    - 'custom': Custom list of hop parameters.
  - param (float, optional) – Hyperparameter for the scheme.
    - 'zeros': NA.
    - 'ones': Value.
    - 'impulse': Value.
    - 'inverse': Value.
    - 'mono': Decay factor, \(p \in [0, 1]\).
    - 'appr': Decay factor, \(p \in [0, 1]\).
    - 'nappr': Decay factor, \(p \in [-1, 1]\).
    - 'hk': Decay factor, \(p > 0\).
    - 'gaussian': Decay factor, \(p > 0\).
    - 'log': Decay factor, \(p > 0\).
    - 'chebyshev': NA.
    - 'uniform': Distribution bound.
    - 'normal_one': Distribution variance.
    - 'normal': Distribution variance.
    - 'custom': Float list of hop parameters.
- Returns:
  - theta (Tensor) – Length-(num_hops + 1) tensor of hop parameters.
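A few of the closed-form schemes written out directly; this is a sketch reproducing the documented formulas, not the library's implementation:

```python
import math
import torch

def gen_theta_sketch(num_hops: int, scheme: str, p: float) -> torch.Tensor:
    ks = torch.arange(num_hops + 1, dtype=torch.float)
    if scheme == 'appr':    # theta_k = p * (1 - p)^k
        return p * (1 - p) ** ks
    if scheme == 'nappr':   # theta_k = p^k
        return p ** ks
    if scheme == 'hk':      # theta_k = e^{-p} * p^k / k!
        return torch.tensor([math.exp(-p) * p ** k / math.factorial(k)
                             for k in range(num_hops + 1)])
    if scheme == 'log':     # theta_k = log(p / (k + 1) + 1)
        return torch.log(p / (ks + 1) + 1)
    raise ValueError(f'unknown scheme: {scheme}')

print(gen_theta_sketch(4, 'appr', 0.5))  # tensor([0.5000, 0.2500, 0.1250, 0.0625, 0.0312])
```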
- class CppPrecFixed(in_layers: int | None = None, **kwargs)[source]
Bases: PrecomputedFixed
Decoupled structure with C++ propagation precomputation. Fixed scalar propagation parameters, accumulating precompute results.