pyg_spectral.nn.models
- class BaseNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
Module
Base NN structure with MLP before and after convolution layers.
- Parameters:
  - conv (str) – Name of pyg_spectral.nn.conv module.
  - num_hops (int, default: 0) – Total number of conv hops.
  - in_channels (int | None, default: None) – Size of each input sample.
  - hidden_channels (int | None, default: None) – Size of each hidden sample.
  - out_channels (int | None, default: None) – Size of each output sample.
  - in_layers (int | None, default: None) – Number of MLP layers before conv.
  - out_layers (int | None, default: None) – Number of MLP layers after conv.
  - dropout_lin (float | List[float], default: 0.0) – Dropout probability for both MLPs.
  - dropout_conv (float, default: 0.0) – Dropout probability before conv.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - lib_conv – Parent module library other than pyg_spectral.nn.conv.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) List[int] [source]
- preprocess(x: Tensor, edge_index: Tensor | SparseTensor) Any [source]
Preprocessing step that is not counted in forward() overhead; mainly transforms the graph adjacency into the actual propagation matrix.
- convolute(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor [source]
Decoupled propagation step for calling the convolutional module.
- forward(x: Tensor, edge_index: Tensor | SparseTensor, batch: Tensor | None = None, batch_size: int | None = None) Tensor [source]
- Parameters:
  - x (Tensor) – from torch_geometric.data.Data
  - edge_index (Tensor | SparseTensor) – from torch_geometric.data.Data
  - batch (Tensor | None, default: None) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. Only needs to be passed in case the underlying normalization layers require the batch information.
  - batch_size (int | None, default: None) – The number of examples \(B\). Automatically calculated if not given. Only needs to be passed in case the underlying normalization layers require the batch information.
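A minimal usage sketch, assuming a concrete subclass such as DecoupledVar; the conv name 'JacobiConv' is illustrative only and must match an actual pyg_spectral.nn.conv module:

```python
# Hedged sketch: running a BaseNN subclass on a small PyG graph.
import torch
from torch_geometric.data import Data
from pyg_spectral.nn.models import DecoupledVar

x = torch.randn(4, 16)                           # 4 nodes, 16 features
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])        # a directed 4-cycle
data = Data(x=x, edge_index=edge_index)

model = DecoupledVar(conv='JacobiConv',          # illustrative conv name
                     num_hops=4,
                     in_channels=16, hidden_channels=32, out_channels=7,
                     in_layers=1, out_layers=1)
out = model(data.x, data.edge_index)             # node outputs, shape [4, 7]
```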
- class BaseNNCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNN
Base NN structure with multiple conv channels.
- Parameters:
  - combine (str) – How to combine different channels of convs (sum, sum_weighted, or cat).
  - conv (str) – Name of pyg_spectral.nn.conv module.
  - num_hops (int, default: 0) – Total number of conv hops.
  - in_channels (int | None, default: None) – Size of each input sample.
  - hidden_channels (int | None, default: None) – Size of each hidden sample.
  - out_channels (int | None, default: None) – Size of each output sample.
  - in_layers (int | None, default: None) – Number of MLP layers before conv.
  - out_layers (int | None, default: None) – Number of MLP layers after conv.
  - dropout_lin (float | List[float], default: 0.0) – Dropout probability for both MLPs.
  - dropout_conv (float, default: 0.0) – Dropout probability before conv.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - lib_conv – Parent module library other than pyg_spectral.nn.conv.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- init_channel_list(conv: str, in_channels: int, hidden_channels: int, out_channels: int, **kwargs) List[int] [source]
- Variables:
  - channel_list – Width of each conv channel.
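A sketch of the three combine modes on per-channel conv outputs; illustrative only, not the library's internal code:

```python
# Hedged sketch of 'sum', 'sum_weighted', and 'cat' over a list of
# per-channel outputs `outs`, each of shape [N, F].
import torch

outs = [torch.randn(4, 8) for _ in range(3)]
out_sum = torch.stack(outs, dim=-1).sum(-1)           # 'sum'
w = torch.nn.Parameter(torch.ones(len(outs)))         # learnable in 'sum_weighted'
out_wsum = sum(wi * o for wi, o in zip(w, outs))      # 'sum_weighted'
out_cat = torch.cat(outs, dim=-1)                     # 'cat' -> shape [N, 3F]
```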
- class Iterative(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNN
Iterative structure with matrix transformation each hop of propagation.
- Parameters:
  - bias (Optional[bool]) – Whether to learn an additive bias in conv.
  - weight_initializer (Optional[str]) – The initializer for the weight matrix ("glorot", "uniform", "kaiming_uniform", or None).
  - bias_initializer (Optional[str]) – The initializer for the bias vector ("zeros" or None).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
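A sketch of the iterative pattern: every propagation hop is followed by a learnable matrix transformation. P stands in for the propagation matrix and the per-hop linear maps are illustrative, not the library's internal code:

```python
# Hedged sketch: matrix transformation at each hop of propagation.
import torch

N, F, K = 4, 8, 3
P = torch.eye(N)                                  # stand-in propagation matrix
x = torch.randn(N, F)
lins = [torch.nn.Linear(F, F) for _ in range(K)]  # one transform per hop
for lin in lins:
    x = lin(P @ x)                                # propagate, then transform
```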
- class IterativeCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNNCompose
Iterative structure with matrix transformation each hop of propagation.
- Parameters:
  - bias (Optional[bool]) – Whether to learn an additive bias in conv.
  - weight_initializer (Optional[str]) – The initializer for the weight matrix ("glorot", "uniform", "kaiming_uniform", or None).
  - bias_initializer (Optional[str]) – The initializer for the bias vector ("zeros" or None).
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- class DecoupledFixed(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNN
Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.
Note
Applies conv at every forward() call. Not to be mixed with Precomputed models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (float, optional) – Hyperparameter for the scheme.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
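A sketch of fixed decoupled propagation, \(\mathbf{y} = \sum_k \theta_k \mathbf{P}^k \mathbf{x}\); the PPR-style decay below is one illustrative theta_scheme, not necessarily the library's:

```python
# Hedged sketch: fixed scalar propagation parameters, no matrix transform.
import torch

N, F, K, alpha = 4, 8, 3, 0.5
P = torch.eye(N)                                          # stand-in propagation matrix
x = torch.randn(N, F)
theta = [alpha * (1 - alpha) ** k for k in range(K + 1)]  # fixed scalars
out, h = theta[0] * x, x
for k in range(1, K + 1):
    h = P @ h                                             # one more hop
    out = out + theta[k] * h
```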
- class DecoupledVar(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNN
Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.
Note
Applies conv at every forward() call. Not to be mixed with Precomputed models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (float, optional) – Hyperparameter for the scheme.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
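The learnable variant differs only in where theta comes from; a sketch under the same assumptions as above:

```python
# Hedged sketch: same recursion, but theta is a torch Parameter updated by
# the optimizer instead of being fixed by a scheme.
import torch

N, F, K = 4, 8, 3
P, x = torch.eye(N), torch.randn(N, F)
theta = torch.nn.Parameter(torch.ones(K + 1) / (K + 1))   # learnable scalars
out, h = theta[0] * x, x
for k in range(1, K + 1):
    h = P @ h
    out = out + theta[k] * h                              # gradients flow into theta
```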
- class DecoupledFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNNCompose
Decoupled structure without matrix transformation during propagation. Fixed scalar propagation parameters.
- Parameters:
  - theta_scheme (List[str]) – Method to generate decoupled parameters.
  - theta_param (List[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- class DecoupledVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNNCompose
Decoupled structure without matrix transformation during propagation. Learnable scalar propagation parameters.
- Parameters:
  - theta_scheme (List[str]) – Method to generate decoupled parameters.
  - theta_param (List[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- class PrecomputedFixed(in_layers: int | None = None, **kwargs)[source]
Bases:
DecoupledFixed
Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters and accumulating precompute results.
Note
Only applies propagation in convolute(). Not to be mixed with Decoupled models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (Optional[float]) – Hyperparameter for the scheme.
  - conv – args for BaseNN.
  - num_hops – args for BaseNN.
  - in_channels – args for BaseNN.
  - hidden_channels – args for BaseNN.
  - out_channels – args for BaseNN.
  - out_layers – args for BaseNN.
  - dropout_lin – args for BaseNN.
  - dropout_conv – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act – args for torch_geometric.nn.models.MLP.
  - act_first – args for torch_geometric.nn.models.MLP.
  - act_kwargs – args for torch_geometric.nn.models.MLP.
  - norm – args for torch_geometric.nn.models.MLP.
  - norm_kwargs – args for torch_geometric.nn.models.MLP.
  - plain_last – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
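A workflow sketch of the precompute-then-train split. The conv name is illustrative, and feeding the precomputed features back into forward() is an assumption about the intended calling pattern:

```python
# Hedged sketch: propagation runs once in convolute() outside autograd;
# subsequent training touches only the MLP transformation.
import torch
from pyg_spectral.nn.models import PrecomputedFixed

model = PrecomputedFixed(conv='JacobiConv',       # illustrative conv name
                         num_hops=4,
                         in_channels=16, hidden_channels=32, out_channels=7,
                         out_layers=2)
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
with torch.no_grad():
    x_prop = model.convolute(x, edge_index)       # one-time propagation
out = model(x_prop, edge_index)                   # transformation-only forward
```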
- class PrecomputedVar(in_layers: int | None = None, **kwargs)[source]
Bases:
DecoupledVar
Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters and storing all intermediate precompute results.
Note
Only applies propagation in convolute(). Not to be mixed with Decoupled models.
- Parameters:
  - theta_scheme (str) – Method to generate decoupled parameters.
  - theta_param (Optional[float]) – Hyperparameter for the scheme.
  - conv – args for BaseNN.
  - num_hops – args for BaseNN.
  - in_channels – args for BaseNN.
  - hidden_channels – args for BaseNN.
  - out_channels – args for BaseNN.
  - out_layers – args for BaseNN.
  - dropout_lin – args for BaseNN.
  - dropout_conv – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act – args for torch_geometric.nn.models.MLP.
  - act_first – args for torch_geometric.nn.models.MLP.
  - act_kwargs – args for torch_geometric.nn.models.MLP.
  - norm – args for torch_geometric.nn.models.MLP.
  - norm_kwargs – args for torch_geometric.nn.models.MLP.
  - plain_last – args for torch_geometric.nn.models.MLP.
  - bias – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- convolute(x: Tensor, edge_index: Tensor | SparseTensor) list [source]
Decoupled propagation step for calling the convolutional module. _forward() should not contain computations that require gradients.
- Returns:
  embed – List of precomputed node embeddings of each hop. Each shape is \((|\mathcal{V}|, F, |convs|+1)\).
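A sketch of why every hop's embedding is stored: the learnable scalars can re-weight the cached embeddings during training. Illustrative only, not library code:

```python
# Hedged sketch: learnable scalars over cached per-hop embeddings.
import torch

K, N, F = 4, 4, 8
embeds = [torch.randn(N, F) for _ in range(K + 1)]   # cached hop embeddings
theta = torch.nn.Parameter(torch.ones(K + 1))
out = sum(t * e for t, e in zip(theta, embeds))      # differentiable in theta
```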
- class PrecomputedFixedCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
DecoupledFixedCompose
Decoupled structure with precomputation separating propagation from transformation. Fixed scalar propagation parameters and accumulating precompute results.
- Parameters:
  - theta_scheme (List[str]) – Method to generate decoupled parameters.
  - theta_param (List[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- class PrecomputedVarCompose(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
DecoupledVarCompose
Decoupled structure with precomputation separating propagation from transformation. Learnable scalar propagation parameters and storing all intermediate precompute results.
- Parameters:
  - theta_scheme (List[str]) – Method to generate decoupled parameters.
  - theta_param (List[float], optional) – Hyperparameter for the scheme.
  - combine – How to combine different channels of convs (sum, sum_weighted, or cat).
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- convolute(x: Tensor, edge_index: Tensor | SparseTensor) Tensor [source]
Decoupled propagation step for calling the convolutional module. Requires no variable transformation in conv.forward().
- Returns:
  embed (Tensor) – Stacked precomputed node embeddings of each hop. Shape: \((|\mathcal{V}|, F, Q, |convs|+1)\).
- class AdaGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNN
Decoupled structure with diag transformation each hop of propagation.
- Paper:
AdaGNN: Graph Neural Networks with Adaptive Frequency Response Filter
- Parameters:
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
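A sketch of the per-hop diag transformation named above: each hop multiplies the propagated signal by diag(phi_k), i.e. one learnable weight per feature per hop. Illustrative; not the paper's exact update rule:

```python
# Hedged sketch: adaptive per-feature (diagonal) filter at each hop.
import torch

N, F, K = 4, 8, 3
P = torch.eye(N)                               # stand-in propagation matrix
x = torch.randn(N, F)
phi = torch.nn.Parameter(torch.ones(K, F))     # one weight per feature per hop
for k in range(K):
    x = (P @ x) * phi[k]                       # elementwise == right-multiply by diag(phi_k)
```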
- class ACMGNN(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNN
Iterative structure for ACM conv.
- Paper:
Revisiting Heterophily For Graph Neural Networks
- Paper:
Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
- Parameters:
  - theta_scheme (str) – Channel list. "FBGNN"="low-high", "ACMGNN"="low-high-id" ("ACMGNN+"="low-high-id-struct", not implemented).
  - weight_initializer (str, optional) – The initializer for the weight.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
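A sketch of adaptive channel mixing over the low/high/identity channels listed in theta_scheme: per-node softmax weights blend the filter outputs. Illustrative only; not the library's exact mechanism:

```python
# Hedged sketch: per-node softmax weights over three filter channels.
import torch

N, F = 4, 8
low, high, ident = torch.randn(N, F), torch.randn(N, F), torch.randn(N, F)
mix = torch.nn.Linear(3 * F, 3)                  # scores the three channels
alpha = torch.softmax(mix(torch.cat([low, high, ident], dim=-1)), dim=-1)
out = alpha[:, :1] * low + alpha[:, 1:2] * high + alpha[:, 2:3] * ident
```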
- class ACMGNNDec(conv: str, num_hops: int = 0, in_channels: int | None = None, hidden_channels: int | None = None, out_channels: int | None = None, in_layers: int | None = None, out_layers: int | None = None, dropout_lin: float | List[float] = 0.0, dropout_conv: float = 0.0, act: str | Callable | None = 'relu', act_first: bool = False, act_kwargs: Dict[str, Any] | None = None, norm: str | Callable | None = 'batch_norm', norm_kwargs: Dict[str, Any] | None = None, plain_last: bool = False, bias: bool | List[bool] = True, **kwargs)[source]
Bases:
BaseNN
Decoupled structure for ACM conv.
- Paper:
Revisiting Heterophily For Graph Neural Networks
- Paper:
Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
- Parameters:
  - theta_scheme (str) – Channel list. "FBGNN"="low-high", "ACMGNN"="low-high-id" ("ACMGNN+"="low-high-id-struct", not implemented).
  - weight_initializer (str, optional) – The initializer for the weight.
  - hidden_channels (int | None, default: None) – args for BaseNN.
  - dropout_lin (float | List[float], default: 0.0) – args for BaseNN.
  - lib_conv – args for BaseNN.
  - act (str | Callable | None, default: 'relu') – args for torch_geometric.nn.models.MLP.
  - act_first (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - act_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - norm (str | Callable | None, default: 'batch_norm') – args for torch_geometric.nn.models.MLP.
  - norm_kwargs (Dict[str, Any] | None, default: None) – args for torch_geometric.nn.models.MLP.
  - plain_last (bool, default: False) – args for torch_geometric.nn.models.MLP.
  - bias (bool | List[bool], default: True) – args for torch_geometric.nn.models.MLP.
  - **kwargs – Additional arguments of pyg_spectral.nn.conv.
- class CppCompFixed(in_layers: int | None = None, **kwargs)[source]
Bases:
PrecomputedFixed
Decoupled structure with C++ propagation precomputation. Fixed scalar propagation parameters and accumulating precompute results.