pygho.honn package

Submodules

pygho.honn.Conv module

Representative GNN layers built upon message passing operations. For all modules, A denotes the adjacency matrix and X the tuple representation. The mode "SS" means sparse adjacency and sparse X, "SD" means sparse adjacency and dense X, and "DD" means dense adjacency and dense X. datadict contains precomputation results.
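Example (an illustrative sketch, not part of the API reference; it assumes pygho's subgraph data pipeline supplies A, X, and datadict):

from pygho.honn.Conv import NGNNConv

# "SS": sparse adjacency and sparse tuple representation
conv = NGNNConv(indim=64, outdim=64, aggr="sum", mode="SS")
# X = conv(A, X, datadict)  # A: adjacency, X: tuple representation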

class pygho.honn.Conv.DSSGNNConv(indim: int, outdim: int, aggr_subg: str = 'sum', aggr_global: str = 'sum', pool: str = 'mean', mode: Literal['SD', 'DD', 'SS'] = 'SS', mlp: dict = {}, optuplefeat: str = 'X', opadj: str = 'A')[source]

Bases: Module

Implementation of the DSSGNNConv layer based on the paper “Equivariant subgraph aggregation networks” by Beatrice Bevilacqua et al., ICLR 2022. This layer performs message passing on 2D subgraph representations with subgraph pooling.

Args:

  • indim (int): Input feature dimension.

  • outdim (int): Output feature dimension.

  • aggr_subg (str): Aggregation method for message passing within subgraphs (e.g., “sum”).

  • aggr_global (str): Aggregation method for message passing in the global context (e.g., “sum”).

  • pool (str): Pooling method (e.g., “mean”).

  • mode (str): Mode for specifying tensor types (e.g., “SS” for sparse adjacency and sparse X).

  • mlp (dict): Parameters for the MLP layer.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[SparseTensor, MaskedTensor], datadict: dict) -> Union[SparseTensor, MaskedTensor]: Forward pass of the DSSGNNConv layer.
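Example (a minimal construction sketch based on the signature above; the mlp dict is left at its default, and the inputs to forward are assumed to come from pygho's data pipeline):

from pygho.honn.Conv import DSSGNNConv

conv = DSSGNNConv(
    indim=64, outdim=64,
    aggr_subg="sum",    # aggregation within each subgraph
    aggr_global="sum",  # aggregation in the global context
    pool="mean",        # subgraph pooling
    mode="SS",          # sparse adjacency and sparse X
)
# out = conv(A, X, datadict)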

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: dict) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.Conv.GNNAKConv(indim: int, outdim: int, aggr: str = 'sum', pool: str = 'mean', mode: Literal['SD', 'DD', 'SS'] = 'SS', mlp0: dict = {}, mlp1: dict = {}, ctx: bool = True, optuplefeat: str = 'X', opadj: str = 'A')[source]

Bases: Module

Implementation of the GNNAKConv layer based on the paper “From stars to subgraphs: Uplifting any GNN with local structure awareness” by Lingxiao Zhao et al., ICLR 2022. This layer performs message passing on 2D subgraph representations with subgraph pooling and cross-subgraph pooling.

Args:

  • indim (int): Input feature dimension.

  • outdim (int): Output feature dimension.

  • aggr (str): Aggregation method for message passing (e.g., “sum”).

  • pool (str): Pooling method (e.g., “mean”).

  • mode (str): Mode for specifying tensor types (e.g., “SS” for sparse adjacency and sparse X).

  • mlp0 (dict): Parameters for the first MLP layer.

  • mlp1 (dict): Parameters for the second MLP layer.

  • ctx (bool): Whether to include context information.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[SparseTensor, MaskedTensor], datadict: dict) -> Union[SparseTensor, MaskedTensor]: Forward pass of the GNNAKConv layer.

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: dict) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.Conv.I2Conv(indim: int, outdim: int, aggr: str = 'sum', mode: Literal['SD', 'DD', 'SS'] = 'SS', mlp: dict = {}, optuplefeat: str = 'X', opadj: str = 'A')[source]

Bases: Module

Implementation of the I2Conv layer based on the paper “Boosting the cycle counting power of graph neural networks with I2-GNNs” by Yinan Huang et al., ICLR 2023. This layer performs message passing on 3D subgraph representations.

Args:

  • indim (int): Input feature dimension.

  • outdim (int): Output feature dimension.

  • aggr (str): Aggregation method for message passing (e.g., “sum”).

  • mode (str): Mode for specifying tensor types (e.g., “SS” for sparse adjacency and sparse X).

  • mlp (dict): Parameters for the MLP layer.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[SparseTensor, MaskedTensor], datadict: dict) -> Union[SparseTensor, MaskedTensor]: Forward pass of the I2Conv layer.

Notes:

  • This layer is based on the I2-GNN paper and performs message passing on 3D subgraph representations.

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: dict) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.Conv.NGNNConv(indim: int, outdim: int, aggr: str = 'sum', mode: Literal['SD', 'DD', 'SS'] = 'SS', mlp: dict = {}, optuplefeat: str = 'X', opadj: str = 'A', message_func: Callable | None = None)[source]

Bases: Module

Implementation of the NGNNConv layer based on the paper “Nested Graph Neural Networks” by Muhan Zhang and Pan Li, NeurIPS 2021. This layer performs message passing on 2D subgraph representations.

Args:

  • indim (int): Input feature dimension.

  • outdim (int): Output feature dimension.

  • aggr (str): Aggregation method for message passing (e.g., “sum”).

  • mode (str): Mode for specifying tensor types (e.g., “SS” for sparse adjacency and sparse X).

  • mlp (dict): Parameters for the MLP layer.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[SparseTensor, MaskedTensor], datadict: dict) -> Union[SparseTensor, MaskedTensor]: Forward pass of the NGNNConv layer.
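Example (a construction sketch; the assumption that the mlp dict forwards keyword arguments to pygho.honn.utils.MLP, such as numlayer and norm, is ours and should be checked against your configuration):

from pygho.honn.Conv import NGNNConv

# mlp keys are assumed to mirror pygho.honn.utils.MLP's arguments
conv = NGNNConv(indim=64, outdim=64, aggr="sum", mode="SS",
                mlp={"numlayer": 2, "norm": "bn", "act": "relu", "tailact": True})
# X = conv(A, X, datadict)  # message passing within each subgraph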

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: dict) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.Conv.PPGNConv(indim: int, outdim: int, aggr: str = 'sum', mode: Literal['DD', 'SS'] = 'SS', mlp: dict = {}, optuplefeat: str = 'X')[source]

Bases: Module

Implementation of the PPGNConv layer based on the paper “Provably powerful graph networks” by Haggai Maron et al., NeurIPS 2019. This layer performs message passing with power-sum pooling on 2D subgraph representations.

Args:

  • indim (int): Input feature dimension.

  • outdim (int): Output feature dimension.

  • aggr (str): Aggregation method for message passing (e.g., “sum”).

  • mode (str): Mode for specifying tensor types (e.g., “SS” for sparse adjacency and sparse X).

  • mlp (dict): Parameters for the MLP layers.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[SparseTensor, MaskedTensor], datadict: dict) -> Union[SparseTensor, MaskedTensor]: Forward pass of the PPGNConv layer.

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: dict) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.Conv.SSWLConv(indim: int, outdim: int, aggr: str = 'sum', mode: Literal['SD', 'DD', 'SS'] = 'SS', mlp: dict = {}, optuplefeat: str = 'X', opadj: str = 'A')[source]

Bases: Module

Implementation of the SSWLConv layer based on the paper “A complete expressiveness hierarchy for subgraph GNNs via subgraph Weisfeiler-Lehman tests” by Bohang Zhang et al., ICML 2023. This layer performs message passing on 2D subgraph representations and cross-subgraph pooling.

Args:

  • indim (int): Input feature dimension.

  • outdim (int): Output feature dimension.

  • aggr (str): Aggregation method for message passing (e.g., “sum”).

  • mode (str): Mode for specifying tensor types (e.g., “SS” for sparse adjacency and sparse X).

  • mlp (dict): Parameters for the MLP layer.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[SparseTensor, MaskedTensor], datadict: dict) -> Union[SparseTensor, MaskedTensor]: Forward pass of the SSWLConv layer.

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: dict) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.Conv.SUNConv(indim: int, outdim: int, aggr: str = 'sum', pool: str = 'mean', mode: Literal['SD', 'DD', 'SS'] = 'SS', mlp0: dict = {}, mlp1: dict = {}, optuplefeat: str = 'X', opadj: str = 'A')[source]

Bases: Module

Implementation of the SUNConv layer based on the paper “Understanding and extending subgraph GNNs by rethinking their symmetries” by Fabrizio Frasca et al., NeurIPS 2022. This layer performs message passing on 2D subgraph representations with subgraph and cross-subgraph pooling.

Args:

  • indim (int): Input feature dimension.

  • outdim (int): Output feature dimension.

  • aggr (str): Aggregation method for message passing (e.g., “sum”).

  • pool (str): Pooling method (e.g., “mean”).

  • mode (str): Mode for specifying tensor types (e.g., “SS” for sparse adjacency and sparse X).

  • mlp0 (dict): Parameters for the first MLP layer.

  • mlp1 (dict): Parameters for the second MLP layer.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[SparseTensor, MaskedTensor], datadict: dict) -> Union[SparseTensor, MaskedTensor]: Forward pass of the SUNConv layer.

Notes:

  • This layer is based on Symmetry Understanding Networks (SUN) and performs message passing on 2D subgraph representations with subgraph and cross-subgraph pooling.

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: dict) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

pygho.honn.MaOperator module

class pygho.honn.MaOperator.Op2FWL[source]

Bases: OpMessagePassing

Operator for simulating the 2-Folklore-Weisfeiler-Lehman (FWL) test. X <- X1 * X2.

This operator is specifically designed for simulating the 2-Folklore-Weisfeiler-Lehman (FWL) test by performing message passing between two masked tensors ‘X1’ and ‘X2’. The result is masked using the mask of the target masked tensor ‘tarX’.

Args:

  • None

See Also:

  • OpMessagePassing: The base class for generalized message passing.

forward(X1: MaskedTensor, X2: MaskedTensor, datadict: Dict, tarX: MaskedTensor) → MaskedTensor[source]

Simulate the 2-Folklore-Weisfeiler-Lehman (FWL) test by performing message passing.

Args:

  • X1 (MaskedTensor): The first input masked tensor of shape (b, n, n, *denseshape1).

  • X2 (MaskedTensor): The second input masked tensor of shape (b, n, n, *denseshape2).

  • datadict (Dict): A dictionary for caching intermediate data.

  • tarX (MaskedTensor): The target masked tensor.

class pygho.honn.MaOperator.OpDiag(dims: Iterable[int])[source]

Bases: Module

Operator for extracting diagonal elements from a MaskedTensor.

Args:

  • dims (Iterable[int]): A list of dimensions along which to extract diagonal elements.

forward(A: MaskedTensor) → MaskedTensor[source]

Forward function.

Args:

  • A (MaskedTensor): The input MaskedTensor.

Returns:

  • MaskedTensor: The diagonal elements.

class pygho.honn.MaOperator.OpDiag2D[source]

Bases: OpDiag

forward(X: MaskedTensor) → MaskedTensor[source]

Extract diagonal elements from the input MaskedTensor.

Args:

  • X (MaskedTensor): The input MaskedTensor from which to extract diagonal elements, of shape (b, n, n, *denseshape).

Returns:

  • MaskedTensor: The diagonal elements, of shape (b, n, *denseshape).

class pygho.honn.MaOperator.OpMessagePassing(dim1: int, dim2: int)[source]

Bases: Module

General message passing operator for masked tensor adjacency and masked tensor tuple representation.

This operator takes two input masked tensors ‘A’ and ‘B’ and performs message passing between them to generate a new masked tensor ‘tarX’. The resulting tensor has a shape of (b, *maskedshape1_dim1, *maskedshape2_dim2, *denseshape), where ‘b’ represents the batch size.

Args:

  • dim1 (int): The dimension along which message passing is applied in ‘A’.

  • dim2 (int): The dimension along which message passing is applied in ‘B’.
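Example (a shape-level sketch; the choice dim1=2, dim2=1, which contracts the last node axis of A with the first node axis of B as in a batched matrix product, is an illustrative assumption):

from pygho.honn.MaOperator import OpMessagePassing

op = OpMessagePassing(dim1=2, dim2=1)
# A: (b, n, n, d), B: (b, n, n, d) -> output shaped like tarX, using tarX's mask
# out = op(A, B, tarX)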

forward(A: MaskedTensor, B: MaskedTensor, tarX: MaskedTensor) → MaskedTensor[source]

Perform message passing between two masked tensors.

Args:

  • A (MaskedTensor): The first input masked tensor.

  • B (MaskedTensor): The second input masked tensor.

  • tarX (MaskedTensor): The target masked tensor; the output will use its mask.

Returns:

  • MaskedTensor: The result of message passing, represented as a masked tensor.

Notes:

  • This method applies message passing between ‘A’ and ‘B’ to generate ‘tarX’.

  • It considers the specified dimensions for message passing.

class pygho.honn.MaOperator.OpMessagePassingCrossSubg2D[source]

Bases: OpMessagePassing

Perform message passing across subgraphs within the 2D subgraph Graph Neural Network (GNN).

Args:

  • None

See Also:

  • OpMessagePassing: The base class for generalized message passing.

Notes:

  • It assumes that ‘A’ represents the adjacency matrix of subgraphs, and ‘X’ represents 2D representations of subgraph nodes.

forward(A: MaskedTensor, X: MaskedTensor, datadict: Dict, tarX: MaskedTensor) → MaskedTensor[source]

Perform message passing across subgraphs within the 2D subgraph Graph Neural Network.

Args:

  • A (MaskedTensor): The input masked tensor representing the adjacency matrix of subgraphs, of shape (b, n, n, *denseshape1).

  • X (MaskedTensor): The input masked tensor representing 2D representations of subgraph nodes, of shape (b, n, n, *denseshape2).

  • datadict (Dict): A dictionary for caching intermediate data (not used in this method).

  • tarX (MaskedTensor): The target masked tensor to store the result, of shape (b, n, n, *denseshape3).

Returns:

  • MaskedTensor: The result of message passing that bridges subgraphs.

class pygho.honn.MaOperator.OpMessagePassingOnSubg2D[source]

Bases: OpMessagePassing

Operator for performing message passing on each subgraph for 2D subgraph Graph Neural Networks.

This operator is designed for use in 2D subgraph Graph Neural Networks (GNNs). It extends the base class OpMessagePassing to perform message passing on each subgraph represented by input tensors ‘A’ (adjacency matrix) and ‘X’ (2D representations). The result is stored in the target masked tensor ‘tarX’.

Args:

  • None

See Also:

  • OpMessagePassing: The base class for generalized message passing.

forward(A: MaskedTensor, X: MaskedTensor, datadict: Dict, tarX: MaskedTensor) → MaskedTensor[source]

Perform message passing on each subgraph for 2D subgraph Graph Neural Networks.

Args:

  • A (MaskedTensor): The input masked tensor representing the adjacency matrix of subgraphs, of shape (b, n, n, *denseshape1).

  • X (MaskedTensor): The input masked tensor representing 2D representations of subgraph nodes, of shape (b, n, n, *denseshape2).

  • datadict (Dict): A dictionary for caching intermediate data (not used in this method).

  • tarX (MaskedTensor): The target masked tensor to mask the result.

Returns:

  • MaskedTensor: The result of message passing on each subgraph.

class pygho.honn.MaOperator.OpMessagePassingOnSubg3D[source]

Bases: OpMessagePassing

Operator for performing message passing on each subgraph for 3D subgraph Graph Neural Networks.

Args:

  • None

See Also:

  • OpMessagePassing: The base class for generalized message passing.

forward(A: MaskedTensor, X: MaskedTensor, datadict: Dict, tarX: MaskedTensor) → MaskedTensor[source]

Perform message passing on each subgraph for 3D subgraph Graph Neural Networks.

Args:

  • A (MaskedTensor): The input masked tensor representing the adjacency matrix of subgraphs, of shape (b, n, n, *denseshape1).

  • X (MaskedTensor): The input masked tensor representing 3D representations of subgraph nodes, of shape (b, n, n, n, *denseshape2).

  • datadict (Dict): A dictionary for caching intermediate data (not used in this method).

  • tarX (MaskedTensor): The target masked tensor to mask the result, of shape (b, n, n, n, *denseshape3).

Notes:

  • denseshape1, denseshape2 must be broadcastable.

class pygho.honn.MaOperator.OpNodeMessagePassing[source]

Bases: Module

Perform node-level message passing with an adjacency matrix A of shape (b, n, n) and node features X of shape (b, n).

Args:

  • None

forward(A: MaskedTensor, X: MaskedTensor, tarX: MaskedTensor) → Tensor[source]

Perform forward pass of node-level message passing.

Args:

  • A (MaskedTensor): Adjacency matrix of shape (b, n, n).

  • X (MaskedTensor): Node features of shape (b, n).

  • tarX (MaskedTensor): Target node features of shape (b, n).

Returns:

  • Tensor: The result of the message passing operation.

class pygho.honn.MaOperator.OpPooling(dims: int | Iterable[int], pool: str = 'sum')[source]

Bases: Module

forward(X: MaskedTensor) → MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.MaOperator.OpPoolingCrossSubg2D(pool: str = 'sum')[source]

Bases: OpPooling

Operator for pooling the same node representations within different subgraphs for 2D subgraph GNNs. It returns dense output only.

Parameters:
  • pool (str): The pooling operation to apply.

forward(X: MaskedTensor) → MaskedTensor[source]
Parameters:
  • X (MaskedTensor): The input MaskedTensor of shape (b, n, n, *denseshape) representing 2D node representations.

Returns:
  • (MaskedTensor): The pooled tensor, of shape (b, n, *denseshape).

Raises:
  • AssertionError: If X is not a 2D representation.

class pygho.honn.MaOperator.OpPoolingSubg2D(pool: str = 'sum')[source]

Bases: OpPooling

Operator for pooling node representations within each subgraph for 2D subgraph GNNs.

Parameters:
  • pool (str): The pooling operation to apply.

forward(X: MaskedTensor) → MaskedTensor[source]
Parameters:
  • X (MaskedTensor): The input MaskedTensor of shape (b, n, n, *denseshape) representing 2D node representations.

Returns:
  • (MaskedTensor): The pooled tensor, of shape (b, n, *denseshape).

Raises:
  • AssertionError: If X is not a 2D representation.

class pygho.honn.MaOperator.OpPoolingSubg3D(pool: str = 'sum')[source]

Bases: OpPooling

Operator for pooling node representations within each subgraph for 3D subgraph GNNs.

Parameters:
  • pool (str): The pooling operation to apply.

forward(X: MaskedTensor) → MaskedTensor[source]
Parameters:
  • X (MaskedTensor): The input MaskedTensor of shape (b, n, n, n, *denseshape) representing 3D node representations.

Returns:
  • (MaskedTensor): The pooled tensor, of shape (b, n, n, *denseshape).

Raises:
  • AssertionError: If X is not a 3D representation.

class pygho.honn.MaOperator.OpSpMessagePassing(dim1: int, dim2: int, aggr: str = 'sum')[source]

Bases: Module

OpMessagePassing, but using a sparse adjacency matrix.

forward(A: SparseTensor, X: MaskedTensor, tarX: MaskedTensor) → MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.MaOperator.OpSpMessagePassingCrossSubg2D(aggr: str = 'sum')[source]

Bases: OpSpMessagePassing

OpMessagePassingCrossSubg2D, but using a sparse adjacency matrix.

forward(A: SparseTensor, X: MaskedTensor, datadict: Dict, tarX: MaskedTensor) → MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.MaOperator.OpSpMessagePassingOnSubg2D(aggr: str = 'sum')[source]

Bases: OpSpMessagePassing

OpMessagePassingOnSubg2D, but using a sparse adjacency matrix.

forward(A: SparseTensor, X: MaskedTensor, datadict: Dict, tarX: MaskedTensor) → MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.MaOperator.OpSpMessagePassingOnSubg3D(aggr: str = 'sum')[source]

Bases: OpSpMessagePassing

OpMessagePassingOnSubg3D, but using a sparse adjacency matrix.

forward(A: SparseTensor, X: MaskedTensor, datadict: Dict, tarX: MaskedTensor) → MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.MaOperator.OpSpNodeMessagePassing(aggr: str = 'sum')[source]

Bases: Module

Operator for node-level message passing.

Args:

  • aggr (str, optional): The aggregation method for message passing (default: “sum”).

Attributes:

  • aggr (str): The aggregation method used for message passing.

Methods:

  • forward(A: SparseTensor, X: Tensor, tarX: Tensor) -> Tensor: Perform node-level message passing.

forward(A: SparseTensor, X: MaskedTensor, tarX: MaskedTensor) → Tensor[source]

Perform forward pass of node-level message passing.

Args:

  • A (SparseTensor): Adjacency matrix of shape (b, n, n).

  • X (MaskedTensor): Node features of shape (b, n).

  • tarX (MaskedTensor): Target node features of shape (b, n).

Returns:

  • Tensor: The result of the message passing operation.

class pygho.honn.MaOperator.OpUnpooling(dims: int | Iterable[int])[source]

Bases: Module

Operator for unpooling tensors by adding new dimensions.

Parameters:
  • dims (int or Iterable[int]): The dimensions along which to unpool the tensor.

forward(X: MaskedTensor, tarX: MaskedTensor) → MaskedTensor[source]

Perform unpooling on tensors by adding new dimensions.

Parameters:
  • X (MaskedTensor): The input tensor to unpool.

  • tarX (MaskedTensor): The target MaskedTensor to mask the output.

Returns:
  • (MaskedTensor): The result of unpooling.

class pygho.honn.MaOperator.OpUnpoolingRootNodes2D[source]

Bases: OpUnpooling

Operator for copying the root node representation to every node of the subgraph rooted at i, for all nodes i.

class pygho.honn.MaOperator.OpUnpoolingSubgNodes2D[source]

Bases: OpUnpooling

Operator for copying node representations to the corresponding node representations in all subgraphs.

pygho.honn.SpOperator module

Operators for SparseTensor

class pygho.honn.SpOperator.Op2FWL(aggr: str = 'sum', optuplefeat: str = 'X')[source]

Bases: OpMessagePassing

Operator for simulating the 2-Folklore-Weisfeiler-Lehman (FWL) test. X <- X1 * X2.

Args:

  • aggr (str, optional): The aggregation method for message passing (default: “sum”).

Methods:

  • forward(X1: SparseTensor, X2: SparseTensor, datadict: Dict, tarX: Optional[SparseTensor] = None) -> SparseTensor: Simulate the 2-FWL test by performing message passing.

See Also:

  • OpMessagePassing: The base class for generalized message passing.

forward(X1: SparseTensor, X2: SparseTensor, datadict: Dict, tarX: SparseTensor | None = None) → SparseTensor[source]

Simulate the 2-Folklore-Weisfeiler-Lehman (FWL) test by performing message passing.

Args:

  • X1 (SparseTensor): The first input sparse tensor (2D representations).

  • X2 (SparseTensor): The second input sparse tensor (2D representations).

  • datadict (Dict): A dictionary for caching intermediate data.

  • tarX (Optional[SparseTensor]): The target sparse tensor (default: None).

Returns:

  • SparseTensor: The result of simulating the 2-FWL test by performing message passing.

class pygho.honn.SpOperator.OpDiag(dims: Iterable[int], return_sparse: bool = False)[source]

Bases: Module

Operator for extracting diagonal elements from a SparseTensor.

Args:

  • dims (Iterable[int]): A list of dimensions along which to extract diagonal elements.

  • return_sparse (bool, optional): Whether to return the diagonal elements as a SparseTensor or a dense Tensor (default: False).

Methods:

  • forward(A: SparseTensor) -> Union[Tensor, SparseTensor]: Extract diagonal elements from the input SparseTensor.

Notes:

  • This class is used to extract diagonal elements from a SparseTensor along specified dimensions.

  • You can choose to return the diagonal elements as either a dense or sparse tensor.

forward(A: SparseTensor) → Tensor | SparseTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.SpOperator.OpDiag2D[source]

Bases: OpDiag

forward(X: SparseTensor) → Tensor[source]

Extract diagonal elements from the input SparseTensor.

Args:

  • X (SparseTensor): The input SparseTensor from which to extract diagonal elements.

Returns:

  • Union[Tensor, SparseTensor]: The extracted diagonal elements as either a dense or sparse tensor.

class pygho.honn.SpOperator.OpMessagePassing(op0: str = 'X', op1: str = 'X', dim1: int = 1, op2: str = 'A', dim2: int = 0, aggr: str = 'sum', message_func: Callable | None = None)[source]

Bases: Module

Generalized message passing on tuple features.

This class operates on two sparse tensors A and B and performs message passing based on specified operations and dimensions.

Args:

  • op0 (str, optional): The operation name for the first input (default: “X”). It is used to compute the precomputekey and retrieve precomputation results.

  • op1 (str, optional): The operation name for the second input (default: “X”).

  • dim1 (int, optional): The dimension of op1 along which message passing is applied (default: 1).

  • op2 (str, optional): The operation name for the third input (default: “A”).

  • dim2 (int, optional): The dimension to apply op2 (default: 0).

  • aggr (str, optional): The aggregation method for message passing (default: “sum”).

Attributes:

  • dim1 (int): The dimension of op1 along which message passing is applied.

  • dim2 (int): The dimension to apply op2.

  • precomputekey (str): The precomputed key for caching intermediate data.

  • aggr (str): The aggregation method used for message passing.

Methods:

  • forward(A: SparseTensor, B: SparseTensor, datadict: Dict, tarX: Optional[SparseTensor] = None) -> SparseTensor: Perform generalized message passing.

Notes:

  • This class is designed for generalized message passing on tuple features.

  • It supports specifying custom operations and dimensions for message passing.

  • The forward method performs the message passing operation and returns the result.
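Example (a construction sketch using only the documented defaults; the op names select which precomputation entries in datadict are used):

from pygho.honn.SpOperator import OpMessagePassing

op = OpMessagePassing(op0="X", op1="X", dim1=1, op2="A", dim2=0, aggr="sum")
# out = op(A, B, datadict)  # A, B: SparseTensors; datadict holds precomputed indices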

forward(A: SparseTensor, B: SparseTensor, datadict: Dict, tarX: SparseTensor | None = None) → SparseTensor[source]

Perform generalized message passing.

Args:

  • A (SparseTensor): The first input sparse tensor.

  • B (SparseTensor): The second input sparse tensor.

  • datadict (Dict): A dictionary for caching intermediate data, containing precomputation results.

  • tarX (Optional[SparseTensor]): The target sparse tensor (default: None).

Returns:

  • SparseTensor: The result of generalized message passing.

Notes:

  • This method performs the generalized message passing operation using the provided inputs.

  • It supports caching intermediate data in the datadict dictionary.

class pygho.honn.SpOperator.OpMessagePassingCrossSubg2D(aggr: str = 'sum', optuplefeat: str = 'X', opadj: str = 'A', message_func: Callable | None = None)[source]

Bases: OpMessagePassing

Perform message passing across subgraphs within the 2D subgraph Graph Neural Network (GNN).

Args:

  • aggr (str): The aggregation method in message passing

Returns:

  • SparseTensor: The result of message passing on each subgraph within the 2D subgraph GNN.

forward(A: SparseTensor, X: SparseTensor, datadict: Dict, tarX: SparseTensor | None = None) → SparseTensor[source]

Perform message passing across subgraphs within the 2D subgraph Graph Neural Network (GNN).

Args:

  • A (SparseTensor): The adjacency matrix of the whole graph (nxn).

  • X (SparseTensor): The 2D representations of the subgraphs.

  • datadict (Dict): A dictionary for caching intermediate data.

  • tarX (Optional[SparseTensor]): The target sparse tensor (default: None).

Returns:

  • SparseTensor: The result of message passing on each subgraph within the 2D subgraph GNN.

class pygho.honn.SpOperator.OpMessagePassingOnSubg2D(aggr: str = 'sum', optuplefeat: str = 'X', opadj: str = 'A', message_func: Callable | None = None)[source]

Bases: OpMessagePassing

Operator for performing message passing on each subgraph for 2D subgraph Graph Neural Networks.

Args:

  • aggr (str, optional): The aggregation method for message passing (default: “sum”).

Methods:

  • forward(A: SparseTensor, X: SparseTensor, datadict: Dict, tarX: Optional[SparseTensor] = None) -> SparseTensor: Perform message passing on each subgraph within the 2D subgraph GNN.

See Also:

  • OpMessagePassing: The base class for generalized message passing.

forward(A: SparseTensor, X: SparseTensor, datadict: Dict, tarX: SparseTensor | None = None) → SparseTensor[source]

Perform message passing on each subgraph within the 2D subgraph Graph Neural Network (GNN).

Args:

  • A (SparseTensor): The adjacency matrix of the whole graph (nxn).

  • X (SparseTensor): The 2D representations of the subgraphs.

  • datadict (Dict): A dictionary for caching intermediate data.

  • tarX (Optional[SparseTensor]): The target sparse tensor (default: None).

Returns:

  • SparseTensor: The result of message passing on each subgraph within the 2D subgraph GNN.

class pygho.honn.SpOperator.OpMessagePassingOnSubg3D(aggr: str = 'sum', optuplefeat: str = 'X', opadj: str = 'A', message_func: Callable | None = None)[source]

Bases: OpMessagePassing

Operator for performing message passing on each subgraph for 3D subgraph Graph Neural Networks.

Args:

  • aggr (str, optional): The aggregation method for message passing (default: “sum”).

Methods:

  • forward(A: SparseTensor, X: SparseTensor, datadict: Dict, tarX: Optional[SparseTensor] = None) -> SparseTensor: Perform message passing on each subgraph within the 3D subgraph GNN.

See Also:

  • OpMessagePassing: The base class for generalized message passing.

forward(A: SparseTensor, X: SparseTensor, datadict: Dict, tarX: SparseTensor | None = None) → SparseTensor[source]

Perform message passing on each subgraph within the 3D subgraph Graph Neural Network (GNN).

Args:

  • A (SparseTensor): The adjacency matrix of the whole graph (nxn).

  • X (SparseTensor): The 3D representations of the subgraphs.

  • datadict (Dict): A dictionary for caching intermediate data.

  • tarX (Optional[SparseTensor]): The target sparse tensor (default: None).

Returns:

  • SparseTensor: The result of message passing on each subgraph within the 3D subgraph GNN.

class pygho.honn.SpOperator.OpNodeMessagePassing(aggr: str = 'sum')[source]

Bases: Module

Operator for node-level message passing.

Args:

  • aggr (str, optional): The aggregation method for message passing (default: “sum”).

Attributes:

  • aggr (str): The aggregation method used for message passing.

Methods:

  • forward(A: SparseTensor, X: Tensor, tarX: Tensor) -> Tensor: Perform node-level message passing.

forward(A: SparseTensor, X: Tensor, tarX: Tensor | None = None) → Tensor[source]

Perform node-level message passing.

Args:

  • A (SparseTensor): The adjacency matrix of the graph.

  • X (Tensor): The node feature tensor.

  • tarX (Tensor): The target node feature tensor (unused).

Returns:

  • Tensor: The result of node-level message passing (AX).

class pygho.honn.SpOperator.OpPooling(dims: int | Iterable[int], pool: str = 'sum', return_sparse: bool = False)[source]

Bases: Module

Operator for pooling tuple representations by reducing dimensions.

Args:

  • dims (Union[int, Iterable[int]]): The dimensions along which to apply pooling.

  • pool (str, optional): The pooling operation to apply (default: “sum”).

  • return_sparse (bool, optional): Whether to return the pooled tensor as a SparseTensor (default: False).

Methods:

  • forward(X: SparseTensor) -> Union[SparseTensor, Tensor]: Apply pooling operation to the input SparseTensor.

forward(X: SparseTensor) → SparseTensor | Tensor[source]

Apply pooling operation to the input SparseTensor.

Args:

  • X (SparseTensor): The input SparseTensor to which pooling is applied.

Returns:

  • Union[SparseTensor, Tensor]: The pooled tensor as either a dense or sparse tensor.

class pygho.honn.SpOperator.OpPoolingCrossSubg2D(pool)[source]

Bases: OpPooling

Operator for pooling the same node representations within different subgraphs for 2D subgraph GNNs. It returns dense output only.

Parameters:
  • pool (str): The pooling operation to apply.

forward(X: SparseTensor) → Tensor[source]
Parameters:
  • X (SparseTensor): The input SparseTensor representing 2D node representations.

Returns:
  • (Tensor): The pooled dense tensor.

Raises:
  • AssertionError: If X is not a 2D representation.

class pygho.honn.SpOperator.OpPoolingSubg2D(pool)[source]

Bases: OpPooling

Operator for pooling node representations within each subgraph for 2D subgraph GNNs. It returns dense output only.

Parameters:
  • pool (str): The pooling operation to apply.

forward(X: SparseTensor) → Tensor[source]
Parameters:
  • X (SparseTensor): The input SparseTensor representing 2D node representations.

Returns:
  • (Tensor): The pooled dense tensor.

Raises:
  • AssertionError: If X is not a 2D representation.

class pygho.honn.SpOperator.OpPoolingSubg3D(pool)[source]

Bases: OpPooling

Operator for pooling node representations within each subgraph for 3D subgraph GNNs. It returns sparse output only.

Parameters:
  • pool (str): The pooling operation to apply.

forward(X: SparseTensor) → SparseTensor[source]
Parameters:
  • X (SparseTensor): The input SparseTensor representing 3D node representations.

Returns:
  • (SparseTensor): The pooled sparse tensor.

Raises:
  • AssertionError: If X is not a 3D representation.

class pygho.honn.SpOperator.OpUnpooling(dims: int | Iterable[int], fromdense1dim: bool = True)[source]

Bases: Module

Operator for unpooling tensors by adding new dimensions.

Parameters:
  • dims (int or Iterable[int]): The dimensions along which to unpool the tensor.

  • fromdense1dim (bool, optional): Whether to perform unpooling from dense 1D. Default is True.

forward(X: Tensor | SparseTensor, tarX: SparseTensor) → SparseTensor[source]

Perform unpooling on tensors by adding new dimensions.

Parameters:
  • X (Union[Tensor, SparseTensor]): The input tensor to unpool.

  • tarX (SparseTensor): The target SparseTensor.

Returns:
  • (SparseTensor): The result of unpooling as a SparseTensor.

class pygho.honn.SpOperator.OpUnpoolingRootNodes2D[source]

Bases: OpUnpooling

Operator for copying the root node representation to every node of the subgraph rooted at i, for all nodes i.

class pygho.honn.SpOperator.OpUnpoolingSubgNodes2D[source]

Bases: OpUnpooling

Operator for copying node representations to the corresponding node representations in all subgraphs.

pygho.honn.SpOperator.parse_precomputekey(model: Module) → List[str][source]

Parse and return precompute keys from a PyTorch model.

Args:

  • model (Module): The PyTorch model to parse.

Returns:

  • List[str]: A list of unique precompute keys found in the model.

Example:

model = MyModel()  # Initialize your PyTorch model
precompute_keys = parse_precomputekey(model)

Notes:

  • This function is useful for extracting precompute keys from message-passing models.

  • It iterates through the model’s modules and identifies instances of OpMessagePassing modules.

  • The precompute keys associated with these modules are collected and returned as a list.

pygho.honn.TensorOp module

Wrappers unifying operators for sparse and masked tensors

class pygho.honn.TensorOp.Op2FWL(mode: Literal['SS', 'DD'] = 'SS', aggr: Literal['sum', 'mean', 'max'] = 'sum', optuplefeat: str = 'X')[source]

Bases: Module

Simulate the 2-Folklore-Weisfeiler-Lehman (FWL) test with support for both sparse and masked tensors.

This class allows you to simulate the 2-Folklore-Weisfeiler-Lehman (FWL) test by performing message passing between two input tensors, X1 and X2. It supports both sparse and masked tensors and offers flexibility in specifying the aggregation method.

Args:

  • mode (Literal[“SS”, “DD”], optional): The mode indicating tensor types (default: “SS”). SS means sparse adjacency and sparse X, DD means dense adjacency and dense X.

  • aggr (Literal[“sum”, “mean”, “max”], optional): The aggregation method for message passing (default: “sum”).

See Also:

  • SpOperator.Op2FWL: Sparse tensor operator for simulating 2-FWL.

  • MaOperator.Op2FWL: Masked tensor operator for simulating 2-FWL.
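Example (a construction sketch; X1, X2, and tarX are assumed to be produced by pygho's dense data pipeline when mode="DD"):

from pygho.honn.TensorOp import Op2FWL

op = Op2FWL(mode="DD", aggr="sum")  # simulates X <- X1 * X2 via message passing
# out = op(X1, X2, None, tarX)  # X1, X2, tarX: MaskedTensors of shape (b, n, n, *)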

forward(X1: SparseTensor | MaskedTensor, X2: SparseTensor | MaskedTensor, datadict: Dict | None = None, tarX: SparseTensor | MaskedTensor | None = None) → SparseTensor | MaskedTensor[source]

Simulate the 2-Folklore-Weisfeiler-Lehman (FWL) test by performing message passing.

Args:

  • X1 (Union[SparseTensor, MaskedTensor]): The first input tensor.

  • X2 (Union[SparseTensor, MaskedTensor]): The second input tensor.

  • datadict (Optional[Dict]): A dictionary for caching intermediate data (not used in this method).

  • tarX (Optional[Union[SparseTensor, MaskedTensor]]): The target tensor to store the result.

Returns:

  • Union[SparseTensor, MaskedTensor]: The result of simulating the 2-Folklore-Weisfeiler-Lehman (FWL) test.

class pygho.honn.TensorOp.OpDiag2D(mode: Literal['D', 'S'] = 'S')[source]

Bases: Module

Perform diagonalization operation for 2D subgraph Graph Neural Networks with support for both sparse and masked tensors.

Args:

  • mode (Literal[“S”, “D”], optional): The mode indicating tensor types (default: “S”). S means sparse, D means dense

See Also:

  • SpOperator.OpDiag2D: Sparse tensor operator for diagonalization in 2D GNNs.

  • MaOperator.OpDiag2D: Masked tensor operator for diagonalization in 2D GNNs.

forward(X: MaskedTensor | SparseTensor) → MaskedTensor | Tensor[source]

Perform diagonalization operation for 2D subgraph Graph Neural Networks.

Args:

  • X (Union[MaskedTensor, SparseTensor]): The input tensor for diagonalization.

Returns:

  • Union[MaskedTensor, Tensor]: The result of the diagonalization operation.

class pygho.honn.TensorOp.OpMessagePassingCrossSubg2D(mode: Literal['SD', 'SS', 'DD'] = 'SS', aggr: Literal['sum', 'mean', 'max'] = 'sum', optuplefeat: str = 'X', opadj: str = 'A', message_func: Callable | None = None)[source]

Bases: Module

Perform message passing across subgraphs within the 2D subgraph Graph Neural Network (GNN) with support for both sparse and masked tensors.

This class is designed for performing message passing across subgraphs within the 2D subgraph Graph Neural Network (GNN). It supports both sparse and masked tensors and provides flexibility in specifying the aggregation method.

Args:

  • mode (Literal[“SD”, “SS”, “DD”], optional): The mode indicating tensor types (default: “SS”).

  • aggr (Literal[“sum”, “mean”, “max”], optional): The aggregation method for message passing (default: “sum”).

See Also:

  • SpOperator.OpMessagePassingCrossSubg2D: Sparse tensor operator for cross-subgraph message passing in 2D GNNs.

  • MaOperator.OpSpMessagePassingCrossSubg2D: Masked tensor operator for cross-subgraph message passing in 2D GNNs.

  • MaOperator.OpMessagePassingCrossSubg2D: Masked tensor operator for cross-subgraph message passing in 2D GNNs with dense adjacency.

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: Dict | None = None, tarX: SparseTensor | MaskedTensor | None = None) → SparseTensor | MaskedTensor[source]

Perform message passing across subgraphs within the 2D subgraph Graph Neural Network (GNN).

Args:

  • A (Union[SparseTensor, MaskedTensor]): The input tensor representing the adjacency matrix of subgraphs.

  • X (Union[SparseTensor, MaskedTensor]): The input tensor representing 2D representations of subgraph nodes.

  • datadict (Optional[Dict]): A dictionary for caching intermediate data (not used in this method).

  • tarX (Optional[Union[SparseTensor, MaskedTensor]]): The target tensor to store the result.

Returns:

  • Union[SparseTensor, MaskedTensor]: The result of message passing across subgraphs.

class pygho.honn.TensorOp.OpMessagePassingOnSubg2D(mode: Literal['SD', 'SS', 'DD'] = 'SS', aggr: Literal['sum', 'mean', 'max'] = 'sum', optuplefeat: str = 'X', opadj: str = 'A', message_func: Callable | None = None)[source]

Bases: Module

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: Dict | None = None, tarX: SparseTensor | MaskedTensor | None = None) → SparseTensor | MaskedTensor[source]

Perform message passing on each subgraph for 2D subgraph Graph Neural Networks.

Args:

  • A (Union[SparseTensor, MaskedTensor]): The input tensor representing the adjacency matrix of subgraphs.

  • X (Union[SparseTensor, MaskedTensor]): The input tensor representing 2D representations of subgraph nodes.

  • datadict (Optional[Dict]): A dictionary for caching intermediate data (not used in this method).

  • tarX (Optional[Union[SparseTensor, MaskedTensor]]): The target tensor to store the result.

Returns:

  • Union[SparseTensor, MaskedTensor]: The result of message passing on each subgraph.
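Example (a sketch of how this wrapper is typically combined with pooling in a 2D subgraph layer; reusing X as the target tarX, so the output keeps X's sparsity pattern, is an assumption of this sketch):

from pygho.honn.TensorOp import OpMessagePassingOnSubg2D, OpPoolingSubg2D

mp = OpMessagePassingOnSubg2D(mode="SS", aggr="sum")
pool = OpPoolingSubg2D(mode="S", pool="mean")
# X = mp(A, X, datadict, X)  # message passing within each subgraph
# h = pool(X)                # pool each subgraph into a node-level representation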

class pygho.honn.TensorOp.OpMessagePassingOnSubg3D(mode: Literal['SD', 'SS', 'DD'] = 'SS', aggr: Literal['sum', 'mean', 'max'] = 'sum', optuplefeat: str = 'X', opadj: str = 'A', message_func: Callable | None = None)[source]

Bases: Module

Perform message passing on each subgraph for 3D subgraph Graph Neural Networks with support for both sparse and masked tensors.

This class is designed for performing message passing on each subgraph within 3D subgraph Graph Neural Networks. It supports both sparse and masked tensors and provides flexibility in specifying the aggregation method.

Args:

  • mode (Literal[“SD”, “SS”, “DD”], optional): The mode indicating tensor types (default: “SS”). SS means sparse adjacency and sparse X, SD means sparse adjacency and dense X, DD means dense adjacency and dense X.

  • aggr (Literal[“sum”, “mean”, “max”], optional): The aggregation method for message passing (default: “sum”).

See Also:

  • SpOperator.OpMessagePassingOnSubg3D: Sparse tensor operator for message passing on 3D subgraphs.

  • MaOperator.OpSpMessagePassingOnSubg3D: Masked tensor operator for message passing on 3D subgraphs.

  • MaOperator.OpMessagePassingOnSubg3D: Masked tensor operator for message passing on 3D subgraphs with dense adjacency.

forward(A: SparseTensor | MaskedTensor, X: SparseTensor | MaskedTensor, datadict: Dict | None = None, tarX: SparseTensor | MaskedTensor | None = None) → SparseTensor | MaskedTensor[source]

Perform message passing on each subgraph for 3D subgraph Graph Neural Networks.

Args:

  • A (Union[SparseTensor, MaskedTensor]): The input tensor representing the adjacency matrix of subgraphs.

  • X (Union[SparseTensor, MaskedTensor]): The input tensor representing 3D representations of subgraph nodes.

  • datadict (Optional[Dict]): A dictionary for caching intermediate data (not used in this method).

  • tarX (Optional[Union[SparseTensor, MaskedTensor]]): The target tensor to store the result.

Returns:

  • Union[SparseTensor, MaskedTensor]: The result of message passing on each subgraph.

class pygho.honn.TensorOp.OpNodeMessagePassing(mode: Literal['SS', 'SD', 'DD'] = 'SS', aggr: str = 'sum')[source]

Bases: Module

Perform node-wise message passing with support for both sparse and masked tensors.

This class wraps the message passing operator, allowing it to be applied to both sparse and masked tensors. It can perform node-wise message passing based on the provided mode and aggregation method.

Args:

  • mode (Literal[“SS”, “SD”, “DD”], optional): The mode indicating tensor types (default: “SS”). SS means sparse adjacency and sparse X, SD means sparse adjacency and dense X, DD means dense adjacency and dense X.

  • aggr (str, optional): The aggregation method for message passing (default: “sum”).

See Also:

  • SpOperator.OpNodeMessagePassing: Sparse tensor node-wise message passing operator.

  • MaOperator.OpSpNodeMessagePassing: Masked tensor node-wise message passing operator for sparse adjacency.

  • MaOperator.OpNodeMessagePassing: Masked tensor node-wise message passing operator for dense adjacency.

Methods:

  • forward(A: Union[SparseTensor, MaskedTensor], X: Union[Tensor, MaskedTensor]) -> Union[Tensor, MaskedTensor]: Perform node-wise message passing on the input tensors based on the specified mode and aggregation method.
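Example (a construction sketch; A and X are assumed to come from pygho's data pipeline, with A a SparseTensor and X a node feature Tensor in "SS" mode):

from pygho.honn.TensorOp import OpNodeMessagePassing

op = OpNodeMessagePassing(mode="SS", aggr="sum")
# h = op(A, X)  # node-level message passing, aggregating neighbors as in AX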

forward(A: SparseTensor | MaskedTensor, X: Tensor | MaskedTensor) → Tensor | MaskedTensor[source]

Perform node-wise message passing on the input tensors.

Args:

  • A (Union[SparseTensor, MaskedTensor]): The input adjacency tensor.

  • X (Union[Tensor, MaskedTensor]): The input tensor representing tuple features.

Returns:

  • Union[Tensor, MaskedTensor]: The result of node-wise message passing.

class pygho.honn.TensorOp.OpPoolingCrossSubg2D(mode: Literal['S', 'D'] = 'S', pool: str = 'sum')[source]

Bases: Module

forward(X: MaskedTensor | SparseTensor) → MaskedTensor | Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.TensorOp.OpPoolingSubg2D(mode: Literal['S', 'D'] = 'S', pool: str = 'sum')[source]

Bases: Module

Perform pooling operation for subgraphs within 2D subgraph Graph Neural Networks by reducing dimensions.

Args:

  • mode (Literal[“S”, “D”], optional): The mode indicating tensor types (default: “S”). S means sparse, D means dense

  • pool (Literal[“sum”, “mean”, “max”], optional): The pooling method (default: “sum”).

See Also:

  • SpOperator.OpPoolingSubg2D: Sparse tensor operator for pooling in 2D GNNs.

  • MaOperator.OpPoolingSubg2D: Masked tensor operator for pooling in 2D GNNs.

forward(X: MaskedTensor | SparseTensor) → MaskedTensor | Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.TensorOp.OpPoolingSubg3D(mode: Literal['S', 'D'] = 'S', pool: str = 'sum')[source]

Bases: Module

This class is designed for performing the pooling operation within each subgraph for 3D subgraph Graph Neural Networks (GNNs).

Args:

  • mode (Literal[“S”, “D”], optional): The mode indicating tensor types (default: “S”). S means sparse, D means dense.

  • pool (Literal[“sum”, “mean”, “max”], optional): The pooling method (default: “sum”).

See Also:

  • SpOperator.OpPoolingSubg3D: Sparse tensor operator for subgraph pooling in 3D GNNs.

  • MaOperator.OpPoolingSubg3D: Masked tensor operator for subgraph pooling in 3D GNNs.

forward(X: MaskedTensor | SparseTensor) → MaskedTensor | Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.TensorOp.OpUnpoolingRootNodes2D(mode: Literal['S', 'D'] = 'S')[source]

Bases: Module

This class is designed for performing the unpooling operation for root nodes within 2D subgraph Graph Neural Networks. It supports both sparse and masked tensors.

Args:

  • mode (Literal[“S”, “D”], optional): The mode indicating tensor types (default: “S”).

See Also:

  • SpOperator.OpUnpoolingRootNodes2D: Sparse tensor operator for unpooling root nodes in 2D GNNs.

  • MaOperator.OpUnpoolingRootNodes2D: Masked tensor operator for unpooling root nodes in 2D GNNs.

forward(X: Tensor | MaskedTensor, tarX: SparseTensor | MaskedTensor) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.TensorOp.OpUnpoolingSubgNodes2D(mode: Literal['S', 'D'] = 'S')[source]

Bases: Module

This class is designed for performing the unpooling operation for subgraph nodes within 2D subgraph Graph Neural Networks. It supports both sparse and masked tensors.

Args:

  • mode (Literal[“S”, “D”], optional): The mode indicating tensor types (default: “S”). S means sparse, D means dense.

See Also:

  • SpOperator.OpUnpoolingSubgNodes2D: Sparse tensor operator for unpooling subgraph nodes in 2D GNNs.

  • MaOperator.OpUnpoolingSubgNodes2D: Masked tensor operator for unpooling subgraph nodes in 2D GNNs.

forward(X: Tensor | MaskedTensor, tarX: SparseTensor | MaskedTensor) → SparseTensor | MaskedTensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

pygho.honn.utils module

A general MLP class

class pygho.honn.utils.BatchNorm(dim, normparam=0.1)[source]

Bases: Module

forward(x: Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.utils.LayerNorm(dim, normparam=0.1)[source]

Bases: Module

forward(x: Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.utils.MLP(hiddim: int, outdim: int, numlayer: int, tailact: bool, dp: float = 0, norm: str = 'bn', act: str = 'relu', tailbias=True, normparam: float = 0.1)[source]

Bases: Module

Multi-Layer Perceptron (MLP) module with customizable layers and activation functions.

Args:

  • hiddim (int): Number of hidden units in each layer.

  • outdim (int): Number of output units.

  • numlayer (int): Number of hidden layers in the MLP.

  • tailact (bool): Whether to apply the activation function after the final layer.

  • dp (float): Dropout probability, if greater than 0, dropout layers are added.

  • norm (str): Normalization method to apply between layers (e.g., “bn” for BatchNorm).

  • act (str): Activation function to apply between layers (e.g., “relu”).

  • tailbias (bool): Whether to include a bias term in the final linear layer.

  • normparam (float): Parameter for normalization (e.g., momentum for BatchNorm).

Methods:

  • forward(x: Tensor) -> Tensor: Forward pass of the MLP.

Notes:

  • This class defines a multi-layer perceptron with customizable layers, activation functions, normalization, and dropout.
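Example (a runnable sketch; that the input feature dimension equals hiddim is our reading of the argument list):

import torch
from pygho.honn.utils import MLP

mlp = MLP(hiddim=64, outdim=32, numlayer=2, tailact=False, dp=0.1, norm="bn", act="relu")
x = torch.randn(128, 64)   # a batch of 128 feature vectors
y = mlp(x)                 # -> shape (128, 32)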

forward(x: Tensor)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.utils.NoneNorm(dim=0, normparam=0)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class pygho.honn.utils.NormMomentumScheduler(mfunc: Callable, initmomentum: float, normtype=BatchNorm1d)[source]

Bases: object

step(model: Module)[source]

Module contents