5. MeshAutoEncoder
Difference between nn.Linear and nn.Embedding
Linear(4, 3) takes (in_features, out_features) and creates a linear.weight of shape (3, 4)
y = Wx, i.e. a (out, in) weight times an (in, …) input
nn.Linear requires a dense input matrix (a sparse row must be converted first, e.g. row.toarray())
Embedding(4, 3) has a weight of shape (4, 3), i.e. linear.weight.T
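The equivalence above can be checked directly: an Embedding lookup of index i gives the same result as multiplying a Linear layer (no bias) by the one-hot vector e_i, once the two weights are tied by a transpose. A minimal sketch (names like `SimpleSAGELayer` elsewhere are mine; here everything is standard PyTorch):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Linear(4, 3): in_features=4, out_features=3 -> weight shape (3, 4)
linear = nn.Linear(4, 3, bias=False)
# Embedding(4, 3): 4 rows of 3-dim vectors -> weight shape (4, 3)
embedding = nn.Embedding(4, 3)

# Tie the weights: embedding.weight equals linear.weight transposed
with torch.no_grad():
    embedding.weight.copy_(linear.weight.T)

# Looking up index 2 == multiplying Linear by the one-hot vector e_2
idx = torch.tensor([2])
one_hot = torch.zeros(1, 4)
one_hot[0, 2] = 1.0

out_embed = embedding(idx)    # sparse row lookup, no matmul
out_linear = linear(one_hot)  # dense matmul y = x @ W.T

assert torch.allclose(out_embed, out_linear)
```

So Embedding is effectively a one-hot-times-Linear that skips the dense multiplication and just indexes the weight row.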
Difference between dense and sparse matrices
- Sparse matrices are more memory-efficient than dense matrices when dealing with large matrices that have a significant number of zero elements. By storing only the non-zero elements explicitly, sparse matrices can save memory and computation time for certain operations.
- However, some operations and algorithms still require dense matrices. Matrix multiplication and many linear algebra routines are easier to implement, and often faster, on dense storage: it supports straightforward element-wise access and can leverage optimized BLAS/LAPACK libraries.
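The trade-off above is easy to see with SciPy's CSR format, which stores only the non-zero values and their indices; converting back with `.toarray()` is what a dense-only consumer (such as nn.Linear, per the note above) needs:

```python
import numpy as np
from scipy.sparse import csr_matrix

# A mostly-zero matrix: CSR keeps only the non-zero entries
dense = np.zeros((1000, 1000), dtype=np.float32)
dense[0, 1] = 2.0
dense[500, 3] = 7.0

sparse = csr_matrix(dense)

# Dense storage: 1000 * 1000 * 4 bytes = 4,000,000 bytes.
# CSR stores just 2 float32 values (8 bytes) plus their indices.
print(dense.nbytes)        # 4000000
print(sparse.data.nbytes)  # 8

# Dense-only consumers need an explicit conversion of the needed rows
row_dense = sparse[0].toarray()
assert row_dense.shape == (1, 1000)
```

Converting one row at a time (`sparse[row].toarray()`) keeps the memory cost proportional to a single row rather than the whole matrix.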
SAGEConv
Implements the GraphSAGE (SAmple and aggreGatE) convolutional layer.
Each node in a graph updates its representation by aggregating features from its neighbors. Instead of using the entire neighborhood (as standard GCNs do), GraphSAGE samples a fixed number of neighbors. After gathering information from them (with one of several possible aggregation functions, e.g. mean, max, or LSTM), the node combines its own features with the aggregated neighbor features to compute its updated representation.
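The update rule described above can be sketched in plain PyTorch. This is a minimal mean-aggregator layer of my own (not torch_geometric's SAGEConv, and without the neighbor sampling step, which happens at the mini-batch level): transform the node's own features and the mean of its neighbors' features separately, then combine.

```python
import torch
import torch.nn as nn

class SimpleSAGELayer(nn.Module):
    """Illustrative mean-aggregator GraphSAGE layer (hypothetical name, not a library class)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)   # transforms the node's own features
        self.lin_neigh = nn.Linear(in_dim, out_dim)  # transforms aggregated neighbor features

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (2, E) rows of (source, target)
        src, dst = edge_index
        # Sum each target node's incoming neighbor features, then divide by degree
        neigh_sum = torch.zeros_like(x)
        neigh_sum.index_add_(0, dst, x[src])
        deg = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones(src.size(0)))
        neigh_mean = neigh_sum / deg.clamp(min=1).unsqueeze(-1)
        # Combine self and neighbor information
        return torch.relu(self.lin_self(x) + self.lin_neigh(neigh_mean))

# Tiny graph: 3 nodes, edges 0->1, 2->1, 1->0
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 2, 1], [1, 1, 0]])
layer = SimpleSAGELayer(4, 8)
out = layer(x, edge_index)
assert out.shape == (3, 8)
```

In practice torch_geometric's `SAGEConv(in_channels, out_channels)` with the same `(x, edge_index)` calling convention replaces this sketch.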
- Author: ran2323
- URL: https://www.blueif.me//article/12671a79-6e22-80e9-b533-cbb545af28d3
- Copyright: All articles in this blog, unless otherwise stated, are licensed under the BY-NC-SA agreement. Please credit the source!