1. Padding handling

 
import torch
import torch.nn.functional as F

def pad_at_dim(t, padding, dim = -1, value = 1):
    t_dim = t.ndim
    dim_after_padding = (t_dim - dim - 1) if dim >= 0 else (-dim - 1)
    zeros = (0, 0) * dim_after_padding
    # note: F.pad counts its padding tuple from the last dim backwards; each dim
    # takes two numbers (front pad, back pad). After stepping past the trailing
    # dims with zeros, we reach the dim we want to adjust and supply our padding.
    return F.pad(t, (*zeros, *padding), value = value)
 
This one is much easier to read: it simply pads up to the target length.
def pad_to_length(t, length, dim = -1, value = 0, left = True):
    remain = length - t.shape[dim]
    if remain <= 0:
        return t

    pad = (remain, 0) if left else (0, remain)
    # F.pad has no dim argument, so reuse pad_at_dim defined above
    return pad_at_dim(t, pad, dim = dim, value = value)
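
A quick shape check of the two helpers (example tensors chosen purely for illustration):

    t = torch.ones(2, 3, 5)
    pad_at_dim(t, (1, 2), dim = 1).shape               # torch.Size([2, 6, 5])
    pad_to_length(t, 8, dim = -1, left = False).shape  # torch.Size([2, 3, 8])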
 

2. Angles, areas, and normals
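
The idea: from each triangular face's three vertex coordinates you can derive its interior angles (dot-product formula), its area (half the cross-product norm), and its unit normal (normalized cross product). A minimal sketch, assuming face_coords of shape (b, nf, 3, 3); names are illustrative, not the library's exact code:

    def derive_face_features(face_coords):
        # face_coords: (b, nf, 3, 3) - batch, faces, 3 vertices, xyz
        v0, v1, v2 = face_coords.unbind(dim = -2)
        e01, e02, e12 = v1 - v0, v2 - v0, v2 - v1

        # interior angle between two edge vectors: cos(theta) = <a, b> / (|a||b|)
        def angle(a, b):
            cos = (a * b).sum(dim = -1) / (a.norm(dim = -1) * b.norm(dim = -1)).clamp(min = 1e-8)
            return cos.clamp(-1, 1).acos()

        angles = torch.stack((angle(e01, e02), angle(-e01, e12), angle(-e02, -e12)), dim = -1)

        # the cross product of two edges has norm equal to twice the face area,
        # and its direction is the face normal
        cross = torch.cross(e01, e02, dim = -1)
        area = cross.norm(dim = -1, keepdim = True) * 0.5
        normal = cross / cross.norm(dim = -1, keepdim = True).clamp(min = 1e-8)
        return angles, area, normal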

 

3. FiLM (Feature-wise Linear Modulation) and Squeeze-and-Excitation (SE) block

*FiLM layers modulate the input feature map x with learned parameters gamma (scaling) and beta (shifting), which are computed from the conditioning information (cond).
Each cond vector here corresponds to one element of x along the batch dimension.
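
A minimal FiLM sketch following the description above (the two linear projections and their names are my assumption):

    import torch
    from torch import nn

    class FiLM(nn.Module):
        def __init__(self, cond_dim, dim):
            super().__init__()
            # gamma (scale) and beta (shift) are both predicted from cond
            self.to_gamma = nn.Linear(cond_dim, dim, bias = False)
            self.to_beta = nn.Linear(cond_dim, dim, bias = False)

        def forward(self, x, cond):
            # x: (b, c, n), cond: (b, cond_dim) - one cond vector per batch element
            gamma = self.to_gamma(cond).unsqueeze(-1)  # (b, c, 1)
            beta = self.to_beta(cond).unsqueeze(-1)    # (b, c, 1)
            return x * gamma + beta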
 
*SE blocks adaptively recalibrate channel-wise feature responses by learning a weight for each channel from global (average or masked) statistics.
 
Shape flow: (b, c, n) → (b, c) → (b, c, 1), then x: (b, c, n) * (b, c, 1).
Compute each channel's average (to capture global info) and feed it through a small linear net,
which decides whether to emphasize or diminish each channel based on its importance;
finally, multiply the input feature map by these learned weights, effectively reweighing each channel.
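
Sketched as code (the reduction ratio and activation choices are assumptions; nn is torch.nn as imported above):

    class SqueezeExcite(nn.Module):
        def __init__(self, dim, reduction = 16):
            super().__init__()
            hidden = max(dim // reduction, 1)
            self.net = nn.Sequential(
                nn.Linear(dim, hidden),
                nn.SiLU(),
                nn.Linear(hidden, dim),
                nn.Sigmoid()
            )

        def forward(self, x):
            squeezed = x.mean(dim = -1)                 # squeeze: (b, c, n) -> (b, c)
            weights = self.net(squeezed).unsqueeze(-1)  # excite:  (b, c)    -> (b, c, 1)
            return x * weights                          # reweigh each channel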
 

4. Block (+ ResBlock + GateLoopBlock)

 
Block: proj(conv1d) → norm → activation → dropout (see the sketch after the ResBlock layout below)
 
ResBlock:
Block1(dim, dim_out) →
Block2(dim_out, dim_out) →
squeeze_excite(dim_out) +
residual_conv(dim, dim_out, 1)
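
A hedged sketch of the two blocks above (kernel size, norm, and activation are assumptions; SqueezeExcite is the module sketched in section 3):

    class Block(nn.Module):
        def __init__(self, dim, dim_out, dropout = 0.):
            super().__init__()
            self.proj = nn.Conv1d(dim, dim_out, 3, padding = 1)  # proj (conv1d)
            self.norm = nn.GroupNorm(1, dim_out)                 # norm
            self.act = nn.SiLU()                                 # activation
            self.dropout = nn.Dropout(dropout)                   # dropout

        def forward(self, x):
            return self.dropout(self.act(self.norm(self.proj(x))))

    class ResBlock(nn.Module):
        def __init__(self, dim, dim_out):
            super().__init__()
            self.block1 = Block(dim, dim_out)
            self.block2 = Block(dim_out, dim_out)
            self.excite = SqueezeExcite(dim_out)
            self.residual_conv = nn.Conv1d(dim, dim_out, 1)  # 1x1 conv to match dims

        def forward(self, x):
            res = self.residual_conv(x)
            h = self.block2(self.block1(x))
            return self.excite(h) + res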
 
*The GateLoopBlock class is a neural network module that applies a sequence of gated layers (specifically SimpleGateLoopLayer instances) to an input x. Each layer in gateloops modifies the input, and these modifications are accumulated iteratively. The structure is designed for layers with recurrent behavior or caching mechanisms, making it useful for tasks like sequence processing or memory-augmented networks.
The recurrence within each GateLoop layer lets the model revisit previous outputs and refine them iteratively, mimicking the behavior of recurrent neural networks (RNNs).
It also helps keep the model from overfitting by focusing on the most relevant information.
 
Create depth gateloop layers inside a ModuleList([]).
Each layer takes x plus its layer-specific cache and returns out plus a new_cache (collected into new_caches);
then x = x + out (residual accumulation).
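
A sketch following the description above (SimpleGateLoopLayer comes from the gateloop-transformer package; the cache/return_cache interface is assumed from its usage here):

    from gateloop_transformer import SimpleGateLoopLayer

    class GateLoopBlock(nn.Module):
        def __init__(self, dim, depth):
            super().__init__()
            self.gateloops = nn.ModuleList([SimpleGateLoopLayer(dim = dim) for _ in range(depth)])

        def forward(self, x, cache = None):
            cache = iter(cache if cache is not None else ())
            new_caches = []

            for gateloop in self.gateloops:
                layer_cache = next(cache, None)  # each layer gets its own cache entry
                out, new_cache = gateloop(x, cache = layer_cache, return_cache = True)
                new_caches.append(new_cache)
                x = x + out                      # residual accumulation

            return x, new_caches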
 
 
 