1. The Lipschitz Stability Constraint in GANs
Simply put, the Lipschitz constraint requires that, over the entire domain of $f(\cdot)$,
$$\frac{\Vert f(x)-f(x') \Vert_2}{\Vert x-x' \Vert_2} \le M \quad (3)$$
where $M$ is a constant. A function $f(\cdot)$ satisfying Eq. (3) cannot change too fast: its gradient is always bounded, staying within $M$ even at its steepest. For example, $f(x)=\sin(x)$ satisfies (3) with $M=1$, since $\vert \sin(x)-\sin(x') \vert \le \vert x-x' \vert$.
WGAN was the first to require that the Discriminator's weight matrices satisfy the Lipschitz constraint, but its method is rather blunt: it simply clips every element of the weight matrix so that none exceeds a fixed value. Clipping does guarantee the Lipschitz constraint, but in capping the values it also destroys the structure of the weight matrix, namely the relative proportions between its parameters. To fix this, [1] proposed Spectral Normalization, a method that satisfies the Lipschitz condition without damaging the matrix structure. A small contrast between the two is sketched below.
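For a concrete contrast (my own sketch, not taken from either paper): clipping caps every entry at a constant $c$, collapsing the proportions between entries, while dividing by the spectral norm rescales the whole matrix uniformly.

```python
import torch

W = torch.tensor([[0.5, 2.0],
                  [0.1, 4.0]])
c = 0.01                                      # WGAN's default clipping threshold
clipped = W.clamp(-c, c)                      # weight clipping: every entry capped at c
normalized = W / torch.linalg.svdvals(W)[0]   # spectral normalization: uniform rescaling
print(clipped)      # all entries collapse to 0.01; the ratios are destroyed
print(normalized)   # same ratios as W, but the largest singular value is now 1
```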
2. Analysis of Multi-Layer Neural Networks
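The key result of this analysis, which the implementation in the next section relies on (stated here as my one-paragraph summary of [1]): for a network $f(x) = W_L\, a_{L-1}(W_{L-1} \cdots a_1(W_1 x))$ whose activations $a_l$ are themselves 1-Lipschitz (e.g. ReLU, LeakyReLU), the Lipschitz constant of the composition is bounded by the product of the per-layer constants, and the Lipschitz constant of a linear map $W_l$ is exactly its spectral norm $\sigma(W_l)$, so

$$\Vert f \Vert_{\mathrm{Lip}} \le \prod_{l=1}^{L} \sigma(W_l).$$

Dividing each $W_l$ by its own $\sigma(W_l)$ therefore bounds the whole Discriminator's Lipschitz constant by 1.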
3. Implementing Spectral Normalization
Obtaining the spectral norm of each layer's weight matrix requires the singular values of $W_l$, which are expensive to compute exactly, so the "power iteration" method is used to approximate it. The iteration runs as follows:
$$
\begin{aligned}
&1.\ v_l^{0} \leftarrow \text{a random Gaussian vector}\\
&2.\ \text{loop } k:\\
&\qquad u_l^{k} \leftarrow W_l v_l^{k-1}, \quad \text{normalize: } u_l^{k} \leftarrow \frac{u_l^{k}}{\Vert u_l^{k} \Vert},\\
&\qquad v_l^{k} \leftarrow (W_l)^T u_l^{k}, \quad \text{normalize: } v_l^{k} \leftarrow \frac{v_l^{k}}{\Vert v_l^{k} \Vert},\\
&\quad \text{end loop}\\
&3.\ \sigma_l(W) = (u_l^{k})^T W_l v_l^{k}
\end{aligned}
$$
Once the spectral norm is obtained, every parameter in the weight matrix is divided by it, which completes the normalization. In fact, after enough iterations, $\mathbf u^k$ converges to the left singular vector of $W$ associated with its largest singular value, i.e. the top eigenvector of $WW^T$, which gives:
$$WW^T \mathbf u = \sigma(W)^2\, \mathbf u \;\Rightarrow\; \mathbf u^T W W^T \mathbf u = \sigma(W)^2, \quad \text{since } \Vert \mathbf u \Vert = 1$$

$$\sigma(W) = \mathbf u^T W \mathbf v, \quad \text{where } \mathbf v = \frac{W^T \mathbf u}{\Vert W^T \mathbf u \Vert}$$
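As a tiny worked check of these relations (my own example, not from [1]): for

$$W = \begin{pmatrix} 3 & 0 \\ 4 & 0 \end{pmatrix}, \qquad WW^T = \begin{pmatrix} 9 & 12 \\ 12 & 16 \end{pmatrix},$$

the eigenvalues of $WW^T$ are 25 and 0, so $\sigma(W) = 5$. With $\mathbf u = (3/5,\, 4/5)^T$ and $\mathbf v = W^T\mathbf u / \Vert W^T \mathbf u \Vert = (1,\, 0)^T$, indeed $\mathbf u^T W \mathbf v = 5$.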
A concrete PyTorch implementation of spectral normalization can be found in [3]; excerpts are given below.
1. Computing the spectral norm
```python
import torch
import torch.nn.functional as F

# define _l2normalize
def _l2normalize(v, eps=1e-12):
    return v / (torch.norm(v) + eps)

def max_singular_value(W, u=None, Ip=1):
    """
    power iteration for weight parameter
    """
    if not Ip >= 1:
        raise ValueError("Power iteration should be a positive integer")
    if u is None:
        # assumes a GPU is available when no initial u is supplied
        u = torch.FloatTensor(1, W.size(0)).normal_(0, 1).cuda()
    _u = u
    for _ in range(Ip):
        _v = _l2normalize(torch.matmul(_u, W.data), eps=1e-12)   # v: estimate of the top right singular vector
        _u = _l2normalize(torch.matmul(_v, torch.transpose(W.data, 0, 1)), eps=1e-12)  # u: top left singular vector
    sigma = torch.sum(F.linear(_u, torch.transpose(W.data, 0, 1)) * _v)  # sigma = u W v^T
    return sigma, _u
```
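A quick sanity check of max_singular_value (a sketch I added, not part of [3]; u is passed explicitly so the CUDA branch is skipped, and torch.linalg.svdvals from a recent PyTorch provides the exact reference value):

```python
W = torch.randn(32, 64)
u0 = torch.randn(1, 32)
sigma, u = max_singular_value(W, u=u0, Ip=30)
print(sigma.item())                       # power-iteration estimate
print(torch.linalg.svdvals(W)[0].item())  # exact largest singular value; should be close
```

Note that Ip defaults to 1: in the layers below, u is kept in a persistent buffer and reused across training steps, so a single iteration per step is enough for the estimate to track the slowly changing weights. This amortized scheme is the one proposed in [1].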
Fully connected layer:

```python
from torch.nn import Linear

class SNLinear(Linear):
    def __init__(self, in_features, out_features, bias=True):
        super(SNLinear, self).__init__(in_features, out_features, bias)
        # persistent estimate of the top singular vector, reused across steps
        self.register_buffer('u', torch.Tensor(1, out_features).normal_())

    @property
    def W_(self):
        w_mat = self.weight.view(self.weight.size(0), -1)
        sigma, _u = max_singular_value(w_mat, self.u)
        self.u.copy_(_u)              # refine the stored singular-vector estimate
        return self.weight / sigma    # spectrally normalized weight

    def forward(self, input):
        return F.linear(input, self.W_, self.bias)
```
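A minimal usage sketch (my addition): after a few forward passes the power-iteration estimate has converged, and the effective weight W_ has spectral norm close to 1.

```python
layer = SNLinear(128, 64)
x = torch.randn(8, 128)
for _ in range(20):
    y = layer(x)   # every access to W_ runs one power-iteration step and refines u
print(torch.linalg.svdvals(layer.W_.detach())[0].item())  # approximately 1.0
```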
Convolutional layer:
```python
from torch.nn.modules import conv
from torch.nn.modules.utils import _pair

class SNConv2d(conv._ConvNd):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True):
        kernel_size = _pair(kernel_size)
        stride = _pair(stride)
        padding = _pair(padding)
        dilation = _pair(dilation)
        # note: newer PyTorch versions also expect a padding_mode argument here
        super(SNConv2d, self).__init__(
            in_channels, out_channels, kernel_size, stride, padding, dilation,
            False, _pair(0), groups, bias)
        self.register_buffer('u', torch.Tensor(1, out_channels).normal_())

    @property
    def W_(self):
        # flatten the (out, in, kh, kw) kernel into a 2-D matrix for power iteration
        w_mat = self.weight.view(self.weight.size(0), -1)
        sigma, _u = max_singular_value(w_mat, self.u)
        self.u.copy_(_u)
        return self.weight / sigma

    def forward(self, input):
        return F.conv2d(input, self.W_, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```
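The detail worth noting is that the 4-D kernel of shape (out_channels, in_channels, kh, kw) is flattened into a 2-D matrix before power iteration. A shape check (my sketch, assuming a PyTorch version compatible with the _ConvNd call above):

```python
layer = SNConv2d(3, 16, kernel_size=3, padding=1)
print(layer.weight.view(16, -1).shape)  # torch.Size([16, 27]): the matrix whose sigma is estimated
x = torch.randn(4, 3, 32, 32)
print(layer(x).shape)                   # torch.Size([4, 16, 32, 32])
```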
The construction of these two layers shows both pieces: the computation of the spectral norm, and the normalization layers that apply it. These layers can then be assembled into the Discriminator, as follows:
```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, in_channels, out_channels, hidden_channels=None,
                 use_BN=False, downsample=False):
        super(ResBlock, self).__init__()
        hidden_channels = in_channels   # note: the passed-in hidden_channels is overridden
        self.downsample = downsample
        self.resblock = self.make_res_block(in_channels, out_channels,
                                            hidden_channels, use_BN, downsample)
        self.residual_connect = self.make_residual_connect(in_channels, out_channels)

    def make_res_block(self, in_channels, out_channels, hidden_channels,
                       use_BN, downsample):
        model = []
        if use_BN:
            model += [nn.BatchNorm2d(in_channels)]
        model += [nn.ReLU()]
        model += [SNConv2d(in_channels, hidden_channels, kernel_size=3, padding=1)]
        model += [nn.ReLU()]
        model += [SNConv2d(hidden_channels, out_channels, kernel_size=3, padding=1)]
        if downsample:
            model += [nn.AvgPool2d(2)]
        return nn.Sequential(*model)

    def make_residual_connect(self, in_channels, out_channels):
        # 1x1 SN convolution so the skip path matches the main path's channels
        model = [SNConv2d(in_channels, out_channels, kernel_size=1, padding=0)]
        if self.downsample:
            model += [nn.AvgPool2d(2)]
        return nn.Sequential(*model)

    def forward(self, input):
        return self.resblock(input) + self.residual_connect(input)


class OptimizedBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(OptimizedBlock, self).__init__()
        self.res_block = self.make_res_block(in_channels, out_channels)
        self.residual_connect = self.make_residual_connect(in_channels, out_channels)

    def make_res_block(self, in_channels, out_channels):
        model = []
        model += [SNConv2d(in_channels, out_channels, kernel_size=3, padding=1)]
        model += [nn.ReLU()]
        model += [SNConv2d(out_channels, out_channels, kernel_size=3, padding=1)]
        model += [nn.AvgPool2d(2)]
        return nn.Sequential(*model)

    def make_residual_connect(self, in_channels, out_channels):
        model = []
        model += [SNConv2d(in_channels, out_channels, kernel_size=1, padding=0)]
        model += [nn.AvgPool2d(2)]
        return nn.Sequential(*model)

    def forward(self, input):
        return self.res_block(input) + self.residual_connect(input)


class SNResDiscriminator(nn.Module):
    def __init__(self, ndf=64, ndlayers=4):
        super(SNResDiscriminator, self).__init__()
        self.res_d = self.make_model(ndf, ndlayers)
        self.fc = nn.Sequential(SNLinear(ndf * 16, 1), nn.Sigmoid())

    def make_model(self, ndf, ndlayers):
        model = [OptimizedBlock(3, ndf)]
        tndf = ndf
        for i in range(ndlayers):
            model += [ResBlock(tndf, tndf * 2, downsample=True)]
            tndf *= 2
        model += [nn.ReLU()]
        return nn.Sequential(*model)

    def forward(self, input):
        out = self.res_d(input)
        out = F.avg_pool2d(out, out.size(3), stride=1)  # global average pooling
        out = out.view(-1, 1024)   # 1024 == ndf * 16 for the default ndf=64
        return self.fc(out)
```
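A smoke test of the assembled discriminator (my sketch): with the default ndf=64 and ndlayers=4, a 64x64 input is downsampled five times to 2x2 before global average pooling.

```python
D = SNResDiscriminator(ndf=64, ndlayers=4)
x = torch.randn(2, 3, 64, 64)   # a batch of 2 RGB images
print(D(x).shape)               # torch.Size([2, 1]): sigmoid scores in (0, 1)
```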
The discriminator SNResDiscriminator is built from two modules, ResBlock and OptimizedBlock, both of which use the SNConv2d layer to obtain spectrally normalized convolutions. The SNConv2d implementation uses `@property def W_(self)`, a construct I had not seen before and want to study further; a toy illustration follows.
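For reference, a toy illustration of what @property does (plain Python, unrelated to [3]): the decorated method runs on every attribute access, which is exactly why W_ recomputes the normalized weight on each forward pass.

```python
class Demo:
    def __init__(self):
        self.weight = 10.0

    @property
    def W_(self):
        # computed attribute: accessed without parentheses
        return self.weight / 2

d = Demo()
print(d.W_)       # 5.0: the method body runs on attribute access
d.weight = 6.0
print(d.W_)       # 3.0: recomputed from the current weight
```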
Summary:
For GAN training to be stable, the Discriminator's mapping function needs to satisfy a Lipschitz constraint. [1] proposed the spectral norm as the means of enforcing this constraint and derived the normalization scheme from it; the whole construction is elegant and well worth studying.
