[Posted at]: 2021-05-06 19:23:30
[Question]:
Is there any difference between the following two pieces of code for initializing a class in Python?
class summation:
    def __init__(self, f, s):
        self.first = f
        self.second = s
        self.summ = self.first + self.second
    ...
class summation:
    def __init__(self, f, s):
        self.first = f
        self.second = s
        self.summ = f + s
    ...
If there is any difference, what is it, and which version is preferable?
Edit: I intend to write an artificial neural network in Python (and PyTorch). The two snippets above are just examples. In real code I have seen across various resources that when a class's initializer contains self.input = input, the other methods then use self.input rather than input.
My question: what is the difference between these two approaches, and why is using self.input preferable in my case?
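One practical point for either version: because summ is computed once, eagerly, in __init__, it does not track later changes to first or second. A minimal sketch (class names invented for illustration) contrasting the eager attribute with a property that is recomputed on access:

```python
class Summation:
    def __init__(self, f, s):
        self.first = f
        self.second = s
        self.summ = self.first + self.second  # computed once, eagerly

class LazySummation:
    def __init__(self, f, s):
        self.first = f
        self.second = s

    @property
    def summ(self):
        # recomputed on every access, so it never goes stale
        return self.first + self.second

s = Summation(1, 2)
s.first = 100
print(s.summ)   # still 3: summ was not recomputed

ls = LazySummation(1, 2)
ls.first = 100
print(ls.summ)  # 102: the property sees the updated value
```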
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl import DGLGraph
import dgl.function as fn
from functools import partial

class RGCNLayer(nn.Module):
    def __init__(self, in_feat, out_feat, num_rels, num_bases=-1, bias=None,
                 activation=None, is_input_layer=False):
        super(RGCNLayer, self).__init__()
        self.in_feat = in_feat
        self.out_feat = out_feat
        self.num_rels = num_rels
        self.num_bases = num_bases
        self.bias = bias
        self.activation = activation
        self.is_input_layer = is_input_layer
        # sanity check
        if self.num_bases <= 0 or self.num_bases > self.num_rels:
            self.num_bases = self.num_rels
        # weight bases in equation (3)
        self.weight = nn.Parameter(torch.Tensor(self.num_bases, self.in_feat,
                                                self.out_feat))
        if self.num_bases < self.num_rels:
            # linear combination coefficients in equation (3)
            self.w_comp = nn.Parameter(torch.Tensor(self.num_rels, self.num_bases))
        # add bias
        if self.bias:
            self.bias = nn.Parameter(torch.Tensor(out_feat))
        # init trainable parameters
        nn.init.xavier_uniform_(self.weight,
                                gain=nn.init.calculate_gain('relu'))
        if self.num_bases < self.num_rels:
            nn.init.xavier_uniform_(self.w_comp,
                                    gain=nn.init.calculate_gain('relu'))
        if self.bias:
            nn.init.xavier_uniform_(self.bias,
                                    gain=nn.init.calculate_gain('relu'))

    def forward(self, g):
        if self.num_bases < self.num_rels:
            # generate all weights from bases (equation (3))
            weight = self.weight.view(self.in_feat, self.num_bases, self.out_feat)
            weight = torch.matmul(self.w_comp, weight).view(self.num_rels,
                                                            self.in_feat,
                                                            self.out_feat)
        else:
            weight = self.weight
        if self.is_input_layer:
            def message_func(edges):
                # for input layer, matrix multiply can be converted to be
                # an embedding lookup using source node id
                embed = weight.view(-1, self.out_feat)
                index = edges.data['rel_type'] * self.in_feat + edges.src['id']
                return {'msg': embed[index] * edges.data['norm']}
        else:
            def message_func(edges):
                w = weight[edges.data['rel_type']]
                msg = torch.bmm(edges.src['h'].unsqueeze(1), w).squeeze()
                msg = msg * edges.data['norm']
                return {'msg': msg}

        def apply_func(nodes):
            h = nodes.data['h']
            if self.bias:
                h = h + self.bias
            if self.activation:
                h = self.activation(h)
            return {'h': h}

        g.update_all(message_func, fn.sum(msg='msg', out='h'), apply_func)
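The code above illustrates the general answer to the question: values such as in_feat and num_bases are stored on self in __init__ precisely so that forward, which runs much later, can still read them. A parameter name like in_feat is only a local variable inside __init__ and disappears when __init__ returns. A stripped-down sketch of the idea (class and method names invented for illustration, no PyTorch needed):

```python
class TinyLayer:
    def __init__(self, in_feat, out_feat):
        # stored on the instance, so forward() can read them later
        self.in_feat = in_feat
        self.out_feat = out_feat
        scratch = in_feat * out_feat  # local name: discarded when __init__ ends

    def forward(self, x):
        # in_feat, out_feat and scratch no longer exist as locals here;
        # only the attributes saved on self survive between method calls
        return [v * self.out_feat for v in x][: self.in_feat]

layer = TinyLayer(2, 10)
print(layer.forward([1, 2, 3]))  # [10, 20]
```

Inside __init__ itself, writing self.first + self.second versus f + s normally gives the same result; the attribute form only matters outside __init__, or when attribute access is customized (see the comments below on descriptors).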
[Comments]:
- I would push back on this: if self.first or self.second is later changed, self.summ becomes stale. It should instead be a property computed on access. And given that self.first and self.second might themselves be properties, the two versions can actually behave differently, so which behavior makes sense in your context is not just a matter of preference.
- @jonrsharpe Thanks for your comment. Could you give me an example where the two snippets above behave differently?
- It can make a difference if summation.first and summation.second are descriptors (e.g. property), because then self.first and self.second invoke the descriptor's __get__ method, which can do arbitrary things.
- @ZahraTaheri Take a look at my edited answer. I hope it clears everything up for you now :)
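The descriptor point from the comments can be shown concretely. In this contrived sketch (not from the question), first is a property whose __get__ transforms the stored value, so self.first + self.second and f + s compute different sums inside __init__:

```python
class Summation:
    def __init__(self, f, s):
        self._first = f
        self.second = s
        self.summ_attr = self.first + self.second  # goes through the property
        self.summ_raw = f + s                      # uses the raw arguments

    @property
    def first(self):
        # the descriptor's __get__ can do arbitrary work
        return self._first * 10

obj = Summation(1, 2)
print(obj.summ_attr)  # 12: the property scaled f by 10
print(obj.summ_raw)   # 3: plain f + s
```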
Tags: python-3.x class neural-network pytorch self