[Posted]: 2017-08-04 18:59:31
[Question]:
I'm a TensorFlow beginner. I get an error when I use tf.expand_dims and I can't understand why. What am I missing?
Here is the code:
ML_OUTPUT = None
input_for_classification = None

def ConstructML(input_tensor, layers_count, node_for_each_layer):
    global ML_OUTPUT
    global input_for_classification
    FeatureVector = np.array(input_tensor)
    FeatureVector = FeatureVector.flatten()
    print(FeatureVector.shape)
    ML_ModelINT(FeatureVector, layers_count, node_for_each_layer)

def ML_ModelINT(FeatureVector, layers_count, node_for_each_layer):
    hidden_layer = []
    Alloutputs = []
    hidden_layer.append({'weights': tf.Variable(tf.random_normal([FeatureVector.shape[0], node_for_each_layer[0]])),
                         'biases': tf.Variable(tf.random_normal([node_for_each_layer[0]]))})
    for i in range(1, layers_count):
        hidden_layer.append({'weights': tf.Variable(tf.random_normal([node_for_each_layer[i - 1], node_for_each_layer[i]])),
                             'biases': tf.Variable(tf.random_normal([node_for_each_layer[i]]))})
    FeatureVector = tf.expand_dims(FeatureVector, 0)
    layers_output = tf.add(tf.matmul(FeatureVector, hidden_layer[0]['weights']), hidden_layer[0]['biases'])
    layers_output = tf.nn.relu(layers_output)
    Alloutputs.append(layers_output)
    for j in range(1, layers_count):
        layers_output = tf.add(tf.matmul(layers_output, hidden_layer[j]['weights']), hidden_layer[j]['biases'])
        layers_output = tf.nn.relu(layers_output)
        Alloutputs.append(layers_output)
    ML_OUTPUT = layers_output
    input_for_classification = Alloutputs[1]
    return ML_OUTPUT

ML_Net = ConstructML(input, 3, [1024, 512, 256])
It gives me the error on this line:
FeatureVector = tf.expand_dims(FeatureVector, 0)
The error is: Expected binary or unicode string, got tf.Tensor 'Relu_11:0' shape=(?, 7, 7, 512) dtype=float32
Note that the input is the output tensor of another network, which works fine on its own.
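For reference, the flattening this code appears to aim for (collapsing a (?, 7, 7, 512) feature map into one vector per example) is pure shape arithmetic. A minimal NumPy sketch, where batch = 4 is an arbitrary stand-in for the unknown '?' dimension (this is a sketch of the shapes involved, not the poster's confirmed fix):

```python
import numpy as np

# Stand-in for the (?, 7, 7, 512) feature map coming out of the
# upstream network; batch = 4 replaces the unknown '?' dimension.
batch = 4
feature_map = np.zeros((batch, 7, 7, 512), dtype=np.float32)

# Keep the batch axis and collapse the rest. In a TensorFlow graph
# the analogous operation would be a reshape to [-1, 7 * 7 * 512]
# performed with TF ops, not with NumPy.
flat = feature_map.reshape(batch, -1)
print(flat.shape)  # (4, 25088)
```

With this layout, the first hidden layer's weight matrix would have 7 * 7 * 512 = 25088 rows rather than a size derived from the NumPy-wrapped tensor.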
[Discussion]:
- Something about your types looks odd to me... In ConstructML, FeatureVector is a NumPy array. You then pass it to ML_ModelINT and run TF tensor operations on it without ever converting it...
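The comment's diagnosis can be illustrated without TensorFlow. When np.array() wraps an object whose contents it cannot read (here a dummy class; the assumption is that a TF 1.x graph-mode tf.Tensor behaves the same way, since it carries no concrete values), the result is a 0-d object array, so .flatten() yields a single-element object array rather than the flattened features, and a later TF op receives that wrapped object instead of a tensor:

```python
import numpy as np

# Dummy stand-in for a symbolic tf.Tensor: NumPy cannot read its
# contents, so np.array() wraps the object itself.
class FakeTensor:
    pass

t = FakeTensor()
wrapped = np.array(t)

print(wrapped.shape)            # () -- a 0-d array, not the tensor's shape
print(wrapped.dtype)            # object
print(wrapped.flatten().shape)  # (1,) -- one wrapped object, not features
```

This would explain why the error message still mentions the original tensor ('Relu_11:0' with shape (?, 7, 7, 512)): it was carried along inside the object array untouched.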
Tags: tensorflow conv-neural-network