【Question Title】: ValueError: Cannot convert a Tensor of dtype resource to a NumPy array
【Posted】: 2023-04-08 23:00:01
【Question Description】:

I am trying to isolate some user-specific parameters by setting up a parameter matrix, where each row learns parameters specific to one user.

I want to index into the matrix with the user ID and concatenate those parameters to other features.

Finally, a few fully connected layers produce the desired output.

However, I keep getting the error below on the last line of the code.


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-93de3591ccf0> in <module>
     20 # combined = tf.keras.layers.Concatenate(axis=-1)([le_param, le])
     21 
---> 22 net = tf.keras.layers.Dense(128)(combined)

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    793     # framework.
    794     if build_graph and base_layer_utils.needs_keras_history(inputs):
--> 795       base_layer_utils.create_keras_history(inputs)
    796 
    797     # Clear eager losses on top level model call.

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
    182     keras_tensors: The Tensors found that came from a Keras Layer.
    183   """
--> 184   _, created_layers = _create_keras_history_helper(tensors, set(), [])
    185   return created_layers
    186 

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
    229               constants[i] = backend.function([], op_input)([])
    230       processed_ops, created_layers = _create_keras_history_helper(
--> 231           layer_inputs, processed_ops, created_layers)
    232       name = op.name
    233       node_def = op.node_def.SerializeToString()

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
    229               constants[i] = backend.function([], op_input)([])
    230       processed_ops, created_layers = _create_keras_history_helper(
--> 231           layer_inputs, processed_ops, created_layers)
    232       name = op.name
    233       node_def = op.node_def.SerializeToString()

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
    227           else:
    228             with ops.init_scope():
--> 229               constants[i] = backend.function([], op_input)([])
    230       processed_ops, created_layers = _create_keras_history_helper(
    231           layer_inputs, processed_ops, created_layers)

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py in __call__(self, inputs)
   3746     return nest.pack_sequence_as(
   3747         self._outputs_structure,
-> 3748         [x._numpy() for x in outputs],  # pylint: disable=protected-access
   3749         expand_composites=True)
   3750 

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py in <listcomp>(.0)
   3746     return nest.pack_sequence_as(
   3747         self._outputs_structure,
-> 3748         [x._numpy() for x in outputs],  # pylint: disable=protected-access
   3749         expand_composites=True)
   3750 

ValueError: Cannot convert a Tensor of dtype resource to a NumPy array.

Code to reproduce the error:

import tensorflow as tf

num_uids = 50
input_uid = tf.keras.layers.Input(shape=(1,), dtype=tf.int32)
params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)

param = tf.gather_nd(params, input_uid)

input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)
combined = tf.concat([param, input_shared_features], axis=-1)

net = tf.keras.layers.Dense(128)(combined)

A few things I have tried:

  1. Wrapping tf.gather_nd and tf.concat with tf.keras.layers.Lambda.
  2. Replacing tf.concat with tf.keras.layers.Concatenate.

Strangely, if I fix the number of items and replace the Input layers with tf.Variable, the code works as expected:

import tensorflow as tf

num_uids = 50
input_uid = tf.Variable(tf.ones((32, 1), dtype=tf.int32))
params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)

param = tf.gather_nd(params, input_uid)

input_shared_features = tf.Variable(tf.ones((32, 128), dtype=tf.float32))
combined = tf.concat([param, input_shared_features], axis=-1)

net = tf.keras.layers.Dense(128)(combined)

I am using TensorFlow 2.1 with Python 3.6.10.

【Question Discussion】:

Tags: python tensorflow keras tensorflow2.0


【Solution 1】:

I ran into a similar problem when I tried to use a TensorFlow table lookup (tf.lookup.StaticHashTable) in TensorFlow 2.x. I ended up solving it by keeping the lookup inside a custom Keras layer. The same solution appears to work for this problem as well, at least up to the version mentioned in the question. (I tried TensorFlow 2.0, 2.1, and 2.2, and it works in all of them.)

import tensorflow as tf

num_uids = 50
input_uid = tf.keras.Input(shape=(1,), dtype=tf.int32)
input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)

class CustomLayer(tf.keras.layers.Layer):
    def __init__(self, num_uids):
        super(CustomLayer, self).__init__(trainable=True, dtype=tf.int64)
        self.num_uids = num_uids

    def build(self, input_shape):
        # Creating the variable inside the layer keeps the resource
        # tensor out of the functional graph that Keras traces.
        self.params = tf.Variable(tf.random.normal((self.num_uids, 9)), trainable=True)
        self.built = True

    def call(self, input_uid, input_shared_features):
        param = tf.gather_nd(self.params, input_uid)
        combined = tf.concat([param, input_shared_features], axis=-1)
        return combined

    def get_config(self):
        config = super(CustomLayer, self).get_config()
        config.update({'num_uids': self.num_uids})
        return config

combined = CustomLayer(num_uids)(input_uid, input_shared_features)
net = tf.keras.layers.Dense(128)(combined)
model = tf.keras.Model(inputs={'input_uid': input_uid, 'input_shared_features': input_shared_features}, outputs=net)
model.summary()

The model summary looks like this:

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 1)]          0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, 128)]        0                                            
__________________________________________________________________________________________________
custom_layer (CustomLayer)      (None, 137)          450         input_1[0][0]                    
__________________________________________________________________________________________________
dense (Dense)                   (None, 128)          17664       custom_layer[0][0]               
==================================================================================================
Total params: 18,114
Trainable params: 18,114
Non-trainable params: 0
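For completeness, here is a minimal sketch (not part of the original answer) of how a model built this way can be compiled and fit on random dummy data. The layer is re-declared with `add_weight`, the idiomatic way to create trainable weights in `build`, so the snippet is self-contained; a 1-unit output head stands in for the "fully connected layers" from the question:

```python
import numpy as np
import tensorflow as tf

num_uids = 50

class CustomLayer(tf.keras.layers.Layer):
    def __init__(self, num_uids, **kwargs):
        super(CustomLayer, self).__init__(**kwargs)
        self.num_uids = num_uids

    def build(self, input_shape):
        # Per-user parameter matrix, owned by the layer.
        self.params = self.add_weight(
            name='params', shape=(self.num_uids, 9),
            initializer='random_normal', trainable=True)

    def call(self, input_uid, input_shared_features):
        # Look up each user's 9 parameters and append them to the features.
        param = tf.gather_nd(self.params, input_uid)
        return tf.concat([param, input_shared_features], axis=-1)

input_uid = tf.keras.Input(shape=(1,), dtype=tf.int32)
input_shared_features = tf.keras.Input(shape=(128,), dtype=tf.float32)
combined = CustomLayer(num_uids)(input_uid, input_shared_features)
net = tf.keras.layers.Dense(1)(combined)
model = tf.keras.Model(inputs=[input_uid, input_shared_features], outputs=net)

model.compile(optimizer='adam', loss='mse')
uids = np.random.randint(0, num_uids, size=(32, 1)).astype('int32')
feats = np.random.rand(32, 128).astype('float32')
targets = np.random.rand(32, 1).astype('float32')
model.fit([uids, feats], targets, epochs=1, verbose=0)
```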

For more information you can refer to the tf.keras.layers.Layer documentation.

If you want to look at the table-lookup issue and its solution, here is the link:
【Discussion】:

  • Welcome to Stack Overflow! This is a really good first post. Thank you for putting so much care and detail into it.
  • Thanks @JeremyCaney for the encouragement and for tidying up the answer. I hope to contribute further to the community in the future :)
【Solution 2】:

While Jithin Jees' answer explains this very clearly, shown below is a slightly different workaround using the Concatenate operation:

import tensorflow as tf

num_uids = 50

indices = tf.keras.layers.Input(name='indices', shape=(), dtype='int32')
# Keep the per-user parameter matrix as a trainable variable.
params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)

# Wrapping the gather in a custom layer keeps Keras from trying to
# convert the resource variable while building its graph history.
class GatherLayer(tf.keras.layers.Layer):
    def call(self, indices, params):
        return tf.gather(params, indices)

output = GatherLayer()(indices, params)

input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)
combined = tf.concat([output, input_shared_features], axis=-1)

net = tf.keras.layers.Dense(128)(combined)

For more details, please refer to the GitHub Issue.
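As a further alternative (not taken from the linked issue), the same per-user parameter matrix can be expressed with the standard tf.keras.layers.Embedding layer: the lookup by user ID is the embedding itself, and the trainable weights live inside a layer, so no raw tf.Variable is captured by the functional graph. A sketch using the question's shapes:

```python
import numpy as np
import tensorflow as tf

num_uids = 50

input_uid = tf.keras.layers.Input(shape=(1,), dtype=tf.int32)
input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)

# The (num_uids, 9) parameter matrix becomes the Embedding weight;
# indexing by user ID is the embedding lookup.
uid_params = tf.keras.layers.Embedding(num_uids, 9)(input_uid)  # (None, 1, 9)
uid_params = tf.keras.layers.Flatten()(uid_params)              # (None, 9)

combined = tf.keras.layers.Concatenate(axis=-1)([uid_params, input_shared_features])
net = tf.keras.layers.Dense(128)(combined)
model = tf.keras.Model(inputs=[input_uid, input_shared_features], outputs=net)
```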

【Discussion】:
