[Question Title]: Why is Google colab TPU slow?
[Posted]: 2020-02-12 13:44:23
[Question]:

I'm running hyperparameter tuning of a Keras model with Talos. Running this short code on a Google Colab TPU is very slow. I think it has something to do with the data type. Should I convert the data to tensors to make the TPU faster?

%tensorflow_version 2.x
import os
import tensorflow as tf
import talos as ta
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split

def iris_model(x_train, y_train, x_val, y_val, params):

    # Specify a distributed strategy to use TPU
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    tf.config.experimental_connect_to_host(resolver.master())
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)

    # Use the strategy to create and compile a Keras model
    with strategy.scope():
      model = Sequential()
      model.add(Dense(32, input_shape=(4,), activation=tf.nn.relu, name="relu"))
      model.add(Dense(3, activation=tf.nn.softmax, name="softmax"))
      model.compile(optimizer=Adam(learning_rate=0.1), loss=params['losses'])

    # Convert data type to use TPU
    x_train = x_train.astype('float32')
    x_val = x_val.astype('float32')

    # Fit the Keras model on the dataset
    out = model.fit(x_train, y_train, batch_size=params['batch_size'], epochs=params['epochs'], validation_data=[x_val, y_val], verbose=0, steps_per_epoch=0)

    return out, model

# Load dataset
X, y = ta.templates.datasets.iris()

# Train and test set
x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.30, shuffle=False)

# Create the hyperparameter distributions
p = {'losses': ['logcosh'], 'batch_size': [128, 256, 384, 512, 1024], 'epochs': [10, 20]}

# Use Talos to scan the best hyperparameters of the Keras model
scan_object = ta.Scan(x_train, y_train, params=p, model=iris_model, experiment_name='test', x_val=x_val, y_val=y_val, fraction_limit=0.5)

[Comments]:

Tags: tensorflow keras google-colaboratory google-cloud-tpu talos


[Solution 1]:

Thanks for your question.

Unfortunately, I wasn't able to get your code sample to run on TensorFlow 2.2, so I don't know what performance you were originally seeing. With the following changes I was able to fix it and get it running on a TPU:

  • Replaced tf.config.experimental_connect_to_host(resolver.master()) with tf.config.experimental_connect_to_cluster(resolver).
  • Moved the TPU initialization outside of iris_model().
  • Used tf.data.Dataset for the TPU input.
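Moving the initialization out matters because Talos calls iris_model() once per hyperparameter combination, so anything placed inside it is paid again on every trial. A schematic sketch of the difference (illustrative names only, no real TPU calls; the sleep stands in for the multi-second TPU system initialization):

```python
import time

def expensive_setup():
    # Stand-in for tf.tpu.experimental.initialize_tpu_system,
    # which takes several seconds on a real TPU.
    time.sleep(0.01)

def trial(params):
    # Cheap per-combination work (building and fitting one model).
    return params["batch_size"]

grid = [{"batch_size": b} for b in (128, 256, 384, 512, 1024)]

# Slow: setup inside the model function -> repeated per combination.
start = time.perf_counter()
for p in grid:
    expensive_setup()
    trial(p)
per_trial = time.perf_counter() - start

# Fast: setup hoisted out -> paid once for the whole scan.
start = time.perf_counter()
expensive_setup()
for p in grid:
    trial(p)
once = time.perf_counter() - start

print(f"setup per trial: {per_trial:.3f}s, setup once: {once:.3f}s")
```

With the 10-combination grid the original code was scanning, the repeated initialization alone dominates the runtime.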

Here is the modified Colab code:

# Run this to install Talos before running the rest of the code.
!pip install git+https://github.com/autonomio/talos@1.0
%tensorflow_version 2.x
import os
import tensorflow as tf
import talos as ta
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split

print(tf.__version__) # TF 2.2.0 in my case

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

def iris_model(x_train, y_train, x_val, y_val, params):
    # Use the strategy to create and compile a Keras model
    strategy = tf.distribute.experimental.TPUStrategy(resolver)
    with strategy.scope():
      model = Sequential()
      model.add(Dense(32, input_shape=(4,), activation=tf.nn.relu, name="relu"))
      model.add(Dense(3, activation=tf.nn.softmax, name="softmax"))
      model.compile(optimizer=Adam(learning_rate=0.1), loss=params['losses'])

    train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(params['batch_size'])
    val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(params['batch_size'])

    # Fit the Keras model on the dataset
    out = model.fit(train_dataset, epochs=params['epochs'], validation_data=val_dataset)

    return out, model

# Load dataset
X, y = ta.templates.datasets.iris()

# Train and test set
x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.30, shuffle=False)

# Create the hyperparameter distributions
p = {'losses': ['logcosh'], 'batch_size': [128, 256, 384, 512, 1024], 'epochs': [10, 20]}

# Use Talos to scan the best hyperparameters of the Keras model
scan_object = ta.Scan(x_train, y_train, params=p, model=iris_model, experiment_name='test', x_val=x_val, y_val=y_val, fraction_limit=0.5)

For me, that last call took a little under 2 minutes.

For well-known datasets, you can skip the step of creating your own tf.data.Dataset by using the TensorFlow Datasets library. TFDS does have the iris dataset in its library. For an end-to-end example of using TFDS together with a TPU, see TensorFlow's official TPU guide.

[Discussion]:
