Posted: 2017-10-21 17:01:20
Question:
I have a basic Android TensorFlowInference example that runs fine in a single thread.
public class InferenceExample {

    private static final String MODEL_FILE = "file:///android_asset/model.pb";
    private static final String INPUT_NODE = "intput_node0";
    private static final String OUTPUT_NODE = "output_node0";
    private static final int[] INPUT_SIZE = {1, 8000, 1};
    public static final int CHUNK_SIZE = 8000;
    public static final int STRIDE = 4;
    private static final int NUM_OUTPUT_STATES = 5;

    private static TensorFlowInferenceInterface inferenceInterface;

    public InferenceExample(final Context context) {
        inferenceInterface = new TensorFlowInferenceInterface(context.getAssets(), MODEL_FILE);
    }

    public float[] run(float[] data) {
        float[] res = new float[CHUNK_SIZE / STRIDE * NUM_OUTPUT_STATES];
        inferenceInterface.feed(INPUT_NODE, data, INPUT_SIZE[0], INPUT_SIZE[1], INPUT_SIZE[2]);
        inferenceInterface.run(new String[]{OUTPUT_NODE});
        inferenceInterface.fetch(OUTPUT_NODE, res);
        return res;
    }
}
When run in a thread pool as in the example below, it crashes with various exceptions, including java.lang.ArrayIndexOutOfBoundsException and java.lang.NullPointerException, so I guess it is not thread-safe.
InferenceExample inference = new InferenceExample(context);

ExecutorService executor = Executors.newFixedThreadPool(NUMBER_OF_CORES);
Collection<Future<?>> futures = new LinkedList<Future<?>>();

for (int i = 1; i <= 100; i++) {
    Future<?> result = executor.submit(new Runnable() {
        public void run() {
            inference.run(randomData);
        }
    });
    futures.add(result);
}

for (Future<?> future : futures) {
    try {
        future.get();
    } catch (ExecutionException | InterruptedException e) {
        Log.e("TF", e.getMessage());
    }
}
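If the crashes come from several threads interleaving the feed/run/fetch sequence on one shared instance, one minimal fix is to serialize the whole sequence behind a single lock. Below is a self-contained sketch of that pattern; the `Engine` class is a hypothetical stand-in for `TensorFlowInferenceInterface` (which is an Android dependency and not reproduced here), and the arithmetic inside it is only there to simulate per-call state:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: funnel every inference call through one lock so the
// feed -> fetch sequence is never interleaved between threads.
public class SynchronizedInference {

    // Hypothetical stand-in for the non-thread-safe inference object.
    static class Engine {
        private float[] staged;               // simulates state set by feed()
        void feed(float[] data) { staged = data; }
        float[] fetch() {                     // simulates run() + fetch()
            float[] out = new float[staged.length];
            for (int i = 0; i < staged.length; i++) out[i] = staged[i] + 1f;
            return out;
        }
    }

    private final Engine engine = new Engine();

    // The whole feed -> fetch sequence runs under one monitor, so no
    // thread ever observes another thread's half-finished state.
    public synchronized float[] run(float[] data) {
        engine.feed(data);
        return engine.fetch();
    }

    public static void main(String[] args) throws Exception {
        SynchronizedInference inference = new SynchronizedInference();
        ExecutorService executor = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            executor.submit(() -> inference.run(new float[]{1f, 2f}));
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("done");
    }
}
```

The trade-off is that a single lock gives correctness but no parallel speedup: calls from all pool threads execute one at a time.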
Is it possible to use multiple cores on an Android device with TensorFlowInferenceInterface?
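If you do want true parallelism across cores, an alternative worth sketching is to give each pool thread its own instance instead of sharing one. The example below is self-contained and hypothetical: `Engine` again stands in for `TensorFlowInferenceInterface`, and its owner-thread check exists only to demonstrate that each instance stays confined to the thread that created it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: one inference instance per pool thread via ThreadLocal,
// so no instance is ever touched by two threads.
public class PerThreadInference {

    // Hypothetical stand-in for the non-thread-safe inference object.
    static class Engine {
        private final long owner = Thread.currentThread().getId();
        float[] run(float[] data) {
            // A real engine would feed/run/fetch here; we just verify that
            // this instance is only used by the thread that created it.
            if (Thread.currentThread().getId() != owner) {
                throw new IllegalStateException("cross-thread use");
            }
            float[] out = new float[data.length];
            for (int i = 0; i < data.length; i++) out[i] = data[i] * 2f;
            return out;
        }
    }

    // One Engine per pool thread, created lazily on first use.
    private static final ThreadLocal<Engine> ENGINE =
            ThreadLocal.withInitial(Engine::new);

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Future<float[]>> futures = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            futures.add(executor.submit(() -> ENGINE.get().run(new float[]{1f, 2f, 3f})));
        }
        for (Future<float[]> f : futures) {
            f.get(); // propagates any worker exception
        }
        executor.shutdown();
        System.out.println("all " + futures.size() + " tasks completed");
    }
}
```

Note the memory cost: each thread's instance would load its own copy of the model, so a fixed pool sized to the core count keeps the number of copies bounded.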
Tags: java android tensorflow