[Posted]: 2017-02-18 08:32:06
[Problem description]:
I am trying to use a retrained Inception-v3 model in the TensorFlow Android demo app, but no output is displayed.
What I did
I trained the model as described in the retrain inception guide (only five classes). After training, I tested the graph with:
bazel build tensorflow/examples/label_image:label_image &&
bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/home/hannan/Desktop/images.jpg \
--graph=/tf_files/retrained_graph.pb
Here is the output:
I tensorflow/examples/label_image/main.cc:206] shoes (3): 0.997833
I tensorflow/examples/label_image/main.cc:206] chair (1): 0.00118802
I tensorflow/examples/label_image/main.cc:206] door lock (2): 0.000544737
I tensorflow/examples/label_image/main.cc:206] bench (4): 0.000354453
I tensorflow/examples/label_image/main.cc:206] person (0): 7.93592e-05
Then I optimized the graph for inference with:
bazel build tensorflow/python/tools:optimize_for_inference
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/tf_files/retrained_graph.pb \
--output=/tf_files/optimized_graph.pb \
--input_names=Mul \
--output_names=final_result
I tested the optimized graph again and it worked fine.
Finally, I ran strip_unused.py:
python strip_unused.py \
--input_graph=/tf_files/optimized_graph.pb \
--output_graph=/tf_files/stripped_graph.pb \
--input_node_names="Mul" \
--output_node_names="final_result" \
--input_binary=true
I tested this graph again and it also worked fine.
Changes in the Android app's ClassifierActivity:
private static final int NUM_CLASSES = 5;
private static final int INPUT_SIZE = 229;
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128;
private static final String INPUT_NAME = "Mul:0";
private static final String OUTPUT_NAME = "final_result:0";
private static final String MODEL_FILE = "file:///android_asset/optimized_graph.pb";
private static final String LABEL_FILE = "file:///android_asset/retrained_labels.txt";
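For context on those constants: in the demo's TensorFlowImageClassifier, IMAGE_MEAN and IMAGE_STD are applied to each pixel channel as (value - mean) / std, so 128/128 roughly maps byte values from [0, 255] into [-1, 1], the range the Inception input expects. A minimal sketch of that normalization (the helper name is mine, not from the demo):

```python
def normalize(value, mean=128, std=128.0):
    """Map a byte pixel channel from [0, 255] to roughly [-1, 1]."""
    return (value - mean) / std

print(normalize(0))    # -1.0
print(normalize(255))  # 0.9921875
```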
I then built and ran the project.
Logcat output
D/tensorflow: CameraActivity: onCreate org.tensorflow.demo.ClassifierActivity@adfa77e
W/ResourceType: For resource 0x0103045b, entry index(1115) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030249, entry index(585) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030249, entry index(585) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030248, entry index(584) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030247, entry index(583) is beyond type entryCount(1)
D/PhoneWindowEx: [PWEx][generateLayout] setLGNavigationBarColor : colors=0xff000000
I/PhoneWindow: [setLGNavigationBarColor] color=0x ff000000
D/tensorflow: CameraActivity: onStart org.tensorflow.demo.ClassifierActivity@adfa77e
D/tensorflow: CameraActivity: onResume org.tensorflow.demo.ClassifierActivity@adfa77e
D/OpenGLRenderer: Use EGL_SWAP_BEHAVIOR_PRESERVED: false
D/PhoneWindow: notifyNavigationBarColor, color=0x: ff000000, token: android.view.ViewRootImplAO$WEx@5d35dc4
I/OpenGLRenderer: Initialized EGL, version 1.4
I/CameraManagerGlobal: Connecting to camera service
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1440
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1088
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1080
I/tensorflow: CameraConnectionFragment: Adding size: 1280x720
I/tensorflow: CameraConnectionFragment: Adding size: 960x720
I/tensorflow: CameraConnectionFragment: Adding size: 960x540
I/tensorflow: CameraConnectionFragment: Adding size: 800x600
I/tensorflow: CameraConnectionFragment: Adding size: 864x480
I/tensorflow: CameraConnectionFragment: Adding size: 800x480
I/tensorflow: CameraConnectionFragment: Adding size: 720x480
I/tensorflow: CameraConnectionFragment: Adding size: 640x480
I/tensorflow: CameraConnectionFragment: Adding size: 480x368
I/tensorflow: CameraConnectionFragment: Adding size: 480x320
I/tensorflow: CameraConnectionFragment: Not adding size: 352x288
I/tensorflow: CameraConnectionFragment: Not adding size: 320x240
I/tensorflow: CameraConnectionFragment: Not adding size: 176x144
I/tensorflow: CameraConnectionFragment: Chosen size: 480x320
I/TensorFlowImageClassifier: Reading labels from: retrained_labels.txt
I/TensorFlowImageClassifier: Read 5, 5 specified
I/native: tensorflow_inference_jni.cc:97 Native TF methods loaded.
I/TensorFlowInferenceInterface: Native methods already loaded.
I/native: tensorflow_inference_jni.cc:85 Creating new session variables for 7e135ad551738da4
I/native: tensorflow_inference_jni.cc:113 Loading Tensorflow.
I/native: tensorflow_inference_jni.cc:120 Session created.
I/native: tensorflow_inference_jni.cc:126 Acquired AssetManager.
I/native: tensorflow_inference_jni.cc:128 Reading file to proto: file:///android_asset/optimized_graph.pb
I/native: tensorflow_inference_jni.cc:132 GraphDef loaded from file:///android_asset/optimized_graph.pb with 515 nodes.
I/native: stat_summarizer.cc:38 StatSummarizer found 515 nodes
I/native: tensorflow_inference_jni.cc:139 Creating TensorFlow graph from GraphDef.
I/native: tensorflow_inference_jni.cc:151 Initialization done in 931.7ms
I/tensorflow: ClassifierActivity: Sensor orientation: 90, Screen orientation: 0
I/tensorflow: ClassifierActivity: Initializing at size 480x320
I/CameraManager: Using legacy camera HAL.
I/tensorflow: CameraConnectionFragment: Opening camera preview: 480x320
I/CameraDeviceState: Legacy camera service transitioning to state CONFIGURING
I/RequestThread-0: Configure outputs: 2 surfaces configured.
D/Camera: app passed NULL surface
I/[MALI][Gralloc]: dlopen libsec_mem.so fail
I/Choreographer: Skipped 89 frames! The application may be doing too much work on its main thread.
I/Timeline: Timeline: Activity_idle id: android.os.BinderProxy@a9290d7 time:114073819
I/CameraDeviceState: Legacy camera service transitioning to state IDLE
I/RequestQueue: Repeating capture request set.
W/LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
W/LegacyRequestMapper: Only received metering rectangles with weight 0.
W/LegacyRequestMapper: Only received metering rectangles with weight 0.
E/Camera: Unknown message type -2147483648
I/CameraDeviceState: Legacy camera service transitioning to state CAPTURING
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
D/tensorflow: CameraActivity: Initializing buffer 0 at size 153600
D/tensorflow: CameraActivity: Initializing buffer 1 at size 38400
D/tensorflow: CameraActivity: Initializing buffer 2 at size 38400
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
When I use the app to recognize an object, no output is displayed.
The log also shows the following:
I/native: tensorflow_inference_jni.cc:228 End computing. Ran in 4639ms (4639ms avg over 1 runs)
E/native: tensorflow_inference_jni.cc:233 Error during inference: Invalid argument: computed output size would be negative
[[Node: pool_3 = AvgPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 8, 8, 1], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](mixed_10/join)]]
E/native: tensorflow_inference_jni.cc:170 Output [final_result] not found, aborting!
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
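For what it's worth, the "computed output size would be negative" message comes from the pool_3 AvgPool node shown in the error (ksize 8x8, stride 1, VALID padding). With a 299x299 input, Inception-v3's mixed_10 feature map is 8x8, so that pool produces 1x1; a smaller input image shrinks the feature map below 8, and the VALID output-size formula goes negative. A quick sketch of the arithmetic (the 6 is only illustrative of an undersized feature map, not a measured value):

```python
def valid_out(size, kernel, stride):
    # Output spatial size under VALID padding: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

print(valid_out(8, 8, 1))  # 1  -> 8x8 feature map: pool_3 yields 1x1, as expected
print(valid_out(6, 8, 1))  # -1 -> undersized feature map: negative size, inference aborts
```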
[Discussion]:
Tags: android python tensorflow