【Question Title】: How do I overcome "Permission Denial....obtain access using ACTION_OPEN_DOCUMENT or related APIs"?
【Posted】: 2020-06-01 14:52:06
【Question】:

I am using react-native-firebase and react-native-document-picker, and I am trying to follow the face detection tutorial.

Despite requesting read access through PermissionsAndroid, I currently get the following error:

Permission Denial: reading com.android.providers.media.MediaDocumentsProvider uri [uri] from pid=4746, uid=10135 requires that you obtain access using ACTION_OPEN_DOCUMENT or related APIs

I can display the user-selected image on screen, but the react-native-firebase function does not seem to get permission. The error occurs at this call: const faces = await vision().faceDetectorProcessImage(localPath);

Any suggestions on how to grant the face detection function access, or on what I am doing wrong?
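For context on the error: on Android, the document picker returns a Storage Access Framework content:// URI, and the read grant on that URI applies only to the component that received it, so READ_EXTERNAL_STORAGE alone does not make it readable elsewhere. A minimal sketch of telling the two URI kinds apart (isContentUri is a hypothetical helper, not part of the original code):

```javascript
// Hypothetical helper: a SAF content:// URI needs to be resolved or copied
// before native code can read it; a plain file path can be used directly.
function isContentUri(uri) {
  return typeof uri === 'string' && uri.startsWith('content://');
}

console.log(isContentUri('content://com.android.providers.media.documents/document/image%3A42')); // true
console.log(isContentUri('/data/user/0/com.example/cache/photo.jpg')); // false
```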

My AndroidManifest.xml file contains the following:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Here is all the code in that component, for reference:

import React, {useState} from 'react';
import { Button, Text, Image, PermissionsAndroid } from 'react-native';
import vision, { VisionFaceContourType } from '@react-native-firebase/ml-vision';
import DocumentPicker from 'react-native-document-picker';



async function processFaces(localPath) {

  console.log(localPath);
  const faces = await vision().faceDetectorProcessImage(localPath);
  console.log("Got faces");

  faces.forEach(face => {
    console.log('Head rotation on Y axis: ', face.headEulerAngleY);
    console.log('Head rotation on Z axis: ', face.headEulerAngleZ);

    console.log('Left eye open probability: ', face.leftEyeOpenProbability);
    console.log('Right eye open probability: ', face.rightEyeOpenProbability);
    console.log('Smiling probability: ', face.smilingProbability);

    face.faceContours.forEach(contour => {
      if (contour.type === VisionFaceContourType.FACE) {
        console.log('Face outline points: ', contour.points);
      }
    });
  });
}

async function pickFile () {
    // Pick a single file
    try {
        const res = await DocumentPicker.pick({
            type: [DocumentPicker.types.images],
        });
        console.log(
            res.uri,
            res.type, // mime type
            res.name,
            res.size
        );
        return res
    } catch (err) {
        if (DocumentPicker.isCancel(err)) {
        // User cancelled the picker, exit any dialogs or menus and move on
            console.log("User cancelled")
        } else {
            console.log("Error picking file or processing faces")
            throw err;
        }
    }
}

const requestPermission = async () => {
    try {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.READ_EXTERNAL_STORAGE,
        {
          title: "Files Permission",
          message:
            "App needs access to your files " +
            "so you can run face detection.",
          buttonNeutral: "Ask Me Later",
          buttonNegative: "Cancel",
          buttonPositive: "OK"
        }
      );
      if (granted === PermissionsAndroid.RESULTS.GRANTED) {
        console.log("We can now read files");
      } else {
        console.log("File read permission denied");
      }
      return granted
    } catch (err) {
      console.warn(err);
    }
  };

function FaceDetectionScreen ({navigation}) {
    const [image, setImage] = useState("");
    return (
        <>
            <Text>This is the Face detection screen.</Text>
            <Button title="Select Image to detect faces" onPress={async () => {
                const permission = await requestPermission();
                if (permission === PermissionsAndroid.RESULTS.GRANTED) {
                    const pickedImage = await pickFile();
                    if (!pickedImage) return; // pickFile returns undefined when the user cancels
                    const pickedImageUri = pickedImage.uri;
                    setImage(pickedImageUri);
                    processFaces(pickedImageUri).then(() => console.log('Finished processing file.'));
                }
                }}/>
            <Image style={{flex: 1}} source={{ uri: image}}/>
        </>
    ); 
}

export default FaceDetectionScreen;

【Comments】:

    Tags: react-native react-native-android react-native-firebase


    【Solution 1】:

    Thanks to this comment on a GitHub issue, I was able to fix this by updating the first three lines of processFaces to:

    async function processFaces(contentUri) {
      const stat = await RNFetchBlob.fs.stat(contentUri)
      const faces = await vision().faceDetectorProcessImage(stat.path);
    

    after adding the import import RNFetchBlob from 'rn-fetch-blob'.

    rn-fetch-blob
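    The same idea can be wrapped in a small helper that resolves a content:// URI to a readable file path before it is handed to faceDetectorProcessImage. This is a sketch under my own naming: resolveLocalPath is a hypothetical function, and the fs parameter stands in for RNFetchBlob.fs from rn-fetch-blob (whose stat() resolves a SAF URI to an object with a path field on Android):

```javascript
// Sketch: resolve a SAF content:// URI to a plain file path that native
// code can read. `fs` is expected to behave like RNFetchBlob.fs.
async function resolveLocalPath(uri, fs) {
  if (uri.startsWith('content://')) {
    const stat = await fs.stat(uri); // resolves the URI to { path, ... }
    return stat.path;
  }
  return uri; // already a plain path; pass it through unchanged
}

// Example with a stubbed fs (in the app this would be RNFetchBlob.fs):
const fakeFs = { stat: async () => ({ path: '/data/user/0/app/cache/img.jpg' }) };
resolveLocalPath('content://media/document/image%3A7', fakeFs)
  .then(p => console.log(p)); // '/data/user/0/app/cache/img.jpg'
```

    processFaces would then call vision().faceDetectorProcessImage(await resolveLocalPath(contentUri, RNFetchBlob.fs)), which covers both picked documents and plain file paths.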

    【Discussion】:
