【Question Title】: IBM Watson speech to text dependency
【Posted】: 2019-03-19 19:41:39
【Question】:

I want to start using IBM Watson for speech recognition. As a next step I will run my code on the Pepper humanoid robot. Currently I cannot import the following line:

import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults;

Now I am looking for the Maven dependency that provides 'SpeechResults' so I can fix the errors in my Java code. I created a Java Maven project and added its dependencies. My pom.xml is here:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.ibm.watson.developer_cloud</groupId>
  <artifactId>java-sdk</artifactId>
  <version>6.14.0</version>
  <name>Watson Developer Cloud Java SDK</name>
  <description>Client library to use the IBM Watson Services and AlchemyAPI</description>
  <url>https://www.ibm.com/watson/developer</url>
  <licenses>
    <license>
      <name>The Apache License, Version 2.0</name>
      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
    </license>
  </licenses>
  <developers>
    <developer>
      <id>german</id>
      <name>German Attanasio</name>
      <email>germanatt@us.ibm.com</email>
    </developer>
  </developers>
  <scm>
    <connection>scm:git:git@github.com:watson-developer-cloud/java-sdk.git</connection>
    <developerConnection>scm:git:git@github.com:watson-developer-cloud/java-sdk.git</developerConnection>
    <url>https://github.com/watson-developer-cloud/java-sdk</url>
  </scm>
  <dependencies>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>assistant</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>conversation</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>core</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>discovery</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>language-translator</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>natural-language-classifier</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>natural-language-understanding</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>personality-insights</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>speech-to-text</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>text-to-speech</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>tone-analyzer</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.ibm.watson.developer_cloud</groupId>
      <artifactId>visual-recognition</artifactId>
      <version>6.14.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.squareup.okhttp3</groupId>
      <artifactId>mockwebserver</artifactId>
      <version>3.11.0</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.2.3</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>20.0</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>javax.websocket</groupId>
      <artifactId>javax.websocket-api</artifactId>
      <version>1.0</version>
    </dependency>
  </dependencies>
</project>

I have also attached my Java code here; my Eclipse IDE shows errors on lines 47, 53 and 55:

{package com.ibm.watson.developer_cloud.java_sdk;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

import com.ibm.watson.developer_cloud.http.HttpMediaType;
import com.ibm.watson.developer_cloud.speech_to_text.v1.SpeechToText;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.RecognizeOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults;
import com.ibm.watson.developer_cloud.speech_to_text.v1.websocket.BaseRecognizeCallback;

public class SpeechToTextUsingWatson {
    SpeechToText service = new SpeechToText();
    boolean keepListeningOnMicrophone = true;
    String transcribedText = "";

    public SpeechToTextUsingWatson() {
        service = new SpeechToText();
        service.setUsernameAndPassword("<username>", "<password>");
    }

    public String recognizeTextFromMicrophone() {
        keepListeningOnMicrophone = true;
        try {
            // Signed PCM AudioFormat with 16kHz, 16 bit sample size, mono
            int sampleRate = 16000;
            AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, false);
            DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

            if (!AudioSystem.isLineSupported(info)) {
                System.err.println("Line not supported");
                return null;
            }

            TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
            line.open(format);
            line.start();

            AudioInputStream audio = new AudioInputStream(line);

            RecognizeOptions options = new RecognizeOptions.Builder()
              .continuous(true)
              .interimResults(true)
              .inactivityTimeout(5) // use this to stop listening when the speaker pauses, i.e. for 5s
              .contentType(HttpMediaType.AUDIO_RAW + "; rate=" + sampleRate)
              .build();

            service.recognizeUsingWebSocket(audio, options, new BaseRecognizeCallback() {
                @Override
                public void onTranscription(SpeechResults speechResults) {
                    // System.out.println(speechResults);
                    String transcript = speechResults.getResults().get(0).getAlternatives().get(0).getTranscript();
                    if (speechResults.getResults().get(0).isFinal()) {
                        keepListeningOnMicrophone = false;
                        transcribedText = transcript;
                        System.out.println("Sentence " + (speechResults.getResultIndex() + 1) + ": " + transcript + "\n");
                    } else {
                        System.out.print(transcript + "\r");
                    }
                }
            });

            do {
                Thread.sleep(1000);
            } while (keepListeningOnMicrophone);

            // closing the WebSockets underlying InputStream will close the WebSocket itself.
            line.stop();
            line.close();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return transcribedText;
    }

    public static void main(String[] args) {
        SpeechToTextUsingWatson speechToTextUsingWatson = new SpeechToTextUsingWatson();
        String recognizedText = speechToTextUsingWatson.recognizeTextFromMicrophone();
        System.out.println("Recognized Text = " + recognizedText);
        System.exit(0);
    }
}
}

【Comments】:

  • Is the code in your actual script (starting before `package ...`) really wrapped in curly braces?

Tags: java maven ibm-cloud speech-to-text pepper


【Solution 1】:

A class that cannot be found is usually a classpath problem. Perhaps the dependencies were never pulled from Maven Central. Let's try a basic example.
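One quick way to test the classpath diagnosis, without touching any Watson code, is to probe for the class at runtime with plain JDK code. The probe below checks both the `SpeechResults` name from the question's failing import and `SpeechRecognitionResults` (the name the solution code further down uses); apart from those two fully qualified names taken from this page, everything here is standard JDK and the class itself is just an illustrative sketch:

```java
public class ClasspathProbe {

    // Returns true if the named class can be loaded from the current classpath.
    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class name from the question's failing import, and the name
        // used by the answer's example code.
        String[] candidates = {
            "com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechResults",
            "com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechRecognitionResults"
        };
        for (String name : candidates) {
            System.out.println(name + " -> "
                + (isOnClasspath(name) ? "on classpath" : "NOT on classpath"));
        }
    }
}
```

If both names print "NOT on classpath" even though the pom lists the dependency, the problem is the build setup (Maven never downloaded the jar), not the code.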

The java-sdk Maven dependency contains all Watson services, including Speech to Text.

Add to your pom.xml:

<dependency>
    <groupId>com.ibm.watson.developer_cloud</groupId>
    <artifactId>java-sdk</artifactId>
    <version>6.14.0</version>
</dependency>

Create a Java file in Eclipse and paste the following:

import java.io.File;
import java.io.FileNotFoundException;

import com.ibm.watson.developer_cloud.service.security.IamOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.SpeechToText;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.RecognizeOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechRecognitionResults;

public class SpeechToTextExample {

  public static void main(String[] args) throws FileNotFoundException {
    IamOptions iamOptions = new IamOptions.Builder()
        .apiKey("<iam_api_key>")
        .build();

    SpeechToText service = new SpeechToText();
    service.setIamCredentials(iamOptions);
    // In case you are using the Frankfurt instance
    service.setEndPoint("https://stream-fra.watsonplatform.net/speech-to-text/api");

    File audio = new File("<path-to-wav-audio-file>");
    RecognizeOptions recognizeOptions = new RecognizeOptions.Builder()
        .audio(audio)
        .contentType(RecognizeOptions.ContentType.AUDIO_WAV)
        .build();
    SpeechRecognitionResults transcript = service.recognize(recognizeOptions).execute();

    System.out.println(transcript);
  }

}

Make sure the dependencies are pulled from Maven Central, either with mvn clean compile or via the Eclipse M2E plugin.

If the code above works, you can then move on to the WebSocket and Microphone example, which is similar to what you are trying to do in your code.

【Discussion】:

  • Your code did not work at first. I also added the endpoint to the service class to fix the problem, after which it worked, for example with the code below, because the instance I chose is in Frankfurt: service.setEndPoint("stream-fra.watsonplatform.net/speech-to-text/api"); Another point concerns the WAV audio. Do you know which file works correctly? Your point about the Eclipse plugin was right. Thanks for the guidance.
  • I'm not familiar with audio formats. I know different formats store the same content at a smaller size. Try google.com/…
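On the audio-format question above: the microphone code in the question already uses 16 kHz, 16-bit, mono, signed little-endian PCM, and a WAV file in that same format is a safe choice for the audio/wav content type. A small pure-JDK sketch (no Watson dependency; the class name is just illustrative) that builds a silent WAV in memory and reads its header back, so you can confirm what a suitable file looks like:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class WavFormatCheck {

    // Writes one second of silence as a WAV in memory using the 16 kHz /
    // 16-bit / mono PCM format from the question, then parses the WAV
    // header back to verify the format survived the round trip.
    public static AudioFormat roundTrip() throws Exception {
        AudioFormat format = new AudioFormat(16000f, 16, 1, true, false);
        byte[] silence = new byte[16000 * 2]; // 16000 frames * 2 bytes/frame
        AudioInputStream in = new AudioInputStream(
                new ByteArrayInputStream(silence), format, 16000);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        AudioSystem.write(in, AudioFileFormat.Type.WAVE, out);

        AudioFileFormat header = AudioSystem.getAudioFileFormat(
                new ByteArrayInputStream(out.toByteArray()));
        return header.getFormat();
    }

    public static void main(String[] args) throws Exception {
        AudioFormat f = roundTrip();
        System.out.println(f.getSampleRate() + " Hz, "
                + f.getSampleSizeInBits() + "-bit, "
                + f.getChannels() + " channel(s), " + f.getEncoding());
    }
}
```

Pointing `AudioSystem.getAudioFileFormat` at your own WAV file instead of the in-memory stream tells you whether that file matches this format before you send it to the service.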