【Title】Google Streaming Speech Recognition on an Audio Stream in Python
【Posted】2017-10-20 15:47:42
【Question】

I have searched all the available Google documentation, but I can't find an example of streaming speech recognition on an audio stream in Python.

Currently, I'm using Python speech recognition in Django to capture the user's audio and then listen to it. I can then save the file and run Google speech recognition on it, or run recognition directly from the audio instance that was created.
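For the save-then-recognize approach described here, the raw PCM bytes and the sample rate can be pulled out of a saved WAV file with the standard library alone; those are the two values a recognizer config needs. A minimal sketch (the in-memory one-second silent WAV is a stand-in for the user's actual recording):

```python
import io
import wave

# Build a tiny in-memory WAV file as a stand-in for the saved recording.
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit PCM, i.e. LINEAR16
    w.setframerate(16000)
    w.writeframes(b'\x00\x00' * 16000)  # one second of silence
buf.seek(0)

# Read back the sample rate and raw frames you would hand to a recognizer.
with wave.open(buf, 'rb') as w:
    rate = w.getframerate()
    frames = w.readframes(w.getnframes())
print(rate, len(frames))  # → 16000 32000
```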

Can someone point me to how to perform streaming speech recognition on an audio stream?

【Question comments】

Tags: python django audio google-speech-api


【Solution 1】

    Google provides an example of the streaming Python API here.

    Instead of opening an audio file to create the stream (as on line 34 of that example), pass your stream directly to the audio sample object (as on line 36).
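A sketch of that substitution: wrap any file-like stream in a generator that yields fixed-size byte chunks, which is the shape a streaming request generator consumes (the chunk size and the in-memory stream below are illustrative assumptions, not part of the linked sample):

```python
import io

def stream_chunks(stream, chunk_size=4096):
    """Yield successive raw-byte chunks from any file-like stream;
    each chunk would become the audio_content of one streaming request."""
    while True:
        data = stream.read(chunk_size)
        if not data:
            return
        yield data

# Stand-in for a live audio stream: 10 000 bytes of silence.
fake_stream = io.BytesIO(b"\x00" * 10000)
chunks = list(stream_chunks(fake_stream))
print(len(chunks), len(chunks[0]), len(chunks[-1]))  # → 3 4096 1808
```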

    【Comments】

    • @blambert It would be helpful if you could illustrate this with some code.
    • @indexOutOfBounds Could you give it a try?
    【Solution 2】

    Here is working code that does what was asked above.

    Code:

    import asyncio
    import websockets
    import json
    import threading
    from six.moves import queue
    from google.cloud import speech
    # Note: the `types` module was removed in google-cloud-speech 2.0,
    # so this answer targets the pre-2.0 client library.
    from google.cloud.speech import types
    
    
    IP = '0.0.0.0'
    PORT = 8000
    
    class Transcoder(object):
        """
        Converts audio chunks to text
        """
        def __init__(self, encoding, rate, language):
            self.buff = queue.Queue()
            self.encoding = encoding
            self.language = language
            self.rate = rate
            self.closed = True
            self.transcript = None
    
        def start(self):
            """Start up streaming speech call"""
            # Open the buffer before the worker thread starts reading it;
            # otherwise stream_generator exits immediately because
            # self.closed is still True.
            self.closed = False
            threading.Thread(target=self.process).start()
    
        def response_loop(self, responses):
            """
            Pick up the final result of Speech to text conversion
            """
            for response in responses:
                if not response.results:
                    continue
                result = response.results[0]
                if not result.alternatives:
                    continue
                transcript = result.alternatives[0].transcript
                if result.is_final:
                    self.transcript = transcript
    
        def process(self):
            """
            Audio stream recognition and result parsing
            """
            #You can add speech contexts for better recognition
            cap_speech_context = types.SpeechContext(phrases=["Add your phrases here"])
            client = speech.SpeechClient()
            config = types.RecognitionConfig(
                encoding=self.encoding,
                sample_rate_hertz=self.rate,
                language_code=self.language,
                speech_contexts=[cap_speech_context,],
                model='command_and_search'
            )
            streaming_config = types.StreamingRecognitionConfig(
                config=config,
                interim_results=False,
                single_utterance=False)
            audio_generator = self.stream_generator()
            requests = (types.StreamingRecognizeRequest(audio_content=content)
                        for content in audio_generator)
    
            responses = client.streaming_recognize(streaming_config, requests)
            try:
                self.response_loop(responses)
            except Exception:
                # A streaming call can fail (e.g. when it exceeds Google's
                # streaming duration limit), so restart it instead of
                # letting the worker thread die.
                self.start()
    
        def stream_generator(self):
            while not self.closed:
                chunk = self.buff.get()
                if chunk is None:
                    return
                data = [chunk]
                while True:
                    try:
                        chunk = self.buff.get(block=False)
                        if chunk is None:
                            return
                        data.append(chunk)
                    except queue.Empty:
                        break
                yield b''.join(data)
    
        def write(self, data):
            """
            Writes data to the buffer
            """
            self.buff.put(data)
    
    
    async def audio_processor(websocket, path):
        """
        Collects audio from the stream, writes it to the buffer, and returns the Google speech-to-text output
        """
        config = await websocket.recv()
        if not isinstance(config, str):
            print("ERROR, no config")
            return
        config = json.loads(config)
        transcoder = Transcoder(
            encoding=config["format"],
            rate=config["rate"],
            language=config["language"]
        )
        transcoder.start()
        while True:
            try:
                data = await websocket.recv()
            except websockets.ConnectionClosed:
                print("Connection closed")
                break
            transcoder.write(data)
            transcoder.closed = False
            if transcoder.transcript:
                print(transcoder.transcript)
                await websocket.send(transcoder.transcript)
                transcoder.transcript = None
    
    start_server = websockets.serve(audio_processor, IP, PORT)
    asyncio.get_event_loop().run_until_complete(start_server)
    asyncio.get_event_loop().run_forever()
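The trickiest part of the code above is `stream_generator`: it blocks for the first chunk, then greedily drains whatever else has queued up so that each request carries as much audio as possible. That batching step can be isolated and exercised on its own (the `drain` helper below is a hypothetical extraction for illustration, not part of the `Transcoder` class):

```python
import queue

def drain(buff):
    """One iteration of the batching in stream_generator: block for the
    first chunk, then non-blockingly collect everything else queued."""
    chunk = buff.get()
    if chunk is None:          # sentinel: caller closed the stream
        return None
    data = [chunk]
    while True:
        try:
            chunk = buff.get(block=False)
            if chunk is None:  # sentinel mid-batch also ends the stream
                return None
            data.append(chunk)
        except queue.Empty:
            break
    return b''.join(data)

buff = queue.Queue()
for piece in (b'ab', b'cd', b'ef'):
    buff.put(piece)
print(drain(buff))  # → b'abcdef'
```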
    

    【Comments】

    • Is there a way to read this audio stream from a Google Cloud Storage bucket?
    【Solution 3】

    If you're using a React web app to stream the client's audio, you can refer to this repository for a code sample (or clone it and add your own proprietary code): https://github.com/saharmor/realtime-transcription-playground

    【Comments】