[Question title]: Audio - Streaming Audio from Java is Choppy
[Posted]: 2020-01-06 05:51:47
[Question description]:

My main goal is to create a live stream of encrypted voice chat captured from the microphone. The encrypted audio is then transmitted over the network from one client to the other. The problem is that while the program is running (streaming), the audio is always stuttering and choppy.

  1. I tried different kinds of hardware (PC, laptop, Raspberry Pi).
  2. Likewise with different operating systems.
  3. Streaming the audio unencrypted, to rule out any problem caused by the encryption algorithm.
  4. Changing the audio sample rate.

Unfortunately, none of this helped.

For simplicity, I have only included the code needed to transmit the audio over the network, without encryption.

MAIN CLASS - Sender and Receiver

package com.emaraic.securevoice;


import javax.sound.sampled.*;

public class SecureVoice 
{

    public static void main(String[] args) 
    {

        Receiver rx = new Receiver();
        rx.start();

        Transmitter tx = new Transmitter();
        tx.start();

    }

    public static AudioFormat getAudioFormat() 
    { //you may change these parameters to fit your mic
        float sampleRate = 8000.0f;  //8000,11025,16000,22050,44100
        int sampleSizeInBits = 16;    //8,16
        int channels = 1;             //1,2
        boolean signed = true;        //true,false
        boolean bigEndian = false;    //true,false
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }

    public static final String ANSI_BOLD = "\033[0;1m"; //not working in NetBeans
    public static final String ANSI_RESET = "\033[0m";
    public static final String ANSI_BLACK = "\033[30m";
    public static final String ANSI_RED = "\033[31m";
    public static final String ANSI_GREEN = "\033[32;4m";
    public static final String ANSI_YELLOW = "\033[33m";
    public static final String ANSI_BLUE = "\033[34m";
    public static final String ANSI_PURPLE = "\033[35m";
    public static final String ANSI_CYAN = "\033[36m";
    public static final String ANSI_WHITE = "\033[37m";
}

SENDER

package com.emaraic.securevoice;

import com.emaraic.securevoice.utils.AES;
import java.io.*;
import java.io.File;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.Mixer;
import javax.sound.sampled.Port;
import javax.sound.sampled.TargetDataLine;

public class Transmitter extends Thread
{    
    // these parameters must be copied and used in the Receiver class of the other client
    private static final String TX_IP = "10.101.114.179"; //ip to send to 
    private static final int TX_PORT = 1034;

    @Override
    public void run() 
    {
        Mixer.Info[] minfo = AudioSystem.getMixerInfo();
        System.out.println(SecureVoice.ANSI_BLUE + "Detecting sound card drivers...");
        for (Mixer.Info minfo1 : minfo) 
        {
            System.out.println("   " + minfo1);
        }

        if (AudioSystem.isLineSupported(Port.Info.MICROPHONE)) 
        {
            try 
            {
                DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, SecureVoice.getAudioFormat());
                final TargetDataLine line = (TargetDataLine) AudioSystem.getLine(dataLineInfo); //recording from mic
                line.open(SecureVoice.getAudioFormat());
                line.start(); //start recording
                System.out.println(SecureVoice.ANSI_GREEN + "Recording...");
                byte[] tempBuffer = new byte[line.getBufferSize()];
                System.out.println(SecureVoice.ANSI_BLUE + "Buffer size = " + tempBuffer.length + " bytes");
                //AudioCapture audio = new AudioCapture(line); //capture the audio into .wav file
                //audio.start();
                while (true) 
                {
                    int read = line.read(tempBuffer, 0, tempBuffer.length);
                    byte[] encrypt = AES.encrypt(tempBuffer, 0, read); //AES encryption
//                    sendToUDP(encrypt);
                    sendToUDP(java.util.Arrays.copyOf(tempBuffer, read)); //send only the bytes actually read, not the whole buffer
                }


            }

            catch (Exception e) 
            {
                System.out.println(e.getMessage());
                System.exit(0);
            }
        }


    }




    public static void sendToUDP(byte soundpacket[]) 
    {
        try 
        {
//            EncryptedAudio encrypt = new EncryptedAudio(soundpacket);
//            encrypt.start();
            DatagramSocket sock = new DatagramSocket();
            sock.send(new DatagramPacket(soundpacket, soundpacket.length, InetAddress.getByName(TX_IP), TX_PORT));
            sock.close();
        } 
        catch (Exception e) 
        {
            System.out.println(e.getMessage());
        }
    }
}

RECEIVER

package com.emaraic.securevoice;


import com.emaraic.securevoice.utils.AES;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class Receiver extends Thread {
    // these parameters must be used in the Transmitter class of the other client
    private static final String RX_IP = "localhost"; 
    private static final int RX_PORT = 1034;


    @Override
    public void run() {
        byte[] b;
        while (true) {
            b = rxFromUDP();
            speak(b);
        }
    }


    public static byte[] rxFromUDP() {
        try {
            DatagramSocket sock = new DatagramSocket(RX_PORT);
            byte[] soundpacket = new byte[8192];
            DatagramPacket datagram = new DatagramPacket(soundpacket, soundpacket.length); //receive-side packets need no target address
            sock.receive(datagram);
            sock.close();

//            return AES.decrypt(datagram.getData(), 0, datagram.getLength()); //decrypted audio
            return java.util.Arrays.copyOf(soundpacket, datagram.getLength()); //only the bytes actually received; remove decryption to hear the raw stream
        } catch (Exception e) {
            System.out.println(e.getMessage());
            return null;
        }

    }

    public static void speak(byte soundbytes[]) {

        try {
            DataLine.Info dataLineInfo = new DataLine.Info(SourceDataLine.class, SecureVoice.getAudioFormat());
            try (SourceDataLine sourceDataLine = (SourceDataLine) AudioSystem.getLine(dataLineInfo)) {
                sourceDataLine.open(SecureVoice.getAudioFormat());
                sourceDataLine.start();
                sourceDataLine.write(soundbytes, 0, soundbytes.length);
                sourceDataLine.drain();
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }

    }



}

Additional link: http://emaraic.com/blog/secure-voice-chat

IDE used - NetBeans 11.1

Java JDK version - Java 13 (Windows) - OpenJDK 11 (Linux)

[Question comments]:

    Tags: java audio inputstream


    [Solution 1]:

    There are two problems. Network-streamed data jitters in its arrival time. Starting and stopping audio playback introduces latency gaps and glitches, due to overhead in the OS and the hardware drivers. There is also a smaller issue of sample-rate clock synchronization between the recording and playback systems. All of these disturb what should be a continuous stream of audio samples at a fixed rate.

    To avoid the audio startup-latency problem, do not stop the audio playback or recording system between network packets; keep it ready to play audio continuously at the current sample rate. To help cover network jitter, buffer some audio data before starting playback, so that even if the next network packet is significantly delayed there is always some audio ready to play.
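    The Receiver above opens a fresh DatagramSocket and a fresh SourceDataLine for every packet, which forces exactly this stop/start cycle. A minimal sketch of the alternative, with a hypothetical `pump` helper (the `OutputStream` sink stands in for a SourceDataLine that is opened and started once, and never drained or closed between packets):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class ContinuousReceiver {

    /** Reads `packets` datagrams from an already-open socket and writes each
     *  payload to one long-lived sink, with no per-packet open/close. */
    public static void pump(DatagramSocket sock, OutputStream sink, int packets) throws IOException {
        byte[] buf = new byte[8192];
        DatagramPacket p = new DatagramPacket(buf, buf.length);
        for (int i = 0; i < packets; i++) {
            p.setLength(buf.length); // receive() shrinks the length to the payload size, so reset it
            sock.receive(p);
            sink.write(p.getData(), 0, p.getLength()); // write only the bytes actually received
        }
    }
}
```

    In the real receiver the sink would be a `sourceDataLine.write(p.getData(), 0, p.getLength())` call on a line created, opened, and started once before the loop; `drain()` and `close()` happen only at shutdown.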

    You may need to collect some statistics on audio startup and network latency, and on how much they vary, to determine a suitable amount of buffering. The alternative is an audio loss-concealment algorithm, which is considerably more complex to implement.
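    As a sketch of that statistics gathering (a hypothetical helper; all names are my own): record the gap between consecutive packet arrivals and size the prebuffer to cover the worst gap observed. At the question's format (8000 Hz, 16-bit, mono) playback consumes 8000 × 2 = 16,000 bytes per second:

```java
/** Tracks inter-arrival gaps (ms) between packets to help size a jitter buffer. */
public class JitterStats {
    private long last = -1;   // timestamp of the previous packet, -1 = none yet
    private long maxGap = 0;  // worst gap seen so far, ms
    private long totalGap = 0;
    private int gaps = 0;

    /** Call once per received datagram with the arrival time in milliseconds. */
    public void onPacket(long nowMillis) {
        if (last >= 0) {
            long gap = nowMillis - last;
            maxGap = Math.max(maxGap, gap);
            totalGap += gap;
            gaps++;
        }
        last = nowMillis;
    }

    public long maxGapMillis() { return maxGap; }

    public double meanGapMillis() { return gaps == 0 ? 0 : (double) totalGap / gaps; }

    /** Bytes to queue before starting playback so it survives the worst gap seen. */
    public int suggestedPrebufferBytes(int bytesPerSecond) {
        return (int) (maxGap * bytesPerSecond / 1000);
    }
}
```

    Calling `onPacket(System.currentTimeMillis())` for each datagram, and starting playback only once `suggestedPrebufferBytes(16000)` bytes are queued, covers the worst jitter observed so far.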

    [Discussion]:
