【Question Title】: How to convert an OutputStream to an InputStream?
【Posted】: 2011-08-12 07:52:44
【Question Description】:

I am at a stage of development where I have two modules: one produces output as an OutputStream, and the second one accepts only an InputStream. Do you know how to convert an OutputStream to an InputStream (not vice versa, I really mean it this way) so that I will be able to connect these two parts?

Thanks

【Question Discussion】:

  • @c0mrade, the OP wants something like IOUtils.copy, only in the other direction. While someone writes to an OutputStream, someone else consumes it through an InputStream. This is basically what PipedOutputStream/PipedInputStream do. Unfortunately, the piped streams cannot be built from other streams.
  • So PipedOutputStream/PipedInputStream are the solution?
  • Basically, for PipedStreams to work in your case, your OutputStream would need to be constructed like new YourOutputStream(thePipedOutputStream) and new YourInputStream(thePipedInputStream), which is probably not how your streams work. So I don't think this is the solution.

标签: java inputstream outputstream


【Solution 1】:

The open-source easystream library has direct support for converting an OutputStream to an InputStream: http://io-tools.sourceforge.net/easystream/tutorial/tutorial.html

// create the conversion
final OutputStreamToInputStream<Void> out = new OutputStreamToInputStream<Void>() {
    @Override
    protected Void doRead(final InputStream in) throws Exception {
        LibraryClass2.processDataFromInputStream(in);
        return null;
    }
};
try {
    LibraryClass1.writeDataToTheOutputStream(out);
} finally {
    // don't miss the close (or a thread would not terminate correctly).
    out.close();
}

They also list other options: http://io-tools.sourceforge.net/easystream/outputstream_to_inputstream/implementations.html

  • Write your data to a memory buffer (ByteArrayOutputStream), get the byte array, and read it again with a ByteArrayInputStream. This is the best approach if you are sure your data fits into memory.
  • Copy your data to a temporary file and read it back.
  • Use pipes: this is the best approach for both memory usage and speed (you can take full advantage of multi-core processors), and it is the standard solution offered by Sun.
  • Use InputStreamFromOutputStream and OutputStreamToInputStream from the easystream library.
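The first option above can be sketched in a few lines; this is a minimal, self-contained illustration (the payload string and class name are only for demonstration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferDemo {
    // Buffer everything in memory, then expose it again as an InputStream.
    static InputStream toInputStream(ByteArrayOutputStream out) {
        // toByteArray() copies the internal buffer, so the data briefly exists twice
        return new ByteArrayInputStream(out.toByteArray());
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write("hello world".getBytes("UTF-8"));

        InputStream in = toInputStream(out);
        byte[] read = new byte[11];
        int n = in.read(read);
        System.out.println(n + " bytes: " + new String(read, 0, n, "UTF-8"));
    }
}
```

This is only viable when the whole payload fits into memory; for larger data the pipe-based options below avoid the double copy.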

【Discussion】:

  • Yes! Use easystream!
【Solution 2】:

There seem to be many links and other such things, but no actual code using pipes. The advantage of using java.io.PipedInputStream and java.io.PipedOutputStream is that there is no additional memory consumption. ByteArrayOutputStream.toByteArray() returns a copy of the original buffer, which means that whatever you have in memory, you now have two copies of it. Then writing to an InputStream means you now have three copies of the data.

The code:

// take the copy of the stream and re-write it to an InputStream
PipedInputStream in = new PipedInputStream();
final PipedOutputStream out = new PipedOutputStream(in);
new Thread(new Runnable() {
    public void run () {
        try {
            // write the original OutputStream to the PipedOutputStream
            // note that in order for the below method to work, you need
            // to ensure that the data has finished writing to the
            // ByteArrayOutputStream
            originalByteArrayOutputStream.writeTo(out);
        }
        catch (IOException e) {
            // logging and exception handling should go here
        }
        finally {
            // close the PipedOutputStream here because we're done writing data
            // once this thread has completed its run
            try {
                out.close();
            }
            catch (IOException e) {
                // close() can itself throw; log it here
            }
        }
    }
}).start();

This code assumes that originalByteArrayOutputStream is a ByteArrayOutputStream, as it is usually the only usable output stream unless you are writing to a file. I hope this helps! The nice thing about this is that since it runs in a separate thread, it also works in parallel, so whatever is consuming your input stream streams out of your old output stream at the same time. This is beneficial because the buffer can stay small and you will have less latency and less memory usage.
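For completeness, here is the same pattern as a self-contained sketch with a consumer on the reading end (the payload and method name are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDemo {
    // Hands the buffered bytes to a reader through a pipe, writing on a separate thread.
    static PipedInputStream pipe(final ByteArrayOutputStream src) throws IOException {
        PipedInputStream in = new PipedInputStream();
        final PipedOutputStream out = new PipedOutputStream(in);
        new Thread(new Runnable() {
            public void run() {
                try {
                    src.writeTo(out);
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    // closing the write end lets the reader see end-of-stream
                    try { out.close(); } catch (IOException ignored) { }
                }
            }
        }).start();
        return in;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream src = new ByteArrayOutputStream();
        src.write("piped data".getBytes("UTF-8"));

        // The reader sees bytes as soon as the writer thread produces them.
        PipedInputStream in = pipe(src);
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            sb.append((char) c);
        }
        System.out.println(sb); // prints: piped data
    }
}
```

Note that the reading happens on a different thread than the writing, which is exactly what the PipedInputStream documentation requires to avoid deadlock.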

【Discussion】:

  • I upvoted this, but it is better to pass out to in's constructor, otherwise you may get a closed-pipe exception on in because of a race condition (which I experienced). Using Java 8 lambdas: PipedInputStream in = new PipedInputStream(out); ((Runnable)() -> {originalOutputStream.writeTo(out);}).run(); return in;
  • No, my case stems from storing PDFs in Mongo GridFS and then streaming to the client using Jax-RS. MongoDB supplies an OutputStream, but Jax-RS requires an InputStream. My path method would return to the container with an InputStream before the OutputStream was fully established, it seems (maybe the buffer had not been cached yet). Anyway, Jax-RS would throw a closed-pipe exception on the InputStream. Odd, but that is what happened half the time. Changing to the code above prevents that.
  • @JohnManko I looked into this more and saw in the PipedInputStream Javadocs: A pipe is said to be broken if a thread that was providing data bytes to the connected piped output stream is no longer alive. So I suspect that if you use the example above, the thread is completing before Jax-RS consumes the input stream. Meanwhile, I looked at the MongoDB Javadocs. GridFSDBFile has an input stream, so why not just pass that to Jax-RS?
  • @DennisCheung yes, of course. Nothing is free, but it will certainly be smaller than a 15 MB copy. Optimizations would include using a thread pool to reduce GC churn from constantly creating threads/objects.
  • Keep in mind that PipedInputStream and PipedOutputStream must be on separate threads, otherwise deadlock may occur after a certain size (see the Java docs: docs.oracle.com/javase/7/docs/api/java/io/PipedInputStream.html)
【Solution 3】:

io-extras may be useful. For example, if you want to compress an InputStream using GZIPOutputStream and you want it to happen synchronously (using the default buffer size of 8192):

InputStream is = ...
InputStream gz = IOUtil.pipe(is, o -> new GZIPOutputStream(o));

Note that the library has 100% unit test coverage (for what that's worth!) and is on Maven Central. The Maven dependency is:

<dependency>
  <groupId>com.github.davidmoten</groupId>
  <artifactId>io-extras</artifactId>
  <version>0.1</version>
</dependency>

Be sure to check for a later version.

【Discussion】:

【Solution 4】:

In my opinion, java.io.PipedInputStream/java.io.PipedOutputStream is the best option to consider. In some situations you may want to use ByteArrayInputStream/ByteArrayOutputStream. The problem is that you need to duplicate the buffer to convert a ByteArrayOutputStream to a ByteArrayInputStream. ByteArrayOutputStream/ByteArrayInputStream are also limited to 2GB. Here is an OutputStream/InputStream implementation I wrote to work around the ByteArrayOutputStream/ByteArrayInputStream limitations (Scala code, but easy to follow for Java developers):

    import java.io.{IOException, InputStream, OutputStream}
    
    import scala.annotation.tailrec
    
    /** Acts as a replacement for ByteArrayOutputStream
      *
      */
    class HugeMemoryOutputStream(capacity: Long) extends OutputStream {
      private val PAGE_SIZE: Int = 1024000
      private val ALLOC_STEP: Int = 1024
    
      /** Pages array
        *
        */
      private var streamBuffers: Array[Array[Byte]] = Array.empty[Array[Byte]]
    
      /** Allocated pages count
        *
        */
      private var pageCount: Int = 0
    
      /** Allocated bytes count
        *
        */
      private var allocatedBytes: Long = 0
    
      /** Current position in stream
        *
        */
      private var position: Long = 0
    
      /** Stream length
        *
        */
      private var length: Long = 0
    
      allocSpaceIfNeeded(capacity)
    
      /** Gets page count based on given length
        *
        * @param length   Buffer length
        * @return         Page count to hold the specified amount of data
        */
      private def getPageCount(length: Long) = {
        var pageCount = (length / PAGE_SIZE).toInt + 1
    
        if ((length % PAGE_SIZE) == 0) {
          pageCount -= 1
        }
    
        pageCount
      }
    
      /** Extends pages array
        *
        */
      private def extendPages(): Unit = {
        if (streamBuffers.isEmpty) {
          streamBuffers = new Array[Array[Byte]](ALLOC_STEP)
        }
        else {
          val newStreamBuffers = new Array[Array[Byte]](streamBuffers.length + ALLOC_STEP)
          Array.copy(streamBuffers, 0, newStreamBuffers, 0, streamBuffers.length)
          streamBuffers = newStreamBuffers
        }
    
        pageCount = streamBuffers.length
      }
    
      /** Ensures buffers are big enough to hold the specified amount of data
        *
        * @param value  Amount of data
        */
      private def allocSpaceIfNeeded(value: Long): Unit = {
        @tailrec
        def allocSpaceIfNeededIter(value: Long): Unit = {
          val currentPageCount = getPageCount(allocatedBytes)
          val neededPageCount = getPageCount(value)
    
          if (currentPageCount < neededPageCount) {
            if (currentPageCount == pageCount) extendPages()
    
            streamBuffers(currentPageCount) = new Array[Byte](PAGE_SIZE)
            allocatedBytes = (currentPageCount + 1).toLong * PAGE_SIZE
    
            allocSpaceIfNeededIter(value)
          }
        }
    
        if (value < 0) throw new Error("AllocSpaceIfNeeded < 0")
        if (value > 0) {
          allocSpaceIfNeededIter(value)
    
          length = Math.max(value, length)
          if (position > length) position = length
        }
      }
    
      /**
        * Writes the specified byte to this output stream. The general
        * contract for <code>write</code> is that one byte is written
        * to the output stream. The byte to be written is the eight
        * low-order bits of the argument <code>b</code>. The 24
        * high-order bits of <code>b</code> are ignored.
        * <p>
        * Subclasses of <code>OutputStream</code> must provide an
        * implementation for this method.
        *
        * @param      b the <code>byte</code>.
        */
      @throws[IOException]
      override def write(b: Int): Unit = {
        val buffer: Array[Byte] = new Array[Byte](1)
    
        buffer(0) = b.toByte
    
        write(buffer)
      }
    
      /**
        * Writes <code>len</code> bytes from the specified byte array
        * starting at offset <code>off</code> to this output stream.
        * The general contract for <code>write(b, off, len)</code> is that
        * some of the bytes in the array <code>b</code> are written to the
        * output stream in order; element <code>b[off]</code> is the first
        * byte written and <code>b[off+len-1]</code> is the last byte written
        * by this operation.
        * <p>
        * The <code>write</code> method of <code>OutputStream</code> calls
        * the write method of one argument on each of the bytes to be
        * written out. Subclasses are encouraged to override this method and
        * provide a more efficient implementation.
        * <p>
        * If <code>b</code> is <code>null</code>, a
        * <code>NullPointerException</code> is thrown.
        * <p>
        * If <code>off</code> is negative, or <code>len</code> is negative, or
        * <code>off+len</code> is greater than the length of the array
        * <code>b</code>, then an <tt>IndexOutOfBoundsException</tt> is thrown.
        *
        * @param      b   the data.
        * @param      off the start offset in the data.
        * @param      len the number of bytes to write.
        */
      @throws[IOException]
      override def write(b: Array[Byte], off: Int, len: Int): Unit = {
        @tailrec
        def writeIter(b: Array[Byte], off: Int, len: Int): Unit = {
          val currentPage: Int = (position / PAGE_SIZE).toInt
          val currentOffset: Int = (position % PAGE_SIZE).toInt
    
          if (len != 0) {
            val currentLength: Int = Math.min(PAGE_SIZE - currentOffset, len)
            Array.copy(b, off, streamBuffers(currentPage), currentOffset, currentLength)
    
            position += currentLength
    
            writeIter(b, off + currentLength, len - currentLength)
          }
        }
    
        allocSpaceIfNeeded(position + len)
        writeIter(b, off, len)
      }
    
      /** Gets an InputStream that points to HugeMemoryOutputStream buffer
        *
        * @return InputStream
        */
      def asInputStream(): InputStream = {
        new HugeMemoryInputStream(streamBuffers, length)
      }
    
      private class HugeMemoryInputStream(streamBuffers: Array[Array[Byte]], val length: Long) extends InputStream {
        /** Current position in stream
          *
          */
        private var position: Long = 0
    
        /**
          * Reads the next byte of data from the input stream. The value byte is
          * returned as an <code>int</code> in the range <code>0</code> to
          * <code>255</code>. If no byte is available because the end of the stream
          * has been reached, the value <code>-1</code> is returned. This method
          * blocks until input data is available, the end of the stream is detected,
          * or an exception is thrown.
          *
          * <p> A subclass must provide an implementation of this method.
          *
          * @return the next byte of data, or <code>-1</code> if the end of the
          *         stream is reached.
          */
        @throws[IOException]
        def read: Int = {
          val buffer: Array[Byte] = new Array[Byte](1)

          // return -1 at end of stream and mask to 0..255, per the InputStream contract
          if (read(buffer) <= 0) -1
          else buffer(0) & 0xFF
        }
    
        /**
          * Reads up to <code>len</code> bytes of data from the input stream into
          * an array of bytes.  An attempt is made to read as many as
          * <code>len</code> bytes, but a smaller number may be read.
          * The number of bytes actually read is returned as an integer.
          *
          * <p> This method blocks until input data is available, end of file is
          * detected, or an exception is thrown.
          *
          * <p> If <code>len</code> is zero, then no bytes are read and
          * <code>0</code> is returned; otherwise, there is an attempt to read at
          * least one byte. If no byte is available because the stream is at end of
          * file, the value <code>-1</code> is returned; otherwise, at least one
          * byte is read and stored into <code>b</code>.
          *
          * <p> The first byte read is stored into element <code>b[off]</code>, the
          * next one into <code>b[off+1]</code>, and so on. The number of bytes read
          * is, at most, equal to <code>len</code>. Let <i>k</i> be the number of
          * bytes actually read; these bytes will be stored in elements
          * <code>b[off]</code> through <code>b[off+</code><i>k</i><code>-1]</code>,
          * leaving elements <code>b[off+</code><i>k</i><code>]</code> through
          * <code>b[off+len-1]</code> unaffected.
          *
          * <p> In every case, elements <code>b[0]</code> through
          * <code>b[off]</code> and elements <code>b[off+len]</code> through
          * <code>b[b.length-1]</code> are unaffected.
          *
          * <p> The <code>read(b,</code> <code>off,</code> <code>len)</code> method
          * for class <code>InputStream</code> simply calls the method
          * <code>read()</code> repeatedly. If the first such call results in an
          * <code>IOException</code>, that exception is returned from the call to
          * the <code>read(b,</code> <code>off,</code> <code>len)</code> method.  If
          * any subsequent call to <code>read()</code> results in a
          * <code>IOException</code>, the exception is caught and treated as if it
          * were end of file; the bytes read up to that point are stored into
          * <code>b</code> and the number of bytes read before the exception
          * occurred is returned. The default implementation of this method blocks
          * until the requested amount of input data <code>len</code> has been read,
          * end of file is detected, or an exception is thrown. Subclasses are encouraged
          * to provide a more efficient implementation of this method.
          *
          * @param      b   the buffer into which the data is read.
          * @param      off the start offset in array <code>b</code>
          *                 at which the data is written.
          * @param      len the maximum number of bytes to read.
          * @return the total number of bytes read into the buffer, or
          *         <code>-1</code> if there is no more data because the end of
          *         the stream has been reached.
          * @see java.io.InputStream#read()
          */
        @throws[IOException]
        override def read(b: Array[Byte], off: Int, len: Int): Int = {
          @tailrec
          def readIter(acc: Int, b: Array[Byte], off: Int, len: Int): Int = {
            val currentPage: Int = (position / PAGE_SIZE).toInt
            val currentOffset: Int = (position % PAGE_SIZE).toInt
    
            val count: Int = Math.min(len, length - position).toInt
    
            if (count == 0 || position >= length) acc
            else {
              val currentLength = Math.min(PAGE_SIZE - currentOffset, count)
              Array.copy(streamBuffers(currentPage), currentOffset, b, off, currentLength)
    
              position += currentLength
    
              readIter(acc + currentLength, b, off + currentLength, len - currentLength)
            }
          }
    
          val bytesRead = readIter(0, b, off, len)

          // honor the InputStream contract: signal end of stream with -1
          if (bytesRead == 0 && len > 0) -1 else bytesRead
        }
    
        /**
          * Skips over and discards <code>n</code> bytes of data from this input
          * stream. The <code>skip</code> method may, for a variety of reasons, end
          * up skipping over some smaller number of bytes, possibly <code>0</code>.
          * This may result from any of a number of conditions; reaching end of file
          * before <code>n</code> bytes have been skipped is only one possibility.
          * The actual number of bytes skipped is returned. If <code>n</code> is
          * negative, the <code>skip</code> method for class <code>InputStream</code> always
          * returns 0, and no bytes are skipped. Subclasses may handle the negative
          * value differently.
          *
          * The <code>skip</code> method of this class creates a
          * byte array and then repeatedly reads into it until <code>n</code> bytes
          * have been read or the end of the stream has been reached. Subclasses are
          * encouraged to provide a more efficient implementation of this method.
          * For instance, the implementation may depend on the ability to seek.
          *
          * @param      n the number of bytes to be skipped.
          * @return the actual number of bytes skipped.
          */
        @throws[IOException]
        override def skip(n: Long): Long = {
          if (n < 0) 0
          else {
            position = Math.min(position + n, length)
            length - position
          }
        }
      }
    }
    

Easy to use, no buffer duplication, no 2GB memory limit:

    val out: HugeMemoryOutputStream = new HugeMemoryOutputStream(initialCapacity /*may be 0*/)
    
    out.write(...)
    ...
    
    val in1: InputStream = out.asInputStream()
    
    in1.read(...)
    ...
    
    val in2: InputStream = out.asInputStream()
    
    in2.read(...)
    ...
    

【Discussion】:

【Solution 5】:

An OutputStream is one where you write data to. If some module exposes an OutputStream, the expectation is that there is something reading at the other end.

Something that exposes an InputStream, on the other hand, is indicating that you will need to listen to this stream, and there will be data that you can read.

So it is possible to connect an InputStream to an OutputStream:

InputStream ----read---> intermediateBytes[n] ----write----> OutputStream

As someone mentioned, this is what the copy() method from IOUtils lets you do. It does not make sense to go the other way... hopefully this makes some sense.
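That direction is a plain pump loop; this minimal sketch shows roughly what IOUtils.copy does internally (the 8192 buffer size is a common convention, not a requirement):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyDemo {
    // Pump every byte from the source into the sink; returns the number of bytes copied.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("abc".getBytes("UTF-8"));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.out.println(copy(in, out) + " " + out.toString("UTF-8")); // prints: 3 abc
    }
}
```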

Update:

Of course, the more I think about this, the more I can see that it actually would be a requirement. I know some of the comments mentioned the Piped input/output streams, but there is another possibility.

If the output stream that is exposed is a ByteArrayOutputStream, then you can always get its full contents by calling the toByteArray() method. Then you can create an input stream wrapper using the ByteArrayInputStream subclass. These two are pseudo-streams; they basically both just wrap a byte array. So using the streams this way is technically possible, but to me it is still very strange...

【Discussion】:

  • copy() does this from an IS to an OS according to the API; I need it to do it backwards
  • The use case is very simple: imagine you have a serialization library (serializing to JSON, for example) and a transport layer (say, Tomcat) that takes an InputStream. So you need to pipe the OutputStream from JSON over an HTTP connection that wants to read from an InputStream.
  • This is useful when unit testing and you are very pedantic about avoiding touching the file system.
  • @JBCP's comment is spot on. Another use case is using PDFBox to build a PDF during an HTTP request. PDFBox uses an OutputStream to save the PDF object, and the REST API accepts an InputStream to reply to the client. So OutputStream -> InputStream is a very real-world use case.
  • "you can always get its full contents by calling the toByteArray() method" — the whole point of using streams is NOT to load the entire content into memory!!
【Solution 6】:

Although you cannot convert an OutputStream to an InputStream, Java provides a way, using PipedOutputStream and PipedInputStream, whereby you can write data to a PipedOutputStream and have it become available through the associated PipedInputStream.
I have faced a similar situation at times when dealing with third-party libraries that required an InputStream instance to be passed to them instead of an OutputStream instance.
The way I solved this problem was to use PipedInputStream and PipedOutputStream.
By the way, they are tricky to use, and you must use multithreading to achieve what you want. I recently published an implementation on GitHub that you can use.
Here is the link. You can go through the wiki to understand how to use it.
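A generic shape for this pattern might look as follows; the Writer callback interface here is an assumption about how the producing module exposes itself, not part of any library:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipedAdapter {
    // Hypothetical callback: anything that knows how to write itself to an OutputStream.
    interface Writer {
        void writeTo(OutputStream out) throws IOException;
    }

    // Runs the producer on its own thread and hands back the readable end of the pipe.
    static InputStream toInputStream(final Writer writer) throws IOException {
        PipedInputStream in = new PipedInputStream();
        final PipedOutputStream out = new PipedOutputStream(in);
        new Thread(new Runnable() {
            public void run() {
                try {
                    writer.writeTo(out);
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    // closing the write end signals end-of-stream to the reader
                    try { out.close(); } catch (IOException ignored) { }
                }
            }
        }).start();
        return in;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = toInputStream(new Writer() {
            public void writeTo(OutputStream out) throws IOException {
                out.write("from the writer".getBytes("UTF-8"));
            }
        });
        int c;
        StringBuilder sb = new StringBuilder();
        while ((c = in.read()) != -1) sb.append((char) c);
        System.out.println(sb); // prints: from the writer
    }
}
```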

【Discussion】:

【Solution 7】:
        ByteArrayOutputStream buffer = (ByteArrayOutputStream) aOutputStream;
        byte[] bytes = buffer.toByteArray();
        InputStream inputStream = new ByteArrayInputStream(bytes);
        

【Discussion】:

  • You should not use this, because the toByteArray() method body is just return Arrays.copyOf(buf, count);, which returns a new array.
  • java.io.FileOutputStream cannot be cast to java.io.ByteArrayOutputStream
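As the second comment shows, the unchecked cast blows up for any other stream type; a guarded sketch (the method name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class GuardedCast {
    static InputStream toInputStream(OutputStream aOutputStream) {
        // Only ByteArrayOutputStream actually holds its data in memory;
        // fail fast for FileOutputStream and friends instead of a ClassCastException.
        if (!(aOutputStream instanceof ByteArrayOutputStream)) {
            throw new IllegalArgumentException(
                "expected ByteArrayOutputStream, got " + aOutputStream.getClass());
        }
        byte[] bytes = ((ByteArrayOutputStream) aOutputStream).toByteArray();
        return new ByteArrayInputStream(bytes);
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write("ok".getBytes("UTF-8"));
        InputStream in = toInputStream(out);
        System.out.println((char) in.read() + "" + (char) in.read()); // prints: ok
    }
}
```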
【Solution 8】:

I ran into the same problem converting a ByteArrayOutputStream to a ByteArrayInputStream and solved it with a class derived from ByteArrayOutputStream that can return a ByteArrayInputStream initialized with the internal buffer of the ByteArrayOutputStream. This way no additional memory is used and the "conversion" is very fast:

        package info.whitebyte.utils;
        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        
        /**
         * This class extends the ByteArrayOutputStream by 
         * providing a method that returns a new ByteArrayInputStream
         * which uses the internal byte array buffer. This buffer
         * is not copied, so no additional memory is used. After
         * creating the ByteArrayInputStream the instance of the
         * ByteArrayInOutStream can not be used anymore.
         * <p>
         * The ByteArrayInputStream can be retrieved using <code>getInputStream()</code>.
         * @author Nick Russler
         */
        public class ByteArrayInOutStream extends ByteArrayOutputStream {
            /**
             * Creates a new ByteArrayInOutStream. The buffer capacity is
             * initially 32 bytes, though its size increases if necessary.
             */
            public ByteArrayInOutStream() {
                super();
            }
        
            /**
             * Creates a new ByteArrayInOutStream, with a buffer capacity of
             * the specified size, in bytes.
             *
             * @param   size   the initial size.
             * @exception  IllegalArgumentException if size is negative.
             */
            public ByteArrayInOutStream(int size) {
                super(size);
            }
        
            /**
             * Creates a new ByteArrayInputStream that uses the internal byte array buffer 
             * of this ByteArrayInOutStream instance as its buffer array. The initial value 
             * of pos is set to zero and the initial value of count is the number of bytes 
             * that can be read from the byte array. The buffer array is not copied. This 
             * instance of ByteArrayInOutStream can not be used anymore after calling this
             * method.
             * @return the ByteArrayInputStream instance
             */
            public ByteArrayInputStream getInputStream() {
                // create new ByteArrayInputStream that respects the current count
                ByteArrayInputStream in = new ByteArrayInputStream(this.buf, 0, this.count);
        
                // set the buffer of the ByteArrayOutputStream 
                // to null so it can't be altered anymore
                this.buf = null;
        
                return in;
            }
        }
        

I put the thing on GitHub: https://github.com/nickrussler/ByteArrayInOutStream

【Discussion】:

  • What if the content does not fit into the buffer?
  • Then you should not be using a ByteArrayInputStream in the first place.
  • This solution keeps all the bytes in memory. For small files this is fine, but then you could also just use getBytes() on the ByteArrayOutputStream
  • If you mean toByteArray, that would cause the internal buffer to be copied, which would take twice as much memory as my approach. Edit: ah I see, for small files this works of course..
  • Waste of time. ByteArrayOutputStream has a writeTo method to transfer the contents to another output stream
【Solution 9】:

As input and output streams are just start and end points, the solution is to temporarily store the data in a byte array. So you must create an intermediate ByteArrayOutputStream, from which you create a byte[] that is used as input for a new ByteArrayInputStream.

        public void doTwoThingsWithStream(InputStream inStream, OutputStream outStream){ 
          //create temporary byte array output stream
          ByteArrayOutputStream baos = new ByteArrayOutputStream();
          doFirstThing(inStream, baos);
          //create input stream from baos
          InputStream isFromFirstData = new ByteArrayInputStream(baos.toByteArray()); 
          doSecondThing(isFromFirstData, outStream);
        }
        

Hope it helps.

【Discussion】:

【Solution 10】:

You will need an intermediate class that buffers. Each time InputStream.read(byte[]...) is called, the buffering class fills the passed-in byte array with the next chunk passed in from OutputStream.write(byte[]...). Since the chunk sizes may not be the same, the adapter class will need to store a certain amount until it has enough to fill the read buffer and/or be able to store any buffer overflow.

This article has a nice breakdown of a few different approaches to this problem:

        http://blog.ostermiller.org/convert-java-outputstream-inputstream

【Discussion】:

  • Thanks @mckamey, the circular-buffer-based approach is exactly what I needed!
【Solution 11】:

Old post, but it might help others. Use this way:

        OutputStream out = new ByteArrayOutputStream();
        ...
        out.write();
        ...
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(out.toString().getBytes()));
        

【Discussion】:

  • toString --> size problem
  • Moreover, calling toString().getBytes() on a stream does not return the contents of the stream.
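Both objections go away if the bytes are taken directly with toByteArray() instead of the toString() round-trip; a corrected sketch (it still buffers everything in memory):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class CorrectedDemo {
    // toByteArray() keeps the raw bytes intact; toString().getBytes() would corrupt binary data.
    static InputStream asInputStream(ByteArrayOutputStream out) {
        return new ByteArrayInputStream(out.toByteArray());
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(new byte[] {(byte) 0xFF, 0x00, 0x7F}); // binary data, not valid text

        InputStream in = asInputStream(out);
        System.out.println(in.read() + " " + in.read() + " " + in.read()); // prints: 255 0 127
    }
}
```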
【Solution 12】:

If you want to make an OutputStream from an InputStream, there is one basic problem: a method writing to an OutputStream blocks until it is done, so the result is only available once the writing method finishes. This has two consequences:

1. If you use only one thread, you need to wait until everything is written (so you need to store the stream's data in memory or on disk).
2. If you want to access the data before writing is finished, you need a second thread.

Variant 1 can be implemented using byte arrays or files. Variant 2 can be implemented using pipes (either directly or with extra abstraction, e.g. a RingBuffer or the Google lib mentioned in another comment).

Indeed, with standard Java there is no other way to solve the problem; each solution is an implementation of one of these.

There is one concept called "continuations" (see Wikipedia for details). In this case, basically it means:

  • there is a special output stream that expects a certain amount of data
  • if the amount is reached, the stream hands control over to its counterpart, which is a special input stream
  • the input stream makes the amount of data available until it is read; after that, it passes control back to the output stream

While some languages have this concept built in, for Java you need some "magic". For example, "commons-javaflow" from Apache implements such classes for Java. The drawback is that this requires some special bytecode modification at build time, so it would make sense to put all the stuff in an extra library with a custom build script.

【Discussion】:
