【Question Title】: How to speed up rendering with OpenGL (ES) 2 on Android
【Posted】: 2013-07-07 21:13:12
【Question】:

I have developed a map for Android using OpenGL ES. It displays my map fine, and I have just added touch-event handling so I can pan and fling it, which also works well.

However, there is a lag of about one second. Obviously, I want panning the image to be as smooth as possible.

I have a fair amount of vector data to display, but there must still be ways to make the interaction smoother. I have 17,000 polygons (land parcels) and about 1,500 lines (road centerlines), both of which are preloaded into lists of FloatBuffers when the application starts. When I open my map activity, the renderer iterates over these lists, as you will see in the code below.

I would really appreciate some advice on how to speed this up.

(Also, please ignore the scale detector and any rotation code; they don't work, and I'm focusing only on panning the map for now.)

package com.ANDRRA1.utilities;

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.GestureDetector;
import android.view.ScaleGestureDetector;
import android.view.animation.DecelerateInterpolator;
import android.view.animation.Interpolator;

public class CustomGLView extends GLSurfaceView {

    public vboCustomGLRenderer mGLRenderer;

    public CustomGLView(Context context){
        super(context);
    }

    public CustomGLView(Context context, AttributeSet attrs) 
    {
        super(context, attrs);  
    }

    // Hides superclass method.
    public void setRenderer(vboCustomGLRenderer renderer) 
    {
        mGLRenderer = renderer;
        super.setRenderer(renderer);

        super.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    }

    private static final int INVALID_POINTER_ID = -1;

    private float mPosX;
    private float mPosY;

    private float mLastTouchX;
    private float mLastTouchY;
    private float mLastGestureX;
    private float mLastGestureY;
    private int mActivePointerId = INVALID_POINTER_ID;
    private int mActivePointerId2 = INVALID_POINTER_ID;
    float oL1X1, oL1Y1, oL1X2, oL1Y2;

    private ScaleGestureDetector mScaleDetector = new ScaleGestureDetector(getContext(), new ScaleListener());
    private GestureDetector mGestureDetector = new GestureDetector(getContext(), new GestureListener());

    private float mScaleFactor = 1.f;

    //The following variables control the fling gesture
    private Interpolator animateInterpolator;
    private long startTime;
    private long endTime;
    private float totalAnimDx;
    private float totalAnimDy;
    private float lastAnimDx;
    private float lastAnimDy;

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // Let the ScaleGestureDetector inspect all events.
        mScaleDetector.onTouchEvent(ev);
        mGestureDetector.onTouchEvent(ev);

        final int action = ev.getAction();
        switch (action & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN: {

                if (!mScaleDetector.isInProgress()) {
                    final float x = ev.getX();
                    final float y = ev.getY();

                    mLastTouchX = x;
                    mLastTouchY = y;
                    mActivePointerId = ev.getPointerId(0);
                }
                break;
            }
            case MotionEvent.ACTION_POINTER_DOWN: {
                if (mScaleDetector.isInProgress()) {
                    mActivePointerId2 = ev.getPointerId(1);

                    mLastGestureX = mScaleDetector.getFocusX();
                    mLastGestureY = mScaleDetector.getFocusY();

                    oL1X1 = ev.getX(ev.findPointerIndex(mActivePointerId));
                    oL1Y1 = ev.getY(ev.findPointerIndex(mActivePointerId));
                    oL1X2 = ev.getX(ev.findPointerIndex(mActivePointerId2));
                    oL1Y2 = ev.getY(ev.findPointerIndex(mActivePointerId2));
                }
                break;
            }

            case MotionEvent.ACTION_MOVE: {

                // Only move if the ScaleGestureDetector isn't processing a gesture.
                if (!mScaleDetector.isInProgress()) {
                    final int pointerIndex = ev.findPointerIndex(mActivePointerId);
                    final float x = ev.getX(pointerIndex);
                    final float y = ev.getY(pointerIndex);

                    final float dx = x - mLastTouchX;
                    final float dy = y - mLastTouchY;

                    mPosX += dx;
                    mPosY += dy;

                    mGLRenderer.setEye(dx, dy);
                    requestRender();

                    mLastTouchX = x;
                    mLastTouchY = y;
                }
                else{
                    final float gx = mScaleDetector.getFocusX();
                    final float gy = mScaleDetector.getFocusY();

                    final float gdx = gx - mLastGestureX;
                    final float gdy = gy - mLastGestureY;

                    mPosX += gdx;
                    mPosY += gdy;

                    mLastGestureX = gx;
                    mLastGestureY = gy;
                }

                break;
            }

            case MotionEvent.ACTION_UP: {
                mActivePointerId = INVALID_POINTER_ID;

                break;
            }
            case MotionEvent.ACTION_CANCEL: {
                mActivePointerId = INVALID_POINTER_ID;
                break;
            }
            case MotionEvent.ACTION_POINTER_UP: {

                final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK) 
                        >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
                final int pointerId = ev.getPointerId(pointerIndex);
                if (pointerId == mActivePointerId) {
                    // This was our active pointer going up. Choose a new
                    // active pointer and adjust accordingly.
                    final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                    mLastTouchX = ev.getX(newPointerIndex);
                    mLastTouchY = ev.getY(newPointerIndex);
                    mActivePointerId = ev.getPointerId(newPointerIndex);
                }
                else{
                    final int tempPointerIndex = ev.findPointerIndex(mActivePointerId);
                    mLastTouchX = ev.getX(tempPointerIndex);
                    mLastTouchY = ev.getY(tempPointerIndex);
                }

                break;
            }
        }

        return true;
    }

    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScale(ScaleGestureDetector detector) {
            mScaleFactor *= detector.getScaleFactor();

            // Don't let the object get too small or too large.
            mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 10000.0f));

            //invalidate();
            return true;
        }
    }

    private class GestureListener extends GestureDetector.SimpleOnGestureListener {
        @Override
        public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {

            if (e1 == null || e2 == null){
                return false;
            }
            final float distanceTimeFactor = 0.4f;
            final float totalDx = (distanceTimeFactor * velocityX/2);
            final float totalDy = (distanceTimeFactor * velocityY/2);

            onAnimateMove(totalDx, totalDy, (long) (1000 * distanceTimeFactor));
            return true;
        }
    }

    public void onAnimateMove(float dx, float dy, long duration) {
        animateInterpolator = new DecelerateInterpolator();
        startTime = System.currentTimeMillis();
        endTime = startTime + duration;
        totalAnimDx = dx;
        totalAnimDy = dy;
        lastAnimDx = 0;
        lastAnimDy = 0;

        post(new Runnable() {
            @Override
            public void run() {
                onAnimateStep();
            }
        });
    }

    private void onAnimateStep() {
        long curTime = System.currentTimeMillis();
        float percentTime = (float) (curTime - startTime) / (float) (endTime - startTime);
        float percentDistance = animateInterpolator.getInterpolation(percentTime);
        float curDx = percentDistance * totalAnimDx;
        float curDy = percentDistance * totalAnimDy;

        float diffCurDx = curDx - lastAnimDx;
        float diffCurDy = curDy - lastAnimDy;
        lastAnimDx = curDx;
        lastAnimDy = curDy;

        doAnimation(diffCurDx, diffCurDy);

        if (percentTime < 1.0f) {
            post(new Runnable() {
                @Override
                public void run() {
                    onAnimateStep();
                }
            });
        }
    }

    public void doAnimation(float diffDx, float diffDy) {
        mPosX += diffDx;
        mPosY += diffDy;

        mGLRenderer.setEye(diffDx, diffDy);
        requestRender();
    }

    public float angleBetween2Lines(float L1X1, float L1Y1, float L1X2, float L1Y2, float L2X1, float L2Y1, float L2X2, float L2Y2)
    {
        float angle1 = (float) Math.atan2(L1Y1 - L1Y2, L1X1 - L1X2);
        float angle2 = (float) Math.atan2(L2Y1 - L2Y2, L2X1 - L2X2);

        float angleDelta = findAngleDelta( (float)Math.toDegrees(angle1), (float)Math.toDegrees(angle2));
        return -angleDelta;
    }

    private float findAngleDelta( float angle1, float angle2 )
    {
        return angle1 - angle2;
    }
}

.

package com.ANDRRA1.utilities;

import java.nio.FloatBuffer;
import java.util.ListIterator;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;

public class vboCustomGLRenderer  implements GLSurfaceView.Renderer {

    /**
     * Store the model matrix. This matrix is used to move models from object space (where each model can be thought
     * of being located at the center of the universe) to world space.
     */
    private float[] mModelMatrix = new float[16];

    /**
     * Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
     * it positions things relative to our eye.
     */
    private float[] mViewMatrix = new float[16];

    /** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
    private float[] mProjectionMatrix = new float[16];

    /** Allocate storage for the final combined matrix. This will be passed into the shader program. */
    private float[] mMVPMatrix = new float[16];

    /** This will be used to pass in the transformation matrix. */
    private int mMVPMatrixHandle;

    /** This will be used to pass in model position information. */
    private int mPositionHandle;

    /** This will be used to pass in model color information. */
    private int mColorUniformLocation;

    /** How many bytes per float. */
    private final int mBytesPerFloat = 4;   

    /** Offset of the position data. */
    private final int mPositionOffset = 0;

    /** Size of the position data in elements. */
    private final int mPositionDataSize = 3;

    /** Stride of the position data, in bytes. */
    private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;

    // geometry types
    private final byte wkbPoint = 1;
    private final byte wkbLineString = 2;
    private final byte wkbPolygon = 3;
    //private final byte wkbMultiPoint = 4;
    //private final byte wkbMultiLineString = 5;
    //private final byte wkbMultiPolygon = 6;
    //private final byte wkbGeometryCollection = 7;

    // Big Endian
    final int wkbXDR = 0;
    // Little Endian
    final int wkbNDR = 1;


    float count = 0;

    // Position the eye behind the origin.
    public volatile float eyeX = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2);
    public volatile float eyeY = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2);

    // Position the eye behind the origin.
    //final float eyeZ = 1.5f;
    public volatile float eyeZ = 1.5f;

    // We are looking toward the distance
    public volatile float lookX = eyeX;
    public volatile float lookY = eyeY;
    public volatile float lookZ = 0.0f;

    // Set our up vector. This is where our head would be pointing were we holding the camera.
    public volatile float upX = 0.0f;
    public volatile float upY = 1.0f;
    public volatile float upZ = 0.0f;


    public vboCustomGLRenderer() {
    }

    public void setEye(float x, float y){

        eyeX -= (x/screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (y/screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the camera position (View matrix)
        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {


        Thread.currentThread().setPriority(Thread.MIN_PRIORITY);

        // Set the background frame color
        //White
        GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

        // Set the view matrix. This matrix can be said to represent the camera position.
        // NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
        // view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
        Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);

        final String vertexShader =
            "uniform mat4 u_MVPMatrix;      \n"     // A constant representing the combined model/view/projection matrix.

          + "attribute vec4 a_Position;     \n"     // Per-vertex position information we will pass in.
          + "attribute vec4 a_Color;        \n"     // Per-vertex color information we will pass in.              

          + "varying vec4 v_Color;          \n"     // This will be passed into the fragment shader.

          + "void main()                    \n"     // The entry point for our vertex shader.
          + "{                              \n"
          + "   v_Color = a_Color;          \n"     // Pass the color through to the fragment shader. 
                                                    // It will be interpolated across the triangle.
          + "   gl_Position = u_MVPMatrix   \n"     // gl_Position is a special variable used to store the final position.
          + "               * a_Position;   \n"     // Multiply the vertex by the matrix to get the final point in                                                                   
          + "}                              \n";    // normalized screen coordinates.

        final String fragmentShader =
                "precision mediump float;       \n"     // Set the default precision to medium. We don't need as high of a 
                                                        // precision in the fragment shader.                
              + "uniform vec4 u_Color;          \n"     // This is the color from the vertex shader interpolated across the 
                                                        // triangle per fragment.             
              + "void main()                    \n"     // The entry point for our fragment shader.
              + "{                              \n"
              + "   gl_FragColor = u_Color;     \n"     // Pass the color directly through the pipeline.          
              + "}                              \n";                                                

        // Load in the vertex shader.
        int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

        if (vertexShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(vertexShaderHandle, vertexShader);

            // Compile the shader.
            GLES20.glCompileShader(vertexShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(vertexShaderHandle);
                vertexShaderHandle = 0;
            }
        }

        if (vertexShaderHandle == 0)
        {
            throw new RuntimeException("Error creating vertex shader.");
        }

        // Load in the fragment shader.
        int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);

        if (fragmentShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);

            // Compile the shader.
            GLES20.glCompileShader(fragmentShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(fragmentShaderHandle);
                fragmentShaderHandle = 0;
            }
        }

        if (fragmentShaderHandle == 0)
        {
            throw new RuntimeException("Error creating fragment shader.");
        }

        // Create a program object and store the handle to it.
        int programHandle = GLES20.glCreateProgram();

        if (programHandle != 0) 
        {
            // Bind the vertex shader to the program.
            GLES20.glAttachShader(programHandle, vertexShaderHandle);           

            // Bind the fragment shader to the program.
            GLES20.glAttachShader(programHandle, fragmentShaderHandle);

            // Bind attributes
            GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
            GLES20.glBindAttribLocation(programHandle, 1, "a_Color");

            // Link the two shaders together into a program.
            GLES20.glLinkProgram(programHandle);

            // Get the link status.
            final int[] linkStatus = new int[1];
            GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

            // If the link failed, delete the program.
            if (linkStatus[0] == 0) 
            {               
                GLES20.glDeleteProgram(programHandle);
                programHandle = 0;
            }
        }

        if (programHandle == 0)
        {
            throw new RuntimeException("Error creating program.");
        }

        // Set program handles. These will later be used to pass in values to the program.
        mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
        mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
        mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");

        // Tell OpenGL to use this program when rendering.
        GLES20.glUseProgram(programHandle);

    }

    static float mWidth = 0;
    static float mHeight = 0;
    static float mLeft = 0;
    static float mRight = 0;
    static float mTop = 0;
    static float mBottom = 0;
    static float mRatio = 0;
    float screen_width_height_ratio;
    float screen_height_width_ratio;
    final float near = 1.5f;
    final float far = 10.0f;

    double screen_vs_map_horz_ratio = 0;
    double screen_vs_map_vert_ratio = 0;

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {

        // Adjust the viewport based on geometry changes,
        // such as screen rotation
        // Set the OpenGL viewport to the same size as the surface.
        GLES20.glViewport(0, 0, width, height);
        //Log.d("","onSurfaceChanged");

        screen_width_height_ratio = (float) width / height;
        screen_height_width_ratio = (float) height / width;

        //Initialize
        if (mRatio == 0){
            mWidth = (float) width;
            mHeight = (float) height;

            //map height to width ratio
            float map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
            float map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
            float map_width_height_ratio = map_extents_width/map_extents_height;
            //float map_height_width_ratio = map_extents_height/map_extents_width;
            if (screen_width_height_ratio > map_width_height_ratio){
                mRight = (screen_width_height_ratio * map_extents_height)/2;
                mLeft = -mRight;
                mTop = map_extents_height/2;
                mBottom = -mTop;
            }
            else{
                mRight = map_extents_width/2;
                mLeft = -mRight;
                mTop = (screen_height_width_ratio * map_extents_width)/2;
                mBottom = -mTop;
            }

            mRatio = screen_width_height_ratio;
        }

        if (screen_width_height_ratio != mRatio){
            final float wRatio = width/mWidth;
            final float oldWidth = mRight - mLeft;
            final float newWidth = wRatio * oldWidth;
            final float widthDiff = (newWidth - oldWidth)/2;
            mLeft = mLeft - widthDiff;
            mRight = mRight + widthDiff;

            final float hRatio = height/mHeight;
            final float oldHeight = mTop - mBottom;
            final float newHeight = hRatio * oldHeight;
            final float heightDiff = (newHeight - oldHeight)/2;
            mBottom = mBottom - heightDiff;
            mTop = mTop + heightDiff;

            mWidth = (float) width;
            mHeight = (float) height;

            mRatio = screen_width_height_ratio;
        }

        screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
        screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

        Matrix.frustumM(mProjectionMatrix, 0, mLeft, mRight, mBottom, mTop, near, far);
    }

    @Override
    public void onDrawFrame(GL10 unused) {

        GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

        //The following lists hold the vector data in FloatBuffers, preloaded when the application starts
        ListIterator<mapLayer> orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
        while (orgNonAssetCatLayersList_it.hasNext()) {
            mapLayer MapLayer = orgNonAssetCatLayersList_it.next();

            ListIterator<FloatBuffer> mapLayerObjectList_it = MapLayer.objFloatBuffer.listIterator();
            ListIterator<Byte> mapLayerObjectTypeList_it = MapLayer.objTypeArray.listIterator();
            while (mapLayerObjectTypeList_it.hasNext()) {

                switch (mapLayerObjectTypeList_it.next()) {
                    case wkbPoint:
                        break;
                    case wkbLineString:
                        Matrix.setIdentityM(mModelMatrix, 0);
                        //Matrix.rotateM(mModelMatrix, 0, 0, 0.0f, 0.0f, 1.0f);
                        drawLineString(mapLayerObjectList_it.next(), MapLayer.lineStringObjColor);
                        break;
                    case wkbPolygon:
                        Matrix.setIdentityM(mModelMatrix, 0);
                        //Matrix.rotateM(mModelMatrix, 0, 0, 0.0f, 0.0f, 1.0f);
                        drawPolygon(mapLayerObjectList_it.next(), MapLayer.polygonObjColor);
                        break;
                }
            }
        }
    }

    private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

        GLES20.glLineWidth(2.0f);
        GLES20.glDrawArrays(GLES20.GL_LINE_STRIP, 0, geometryBuffer.capacity()/mPositionDataSize);
    }

    private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);

        GLES20.glLineWidth(1.0f);
        GLES20.glDrawArrays(GLES20.GL_LINE_LOOP, 0, geometryBuffer.capacity()/mPositionDataSize);
    }
}

【Comments on the question】:

  • Hi Hank. I've read your question and looked at your attached image, and I'm very interested. I'm currently working on rendering a small office map with two floors. From what I can see, what you've attached is exactly what I need, since as I understand it your map can zoom and rotate. I'm just looking for a place to start: any websites, books, or techniques on map rendering. If you have any code snippets and could point me in the right direction, I'd be very grateful. Thanks in advance.

Tags: java android opengl-es rendering


【Solution 1】:

The other answers here are already very good and show where to look and what to improve. I suspect the slowdown comes from calling drawPolygon and drawLineString thousands of times per frame (since you have thousands of polygons and lines), with each of those methods making several OpenGL calls of its own. You really want to batch this work so that all polygons and all lines are each drawn in a single draw call.

In my experience this is hard to time accurately: OpenGL buffers its calls, and even the Android tracer gives inaccurate results. What you can do is remove or change code between runs, time the whole draw loop, and see how things change.

Try removing Thread.currentThread().setPriority(Thread.MIN_PRIORITY);, and restructure the app to put your data into vertex buffer objects bound with GL_STATIC_DRAW. Draw all the lines with one draw call. To avoid state changes that interrupt batched draw calls, you can pass the color as an attribute instead of a uniform. You can also compute and upload the matrix uniform once per frame instead of once per line.
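As a rough sketch of that batching idea (a hypothetical helper, not part of the original code): interleaving position and color per vertex lets one GL_STATIC_DRAW upload and a single draw call replace the per-feature loops, with color travelling as a vertex attribute instead of a per-feature uniform.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.List;

public class LineBatcher {
    // Floats per interleaved vertex: 3 for position + 3 for color.
    static final int FLOATS_PER_VERTEX = 6;

    /**
     * Packs many line-string buffers (x,y,z per vertex) into one interleaved
     * position+color buffer, so a whole layer can be uploaded once with
     * glBufferData(GL_ARRAY_BUFFER, ..., GL_STATIC_DRAW) and drawn with a
     * single draw call. Putting the color in the vertex data removes the
     * per-feature glUniform4f state change.
     */
    public static FloatBuffer interleave(List<FloatBuffer> lines, float[] rgb) {
        int totalVerts = 0;
        for (FloatBuffer fb : lines) totalVerts += fb.capacity() / 3;

        FloatBuffer out = ByteBuffer
                .allocateDirect(totalVerts * FLOATS_PER_VERTEX * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        for (FloatBuffer fb : lines) {
            fb.position(0);
            while (fb.hasRemaining()) {
                out.put(fb.get()).put(fb.get()).put(fb.get()); // x, y, z
                out.put(rgb[0]).put(rgb[1]).put(rgb[2]);       // r, g, b
            }
        }
        out.position(0);
        // Upload once (e.g. in onSurfaceCreated), then per frame roughly:
        //   GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
        //   GLES20.glVertexAttribPointer(aPosition, 3, GL_FLOAT, false, 24, 0);
        //   GLES20.glVertexAttribPointer(aColor,    3, GL_FLOAT, false, 24, 12);
        //   GLES20.glDrawArrays(GLES20.GL_LINES, 0, totalVerts);
        return out;
    }
}
```

The 24-byte stride and the 12-byte color offset follow from the six-float layout; the GL calls in the comment assume a current GLES 2.0 context and shader attribute handles, which this sketch doesn't set up.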

【Discussion】:

  • GL_STATIC_DRAW is fine as long as you don't want to change the VBOs after creating them.
  • I've put all my FloatBuffers into two big single FloatBuffers. Now I'm wondering how to adapt my drawPolygon function for the batches. Also, where is the best place to call GLES20.glBufferData(target, size, data, usage)?
  • I should be a bit more detailed: the polygons and lines are drawing, but they now come out as one continuous polygon and one continuous line, which looks a bit messy. However, it is now very fast and panning the map is smooth; whether that changes once I split them up remains to be seen. Do I need to send GLES a set of start positions for each individual buffer inside the big buffer, or something like that?
  • If you only want to set the data once, you can call glBufferData from onSurfaceCreated. Could you find a way to generate the line data so that you draw individual line segments instead of line loops/strips? That would let you batch all the lines too. It may increase the size of your data, but saving thousands of draw calls is probably worth it. If you use an index buffer, the size increase is limited to the indices, which take much less space than the vertices.
  • To keep using different colors, add the color as a vertex attribute instead of a uniform. You can also upload the uniform matrix just once before you start drawing; there's no need to compute it for every individual line/polygon unless you want to move them relative to each other.
【Solution 2】:

Rather than trying to optimize rendering efficiency, I suggest you focus on limiting how much you draw based on the zoom level and viewport. If you're getting decent response time with as many as 17,000 polygons, you probably aren't making too many inefficient rendering calls.

Look at the image you posted. If you're rendering 17,000 polygons and 1,500 lines there, most of that detail is wasted, because we can't see that level of detail, right? I certainly can't make out 17,000 polygons.

Instead, keep the full detail loaded, and write code that limits the detail based on the zoom level. Unsurprisingly, this approach is called a level of detail algorithm. If you've ever done much with mipmaps, it's based on the same principle.

I would precompute level-of-detail data for all the zoom levels you want, and reference the cached data based on the current zoom level. When the user is between your discrete zoom levels, just use the nearest one.

When you need higher detail at closer zoom levels, you can stay fast by using a spatial partitioning algorithm to cull the lines and polygons in your level-of-detail data that don't need to be rendered.

Let me know if you need any clarification. This stuff is easy to talk about but tricky to code. Good luck!

Edit:

One LoD implementation is to compute the polygon and line positions after the matrix scaling, then throw away any points that aren't far enough apart. I would just cast their float positions to ints and see how that looks. Do this for several zoom levels, store the results in an array, then round whatever zoom level you're at to pick the nearest cached LoD data.
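A minimal sketch of that decimation pass (hypothetical helper names; assumes flat {x0,y0,x1,y1,...} point arrays): snap each point to an integer grid at the target zoom's scale, and keep only the first point that lands in each grid cell.

```java
import java.util.ArrayList;
import java.util.List;

public class LodDecimator {
    /**
     * Collapses consecutive polyline points that fall into the same integer
     * grid cell at the given scale. At coarse zoom levels (small scale) many
     * consecutive points snap to one cell and are dropped; at fine levels
     * almost all points survive. Run this once per zoom level and cache the
     * results; at render time pick the nearest cached level.
     */
    public static float[] decimate(float[] pts, float scale) {
        List<Float> kept = new ArrayList<>();
        long lastCell = Long.MIN_VALUE;
        for (int i = 0; i + 1 < pts.length; i += 2) {
            int gx = (int) Math.floor(pts[i] * scale);
            int gy = (int) Math.floor(pts[i + 1] * scale);
            // Pack the two grid coordinates into one comparable cell id.
            long cell = (((long) gx) << 32) ^ (gy & 0xffffffffL);
            if (cell != lastCell) {   // first point in this cell: keep it
                kept.add(pts[i]);
                kept.add(pts[i + 1]);
                lastCell = cell;
            }
        }
        float[] out = new float[kept.size()];
        for (int i = 0; i < out.length; i++) out[i] = kept.get(i);
        return out;
    }
}
```

The comparison against only the previous cell keeps the pass O(n) and preserves point order, which matters for line strips.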

【Discussion】:

  • Lol, I can't believe you didn't count them. I agree that a lot of the detail is wasted. Level of detail (LOD) seems like the best place to start, and I'll look at culling once I get to zooming. By the way, I haven't done anything with mipmaps yet. Finding a suitable algorithm seems like the next step; any ideas on where to find an existing one? I'm thinking of giving each FloatBuffer a kind of zoom index and sorting on it, which might be one approach, but how do you decide which feature takes priority over another?
  • I tried searching for existing LoD algorithms for lines, but found very little. I've edited my answer to include a possible solution. It should be easy to implement.
【Solution 3】:

A few things jump out at me.

1) Don't create objects in your drawing routines, such as onDrawFrame. Iterators such as

    ListIterator<mapLayer> orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();

create objects, and object creation inside a drawing routine hurts performance.

2) Reduce OpenGL calls wherever possible. Every OpenGL call from Java has to cross the JNI boundary, so it pays to put everything into a few big buffers and avoid changing OpenGL state. I would try to organize the data into as few buffers as possible: one set up to draw the lines, another to draw the polygons.

You may also want to consider rendering only part of the data at different zoom levels. Others may have better ideas; if you look around SO or online, I'm sure you'll find them.

3) Always measure your performance to see where the real problem is. Android provides several tools (Traceview, Systrace, OpenGL ES Tracer).

See also the general Android performance tips: http://developer.android.com/training/articles/perf-tips.html

【Discussion】:

  • Putting a group of polygons into one buffer sounds interesting. How does it know when it has finished one polygon and started another?
  • I hadn't realized you were using LINE_LOOP/LINE_STRIP; I was thinking of drawing multiple lines with degenerate triangles in a TRIANGLE_STRIP. See this question about line width: stackoverflow.com/questions/14514543/…
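The idea of drawing individual segments so that many strips batch into one call can be sketched as pure index generation (a hypothetical helper; using short indices keeps the size cost small, as suggested above):

```java
import java.util.ArrayList;
import java.util.List;

public class LineIndexer {
    /**
     * Builds one GL_LINES index list for many line strips that have been
     * packed back-to-back into a single vertex buffer. Each strip of n
     * vertices becomes n-1 segments (2*(n-1) indices), and no segment is
     * emitted across strip boundaries, so a single
     * glDrawElements(GL_LINES, ...) call draws every strip without
     * visually joining them into one continuous line.
     */
    public static short[] stripsToLines(int[] stripLengths) {
        List<Short> idx = new ArrayList<>();
        int base = 0;
        for (int len : stripLengths) {
            for (int v = 0; v < len - 1; v++) {
                idx.add((short) (base + v));
                idx.add((short) (base + v + 1));
            }
            base += len;
        }
        short[] out = new short[idx.size()];
        for (int i = 0; i < out.length; i++) out[i] = idx.get(i);
        return out;
    }
}
```

The same trick works for polygon outlines drawn with LINE_LOOP: emit one extra closing segment per feature instead of breaking the batch.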
【Solution 4】:

Nobody has mentioned FBOs?? I mean, LOD would be a good approach, but in some cases you can use FBOs. It depends on a lot of things, but you should consider them!

Instead of drawing the scene every frame, you can render the whole scene (or part of it) into a framebuffer object and display that image. This reduces the polygon count to just a few (ideally two triangles, i.e. one quad).

A new problem is how to handle panning/zooming, because you need to re-render the FBO on the fly. You can draw a blurry image while the FBO is being prepared, or take a more advanced approach with some preloading, e.g. divide the map into squares and preload nine of them (the center square and its 8 neighbors), much like Google Maps loads its map data.
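The nine-tile preloading scheme boils down to simple tile arithmetic (a hypothetical sketch; tile coordinates are just the world position divided by the tile edge length):

```java
import java.util.ArrayList;
import java.util.List;

public class TileGrid {
    /**
     * Given the view center in world coordinates, returns the tile
     * containing it plus its 8 neighbors as {col,row} pairs. Rendering each
     * of these into its own cached FBO texture means a pan only ever
     * reveals tiles that are already drawn, while the newly exposed edge
     * row/column can be re-rendered offscreen.
     */
    public static List<int[]> tilesToPreload(float centerX, float centerY, float tileSize) {
        int col = (int) Math.floor(centerX / tileSize);
        int row = (int) Math.floor(centerY / tileSize);
        List<int[]> tiles = new ArrayList<>();
        for (int dc = -1; dc <= 1; dc++) {
            for (int dr = -1; dr <= 1; dr++) {
                tiles.add(new int[]{col + dc, row + dr});
            }
        }
        return tiles;
    }
}
```

A cache keyed on {col,row,zoom} then decides which of these nine already have a texture and which still need an offscreen render pass.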

You also need to keep an eye on memory consumption; you can't just render every combination into an FBO.

I'll say it again: FBOs are not a standalone solution. Just keep them in mind and see whether you can use them somewhere!

【Discussion】:

【Solution 5】:

Are you running this on an emulator or on a real phone?

Can you add some code to measure the render time of your functions?

Try to find out why it takes this long.

【Discussion】:

  • A phone; each full render takes about 1000 ms.
  • Yes, but how much does each step cost? How long do onDrawFrame, drawPolygon, and drawLineString each take?