[Question Title]: Moving texture OpenGL ES 2.0
[Posted]: 2023-03-17 13:44:02
[Question]:

I'm trying to implement a sprite sheet with 8 columns and 8 rows in OpenGL ES 2.0.
I get the first frame to show up, but I can't figure out how to transform the texture matrix in OpenGL ES 2.0. The OpenGL ES 1.0 code I'm trying to port is:

        gl.glMatrixMode(GL10.GL_TEXTURE);
        gl.glLoadIdentity();
        gl.glPushMatrix();
        gl.glTranslatef(0.0f, 0.2f, 0f);
        gl.glPopMatrix();

These are the matrices I'm using at the moment:

/**
 * Store the model matrix. This matrix is used to move models from object space (where each model can be thought
 * of being located at the center of the universe) to world space.
 */
private float[] mModelMatrix = new float[16];

/**
 * Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
 * it positions things relative to our eye.
 */
private float[] mViewMatrix = new float[16];

/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];

/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];

/** 
 * Stores a copy of the model matrix specifically for the light position.
 */
private float[] mLightModelMatrix = new float[16];  

My vertex shader:

uniform mat4 u_MVPMatrix;       // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;        // A constant representing the combined model/view matrix.

attribute vec4 a_Position;      // Per-vertex position information we will pass in.                             
attribute vec3 a_Normal;        // Per-vertex normal information we will pass in.      
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.       

varying vec3 v_Position;        // This will be passed into the fragment shader.                            
varying vec3 v_Normal;          // This will be passed into the fragment shader.  
varying vec2 v_TexCoordinate;   // This will be passed into the fragment shader.            

// The entry point for our vertex shader.  
void main()                                                     
{                                                         
    // Transform the vertex into eye space.     
    v_Position = vec3(u_MVMatrix * a_Position);                 

    // Pass through the texture coordinate.
    v_TexCoordinate = a_TexCoordinate;                                      

    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;                               
}

My fragment shader:

precision mediump float;        // Set the default precision to medium. We don't need as high of a
                                // precision in the fragment shader.
uniform vec3 u_LightPos;        // The position of the light in eye space.
uniform sampler2D u_Texture;    // The input texture.

varying vec3 v_Position;        // Interpolated position for this fragment.
varying vec3 v_Normal;          // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate;   // Interpolated texture coordinate per fragment.

// The entry point for our fragment shader.
void main()
{
    // Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);

    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);

    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);

    // Add attenuation.
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance)));

    // Add ambient lighting.
    diffuse = diffuse + 0.7;

    // Multiply the color by the diffuse illumination level and texture value to get final output color.
    gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate));
}

[Comments]:

    Tags: android opengl-es textures


    [Answer 1]:

    You need to perform the texture-coordinate transformation yourself, and you can do it in one of four places:

    • Apply the transformation to your original model data.
    • Apply the transformation on the CPU (not recommended unless you have a good reason, since this is what vertex shaders are for).
    • Apply the transformation in the vertex shader (recommended).
    • Apply the transformation in the fragment shader.

    If you want to apply a translation to the texture coordinates, the most flexible approach is to create a translation matrix with a math library and pass the new matrix to the vertex shader as a uniform (the same way you pass mMVPMatrix and mLightModelMatrix). You can then multiply the texture coordinate by the translation matrix in the vertex shader and output the result as a varying.
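A minimal sketch of that approach in plain Java (the class and helper names are hypothetical; the matrix layout is the same column-major order that `android.opengl.Matrix` and `glUniformMatrix4fv` expect, and the Android calls are shown only as comments since they need a GL context):

```java
public class TexMatrixDemo {
    /** Builds a 4x4 column-major translation matrix for (tx, ty). */
    static float[] translationMatrix(float tx, float ty) {
        float[] m = new float[16];
        for (int i = 0; i < 4; i++) m[i * 4 + i] = 1.0f; // identity diagonal
        m[12] = tx; // column 3 holds the translation
        m[13] = ty;
        return m;
    }

    /** Applies the matrix to a (u, v) coordinate, mirroring the shader math on vec4(u, v, 0, 1). */
    static float[] transform(float[] m, float u, float v) {
        float outU = m[0] * u + m[4] * v + m[12];
        float outV = m[1] * u + m[5] * v + m[13];
        return new float[] { outU, outV };
    }

    public static void main(String[] args) {
        float[] texMatrix = translationMatrix(0.0f, 0.2f);
        float[] uv = transform(texMatrix, 0.5f, 0.5f);
        System.out.println(uv[0] + ", " + uv[1]); // v is shifted up by 0.2

        // On Android you would instead build and upload the matrix like this
        // (uTexMatrixHandle is a hypothetical uniform location):
        // Matrix.setIdentityM(texMatrix, 0);
        // Matrix.translateM(texMatrix, 0, 0.0f, 0.2f, 0.0f);
        // GLES20.glUniformMatrix4fv(uTexMatrixHandle, 1, false, texMatrix, 0);
    }
}
```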

    Vertex shader:

    texture_coordinate_varying = texture_matrix_uniform * texture_coordinate_attribute;
    

    Fragment shader:

    gl_FragColor = texture2D(texture_sampler, texture_coordinate_varying);
    

    Please note: your GLES 1.0 code doesn't actually perform the translation, because you surround it with a push/pop pair (the pop restores the identity matrix you just loaded).
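Applied to the shaders in the question, the change might look like this (a sketch; `u_TexMatrix` is an assumed uniform name, and since `a_TexCoordinate` is a `vec2` it has to be widened to a `vec4` before the multiply):

```glsl
// Vertex shader: declare the texture-matrix uniform alongside the others.
uniform mat4 u_TexMatrix;   // Assumed name; upload it from Java with glUniformMatrix4fv.

// ...and replace the pass-through line in main():
v_TexCoordinate = (u_TexMatrix * vec4(a_TexCoordinate, 0.0, 1.0)).xy;
```

The fragment shader needs no change: it already samples with `v_TexCoordinate`, so the translated coordinates flow through automatically.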

    [Discussion]:

    • I don't believe your recommendation is right for this case: doing the transformation on the CPU here just means computing 8 floats with a very simple operation and passing those 8 floats to the GPU. Doing it on the GPU means using a more complex (though still simple) system on the CPU to build a 16-float matrix, sending 16 floats to the GPU, and adding an extra matrix multiplication to the vertex shader. I'm not saying this is never the way to go — for transforming a large object it absolutely is — just not for this case.
    • Thanks for the reply, OFE, it really helps. I see what you mean; the problem is I'm new to shaders and I don't know how to translate the solution into mine. I've updated the question with my shaders — if you don't mind, could you explain a bit more?
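For reference, the CPU-side alternative described in the first comment can be sketched in plain Java (class and method names are hypothetical): for an 8x8 sheet, each frame occupies a 1/8 x 1/8 cell, so per frame you only need a (u, v) offset to add to the base texture coordinates (or to upload as a `vec2` uniform instead of a full matrix):

```java
public class SpriteSheet {
    static final int COLS = 8, ROWS = 8;

    /** Returns the {u, v} offset of the top-left corner of frame 0..63. */
    static float[] frameOffset(int frame) {
        int col = frame % COLS;        // walk left to right...
        int row = frame / COLS;        // ...then top to bottom
        return new float[] { col / (float) COLS, row / (float) ROWS };
    }

    public static void main(String[] args) {
        // Frame 9 sits in column 1, row 1 of the sheet.
        float[] off = frameOffset(9);
        System.out.println(off[0] + ", " + off[1]); // 0.125, 0.125
    }
}
```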