Friday, October 14, 2011

Stereoscopic 3D on Android ( OpenGL ES )

This post walks through a simple implementation of stereo-view rendering on Android using OpenGL ES; the result is shown below.


[Screenshot: execution result]

1. Introduction


        A previous post implemented stereo view with OpenGL on x86; the concept is the same, this time implemented on Android.
        Link: 
        Stereoscopic 3D in OpenGL
            http://arkkk.blogspot.com/2011/09/stereoscopic-3d-in-opengl.html


        Although this post says it uses OpenGL ES, strictly speaking it is not used directly, so let me clarify. There are two common ways to develop 3D software with OpenGL ES on Android. One is to build on the wrapper classes packaged in the official Android SDK. The other is to skip the SDK version and, inside the Android source tree, write against the platform's native OpenGL ES, compile an executable or a library, and then either run the executable directly or link the library through JNI. Neither approach is particularly difficult, but for performance the native version is generally the recommended way to write the program. Since this post is about the concept rather than how to use OpenGL ES, the Android SDK version is used for convenience.


2. Implementation


        First, the basics of structuring an Android SDK application are easy to find elsewhere, so they are not covered here. The skeleton code is as follows.




 Basic Android application skeleton
package com.GLDemo;

import android.app.Activity;
import android.os.Bundle;

public class GLDemo extends Activity 
{    
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) 
    {
        super.onCreate(savedInstanceState);
    }

    @Override
    protected void onPause() 
    {
        super.onPause();   
    }

    @Override
    protected void onResume() 
    {
        super.onResume();

    }
}


        With the skeleton in place, the first step is to detect whether the system's OpenGL ES support reaches version 2.0. If 2.0-only features such as shaders are used, the version can be checked before executing them.


 Checking the OpenGL ES version
    private boolean detectOpenGLES20() 
    {
        ActivityManager am =
            (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
        ConfigurationInfo info = am.getDeviceConfigurationInfo();
        return (info.reqGlEsVersion >= 0x20000); //0x20000 encodes version 2.0
    }


        With the preparation done, the next step is to create a GLSurfaceView and a Renderer for OpenGL ES drawing. Briefly: GLSurfaceView is an implementation of Android's SurfaceView that provides a View for OpenGL ES rendering to draw into, while Renderer is an interface that handles the OpenGL ES drawing of each frame. They can be implemented as two separate classes or handled in a single class; here I handle both in the same class.


The View used for OpenGL ES rendering
package com.GLDemo;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.opengl.GLSurfaceView.Renderer;
import android.view.MotionEvent;


//switch perspective and stereo-view by touch
public class GLStereoView extends GLSurfaceView implements Renderer
{
    private Context context;    
    
    public GLStereoView(Context context)
    {
        super(context);
        //if the Renderer is a separate class, pass that instance to setRenderer instead
        this.setRenderer(this);
    }
    
    public void onDrawFrame(GL10 gl)
    {
        
            
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) 
    {
                
    }

    public void onSurfaceCreated(GL10 gl, EGLConfig config) 
    {                
        
        
    }        
    
    public boolean onTouchEvent(final MotionEvent event)
    {        
        
        return true;        
    }    
}


        The class GLStereoView contains several methods that must be implemented yourself; the following sections walk through their purpose and implementation step by step.




.OpenGL ES rendering


        First, before drawing, the whole scene's data must be prepared; at the very least, 3D vertex data is needed before anything can be drawn. Briefly, my approach is as follows. One thing to note is that the Java-level OpenGL ES provided by the Android SDK takes all its data through Java Buffers; in other words, whatever the data is, it must be copied once at the Java level, which affects performance to some degree. The code below uses a cube as the example.
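As an aside, the Java-to-direct-buffer copy described above can be isolated into a small plain-Java helper (DirectBufferDemo and toDirectFloatBuffer are hypothetical names; nothing Android-specific is needed):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class DirectBufferDemo {
    //Copy a Java float[] into a direct, native-order buffer of the kind
    //GL10.glVertexPointer / glTexCoordPointer consume. This copy is the
    //per-array cost mentioned above.
    public static FloatBuffer toDirectFloatBuffer(float[] data) {
        ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4); //4 bytes per float
        bb.order(ByteOrder.nativeOrder()); //GL expects native byte order
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(data);
        fb.position(0); //rewind so GL reads from the start
        return fb;
    }
}
```

The Cube class below repeats this same pattern inline for its vertex and texture-coordinate arrays.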


Building the 3D cube data
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.opengles.GL10;

public class Cube 
{
    private float SIZE = 1.0F;
    private float[] vertices;
    private float[] texture;
    private byte[] indices;
    private FloatBuffer vertexBuffer;
    private FloatBuffer textureBuffer;
    private ByteBuffer indexBuffer;
    /**
     * The Cube constructor.
     *
     * Initiate the buffers.
     */
    public Cube(float size)
    {
        SIZE = size;
        vertices = new float[]
                {
                // Vertices according to faces
                -SIZE, -SIZE, SIZE, //v0
                SIZE, -SIZE, SIZE,     //v1
                -SIZE, SIZE, SIZE,     //v2
                SIZE, SIZE, SIZE,     //v3

                SIZE, -SIZE, SIZE,     //...
                SIZE, -SIZE, -SIZE,
                SIZE, SIZE, SIZE,
                SIZE, SIZE, -SIZE,

                SIZE, -SIZE, -SIZE,
                -SIZE, -SIZE, -SIZE,
                SIZE, SIZE, -SIZE,
                -SIZE, SIZE, -SIZE,

                -SIZE, -SIZE, -SIZE,
                -SIZE, -SIZE, SIZE,
                -SIZE, SIZE, -SIZE,
                -SIZE, SIZE, SIZE,

                -SIZE, -SIZE, -SIZE,
                SIZE, -SIZE, -SIZE,
                -SIZE, -SIZE, SIZE,
                SIZE, -SIZE, SIZE,

                -SIZE, SIZE, SIZE,
                SIZE, SIZE, SIZE,
                -SIZE, SIZE, -SIZE,
                SIZE, SIZE, -SIZE
                                    };

        
        texture = new float[]
                {
                //Mapping coordinates for the vertices
                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 0.0f,
                1.0f, 1.0f,

                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 0.0f,
                1.0f, 1.0f,

                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 0.0f,
                1.0f, 1.0f,

                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 0.0f,
                1.0f, 1.0f,

                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 0.0f,
                1.0f, 1.0f,

                0.0f, 0.0f,
                0.0f, 1.0f,
                1.0f, 0.0f,
                1.0f, 1.0f
                            };

        indices = new byte[]
                {
                // Faces definition
                0, 1, 3, 0, 3, 2,         // Face front
                4, 5, 7, 4, 7, 6,         // Face right
                8, 9, 11, 8, 11, 10,     // ...
                12, 13, 15, 12, 15, 14,
                16, 17, 19, 16, 19, 18,
                20, 21, 23, 20, 23, 22
                                        };


        //copy vertices into a direct, native-order float buffer
        ByteBuffer byteBuf = ByteBuffer.allocateDirect(vertices.length * 4);
        byteBuf.order(ByteOrder.nativeOrder());
        vertexBuffer = byteBuf.asFloatBuffer();
        vertexBuffer.put(vertices);
        vertexBuffer.position(0);

        //copy texture coordinates the same way
        byteBuf = ByteBuffer.allocateDirect(texture.length * 4);
        byteBuf.order(ByteOrder.nativeOrder());
        textureBuffer = byteBuf.asFloatBuffer();
        textureBuffer.put(texture);
        textureBuffer.position(0);


        //index data is bytes, so a direct ByteBuffer is used as-is
        indexBuffer = ByteBuffer.allocateDirect(indices.length);
        indexBuffer.put(indices);
        indexBuffer.position(0);
   }
   public void draw(GL10 gl)
   {
       //implemented in the next listing
   }
}
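Since the index buffer is easy to get wrong, here is a quick stand-alone sanity check of the data above (CubeDataCheck is a hypothetical helper, plain Java): 24 face vertices indexed 0..23, and 6 faces times 2 triangles times 3 indices gives 36 entries, matching the count later passed to glDrawElements.

```java
public class CubeDataCheck {
    //The same index array as in the Cube constructor above
    public static byte[] cubeIndices() {
        return new byte[] {
            0, 1, 3, 0, 3, 2,        //front
            4, 5, 7, 4, 7, 6,        //right
            8, 9, 11, 8, 11, 10,     //back
            12, 13, 15, 12, 15, 14,  //left
            16, 17, 19, 16, 19, 18,  //bottom
            20, 21, 23, 20, 23, 22   //top
        };
    }

    //Highest vertex index referenced; must stay below the vertex count (24)
    public static int maxIndex(byte[] indices) {
        int max = 0;
        for (byte b : indices) max = Math.max(max, b);
        return max;
    }
}
```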


        With the cube's vertex data in place, the next part is drawing: rendering these vertices through the OpenGL ES API provided by the Android SDK. The code follows.




Cube drawing
 public void draw(GL10 gl)
    {
       
        //Bind the texture according to the set texture filter
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textures);

        //Enable the vertex, texture and normal state
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

       
        //apply this cube's local transform
        gl.glTranslatef(transform.pos[0], transform.pos[1], transform.pos[2]);
        gl.glRotatef(transform.rotationAngle[0], 1.0f, 0.0f, 0.0f);
        gl.glRotatef(transform.rotationAngle[1], 0.0f, 1.0f, 0.0f);
        gl.glRotatef(transform.rotationAngle[2], 0.0f, 0.0f, 1.0f);

        gl.glScalef(transform.scale[0], transform.scale[1], transform.scale[2]);

        //Point to our buffers
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);

        //Draw the vertices as triangles, based on the Index Buffer information
        gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_BYTE, indexBuffer);

        gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
        //Disable the client state before leaving
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    }


        With the basic drawing handled, we return to GLStereoView to render the whole scene, in the following main steps.


  • Create the scene objects
  • Set up the stereo-view projection matrices
  • Render the two stereo views via render-to-texture


  • Create the scene objects
        As the name suggests, this step creates the scene objects. My implementation builds two cubes and two planes; the two cubes are used as the example below. They are created in GLStereoView's onSurfaceCreated. To explain: onSurfaceCreated overrides a GLSurfaceView callback whose invocation is handled by GLSurfaceView itself, and scene initialization can be done inside it; if the scene is small and needs no management, it is even fine to load the entire scene there. My code follows.




Creating the scene objects
    public void onSurfaceCreated(GL10 gl, EGLConfig config) 
    {                
        // Set the background color to black ( rgba ).
        //gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
        gl.glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
        // Enable Smooth Shading, default not really needed.
        gl.glShadeModel(GL10.GL_SMOOTH);
        // Depth buffer setup.
        //gl.glClearDepthf(1.0f);
        gl.glEnable(GL10.GL_TEXTURE_2D);
        // Enables depth testing.
        gl.glEnable(GL10.GL_DEPTH_TEST);
        // The type of depth testing to do.
        gl.glDepthFunc(GL10.GL_LEQUAL);
        // Really nice perspective calculations.
        gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
        
        textureIDGroup = createTextures(gl, 2, new int[]
                                                       {
                                                        R.drawable.create, 
                                                        R.drawable.ps_grid06});
        Cube []moveableObjs = new Cube[2];

        
        moveableObjs[0] = new Cube(1.0f);
        //Load the texture for the cube once during Surface creation
        moveableObjs[0].setGLTexture(textureIDGroup[0]);
        moveableObjs[0].translateOffset(2.0f, 0.0f, 0.0f);


        moveableObjs[1] = new Cube(1.0f);
        //Load the texture for the cube once during Surface creation
        moveableObjs[1].setGLTexture(textureIDGroup[0]);
        moveableObjs[1].translateOffset(1.5f, 1.0f, 5.0f);
    }


  • Set up the stereo-view projection matrices


        Next, to produce the stereo view, the camera parameters (the projection matrix) must be set up first. This part is the same as in my previous post, Stereoscopic 3D in OpenGL. The main code is listed below. It is used when switching into stereo view; I call it from onTouchEvent. Also note that if multiple view modes can be switched, onSurfaceChanged must handle them as well.


Camera parameters
    //near clipping plane >0
    private double mNearZ = 0.1f;
    //far clipping plane >0
    private double mFarZ = 1000.0f;
    //screen projection plane
    private double mScreenZ = 50.0f;
    //intraocular distance
    private double mIOD = 0.05f;
    //vertical field of view in degrees
    private double mFovy = 45.0f;
    //degrees-to-radians factor (pi/180)
    private double DTR = 0.0174532925; 
/*
     *  IOD : intraocular distance, which decides the eye offsets in the stereo view 
     *          (OpenGL looks from the center of the two eyes, each eye offset by IOD/2)
     *  fovy : vertical field of view of the eyes
     *  aspect : screen aspect ratio, w/h; notice!! use the framebuffer w/h with an FBO
     *  nearZ, farZ : positive z clip planes
     *  clipPlaneRect : clip plane rect {left, right, bottom, top}, set up from DTR, fovy, nearZ and screenZ
     */
    private void setStereoFrustumParameter(double width, double height)
    {        
        double aspect = width/height;
        double fovy = mFovy;
        double nearZ =  mNearZ;
        double screenZ = mScreenZ;
        double IOD = mIOD;
        double top = nearZ*Math.tan(DTR*fovy/2.0f);
        double right = aspect*top;
        double frustumShift = (IOD/2.0f)*nearZ/screenZ;
        
        //Left eye
        mLeftCam.mTopFrustum = top;
        mLeftCam.mBottomFrustum = -top;
        //center of two-eyes, offset IOD/2
        mLeftCam.mLeftFrustum = -right+frustumShift;
        mLeftCam.mRightFrustum = right+frustumShift;
        mLeftCam.mModelTranslation = IOD/2.0f;
            
        //right eye
        mRightCam.mTopFrustum = top;
        mRightCam.mBottomFrustum = -top;
        mRightCam.mLeftFrustum = -right-frustumShift;
        mRightCam.mRightFrustum = right-frustumShift;
        mRightCam.mModelTranslation = -IOD/2.0f;        
    }
    private float[] getFrustum(SIDEVIEW side)
    {
        float []frustum = new float[10];
  
        if(side == SIDEVIEW.RIGHT)
        {
            frustum[0] = (float)mRightCam.mLeftFrustum;
            frustum[1] = (float)mRightCam.mRightFrustum;
            frustum[2] = (float)mRightCam.mBottomFrustum;
            frustum[3] = (float)mRightCam.mTopFrustum;
            frustum[6] = (float)mRightCam.mModelTranslation;
            frustum[7] = 0.0f;
            frustum[8] = 0.0f;
        }
        else
        {
            frustum[0] = (float)mLeftCam.mLeftFrustum;
            frustum[1] = (float)mLeftCam.mRightFrustum;
            frustum[2] = (float)mLeftCam.mBottomFrustum;
            frustum[3] = (float)mLeftCam.mTopFrustum;
            frustum[6] = (float)mLeftCam.mModelTranslation;
            frustum[7] = 0.0f;
            frustum[8] = 0.0f;
        }
        frustum[4] = (float)mNearZ;
        frustum[5] = (float)mFarZ;
        frustum[9] = (float) mScreenZ;
  
        return frustum;
    }
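The asymmetric-frustum math in setStereoFrustumParameter can be checked in isolation with plain Java. The sketch below (StereoFrustumCheck is a hypothetical name) reproduces the same formulas, using Math.PI/180 in place of the rounded DTR constant:

```java
public class StereoFrustumCheck {
    //Returns {left, right, bottom, top, frustumShift} for the left eye,
    //mirroring the formulas in setStereoFrustumParameter above.
    public static double[] leftEyeFrustum(double fovyDeg, double aspect,
                                          double nearZ, double screenZ, double iod) {
        double dtr = Math.PI / 180.0; //degrees to radians
        double top = nearZ * Math.tan(dtr * fovyDeg / 2.0);
        double right = aspect * top;
        //Each eye sits IOD/2 off center; projecting that offset from the
        //screen plane back to the near plane gives the frustum asymmetry.
        double shift = (iod / 2.0) * nearZ / screenZ;
        return new double[] { -right + shift, right + shift, -top, top, shift };
    }
}
```

With the defaults above (fovy 45, nearZ 0.1, screenZ 50, IOD 0.05) the shift is only 0.00005 at the near plane, which is why screenZ and IOD usually need tuning per scene.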


  • Render the two stereo views via render-to-texture
        Finally, the drawing. To render the two views separately, render-to-texture is needed. Since the implementation is a little involved, here are the main steps:


  1. Create a texture to render into
  2. Create a Frame Buffer Object (FBO) and attach it to the texture
  3. Draw


Creating a texture to render into
         int[] texture = new int[1];
         gl.glGenTextures(1, texture, 0);   
         gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]);
         //notice!! passing null makes GL allocate storage without supplying initial pixel data
         gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, width, height, 0,
                 GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, null);
         gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER,
                 GL10.GL_NEAREST);
         gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                 GL10.GL_TEXTURE_MAG_FILTER,
                 GL10.GL_LINEAR);
         gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
                 GL10.GL_REPEAT);
         gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
                 GL10.GL_REPEAT);


Creating a Frame Buffer Object (FBO) and attaching it to the texture
         GL11ExtensionPack gl11ep = (GL11ExtensionPack) gl;
         int framebuffer;
         int[] framebuffers = new int[1];
         gl11ep.glGenFramebuffersOES(1, framebuffers, 0);
         framebuffer = framebuffers[0];
         gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, framebuffer);

         int depthbuffer;
         int[] renderbuffers = new int[1];
         gl11ep.glGenRenderbuffersOES(1, renderbuffers, 0);
         depthbuffer = renderbuffers[0];

         gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthbuffer);
         gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES,
                 GL11ExtensionPack.GL_DEPTH_COMPONENT16, width, height);
         gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                 GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES,
                 GL11ExtensionPack.GL_RENDERBUFFER_OES, depthbuffer);

         //Attach 2D Texture to the FBO
         gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                 GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D,
                 targetTextureId, 0);
         int status = gl11ep.glCheckFramebufferStatusOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES);
         if (status != GL11ExtensionPack.GL_FRAMEBUFFER_COMPLETE_OES) 
         {
             throw new RuntimeException("Framebuffer is not complete: " +
                     Integer.toHexString(status));
         }
         gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);



         The drawing part is a bit more involved, so a short explanation: the offscreen pass renders into the FBO, which means onto the texture. Once the texture has been drawn, all that remains is to build a model, apply that texture, and draw it on screen; drawOnscreen does exactly that. I only draw a simple textured rectangle, so its code is omitted.
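For completeness, here is a minimal sketch of what the omitted drawOnscreen might look like, assuming the FBO texture id is stored in mTargetTexture and the rectangle's data in mQuadVertices / mQuadTexCoords (all hypothetical field names, with the direct buffers built as in Cube):

```java
private void drawOnscreen(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(-1, 1, -1, 1, -1, 1); //simple 2D projection for the quad
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    //draw one textured rectangle carrying the offscreen result
    gl.glBindTexture(GL10.GL_TEXTURE_2D, mTargetTexture);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mQuadVertices);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mQuadTexCoords);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
}
```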


Drawing
    public void drawFrame(GL10 gl, int width, int height,boolean ifFrustum, float []proj,float []look)
    {
         if (mContextSupportsFrameBufferObject) 
            {            
                //debug
                //drawOffscreenImage(gl, mFramebufferWidth, mFramebufferHeight);
                GL11ExtensionPack gl11ep = (GL11ExtensionPack) gl;
                
                //redirect drawing into the FBO
                gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, mFramebuffer);
                //draw on "FrameBuffer", all initial reset by FrameBuffer env
                drawOffscreenImage(gl, mFramebufferWidth, mFramebufferHeight,ifFrustum, proj,look);
             
                //stop drawing into the FBO
                gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
                                
                //draw on screen,  all initial reset by screen env
                drawOnscreen(gl, width, height);              
             } 
            else 
            {
                 // Current context doesn't support frame buffer objects.
                // Indicate this by drawing a red background.
                gl.glClearColor(1,0,0,0);
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            }
    }
    private void drawOffscreenImage(GL10 gl, int width, int height, boolean ifFrustum, float []proj,float []look)
    {
        long time = SystemClock.uptimeMillis()% 4000L;
        float angle = 0.090f * ((int) time);
        gl.glViewport(0, 0, width, height);
      
        //framebuffer w/h, not screen
        float ratio = (float) width / height;
         
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        //notice!! gl.glFrustumf : model should locates in nearz~farz
        //gl.glFrustumf(-ratio, ratio, -1, 1, 3, 1000);
        //GLU.gluPerspective(gl, 45.0f, ratio, 0.1f, 1000.0f);
        if(ifFrustum == true)
        {
            gl.glFrustumf(proj[0], proj[1], proj[2], proj[3], proj[4], proj[5]);
            //notice!! x offset, not concern y,z now
            gl.glTranslatef(proj[6], proj[7], proj[8]);
        }
        else         
            GLU.gluPerspective(gl, proj[0], ratio,proj[1], proj[2]);
         
        //Set the face rotation
        gl.glFrontFace(GL10.GL_CCW);
        gl.glEnable(GL10.GL_CULL_FACE);
        gl.glCullFace(GL10.GL_BACK);
        gl.glEnable(GL10.GL_DEPTH_TEST);
         
        //notice!! unlike drawOnscreen, it is fine to clear the buffers here, because the two framebuffer passes draw into separate textures
        //one texture goes on the left plane and the other on the right, so each clear only affects that texture's own buffers
        gl.glClearColor(0.0f,0.0f,0.0f,0);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
         
        //GLU.gluLookAt(gl, 0, 0, 40, 0, 0, 0, 0, 1, 0);
        GLU.gluLookAt(gl, look[0], look[1], look[2], look[3], look[4], look[5], look[6], look[7], look[8]);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

         
        /*   gl.glTranslatef(0, 0, -3.0f);
        gl.glRotatef(angle,        0, 1, 0);
        gl.glRotatef(angle*0.25f,  1, 0, 0);*/
         
        gl.glPushMatrix();
        if(ifFrustum == true)
        {
            //move to screenZ
            gl.glTranslatef(0, 0, -proj[9]);
        }
        drawScene(gl,time);
        gl.glPopMatrix();
         
        gl.glDisable(GL10.GL_CULL_FACE);
        gl.glDisable(GL10.GL_DEPTH_TEST);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);        
    }


        Finally, render the whole scene in GLStereoView and it is done: call drawFrame from onDrawFrame.




GLStereoView frame rendering
    public void onDrawFrame(GL10 gl)
    {
        float []look = null;
        float lapsetime = SystemClock.uptimeMillis()% 1000.0f;
        //smooth move
        float cameraDistX =   mCameraSpeed * mMoveDistX * 0.0001f * lapsetime;
        float cameraDistY = 0.0f; //vertical camera movement, unused here
        
        mCameraPosX += cameraDistX;

                        
        gl.glClearColor(0.0f,0,1.0f,0);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

        if(ifFrustum == true)
        {
            float screenOffsetw = (float)mDrawWidth*0.5f;
            float screenOffseth = (float)mDrawHeight*0.5f;
            look =  new float[]{ mCameraPosX, mCameraPosY, 0, mCameraPosX, mCameraPosY,(float) -mScreenZ,  0, 1, 0};
            mPrjParameter = getFrustum(SIDEVIEW.RIGHT);
            
            //right screen
            mFBOSceneRight.setPosition(new float[]{-screenOffsetw,0.0f,0.0f});
            mFBOSceneRight.drawFrame(gl, mWidth, mHeight,ifFrustum, mPrjParameter,look);

            //look =  new float[]{ mCameraPosX, mCameraPosY, 0, mCameraPosX, mCameraPosY,(float) -mScreenZ,  0, 1, 0};
            mPrjParameter = getFrustum(SIDEVIEW.LEFT);
            
            //left screen
            mFBOSceneLeft.setPosition(new float[]{screenOffsetw,0.0f,0.0f});
            mFBOSceneLeft.drawFrame(gl, mWidth, mHeight,ifFrustum, mPrjParameter,look);
        }
        else
        {
            look = new float[]{ mCameraPosX, mCameraPosY, 40, mCameraPosX, mCameraPosY, 0, 0, 1, 0};
            mPrjParameter = new float[]{45.0f, 0.1f, 1000.0f};
            mFBOScene.setPosition(new float[]{0.0f,0.0f,0.0f});
            mFBOScene.drawFrame(gl, mWidth, mHeight,ifFrustum, mPrjParameter,look);
        }        
    }


.Result




.Original post
http://arkkk.blogspot.com/2011/10/stereoscopic-3d-in-opengl-es-android.html
