
Loading .3DS files on Android

This article is a follow-up to my last post on the glow effect, where I promised to explain how to load .3ds files so that they can be drawn with the shaders used there.

Some general information about the file format is available, for example, on Wikipedia or in the demo.design 3D programming FAQ, but that is all theory (and not entirely error-free at that); here we will talk about practice, specifically in Java on Android.

What will be here:

What will not be here:

I won't walk through the entire source in detail as I did last time (it is over a thousand lines); instead, I'll focus on the key points and provide links to the full source code at the end of the article. Still interested? Then let's continue.

Reading the file


Strangely enough, simply reading numbers from the file was one of the trickiest problems to solve first. There are two pitfalls here: speed and correctness. We get speed by using BufferedInputStream and strictly sequential reads; correctness is a bit more involved: Java assumes all data in a file is big-endian, whereas .3ds uses little-endian. Well... We use a simple wrapper:
  private BufferedInputStream file;
  private byte[] bytes = new byte[8];
  private long filePos = 0;
  ...
  private void Skip(long count) throws IOException {
    file.skip(count);
    filePos += count;
  }

  private void Seek(long end) throws IOException {
    if (filePos < end) {
      Skip(end - filePos);
      filePos = end;
    }
  }

  private byte ReadByte() throws IOException {
    file.read(bytes, 0, 1);
    filePos++;
    return bytes[0];
  }

  private int ReadUnsignedByte() throws IOException {
    file.read(bytes, 0, 1);
    filePos++;
    return (bytes[0] & 0xff);
  }

  private int ReadUnsignedShort() throws IOException {
    file.read(bytes, 0, 2);
    filePos += 2;
    return ((bytes[1] & 0xff) << 8 | (bytes[0] & 0xff));
  }

  private int ReadInt() throws IOException {
    file.read(bytes, 0, 4);
    filePos += 4;
    return (bytes[3]) << 24 | (bytes[2] & 0xff) << 16 | (bytes[1] & 0xff) << 8 | (bytes[0] & 0xff);
  }

  private float ReadFloat() throws IOException {
    return Float.intBitsToFloat(ReadInt());
  }


Ideally this should have been a separate class extending BufferedInputStream, but in this case it was more convenient for me to do it this way.
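Incidentally, java.nio can do the byte-order conversion for us: wrapping the raw bytes in a ByteBuffer set to little-endian gives the same results. A standalone sketch (not part of the loader) of reading an unsigned short and a float this way:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LittleEndianDemo {
    public static void main(String[] args) {
        // An unsigned short 0x3C46 followed by the float 1.0f, both little-endian
        byte[] raw = { 0x46, 0x3C, 0x00, 0x00, (byte) 0x80, 0x3F };
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        int chunkId = buf.getShort() & 0xffff; // mask to treat the short as unsigned
        float value = buf.getFloat();
        System.out.println(Integer.toHexString(chunkId)); // 3c46
        System.out.println(value);                        // 1.0
    }
}
```

For a stream read strictly once from start to end, the hand-rolled wrapper above is just as good; ByteBuffer becomes more attractive if you read the whole file into memory first.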

Now we can start reading chunks. Let's begin with the main one:

  private Scene3D ProcessFile(long fileLen) throws IOException {
    Scene3D scene = null;
    while (filePos < fileLen) {
      int chunkID = ReadUnsignedShort();
      int chunkLen = ReadInt() - 6;
      switch (chunkID) {
        case CHUNK_MAIN:
          if (scene == null)
            scene = ChunkMain(chunkLen);
          else
            Skip(chunkLen);
          break;
        default:
          Skip(chunkLen);
      }
    }
    return scene;
  }

  private Scene3D ChunkMain(int len) throws IOException {
    Scene3D scene = new Scene3D();
    scene.materials = new ArrayList<Material3D>();
    scene.objects = new ArrayList<Object3D>();
    scene.lights = new ArrayList<Light3D>();
    scene.animations = new ArrayList<Animation>();
    long end = filePos + len;
    while (filePos < end) {
      int chunkID = ReadUnsignedShort();
      int chunkLen = ReadInt() - 6;
      switch (chunkID) {
        case CHUNK_OBJMESH:
          Chunk3DEditor(scene, chunkLen);
          break;
        case CHUNK_KEYFRAMER:
          ChunkKeyframer(scene, chunkLen);
          break;
        case CHUNK_BACKCOL:
          scene.background = new float[4];
          ChunkColor(chunkLen, scene.background);
          break;
        case CHUNK_AMB:
          scene.ambient = new float[4];
          ChunkColor(chunkLen, scene.ambient);
          break;
        default:
          Skip(chunkLen);
      }
    }
    Seek(end);
    scene.Compute(0);
    return scene;
  }


The structure of the loader is quite uniform overall: each chunk has its own function that knows which sub-chunks may occur inside it. We load the information we need and skip the rest, jumping straight to the next chunk. Protection against malformed files is minimal here.
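This skip-what-you-don't-know pattern is easy to demonstrate on a toy in-memory "file". The sketch below (with a made-up chunk ID, not one from the loader) walks a byte array, picking out one known chunk type and skipping the rest:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ChunkWalkDemo {
    static final int CHUNK_KNOWN = 0x0010; // hypothetical chunk ID

    public static void main(String[] args) {
        // Build two chunks: an unknown 0xAAAA (total length 8, 2 payload bytes)
        // followed by the known chunk (total length 10, one int of payload)
        ByteBuffer out = ByteBuffer.allocate(18).order(ByteOrder.LITTLE_ENDIAN);
        out.putShort((short) 0xAAAA).putInt(8).put(new byte[2]);
        out.putShort((short) CHUNK_KNOWN).putInt(10).putInt(42);
        ByteBuffer in = ByteBuffer.wrap(out.array()).order(ByteOrder.LITTLE_ENDIAN);

        int found = -1;
        while (in.remaining() >= 6) {           // 6 = chunk header size
            int id = in.getShort() & 0xffff;
            int len = in.getInt() - 6;          // payload length, header excluded
            if (id == CHUNK_KNOWN) found = in.getInt();
            else in.position(in.position() + len); // skip an unknown chunk whole
        }
        System.out.println(found); // 42
    }
}
```

The real loader does exactly this, only recursively: a known chunk's handler runs the same loop over that chunk's sub-chunks.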

Materials


The materials block usually comes first, because the triangle block that follows refers to it.

A material consists of several colors (ambient, diffuse, specular), the material name, specular highlight parameters, and the texture name. As noted above, textures are not loaded here, but this is easy to add if needed.
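As an illustration, here is a standalone sketch of decoding a color sub-chunk. The chunk IDs 0x0010 (three-float RGB) and 0x0011 (three-byte RGB) are the ones documented in the format FAQ; the helper name readColor is my own:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ColorChunkDemo {
    static final int COL_RGB = 0x0010; // color as three floats
    static final int COL_TRU = 0x0011; // color as three bytes

    // Reads one color sub-chunk into out[0..2]; alpha (out[3]) is forced to 1
    static void readColor(ByteBuffer in, float[] out) {
        int id = in.getShort() & 0xffff;
        in.getInt(); // chunk length, not needed here
        if (id == COL_RGB) {
            for (int i = 0; i < 3; i++) out[i] = in.getFloat();
        } else if (id == COL_TRU) {
            for (int i = 0; i < 3; i++) out[i] = (in.get() & 0xff) / 255.0f;
        }
        out[3] = 1.0f;
    }

    public static void main(String[] args) {
        // A byte-RGB chunk: header (id + total length 9) and three color bytes
        ByteBuffer buf = ByteBuffer.allocate(9).order(ByteOrder.LITTLE_ENDIAN);
        buf.putShort((short) COL_TRU).putInt(9)
           .put((byte) 255).put((byte) 0).put((byte) 51);
        buf.flip();
        float[] color = new float[4];
        readColor(buf, color);
        System.out.println(color[0]); // 1.0
        System.out.println(color[1]); // 0.0
        System.out.println(color[2]); // 0.2
    }
}
```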

3D models


Each 3D model (see the ChunkTrimesh function) is defined by the following data:

If the first three items are clear enough, the last one looks somewhat mysterious. Looking ahead, I will say that it has remained a rather puzzling entity for me, although I did eventually learn how to apply this data correctly.

We dump all the vertex information into a single float[] array, storing eight floats per vertex (three each for the position and the normal, plus two texture coordinates). A couple of lines from the last article need to change:

  GLES20.glVertexAttribPointer(maPosition, 3, GLES20.GL_FLOAT, false, 32, 0);
  GLES20.glVertexAttribPointer(maNormal, 3, GLES20.GL_FLOAT, false, 32, 12);

Here the number 24 has changed to 32, since there were no texture coordinates before, but now there are.
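The 32-byte stride follows directly from the packing: eight floats of four bytes each. A sketch of assembling such an interleaved array (all names here are illustrative, not from the sources):

```java
public class InterleaveDemo {
    // Packs per-vertex position (3 floats), normal (3) and UV (2) into one array
    static float[] interleave(float[] pos, float[] norm, float[] uv, int count) {
        float[] out = new float[count * 8]; // stride = 8 floats = 32 bytes
        for (int v = 0; v < count; v++) {
            System.arraycopy(pos,  v * 3, out, v * 8,     3); // byte offset 0
            System.arraycopy(norm, v * 3, out, v * 8 + 3, 3); // byte offset 12
            System.arraycopy(uv,   v * 2, out, v * 8 + 6, 2); // byte offset 24
        }
        return out;
    }

    public static void main(String[] args) {
        float[] packed = interleave(
            new float[]{1, 2, 3},          // one position
            new float[]{0, 1, 0},          // one normal
            new float[]{0.5f, 0.25f}, 1);  // one UV pair, one vertex
        System.out.println(packed.length); // 8
        System.out.println(packed[3]);     // 0.0 (normal.x)
        System.out.println(packed[6]);     // 0.5 (u)
    }
}
```

The offsets 0 and 12 in the glVertexAttribPointer calls above are exactly the byte offsets of the position and normal within one such 32-byte record.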

All coordinates are loaded by the ChunkVector function, which at the same time swaps the Y and Z axes:

  private void ChunkVector(float[] vec, int offset) throws IOException {
    vec[offset + 0] = ReadFloat();
    vec[offset + 2] = ReadFloat();
    vec[offset + 1] = ReadFloat();
  }


And in general, standard types such as colors or percentages have dedicated helper functions of their own.

The list of triangles needs special handling: first, materials are assigned to faces (not to vertices), and second, it is from the faces that the vertex normals can be derived. To do this, we compute the normal of each face, add it to each of the face's three vertices, and then (at the end, after all triangles are loaded) normalize. A handful of functions, a little math, and we're done.
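That face-normal accumulation can be sketched in isolation like this (plain vector math; the helper names are illustrative):

```java
public class NormalsDemo {
    // Adds the (unnormalized) face normal of triangle (a,b,c) to each of its vertex normals
    static void addFaceNormal(float[] verts, float[] normals, int a, int b, int c) {
        float ux = verts[b*3] - verts[a*3], uy = verts[b*3+1] - verts[a*3+1], uz = verts[b*3+2] - verts[a*3+2];
        float vx = verts[c*3] - verts[a*3], vy = verts[c*3+1] - verts[a*3+1], vz = verts[c*3+2] - verts[a*3+2];
        // Cross product of the two edge vectors gives the face normal
        float nx = uy * vz - uz * vy, ny = uz * vx - ux * vz, nz = ux * vy - uy * vx;
        for (int i : new int[]{a, b, c}) {
            normals[i*3] += nx; normals[i*3+1] += ny; normals[i*3+2] += nz;
        }
    }

    // Normalizes every accumulated vertex normal after all faces have been processed
    static void normalizeAll(float[] normals) {
        for (int i = 0; i < normals.length; i += 3) {
            float len = (float) Math.sqrt(
                normals[i]*normals[i] + normals[i+1]*normals[i+1] + normals[i+2]*normals[i+2]);
            if (len > 1e-7f) { normals[i] /= len; normals[i+1] /= len; normals[i+2] /= len; }
        }
    }

    public static void main(String[] args) {
        float[] verts = {0,0,0, 1,0,0, 0,1,0}; // one triangle in the XY plane
        float[] normals = new float[9];
        addFaceNormal(verts, normals, 0, 1, 2);
        normalizeAll(normals);
        System.out.println(normals[2]); // z of vertex 0's normal -> 1.0
    }
}
```

Because a face's unnormalized normal is proportional to its area, larger faces automatically weigh more in the average, which tends to look right.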

Another quirk of the face list is that after the chunk with material names there may be leftover faces to which no material was assigned. For those, you need to use a default material when drawing, like this:

  mAmbient[0] = 0.587f; mAmbient[1] = 0.587f; mAmbient[2] = 0.587f; mAmbient[3] = 1.0f;
  mDiffuse[0] = 0.587f; mDiffuse[1] = 0.587f; mDiffuse[2] = 0.587f; mDiffuse[3] = 1.0f;
  mSpecular[0] = 0.896f; mSpecular[1] = 0.896f; mSpecular[2] = 0.896f; mSpecular[3] = 1.0f;
  ...
  int mats = obj.faceMats.size();
  for (j = 0; j < mats; j++) {
    FaceMat mat = obj.faceMats.get(j);
    if (mat.material != null) {
      if (mat.material.ambient != null && scene.ambient != null) {
        for (k = 0; k < 3; k++)
          mAmbient[k] = mat.material.ambient[k] * scene.ambient[k];
        GLES20.glUniform4fv(muAmbient, 1, mAmbient, 0);
      } else
        GLES20.glUniform4f(muAmbient, 0, 0, 0, 1);
      if (mat.material.diffuse != null)
        GLES20.glUniform4fv(muDiffuse, 1, mat.material.diffuse, 0);
      else
        GLES20.glUniform4fv(muDiffuse, 1, mDiffuse, 0);
      if (mat.material.specular != null)
        GLES20.glUniform4fv(muSpecular, 1, mat.material.specular, 0);
      else
        GLES20.glUniform4fv(muSpecular, 1, mSpecular, 0);
      GLES20.glUniform1f(muShininess, mat.material.shininess);
    } else {
      GLES20.glUniform4f(muAmbient, 0, 0, 0, 1);
      GLES20.glUniform4fv(muDiffuse, 1, mDiffuse, 0);
      GLES20.glUniform4fv(muSpecular, 1, mSpecular, 0);
      GLES20.glUniform1f(muShininess, 0);
    }
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, mat.indexBuffer.length,
        GLES20.GL_UNSIGNED_SHORT, mat.bufOffset * 2);
  }


Voila

Light sources


They come in omnidirectional and directional varieties. We won't discuss directional lights for now either (although writing a shader that accounts for directionality is not hard); instead, I'll say a few words about specular highlights. Take the model shader from the previous article and add a few lines to it:

  private final String vertexShaderCode =
      "precision mediump float;\n" +
      "uniform mat4 uMVPMatrix;\n" +
      "uniform mat4 uMVMatrix;\n" +
      "uniform mat3 uNMatrix;\n" +
      "uniform vec4 uAmbient;\n" +
      "uniform vec4 uDiffuse;\n" +
      "uniform vec4 uSpecular;\n" +
      "uniform float uShininess;\n" +
      ...
      "vec4 light_point_view_local(vec3 epos, vec3 normal, int idx) {\n" +
      "  vec3 vert2light = uLight[idx].position - epos;\n" +
      "  vec3 ldir = normalize(vert2light);\n" +
      "  vec3 vdir = vec3(0.0, 0.0, 1.0);\n" +
      "  vec3 halfv = normalize(ldir + vdir);\n" +
      "  float NdotL = dot(normal, ldir);\n" +
      "  float NdotH = dot(normal, halfv);\n" +
      "  vec4 outCol = vec4(0.0, 0.0, 0.0, 1.0);\n" +
      "  if (NdotL > 0.0) {\n" +
      "    outCol = uLight[idx].color * uDiffuse * NdotL;\n" +
      "    if (NdotH > 0.0 && uShininess > 0.0) {\n" +
      "      outCol += uSpecular * pow(NdotH, uShininess);\n" +
      "    }\n" +
      "  }\n" +
      "  return outCol;\n" +
      "}\n";


What was added is the computation and use of NdotH. Note that shininess here and shininess in Material3D are on different scales; I did not work out the exact mapping between them (again, if someone needs it, this is easy to do).

Animation


One of the most complex and interesting topics in the .3ds format. The point is that without applying the animation tracks, some objects may not be displayed correctly at all, and objects that are clones of one another certainly won't be.

All objects in a .3ds file are combined into a hierarchical tree, and the transformations of each "parent" must be applied to its "children". The nodes of the tree are written in top-down order, so transformations can be applied in that same order. Curiously, from the .3ds file's point of view, 3D models, light sources and cameras are peer objects: they can all be linked into the hierarchy and animated in the same way. For now, however, we are only interested in 3D models, and specifically their position, rotation and scaling tracks.
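Because parents always precede their children in the file, a single top-down pass is enough: each node's world transform is its parent's world transform combined with its own local one. A deliberately simplified sketch with translation-only transforms (the real loader composes full 4x4 matrices):

```java
import java.util.ArrayList;
import java.util.List;

public class HierarchyDemo {
    static class Node {
        Node parent;
        float[] local = new float[3]; // offset relative to the parent
        float[] world = new float[3]; // accumulated offset
    }

    public static void main(String[] args) {
        Node root = new Node();
        root.local = new float[]{1, 0, 0};
        Node child = new Node();
        child.parent = root;
        child.local = new float[]{0, 2, 0};

        List<Node> nodes = new ArrayList<>();
        nodes.add(root);  // parents always precede children in the list,
        nodes.add(child); // as in the .3ds keyframer block

        for (Node n : nodes) { // a single top-down pass suffices
            for (int i = 0; i < 3; i++)
                n.world[i] = n.local[i] + (n.parent != null ? n.parent.world[i] : 0);
        }
        System.out.println(child.world[0] + " " + child.world[1]); // 1.0 2.0
    }
}
```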

For each object, the following is stored:


Loading the tracks is tedious, so let's talk instead about how to use them. So, we have:

It remains to combine all of this into one final transformation matrix. Translation and scaling are relatively simple: plain linear interpolation between two keyframes, with the values given in absolute terms. Rotations, however, must be applied cumulatively, one after another; between keyframes we apply the next frame's rotation by the corresponding fraction of its angle, interpolating linearly.
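Here is a standalone sketch of that keyframe interpolation for a one-dimensional track, clamping outside the key range the way the loader's findVec does for vectors:

```java
public class KeyLerpDemo {
    // Samples a 1-D track linearly at time t; clamps to the first/last key outside the range
    static float sample(float[] times, float[] values, float t) {
        if (t <= times[0]) return values[0];
        for (int i = 1; i < times.length; i++) {
            if (t <= times[i]) {
                float local = (t - times[i - 1]) / (times[i] - times[i - 1]);
                return values[i - 1] + (values[i] - values[i - 1]) * local;
            }
        }
        return values[values.length - 1]; // past the last key
    }

    public static void main(String[] args) {
        float[] times  = {0.0f, 0.5f, 1.0f};
        float[] values = {0.0f, 10.0f, 10.0f};
        System.out.println(sample(times, values, 0.25f)); // 5.0
        System.out.println(sample(times, values, 2.0f));  // 10.0
    }
}
```

Rotations cannot be sampled this way, because each key stores an axis-angle delta relative to the previous key, not an absolute orientation; hence the cumulative application described above.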

Another interesting point is that two matrices must be maintained: a transformation for the children (result) and a transformation for the model itself (world). The first is used down the hierarchy chain, the second when drawing the model. In what order does it all compose?

  result = parent.result * move * rotate * scale;
  world = result * Move(-pivot) * trmatrix;

Transformations are assumed to apply to a vertex from right to left (as is customary in OpenGL). Here trmatrix is the inverse of the matrix stored in the 3D model's chunk. Putting it all together, the transform computation for a given point in time (at load time, all frame numbers were converted to real numbers from 0 to 1):

  private void lerp3(float[] out, float[] from, float[] to, float t) {
    for (int i = 0; i < 3; i++)
      out[i] = from[i] + (to[i] - from[i]) * t;
  }

  private AnimKey findVec(AnimKey[] keys, float time) {
    AnimKey key = keys[keys.length - 1]; // We'll use either first, or last, or interpolated key
    for (int j = 0; j < keys.length; j++) {
      if (keys[j].time >= time) {
        if (j > 0) {
          float local = (time - keys[j - 1].time) / (keys[j].time - keys[j - 1].time);
          key = new AnimKey();
          key.time = time;
          key.data = new float[3];
          lerp3(key.data, keys[j - 1].data, keys[j].data, local);
        } else
          key = keys[j];
        break;
      }
    }
    return key;
  }

  private void applyRot(float[] result, float[] data, float t) {
    if (Math.abs(data[3]) > 1.0e-7 && Math.hypot(Math.hypot(data[0], data[1]), data[2]) > 1.0e-7)
      Matrix.rotateM(result, 0, (float) (data[3] * t * 180 / Math.PI), data[0], data[1], data[2]);
  }

  public void Compute(float time) {
    int i, n = animations.size();
    for (i = 0; i < n; i++) {
      Animation anim = animations.get(i);
      Object3D obj = anim.object;
      float[] result = new float[16];
      Matrix.setIdentityM(result, 0);
      if (anim.position != null && anim.position.length > 0) {
        AnimKey key = findVec(anim.position, time);
        float[] pos = key.data;
        Matrix.translateM(result, 0, pos[0], pos[1], pos[2]);
      }
      if (anim.rotation != null && anim.rotation.length > 0) {
        // All rotations that are prior to the target time should be applied sequentially
        for (int j = anim.rotation.length - 1; j > 0; j--) {
          if (time >= anim.rotation[j].time)
            // rotation in the past, apply as is
            applyRot(result, anim.rotation[j].data, 1);
          else if (time > anim.rotation[j - 1].time) {
            // rotation between key frames, apply part of it
            float local = (time - anim.rotation[j - 1].time)
                / (anim.rotation[j].time - anim.rotation[j - 1].time);
            applyRot(result, anim.rotation[j].data, local);
          }
          // otherwise, it's a rotation in the future, skip it
        }
        // Always apply the first rotation
        applyRot(result, anim.rotation[0].data, 1);
      }
      if (anim.scaling != null && anim.scaling.length > 0) {
        AnimKey key = findVec(anim.scaling, time);
        float[] scale = key.data;
        Matrix.scaleM(result, 0, scale[0], scale[1], scale[2]);
      }
      if (anim.parent != null)
        Matrix.multiplyMM(anim.result, 0, anim.parent.result, 0, result, 0);
      else
        Matrix.translateM(anim.result, 0, result, 0, 0, 0, 0);
      if (obj != null && obj.trMatrix != null) {
        float[] pivot = new float[16];
        Matrix.setIdentityM(pivot, 0);
        Matrix.translateM(pivot, 0, -anim.pivot[0], -anim.pivot[1], -anim.pivot[2]);
        Matrix.multiplyMM(result, 0, pivot, 0, obj.trMatrix, 0);
      } else {
        Matrix.setIdentityM(result, 0);
        Matrix.translateM(result, 0, -anim.pivot[0], -anim.pivot[1], -anim.pivot[2]);
      }
      Matrix.multiplyMM(anim.world, 0, anim.result, 0, result, 0);
    }
  }


All of this was arrived at by trial and error on particularly tricky examples, and I still hesitate to vouch for absolute accuracy and correctness; the format is just too intricate. And that's without even using splines!

In addition, the loop over the models from the previous article now looks a little different:

  num = scene.animations.size();
  for (i = 0; i < num; i++) {
    Animation anim = scene.animations.get(i);
    Object3D obj = anim.object;
    if (obj == null)
      continue;
    Matrix.multiplyMM(mMVMatrix, 0, mVMatrix, 0, anim.world, 0);
    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mMVMatrix, 0);
    // Apply a ModelView Projection transformation
    GLES20.glUniformMatrix4fv(muMVPMatrix, 1, false, mMVPMatrix, 0);
    GLES20.glUniformMatrix4fv(muMVMatrix, 1, false, mMVMatrix, 0);
    for (j = 0; j < 3; j++)
      for (k = 0; k < 3; k++)
        mNMatrix[k*3 + j] = mMVMatrix[k*4 + j];
    GLES20.glUniformMatrix3fv(muNMatrix, 1, false, mNMatrix, 0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, obj.glVertices);
    GLES20.glVertexAttribPointer(maPosition, 3, GLES20.GL_FLOAT, false, 32, 0);
    GLES20.glVertexAttribPointer(maNormal, 3, GLES20.GL_FLOAT, false, 32, 12);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
    ...
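The nested loop in that snippet copies the upper-left 3x3 of the column-major model-view matrix to use as the normal matrix (which is valid as long as there is no non-uniform scaling; otherwise the inverse transpose would be needed). The same extraction in isolation:

```java
public class NormalMatrixDemo {
    // Copies the upper-left 3x3 block of a column-major 4x4 matrix
    static float[] upperLeft3x3(float[] mv) {
        float[] n = new float[9];
        for (int col = 0; col < 3; col++)
            for (int row = 0; row < 3; row++)
                n[col * 3 + row] = mv[col * 4 + row];
        return n;
    }

    public static void main(String[] args) {
        float[] mv = new float[16];
        for (int i = 0; i < 16; i++) mv[i] = i; // columns are 0-3, 4-7, 8-11, 12-15
        float[] n = upperLeft3x3(mv);
        System.out.println(n[3]); // first element of the second column -> 4.0
        System.out.println(n[8]); // last element of the third column  -> 10.0
    }
}
```

The copy skips the fourth column (the translation), which is exactly why normals are unaffected by the model's position.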


Further, everything is the same as before.

Conclusion


That's all; for the vast majority of cases this knowledge is quite sufficient, and I have mentioned the most painful pitfalls here and worked around them as best I could. Add texture loading and things will be quite good, but I'll leave that as homework.

And now, the promised ready-made sources: Scene3D (data structures) and Load3DS (the loader). Note that files are loaded from the root of the memory card ("/sdcard/"); I strongly recommend changing that to something more sensible.

Update: since normals have caused so much debate, code for handling smoothing groups has been added to the sources. The index buffers remain 2-byte, so be careful not to overflow them!

Source: https://habr.com/ru/post/144955/
