
Winter has practically arrived in Sydney, and I managed to catch the flu (the ordinary kind, not swine). Add work, Mother's Day and the rest, and time is short; in a word, we will move quickly. But before embarking on the creation of a "unique" three-dimensional world, we will master the concepts of moving through 3D space.
We will write the event-handling code that lets us walk around "on the floor." With the help of touches we will turn left and right and move forward and backward. We can do without running, head turns and strafing, although they are easy to add. These restrictions both keep the presentation simple and let those who have no iPod Touch or iPhone achieve the same results in the simulator.
To get started, download the base project here.
There is not a lot of code - basically explanations of what is happening and how.
Mythical camera
Most people perceive 3D worlds as a space viewed through a camera, but in OpenGL there is no camera as such. To create the illusion of movement around the scene relative to the origin (0, 0, 0), the objects are moved, not a camera, as would happen in a movie.
The process may seem laborious, but it is not. Depending on the application there are many ways to approach it, and even more optimizations for really big worlds; I will touch on this briefly a little later.
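As a rough sketch of the idea (the camera variables here are hypothetical, purely for illustration): instead of moving a camera, each frame you apply the inverse of the "camera" transform to the entire scene before drawing it.

// A minimal sketch of the "no camera" idea: cameraYaw, cameraX,
// cameraY and cameraZ are hypothetical values describing where the
// viewer "is"; the world is transformed by their inverse.
glLoadIdentity();
glRotatef(-cameraYaw, 0.0f, 1.0f, 0.0f);    // undo the viewer's rotation
glTranslatef(-cameraX, -cameraY, -cameraZ); // undo the viewer's position
// ... draw the scene with its usual modelling transforms ...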
To simplify the work a bit, I attached to the lesson a convenient toy from OpenGL ES's "big brother": the GLU library, specifically its "gluLookAt()" function.
Although I rarely mention desktop OpenGL in these articles, I think almost everyone is familiar with the GLU library. Unfortunately it is not included in the OpenGL ES specification, but that does not mean we cannot use the parts that are useful to us. There is no need to port the whole library; pick out only the functions that matter for you.
I took the function "
gluLookAt () " from the
SGI Open Source release. The choice is explained solely by the fact that it was at hand, and I am familiar with the principles of its work. The license for the function is here in the code (I am not the author of the code). For those who are not satisfied with this option for one reason or another, there are plenty of alternatives from open sources.
If you decide to work with other code or import other functions, do not forget to change every "GLdouble" to "GLfloat" and to switch all the gl calls to their floating-point versions. Another general recommendation is to avoid anything tied to the user interface (input functions, windowing). There are plenty of points to watch for, but most of them are fairly obvious.
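For example, a small desktop-GLU-style helper that normalizes a vector might be ported like this (a hedged sketch of my own, not code from the SGI release):

#include <math.h>

// Desktop GLU would write this with GLdouble and sqrt();
// for OpenGL ES we use GLfloat and sqrtf() instead.
static void normalizef(GLfloat v[3])
{
    GLfloat r = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    if (r == 0.0f)
        return; // avoid dividing by zero for a degenerate vector
    v[0] /= r;
    v[1] /= r;
    v[2] /= r;
}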
For professional purposes, look for the latest free release. I note that Mesa's version is not recommended by its own creators: it is not updated, and active development is suspended. I know there is Mesa GLU code for the iPhone on the Internet, but it is not suitable for professional use (read: it contains bugs).
If anyone is wondering why the developers recommend SGI's code or other solutions instead of their own library, look for the details on the Mesa website.
Working with gluLookAt()
Once you get to know "gluLookAt()", you will definitely appreciate its simplicity and convenience. Let's look at the prototype:
void gluLookAt( GLfloat eyex,
GLfloat eyey,
GLfloat eyez,
GLfloat centerx,
GLfloat centery,
GLfloat centerz,
GLfloat upx,
GLfloat upy,
GLfloat upz)
I agree, nine parameters can feel like a lot, but they are easy to untangle. The first three describe the viewer's position (simply its X, Y, Z coordinates). The next three describe the point being looked at (again a trio: X, Y, Z). The last three form the "up" vector. We will not deal with it for now, since the first two triples give the effect we want.
The viewer ("eye") coordinates are our mythical camera. Naturally they are expressed in world space; in fact, this is the point from which you watch what is happening. The "center" coordinates describe the direction of the gaze, i.e. its target. If the "center" Y coordinate is above the eye's Y coordinate, the user looks up; if below, then down.
Our base project is already set up, but there is no movement yet. We draw the floor and stare at a fixed spot:

This is what happens when you click the "Build and Go" button.
To begin, let's try out the "gluLookAt()" function. Go to the "drawView:" method and, after the call to "glLoadIdentity()", add the code below:
glLoadIdentity();
gluLookAt(5.0, 1.5, 2.0,    // viewer position, the "eye"
          -5.0, 1.5, -10.0, // look-at point, the "center"
          0.0, 1.0, 0.0);   // the "up" vector
Click the "
Build and Go " button again, happy to make sure everything works. The result in the simulator should be as follows:

With that single call we turned the view from one corner to the opposite one. Experiment with the "gluLookAt()" parameters and watch what happens.
Moving in 3D
Now that we have an idea of "gluLookAt()", I propose to reproduce a walk across the floor. In effect we will move along two axes (X and Z, that is, without changing height), changing direction by rotating.
Recalling the "gluLookAt()" function, what information do you think is needed for walking through three-dimensional space?
You will need:
the viewer's location, "eye";
the direction of sight (the target), "center".
Knowing these two inputs, we are ready to process information from the user and let him control his position in space.
Suppose we start with the two quantities involved earlier. The hard-coded values so far do not let us move, so first go to the interface and add the following variables:
GLfloat eye[3];    // viewer position (X, Y, Z)
GLfloat center[3]; // look-at point (X, Y, Z)
The names "
eye " and "
center ", if desired, can be completely replaced with "
position " and "
facing " - this does not matter (I just used the terms of the function "
gluLookAt () ").
Two variables contain the X, Y, and Z coordinates. The value of Y can be hard-coded in the code, since it does not change, but I decided to do without unnecessary movements.
Moving on to the "
initWithCoder: "
method . Here we initialize two variables with the values ​​used earlier to refer to
gluLookAt () :
eye[0] = 5.0;
eye[1] = 1.5;
eye[2] = 2.0;
center[0] = -5.0;
center[1] = 1.5;
center[2] = -10.0;
Go back to the "
drawView: " method. Call "
gluLookAt () " change to:
gluLookAt(eye[0], eye[1], eye[2], center[0], center[1], center[2],
0.0, 1.0, 0.0);
For complete peace of mind, click the "Build and Go" button and make sure everything still works.
Getting ready to move
Before we can handle the events that move us through space, we need to set up a few things in the header file. Switch to it to define several defaults and create a new enumerated type.
To get started, let's settle on the speed of walking and turning:
#define WALK_SPEED 0.005
#define TURN_SPEED 0.01
These values seem somewhat slow to me, so once you understand how they are used, feel free to substitute your own. Note that TURN_SPEED ends up as the argument to cos() and sin() below, so it is an angle in radians per frame (0.01 rad is roughly 0.57 degrees).
The next step is an enumerated type to record exactly what action is in progress. Add the following:
typedef enum __MOVEMENT_TYPE {
    MTNone = 0,
    MTWalkForward,
    MTWalkBackward,
    MTTurnLeft,
    MTTurnRight
} MovementType;
Now, while the application runs, we can stand still (MTNone), walk forward or backward, and turn left or right. For now, I am afraid, we will have to limit ourselves to that.
It remains to declare a variable holding the current movement:
MovementType currentMovement;
Do not forget to go back to the "initWithCoder:" method and set the default value for the "currentMovement" variable:
currentMovement = MTNone;
The variable would hold this value by default anyway, but setting it explicitly is good practice.
Touch me
Having dealt with the basics, we can proceed to the actual processing of touches. If you remember, in the last lesson I presented all four touch-handling methods. This time, for simplicity, we will use only two: "touchesBegan" and "touchesEnded".
To determine the action taken, I divided the iPhone screen into four zones:

The standard screen height is 480 pixels. We divide it into three equal bands of 160 pixels each. Pixels 0 to 160 correspond to walking forward, pixels 320 to 480 to walking backward, and the central 160 are split into left and right halves for turning.
Now let's look at the first touch method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *t = [[touches allObjects] objectAtIndex:0];
    CGPoint touchPos = [t locationInView:t.view];

    // The iPhone screen is split into four zones:
    // the top band walks forward, the bottom band walks
    // backward, and the middle band turns left or right.
    //
    // (0, 0)
    // +-----------+
    // |           |
    // |  forward  |
    // |-----------| 160
    // |     |     |
    // | left|right|
    // |-----------| 320
    // |           |
    // | backward  |
    // +-----------+ (320, 480)
    //
    if (touchPos.y < 160) {
        // top zone: walk forward
        currentMovement = MTWalkForward;
    } else if (touchPos.y > 320) {
        // bottom zone: walk backward
        currentMovement = MTWalkBackward;
    } else if (touchPos.x < 160) {
        // middle band, left half: turn left
        currentMovement = MTTurnLeft;
    } else {
        // middle band, right half: turn right
        currentMovement = MTTurnRight;
    }
}
When the user touches the screen, we simply note which zone was hit and store it in our variable, so that we know what to do when the time comes to compute a new position. Remember that there is no need to declare this method in the interface; such methods are inherited.
It is the turn of the "
touchesEnded " method.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
currentMovement = MTNone;
}
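As an optional extra (not part of the original lesson), the same reset also fits naturally in "touchesCancelled:withEvent:", so that the walk stops if the system interrupts the touch, for example with an incoming call:

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Treat a cancelled touch like a lifted finger: stop moving.
    currentMovement = MTNone;
}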
Both are self-explanatory. Now we need a method that acts on the recorded touch data. This one does require a declaration in the interface, so switch to the header file and add the following:
- (void)handleTouches;
Back in the implementation file, we proceed to write it. In this method we will calculate the displacement through three-dimensional space.
Theory of movement in 3D
Let's start with the basic concepts. I am sure it will surprise no one that this is just one of several methods for computing new locations in three-dimensional space after any number of moves along a vector v. Unfortunately I do not remember who described it first (perhaps Arvo). In any case, it was long ago, even before Wolf 3D showed how it could be done in real time.
First, consider walking. When the user signals the desire to go forward, we need both points: the view from the viewer's location tells us where we currently are, and the view toward the target determines the direction of travel.
A picture is worth a thousand words: take a look at the image below, which shows the eye point and the look-at target.

With this method of moving, the offset between the two points is a delta for the X coordinates and a delta for the Z coordinates. To get the new X and Z values, we scale those deltas by the "speed" value and add them to the current coordinates. Like this:

We can easily calculate the new coordinates for the red point.
We start with deltaX and deltaZ:

deltaX = 1.5 - 1.0 = 0.5
deltaZ = -10.0 - (-5.0) = -5.0

Multiply by the walking speed (taken as 0.01 for this example):

xDisplacement = deltaX * WALK_SPEED
              = 0.5 * 0.01
              = 0.005

zDisplacement = deltaZ * WALK_SPEED
              = -5.0 * 0.01
              = -0.05

Accordingly, the new coordinate shown as the red point in the figure above is:

(eyex + xDisplacement, eyey, eyez + zDisplacement)
= (1.0 + 0.005, eyey, -5.0 + (-0.05))
= (1.005, eyey, -5.05)

I note that the proposed method is not without flaws. The main problem is that the greater the distance between the viewer and the look-at point, the higher the "walking speed". Still, it solves our problem, and in terms of CPU cost it is cheaper than many other movement algorithms.
Our world is small, but in a real application the distance between the viewer and the look-at point can become huge, so be sure to experiment. As it turns out, the speed of movement depends directly on both the distance between the two points and the value of "WALK_SPEED".
It remains to consider turning left and right.
I often come across code in which programmers conscientiously track the angle at which the scene is rendered. That is not our case. The working angle is already known to us, since we know two points (remember Pythagoras: we have a right triangle).
Take a look at the picture:

To initiate a turn, we only need to move the look-at point along a circle. Our "TURN_SPEED" definition is, in fact, the angle of rotation. The key to what is happening: there is no need to adjust the viewer's coordinates; only the look-at point changes. By placing the target at a new spot on a virtual circle in front of our eyes (i.e. stepping the angle by "TURN_SPEED"), we get a new facing direction.
Since a turn traces a circle whose center is the viewer's location, it suffices to recall the principles of drawing a circle.
In other words, it all comes down to:

newX = eyeX + cos(TURN_SPEED) * deltaX - sin(TURN_SPEED) * deltaZ
newZ = eyeZ + sin(TURN_SPEED) * deltaX + cos(TURN_SPEED) * deltaZ

(The radius is already baked into the delta values, so no separate radius term is needed.)
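Written as a small self-contained helper (a sketch using the lesson's eye/center layout; the switch statement below does the same thing inline):

// Rotate the gaze by 'angle' radians about the viewer.
// A positive angle turns one way, a negative angle the other.
static void turnGaze(GLfloat angle, GLfloat eye[3], GLfloat center[3])
{
    GLfloat dx = center[0] - eye[0];
    GLfloat dz = center[2] - eye[2];
    center[0] = eye[0] + cosf(angle) * dx - sinf(angle) * dz;
    center[2] = eye[2] + sinf(angle) * dx + cosf(angle) * dz;
}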
Event handling with conversion to motion
Let's try it out in practice.
Returning to the implementation, we start from the recorded touch and produce new parameters for "gluLookAt()". Here is the beginning of the method and a couple of ground rules:
- (void)handleTouches {
    if (currentMovement == MTNone) {
        // no touch in progress, nothing to do
        return;
    }
First we check whether any movement is happening at all. If not, there is nothing more to do.
Regardless of whether we are walking or turning, we need the values of "deltaX" and "deltaZ". I store them in a variable called "vector":
GLfloat vector[3];
vector[0] = center[0] - eye[0];
vector[1] = center[1] - eye[1];
vector[2] = center[2] - eye[2];
I compute the Y delta too, although we do not actually need it.
Now we determine what action to take. Everything fits in one switch statement:
switch (currentMovement) {
case MTWalkForward:
eye[0] += vector[0] * WALK_SPEED;
eye[2] += vector[2] * WALK_SPEED;
center[0] += vector[0] * WALK_SPEED;
center[2] += vector[2] * WALK_SPEED;
break;
case MTWalkBackward:
eye[0] -= vector[0] * WALK_SPEED;
eye[2] -= vector[2] * WALK_SPEED;
center[0] -= vector[0] * WALK_SPEED;
center[2] -= vector[2] * WALK_SPEED;
break;
case MTTurnLeft:
center[0] = eye[0] + cos(-TURN_SPEED)*vector[0] -
sin(-TURN_SPEED)*vector[2];
center[2] = eye[2] + sin(-TURN_SPEED)*vector[0] +
cos(-TURN_SPEED)*vector[2];
break;
case MTTurnRight:
center[0] = eye[0] + cos(TURN_SPEED)*vector[0] - sin(TURN_SPEED)*vector[2];
center[2] = eye[2] + sin(TURN_SPEED)*vector[0] + cos(TURN_SPEED)*vector[2];
break;
}
}
That is the entire touch-processing method: the implementation is exactly the algorithm we have just discussed.
Bringing it together
Return to the "
drawView " method and before calling "
gluLookAt (): " add the following line:
[self handleTouches];
[self handleTouches];
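Putting the pieces together, the top of "drawView:" now looks something like this (a sketch; the rest of the drawing code is unchanged):

[self handleTouches];
glLoadIdentity();
gluLookAt(eye[0], eye[1], eye[2],
          center[0], center[1], center[2],
          0.0, 1.0, 0.0);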
All is ready!
You can click on the "
Build and Go " button - right now!
The source code for the lesson can be downloaded here.