
Creating a multi-user touch-enabled interface for Windows


All-in-one touchscreen PCs (monoblocks) first appeared in the mid-1990s and have since found use in many fields. Unfortunately, the Windows application programming interface (API) does not yet provide ready-made facilities for building applications that two or more users operate on one such machine at the same time. That does not mean, however, that developers cannot build such interfaces themselves. When designing one, the most important things are that the application is easy to use and that it interprets input commands correctly.
Below we will look at which Windows API facilities can be used to develop touch-controlled applications, and demonstrate how to build a simple multi-user game with them.

Creating a user interface

Modern touchscreens cannot tell which of several users touched the display. In some cases this can be ignored. In a board game, for example, it does not matter which player moves the pieces across the board, since the players themselves keep track of that.
It is another matter when the application must correctly attribute commands to each player and support simultaneous interaction with the interface. In that case the developer has to choose the most reliable way of determining which player touched the screen. As a rule, the screen is divided into several zones, one per player, and each user is expected to stay within the boundaries of their own zone. There are, of course, other ways to build a multi-user interface (for example, heuristic methods), but they are not covered in this article.

Windows API
Windows supports two basic forms of touch input. The first is the WM_TOUCH message, which carries only raw touch data, so the application has to interpret and act on the incoming input itself.
The second is WM_GESTURE. This message is generated when the user performs one of the standard Windows gestures, such as zooming by spreading or pinching the fingers, dragging elements, and so on. These messages simplify the developer's job considerably, since the operating system itself recognizes the gestures performed on the screen and hands them over in a form that is convenient to process.
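For reference, handling a WM_GESTURE message inside the window procedure typically looks roughly like the sketch below (hwnd, message, wparam and lparam are the usual window-procedure parameters; this fragment is an illustration only and is not part of the sample game described later).

case WM_GESTURE:
{
    GESTUREINFO gi;

    ZeroMemory(&gi, sizeof(gi));
    gi.cbSize = sizeof(GESTUREINFO);

    if (GetGestureInfo((HGESTUREINFO)lparam, &gi))
    {
        switch (gi.dwID)
        {
        case GID_ZOOM:
            /* gi.ullArguments carries the distance between the two fingers. */
            break;
        case GID_PAN:
            /* gi.ptsLocation carries the current position of the pan.       */
            break;
        }
        CloseGestureInfoHandle((HGESTUREINFO)lparam);
        return 0;
    }
    return DefWindowProc(hwnd, message, wparam, lparam);
}
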
Currently Windows recognizes gestures from only one user (one hand) at a time, so WM_GESTURE messages cannot be used to handle the touches of several players in our application.
It is also important to remember that the API reports touch coordinates relative to the entire screen (measured in hundredths of a pixel), not relative to the program window. If you need coordinates inside the window, convert them with the ScreenToClient() function.
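The following minimal sketch (not taken from the article's sample code) shows how a window is registered to receive WM_TOUCH and how one TOUCHINPUT coordinate pair, delivered in hundredths of a screen pixel, can be converted to client-window pixels. It requires targeting Windows 7 or later (_WIN32_WINNT >= 0x0601).

#include <windows.h>

void enable_touch_input(HWND hwnd)
{
    /* After this call the window receives WM_TOUCH instead of WM_GESTURE. */
    RegisterTouchWindow(hwnd, 0);
}

void touch_to_client(HWND hwnd, const TOUCHINPUT *ti, POINT *pt)
{
    /* TOUCH_COORD_TO_PIXEL divides by 100, giving screen pixels. */
    pt->x = TOUCH_COORD_TO_PIXEL(ti->x);
    pt->y = TOUCH_COORD_TO_PIXEL(ti->y);

    /* Translate from screen coordinates to coordinates inside our window. */
    ScreenToClient(hwnd, pt);
}
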

Sample application
As an example, we decided to recreate the classic table-tennis game Pong and give it touch controls. The players move their paddles vertically, deflecting the ball back to the opponent's side. When the ball reaches one of the vertical borders of the field, the player on the opposite side scores a point.
The ball moves across the screen diagonally. After each move the application checks whether it has hit an obstacle. If the ball hits the top or bottom edge of the playing field, its vertical direction changes according to the angle of the collision. If it hits a paddle or a vertical border of the window, its horizontal direction is reversed as well.
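A minimal sketch of those bounce rules is shown below; the ball_t structure and the hit_paddle() helper are illustrative assumptions, not the structures used in the article's sample code.

#include <windows.h>   /* for the RECT and BOOL types */

typedef struct {
    int x, y;     /* top-left corner of the ball, in pixels */
    int dx, dy;   /* current velocity along each axis       */
    int size;     /* diameter of the ball                   */
} ball_t;

/* hit_paddle() is assumed to report whether the ball currently overlaps a paddle. */
void step_ball(ball_t *b, const RECT *field, BOOL (*hit_paddle)(const ball_t *))
{
    b->x += b->dx;
    b->y += b->dy;

    /* Top or bottom edge of the field: reverse the vertical direction. */
    if (b->y <= field->top || b->y + b->size >= field->bottom)
        b->dy = -b->dy;

    /* A paddle or a vertical border: reverse the horizontal direction.
       Reaching a vertical border is also where a point is awarded.     */
    if (hit_paddle(b) || b->x <= field->left || b->x + b->size >= field->right)
        b->dx = -b->dx;
}
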
Ball movement is handled in a separate thread of the application so that the animation stays smooth while touch commands are being processed.
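A sketch of that approach is shown below; the game_state_t structure and the commented-out update call are illustrative assumptions, not the article's code.

#include <windows.h>

typedef struct {
    HWND hwnd;               /* main application window                   */
    volatile LONG running;   /* cleared by the UI thread to stop the loop */
    /* ... ball, paddle and score state would live here as well ...       */
} game_state_t;

static DWORD WINAPI ball_thread(LPVOID param)
{
    game_state_t *state = (game_state_t *)param;

    while (state->running)
    {
        /* update_ball(state);  -- move the ball and handle collisions here  */
        InvalidateRect(state->hwnd, NULL, FALSE);  /* request a repaint       */
        Sleep(10);                                 /* ~100 updates per second */
    }
    return 0;
}

/* Started once after the main window has been created, for example:
 *     CreateThread(NULL, 0, ball_thread, &game_state, 0, NULL);
 */
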
Graphics are rendered with standard Windows graphics device interface (GDI) calls: the paddles are drawn as plain rectangles, the ball as a circle, and the match score is displayed as text in the background. No raster images or other graphics libraries were used. An example of the interface is shown in the figure below; click on it to watch a video of the application.

Figure 1. Standard interface for two users
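A sketch of the drawing code along the lines described above, using only stock GDI calls, might look as follows; the paddle rectangles, ball position and score string passed in are illustrative assumptions.

#include <windows.h>
#include <tchar.h>

void paint_frame(HWND hwnd, const RECT *left_paddle, const RECT *right_paddle,
                 int ball_x, int ball_y, int ball_size, const TCHAR *score)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    RECT client;

    GetClientRect(hwnd, &client);

    /* Match score drawn as plain text in the background. */
    DrawText(hdc, score, -1, &client, DT_CENTER | DT_TOP | DT_SINGLELINE);

    /* Paddles are ordinary filled rectangles... */
    FillRect(hdc, left_paddle,  (HBRUSH)GetStockObject(BLACK_BRUSH));
    FillRect(hdc, right_paddle, (HBRUSH)GetStockObject(BLACK_BRUSH));

    /* ...and the ball is just a circle. */
    Ellipse(hdc, ball_x, ball_y, ball_x + ball_size, ball_y + ball_size);

    EndPaint(hwnd, &ps);
}
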

Interface for two users with support for touch control

To make the game register each user's touches, we divided the screen into two halves: player 1 on the left, player 2 on the right.
Each time a WM_TOUCH message arrives, the program receives the complete list of touch points (Code Example 1). The list can contain from one to several entries (the maximum number of points depends on the capabilities of the device). The program then iterates over the list to obtain the X and Y coordinates of every touch.

Code Example 1
…
bret = GetTouchInputInfo(
    (HTOUCHINPUT)lparam,
    data.geometry_data.touch_inputs_count,
    data.geometry_data.touch_inputs,
    sizeof(TOUCHINPUT)
);
assert(bret != FALSE);
//
// Iterate over every touch point reported in this message
//
for (i = 0; i < data.geometry_data.touch_inputs_count; i++)
{
    touch_input = &data.geometry_data.touch_inputs[i];
    //
    // Convert the coordinates from hundredths of a pixel to whole pixels
    //
    x = touch_input->x / 100;
    y = touch_input->y / 100;
…

Next, you need to determine on which side of the screen the touch occurred, and then update the vertical position of the corresponding paddle so that it matches the position of the player's finger (Code Example 2). This is what moves the paddle across the field.

Code Example 2
…
if (x < (ULONG)((data.graphic_data.rect.right - data.graphic_data.rect.left) / 2))
{
    //
    // The touch is in the left half of the window: move player 1's paddle
    //
    data.geometry_data.p1y = y;
}
else
{
    data.geometry_data.p2y = y;
}
bret = move_paddles(&data);
assert(bret == TRUE);
…

Since the application processes every incoming touch, the players can not only tap the screen simultaneously but also place several fingers on it at once. The paddle position is set by the last registered touch.

Adaptation of the application for four users
To adapt the application for four users, the playing field and the players' active zones have to change.
We added two more paddles and two more score displays at the top and bottom of the screen. The boundaries between the play zones now form the letter "X", dividing the application window into four triangles (Figure 2).


Figure 2. The division of play areas for four players

Figure 3 shows how the application determines which zone of the screen was touched.


Figure 3. Defining the touch command zone

With the screen divided into four zones, determining which one was touched becomes a little more involved (Code Example 3).

Code Example 3
if (touch_point.y > (LONG)(((float)data.graphic_data.rect.bottom / data.graphic_data.rect.right) * touch_point.x))
{
    //
    // Below the top-left-to-bottom-right diagonal:
    // the touch is in the bottom or the left triangle
    //
    if (touch_point.y > (LONG)((((float)data.graphic_data.rect.bottom / data.graphic_data.rect.right) * -1) * touch_point.x) + data.graphic_data.rect.bottom)
    {
        data.geometry_data.p4x = touch_point.x;   // bottom triangle: horizontal paddle
    }
    else
    {
        data.geometry_data.p1y = touch_point.y;   // left triangle: player 1's paddle
    }
}
else
{
    //
    // Above that diagonal: the touch is in the top or the right triangle
    //
    if (touch_point.y < (LONG)((((float)data.graphic_data.rect.bottom / data.graphic_data.rect.right) * -1) * touch_point.x) + data.graphic_data.rect.bottom)
    {
        data.geometry_data.p3x = touch_point.x;   // top triangle: horizontal paddle
    }
    else
    {
        data.geometry_data.p2y = touch_point.y;   // right triangle: player 2's paddle
    }
}
…


Conclusion

When creating multi-user touch applications for Windows 7 and later, use the WM_TOUCH and WM_GESTURE messages. Once you have settled on a reliable way to determine which play zone a touch belongs to, developing an application even for four simultaneous players is not difficult.

Source: https://habr.com/ru/post/238233/

