Here, I hope, is the long-awaited continuation of the series of notes on the QNX real-time operating system. This time I would like to talk about Qnet, the proprietary QNX network protocol. Let me clarify right away that in addition to the native Qnet network, QNX also supports the TCP/IP protocol stack, which administrators of Unix-like systems should generally be familiar with. Therefore, in this note I first say a little about the io-pkt network manager, and then cover the Qnet protocol in more detail. Along the way, four lyrical digressions and one technical digression await us.

The QNX network subsystem consists of the io-pkt* manager (built using QNX resource manager technology), device modules (drivers) such as devnp-e1000.so, protocol modules such as lsm-qnet.so, and utilities such as ifconfig and nicinfo. By the way, QNX has three network managers: io-pkt-v4, io-pkt-v4-hc and io-pkt-v6-hc. The suffix v4 means that the manager supports only IPv4, while the v6 version supports both IPv4 and IPv6. The suffix hc (high capacity) denotes an extended version with support for encryption and Wi-Fi. For this reason the literature sometimes uses the name io-pkt*, but we will call the manager simply io-pkt (without the asterisk), since in our case it does not matter which version of the TCP/IP stack we are talking about: this note is about Qnet.

A lyrical digression. Earlier versions of QNX6 used the io-net manager, and the TCP/IP protocol module was not linked into io-net but was a standalone module, just like lsm-qnet.so. Although wait, the modules had a different prefix back then: the TCP/IP modules were called npm-tcpip-v4.so and npm-tcpip-v6.so, and Qnet was npm-qnet.so. The latter is not entirely accurate either: in ancient times (the QNX 6.3.2 era) there were two Qnet modules, npm-qnet-compat.so (for compatibility with older versions of QNX6) and npm-qnet-l4_lite.so (whose functionality is provided in QNX 6.5.0 by the lsm-qnet.so module). By the way, npm stands for Network Protocol Module, and lsm stands for Loadable Shared Module.

One more digression. In the days of io-net, network driver modules carried the proud prefix devn-. Drivers for io-pkt have a different prefix, devnp-. Older devn drivers can also be attached to io-pkt; in that case a compatibility shim module is used automatically.
The architecture of io-pkt is shown in Figure 1. At the bottom level are the drivers for wired and wireless networks. These are loadable modules (DLLs, shared libraries). It is worth noting that transmission media other than Ethernet are also supported. For example, the devn-fd.so driver makes it possible to transmit and receive data through a file descriptor (fd is just a file descriptor), so you can run the network over, say, a serial port. The speed, of course, will be correspondingly low, but sometimes this is a real lifesaver. Device drivers connect to the second-level multi-threaded component (the stack). The stack, first, provides bridging and relaying. Second, it provides a unified interface for packet management and for processing the IP, TCP and UDP protocols. At the top level is the resource manager, which implements message passing between the stack and user applications, i.e. provides the open(), read(), write() and ioctl() functions. The libsocket.so library converts the io-pkt messaging interface to the BSD socket API, which is the standard for most modern networking code.

Here is how io-pkt can be started with support for the Qnet protocol:

io-pkt-v4-hc -d e1000 -p qnet

The -d option specifies the network controller driver devnp-e1000.so, and the -p option loads the Qnet module. In fact, when QNX is installed from the installation disk, the network is started automatically from the startup scripts; a manual start is usually required when QNX is embedded or when you want to reduce the system boot time. A slightly richer example:

io-pkt-v4-hc -d e1000 -p qnet bind=ip,resolve=dns

If io-pkt was launched without Qnet support, the lsm-qnet.so module can be mounted later, for example:

mount -Tio-pkt lsm-qnet.so

More information about io-pkt can be found in the QNX help system, so let us proceed to the main topic of this note.

Once Qnet is running, the file systems of remote nodes appear under /net, and ordinary commands work transparently across the network:

ls -l /net/zvezda
vi /net/zvezda/etc/rc.d/rc.local
qtalk -m /net/zvezda/dev/ser1
phcalc -s /net/zvezda/dev/photon

The last example runs the Photon calculator with a remote graphics server (note the -s option). In this case, the application runs on the local node, but its graphical window is displayed on the zvezda node. The local node does not even need to start the graphics subsystem. This can be convenient in some cases, for example, when a node has no graphics controller but a graphical display of data collected from sensors, or of system settings, is required. This approach can also offload the central processor of the node running the graphics server.

A lyrical digression. In QNX4, with its FLEET network protocol, it was possible to run the Tcpip manager on only one node of the network and direct all requests from the socket library to that node. Do not be alarmed: in QNX6 this is usually not done, and in the case of io-pkt there is no point in it anyway, because the TCP/IP stack is inseparable from io-pkt.

Many QNX utilities support the -n option, which tells them to work on a remote node or to collect information from a remote node. For example, you can get a list of processes running on another node, together with their command-line arguments, like this:

pidin -n zvezda arg

If a utility does not support the -n option, the regular on utility comes in handy; it is intended precisely for running applications on another node, for example:

on -f zvezda ls

Here the ls utility is run on the zvezda node, and its output is displayed in the current terminal.

The on utility provides rich possibilities for controlling the parameters of launched processes. You can not only start processes on a remote node, but also change their priority and scheduling discipline, start processes on behalf of another user, and even bind a process to a specific processor core. More details can be found in the help for this utility.
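By the way, a program can achieve a similar remote launch itself: the inheritance structure passed to the spawn() call can carry a node descriptor. Below is a minimal sketch of this idea (it assumes Qnet is running and the node zvezda from the examples above is visible; this is an illustration of the API, not a reconstruction of how the on utility is implemented):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <spawn.h>        /* spawn(), struct inheritance, SPAWN_SETND */
#include <sys/netmgr.h>   /* netmgr_strtond() */

extern char **environ;

int main(void)
{
    struct inheritance inherit;
    char *args[] = { "ls", "-l", NULL };

    memset(&inherit, 0, sizeof(inherit));

    /* Convert the node name to a node descriptor and ask spawn()
       to create the process on that node. */
    int nd = netmgr_strtond("/net/zvezda", NULL);
    if (nd == -1) {
        perror("netmgr_strtond");
        return EXIT_FAILURE;
    }
    inherit.nd = nd;
    inherit.flags = SPAWN_SETND;

    /* fd_count = 0 and fd_map = NULL: the child inherits our
       open file descriptors, so its output lands in our terminal. */
    if (spawn("/bin/ls", 0, NULL, &inherit, args, environ) == -1) {
        perror("spawn");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}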
The slay utility also supports the -n option. For example:

slay -n zvezda io-usb

With slay you can send any signal, not only SIGTERM.

Now let us look at Qnet from the programmer's point of view. Here is a small program that writes a string to a file:

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>

int main(int argc, char *argv[])
{
    int fd;
    char str[] = "This is a string.\n";

    if (argc < 2) {
        printf("Please specify file name.\n");
        exit(1);
    }
    if ((fd = open(argv[1], O_RDWR | O_CREAT, 0644)) < 0) {
        perror("open()");
        exit(1);
    }
    write(fd, str, sizeof(str) - 1);
    close(fd);
    return 0;
}

This program works equally well with the name /tmp/1.txt and with the name /net/zvezda/tmp/1.txt. The same is true for devices: the following two code fragments are identical from the programmer's point of view (the only difference is in the file name):

fd = open("/dev/ser1", O_RDWR);
fd = open("/net/zvezda/dev/ser1", O_RDWR);

POSIX functions such as open() spawn a series of low-level microkernel calls that control message exchange: ConnectAttach(), MsgSend(), and so on. The program code required for network interaction is identical to the code used locally. The only difference is the path name: in the case of networking, the path name contains the node name. The prefix with the node name is converted into a node descriptor, which is later used in the low-level ConnectAttach() call. Each node in the network has its own descriptor, and file-system path names are used to look the descriptor up. On a single machine, the lookup result is a node descriptor, a process ID and a channel ID. In a Qnet network the result is the same, except that the node descriptor is non-zero (i.e. not equal to ND_LOCAL_NODE, which is what indicates a remote connection). However, all these calls are hidden, and there is no need to care about them if you simply use open().
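That said, the node-name-to-descriptor conversion just described is also available to applications explicitly, through the netmgr API. Here is a minimal sketch (the node name /net/zvezda is the example node used earlier; this illustrates the API calls, not how open() is implemented internally):

#include <stdio.h>
#include <stdlib.h>
#include <sys/netmgr.h>   /* netmgr_strtond(), netmgr_ndtostr() */

int main(void)
{
    char name[256];

    /* Convert a node name into a node descriptor... */
    int nd = netmgr_strtond("/net/zvezda", NULL);
    if (nd == -1) {
        perror("netmgr_strtond");
        return EXIT_FAILURE;
    }

    /* ...and back into a printable node name (0 = default flags). */
    if (netmgr_ndtostr(0, nd, name, sizeof(name)) == -1) {
        perror("netmgr_ndtostr");
        return EXIT_FAILURE;
    }

    printf("node descriptor %d corresponds to %s\n", nd, name);
    return EXIT_SUCCESS;
}

The descriptor obtained this way is exactly the value that goes into the first argument of ConnectAttach().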
A technical digression: what actually happens when the open() function is called? Suppose an application on node1 needs to use the serial port /dev/ser1 on node2. Figure 2 shows the operations performed when the application calls open() with the name /net/node2/dev/ser1.

1. The application asks its local process manager to resolve the name /net/node2/dev/ser1. Since lsm-qnet.so is responsible for the /net namespace, the process manager returns a redirect message indicating that the application should contact the local io-pkt network manager.

2. The application sends the same name-resolution request to the local io-pkt network manager. The local network manager returns a redirect message containing the node descriptor, process ID and channel ID of the process manager on node2.

3. The application connects to the process manager on node2 and sends the name-resolution request once again. The process manager on node2 in turn returns yet another redirect message, this time containing the node descriptor, process ID and channel ID of the serial port driver on its own node (node2).

4. The application connects to the serial port driver on node2 and obtains a connection ID that can be used for further message exchange (for example, when calling read(), write() and other POSIX functions).

5. Messages are now exchanged directly between the application on node1 and the serial port driver (which, by the way, is also an ordinary application in QNX) on node2.

When the POSIX open() function is used, all these low-level calls are hidden; the result of the open() call is either a file descriptor or an error code. Note how the name shrinks as it is resolved: in step 1 the request carries the full name /net/node2/dev/ser1, in step 3 the name contains only dev/ser1, and in step 4, only ser1.
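The result of this resolution can be observed from an application: the kernel can report which server a file descriptor ended up connected to. Below is a small sketch along those lines (the path /net/node2/dev/ser1 is the example from the figure, and the snippet assumes Qnet is up and node2 is reachable; ConnectServerInfo() is the Neutrino call used here purely for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/neutrino.h>   /* ConnectServerInfo(), struct _server_info */
#include <sys/netmgr.h>     /* ND_LOCAL_NODE */

int main(void)
{
    struct _server_info info;

    int fd = open("/net/node2/dev/ser1", O_RDWR);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Ask the kernel which server this connection (the file
       descriptor) was finally attached to after all redirects. */
    if (ConnectServerInfo(0, fd, &info) == -1) {
        perror("ConnectServerInfo");
        close(fd);
        return EXIT_FAILURE;
    }

    printf("server: nd=%d pid=%d chid=%d (%s)\n",
           (int)info.nd, (int)info.pid, (int)info.chid,
           info.nd == ND_LOCAL_NODE ? "local" : "remote");

    close(fd);
    return EXIT_SUCCESS;
}

For a purely local open of /dev/ser1 the same code would report a node descriptor equal to ND_LOCAL_NODE, which is exactly the difference between the local and the networked case described above.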
Qnet also makes it possible to pin a connection to a specific network interface directly in the path name. For example, to access the serial port on the zvezda node strictly through the en0 interface, the name would be as follows:

/net/zvezda~exclusive:en0/dev/ser1

Source: https://habr.com/ru/post/279679/