
Mongoose Embedded Compact Web Server

When developing projects in C/C++, it is often necessary to communicate with external systems or serve data to clients over HTTP. Examples include any web service, as well as any device with a web interface: a router, a video surveillance system, and so on.

What do people usually do in this case? That's right, they take the well-trodden path: Apache/nginx + PHP. And then the pain begins, because:

1. All of this needs to be installed and configured.
2. All of this consumes a decent amount of resources.
3. PHP somehow needs to get data out of the system being developed. If you are lucky, it is enough to simply query the database.
Therefore, like many other developers, I suspect, I have a strong desire to build all of these functions directly into the system being developed. This gives undeniable advantages:

1. Fewer external dependencies, so installation and setup are simpler.
2. Theoretically, lower resource consumption.
3. You can serve data directly from your product, without intermediaries.

But at the same time, we don't want to deal with all the intricacies of handling HTTP connections, parsing, and so on.

Solutions for this exist. In this article I would like to give a brief introduction to one of them: the Mongoose embedded server (not to be confused with MongoDB).

Main features


Mongoose was initially positioned as an embedded web server. This means that if you have a C/C++ project, you just need to add two compact files, mongoose.c and mongoose.h, to it, write literally a few dozen lines of code, and voila: you can handle HTTP requests!

However, in recent years Mongoose has grown up considerably, and now it is not just an embedded web server but a whole embedded networking library. That is, besides an HTTP server, it can also be used to implement TCP and UDP sockets, an HTTP client, WebSocket, MQTT, a DNS client and a DNS server, and more.

Another huge plus of this library is that it works asynchronously: you simply write an event handler function that is called on every event (connection established, connection closed, data received, data sent, request parsed, etc.), and in the main loop of your program you call a function that dispatches each pending event to your handler.

Thus, your program can be single-threaded and non-blocking, which benefits both resource consumption and performance.

Usage example


An abstract example for clarity:

#include <signal.h>   // signal()
#include "mongoose.h"

// Connection manager
struct mg_mgr mg_manager;
// Listening HTTP connection
struct mg_connection *http_mg_conn;
// HTTP server options
struct mg_serve_http_opts s_http_server_opts;

const char *example_data_buf = "{ \"some_response_data\": \"Hello world!\" }";

const char *html_error_template =
    "<html>\n"
    "<head><title>%d %s</title></head>\n"
    "<body bgcolor=\"white\">\n"
    "<center><h1>%d %s</h1></center>\n"
    "</body>\n"
    "</html>\n";

//-----------------------------------------------------------------------------
// Event handler: called by the manager for every event on the connection
void http_request_handler(struct mg_connection *conn, int ev, void *ev_data)
{
    switch (ev) {
    case MG_EV_ACCEPT: {
        // New connection accepted; the raw socket is available as conn->sock
        break;
    }
    case MG_EV_HTTP_REQUEST: {
        // The parsed HTTP request:
        // http_msg->uri  - the request URI
        // http_msg->body - the request body
        struct http_message *http_msg = (struct http_message *)ev_data;

        // A known API endpoint - reply with JSON
        if (mg_vcmp(&http_msg->uri, "/api/v1.0/queue/get") == 0) {
            mg_printf(conn,
                      "HTTP/1.1 200 OK\r\n"
                      "Server: MyWebServer\r\n"
                      "Content-Type: application/json\r\n"
                      "Content-Length: %d\r\n"
                      "Connection: close\r\n"
                      "\r\n",
                      (int)strlen(example_data_buf));
            mg_send(conn, example_data_buf, strlen(example_data_buf));
            // The connection is controlled via conn->flags.
            // For example, close it as soon as the response has been sent:
            conn->flags |= MG_F_SEND_AND_CLOSE;
        }
        // Any other /api URI - reply with 404
        else if (strncmp(http_msg->uri.p, "/api", 4) == 0) {
            char buf_404[2048];
            sprintf(buf_404, html_error_template, 404, "Not Found", 404, "Not Found");
            mg_printf(conn,
                      "HTTP/1.1 404 Not Found\r\n"
                      "Server: MyWebServer\r\n"
                      "Content-Type: text/html\r\n"
                      "Content-Length: %d\r\n"
                      "Connection: close\r\n"
                      "\r\n",
                      (int)strlen(buf_404));
            mg_send(conn, buf_404, strlen(buf_404));
            conn->flags |= MG_F_SEND_AND_CLOSE;
        }
        // Everything else - serve static files
        else
            mg_serve_http(conn, http_msg, s_http_server_opts);
        break;
    }
    case MG_EV_RECV: {
        // *(int *)ev_data bytes have been received
        break;
    }
    case MG_EV_SEND: {
        // *(int *)ev_data bytes have been sent
        break;
    }
    case MG_EV_CLOSE: {
        // The connection has been closed
        break;
    }
    default:
        break;
    }
}

bool flag_kill = false;

//-----------------------------------------------------------------------------
void termination_handler(int sig)
{
    (void)sig;
    flag_kill = true;
}
//---------------------------------------------------------------------------
int main(void)
{
    // Ask to be notified about termination requests.
    // (SIGKILL and SIGSTOP, which the original code also registered,
    // cannot be caught and are therefore omitted here.)
    signal(SIGTERM, termination_handler);
    signal(SIGINT, termination_handler);
    signal(SIGQUIT, termination_handler);

    // Document root for static files
    s_http_server_opts.document_root = "/var/www";
    // Forbid directory listings
    s_http_server_opts.enable_directory_listing = "no";

    // Initialize the connection manager
    mg_mgr_init(&mg_manager, NULL);

    // Listen on localhost:8080; all events go to http_request_handler
    http_mg_conn = mg_bind(&mg_manager, "127.0.0.1:8080", http_request_handler);
    if (!http_mg_conn)
        return -1;

    // Attach the HTTP/WebSocket protocol handler
    mg_set_protocol_http_websocket(http_mg_conn);

    while (!flag_kill) {
        // The rest of the program's main loop goes here.
        // Instead of sleeping inside mg_mgr_poll(), the mg_connection->sock
        // sockets can also be added (together with your own descriptors)
        // to a common select/poll call.
        // ...

        int ms_wait = 1000;
        // mg_mgr_poll() dispatches pending events to the handler and waits
        // up to ms_wait milliseconds for new ones
        bool has_other_work_to_do = false;
        // If there is other work to do, poll without blocking
        mg_mgr_poll(&mg_manager, has_other_work_to_do ? 0 : ms_wait);
    }

    // Release resources
    mg_mgr_free(&mg_manager);
    return 0;
}

Please note that the connection remains open until the client closes it or until we close it explicitly (via conn->flags). This means that we can finish processing the request after the handler function has returned.

Thus, for asynchronous request processing we only need to implement a request queue and connection tracking. Then we can make asynchronous queries to the database and to external data sources and consumers.
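To make the request-queue idea concrete, here is a minimal sketch of such a queue in plain C. All names (`pending_queue`, `pq_push`, etc.) and the fixed capacity are my own illustration, not part of Mongoose: in a real program the `void *` slots would hold the `struct mg_connection *` pointers saved inside the `MG_EV_HTTP_REQUEST` case, and the main loop would pop them once the database answer arrives and reply with `mg_printf()`.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

// Hypothetical fixed-size ring buffer of connections awaiting an
// asynchronous response. Capacity is arbitrary for the sketch.
#define PENDING_MAX 64

typedef struct {
    void  *conn[PENDING_MAX];  // saved connection pointers
    size_t head, tail, count;  // ring-buffer state
} pending_queue;

static void pq_init(pending_queue *q) { q->head = q->tail = q->count = 0; }

// Remember a connection; returns false if the queue is full
static bool pq_push(pending_queue *q, void *conn)
{
    if (q->count == PENDING_MAX) return false;
    q->conn[q->tail] = conn;
    q->tail = (q->tail + 1) % PENDING_MAX;
    q->count++;
    return true;
}

// Take the oldest pending connection, or NULL if nothing is pending
static void *pq_pop(pending_queue *q)
{
    if (q->count == 0) return NULL;
    void *conn = q->conn[q->head];
    q->head = (q->head + 1) % PENDING_MAX;
    q->count--;
    return conn;
}
```

One caveat for a real implementation: a client may disconnect while its request is queued, so the `MG_EV_CLOSE` case should also remove the dead connection from the queue before the main loop tries to write to it.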

In theory, this makes for a very elegant solution!
It is ideal for building (AJAX-driven) web interfaces for managing compact devices, and, for example, for exposing various APIs over HTTP.

Despite its simplicity, it also seems to me to be a scalable solution (provided this fits the architecture of your application, of course), because you can put an nginx proxy in front of it:
    location /api {
        proxy_pass http://127.0.0.1:8080;
    }

Well, and then you can also add load balancing across a few instances...
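For example, balancing across several instances could look like the following nginx fragment. The upstream name and the port numbers are illustrative; each port would be one running instance of the application bound via mg_bind():

```nginx
upstream mongoose_backends {
    # one entry per running instance of the application
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;

    location /api {
        proxy_pass http://mongoose_backends;
    }
}
```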

Conclusion


Judging by the project's GitHub page, it is still actively developed.

A huge fly in the ointment is the GPLv2 license, and the price of a commercial license bites for small projects.

If any reader uses this library, especially in production, please share your experience in the comments!

Source: https://habr.com/ru/post/321430/

