
Through thorns to the clouds: building a cloud service for 3D room and interior design on the C3D kernel and WebGL

These days the Internet talks of nothing but clouds: how endless and beautiful they are... and the servers people have seen up there... So I decided to share with readers my experience of developing an online service for designing rooms and interiors in 3D. I will try to describe both the architecture of the project as a whole and the details of its implementation.





What is a cloud 3D design system? Since the term "cloud computing" has lately become very popular and is used both appropriately and not, I will begin with a definition. Cloud 3D design, in my understanding and my implementation, is a software architecture in which all the data of the 3D model, and all the operations that process it, live on remote servers (that is, in the cloud), while client devices request individual pieces of data or results of calculations over the Internet. In other words, such systems differ from classical design systems in that most computational operations are performed on servers rather than on client devices, and only the small portion of data needed to visualize the model and its parameters is transmitted to the client. The architecture of such systems is divided into closely interacting but remotely located server and client parts, which requires a special approach so that their interaction goes unnoticed by the user of the product.


The next question: what advantages does this architecture offer? Undoubtedly, the cloud architecture is more complex than the classical one, in which the user, his data, and the processing of that data all reside in one place. However, for my project the cloud architecture has a number of indisputable advantages, both in development and in use, that justify this added architectural complexity. I will try to formulate them:



Of course, this approach has its drawbacks. I would highlight the following:



Project architecture


When choosing the architecture, I tried to account for future scaling and divided the project into parts so that the most heavily loaded ones would be easy to parallelize. I relied on the following assumptions, based on prior experience in CAD development and refined while building the prototype:


Loading an average furnished apartment requires about 25 MB of uncompressed geometric data and additional attributes (5 MB compressed) plus 10 MB of textures. Data generation takes from 0.2 s up to 5 s in the most difficult cases. I plan to cap model size at 3-5 million triangles.


While a user designs a floor plan and arranges products, a single operation (inserting or editing a product, regenerating the plan) produces on average 100 to 500 KB of outgoing traffic. Each operation takes the server 0.1-0.5 seconds to execute.
User activity is on the order of opening one model per minute, or performing 5-10 editing operations per minute.


From this it became clear that keeping more than a hundred active users on a single server would be problematic, so geometric queries would have to be processed on different servers, with the 3D models somehow distributed among them.


My choice of development tools was constrained from the start. First, using C3D as the geometric kernel, together with the high performance requirements of geometric calculations on the model, predetermined the use of C++ on the server side. Second, running the client part in the browser narrowed the choice of languages to those that compile to JavaScript.


As a result, at this stage the project consists of four independent parts: two backend services and two Web applications. The main backend service is responsible for geometric modeling and calculations; it is written in C++ using the C3D and Qt Core libraries. An auxiliary service manages files, user catalogs, and texture processing; it is written in ASP.NET Core. The Web applications are divided similarly: one is directly responsible for modeling and is written in TypeScript + WebGL, while the second provides the user interface for managing projects and user catalogs and is built on Angular 2 + TypeScript. Client and server communicate via plain HTTP requests, except for the part responsible for interactive room modeling, which uses WebSocket connections carrying compressed binary data. To avoid duplicating code between the backend services, they also exchange the necessary information over HTTP.
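
To make the transport concrete, here is a minimal TypeScript sketch of the kind of binary WebSocket channel the modeling client could use; the endpoint URL and class name are my illustrative assumptions, not the project's actual API:

```typescript
// A hypothetical binary channel to the geometric service. The "/model"
// endpoint and the ModelChannel name are illustrative assumptions.
class ModelChannel {
  private socket: WebSocket;

  constructor(url: string, onMessage: (data: ArrayBuffer) => void) {
    this.socket = new WebSocket(url);
    // Receive frames as ArrayBuffer instead of Blob to avoid extra async reads.
    this.socket.binaryType = "arraybuffer";
    this.socket.onmessage = (ev) => onMessage(ev.data as ArrayBuffer);
  }

  send(payload: Uint8Array): void {
    // In real code you would wait for the "open" event before sending.
    this.socket.send(payload);
  }
}

// Project management goes over plain HTTP; interactive modeling uses this channel.
const channel = new ModelChannel("wss://example.org/model", (data) => {
  console.log(`received ${data.byteLength} bytes of compressed model data`);
});
```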


Server part


The main highlight of the project is the combination of server and client parts responsible for geometric modeling, visualization, and preserving the model's editing history. This part of the project has high requirements for performance, memory consumption, parallelization, and scalability, since geometric modeling is in itself a computationally expensive task, and executing model-building queries from many active users complicates it further. The C3D kernel from C3D Labs was chosen as the "heart" of the system for performing geometric modeling on the server; the reasons were described in my previous article, "Kernel Technologies in CAD" [1]. To manage complex 3D projects, I developed my own storage system for the 3D model, based on a hierarchical ECS (Entity Component System) popularized by game developers. It is a tree-like model structure consisting of elements (entities), where each element carries various data sets (components), such as geometric parameters, BREP bodies, triangle meshes, user data, and so on. For the system to meet the requirements above, its implementation has a number of distinctive features:


When a model is opened, only its structure is loaded; all its data (components) are stored in a NoSQL database, loaded into RAM automatically when a component is accessed, and just as automatically unloaded from memory when needed. This makes it possible to keep thousands of models open simultaneously on the server at a low RAM cost.
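
A rough TypeScript sketch of the idea (the store interface and eviction policy here are illustrative assumptions, not the actual implementation):

```typescript
type EntityId = number;
type ComponentType = string;

// Abstract view of the NoSQL database holding component data.
interface ComponentStore {
  load(id: EntityId, type: ComponentType): Promise<Uint8Array>;
}

class LazyComponentCache {
  private cache = new Map<string, Uint8Array>();

  constructor(private store: ComponentStore, private maxEntries = 10000) {}

  // Components are fetched from the database only on first access.
  async get(id: EntityId, type: ComponentType): Promise<Uint8Array> {
    const key = `${id}:${type}`;
    let data = this.cache.get(key);
    if (!data) {
      data = await this.store.load(id, type);
      if (this.cache.size >= this.maxEntries) this.evictOldest();
      this.cache.set(key, data);
    }
    return data;
  }

  private evictOldest(): void {
    // Map preserves insertion order, so the first key is the oldest entry.
    const oldest = this.cache.keys().next().value;
    if (oldest !== undefined) this.cache.delete(oldest);
  }
}
```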


Inside components, links to components of other entities are stored as an "entity ID - component type" pair. When model elements are copied, every element receives a new ID derived from the old ID and a random operation code via a symmetric hash function; instead of rewriting all the changed IDs inside the components, the code that converts old IDs to new ones is stored in the entities. This allows component data to be copied with a simple, fast byte-for-byte copy without losing referential integrity in the model structure. As a result, huge models can be copied without even reading the contents of their components, which in turn makes copying large assemblies inside a designed room instantaneous.
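
Here is an illustrative TypeScript sketch of the remapping idea, using XOR as the simplest possible symmetric function (the real hash may well differ):

```typescript
type EntityId = number;

// Symmetric (self-inverse) remapping: remapId(remapId(id, c), c) === id.
// XOR stands in for whatever symmetric hash the real system uses.
function remapId(id: EntityId, opCode: number): EntityId {
  return id ^ opCode;
}

interface Entity {
  id: EntityId;
  // Operation codes accumulated by copy operations; applied lazily
  // whenever a reference read from raw component bytes is resolved.
  remapCodes: number[];
}

// Copying stores only the code instead of rewriting every reference,
// so component bytes can be duplicated verbatim.
function copyEntity(src: Entity, opCode: number): Entity {
  return { id: remapId(src.id, opCode), remapCodes: [...src.remapCodes, opCode] };
}

// Resolving a reference that was byte-copied from the original entity.
function resolveReference(rawId: EntityId, owner: Entity): EntityId {
  return owner.remapCodes.reduce((id, code) => remapId(id, code), rawId);
}
```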


Thanks to the previous mechanism, a special transaction component is implemented, in which the history of changes to the model structure is saved automatically as various commands edit its contents. Splitting an entity into relatively small components made it possible to attach a "listener" to each component access and thereby track changes automatically. This allows the entire history of model changes to be stored, with the ability to return to any point of the model's creation, even one made months ago (the history itself is also stored in components, which are not loaded into RAM unnecessarily). From a developer's point of view, this gives the geometric model an analogue of the transactions found in a DBMS.
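
One possible way to express such a "listener" in TypeScript is a Proxy that records every mutation; this is only a sketch of the concept, not the project's actual C++ mechanism:

```typescript
interface ChangeRecord {
  component: string;
  field: string;
  oldValue: unknown;
  newValue: unknown;
  timestamp: number;
}

// Wrap a plain component object so every write lands in a transaction log.
function track<T extends object>(name: string, component: T, log: ChangeRecord[]): T {
  return new Proxy(component, {
    set(target, field, value) {
      log.push({
        component: name,
        field: String(field),
        oldValue: (target as any)[field],
        newValue: value,
        timestamp: Date.now(),
      });
      (target as any)[field] = value;
      return true;
    },
  });
}

// Usage: any command editing the component is logged automatically.
const history: ChangeRecord[] = [];
const placement = track("placement", { x: 0, y: 0, angle: 0 }, history);
placement.x = 150; // recorded in `history` without the command knowing
```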


Versioning of every element and component of the model enables rapid generation of special patch files, which describe which entities and components must be adjusted in the client's model to synchronize it with the version on the server. Together with the binary WebSocket protocol, this ensures efficient real-time synchronization of model data across all connected clients.


In practice the system works quite fast, handling most model requests within 5-50 ms. It did have a bottleneck, though: opening a model requires transferring all the data needed to visualize it, which for massive models means many database calls to fetch the components, leading to delays of several seconds on models with tens of thousands of elements. This problem was solved by caching patch files in Redis. Since patches never lose their relevance (a patch from version 0 to version 100 plus a patch from version 100 to version 200 is equivalent to a single patch from version 0 to version 200), cache invalidation becomes trivial: the cache can be updated in the background without any risk of serving stale data.
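
The composition property is easy to express in code. A sketch, with illustrative types:

```typescript
interface Patch {
  fromVersion: number;
  toVersion: number;
  // Changed components keyed by "entityId:componentType".
  components: Map<string, Uint8Array>;
}

// compose(patch(0, 100), patch(100, 200)) is equivalent to patch(0, 200),
// so a Redis-cached patch can be extended in the background at any time.
function compose(a: Patch, b: Patch): Patch {
  if (a.toVersion !== b.fromVersion) throw new Error("non-consecutive patches");
  const merged = new Map(a.components);
  // b is newer, so its component data overwrites a's.
  for (const [key, data] of b.components) merged.set(key, data);
  return { fromVersion: a.fromVersion, toVersion: b.toVersion, components: merged };
}
```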


Client part


Designing the client side began with choosing an engine for visualization. I tried almost all the popular WebGL engines, but could not settle on any of them, for the following reasons:



The analysis led me to the conclusion that it would be faster, better, and much easier to maintain to write my own "bicycle". As its foundation I chose the excellent library https://twgljs.org


Here are examples of drawing a floor plan and a 3D view in WebGL:


[Image: floor plan rendered in WebGL]


[Image: 3D view rendered in WebGL]


The next question was the choice of programming language and platform. I already had experience developing a JavaScript application of about 10 thousand lines, and that experience made the idea of writing something larger in plain JavaScript fill me with dread. The recent TypeScript release, and the fact that Anders Hejlsberg is behind it, predetermined the choice of language. For the Web platform I settled on Angular 2 (which is now 4): I had already once assembled a project from a considerable number of disparate libraries and had no desire to build my own harvester for a Web application; I wanted a framework with "everything included". Its advanced features of deferred module loading, efficient code generation (AOT), and internationalization only reinforced my choice. The only thing that still bothers me is the lack of message localization in the source files, but I sincerely hope they implement this functionality by the fourth version))


Implementing the plan


The project began with a C++ prototype of the future model structure and experimental visualization in OpenGL. After several months of debugging, I started moving the application to a client-server model. My first approach was to write a combined C# REST + WebSocket server and attach the geometric service as a dynamic library with a C interface for working with models, to be called for geometric queries. The extreme inconvenience of debugging such a hybrid application, and the unnecessary overhead of copying data from C++ to C and then to C#, made me look for alternatives. In the end, I embedded the WebSocket server directly in the C++ part and routed all requests to it through a proxy server. For client authentication, the geometric service makes internal requests to the main REST service.


The next step was implementing the algorithm that synchronizes the model changed on the server with the model displayed on the client. The initial ideas - having the server track each client's state, or having the client send its full current state to the server before synchronization - had to be discarded as unreliable and hard to implement. I settled on the following: each component stores an integer version number, so the version of the model as a whole is the maximum version among all its components and the components of its child entities. During synchronization the client sends the server a request containing its model version, and the server replies with data for every component whose version is newer than the client's. This synchronizes the model tree between clients and the server with the lowest possible traffic (the request is a single number, and the response contains only the modified components).
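
In TypeScript terms, the protocol boils down to something like this (types and names are illustrative):

```typescript
interface VersionedComponent {
  entityId: number;
  type: string;
  version: number;
  data: Uint8Array;
}

// Server side: collect every component changed since the client's version.
function buildSyncResponse(all: VersionedComponent[], clientVersion: number): VersionedComponent[] {
  return all.filter((c) => c.version > clientVersion);
}

// Client side: apply the response and advance the local model version,
// which is simply the maximum version over all known components.
function applySyncResponse(
  local: Map<string, VersionedComponent>,
  changed: VersionedComponent[],
): number {
  for (const c of changed) local.set(`${c.entityId}:${c.type}`, c);
  let version = 0;
  for (const c of local.values()) version = Math.max(version, c.version);
  return version;
}
```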


After writing prototypes of the client and server parts, I went looking for the optimal data format for transferring the geometric model between them. I wanted the format to have the following features:



JSON, used in the early experiments, was eliminated immediately, leaving a choice between MessagePack, Google Protocol Buffers, Apache Thrift, BSON, and similar libraries. I chose Google Protocol Buffers for its superior performance, good compression, and convenient code generator. The library's wide adoption, and the hope that it will not be abandoned in the long term, also mattered. As a result I use native protobuf in C++, protobufjs for reading and writing on the client, and proto2typescript to share a single schema between C++ and TypeScript. In addition, data transferred over WebSockets is further compressed with zlib. This scheme made transferring all the necessary model data fast and comfortable.
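
For illustration, the client-side decoding path might look like this, assuming a protobufjs-loaded schema with a hypothetical ModelPatch message and pako as the zlib implementation:

```typescript
import * as protobuf from "protobufjs";
import * as pako from "pako";

// Cache the parsed schema; "model.proto" and "ModelPatch" are hypothetical names.
let patchType: protobuf.Type | null = null;

async function getPatchType(): Promise<protobuf.Type> {
  if (!patchType) {
    const root = await protobuf.load("model.proto");
    patchType = root.lookupType("ModelPatch");
  }
  return patchType;
}

// Every WebSocket frame arrives zlib-compressed: inflate, then decode.
async function decodePatch(frame: ArrayBuffer): Promise<protobuf.Message> {
  const ModelPatch = await getPatchType();
  const raw = pako.inflate(new Uint8Array(frame));
  return ModelPatch.decode(raw);
}
```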


P.S. I recently stumbled upon the FlatBuffers library from the same developer and suspect it would be an even better fit, but I haven't had time to try it; besides, TypeScript support is not yet available in the main branch.


Once the data format was settled and the first experiments with each part of the system were done, it became roughly clear how the service as a whole would function. Bottlenecks were assessed and scaling options sketched out for the future. Then came the first versions of the program, demonstrating the chain "user action - request to the modeling server - visualization of the result". At this stage came the first disappointment: such a scheme updates data in the style of "dragged the handle, released the mouse, the object got rebuilt". It was too slow for interactively displaying user actions while the cursor is moving.


This result forced me to rethink the boundary between the server and client parts and make the client fatter, duplicating some functionality between server and client: the client performs preliminary calculations for interactive polygonal visualization, while the server performs the final operations on the BREP model and synchronizes all clients with each other. It also made me think hard about the WebAssembly project, which in theory would allow a single code base for client and server and flexible redistribution of calculations between them as the load requires. But for now that is only a dream...


The next step was implementing a full-fledged WebGL renderer. Its capabilities are still quite modest, yet they took some sweat to achieve. The main implementation points:


At the current stage I use classic forward rendering with several passes. For the future I plan to implement shading based on Screen Space Ambient Occlusion or Scalable Ambient Obscurance.


To achieve acceptable performance, I merge small objects into large vertex buffers in the global coordinate system and send those to the video card. When objects change, all affected buffers are recomputed on the CPU. This may look clumsy, but passing each object's matrix in additional attributes turns out to be even more expensive.
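
A simplified sketch of the merging step (the mesh layout and helper names are illustrative):

```typescript
interface Mesh {
  positions: Float32Array;   // x,y,z triples in local coordinates
  worldMatrix: Float32Array; // 4x4 column-major transform to global space
}

// Transform a point by a column-major 4x4 matrix (affine part only).
function transformPoint(m: Float32Array, x: number, y: number, z: number): [number, number, number] {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// Pre-transform every mesh into global coordinates and concatenate,
// so no per-object matrix has to travel to the GPU.
function mergeMeshes(meshes: Mesh[]): Float32Array {
  const total = meshes.reduce((n, m) => n + m.positions.length, 0);
  const out = new Float32Array(total);
  let offset = 0;
  for (const mesh of meshes) {
    for (let i = 0; i < mesh.positions.length; i += 3) {
      const [x, y, z] = transformPoint(
        mesh.worldMatrix, mesh.positions[i], mesh.positions[i + 1], mesh.positions[i + 2]);
      out[offset++] = x;
      out[offset++] = y;
      out[offset++] = z;
    }
  }
  return out; // upload once with gl.bufferData; rebuilt on the CPU when objects change
}
```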


Rendering lines in 3D with a thickness other than 1 pixel is extremely non-trivial in WebGL. I implemented it by drawing two triangles whose vertices all lie on the same line, with the thickness carried in attributes as tangent vectors. The final triangle vertices are computed in the vertex shader: points are converted into screen coordinates (to account for the screen's aspect ratio), offset by the required line thickness, and converted back into normalized coordinates. Line antialiasing is implemented via the alpha channel in the fragment shader.
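
The core of the vertex shader looks roughly like this (attribute names are illustrative, and the math is simplified):

```typescript
// Every vertex lies on the line itself; "side" says which way to push it,
// and the offset is computed in pixel space so the aspect ratio is respected.
const lineVertexShader = `
attribute vec3 position;   // a point on the line
attribute vec3 tangent;    // line direction at this vertex
attribute float side;      // +1.0 or -1.0: offset direction
uniform mat4 viewProjection;
uniform vec2 viewport;     // viewport size in pixels
uniform float thickness;   // desired line width in pixels

void main() {
  vec4 clipPos = viewProjection * vec4(position, 1.0);
  vec4 clipTan = viewProjection * vec4(position + tangent, 1.0);
  // To pixel space, so the width is uniform regardless of aspect ratio.
  vec2 screenPos = clipPos.xy / clipPos.w * viewport * 0.5;
  vec2 screenDir = clipTan.xy / clipTan.w * viewport * 0.5 - screenPos;
  vec2 normal = normalize(vec2(-screenDir.y, screenDir.x));
  vec2 offset = normal * side * thickness * 0.5;
  // Back to clip space.
  gl_Position = vec4((screenPos + offset) / (viewport * 0.5) * clipPos.w, clipPos.zw);
}`;
```
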
Text is rendered using the SDF technique published by Valve; the fonts were prepared with the Hiero utility from libgdx. The result is satisfactory: at a font size of 14-16 pixels the text looks good, but at smaller sizes, or when the text is at an acute angle to the screen plane, it becomes practically unreadable. Perhaps I simply don't know how to cook SDF properly, but after losing a lot of time I could not achieve a radical improvement. In the future I plan to try this technique.
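
The fragment-shader side of the technique is compact; here is a sketch assuming the distance field sits in the alpha channel of the Hiero-generated atlas:

```typescript
// Smoothstep around the 0.5 distance iso-level yields antialiased glyph edges.
const sdfFragmentShader = `
#extension GL_OES_standard_derivatives : enable
precision mediump float;
uniform sampler2D glyphAtlas; // SDF atlas prepared with Hiero
uniform vec4 textColor;
varying vec2 uv;

void main() {
  float dist = texture2D(glyphAtlas, uv).a;
  // fwidth keeps the smoothing band about one pixel wide at any scale
  // (requires enabling OES_standard_derivatives in WebGL 1).
  float w = fwidth(dist);
  float alpha = smoothstep(0.5 - w, 0.5 + w, dist);
  gl_FragColor = vec4(textColor.rgb, textColor.a * alpha);
}`;
```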


I also want to note that WebGL support in modern browsers is excellent compared to OpenGL on Windows, apparently thanks to the ANGLE project, which implements WebGL calls on top of DirectX. Code without crutches runs fine even under IE 11. On the other hand, in an application operating on large amounts of data with complex structures, memory leaks become an acute problem that is very hard to fight.


Epilogue


The next stage of development was implementing the program's subject area: building modeling, laying out rooms and various structural elements, placing interior items, and creating user catalogs. Of course, many web-service concerns still demand attention and time: authorization and authentication, backups, scaling the various parts, and continuous integration of the whole development process. A considerable front of work lies ahead in these directions before the project can be opened to the public. Nevertheless, the work done so far has given me great experience, and I hope my findings will be useful to some readers. If anyone has experience and is ready, in turn, to share advice on implementing a web service, I will be very interested to hear it. Write me privately or leave comments here.



Source: https://habr.com/ru/post/319406/

