I was once given a task: on ordinary office hardware (P4 2 GHz, 1 GB RAM), generate reports over data from a dozen branches, represented as hundreds of separate databases with about a thousand files each. These were 1C-Trade 7.7 (dbf) bases, split by month, which arrived from the branches on flash drives. The total volume ran to hundreds of gigabytes and took more than an hour just to copy. Yet reports covering 3 years across all branches were produced in a few minutes. How?
Very simple: a superbase was created. No, not one big common base into which data from all the bases is merged. I tried that — it doesn't work out: the volumes are too large, and data synchronization brings too many problems. Everything is much simpler...
A superbase is an empty database that can work with other databases. It holds a list of the available databases and a set of processing scripts for fetching data and generating reports.
For example, you need a report on settlements with the customer LLC "Spartak". You specify part of the name and the TIN (if known), set the period and other parameters. The superbase launches the databases known to it one by one and, via the API (through OLE), sends each the command to generate such a report with those parameters. The base builds its report and hands it back to the superbase. One general report is then assembled from the separate per-database reports. The data collected for the reports is cached, so re-running a report with similar conditions does not require re-collecting the data.
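The fan-out-and-cache scheme above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual 1C implementation: the names `KNOWN_BASES`, `run_report` and the cache layout are all assumptions, and `run_report` merely stands in for the real OLE call.

```python
import hashlib
import json

# Hypothetical registry of per-branch, per-month databases.
KNOWN_BASES = ["branch_msk_2005_01", "branch_spb_2005_01"]
_cache = {}  # (base, params_key) -> partial report

def params_key(params: dict) -> str:
    """Stable cache key for a set of report parameters."""
    return hashlib.md5(json.dumps(params, sort_keys=True).encode()).hexdigest()

def run_report(base: str, params: dict) -> list:
    """Stand-in for the OLE call that asks one base to build its part."""
    return [f"{base}: rows for {params['client']}"]

def collect_report(params: dict) -> list:
    """Query every known base in turn, reusing cached partial reports."""
    key = params_key(params)
    combined = []
    for base in KNOWN_BASES:
        if (base, key) not in _cache:
            _cache[(base, key)] = run_report(base, params)
        combined.extend(_cache[(base, key)])
    return combined

report = collect_report({"client": 'LLC "Spartak"', "period": "2003-2005"})
```

Re-running `collect_report` with the same parameters hits the cache and skips the expensive per-base queries, which is the behavior the text describes.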
Since a database is accessed through its API, the version and structure of the specific database do not matter at all. The branches are autonomous: each has its own troubles, and each solves them in its own way. Only two things are shared: the format for outputting reports and the format of the primary documents (invoices, bills, orders and other standard paperwork). Besides, data quality differs from branch to branch. Some maintain reference books and documents neatly and fill in all the key details. Others write carelessly, with errors and duplicates; details are either missing or wrong. But that is not so important — part of the name is enough. The final report will contain a breakdown across the different spelling variants, and removing the extras is easier than recovering what is missing.
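The "part of the name is enough" matching can be illustrated with a tiny sketch. The sample records and the `match` helper are invented for illustration; the point is only that a substring pulls every spelling variant into the report, and a human then discards the wrong ones.

```python
# Invented sample data: the same counterparty spelled three different ways.
records = [
    ("Spartak LLC", 100),
    ('OOO "Spartak"', 250),
    ("Spartak Ltd (dup)", 70),
    ("Dynamo LLC", 999),
]

def match(records, name_part):
    """Case-insensitive substring match over counterparty names."""
    needle = name_part.lower()
    return [(name, total) for name, total in records if needle in name.lower()]

# All three "Spartak" variants end up in the report; the reader
# removes the extras, which is easier than recovering missing rows.
hits = match(records, "spartak")
```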
Another great convenience: the databases can live on different media — flash drives, USB HDDs, network shares. Bases can be attached and detached "on the fly": they are, in effect, launched autonomously and atomically, so the failure of an individual base causes no harm. You can run several databases from different media simultaneously without losing performance to IOPS contention or locks. A poor man's dynamic partitioning, if you like.
The resulting speed was far higher than a single common base would give, even with MS SQL on an SSD and more powerful hardware. Several databases ran at once from different media, so the locking and single-threadedness of each individual base caused no problems. On flash drives the bases sometimes worked even faster than on an HDD. Reports were built in parts, which made it possible to start reviewing them before the whole process finished. And if the conditions turned out to be wrong, you could abort the report without waiting for it to complete.
Another plus is security. Since the bases at the branches were cut off every month and stored on portable media:
- there was no extra data lying around for outsiders (at most the data for the current month);
- cunning staff had no way to tamper with data for past periods — and if someone did decide to tamper, it was easy to detect by comparison with the state at the end of the previous period;
- data loss in a force-majeure event was at most a month. Once there was a raider attack: only 3 days of data were lost, and the villains got hold of a stub of a single base covering one week;
- portable media (especially flash drives) can easily be destroyed or hidden in an emergency, leaving no traces, passwords or other hints — and without disrupting the work.
A few implementation details. All of this was done on 1C:Enterprise 7.7, release 25. There were two ways to open a base. The first was via OLE, for unhurriedly pulling out primary documents and reference books. The second was an autonomous launch from the command line that started an autorun.ert processing: before opening the base, this autorun.ert and a settings file were copied into the base directory, and after processing, the result was saved to a file (usually as a List of Values or a Table of Values, serialized via ValueToFile()).
Most often the superbase was used to transfer primary documents from the operational trading bases into the "clean" bases of the legal entities. First, a large list of all invoices was built, without their tabular sections. Then accountants filtered this list manually (well, almost manually), leaving only the "clean" documents of a particular legal entity. After that, the documents from the filtered list were copied from the available databases into the final base of that legal entity.
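This two-pass scheme — gather headers, let a human filter, then copy only the approved documents — can be sketched as follows. All the data structures and names here are invented for illustration; in reality the documents live in separate 1C bases and are copied through the mechanisms described above.

```python
def gather_headers(bases):
    """Pass 1: collect invoice headers (no tabular sections) from all bases."""
    headers = []
    for base_name, docs in bases.items():
        for doc_id, header in docs.items():
            headers.append({"base": base_name, "id": doc_id, **header})
    return headers

def copy_approved(bases, approved_ids, target):
    """Pass 2: copy only the human-approved documents into the 'clean' base."""
    for base_name, docs in bases.items():
        for doc_id, header in docs.items():
            if (base_name, doc_id) in approved_ids:
                target[(base_name, doc_id)] = header
    return target

# Invented sample data: two monthly trading bases with invoice headers.
bases = {
    "trade_2005_03": {"inv-1": {"entity": "Alpha LLC"}, "inv-2": {"entity": "Beta LLC"}},
    "trade_2005_04": {"inv-3": {"entity": "Alpha LLC"}},
}
headers = gather_headers(bases)
# The accountants' (almost) manual filter, keeping one legal entity's documents:
approved = {("trade_2005_03", "inv-1"), ("trade_2005_04", "inv-3")}
clean_base = copy_approved(bases, approved, {})
```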