====================================================================================================
Run # 1 (5 Threads)
rpcsd (hbgsrv0189, PID:0718, TID:2648) # 03-09-2012 | 13:50:45 | Servlet: A::RpcsServlet, URI: /index-search
====================================================================================================
NS                       | Name               | C  | T  | Tot(s) | TwR(s) | Avg(s) | AwR(s) | Max(s) | Min(s)
====================================================================================================
::RPC::Service           | service            | 1  | 1  | 1.593  | 1.593  | 1.593  | 1.593  | 1.593  | 1.593
::A::RpcsServlet         | service            | 1  | 1  | 1.592  | 1.592  | 1.592  | 1.592  | 1.592  | 1.592
::IndexSrvRpc            | index-search       | 1  | 1  | 1.584  | 1.584  | 1.584  | 1.584  | 1.584  | 1.584
::Indexer::Search        | Search             | 1  | 1  | 1.584  | 1.584  | 1.584  | 1.584  | 1.584  | 1.584
::Indexer::Search        | ParallelSearch     | 2  | 2  | 1.256  | 1.256  | 0.628  | 0.628  | 0.655  | 0.601
::Indexer::Search::Cache | SearchL2Index      | 44 | 44 | 0.686  | 0.686  | 0.016  | 0.016  | 0.016  | 0.015
::Indexer::Search        | InvalidateCacheIdx | 20 | 20 | 0.570  | 0.570  | 0.028  | 0.028  | 0.031  | 0.020
::Indexer::Search::Cache | InvalidateIdx      | 20 | 20 | 0.276  | 0.276  | 0.014  | 0.014  | 0.016  | 0.002
::Indexer::Search        | SearchL1Index      | 1  | 14 | 0.203  | 0.203  | 0.203  | 0.016  | 0.203  | 0.016
::Indexer::Search        | MergeJoin          | 1  | 1  | 0.125  | 0.125  | 0.125  | 0.125  | 0.125  | 0.125

====================================================================================================
Run # 2 (25 Threads w/o semaphore)
rpcsd (hbgsrv0189, PID:0718, TID:2648) # 03-09-2012 | 13:52:03 | Servlet: A::RpcsServlet, URI: /index-search
====================================================================================================
NS                       | Name               | C  | T  | Tot(s) | TwR(s) | Avg(s) | AwR(s) | Max(s) | Min(s)
====================================================================================================
::RPC::Service           | service            | 1  | 1  | 4.255  | 4.255  | 4.255  | 4.255  | 4.255  | 4.255
::A::RpcsServlet         | service            | 1  | 1  | 4.254  | 4.254  | 4.254  | 4.254  | 4.254  | 4.254
::IndexSrvRpc            | index-search       | 1  | 1  | 4.244  | 4.244  | 4.244  | 4.244  | 4.244  | 4.244
::Indexer::Search        | Search             | 1  | 1  | 4.244  | 4.244  | 4.244  | 4.244  | 4.244  | 4.244
::Indexer::Search        | ParallelSearch     | 2  | 2  | 3.729  | 3.729  | 1.865  | 1.865  | 1.889  | 1.840
::Indexer::Search        | InvalidateCacheIdx | 20 | 20 | 2.497  | 2.497  | 0.125  | 0.125  | 0.126  | 0.125
::Indexer::Search::Cache | InvalidateIdx      | 20 | 20 | 2.188  | 2.188  | 0.109  | 0.109  | 0.113  | 0.109
::Indexer::Search::Cache | SearchL2Index      | 44 | 44 | 1.231  | 1.231  | 0.028  | 0.028  | 0.031  | 0.015
::Indexer::Search        | SearchL1Index      | 1  | 14 | 0.360  | 0.360  | 0.360  | 0.028  | 0.360  | 0.016
::Indexer::Search        | MergeJoin          | 1  | 1  | 0.155  | 0.155  | 0.155  | 0.155  | 0.155  | 0.155

====================================================================================================
Run # 3 (25 Threads with semaphore in InvalidateCacheIdx, before InvalidateIdx)
rpcsd (hbgsrv0189, PID:0718, TID:2648) # 03-09-2012 | 14:02:51 | Servlet: A::RpcsServlet, URI: /index-search
====================================================================================================
NS                       | Name               | C  | T  | Tot(s) | TwR(s) | Avg(s) | AwR(s) | Max(s) | Min(s)
====================================================================================================
::RPC::Service           | service            | 1  | 1  | 2.213  | 2.213  | 2.213  | 2.213  | 2.213  | 2.213
::A::RpcsServlet         | service            | 1  | 1  | 2.213  | 2.213  | 2.213  | 2.213  | 2.213  | 2.213
::IndexSrvRpc            | index-search       | 1  | 1  | 2.205  | 2.205  | 2.205  | 2.205  | 2.205  | 2.205
::Indexer::Search        | Search             | 1  | 1  | 2.205  | 2.205  | 2.205  | 2.205  | 2.205  | 2.205
::Indexer::Search        | ParallelSearch     | 2  | 2  | 1.690  | 1.690  | 0.845  | 0.845  | 0.889  | 0.801
::Indexer::Search::Cache | SearchL2Index      | 44 | 44 | 1.153  | 1.153  | 0.026  | 0.026  | 0.031  | 0.016
::Indexer::Search        | InvalidateCacheIdx | 20 | 20 | 0.537  | 0.537  | 0.027  | 0.027  | 0.031  | 0.007
::Indexer::Search        | SearchL1Index      | 1  | 14 | 0.359  | 0.359  | 0.359  | 0.028  | 0.359  | 0.017
::Indexer::Search::Cache | InvalidateIdx      | 20 | 20 | 0.278  | 0.278  | 0.014  | 0.014  | 0.016  | 0.004
::Indexer::Search        | MergeJoin          | 1  | 1  | 0.156  | 0.156  | 0.156  | 0.156  | 0.156  | 0.156
Notice how the timings of the InvalidateIdx method, and accordingly of the InvalidateCacheIdx method, changed after the call to InvalidateIdx was surrounded with the invCI_semaphore semaphore:

semaphore invCI_semaphore(config.InvCI_Count /* = 5 */);
...
int InvalidateCacheIdx() {
    ...
    while (...) {
        cache.SearchL2Index();
        invCI_semaphore++;
        while (cache.InvalidateIdx()) {};
        invCI_semaphore--;
    }
    ...
}
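For comparison, here is a minimal, self-contained C++20 sketch of the same throttling idea using std::counting_semaphore, with acquire/release playing the role of the ++/-- above; kInvCICount, invalidateIdx and worker are illustrative stand-ins, not names from the article:

#include <chrono>
#include <semaphore>
#include <thread>
#include <vector>

constexpr int kInvCICount = 5;                       // stands in for config.InvCI_Count
std::counting_semaphore<> invCI_semaphore{kInvCICount};

void invalidateIdx() {                               // stands in for cache.InvalidateIdx()
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

void worker() {
    for (int i = 0; i < 20; ++i) {
        invCI_semaphore.acquire();                   // at most kInvCICount threads pass this point
        invalidateIdx();                             // the contended invalidation step
        invCI_semaphore.release();
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 25; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}

Only the invalidation step is throttled, so the rest of the search path stays parallel, which matches what the Run # 3 numbers show.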
A related technique is to let a pool thread adjust its own priority before running a task: modest while other threads are available and the task is not urgent, boosted otherwise:

// before doing ...
if ( thisThread.pool.count() > 1 && !(currentTaskType in (asap, immediately, now)) ) {
    thisThread.priority = 2 * thisThread.pool.priority;
} else {
    thisThread.priority = 5 * thisThread.pool.priority;
}
// do current task ...
Logging is another classic shared resource. In the original version every call to log wrote and flushed the file while holding a common mutex:

int log(int level, ...) {
    if (level >= level2log) {
        logMutex.lock();
        try {
            file.write(...);
            file.flush();
        } finally {
            logMutex.release();
        }
    }
}
The reworked version only queues the message under a short lock and wakes a dedicated background thread, which writes and flushes in batches:

int log(int level, ...) {
    if (level >= level2log) {
        // put the message into the queue under a short lock:
        logQueue.mutex.lock();
        logQueue.add(currentThread.id, ...);
        logQueue.mutex.release();
        // wake the worker thread:
        logQueue.threadEvent.pulse();
    }
}

// background-logging thread:
int logThreadProc() {
    ...
    while (true) {
        // keep waiting while fewer than 10 messages are queued or less than 500 ms has passed:
        if ( logQueue.count < config.LogMaxCount /* = 10 */
             || (sleepTime = currentTime - lastTime) < config.LogLatency /* = 500 */) {
            logQueue.threadEvent.wait(config.LogLatency - sleepTime);
            continue;
        };
        // write out everything that has accumulated:
        logQueue.mutex.lock();
        try {
            foreach (... in logQueue) {
                file.write(...);
                logQueue.delete(...);
            }
        } finally {
            logQueue.mutex.release();
        }
        // one flush per batch:
        file.flush();
        // sleep until the next event:
        logQueue.threadEvent.wait();
        lastTime = currentTime;
    }
    ...
}
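A runnable sketch of the same batching logger on top of standard C++ primitives; all names and the 10-message / 500 ms thresholds below are illustrative, not the article's config:

#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

std::mutex logMutex;
std::condition_variable logEvent;
std::deque<std::string> logQueue;
bool stopLogging = false;

void log(std::string msg) {
    // producers only append under a short lock
    { std::lock_guard<std::mutex> g(logMutex); logQueue.push_back(std::move(msg)); }
    logEvent.notify_one();
}

void logThreadProc() {
    std::unique_lock<std::mutex> lk(logMutex);
    for (;;) {
        // wake when 10 messages piled up, 500 ms passed, or we are shutting down
        logEvent.wait_for(lk, std::chrono::milliseconds(500),
                          [] { return logQueue.size() >= 10 || stopLogging; });
        std::deque<std::string> batch;
        batch.swap(logQueue);                 // grab the batch, drop the lock quickly
        lk.unlock();
        for (auto& m : batch) std::fputs(m.c_str(), stderr);
        std::fflush(stderr);                  // one flush per batch, not per message
        lk.lock();
        if (stopLogging && logQueue.empty()) break;
    }
}

int main() {
    std::thread logger(logThreadProc);
    for (int i = 0; i < 100; ++i)
        log("message " + std::to_string(i) + "\n");
    { std::lock_guard<std::mutex> g(logMutex); stopLogging = true; }
    logEvent.notify_one();
    logger.join();
}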
This takes the file I/O out of the worker threads and cuts the time spent in the log method.

The same idea, deferring non-urgent work until after the response, also works at a higher level. Here is a page (Tcl) that shows a per-user visit counter: the read happens inline, while the accesslog update is scheduled with after idle and runs in the background once the request has been served:

## read the counter:
set counter [db onecolumn {select cntr from accesslog where userid = $userid}]
%>
<%= $counter %>
...
<%
## update the "access log" in the background ("update idle"):
after idle UpdateAccess $userid [clock seconds]
## ...
## the update procedure itself:
proc UpdateAccess {userid lasttime} {
    db exec {update accesslog set cntr = cntr + 1, lastaccess = $lasttime where userid = $userid}
}
The next story is about a custom memory manager built on top of malloc / free with object pools (whoever has written one knows why this is done :)). The pools were organized as FIFO: the mymalloc function returned the element that the myfree function had put into the pool longest ago. The reason that pushed the developer toward FIFO is banal in its simplicity: if some unscrupulous "programmer" keeps using an object for a while after handing it to myfree, the program will probably still run a little longer. After replacing the FIFO with a LIFO (the freshest block is also the one most likely to still sit in the CPU caches), the whole arsenal (an application server) that made heavy use of this memory manager started running about 30% faster.
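A sketch of what the LIFO variant of such a pool can look like; FixedPool and its internals are my illustration, only mymalloc / myfree echo the article, and locking is omitted for brevity. The most recently freed block is handed out first:

#include <cstddef>
#include <new>
#include <vector>

class FixedPool {
public:
    explicit FixedPool(std::size_t blockSize) : blockSize_(blockSize) {}
    ~FixedPool() { for (void* p : free_) ::operator delete(p); }

    void* mymalloc() {
        if (free_.empty())
            return ::operator new(blockSize_);   // pool empty: fall back to the heap
        void* p = free_.back();                  // LIFO: take the newest freed block
        free_.pop_back();
        return p;
    }

    void myfree(void* p) { free_.push_back(p); } // put it on top of the stack

private:
    std::size_t blockSize_;
    std::vector<void*> free_;                    // stack of returned blocks
};

int main() {
    FixedPool pool(64);
    void* a = pool.mymalloc();
    pool.myfree(a);
    void* b = pool.mymalloc();   // returns the block just freed (LIFO), so b == a
    pool.myfree(b);
}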
One more story is about a reader/writer lock guarding a shared hash table. The usage looked innocent enough:

RWMutex mtx(500);
...
mtx.lockWrite();
hashTab.add(...);
mtx.releaseWrite();
...

The implementation of lockWrite and releaseWrite, however, looked like this:
void RWMutex::lockWrite() {
    writeMutex.lock();
    for (register int i = 0; i < readersCount /* 500 */; i++)
        readSemaphore++;
}

void RWMutex::releaseWrite() {
    if (!f4read) writeMutex.release();
    readSemaphore -= readersCount;
    if (f4read) writeMutex.release();
}
Apparently the developer believed that taking readSemaphore one step at a time in the body of lockWrite, that is, readSemaphore++ in a loop instead of a single readSemaphore += readersCount, gives reader and writer threads equal chances. He probably did not know that the semaphore class used to build this RWMutex came from a cross-platform library which, for this particular platform, produced code roughly like this:

int Semaphore::operator ++() {
    mutex.lock();
    if (sema++ > MaxFlowCount)
        flowMutex.lock();
    mutex.release();
}

As a result, 100 writes into hashTab, with several threads reading at the same time, cost us 100 * 500 lock operations (and stalls of several milliseconds caused by the context switches). The most interesting part of this story is that the class in question was RWSyncHashTable, a base class actively used throughout our code.
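Today the same reader/writer protection can be had from std::shared_mutex with no hand-rolled semaphore loop at all; a sketch under the assumption that the table only needs add and find (the real RWSyncHashTable interface is not shown in the article):

#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

class RWSyncHashTable {
public:
    void add(const std::string& key, const std::string& value) {
        std::unique_lock<std::shared_mutex> w(mtx_);   // one writer at a time
        map_[key] = value;
    }
    bool find(const std::string& key, std::string& out) const {
        std::shared_lock<std::shared_mutex> r(mtx_);   // many readers in parallel
        auto it = map_.find(key);
        if (it == map_.end()) return false;
        out = it->second;
        return true;
    }
private:
    mutable std::shared_mutex mtx_;
    std::unordered_map<std::string, std::string> map_;
};

int main() {
    RWSyncHashTable tab;
    tab.add("answer", "42");
    std::string v;
    return tab.find("answer", v) ? 0 : 1;
}

Readers proceed concurrently, and a writer takes the lock exactly once instead of performing 500 semaphore increments.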
Another pattern: a cheap check in front of a lock that lets a busy thread go do something else instead of waiting.

static int mtx_locked = 0;
// is somebody already inside, or can we not get the lock within 1 ms?
while ( mtx_locked || !mtx.lock(config.MaxWaitTime /* 1 ms */) ) {
    // somebody else is working, so do something useful in the meantime ...
    ...
    processNextRequest();
}
// we hold the lock ...
mtx_locked++;
// do the work ...
processInLock();
// unlock ...
mtx_locked--;
mtx.release();
Note the mtx_locked counter. This technique lets a thread skip the mtx.lock call when the code is known to be locked (mtx_locked > 0), and we do not even have to know this for sure: we simply go and do something else for a while.
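A sketch of the same trick with standard primitives (handleRequest and the 1 ms budget are illustrative): a relaxed atomic counter lets a thread skip even the short try_lock when the critical section is known to be busy.

#include <atomic>
#include <chrono>
#include <mutex>

std::timed_mutex mtx;
std::atomic<int> mtx_locked{0};

void processNextRequest() { /* useful work that needs no lock */ }
void processInLock()      { /* the work that does need the lock */ }

void handleRequest() {
    using namespace std::chrono_literals;
    // skip even the 1 ms try_lock while somebody is known to be inside
    while (mtx_locked.load(std::memory_order_relaxed) > 0 || !mtx.try_lock_for(1ms)) {
        processNextRequest();
    }
    mtx_locked.fetch_add(1, std::memory_order_relaxed);
    processInLock();
    mtx_locked.fetch_sub(1, std::memory_order_relaxed);
    mtx.unlock();
}

int main() { handleRequest(); }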
Source: https://habr.com/ru/post/150801/