Sooner or later the runtime has to turn to the OS to allocate new memory. Imagine: we take a piece of memory (for example, a `[]byte`), work with it, and release it. It will take some time before the GC "wakes up" and collects this piece. If during that time we allocate another such piece and the memory already obtained from the OS is not enough for it, the application will have to ask the OS for more. From the application's point of view, a memory request to the OS lasts forever. And all this while the old, "spent" piece sits somewhere gathering dust, waiting for its turn.

`sync.Pool` lets us reuse such pieces instead:

```go
import (
	"sync"
)

var bytesPool = sync.Pool{
	New: func() interface{} {
		return []byte{}
	},
}
```

The `New` field is optional: if `New` is `nil`, `Get` simply returns `nil` when the pool is empty. Note that the pool stores values as `interface{}`, so every `Get` and `Put` involves a conversion to `interface{}` and back.
Before returning a slice to the pool, reset its length (the capacity is preserved):

```go
// ary is a []byte we are done with
ary = ary[:0] // len becomes 0, cap stays the same
```

It also makes sense not to return pieces of arbitrary size to the pool: if typical pieces are, say, 500-800 bytes, hoarding a 2048-byte one only wastes memory. Limit the capacity of what gets pooled:

```go
const maxCap = 1024

if cap(ary) <= maxCap { // don't pool oversized slices
	bytesPool.Put(ary)
}
```

Getting a piece back out requires a type assertion:

```go
nextAry := bytesPool.Get().([]byte)
```
There are costs, though. The `New` function creates an empty `[]byte{}`, and on top of that come the conversions to `interface{}` and back. Since with a `[]byte` we will most likely grow it via `append`, this approach is in principle unprofitable: `New` hands out a slice of zero capacity, we pay for the `interface{}` conversions, and `append` will still allocate a new piece anyway. Fortunately, `append` can be fed `nil`, as long as it is typed `[]byte` (and not `interface{}`), so `New` can be dropped entirely:

```go
// getBytes returns a pooled slice, or a nil []byte if the pool is empty.
func getBytes() (b []byte) {
	ifc := bytesPool.Get()
	if ifc != nil {
		b = ifc.([]byte)
	}
	return
}

// putBytes resets the slice and returns it to the pool,
// unless its capacity has grown too large.
func putBytes(b []byte) {
	if cap(b) <= maxCap {
		b = b[:0] // drop len, keep cap
		bytesPool.Put(b)
	}
}
```
Still, `sync.Pool` is not a panacea.

Source: https://habr.com/ru/post/277137/