
Good day!
I would like to bring to your attention a translation of an article by Jonathan Bartlett, technical director at New Medio. The article was published on November 16, 2004 on ibm.com and is devoted to memory management techniques. Although the article is old by IT standards, the information in it is fundamental: it describes the main approaches to memory allocation and their strengths and weaknesses, all accompanied by "home-grown" implementations for better retention of the material.
Abstract from the author
Solutions, trade-offs, and implementations of dynamic memory allocation. Get an overview of the memory management techniques available to Linux developers. These techniques are not limited to C; they are used in other programming languages as well. This article describes in detail how memory is managed, with examples of the manual approach, the semi-manual approaches using reference counting or pooling, and the automatic approach using garbage collection.
Why memory management is needed
Memory management is one of the most fundamental areas of programming. In many scripting languages you don't have to worry about memory management, but that does not make the mechanism any less important. Knowing the capabilities of your memory manager and the subtleties of how it works is the key to effective programming. In most systems languages, such as C and C++, the developer has to keep track of the memory in use. This article covers manual, semi-automatic, and automatic methods of memory management.
There was a time when memory management was not a big problem. Recall, for example, assembly-language development for the Apple II. Programs mostly ran not on top of the OS but alongside it: any piece of memory could be used by either the system or the developer. There was no need to calculate the amount of available memory, because it was the same on every computer. Memory requirements were therefore quite static: you simply picked a region of memory and used it.
However, even on such a simple computer you could run into problems, especially if you did not know how much memory a particular part of the program might need.
If memory is limited, an approach is needed that handles tasks such as:
- Determine if there is enough memory;
- Get section from available memory;
- Return the section back to the pool so that it can be used in other parts of the program or other programs.
(Translator's note: let's call this list Memory-Requirements so we can refer to it later.)
The libraries that search for, allocate, and free memory are called allocators. As program complexity grows, so does the complexity of memory management, and with it the role of the allocator in the program. Let's take a look at the various methods of memory management, consider their advantages and disadvantages, and the situations in which they are most effective.
C-style allocators
The C language provides two functions that solve the tasks from Memory-Requirements:
- malloc: allocates the specified number of bytes and returns a pointer to them. If there is not enough memory, it returns NULL (a null pointer);
- free: accepts a pointer to a region previously allocated by malloc and returns it for further use by the program or the operating system (in fact, some malloc implementations return memory only for reuse by the program, not to the OS).
Physical and virtual memory
To understand how allocation happens within a program, you first need an idea of how the OS allocates memory to the program. (Translator's note: since the program runs under a specific OS, it is the OS that decides how much memory to give to each program.) Each running process believes it has access to all of the computer's physical memory. Obviously, many processes run at the same time, and each of them cannot have all of the memory to itself. What actually happens is that processes use virtual memory.
As an example, suppose a program refers to memory address 629. The virtual memory system does not guarantee that the data is stored in RAM at address 629. In fact, it may not be in RAM at all: the data could have been moved to disk if RAM was full. That is, virtual addresses map to locations on a physical device. The OS maintains a virtual-address-to-physical-address mapping table so that the computer can respond properly to address requests. If an address refers to data that is currently on disk, the OS has to temporarily suspend the process, write some data from RAM out to disk, load the needed data from disk into RAM, and then restart the process. In this way each process gets its own address space to operate in, and can even obtain more memory than the OS originally allocated to it.
In 32-bit applications (x86 architecture), each process can address 4 gigabytes of memory. At the time of writing, few machines actually have that much. Even with swap, the total is usually less than 4 GB per process. So when a process is loaded, it is given a certain region of memory. The end of this region is called the system break. Beyond this boundary lies unmapped memory, i.e. memory with no projection onto disk or RAM. Therefore, when a process exhausts the memory allocated to it at load time, it must request a larger region from the OS.
(Mapping, a mathematical term meaning a one-to-one correspondence: here, a virtual address is associated with another address — on disk or in RAM — where the real data is actually stored.)
UNIX-based operating systems have two system calls for mapping additional memory:
- brk: brk() is a very simple system call. The system break is the outer limit of the memory mapped for the process; brk() simply moves this boundary forward or backward to increase or decrease the amount of mapped memory. (Translator's note: picture a slider in MS Word: the system break is the maximum value the slider can take, and the slider itself is the current break.)
- mmap: mmap() ("memory map") is similar to brk() but is a more flexible tool. First, it can map memory anywhere in the address space, not just at the end of the process. Second, it can not only map virtual memory onto physical memory or swap, it can also bind memory to specific files, so that reads and writes operate directly on the file. The counterpart of mmap() is munmap().
As you can see, simple calls to brk() or mmap() can be used to expand the process's memory. In the rest of the text we will use brk(), since it is the simplest and most common tool.
Implementing a simple allocator
If you have written programs in C, you have probably used functions such as malloc() and free(), most likely without ever thinking about how they are implemented. This section demonstrates a simplified implementation of these functions and illustrates how they participate in memory allocation.
For this example we will build up the listing piece by piece. Copy and paste the code into a file called malloc.c; we will analyze it as we go.
Memory allocation in most operating systems comes down to two simple functions:
- void *malloc(long numbytes): allocates numbytes bytes of memory and returns a pointer to the first one;
- void free(void *firstbyte): takes a pointer obtained from malloc() and frees the memory it points to.
We declare the function malloc_init, which will initialize our allocator. This involves three things: mark the allocator as initialized, find the last valid address in memory (that is, the last one available for allocation), and set a pointer to the beginning of the managed memory. To do this, we declare three global variables:
Code Listing 1: Global variables for our allocator

int has_initialized = 0;
void *managed_memory_start;
void *last_valid_address;
As mentioned above, the "edge" of the mapped memory (the last valid address) goes by several names: system break or current break. On most Unix-like systems, the function sbrk() is used to find the current break. sbrk(n) pushes the current break forward by n bytes (passed as the argument), after which the current break takes its new value; calling sbrk(0) simply returns the current break without moving it. Let's write the initialization code for our malloc, which finds the current break and initializes the variables:
Code Listing 2: Initializing the allocator

#include <unistd.h>

void malloc_init()
{
	/* grab the last valid address from the OS */
	last_valid_address = sbrk(0);
	/* we don't have any memory to manage yet, so
	 * just set the beginning to be last_valid_address */
	managed_memory_start = last_valid_address;
	/* okay, we're initialized and ready to go */
	has_initialized = 1;
}
For proper management we need to track which memory is allocated and which is free. After free() is called on a block, that memory must be marked "unused" so that it can be found the next time malloc() is called. Therefore, the beginning of each region of memory that malloc() returns will carry the following structure:
Code Listing 3: Memory control block structure

struct mem_control_block {
	int is_available;
	int size;
};
You might guess that this structure would get in the way if malloc() returned a pointer to it. (Translator's note: the point is that if the returned pointer were set to the beginning of this structure, then writing to the memory would destroy the bookkeeping information about how much was allocated.) The solution is quite simple: hide it, i.e. return a pointer to the memory located immediately after the structure. That is, return a pointer to an area that stores no bookkeeping data and where the caller can write freely. When free() is later called with that pointer, we simply rewind a certain number of bytes (namely sizeof(struct mem_control_block)) to reach the structure and use the data stored in it.
First, let's talk about freeing memory, because this process is simpler than allocation. All that needs to be done to free memory is to take the pointer passed to the free() function, move it back by sizeof(struct mem_control_block) bytes, and mark the block as free. Here is the code:
Code Listing 4: Freeing memory

void free(void *firstbyte)
{
	struct mem_control_block *mcb;
	/* move back to find the control block */
	mcb = firstbyte - sizeof(struct mem_control_block);
	/* mark the block as available again */
	mcb->is_available = 1;
	return;
}
As you can see, in this implementation freeing happens in constant time, because the implementation is very simple. Allocation is a little more complicated. Let's consider the algorithm in general terms:
Code Listing 5: Pseudocode for the allocator algorithm

1. If our allocator has not been initialized, initialize it.
2. Add sizeof(struct mem_control_block) to the requested size.
3. Start at managed_memory_start.
4. Are we at last_valid_address?
5. If we are:
   A. We did not find any existing block that was large enough;
      ask the operating system for more memory and return that.
6. If we are not:
   A. Is the current block available (mem_control_block->is_available == 1)?
   B. If it is:
      I) Is it large enough (mem_control_block->size >= requested size)?
      II) If so:
         a. Mark it as unavailable (mem_control_block->is_available = 0)
         b. Move past the mem_control_block and return the pointer
      III) Otherwise:
         a. Move forward "size" bytes
         b. Go back to step 4
   C. Otherwise:
      I) Move forward "size" bytes
      II) Go back to step 4
The whole point is a kind of "walk" through memory in search of free blocks. Take a look at the code:
Code Listing 6: Implementing the algorithm

void *malloc(long numbytes)
{
	void *current_location;
	struct mem_control_block *current_location_mcb;
	void *memory_location;

	if (!has_initialized) {
		malloc_init();
	}
	/* make room for our control block */
	numbytes = numbytes + sizeof(struct mem_control_block);
	memory_location = 0;
	/* walk the managed memory looking for a free block */
	current_location = managed_memory_start;
	while (current_location != last_valid_address) {
		current_location_mcb = (struct mem_control_block *)current_location;
		if (current_location_mcb->is_available) {
			if (current_location_mcb->size >= numbytes) {
				current_location_mcb->is_available = 0;
				memory_location = current_location;
				break;
			}
		}
		current_location = current_location + current_location_mcb->size;
	}
	/* nothing suitable found: ask the OS to move the break */
	if (!memory_location) {
		sbrk(numbytes);
		memory_location = last_valid_address;
		last_valid_address = last_valid_address + numbytes;
		current_location_mcb = memory_location;
		current_location_mcb->is_available = 0;
		current_location_mcb->size = numbytes;
	}
	/* hide the control block from the caller */
	memory_location = memory_location + sizeof(struct mem_control_block);
	return memory_location;
}
This is our memory manager. Now we need to build it so it can be used in programs.
To build our malloc-compatible allocator, run the following command (we did not cover functions such as realloc(), but malloc() and free() are the most important ones):
Code Listing 7: Compiling

gcc -shared -fpic malloc.c -o malloc.so
The output is the file malloc.so, a shared library containing our code.
On Unix systems, you can substitute your allocator for the system one. It is done like this:

Code Listing 8: Replacing the standard malloc

LD_PRELOAD=/path/to/malloc.so
export LD_PRELOAD
LD_PRELOAD is an environment variable used by the dynamic linker: symbols from the named library are resolved before any library loaded by an application, so our symbols take precedence. Thus, applications started in the current session will use the malloc() we have just written. A few applications do not use malloc(), but they are the exception rather than the rule. Applications that need other allocator functions such as realloc(), or that make assumptions about malloc()'s internal behavior, are likely to crash. The ash shell (a command shell for UNIX-like systems) works fine with our malloc allocator.
If you want to make sure your malloc() is actually being used, you can add a write() call at the top of your functions.
In terms of functionality, our memory manager leaves much to be desired, but it serves well as a demonstration. Among its shortcomings:
- Since it operates on the system break (a global resource), it cannot coexist with other allocators or with mmap;
- When allocating, the allocator may in the worst case have to walk through all of the process's memory, which may include data that has been paged out to disk. This forces the OS to spend time moving data between disk and virtual memory;
- It handles out-of-memory errors poorly;
- It lacks many other functions, such as realloc();
- Since sbrk() may grant more memory than we requested, memory at the end of the heap is leaked;
- is_available occupies 4 bytes, although only one bit is needed;
- The allocator is not thread-safe;
- It cannot coalesce adjacent free blocks into larger ones. (Translator's note: say we request 32 bytes and there are two consecutive free blocks of 16 bytes in memory; the allocator will not take advantage of this.)
- Its simplistic allocation algorithm can lead to memory fragmentation;
- There are, of course, other problems. But this is only an example!
Other malloc implementations
There are many other implementations of malloc(), each with its own strengths and weaknesses. A number of criteria should be considered when designing an allocator:
- Allocation speed;
- Deallocation speed;
- Behavior in a multithreaded environment;
- Behavior when memory runs out;
- Cache locality;
- Bookkeeping memory overhead;
- Behavior with virtual memory;
- Large and small objects;
- Real-time guarantees.
For example, our allocator's strength is fast deallocation, while its weakness is slow allocation. Also, because of its primitive handling of virtual memory, it works best with large objects.
There are many varieties of allocators. Here are some of them:
- Doug Lea malloc: actually a whole family of allocators, including the original Doug Lea allocator, the GNU libc allocator, and ptmalloc. The Doug Lea allocator has the same basic structure as ours, but it adds indexes for faster search and can merge several unused blocks into one large block. It also supports caching, which speeds up reuse of recently freed memory. ptmalloc is the Doug Lea allocator extended to support multithreading. A description of Doug Lea's malloc is available in the references at the end of the article.
- BSD malloc: the implementation distributed with BSD since version 4.2 and included in FreeBSD; it allocates objects from pools of predetermined sizes. It maintains a set of size classes for objects: each a power of two, minus a constant. If you request memory for an object, it simply allocates from whichever size class fits. This makes for a simple implementation, but can waste memory. A description is also available at the end of the article.
- Hoard: Hoard was written with the goal of being very fast in multithreaded environments. Its locking scheme is organized so that threads waiting for memory allocations do not block each other, which can significantly speed up multithreaded processes that allocate frequently. A description is in the list of references.
These are the best known of the many allocators available. If your application has special allocation needs, you can write a custom allocator that works the way you require. However, if you are unfamiliar with allocator design, custom implementations tend to create more headaches than they are worth. For a deeper introduction to the subject, I recommend the following book: Donald Knuth, The Art of Computer Programming, Volume 1: Fundamental Algorithms, Section 2.5: Dynamic Storage Allocation. The material is somewhat dated, of course, and does not touch on virtual memory, but the algorithmic foundations have not changed.
In C++, you can implement a custom allocator for a class or template by overloading operator new(). Andrei Alexandrescu, in his book Modern C++ Design, describes a small-object allocator (Chapter 4: Small-Object Allocation).
Drawbacks of allocating with malloc()
Not only our memory manager has flaws; they are present in other implementations as well. Managing memory with malloc() is rather risky for programs that keep data around for a long time and need it to remain easily accessible. When there are many references to dynamically allocated memory, it is often hard to know when it should be freed. Memory whose lifetime is limited to the scope of a function (local variables) is usually easy to manage, but for memory used throughout the program, the task becomes much harder. Also, many APIs are not clearly documented, and it becomes unclear who is responsible for managing the memory: the calling program or the called function.
Because of such problems, many programs manage memory by their own rules. Sometimes it turns out that more code is spent allocating and freeing memory than on actual computation. Therefore, let's consider alternative ways to manage memory.
Semi-automatic approaches to memory management

Reference counting
Reference counting is a semi-automatic memory management technique: it requires some additional code, but it frees you from having to track when memory is no longer used. Reference counting does that for you.
The mechanism works as follows: each dynamically allocated block carries a field that stores the number of references to it. When a new variable in the program starts referring to the block, the counter is incremented. Conversely, when a variable stops referring to it, the counter is decremented. On each decrement a check is made: if the number of references has dropped to 0, the memory is freed.
Every reference to the block simply increments or decrements the counter. This prevents memory from being freed while it is still in use. In any case, you must not forget to call the reference counting functions when working with such ("counted") structures. Also, built-in functions and third-party libraries may not support reference counting, or may have their own mechanism.
To implement this mechanism you only need two functions: the first increments the reference count, the second decrements it and frees the memory once the count reaches zero.
For example, the reference counting functions might look something like this:
Code Listing 9: Reference counting example

struct refcountedstruct {
	int refcount;
};

void REF(void *data)
{
	struct refcountedstruct *rstruct;
	rstruct = (struct refcountedstruct *) data;
	rstruct->refcount++;
}

void UNREF(void *data)
{
	struct refcountedstruct *rstruct;
	rstruct = (struct refcountedstruct *) data;
	rstruct->refcount--;
	/* free the structure when the last reference is gone */
	if (rstruct->refcount == 0) {
		free(rstruct);
	}
}
REF and UNREF can be more complicated, depending on what you need. For example, you might want to add locking for multithreaded applications, or add a function pointer to refcountedstruct that is called to free the memory (like a destructor in object-oriented languages; this is necessary if your structure itself contains pointers).
When using REF and UNREF, you must adhere to the following rules when assigning pointers:
- UNREF is called on the old value before the assignment;
- REF is called on the new value after the assignment.
For functions that take reference-counted structures, the rules are:
- REF is called at the beginning of the function;
- UNREF is called at the end of the function.
Here is another small example:

struct mydata {
	int refcount;
	int datafield1;
	int datafield2;
};

void dosomething(struct mydata *data)
{
	REF(data);
	/* process data */
	UNREF(data);
}

struct mydata *globalvar1;

void storesomething(struct mydata *data)
{
	REF(data);            /* ref because of the function parameter */
	globalvar1 = data;
	REF(data);            /* ref because of the global variable */
	/* ... */
	UNREF(data);          /* unref the parameter: the function is done */
}
Since reference counting is a fairly trivial mechanism, many developers implement it themselves rather than use third-party libraries. Their implementations are still based on allocators such as malloc and free, which handle the actual allocation and release of memory.
Reference counting is also used in high-level languages such as Perl. There the duty falls on the language itself, so you don't have to worry about anything unless you are writing extensions. Reference counting does slightly reduce performance, but adds a good deal of safety and simplicity to development. Its main benefits:
- Simple implementation;
- Easy to use;
- Since the reference count is part of the structure itself, it has good cache locality.
There are also disadvantages:
- You must remember to call the reference counting functions;
- Memory cannot be freed if objects form a cycle (a ring of references);
- Nearly every pointer assignment is slowed down;
- Special care is required when handling exceptions (try or setjmp()/longjmp());
- Extra memory is required for the reference counts;
- The reference count takes the first position in the structure, which on most machines is the fastest to access, so that prime slot is spent on bookkeeping rather than data;
- Slower execution and extra complications in multithreaded environments.
In C++, the likelihood of errors can be reduced with smart pointers, which take over pointer bookkeeping duties such as reference counting. However, if you have a lot of legacy code that cannot deal with smart pointers (for example, linkage with a C library), they often cause more problems than they solve. Nevertheless, for pure C++ projects they can be a real lifesaver. Smart pointers are described in Andrei Alexandrescu's Modern C++ Design (see the chapter "Smart Pointers" and the references at the end of this article).
Memory pools
Memory pools are another semi-automatic approach to memory management. They help in programs that pass through distinct stages of execution, where at each stage we know how much memory the program will need and for how long. A typical example is a server process that allocates a lot of per-connection memory: memory whose maximum lifespan coincides with the lifetime of the connection. Apache works this way: each connection passes through separate stages, each with its own memory pool, and at the end of a stage the pool's memory is released all at once.
In the pooled model of management, each allocation specifies the pool from which the memory should be taken, and each pool has its own lifespan. In Apache there is a pool that lives as long as the server, a pool that lives as long as a connection, a pool that lives as long as a request, and others. If a series of functions produces no data that outlives the connection, it can all be allocated from the connection pool, in the knowledge that the memory will be freed automatically when the connection closes. Some implementations also allow registering cleanup functions, which are called just before the pool is cleared (similar to destructors in object-oriented languages).
To use pools in your own programs, you can take the obstack implementation from GNU libc or the pools of the Apache Portable Runtime. obstacks are available on virtually any Linux system; the Apache Portable Runtime is available on any system with Apache installed. Both are well documented. Here is a simple example program using obstacks:
Code Listing 11: Example use of obstacks

#include <obstack.h>
#include <stdlib.h>

/* Example of using obstacks */

/* obstacks need functions for allocating and freeing chunks of memory
   (xmalloc is a malloc that exits the program on failure) */
#define obstack_chunk_alloc xmalloc
#define obstack_chunk_free free

/* Pools */
/* Only permanent allocations should go in this pool */
struct obstack *global_pool;
/* This pool is for per-connection data */
struct obstack *connection_pool;
/* This pool is for per-request data */
struct obstack *request_pool;

void allocation_failed()
{
	exit(1);
}

int main()
{
	/* initialize the pools */
	global_pool = (struct obstack *) xmalloc (sizeof (struct obstack));
	obstack_init(global_pool);
	connection_pool = (struct obstack *) xmalloc (sizeof (struct obstack));
	obstack_init(connection_pool);
	request_pool = (struct obstack *) xmalloc (sizeof (struct obstack));
	obstack_init(request_pool);

	/* set the error handler */
	obstack_alloc_failed_handler = &allocation_failed;

	/* main server loop */
	while(1)
	{
		wait_for_connection();
		/* we are inside a connection now */
		while(more_requests_available())
		{
			/* handle the request */
			handle_request();
			/* free all memory allocated during the request (request pool) */
			obstack_free(request_pool, NULL);
		}
		/* the connection is over - free its pool */
		obstack_free(connection_pool, NULL);
	}
}

int handle_request()
{
	/* allocate some memory from the request pool */
	int bytes_i_need = 400;
	void *data1 = obstack_alloc(request_pool, bytes_i_need);
	/* do stuff to process the request */
	/* return */
	return 0;
}
As you can see, after the completion of each stage of processing, the corresponding obstack is freed. Note that calling obstack_free() with NULL as the second argument frees the entire contents of the obstack; it can also be called with a pointer to an allocated object, in which case it frees that object and everything allocated after it.
Advantages of pools:
- Very simple memory management for the application;
- Allocation and deallocation are faster, because everything is freed a pool at a time. Allocation runs in O(1); releasing a pool is technically O(n), but it happens in huge chunks, so in practice it behaves like O(1);
- Error-handling pools can be preallocated in advance, so the program can still recover when ordinary memory runs out;
- There are standard, easy-to-use implementations.

Disadvantages:
- Pools are only useful for programs that operate in stages;
- Pools often do not cooperate well with third-party libraries;
- If the structure of the program changes, the pools may have to be reworked, which may lead to a redesign of the memory management system;
- You must remember which pool to allocate from; if you get it wrong, the error is hard to catch.
Garbage collection
Garbage collection is the fully automatic detection and disposal of data that is no longer used. A garbage collector starts from a set of "base" data that is presumed live: the stack, global variables, and registers. It then tries to trace every piece of data reachable from them. Everything the collector finds is considered live; everything it cannot reach is garbage that can be destroyed and reused. To manage memory effectively, many kinds of garbage collectors need to know the layout of pointers inside data structures, and therefore have to be part of the language itself. The main kinds of collectors are:
- Copying: memory is divided into two halves, and data lives in only one of them. Periodically the collector copies the "live" data into the other half, after which the roles of the halves are swapped; whatever was not copied is garbage. Such collectors usually have to stop the program while copying runs ("stop and copy"), and since every live object moves, all pointers to it must be updated, which is why the collector must understand the data layout.
- Mark and sweep: each piece of data carries a flag. Periodically all flags are reset to 0, the collector walks everything reachable and sets the flags to 1, and whatever still has flag 0 is garbage and is freed.
- Incremental: incremental collectors do not walk the entire data set on every collection. Scanning all of memory at once causes noticeable pauses (while the program is stopped); incremental collection avoids these long interruptions by doing a little work at a time.
- Conservative: conservative collectors do not need to know anything about the layout of your structures. They simply assume that any value that could be a pointer into allocated memory is one. This can produce false positives: for instance, an int whose value happens to coincide with the address of an allocated block will keep that block alive. (Translator's note: such an int is indistinguishable from a pointer, so the memory is retained.) False positives cost some extra memory, but in exchange a conservative collector can be bolted onto almost any program.
The best-known conservative collector is the Hans Boehm collector. It is free of charge and supports both incremental and parallel operation. It can be used as a drop-in replacement for the malloc/free API (when built with --enable-redirect-malloc), and combined with the LD_PRELOAD trick we used earlier (a "trick" because it substitutes our library's symbols for the system ones) it can take over memory management in an existing binary. If an unmodified program leaks memory, this is a way to track the leaks down; Mozilla-type programs have been analyzed this way. The collector works on both Windows and Linux.
Advantages:
- You never have to worry about double frees or about the lifetime of objects;
- With some collectors, you can use the same API as for ordinary allocation.

Disadvantages:
- With most collectors, you have no say in when your memory is freed;
- In many cases, garbage collection is slower than other forms of memory management;
- Bugs caused by defective garbage collectors are very hard to track down;
- If you forget to reset pointers you no longer use to null, memory will still leak.

Conclusion
Everything comes with trade-offs: performance, ease of use, ease of implementation, threading support, and more. There are many memory management schemes, and each has something to offer. The table below compares the strategies discussed in this article.
Table 1: Comparison of memory allocation strategies

| Strategy | Allocation speed | Deallocation speed | Cache locality | Ease of use | Generality | Real time | SMP threads |
|---|---|---|---|---|---|---|---|
| Custom allocator | depends | depends | depends | very hard | none | depends | depends |
| Simple allocator | fast for small amounts | very fast | poor | easy | high | no | no |
| GNU malloc | moderate | fast | moderate | easy | high | no | moderate |
| Hoard | moderate | moderate | moderate | easy | high | no | yes |
| Reference counting | N/A | N/A | excellent | moderate | moderate | yes (depends on malloc) | depends |
| Pooling | moderate | very fast | excellent | moderate | moderate | yes (depends on malloc) | depends |
| Garbage collection | moderate (slow during collection) | moderate | poor | moderate | moderate | no | rarely |
| Incremental garbage collection | moderate | moderate | moderate | moderate | moderate | no | rarely |
| Incremental conservative garbage collection | moderate | moderate | moderate | easy | high | no | rarely |
Resources:
- The obstacks section of the GNU C Library manual gives the programming interface for obstacks.
- The Apache Portable Runtime documentation describes the interface to their pooled allocator.
- Doug Lea's Malloc is one of the more popular memory allocators.
- BSD Malloc is used in most BSD-based systems.
- ptmalloc is derived from Doug Lea's malloc and is used in GLIBC.
- GNU Memory-Mapped Malloc (part of GDB) is a malloc implementation based on mmap().
- GNU Obstacks (part of GNU Libc) is the most widely installed pooled allocator, since it's on every glibc-based system.
- Apache's pooled allocator (in the Apache Portable Runtime) is the most widely used pooled allocator.
- NetBSD also has its own pooled allocator.
- The Loki C++ Library has a number of generic patterns implemented for C++, including smart pointers and a custom small-object allocator.
- The Hans Boehm Conservative Garbage Collector is the most popular open-source garbage collector, which can be used in regular C/C++ programs.
- A New Virtual Memory Implementation for Berkeley UNIX by Marshall Kirk McKusick and Michael J. Karels discusses BSD's VM system.
- Mel Gorman's Linux VM Documentation discusses the Linux VM system.
- Malloc in Modern Virtual Memory Environments by Poul-Henning Kamp talks about BSD's malloc and how it interacts with BSD virtual memory.
- Design of a General Purpose Memory Allocator for the 4.3BSD UNIX Kernel by Marshall Kirk McKusick and Michael J. Karels discusses kernel-level allocators.
- A Memory Allocator by Doug Lea gives an overview of the design and implementation of allocators, including design choices and tradeoffs.
- Memory Management for High-Performance Applications by Emery D. Berger talks about custom memory management and how it affects high-performance applications.
- Some Storage Management Techniques for Container Classes by Doug Lea describes writing custom allocators for C++ classes.
- The Measured Cost of Garbage Collection by Benjamin Zorn presents hard data on garbage allocation and performance.
- Memory Allocation Myths and Half-Truths by Hans-Juergen Boehm presents the myths surrounding garbage collection.
- Space Efficient Conservative Garbage Collection by Hans-Juergen Boehm is a paper describing his garbage collector for C/C++.
- The Memory Management Reference contains numerous references and links to papers on memory management.
- OOPS Group Papers on Memory Management and Memory Hierarchies is a great set of technical papers on the subject.
- Memory Management in C++ discusses writing custom allocators for C++.
- Programming Alternatives: Memory Management discusses several choices programmers have for memory management.
- Richard Jones's Garbage Collection Bibliography has links to any paper you ever wanted about garbage collection.
- C++ Pointers and Dynamic Memory Management by Michael Daconta covers numerous techniques on memory management.
- Memory as a Programming Concept in C and C++ by Frantisek Franek discusses techniques and tools for developing effective memory use and gives the role of memory-related errors the prominence it deserves in computer programming.
- Garbage Collection: Algorithms for Automatic Dynamic Memory Management by Richard Jones and Rafael Lins describes the most common algorithms for garbage collection in use.
- Section 2.5, «Dynamic Storage Allocation» from Fundamental Algorithms , Volume 1 of The Art of Computer Programming by Donald Knuth describes several techniques for implementing basic allocators.
- Section 2.3.5, «Lists and Garbage Collection» from Fundamental Algorithms , volume 1 of The Art of Computer Programming by Donald Knuth discusses garbage collection algorithms for lists.
- Chapter 4, «Small Object Allocation» from Modern C++ Design by Andrei Alexandrescu describes a high-speed small-object allocator that is quite a bit more efficient than the C++ standard allocator.
- Chapter 7, «Smart Pointers» from Modern C++ Design by Andrei Alexandrescu describes the implementation of smart pointers in C++.
- Jonathan's Chapter 8, «Intermediate Memory Topics» from Programming from the Ground Up contains an assembly-language version of the simple allocator used in this article.
- Self-manage data buffer memory (developerWorks, January 2004) outlines a pseudo-C implementation of a self-managing, abstract data buffer for managing memory.
- A framework for the user defined malloc replacement feature (developerWorks, February 2002) shows how to take advantage of a facility in AIX that lets you replace the memory subsystem with one of your own design.
- Mastering Linux debugging techniques (developerWorks, August 2002) describes debugging methods you can use in four different scenarios: segmentation faults, memory overruns, memory leaks, and hangs.
- In Handling memory leaks in Java programs (developerWorks, February 2001), learn what causes Java memory leaks and when they should be of concern.
- In the developerWorks Linux zone , find more resources for Linux developers, and scan our most popular articles and tutorials .
- See all Linux tips and Linux tutorials on developerWorks.
- Stay current with developerWorks technical events and Webcasts .