Linux new/delete, malloc/free Large memory block

Here is the problem, followed by a solution.


We have a Linux system (Kubuntu 7.10) that runs many CORBA server processes.
The server software uses the glibc library for memory allocation.
The Linux PCs have 4 GB of physical memory, and swapping is disabled for performance reasons.

When a request to process data is received, one of the server processes allocates a large data buffer (using the standard C++ operator new). The buffer size varies with the request parameters but is typically around 1.2 GB and can be up to 1.9 GB. When the request completes, the buffer is freed with delete.

This works fine across multiple consecutive requests that allocate buffers of the same size, or when a request allocates a smaller buffer than the previous one.
The memory does appear to be freed; otherwise buffer allocations would eventually start failing after a few requests.
In any case, tools such as KSysGuard show the buffer memory being allocated and freed for each request.

The problem occurs when a request requires a larger buffer than the previous one.
In this case, operator new throws a std::bad_alloc exception.
It is as if the memory freed by the first allocation cannot be reallocated, even though enough free physical memory is available.

If I terminate and restart the server process after the first operation, the second request for a larger buffer succeeds. That is, terminating the process appears to fully release the freed memory back to the system.

Can anyone explain what might be happening here?
Could it be some kind of fragmentation or mapping table size issue?
I’m thinking about replacing new/delete with malloc/free and using mallopt to adjust how memory is returned to the system.

BTW – I’m not sure whether it’s related to our issue, but the server uses Pthreads; a thread is created and destroyed for every request.

Solution

If this is a 32-bit machine, you have 3 GB of address space at your disposal; the remaining 1 GB is reserved for the kernel. That 3 GB also has to hold shared libraries, the executable, data segments, and so on. You should look at /proc/<pid>/maps to see how the address space is laid out.

How much physical memory is actually available is hard to say; it is shared by the kernel, system processes, and your other processes. But assuming those don’t add up to more than 1 GB, your 3 GB of address space can still be fully backed.

What can happen is fragmentation:

0Gb                                                     3Gb
---------------------~------------------------------------
| Stuff | Heap,1.2Gb allocated stuff | free heap   | Stack|
---------------------~------------------------------------

Then you free that large object, but in the meantime some other memory
allocation has taken place, leaving you with:

0Gb                                                         3Gb
---------------------~------------------------------------------
| Stuff | Heap,1.2Gb free |small object(s) | free heap   | Stack|
---------------------~------------------------------------------

If you try to allocate a larger object now, it won’t fit into the freed 1.2 GB hole,
and the free heap region above the small objects may not be large enough either.

If you use the stack heavily, the stack may also grow and take space that could
otherwise be used by the heap, although most distributions limit the stack to 8–10 MB by default.

Switching to malloc/realloc does not help with this. However, if you know the maximum buffer size you will ever need, you can reserve that much at startup. That block should never be freed or deleted; it should be reused instead. Whether this causes you trouble elsewhere is hard to say, though, since less space is left for other objects.
