4 Replies - 9924 Views - Last Post: 25 October 2012 - 11:49 AM

#1 aresh

  • It's a 16-Bit World!

Reputation: 273
  • Posts: 4,163
  • Joined: 08-January 12

Heap Compaction

Posted 13 October 2012 - 02:54 AM

Well, I borrowed this fancy title from a book, so don't blame me if it doesn't explain the question.

Coming to the question: what I want to know is that after many calls to new and delete, the heap will obviously become fragmented, and the contiguous blocks of free memory that remain will be small. What if I want to allocate a large amount of memory, but there is no contiguous block of that size left? I can't allocate it. So I would need to somehow de-fragment the heap.

Although I don't exactly require this, I am curious about it. Can it be done? If yes, then how? I would certainly like to try making one :D


Replies To: Heap Compaction

#2 jimblumberg

Reputation: 4025
  • Posts: 12,432
  • Joined: 25-December 09

Re: Heap Compaction

Posted 13 October 2012 - 06:00 AM

Quote

what I want to know is that after many calls to new and delete, the heap will obviously become fragmented, and the contiguous blocks of free memory that remain will be small.

If you are running into fragmentation issues with the amount of memory available today, then you should probably be looking at why your program is making such large allocations. If you're on a space-constrained system, then you probably shouldn't be using dynamic memory in the first place.

Quote

Although I don't exactly require this, I am curious about it. Can it be done? If yes, then how? I would certainly like to try making one

Yes, it can be done; see this link for an example: A Compacting Real-Time Memory Management System. Googling "memory compaction" will turn up many more links that may be of interest.


Jim
Was This Post Helpful? 1

#3 ishkabible

  • spelling expret

Reputation: 1622
  • Posts: 5,709
  • Joined: 03-August 09

Re: Heap Compaction

Posted 14 October 2012 - 01:30 PM

You also don't necessarily have to use the standard pointer types to get pointer-like behavior. If, say, you make a class that acts just like a pointer but is actually represented by a double pointer, the allocator can maintain a reference to the inner pointer and update it whenever it compacts memory. You take a slight performance hit on every access, but in exchange you can keep memory almost perfectly compact with very little extra memory usage.

This is such a useful idea that processors with paging have what's called an MMU, or "memory management unit", which gives you something like this with hardware support, and even allows a pointer to an apparently contiguous chunk of memory to in reality be fragmented across physical memory. This gives you a 'virtual' address space, and each process can have its own virtual address space. (Memory can also be paged out to the hard drive, with an interrupt handling what the CPU would otherwise treat as a segfault but is really just a massive cache miss; this is how a 32-bit Windows process can be promised its own 2GB user-mode address space regardless of physical RAM.) The problem is that the virtual address space itself can become fragmented (which is actually the issue here), so you end up having to do this without hardware support, using some sort of abstraction like a double pointer or a page-offset table.

Edit:
There are some even more refined ideas in that link Jim provided.
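The double-pointer idea above can be sketched in a few lines. This is a minimal toy, not production code: the class names (`CompactingArena`, `Slot`) are made up for illustration, allocation is a simple bump pointer, dead slots are never recycled, and user code must go through handles (slot indices) rather than raw pointers so that `compact()` is free to move blocks and patch the table behind their backs.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Toy compacting arena: callers hold handles (indices into a slot table),
// never raw addresses, so compact() may move blocks and fix up the table.
class CompactingArena {
    struct Slot { std::size_t offset, size; bool live; };
    std::vector<char> heap_;
    std::vector<Slot> slots_;   // one entry per handle ever issued (never recycled here)
    std::size_t top_ = 0;       // bump-allocation watermark
public:
    explicit CompactingArena(std::size_t bytes) : heap_(bytes) {}

    // Returns a handle (slot index); allocation is a simple bump.
    std::size_t alloc(std::size_t n) {
        assert(top_ + n <= heap_.size());
        slots_.push_back({top_, n, true});
        top_ += n;
        return slots_.size() - 1;
    }

    void free(std::size_t h) { slots_[h].live = false; }

    // The "double pointer": every dereference goes through the table.
    char* get(std::size_t h) { return heap_.data() + slots_[h].offset; }

    // Slide live blocks down over the holes and update their offsets.
    void compact() {
        std::size_t dst = 0;
        for (Slot& s : slots_) {
            if (!s.live) continue;
            std::memmove(heap_.data() + dst, heap_.data() + s.offset, s.size);
            s.offset = dst;
            dst += s.size;
        }
        top_ = dst;  // everything above dst is contiguous free space again
    }

    std::size_t used() const { return top_; }
};
```

The extra indirection on `get()` is the performance hit mentioned above; the payoff is that after `compact()` the free space is one contiguous run.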

This post has been edited by ishkabible: 14 October 2012 - 01:43 PM

Was This Post Helpful? 1

#4 baavgai

  • Dreaming Coder

Reputation: 5805
  • Posts: 12,644
  • Joined: 16-October 07

Re: Heap Compaction

Posted 14 October 2012 - 02:19 PM

aresh, on 13 October 2012 - 05:54 AM, said:

after many calls to new and delete, the heap will obviously become fragmented, and the contiguous blocks of free memory that remain will be small.


Perhaps. It really depends on the pattern of new and delete. If, for instance, every new and delete involved objects of the same size, then fragmentation is effectively a non-issue: any freed block can satisfy any later request.

Also, under the covers, you tend to get memory from the OS by the page. That is, there is a minimum chunk the OS will hand the program, and the allocator carves that up until it runs out and asks for more.

If you want to handle your own memory management, go for it. It's kind of fun. But don't assume the inner workings suck at it, because they're probably more clever than you think.
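The same-size point above is easy to see in code. Here is a minimal fixed-size free-list pool (the class name `FixedPool` is made up for illustration): freed blocks are linked through their own storage, and because every block is the same size, any freed block satisfies any later request, so fragmentation cannot occur.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed-size free-list pool: freed blocks are linked through their own
// bytes, and any freed block fits any later request.
class FixedPool {
    std::vector<char> storage_;
    std::size_t block_;
    void* free_list_ = nullptr;
public:
    FixedPool(std::size_t block, std::size_t count)
        : storage_(block * count), block_(block) {
        assert(block_ >= sizeof(void*));  // need room for the link
        // Thread every block onto the free list, lowest address first.
        for (std::size_t i = count; i-- > 0; ) {
            void* p = storage_.data() + i * block_;
            *static_cast<void**>(p) = free_list_;
            free_list_ = p;
        }
    }
    void* alloc() {
        if (!free_list_) return nullptr;   // pool exhausted
        void* p = free_list_;
        free_list_ = *static_cast<void**>(p);
        return p;
    }
    void free(void* p) {                   // push back onto the list
        *static_cast<void**>(p) = free_list_;
        free_list_ = p;
    }
};
```

This is roughly the trick behind object pools: allocation and deallocation are both a couple of pointer moves, and there is nothing to compact.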
Was This Post Helpful? 1

#5 vividexstance

  • D.I.C Lover

Reputation: 656
  • Posts: 2,247
  • Joined: 31-December 10

Re: Heap Compaction

Posted 25 October 2012 - 11:49 AM

Some implementations of new/delete are just wrappers around malloc/free. Some implementations keep a list of freed memory and, where possible, may take free blocks that sit next to each other and combine them. I was recently reading about how malloc/free works and found this site, which may be of interest: A Memory Allocator.
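That combining step is usually called coalescing. Here is a small sketch of the idea in isolation (the `Block` struct and `coalesce` function are made up for illustration; real allocators do this incrementally with boundary tags rather than by sorting a whole list): free blocks are ordered by address, and any block that starts exactly where the previous one ends is merged into it.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct Block { std::size_t offset, size; };

// Merge free blocks that are adjacent in the address space, the way
// some malloc implementations coalesce neighbors on free.
std::vector<Block> coalesce(std::vector<Block> free_list) {
    std::sort(free_list.begin(), free_list.end(),
              [](const Block& a, const Block& b) { return a.offset < b.offset; });
    std::vector<Block> merged;
    for (const Block& b : free_list) {
        if (!merged.empty() &&
            merged.back().offset + merged.back().size == b.offset)
            merged.back().size += b.size;   // adjacent: grow the previous block
        else
            merged.push_back(b);            // gap: start a new free block
    }
    return merged;
}
```

Note that coalescing only repairs fragmentation between neighboring free blocks; it never moves live allocations the way compaction does.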

This post has been edited by vividexstance: 25 October 2012 - 11:49 AM

Was This Post Helpful? 1
