Origins of Garbage Collection

March 12, 2004

Presented by: Richard Cobbe

Notes by: Greg Pettyjohn

``Memoria est omnis divisa in partes duas'' (``All memory is divided into two parts'') - J. Caesar (sort of).

Some questions were posed to motivate the topic; the final answer given was taken as best.

1  McCarthy, April 1960

Old computers (such as the IBM 704) have small memories; programs can actually run out of memory. McCarthy assumes a homogeneous heap: specifically, the heap consists entirely of cons cells.

Diagram showing a mark-sweep example.

Garbage collection is a conservative approximation: collectors preserve all reachable data rather than just live data, since liveness is undecidable. (A cell that is still reachable may nevertheless never be used again.) McCarthy does not appear to mention this in his paper.

In his paper, McCarthy assumes the heap contains no cycles, but cycles are in fact not a problem for mark-sweep; McCarthy's actual implementation probably did have to deal with them.

Mark-sweep is most efficient if there is a large proportion of garbage.
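
As a concrete illustration, here is a minimal mark-sweep sketch in C over a homogeneous heap of two-field cons cells, in the spirit of (but not taken from) McCarthy's collector; the fixed-size heap and all names are assumptions of the sketch:

    #include <stddef.h>

    #define HEAP_SIZE 1024

    typedef struct Cell {
        struct Cell *car, *cdr;   /* NULL or a pointer to another cell */
        int marked;               /* mark bit */
        int in_use;               /* allocated vs. free */
    } Cell;

    static Cell heap[HEAP_SIZE];

    /* Mark phase: trace everything reachable from a root.  The check on
       the mark bit means cycles terminate the recursion. */
    static void mark(Cell *c) {
        if (c == NULL || c->marked) return;
        c->marked = 1;
        mark(c->car);
        mark(c->cdr);
    }

    /* Sweep phase: visit every cell in the heap, freeing the unmarked
       ones and clearing mark bits for the next collection. */
    static void sweep(void) {
        for (size_t i = 0; i < HEAP_SIZE; i++) {
            if (heap[i].marked)
                heap[i].marked = 0;   /* survivor: reset for next time */
            else
                heap[i].in_use = 0;   /* garbage: back to the free pool */
        }
    }

    void collect(Cell **roots, size_t nroots) {
        for (size_t i = 0; i < nroots; i++)
            mark(roots[i]);
        sweep();
    }

Note that the sweep phase visits every cell whether or not it is reachable, which is why the cost is proportional to the size of the whole heap and why the technique pays off best when most of the heap is garbage.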

Question (Sam Tobin-Hochstadt): Does McCarthy only collect when the system is almost out of memory?

Answer: Yes.

2  Collins, December 1960

Problem with McCarthy: efficiency drops off as the amount of reachable memory increases, i.e., as the proportion of garbage shrinks.

Diagram showing the ``owner'' and ``borrower'' pointer discipline in manual memory management, due to Gelernter et al. When two lists share a common tail, one of the lists is dominant, and that list's pointer to the shared tail is called an ``owner'' pointer; the other list's pointer is called a ``borrower'' pointer. The tail of the list is freed if and only if the owner pointer is freed.

Question: What's wrong with this?

Answer: Suppose the owner is freed first; the borrower is then left dangling.

So this invariant is impractical to maintain, but it serves as motivation for reference counting.

Reference counting: maintain a reference count for each allocated cell, keeping track of how many references to the cell exist. When a new reference is created, increase the count; when a reference is removed, decrease it. When the count reaches zero, free the cell. (A sketch in C appears after the lists below.)

Advantages:

- The work of collection is spread evenly across the computation, so there are no long pauses.
- A cell is reclaimed as soon as the last reference to it disappears.

Disadvantages:

- Cyclic structures are never reclaimed: the counts within a cycle never drop to zero.
- Every pointer assignment carries count-maintenance overhead, and every cell must carry space for its count.
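
Here is a minimal sketch of reference-counted cons cells in C; the names retain, release, and cons are illustrative, not Collins's:

    #include <stdlib.h>

    typedef struct RCell {
        int refcount;             /* number of live references to this cell */
        struct RCell *car, *cdr;
    } RCell;

    RCell *retain(RCell *c) {
        if (c) c->refcount++;     /* a new reference was created */
        return c;
    }

    void release(RCell *c) {
        if (c == NULL || --c->refcount > 0) return;
        release(c->car);          /* count hit zero: drop our references... */
        release(c->cdr);
        free(c);                  /* ...and free the cell immediately */
    }

    RCell *cons(RCell *car, RCell *cdr) {
        RCell *c = malloc(sizeof *c);
        if (!c) abort();
        c->refcount = 1;          /* the caller holds the initial reference */
        c->car = retain(car);
        c->cdr = retain(cdr);
        return c;
    }

The cycle problem is visible here: if a cell's cdr is ever made to point back to the cell itself, its count can never reach zero, and release will never free it.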

3  Minsky, 1963

Minsky has a neat copying collector that uses the persistent drum as to-space: the collector copies all reachable data from program memory to auxiliary storage using a depth-first search.

Minsky notices that, because the search is depth-first, linked lists wind up with their cons cells in consecutive order. He observes that ``there are probably some important applications of this.''

McCarthy's mark-sweep algorithm uses time proportional to the size of the allocated heap, while Minsky's is proportional only to the size of the reachable heap.

Obviously, Minsky's algorithm wouldn't be practical on a modern PC where the disk is several orders of magnitude slower than RAM.

Minsky's code is highly optimized and hard to understand.

4  Fenichel and Yochelson, 1969

By 1969, computers are beginning to have virtual memory systems.

Richard posed a question here to motivate the idea that memory was no longer limited: it was no longer the case that you would run out of addresses.

The effect this has on garbage collection is that the virtual memory system and the garbage collector can now interfere with each other. In particular, choosing a good collection time becomes critical: if collection happens too early, there isn't enough garbage and the collection is inefficient; if it happens too late, the heap grows too large and causes unnecessary paging.

Fenichel and Yochelson's collector is a two-space copying collector motivated by Minsky's. Rather than copying to auxiliary storage, they divide RAM into two halves. When from-space is nearly full, they use a depth-first search to copy the reachable objects into to-space; when copying is finished, the roles of from-space and to-space are reversed and allocation resumes.

The algorithm is a depth-first search. It is implemented recursively and thus builds a stack during collection. Fenichel and Yochelson suggest that it could perhaps be transformed into an iterative algorithm.
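
A sketch of the recursive copy in C, under the same homogeneous-heap assumption; the forwarding field and all names are assumptions of the sketch, not Fenichel and Yochelson's code:

    #include <stddef.h>

    typedef struct FCell {
        struct FCell *car, *cdr;
        struct FCell *forward;    /* NULL, or the cell's new address */
    } FCell;

    static FCell *to_space;       /* base of to-space */
    static size_t next_free;      /* bump-allocation index into to-space */

    static FCell *copy(FCell *c) {
        if (c == NULL) return NULL;
        if (c->forward) return c->forward;   /* already copied: share it */

        FCell *new = &to_space[next_free++];
        c->forward = new;         /* install the forwarding pointer before
                                     recurring, so shared structure and
                                     cycles are handled correctly */
        new->forward = NULL;
        new->car = copy(c->car);  /* depth-first: the cells of a list end
                                     up at consecutive addresses, as Minsky
                                     observed */
        new->cdr = copy(c->cdr);
        return new;
    }

The recursion is exactly the stack in question: each pending copy of a cdr holds a frame until the rest of the list has been copied.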

5  Cheney, 1970

Cheney comes up with an iterative, breadth-first copying collector. From-space and to-space are as before. Two pointers, called ``scan'' and ``next'', keep track of the ``frontier'' in to-space; initially, both point to the beginning of to-space. First, the objects directly referenced by the root set are copied to to-space. Exact copies are made, so pointers into from-space are preserved at this step. After each object is copied, the next pointer is advanced to the free space following the most recently copied object, and the original object is modified to hold a ``forwarding'' pointer to its new copy.

After this first round of copying, the scan pointer advances through to-space. Whenever the scan pointer finds a pointer into from-space, the object it refers to is copied to to-space and the pointer is updated to point to the new copy; objects that have already been copied are not copied again, and the pointer is simply updated from the forwarding pointer in the original. The algorithm terminates when the scan pointer catches up with the next pointer.
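
Something like the following C sketch captures the algorithm; the cell layout and root-set interface are assumptions of the sketch:

    #include <stddef.h>

    typedef struct QCell {
        struct QCell *car, *cdr;
        struct QCell *forward;    /* NULL until the cell has been copied */
    } QCell;

    static QCell *from_space, *to_space;

    /* Copy one cell to to-space unless it has already been copied, and
       return its new address; *next is the bump-allocation pointer. */
    static QCell *evacuate(QCell *c, QCell **next) {
        if (c == NULL) return NULL;
        if (c->forward) return c->forward;   /* already copied */
        QCell *new = (*next)++;
        *new = *c;                /* an exact copy: its fields still point
                                     into from-space for now */
        new->forward = NULL;
        c->forward = new;         /* leave a forwarding pointer behind */
        return new;
    }

    void cheney_collect(QCell **roots, size_t nroots) {
        QCell *scan = to_space, *next = to_space;

        /* First round: copy the objects the roots refer to directly. */
        for (size_t i = 0; i < nroots; i++)
            roots[i] = evacuate(roots[i], &next);

        /* Scan to-space left to right, evacuating every from-space
           pointer found.  The region between scan and next is exactly
           the queue of a breadth-first search. */
        while (scan < next) {
            scan->car = evacuate(scan->car, &next);
            scan->cdr = evacuate(scan->cdr, &next);
            scan++;
        }

        /* scan has caught up with next: collection is done.  Swap the
           roles of the two semispaces and resume allocation. */
        QCell *tmp = from_space; from_space = to_space; to_space = tmp;
    }

No auxiliary stack or queue is needed: the copied-but-not-yet-scanned portion of to-space serves as the work list.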

Cheney's algorithm answers Fenichel and Yochelson's conjecture that their copying collector could be made iterative. Since it is iterative, the collector requires no additional space during copying. The drawback is that the copy now proceeds in breadth-first rather than depth-first order. Search order is not significant in itself, but it affects the locality of reference of the result: with depth-first copying, the cells of a list end up consecutive; with breadth-first copying, cells from different lists get interleaved.
