If you’ve ever watched a Python app’s memory usage creep up and up, you know it’s not always simple.
Python brilliantly handles memory management so you don't usually have to think about it.
But when a memory leak strikes, that 'magic' suddenly feels like a black box.
As you move to a graduate level of expertise, you need to know what's in that box.
Understanding this is crucial for writing high-performance code and debugging those 'why is this program using gigs of RAM?' moments.
Today, we're diving into the heart of Python's memory management.
We'll start with the foundational Python Data Model.
From there, we'll explore the two-part system that keeps your programs clean: Reference Counting and the Generational Garbage Collector.
This isn't just academic. This knowledge is your map and compass for navigating application performance.
Target Audience: This article is for intermediate to advanced Python developers, data scientists, and software engineers. If you want to move beyond just using Python and start understanding it, you're in the right place. If you've ever wondered what del really does or how Python cleans up after itself, read on.
Part 1: The Foundation - The Python Data Model
To appreciate garbage collection, we first have to understand how Python sees variables and data.
The core concept is this: variables are names, not boxes.
In many languages, a = 42 creates a memory box named a and puts 42 inside.
In Python, it's a bit different.
- It creates an instance (or object) to represent the value `42`.
- It creates a reference (a symbolic name) called `a`.
- It points the reference `a` at the instance `42`.
Every single instance at runtime has three defining characteristics:
- Data Type (Type): The blueprint. Accessed via `type(obj)` (e.g., `int`, `str`, or a custom `class`).
- Value: The content stored (e.g., `42`, `"Hello"`, or `[1, 2, 3]`).
- Identity: A unique, unchangeable ID for that instance, visible via the `id()` function. In CPython (the implementation you're most likely using), this ID happens to be the object's memory address, but the language only guarantees that it's unique, not that it's an address.
Let's prove this in the Python shell.
# This creates an integer instance with value 1337
# and 'a' becomes a reference to it.
>>> a = 1337
# Let's inspect its "Holy Trinity"
>>> type(a)
<class 'int'>
>>> a # Accessing the value
1337
>>> id(a)
4422521936 # This will be different on your machine
# Now, let's create a new reference 'b'
>>> b = a
# 'b' is just another name. It points to the *exact same* instance.
>>> id(b)
4422521936 # Notice: The ID is identical!
>>> a is b
True # 'is' is the identity comparator. It's the same as id(a) == id(b)
This is why we say Python uses "call by sharing."
When you pass a list to a function, you're not passing a copy of the list. You're passing a copy of the reference.
That's why both the original reference and the function's parameter point to the exact same list.
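A minimal sketch of what call by sharing means in practice (the function and variable names here are hypothetical, chosen for illustration):

```python
def append_item(items):
    # 'items' is a second reference to the very same list object
    items.append(4)

data = [1, 2, 3]
append_item(data)
print(data)  # [1, 2, 3, 4] - the caller sees the mutation
```

Because only the reference was copied, the mutation inside the function is visible to the caller. If you need an independent list, you must copy it explicitly (e.g., `items[:]` or `list(items)`).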
Optimization: Object Interning
You may have seen this in practice: `a is b` evaluates to `True` for `a = 100` and `b = 100`, but to `False` for `a = 10000` and `b = 10000`.
This isn't a bug! It's a memory optimization called interning.
To save memory and speed up comparisons, CPython pre-allocates and re-uses instances for a few common things:
- Small Integers: All integers from `-5` to `256` are pre-allocated singletons. Any reference to `100` will always point to the same instance.
- Short Strings: Many simple strings (such as identifiers) are also interned.
# Small integers are interned (point to the same object)
>>> a = 100
>>> b = 100
>>> id(a)
4389985232
>>> id(b)
4389985232
>>> a is b
True
# Large integers are (usually) created as new objects
>>> x = 10000
>>> y = 10000
>>> id(x)
4423322064
>>> id(y)
4423322096
>>> x is y
False
This is a CPython detail, but it's a perfect example of how the Data Model enables powerful optimizations.
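Beyond the automatic cases, you can request interning explicitly with `sys.intern`, which can pay off when you compare huge numbers of repeated strings. A small sketch; the strings are built at runtime so the compiler can't merge them as constants:

```python
import sys

# Built at runtime, so these are two distinct str instances
a = "".join(["hello", " ", "world"])
b = "".join(["hello", " ", "world"])
print(a is b)  # False: equal values, different identities

# sys.intern maps equal strings onto one shared instance
a = sys.intern(a)
b = sys.intern(b)
print(a is b)  # True: identity comparison is now a cheap pointer check
```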
The Numeric Cost: sys.getsizeof()
This reference-based model has real, numeric consequences.
A Python list doesn't store the objects themselves. It just stores references to them.
That's why a single list can hold an int, a str, and a custom object all at once.
We can use the sys module to see the memory footprint.
import sys
# An empty list already has a base size.
# On my 64-bit system, this is 56 bytes.
>>> empty_list = []
>>> sys.getsizeof(empty_list)
56
# Let's add one item. A reference (pointer) on a 64-bit
# system takes 8 bytes.
>>> empty_list.append(1)
>>> sys.getsizeof(empty_list)
88 # 56 (base) + 32 (over-allocated for 4 * 8-byte references; exact growth varies by CPython version)
# Note: Python pre-allocates space to avoid resizing every time.
# Let's add a few more
>>> empty_list.append(2)
>>> empty_list.append(3)
>>> empty_list.append(4)
>>> sys.getsizeof(empty_list)
88 # 56 (base) + 32 (for 4 * 8-byte references)
# Add one more, triggering another pre-allocation
>>> empty_list.append(5)
>>> sys.getsizeof(empty_list)
120 # 56 (base) + 64 (re-allocated for 8 * 8-byte references)
# The list's size is its overhead *plus* space for references.
# It does NOT include the size of the objects it refers to.
>>> a = 999999999999999999
>>> b = "a very long string" * 100
>>> my_list = [a, b]
# The list size just accounts for two 8-byte references
>>> sys.getsizeof(my_list)
72 # 56 (base) + 16 (for 2 * 8-byte references)
# The *actual* objects take up much more space
>>> sys.getsizeof(a)
32
>>> sys.getsizeof(b)
2149
Understanding this is the first step.
You aren't managing the memory of a. You're managing the lifecycle of the reference a.
Optimizing Instance Memory: __slots__
By default, every instance of a custom class stores its attributes in a __dict__.
This is super flexible (you can add new attributes on the fly!), but it's also memory-intensive.
That dictionary has a lot of overhead.
If you're creating millions of instances and you know what attributes they will have, you can use __slots__.
__slots__ tells Python not to use a __dict__.
Instead, it allocates a fixed-size, array-like structure just for those attributes.
Example: The Memory-Saving Difference
import sys

# A standard class using __dict__
class ObjectWithDict:
    def __init__(self, x, y):
        self.x = x
        self.y = y

# A class using __slots__
class ObjectWithSlots:
    __slots__ = ['x', 'y']  # Define the exact attributes

    def __init__(self, x, y):
        self.x = x
        self.y = y
# --- Compare the memory usage ---
# Create one instance of each
>>> dict_obj = ObjectWithDict(1, 2)
>>> slots_obj = ObjectWithSlots(1, 2)
# Check the instance size (on 64-bit Python)
>>> sys.getsizeof(dict_obj)
48 # Base object size
>>> sys.getsizeof(slots_obj)
48 # Base object size - wait, they're the same?
# The size is for the *instance itself*. The attributes
# are in the __dict__, which is a separate object.
>>> sys.getsizeof(dict_obj.__dict__)
104 # This is the *real* overhead per instance!
# The slots object has no __dict__
>>> slots_obj.__dict__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'ObjectWithSlots' object has no attribute '__dict__'
# Total memory per dict-based object: 48 (base) + 104 (dict) = 152 bytes
# Total memory per slots-based object: 48 bytes
# If we create 1,000,000 instances:
# Dict version: ~152 MB
# Slots version: ~48 MB
# This is a massive saving!
Trade-offs: So, what's the catch?
The cost is flexibility. You can't add new attributes to a __slots__ object that aren't in the __slots__ list.
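The restriction is easy to demonstrate with a hypothetical `Point` class:

```python
class Point:
    __slots__ = ('x', 'y')  # only these attributes are allowed

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.z = 3  # not in __slots__, so there is nowhere to store it
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```

If you subclass a `__slots__` class without defining `__slots__` in the subclass, the subclass gets a `__dict__` again and the savings vanish, so the whole hierarchy has to cooperate.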
Part 2: The Primary Mechanism - Automatic Reference Counting
This brings us to the big question: if multiple references can point to the same instance, how does Python know when to delete it?
Unlike languages where you manually manage memory, Python takes care of this automatically using Garbage Collection (GC).
The primary and most basic method Python uses is Reference Counting.
The mechanism is beautifully simple:
- Counter Maintenance: Every instance internally keeps a counter.
- Increment: The count goes up by one every time a new reference points to that instance.
  - `a = 1337` (instance `1337` is created, ref count = 1)
  - `b = a` (ref count for `1337` is now 2)
  - `my_list = [a]` (ref count for `1337` is now 3)
- Decrement: The count goes down by one whenever a reference is destroyed. A reference is destroyed when it's reassigned (`b = 42`), goes out of scope (like at the end of a function), or is explicitly deleted.
The del Statement and sys.getrefcount()
This is the most misunderstood part.
Many people think del v1 deletes the object. It doesn't.
It just unlinks the name v1 from the object. The object itself is only deleted when nothing refers to it anymore.
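A tiny sketch makes the distinction concrete:

```python
a = [1, 2, 3]
b = a        # two names, one list instance
del a        # unlinks the *name* 'a' only

print(b)     # [1, 2, 3]: the list survives, still reachable via 'b'
```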
When the reference count of an instance drops to zero, it means no names are pointing to it.
The instance is now unreachable garbage.
CPython will immediately deallocate it and reclaim its memory.
We can watch this happen with sys.getrefcount().
import sys
# Create an instance (an empty list)
>>> my_object = []
# Let's check its reference count.
# We expect 1, but...
>>> sys.getrefcount(my_object)
2
Why 2?
The reference my_object counts as 1. But the argument passed to sys.getrefcount() also creates a temporary reference.
So, the "true" count is always 1 less than what getrefcount() reports.
Now, let's see the count change in real-time.
>>> a = my_object # Create a new reference
>>> sys.getrefcount(my_object)
3 # (my_object, a, and the function argument)
>>> b = a # Create another reference
>>> sys.getrefcount(my_object)
4 # (my_object, a, b, and the function argument)
# Now, let's delete the references
>>> del b
>>> sys.getrefcount(my_object)
3 # (my_object, a, and the function argument)
>>> del a
>>> sys.getrefcount(my_object)
2 # (my_object and the function argument)
>>> del my_object
# At this exact moment, the last "real" reference is gone.
# The count hits 0, and the list instance is
# immediately deallocated from memory.
>>> my_object
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'my_object' is not defined
Advantage: This system is simple, fast, and deterministic.
Memory is reclaimed the instant it's no longer needed.
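In CPython you can watch this determinism directly. A sketch, using `__del__` purely as a probe (Part 4 explains why you shouldn't rely on it for real cleanup); the immediate finalization is a CPython refcounting detail, not a language guarantee:

```python
events = []

class Probe:
    def __del__(self):
        events.append("finalized")

p = Probe()
events.append("before del")
del p  # last reference gone: in CPython, __del__ runs right here
events.append("after del")

print(events)  # ['before del', 'finalized', 'after del']
```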
Disadvantage: It's not free. Every assignment and deletion requires a tiny bit of overhead.
But it has a much, much bigger problem.
The Achilles' Heel: Reference Cycles
Reference counting has one fatal flaw: it can't handle reference cycles.
A cycle happens when two or more objects hold references to each other, but nothing outside the cycle references them.
Here is the classic example:
# Create two instances
>>> a = {}
>>> b = {}
# Check their initial ref counts (should be 1, reports 2)
>>> sys.getrefcount(a)
2
>>> sys.getrefcount(b)
2
# Now, let's create a cycle
>>> a['b_ref'] = b # 'a' now holds a reference to 'b'
>>> b['a_ref'] = a # 'b' now holds a reference to 'a'
# The ref counts have increased
>>> sys.getrefcount(a)
3 # (a, b['a_ref'], and the function argument)
>>> sys.getrefcount(b)
3 # (b, a['b_ref'], and the function argument)
Now, watch what happens when we delete the only external references.
>>> del a
>>> del b
The references a and b are gone.
No part of our accessible program can ever reach these dictionaries again. They are garbage.
But what are their reference counts?
- The dictionary formerly named `a` still has a ref count of 1 (held by `b['a_ref']`).
- The dictionary formerly named `b` still has a ref count of 1 (held by `a['b_ref']`).
Since their counts are not zero, the reference counting mechanism will never deallocate them.
This is a memory leak.
If this happened in a loop, your program's memory would grow forever.
Part 3: The Reinforcement - Generational Garbage Collection
This is where the real "Garbage Collector" (GC) module comes in.
To solve the problem of reference cycles, Python employs a secondary mechanism: a Generational Garbage Collector.
This GC only tracks container objects (like lists, dicts, and class instances).
Why? Because simple objects (like numbers and strings) can't create reference cycles.
They just can't hold references to other objects.
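You can check whether the collector tracks a given object with `gc.is_tracked` (the `Node` class is a hypothetical example):

```python
import gc

class Node:
    pass

print(gc.is_tracked(42))       # False: an int can't reference anything
print(gc.is_tracked("hello"))  # False: neither can a str
print(gc.is_tracked([1, 2]))   # True: a list is a container
print(gc.is_tracked(Node()))   # True: instances can hold references
```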
This GC is based on a "generational hypothesis."
This is a simple observation from real-life programs: most objects die young.
Objects created inside a function are often discarded when the function returns.
CPython's GC divides all trackable objects into three generations:
- Generation 0 (Young): All new container objects start here.
- Generation 1 (Medium): Objects that survive a Generation 0 collection are "promoted" to Generation 1.
- Generation 2 (Old): Objects that survive a Generation 1 collection are promoted to Generation 2.
Numeric Thresholds: gc.get_threshold() and gc.get_count()
The gc module runs based on allocation thresholds, not on a timer. We can inspect these thresholds.
import gc
# Returns a tuple of (threshold0, threshold1, threshold2)
>>> gc.get_threshold()
(700, 10, 10)
This numeric tuple is the key to when the GC runs:
- `threshold0` (700): The Generation 0 threshold. The GC keeps a running count of (allocations - deallocations). When the net number of new container objects in Gen 0 hits 700, it triggers a Generation 0 collection (`gc.collect(0)`).
- `threshold1` (10): Every time a Gen 0 collection runs, a counter goes up. When this counter hits 10, a Generation 1 collection (`gc.collect(1)`) is triggered.
- `threshold2` (10): Every time a Gen 1 collection runs, its counter goes up. When that counter hits 10, a Generation 2 collection (`gc.collect(2)`) is triggered.
Generation 2 is the oldest. A gc.collect(2) (or just gc.collect()) runs a full collection on all generations.
You can inspect the current counts at any time with gc.get_count():
# Returns (count0, count1, count2)
>>> gc.get_count()
(534, 3, 1)
# This means:
# 534 new objects in Gen 0 (will trigger at 700)
# 3 Gen 0 collections have happened since the last Gen 1
# 1 Gen 1 collection has happened since the last Gen 2
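If the automatic schedule interferes with a latency-sensitive section, the `gc` module lets you steer collection by hand. A common pattern, sketched under the assumption that the surrounded work is allocation-heavy:

```python
import gc

gc.disable()          # pause automatic cycle collection...
try:
    # ...while running allocation-heavy, latency-sensitive work
    data = [{"i": i} for i in range(100_000)]
finally:
    gc.enable()       # always restore automatic collection
    gc.collect()      # and do one explicit full pass afterwards

print(gc.isenabled())  # True
```

The `try`/`finally` matters: if the work raises, you still want the collector switched back on.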
How it Finds Cycles: Mark and Sweep
When a generational collection runs, it doesn't wait for reference counts to hit zero.
Instead, it performs what amounts to a "mark and sweep" pass to find and break unreachable cycles. (CPython's actual implementation is cleverer, temporarily subtracting internal reference counts, but the mental model is accurate.)
- Build Graph: The GC examines all container objects in that generation.
- Find Roots: It identifies all "root" objects, i.e., objects reachable from outside (for example, from global scope or an older generation).
- Mark Reachable: It walks the graph from the roots and marks every object it can visit as "reachable."
- Sweep: It scans all objects. Any object that was not marked is, by definition, unreachable (part of an isolated cycle). The GC "sweeps" these objects up and deallocates them.
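We can watch this in action on the cycle from Part 2; `gc.collect()` returns the number of unreachable objects it found:

```python
import gc

gc.collect()  # start from a clean slate

a, b = {}, {}
a['b_ref'] = b
b['a_ref'] = a
del a, b      # the two dicts are now unreachable garbage

found = gc.collect()  # the sweep finds the isolated cycle
print(found >= 2)     # True: at least our two dictionaries were reclaimed
```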
This two-part system gives Python the best of both worlds:
- Reference Counting: Gives us immediate, deterministic deallocation for the 99% of objects that don't involve cycles.
- Generational GC: Periodically "stops the world" (for a very short time) to clean up the one thing ref-counting can't handle: cycles.
By collecting younger generations more often, Python focuses its expensive work on the objects most likely to be garbage.
It's a brilliant optimization.
Part 4: Practical Implications - Cleanup vs. Finalization
This brings us to a critical, practical point: resource cleanup vs. memory deallocation.
- Memory Deallocation: Reclaiming the memory an object used. This is what the GC does.
- Resource Cleanup: Releasing other resources, such as closing files, releasing locks, or closing network sockets.
Python provides two "magic methods" for this, and choosing the right one is vital.
The Unreliable Finalizer: __del__
When an instance is finally deallocated by the GC, it calls the instance's finalizer method: __del__(self).
It's tempting to put critical cleanup code here. For example:
# DO NOT DO THIS. THIS IS A BAD EXAMPLE.
class BadFileHandler:
    def __init__(self, filename):
        self.file = open(filename, 'w')

    def write(self, data):
        self.file.write(data)

    def __del__(self):
        # "I'll just close the file when the object is garbage collected!"
        print("Finalizer called, closing file.")
        self.file.close()

# ... much later ...
handler = BadFileHandler('log.txt')
handler.write('some data')
# 'handler' reference is deleted, __del__ *might* be called.
Relying on __del__ is extremely risky for two big reasons:
- Timeliness is Not Guaranteed: Python doesn't promise when the finalizer is called. The `handler` object might not be deallocated for a long time, leaving the file open and tying up system resources.
- It Interferes with the Cycle Collector: This is the most dangerous part. In Python versions before 3.4, if an object with a `__del__` method was part of a reference cycle, the generational GC could not collect it: unable to determine a safe order to call the `__del__` methods, it gave up, left the objects alive, and moved them to the `gc.garbage` list, a guaranteed, unfixable memory leak. Since PEP 442 (Python 3.4), the collector can finalize and reclaim such cycles, but the exact moment your finalizer runs is still unpredictable.
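On any modern interpreter (3.4+, per PEP 442) you can verify that the collector handles such cycles; note this softens the historical trap but does nothing about the timing problem. A sketch with a hypothetical `Node` class:

```python
import gc

finalized = []

class Node:
    def __del__(self):
        finalized.append("node finalized")

a, b = Node(), Node()
a.partner = b           # a reference cycle between two objects
b.partner = a           # that both define __del__
del a, b

gc.collect()            # on Python 3.4+ the cycle is finalized and freed
print(len(finalized))   # 2: both finalizers ran
```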
The Guaranteed Solution: Context Managers (with)
For cleanup tasks that must happen at a specific time, Python provides a correct and guaranteed mechanism: Context Managers.
You access them using the with statement.
An object used with with must implement two magic methods:
- `__enter__(self)`: Called when entering the `with` block. It should acquire the resource and return it.
- `__exit__(self, exc_type, exc_value, traceback)`: Called immediately when control flow leaves the `with` block, for any reason (even an exception).
The crucial difference is that the call to __exit__ is guaranteed and immediate.
# THE CORRECT, PYTHONIC WAY
class GoodFileHandler:
    def __init__(self, filename):
        self.filename = filename

    def __enter__(self):
        print("Entering context, opening file.")
        self.file = open(self.filename, 'w')
        return self.file  # Return the resource to be used

    def __exit__(self, exc_type, exc_value, traceback):
        # This is *guaranteed* to run
        print("Exiting context, closing file.")
        self.file.close()
        # If an exception happened, exc_type will not be None.
        # We can handle it or return False (default) to re-raise it.
        return False

# How to use it:
with GoodFileHandler('log.txt') as f:
    f.write('some data')
    f.write('some more data')
    # a = 1 / 0  # Even if this exception happened...

# ...the __exit__ method would *still* be called right here.
# The file is safely closed.
This is why all robust resource-handling objects in Python (like files or locks) are context managers.
It's the explicit, deterministic way to ensure resources are released.
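You don't always need a full class. The standard library's `contextlib.contextmanager` turns a generator into a context manager with the same guarantee. A sketch with a hypothetical `managed_file` helper (the temp-file path is just for the demo):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def managed_file(filename):
    f = open(filename, 'w')
    try:
        yield f       # the with-block runs here
    finally:
        f.close()     # guaranteed, even if the block raised

path = os.path.join(tempfile.gettempdir(), 'demo.txt')
with managed_file(path) as f:
    f.write('some data')

print(f.closed)  # True: closed the moment the block ended
```

Everything before the `yield` plays the role of `__enter__`; the `finally` clause plays the role of `__exit__`.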
Part 5: Advanced Tools for Memory Management
Knowing the theory is one thing; applying it is another. Here are essential tools for your high-level Python toolbox.
Preventing Cycles Programmatically: weakref
Sometimes, you need references that shouldn't keep an object alive.
A common example is a cache.
You want the cache to hold references, but you don't want the cache itself to prevent those objects from being collected.
The solution is the weakref module.
A weak reference is a reference that does not increase the object's reference count.
import weakref
import sys # Need this again for sys.getrefcount
# Create a regular ("strong") reference
>>> a = [1, 2, 3]
>>> sys.getrefcount(a)
2 # (The 'a' reference and the getrefcount argument)
# Create a weak reference to 'a'
>>> a_weak = weakref.ref(a)
# The ref count DID NOT increase
>>> sys.getrefcount(a)
2
# We access the object by *calling* the weak reference
>>> a_weak()
[1, 2, 3]
# Now, let's delete the only strong reference
>>> del a
# The weak reference is still there, but it points to nothing.
# The object was immediately garbage collected.
>>> print(a_weak())
None
This tool is essential for building complex object graphs (like trees with parent references) or caches, as it lets you create links without creating cycles.
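The `weakref` module also ships a ready-made cache container, `WeakValueDictionary`, whose entries disappear on their own once the cached object dies. A sketch with a hypothetical `Expensive` class; the *immediate* disappearance after `del` is a CPython refcounting detail:

```python
import weakref

class Expensive:
    def __init__(self, key):
        self.key = key

cache = weakref.WeakValueDictionary()

obj = Expensive('a')
cache['a'] = obj        # the cache holds only a *weak* reference
print('a' in cache)     # True: the strong reference 'obj' keeps it alive

del obj                 # last strong reference gone...
print('a' in cache)     # False: the entry evicted itself
```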
Finding Leaks Actively: The tracemalloc Module
The gc module can tell you that you have a problem, but it can't easily tell you where it's coming from.
For that, you need tracemalloc.
This built-in module traces every memory block allocated by Python.
It tells you the exact filename and line number that created it.
This is the standard tool for debugging memory leaks.
Here is a simplified workflow:
import tracemalloc
import time

def leaky_function():
    # This list will "leak" because it's never cleared
    leaky_list = []
    for i in range(10000):
        leaky_list.append(f"object_{i}")
    return leaky_list
# 1. Start tracing memory allocations
tracemalloc.start()
# 2. Run the code you suspect is leaking
my_leaky_data = leaky_function()
# 3. Take a snapshot of memory usage
snapshot1 = tracemalloc.take_snapshot()
# ... run more code ...
time.sleep(1)
more_leaky_data = leaky_function()
# 4. Take a second snapshot and compare
snapshot2 = tracemalloc.take_snapshot()
top_stats = snapshot2.compare_to(snapshot1, 'lineno')
# 5. Print the top 10 lines allocating the most new memory
print("[ Top 10 new memory allocations ]")
for stat in top_stats[:10]:
    print(stat)
# Example output (it tells you the exact line!):
# [ Top 10 new memory allocations ]
# .../my_app.py:8: size=392 KiB (+392 KiB), count=10001 (+10001)
#
# This line tells you:
# - .../my_app.py:8: The code is on line 8 in this file.
# - size=392 KiB: That line is responsible for 392 KiB of memory *in total*.
# - (+392 KiB): All 392 KiB of that memory is *new* since the last snapshot.
# - count=10001: It allocated 10,001 objects.
# - (+10001): All 10,001 of those are new.
This powerful tool lets you pinpoint exactly which parts of your code are allocating memory and not releasing it.
Visualizing Leaks: objgraph (A Third-Party Tool)
For very complex leaks, tracemalloc might tell you what is leaking, but not why it's being kept alive.
You need to know, "what still has a reference to this object?"
The objgraph library is a powerful third-party tool for visualizing the reference graph. You can install it via pip install objgraph.
It's fantastic for:
- Finding all objects of a certain type: `objgraph.by_type('MyClass')`
- Showing what an object refers to: `objgraph.show_refs(my_object)`
- Showing what refers back to an object: `objgraph.show_backrefs(my_object)`
This is the ultimate tool for answering "Why won't this object die!?"
import objgraph
# In a real app, you'd pick a leaking object
# Here, we'll recreate our cycle from Part 2
a = {}
b = {}
a['b_ref'] = b
b['a_ref'] = a
# Delete the external references
del a
del b
# Run the GC to find cycles
import gc
gc.collect()
# Now, let's hunt for dictionary objects
# objgraph.by_type('dict') will show us the two
# dictionaries that are stuck in the cycle.
# We can then use objgraph.show_backrefs() on one of
# their IDs to see the cycle visually.
# This is too complex for a simple print, but it's
# how you'd use it in a debugging session.
Conclusion: What This Means For You
So, after all that, what's the big takeaway?
You now know that Python's "magic" memory management is a two-part system.
First, you have instant cleanup with Reference Counting.
Second, there's a powerful Generational GC that hunts down the "memory leaks" (reference cycles) that the first system misses.
This is why it matters.
You now know that __del__ is unreliable for cleanup. You know you should always use with statements for files or locks.
And when your app is mysteriously leaking memory, you have a new superpower.
You know to look for reference cycles.
You can use tracemalloc to find what is leaking, and tools like weakref or objgraph to fix it.
Happy coding!
Quiz: Test Your Memory (Management)
1. References vs. Instances: In Python's data model, what is the conceptual difference between a variable (like `x`) and the value it holds (like `42`)?
2. Instance Identity: Which built-in function returns the unique identity of a Python object, and what relationship does this concept have with the comparison operator `is`?
3. Memory Optimization: What is the term for Python's optimization of re-using instances for small integers and short strings?
4. Class Memory Optimization: What "magic" class attribute can you define to prevent the creation of `__dict__` for instances, saving memory?
5. Primary GC Mechanism: What is the primary and immediate mechanism CPython uses to determine when an instance is no longer needed and can be deallocated?
6. Reference Count Nuance: You run `x = []` and then `sys.getrefcount(x)`. Why does it return `2` instead of `1`?
7. The Fatal Flaw: What is the fundamental problem that reference counting cannot solve, requiring a secondary GC mechanism?
8. The Secondary GC: The generational GC only tracks certain kinds of objects. What kind, and why?
9. GC Thresholds: You see that `gc.get_threshold()` returns `(700, 10, 10)`. What does the `700` represent?
10. Guaranteed Cleanup: Why are context managers (the `with` statement) preferred over relying on the `__del__` finalizer method for critical cleanup tasks like closing files?
11. The `__del__` Trap: What happens to an object that has a `__del__` method and becomes part of an unreachable reference cycle?
12. Manual Control: How would you manually trigger a full garbage collection run that checks all generations?
13. Preventing Cycles: What module would you use to create a reference to an object that does not increase its reference count, and why is this useful?
14. Debugging Leaks: What built-in module is the standard tool for tracing memory allocations back to the specific line of code that created them?
15. Visualizing Leaks: What popular third-party library is used to visualize the graph of references and back-references to debug complex leaks?
Answer Key
1. References vs. Instances: A variable (`x`) is a reference (a symbolic name, or pointer). The value (`42`) is the instance (the actual data object) in memory. Multiple references can point to the same single instance.
2. Instance Identity: `id(object)` returns the unique identity. The expression `reference1 is reference2` is a direct check that `id(reference1) == id(reference2)`, confirming whether they are references to the exact same instance.
3. Memory Optimization: Object interning.
4. Class Memory Optimization: `__slots__`.
5. Primary GC Mechanism: Reference counting. Every object has a counter. When the counter drops to 0 (because all references to it have been deleted or gone out of scope), the object is immediately deallocated.
6. Reference Count Nuance: The count is `2` because there are two references: 1) the variable `x` itself, and 2) the temporary reference created by passing `x` as an argument to the `sys.getrefcount()` function.
7. The Fatal Flaw: Reference cycles. When two or more objects reference each other but have no external references, their reference counts never drop to 0, so they leak.
8. The Secondary GC: It only tracks container objects (like lists, dicts, and class instances), because simple, non-container objects (like numbers and strings) cannot hold references to other objects and so cannot participate in a reference cycle.
9. GC Thresholds: The `700` is the Generation 0 threshold: a Gen 0 collection runs when the net number of container-object allocations (allocations minus deallocations) since the last Gen 0 run exceeds 700.
10. Guaranteed Cleanup: Context managers are preferred because the `__exit__` method is guaranteed to be called immediately when leaving the `with` block, even if an error occurs. The `__del__` finalizer is not guaranteed to run at a specific time.
11. The `__del__` Trap: In Python versions before 3.4, the generational GC could not collect it: unable to determine a safe order to run the `__del__` methods, it left the objects alive in the `gc.garbage` list, effectively leaking them. Since PEP 442 (Python 3.4), such cycles can be collected, though finalizer timing remains unpredictable.
12. Manual Control: `gc.collect()` (or `gc.collect(2)`) performs a full, stop-the-world collection of all three generations.
13. Preventing Cycles: The `weakref` module. It's useful for creating caches or parent pointers in trees, where you need to refer to an object without preventing it from being garbage collected.
14. Debugging Leaks: The `tracemalloc` module. It can take snapshots of memory and compare them to show which lines of code are responsible for new allocations.
15. Visualizing Leaks: The `objgraph` library.