Memory Management ToolKit
- Refactored `MarkSweep` to work with both our native mark sweep policy and the malloc mark sweep policy backed by malloc libraries.
- Added `FreeListAllocator`, which is implemented as a MiMalloc allocator.
- Fixed a bug in `ImmixAllocator` so that alignment is properly taken into consideration when deciding whether to do overflow allocation.
- Added `MarkSweepSpace`: if `malloc_mark_sweep` is enabled, it uses the selected malloc library to back its allocation; otherwise, it uses `BlockPageResource`.
- Renamed the previous SFT map implementation to `SFTSparseChunkMap`, and only use it for 32-bit architectures.
- Added `SFTSpaceMap`, and use it by default for 64-bit architectures.
- Added `SFTDenseChunkMap`, and use it when we have no control of the virtual address range for an MMTk space on 64-bit architectures.
- Added the option `thread_affinity` to set processor affinity for MMTk GC threads.
- Added `AllocationSemantics::NonMoving` to allocate objects that are known to be non-moving at allocation time.
- Added `ReferenceGlue::is_referent_cleared` to allow some bindings to use a special value rather than a normal null reference for a cleared referent.
- Added `pin`, `unpin`, and `is_pinned` for object pinning. Note that some spaces do not support object pinning, and calling these methods may panic if the space does not support it.

`ObjectReference`:
- Added `ObjectModel::ref_to_address` to get an address from an object reference for setting per-object side metadata.
- Added `ObjectModel::address_to_ref`, which does the opposite of `ref_to_address`: it gets an object reference back from an address that was returned by `ref_to_address`.
- Added `ObjectModel::ref_to_header` for the binding to tell us the base header address from an object reference.
- Renamed `ObjectModel::object_start_ref` to `ObjectModel::ref_to_object_start` (to be consistent with other methods).
- Added `ObjectModel::OBJECT_REF_OFFSET_BEYOND_CELL`, as we no longer use the raw address of an object reference.
- Added `ObjectModel::UNIFIED_OBJECT_REFERENCE_ADDRESS`. If a binding uses the same address for `ObjectReference`, `ref_to_address`, and `ref_to_object_start`, it should set this to `true`. MMTk can utilize this information for optimization.
- Added `ObjectModel::OBJECT_REF_OFFSET_LOWER_BOUND` to specify the minimum possible offset between an allocation result and the object reference's raw address.
- `destroy_mutator()` no longer requires a boxed mutator as its argument; instead, it takes a mutable reference to the mutator. It is made clear that the binding should manage the lifetime of the boxed mutator returned from a `bind_mutator()` call.
- Removed `VMBinding::LOG_MIN_ALIGNMENT` and `VMBinding::MAX_ALIGNMENT_SHIFT` (so we only keep `VMBinding::MIN_ALIGNMENT` and `VMBinding::MAX_ALIGNMENT`).
- Added `BlockPageResource`, which can be used by policies that always allocate memory at the granularity of a fixed-size block. This page resource facilitates block allocation and reclamation, and uses lock-free operations where possible.
- Fixed a race in `FreeListPageResource` when multiple threads release pages.
- Fixed `fetch_and`/`fetch_or` in our metadata implementation.
- Removed `meta_data_pages_per_region` from the page resource implementations.
- Removed the use of `MaybeUninit::uninit().assume_init()` in `FreeListPageResource`, which has undefined behavior and causes an illegal-instruction error with newer Rust toolchains.
- Removed `From<Address>` and `Into<Address>` for `Region`, as we cannot guarantee safe conversion between those two types.
- Moved `Chunk` and `ChunkMap` out of the immix policy, and made them available for all the policies.
- Added a variant of Immix that never defragments; use the feature `immix_no_defrag` to enable it. Note that this variant performs poorly compared to normal Immix.
- Added `mod build_info` for bindings to get information about the current build.
- Added `trait Edge`. A binding can implement its own edge type if it needs more sophisticated edges than a simple address slot, e.g. to support compressed pointers, base pointers with offsets, or tagged pointers.
- Added the subsuming write barrier `object_reference_write()` and the pre/post write barriers `object_reference_write_pre/post()`.
- Added barriers for bulk memory copying, such as `array_copy` in Java. MMTk provides `memory_region_copy()` (subsuming) and `memory_region_copy_pre/post()`.
- The `ignore_system_g_c` option is renamed to `ignore_system_gc` to be consistent with our naming convention.
- The `max/min_nursery` options are replaced by `nursery`. Bindings can use `nursery=Fixed:<size>` or `nursery=Bounded:<size>` to configure the nursery size.
- Metadata compare-exchange operations now return a `Result` rather than a boolean, which is more consistent with Rust atomic types.
- Added the metadata operations `fetch_and`, `fetch_or`, and `fetch_update`.
- Some methods in `ObjectModel` now have default implementations.
- Fixed a bug: `CopySpace` should not try zeroing the alloc bit if there is no allocation in the space.
- `ProcessEdgesWork` is no longer exposed in the `Scanning` trait. Instead, `RootsWorkFactory` is introduced for the bindings to create more work packets.
- `Collection::stop_all_mutators()` now provides a callback `mutator_visitor`. The implementation is required to call `mutator_visitor` for each mutator once it is stopped. This requirement was implicit prior to this change.
- `MMTKBuilder` is introduced. The command line argument processing API (`process()` and `process_bulk()`) now takes `&MMTKBuilder` as an argument instead of `&MMTK`.
- `gc_init()` is renamed to `mmtk_init()`. `mmtk_init()` now takes `&MMTKBuilder` as an argument, and returns an MMTk instance `Box<MMTK>`.
- `heap_size` (which used to be an argument for `gc_init()`) is now an MMTk option.
- Added `Scanning::support_edge_enqueuing()`. A binding may return `false` if it cannot do edge scanning for certain objects; in that case, `Scanning::scan_object_and_trace_edges()` will be called.
- `Plan::gc_init()` and `Space::init()` are removed. Initialization is now properly done in the respective constructors.
- Added an associated type `Finalizable` to `ReferenceGlue`, with which a binding can define its own finalizer type.
- Added `vm_trace_object()` to `ActivePlan`. When tracing an object that is not in any of the MMTk spaces, MMTk will call this method and allow the binding to handle the object.
- `trait TransitiveClosure` is split into two different traits, `EdgeVisitor` and `ObjectQueue`, and `TransitiveClosure` is now removed.
- Fixed a performance bug: `acquire_lock` was used to lock a larger scope than necessary, which caused bad performance when there are many allocation threads (e.g. more than 24).
- Added `trait PlanTraceObject`, and procedural macros to derive its implementation for all the current plans.
- Added `PlanProcessEdges`, which uses `PlanTraceObject`. All the current plans use this type for tracing objects.
- Added `trait PolicyTraceObject`, with an implementation for each policy.
- Reference type processing is now disabled by default (use the option `no_reference_types=false` to enable it). Related APIs are slightly changed.
- `TransitiveClosure` in `Scanning::scan_object()/scan_objects()` is now replaced with `vm::EdgeVisitor`.
- Refactored `Scanning::scan_object()/scan_objects()` so they are more consistent.
- Added `SFTProcessEdges`. Most plans now use `SFTProcessEdges` for tracing objects, and no longer need to implement any plan-specific work packet. The mark compact and Immix plans still use their own tracing work packets.
- Fixed a bug where `ImmixCopyContext` did not set the mark bit after copying an object.
- Fixed a bug where `MarkCompactSpace` used `ObjectReference` and `Address` interchangeably. Now `MarkCompactSpace` properly deals with `ObjectReference`.
- `is_mapped_object()` is superseded by `is_in_mmtk_spaces()`. It returns true if the given object reference is in MMTk spaces, but it does not guarantee that the object reference actually points to an object.
- Added `is_mmtk_object()`. It can be used to check whether an object reference points to an object (useful for conservative stack scanning).
- `is_mmtk_object()` is only available when the `is_mmtk_object` feature is enabled.
- Added `trait Region` and `struct RegionIterator<R>` to allow convenient iteration through memory regions.
- Plans now configure `GCWorkerCopyContext` (similar to how they configure `Mutator`).
- Fixed a bug where `needs_log_bit` was always set to `true` for generational plans, no matter whether their barrier used the log bit or not.
- Fixed a bug in `get_available_pages()`.
- Added `ObjectIterator` for linear scan.
- Added `GCController`, a counterpart of `GCWorker`, for the controller thread.
- Refactored `GCWorker`. It is now separated into two parts: a thread-local part, `GCWorker`, which is owned by GC threads, and a shared part, `GCWorkerShared`, which is shared between GC threads and the scheduler.
- Removed unnecessary uses of `Option<T>` and `RwLock<T>`.
- Added `process_bulk()`, which allows bindings to pass options as a string of key-value pairs.
- `ObjectModel::copy()` now takes `CopySemantics` as a parameter.
- Renamed `Collection::spawn_worker_thread()` to `spawn_gc_thread()`, which is now used to spawn both GC workers and the GC controller.
- `Collection::out_of_memory()` now takes `AllocationError` as a parameter, which hints the binding on how to handle the OOM error.
- `Collection::out_of_memory()` now allows a binding to return from the method in the case of a non-critical OOM.
- If a binding returns from `Collection::out_of_memory()`, `alloc()` will return a zero address.
- Added `ObjectIterator`, which provides linear scanning through a region to iterate objects using the alloc bit.
- Added the feature `work_packet_stats` to optionally collect work packet statistics. Note that MMTk used to always collect work packet statistics.
- `mmtk.h` now uses the prefix `mmtk_` for all the functions.
- Fixed a bug in Immix when `DEFRAG` is disabled.
- Added the option `precise_stress` (which defaults to `true`). For a precise stress test, MMTk will check for stress GC in each allocation (including thread-local fastpath allocation). For a non-precise stress test, MMTk only checks for stress GC in global allocation.
- Added `schedule_common()` to schedule the common work packets for all the plans.
- Added `roots: bool` to `ProcessEdgesWork::new()` to indicate whether the packet contains root edges.
- Refactored `ImmixProcessEdges.trace_object()` so it can deal with both defrag GC and fast GC.
- Fixes and improvements to `MallocSpace` and `MallocAllocator`.
- Renamed `enable_collection()` to `initialize_collection()`.
- Added `enable_collection()` and `disable_collection()`. When MMTk collection is disabled, MMTk allows allocation without triggering GCs.
- Added `COORDINATOR_ONLY_STW` to the `Collection` trait. If this is set, the `StopMutators` work can only be done by the MMTk coordinator thread. Otherwise, any GC thread may be used to stop mutators.
- Added the feature `extreme_assertions` to check whether side metadata accesses are within bounds.
- Added `SideMetadataSpec.name` to help debugging.
- Added `util::metadata::side_metadata::spec_defs` to help define side metadata specs without explicitly laying out the specs and providing an offset for each spec.
- Renamed `SideMetadataSpec.log_min_obj_size` to `SideMetadataSpec.log_bytes_in_region` to avoid ambiguity.
- Added the feature `global_alloc_bit`: mmtk-core will set a bit for each allocated object. This will later be used to implement heap iteration and to support tracing internal pointers.
- Refactored `Scheduler`, `Context`, and `WorkerLocal`.
- Renamed `primary` to `full_heap` in a few `prepare()`/`release()` methods.
- Renamed `mu` (mutator)/`gc` to `other`/`stw` (stop-the-world) so they won't cause confusion in concurrent GC plans.
- Fixed a bug in `MallocSpace` that caused side metadata not to be mapped correctly if an object crossed a chunk boundary.
- Fixed a bug in `MallocSpace` where it may incorrectly consider a chunk's side metadata to be mapped.
- Fixed a bug in `LockFreeImmortalSpace`.