Memory Management ToolKit (MMTk): mmtk-core release notes
Full Changelog: https://github.com/mmtk/mmtk-core/compare/v0.23.0...v0.24.0
Full Changelog: https://github.com/mmtk/mmtk-core/compare/v0.22.1...v0.23.0
Full Changelog: https://github.com/mmtk/mmtk-core/compare/v0.22.0...v0.22.1
- `destroy_mutator` by @k-sareen in https://github.com/mmtk/mmtk-core/pull/1045
Full Changelog: https://github.com/mmtk/mmtk-core/compare/v0.21.0...v0.22.0
- `is_emergency_collection` to VM bindings by @wks in https://github.com/mmtk/mmtk-core/pull/997
- `to_object_reference()` in comment by @k-sareen in https://github.com/mmtk/mmtk-core/pull/998
- `cargo generate-lockfile` to update JikesRVM's Cargo.lock by @qinsoon in https://github.com/mmtk/mmtk-core/pull/996
Full Changelog: https://github.com/mmtk/mmtk-core/compare/v0.20.0...v0.21.0
Full Changelog: https://github.com/mmtk/mmtk-core/compare/v0.19.0...v0.20.0
- `is_live` for ImmixSpace by @wks in https://github.com/mmtk/mmtk-core/pull/842
- `Atomic<Address>` where appropriate by @ClSlaid in https://github.com/mmtk/mmtk-core/pull/843
- `ActivePlan::mutators()`'s return type by @ArberSephirotheca in https://github.com/mmtk/mmtk-core/pull/817
- `offset` type changed to `usize` instead of `isize` by @fepicture in https://github.com/mmtk/mmtk-core/pull/838
- `scan_thread_root{,s}` functions by @k-sareen in https://github.com/mmtk/mmtk-core/pull/846
Full Changelog: https://github.com/mmtk/mmtk-core/compare/v0.18.0...v0.19.0
- `StickyImmix`: a variant of Immix that uses a sticky mark bit. This plan allows generational behavior without using a compulsory copying nursery.
- `STRESS_DEFRAG` and `DEFRAG_EVERY_BLOCK` to allow Immix to stress defrag copying for debugging.
- `object_probable_write`: the method can be called before the fields of an object may get updated without a normal write barrier.
- `immix_non_moving` to replace the current `immix_no_defrag` feature. This is only intended for debugging, to rule out issues from copying.
- `Mmapper` and `VMMap`: their implementation is now chosen dynamically, rather than statically based on the pointer size of the architecture. This allows us to use a more appropriate implementation to support compressed pointers.
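The dynamic choice of `Mmapper`/`VMMap` implementation described above can be illustrated with a self-contained sketch. The trait and type names below are simplified stand-ins, not the real mmtk-core types: the point is selecting a trait object at startup from a runtime property of the heap layout, instead of fixing the implementation at compile time via the architecture's pointer width.

```rust
// Toy illustration: choose a mapper implementation at startup based on the
// configured address-space layout, instead of hard-wiring it at compile time
// with #[cfg(target_pointer_width = "...")]. All names here are simplified
// stand-ins for the real mmtk-core types.
trait Mmapper {
    fn name(&self) -> &'static str;
}

/// Two-level mapping, suited to large (64-bit) address spaces.
struct TwoLevelMmapper;
impl Mmapper for TwoLevelMmapper {
    fn name(&self) -> &'static str { "two-level" }
}

/// Flat mapping, suited to small address spaces (e.g. 32-bit layouts, or a
/// 64-bit process restricted to a 32-bit heap for compressed pointers).
struct FlatMmapper;
impl Mmapper for FlatMmapper {
    fn name(&self) -> &'static str { "flat" }
}

/// Pick an implementation from a runtime property of the heap layout,
/// not from the architecture's pointer size.
fn choose_mmapper(heap_bits: u32) -> Box<dyn Mmapper> {
    if heap_bits <= 32 {
        Box::new(FlatMmapper) // compressed-pointer-friendly layout
    } else {
        Box::new(TwoLevelMmapper)
    }
}

fn main() {
    // A 64-bit build can still pick the small-layout mapper when the VM
    // runs with compressed pointers.
    assert_eq!(choose_mmapper(32).name(), "flat");
    assert_eq!(choose_mmapper(47).name(), "two-level");
    println!("ok");
}
```

The dynamic dispatch costs an indirection per call, but it lets one binary serve both layouts.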
- `ScanObjectsWork` does not properly set the worker when it internally uses `ProcessEdgesWork`.
- `harness_begin` does not force a full heap GC for generational plans.
- `SemiSpace::get_available_pages` returned unused pages as 'available pages'; it should return half of the unused pages as 'available'.
- `Plan::end_of_gc()`: a method that is executed after all the GC work is done.
- `immix_zero_on_release`: a debug feature for Immix to eagerly zero any reclaimed memory.
- `FreeListAllocator`: the allowed maximum object size depends on both the block size and the largest bin size.
- `GCWorkScheduler::closure_end()`
- `Scanning::process_weak_refs()` and `Scanning::forward_weak_refs()`: both supply an `ObjectTracerContext` argument for retaining and updating weak references. `Scanning::process_weak_refs()` allows a boolean return value to indicate whether the method should be called again by MMTk after finishing transitive closures for the weak references, which can be used to implement ephemerons.
- `Collection::post_forwarding()`: called by MMTk after all weak-reference-related work is done. A binding can use this call for any post-processing of weak references, such as enqueueing cleared weak references to a queue (Java).
- `Collection::process_weak_refs()`, `Collection::vm_release()`, and `memory_manager::on_closure_end`
- `mmap` system calls at boot time.
- `GCTrigger` to allow different heuristics to trigger a GC:
  - `FixedHeapSizeTrigger`, which triggers a GC when the maximum heap size is reached (the current approach).
  - `MemBalancerTrigger` (https://dl.acm.org/doi/pdf/10.1145/3563323) as the dynamic heap-resize heuristic.
- `SFTMap` is dynamically created now.
- `Address::store` dropped the value after store.
- `CommonFreeListPageResource`
- `SynchronizedCounter`
- `MarkSweep` now works with both our native mark-sweep policy and the malloc mark-sweep policy backed by malloc libraries.
- `FreeListAllocator`, which is implemented as a MiMalloc allocator.
- `ImmixAllocator`: alignment is properly taken into consideration when deciding whether to do overflow allocation.
- `MarkSweepSpace`: when `malloc_mark_sweep` is enabled, it uses the selected malloc library to back its allocation.
- `BlockPageResource`
- `SFTSparseChunkMap`, and only use it for 32-bit architectures.
- `SFTSpaceMap`, and by default use it for 64-bit architectures.
- `SFTDenseChunkMap`, and use it when we have no control of the virtual address range for an MMTk space on 64-bit architectures.
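The `malloc_mark_sweep`-gated backend choice described above amounts to selecting an allocation backend at compile time via a Cargo feature. A minimal self-contained sketch, using a hypothetical feature of the same name and illustrative function names (not mmtk-core APIs):

```rust
// Toy sketch of feature-gated backend selection, loosely modeled on the
// `malloc_mark_sweep` feature described above. The function names are
// illustrative stand-ins, not mmtk-core APIs.

/// Backend used when the (hypothetical) `malloc_mark_sweep` feature is on:
/// allocation is backed by the selected malloc library.
#[cfg(feature = "malloc_mark_sweep")]
fn backend() -> &'static str { "malloc library" }

/// Default backend: the native mark-sweep space manages its own pages.
#[cfg(not(feature = "malloc_mark_sweep"))]
fn backend() -> &'static str { "native mark-sweep space" }

fn main() {
    // Built without the feature, the native backend is selected.
    println!("backing allocation with: {}", backend());
}
```

Because the choice is a `cfg`, the unused backend is compiled out entirely rather than dispatched at run time.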
- `thread_affinity` to set processor affinity for MMTk GC threads.
- `AllocationSemantics::NonMoving` to allocate objects that are known to be non-moving at allocation time.
- `ReferenceGlue::is_referent_cleared` to allow some bindings to use a special value rather than a normal null reference for a cleared referent.
- `pin`, `unpin`, and `is_pinned` for object pinning. Note that some spaces do not support object pinning, and using these methods may cause a panic if the space does not support it.
- `ObjectReference`:
  - `ObjectModel::ref_to_address` to get an address from an object reference for setting per-object side metadata.
  - `ObjectModel::address_to_ref`, which does the opposite of `ref_to_address`: getting an object reference from an address that was returned by `ref_to_address`.
  - `ObjectModel::ref_to_header` for the binding to tell us the base header address from an object reference.
  - `ObjectModel::object_start_ref` renamed to `ObjectModel::ref_to_object_start` (to be consistent with the other methods).
  - `ObjectModel::OBJECT_REF_OFFSET_BEYOND_CELL`, as we no longer use the raw address of an object reference.
  - `ObjectModel::UNIFIED_OBJECT_REFERENCE_ADDRESS`: if a binding uses the same address for `ObjectReference`, `ref_to_address`, and `ref_to_object_start`, it should set this to `true`. MMTk can utilize this information for optimization.
  - `ObjectModel::OBJECT_REF_OFFSET_LOWER_BOUND` to specify the minimum value of the possible offsets between an allocation result and an object reference's raw address.
- `destroy_mutator()` no longer requires a boxed mutator as its argument. Instead, a mutable reference to the mutator is required. It is made clear that the binding should manage the lifetime of the boxed mutator from a `bind_mutator()` call.
- `VMBinding::LOG_MIN_ALIGNMENT` and `VMBinding::MAX_ALIGNMENT_SHIFT` (so we only keep `VMBinding::MIN_ALIGNMENT` and `VMBinding::MAX_ALIGNMENT`).
- `BlockPageResource`, which can be used for policies that always allocate memory at the granularity of a fixed-size block. This page resource facilitates block allocation and reclamation, and uses lock-free operations where possible.
- `FreeListPageResource` when multiple threads release pages.
- `fetch_and/or` in our metadata implementation.
- `meta_data_pages_per_region` in page resource implementations.
- `MaybeUninit::uninit().assume_init()` in `FreeListPageResource`, which has undefined behavior and causes an illegal-instruction error with newer Rust toolchains.
- `From<Address>` and `Into<Address>` for `Region`, as we cannot guarantee safe conversion between those two types.
- `Chunk` and `ChunkMap` from the immix policy, making them available for all the policies.