Compare commits


11 Commits

Author SHA1 Message Date
Jonathan Gramain 24887caeb0 Merge branch 'bugfix/ARSN-398-doNotRefreshGapBuildingIfDisabled' into user/jonathan/S3C-3926 2024-02-14 17:38:16 -08:00
Jonathan Gramain 8cac166a69 bf: ARSN-398 DelimiterMaster: fix when gap building is disabled
- Fix the case where gap building is disabled by
  `_saveBuildingGap()` but the building gap state was nevertheless
  reset afterwards.

- Introduce a new state 'Expired' that can be differentiated from
  'Disabled': it makes `getGapBuildingValidityPeriodMs()` return 0
  instead of `null`, to hint to the listing backend that it should
  trigger a new listing.
2024-02-14 17:35:59 -08:00
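The 'Expired' vs 'Disabled' distinction above can be sketched as follows (a simplified, hypothetical model in the spirit of the commit message, not the actual Arsenal code; the `remainingMs` parameter is illustrative):

```typescript
// Simplified model of the gap-building states described in the commit
// message above. Illustrative only; not the actual Arsenal code.
enum GapBuildingState {
    Disabled = 0,    // no gap cache, or gap building not applicable
    NotBuilding = 1, // not currently listing within a gap
    Building = 2,    // currently listing within a gap
    Expired = 3,     // exposure delay timed out
}

// 'Disabled' yields null (never worth triggering a new listing),
// while 'Expired' yields 0 to hint that one should be triggered now.
function getGapBuildingValidityPeriodMs(
    state: GapBuildingState,
    remainingMs: number,
): number | null {
    switch (state) {
    case GapBuildingState.Disabled:
        return null;
    case GapBuildingState.Expired:
        return 0;
    default:
        return remainingMs;
    }
}
```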
Jonathan Gramain 86e50d3ead Merge branch 'feature/ARSN-397-gapCacheClear' into user/jonathan/S3C-3926 2024-02-14 11:40:53 -08:00
Jonathan Gramain e168e2eb33 Merge branch 'feature/ARSN-389-optimizeListingWithGapCache' into user/jonathan/S3C-3926 2024-02-13 10:40:40 -08:00
Jonathan Gramain 6490916c1f Merge branch 'bugfix/ARSN-394-GapCacheInvalidateStagingGaps' into user/jonathan/S3C-3926 2024-02-13 10:40:13 -08:00
Jonathan Gramain 8dc3ba7ca6 bf: ARSN-394 GapCache: invalidate staging gaps
In the GapCache._removeOverlappingGapsBeforeExpose() helper, remove
the gaps from the *staging* set that overlap with any of the staging
or frozen updates, in addition to removing the gaps from the frozen
set.

Without this extra invalidation, it's still possible to have gaps
created within the exposure delay that miss some invalidation,
resulting in stale gaps in the cache.

Modify an existing unit test to cover this case by adding extra wait
time to ensure `_removeOverlappingGapsBeforeExpose()` is called once
after the invalidating update but before the `setGap()` call.
2024-02-13 10:37:40 -08:00
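A toy model of the fix described above (hypothetical, heavily simplified data structures; the real `GapCache` uses dedicated gap-set classes): gaps in both the staging and frozen sets are dropped when they overlap any staging or frozen update.

```typescript
// Toy two-phase update sets: gaps and updated keys move from staging
// to frozen before being exposed. Simplified sketch, not Arsenal code.
type Gap = { firstKey: string; lastKey: string };
type UpdateSet = { newGaps: Gap[]; updatedKeys: string[] };

const overlaps = (gap: Gap, key: string) =>
    key >= gap.firstKey && key <= gap.lastKey;

// Remove gaps from BOTH the staging and the frozen set that overlap
// any updated key from either set, mirroring the extra invalidation
// described in the commit message.
function removeOverlappingGapsBeforeExpose(
    staging: UpdateSet,
    frozen: UpdateSet,
): void {
    const updatedKeys = [...staging.updatedKeys, ...frozen.updatedKeys];
    for (const set of [staging, frozen]) {
        set.newGaps = set.newGaps.filter(
            gap => !updatedKeys.some(key => overlaps(gap, key)),
        );
    }
}
```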
Jonathan Gramain ade4597691 ARSN-389 [squash] don't extend gaps with zero extra weight
Prevent extending an existing gap when the extension has zero
weight. This avoids creating useless zero-weight gaps when the gap
being listed is invalidated concurrently.

Note: It is still possible to create a gap smaller than the minimum
weight if the original gap has been invalidated, but this is less
likely than the previous odds of creating a zero-weight gap, because
the original gap is usually already maxing out the skippable range.
2024-02-12 18:23:45 -08:00
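The guard described above can be sketched as follows (hypothetical helper, not the actual code): a gap is only extended when the extension carries non-zero weight.

```typescript
type CachedGap = { firstKey: string; lastKey: string; weight: number };

// Extend a gap only when the extension adds weight; a zero-weight
// extension would create a useless gap entry when the gap being
// listed was concurrently invalidated. Illustrative sketch only.
function extendGap(
    gap: CachedGap,
    newLastKey: string,
    extraWeight: number,
): CachedGap | null {
    if (extraWeight <= 0) {
        return null; // nothing worth caching
    }
    return {
        firstKey: gap.firstKey,
        lastKey: newLastKey,
        weight: gap.weight + extraWeight,
    };
}
```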
Jonathan Gramain 9a3fb02512 Merge branch 'bugfix/ARSN-393-infiniteLoopInCoalesceGapChain' into user/jonathan/S3C-3926 2024-02-12 12:02:29 -08:00
Jonathan Gramain f7e3dcf44f ARSN-389 bump arsenal version 2024-02-09 16:51:58 -08:00
Jonathan Gramain ea13d0fbda ARSN-389 DelimiterMaster: v0 format gap skipping
Implement logic in DelimiterMaster to improve efficiency of listings
of buckets in V0 format that have a lot of current delete markers.

A GapCache instance can be attached to a DelimiterMaster instance,
which enables the following:

- Lookups in the cache, to restart the listing directly beyond the
  cached gaps. This is done by returning the FILTER_SKIP code when
  listing inside a gap, which hints to the caller (RepdServer) that it
  is allowed to restart a new listing from a specific later key.

- Building gaps and caching them when listing through a series of
  current delete markers. This allows future listings to benefit from
  the gap information and skip over the gaps.

An important caveat is that there is a limited time in which gaps can
be built from the current listing: it is a trade-off to guarantee the
validity of cached gaps when concurrent operations may invalidate
them. This time is set in the GapCache instance as `exposureDelayMs`,
and is the time during which concurrent operations are kept in memory
to potentially invalidate future gap creations. Because listings use a
snapshot of the database, they return entries as they existed when the
listing started. For this reason, in order to consistently build new
gaps, it is necessary to limit the running time of listings, and
potentially to start new listings periodically (based on time or on
the number of listed keys), resuming from where the previous listing
stopped instead of continuing the current listing.
2024-02-09 16:51:58 -08:00
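The gap-lookup side of this flow can be roughly sketched as follows (hypothetical constants and types; the real logic lives in `DelimiterMaster` and is considerably more involved):

```typescript
// Hypothetical filter-code constants, for illustration only.
const FILTER_ACCEPT = 1;
const FILTER_SKIP = 0;

type Gap = { firstKey: string; lastKey: string };

// If the current key falls inside a cached gap, return FILTER_SKIP
// together with the gap's last key, hinting the caller that it may
// restart the listing from beyond that key.
function filterKey(
    key: string,
    cachedGaps: Gap[],
): { code: number; skipTo?: string } {
    const gap = cachedGaps.find(g => key >= g.firstKey && key <= g.lastKey);
    if (gap) {
        return { code: FILTER_SKIP, skipTo: gap.lastKey };
    }
    return { code: FILTER_ACCEPT };
}
```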
Jonathan Gramain 60992ecaea impr: ARSN-389 change contract of skipping() API
Instead of returning a "prefix" for the listing task to skip over,
directly return the key on which to skip and continue the listing.

This is both more natural and needed to implement skipping over
cached "gaps" of deleted objects.

Note that it could be even more powerful to return the type of query
param to apply to the next listing ('gt' or 'gte'), but that would be
more complex to implement for little practical benefit; instead, we
append a null byte to the returned key to skip to, whenever we want
'gt' behavior from the returned 'gte' key.

Also in this commit: clarify the API contract and always return
FILTER_ACCEPT when not allowed to skip over low-level listing
contents. A good chunk of the history of listing bugs and workarounds
comes from this confusion.
2024-02-09 09:36:00 -08:00
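The null-byte trick described above can be sketched as follows: since `'\0'` is the smallest possible key suffix, a 'gte' query on `key + '\0'` behaves exactly like a 'gt' query on `key` (the helper name is illustrative, not from the codebase).

```typescript
// Turn 'gte' semantics on the returned key into 'gt' semantics on the
// original key by appending a null byte (the smallest possible suffix).
function skipToKeyExclusive(key: string): string {
    return `${key}\0`;
}
```

Any key strictly greater than `key` sorts at or after `key + '\0'`, so resuming with 'gte' on the suffixed key never revisits `key` itself.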
5 changed files with 47 additions and 19 deletions

View File

@@ -121,20 +121,23 @@ export default class GapCache implements GapCacheInterface {
     }

     /**
-     * Internal helper to remove gaps in the frozen set overlapping
-     * with previously updated keys, right before the frozen gaps get
-     * exposed.
+     * Internal helper to remove gaps in the staging and frozen sets
+     * overlapping with previously updated keys, right before the
+     * frozen gaps get exposed.
      *
      * @return {undefined}
      */
     _removeOverlappingGapsBeforeExpose(): void {
-        // simple optimization to avoid looping over all updates if
-        // there is no gap in the frozen set
-        if (this._frozenUpdates.newGaps.size === 0) {
-            return;
-        }
-        for (const updateSet of [this._stagingUpdates, this._frozenUpdates]) {
-            this._frozenUpdates.newGaps.removeOverlappingGaps(updateSet.updatedKeys);
+        for (const { updatedKeys } of [this._stagingUpdates, this._frozenUpdates]) {
+            if (updatedKeys.size() === 0) {
+                continue;
+            }
+            for (const { newGaps } of [this._stagingUpdates, this._frozenUpdates]) {
+                if (newGaps.size === 0) {
+                    continue;
+                }
+                newGaps.removeOverlappingGaps(updatedKeys);
+            }
         }
     }

View File

@@ -73,13 +73,14 @@ type GapCachingInfo = GapCachingInfo_NoGapCache

 export const enum GapBuildingState {
-    Disabled = 0, // no gap cache or not allowed to build due to exposure delay timeout
+    Disabled = 0, // no gap cache or no gap building needed (e.g. in V1 versioning format)
     NotBuilding = 1, // not currently building a gap (i.e. not listing within a gap)
     Building = 2, // currently building a gap (i.e. listing within a gap)
+    Expired = 3, // not allowed to build due to exposure delay timeout
 };

-type GapBuildingInfo_Disabled = {
-    state: GapBuildingState.Disabled;
+type GapBuildingInfo_NothingToBuild = {
+    state: GapBuildingState.Disabled | GapBuildingState.Expired;
 };

 type GapBuildingParams = {
type GapBuildingParams = {
@@ -118,7 +119,7 @@ type GapBuildingInfo_Building = {
     gapWeight: number;
 };

-type GapBuildingInfo = GapBuildingInfo_Disabled
+type GapBuildingInfo = GapBuildingInfo_NothingToBuild
     | GapBuildingInfo_NotBuilding
     | GapBuildingInfo_Building;
@@ -214,6 +215,8 @@ export class DelimiterMaster extends Delimiter {
         switch (this._gapBuilding.state) {
         case GapBuildingState.Disabled:
             return null;
+        case GapBuildingState.Expired:
+            return 0;
         case GapBuildingState.NotBuilding:
             gapBuilding = <GapBuildingInfo_NotBuilding> this._gapBuilding;
             break;
@@ -314,6 +317,7 @@ export class DelimiterMaster extends Delimiter {
     _checkGapOnMasterDeleteMarker(key: string): FilterReturnValue {
         switch (this._gapBuilding.state) {
         case GapBuildingState.Disabled:
+        case GapBuildingState.Expired:
             break;
         case GapBuildingState.NotBuilding:
             this._createBuildingGap(key, 1);
@@ -494,16 +498,23 @@ export class DelimiterMaster extends Delimiter {
         return params;
     }

-    _saveBuildingGap(): void {
+    /**
+     * Save the gap being built if allowed (i.e. still within the
+     * allocated exposure time window).
+     *
+     * @return {boolean} - true if the gap was saved, false if we are
+     * outside the allocated exposure time window.
+     */
+    _saveBuildingGap(): boolean {
         const { gapCache, params, gap, gapWeight } =
             <GapBuildingInfo_Building> this._gapBuilding;
         const totalElapsed = Date.now() - params.initTimestamp;
         if (totalElapsed >= gapCache.exposureDelayMs) {
             this._gapBuilding = {
-                state: GapBuildingState.Disabled,
+                state: GapBuildingState.Expired,
             };
             this._refreshedBuildingParams = null;
-            return;
+            return false;
         }
         const { firstKey, lastKey, weight } = gap;
         gapCache.setGap(firstKey, lastKey, weight);
@@ -518,6 +529,7 @@
             },
             gapWeight,
         };
+        return true;
     }

     /**
@@ -569,7 +581,10 @@
         // only set gaps that are significant enough in weight and
         // with a non-empty extension
         if (gapWeight >= params.minGapWeight && gap.weight > 0) {
-            this._saveBuildingGap();
+            // we're done if we were not allowed to save the gap
+            if (!this._saveBuildingGap()) {
+                return;
+            }
             // params may have been refreshed, reload them
             gapBuilding = <GapBuildingInfo_Building> this._gapBuilding;
             params = gapBuilding.params;

View File

@@ -3,7 +3,7 @@
     "engines": {
         "node": ">=16"
     },
-    "version": "7.70.23",
+    "version": "7.70.24",
     "description": "Common utilities for the S3 project components",
     "main": "build/index.js",
     "repository": {

View File

@@ -174,6 +174,11 @@ describe('GapCache', () => {
     it('removeOverlappingGaps() should invalidate gaps created later by setGap() but ' +
        'within the exposure delay', async () => {
+        // wait for 80ms (slightly less than exposure delay of 100ms)
+        // before calling removeOverlappingGaps(), so that the next
+        // exposure timer kicks in before the call to setGap()
+        await new Promise(resolve => setTimeout(resolve, 80));
+
         // there is no exposed gap yet, so expect 0 gap removed
         expect(gapCache.removeOverlappingGaps(['dog'])).toEqual(0);

View File

@@ -1186,6 +1186,11 @@ describe('DelimiterMaster listing algorithm: gap caching and lookup', () => {
         resumeFromState = filterEntries(listing, 'Ddv Ddv Ddv Vvv', 'ass ass ass ass',
                                         resumeFromState);
         expect(gapCache.toArray()).toEqual(gapsArray);
+        // gap building should be in expired state
+        expect(listing._gapBuilding.state).toEqual(GapBuildingState.Expired);
+        // remaining validity period should still be 0 because gap building has expired
+        validityPeriod = listing.getGapBuildingValidityPeriodMs();
+        expect(validityPeriod).toEqual(0);
         // we should still be able to skip over the existing cached gaps
         expect(listing._gapCaching.state).toEqual(GapCachingState.GapLookupInProgress);