Compare commits

...

772 Commits

Author SHA1 Message Date
Vitaliy Filippov 19855115ae Use TS? 2024-08-06 19:56:20 +03:00
Vitaliy Filippov 329d8ef32c Add Vitastor support 2024-08-05 02:23:54 +03:00
Vitaliy Filippov f0ded4ea4f Use swc to transpile during installation 2024-08-04 00:00:10 +03:00
Vitaliy Filippov 3eea263384 Use ^ dependencies, suppress aws-sdk maintenance mode message 2024-08-04 00:00:01 +03:00
Vitaliy Filippov c26d4f7d70 Fix readUInt with length 8 2024-08-04 00:00:01 +03:00
Vitaliy Filippov 63137e7a7b Change git dependency URLs 2024-08-04 00:00:01 +03:00
Vitaliy Filippov fdb23b1cd2 Remove yarn lock 2024-08-04 00:00:01 +03:00
Vitaliy Filippov 4120eac127 Make sproxydclient and hdclient dependencies optional 2024-08-04 00:00:01 +03:00
Maha Benzekri d9bbd6cf3e
bump project version
Issue: https://scality.atlassian.net/browse/ARSN-426
2024-07-31 11:22:01 +02:00
Maha Benzekri 65e89d286d
ensure callback is only called once on AwsClient
Issue: https://scality.atlassian.net/browse/ARSN-426
2024-07-31 11:21:56 +02:00
Maha Benzekri dcbc5ca98f
ensure callback is only called once on MultipleBackendGateway
Issue: https://scality.atlassian.net/browse/ARSN-426
2024-07-31 11:21:44 +02:00
Maha Benzekri 817bb836ec
ARSN-420: bump arsenal version 2024-07-15 15:20:08 +02:00
Maha Benzekri e3e4b2aea7
ARSN-420: putObjectNoVar function update with hack
We agreed on introducing the same “hack” as in the internalDelete function,
so the MD is written twice in the oplog: one "deleted: true" copy of the previous MD,
followed by the expected update with the new metadata
2024-07-15 15:19:06 +02:00
Francois Ferrand 9cd72221e8
Bump arsenal 8.1.132
Issue: ARSN-421
2024-07-10 18:45:22 +02:00
Francois Ferrand bdcd4685ad
gha: bump codecov v4
and use codecov token.

Issue: ARSN-421
2024-07-10 18:45:22 +02:00
Francois Ferrand b2b6c47ba7
Introduce objectGetArchiveInfo verb
This may be used to allow access to more details about archived objects.

Issue: ARSN-421
2024-07-10 18:29:53 +02:00
Jonathan Gramain da173d53b4 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-425-listingLatestCrashWithUndefined' into w/8.1/bugfix/ARSN-425-listingLatestCrashWithUndefined 2024-07-08 11:28:59 -07:00
Jonathan Gramain 7eb2701f21 Merge remote-tracking branch 'origin/bugfix/ARSN-425-listingLatestCrashWithUndefined' into w/7.70/bugfix/ARSN-425-listingLatestCrashWithUndefined 2024-07-08 11:03:50 -07:00
Jonathan Gramain 6ec3c8e10d ARSN-425 bump arsenal version 2024-07-08 10:59:25 -07:00
Jonathan Gramain 7aaf277db2 bf: ARSN-425 listing crash if key contains "undefined"
Fix a crash in DelimiterMaster listing without a delimiter, when a key
contains the string "undefined".

Note: a similar fix was done in ARSN-330 for DelimiterVersions. I
ported the existing unit test there to the development/7.10 branch to
enhance regression testing, even though this bug on DelimiterVersions
only existed on 7.70.
2024-07-08 10:56:48 -07:00
Francois Ferrand 67421f8c76
Merge branch 'w/7.70/improvement/ARSN-415' into w/8.1/improvement/ARSN-415 2024-05-10 14:28:11 +02:00
Francois Ferrand bf2260b1ae
Merge branch 'improvement/ARSN-415' into w/7.70/improvement/ARSN-415 2024-05-10 14:27:00 +02:00
Francois Ferrand 11e0e1b489
Bump gha actions
- checkout@v4
- codeql@v2
- dependency-review@v4
- setup-node@v4
- artifacts@v4

Issue: ARSN-415
2024-05-10 14:26:29 +02:00
Anurag Mittal f13ec2cf4c
Merge remote-tracking branch 'origin/bugfix/ARSN-412-add-support-for-exists-condition' into w/8.1/bugfix/ARSN-412-add-support-for-exists-condition 2024-05-03 13:37:07 +02:00
Anurag Mittal e369c7e6d2
ARSN-412: bump-package.json-to-v7.70.31 2024-05-03 13:34:46 +02:00
Anurag Mittal c5c1db4568
ARSN-412-test-relevant-errors 2024-05-03 13:34:16 +02:00
Anurag Mittal 58f4d3cb3a
VAULT-412-add-unit-test-for-conditions 2024-05-03 13:34:16 +02:00
Anurag Mittal b049f39e2a
ARSN-412: add support for exists pre-condition 2024-05-03 13:34:16 +02:00
williamlardier 30eaaf15eb ARSN-406: bump project version 2024-05-02 09:01:13 +02:00
williamlardier 9d16fb0a34 ARSN-406: create the QuotaExceeded error 2024-05-02 09:01:06 +02:00
williamlardier cdc612f379 ARSN-406: add quota numbers in report 2024-05-02 09:00:51 +02:00
williamlardier 61dd65b2c4 ARSN-406: add request context options for quota evaluation 2024-05-02 09:00:00 +02:00
bert-e 2c0696322e Merge branch 'improvement/ARSN-410-quotas-for-bucket-apis' into q/8.1 2024-04-30 16:08:07 +00:00
Maha Benzekri c464a70b90
ARSN-410: bump project version 2024-04-30 17:19:42 +02:00
Maha Benzekri af07bb3df4
ARSN-410: adding api methods in actionMonitoringMapS3 2024-04-30 17:19:20 +02:00
Maha Benzekri 1858654f34
ARSN-410: new no such quota error 2024-04-30 17:18:54 +02:00
Maha Benzekri 0475c8520a
ARSN-410: update routes for bucket get/put/delete quota 2024-04-30 17:18:12 +02:00
Maha Benzekri 31a4de5372
ARSN-410: add getbucketQuota in metaDataWrapper 2024-04-30 17:17:46 +02:00
Maha Benzekri 0c53d13439
ARSN-410: update bucketInfo test 2024-04-30 17:17:18 +02:00
Maha Benzekri cad8b14df1
ARSN-410: update bucketInfo and md 2024-04-30 17:16:50 +02:00
Nicolas Humbert fe29bacc79 Merge remote-tracking branch 'origin/bugfix/ARSN-413/null' into w/8.1/bugfix/ARSN-413/null 2024-04-30 10:26:58 +02:00
Nicolas Humbert a86cff4631 ARSN-413 bump package version 2024-04-26 19:37:11 +02:00
Kerkesni f13a5d79ea bugfix: ARSN-278 handle getting versionId when object is versioning suspended
When replicating a versioning-suspended object, we need to specify 'null'
as the encoded versionId, as the versionId contained within the object's
metadata is strictly internal.

In the replication processor we use getVersionId() when putting/deleting a tag.
It is used by the mongoClient to fetch the object from MongoDB; here again we
need to specify 'null' to get the versioning-suspended object (cloudserver already
knows how to handle a 'null' versionId and transforms it to undefined before giving
it to the mongoClient).

(cherry picked from commit d1cd7e8dba)
2024-04-26 17:20:36 +02:00
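To make the rule concrete, here is a minimal sketch assuming a simplified metadata shape; the `versionIdForBackend` helper is hypothetical, not the actual replication processor code:

```ts
interface ObjectMDLike {
    isNull?: boolean;
    versionId?: string;
}

// Hypothetical helper: pick the versionId to send to the backend.
function versionIdForBackend(md: ObjectMDLike): string | undefined {
    // For a versioning-suspended (null) version, the internal versionId
    // must not leak out: pass 'null' so that cloudserver translates it
    // to undefined before handing it to the mongoClient.
    if (md.isNull) {
        return 'null';
    }
    return md.versionId;
}
```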
Maha Benzekri ca8f570f15
ARSN-404: project bump 2024-04-05 11:35:52 +02:00
Maha Benzekri a4bca10faf
ARSN-404: adding permission in BP and IAM action Map 2024-04-05 11:35:52 +02:00
Jonathan Gramain c2ab4a2052 ARSN-402 [8.1] typescript fixes 2024-03-13 09:10:25 -07:00
Jonathan Gramain fd0aa314eb Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-402-batchDeleteRequestLogger' into w/8.1/bugfix/ARSN-402-batchDeleteRequestLogger 2024-03-13 09:10:21 -07:00
Jonathan Gramain a643a3e6cc Merge remote-tracking branch 'origin/bugfix/ARSN-402-batchDeleteRequestLogger' into w/7.70/bugfix/ARSN-402-batchDeleteRequestLogger 2024-03-13 09:08:05 -07:00
Jonathan Gramain e9d815cc9d ARSN-402 bump arsenal version 2024-03-13 08:40:02 -07:00
Jonathan Gramain c86d24fc8f bf: ARSN-402 sanitize use of log object in DataWrapper.delete()
Don't assume that we can safely call `end()` on the passed log object
if there is no callback (separation of concerns). Additionally, an
error object was passed where `end()` expects a string as a message,
causing implicit conversion.

Since errors are already logged, there is no need to bind the
`callback` object to `log.end`: there is no strong reason to log the
elapsed time there, and the only use I can see where we don't pass a
callback in Cloudserver is to support deletion of old metadata with a
string as the location array. IMHO it is not worth the complexity of
adding it there, as the rest of the API doesn't log elapsed time
anyway, except for `batchDelete`.
2024-03-13 08:39:35 -07:00
Jonathan Gramain 3b6d3838f5 bf: ARSN-402 use local RequestLogger in batchDelete
Create a local RequestLogger in batchDelete(): this allows tracking
the elapsed time of the batch delete sub-request, and avoids being
forced to create a new request logger before calling the function (due
to the call to `log.end()`), which was error-prone and hard to
maintain.
2024-03-13 08:39:35 -07:00
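A minimal sketch of that pattern, assuming the werelogs `Logger`/`newRequestLogger()` API as commonly used in Scality projects; the `batchDelete` shape shown is illustrative:

```ts
import * as werelogs from 'werelogs';

const logger = new werelogs.Logger('DataWrapper');

// Illustrative shape: the real batchDelete() takes more parameters.
function batchDelete(locations: string[], cb: (err?: Error) => void): void {
    // request-scoped logger created locally, so callers no longer have
    // to hand over a fresh logger just because log.end() gets called
    const log = logger.newRequestLogger();
    log.info('batch delete', { count: locations.length });
    // ... perform the actual deletions ...
    log.end().info('batch delete complete'); // logs the elapsed time
    cb();
}
```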
Jonathan Gramain fcdfa889be ARSN-402 bump werelogs dependency
+ typescript fixes to be compatible with the latest werelogs
2024-03-13 08:39:35 -07:00
Mickael Bourgois 5b8fcf0313
ARSN-401: Bump version 2024-03-08 14:11:30 +01:00
Mickael Bourgois bdfde26fe4
Merge remote-tracking branch 'origin/improvement/ARSN-401-cluster-rpc-primary' into w/8.1/improvement/ARSN-401-cluster-rpc-primary 2024-03-08 14:11:06 +01:00
Mickael Bourgois e53613783a
Merge remote-tracking branch 'origin/development/8.1' into w/8.1/improvement/ARSN-401-cluster-rpc-primary 2024-03-08 14:10:12 +01:00
Mickael Bourgois 69dbbb143a
Merge branch 'development/7.70' into improvement/ARSN-401-cluster-rpc-primary 2024-03-08 14:08:52 +01:00
Mickael Bourgois 403c4e5040
ARSN-401: Bump version 2024-03-08 14:07:24 +01:00
Nicolas Humbert a1dc2bd84d Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-403/bump' into w/8.1/bugfix/ARSN-403/bump 2024-03-06 16:40:02 +01:00
Nicolas Humbert 01409d690c Merge remote-tracking branch 'origin/bugfix/ARSN-403/bump' into w/7.70/bugfix/ARSN-403/bump 2024-03-06 16:31:42 +01:00
Nicolas Humbert 9ee40f343b ARSN-403 bump package 2024-03-06 16:07:08 +01:00
bert-e 77ed018b4f Merge branch 'w/7.70/bugfix/ARSN-403/fix-put-metadata-2' into tmp/octopus/w/8.1/bugfix/ARSN-403/fix-put-metadata-2 2024-03-05 12:41:44 +00:00
bert-e f77700236f Merge branch 'bugfix/ARSN-403/fix-put-metadata-2' into tmp/octopus/w/7.70/bugfix/ARSN-403/fix-put-metadata-2 2024-03-05 12:41:44 +00:00
Nicolas Humbert 43ff16b28a ARSN-403 fix tests 2024-03-05 13:41:27 +01:00
bert-e 05c628728d Merge branch 'w/7.70/bugfix/ARSN-403/fix-put-metadata-2' into tmp/octopus/w/8.1/bugfix/ARSN-403/fix-put-metadata-2 2024-03-04 13:23:08 +00:00
Nicolas Humbert 2a807dc4ef Merge remote-tracking branch 'origin/bugfix/ARSN-403/fix-put-metadata-2' into w/7.70/bugfix/ARSN-403/fix-put-metadata-2 2024-03-04 14:21:11 +01:00
Nicolas Humbert 1f8b0a4032 ARSN-403 Set nullVersionId to master when replacing a null version. 2024-03-04 11:51:33 +01:00
bert-e 0dd7fe9875 Merge branch 'improvement/ARSN-401-cluster-rpc-primary' into tmp/octopus/w/8.1/improvement/ARSN-401-cluster-rpc-primary 2024-02-29 08:58:13 +00:00
Mickael Bourgois f7a6af8d9a
ARSN-401: Test clusterRPC fix error response code
In case a regular error without code is thrown
2024-02-29 09:57:30 +01:00
Mickael Bourgois e6d0eff1a8
Merge remote-tracking branch 'origin/improvement/ARSN-401-cluster-rpc-primary' into w/8.1/improvement/ARSN-401-cluster-rpc-primary 2024-02-28 01:52:02 +01:00
Mickael Bourgois 9d558351e7
ARSN-401: Test new RPC communication 2024-02-27 21:05:28 +01:00
Mickael Bourgois 68150da72e
ARSN-401: add errorCode in cluster RPC for scuba 2024-02-27 21:04:57 +01:00
Mickael Bourgois 2b2c4bc50e
ARSN-401: Bump werelogs for types 2024-02-26 18:46:20 +01:00
Mickael Bourgois 3068086a97
ARSN-401: Fix werelogs config in cluster RPC
Also note that some arsenal modules have a side effect
when imported: they reconfigure the werelogs logLevel.
For example: lib/storage/data/external/GCP/GcpUtils.js
2024-02-26 18:18:35 +01:00
Mickael Bourgois 0af7eb5530
ARSN-401: Add PRIMARY communication in cluster RPC 2024-02-26 18:17:34 +01:00
bert-e 7e372b7bd5 Merge branches 'w/8.1/improvement/ARSN-400-scuba-admin' and 'q/2224/7.70/improvement/ARSN-400-scuba-admin' into tmp/octopus/q/8.1 2024-02-26 13:59:56 +00:00
bert-e a121810552 Merge branches 'w/7.70/improvement/ARSN-400-scuba-admin' and 'q/2224/7.10/improvement/ARSN-400-scuba-admin' into tmp/octopus/q/7.70 2024-02-26 13:59:54 +00:00
bert-e 9bf1bcc483 Merge branch 'improvement/ARSN-400-scuba-admin' into q/7.10 2024-02-26 13:59:54 +00:00
Nicolas Humbert 06402c6c94 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-392/bump' into w/8.1/bugfix/ARSN-392/bump 2024-02-21 10:11:29 +01:00
Nicolas Humbert a6f3c82827 Merge remote-tracking branch 'origin/bugfix/ARSN-392/bump' into w/7.70/bugfix/ARSN-392/bump 2024-02-21 10:01:01 +01:00
Nicolas Humbert f1891851b3 ARSN-392 version bump 2024-02-21 09:54:30 +01:00
bert-e a1eed4fefb Merge branch 'bugfix/ARSN-392/null7.70' into tmp/octopus/w/8.1/bugfix/ARSN-392/null7.70 2024-02-20 14:22:16 +00:00
Nicolas Humbert 68204448a1 ARSN-392 Fix processVersionSpecificPut
- For backward compatibility (if isNull is undefined), add the nullVersionId field to the master update. The nullVersionId is needed for listing, retrieving, and deleting null versions.

- For the new null key implementation (if isNull is defined): add the isNull2 field and set it to true to specify that the new version is null AND has been put with a Cloudserver handling null keys (i.e., supporting S3C-7352).

- Manage scenarios in which a version is marked with the isNull attribute set to true, but without a version ID. This happens after BackbeatClient.putMetadata() is applied to a standalone null master.
2024-02-20 15:18:44 +01:00
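A condensed, illustrative model of the first two rules; the field names come from the commit message, while the surrounding logic is an assumption:

```ts
interface VersionMD {
    versionId?: string;
    isNull?: boolean;
    isNull2?: boolean;
    nullVersionId?: string;
}

// Illustrative only: a simplified model of the master-update rules.
function buildMasterUpdate(version: VersionMD, nullVersionId?: string): VersionMD {
    const master = { ...version };
    if (version.isNull === undefined) {
        // backward compatibility: remember the null version so it can
        // still be listed, retrieved, and deleted
        master.nullVersionId = nullVersionId;
    } else if (version.isNull) {
        // null-key implementation: flag that this null version was put
        // by a Cloudserver handling null keys (S3C-7352)
        master.isNull2 = true;
    }
    return master;
}
```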
Nicolas Humbert 40e271f7e2 ARSN-392 Import the V0 processVersionSpecificPut from Metadata
This logic is used by CRR replication feature to BackbeatClient.putMetadata on top of a null version
2024-02-20 15:18:05 +01:00
bert-e d8f7f18f5a Merge branches 'w/8.1/bugfix/ARSN-392/null' and 'q/2215/7.70/bugfix/ARSN-392/null' into tmp/octopus/q/8.1 2024-02-20 14:02:12 +00:00
bert-e 5f4d7afefb Merge branch 'bugfix/ARSN-392/null' into q/7.10 2024-02-20 14:02:11 +00:00
bert-e 2482fdfafc Merge branches 'w/7.70/bugfix/ARSN-392/null' and 'q/2215/7.10/bugfix/ARSN-392/null' into tmp/octopus/q/7.70 2024-02-20 14:02:11 +00:00
bert-e e151b3fff1 Merge branch 'w/7.70/bugfix/ARSN-392/null' into tmp/octopus/w/8.1/bugfix/ARSN-392/null 2024-02-20 13:54:33 +00:00
Nicolas Humbert b8bbdbbd81 Merge remote-tracking branch 'origin/bugfix/ARSN-392/null' into w/7.70/bugfix/ARSN-392/null 2024-02-20 14:49:31 +01:00
Nicolas Humbert 46258bca74 ARSN-392 Fix processVersionSpecificPut
- Add the nullVersionId field into the master update. The nullVersionId is needed for listing, retrieving, and deleting null versions.

- Manage scenarios in which a version is marked with the isNull attribute set to true, but without a version ID.
This happens after BackbeatClient.putMetadata() is applied to a standalone null master.
2024-02-19 11:42:17 +01:00
williamlardier b6bc11881a Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend' into w/8.1/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend 2024-02-19 09:26:47 +01:00
williamlardier 648257612b Merge remote-tracking branch 'origin/development/8.1' into w/8.1/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend 2024-02-19 09:26:06 +01:00
williamlardier 7423fac674 Merge remote-tracking branch 'origin/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend' into w/7.70/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend 2024-02-19 09:25:05 +01:00
williamlardier 9647043a02 ARSN-396: bump project 2024-02-19 09:24:27 +01:00
williamlardier f9e1f91791 Merge remote-tracking branch 'origin/development/7.70' into w/7.70/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend 2024-02-19 09:23:29 +01:00
williamlardier 9c5bc2bfe0 ARSN-396: bump project 2024-02-19 09:22:23 +01:00
Jonathan Gramain 1a0a981271 Merge remote-tracking branch 'origin/bugfix/ARSN-398-doNotRefreshGapBuildingIfDisabled' into w/8.1/bugfix/ARSN-398-doNotRefreshGapBuildingIfDisabled 2024-02-16 10:04:07 -08:00
bert-e a45b2eb6a4 Merge branch 'w/7.70/improvement/ARSN-400-scuba-admin' into tmp/octopus/w/8.1/improvement/ARSN-400-scuba-admin 2024-02-16 10:29:54 +00:00
bert-e b00378d46d Merge branch 'improvement/ARSN-400-scuba-admin' into tmp/octopus/w/7.70/improvement/ARSN-400-scuba-admin 2024-02-16 10:29:53 +00:00
Mickael Bourgois 2c3bfb16ef
ARSN-400: Add scuba admin actions 2024-02-16 11:18:05 +01:00
Jonathan Gramain c72d8be223 ARSN-398 bump arsenal version 2024-02-15 11:23:53 -08:00
Jonathan Gramain f63cb3c762 bf: ARSN-398 DelimiterMaster: fix when gap building is disabled
- Fix the situation where gap building is disabled by
  `_saveBuildingGap()` but we attempted to reset the building gap state
  anyway.

- Introduce a new state 'Expired' that can be differentiated from
  'Disabled': it makes `getGapBuildingValidityPeriodMs()` return 0
  instead of 'null' to hint the listing backend that it should trigger
  a new listing.
2024-02-15 11:21:25 -08:00
bert-e 15fd621c5c Merge branches 'w/8.1/feature/ARSN-397-gapCacheClear' and 'q/2222/7.70/feature/ARSN-397-gapCacheClear' into tmp/octopus/q/8.1 2024-02-15 19:07:32 +00:00
bert-e effbf63dd4 Merge branch 'feature/ARSN-397-gapCacheClear' into q/7.70 2024-02-15 19:07:32 +00:00
bert-e 285fe2f63b Merge branches 'w/8.1/bugfix/ARSN-394-GapCacheInvalidateStagingGaps' and 'q/2218/7.70/bugfix/ARSN-394-GapCacheInvalidateStagingGaps' into tmp/octopus/q/8.1 2024-02-15 19:07:20 +00:00
bert-e 1d8ebe6a9c Merge branch 'bugfix/ARSN-394-GapCacheInvalidateStagingGaps' into q/7.70 2024-02-15 19:07:20 +00:00
bert-e 00555597e0 Merge branch 'feature/ARSN-397-gapCacheClear' into tmp/octopus/w/8.1/feature/ARSN-397-gapCacheClear 2024-02-15 18:59:42 +00:00
bert-e bddc2ccd01 Merge branch 'bugfix/ARSN-394-GapCacheInvalidateStagingGaps' into tmp/octopus/w/8.1/bugfix/ARSN-394-GapCacheInvalidateStagingGaps 2024-02-15 18:59:33 +00:00
Jonathan Gramain 7908654b51 ft: ARSN-397 GapCache.clear()
Add a clear() method to clear exposed and staging gaps. Invalidating
updates are retained, so that gaps inserted after the call to clear()
can still be invalidated by them.
2024-02-14 11:36:28 -08:00
Jonathan Gramain 0d7cf8d40a Merge remote-tracking branch 'origin/feature/ARSN-389-optimizeListingWithGapCache' into w/8.1/feature/ARSN-389-optimizeListingWithGapCache 2024-02-14 10:24:17 -08:00
Jonathan Gramain c4c75e976c ARSN-389 DelimiterMaster: v0 format gap skipping
Implement logic in DelimiterMaster to improve efficiency of listings
of buckets in V0 format that have a lot of current delete markers.

A GapCache instance can be attached to a DelimiterMaster instance,
which enables the following:

- Lookups in the cache allow the listing to restart directly beyond
  the cached gaps. This is done by returning the FILTER_SKIP code when
  listing inside a gap, which hints the caller (RepdServer) that it is
  allowed to restart a new listing from a specific later key.

- Building gaps and caching them, when listing inside a series of current
  delete markers. This allows future listings to benefit from the gap
  information and skip over them.

An important caveat is that there is a limited time in which gaps can
be built from the current listing: it is a trade-off to guarantee the
validity of cached gaps when concurrent operations may invalidate
them. This time is set in the GapCache instance as `exposureDelayMs`,
and is the time during which concurrent operations are kept in memory
to potentially invalidate future gap creations. Because listings use a
snapshot of the database, they return entries that are older than when
the listing started. For this reason, in order to be allowed to
consistently build new gaps, it is necessary to limit the running time
of listings, and potentially start new listings periodically (based on
time or number of listed keys), resuming from where the previous
listing stopped, instead of continuing the current listing.
2024-02-14 10:18:02 -08:00
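An illustrative model of the lookup side, with simplified names; the real DelimiterMaster and GapCache interfaces differ:

```ts
const FILTER_ACCEPT = 1; // illustrative constants, not Arsenal exports
const FILTER_SKIP = 0;

interface GapCacheLike {
    lookupGap(key: string): { lastKey: string } | null;
}

// When a key falls inside a cached gap, return FILTER_SKIP and
// remember where the caller may resume the listing.
class GapSkippingListing {
    private skipTo: string | number = FILTER_ACCEPT;

    constructor(private gapCache: GapCacheLike) {}

    filter(key: string): number {
        const gap = this.gapCache.lookupGap(key);
        if (gap) {
            // hint the caller (RepdServer) that it may restart a new
            // listing directly beyond the cached gap
            this.skipTo = `${gap.lastKey}\0`;
            return FILTER_SKIP;
        }
        this.skipTo = FILTER_ACCEPT;
        return FILTER_ACCEPT;
    }

    skipping(): string | number {
        return this.skipTo;
    }
}
```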
Jonathan Gramain 1266a14253 impr: ARSN-389 change contract of skipping() API
Instead of returning a "prefix" for the listing task to skip over,
directly return the key on which to skip and continue the listing.

It is both more natural and needed to implement skipping over
cached "gaps" of deleted objects.

Note that it could even be more powerful to return the type of query
param to apply for the next listing ('gt' or 'gte'), but it would be
more complex to implement with little practical benefit, so instead we
add a null byte at the end of the returned key to skip to, whenever we
want a 'gt' behavior from the returned 'gte' key.

Also in this commit: clarify the API contract and always return
FILTER_ACCEPT when not allowed to skip over low-level listing
contents. A good chunk of the history of listing bugs and workarounds
comes from this confusion.
2024-02-14 10:18:02 -08:00
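A minimal sketch of the clarified contract, using illustrative constants rather than the actual Arsenal exports:

```ts
const FILTER_ACCEPT = 1; // illustrative constant

// skipping() now returns either FILTER_ACCEPT (no skipping allowed) or
// the key at which the listing task should resume its 'gte' range.
function skipping(skipToPrefix: string | null): string | number {
    if (skipToPrefix === null) {
        return FILTER_ACCEPT;
    }
    // the next listing uses 'gte' on the returned key; the trailing
    // null byte turns that into an effective 'gt' on the prefix itself
    return `${skipToPrefix}\0`;
}
```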
williamlardier 851c72bd0f ARSN-396: consider action and isImplicit flags in multipleBackend
The new flags are set when IAM returns detailed information about
the actions, whether they are allowed or denied, with the
isImplicit flag. The mergePolicy must be updated to support the
new fields, and must not merge policies that are for different
actions.

Note that this function will consider that any Allow takes
precedence, so this behavior is not changed.
2024-02-14 12:35:22 +01:00
bert-e 722b6ae699 Merge branch 'w/7.70/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend' into tmp/octopus/w/8.1/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend 2024-02-14 11:13:29 +00:00
bert-e 29925a15ad Merge branch 'bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend' into tmp/octopus/w/7.70/bugfix/ARSN-396-standardize-actionMapBP-and-chainbackend 2024-02-14 11:13:28 +00:00
williamlardier 6b64f50450 ARSN-396: use request context action map for the bucket policies
The S3 Bucket Policies checks must support and evaluate the same
actions as the ones sent to the IAM checks.
Today, we only check a subset of them, so we missed the Versioned
APIs.
2024-02-14 12:02:45 +01:00
Jonathan Gramain 8dc3ba7ca6 bf: ARSN-394 GapCache: invalidate staging gaps
In the GapCache._removeOverlappingGapsBeforeExpose() helper, remove
the gaps from the *staging* set that overlap with any of the staging
or frozen updates, in addition to removing the gaps from the frozen
set.

Without this extra invalidation, it's still possible to have gaps
created within the exposure delay that miss some invalidation,
resulting in stale gaps in the cache.

Modify an existing unit test to cover this case by adding extra wait
time to ensure `_removeOverlappingGapsBeforeExpose()` is called once
after the invalidating update but before the `setGap()` call.
2024-02-13 10:37:40 -08:00
bert-e 3c2283b062 Merge branch 'bugfix/ARSN-393-infiniteLoopInCoalesceGapChain' into tmp/octopus/w/8.1/bugfix/ARSN-393-infiniteLoopInCoalesceGapChain 2024-02-13 18:15:57 +00:00
Jonathan Gramain a6a76acede bf: ARSN-393 infinite loop in GapSet._coalesceGapChain()
The `GapSet._coalesceGapChain()` helper could infinite loop when
encountering a single-key gap (typically as an unchained single gap).
2024-02-12 12:00:04 -08:00
Jonathan Gramain 6a116734a9 ARSN-388 [fixup 8.1] merge fix: add missing files 2024-02-09 10:10:43 -08:00
Jonathan Gramain 9325ea4996 Merge remote-tracking branch 'origin/feature/ARSN-391-gapCache' into w/8.1/feature/ARSN-391-gapCache 2024-02-09 10:00:08 -08:00
Jonathan Gramain 33ba89f0cf Merge remote-tracking branch 'origin/feature/ARSN-388-gapSet' into w/8.1/feature/ARSN-388-gapSet 2024-02-09 09:45:36 -08:00
Jonathan Gramain c67331d350 ft: ARSN-391 GapCache: gap caching and invalidation
Introduce a new helper class GapCache that sits on top of a set of
GapSet instances and delays exposure of gaps by a specific time, to
guarantee atomicity wrt. invalidation from overlapping PUT/DELETE
operations.

The way it is implemented is the following:

- three update sets are used, each containing a GapSet instance and a
  series of key update batches: `staging`, `frozen`, and `exposed`

- `staging` receives the new gaps from `setGap()` calls and the
  updates from `removeOverlappingGaps()`

- `lookupGap()` only returns gaps present in `exposed`

- every `exposureDelayMs` milliseconds, the following happens:

  - the `frozen` gaps get invalidated by all key updates buffered in
    either `staging` or `frozen` update sets

  - the remainder of the `frozen` gaps is merged into `exposed` (via
    internal calls to `exposed.setGap()`)

  - the `staging` update set becomes the new `frozen` update set (both
    the gaps and the key updates)

  - a new `staging` update set is instantiated, empty

This guarantees that any gap set via `setGap()` is only exposed after
a minimum of `exposureDelayMs`, and a maximum of twice that time (plus
extra needed processing time). Also, keys passed to
`removeOverlappingGaps()` are kept in memory for at least `exposureDelayMs`
so they can invalidate new gaps that are created in this time frame.

This, combined with the assurance that setGap() is never called after
`exposureDelayMs` has passed since the listing process started (from a
DB snapshot), guarantees that all gaps not yet exposed have been
invalidated by any overlapping PUT/DELETE operation, hence exposed
gaps are still valid at the time they are exposed. They may still be
invalidated thereafter by future calls to removeOverlappingGaps().

The number of gaps that can be cached is bounded by the 'maxGaps'
attribute. The current strategy consists of simply not adding new gaps
when this limit is reached, solely relying on removeOverlappingGaps()
to make room for new gaps. In the future we could consider
implementing an eviction mechanism to remove less used gaps and/or
with smaller weights, but today the cost vs. benefit of doing this is
unclear.
2024-02-09 09:34:37 -08:00
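A simplified model of the three-set rotation described above; this is a sketch of the mechanism, not the actual GapCache implementation:

```ts
type Gap = { firstKey: string; lastKey: string };

interface UpdateSet {
    gaps: Gap[];           // stands in for a GapSet instance
    updatedKeys: string[];  // buffered PUT/DELETE keys
}

class MiniGapCache {
    private staging: UpdateSet = { gaps: [], updatedKeys: [] };
    private frozen: UpdateSet = { gaps: [], updatedKeys: [] };
    private exposed: Gap[] = [];

    constructor(exposureDelayMs: number) {
        setInterval(() => this.rotate(), exposureDelayMs);
    }

    setGap(firstKey: string, lastKey: string): void {
        this.staging.gaps.push({ firstKey, lastKey });
    }

    removeOverlappingGaps(keys: string[]): void {
        this.staging.updatedKeys.push(...keys);
        // exposed gaps may still be invalidated after exposure
        this.exposed = this.exposed.filter(
            g => !keys.some(k => k >= g.firstKey && k <= g.lastKey));
    }

    lookupGap(key: string): Gap | null {
        // only gaps in the exposed set are visible
        return this.exposed.find(
            g => key >= g.firstKey && key <= g.lastKey) ?? null;
    }

    private rotate(): void {
        const invalidating =
            [...this.staging.updatedKeys, ...this.frozen.updatedKeys];
        // invalidate frozen gaps, then merge the remainder into exposed
        this.exposed.push(...this.frozen.gaps.filter(
            g => !invalidating.some(k => k >= g.firstKey && k <= g.lastKey)));
        // staging becomes the new frozen; a fresh staging set is created
        this.frozen = this.staging;
        this.staging = { gaps: [], updatedKeys: [] };
    }
}
```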
Jonathan Gramain 6d6f1860ef ft: ARSN-388 implement GapSet (caching of listing gaps)
The GapSet class is intended for caching listing "gaps", which are
contiguous series of current delete markers in buckets, although the
semantics can allow for other uses in the future.

The end goal is to increase the performance of listings on V0 buckets
when a lot of delete markers are present, as a temporary solution
until buckets are migrated to V1 format.

This data structure is intended to be used by a GapCache instance,
which implements specific caching semantics (to ensure consistency
wrt. DB updates, for example).
2024-02-09 09:32:49 -08:00
Nicolas Humbert cbe6a5e2d6 ARSN-392 Import the V0 processVersionSpecificPut from Metadata
This logic is used by CRR replication feature to BackbeatClient.putMetadata on top of a null version
2024-02-07 16:19:41 +01:00
Mickael Bourgois be1557d972
ARSN-390: Bump version 2024-02-05 20:03:24 +01:00
Mickael Bourgois a03463061c
Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-390-scuba-arn' into w/8.1/improvement/ARSN-390-scuba-arn 2024-02-05 20:03:10 +01:00
Mickael Bourgois 8ad0ea73a7
ARSN-390: Bump version 2024-02-05 17:45:22 +01:00
Mickael Bourgois a94040d13b
Merge remote-tracking branch 'origin/improvement/ARSN-390-scuba-arn' into w/7.70/improvement/ARSN-390-scuba-arn 2024-02-05 17:45:06 +01:00
Mickael Bourgois f265ed6122
ARSN-390: Bump version 2024-02-05 14:07:31 +01:00
Mickael Bourgois 7301c706fd
ARSN-390: Apply suggestion from code review 2024-02-05 14:07:31 +01:00
Mickael Bourgois bfc8dee559
ARSN-390: Add scuba arn for policy
Relates to SCUBA-76 and SCUBA-77
2024-01-26 16:33:32 +01:00
Frédéric Meinnel 5a5ef7c572 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-386/fix-generate-v4-headers-for-put-with-body-requests' into w/8.1/bugfix/ARSN-386/fix-generate-v4-headers-for-put-with-body-requests 2024-01-23 13:15:43 +01:00
Frédéric Meinnel 918c2c5473 Merge remote-tracking branch 'origin/bugfix/ARSN-386/fix-generate-v4-headers-for-put-with-body-requests' into w/7.70/bugfix/ARSN-386/fix-generate-v4-headers-for-put-with-body-requests 2024-01-23 12:25:28 +01:00
Frédéric Meinnel 29f39ab480 ARSN-386: version bump 2024-01-19 11:07:20 +01:00
Frédéric Meinnel b7ac7f4616 ARSN-385: Fix generateV4Headers for HTTP PUT with body 2024-01-19 11:07:20 +01:00
Frédéric Meinnel f8ce90f9c3 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-385/fully-align-with-aws-on-lifecycle-configuration-dates' into w/8.1/bugfix/ARSN-385/fully-align-with-aws-on-lifecycle-configuration-dates 2024-01-16 17:58:09 +01:00
Frédéric Meinnel 5734d11cf1 Merge remote-tracking branch 'origin/bugfix/ARSN-385/fully-align-with-aws-on-lifecycle-configuration-dates' into w/7.70/bugfix/ARSN-385/fully-align-with-aws-on-lifecycle-configuration-dates 2024-01-16 17:47:02 +01:00
Frédéric Meinnel 4da59769d2 ARSN-385: Version bump 2024-01-16 17:40:34 +01:00
Frédéric Meinnel 60573991ee ARSN-385: Lifecycle configuration dates aligned with XML spec and ISO-8601 2024-01-12 18:45:24 +01:00
Jonathan Gramain 6f58f9dd68 Merge remote-tracking branch 'origin/improvement/ARSN-381-cluster-rpc-helpers' into w/8.1/improvement/ARSN-381-cluster-rpc-helpers 2024-01-11 16:34:37 -08:00
Jonathan Gramain 3b9c93be68 ARSN-381 bump arsenal version 2024-01-11 16:26:33 -08:00
Jonathan Gramain 081af3e795 ARSN-381 RPC command system between cluster workers
When using the cluster module, new processes are forked and are
dispatched workloads, usually HTTP requests. The ClusterRPC module
implements an RPC system to send commands to all cluster worker
processes at once from any particular worker, and retrieve their
individual command results, like a distributed map operation.

The existing cluster IPC channel is set up from the primary to each
worker, but not between workers, so there has to be a hop by the
primary.

How a command is treated:

- a worker sends a command message to the primary

- the primary then forwards that command to each existing worker
  (including the requestor)

- each worker then executes the command and returns a result or an
  error

- the primary gathers all workers results into an array

- finally, the primary dispatches the results array to the original
  requesting worker callback

The original use of this feature is in Metadata DBD (bucketd) to
implement a global cache refresh across worker processes.
2024-01-11 16:26:33 -08:00
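A minimal sketch of that command flow using Node's `cluster` module; the message shapes and worker count are assumptions, not Arsenal's ClusterRPC wire format:

```ts
import cluster from 'node:cluster';

type Cmd = { uid: string; toWorkers: string; payload: unknown };
type Result = { uid: string; result?: unknown; error?: string };

if (cluster.isPrimary) {
    const pending = new Map<string, { from: number; results: Result[] }>();
    cluster.on('message', (worker, msg: Cmd | Result) => {
        if ('toWorkers' in msg) {
            // a worker sent a command: forward it to every worker,
            // including the requestor
            pending.set(msg.uid, { from: worker.id, results: [] });
            for (const w of Object.values(cluster.workers ?? {})) {
                w?.send(msg);
            }
        } else {
            // gather each worker's result; once all have answered,
            // dispatch the results array back to the requesting worker
            const p = pending.get(msg.uid);
            if (!p) return;
            p.results.push(msg);
            if (p.results.length === Object.keys(cluster.workers ?? {}).length) {
                cluster.workers?.[p.from]?.send({ uid: msg.uid, results: p.results });
                pending.delete(msg.uid);
            }
        }
    });
    for (let i = 0; i < 4; i++) cluster.fork();
} else {
    process.on('message', (msg: Cmd | { uid: string; results: Result[] }) => {
        if ('toWorkers' in msg) {
            // each worker executes the command and returns a result
            process.send?.({ uid: msg.uid, result: `pid ${process.pid}` });
        } else {
            console.log('gathered results:', msg.results);
        }
    });
    if (cluster.worker?.id === 1) {
        // a worker initiates a command via the hop through the primary
        process.send?.({ uid: 'cmd-1', toWorkers: '*', payload: null });
    }
}
```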
bert-e 042f541a45 Merge branches 'w/8.1/bugfix/ARSN-384-redirect-error-body' and 'q/2207/7.70/bugfix/ARSN-384-redirect-error-body' into tmp/octopus/q/8.1 2024-01-10 10:23:22 +00:00
bert-e 63bf2cb5b1 Merge branch 'bugfix/ARSN-384-redirect-error-body' into q/7.10 2024-01-10 10:23:21 +00:00
bert-e 39f42d9cb4 Merge branches 'w/7.70/bugfix/ARSN-384-redirect-error-body' and 'q/2207/7.10/bugfix/ARSN-384-redirect-error-body' into tmp/octopus/q/7.70 2024-01-10 10:23:21 +00:00
Mickael Bourgois 02f126f040
ARSN-384: fix after merge 8.1 param name 2024-01-10 11:15:38 +01:00
bert-e 1477a70e47 Merge branch 'w/7.70/bugfix/ARSN-384-redirect-error-body' into tmp/octopus/w/8.1/bugfix/ARSN-384-redirect-error-body 2024-01-10 09:51:16 +00:00
Mickael Bourgois 7233ec2635
Merge remote-tracking branch 'origin/bugfix/ARSN-384-redirect-error-body' into w/7.70/bugfix/ARSN-384-redirect-error-body 2024-01-10 10:50:15 +01:00
Mickael Bourgois c4b44016bc
ARSN-384: bump version 2024-01-10 10:46:26 +01:00
Mickael Bourgois a78a84faa7
ARSN-384: update error check 2024-01-10 10:46:26 +01:00
Mickael Bourgois c3ff6526a1
ARSN-384: ignore 302 statusMessage override
Keep "Found" instead of "Moved Temporarily",
and apply the code review suggestion.
2024-01-10 10:46:26 +01:00
Frédéric Meinnel 59d47a3e21 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-383-lifecycle-configuration-dates-must-be-set-to-midnight' into w/8.1/bugfix/ARSN-383-lifecycle-configuration-dates-must-be-set-to-midnight 2024-01-09 10:35:12 +01:00
Frédéric Meinnel 6b61347c29 Merge remote-tracking branch 'origin/bugfix/ARSN-383-lifecycle-configuration-dates-must-be-set-to-midnight' into w/8.1/bugfix/ARSN-383-lifecycle-configuration-dates-must-be-set-to-midnight 2024-01-08 18:22:57 +01:00
Mickael Bourgois 4bf29524eb
ARSN-384: test redirect on error 2024-01-08 17:49:22 +01:00
Mickael Bourgois 9aa001c4d1
ARSN-384: implement a redirect with error and body 2024-01-08 17:49:22 +01:00
Frédéric Meinnel aea4663ff2 Merge remote-tracking branch 'origin/bugfix/ARSN-383-lifecycle-configuration-dates-must-be-set-to-midnight' into w/7.70/bugfix/ARSN-383-lifecycle-configuration-dates-must-be-set-to-midnight 2024-01-08 15:47:01 +01:00
Frédéric Meinnel 5012e9209c ARSN-383: Version bump 2024-01-08 15:28:06 +01:00
Frédéric Meinnel 1568ad59c6 ARSN-383: Dates must now be set to midnight for lifecycle configurations. 2024-01-08 15:27:23 +01:00
bert-e c2f6b45116 Merge branch 'w/7.70/bugfix/ARSN-382-redirect-root-empty' into tmp/octopus/w/8.1/bugfix/ARSN-382-redirect-root-empty 2024-01-03 08:52:09 +00:00
bert-e a0322b131c Merge branch 'bugfix/ARSN-382-redirect-root-empty' into tmp/octopus/w/7.70/bugfix/ARSN-382-redirect-root-empty 2024-01-03 08:52:08 +00:00
Mickael Bourgois b5487e3c94
ARSN-382: add unit tests for redirect request 2024-01-03 09:51:20 +01:00
bert-e 993b9e6093 Merge branch 'w/7.70/bugfix/ARSN-382-redirect-root-empty' into tmp/octopus/w/8.1/bugfix/ARSN-382-redirect-root-empty 2024-01-02 18:09:07 +00:00
bert-e ddd6c87831 Merge branch 'bugfix/ARSN-382-redirect-root-empty' into tmp/octopus/w/7.70/bugfix/ARSN-382-redirect-root-empty 2024-01-02 18:09:06 +00:00
Mickael Bourgois f2974cbd07
ARSN-382: update redirect location condition
Co-authored-by: Jonathan Gramain <jonathan.gramain@scality.com>
2024-01-02 19:08:59 +01:00
bert-e 7440794d93 Merge branch 'w/7.70/bugfix/ARSN-382-redirect-root-empty' into tmp/octopus/w/8.1/bugfix/ARSN-382-redirect-root-empty 2024-01-02 10:53:55 +00:00
Mickael Bourgois 1efab676bc
Merge remote-tracking branch 'origin/bugfix/ARSN-382-redirect-root-empty' into w/7.70/bugfix/ARSN-382-redirect-root-empty
# Conflicts:
#	package.json
2024-01-02 11:53:05 +01:00
Mickael Bourgois a167e1d5fa
ARSN-382: bump version 2024-01-02 11:17:55 +01:00
Mickael Bourgois c7e153917a
ARSN-382: fix empty location when redirect to /
If an object has a redirect to /, the slash is sliced out
and the function receives an empty string as redirectKey.
Therefore, if redirectLocation consists of the single character /,
the Location header would be empty.
2024-01-02 10:52:50 +01:00
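A rough sketch of the failure mode and guard, with illustrative names rather than the actual Arsenal routing code:

```ts
// Names are illustrative, not the actual Arsenal routing code.
function locationHeader(redirectLocation: string): string {
    // the leading '/' is sliced off to build redirectKey, so a
    // redirect to '/' yields an empty redirectKey
    const redirectKey = redirectLocation.startsWith('/')
        ? redirectLocation.slice(1)
        : redirectLocation;
    if (redirectKey === '' && redirectLocation === '/') {
        // without this guard, the Location header would be empty
        return '/';
    }
    return `/${redirectKey}`;
}
```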
bert-e 087369b37d Merge branches 'w/8.1/improvement/ARSN-363-retention-day-condition' and 'q/2191/7.70/improvement/ARSN-363-retention-day-condition' into tmp/octopus/q/8.1 2023-12-26 10:55:59 +00:00
bert-e 2d2030dfe4 Merge branches 'w/7.70/improvement/ARSN-363-retention-day-condition' and 'q/2191/7.10/improvement/ARSN-363-retention-day-condition' into tmp/octopus/q/7.70 2023-12-26 10:55:58 +00:00
bert-e 45cc4aa79e Merge branch 'improvement/ARSN-363-retention-day-condition' into q/7.10 2023-12-26 10:55:58 +00:00
Will Toozs da80e12dab
Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-363-retention-day-condition' into w/8.1/improvement/ARSN-363-retention-day-condition 2023-12-26 11:49:28 +01:00
Will Toozs a7cf94d0fe
Merge remote-tracking branch 'origin/improvement/ARSN-363-retention-day-condition' into w/7.70/improvement/ARSN-363-retention-day-condition 2023-12-26 11:47:28 +01:00
Jonathan Gramain 2a82095d03 ARSN-379 [8.1] bump arsenal version 2023-12-22 12:41:17 -08:00
Jonathan Gramain 44b3d25459 ARSN-379 [8.1] adapt skipping delete markers in DelimiterMaster
With the MongoDB implementation there may be delete markers in the
masters prefix to go through.

Replace the original implementation for this by a new implementation
compatible with the latest DelimiterMaster changes.

Note: changed the returned value from FILTER_SKIP to FILTER_ACCEPT:
this is the correct logic, as there is no range to skip; the key
simply shouldn't be added to the results.
2023-12-22 12:41:01 -08:00
Jonathan Gramain f1d6e30fb6 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-379-cherry-pick-ARSN-284-and-ARSN-293' into w/8.1/bugfix/ARSN-379-cherry-pick-ARSN-284-and-ARSN-293 2023-12-22 12:40:18 -08:00
Jonathan Gramain 9186643caa ARSN-379 [7.70] bump arsenal version 2023-12-22 12:35:57 -08:00
Jonathan Gramain 485a76ceb9 ARSN-379 [7.70] import FilterState and FilterReturnValue types from Delimiter 2023-12-22 12:35:44 -08:00
Jonathan Gramain 00109a2c44 ARSN-379 [7.70] adapt `DelimiterCurrent` to changes in `Delimiter`/`DelimiterMaster`
The internals of `DelimiterMaster` have changed with S3C-4682
implementation, which requires changes in the `DelimiterCurrent` class
that inherits from it.

Removed the unit test passing a key with a different prefix, because
the prefix check was removed in `DelimiterMaster` as no such key can
be passed by construction of the listing parameters.
2023-12-22 12:35:44 -08:00
Jonathan Gramain aed1247825 Merge remote-tracking branch 'origin/bugfix/ARSN-379-cherry-pick-ARSN-284-and-ARSN-293' into w/7.70/bugfix/ARSN-379-cherry-pick-ARSN-284-and-ARSN-293 2023-12-22 12:35:34 -08:00
Jonathan Gramain 0507c04ce9 ARSN-284 bump arsenal version 2023-12-22 12:13:09 -08:00
Will Toozs 62736abba4
ARSN-363: update package version 2023-12-21 17:24:59 +01:00
Will Toozs 97118f09c4
ARSN-363: update test 2023-12-21 17:24:46 +01:00
Will Toozs 5a84a8c0ad
ARSN-363: add object retention days logic to structures 2023-12-21 17:24:34 +01:00
bert-e 37234efd14 Merge branch 'improvement/ARSN-380-delimiterVersionsInheritFromExtension' into tmp/octopus/w/8.1/improvement/ARSN-380-delimiterVersionsInheritFromExtension 2023-12-20 20:01:59 +00:00
Jonathan Gramain 2799381ef2 ARSN-380 rf: DelimiterVersions class inherits from Extension
Small refactor of DelimiterVersions class to inherit from the base
class Extension rather than Delimiter. Copy the missing fields and
methods from `Delimiter`.

This prepares for merging ARSN-379 which would otherwise cause a lot
of incompatibilities due to changes in the interface of
`DelimiterVersions` from S3C-8242.

Other minor tweaks:

- reset `nextVersionIdMarker` when skipping a common prefix

- rename `this.Contents` to `this.Versions` as we don't need to keep
  compatibility with `Delimiter`, and as it is the name used in the
  final result
2023-12-20 11:57:57 -08:00
Jonathan Gramain a3f13e5387 ARSN-284 fix and refactor Delimiter + DelimiterMaster
Large refactor of Delimiter and DelimiterMaster classes to typescript,
that fixes most known issues with the previous implementation.

The new implementation uses explicit states to manage various
conditions, instead of relying on a bunch of internal variable values
and maintaining their state. It allows a more robust code flow and
fixes issues related to prefix skipping that were hard to fix by
keeping the overall logic of the previous implementation.

This refactor brings the following bug fixes and enhancements:

- prefixes with delete markers and non-deleted objects are
  now always included in CommonPrefixes (S3C-7248)

- no more duplication of internal range listings when doing skip-scan
  over prefixes (discovered when analyzing regressions for S3C-4682)

- the skip-scan mechanism for prefixes and versions is no
  longer disturbed by delete markers and PHD keys (S3C-2930)

- NextMarker is now always set to a valid, listed or listable key
  (that may still be hidden under a CommonPrefix), no more
  manipulation of next marker to avoid corner-cases with keys ending
  with a prefix (S3C-4682 and S3C-7274)

- deleting a delete marker immediately allows the new current version
  to be visible in the listing (S3C-7272)

- Expecting lower CPU usage overall, as the number of checks to do in
  each state is reduced (may help to reduce the load and the impact
  of cases such as S3C-3946)

- Uses typescript to allow more sanity checks

This bugfix and refactor work has been re-integrated in the code by
cherry-picking the following commits:

- f62c3d22 ARSN-252 - listing bug in DelimiterMaster
- 87b060f2 ARSN-269 - listing bug in versioned bucket edge cases.
- 4f0a8468 ARSN-284 [cleanup] remove unused test dependency
- 7b648962 ARSN-284 [rf] delimiterVersions.addCommonPrefix()
- 4d7eaee0 ARSN-284 fix and refactor Delimiter + DelimiterMaster
- 1c07618b ARSN-284 [doc] add state charts
- fbb62ef1 bugfix: ARSN-293 DelimiterMaster: default to vFormat=v0
- 6e5d8d14 bugfix: ARSN-294 use CommonPrefix for NextMarker
2023-12-18 18:13:21 -08:00
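To illustrate the explicit-state approach, a toy two-state delimiter; the state names and logic are heavily simplified compared to the real classes:

```ts
const FILTER_ACCEPT = 1; // illustrative constants
const FILTER_SKIP = 0;

// The listing is either not skipping, or skipping every key under a
// common prefix: each condition lives in an explicit state instead of
// a bundle of internal variables.
type State =
    | { id: 'NotSkipping' }
    | { id: 'SkippingPrefix'; prefix: string };

class StatefulDelimiter {
    private state: State = { id: 'NotSkipping' };

    constructor(private delimiter: string) {}

    filter(key: string): number {
        if (this.state.id === 'SkippingPrefix') {
            if (key.startsWith(this.state.prefix)) {
                return FILTER_SKIP;
            }
            // leaving the prefix: fall back to the normal state
            this.state = { id: 'NotSkipping' };
        }
        const idx = key.indexOf(this.delimiter);
        if (idx !== -1) {
            // key opens a new common prefix: accept it once, then skip
            // every further key under the same prefix
            this.state = { id: 'SkippingPrefix', prefix: key.slice(0, idx + 1) };
        }
        return FILTER_ACCEPT;
    }
}
```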
Jonathan Gramain f4e83086d6 Merge remote-tracking branch 'origin/bugfix/ARSN-377-v1NullKeyDeleteMarkerNotInCommonPrefixes' into w/8.1/bugfix/ARSN-377-v1NullKeyDeleteMarkerNotInCommonPrefixes 2023-12-14 14:54:24 -08:00
Jonathan Gramain d08a267965 ARSN-377 bump arsenal version 2023-12-14 14:52:11 -08:00
Jonathan Gramain 063a2fb8fb ARSN-377 fix DelimiterNonCurrent and add a unit test 2023-12-14 14:51:48 -08:00
Jonathan Gramain 1bc3360daf ARSN-377 correctly handle null keys with common prefix
When encountering a null key, check for its common prefix before
including it in either the Versions array or CommonPrefixes array,
instead of always including it in the Versions array.

This commit refactors how `DelimiterVersions` works with null keys
slightly: the null key is now inserted at its correct ordered position
by the top-level `filter()` method, and the state machine handlers
only have to deal with sorted versions. Previously the individual
handlers would have to deal with the null key positioning themselves
resulting in more complex state management.
2023-12-14 14:12:26 -08:00
Jonathan Gramain 206f14bdf5 ARSN-377 improve versioned listing test
Add version IDs to delete marker metadata
2023-12-14 14:12:26 -08:00
Maha Benzekri 74ff1691a0
Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-378-BP-authorization' into w/8.1/improvement/ARSN-378-BP-authorization 2023-12-14 11:58:47 +01:00
Maha Benzekri 5ffae72693
Merge remote-tracking branch 'origin/improvement/ARSN-378-BP-authorization' into w/7.70/improvement/ARSN-378-BP-authorization 2023-12-14 11:57:20 +01:00
Maha Benzekri 477a574500
ARSN-378: bump ARSN version 2023-12-14 11:55:54 +01:00
bert-e 2a4ea38301 Merge branch 'w/7.70/improvement/ARSN-378-BP-authorization' into tmp/octopus/w/8.1/improvement/ARSN-378-BP-authorization 2023-12-14 10:55:37 +00:00
bert-e df4c22154e Merge branch 'improvement/ARSN-378-BP-authorization' into tmp/octopus/w/7.70/improvement/ARSN-378-BP-authorization 2023-12-14 10:55:36 +00:00
Maha Benzekri 3642ac03b2
ARSN-378: adding missing authorizations to actionMapBP 2023-12-14 11:52:39 +01:00
Francois Ferrand d800179f86
Release arsenal 8.1.115
Issue: ARSN-374
2023-12-01 17:28:59 +01:00
Francois Ferrand c1c45a4af9
gha: upgrade actions
Issue: ARSN-374
2023-12-01 17:27:41 +01:00
Francois Ferrand da536ed037
ObjectMD: Add transition time
Store transition time when marking the object as ‘transition in
progress’. This is used to compute metrics on the duration of transition.

Issue: ARSN-374
2023-12-01 17:27:41 +01:00
Nicolas Humbert 06901104e8 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-376/probe' into w/8.1/bugfix/ARSN-376/probe 2023-12-01 13:38:36 +01:00
Nicolas Humbert a99a6d9d97 Merge remote-tracking branch 'origin/bugfix/ARSN-376/probe' into w/7.70/bugfix/ARSN-376/probe 2023-12-01 11:36:09 +01:00
Nicolas Humbert 06244059a8 bump version 2023-11-30 14:48:07 +01:00
Nicolas Humbert 079f631711 ARSN-376 Probe response logic should be handled in the handler
Currently, the probe response logic is distributed between Backbeat probe handlers and Arsenal's onRequest method.

This scattered approach causes confusion for developers and results in bugs.

The solution is to centralize the probe response logic exclusively within the Backbeat probe handlers.
2023-11-30 14:39:42 +01:00
Benoit A. 863f45d256
ARSN-373 bump hdclient to 1.1.7 2023-11-20 16:52:41 +01:00
KillianG 4b642cf8b4
Add custom listing parser to MongoDB listObject
Add a test to check that the location param is absent

Issue: ARSN-372
2023-11-17 17:45:10 +01:00
KillianG 2537f8aa9a
Exclude location field from search query in MongoReadStream.
Issue: ARSN-372
2023-11-13 11:07:43 +01:00
Maha Benzekri 7866a1d06f
Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-362-implicitDeny' into w/8.1/improvement/ARSN-362-implicitDeny 2023-10-30 16:55:21 +01:00
Maha Benzekri 29ef2ef265
fixup 2023-10-30 16:51:41 +01:00
Maha Benzekri 1509f1bdfe
fix 2023-10-30 16:47:32 +01:00
Maha Benzekri 13d349d211
fix 2023-10-30 16:40:00 +01:00
Maha Benzekri 34a32c967d
Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-362-implicitDeny' into w/8.1/improvement/ARSN-362-implicitDeny 2023-10-30 16:38:08 +01:00
Maha Benzekri 90ab985271
Merge remote-tracking branch 'origin/improvement/ARSN-362-implicitDeny' into w/7.70/improvement/ARSN-362-implicitDeny 2023-10-30 16:35:32 +01:00
Maha Benzekri fbf5562a11
bump arsenal version 2023-10-30 16:08:14 +01:00
bert-e d79ed1b9c8 Merge branch 'w/7.70/improvement/ARSN-362-implicitDeny' into tmp/octopus/w/8.1/improvement/ARSN-362-implicitDeny 2023-10-30 15:01:06 +00:00
bert-e c34ad0dc31 Merge branch 'improvement/ARSN-362-implicitDeny' into tmp/octopus/w/7.70/improvement/ARSN-362-implicitDeny 2023-10-30 15:01:06 +00:00
Maha Benzekri df5ff0f400
ARSN-362: fixups on impl deny policy tests
As the evaluateAllPolicies function uses the result of
standardEvaluateAllPolicies, the redundant tests are removed.
The test that was kept only shows that we use result.verdict
in the old flow evaluation.
2023-10-30 14:30:28 +01:00
Maha Benzekri 777783171a
ARSN-362: change new function name for clarity 2023-10-30 09:36:56 +01:00
Will Toozs 39988e52e2
ARSN-362: add implicit deny logic to policy eval tests 2023-10-27 17:23:36 +02:00
Will Toozs 79c82a4c3d
ARSN-362: add implicit deny logic to policy evaluation 2023-10-27 17:22:20 +02:00
williamlardier 17b5bbc233 ARSN-370: bump project version 2023-10-06 09:14:13 +02:00
williamlardier 4aa8b5cc6e ARSN-370: handle error cases 2023-10-06 09:13:46 +02:00
williamlardier 5deed6c2e1 ARSN-370: fix memory leak
The MongoDBReadStreams are not properly destroyed in both the
Bucket V1 and V0 cases. In the V1 case, only the pipe-ed stream,
the Transform one, is cleaned. In the V0 case, we directly call
the callback without properly cleaning the stream. In both cases
this leaves the MongoDB cursors open, which in turn affects the
mongos memory consumption.
2023-10-06 09:13:46 +02:00
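A minimal sketch of the kind of cleanup involved, assuming Node streams backed by MongoDB cursors; `stream.pipeline()` destroys every stream in the chain on completion or error:

```ts
import { Readable, Transform, pipeline } from 'node:stream';

// Illustrative names: cursorStream wraps a MongoDB cursor.
function listWithCleanup(cursorStream: Readable, transform: Transform,
                         cb: (err?: Error | null) => void): void {
    pipeline(cursorStream, transform, err => {
        // pipeline() destroys *all* streams in the chain, so the
        // underlying cursor is closed in every case, unlike destroying
        // only the pipe-ed Transform stream
        cb(err);
    });
}
```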
Nicolas Humbert af34571771 Merge remote-tracking branch 'origin/bugfix/ARSN-369/skip' into w/8.1/bugfix/ARSN-369/skip 2023-10-05 11:49:01 +02:00
Nicolas Humbert 79b83a9067 ARSN-369 orphan delete marker list interruption skips processed key
In the event of a listing interruption due to reaching the maximum scanned entries, the next “orphan delete marker” listing skips the currently processed key.
2023-10-05 09:39:45 +02:00
Nicolas Humbert 5fd675a316 Merge remote-tracking branch 'origin/improvement/ARSN-366/listing-scanned-limit' into w/8.1/improvement/ARSN-366/listing-scanned-limit 2023-09-27 17:22:45 +02:00
Nicolas Humbert d84cc974d3 ARSN-366 Limit lifecycle listing on scanned entries 2023-09-27 17:19:03 +02:00
Maha Benzekri dcf0f902ff
Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-367-principal-user-arn-on-policy' into w/8.1/bugfix/ARSN-367-principal-user-arn-on-policy 2023-09-25 12:20:06 +02:00
Maha Benzekri 0177fbe98f
Merge remote-tracking branch 'origin/bugfix/ARSN-367-principal-user-arn-on-policy' into w/7.70/bugfix/ARSN-367-principal-user-arn-on-policy 2023-09-25 12:17:43 +02:00
Maha Benzekri f49cea3914
ARSN-367- bump ARSN version 2023-09-25 12:05:46 +02:00
Maha Benzekri 73c6f41fa3
ARSN-367: principal change on schema and test add
The maximum length should be 2048 characters;
with 31 characters in the fixed-length prefix,
this explains the 2017-character max limit put in the schema (2048 - 31 = 2017).
2023-09-15 10:27:47 +02:00
bert-e 5b66f8d089 Merge branch 'w/7.70/bugfix/ARSN-365-id-on-resource-policy' into tmp/octopus/w/8.1/bugfix/ARSN-365-id-on-resource-policy 2023-09-13 06:27:36 +00:00
bert-e b61d178b18 Merge branch 'bugfix/ARSN-365-id-on-resource-policy' into tmp/octopus/w/7.70/bugfix/ARSN-365-id-on-resource-policy 2023-09-13 06:27:35 +00:00
Maha Benzekri 9ea39c6ed9
ARSN-365:Id added on policy schema and validator
Signed-off-by: Maha Benzekri <maha.benzekri@scality.com>
2023-09-12 21:01:45 +02:00
Florent Monjalet e51b06cfea ARSN-364: bump arsenal to 8.1.109 2023-08-31 18:46:36 +02:00
Florent Monjalet f2bc701f8c ARSN-364: bump sproxydclient to 8.0.10 (for SPRXCLT-12) 2023-08-31 18:46:06 +02:00
Nicolas Humbert 4d6b03ba47 ARSN-360 bump package version 2023-08-11 13:31:22 -04:00
Nicolas Humbert f03f049683 ARSN-360 Test enable V0 bucket format for Artesca lifecycle listing 2023-08-11 12:37:25 -04:00
Nicolas Humbert d7b51de024 ARSN-360 Enable V0 bucket format for Artesca lifecycle listing 2023-08-11 08:30:55 -04:00
Nicolas Humbert cf51adf1c7 Merge remote-tracking branch 'origin/bugfix/ARSN-359/max-keys' into w/8.1/bugfix/ARSN-359/max-keys 2023-08-08 19:59:22 -04:00
Nicolas Humbert 8a7c1be2d1 ARSN-359 bump arsenal version 2023-08-08 19:50:42 -04:00
Nicolas Humbert c049df0a97 ARSN-359 Fix NextMarker calculation in listLifecycleCurrent
Please note that there are no missing entries in the listing and no extra resource used, since the next listing will do the fetching anyway. The issue lies in how we determine the NextMarker. It has to be compatible with the current logic merged in Artesca.

When using the listLifecycleCurrent function, we need to calculate the NextMarker correctly. Currently, if the maximum number of keys (max-keys) is reached, the function continues fetching more entries, which is unnecessary and should be done by the next listing.

For instance, if max-keys is set to 1 and the first entry (key0) is eligible, while the following two entries (key1 and key2) are not eligible, but the fourth entry (key3) is eligible, the listing should stop at key0 and the NextMarker should be key0, instead of the listing fetching until key3 and returning key2 as the NextMarker.
2023-08-08 19:50:12 -04:00
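A simplified model of the corrected NextMarker rule (not the actual listLifecycleCurrent code):

```ts
interface Entry { key: string; eligible: boolean }

function listCurrent(entries: Entry[], maxKeys: number) {
    const contents: string[] = [];
    for (const { key, eligible } of entries) {
        if (contents.length === maxKeys) {
            // stop here: NextMarker is the last listed key (key0 in the
            // example above); the next listing resumes from there
            return { contents, nextMarker: contents[contents.length - 1] };
        }
        if (eligible) {
            contents.push(key);
        }
    }
    return { contents, nextMarker: undefined };
}
```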
Nicolas Humbert 2b2667e29a Merge remote-tracking branch 'origin/improvement/ARSN-358/bump' into w/8.1/improvement/ARSN-358/bump 2023-08-08 13:16:11 -04:00
Nicolas Humbert 8eb4a29c36 ARSN-358 bump version 2023-08-08 13:12:22 -04:00
bert-e 862317703e Merge branch 'improvement/ARSN-356/list-orphan-delete-marker-v0' into tmp/octopus/w/8.1/improvement/ARSN-356/list-orphan-delete-marker-v0 2023-08-04 21:25:02 +00:00
Nicolas Humbert e69a97f240 add comment about this.start 2023-08-04 17:24:07 -04:00
Nicolas Humbert 81e838000f ARSN-356 List lifecycle orphan delete markers supports V0 2023-08-04 17:24:03 -04:00
bert-e 547ce816e0 Merge branch 'improvement/ARSN-355/list-non-current-v0' into tmp/octopus/w/8.1/improvement/ARSN-355/list-non-current-v0 2023-08-04 17:03:23 +00:00
Nicolas Humbert 8256d6debf ARSN-355 List lifecycle non-current versions supports V0 2023-08-04 13:02:35 -04:00
bert-e 15d5e93a2d Merge branch 'improvement/ARSN-354/list-current-v0' into tmp/octopus/w/8.1/improvement/ARSN-354/list-current-v0 2023-08-01 15:56:22 +00:00
Nicolas Humbert 69c1698eb7 ARSN-354 List lifecycle current versions supports V0 bucket format 2023-08-01 11:53:37 -04:00
bert-e d11bcb56e9 Merge branch 'improvement/ARSN-352/list-current' into tmp/octopus/w/8.1/improvement/ARSN-352/list-current 2023-08-01 14:16:19 +00:00
Nicolas Humbert c2cd90925f Adapt delimiterCurrent for S3C Metadata 2023-08-01 10:09:26 -04:00
bert-e 0ed35c3d86 Merge branch 'q/2151/7.70/improvement/ARSN-351/backport' into tmp/normal/q/8.1 2023-07-21 16:40:33 +00:00
bert-e b1723594eb Merge branch 'improvement/ARSN-351/backport' into q/7.70 2023-07-21 16:40:31 +00:00
Nicolas Humbert c0218821ff Merge remote-tracking branch 'origin/improvement/ARSN-351/backport' into w/8.1/improvement/ARSN-351/backport 2023-07-21 12:30:02 -04:00
Nicolas Humbert 49e32758fb ARSN-351 cleanup MongoDB tests 2023-07-21 08:29:16 -04:00
Nicolas Humbert e13d0f5ed8 ARSN-351 support listLifecycleObject in BucketFileInterface 2023-07-21 08:29:16 -04:00
Nicolas Humbert 0d5907956f ARSN-351 export DelimiterNonCurrent and DelimiterOrphanDeleteMarker for Metadata 2023-07-21 08:29:16 -04:00
Nicolas Humbert f0c5d60ce9 ARSN-351 export DelimiterCurrent for Metadata 2023-07-21 08:29:16 -04:00
Nicolas Humbert 8c2f4cf357 ARSN-351 support listLifecycleObject in BucketClientInterface 2023-07-21 08:29:16 -04:00
Nicolas Humbert f3f1da9bb3 ARSN-350 Missing Null Version in Lifecycle List of Non-Current Versions
Note: We only support the v1 bucket format for "list lifecycle" in Artesca.

We made the assumption that the first version key stored the current/latest version, which is true in most cases except for "null" versions. In the case of a "null" version, the current version is stored in the master key alone, rather than being stored in both the master key and a new version key. Here's an example of the key structure:

Mkey0: Represents the null version ID.
VKey0<versionID>: Represents a non-current version.

Additionally, we assumed that the versions for a given key were ordered by creation date, from newest to oldest. However, in Ring S3C, for non-current null versions, the metadata version ID is not part of the metadata key id. Therefore, the non-current null version is listed before the current version that has a version ID. Here's an example of the key ordering:

Mkey0: Master version
Vkey0: "null" non-current version
VKey0<versionID>: Current version

The listing was using only versions, but because those assumptions are incorrect, we now use both the master and the versions for each given key to ensure that we return the correct non-current versions.

(cherry picked from commit 0a4d6f862f)
2023-07-21 08:29:16 -04:00
Nicolas Humbert 036b75842e ARSN-328 Exclude keys based on their dataStoreName
(cherry picked from commit e216c9dd20)
2023-07-21 08:29:16 -04:00
Nicolas Humbert 7ac5774635 ARSN-312 Add logic to list orphan delete markers for Lifecycle
DelimiterOrphan is used for listing orphan delete markers. The Metadata call returns the versions (V prefix). The MD response is then processed to only return the delete markers with zero non-current versions before a defined date: beforeDate.

(cherry picked from commit c9a444969b)
2023-07-17 09:06:23 -04:00
Nicolas Humbert f3b928fce0 ARSN-311 Add logic to list non-current versions for Lifecycle
DelimiterNonCurrent is used for listing non-current versions. The Metadata call returns the versions (V prefix). The MD response is then processed to only return the non-current versions that became non-current before a defined date: beforeDate.

(cherry picked from commit 5d018860ec)
2023-07-17 09:06:23 -04:00
Nicolas Humbert 7173a357d9 ARSN-326 Lifecycle listings should handle null version
(cherry picked from commit 4be0a06c4a)
2023-07-17 09:06:23 -04:00
Nicolas Humbert 7c4f461196 bump version 2023-07-14 09:20:58 -04:00
Nicolas Humbert 0a4d6f862f ARSN-350 Missing Null Version in Lifecycle List of Non-Current Versions
Note: We only support the v1 bucket format for "list lifecycle" in Artesca.

We made the assumption that the first version key stored the current/latest version, which is true in most cases except for "null" versions. In the case of a "null" version, the current version is stored in the master key alone, rather than being stored in both the master key and a new version key. Here's an example of the key structure:

Mkey0: Represents the null version ID.
VKey0<versionID>: Represents a non-current version.

Additionally, we assumed that the versions for a given key were ordered by creation date, from newest to oldest. However, in Ring S3C, for non-current null versions, the metadata version ID is not part of the metadata key id. Therefore, the non-current null version is listed before the current version that has a version ID. Here's an example of the key ordering:

Mkey0: Master version
Vkey0: "null" non-current version
VKey0<versionID>: Current version

The listing was using only versions, but because those assumptions are incorrect, we now use both the master and the versions for each given key to ensure that we return the correct non-current versions.
2023-07-14 09:20:36 -04:00
bert-e 8716fee67d Merge branch 'q/2134/7.70/improvement/ARSN-345-optimize-multiobjectdelete-api-and-batching' into tmp/normal/q/8.1 2023-07-12 11:36:29 +00:00
bert-e 2938bb0c88 Merge branch 'improvement/ARSN-345-optimize-multiobjectdelete-api-and-batching' into q/7.70 2023-07-12 11:36:28 +00:00
williamlardier 05c93446ab
Merge remote-tracking branch 'origin/improvement/ARSN-345-optimize-multiobjectdelete-api-and-batching' into w/8.1/improvement/ARSN-345-optimize-multiobjectdelete-api-and-batching 2023-07-12 13:26:01 +02:00
williamlardier 8d758327dd
ARSN-345: bump package version 2023-07-12 13:19:38 +02:00
williamlardier be63c09624
ARSN-345: update tests and logic 2023-07-12 13:19:01 +02:00
Nicolas Humbert 4615875462 ARSN-310 Add logic to list current/master versions for Lifecycle
DelimiterCurrent is used for listing current versions. The Metadata call returns the masters (M prefix) younger than a defined date: beforeDate. No extra filtering action is needed on the Metadata call response.

(cherry picked from commit ecd600ac4b)
2023-06-23 08:11:54 -04:00
Rahul Padigela bdb59a0e63 Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-349-update-node-fcntl' into w/8.1/improvement/ARSN-349-update-node-fcntl 2023-06-20 16:34:38 -07:00
bert-e a89d1d8d75 Merge branch 'improvement/ARSN-349-update-node-fcntl' into tmp/octopus/w/7.70/improvement/ARSN-349-update-node-fcntl 2023-06-20 23:12:07 +00:00
Rahul Padigela 89e5f7dffe improvement: ARSN-349 bump node-fcntl 2023-06-20 16:05:12 -07:00
williamlardier 57e84980c8
ARSN-345: optimize InternalDeleteObject with direct deletion support 2023-06-15 13:43:27 +02:00
williamlardier 51bfd41bea
ARSN-345: optimize MultiDeleteObject with batching support 2023-06-15 13:43:27 +02:00
Nicolas Humbert 96cbaeb821 Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-347/socketio' into w/8.1/bugfix/ARSN-347/socketio 2023-06-08 11:46:23 -04:00
Nicolas Humbert cb01346d07 Merge remote-tracking branch 'origin/bugfix/ARSN-347/socketio' into w/7.70/bugfix/ARSN-347/socketio 2023-06-08 11:44:15 -04:00
Nicolas Humbert 3f24336b83 bump arsenal version 2023-06-08 11:39:11 -04:00
Nicolas Humbert 1e66518a79 ARSN-347 socket.io client is disconnected when sending a big payload
The file backend test fails when migrating the socket.io client from version 2.x to 4.x due to a change in the default value of maxHttpBufferSize. In the newer version, the default value has been reduced from 100MB to 1MB, causing the failure when attempting to initiate, put parts, and complete an MPU (Multipart Upload) with 10,000 parts.
2023-06-08 11:38:59 -04:00
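
A minimal sketch of the corresponding server-side setting, assuming socket.io 4.x (the 100 MB value mirrors the old 2.x default):

```ts
import { createServer } from 'http';
import { Server } from 'socket.io';

const httpServer = createServer();
// socket.io 4.x lowered the default maxHttpBufferSize from 100 MB to 1 MB;
// raising it again lets large payloads (e.g. completing a 10,000-part MPU)
// go through without disconnecting the client.
const io = new Server(httpServer, {
    maxHttpBufferSize: 100 * 1024 * 1024,
});
io.on('connection', socket => {
    // dispatch RPC calls here
});
```
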
bert-e 15b68fa9fa Merge branch 'improvement/ARSN-344/bump' into q/8.1 2023-06-07 14:06:37 +00:00
Nicolas Humbert 51703a65f5 ARSN-344 bump version 2023-06-07 08:58:42 -04:00
bert-e 09aaa2d5ee Merge branch 'improvement/ARSN-339/time-progression-factor' into q/8.1 2023-06-07 12:10:45 +00:00
Nicolas Humbert ad39d90b6f ARSN-339 Introduce the time-progression-factor flag
The "time-progression-factor" variable serves as a testing-specific feature that accelerates the progression of time within a system.
By shortening the effective duration of a day, it enables the swift execution of specific actions, such as expiration, transition, and object locking, which are typically associated with longer timeframes.

This capability allows for efficient testing and evaluation of outcomes, optimizing the observation of processes that would normally take days or even years.
It's important to note that this variable is intended exclusively for testing purposes and is not employed in live production environments, where real-time progression is crucial for accurate results.
2023-06-05 17:17:45 -04:00
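
As a rough illustration (a hypothetical helper, not the actual implementation), the factor can be seen as dividing the real-time length of a day:

```ts
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Hypothetical helper: with timeProgressionFactor = 1 a "day" is a real day;
// with 86400 a "day" elapses every second, so expiration/transition rules
// expressed in days fire almost immediately in tests.
function scaledMsPerDay(timeProgressionFactor = 1): number {
    return MS_PER_DAY / timeProgressionFactor;
}

console.log(scaledMsPerDay(86400)); // => 1000 (one "day" per second)
```
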
Jonathan Gramain 20e9fe4adb Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-340-bump-socket-io' into w/8.1/bugfix/ARSN-340-bump-socket-io 2023-05-30 16:06:31 -07:00
bert-e e9c67f7f67 Merge branch 'bugfix/ARSN-340-bump-socket-io' into tmp/octopus/w/7.70/bugfix/ARSN-340-bump-socket-io 2023-05-30 22:49:34 +00:00
Jonathan Gramain af3fd17ec2 bf: ARSN-340 bump socket.io dep to 4.6.1
4.6.1 is the latest version to date of the nodejs socket.io module. It
fixes a bunch of CVEs related to the socket.io and xmlhttprequest modules
for the open-source metadata storage.
2023-05-30 15:42:24 -07:00
bert-e 536d474f57 Merge branches 'development/8.1' and 'w/7.70/improvement/ARSN-335-implement-ghas' into tmp/octopus/w/8.1/improvement/ARSN-335-implement-ghas 2023-05-25 17:52:46 +00:00
bert-e 55e68cfa17 Merge branch 'w/7.10/improvement/ARSN-335-implement-ghas' into tmp/octopus/w/7.70/improvement/ARSN-335-implement-ghas 2023-05-25 17:52:45 +00:00
bert-e 67c98fd81b Merge branch 'improvement/ARSN-335-implement-ghas' into tmp/octopus/w/7.10/improvement/ARSN-335-implement-ghas 2023-05-25 17:52:45 +00:00
williamlardier 5cd70d7cf1 ARSN-267: fix failing unit test
NodeJS 16.17.0 introduced a change in the error handling of TLS sockets:
in case of error, the connection is closed before the response is sent,
so handling the ECONNRESET error in the affected test will unblock it,
until this is fixed by NodeJS, if appropriate.

(cherry picked from commit a237e38c51)
2023-05-25 17:50:00 +00:00
KillianG 25be9014c9
Bump version 8.1.101 2023-05-25 10:00:14 +00:00
KillianG ed42f24580
Add comment to explain
Issue: ARSN-337
2023-05-25 09:00:14 +00:00
KillianG ce076cb3df
Add test to check master versions are skipped in v1 as well
Issue: ARSN-337
2023-05-23 13:30:57 +00:00
KillianG 4bc3de52ff
Filter delete markers from version-suspended buckets
Issue: ARSN-337
2023-05-22 16:40:15 +00:00
bert-e beb5f69be3 Merge branch 'w/7.70/improvement/ARSN-335-implement-ghas' into tmp/octopus/w/8.1/improvement/ARSN-335-implement-ghas 2023-05-19 15:59:38 +00:00
bert-e 5f3540a0d5 Merge branch 'w/7.10/improvement/ARSN-335-implement-ghas' into tmp/octopus/w/7.70/improvement/ARSN-335-implement-ghas 2023-05-19 15:59:38 +00:00
bert-e 654d628d39 Merge branch 'improvement/ARSN-335-implement-ghas' into tmp/octopus/w/7.10/improvement/ARSN-335-implement-ghas 2023-05-19 15:59:37 +00:00
gaspardmoindrot e8a409e337 [ARSN-335] Implement GHAS 2023-05-16 21:21:49 +00:00
Alexander Chan 4093bf2b04 bump version 2023-04-16 19:12:15 -07:00
Alexander Chan d0bb6d5b0c ARSN-334: add mongodb list in progress indexing jobs 2023-04-16 19:12:15 -07:00
bert-e 3f7229eebe Merge branch 'improvement/ARSN-309/addMongoIndexObjectTransforms' into q/8.1 2023-04-15 00:33:49 +00:00
bert-e 7eb9d52da5 Merge branch 'improvement/ARSN-328/excludedDataStoreName' into q/8.1 2023-04-14 21:50:25 +00:00
Nicolas Humbert e216c9dd20 ARSN-328 Exclude keys based on their dataStoreName 2023-04-14 14:42:13 -07:00
williamlardier 0c1afe535b
ARN-333: bump to 8.1.97 2023-04-14 20:05:57 +02:00
williamlardier 73335ae6ec
ARN-333: fix callback for adminDb command in sharded mode with mongo driver 2023-04-14 20:05:33 +02:00
Alexander Chan 99c514e8f2 bump version 2023-04-13 15:14:47 -07:00
Alexander Chan cfd9fdcfc4 bump eslint dependency 2023-04-13 15:14:30 -07:00
Alexander Chan d809dac5e3 ARSN-309: add mongodb index object helper methods 2023-04-13 15:14:01 -07:00
williamlardier 53dac8d233
ARSN-329: bump arsenal to 8.1.96 2023-04-13 16:29:37 +02:00
williamlardier 6d5ef07eee
ARSN-329: update latest changes 2023-04-13 16:29:36 +02:00
williamlardier 272166e406
ARSN-329: update tests 2023-04-13 15:44:43 +02:00
williamlardier 3af05e672b
ARSN-329: switch to promises as callbacks are deprecated 2023-04-13 15:44:42 +02:00
williamlardier 8b0c90cb2f
ARSN-329: bump mongodb driver 2023-04-13 15:44:39 +02:00
Alexander Chan dfc9b761e2 bump version 2023-04-12 14:00:31 -07:00
Alexander Chan 04f1eb7f04 ARSN-332: bump sproxydclient dependency 2023-04-12 14:00:31 -07:00
bert-e c204b90847 Merge branch 'feature/ARSN-324-add-s3-lifecycle-expiration-to-existing-object-delete-function' into q/8.1 2023-04-11 13:48:33 +00:00
bert-e 78d6e7fd72 Merge branch 'feature/ARSN-309/supportMongoIndexing' into q/8.1 2023-04-10 17:57:52 +00:00
Alexander Chan 7768fa8d35 ARSN-309: support indexing for mongo 2023-04-07 09:51:49 -07:00
KillianG 4d9a9adc48
Bump arsenal 8.1.94
Issue: ARSN-324
2023-04-07 12:35:50 +00:00
KillianG c4804e52ee
Add unit test for internal delete object function with custom origin OP
Issue: ARSN-324
2023-04-07 12:34:10 +00:00
KillianG 671cf3a679
Add an argument to internal delete object for calls made from lifecycle expiration, to avoid raising an objectremoved:delete event
Issue: ARSN-324
2023-04-07 12:34:10 +00:00
Jonathan Gramain 9a5e27f97b Merge remote-tracking branch 'origin/bugfix/ARSN-330-delimiterVersionsWithKeyContainingUndefined' into w/8.1/bugfix/ARSN-330-delimiterVersionsWithKeyContainingUndefined 2023-04-05 15:41:59 -07:00
Jonathan Gramain d744a709d2 ARSN-330 bump arsenal version 2023-04-05 15:40:53 -07:00
Jonathan Gramain a9d003c6f8 Merge remote-tracking branch 'origin/bugfix/ARSN-330-delimiterVersionsWithKeyContainingUndefined' into w/8.1/bugfix/ARSN-330-delimiterVersionsWithKeyContainingUndefined 2023-04-05 15:37:11 -07:00
Jonathan Gramain 99e04bd6fa bf: ARSN-330 fix DelimiterVersions exception when key contains "undefined"
Fix a crash when a listed key contains the string "undefined": the
`key.indexOf` method was used without first checking whether a
delimiter was set, so an unset delimiter was coerced to the string
"undefined", which could be found in a key containing that string,
causing an exception thereafter.
2023-04-05 15:35:35 -07:00
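
A minimal sketch of the coercion bug and its fix (the helper name is illustrative, not the actual DelimiterVersions code):

```ts
// key.indexOf(undefined) coerces the argument to the string "undefined",
// so a key like "photos/undefined/x.jpg" wrongly took the delimiter code path.
function delimiterIndex(key: string, delimiter?: string): number {
    // fixed: only search for a delimiter when one was actually provided
    return delimiter !== undefined ? key.indexOf(delimiter) : -1;
}

console.log(delimiterIndex('photos/undefined/x.jpg'));      // -1 (no delimiter set)
console.log(delimiterIndex('photos/undefined/x.jpg', '/')); // 6
```
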
Jonathan Gramain d3bdddeba3 Merge remote-tracking branch 'origin/improvement/ARSN-320-newObjectMDIsNull2' into w/8.1/improvement/ARSN-320-newObjectMDIsNull2 2023-04-04 09:31:14 -07:00
bert-e 3252f7de03 Merge branch 'feature/ARSN-317-bucketFileNullKeySupport' into tmp/octopus/w/8.1/feature/ARSN-317-bucketFileNullKeySupport 2023-04-04 16:11:03 +00:00
Jonathan Gramain c4cc5a2c3d ARSN-320 bump arsenal version to 7.70.4 2023-04-04 09:10:19 -07:00
Jonathan Gramain fedd0190cc impr: ARSN-320 add "isNull2" attribute to ObjectMD
This new attribute will be set alongside "isNull" whenever a Cloudserver
supporting null keys sets the "isNull" attribute on a master key.

The purpose of this attribute is to allow Cloudserver to optimize by
not having to check and delete a null versioned key when the null
master has "isNull2" set, as it is guaranteed not to exist.

We need to introduce a new attribute to keep backward compatibility,
the naming is a bit unfortunate but it has the benefit of being short
and not too specific to a particular optimization, just stating that
it is a "new" null master.
2023-04-04 09:10:19 -07:00
Jonathan Gramain 56fd4ad734 ft: ARSN-317 null key support in BucketFile backend
Support null keys in BucketFile backend - null keys are the new way to
store null versions, where a single database key with a specific empty
version ID is used instead of referencing the null version via
"nullVersionId" in object metadata.

Add relevant unit tests to check the new behavior (those were copied
and mechanically adapted from the Metadata repository).
2023-04-04 09:09:05 -07:00
Jonathan Gramain ebe6b65fcf ARSN-317 [rf] cleanup logging
Use "logger.addDefaultFields()" to attach bucket, key and options to the
logs, which cleans up log calls.

Log repair errors with `log.error` unless it's ObjNotFound
2023-04-04 09:08:48 -07:00
Nicolas Humbert 7994bf7b96 ARSN-327 Bump Arsenal 8.1.92 2023-04-03 14:38:45 -04:00
Nicolas Humbert 4be0a06c4a ARSN-326 Lifecycle listings should handle null version 2023-04-03 08:39:08 -04:00
bert-e da7dbdc51f Merge branch 'improvement/ARSN-325-bump-sproxydclient' into q/8.1 2023-03-29 11:56:40 +00:00
Will Toozs 2103ef1237
ARSN-325: bump project version 2023-03-29 13:39:55 +02:00
Will Toozs dbc1c54246
ARSN-325: bump sproxydclient 2023-03-29 13:17:19 +02:00
bert-e 6c22f8404d Merge branch 'feature/ARSN-318-bucketFileListVersionKeys' into tmp/octopus/w/8.1/feature/ARSN-318-bucketFileListVersionKeys 2023-03-28 22:58:53 +00:00
KillianG 00e03f0592
bump 8.1.90
Issue: ARSN-323
2023-03-24 16:05:23 +00:00
KillianG d453758b7d
Add s3:LifecycleExpiration to the list of supported notification events
Issue: ARSN-322
2023-03-24 15:50:42 +00:00
KillianG a964dc99c3
Add s3:LifecycleTransition event to the list of supportedNotificationEvents
Issue: ARSN-321
2023-03-24 10:09:02 +00:00
Jonathan Gramain 3a4da1d7c0 ARSN-318 port listVersionKeys() helper for BucketFile backend
Port the listVersionKeys() helper from the Metadata backend to the
BucketFile backend, as a first step towards supporting null keys in
BucketFile.
2023-03-23 10:57:41 -07:00
williamlardier 5074e6c0a4
ARSN-316: bump to 8.1.89 2023-03-21 13:33:32 +01:00
williamlardier bd05dd6918
ARSN-316: add tests for new mongodb routes 2023-03-21 13:32:18 +01:00
williamlardier fbda12ce3c
ARSN-316: individually update bucket capabilities 2023-03-21 13:32:15 +01:00
Nicolas Humbert b02934bb39 ARSN-319 bump arsenal 2023-03-16 13:03:27 -04:00
Nicolas Humbert c9a444969b ARSN-312 Add logic to list orphan delete markers for Lifecycle
DelimiterOrphan is used for listing orphan delete markers. The Metadata call returns the versions (V prefix). The MD response is then processed to return only the delete markers with zero noncurrent versions before a defined date: beforeDate.
2023-03-16 12:06:27 -04:00
Nicolas Humbert 5d018860ec ARSN-311 Add logic to list non-current versions for Lifecycle
DelimiterNonCurrent is used for listing non-current versions. The Metadata call returns the versions (V prefix). The MD response is then processed to return only the non-current versions that became non-current before a defined date: beforeDate.
2023-03-16 10:03:04 -04:00
bert-e 5838e02096 Merge branch 'feature/ARSN-310/listLifecycleCurrent' into q/8.1 2023-03-16 13:15:55 +00:00
Nicolas Humbert ecd600ac4b ARSN-310 Add logic to list current/master versions for Lifecycle
DelimiterCurrent is used for listing current versions. The Metadata call returns the masters (M prefix) younger than a defined date: beforeDate. No extra filtering action is needed on the Metadata call response.
2023-03-16 08:40:14 -04:00
Naren ab0324da05 impr: ARSN-315 bump version to 8.1.87 2023-03-14 17:05:46 -07:00
Naren 2b353b33af Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-315-bump-version-7-10-46' into w/8.1/improvement/ARSN-315-bump-version-7-10-46 2023-03-14 17:02:30 -07:00
Naren 5377b20ceb impr: ARSN-315 bump version to 7.70.3 2023-03-14 16:52:08 -07:00
Naren 21b329b301 Merge remote-tracking branch 'origin/improvement/ARSN-315-bump-version-7-10-46' into w/7.70/improvement/ARSN-315-bump-version-7-10-46 2023-03-14 16:49:03 -07:00
Naren bd76402586 impr: ARSN-315 bump version 7.10.46 2023-03-14 16:25:06 -07:00
bert-e fd57f47be1 Merge branch 'w/7.70/improvement/ARSN-315-disable-default-metrics-collection' into tmp/octopus/w/8.1/improvement/ARSN-315-disable-default-metrics-collection 2023-03-14 23:13:08 +00:00
bert-e 94edf8be70 Merge branch 'improvement/ARSN-315-disable-default-metrics-collection' into tmp/octopus/w/7.70/improvement/ARSN-315-disable-default-metrics-collection 2023-03-14 23:13:08 +00:00
Naren 1d104345fd impr: ARSN-315 expose collecting default metrics as fn
Collecting default metrics should not happen by default; it should be invoked only when needed. Collecting them unconditionally causes build errors when multiple components use Arsenal.
2023-03-14 16:08:44 -07:00
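
A sketch of the change, assuming the prom-client library: wrapping collectDefaultMetrics() in a function lets each component opt in explicitly instead of registering collectors at import time.

```ts
import { collectDefaultMetrics, register } from 'prom-client';

// Before: collectDefaultMetrics() ran at module load, so every consumer of
// Arsenal registered the default collectors whether it wanted them or not.
// After: consumers call this explicitly when (and only if) they need it.
export function collectDefaultMetricsFn(): void {
    collectDefaultMetrics({ register });
}
```
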
Jonathan Gramain 58e47e5015 ARSN-306 [8.1 only] skip PHDs in DelimiterVersions V1
Since Artesca uses PHD keys in V1 format, skip them during listing of
versions
2023-03-09 10:03:02 -08:00
Jonathan Gramain 4d782ecec6 Merge remote-tracking branch 'origin/improvement/ARSN-306-delimiterVersionsNullKeySupport' into w/8.1/improvement/ARSN-306-delimiterVersionsNullKeySupport 2023-03-09 10:02:54 -08:00
Jonathan Gramain 655a10ce52 ARSN-306 version bump 2023-03-09 09:57:25 -08:00
Jonathan Gramain 0c7f0e607d ARSN-306 [doc] add state chart for DelimiterVersions
And a markdown file with summary of what the listing algo does
2023-03-09 09:56:28 -08:00
Jonathan Gramain caa5d53e9b impr: ARSN-306 support null keys in versions listing
Add support for null keys in versions listing:

- when they exist, output the null keys at the appropriate position in
  the Versions array

- handle KeyMarker/VersionIdMarker appropriately as if the null keys
  were real versions. This requires the listing to start at the very
  first version of the next key each time to see the null key, then
  potentially skip over the versions below VersionIdMarker using
  skip-scan optimization.
2023-03-09 09:56:28 -08:00
Jonathan Gramain 21da975187 ARSN-306 [refactor] DelimiterVersions state machine
Use a state machine for cleaner state management in DelimiterVersions
listing algo, with Typescript for enhanced type checking

Also, fix an inefficiency with listing params generated from the
KeyMarker parameter when there is a delimiter: it was listing more
keys than necessary when the KeyMarker equals a CommonPrefix.
2023-03-09 09:56:28 -08:00
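
A condensed sketch of the state-machine approach (the states and the single transition shown are illustrative; the real DelimiterVersions has more states and richer transitions):

```ts
enum FilterState {
    NotSkipping,      // emit keys normally
    SkippingPrefix,   // inside a CommonPrefix: skip until the prefix ends
    SkippingVersions, // skip versions at or below the VersionIdMarker
}

// Each incoming key is handled by the current state, which returns the next
// state, making the listing flow explicit instead of spread across flags.
function nextState(state: FilterState, key: string, prefix: string): FilterState {
    switch (state) {
        case FilterState.SkippingPrefix:
            return key.startsWith(prefix)
                ? FilterState.SkippingPrefix
                : FilterState.NotSkipping;
        default:
            return state;
    }
}
```
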
bert-e e0df67a115 Merge branch 'bugfix/ARSN-314-missingDescribeInListObjectsTest' into q/8.1 2023-03-09 17:51:50 +00:00
Naren 7e18ae77e0 impr: ARSN-313 update healthprobe server tests 2023-03-08 19:30:41 -08:00
Naren 4750118f85 impr: ARSN-313 upgrade prom-client 2023-03-08 19:10:34 -08:00
Naren c273c8b823 Merge remote-tracking branch 'origin/w/7.70/improvement/ARSN-313-upgrade-prom-client' into w/8.1/improvement/ARSN-313-upgrade-prom-client 2023-03-08 19:01:16 -08:00
Jonathan Gramain d3b50fafa8 ARSN-314 [test fix] add missing describe() in listObject
Add a missing describe() block to avoid tests running in parallel for
v0 and v1. This usually led to v1 being used for all tests.
2023-03-08 18:38:49 -08:00
Naren 47e68a9b60 Merge remote-tracking branch 'origin/improvement/ARSN-313-upgrade-prom-client' into w/7.70/improvement/ARSN-313-upgrade-prom-client 2023-03-08 17:51:26 -08:00
Naren bd0a199ffa impr: ARSN-313 corrections in ZenkoMetrics
- retain metric config types
- set asPrometheus as async fn
2023-03-08 16:37:38 -08:00
Naren 4b1f69bcbb impr: ARSN-313 bump version to 7.10.45 2023-03-08 15:28:48 -08:00
Naren e3a6814e3f impr ARSN-313 upgrade prom-client 2023-03-08 15:27:30 -08:00
Alexander Chan bf4072151f Merge remote-tracking branch 'origin/w/7.70/bugfix/ARSN-308/addLifecycleUtilsNoncurrentVersionSupport' into w/8.1/bugfix/ARSN-308/addLifecycleUtilsNoncurrentVersionSupport 2023-03-01 05:26:10 -08:00
Alexander Chan f33cd69e45 Merge remote-tracking branch 'origin/bugfix/ARSN-308/addLifecycleUtilsNoncurrentVersionSupport' into w/7.70/bugfix/ARSN-308/addLifecycleUtilsNoncurrentVersionSupport 2023-03-01 04:55:37 -08:00
Alexander Chan acd13ff31b ARSN-308: update lifecycle utils to support noncurrent version
* update lifecycle utils to support noncurrent versions
* remove `console.log`
2023-03-01 04:45:19 -08:00
Alexander Chan bb3e5d078f version bump 2023-03-01 04:44:30 -08:00
Jonathan Gramain 22fa04b7e7 Merge remote-tracking branch 'origin/feature/ARSN-307-bumpVersion' into w/8.1/feature/ARSN-307-bumpVersion 2023-02-23 23:02:17 -08:00
Jonathan Gramain 10a94a0a96 ARSN-307 bump version to 7.70.0 2023-02-23 23:00:46 -08:00
bert-e 4d71a834d5 Merge branch 'w/7.70/feature/ARSN-298/addHeapDataStructure' into tmp/octopus/w/8.1/feature/ARSN-298/addHeapDataStructure 2023-02-24 02:19:16 +00:00
Alexander Chan 054f61d6c1 ARSN-298: add Min/Max heap data structure 2023-02-23 18:19:05 -08:00
Alexander Chan fa26a487f5 Merge remote-tracking branch 'origin/w/7.70/feature/ARSN-298/supportNewerNoncurrentVersions' into w/8.1/feature/ARSN-298/supportNewerNoncurrentVersions 2023-02-23 16:04:47 -08:00
Alexander Chan c1dd2e4946 bump version 2023-02-23 15:03:26 -08:00
Alexander Chan a714103b82 ARSN-298: support lifecycle NewerNoncurrentVersions
updates `LifecyleConfiguration` and `LifecycleRule` to support the
`NewerNoncurrentVersions` parameter for NoncurrentVersionExpirations
2023-02-23 15:00:57 -08:00
Jonathan Gramain 66740f5aba Merge remote-tracking branch 'origin/bugfix/ARSN-284-revert' into w/8.1/bugfix/ARSN-284-revert 2023-01-30 16:16:05 +01:00
Jonathan Gramain a3a83dd89c ARSN-284 bump arsenal version 2023-01-30 16:10:02 +01:00
williamlardier 8db8109391 ARSN-267: fix failing unit test
NodeJS 16.17.0 introduced a change in the error handling of TLS sockets:
in case of error, the connection is closed before the response is sent,
so handling the ECONNRESET error in the affected test will unblock it,
until this is fixed by NodeJS, if appropriate.

(cherry picked from commit a237e38c51)
2023-01-30 16:10:02 +01:00
Jonathan Gramain d90af29019 Revert "ARSN-252 - listing bug in DelimiterMaster"
This reverts commit f62c3d22ed.
2023-01-30 16:07:06 +01:00
Jonathan Gramain 9d8d98fcc9 Revert "ARSN-269 - listing bug in versioned bucket edge cases."
This reverts commit 87b060f2ae.
2023-01-30 16:07:06 +01:00
Jonathan Gramain 01830d19a0 Revert "ARSN-284 [cleanup] remove unused test dependency"
This reverts commit 4f0a846814.
2023-01-30 16:07:05 +01:00
Jonathan Gramain 49cc018fa4 Revert "ARSN-284 [rf] delimiterVersions.addCommonPrefix()"
This reverts commit 7b64896234.
2023-01-30 16:07:05 +01:00
Jonathan Gramain dd87c869ca Revert "ARSN-284 fix and refactor Delimiter + DelimiterMaster"
This reverts commit 4d7eaee0cc.
2023-01-30 16:07:04 +01:00
Jonathan Gramain df44cffb96 Revert "ARSN-284 [doc] add state charts"
This reverts commit 1c07618b18.
2023-01-30 16:07:03 +01:00
Jonathan Gramain 164053d1e8 Revert "bugfix: ARSN-293 DelimiterMaster: default to vFormat=v0"
This reverts commit fbb62ef17c.
2023-01-30 16:07:03 +01:00
Jonathan Gramain af741c50fb Revert "bugfix: ARSN-294 use CommonPrefix for NextMarker"
This reverts commit 6e5d8d14af.
2023-01-30 16:07:02 +01:00
williamlardier 9c46703b89
ARSN-297: bump to 8.1.82 2023-01-23 16:49:28 +01:00
williamlardier 47672d60ce
ARSN-297: remove Version from request context 2023-01-23 16:46:32 +01:00
Jonathan Gramain 6d41d103e8 Merge remote-tracking branch 'origin/bugfix/ARSN-294-setNextMarkerToCommonPrefix' into w/8.1/bugfix/ARSN-294-setNextMarkerToCommonPrefix 2023-01-12 15:46:32 -08:00
Jonathan Gramain 34ccca9b07 ARSN-294 bump arsenal version 2023-01-12 15:28:28 -08:00
Jonathan Gramain 6e5d8d14af bugfix: ARSN-294 use CommonPrefix for NextMarker
Revert behavior introduced for S3C-7274 that changed NextMarker to an
object key instead of a common prefix, the ticket was invalid as AWS
does use a CommonPrefix.

Add a unit test for a corner case with a marker inside a prefix that
was only caught in Cloudserver functional tests.
2023-01-12 15:27:50 -08:00
Jonathan Gramain 890ac08dcd Merge remote-tracking branch 'origin/bugfix/ARSN-293-delimiterMasterDefaultsToV0' into w/8.1/bugfix/ARSN-293-delimiterMasterDefaultsToV0 2023-01-08 19:20:46 -08:00
Jonathan Gramain 4cda9f6a6b ARSN-293 bump arsenal version 2023-01-08 19:17:19 -08:00
Jonathan Gramain fbb62ef17c bugfix: ARSN-293 DelimiterMaster: default to vFormat=v0
The BucketFile interface (open-source) does not pass an explicit
vFormat to the constructor of the listing algorithm. DelimiterMaster
does not interpret it correctly and uses vFormat=v1 logic in this
case, resulting in wrong listing results.

Fix it by checking against `this.vFormat` that was set with a default
value by the Delimiter class, instead of directly using the
constructor parameter `vFormat`.
2023-01-08 19:14:39 -08:00
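
A minimal sketch of the bug and fix (class names follow the commit message; the bodies are illustrative):

```ts
const BucketVersioningKeyFormat = { v0: 'v0', v1: 'v1' } as const;
type VFormat = typeof BucketVersioningKeyFormat[keyof typeof BucketVersioningKeyFormat];

class Delimiter {
    vFormat: VFormat;
    constructor(vFormat?: VFormat) {
        // the parent class defaults to v0 when no explicit vFormat is passed
        this.vFormat = vFormat ?? BucketVersioningKeyFormat.v0;
    }
}

class DelimiterMaster extends Delimiter {
    useV0Logic: boolean;
    constructor(vFormat?: VFormat) {
        super(vFormat);
        // buggy: testing `vFormat === BucketVersioningKeyFormat.v0` is false
        // for undefined, so BucketFile (which passes nothing) got v1 logic.
        // fixed: test the defaulted field instead of the raw parameter.
        this.useV0Logic = this.vFormat === BucketVersioningKeyFormat.v0;
    }
}
```
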
Jonathan Gramain 4949b7cc35 ARSN-284 [8.1] adjust routesToMem listing test 2023-01-06 16:37:36 -08:00
Jonathan Gramain 2b6fee4e84 Merge remote-tracking branch 'origin/bugfix/ARSN-284-refactorDelimiter' into w/8.1/bugfix/ARSN-284-refactorDelimiter 2023-01-06 16:02:30 -08:00
Jonathan Gramain 8077186c3a ARSN-284 bump version 2023-01-06 15:59:00 -08:00
Jonathan Gramain 1c07618b18 ARSN-284 [doc] add state charts
Add new state charts in GraphViz format for Delimiter and DelimiterMaster
2023-01-06 15:57:51 -08:00
Jonathan Gramain 4d7eaee0cc ARSN-284 fix and refactor Delimiter + DelimiterMaster
Large refactor of Delimiter and DelimiterMaster classes to typescript,
that fixes most known issues with the previous implementation.

The new implementation uses explicit states to manage various
conditions, instead of relying on a bunch of internal variable values
and maintaining their state. It allows a more robust code flow and
fixes issues related to prefix skipping that were hard to fix by
keeping the overall logic of the previous implementation.

This refactor brings the following bug fixes and enhancements:

- prefixes with delete markers and non-deleted objects are
  now always included in CommonPrefixes (S3C-7248)

- no more duplication of internal range listings when doing skip-scan
  over prefixes (discovered when analyzing regressions for S3C-4682)

- the skip-scan mechanism for prefixes and versions is no
  longer disturbed by delete markers and PHD keys (S3C-2930)

- NextMarker is now always set to a valid, listed or listable key
  (that may still be hidden under a CommonPrefix), no more
  manipulation of next marker to avoid corner-cases with keys ending
  with a prefix (S3C-4682 and S3C-7274)

- deleting a delete marker immediately allows the new current version
  to be visible in the listing (S3C-7272)

- Lower CPU usage expected overall, as the number of checks to perform in
  each state is reduced (may help to reduce the load and the impact
  of cases such as S3C-3946)

- Uses typescript to allow more sanity checks
2023-01-06 15:57:19 -08:00
williamlardier c460338163
ARSN-291: bump arsenal to 8.1.78 2023-01-04 14:02:35 +01:00
williamlardier f17d52b602
ARSN-291: use separate function to get specific capability 2023-01-04 14:02:35 +01:00
williamlardier a6b234b7a8
ARSN-291: new bucket field for capabilities 2023-01-04 12:19:59 +01:00
williamlardier ff353bb4d6
ARSN-291: document new field for capabilities 2022-12-26 09:26:34 +01:00
williamlardier 0f9c9c2f18
ARSN-289: bump Arsenal to 8.1.77 2022-12-20 17:20:00 +01:00
williamlardier f6b2cf2c1a
ARSN-289: bump projects for better sockets handling 2022-12-20 17:19:41 +01:00
Kerkesni ecafbae36a
bugfix: ARSN-278 bump version 2022-12-19 15:52:11 +01:00
Kerkesni d1cd7e8dba
bugfix: ARSN-278 handle getting versionId when object is versioning suspended
When replicating a versioning-suspended object, we need to specify 'null'
as the encoded versionId, because the versionId contained within the object's
metadata is strictly internal

In the replication processor we use getVersionId() when putting/deleting a tag.
It's used by the mongoClient to fetch the object from MongoDB; here again we
need to specify 'null' to get the versioning-suspended object (cloudserver already
knows how to handle 'null' versionId and transforms it to undefined before giving
it to the mongoClient)
2022-12-19 15:51:56 +01:00
Francois Ferrand 3da6719200
Release 8.1.75
Issue: ARSN-273
2022-12-16 15:51:07 +01:00
Francois Ferrand c0dd54ef51
Support alternate azure auth method
Issue: ARSN-273
2022-12-16 15:48:17 +01:00
Francois Ferrand 7910792390
Fix commit blocks list 2022-12-16 15:46:07 +01:00
Francois Ferrand a4f4c51290
Fix mpu block id
It must be base64-encoded in the new Azure API.

Issue: ARSN-281
2022-12-16 15:46:07 +01:00
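
A sketch of the encoding, assuming the @azure/storage-blob SDK where staged block IDs must be base64-encoded and of equal length within a blob (the helper name is hypothetical):

```ts
// Hypothetical helper: fixed-width raw IDs keep the base64 forms equal-length,
// as Azure requires all block IDs of a blob to have the same size.
function mpuBlockId(partNumber: number): string {
    const raw = `part-${String(partNumber).padStart(5, '0')}`;
    return Buffer.from(raw, 'utf8').toString('base64');
}

console.log(mpuBlockId(1)); // "cGFydC0wMDAwMQ=="
```
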
Francois Ferrand 66c4bc52b5
AzureClient : Cleanup _errorWrapper
Make better use of async and simplify error handling.

Issue: ARSN-281
2022-12-16 15:46:07 +01:00
Francois Ferrand 81cd6652d6
Use new url parser in mongoclient
This fixes a warning in the logs. The old parser is deprecated and will be
removed at some point.

Issue: ARSN-281
2022-12-16 15:46:07 +01:00
Francois Ferrand 2a07f67244
Fix yarn warning
Issue: ARSN-281
2022-12-16 15:46:07 +01:00
Francois Ferrand 1a634015ee
Upgrade azure sdk
There are a few caveats:
* The `proxy.certs` param is not used anymore (though, looking at the old SDK
code, it may not have been supported in the first place)
* `azureStreamingOptions/options` parameters have not been updated. The
old options (`range` and `DateUnModifiedSince`) are still used and
supported, to avoid compatibility issues.

Issue: ARSN-281
2022-12-16 15:46:07 +01:00
williamlardier 7a88a54918
ARSN-277: bump project version 2022-12-14 17:18:19 +01:00
williamlardier b25e620750
ARSN-277: use JS version of httpagent 2022-12-14 17:18:19 +01:00
williamlardier 38ef89cc83
ARSN-277: standard private repos import 2022-12-14 10:03:32 +01:00
williamlardier 1a6c828bfc
ARSN-277: update jest configuration for typescript subdeps 2022-12-13 20:07:48 +01:00
williamlardier 3d769c6960
ARSN-277: ensure install dependencies step is stable 2022-12-13 20:07:48 +01:00
williamlardier 8a27920a85
ARSN-277: update logic according to changes 2022-12-13 20:07:47 +01:00
williamlardier 7642a22176
ARSN-277: bump projects and add httpagent 2022-12-13 20:07:43 +01:00
Jonathan Gramain 7b64896234 ARSN-284 [rf] delimiterVersions.addCommonPrefix()
Copy addCommonPrefix from Delimiter to DelimiterVersions to prepare for the rehaul of Delimiter class, and make it use this.NextMarker directly
2022-12-09 14:22:40 -08:00
Jonathan Gramain 4f0a846814 ARSN-284 [cleanup] remove unused test dependency 2022-12-09 14:15:13 -08:00
bert-e 8f63687ef3 Merge branch 'feature/ARSN-280-abstract-update' into q/8.1 2022-11-18 15:11:34 +00:00
Kerkesni 26f45fa81a
feature: ARSN-280 bump version to 8.1.73 2022-11-18 16:00:59 +01:00
Kerkesni 76b59057f7
feature: ARSN-280 Set update event's type to delete
The update operation we perform just before deleting an object, where
we set the deletion flag, will be used as the deletion event because,
contrary to the actual deletion event, it contains the object metadata.
2022-11-18 16:00:59 +01:00
Kerkesni ae0da3d605
feature: ARSN-279 support S3:ObjectRestore event notifications 2022-11-15 17:21:19 +01:00
bert-e 7c1bd453ee Merge branch 'feature/ARSN-235-update-object-before-deleting-it' into q/7.10 2022-11-14 09:20:17 +00:00
bert-e 162d9ec46b Merge branch 'q/1944/7.10/feature/ARSN-235-update-object-before-deleting-it' into tmp/normal/q/8.1 2022-11-14 09:20:17 +00:00
Kerkesni ccd6462015
feature: ARSN-235 bump version to 8.1.72 2022-11-14 10:10:49 +01:00
Kerkesni 665c77570c
feature: ARSN-235 fix ObjectMD unit tests 2022-11-13 22:16:59 +01:00
Kerkesni 27307b397c
feature: ARSN-235 unskip unit tests in 8.x 2022-11-13 22:16:59 +01:00
Kerkesni 414eada32b
feature: ARSN-235 add functional tests 2022-11-13 22:16:59 +01:00
Kerkesni fdf0c6fe99
feature: ARSN-235 add isPHD flag to ObjectMD model
The "isPHD" flag indicates that a master object is in a temporary
invalid state that gets repaired asynchronously after a certain period
of time. The repair either updates the metadata or deletes the master
object.

This invalid state happens when deleting the last version of an object.
Previously the "isPHD" flag was set directly inside the object metadata
without going through the ObjectMD model, which is not ideal.
2022-11-13 22:16:58 +01:00
Kerkesni 8cc0be7da2
feature: ARSN-235 add deletion flag to ObjectMD model
The deletion flag indicates that an object is in the process of
being deleted. The object's metadata is updated with the deletion flag
set to true before deleting it, to keep a trace of the latest metadata
inside the oplog, as normal Mongo delete events don't contain any metadata.
2022-11-13 22:16:58 +01:00
bert-e 65231633a7 Merge branch 'feature/ARSN-235-update-object-before-deleting-it' into tmp/octopus/w/8.1/feature/ARSN-235-update-object-before-deleting-it 2022-11-13 21:16:18 +00:00
Kerkesni 9a975723c1
feature: ARSN-235 document oplog 2022-11-13 22:04:29 +01:00
Kerkesni ef024ddef3
feature: ARSN-235 fix unit tests 2022-11-13 22:04:29 +01:00
Kerkesni b61138a348
feature: ARSN-235 ignore objects flagged for deletion when listing objects 2022-11-13 22:04:28 +01:00
Kerkesni d852eef08e
feature: ARSN-235 ignore objects flagged for deletion when getting object 2022-11-13 22:04:28 +01:00
Kerkesni fd63b857f3
feature: ARSN-235 update object before deletion
Object deletion no longer directly deletes the object: it first
updates its metadata by setting the deletion flag and originOp, then
proceeds to deleting the object.

This is done to keep a trace of the latest object metadata before deletion
in the oplog, as oplog delete events don't hold that information. This
information is needed for both Cold Storage and Bucket Notification

We also add all the object metadata to the placeholder (PHD) master
which wasn't previously the case, again this is done to keep the metadata
in the oplog as a PHD might get directly deleted in the repair phase.
2022-11-13 22:04:28 +01:00
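
A hedged sketch of the two-step flow described above (the metadata helpers are stand-ins, not the actual API):

```ts
type ObjectMDLike = Record<string, unknown>;

async function putObjectMD(md: ObjectMDLike): Promise<void> { /* metadata PUT */ }
async function deleteObjectMD(md: ObjectMDLike): Promise<void> { /* metadata DELETE */ }

// Step 1 writes the full metadata with the deletion flag, so the oplog keeps
// a copy of the last known metadata (plain delete events carry none);
// step 2 performs the actual deletion.
async function internalDeleteObject(md: ObjectMDLike, originOp: string): Promise<void> {
    await putObjectMD({ ...md, deleted: true, originOp });
    await deleteObjectMD(md);
}
```
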
Alexander Chan 92c567414a bump version to 8.1.71 2022-11-07 16:26:38 -08:00
Alexander Chan ec55e39175 ARSN-276: putObjectVerCase3 - add check for v1 format and versioned updates
An erroneous master entry is created when performing a previous-version
update in a v1 format bucket.

Added fix:
* check whether the update targets a previous version
* check whether the master entry exists
* if the master entry doesn't exist and the operation is an update to a
  previous version, skip the upsert
2022-11-07 16:22:50 -08:00
Jonathan Gramain c343820cae Merge remote-tracking branch 'origin/bugfix/ARSN-274-fixBucketPolicyActionMap' into w/8.1/bugfix/ARSN-274-fixBucketPolicyActionMap 2022-11-01 18:34:44 -07:00
Jonathan Gramain 0f9da6a44e ARSN-274 bump version to 7.10.38 2022-11-01 18:20:58 -07:00
Jonathan Gramain 53a42f7411 bugfix: ARSN-274 move `objectHead` action in shared map
Move the `objectHead` action in the shared action map so that bucket
policies can use it and grant HEAD request access when 's3:GetObject'
permission is present.

Note: relevant tests will be added in Cloudserver, see CLDSRV-291
2022-11-01 18:18:51 -07:00
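
A sketch of the action-map change (the map shape is illustrative): with objectHead in the shared map, HEAD requests resolve to the s3:GetObject action that bucket policies can grant.

```ts
// Before: objectHead lived only in the IAM-specific map, so bucket policies
// granting s3:GetObject did not cover HEAD requests.
const sharedActionMap: Record<string, string> = {
    objectGet: 's3:GetObject',
    objectHead: 's3:GetObject', // moved here so bucket policies apply too
};

console.log(sharedActionMap.objectHead); // "s3:GetObject"
```
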
Jonathan Gramain 9c2bed8034 cleanup: ARSN-274 remove duplicate notification actions 2022-11-01 15:24:37 -07:00
williamlardier 8307a1513e
ARSN-272: bump version 2022-10-03 09:34:51 +02:00
williamlardier 706c2425fe
ARSN-272: support array of arrays for req context 2022-10-03 09:34:47 +02:00
williamlardier 8618d77de9
Merge remote-tracking branch 'origin/improvement/ARSN-270-use-standard-permission-names' into w/8.1/improvement/ARSN-270-use-standard-permission-names 2022-09-27 09:18:08 +02:00
williamlardier 9d614a4ab3
ARSN-270: bump project version 2022-09-27 09:15:28 +02:00
williamlardier 7763685cb0
ARSN-270: change bad permission names 2022-09-27 09:14:53 +02:00
Artem Bakalov 8abe746222 Merge remote-tracking branch 'origin/improvement/ARSN-271-bump-version' into w/8.1/improvement/ARSN-271-bump-version 2022-09-26 20:04:36 -07:00
Artem Bakalov 4c6712741b v7.10.36 2022-09-26 19:43:43 -07:00
bert-e e74cca6795 Merge branch 'bugfix/ARSN-269-listing-bug-versioned-bucket-edge-case' into tmp/octopus/w/8.1/bugfix/ARSN-269-listing-bug-versioned-bucket-edge-case 2022-09-23 23:58:08 +00:00
Artem Bakalov 87b060f2ae ARSN-269 - listing bug in versioned bucket edge cases.
Simplifies testing that was used in ARSN-262. Adds a function allowDelimiterRangeSkip
to determine when a nextContinueMarker range can be skipped when .skipping is called.
This function uses a new state variable prefixKeySeen and the nextContinueMarker to determine
if a range of the form prefix/ can be skipped. An additional check is added when processing
delete markers of the form prefix/foo/(bar) so that the prefix/foo/ range can still be skipped
as an optimization.
2022-09-22 20:03:47 -07:00
bert-e 1427abecb7 Merge branches 'q/1982/7.10/bugfix/ARSN-252-listing-bug-versioned-bucket' and 'w/8.1/bugfix/ARSN-252-listing-bug-versioned-bucket' into tmp/octopus/q/8.1 2022-09-16 10:30:20 +00:00
bert-e 9dc357ab8d Merge branch 'bugfix/ARSN-252-listing-bug-versioned-bucket' into q/7.10 2022-09-16 10:30:19 +00:00
bert-e 4771ce3067 Merge branch 'bugfix/ARSN-252-listing-bug-versioned-bucket' into tmp/octopus/w/8.1/bugfix/ARSN-252-listing-bug-versioned-bucket 2022-09-16 02:26:12 +00:00
Artem Bakalov f62c3d22ed ARSN-252 - listing bug in DelimiterMaster
DelimiterMaster.filter is used to determine when a key range can be skipped in Metadata:RepdServer to optimize listing performance.
When a bucket is created with vFormat=v0, and subsequently a listing is done with a prefix, DelimiterMaster.filter was incorrectly
determining that a range could be skipped if a key was listed such that key == prefix. This case is now correctly handled in filterV0.
2022-09-15 19:05:29 -07:00
williamlardier 4e8a907d99
Merge remote-tracking branch 'origin/improvement/ARSN-267-support-updaterole-action' into w/8.1/improvement/ARSN-267-support-updaterole-action 2022-09-07 13:30:51 +02:00
williamlardier a237e38c51
ARSN-267: fix failing unit test
NodeJS 16.17.0 introduced a change in the error handling of TLS sockets:
in case of error, the connection is closed before the response is sent,
so handling the ECONNRESET error in the affected test will unblock it,
until this is fixed by NodeJS, if appropriate.
2022-09-07 13:22:30 +02:00
williamlardier 4388cb7790
ARSN-267: bump project version 2022-09-06 10:43:42 +02:00
williamlardier 095a2012cb
ARSN-267: support UpdateRole action 2022-09-06 10:43:30 +02:00
Killian Gardahaut 6f42b3e64c Merge remote-tracking branch 'origin/improvement/ARSN-266-change-bucketownedbyyou-error-message' into w/8.1/improvement/ARSN-266-change-bucketownedbyyou-error-message 2022-08-24 13:27:00 +00:00
Killian Gardahaut 264e0c1aad ARSN-266: change create bucket owned by you message error 2022-08-24 13:17:29 +00:00
Jonathan Gramain 237872a5a3 Merge remote-tracking branch 'origin/feature/ARSN-265-release-7.10.33' into w/8.1/feature/ARSN-265-release-7.10.33 2022-08-17 16:29:30 -07:00
Jonathan Gramain 0130355e1a ARSN-265 release 7.10.33 2022-08-17 16:26:52 -07:00
bert-e 390fd97edf Merge branch 'bugfix/ARSN-263/cb' into q/8.1 2022-08-17 22:50:41 +00:00
Nicolas Humbert 1c9e4eb93d bump version 2022-08-17 18:43:20 -04:00
bert-e af50ef47d7 Merge branch 'bugfix/ARSN-255-revampEvaluatePolicyForTagConditions' into q/7.10 2022-08-17 22:01:22 +00:00
bert-e a4f163f466 Merge branches 'w/8.1/bugfix/ARSN-255-revampEvaluatePolicyForTagConditions' and 'q/1989/7.10/bugfix/ARSN-255-revampEvaluatePolicyForTagConditions' into tmp/octopus/q/8.1 2022-08-17 22:01:22 +00:00
Nicolas Humbert 4d0cc9bc12 ARSN-263 retrieveData callback should only be called once 2022-08-17 12:41:33 -04:00
bert-e 657f969d05 Merge branch 'bugfix/ARSN-262-fixRequestContextConstructor' into tmp/octopus/w/8.1/bugfix/ARSN-262-fixRequestContextConstructor 2022-08-12 01:24:07 +00:00
Jonathan Gramain 4f2b1ca960 bugfix: ARSN-262 fixes/tests in RequestContext
- remove "postXml" field, as it was a left-over from prototyping

- handle fields related to tag conditions: requestObjTags,
  existingObjTag, needTagEval, those were missing from constructor
  params

- fix a typo in serialization: requersterInfo -> requesterInfo

- new unit tests for RequestContext
  constructor/serialize/deserialize/getters
2022-08-11 18:19:38 -07:00
bert-e b43cf22b2c Merge branch 'bugfix/ARSN-255-revampEvaluatePolicyForTagConditions' into tmp/octopus/w/8.1/bugfix/ARSN-255-revampEvaluatePolicyForTagConditions 2022-08-10 22:04:06 +00:00
Killian Gardahaut 46c44ccaa6 Merge remote-tracking branch 'origin/improvement/ARSN-261-bump-7-10-32' into w/8.1/improvement/ARSN-261-bump-7-10-32 2022-08-10 08:38:02 +00:00
Killian Gardahaut f45f65596b ARSN-261: bump 7.10.32 2022-08-10 08:36:22 +00:00
bert-e 90c63168c1 Merge branches 'w/8.1/improvement/ARSN-257-bump-7-10-31' and 'q/1980/7.10/improvement/ARSN-257-bump-7-10-31' into tmp/octopus/q/8.1 2022-08-10 08:17:10 +00:00
bert-e 10402ae78d Merge branch 'improvement/ARSN-257-bump-7-10-31' into q/7.10 2022-08-10 08:17:10 +00:00
Jonathan Gramain 5cd1df8601 bugfix: ARSN-255 revamp evaluatePolicy logic for tag conditions
Rethink the logic of tag condition evaluation, so that the
"evaluateAllPolicies" function appropriately returns the verdict:
Allow or Deny or NeedTagConditionEval, the latter being when tag
values (request and/or object tags) are needed to settle the verdict
to Allow or Deny, in which case, Cloudserver knows it has to resend
the request to Vault along with tag info.
2022-08-09 18:43:58 -07:00
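
A condensed sketch of the verdict logic (names and precedence are illustrative simplifications):

```ts
type Verdict = 'Allow' | 'Deny' | 'NeedTagConditionEval';

// An explicit Deny settles the verdict; otherwise, if any policy could only
// be settled with tag values, Cloudserver must re-ask Vault with tag info.
function evaluateAllPolicies(perPolicy: Verdict[]): Verdict {
    if (perPolicy.includes('Deny')) return 'Deny';
    if (perPolicy.includes('NeedTagConditionEval')) return 'NeedTagConditionEval';
    return perPolicy.includes('Allow') ? 'Allow' : 'Deny'; // implicit deny
}
```
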
Jonathan Gramain ee38856f29 ARSN-255 [cleanup] better exports in evaluator.ts
Turn 'const' function objects into actual functions.
2022-08-09 18:29:16 -07:00
Jonathan Gramain fe5f868f43 Merge remote-tracking branch 'origin/improvement/ARSN-260-findConditionKeyInefficiency' into w/8.1/improvement/ARSN-260-findConditionKeyInefficiency 2022-08-09 18:00:46 -07:00
Jonathan Gramain dc229bb8aa improvement: ARSN-260 improve efficiency of findConditionKey
Instead of pre-creating a Map with all supported condition keys before
returning the wanted one, use a switch/case construct to directly
return the attribute from the request context.
2022-08-09 17:54:58 -07:00
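
A sketch of the switch/case shape (the context getters are assumed for illustration, not verified signatures):

```ts
interface RequestContextLike {
    getRequesterIp(): string;
    getSslEnabled(): boolean;
}

// Instead of building a Map of every supported condition key on each call,
// return the requested attribute directly.
function findConditionKey(key: string, ctx: RequestContextLike): unknown {
    switch (key) {
        case 'aws:SourceIp': return ctx.getRequesterIp();
        case 'aws:SecureTransport': return ctx.getSslEnabled();
        // ...one case per supported condition key
        default: return undefined;
    }
}
```
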
Killian Gardahaut c0ee81eb7a Merge remote-tracking branch 'origin/improvement/ARSN-257-bump-7-10-31' into w/8.1/improvement/ARSN-257-bump-7-10-31 2022-08-09 15:35:13 +00:00
Killian Gardahaut a6a48e812f ARSN-257: bump 7.10.31 2022-08-09 15:32:33 +00:00
bert-e 604a0170f1 Merge branches 'w/8.1/feature/ARSN-256-supportTaggingAndAclEvents' and 'q/1978/7.10/feature/ARSN-256-supportTaggingAndAclEvents' into tmp/octopus/q/8.1 2022-08-08 19:41:51 +00:00
bert-e 5a8372437b Merge branch 'feature/ARSN-256-supportTaggingAndAclEvents' into q/7.10 2022-08-08 19:41:50 +00:00
Killian Gardahaut 9d8f4793c9 Merge remote-tracking branch 'origin/bugfix/ARSN-253-issue-with-special-unicode-chars' into w/8.1/bugfix/ARSN-253-issue-with-special-unicode-chars 2022-08-08 13:53:39 +00:00
Killian Gardahaut 69d33a3341 ARSN-253: Speed up AWS URI encode function 2022-08-08 13:49:18 +00:00
Killian Gardahaut c4ead93bd9 ARSN-253: Speed up AWS URI encode function 2022-08-05 10:05:41 +00:00
Jonathan Gramain 981c9c1a23 Merge remote-tracking branch 'origin/feature/ARSN-256-supportTaggingAndAclEvents' into w/8.1/feature/ARSN-256-supportTaggingAndAclEvents 2022-08-04 17:00:45 -07:00
Jonathan Gramain 71de409ee9 feature: ARSN-256 support tagging and ACL events
Add to the list of supported event types for bucket notification
purpose, the tagging and ACL-related events that can be set in bucket
notification

Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html#supported-notification-event-types
2022-08-04 16:57:23 -07:00
KillianG 806f988334
Merge remote-tracking branch 'origin/bugfix/ARSN-253-issue-with-special-unicode-chars' into w/8.1/bugfix/ARSN-253-issue-with-special-unicode-chars 2022-08-03 10:13:53 +02:00
KillianG 976a05c3e5
Merge branch 'w/8.1/bugfix/ARSN-253-issue-with-special-unicode-chars' of github.com:scality/arsenal into w/8.1/bugfix/ARSN-253-issue-with-special-unicode-chars 2022-08-03 10:03:35 +02:00
KillianG 46c24c5cc3
fixup! bugfix/ARSN-253: adding test and better handling of all the possible cases 2022-08-03 10:01:28 +02:00
Killian Gardahaut c5004cb521 Merge remote-tracking branch 'origin/bugfix/ARSN-253-issue-with-special-unicode-chars' into w/8.1/bugfix/ARSN-253-issue-with-special-unicode-chars 2022-08-02 12:42:30 +00:00
KillianG bc9cfb0b6d
ARSN-254: Fix constness problem 2022-08-02 13:14:56 +02:00
KillianG 4b6e342ff8
Merge remote-tracking branch 'origin/bugfix/ARSN-253-issue-with-special-unicode-chars' into w/8.1/bugfix/ARSN-253-issue-with-special-unicode-chars 2022-08-02 13:09:01 +02:00
Killian Gardahaut d48d4d0c18 bugfix/ARSN-253: adding test and better handling of all the possible cases 2022-08-02 08:43:54 +00:00
Killian Gardahaut 5a32c8eca0 bugfix/ARSN-253:
fixing the problem with Unicode special chars by URI-encoding them.
The problem was that our URI encode function was not working properly for special chars
2022-08-01 12:55:35 +00:00
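
A minimal sketch of an AWS-style URI encoder that handles multi-byte Unicode correctly (an illustration, not the fixed Arsenal function):

```ts
function awsUriEncode(input: string, encodeSlash = true): string {
    let out = '';
    for (const ch of input) { // for..of iterates code points, so surrogate
                              // pairs (e.g. emoji) stay intact
        if (/^[A-Za-z0-9._~-]$/.test(ch)) {
            out += ch;                       // unreserved: copy as-is
        } else if (ch === '/') {
            out += encodeSlash ? '%2F' : ch; // slash handling is caller's choice
        } else {
            for (const byte of Buffer.from(ch, 'utf8')) {
                out += '%' + byte.toString(16).toUpperCase().padStart(2, '0');
            }
        }
    }
    return out;
}

console.log(awsUriEncode('a b/é')); // "a%20b%2F%C3%A9"
```
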
Kerkesni 480f5a4427
bugfix: ARSN-251 bump arsenal to 8.1.64 2022-07-22 15:15:22 +02:00
bert-e 852ae9bd0f Merge branch 'bugfix/ARSN-251-fix-azure-mpuUtils-import' into tmp/octopus/w/8.1/bugfix/ARSN-251-fix-azure-mpuUtils-import 2022-07-22 13:13:02 +00:00
Kerkesni 6c132bca90
bugfix: ARSN-251 fix azure mpuUtils import 2022-07-22 15:07:20 +02:00
Taylor McKinnon 3d77540c47 Merge remote-tracking branch 'origin/bugfix/ARSN-250/fix_getByteRangeFromSpec_edgecase' into w/8.1/bugfix/ARSN-250/fix_getByteRangeFromSpec_edgecase 2022-07-21 11:45:24 -07:00
Taylor McKinnon 3882ecf1a0 bf(ARSN-250): Fix getByteRangeFromSpec when range is 0-0 2022-07-21 11:42:16 -07:00
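
A sketch of the 0-0 edge case (illustrative signature, not the real getByteRangeFromSpec): a start offset of 0 is falsy in JavaScript, so a truthiness check wrongly rejects the valid one-byte range bytes=0-0.

```ts
function byteRange(start: number | undefined, end: number | undefined,
                   size: number): [number, number] | null {
    // buggy variant: `if (!start)` also rejected start === 0
    if (start === undefined || size === 0) return null;
    const last = Math.min(end ?? size - 1, size - 1);
    return start <= last ? [start, last] : null;
}

console.log(byteRange(0, 0, 10)); // [0, 0] — the first byte, not an error
```
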
Taylor McKinnon 4f0506cf31 Merge remote-tracking branch 'origin/improvement/ARSN-248/release_7_10_28' into w/8.1/improvement/ARSN-248/release_7_10_28 2022-07-20 14:18:01 -07:00
Taylor McKinnon acf38cc010 impr(ARSN-248): Release 7.10.28 2022-07-20 14:11:56 -07:00
Nicolas Humbert d92a91f076 bump package version 2022-07-19 08:52:56 +02:00
Nicolas Humbert 28779db602 bugfix/ARSN-247 data.delete 404 errors not handled properly 2022-07-19 08:40:02 +02:00
Alexander Chan 8db16c5532 ARSN-246: fix non-current transition rule comparison
Fix an issue in which a non-current transition rule is compared to a
transition object
2022-07-12 16:55:26 -07:00
Jordi Bertran de Balanda 33439ec215 Merge remote-tracking branch 'origin/improvement/ARSN-245-release-7.10.27' into w/8.1/improvement/ARSN-245-release-7.10.27 2022-07-12 16:12:19 +02:00
Jordi Bertran de Balanda 785b824b69 ARSN-245 - release 7.10.27 2022-07-11 18:17:45 +02:00
bert-e 9873c0f112 Merge branch 'bugfix/ARSN-244-missing-ismasterkey-export' into tmp/octopus/w/8.1/bugfix/ARSN-244-missing-ismasterkey-export 2022-07-11 16:05:28 +00:00
Jordi Bertran de Balanda 63212e2db3 ARSN-244 - export isMasterKey in versioning 2022-07-11 16:59:29 +02:00
Nicolas Humbert 725a492c2c ARSN-243 bump 8.1.60 2022-07-11 11:51:26 +02:00
Nicolas Humbert e446e3e132 ARSN-242 Fix non-current version transition 2022-07-09 11:46:19 +02:00
bert-e 25c6b34a1e Merge branch 'improvement/ARSN-240/transition' into q/8.1 2022-07-08 17:54:09 +00:00
Jordi Bertran de Balanda 721d7ede93 Merge remote-tracking branch 'origin/improvement/ARSN-241-release-arsenal-7.10.26' into w/8.1/improvement/ARSN-241-release-arsenal-7.10.26 2022-07-08 15:13:10 +02:00
Jordi Bertran de Balanda 3179d1c620 ARSN-241 - release arsenal 7.10.26 2022-07-08 15:07:38 +02:00
Nicolas Humbert fbbba32d69 Introduce x-amz-scal-transition-in-progress object md 2022-07-08 12:47:30 +02:00
Jordi Bertran de Balanda 56c1ba5c21 ARSN-239 - release arsenal 8.1.59 2022-07-08 11:02:52 +02:00
Will Toozs 73431094a3
Merge remote-tracking branch 'origin/bugfix/ARSN-238' into w/8.1/bugfix/ARSN-238 2022-07-08 09:58:02 +02:00
Will Toozs aed1d8419b
ARSN-238: add documentation on listing process 2022-07-08 09:49:32 +02:00
Will Toozs c3cb0aa514
ARSN-238: ignore phd keys with no versions 2022-07-08 09:49:32 +02:00
bert-e 5919d20fa4 Merge branch 'w/8.1/improvement/ARSN-234' into tmp/octopus/q/8.1 2022-07-06 17:18:25 +00:00
Nicolas Humbert 56665069c1 ARSN-237 bump to 8.1.58 2022-07-05 20:14:07 +02:00
Nicolas Humbert 61fe54bd73 ARSN-236 Put bucket replication to dmf is not supported 2022-07-05 15:42:52 +02:00
Francois Ferrand e227d9d5ca
Merge remote-tracking branch 'origin/improvement/ARSN-234' into w/8.1/improvement/ARSN-234 2022-07-01 18:24:06 +02:00
Francois Ferrand a206b5f95e
Remove check with empty bucket name
This test is not relevant, since a bucket cannot have an empty name,
and there is now a check in the AWS SDK which rejects the request directly.

Issue: ARSN-234
2022-07-01 18:18:05 +02:00
Francois Ferrand 9b8f9f8afd
Bump aws-sdk to 2.1005+
Use the same spec as other packages (utapi, vault...), and allow automatic
version bump (dependabot).

Issue: ARSN-234
2022-06-30 15:13:09 +02:00
Francois Ferrand cdcc44d272
Merge remote-tracking branch 'origin/improvement/ARSN-233' into w/8.1/improvement/ARSN-233 2022-06-29 12:02:25 +02:00
Francois Ferrand 066be20a9d
Bump azure-storage to 2.10.7
Issue: ARSN-233
2022-06-29 11:45:14 +02:00
Xin LI 5acef6895f Merge remote-tracking branch 'origin/improvement/ARSN-225-add-User-Tag-actions' into w/8.1/improvement/ARSN-225-add-User-Tag-actions 2022-06-20 18:22:20 +02:00
Xin LI 6e3386f693 improvement: ARSN-225- correct UntagUser action name 2022-06-20 12:17:49 +02:00
Xin LI 2c630848ee improvement: ARSN-225-bump version 2022-06-17 12:19:20 +02:00
williamlardier f7d360fe0b
ARSN-227: bump package version and improve tags validation 2022-06-16 19:18:53 +02:00
williamlardier 0a61b43252
ARSN-227: refining type and validation 2022-06-16 19:18:52 +02:00
williamlardier c014e630be
ARSN-227: introduce BucketTag type and improve tag checking 2022-06-16 19:18:52 +02:00
williamlardier a747d5feda
ARSN-227: add unit tests for bucket tags 2022-06-16 19:18:51 +02:00
KillianG 765857071a
ARSN-227: update bucket info model 2022-06-16 19:18:51 +02:00
KillianG 91b39da7e5
ARSN-227: support bucket tags in Bucket Info 2022-06-16 19:18:50 +02:00
williamlardier 2cc6ebe9b4
ARSN-227: Add NoSuchTag error 2022-06-16 19:18:50 +02:00
Xin LI 5634e1bb1f improvement: ARSN-225-add User Tag actionMaps 2022-06-16 10:57:56 +02:00
williamlardier 7887d22d0d
ARSN-232: bump arsenal 2022-06-15 17:25:11 +02:00
williamlardier 2f142aea7f
ARSN-232: add missing permissions for Version 2022-06-15 17:24:51 +02:00
williamlardier 26a046c9b2
ARSN-224: bump package.json to 8.1.54 2022-06-10 14:15:02 +02:00
bert-e ab23d59daf Merge branch 'bugfix/ARSN-224-fix-models-imports' into tmp/octopus/w/8.1/bugfix/ARSN-224-fix-models-imports 2022-06-10 12:00:50 +00:00
williamlardier b744385584
ARSN-224: fix default value for the filter of bucket notif config 2022-06-10 14:00:34 +02:00
bert-e 6950df200a Merge branch 'bugfix/ARSN-224-fix-models-imports' into tmp/octopus/w/8.1/bugfix/ARSN-224-fix-models-imports 2022-06-10 10:20:14 +00:00
williamlardier d407cd702b
ARSN-224: fix missing default for models imports 2022-06-10 12:19:15 +02:00
williamlardier 3265d162a7
ARSN-223: bump package.json version 2022-06-10 11:21:31 +02:00
bert-e 67200d80ad Merge branch 'bugfix/ARSN-223-fix-wgm-default-import' into tmp/octopus/w/8.1/bugfix/ARSN-223-fix-wgm-default-import 2022-06-10 09:20:40 +00:00
williamlardier 20a071fba9
ARSN-223: fix file imports with default 2022-06-10 11:19:52 +02:00
bert-e aa2992cd9f Merge branches 'w/8.1/feature/ARSN-209-type-check-models' and 'q/1920/7.10/feature/ARSN-209-type-check-models' into tmp/octopus/q/8.1 2022-06-10 08:09:10 +00:00
bert-e f897dee3c5 Merge branch 'feature/ARSN-209-type-check-models' into q/7.10 2022-06-10 08:09:09 +00:00
williamlardier 0e2071ed3b
ARSN-221: bump package.json version to 8.1.52 2022-06-09 11:51:24 +02:00
williamlardier ad579b2bd2
Bump SproxydClient version in package.json
Integrates the Node16 bugfix of SproxydClient
in Artesca.
2022-06-09 11:49:16 +02:00
Guillaume Hivert 139da904a7 Merge remote-tracking branch 'origin/feature/ARSN-209-type-check-models' into w/8.1/feature/ARSN-209-type-check-models 2022-06-09 10:15:31 +02:00
Guillaume Hivert e8851b40c0 Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-209-type-check-models 2022-06-09 10:15:21 +02:00
Guillaume Hivert 536f36df4e ARSN-209 Fix JSDoc as asked in PR 2022-06-09 10:04:02 +02:00
Naren cd9456b510 bf: ARSN-220 export isMasterKey in versioning module 2022-06-08 17:13:17 -07:00
Alexander Chan 15f07538d8 ARSN-218: enable lifecycle noncurrent version transition 2022-05-28 01:26:49 -07:00
Guillaume Hivert e95d07af12 Merge remote-tracking branch 'origin/feature/ARSN-184-type-check-s3routes' into w/8.1/feature/ARSN-184-type-check-s3routes 2022-05-25 11:58:41 +02:00
Guillaume Hivert 571128efb1 Fix TODOs 2022-05-25 11:57:13 +02:00
Guillaume Hivert f1478cbc66 Fix TODOs 2022-05-25 11:56:45 +02:00
Guillaume Hivert b21f7f3440 Fix TODOs 2022-05-25 11:55:09 +02:00
Guillaume Hivert ca2d23710f Merge remote-tracking branch 'origin/feature/ARSN-184-type-check-s3routes' into w/8.1/feature/ARSN-184-type-check-s3routes 2022-05-25 11:28:53 +02:00
Guillaume Hivert 310fd30266 Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-184-type-check-s3routes 2022-05-25 11:28:44 +02:00
Guillaume Hivert 75c5c855d9 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-25 11:27:11 +02:00
Guillaume Hivert 8743e9c3ac ARSN-209 Fix imports/exports in tests 2022-05-20 18:08:57 +02:00
bert-e b2af7c0aea Merge branch 'feature/ARSN-209-type-check-models' into tmp/octopus/w/8.1/feature/ARSN-209-type-check-models 2022-05-20 16:05:39 +00:00
Guillaume Hivert 43d466e2fe ARSN-209 Fix import due to rebase of development/7.10 2022-05-20 18:05:30 +02:00
bert-e 58c24376aa Merge branch 'feature/ARSN-209-type-check-models' into tmp/octopus/w/8.1/feature/ARSN-209-type-check-models 2022-05-20 16:02:41 +00:00
Guillaume Hivert efa8c8e611 ARSN-209 Fix linter error in tests 2022-05-20 18:02:32 +02:00
Guillaume Hivert 62c13c1eed ARSN-209 Fix everything in 8.1 2022-05-20 18:00:57 +02:00
Guillaume Hivert ee81fa5829 Merge remote-tracking branch 'origin/feature/ARSN-209-type-check-models' into w/8.1/feature/ARSN-209-type-check-models 2022-05-20 16:57:12 +02:00
Guillaume Hivert 820ad4f8af ARSN-209 Fix imports/exports of models 2022-05-20 16:23:24 +02:00
Guillaume Hivert 34eeecf6de ARSN-209 Type check BucketInfo 2022-05-20 16:23:24 +02:00
Guillaume Hivert 050f5ed002 ARSN-209 Type check NotificationConfiguration 2022-05-20 16:23:20 +02:00
Guillaume Hivert 2fba338639 ARSN-209 Type check LifecycleConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert 950ac8e19b ARSN-209 Type check ObjectMD 2022-05-20 16:20:55 +02:00
Guillaume Hivert 61929bb91a ARSN-209 Type check ReplicationConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert 9175148bd1 ARSN-209 Type check WebsiteConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert 5f08ea9310 ARSN-209 Type check ObjectMDLocation 2022-05-20 16:20:55 +02:00
Guillaume Hivert 707bf795a9 ARSN-209 Type check ObjectLockConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert fcf64798dc ARSN-209 Type check LifecycleRules 2022-05-20 16:20:55 +02:00
Guillaume Hivert 9b607be633 ARSN-209 Type check BucketPolicy 2022-05-20 16:20:55 +02:00
Guillaume Hivert 01a8992cec ARSN-209 Type check BackendInfo 2022-05-20 16:20:55 +02:00
Guillaume Hivert 301541223d ARSN-209 Type check ARN 2022-05-20 16:20:55 +02:00
Guillaume Hivert 4f58a4b2f3 ARSN-210 Restore correct constants in 8.2 to 7.10 backport from ARSN-128 2022-05-20 16:20:55 +02:00
Guillaume Hivert 6f3babd223 ARSN-209 Rename all models to .ts 2022-05-20 16:20:55 +02:00
bert-e d7df1df2b6 Merge branch 'bugfix/ARSN-212-remove-assert-in-decoder' into tmp/octopus/w/8.1/bugfix/ARSN-212-remove-assert-in-decoder 2022-05-20 00:56:02 +00:00
Artem Bakalov 3f26b432b7 ARSN-212 remove assert in decoder in favor of returning an error. 2022-05-19 16:27:05 -07:00
bert-e f59b1b5e07 Merge branches 'w/8.1/feature/ARSN-201-type-check-versioning' and 'q/1894/7.10/feature/ARSN-201-type-check-versioning' into tmp/octopus/q/8.1 2022-05-19 08:51:50 +00:00
bert-e b684bdbaa9 Merge branch 'feature/ARSN-201-type-check-versioning' into q/7.10 2022-05-19 08:51:50 +00:00
Guillaume Hivert a3418603d0 Merge remote-tracking branch 'origin/feature/ARSN-206-type-check-jsutil' into w/8.1/feature/ARSN-206-type-check-jsutil 2022-05-18 11:35:20 +02:00
Guillaume Hivert 947ccd90d9 Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-206-type-check-jsutil 2022-05-18 11:35:11 +02:00
Guillaume Hivert 23113616d9 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 11:34:01 +02:00
Guillaume Hivert f460ffdb21 Merge remote-tracking branch 'origin/feature/ARSN-207-type-check-string-hash' into w/8.1/feature/ARSN-207-type-check-string-hash 2022-05-18 11:24:56 +02:00
Guillaume Hivert dfa49c79c5 Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-207-type-check-string-hash 2022-05-18 11:24:41 +02:00
Guillaume Hivert ba94dc7e86 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 11:23:08 +02:00
Guillaume Hivert e582882883 Merge remote-tracking branch 'origin/feature/ARSN-208-type-check-db' into w/8.1/feature/ARSN-208-type-check-db 2022-05-18 11:11:30 +02:00
Guillaume Hivert dd61c1abbe Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-208-type-check-db 2022-05-18 11:10:56 +02:00
Guillaume Hivert 5e8f4f2a30 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 11:09:53 +02:00
Guillaume Hivert a15f8a56e3 Merge remote-tracking branch 'origin/feature/ARSN-201-type-check-versioning' into w/8.1/feature/ARSN-201-type-check-versioning 2022-05-18 11:00:22 +02:00
Guillaume Hivert 43e82f7f33 Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-201-type-check-versioning 2022-05-18 11:00:10 +02:00
Guillaume Hivert f54feec57f Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 10:59:05 +02:00
bert-e d7625ced17 Merge branches 'w/8.1/feature/ARSN-205-type-check-error-utils' and 'q/1901/7.10/feature/ARSN-205-type-check-error-utils' into tmp/octopus/q/8.1 2022-05-17 15:05:31 +00:00
bert-e bbe5f293f4 Merge branch 'feature/ARSN-205-type-check-error-utils' into q/7.10 2022-05-17 15:05:31 +00:00
Guillaume Hivert a2c1989a5d Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-205-type-check-error-utils 2022-05-17 16:58:11 +02:00
bert-e 8ad1cceeb8 Merge branch 'feature/ARSN-204-type-check-shuffle' into q/7.10 2022-05-17 08:19:19 +00:00
bert-e 24755c8472 Merge branches 'w/8.1/feature/ARSN-204-type-check-shuffle' and 'q/1899/7.10/feature/ARSN-204-type-check-shuffle' into tmp/octopus/q/8.1 2022-05-17 08:19:19 +00:00
bert-e bd970c65ea Merge branch 'bugfix/ARSN-191-getting-wrong-notification-type-when-master-version-deleted' into q/7.10 2022-05-13 14:29:55 +00:00
bert-e fb39a4095e Merge branches 'w/8.1/bugfix/ARSN-191-getting-wrong-notification-type-when-master-version-deleted' and 'q/1866/7.10/bugfix/ARSN-191-getting-wrong-notification-type-when-master-version-deleted' into tmp/octopus/q/8.1 2022-05-13 14:29:55 +00:00
bert-e 32dfba2f89 Merge branch 'bugfix/ARSN-191-getting-wrong-notification-type-when-master-version-deleted' into tmp/octopus/w/8.1/bugfix/ARSN-191-getting-wrong-notification-type-when-master-version-deleted 2022-05-13 14:06:42 +00:00
Kerkesni 43a8772529
bugfix: ARSN-191 fix wrong notification type when master version is deleted 2022-05-13 16:06:05 +02:00
Guillaume Hivert a2ca197bd8 Merge remote-tracking branch 'origin/feature/ARSN-208-type-check-db' into w/8.1/feature/ARSN-208-type-check-db 2022-05-13 15:18:10 +02:00
Guillaume Hivert fc05956983 ARSN-208 Type check DB 2022-05-13 15:16:14 +02:00
Xin LI 3ed46f2d16 improvement: ARSN-180 bump arsenal to 8.1.48 2022-05-13 14:48:51 +02:00
williamlardier 5c936c94ee
ARSN-177: better date check 2022-05-13 14:33:13 +02:00
Xin LI f87101eef6
improvement: ARSN-197 improve code structure 2022-05-13 14:00:38 +02:00
Xin LI 14f86282b6
improvement: ARSN-197 update jsdoc 2022-05-13 13:59:37 +02:00
Xin LI f9dba52d38
improvement: ARSN-197 add index 2022-05-13 13:59:36 +02:00
Yutaka Oishi 6714aed351
improvement: ARSN-197 implement object restore request xml parser 2022-05-13 13:59:36 +02:00
williamlardier 99f96dd377
ARSN-177: accept date as a valid date string after it is stored in the db 2022-05-13 13:59:36 +02:00
williamlardier ae08d89d7d
ARSN-177: set to undefined to clear MD 2022-05-13 13:59:35 +02:00
williamlardier c48e2948f0
ARSN-177: expose new model 2022-05-13 13:59:35 +02:00
williamlardier fc942febca
ARSN-177: better use of undefined and remove unused md field 2022-05-13 13:59:35 +02:00
williamlardier a4fe998c34
ARSN-177: complete unit tests 2022-05-13 13:59:34 +02:00
williamlardier 1460e94488
ARSN-177: return true in validator 2022-05-13 13:59:34 +02:00
williamlardier dcc7117d88
ARSN-177: add tests for new restore field 2022-05-13 13:59:33 +02:00
williamlardier 99cee367aa
ARSN-177: better isValid for class 2022-05-13 13:59:33 +02:00
williamlardier ad5a4c152d
ARSN-177: Introduce archive field in object metadata 2022-05-13 13:59:30 +02:00
bert-e b608c043f5 Merge branch 'feature/ARSN-207-type-check-string-hash' into tmp/octopus/w/8.1/feature/ARSN-207-type-check-string-hash 2022-05-13 11:57:31 +00:00
Guillaume Hivert 8ec4a11a4b ARSN-207 Fix tests and export 2022-05-13 13:57:21 +02:00
bert-e 079c09e1ec Merge branch 'feature/ARSN-207-type-check-string-hash' into tmp/octopus/w/8.1/feature/ARSN-207-type-check-string-hash 2022-05-13 11:55:55 +00:00
Guillaume Hivert c9ff3cd60e ARSN-207 Type check stringHash 2022-05-13 13:55:33 +02:00
bert-e 75f07440ef Merge branch 'feature/ARSN-178-introduce-x-amz-restore-header' into q/8.1 2022-05-13 11:50:07 +00:00
bert-e 3a6bac1158 Merge branch 'feature/ARSN-206-type-check-jsutil' into tmp/octopus/w/8.1/feature/ARSN-206-type-check-jsutil 2022-05-12 15:45:01 +00:00
Guillaume Hivert a15d4cd130 ARSN-206 Add proper index export 2022-05-12 17:44:52 +02:00
bert-e f2d119326a Merge branch 'feature/ARSN-206-type-check-jsutil' into tmp/octopus/w/8.1/feature/ARSN-206-type-check-jsutil 2022-05-12 15:44:27 +00:00
Guillaume Hivert 45ba80ec23 ARSN-206 Type check jsutil 2022-05-12 17:44:07 +02:00
Guillaume Hivert 2a019f3788 ARSN-204 Export errorUtils 2022-05-12 17:26:38 +02:00
bert-e 5e22900c0f Merge branch 'feature/ARSN-205-type-check-error-utils' into tmp/octopus/w/8.1/feature/ARSN-205-type-check-error-utils 2022-05-12 15:25:33 +00:00
Guillaume Hivert 32cff324d8 ARSN-205 Type check errorUtils 2022-05-12 17:24:59 +02:00
Guillaume Hivert e62ed598e8 Merge remote-tracking branch 'origin/feature/ARSN-204-type-check-shuffle' into w/8.1/feature/ARSN-204-type-check-shuffle 2022-05-12 17:20:51 +02:00
Guillaume Hivert cda5d7cfed ARSN-204 Refacto shuffle 2022-05-12 17:19:37 +02:00
bert-e a217ad58e8 Merge branches 'w/8.1/feature/ARSN-186-type-check-clustering' and 'q/1860/7.10/feature/ARSN-186-type-check-clustering' into tmp/octopus/q/8.1 2022-05-12 14:05:31 +00:00
bert-e e46b90cbad Merge branch 'feature/ARSN-186-type-check-clustering' into q/7.10 2022-05-12 14:05:30 +00:00
bert-e 10cf10daa4 Merge branch 'feature/ARSN-185-type-check-patches' into q/8.1 2022-05-12 14:01:57 +00:00
Guillaume Hivert 6ec2f99a91 Merge remote-tracking branch 'origin/development/8.1' into HEAD 2022-05-12 15:53:39 +02:00
bert-e dfd8f20bf2 Merge branch 'q/1858/7.10/feature/ARSN-183-type-check-stream' into tmp/normal/q/8.1 2022-05-12 13:52:32 +00:00
bert-e 435f9f7f3c Merge branch 'feature/ARSN-183-type-check-stream' into q/7.10 2022-05-12 13:52:31 +00:00
Guillaume Hivert fc17ab4299 ARSN-185 Add literal union 2022-05-12 15:51:42 +02:00
Guillaume Hivert 44f398b01f Merge remote-tracking branch 'origin/feature/ARSN-183-type-check-stream' into w/8.1/feature/ARSN-183-type-check-stream 2022-05-12 15:45:01 +02:00
Guillaume Hivert dc32d78b0f Merge remote-tracking branch 'origin/development/8.1' into w/8.1/feature/ARSN-183-type-check-stream 2022-05-12 15:43:56 +02:00
Guillaume Hivert 9f1ea09ee6 ARSN-183 Switch index.ts 2022-05-12 15:42:15 +02:00
Guillaume Hivert 073d752ad8 Merge remote-tracking branch 'origin/bugfix/ARSN-97-stop-ignoring-ts-errors-in-yarn-install' into w/8.1/bugfix/ARSN-97-stop-ignoring-ts-errors-in-yarn-install 2022-05-12 15:25:26 +02:00
Guillaume Hivert 37c325f033 ARSN-97 Stop ignoring build errors 2022-05-12 15:20:34 +02:00
bert-e 3454e934f5 Merge branch 'feature/ARSN-201-type-check-versioning' into tmp/octopus/w/8.1/feature/ARSN-201-type-check-versioning 2022-05-12 13:18:29 +00:00
Guillaume Hivert 76bffb2a23 ARSN-201 Fix tests 2022-05-12 15:16:23 +02:00
Guillaume Hivert bd498d414b ARSN-201 Export in index 2022-05-12 15:16:19 +02:00
Guillaume Hivert f98c65ffb4 ARSN-201 Type check VersioningRequestProcessor 2022-05-12 15:16:00 +02:00
Guillaume Hivert eae29c53dd ARSN-201 Type check constants 2022-05-12 15:15:52 +02:00
Guillaume Hivert 8d17b69eb8 ARSN-201 Type check WriteGatheringManager 2022-05-12 15:15:42 +02:00
Guillaume Hivert 938d64f48e ARSN-201 Type check WriteCache 2022-05-12 15:15:28 +02:00
Guillaume Hivert 485ca38867 ARSN-201 Type check VersionID 2022-05-12 15:14:48 +02:00
Guillaume Hivert 355c540510 ARSN-201 Type check Version 2022-05-12 15:14:42 +02:00
Jordi Bertran de Balanda 399fdaaed0 Merge remote-tracking branch 'origin/improvement/ARSN-203-release-7.10.24' into w/8.1/improvement/ARSN-203-release-7.10.24 2022-05-12 15:11:07 +02:00
Jordi Bertran de Balanda d97a218170 ARSN-203 - release 7.10.24 2022-05-12 15:09:45 +02:00
Jordi Bertran de Balanda 5084c8f971 Merge remote-tracking branch 'origin/bugfix/ARSN-199-bugfix-https-proxy-agent' into w/8.1/bugfix/ARSN-199-bugfix-https-proxy-agent 2022-05-12 11:50:33 +02:00
Jordi Bertran de Balanda 82c3330321 ARSN-199 - add https-proxy-agent dependency 2022-05-12 11:28:18 +02:00
williamlardier 3388de6fb6
ARSN-178: set to undefined to clear MD 2022-05-12 09:39:28 +02:00
Guillaume Hivert db70743439 ARSN-201 Rename all files to TS 2022-05-11 15:56:50 +02:00
Alexander Chan 86e9d4a356 ARSN-200: fix probe server readiness path 2022-05-10 14:26:05 -07:00
williamlardier a0010efbdd
ARSN-178: expose new model 2022-05-10 11:09:30 +02:00
Nicolas Humbert 8eb7efd58a ARSN-187 Introduce s3:PutObjectVersion action 2022-05-09 10:47:29 -07:00
williamlardier 25ae7e443b
ARSN-178: remove unused field in test 2022-05-09 16:45:14 +02:00
williamlardier 4afa1ed78d
ARSN-178: better use of undefined and remove unused md field 2022-05-09 16:45:14 +02:00
williamlardier 706dfddf5f
ARSN-178: complete unit tests 2022-05-09 16:45:13 +02:00
williamlardier 4cce306a12
ARSN-178: return true in validator 2022-05-09 16:45:13 +02:00
williamlardier f3bf6f2615
ARSN-178: better isValid for AmzRestore class 2022-05-09 16:45:13 +02:00
williamlardier bbe51b2e5e
ARSN-178: add tests for AmzRestore header 2022-05-09 16:45:12 +02:00
williamlardier 3cd06256d6
ARSN-178: add model in ObjectMD 2022-05-09 16:45:12 +02:00
Yutaka Oishi 6e42216549
ARSN-178: Add AmzRestore header and model 2022-05-09 16:45:11 +02:00
williamlardier e37712e94f
ARSN-195: bump arsenal 2022-05-09 16:28:22 +02:00
williamlardier ac30d29509
ARSN-195: add missing exports for 8.x 2022-05-09 16:25:52 +02:00
Xin LI 1f235d569d improvement: release 8.1.46 2022-05-09 15:32:39 +02:00
williamlardier 320713a764
Merge remote-tracking branch 'origin/bugfix/ARSN-195-fix-ts-migration-bugs' into w/8.1/bugfix/ARSN-195-fix-ts-migration-bugs 2022-05-09 14:59:31 +02:00
williamlardier 4594578919
ARSN-195: add unit test for getMetaHeaders 2022-05-09 14:57:52 +02:00
williamlardier bc0cb0a8fe
ARSN-195: fix arsenal bugs and missing default in require 2022-05-09 14:57:51 +02:00
williamlardier 9e0cee849c
ARSN-195: fix index for s3middleware 2022-05-09 14:57:48 +02:00
Artem Bakalov fbf686feab ARSN-194 disable short version id by default 2022-05-06 20:44:23 +00:00
Guillaume Hivert 4b795a245c ARSN-184 Fix tests 2022-05-06 16:03:36 +02:00
Guillaume Hivert 983d59d565 ARSN-184 Fix responseBody test 2022-05-06 15:58:04 +02:00
Guillaume Hivert fd7f0a1a91 ARSN-184 Fix merge 2022-05-06 15:41:42 +02:00
bert-e 459fd99316 Merge branches 'development/8.1' and 'feature/ARSN-184-type-check-s3routes' into tmp/octopus/w/8.1/feature/ARSN-184-type-check-s3routes 2022-05-06 13:21:27 +00:00
Guillaume Hivert d6e4bca3ed ARSN-184 Remove useless signatures 2022-05-06 15:21:17 +02:00
Guillaume Hivert 235b2ac6d4 Merge remote-tracking branch 'origin/feature/ARSN-184-type-check-s3routes' into w/8.1/feature/ARSN-184-type-check-s3routes 2022-05-06 15:19:05 +02:00
bert-e f49006a64e Merge branch 'feature/ARSN-171-type-s3-middlewares' into q/7.10 2022-05-06 12:50:22 +00:00
bert-e 8025ce08fe Merge branches 'w/8.1/feature/ARSN-171-type-s3-middlewares' and 'q/1844/7.10/feature/ARSN-171-type-s3-middlewares' into tmp/octopus/q/8.1 2022-05-06 12:50:22 +00:00
Guillaume Hivert 75811ba553 ARSN-184 Exports 2022-05-06 14:45:44 +02:00
Guillaume Hivert 26de19b22b ARSN-184 Type check routeWebsite 2022-05-06 14:26:40 +02:00
Guillaume Hivert 72bdd130f0 ARSN-184 Type check routePUT 2022-05-06 14:26:40 +02:00
Guillaume Hivert 4131732b74 ARSN-184 Type check routePOST 2022-05-06 14:26:40 +02:00
Guillaume Hivert 7cecbe27be ARSN-184 Type check routeOPTIONS 2022-05-06 14:26:40 +02:00
Guillaume Hivert 3fab05071d ARSN-184 Type check routeHEAD 2022-05-06 14:26:40 +02:00
Guillaume Hivert a98f2cede5 ARSN-184 Type check routeGET 2022-05-06 14:26:40 +02:00
Guillaume Hivert 283a0863c2 ARSN-184 Type check routeDELETE 2022-05-06 14:26:40 +02:00
Guillaume Hivert 18b089fc2d ARSN-184 Type check routes 2022-05-06 14:26:40 +02:00
Guillaume Hivert 60139abb10 ARSN-184 Type check routesUtils 2022-05-06 14:26:40 +02:00
Guillaume Hivert 2cc1a9886f ARSN-184 WIP Routes 2022-05-06 14:26:40 +02:00
Guillaume Hivert 1c7122b7e4 ARSN-184 Type check routesUtils 2022-05-06 14:26:40 +02:00
Guillaume Hivert 4eba3ca6a0 ARSN-184 Type check routes 2022-05-06 14:26:40 +02:00
Guillaume Hivert 670d57a9db ARSN-184 Fix StatsClient 2022-05-06 14:26:40 +02:00
Guillaume Hivert 8784113544 ARSN-184 Move all .js to .ts files 2022-05-06 14:26:40 +02:00
bert-e bffb00266f Merge branch 'dependabot/npm_and_yarn/ajv-6.12.3' into q/8.1 2022-05-05 17:00:41 +00:00
bert-e a6cd3a67e0 Merge branch 'dependabot/npm_and_yarn/node-forge-1.3.0' into q/8.1 2022-05-05 17:00:37 +00:00
dependabot[bot] 18605a9546
Bump ajv from 6.12.2 to 6.12.3
Bumps [ajv](https://github.com/ajv-validator/ajv) from 6.12.2 to 6.12.3.
- [Release notes](https://github.com/ajv-validator/ajv/releases)
- [Commits](https://github.com/ajv-validator/ajv/compare/v6.12.2...v6.12.3)

---
updated-dependencies:
- dependency-name: ajv
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-05 15:44:27 +00:00
dependabot[bot] 74d7fe5e68
Bump node-forge from 0.7.6 to 1.3.0
Bumps [node-forge](https://github.com/digitalbazaar/forge) from 0.7.6 to 1.3.0.
- [Release notes](https://github.com/digitalbazaar/forge/releases)
- [Changelog](https://github.com/digitalbazaar/forge/blob/main/CHANGELOG.md)
- [Commits](https://github.com/digitalbazaar/forge/compare/0.7.6...v1.3.0)

---
updated-dependencies:
- dependency-name: node-forge
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-05 15:42:41 +00:00
dependabot[bot] e707cf4398
Bump async from 2.6.3 to 2.6.4
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)

---
updated-dependencies:
- dependency-name: async
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-05 14:56:17 +00:00
bert-e 47c34a4f5c Merge branch 'dependabot/npm_and_yarn/minimist-1.2.6' into q/8.1 2022-05-05 14:33:37 +00:00
bert-e 59f7e32037 Merge branch 'feature/ARSN-179-support-restore-object' into q/8.1 2022-05-05 10:29:37 +00:00
williamlardier 7f93695300
ARSN-179: add s3 action map for RestoreObject 2022-05-05 10:10:52 +02:00
dependabot[bot] 7c6f5d34b8
Bump minimist from 1.2.5 to 1.2.6
Bumps [minimist](https://github.com/substack/minimist) from 1.2.5 to 1.2.6.
- [Release notes](https://github.com/substack/minimist/releases)
- [Commits](https://github.com/substack/minimist/compare/1.2.5...1.2.6)

---
updated-dependencies:
- dependency-name: minimist
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-04 18:04:18 +00:00
Guillaume Hivert f378a85799 ARSN-185 Type Check patches/locationConstraints 2022-05-03 17:54:00 +02:00
bert-e 23ea19bcb3 Merge branch 'feature/ARSN-186-type-check-clustering' into tmp/octopus/w/8.1/feature/ARSN-186-type-check-clustering 2022-05-03 15:36:22 +00:00
Guillaume Hivert c6249cd2d5 ARSN-186 Type check Clustering 2022-05-03 17:35:46 +02:00
Guillaume Hivert 97019d3b44 ARSN-186 Move Clustering.js to Clustering.ts 2022-05-03 17:16:26 +02:00
bert-e 6da31dfd18 Merge branch 'feature/ARSN-183-type-check-stream' into tmp/octopus/w/8.1/feature/ARSN-183-type-check-stream 2022-05-03 15:14:50 +00:00
Guillaume Hivert 75b4e6328e Type check stream 2022-05-03 17:11:34 +02:00
Guillaume Hivert eb9f936e78 Move readJSONStreamObject from .js to .ts 2022-05-03 16:59:10 +02:00
Yutaka Oishi ee1e65d778
ARSN-179: add route for RestoreObject API 2022-05-03 15:14:12 +02:00
williamlardier 3534927ccf
ARSN-179: add action map for RestoreObject API 2022-05-03 15:11:29 +02:00
Guillaume Hivert 40e5100cd8 ARSN-173 Fix BackendInfo 2022-04-29 18:08:19 +02:00
Guillaume Hivert 0851aa1406 Merge remote-tracking branch 'origin/feature/ARSN-171-type-s3-middlewares' into w/8.1/feature/ARSN-171-type-s3-middlewares 2022-04-29 17:47:22 +02:00
Guillaume Hivert 5c16601657 ARSN-171 Fix tests 2022-04-29 17:05:07 +02:00
Guillaume Hivert 3ff3330f1a ARSN-171 Type check s3middleware/validateConditionalHeaders 2022-04-29 17:05:07 +02:00
Guillaume Hivert 5b02d20e4d ARSN-171 Type check s3middleware/userMetadata 2022-04-29 17:05:07 +02:00
Guillaume Hivert 867da9a3d0 ARSN-171 Type check s3middleware/tagging 2022-04-29 17:05:07 +02:00
Guillaume Hivert c9f6d35fa4 ARSN-171 Type check s3middleware/processMpuParts 2022-04-29 17:05:07 +02:00
Guillaume Hivert c79a5c2ee3 ARSN-171 Type check s3middleware/objectRetention 2022-04-29 17:05:07 +02:00
Guillaume Hivert a400beb8b9 ARSN-171 Type check s3middleware/objectLegalHold 2022-04-29 17:05:07 +02:00
Guillaume Hivert 8ce0b07e63 ARSN-171 Backport constants to 7.10 2022-04-29 17:05:07 +02:00
Guillaume Hivert a0876d3df5 ARSN-171 Type prepareStream and refactor V4Transform to export type 2022-04-29 17:05:07 +02:00
Guillaume Hivert e829fa3d3f ARSN-171 Type objectUtils 2022-04-29 17:05:07 +02:00
Guillaume Hivert da25890556 ARSN-171 Type objectLegalHold 2022-04-29 17:05:07 +02:00
Guillaume Hivert 8df0f5863a ARSN-171 Type nullStream 2022-04-29 17:05:07 +02:00
Guillaume Hivert 2d66248303 ARSN-171 Add Types for xml2js 2022-04-29 17:05:07 +02:00
Guillaume Hivert 8221852eef ARSN-171 Type LifecycleUtils and LifecycleHelpers 2022-04-29 17:05:07 +02:00
Guillaume Hivert d50e1bfd6d ARSN-171 Type LifecycleDatetime 2022-04-29 17:05:07 +02:00
Guillaume Hivert 5f453789d4 ARSN-171 Type convertToXml 2022-04-29 17:05:07 +02:00
Guillaume Hivert 7658481128 ARSN-171 Type mpuUtils 2022-04-29 17:05:07 +02:00
Guillaume Hivert 593bb31ac3 ARSN-171 Type SubStreamInterface 2022-04-29 14:51:04 +02:00
Guillaume Hivert f5e89c9660 ARSN-171 Type ResultsCollector 2022-04-29 14:51:04 +02:00
Guillaume Hivert 62db2267fc ARSN-171 Type MD5Sum 2022-04-29 14:51:04 +02:00
Guillaume Hivert f6544f7a2e ARSN-171 Move all files from JS to TS 2022-04-29 14:51:04 +02:00
256 changed files with 28959 additions and 15274 deletions

View File

@ -1 +1,6 @@
{ "extends": "scality" } {
"extends": "scality",
"parserOptions": {
"ecmaVersion": 2020
}
}

.github/workflows/codeql.yaml vendored Normal file
View File

@ -0,0 +1,25 @@
---
name: codeQL

on:
  push:
    branches: [development/*, stabilization/*, hotfix/*]
  pull_request:
    branches: [development/*, stabilization/*, hotfix/*]
  workflow_dispatch:

jobs:
  analyze:
    name: Static analysis with CodeQL
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v3
        with:
          languages: javascript, typescript
      - name: Build and analyze
        uses: github/codeql-action/analyze@v3

View File

@ -0,0 +1,16 @@
---
name: dependency review

on:
  pull_request:
    branches: [development/*, stabilization/*, hotfix/*]

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - name: 'Checkout Repository'
        uses: actions/checkout@v4
      - name: 'Dependency Review'
        uses: actions/dependency-review-action@v4

View File

@ -25,18 +25,18 @@ jobs:
         - 6379:6379
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
-      - uses: actions/setup-node@v2
+      - uses: actions/setup-node@v4
         with:
           node-version: '16'
           cache: 'yarn'
       - name: install dependencies
-        run: yarn install --frozen-lockfile --prefer-offline
+        run: yarn install --frozen-lockfile --prefer-offline --network-concurrency 1
         continue-on-error: true # TODO ARSN-97 Remove it when no errors in TS
       - name: lint yaml
         run: yarn --silent lint_yml
       - name: lint javascript
-        run: yarn --silent lint -- --max-warnings 0
+        run: yarn --silent lint --max-warnings 0
       - name: lint markdown
         run: yarn --silent lint_md
       - name: add hostname
@ -46,7 +46,9 @@ jobs:
         run: yarn --silent coverage
       - name: run functional tests
         run: yarn ft_test
-      - uses: codecov/codecov-action@v2
+      - uses: codecov/codecov-action@v4
+        with:
+          token: ${{ secrets.CODECOV_TOKEN }}
       - name: run executables tests
         run: yarn install && yarn test
         working-directory: 'lib/executables/pensieveCreds/'
@ -57,9 +59,9 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
       - name: Install NodeJS
-        uses: actions/setup-node@v2
+        uses: actions/setup-node@v4
         with:
           node-version: '16'
           cache: yarn
@ -70,7 +72,7 @@ jobs:
         run: yarn build
         continue-on-error: true # TODO ARSN-97 Remove it when no errors in TS
       - name: Upload artifacts
-        uses: scality/action-artifacts@v2
+        uses: scality/action-artifacts@v4
         with:
           url: https://artifacts.scality.net
           user: ${{ secrets.ARTIFACTS_USER }}

.swcrc Normal file
View File

@ -0,0 +1,12 @@
{
  "$schema": "https://swc.rs/schema.json",
  "jsc": {
    "parser": {
      "syntax": "typescript"
    },
    "target": "es2017"
  },
  "module": {
    "type": "commonjs"
  }
}
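
This config parses TypeScript sources and emits ES2017-level CommonJS. As a hedged illustration of how such a config gets consumed, the sketch below transpiles one file through the `@swc/core` API with the project `.swcrc` applied; the input path is hypothetical:

```typescript
// Sketch: transpile a TypeScript source through swc, honoring the .swcrc above.
import { transformFileSync } from '@swc/core';

// swcrc: true tells swc to look up and apply the nearest .swcrc file.
const output = transformFileSync('lib/index.ts', { swcrc: true });
console.log(output.code); // CommonJS, ES2017-level JavaScript
```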

View File

@ -178,3 +178,83 @@ this._serverSideEncryption.configuredMasterKeyId = configuredMasterKeyId || unde
### Usage

Used to store the user's configured KMS key id

## Model version 15

### Properties Added

```javascript
this._tags = tags || null;
```

The Tag Set of a bucket is an array of objects with Key and Value:

```javascript
[
    {
        Key: 'something',
        Value: 'some_data'
    }
]
```

### Usage

Used to store bucket tagging

## Model version 16

### Properties Added

```javascript
this._capabilities = capabilities || undefined;
```

For capacity-enabled buckets, contains the following data:

```javascript
{
    _capabilities: {
        VeeamSOSApi?: {
            SystemInfo?: {
                ProtocolVersion: String,
                ModelName: String,
                ProtocolCapabilities: {
                    CapacityInfo: Boolean,
                    UploadSessions: Boolean,
                    IAMSTS: Boolean,
                },
                APIEndpoints: {
                    IAMEndpoint: String,
                    STSEndpoint: String,
                },
                SystemRecommendations?: {
                    S3ConcurrentTaskLimit: Number,
                    S3MultiObjectDelete: Number,
                    StorageCurrentTasksLimit: Number,
                    KbBlockSize: Number,
                },
                LastModified?: String,
            },
            CapacityInfo?: {
                Capacity: Number,
                Available: Number,
                Used: Number,
                LastModified?: String,
            },
        },
    },
}
```

### Usage

Used to store bucket capabilities, such as the Veeam SOS API system and capacity information shown above

## Model version 17

### Properties Added

```javascript
this._quotaMax = quotaMax || 0;
```

### Usage

Used to store the bucket quota
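
To make the defaulting rules above concrete, here is a minimal TypeScript sketch, not Arsenal's actual BucketInfo class, showing how the three added properties fall back when omitted:

```typescript
// Illustrative sketch only -- the field names mirror the model doc above,
// but this class is hypothetical, not Arsenal's real implementation.
type Tag = { Key: string; Value: string };

class BucketMetadataSketch {
    _tags: Tag[] | null;
    _capabilities?: object;
    _quotaMax: number;

    constructor(tags?: Tag[], capabilities?: object, quotaMax?: number) {
        this._tags = tags || null;                      // model version 15
        this._capabilities = capabilities || undefined; // model version 16
        this._quotaMax = quotaMax || 0;                 // model version 17
    }
}

// A bucket created without any of the new fields keeps backward-compatible defaults:
const b = new BucketMetadataSketch();
// b._tags === null, b._capabilities === undefined, b._quotaMax === 0
```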

View File

@ -0,0 +1,27 @@
# Delimiter

The Delimiter class handles raw listings from the database with an
optional delimiter, and fills in a curated listing with "Contents" and
"CommonPrefixes" as a result.

## Expected Behavior

- only lists keys belonging to the given **prefix** (if provided)
- groups listed keys that have a common prefix ending with a delimiter
  inside CommonPrefixes
- can take a **marker** or **continuationToken** to list from a specific key
- can take a **maxKeys** parameter to limit how many keys can be returned

## State Chart

- States with grey background are *Idle* states, which are waiting for
  a new listing key
- States with blue background are *Processing* states, which are
  actively processing a new listing key passed by the filter()
  function

![Delimiter State Chart](./pics/delimiterStateChart.svg)
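
As an illustration of the contract above, the sketch below shows how a delimiter groups sorted keys into "Contents" and "CommonPrefixes". It is a simplified stand-in, not Arsenal's actual filter() state machine, and the function name is hypothetical:

```typescript
// Simplified sketch of delimiter-based grouping (not the real Delimiter class).
type ListingResult = { Contents: string[]; CommonPrefixes: string[] };

function listWithDelimiter(
    sortedKeys: string[], prefix: string, delimiter: string, maxKeys: number,
): ListingResult {
    const res: ListingResult = { Contents: [], CommonPrefixes: [] };
    for (const key of sortedKeys) {
        if (res.Contents.length + res.CommonPrefixes.length >= maxKeys) {
            break; // corresponds to FILTER_END in the state chart
        }
        if (!key.startsWith(prefix)) {
            continue; // only list keys belonging to the given prefix
        }
        const idx = key.indexOf(delimiter, prefix.length);
        if (idx === -1) {
            res.Contents.push(key); // no delimiter past the prefix: a listable key
        } else {
            const common = key.slice(0, idx + delimiter.length);
            // sorted input groups equal prefixes together, so checking the
            // last appended entry is enough to deduplicate
            if (res.CommonPrefixes[res.CommonPrefixes.length - 1] !== common) {
                res.CommonPrefixes.push(common);
            }
        }
    }
    return res;
}

// listWithDelimiter(
//     ['photos/2021/a.jpg', 'photos/2021/b.jpg', 'photos/2022/c.jpg', 'readme.md'],
//     '', '/', 1000)
// -> { Contents: ['readme.md'], CommonPrefixes: ['photos/'] }
```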

View File

@ -0,0 +1,45 @@
# DelimiterMaster

The DelimiterMaster class handles raw listings from the database of a
versioned or non-versioned bucket with an optional delimiter, and
fills in a curated listing with "Contents" and "CommonPrefixes" as a
result.

## Expected Behavior

- only lists latest versions of versioned buckets
- only lists keys belonging to the given **prefix** (if provided)
- does not list latest versions that are delete markers
- groups listed keys that have a common prefix ending with a delimiter
  inside CommonPrefixes
- can take a **marker** or **continuationToken** to list from a specific key
- can take a **maxKeys** parameter to limit how many keys can be returned
- reconciles internal PHD keys with the next version (those are
  created when a specific version that is the latest version is
  deleted)
- skips internal keys like replay keys

## State Chart

- States with grey background are *Idle* states, which are waiting for
  a new listing key
- States with blue background are *Processing* states, which are
  actively processing a new listing key passed by the filter()
  function

### Bucket Vformat=v0

![DelimiterMaster State Chart for v0 format](./pics/delimiterMasterV0StateChart.svg)

### Bucket Vformat=v1

For buckets in versioning key format **v1**, the algorithm used is the
one from [Delimiter](delimiter.md).
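
As a rough illustration of the v0 rules above, the sketch below filters a sorted key stream down to listable master keys, including delete-marker skipping and PHD reconciliation. The VID_SEP constant and the entry shape are assumptions made for this example (replay-key skipping is omitted for brevity); the real DelimiterMaster is a state machine driven by filter() calls:

```typescript
// Hypothetical sketch of v0 master-key selection (not the real DelimiterMaster).
// Assumption: in v0, version keys look like `${masterKey}${VID_SEP}${versionId}`.
const VID_SEP = '\0';

type RawEntry = { key: string; isDeleteMarker: boolean; isPHD: boolean };

function listMasterKeys(sortedEntries: RawEntry[]): string[] {
    const masters: string[] = [];
    let awaitingVersionOf: string | null = null; // set after seeing a PHD key
    for (const entry of sortedEntries) {
        const sepIdx = entry.key.indexOf(VID_SEP);
        const masterKey = sepIdx === -1 ? entry.key : entry.key.slice(0, sepIdx);
        if (entry.isPHD) {
            awaitingVersionOf = masterKey; // reconcile with the next version key
            continue;
        }
        if (sepIdx !== -1) {
            // version key: listable only if it resolves a pending PHD master
            if (awaitingVersionOf === masterKey) {
                masters.push(masterKey);
                awaitingVersionOf = null;
            }
            continue;
        }
        if (entry.isDeleteMarker) {
            continue; // latest version is a delete marker: not listed
        }
        masters.push(entry.key); // plain listable master key
    }
    return masters;
}
```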

View File

@ -0,0 +1,33 @@
# DelimiterVersions
The DelimiterVersions class handles raw listings from the database of a
versioned or non-versioned bucket with an optional delimiter, and
fills in a curated listing with "Versions" and "CommonPrefixes" as a
result.
## Expected Behavior
- lists individual distinct versions of versioned buckets
- only lists keys belonging to the given **prefix** (if provided)
- groups listed keys that have a common prefix ending with a delimiter
inside CommonPrefixes
- can take a **keyMarker** and optionally a **versionIdMarker** to
list from a specific key or version
- can take a **maxKeys** parameter to limit how many keys can be returned
- skips internal keys like replay keys
## State Chart
- States with grey background are *Idle* states, which are waiting for
a new listing key
- States with blue background are *Processing* states, which are
actively processing a new listing key passed by the filter()
function
![DelimiterVersions State Chart](./pics/delimiterVersionsStateChart.svg)
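
The keyMarker/versionIdMarker behavior above can be sketched as follows; this is a simplified illustration with hypothetical names, not the real DelimiterVersions implementation:

```typescript
// Sketch of keyMarker/versionIdMarker skipping (not the real DelimiterVersions).
// Assumes entries are sorted by key, then by version id within a key.
type VersionEntry = { key: string; versionId: string };

function skipToMarker(
    entries: VersionEntry[], keyMarker?: string, versionIdMarker?: string,
): VersionEntry[] {
    if (keyMarker === undefined) {
        return entries; // no marker: list from the beginning
    }
    return entries.filter(({ key, versionId }) => {
        if (key > keyMarker) return true;   // strictly past the marker key
        if (key < keyMarker) return false;  // before the marker key
        // same key: resume strictly after the marked version, if one was given
        return versionIdMarker !== undefined && versionId > versionIdMarker;
    });
}
```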

View File

@ -0,0 +1,45 @@
digraph {
node [shape="box",style="filled,rounded",fontsize=16,fixedsize=true,width=3];
edge [fontsize=14];
rankdir=TB;
START [shape="circle",width=0.2,label="",style="filled",fillcolor="black"]
END [shape="circle",width=0.2,label="",style="filled",fillcolor="black",peripheries=2]
node [fillcolor="lightgrey"];
"NotSkippingPrefixNorVersions.Idle" [label="NotSkippingPrefixNorVersions",group="NotSkippingPrefixNorVersions",width=4];
"SkippingPrefix.Idle" [label="SkippingPrefix",group="SkippingPrefix"];
"SkippingVersions.Idle" [label="SkippingVersions",group="SkippingVersions"];
"WaitVersionAfterPHD.Idle" [label="WaitVersionAfterPHD",group="WaitVersionAfterPHD"];
node [fillcolor="lightblue"];
"NotSkippingPrefixNorVersions.Processing" [label="NotSkippingPrefixNorVersions",group="NotSkippingPrefixNorVersions",width=4];
"SkippingPrefix.Processing" [label="SkippingPrefix",group="SkippingPrefix"];
"SkippingVersions.Processing" [label="SkippingVersions",group="SkippingVersions"];
"WaitVersionAfterPHD.Processing" [label="WaitVersionAfterPHD",group="WaitVersionAfterPHD"];
START -> "SkippingVersions.Idle" [label="[marker != undefined]"]
START -> "NotSkippingPrefixNorVersions.Idle" [label="[marker == undefined]"]
"NotSkippingPrefixNorVersions.Idle" -> "NotSkippingPrefixNorVersions.Processing" [label="filter(key, value)"]
"SkippingPrefix.Idle" -> "SkippingPrefix.Processing" [label="filter(key, value)"]
"SkippingVersions.Idle" -> "SkippingVersions.Processing" [label="filter(key, value)"]
"WaitVersionAfterPHD.Idle" -> "WaitVersionAfterPHD.Processing" [label="filter(key, value)"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingVersions.Idle" [label="[Version.isDeleteMarker(value)]\n-> FILTER_ACCEPT"]
"NotSkippingPrefixNorVersions.Processing" -> "WaitVersionAfterPHD.Idle" [label="[Version.isPHD(value)]\n-> FILTER_ACCEPT"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(<ReplayPrefix>)]\n/ prefix <- <ReplayPrefix>\n-> FILTER_SKIP"]
"NotSkippingPrefixNorVersions.Processing" -> END [label="[isListableKey(key, value) and\nKeys == maxKeys]\n-> FILTER_END"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingPrefix.Idle" [label="[isListableKey(key, value) and\nnKeys < maxKeys and\nhasDelimiter(key)]\n/ prefix <- prefixOf(key)\n/ CommonPrefixes.append(prefixOf(key))\n-> FILTER_ACCEPT"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingVersions.Idle" [label="[isListableKey(key, value) and\nnKeys < maxKeys and\nnot hasDelimiter(key)]\n/ Contents.append(key, value)\n-> FILTER_ACCEPT"]
"SkippingPrefix.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(prefix)]\n-> FILTER_SKIP"]
"SkippingPrefix.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[not key.startsWith(prefix)]"]
"SkippingVersions.Processing" -> "SkippingVersions.Idle" [label="[isVersionKey(key)]\n-> FILTER_SKIP"]
"SkippingVersions.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[not isVersionKey(key)]"]
"WaitVersionAfterPHD.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[isVersionKey(key) and master(key) == PHDkey]\n/ key <- master(key)"]
"WaitVersionAfterPHD.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[not isVersionKey(key) or master(key) != PHDkey]"]
}

View File

@ -0,0 +1,216 @@
[Generated image omitted: Graphviz-rendered SVG of the DelimiterMaster v0 state chart above (matches ./pics/delimiterMasterV0StateChart.svg, 216 lines, 18 KiB).]

View File

@ -0,0 +1,35 @@
digraph {
node [shape="box",style="filled,rounded",fontsize=16,fixedsize=true,width=3];
edge [fontsize=14];
rankdir=TB;
START [shape="circle",width=0.2,label="",style="filled",fillcolor="black"]
END [shape="circle",width=0.2,label="",style="filled",fillcolor="black",peripheries=2]
node [fillcolor="lightgrey"];
"NotSkipping.Idle" [label="NotSkipping",group="NotSkipping"];
"NeverSkipping.Idle" [label="NeverSkipping",group="NeverSkipping"];
"NotSkippingPrefix.Idle" [label="NotSkippingPrefix",group="NotSkippingPrefix"];
"SkippingPrefix.Idle" [label="SkippingPrefix",group="SkippingPrefix"];
node [fillcolor="lightblue"];
"NeverSkipping.Processing" [label="NeverSkipping",group="NeverSkipping"];
"NotSkippingPrefix.Processing" [label="NotSkippingPrefix",group="NotSkippingPrefix"];
"SkippingPrefix.Processing" [label="SkippingPrefix",group="SkippingPrefix"];
START -> "NotSkipping.Idle"
"NotSkipping.Idle" -> "NeverSkipping.Idle" [label="[delimiter == undefined]"]
"NotSkipping.Idle" -> "NotSkippingPrefix.Idle" [label="[delimiter == '/']"]
"NeverSkipping.Idle" -> "NeverSkipping.Processing" [label="filter(key, value)"]
"NotSkippingPrefix.Idle" -> "NotSkippingPrefix.Processing" [label="filter(key, value)"]
"SkippingPrefix.Idle" -> "SkippingPrefix.Processing" [label="filter(key, value)"]
"NeverSkipping.Processing" -> END [label="[nKeys == maxKeys]\n-> FILTER_END"]
"NeverSkipping.Processing" -> "NeverSkipping.Idle" [label="[nKeys < maxKeys]\n/ Contents.append(key, value)\n -> FILTER_ACCEPT"]
"NotSkippingPrefix.Processing" -> END [label="[nKeys == maxKeys]\n -> FILTER_END"]
"NotSkippingPrefix.Processing" -> "SkippingPrefix.Idle" [label="[nKeys < maxKeys and hasDelimiter(key)]\n/ prefix <- prefixOf(key)\n/ CommonPrefixes.append(prefixOf(key))\n-> FILTER_ACCEPT"]
"NotSkippingPrefix.Processing" -> "NotSkippingPrefix.Idle" [label="[nKeys < maxKeys and not hasDelimiter(key)]\n/ Contents.append(key, value)\n -> FILTER_ACCEPT"]
"SkippingPrefix.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(prefix)]\n-> FILTER_SKIP"]
"SkippingPrefix.Processing" -> "NotSkippingPrefix.Processing" [label="[not key.startsWith(prefix)]"]
}

View File

@ -0,0 +1,166 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!-- Generated by graphviz version 2.43.0 (0)
-->
<!-- Title: %3 Pages: 1 -->
<svg width="975pt" height="533pt"
viewBox="0.00 0.00 975.00 533.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 529)">
<title>%3</title>
<polygon fill="white" stroke="transparent" points="-4,4 -4,-529 971,-529 971,4 -4,4"/>
<!-- START -->
<g id="node1" class="node">
<title>START</title>
<ellipse fill="black" stroke="black" cx="283" cy="-518" rx="7" ry="7"/>
</g>
<!-- NotSkipping.Idle -->
<g id="node3" class="node">
<title>NotSkipping.Idle</title>
<path fill="lightgrey" stroke="black" d="M379,-474C379,-474 187,-474 187,-474 181,-474 175,-468 175,-462 175,-462 175,-450 175,-450 175,-444 181,-438 187,-438 187,-438 379,-438 379,-438 385,-438 391,-444 391,-450 391,-450 391,-462 391,-462 391,-468 385,-474 379,-474"/>
<text text-anchor="middle" x="283" y="-452.2" font-family="Times,serif" font-size="16.00">NotSkipping</text>
</g>
<!-- START&#45;&gt;NotSkipping.Idle -->
<g id="edge1" class="edge">
<title>START&#45;&gt;NotSkipping.Idle</title>
<path fill="none" stroke="black" d="M283,-510.58C283,-504.23 283,-494.07 283,-484.3"/>
<polygon fill="black" stroke="black" points="286.5,-484.05 283,-474.05 279.5,-484.05 286.5,-484.05"/>
</g>
<!-- END -->
<g id="node2" class="node">
<title>END</title>
<ellipse fill="black" stroke="black" cx="196" cy="-120" rx="7" ry="7"/>
<ellipse fill="none" stroke="black" cx="196" cy="-120" rx="11" ry="11"/>
</g>
<!-- NeverSkipping.Idle -->
<g id="node4" class="node">
<title>NeverSkipping.Idle</title>
<path fill="lightgrey" stroke="black" d="M262,-387C262,-387 70,-387 70,-387 64,-387 58,-381 58,-375 58,-375 58,-363 58,-363 58,-357 64,-351 70,-351 70,-351 262,-351 262,-351 268,-351 274,-357 274,-363 274,-363 274,-375 274,-375 274,-381 268,-387 262,-387"/>
<text text-anchor="middle" x="166" y="-365.2" font-family="Times,serif" font-size="16.00">NeverSkipping</text>
</g>
<!-- NotSkipping.Idle&#45;&gt;NeverSkipping.Idle -->
<g id="edge2" class="edge">
<title>NotSkipping.Idle&#45;&gt;NeverSkipping.Idle</title>
<path fill="none" stroke="black" d="M216.5,-437.82C206.51,-433.18 196.91,-427.34 189,-420 182.25,-413.74 177.33,-405.11 173.81,-396.79"/>
<polygon fill="black" stroke="black" points="177.05,-395.47 170.3,-387.31 170.49,-397.9 177.05,-395.47"/>
<text text-anchor="middle" x="279.5" y="-408.8" font-family="Times,serif" font-size="14.00">[delimiter == undefined]</text>
</g>
<!-- NotSkippingPrefix.Idle -->
<g id="node5" class="node">
<title>NotSkippingPrefix.Idle</title>
<path fill="lightgrey" stroke="black" d="M496,-387C496,-387 304,-387 304,-387 298,-387 292,-381 292,-375 292,-375 292,-363 292,-363 292,-357 298,-351 304,-351 304,-351 496,-351 496,-351 502,-351 508,-357 508,-363 508,-363 508,-375 508,-375 508,-381 502,-387 496,-387"/>
<text text-anchor="middle" x="400" y="-365.2" font-family="Times,serif" font-size="16.00">NotSkippingPrefix</text>
</g>
<!-- NotSkipping.Idle&#45;&gt;NotSkippingPrefix.Idle -->
<g id="edge3" class="edge">
<title>NotSkipping.Idle&#45;&gt;NotSkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M340.77,-437.93C351.2,-433.2 361.45,-427.29 370,-420 377.58,-413.53 383.76,-404.65 388.51,-396.16"/>
<polygon fill="black" stroke="black" points="391.63,-397.74 393.08,-387.24 385.4,-394.54 391.63,-397.74"/>
<text text-anchor="middle" x="442.5" y="-408.8" font-family="Times,serif" font-size="14.00">[delimiter == &#39;/&#39;]</text>
</g>
<!-- NeverSkipping.Processing -->
<g id="node7" class="node">
<title>NeverSkipping.Processing</title>
<path fill="lightblue" stroke="black" d="M204,-270C204,-270 12,-270 12,-270 6,-270 0,-264 0,-258 0,-258 0,-246 0,-246 0,-240 6,-234 12,-234 12,-234 204,-234 204,-234 210,-234 216,-240 216,-246 216,-246 216,-258 216,-258 216,-264 210,-270 204,-270"/>
<text text-anchor="middle" x="108" y="-248.2" font-family="Times,serif" font-size="16.00">NeverSkipping</text>
</g>
<!-- NeverSkipping.Idle&#45;&gt;NeverSkipping.Processing -->
<g id="edge4" class="edge">
<title>NeverSkipping.Idle&#45;&gt;NeverSkipping.Processing</title>
<path fill="none" stroke="black" d="M64.1,-350.93C47.33,-346.11 33.58,-340.17 28,-333 15.72,-317.21 17.05,-304.74 28,-288 30.93,-283.52 34.58,-279.6 38.69,-276.19"/>
<polygon fill="black" stroke="black" points="40.97,-278.86 47.1,-270.22 36.92,-273.16 40.97,-278.86"/>
<text text-anchor="middle" x="86" y="-306.8" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- NotSkippingPrefix.Processing -->
<g id="node8" class="node">
<title>NotSkippingPrefix.Processing</title>
<path fill="lightblue" stroke="black" d="M554,-270C554,-270 362,-270 362,-270 356,-270 350,-264 350,-258 350,-258 350,-246 350,-246 350,-240 356,-234 362,-234 362,-234 554,-234 554,-234 560,-234 566,-240 566,-246 566,-246 566,-258 566,-258 566,-264 560,-270 554,-270"/>
<text text-anchor="middle" x="458" y="-248.2" font-family="Times,serif" font-size="16.00">NotSkippingPrefix</text>
</g>
<!-- NotSkippingPrefix.Idle&#45;&gt;NotSkippingPrefix.Processing -->
<g id="edge5" class="edge">
<title>NotSkippingPrefix.Idle&#45;&gt;NotSkippingPrefix.Processing</title>
<path fill="none" stroke="black" d="M395.69,-350.84C392.38,-333.75 390.03,-307.33 401,-288 403.42,-283.74 406.58,-279.94 410.19,-276.55"/>
<polygon fill="black" stroke="black" points="412.5,-279.18 418.1,-270.18 408.11,-273.73 412.5,-279.18"/>
<text text-anchor="middle" x="459" y="-306.8" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- SkippingPrefix.Idle -->
<g id="node6" class="node">
<title>SkippingPrefix.Idle</title>
<path fill="lightgrey" stroke="black" d="M554,-138C554,-138 362,-138 362,-138 356,-138 350,-132 350,-126 350,-126 350,-114 350,-114 350,-108 356,-102 362,-102 362,-102 554,-102 554,-102 560,-102 566,-108 566,-114 566,-114 566,-126 566,-126 566,-132 560,-138 554,-138"/>
<text text-anchor="middle" x="458" y="-116.2" font-family="Times,serif" font-size="16.00">SkippingPrefix</text>
</g>
<!-- SkippingPrefix.Processing -->
<g id="node9" class="node">
<title>SkippingPrefix.Processing</title>
<path fill="lightblue" stroke="black" d="M691,-36C691,-36 499,-36 499,-36 493,-36 487,-30 487,-24 487,-24 487,-12 487,-12 487,-6 493,0 499,0 499,0 691,0 691,0 697,0 703,-6 703,-12 703,-12 703,-24 703,-24 703,-30 697,-36 691,-36"/>
<text text-anchor="middle" x="595" y="-14.2" font-family="Times,serif" font-size="16.00">SkippingPrefix</text>
</g>
<!-- SkippingPrefix.Idle&#45;&gt;SkippingPrefix.Processing -->
<g id="edge6" class="edge">
<title>SkippingPrefix.Idle&#45;&gt;SkippingPrefix.Processing</title>
<path fill="none" stroke="black" d="M452.35,-101.95C448.76,-87.65 446.54,-67.45 457,-54 461.44,-48.29 471.08,-43.36 483.3,-39.15"/>
<polygon fill="black" stroke="black" points="484.61,-42.41 493.1,-36.07 482.51,-35.73 484.61,-42.41"/>
<text text-anchor="middle" x="515" y="-65.3" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- NeverSkipping.Processing&#45;&gt;END -->
<g id="edge7" class="edge">
<title>NeverSkipping.Processing&#45;&gt;END</title>
<path fill="none" stroke="black" d="M102.91,-233.88C97.93,-213.45 93.18,-179.15 109,-156 123.79,-134.35 154.41,-126.09 175.08,-122.94"/>
<polygon fill="black" stroke="black" points="175.62,-126.4 185.11,-121.69 174.76,-119.45 175.62,-126.4"/>
<text text-anchor="middle" x="185" y="-189.8" font-family="Times,serif" font-size="14.00">[nKeys == maxKeys]</text>
<text text-anchor="middle" x="185" y="-174.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_END</text>
</g>
<!-- NeverSkipping.Processing&#45;&gt;NeverSkipping.Idle -->
<g id="edge8" class="edge">
<title>NeverSkipping.Processing&#45;&gt;NeverSkipping.Idle</title>
<path fill="none" stroke="black" d="M129.49,-270.27C134.87,-275.48 140.18,-281.55 144,-288 153.56,-304.17 159.09,-324.63 162.21,-340.81"/>
<polygon fill="black" stroke="black" points="158.78,-341.49 163.94,-350.74 165.68,-340.29 158.78,-341.49"/>
<text text-anchor="middle" x="265.5" y="-321.8" font-family="Times,serif" font-size="14.00">[nKeys &lt; maxKeys]</text>
<text text-anchor="middle" x="265.5" y="-306.8" font-family="Times,serif" font-size="14.00">/ Contents.append(key, value)</text>
<text text-anchor="middle" x="265.5" y="-291.8" font-family="Times,serif" font-size="14.00"> &#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- NotSkippingPrefix.Processing&#45;&gt;END -->
<g id="edge9" class="edge">
<title>NotSkippingPrefix.Processing&#45;&gt;END</title>
<path fill="none" stroke="black" d="M349.96,-237.93C333,-232.81 316.36,-225.74 302,-216 275.27,-197.87 285.01,-177.6 261,-156 247.64,-143.98 229.41,-134.62 215.65,-128.62"/>
<polygon fill="black" stroke="black" points="216.74,-125.28 206.16,-124.7 214.07,-131.75 216.74,-125.28"/>
<text text-anchor="middle" x="378" y="-189.8" font-family="Times,serif" font-size="14.00">[nKeys == maxKeys]</text>
<text text-anchor="middle" x="378" y="-174.8" font-family="Times,serif" font-size="14.00"> &#45;&gt; FILTER_END</text>
</g>
<!-- NotSkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Idle -->
<g id="edge11" class="edge">
<title>NotSkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M499.64,-270.11C506.59,-274.86 512.87,-280.76 517,-288 526.9,-305.38 528.94,-316.96 517,-333 513.56,-337.62 509.53,-341.66 505.07,-345.18"/>
<polygon fill="black" stroke="black" points="502.89,-342.43 496.63,-350.98 506.85,-348.2 502.89,-342.43"/>
<text text-anchor="middle" x="690.5" y="-321.8" font-family="Times,serif" font-size="14.00">[nKeys &lt; maxKeys and not hasDelimiter(key)]</text>
<text text-anchor="middle" x="690.5" y="-306.8" font-family="Times,serif" font-size="14.00">/ Contents.append(key, value)</text>
<text text-anchor="middle" x="690.5" y="-291.8" font-family="Times,serif" font-size="14.00"> &#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- NotSkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle -->
<g id="edge10" class="edge">
<title>NotSkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M458,-233.74C458,-211.98 458,-174.32 458,-148.56"/>
<polygon fill="black" stroke="black" points="461.5,-148.33 458,-138.33 454.5,-148.33 461.5,-148.33"/>
<text text-anchor="middle" x="609.5" y="-204.8" font-family="Times,serif" font-size="14.00">[nKeys &lt; maxKeys and hasDelimiter(key)]</text>
<text text-anchor="middle" x="609.5" y="-189.8" font-family="Times,serif" font-size="14.00">/ prefix &lt;&#45; prefixOf(key)</text>
<text text-anchor="middle" x="609.5" y="-174.8" font-family="Times,serif" font-size="14.00">/ CommonPrefixes.append(prefixOf(key))</text>
<text text-anchor="middle" x="609.5" y="-159.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- SkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle -->
<g id="edge12" class="edge">
<title>SkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M593.49,-36.23C591.32,-50.84 586,-71.39 573,-84 567.75,-89.09 561.77,-93.45 555.38,-97.17"/>
<polygon fill="black" stroke="black" points="553.66,-94.12 546.43,-101.87 556.91,-100.32 553.66,-94.12"/>
<text text-anchor="middle" x="672" y="-72.8" font-family="Times,serif" font-size="14.00">[key.startsWith(prefix)]</text>
<text text-anchor="middle" x="672" y="-57.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_SKIP</text>
</g>
<!-- SkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Processing -->
<g id="edge13" class="edge">
<title>SkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Processing</title>
<path fill="none" stroke="black" d="M703.16,-31.64C728.6,-36.87 750.75,-44.11 759,-54 778.46,-77.34 776.26,-200.01 762,-216 749.37,-230.17 656.13,-239.42 576.2,-244.84"/>
<polygon fill="black" stroke="black" points="575.77,-241.36 566.03,-245.51 576.24,-248.34 575.77,-241.36"/>
<text text-anchor="middle" x="870" y="-116.3" font-family="Times,serif" font-size="14.00">[not key.startsWith(prefix)]</text>
</g>
</g>
</svg>


@@ -0,0 +1,50 @@
digraph {
node [shape="box",style="filled,rounded",fontsize=16,fixedsize=true,width=3];
edge [fontsize=14];
rankdir=TB;
START [shape="circle",width=0.2,label="",style="filled",fillcolor="black"]
END [shape="circle",width=0.2,label="",style="filled",fillcolor="black",peripheries=2]
node [fillcolor="lightgrey"];
"NotSkipping.Idle" [label="NotSkipping",group="NotSkipping",width=4];
"SkippingPrefix.Idle" [label="SkippingPrefix",group="SkippingPrefix"];
"WaitForNullKey.Idle" [label="WaitForNullKey",group="WaitForNullKey"];
"SkippingVersions.Idle" [label="SkippingVersions",group="SkippingVersions"];
node [fillcolor="lightblue"];
"NotSkipping.Processing" [label="NotSkipping",group="NotSkipping",width=4];
"NotSkippingV0.Processing" [label="NotSkippingV0",group="NotSkipping",width=4];
"NotSkippingV1.Processing" [label="NotSkippingV1",group="NotSkipping",width=4];
"NotSkippingCommon.Processing" [label="NotSkippingCommon",group="NotSkipping",width=4];
"SkippingPrefix.Processing" [label="SkippingPrefix",group="SkippingPrefix"];
"WaitForNullKey.Processing" [label="WaitForNullKey",group="WaitForNullKey"];
"SkippingVersions.Processing" [label="SkippingVersions",group="SkippingVersions"];
START -> "WaitForNullKey.Idle" [label="[versionIdMarker != undefined]"]
START -> "NotSkipping.Idle" [label="[versionIdMarker == undefined]"]
"NotSkipping.Idle" -> "NotSkipping.Processing" [label="filter(key, value)"]
"SkippingPrefix.Idle" -> "SkippingPrefix.Processing" [label="filter(key, value)"]
"WaitForNullKey.Idle" -> "WaitForNullKey.Processing" [label="filter(key, value)"]
"SkippingVersions.Idle" -> "SkippingVersions.Processing" [label="filter(key, value)"]
"NotSkipping.Processing" -> "NotSkippingV0.Processing" [label="vFormat='v0'"]
"NotSkipping.Processing" -> "NotSkippingV1.Processing" [label="vFormat='v1'"]
"WaitForNullKey.Processing" -> "NotSkipping.Processing" [label="master(key) != keyMarker"]
"WaitForNullKey.Processing" -> "SkippingVersions.Processing" [label="master(key) == keyMarker"]
"NotSkippingV0.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(<ReplayPrefix>)]\n/ prefix <- <ReplayPrefix>\n-> FILTER_SKIP"]
"NotSkippingV0.Processing" -> "NotSkipping.Idle" [label="[Version.isPHD(value)]\n-> FILTER_ACCEPT"]
"NotSkippingV0.Processing" -> "NotSkippingCommon.Processing" [label="[not key.startsWith(<ReplayPrefix>)\nand not Version.isPHD(value)]"]
"NotSkippingV1.Processing" -> "NotSkippingCommon.Processing" [label="[always]"]
"NotSkippingCommon.Processing" -> END [label="[isListableKey(key, value) and\nKeys == maxKeys]\n-> FILTER_END"]
"NotSkippingCommon.Processing" -> "SkippingPrefix.Idle" [label="[isListableKey(key, value) and\nnKeys < maxKeys and\nhasDelimiter(key)]\n/ prefix <- prefixOf(key)\n/ CommonPrefixes.append(prefixOf(key))\n-> FILTER_ACCEPT"]
"NotSkippingCommon.Processing" -> "NotSkipping.Idle" [label="[isListableKey(key, value) and\nnKeys < maxKeys and\nnot hasDelimiter(key)]\n/ Contents.append(key, versionId, value)\n-> FILTER_ACCEPT"]
"SkippingPrefix.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(prefix)]\n-> FILTER_SKIP"]
"SkippingPrefix.Processing" -> "NotSkipping.Processing" [label="[not key.startsWith(prefix)]"]
"SkippingVersions.Processing" -> "NotSkipping.Processing" [label="master(key) !== keyMarker or \nversionId > versionIdMarker"]
"SkippingVersions.Processing" -> "SkippingVersions.Idle" [label="master(key) === keyMarker and \nversionId < versionIdMarker\n-> FILTER_SKIP"]
"SkippingVersions.Processing" -> "SkippingVersions.Idle" [label="master(key) === keyMarker and \nversionId == versionIdMarker\n-> FILTER_ACCEPT"]
}
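
Read together with the rendered diagrams, the dot source above is the full transition table: each filter(key, value) call runs the current state's handler, which returns one of the FILTER_* codes and may switch states. Below is a minimal TypeScript sketch of that dispatch pattern for just the NotSkipping/SkippingPrefix pair; the class names and fields are illustrative, not the actual implementation in lib/algos/list.

// Illustrative sketch of the per-key dispatch the diagram describes.
enum FilterReturn { FILTER_ACCEPT, FILTER_SKIP, FILTER_END }

interface State {
    filter(key: string, value: string): FilterReturn;
}

class Lister {
    state: State;
    nKeys = 0;
    contents: [string, string][] = [];
    constructor(public maxKeys: number) {
        this.state = new NotSkipping(this);
    }
    filter(key: string, value: string): FilterReturn {
        return this.state.filter(key, value);
    }
}

class NotSkipping implements State {
    constructor(private lister: Lister) {}
    filter(key: string, value: string): FilterReturn {
        if (this.lister.nKeys === this.lister.maxKeys) {
            return FilterReturn.FILTER_END;          // [nKeys == maxKeys]
        }
        this.lister.contents.push([key, value]);     // / Contents.append(key, value)
        this.lister.nKeys += 1;
        return FilterReturn.FILTER_ACCEPT;
    }
}

class SkippingPrefix implements State {
    constructor(private lister: Lister, private prefix: string) {}
    filter(key: string, value: string): FilterReturn {
        if (key.startsWith(this.prefix)) {
            return FilterReturn.FILTER_SKIP;         // [key.startsWith(prefix)]
        }
        this.lister.state = new NotSkipping(this.lister);
        return this.lister.filter(key, value);       // re-dispatch the key outside the gap
    }
}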


@@ -0,0 +1,265 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!-- Generated by graphviz version 2.43.0 (0)
-->
<!-- Title: %3 Pages: 1 -->
<svg width="1522pt" height="922pt"
viewBox="0.00 0.00 1522.26 922.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 918)">
<title>%3</title>
<polygon fill="white" stroke="transparent" points="-4,4 -4,-918 1518.26,-918 1518.26,4 -4,4"/>
<!-- START -->
<g id="node1" class="node">
<title>START</title>
<ellipse fill="black" stroke="black" cx="393.26" cy="-907" rx="7" ry="7"/>
</g>
<!-- NotSkipping.Idle -->
<g id="node3" class="node">
<title>NotSkipping.Idle</title>
<path fill="lightgrey" stroke="black" d="M436.26,-675C436.26,-675 172.26,-675 172.26,-675 166.26,-675 160.26,-669 160.26,-663 160.26,-663 160.26,-651 160.26,-651 160.26,-645 166.26,-639 172.26,-639 172.26,-639 436.26,-639 436.26,-639 442.26,-639 448.26,-645 448.26,-651 448.26,-651 448.26,-663 448.26,-663 448.26,-669 442.26,-675 436.26,-675"/>
<text text-anchor="middle" x="304.26" y="-653.2" font-family="Times,serif" font-size="16.00">NotSkipping</text>
</g>
<!-- START&#45;&gt;NotSkipping.Idle -->
<g id="edge2" class="edge">
<title>START&#45;&gt;NotSkipping.Idle</title>
<path fill="none" stroke="black" d="M391.06,-899.87C380.45,-870.31 334.26,-741.58 313.93,-684.93"/>
<polygon fill="black" stroke="black" points="317.12,-683.46 310.45,-675.23 310.53,-685.82 317.12,-683.46"/>
<text text-anchor="middle" x="470.76" y="-783.8" font-family="Times,serif" font-size="14.00">[versionIdMarker == undefined]</text>
</g>
<!-- WaitForNullKey.Idle -->
<g id="node5" class="node">
<title>WaitForNullKey.Idle</title>
<path fill="lightgrey" stroke="black" d="M692.26,-849C692.26,-849 500.26,-849 500.26,-849 494.26,-849 488.26,-843 488.26,-837 488.26,-837 488.26,-825 488.26,-825 488.26,-819 494.26,-813 500.26,-813 500.26,-813 692.26,-813 692.26,-813 698.26,-813 704.26,-819 704.26,-825 704.26,-825 704.26,-837 704.26,-837 704.26,-843 698.26,-849 692.26,-849"/>
<text text-anchor="middle" x="596.26" y="-827.2" font-family="Times,serif" font-size="16.00">WaitForNullKey</text>
</g>
<!-- START&#45;&gt;WaitForNullKey.Idle -->
<g id="edge1" class="edge">
<title>START&#45;&gt;WaitForNullKey.Idle</title>
<path fill="none" stroke="black" d="M399.56,-903.7C420.56,-896.05 489.7,-870.85 540.08,-852.48"/>
<polygon fill="black" stroke="black" points="541.38,-855.73 549.57,-849.02 538.98,-849.16 541.38,-855.73"/>
<text text-anchor="middle" x="608.76" y="-870.8" font-family="Times,serif" font-size="14.00">[versionIdMarker != undefined]</text>
</g>
<!-- END -->
<g id="node2" class="node">
<title>END</title>
<ellipse fill="black" stroke="black" cx="45.26" cy="-120" rx="7" ry="7"/>
<ellipse fill="none" stroke="black" cx="45.26" cy="-120" rx="11" ry="11"/>
</g>
<!-- NotSkipping.Processing -->
<g id="node7" class="node">
<title>NotSkipping.Processing</title>
<path fill="lightblue" stroke="black" d="M761.26,-558C761.26,-558 497.26,-558 497.26,-558 491.26,-558 485.26,-552 485.26,-546 485.26,-546 485.26,-534 485.26,-534 485.26,-528 491.26,-522 497.26,-522 497.26,-522 761.26,-522 761.26,-522 767.26,-522 773.26,-528 773.26,-534 773.26,-534 773.26,-546 773.26,-546 773.26,-552 767.26,-558 761.26,-558"/>
<text text-anchor="middle" x="629.26" y="-536.2" font-family="Times,serif" font-size="16.00">NotSkipping</text>
</g>
<!-- NotSkipping.Idle&#45;&gt;NotSkipping.Processing -->
<g id="edge3" class="edge">
<title>NotSkipping.Idle&#45;&gt;NotSkipping.Processing</title>
<path fill="none" stroke="black" d="M333.17,-638.98C364.86,-620.99 417.68,-592.92 466.26,-576 483.64,-569.95 502.44,-564.74 520.88,-560.34"/>
<polygon fill="black" stroke="black" points="521.83,-563.71 530.78,-558.04 520.25,-556.89 521.83,-563.71"/>
<text text-anchor="middle" x="524.26" y="-594.8" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- SkippingPrefix.Idle -->
<g id="node4" class="node">
<title>SkippingPrefix.Idle</title>
<path fill="lightgrey" stroke="black" d="M662.26,-138C662.26,-138 470.26,-138 470.26,-138 464.26,-138 458.26,-132 458.26,-126 458.26,-126 458.26,-114 458.26,-114 458.26,-108 464.26,-102 470.26,-102 470.26,-102 662.26,-102 662.26,-102 668.26,-102 674.26,-108 674.26,-114 674.26,-114 674.26,-126 674.26,-126 674.26,-132 668.26,-138 662.26,-138"/>
<text text-anchor="middle" x="566.26" y="-116.2" font-family="Times,serif" font-size="16.00">SkippingPrefix</text>
</g>
<!-- SkippingPrefix.Processing -->
<g id="node11" class="node">
<title>SkippingPrefix.Processing</title>
<path fill="lightblue" stroke="black" d="M779.26,-36C779.26,-36 587.26,-36 587.26,-36 581.26,-36 575.26,-30 575.26,-24 575.26,-24 575.26,-12 575.26,-12 575.26,-6 581.26,0 587.26,0 587.26,0 779.26,0 779.26,0 785.26,0 791.26,-6 791.26,-12 791.26,-12 791.26,-24 791.26,-24 791.26,-30 785.26,-36 779.26,-36"/>
<text text-anchor="middle" x="683.26" y="-14.2" font-family="Times,serif" font-size="16.00">SkippingPrefix</text>
</g>
<!-- SkippingPrefix.Idle&#45;&gt;SkippingPrefix.Processing -->
<g id="edge4" class="edge">
<title>SkippingPrefix.Idle&#45;&gt;SkippingPrefix.Processing</title>
<path fill="none" stroke="black" d="M552.64,-101.74C543.31,-87.68 534.41,-67.95 545.26,-54 549.71,-48.29 559.34,-43.36 571.56,-39.15"/>
<polygon fill="black" stroke="black" points="572.87,-42.41 581.36,-36.07 570.77,-35.73 572.87,-42.41"/>
<text text-anchor="middle" x="603.26" y="-65.3" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- WaitForNullKey.Processing -->
<g id="node12" class="node">
<title>WaitForNullKey.Processing</title>
<path fill="lightblue" stroke="black" d="M692.26,-762C692.26,-762 500.26,-762 500.26,-762 494.26,-762 488.26,-756 488.26,-750 488.26,-750 488.26,-738 488.26,-738 488.26,-732 494.26,-726 500.26,-726 500.26,-726 692.26,-726 692.26,-726 698.26,-726 704.26,-732 704.26,-738 704.26,-738 704.26,-750 704.26,-750 704.26,-756 698.26,-762 692.26,-762"/>
<text text-anchor="middle" x="596.26" y="-740.2" font-family="Times,serif" font-size="16.00">WaitForNullKey</text>
</g>
<!-- WaitForNullKey.Idle&#45;&gt;WaitForNullKey.Processing -->
<g id="edge5" class="edge">
<title>WaitForNullKey.Idle&#45;&gt;WaitForNullKey.Processing</title>
<path fill="none" stroke="black" d="M596.26,-812.8C596.26,-801.16 596.26,-785.55 596.26,-772.24"/>
<polygon fill="black" stroke="black" points="599.76,-772.18 596.26,-762.18 592.76,-772.18 599.76,-772.18"/>
<text text-anchor="middle" x="654.26" y="-783.8" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- SkippingVersions.Idle -->
<g id="node6" class="node">
<title>SkippingVersions.Idle</title>
<path fill="lightgrey" stroke="black" d="M1241.26,-558C1241.26,-558 1049.26,-558 1049.26,-558 1043.26,-558 1037.26,-552 1037.26,-546 1037.26,-546 1037.26,-534 1037.26,-534 1037.26,-528 1043.26,-522 1049.26,-522 1049.26,-522 1241.26,-522 1241.26,-522 1247.26,-522 1253.26,-528 1253.26,-534 1253.26,-534 1253.26,-546 1253.26,-546 1253.26,-552 1247.26,-558 1241.26,-558"/>
<text text-anchor="middle" x="1145.26" y="-536.2" font-family="Times,serif" font-size="16.00">SkippingVersions</text>
</g>
<!-- SkippingVersions.Processing -->
<g id="node13" class="node">
<title>SkippingVersions.Processing</title>
<path fill="lightblue" stroke="black" d="M1241.26,-675C1241.26,-675 1049.26,-675 1049.26,-675 1043.26,-675 1037.26,-669 1037.26,-663 1037.26,-663 1037.26,-651 1037.26,-651 1037.26,-645 1043.26,-639 1049.26,-639 1049.26,-639 1241.26,-639 1241.26,-639 1247.26,-639 1253.26,-645 1253.26,-651 1253.26,-651 1253.26,-663 1253.26,-663 1253.26,-669 1247.26,-675 1241.26,-675"/>
<text text-anchor="middle" x="1145.26" y="-653.2" font-family="Times,serif" font-size="16.00">SkippingVersions</text>
</g>
<!-- SkippingVersions.Idle&#45;&gt;SkippingVersions.Processing -->
<g id="edge6" class="edge">
<title>SkippingVersions.Idle&#45;&gt;SkippingVersions.Processing</title>
<path fill="none" stroke="black" d="M1145.26,-558.25C1145.26,-576.77 1145.26,-606.45 1145.26,-628.25"/>
<polygon fill="black" stroke="black" points="1141.76,-628.53 1145.26,-638.53 1148.76,-628.53 1141.76,-628.53"/>
<text text-anchor="middle" x="1203.26" y="-594.8" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- NotSkippingV0.Processing -->
<g id="node8" class="node">
<title>NotSkippingV0.Processing</title>
<path fill="lightblue" stroke="black" d="M436.26,-411C436.26,-411 172.26,-411 172.26,-411 166.26,-411 160.26,-405 160.26,-399 160.26,-399 160.26,-387 160.26,-387 160.26,-381 166.26,-375 172.26,-375 172.26,-375 436.26,-375 436.26,-375 442.26,-375 448.26,-381 448.26,-387 448.26,-387 448.26,-399 448.26,-399 448.26,-405 442.26,-411 436.26,-411"/>
<text text-anchor="middle" x="304.26" y="-389.2" font-family="Times,serif" font-size="16.00">NotSkippingV0</text>
</g>
<!-- NotSkipping.Processing&#45;&gt;NotSkippingV0.Processing -->
<g id="edge7" class="edge">
<title>NotSkipping.Processing&#45;&gt;NotSkippingV0.Processing</title>
<path fill="none" stroke="black" d="M573.96,-521.95C558.07,-516.64 540.84,-510.46 525.26,-504 460.22,-477.02 387.62,-439.36 343.97,-415.84"/>
<polygon fill="black" stroke="black" points="345.57,-412.72 335.11,-411.04 342.24,-418.88 345.57,-412.72"/>
<text text-anchor="middle" x="573.76" y="-462.8" font-family="Times,serif" font-size="14.00">vFormat=&#39;v0&#39;</text>
</g>
<!-- NotSkippingV1.Processing -->
<g id="node9" class="node">
<title>NotSkippingV1.Processing</title>
<path fill="lightblue" stroke="black" d="M758.26,-411C758.26,-411 494.26,-411 494.26,-411 488.26,-411 482.26,-405 482.26,-399 482.26,-399 482.26,-387 482.26,-387 482.26,-381 488.26,-375 494.26,-375 494.26,-375 758.26,-375 758.26,-375 764.26,-375 770.26,-381 770.26,-387 770.26,-387 770.26,-399 770.26,-399 770.26,-405 764.26,-411 758.26,-411"/>
<text text-anchor="middle" x="626.26" y="-389.2" font-family="Times,serif" font-size="16.00">NotSkippingV1</text>
</g>
<!-- NotSkipping.Processing&#45;&gt;NotSkippingV1.Processing -->
<g id="edge8" class="edge">
<title>NotSkipping.Processing&#45;&gt;NotSkippingV1.Processing</title>
<path fill="none" stroke="black" d="M628.91,-521.8C628.39,-496.94 627.44,-450.74 626.83,-421.23"/>
<polygon fill="black" stroke="black" points="630.32,-421.11 626.62,-411.18 623.33,-421.25 630.32,-421.11"/>
<text text-anchor="middle" x="676.76" y="-462.8" font-family="Times,serif" font-size="14.00">vFormat=&#39;v1&#39;</text>
</g>
<!-- NotSkippingV0.Processing&#45;&gt;NotSkipping.Idle -->
<g id="edge12" class="edge">
<title>NotSkippingV0.Processing&#45;&gt;NotSkipping.Idle</title>
<path fill="none" stroke="black" d="M304.26,-411.25C304.26,-455.74 304.26,-574.61 304.26,-628.62"/>
<polygon fill="black" stroke="black" points="300.76,-628.81 304.26,-638.81 307.76,-628.81 300.76,-628.81"/>
<text text-anchor="middle" x="385.76" y="-543.8" font-family="Times,serif" font-size="14.00">[Version.isPHD(value)]</text>
<text text-anchor="middle" x="385.76" y="-528.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- NotSkippingV0.Processing&#45;&gt;SkippingPrefix.Idle -->
<g id="edge11" class="edge">
<title>NotSkippingV0.Processing&#45;&gt;SkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M448.41,-376.93C508.52,-369.95 565.63,-362.09 570.26,-357 622.9,-299.12 594.8,-196.31 577.11,-147.78"/>
<polygon fill="black" stroke="black" points="580.33,-146.4 573.53,-138.28 573.78,-148.87 580.33,-146.4"/>
<text text-anchor="middle" x="720.26" y="-297.8" font-family="Times,serif" font-size="14.00">[key.startsWith(&lt;ReplayPrefix&gt;)]</text>
<text text-anchor="middle" x="720.26" y="-282.8" font-family="Times,serif" font-size="14.00">/ prefix &lt;&#45; &lt;ReplayPrefix&gt;</text>
<text text-anchor="middle" x="720.26" y="-267.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_SKIP</text>
</g>
<!-- NotSkippingCommon.Processing -->
<g id="node10" class="node">
<title>NotSkippingCommon.Processing</title>
<path fill="lightblue" stroke="black" d="M436.26,-304.5C436.26,-304.5 172.26,-304.5 172.26,-304.5 166.26,-304.5 160.26,-298.5 160.26,-292.5 160.26,-292.5 160.26,-280.5 160.26,-280.5 160.26,-274.5 166.26,-268.5 172.26,-268.5 172.26,-268.5 436.26,-268.5 436.26,-268.5 442.26,-268.5 448.26,-274.5 448.26,-280.5 448.26,-280.5 448.26,-292.5 448.26,-292.5 448.26,-298.5 442.26,-304.5 436.26,-304.5"/>
<text text-anchor="middle" x="304.26" y="-282.7" font-family="Times,serif" font-size="16.00">NotSkippingCommon</text>
</g>
<!-- NotSkippingV0.Processing&#45;&gt;NotSkippingCommon.Processing -->
<g id="edge13" class="edge">
<title>NotSkippingV0.Processing&#45;&gt;NotSkippingCommon.Processing</title>
<path fill="none" stroke="black" d="M304.26,-374.74C304.26,-358.48 304.26,-333.85 304.26,-314.9"/>
<polygon fill="black" stroke="black" points="307.76,-314.78 304.26,-304.78 300.76,-314.78 307.76,-314.78"/>
<text text-anchor="middle" x="435.26" y="-345.8" font-family="Times,serif" font-size="14.00">[not key.startsWith(&lt;ReplayPrefix&gt;)</text>
<text text-anchor="middle" x="435.26" y="-330.8" font-family="Times,serif" font-size="14.00">and not Version.isPHD(value)]</text>
</g>
<!-- NotSkippingV1.Processing&#45;&gt;NotSkippingCommon.Processing -->
<g id="edge14" class="edge">
<title>NotSkippingV1.Processing&#45;&gt;NotSkippingCommon.Processing</title>
<path fill="none" stroke="black" d="M616.43,-374.83C606.75,-359.62 590.48,-338.14 570.26,-327 549.98,-315.83 505.48,-307.38 458.57,-301.23"/>
<polygon fill="black" stroke="black" points="458.9,-297.74 448.53,-299.95 458.01,-304.69 458.9,-297.74"/>
<text text-anchor="middle" x="632.26" y="-338.3" font-family="Times,serif" font-size="14.00">[always]</text>
</g>
<!-- NotSkippingCommon.Processing&#45;&gt;END -->
<g id="edge15" class="edge">
<title>NotSkippingCommon.Processing&#45;&gt;END</title>
<path fill="none" stroke="black" d="M159.92,-279.56C109.8,-274.24 62.13,-264.33 46.26,-246 20.92,-216.72 30.42,-167.54 38.5,-140.42"/>
<polygon fill="black" stroke="black" points="41.94,-141.16 41.67,-130.57 35.27,-139.02 41.94,-141.16"/>
<text text-anchor="middle" x="152.76" y="-212.3" font-family="Times,serif" font-size="14.00">[isListableKey(key, value) and</text>
<text text-anchor="middle" x="152.76" y="-197.3" font-family="Times,serif" font-size="14.00">Keys == maxKeys]</text>
<text text-anchor="middle" x="152.76" y="-182.3" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_END</text>
</g>
<!-- NotSkippingCommon.Processing&#45;&gt;NotSkipping.Idle -->
<g id="edge17" class="edge">
<title>NotSkippingCommon.Processing&#45;&gt;NotSkipping.Idle</title>
<path fill="none" stroke="black" d="M214.74,-304.54C146.51,-322.73 57.06,-358.99 13.26,-429 -49.27,-528.95 128.43,-602.49 233.32,-635.95"/>
<polygon fill="black" stroke="black" points="232.34,-639.31 242.93,-638.97 234.43,-632.63 232.34,-639.31"/>
<text text-anchor="middle" x="156.76" y="-492.8" font-family="Times,serif" font-size="14.00">[isListableKey(key, value) and</text>
<text text-anchor="middle" x="156.76" y="-477.8" font-family="Times,serif" font-size="14.00">nKeys &lt; maxKeys and</text>
<text text-anchor="middle" x="156.76" y="-462.8" font-family="Times,serif" font-size="14.00">not hasDelimiter(key)]</text>
<text text-anchor="middle" x="156.76" y="-447.8" font-family="Times,serif" font-size="14.00">/ Contents.append(key, versionId, value)</text>
<text text-anchor="middle" x="156.76" y="-432.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- NotSkippingCommon.Processing&#45;&gt;SkippingPrefix.Idle -->
<g id="edge16" class="edge">
<title>NotSkippingCommon.Processing&#45;&gt;SkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M292.14,-268.23C288.18,-261.59 284.27,-253.75 282.26,-246 272.21,-207.28 255.76,-185.96 282.26,-156 293.6,-143.18 374.98,-134.02 447.74,-128.3"/>
<polygon fill="black" stroke="black" points="448.24,-131.77 457.94,-127.51 447.7,-124.79 448.24,-131.77"/>
<text text-anchor="middle" x="428.26" y="-234.8" font-family="Times,serif" font-size="14.00">[isListableKey(key, value) and</text>
<text text-anchor="middle" x="428.26" y="-219.8" font-family="Times,serif" font-size="14.00">nKeys &lt; maxKeys and</text>
<text text-anchor="middle" x="428.26" y="-204.8" font-family="Times,serif" font-size="14.00">hasDelimiter(key)]</text>
<text text-anchor="middle" x="428.26" y="-189.8" font-family="Times,serif" font-size="14.00">/ prefix &lt;&#45; prefixOf(key)</text>
<text text-anchor="middle" x="428.26" y="-174.8" font-family="Times,serif" font-size="14.00">/ CommonPrefixes.append(prefixOf(key))</text>
<text text-anchor="middle" x="428.26" y="-159.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- SkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle -->
<g id="edge18" class="edge">
<title>SkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M681.57,-36.04C679.28,-50.54 673.9,-71.03 661.26,-84 656.4,-88.99 650.77,-93.28 644.72,-96.95"/>
<polygon fill="black" stroke="black" points="642.71,-94.06 635.6,-101.92 646.05,-100.21 642.71,-94.06"/>
<text text-anchor="middle" x="759.26" y="-72.8" font-family="Times,serif" font-size="14.00">[key.startsWith(prefix)]</text>
<text text-anchor="middle" x="759.26" y="-57.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_SKIP</text>
</g>
<!-- SkippingPrefix.Processing&#45;&gt;NotSkipping.Processing -->
<g id="edge19" class="edge">
<title>SkippingPrefix.Processing&#45;&gt;NotSkipping.Processing</title>
<path fill="none" stroke="black" d="M791.46,-33.51C815.84,-38.71 837.21,-45.46 846.26,-54 868.07,-74.57 864.26,-89.02 864.26,-119 864.26,-394 864.26,-394 864.26,-394 864.26,-462.4 791.27,-499.6 726.64,-519.12"/>
<polygon fill="black" stroke="black" points="725.39,-515.84 716.77,-521.99 727.35,-522.56 725.39,-515.84"/>
<text text-anchor="middle" x="961.26" y="-282.8" font-family="Times,serif" font-size="14.00">[not key.startsWith(prefix)]</text>
</g>
<!-- WaitForNullKey.Processing&#45;&gt;NotSkipping.Processing -->
<g id="edge9" class="edge">
<title>WaitForNullKey.Processing&#45;&gt;NotSkipping.Processing</title>
<path fill="none" stroke="black" d="M599.08,-725.78C604.81,-690.67 617.89,-610.59 624.8,-568.31"/>
<polygon fill="black" stroke="black" points="628.3,-568.61 626.46,-558.18 621.39,-567.48 628.3,-568.61"/>
<text text-anchor="middle" x="707.26" y="-653.3" font-family="Times,serif" font-size="14.00">master(key) != keyMarker</text>
</g>
<!-- WaitForNullKey.Processing&#45;&gt;SkippingVersions.Processing -->
<g id="edge10" class="edge">
<title>WaitForNullKey.Processing&#45;&gt;SkippingVersions.Processing</title>
<path fill="none" stroke="black" d="M704.4,-726.26C797.32,-711.87 931.09,-691.16 1026.87,-676.33"/>
<polygon fill="black" stroke="black" points="1027.55,-679.77 1036.89,-674.78 1026.47,-672.85 1027.55,-679.77"/>
<text text-anchor="middle" x="1001.26" y="-696.8" font-family="Times,serif" font-size="14.00">master(key) == keyMarker</text>
</g>
<!-- SkippingVersions.Processing&#45;&gt;SkippingVersions.Idle -->
<g id="edge21" class="edge">
<title>SkippingVersions.Processing&#45;&gt;SkippingVersions.Idle</title>
<path fill="none" stroke="black" d="M1241.89,-638.98C1249.74,-634.29 1256.75,-628.4 1262.26,-621 1274.21,-604.96 1274.21,-592.04 1262.26,-576 1258.82,-571.38 1254.79,-567.34 1250.33,-563.82"/>
<polygon fill="black" stroke="black" points="1252.11,-560.8 1241.89,-558.02 1248.15,-566.57 1252.11,-560.8"/>
<text text-anchor="middle" x="1392.26" y="-609.8" font-family="Times,serif" font-size="14.00">master(key) === keyMarker and </text>
<text text-anchor="middle" x="1392.26" y="-594.8" font-family="Times,serif" font-size="14.00">versionId &lt; versionIdMarker</text>
<text text-anchor="middle" x="1392.26" y="-579.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_SKIP</text>
</g>
<!-- SkippingVersions.Processing&#45;&gt;SkippingVersions.Idle -->
<g id="edge22" class="edge">
<title>SkippingVersions.Processing&#45;&gt;SkippingVersions.Idle</title>
<path fill="none" stroke="black" d="M1036.97,-654.38C978.97,-650.96 915.73,-642.25 897.26,-621 884.15,-605.9 884.15,-591.1 897.26,-576 914.65,-555.99 971.71,-547.1 1026.73,-543.28"/>
<polygon fill="black" stroke="black" points="1027.21,-546.76 1036.97,-542.62 1026.76,-539.77 1027.21,-546.76"/>
<text text-anchor="middle" x="1019.26" y="-609.8" font-family="Times,serif" font-size="14.00">master(key) === keyMarker and </text>
<text text-anchor="middle" x="1019.26" y="-594.8" font-family="Times,serif" font-size="14.00">versionId == versionIdMarker</text>
<text text-anchor="middle" x="1019.26" y="-579.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- SkippingVersions.Processing&#45;&gt;NotSkipping.Processing -->
<g id="edge20" class="edge">
<title>SkippingVersions.Processing&#45;&gt;NotSkipping.Processing</title>
<path fill="none" stroke="black" d="M1037.02,-651.24C897.84,-644.67 672.13,-632.37 657.26,-621 641.04,-608.6 634.18,-586.13 631.3,-568.16"/>
<polygon fill="black" stroke="black" points="634.76,-567.68 630.02,-558.21 627.82,-568.57 634.76,-567.68"/>
<text text-anchor="middle" x="770.26" y="-602.3" font-family="Times,serif" font-size="14.00">master(key) !== keyMarker or </text>
<text text-anchor="middle" x="770.26" y="-587.3" font-family="Times,serif" font-size="14.00">versionId &gt; versionIdMarker</text>
</g>
</g>
</svg>
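
The SkippingVersions edges in this diagram hinge on comparing (master(key), versionId) against the (keyMarker, versionIdMarker) pair. A hedged TypeScript sketch of just that decision follows; the function and helper names are assumptions for illustration, not the listing code's actual API (master() is assumed to strip the version suffix off a raw key).

type SkipDecision = 'FILTER_SKIP' | 'FILTER_ACCEPT' | 'RESUME_NOT_SKIPPING';

function skippingVersionsDecision(
    masterKey: string, versionId: string,
    keyMarker: string, versionIdMarker: string,
): SkipDecision {
    if (masterKey !== keyMarker || versionId > versionIdMarker) {
        return 'RESUME_NOT_SKIPPING';   // hand the key back to NotSkipping
    }
    if (versionId < versionIdMarker) {
        return 'FILTER_SKIP';           // still before the marker version
    }
    return 'FILTER_ACCEPT';             // versionId == versionIdMarker
}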

index.ts (131 changed lines)

@@ -1,23 +1,49 @@
 import * as evaluators from './lib/policyEvaluator/evaluator';
 import evaluatePrincipal from './lib/policyEvaluator/principal';
-import RequestContext from './lib/policyEvaluator/RequestContext';
+import RequestContext, {
+    actionNeedQuotaCheck,
+    actionNeedQuotaCheckCopy,
+    actionWithDataDeletion } from './lib/policyEvaluator/RequestContext';
 import * as requestUtils from './lib/policyEvaluator/requestUtils';
 import * as actionMaps from './lib/policyEvaluator/utils/actionMaps';
 import { validateUserPolicy } from './lib/policy/policyValidator'
+import * as locationConstraints from './lib/patches/locationConstraints';
+import * as userMetadata from './lib/s3middleware/userMetadata';
+import convertToXml from './lib/s3middleware/convertToXml';
+import escapeForXml from './lib/s3middleware/escapeForXml';
+import * as objectLegalHold from './lib/s3middleware/objectLegalHold';
+import * as tagging from './lib/s3middleware/tagging';
+import { checkDateModifiedHeaders } from './lib/s3middleware/validateConditionalHeaders';
+import { validateConditionalHeaders } from './lib/s3middleware/validateConditionalHeaders';
+import MD5Sum from './lib/s3middleware/MD5Sum';
+import NullStream from './lib/s3middleware/nullStream';
+import * as objectUtils from './lib/s3middleware/objectUtils';
+import * as mpuUtils from './lib/s3middleware/azureHelpers/mpuUtils';
+import ResultsCollector from './lib/s3middleware/azureHelpers/ResultsCollector';
+import SubStreamInterface from './lib/s3middleware/azureHelpers/SubStreamInterface';
+import { prepareStream } from './lib/s3middleware/prepareStream';
+import * as processMpuParts from './lib/s3middleware/processMpuParts';
+import * as retention from './lib/s3middleware/objectRetention';
+import * as objectRestore from './lib/s3middleware/objectRestore';
+import * as lifecycleHelpers from './lib/s3middleware/lifecycleHelpers';
 export { default as errors } from './lib/errors';
+export { default as Clustering } from './lib/Clustering';
+export * as ClusterRPC from './lib/clustering/ClusterRPC';
 export * as ipCheck from './lib/ipCheck';
 export * as auth from './lib/auth/auth';
 export * as constants from './lib/constants';
 export * as https from './lib/https';
 export * as metrics from './lib/metrics';
 export * as network from './lib/network';
-export const db = require('./lib/db');
-export const errorUtils = require('./lib/errorUtils');
-export const shuffle = require('./lib/shuffle');
-export const stringHash = require('./lib/stringHash');
-export const jsutil = require('./lib/jsutil');
-export const Clustering = require('./lib/Clustering');
+export * as s3routes from './lib/s3routes';
+export * as versioning from './lib/versioning';
+export * as stream from './lib/stream';
+export * as jsutil from './lib/jsutil';
+export { default as stringHash } from './lib/stringHash';
+export * as db from './lib/db';
+export * as errorUtils from './lib/errorUtils';
+export { default as shuffle } from './lib/shuffle';
+export * as models from './lib/models';
 
 export const algorithms = {
     list: require('./lib/algos/list/exportAlgos'),
@@ -26,12 +52,15 @@ export const algorithms = {
         Skip: require('./lib/algos/list/skip'),
     },
     cache: {
+        GapSet: require('./lib/algos/cache/GapSet'),
+        GapCache: require('./lib/algos/cache/GapCache'),
         LRUCache: require('./lib/algos/cache/LRUCache'),
    },
     stream: {
         MergeStream: require('./lib/algos/stream/MergeStream'),
     },
     SortedSet: require('./lib/algos/set/SortedSet'),
+    Heap: require('./lib/algos/heap/Heap'),
 };
 
 export const policies = {
@@ -41,53 +70,36 @@ export const policies = {
     RequestContext,
     requestUtils,
     actionMaps,
+    actionNeedQuotaCheck,
+    actionWithDataDeletion,
+    actionNeedQuotaCheckCopy,
 };
 
 export const testing = {
     matrix: require('./lib/testing/matrix.js'),
 };
 
-export const versioning = {
-    VersioningConstants: require('./lib/versioning/constants.js').VersioningConstants,
-    Version: require('./lib/versioning/Version.js').Version,
-    VersionID: require('./lib/versioning/VersionID.js'),
-    WriteGatheringManager: require('./lib/versioning/WriteGatheringManager.js'),
-    WriteCache: require('./lib/versioning/WriteCache.js'),
-    VersioningRequestProcessor: require('./lib/versioning/VersioningRequestProcessor.js'),
-};
-
-export const s3routes = {
-    routes: require('./lib/s3routes/routes'),
-    routesUtils: require('./lib/s3routes/routesUtils'),
-};
-
 export const s3middleware = {
-    userMetadata: require('./lib/s3middleware/userMetadata'),
-    convertToXml: require('./lib/s3middleware/convertToXml'),
-    escapeForXml: require('./lib/s3middleware/escapeForXml'),
-    objectLegalHold: require('./lib/s3middleware/objectLegalHold'),
-    tagging: require('./lib/s3middleware/tagging'),
-    checkDateModifiedHeaders:
-        require('./lib/s3middleware/validateConditionalHeaders')
-            .checkDateModifiedHeaders,
-    validateConditionalHeaders:
-        require('./lib/s3middleware/validateConditionalHeaders')
-            .validateConditionalHeaders,
-    MD5Sum: require('./lib/s3middleware/MD5Sum'),
-    NullStream: require('./lib/s3middleware/nullStream'),
-    objectUtils: require('./lib/s3middleware/objectUtils'),
+    userMetadata,
+    convertToXml,
+    escapeForXml,
+    objectLegalHold,
+    tagging,
+    checkDateModifiedHeaders,
+    validateConditionalHeaders,
+    MD5Sum,
+    NullStream,
+    objectUtils,
     azureHelper: {
-        mpuUtils:
-            require('./lib/s3middleware/azureHelpers/mpuUtils'),
-        ResultsCollector:
-            require('./lib/s3middleware/azureHelpers/ResultsCollector'),
-        SubStreamInterface:
-            require('./lib/s3middleware/azureHelpers/SubStreamInterface'),
+        mpuUtils,
+        ResultsCollector,
+        SubStreamInterface,
     },
-    prepareStream: require('./lib/s3middleware/prepareStream'),
-    processMpuParts: require('./lib/s3middleware/processMpuParts'),
-    retention: require('./lib/s3middleware/objectRetention'),
-    lifecycleHelpers: require('./lib/s3middleware/lifecycleHelpers'),
+    prepareStream,
+    processMpuParts,
+    retention,
+    objectRestore,
+    lifecycleHelpers,
 };
 
 export const storage = {
@@ -154,35 +166,10 @@ export const storage = {
     utils: require('./lib/storage/utils'),
 };
 
-export const models = {
-    BackendInfo: require('./lib/models/BackendInfo'),
-    BucketInfo: require('./lib/models/BucketInfo'),
-    BucketAzureInfo: require('./lib/models/BucketAzureInfo'),
-    ObjectMD: require('./lib/models/ObjectMD'),
-    ObjectMDLocation: require('./lib/models/ObjectMDLocation'),
-    ObjectMDAzureInfo: require('./lib/models/ObjectMDAzureInfo'),
-    ARN: require('./lib/models/ARN'),
-    WebsiteConfiguration: require('./lib/models/WebsiteConfiguration'),
-    ReplicationConfiguration:
-        require('./lib/models/ReplicationConfiguration'),
-    LifecycleConfiguration:
-        require('./lib/models/LifecycleConfiguration'),
-    LifecycleRule: require('./lib/models/LifecycleRule'),
-    BucketPolicy: require('./lib/models/BucketPolicy'),
-    ObjectLockConfiguration:
-        require('./lib/models/ObjectLockConfiguration'),
-    NotificationConfiguration:
-        require('./lib/models/NotificationConfiguration'),
-};
-
 export const pensieve = {
     credentialUtils: require('./lib/executables/pensieveCreds/utils'),
 };
 
-export const stream = {
-    readJSONStreamObject: require('./lib/stream/readJSONStreamObject'),
-};
-
 export const patches = {
-    locationConstraints: require('./lib/patches/locationConstraints'),
+    locationConstraints,
 };
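
Taken together, these hunks replace objects hand-built from inline require() calls with top-level ES imports re-exported by name. A rough sketch of what a downstream consumer looks like after the change (the 'arsenal' package name is an assumption for illustration, not shown in this diff):

// Hypothetical consumer of the converted entry point:
import { s3middleware } from 'arsenal';

const xml = s3middleware.escapeForXml('a&b');   // shorthand property, now a direct reference
const digest = new s3middleware.MD5Sum();       // likewise re-exported as a class
// 'versioning', 's3routes', 'models', etc. are now namespace re-exports of
// their lib/ modules rather than hand-assembled require() objects.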


@@ -1,18 +1,28 @@
-'use strict'; // eslint-disable-line
-
-const cluster = require('cluster');
-
-class Clustering {
+import cluster, { Worker } from 'cluster';
+import * as werelogs from 'werelogs';
+
+export default class Clustering {
+    _size: number;
+    _shutdownTimeout: number;
+    _logger: werelogs.Logger;
+    _shutdown: boolean;
+    _workers: (Worker | undefined)[];
+    _workersTimeout: (NodeJS.Timeout | undefined)[];
+    _workersStatus: (number | string | undefined)[];
+    _status: number;
+    _exitCb?: (clustering: Clustering, exitSignal?: string) => void;
+    _index?: number;
+
     /**
      * Constructor
      *
-     * @param {number} size Cluster size
-     * @param {Logger} logger Logger object
-     * @param {number} [shutdownTimeout=5000] Change default shutdown timeout
+     * @param size Cluster size
+     * @param logger Logger object
+     * @param [shutdownTimeout=5000] Change default shutdown timeout
      * releasing ressources
-     * @return {Clustering} itself
+     * @return itself
      */
-    constructor(size, logger, shutdownTimeout) {
+    constructor(size: number, logger: werelogs.Logger, shutdownTimeout?: number) {
         this._size = size;
         if (size < 1) {
             throw new Error('Cluster size must be greater than or equal to 1');
@@ -32,7 +42,6 @@ class Clustering {
      * Method called after a stop() call
      *
      * @private
-     * @return {undefined}
      */
     _afterStop() {
         // Asuming all workers shutdown gracefully
@@ -41,10 +50,11 @@ class Clustering {
         for (let i = 0; i < size; ++i) {
             // If the process return an error code or killed by a signal,
             // set the status
-            if (typeof this._workersStatus[i] === 'number') {
-                this._status = this._workersStatus[i];
+            const status = this._workersStatus[i];
+            if (typeof status === 'number') {
+                this._status = status;
                 break;
-            } else if (typeof this._workersStatus[i] === 'string') {
+            } else if (typeof status === 'string') {
                 this._status = 1;
                 break;
             }
@@ -58,13 +68,17 @@ class Clustering {
     /**
      * Method called when a worker exited
      *
-     * @param {Cluster.worker} worker - Current worker
-     * @param {number} i - Worker index
-     * @param {number} code - Exit code
-     * @param {string} signal - Exit signal
-     * @return {undefined}
+     * @param worker - Current worker
+     * @param i - Worker index
+     * @param code - Exit code
+     * @param signal - Exit signal
      */
-    _workerExited(worker, i, code, signal) {
+    _workerExited(
+        worker: Worker,
+        i: number,
+        code: number,
+        signal: string,
+    ) {
         // If the worker:
         // - was killed by a signal
         // - return an error code
@@ -91,8 +105,9 @@ class Clustering {
             this._workersStatus[i] = undefined;
         }
         this._workers[i] = undefined;
-        if (this._workersTimeout[i]) {
-            clearTimeout(this._workersTimeout[i]);
+        const timeout = this._workersTimeout[i];
+        if (timeout) {
+            clearTimeout(timeout);
             this._workersTimeout[i] = undefined;
         }
         // If we don't trigger the stop method, the watchdog
@@ -110,29 +125,28 @@ class Clustering {
     /**
      * Method to start a worker
      *
-     * @param {number} i Index of the starting worker
-     * @return {undefined}
+     * @param i Index of the starting worker
      */
-    startWorker(i) {
-        if (!cluster.isMaster) {
+    startWorker(i: number) {
+        if (!cluster.isPrimary) {
             return;
         }
         // Fork a new worker
         this._workers[i] = cluster.fork();
         // Listen for message from the worker
-        this._workers[i].on('message', msg => {
+        this._workers[i]!.on('message', msg => {
             // If the worker is ready, send him his id
             if (msg === 'ready') {
-                this._workers[i].send({ msg: 'setup', id: i });
+                this._workers[i]!.send({ msg: 'setup', id: i });
             }
         });
-        this._workers[i].on('exit', (code, signal) =>
-            this._workerExited(this._workers[i], i, code, signal));
+        this._workers[i]!.on('exit', (code, signal) =>
+            this._workerExited(this._workers[i]!, i, code, signal));
         // Trigger when the worker was started
-        this._workers[i].on('online', () => {
+        this._workers[i]!.on('online', () => {
             this._logger.info('Worker started', {
                 id: i,
-                childPid: this._workers[i].process.pid,
+                childPid: this._workers[i]!.process.pid,
             });
         });
     }
@@ -140,10 +154,10 @@ class Clustering {
     /**
      * Method to put handler on cluster exit
      *
-     * @param {function} cb - Callback(Clustering, [exitSignal])
-     * @return {Clustering} Itself
+     * @param cb - Callback(Clustering, [exitSignal])
+     * @return Itself
      */
-    onExit(cb) {
+    onExit(cb: (clustering: Clustering, exitSignal?: string) => void) {
         this._exitCb = cb;
         return this;
     }
@@ -152,33 +166,33 @@ class Clustering {
     * Method to start the cluster (if master) or to start the callback
      * (worker)
      *
-     * @param {function} cb - Callback to run the worker
-     * @return {Clustering} itself
+     * @param cb - Callback to run the worker
+     * @return itself
      */
-    start(cb) {
+    start(cb: (clustering: Clustering) => void) {
         process.on('SIGINT', () => this.stop('SIGINT'));
         process.on('SIGHUP', () => this.stop('SIGHUP'));
         process.on('SIGQUIT', () => this.stop('SIGQUIT'));
         process.on('SIGTERM', () => this.stop('SIGTERM'));
         process.on('SIGPIPE', () => {});
-        process.on('exit', (code, signal) => {
+        process.on('exit', (code?: number, signal?: string) => {
             if (this._exitCb) {
                 this._status = code || 0;
                 return this._exitCb(this, signal);
             }
             return process.exit(code || 0);
         });
-        process.on('uncaughtException', err => {
+        process.on('uncaughtException', (err: Error) => {
             this._logger.fatal('caught error', {
                 error: err.message,
-                stack: err.stack.split('\n').map(str => str.trim()),
+                stack: err.stack?.split('\n')?.map(str => str.trim()),
             });
             process.exit(1);
         });
-        if (!cluster.isMaster) {
+        if (!cluster.isPrimary) {
             // Waiting for message from master to
             // know the id of the slave cluster
-            process.on('message', msg => {
+            process.on('message', (msg: any) => {
                 if (msg.msg === 'setup') {
                     this._index = msg.id;
                     cb(this);
@@ -186,7 +200,7 @@ class Clustering {
             });
             // Send message to the master, to let him know
             // the worker has started
-            process.send('ready');
+            process.send?.('ready');
         } else {
             for (let i = 0; i < this._size; ++i) {
                 this.startWorker(i);
@@ -198,7 +212,7 @@ class Clustering {
     /**
      * Method to get workers
      *
-     * @return {Cluster.Worker[]} Workers
+     * @return Workers
      */
     getWorkers() {
         return this._workers;
@@ -207,7 +221,7 @@ class Clustering {
     /**
      * Method to get the status of the cluster
      *
-     * @return {number} Status code
+     * @return Status code
      */
     getStatus() {
         return this._status;
@@ -216,7 +230,7 @@ class Clustering {
     /**
      * Method to return if it's the master process
      *
-     * @return {boolean} - True if master, false otherwise
+     * @return - True if master, false otherwise
     */
     isMaster() {
         return this._index === undefined;
@@ -225,7 +239,7 @@ class Clustering {
     /**
      * Method to get index of the worker
      *
-     * @return {number|undefined} Worker index, undefined if it's master
+     * @return Worker index, undefined if it's master
     */
     getIndex() {
         return this._index;
@@ -234,11 +248,10 @@ class Clustering {
     /**
      * Method to stop the cluster
      *
-     * @param {string} signal - Set internally when processes killed by signal
-     * @return {undefined}
+     * @param signal - Set internally when processes killed by signal
     */
-    stop(signal) {
-        if (!cluster.isMaster) {
+    stop(signal?: string) {
+        if (!cluster.isPrimary) {
             if (this._exitCb) {
                 return this._exitCb(this, signal);
             }
@@ -251,13 +264,17 @@ class Clustering {
             }
             this._workersTimeout[i] = setTimeout(() => {
                 // Kill the worker if the sigterm was ignored or take too long
-                process.kill(worker.process.pid, 'SIGKILL');
+                if (worker.process.pid) {
+                    process.kill(worker.process.pid, 'SIGKILL');
+                }
             }, this._shutdownTimeout);
             // Send sigterm to the process, allowing to release ressources
             // and save some states
-            return process.kill(worker.process.pid, 'SIGTERM');
+            if (worker.process.pid) {
+                return process.kill(worker.process.pid, 'SIGTERM');
+            } else {
+                return true;
+            }
         });
     }
 }
-
-module.exports = Clustering;
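
A minimal usage sketch of the class above, based only on the API shown in this diff (the relative import path and the worker body are assumptions; werelogs configuration is elided):

import * as werelogs from 'werelogs';
import Clustering from './lib/Clustering';

const logger = new werelogs.Logger('ClusteringExample');
const clustering = new Clustering(4, logger);   // 4 workers, default 5s shutdown timeout
clustering.onExit((c, signal) => {
    // runs on process exit, in the master and in signalled workers
    logger.info('exiting', { status: c.getStatus(), signal });
}).start(c => {
    // runs in each worker once it has received its index from the master
    logger.info('worker ready', { index: c.getIndex() });
});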

lib/algos/cache/GapCache.ts (new file, 363 lines)

@@ -0,0 +1,363 @@
import { OrderedSet } from '@js-sdsl/ordered-set';
import {
default as GapSet,
GapSetEntry,
} from './GapSet';
// the API is similar but is not strictly a superset of GapSetInterface
// so we don't extend from it
export interface GapCacheInterface {
exposureDelayMs: number;
maxGapWeight: number;
size: number;
setGap: (firstKey: string, lastKey: string, weight: number) => void;
removeOverlappingGaps: (overlappingKeys: string[]) => number;
lookupGap: (minKey: string, maxKey?: string) => Promise<GapSetEntry | null>;
[Symbol.iterator]: () => Iterator<GapSetEntry>;
toArray: () => GapSetEntry[];
};
class GapCacheUpdateSet {
newGaps: GapSet;
updatedKeys: OrderedSet<string>;
constructor(maxGapWeight: number) {
this.newGaps = new GapSet(maxGapWeight);
this.updatedKeys = new OrderedSet();
}
addUpdateBatch(updatedKeys: OrderedSet<string>): void {
this.updatedKeys.union(updatedKeys);
}
};
/**
* Cache of listing "gaps" i.e. ranges of keys that can be skipped
* over during listing (because they only contain delete markers as
* latest versions).
*
* Typically, a single GapCache instance would be attached to a raft session.
*
* The API usage is as follows:
*
* - Initialize a GapCache instance by calling start() (this starts an internal timer)
*
* - Insert a gap or update an existing one via setGap()
*
* - Lookup existing gaps via lookupGap()
*
* - Invalidate gaps that overlap a specific set of keys via removeOverlappingGaps()
*
* - Shut down a GapCache instance by calling stop() (this stops the internal timer)
*
* Gaps inserted via setGap() are not exposed immediately to lookupGap(), but only:
*
* - after a certain delay always larger than 'exposureDelayMs' and usually shorter
* than twice this value (but might be slightly longer in rare cases)
*
* - and only if they haven't been invalidated by a recent call to removeOverlappingGaps()
*
* This ensures atomicity between gap creation and invalidation from updates under
* the condition that a gap is created from first key to last key within the time defined
* by 'exposureDelayMs'.
*
* The implementation is based on two extra temporary "update sets" on top of the main
* exposed gap set, one called "staging" and the other "frozen", each containing a
* temporary updated gap set and a list of updated keys to invalidate gaps with (coming
* from calls to removeOverlappingGaps()). Every "exposureDelayMs" milliseconds, the frozen
* gaps are invalidated by all key updates coming from either of the "staging" or "frozen"
* update set, then merged into the exposed gaps set, after which the staging updates become
* the frozen updates and won't receive any new gap until the next cycle.
*/
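// Illustrative usage sketch (constructor values are examples, not defaults):
//
//     const cache = new GapCache(100, 10000, 1000);
//     cache.start();
//     // a listing pass found only delete markers between these two keys:
//     cache.setGap('foo/0001', 'foo/0999', 999);
//     // one to two exposure delays later, lookups can return the gap:
//     const gap = await cache.lookupGap('foo/0500');
//     // a write to any key in the range invalidates overlapping gaps:
//     cache.removeOverlappingGaps(['foo/0567']);
//     cache.stop();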
export default class GapCache implements GapCacheInterface {
_exposureDelayMs: number;
maxGaps: number;
_stagingUpdates: GapCacheUpdateSet;
_frozenUpdates: GapCacheUpdateSet;
_exposedGaps: GapSet;
_exposeFrozenInterval: NodeJS.Timeout | null;
/**
* @constructor
*
* @param {number} exposureDelayMs - minimum delay between
* insertion of a gap via setGap() and its exposure via
* lookupGap()
* @param {number} maxGaps - maximum number of cached gaps, after
* which no new gap can be added by setGap(). (Note: a future
* improvement could replace this by an eviction strategy)
* @param {number} maxGapWeight - maximum "weight" of individual
* cached gaps, which is also the granularity for
* invalidation. Individual gaps can be chained together,
* which lookupGap() transparently consolidates in the response
* into a single large gap.
*/
constructor(exposureDelayMs: number, maxGaps: number, maxGapWeight: number) {
this._exposureDelayMs = exposureDelayMs;
this.maxGaps = maxGaps;
this._stagingUpdates = new GapCacheUpdateSet(maxGapWeight);
this._frozenUpdates = new GapCacheUpdateSet(maxGapWeight);
this._exposedGaps = new GapSet(maxGapWeight);
this._exposeFrozenInterval = null;
}
/**
* Create a GapCache from an array of exposed gap entries (used in tests)
*
* @return {GapCache} - a new GapCache instance
*/
static createFromArray(
gaps: GapSetEntry[],
exposureDelayMs: number,
maxGaps: number,
maxGapWeight: number
): GapCache {
const gapCache = new GapCache(exposureDelayMs, maxGaps, maxGapWeight);
gapCache._exposedGaps = GapSet.createFromArray(gaps, maxGapWeight)
return gapCache;
}
/**
* Internal helper to remove gaps in the staging and frozen sets
* overlapping with previously updated keys, right before the
* frozen gaps get exposed.
*
* @return {undefined}
*/
_removeOverlappingGapsBeforeExpose(): void {
for (const { updatedKeys } of [this._stagingUpdates, this._frozenUpdates]) {
if (updatedKeys.size() === 0) {
continue;
}
for (const { newGaps } of [this._stagingUpdates, this._frozenUpdates]) {
if (newGaps.size === 0) {
continue;
}
newGaps.removeOverlappingGaps(updatedKeys);
}
}
}
/**
* This function is the core mechanism that updates the exposed gaps in the
* cache. It is called on a regular interval defined by 'exposureDelayMs'.
*
* It does the following in order:
*
* - remove gaps from the frozen set that overlap with any key present in a
* batch passed to removeOverlappingGaps() since the last two triggers of
* _exposeFrozen()
*
* - merge the remaining gaps from the frozen set to the exposed set, which
* makes them visible from calls to lookupGap()
*
* - rotate by freezing the currently staging updates and initiating a new
* staging updates set
*
* @return {undefined}
*/
_exposeFrozen(): void {
this._removeOverlappingGapsBeforeExpose();
for (const gap of this._frozenUpdates.newGaps) {
// Use a trivial strategy to keep the cache size within
// limits: refuse to add new gaps when the size is above
// the 'maxGaps' threshold. We solely rely on
// removeOverlappingGaps() to make space for new gaps.
if (this._exposedGaps.size < this.maxGaps) {
this._exposedGaps.setGap(gap.firstKey, gap.lastKey, gap.weight);
}
}
this._frozenUpdates = this._stagingUpdates;
this._stagingUpdates = new GapCacheUpdateSet(this.maxGapWeight);
}
/**
* Start the internal GapCache timer
*
* @return {undefined}
*/
start(): void {
if (this._exposeFrozenInterval) {
return;
}
this._exposeFrozenInterval = setInterval(
() => this._exposeFrozen(),
this._exposureDelayMs);
}
/**
* Stop the internal GapCache timer
*
* @return {undefined}
*/
stop(): void {
if (this._exposeFrozenInterval) {
clearInterval(this._exposeFrozenInterval);
this._exposeFrozenInterval = null;
}
}
/**
* Record a gap between two keys, associated with a weight to
* limit individual gap's spanning ranges in the cache, for a more
* granular invalidation.
*
* The function handles splitting and merging existing gaps to
* maintain an optimal weight of cache entries.
*
* NOTE 1: the caller must ensure that the full length of the gap
* between 'firstKey' and 'lastKey' has been built from a listing
* snapshot that is more recent than 'exposureDelayMs' milliseconds,
* in order to guarantee that the exposed gap will be fully
* covered (and potentially invalidated) from recent calls to
* removeOverlappingGaps().
*
* NOTE 2: a usual pattern when building a large gap from multiple
* calls to setGap() is to start the next gap from 'lastKey',
* which will be passed as 'firstKey' in the next call, so that
* gaps can be chained together and consolidated by lookupGap().
*
* @param {string} firstKey - first key of the gap
* @param {string} lastKey - last key of the gap, must be greater
* than or equal to 'firstKey'
* @param {number} weight - total weight between 'firstKey' and 'lastKey'
* @return {undefined}
*/
setGap(firstKey: string, lastKey: string, weight: number): void {
this._stagingUpdates.newGaps.setGap(firstKey, lastKey, weight);
}
/**
* Remove gaps that overlap with a given set of keys. Used to
* invalidate gaps when keys are inserted or deleted.
*
* @param {OrderedSet<string> | string[]} overlappingKeys - remove gaps that
* overlap with any of this set of keys
* @return {number} - how many gaps were removed from the exposed
* gaps only (overlapping gaps not yet exposed are also invalidated
* but are not accounted for in the returned value)
*/
removeOverlappingGaps(overlappingKeys: OrderedSet<string> | string[]): number {
let overlappingKeysSet;
if (Array.isArray(overlappingKeys)) {
overlappingKeysSet = new OrderedSet(overlappingKeys);
} else {
overlappingKeysSet = overlappingKeys;
}
this._stagingUpdates.addUpdateBatch(overlappingKeysSet);
return this._exposedGaps.removeOverlappingGaps(overlappingKeysSet);
}
/**
* Lookup the next exposed gap that overlaps with [minKey, maxKey]. Internally
* chained gaps are coalesced in the response into a single contiguous large gap.
*
* @param {string} minKey - minimum key overlapping with the returned gap
* @param {string} [maxKey] - maximum key overlapping with the returned gap
* @return {Promise<GapSetEntry | null>} - result of the lookup if a gap
* was found, null otherwise, as a Promise
*/
lookupGap(minKey: string, maxKey?: string): Promise<GapSetEntry | null> {
return this._exposedGaps.lookupGap(minKey, maxKey);
}
/**
* Get the maximum weight setting for individual gaps.
*
* @return {number} - maximum weight of individual gaps
*/
get maxGapWeight(): number {
return this._exposedGaps.maxWeight;
}
/**
* Set the maximum weight setting for individual gaps.
*
* @param {number} gapWeight - maximum weight of individual gaps
*/
set maxGapWeight(gapWeight: number) {
this._exposedGaps.maxWeight = gapWeight;
// also update transient gap sets
this._stagingUpdates.newGaps.maxWeight = gapWeight;
this._frozenUpdates.newGaps.maxWeight = gapWeight;
}
/**
* Get the exposure delay in milliseconds, which is the minimum
* time after which newly cached gaps will be exposed by
* lookupGap().
*
* @return {number} - exposure delay in milliseconds
*/
get exposureDelayMs(): number {
return this._exposureDelayMs;
}
/**
* Set the exposure delay in milliseconds, which is the minimum
* time after which newly cached gaps will be exposed by
* lookupGap(). Setting this attribute automatically updates the
* internal state to honor the new value.
*
* @param {number} exposureDelayMs - exposure delay in milliseconds
*/
set exposureDelayMs(exposureDelayMs: number) {
if (exposureDelayMs !== this._exposureDelayMs) {
this._exposureDelayMs = exposureDelayMs;
if (this._exposeFrozenInterval) {
// invalidate all pending gap updates, as the new interval may not be
// safe for them
this._stagingUpdates = new GapCacheUpdateSet(this.maxGapWeight);
this._frozenUpdates = new GapCacheUpdateSet(this.maxGapWeight);
// reinitialize the _exposeFrozenInterval timer with the updated delay
this.stop();
this.start();
}
}
}
/**
* Get the number of exposed gaps
*
* @return {number} number of exposed gaps
*/
get size(): number {
return this._exposedGaps.size;
}
/**
* Iterate over exposed gaps
*
* @return {Iterator<GapSetEntry>} an iterator over exposed gaps
*/
[Symbol.iterator](): Iterator<GapSetEntry> {
return this._exposedGaps[Symbol.iterator]();
}
/**
* Get an array of all exposed gaps
*
* @return {GapSetEntry[]} array of exposed gaps
*/
toArray(): GapSetEntry[] {
return this._exposedGaps.toArray();
}
/**
* Clear all exposed and staging gaps from the cache.
*
 * Note: invalidating updates from removeOverlappingGaps() are
 * retained, so that gaps inserted afterwards remain correct.
*
* @return {undefined}
*/
clear(): void {
this._stagingUpdates.newGaps = new GapSet(this.maxGapWeight);
this._frozenUpdates.newGaps = new GapSet(this.maxGapWeight);
this._exposedGaps = new GapSet(this.maxGapWeight);
}
}
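
For reference, a minimal usage sketch of the GapCache API above, in TypeScript.
The constructor arguments are an assumption for illustration (exposure delay in
milliseconds, maximum number of gaps, maximum gap weight): the constructor
itself is not part of this excerpt.

// assumed constructor signature, for illustration only
const gapCache = new GapCache(100, 10000, 1000);
gapCache.start(); // start the internal exposure timer

// record a gap of weight 500 observed during a listing
gapCache.setGap('key-0001', 'key-0500', 500);

// newly cached gaps only become visible to lookupGap() after the
// exposure delay has elapsed (at most twice the delay)
setTimeout(async () => {
    const gap = await gapCache.lookupGap('key-0002');
    // -> { firstKey: 'key-0001', lastKey: 'key-0500', weight: 500 }
    gapCache.stop();
}, 250);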

lib/algos/cache/GapSet.ts vendored Normal file (+366 lines)

@ -0,0 +1,366 @@
import assert from 'assert';
import { OrderedSet } from '@js-sdsl/ordered-set';
import errors from '../../errors';
export type GapSetEntry = {
firstKey: string,
lastKey: string,
weight: number,
};
export interface GapSetInterface {
maxWeight: number;
size: number;
setGap: (firstKey: string, lastKey: string, weight: number) => GapSetEntry;
removeOverlappingGaps: (overlappingKeys: string[]) => number;
lookupGap: (minKey: string, maxKey?: string) => Promise<GapSetEntry | null>;
[Symbol.iterator]: () => Iterator<GapSetEntry>;
toArray: () => GapSetEntry[];
};
/**
* Specialized data structure to support caching of listing "gaps",
* i.e. ranges of keys that can be skipped over during listing
* (because they only contain delete markers as latest versions)
*/
export default class GapSet implements GapSetInterface, Iterable<GapSetEntry> {
_gaps: OrderedSet<GapSetEntry>;
_maxWeight: number;
/**
* @constructor
* @param {number} maxWeight - weight threshold for each cached
* gap (unitless). Triggers splitting gaps when reached
*/
constructor(maxWeight: number) {
this._gaps = new OrderedSet(
[],
(left: GapSetEntry, right: GapSetEntry) => (
left.firstKey < right.firstKey ? -1 :
left.firstKey > right.firstKey ? 1 : 0
)
);
this._maxWeight = maxWeight;
}
/**
* Create a GapSet from an array of gap entries (used in tests)
*/
static createFromArray(gaps: GapSetEntry[], maxWeight: number): GapSet {
const gapSet = new GapSet(maxWeight);
for (const gap of gaps) {
gapSet._gaps.insert(gap);
}
return gapSet;
}
/**
* Record a gap between two keys, associated with a weight to limit
* individual gap sizes in the cache.
*
* The function handles splitting and merging existing gaps to
* maintain an optimal weight of cache entries.
*
* @param {string} firstKey - first key of the gap
 * @param {string} lastKey - last key of the gap, must be greater
 * than or equal to 'firstKey'
* @param {number} weight - total weight between 'firstKey' and 'lastKey'
* @return {GapSetEntry} - existing or new gap entry
*/
setGap(firstKey: string, lastKey: string, weight: number): GapSetEntry {
assert(lastKey >= firstKey);
// Step 1/4: Find the closest left-overlapping gap, if any, and either
// reuse it or chain it with a new gap depending on the weights
// (otherwise just create a new gap).
const curGapIt = this._gaps.reverseLowerBound(<GapSetEntry>{ firstKey });
let curGap;
if (curGapIt.isAccessible()) {
curGap = curGapIt.pointer;
if (curGap.lastKey >= lastKey) {
// return fully overlapping gap already cached
return curGap;
}
}
let remainingWeight = weight;
if (!curGap // no previous gap
|| curGap.lastKey < firstKey // previous gap not overlapping
|| (curGap.lastKey === firstKey // previous gap overlapping by one key...
&& curGap.weight + weight > this._maxWeight) // ...but we can't extend it
) {
// create a new gap indexed by 'firstKey'
curGap = { firstKey, lastKey: firstKey, weight: 0 };
this._gaps.insert(curGap);
} else if (curGap.lastKey > firstKey && weight > this._maxWeight) {
// previous gap is either fully or partially contained in the new gap
// and cannot be extended: subtract its weight from the total (heuristic
// in case the previous gap doesn't start at 'firstKey', which is the
// uncommon case)
remainingWeight -= curGap.weight;
// there may be an existing chained gap starting with the previous gap's
// 'lastKey': use it if it exists
const chainedGapIt = this._gaps.find(<GapSetEntry>{ firstKey: curGap.lastKey });
if (chainedGapIt.isAccessible()) {
curGap = chainedGapIt.pointer;
} else {
// no existing chained gap: chain a new gap to the previous gap
curGap = {
firstKey: curGap.lastKey,
lastKey: curGap.lastKey,
weight: 0,
};
this._gaps.insert(curGap);
}
}
// Step 2/4: Cleanup existing gaps fully included in firstKey -> lastKey, and
// aggregate their weights in curGap to define the minimum weight up to the
// last merged gap.
let nextGap;
while (true) {
const nextGapIt = this._gaps.upperBound(<GapSetEntry>{ firstKey: curGap.firstKey });
nextGap = nextGapIt.isAccessible() && nextGapIt.pointer;
// stop the cleanup when no more gap or if the next gap is not fully
// included in curGap
if (!nextGap || nextGap.lastKey > lastKey) {
break;
}
this._gaps.eraseElementByIterator(nextGapIt);
curGap.lastKey = nextGap.lastKey;
curGap.weight += nextGap.weight;
}
// Step 3/4: Extend curGap to lastKey, adjusting the weight.
// At this point, curGap weight is the minimum weight of the finished gap, save it
// for step 4.
let minMergedWeight = curGap.weight;
if (curGap.lastKey === firstKey && firstKey !== lastKey) {
// extend the existing gap by the full amount 'firstKey -> lastKey'
curGap.lastKey = lastKey;
curGap.weight += remainingWeight;
} else if (curGap.lastKey <= lastKey) {
curGap.lastKey = lastKey;
curGap.weight = remainingWeight;
}
// Step 4/4: Find the closest right-overlapping gap, and if it exists, either merge
// it or chain it with curGap depending on the weights.
if (nextGap && nextGap.firstKey <= lastKey) {
// nextGap overlaps with the new gap: check if we can merge it
minMergedWeight += nextGap.weight;
let mergedWeight;
if (lastKey === nextGap.firstKey) {
// nextGap is chained with curGap: add the full weight of nextGap
mergedWeight = curGap.weight + nextGap.weight;
} else {
// strict overlap: don't add nextGap's weight unless
// it's larger than the sum of merged ranges (as it is
// then included in `minMergedWeight`)
mergedWeight = Math.max(curGap.weight, minMergedWeight);
}
if (mergedWeight <= this._maxWeight) {
// merge nextGap into curGap
curGap.lastKey = nextGap.lastKey;
curGap.weight = mergedWeight;
this._gaps.eraseElementByKey(nextGap);
} else {
// adjust the last key to chain with nextGap and subtract the next
// gap's weight from curGap (heuristic)
curGap.lastKey = nextGap.firstKey;
curGap.weight = Math.max(mergedWeight - nextGap.weight, 0);
curGap = nextGap;
}
}
// return a copy of curGap
return Object.assign({}, curGap);
}
/**
* Remove gaps that overlap with one or more keys in a given array or
* OrderedSet. Used to invalidate gaps when keys are inserted or deleted.
*
* @param {OrderedSet<string> | string[]} overlappingKeys - remove gaps that overlap
* with any of this set of keys
* @return {number} - how many gaps were removed
*/
removeOverlappingGaps(overlappingKeys: OrderedSet<string> | string[]): number {
// To optimize processing with a large number of keys and/or gaps, this function:
//
// 1. converts the overlappingKeys array to an OrderedSet (if not already an OrderedSet)
// 2. queries both the gaps set and the overlapping keys set in a loop, which allows:
// - skipping ranges of overlapping keys at once when there is no new overlapping gap
// - skipping ranges of gaps at once when there is no overlapping key
//
// This way, it is efficient when the number of non-overlapping gaps is large
// (which is the most common case in practice).
let overlappingKeysSet;
if (Array.isArray(overlappingKeys)) {
overlappingKeysSet = new OrderedSet(overlappingKeys);
} else {
overlappingKeysSet = overlappingKeys;
}
const firstKeyIt = overlappingKeysSet.begin();
let currentKey = firstKeyIt.isAccessible() && firstKeyIt.pointer;
let nRemoved = 0;
while (currentKey) {
const closestGapIt = this._gaps.reverseUpperBound(<GapSetEntry>{ firstKey: currentKey });
if (closestGapIt.isAccessible()) {
const closestGap = closestGapIt.pointer;
if (currentKey <= closestGap.lastKey) {
// currentKey overlaps closestGap: remove the gap
this._gaps.eraseElementByIterator(closestGapIt);
nRemoved += 1;
}
}
const nextGapIt = this._gaps.lowerBound(<GapSetEntry>{ firstKey: currentKey });
if (!nextGapIt.isAccessible()) {
// no more gap: we're done
return nRemoved;
}
const nextGap = nextGapIt.pointer;
// advance to the last key potentially overlapping with nextGap
let currentKeyIt = overlappingKeysSet.reverseLowerBound(nextGap.lastKey);
if (currentKeyIt.isAccessible()) {
currentKey = currentKeyIt.pointer;
if (currentKey >= nextGap.firstKey) {
// currentKey overlaps nextGap: remove the gap
this._gaps.eraseElementByIterator(nextGapIt);
nRemoved += 1;
}
}
// advance to the first key potentially overlapping with another gap
currentKeyIt = overlappingKeysSet.lowerBound(nextGap.lastKey);
currentKey = currentKeyIt.isAccessible() && currentKeyIt.pointer;
}
return nRemoved;
}
/**
* Internal helper to coalesce multiple chained gaps into a single gap.
*
* It is only used to construct lookupGap() return values and
* doesn't modify the GapSet.
*
* NOTE: The function may take a noticeable amount of time and CPU
* to execute if a large number of chained gaps have to be
* coalesced, but it should never take more than a few seconds. In
* most cases it should take less than a millisecond. It regularly
 * yields to the Node.js event loop to avoid blocking it during a
* long execution.
*
* @param {GapSetEntry} firstGap - first gap of the chain to coalesce with
* the next ones in the chain
* @return {Promise<GapSetEntry>} - a new coalesced entry, as a Promise
*/
_coalesceGapChain(firstGap: GapSetEntry): Promise<GapSetEntry> {
return new Promise(resolve => {
const coalescedGap: GapSetEntry = Object.assign({}, firstGap);
const coalesceGapChainIteration = () => {
// efficiency trade-off: 100 iterations of log(N) complexity lookups should
// not block the event loop for too long
for (let opCounter = 0; opCounter < 100; ++opCounter) {
const chainedGapIt = this._gaps.find(
<GapSetEntry>{ firstKey: coalescedGap.lastKey });
if (!chainedGapIt.isAccessible()) {
// chain is complete
return resolve(coalescedGap);
}
const chainedGap = chainedGapIt.pointer;
if (chainedGap.firstKey === chainedGap.lastKey) {
// found a single-key gap: chain is complete
return resolve(coalescedGap);
}
coalescedGap.lastKey = chainedGap.lastKey;
coalescedGap.weight += chainedGap.weight;
}
// yield to the event loop before continuing the process
// of coalescing the gap chain
return process.nextTick(coalesceGapChainIteration);
};
coalesceGapChainIteration();
});
}
/**
 * Look up the next gap that overlaps with [minKey, maxKey]. Internally chained
* gaps are coalesced in the response into a single contiguous large gap.
*
* @param {string} minKey - minimum key overlapping with the returned gap
* @param {string} [maxKey] - maximum key overlapping with the returned gap
* @return {Promise<GapSetEntry | null>} - result of the lookup if a gap
* was found, null otherwise, as a Promise
*/
async lookupGap(minKey: string, maxKey?: string): Promise<GapSetEntry | null> {
let firstGap: GapSetEntry | null = null;
const minGapIt = this._gaps.reverseLowerBound(<GapSetEntry>{ firstKey: minKey });
const minGap = minGapIt.isAccessible() && minGapIt.pointer;
if (minGap && minGap.lastKey >= minKey) {
firstGap = minGap;
} else {
const maxGapIt = this._gaps.upperBound(<GapSetEntry>{ firstKey: minKey });
const maxGap = maxGapIt.isAccessible() && maxGapIt.pointer;
if (maxGap && (maxKey === undefined || maxGap.firstKey <= maxKey)) {
firstGap = maxGap;
}
}
if (!firstGap) {
return null;
}
return this._coalesceGapChain(firstGap);
}
/**
* Get the maximum weight setting for individual gaps.
*
* @return {number} - maximum weight of individual gaps
*/
get maxWeight(): number {
return this._maxWeight;
}
/**
* Set the maximum weight setting for individual gaps.
*
* @param {number} gapWeight - maximum weight of individual gaps
*/
set maxWeight(gapWeight: number) {
this._maxWeight = gapWeight;
}
/**
* Get the number of gaps stored in this set.
*
* @return {number} - number of gaps stored in this set
*/
get size(): number {
return this._gaps.size();
}
/**
* Iterate over each gap of the set, ordered by first key
*
* @return {Iterator<GapSetEntry>} - an iterator over all gaps
* Example:
* for (const gap of myGapSet) { ... }
*/
[Symbol.iterator](): Iterator<GapSetEntry> {
return this._gaps[Symbol.iterator]();
}
/**
* Return an array containing all gaps, ordered by first key
*
* NOTE: there is a toArray() method in the OrderedSet implementation
* but it does not scale well and overflows the stack quickly. This is
* why we provide an implementation based on an iterator.
*
* @return {GapSetEntry[]} - an array containing all gaps
*/
toArray(): GapSetEntry[] {
return [...this];
}
}
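
To make the merge/split/chain semantics of setGap() above concrete, a small
worked sketch (keys and weights are arbitrary):

const gapSet = new GapSet(10); // maxWeight = 10

// two adjacent gaps whose total weight fits within maxWeight are merged
gapSet.setGap('key-01', 'key-05', 5);
gapSet.setGap('key-05', 'key-09', 5);
// -> one entry { firstKey: 'key-01', lastKey: 'key-09', weight: 10 }

// a further extension would exceed maxWeight, so it is chained instead
gapSet.setGap('key-09', 'key-12', 4);
console.log(gapSet.toArray());
// -> [{ firstKey: 'key-01', lastKey: 'key-09', weight: 10 },
//     { firstKey: 'key-09', lastKey: 'key-12', weight: 4 }]

// lookupGap() coalesces the chain into a single logical gap
gapSet.lookupGap('key-02').then(gap => {
    // -> { firstKey: 'key-01', lastKey: 'key-12', weight: 14 }
});

// inserting or deleting a key inside a gap invalidates that gap
gapSet.removeOverlappingGaps(['key-03']); // returns 1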

lib/algos/heap/Heap.ts Normal file (+124 lines)

@ -0,0 +1,124 @@
export enum HeapOrder {
Min = -1,
Max = 1,
}
export enum CompareResult {
LT = -1,
EQ = 0,
GT = 1,
}
export type CompareFunction = (x: any, y: any) => CompareResult;
export class Heap {
size: number;
_maxSize: number;
_order: HeapOrder;
_heap: any[];
_cmpFn: CompareFunction;
constructor(size: number, order: HeapOrder, cmpFn: CompareFunction) {
this.size = 0;
this._maxSize = size;
this._order = order;
this._cmpFn = cmpFn;
this._heap = new Array<any>(this._maxSize);
}
_parent(i: number): number {
return Math.floor((i - 1) / 2);
}
_left(i: number): number {
return Math.floor((2 * i) + 1);
}
_right(i: number): number {
return Math.floor((2 * i) + 2);
}
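    // a child and its parent should swap when their comparison result
    // matches the heap order (LT for a min-heap, GT for a max-heap)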
_shouldSwap(childIdx: number, parentIdx: number): boolean {
return this._cmpFn(this._heap[childIdx], this._heap[parentIdx]) as number === this._order as number;
}
_swap(i: number, j: number) {
const tmp = this._heap[i];
this._heap[i] = this._heap[j];
this._heap[j] = tmp;
}
_heapify(i: number) {
const l = this._left(i);
const r = this._right(i);
let c = i;
if (l < this.size && this._shouldSwap(l, c)) {
c = l;
}
if (r < this.size && this._shouldSwap(r, c)) {
c = r;
}
if (c !== i) {
this._swap(c, i);
this._heapify(c);
}
}
add(item: any): any {
if (this.size >= this._maxSize) {
return new Error('Max heap size reached');
}
++this.size;
let c = this.size - 1;
this._heap[c] = item;
while (c > 0) {
if (!this._shouldSwap(c, this._parent(c))) {
return null;
}
this._swap(c, this._parent(c));
c = this._parent(c);
}
return null;
};
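    // remove and return the root element (the minimum for a MinHeap, the
    // maximum for a MaxHeap), or null if the heap is empty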
remove(): any {
if (this.size <= 0) {
return null;
}
const ret = this._heap[0];
this._heap[0] = this._heap[this.size - 1];
this._heapify(0);
--this.size;
return ret;
};
peek(): any {
if (this.size <= 0) {
return null;
}
return this._heap[0];
};
}
export class MinHeap extends Heap {
constructor(size: number, cmpFn: CompareFunction) {
super(size, HeapOrder.Min, cmpFn);
}
}
export class MaxHeap extends Heap {
constructor(size: number, cmpFn: CompareFunction) {
super(size, HeapOrder.Max, cmpFn);
}
}
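
For reference, a small usage sketch of the heap classes above:

// the compare function must return one of the CompareResult values
const cmp: CompareFunction = (x: number, y: number) =>
    x < y ? CompareResult.LT : x > y ? CompareResult.GT : CompareResult.EQ;

const heap = new MinHeap(16, cmp); // fixed capacity of 16 entries
[5, 1, 3].forEach(n => heap.add(n));
heap.peek();   // -> 1 (smallest element, not removed)
heap.remove(); // -> 1; subsequent calls return 3, then 5, then null
// note: add() returns (rather than throws) an Error once capacity is reached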

lib/algos/list/Extension.js (modified)

@ -1,6 +1,6 @@
 'use strict'; // eslint-disable-line strict
-const { FILTER_SKIP, SKIP_NONE } = require('./tools');
+const { FILTER_ACCEPT, SKIP_NONE } = require('./tools');
 // Use a heuristic to amortize the cost of JSON
 // serialization/deserialization only on largest metadata where the
@ -92,21 +92,26 @@ class Extension {
      * @param {object} entry - a listing entry from metadata
      *                         expected format: { key, value }
      * @return {number} - result of filtering the entry:
-     *                    > 0: entry is accepted and included in the result
-     *                    = 0: entry is accepted but not included (skipping)
-     *                    < 0: entry is not accepted, listing should finish
+     *                    FILTER_ACCEPT: entry is accepted and may or not be included
+     *                    in the result
+     *                    FILTER_SKIP: listing may skip directly (with "gte" param) to
+     *                    the key returned by the skipping() method
+     *                    FILTER_END: the results are complete, listing can be stopped
      */
-    filter(entry) {
-        return entry ? FILTER_SKIP : FILTER_SKIP;
+    filter(/* entry: { key, value } */) {
+        return FILTER_ACCEPT;
     }

     /**
-     * Provides the insight into why filter is skipping an entry. This could be
-     * because it is skipping a range of delimited keys or a range of specific
-     * version when doing master version listing.
+     * Provides the next key at which the listing task is allowed to skip to.
+     * This could allow to skip over:
+     * - a key prefix ending with the delimiter
+     * - all remaining versions of an object when doing a current
+     *   versions listing in v0 format
+     * - a cached "gap" of deleted objects when doing a current
+     *   versions listing in v0 format
      *
-     * @return {string} - the insight: a common prefix or a master key,
-     *                    or SKIP_NONE if there is no insight
+     * @return {string} - the next key at which the listing task is allowed to skip to
      */
     skipping() {
         return SKIP_NONE;

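To make the new filter()/skipping() contract concrete, a hypothetical minimal
extension (not part of this change) that accepts keys under a fixed prefix and
ends the listing afterwards; it assumes, like the listing classes below, that
the base class keeps the constructor's parameters as this.parameters:

const { FILTER_ACCEPT, FILTER_END, SKIP_NONE } = require('./tools');

class PrefixOnly extends Extension {
    filter(entry) {
        // keys arrive in sorted order: once past the prefix, we are done
        return entry.key.startsWith(this.parameters.prefix)
            ? FILTER_ACCEPT : FILTER_END;
    }
    skipping() {
        return SKIP_NONE; // never asks the backend to seek forward
    }
}
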
lib/algos/list/MPU.js (modified)

@ -1,7 +1,7 @@
 'use strict'; // eslint-disable-line strict
 const { inc, checkLimit, listingParamsMasterKeysV0ToV1,
-    FILTER_END, FILTER_ACCEPT } = require('./tools');
+    FILTER_END, FILTER_ACCEPT, SKIP_NONE } = require('./tools');
 const DEFAULT_MAX_KEYS = 1000;
 const VSConst = require('../../versioning/constants').VersioningConstants;
 const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
@ -163,7 +163,7 @@ class MultipartUploads {
     }

     skipping() {
-        return '';
+        return SKIP_NONE;
     }

     /**

lib/algos/list/basic.js (modified)

@ -2,7 +2,7 @@
 const Extension = require('./Extension').default;
-const { checkLimit, FILTER_END, FILTER_ACCEPT, FILTER_SKIP } = require('./tools');
+const { checkLimit, FILTER_END, FILTER_ACCEPT } = require('./tools');
 const DEFAULT_MAX_KEYS = 10000;

 /**
@ -91,7 +91,7 @@ class List extends Extension {
      * < 0 : listing done
      */
     filter(elem) {
-        // Check first in case of maxkeys <= 0
+        // Check if the result array is full
         if (this.keys >= this.maxKeys) {
             return FILTER_END;
         }
@ -99,7 +99,7 @@ class List extends Extension {
             this.filterKeyStartsWith !== undefined) &&
             typeof elem === 'object' &&
             !this.customFilter(elem.value)) {
-            return FILTER_SKIP;
+            return FILTER_ACCEPT;
         }
         if (typeof elem === 'object') {
             this.res.push({

lib/algos/list/delimiter.js (deleted, 274 lines)

@ -1,274 +0,0 @@
'use strict'; // eslint-disable-line strict
const Extension = require('./Extension').default;
const { inc, listingParamsMasterKeysV0ToV1,
FILTER_END, FILTER_ACCEPT, FILTER_SKIP } = require('./tools');
const VSConst = require('../../versioning/constants').VersioningConstants;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
/**
* Find the common prefix in the path
*
* @param {String} key - path of the object
* @param {String} delimiter - separator
* @param {Number} delimiterIndex - 'folder' index in the path
* @return {String} - CommonPrefix
*/
function getCommonPrefix(key, delimiter, delimiterIndex) {
return key.substring(0, delimiterIndex + delimiter.length);
}
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
class Delimiter extends Extension {
/**
* Create a new Delimiter instance
* @constructor
* @param {Object} parameters - listing parameters
* @param {String} [parameters.delimiter] - delimiter per amazon
* format
* @param {String} [parameters.prefix] - prefix per amazon
* format
* @param {String} [parameters.marker] - marker per amazon
* format
* @param {Number} [parameters.maxKeys] - number of keys to list
 * @param {Boolean} [parameters.v2] - indicates whether the listing uses
 * the v2 format
* @param {String} [parameters.startAfter] - marker per amazon
* format
* @param {String} [parameters.continuationToken] - obfuscated amazon
* token
 * @param {Boolean} [parameters.alphabeticalOrder] - whether the result
 * should be alphabetically ordered
* @param {RequestLogger} logger - The logger of the
* request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger);
// original listing parameters
this.delimiter = parameters.delimiter;
this.prefix = parameters.prefix;
this.marker = parameters.marker;
this.maxKeys = parameters.maxKeys || 1000;
this.startAfter = parameters.startAfter;
this.continuationToken = parameters.continuationToken;
this.alphabeticalOrder =
typeof parameters.alphabeticalOrder !== 'undefined' ?
parameters.alphabeticalOrder : true;
this.vFormat = vFormat || BucketVersioningKeyFormat.v0;
// results
this.CommonPrefixes = [];
this.Contents = [];
this.IsTruncated = false;
this.NextMarker = parameters.marker;
this.NextContinuationToken =
parameters.continuationToken || parameters.startAfter;
this.startMarker = parameters.v2 ? 'startAfter' : 'marker';
this.continueMarker = parameters.v2 ? 'continuationToken' : 'marker';
this.nextContinueMarker = parameters.v2 ?
'NextContinuationToken' : 'NextMarker';
if (this.delimiter !== undefined &&
this[this.nextContinueMarker] !== undefined &&
this[this.nextContinueMarker].startsWith(this.prefix || '')) {
const nextDelimiterIndex =
this[this.nextContinueMarker].indexOf(this.delimiter,
this.prefix ? this.prefix.length : 0);
this[this.nextContinueMarker] =
this[this.nextContinueMarker].slice(0, nextDelimiterIndex +
this.delimiter.length);
}
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
getObjectKey: this.getObjectKeyV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
getObjectKey: this.getObjectKeyV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
genMDParamsV0() {
const params = {};
if (this.prefix) {
params.gte = this.prefix;
params.lt = inc(this.prefix);
}
const startVal = this[this.continueMarker] || this[this.startMarker];
if (startVal) {
if (params.gte && params.gte > startVal) {
return params;
}
delete params.gte;
params.gt = startVal;
}
return params;
}
genMDParamsV1() {
const params = this.genMDParamsV0();
return listingParamsMasterKeysV0ToV1(params);
}
/**
* check if the max keys count has been reached and set the
* final state of the result if it is the case
* @return {Boolean} - indicates if the iteration has to stop
*/
_reachedMaxKeys() {
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 -> IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return true;
}
return false;
}
/**
* Add a (key, value) tuple to the listing
* Set the NextMarker to the current key
* Increment the keys counter
* @param {String} key - The key to add
* @param {String} value - The value of the key
* @return {number} - indicates if iteration should continue
*/
addContents(key, value) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.Contents.push({ key, value: this.trimMetadata(value) });
this[this.nextContinueMarker] = key;
++this.keys;
return FILTER_ACCEPT;
}
getObjectKeyV0(obj) {
return obj.key;
}
getObjectKeyV1(obj) {
return obj.key.slice(DbPrefixes.Master.length);
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj) {
const key = this.getObjectKey(obj);
const value = obj.value;
if ((this.prefix && !key.startsWith(this.prefix))
|| (this.alphabeticalOrder
&& typeof this[this.nextContinueMarker] === 'string'
&& key <= this[this.nextContinueMarker])) {
return FILTER_SKIP;
}
if (this.delimiter) {
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex === -1) {
return this.addContents(key, value);
}
return this.addCommonPrefix(key, delimiterIndex);
}
return this.addContents(key, value);
}
/**
* Add a Common Prefix in the list
* @param {String} key - object name
* @param {Number} index - after prefix starting point
* @return {Boolean} - indicates if iteration should continue
*/
addCommonPrefix(key, index) {
const commonPrefix = getCommonPrefix(key, this.delimiter, index);
if (this.CommonPrefixes.indexOf(commonPrefix) === -1
&& this[this.nextContinueMarker] !== commonPrefix) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.CommonPrefixes.push(commonPrefix);
this[this.nextContinueMarker] = commonPrefix;
++this.keys;
return FILTER_ACCEPT;
}
return FILTER_SKIP;
}
/**
* If repd happens to want to skip listing on a bucket in v0
* versioning key format, here is an idea.
*
* @return {string} - the present range (NextMarker) if repd believes
* that it's enough and should move on
*/
skippingV0() {
return this[this.nextContinueMarker];
}
/**
* If repd happens to want to skip listing on a bucket in v1
* versioning key format, here is an idea.
*
* @return {string} - the present range (NextMarker) if repd believes
* that it's enough and should move on
*/
skippingV1() {
return DbPrefixes.Master + this[this.nextContinueMarker];
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done, doesn't show a NextMarker field if the output
* isn't truncated
* @return {Object} - following amazon format
*/
result() {
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
const result = {
CommonPrefixes: this.CommonPrefixes,
Contents: this.Contents,
IsTruncated: this.IsTruncated,
Delimiter: this.delimiter,
};
if (this.parameters.v2) {
result.NextContinuationToken = this.IsTruncated
? this.NextContinuationToken : undefined;
} else {
result.NextMarker = (this.IsTruncated && this.delimiter)
? this.NextMarker : undefined;
}
return result;
}
}
module.exports = { Delimiter };

lib/algos/list/delimiter.ts Normal file (+356 lines)

@ -0,0 +1,356 @@
'use strict'; // eslint-disable-line strict
const Extension = require('./Extension').default;
const { inc, listingParamsMasterKeysV0ToV1,
FILTER_END, FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } = require('./tools');
const VSConst = require('../../versioning/constants').VersioningConstants;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
export interface FilterState {
id: number,
};
export interface FilterReturnValue {
FILTER_ACCEPT,
FILTER_SKIP,
FILTER_END,
};
export const enum DelimiterFilterStateId {
NotSkipping = 1,
SkippingPrefix = 2,
};
export interface DelimiterFilterState_NotSkipping extends FilterState {
id: DelimiterFilterStateId.NotSkipping,
};
export interface DelimiterFilterState_SkippingPrefix extends FilterState {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: string;
};
type KeyHandler = (key: string, value: string) => FilterReturnValue;
export type ResultObject = {
CommonPrefixes: string[];
Contents: {
key: string;
value: string;
}[];
IsTruncated: boolean;
Delimiter ?: string;
NextMarker ?: string;
NextContinuationToken ?: string;
};
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
export class Delimiter extends Extension {
state: FilterState;
keyHandlers: { [id: number]: KeyHandler };
/**
* Create a new Delimiter instance
* @constructor
* @param {Object} parameters - listing parameters
* @param {String} [parameters.delimiter] - delimiter per amazon
* format
* @param {String} [parameters.prefix] - prefix per amazon
* format
* @param {String} [parameters.marker] - marker per amazon
* format
* @param {Number} [parameters.maxKeys] - number of keys to list
 * @param {Boolean} [parameters.v2] - indicates whether the listing uses
 * the v2 format
* @param {String} [parameters.startAfter] - marker per amazon
* format
* @param {String} [parameters.continuationToken] - obfuscated amazon
* token
* @param {RequestLogger} logger - The logger of the
* request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger);
// original listing parameters
this.delimiter = parameters.delimiter;
this.prefix = parameters.prefix;
this.maxKeys = parameters.maxKeys || 1000;
if (parameters.v2) {
this.marker = parameters.continuationToken || parameters.startAfter;
} else {
this.marker = parameters.marker;
}
this.nextMarker = this.marker;
this.vFormat = vFormat || BucketVersioningKeyFormat.v0;
// results
this.CommonPrefixes = [];
this.Contents = [];
this.IsTruncated = false;
this.keyHandlers = {};
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
getObjectKey: this.getObjectKeyV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
getObjectKey: this.getObjectKeyV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
// if there is a delimiter, we may skip ranges by prefix,
// hence using the NotSkippingPrefix flavor that checks the
// subprefix up to the delimiter for the NotSkipping state
if (this.delimiter) {
this.setKeyHandler(
DelimiterFilterStateId.NotSkipping,
this.keyHandler_NotSkippingPrefix.bind(this));
} else {
// listing without a delimiter never has to skip over any
// prefix -> use NeverSkipping flavor for the NotSkipping
// state
this.setKeyHandler(
DelimiterFilterStateId.NotSkipping,
this.keyHandler_NeverSkipping.bind(this));
}
this.setKeyHandler(
DelimiterFilterStateId.SkippingPrefix,
this.keyHandler_SkippingPrefix.bind(this));
this.state = <DelimiterFilterState_NotSkipping> {
id: DelimiterFilterStateId.NotSkipping,
};
}
genMDParamsV0() {
const params: { gt ?: string, gte ?: string, lt ?: string } = {};
if (this.prefix) {
params.gte = this.prefix;
params.lt = inc(this.prefix);
}
if (this.marker && this.delimiter) {
const commonPrefix = this.getCommonPrefix(this.marker);
if (commonPrefix) {
const afterPrefix = inc(commonPrefix);
if (!params.gte || afterPrefix > params.gte) {
params.gte = afterPrefix;
}
}
}
if (this.marker && (!params.gte || this.marker >= params.gte)) {
delete params.gte;
params.gt = this.marker;
}
return params;
}
genMDParamsV1() {
const params = this.genMDParamsV0();
return listingParamsMasterKeysV0ToV1(params);
}
/**
* check if the max keys count has been reached and set the
* final state of the result if it is the case
* @return {Boolean} - indicates if the iteration has to stop
*/
_reachedMaxKeys(): boolean {
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 -> IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return true;
}
return false;
}
/**
* Add a (key, value) tuple to the listing
 * Set the nextMarker to the current key
 * Increment the keys counter
 * @param {String} key - The key to add
 * @param {String} value - The value of the key
 * @return {undefined}
*/
addContents(key: string, value: string): void {
this.Contents.push({ key, value: this.trimMetadata(value) });
++this.keys;
this.nextMarker = key;
}
getCommonPrefix(key: string): string | undefined {
if (!this.delimiter) {
return undefined;
}
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex === -1) {
return undefined;
}
return key.substring(0, delimiterIndex + this.delimiter.length);
}
/**
* Add a Common Prefix in the list
* @param {String} commonPrefix - common prefix to add
* @param {String} key - full key starting with commonPrefix
 * @return {undefined}
*/
addCommonPrefix(commonPrefix: string, key: string): void {
// add the new prefix to the list
this.CommonPrefixes.push(commonPrefix);
++this.keys;
this.nextMarker = commonPrefix;
}
addCommonPrefixOrContents(key: string, value: string): string | undefined {
// add the subprefix to the common prefixes if the key has the delimiter
const commonPrefix = this.getCommonPrefix(key);
if (commonPrefix) {
this.addCommonPrefix(commonPrefix, key);
return commonPrefix;
}
this.addContents(key, value);
return undefined;
}
getObjectKeyV0(obj: { key: string }): string {
return obj.key;
}
getObjectKeyV1(obj: { key: string }): string {
return obj.key.slice(DbPrefixes.Master.length);
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj: { key: string, value: string }): FilterReturnValue {
const key = this.getObjectKey(obj);
const value = obj.value;
return this.handleKey(key, value);
}
setState(state: FilterState): void {
this.state = state;
}
setKeyHandler(stateId: number, keyHandler: KeyHandler): void {
this.keyHandlers[stateId] = keyHandler;
}
handleKey(key: string, value: string): FilterReturnValue {
return this.keyHandlers[this.state.id](key, value);
}
keyHandler_NeverSkipping(key: string, value: string): FilterReturnValue {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.addContents(key, value);
return FILTER_ACCEPT;
}
keyHandler_NotSkippingPrefix(key: string, value: string): FilterReturnValue {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
const commonPrefix = this.addCommonPrefixOrContents(key, value);
if (commonPrefix) {
// transition into SkippingPrefix state to skip all following keys
// while they start with the same prefix
this.setState(<DelimiterFilterState_SkippingPrefix> {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: commonPrefix,
});
}
return FILTER_ACCEPT;
}
keyHandler_SkippingPrefix(key: string, value: string): FilterReturnValue {
const { prefix } = <DelimiterFilterState_SkippingPrefix> this.state;
if (key.startsWith(prefix)) {
return FILTER_SKIP;
}
this.setState(<DelimiterFilterState_NotSkipping> {
id: DelimiterFilterStateId.NotSkipping,
});
return this.handleKey(key, value);
}
skippingBase(): string | undefined {
switch (this.state.id) {
case DelimiterFilterStateId.SkippingPrefix:
const { prefix } = <DelimiterFilterState_SkippingPrefix> this.state;
return inc(prefix);
default:
return SKIP_NONE;
}
}
skippingV0() {
return this.skippingBase();
}
skippingV1() {
const skipTo = this.skippingBase();
if (skipTo === SKIP_NONE) {
return SKIP_NONE;
}
return DbPrefixes.Master + skipTo;
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done, doesn't show a NextMarker field if the output
* isn't truncated
* @return {Object} - following amazon format
*/
result(): ResultObject {
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
const result: ResultObject = {
CommonPrefixes: this.CommonPrefixes,
Contents: this.Contents,
IsTruncated: this.IsTruncated,
Delimiter: this.delimiter,
};
if (this.parameters.v2) {
result.NextContinuationToken = this.IsTruncated
? this.nextMarker : undefined;
} else {
result.NextMarker = (this.IsTruncated && this.delimiter)
? this.nextMarker : undefined;
}
return result;
}
}
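
A minimal sketch of driving the state machine above (the inline logger object
is a stand-in for the RequestLogger dependency):

const { FILTER_END } = require('./tools');
const logger = { info: () => {}, warn: () => {}, error: () => {} };
const listing = new Delimiter({ delimiter: '/', maxKeys: 1000 }, logger, 'v0');

for (const obj of [
    { key: 'doc.txt', value: '{}' },
    { key: 'photos/a.jpg', value: '{}' }, // adds common prefix 'photos/'
    { key: 'photos/b.jpg', value: '{}' }, // FILTER_SKIP: same prefix
]) {
    if (listing.filter(obj) === FILTER_END) {
        break;
    }
    // on FILTER_SKIP, a real backend would seek forward to listing.skipping()
}
listing.result();
// -> { CommonPrefixes: ['photos/'], Contents: [{ key: 'doc.txt', ... }],
//      IsTruncated: false, Delimiter: '/' }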

lib/algos/list/delimiterCurrent.ts Normal file (+127 lines)

@ -0,0 +1,127 @@
const { DelimiterMaster } = require('./delimiterMaster');
const { FILTER_ACCEPT, FILTER_END } = require('./tools');
type ResultObject = {
Contents: {
key: string;
value: string;
}[];
IsTruncated: boolean;
NextMarker ?: string;
};
/**
* Handle object listing with parameters. This extends the base class DelimiterMaster
* to return the master/current versions.
*/
class DelimiterCurrent extends DelimiterMaster {
/**
* Delimiter listing of current versions.
* @param {Object} parameters - listing parameters
* @param {String} parameters.beforeDate - limit the response to keys older than beforeDate
 * @param {String} parameters.excludedDataStoreName - excluded datastore name
* @param {Number} parameters.maxScannedLifecycleListingEntries - max number of entries to be scanned
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
this.beforeDate = parameters.beforeDate;
this.excludedDataStoreName = parameters.excludedDataStoreName;
this.maxScannedLifecycleListingEntries = parameters.maxScannedLifecycleListingEntries;
this.scannedKeys = 0;
}
genMDParamsV0() {
const params = super.genMDParamsV0();
// the lastModified and dataStoreName parameters are used by metadata
// backends that support built-in filtering, currently only MongoDB
if (this.beforeDate) {
params.lastModified = {
lt: this.beforeDate,
};
}
if (this.excludedDataStoreName) {
params.dataStoreName = {
ne: this.excludedDataStoreName,
}
}
return params;
}
/**
* Parses the stringified entry's value.
 * @param s - stringified value
 * @return - the parsed value, or undefined if parsing fails
*/
_parse(s) {
let p;
try {
p = JSON.parse(s);
} catch (e: any) {
this.logger.warn(
'Could not parse Object Metadata while listing',
{ err: e.toString() });
}
return p;
}
/**
* check if the max keys count has been reached and set the
* final state of the result if it is the case
*
* specialized implementation on DelimiterCurrent to also check
* the number of scanned keys
*
* @return {Boolean} - indicates if the iteration has to stop
*/
_reachedMaxKeys(): boolean {
if (this.maxScannedLifecycleListingEntries && this.scannedKeys >= this.maxScannedLifecycleListingEntries) {
this.IsTruncated = true;
this.logger.info('listing stopped due to reaching the maximum scanned entries limit',
{
maxScannedLifecycleListingEntries: this.maxScannedLifecycleListingEntries,
scannedKeys: this.scannedKeys,
});
return true;
}
return super._reachedMaxKeys();
}
addContents(key, value) {
++this.scannedKeys;
const parsedValue = this._parse(value);
// if parsing fails, skip the key.
if (parsedValue) {
const lastModified = parsedValue['last-modified'];
const dataStoreName = parsedValue.dataStoreName;
// Add the entry only if the current version is older than "beforeDate" and,
// when "excludedDataStoreName" is specified, its data store name differs.
if ((!this.beforeDate || (lastModified && lastModified < this.beforeDate)) &&
(!this.excludedDataStoreName || dataStoreName !== this.excludedDataStoreName)) {
super.addContents(key, value);
}
// In the event of a timeout occurring before any content is added,
// NextMarker is updated even if the object is not eligible.
// It minimizes the amount of data that the client needs to re-process if the request times out.
this.nextMarker = key;
}
}
result(): object {
const result: ResultObject = {
Contents: this.Contents,
IsTruncated: this.IsTruncated,
};
if (this.IsTruncated) {
result.NextMarker = this.nextMarker;
}
return result;
}
}
module.exports = { DelimiterCurrent };
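
For illustration, how the lifecycle-oriented parameters above map to metadata
query parameters (logger stubbed, values arbitrary):

const logger = { info: () => {}, warn: () => {} }; // stand-in RequestLogger
const listing = new DelimiterCurrent({
    beforeDate: '2024-01-01T00:00:00.000Z', // only versions older than this
    excludedDataStoreName: 'cold-location', // hypothetical location to exclude
    maxScannedLifecycleListingEntries: 10000, // stop scanning past this count
    maxKeys: 1000,
}, logger, 'v0');

listing.genMDParamsV0();
// -> adds { lastModified: { lt: '2024-01-01...' },
//           dataStoreName: { ne: 'cold-location' } } for metadata backends
//    with built-in filtering (currently MongoDB)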

lib/algos/list/delimiterMaster.js (deleted, 196 lines)

@ -1,196 +0,0 @@
'use strict'; // eslint-disable-line strict
const Delimiter = require('./delimiter').Delimiter;
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { BucketVersioningKeyFormat } = VSConst;
const { FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } = require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes } = VSConst;
/**
* Handle object listing with parameters. This extends the base class Delimiter
* to return the raw master versions of existing objects.
*/
class DelimiterMaster extends Delimiter {
/**
* Delimiter listing of master versions.
* @param {Object} parameters - listing parameters
* @param {String} parameters.delimiter - delimiter per amazon format
* @param {String} parameters.prefix - prefix per amazon format
* @param {String} parameters.marker - marker per amazon format
* @param {Number} parameters.maxKeys - number of keys to list
* @param {Boolean} parameters.v2 - indicates whether v2 format
* @param {String} parameters.startAfter - marker per amazon v2 format
* @param {String} parameters.continuationToken - obfuscated amazon token
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
// non-PHD master version or a version whose master is a PHD version
this.prvKey = undefined;
this.prvPHDKey = undefined;
this.inReplayPrefix = false;
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
filter: this.filterV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
filter: this.filterV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
/**
* Filter to apply on each iteration for buckets in v0 format,
* based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV0(obj) {
let key = obj.key;
const value = obj.value;
if (key.startsWith(DbPrefixes.Replay)) {
this.inReplayPrefix = true;
return FILTER_SKIP;
}
this.inReplayPrefix = false;
/* Skip keys not starting with the prefix or not alphabetically
* ordered. */
if ((this.prefix && !key.startsWith(this.prefix))
|| (typeof this[this.nextContinueMarker] === 'string' &&
key <= this[this.nextContinueMarker])) {
return FILTER_SKIP;
}
/* Skip version keys (<key><versionIdSeparator><version>) if we already
* have a master version. */
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex >= 0) {
key = key.slice(0, versionIdIndex);
/* - key === this.prvKey is triggered when a master version has
* been accepted for this key,
* - key === this.NextMarker or this.NextContinueToken is triggered
* when a listing page ends on an accepted obj and the next page
* starts with a version of this object.
* In that case prvKey is default set to undefined
* in the constructor and comparing to NextMarker is the only
* way to know we should not accept this version. This test is
* not redundant with the one at the beginning of this function,
* we are comparing here the key without the version suffix,
* - key startsWith the previous NextMarker happens because we set
* NextMarker to the common prefix instead of the whole key
* value. (TODO: remove this test once ZENKO-1048 is fixed)
* */
if (key === this.prvKey || key === this[this.nextContinueMarker] ||
(this.delimiter &&
key.startsWith(this[this.nextContinueMarker]))) {
/* master version already filtered */
return FILTER_SKIP;
}
}
if (Version.isPHD(value)) {
/* master version is a PHD version, we want to wait for the next
* one:
* - Set the prvKey to undefined to not skip the next version,
* - return accept to avoid users to skip the next values in range
* (skip scan mechanism in metadata backend like Metadata or
* MongoClient). */
this.prvKey = undefined;
this.prvPHDKey = key;
return FILTER_ACCEPT;
}
if (Version.isDeleteMarker(value)) {
/* This entry is a deleteMarker which has not been filtered by the
* version test. Either :
* - it is a deleteMarker on the master version, we want to SKIP
* all the following entries with this key (no master version),
* - or a deleteMarker following a PHD (setting prvKey to undefined
* when an entry is a PHD avoids the skip on version for the
* next entry). In that case we expect the master version to
* follow. */
if (key === this.prvPHDKey) {
this.prvKey = undefined;
return FILTER_ACCEPT;
}
this.prvKey = key;
return FILTER_SKIP;
}
this.prvKey = key;
if (this.delimiter) {
// check if the key has the delimiter
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex >= 0) {
// try to add the prefix to the list
return this.addCommonPrefix(key, delimiterIndex);
}
}
return this.addContents(key, value);
}
/**
* Filter to apply on each iteration for buckets in v1 format,
* based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV1(obj) {
// Filtering master keys in v1 is simply listing the master
// keys, as the state of version keys do not change the
// result, so we can use Delimiter method directly.
return super.filter(obj);
}
skippingBase() {
if (this[this.nextContinueMarker]) {
// next marker or next continuation token:
// - foo/ : skipping foo/
// - foo : skipping foo.
const index = this[this.nextContinueMarker].
lastIndexOf(this.delimiter);
if (index === this[this.nextContinueMarker].length - 1) {
return this[this.nextContinueMarker];
}
return this[this.nextContinueMarker] + VID_SEP;
}
return SKIP_NONE;
}
skippingV0() {
if (this.inReplayPrefix) {
return DbPrefixes.Replay;
}
return this.skippingBase();
}
skippingV1() {
const skipTo = this.skippingBase();
if (skipTo === SKIP_NONE) {
return SKIP_NONE;
}
return DbPrefixes.Master + skipTo;
}
}
module.exports = { DelimiterMaster };

lib/algos/list/delimiterMaster.ts Normal file (+620 lines)

@ -0,0 +1,620 @@
import {
Delimiter,
FilterState,
FilterReturnValue,
DelimiterFilterStateId,
DelimiterFilterState_NotSkipping,
DelimiterFilterState_SkippingPrefix,
ResultObject,
} from './delimiter';
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { BucketVersioningKeyFormat } = VSConst;
const { FILTER_ACCEPT, FILTER_SKIP, FILTER_END, SKIP_NONE, inc } = require('./tools');
import { GapSetEntry } from '../cache/GapSet';
import { GapCacheInterface } from '../cache/GapCache';
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes } = VSConst;
export const enum DelimiterMasterFilterStateId {
SkippingVersionsV0 = 101,
WaitVersionAfterPHDV0 = 102,
SkippingGapV0 = 103,
};
interface DelimiterMasterFilterState_SkippingVersionsV0 extends FilterState {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: string,
};
interface DelimiterMasterFilterState_WaitVersionAfterPHDV0 extends FilterState {
id: DelimiterMasterFilterStateId.WaitVersionAfterPHDV0,
masterKey: string,
};
interface DelimiterMasterFilterState_SkippingGapV0 extends FilterState {
id: DelimiterMasterFilterStateId.SkippingGapV0,
};
export const enum GapCachingState {
NoGapCache = 0, // there is no gap cache
UnknownGap = 1, // waiting for a cache lookup
GapLookupInProgress = 2, // asynchronous gap lookup in progress
GapCached = 3, // an upcoming or already skippable gap is cached
NoMoreGap = 4, // the cache doesn't have any more gaps inside the listed range
};
type GapCachingInfo_NoGapCache = {
state: GapCachingState.NoGapCache;
};
type GapCachingInfo_NoCachedGap = {
state: GapCachingState.UnknownGap
| GapCachingState.GapLookupInProgress
gapCache: GapCacheInterface;
};
type GapCachingInfo_GapCached = {
state: GapCachingState.GapCached;
gapCache: GapCacheInterface;
gapCached: GapSetEntry;
};
type GapCachingInfo_NoMoreGap = {
state: GapCachingState.NoMoreGap;
};
type GapCachingInfo = GapCachingInfo_NoGapCache
| GapCachingInfo_NoCachedGap
| GapCachingInfo_GapCached
| GapCachingInfo_NoMoreGap;
export const enum GapBuildingState {
Disabled = 0, // no gap cache or no gap building needed (e.g. in V1 versioning format)
NotBuilding = 1, // not currently building a gap (i.e. not listing within a gap)
Building = 2, // currently building a gap (i.e. listing within a gap)
Expired = 3, // not allowed to build due to exposure delay timeout
};
type GapBuildingInfo_NothingToBuild = {
state: GapBuildingState.Disabled | GapBuildingState.Expired;
};
type GapBuildingParams = {
/**
* minimum weight for a gap to be created in the cache
*/
minGapWeight: number;
/**
* trigger a cache setGap() call every N skippable keys
*/
triggerSaveGapWeight: number;
/**
* timestamp to assess whether we're still inside the validity period to
* be allowed to build gaps
*/
initTimestamp: number;
};
type GapBuildingInfo_NotBuilding = {
state: GapBuildingState.NotBuilding;
gapCache: GapCacheInterface;
params: GapBuildingParams;
};
type GapBuildingInfo_Building = {
state: GapBuildingState.Building;
gapCache: GapCacheInterface;
params: GapBuildingParams;
/**
* Gap currently being created
*/
gap: GapSetEntry;
/**
* total current weight of the gap being created
*/
gapWeight: number;
};
type GapBuildingInfo = GapBuildingInfo_NothingToBuild
| GapBuildingInfo_NotBuilding
| GapBuildingInfo_Building;
/**
* Handle object listing with parameters. This extends the base class Delimiter
* to return the raw master versions of existing objects.
*/
export class DelimiterMaster extends Delimiter {
_gapCaching: GapCachingInfo;
_gapBuilding: GapBuildingInfo;
_refreshedBuildingParams: GapBuildingParams | null;
/**
* Delimiter listing of master versions.
* @param {Object} parameters - listing parameters
* @param {String} [parameters.delimiter] - delimiter per amazon format
* @param {String} [parameters.prefix] - prefix per amazon format
* @param {String} [parameters.marker] - marker per amazon format
* @param {Number} [parameters.maxKeys] - number of keys to list
* @param {Boolean} [parameters.v2] - indicates whether v2 format
* @param {String} [parameters.startAfter] - marker per amazon v2 format
* @param {String} [parameters.continuationToken] - obfuscated amazon token
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat="v0"] - versioning key format
*/
constructor(parameters, logger, vFormat?: string) {
super(parameters, logger, vFormat);
if (this.vFormat === BucketVersioningKeyFormat.v0) {
// override Delimiter's implementation of NotSkipping for
// DelimiterMaster logic (skipping versions and special
// handling of delete markers and PHDs)
this.setKeyHandler(
DelimiterFilterStateId.NotSkipping,
this.keyHandler_NotSkippingPrefixNorVersionsV0.bind(this));
// add extra state handlers specific to DelimiterMaster with v0 format
this.setKeyHandler(
DelimiterMasterFilterStateId.SkippingVersionsV0,
this.keyHandler_SkippingVersionsV0.bind(this));
this.setKeyHandler(
DelimiterMasterFilterStateId.WaitVersionAfterPHDV0,
this.keyHandler_WaitVersionAfterPHDV0.bind(this));
this.setKeyHandler(
DelimiterMasterFilterStateId.SkippingGapV0,
this.keyHandler_SkippingGapV0.bind(this));
if (this.marker) {
// distinct initial state to include some special logic
// before the first master key is found that does not have
// to be checked afterwards
this.state = <DelimiterMasterFilterState_SkippingVersionsV0> {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: this.marker,
};
} else {
this.state = <DelimiterFilterState_NotSkipping> {
id: DelimiterFilterStateId.NotSkipping,
};
}
} else {
// save base implementation of the `NotSkipping` state in
// Delimiter before overriding it with ours, to be able to call it from there
this.keyHandler_NotSkipping_Delimiter = this.keyHandlers[DelimiterFilterStateId.NotSkipping];
this.setKeyHandler(
DelimiterFilterStateId.NotSkipping,
this.keyHandler_NotSkippingPrefixNorVersionsV1.bind(this));
}
// in v1, we can directly use Delimiter's implementation,
// which is already set to the proper state
// default initialization of the gap cache and building states, can be
// set by refreshGapCache()
this._gapCaching = {
state: GapCachingState.NoGapCache,
};
this._gapBuilding = {
state: GapBuildingState.Disabled,
};
this._refreshedBuildingParams = null;
}
/**
* Get the validity period left before a refresh of the gap cache is needed
* to continue building new gaps.
*
* @return {number|null} one of:
* - the remaining time in milliseconds in which gaps can be added to the
* cache before a call to refreshGapCache() is required
* - or 0 if there is no time left and a call to refreshGapCache() is required
* to resume caching gaps
* - or null if refreshing the cache is never needed (because the gap cache
* is either not available or not used)
*/
getGapBuildingValidityPeriodMs(): number | null {
let gapBuilding;
switch (this._gapBuilding.state) {
case GapBuildingState.Disabled:
return null;
case GapBuildingState.Expired:
return 0;
case GapBuildingState.NotBuilding:
gapBuilding = <GapBuildingInfo_NotBuilding> this._gapBuilding;
break;
case GapBuildingState.Building:
gapBuilding = <GapBuildingInfo_Building> this._gapBuilding;
break;
}
const { gapCache, params } = gapBuilding;
const elapsedTime = Date.now() - params.initTimestamp;
return Math.max(gapCache.exposureDelayMs - elapsedTime, 0);
}
/**
* Refresh the gaps caching logic (gaps are series of current delete markers
* in V0 bucket metadata format). It has two effects:
*
* - starts exposing existing and future gaps from the cache to efficiently
* skip over series of current delete markers that have been seen and cached
* earlier
*
* - enables building and caching new gaps (or extend existing ones), for a
* limited time period defined by the `gapCacheProxy.exposureDelayMs` value
* in milliseconds. To refresh the validity period and resume building and
* caching new gaps, one must restart a new listing from the database (starting
* at the current listing key, included), then call refreshGapCache() again.
*
* @param {GapCacheInterface} gapCacheProxy - API proxy to the gaps cache
* (the proxy should handle prefixing object keys with the bucket name)
* @param {number} [minGapWeight=100] - minimum weight of a gap for it to be
* added in the cache
* @param {number} [triggerSaveGapWeight] - cumulative weight to wait for
* before saving the current building gap. Cannot be greater than
* `gapCacheProxy.maxGapWeight` (the value is thresholded to `maxGapWeight`
* otherwise). Defaults to `gapCacheProxy.maxGapWeight / 2`.
* @return {undefined}
*/
refreshGapCache(
gapCacheProxy: GapCacheInterface,
minGapWeight?: number,
triggerSaveGapWeight?: number
): void {
if (this.vFormat !== BucketVersioningKeyFormat.v0) {
return;
}
if (this._gapCaching.state === GapCachingState.NoGapCache) {
this._gapCaching = {
state: GapCachingState.UnknownGap,
gapCache: gapCacheProxy,
};
}
const refreshedBuildingParams: GapBuildingParams = {
minGapWeight: minGapWeight || 100,
triggerSaveGapWeight: triggerSaveGapWeight
|| Math.trunc(gapCacheProxy.maxGapWeight / 2),
initTimestamp: Date.now(),
};
if (this._gapBuilding.state === GapBuildingState.Building) {
// refreshed params will be applied as soon as the current building gap is saved
this._refreshedBuildingParams = refreshedBuildingParams;
} else {
this._gapBuilding = {
state: GapBuildingState.NotBuilding,
gapCache: gapCacheProxy,
params: refreshedBuildingParams,
};
}
}
/**
* Trigger a lookup of the closest upcoming or already skippable gap.
*
* @param {string} fromKey - lookup a gap not before 'fromKey'
* @return {undefined} - the lookup is asynchronous and its
* response is handled inside this function
*/
_triggerGapLookup(gapCaching: GapCachingInfo_NoCachedGap, fromKey: string): void {
this._gapCaching = {
state: GapCachingState.GapLookupInProgress,
gapCache: gapCaching.gapCache,
};
const maxKey = this.prefix ? inc(this.prefix) : undefined;
gapCaching.gapCache.lookupGap(fromKey, maxKey).then(_gap => {
const gap = <GapSetEntry | null> _gap;
if (gap) {
this._gapCaching = {
state: GapCachingState.GapCached,
gapCache: gapCaching.gapCache,
gapCached: gap,
};
} else {
this._gapCaching = {
state: GapCachingState.NoMoreGap,
};
}
});
}
_checkGapOnMasterDeleteMarker(key: string): FilterReturnValue {
switch (this._gapBuilding.state) {
case GapBuildingState.Disabled:
case GapBuildingState.Expired:
break;
case GapBuildingState.NotBuilding:
this._createBuildingGap(key, 1);
break;
case GapBuildingState.Building:
this._updateBuildingGap(key);
break;
}
if (this._gapCaching.state === GapCachingState.GapCached) {
const { gapCached } = this._gapCaching;
if (key >= gapCached.firstKey) {
if (key <= gapCached.lastKey) {
// we are inside the last looked up cached gap: transition to
// 'SkippingGapV0' state
this.setState(<DelimiterMasterFilterState_SkippingGapV0> {
id: DelimiterMasterFilterStateId.SkippingGapV0,
});
// cut the current gap before skipping, it will be merged or
// chained with the existing one (depending on its weight)
if (this._gapBuilding.state === GapBuildingState.Building) {
// subtract 1 from the weight because we are going to chain this gap,
// which has an overlap of one key.
this._gapBuilding.gap.weight -= 1;
this._cutBuildingGap();
}
return FILTER_SKIP;
}
// as we are past the cached gap, we will need another lookup
this._gapCaching = {
state: GapCachingState.UnknownGap,
gapCache: this._gapCaching.gapCache,
};
}
}
if (this._gapCaching.state === GapCachingState.UnknownGap) {
this._triggerGapLookup(this._gapCaching, key);
}
return FILTER_ACCEPT;
}
filter_onNewMasterKeyV0(key: string, value: string): FilterReturnValue {
// if this master key is a delete marker, accept it without
// adding the version to the contents
if (Version.isDeleteMarker(value)) {
// update the state to start skipping versions of the new master key
this.setState(<DelimiterMasterFilterState_SkippingVersionsV0> {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: key,
});
return this._checkGapOnMasterDeleteMarker(key);
}
if (Version.isPHD(value)) {
// master version is a PHD version: wait for the first
// following version that will be considered as the actual
// master key
this.setState(<DelimiterMasterFilterState_WaitVersionAfterPHDV0> {
id: DelimiterMasterFilterStateId.WaitVersionAfterPHDV0,
masterKey: key,
});
return FILTER_ACCEPT;
}
// cut the current gap as soon as a non-deleted entry is seen
this._cutBuildingGap();
if (key.startsWith(DbPrefixes.Replay)) {
// skip internal replay prefix entirely
this.setState(<DelimiterFilterState_SkippingPrefix> {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: DbPrefixes.Replay,
});
return FILTER_SKIP;
}
if (this._reachedMaxKeys()) {
return FILTER_END;
}
const commonPrefix = this.addCommonPrefixOrContents(key, value);
if (commonPrefix) {
// transition into SkippingPrefix state to skip all following keys
// while they start with the same prefix
this.setState(<DelimiterFilterState_SkippingPrefix> {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: commonPrefix,
});
return FILTER_ACCEPT;
}
// update the state to start skipping versions of the new master key
this.setState(<DelimiterMasterFilterState_SkippingVersionsV0> {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: key,
});
return FILTER_ACCEPT;
}
keyHandler_NotSkippingPrefixNorVersionsV0(key: string, value: string): FilterReturnValue {
return this.filter_onNewMasterKeyV0(key, value);
}
filter_onNewMasterKeyV1(key: string, value: string): FilterReturnValue {
// if this master key is a delete marker, accept it without
// adding the version to the contents
if (Version.isDeleteMarker(value)) {
return FILTER_ACCEPT;
}
// use base Delimiter's implementation
return this.keyHandler_NotSkipping_Delimiter(key, value);
}
keyHandler_NotSkippingPrefixNorVersionsV1(key: string, value: string): FilterReturnValue {
return this.filter_onNewMasterKeyV1(key, value);
}
keyHandler_SkippingVersionsV0(key: string, value: string): FilterReturnValue {
/* In the SkippingVersionsV0 state, skip all version keys
* (<key><versionIdSeparator><version>) */
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex !== -1) {
// version keys count in the building gap weight because they must
// also be listed until skipped
if (this._gapBuilding.state === GapBuildingState.Building) {
this._updateBuildingGap(key);
}
return FILTER_SKIP;
}
return this.filter_onNewMasterKeyV0(key, value);
}
keyHandler_WaitVersionAfterPHDV0(key: string, value: string): FilterReturnValue {
// After a PHD key is encountered, the next version key of the
// same object, if it exists, becomes the new master key, hence
// consider it as such and call 'onNewMasterKeyV0' (the test
// 'masterKey === phdKey' is probably redundant since we already
// know we have a versioned key and all objects in v0 have a
// master key, but it is kept out of caution)
const { masterKey: phdKey } = <DelimiterMasterFilterState_WaitVersionAfterPHDV0> this.state;
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex !== -1) {
const masterKey = key.slice(0, versionIdIndex);
if (masterKey === phdKey) {
return this.filter_onNewMasterKeyV0(masterKey, value);
}
}
return this.filter_onNewMasterKeyV0(key, value);
}
keyHandler_SkippingGapV0(key: string, value: string): FilterReturnValue {
const { gapCache, gapCached } = <GapCachingInfo_GapCached> this._gapCaching;
if (key <= gapCached.lastKey) {
return FILTER_SKIP;
}
this._gapCaching = {
state: GapCachingState.UnknownGap,
gapCache,
};
this.setState(<DelimiterMasterFilterState_SkippingVersionsV0> {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
});
// Start a gap with weight=0 from the latest skippable key. This
// allows extending the gap just skipped with a chained gap, in case
// other delete markers are seen after the existing gap is skipped.
this._createBuildingGap(gapCached.lastKey, 0, gapCached.weight);
return this.handleKey(key, value);
}
skippingBase(): string | undefined {
switch (this.state.id) {
case DelimiterMasterFilterStateId.SkippingVersionsV0:
const { masterKey } = <DelimiterMasterFilterState_SkippingVersionsV0> this.state;
return masterKey + inc(VID_SEP);
case DelimiterMasterFilterStateId.SkippingGapV0:
const { gapCached } = <GapCachingInfo_GapCached> this._gapCaching;
return gapCached.lastKey;
default:
return super.skippingBase();
}
}
result(): ResultObject {
this._cutBuildingGap();
return super.result();
}
_checkRefreshedBuildingParams(params: GapBuildingParams): GapBuildingParams {
if (this._refreshedBuildingParams) {
const newParams = this._refreshedBuildingParams;
this._refreshedBuildingParams = null;
return newParams;
}
return params;
}
/**
* Save the gap being built if allowed (i.e. still within the
* allocated exposure time window).
*
* @return {boolean} - true if the gap was saved, false if we are
* outside the allocated exposure time window.
*/
_saveBuildingGap(): boolean {
const { gapCache, params, gap, gapWeight } =
<GapBuildingInfo_Building> this._gapBuilding;
const totalElapsed = Date.now() - params.initTimestamp;
if (totalElapsed >= gapCache.exposureDelayMs) {
this._gapBuilding = {
state: GapBuildingState.Expired,
};
this._refreshedBuildingParams = null;
return false;
}
const { firstKey, lastKey, weight } = gap;
gapCache.setGap(firstKey, lastKey, weight);
this._gapBuilding = {
state: GapBuildingState.Building,
gapCache,
params: this._checkRefreshedBuildingParams(params),
gap: {
firstKey: gap.lastKey,
lastKey: gap.lastKey,
weight: 0,
},
gapWeight,
};
return true;
}
/**
* Create a new gap to be extended afterwards
*
* @param {string} newKey - gap's first key
* @param {number} startWeight - initial weight of the building gap (usually 0 or 1)
* @param {number} [cachedWeight] - if continuing a cached gap, weight of the existing
* cached portion
* @return {undefined}
*/
_createBuildingGap(newKey: string, startWeight: number, cachedWeight?: number): void {
if (this._gapBuilding.state === GapBuildingState.NotBuilding) {
const { gapCache, params } = <GapBuildingInfo_NotBuilding> this._gapBuilding;
this._gapBuilding = {
state: GapBuildingState.Building,
gapCache,
params: this._checkRefreshedBuildingParams(params),
gap: {
firstKey: newKey,
lastKey: newKey,
weight: startWeight,
},
gapWeight: (cachedWeight || 0) + startWeight,
};
}
}
_updateBuildingGap(newKey: string): void {
const gapBuilding = <GapBuildingInfo_Building> this._gapBuilding;
const { params, gap } = gapBuilding;
gap.lastKey = newKey;
gap.weight += 1;
gapBuilding.gapWeight += 1;
// the GapCache API requires updating a gap regularly because it can only split
// it once per update, at the known last key. In practice the default behavior
// is to trigger an update after a number of keys that is half the maximum weight.
// Updating regularly also lets other listings benefit from the cache sooner.
if (gapBuilding.gapWeight >= params.minGapWeight &&
gap.weight >= params.triggerSaveGapWeight) {
this._saveBuildingGap();
}
}
_cutBuildingGap(): void {
if (this._gapBuilding.state === GapBuildingState.Building) {
let gapBuilding = <GapBuildingInfo_Building> this._gapBuilding;
let { gapCache, params, gap, gapWeight } = gapBuilding;
// only set gaps that are significant enough in weight and
// with a non-empty extension
if (gapWeight >= params.minGapWeight && gap.weight > 0) {
// we're done if we were not allowed to save the gap
if (!this._saveBuildingGap()) {
return;
}
// params may have been refreshed, reload them
gapBuilding = <GapBuildingInfo_Building> this._gapBuilding;
params = gapBuilding.params;
}
this._gapBuilding = {
state: GapBuildingState.NotBuilding,
gapCache,
params,
};
}
}
}
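
A minimal usage sketch (not part of the file above): driving a V0 master-key listing with gap caching enabled. The entries iterator, the logger and the gapCacheProxy object are assumed here; the proxy is any object implementing GapCacheInterface (exposureDelayMs, maxGapWeight, lookupGap(), setGap()).

const { DelimiterMaster } = require('./delimiterMaster');
const { FILTER_END } = require('./tools');

function listMasterKeys(entries, gapCacheProxy, logger) {
    const listing = new DelimiterMaster({ maxKeys: 1000 }, logger, 'v0');
    // expose cached gaps and (re-)enable gap building for the exposure window
    listing.refreshGapCache(gapCacheProxy);
    for (const { key, value } of entries) {
        // filter() returns FILTER_END once maxKeys is reached
        if (listing.filter({ key, value }) === FILTER_END) {
            break;
        }
    }
    return listing.result();
}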


@ -0,0 +1,202 @@
const { DelimiterVersions } = require('./delimiterVersions');
const { FILTER_END, FILTER_SKIP } = require('./tools');
const TRIM_METADATA_MIN_BLOB_SIZE = 10000;
/**
* Handle object listing with parameters. This extends the base class DelimiterVersions
* to return the raw non-current versions objects.
*/
class DelimiterNonCurrent extends DelimiterVersions {
/**
* Delimiter listing of non-current versions.
* @param {Object} parameters - listing parameters
* @param {String} parameters.keyMarker - key marker
* @param {String} parameters.versionIdMarker - version id marker
* @param {String} parameters.beforeDate - limit the response to keys whose stale date is older than beforeDate.
* The stale date is the date at which a version becomes non-current.
* @param {Number} parameters.maxScannedLifecycleListingEntries - max number of entries to be scanned
* @param {String} parameters.excludedDataStoreName - exclude dataStoreName matches from the versions
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
this.beforeDate = parameters.beforeDate;
this.excludedDataStoreName = parameters.excludedDataStoreName;
this.maxScannedLifecycleListingEntries = parameters.maxScannedLifecycleListingEntries;
// internal state
this.prevKey = null;
this.staleDate = null;
this.scannedKeys = 0;
}
getLastModified(value) {
let lastModified;
try {
const v = JSON.parse(value);
lastModified = v['last-modified'];
} catch (e) {
this.logger.warn('could not parse Object Metadata while listing',
{
method: 'getLastModified',
err: e.toString(),
});
}
return lastModified;
}
// Override keyHandler_SkippingVersions to include the last version from the previous listing.
// The creation (last-modified) date of this version will be the stale date of the following version.
// eslint-disable-next-line camelcase
keyHandler_SkippingVersions(key, versionId, value) {
if (key === this.keyMarker) {
// since the nonversioned key equals the marker, there is
// necessarily a versionId in this key
const _versionId = versionId;
if (_versionId < this.versionIdMarker) {
// skip all versions until marker
return FILTER_SKIP;
}
}
this.setState({
id: 1 /* NotSkipping */,
});
return this.handleKey(key, versionId, value);
}
filter(obj) {
if (this.maxScannedLifecycleListingEntries && this.scannedKeys >= this.maxScannedLifecycleListingEntries) {
this.IsTruncated = true;
this.logger.info('listing stopped due to reaching the maximum scanned entries limit',
{
maxScannedLifecycleListingEntries: this.maxScannedLifecycleListingEntries,
scannedKeys: this.scannedKeys,
});
return FILTER_END;
}
++this.scannedKeys;
return super.filter(obj);
}
/**
* NOTE: Each version of a specific key is sorted from the latest to the oldest
* thanks to the way version ids are generated.
* DESCRIPTION: Skip the version if it represents the master key, but keep its last-modified date in memory,
* which will be the stale date of the following version.
* The following version is pushed only:
* - if the "stale date" (picked up from the previous version) is available (JSON.parse has not failed),
* - if "beforeDate" is not specified or if specified and the "stale date" is older.
* - if "excludedDataStoreName" is not specified or if specified and the data store name is different
* The in-memory "stale date" is then updated with the version's last-modified date to be used for
* the following version.
* The process stops and returns the available results if either:
* - no more metadata key is left to be processed
* - the listing reaches the maximum number of keys to be returned
* - the internal timeout is reached
* @param {String} key - The key to add
* @param {String} versionId - The version id
* @param {String} value - The value of the key
* @return {undefined}
*/
addVersion(key, versionId, value) {
this.nextKeyMarker = key;
this.nextVersionIdMarker = versionId;
// Skip the version if it is the current version, but keep its last-modified date,
// which will be the stale date of the following (non-current) version.
const isCurrentVersion = key !== this.prevKey;
if (isCurrentVersion) {
this.staleDate = this.getLastModified(value);
this.prevKey = key;
return;
}
// The following version is pushed only:
// - if the "stale date" (picked up from the previous version) is available (JSON.parse has not failed),
// - if "beforeDate" is not specified or if specified and the "stale date" is older.
// - if "excludedDataStoreName" is not specified or if specified and the data store name is different
let lastModified;
if (this.staleDate && (!this.beforeDate || this.staleDate < this.beforeDate)) {
const parsedValue = this._parse(value);
// if parsing fails, skip the key.
if (parsedValue) {
const dataStoreName = parsedValue.dataStoreName;
lastModified = parsedValue['last-modified'];
if (!this.excludedDataStoreName || dataStoreName !== this.excludedDataStoreName) {
const s = this._stringify(parsedValue, this.staleDate);
// check that _stringify succeeds to only push objects with a defined staleDate.
if (s) {
this.Versions.push({ key, value: s });
++this.keys;
}
}
}
}
// The in-memory "stale date" is then updated with the version's last-modified date to be used for
// the following version.
this.staleDate = lastModified || this.getLastModified(value);
return;
}
/**
* Parses the stringified entry's value and removes the location property if it is too large.
* @param {string} s - stringified value
* @return {object} p - undefined if parsing fails, otherwise the parsed value.
*/
_parse(s) {
let p;
try {
p = JSON.parse(s);
if (s.length >= TRIM_METADATA_MIN_BLOB_SIZE) {
delete p.location;
}
} catch (e) {
this.logger.warn('Could not parse Object Metadata while listing', {
method: 'DelimiterNonCurrent._parse',
err: e.toString(),
});
}
return p;
}
_stringify(parsedMD, staleDate) {
const p = parsedMD;
let s = undefined;
p.staleDate = staleDate;
try {
s = JSON.stringify(p);
} catch (e) {
this.logger.warn('could not stringify Object Metadata while listing', {
method: 'DelimiterNonCurrent._stringify',
err: e.toString(),
});
}
return s;
}
result() {
const { Versions, IsTruncated, NextKeyMarker, NextVersionIdMarker } = super.result();
const result = {
Contents: Versions,
IsTruncated,
};
if (NextKeyMarker) {
result.NextKeyMarker = NextKeyMarker;
}
if (NextVersionIdMarker) {
result.NextVersionIdMarker = NextVersionIdMarker;
}
return result;
}
}
module.exports = { DelimiterNonCurrent };
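
A hedged usage sketch: listing versions that became non-current before a given date. It assumes rawEntries is an iterator over { key, value } pairs in version-index order and logger is a request logger; parameter names follow the constructor documented above.

const { DelimiterNonCurrent } = require('./delimiterNonCurrent');
const { FILTER_END } = require('./tools');

function listNonCurrentBefore(rawEntries, beforeDate, logger) {
    const listing = new DelimiterNonCurrent({
        beforeDate,
        maxScannedLifecycleListingEntries: 10000,
    }, logger, 'v1');
    for (const entry of rawEntries) {
        // stops early when maxKeys or the scanned-entries limit is reached
        if (listing.filter(entry) === FILTER_END) {
            break;
        }
    }
    // Contents holds the non-current versions, each annotated with the
    // staleDate inherited from the next-newer version of the same key
    return listing.result();
}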


@ -0,0 +1,204 @@
const DelimiterVersions = require('./delimiterVersions').DelimiterVersions;
const { FILTER_END } = require('./tools');
const TRIM_METADATA_MIN_BLOB_SIZE = 10000;
/**
* Handle object listing with parameters. This extends the base class DelimiterVersions
* to return the orphan delete markers. Orphan delete markers are also
* referred to as expired object delete markers.
* They are delete markers with zero non-current versions.
*/
class DelimiterOrphanDeleteMarker extends DelimiterVersions {
/**
* Delimiter listing of orphan delete markers.
* @param {Object} parameters - listing parameters
* @param {String} parameters.beforeDate - limit the response to keys older than beforeDate
* @param {Number} parameters.maxScannedLifecycleListingEntries - max number of entries to be scanned
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
const {
marker,
maxKeys,
prefix,
beforeDate,
maxScannedLifecycleListingEntries,
} = parameters;
const versionParams = {
// The orphan delete marker logic uses the term 'marker' instead of 'keyMarker',
// as the latter could suggest the presence of a 'versionIdMarker'.
keyMarker: marker,
maxKeys,
prefix,
};
super(versionParams, logger, vFormat);
this.maxScannedLifecycleListingEntries = maxScannedLifecycleListingEntries;
this.beforeDate = beforeDate;
// this.prevKeyName is used as a marker for the next listing when the current one reaches its entry limit.
// We cannot rely on this.keyName, as it contains the name of the current key.
// In the event of a listing interruption due to reaching the maximum scanned entries,
// relying on this.keyName would cause the next listing to skip the current key because S3 starts
// listing after the marker.
this.prevKeyName = null;
this.keyName = null;
this.value = null;
this.scannedKeys = 0;
}
_reachedMaxKeys() {
if (this.keys >= this.maxKeys) {
return true;
}
return false;
}
_addOrphan() {
const parsedValue = this._parse(this.value);
// if parsing fails, skip the key.
if (parsedValue) {
const lastModified = parsedValue['last-modified'];
const isDeleteMarker = parsedValue.isDeleteMarker;
// We then check if the orphan version is a delete marker and if it is older than the "beforeDate"
if ((!this.beforeDate || (lastModified && lastModified < this.beforeDate)) && isDeleteMarker) {
// Prefer returning an untrimmed data rather than stopping the service in case of parsing failure.
const s = this._stringify(parsedValue) || this.value;
this.Versions.push({ key: this.keyName, value: s });
this.nextKeyMarker = this.keyName;
++this.keys;
}
}
}
/**
* Parses the stringified entry's value and removes the location property if it is too large.
* @param {string} s - stringified value
* @return {object} p - undefined if parsing fails, otherwise the parsed value.
*/
_parse(s) {
let p;
try {
p = JSON.parse(s);
if (s.length >= TRIM_METADATA_MIN_BLOB_SIZE) {
delete p.location;
}
} catch (e) {
this.logger.warn('Could not parse Object Metadata while listing', {
method: 'DelimiterOrphanDeleteMarker._parse',
err: e.toString(),
});
}
return p;
}
_stringify(value) {
const p = value;
let s = undefined;
try {
s = JSON.stringify(p);
} catch (e) {
this.logger.warn('could not stringify Object Metadata while listing',
{
method: 'DelimiterOrphanDeleteMarker._stringify',
err: e.toString(),
});
}
return s;
}
/**
* The purpose of _isMaxScannedEntriesReached is to restrict the number of scanned entries,
* thus controlling resource overhead (CPU...).
* @return {boolean} isMaxScannedEntriesReached - true if the maximum limit on the number
* of entries scanned has been reached, false otherwise.
*/
_isMaxScannedEntriesReached() {
return this.maxScannedLifecycleListingEntries && this.scannedKeys >= this.maxScannedLifecycleListingEntries;
}
filter(obj) {
if (this._isMaxScannedEntriesReached()) {
this.nextKeyMarker = this.prevKeyName;
this.IsTruncated = true;
this.logger.info('listing stopped due to reaching the maximum scanned entries limit',
{
maxScannedLifecycleListingEntries: this.maxScannedLifecycleListingEntries,
scannedKeys: this.scannedKeys,
});
return FILTER_END;
}
++this.scannedKeys;
return super.filter(obj);
}
/**
* NOTE: Each version of a specific key is sorted from the latest to the oldest
* thanks to the way version ids are generated.
* DESCRIPTION: For a given key, the latest version is kept in memory since it is the current version.
* If the following version references a new key, it means that the previous one was an orphan version.
* We then check whether the orphan version is a delete marker and whether it is older than the "beforeDate".
* The process stops and returns the available results if either:
* - no more metadata key is left to be processed
* - the listing reaches the maximum number of keys to be returned
* - the internal timeout is reached
* NOTE: we cannot leverage MongoDB to list keys older than "beforeDate"
* because we would then be unable to assess whether a key is an orphan.
* @param {String} key - The object key.
* @param {String} versionId - The object version id.
* @param {String} value - The value of the key
* @return {undefined}
*/
addVersion(key, versionId, value) {
// For a given key, the youngest version is kept in memory since it represents the current version.
if (key !== this.keyName) {
// If this.value is defined, it means that <this.keyName, this.value> pair is "allowed" to be an orphan.
if (this.value) {
this._addOrphan();
}
this.prevKeyName = this.keyName;
this.keyName = key;
this.value = value;
return;
}
// The key has more than one version, so it cannot be an orphan: it can be
// skipped in the next listing if the current one is interrupted after
// reaching the maximum scanned entries.
this.prevKeyName = key;
this.keyName = key;
this.value = null;
return;
}
result() {
// Only check for remaining last orphan delete marker if the listing is not interrupted.
// This will help avoid false positives.
if (!this._isMaxScannedEntriesReached()) {
// The following check makes sure the last orphan delete marker is not forgotten.
if (this.keys < this.maxKeys) {
if (this.value) {
this._addOrphan();
}
// The following makes sure that if maxKeys is reached, IsTruncated is set to true.
// We moved the "isTruncated" logic out of _reachedMaxKeys to make sure we take into account the last entity
// if the listing is truncated right before the last entity and the last entity is an orphan delete marker.
} else {
this.IsTruncated = this.maxKeys > 0;
}
}
const result = {
Contents: this.Versions,
IsTruncated: this.IsTruncated,
};
if (this.IsTruncated) {
result.NextMarker = this.nextKeyMarker;
}
return result;
}
}
module.exports = { DelimiterOrphanDeleteMarker };
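
The one-key lookahead performed by addVersion() and result() above boils down to the following reduction, shown as a self-contained sketch (beforeDate filtering and metadata parsing omitted):

function findOrphanDeleteMarkers(versions) {
    // versions: [{ key, isDeleteMarker }], listed newest first within each key
    const orphans = [];
    let currentKey = null;
    let candidate = null; // sole version seen so far for currentKey, if any
    for (const v of versions) {
        if (v.key !== currentKey) {
            // the previous key had exactly one version: it is an orphan
            // if that version is a delete marker
            if (candidate && candidate.isDeleteMarker) {
                orphans.push(candidate.key);
            }
            currentKey = v.key;
            candidate = v;
        } else {
            candidate = null; // a second version exists: not an orphan
        }
    }
    if (candidate && candidate.isDeleteMarker) {
        orphans.push(candidate.key);
    }
    return orphans;
}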


@ -1,283 +0,0 @@
'use strict'; // eslint-disable-line strict
const Delimiter = require('./delimiter').Delimiter;
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { inc, FILTER_END, FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } =
require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
class DelimiterVersions extends Delimiter {
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
// specific to version listing
this.keyMarker = parameters.keyMarker;
this.versionIdMarker = parameters.versionIdMarker;
// internal state
this.masterKey = undefined;
this.masterVersionId = undefined;
// listing results
this.NextMarker = parameters.keyMarker;
this.NextVersionIdMarker = undefined;
this.inReplayPrefix = false;
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
filter: this.filterV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
filter: this.filterV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
genMDParamsV0() {
const params = {};
if (this.parameters.prefix) {
params.gte = this.parameters.prefix;
params.lt = inc(this.parameters.prefix);
}
if (this.parameters.keyMarker) {
if (params.gte && params.gte > this.parameters.keyMarker) {
return params;
}
delete params.gte;
if (this.parameters.versionIdMarker) {
// versionIdMarker should always come with keyMarker
// but may not be the other way around
params.gt = this.parameters.keyMarker
+ VID_SEP
+ this.parameters.versionIdMarker;
} else {
params.gt = inc(this.parameters.keyMarker + VID_SEP);
}
}
return params;
}
genMDParamsV1() {
// return an array of two listing params sets to ask for
// synchronized listing of M and V ranges
const params = [{}, {}];
if (this.parameters.prefix) {
params[0].gte = DbPrefixes.Master + this.parameters.prefix;
params[0].lt = DbPrefixes.Master + inc(this.parameters.prefix);
params[1].gte = DbPrefixes.Version + this.parameters.prefix;
params[1].lt = DbPrefixes.Version + inc(this.parameters.prefix);
} else {
params[0].gte = DbPrefixes.Master;
params[0].lt = inc(DbPrefixes.Master); // stop after the last master key
params[1].gte = DbPrefixes.Version;
params[1].lt = inc(DbPrefixes.Version); // stop after the last version key
}
if (this.parameters.keyMarker) {
if (params[1].gte <= DbPrefixes.Version + this.parameters.keyMarker) {
delete params[0].gte;
delete params[1].gte;
params[0].gt = DbPrefixes.Master + inc(this.parameters.keyMarker + VID_SEP);
if (this.parameters.versionIdMarker) {
// versionIdMarker should always come with keyMarker
// but may not be the other way around
params[1].gt = DbPrefixes.Version
+ this.parameters.keyMarker
+ VID_SEP
+ this.parameters.versionIdMarker;
} else {
params[1].gt = DbPrefixes.Version
+ inc(this.parameters.keyMarker + VID_SEP);
}
}
}
return params;
}
/**
* Used to synchronize listing of M and V prefixes by object key
*
* @param {object} masterObj object listed from first range
* returned by genMDParamsV1() (the master keys range)
* @param {object} versionObj object listed from second range
* returned by genMDParamsV1() (the version keys range)
* @return {number} comparison result:
* * -1 if master key < version key
* * 1 if master key > version key
*/
compareObjects(masterObj, versionObj) {
const masterKey = masterObj.key.slice(DbPrefixes.Master.length);
const versionKey = versionObj.key.slice(DbPrefixes.Version.length);
return masterKey < versionKey ? -1 : 1;
}
/**
* Add a (key, versionId, value) tuple to the listing.
* Set the NextMarker to the current key
* Increment the keys counter
* @param {object} obj - the entry to add to the listing result
* @param {String} obj.key - The key to add
* @param {String} obj.versionId - versionId
* @param {String} obj.value - The value of the key
* @return {Boolean} - indicates if iteration should continue
*/
addContents(obj) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.Contents.push({
key: obj.key,
value: this.trimMetadata(obj.value),
versionId: obj.versionId,
});
this.NextMarker = obj.key;
this.NextVersionIdMarker = obj.versionId;
++this.keys;
return FILTER_ACCEPT;
}
/**
* Filter to apply on each iteration if bucket is in v0
* versioning key format, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV0(obj) {
if (obj.key.startsWith(DbPrefixes.Replay)) {
this.inReplayPrefix = true;
return FILTER_SKIP;
}
this.inReplayPrefix = false;
if (Version.isPHD(obj.value)) {
// return accept to avoid skipping the next values in range
return FILTER_ACCEPT;
}
return this.filterCommon(obj.key, obj.value);
}
/**
* Filter to apply on each iteration if bucket is in v1
* versioning key format, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV1(obj) {
if (Version.isPHD(obj.value)) {
// return accept to avoid skipping the next values in range
return FILTER_ACCEPT;
}
// this function receives both M and V keys, but their prefix
// length is the same so we can remove their prefix without
// looking at the type of key
return this.filterCommon(obj.key.slice(DbPrefixes.Master.length),
obj.value);
}
filterCommon(key, value) {
if (this.prefix && !key.startsWith(this.prefix)) {
return FILTER_SKIP;
}
let nonversionedKey;
let versionId = undefined;
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex < 0) {
nonversionedKey = key;
this.masterKey = key;
this.masterVersionId =
Version.from(value).getVersionId() || 'null';
versionId = this.masterVersionId;
} else {
nonversionedKey = key.slice(0, versionIdIndex);
versionId = key.slice(versionIdIndex + 1);
// skip a version key if it is the master version
if (this.masterKey === nonversionedKey && this.masterVersionId === versionId) {
return FILTER_SKIP;
}
this.masterKey = undefined;
this.masterVersionId = undefined;
}
if (this.delimiter) {
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = nonversionedKey.indexOf(this.delimiter, baseIndex);
if (delimiterIndex >= 0) {
return this.addCommonPrefix(nonversionedKey, delimiterIndex);
}
}
return this.addContents({ key: nonversionedKey, value, versionId });
}
skippingV0() {
if (this.inReplayPrefix) {
return DbPrefixes.Replay;
}
if (this.NextMarker) {
const index = this.NextMarker.lastIndexOf(this.delimiter);
if (index === this.NextMarker.length - 1) {
return this.NextMarker;
}
}
return SKIP_NONE;
}
skippingV1() {
const skipV0 = this.skippingV0();
if (skipV0 === SKIP_NONE) {
return SKIP_NONE;
}
// skip to the same object key in both M and V range listings
return [DbPrefixes.Master + skipV0,
DbPrefixes.Version + skipV0];
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done, doesn't show a NextMarker field if the output
* isn't truncated
* @return {Object} - following amazon format
*/
result() {
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
return {
CommonPrefixes: this.CommonPrefixes,
Versions: this.Contents,
IsTruncated: this.IsTruncated,
NextKeyMarker: this.IsTruncated ? this.NextMarker : undefined,
NextVersionIdMarker: this.IsTruncated ?
this.NextVersionIdMarker : undefined,
Delimiter: this.delimiter,
};
}
}
module.exports = { DelimiterVersions };


@ -0,0 +1,535 @@
'use strict'; // eslint-disable-line strict
const Extension = require('./Extension').default;
import {
FilterState,
FilterReturnValue,
} from './delimiter';
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { inc, FILTER_END, FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } =
require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
export const enum DelimiterVersionsFilterStateId {
NotSkipping = 1,
SkippingPrefix = 2,
SkippingVersions = 3,
};
export interface DelimiterVersionsFilterState_NotSkipping extends FilterState {
id: DelimiterVersionsFilterStateId.NotSkipping,
};
export interface DelimiterVersionsFilterState_SkippingPrefix extends FilterState {
id: DelimiterVersionsFilterStateId.SkippingPrefix,
prefix: string;
};
export interface DelimiterVersionsFilterState_SkippingVersions extends FilterState {
id: DelimiterVersionsFilterStateId.SkippingVersions,
gt: string;
};
type KeyHandler = (key: string, versionId: string | undefined, value: string) => FilterReturnValue;
type ResultObject = {
CommonPrefixes: string[],
Versions: {
key: string;
value: string;
versionId: string;
}[];
IsTruncated: boolean;
Delimiter ?: string;
NextKeyMarker ?: string;
NextVersionIdMarker ?: string;
};
type GenMDParamsItem = {
gt ?: string,
gte ?: string,
lt ?: string,
};
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
export class DelimiterVersions extends Extension {
state: FilterState;
keyHandlers: { [id: number]: KeyHandler };
constructor(parameters, logger, vFormat) {
super(parameters, logger);
// original listing parameters
this.delimiter = parameters.delimiter;
this.prefix = parameters.prefix;
this.maxKeys = parameters.maxKeys || 1000;
// specific to version listing
this.keyMarker = parameters.keyMarker;
this.versionIdMarker = parameters.versionIdMarker;
// internal state
this.masterKey = undefined;
this.masterVersionId = undefined;
this.nullKey = null;
this.vFormat = vFormat || BucketVersioningKeyFormat.v0;
// listing results
this.CommonPrefixes = [];
this.Versions = [];
this.IsTruncated = false;
this.nextKeyMarker = parameters.keyMarker;
this.nextVersionIdMarker = undefined;
this.keyHandlers = {};
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
getObjectKey: this.getObjectKeyV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
getObjectKey: this.getObjectKeyV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
if (this.vFormat === BucketVersioningKeyFormat.v0) {
this.setKeyHandler(
DelimiterVersionsFilterStateId.NotSkipping,
this.keyHandler_NotSkippingV0.bind(this));
} else {
this.setKeyHandler(
DelimiterVersionsFilterStateId.NotSkipping,
this.keyHandler_NotSkippingV1.bind(this));
}
this.setKeyHandler(
DelimiterVersionsFilterStateId.SkippingPrefix,
this.keyHandler_SkippingPrefix.bind(this));
this.setKeyHandler(
DelimiterVersionsFilterStateId.SkippingVersions,
this.keyHandler_SkippingVersions.bind(this));
if (this.versionIdMarker) {
this.state = <DelimiterVersionsFilterState_SkippingVersions> {
id: DelimiterVersionsFilterStateId.SkippingVersions,
gt: `${this.keyMarker}${VID_SEP}${this.versionIdMarker}`,
};
} else {
this.state = <DelimiterVersionsFilterState_NotSkipping> {
id: DelimiterVersionsFilterStateId.NotSkipping,
};
}
}
genMDParamsV0() {
const params: GenMDParamsItem = {};
if (this.prefix) {
params.gte = this.prefix;
params.lt = inc(this.prefix);
}
if (this.keyMarker && this.delimiter) {
const commonPrefix = this.getCommonPrefix(this.keyMarker);
if (commonPrefix) {
const afterPrefix = inc(commonPrefix);
if (!params.gte || afterPrefix > params.gte) {
params.gte = afterPrefix;
}
}
}
if (this.keyMarker && (!params.gte || this.keyMarker >= params.gte)) {
delete params.gte;
if (this.versionIdMarker) {
// start from the beginning of versions so we can
// check if there's a null key and fetch it
// (afterwards, we can skip the rest of versions until
// we reach versionIdMarker)
params.gte = `${this.keyMarker}${VID_SEP}`;
} else {
params.gt = `${this.keyMarker}${inc(VID_SEP)}`;
}
}
return params;
}
genMDParamsV1() {
// return an array of two listing params sets to ask for
// synchronized listing of M and V ranges
const v0Params: GenMDParamsItem = this.genMDParamsV0();
const mParams: GenMDParamsItem = {};
const vParams: GenMDParamsItem = {};
if (v0Params.gt) {
mParams.gt = `${DbPrefixes.Master}${v0Params.gt}`;
vParams.gt = `${DbPrefixes.Version}${v0Params.gt}`;
} else if (v0Params.gte) {
mParams.gte = `${DbPrefixes.Master}${v0Params.gte}`;
vParams.gte = `${DbPrefixes.Version}${v0Params.gte}`;
} else {
mParams.gte = DbPrefixes.Master;
vParams.gte = DbPrefixes.Version;
}
if (v0Params.lt) {
mParams.lt = `${DbPrefixes.Master}${v0Params.lt}`;
vParams.lt = `${DbPrefixes.Version}${v0Params.lt}`;
} else {
mParams.lt = inc(DbPrefixes.Master);
vParams.lt = inc(DbPrefixes.Version);
}
return [mParams, vParams];
}
/**
* check if the max keys count has been reached and set the
* final state of the result if it is the case
* @return {Boolean} - indicates if the iteration has to stop
*/
_reachedMaxKeys(): boolean {
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 -> IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return true;
}
return false;
}
/**
* Used to synchronize listing of M and V prefixes by object key
*
* @param {object} masterObj object listed from first range
* returned by genMDParamsV1() (the master keys range)
* @param {object} versionObj object listed from second range
* returned by genMDParamsV1() (the version keys range)
* @return {number} comparison result:
* * -1 if master key < version key
* * 1 if master key > version key
*/
compareObjects(masterObj, versionObj) {
const masterKey = masterObj.key.slice(DbPrefixes.Master.length);
const versionKey = versionObj.key.slice(DbPrefixes.Version.length);
return masterKey < versionKey ? -1 : 1;
}
/**
* Parse a listing key into its nonversioned key and version ID components
*
* @param {string} key - full listing key
* @return {object} obj
* @return {string} obj.key - nonversioned part of key
* @return {string} [obj.versionId] - version ID in the key
*/
parseKey(fullKey: string): { key: string, versionId ?: string } {
const versionIdIndex = fullKey.indexOf(VID_SEP);
if (versionIdIndex === -1) {
return { key: fullKey };
}
const nonversionedKey: string = fullKey.slice(0, versionIdIndex);
const versionId: string = fullKey.slice(versionIdIndex + 1);
return { key: nonversionedKey, versionId };
}
/**
* Include a key in the listing output, in the Versions or CommonPrefix result
*
* @param {string} key - key (without version ID)
* @param {string} versionId - version ID
* @param {string} value - metadata value
* @return {undefined}
*/
addKey(key: string, versionId: string, value: string) {
// add the subprefix to the common prefixes if the key has the delimiter
const commonPrefix = this.getCommonPrefix(key);
if (commonPrefix) {
this.addCommonPrefix(commonPrefix);
// transition into SkippingPrefix state to skip all following keys
// while they start with the same prefix
this.setState(<DelimiterVersionsFilterState_SkippingPrefix> {
id: DelimiterVersionsFilterStateId.SkippingPrefix,
prefix: commonPrefix,
});
} else {
this.addVersion(key, versionId, value);
}
}
/**
* Add a (key, versionId, value) tuple to the listing.
* Set the NextMarker to the current key
* Increment the keys counter
* @param {String} key - The key to add
* @param {String} versionId - versionId
* @param {String} value - The value of the key
* @return {undefined}
*/
addVersion(key: string, versionId: string, value: string) {
this.Versions.push({
key,
versionId,
value: this.trimMetadata(value),
});
this.nextKeyMarker = key;
this.nextVersionIdMarker = versionId;
++this.keys;
}
getCommonPrefix(key: string): string | undefined {
if (!this.delimiter) {
return undefined;
}
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex === -1) {
return undefined;
}
return key.substring(0, delimiterIndex + this.delimiter.length);
}
/**
* Add a Common Prefix in the list
* @param {String} commonPrefix - common prefix to add
* @return {undefined}
*/
addCommonPrefix(commonPrefix: string): void {
// add the new prefix to the list
this.CommonPrefixes.push(commonPrefix);
++this.keys;
this.nextKeyMarker = commonPrefix;
this.nextVersionIdMarker = undefined;
}
/**
* Cache the current null key, to save it for outputting it later at
* the correct position
*
* @param {String} key - nonversioned key of the null key
* @param {String} versionId - real version ID of the null key
* @param {String} value - value of the null key
* @return {undefined}
*/
cacheNullKey(key: string, versionId: string, value: string): void {
this.nullKey = { key, versionId, value };
}
getObjectKeyV0(obj: { key: string }): string {
return obj.key;
}
getObjectKeyV1(obj: { key: string }): string {
return obj.key.slice(DbPrefixes.Master.length);
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj: { key: string, value: string }): FilterReturnValue {
const key = this.getObjectKey(obj);
const value = obj.value;
const { key: nonversionedKey, versionId: keyVersionId } = this.parseKey(key);
if (this.nullKey) {
if (this.nullKey.key !== nonversionedKey
|| this.nullKey.versionId < <string> keyVersionId) {
this.handleKey(
this.nullKey.key, this.nullKey.versionId, this.nullKey.value);
this.nullKey = null;
}
}
if (keyVersionId === '') {
// null key
this.cacheNullKey(nonversionedKey, Version.from(value).getVersionId(), value);
if (this.state.id === DelimiterVersionsFilterStateId.SkippingVersions) {
return FILTER_SKIP;
}
return FILTER_ACCEPT;
}
return this.handleKey(nonversionedKey, keyVersionId, value);
}
setState(state: FilterState): void {
this.state = state;
}
setKeyHandler(stateId: number, keyHandler: KeyHandler): void {
this.keyHandlers[stateId] = keyHandler;
}
handleKey(key: string, versionId: string | undefined, value: string): FilterReturnValue {
return this.keyHandlers[this.state.id](key, versionId, value);
}
keyHandler_NotSkippingV0(key: string, versionId: string | undefined, value: string): FilterReturnValue {
if (key.startsWith(DbPrefixes.Replay)) {
// skip internal replay prefix entirely
this.setState(<DelimiterVersionsFilterState_SkippingPrefix> {
id: DelimiterVersionsFilterStateId.SkippingPrefix,
prefix: DbPrefixes.Replay,
});
return FILTER_SKIP;
}
if (Version.isPHD(value)) {
return FILTER_ACCEPT;
}
return this.filter_onNewKey(key, versionId, value);
}
keyHandler_NotSkippingV1(key: string, versionId: string | undefined, value: string): FilterReturnValue {
// NOTE: this check on PHD is only useful for Artesca, S3C
// does not use PHDs in V1 format
if (Version.isPHD(value)) {
return FILTER_ACCEPT;
}
return this.filter_onNewKey(key, versionId, value);
}
filter_onNewKey(key: string, versionId: string | undefined, value: string): FilterReturnValue {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
if (versionId === undefined) {
this.masterKey = key;
this.masterVersionId = Version.from(value).getVersionId() || 'null';
this.addKey(this.masterKey, this.masterVersionId, value);
} else {
if (this.masterKey === key && this.masterVersionId === versionId) {
// do not add a version key if it is the master version
return FILTER_ACCEPT;
}
this.addKey(key, versionId, value);
}
return FILTER_ACCEPT;
}
keyHandler_SkippingPrefix(key: string, versionId: string | undefined, value: string): FilterReturnValue {
const { prefix } = <DelimiterVersionsFilterState_SkippingPrefix> this.state;
if (key.startsWith(prefix)) {
return FILTER_SKIP;
}
this.setState(<DelimiterVersionsFilterState_NotSkipping> {
id: DelimiterVersionsFilterStateId.NotSkipping,
});
return this.handleKey(key, versionId, value);
}
keyHandler_SkippingVersions(key: string, versionId: string | undefined, value: string): FilterReturnValue {
if (key === this.keyMarker) {
// since the nonversioned key equals the marker, there is
// necessarily a versionId in this key
const _versionId = <string> versionId;
if (_versionId < this.versionIdMarker) {
// skip all versions until marker
return FILTER_SKIP;
}
if (_versionId === this.versionIdMarker) {
// nothing left to skip, so return ACCEPT, but don't add this version
return FILTER_ACCEPT;
}
}
this.setState(<DelimiterVersionsFilterState_NotSkipping> {
id: DelimiterVersionsFilterStateId.NotSkipping,
});
return this.handleKey(key, versionId, value);
}
skippingBase(): string | undefined {
switch (this.state.id) {
case DelimiterVersionsFilterStateId.SkippingPrefix:
const { prefix } = <DelimiterVersionsFilterState_SkippingPrefix> this.state;
return inc(prefix);
case DelimiterVersionsFilterStateId.SkippingVersions:
const { gt } = <DelimiterVersionsFilterState_SkippingVersions> this.state;
// the contract of skipping() is to return the first key
// that can be skipped to, so we append a null byte to skip
// over the existing versioned key set in 'gt'
return `${gt}\0`;
default:
return SKIP_NONE;
}
}
skippingV0() {
return this.skippingBase();
}
skippingV1() {
const skipTo = this.skippingBase();
if (skipTo === SKIP_NONE) {
return SKIP_NONE;
}
// skip to the same object key in both M and V range listings
return [
`${DbPrefixes.Master}${skipTo}`,
`${DbPrefixes.Version}${skipTo}`,
];
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done, doesn't show a NextMarker field if the output
* isn't truncated
* @return {Object} - following amazon format
*/
result() {
// Add the last null key if still in cache (when it is the
// last version of the last key)
//
// NOTE: _reachedMaxKeys sets IsTruncated to true when it
// returns true. Here we want this because either:
//
// - we did not reach the max keys yet, so the result is not
//   truncated and there is still room for the null key in
//   the results
//
// - OR we reached it already while having to process a new
// key (so the result is truncated even without the null key)
//
// - OR we are *just* below the limit but the null key to add
// does not fit, so we know the result is now truncated
// because there remains the null key to be output.
//
if (this.nullKey) {
this.handleKey(this.nullKey.key, this.nullKey.versionId, this.nullKey.value);
}
const result: ResultObject = {
CommonPrefixes: this.CommonPrefixes,
Versions: this.Versions,
IsTruncated: this.IsTruncated,
};
if (this.delimiter) {
result.Delimiter = this.delimiter;
}
if (this.IsTruncated) {
result.NextKeyMarker = this.nextKeyMarker;
if (this.nextVersionIdMarker) {
result.NextVersionIdMarker = this.nextVersionIdMarker;
}
}
return result;
}
}
module.exports = { DelimiterVersions };
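
A hedged sketch of paging through a complete version listing with the markers returned by result(). fetchRange is an assumed helper that runs the database query described by the params from genMDParams(); everything else uses the class as defined above.

const { FILTER_END } = require('./tools');

async function listAllVersions(fetchRange, logger) {
    let keyMarker;
    let versionIdMarker;
    const versions = [];
    for (;;) {
        const listing = new DelimiterVersions(
            { maxKeys: 1000, keyMarker, versionIdMarker }, logger, 'v0');
        for (const entry of await fetchRange(listing.genMDParams())) {
            if (listing.filter(entry) === FILTER_END) {
                break;
            }
        }
        const res = listing.result();
        versions.push(...res.Versions);
        if (!res.IsTruncated) {
            return versions;
        }
        keyMarker = res.NextKeyMarker;
        versionIdMarker = res.NextVersionIdMarker;
    }
}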


@ -6,4 +6,7 @@ module.exports = {
     DelimiterMaster: require('./delimiterMaster')
         .DelimiterMaster,
     MPU: require('./MPU').MultipartUploads,
+    DelimiterCurrent: require('./delimiterCurrent').DelimiterCurrent,
+    DelimiterNonCurrent: require('./delimiterNonCurrent').DelimiterNonCurrent,
+    DelimiterOrphanDeleteMarker: require('./delimiterOrphanDeleteMarker').DelimiterOrphanDeleteMarker,
 };


@ -52,21 +52,21 @@ class Skip {
         assert(this.skipRangeCb);
         const filteringResult = this.extension.filter(entry);
-        const skippingRange = this.extension.skipping();
+        const skipTo = this.extension.skipping();
         if (filteringResult === FILTER_END) {
             this.listingEndCb();
         } else if (filteringResult === FILTER_SKIP
-            && skippingRange !== SKIP_NONE) {
+            && skipTo !== SKIP_NONE) {
             if (++this.streakLength >= MAX_STREAK_LENGTH) {
                 let newRange;
-                if (Array.isArray(skippingRange)) {
+                if (Array.isArray(skipTo)) {
                     newRange = [];
-                    for (let i = 0; i < skippingRange.length; ++i) {
-                        newRange.push(this._inc(skippingRange[i]));
+                    for (let i = 0; i < skipTo.length; ++i) {
+                        newRange.push(skipTo[i]);
                     }
                 } else {
-                    newRange = this._inc(skippingRange);
+                    newRange = skipTo;
                 }
                 /* Avoid to loop on the same range again and again. */
                 if (newRange === this.gteParams) {
@ -79,16 +79,6 @@ class Skip {
             this.streakLength = 0;
         }
     }
-    _inc(str) {
-        if (!str) {
-            return str;
-        }
-        const lastCharValue = str.charCodeAt(str.length - 1);
-        const lastCharNewValue = String.fromCharCode(lastCharValue + 1);
-        return `${str.slice(0, str.length - 1)}${lastCharNewValue}`;
-    }
 }
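
The removed _inc() helper duplicated logic that now lives with the listing extensions: skipping() returns a key that can be skipped to directly (for example inc(prefix), or the versioned key with a null byte appended), so Skip no longer post-increments the range. A minimal sketch of the increment convention, assuming inc() from ./tools bumps the last character exactly as the removed helper did:

const assert = require('assert');

function inc(str) {
    // increment the code point of the last character, e.g. 'foo/' -> 'foo0'
    return str
        ? str.slice(0, -1) + String.fromCharCode(str.charCodeAt(str.length - 1) + 1)
        : str;
}

assert.strictEqual(inc('foo/'), 'foo0'); // '/' (0x2f) becomes '0' (0x30)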


@ -14,7 +14,7 @@ function vaultSignatureCb(
     err: Error | null,
     authInfo: { message: { body: any } },
     log: Logger,
-    callback: (err: Error | null, data?: any, results?: any, params?: any) => void,
+    callback: (err: Error | null, data?: any, results?: any, params?: any, infos?: any) => void,
     streamingV4Params?: any
 ) {
     // vaultclient API guarantees that it returns:
@ -38,7 +38,9 @@ function vaultSignatureCb(
     }
     // @ts-ignore
     log.addDefaultFields(auditLog);
-    return callback(null, userInfo, authorizationResults, streamingV4Params);
+    return callback(null, userInfo, authorizationResults, streamingV4Params, {
+        accountQuota: info.accountQuota || {},
+    });
 }
 export type AuthV4RequestParams = {
@ -384,4 +386,19 @@ export default class Vault {
             return callback(null, respBody);
         });
     }
+    report(log: Logger, callback: (err: Error | null, data?: any) => void) {
+        // call the report function of the client
+        if (!this.client.report) {
+            return callback(null, {});
+        }
+        // @ts-ignore
+        return this.client.report(log.getSerializedUids(), (err: Error | null, obj?: any) => {
+            if (err) {
+                log.debug(`error from ${this.implName}`, { error: err });
+                return callback(err);
+            }
+            return callback(null, obj);
+        });
+    }
 }


@ -163,6 +163,20 @@ function doAuth(
     return cb(errors.InternalError);
 }
+/**
+ * This function will generate a version 4 content-md5 header
+ * It looks at the request path to determine what kind of header encoding is required
+ *
+ * @param path - the request path
+ * @param payload - the request payload to hash
+ */
+function generateContentMD5Header(
+    path: string,
+    payload: string,
+) {
+    const encoding = path && path.startsWith('/_/backbeat/') ? 'hex' : 'base64';
+    return crypto.createHash('md5').update(payload, 'binary').digest(encoding);
+}
 /**
  * This function will generate a version 4 header
  *
@ -175,6 +189,7 @@ function doAuth(
  * @param [proxyPath] - path that gets proxied by reverse proxy
  * @param [sessionToken] - security token if the access/secret keys
  * are temporary credentials from STS
+ * @param [payload] - body of the request if any
  */
 function generateV4Headers(
     request: any,
@ -182,8 +197,9 @@ function generateV4Headers(
     accessKey: string,
     secretKeyValue: string,
     awsService: string,
-    proxyPath: string,
-    sessionToken: string
+    proxyPath?: string,
+    sessionToken?: string,
+    payload?: string,
 ) {
     Object.assign(request, { headers: {} });
     const amzDate = convertUTCtoISO8601(Date.now());
@ -196,7 +212,7 @@ function generateV4Headers(
     const timestamp = amzDate;
     const algorithm = 'AWS4-HMAC-SHA256';
-    let payload = '';
+    payload = payload || '';
     if (request.method === 'POST') {
         payload = queryString.stringify(data, undefined, undefined, {
             encodeURIComponent,
@ -207,6 +223,7 @@ function generateV4Headers(
     request.setHeader('host', request._headers.host);
     request.setHeader('x-amz-date', amzDate);
     request.setHeader('x-amz-content-sha256', payloadChecksum);
+    request.setHeader('content-md5', generateContentMD5Header(request.path, payload));
     if (sessionToken) {
         request.setHeader('x-amz-security-token', sessionToken);
@ -217,6 +234,7 @@ function generateV4Headers(
         .filter(headerName =>
             headerName.startsWith('x-amz-')
             || headerName.startsWith('x-scal-')
+            || headerName === 'content-md5'
             || headerName === 'host',
         ).sort().join(';');
     const params = { request, signedHeaders, payloadChecksum,


@ -133,23 +133,37 @@ export default class ChainBackend extends BaseBackend {
                 return;
             }
-            resp.message.body.forEach(policy => {
-                const key = (policy.arn || '') + (policy.versionId || '');
+            const check = (policy) => {
+                const key = (policy.arn || '') + (policy.versionId || '') + (policy.action || '');
                 if (!policyMap[key] || !policyMap[key].isAllowed) {
                     policyMap[key] = policy;
                 }
                 // else is duplicate policy
-            });
+            };
+            resp.message.body.forEach(policy => {
+                if (Array.isArray(policy)) {
+                    policy.forEach(authResult => check(authResult));
+                } else {
+                    check(policy);
+                }
+            });
         });
         return Object.keys(policyMap).map(key => {
-            const policyRes:any = { isAllowed: policyMap[key].isAllowed };
+            const policyRes: any = { isAllowed: policyMap[key].isAllowed };
             if (policyMap[key].arn !== '') {
                 policyRes.arn = policyMap[key].arn;
             }
             if (policyMap[key].versionId) {
                 policyRes.versionId = policyMap[key].versionId;
             }
+            if (policyMap[key].isImplicit !== undefined) {
+                policyRes.isImplicit = policyMap[key].isImplicit;
+            }
+            if (policyMap[key].action) {
+                policyRes.action = policyMap[key].action;
+            }
             return policyRes;
         });
     }
@ -198,4 +212,22 @@ export default class ChainBackend extends BaseBackend {
             return callback(null, res);
         });
     }
+    report(reqUid: string, callback: any) {
+        this._forEachClient((client, done) =>
+            client.report(reqUid, done),
+        (err, res) => {
+            if (err) {
+                return callback(err);
+            }
+            const mergedRes = res.reduce((acc, val) => {
+                Object.keys(val).forEach(k => {
+                    acc[k] = val[k];
+                });
+                return acc;
+            }, {});
+            return callback(null, mergedRes);
+        });
+    }
 }
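
An illustrative sketch of the merge behavior after this change (a simplification, not the actual module): including action in the deduplication key keeps results for the same ARN but different actions separate, while an allowed result still wins over a denied one for the same (arn, versionId, action) triple; nested arrays of auth results are flattened first.

function mergePolicyResults(bodies) {
    const policyMap = {};
    for (const policy of bodies.flat()) {
        const key = (policy.arn || '') + (policy.versionId || '') + (policy.action || '');
        // a denied entry may be replaced by a later result;
        // an allowed entry is kept once stored
        if (!policyMap[key] || !policyMap[key].isAllowed) {
            policyMap[key] = policy;
        }
    }
    return Object.values(policyMap);
}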


@ -4,8 +4,7 @@ import joi from 'joi';
 import werelogs from 'werelogs';
 import * as types from './types';
 import { Account, Accounts } from './types';
-const ARN = require('../../../models/ARN');
+import ARN from '../../../models/ARN';
 /** Load authentication information from files or pre-loaded account objects */
 export default class AuthLoader {

@ -161,6 +161,10 @@ class InMemoryBackend extends BaseBackend {
         };
         return cb(null, vaultReturnObject);
     }
+    report(log: Logger, callback: any) {
+        return callback(null, {});
+    }
 }


@ -35,15 +35,16 @@ export default function awsURIencode(
     encodeSlash?: boolean,
     noEncodeStar?: boolean
 ) {
-    const encSlash = encodeSlash === undefined ? true : encodeSlash;
-    let encoded = '';
     /**
      * Duplicate query params are not supported by AWS S3 APIs. These params
      * are parsed as Arrays by Node.js HTTP parser which breaks this method
      */
     if (typeof input !== 'string') {
-        return encoded;
+        return '';
     }
+    let encoded = "";
+    const slash = encodeSlash === undefined || encodeSlash ? '%2F' : '/';
+    const star = noEncodeStar !== undefined && noEncodeStar ? '*' : '%2A';
     for (let i = 0; i < input.length; i++) {
         let ch = input.charAt(i);
         if ((ch >= 'A' && ch <= 'Z') ||
@ -55,9 +56,9 @@ export default function awsURIencode(
         } else if (ch === ' ') {
             encoded = encoded.concat('%20');
         } else if (ch === '/') {
-            encoded = encoded.concat(encSlash ? '%2F' : ch);
+            encoded = encoded.concat(slash);
         } else if (ch === '*') {
-            encoded = encoded.concat(noEncodeStar ? '*' : '%2A');
+            encoded = encoded.concat(star);
         } else {
             if (ch >= '\uD800' && ch <= '\uDBFF') {
                 // If this character is a high surrogate peek the next character

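The net behavior is unchanged; the flags are just resolved once before the loop. A few illustrative calls (sample inputs are mine; the unreserved-character branch is elided in this hunk, so outputs assume standard SigV4 encoding):

import awsURIencode from './lib/auth/v4/awsURIencode'; // path assumed

awsURIencode('photos/2024 01.jpg');        // => 'photos%2F2024%2001.jpg'
awsURIencode('photos/2024 01.jpg', false); // => 'photos/2024%2001.jpg' (slashes kept)
awsURIencode('report*.csv');               // => 'report%2A.csv'
awsURIencode('report*.csv', true, true);   // => 'report*.csv' (star kept)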

@@ -7,6 +7,15 @@ import { Callback } from '../../backends/in_memory/types';
 import constructChunkStringToSign from './constructChunkStringToSign';

+export type TransformParams = {
+    accessKey: string;
+    signatureFromRequest: string;
+    region: string;
+    scopeDate: string;
+    timestamp: string;
+    credentialScope: string;
+};
+
 /**
  * This class is designed to handle the chunks sent in a streaming
  * v4 Auth request
@@ -48,14 +57,7 @@ export default class V4Transform extends Transform {
      * @param cb - callback to api
      */
     constructor(
-        streamingV4Params: {
-            accessKey: string;
-            signatureFromRequest: string;
-            region: string;
-            scopeDate: string;
-            timestamp: string;
-            credentialScope: string;
-        },
+        streamingV4Params: TransformParams,
         vault: Vault,
         log: Logger,
         cb: Callback,


@@ -0,0 +1,569 @@
import cluster, { Worker } from 'cluster';
import * as werelogs from 'werelogs';
import { default as errors } from '../../lib/errors';
const rpcLogger = new werelogs.Logger('ClusterRPC');
/**
* Remote procedure calls support between cluster workers.
*
* When using the cluster module, new processes are forked and are
* dispatched workloads, usually HTTP requests. The ClusterRPC module
 * implements an RPC system to send commands to all cluster worker
* processes at once from any particular worker, and retrieve their
* individual command results, like a distributed map operation.
*
 * The existing Node.js cluster IPC channel is set up from the primary
 * to each worker, but not between workers, so there has to be a hop
 * through the primary.
*
* How a command is treated:
*
* - a worker sends a command message to the primary
*
* - the primary then forwards that command to each existing worker
* (including the requestor)
*
* - each worker then executes the command and returns a result or an
* error
*
* - the primary gathers all workers results into an array
*
* - finally, the primary dispatches the results array to the original
* requesting worker
*
*
* Limitations:
*
* - The command payload must be serializable, which means that:
* - it should not contain circular references
* - it should be of a reasonable size to be sent in a single RPC message
*
 * - The "toWorkers" parameter of value "*" targets the set of workers
 * that are available at the time the command is dispatched. Any new
 * worker spawned after the command has been dispatched for
 * processing, but before the command completes, does not execute
 * the command and hence is not part of the results array.
*
*
* To set it up:
*
* - On the primary:
* if (cluster.isPrimary) {
* setupRPCPrimary();
* }
*
* - On the workers:
* if (!cluster.isPrimary) {
* setupRPCWorker({
* handler1: (payload: object, uids: string, callback: HandlerCallback) => void,
* handler2: ...
* });
* }
* Handler functions will be passed the command payload, request
* serialized uids, and must call the callback when the worker is done
* processing the command:
* callback(error: Error | null | undefined, result?: any)
*
* When this setup is done, any worker can start sending commands by calling
* the async function sendWorkerCommand().
*/
// exported types
export type ResultObject = {
error: Error | null;
result: any;
};
/**
* saved Promise for sendWorkerCommand
*/
export type CommandPromise = {
resolve: (results?: ResultObject[]) => void;
reject: (error: Error) => void;
timeout: NodeJS.Timeout | null;
};
export type HandlerCallback = (error: (Error & { code?: number }) | null | undefined, result?: any) => void;
export type HandlerFunction = (payload: object, uids: string, callback: HandlerCallback) => void;
export type HandlersMap = {
[index: string]: HandlerFunction;
};
export type PrimaryHandlerFunction = (worker: Worker, payload: object, uids: string, callback: HandlerCallback) => void;
export type PrimaryHandlersMap = Record<string, PrimaryHandlerFunction>;
// private types
type RPCMessage<T extends string, P> = {
type: T;
uids: string;
payload: P;
};
type RPCCommandMessage = RPCMessage<'cluster-rpc:command', any> & {
toWorkers: string;
toHandler: string;
};
type MarshalledResultObject = {
error: string | null;
errorCode?: number;
result: any;
};
type RPCCommandResultMessage = RPCMessage<'cluster-rpc:commandResult', MarshalledResultObject>;
type RPCCommandResultsMessage = RPCMessage<'cluster-rpc:commandResults', {
results: MarshalledResultObject[];
}>;
type RPCCommandErrorMessage = RPCMessage<'cluster-rpc:commandError', {
error: string;
}>;
interface RPCSetupOptions {
/**
* As werelogs is not a peerDependency, arsenal and a parent project
* might have their own separate versions duplicated in dependencies.
* The config are therefore not shared.
* Use this to propagate werelogs config to arsenal's ClusterRPC.
*/
werelogsConfig?: Parameters<typeof werelogs.configure>[0];
};
/**
* In primary: store worker IDs that are waiting to be dispatched
* their command's results, as a mapping.
*/
const uidsToWorkerId: {
[index: string]: number;
} = {};
/**
* In primary: store worker responses for commands in progress as a
* mapping.
*
* Result objects are 'null' while the worker is still processing the
* command. When a worker finishes processing it stores the result as:
* {
* error: string | null,
* result: any
* }
*/
const uidsToCommandResults: {
[index: string]: {
[index: number]: MarshalledResultObject | null;
};
} = {};
/**
* In workers: store promise callbacks for commands waiting to be
* dispatched, as a mapping.
*/
const uidsToCommandPromise: {
[index: string]: CommandPromise;
} = {};
function _isRpcMessage(message) {
return (message !== null &&
typeof message === 'object' &&
typeof message.type === 'string' &&
message.type.startsWith('cluster-rpc:'));
}
/**
* Setup cluster RPC system on the primary
*
* @param {object} [handlers] - mapping of handler names to handler functions
* handler function:
* `handler({Worker} worker, {object} payload, {string} uids, {function} callback)`
* handler callback must be called when worker is done with the command:
* `callback({Error|null} error, {any} [result])`
* @return {undefined}
*/
export function setupRPCPrimary(handlers?: PrimaryHandlersMap, options?: RPCSetupOptions) {
if (options?.werelogsConfig) {
werelogs.configure(options.werelogsConfig);
}
cluster.on('message', (worker, message) => {
if (_isRpcMessage(message)) {
_handlePrimaryMessage(worker, message, handlers);
}
});
}
/**
* Setup RPCs on a cluster worker process
*
* @param {object} handlers - mapping of handler names to handler functions
* handler function:
* handler({object} payload, {string} uids, {function} callback)
* handler callback must be called when worker is done with the command:
* callback({Error|null} error, {any} [result])
* @return {undefined}
*/
export function setupRPCWorker(handlers: HandlersMap, options?: RPCSetupOptions) {
if (!process.send) {
throw new Error('fatal: cannot setup cluster RPC: "process.send" is not available');
}
if (options?.werelogsConfig) {
werelogs.configure(options.werelogsConfig);
}
process.on('message', (message: RPCCommandMessage | RPCCommandResultsMessage) => {
if (_isRpcMessage(message)) {
_handleWorkerMessage(message, handlers);
}
});
}
/**
* Send a command for workers to execute in parallel, and wait for results
*
* @param {string} toWorkers - which workers should execute the command
* Currently the supported values are:
* - "*", meaning all workers will execute the command
* - "PRIMARY", meaning primary process will execute the command
* @param {string} toHandler - name of handler that will execute the
* command in workers, as declared in setupRPCWorker() parameter object
* @param {string} uids - unique identifier of the command, must be
* unique across all commands in progress
* @param {object} payload - message payload, sent as-is to the handler
* @param {number} [timeoutMs=60000] - timeout the command with a
* "RequestTimeout" error after this number of milliseconds - set to 0
* to disable timeouts (the command may then hang forever)
* @returns {Promise}
*/
export async function sendWorkerCommand(
toWorkers: string,
toHandler: string,
uids: string,
payload: object,
timeoutMs: number = 60000
) {
if (typeof uids !== 'string') {
rpcLogger.error('missing or invalid "uids" field', { uids });
throw errors.MissingParameter;
}
if (uidsToCommandPromise[uids] !== undefined) {
rpcLogger.error('a command is already in progress with same uids', { uids });
throw errors.OperationAborted;
}
rpcLogger.info('sending command', { toWorkers, toHandler, uids, payload });
return new Promise((resolve, reject) => {
let timeout: NodeJS.Timeout | null = null;
if (timeoutMs) {
timeout = setTimeout(() => {
delete uidsToCommandPromise[uids];
reject(errors.RequestTimeout);
}, timeoutMs);
}
uidsToCommandPromise[uids] = { resolve, reject, timeout };
const message: RPCCommandMessage = {
type: 'cluster-rpc:command',
toWorkers,
toHandler,
uids,
payload,
};
return process.send?.(message);
});
}
/**
* Get the number of commands in flight
* @returns {number}
*/
export function getPendingCommandsCount() {
return Object.keys(uidsToCommandPromise).length;
}
function _dispatchCommandResultsToWorker(
worker: Worker,
uids: string,
resultsArray: MarshalledResultObject[]
): void {
const message: RPCCommandResultsMessage = {
type: 'cluster-rpc:commandResults',
uids,
payload: {
results: resultsArray,
},
};
worker.send(message);
}
function _dispatchCommandErrorToWorker(
worker: Worker,
uids: string,
error: Error,
): void {
const message: RPCCommandErrorMessage = {
type: 'cluster-rpc:commandError',
uids,
payload: {
error: error.message,
},
};
worker.send(message);
}
function _sendPrimaryCommandResult(
worker: Worker,
uids: string,
error: (Error & { code?: number }) | null | undefined,
result?: any
): void {
const message: RPCCommandResultsMessage = {
type: 'cluster-rpc:commandResults',
uids,
payload: {
results: [{ error: error?.message || null, errorCode: error?.code, result }],
},
};
worker.send?.(message);
}
function _handlePrimaryCommandMessage(
fromWorker: Worker,
logger: any,
message: RPCCommandMessage,
handlers?: PrimaryHandlersMap
): void {
const { toWorkers, toHandler, uids, payload } = message;
if (toWorkers === '*') {
if (uidsToWorkerId[uids] !== undefined) {
logger.warn('new command already has a waiting worker with same uids', {
uids, workerId: uidsToWorkerId[uids],
});
return undefined;
}
const commandResults = {};
for (const workerId of Object.keys(cluster.workers || {})) {
commandResults[workerId] = null;
}
uidsToWorkerId[uids] = fromWorker?.id;
uidsToCommandResults[uids] = commandResults;
for (const [workerId, worker] of Object.entries(cluster.workers || {})) {
logger.debug('sending command message to worker', {
workerId, toHandler, payload,
});
if (worker) {
worker.send(message);
}
}
} else if (toWorkers === 'PRIMARY') {
const { toHandler, uids, payload } = message;
const cb: HandlerCallback = (err, result) => _sendPrimaryCommandResult(fromWorker, uids, err, result);
if (toHandler in (handlers || {})) {
return handlers![toHandler](fromWorker, payload, uids, cb);
}
logger.error('no such handler in "toHandler" field from worker command message', {
toHandler,
});
return cb(errors.NotImplemented);
} else {
logger.error('unsupported "toWorkers" field from worker command message', {
toWorkers,
});
if (fromWorker) {
_dispatchCommandErrorToWorker(fromWorker, uids, errors.NotImplemented);
}
}
}
function _handlePrimaryCommandResultMessage(
fromWorkerId: number,
logger: any,
message: RPCCommandResultMessage
): void {
const { uids, payload } = message;
const commandResults = uidsToCommandResults[uids];
if (!commandResults) {
logger.warn('received command response message from worker for command not in flight', {
workerId: fromWorkerId,
uids,
});
return undefined;
}
if (commandResults[fromWorkerId] === undefined) {
logger.warn('received command response message with unexpected worker ID', {
workerId: fromWorkerId,
uids,
});
return undefined;
}
if (commandResults[fromWorkerId] !== null) {
logger.warn('ignoring duplicate command response from worker', {
workerId: fromWorkerId,
uids,
});
return undefined;
}
commandResults[fromWorkerId] = payload;
const commandResultsArray = Object.values(commandResults);
if (commandResultsArray.every(response => response !== null)) {
logger.debug('all workers responded to command', { uids });
const completeCommandResultsArray = <MarshalledResultObject[]> commandResultsArray;
const toWorkerId = uidsToWorkerId[uids];
const toWorker = cluster.workers?.[toWorkerId];
delete uidsToCommandResults[uids];
delete uidsToWorkerId[uids];
if (!toWorker) {
logger.warn('worker shut down while its command was executing', {
workerId: toWorkerId, uids,
});
return undefined;
}
// send back response to original worker
_dispatchCommandResultsToWorker(toWorker, uids, completeCommandResultsArray);
}
}
function _handlePrimaryMessage(
fromWorker: Worker,
message: RPCCommandMessage | RPCCommandResultMessage,
handlers?: PrimaryHandlersMap
): void {
const { type: messageType, uids } = message;
const logger = rpcLogger.newRequestLoggerFromSerializedUids(uids);
logger.debug('primary received message from worker', {
workerId: fromWorker?.id, rpcMessage: message,
});
if (messageType === 'cluster-rpc:command') {
return _handlePrimaryCommandMessage(fromWorker, logger, message, handlers);
}
if (messageType === 'cluster-rpc:commandResult') {
return _handlePrimaryCommandResultMessage(fromWorker?.id, logger, message);
}
logger.error('unsupported message type', {
workerId: fromWorker?.id, messageType, uids,
});
return undefined;
}
function _sendWorkerCommandResult(
uids: string,
error: Error | null | undefined,
result?: any
): void {
const message: RPCCommandResultMessage = {
type: 'cluster-rpc:commandResult',
uids,
payload: {
error: error ? error.message : null,
result,
},
};
process.send?.(message);
}
function _handleWorkerCommandMessage(
logger: any,
message: RPCCommandMessage,
handlers: HandlersMap
): void {
const { toHandler, uids, payload } = message;
const cb: HandlerCallback = (err, result) => _sendWorkerCommandResult(uids, err, result);
if (toHandler in handlers) {
return handlers[toHandler](payload, uids, cb);
}
logger.error('no such handler in "toHandler" field from worker command message', {
toHandler,
});
return cb(errors.NotImplemented);
}
function _handleWorkerCommandResultsMessage(
logger: any,
message: RPCCommandResultsMessage,
): void {
const { uids, payload } = message;
const { results } = payload;
const commandPromise: CommandPromise = uidsToCommandPromise[uids];
if (commandPromise === undefined) {
logger.error('missing promise for command results', { uids, payload });
return undefined;
}
if (commandPromise.timeout) {
clearTimeout(commandPromise.timeout);
}
delete uidsToCommandPromise[uids];
const unmarshalledResults = results.map(workerResult => {
let workerError: Error | null = null;
if (workerResult.error) {
if (workerResult.error in errors) {
workerError = errors[workerResult.error];
} else {
workerError = new Error(workerResult.error);
}
}
if (workerError && workerResult.errorCode) {
(workerError as Error & { code: number }).code = workerResult.errorCode;
}
const unmarshalledResult: ResultObject = {
error: workerError,
result: workerResult.result,
};
return unmarshalledResult;
});
return commandPromise.resolve(unmarshalledResults);
}
function _handleWorkerCommandErrorMessage(
logger: any,
message: RPCCommandErrorMessage,
): void {
const { uids, payload } = message;
const { error } = payload;
const commandPromise: CommandPromise = uidsToCommandPromise[uids];
if (commandPromise === undefined) {
logger.error('missing promise for command results', { uids, payload });
return undefined;
}
if (commandPromise.timeout) {
clearTimeout(commandPromise.timeout);
}
delete uidsToCommandPromise[uids];
let commandError: Error | null = null;
if (error in errors) {
commandError = errors[error];
} else {
commandError = new Error(error);
}
return commandPromise.reject(<Error> commandError);
}
function _handleWorkerMessage(
message: RPCCommandMessage | RPCCommandResultsMessage | RPCCommandErrorMessage,
handlers: HandlersMap
): void {
const { type: messageType, uids } = message;
const workerId = cluster.worker?.id;
const logger = rpcLogger.newRequestLoggerFromSerializedUids(uids);
logger.debug('worker received message from primary', {
workerId, rpcMessage: message,
});
if (messageType === 'cluster-rpc:command') {
return _handleWorkerCommandMessage(logger, message, handlers);
}
if (messageType === 'cluster-rpc:commandResults') {
return _handleWorkerCommandResultsMessage(logger, message);
}
if (messageType === 'cluster-rpc:commandError') {
return _handleWorkerCommandErrorMessage(logger, message);
}
logger.error('unsupported message type', {
workerId, messageType,
});
return undefined;
}
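For orientation, a minimal usage sketch of the module above. The 'ping' handler name, the fork count, and the import path are assumptions for illustration, not part of the diff:

import cluster from 'cluster';
import {
    setupRPCPrimary,
    setupRPCWorker,
    sendWorkerCommand,
} from './lib/clustering/ClusterRPC'; // path assumed

if (cluster.isPrimary) {
    setupRPCPrimary();
    cluster.fork();
    cluster.fork();
} else {
    // register a hypothetical 'ping' handler on every worker
    setupRPCWorker({
        ping: (payload, uids, callback) => callback(null, { pid: process.pid }),
    });
    // any worker can then broadcast a command and collect one result per worker
    sendWorkerCommand('*', 'ping', `ping-${process.pid}-${Date.now()}`, {}, 5000)
        .then(results => console.log('ping results:', results))
        .catch(err => console.error('ping failed:', err));
}

Note the `uids` argument doubles as the in-flight command key, hence the unique string built from the PID and timestamp.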


@@ -35,7 +35,13 @@ export const emptyFileMd5 = 'd41d8cd98f00b204e9800998ecf8427e';
 // Version 4 add the Creation-Time and Content-Language attributes,
 // and add support for x-ms-meta-* headers in UserMetadata
 // Version 5 adds the azureInfo structure
-export const mdModelVersion = 5;
+// Version 6 adds a "deleted" flag that is updated to true before
+// the object gets deleted. This is done to keep object metadata in the
+// oplog when deleting the object, as oplog deletion events don't contain
+// any metadata of the object.
+// Version 6 also adds the "isPHD" flag that is used to indicate that the master
+// object is a placeholder and is not up to date.
+export const mdModelVersion = 6;
 /*
  * Splitter is used to build the object name for the overview of a
  * multipart upload and to build the object names for each part of a
@@ -131,6 +137,14 @@ export const supportedNotificationEvents = new Set([
     's3:ObjectTagging:Put',
     's3:ObjectTagging:Delete',
     's3:ObjectAcl:Put',
+    's3:ObjectRestore:*',
+    's3:ObjectRestore:Post',
+    's3:ObjectRestore:Completed',
+    's3:ObjectRestore:Delete',
+    's3:LifecycleTransition',
+    's3:LifecycleExpiration:*',
+    's3:LifecycleExpiration:DeleteMarkerCreated',
+    's3:LifecycleExpiration:Delete',
 ]);
 export const notificationArnPrefix = 'arn:scality:bucketnotif';
 // HTTP server keep-alive timeout is set to a higher value than
@@ -149,7 +163,15 @@ export const supportedLifecycleRules = [
     'expiration',
     'noncurrentVersionExpiration',
     'abortIncompleteMultipartUpload',
+    'transitions',
+    'noncurrentVersionTransition',
 ];
 // Maximum number of buckets to cache (bucket metadata)
 export const maxCachedBuckets = process.env.METADATA_MAX_CACHED_BUCKETS ?
     Number(process.env.METADATA_MAX_CACHED_BUCKETS) : 1000;
+
+export const validRestoreObjectTiers = new Set(['Expedited', 'Standard', 'Bulk']);
+export const maxBatchingConcurrentOperations = 5;
+
+/** For policy resource arn check we allow empty account ID to not break compatibility */
+export const policyArnAllowedEmptyAccountId = ['utapi', 'scuba'];


@@ -1,7 +1,3 @@
-'use strict'; // eslint-disable-line strict
-
-const writeOptions = { sync: true };
-
 /**
  * Like Error, but with a property set to true.
  * TODO: this is copied from kineticlib, should consolidate with the
@@ -14,29 +10,36 @@ const writeOptions = { sync: true };
  * use:
  *     throw propError("badTypeInput", "input is not a buffer");
  *
- * @param {String} propName - the property name.
- * @param {String} message - the Error message.
- * @returns {Error} the Error object.
+ * @param propName - the property name.
+ * @param message - the Error message.
+ * @returns the Error object.
  */
-function propError(propName, message) {
+function propError(propName: string, message: string): Error {
     const err = new Error(message);
     err[propName] = true;
+    // @ts-ignore
+    err.is = { [propName]: true };
     return err;
 }

 /**
  * Running transaction with multiple updates to be committed atomically
  */
-class IndexTransaction {
+export class IndexTransaction {
+    operations: { type: 'put' | 'del'; key: string; value?: any }[];
+    db: any;
+    closed: boolean;
+    conditions: { [key: string]: string }[];
+
     /**
      * Builds a new transaction
      *
      * @argument {Leveldb} db an open database to which the updates
      * will be applied
      *
-     * @returns {IndexTransaction} a new empty transaction
+     * @returns a new empty transaction
      */
-    constructor(db) {
+    constructor(db: any) {
         this.operations = [];
         this.db = db;
         this.closed = false;
@@ -46,30 +49,34 @@ class IndexTransaction {
     /**
      * Adds a new operation to participate in this running transaction
      *
-     * @argument {object} op an object with the following attributes:
+     * @argument op an object with the following attributes:
      *                    {
      *                      type: 'put' or 'del',
      *                      key: the object key,
      *                      value: (optional for del) the value to store,
      *                    }
      *
-     * @throws {Error} an error described by the following properties
+     * @throws an error described by the following properties
      *                 - invalidTransactionVerb if op is not put or del
      *                 - pushOnCommittedTransaction if already committed
      *                 - missingKey if the key is missing from the op
      *                 - missingValue if putting without a value
-     *
-     * @returns {undefined}
      */
-    push(op) {
+    push(op: { type: 'put'; key: string; value: any }): void;
+    push(op: { type: 'del'; key: string }): void;
+    push(op: { type: 'put' | 'del'; key: string; value?: any }): void {
         if (this.closed) {
-            throw propError('pushOnCommittedTransaction',
-                'can not add ops to already committed transaction');
+            throw propError(
+                'pushOnCommittedTransaction',
+                'can not add ops to already committed transaction'
+            );
         }

         if (op.type !== 'put' && op.type !== 'del') {
-            throw propError('invalidTransactionVerb',
-                `unknown action type: ${op.type}`);
+            throw propError(
+                'invalidTransactionVerb',
+                `unknown action type: ${op.type}`
+            );
         }

         if (op.key === undefined) {
@@ -93,57 +100,59 @@ class IndexTransaction {
      *                 - pushOnCommittedTransaction if already committed
      *                 - missingKey if the key is missing from the op
      *                 - missingValue if putting without a value
-     *
-     * @returns {undefined}
-     *
      * @see push
      */
-    put(key, value) {
+    put(key: string, value: any) {
         this.push({ type: 'put', key, value });
     }

     /**
      * Adds a new del operation to this running transaction
      *
-     * @argument {string} key - the key of the object to delete
+     * @argument key - the key of the object to delete
      *
-     * @throws {Error} an error described by the following properties
+     * @throws an error described by the following properties
      *                 - pushOnCommittedTransaction if already committed
      *                 - missingKey if the key is missing from the op
      *
-     * @returns {undefined}
-     *
      * @see push
      */
-    del(key) {
+    del(key: string) {
         this.push({ type: 'del', key });
     }

     /**
      * Adds a condition for the transaction
      *
-     * @argument {object} condition an object with the following attributes:
+     * @argument condition an object with the following attributes:
      *                    {
      *                      <condition>: the object key
      *                    }
      *                    example: { notExists: 'key1' }
      *
-     * @throws {Error} an error described by the following properties
+     * @throws an error described by the following properties
      *                 - pushOnCommittedTransaction if already committed
      *                 - missingCondition if the condition is empty
-     *
-     * @returns {undefined}
      */
-    addCondition(condition) {
+    addCondition(condition: { [key: string]: string }) {
         if (this.closed) {
-            throw propError('pushOnCommittedTransaction',
-                'can not add conditions to already committed transaction');
+            throw propError(
+                'pushOnCommittedTransaction',
+                'can not add conditions to already committed transaction'
+            );
         }
         if (condition === undefined || Object.keys(condition).length === 0) {
-            throw propError('missingCondition', 'missing condition for conditional put');
+            throw propError(
+                'missingCondition',
+                'missing condition for conditional put'
+            );
         }
-        if (typeof (condition.notExists) !== 'string') {
-            throw propError('unsupportedConditionalOperation', 'missing key or supported condition');
+        if (typeof condition.notExists !== 'string' && typeof condition.exists !== 'string') {
+            throw propError(
+                'unsupportedConditionalOperation',
+                'missing key or supported condition'
+            );
         }
         this.conditions.push(condition);
     }
@@ -151,32 +160,35 @@ class IndexTransaction {
     /**
      * Applies the queued updates in this transaction atomically.
      *
-     * @argument {function} cb function to be called when the commit
+     * @argument cb function to be called when the commit
      * finishes, taking an optional error argument
      *
-     * @returns {undefined}
      */
-    commit(cb) {
+    commit(cb: (error: Error | null, data?: any) => void) {
         if (this.closed) {
-            return cb(propError('alreadyCommitted',
-                'transaction was already committed'));
+            return cb(
                propError(
                    'alreadyCommitted',
                    'transaction was already committed'
                )
            );
         }

         if (this.operations.length === 0) {
-            return cb(propError('emptyTransaction',
-                'tried to commit an empty transaction'));
+            return cb(
                propError(
                    'emptyTransaction',
                    'tried to commit an empty transaction'
                )
            );
         }

         this.closed = true;
-        writeOptions.conditions = this.conditions;
+        const options = { sync: true, conditions: this.conditions };

         // The array-of-operations variant of the `batch` method
         // allows passing options such as `sync: true` whereas the
         // chained form does not.
-        return this.db.batch(this.operations, writeOptions, cb);
+        return this.db.batch(this.operations, options, cb);
     }
 }
-
-module.exports = {
-    IndexTransaction,
-};
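A minimal usage sketch of the class above; the `db` handle and the key names are assumptions (any LevelDB-style database whose `batch` accepts an array of operations should work):

import { IndexTransaction } from './lib/versioning/IndexTransaction'; // path assumed

declare const db: any; // an open LevelDB-like handle (assumption)

const t = new IndexTransaction(db);
t.put('key1', JSON.stringify({ some: 'metadata' })); // queue a write
t.del('key2');                                       // queue a delete
t.addCondition({ notExists: 'key1' });               // commit only if key1 is absent
t.commit(err => {
    if (err) {
        // the new `is` property allows checks like err.is.alreadyCommitted
        return console.error('commit failed', err);
    }
    return console.log('transaction applied atomically');
});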


@@ -1,13 +0,0 @@
function reshapeExceptionError(error) {
const { message, code, stack, name } = error;
return {
message,
code,
stack,
name,
};
}
module.exports = {
reshapeExceptionError,
};

lib/errorUtils.ts Normal file

@@ -0,0 +1,11 @@
export interface ErrorLike {
message: any;
code: any;
stack: any;
name: any;
}
export function reshapeExceptionError(error: ErrorLike) {
const { message, code, stack, name } = error;
return { message, code, stack, name };
}
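A quick sketch of the intended use (the call site is hypothetical): flatten a caught exception into a plain object that is safe to log or serialize, since Error instances don't JSON-stringify their message and stack.

import { reshapeExceptionError } from './lib/errorUtils'; // path assumed

try {
    JSON.parse('{not json');
} catch (err) {
    // keeps only message/code/stack/name
    console.error('parse failed:', reshapeExceptionError(err as any));
}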


@@ -42,7 +42,7 @@ export const BucketAlreadyOwnedByYou: ErrorFormat = {
     code: 409,
     description:
-        'Your previous request to create the named bucket succeeded and you already own it. You get this error in all AWS regions except US Standard, us-east-1. In us-east-1 region, you will get 200 OK, but it is no-op (if bucket exists S3 will not do anything).',
+        'A bucket with this name exists and is already owned by you',
 };

 export const BucketNotEmpty: ErrorFormat = {
@@ -365,6 +365,11 @@ export const NoSuchWebsiteConfiguration: ErrorFormat = {
     description: 'The specified bucket does not have a website configuration',
 };

+export const NoSuchTagSet: ErrorFormat = {
+    code: 404,
+    description: 'The TagSet does not exist',
+};
+
 export const NoSuchUpload: ErrorFormat = {
     code: 404,
     description:
@@ -685,6 +690,11 @@ export const ReportNotPresent: ErrorFormat = {
         'The request was rejected because the credential report does not exist. To generate a credential report, use GenerateCredentialReport.',
 };

+export const Found: ErrorFormat = {
+    code: 302,
+    description: 'Resource Found'
+};
+
 // ------------- Special non-AWS S3 errors -------------

 export const MPUinProgress: ErrorFormat = {
@@ -1032,3 +1042,15 @@ export const AuthMethodNotImplemented: ErrorFormat = {
     description: 'AuthMethodNotImplemented',
     code: 501,
 };
+
+// --------------------- quotaErrors ---------------------
+
+export const NoSuchQuota: ErrorFormat = {
+    code: 404,
+    description: 'The specified resource does not have a quota.',
+};
+
+export const QuotaExceeded: ErrorFormat = {
+    code: 429,
+    description: 'The quota set for the resource is exceeded.',
+};


@@ -1,32 +0,0 @@
'use strict'; // eslint-disable-line
const debug = require('util').debuglog('jsutil');
// JavaScript utility functions
/**
* force <tt>func</tt> to be called only once, even if actually called
* multiple times. The cached result of the first call is then
* returned (if any).
*
* @note underscore.js provides this functionality but not worth
* adding a new dependency for such a small use case.
*
* @param {function} func function to call at most once
* @return {function} a callable wrapper mirroring <tt>func</tt> but
* only calls <tt>func</tt> at first invocation.
*/
module.exports.once = function once(func) {
const state = { called: false, res: undefined };
return function wrapper(...args) {
if (!state.called) {
state.called = true;
state.res = func.apply(func, args);
} else {
debug('function already called:', func,
'returning cached result:', state.res);
}
return state.res;
};
};

lib/jsutil.ts Normal file

@@ -0,0 +1,33 @@
import * as util from 'util';
const debug = util.debuglog('jsutil');
// JavaScript utility functions
/**
* force <tt>func</tt> to be called only once, even if actually called
* multiple times. The cached result of the first call is then
* returned (if any).
*
* @note underscore.js provides this functionality but not worth
* adding a new dependency for such a small use case.
*
* @param func function to call at most once
* @return a callable wrapper mirroring <tt>func</tt> but
* only calls <tt>func</tt> at first invocation.
*/
export function once<T>(func: (...args: any[]) => T): (...args: any[]) => T {
type State = { called: boolean; res: any };
const state: State = { called: false, res: undefined };
return function wrapper(...args: any[]) {
if (!state.called) {
state.called = true;
state.res = func.apply(func, args);
} else {
const m1 = 'function already called:';
const m2 = 'returning cached result:';
debug(m1, func, m2, state.res);
}
return state.res;
};
}
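A small usage sketch of `once`, e.g. to protect a completion callback that several code paths may fire (example values are mine):

import { once } from './lib/jsutil'; // path assumed

const done = once((err?: Error) => {
    console.log('finishing, err =', err);
    return err ? 1 : 0;
});

done();                  // runs the wrapped function, returns 0
done(new Error('boom')); // ignored: logs via debuglog and returns the cached 0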


@@ -124,7 +124,7 @@ export default class StatsClient {
      * report/record a request that ended up being a 500 on the server
      * @param id - service identifier
      */
-    report500(id: string, cb: (error: Error | null, value?: any) => void) {
+    report500(id: string, cb?: (error: Error | null, value?: any) => void) {
         if (!this._redis) {
             return undefined;
         }


@@ -1,26 +1,19 @@
 import promClient from 'prom-client';

-const collectDefaultMetricsIntervalMs =
-    process.env.COLLECT_DEFAULT_METRICS_INTERVAL_MS !== undefined ?
-        Number.parseInt(process.env.COLLECT_DEFAULT_METRICS_INTERVAL_MS, 10) :
-        10000;
-
-promClient.collectDefaultMetrics({ timeout: collectDefaultMetricsIntervalMs });
-
 export default class ZenkoMetrics {
-    static createCounter(params: promClient.CounterConfiguration) {
+    static createCounter(params: promClient.CounterConfiguration<string>) {
         return new promClient.Counter(params);
     }

-    static createGauge(params: promClient.GaugeConfiguration) {
+    static createGauge(params: promClient.GaugeConfiguration<string>) {
         return new promClient.Gauge(params);
     }

-    static createHistogram(params: promClient.HistogramConfiguration) {
+    static createHistogram(params: promClient.HistogramConfiguration<string>) {
         return new promClient.Histogram(params);
     }

-    static createSummary(params: promClient.SummaryConfiguration) {
+    static createSummary(params: promClient.SummaryConfiguration<string>) {
         return new promClient.Summary(params);
     }
@@ -28,11 +21,15 @@ export default class ZenkoMetrics {
         return promClient.register.getSingleMetric(name);
     }

-    static asPrometheus() {
+    static async asPrometheus() {
         return promClient.register.metrics();
     }

     static asPrometheusContentType() {
         return promClient.register.contentType;
     }
+
+    static collectDefaultMetrics() {
+        return promClient.collectDefaultMetrics();
+    }
 }
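A usage sketch reflecting the two behavioral changes above: default metrics collection is now an explicit opt-in instead of a side effect at import time, and `asPrometheus()` is async (in prom-client v14+, `register.metrics()` returns a Promise). The metric name below is hypothetical:

import ZenkoMetrics from './lib/metrics/ZenkoMetrics'; // path assumed

ZenkoMetrics.collectDefaultMetrics(); // opt in to process/nodejs default metrics

const requests = ZenkoMetrics.createCounter({
    name: 's3_requests_total', // hypothetical metric
    help: 'Total number of S3 requests',
    labelNames: ['method'],
});
requests.inc({ method: 'GET' });

// callers now have to await the exposition text
async function metricsHandler(): Promise<string> {
    return ZenkoMetrics.asPrometheus();
}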


@@ -1,23 +1,35 @@
-const errors = require('../errors').default;
+import errors from '../errors'

 const validServices = {
     aws: ['s3', 'iam', 'sts', 'ring'],
     scality: ['utapi', 'sso'],
 };

-class ARN {
+export default class ARN {
+    _partition: string;
+    _service: string;
+    _region: string | null;
+    _accountId?: string | null;
+    _resource: string;
+
     /**
      *
      * Create an ARN object from its individual components
      *
      * @constructor
-     * @param {string} partition - ARN partition (e.g. 'aws')
-     * @param {string} service - service name in partition (e.g. 's3')
-     * @param {string} [region] - AWS region
-     * @param {string} [accountId] - AWS 12-digit account ID
-     * @param {string} resource - AWS resource path (e.g. 'foo/bar')
+     * @param partition - ARN partition (e.g. 'aws')
+     * @param service - service name in partition (e.g. 's3')
+     * @param [region] - AWS region
+     * @param [accountId] - AWS 12-digit account ID
+     * @param resource - AWS resource path (e.g. 'foo/bar')
      */
-    constructor(partition, service, region, accountId, resource) {
+    constructor(
+        partition: string,
+        service: string,
+        region: string | undefined | null,
+        accountId: string | undefined | null,
+        resource: string,
+    ) {
         this._partition = partition;
         this._service = service;
         this._region = region || null;
@@ -25,7 +37,7 @@ class ARN {
         this._resource = resource;
     }

-    static createFromString(arnStr) {
+    static createFromString(arnStr: string) {
         const [arn, partition, service, region, accountId,
             resourceType, resource] = arnStr.split(':');
@@ -102,5 +114,3 @@ class ARN {
             .join(':');
     }
 }
-
-module.exports = ARN;
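Illustrative use of the converted class; the import path and the exact round-trip format are assumptions, since the parsing and serialization bodies sit outside this hunk:

import ARN from './lib/models/ARN'; // path assumed

const arn = new ARN('aws', 's3', null, '123456789012', 'mybucket/key');
// the elided serialization logic presumably yields 'arn:aws:s3::123456789012:mybucket/key'

const parsed = ARN.createFromString('arn:aws:iam::123456789012:user/alice');
// the static factory splits on ':' and validates partition/service against validServices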


@@ -1,22 +1,36 @@
-const { legacyLocations } = require('../constants');
-const escapeForXml = require('../s3middleware/escapeForXml');
+import { RequestLogger } from 'werelogs';
+
+import { legacyLocations } from '../constants';
+import escapeForXml from '../s3middleware/escapeForXml';
+
+type CloudServerConfig = any;

-class BackendInfo {
+export default class BackendInfo {
+    _config: CloudServerConfig;
+    _requestEndpoint: string;
+    _objectLocationConstraint?: string;
+    _bucketLocationConstraint?: string;
+    _legacyLocationConstraint?: string;
+
     /**
      * Represents the info necessary to evaluate which data backend to use
      * on a data put call.
      * @constructor
-     * @param {object} config - CloudServer config containing list of locations
-     * @param {string | undefined} objectLocationConstraint - location constraint
+     * @param config - CloudServer config containing list of locations
+     * @param objectLocationConstraint - location constraint
      * for object based on user meta header
-     * @param {string | undefined } bucketLocationConstraint - location
+     * @param bucketLocationConstraint - location
      * constraint for bucket based on bucket metadata
-     * @param {string} requestEndpoint - endpoint to which request was made
-     * @param {string | undefined } legacyLocationConstraint - legacy location
-     * constraint
+     * @param requestEndpoint - endpoint to which request was made
+     * @param legacyLocationConstraint - legacy location constraint
      */
-    constructor(config, objectLocationConstraint, bucketLocationConstraint,
-        requestEndpoint, legacyLocationConstraint) {
+    constructor(
+        config: CloudServerConfig,
+        objectLocationConstraint: string | undefined,
+        bucketLocationConstraint: string | undefined,
+        requestEndpoint: string,
+        legacyLocationConstraint: string | undefined,
+    ) {
         this._config = config;
         this._objectLocationConstraint = objectLocationConstraint;
         this._bucketLocationConstraint = bucketLocationConstraint;
@@ -27,15 +41,18 @@ class BackendInfo {
     /**
      * validate proposed location constraint against config
-     * @param {object} config - CloudServer config
-     * @param {string | undefined} locationConstraint - value of user
+     * @param config - CloudServer config
+     * @param locationConstraint - value of user
      * metadata location constraint header or bucket location constraint
-     * @param {object} log - werelogs logger
-     * @return {boolean} - true if valid, false if not
+     * @param log - werelogs logger
+     * @return - true if valid, false if not
      */
-    static isValidLocationConstraint(config, locationConstraint, log) {
-        if (Object.keys(config.locationConstraints).
-            indexOf(locationConstraint) < 0) {
+    static isValidLocationConstraint(
+        config: CloudServerConfig,
+        locationConstraint: string | undefined,
+        log: RequestLogger,
+    ) {
+        if (!locationConstraint || !(locationConstraint in config.locationConstraints)) {
             log.trace('proposed locationConstraint is invalid',
                 { locationConstraint });
             return false;
@@ -45,14 +62,17 @@ class BackendInfo {
     /**
      * validate that request endpoint is listed in the restEndpoint config
-     * @param {object} config - CloudServer config
-     * @param {string} requestEndpoint - request endpoint
-     * @param {object} log - werelogs logger
-     * @return {boolean} - true if present, false if not
+     * @param config - CloudServer config
+     * @param requestEndpoint - request endpoint
+     * @param log - werelogs logger
+     * @return true if present, false if not
      */
-    static isRequestEndpointPresent(config, requestEndpoint, log) {
-        if (Object.keys(config.restEndpoints).
-            indexOf(requestEndpoint) < 0) {
+    static isRequestEndpointPresent(
+        config: CloudServerConfig,
+        requestEndpoint: string,
+        log: RequestLogger,
+    ) {
+        if (!(requestEndpoint in config.restEndpoints)) {
             log.trace('requestEndpoint does not match config restEndpoints',
                 { requestEndpoint });
             return false;
@@ -63,14 +83,18 @@ class BackendInfo {
     /**
      * validate that locationConstraint for request Endpoint matches
      * one config locationConstraint
-     * @param {object} config - CloudServer config
-     * @param {string} requestEndpoint - request endpoint
-     * @param {object} log - werelogs logger
-     * @return {boolean} - true if matches, false if not
+     * @param config - CloudServer config
+     * @param requestEndpoint - request endpoint
+     * @param log - werelogs logger
+     * @return - true if matches, false if not
      */
-    static isRequestEndpointValueValid(config, requestEndpoint, log) {
-        if (Object.keys(config.locationConstraints).
-            indexOf(config.restEndpoints[requestEndpoint]) < 0) {
+    static isRequestEndpointValueValid(
+        config: CloudServerConfig,
+        requestEndpoint: string,
+        log: RequestLogger,
+    ) {
+        const restEndpoint = config.restEndpoints[requestEndpoint];
+        if (!(restEndpoint in config.locationConstraints)) {
             log.trace('the default locationConstraint for request' +
                 'Endpoint does not match any config locationConstraint',
                 { requestEndpoint });
@@ -81,11 +105,11 @@ class BackendInfo {
     /**
      * validate that s3 server is running with a file or memory backend
-     * @param {object} config - CloudServer config
-     * @param {object} log - werelogs logger
-     * @return {boolean} - true if running with file/mem backend, false if not
+     * @param config - CloudServer config
+     * @param log - werelogs logger
+     * @return - true if running with file/mem backend, false if not
      */
-    static isMemOrFileBackend(config, log) {
+    static isMemOrFileBackend(config: CloudServerConfig, log: RequestLogger) {
         if (config.backends.data === 'mem' || config.backends.data === 'file') {
             log.trace('use data backend for the location', {
                 dataBackend: config.backends.data,
@@ -103,12 +127,16 @@ class BackendInfo {
      * data backend for the location.
      * - if locationConstraint for request Endpoint does not match
      * any config locationConstraint, we will return an error
-     * @param {object} config - CloudServer config
-     * @param {string} requestEndpoint - request endpoint
-     * @param {object} log - werelogs logger
-     * @return {boolean} - true if valid, false if not
+     * @param config - CloudServer config
+     * @param requestEndpoint - request endpoint
+     * @param log - werelogs logger
+     * @return - true if valid, false if not
      */
-    static isValidRequestEndpointOrBackend(config, requestEndpoint, log) {
+    static isValidRequestEndpointOrBackend(
+        config: CloudServerConfig,
+        requestEndpoint: string,
+        log: RequestLogger,
+    ) {
         if (!BackendInfo.isRequestEndpointPresent(config, requestEndpoint,
             log)) {
             return BackendInfo.isMemOrFileBackend(config, log);
@@ -119,17 +147,22 @@ class BackendInfo {
     /**
      * validate controlling BackendInfo Parameter
-     * @param {object} config - CloudServer config
-     * @param {string | undefined} objectLocationConstraint - value of user
+     * @param config - CloudServer config
+     * @param objectLocationConstraint - value of user
      * metadata location constraint header
-     * @param {string | null} bucketLocationConstraint - location
+     * @param bucketLocationConstraint - location
      * constraint from bucket metadata
-     * @param {string} requestEndpoint - endpoint of request
-     * @param {object} log - werelogs logger
-     * @return {object} - location constraint validity
+     * @param requestEndpoint - endpoint of request
+     * @param log - werelogs logger
+     * @return - location constraint validity
      */
-    static controllingBackendParam(config, objectLocationConstraint,
-        bucketLocationConstraint, requestEndpoint, log) {
+    static controllingBackendParam(
+        config: CloudServerConfig,
+        objectLocationConstraint: string | undefined,
+        bucketLocationConstraint: string | null,
+        requestEndpoint: string,
+        log: RequestLogger,
+    ) {
         if (objectLocationConstraint) {
             if (BackendInfo.isValidLocationConstraint(config,
                 objectLocationConstraint, log)) {
@@ -175,16 +208,16 @@ class BackendInfo {
     /**
      * Return legacyLocationConstraint
-     * @param {object} config CloudServer config
-     * @return {string | undefined} legacyLocationConstraint;
+     * @param config CloudServer config
+     * @return legacyLocationConstraint;
      */
-    static getLegacyLocationConstraint(config) {
+    static getLegacyLocationConstraint(config: CloudServerConfig) {
         return legacyLocations.find(ll => config.locationConstraints[ll]);
     }

     /**
      * Return objectLocationConstraint
-     * @return {string | undefined} objectLocationConstraint;
+     * @return objectLocationConstraint;
      */
     getObjectLocationConstraint() {
         return this._objectLocationConstraint;
@@ -192,7 +225,7 @@ class BackendInfo {
     /**
      * Return bucketLocationConstraint
-     * @return {string | undefined} bucketLocationConstraint;
+     * @return bucketLocationConstraint;
      */
     getBucketLocationConstraint() {
         return this._bucketLocationConstraint;
@@ -200,7 +233,7 @@ class BackendInfo {
     /**
      * Return requestEndpoint
-     * @return {string} requestEndpoint;
+     * @return requestEndpoint;
      */
     getRequestEndpoint() {
         return this._requestEndpoint;
@@ -215,9 +248,9 @@ class BackendInfo {
      * (4) default locationConstraint for requestEndpoint if requestEndpoint
      * is listed in restEndpoints in config.json
      * (5) default data backend
-     * @return {string} locationConstraint;
+     * @return locationConstraint;
      */
-    getControllingLocationConstraint() {
+    getControllingLocationConstraint(): string {
         const objectLC = this.getObjectLocationConstraint();
         const bucketLC = this.getBucketLocationConstraint();
         const reqEndpoint = this.getRequestEndpoint();
@@ -236,5 +269,3 @@ class BackendInfo {
         return this._config.backends.data;
     }
 }
-
-module.exports = BackendInfo;
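To make the (1)-(5) precedence concrete, a hypothetical sketch; the config shape and the location names are assumptions, and the middle of getControllingLocationConstraint is outside this hunk:

import BackendInfo from './lib/models/BackendInfo'; // path assumed

// hypothetical CloudServer config, only the fields used above
const config: any = {
    backends: { data: 'file' },
    locationConstraints: { 'us-east-1': {}, 'eu-west-1': {} },
    restEndpoints: { 's3.example.com': 'us-east-1' },
};

// (1) the object-level constraint wins over every other source
const info = new BackendInfo(config, 'eu-west-1', 'us-east-1', 's3.example.com', undefined);
info.getControllingLocationConstraint(); // => 'eu-west-1'

// with no object/bucket/legacy constraint, the endpoint default (4) should
// apply, falling back to config.backends.data (5) as a last resort
const fallback = new BackendInfo(config, undefined, undefined, 's3.example.com', undefined);
fallback.getControllingLocationConstraint(); // => presumably 'us-east-1'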


@ -1,40 +1,86 @@
export type DeleteRetentionPolicy = {
enabled: boolean;
days: number;
};
/** /**
* Helper class to ease access to the Azure specific information for * Helper class to ease access to the Azure specific information for
* storage accounts mapped to buckets. * storage accounts mapped to buckets.
*/ */
class BucketAzureInfo { export default class BucketAzureInfo {
_data: {
sku: string;
accessTier: string;
kind: string;
systemKeys: string[];
tenantKeys: string[];
subscriptionId: string;
resourceGroup: string;
deleteRetentionPolicy: DeleteRetentionPolicy;
managementPolicies: any[];
httpsOnly: boolean;
tags: any;
networkACL: any[];
cname: string;
azureFilesAADIntegration: boolean;
hnsEnabled: boolean;
logging: any;
hourMetrics: any;
minuteMetrics: any;
serviceVersion: string;
}
/** /**
* @constructor * @constructor
* @param {object} obj - Raw structure for the Azure info on storage account * @param obj - Raw structure for the Azure info on storage account
* @param {string} obj.sku - SKU name of this storage account * @param obj.sku - SKU name of this storage account
* @param {string} obj.accessTier - Access Tier name of this storage account * @param obj.accessTier - Access Tier name of this storage account
* @param {string} obj.kind - Kind name of this storage account * @param obj.kind - Kind name of this storage account
* @param {string[]} obj.systemKeys - pair of shared keys for the system * @param obj.systemKeys - pair of shared keys for the system
* @param {string[]} obj.tenantKeys - pair of shared keys for the tenant * @param obj.tenantKeys - pair of shared keys for the tenant
* @param {string} obj.subscriptionId - subscription ID the storage account * @param obj.subscriptionId - subscription ID the storage account
* belongs to * belongs to
* @param {string} obj.resourceGroup - Resource group name the storage * @param obj.resourceGroup - Resource group name the storage
* account belongs to * account belongs to
* @param {object} obj.deleteRetentionPolicy - Delete retention policy * @param obj.deleteRetentionPolicy - Delete retention policy
* @param {boolean} obj.deleteRetentionPolicy.enabled - * @param obj.deleteRetentionPolicy.enabled -
* @param {number} obj.deleteRetentionPolicy.days - * @param obj.deleteRetentionPolicy.days -
* @param {object[]} obj.managementPolicies - Management policies for this * @param obj.managementPolicies - Management policies for this
* storage account * storage account
* @param {boolean} obj.httpsOnly - Server the content of this storage * @param obj.httpsOnly - Server the content of this storage
* account through HTTPS only * account through HTTPS only
* @param {object} obj.tags - Set of tags applied on this storage account * @param obj.tags - Set of tags applied on this storage account
* @param {object[]} obj.networkACL - Network ACL of this storage account * @param obj.networkACL - Network ACL of this storage account
* @param {string} obj.cname - CNAME of this storage account * @param obj.cname - CNAME of this storage account
* @param {boolean} obj.azureFilesAADIntegration - whether or not Azure * @param obj.azureFilesAADIntegration - whether or not Azure
* Files AAD Integration is enabled for this storage account * Files AAD Integration is enabled for this storage account
* @param {boolean} obj.hnsEnabled - whether or not a hierarchical namespace * @param obj.hnsEnabled - whether or not a hierarchical namespace
* is enabled for this storage account * is enabled for this storage account
* @param {object} obj.logging - service properties: logging * @param obj.logging - service properties: logging
* @param {object} obj.hourMetrics - service properties: hourMetrics * @param obj.hourMetrics - service properties: hourMetrics
* @param {object} obj.minuteMetrics - service properties: minuteMetrics * @param obj.minuteMetrics - service properties: minuteMetrics
* @param {string} obj.serviceVersion - service properties: serviceVersion * @param obj.serviceVersion - service properties: serviceVersion
*/ */
constructor(obj) { constructor(obj: {
sku: string;
accessTier: string;
kind: string;
systemKeys: string[];
tenantKeys: string[];
subscriptionId: string;
resourceGroup: string;
deleteRetentionPolicy: DeleteRetentionPolicy;
managementPolicies: any[];
httpsOnly: boolean;
tags: any;
networkACL: any[];
cname: string;
azureFilesAADIntegration: boolean;
hnsEnabled: boolean;
logging: any;
hourMetrics: any;
minuteMetrics: any;
serviceVersion: string;
}) {
this._data = { this._data = {
sku: obj.sku, sku: obj.sku,
accessTier: obj.accessTier, accessTier: obj.accessTier,
@ -62,7 +108,7 @@ class BucketAzureInfo {
return this._data.sku; return this._data.sku;
} }
setSku(sku) { setSku(sku: string) {
this._data.sku = sku; this._data.sku = sku;
return this; return this;
} }
@ -71,7 +117,7 @@ class BucketAzureInfo {
return this._data.accessTier; return this._data.accessTier;
} }
setAccessTier(accessTier) { setAccessTier(accessTier: string) {
this._data.accessTier = accessTier; this._data.accessTier = accessTier;
return this; return this;
} }
@ -80,7 +126,7 @@ class BucketAzureInfo {
return this._data.kind; return this._data.kind;
} }
setKind(kind) { setKind(kind: string) {
this._data.kind = kind; this._data.kind = kind;
return this; return this;
} }
@ -89,7 +135,7 @@ class BucketAzureInfo {
return this._data.systemKeys; return this._data.systemKeys;
} }
setSystemKeys(systemKeys) { setSystemKeys(systemKeys: string[]) {
this._data.systemKeys = systemKeys; this._data.systemKeys = systemKeys;
return this; return this;
} }
@ -98,7 +144,7 @@ class BucketAzureInfo {
return this._data.tenantKeys; return this._data.tenantKeys;
} }
setTenantKeys(tenantKeys) { setTenantKeys(tenantKeys: string[]) {
this._data.tenantKeys = tenantKeys; this._data.tenantKeys = tenantKeys;
return this; return this;
} }
@ -107,7 +153,7 @@ class BucketAzureInfo {
return this._data.subscriptionId; return this._data.subscriptionId;
} }
setSubscriptionId(subscriptionId) { setSubscriptionId(subscriptionId: string) {
this._data.subscriptionId = subscriptionId; this._data.subscriptionId = subscriptionId;
return this; return this;
} }
@ -116,7 +162,7 @@ class BucketAzureInfo {
return this._data.resourceGroup; return this._data.resourceGroup;
} }
setResourceGroup(resourceGroup) { setResourceGroup(resourceGroup: string) {
this._data.resourceGroup = resourceGroup; this._data.resourceGroup = resourceGroup;
return this; return this;
} }
@ -125,7 +171,7 @@ class BucketAzureInfo {
return this._data.deleteRetentionPolicy; return this._data.deleteRetentionPolicy;
} }
setDeleteRetentionPolicy(deleteRetentionPolicy) { setDeleteRetentionPolicy(deleteRetentionPolicy: DeleteRetentionPolicy) {
this._data.deleteRetentionPolicy = deleteRetentionPolicy; this._data.deleteRetentionPolicy = deleteRetentionPolicy;
return this; return this;
} }
@ -134,7 +180,7 @@ class BucketAzureInfo {
return this._data.managementPolicies; return this._data.managementPolicies;
} }
setManagementPolicies(managementPolicies) { setManagementPolicies(managementPolicies: any[]) {
this._data.managementPolicies = managementPolicies; this._data.managementPolicies = managementPolicies;
return this; return this;
} }
@ -143,7 +189,7 @@ class BucketAzureInfo {
return this._data.httpsOnly; return this._data.httpsOnly;
} }
setHttpsOnly(httpsOnly) { setHttpsOnly(httpsOnly: boolean) {
this._data.httpsOnly = httpsOnly; this._data.httpsOnly = httpsOnly;
return this; return this;
} }
@ -152,7 +198,7 @@ class BucketAzureInfo {
return this._data.tags; return this._data.tags;
} }
setTags(tags) { setTags(tags: any) {
this._data.tags = tags; this._data.tags = tags;
return this; return this;
} }
@ -161,7 +207,7 @@ class BucketAzureInfo {
return this._data.networkACL; return this._data.networkACL;
} }
setNetworkACL(networkACL) { setNetworkACL(networkACL: any[]) {
this._data.networkACL = networkACL; this._data.networkACL = networkACL;
return this; return this;
} }
@ -170,7 +216,7 @@ class BucketAzureInfo {
return this._data.cname; return this._data.cname;
} }
setCname(cname) { setCname(cname: string) {
this._data.cname = cname; this._data.cname = cname;
return this; return this;
} }
@@ -179,7 +225,7 @@ class BucketAzureInfo {
return this._data.azureFilesAADIntegration; return this._data.azureFilesAADIntegration;
} }
setAzureFilesAADIntegration(azureFilesAADIntegration) { setAzureFilesAADIntegration(azureFilesAADIntegration: boolean) {
this._data.azureFilesAADIntegration = azureFilesAADIntegration; this._data.azureFilesAADIntegration = azureFilesAADIntegration;
return this; return this;
} }
@@ -188,7 +234,7 @@ class BucketAzureInfo {
return this._data.hnsEnabled; return this._data.hnsEnabled;
} }
setHnsEnabled(hnsEnabled) { setHnsEnabled(hnsEnabled: boolean) {
this._data.hnsEnabled = hnsEnabled; this._data.hnsEnabled = hnsEnabled;
return this; return this;
} }
@@ -197,7 +243,7 @@ class BucketAzureInfo {
return this._data.logging; return this._data.logging;
} }
setLogging(logging) { setLogging(logging: any) {
this._data.logging = logging; this._data.logging = logging;
return this; return this;
} }
@@ -206,7 +252,7 @@ class BucketAzureInfo {
return this._data.hourMetrics; return this._data.hourMetrics;
} }
setHourMetrics(hourMetrics) { setHourMetrics(hourMetrics: any) {
this._data.hourMetrics = hourMetrics; this._data.hourMetrics = hourMetrics;
return this; return this;
} }
@@ -215,7 +261,7 @@ class BucketAzureInfo {
return this._data.minuteMetrics; return this._data.minuteMetrics;
} }
setMinuteMetrics(minuteMetrics) { setMinuteMetrics(minuteMetrics: any) {
this._data.minuteMetrics = minuteMetrics; this._data.minuteMetrics = minuteMetrics;
return this; return this;
} }
@@ -224,7 +270,7 @@ class BucketAzureInfo {
return this._data.serviceVersion; return this._data.serviceVersion;
} }
setServiceVersion(serviceVersion) { setServiceVersion(serviceVersion: any) {
this._data.serviceVersion = serviceVersion; this._data.serviceVersion = serviceVersion;
return this; return this;
} }
@@ -233,5 +279,3 @@ class BucketAzureInfo {
return this._data; return this._data;
} }
} }
module.exports = BucketAzureInfo;
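
A quick illustration of the newly typed fluent API. This is a minimal sketch, assuming `info` is an existing BucketAzureInfo instance (the constructor is not shown in this hunk) and using illustrative values:

info
    .setSku('Standard_LRS')
    .setAccessTier('Hot')
    .setHttpsOnly(true);
// With the added annotations, a wrong argument type now fails at compile time:
// info.setHttpsOnly('yes'); // TS2345: 'string' is not assignable to 'boolean'
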


@@ -1,81 +1,194 @@
const assert = require('assert'); import assert from 'assert';
const uuid = require('uuid/v4'); import uuid from 'uuid/v4';
const { WebsiteConfiguration } = require('./WebsiteConfiguration'); import { WebsiteConfiguration } from './WebsiteConfiguration';
const ReplicationConfiguration = require('./ReplicationConfiguration'); import ReplicationConfiguration from './ReplicationConfiguration';
const LifecycleConfiguration = require('./LifecycleConfiguration'); import LifecycleConfiguration from './LifecycleConfiguration';
const ObjectLockConfiguration = require('./ObjectLockConfiguration'); import ObjectLockConfiguration from './ObjectLockConfiguration';
const BucketPolicy = require('./BucketPolicy'); import BucketPolicy from './BucketPolicy';
const NotificationConfiguration = require('./NotificationConfiguration'); import NotificationConfiguration from './NotificationConfiguration';
import { ACL as OACL } from './ObjectMD';
import { areTagsValid, BucketTag } from '../s3middleware/tagging';
// WHEN UPDATING THIS NUMBER, UPDATE BucketInfoModelVersion.md CHANGELOG // WHEN UPDATING THIS NUMBER, UPDATE BucketInfoModelVersion.md CHANGELOG
// BucketInfoModelVersion.md can be found in documentation/ at the root // BucketInfoModelVersion.md can be found in documentation/ at the root
// of this repository // of this repository
const modelVersion = 14; const modelVersion = 16;
export type CORS = {
id: string;
allowedMethods: string[];
allowedOrigins: string[];
allowedHeaders: string[];
maxAgeSeconds: number;
exposeHeaders: string[];
}[];
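
To make the CORS shape concrete, a hypothetical single-rule value matching the type above (all names and values are placeholders):

const corsRules: CORS = [{
    id: 'allow-app-get',
    allowedMethods: ['GET', 'HEAD'],
    allowedOrigins: ['https://app.example.com'],
    allowedHeaders: ['Authorization'],
    maxAgeSeconds: 3000,
    exposeHeaders: ['x-amz-request-id'],
}];
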
export type SSE = {
cryptoScheme: number;
algorithm: string;
masterKeyId: string;
configuredMasterKeyId: string;
mandatory: boolean;
};
export type VersioningConfiguration = {
Status: string;
MfaDelete: any;
};
export type VeeamSOSApi = {
SystemInfo?: {
ProtocolVersion: string,
ModelName: string,
ProtocolCapabilities: {
CapacityInfo: boolean,
UploadSessions: boolean,
IAMSTS?: boolean,
},
APIEndpoints?: {
IAMEndpoint: string,
STSEndpoint: string,
},
SystemRecommendations?: {
S3ConcurrentTaskLimit: number,
S3MultiObjectDelete: number,
StorageCurrentTasksLimit: number,
KbBlockSize: number,
}
LastModified?: string,
},
CapacityInfo?: {
Capacity: number,
Available: number,
Used: number,
LastModified?: string,
},
};
// Capabilities contains all specifics from external products supported by
// our S3 implementation, at bucket level
export type Capabilities = {
VeeamSOSApi?: VeeamSOSApi,
};
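
A hypothetical Capabilities value matching the shape above (model name and capacity figures are placeholders, not real values):

const capabilities: Capabilities = {
    VeeamSOSApi: {
        SystemInfo: {
            ProtocolVersion: '1.0',
            ModelName: 'Example-Model',
            ProtocolCapabilities: { CapacityInfo: true, UploadSessions: false },
        },
        CapacityInfo: { Capacity: 1_000_000_000, Available: 750_000_000, Used: 250_000_000 },
    },
};
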
export type ACL = OACL & { WRITE: string[] }
export default class BucketInfo {
_acl: ACL;
_name: string;
_owner: string;
_ownerDisplayName: string;
_creationDate: string;
_mdBucketModelVersion: number;
_transient: boolean;
_deleted: boolean;
_serverSideEncryption: SSE;
_versioningConfiguration: VersioningConfiguration;
_locationConstraint: string | null;
_websiteConfiguration?: WebsiteConfiguration | null;
_cors: CORS | null;
_replicationConfiguration?: any;
_lifecycleConfiguration?: any;
_bucketPolicy?: any;
_uid?: string;
_objectLockEnabled?: boolean;
_objectLockConfiguration?: any;
_notificationConfiguration?: any;
_tags?: Array<BucketTag>;
_readLocationConstraint: string | null;
_isNFS: boolean | null;
_azureInfo: any | null;
_ingestion: { status: 'enabled' | 'disabled' } | null;
_capabilities?: Capabilities;
_quotaMax: number | 0;
class BucketInfo {
/** /**
* Represents all bucket information. * Represents all bucket information.
* @constructor * @constructor
* @param {string} name - bucket name * @param name - bucket name
* @param {string} owner - bucket owner's name * @param owner - bucket owner's name
* @param {string} ownerDisplayName - owner's display name * @param ownerDisplayName - owner's display name
* @param {object} creationDate - creation date of bucket * @param creationDate - creation date of bucket
* @param {number} mdBucketModelVersion - bucket model version * @param mdBucketModelVersion - bucket model version
* @param {object} [acl] - bucket ACLs (no need to copy * @param [acl] - bucket ACLs (no need to copy
* ACL object since referenced object will not be used outside of * ACL object since referenced object will not be used outside of
* BucketInfo instance) * BucketInfo instance)
* @param {boolean} transient - flag indicating whether bucket is transient * @param transient - flag indicating whether bucket is transient
* @param {boolean} deleted - flag indicating whether attempt to delete * @param deleted - flag indicating whether attempt to delete
* @param {object} serverSideEncryption - sse information for this bucket * @param serverSideEncryption - sse information for this bucket
* @param {number} serverSideEncryption.cryptoScheme - * @param serverSideEncryption.cryptoScheme -
* cryptoScheme used * cryptoScheme used
* @param {string} serverSideEncryption.algorithm - * @param serverSideEncryption.algorithm -
* algorithm to use * algorithm to use
* @param {string} serverSideEncryption.masterKeyId - * @param serverSideEncryption.masterKeyId -
* key to get master key * key to get master key
* @param {string} serverSideEncryption.configuredMasterKeyId - * @param serverSideEncryption.configuredMasterKeyId -
* custom KMS key id specified by user * custom KMS key id specified by user
* @param {boolean} serverSideEncryption.mandatory - * @param serverSideEncryption.mandatory -
* true for mandatory encryption * true for mandatory encryption
* bucket has been made * bucket has been made
* @param {object} versioningConfiguration - versioning configuration * @param versioningConfiguration - versioning configuration
* @param {string} versioningConfiguration.Status - versioning status * @param versioningConfiguration.Status - versioning status
* @param {object} versioningConfiguration.MfaDelete - versioning mfa delete * @param versioningConfiguration.MfaDelete - versioning mfa delete
* @param {string} locationConstraint - locationConstraint for bucket that * @param locationConstraint - locationConstraint for bucket that
* also includes the ingestion flag * also includes the ingestion flag
* @param {WebsiteConfiguration} [websiteConfiguration] - website * @param [websiteConfiguration] - website
* configuration * configuration
* @param {object[]} [cors] - collection of CORS rules to apply * @param [cors] - collection of CORS rules to apply
* @param {string} [cors[].id] - optional ID to identify rule * @param [cors[].id] - optional ID to identify rule
* @param {string[]} cors[].allowedMethods - methods allowed for CORS request * @param cors[].allowedMethods - methods allowed for CORS request
* @param {string[]} cors[].allowedOrigins - origins allowed for CORS request * @param cors[].allowedOrigins - origins allowed for CORS request
* @param {string[]} [cors[].allowedHeaders] - headers allowed in an OPTIONS * @param [cors[].allowedHeaders] - headers allowed in an OPTIONS
* request via the Access-Control-Request-Headers header * request via the Access-Control-Request-Headers header
* @param {number} [cors[].maxAgeSeconds] - seconds browsers should cache * @param [cors[].maxAgeSeconds] - seconds browsers should cache
* OPTIONS response * OPTIONS response
* @param {string[]} [cors[].exposeHeaders] - headers expose to applications * @param [cors[].exposeHeaders] - headers expose to applications
* @param {object} [replicationConfiguration] - replication configuration * @param [replicationConfiguration] - replication configuration
* @param {object} [lifecycleConfiguration] - lifecycle configuration * @param [lifecycleConfiguration] - lifecycle configuration
* @param {object} [bucketPolicy] - bucket policy * @param [bucketPolicy] - bucket policy
* @param {string} [uid] - unique identifier for the bucket, necessary * @param [uid] - unique identifier for the bucket, necessary
* @param {string} readLocationConstraint - readLocationConstraint for bucket * @param readLocationConstraint - readLocationConstraint for bucket
* addition for use with lifecycle operations * addition for use with lifecycle operations
* @param {boolean} [isNFS] - whether the bucket is on NFS * @param [isNFS] - whether the bucket is on NFS
* @param {object} [ingestionConfig] - object for ingestion status: en/dis * @param [ingestionConfig] - object for ingestion status: en/dis
* @param {object} [azureInfo] - Azure storage account specific info * @param [azureInfo] - Azure storage account specific info
* @param {boolean} [objectLockEnabled] - true when object lock enabled * @param [objectLockEnabled] - true when object lock enabled
* @param {object} [objectLockConfiguration] - object lock configuration * @param [objectLockConfiguration] - object lock configuration
* @param {object} [notificationConfiguration] - bucket notification configuration * @param [notificationConfiguration] - bucket notification configuration
* @param [tags] - bucket tag set
* @param [capabilities] - capabilities for the bucket
* @param quotaMax - bucket quota
*/ */
constructor(name, owner, ownerDisplayName, creationDate, constructor(
mdBucketModelVersion, acl, transient, deleted, name: string,
serverSideEncryption, versioningConfiguration, owner: string,
locationConstraint, websiteConfiguration, cors, ownerDisplayName: string,
replicationConfiguration, lifecycleConfiguration, creationDate: string,
bucketPolicy, uid, readLocationConstraint, isNFS, mdBucketModelVersion: number,
ingestionConfig, azureInfo, objectLockEnabled, acl: ACL | undefined,
objectLockConfiguration, notificationConfiguration) { transient: boolean,
deleted: boolean,
serverSideEncryption: SSE,
versioningConfiguration: VersioningConfiguration,
locationConstraint: string,
websiteConfiguration?: WebsiteConfiguration | null,
cors?: CORS,
replicationConfiguration?: any,
lifecycleConfiguration?: any,
bucketPolicy?: any,
uid?: string,
readLocationConstraint?: string,
isNFS?: boolean,
ingestionConfig?: { status: 'enabled' | 'disabled' },
azureInfo?: any,
objectLockEnabled?: boolean,
objectLockConfiguration?: any,
notificationConfiguration?: any,
tags?: Array<BucketTag> | [],
capabilities?: Capabilities,
quotaMax?: number | 0,
) {
assert.strictEqual(typeof name, 'string'); assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof owner, 'string'); assert.strictEqual(typeof owner, 'string');
assert.strictEqual(typeof ownerDisplayName, 'string'); assert.strictEqual(typeof ownerDisplayName, 'string');
@@ -127,8 +240,10 @@ class BucketInfo {
} }
if (websiteConfiguration) { if (websiteConfiguration) {
assert(websiteConfiguration instanceof WebsiteConfiguration); assert(websiteConfiguration instanceof WebsiteConfiguration);
const { indexDocument, errorDocument, redirectAllRequestsTo, const indexDocument = websiteConfiguration.getIndexDocument();
routingRules } = websiteConfiguration; const errorDocument = websiteConfiguration.getErrorDocument();
const redirectAllRequestsTo = websiteConfiguration.getRedirectAllRequestsTo();
const routingRules = websiteConfiguration.getRoutingRules();
assert(indexDocument === undefined || assert(indexDocument === undefined ||
typeof indexDocument === 'string'); typeof indexDocument === 'string');
assert(errorDocument === undefined || assert(errorDocument === undefined ||
@@ -160,7 +275,7 @@ class BucketInfo {
if (notificationConfiguration) { if (notificationConfiguration) {
NotificationConfiguration.validateConfig(notificationConfiguration); NotificationConfiguration.validateConfig(notificationConfiguration);
} }
const aclInstance = acl || { const aclInstance: ACL = acl || {
Canned: 'private', Canned: 'private',
FULL_CONTROL: [], FULL_CONTROL: [],
WRITE: [], WRITE: [],
@@ -169,6 +284,15 @@ class BucketInfo {
READ_ACP: [], READ_ACP: [],
}; };
if (tags === undefined) {
tags = [] as BucketTag[];
}
assert.strictEqual(areTagsValid(tags), true);
if (quotaMax) {
assert.strictEqual(typeof quotaMax, 'number');
assert(quotaMax >= 0, 'Quota cannot be negative');
}
// IF UPDATING PROPERTIES, INCREMENT MODELVERSION NUMBER ABOVE // IF UPDATING PROPERTIES, INCREMENT MODELVERSION NUMBER ABOVE
this._acl = aclInstance; this._acl = aclInstance;
this._name = name; this._name = name;
@@ -194,11 +318,15 @@ class BucketInfo {
this._objectLockEnabled = objectLockEnabled || false; this._objectLockEnabled = objectLockEnabled || false;
this._objectLockConfiguration = objectLockConfiguration || null; this._objectLockConfiguration = objectLockConfiguration || null;
this._notificationConfiguration = notificationConfiguration || null; this._notificationConfiguration = notificationConfiguration || null;
this._tags = tags;
this._capabilities = capabilities || undefined;
this._quotaMax = quotaMax || 0;
return this; return this;
} }
/** /**
* Serialize the object * Serialize the object
* @return {string} - stringified object * @return - stringified object
*/ */
serialize() { serialize() {
const bucketInfos = { const bucketInfos = {
@@ -226,19 +354,24 @@ class BucketInfo {
objectLockEnabled: this._objectLockEnabled, objectLockEnabled: this._objectLockEnabled,
objectLockConfiguration: this._objectLockConfiguration, objectLockConfiguration: this._objectLockConfiguration,
notificationConfiguration: this._notificationConfiguration, notificationConfiguration: this._notificationConfiguration,
tags: this._tags,
capabilities: this._capabilities,
quotaMax: this._quotaMax,
}; };
if (this._websiteConfiguration) { const final = this._websiteConfiguration
bucketInfos.websiteConfiguration = ? {
this._websiteConfiguration.getConfig(); ...bucketInfos,
} websiteConfiguration: this._websiteConfiguration.getConfig(),
return JSON.stringify(bucketInfos); }
: bucketInfos;
return JSON.stringify(final);
} }
/** /**
* deSerialize the JSON string * deSerialize the JSON string
* @param {string} stringBucket - the stringified bucket * @param stringBucket - the stringified bucket
* @return {object} - parsed string * @return - parsed string
*/ */
static deSerialize(stringBucket) { static deSerialize(stringBucket: string) {
const obj = JSON.parse(stringBucket); const obj = JSON.parse(stringBucket);
const websiteConfig = obj.websiteConfiguration ? const websiteConfig = obj.websiteConfiguration ?
new WebsiteConfiguration(obj.websiteConfiguration) : null; new WebsiteConfiguration(obj.websiteConfiguration) : null;
@@ -249,12 +382,13 @@ class BucketInfo {
obj.cors, obj.replicationConfiguration, obj.lifecycleConfiguration, obj.cors, obj.replicationConfiguration, obj.lifecycleConfiguration,
obj.bucketPolicy, obj.uid, obj.readLocationConstraint, obj.isNFS, obj.bucketPolicy, obj.uid, obj.readLocationConstraint, obj.isNFS,
obj.ingestion, obj.azureInfo, obj.objectLockEnabled, obj.ingestion, obj.azureInfo, obj.objectLockEnabled,
obj.objectLockConfiguration, obj.notificationConfiguration); obj.objectLockConfiguration, obj.notificationConfiguration, obj.tags,
obj.capabilities, obj.quotaMax);
} }
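
Since deSerialize simply re-invokes the constructor with the parsed fields, the new trailing arguments (tags, capabilities, quotaMax) survive a round trip. A sketch, assuming `bucket` is an existing BucketInfo instance and Node's assert is imported:

const stored = bucket.serialize();            // JSON string; websiteConfiguration flattened via getConfig()
const copy = BucketInfo.deSerialize(stored);  // rebuilt instance
assert.deepStrictEqual(copy.getTags(), bucket.getTags());
assert.strictEqual(copy.getQuota(), bucket.getQuota());
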
/** /**
* Returns the current model version for the data structure * Returns the current model version for the data structure
* @return {number} - the current model version set above in the file * @return - the current model version set above in the file
*/ */
static currentModelVersion() { static currentModelVersion() {
return modelVersion; return modelVersion;
@@ -263,10 +397,10 @@ class BucketInfo {
/** /**
* Create a BucketInfo from an object * Create a BucketInfo from an object
* *
* @param {object} data - object containing data * @param data - object containing data
* @return {BucketInfo} Return an BucketInfo * @return Return an BucketInfo
*/ */
static fromObj(data) { static fromObj(data: any) {
return new BucketInfo(data._name, data._owner, data._ownerDisplayName, return new BucketInfo(data._name, data._owner, data._ownerDisplayName,
data._creationDate, data._mdBucketModelVersion, data._acl, data._creationDate, data._mdBucketModelVersion, data._acl,
data._transient, data._deleted, data._serverSideEncryption, data._transient, data._deleted, data._serverSideEncryption,
@@ -276,79 +410,80 @@ class BucketInfo {
data._bucketPolicy, data._uid, data._readLocationConstraint, data._bucketPolicy, data._uid, data._readLocationConstraint,
data._isNFS, data._ingestion, data._azureInfo, data._isNFS, data._ingestion, data._azureInfo,
data._objectLockEnabled, data._objectLockConfiguration, data._objectLockEnabled, data._objectLockConfiguration,
data._notificationConfiguration); data._notificationConfiguration, data._tags, data._capabilities,
data._quotaMax);
} }
/** /**
* Get the ACLs. * Get the ACLs.
* @return {object} acl * @return acl
*/ */
getAcl() { getAcl() {
return this._acl; return this._acl;
} }
/** /**
* Set the canned acl's. * Set the canned acl's.
* @param {string} cannedACL - canned ACL being set * @param cannedACL - canned ACL being set
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setCannedAcl(cannedACL) { setCannedAcl(cannedACL: string) {
this._acl.Canned = cannedACL; this._acl.Canned = cannedACL;
return this; return this;
} }
/** /**
* Set a specific ACL. * Set a specific ACL.
* @param {string} canonicalID - id for account being given access * @param canonicalID - id for account being given access
* @param {string} typeOfGrant - type of grant being granted * @param typeOfGrant - type of grant being granted
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setSpecificAcl(canonicalID, typeOfGrant) { setSpecificAcl(canonicalID: string, typeOfGrant: string) {
this._acl[typeOfGrant].push(canonicalID); this._acl[typeOfGrant].push(canonicalID);
return this; return this;
} }
/** /**
* Set all ACLs. * Set all ACLs.
* @param {object} acl - new set of ACLs * @param acl - new set of ACLs
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setFullAcl(acl) { setFullAcl(acl: ACL) {
this._acl = acl; this._acl = acl;
return this; return this;
} }
/** /**
* Get the server side encryption information * Get the server side encryption information
* @return {object} serverSideEncryption * @return serverSideEncryption
*/ */
getServerSideEncryption() { getServerSideEncryption() {
return this._serverSideEncryption; return this._serverSideEncryption;
} }
/** /**
* Set server side encryption information * Set server side encryption information
* @param {object} serverSideEncryption - server side encryption information * @param serverSideEncryption - server side encryption information
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setServerSideEncryption(serverSideEncryption) { setServerSideEncryption(serverSideEncryption: SSE) {
this._serverSideEncryption = serverSideEncryption; this._serverSideEncryption = serverSideEncryption;
return this; return this;
} }
/** /**
* Get the versioning configuration information * Get the versioning configuration information
* @return {object} versioningConfiguration * @return versioningConfiguration
*/ */
getVersioningConfiguration() { getVersioningConfiguration() {
return this._versioningConfiguration; return this._versioningConfiguration;
} }
/** /**
* Set versioning configuration information * Set versioning configuration information
* @param {object} versioningConfiguration - versioning information * @param versioningConfiguration - versioning information
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setVersioningConfiguration(versioningConfiguration) { setVersioningConfiguration(versioningConfiguration: VersioningConfiguration) {
this._versioningConfiguration = versioningConfiguration; this._versioningConfiguration = versioningConfiguration;
return this; return this;
} }
/** /**
* Check that versioning is 'Enabled' on the given bucket. * Check that versioning is 'Enabled' on the given bucket.
* @return {boolean} - `true` if versioning is 'Enabled', otherwise `false` * @return - `true` if versioning is 'Enabled', otherwise `false`
*/ */
isVersioningEnabled() { isVersioningEnabled() {
const versioningConfig = this.getVersioningConfiguration(); const versioningConfig = this.getVersioningConfiguration();
@@ -356,32 +491,32 @@ class BucketInfo {
} }
/** /**
* Get the website configuration information * Get the website configuration information
* @return {object} websiteConfiguration * @return websiteConfiguration
*/ */
getWebsiteConfiguration() { getWebsiteConfiguration() {
return this._websiteConfiguration; return this._websiteConfiguration;
} }
/** /**
* Set website configuration information * Set website configuration information
* @param {object} websiteConfiguration - configuration for bucket website * @param websiteConfiguration - configuration for bucket website
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setWebsiteConfiguration(websiteConfiguration) { setWebsiteConfiguration(websiteConfiguration: WebsiteConfiguration) {
this._websiteConfiguration = websiteConfiguration; this._websiteConfiguration = websiteConfiguration;
return this; return this;
} }
/** /**
* Set replication configuration information * Set replication configuration information
* @param {object} replicationConfiguration - replication information * @param replicationConfiguration - replication information
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setReplicationConfiguration(replicationConfiguration) { setReplicationConfiguration(replicationConfiguration: any) {
this._replicationConfiguration = replicationConfiguration; this._replicationConfiguration = replicationConfiguration;
return this; return this;
} }
/** /**
* Get replication configuration information * Get replication configuration information
* @return {object|null} replication configuration information or `null` if * @return replication configuration information or `null` if
* the bucket does not have a replication configuration * the bucket does not have a replication configuration
*/ */
getReplicationConfiguration() { getReplicationConfiguration() {
@@ -389,7 +524,7 @@ class BucketInfo {
} }
/** /**
* Get lifecycle configuration information * Get lifecycle configuration information
* @return {object|null} lifecycle configuration information or `null` if * @return lifecycle configuration information or `null` if
* the bucket does not have a lifecycle configuration * the bucket does not have a lifecycle configuration
*/ */
getLifecycleConfiguration() { getLifecycleConfiguration() {
@@ -397,16 +532,16 @@ class BucketInfo {
} }
/** /**
* Set lifecycle configuration information * Set lifecycle configuration information
* @param {object} lifecycleConfiguration - lifecycle information * @param lifecycleConfiguration - lifecycle information
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setLifecycleConfiguration(lifecycleConfiguration) { setLifecycleConfiguration(lifecycleConfiguration: any) {
this._lifecycleConfiguration = lifecycleConfiguration; this._lifecycleConfiguration = lifecycleConfiguration;
return this; return this;
} }
/** /**
* Get bucket policy statement * Get bucket policy statement
* @return {object|null} bucket policy statement or `null` if the bucket * @return bucket policy statement or `null` if the bucket
* does not have a bucket policy * does not have a bucket policy
*/ */
getBucketPolicy() { getBucketPolicy() {
@@ -414,16 +549,16 @@ class BucketInfo {
} }
/** /**
* Set bucket policy statement * Set bucket policy statement
* @param {object} bucketPolicy - bucket policy * @param bucketPolicy - bucket policy
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setBucketPolicy(bucketPolicy) { setBucketPolicy(bucketPolicy: any) {
this._bucketPolicy = bucketPolicy; this._bucketPolicy = bucketPolicy;
return this; return this;
} }
/** /**
* Get object lock configuration * Get object lock configuration
* @return {object|null} object lock configuration information or `null` if * @return object lock configuration information or `null` if
* the bucket does not have an object lock configuration * the bucket does not have an object lock configuration
*/ */
getObjectLockConfiguration() { getObjectLockConfiguration() {
@@ -431,16 +566,16 @@ class BucketInfo {
} }
/** /**
* Set object lock configuration * Set object lock configuration
* @param {object} objectLockConfiguration - object lock information * @param objectLockConfiguration - object lock information
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setObjectLockConfiguration(objectLockConfiguration) { setObjectLockConfiguration(objectLockConfiguration: any) {
this._objectLockConfiguration = objectLockConfiguration; this._objectLockConfiguration = objectLockConfiguration;
return this; return this;
} }
/** /**
* Get notification configuration * Get notification configuration
* @return {object|null} notification configuration information or 'null' if * @return notification configuration information or 'null' if
* the bucket does not have a notification configuration * the bucket does not have a notification configuration
*/ */
getNotificationConfiguration() { getNotificationConfiguration() {
@@ -448,41 +583,41 @@ class BucketInfo {
} }
/** /**
* Set notification configuration * Set notification configuration
* @param {object} notificationConfiguration - bucket notification information * @param notificationConfiguration - bucket notification information
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setNotificationConfiguration(notificationConfiguration) { setNotificationConfiguration(notificationConfiguration: any) {
this._notificationConfiguration = notificationConfiguration; this._notificationConfiguration = notificationConfiguration;
return this; return this;
} }
/** /**
* Get cors resource * Get cors resource
* @return {object[]} cors * @return cors
*/ */
getCors() { getCors() {
return this._cors; return this._cors;
} }
/** /**
* Set cors resource * Set cors resource
* @param {object[]} rules - collection of CORS rules * @param rules - collection of CORS rules
* @param {string} [rules.id] - optional id to identify rule * @param [rules.id] - optional id to identify rule
* @param {string[]} rules[].allowedMethods - methods allowed for CORS * @param rules[].allowedMethods - methods allowed for CORS
* @param {string[]} rules[].allowedOrigins - origins allowed for CORS * @param rules[].allowedOrigins - origins allowed for CORS
* @param {string[]} [rules[].allowedHeaders] - headers allowed in an * @param [rules[].allowedHeaders] - headers allowed in an
* OPTIONS request via the Access-Control-Request-Headers header * OPTIONS request via the Access-Control-Request-Headers header
* @param {number} [rules[].maxAgeSeconds] - seconds browsers should cache * @param [rules[].maxAgeSeconds] - seconds browsers should cache
* OPTIONS response * OPTIONS response
* @param {string[]} [rules[].exposeHeaders] - headers to expose to external * @param [rules[].exposeHeaders] - headers to expose to external
* applications * applications
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setCors(rules) { setCors(rules: CORS) {
this._cors = rules; this._cors = rules;
return this; return this;
} }
/** /**
* get the serverside encryption algorithm * get the serverside encryption algorithm
* @return {string} - sse algorithm used by this bucket * @return - sse algorithm used by this bucket
*/ */
getSseAlgorithm() { getSseAlgorithm() {
if (!this._serverSideEncryption) { if (!this._serverSideEncryption) {
@@ -492,7 +627,7 @@ class BucketInfo {
} }
/** /**
* get the server side encryption master key Id * get the server side encryption master key Id
* @return {string} - sse master key Id used by this bucket * @return - sse master key Id used by this bucket
*/ */
getSseMasterKeyId() { getSseMasterKeyId() {
if (!this._serverSideEncryption) { if (!this._serverSideEncryption) {
@@ -502,72 +637,72 @@ class BucketInfo {
} }
/** /**
* Get bucket name. * Get bucket name.
* @return {string} - bucket name * @return - bucket name
*/ */
getName() { getName() {
return this._name; return this._name;
} }
/** /**
* Set bucket name. * Set bucket name.
* @param {string} bucketName - new bucket name * @param bucketName - new bucket name
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setName(bucketName) { setName(bucketName: string) {
this._name = bucketName; this._name = bucketName;
return this; return this;
} }
/** /**
* Get bucket owner. * Get bucket owner.
* @return {string} - bucket owner's canonicalID * @return - bucket owner's canonicalID
*/ */
getOwner() { getOwner() {
return this._owner; return this._owner;
} }
/** /**
* Set bucket owner. * Set bucket owner.
* @param {string} ownerCanonicalID - bucket owner canonicalID * @param ownerCanonicalID - bucket owner canonicalID
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setOwner(ownerCanonicalID) { setOwner(ownerCanonicalID: string) {
this._owner = ownerCanonicalID; this._owner = ownerCanonicalID;
return this; return this;
} }
/** /**
* Get bucket owner display name. * Get bucket owner display name.
* @return {string} - bucket owner display name * @return - bucket owner display name
*/ */
getOwnerDisplayName() { getOwnerDisplayName() {
return this._ownerDisplayName; return this._ownerDisplayName;
} }
/** /**
* Set bucket owner display name. * Set bucket owner display name.
* @param {string} ownerDisplayName - bucket owner display name * @param ownerDisplayName - bucket owner display name
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setOwnerDisplayName(ownerDisplayName) { setOwnerDisplayName(ownerDisplayName: string) {
this._ownerDisplayName = ownerDisplayName; this._ownerDisplayName = ownerDisplayName;
return this; return this;
} }
/** /**
* Get bucket creation date. * Get bucket creation date.
* @return {object} - bucket creation date * @return - bucket creation date
*/ */
getCreationDate() { getCreationDate() {
return this._creationDate; return this._creationDate;
} }
/** /**
* Set location constraint. * Set location constraint.
* @param {string} location - bucket location constraint * @param location - bucket location constraint
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setLocationConstraint(location) { setLocationConstraint(location: string) {
this._locationConstraint = location; this._locationConstraint = location;
return this; return this;
} }
/** /**
* Get location constraint. * Get location constraint.
* @return {string} - bucket location constraint * @return - bucket location constraint
*/ */
getLocationConstraint() { getLocationConstraint() {
return this._locationConstraint; return this._locationConstraint;
@@ -575,7 +710,7 @@ class BucketInfo {
/** /**
* Get read location constraint. * Get read location constraint.
* @return {string} - bucket read location constraint * @return - bucket read location constraint
*/ */
getReadLocationConstraint() { getReadLocationConstraint() {
if (this._readLocationConstraint) { if (this._readLocationConstraint) {
@@ -587,24 +722,24 @@ class BucketInfo {
/** /**
* Set Bucket model version * Set Bucket model version
* *
* @param {number} version - Model version * @param version - Model version
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setMdBucketModelVersion(version) { setMdBucketModelVersion(version: number) {
this._mdBucketModelVersion = version; this._mdBucketModelVersion = version;
return this; return this;
} }
/** /**
* Get Bucket model version * Get Bucket model version
* *
* @return {number} Bucket model version * @return Bucket model version
*/ */
getMdBucketModelVersion() { getMdBucketModelVersion() {
return this._mdBucketModelVersion; return this._mdBucketModelVersion;
} }
/** /**
* Add transient flag. * Add transient flag.
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
addTransientFlag() { addTransientFlag() {
this._transient = true; this._transient = true;
@@ -612,7 +747,7 @@ class BucketInfo {
} }
/** /**
* Remove transient flag. * Remove transient flag.
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
removeTransientFlag() { removeTransientFlag() {
this._transient = false; this._transient = false;
@@ -620,14 +755,14 @@ class BucketInfo {
} }
/** /**
* Check transient flag. * Check transient flag.
* @return {boolean} - depending on whether transient flag in place * @return - depending on whether transient flag in place
*/ */
hasTransientFlag() { hasTransientFlag() {
return !!this._transient; return !!this._transient;
} }
/** /**
* Add deleted flag. * Add deleted flag.
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
addDeletedFlag() { addDeletedFlag() {
this._deleted = true; this._deleted = true;
@@ -635,7 +770,7 @@ class BucketInfo {
} }
/** /**
* Remove deleted flag. * Remove deleted flag.
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
removeDeletedFlag() { removeDeletedFlag() {
this._deleted = false; this._deleted = false;
@@ -643,14 +778,14 @@ class BucketInfo {
} }
/** /**
* Check deleted flag. * Check deleted flag.
* @return {boolean} - depending on whether deleted flag in place * @return - depending on whether deleted flag in place
*/ */
hasDeletedFlag() { hasDeletedFlag() {
return !!this._deleted; return !!this._deleted;
} }
/** /**
* Check if the versioning mode is on. * Check if the versioning mode is on.
* @return {boolean} - versioning mode status * @return - versioning mode status
*/ */
isVersioningOn() { isVersioningOn() {
return this._versioningConfiguration && return this._versioningConfiguration &&
@@ -658,39 +793,39 @@ class BucketInfo {
} }
/** /**
* Get unique id of bucket. * Get unique id of bucket.
* @return {string} - unique id * @return - unique id
*/ */
getUid() { getUid() {
return this._uid; return this._uid;
} }
/** /**
* Set unique id of bucket. * Set unique id of bucket.
* @param {string} uid - unique identifier for the bucket * @param uid - unique identifier for the bucket
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setUid(uid) { setUid(uid: string) {
this._uid = uid; this._uid = uid;
return this; return this;
} }
/** /**
* Check if the bucket is an NFS bucket. * Check if the bucket is an NFS bucket.
* @return {boolean} - Whether the bucket is NFS or not * @return - Whether the bucket is NFS or not
*/ */
isNFS() { isNFS() {
return this._isNFS; return this._isNFS;
} }
/** /**
* Set whether the bucket is an NFS bucket. * Set whether the bucket is an NFS bucket.
* @param {boolean} isNFS - Whether the bucket is NFS or not * @param isNFS - Whether the bucket is NFS or not
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setIsNFS(isNFS) { setIsNFS(isNFS: boolean) {
this._isNFS = isNFS; this._isNFS = isNFS;
return this; return this;
} }
/** /**
* enable ingestion, set 'this._ingestion' to { status: 'enabled' } * enable ingestion, set 'this._ingestion' to { status: 'enabled' }
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
enableIngestion() { enableIngestion() {
this._ingestion = { status: 'enabled' }; this._ingestion = { status: 'enabled' };
@@ -698,7 +833,7 @@ class BucketInfo {
} }
/** /**
* disable ingestion, set 'this._ingestion' to { status: 'disabled' } * disable ingestion, set 'this._ingestion' to { status: 'disabled' }
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
disableIngestion() { disableIngestion() {
this._ingestion = { status: 'disabled' }; this._ingestion = { status: 'disabled' };
@@ -706,7 +841,7 @@ class BucketInfo {
} }
/** /**
* Get ingestion configuration * Get ingestion configuration
* @return {object} - bucket ingestion configuration: Enabled or Disabled * @return - bucket ingestion configuration: Enabled or Disabled
*/ */
getIngestion() { getIngestion() {
return this._ingestion; return this._ingestion;
@@ -714,7 +849,7 @@ class BucketInfo {
/** /**
* Check if bucket is an ingestion bucket * Check if bucket is an ingestion bucket
* @return {boolean} - 'true' if bucket is ingestion bucket, 'false' if * @return - 'true' if bucket is ingestion bucket, 'false' if
* otherwise * otherwise
*/ */
isIngestionBucket() { isIngestionBucket() {
@@ -726,7 +861,7 @@ class BucketInfo {
} }
/** /**
* Check if ingestion is enabled * Check if ingestion is enabled
* @return {boolean} - 'true' if ingestion is enabled, otherwise 'false' * @return - 'true' if ingestion is enabled, otherwise 'false'
*/ */
isIngestionEnabled() { isIngestionEnabled() {
const ingestionConfig = this.getIngestion(); const ingestionConfig = this.getIngestion();
@ -735,7 +870,7 @@ class BucketInfo {
/** /**
* Return the Azure specific storage account information for this bucket * Return the Azure specific storage account information for this bucket
* @return {object} - a structure suitable for {@link BucketAzureInfo} * @return - a structure suitable for {@link BucketAzureInfo}
* constructor * constructor
*/ */
getAzureInfo() { getAzureInfo() {
@@ -743,30 +878,93 @@ class BucketInfo {
} }
/** /**
* Set the Azure specific storage account information for this bucket * Set the Azure specific storage account information for this bucket
* @param {object} azureInfo - a structure suitable for * @param azureInfo - a structure suitable for
* {@link BucketAzureInfo} construction * {@link BucketAzureInfo} construction
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setAzureInfo(azureInfo) { setAzureInfo(azureInfo: any) {
this._azureInfo = azureInfo; this._azureInfo = azureInfo;
return this; return this;
} }
/** /**
* Check if object lock is enabled. * Check if object lock is enabled.
* @return {boolean} - depending on whether object lock is enabled * @return - depending on whether object lock is enabled
*/ */
isObjectLockEnabled() { isObjectLockEnabled() {
return !!this._objectLockEnabled; return !!this._objectLockEnabled;
} }
/** /**
* Set the value of objectLockEnabled field. * Set the value of objectLockEnabled field.
* @param {boolean} enabled - true if object lock enabled else false. * @param enabled - true if object lock enabled else false.
* @return {BucketInfo} - bucket info instance * @return - bucket info instance
*/ */
setObjectLockEnabled(enabled) { setObjectLockEnabled(enabled: boolean) {
this._objectLockEnabled = enabled; this._objectLockEnabled = enabled;
return this; return this;
} }
}
module.exports = BucketInfo; /**
* Get the value of bucket tags
* @return - Array of bucket tags
*/
getTags() {
return this._tags;
}
/**
* Set bucket tags
* @return - bucket info instance
*/
setTags(tags: Array<BucketTag>) {
this._tags = tags;
return this;
}
/**
* Get the value of bucket capabilities
* @return - capabilities of the bucket
*/
getCapabilities() {
return this._capabilities;
}
/**
* Get a specific bucket capability
*
* @param capability? - if provided, will return a specific capacity
* @return - capability of the bucket
*/
getCapability(capability: string) : VeeamSOSApi | undefined {
if (capability && this._capabilities && this._capabilities[capability]) {
return this._capabilities[capability];
}
return undefined;
}
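
For example, reading the Veeam capability defined by the Capabilities type earlier in this file (the `bucket` instance is assumed to exist; property names come from the type, values from whatever was stored):

const veeam = bucket.getCapability('VeeamSOSApi');
if (veeam?.CapacityInfo) {
    console.log(veeam.CapacityInfo.Available); // remaining capacity, per the CapacityInfo type
}
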
/**
* Set bucket capabilities
* @return - bucket info instance
*/
setCapabilities(capabilities: Capabilities) {
this._capabilities = capabilities;
return this;
}
/**
* Get the bucket quota information
* @return quotaMax
*/
getQuota() {
return this._quotaMax;
}
/**
* Set bucket quota
* @param quota - quota to be set
* @return - bucket quota info
*/
setQuota(quota: number) {
this._quotaMax = quota || 0;
return this;
}
}
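
Putting the new accessors together, a short sketch; the tag key/value and quota figure are illustrative, and BucketTag is assumed to follow the usual { Key, Value } shape:

bucket
    .setTags([{ Key: 'team', Value: 'storage' }])
    .setQuota(5 * 1024 ** 3); // e.g. a 5 GiB cap; setQuota(0) stores 0, i.e. no quota
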


@@ -1,7 +1,6 @@
const assert = require('assert'); import assert from 'assert';
const errors = require('../errors').default; import errors, { ArsenalError } from '../errors';
const { validateResourcePolicy } = require('../policy/policyValidator'); import { validateResourcePolicy } from '../policy/policyValidator';
/** /**
* Format of json policy: * Format of json policy:
@@ -49,20 +48,22 @@ const objectActions = [
's3:PutObjectTagging', 's3:PutObjectTagging',
]; ];
class BucketPolicy { export default class BucketPolicy {
_json: string;
_policy: any;
/** /**
* Create a Bucket Policy instance * Create a Bucket Policy instance
* @param {string} json - the json policy * @param json - the json policy
* @return {object} - BucketPolicy instance * @return - BucketPolicy instance
*/ */
constructor(json) { constructor(json: string) {
this._json = json; this._json = json;
this._policy = {}; this._policy = {};
} }
/** /**
* Get the bucket policy * Get the bucket policy
* @return {object} - the bucket policy or error * @return - the bucket policy or error
*/ */
getBucketPolicy() { getBucketPolicy() {
const policy = this._getPolicy(); const policy = this._getPolicy();
@@ -71,9 +72,9 @@ class BucketPolicy {
/** /**
* Get the bucket policy array * Get the bucket policy array
* @return {object} - contains error if policy validation fails * @return - contains error if policy validation fails
*/ */
_getPolicy() { _getPolicy(): { error: ArsenalError } | any {
if (!this._json || this._json === '') { if (!this._json || this._json === '') {
return { error: errors.MalformedPolicy.customizeDescription( return { error: errors.MalformedPolicy.customizeDescription(
'request json is empty or undefined') }; 'request json is empty or undefined') };
@@ -101,13 +102,13 @@ class BucketPolicy {
/** /**
* Validate action and resource are compatible * Validate action and resource are compatible
* @return {error} - contains error or empty obj * @return - contains error or empty obj
*/ */
_validateActionResource() { _validateActionResource(): { error?: ArsenalError } {
const invalid = this._policy.Statement.every(s => { const invalid = this._policy.Statement.every((s: any) => {
const actions = typeof s.Action === 'string' ? const actions: string[] = typeof s.Action === 'string' ?
[s.Action] : s.Action; [s.Action] : s.Action;
const resources = typeof s.Resource === 'string' ? const resources: string[] = typeof s.Resource === 'string' ?
[s.Resource] : s.Resource; [s.Resource] : s.Resource;
const objectAction = actions.some(a => const objectAction = actions.some(a =>
a.includes('Object') || objectActions.includes(a)); a.includes('Object') || objectActions.includes(a));
@@ -129,15 +130,12 @@ class BucketPolicy {
/** /**
* Call resource policy schema validation function * Call resource policy schema validation function
* @param {object} policy - the bucket policy object to validate * @param policy - the bucket policy object to validate
* @return {undefined}
*/ */
static validatePolicy(policy) { static validatePolicy(policy: any) {
// only the BucketInfo constructor calls this function // only the BucketInfo constructor calls this function
// and BucketInfo will always be passed an object // and BucketInfo will always be passed an object
const validated = validateResourcePolicy(JSON.stringify(policy)); const validated = validateResourcePolicy(JSON.stringify(policy));
assert.deepStrictEqual(validated, { error: null, valid: true }); assert.deepStrictEqual(validated, { error: null, valid: true });
} }
} }
module.exports = BucketPolicy;
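
A minimal policy that passes the action/resource compatibility check above: an Object-level action paired with an object ARN. Bucket name and principal are placeholders:

const policy = new BucketPolicy(JSON.stringify({
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Principal: '*',
        Action: 's3:GetObject',
        Resource: 'arn:aws:s3:::example-bucket/*',
    }],
}));
const result = policy.getBucketPolicy(); // the parsed policy, or { error } if validation fails
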


@@ -1,138 +0,0 @@
const uuid = require('uuid/v4');
/**
* @class LifecycleRule
*
* @classdesc Simple get/set class to build a single Rule
*/
class LifecycleRule {
constructor(id, status) {
// defaults
this.id = id || uuid();
this.status = status === 'Disabled' ? 'Disabled' : 'Enabled';
this.tags = [];
}
build() {
const rule = {};
rule.ID = this.id;
rule.Status = this.status;
if (this.expiration) {
rule.Expiration = this.expiration;
}
if (this.ncvExpiration) {
rule.NoncurrentVersionExpiration = this.ncvExpiration;
}
if (this.abortMPU) {
rule.AbortIncompleteMultipartUpload = this.abortMPU;
}
if (this.transitions) {
rule.Transitions = this.transitions;
}
const filter = {};
if ((this.prefix && this.tags.length) || (this.tags.length > 1)) {
// And rule
const andRule = {};
if (this.prefix) {
andRule.Prefix = this.prefix;
}
andRule.Tags = this.tags;
filter.And = andRule;
} else {
if (this.prefix) {
filter.Prefix = this.prefix;
}
if (this.tags.length) {
filter.Tag = this.tags[0];
}
}
if (Object.keys(filter).length > 0) {
rule.Filter = filter;
} else {
rule.Prefix = '';
}
return rule;
}
addID(id) {
this.id = id;
return this;
}
disable() {
this.status = 'Disabled';
return this;
}
addPrefix(prefix) {
this.prefix = prefix;
return this;
}
addTag(key, value) {
this.tags.push({
Key: key,
Value: value,
});
return this;
}
/**
* Expiration
* @param {string} prop - Property must be defined in `validProps`
* @param {integer|boolean} value - integer for `Date` or `Days`, or
* boolean for `ExpiredObjectDeleteMarker`
* @return {undefined}
*/
addExpiration(prop, value) {
const validProps = ['Date', 'Days', 'ExpiredObjectDeleteMarker'];
if (validProps.indexOf(prop) > -1) {
this.expiration = this.expiration || {};
if (prop === 'ExpiredObjectDeleteMarker') {
this.expiration[prop] = JSON.parse(value);
} else {
this.expiration[prop] = value;
}
}
return this;
}
/**
* NoncurrentVersionExpiration
* @param {integer} days - NoncurrentDays
* @return {undefined}
*/
addNCVExpiration(days) {
this.ncvExpiration = { NoncurrentDays: days };
return this;
}
/**
* AbortIncompleteMultipartUpload
* @param {integer} days - DaysAfterInitiation
* @return {undefined}
*/
addAbortMPU(days) {
this.abortMPU = { DaysAfterInitiation: days };
return this;
}
/**
* Transitions
* @param {array} transitions - transitions
* @return {undefined}
*/
addTransitions(transitions) {
this.transitions = transitions;
return this;
}
}
module.exports = LifecycleRule;

lib/models/LifecycleRule.ts (new file, 190 lines)

@@ -0,0 +1,190 @@
import uuid from 'uuid/v4';
export type Status = 'Disabled' | 'Enabled';
export type Tag = { Key: string; Value: string };
export type Tags = Tag[];
export type And = { Prefix?: string; Tags: Tags };
export type Filter = { Prefix?: string; Tag?: Tag } | { And: And };
export type Expiration = {
ExpiredObjectDeleteMarker?: number | boolean;
Date?: number | boolean;
Days?: number | boolean;
};
export type NoncurrentExpiration = {
NoncurrentDays: number | null;
NewerNoncurrentVersions: number | null;
};
/**
* @class LifecycleRule
*
* @classdesc Simple get/set class to build a single Rule
*/
export default class LifecycleRule {
id: string;
status: Status;
tags: Tags;
expiration?: Expiration;
ncvExpiration?: NoncurrentExpiration;
abortMPU?: { DaysAfterInitiation: number };
transitions?: any[];
ncvTransitions?: any[];
prefix?: string;
constructor(id: string, status: Status) {
// defaults
this.id = id || uuid();
this.status = status === 'Disabled' ? 'Disabled' : 'Enabled';
this.tags = [];
}
build() {
const rule: {
ID: string;
Status: Status;
Expiration?: Expiration;
NoncurrentVersionExpiration?: NoncurrentExpiration;
AbortIncompleteMultipartUpload?: { DaysAfterInitiation: number };
Transitions?: any[];
NoncurrentVersionTransitions?: any[];
Filter?: Filter;
Prefix?: '';
} = { ID: this.id, Status: this.status };
if (this.expiration) {
rule.Expiration = this.expiration;
}
if (this.ncvExpiration) {
rule.NoncurrentVersionExpiration = this.ncvExpiration
}
if (this.abortMPU) {
rule.AbortIncompleteMultipartUpload = this.abortMPU;
}
if (this.transitions) {
rule.Transitions = this.transitions;
}
if (this.ncvTransitions) {
rule.NoncurrentVersionTransitions = this.ncvTransitions;
}
const filter = this.buildFilter();
if (Object.keys(filter).length > 0) {
rule.Filter = filter;
} else {
rule.Prefix = '';
}
return rule;
}
buildFilter() {
if ((this.prefix && this.tags.length) || this.tags.length > 1) {
// And rule
const And: And = { Tags: this.tags };
if (this.prefix) {
And.Prefix = this.prefix;
}
return { And };
} else {
const filter: Filter = {};
if (this.prefix) {
filter.Prefix = this.prefix;
}
if (this.tags.length > 0) {
filter.Tag = this.tags[0];
}
return filter;
}
}
addID(id: string) {
this.id = id;
return this;
}
disable() {
this.status = 'Disabled';
return this;
}
addPrefix(prefix: string) {
this.prefix = prefix;
return this;
}
addTag(key: string, value: string) {
this.tags.push({
Key: key,
Value: value,
});
return this;
}
/**
* Expiration
* @param prop - Property must be defined in `validProps`
* @param value - integer for `Date` or `Days`, or boolean for `ExpiredObjectDeleteMarker`
*/
addExpiration(prop: 'ExpiredObjectDeleteMarker', value: boolean): this;
addExpiration(prop: 'Date' | 'Days', value: number): this;
addExpiration(prop: string, value: number | boolean) {
const validProps = ['Date', 'Days', 'ExpiredObjectDeleteMarker'];
if (validProps.includes(prop)) {
this.expiration = this.expiration || {};
if (prop === 'ExpiredObjectDeleteMarker') {
// FIXME
// @ts-expect-error
this.expiration[prop] = JSON.parse(value);
} else {
this.expiration[prop] = value;
}
}
return this;
}
/**
* NoncurrentVersionExpiration
* @param prop - Property must be defined in `validProps`
* @param value - integer for `NoncurrentDays` and `NewerNoncurrentVersions`
*/
addNCVExpiration(prop: 'NoncurrentDays' | 'NewerNoncurrentVersions', value: number): this;
addNCVExpiration(prop: string, value: number) {
const validProps = ['NoncurrentDays', 'NewerNoncurrentVersions'];
if (validProps.includes(prop)) {
this.ncvExpiration = this.ncvExpiration || {
NoncurrentDays: null,
NewerNoncurrentVersions: null,
};
this.ncvExpiration[prop] = value;
}
return this;
}
/**
* abortincompletemultipartupload
* @param days - DaysAfterInitiation
*/
addAbortMPU(days: number) {
this.abortMPU = { DaysAfterInitiation: days };
return this;
}
/**
* Transitions
* @param transitions - transitions
*/
addTransitions(transitions: any[]) {
this.transitions = transitions;
return this;
}
/**
* NonCurrentVersionTransitions
* @param nvcTransitions - NonCurrentVersionTransitions
*/
addNCVTransitions(nvcTransitions) {
this.ncvTransitions = nvcTransitions;
return this;
}
}
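
End to end, the builder produces the same rule shape the deleted JS class did, now type-checked. A sketch with illustrative id, prefix, and tag values, assuming LifecycleRule is imported from lib/models/LifecycleRule:

const rule = new LifecycleRule('expire-logs', 'Enabled')
    .addPrefix('logs/')
    .addTag('retain', 'false')
    .addExpiration('Days', 30)
    .build();
// A prefix plus a tag triggers the And branch of buildFilter(), so rule is roughly:
// { ID: 'expire-logs', Status: 'Enabled', Expiration: { Days: 30 },
//   Filter: { And: { Prefix: 'logs/', Tags: [{ Key: 'retain', Value: 'false' }] } } }
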


@@ -1,11 +1,11 @@
const assert = require('assert'); import assert from 'assert';
const UUID = require('uuid'); import UUID from 'uuid';
const { import {
supportedNotificationEvents, supportedNotificationEvents,
notificationArnPrefix, notificationArnPrefix,
} = require('../constants'); } from '../constants';
const errors = require('../errors').default; import errors, { ArsenalError } from '../errors';
/** /**
* Format of xml request: * Format of xml request:
@@ -51,21 +57,27 @@ const errors = require('../errors').default;
* } * }
*/ */
class NotificationConfiguration { export default class NotificationConfiguration {
_parsedXml: any;
_config: {
error?: ArsenalError;
queueConfig?: any[];
};
_ids: Set<string>;
/** /**
* Create a Notification Configuration instance * Create a Notification Configuration instance
* @param {string} xml - parsed configuration xml * @param xml - parsed configuration xml
* @return {object} - NotificationConfiguration instance * @return - NotificationConfiguration instance
*/ */
constructor(xml) { constructor(xml: any) {
this._parsedXml = xml; this._parsedXml = xml;
this._config = {}; this._config = {};
this._ids = new Set([]); this._ids = new Set();
} }
/** /**
* Get notification configuration * Get notification configuration
* @return {object} - contains error if parsing failed * @return - contains error if parsing failed
*/ */
getValidatedNotificationConfiguration() { getValidatedNotificationConfiguration() {
const validationError = this._parseNotificationConfig(); const validationError = this._parseNotificationConfig();
@@ -77,7 +83,7 @@ class NotificationConfiguration {
/** /**
* Check that notification configuration is valid * Check that notification configuration is valid
* @return {error | null} - error if parsing failed, else undefined * @return - error if parsing failed, else undefined
*/ */
_parseNotificationConfig() { _parseNotificationConfig() {
if (!this._parsedXml || this._parsedXml === '') { if (!this._parsedXml || this._parsedXml === '') {
@@ -95,19 +101,19 @@ class NotificationConfiguration {
return null; return null;
} }
this._config.queueConfig = []; this._config.queueConfig = [];
let parseError; let parseError: ArsenalError | undefined;
for (let i = 0; i < queueConfig.length; i++) { for (let i = 0; i < queueConfig.length; i++) {
const eventObj = this._parseEvents(queueConfig[i].Event); const eventObj = this._parseEvents(queueConfig[i].Event);
const filterObj = this._parseFilter(queueConfig[i].Filter); const filterObj = this._parseFilter(queueConfig[i].Filter);
const idObj = this._parseId(queueConfig[i].Id); const idObj = this._parseId(queueConfig[i].Id);
const arnObj = this._parseArn(queueConfig[i].Queue); const arnObj = this._parseArn(queueConfig[i].Queue);
if (eventObj.error) { if ('error' in eventObj) {
parseError = eventObj.error; parseError = eventObj.error;
this._config = {}; this._config = {};
break; break;
} }
if (filterObj.error) { if ('error' in filterObj) {
parseError = filterObj.error; parseError = filterObj.error;
this._config = {}; this._config = {};
break; break;
@@ -129,42 +135,43 @@ class NotificationConfiguration {
filterRules: filterObj.filterRules, filterRules: filterObj.filterRules,
}); });
} }
return parseError; return parseError ?? null;
} }
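
On success, each entry pushed onto this._config.queueConfig carries the four parsed pieces. A hypothetical example of one entry; the accepted event names and ARN prefix are governed by supportedNotificationEvents and notificationArnPrefix in ../constants, so these values are placeholders only:

const exampleEntry = {
    events: ['s3:ObjectCreated:*'],                     // from _parseEvents (placeholder event)
    queueArn: 'arn:scality:bucketnotif:::target-queue', // from _parseArn (placeholder ARN)
    id: 'user-supplied-or-generated-id',                // from _parseId
    filterRules: [{ name: 'Prefix', value: 'logs/' }],  // from _parseFilter; may be undefined
};
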
/** /**
* Check that events array is valid * Check that events array is valid
* @param {array} events - event array * @param events - event array
* @return {object} - contains error if parsing failed or events array * @return - contains error if parsing failed or events array
*/ */
_parseEvents(events) { _parseEvents(events: any[]) {
const eventsObj = {
events: [],
};
if (!events || !events[0]) { if (!events || !events[0]) {
eventsObj.error = errors.MalformedXML.customizeDescription( const msg = 'each queue configuration must contain an event';
'each queue configuration must contain an event'); const error = errors.MalformedXML.customizeDescription(msg);
return eventsObj; return { error };
} }
events.forEach(e => { const eventsObj: { error?: ArsenalError, events: any[] } = {
events: [] as any[],
};
for (const e of events) {
if (!supportedNotificationEvents.has(e)) { if (!supportedNotificationEvents.has(e)) {
eventsObj.error = errors.MalformedXML.customizeDescription( const msg = 'event array contains invalid or unsupported event';
'event array contains invalid or unsupported event'); const error = errors.MalformedXML.customizeDescription(msg);
return { error };
} else { } else {
eventsObj.events.push(e); eventsObj.events.push(e);
} }
}); }
return eventsObj; return eventsObj;
} }
/** /**
* Check that filter array is valid * Check that filter array is valid
* @param {array} filter - filter array * @param filter - filter array
* @return {object} - contains error if parsing failed or filter array * @return - contains error if parsing failed or filter array
*/ */
_parseFilter(filter) { _parseFilter(filter: any[]) {
if (!filter || !filter[0]) { if (!filter || !filter[0]) {
return {}; return { filterRules: undefined };
} }
if (!filter[0].S3Key || !filter[0].S3Key[0]) { if (!filter[0].S3Key || !filter[0].S3Key[0]) {
return { error: errors.MalformedXML.customizeDescription( return { error: errors.MalformedXML.customizeDescription(
@ -175,7 +182,7 @@ class NotificationConfiguration {
return { error: errors.MalformedXML.customizeDescription( return { error: errors.MalformedXML.customizeDescription(
'if included, queue configuration filter must contain a rule') }; 'if included, queue configuration filter must contain a rule') };
} }
const filterObj = { const filterObj: { filterRules: { name: string; value: string }[] } = {
filterRules: [], filterRules: [],
}; };
const ruleArray = filterRules.FilterRule; const ruleArray = filterRules.FilterRule;
@ -201,15 +208,15 @@ class NotificationConfiguration {
/** /**
* Check that id string is valid * Check that id string is valid
* @param {string} id - id string (optional) * @param id - id string (optional)
* @return {object} - contains error if parsing failed or id * @return - contains error if parsing failed or id
*/ */
_parseId(id) { _parseId(id: string) {
if (id && id[0].length > 255) { if (id && id[0].length > 255) {
return { error: errors.InvalidArgument.customizeDescription( return { error: errors.InvalidArgument.customizeDescription(
'queue configuration ID is greater than 255 characters long') }; 'queue configuration ID is greater than 255 characters long') };
} }
let validId; let validId: string;
if (!id || !id[0]) { if (!id || !id[0]) {
// id is optional property, so create one if not provided or is '' // id is optional property, so create one if not provided or is ''
// We generate 48-character alphanumeric, unique id for rule // We generate 48-character alphanumeric, unique id for rule
@ -228,10 +235,10 @@ class NotificationConfiguration {
/** /**
* Check that arn string is valid * Check that arn string is valid
* @param {string} arn - queue arn * @param arn - queue arn
* @return {object} - contains error if parsing failed or queue arn * @return - contains error if parsing failed or queue arn
*/ */
_parseArn(arn) { _parseArn(arn: string) {
if (!arn || !arn[0]) { if (!arn || !arn[0]) {
return { error: errors.MalformedXML.customizeDescription( return { error: errors.MalformedXML.customizeDescription(
'each queue configuration must contain a queue arn'), 'each queue configuration must contain a queue arn'),
@ -249,11 +256,21 @@ class NotificationConfiguration {
/** /**
* Get XML representation of notification configuration object * Get XML representation of notification configuration object
* @param {object} config - notification configuration object * @param config - notification configuration object
* @return {string} - XML representation of config * @return - XML representation of config
*/ */
static getConfigXML(config) { static getConfigXML(config: {
const xmlArray = []; queueConfig: {
id: string;
events: string[];
queueArn: string;
filterRules: {
name: string;
value: string;
}[];
}[];
}) {
const xmlArray: string[] = [];
if (config && config.queueConfig) { if (config && config.queueConfig) {
config.queueConfig.forEach(c => { config.queueConfig.forEach(c => {
xmlArray.push('<QueueConfiguration>'); xmlArray.push('<QueueConfiguration>');
@ -284,20 +301,19 @@ class NotificationConfiguration {
/** /**
* Validate the bucket metadata notification configuration structure and * Validate the bucket metadata notification configuration structure and
* value types * value types
* @param {object} config - The notificationconfiguration to validate * @param config - The notification configuration to validate
* @return {undefined}
*/ */
static validateConfig(config) { static validateConfig(config: any) {
assert.strictEqual(typeof config, 'object'); assert.strictEqual(typeof config, 'object');
if (!config.queueConfig) { if (!config.queueConfig) {
return; return;
} }
config.queueConfig.forEach(q => { config.queueConfig.forEach((q: any) => {
const { events, queueArn, filterRules, id } = q; const { events, queueArn, filterRules, id } = q;
events.forEach(e => assert.strictEqual(typeof e, 'string')); events.forEach((e: any) => assert.strictEqual(typeof e, 'string'));
assert.strictEqual(typeof queueArn, 'string'); assert.strictEqual(typeof queueArn, 'string');
if (filterRules) { if (filterRules) {
filterRules.forEach(f => { filterRules.forEach((f: any) => {
assert.strictEqual(typeof f.name, 'string'); assert.strictEqual(typeof f.name, 'string');
assert.strictEqual(typeof f.value, 'string'); assert.strictEqual(typeof f.value, 'string');
}); });
@ -307,5 +323,3 @@ class NotificationConfiguration {
return; return;
} }
} }
module.exports = NotificationConfiguration;
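
The conversion above relies on returning either { error } or a result object and narrowing with the 'error' in obj check, which lets TypeScript prove which branch holds without optional error fields. A minimal, self-contained sketch of that pattern (all names below are illustrative; ArsenalError stands in for the real class from ../errors):

// Discriminated-union error handling, as used throughout this migration.
type ArsenalError = { code: number; description: string };
type EventsResult = { error: ArsenalError } | { events: string[] };

function parseEvents(events?: string[]): EventsResult {
    if (!events || events.length === 0) {
        // hypothetical error value; the real code uses errors.MalformedXML
        return { error: { code: 400, description: 'must contain an event' } };
    }
    return { events };
}

const res = parseEvents(['s3:ObjectCreated:*']);
if ('error' in res) {
    console.log(res.error.description); // narrowed to the error branch
} else {
    console.log(res.events); // narrowed to the success branch
}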

@ -1,6 +1,12 @@
const assert = require('assert'); import assert from 'assert';
import errors, { ArsenalError } from '../errors';
const errors = require('../errors').default; export type Config = any;
export type LockMode = 'GOVERNANCE' | 'COMPLIANCE';
export type DefaultRetention = { Days: number } | { Years: number };
export type ParsedRetention =
| { error: ArsenalError }
| { timeType: 'days' | 'years'; timeValue: number };
/** /**
* Format of xml request: * Format of xml request:
@ -27,20 +33,23 @@ const errors = require('../errors').default;
* } * }
* } * }
*/ */
class ObjectLockConfiguration { export default class ObjectLockConfiguration {
_parsedXml: any;
_config: Config;
/** /**
* Create an Object Lock Configuration instance * Create an Object Lock Configuration instance
* @param {string} xml - the parsed configuration xml * @param xml - the parsed configuration xml
* @return {object} - ObjectLockConfiguration instance * @return - ObjectLockConfiguration instance
*/ */
constructor(xml) { constructor(xml: any) {
this._parsedXml = xml; this._parsedXml = xml;
this._config = {}; this._config = {};
} }
/** /**
* Get the object lock configuration * Get the object lock configuration
* @return {object} - contains error if parsing failed * @return - contains error if parsing failed
*/ */
getValidatedObjectLockConfiguration() { getValidatedObjectLockConfiguration() {
const validConfig = this._parseObjectLockConfig(); const validConfig = this._parseObjectLockConfig();
@ -52,131 +61,128 @@ class ObjectLockConfiguration {
/** /**
* Check that mode is valid * Check that mode is valid
* @param {array} mode - array containing mode value * @param mode - array containing mode value
* @return {object} - contains error if parsing failed * @return - contains error if parsing failed
*/ */
_parseMode(mode) { _parseMode(mode: LockMode[]): { error: ArsenalError } | { mode: LockMode } {
const validMode = {};
const expectedModes = ['GOVERNANCE', 'COMPLIANCE']; const expectedModes = ['GOVERNANCE', 'COMPLIANCE'];
if (!mode || !mode[0]) { if (!mode || !mode[0]) {
validMode.error = errors.MalformedXML.customizeDescription( const msg = 'request xml does not contain Mode';
'request xml does not contain Mode'); const error = errors.MalformedXML.customizeDescription(msg);
return validMode; return { error };
} }
if (mode.length > 1) { if (mode.length > 1) {
validMode.error = errors.MalformedXML.customizeDescription( const msg = 'request xml contains more than one Mode';
'request xml contains more than one Mode'); const error = errors.MalformedXML.customizeDescription(msg);
return validMode; return { error };
} }
if (!expectedModes.includes(mode[0])) { if (!expectedModes.includes(mode[0])) {
validMode.error = errors.MalformedXML.customizeDescription( const msg = 'Mode request xml must be one of "GOVERNANCE", "COMPLIANCE"';
'Mode request xml must be one of "GOVERNANCE", "COMPLIANCE"'); const error = errors.MalformedXML.customizeDescription(msg);
return validMode; return { error };
} }
validMode.mode = mode[0]; return { mode: mode[0] };
return validMode;
} }
/** /**
* Check that time limit is valid * Check that time limit is valid
* @param {object} dr - DefaultRetention object containing days or years * @param dr - DefaultRetention object containing days or years
* @return {object} - contains error if parsing failed * @return - contains error if parsing failed
*/ */
_parseTime(dr) { _parseTime(dr: DefaultRetention): ParsedRetention {
const validTime = {}; if ('Days' in dr && 'Years' in dr) {
if (dr.Days && dr.Years) { const msg = 'request xml contains both Days and Years';
validTime.error = errors.MalformedXML.customizeDescription( const error = errors.MalformedXML.customizeDescription(msg);
'request xml contains both Days and Years'); return { error };
return validTime;
} }
const timeType = dr.Days ? 'Days' : 'Years'; const timeType = 'Days' in dr ? 'Days' : 'Years';
if (!dr[timeType] || !dr[timeType][0]) { if (!dr[timeType] || !dr[timeType][0]) {
validTime.error = errors.MalformedXML.customizeDescription( const msg = 'request xml does not contain Days or Years';
'request xml does not contain Days or Years'); const error = errors.MalformedXML.customizeDescription(msg);
return validTime; return { error };
} }
if (dr[timeType].length > 1) { if (dr[timeType].length > 1) {
validTime.error = errors.MalformedXML.customizeDescription( const msg = 'request xml contains more than one retention period';
'request xml contains more than one retention period'); const error = errors.MalformedXML.customizeDescription(msg);
return validTime; return { error };
} }
const timeValue = Number.parseInt(dr[timeType][0], 10); const timeValue = Number.parseInt(dr[timeType][0], 10);
if (Number.isNaN(timeValue)) { if (Number.isNaN(timeValue)) {
validTime.error = errors.MalformedXML.customizeDescription( const msg = 'request xml does not contain valid retention period';
'request xml does not contain valid retention period'); const error = errors.MalformedXML.customizeDescription(msg);
return validTime; return { error };
} }
if (timeValue < 1) { if (timeValue < 1) {
validTime.error = errors.InvalidArgument.customizeDescription( const msg = 'retention period must be a positive integer';
'retention period must be a positive integer'); const error = errors.InvalidArgument.customizeDescription(msg);
return validTime; return { error };
} }
if ((timeType === 'Days' && timeValue > 36500) || if ((timeType === 'Days' && timeValue > 36500) ||
(timeType === 'Years' && timeValue > 100)) { (timeType === 'Years' && timeValue > 100)) {
validTime.error = errors.InvalidArgument.customizeDescription( const msg = 'retention period is too large';
'retention period is too large'); const error = errors.InvalidArgument.customizeDescription(msg);
return validTime; return { error };
} }
validTime.timeType = timeType.toLowerCase(); return {
validTime.timeValue = timeValue; timeType: timeType.toLowerCase() as 'days' | 'years',
return validTime; timeValue: timeValue,
};
} }
/** /**
* Check that object lock configuration is valid * Check that object lock configuration is valid
* @return {object} - contains error if parsing failed * @return - contains error if parsing failed
*/ */
_parseObjectLockConfig() { _parseObjectLockConfig() {
const validConfig = {}; const validConfig: { error?: ArsenalError } = {};
if (!this._parsedXml || this._parsedXml === '') { if (!this._parsedXml || this._parsedXml === '') {
validConfig.error = errors.MalformedXML.customizeDescription( const msg = 'request xml is undefined or empty';
'request xml is undefined or empty'); const error = errors.MalformedXML.customizeDescription(msg);
return validConfig; return { error };
} }
const objectLockConfig = this._parsedXml.ObjectLockConfiguration; const objectLockConfig = this._parsedXml.ObjectLockConfiguration;
if (!objectLockConfig || objectLockConfig === '') { if (!objectLockConfig || objectLockConfig === '') {
validConfig.error = errors.MalformedXML.customizeDescription( const msg = 'request xml does not include ObjectLockConfiguration';
'request xml does not include ObjectLockConfiguration'); const error = errors.MalformedXML.customizeDescription(msg);
return validConfig; return { error };
} }
const objectLockEnabled = objectLockConfig.ObjectLockEnabled; const objectLockEnabled = objectLockConfig.ObjectLockEnabled;
if (!objectLockEnabled || objectLockEnabled[0] !== 'Enabled') { if (!objectLockEnabled || objectLockEnabled[0] !== 'Enabled') {
validConfig.error = errors.MalformedXML.customizeDescription( const msg = 'request xml does not include valid ObjectLockEnabled';
'request xml does not include valid ObjectLockEnabled'); const error = errors.MalformedXML.customizeDescription(msg);
return validConfig; return { error };
} }
const ruleArray = objectLockConfig.Rule; const ruleArray = objectLockConfig.Rule;
if (ruleArray) { if (ruleArray) {
if (ruleArray.length > 1) { if (ruleArray.length > 1) {
validConfig.error = errors.MalformedXML.customizeDescription( const msg = 'request xml contains more than one rule';
'request xml contains more than one rule'); const error = errors.MalformedXML.customizeDescription(msg);
return validConfig; return { error };
} }
const drArray = ruleArray[0].DefaultRetention; const drArray = ruleArray[0].DefaultRetention;
if (!drArray || !drArray[0] || drArray[0] === '') { if (!drArray || !drArray[0] || drArray[0] === '') {
validConfig.error = errors.MalformedXML.customizeDescription( const msg = 'Rule request xml does not contain DefaultRetention';
'Rule request xml does not contain DefaultRetention'); const error = errors.MalformedXML.customizeDescription(msg);
return validConfig; return { error };
} }
if (!drArray[0].Mode || (!drArray[0].Days && !drArray[0].Years)) { if (!drArray[0].Mode || (!drArray[0].Days && !drArray[0].Years)) {
validConfig.error = errors.MalformedXML.customizeDescription( const msg =
'DefaultRetention request xml does not contain Mode or ' + 'DefaultRetention request xml does not contain Mode or ' +
'retention period (Days or Years)'); 'retention period (Days or Years)';
return validConfig; const error = errors.MalformedXML.customizeDescription(msg);
return { error };
} }
const validMode = this._parseMode(drArray[0].Mode); const validMode = this._parseMode(drArray[0].Mode);
if (validMode.error) { if ('error' in validMode) {
validConfig.error = validMode.error; return validMode;
return validConfig;
} }
const validTime = this._parseTime(drArray[0]); const validTime = this._parseTime(drArray[0]);
if (validTime.error) { if ('error' in validTime) {
validConfig.error = validTime.error; return validTime;
return validConfig;
} }
this._config.rule = {}; this._config.rule = {};
this._config.rule.mode = validMode.mode; this._config.rule.mode = validMode.mode;
this._config.rule[validTime.timeType] = validTime.timeValue; this._config.rule[validTime.timeType!] = validTime.timeValue;
} }
return validConfig; return validConfig;
} }
@ -184,10 +190,9 @@ class ObjectLockConfiguration {
/** /**
* Validate the bucket metadata object lock configuration structure and * Validate the bucket metadata object lock configuration structure and
* value types * value types
* @param {object} config - The object lock configuration to validate * @param config - The object lock configuration to validate
* @return {undefined}
*/ */
static validateConfig(config) { static validateConfig(config: any) {
assert.strictEqual(typeof config, 'object'); assert.strictEqual(typeof config, 'object');
const rule = config.rule; const rule = config.rule;
if (rule) { if (rule) {
@ -203,10 +208,10 @@ class ObjectLockConfiguration {
/** /**
* Get the XML representation of the configuration object * Get the XML representation of the configuration object
* @param {object} config - The bucket object lock configuration * @param config - The bucket object lock configuration
* @return {string} - The XML representation of the configuration * @return - The XML representation of the configuration
*/ */
static getConfigXML(config) { static getConfigXML(config: any) {
// object lock is enabled on the bucket but object lock configuration // object lock is enabled on the bucket but object lock configuration
// not set // not set
if (config.rule === undefined) { if (config.rule === undefined) {
@ -234,5 +239,3 @@ class ObjectLockConfiguration {
'</ObjectLockConfiguration>'; '</ObjectLockConfiguration>';
} }
} }
module.exports = ObjectLockConfiguration;
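
A hedged usage sketch of the parser above. The input shape is an assumption inferred from the property accesses in the code (xml2js-style output, where every element is an array), and the import path is illustrative:

import ObjectLockConfiguration from './ObjectLockConfiguration';

// Assumed xml2js-style parse of a PutObjectLockConfiguration body.
const parsedXml = {
    ObjectLockConfiguration: {
        ObjectLockEnabled: ['Enabled'],
        Rule: [{ DefaultRetention: [{ Mode: ['GOVERNANCE'], Days: ['30'] }] }],
    },
};

const result = new ObjectLockConfiguration(parsedXml)
    .getValidatedObjectLockConfiguration();
if (result.error) {
    // e.g. MalformedXML when both Days and Years are present,
    // or InvalidArgument when the period is not a positive integer
} else {
    // the validated rule is { mode: 'GOVERNANCE', days: 30 }
}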

File diff suppressed because it is too large

@ -0,0 +1,94 @@
/*
* Code based on Yutaka Oishi (Fujifilm) contributions
* Date: 11 Sep 2020
*/
/**
* class representing the x-amz-restore of object metadata.
*
* @class
*/
export default class ObjectMDAmzRestore {
'expiry-date': Date | string;
'ongoing-request': boolean;
/**
*
* @constructor
* @param ongoingRequest ongoing-request
* @param [expiryDate] expiry-date
* @throws in case of invalid parameter
*/
constructor(ongoingRequest: boolean, expiryDate?: Date | string) {
this.setOngoingRequest(ongoingRequest);
this.setExpiryDate(expiryDate);
}
/**
*
* @param data archiveInfo
* @returns true if the provided object is valid
*/
static isValid(data: { 'ongoing-request': boolean; 'expiry-date': Date | string }) {
try {
// eslint-disable-next-line no-new
new ObjectMDAmzRestore(data['ongoing-request'], data['expiry-date']);
return true;
} catch (err) {
return false;
}
}
/**
*
* @returns ongoing-request
*/
getOngoingRequest() {
return this['ongoing-request'];
}
/**
*
* @param value ongoing-request
* @throws in case of invalid parameter
*/
setOngoingRequest(value?: boolean) {
if (value === undefined) {
throw new Error('ongoing-request is required.');
} else if (typeof value !== 'boolean') {
throw new Error('ongoing-request must be type of boolean.');
}
this['ongoing-request'] = value;
}
/**
*
* @returns expiry-date
*/
getExpiryDate() {
return this['expiry-date'];
}
/**
*
* @param value expiry-date
* @throws in case of invalid parameter
*/
setExpiryDate(value?: Date | string) {
if (value) {
const checkWith = (new Date(value)).getTime();
if (Number.isNaN(Number(checkWith))) {
throw new Error('expiry-date must be a valid Date.');
}
this['expiry-date'] = value;
}
}
/**
*
* @returns itself
*/
getValue() {
return this;
}
}
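
A brief usage sketch of the class above (all values illustrative):

// x-amz-restore metadata for an object with a restore in flight.
const restore = new ObjectMDAmzRestore(true, '2024-08-10T00:00:00.000Z');
restore.getOngoingRequest(); // true
restore.getExpiryDate();     // '2024-08-10T00:00:00.000Z'

// isValid() round-trips raw metadata through the constructor and
// reports false instead of throwing:
ObjectMDAmzRestore.isValid({
    'ongoing-request': false,
    'expiry-date': 'not-a-date',
}); // false, since the date fails validation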

@ -0,0 +1,184 @@
/**
* class representing the archive of object metadata.
*
* @class
*/
export default class ObjectMDArchive {
archiveInfo: any;
// @ts-ignore
restoreRequestedAt: Date | string;
// @ts-ignore
restoreRequestedDays: number;
// @ts-ignore
restoreCompletedAt: Date | string;
// @ts-ignore
restoreWillExpireAt: Date | string;
/**
*
* @constructor
* @param archiveInfo contains the archive info set by the TLP and returned by the TLP jobs
* @param [restoreRequestedAt] set at the time restore request is made by the client
* @param [restoreRequestedDays] set at the time restore request is made by the client
* @param [restoreCompletedAt] set at the time of successful restore
* @param [restoreWillExpireAt] computed and stored at the time of restore
* @throws in case of invalid parameter
*/
constructor(
archiveInfo: any,
restoreRequestedAt?: Date | string,
restoreRequestedDays?: number,
restoreCompletedAt?: Date | string,
restoreWillExpireAt?: Date | string,
) {
this.setArchiveInfo(archiveInfo);
this.setRestoreRequestedAt(restoreRequestedAt!);
this.setRestoreRequestedDays(restoreRequestedDays!);
this.setRestoreCompletedAt(restoreCompletedAt!);
this.setRestoreWillExpireAt(restoreWillExpireAt!);
}
/**
*
* @param data archiveInfo
* @returns true if the provided object is valid
*/
static isValid(data: {
archiveInfo: any;
restoreRequestedAt?: Date;
restoreRequestedDays?: number;
restoreCompletedAt?: Date;
restoreWillExpireAt?: Date;
}) {
try {
// eslint-disable-next-line no-new
new ObjectMDArchive(
data.archiveInfo,
data.restoreRequestedAt,
data.restoreRequestedDays,
data.restoreCompletedAt,
data.restoreWillExpireAt,
);
return true;
} catch (err) {
return false;
}
}
/**
*
* @returns archiveInfo
*/
getArchiveInfo() {
return this.archiveInfo;
}
/**
* @param value archiveInfo
* @throws in case of invalid parameter
*/
setArchiveInfo(value: any) {
if (!value) {
throw new Error('archiveInfo is required.');
} else if (typeof value !== 'object') {
throw new Error('archiveInfo must be type of object.');
}
this.archiveInfo = value;
}
/**
*
* @returns restoreRequestedAt
*/
getRestoreRequestedAt() {
return this.restoreRequestedAt;
}
/**
* @param value restoreRequestedAt
* @throws in case of invalid parameter
*/
setRestoreRequestedAt(value: Date | string) {
if (value) {
const checkWith = (new Date(value)).getTime();
if (Number.isNaN(Number(checkWith))) {
throw new Error('restoreRequestedAt must be a valid Date.');
}
this.restoreRequestedAt = value;
}
}
/**
*
* @returns restoreRequestedDays
*/
getRestoreRequestedDays() {
return this.restoreRequestedDays;
}
/**
* @param value restoreRequestedDays
* @throws in case of invalid parameter
*/
setRestoreRequestedDays(value: number) {
if (value) {
if (isNaN(value)) {
throw new Error('restoreRequestedDays must be type of Number.');
}
this.restoreRequestedDays = value;
}
}
/**
*
* @returns restoreCompletedAt
*/
getRestoreCompletedAt() {
return this.restoreCompletedAt;
}
/**
* @param value restoreCompletedAt
* @throws in case of invalid parameter
*/
setRestoreCompletedAt(value: Date | string) {
if (value) {
if (!this.restoreRequestedAt || !this.restoreRequestedDays) {
throw new Error('restoreCompletedAt must be set after restoreRequestedAt and restoreRequestedDays.');
}
const checkWith = (new Date(value)).getTime();
if (Number.isNaN(Number(checkWith))) {
throw new Error('restoreCompletedAt must be a valid Date.');
}
this.restoreCompletedAt = value;
}
}
/**
*
* @returns restoreWillExpireAt
*/
getRestoreWillExpireAt() {
return this.restoreWillExpireAt;
}
/**
* @param value restoreWillExpireAt
* @throws in case of invalid parameter
*/
setRestoreWillExpireAt(value: Date | string) {
if (value) {
if (!this.restoreRequestedAt || !this.restoreRequestedDays) {
throw new Error('restoreWillExpireAt must be set after restoreRequestedAt and restoreRequestedDays.');
}
const checkWith = (new Date(value)).getTime();
if (Number.isNaN(Number(checkWith))) {
throw new Error('restoreWillExpireAt must be a valid Date.');
}
this.restoreWillExpireAt = value;
}
}
/**
*
* @returns itself
*/
getValue() {
return this;
}
}
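
A usage sketch showing the ordering the setters enforce: completion and expiry may only be set once the request fields exist. The archiveInfo shape is an assumption; the TLP defines the real one:

const archive = new ObjectMDArchive(
    { archiveId: '97a71dfe-49c1-4cca-840a-69199e0951fd' }, // assumed shape
    '2024-08-01T00:00:00.000Z', // restoreRequestedAt
    5,                          // restoreRequestedDays
);

// Valid only because restoreRequestedAt/restoreRequestedDays are set;
// on a bare archive these setters would throw:
archive.setRestoreCompletedAt('2024-08-02T00:00:00.000Z');
archive.setRestoreWillExpireAt('2024-08-07T00:00:00.000Z');

ObjectMDArchive.isValid(archive.getValue()); // true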

@ -2,33 +2,61 @@
* Helper class to ease access to the Azure specific information for * Helper class to ease access to the Azure specific information for
* Blob and Container objects. * Blob and Container objects.
*/ */
class ObjectMDAzureInfo { export default class ObjectMDAzureInfo {
_data: {
containerPublicAccess: string;
containerStoredAccessPolicies: any[];
containerImmutabilityPolicy: any;
containerLegalHoldStatus: boolean;
containerDeletionInProgress: boolean;
blobType: string;
blobContentMD5: string;
blobIssuedETag: string;
blobCopyInfo: any;
blobSequenceNumber: number;
blobAccessTierChangeTime: Date;
blobUncommitted: boolean;
};
/** /**
* @constructor * @constructor
* @param {object} obj - Raw structure for the Azure info on Blob/Container * @param obj - Raw structure for the Azure info on Blob/Container
* @param {string} obj.containerPublicAccess - Public access authorization * @param obj.containerPublicAccess - Public access authorization
* type * type
* @param {object[]} obj.containerStoredAccessPolicies - Access policies * @param obj.containerStoredAccessPolicies - Access policies
* for Shared Access Signature bearer * for Shared Access Signature bearer
* @param {object} obj.containerImmutabilityPolicy - data immutability * @param obj.containerImmutabilityPolicy - data immutability
* policy for this container * policy for this container
* @param {boolean} obj.containerLegalHoldStatus - legal hold status for * @param obj.containerLegalHoldStatus - legal hold status for
* this container * this container
* @param {boolean} obj.containerDeletionInProgress - deletion in progress * @param obj.containerDeletionInProgress - deletion in progress
* indicator for this container * indicator for this container
* @param {string} obj.blobType - defines the type of blob for this object * @param obj.blobType - defines the type of blob for this object
* @param {string} obj.blobContentMD5 - whole object MD5 sum set by the * @param obj.blobContentMD5 - whole object MD5 sum set by the
* client through the Azure API * client through the Azure API
* @param {string} obj.blobIssuedETag - backup of the issued ETag on MD only * @param obj.blobIssuedETag - backup of the issued ETag on MD only
* operations like Set Blob Properties and Set Blob Metadata * operations like Set Blob Properties and Set Blob Metadata
* @param {object} obj.blobCopyInfo - information pertaining to past and * @param obj.blobCopyInfo - information pertaining to past and
* pending copy operation targeting this object * pending copy operation targeting this object
* @param {number} obj.blobSequenceNumber - sequence number for a PageBlob * @param obj.blobSequenceNumber - sequence number for a PageBlob
* @param {Date} obj.blobAccessTierChangeTime - date of change of tier * @param obj.blobAccessTierChangeTime - date of change of tier
* @param {boolean} obj.blobUncommitted - A block has been put for a * @param obj.blobUncommitted - A block has been put for a
* nonexistent blob which is about to be created * nonexistent blob which is about to be created
*/ */
constructor(obj) { constructor(obj: {
containerPublicAccess: string;
containerStoredAccessPolicies: any[];
containerImmutabilityPolicy: any;
containerLegalHoldStatus: boolean;
containerDeletionInProgress: boolean;
blobType: string;
blobContentMD5: string;
blobIssuedETag: string;
blobCopyInfo: any;
blobSequenceNumber: number;
blobAccessTierChangeTime: Date;
blobUncommitted: boolean;
}) {
this._data = { this._data = {
containerPublicAccess: obj.containerPublicAccess, containerPublicAccess: obj.containerPublicAccess,
containerStoredAccessPolicies: obj.containerStoredAccessPolicies, containerStoredAccessPolicies: obj.containerStoredAccessPolicies,
@ -49,7 +77,7 @@ class ObjectMDAzureInfo {
return this._data.containerPublicAccess; return this._data.containerPublicAccess;
} }
setContainerPublicAccess(containerPublicAccess) { setContainerPublicAccess(containerPublicAccess: string) {
this._data.containerPublicAccess = containerPublicAccess; this._data.containerPublicAccess = containerPublicAccess;
return this; return this;
} }
@ -58,7 +86,7 @@ class ObjectMDAzureInfo {
return this._data.containerStoredAccessPolicies; return this._data.containerStoredAccessPolicies;
} }
setContainerStoredAccessPolicies(containerStoredAccessPolicies) { setContainerStoredAccessPolicies(containerStoredAccessPolicies: any[]) {
this._data.containerStoredAccessPolicies = this._data.containerStoredAccessPolicies =
containerStoredAccessPolicies; containerStoredAccessPolicies;
return this; return this;
@ -68,7 +96,7 @@ class ObjectMDAzureInfo {
return this._data.containerImmutabilityPolicy; return this._data.containerImmutabilityPolicy;
} }
setContainerImmutabilityPolicy(containerImmutabilityPolicy) { setContainerImmutabilityPolicy(containerImmutabilityPolicy: any) {
this._data.containerImmutabilityPolicy = containerImmutabilityPolicy; this._data.containerImmutabilityPolicy = containerImmutabilityPolicy;
return this; return this;
} }
@ -77,7 +105,7 @@ class ObjectMDAzureInfo {
return this._data.containerLegalHoldStatus; return this._data.containerLegalHoldStatus;
} }
setContainerLegalHoldStatus(containerLegalHoldStatus) { setContainerLegalHoldStatus(containerLegalHoldStatus: boolean) {
this._data.containerLegalHoldStatus = containerLegalHoldStatus; this._data.containerLegalHoldStatus = containerLegalHoldStatus;
return this; return this;
} }
@ -86,7 +114,7 @@ class ObjectMDAzureInfo {
return this._data.containerDeletionInProgress; return this._data.containerDeletionInProgress;
} }
setContainerDeletionInProgress(containerDeletionInProgress) { setContainerDeletionInProgress(containerDeletionInProgress: boolean) {
this._data.containerDeletionInProgress = containerDeletionInProgress; this._data.containerDeletionInProgress = containerDeletionInProgress;
return this; return this;
} }
@ -95,7 +123,7 @@ class ObjectMDAzureInfo {
return this._data.blobType; return this._data.blobType;
} }
setBlobType(blobType) { setBlobType(blobType: string) {
this._data.blobType = blobType; this._data.blobType = blobType;
return this; return this;
} }
@ -104,7 +132,7 @@ class ObjectMDAzureInfo {
return this._data.blobContentMD5; return this._data.blobContentMD5;
} }
setBlobContentMD5(blobContentMD5) { setBlobContentMD5(blobContentMD5: string) {
this._data.blobContentMD5 = blobContentMD5; this._data.blobContentMD5 = blobContentMD5;
return this; return this;
} }
@ -113,7 +141,7 @@ class ObjectMDAzureInfo {
return this._data.blobIssuedETag; return this._data.blobIssuedETag;
} }
setBlobIssuedETag(blobIssuedETag) { setBlobIssuedETag(blobIssuedETag: string) {
this._data.blobIssuedETag = blobIssuedETag; this._data.blobIssuedETag = blobIssuedETag;
return this; return this;
} }
@ -122,7 +150,7 @@ class ObjectMDAzureInfo {
return this._data.blobCopyInfo; return this._data.blobCopyInfo;
} }
setBlobCopyInfo(blobCopyInfo) { setBlobCopyInfo(blobCopyInfo: any) {
this._data.blobCopyInfo = blobCopyInfo; this._data.blobCopyInfo = blobCopyInfo;
return this; return this;
} }
@ -131,7 +159,7 @@ class ObjectMDAzureInfo {
return this._data.blobSequenceNumber; return this._data.blobSequenceNumber;
} }
setBlobSequenceNumber(blobSequenceNumber) { setBlobSequenceNumber(blobSequenceNumber: number) {
this._data.blobSequenceNumber = blobSequenceNumber; this._data.blobSequenceNumber = blobSequenceNumber;
return this; return this;
} }
@ -140,7 +168,7 @@ class ObjectMDAzureInfo {
return this._data.blobAccessTierChangeTime; return this._data.blobAccessTierChangeTime;
} }
setBlobAccessTierChangeTime(blobAccessTierChangeTime) { setBlobAccessTierChangeTime(blobAccessTierChangeTime: Date) {
this._data.blobAccessTierChangeTime = blobAccessTierChangeTime; this._data.blobAccessTierChangeTime = blobAccessTierChangeTime;
return this; return this;
} }
@ -149,7 +177,7 @@ class ObjectMDAzureInfo {
return this._data.blobUncommitted; return this._data.blobUncommitted;
} }
setBlobUncommitted(blobUncommitted) { setBlobUncommitted(blobUncommitted: boolean) {
this._data.blobUncommitted = blobUncommitted; this._data.blobUncommitted = blobUncommitted;
return this; return this;
} }
@ -158,5 +186,3 @@ class ObjectMDAzureInfo {
return this._data; return this._data;
} }
} }
module.exports = ObjectMDAzureInfo;
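
Since the constructor now types the full field set, a construction sketch with illustrative values:

const azureInfo = new ObjectMDAzureInfo({
    containerPublicAccess: 'container',
    containerStoredAccessPolicies: [],
    containerImmutabilityPolicy: {},
    containerLegalHoldStatus: false,
    containerDeletionInProgress: false,
    blobType: 'BlockBlob',
    blobContentMD5: '1B2M2Y8AsgTpgAmY7PhCfg==',
    blobIssuedETag: '"0x8D9555A3D2C1F00"',
    blobCopyInfo: {},
    blobSequenceNumber: 0,
    blobAccessTierChangeTime: new Date(),
    blobUncommitted: false,
});
azureInfo.getBlobType();                       // 'BlockBlob'
azureInfo.setBlobSequenceNumber(1).getValue(); // setters chain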

@ -1,28 +1,49 @@
export type Ciphered = { cryptoScheme: number; cipheredDataKey: string };
export type BaseLocation = { key: string; dataStoreName: string };
export type Location = BaseLocation & {
start: number;
size: number;
dataStoreETag: string;
dataStoreVersionId: string;
blockId?: string;
};
export type ObjectMDLocationData = {
key: string;
start: number;
size: number;
dataStoreName: string;
dataStoreETag: string;
dataStoreVersionId: string;
blockId?: string;
cryptoScheme?: number;
cipheredDataKey?: string;
};
/** /**
* Helper class to ease access to a single data location in metadata * Helper class to ease access to a single data location in metadata
* 'location' array * 'location' array
*/ */
class ObjectMDLocation { export default class ObjectMDLocation {
_data: ObjectMDLocationData;
/** /**
* @constructor * @constructor
* @param {object} locationObj - single data location info * @param locationObj - single data location info
* @param {string} locationObj.key - data backend key * @param locationObj.key - data backend key
* @param {number} locationObj.start - index of first data byte of * @param locationObj.start - index of first data byte of
* this part in the full object * this part in the full object
* @param {number} locationObj.size - byte length of data part * @param locationObj.size - byte length of data part
* @param {string} locationObj.dataStoreName - type of data store * @param locationObj.dataStoreName - type of data store
* @param {string} locationObj.dataStoreETag - internal ETag of * @param locationObj.dataStoreETag - internal ETag of
* data part * data part
* @param {string} [locationObj.dataStoreVersionId] - versionId, * @param [locationObj.dataStoreVersionId] - versionId,
* needed for cloud backends * needed for cloud backends
* @param {number} [location.cryptoScheme] - if location data is * @param [location.cryptoScheme] - if location data is
* encrypted: the encryption scheme version * encrypted: the encryption scheme version
* @param {string} [location.cipheredDataKey] - if location data * @param [location.cipheredDataKey] - if location data
* is encrypted: the base64-encoded ciphered data key * is encrypted: the base64-encoded ciphered data key
* @param {string} [locationObj.blockId] - blockId of the part, * @param [locationObj.blockId] - blockId of the part,
* set by the Azure Blob Service REST API frontend * set by the Azure Blob Service REST API frontend
*/ */
constructor(locationObj) { constructor(locationObj: Location | (Location & Ciphered)) {
this._data = { this._data = {
key: locationObj.key, key: locationObj.key,
start: locationObj.start, start: locationObj.start,
@ -32,7 +53,7 @@ class ObjectMDLocation {
dataStoreVersionId: locationObj.dataStoreVersionId, dataStoreVersionId: locationObj.dataStoreVersionId,
blockId: locationObj.blockId, blockId: locationObj.blockId,
}; };
if (locationObj.cryptoScheme) { if ('cryptoScheme' in locationObj) {
this._data.cryptoScheme = locationObj.cryptoScheme; this._data.cryptoScheme = locationObj.cryptoScheme;
this._data.cipheredDataKey = locationObj.cipheredDataKey; this._data.cipheredDataKey = locationObj.cipheredDataKey;
} }
@ -49,17 +70,17 @@ class ObjectMDLocation {
/** /**
* Update data location with new info * Update data location with new info
* *
* @param {object} location - single data location info * @param location - single data location info
* @param {string} location.key - data backend key * @param location.key - data backend key
* @param {string} location.dataStoreName - type of data store * @param location.dataStoreName - type of data store
* @param {string} [location.dataStoreVersionId] - data backend version ID * @param [location.dataStoreVersionId] - data backend version ID
* @param {number} [location.cryptoScheme] - if location data is * @param [location.cryptoScheme] - if location data is
* encrypted: the encryption scheme version * encrypted: the encryption scheme version
* @param {string} [location.cipheredDataKey] - if location data * @param [location.cipheredDataKey] - if location data
* is encrypted: the base64-encoded ciphered data key * is encrypted: the base64-encoded ciphered data key
* @return {ObjectMDLocation} return this * @return return this
*/ */
setDataLocation(location) { setDataLocation(location: BaseLocation | (BaseLocation & Ciphered)) {
[ [
'key', 'key',
'dataStoreName', 'dataStoreName',
@ -96,7 +117,7 @@ class ObjectMDLocation {
return this._data.start; return this._data.start;
} }
setPartStart(start) { setPartStart(start: number) {
this._data.start = start; this._data.start = start;
return this; return this;
} }
@ -105,7 +126,7 @@ class ObjectMDLocation {
return this._data.size; return this._data.size;
} }
setPartSize(size) { setPartSize(size: number) {
this._data.size = size; this._data.size = size;
return this; return this;
} }
@ -122,7 +143,7 @@ class ObjectMDLocation {
return this._data.blockId; return this._data.blockId;
} }
setBlockId(blockId) { setBlockId(blockId: string) {
this._data.blockId = blockId; this._data.blockId = blockId;
return this; return this;
} }
@ -131,5 +152,3 @@ class ObjectMDLocation {
return this._data; return this._data;
} }
} }
module.exports = ObjectMDLocation;
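
A sketch of the two constructor variants; providing the Ciphered fields takes the 'cryptoScheme' in locationObj branch above (keys and ETags are illustrative):

const part = new ObjectMDLocation({
    key: '6a51c5b8e1f9',             // data backend key (illustrative)
    start: 0,
    size: 1048576,
    dataStoreName: 'us-east-1',
    dataStoreETag: '1:5f3ff0a1b2c3', // illustrative
    dataStoreVersionId: 'v1',
    cryptoScheme: 1,                 // Ciphered variant
    cipheredDataKey: 'bXlLZXk=',
});

// Rewrite the location after the data moved to another backend:
part.setDataLocation({ key: '9e2b4d0c', dataStoreName: 'azure-backend' });
part.getValue(); // plain object with the updated location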

@ -1,17 +1,16 @@
const assert = require('assert'); import assert from 'assert';
const UUID = require('uuid'); import UUID from 'uuid';
const escapeForXml = require('../s3middleware/escapeForXml'); import { RequestLogger } from 'werelogs';
const errors = require('../errors').default;
const { isValidBucketName } = require('../s3routes/routesUtils'); import escapeForXml from '../s3middleware/escapeForXml';
import errors from '../errors';
import { isValidBucketName } from '../s3routes/routesUtils';
import { Status } from './LifecycleRule';
const MAX_RULES = 1000; const MAX_RULES = 1000;
const RULE_ID_LIMIT = 255; const RULE_ID_LIMIT = 255;
const validStorageClasses = [ const validStorageClasses = ['STANDARD', 'STANDARD_IA', 'REDUCED_REDUNDANCY'];
'STANDARD',
'STANDARD_IA',
'REDUCED_REDUNDANCY',
];
/** /**
Example XML request: Example XML request:
@ -37,15 +36,45 @@ const validStorageClasses = [
</ReplicationConfiguration> </ReplicationConfiguration>
*/ */
class ReplicationConfiguration { export type Rule = {
prefix: string;
enabled: boolean;
id: string;
storageClass?: any;
};
export type Destination = { StorageClass: string[]; Bucket: string };
export type XMLRule = {
Prefix: string[];
Status: Status[];
ID?: string[];
Destination: Destination[];
Transition?: any[];
NoncurrentVersionTransition?: any[];
Filter?: string;
};
export default class ReplicationConfiguration {
_parsedXML: any;
_log: RequestLogger;
_config: any;
_configPrefixes: string[];
_configIDs: string[];
_role: string | null;
_destination: string | null;
_rules: Rule[] | null;
_prevStorageClass: null;
_hasScalityDestination: boolean | null;
_preferredReadLocation: string | null;
/** /**
* Create a ReplicationConfiguration instance * Create a ReplicationConfiguration instance
* @param {string} xml - The parsed XML * @param xml - The parsed XML
* @param {object} log - Werelogs logger * @param log - Werelogs logger
* @param {object} config - S3 server configuration * @param config - S3 server configuration
* @return {object} - ReplicationConfiguration instance * @return - ReplicationConfiguration instance
*/ */
constructor(xml, log, config) { constructor(xml: any, log: RequestLogger, config: any) {
this._parsedXML = xml; this._parsedXML = xml;
this._log = log; this._log = log;
this._config = config; this._config = config;
@ -64,7 +93,7 @@ class ReplicationConfiguration {
/** /**
* Get the role of the bucket replication configuration * Get the role of the bucket replication configuration
* @return {string|null} - The role if defined, otherwise `null` * @return - The role if defined, otherwise `null`
*/ */
getRole() { getRole() {
return this._role; return this._role;
@ -72,7 +101,7 @@ class ReplicationConfiguration {
/** /**
* The bucket to replicate data to * The bucket to replicate data to
* @return {string|null} - The bucket if defined, otherwise `null` * @return - The bucket if defined, otherwise `null`
*/ */
getDestination() { getDestination() {
return this._destination; return this._destination;
@ -80,7 +109,7 @@ class ReplicationConfiguration {
/** /**
* The rules for replication configuration * The rules for replication configuration
* @return {string|null} - The rules if defined, otherwise `null` * @return - The rules if defined, otherwise `null`
*/ */
getRules() { getRules() {
return this._rules; return this._rules;
@ -100,7 +129,7 @@ class ReplicationConfiguration {
/** /**
* Get the replication configuration * Get the replication configuration
* @return {object} - The replication configuration * @return - The replication configuration
*/ */
getReplicationConfiguration() { getReplicationConfiguration() {
return { return {
@ -113,18 +142,22 @@ class ReplicationConfiguration {
/** /**
* Build the rule object from the parsed XML of the given rule * Build the rule object from the parsed XML of the given rule
* @param {object} rule - The rule object from this._parsedXML * @param rule - The rule object from this._parsedXML
* @return {object} - The rule object to push into the `Rules` array * @return - The rule object to push into the `Rules` array
*/ */
_buildRuleObject(rule) { _buildRuleObject(rule: XMLRule) {
const obj = { const base = {
id: '',
prefix: rule.Prefix[0], prefix: rule.Prefix[0],
enabled: rule.Status[0] === 'Enabled', enabled: rule.Status[0] === 'Enabled',
}; };
const obj: Rule = { ...base };
// ID is an optional property, but create one if not provided or is ''. // ID is an optional property, but create one if not provided or is ''.
// We generate a 48-character alphanumeric, unique ID for the rule. // We generate a 48-character alphanumeric, unique ID for the rule.
obj.id = rule.ID && rule.ID[0] !== '' ? rule.ID[0] : obj.id =
Buffer.from(UUID.v4()).toString('base64'); rule.ID && rule.ID[0] !== ''
? rule.ID[0]
: Buffer.from(UUID.v4()).toString('base64');
// StorageClass is an optional property. // StorageClass is an optional property.
if (rule.Destination[0].StorageClass) { if (rule.Destination[0].StorageClass) {
obj.storageClass = rule.Destination[0].StorageClass[0]; obj.storageClass = rule.Destination[0].StorageClass[0];
@ -134,10 +167,10 @@ class ReplicationConfiguration {
/** /**
* Check if the Role field of the replication configuration is valid * Check if the Role field of the replication configuration is valid
* @param {string} ARN - The Role field value provided in the configuration * @param ARN - The Role field value provided in the configuration
* @return {boolean} `true` if a valid role ARN, `false` otherwise * @return `true` if a valid role ARN, `false` otherwise
*/ */
_isValidRoleARN(ARN) { _isValidRoleARN(ARN: string) {
// AWS accepts a range of values for the Role field. Though this does // AWS accepts a range of values for the Role field. Though this does
// not encompass all constraints imposed by AWS, we have opted to // not encompass all constraints imposed by AWS, we have opted to
// enforce the following. // enforce the following.
@ -154,30 +187,32 @@ class ReplicationConfiguration {
/** /**
* Check that the `Role` property of the configuration is valid * Check that the `Role` property of the configuration is valid
* @return {undefined}
*/ */
_parseRole() { _parseRole() {
const parsedRole = this._parsedXML.ReplicationConfiguration.Role; const parsedRole = this._parsedXML.ReplicationConfiguration.Role;
if (!parsedRole) { if (!parsedRole) {
return errors.MalformedXML; return errors.MalformedXML;
} }
const role = parsedRole[0]; const role: string = parsedRole[0];
const rolesArr = role.split(','); const rolesArr = role.split(',');
if (this._hasScalityDestination && rolesArr.length !== 2) { if (this._hasScalityDestination && rolesArr.length !== 2) {
return errors.InvalidArgument.customizeDescription( return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' + 'Invalid Role specified in replication configuration: ' +
'Role must be a comma-separated list of two IAM roles'); 'Role must be a comma-separated list of two IAM roles'
);
} }
if (!this._hasScalityDestination && rolesArr.length > 1) { if (!this._hasScalityDestination && rolesArr.length > 1) {
return errors.InvalidArgument.customizeDescription( return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' + 'Invalid Role specified in replication configuration: ' +
'Role may not contain a comma separator'); 'Role may not contain a comma separator'
);
} }
const invalidRole = rolesArr.find(r => !this._isValidRoleARN(r)); const invalidRole = rolesArr.find((r) => !this._isValidRoleARN(r));
if (invalidRole !== undefined) { if (invalidRole !== undefined) {
return errors.InvalidArgument.customizeDescription( return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' + 'Invalid Role specified in replication configuration: ' +
`'${invalidRole}'`); `'${invalidRole}'`
);
} }
this._role = role; this._role = role;
return undefined; return undefined;
@ -185,7 +220,6 @@ class ReplicationConfiguration {
/** /**
* Check that the `Rules` property array is valid * Check that the `Rules` property array is valid
* @return {undefined}
*/ */
_parseRules() { _parseRules() {
// Note that the XML uses 'Rule' while the config object uses 'Rules'. // Note that the XML uses 'Rule' while the config object uses 'Rules'.
@ -195,7 +229,8 @@ class ReplicationConfiguration {
} }
if (Rule.length > MAX_RULES) { if (Rule.length > MAX_RULES) {
return errors.InvalidRequest.customizeDescription( return errors.InvalidRequest.customizeDescription(
'Number of defined replication rules cannot exceed 1000'); 'Number of defined replication rules cannot exceed 1000'
);
} }
const err = this._parseEachRule(Rule); const err = this._parseEachRule(Rule);
if (err) { if (err) {
@ -206,15 +241,16 @@ class ReplicationConfiguration {
/** /**
* Check that each rule in the `Rules` property array is valid * Check that each rule in the `Rules` property array is valid
* @param {array} rules - The rule array from this._parsedXML * @param rules - The rule array from this._parsedXML
* @return {undefined}
*/ */
_parseEachRule(rules) { _parseEachRule(rules: XMLRule[]) {
const rulesArr = []; const rulesArr: Rule[] = [];
for (let i = 0; i < rules.length; i++) { for (let i = 0; i < rules.length; i++) {
const err = const err =
this._parseStatus(rules[i]) || this._parsePrefix(rules[i]) || this._parseStatus(rules[i]) ||
this._parseID(rules[i]) || this._parseDestination(rules[i]); this._parsePrefix(rules[i]) ||
this._parseID(rules[i]) ||
this._parseDestination(rules[i]);
if (err) { if (err) {
return err; return err;
} }
@ -226,10 +262,9 @@ class ReplicationConfiguration {
/** /**
* Check that the `Status` property is valid * Check that the `Status` property is valid
* @param {object} rule - The rule object from this._parsedXML * @param rule - The rule object from this._parsedXML
* @return {undefined}
*/ */
_parseStatus(rule) { _parseStatus(rule: XMLRule) {
const status = rule.Status && rule.Status[0]; const status = rule.Status && rule.Status[0];
if (!status || !['Enabled', 'Disabled'].includes(status)) { if (!status || !['Enabled', 'Disabled'].includes(status)) {
return errors.MalformedXML; return errors.MalformedXML;
@ -239,18 +274,19 @@ class ReplicationConfiguration {
/** /**
* Check that the `Prefix` property is valid * Check that the `Prefix` property is valid
* @param {object} rule - The rule object from this._parsedXML * @param rule - The rule object from this._parsedXML
* @return {undefined}
*/ */
_parsePrefix(rule) { _parsePrefix(rule: XMLRule) {
const prefix = rule.Prefix && rule.Prefix[0]; const prefix = rule.Prefix && rule.Prefix[0];
// An empty string prefix should be allowed. // An empty string prefix should be allowed.
if (!prefix && prefix !== '') { if (!prefix && prefix !== '') {
return errors.MalformedXML; return errors.MalformedXML;
} }
if (prefix.length > 1024) { if (prefix.length > 1024) {
return errors.InvalidArgument.customizeDescription('Rule prefix ' + return errors.InvalidArgument.customizeDescription(
'cannot be longer than maximum allowed key length of 1024'); 'Rule prefix ' +
'cannot be longer than maximum allowed key length of 1024'
);
} }
// Each Prefix in a list of rules must not overlap. For example, two // Each Prefix in a list of rules must not overlap. For example, two
// prefixes 'TaxDocs' and 'TaxDocs/2015' are overlapping. An empty // prefixes 'TaxDocs' and 'TaxDocs/2015' are overlapping. An empty
@ -258,8 +294,9 @@ class ReplicationConfiguration {
for (let i = 0; i < this._configPrefixes.length; i++) { for (let i = 0; i < this._configPrefixes.length; i++) {
const used = this._configPrefixes[i]; const used = this._configPrefixes[i];
if (prefix.startsWith(used) || used.startsWith(prefix)) { if (prefix.startsWith(used) || used.startsWith(prefix)) {
return errors.InvalidRequest.customizeDescription('Found ' + return errors.InvalidRequest.customizeDescription(
`overlapping prefixes '${used}' and '${prefix}'`); 'Found ' + `overlapping prefixes '${used}' and '${prefix}'`
);
} }
} }
this._configPrefixes.push(prefix); this._configPrefixes.push(prefix);
@ -268,19 +305,20 @@ class ReplicationConfiguration {
/** /**
* Check that the `ID` property is valid * Check that the `ID` property is valid
* @param {object} rule - The rule object from this._parsedXML * @param rule - The rule object from this._parsedXML
* @return {undefined}
*/ */
_parseID(rule) { _parseID(rule: XMLRule) {
const id = rule.ID && rule.ID[0]; const id = rule.ID && rule.ID[0];
if (id && id.length > RULE_ID_LIMIT) { if (id && id.length > RULE_ID_LIMIT) {
return errors.InvalidArgument return errors.InvalidArgument.customizeDescription(
.customizeDescription('Rule Id cannot be greater than 255'); 'Rule Id cannot be greater than 255'
);
} }
// Each ID in a list of rules must be unique. // Each ID in a list of rules must be unique.
if (this._configIDs.includes(id)) { if (id && this._configIDs.includes(id)) {
return errors.InvalidRequest.customizeDescription( return errors.InvalidRequest.customizeDescription(
'Rule Id must be unique'); 'Rule Id must be unique'
);
} }
if (id !== undefined) { if (id !== undefined) {
this._configIDs.push(id); this._configIDs.push(id);
@ -290,15 +328,14 @@ class ReplicationConfiguration {
/** /**
* Check that the `StorageClass` property is valid * Check that the `StorageClass` property is valid
* @param {object} destination - The destination object from this._parsedXML * @param destination - The destination object from this._parsedXML
* @return {undefined}
*/ */
_parseStorageClass(destination) { _parseStorageClass(destination: Destination) {
const { replicationEndpoints } = this._config; const { replicationEndpoints } = this._config;
// The only condition where the default endpoint is possibly undefined // The only condition where the default endpoint is possibly undefined
// is if there is only a single replication endpoint. // is if there is only a single replication endpoint.
const defaultEndpoint = const defaultEndpoint =
replicationEndpoints.find(endpoint => endpoint.default) || replicationEndpoints.find((endpoint: any) => endpoint.default) ||
replicationEndpoints[0]; replicationEndpoints[0];
// StorageClass is optional. // StorageClass is optional.
if (destination.StorageClass === undefined) { if (destination.StorageClass === undefined) {
@ -320,9 +357,15 @@ class ReplicationConfiguration {
defaultEndpoint.type === undefined; defaultEndpoint.type === undefined;
return true; return true;
} }
const endpoint = replicationEndpoints.find(endpoint => const endpoint = replicationEndpoints.find(
endpoint.site === storageClass); (endpoint: any) => endpoint.site === storageClass
);
if (endpoint) { if (endpoint) {
// We do not support replication to cold location.
// Only transition to cold location is supported.
if (endpoint.site && this._config.locationConstraints[endpoint.site]?.isCold) {
return false;
}
// If this._hasScalityDestination was not set to true in any // If this._hasScalityDestination was not set to true in any
// previous iteration or by a prior rule's storage class, then // previous iteration or by a prior rule's storage class, then
// check if the current endpoint is a Scality destination. // check if the current endpoint is a Scality destination.
@ -343,10 +386,9 @@ class ReplicationConfiguration {
/** /**
* Check that the `Bucket` property is valid * Check that the `Bucket` property is valid
* @param {object} destination - The destination object from this._parsedXML * @param destination - The destination object from this._parsedXML
* @return {undefined}
*/ */
_parseBucket(destination) { _parseBucket(destination: Destination) {
const parsedBucketARN = destination.Bucket; const parsedBucketARN = destination.Bucket;
// If there is no Scality destination, we get the destination bucket // If there is no Scality destination, we get the destination bucket
// from the location configuration. // from the location configuration.
@ -359,7 +401,8 @@ class ReplicationConfiguration {
const bucketARN = parsedBucketARN[0]; const bucketARN = parsedBucketARN[0];
if (!bucketARN) { if (!bucketARN) {
return errors.InvalidArgument.customizeDescription( return errors.InvalidArgument.customizeDescription(
'Destination bucket cannot be null or empty'); 'Destination bucket cannot be null or empty'
);
} }
const arr = bucketARN.split(':'); const arr = bucketARN.split(':');
const isValidARN = const isValidARN =
@ -369,17 +412,20 @@ class ReplicationConfiguration {
arr[3] === '' && arr[3] === '' &&
arr[4] === ''; arr[4] === '';
if (!isValidARN) { if (!isValidARN) {
return errors.InvalidArgument return errors.InvalidArgument.customizeDescription(
.customizeDescription('Invalid bucket ARN'); 'Invalid bucket ARN'
);
} }
if (!isValidBucketName(arr[5], [])) { if (!isValidBucketName(arr[5], [])) {
return errors.InvalidArgument return errors.InvalidArgument.customizeDescription(
.customizeDescription('The specified bucket is not valid'); 'The specified bucket is not valid'
);
} }
// We can replicate objects only to one destination bucket. // We can replicate objects only to one destination bucket.
if (this._destination && this._destination !== bucketARN) { if (this._destination && this._destination !== bucketARN) {
return errors.InvalidRequest.customizeDescription( return errors.InvalidRequest.customizeDescription(
'The destination bucket must be same for all rules'); 'The destination bucket must be same for all rules'
);
} }
this._destination = bucketARN; this._destination = bucketARN;
return undefined; return undefined;
@ -387,10 +433,9 @@ class ReplicationConfiguration {
/** /**
* Check that the `destination` property is valid * Check that the `destination` property is valid
* @param {object} rule - The rule object from this._parsedXML * @param rule - The rule object from this._parsedXML
-     * @return {undefined}
      */
-    _parseDestination(rule) {
+    _parseDestination(rule: XMLRule) {
        const dest = rule.Destination && rule.Destination[0];
        if (!dest) {
            return errors.MalformedXML;
@@ -404,7 +449,6 @@ class ReplicationConfiguration {
    /**
     * Check that the request configuration is valid
-     * @return {undefined}
     */
    parseConfiguration() {
        const err = this._parseRules();
@@ -416,48 +460,62 @@ class ReplicationConfiguration {
    /**
     * Get the XML representation of the configuration object
-     * @param {object} config - The bucket replication configuration
-     * @return {string} - The XML representation of the configuration
+     * @param config - The bucket replication configuration
+     * @return - The XML representation of the configuration
     */
-    static getConfigXML(config) {
+    static getConfigXML(config: {
+        role: string;
+        destination: string;
+        rules: Rule[];
+    }) {
        const { role, destination, rules } = config;
        const Role = `<Role>${escapeForXml(role)}</Role>`;
        const Bucket = `<Bucket>${escapeForXml(destination)}</Bucket>`;
-        const rulesXML = rules.map(rule => {
-            const { prefix, enabled, storageClass, id } = rule;
-            const Prefix = prefix === '' ? '<Prefix/>' :
-                `<Prefix>${escapeForXml(prefix)}</Prefix>`;
-            const Status =
-                `<Status>${enabled ? 'Enabled' : 'Disabled'}</Status>`;
-            const StorageClass = storageClass ?
-                `<StorageClass>${storageClass}</StorageClass>` : '';
-            const Destination =
-                `<Destination>${Bucket}${StorageClass}</Destination>`;
-            // If the ID property was omitted in the configuration object, we
-            // create an ID for the rule. Hence it is always defined.
-            const ID = `<ID>${escapeForXml(id)}</ID>`;
-            return `<Rule>${ID}${Prefix}${Status}${Destination}</Rule>`;
-        }).join('');
-        return '<?xml version="1.0" encoding="UTF-8"?>' +
+        const rulesXML = rules
+            .map((rule) => {
+                const { prefix, enabled, storageClass, id } = rule;
+                const Prefix =
+                    prefix === ''
+                        ? '<Prefix/>'
+                        : `<Prefix>${escapeForXml(prefix)}</Prefix>`;
+                const Status = `<Status>${
+                    enabled ? 'Enabled' : 'Disabled'
+                }</Status>`;
+                const StorageClass = storageClass
+                    ? `<StorageClass>${storageClass}</StorageClass>`
+                    : '';
+                const Destination = `<Destination>${Bucket}${StorageClass}</Destination>`;
+                // If the ID property was omitted in the configuration object, we
+                // create an ID for the rule. Hence it is always defined.
+                const ID = `<ID>${escapeForXml(id)}</ID>`;
+                return `<Rule>${ID}${Prefix}${Status}${Destination}</Rule>`;
+            })
+            .join('');
+        return (
+            '<?xml version="1.0" encoding="UTF-8"?>' +
            '<ReplicationConfiguration ' +
            'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
            `${rulesXML}${Role}` +
-            '</ReplicationConfiguration>';
+            '</ReplicationConfiguration>'
+        );
    }

    /**
     * Validate the bucket metadata replication configuration structure and
     * value types
-     * @param {object} config - The replication configuration to validate
-     * @return {undefined}
+     * @param config - The replication configuration to validate
     */
-    static validateConfig(config) {
+    static validateConfig(config: {
+        role: string;
+        destination: string;
+        rules: Rule[];
+    }) {
        assert.strictEqual(typeof config, 'object');
        const { role, rules, destination } = config;
        assert.strictEqual(typeof role, 'string');
        assert.strictEqual(typeof destination, 'string');
        assert.strictEqual(Array.isArray(rules), true);
-        rules.forEach(rule => {
+        rules.forEach((rule) => {
            assert.strictEqual(typeof rule, 'object');
            const { prefix, enabled, id, storageClass } = rule;
            assert.strictEqual(typeof prefix, 'string');
@@ -469,5 +527,3 @@ class ReplicationConfiguration {
        });
    }
}
-
-module.exports = ReplicationConfiguration;
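For orientation, a sketch of what getConfigXML emits for a one-rule configuration (the input values below are invented):

    // Hypothetical input, following the `config` parameter type above:
    const config = {
        role: 'arn:aws:iam::123456789012:role/replication',
        destination: 'arn:aws:s3:::destination-bucket',
        rules: [{ prefix: 'logs/', enabled: true, storageClass: 'STANDARD', id: 'rule-1' }],
    };
    // ReplicationConfiguration.getConfigXML(config) then yields (wrapped for readability):
    // <?xml version="1.0" encoding="UTF-8"?>
    // <ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    //   <Rule><ID>rule-1</ID><Prefix>logs/</Prefix><Status>Enabled</Status>
    //     <Destination><Bucket>arn:aws:s3:::destination-bucket</Bucket>
    //       <StorageClass>STANDARD</StorageClass></Destination></Rule>
    //   <Role>arn:aws:iam::123456789012:role/replication</Role>
    // </ReplicationConfiguration>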


@@ -1,195 +0,0 @@
class RoutingRule {
/**
* Represents a routing rule in a website configuration.
* @constructor
* @param {object} params - object containing redirect and condition objects
* @param {object} params.redirect - specifies how to redirect requests
* @param {string} [params.redirect.protocol] - protocol to use for redirect
* @param {string} [params.redirect.hostName] - hostname to use for redirect
* @param {string} [params.redirect.replaceKeyPrefixWith] - string to replace
* keyPrefixEquals specified in condition
* @param {string} [params.redirect.replaceKeyWith] - string to replace key
* @param {string} [params.redirect.httpRedirectCode] - http redirect code
* @param {object} [params.condition] - specifies conditions for a redirect
* @param {string} [params.condition.keyPrefixEquals] - key prefix that
* triggers a redirect
* @param {string} [params.condition.httpErrorCodeReturnedEquals] - http code
* that triggers a redirect
*/
constructor(params) {
if (params) {
this._redirect = params.redirect;
this._condition = params.condition;
}
}
/**
* Return copy of rule as plain object
* @return {object} rule;
*/
getRuleObject() {
const rule = {
redirect: this._redirect,
condition: this._condition,
};
return rule;
}
/**
* Return the condition object
* @return {object} condition;
*/
getCondition() {
return this._condition;
}
/**
* Return the redirect object
* @return {object} redirect;
*/
getRedirect() {
return this._redirect;
}
}
class WebsiteConfiguration {
/**
* Object that represents website configuration
* @constructor
* @param {object} params - object containing params to construct Object
* @param {string} params.indexDocument - key for index document object
* required when redirectAllRequestsTo is undefined
* @param {string} [params.errorDocument] - key for error document object
* @param {object} params.redirectAllRequestsTo - object containing info
* about how to redirect all requests
* @param {string} params.redirectAllRequestsTo.hostName - hostName to use
* when redirecting all requests
* @param {string} [params.redirectAllRequestsTo.protocol] - protocol to use
* when redirecting all requests ('http' or 'https')
* @param {(RoutingRule[]|object[])} params.routingRules - array of Routing
* Rule instances or plain routing rule objects to cast as RoutingRule's
*/
constructor(params) {
if (params) {
this._indexDocument = params.indexDocument;
this._errorDocument = params.errorDocument;
this._redirectAllRequestsTo = params.redirectAllRequestsTo;
this.setRoutingRules(params.routingRules);
}
}
/**
* Return plain object with configuration info
* @return {object} - Object copy of class instance
*/
getConfig() {
const websiteConfig = {
indexDocument: this._indexDocument,
errorDocument: this._errorDocument,
redirectAllRequestsTo: this._redirectAllRequestsTo,
};
if (this._routingRules) {
websiteConfig.routingRules =
this._routingRules.map(rule => rule.getRuleObject());
}
return websiteConfig;
}
/**
* Set the redirectAllRequestsTo
* @param {object} obj - object to set as redirectAllRequestsTo
* @param {string} obj.hostName - hostname for redirecting all requests
* @param {object} [obj.protocol] - protocol for redirecting all requests
* @return {undefined};
*/
setRedirectAllRequestsTo(obj) {
this._redirectAllRequestsTo = obj;
}
/**
* Return the redirectAllRequestsTo object
* @return {object} redirectAllRequestsTo;
*/
getRedirectAllRequestsTo() {
return this._redirectAllRequestsTo;
}
/**
* Set the index document object name
* @param {string} suffix - index document object key
* @return {undefined};
*/
setIndexDocument(suffix) {
this._indexDocument = suffix;
}
/**
* Get the index document object name
* @return {string} indexDocument
*/
getIndexDocument() {
return this._indexDocument;
}
/**
* Set the error document object name
* @param {string} key - error document object key
* @return {undefined};
*/
setErrorDocument(key) {
this._errorDocument = key;
}
/**
* Get the error document object name
* @return {string} errorDocument
*/
getErrorDocument() {
return this._errorDocument;
}
/**
* Set the whole RoutingRules array
* @param {array} array - array to set as instance's RoutingRules
* @return {undefined};
*/
setRoutingRules(array) {
if (array) {
this._routingRules = array.map(rule => {
if (rule instanceof RoutingRule) {
return rule;
}
return new RoutingRule(rule);
});
}
}
/**
* Add a RoutingRule instance to routingRules array
* @param {object} obj - rule to add to array
* @return {undefined};
*/
addRoutingRule(obj) {
if (!this._routingRules) {
this._routingRules = [];
}
if (obj && obj instanceof RoutingRule) {
this._routingRules.push(obj);
} else if (obj) {
this._routingRules.push(new RoutingRule(obj));
}
}
/**
* Get routing rules
* @return {RoutingRule[]} - array of RoutingRule instances
*/
getRoutingRules() {
return this._routingRules;
}
}
module.exports = {
RoutingRule,
WebsiteConfiguration,
};


@@ -0,0 +1,218 @@
/**
* @param protocol - protocol to use for redirect
* @param hostName - hostname to use for redirect
* @param replaceKeyPrefixWith - string to replace keyPrefixEquals specified in condition
* @param replaceKeyWith - string to replace key
* @param httpRedirectCode - http redirect code
*/
export type Redirect = {
protocol?: string;
hostName?: string;
replaceKeyPrefixWith?: string;
replaceKeyWith?: string;
httpRedirectCode: string;
};
/**
* @param keyPrefixEquals - key prefix that triggers a redirect
* @param httpErrorCodeReturnedEquals - http code that triggers a redirect
*/
export type Condition = {
keyPrefixEquals?: string;
httpErrorCodeReturnedEquals?: string;
};
export type RoutingRuleParams = { redirect: Redirect; condition?: Condition };
export class RoutingRule {
_redirect?: Redirect;
_condition?: Condition;
/**
* Represents a routing rule in a website configuration.
* @constructor
* @param params - object containing redirect and condition objects
* @param params.redirect - specifies how to redirect requests
* @param [params.condition] - specifies conditions for a redirect
*/
constructor(params?: RoutingRuleParams) {
if (params) {
this._redirect = params.redirect;
this._condition = params.condition;
}
}
/**
* Return copy of rule as plain object
* @return rule;
*/
getRuleObject() {
const rule = {
redirect: this._redirect,
condition: this._condition,
};
return rule;
}
/**
* Return the condition object
* @return condition;
*/
getCondition() {
return this._condition;
}
/**
* Return the redirect object
* @return redirect;
*/
getRedirect() {
return this._redirect;
}
}
export type RedirectAllRequestsTo = {
hostName: string;
protocol?: string;
};
export class WebsiteConfiguration {
_indexDocument?: string;
_errorDocument?: string;
_redirectAllRequestsTo?: RedirectAllRequestsTo;
_routingRules?: RoutingRule[];
/**
* Object that represents website configuration
* @constructor
* @param params - object containing params to construct Object
* @param params.indexDocument - key for index document object
* required when redirectAllRequestsTo is undefined
* @param [params.errorDocument] - key for error document object
* @param params.redirectAllRequestsTo - object containing info
* about how to redirect all requests
* @param params.redirectAllRequestsTo.hostName - hostName to use
* when redirecting all requests
* @param [params.redirectAllRequestsTo.protocol] - protocol to use
* when redirecting all requests ('http' or 'https')
* @param params.routingRules - array of Routing
* Rule instances or plain routing rule objects to cast as RoutingRule's
*/
constructor(params: {
indexDocument: string;
errorDocument: string;
redirectAllRequestsTo: RedirectAllRequestsTo;
routingRules: RoutingRule[] | any[],
}) {
if (params) {
this._indexDocument = params.indexDocument;
this._errorDocument = params.errorDocument;
this._redirectAllRequestsTo = params.redirectAllRequestsTo;
this.setRoutingRules(params.routingRules);
}
}
/**
* Return plain object with configuration info
* @return - Object copy of class instance
*/
getConfig() {
const base = {
indexDocument: this._indexDocument,
errorDocument: this._errorDocument,
redirectAllRequestsTo: this._redirectAllRequestsTo,
};
if (this._routingRules) {
const routingRules = this._routingRules.map(r => r.getRuleObject());
return { ...base, routingRules };
}
return { ...base };
}
/**
* Set the redirectAllRequestsTo
* @param obj - object to set as redirectAllRequestsTo
* @param obj.hostName - hostname for redirecting all requests
* @param [obj.protocol] - protocol for redirecting all requests
*/
setRedirectAllRequestsTo(obj: { hostName: string; protocol?: string }) {
this._redirectAllRequestsTo = obj;
}
/**
* Return the redirectAllRequestsTo object
* @return redirectAllRequestsTo;
*/
getRedirectAllRequestsTo() {
return this._redirectAllRequestsTo;
}
/**
* Set the index document object name
* @param suffix - index document object key
*/
setIndexDocument(suffix: string) {
this._indexDocument = suffix;
}
/**
* Get the index document object name
* @return indexDocument
*/
getIndexDocument() {
return this._indexDocument;
}
/**
* Set the error document object name
* @param key - error document object key
*/
setErrorDocument(key: string) {
this._errorDocument = key;
}
/**
* Get the error document object name
* @return errorDocument
*/
getErrorDocument() {
return this._errorDocument;
}
/**
* Set the whole RoutingRules array
* @param array - array to set as instance's RoutingRules
*/
setRoutingRules(array?: (RoutingRule | RoutingRuleParams)[]) {
if (array) {
this._routingRules = array.map(rule => {
if (rule instanceof RoutingRule) {
return rule;
}
return new RoutingRule(rule);
});
}
}
/**
* Add a RoutingRule instance to routingRules array
* @param obj - rule to add to array
*/
addRoutingRule(obj?: RoutingRule | RoutingRuleParams) {
if (!this._routingRules) {
this._routingRules = [];
}
if (obj && obj instanceof RoutingRule) {
this._routingRules.push(obj);
} else if (obj) {
this._routingRules.push(new RoutingRule(obj));
}
}
/**
* Get routing rules
* @return - array of RoutingRule instances
*/
getRoutingRules() {
return this._routingRules;
}
}
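A minimal usage sketch of the new typed model (values invented; the constructor's params type lists every field as required even though indexDocument and redirectAllRequestsTo are mutually exclusive in practice, hence the cast):

    import { WebsiteConfiguration, RoutingRule } from './WebsiteConfiguration';

    const wc = new WebsiteConfiguration({
        indexDocument: 'index.html',
        errorDocument: 'error.html',
        routingRules: [{
            redirect: { replaceKeyPrefixWith: 'docs/', httpRedirectCode: '301' },
            condition: { keyPrefixEquals: 'documentation/' },
        }],
    } as any);
    // Plain objects are cast to RoutingRule instances by setRoutingRules():
    wc.getRoutingRules()![0] instanceof RoutingRule; // true
    console.log(wc.getConfig());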

lib/models/index.ts Normal file

@@ -0,0 +1,16 @@
export { default as ARN } from './ARN';
export { default as BackendInfo } from './BackendInfo';
export { default as BucketAzureInfo } from './BucketAzureInfo';
export { default as BucketInfo } from './BucketInfo';
export { default as BucketPolicy } from './BucketPolicy';
export { default as LifecycleConfiguration } from './LifecycleConfiguration';
export { default as LifecycleRule } from './LifecycleRule';
export { default as NotificationConfiguration } from './NotificationConfiguration';
export { default as ObjectLockConfiguration } from './ObjectLockConfiguration';
export { default as ObjectMD } from './ObjectMD';
export { default as ObjectMDAmzRestore } from './ObjectMDAmzRestore';
export { default as ObjectMDArchive } from './ObjectMDArchive';
export { default as ObjectMDAzureInfo } from './ObjectMDAzureInfo';
export { default as ObjectMDLocation } from './ObjectMDLocation';
export { default as ReplicationConfiguration } from './ReplicationConfiguration';
export * as WebsiteConfiguration from './WebsiteConfiguration';


@@ -1,5 +1,6 @@
import * as http from 'http';
import * as https from 'https';
+import { https as HttpsAgent } from 'httpagent';
import * as tls from 'tls';
import * as net from 'net';
import assert from 'assert';
@@ -409,7 +410,11 @@ export default class Server {
                method: 'arsenal.network.Server.start',
                port: this._port,
            });
-            this._https.agent = new https.Agent(this._https);
+            this._https.agent = new HttpsAgent.Agent(this._https, {
+                // Do not enforce the maximum number of sockets for the
+                // main server, as it might be able to serve more clients.
+                maxSockets: false,
+            });
            this._server = https.createServer(this._https,
                (req, res) => this._onRequest(req, res));
        } else {
@@ -430,7 +435,6 @@ export default class Server {
        this._server.on('connection', sock => {
            // Setting no delay of the socket to the value configured
            // TODO fix this
-            // @ts-expect-errors
            sock.setNoDelay(this.isNoDelay());
            sock.on('error', err => this._logger.info(
                'socket error - request rejected', { error: err }));


@@ -77,10 +77,11 @@ export function getByteRangeFromSpec(
            objectSize - 1] };
    }
    if (rangeSpec.start < objectSize) {
-        // test is false if end is undefined
-        return { range: [rangeSpec.start,
-            ((rangeSpec.end && (rangeSpec.end < objectSize)) ?
-                rangeSpec.end : objectSize - 1)] };
+        // test is false if end is undefined or end is greater than objectSize
+        const end: number = rangeSpec.end !== undefined && rangeSpec.end < objectSize
+            ? rangeSpec.end
+            : objectSize - 1;
+        return { range: [rangeSpec.start, end] };
    }
    return { error: errors.InvalidRange };
}
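A quick sketch of the corrected clamping (standalone illustration; RangeSpec mirrors the shape used above). Note an explicit end of 0 is now honored, where the old truthiness test folded it into objectSize - 1:

    type RangeSpec = { start: number; end?: number };

    function clampRange(spec: RangeSpec, objectSize: number): [number, number] {
        // end is kept only when it is defined and inside the object.
        const end = spec.end !== undefined && spec.end < objectSize
            ? spec.end
            : objectSize - 1;
        return [spec.start, end];
    }

    clampRange({ start: 0, end: 0 }, 100); // [0, 0] (old code returned [0, 99])
    clampRange({ start: 10 }, 100);        // [10, 99]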


@@ -20,7 +20,7 @@ function _ttlvPadVector(vec: any[]) {
    return vec;
}

-function _throwError(logger: werelogs.Logger, msg: string, data?: LogDictionnary) {
+function _throwError(logger: werelogs.Logger, msg: string, data?: LogDictionary) {
    logger.error(msg, data);
    throw Error(msg);
}


@@ -62,7 +62,7 @@ export default class HealthProbeServer extends httpServer {
    _onLiveness(
        _req: http.IncomingMessage,
        res: http.ServerResponse,
-        log: RequestLogger,
+        log: werelogs.RequestLogger,
    ) {
        if (this._livenessCheck(log)) {
            sendSuccess(res, log);
@@ -74,7 +74,7 @@ export default class HealthProbeServer extends httpServer {
    _onReadiness(
        _req: http.IncomingMessage,
        res: http.ServerResponse,
-        log: RequestLogger,
+        log: werelogs.RequestLogger,
    ) {
        if (this._readinessCheck(log)) {
            sendSuccess(res, log);
@@ -84,10 +84,11 @@ export default class HealthProbeServer extends httpServer {
    }

    // expose metrics to Prometheus
-    _onMetrics(_req: http.IncomingMessage, res: http.ServerResponse) {
+    async _onMetrics(_req: http.IncomingMessage, res: http.ServerResponse) {
+        const metrics = await ZenkoMetrics.asPrometheus();
        res.writeHead(200, {
            'Content-Type': ZenkoMetrics.asPrometheusContentType(),
        });
-        res.end(ZenkoMetrics.asPrometheus());
+        res.end(metrics);
    }
}
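The handler goes async presumably because the underlying prom-client registry made metrics() Promise-based in recent versions; a minimal sketch of the pattern (direct prom-client usage here is an assumption, not code from this diff):

    import { register } from 'prom-client';
    import * as http from 'http';

    async function onMetrics(_req: http.IncomingMessage, res: http.ServerResponse) {
        // register.metrics() returns a Promise<string>; it must be awaited,
        // otherwise res.end() would receive an unresolved Promise.
        const body = await register.metrics();
        res.writeHead(200, { 'Content-Type': register.contentType });
        res.end(body);
    }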


@@ -4,7 +4,7 @@ import * as werelogs from 'werelogs';
import errors from '../../errors';

export const DEFAULT_LIVE_ROUTE = '/_/live';
-export const DEFAULT_READY_ROUTE = '/_/live';
+export const DEFAULT_READY_ROUTE = '/_/ready';
export const DEFAULT_METRICS_ROUTE = '/metrics';

/**
@@ -16,7 +16,7 @@ export const DEFAULT_METRICS_ROUTE = '/metrics';
 * @param log - Werelogs instance for logging if you choose to
 */
-export type ProbeDelegate = (res: http.ServerResponse, log: RequestLogger) => string | void
+export type ProbeDelegate = (res: http.ServerResponse, log: werelogs.RequestLogger) => string | void

export type ProbeServerParams = {
    port: number;


@@ -1,4 +1,7 @@
import * as http from 'http';
+import { RequestLogger } from 'werelogs';
+
import { ArsenalError } from '../../errors';

/**


@@ -4,7 +4,7 @@ import * as werelogs from 'werelogs';
import * as constants from '../../constants';
import * as utils from './utils';
import errors, { ArsenalError } from '../../errors';
-import HttpAgent from 'agentkeepalive';
+import { http as HttpAgent } from 'httpagent';
import * as stream from 'stream';

function setRequestUids(reqHeaders: http.IncomingHttpHeaders, reqUids: string) {
@@ -71,7 +71,7 @@ function makeErrorFromHTTPResponse(response: http.IncomingMessage) {
export default class RESTClient {
    host: string;
    port: number;
-    httpAgent: HttpAgent;
+    httpAgent: http.Agent;
    logging: werelogs.Logger;
    isPassthrough: boolean;
@@ -98,10 +98,10 @@ export default class RESTClient {
        this.port = params.port;
        this.isPassthrough = params.isPassthrough || false;
        this.logging = new (params.logApi || werelogs).Logger('DataFileRESTClient');
-        this.httpAgent = new HttpAgent({
+        this.httpAgent = new HttpAgent.Agent({
            keepAlive: true,
            freeSocketTimeout: constants.httpClientFreeSocketTimeout,
-        });
+        }) as http.Agent;
    }

    /** Destroy the HTTP agent, forcing a close of the remaining open connections */
@@ -119,7 +119,7 @@ export default class RESTClient {
        method: string,
        headers: http.OutgoingHttpHeaders | null,
        key: string | null,
-        log: RequestLogger,
+        log: werelogs.RequestLogger,
        responseCb: (res: http.IncomingMessage) => void,
    ) {
        const reqHeaders = headers || {};


@@ -25,7 +25,7 @@ function setContentRange(

function sendError(
    res: http.ServerResponse,
-    log: RequestLogger,
+    log: werelogs.RequestLogger,
    error: ArsenalError,
    optMessage?: string,
) {
@@ -68,7 +68,6 @@ export default class RESTServer extends httpServer {
    }) {
        assert(params.port);

-        // @ts-expect-error
        werelogs.configure({
            level: params.log.logLevel,
            dump: params.log.dumpLevel,
@@ -142,7 +141,7 @@ export default class RESTServer extends httpServer {
    _onPut(
        req: http.IncomingMessage,
        res: http.ServerResponse,
-        log: RequestLogger,
+        log: werelogs.RequestLogger,
    ) {
        let size: number;
        try {
@@ -184,7 +183,7 @@ export default class RESTServer extends httpServer {
    _onGet(
        req: http.IncomingMessage,
        res: http.ServerResponse,
-        log: RequestLogger,
+        log: werelogs.RequestLogger,
    ) {
        let pathInfo: ReturnType<typeof parseURL>;
        let rangeSpec: ReturnType<typeof httpUtils.parseRangeSpec> | undefined =
@@ -267,7 +266,7 @@ export default class RESTServer extends httpServer {
    _onDelete(
        req: http.IncomingMessage,
        res: http.ServerResponse,
-        log: RequestLogger,
+        log: werelogs.RequestLogger,
    ) {
        let pathInfo: ReturnType<typeof parseURL>;
        try {


@@ -1,6 +1,6 @@
import ioClient from 'socket.io-client';
import * as http from 'http';
-import io from 'socket.io';
+import { Server as IOServer } from 'socket.io';
import * as sioStream from './sio-stream';
import async from 'async';
import assert from 'assert';
@@ -497,7 +497,7 @@ export function RPCServer(params: {
    assert(params.logger);

    const httpServer = http.createServer();
-    const server = io(httpServer);
+    const server = new IOServer(httpServer, { maxHttpBufferSize: 1e8 });
    const log = params.logger;

    /**
@@ -508,7 +508,7 @@ export function RPCServer(params: {
     *
     * @param {BaseService} serviceList - list of services to register
     */
-    server.registerServices = function registerServices(...serviceList: any[]) {
+    (server as any).registerServices = function registerServices(...serviceList: any[]) {
        serviceList.forEach(service => {
            const sock = this.of(service.namespace);
            sock.on('connection', conn => {
@@ -536,7 +536,7 @@ export function RPCServer(params: {
        });
    };

-    server.listen = function listen(port, bindAddress = undefined) {
+    (server as any).listen = function listen(port, bindAddress = undefined) {
        httpServer.listen(port, bindAddress);
    };
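This tracks the socket.io v2 to v3/v4 migration: the callable io(...) factory became new Server(...), and v3 lowered the default maxHttpBufferSize from 100 MB to 1 MB, so the old effective limit is restored explicitly here. A minimal sketch (API versions assumed):

    import { createServer } from 'http';
    import { Server } from 'socket.io';

    const httpServer = createServer();
    // 1e8 bytes restores the pre-v3 100 MB payload ceiling.
    const io = new Server(httpServer, { maxHttpBufferSize: 1e8 });
    httpServer.listen(8000);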


@@ -1,159 +0,0 @@
'use strict'; // eslint-disable-line strict
const { URL } = require('url');
const { decryptSecret } = require('../executables/pensieveCreds/utils');
function patchLocations(overlayLocations, creds, log) {
if (!overlayLocations) {
return {};
}
const locations = {};
Object.keys(overlayLocations).forEach(k => {
const l = overlayLocations[k];
const location = {
name: k,
objectId: l.objectId,
details: l.details || {},
locationType: l.locationType,
};
let supportsVersioning = false;
let pathStyle = process.env.CI_CEPH !== undefined;
switch (l.locationType) {
case 'location-mem-v1':
location.type = 'mem';
location.details = { supportsVersioning: true };
break;
case 'location-file-v1':
location.type = 'file';
location.details = { supportsVersioning: true };
break;
case 'location-azure-v1':
location.type = 'azure';
if (l.details.secretKey && l.details.secretKey.length > 0) {
location.details = {
bucketMatch: l.details.bucketMatch,
azureStorageEndpoint: l.details.endpoint,
azureStorageAccountName: l.details.accessKey,
azureStorageAccessKey: decryptSecret(creds,
l.details.secretKey),
azureContainerName: l.details.bucketName,
};
}
break;
case 'location-ceph-radosgw-s3-v1':
case 'location-scality-ring-s3-v1':
pathStyle = true; // fallthrough
case 'location-aws-s3-v1':
case 'location-wasabi-v1':
supportsVersioning = true; // fallthrough
case 'location-do-spaces-v1':
location.type = 'aws_s3';
if (l.details.secretKey && l.details.secretKey.length > 0) {
let https = true;
let awsEndpoint = l.details.endpoint ||
's3.amazonaws.com';
if (awsEndpoint.includes('://')) {
const url = new URL(awsEndpoint);
awsEndpoint = url.host;
https = url.protocol.includes('https');
}
location.details = {
credentials: {
accessKey: l.details.accessKey,
secretKey: decryptSecret(creds,
l.details.secretKey),
},
bucketName: l.details.bucketName,
bucketMatch: l.details.bucketMatch,
serverSideEncryption:
Boolean(l.details.serverSideEncryption),
region: l.details.region,
awsEndpoint,
supportsVersioning,
pathStyle,
https,
};
}
break;
case 'location-gcp-v1':
location.type = 'gcp';
if (l.details.secretKey && l.details.secretKey.length > 0) {
location.details = {
credentials: {
accessKey: l.details.accessKey,
secretKey: decryptSecret(creds,
l.details.secretKey),
},
bucketName: l.details.bucketName,
mpuBucketName: l.details.mpuBucketName,
bucketMatch: l.details.bucketMatch,
gcpEndpoint: l.details.endpoint ||
'storage.googleapis.com',
https: true,
};
}
break;
case 'location-scality-sproxyd-v1':
location.type = 'scality';
if (l.details && l.details.bootstrapList &&
l.details.proxyPath) {
location.details = {
supportsVersioning: true,
connector: {
sproxyd: {
chordCos: l.details.chordCos || null,
bootstrap: l.details.bootstrapList,
path: l.details.proxyPath,
},
},
};
}
break;
case 'location-nfs-mount-v1':
location.type = 'pfs';
if (l.details) {
location.details = {
supportsVersioning: true,
bucketMatch: true,
pfsDaemonEndpoint: {
host: `${l.name}-cosmos-pfsd`,
port: 80,
},
};
}
break;
case 'location-scality-hdclient-v2':
location.type = 'scality';
if (l.details && l.details.bootstrapList) {
location.details = {
supportsVersioning: true,
connector: {
hdclient: {
bootstrap: l.details.bootstrapList,
},
},
};
}
break;
default:
log.info(
'unknown location type',
{ locationType: l.locationType },
);
return;
}
location.sizeLimitGB = l.sizeLimitGB || null;
location.isTransient = Boolean(l.isTransient);
location.legacyAwsBehavior = Boolean(l.legacyAwsBehavior);
locations[location.name] = location;
return;
});
return locations;
}
module.exports = {
patchLocations,
};


@@ -0,0 +1,209 @@
import { URL } from 'url';
import { decryptSecret } from '../executables/pensieveCreds/utils';
import { Logger } from 'werelogs';
export type LocationType =
| 'location-mem-v1'
| 'location-file-v1'
| 'location-azure-v1'
| 'location-ceph-radosgw-s3-v1'
| 'location-scality-ring-s3-v1'
| 'location-aws-s3-v1'
| 'location-wasabi-v1'
| 'location-do-spaces-v1'
| 'location-gcp-v1'
| 'location-scality-sproxyd-v1'
| 'location-nfs-mount-v1'
| 'location-scality-hdclient-v2';
export interface OverlayLocations {
[key: string]: {
name: string;
objectId: string;
details?: any;
locationType: string;
sizeLimitGB?: number;
isTransient?: boolean;
legacyAwsBehavior?: boolean;
};
}
export type Location = {
type:
| 'mem'
| 'file'
| 'azure'
| 'aws_s3'
| 'gcp'
| 'scality'
| 'pfs';
name: string;
objectId: string;
details: { [key: string]: any };
locationType: string;
sizeLimitGB: number | null;
isTransient: boolean;
legacyAwsBehavior: boolean;
};
export function patchLocations(
overlayLocations: OverlayLocations | undefined | null,
creds: any,
log: Logger
) {
const locs = overlayLocations ?? {};
return Object.entries(locs).reduce<{ [key: string]: Location }>(
(acc, [k, l]) => {
const location: Location = {
type: 'mem',
name: k,
objectId: l.objectId,
details: l.details || {},
locationType: l.locationType,
sizeLimitGB: l.sizeLimitGB || null,
isTransient: Boolean(l.isTransient),
legacyAwsBehavior: Boolean(l.legacyAwsBehavior),
};
let supportsVersioning = false;
let pathStyle = process.env.CI_CEPH !== undefined;
switch (l.locationType) {
case 'location-mem-v1':
location.type = 'mem';
location.details = { supportsVersioning: true };
break;
case 'location-file-v1':
location.type = 'file';
location.details = { supportsVersioning: true };
break;
case 'location-azure-v1':
location.type = 'azure';
if (l.details.secretKey && l.details.secretKey.length > 0) {
location.details = {
bucketMatch: l.details.bucketMatch,
azureStorageEndpoint: l.details.endpoint,
azureStorageAccountName: l.details.accessKey,
azureStorageAccessKey: decryptSecret(
creds,
l.details.secretKey
),
azureContainerName: l.details.bucketName,
};
}
break;
case 'location-ceph-radosgw-s3-v1':
case 'location-scality-ring-s3-v1':
pathStyle = true; // fallthrough
case 'location-aws-s3-v1':
case 'location-wasabi-v1':
supportsVersioning = true; // fallthrough
case 'location-do-spaces-v1':
location.type = 'aws_s3';
if (l.details.secretKey && l.details.secretKey.length > 0) {
let https = true;
let awsEndpoint =
l.details.endpoint || 's3.amazonaws.com';
if (awsEndpoint.includes('://')) {
const url = new URL(awsEndpoint);
awsEndpoint = url.host;
https = url.protocol.includes('https');
}
location.details = {
credentials: {
accessKey: l.details.accessKey,
secretKey: decryptSecret(
creds,
l.details.secretKey
),
},
bucketName: l.details.bucketName,
bucketMatch: l.details.bucketMatch,
serverSideEncryption: Boolean(
l.details.serverSideEncryption
),
region: l.details.region,
awsEndpoint,
supportsVersioning,
pathStyle,
https,
};
}
break;
case 'location-gcp-v1':
location.type = 'gcp';
if (l.details.secretKey && l.details.secretKey.length > 0) {
location.details = {
credentials: {
accessKey: l.details.accessKey,
secretKey: decryptSecret(
creds,
l.details.secretKey
),
},
bucketName: l.details.bucketName,
mpuBucketName: l.details.mpuBucketName,
bucketMatch: l.details.bucketMatch,
gcpEndpoint:
l.details.endpoint || 'storage.googleapis.com',
https: true,
};
}
break;
case 'location-scality-sproxyd-v1':
location.type = 'scality';
if (
l.details &&
l.details.bootstrapList &&
l.details.proxyPath
) {
location.details = {
supportsVersioning: true,
connector: {
sproxyd: {
chordCos: l.details.chordCos || null,
bootstrap: l.details.bootstrapList,
path: l.details.proxyPath,
},
},
};
}
break;
case 'location-nfs-mount-v1':
location.type = 'pfs';
if (l.details) {
location.details = {
supportsVersioning: true,
bucketMatch: true,
pfsDaemonEndpoint: {
host: `${l.name}-cosmos-pfsd`,
port: 80,
},
};
}
break;
case 'location-scality-hdclient-v2':
location.type = 'scality';
if (l.details && l.details.bootstrapList) {
location.details = {
supportsVersioning: true,
connector: {
hdclient: {
bootstrap: l.details.bootstrapList,
},
},
};
}
break;
default:
log.info('unknown location type', {
locationType: l.locationType,
});
return acc;
}
return { ...acc, [location.name]: location };
},
{}
);
}
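A small usage sketch of the typed rewrite (overlay content, creds shape, and import path invented for illustration):

    import { Logger } from 'werelogs';
    import { patchLocations } from './patchLocations';

    const locations = patchLocations(
        {
            'local-file': {
                name: 'local-file',
                objectId: 'abc123',
                locationType: 'location-file-v1',
            },
        },
        {}, // decryption credentials; only used by secretKey-bearing locations
        new Logger('patchLocations'),
    );
    // => { 'local-file': { type: 'file', details: { supportsVersioning: true }, ... } }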


@@ -38,7 +38,7 @@
    },
    "principalAWSUserArn": {
        "type": "string",
-        "pattern": "^arn:aws:iam::[0-9]{12}:user/(?!\\*)[\\w+=,.@ -/]{1,64}$"
+        "pattern": "^arn:aws:iam::[0-9]{12}:user/(?!\\*)[\\w+=,.@ -/]{1,2017}$"
    },
    "principalAWSRoleArn": {
        "type": "string",
@@ -360,6 +360,9 @@
        "type": "string",
        "const": "2012-10-17"
    },
+    "Id": {
+        "type": "string"
+    },
    "Statement": {
        "oneOf": [
            {
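With the added schema property, a policy document carrying the optional top-level Id now validates; an illustrative document (values invented):

    const policy = {
        Version: '2012-10-17',
        Id: 'example-bucket-policy', // newly accepted by the schema
        Statement: [{
            Effect: 'Allow',
            Principal: { AWS: 'arn:aws:iam::123456789012:user/alice' },
            Action: 's3:GetObject',
            Resource: 'arn:aws:s3:::example-bucket/*',
        }],
    };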


@@ -28,7 +28,7 @@
    },
    "principalAWSUserArn": {
        "type": "string",
-        "pattern": "^arn:aws:iam::[0-9]{12}:user/(?!\\*)[\\w+=,.@ -/]{1,64}$"
+        "pattern": "^arn:aws:iam::[0-9]{12}:user/(?!\\*)[\\w+=,.@ -/]{1,2017}$"
    },
    "principalAWSRoleArn": {
        "type": "string",


@@ -12,13 +12,39 @@ import {
    actionMapSSO,
    actionMapSTS,
    actionMapMetadata,
+    actionMapScuba,
} from './utils/actionMaps';

-const _actionNeedQuotaCheck = {
+export const actionNeedQuotaCheck = {
    objectPut: true,
+    objectPutVersion: true,
    objectPutPart: true,
+    objectRestore: true,
};

+/**
+ * This variable describes APIs that change the bytes
+ * stored, requiring quota updates
+ */
+export const actionWithDataDeletion = {
+    objectDelete: true,
+    objectDeleteVersion: true,
+    multipartDelete: true,
+    multiObjectDelete: true,
+};
+
+/**
+ * The function returns true if the current API call is a copy object
+ * and the action requires a quota evaluation logic, post retrieval
+ * of the object metadata.
+ * @param {string} action - the action being performed
+ * @param {string} currentApi - the current API being called
+ * @return {boolean} - whether the action requires a quota check
+ */
+export function actionNeedQuotaCheckCopy(action: string, currentApi: string) {
+    return action === 'objectGet' && (currentApi === 'objectCopy' || currentApi === 'objectPutCopyPart');
+}
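Taken together, a caller might classify an API call like this (hypothetical wiring, not code from this diff):

    function needsQuotaEvaluation(apiMethod: string, action: string): boolean {
        return actionNeedQuotaCheck[apiMethod] === true
            || actionWithDataDeletion[apiMethod] === true
            || actionNeedQuotaCheckCopy(action, apiMethod);
    }

    needsQuotaEvaluation('objectCopy', 'objectGet');      // true: copy reads trigger the post-metadata check
    needsQuotaEvaluation('objectDelete', 'objectDelete'); // true: frees stored bytes
    needsQuotaEvaluation('objectGetACL', 'objectGetACL'); // false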
function _findAction(service: string, method: string) {
    switch (service) {
        case 's3':
@@ -36,6 +62,8 @@ function _findAction(service: string, method: string) {
            return actionMapSTS[method];
        case 'metadata':
            return actionMapMetadata[method];
+        case 'scuba':
+            return actionMapScuba[method];
        default:
            return undefined;
    }
@@ -105,6 +133,10 @@ function _buildArn(
        return `arn:scality:metadata::${requesterInfo!.accountid}:` +
            `${generalResource}/`;
    }
+    case 'scuba': {
+        return `arn:scality:scuba::${requesterInfo!.accountid}:` +
+            `${generalResource}${specificResource ? '/' + specificResource : ''}`;
+    }
    default:
        return undefined;
    }
@@ -168,12 +200,12 @@ export default class RequestContext {
    _policyArn: string;
    _action?: string;
    _needQuota: boolean;
-    _postXml?: string;
    _requestObjTags: string | null;
    _existingObjTag: string | null;
    _needTagEval: boolean;
    _foundAction?: string;
    _foundResource?: string;
+    _objectLockRetentionDays?: number | null;

    constructor(
        headers: { [key: string]: string | string[] },
@@ -192,7 +224,10 @@ export default class RequestContext {
        securityToken: string,
        policyArn: string,
        action?: string,
-        postXml?: string,
+        requestObjTags?: string,
+        existingObjTag?: string,
+        needTagEval?: false,
+        objectLockRetentionDays?: number,
    ) {
        this._headers = headers;
        this._query = query;
@@ -221,11 +256,12 @@ export default class RequestContext {
        this._securityToken = securityToken;
        this._policyArn = policyArn;
        this._action = action;
-        this._needQuota = _actionNeedQuotaCheck[apiMethod] === true;
-        this._postXml = postXml;
-        this._requestObjTags = null;
-        this._existingObjTag = null;
-        this._needTagEval = false;
+        this._needQuota = actionNeedQuotaCheck[apiMethod] === true
+            || actionWithDataDeletion[apiMethod] === true;
+        this._requestObjTags = requestObjTags || null;
+        this._existingObjTag = existingObjTag || null;
+        this._needTagEval = needTagEval || false;
+        this._objectLockRetentionDays = objectLockRetentionDays || null;
        return this;
    }
@@ -238,7 +274,7 @@ export default class RequestContext {
            apiMethod: this._apiMethod,
            headers: this._headers,
            query: this._query,
-            requersterInfo: this._requesterInfo,
+            requesterInfo: this._requesterInfo,
            requesterIp: this._requesterIp,
            sslEnabled: this._sslEnabled,
            awsService: this._awsService,
@@ -254,10 +290,10 @@ export default class RequestContext {
            securityToken: this._securityToken,
            policyArn: this._policyArn,
            action: this._action,
-            postXml: this._postXml,
            requestObjTags: this._requestObjTags,
            existingObjTag: this._existingObjTag,
            needTagEval: this._needTagEval,
+            objectLockRetentionDays: this._objectLockRetentionDays,
        };
        return JSON.stringify(requestInfo);
    }
@@ -278,12 +314,28 @@ export default class RequestContext {
        if (resource) {
            obj.specificResource = resource;
        }
-        return new RequestContext(obj.headers, obj.query, obj.generalResource,
-            obj.specificResource, obj.requesterIp, obj.sslEnabled,
-            obj.apiMethod, obj.awsService, obj.locationConstraint,
-            obj.requesterInfo, obj.signatureVersion,
-            obj.authType, obj.signatureAge, obj.securityToken, obj.policyArn,
-            obj.action, obj.postXml);
+        return new RequestContext(
+            obj.headers,
+            obj.query,
+            obj.generalResource,
+            obj.specificResource,
+            obj.requesterIp,
+            obj.sslEnabled,
+            obj.apiMethod,
+            obj.awsService,
+            obj.locationConstraint,
+            obj.requesterInfo,
+            obj.signatureVersion,
+            obj.authType,
+            obj.signatureAge,
+            obj.securityToken,
+            obj.policyArn,
+            obj.action,
+            obj.requestObjTags,
+            obj.existingObjTag,
+            obj.needTagEval,
+            obj.objectLockRetentionDays,
+        );
    }
@@ -627,26 +679,6 @@ export default class RequestContext {
        return this._needQuota;
    }

-    /**
-     * Set request post
-     *
-     * @param postXml - request post
-     * @return itself
-     */
-    setPostXml(postXml: string) {
-        this._postXml = postXml;
-        return this;
-    }
-
-    /**
-     * Get request post
-     *
-     * @return request post
-     */
-    getPostXml() {
-        return this._postXml;
-    }
-
    /**
     * Set request object tags
     *
@@ -706,4 +738,24 @@ export default class RequestContext {
    getNeedTagEval() {
        return this._needTagEval;
    }
+
+    /**
+     * Get object lock retention days
+     *
+     * @returns objectLockRetentionDays - object lock retention days
+     */
+    getObjectLockRetentionDays() {
+        return this._objectLockRetentionDays;
+    }
+
+    /**
+     * Set object lock retention days
+     *
+     * @param objectLockRetentionDays - object lock retention days
+     * @returns itself
+     */
+    setObjectLockRetentionDays(objectLockRetentionDays: number) {
+        this._objectLockRetentionDays = objectLockRetentionDays;
+        return this;
+    }
}
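The net effect, assuming the usual serialize()/static deSerialize() pair this class exposes (method names are not shown in the excerpt): the tag-evaluation and retention fields now survive a round trip instead of being reset. A sketch:

    // rc is assumed to be a fully-constructed RequestContext:
    const wire = rc.serialize();        // JSON now carries requestObjTags,
                                        // existingObjTag, needTagEval and
                                        // objectLockRetentionDays
    const copy = RequestContext.deSerialize(wire);
    copy.getObjectLockRetentionDays();  // preserved rather than reset to null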


@@ -13,7 +13,11 @@ const operatorsWithVariables = ['StringEquals', 'StringNotEquals',
const operatorsWithNegation = ['StringNotEquals',
    'StringNotEqualsIgnoreCase', 'StringNotLike', 'ArnNotEquals',
    'ArnNotLike', 'NumericNotEquals'];
-const tagConditions = new Set(['s3:ExistingObjectTag', 's3:RequestObjectTagKey', 's3:RequestObjectTagKeys']);
+const tagConditions = new Set([
+    's3:ExistingObjectTag',
+    's3:RequestObjectTagKey',
+    's3:RequestObjectTagKeys',
+]);

/**
@@ -24,11 +28,11 @@ const tagConditions = new Set([
 * @param log - logger
 * @return true if applicable, false if not
 */
-export const isResourceApplicable = (
+export function isResourceApplicable(
    requestContext: RequestContext,
    statementResource: string | string[],
    log: Logger,
-): boolean => {
+): boolean {
    const resource = requestContext.getResource();
    if (!Array.isArray(statementResource)) {
        // eslint-disable-next-line no-param-reassign
@@ -59,7 +63,7 @@ export function isResourceApplicable(
        { requestResource: resource });
    // If no match found, no resource is applicable
    return false;
-};
+}

/**
 * Check whether action in policy statement applies to request
@@ -69,11 +73,11 @@ export function isResourceApplicable(
 * @param log - logger
 * @return true if applicable, false if not
 */
-export const isActionApplicable = (
+export function isActionApplicable(
    requestAction: string,
    statementAction: string | string[],
    log: Logger,
-): boolean => {
+): boolean {
    if (!Array.isArray(statementAction)) {
        // eslint-disable-next-line no-param-reassign
        statementAction = [statementAction];
@@ -95,32 +99,33 @@ export function isActionApplicable(
        { requestAction });
    // If no match found, return false
    return false;
-};
+}

/**
 * Check whether request meets policy conditions
- * @param requestContext - info about request
- * @param statementCondition - Condition statement from policy
- * @param log - logger
- * @return contains whether conditions are allowed and whether they
- * contain any tag condition keys
+ * @param {RequestContext} requestContext - info about request
+ * @param {object} statementCondition - Condition statement from policy
+ * @param {Logger} log - logger
+ * @return {boolean|null} a condition evaluation result, one of:
+ * - true: condition is met
+ * - false: condition is not met
+ * - null: condition evaluation requires additional info to be
+ *   provided (namely, for tag conditions, request tags and/or object
+ *   tags have to be provided to evaluate the condition)
 */
-export const meetConditions = (
+export function meetConditions(
    requestContext: RequestContext,
    statementCondition: any,
    log: Logger,
-) => {
+): boolean | null {
+    let hasTagConditions = false;
    // The Condition portion of a policy is an object with different
    // operators as keys
-    const conditionEval = {};
-    const operators = Object.keys(statementCondition);
-    const length = operators.length;
-    for (let i = 0; i < length; i++) {
-        const operator = operators[i];
+    for (const operator of Object.keys(statementCondition)) {
        const hasPrefix = operator.includes(':');
        const hasIfExistsCondition = operator.endsWith('IfExists');
        // If has "IfExists" added to operator name, or operator has "ForAnyValue" or
-        // "For All Values" prefix, find operator name without "IfExists" or prefix
+        // "ForAllValues" prefix, find operator name without "IfExists" or prefix
        let bareOperator = hasIfExistsCondition ? operator.slice(0, -8) :
            operator;
        let prefix: string | undefined;
@@ -135,10 +140,6 @@ export function meetConditions(
        // Note: this should be the actual operator name, not the bareOperator
        const conditionsWithSameOperator = statementCondition[operator];
        const conditionKeys = Object.keys(conditionsWithSameOperator);
-        if (conditionKeys.some(key => tagConditions.has(key)) && !requestContext.getNeedTagEval()) {
-            // @ts-expect-error
-            conditionEval.tagConditions = true;
-        }
        const conditionKeysLength = conditionKeys.length;
        for (let j = 0; j < conditionKeysLength; j++) {
            const key = conditionKeys[j];
@@ -155,6 +156,10 @@ export function meetConditions(
            // tag key is included in condition key and needs to be
            // moved to value for evaluation, otherwise key/value are unchanged
            const [transformedKey, transformedValue] = transformTagKeyValue(key, value);
+            if (tagConditions.has(transformedKey) && !requestContext.getNeedTagEval()) {
+                hasTagConditions = true;
+                continue;
+            }
            // Pull key using requestContext
            // TODO: If applicable to S3, handle policy set operations
            // where a keyBasedOnRequestContext returns multiple values and
@@ -180,11 +185,10 @@ export function meetConditions(
                log.trace('condition not satisfied due to ' +
                    'missing info', { operator,
                    conditionKey: transformedKey, policyValue: transformedValue });
-                return { allow: false };
+                return false;
            }
            // If condition operator prefix is included, the key should be an array
            if (prefix && !Array.isArray(keyBasedOnRequestContext)) {
-                // @ts-expect-error
                keyBasedOnRequestContext = [keyBasedOnRequestContext];
            }
            // Translate operator into function using bareOperator
@@ -196,14 +200,16 @@ export function meetConditions(
            if (!operatorFunction(keyBasedOnRequestContext, transformedValue, prefix)) {
                log.trace('did not satisfy condition', { operator: bareOperator,
                    keyBasedOnRequestContext, policyValue: transformedValue });
-                return { allow: false };
+                return false;
            }
        }
    }
-    // @ts-expect-error
-    conditionEval.allow = true;
-    return conditionEval;
-};
+    // one or more conditions required tag info to be evaluated
+    if (hasTagConditions) {
+        return null;
+    }
+    return true;
+}
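The tri-state return replaces the old { allow, tagConditions } object; callers now branch like this (hypothetical caller sketch):

    const conditionEval = meetConditions(requestContext, statement.Condition, log);
    if (conditionEval === false) {
        // condition failed outright: the statement does not apply
    } else if (conditionEval === null) {
        // tag info missing: deferred as Allow/DenyWithTagCondition (see below)
    } else {
        // condition met: apply statement.Effect
    }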
/**
 * Evaluate whether a request is permitted under a policy.
@@ -216,13 +222,15 @@
 * @return Allow if permitted, Deny if not permitted or Neutral
 * if not applicable
 */
-export const evaluatePolicy = (
+export function evaluatePolicy(
    requestContext: RequestContext,
    policy: any,
    log: Logger,
-): string => {
+): string {
    // TODO: For bucket policies need to add Principal evaluation
-    let verdict = 'Neutral';
+    let allow = false;
+    let allowWithTagCondition = false;
+    let denyWithTagCondition = false;

    if (!Array.isArray(policy.Statement)) {
        // eslint-disable-next-line no-param-reassign
@@ -259,10 +267,18 @@ export function evaluatePolicy(
        }
        const conditionEval = currentStatement.Condition ?
            meetConditions(requestContext, currentStatement.Condition, log) :
-            null;
+            true;
        // If do not meet conditions move on to next statement
-        // @ts-expect-error
-        if (conditionEval && !conditionEval.allow) {
+        if (conditionEval === false) {
+            continue;
+        }
+        // If condition needs tag info to be evaluated, mark and move on to next statement
+        if (conditionEval === null) {
+            if (currentStatement.Effect === 'Deny') {
+                denyWithTagCondition = true;
+            } else {
+                allowWithTagCondition = true;
+            }
            continue;
        }
        if (currentStatement.Effect === 'Deny') {
@@ -271,19 +287,30 @@ export function evaluatePolicy(
            return 'Deny';
        }
        log.trace('Allow statement applies');
-        // If statement is applicable, conditions are met and Effect is
-        // to Allow, set verdict to Allow
-        verdict = 'Allow';
-        // @ts-expect-error
-        if (conditionEval && conditionEval.tagConditions) {
-            verdict = 'NeedTagConditionEval';
-        }
+        // statement is applicable, conditions are met and Effect is
+        // to Allow
+        allow = true;
+    }
+    let verdict;
+    if (denyWithTagCondition) {
+        // priority is on checking tags to potentially deny
+        verdict = 'DenyWithTagCondition';
+    } else if (allow) {
+        // at least one statement is an allow
+        verdict = 'Allow';
+    } else if (allowWithTagCondition) {
+        // all allow statements need tag checks
+        verdict = 'AllowWithTagCondition';
+    } else {
+        // no statement matched to allow or deny
+        verdict = 'Neutral';
    }
    log.trace('result of evaluating single policy', { verdict });
    return verdict;
-};
+}
/**
+ * @deprecated Upgrade to standardEvaluateAllPolicies
 * Evaluate whether a request is permitted under a policy.
 * @param requestContext - Info necessary to
 * evaluate permission
@@ -294,24 +321,58 @@ export function evaluatePolicy(
 * @return Allow if permitted, Deny if not permitted.
 * Default is to Deny. Deny overrides an Allow
 */
-export const evaluateAllPolicies = (
+export function evaluateAllPolicies(
    requestContext: RequestContext,
    allPolicies: any[],
    log: Logger,
-): string => {
+): string {
+    return standardEvaluateAllPolicies(requestContext, allPolicies, log).verdict;
+}
+
+export function standardEvaluateAllPolicies(
+    requestContext: RequestContext,
+    allPolicies: any[],
+    log: Logger,
+): {
+    verdict: string;
+    isImplicit: boolean;
+} {
    log.trace('evaluating all policies');
-    let verdict = 'Deny';
+    let allow = false;
+    let allowWithTagCondition = false;
+    let denyWithTagCondition = false;
    for (let i = 0; i < allPolicies.length; i++) {
-        const singlePolicyVerdict =
-            evaluatePolicy(requestContext, allPolicies[i], log);
+        const singlePolicyVerdict = evaluatePolicy(requestContext, allPolicies[i], log);
        // If there is any Deny, just return Deny
        if (singlePolicyVerdict === 'Deny') {
-            return 'Deny';
+            return {
+                verdict: 'Deny',
+                isImplicit: false,
+            };
        }
        if (singlePolicyVerdict === 'Allow') {
+            allow = true;
+        } else if (singlePolicyVerdict === 'AllowWithTagCondition') {
+            allowWithTagCondition = true;
+        } else if (singlePolicyVerdict === 'DenyWithTagCondition') {
+            denyWithTagCondition = true;
+        } // else 'Neutral'
+    }
+    let verdict;
+    let isImplicit = false;
+    if (allow) {
+        if (denyWithTagCondition) {
+            verdict = 'NeedTagConditionEval';
+        } else {
            verdict = 'Allow';
        }
+    } else {
+        if (allowWithTagCondition) {
+            verdict = 'NeedTagConditionEval';
+        } else {
+            verdict = 'Deny';
+            isImplicit = true;
+        }
    }
-    log.trace('result of evaluating all pollicies', { verdict });
-    return verdict;
-};
+    log.trace('result of evaluating all policies', { verdict, isImplicit });
+    return { verdict, isImplicit };
}
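Illustrative combinations under the logic above (worked out by hand from the branches, not test output):

    // per-policy verdicts                    -> combined result
    // [ 'Allow' ]                            -> { verdict: 'Allow', isImplicit: false }
    // [ 'Allow', 'DenyWithTagCondition' ]    -> { verdict: 'NeedTagConditionEval', isImplicit: false }
    // [ 'AllowWithTagCondition' ]            -> { verdict: 'NeedTagConditionEval', isImplicit: false }
    // [ 'Neutral' ] or no policies           -> { verdict: 'Deny', isImplicit: true }
    // any 'Deny'                             -> { verdict: 'Deny', isImplicit: false } (early return)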


@@ -23,15 +23,22 @@ export default class Principal {
     * @param statement - Statement policy field
     * @return True if meet conditions
     */
-    static _evaluateCondition(
+    static _evaluateStatement(
        params: Params,
        statement: Statement,
-        // TODO Fix return type
-    ): any {
-        if (statement.Condition) {
-            return meetConditions(params.rc, statement.Condition, params.log);
+    ): 'Neutral' | 'Allow' | 'Deny' {
+        const reverse = !!statement.NotPrincipal;
+        if (reverse) {
+            // In case of anonymous NotPrincipal, this will neutral everyone
+            return 'Neutral';
        }
-        return true;
+        if (statement.Condition) {
+            const conditionEval = meetConditions(params.rc, statement.Condition, params.log);
+            if (conditionEval === false || conditionEval === null) {
+                return 'Neutral';
+            }
+        }
+        return statement.Effect;
    }

    /**
@@ -48,19 +55,12 @@ export default class Principal {
        statement: Statement,
        valids: Valid,
    ): 'Neutral' | 'Allow' | 'Deny' {
-        const reverse = !!statement.NotPrincipal;
        const principal = (statement.Principal || statement.NotPrincipal)!;
-        if (typeof principal === 'string' && principal === '*') {
-            if (reverse) {
-                // In case of anonymous NotPrincipal, this will neutral everyone
-                return 'Neutral';
-            }
-            const conditionEval = Principal._evaluateCondition(params, statement);
-            if (!conditionEval || conditionEval.allow === false) {
-                return 'Neutral';
-            }
-            return statement.Effect;
-        } else if (typeof principal === 'string') {
+        const reverse = !!statement.NotPrincipal;
+        if (typeof principal === 'string') {
+            if (principal === '*') {
+                return Principal._evaluateStatement(params, statement);
+            }
            return 'Deny';
        }
        let ref = [];
@@ -82,28 +82,8 @@ export default class Principal {
        }
        toCheck = Array.isArray(toCheck) ? toCheck : [toCheck];
        ref = Array.isArray(ref) ? ref : [ref];
-        if (toCheck.indexOf('*') !== -1) {
-            if (reverse) {
-                return 'Neutral';
-            }
-            const conditionEval = Principal._evaluateCondition(params, statement);
-            if (!conditionEval || conditionEval.allow === false) {
-                return 'Neutral';
-            }
-            return statement.Effect;
-        }
-        const len = ref.length;
-        for (let i = 0; i < len; ++i) {
-            if (toCheck.indexOf(ref[i]) !== -1) {
-                if (reverse) {
-                    return 'Neutral';
-                }
-                const conditionEval = Principal._evaluateCondition(params, statement);
-                if (!conditionEval || conditionEval.allow === false) {
-                    return 'Neutral';
-                }
-                return statement.Effect;
-            }
-        }
+        if (toCheck.includes('*') || ref.some(r => toCheck.includes(r))) {
+            return Principal._evaluateStatement(params, statement);
+        }
        if (reverse) {
            return statement.Effect;
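A condensed view of the consolidated matching above (values invented): any wildcard or ARN overlap now defers to _evaluateStatement, which neutralizes NotPrincipal and unmet Conditions before returning the statement Effect.

    const toCheck = ['arn:aws:iam::123456789012:root', '*'];
    const ref = ['arn:aws:iam::123456789012:root'];
    // One expression replaces both the old wildcard branch and the per-ARN loop:
    toCheck.includes('*') || ref.some(r => toCheck.includes(r)); // true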


@@ -4,14 +4,14 @@ const sharedActionMap = {
     bucketDeleteEncryption: 's3:PutEncryptionConfiguration',
     bucketDeletePolicy: 's3:DeleteBucketPolicy',
     bucketDeleteWebsite: 's3:DeleteBucketWebsite',
-    bucketDeleteTagging: 's3:DeleteBucketTagging',
+    bucketDeleteTagging: 's3:PutBucketTagging',
     bucketGet: 's3:ListBucket',
     bucketGetACL: 's3:GetBucketAcl',
     bucketGetCors: 's3:GetBucketCORS',
     bucketGetEncryption: 's3:GetEncryptionConfiguration',
     bucketGetLifecycle: 's3:GetLifecycleConfiguration',
     bucketGetLocation: 's3:GetBucketLocation',
-    bucketGetNotification: 's3:GetBucketNotificationConfiguration',
+    bucketGetNotification: 's3:GetBucketNotification',
     bucketGetObjectLock: 's3:GetBucketObjectLockConfiguration',
     bucketGetPolicy: 's3:GetBucketPolicy',
     bucketGetReplication: 's3:GetReplicationConfiguration',
@@ -23,7 +23,7 @@ const sharedActionMap = {
     bucketPutCors: 's3:PutBucketCORS',
     bucketPutEncryption: 's3:PutEncryptionConfiguration',
     bucketPutLifecycle: 's3:PutLifecycleConfiguration',
-    bucketPutNotification: 's3:PutBucketNotificationConfiguration',
+    bucketPutNotification: 's3:PutBucketNotification',
     bucketPutObjectLock: 's3:PutBucketObjectLockConfiguration',
     bucketPutPolicy: 's3:PutBucketPolicy',
     bucketPutReplication: 's3:PutReplicationConfiguration',
@@ -42,11 +42,20 @@ const sharedActionMap = {
     objectGetLegalHold: 's3:GetObjectLegalHold',
     objectGetRetention: 's3:GetObjectRetention',
     objectGetTagging: 's3:GetObjectTagging',
+    objectHead: 's3:GetObject',
     objectPut: 's3:PutObject',
     objectPutACL: 's3:PutObjectAcl',
     objectPutLegalHold: 's3:PutObjectLegalHold',
     objectPutRetention: 's3:PutObjectRetention',
     objectPutTagging: 's3:PutObjectTagging',
+    objectRestore: 's3:RestoreObject',
+    objectPutVersion: 's3:PutObjectVersion',
 };
 
+const actionMapBucketQuotas = {
+    bucketGetQuota: 'scality:GetBucketQuota',
+    bucketUpdateQuota: 'scality:UpdateBucketQuota',
+    bucketDeleteQuota: 'scality:DeleteBucketQuota',
+};
+
 // action map used for request context
@@ -56,36 +65,35 @@ const actionMapRQ = {
     // see http://docs.aws.amazon.com/AmazonS3/latest/API/
     // RESTBucketDELETEcors.html
     bucketDeleteCors: 's3:PutBucketCORS',
-    bucketDeleteReplication: 's3:DeleteReplicationConfiguration',
-    bucketDeleteLifecycle: 's3:DeleteLifecycleConfiguration',
+    bucketDeleteReplication: 's3:PutReplicationConfiguration',
+    bucketDeleteLifecycle: 's3:PutLifecycleConfiguration',
     completeMultipartUpload: 's3:PutObject',
     initiateMultipartUpload: 's3:PutObject',
     objectDeleteVersion: 's3:DeleteObjectVersion',
     objectDeleteTaggingVersion: 's3:DeleteObjectVersionTagging',
+    objectGetArchiveInfo: 'scality:GetObjectArchiveInfo',
     objectGetVersion: 's3:GetObjectVersion',
     objectGetACLVersion: 's3:GetObjectVersionAcl',
     objectGetTaggingVersion: 's3:GetObjectVersionTagging',
-    objectHead: 's3:GetObject',
     objectPutACLVersion: 's3:PutObjectVersionAcl',
     objectPutPart: 's3:PutObject',
     objectPutTaggingVersion: 's3:PutObjectVersionTagging',
     serviceGet: 's3:ListAllMyBuckets',
     objectReplicate: 's3:ReplicateObject',
-    objectPutRetentionVersion: 's3:PutObjectVersionRetention',
-    objectPutLegalHoldVersion: 's3:PutObjectVersionLegalHold',
+    objectGetRetentionVersion: 's3:GetObjectRetention',
+    objectPutRetentionVersion: 's3:PutObjectRetention',
+    objectGetLegalHoldVersion: 's3:GetObjectLegalHold',
+    objectPutLegalHoldVersion: 's3:PutObjectLegalHold',
+    listObjectVersions: 's3:ListBucketVersions',
     ...sharedActionMap,
+    ...actionMapBucketQuotas,
 };
 
 // action map used for bucket policies
-const actionMapBP = { ...sharedActionMap };
+const actionMapBP = actionMapRQ;
 
 // action map for all relevant s3 actions
 const actionMapS3 = {
-    // TODO
-    // @ts-ignore
-    bucketGetNotification: 's3:GetBucketNotification',
-    // @ts-ignore
-    bucketPutNotification: 's3:PutBucketNotification',
     ...sharedActionMap,
     ...actionMapRQ,
     ...actionMapBP,
@@ -105,7 +113,7 @@ const actionMonitoringMapS3 = {
     bucketGetCors: 'GetBucketCors',
     bucketGetLifecycle: 'GetBucketLifecycleConfiguration',
     bucketGetLocation: 'GetBucketLocation',
-    bucketGetNotification: 'GetBucketNotificationConfiguration',
+    bucketGetNotification: 'GetBucketNotification',
     bucketGetObjectLock: 'GetObjectLockConfiguration',
     bucketGetPolicy: 'GetBucketPolicy',
     bucketGetReplication: 'GetBucketReplication',
@@ -118,7 +126,7 @@ const actionMonitoringMapS3 = {
     bucketPutACL: 'PutBucketAcl',
     bucketPutCors: 'PutBucketCors',
     bucketPutLifecycle: 'PutBucketLifecycleConfiguration',
-    bucketPutNotification: 'PutBucketNotificationConfiguration',
+    bucketPutNotification: 'PutBucketNotification',
     bucketPutObjectLock: 'PutObjectLockConfiguration',
     bucketPutPolicy: 'PutBucketPolicy',
     bucketPutReplication: 'PutBucketReplication',
@@ -149,7 +157,17 @@ const actionMonitoringMapS3 = {
     objectPutPart: 'UploadPart',
     objectPutRetention: 'PutObjectRetention',
     objectPutTagging: 'PutObjectTagging',
+    objectRestore: 'RestoreObject',
     serviceGet: 'ListBuckets',
+    bucketGetQuota: 'GetBucketQuota',
+    bucketUpdateQuota: 'UpdateBucketQuota',
+    bucketDeleteQuota: 'DeleteBucketQuota',
+};
+
+const actionMapAccountQuotas = {
+    UpdateAccountQuota : 'scality:UpdateAccountQuota',
+    DeleteAccountQuota : 'scality:DeleteAccountQuota',
+    GetAccountQuota : 'scality:GetAccountQuota',
 };
 
 const actionMapIAM = {
@@ -185,10 +203,15 @@ const actionMapIAM = {
     removeUserFromGroup: 'iam:RemoveUserFromGroup',
     updateAccessKey: 'iam:UpdateAccessKey',
     updateGroup: 'iam:UpdateGroup',
+    updateRole: 'iam:UpdateRole',
     updateUser: 'iam:UpdateUser',
     getAccessKeyLastUsed: 'iam:GetAccessKeyLastUsed',
     generateCredentialReport: 'iam:GenerateCredentialReport',
     getCredentialReport: 'iam:GetCredentialReport',
+    tagUser: 'iam:TagUser',
+    unTagUser: 'iam:UntagUser',
+    listUserTags: 'iam:ListUserTags',
+    ...actionMapAccountQuotas,
 };
 
 const actionMapSSO = {
@@ -204,6 +227,14 @@ const actionMapMetadata = {
     default: 'metadata:bucketd',
 };
 
+const actionMapScuba = {
+    GetMetrics: 'scuba:GetMetrics',
+    AdminStartIngest: 'scuba:AdminStartIngest',
+    AdminStopIngest: 'scuba:AdminStopIngest',
+    AdminReadRaftCseq: 'scuba:AdminReadRaftCseq',
+    AdminTriggerRepair: 'scuba:AdminTriggerRepair',
+};
+
 export {
     actionMapRQ,
     actionMapBP,
@@ -213,4 +244,5 @@ export {
     actionMapSSO,
     actionMapSTS,
     actionMapMetadata,
+    actionMapScuba,
 };
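
These maps feed both request-context evaluation and bucket-policy evaluation; with actionMapBP now aliasing actionMapRQ, bucket policies accept the same action names as request contexts. A small usage sketch, assuming the file is imported as ./actionMaps (the lookup helper below is illustrative, not part of the diff):

import { actionMapRQ } from './actionMaps'; // path is an assumption

// Map a CloudServer API method name to the IAM action string that policy
// evaluation will check, e.g. 'bucketGetQuota' -> 'scality:GetBucketQuota'.
function requiredAction(apiMethod: keyof typeof actionMapRQ): string {
    return actionMapRQ[apiMethod];
}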


@@ -1,5 +1,5 @@
 import { handleWildcardInResource } from './wildcards';
+import { policyArnAllowedEmptyAccountId } from '../../constants';
 
 /**
  * Checks whether an ARN from a request matches an ARN in a policy
  * to compare against each portion of the ARN from the request
@@ -38,9 +38,10 @@ export default function checkArnMatch(
         const requestSegment = caseSensitive ? requestArnArr[j] :
             requestArnArr[j].toLowerCase();
         const policyArnArr = policyArn.split(':');
-        // We want to allow an empty account ID for utapi service ARNs to not
+        // We want to allow an empty account ID for utapi and scuba service ARNs to not
         // break compatibility.
-        if (j === 4 && policyArnArr[2] === 'utapi' && policyArnArr[4] === '') {
+        if (j === 4 && policyArnAllowedEmptyAccountId.includes(policyArnArr[2])
+            && policyArnArr[4] === '') {
             continue;
         } else if (!segmentRegEx.test(requestSegment)) {
             return false;
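
The shared constant replaces the hard-coded 'utapi' check. Based on the updated comment, it presumably looks like this in ../../constants (an assumption; the constant's definition is not shown in the diff):

// Service ARNs that may carry an empty account ID segment without
// failing the match, to preserve compatibility.
export const policyArnAllowedEmptyAccountId = ['utapi', 'scuba'];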


@@ -11,31 +11,30 @@ import ipaddr from 'ipaddr.js';
  * @param requestContext - info sent with request
  * @return condition key value
  */
-export const findConditionKey = (
+export function findConditionKey(
     key: string,
     requestContext: RequestContext,
-): string => {
+): any {
     // TODO: Consider combining with findVariable function if no benefit
     // to keeping separate
     const headers = requestContext.getHeaders();
     const query = requestContext.getQuery();
     const requesterInfo = requestContext.getRequesterInfo();
 
-    const map = new Map();
     // Possible AWS Condition keys (http://docs.aws.amazon.com/IAM/latest/
     // UserGuide/reference_policies_elements.html#AvailableKeys)
+    switch (key) {
     // aws:CurrentTime Used for date/time conditions
     // (see Date Condition Operators).
-    map.set('aws:CurrentTime', new Date().toISOString());
+    case 'aws:CurrentTime': return new Date().toISOString();
     // aws:EpochTime Used for date/time conditions
     // (see Date Condition Operators).
-    map.set('aws:EpochTime', Date.now().toString());
+    case 'aws:EpochTime': return Date.now().toString();
     // aws:TokenIssueTime Date/time that temporary security
     // credentials were issued (see Date Condition Operators).
     // Only present in requests that are signed using temporary security
     // credentials.
-    map.set('aws:TokenIssueTime', requestContext.getTokenIssueTime());
+    case 'aws:TokenIssueTime': return requestContext.getTokenIssueTime();
     // aws:MultiFactorAuthPresent Used to check whether MFA was used
     // (see Boolean Condition Operators).
     // Note: This key is only present if MFA was used. So, the following
@@ -45,133 +44,137 @@ export function findConditionKey(
     // Instead use:
     // "Condition" :
     // { "Null" : { "aws:MultiFactorAuthPresent" : true } }
-    map.set('aws:MultiFactorAuthPresent',
-        requestContext.getMultiFactorAuthPresent());
+    case 'aws:MultiFactorAuthPresent': return requestContext.getMultiFactorAuthPresent();
     // aws:MultiFactorAuthAge Used to check how many seconds since
     // MFA credentials were issued. If MFA was not used,
     // this key is not present
-    map.set('aws:MultiFactorAuthAge', requestContext.getMultiFactorAuthAge());
+    case 'aws:MultiFactorAuthAge': return requestContext.getMultiFactorAuthAge();
     // aws:principaltype states whether the principal is an account,
     // user, federated, or assumed role
     // Note: Docs for conditions have "PrincipalType" but simulator
     // and docs for variables have lowercase
-    map.set('aws:principaltype', requesterInfo.principaltype);
+    case 'aws:principaltype': return requesterInfo.principaltype;
     // aws:Referer Used to check who referred the client browser to
     // the address the request is being sent to. Only supported by some
     // services, such as S3. Value comes from the referer header in the
     // HTTPS request made to AWS.
-    map.set('aws:referer', headers.referer);
+    case 'aws:referer': return headers.referer;
     // aws:SecureTransport Used to check whether the request was sent
     // using SSL (see Boolean Condition Operators).
-    map.set('aws:SecureTransport',
-        requestContext.getSslEnabled() ? 'true' : 'false');
+    case 'aws:SecureTransport': return requestContext.getSslEnabled() ? 'true' : 'false';
     // aws:SourceArn Used check the source of the request,
     // using the ARN of the source. N/A here.
-    map.set('aws:SourceArn', undefined);
+    case 'aws:SourceArn': return undefined;
     // aws:SourceIp Used to check the requester's IP address
     // (see IP Address Condition Operators)
-    map.set('aws:SourceIp', requestContext.getRequesterIp());
+    case 'aws:SourceIp': return requestContext.getRequesterIp();
     // aws:SourceVpc Used to restrict access to a specific
     // AWS Virtual Private Cloud. N/A here.
-    map.set('aws:SourceVpc', undefined);
+    case 'aws:SourceVpc': return undefined;
     // aws:SourceVpce Used to limit access to a specific VPC endpoint
     // N/A here
-    map.set('aws:SourceVpce', undefined);
+    case 'aws:SourceVpce': return undefined;
     // aws:UserAgent Used to check the requester's client app.
     // (see String Condition Operators)
-    map.set('aws:UserAgent', headers['user-agent']);
+    case 'aws:UserAgent': return headers['user-agent'];
     // aws:userid Used to check the requester's unique user ID.
     // (see String Condition Operators)
-    map.set('aws:userid', requesterInfo.userid);
+    case 'aws:userid': return requesterInfo.userid;
     // aws:username Used to check the requester's friendly user name.
     // (see String Condition Operators)
-    map.set('aws:username', requesterInfo.username);
+    case 'aws:username': return requesterInfo.username;
     // Possible condition keys for S3:
     // s3:x-amz-acl is acl request for bucket or object put request
-    map.set('s3:x-amz-acl', headers['x-amz-acl']);
+    case 's3:x-amz-acl': return headers['x-amz-acl'];
     // s3:x-amz-grant-PERMISSION (where permission can be:
     // read, write, read-acp, write-acp or full-control)
     // Value is the value of that header (ex. id of grantee)
-    map.set('s3:x-amz-grant-read', headers['x-amz-grant-read']);
-    map.set('s3:x-amz-grant-write', headers['x-amz-grant-write']);
-    map.set('s3:x-amz-grant-read-acp', headers['x-amz-grant-read-acp']);
-    map.set('s3:x-amz-grant-write-acp', headers['x-amz-grant-write-acp']);
-    map.set('s3:x-amz-grant-full-control', headers['x-amz-grant-full-control']);
+    case 's3:x-amz-grant-read': return headers['x-amz-grant-read'];
+    case 's3:x-amz-grant-write': return headers['x-amz-grant-write'];
+    case 's3:x-amz-grant-read-acp': return headers['x-amz-grant-read-acp'];
+    case 's3:x-amz-grant-write-acp': return headers['x-amz-grant-write-acp'];
+    case 's3:x-amz-grant-full-control': return headers['x-amz-grant-full-control'];
     // s3:x-amz-copy-source is x-amz-copy-source header if applicable on
     // a put object
-    map.set('s3:x-amz-copy-source', headers['x-amz-copy-source']);
+    case 's3:x-amz-copy-source': return headers['x-amz-copy-source'];
     // s3:x-amz-metadata-directive is x-amz-metadata-directive header if
     // applicable on a put object copy. Determines whether metadata will
     // be copied from original object or replaced. Values or "COPY" or
     // "REPLACE". Default is "COPY"
-    map.set('s3:x-amz-metadata-directive', headers['metadata-directive']);
+    case 's3:x-amz-metadata-directive': return headers['metadata-directive'];
     // s3:x-amz-server-side-encryption -- Used to require that object put
     // use server side encryption. Value is the encryption algo such as
     // "AES256"
-    map.set('s3:x-amz-server-side-encryption',
-        headers['x-amz-server-side-encryption']);
+    case 's3:x-amz-server-side-encryption': return headers['x-amz-server-side-encryption'];
     // s3:x-amz-storage-class -- x-amz-storage-class header value
     // (STANDARD, etc.)
-    map.set('s3:x-amz-storage-class', headers['x-amz-storage-class']);
+    case 's3:x-amz-storage-class': return headers['x-amz-storage-class'];
     // s3:VersionId -- version id of object
-    map.set('s3:VersionId', query.versionId);
+    case 's3:VersionId': return query.versionId;
     // s3:LocationConstraint -- Used to restrict creation of bucket
     // in certain region. Only applicable for CreateBucket
-    map.set('s3:LocationConstraint', requestContext.getLocationConstraint());
+    case 's3:LocationConstraint': return requestContext.getLocationConstraint();
     // s3:delimiter is delimiter for listing request
-    map.set('s3:delimiter', query.delimiter);
+    case 's3:delimiter': return query.delimiter;
     // s3:max-keys is max-keys for listing request
-    map.set('s3:max-keys', query['max-keys']);
+    case 's3:max-keys': return query['max-keys'];
     // s3:prefix is prefix for listing request
-    map.set('s3:prefix', query.prefix);
+    case 's3:prefix': return query.prefix;
     // s3 auth v4 additional condition keys
     // (See http://docs.aws.amazon.com/AmazonS3/latest/API/
     // bucket-policy-s3-sigv4-conditions.html)
     // s3:signatureversion -- Either "AWS" for v2 or
     // "AWS4-HMAC-SHA256" for v4
-    map.set('s3:signatureversion', requestContext.getSignatureVersion());
+    case 's3:signatureversion': return requestContext.getSignatureVersion();
     // s3:authType -- Method of authentication: either "REST-HEADER",
     // "REST-QUERY-STRING" or "POST"
-    map.set('s3:authType', requestContext.getAuthType());
+    case 's3:authType': return requestContext.getAuthType();
     // s3:signatureAge is the length of time, in milliseconds,
     // that a signature is valid in an authenticated request. So,
     // can use this to limit the age to less than 7 days
-    map.set('s3:signatureAge', requestContext.getSignatureAge());
+    case 's3:signatureAge': return requestContext.getSignatureAge();
     // s3:x-amz-content-sha256 - Valid value is "UNSIGNED-PAYLOAD"
     // so can use this in a deny policy to deny any requests that do not
     // have a signed payload
-    map.set('s3:x-amz-content-sha256', headers['x-amz-content-sha256']);
+    case 's3:x-amz-content-sha256': return headers['x-amz-content-sha256'];
     // s3:ObjLocationConstraint is the location constraint set for an
     // object on a PUT request using the "x-amz-meta-scal-location-constraint"
     // header
-    map.set('s3:ObjLocationConstraint',
-        headers['x-amz-meta-scal-location-constraint']);
-    map.set('sts:ExternalId', requestContext.getRequesterExternalId());
-    map.set('keycloak:groups', requesterInfo.keycloakGroup);
-    map.set('keycloak:roles', requesterInfo.keycloakRole);
-    map.set('iam:PolicyArn', requestContext.getPolicyArn());
+    case 's3:ObjLocationConstraint': return headers['x-amz-meta-scal-location-constraint'];
+    case 'sts:ExternalId': return requestContext.getRequesterExternalId();
+    case 'keycloak:groups': return requesterInfo.keycloakGroup;
+    case 'keycloak:roles': return requesterInfo.keycloakRole;
+    case 'iam:PolicyArn': return requestContext.getPolicyArn();
     // s3:ExistingObjectTag - Used to check that existing object tag has
     // specific tag key and value. Extraction of correct tag key is done in CloudServer.
     // On first pass of policy evaluation, CloudServer information will not be included,
     // so evaluation should be skipped
-    map.set('s3:ExistingObjectTag', requestContext.getNeedTagEval() ? requestContext.getExistingObjTag() : undefined);
+    case 's3:ExistingObjectTag':
+        return requestContext.getNeedTagEval()
+            ? requestContext.getExistingObjTag() : undefined;
     // s3:RequestObjectTag - Used to limit putting object tags to specific
     // tag key and value. N/A here.
     // Requires information from CloudServer
     // On first pass of policy evaluation, CloudServer information will not be included,
     // so evaluation should be skipped
-    map.set('s3:RequestObjectTagKey', requestContext.getNeedTagEval() ? requestContext.getRequestObjTags() : undefined);
+    case 's3:RequestObjectTagKey':
+        return requestContext.getNeedTagEval()
+            ? requestContext.getRequestObjTags() : undefined;
     // s3:RequestObjectTagKeys - Used to limit putting object tags specific tag keys.
     // Requires information from CloudServer.
     // On first pass of policy evaluation, CloudServer information will not be included,
     // so evaluation should be skipped
-    map.set('s3:RequestObjectTagKeys',
-        requestContext.getNeedTagEval() && requestContext.getRequestObjTags()
-            ? getTagKeys(requestContext.getRequestObjTags()!)
-            : undefined,
-    );
-    return map.get(key);
-};
+    case 's3:RequestObjectTagKeys':
+        return requestContext.getNeedTagEval() && requestContext.getRequestObjTags()
+            ? getTagKeys(requestContext.getRequestObjTags()!)
+            : undefined;
+    // The maximum retention period is 100 years.
+    case 's3:object-lock-remaining-retention-days':
+        return requestContext.getObjectLockRetentionDays() || undefined;
+    default:
+        return undefined;
+    }
+}
 
 // Wildcards are allowed in certain string comparison and arn comparisons
 
@@ -231,7 +234,7 @@ function convertToEpochTime(time: string | string[]) {
  * reference_policies_elements.html)
  * @return true if condition passes and false if not
  */
-export const convertConditionOperator = (operator: string): boolean => {
+export function convertConditionOperator(operator: string): boolean {
     // Policy Validator checks that the condition operator
     // is only one of these strings so should not have undefined
     // or security issue with object assignment
@@ -446,4 +449,4 @@ export function convertConditionOperator(operator: string): boolean {
         },
     };
     return operatorMap[operator];
-};
+}
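
Beyond the syntax change, the switch is lazier than the old Map: only the accessor for the requested key runs, instead of every getter being evaluated on every lookup, and unknown keys now fall through to an explicit undefined. A short usage sketch (rc stands in for an already-built RequestContext):

// Only getRequesterIp() is invoked for this lookup.
const sourceIp = findConditionKey('aws:SourceIp', rc);
// Unrecognized keys hit the default branch.
const nothing = findConditionKey('aws:NoSuchKey', rc); // undefined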


@@ -1,44 +0,0 @@
const Transform = require('stream').Transform;
const crypto = require('crypto');
/**
* This class is design to compute md5 hash at the same time as sending
* data through a stream
*/
class MD5Sum extends Transform {
/**
* @constructor
*/
constructor() {
super({});
this.hash = crypto.createHash('md5');
this.completedHash = undefined;
}
/**
* This function will update the current md5 hash with the next chunk
*
* @param {Buffer|string} chunk - Chunk to compute
* @param {string} encoding - Data encoding
* @param {function} callback - Callback(err, chunk, encoding)
* @return {undefined}
*/
_transform(chunk, encoding, callback) {
this.hash.update(chunk, encoding);
callback(null, chunk, encoding);
}
/**
* This function will end the hash computation
*
* @param {function} callback(err)
* @return {undefined}
*/
_flush(callback) {
this.completedHash = this.hash.digest('hex');
this.emit('hashed');
callback(null);
}
}
module.exports = MD5Sum;


@@ -0,0 +1,48 @@
import { Transform } from 'stream';
import * as crypto from 'crypto';
/**
* This class is design to compute md5 hash at the same time as sending
* data through a stream
*/
export default class MD5Sum extends Transform {
hash: ReturnType<typeof crypto.createHash>;
completedHash?: string;
constructor() {
super({});
this.hash = crypto.createHash('md5');
this.completedHash = undefined;
}
/**
* This function will update the current md5 hash with the next chunk
*
* @param chunk - Chunk to compute
* @param encoding - Data encoding
* @param callback - Callback(err, chunk, encoding)
*/
_transform(
chunk: string,
encoding: crypto.Encoding,
callback: (
err: Error | null,
chunk: string,
encoding: crypto.Encoding,
) => void,
) {
this.hash.update(chunk, encoding);
callback(null, chunk, encoding);
}
/**
* This function will end the hash computation
*
* @param callback(err)
*/
_flush(callback: (err: Error | null) => void) {
this.completedHash = this.hash.digest('hex');
this.emit('hashed');
callback(null);
}
}
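
Usage is the same as for the removed JS version: pipe data through, then read completedHash once 'hashed' fires. A minimal sketch (file path assumed):

import * as fs from 'fs';
import MD5Sum from './MD5Sum';

const hasher = new MD5Sum();
fs.createReadStream('/tmp/part.bin')
    .pipe(hasher)
    .on('hashed', () => console.log('md5:', hasher.completedHash))
    .resume(); // keep the passthrough side flowing so the source can finish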


@@ -1,4 +1,4 @@
-const EventEmitter = require('events');
+import { EventEmitter } from 'events';
 
 /**
  * Class to collect results of streaming subparts.
@@ -8,10 +8,12 @@
  * streaming is in-progress
  * @class ResultsCollector
  */
-class ResultsCollector extends EventEmitter {
-    /**
-     * @constructor
-     */
+export default class ResultsCollector extends EventEmitter {
+    // TODO Add better type.
+    _results: any[];
+    _queue: number;
+    _streamingFinished: boolean;
+
     constructor() {
         super();
         this._results = [];
@@ -22,14 +24,13 @@
     /**
      * ResultsCollector.pushResult - register result of putting one subpart
      * and emit "done" or "error" events if appropriate
-     * @param {(Error|undefined)} err - error returned from Azure after
+     * @param err - error returned from Azure after
      * putting a subpart
-     * @param {number} subPartIndex - the index of the subpart
+     * @param subPartIndex - the index of the subpart
      * @emits ResultCollector#done
      * @emits ResultCollector#error
-     * @return {undefined}
      */
-    pushResult(err, subPartIndex) {
+    pushResult(err: Error | null | undefined, subPartIndex: number) {
         this._results.push({
             error: err,
             subPartIndex,
@@ -44,7 +45,6 @@
     /**
      * ResultsCollector.pushOp - register operation to put another subpart
-     * @return {undefined};
     */
     pushOp() {
         this._queue++;
@@ -54,7 +54,6 @@
     /**
      * ResultsCollector.enableComplete - register streaming has finished,
      * allowing ResultCollector#done event to be emitted when last result
      * has been returned
-     * @return {undefined};
     */
     enableComplete() {
         this._streamingFinished = true;
@@ -79,5 +78,3 @@
  * @type {(Error|undefined)} error - error returned by Azure last subpart
  * @type {number} subPartIndex - index of the subpart
  */
-
-module.exports = ResultsCollector;
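
The intended protocol, sketched under the assumption that 'done' fires once the queue drains after enableComplete() (the emitting code sits outside the visible hunks):

import ResultsCollector from './ResultsCollector';

const collector = new ResultsCollector();
collector.on('error', (err, subPartIndex) =>
    console.error('subpart', subPartIndex, 'failed:', err));
collector.on('done', (err, results) =>
    console.log('all subparts settled:', results.length));

collector.pushOp();            // subpart 0 upload started
collector.enableComplete();    // source stream has ended
collector.pushResult(null, 0); // subpart 0 finished; 'done' can now fire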


@@ -1,10 +1,10 @@
-const stream = require('stream');
+import * as stream from 'stream';
 
 class SubStream extends stream.PassThrough {
-    constructor(options) {
+    constructor(options?: stream.TransformOptions) {
         super(options);
-        this.on('stopStreamingToAzure', function stopStreamingToAzure() {
+        this.on('stopStreamingToAzure', () => {
             this._abortStreaming();
         });
     }
@@ -19,12 +19,19 @@
  * Interface for streaming subparts.
  * @class SubStreamInterface
 */
-class SubStreamInterface {
+export default class SubStreamInterface {
+    _sourceStream: stream.Readable;
+    _totalLengthCounter: number;
+    _lengthCounter: number;
+    _subPartIndex: number;
+    _currentStream: SubStream;
+    _streamingAborted: boolean;
+
     /**
      * @constructor
-     * @param {stream.Readable} sourceStream - stream to read for data
+     * @param sourceStream - stream to read for data
      */
-    constructor(sourceStream) {
+    constructor(sourceStream: stream.Readable) {
         this._sourceStream = sourceStream;
         this._totalLengthCounter = 0;
         this._lengthCounter = 0;
@@ -35,7 +42,6 @@
     /**
      * SubStreamInterface.pauseStreaming - pause data flow
-     * @return {undefined}
     */
     pauseStreaming() {
         this._sourceStream.pause();
@@ -43,7 +49,6 @@
     /**
      * SubStreamInterface.resumeStreaming - resume data flow
-     * @return {undefined}
      */
     resumeStreaming() {
         this._sourceStream.resume();
@@ -52,7 +57,6 @@
     /**
      * SubStreamInterface.endStreaming - signal end of data for last stream,
      * to be called when source stream has ended
-     * @return {undefined}
      */
     endStreaming() {
         this._totalLengthCounter += this._lengthCounter;
@@ -62,11 +66,10 @@
     /**
      * SubStreamInterface.stopStreaming - destroy streams,
      * to be called when streaming must be stopped externally
-     * @param {stream.Readable} [piper] - a stream that is piping data into
+     * @param [piper] - a stream that is piping data into
      * source stream
-     * @return {undefined}
      */
-    stopStreaming(piper) {
+    stopStreaming(piper?: stream.Readable) {
         this._streamingAborted = true;
         if (piper) {
             piper.unpipe();
@@ -77,7 +80,7 @@
     /**
      * SubStreamInterface.getLengthCounter - return length of bytes streamed
      * for current subpart
-     * @return {number} - this._lengthCounter
+     * @return - this._lengthCounter
      */
     getLengthCounter() {
         return this._lengthCounter;
@@ -85,7 +88,7 @@
     /**
      * SubStreamInterface.getTotalBytesStreamed - return total bytes streamed
-     * @return {number} - this._totalLengthCounter
+     * @return - this._totalLengthCounter
      */
     getTotalBytesStreamed() {
         return this._totalLengthCounter;
@@ -94,7 +97,7 @@
     /**
      * SubStreamInterface.getCurrentStream - return subpart stream currently
      * being written to from source stream
-     * @return {number} - this._currentStream
+     * @return - this._currentStream
      */
     getCurrentStream() {
         return this._currentStream;
@@ -103,7 +106,7 @@
     /**
      * SubStreamInterface.transitionToNextStream - signal end of data for
      * current stream, generate a new stream and start streaming to new stream
-     * @return {object} - return object containing new current stream and
+     * @return - return object containing new current stream and
      * subpart index of current subpart
      */
     transitionToNextStream() {
@@ -122,10 +125,9 @@
     /**
      * SubStreamInterface.write - write to the current stream
-     * @param {Buffer} chunk - a chunk of data
-     * @return {undefined}
+     * @param chunk - a chunk of data
      */
-    write(chunk) {
+    write(chunk: Buffer) {
         if (this._streamingAborted) {
             // don't write
             return;
@@ -141,5 +143,3 @@
         this._lengthCounter += chunk.length;
     }
 }
-
-module.exports = SubStreamInterface;
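
How azureMpuUtils drives it, reduced to a sketch: a hashed source is wrapped, data is written through the interface, and the writer rolls over to a fresh subpart stream at the size boundary:

import MD5Sum from '../MD5Sum';
import SubStreamInterface from './SubStreamInterface';

const source = new MD5Sum();
const iface = new SubStreamInterface(source);
const current = iface.getCurrentStream();   // subpart 0; pipe into an upload
iface.write(Buffer.alloc(1024));            // counted toward subpart 0
const { nextStream, subPartIndex } = iface.transitionToNextStream();
// nextStream is subpart 1; pipe it into the next upload before writing more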


@@ -1,230 +0,0 @@
const assert = require('assert');
const crypto = require('crypto');
const stream = require('stream');
const ResultsCollector = require('./ResultsCollector');
const SubStreamInterface = require('./SubStreamInterface');
const objectUtils = require('../objectUtils');
const MD5Sum = require('../MD5Sum');
const errors = require('../../errors').default;
const azureMpuUtils = {};
azureMpuUtils.splitter = '|';
azureMpuUtils.overviewMpuKey = 'azure_mpu';
azureMpuUtils.maxSubPartSize = 104857600;
azureMpuUtils.zeroByteETag = crypto.createHash('md5').update('').digest('hex');
// TODO: S3C-4657
azureMpuUtils.padString = (str, category) => {
const _padFn = {
left: (str, padString) =>
`${padString}${str}`.substr(-padString.length),
right: (str, padString) =>
`${str}${padString}`.substr(0, padString.length),
};
// It's a little more performant if we add pre-generated strings for each
// type of padding we want to apply, instead of using string.repeat() to
// create the padding.
const padSpec = {
partNumber: {
padString: '00000',
direction: 'left',
},
subPart: {
padString: '00',
direction: 'left',
},
part: {
padString:
'%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%',
direction: 'right',
},
};
const { direction, padString } = padSpec[category];
return _padFn[direction](str, padString);
};
// NOTE: If we want to extract the object name from these keys, we will need
// to use a similar method to _getKeyAndUploadIdFromMpuKey since the object
// name may have instances of the splitter used to delimit arguments
azureMpuUtils.getMpuSummaryKey = (objectName, uploadId) =>
`${objectName}${azureMpuUtils.splitter}${uploadId}`;
azureMpuUtils.getBlockId = (uploadId, partNumber, subPartIndex) => {
const paddedPartNumber = azureMpuUtils.padString(partNumber, 'partNumber');
const paddedSubPart = azureMpuUtils.padString(subPartIndex, 'subPart');
const splitter = azureMpuUtils.splitter;
const blockId = `${uploadId}${splitter}partNumber${paddedPartNumber}` +
`${splitter}subPart${paddedSubPart}${splitter}`;
return azureMpuUtils.padString(blockId, 'part');
};
azureMpuUtils.getSummaryPartId = (partNumber, eTag, size) => {
const paddedPartNumber = azureMpuUtils.padString(partNumber, 'partNumber');
const timestamp = Date.now();
const splitter = azureMpuUtils.splitter;
const summaryKey = `${paddedPartNumber}${splitter}${timestamp}` +
`${splitter}${eTag}${splitter}${size}${splitter}`;
return azureMpuUtils.padString(summaryKey, 'part');
};
azureMpuUtils.getSubPartInfo = dataContentLength => {
const numberFullSubParts =
Math.floor(dataContentLength / azureMpuUtils.maxSubPartSize);
const remainder = dataContentLength % azureMpuUtils.maxSubPartSize;
const numberSubParts = remainder ?
numberFullSubParts + 1 : numberFullSubParts;
const lastPartSize = remainder || azureMpuUtils.maxSubPartSize;
return {
expectedNumberSubParts: numberSubParts,
lastPartIndex: numberSubParts - 1,
lastPartSize,
};
};
azureMpuUtils.getSubPartSize = (subPartInfo, subPartIndex) => {
const { lastPartIndex, lastPartSize } = subPartInfo;
return subPartIndex === lastPartIndex ?
lastPartSize : azureMpuUtils.maxSubPartSize;
};
azureMpuUtils.getSubPartIds = (part, uploadId) =>
[...Array(part.numberSubParts).keys()].map(subPartIndex =>
azureMpuUtils.getBlockId(uploadId, part.partNumber, subPartIndex));
azureMpuUtils.putSinglePart = (errorWrapperFn, request, params, dataStoreName,
log, cb) => {
const { bucketName, partNumber, size, objectKey, contentMD5, uploadId }
= params;
const blockId = azureMpuUtils.getBlockId(uploadId, partNumber, 0);
const passThrough = new stream.PassThrough();
const options = {};
if (contentMD5) {
options.useTransactionalMD5 = true;
options.transactionalContentMD5 = contentMD5;
}
request.pipe(passThrough);
return errorWrapperFn('uploadPart', 'createBlockFromStream',
[blockId, bucketName, objectKey, passThrough, size, options,
(err, result) => {
if (err) {
log.error('Error from Azure data backend uploadPart',
{ error: err.message, dataStoreName });
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
if (err.code === 'InvalidMd5') {
return cb(errors.InvalidDigest);
}
if (err.code === 'Md5Mismatch') {
return cb(errors.BadDigest);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err.message}`),
);
}
const md5 = result.headers['content-md5'] || '';
const eTag = objectUtils.getHexMD5(md5);
return cb(null, eTag, size);
}], log, cb);
};
azureMpuUtils.putNextSubPart = (errorWrapperFn, partParams, subPartInfo,
subPartStream, subPartIndex, resultsCollector, log, cb) => {
const { uploadId, partNumber, bucketName, objectKey } = partParams;
const subPartSize = azureMpuUtils.getSubPartSize(
subPartInfo, subPartIndex);
const subPartId = azureMpuUtils.getBlockId(uploadId, partNumber,
subPartIndex);
resultsCollector.pushOp();
errorWrapperFn('uploadPart', 'createBlockFromStream',
[subPartId, bucketName, objectKey, subPartStream, subPartSize,
{}, err => resultsCollector.pushResult(err, subPartIndex)], log, cb);
};
azureMpuUtils.putSubParts = (errorWrapperFn, request, params,
dataStoreName, log, cb) => {
const subPartInfo = azureMpuUtils.getSubPartInfo(params.size);
const resultsCollector = new ResultsCollector();
const hashedStream = new MD5Sum();
const streamInterface = new SubStreamInterface(hashedStream);
log.trace('data length is greater than max subpart size;' +
'putting multiple parts');
resultsCollector.on('error', (err, subPartIndex) => {
log.error(`Error putting subpart to Azure: ${subPartIndex}`,
{ error: err.message, dataStoreName });
streamInterface.stopStreaming(request);
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err}`));
});
resultsCollector.on('done', (err, results) => {
if (err) {
log.error('Error putting last subpart to Azure',
{ error: err.message, dataStoreName });
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err}`));
}
const numberSubParts = results.length;
// check if we have streamed more parts than calculated; should not
// occur, but do a sanity assertion to detect any coding logic error
assert.strictEqual(numberSubParts, subPartInfo.expectedNumberSubParts,
`Fatal error: streamed ${numberSubParts} subparts but ` +
`expected ${subPartInfo.expectedNumberSubParts} subparts`);
const totalLength = streamInterface.getTotalBytesStreamed();
log.trace('successfully put subparts to Azure',
{ numberSubParts, totalLength });
hashedStream.on('hashed', () => cb(null, hashedStream.completedHash,
totalLength));
// in case the hashed event was already emitted before the
// event handler was registered:
if (hashedStream.completedHash) {
hashedStream.removeAllListeners('hashed');
return cb(null, hashedStream.completedHash, totalLength);
}
return undefined;
});
const currentStream = streamInterface.getCurrentStream();
// start first put to Azure before we start streaming the data
azureMpuUtils.putNextSubPart(errorWrapperFn, params, subPartInfo,
currentStream, 0, resultsCollector, log, cb);
request.pipe(hashedStream);
hashedStream.on('end', () => {
resultsCollector.enableComplete();
streamInterface.endStreaming();
});
hashedStream.on('data', data => {
const currentLength = streamInterface.getLengthCounter();
if (currentLength + data.length > azureMpuUtils.maxSubPartSize) {
const bytesToMaxSize = azureMpuUtils.maxSubPartSize - currentLength;
const firstChunk = bytesToMaxSize === 0 ? data :
data.slice(bytesToMaxSize);
if (bytesToMaxSize !== 0) {
// if we have not streamed full subpart, write enough of the
// data chunk to stream the correct length
streamInterface.write(data.slice(0, bytesToMaxSize));
}
const { nextStream, subPartIndex } =
streamInterface.transitionToNextStream();
azureMpuUtils.putNextSubPart(errorWrapperFn, params, subPartInfo,
nextStream, subPartIndex, resultsCollector, log, cb);
streamInterface.write(firstChunk);
} else {
streamInterface.write(data);
}
});
};
module.exports = azureMpuUtils;


@@ -0,0 +1,287 @@
import assert from 'assert';
import * as crypto from 'crypto';
import * as stream from 'stream';
import azure from '@azure/storage-blob';
import { RequestLogger } from 'werelogs';
import ResultsCollector from './ResultsCollector';
import SubStreamInterface from './SubStreamInterface';
import * as objectUtils from '../objectUtils';
import MD5Sum from '../MD5Sum';
import errors, { ArsenalError } from '../../errors';
export const splitter = '|';
export const overviewMpuKey = 'azure_mpu';
export const maxSubPartSize = 104857600;
export const zeroByteETag = crypto.createHash('md5').update('').digest('hex');
// TODO: S3C-4657
export const padString = (
str: number | string,
category: 'partNumber' | 'subPart' | 'part',
) => {
const _padFn = {
left: (str: number | string, padString: string) =>
`${padString}${str}`.slice(-padString.length),
right: (str: number | string, padString: string) =>
`${str}${padString}`.slice(0, padString.length),
};
// It's a little more performant if we add pre-generated strings for each
// type of padding we want to apply, instead of using string.repeat() to
// create the padding.
const padSpec = {
partNumber: {
padString: '00000',
direction: 'left',
},
subPart: {
padString: '00',
direction: 'left',
},
part: {
padString:
'%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%',
direction: 'right',
},
};
const { direction, padString } = padSpec[category];
const fun = _padFn[direction as 'left' | 'right'];
return fun(str, padString);
};
// NOTE: If we want to extract the object name from these keys, we will need
// to use a similar method to _getKeyAndUploadIdFromMpuKey since the object
// name may have instances of the splitter used to delimit arguments
export const getMpuSummaryKey = (objectName: string, uploadId: string) =>
`${objectName}${splitter}${uploadId}`;
export const getBlockId = (
uploadId: string,
partNumber: number,
subPartIndex: number,
) => {
const paddedPartNumber = padString(partNumber, 'partNumber');
const paddedSubPart = padString(subPartIndex, 'subPart');
const blockId = `${uploadId}${splitter}partNumber${paddedPartNumber}` +
`${splitter}subPart${paddedSubPart}${splitter}`;
return Buffer.from(padString(blockId, 'part')).toString('base64');
};
export const getSummaryPartId = (partNumber: number, eTag: string, size: number) => {
const paddedPartNumber = padString(partNumber, 'partNumber');
const timestamp = Date.now();
const summaryKey = `${paddedPartNumber}${splitter}${timestamp}` +
`${splitter}${eTag}${splitter}${size}${splitter}`;
return padString(summaryKey, 'part');
};
export const getSubPartInfo = (dataContentLength: number) => {
const numberFullSubParts =
Math.floor(dataContentLength / maxSubPartSize);
const remainder = dataContentLength % maxSubPartSize;
const numberSubParts = remainder ?
numberFullSubParts + 1 : numberFullSubParts;
const lastPartSize = remainder || maxSubPartSize;
return {
expectedNumberSubParts: numberSubParts,
lastPartIndex: numberSubParts - 1,
lastPartSize,
};
};
export const getSubPartSize = (
subPartInfo: { lastPartIndex: number; lastPartSize: number },
subPartIndex: number,
) => {
const { lastPartIndex, lastPartSize } = subPartInfo;
return subPartIndex === lastPartIndex ? lastPartSize : maxSubPartSize;
};
export const getSubPartIds = (
part: { numberSubParts: number; partNumber: number },
uploadId: string,
) => [...Array(part.numberSubParts).keys()].map(subPartIndex =>
getBlockId(uploadId, part.partNumber, subPartIndex));
type ErrorWrapperFn = (
s3Method: string,
azureMethod: string,
command: (client: azure.ContainerClient) => Promise<any>,
log: RequestLogger,
cb: (err: ArsenalError | null | undefined) => void,
) => void
export const putSinglePart = (
errorWrapperFn: ErrorWrapperFn,
request: stream.Readable,
params: {
bucketName: string;
partNumber: number;
size: number;
objectKey: string;
contentMD5: string;
uploadId: string;
},
dataStoreName: string,
log: RequestLogger,
cb: (err: ArsenalError | null | undefined, dataStoreETag?: string, size?: number) => void,
) => {
const { bucketName, partNumber, size, objectKey, contentMD5, uploadId }
= params;
const blockId = getBlockId(uploadId, partNumber, 0);
const passThrough = new stream.PassThrough();
const options = contentMD5
? { transactionalContentMD5: objectUtils.getMD5Buffer(contentMD5) }
: {};
request.pipe(passThrough);
return errorWrapperFn('uploadPart', 'createBlockFromStream', async client => {
try {
const result = await client.getBlockBlobClient(objectKey)
.stageBlock(blockId, () => passThrough, size, options);
const md5 = result.contentMD5 || '';
const eTag = objectUtils.getHexMD5(md5);
return eTag;
} catch (err: any) {
log.error('Error from Azure data backend uploadPart',
{ error: err.message, dataStoreName });
if (err.code === 'ContainerNotFound') {
throw errors.NoSuchBucket;
}
if (err.code === 'InvalidMd5') {
throw errors.InvalidDigest;
}
if (err.code === 'Md5Mismatch') {
throw errors.BadDigest;
}
throw errors.InternalError.customizeDescription(
`Error returned from Azure: ${err.message}`
);
}
}, log, cb);
};
const putNextSubPart = (
errorWrapperFn: ErrorWrapperFn,
partParams: {
uploadId: string;
partNumber: number;
bucketName: string;
objectKey: string;
},
subPartInfo: { lastPartIndex: number; lastPartSize: number },
subPartStream: stream.Readable,
subPartIndex: number,
resultsCollector: ResultsCollector,
log: RequestLogger,
) => {
const { uploadId, partNumber, bucketName, objectKey } = partParams;
const subPartSize = getSubPartSize(
subPartInfo, subPartIndex);
const subPartId = getBlockId(uploadId, partNumber,
subPartIndex);
resultsCollector.pushOp();
errorWrapperFn('uploadPart', 'createBlockFromStream', async client => {
try {
const result = await client.getBlockBlobClient(objectKey)
.stageBlock(subPartId, () => subPartStream, subPartSize, {});
resultsCollector.pushResult(null, subPartIndex);
} catch (err: any) {
resultsCollector.pushResult(err, subPartIndex);
}
}, log, () => {});
};
export const putSubParts = (
errorWrapperFn: ErrorWrapperFn,
request: stream.Readable,
params: {
uploadId: string;
partNumber: number;
bucketName: string;
objectKey: string;
size: number;
},
dataStoreName: string,
log: RequestLogger,
cb: (err: ArsenalError | null | undefined, dataStoreETag?: string) => void,
) => {
const subPartInfo = getSubPartInfo(params.size);
const resultsCollector = new ResultsCollector();
const hashedStream = new MD5Sum();
const streamInterface = new SubStreamInterface(hashedStream);
log.trace('data length is greater than max subpart size;' +
'putting multiple parts');
resultsCollector.on('error', (err, subPartIndex) => {
log.error(`Error putting subpart to Azure: ${subPartIndex}`,
{ error: err.message, dataStoreName });
streamInterface.stopStreaming(request);
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err}`));
});
resultsCollector.on('done', (err, results) => {
if (err) {
log.error('Error putting last subpart to Azure',
{ error: err.message, dataStoreName });
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err}`));
}
const numberSubParts = results.length;
// check if we have streamed more parts than calculated; should not
// occur, but do a sanity assertion to detect any coding logic error
assert.strictEqual(numberSubParts, subPartInfo.expectedNumberSubParts,
`Fatal error: streamed ${numberSubParts} subparts but ` +
`expected ${subPartInfo.expectedNumberSubParts} subparts`);
const totalLength = streamInterface.getTotalBytesStreamed();
log.trace('successfully put subparts to Azure',
{ numberSubParts, totalLength });
hashedStream.on('hashed', () => cb(null, hashedStream.completedHash));
// in case the hashed event was already emitted before the
// event handler was registered:
if (hashedStream.completedHash) {
hashedStream.removeAllListeners('hashed');
return cb(null, hashedStream.completedHash);
}
return undefined;
});
const currentStream = streamInterface.getCurrentStream();
// start first put to Azure before we start streaming the data
putNextSubPart(errorWrapperFn, params, subPartInfo,
currentStream, 0, resultsCollector, log);
request.pipe(hashedStream);
hashedStream.on('end', () => {
resultsCollector.enableComplete();
streamInterface.endStreaming();
});
hashedStream.on('data', data => {
const currentLength = streamInterface.getLengthCounter();
if (currentLength + data.length > maxSubPartSize) {
const bytesToMaxSize = maxSubPartSize - currentLength;
const firstChunk = bytesToMaxSize === 0 ? data :
data.slice(bytesToMaxSize);
if (bytesToMaxSize !== 0) {
// if we have not streamed full subpart, write enough of the
// data chunk to stream the correct length
streamInterface.write(data.slice(0, bytesToMaxSize));
}
const { nextStream, subPartIndex } =
streamInterface.transitionToNextStream();
putNextSubPart(errorWrapperFn, params, subPartInfo, nextStream,
subPartIndex, resultsCollector, log);
streamInterface.write(firstChunk);
} else {
streamInterface.write(data);
}
});
};
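
A worked example of the subpart layout helpers (numbers only, no Azure calls; import path assumed): a 262144000-byte part with the 104857600-byte cap splits into two full subparts plus a 52428800-byte tail.

import { getSubPartInfo, getSubPartSize, getBlockId } from './azureMpuUtils';

const info = getSubPartInfo(262144000);
// { expectedNumberSubParts: 3, lastPartIndex: 2, lastPartSize: 52428800 }
const tail = getSubPartSize(info, info.lastPartIndex); // 52428800
const blockId = getBlockId('uploadId123', 1, 0);
// base64 of 'uploadId123|partNumber00001|subPart00|' right-padded to 64 chars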


@@ -1,107 +0,0 @@
const querystring = require('querystring');
const escapeForXml = require('./escapeForXml');
const convertMethods = {};
convertMethods.completeMultipartUpload = xmlParams => {
const escapedBucketName = escapeForXml(xmlParams.bucketName);
return '<?xml version="1.0" encoding="UTF-8"?>' +
'<CompleteMultipartUploadResult ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
`<Location>http://${escapedBucketName}.` +
`${escapeForXml(xmlParams.hostname)}/` +
`${escapeForXml(xmlParams.objectKey)}</Location>` +
`<Bucket>${escapedBucketName}</Bucket>` +
`<Key>${escapeForXml(xmlParams.objectKey)}</Key>` +
`<ETag>${escapeForXml(xmlParams.eTag)}</ETag>` +
'</CompleteMultipartUploadResult>';
};
convertMethods.initiateMultipartUpload = xmlParams =>
'<?xml version="1.0" encoding="UTF-8"?>' +
'<InitiateMultipartUploadResult ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
`<Bucket>${escapeForXml(xmlParams.bucketName)}</Bucket>` +
`<Key>${escapeForXml(xmlParams.objectKey)}</Key>` +
`<UploadId>${escapeForXml(xmlParams.uploadId)}</UploadId>` +
'</InitiateMultipartUploadResult>';
convertMethods.listMultipartUploads = xmlParams => {
const xml = [];
const l = xmlParams.list;
xml.push('<?xml version="1.0" encoding="UTF-8"?>',
'<ListMultipartUploadsResult ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">',
`<Bucket>${escapeForXml(xmlParams.bucketName)}</Bucket>`,
);
// For certain XML elements, if it is `undefined`, AWS returns either an
// empty tag or does not include it. Hence the `optional` key in the params.
const params = [
{ tag: 'KeyMarker', value: xmlParams.keyMarker },
{ tag: 'UploadIdMarker', value: xmlParams.uploadIdMarker },
{ tag: 'NextKeyMarker', value: l.NextKeyMarker, optional: true },
{ tag: 'NextUploadIdMarker', value: l.NextUploadIdMarker,
optional: true },
{ tag: 'Delimiter', value: l.Delimiter, optional: true },
{ tag: 'Prefix', value: xmlParams.prefix, optional: true },
];
params.forEach(param => {
if (param.value) {
xml.push(`<${param.tag}>${escapeForXml(param.value)}` +
`</${param.tag}>`);
} else if (!param.optional) {
xml.push(`<${param.tag} />`);
}
});
xml.push(`<MaxUploads>${escapeForXml(l.MaxKeys)}</MaxUploads>`,
`<IsTruncated>${escapeForXml(l.IsTruncated)}</IsTruncated>`,
);
l.Uploads.forEach(upload => {
const val = upload.value;
let key = upload.key;
if (xmlParams.encoding === 'url') {
key = querystring.escape(key);
}
xml.push('<Upload>',
`<Key>${escapeForXml(key)}</Key>`,
`<UploadId>${escapeForXml(val.UploadId)}</UploadId>`,
'<Initiator>',
`<ID>${escapeForXml(val.Initiator.ID)}</ID>`,
`<DisplayName>${escapeForXml(val.Initiator.DisplayName)}` +
'</DisplayName>',
'</Initiator>',
'<Owner>',
`<ID>${escapeForXml(val.Owner.ID)}</ID>`,
`<DisplayName>${escapeForXml(val.Owner.DisplayName)}` +
'</DisplayName>',
'</Owner>',
`<StorageClass>${escapeForXml(val.StorageClass)}` +
'</StorageClass>',
`<Initiated>${escapeForXml(val.Initiated)}</Initiated>`,
'</Upload>',
);
});
l.CommonPrefixes.forEach(prefix => {
xml.push('<CommonPrefixes>',
`<Prefix>${escapeForXml(prefix)}</Prefix>`,
'</CommonPrefixes>',
);
});
xml.push('</ListMultipartUploadsResult>');
return xml.join('');
};
function convertToXml(method, xmlParams) {
return convertMethods[method](xmlParams);
}
module.exports = convertToXml;


@@ -0,0 +1,164 @@
import * as querystring from 'querystring';
import escapeForXml from './escapeForXml';
export type Params = {
bucketName: string;
hostname: string;
objectKey: string;
eTag: string;
uploadId: string;
list: string;
}
export type CompleteParams = { bucketName: string; hostname: string; objectKey: string; eTag: string }
export const completeMultipartUpload = (xmlParams: CompleteParams) => {
const bucketName = escapeForXml(xmlParams.bucketName);
const hostname = escapeForXml(xmlParams.hostname);
const objectKey = escapeForXml(xmlParams.objectKey);
const location = `http://${bucketName}.${hostname}/${objectKey}`;
const eTag = escapeForXml(xmlParams.eTag);
return `
<?xml version="1.0" encoding="UTF-8"?>
<CompleteMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Location>${location}</Location>
<Bucket>${bucketName}</Bucket>
<Key>${objectKey}</Key>
<ETag>${eTag}</ETag>
</CompleteMultipartUploadResult>
`.trim();
}
export type InitParams = { bucketName: string; objectKey: string; uploadId: string }
export const initiateMultipartUpload = (xmlParams: InitParams) => `
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Bucket>${escapeForXml(xmlParams.bucketName)}</Bucket>
<Key>${escapeForXml(xmlParams.objectKey)}</Key>
<UploadId>${escapeForXml(xmlParams.uploadId)}</UploadId>
</InitiateMultipartUploadResult>
`.trim();
export type ListParams = {
list: {
NextKeyMarker?: string;
NextUploadIdMarker?: string;
Delimiter?: string;
MaxKeys: string;
IsTruncated: string;
CommonPrefixes: string[];
Uploads: Array<{
key: string;
value: {
UploadId: string;
Initiator: {
ID: string;
DisplayName: string;
};
Owner: {
ID: string;
DisplayName: string;
};
StorageClass: string;
Initiated: string;
};
}>;
};
encoding: 'url';
bucketName: string;
keyMarker: string;
uploadIdMarker: string;
prefix?: string;
}
export const listMultipartUploads = (xmlParams: ListParams) => {
const xml: string[] = [];
const l = xmlParams.list;
xml.push(
'<?xml version="1.0" encoding="UTF-8"?>',
'<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">',
` <Bucket>${escapeForXml(xmlParams.bucketName)}</Bucket>`,
);
// For certain XML elements, if the value is `undefined`, AWS returns either
// an empty tag or omits the element entirely; hence the `optional` key in the params.
const params = [
{ tag: 'KeyMarker', value: xmlParams.keyMarker },
{ tag: 'UploadIdMarker', value: xmlParams.uploadIdMarker },
{ tag: 'NextKeyMarker', value: l.NextKeyMarker, optional: true },
{ tag: 'NextUploadIdMarker', value: l.NextUploadIdMarker,
optional: true },
{ tag: 'Delimiter', value: l.Delimiter, optional: true },
{ tag: 'Prefix', value: xmlParams.prefix, optional: true },
];
params.forEach(param => {
if (param.value) {
xml.push(
`<${param.tag}>${escapeForXml(param.value)}</${param.tag}>`
);
} else if (!param.optional) {
xml.push(`<${param.tag} />`);
}
});
xml.push(
`<MaxUploads>${escapeForXml(l.MaxKeys)}</MaxUploads>`,
`<IsTruncated>${escapeForXml(l.IsTruncated)}</IsTruncated>`,
);
l.Uploads.forEach(upload => {
const val = upload.value;
let key = upload.key;
if (xmlParams.encoding === 'url') {
key = querystring.escape(key);
}
xml.push(
'<Upload>',
`<Key>${escapeForXml(key)}</Key>`,
`<UploadId>${escapeForXml(val.UploadId)}</UploadId>`,
'<Initiator>',
`<ID>${escapeForXml(val.Initiator.ID)}</ID>`,
`<DisplayName>`,
escapeForXml(val.Initiator.DisplayName),
'</DisplayName>',
'</Initiator>',
'<Owner>',
`<ID>${escapeForXml(val.Owner.ID)}</ID>`,
`<DisplayName>`,
escapeForXml(val.Owner.DisplayName),
'</DisplayName>',
'</Owner>',
`<StorageClass>`,
escapeForXml(val.StorageClass),
'</StorageClass>',
`<Initiated>${escapeForXml(val.Initiated)}</Initiated>`,
'</Upload>',
);
});
l.CommonPrefixes.forEach(prefix => {
xml.push(
'<CommonPrefixes>',
`<Prefix>${escapeForXml(prefix)}</Prefix>`,
'</CommonPrefixes>',
);
});
xml.push('</ListMultipartUploadsResult>');
return xml.join('');
}
const methods = {
listMultipartUploads,
initiateMultipartUpload,
completeMultipartUpload,
}
export default function convertToXml(method: 'initiateMultipartUpload', params: InitParams): string;
export default function convertToXml(method: 'listMultipartUploads', params: ListParams): string;
export default function convertToXml(method: 'completeMultipartUpload', params: CompleteParams): string;
export default function convertToXml(method: keyof typeof methods, xmlParams: any) {
return methods[method](xmlParams);
}
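The function overloads above tie each method name to its parameter shape, so a mismatched call fails at compile time rather than at runtime. A brief sketch, with illustrative values:

import convertToXml from './convertToXml';

// OK: the 'initiateMultipartUpload' overload accepts InitParams.
const xml = convertToXml('initiateMultipartUpload', {
    bucketName: 'demo-bucket',
    objectKey: 'photos/cat.png',
    uploadId: 'upload-123',
});

// Rejected by the compiler: CompleteParams also requires hostname and eTag.
// convertToXml('completeMultipartUpload', { bucketName: 'demo-bucket' });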

View File

@@ -10,10 +10,8 @@ const XML_CHARACTER_MAP = {
    '>': '&gt;',
};
-function escapeForXml(string) {
+export default function escapeForXml(string: string) {
    return string && string.replace
        ? string.replace(/([&"<>'])/g, (str, item) => XML_CHARACTER_MAP[item])
        : string;
}
-module.exports = escapeForXml;
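A quick sketch of the escaping behavior, assuming the usual five-entry map (`&`, `"`, `<`, `>`, `'`):

import escapeForXml from './escapeForXml';

escapeForXml('a&b <c>');   // => 'a&amp;b &lt;c&gt;'
// The runtime guard (string && string.replace) still passes non-string
// values through unchanged, even though the new signature only admits strings.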

View File

@@ -1,54 +0,0 @@
const oneDay = 24 * 60 * 60 * 1000; // Milliseconds in a day.
class LifecycleDateTime {
constructor(params = {}) {
this._transitionOneDayEarlier = params.transitionOneDayEarlier;
this._expireOneDayEarlier = params.expireOneDayEarlier;
}
getCurrentDate() {
const timeTravel = this._expireOneDayEarlier ? oneDay : 0;
return Date.now() + timeTravel;
}
/**
* Helper method to get the total days passed since a given date
* @param {Date} date - date object
* @return {number} Days passed
*/
findDaysSince(date) {
const now = this.getCurrentDate();
const diff = now - date;
return Math.floor(diff / (1000 * 60 * 60 * 24));
}
/**
* Get the Unix timestamp of the given date.
* @param {string} date - The date string to convert to a Unix timestamp
* @return {number} - The Unix timestamp
*/
getTimestamp(date) {
return new Date(date).getTime();
}
/**
* Find the Unix time at which the transition should occur.
* @param {object} transition - A transition from the lifecycle transitions
* @param {string} lastModified - The object's last modified date
* @return {number|undefined} - The normalized transition timestamp
*/
getTransitionTimestamp(transition, lastModified) {
if (transition.Date !== undefined) {
return this.getTimestamp(transition.Date);
}
if (transition.Days !== undefined) {
const lastModifiedTime = this.getTimestamp(lastModified);
const timeTravel = this._transitionOneDayEarlier ? -oneDay : 0;
return lastModifiedTime + (transition.Days * oneDay) + timeTravel;
}
return undefined;
}
}
module.exports = LifecycleDateTime;

View File

@@ -0,0 +1,82 @@
import { scaleMsPerDay } from '../objectUtils';
const msInOneDay = 24 * 60 * 60 * 1000; // Milliseconds in a day.
export default class LifecycleDateTime {
_transitionOneDayEarlier?: boolean;
_expireOneDayEarlier?: boolean;
_timeProgressionFactor?: number;
_scaledMsPerDay: number;
constructor(params?: {
transitionOneDayEarlier?: boolean;
expireOneDayEarlier?: boolean;
timeProgressionFactor?: number;
}) {
this._transitionOneDayEarlier = params?.transitionOneDayEarlier;
this._expireOneDayEarlier = params?.expireOneDayEarlier;
this._timeProgressionFactor = params?.timeProgressionFactor || 1;
this._scaledMsPerDay = scaleMsPerDay(this._timeProgressionFactor);
}
getCurrentDate() {
const timeTravel = this._expireOneDayEarlier ? msInOneDay : 0;
return Date.now() + timeTravel;
}
/**
* Helper method to get the total days passed since a given date
* @param date - date object
* @return Days passed
*/
findDaysSince(date: Date) {
const now = this.getCurrentDate();
const diff = now - date.getTime();
return Math.floor(diff / this._scaledMsPerDay);
}
/**
* Get the Unix timestamp of the given date.
* @param date - The date string to convert to a Unix timestamp
* @return - The Unix timestamp
*/
getTimestamp(date: string | Date) {
return new Date(date).getTime();
}
/**
* Find the Unix time at which the transition should occur.
* @param transition - A transition from the lifecycle transitions
* @param lastModified - The object's last modified date
* @return - The normalized transition timestamp
*/
getTransitionTimestamp(
transition: { Date?: string; Days?: number },
lastModified: string,
) {
if (transition.Date !== undefined) {
return this.getTimestamp(transition.Date);
}
if (transition.Days !== undefined) {
const lastModifiedTime = this.getTimestamp(lastModified);
const timeTravel = this._transitionOneDayEarlier ? -msInOneDay : 0;
return lastModifiedTime + (transition.Days * this._scaledMsPerDay) + timeTravel;
}
}
/**
* Find the Unix time at which the non-current version transition should occur.
* @param transition - A noncurrent-version transition rule from the lifecycle configuration
* @param lastModified - The object's last modified date
* @return - The normalized transition timestamp
*/
getNCVTransitionTimestamp(
transition: { NoncurrentDays?: number },
lastModified: string,
) {
if (transition.NoncurrentDays !== undefined) {
const lastModifiedTime = this.getTimestamp(lastModified);
const timeTravel = this._transitionOneDayEarlier ? -msInOneDay : 0;
return lastModifiedTime + (transition.NoncurrentDays * this._scaledMsPerDay) + timeTravel;
}
}
}
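A worked sketch of the transition math above. It assumes `scaleMsPerDay(1)` returns the plain day length of 86,400,000 ms (i.e. a progression factor of 1 is the identity); the dates are illustrative:

import LifecycleDateTime from './LifecycleDateTime';

const dt = new LifecycleDateTime({
    transitionOneDayEarlier: true,
    expireOneDayEarlier: false,
    timeProgressionFactor: 1,
});

// Days-based rule: lastModified + Days * scaledMsPerDay, shifted one day
// earlier because transitionOneDayEarlier is set.
const ts = dt.getTransitionTimestamp({ Days: 30 }, '2024-01-01T00:00:00.000Z');
// 2024-01-01 + 30 days - 1 day => timestamp of 2024-01-30T00:00:00.000Z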

Some files were not shown because too many files have changed in this diff.