505 Commits

Jonathan Gramain
6ec3c8e10d ARSN-425 bump arsenal version 2024-07-08 10:59:25 -07:00
Jonathan Gramain
7aaf277db2 bf: ARSN-425 listing crash if key contains "undefined"
Fix a crash in DelimiterMaster listing without a delimiter, when a key
contains the string "undefined".

Note: a similar fix was done in ARSN-330 for DelimiterVersions. I
ported the existing unit test there to the development/7.10 branch to
enhance regression testing, even though this bug on DelimiterVersions
only existed on 7.70.
2024-07-08 10:56:48 -07:00
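
A minimal sketch (not from the commit) of the JavaScript pitfall that can produce this class of bug: `String.prototype.indexOf` coerces a non-string argument to a string, so an undefined delimiter matches the literal substring "undefined" inside a key.

```ts
// Illustration only: indexOf coerces undefined to the string "undefined".
const delimiter: string | undefined = undefined;
const key = 'photos/undefinedsuffix';

// Behaves like key.indexOf('undefined'): returns 7 instead of -1.
console.log(key.indexOf(delimiter as unknown as string)); // 7

// Guarding on the delimiter avoids the spurious match:
const idx = delimiter === undefined ? -1 : key.indexOf(delimiter);
console.log(idx); // -1
```
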
Francois Ferrand
11e0e1b489 Bump gha actions
- checkout@v4
- codeql@v2
- dependency-review@v4
- setup-node@v4
- artifacts@v4

Issue: ARSN-415
2024-05-10 14:26:29 +02:00
Jonathan Gramain
e9d815cc9d ARSN-402 bump arsenal version 2024-03-13 08:40:02 -07:00
Jonathan Gramain
c86d24fc8f bf: ARSN-402 sanitize use of log object in DataWrapper.delete()
Don't assume that we can safely call `end()` on the passed log object
if there is no callback (separation of concerns). Additionally, an
error object was passed where `end()` expects a string as a message,
causing implicit conversion.

Since errors are already logged, there is no need to bind the
`callback` object to `log.end` (there is no strong reason to log the
elapsed time there; the only use where we don't pass a callback in
Cloudserver is to support deletion of old metadata with a string as
the location array. IMHO it is not worth the complexity of adding it
there, as the rest of the API doesn't log elapsed time anyway, except
for `batchDelete`).
2024-03-13 08:39:35 -07:00
Jonathan Gramain
3b6d3838f5 bf: ARSN-402 use local RequestLogger in batchDelete
Create a local RequestLogger in batchDelete(): this allows tracking
the elapsed time of the batch delete sub-request, and avoids forcing
callers to create a new request logger before calling the function
(due to the call to `log.end()`), which was error-prone and hard to
maintain.
2024-03-13 08:39:35 -07:00
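
A hedged sketch of this pattern, using minimal hypothetical logger interfaces as stand-ins for the real werelogs types:

```ts
// Hypothetical minimal interfaces; the real werelogs API may differ.
interface RequestLogger {
    getSerializedUids(): string;
    end(msg: string): void;
}
interface Logger {
    newRequestLoggerFromSerializedUids(uids: string): RequestLogger;
}

function batchDelete(log: RequestLogger, rootLogger: Logger): void {
    // Local logger tied to the caller's UID chain: its end() tracks the
    // elapsed time of the batch-delete sub-request only.
    const localLog = rootLogger.newRequestLoggerFromSerializedUids(
        log.getSerializedUids());
    // ... perform the batch delete using localLog ...
    localLog.end('batch delete complete'); // safe: we own localLog
}
```
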
Jonathan Gramain
fcdfa889be ARSN-402 bump werelogs dependency
+ TypeScript fixes to be compatible with the latest werelogs
2024-03-13 08:39:35 -07:00
Nicolas Humbert
9ee40f343b ARSN-403 bump package 2024-03-06 16:07:08 +01:00
Nicolas Humbert
43ff16b28a ARSN-403 fix tests 2024-03-05 13:41:27 +01:00
Nicolas Humbert
1f8b0a4032 ARSN-403 Set nullVersionId to master when replacing a null version. 2024-03-04 11:51:33 +01:00
bert-e
9bf1bcc483 Merge branch 'improvement/ARSN-400-scuba-admin' into q/7.10 2024-02-26 13:59:54 +00:00
Nicolas Humbert
f1891851b3 ARSN-392 version bump 2024-02-21 09:54:30 +01:00
bert-e
5f4d7afefb Merge branch 'bugfix/ARSN-392/null' into q/7.10 2024-02-20 14:02:11 +00:00
Nicolas Humbert
46258bca74 ARSN-392 Fix processVersionSpecificPut
- Add the nullVersionId field to the master update. The nullVersionId is needed for listing, retrieving, and deleting the null version.

- Manage scenarios in which a version is marked with the isNull attribute set to true but without a version ID.
This happens after BackbeatClient.putMetadata() is applied to a standalone null master.
2024-02-19 11:42:17 +01:00
williamlardier
9c5bc2bfe0 ARSN-396: bump project 2024-02-19 09:22:23 +01:00
Mickael Bourgois
2c3bfb16ef ARSN-400: Add scuba admin actions 2024-02-16 11:18:05 +01:00
williamlardier
6b64f50450 ARSN-396: use request context action map for the bucket policies
The S3 bucket policy checks must support and evaluate the same
actions as the ones sent to the IAM checks.
Today we only check a subset of them, so we miss the versioned
APIs.
2024-02-14 12:02:45 +01:00
Nicolas Humbert
cbe6a5e2d6 ARSN-392 Import the V0 processVersionSpecificPut from Metadata
This logic is used by the CRR replication feature to call BackbeatClient.putMetadata() on top of a null version.
2024-02-07 16:19:41 +01:00
Mickael Bourgois
f265ed6122 ARSN-390: Bump version 2024-02-05 14:07:31 +01:00
Mickael Bourgois
7301c706fd ARSN-390: Apply suggestion from code review 2024-02-05 14:07:31 +01:00
Mickael Bourgois
bfc8dee559 ARSN-390: Add scuba arn for policy
Relates to SCUBA-76 and SCUBA-77
2024-01-26 16:33:32 +01:00
Frédéric Meinnel
29f39ab480 ARSN-386: version bump 2024-01-19 11:07:20 +01:00
Frédéric Meinnel
b7ac7f4616 ARSN-385: Fix generateV4Headers for HTTP PUT with body 2024-01-19 11:07:20 +01:00
Frédéric Meinnel
4da59769d2 ARSN-385: Version bump 2024-01-16 17:40:34 +01:00
Frédéric Meinnel
60573991ee ARSN-385: Lifecycle configuration dates aligned with XML spec and ISO-8601 2024-01-12 18:45:24 +01:00
bert-e
63bf2cb5b1 Merge branch 'bugfix/ARSN-384-redirect-error-body' into q/7.10 2024-01-10 10:23:21 +00:00
Mickael Bourgois
c4b44016bc ARSN-384: bump version 2024-01-10 10:46:26 +01:00
Mickael Bourgois
a78a84faa7 ARSN-384: update error check 2024-01-10 10:46:26 +01:00
Mickael Bourgois
c3ff6526a1 ARSN-384: ignore 302 statusMessage override
Keep "Found" instead of "Moved Temporarily",
and apply code review suggestion.
2024-01-10 10:46:26 +01:00
Mickael Bourgois
4bf29524eb ARSN-384: test redirect on error 2024-01-08 17:49:22 +01:00
Mickael Bourgois
9aa001c4d1 ARSN-384: implement a redirect with error and body 2024-01-08 17:49:22 +01:00
Frédéric Meinnel
5012e9209c ARSN-383: Version bump 2024-01-08 15:28:06 +01:00
Frédéric Meinnel
1568ad59c6 ARSN-383: Dates must now be set to midnight for lifecycle configurations. 2024-01-08 15:27:23 +01:00
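
A small illustrative check (assumed semantics, not the Arsenal code) of what "set to midnight" means for an ISO-8601 lifecycle date:

```ts
// Assumed semantics: a valid lifecycle Date parses as ISO-8601 and is
// exactly midnight UTC.
function isLifecycleDateValid(dateStr: string): boolean {
    const d = new Date(dateStr);
    return !Number.isNaN(d.getTime())
        && d.getUTCHours() === 0
        && d.getUTCMinutes() === 0
        && d.getUTCSeconds() === 0
        && d.getUTCMilliseconds() === 0;
}

console.log(isLifecycleDateValid('2024-01-08T00:00:00.000Z')); // true
console.log(isLifecycleDateValid('2024-01-08T15:30:00.000Z')); // false
```
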
Mickael Bourgois
b5487e3c94 ARSN-382: add unit tests for redirect request 2024-01-03 09:51:20 +01:00
Mickael Bourgois
f2974cbd07 ARSN-382: update redirect location condition
Co-authored-by: Jonathan Gramain <jonathan.gramain@scality.com>
2024-01-02 19:08:59 +01:00
Mickael Bourgois
a167e1d5fa ARSN-382: bump version 2024-01-02 11:17:55 +01:00
Mickael Bourgois
c7e153917a ARSN-382: fix empty location when redirect to /
If an object has a redirect to /, it is sliced out
and the function receives an empty string as redirectKey.
Therefore, if redirectLocation consists of the single character /,
the Location header would be empty.
2024-01-02 10:52:50 +01:00
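
A simplified sketch of the corner case (function and variable names are hypothetical):

```ts
// Hypothetical reduction of the routing code: the leading '/' of the
// redirect location is sliced off to form redirectKey.
function buildLocationHeader(redirectLocation: string): string {
    const redirectKey = redirectLocation.startsWith('/')
        ? redirectLocation.slice(1)
        : redirectLocation;
    // Without this fallback, a location of exactly '/' yields an empty header.
    return redirectKey === '' ? '/' : redirectKey;
}

console.log(buildLocationHeader('/')); // '/'
```
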
bert-e
45cc4aa79e Merge branch 'improvement/ARSN-363-retention-day-condition' into q/7.10 2023-12-26 10:55:58 +00:00
Jonathan Gramain
0507c04ce9 ARSN-284 bump arsenal version 2023-12-22 12:13:09 -08:00
Will Toozs
62736abba4 ARSN-363: update package version 2023-12-21 17:24:59 +01:00
Will Toozs
97118f09c4 ARSN-363: update test 2023-12-21 17:24:46 +01:00
Will Toozs
5a84a8c0ad ARSN-363: add object retention days logic to structures 2023-12-21 17:24:34 +01:00
Jonathan Gramain
a3f13e5387 ARSN-284 fix and refactor Delimiter + DelimiterMaster
Large refactor of the Delimiter and DelimiterMaster classes to
TypeScript that fixes most known issues with the previous
implementation.

The new implementation uses explicit states to manage the various
conditions, instead of relying on a bunch of internal variable values
and maintaining their state. This allows a more robust code flow and
fixes issues related to prefix skipping that were hard to fix while
keeping the overall logic of the previous implementation.

This refactor brings the following bug fixes and enhancements:

- prefixes with delete markers and non-deleted objects are
  now always included in CommonPrefixes (S3C-7248)

- no more duplication of internal range listings when doing skip-scan
  over prefixes (discovered when analyzing regressions for S3C-4682)

- the skip-scan mechanism for prefixes and versions is no
  longer disturbed by delete markers and PHD keys (S3C-2930)

- NextMarker is now always set to a valid, listed or listable key
  (that may still be hidden under a CommonPrefix); there is no more
  manipulation of the next marker to avoid corner cases with keys
  ending with a prefix (S3C-4682 and S3C-7274)

- deleting a delete marker immediately allows the new current version
  to be visible in the listing (S3C-7272)

- Expect lower CPU usage overall, as the number of checks to do in
  each state is reduced (this may help reduce the load and the impact
  of cases such as S3C-3946)

- Uses TypeScript to allow more sanity checks

This bugfix and refactor work has been re-integrated in the code by
cherry-picking the following commits:

- f62c3d22 ARSN-252 - listing bug in DelimisterMaster
- 87b060f2 ARSN-269 - listing bug in versioned bucket edge cases.
- 4f0a8468 ARSN-284 [cleanup] remove unused test dependency
- 7b648962 ARSN-284 [rf] delimiterVersions.addCommonPrefix()
- 4d7eaee0 ARSN-284 fix and refactor Delimiter + DelimiterMaster
- 1c07618b ARSN-284 [doc] add state charts
- fbb62ef1 bugfix: ARSN-293 DelimiterMaster: default to vFormat=v0
- 6e5d8d14 bugfix: ARSN-294 use CommonPrefix for NextMarker
2023-12-18 18:13:21 -08:00
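
An illustrative analogue of the explicit-state approach described above (names are hypothetical, not the actual Arsenal states):

```ts
// Tiny analogue of a state-driven listing filter: behavior depends on
// an explicit state value rather than a set of scattered boolean flags.
enum FilterState {
    NotSkipping,
    SkippingPrefix,
}

function filter(state: FilterState, key: string, prefix: string): 'accept' | 'skip' {
    switch (state) {
        case FilterState.SkippingPrefix:
            // while in this state, every key under the prefix is skipped
            return key.startsWith(prefix) ? 'skip' : 'accept';
        case FilterState.NotSkipping:
            return 'accept';
    }
}
```
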
Maha Benzekri
477a574500 ARSN-378: bump ARSN version 2023-12-14 11:55:54 +01:00
Maha Benzekri
3642ac03b2 ARSN-378: adding missing authorizations to actionMapBP 2023-12-14 11:52:39 +01:00
Nicolas Humbert
06244059a8 bump version 2023-11-30 14:48:07 +01:00
Nicolas Humbert
079f631711 ARSN-376 Probe response logic should be handled in the handler
Currently, the probe response logic is distributed between Backbeat probe handlers and Arsenal's onRequest method.

This scattered approach causes confusion for developers and results in bugs.

The solution is to centralize the probe response logic exclusively within the Backbeat probe handlers.
2023-11-30 14:39:42 +01:00
Maha Benzekri
fbf5562a11 bump arsenal version 2023-10-30 16:08:14 +01:00
Maha Benzekri
df5ff0f400 ARSN-362: fixups on implicit deny policy tests
Since the evaluateAllPolicies function uses the result of
standardEvaluateAllPolicies, the redundant tests are removed.
The test that was kept only shows that we use result.verdict
in the old flow evaluation.
2023-10-30 14:30:28 +01:00
Maha Benzekri
777783171a ARSN-362: change new function name for clarity 2023-10-30 09:36:56 +01:00
Will Toozs
39988e52e2 ARSN-362: add implicit deny logic to policy eval tests 2023-10-27 17:23:36 +02:00
Will Toozs
79c82a4c3d ARSN-362: add implicit deny logic to policy evaluation 2023-10-27 17:22:20 +02:00
Maha Benzekri
f49cea3914 ARSN-367- bump ARSN version 2023-09-25 12:05:46 +02:00
Maha Benzekri
73c6f41fa3 ARSN-367: principal change on schema and test add
The maximum length should be 2048 characters;
with 31 characters in the fixed-length prefix,
this explains the 2017 max limit put in the schema.
2023-09-15 10:27:47 +02:00
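
The arithmetic behind the limit, spelled out:

```ts
const maxPrincipalLength = 2048; // maximum total length of a principal
const fixedPrefixLength = 31;    // fixed-length prefix
console.log(maxPrincipalLength - fixedPrefixLength); // 2017, the schema max
```
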
Maha Benzekri
9ea39c6ed9 ARSN-365: Id added on policy schema and validator
Signed-off-by: Maha Benzekri <maha.benzekri@scality.com>
2023-09-12 21:01:45 +02:00
Rahul Padigela
89e5f7dffe improvement: ARSN-349 bump node-fcntl 2023-06-20 16:05:12 -07:00
Nicolas Humbert
3f24336b83 bump arsenal version 2023-06-08 11:39:11 -04:00
Nicolas Humbert
1e66518a79 ARSN-347 socket.io client is disconnected when sending a big payload
The file backend test fails when migrating the socket.io client from version 2.x to 4.x, due to a change in the default value of maxHttpBufferSize: in the newer version, the default has been reduced from 100MB to 1MB, causing the failure when attempting to initiate, put parts for, and complete an MPU (multipart upload) with 10,000 parts.
2023-06-08 11:38:59 -04:00
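
For reference, a sketch of raising the socket.io 4.x server option involved back to the old 2.x default (100 MB):

```ts
import { createServer } from 'http';
import { Server } from 'socket.io';

const httpServer = createServer();
const io = new Server(httpServer, {
    // socket.io 4.x defaults to 1e6 (1 MB); 2.x allowed 100 MB.
    maxHttpBufferSize: 1e8,
});
httpServer.listen(8000);
```
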
Jonathan Gramain
af3fd17ec2 bf: ARSN-340 bump socket.io dep to 4.6.1
4.6.1 is the latest version to date of the Node.js socket.io
module. It fixes a bunch of CVEs related to the socket.io and
xmlhttprequest modules for the open-source metadata storage.
2023-05-30 15:42:24 -07:00
bert-e
67c98fd81b Merge branch 'improvement/ARSN-335-implement-ghas' into tmp/octopus/w/7.10/improvement/ARSN-335-implement-ghas 2023-05-25 17:52:45 +00:00
williamlardier
5cd70d7cf1 ARSN-267: fix failing unit test
Node.js 16.17.0 introduced a change in the error handling of TLS
sockets: the connection is closed before the response is sent, so
handling the ECONNRESET error in the affected test will unblock it,
until this is fixed by Node.js, if appropriate.

(cherry picked from commit a237e38c51)
2023-05-25 17:50:00 +00:00
bert-e
654d628d39 Merge branch 'improvement/ARSN-335-implement-ghas' into tmp/octopus/w/7.10/improvement/ARSN-335-implement-ghas 2023-05-19 15:59:37 +00:00
gaspardmoindrot
e8a409e337 [ARSN-335] Implement GHAS 2023-05-16 21:21:49 +00:00
Naren
bd76402586 impr: ARSN-315 bump version 7.10.46 2023-03-14 16:25:06 -07:00
Naren
1d104345fd impr: ARSN-315 expose collecting default metrics as fn
Collecting default metrics should not be the default; it should be invoked when needed. Otherwise, this causes build errors when multiple components use Arsenal.
2023-03-14 16:08:44 -07:00
Naren
bd0a199ffa impr: ARSN-313 corrections in ZenkoMetrics
- retain metric config types
- set asPrometheus as async fn
2023-03-08 16:37:38 -08:00
Naren
4b1f69bcbb impr: ARSN-313 bump version to 7.10.45 2023-03-08 15:28:48 -08:00
Naren
e3a6814e3f impr ARSN-313 upgrade prom-client 2023-03-08 15:27:30 -08:00
Alexander Chan
acd13ff31b ARSN-308: update lifecycle utils to support noncurrent version
* update lifecycle utils to support noncurrent versions
* remove `console.log`
2023-03-01 04:45:19 -08:00
Alexander Chan
bb3e5d078f version bump 2023-03-01 04:44:30 -08:00
Alexander Chan
054f61d6c1 ARSN-298: add Min/Max heap data structure 2023-02-23 18:19:05 -08:00
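
An illustrative binary min-heap in TypeScript (a generic sketch, not the Arsenal implementation):

```ts
// Generic min-heap backed by an array; cmp(a, b) < 0 means a sorts first.
class MinHeap<T> {
    private items: T[] = [];
    constructor(private cmp: (a: T, b: T) => number) {}

    push(v: T): void {
        this.items.push(v);
        let i = this.items.length - 1;
        while (i > 0) { // sift up until the parent is smaller
            const parent = (i - 1) >> 1;
            if (this.cmp(this.items[i], this.items[parent]) >= 0) break;
            [this.items[i], this.items[parent]] = [this.items[parent], this.items[i]];
            i = parent;
        }
    }

    pop(): T | undefined {
        const top = this.items[0];
        const last = this.items.pop();
        if (this.items.length > 0 && last !== undefined) {
            this.items[0] = last;
            let i = 0;
            for (;;) { // sift down toward the smaller child
                const l = 2 * i + 1;
                const r = 2 * i + 2;
                let smallest = i;
                if (l < this.items.length && this.cmp(this.items[l], this.items[smallest]) < 0) smallest = l;
                if (r < this.items.length && this.cmp(this.items[r], this.items[smallest]) < 0) smallest = r;
                if (smallest === i) break;
                [this.items[i], this.items[smallest]] = [this.items[smallest], this.items[i]];
                i = smallest;
            }
        }
        return top;
    }
}

const h = new MinHeap<number>((a, b) => a - b);
[5, 1, 4].forEach(v => h.push(v));
console.log(h.pop(), h.pop(), h.pop()); // 1 4 5
```
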
Alexander Chan
c1dd2e4946 bump version 2023-02-23 15:03:26 -08:00
Alexander Chan
a714103b82 ARSN-298: support lifecycle NewerNoncurrentVersions
updates `LifecycleConfiguration` and `LifecycleRule` to support the
`NewerNoncurrentVersions` parameter of `NoncurrentVersionExpiration`.
2023-02-23 15:00:57 -08:00
Jonathan Gramain
a3a83dd89c ARSN-284 bump arsenal version 2023-01-30 16:10:02 +01:00
williamlardier
8db8109391 ARSN-267: fix failing unit test
Node.js 16.17.0 introduced a change in the error handling of TLS
sockets: the connection is closed before the response is sent, so
handling the ECONNRESET error in the affected test will unblock it,
until this is fixed by Node.js, if appropriate.

(cherry picked from commit a237e38c51)
2023-01-30 16:10:02 +01:00
Jonathan Gramain
d90af29019 Revert "ARSN-252 - listing bug in DelimisterMaster"
This reverts commit f62c3d22ed.
2023-01-30 16:07:06 +01:00
Jonathan Gramain
9d8d98fcc9 Revert "ARSN-269 - listing bug in versioned bucket edge cases."
This reverts commit 87b060f2ae.
2023-01-30 16:07:06 +01:00
Jonathan Gramain
01830d19a0 Revert "ARSN-284 [cleanup] remove unused test dependency"
This reverts commit 4f0a846814.
2023-01-30 16:07:05 +01:00
Jonathan Gramain
49cc018fa4 Revert "ARSN-284 [rf] delimiterVersions.addCommonPrefix()"
This reverts commit 7b64896234.
2023-01-30 16:07:05 +01:00
Jonathan Gramain
dd87c869ca Revert "ARSN-284 fix and refactor Delimiter + DelimiterMaster"
This reverts commit 4d7eaee0cc.
2023-01-30 16:07:04 +01:00
Jonathan Gramain
df44cffb96 Revert "ARSN-284 [doc] add state charts"
This reverts commit 1c07618b18.
2023-01-30 16:07:03 +01:00
Jonathan Gramain
164053d1e8 Revert "bugfix: ARSN-293 DelimiterMaster: default to vFormat=v0"
This reverts commit fbb62ef17c.
2023-01-30 16:07:03 +01:00
Jonathan Gramain
af741c50fb Revert "bugfix: ARSN-294 use CommonPrefix for NextMarker"
This reverts commit 6e5d8d14af.
2023-01-30 16:07:02 +01:00
Jonathan Gramain
34ccca9b07 ARSN-294 bump arsenal version 2023-01-12 15:28:28 -08:00
Jonathan Gramain
6e5d8d14af bugfix: ARSN-294 use CommonPrefix for NextMarker
Revert the behavior introduced for S3C-7274 that changed NextMarker
to an object key instead of a common prefix; the ticket was invalid,
as AWS does use a CommonPrefix.

Add a unit test for a corner case with a marker inside a prefix that
was only caught in Cloudserver functional tests.
2023-01-12 15:27:50 -08:00
Jonathan Gramain
4cda9f6a6b ARSN-293 bump arsenal version 2023-01-08 19:17:19 -08:00
Jonathan Gramain
fbb62ef17c bugfix: ARSN-293 DelimiterMaster: default to vFormat=v0
The BucketFile interface (open-source) does not pass an explicit
vFormat to the constructor of the listing algorithm. DelimiterMaster
does not interpret it correctly and uses vFormat=v1 logic in this
case, resulting in wrong listing results.

Fix it by checking against `this.vFormat`, which was set with a
default value by the Delimiter class, instead of using the
constructor parameter `vFormat` directly.
2023-01-08 19:14:39 -08:00
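
A condensed sketch of the fix pattern described above (simplified class shapes, not the real constructors):

```ts
// Simplified shapes; the real classes take more parameters.
class Delimiter {
    protected vFormat: string;
    constructor(vFormat?: string) {
        this.vFormat = vFormat ?? 'v0'; // default applied by the parent
    }
}

class DelimiterMaster extends Delimiter {
    constructor(vFormat?: string) {
        super(vFormat);
        // Checking the raw `vFormat` parameter here would miss the default
        // and fall through to v1 logic; `this.vFormat` is the fixed check.
        if (this.vFormat === 'v0') {
            // set up v0-specific filtering
        }
    }
}
```
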
Jonathan Gramain
8077186c3a ARSN-284 bump version 2023-01-06 15:59:00 -08:00
Jonathan Gramain
1c07618b18 ARSN-284 [doc] add state charts
Add new state charts in GraphViz format for Delimiter and DelimiterMaster
2023-01-06 15:57:51 -08:00
Jonathan Gramain
4d7eaee0cc ARSN-284 fix and refactor Delimiter + DelimiterMaster
Large refactor of the Delimiter and DelimiterMaster classes to
TypeScript that fixes most known issues with the previous
implementation.

The new implementation uses explicit states to manage the various
conditions, instead of relying on a bunch of internal variable values
and maintaining their state. This allows a more robust code flow and
fixes issues related to prefix skipping that were hard to fix while
keeping the overall logic of the previous implementation.

This refactor brings the following bug fixes and enhancements:

- prefixes with delete markers and non-deleted objects are
  now always included in CommonPrefixes (S3C-7248)

- no more duplication of internal range listings when doing skip-scan
  over prefixes (discovered when analyzing regressions for S3C-4682)

- the skip-scan mechanism for prefixes and versions is no
  longer disturbed by delete markers and PHD keys (S3C-2930)

- NextMarker is now always set to a valid, listed or listable key
  (that may still be hidden under a CommonPrefix); there is no more
  manipulation of the next marker to avoid corner cases with keys
  ending with a prefix (S3C-4682 and S3C-7274)

- deleting a delete marker immediately allows the new current version
  to be visible in the listing (S3C-7272)

- Expect lower CPU usage overall, as the number of checks to do in
  each state is reduced (this may help reduce the load and the impact
  of cases such as S3C-3946)

- Uses TypeScript to allow more sanity checks
2023-01-06 15:57:19 -08:00
Jonathan Gramain
7b64896234 ARSN-284 [rf] delimiterVersions.addCommonPrefix()
Copy addCommonPrefix from Delimiter to DelimiterVersions to prepare for the overhaul of the Delimiter class, and make it use this.NextMarker directly.
2022-12-09 14:22:40 -08:00
Jonathan Gramain
4f0a846814 ARSN-284 [cleanup] remove unused test dependency 2022-12-09 14:15:13 -08:00
bert-e
7c1bd453ee Merge branch 'feature/ARSN-235-update-object-before-deleting-it' into q/7.10 2022-11-14 09:20:17 +00:00
Kerkesni
9a975723c1 feature: ARSN-235 document oplog 2022-11-13 22:04:29 +01:00
Kerkesni
ef024ddef3 feature: ARSN-235 fix unit tests 2022-11-13 22:04:29 +01:00
Kerkesni
b61138a348 feature: ARSN-235 ignore objects flagged for deletion when listing objects 2022-11-13 22:04:28 +01:00
Kerkesni
d852eef08e feature: ARSN-235 ignore objects flagged for deletion when getting object 2022-11-13 22:04:28 +01:00
Kerkesni
fd63b857f3 feature: ARSN-235 update object before deletion
Object deletion no longer deletes the object directly: it first
updates its metadata by setting the deletion flag and originOp, then
proceeds to delete the object.

This is done to keep a trace of the latest object metadata before
deletion in the oplog, as oplog delete events don't hold that
information. This information is needed for both Cold Storage and
Bucket Notification.

We also add all the object metadata to the placeholder (PHD) master,
which wasn't previously the case; again, this is done to keep the
metadata in the oplog, as a PHD might get deleted directly in the
repair phase.
2022-11-13 22:04:28 +01:00
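
A hedged sketch of this two-phase deletion, with hypothetical helper signatures and an illustrative originOp value:

```ts
// Hypothetical helpers: putMD persists metadata, delMD deletes the key.
async function deleteObject(
    md: Record<string, unknown>,
    putMD: (md: object) => Promise<void>,
    delMD: () => Promise<void>,
): Promise<void> {
    // 1. flag the object so its full metadata appears once more in the oplog
    await putMD({ ...md, deleted: true, originOp: 's3:ObjectRemoved:Delete' });
    // 2. actually delete; the oplog delete event carries no metadata itself
    await delMD();
}
```
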
Jonathan Gramain
0f9da6a44e ARSN-274 bump version to 7.10.38 2022-11-01 18:20:58 -07:00
Jonathan Gramain
53a42f7411 bugfix: ARSN-274 move objectHead action in shared map
Move the `objectHead` action into the shared action map so that
bucket policies can use it and grant HEAD request access when the
's3:GetObject' permission is present.

Note: relevant tests will be added in Cloudserver, see CLDSRV-291
2022-11-01 18:18:51 -07:00
Jonathan Gramain
9c2bed8034 cleanup: ARSN-274 remove duplicate notification actions 2022-11-01 15:24:37 -07:00
williamlardier
9d614a4ab3 ARSN-270: bump project version 2022-09-27 09:15:28 +02:00
williamlardier
7763685cb0 ARSN-270: change bad permission names 2022-09-27 09:14:53 +02:00
Artem Bakalov
4c6712741b v7.10.36 2022-09-26 19:43:43 -07:00
Artem Bakalov
87b060f2ae ARSN-269 - listing bug in versioned bucket edge cases.
Simplifies the testing that was used in ARSN-262. Adds a function
allowDelimiterRangeSkip to determine when a nextContinueMarker range can
be skipped when .skipping is called. This function uses a new state
variable, prefixKeySeen, and the nextContinueMarker to determine whether
a range of the form prefix/ can be skipped. An additional check is added
when processing delete markers of the form prefix/foo/(bar), so that the
prefix/foo/ range can still be skipped as an optimization.
2022-09-22 20:03:47 -07:00
bert-e
9dc357ab8d Merge branch 'bugfix/ARSN-252-listing-bug-versioned-bucket' into q/7.10 2022-09-16 10:30:19 +00:00
Artem Bakalov
f62c3d22ed ARSN-252 - listing bug in DelimisterMaster
DelimiterMaster.filter is used to determine when a key range can be skipped in Metadata:RepdServer to optimize listing performance.
When a bucket was created with vFormat=v0 and a listing was subsequently done with a prefix, DelimiterMaster.filter incorrectly
determined that a range could be skipped if a key was listed such that key == prefix. This case is now handled correctly in filterV0.
2022-09-15 19:05:29 -07:00
williamlardier
a237e38c51 ARSN-267: fix failing unit test
Node.js 16.17.0 introduced a change in the error handling of TLS
sockets: the connection is closed before the response is sent, so
handling the ECONNRESET error in the affected test will unblock it,
until this is fixed by Node.js, if appropriate.
2022-09-07 13:22:30 +02:00
williamlardier
4388cb7790 ARSN-267: bump project version 2022-09-06 10:43:42 +02:00
williamlardier
095a2012cb ARSN-267: support UpdateRole action 2022-09-06 10:43:30 +02:00
Killian Gardahaut
264e0c1aad ARSN-266: change 'create bucket owned by you' error message 2022-08-24 13:17:29 +00:00
Jonathan Gramain
0130355e1a ARSN-265 release 7.10.33 2022-08-17 16:26:52 -07:00
bert-e
af50ef47d7 Merge branch 'bugfix/ARSN-255-revampEvaluatePolicyForTagConditions' into q/7.10 2022-08-17 22:01:22 +00:00
Jonathan Gramain
4f2b1ca960 bugfix: ARSN-262 fixes/tests in RequestContext
- remove "postXml" field, as it was a left-over from prototyping

- handle fields related to tag conditions: requestObjTags,
  existingObjTag, needTagEval, those were missing from constructor
  params

- fix a typo in serialization: requersterInfo -> requesterInfo

- new unit tests for RequestContext
  constructor/serialize/deserialize/getters
2022-08-11 18:19:38 -07:00
Killian Gardahaut
f45f65596b ARSN-261: bump 7.10.32 2022-08-10 08:36:22 +00:00
bert-e
10402ae78d Merge branch 'improvement/ARSN-257-bump-7-10-31' into q/7.10 2022-08-10 08:17:10 +00:00
Jonathan Gramain
5cd1df8601 bugfix: ARSN-255 revamp evaluatePolicy logic for tag conditions
Rethink the logic of tag condition evaluation so that the
"evaluateAllPolicies" function appropriately returns the verdict:
Allow, Deny, or NeedTagConditionEval, the latter being returned when
tag values (request and/or object tags) are needed to settle the
verdict to Allow or Deny; in that case, Cloudserver knows it has to
resend the request to Vault along with the tag info.
2022-08-09 18:43:58 -07:00
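
A schematic of the three-way verdict (a hypothetical simplification of the combination logic, not the actual code):

```ts
type Verdict = 'Allow' | 'Deny' | 'NeedTagConditionEval';

function combineVerdicts(results: Verdict[]): Verdict {
    if (results.includes('Deny')) return 'Deny'; // explicit deny wins
    if (results.includes('NeedTagConditionEval')) {
        // tag values are required; Cloudserver resends to Vault with tag info
        return 'NeedTagConditionEval';
    }
    return results.includes('Allow') ? 'Allow' : 'Deny'; // implicit deny
}
```
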
Jonathan Gramain
ee38856f29 ARSN-255 [cleanup] better exports in evaluator.ts
Turn 'const' function objects into actual functions.
2022-08-09 18:29:16 -07:00
Jonathan Gramain
dc229bb8aa improvement: ARSN-260 improve efficiency of findConditionKey
Instead of pre-creating a Map with all supported condition keys before
returning the wanted one, use a switch/case construct to directly
return the attribute from the request context.
2022-08-09 17:54:58 -07:00
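
A sketch of the switch/case shape (the condition keys are standard IAM keys; the context interface is a hypothetical stand-in):

```ts
// Hypothetical request-context shape, not the real RequestContext class.
interface RequestContextLike {
    getRequesterIp(): string;
    getSslEnabled(): boolean;
}

function findConditionKey(key: string, ctx: RequestContextLike): unknown {
    switch (key) {
        case 'aws:SourceIp': return ctx.getRequesterIp();
        case 'aws:SecureTransport': return ctx.getSslEnabled();
        // ... one case per supported key, instead of building a full Map
        default: return undefined;
    }
}
```
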
Killian Gardahaut
a6a48e812f ARSN-257: bump 7.10.31 2022-08-09 15:32:33 +00:00
bert-e
5a8372437b Merge branch 'feature/ARSN-256-supportTaggingAndAclEvents' into q/7.10 2022-08-08 19:41:50 +00:00
Killian Gardahaut
c4ead93bd9 ARSN-253: Speed up the AWS URI encode function 2022-08-05 10:05:41 +00:00
Jonathan Gramain
71de409ee9 feature: ARSN-256 support tagging and ACL events
Add the tagging and ACL-related events that can be set in bucket
notification to the list of supported event types for bucket
notification purposes.

Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html#supported-notification-event-types
2022-08-04 16:57:23 -07:00
KillianG
46c24c5cc3 fixup! bugfix/ARSN-253: adding test and better handling of all the possible cases 2022-08-03 10:01:28 +02:00
Killian Gardahaut
d48d4d0c18 bugfix/ARSN-253: adding test and better handling of all the possible cases 2022-08-02 08:43:54 +00:00
Killian Gardahaut
5a32c8eca0 bugfix/ARSN-253:
fix the problem with Unicode special chars by URI-encoding them.
The problem was that our URI encode function was not working properly for special chars.
2022-08-01 12:55:35 +00:00
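
A simplified stand-in for an AWS-style URI-encoding function that handles Unicode by percent-encoding UTF-8 bytes (not Arsenal's actual implementation):

```ts
function awsUriEncode(input: string, encodeSlash = true): string {
    let out = '';
    for (const ch of input) { // for...of iterates code points, not UTF-16 units
        if (/[A-Za-z0-9._~-]/.test(ch)) {
            out += ch; // unreserved characters pass through
        } else if (ch === '/') {
            out += encodeSlash ? '%2F' : ch;
        } else {
            // encodeURIComponent emits %XX per UTF-8 byte for non-ASCII;
            // the replace covers the few ASCII chars it leaves bare
            out += encodeURIComponent(ch).replace(/[!'()*]/g,
                c => `%${c.charCodeAt(0).toString(16).toUpperCase()}`);
        }
    }
    return out;
}

console.log(awsUriEncode('a b/π')); // a%20b%2F%CF%80
```
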
Kerkesni
6c132bca90 bugfix: ARSN-251 fix azure mpuUtils import 2022-07-22 15:07:20 +02:00
Taylor McKinnon
3882ecf1a0 bf(ARSN-250): Fix getByteRangeFromSpec when range is 0-0 2022-07-21 11:42:16 -07:00
Taylor McKinnon
acf38cc010 impr(ARSN-248): Release 7.10.28 2022-07-20 14:11:56 -07:00
Jordi Bertran de Balanda
785b824b69 ARSN-245 - release 7.10.27 2022-07-11 18:17:45 +02:00
Jordi Bertran de Balanda
63212e2db3 ARSN-244 - export isMasterKey in versioning 2022-07-11 16:59:29 +02:00
Jordi Bertran de Balanda
3179d1c620 ARSN-241 - release arsenal 7.10.26 2022-07-08 15:07:38 +02:00
Will Toozs
aed1d8419b ARSN-238: add documentation on listing process 2022-07-08 09:49:32 +02:00
Will Toozs
c3cb0aa514 ARSN-238: ignore phd keys with no versions 2022-07-08 09:49:32 +02:00
Francois Ferrand
a206b5f95e Remove check with empty bucket name
This test is not relevant, since a bucket cannot have an empty name;
and there is now a check in AWS SDK which rejects the request directly.

Issue: ARSN-234
2022-07-01 18:18:05 +02:00
Francois Ferrand
9b8f9f8afd Bump aws-sdk to 2.1005+
Use same spec as other packages (utapi, vault...), and allow automatic
version bump (dependabot).

Issue: ARSN-234
2022-06-30 15:13:09 +02:00
Francois Ferrand
066be20a9d Bump azure-storage to 2.10.7
Issue: ARSN-233
2022-06-29 11:45:14 +02:00
Xin LI
6e3386f693 improvement: ARSN-225- correct UntagUser action name 2022-06-20 12:17:49 +02:00
Xin LI
2c630848ee improvement: ARSN-225-bump version 2022-06-17 12:19:20 +02:00
Xin LI
5634e1bb1f improvement: ARSN-225-add User Tag actionMaps 2022-06-16 10:57:56 +02:00
williamlardier
b744385584 ARSN-224: fix default value for the filter of bucket notif config 2022-06-10 14:00:34 +02:00
williamlardier
d407cd702b ARSN-224: fix missing default for models imports 2022-06-10 12:19:15 +02:00
williamlardier
20a071fba9 ARSN-223: fix file imports with default 2022-06-10 11:19:52 +02:00
bert-e
f897dee3c5 Merge branch 'feature/ARSN-209-type-check-models' into q/7.10 2022-06-10 08:09:09 +00:00
Guillaume Hivert
536f36df4e ARSN-209 Fix JSDoc as asked in PR 2022-06-09 10:04:02 +02:00
Guillaume Hivert
571128efb1 Fix TODOs 2022-05-25 11:57:13 +02:00
Guillaume Hivert
f1478cbc66 Fix TODOs 2022-05-25 11:56:45 +02:00
Guillaume Hivert
75c5c855d9 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-25 11:27:11 +02:00
Guillaume Hivert
43d466e2fe ARSN-209 Fix import due to rebase of development/7.10 2022-05-20 18:05:30 +02:00
Guillaume Hivert
efa8c8e611 ARSN-209 Fix linter error in tests 2022-05-20 18:02:32 +02:00
Guillaume Hivert
820ad4f8af ARSN-209 Fix imports/exports of models 2022-05-20 16:23:24 +02:00
Guillaume Hivert
34eeecf6de ARSN-209 Type check BucketInfo 2022-05-20 16:23:24 +02:00
Guillaume Hivert
050f5ed002 ARSN-209 Type check NotificationConfiguration 2022-05-20 16:23:20 +02:00
Guillaume Hivert
2fba338639 ARSN-209 Type check LifecycleConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert
950ac8e19b ARSN-209 Type check ObjectMD 2022-05-20 16:20:55 +02:00
Guillaume Hivert
61929bb91a ARSN-209 Type check ReplicationConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert
9175148bd1 ARSN-209 Type check WebsiteConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert
5f08ea9310 ARSN-209 Type check ObjectMDLocation 2022-05-20 16:20:55 +02:00
Guillaume Hivert
707bf795a9 ARSN-209 Type check ObjectLockConfiguration 2022-05-20 16:20:55 +02:00
Guillaume Hivert
fcf64798dc ARSN-209 Type check LifecycleRules 2022-05-20 16:20:55 +02:00
Guillaume Hivert
9b607be633 ARSN-209 Type check BucketPolicy 2022-05-20 16:20:55 +02:00
Guillaume Hivert
01a8992cec ARSN-209 Type check BackendInfo 2022-05-20 16:20:55 +02:00
Guillaume Hivert
301541223d ARSN-209 Type check ARN 2022-05-20 16:20:55 +02:00
Guillaume Hivert
4f58a4b2f3 ARSN-210 Restore correct constants in 8.2 to 7.10 backport from ARSN-128 2022-05-20 16:20:55 +02:00
Guillaume Hivert
6f3babd223 ARSN-209 Rename all models to .ts 2022-05-20 16:20:55 +02:00
Artem Bakalov
3f26b432b7 ARSN-212 remove assert in decoder in favor of returning an error. 2022-05-19 16:27:05 -07:00
bert-e
b684bdbaa9 Merge branch 'feature/ARSN-201-type-check-versioning' into q/7.10 2022-05-19 08:51:50 +00:00
Guillaume Hivert
23113616d9 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 11:34:01 +02:00
Guillaume Hivert
ba94dc7e86 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 11:23:08 +02:00
Guillaume Hivert
5e8f4f2a30 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 11:09:53 +02:00
Guillaume Hivert
f54feec57f Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-05-18 10:59:05 +02:00
bert-e
bbe5f293f4 Merge branch 'feature/ARSN-205-type-check-error-utils' into q/7.10 2022-05-17 15:05:31 +00:00
bert-e
8ad1cceeb8 Merge branch 'feature/ARSN-204-type-check-shuffle' into q/7.10 2022-05-17 08:19:19 +00:00
bert-e
bd970c65ea Merge branch 'bugfix/ARSN-191-getting-wrong-notification-type-when-master-version-deleted' into q/7.10 2022-05-13 14:29:55 +00:00
Kerkesni
43a8772529 bugfix: ARSN-191 fix wrong notification type when master version is deleted 2022-05-13 16:06:05 +02:00
Guillaume Hivert
fc05956983 ARSN-208 Type check DB 2022-05-13 15:16:14 +02:00
Guillaume Hivert
8ec4a11a4b ARSN-207 Fix tests and export 2022-05-13 13:57:21 +02:00
Guillaume Hivert
c9ff3cd60e ARSN-207 Type check stringHash 2022-05-13 13:55:33 +02:00
Guillaume Hivert
a15d4cd130 ARSN-206 Add proper index export 2022-05-12 17:44:52 +02:00
Guillaume Hivert
45ba80ec23 ARSN-206 Type check jsutil 2022-05-12 17:44:07 +02:00
Guillaume Hivert
32cff324d8 ARSN-205 Type check errorUtils 2022-05-12 17:24:59 +02:00
Guillaume Hivert
cda5d7cfed ARSN-204 Refacto shuffle 2022-05-12 17:19:37 +02:00
bert-e
e46b90cbad Merge branch 'feature/ARSN-186-type-check-clustering' into q/7.10 2022-05-12 14:05:30 +00:00
bert-e
435f9f7f3c Merge branch 'feature/ARSN-183-type-check-stream' into q/7.10 2022-05-12 13:52:31 +00:00
Guillaume Hivert
9f1ea09ee6 ARSN-183 Switch index.ts 2022-05-12 15:42:15 +02:00
Guillaume Hivert
37c325f033 ARSN-97 Stop ignoring build errors 2022-05-12 15:20:34 +02:00
Guillaume Hivert
76bffb2a23 ARSN-201 Fix tests 2022-05-12 15:16:23 +02:00
Guillaume Hivert
bd498d414b ARSN-201 Export in index 2022-05-12 15:16:19 +02:00
Guillaume Hivert
f98c65ffb4 ARSN-201 Type check VersioningRequestProcessor 2022-05-12 15:16:00 +02:00
Guillaume Hivert
eae29c53dd ARSN-201 Type check constants 2022-05-12 15:15:52 +02:00
Guillaume Hivert
8d17b69eb8 ARSN-201 Type check WriteGatheringManager 2022-05-12 15:15:42 +02:00
Guillaume Hivert
938d64f48e ARSN-201 Type check WriteCache 2022-05-12 15:15:28 +02:00
Guillaume Hivert
485ca38867 ARSN-201 Type check VersionID 2022-05-12 15:14:48 +02:00
Guillaume Hivert
355c540510 ARSN-201 Type check Version 2022-05-12 15:14:42 +02:00
Jordi Bertran de Balanda
d97a218170 ARSN-203 - release 7.10.24 2022-05-12 15:09:45 +02:00
Jordi Bertran de Balanda
82c3330321 ARSN-199 - add https-proxy-agent dependency 2022-05-12 11:28:18 +02:00
Guillaume Hivert
db70743439 ARSN-201 Rename all files to TS 2022-05-11 15:56:50 +02:00
williamlardier
4594578919 ARSN-195: add unit test for getMetaHeaders 2022-05-09 14:57:52 +02:00
williamlardier
bc0cb0a8fe ARSN-195: fix arsenal bugs and missing default in require 2022-05-09 14:57:51 +02:00
williamlardier
9e0cee849c ARSN-195: fix index for s3middleware 2022-05-09 14:57:48 +02:00
Guillaume Hivert
d6e4bca3ed ARSN-184 Remove useless signatures 2022-05-06 15:21:17 +02:00
bert-e
f49006a64e Merge branch 'feature/ARSN-171-type-s3-middlewares' into q/7.10 2022-05-06 12:50:22 +00:00
Guillaume Hivert
75811ba553 ARSN-184 Exports 2022-05-06 14:45:44 +02:00
Guillaume Hivert
26de19b22b ARSN-184 Type check routeWebsite 2022-05-06 14:26:40 +02:00
Guillaume Hivert
72bdd130f0 ARSN-184 Type check routePUT 2022-05-06 14:26:40 +02:00
Guillaume Hivert
4131732b74 ARSN-184 Type check routePOST 2022-05-06 14:26:40 +02:00
Guillaume Hivert
7cecbe27be ARSN-184 Type check routeOPTIONS 2022-05-06 14:26:40 +02:00
Guillaume Hivert
3fab05071d ARSN-184 Type check routeHEAD 2022-05-06 14:26:40 +02:00
Guillaume Hivert
a98f2cede5 ARSN-184 Type check routeGET 2022-05-06 14:26:40 +02:00
Guillaume Hivert
283a0863c2 ARSN-184 Type check routeDELETE 2022-05-06 14:26:40 +02:00
Guillaume Hivert
18b089fc2d ARSN-184 Type check routes 2022-05-06 14:26:40 +02:00
Guillaume Hivert
60139abb10 ARSN-184 Type check routesUtils 2022-05-06 14:26:40 +02:00
Guillaume Hivert
2cc1a9886f ARSN-184 WIP Routes 2022-05-06 14:26:40 +02:00
Guillaume Hivert
1c7122b7e4 ARSN-184 Type check routesUtils 2022-05-06 14:26:40 +02:00
Guillaume Hivert
4eba3ca6a0 ARSN-184 Type check routes 2022-05-06 14:26:40 +02:00
Guillaume Hivert
670d57a9db ARSN-184 Fix StatsClient 2022-05-06 14:26:40 +02:00
Guillaume Hivert
8784113544 ARSN-184 Move all .js to .ts files 2022-05-06 14:26:40 +02:00
Jordi Bertran de Balanda
c9f279ac9b ARSN-190 - release 7.10.23 2022-05-05 12:04:40 +02:00
Jordi Bertran de Balanda
2622781a1d ARSN-188 - mop up stray address equality checks 2022-05-04 11:38:36 +02:00
Guillaume Hivert
c6249cd2d5 ARSN-186 Type check Clustering 2022-05-03 17:35:46 +02:00
Guillaume Hivert
97019d3b44 ARSN-186 Move Clustering.js to Clustering.ts 2022-05-03 17:16:26 +02:00
Guillaume Hivert
75b4e6328e Type check stream 2022-05-03 17:11:34 +02:00
Guillaume Hivert
eb9f936e78 Move readJSONStreamObject from .js to .ts 2022-05-03 16:59:10 +02:00
Jordi Bertran de Balanda
d1930c08e8 ARSN-181 - release 7.10.22 2022-05-03 10:53:40 +02:00
bert-e
3dd0fbfc80 Merge branch 'feature/ARSN-175-fix-errors-backwards' into q/7.10 2022-05-02 17:33:37 +00:00
Guillaume Hivert
2202ebac8a ARSN-175 Restores old behaviors of errors 2022-05-02 19:19:17 +02:00
Guillaume Hivert
5c16601657 ARSN-171 Fix tests 2022-04-29 17:05:07 +02:00
Guillaume Hivert
3ff3330f1a ARSN-171 Type check s3middleware/validateConditionalHeaders 2022-04-29 17:05:07 +02:00
Guillaume Hivert
5b02d20e4d ARSN-171 Type check s3middleware/userMetadata 2022-04-29 17:05:07 +02:00
Guillaume Hivert
867da9a3d0 ARSN-171 Type check s3middleware/tagging 2022-04-29 17:05:07 +02:00
Guillaume Hivert
c9f6d35fa4 ARSN-171 Type check s3middleware/processMpuParts 2022-04-29 17:05:07 +02:00
Guillaume Hivert
c79a5c2ee3 ARSN-171 Type check s3middleware/objectRetention 2022-04-29 17:05:07 +02:00
Guillaume Hivert
a400beb8b9 ARSN-171 Type check s3middleware/objectLegalHold 2022-04-29 17:05:07 +02:00
Guillaume Hivert
8ce0b07e63 ARSN-171 Backport constants to 7.10 2022-04-29 17:05:07 +02:00
Guillaume Hivert
a0876d3df5 ARSN-171 Type prepareStream and refactor V4Transform to export type 2022-04-29 17:05:07 +02:00
Guillaume Hivert
e829fa3d3f ARSN-171 Type objectUtils 2022-04-29 17:05:07 +02:00
Guillaume Hivert
da25890556 ARSN-171 Type objectLegalHold 2022-04-29 17:05:07 +02:00
Guillaume Hivert
8df0f5863a ARSN-171 Type nullStream 2022-04-29 17:05:07 +02:00
Guillaume Hivert
2d66248303 ARSN-171 Add Types for xml2js 2022-04-29 17:05:07 +02:00
Guillaume Hivert
8221852eef ARSN-171 Type LifecycleUtils and LifecycleHelpers 2022-04-29 17:05:07 +02:00
Guillaume Hivert
d50e1bfd6d ARSN-171 Type LifecycleDatetime 2022-04-29 17:05:07 +02:00
Guillaume Hivert
5f453789d4 ARSN-171 Type convertToXml 2022-04-29 17:05:07 +02:00
Guillaume Hivert
7658481128 ARSN-171 Type mpuUtils 2022-04-29 17:05:07 +02:00
Guillaume Hivert
593bb31ac3 ARSN-171 Type SubStreamInterface 2022-04-29 14:51:04 +02:00
Guillaume Hivert
f5e89c9660 ARSN-171 Type ResultsCollector 2022-04-29 14:51:04 +02:00
Guillaume Hivert
62db2267fc ARSN-171 Type MD5Sum 2022-04-29 14:51:04 +02:00
Guillaume Hivert
f6544f7a2e ARSN-171 Move all files from JS to TS 2022-04-29 14:51:04 +02:00
Kerkesni
5ec6acc061 bugfix: ARSN-172 fix invalid timestamp in the oplog entries 2022-04-29 14:11:05 +02:00
bert-e
6c7a1316ae Merge branch 'feature/ARSN-161-type-network' into q/7.10 2022-04-29 11:59:29 +00:00
Guillaume Hivert
d6635097c7 ARSN-161 Remove useless type, fix some typo and add explicit parens 2022-04-28 16:42:04 +02:00
bert-e
187ba67cc8 Merge branch 'feature/ARSN-159-type-policy-evaluator' into q/7.10 2022-04-28 08:34:31 +00:00
bert-e
c808873996 Merge branch 'feature/ARSN-156/release-7.10.21' into q/7.10 2022-04-27 16:53:55 +00:00
Guillaume Hivert
a3378c3df5 ARSN-161 Fix ersatz wrong import 2022-04-27 18:01:13 +02:00
Guillaume Hivert
e063eeeced ARSN-161 Fix tests and index.ts 2022-04-27 17:37:01 +02:00
Guillaume Hivert
a5051cffba ARSN-161 Type network/kmip 2022-04-27 17:36:45 +02:00
Guillaume Hivert
24deac9f92 ARSN-161 Move .js to .ts files in network/kmip 2022-04-27 17:36:27 +02:00
Guillaume Hivert
3621c7bc77 ARSN-161 Type network/rest 2022-04-27 17:36:09 +02:00
Guillaume Hivert
57c2d4fcd8 ARSN-161 Migrate files from .js to .ts and add type-checking 2022-04-27 17:35:44 +02:00
bert-e
835ffe79c6 Merge branch 'bugfix/ARSN-168-fix-flatten-errors' into q/7.10 2022-04-27 09:21:16 +00:00
Ronnie Smith
1ac27e8125 feature: release 7.10.21 2022-04-26 20:11:31 -07:00
Ronnie Smith
deb88ae03b feature: ARSN-156 update route type checks 2022-04-26 16:14:29 -07:00
Ronnie Smith
a2777d929e feature: ARSN-156 backport data retrieval style 2022-04-26 15:44:53 -07:00
Guillaume Hivert
03c7b6ea3e ARSN-159 Type policyEvaluator 2022-04-26 17:04:39 +02:00
Guillaume Hivert
872034073e ARSN-159 Type requestUtils 2022-04-26 17:04:25 +02:00
Guillaume Hivert
3d39b61a46 ARSN-170 Type ipCheck 2022-04-26 17:04:18 +02:00
Guillaume Hivert
c55c790a5d ARSN-159 Move everything to TS 2022-04-26 16:59:51 +02:00
Jordi Bertran de Balanda
ccbc1ed10c ARSN-168 - make flatten/unflatten on ArsenalError 2022-04-26 16:03:33 +02:00
bert-e
348c80060e Merge branch 'bugfix/ARSN-167/backbeat' into q/7.10 2022-04-26 13:13:02 +00:00
bert-e
b81d24c3ef Merge branch 'feature/ARSN-169/release-7.10.19' into q/7.10 2022-04-26 01:45:40 +00:00
bert-e
c03c67d9fb Merge branch 'improvement/ARSN-157-short-IDs' into q/7.10 2022-04-26 01:03:37 +00:00
Ronnie Smith
0f72b7c188 feature: ARSN-169 update version 2022-04-25 18:00:19 -07:00
Artem Bakalov
07fd3451ab ARSN-157 short-IDs 2022-04-26 00:23:33 +00:00
Ronnie Smith
473e241d5c feature: ARSN-164 rpc error utils missing is
* added a few missing constants
* fix a few more err.is usages
2022-04-25 16:28:41 -07:00
bert-e
ffe53ab72e Merge branch 'improvement/ARSN-162-add-getBucketTagging-error' into q/7.10 2022-04-25 16:44:51 +00:00
Nicolas Humbert
c13cff150f ARSN-167 Fix zenko metrics 2022-04-24 17:52:08 -04:00
bert-e
e446f20223 Merge branch 'feature/ARSN-158-type-policy' into q/7.10 2022-04-22 15:55:25 +00:00
Guillaume Hivert
dd0ca967c4 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-04-22 17:39:14 +02:00
Guillaume Hivert
7b0bb25358 ARSN-99 Export HTTPS 2022-04-22 17:26:07 +02:00
Guillaume Hivert
57ab049565 Merge remote-tracking branch 'origin/development/7.10' into HEAD 2022-04-22 17:20:56 +02:00
bert-e
6a5f0964ff Merge branch 'feature/ARSN-99-type-check-auth-folder' into q/7.10 2022-04-22 13:56:53 +00:00
Guillaume Hivert
66043e5cd0 ARSN-99 Fix tests 2022-04-22 12:09:32 +02:00
Guillaume Hivert
bb2951be2c ARSN-99 Catch up changes in v4/streamingV4 2022-04-22 12:03:48 +02:00
Guillaume Hivert
0d68de5ec4 ARSN-99 Restore constants.emptyStringHash 2022-04-22 12:03:40 +02:00
Guillaume Hivert
f4e43f2cc7 ARSN-99 Fix Naming Credentials and various function names 2022-04-22 12:03:33 +02:00
Guillaume Hivert
b829b7662e ARSN-99 Migrate auth/v4/streamingV4 to TS 2022-04-22 12:01:15 +02:00
Will Toozs
e4be1d8d35 ARSN-162: add getBucketTagging NoSuchTagSet error 2022-04-21 18:22:30 +02:00
Guillaume Hivert
941d3ba73d ARSN-158 Fix linter 2022-04-20 15:48:39 +02:00
bert-e
9556d5cd61 Merge branch 'bugfix/ARSN-155-export-network-http-utils' into q/7.10 2022-04-20 13:17:43 +00:00
Guillaume Hivert
1fc6c2db86 ARSN-158 Fix test 2022-04-20 14:56:57 +02:00
Guillaume Hivert
c5949b547d ARSN-158 Type policy 2022-04-20 14:46:19 +02:00
Guillaume Hivert
3fdd6b8e80 ARSN-147 Export from metrics folder 2022-04-20 10:44:56 +02:00
Guillaume Hivert
4193511d1b ARSN-147 Type ZenkoMetrics 2022-04-20 10:36:51 +02:00
Guillaume Hivert
3bf00b14b8 ARSN-147 Type StatsModel 2022-04-20 10:36:42 +02:00
Guillaume Hivert
7d4c22594f ARSN-147 Type StatsClients 2022-04-20 10:36:28 +02:00
Guillaume Hivert
6f588c00d7 ARSN-147 Type RedisClient 2022-04-20 10:36:12 +02:00
Guillaume Hivert
441630d57e ARSN-147 Convert files to TS 2022-04-20 10:35:48 +02:00
Guillaume Hivert
3946a01871 ARSN-146 Type HTTPS 2022-04-20 10:29:28 +02:00
Jordi Bertran de Balanda
6f36a85353 ARSN-155 - export utils for cloudserver 2022-04-19 18:09:45 +02:00
Guillaume Hivert
5d4ed36096 Improve errors API 2022-04-19 16:22:04 +02:00
Guillaume Hivert
282dc7afb3 ARSN-108 Fix ESLint complains 2022-04-15 15:57:40 +02:00
Guillaume Hivert
617ec1f500 ARSN-108 Fix test suites 2022-04-15 15:55:03 +02:00
Guillaume Hivert
37157118af ARSN-108 Type auth/auth 2022-04-15 15:44:24 +02:00
Guillaume Hivert
33bea4adb3 ARSN-108 Type auth/v4 2022-04-15 15:43:25 +02:00
Guillaume Hivert
a0b62a9948 ARSN-103 Type auth/v2 2022-04-15 15:43:01 +02:00
Guillaume Hivert
c7c2c7ffaa ARSN-108 Type auth/Vault 2022-04-15 15:42:36 +02:00
Guillaume Hivert
362b82326e ARSN-108 Type auth/AuthInfo 2022-04-15 15:42:19 +02:00
Guillaume Hivert
38d462c833 ARSN-108 Type auth/in_memory/Indexer 2022-04-15 15:42:19 +02:00
Guillaume Hivert
7b73e34f9f ARSN-108 Type auth/in_memory/validateAuthConfig 2022-04-15 15:42:19 +02:00
Guillaume Hivert
d88ad57032 ARSN-108 Type auth/in_memory/Backend 2022-04-15 15:42:15 +02:00
Guillaume Hivert
800f79f125 ARSN-108 Type for auth/in_memory/vaultUtilities 2022-04-15 15:41:37 +02:00
Guillaume Hivert
522dfbc0db ARSN-98 Type auth/in_memory/AuthLoader 2022-04-15 15:41:37 +02:00
Guillaume Hivert
918ad4c7c2 ARSN-108 Type constants 2022-04-15 15:41:32 +02:00
Guillaume Hivert
2c8e611a15 ARSN-98 ARSN-108 ARSN-103 Add joi, eslint, simple-glob interface, @types/async and @types/utf8 to make TS compiler happy 2022-04-15 15:36:39 +02:00
Guillaume Hivert
0158fb0967 ARSN-108 Rename auth js files to ts files, and constants.js to constants.ts 2022-04-15 15:35:55 +02:00
Guillaume Hivert
fd33b9271b Bump version to 7.10.18 2022-04-15 11:13:28 +02:00
bert-e
0e7c47a7e9 Merge branch 'feature/ARSN-98-migrate-errors-to-typescript' into q/7.10 2022-04-15 09:03:21 +00:00
KillianG
0b51a6a3f0 ARSN-148: release arsenal 7 10 17 2022-04-14 18:58:40 +02:00
bert-e
67639f64d4 Merge branch 'improvement/ARSN-140-add-get-bucket-tagging-to-action-map' into q/7.10 2022-04-14 16:49:41 +00:00
bert-e
36fd21a3cd Merge branch 'improvement/ARSN-139-delete-bucket-tagging-to-action-map' into q/7.10 2022-04-14 16:37:06 +00:00
Killian Gardahaut
48fe6779bb Update actionMaps.js 2022-04-14 13:58:03 +02:00
Killian Gardahaut
6acc199eca Update lib/policyEvaluator/utils/actionMaps.js
Co-authored-by: William <91462779+williamlardier@users.noreply.github.com>
2022-04-14 10:51:18 +02:00
Killian Gardahaut
6eff4565dd Update lib/policyEvaluator/utils/actionMaps.js
Co-authored-by: William <91462779+williamlardier@users.noreply.github.com>
2022-04-14 10:51:06 +02:00
KillianG
8cc333e7f7 ARSN-140: add get bucket tagging to action map 2022-04-14 09:54:49 +02:00
KillianG
cbcaa97abb ARSN-139: add delete bucket tagging 2022-04-14 09:36:18 +02:00
KillianG
d18971bc6e fixup tagging instead of tagset 2022-04-14 09:33:50 +02:00
KillianG
f5bce507a5 ARSN-138: add pub bucket tagging to action map 2022-04-14 09:27:37 +02:00
Jordi Bertran de Balanda
ee49ec7d72 ARSN-144 - release 7.10.16 2022-04-13 18:15:06 +02:00
bert-e
07e8d44406 Merge branch 'improvement/ARSN-131-add-bucket-tagging-to-bucketinfo' into q/7.10 2022-04-13 12:53:23 +00:00
Guillaume Hivert
ab823b2797 ARSN-67 Fix all tests 2022-04-13 12:26:03 +02:00
Guillaume Hivert
e7502c9ffd ARSN-67 Change errors.spec.js to errors.spec.ts 2022-04-13 12:01:52 +02:00
Guillaume Hivert
9de879ecc2 ARSN-67 Switch errors to TS 2022-04-13 12:01:50 +02:00
Guillaume Hivert
68ca9a6e94 ARSN-67 Rename errors and arsenalErrors 2022-04-13 12:00:38 +02:00
bert-e
310834c237 Merge branch 'feature/ARSN-128/update-package-version' into q/7.10 2022-04-12 18:37:06 +00:00
KillianG
118f6dc787 ARSN-131: Add bucket tagging to BucketInfo.js 2022-04-12 11:33:09 +02:00
Ronnie Smith
3faf2433c7 feature: ARSN-128 update package version 2022-04-11 17:53:17 -07:00
Ronnie Smith
d3d2529719 feature: ARSN-128 put bucketclient back or circular issues 2022-04-06 13:39:56 -07:00
Ronnie Smith
23b9cf6e21 feature: ARSN-128 do not pass in bucketclient to data wrapper 2022-04-06 11:47:19 -07:00
Ronnie Smith
66910fb1a4 feature: ARSN-128 add missing export and constant 2022-04-05 11:06:39 -07:00
Ronnie Smith
24c82170d8 feature: ARSN-128 update storage exports and fix typo 2022-03-30 19:26:59 -07:00
Ronnie Smith
e26073ed6d feature: ARSN-128 add missing error and update deps 2022-03-30 10:56:39 -07:00
Ronnie Smith
0088a2849f feature: ARSN-128 add another missing constant 2022-03-29 17:43:15 -07:00
Ronnie Smith
e902eb61db feature: ARSN-128 add missing constant 2022-03-29 17:13:55 -07:00
Ronnie Smith
4cfd78c955 feature: ARSN-128 adding more missing parts 2022-03-29 14:46:51 -07:00
Ronnie Smith
cfee038a34 feature: ARSN-128 fix md lint 2022-03-29 13:14:46 -07:00
Ronnie Smith
06c2a0d90d feature: ARSN-128 disable eslint rule 2022-03-29 13:12:43 -07:00
Ronnie Smith
1e241bd79c feature: ARSN-128 move tests from 8 2022-03-29 11:58:34 -07:00
Ronnie Smith
0d526df512 feature: ARSN-128 moved storage and algos from 8 2022-03-29 11:50:44 -07:00
Guillaume Hivert
961b5abe41 ARSN-67 Fix linter 2022-03-24 15:02:16 +01:00
Guillaume Hivert
d0527d1ac1 ARSN-67 Upload Artifacts 2022-03-24 15:02:16 +01:00
Guillaume Hivert
08cb0a8c1c ARSN-67 Switch index.ts to import/export and fix JSON import in policyValidator 2022-03-24 15:02:16 +01:00
Guillaume Hivert
de0678d5bf ARSN-67 Rename index.js to index.ts for proper future migration 2022-03-24 15:02:16 +01:00
Guillaume Hivert
f619c0d33f ARSN-67 Remove ignore of build for NPM
Installing from git sources for dependents produced only an index.js
file. This was due to .gitignore ignoring the build folder and npm/yarn
removing the ignored files after install. Adding an empty .npmignore
solves the problem. This is explained here:
https://stackoverflow.com/questions/61754026/installing-npm-package-with-prepare-script-from-yarn-produces-only-index-js
2022-03-24 15:02:16 +01:00
Guillaume Hivert
7fea1d58a8 ARSN-67 Add TypeScript and Babel, and make test suite working 2022-03-24 15:02:16 +01:00
Guillaume Hivert
db25abeb99 ARSN-84 Correct Jest configuration for test suites and coverage
Thanks to the file renaming, we can follow the Jest default
configuration as much as possible. The options are gone; we specify
only maxWorkers (because the test suite is linear, and breaks if run
in parallel) and the files to collect coverage from.
The coverage script itself is merged into one command instead of three
to leverage the Jest builtin coverage.
2022-03-24 15:02:16 +01:00
Guillaume Hivert
e90e37c42f ARSN-84 Rename all test files from [name].js to [name].spec.js
In order to simplify the Jest configuration, we have to rename the
files to follow the Jest convention (a .spec.js extension for test
files).
2022-03-24 15:02:16 +01:00
Guillaume Hivert
38bb284694 ARSN-84 Fix Jest bug in _arsenalError
You can check out the bug at
https://github.com/facebook/jest/issues/2549.
The bug is inherent to Jest and has been known for years: Jest
switches the VM from Node to a custom Jest VM and injects its own set
of globals. The Error provided by Jest is different from the Error
provided by Node, so the test `err instanceof Error` is false.
Error:
```
 Expected value to be equal to:
      true
 Received:
      false
```
2022-03-24 15:02:16 +01:00
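
A common workaround for this class of cross-realm issue (an assumption here, not necessarily the fix used):

```ts
// `instanceof Error` fails across VM realms; a brand check does not.
function isError(err: unknown): err is Error {
    return Object.prototype.toString.call(err) === '[object Error]';
}

console.log(isError(new Error('boom'))); // true, even under Jest's VM
```
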
Guillaume Hivert
a123b3d781 ARSN-84 Fix redis commands in functional tests
The switch from mocha to Jest introduces some test bugs.
As far as we can tell, Jest is quicker than mocha, creating some
weird behaviour: some commands sent to redis (with ioredis)
work, and some don't. Our conclusion is that redis needs
to queue requests offline to avoid micro-disconnections from
redis in development. Otherwise, we get the following error:
```
  - StatsModel class › should correctly record a new request by default
one increment

    assert.ifError(received, expected)

    Expected value ifError to:
      null
    Received:
      [Error: Stream isn't writeable and enableOfflineQueue options is
false]

    Message:
      ifError got unwanted exception: Stream isn't writeable and
enableOfflineQueue options is false
```
Switching enableOfflineQueue to true makes the test suite
succeed.
2022-03-24 15:02:16 +01:00
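
The ioredis option referenced above, made explicit in a minimal sketch:

```ts
import Redis from 'ioredis';

const redis = new Redis({
    host: 'localhost',
    port: 6379,
    // queue commands while (re)connecting instead of failing immediately
    enableOfflineQueue: true,
});
```
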
Guillaume Hivert
9b583b0541 ARSN-84 Introduce Jest and reconfigure ESLint
Add Jest as a test runner to replace mocha, to compile TS on the
fly and allow mixed TS/JS sources (replacing mocha's before and
after with Jest's beforeAll and afterAll), and add some ESLint
configuration to make ESLint happy.
2022-03-24 15:02:16 +01:00
bert-e
27e06c51cc Merge branch 'bugfix/ARSN-105/locations' into tmp/octopus/w/7.10/bugfix/ARSN-105/locations 2022-03-15 18:47:37 +00:00
Nicolas Humbert
7d254a0556 ARSN-105 Disjointed reduced locations 2022-03-15 14:03:54 -04:00
Vianney Rancurel
7b451242b6 Merge remote-tracking branch 'origin/feature/ARSN-87-versioning-exports-missing' into w/7.10/feature/ARSN-87-versioning-exports-missing 2022-02-18 17:14:34 -08:00
Vianney Rancurel
5f8c92a0a2 ft: ARSN-87 some versioning exports are still missing for Armory 2022-02-18 17:09:27 -08:00
bert-e
29bab6f1f1 Merge branch 'feature/ARSN-64-sorted-set' into q/7.10 2022-02-16 23:00:46 +00:00
Taylor McKinnon
ab8cad95d7 Merge remote-tracking branch 'origin/improvement/ARSN-46/rollback_unneeded_changes_stab' into w/7.10/improvement/ARSN-46/rollback_unneeded_changes_stab 2022-02-16 14:56:25 -08:00
Vianney Rancurel
44f37bd156 ft: ARSN-64 sorted set routines
Large-set management routines implemented with array dichotomies.
No suitable external module was found.
2022-02-16 14:43:31 -08:00
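
An illustrative "array dichotomy" (binary search) insertion into a sorted array used as a set (a generic sketch, not the Arsenal routines):

```ts
// Insert `value` into a sorted number array, keeping set semantics.
function insertSorted(arr: number[], value: number): void {
    let lo = 0;
    let hi = arr.length;
    while (lo < hi) {
        const mid = (lo + hi) >> 1; // dichotomy: halve the search range
        if (arr[mid] < value) lo = mid + 1;
        else hi = mid;
    }
    if (arr[lo] !== value) arr.splice(lo, 0, value); // skip duplicates
}

const set = [1, 3, 7];
insertSorted(set, 5);
console.log(set); // [ 1, 3, 5, 7 ]
```
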
Taylor McKinnon
b855de50eb impr(ARSN-46): Rollback changes
(cherry picked from commit 6861ac477a)
2022-02-14 11:40:33 -08:00
Taylor McKinnon
00602beadd Merge remote-tracking branch 'origin/improvement/ARSN-46/rollback_unneeded_changes' into w/7.10/improvement/ARSN-46/rollback_unneeded_changes 2022-02-14 11:19:58 -08:00
Taylor McKinnon
6861ac477a impr(ARSN-46): Rollback changes 2022-02-14 11:10:36 -08:00
Rached Ben Mustapha
4303cd8f5b ARSN-62: bump version to 7.10.11 2022-02-08 22:53:30 +00:00
Rached Ben Mustapha
0c73c952fa ARSN-62: include session token in v4 signature calculation 2022-02-08 22:53:16 +00:00
Nicolas Humbert
2f40ff3883 ARSN-21 update package version 2022-02-07 18:16:54 +01:00
Nicolas Humbert
90d6556229 ARSN-21 update package version 2022-02-07 18:13:46 +01:00
bert-e
d813842f89 Merge branches 'w/7.10/feature/ARSN-21/UpgradeToNode16' and 'q/1687/7.4/feature/ARSN-21/UpgradeToNode16' into tmp/octopus/q/7.10 2022-02-07 17:06:51 +00:00
bert-e
f7802650ee Merge branch 'feature/ARSN-21/UpgradeToNode16' into q/7.4 2022-02-07 17:06:51 +00:00
bert-e
f28783e616 Merge branches 'development/7.10' and 'feature/ARSN-21/UpgradeToNode16' into tmp/octopus/w/7.10/feature/ARSN-21/UpgradeToNode16 2022-02-07 17:04:19 +00:00
Nicolas Humbert
d0684396b6 S3C-5450 log is not accurate anymore 2022-02-04 10:45:48 +01:00
bert-e
4dc39e37b2 Merge branch 'bugfix/ARSN-57-correct-logging-client-ip' into tmp/octopus/w/7.10/bugfix/ARSN-57-correct-logging-client-ip 2022-01-29 01:20:12 +00:00
Naren
9b9a8660d9 bf: ARSN-57 log correct client ip
Check the request header 'x-forwarded-for' if there is no request
configuration.
2022-01-28 17:03:47 -08:00
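
A hedged sketch of the fallback (simplified; the real configuration handling is richer):

```ts
import { IncomingMessage } from 'http';

function clientIp(req: IncomingMessage): string | undefined {
    const xff = req.headers['x-forwarded-for'];
    const value = Array.isArray(xff) ? xff[0] : xff;
    // the first entry of x-forwarded-for is the originating client
    return value?.split(',')[0].trim() ?? req.socket.remoteAddress;
}
```
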
Ronnie Smith
10f0a934b0 Merge remote-tracking branch 'origin/feature/ARSN-21/UpgradeToNode16' into w/7.10/feature/ARSN-21/UpgradeToNode16 2022-01-24 14:29:36 -08:00
Ronnie Smith
8c3f304d9b feature: ARSN-21 upgrade to node 16 2022-01-24 14:26:11 -08:00
bert-e
38705d1962 Merge branch 'feature/ARSN-54/RevertNode16Changes' into tmp/octopus/w/7.10/feature/ARSN-54/RevertNode16Changes 2022-01-20 23:21:12 +00:00
Ronnie Smith
efb3629eb0 feature: ARSN-54 use a less strict node engine 2022-01-20 15:20:43 -08:00
bert-e
e8084d4ab9 Merge branch 'feature/ARSN-54/RevertNode16Changes' into tmp/octopus/w/7.10/feature/ARSN-54/RevertNode16Changes 2022-01-20 20:19:44 +00:00
Ronnie Smith
6733d30439 feature: ARSN-54 revert node 16 2022-01-20 12:18:01 -08:00
Naren
8b1846647b improvement: ARSN-53 bump version to 7.10.6 2022-01-19 18:19:08 -08:00
bert-e
d5dad4734f Merge branch 'bugfix/ARSN-50-object-retention-date-with-sub-seconds-fails' into q/7.10 2022-01-20 01:05:54 +00:00
bert-e
e7869d832e Merge branches 'w/7.10/improvement/ARSN-21-Upgrade-Node-to-16' and 'q/1649/7.4/improvement/ARSN-21-Upgrade-Node-to-16' into tmp/octopus/q/7.10 2022-01-20 00:09:24 +00:00
bert-e
a1e14fccb1 Merge branch 'improvement/ARSN-21-Upgrade-Node-to-16' into q/7.4 2022-01-20 00:09:23 +00:00
Naren
f0981e2c57 bf: ARSN-50 object retention date with sub seconds should not fail 2022-01-19 14:38:23 -08:00
bert-e
0c17c748fe Merge branches 'w/7.10/bugfix/ARSN-35/add-http-header-too-large-error' and 'q/1611/7.4/bugfix/ARSN-35/add-http-header-too-large-error' into tmp/octopus/q/7.10 2022-01-19 00:48:16 +00:00
bert-e
030f47a88a Merge branch 'bugfix/ARSN-35/add-http-header-too-large-error' into q/7.4 2022-01-19 00:48:15 +00:00
bert-e
9c185007a2 Merge branch 'bugfix/ARSN-35/add-http-header-too-large-error' into tmp/octopus/w/7.10/bugfix/ARSN-35/add-http-header-too-large-error 2022-01-18 17:43:37 +00:00
Taylor McKinnon
d7a4bef3b3 Merge remote-tracking branch 'origin/improvement/ARSN-46/add_isAborted_flag' into w/7.10/improvement/ARSN-46/add_isAborted_flag 2022-01-13 13:53:41 -08:00
Taylor McKinnon
fc7711cca2 impr(ARSN-46): Add isAborted flag 2022-01-13 13:51:18 -08:00
Ronnie Smith
79699324d9 Merge remote-tracking branch 'origin/improvement/ARSN-21-Upgrade-Node-to-16' into w/7.10/improvement/ARSN-21-Upgrade-Node-to-16 2022-01-11 14:26:12 -08:00
Ronnie Smith
3919808d14 feature: ARSN-21 resolve broken tests 2022-01-11 14:18:56 -08:00
Dimitri Bourreau
b1dea67eef tests: ARSN-21 remove timeout 5500 from package.json script test
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:21:36 +01:00
Dimitri Bourreau
c3196181c1 chore: ARSN-21 add ioctl as optional dependency
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:20:14 +01:00
Dimitri Bourreau
c24ad4f887 chore: ARSN-21 remove ioctl
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:15:33 +01:00
Dimitri Bourreau
ad1c623c80 chore: ARSN-21 GitHub Actions run unit tests without --silent
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:14:08 +01:00
Dimitri Bourreau
9d81cad0aa tests: ARSN-21 update ws._server.connections with _connections
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:03:12 +01:00
Dimitri Bourreau
5f72738b7f improvement: ARSN-21 upgrade uuid from 3.3.2 to 3.4.0
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-09 00:38:07 +01:00
Dimitri Bourreau
70278f86ab improvement: ARSN-21 upgrade dependencies with yarn upgrade-interactive
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-07 14:35:33 +01:00
Dimitri Bourreau
083dd7454a improvement: ARSN-21 GitHub Actions should use Node 16 instead of 10
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-07 11:50:16 +01:00
Alexander Chan
8aa0f9d030 ARSN-33: add s3 lifecycle helpers 2021-11-19 18:01:05 -08:00
Jonathan Gramain
3b0ea3d7a1 Merge remote-tracking branch 'origin/improvement/ARSN-42-addNullUploadIdField' into w/7.10/improvement/ARSN-42-addNullUploadIdField 2021-11-18 18:24:33 -08:00
Jonathan Gramain
5ce057a498 ARSN-42 bump version to 7.4.13 2021-11-18 18:19:59 -08:00
Jonathan Gramain
8c3f88e233 improvement: ARSN-42 get/set ObjectMD.nullUploadId
Add getNullUploadId/setNullUploadId helpers to ObjectMD, to store the
null version uploadId, so that it can be passed to the metadata layer
as "replayId" when deleting the null version from another master key
2021-11-18 14:16:19 -08:00
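A minimal sketch of the getter/setter pattern this commit describes, assuming Arsenal's fluent setter style; the class below is a stand-in for illustration, not the real ObjectMD:

```javascript
// Stand-in for Arsenal's ObjectMD; only the nullUploadId accessors are shown.
class ObjectMD {
    constructor() {
        this._data = {};
    }

    setNullUploadId(nullUploadId) {
        this._data.nullUploadId = nullUploadId;
        return this; // fluent style, assumed to match other ObjectMD setters
    }

    getNullUploadId() {
        return this._data.nullUploadId;
    }
}

// The stored uploadId can then be passed as the "replayId" when deleting
// the null version from another master key.
const md = new ObjectMD().setNullUploadId('0123456789abcdef');
console.log(md.getNullUploadId()); // '0123456789abcdef'
```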
Jonathan Gramain
8c2db870c7 Merge remote-tracking branch 'origin/feature/ARSN-38-replayPrefixHiddenInListings' into w/7.10/feature/ARSN-38-replayPrefixHiddenInListings 2021-11-04 15:22:29 -07:00
Jonathan Gramain
04581abbf6 ARSN-38 bump arsenal version 2021-11-03 15:45:30 -07:00
Jonathan Gramain
abfbe90a57 feature: ARSN-38 introduce replay prefix hidden in listings
- Add a new DB prefix for replay keys, similar to existing v1 vformat
  prefixes

- Hide this prefix for v0 listing algos DelimiterMaster and
  DelimiterVersions: skip keys beginning with this prefix, and update
  the "skipping" value to be able to skip the entire prefix after the
  streak length is reached (similar to how regular prefixes are
  skipped)

- Fix an existing unit test in DelimiterVersions
2021-11-02 12:01:28 -07:00
bert-e
67e5cc770d Merge branch 'feature/ARSN-37-addUploadId' into tmp/octopus/w/7.10/feature/ARSN-37-addUploadId 2021-11-02 00:28:00 +00:00
Jonathan Gramain
b1c9474159 feature: ARSN-37 ObjectMD getUploadId/setUploadId
Add getter/setter for the "uploadId" field, used for MPUs in progress.
2021-11-01 17:25:57 -07:00
Ilke
8e8d771a64 bugfix: ARSN-35 add http header too large error 2021-10-29 20:17:42 -07:00
Rahul Padigela
07a110ff86 chore: update version 2021-10-26 14:52:35 -07:00
Rahul Padigela
c696f9a38b Merge remote-tracking branch 'origin/improvement/ARSN-31-update-version' into w/7.10/improvement/ARSN-31-update-version 2021-10-26 14:52:19 -07:00
Rahul Padigela
f941132c8a chore: update version 2021-10-26 14:47:21 -07:00
bert-e
c0825231e9 Merge branch 'bugfix/ARSN-31-invalid-query-params' into tmp/octopus/w/7.10/bugfix/ARSN-31-invalid-query-params 2021-10-26 00:27:15 +00:00
Rahul Padigela
2246a9fbdc bugfix: ARSN-31 return empty string for invalid requests
Return an empty string for invalid encoding requests: for example, when
duplicate query params appear in an HTTP URL, the Node.js HTTP parser
converts them into an Array, which breaks the encoding method.
2021-10-25 16:59:09 -07:00
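As a hedged illustration (not the actual Arsenal code): duplicate query parameters come out of Node's query-string parsing as an array, so a type guard like the hypothetical `safeEncode` below returns an empty string rather than letting the encoder break:

```javascript
const querystring = require('querystring');

// Node.js turns duplicate query params into an array:
const query = querystring.parse('acl=private&acl=public');
console.log(query.acl); // [ 'private', 'public' ]

// Hypothetical guard illustrating the fix: non-string input yields ''.
function safeEncode(value) {
    if (typeof value !== 'string') {
        return '';
    }
    return encodeURIComponent(value);
}

console.log(safeEncode(query.acl)); // '' (array, not a string)
console.log(safeEncode('a b'));     // 'a%20b'
```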
Rahul Padigela
86270d8495 test: test for invalid type for encoding strings 2021-10-25 16:59:03 -07:00
Thomas Carmet
e52330b935 Merge branch 'feature/ARSN-20-migrate-github-actions' into w/7.10/feature/ARSN-20-migrate-github-actions 2021-09-23 11:37:29 -07:00
Thomas Carmet
4b08dd5263 ARSN-20 migrate to github actions
Co-authored-by: Ronnie <halfpint1170@gmail.com>
2021-09-23 11:37:04 -07:00
Thomas Carmet
ce7bba1f8d ARSN-17 fixup version mistake for dev/7.10 2021-08-31 10:44:52 -07:00
Thomas Carmet
46338119b6 Merge remote-tracking branch 'origin/feature/ARSN-17-setup-package.json' into w/7.10/feature/ARSN-17-setup-package.json 2021-08-31 09:57:28 -07:00
Thomas Carmet
36f6ca47e9 ARSN-17 align package.json with releases 2021-08-31 09:55:21 -07:00
bert-e
cd50d46162 Merge branch 'feature/ARSN-12-bumpArsenalVersion-stabilization' into tmp/octopus/w/7.10/feature/ARSN-12-bumpArsenalVersion-stabilization 2021-08-26 21:48:36 +00:00
Jonathan Gramain
016107500f feature: ARSN-12 bump arsenal version
Needed to ensure proper dependency update in Vault

(cherry picked from commit c495ecacb0)
2021-08-26 14:47:18 -07:00
Jonathan Gramain
04ebaa8d8f Merge remote-tracking branch 'origin/feature/ARSN-12-bumpArsenalVersion' into w/7.10/feature/ARSN-12-bumpArsenalVersion 2021-08-26 14:24:27 -07:00
Jonathan Gramain
c495ecacb0 feature: ARSN-12 bump arsenal version
Needed to ensure proper dependency update in Vault
2021-08-26 14:21:10 -07:00
bert-e
3f702c29cd Merge branch 'feature/ARSN-12-condition-put-backport' into tmp/octopus/w/7.10/feature/ARSN-12-condition-put-backport 2021-08-25 21:07:37 +00:00
anurag4DSB
8603ca5b99 feature: ARSN-12-introduce-cond-put-op
(cherry picked from commit f101a0f3a0)
2021-08-25 23:03:58 +02:00
bert-e
7b4e65eaf1 Merge branch 'feature/ARSN-12-introduce-cond-put' into tmp/octopus/w/7.10/feature/ARSN-12-introduce-cond-put 2021-08-25 20:54:20 +00:00
anurag4DSB
f101a0f3a0 feature: ARSN-12-introduce-cond-put-op 2021-08-25 22:50:23 +02:00
bert-e
e0b95fe931 Merge branch 'w/7.10/feature/ARSN-11-bump-werelogs' into tmp/octopus/q/7.10 2021-08-13 17:56:09 +00:00
naren-scality
db7d8b0b45 improvement: ARSN-13 expose isResourceApplicable for policy evaluation 2021-08-12 20:06:19 -07:00
bert-e
46d3a1e53c Merge branch 'feature/ARSN-11-bump-werelogs' into tmp/octopus/w/7.10/feature/ARSN-11-bump-werelogs 2021-08-12 17:06:27 +00:00
Thomas Carmet
ef6197250c ARSN-11 update werelogs to tagged version 2021-08-12 10:03:26 -07:00
Jonathan Gramain
9aa8710a57 ARSN-9 KMIP deep healthcheck
Add a healthcheck() function in the KMIP client that creates a dummy
bucket key on the KMS, then deletes it, to ensure basic functionality
is working
2021-08-04 11:51:23 -07:00
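A sketch of the create-then-delete round trip described above, with illustrative method names (the exact Arsenal KMIP client API may differ):

```javascript
// Illustrative healthcheck: create a dummy bucket key, then destroy it.
// `client.createBucketKey`/`client.destroyBucketKey` are assumed names.
function healthcheck(client, logger, cb) {
    const dummyName = `kmip-healthcheck-${Date.now()}`;
    client.createBucketKey(dummyName, logger, (err, keyId) => {
        if (err) {
            return cb(err);
        }
        // Completing the full round trip proves the KMS can both create
        // and delete keys, hence a "deep" healthcheck.
        return client.destroyBucketKey(keyId, logger, cb);
    });
}
```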
Ronnie Smith
735c6f2fb5 bugfix: ARSN-8 Remove response code and message from log
* The response has not been computed yet, so this always
returns 200, which is inaccurate and confusing
2021-08-02 19:02:44 -07:00
bert-e
942c6c2a1e Merge branch 'bugfix/ARSN-7_SkipHeadersOn304' into tmp/octopus/w/7.10/bugfix/ARSN-7_SkipHeadersOn304 2021-07-30 23:46:09 +00:00
Ronnie Smith
836c65e91e bugfix: S3C-3810 Skip headers on 304 response 2021-07-30 15:24:31 -07:00
bert-e
4a6b69247b Merge branch 'feature/ARSN-5/addBucketInfoUIDField' into q/7.10 2021-07-28 16:58:33 +00:00
Gregoire Doumergue
66a48f44da Revert "S3C-656: Remove the expect header hack"
This reverts commit 3e1d8c8ed7.
2021-07-28 14:50:00 +02:00
Gregoire Doumergue
fa3ec78e25 Revert "ARSN-3: Remove the test for the old hack"
This reverts commit 8f4453862d.
2021-07-28 14:49:17 +02:00
Alexander Chan
112cee9118 ARSN-5: add BucketInfo field UID 2021-07-27 16:58:12 -07:00
Jonathan Gramain
6fdfbcb223 bugfix: ARSN-4 rework KMIP connection handling
Rework KMIP connection handling to catch all errors, including before
the connection is established, and return the error to each pending
command response.

In particular, set up the 'error' listener (as well as the 'data' and 'end'
listeners) as soon as the TLS client socket is created, instead of
waiting for the connection to be established before setting them.
2021-07-21 18:26:39 -07:00
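A minimal sketch of the listener-ordering fix, assuming a plain Node.js TLS socket; the real Arsenal code wires these events into its pending-command handling:

```javascript
const tls = require('tls');

// Attach listeners at socket creation time, not after 'secureConnect':
// errors raised while connecting are then caught and can be returned to
// every pending command instead of crashing the process.
function openChannel(options, handlers) {
    const socket = tls.connect(options);
    socket.on('error', handlers.onError); // fires even before connection
    socket.on('data', handlers.onData);
    socket.on('end', handlers.onEnd);
    return socket;
}
```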
Jonathan Gramain
c41f1ca4b3 bugfix: [test] ARSN-4 reproduce issue in func tests
- Change existing KMIP transport test to trigger issue: Modify the
  EchoChannel socket mock to use standard EventEmitter, which triggers
  an exception when an error event is emitted.

- Add a new test for TLS transport that raises the same TLS connection
  exception than witnessed on lab
2021-07-21 18:00:52 -07:00
Jonathan Gramain
888273bb2f improvement: S3C-4312 fix ObjectMDLocation.setDataLocation()
Fix ObjectMDLocation.setDataLocation() behavior when cryptoScheme and
cipheredDataKey location params are undefined: instead of setting the
attributes as undefined, remove the attributes.

The previous situation made some backbeat tests fail due to those
attributes existing, and it's cleaner this way.
2021-07-21 11:04:22 -07:00
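A hedged sketch of the behavior change (stand-in function, not the actual ObjectMDLocation method): undefined encryption params now remove the attributes instead of storing `undefined`:

```javascript
function setDataLocation(location, { cryptoScheme, cipheredDataKey } = {}) {
    if (cryptoScheme !== undefined && cipheredDataKey !== undefined) {
        location.cryptoScheme = cryptoScheme;
        location.cipheredDataKey = cipheredDataKey;
    } else {
        // Remove rather than assign undefined: `'cryptoScheme' in location`
        // is now false, which is what the backbeat tests expect.
        delete location.cryptoScheme;
        delete location.cipheredDataKey;
    }
    return location;
}

const loc = setDataLocation({ key: 'abc' }, {});
console.log('cryptoScheme' in loc); // false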
Jonathan Gramain
1978405fb9 improvement: S3C-4312 backport + adapt ObjectMDLocation unit test
Backport and adapt to 7.x branch the ObjectMDLocation unit tests from
development/8.1 branch
2021-07-20 14:41:58 -07:00
Jonathan Gramain
d019076854 improvement: S3C-4312 encryption info in ObjectMDLocation.setDataLocation()
Support setting encryption info in ObjectMDLocation with the method
setDataLocation(), used by backbeat to set the new target location
before writing metadata on the target.
2021-07-20 14:41:37 -07:00
Gregoire Doumergue
8f4453862d ARSN-3: Remove the test for the old hack 2021-07-12 16:19:27 +02:00
Gregoire Doumergue
3e1d8c8ed7 S3C-656: Remove the expect header hack 2021-07-12 15:13:21 +02:00
Rached Ben Mustapha
a41d4db1c4 chore: bump version 2021-07-08 16:37:52 -07:00
Rached Ben Mustapha
00d9c9af0c bf: fix user arn validation with path 2021-07-08 16:37:52 -07:00
Rahul Padigela
7aafd05b74 bugfix: ARSN-1 conditionally check for content-md5 2021-07-06 16:17:33 -07:00
bert-e
5540afa194 Merge branch 'feature/S3C-4614/assumerole' into q/7.10 2021-06-29 21:40:05 +00:00
Rached Ben Mustapha
6b9e7fc11f chore: bump version 2021-06-29 20:11:44 +00:00
Nicolas Humbert
058455061d ft: S3C-4614 AssumeRole cross account with user as principal 2021-06-29 20:11:44 +00:00
vrancurel
d1e4c8dbb9 ft: S3C-4552 remove duplicate test 2021-06-29 13:02:36 -07:00
bert-e
e87198f7ba Merge branch 'feature/S3C-4552-tiny-version-ids' into q/7.10 2021-06-29 19:27:15 +00:00
vrancurel
a7bfedfa2b ft: S3C-4552 tiny version IDs
Will be enabled on new buckets only.
2021-06-29 11:13:39 -07:00
bert-e
2794fe0636 Merge branch 'improvement/S3C-4110/backport' into q/7.10 2021-06-29 12:15:38 +00:00
Jonathan Gramain
6347358cc2 bugfix: S3C-3744 fix bucket encryption related actions
Changes made to match the AWS reference:
https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html

- change "bucketDeleteEncryption" action to "s3:PutEncryptionConfiguration"

- rename PUT and GET actions to PutEncryptionConfiguration and
  GetEncryptionConfiguration and add missing 's3:' prefix
2021-06-21 16:12:59 -07:00
Nicolas Humbert
739f0a709c S3C-4110 backport lifecycle expiration - add tests 2021-06-09 16:07:28 -05:00
bert-e
ffbe46edfb Merge branch 'bugfix/S3C-4257_StartSeqCanBeNull' into q/7.4 2021-06-08 08:18:01 +00:00
bert-e
ea6e0c464b Merge branches 'w/7.10/bugfix/S3C-4257_StartSeqCanBeNull' and 'q/1472/7.4/bugfix/S3C-4257_StartSeqCanBeNull' into tmp/octopus/q/7.10 2021-06-08 08:18:01 +00:00
bert-e
4948e3a75e Merge branch 'bugfix/S3C-4257_StartSeqCanBeNull' into tmp/octopus/w/7.10/bugfix/S3C-4257_StartSeqCanBeNull 2021-06-08 02:49:44 +00:00
Ronnie Smith
3ed07317e5 bugfix: S3C-4257 Start Seq can be null
* Return undefined if start seq is falsy
2021-06-07 19:49:13 -07:00
philipyoo
13f8d796b4 bf: apply multiple lifecycle filter tags if they exist 2021-06-02 17:43:29 -05:00
Bennett Buchanan
9bdc330e9b feature: ZENKO-1317 AWS lifecycle compat 2021-06-02 17:43:25 -05:00
bert-e
bcb6836a23 Merge branch 'feature/S3C-3754_add_bucketDeleteEncryption_route' into q/7.10 2021-05-17 17:31:24 +00:00
Taylor McKinnon
cd15540cb9 ft(S3C-3754): Add bucketDeleteEncryption route and support code 2021-05-17 10:27:52 -07:00
Ilke
fe264673e1 bf: S3C-4358 add versioned object lock actions 2021-05-12 16:10:59 -07:00
bert-e
e022fc9b99 Merge branches 'w/7.10/improvement/S3C-4336_add_BucketInfoModelVersion' and 'q/1436/7.4/improvement/S3C-4336_add_BucketInfoModelVersion' into tmp/octopus/q/7.10 2021-05-10 20:18:36 +00:00
bert-e
0487a18623 Merge branch 'improvement/S3C-4336_add_BucketInfoModelVersion' into q/7.4 2021-05-10 20:18:35 +00:00
Taylor McKinnon
5e1fe450f6 add BucketInfo versions 7-9 2021-05-10 13:06:49 -07:00
bert-e
8a1987ba69 Merge branch 'improvement/S3C-4336_add_BucketInfoModelVersion' into tmp/octopus/w/7.10/improvement/S3C-4336_add_BucketInfoModelVersion 2021-05-10 20:02:51 +00:00
Taylor McKinnon
a4ccb94978 impr(S3C-4336): Add BucketInfoModelVersion.md from cloudserver 2021-05-10 13:01:46 -07:00
bert-e
fa47c5045b Merge branch 'feature/S3C-4073_AddProbeServerToIndex' into tmp/octopus/w/7.10/feature/S3C-4073_AddProbeServerToIndex 2021-05-07 04:18:11 +00:00
Ronnie Smith
3098fcf1e1 feature: S3C-4073 Add probe server to index 2021-05-06 21:16:48 -07:00
bert-e
cd9949cb11 Merge branch 'feature/S3C-4073_add-new-probe-server' into tmp/octopus/w/7.10/feature/S3C-4073_add-new-probe-server 2021-04-30 19:56:03 +00:00
Ronnie Smith
41b3babc69 feature: S3C-4073 Add new probe server
* JSDoc for arsenal errors
* ProbeServer as a replacement for HealthProbeServer
2021-04-30 12:53:38 -07:00
Taylor McKinnon
990987bb6a ft(S3C-3748): Add PutBucketEncryption route 2021-04-29 09:34:45 -07:00
Taylor McKinnon
faab2347f9 ft(S3C-3751): Add GetBucketEncryption route 2021-04-21 11:41:32 -07:00
bert-e
9a2b01c92e Merge branches 'w/7.10/bugfix/S3C-4275-versionListingWithDelimiterInefficiency' and 'q/1399/7.4/bugfix/S3C-4275-versionListingWithDelimiterInefficiency' into tmp/octopus/q/7.10 2021-04-14 01:17:38 +00:00
bert-e
403d9b5a08 Merge branch 'bugfix/S3C-4275-versionListingWithDelimiterInefficiency' into q/7.4 2021-04-14 01:17:37 +00:00
Taylor McKinnon
71c1c01b35 add BypassGovernanceRetention to action map 2021-04-13 13:25:16 -07:00
naren-scality
941b644e9e bf S3C-4239 log consumer callback error fix
A guard is added to ensure that the callback is called only once in the
event of an error while reading records in the log consumer.
2021-04-12 10:47:31 -07:00
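The guard boils down to a call-once wrapper; a minimal sketch (illustrative, not the exact Arsenal code):

```javascript
// Wrap a callback so repeated read errors cannot invoke it twice.
function callbackOnce(cb) {
    let called = false;
    return (...args) => {
        if (called) {
            return undefined; // swallow any later invocation
        }
        called = true;
        return cb(...args);
    };
}

const done = callbackOnce(err => console.log(err ? err.message : 'ok'));
done(new Error('read error')); // logged
done(new Error('late error')); // ignored
```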
bert-e
7a92327da2 Merge branch 'bugfix/S3C-4275-versionListingWithDelimiterInefficiency' into tmp/octopus/w/7.10/bugfix/S3C-4275-versionListingWithDelimiterInefficiency 2021-04-10 00:16:30 +00:00
Jonathan Gramain
ecaf9f843a bugfix: S3C-4275 enable skip-scan for DelimiterVersions with a delimiter
Enable the skip-scan optimization for the DelimiterVersions
listing algorithm when used with a delimiter.

For this to work, instead of returning FILTER_ACCEPT when encountering
a version that matches the master key (which resets the skip-scan
counter), return FILTER_SKIP to let the skip-scan counter increment
and eventually skip the entire listed common prefix after 100 entries.
2021-04-09 16:33:50 -07:00
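An illustrative fragment of the return-value change, assuming the listing contract where FILTER_ACCEPT resets the caller's skip-scan counter and FILTER_SKIP lets it grow (the constants and version-key check below are simplified stand-ins):

```javascript
const FILTER_ACCEPT = 1;
const FILTER_SKIP = 0;
const VID_SEP = '\0'; // assumed v0 separator between key and versionId

function filterVersion(key, masterKey) {
    if (key.startsWith(masterKey + VID_SEP)) {
        // Before: FILTER_ACCEPT here reset the skip-scan counter on every
        // version, so a long common prefix was never skipped.
        // After: FILTER_SKIP lets the counter reach the streak threshold
        // (100 entries) and skip the whole prefix in one jump.
        return FILTER_SKIP;
    }
    return FILTER_ACCEPT;
}
```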
Jonathan Gramain
3506fd9f4e bugfix: S3C-4275 more DelimiterVersions unit tests
Increase coverage for DelimiterVersions listing algorithm to have it
in par with DelimiterMaster before attempting a fix: most existing
tests from DelimiterMaster have been copied and adapted to fit the
DelimiterVersions logic.
2021-04-09 16:32:15 -07:00
bert-e
bf4c40dfb8 Merge branch 'feature/S3C-4262_BackportZenkoMetrics' into tmp/octopus/w/7.10/feature/S3C-4262_BackportZenkoMetrics 2021-04-06 09:45:40 +00:00
Ronnie Smith
d533bc4e0f Merge branch 'development/7.4' into feature/S3C-4262_BackportZenkoMetrics 2021-04-06 02:41:34 -07:00
Jonathan Gramain
4aa5071a0d Merge remote-tracking branch 'origin/dependabot/npm_and_yarn/development/7.4/mocha-8.0.1' into w/7.10/dependabot/npm_and_yarn/development/7.4/mocha-8.0.1 2021-04-02 12:44:06 -07:00
Jonathan Gramain
c6976e996e build(deps-dev): Bump mocha from 2.5.3 to 8.0.1
Clean up remaining references in a few test suites so that mocha does not
hang after tests complete, since mocha 4+ no longer forces an exit when
there are active references.

Ref: https://boneskull.com/mocha-v4-nears-release/#mochawontforceexit
2021-04-02 11:48:27 -07:00
Ronnie Smith
1584c4acb1 feature S3C-4262 Backport zenko metrics 2021-04-01 20:03:39 -07:00
dependabot[bot]
f1345ec2ed build(deps-dev): Bump mocha from 2.5.3 to 8.0.1
Bumps [mocha](https://github.com/mochajs/mocha) from 2.5.3 to 8.0.1.
- [Release notes](https://github.com/mochajs/mocha/releases)
- [Changelog](https://github.com/mochajs/mocha/blob/master/CHANGELOG.md)
- [Commits](https://github.com/mochajs/mocha/compare/v2.5.3...v8.0.1)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-30 15:55:18 -07:00
vrancurel
147946747c ft: S3C-4172 custom filter
Perform an optional filter on the customAttributes sub-object, using the
optional filterKey and filterKeyStartWith parameters of the basic filter.
2021-03-18 15:21:31 -07:00
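A sketch of the two optional parameters, assuming entries carry a customAttributes sub-object (parameter names taken from the commit message, implementation illustrative):

```javascript
function matchesCustomFilter(entry, { filterKey, filterKeyStartWith } = {}) {
    const attrs = (entry && entry.customAttributes) || {};
    if (filterKey !== undefined) {
        // exact attribute key match
        return filterKey in attrs;
    }
    if (filterKeyStartWith !== undefined) {
        // attribute key prefix match
        return Object.keys(attrs).some(k => k.startsWith(filterKeyStartWith));
    }
    return true; // no filter given: keep the basic-filter result
}

console.log(matchesCustomFilter(
    { customAttributes: { 'x-color': 'blue' } },
    { filterKeyStartWith: 'x-' })); // true
```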
bert-e
6eacd79f07 Merge branch 'bugfix/S3C-3962-zero-size-stream' into tmp/octopus/w/7.9/bugfix/S3C-3962-zero-size-stream 2021-02-10 17:31:07 +00:00
alexandre merle
f17006b91e bugfix: S3C-3962: consider zero size as valid in stream response 2021-02-09 13:44:05 +01:00
alexandre merle
65966f5ddf S3C-3904: more s3 action logs
Add 7.9 actions
2021-02-05 20:57:48 +01:00
bert-e
f6223d1472 Merge branch 'bugfix/S3C-3904-better-s3-action-logs' into tmp/octopus/w/7.9/bugfix/S3C-3904-better-s3-action-logs 2021-02-05 18:15:28 +00:00
alexandre merle
b3080e9ac6 S3C-3904: match api method with real aws s3 api call 2021-02-05 18:36:48 +01:00
bert-e
7d58ca38ce Merge branch 'bugfix/S3C-3904-better-s3-action-logs' into tmp/octopus/w/7.9/bugfix/S3C-3904-better-s3-action-logs 2021-02-05 01:10:08 +00:00
alexandre merle
9484366844 bugfix: S3C-3904: better-s3-action-logs
Introduce a map meant to override default
actionMap values for S3; it will be used in logs
to monitor the S3 actions instead of the IAM
permissions needed for each action
2021-02-05 02:09:08 +01:00
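A sketch of the override-map idea (map entries illustrative): the logged action prefers the real S3 API name over the IAM permission:

```javascript
// Illustrative override entries: API method -> real S3 action name.
const actionMonitoringMapS3 = {
    bucketDelete: 'DeleteBucket',
    objectPut: 'PutObject',
};

// Default map: API method -> IAM permission needed for that action.
const actionMapS3 = {
    bucketDelete: 's3:DeleteBucket',
    objectPut: 's3:PutObject',
};

function actionForLog(apiMethod) {
    // Prefer the monitoring override so logs show the S3 API call itself.
    return actionMonitoringMapS3[apiMethod] || actionMapS3[apiMethod];
}

console.log(actionForLog('objectPut')); // 'PutObject'
```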
401 changed files with 31897 additions and 22404 deletions

.github/workflows/codeql.yaml vendored Normal file

@@ -0,0 +1,25 @@
---
name: codeQL

on:
  push:
    branches: [development/*, stabilization/*, hotfix/*]
  pull_request:
    branches: [development/*, stabilization/*, hotfix/*]
  workflow_dispatch:

jobs:
  analyze:
    name: Static analysis with CodeQL
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v4

    - name: Initialize CodeQL
      uses: github/codeql-action/init@v3
      with:
        languages: javascript, typescript

    - name: Build and analyze
      uses: github/codeql-action/analyze@v3


@@ -0,0 +1,16 @@
---
name: dependency review

on:
  pull_request:
    branches: [development/*, stabilization/*, hotfix/*]

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
    - name: 'Checkout Repository'
      uses: actions/checkout@v4
    - name: 'Dependency Review'
      uses: actions/dependency-review-action@v4

.github/workflows/tests.yaml vendored Normal file

@@ -0,0 +1,76 @@
---
name: tests

on:
  push:
    branches-ignore:
    - 'development/**'

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      # Label used to access the service container
      redis:
        # Docker Hub image
        image: redis
        # Set health checks to wait until redis has started
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
        # Maps port 6379 on service container to the host
        - 6379:6379
    steps:
    - name: Checkout
      uses: actions/checkout@v2
    - uses: actions/setup-node@v2
      with:
        node-version: '16'
        cache: 'yarn'
    - name: install dependencies
      run: yarn install --frozen-lockfile --prefer-offline
      continue-on-error: true # TODO ARSN-97 Remove it when no errors in TS
    - name: lint yaml
      run: yarn --silent lint_yml
    - name: lint javascript
      run: yarn --silent lint -- --max-warnings 0
    - name: lint markdown
      run: yarn --silent lint_md
    - name: run unit tests
      run: yarn test
    - name: run functional tests
      run: yarn ft_test
    - name: run executables tests
      run: yarn install && yarn test
      working-directory: 'lib/executables/pensieveCreds/'

  compile:
    name: Compile and upload build artifacts
    needs: test
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v4
    - name: Install NodeJS
      uses: actions/setup-node@v4
      with:
        node-version: '16'
        cache: yarn
    - name: Install dependencies
      run: yarn install --frozen-lockfile --prefer-offline
      continue-on-error: true # TODO ARSN-97 Remove it when no errors in TS
    - name: Compile
      run: yarn build
      continue-on-error: true # TODO ARSN-97 Remove it when no errors in TS
    - name: Upload artifacts
      uses: scality/action-artifacts@v4
      with:
        url: https://artifacts.scality.net
        user: ${{ secrets.ARTIFACTS_USER }}
        password: ${{ secrets.ARTIFACTS_PASSWORD }}
        source: ./build
        method: upload
      if: success()

.gitignore vendored

@@ -10,3 +10,9 @@ node_modules/
*-linux
*-macos
# Coverage
coverage/
.nyc_output/
# TypeScript
build/

.npmignore Normal file

babel.config.js Normal file

@@ -0,0 +1,6 @@
module.exports = {
    presets: [
        ['@babel/preset-env', { targets: { node: 'current' } }],
        '@babel/preset-typescript',
    ],
};

BucketInfoModelVersion.md Normal file

@@ -0,0 +1,144 @@
# BucketInfo Model Version History
## Model Version 0/1
### Properties
``` javascript
this._acl = aclInstance;
this._name = name;
this._owner = owner;
this._ownerDisplayName = ownerDisplayName;
this._creationDate = creationDate;
```
### Usage
No explicit references in the code, since the mdBucketModelVersion
property was not added until Model Version 2
## Model Version 2
### Properties Added
``` javascript
this._mdBucketModelVersion = mdBucketModelVersion || 0
this._transient = transient || false;
this._deleted = deleted || false;
```
### Usage
Used to determine which splitter to use (< 2 means old splitter)
## Model Version 3
### Properties Added
```javascript
this._serverSideEncryption = serverSideEncryption || null;
```
### Usage
Used to store the server bucket encryption info
## Model Version 4
### Properties Added
```javascript
this._locationConstraint = LocationConstraint || null;
```
### Usage
Used to store the location constraint of the bucket
## Model Version 5
### Properties Added
```javascript
this._websiteConfiguration = websiteConfiguration || null;
this._cors = cors || null;
```
### Usage
Used to store the bucket website configuration info
and to store CORS rules to apply to cross-domain requests
## Model Version 6
### Properties Added
```javascript
this._lifecycleConfiguration = lifecycleConfiguration || null;
```
### Usage
Used to store the bucket lifecycle configuration info
## Model Version 7
### Properties Added
```javascript
this._objectLockEnabled = objectLockEnabled || false;
this._objectLockConfiguration = objectLockConfiguration || null;
```
### Usage
Used to determine whether object lock capabilities are enabled on a bucket and
to store the object lock configuration of the bucket
## Model Version 8
### Properties Added
```javascript
this._notificationConfiguration = notificationConfiguration || null;
```
### Usage
Used to store the bucket notification configuration info
## Model Version 9
### Properties Added
```javascript
this._serverSideEncryption.configuredMasterKeyId = configuredMasterKeyId || undefined;
```
### Usage
Used to store the user's configured KMS key id
## Model Version 10
### Properties Added
```javascript
this._uid = uid || uuid();
```
### Usage
Used to set a unique identifier on a bucket
## Model Version 11
### Properties Added
```javascript
this._tags = tags || null;
```
### Usage
Used to store bucket tagging

delimiter.md Normal file

@@ -0,0 +1,27 @@
# Delimiter
The Delimiter class handles raw listings from the database with an
optional delimiter, and fills in a curated listing with "Contents" and
"CommonPrefixes" as a result.
## Expected Behavior
- only lists keys belonging to the given **prefix** (if provided)
- groups listed keys that have a common prefix ending with a delimiter
inside CommonPrefixes
- can take a **marker** or **continuationToken** to list from a specific key
- can take a **maxKeys** parameter to limit how many keys can be returned
## State Chart
- States with grey background are *Idle* states, which are waiting for
a new listing key
- States with blue background are *Processing* states, which are
actively processing a new listing key passed by the filter()
function
![Delimiter State Chart](./pics/delimiterStateChart.svg)
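Below is a minimal, illustrative sketch of the curation described above (the real Delimiter class also handles markers, continuation tokens, maxKeys and the state machine pictured in the chart):

```javascript
// Curate raw { key, value } entries into Contents / CommonPrefixes.
function curateListing(entries, prefix = '', delimiter = '/') {
    const result = { Contents: [], CommonPrefixes: [] };
    entries.forEach(({ key, value }) => {
        if (!key.startsWith(prefix)) {
            return; // only list keys under the given prefix
        }
        const idx = key.indexOf(delimiter, prefix.length);
        if (idx === -1) {
            result.Contents.push({ key, value });
        } else {
            // group keys sharing a prefix that ends with the delimiter
            const commonPrefix = key.slice(0, idx + 1);
            if (!result.CommonPrefixes.includes(commonPrefix)) {
                result.CommonPrefixes.push(commonPrefix);
            }
        }
    });
    return result;
}
```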


@@ -0,0 +1,45 @@
# DelimiterMaster
The DelimiterMaster class handles raw listings from the database of a
versioned or non-versioned bucket with an optional delimiter, and
fills in a curated listing with "Contents" and "CommonPrefixes" as a
result.
## Expected Behavior
- only lists latest versions of versioned buckets
- only lists keys belonging to the given **prefix** (if provided)
- does not list latest versions that are delete markers
- groups listed keys that have a common prefix ending with a delimiter
inside CommonPrefixes
- can take a **marker** or **continuationToken** to list from a specific key
- can take a **maxKeys** parameter to limit how many keys can be returned
- reconciles internal PHD keys with the next version (those are
created when a specific version that is the latest version is
deleted)
- skips internal keys like replay keys
## State Chart
- States with grey background are *Idle* states, which are waiting for
a new listing key
- States with blue background are *Processing* states, which are
actively processing a new listing key passed by the filter()
function
### Bucket Vformat=v0
![DelimiterMaster State Chart for v0 format](./pics/delimiterMasterV0StateChart.svg)
### Bucket Vformat=v1
For buckets in versioning key format **v1**, the algorithm used is the
one from [Delimiter](delimiter.md).


@@ -0,0 +1,45 @@
digraph {
node [shape="box",style="filled,rounded",fontsize=16,fixedsize=true,width=3];
edge [fontsize=14];
rankdir=TB;
START [shape="circle",width=0.2,label="",style="filled",fillcolor="black"]
END [shape="circle",width=0.2,label="",style="filled",fillcolor="black",peripheries=2]
node [fillcolor="lightgrey"];
"NotSkippingPrefixNorVersions.Idle" [label="NotSkippingPrefixNorVersions",group="NotSkippingPrefixNorVersions",width=4];
"SkippingPrefix.Idle" [label="SkippingPrefix",group="SkippingPrefix"];
"SkippingVersions.Idle" [label="SkippingVersions",group="SkippingVersions"];
"WaitVersionAfterPHD.Idle" [label="WaitVersionAfterPHD",group="WaitVersionAfterPHD"];
node [fillcolor="lightblue"];
"NotSkippingPrefixNorVersions.Processing" [label="NotSkippingPrefixNorVersions",group="NotSkippingPrefixNorVersions",width=4];
"SkippingPrefix.Processing" [label="SkippingPrefix",group="SkippingPrefix"];
"SkippingVersions.Processing" [label="SkippingVersions",group="SkippingVersions"];
"WaitVersionAfterPHD.Processing" [label="WaitVersionAfterPHD",group="WaitVersionAfterPHD"];
START -> "SkippingVersions.Idle" [label="[marker != undefined]"]
START -> "NotSkippingPrefixNorVersions.Idle" [label="[marker == undefined]"]
"NotSkippingPrefixNorVersions.Idle" -> "NotSkippingPrefixNorVersions.Processing" [label="filter(key, value)"]
"SkippingPrefix.Idle" -> "SkippingPrefix.Processing" [label="filter(key, value)"]
"SkippingVersions.Idle" -> "SkippingVersions.Processing" [label="filter(key, value)"]
"WaitVersionAfterPHD.Idle" -> "WaitVersionAfterPHD.Processing" [label="filter(key, value)"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingVersions.Idle" [label="[Version.isDeleteMarker(value)]\n-> FILTER_ACCEPT"]
"NotSkippingPrefixNorVersions.Processing" -> "WaitVersionAfterPHD.Idle" [label="[Version.isPHD(value)]\n-> FILTER_ACCEPT"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(<ReplayPrefix>)]\n/ prefix <- <ReplayPrefix>\n-> FILTER_SKIP"]
"NotSkippingPrefixNorVersions.Processing" -> END [label="[isListableKey(key, value) and\nKeys == maxKeys]\n-> FILTER_END"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingPrefix.Idle" [label="[isListableKey(key, value) and\nnKeys < maxKeys and\nhasDelimiter(key)]\n/ prefix <- prefixOf(key)\n/ CommonPrefixes.append(prefixOf(key))\n-> FILTER_ACCEPT"]
"NotSkippingPrefixNorVersions.Processing" -> "SkippingVersions.Idle" [label="[isListableKey(key, value) and\nnKeys < maxKeys and\nnot hasDelimiter(key)]\n/ Contents.append(key, value)\n-> FILTER_ACCEPT"]
"SkippingPrefix.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(prefix)]\n-> FILTER_SKIP"]
"SkippingPrefix.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[not key.startsWith(prefix)]"]
"SkippingVersions.Processing" -> "SkippingVersions.Idle" [label="[isVersionKey(key)]\n-> FILTER_SKIP"]
"SkippingVersions.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[not isVersionKey(key)]"]
"WaitVersionAfterPHD.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[isVersionKey(key) and master(key) == PHDkey]\n/ key <- master(key)"]
"WaitVersionAfterPHD.Processing" -> "NotSkippingPrefixNorVersions.Processing" [label="[not isVersionKey(key) or master(key) != PHDkey]"]
}

pics/delimiterMasterV0StateChart.svg Normal file

@@ -0,0 +1,216 @@
[SVG omitted: graphviz 2.43.0 rendering of the DelimiterMaster v0 state chart; identical in content to the .dot source above.]



@@ -0,0 +1,35 @@
digraph {
node [shape="box",style="filled,rounded",fontsize=16,fixedsize=true,width=3];
edge [fontsize=14];
rankdir=TB;
START [shape="circle",width=0.2,label="",style="filled",fillcolor="black"]
END [shape="circle",width=0.2,label="",style="filled",fillcolor="black",peripheries=2]
node [fillcolor="lightgrey"];
"NotSkipping.Idle" [label="NotSkipping",group="NotSkipping"];
"NeverSkipping.Idle" [label="NeverSkipping",group="NeverSkipping"];
"NotSkippingPrefix.Idle" [label="NotSkippingPrefix",group="NotSkippingPrefix"];
"SkippingPrefix.Idle" [label="SkippingPrefix",group="SkippingPrefix"];
node [fillcolor="lightblue"];
"NeverSkipping.Processing" [label="NeverSkipping",group="NeverSkipping"];
"NotSkippingPrefix.Processing" [label="NotSkippingPrefix",group="NotSkippingPrefix"];
"SkippingPrefix.Processing" [label="SkippingPrefix",group="SkippingPrefix"];
START -> "NotSkipping.Idle"
"NotSkipping.Idle" -> "NeverSkipping.Idle" [label="[delimiter == undefined]"]
"NotSkipping.Idle" -> "NotSkippingPrefix.Idle" [label="[delimiter == '/']"]
"NeverSkipping.Idle" -> "NeverSkipping.Processing" [label="filter(key, value)"]
"NotSkippingPrefix.Idle" -> "NotSkippingPrefix.Processing" [label="filter(key, value)"]
"SkippingPrefix.Idle" -> "SkippingPrefix.Processing" [label="filter(key, value)"]
"NeverSkipping.Processing" -> END [label="[nKeys == maxKeys]\n-> FILTER_END"]
"NeverSkipping.Processing" -> "NeverSkipping.Idle" [label="[nKeys < maxKeys]\n/ Contents.append(key, value)\n -> FILTER_ACCEPT"]
"NotSkippingPrefix.Processing" -> END [label="[nKeys == maxKeys]\n -> FILTER_END"]
"NotSkippingPrefix.Processing" -> "SkippingPrefix.Idle" [label="[nKeys < maxKeys and hasDelimiter(key)]\n/ prefix <- prefixOf(key)\n/ CommonPrefixes.append(prefixOf(key))\n-> FILTER_ACCEPT"]
"NotSkippingPrefix.Processing" -> "NotSkippingPrefix.Idle" [label="[nKeys < maxKeys and not hasDelimiter(key)]\n/ Contents.append(key, value)\n -> FILTER_ACCEPT"]
"SkippingPrefix.Processing" -> "SkippingPrefix.Idle" [label="[key.startsWith(prefix)]\n-> FILTER_SKIP"]
"SkippingPrefix.Processing" -> "NotSkippingPrefix.Processing" [label="[not key.startsWith(prefix)]"]
}

pics/delimiterStateChart.svg Normal file

@@ -0,0 +1,166 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!-- Generated by graphviz version 2.43.0 (0)
-->
<!-- Title: %3 Pages: 1 -->
<svg width="975pt" height="533pt"
viewBox="0.00 0.00 975.00 533.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 529)">
<title>%3</title>
<polygon fill="white" stroke="transparent" points="-4,4 -4,-529 971,-529 971,4 -4,4"/>
<!-- START -->
<g id="node1" class="node">
<title>START</title>
<ellipse fill="black" stroke="black" cx="283" cy="-518" rx="7" ry="7"/>
</g>
<!-- NotSkipping.Idle -->
<g id="node3" class="node">
<title>NotSkipping.Idle</title>
<path fill="lightgrey" stroke="black" d="M379,-474C379,-474 187,-474 187,-474 181,-474 175,-468 175,-462 175,-462 175,-450 175,-450 175,-444 181,-438 187,-438 187,-438 379,-438 379,-438 385,-438 391,-444 391,-450 391,-450 391,-462 391,-462 391,-468 385,-474 379,-474"/>
<text text-anchor="middle" x="283" y="-452.2" font-family="Times,serif" font-size="16.00">NotSkipping</text>
</g>
<!-- START&#45;&gt;NotSkipping.Idle -->
<g id="edge1" class="edge">
<title>START&#45;&gt;NotSkipping.Idle</title>
<path fill="none" stroke="black" d="M283,-510.58C283,-504.23 283,-494.07 283,-484.3"/>
<polygon fill="black" stroke="black" points="286.5,-484.05 283,-474.05 279.5,-484.05 286.5,-484.05"/>
</g>
<!-- END -->
<g id="node2" class="node">
<title>END</title>
<ellipse fill="black" stroke="black" cx="196" cy="-120" rx="7" ry="7"/>
<ellipse fill="none" stroke="black" cx="196" cy="-120" rx="11" ry="11"/>
</g>
<!-- NeverSkipping.Idle -->
<g id="node4" class="node">
<title>NeverSkipping.Idle</title>
<path fill="lightgrey" stroke="black" d="M262,-387C262,-387 70,-387 70,-387 64,-387 58,-381 58,-375 58,-375 58,-363 58,-363 58,-357 64,-351 70,-351 70,-351 262,-351 262,-351 268,-351 274,-357 274,-363 274,-363 274,-375 274,-375 274,-381 268,-387 262,-387"/>
<text text-anchor="middle" x="166" y="-365.2" font-family="Times,serif" font-size="16.00">NeverSkipping</text>
</g>
<!-- NotSkipping.Idle&#45;&gt;NeverSkipping.Idle -->
<g id="edge2" class="edge">
<title>NotSkipping.Idle&#45;&gt;NeverSkipping.Idle</title>
<path fill="none" stroke="black" d="M216.5,-437.82C206.51,-433.18 196.91,-427.34 189,-420 182.25,-413.74 177.33,-405.11 173.81,-396.79"/>
<polygon fill="black" stroke="black" points="177.05,-395.47 170.3,-387.31 170.49,-397.9 177.05,-395.47"/>
<text text-anchor="middle" x="279.5" y="-408.8" font-family="Times,serif" font-size="14.00">[delimiter == undefined]</text>
</g>
<!-- NotSkippingPrefix.Idle -->
<g id="node5" class="node">
<title>NotSkippingPrefix.Idle</title>
<path fill="lightgrey" stroke="black" d="M496,-387C496,-387 304,-387 304,-387 298,-387 292,-381 292,-375 292,-375 292,-363 292,-363 292,-357 298,-351 304,-351 304,-351 496,-351 496,-351 502,-351 508,-357 508,-363 508,-363 508,-375 508,-375 508,-381 502,-387 496,-387"/>
<text text-anchor="middle" x="400" y="-365.2" font-family="Times,serif" font-size="16.00">NotSkippingPrefix</text>
</g>
<!-- NotSkipping.Idle&#45;&gt;NotSkippingPrefix.Idle -->
<g id="edge3" class="edge">
<title>NotSkipping.Idle&#45;&gt;NotSkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M340.77,-437.93C351.2,-433.2 361.45,-427.29 370,-420 377.58,-413.53 383.76,-404.65 388.51,-396.16"/>
<polygon fill="black" stroke="black" points="391.63,-397.74 393.08,-387.24 385.4,-394.54 391.63,-397.74"/>
<text text-anchor="middle" x="442.5" y="-408.8" font-family="Times,serif" font-size="14.00">[delimiter == &#39;/&#39;]</text>
</g>
<!-- NeverSkipping.Processing -->
<g id="node7" class="node">
<title>NeverSkipping.Processing</title>
<path fill="lightblue" stroke="black" d="M204,-270C204,-270 12,-270 12,-270 6,-270 0,-264 0,-258 0,-258 0,-246 0,-246 0,-240 6,-234 12,-234 12,-234 204,-234 204,-234 210,-234 216,-240 216,-246 216,-246 216,-258 216,-258 216,-264 210,-270 204,-270"/>
<text text-anchor="middle" x="108" y="-248.2" font-family="Times,serif" font-size="16.00">NeverSkipping</text>
</g>
<!-- NeverSkipping.Idle&#45;&gt;NeverSkipping.Processing -->
<g id="edge4" class="edge">
<title>NeverSkipping.Idle&#45;&gt;NeverSkipping.Processing</title>
<path fill="none" stroke="black" d="M64.1,-350.93C47.33,-346.11 33.58,-340.17 28,-333 15.72,-317.21 17.05,-304.74 28,-288 30.93,-283.52 34.58,-279.6 38.69,-276.19"/>
<polygon fill="black" stroke="black" points="40.97,-278.86 47.1,-270.22 36.92,-273.16 40.97,-278.86"/>
<text text-anchor="middle" x="86" y="-306.8" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- NotSkippingPrefix.Processing -->
<g id="node8" class="node">
<title>NotSkippingPrefix.Processing</title>
<path fill="lightblue" stroke="black" d="M554,-270C554,-270 362,-270 362,-270 356,-270 350,-264 350,-258 350,-258 350,-246 350,-246 350,-240 356,-234 362,-234 362,-234 554,-234 554,-234 560,-234 566,-240 566,-246 566,-246 566,-258 566,-258 566,-264 560,-270 554,-270"/>
<text text-anchor="middle" x="458" y="-248.2" font-family="Times,serif" font-size="16.00">NotSkippingPrefix</text>
</g>
<!-- NotSkippingPrefix.Idle&#45;&gt;NotSkippingPrefix.Processing -->
<g id="edge5" class="edge">
<title>NotSkippingPrefix.Idle&#45;&gt;NotSkippingPrefix.Processing</title>
<path fill="none" stroke="black" d="M395.69,-350.84C392.38,-333.75 390.03,-307.33 401,-288 403.42,-283.74 406.58,-279.94 410.19,-276.55"/>
<polygon fill="black" stroke="black" points="412.5,-279.18 418.1,-270.18 408.11,-273.73 412.5,-279.18"/>
<text text-anchor="middle" x="459" y="-306.8" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- SkippingPrefix.Idle -->
<g id="node6" class="node">
<title>SkippingPrefix.Idle</title>
<path fill="lightgrey" stroke="black" d="M554,-138C554,-138 362,-138 362,-138 356,-138 350,-132 350,-126 350,-126 350,-114 350,-114 350,-108 356,-102 362,-102 362,-102 554,-102 554,-102 560,-102 566,-108 566,-114 566,-114 566,-126 566,-126 566,-132 560,-138 554,-138"/>
<text text-anchor="middle" x="458" y="-116.2" font-family="Times,serif" font-size="16.00">SkippingPrefix</text>
</g>
<!-- SkippingPrefix.Processing -->
<g id="node9" class="node">
<title>SkippingPrefix.Processing</title>
<path fill="lightblue" stroke="black" d="M691,-36C691,-36 499,-36 499,-36 493,-36 487,-30 487,-24 487,-24 487,-12 487,-12 487,-6 493,0 499,0 499,0 691,0 691,0 697,0 703,-6 703,-12 703,-12 703,-24 703,-24 703,-30 697,-36 691,-36"/>
<text text-anchor="middle" x="595" y="-14.2" font-family="Times,serif" font-size="16.00">SkippingPrefix</text>
</g>
<!-- SkippingPrefix.Idle&#45;&gt;SkippingPrefix.Processing -->
<g id="edge6" class="edge">
<title>SkippingPrefix.Idle&#45;&gt;SkippingPrefix.Processing</title>
<path fill="none" stroke="black" d="M452.35,-101.95C448.76,-87.65 446.54,-67.45 457,-54 461.44,-48.29 471.08,-43.36 483.3,-39.15"/>
<polygon fill="black" stroke="black" points="484.61,-42.41 493.1,-36.07 482.51,-35.73 484.61,-42.41"/>
<text text-anchor="middle" x="515" y="-65.3" font-family="Times,serif" font-size="14.00">filter(key, value)</text>
</g>
<!-- NeverSkipping.Processing&#45;&gt;END -->
<g id="edge7" class="edge">
<title>NeverSkipping.Processing&#45;&gt;END</title>
<path fill="none" stroke="black" d="M102.91,-233.88C97.93,-213.45 93.18,-179.15 109,-156 123.79,-134.35 154.41,-126.09 175.08,-122.94"/>
<polygon fill="black" stroke="black" points="175.62,-126.4 185.11,-121.69 174.76,-119.45 175.62,-126.4"/>
<text text-anchor="middle" x="185" y="-189.8" font-family="Times,serif" font-size="14.00">[nKeys == maxKeys]</text>
<text text-anchor="middle" x="185" y="-174.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_END</text>
</g>
<!-- NeverSkipping.Processing&#45;&gt;NeverSkipping.Idle -->
<g id="edge8" class="edge">
<title>NeverSkipping.Processing&#45;&gt;NeverSkipping.Idle</title>
<path fill="none" stroke="black" d="M129.49,-270.27C134.87,-275.48 140.18,-281.55 144,-288 153.56,-304.17 159.09,-324.63 162.21,-340.81"/>
<polygon fill="black" stroke="black" points="158.78,-341.49 163.94,-350.74 165.68,-340.29 158.78,-341.49"/>
<text text-anchor="middle" x="265.5" y="-321.8" font-family="Times,serif" font-size="14.00">[nKeys &lt; maxKeys]</text>
<text text-anchor="middle" x="265.5" y="-306.8" font-family="Times,serif" font-size="14.00">/ Contents.append(key, value)</text>
<text text-anchor="middle" x="265.5" y="-291.8" font-family="Times,serif" font-size="14.00"> &#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- NotSkippingPrefix.Processing&#45;&gt;END -->
<g id="edge9" class="edge">
<title>NotSkippingPrefix.Processing&#45;&gt;END</title>
<path fill="none" stroke="black" d="M349.96,-237.93C333,-232.81 316.36,-225.74 302,-216 275.27,-197.87 285.01,-177.6 261,-156 247.64,-143.98 229.41,-134.62 215.65,-128.62"/>
<polygon fill="black" stroke="black" points="216.74,-125.28 206.16,-124.7 214.07,-131.75 216.74,-125.28"/>
<text text-anchor="middle" x="378" y="-189.8" font-family="Times,serif" font-size="14.00">[nKeys == maxKeys]</text>
<text text-anchor="middle" x="378" y="-174.8" font-family="Times,serif" font-size="14.00"> &#45;&gt; FILTER_END</text>
</g>
<!-- NotSkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Idle -->
<g id="edge11" class="edge">
<title>NotSkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M499.64,-270.11C506.59,-274.86 512.87,-280.76 517,-288 526.9,-305.38 528.94,-316.96 517,-333 513.56,-337.62 509.53,-341.66 505.07,-345.18"/>
<polygon fill="black" stroke="black" points="502.89,-342.43 496.63,-350.98 506.85,-348.2 502.89,-342.43"/>
<text text-anchor="middle" x="690.5" y="-321.8" font-family="Times,serif" font-size="14.00">[nKeys &lt; maxKeys and not hasDelimiter(key)]</text>
<text text-anchor="middle" x="690.5" y="-306.8" font-family="Times,serif" font-size="14.00">/ Contents.append(key, value)</text>
<text text-anchor="middle" x="690.5" y="-291.8" font-family="Times,serif" font-size="14.00"> &#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- NotSkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle -->
<g id="edge10" class="edge">
<title>NotSkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M458,-233.74C458,-211.98 458,-174.32 458,-148.56"/>
<polygon fill="black" stroke="black" points="461.5,-148.33 458,-138.33 454.5,-148.33 461.5,-148.33"/>
<text text-anchor="middle" x="609.5" y="-204.8" font-family="Times,serif" font-size="14.00">[nKeys &lt; maxKeys and hasDelimiter(key)]</text>
<text text-anchor="middle" x="609.5" y="-189.8" font-family="Times,serif" font-size="14.00">/ prefix &lt;&#45; prefixOf(key)</text>
<text text-anchor="middle" x="609.5" y="-174.8" font-family="Times,serif" font-size="14.00">/ CommonPrefixes.append(prefixOf(key))</text>
<text text-anchor="middle" x="609.5" y="-159.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_ACCEPT</text>
</g>
<!-- SkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle -->
<g id="edge12" class="edge">
<title>SkippingPrefix.Processing&#45;&gt;SkippingPrefix.Idle</title>
<path fill="none" stroke="black" d="M593.49,-36.23C591.32,-50.84 586,-71.39 573,-84 567.75,-89.09 561.77,-93.45 555.38,-97.17"/>
<polygon fill="black" stroke="black" points="553.66,-94.12 546.43,-101.87 556.91,-100.32 553.66,-94.12"/>
<text text-anchor="middle" x="672" y="-72.8" font-family="Times,serif" font-size="14.00">[key.startsWith(prefix)]</text>
<text text-anchor="middle" x="672" y="-57.8" font-family="Times,serif" font-size="14.00">&#45;&gt; FILTER_SKIP</text>
</g>
<!-- SkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Processing -->
<g id="edge13" class="edge">
<title>SkippingPrefix.Processing&#45;&gt;NotSkippingPrefix.Processing</title>
<path fill="none" stroke="black" d="M703.16,-31.64C728.6,-36.87 750.75,-44.11 759,-54 778.46,-77.34 776.26,-200.01 762,-216 749.37,-230.17 656.13,-239.42 576.2,-244.84"/>
<polygon fill="black" stroke="black" points="575.77,-241.36 566.03,-245.51 576.24,-248.34 575.77,-241.36"/>
<text text-anchor="middle" x="870" y="-116.3" font-family="Times,serif" font-size="14.00">[not key.startsWith(prefix)]</text>
</g>
</g>
</svg>
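The diagram above describes the filter() state machine used by the delimiter listing algorithms: while no delimiter has been seen the listing stays in NotSkippingPrefix, and once a key contains the delimiter its common prefix is recorded and the machine moves to SkippingPrefix until a key falls outside that prefix. Below is a minimal TypeScript sketch of that machine, not the library's actual code; the class name and the prefixOf() helper are assumptions for illustration.

type FilterResult = 'FILTER_ACCEPT' | 'FILTER_SKIP' | 'FILTER_END';

// Minimal sketch of the diagram's states; not the library's implementation.
class DelimiterStateMachineSketch {
    private state:
        | { id: 'NotSkippingPrefix' }
        | { id: 'SkippingPrefix'; prefix: string } = { id: 'NotSkippingPrefix' };
    private nKeys = 0;
    readonly Contents: { key: string; value: string }[] = [];
    readonly CommonPrefixes: string[] = [];

    constructor(private maxKeys: number, private delimiter: string) {}

    // prefixOf(key): the key up to and including the first delimiter (assumed helper)
    private prefixOf(key: string): string {
        return key.slice(0, key.indexOf(this.delimiter) + this.delimiter.length);
    }

    filter(key: string, value: string): FilterResult {
        const state = this.state;
        if (state.id === 'SkippingPrefix') {
            if (key.startsWith(state.prefix)) {
                return 'FILTER_SKIP'; // [key.startsWith(prefix)] -> FILTER_SKIP
            }
            // [not key.startsWith(prefix)]: resume normal processing
            this.state = { id: 'NotSkippingPrefix' };
        }
        if (this.nKeys === this.maxKeys) {
            return 'FILTER_END'; // [nKeys == maxKeys] -> FILTER_END
        }
        if (key.includes(this.delimiter)) { // hasDelimiter(key)
            const prefix = this.prefixOf(key);
            this.CommonPrefixes.push(prefix);
            this.nKeys += 1;
            this.state = { id: 'SkippingPrefix', prefix };
        } else {
            this.Contents.push({ key, value });
            this.nKeys += 1;
        }
        return 'FILTER_ACCEPT'; // [nKeys < maxKeys] -> FILTER_ACCEPT
    }
}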


@@ -1,760 +0,0 @@
{
"_comment": "------------------- Amazon errors ------------------",
"AccessDenied": {
"code": 403,
"description": "Access Denied"
},
"AccessForbidden": {
"code": 403,
"description": "Access Forbidden"
},
"AccountProblem": {
"code": 403,
"description": "There is a problem with your AWS account that prevents the operation from completing successfully. Please use Contact Us."
},
"AmbiguousGrantByEmailAddress": {
"code": 400,
"description": "The email address you provided is associated with more than one account."
},
"BadDigest": {
"code": 400,
"description": "The Content-MD5 you specified did not match what we received."
},
"BucketAlreadyExists": {
"code": 409,
"description": "The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again."
},
"BucketAlreadyOwnedByYou": {
"code": 409,
"description": "Your previous request to create the named bucket succeeded and you already own it. You get this error in all AWS regions except US Standard, us-east-1. In us-east-1 region, you will get 200 OK, but it is no-op (if bucket exists S3 will not do anything)."
},
"BucketNotEmpty": {
"code": 409,
"description": "The bucket you tried to delete is not empty."
},
"CredentialsNotSupported": {
"code": 400,
"description": "This request does not support credentials."
},
"CrossLocationLoggingProhibited": {
"code": 403,
"description": "Cross-location logging not allowed. Buckets in one geographic location cannot log information to a bucket in another location."
},
"DeleteConflict": {
"code": 409,
"description": "The request was rejected because it attempted to delete a resource that has attached subordinate entities. The error message describes these entities."
},
"EntityTooSmall": {
"code": 400,
"description": "Your proposed upload is smaller than the minimum allowed object size."
},
"EntityTooLarge": {
"code": 400,
"description": "Your proposed upload exceeds the maximum allowed object size."
},
"ExpiredToken": {
"code": 400,
"description": "The provided token has expired."
},
"IllegalVersioningConfigurationException": {
"code": 400,
"description": "Indicates that the versioning configuration specified in the request is invalid."
},
"IncompleteBody": {
"code": 400,
"description": "You did not provide the number of bytes specified by the Content-Length HTTP header."
},
"IncorrectNumberOfFilesInPostRequest": {
"code": 400,
"description": "POST requires exactly one file upload per request."
},
"InlineDataTooLarge": {
"code": 400,
"description": "Inline data exceeds the maximum allowed size."
},
"InternalError": {
"code": 500,
"description": "We encountered an internal error. Please try again."
},
"InvalidAccessKeyId": {
"code": 403,
"description": "The AWS access key Id you provided does not exist in our records."
},
"InvalidAddressingHeader": {
"code": 400,
"description": "You must specify the Anonymous role."
},
"InvalidArgument": {
"code": 400,
"description": "Invalid Argument"
},
"InvalidBucketName": {
"code": 400,
"description": "The specified bucket is not valid."
},
"InvalidBucketState": {
"code": 409,
"description": "The request is not valid with the current state of the bucket."
},
"InvalidDigest": {
"code": 400,
"description": "The Content-MD5 you specified is not valid."
},
"InvalidEncryptionAlgorithmError": {
"code": 400,
"description": "The encryption request you specified is not valid. The valid value is AES256."
},
"InvalidLocationConstraint": {
"code": 400,
"description": "The specified location constraint is not valid."
},
"InvalidObjectState": {
"code": 403,
"description": "The operation is not valid for the current state of the object."
},
"InvalidPart": {
"code": 400,
"description": "One or more of the specified parts could not be found. The part might not have been uploaded, or the specified entity tag might not have matched the part's entity tag."
},
"InvalidPartOrder": {
"code": 400,
"description": "The list of parts was not in ascending order.Parts list must specified in order by part number."
},
"InvalidPartNumber": {
"code": 416,
"description": "The requested partnumber is not satisfiable."
},
"InvalidPayer": {
"code": 403,
"description": "All access to this object has been disabled."
},
"InvalidPolicyDocument": {
"code": 400,
"description": "The content of the form does not meet the conditions specified in the policy document."
},
"InvalidRange": {
"code": 416,
"description": "The requested range cannot be satisfied."
},
"InvalidRedirectLocation": {
"code": 400,
"description": "The website redirect location must have a prefix of 'http://' or 'https://' or '/'."
},
"InvalidRequest": {
"code": 400,
"description": "SOAP requests must be made over an HTTPS connection."
},
"InvalidSecurity": {
"code": 403,
"description": "The provided security credentials are not valid."
},
"InvalidSOAPRequest": {
"code": 400,
"description": "The SOAP request body is invalid."
},
"InvalidStorageClass": {
"code": 400,
"description": "The storage class you specified is not valid."
},
"InvalidTag": {
"code": 400,
"description": "The Tag you have provided is invalid"
},
"InvalidTargetBucketForLogging": {
"code": 400,
"description": "The target bucket for logging does not exist, is not owned by you, or does not have the appropriate grants for the log-delivery group."
},
"InvalidToken": {
"code": 400,
"description": "The provided token is malformed or otherwise invalid."
},
"InvalidURI": {
"code": 400,
"description": "Couldn't parse the specified URI."
},
"KeyTooLong": {
"code": 400,
"description": "Your key is too long."
},
"LimitExceeded": {
"code": 409,
"description": " The request was rejected because it attempted to create resources beyond the current AWS account limits. The error message describes the limit exceeded."
},
"MalformedACLError": {
"code": 400,
"description": "The XML you provided was not well-formed or did not validate against our published schema."
},
"MalformedPOSTRequest": {
"code": 400,
"description": "The body of your POST request is not well-formed multipart/form-data."
},
"MalformedXML": {
"code": 400,
"description": "The XML you provided was not well-formed or did not validate against our published schema."
},
"MaxMessageLengthExceeded": {
"code": 400,
"description": "Your request was too big."
},
"MaxPostPreDataLengthExceededError": {
"code": 400,
"description": "Your POST request fields preceding the upload file were too large."
},
"MetadataTooLarge": {
"code": 400,
"description": "Your metadata headers exceed the maximum allowed metadata size."
},
"MethodNotAllowed": {
"code": 405,
"description": "The specified method is not allowed against this resource."
},
"MissingAttachment": {
"code": 400,
"description": "A SOAP attachment was expected, but none were found."
},
"MissingContentLength": {
"code": 411,
"description": "You must provide the Content-Length HTTP header."
},
"MissingRequestBodyError": {
"code": 400,
"description": "Request body is empty"
},
"MissingRequiredParameter": {
"code": 400,
"description": "Your request is missing a required parameter."
},
"MissingSecurityElement": {
"code": 400,
"description": "The SOAP 1.1 request is missing a security element."
},
"MissingSecurityHeader": {
"code": 400,
"description": "Your request is missing a required header."
},
"NoLoggingStatusForKey": {
"code": 400,
"description": "There is no such thing as a logging status subresource for a key."
},
"NoSuchBucket": {
"code": 404,
"description": "The specified bucket does not exist."
},
"NoSuchCORSConfiguration": {
"code": 404,
"description": "The CORS configuration does not exist"
},
"NoSuchKey": {
"code": 404,
"description": "The specified key does not exist."
},
"NoSuchLifecycleConfiguration": {
"code": 404,
"description": "The lifecycle configuration does not exist."
},
"NoSuchObjectLockConfiguration": {
"code": 404,
"description": "The specified object does not have a ObjectLock configuration."
},
"NoSuchWebsiteConfiguration": {
"code": 404,
"description": "The specified bucket does not have a website configuration"
},
"NoSuchUpload": {
"code": 404,
"description": "The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed."
},
"NoSuchVersion": {
"code": 404,
"description": "Indicates that the version ID specified in the request does not match an existing version."
},
"ReplicationConfigurationNotFoundError": {
"code": 404,
"description": "The replication configuration was not found"
},
"ObjectLockConfigurationNotFoundError": {
"code": 404,
"description": "The object lock configuration was not found"
},
"NotImplemented": {
"code": 501,
"description": "A header you provided implies functionality that is not implemented."
},
"NotModified": {
"code": 304,
"description": "Not Modified."
},
"NotSignedUp": {
"code": 403,
"description": "Your account is not signed up for the S3 service. You must sign up before you can use S3. "
},
"NoSuchBucketPolicy": {
"code": 404,
"description": "The specified bucket does not have a bucket policy."
},
"OperationAborted": {
"code": 409,
"description": "A conflicting conditional operation is currently in progress against this resource. Try again."
},
"PermanentRedirect": {
"code": 301,
"description": "The bucket you are attempting to access must be addressed using the specified endpoint. Send all future requests to this endpoint."
},
"PreconditionFailed": {
"code": 412,
"description": "At least one of the preconditions you specified did not hold."
},
"Redirect": {
"code": 307,
"description": "Temporary redirect."
},
"RestoreAlreadyInProgress": {
"code": 409,
"description": "Object restore is already in progress."
},
"RequestIsNotMultiPartContent": {
"code": 400,
"description": "Bucket POST must be of the enclosure-type multipart/form-data."
},
"RequestTimeout": {
"code": 400,
"description": "Your socket connection to the server was not read from or written to within the timeout period."
},
"RequestTimeTooSkewed": {
"code": 403,
"description": "The difference between the request time and the server's time is too large."
},
"RequestTorrentOfBucketError": {
"code": 400,
"description": "Requesting the torrent file of a bucket is not permitted."
},
"SignatureDoesNotMatch": {
"code": 403,
"description": "The request signature we calculated does not match the signature you provided."
},
"_comment" : {
"note" : "This is an AWS S3 specific error. We are opting to use the more general 'ServiceUnavailable' error used throughout AWS (IAM/EC2) to have uniformity of error messages even though we are potentially compromising S3 compatibility.",
"ServiceUnavailable": {
"code": 503,
"description": "Reduce your request rate."
}
},
"ServiceUnavailable": {
"code": 503,
"description": "The request has failed due to a temporary failure of the server."
},
"SlowDown": {
"code": 503,
"description": "Reduce your request rate."
},
"TemporaryRedirect": {
"code": 307,
"description": "You are being redirected to the bucket while DNS updates."
},
"TokenRefreshRequired": {
"code": 400,
"description": "The provided token must be refreshed."
},
"TooManyBuckets": {
"code": 400,
"description": "You have attempted to create more buckets than allowed."
},
"TooManyParts": {
"code": 400,
"description": "You have attempted to upload more parts than allowed."
},
"UnexpectedContent": {
"code": 400,
"description": "This request does not support content."
},
"UnresolvableGrantByEmailAddress": {
"code": 400,
"description": "The email address you provided does not match any account on record."
},
"UserKeyMustBeSpecified": {
"code": 400,
"description": "The bucket POST must contain the specified field name. If it is specified, check the order of the fields."
},
"NoSuchEntity": {
"code": 404,
"description": "The request was rejected because it referenced an entity that does not exist. The error message describes the entity."
},
"WrongFormat": {
"code": 400,
"description": "Data entered by the user has a wrong format."
},
"Forbidden": {
"code": 403,
"description": "Authentication failed."
},
"EntityDoesNotExist": {
"code": 404,
"description": "Not found."
},
"EntityAlreadyExists": {
"code": 409,
"description": "The request was rejected because it attempted to create a resource that already exists."
},
"KeyAlreadyExists": {
"code": 409,
"description": "The request was rejected because it attempted to create a resource that already exists."
},
"ServiceFailure": {
"code": 500,
"description": "Server error: the request processing has failed because of an unknown error, exception or failure."
},
"IncompleteSignature": {
"code": 400,
"description": "The request signature does not conform to AWS standards."
},
"InternalFailure": {
"code": 500,
"description": "The request processing has failed because of an unknown error, exception or failure."
},
"InvalidAction": {
"code": 400,
"description": "The action or operation requested is invalid. Verify that the action is typed correctly."
},
"InvalidClientTokenId": {
"code": 403,
"description": "The X.509 certificate or AWS access key ID provided does not exist in our records."
},
"InvalidParameterCombination": {
"code": 400,
"description": "Parameters that must not be used together were used together."
},
"InvalidParameterValue": {
"code": 400,
"description": "An invalid or out-of-range value was supplied for the input parameter."
},
"InvalidQueryParameter": {
"code": 400,
"description": "The AWS query string is malformed or does not adhere to AWS standards."
},
"MalformedQueryString": {
"code": 404,
"description": "The query string contains a syntax error."
},
"MissingAction": {
"code": 400,
"description": "The request is missing an action or a required parameter."
},
"MissingAuthenticationToken": {
"code": 403,
"description": "The request must contain either a valid (registered) AWS access key ID or X.509 certificate."
},
"MissingParameter": {
"code": 400,
"description": "A required parameter for the specified action is not supplied."
},
"OptInRequired": {
"code": 403,
"description": "The AWS access key ID needs a subscription for the service."
},
"RequestExpired": {
"code": 400,
"description": "The request reached the service more than 15 minutes after the date stamp on the request or more than 15 minutes after the request expiration date (such as for pre-signed URLs), or the date stamp on the request is more than 15 minutes in the future."
},
"Throttling": {
"code": 400,
"description": "The request was denied due to request throttling."
},
"AccountNotFound": {
"code": 404,
"description": "No account was found in Vault, please contact your system administrator."
},
"ValidationError": {
"code": 400,
"description": "The specified value is invalid."
},
"MalformedPolicyDocument": {
"code": 400,
"description": "Syntax errors in policy."
},
"InvalidInput": {
"code": 400,
"description": "The request was rejected because an invalid or out-of-range value was supplied for an input parameter."
},
"MalformedPolicy": {
"code": 400,
"description": "This policy contains invalid Json"
},
"ReportExpired": {
"code": 410,
"description": "The request was rejected because the most recent credential report has expired. To generate a new credential report, use GenerateCredentialReport."
},
"ReportInProgress": {
"code": 404,
"description": "The request was rejected because the credential report is still being generated."
},
"ReportNotPresent": {
"code": 410,
"description": "The request was rejected because the credential report does not exist. To generate a credential report, use GenerateCredentialReport."
},
"_comment": "-------------- Special non-AWS S3 errors --------------",
"MPUinProgress": {
"code": 409,
"description": "The bucket you tried to delete has an ongoing multipart upload."
},
"LocationNotFound": {
"code": 424,
"description": "The object data location does not exist."
},
"_comment": "-------------- Internal project errors --------------",
"_comment": "----------------------- Vault -----------------------",
"_comment": "#### formatErrors ####",
"BadName": {
"description": "name not ok",
"code": 5001
},
"BadAccount": {
"description": "account not ok",
"code": 5002
},
"BadGroup": {
"description": "group not ok",
"code": 5003
},
"BadId": {
"description": "id not ok",
"code": 5004
},
"BadAccountName": {
"description": "accountName not ok",
"code": 5005
},
"BadNameFriendly": {
"description": "nameFriendly not ok",
"code": 5006
},
"BadEmailAddress": {
"description": "email address not ok",
"code": 5007
},
"BadPath": {
"description": "path not ok",
"code": 5008
},
"BadArn": {
"description": "arn not ok",
"code": 5009
},
"BadCreateDate": {
"description": "createDate not ok",
"code": 5010
},
"BadLastUsedDate": {
"description": "lastUsedDate not ok",
"code": 5011
},
"BadNotBefore": {
"description": "notBefore not ok",
"code": 5012
},
"BadNotAfter": {
"description": "notAfter not ok",
"code": 5013
},
"BadSaltedPwd": {
"description": "salted password not ok",
"code": 5014
},
"ok": {
"description": "No error",
"code": 200
},
"BadUser": {
"description": "user not ok",
"code": 5016
},
"BadSaltedPasswd": {
"description": "salted password not ok",
"code": 5017
},
"BadPasswdDate": {
"description": "password date not ok",
"code": 5018
},
"BadCanonicalId": {
"description": "canonicalId not ok",
"code": 5019
},
"BadAlias": {
"description": "alias not ok",
"code": 5020
},
"_comment": "#### internalErrors ####",
"DBPutFailed": {
"description": "DB put failed",
"code": 5021
},
"_comment": "#### alreadyExistErrors ####",
"AccountEmailAlreadyUsed": {
"description": "an other account already uses that email",
"code": 5022
},
"AccountNameAlreadyUsed": {
"description": "an other account already uses that name",
"code": 5023
},
"UserEmailAlreadyUsed": {
"description": "an other user already uses that email",
"code": 5024
},
"UserNameAlreadyUsed": {
"description": "an other user already uses that name",
"code": 5025
},
"_comment": "#### doesntExistErrors ####",
"NoParentAccount": {
"description": "parent account does not exist",
"code": 5026
},
"_comment": "#### authErrors ####",
"BadStringToSign": {
"description": "stringToSign not ok'",
"code": 5027
},
"BadSignatureFromRequest": {
"description": "signatureFromRequest not ok",
"code": 5028
},
"BadAlgorithm": {
"description": "hashAlgorithm not ok",
"code": 5029
},
"SecretKeyDoesNotExist": {
"description": "secret key does not exist",
"code": 5030
},
"InvalidRegion": {
"description": "Region was not provided or is not recognized by the system",
"code": 5031
},
"ScopeDate": {
"description": "scope date is missing, or format is invalid",
"code": 5032
},
"BadAccessKey": {
"description": "access key not ok",
"code": 5033
},
"NoDict": {
"description": "no dictionary of params provided for signature verification",
"code": 5034
},
"BadSecretKey": {
"description": "secretKey not ok",
"code": 5035
},
"BadSecretKeyValue": {
"description": "secretKey value not ok",
"code": 5036
},
"BadSecretKeyStatus": {
"description": "secretKey status not ok",
"code": 5037
},
"_comment": "#### OidcpErrors ####",
"BadUrl": {
"description": "url not ok",
"code": 5038
},
"BadClientIdList": {
"description": "client id list not ok'",
"code": 5039
},
"BadThumbprintList": {
"description": "thumbprint list not ok'",
"code": 5040
},
"BadObject": {
"description": "Object not ok'",
"code": 5041
},
"_comment": "#### RoleErrors ####",
"BadRole": {
"description": "role not ok",
"code": 5042
},
"_comment": "#### SamlpErrors ####",
"BadSamlp": {
"description": "samlp not ok",
"code": 5043
},
"BadMetadataDocument": {
"description": "metadata document not ok",
"code": 5044
},
"BadSessionIndex": {
"description": "session index not ok",
"code": 5045
},
"Unauthorized": {
"description": "not authenticated",
"code": 401
},
"_comment": "--------------------- MetaData ---------------------",
"_comment": "#### formatErrors ####",
"CacheUpdated": {
"description": "The cache has been updated",
"code": 500
},
"DBNotFound": {
"description": "This DB does not exist",
"code": 404
},
"DBAlreadyExists": {
"description": "This DB already exist",
"code": 409
},
"ObjNotFound": {
"description": "This object does not exist",
"code": 404
},
"PermissionDenied": {
"description": "Permission denied",
"code": 403
},
"BadRequest": {
"description": "BadRequest",
"code": 400
},
"RaftSessionNotLeader": {
"description": "NotLeader",
"code": 500
},
"RaftSessionLeaderNotConnected": {
"description": "RaftSessionLeaderNotConnected",
"code": 400
},
"NoLeaderForDB": {
"description": "NoLeaderForDB",
"code": 400
},
"RouteNotFound": {
"description": "RouteNotFound",
"code": 404
},
"NoMapsInConfig": {
"description": "NoMapsInConfig",
"code": 404
},
"DBAPINotReady": {
"message": "DBAPINotReady",
"code": 500
},
"NotEnoughMapsInConfig:": {
"description": "NotEnoughMapsInConfig",
"code": 400
},
"TooManyRequests": {
"description": "TooManyRequests",
"code": 429
},
"_comment": "----------------------- cdmiclient -----------------------",
"ReadOnly": {
"description": "trying to write to read only back-end",
"code": 403
}
}
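The deleted errors.json above is a flat map of error names to an HTTP-style code and a human-readable description, with "_comment" entries interleaved as section markers. The following is a hypothetical sketch of how such a map can be materialized into Error instances; the ArsenalErrorSketch name and the loading code are assumptions, not the repository's actual lib/errors implementation.

// Hypothetical consumption sketch; arsenal's real lib/errors API may differ.
type ErrorDef = { code: number; description?: string; message?: string };

class ArsenalErrorSketch extends Error {
    constructor(readonly type: string, readonly code: number, description: string) {
        super(description);
        this.name = type;
    }
}

function buildErrors(raw: Record<string, unknown>): Record<string, ArsenalErrorSketch> {
    const out: Record<string, ArsenalErrorSketch> = {};
    for (const [type, def] of Object.entries(raw)) {
        // Skip the "_comment" section markers (strings or nested objects).
        if (type === '_comment' || typeof def !== 'object' || def === null) {
            continue;
        }
        const { code, description, message } = def as ErrorDef;
        if (typeof code !== 'number') {
            continue; // e.g. malformed or comment-like entries
        }
        // A few entries use "message" instead of "description"; accept both.
        out[type] = new ArsenalErrorSketch(type, code, description ?? message ?? type);
    }
    return out;
}

// e.g. buildErrors(require('./errors.json')).NoSuchKey.code === 404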


@@ -1,48 +0,0 @@
---
version: 0.2
branches:
default:
stage: pre-merge
stages:
pre-merge:
worker: &master-worker
type: docker
path: eve/workers/master
volumes:
- '/home/eve/workspace'
steps:
- Git:
name: fetch source
repourl: '%(prop:git_reference)s'
shallow: True
retryFetch: True
haltOnFailure: True
- ShellCommand:
name: install dependencies
command: yarn install --frozen-lockfile
- ShellCommand:
name: run lint yml
command: yarn run --silent lint_yml
- ShellCommand:
name: run lint
command: yarn run --silent lint -- --max-warnings 0
- ShellCommand:
name: run lint_md
command: yarn run --silent lint_md
- ShellCommand:
name: add hostname
command: sudo sh -c "echo '127.0.0.1 testrequestbucket.localhost' \
>> /etc/hosts"
- ShellCommand:
name: run test
command: yarn run --silent test
- ShellCommand:
name: run ft_test
command: yarn run ft_test
- ShellCommand:
name: run executables tests
command: yarn install && yarn test
workdir: '%(prop:builddir)s/build/lib/executables/pensieveCreds/'


@@ -1,57 +0,0 @@
FROM ubuntu:trusty
#
# Install apt packages needed by the buildchain
#
ENV LANG C.UTF-8
COPY buildbot_worker_packages.list arsenal_packages.list /tmp/
RUN apt-get update -q && apt-get -qy install curl apt-transport-https \
&& apt-get install -qy software-properties-common python-software-properties \
&& curl --silent https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - \
&& echo "deb https://deb.nodesource.com/node_10.x trusty main" > /etc/apt/sources.list.d/nodesource.list \
&& curl -sS http://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb http://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
&& add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update -q \
&& cat /tmp/buildbot_worker_packages.list | xargs apt-get install -qy \
&& cat /tmp/arsenal_packages.list | xargs apt-get install -qy \
&& pip install pip==9.0.1 \
&& rm -rf /var/lib/apt/lists/* \
&& rm -f /tmp/*_packages.list
#
# Install useful nodejs dependencies
#
RUN yarn global add mocha
#
# Add user eve
#
RUN adduser -u 1042 --home /home/eve --disabled-password --gecos "" eve \
&& adduser eve sudo \
&& sed -ri 's/(%sudo.*)ALL$/\1NOPASSWD:ALL/' /etc/sudoers
#
# Run buildbot-worker on startup
#
ARG BUILDBOT_VERSION=0.9.12
RUN pip install yamllint
RUN pip install buildbot-worker==$BUILDBOT_VERSION
USER eve
ENV HOME /home/eve
#
# Set up nodejs environment
#
ENV CXX=g++-4.9
ENV LANG C.UTF-8
WORKDIR /home/eve/workspace
CMD buildbot-worker create-worker . "$BUILDMASTER:$BUILDMASTER_PORT" "$WORKERNAME" "$WORKERPASS" \
&& sudo service redis-server start \
&& buildbot-worker start --nodaemon


@@ -1,4 +0,0 @@
nodejs
redis-server
g++-4.9
yarn


@@ -1,9 +0,0 @@
ca-certificates
git
libffi-dev
libssl-dev
python2.7
python2.7-dev
python-pip
software-properties-common
sudo


@@ -1,28 +0,0 @@
{
"groups": {
"default": {
"packages": [
"lib/executables/pensieveCreds/package.json",
"package.json"
]
}
},
"branchPrefix": "improvement/greenkeeper.io/",
"commitMessages": {
"initialBadge": "docs(readme): add Greenkeeper badge",
"initialDependencies": "chore(package): update dependencies",
"initialBranches": "chore(bert-e): whitelist greenkeeper branches",
"dependencyUpdate": "fix(package): update ${dependency} to version ${version}",
"devDependencyUpdate": "chore(package): update ${dependency} to version ${version}",
"dependencyPin": "fix: pin ${dependency} to ${oldVersionResolved}",
"devDependencyPin": "chore: pin ${dependency} to ${oldVersionResolved}",
"closes": "\n\nCloses #${number}"
},
"ignore": [
"ajv",
"eslint",
"eslint-plugin-react",
"eslint-config-airbnb",
"eslint-config-scality"
]
}

index.js

@@ -1,189 +0,0 @@
module.exports = {
auth: require('./lib/auth/auth'),
constants: require('./lib/constants'),
db: require('./lib/db'),
errors: require('./lib/errors.js'),
errorUtils: require('./lib/errorUtils'),
shuffle: require('./lib/shuffle'),
stringHash: require('./lib/stringHash'),
ipCheck: require('./lib/ipCheck'),
jsutil: require('./lib/jsutil'),
https: {
ciphers: require('./lib/https/ciphers.js'),
dhparam: require('./lib/https/dh2048.js'),
},
algorithms: {
list: require('./lib/algos/list/exportAlgos'),
listTools: {
DelimiterTools: require('./lib/algos/list/tools'),
},
cache: {
LRUCache: require('./lib/algos/cache/LRUCache'),
},
stream: {
MergeStream: require('./lib/algos/stream/MergeStream'),
},
},
policies: {
evaluators: require('./lib/policyEvaluator/evaluator.js'),
validateUserPolicy: require('./lib/policy/policyValidator')
.validateUserPolicy,
evaluatePrincipal: require('./lib/policyEvaluator/principal'),
RequestContext: require('./lib/policyEvaluator/RequestContext.js'),
requestUtils: require('./lib/policyEvaluator/requestUtils'),
actionMaps: require('./lib/policyEvaluator/utils/actionMaps'),
},
Clustering: require('./lib/Clustering'),
testing: {
matrix: require('./lib/testing/matrix.js'),
},
versioning: {
VersioningConstants: require('./lib/versioning/constants.js')
.VersioningConstants,
Version: require('./lib/versioning/Version.js').Version,
VersionID: require('./lib/versioning/VersionID.js'),
},
network: {
http: {
server: require('./lib/network/http/server'),
},
rpc: require('./lib/network/rpc/rpc'),
level: require('./lib/network/rpc/level-net'),
rest: {
RESTServer: require('./lib/network/rest/RESTServer'),
RESTClient: require('./lib/network/rest/RESTClient'),
},
RoundRobin: require('./lib/network/RoundRobin'),
probe: {
HealthProbeServer:
require('./lib/network/probe/HealthProbeServer.js'),
},
kmip: require('./lib/network/kmip'),
kmipClient: require('./lib/network/kmip/Client'),
},
s3routes: {
routes: require('./lib/s3routes/routes'),
routesUtils: require('./lib/s3routes/routesUtils'),
},
s3middleware: {
userMetadata: require('./lib/s3middleware/userMetadata'),
convertToXml: require('./lib/s3middleware/convertToXml'),
escapeForXml: require('./lib/s3middleware/escapeForXml'),
objectLegalHold: require('./lib/s3middleware/objectLegalHold'),
tagging: require('./lib/s3middleware/tagging'),
checkDateModifiedHeaders:
require('./lib/s3middleware/validateConditionalHeaders')
.checkDateModifiedHeaders,
validateConditionalHeaders:
require('./lib/s3middleware/validateConditionalHeaders')
.validateConditionalHeaders,
MD5Sum: require('./lib/s3middleware/MD5Sum'),
NullStream: require('./lib/s3middleware/nullStream'),
objectUtils: require('./lib/s3middleware/objectUtils'),
azureHelper: {
mpuUtils:
require('./lib/s3middleware/azureHelpers/mpuUtils'),
ResultsCollector:
require('./lib/s3middleware/azureHelpers/ResultsCollector'),
SubStreamInterface:
require('./lib/s3middleware/azureHelpers/SubStreamInterface'),
},
prepareStream: require('./lib/s3middleware/prepareStream'),
processMpuParts: require('./lib/s3middleware/processMpuParts'),
retention: require('./lib/s3middleware/objectRetention'),
},
storage: {
metadata: {
MetadataWrapper: require('./lib/storage/metadata/MetadataWrapper'),
bucketclient: {
BucketClientInterface:
require('./lib/storage/metadata/bucketclient/' +
'BucketClientInterface'),
LogConsumer:
require('./lib/storage/metadata/bucketclient/LogConsumer'),
},
file: {
BucketFileInterface:
require('./lib/storage/metadata/file/BucketFileInterface'),
MetadataFileServer:
require('./lib/storage/metadata/file/MetadataFileServer'),
MetadataFileClient:
require('./lib/storage/metadata/file/MetadataFileClient'),
},
inMemory: {
metastore:
require('./lib/storage/metadata/in_memory/metastore'),
metadata: require('./lib/storage/metadata/in_memory/metadata'),
bucketUtilities:
require('./lib/storage/metadata/in_memory/bucket_utilities'),
},
mongoclient: {
MongoClientInterface:
require('./lib/storage/metadata/mongoclient/' +
'MongoClientInterface'),
LogConsumer:
require('./lib/storage/metadata/mongoclient/LogConsumer'),
},
proxy: {
Server: require('./lib/storage/metadata/proxy/Server'),
},
},
data: {
DataWrapper: require('./lib/storage/data/DataWrapper'),
MultipleBackendGateway:
require('./lib/storage/data/MultipleBackendGateway'),
parseLC: require('./lib/storage/data/LocationConstraintParser'),
file: {
DataFileStore:
require('./lib/storage/data/file/DataFileStore'),
DataFileInterface:
require('./lib/storage/data/file/DataFileInterface'),
},
external: {
AwsClient: require('./lib/storage/data/external/AwsClient'),
AzureClient: require('./lib/storage/data/external/AzureClient'),
GcpClient: require('./lib/storage/data/external/GcpClient'),
GCP: require('./lib/storage/data/external/GCP/GcpService'),
GcpUtils: require('./lib/storage/data/external/GCP/GcpUtils'),
GcpSigner: require('./lib/storage/data/external/GCP/GcpSigner'),
PfsClient: require('./lib/storage/data/external/PfsClient'),
backendUtils: require('./lib/storage/data/external/utils'),
},
inMemory: {
datastore: require('./lib/storage/data/in_memory/datastore'),
},
},
utils: require('./lib/storage/utils'),
},
models: {
BackendInfo: require('./lib/models/BackendInfo'),
BucketInfo: require('./lib/models/BucketInfo'),
BucketAzureInfo: require('./lib/models/BucketAzureInfo'),
ObjectMD: require('./lib/models/ObjectMD'),
ObjectMDLocation: require('./lib/models/ObjectMDLocation'),
ObjectMDAzureInfo: require('./lib/models/ObjectMDAzureInfo'),
ARN: require('./lib/models/ARN'),
WebsiteConfiguration: require('./lib/models/WebsiteConfiguration'),
ReplicationConfiguration:
require('./lib/models/ReplicationConfiguration'),
LifecycleConfiguration:
require('./lib/models/LifecycleConfiguration'),
BucketPolicy: require('./lib/models/BucketPolicy'),
ObjectLockConfiguration:
require('./lib/models/ObjectLockConfiguration'),
NotificationConfiguration:
require('./lib/models/NotificationConfiguration'),
},
metrics: {
StatsClient: require('./lib/metrics/StatsClient'),
StatsModel: require('./lib/metrics/StatsModel'),
RedisClient: require('./lib/metrics/RedisClient'),
ZenkoMetrics: require('./lib/metrics/ZenkoMetrics'),
},
pensieve: {
credentialUtils: require('./lib/executables/pensieveCreds/utils'),
},
stream: {
readJSONStreamObject: require('./lib/stream/readJSONStreamObject'),
},
};

index.ts Normal file

@@ -0,0 +1,159 @@
import * as evaluators from './lib/policyEvaluator/evaluator';
import evaluatePrincipal from './lib/policyEvaluator/principal';
import RequestContext from './lib/policyEvaluator/RequestContext';
import * as requestUtils from './lib/policyEvaluator/requestUtils';
import * as actionMaps from './lib/policyEvaluator/utils/actionMaps';
import { validateUserPolicy } from './lib/policy/policyValidator'
import * as userMetadata from './lib/s3middleware/userMetadata';
import convertToXml from './lib/s3middleware/convertToXml';
import escapeForXml from './lib/s3middleware/escapeForXml';
import * as objectLegalHold from './lib/s3middleware/objectLegalHold';
import * as tagging from './lib/s3middleware/tagging';
import { validateConditionalHeaders } from './lib/s3middleware/validateConditionalHeaders';
import MD5Sum from './lib/s3middleware/MD5Sum';
import NullStream from './lib/s3middleware/nullStream';
import * as objectUtils from './lib/s3middleware/objectUtils';
import * as mpuUtils from './lib/s3middleware/azureHelpers/mpuUtils';
import ResultsCollector from './lib/s3middleware/azureHelpers/ResultsCollector';
import SubStreamInterface from './lib/s3middleware/azureHelpers/SubStreamInterface';
import * as processMpuParts from './lib/s3middleware/processMpuParts';
import * as retention from './lib/s3middleware/objectRetention';
import * as lifecycleHelpers from './lib/s3middleware/lifecycleHelpers';
export { default as errors } from './lib/errors';
export { default as Clustering } from './lib/Clustering';
export * as ipCheck from './lib/ipCheck';
export * as auth from './lib/auth/auth';
export * as constants from './lib/constants';
export * as https from './lib/https';
export * as metrics from './lib/metrics';
export * as network from './lib/network';
export * as s3routes from './lib/s3routes';
export * as versioning from './lib/versioning';
export * as stream from './lib/stream';
export * as jsutil from './lib/jsutil';
export { default as stringHash } from './lib/stringHash';
export * as db from './lib/db';
export { default as shuffle } from './lib/shuffle';
export * as models from './lib/models';
export const algorithms = {
list: {
Basic: require('./lib/algos/list/basic').List,
Delimiter: require('./lib/algos/list/delimiter').Delimiter,
DelimiterVersions: require('./lib/algos/list/delimiterVersions').DelimiterVersions,
DelimiterMaster: require('./lib/algos/list/delimiterMaster').DelimiterMaster,
MPU: require('./lib/algos/list/MPU').MultipartUploads,
},
listTools: {
DelimiterTools: require('./lib/algos/list/tools'),
},
cache: {
LRUCache: require('./lib/algos/cache/LRUCache'),
},
stream: {
MergeStream: require('./lib/algos/stream/MergeStream'),
},
SortedSet: require('./lib/algos/set/SortedSet'),
Heap: require('./lib/algos/heap/Heap'),
};
export const policies = {
evaluators,
validateUserPolicy,
evaluatePrincipal,
RequestContext,
requestUtils,
actionMaps,
};
export const testing = {
matrix: require('./lib/testing/matrix.js'),
};
export const s3middleware = {
userMetadata,
convertToXml,
escapeForXml,
objectLegalHold,
tagging,
validateConditionalHeaders,
MD5Sum,
NullStream,
objectUtils,
azureHelper: {
mpuUtils,
ResultsCollector,
SubStreamInterface,
},
processMpuParts,
retention,
lifecycleHelpers,
};
export const storage = {
metadata: {
MetadataWrapper: require('./lib/storage/metadata/MetadataWrapper'),
bucketclient: {
BucketClientInterface:
require('./lib/storage/metadata/bucketclient/' +
'BucketClientInterface'),
LogConsumer:
require('./lib/storage/metadata/bucketclient/LogConsumer'),
},
file: {
BucketFileInterface:
require('./lib/storage/metadata/file/BucketFileInterface'),
MetadataFileServer:
require('./lib/storage/metadata/file/MetadataFileServer'),
MetadataFileClient:
require('./lib/storage/metadata/file/MetadataFileClient'),
},
inMemory: {
metastore:
require('./lib/storage/metadata/in_memory/metastore'),
metadata: require('./lib/storage/metadata/in_memory/metadata'),
bucketUtilities:
require('./lib/storage/metadata/in_memory/bucket_utilities'),
},
mongoclient: {
MongoClientInterface:
require('./lib/storage/metadata/mongoclient/' +
'MongoClientInterface'),
LogConsumer:
require('./lib/storage/metadata/mongoclient/LogConsumer'),
},
proxy: {
Server: require('./lib/storage/metadata/proxy/Server'),
},
},
data: {
DataWrapper: require('./lib/storage/data/DataWrapper'),
MultipleBackendGateway:
require('./lib/storage/data/MultipleBackendGateway'),
parseLC: require('./lib/storage/data/LocationConstraintParser'),
file: {
DataFileStore:
require('./lib/storage/data/file/DataFileStore'),
DataFileInterface:
require('./lib/storage/data/file/DataFileInterface'),
},
external: {
AwsClient: require('./lib/storage/data/external/AwsClient'),
AzureClient: require('./lib/storage/data/external/AzureClient'),
GcpClient: require('./lib/storage/data/external/GcpClient'),
GCP: require('./lib/storage/data/external/GCP/GcpService'),
GcpUtils: require('./lib/storage/data/external/GCP/GcpUtils'),
GcpSigner: require('./lib/storage/data/external/GCP/GcpSigner'),
PfsClient: require('./lib/storage/data/external/PfsClient'),
backendUtils: require('./lib/storage/data/external/utils'),
},
inMemory: {
datastore: require('./lib/storage/data/in_memory/datastore'),
},
},
utils: require('./lib/storage/utils'),
};
export const pensieve = {
credentialUtils: require('./lib/executables/pensieveCreds/utils'),
};
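With the new TypeScript entry point, consumers get named exports instead of one large object literal. A small usage sketch follows, assuming the package is imported as 'arsenal' and that a werelogs request logger is what the listing algorithms expect; both are assumptions, not confirmed by the diff.

import * as werelogs from 'werelogs';
import { errors, algorithms } from 'arsenal'; // package specifier is an assumption

const logger = new werelogs.Logger('example');
const listing = new algorithms.list.Delimiter(
    { delimiter: '/', maxKeys: 100 }, // listing parameters
    logger.newRequestLogger(),        // assumed: algorithms take a RequestLogger
);
console.log(errors.NoSuchKey.code);   // 404, per the error map above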


@@ -1,18 +1,28 @@
'use strict'; // eslint-disable-line
import cluster, { Worker } from 'cluster';
import * as werelogs from 'werelogs';
const cluster = require('cluster');
export default class Clustering {
_size: number;
_shutdownTimeout: number;
_logger: werelogs.Logger;
_shutdown: boolean;
_workers: (Worker | undefined)[];
_workersTimeout: (NodeJS.Timeout | undefined)[];
_workersStatus: (number | string | undefined)[];
_status: number;
_exitCb?: (clustering: Clustering, exitSignal?: string) => void;
_index?: number;
class Clustering {
/**
* Constructor
*
* @param {number} size Cluster size
* @param {Logger} logger Logger object
* @param {number} [shutdownTimeout=5000] Change default shutdown timeout
* @param size Cluster size
* @param logger Logger object
* @param [shutdownTimeout=5000] Change default shutdown timeout
* releasing resources
* @return {Clustering} itself
* @return itself
*/
constructor(size, logger, shutdownTimeout) {
constructor(size: number, logger: werelogs.Logger, shutdownTimeout?: number) {
this._size = size;
if (size < 1) {
throw new Error('Cluster size must be greater than or equal to 1');
@@ -32,7 +42,6 @@ class Clustering {
* Method called after a stop() call
*
* @private
* @return {undefined}
*/
_afterStop() {
// Assuming all workers shut down gracefully
@@ -41,10 +50,11 @@ class Clustering {
for (let i = 0; i < size; ++i) {
// If the process return an error code or killed by a signal,
// set the status
if (typeof this._workersStatus[i] === 'number') {
this._status = this._workersStatus[i];
const status = this._workersStatus[i];
if (typeof status === 'number') {
this._status = status;
break;
} else if (typeof this._workersStatus[i] === 'string') {
} else if (typeof status === 'string') {
this._status = 1;
break;
}
@@ -58,13 +68,17 @@ class Clustering {
/**
* Method called when a worker exited
*
* @param {Cluster.worker} worker - Current worker
* @param {number} i - Worker index
* @param {number} code - Exit code
* @param {string} signal - Exit signal
* @return {undefined}
* @param worker - Current worker
* @param i - Worker index
* @param code - Exit code
* @param signal - Exit signal
*/
_workerExited(worker, i, code, signal) {
_workerExited(
worker: Worker,
i: number,
code: number,
signal: string,
) {
// If the worker:
// - was killed by a signal
// - return an error code
@@ -91,8 +105,9 @@ class Clustering {
this._workersStatus[i] = undefined;
}
this._workers[i] = undefined;
if (this._workersTimeout[i]) {
clearTimeout(this._workersTimeout[i]);
const timeout = this._workersTimeout[i];
if (timeout) {
clearTimeout(timeout);
this._workersTimeout[i] = undefined;
}
// If we don't trigger the stop method, the watchdog
@@ -110,29 +125,28 @@ class Clustering {
/**
* Method to start a worker
*
* @param {number} i Index of the starting worker
* @return {undefined}
* @param i Index of the starting worker
*/
startWorker(i) {
if (!cluster.isMaster) {
startWorker(i: number) {
if (!cluster.isPrimary) {
return;
}
// Fork a new worker
this._workers[i] = cluster.fork();
// Listen for message from the worker
this._workers[i].on('message', msg => {
this._workers[i]!.on('message', msg => {
// If the worker is ready, send him his id
if (msg === 'ready') {
this._workers[i].send({ msg: 'setup', id: i });
this._workers[i]!.send({ msg: 'setup', id: i });
}
});
this._workers[i].on('exit', (code, signal) =>
this._workerExited(this._workers[i], i, code, signal));
this._workers[i]!.on('exit', (code, signal) =>
this._workerExited(this._workers[i]!, i, code, signal));
// Trigger when the worker was started
this._workers[i].on('online', () => {
this._workers[i]!.on('online', () => {
this._logger.info('Worker started', {
id: i,
childPid: this._workers[i].process.pid,
childPid: this._workers[i]!.process.pid,
});
});
}
@@ -140,10 +154,10 @@ class Clustering {
/**
* Method to put handler on cluster exit
*
* @param {function} cb - Callback(Clustering, [exitSignal])
* @return {Clustering} Itself
* @param cb - Callback(Clustering, [exitSignal])
* @return Itself
*/
onExit(cb) {
onExit(cb: (clustering: Clustering, exitSignal?: string) => void) {
this._exitCb = cb;
return this;
}
@@ -152,33 +166,33 @@ class Clustering {
* Method to start the cluster (if master) or to start the callback
* (worker)
*
* @param {function} cb - Callback to run the worker
* @return {Clustering} itself
* @param cb - Callback to run the worker
* @return itself
*/
start(cb) {
start(cb: (clustering: Clustering) => void) {
process.on('SIGINT', () => this.stop('SIGINT'));
process.on('SIGHUP', () => this.stop('SIGHUP'));
process.on('SIGQUIT', () => this.stop('SIGQUIT'));
process.on('SIGTERM', () => this.stop('SIGTERM'));
process.on('SIGPIPE', () => {});
process.on('exit', (code, signal) => {
process.on('exit', (code?: number, signal?: string) => {
if (this._exitCb) {
this._status = code || 0;
return this._exitCb(this, signal);
}
return process.exit(code || 0);
});
process.on('uncaughtException', err => {
process.on('uncaughtException', (err: Error) => {
this._logger.fatal('caught error', {
error: err.message,
stack: err.stack.split('\n').map(str => str.trim()),
stack: err.stack?.split('\n')?.map(str => str.trim()),
});
process.exit(1);
});
if (!cluster.isMaster) {
if (!cluster.isPrimary) {
// Waiting for message from master to
// know the id of the slave cluster
process.on('message', msg => {
process.on('message', (msg: any) => {
if (msg.msg === 'setup') {
this._index = msg.id;
cb(this);
@@ -186,7 +200,7 @@ class Clustering {
});
// Send message to the master, to let him know
// the worker has started
process.send('ready');
process.send?.('ready');
} else {
for (let i = 0; i < this._size; ++i) {
this.startWorker(i);
@@ -198,7 +212,7 @@ class Clustering {
/**
* Method to get workers
*
* @return {Cluster.Worker[]} Workers
* @return Workers
*/
getWorkers() {
return this._workers;
@@ -207,7 +221,7 @@ class Clustering {
/**
* Method to get the status of the cluster
*
* @return {number} Status code
* @return Status code
*/
getStatus() {
return this._status;
@@ -216,7 +230,7 @@ class Clustering {
/**
* Method to return if it's the master process
*
* @return {boolean} - True if master, false otherwise
* @return - True if master, false otherwise
*/
isMaster() {
return this._index === undefined;
@@ -225,7 +239,7 @@ class Clustering {
/**
* Method to get index of the worker
*
* @return {number|undefined} Worker index, undefined if it's master
* @return Worker index, undefined if it's master
*/
getIndex() {
return this._index;
@@ -234,11 +248,10 @@ class Clustering {
/**
* Method to stop the cluster
*
* @param {string} signal - Set internally when processes killed by signal
* @return {undefined}
* @param signal - Set internally when processes killed by signal
*/
stop(signal) {
if (!cluster.isMaster) {
stop(signal?: string) {
if (!cluster.isPrimary) {
if (this._exitCb) {
return this._exitCb(this, signal);
}
@@ -251,13 +264,17 @@ class Clustering {
}
this._workersTimeout[i] = setTimeout(() => {
// Kill the worker if the sigterm was ignored or take too long
process.kill(worker.process.pid, 'SIGKILL');
if (worker.process.pid) {
process.kill(worker.process.pid, 'SIGKILL');
}
}, this._shutdownTimeout);
// Send SIGTERM to the process, allowing it to release resources
// and save some state
return process.kill(worker.process.pid, 'SIGTERM');
if (worker.process.pid) {
return process.kill(worker.process.pid, 'SIGTERM');
} else {
return true;
}
});
}
}
module.exports = Clustering;
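A minimal usage sketch of the migrated class, exercising only the methods visible in the diff; the import path and worker count are assumptions.

import * as werelogs from 'werelogs';
import Clustering from './Clustering'; // path is an assumption

const clustering = new Clustering(4, new werelogs.Logger('clustering-example'));
clustering.onExit((c, signal) => {
    // Called on cluster exit, with the aggregated worker status code.
    console.log('cluster stopped', { status: c.getStatus(), signal });
});
clustering.start(c => {
    // Runs in each worker once it has received its index from the master.
    console.log('worker ready', c.getIndex());
});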

lib/algos/heap/Heap.ts Normal file

@@ -0,0 +1,124 @@
export enum HeapOrder {
Min = -1,
Max = 1,
}
export enum CompareResult {
LT = -1,
EQ = 0,
GT = 1,
}
export type CompareFunction = (x: any, y: any) => CompareResult;
export class Heap {
size: number;
_maxSize: number;
_order: HeapOrder;
_heap: any[];
_cmpFn: CompareFunction;
constructor(size: number, order: HeapOrder, cmpFn: CompareFunction) {
this.size = 0;
this._maxSize = size;
this._order = order;
this._cmpFn = cmpFn;
this._heap = new Array<any>(this._maxSize);
}
_parent(i: number): number {
return Math.floor((i - 1) / 2);
}
_left(i: number): number {
return Math.floor((2 * i) + 1);
}
_right(i: number): number {
return Math.floor((2 * i) + 2);
}
_shouldSwap(childIdx: number, parentIdx: number): boolean {
return this._cmpFn(this._heap[childIdx], this._heap[parentIdx]) as number === this._order as number;
}
_swap(i: number, j: number) {
const tmp = this._heap[i];
this._heap[i] = this._heap[j];
this._heap[j] = tmp;
}
_heapify(i: number) {
const l = this._left(i);
const r = this._right(i);
let c = i;
if (l < this.size && this._shouldSwap(l, c)) {
c = l;
}
if (r < this.size && this._shouldSwap(r, c)) {
c = r;
}
if (c != i) {
this._swap(c, i);
this._heapify(c);
}
}
add(item: any): any {
if (this.size >= this._maxSize) {
return new Error('Max heap size reached');
}
++this.size;
let c = this.size - 1;
this._heap[c] = item;
while (c > 0) {
if (!this._shouldSwap(c, this._parent(c))) {
return null;
}
this._swap(c, this._parent(c));
c = this._parent(c);
}
return null;
};
remove(): any {
if (this.size <= 0) {
return null;
}
const ret = this._heap[0];
this._heap[0] = this._heap[this.size - 1];
this._heapify(0);
--this.size;
return ret;
};
peek(): any {
if (this.size <= 0) {
return null;
}
return this._heap[0];
};
}
export class MinHeap extends Heap {
constructor(size: number, cmpFn: CompareFunction) {
super(size, HeapOrder.Min, cmpFn);
}
}
export class MaxHeap extends Heap {
constructor(size: number, cmpFn: CompareFunction) {
super(size, HeapOrder.Max, cmpFn);
}
}
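A short usage sketch of the new heap, showing the CompareFunction contract it expects; the import path is an assumption.

import { CompareResult, MinHeap } from './lib/algos/heap/Heap'; // path assumed

// A numeric min-heap of capacity 8; the compare function must return a
// CompareResult, which the heap matches against its HeapOrder.
const cmp = (x: number, y: number): CompareResult =>
    x < y ? CompareResult.LT : x > y ? CompareResult.GT : CompareResult.EQ;

const heap = new MinHeap(8, cmp);
[5, 1, 3].forEach(n => heap.add(n)); // add() returns an Error only when full
heap.peek();   // 1 (smallest element, left in place)
heap.remove(); // 1 (smallest element, removed)
heap.peek();   // 3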


@@ -1,7 +1,7 @@
'use strict'; // eslint-disable-line strict
const { inc, checkLimit, listingParamsMasterKeysV0ToV1,
FILTER_END, FILTER_ACCEPT } = require('./tools');
FILTER_END, FILTER_ACCEPT } = require('./tools');
const DEFAULT_MAX_KEYS = 1000;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;


@@ -2,7 +2,7 @@
const Extension = require('./Extension').default;
const { checkLimit, FILTER_END, FILTER_ACCEPT } = require('./tools');
const { checkLimit, FILTER_END, FILTER_ACCEPT, FILTER_SKIP } = require('./tools');
const DEFAULT_MAX_KEYS = 10000;
/**
@@ -21,6 +21,8 @@ class List extends Extension {
this.res = [];
if (parameters) {
this.maxKeys = checkLimit(parameters.maxKeys, DEFAULT_MAX_KEYS);
this.filterKey = parameters.filterKey;
this.filterKeyStartsWith = parameters.filterKeyStartsWith;
} else {
this.maxKeys = DEFAULT_MAX_KEYS;
}
@@ -44,6 +46,43 @@ class List extends Extension {
return params;
}
/**
* Filters customAttributes sub-object if present
*
* @param {String} value - The JSON value of a listing item
*
* @return {Boolean} Returns true if matches, else false.
*/
customFilter(value) {
let _value;
try {
_value = JSON.parse(value);
} catch (e) {
// Prefer returning unfiltered data rather than
// stopping the service in case of parsing failure.
// The risk of this approach is a potential
// reproduction of MD-692, where too much memory is
// used by repd.
this.logger.warn(
'Could not parse Object Metadata while listing',
{ err: e.toString() });
return false;
}
if (_value.customAttributes !== undefined) {
for (const key of Object.keys(_value.customAttributes)) {
if (this.filterKey !== undefined &&
key === this.filterKey) {
return true;
}
if (this.filterKeyStartsWith !== undefined &&
key.startsWith(this.filterKeyStartsWith)) {
return true;
}
}
}
return false;
}
/**
* Function apply on each element
* Just add it to the array
@@ -56,6 +95,12 @@ class List extends Extension {
if (this.keys >= this.maxKeys) {
return FILTER_END;
}
if ((this.filterKey !== undefined ||
this.filterKeyStartsWith !== undefined) &&
typeof elem === 'object' &&
!this.customFilter(elem.value)) {
return FILTER_SKIP;
}
if (typeof elem === 'object') {
this.res.push({
key: elem.key,

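The hunk above adds two optional parameters, filterKey and filterKeyStartsWith, and a customFilter() pass that keeps only items whose parsed metadata carries a matching customAttributes key. A standalone sketch of that predicate follows; the metadata shape is taken from the diff, everything else is illustrative.

// Illustrative restatement of customFilter(); not the module's code.
function matchesCustomFilter(
    value: string,
    filterKey?: string,
    filterKeyStartsWith?: string,
): boolean {
    let parsed: { customAttributes?: Record<string, string> };
    try {
        parsed = JSON.parse(value);
    } catch {
        return false; // unparsable metadata is skipped rather than throwing
    }
    return Object.keys(parsed.customAttributes ?? {}).some(k =>
        (filterKey !== undefined && k === filterKey) ||
        (filterKeyStartsWith !== undefined && k.startsWith(filterKeyStartsWith)));
}

matchesCustomFilter('{"customAttributes":{"custom-color":"blue"}}',
    undefined, 'custom-'); // true -> item is listed
matchesCustomFilter('{"owner":"alice"}',
    undefined, 'custom-'); // false -> FILTER_SKIP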

@@ -1,274 +0,0 @@
'use strict'; // eslint-disable-line strict
const Extension = require('./Extension').default;
const { inc, listingParamsMasterKeysV0ToV1,
FILTER_END, FILTER_ACCEPT, FILTER_SKIP } = require('./tools');
const VSConst = require('../../versioning/constants').VersioningConstants;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
/**
* Find the common prefix in the path
*
* @param {String} key - path of the object
* @param {String} delimiter - separator
* @param {Number} delimiterIndex - 'folder' index in the path
* @return {String} - CommonPrefix
*/
function getCommonPrefix(key, delimiter, delimiterIndex) {
return key.substring(0, delimiterIndex + delimiter.length);
}
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
class Delimiter extends Extension {
/**
* Create a new Delimiter instance
* @constructor
* @param {Object} parameters - listing parameters
* @param {String} [parameters.delimiter] - delimiter per amazon
* format
* @param {String} [parameters.prefix] - prefix per amazon
* format
* @param {String} [parameters.marker] - marker per amazon
* format
* @param {Number} [parameters.maxKeys] - number of keys to list
* @param {Boolean} [parameters.v2] - indicates whether v2
* format
* @param {String} [parameters.startAfter] - marker per amazon
* format
* @param {String} [parameters.continuationToken] - obfuscated amazon
* token
* @param {Boolean} [parameters.alphabeticalOrder] - whether the result is
* alphabetically ordered or not
* @param {RequestLogger} logger - The logger of the
* request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger);
// original listing parameters
this.delimiter = parameters.delimiter;
this.prefix = parameters.prefix;
this.marker = parameters.marker;
this.maxKeys = parameters.maxKeys || 1000;
this.startAfter = parameters.startAfter;
this.continuationToken = parameters.continuationToken;
this.alphabeticalOrder =
typeof parameters.alphabeticalOrder !== 'undefined' ?
parameters.alphabeticalOrder : true;
this.vFormat = vFormat || BucketVersioningKeyFormat.v0;
// results
this.CommonPrefixes = [];
this.Contents = [];
this.IsTruncated = false;
this.NextMarker = parameters.marker;
this.NextContinuationToken =
parameters.continuationToken || parameters.startAfter;
this.startMarker = parameters.v2 ? 'startAfter' : 'marker';
this.continueMarker = parameters.v2 ? 'continuationToken' : 'marker';
this.nextContinueMarker = parameters.v2 ?
'NextContinuationToken' : 'NextMarker';
if (this.delimiter !== undefined &&
this[this.nextContinueMarker] !== undefined &&
this[this.nextContinueMarker].startsWith(this.prefix || '')) {
const nextDelimiterIndex =
this[this.nextContinueMarker].indexOf(this.delimiter,
this.prefix ? this.prefix.length : 0);
this[this.nextContinueMarker] =
this[this.nextContinueMarker].slice(0, nextDelimiterIndex +
this.delimiter.length);
}
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
getObjectKey: this.getObjectKeyV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
getObjectKey: this.getObjectKeyV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
genMDParamsV0() {
const params = {};
if (this.prefix) {
params.gte = this.prefix;
params.lt = inc(this.prefix);
}
const startVal = this[this.continueMarker] || this[this.startMarker];
if (startVal) {
if (params.gte && params.gte > startVal) {
return params;
}
delete params.gte;
params.gt = startVal;
}
return params;
}
genMDParamsV1() {
const params = this.genMDParamsV0();
return listingParamsMasterKeysV0ToV1(params);
}
/**
* check if the max keys count has been reached and set the
* final state of the result if it is the case
* @return {Boolean} - indicates if the iteration has to stop
*/
_reachedMaxKeys() {
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 -> IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return true;
}
return false;
}
/**
* Add a (key, value) tuple to the listing
* Set the NextMarker to the current key
* Increment the keys counter
* @param {String} key - The key to add
* @param {String} value - The value of the key
* @return {number} - indicates if iteration should continue
*/
addContents(key, value) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.Contents.push({ key, value: this.trimMetadata(value) });
this[this.nextContinueMarker] = key;
++this.keys;
return FILTER_ACCEPT;
}
getObjectKeyV0(obj) {
return obj.key;
}
getObjectKeyV1(obj) {
return obj.key.slice(DbPrefixes.Master.length);
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj) {
const key = this.getObjectKey(obj);
const value = obj.value;
if ((this.prefix && !key.startsWith(this.prefix))
|| (this.alphabeticalOrder
&& typeof this[this.nextContinueMarker] === 'string'
&& key <= this[this.nextContinueMarker])) {
return FILTER_SKIP;
}
if (this.delimiter) {
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex === -1) {
return this.addContents(key, value);
}
return this.addCommonPrefix(key, delimiterIndex);
}
return this.addContents(key, value);
}
/**
* Add a Common Prefix in the list
* @param {String} key - object name
* @param {Number} index - after prefix starting point
* @return {number} - indicates if iteration should continue
*/
addCommonPrefix(key, index) {
const commonPrefix = getCommonPrefix(key, this.delimiter, index);
if (this.CommonPrefixes.indexOf(commonPrefix) === -1
&& this[this.nextContinueMarker] !== commonPrefix) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.CommonPrefixes.push(commonPrefix);
this[this.nextContinueMarker] = commonPrefix;
++this.keys;
return FILTER_ACCEPT;
}
return FILTER_SKIP;
}
/**
* Key from which repd can resume iteration if it decides to skip
* a range while listing a bucket in v0 versioning key format.
*
* @return {string} - the current marker (NextMarker) past which
* listing can safely resume
*/
skippingV0() {
return this[this.nextContinueMarker];
}
/**
* Key from which repd can resume iteration if it decides to skip
* a range while listing a bucket in v1 versioning key format.
*
* @return {string} - the current marker (NextMarker), prefixed with
* the master key prefix, past which listing can safely resume
*/
skippingV1() {
return DbPrefixes.Master + this[this.nextContinueMarker];
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done. The NextMarker field is omitted if the output
* isn't truncated.
* @return {Object} - following amazon format
*/
result() {
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
const result = {
CommonPrefixes: this.CommonPrefixes,
Contents: this.Contents,
IsTruncated: this.IsTruncated,
Delimiter: this.delimiter,
};
if (this.parameters.v2) {
result.NextContinuationToken = this.IsTruncated
? this.NextContinuationToken : undefined;
} else {
result.NextMarker = (this.IsTruncated && this.delimiter)
? this.NextMarker : undefined;
}
return result;
}
}
module.exports = { Delimiter };
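For context, a minimal usage sketch (illustrative, not part of the diff): a metadata backend builds its range parameters with genMDParams(), streams each { key, value } entry through filter() until it returns FILTER_END, then reads the aggregated listing from result(). The stub logger and in-memory entries are assumptions for the example.

const { Delimiter } = require('./delimiter');
const { FILTER_END } = require('./tools');

// stub standing in for a werelogs RequestLogger
const logger = { info: () => {}, debug: () => {}, trace: () => {}, error: () => {} };

const listing = new Delimiter({ delimiter: '/', maxKeys: 1000 }, logger);
const entries = [
    { key: 'photos/2021/a.jpg', value: '{}' },
    { key: 'photos/2022/b.jpg', value: '{}' }, // same 'photos/' prefix: skipped
    { key: 'readme.txt', value: '{}' },
];
for (const entry of entries) {
    if (listing.filter(entry) === FILTER_END) {
        break; // maxKeys reached
    }
}
// -> { CommonPrefixes: ['photos/'], Contents: [{ key: 'readme.txt', ... }],
//      IsTruncated: false, Delimiter: '/' }
console.log(listing.result());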

lib/algos/list/delimiter.ts

@@ -0,0 +1,356 @@
'use strict'; // eslint-disable-line strict
const Extension = require('./Extension').default;
const { inc, listingParamsMasterKeysV0ToV1,
FILTER_END, FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } = require('./tools');
const VSConst = require('../../versioning/constants').VersioningConstants;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
export interface FilterState {
id: number,
};
export interface FilterReturnValue {
FILTER_ACCEPT,
FILTER_SKIP,
FILTER_END,
};
export const enum DelimiterFilterStateId {
NotSkipping = 1,
SkippingPrefix = 2,
};
export interface DelimiterFilterState_NotSkipping extends FilterState {
id: DelimiterFilterStateId.NotSkipping,
};
export interface DelimiterFilterState_SkippingPrefix extends FilterState {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: string;
};
type KeyHandler = (key: string, value: string) => FilterReturnValue;
type ResultObject = {
CommonPrefixes: string[];
Contents: {
key: string;
value: string;
}[];
IsTruncated: boolean;
Delimiter ?: string;
NextMarker ?: string;
NextContinuationToken ?: string;
};
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
export class Delimiter extends Extension {
state: FilterState;
keyHandlers: { [id: number]: KeyHandler };
/**
* Create a new Delimiter instance
* @constructor
* @param {Object} parameters - listing parameters
* @param {String} [parameters.delimiter] - delimiter per amazon
* format
* @param {String} [parameters.prefix] - prefix per amazon
* format
* @param {String} [parameters.marker] - marker per amazon
* format
* @param {Number} [parameters.maxKeys] - number of keys to list
* @param {Boolean} [parameters.v2] - whether the v2 listing
* format is used
* @param {String} [parameters.startAfter] - marker per amazon
* format
* @param {String} [parameters.continuationToken] - obfuscated amazon
* token
* @param {RequestLogger} logger - The logger of the
* request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger);
// original listing parameters
this.delimiter = parameters.delimiter;
this.prefix = parameters.prefix;
this.maxKeys = parameters.maxKeys || 1000;
if (parameters.v2) {
this.marker = parameters.continuationToken || parameters.startAfter;
} else {
this.marker = parameters.marker;
}
this.nextMarker = this.marker;
this.vFormat = vFormat || BucketVersioningKeyFormat.v0;
// results
this.CommonPrefixes = [];
this.Contents = [];
this.IsTruncated = false;
this.keyHandlers = {};
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
getObjectKey: this.getObjectKeyV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
getObjectKey: this.getObjectKeyV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
// if there is a delimiter, we may skip ranges by prefix,
// hence using the NotSkippingPrefix flavor that checks the
// subprefix up to the delimiter for the NotSkipping state
if (this.delimiter) {
this.setKeyHandler(
DelimiterFilterStateId.NotSkipping,
this.keyHandler_NotSkippingPrefix.bind(this));
} else {
// listing without a delimiter never has to skip over any
// prefix -> use NeverSkipping flavor for the NotSkipping
// state
this.setKeyHandler(
DelimiterFilterStateId.NotSkipping,
this.keyHandler_NeverSkipping.bind(this));
}
this.setKeyHandler(
DelimiterFilterStateId.SkippingPrefix,
this.keyHandler_SkippingPrefix.bind(this));
this.state = <DelimiterFilterState_NotSkipping> {
id: DelimiterFilterStateId.NotSkipping,
};
}
genMDParamsV0() {
const params: { gt ?: string, gte ?: string, lt ?: string } = {};
if (this.prefix) {
params.gte = this.prefix;
params.lt = inc(this.prefix);
}
if (this.marker && this.delimiter) {
const commonPrefix = this.getCommonPrefix(this.marker);
if (commonPrefix) {
const afterPrefix = inc(commonPrefix);
if (!params.gte || afterPrefix > params.gte) {
params.gte = afterPrefix;
}
}
}
if (this.marker && (!params.gte || this.marker >= params.gte)) {
delete params.gte;
params.gt = this.marker;
}
return params;
}
genMDParamsV1() {
const params = this.genMDParamsV0();
return listingParamsMasterKeysV0ToV1(params);
}
/**
* check if the max keys count has been reached and set the
* final state of the result if it is the case
* @return {Boolean} - indicates if the iteration has to stop
*/
_reachedMaxKeys(): boolean {
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 -> IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return true;
}
return false;
}
/**
* Add a (key, value) tuple to the listing,
* set the next marker to the current key
* and increment the keys counter
* @param {String} key - The key to add
* @param {String} value - The value of the key
* @return {undefined}
*/
addContents(key: string, value: string): void {
this.Contents.push({ key, value: this.trimMetadata(value) });
++this.keys;
this.nextMarker = key;
}
getCommonPrefix(key: string): string | undefined {
if (!this.delimiter) {
return undefined;
}
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex === -1) {
return undefined;
}
return key.substring(0, delimiterIndex + this.delimiter.length);
}
/**
* Add a Common Prefix in the list
* @param {String} commonPrefix - common prefix to add
* @param {String} key - full key starting with commonPrefix
* @return {undefined}
*/
addCommonPrefix(commonPrefix: string, key: string): void {
// add the new prefix to the list
this.CommonPrefixes.push(commonPrefix);
++this.keys;
this.nextMarker = commonPrefix;
}
addCommonPrefixOrContents(key: string, value: string): string | undefined {
// add the subprefix to the common prefixes if the key has the delimiter
const commonPrefix = this.getCommonPrefix(key);
if (commonPrefix) {
this.addCommonPrefix(commonPrefix, key);
return commonPrefix;
}
this.addContents(key, value);
return undefined;
}
getObjectKeyV0(obj: { key: string }): string {
return obj.key;
}
getObjectKeyV1(obj: { key: string }): string {
return obj.key.slice(DbPrefixes.Master.length);
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj: { key: string, value: string }): FilterReturnValue {
const key = this.getObjectKey(obj);
const value = obj.value;
return this.handleKey(key, value);
}
setState(state: FilterState): void {
this.state = state;
}
setKeyHandler(stateId: number, keyHandler: KeyHandler): void {
this.keyHandlers[stateId] = keyHandler;
}
handleKey(key: string, value: string): FilterReturnValue {
return this.keyHandlers[this.state.id](key, value);
}
keyHandler_NeverSkipping(key: string, value: string): FilterReturnValue {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.addContents(key, value);
return FILTER_ACCEPT;
}
keyHandler_NotSkippingPrefix(key: string, value: string): FilterReturnValue {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
const commonPrefix = this.addCommonPrefixOrContents(key, value);
if (commonPrefix) {
// transition into SkippingPrefix state to skip all following keys
// while they start with the same prefix
this.setState(<DelimiterFilterState_SkippingPrefix> {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: commonPrefix,
});
}
return FILTER_ACCEPT;
}
keyHandler_SkippingPrefix(key: string, value: string): FilterReturnValue {
const { prefix } = <DelimiterFilterState_SkippingPrefix> this.state;
if (key.startsWith(prefix)) {
return FILTER_SKIP;
}
this.setState(<DelimiterFilterState_NotSkipping> {
id: DelimiterFilterStateId.NotSkipping,
});
return this.handleKey(key, value);
}
skippingBase(): string | undefined {
switch (this.state.id) {
case DelimiterFilterStateId.SkippingPrefix:
const { prefix } = <DelimiterFilterState_SkippingPrefix> this.state;
return prefix;
default:
return SKIP_NONE;
}
}
skippingV0() {
return this.skippingBase();
}
skippingV1() {
const skipTo = this.skippingBase();
if (skipTo === SKIP_NONE) {
return SKIP_NONE;
}
return DbPrefixes.Master + skipTo;
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done. The NextMarker field is omitted if the output
* isn't truncated.
* @return {Object} - following amazon format
*/
result(): ResultObject {
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
const result: ResultObject = {
CommonPrefixes: this.CommonPrefixes,
Contents: this.Contents,
IsTruncated: this.IsTruncated,
Delimiter: this.delimiter,
};
if (this.parameters.v2) {
result.NextContinuationToken = this.IsTruncated
? this.nextMarker : undefined;
} else {
result.NextMarker = (this.IsTruncated && this.delimiter)
? this.nextMarker : undefined;
}
return result;
}
}
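A short sketch of the state machine above (illustrative, reusing the same kind of stub logger): each state id maps to a key handler, and accepting a common prefix swaps the state to SkippingPrefix, so subsequent keys under that prefix are rejected without re-scanning for the delimiter.

const { Delimiter } = require('./delimiter');
const logger = { info: () => {}, debug: () => {}, trace: () => {}, error: () => {} };

const listing = new Delimiter({ delimiter: '/' }, logger);
listing.filter({ key: 'docs/a', value: '{}' }); // NotSkipping -> SkippingPrefix('docs/')
listing.filter({ key: 'docs/b', value: '{}' }); // FILTER_SKIP: same 'docs/' prefix
listing.filter({ key: 'zebra', value: '{}' });  // back to NotSkipping, added to Contents
// while in SkippingPrefix, skipping() returns 'docs/', letting the
// backend seek past the whole range instead of iterating key by key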

lib/algos/list/delimiterMaster.js

@@ -1,182 +0,0 @@
'use strict'; // eslint-disable-line strict
const Delimiter = require('./delimiter').Delimiter;
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { BucketVersioningKeyFormat } = VSConst;
const { FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } = require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes } = VSConst;
/**
* Handle object listing with parameters. This extends the base class Delimiter
* to return the raw master versions of existing objects.
*/
class DelimiterMaster extends Delimiter {
/**
* Delimiter listing of master versions.
* @param {Object} parameters - listing parameters
* @param {String} parameters.delimiter - delimiter per amazon format
* @param {String} parameters.prefix - prefix per amazon format
* @param {String} parameters.marker - marker per amazon format
* @param {Number} parameters.maxKeys - number of keys to list
* @param {Boolean} parameters.v2 - whether the v2 listing format is used
* @param {String} parameters.startAfter - marker per amazon v2 format
* @param {String} parameters.continuationToken - obfuscated amazon token
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
// non-PHD master version or a version whose master is a PHD version
this.prvKey = undefined;
this.prvPHDKey = undefined;
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
filter: this.filterV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
filter: this.filterV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
/**
* Filter to apply on each iteration for buckets in v0 format,
* based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV0(obj) {
let key = obj.key;
const value = obj.value;
/* Skip keys not starting with the prefix or not alphabetically
* ordered. */
if ((this.prefix && !key.startsWith(this.prefix))
|| (typeof this[this.nextContinueMarker] === 'string' &&
key <= this[this.nextContinueMarker])) {
return FILTER_SKIP;
}
/* Skip version keys (<key><versionIdSeparator><version>) if we already
* have a master version. */
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex >= 0) {
key = key.slice(0, versionIdIndex);
/* - key === this.prvKey is triggered when a master version has
* been accepted for this key,
* - key === this.NextMarker or this.NextContinuationToken is
* triggered when a listing page ends on an accepted obj and the
* next page starts with a version of this object.
* In that case prvKey defaults to undefined (set in the
* constructor) and comparing to NextMarker is the only
* way to know we should not accept this version. This test is
* not redundant with the one at the beginning of this function:
* here we compare the key without the version suffix,
* - key starting with the previous NextMarker happens because we
* set NextMarker to the common prefix instead of the whole key
* value. (TODO: remove this test once ZENKO-1048 is fixed.)
*/
if (key === this.prvKey || key === this[this.nextContinueMarker] ||
(this.delimiter &&
key.startsWith(this[this.nextContinueMarker]))) {
/* master version already filtered */
return FILTER_SKIP;
}
}
if (Version.isPHD(value)) {
/* master version is a PHD version, we want to wait for the next
* one:
* - Set the prvKey to undefined to not skip the next version,
* - return accept to avoid skipping the next values in range
* (skip scan mechanism in metadata backend like Metadata or
* MongoClient). */
this.prvKey = undefined;
this.prvPHDKey = key;
return FILTER_ACCEPT;
}
if (Version.isDeleteMarker(value)) {
/* This entry is a deleteMarker which has not been filtered by the
* version test. Either:
* - it is a deleteMarker on the master version, we want to SKIP
* all the following entries with this key (no master version),
* - or a deleteMarker following a PHD (setting prvKey to undefined
* when an entry is a PHD avoids the skip on version for the
* next entry). In that case we expect the master version to
* follow. */
if (key === this.prvPHDKey) {
this.prvKey = undefined;
return FILTER_ACCEPT;
}
this.prvKey = key;
return FILTER_SKIP;
}
this.prvKey = key;
if (this.delimiter) {
// check if the key has the delimiter
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex >= 0) {
// try to add the prefix to the list
return this.addCommonPrefix(key, delimiterIndex);
}
}
return this.addContents(key, value);
}
/**
* Filter to apply on each iteration for buckets in v1 format,
* based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV1(obj) {
// Filtering master keys in v1 is simply listing the master
// keys, as the state of version keys do not change the
// result, so we can use Delimiter method directly.
return super.filter(obj);
}
skippingV0() {
if (this[this.nextContinueMarker]) {
// next marker or next continuation token:
// - foo/ : skipping foo/
// - foo : skipping foo.
const index = this[this.nextContinueMarker].
lastIndexOf(this.delimiter);
if (index === this[this.nextContinueMarker].length - 1) {
return this[this.nextContinueMarker];
}
return this[this.nextContinueMarker] + VID_SEP;
}
return SKIP_NONE;
}
skippingV1() {
const skipTo = this.skippingV0();
if (skipTo === SKIP_NONE) {
return SKIP_NONE;
}
return DbPrefixes.Master + skipTo;
}
}
module.exports = { DelimiterMaster };

lib/algos/list/delimiterMaster.ts

@@ -0,0 +1,190 @@
import {
Delimiter,
FilterState,
FilterReturnValue,
DelimiterFilterStateId,
DelimiterFilterState_NotSkipping,
DelimiterFilterState_SkippingPrefix,
} from './delimiter';
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { BucketVersioningKeyFormat } = VSConst;
const { FILTER_ACCEPT, FILTER_SKIP, FILTER_END } = require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes } = VSConst;
const enum DelimiterMasterFilterStateId {
SkippingVersionsV0 = 101,
WaitVersionAfterPHDV0 = 102,
};
interface DelimiterMasterFilterState_SkippingVersionsV0 extends FilterState {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: string,
};
interface DelimiterMasterFilterState_WaitVersionAfterPHDV0 extends FilterState {
id: DelimiterMasterFilterStateId.WaitVersionAfterPHDV0,
masterKey: string,
};
/**
* Handle object listing with parameters. This extends the base class Delimiter
* to return the raw master versions of existing objects.
*/
export class DelimiterMaster extends Delimiter {
/**
* Delimiter listing of master versions.
* @param {Object} parameters - listing parameters
* @param {String} parameters.delimiter - delimiter per amazon format
* @param {String} parameters.prefix - prefix per amazon format
* @param {String} parameters.marker - marker per amazon format
* @param {Number} parameters.maxKeys - number of keys to list
* @param {Boolean} parameters.v2 - whether the v2 listing format is used
* @param {String} parameters.startAfter - marker per amazon v2 format
* @param {String} parameters.continuationToken - obfuscated amazon token
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
skipping: this.skippingV1,
},
}[this.vFormat]);
if (this.vFormat === BucketVersioningKeyFormat.v0) {
// override Delimiter's implementation of NotSkipping for
// DelimiterMaster logic (skipping versions and special
// handling of delete markers and PHDs)
this.setKeyHandler(
DelimiterFilterStateId.NotSkipping,
this.keyHandler_NotSkippingPrefixNorVersionsV0.bind(this));
// add extra state handlers specific to DelimiterMaster with v0 format
this.setKeyHandler(
DelimiterMasterFilterStateId.SkippingVersionsV0,
this.keyHandler_SkippingVersionsV0.bind(this));
this.setKeyHandler(
DelimiterMasterFilterStateId.WaitVersionAfterPHDV0,
this.keyHandler_WaitVersionAfterPHDV0.bind(this));
if (this.marker) {
// distinct initial state to include some special logic
// before the first master key is found that does not have
// to be checked afterwards
this.state = <DelimiterMasterFilterState_SkippingVersionsV0> {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: this.marker,
};
} else {
this.state = <DelimiterFilterState_NotSkipping> {
id: DelimiterFilterStateId.NotSkipping,
};
}
}
// in v1, we can directly use Delimiter's implementation,
// which is already set to the proper state
}
filter_onNewMasterKeyV0(key: string, value: string): FilterReturnValue {
// if this master key is a delete marker, accept it without
// adding the version to the contents
if (Version.isDeleteMarker(value)) {
// update the state to start skipping versions of the new master key
this.setState(<DelimiterMasterFilterState_SkippingVersionsV0> {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: key,
});
return FILTER_ACCEPT;
}
if (Version.isPHD(value)) {
// master version is a PHD version: wait for the first
// following version that will be considered as the actual
// master key
this.setState(<DelimiterMasterFilterState_WaitVersionAfterPHDV0> {
id: DelimiterMasterFilterStateId.WaitVersionAfterPHDV0,
masterKey: key,
});
return FILTER_ACCEPT;
}
if (key.startsWith(DbPrefixes.Replay)) {
// skip internal replay prefix entirely
this.setState(<DelimiterFilterState_SkippingPrefix> {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: DbPrefixes.Replay,
});
return FILTER_SKIP;
}
if (this._reachedMaxKeys()) {
return FILTER_END;
}
const commonPrefix = this.addCommonPrefixOrContents(key, value);
if (commonPrefix) {
// transition into SkippingPrefix state to skip all following keys
// while they start with the same prefix
this.setState(<DelimiterFilterState_SkippingPrefix> {
id: DelimiterFilterStateId.SkippingPrefix,
prefix: commonPrefix,
});
return FILTER_ACCEPT;
}
// update the state to start skipping versions of the new master key
this.setState(<DelimiterMasterFilterState_SkippingVersionsV0> {
id: DelimiterMasterFilterStateId.SkippingVersionsV0,
masterKey: key,
});
return FILTER_ACCEPT;
}
keyHandler_NotSkippingPrefixNorVersionsV0(key: string, value: string): FilterReturnValue {
return this.filter_onNewMasterKeyV0(key, value);
}
keyHandler_SkippingVersionsV0(key: string, value: string): FilterReturnValue {
/* In the SkippingVersionsV0 state, skip all version keys
* (<key><versionIdSeparator><version>) */
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex !== -1) {
return FILTER_SKIP;
}
return this.filter_onNewMasterKeyV0(key, value);
}
keyHandler_WaitVersionAfterPHDV0(key: string, value: string): FilterReturnValue {
// After a PHD key is encountered, the next version key of the
// same object if it exists is the new master key, hence
// consider it as such and call 'onNewMasterKeyV0' (the test
// 'masterKey == phdKey' is probably redundant when we already
// know we have a versioned key, since all objects in v0 have
// a master key, but keeping it in doubt)
const { masterKey: phdKey } = <DelimiterMasterFilterState_WaitVersionAfterPHDV0> this.state;
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex !== -1) {
const masterKey = key.slice(0, versionIdIndex);
if (masterKey === phdKey) {
return this.filter_onNewMasterKeyV0(masterKey, value);
}
}
return this.filter_onNewMasterKeyV0(key, value);
}
skippingBase(): string | undefined {
switch (this.state.id) {
case DelimiterMasterFilterStateId.SkippingVersionsV0:
const { masterKey } = <DelimiterMasterFilterState_SkippingVersionsV0> this.state;
return masterKey + VID_SEP;
default:
return super.skippingBase();
}
}
}
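A sketch of the v0 flow (illustrative; VID_SEP stands for the v0 version-id separator, VersioningConstants.VersionId.Separator): version keys of the current master are skipped, and a delete-marker master is accepted without being listed, which hides the object.

const { DelimiterMaster } = require('./delimiterMaster');
const logger = { info: () => {}, debug: () => {}, trace: () => {}, error: () => {} };
const VID_SEP = '\0'; // assumed separator value

const listing = new DelimiterMaster({}, logger, 'v0');
listing.filter({ key: 'obj', value: '{}' });                      // master key: listed
listing.filter({ key: `obj${VID_SEP}v1`, value: '{}' });          // FILTER_SKIP: version of 'obj'
listing.filter({ key: 'old', value: '{"isDeleteMarker":true}' }); // accepted but not listed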

lib/algos/list/delimiterVersions.js

@@ -33,6 +33,7 @@ class DelimiterVersions extends Delimiter {
// listing results
this.NextMarker = parameters.keyMarker;
this.NextVersionIdMarker = undefined;
this.inReplayPrefix = false;
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
@@ -150,6 +151,27 @@ class DelimiterVersions extends Delimiter {
return FILTER_ACCEPT;
}
/**
* Add a Common Prefix in the list
* @param {String} key - object name
* @param {Number} index - after prefix starting point
* @return {number} - indicates if iteration should continue
*/
addCommonPrefix(key, index) {
const commonPrefix = key.substring(0, index + this.delimiter.length);
if (this.CommonPrefixes.indexOf(commonPrefix) === -1
&& this.NextMarker !== commonPrefix) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.CommonPrefixes.push(commonPrefix);
this.NextMarker = commonPrefix;
++this.keys;
return FILTER_ACCEPT;
}
return FILTER_SKIP;
}
/**
* Filter to apply on each iteration if bucket is in v0
* versioning key format, based on:
@@ -163,8 +185,15 @@ class DelimiterVersions extends Delimiter {
* @return {number} - indicates if iteration should continue
*/
filterV0(obj) {
if (obj.key.startsWith(DbPrefixes.Replay)) {
this.inReplayPrefix = true;
return FILTER_SKIP;
}
this.inReplayPrefix = false;
if (Version.isPHD(obj.value)) {
// return accept to avoid skipping the next values in range
return FILTER_ACCEPT;
}
return this.filterCommon(obj.key, obj.value);
}
@@ -182,11 +211,15 @@ class DelimiterVersions extends Delimiter {
* @return {number} - indicates if iteration should continue
*/
filterV1(obj) {
if (Version.isPHD(obj.value)) {
// return accept to avoid skipping the next values in range
return FILTER_ACCEPT;
}
// this function receives both M and V keys, but their prefix
// length is the same so we can remove their prefix without
// looking at the type of key
return this.filterCommon(obj.key.slice(DbPrefixes.Master.length),
obj.value);
}
filterCommon(key, value) {
@@ -205,8 +238,9 @@ class DelimiterVersions extends Delimiter {
} else {
nonversionedKey = key.slice(0, versionIdIndex);
versionId = key.slice(versionIdIndex + 1);
// skip a version key if it is the master version
if (this.masterKey === nonversionedKey && this.masterVersionId === versionId) {
return FILTER_SKIP;
}
this.masterKey = undefined;
this.masterVersionId = undefined;
@@ -222,6 +256,9 @@ class DelimiterVersions extends Delimiter {
}
skippingV0() {
if (this.inReplayPrefix) {
return DbPrefixes.Replay;
}
if (this.NextMarker) {
const index = this.NextMarker.lastIndexOf(this.delimiter);
if (index === this.NextMarker.length - 1) {
@@ -238,7 +275,7 @@ class DelimiterVersions extends Delimiter {
}
// skip to the same object key in both M and V range listings
return [DbPrefixes.Master + skipV0,
DbPrefixes.Version + skipV0];
}
/**

lib/algos/list/skip.js

@@ -59,8 +59,15 @@ class Skip {
} else if (filteringResult === FILTER_SKIP
&& skippingRange !== SKIP_NONE) {
if (++this.streakLength >= MAX_STREAK_LENGTH) {
let newRange;
if (Array.isArray(skippingRange)) {
newRange = [];
for (let i = 0; i < skippingRange.length; ++i) {
newRange.push(this._inc(skippingRange[i]));
}
} else {
newRange = this._inc(skippingRange);
}
/* Avoid looping on the same range over and over. */
if (newRange === this.gteParams) {
this.streakLength = 1;
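A sketch of the array branch above (illustrative): DelimiterVersions.skippingV1() now returns a pair of ranges (master and version prefixes), so each element is incremented independently. The inc() below mimics the helper in the listing tools by bumping the last byte of the string; the 'M/' and 'V/' prefixes are placeholders for DbPrefixes.

// bump the last character, as the listing tools' inc() does
function inc(str) {
    return str.slice(0, -1) +
        String.fromCharCode(str.charCodeAt(str.length - 1) + 1);
}
const skippingRange = ['M/foo/', 'V/foo/'];
const newRange = Array.isArray(skippingRange)
    ? skippingRange.map(range => inc(range))
    : inc(skippingRange);
// newRange === ['M/foo0', 'V/foo0']: both listings jump past 'foo/'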

ArrayUtils.js

@@ -0,0 +1,87 @@
function indexOf(arr, value) {
if (!arr.length) {
return -1;
}
let lo = 0;
let hi = arr.length - 1;
while (hi - lo > 1) {
const i = lo + ((hi - lo) >> 1);
if (arr[i] > value) {
hi = i;
} else {
lo = i;
}
}
if (arr[lo] === value) {
return lo;
}
if (arr[hi] === value) {
return hi;
}
return -1;
}
function indexAtOrBelow(arr, value) {
let i;
let lo;
let hi;
if (!arr.length || arr[0] > value) {
return -1;
}
if (arr[arr.length - 1] <= value) {
return arr.length - 1;
}
lo = 0;
hi = arr.length - 1;
while (hi - lo > 1) {
i = lo + ((hi - lo) >> 1);
if (arr[i] > value) {
hi = i;
} else {
lo = i;
}
}
return lo;
}
/*
* perform symmetric diff in O(m + n)
*/
function symDiff(k1, k2, v1, v2, cb) {
let i = 0;
let j = 0;
const n = k1.length;
const m = k2.length;
while (i < n && j < m) {
if (k1[i] < k2[j]) {
cb(v1[i]);
i++;
} else if (k2[j] < k1[i]) {
cb(v2[j]);
j++;
} else {
i++;
j++;
}
}
while (i < n) {
cb(v1[i]);
i++;
}
while (j < m) {
cb(v2[j]);
j++;
}
}
module.exports = {
indexOf,
indexAtOrBelow,
symDiff,
};
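Usage sketch (illustrative): both binary searches assume their input arrays are sorted in ascending order.

const { indexOf, indexAtOrBelow, symDiff } = require('./ArrayUtils');

indexOf([1, 3, 5], 5);        // 2
indexOf([1, 3, 5], 4);        // -1: not present
indexAtOrBelow([1, 3, 5], 4); // 1: index of 3, the largest key <= 4

// symDiff() walks two sorted key arrays in O(m + n) and emits the
// values whose keys appear in exactly one of them
const out = [];
symDiff([1, 2, 4], [2, 3], ['a', 'b', 'c'], ['x', 'y'], v => out.push(v));
// out === ['a', 'y', 'c']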

SortedSet.js

@@ -0,0 +1,51 @@
const ArrayUtils = require('./ArrayUtils');
class SortedSet {
constructor(obj) {
if (obj) {
this.keys = obj.keys;
this.values = obj.values;
} else {
this.clear();
}
}
clear() {
this.keys = [];
this.values = [];
}
get size() {
return this.keys.length;
}
set(key, value) {
const index = ArrayUtils.indexAtOrBelow(this.keys, key);
if (this.keys[index] === key) {
this.values[index] = value;
return;
}
this.keys.splice(index + 1, 0, key);
this.values.splice(index + 1, 0, value);
}
isSet(key) {
const index = ArrayUtils.indexOf(this.keys, key);
return index >= 0;
}
get(key) {
const index = ArrayUtils.indexOf(this.keys, key);
return index >= 0 ? this.values[index] : undefined;
}
del(key) {
const index = ArrayUtils.indexOf(this.keys, key);
if (index >= 0) {
this.keys.splice(index, 1);
this.values.splice(index, 1);
}
}
}
module.exports = SortedSet;
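Usage sketch (illustrative): the keys array stays sorted, so every lookup is a binary search through ArrayUtils.

const SortedSet = require('./SortedSet');

const set = new SortedSet();
set.set('b', 2);
set.set('a', 1);
set.set('b', 3); // existing key: the value is overwritten in place
set.isSet('a');  // true
set.get('b');    // 3
set.del('a');
set.size;        // 1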

lib/auth/AuthInfo.ts

@@ -1,6 +1,4 @@
'use strict'; // eslint-disable-line strict
import * as constants from '../constants';
/**
* Class containing requester's information received from Vault
@@ -8,9 +6,15 @@ const constants = require('../constants');
* shortid, email, accountDisplayName and IAMdisplayName (if applicable)
* @return {AuthInfo} an AuthInfo instance
*/
export default class AuthInfo {
arn: string;
canonicalID: string;
shortid: string;
email: string;
accountDisplayName: string;
IAMdisplayName: string;
constructor(objectFromVault: any) {
// amazon resource name for IAM user (if applicable)
this.arn = objectFromVault.arn;
// account canonicalID
@@ -53,10 +57,8 @@ class AuthInfo {
return this.canonicalID.startsWith(
`${constants.zenkoServiceAccount}/`);
}
isRequesterThisServiceAccount(serviceName: string) {
const computedCanonicalID = `${constants.zenkoServiceAccount}/${serviceName}`;
return this.canonicalID === computedCanonicalID;
}
}

lib/auth/Vault.ts

@@ -1,16 +1,22 @@
import { Logger } from 'werelogs';
import errors from '../errors';
import AuthInfo from './AuthInfo';
/** vaultSignatureCb parses message from Vault and instantiates
* @param err - error from vault
* @param authInfo - info from vault
* @param log - log for request
* @param callback - callback to authCheck functions
* @param [streamingV4Params] - present if v4 signature;
* items used to calculate signature on chunks if streaming auth
* @return {undefined}
*/
function vaultSignatureCb(
err: Error | null,
authInfo: { message: { body: any } },
log: Logger,
callback: (err: Error | null, data?: any, results?: any, params?: any) => void,
streamingV4Params?: any
) {
// vaultclient API guarantees that it returns:
// - either `err`, an Error object with `code` and `message` properties set
// - or `err == null` and `info` is an object with `message.code` and
@@ -24,58 +30,99 @@ function vaultSignatureCb(err, authInfo, log, callback, streamingV4Params) {
const info = authInfo.message.body;
const userInfo = new AuthInfo(info.userInfo);
const authorizationResults = info.authorizationResults;
const auditLog: { accountDisplayName: string, IAMdisplayName?: string } =
{ accountDisplayName: userInfo.getAccountDisplayName() };
const iamDisplayName = userInfo.getIAMdisplayName();
if (iamDisplayName) {
auditLog.IAMdisplayName = iamDisplayName;
}
// @ts-ignore
log.addDefaultFields(auditLog);
return callback(null, userInfo, authorizationResults, streamingV4Params);
}
export type AuthV4RequestParams = {
version: 4;
log: Logger;
data: {
accessKey: string;
signatureFromRequest: string;
region: string;
stringToSign: string;
scopeDate: string;
authType: 'query' | 'header';
signatureVersion: string;
signatureAge?: number;
timestamp: number;
credentialScope: string;
securityToken: string;
algo: string;
log: Logger;
};
};
/**
* Class that provides common authentication methods against different
* authentication backends.
* @class Vault
*/
export default class Vault {
client: any;
implName: string;
/**
* @constructor
* @param {object} client - authentication backend or vault client
* @param {string} implName - implementation name for auth backend
*/
constructor(client: any, implName: string) {
this.client = client;
this.implName = implName;
}
/**
* authenticateV2Request
*
* @param params - the authentication parameters as returned by
* auth.extractParams
* @param params.version - shall equal 2
* @param params.data.accessKey - the user's accessKey
* @param params.data.signatureFromRequest - the signature read
* from the request
* @param params.data.stringToSign - the stringToSign
* @param params.data.algo - the hashing algorithm used for the
* signature
* @param params.data.authType - the type of authentication (query
* or header)
* @param params.data.signatureVersion - the version of the
* signature (AWS or AWS4)
* @param [params.data.signatureAge] - the age of the signature in
* ms
* @param params.data.log - the logger object
* @param {RequestContext []} requestContexts - an array of RequestContext
* instances which contain information for policy authorization check
* @param callback - callback with either error or user info
*/
authenticateV2Request(
params: {
version: 2;
log: Logger;
data: {
securityToken: string;
accessKey: string;
signatureFromRequest: string;
stringToSign: string;
algo: string;
authType: 'query' | 'header';
signatureVersion: string;
signatureAge?: number;
log: Logger;
};
},
requestContexts: any[],
callback: (err: Error | null, data?: any) => void
) {
params.log.debug('authenticating V2 request');
let serializedRCsArr: any;
if (requestContexts) {
serializedRCsArr = requestContexts.map(rc => rc.serialize());
}
@@ -85,44 +132,48 @@ class Vault {
params.data.accessKey,
{
algo: params.data.algo,
// @ts-ignore
reqUid: params.log.getSerializedUids(),
logger: params.log,
securityToken: params.data.securityToken,
requestContext: serializedRCsArr,
},
(err: Error | null, userInfo?: any) => vaultSignatureCb(err, userInfo,
params.log, callback),
);
}
/** authenticateV4Request
* @param params - the authentication parameters as returned by
* auth.extractParams
* @param params.version - shall equal 4
* @param params.data.log - the logger object
* @param params.data.accessKey - the user's accessKey
* @param params.data.signatureFromRequest - the signature read
* from the request
* @param params.data.region - the AWS region
* @param params.data.stringToSign - the stringToSign
* @param params.data.scopeDate - the timespan to allow the request
* @param params.data.authType - the type of authentication (query
* or header)
* @param params.data.signatureVersion - the version of the
* signature (AWS or AWS4)
* @param params.data.signatureAge - the age of the signature in ms
* @param params.data.timestamp - signature timestamp
* @param params.credentialScope - credentialScope for signature
* @param {RequestContext[] | null} requestContexts - an array of
* RequestContext instances which contain information for policy
* authorization check, or null when authenticating a chunk in
* streaming v4 auth
* @param callback - callback with either error or user info
*/
authenticateV4Request(
params: AuthV4RequestParams,
requestContexts: any[] | null,
callback: (err: Error | null, data?: any) => void
) {
params.log.debug('authenticating V4 request');
let serializedRCs: any;
if (requestContexts) {
serializedRCs = requestContexts.map(rc => rc.serialize());
}
@@ -140,31 +191,39 @@ class Vault {
params.data.region,
params.data.scopeDate,
{
// @ts-ignore
reqUid: params.log.getSerializedUids(),
logger: params.log,
securityToken: params.data.securityToken,
requestContext: serializedRCs,
},
(err: Error | null, userInfo?: any) => vaultSignatureCb(err, userInfo,
params.log, callback, streamingV4Params),
);
}
/** getCanonicalIds -- call Vault to get canonicalIDs based on email
* addresses
* @param emailAddresses - list of emailAddresses
* @param log - log object
* @param callback - callback with either error or an array
* of objects with each object containing the canonicalID and emailAddress
* of an account as properties
* @return {undefined}
*/
getCanonicalIds(
emailAddresses: string[],
log: Logger,
callback: (
err: Error | null,
data?: { canonicalID: string; email: string }[]
) => void
) {
log.trace('getting canonicalIDs from Vault based on emailAddresses',
{ emailAddresses });
this.client.getCanonicalIds(emailAddresses,
// @ts-ignore
{ reqUid: log.getSerializedUids() },
(err: Error | null, info?: any) => {
if (err) {
log.debug('received error message from auth provider',
{ errorMessage: err });
@@ -172,17 +231,17 @@ class Vault {
}
const infoFromVault = info.message.body;
log.trace('info received from vault', { infoFromVault });
const foundIds: { canonicalID: string; email: string }[] = [];
for (let i = 0; i < Object.keys(infoFromVault).length; i++) {
const key = Object.keys(infoFromVault)[i];
if (infoFromVault[key] === 'WrongFormat'
|| infoFromVault[key] === 'NotFound') {
return callback(errors.UnresolvableGrantByEmailAddress);
}
foundIds.push({
email: key,
canonicalID: infoFromVault[key],
})
}
return callback(null, foundIds);
});
@@ -190,18 +249,22 @@ class Vault {
/** getEmailAddresses -- call Vault to get email addresses based on
* canonicalIDs
* @param canonicalIDs - list of canonicalIDs
* @param log - log object
* @param callback - callback with either error or an object
* with canonicalID keys and email address values
* @return {undefined}
*/
getEmailAddresses(
canonicalIDs: string[],
log: Logger,
callback: (err: Error | null, data?: { [key: string]: any }) => void
) {
log.trace('getting emailAddresses from Vault based on canonicalIDs',
{ canonicalIDs });
this.client.getEmailAddresses(canonicalIDs,
// @ts-ignore
{ reqUid: log.getSerializedUids() },
(err: Error | null, info?: any) => {
if (err) {
log.debug('received error message from vault',
{ errorMessage: err });
@@ -222,6 +285,44 @@ class Vault {
});
}
/** getAccountIds -- call Vault to get accountIds based on
* canonicalIDs
* @param canonicalIDs - list of canonicalIDs
* @param log - log object
* @param callback - callback with either error or an object
* with canonicalID keys and accountId values
*/
getAccountIds(
canonicalIDs: string[],
log: Logger,
callback: (err: Error | null, data?: { [key: string]: string }) => void
) {
log.trace('getting accountIds from Vault based on canonicalIDs',
{ canonicalIDs });
this.client.getAccountIds(canonicalIDs,
// @ts-expect-error
{ reqUid: log.getSerializedUids() },
(err: Error | null, info?: any) => {
if (err) {
log.debug('received error message from vault',
{ errorMessage: err });
return callback(err);
}
const infoFromVault = info.message.body;
log.trace('info received from vault', { infoFromVault });
const result = {};
/* If the accountId was not found in Vault, do not
send the canonicalID back to the API */
Object.keys(infoFromVault).forEach(key => {
if (infoFromVault[key] !== 'NotFound' &&
infoFromVault[key] !== 'WrongFormat') {
result[key] = infoFromVault[key];
}
});
return callback(null, result);
});
}
/** checkPolicies -- call Vault to evaluate policies
* @param {object} requestContextParams - parameters needed to construct
* requestContext in Vault
@@ -234,14 +335,19 @@ class Vault {
* @param {object} log - log object
* @param {function} callback - callback with either error or an array
* of authorization results
* @return {undefined}
*/
checkPolicies(
requestContextParams: any[],
userArn: string,
log: Logger,
callback: (err: Error | null, data?: any[]) => void
) {
log.trace('sending request context params to vault to evaluate ' +
'policies');
this.client.checkPolicies(requestContextParams, userArn, {
// @ts-ignore
reqUid: log.getSerializedUids(),
}, (err: Error | null, info?: any) => {
if (err) {
log.debug('received error message from auth provider',
{ error: err });
@@ -252,13 +358,14 @@ class Vault {
});
}
checkHealth(log: Logger, callback: (err: Error | null, data?: any) => void) {
if (!this.client.healthcheck) {
const defResp = {};
defResp[this.implName] = { code: 200, message: 'OK' };
return callback(null, defResp);
}
// @ts-ignore
return this.client.healthcheck(log.getSerializedUids(), (err: Error | null, obj?: any) => {
const respBody = {};
if (err) {
log.debug(`error from ${this.implName}`, { error: err });
@@ -278,5 +385,3 @@ class Vault {
});
}
}

lib/auth/auth.ts

@@ -1,22 +1,21 @@
'use strict'; // eslint-disable-line strict
import * as crypto from 'crypto';
import { Logger } from 'werelogs';
import errors from '../errors';
import * as queryString from 'querystring';
import AuthInfo from './AuthInfo';
import * as v2 from './v2/authV2';
import * as v4 from './v4/authV4';
import * as constants from '../constants';
import constructStringToSignV2 from './v2/constructStringToSign';
import constructStringToSignV4 from './v4/constructStringToSign';
import { convertUTCtoISO8601 } from './v4/timeUtils';
import * as vaultUtilities from './in_memory/vaultUtilities';
import * as backend from './in_memory/Backend';
import validateAuthConfig from './in_memory/validateAuthConfig';
import AuthLoader from './in_memory/AuthLoader';
import Vault from './Vault';
let vault: Vault | null = null;
const auth = {};
const checkFunctions = {
v2: {
@@ -33,7 +32,7 @@ const checkFunctions = {
// 'All Users Group' so use this group as the canonicalID for the publicUser
const publicUserInfo = new AuthInfo({ canonicalID: constants.publicId });
function setAuthHandler(handler: Vault) {
vault = handler;
return auth;
}
@@ -41,25 +40,30 @@ function setAuthHandler(handler) {
/**
* This function will check validity of request parameters to authenticate
*
* @param request - Http request object
* @param log - Logger object
* @param awsService - Aws service related
* @param data - Parameters from queryString parsing or body of
* POST request
*
* @return ret
* @return ret.err - arsenal.errors object if any error was found
* @return ret.params - auth parameters to use later on for signature
* computation and check
* @return ret.params.version - the auth scheme version
* (undefined, 2, 4)
* @return ret.params.data - the auth scheme's specific data
*/
function extractParams(
request: any,
log: Logger,
awsService: string,
data: { [key: string]: string }
) {
log.trace('entered', { method: 'Arsenal.auth.server.extractParams' });
const authHeader = request.headers.authorization;
let version: 'v2' |'v4' | null = null;
let method: 'query' | 'headers' | null = null;
// Identify auth version and method to dispatch to the right check function
if (authHeader) {
@@ -102,16 +106,21 @@ function extractParams(request, log, awsService, data) {
/**
* This function will check validity of request parameters to authenticate
*
* @param request - Http request object
* @param log - Logger object
* @param cb - the callback
* @param awsService - Aws service related
* @param {RequestContext[] | null} requestContexts - array of RequestContext
* or null if no requestContexts to be sent to Vault (for instance,
* in multi-object delete request)
* @return {undefined}
*/
function doAuth(
request: any,
log: Logger,
cb: (err: Error | null, data?: any) => void,
awsService: string,
requestContexts: any[] | null
) {
const res = extractParams(request, log, awsService, request.query);
if (res.err) {
return cb(res.err);
@@ -119,23 +128,31 @@ function doAuth(request, log, cb, awsService, requestContexts) {
return cb(null, res.params);
}
if (requestContexts) {
requestContexts.forEach((requestContext) => {
const { params } = res
if ('data' in params) {
const { data } = params
requestContext.setAuthType(data.authType);
requestContext.setSignatureVersion(data.signatureVersion);
requestContext.setSecurityToken(data.securityToken);
if ('signatureAge' in data) {
requestContext.setSignatureAge(data.signatureAge);
}
}
});
}
// Corner cases managed, we're left with normal auth
// TODO What's happening here?
// @ts-ignore
res.params.log = log;
if (res.params.version === 2) {
// @ts-ignore
return vault!.authenticateV2Request(res.params, requestContexts, cb);
}
if (res.params.version === 4) {
// @ts-ignore
return vault!.authenticateV4Request(res.params, requestContexts, cb);
}
log.error('authentication method not found', {
@@ -144,20 +161,44 @@ function doAuth(request, log, cb, awsService, requestContexts) {
return cb(errors.InternalError);
}
/**
* This function will generate a version 4 content-md5 header
* It looks at the request path to determine what kind of header encoding is required
*
* @param path - the request path
* @param payload - the request payload to hash
*/
function generateContentMD5Header(
path: string,
payload: string,
) {
const encoding = path && path.startsWith('/_/backbeat/') ? 'hex' : 'base64';
return crypto.createHash('md5').update(payload, 'binary').digest(encoding);
}
/**
* This function will generate a version 4 header
*
* @param request - Http request object
* @param data - Parameters from queryString parsing or body of
* POST request
* @param accessKey - the accessKey
* @param secretKeyValue - the secretKey
* @param awsService - Aws service related
* @param [proxyPath] - path that gets proxied by reverse proxy
* @param [sessionToken] - security token if the access/secret keys
* are temporary credentials from STS
* @param [payload] - body of the request if any
*/
function generateV4Headers(
request: any,
data: { [key: string]: string },
accessKey: string,
secretKeyValue: string,
awsService: string,
proxyPath?: string,
sessionToken?: string,
payload?: string,
) {
Object.assign(request, { headers: {} });
const amzDate = convertUTCtoISO8601(Date.now());
// get date without time
@@ -169,9 +210,9 @@ function generateV4Headers(request, data, accessKey, secretKeyValue,
const timestamp = amzDate;
const algorithm = 'AWS4-HMAC-SHA256';
payload = payload || '';
if (request.method === 'POST') {
payload = queryString.stringify(data, undefined, undefined, {
encodeURIComponent,
});
}
@@ -180,11 +221,18 @@ function generateV4Headers(request, data, accessKey, secretKeyValue,
request.setHeader('host', request._headers.host);
request.setHeader('x-amz-date', amzDate);
request.setHeader('x-amz-content-sha256', payloadChecksum);
request.setHeader('content-md5', generateContentMD5Header(request.path, payload));
if (sessionToken) {
request.setHeader('x-amz-security-token', sessionToken);
}
Object.assign(request.headers, request._headers);
const signedHeaders = Object.keys(request._headers)
.filter(headerName =>
headerName.startsWith('x-amz-')
|| headerName.startsWith('x-scal-')
|| headerName === 'content-md5'
|| headerName === 'host'
).sort().join(';');
const params = { request, signedHeaders, payloadChecksum,
@@ -196,7 +244,7 @@ function generateV4Headers(request, data, accessKey, secretKeyValue,
scopeDate,
service);
const signature = crypto.createHmac('sha256', signingKey)
.update(stringToSign as string, 'binary').digest('hex');
const authorizationHeader = `${algorithm} Credential=${accessKey}` +
`/${credentialScope}, SignedHeaders=${signedHeaders}, ` +
`Signature=${signature}`;
@@ -204,21 +252,11 @@ function generateV4Headers(request, data, accessKey, secretKeyValue,
Object.assign(request, { headers: {} });
}
export const server = { extractParams, doAuth }
export const client = { generateV4Headers, constructStringToSignV2 }
export const inMemory = { backend, validateAuthConfig, AuthLoader }
export {
setAuthHandler as setHandler,
AuthInfo,
Vault,
};
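Wiring sketch (illustrative): a caller plugs a Vault-compatible backend in through setHandler(), then routes each incoming request to doAuth(). The makeVaultClient() factory, request, log and requestContexts below are placeholders, not part of this module.

const auth = require('./auth');

// hypothetical client; any backend the Vault wrapper accepts works here
const vaultclient = makeVaultClient();
auth.setHandler(new auth.Vault(vaultclient, 'vault'));

// per incoming request:
auth.server.doAuth(request, log, (err, userInfo) => {
    // on success, userInfo is an AuthInfo instance
}, 's3', requestContexts);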

lib/auth/in_memory/AuthLoader.js

@@ -1,223 +0,0 @@
const fs = require('fs');
const glob = require('simple-glob');
const joi = require('@hapi/joi');
const werelogs = require('werelogs');
const ARN = require('../../models/ARN');
/**
* Load authentication information from files or pre-loaded account
* objects
*
* @class AuthLoader
*/
class AuthLoader {
constructor(logApi) {
this._log = new (logApi || werelogs).Logger('S3');
this._authData = { accounts: [] };
// null: unknown validity, true/false: valid or invalid
this._isValid = null;
this._joiKeysValidator = joi.array()
.items({
access: joi.string().required(),
secret: joi.string().required(),
})
.required();
const accountsJoi = joi.array()
.items({
name: joi.string().required(),
email: joi.string().email().required(),
arn: joi.string().required(),
canonicalID: joi.string().required(),
shortid: joi.string().regex(/^[0-9]{12}$/).required(),
keys: this._joiKeysValidator,
// backward-compat
users: joi.array(),
})
.required()
.unique('arn')
.unique('email')
.unique('canonicalID');
this._joiValidator = joi.object({ accounts: accountsJoi });
}
/**
* add one or more accounts to the authentication info
*
* @param {object} authData - authentication data
* @param {object[]} authData.accounts - array of account data
* @param {string} authData.accounts[].name - account name
* @param {string} authData.accounts[].email - email address
* @param {string} authData.accounts[].arn - account ARN,
* e.g. 'arn:aws:iam::123456789012:root'
* @param {string} authData.accounts[].canonicalID - account
* canonical ID
* @param {string} authData.accounts[].shortid - account ID number,
* e.g. '123456789012'
* @param {object[]} authData.accounts[].keys - array of
* access/secret keys
* @param {string} authData.accounts[].keys[].access - access key
* @param {string} authData.accounts[].keys[].secret - secret key
* @param {string} [filePath] - optional file path info for
* logging purpose
* @return {undefined}
*/
addAccounts(authData, filePath) {
const isValid = this._validateData(authData, filePath);
if (isValid) {
this._authData.accounts =
this._authData.accounts.concat(authData.accounts);
// defer validity checking to when the data is fetched, to
// avoid logging the errors multiple times (we need to
// validate all accounts at once to detect duplicate values)
if (this._isValid) {
this._isValid = null;
}
} else {
this._isValid = false;
}
}
/**
* add account information from a file
*
* @param {string} filePath - file path containing JSON
* authentication info (see {@link addAccounts()} for format)
* @return {undefined}
*/
addFile(filePath) {
const authData = JSON.parse(fs.readFileSync(filePath));
this.addAccounts(authData, filePath);
}
/**
* add account information from a filesystem path
*
* @param {string|string[]} globPattern - filesystem glob pattern,
* can be a single string or an array of glob patterns. Globs
* can be simple file paths or can contain glob matching
* characters, like '/a/b/*.json'. The matching files are
* individually loaded as JSON and accounts are added. See
* {@link addAccounts()} for JSON format.
* @return {undefined}
*/
addFilesByGlob(globPattern) {
const files = glob(globPattern);
files.forEach(filePath => this.addFile(filePath));
}
/**
* perform validation on authentication info previously
* loaded. Note that it has to be done on the entire set after an
* update to catch duplicate account IDs or access keys.
*
* @return {boolean} true if authentication info is valid
* false otherwise
*/
validate() {
if (this._isValid === null) {
this._isValid = this._validateData(this._authData);
}
return this._isValid;
}
/**
* get authentication info as a plain JS object containing all accounts
* under the "accounts" attribute, with validation.
*
* @return {object|null} the validated authentication data
* null if invalid
*/
getData() {
return this.validate() ? this._authData : null;
}
_validateData(authData, filePath) {
const res = joi.validate(authData, this._joiValidator,
{ abortEarly: false });
if (res.error) {
this._dumpJoiErrors(res.error.details, filePath);
return false;
}
let allKeys = [];
let arnError = false;
const validatedAuth = res.value;
validatedAuth.accounts.forEach(account => {
// backward-compat: ignore arn if starts with 'aws:' and log a
// warning
if (account.arn.startsWith('aws:')) {
this._log.error(
'account must have a valid AWS ARN, legacy examples ' +
'starting with \'aws:\' are not supported anymore. ' +
'Please convert to a proper account entry (see ' +
'examples at https://github.com/scality/S3/blob/' +
'master/conf/authdata.json). Also note that support ' +
'for account users has been dropped.',
{ accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
if (account.users) {
this._log.error(
'support for account users has been dropped, consider ' +
'turning users into account entries (see examples at ' +
'https://github.com/scality/S3/blob/master/conf/' +
'authdata.json)',
{ accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
const arnObj = ARN.createFromString(account.arn);
if (arnObj.error) {
this._log.error(
'authentication config validation error',
{ reason: arnObj.error.description,
accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
if (!arnObj.isIAMAccount()) {
this._log.error(
'authentication config validation error',
{ reason: 'not an IAM account ARN',
accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
allKeys = allKeys.concat(account.keys);
});
if (arnError) {
return false;
}
const uniqueKeysRes = joi.validate(
allKeys, this._joiKeysValidator.unique('access'));
if (uniqueKeysRes.error) {
this._dumpJoiErrors(uniqueKeysRes.error.details, filePath);
return false;
}
return true;
}
_dumpJoiErrors(errors, filePath) {
errors.forEach(err => {
const logInfo = { item: err.path, filePath };
if (err.type === 'array.unique') {
logInfo.reason = `duplicate value '${err.context.path}'`;
logInfo.dupValue = err.context.value[err.context.path];
} else {
logInfo.reason = err.message;
logInfo.context = err.context;
}
this._log.error('authentication config validation error',
logInfo);
});
}
}
module.exports = AuthLoader;


@@ -0,0 +1,204 @@
import * as fs from 'fs';
import glob from 'simple-glob';
import joi from 'joi';
import werelogs from 'werelogs';
import * as types from './types';
import { Account, Accounts } from './types';
import ARN from '../../models/ARN';
/** Load authentication information from files or pre-loaded account objects */
export default class AuthLoader {
#log: werelogs.Logger;
#authData: Accounts;
#isValid: 'waiting-for-validation' | 'valid' | 'invalid';
constructor(logApi: { Logger: typeof werelogs.Logger } = werelogs) {
this.#log = new logApi.Logger('S3');
this.#authData = { accounts: [] };
this.#isValid = 'waiting-for-validation';
}
/** Add one or more accounts to the authentication info */
addAccounts(authData: Accounts, filePath?: string) {
const isValid = this.#isAuthDataValid(authData, filePath);
if (isValid) {
this.#authData.accounts = [
...this.#authData.accounts,
...authData.accounts,
];
// defer validity checking until the data is retrieved, to avoid
// logging the errors multiple times (we need to validate
// all accounts at once to detect duplicate values)
if (this.#isValid === 'valid') {
this.#isValid = 'waiting-for-validation';
}
} else {
this.#isValid = 'invalid';
}
}
/**
* Add account information from a file. Use { legacy: false } as an option
* to use the new, Promise-based version.
*
* @param filePath - file path containing JSON
* authentication info (see {@link addAccounts()} for format)
*/
addFile(filePath: string, options: { legacy: false }): Promise<void>;
/** @deprecated Please use Promise-version instead. */
addFile(filePath: string, options?: { legacy: true }): void;
addFile(filePath: string, options = { legacy: true }) {
// On deprecation, remove the legacy part and keep the promises.
const readFunc: any = options.legacy ? fs.readFileSync : fs.promises.readFile;
const readResult = readFunc(filePath, 'utf8') as Promise<string> | string;
const prom = Promise.resolve(readResult).then((data) => {
const authData = JSON.parse(data);
this.addAccounts(authData, filePath);
});
return options.legacy ? undefined : prom;
}
/**
* Add account information from a filesystem path
*
* @param globPattern - filesystem glob pattern,
* can be a single string or an array of glob patterns. Globs
* can be simple file paths or can contain glob matching
* characters, like '/a/b/*.json'. The matching files are
* individually loaded as JSON and accounts are added. See
* {@link addAccounts()} for JSON format.
*/
addFilesByGlob(globPattern: string | string[]) {
// FIXME switch glob to async version
const files = glob(globPattern);
files.forEach((filePath) => this.addFile(filePath));
}
/**
* Perform validation on authentication info previously
* loaded. Note that it has to be done on the entire set after an
* update to catch duplicate account IDs or access keys.
*/
validate() {
if (this.#isValid === 'waiting-for-validation') {
const isValid = this.#isAuthDataValid(this.#authData);
this.#isValid = isValid ? 'valid' : 'invalid';
}
return this.#isValid === 'valid';
}
/**
* Get authentication info as a plain JS object containing all accounts
* under the "accounts" attribute, with validation.
*/
get data() {
return this.validate() ? this.#authData : null;
}
/** backward-compat: ignore arn if starts with 'aws:' and log a warning */
#isNotLegacyAWSARN(account: Account, filePath?: string) {
if (account.arn.startsWith('aws:')) {
const { name: accountName, arn: accountArn } = account;
this.#log.error(
'account must have a valid AWS ARN, legacy examples ' +
"starting with 'aws:' are not supported anymore. " +
'Please convert to a proper account entry (see ' +
'examples at https://github.com/scality/S3/blob/' +
'master/conf/authdata.json). Also note that support ' +
'for account users has been dropped.',
{ accountName, accountArn, filePath }
);
return false;
}
return true;
}
#isValidUsers(account: Account, filePath?: string) {
if (account.users) {
const { name: accountName, arn: accountArn } = account;
this.#log.error(
'support for account users has been dropped, consider ' +
'turning users into account entries (see examples at ' +
'https://github.com/scality/S3/blob/master/conf/' +
'authdata.json)',
{ accountName, accountArn, filePath }
);
return false;
}
return true;
}
#isValidARN(account: Account, filePath?: string) {
const arnObj = ARN.createFromString(account.arn);
const { name: accountName, arn: accountArn } = account;
if (arnObj instanceof ARN) {
if (!arnObj.isIAMAccount()) {
this.#log.error('authentication config validation error', {
reason: 'not an IAM account ARN',
accountName,
accountArn,
filePath,
});
return false;
}
} else {
this.#log.error('authentication config validation error', {
reason: arnObj.error.description,
accountName,
accountArn,
filePath,
});
return false;
}
return true;
}
#isAuthDataValid(authData: any, filePath?: string) {
const options = { abortEarly: true };
const response = types.validators.accounts.validate(authData, options);
if (response.error) {
this.#dumpJoiErrors(response.error.details, filePath);
return false;
}
const validAccounts = response.value.accounts.filter(
(account: Account) =>
this.#isNotLegacyAWSARN(account, filePath) &&
this.#isValidUsers(account, filePath) &&
this.#isValidARN(account, filePath)
);
const areSomeInvalidAccounts =
validAccounts.length !== response.value.accounts.length;
if (areSomeInvalidAccounts) {
return false;
}
const keys = validAccounts.flatMap((account) => account.keys);
const uniqueKeysValidator = types.validators.keys.unique('access');
const areKeysUnique = uniqueKeysValidator.validate(keys);
if (areKeysUnique.error) {
this.#dumpJoiErrors(areKeysUnique.error.details, filePath);
return false;
}
return true;
}
#dumpJoiErrors(errors: joi.ValidationErrorItem[], filePath?: string) {
errors.forEach((err) => {
const baseLogInfo = { item: err.path, filePath };
const logInfo = () => {
if (err.type === 'array.unique') {
const reason = `duplicate value '${err.context?.path}'`;
const dupValue = err.context?.value[err.context.path];
return { ...baseLogInfo, reason, dupValue };
} else {
const reason = err.message;
const context = err.context;
return { ...baseLogInfo, reason, context };
}
};
this.#log.error(
'authentication config validation error',
logInfo()
);
});
}
}
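For reference, a minimal usage sketch of the rewritten loader; the file paths and the async wrapper are hypothetical:

import AuthLoader from './AuthLoader';

async function loadAuth() {
    const loader = new AuthLoader();
    // Synchronous legacy mode (deprecated)...
    loader.addFile('/conf/authdata.json');
    // ...or the Promise-based variant introduced above.
    await loader.addFile('/conf/extra-accounts.json', { legacy: false });
    if (!loader.validate()) {
        throw new Error('invalid authentication config');
    }
    return loader.data; // all accounts, or null when invalid
}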


@@ -1,189 +0,0 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
const errors = require('../../errors');
const calculateSigningKey = require('./vaultUtilities').calculateSigningKey;
const hashSignature = require('./vaultUtilities').hashSignature;
const Indexer = require('./Indexer');
function _formatResponse(userInfoToSend) {
return {
message: {
body: { userInfo: userInfoToSend },
},
};
}
/**
* Class that provides a memory backend for verifying signatures and getting
* emails and canonical ids associated with an account.
*
* @class Backend
*/
class Backend {
/**
* @constructor
* @param {string} service - service identifier for constructing the arn
* @param {Indexer} indexer - indexer instance for retrieving account info
* @param {function} formatter - function which accepts user info to send
* back and returns it in an object
*/
constructor(service, indexer, formatter) {
this.service = service;
this.indexer = indexer;
this.formatResponse = formatter;
}
/** verifySignatureV2
* @param {string} stringToSign - string to sign built per AWS rules
* @param {string} signatureFromRequest - signature sent with request
* @param {string} accessKey - account accessKey
* @param {object} options - contains algorithm (SHA1 or SHA256)
* @param {function} callback - callback with either error or user info
* @return {function} calls callback
*/
verifySignatureV2(stringToSign, signatureFromRequest,
accessKey, options, callback) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const reconstructedSig =
hashSignature(stringToSign, secretKey, options.algo);
if (signatureFromRequest !== reconstructedSig) {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = this.formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
}
/** verifySignatureV4
* @param {string} stringToSign - string to sign built per AWS rules
* @param {string} signatureFromRequest - signature sent with request
* @param {string} accessKey - account accessKey
* @param {string} region - region specified in request credential
* @param {string} scopeDate - date specified in request credential
* @param {object} options - options to send to Vault
* (just contains reqUid for logging in Vault)
* @param {function} callback - callback with either error or user info
* @return {function} calls callback
*/
verifySignatureV4(stringToSign, signatureFromRequest, accessKey,
region, scopeDate, options, callback) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const signingKey = calculateSigningKey(secretKey, region, scopeDate);
const reconstructedSig = crypto.createHmac('sha256', signingKey)
.update(stringToSign, 'binary').digest('hex');
if (signatureFromRequest !== reconstructedSig) {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = this.formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
}
/**
* Gets canonical ID's for a list of accounts
* based on email associated with account
* @param {array} emails - list of email addresses
* @param {object} log - log object
* @param {function} cb - callback to calling function
* @returns {function} callback with either error or
* object with email addresses as keys and canonical IDs
* as values
*/
getCanonicalIds(emails, log, cb) {
const results = {};
emails.forEach(email => {
const lowercasedEmail = email.toLowerCase();
const entity = this.indexer.getEntityByEmail(lowercasedEmail);
if (!entity) {
results[email] = 'NotFound';
} else {
results[email] =
entity.canonicalID;
}
});
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
/**
* Gets email addresses (referred to as display names for getACLs)
* for a list of accounts based on the canonical IDs associated with them
* @param {array} canonicalIDs - list of canonicalIDs
* @param {object} options - to send log id to vault
* @param {function} cb - callback to calling function
* @returns {function} callback with either error or
* an object from Vault containing account canonicalID
* as each object key and an email address as the value (or "NotFound")
*/
getEmailAddresses(canonicalIDs, options, cb) {
const results = {};
canonicalIDs.forEach(canonicalId => {
const foundEntity = this.indexer.getEntityByCanId(canonicalId);
if (!foundEntity || !foundEntity.email) {
results[canonicalId] = 'NotFound';
} else {
results[canonicalId] = foundEntity.email;
}
});
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
}
class S3AuthBackend extends Backend {
/**
* @constructor
* @param {object} authdata - the authentication config file's data
* @param {object[]} authdata.accounts - array of account objects
* @param {string=} authdata.accounts[].name - account name
* @param {string} authdata.accounts[].email - account email
* @param {string} authdata.accounts[].arn - IAM resource name
* @param {string} authdata.accounts[].canonicalID - account canonical ID
* @param {string} authdata.accounts[].shortid - short account ID
* @param {object[]=} authdata.accounts[].keys - array of key objects
* @param {string} authdata.accounts[].keys[].access - access key
* @param {string} authdata.accounts[].keys[].secret - secret key
* @return {undefined}
*/
constructor(authdata) {
super('s3', new Indexer(authdata), _formatResponse);
}
refreshAuthData(authData) {
this.indexer = new Indexer(authData);
}
}
module.exports = {
s3: S3AuthBackend,
};


@@ -0,0 +1,194 @@
import * as crypto from 'crypto';
import errors from '../../errors';
import { calculateSigningKey, hashSignature } from './vaultUtilities';
import Indexer from './Indexer';
import { Accounts } from './types';
function _formatResponse(userInfoToSend: any) {
return {
message: {
body: { userInfo: userInfoToSend },
},
};
}
/**
* Class that provides a memory backend for verifying signatures and getting
* emails and canonical ids associated with an account.
*/
class Backend {
indexer: Indexer;
service: string;
constructor(service: string, indexer: Indexer) {
this.service = service;
this.indexer = indexer;
}
// CODEQUALITY-TODO-SYNC Should be synchronous
verifySignatureV2(
stringToSign: string,
signatureFromRequest: string,
accessKey: string,
options: { algo: 'SHA256' | 'SHA1' },
callback: (
error: Error | null,
data?: ReturnType<typeof _formatResponse>
) => void
) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const reconstructedSig =
hashSignature(stringToSign, secretKey, options.algo);
if (signatureFromRequest !== reconstructedSig) {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
// TODO Why?
// @ts-ignore
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = _formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
}
// TODO Options not used. Why ?
// CODEQUALITY-TODO-SYNC Should be synchronous
verifySignatureV4(
stringToSign: string,
signatureFromRequest: string,
accessKey: string,
region: string,
scopeDate: string,
_options: { algo: 'SHA256' | 'SHA1' },
callback: (
err: Error | null,
data?: ReturnType<typeof _formatResponse>
) => void
) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const signingKey = calculateSigningKey(secretKey, region, scopeDate);
const reconstructedSig = crypto.createHmac('sha256', signingKey)
.update(stringToSign, 'binary').digest('hex');
if (signatureFromRequest !== reconstructedSig) {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
// TODO Why?
// @ts-ignore
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = _formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
}
// TODO log not used. Why ?
// CODEQUALITY-TODO-SYNC Should be synchronous
getCanonicalIds(
emails: string[],
_log: any,
cb: (err: null, data: { message: { body: any } }) => void
) {
const results = {};
emails.forEach(email => {
const lowercasedEmail = email.toLowerCase();
const entity = this.indexer.getEntityByEmail(lowercasedEmail);
if (!entity) {
results[email] = 'NotFound';
} else {
results[email] =
entity.canonicalID;
}
});
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
// TODO options not used. Why ?
// CODEQUALITY-TODO-SYNC Should be synchronous
getEmailAddresses(
canonicalIDs: string[],
_options: any,
cb: (err: null, data: { message: { body: any } }) => void
) {
const results = {};
canonicalIDs.forEach(canonicalId => {
const foundEntity = this.indexer.getEntityByCanId(canonicalId);
if (!foundEntity || !foundEntity.email) {
results[canonicalId] = 'NotFound';
} else {
results[canonicalId] = foundEntity.email;
}
});
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
// TODO options not used. Why ?
// CODEQUALITY-TODO-SYNC Should be synchronous
/**
* Gets accountIds for a list of accounts based on
* the canonical IDs associated with the account
* @param canonicalIDs - list of canonicalIDs
* @param _options - to send log id to vault
* @param cb - callback to calling function
* @returns The following description is inaccurate but kept for the
* archives: callback with either error or
* an object from Vault containing account canonicalID
* as each object key and an accountId as the value (or "NotFound")
*/
getAccountIds(
canonicalIDs: string[],
_options: any,
cb: (err: null, data: { message: { body: any } }) => void
) {
const results = {};
canonicalIDs.forEach(canonicalID => {
const foundEntity = this.indexer.getEntityByCanId(canonicalID);
if (!foundEntity || !foundEntity.shortid) {
results[canonicalID] = 'Not Found';
} else {
results[canonicalID] = foundEntity.shortid;
}
});
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
}
class S3AuthBackend extends Backend {
constructor(authdata: Accounts) {
super('s3', new Indexer(authdata));
}
refreshAuthData(authData: Accounts) {
this.indexer = new Indexer(authData);
}
}
export { S3AuthBackend as s3 };
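A sketch of exercising the in-memory backend directly; the account values and request date are made up, and the v2 signature is recomputed locally the same way hashSignature() does, so the check passes:

import * as crypto from 'crypto';
import { s3 as S3AuthBackend } from './backend';

const authdata = {
    accounts: [{
        name: 'test-account',
        email: 'test@example.com',
        arn: 'arn:aws:iam::123456789012:root',
        canonicalID: 'abcdef0123456789',
        shortid: '123456789012',
        keys: [{ access: 'AKIAIOSFODNN7EXAMPLE', secret: 'testSecret' }],
        users: [],
    }],
};
const backend = new S3AuthBackend(authdata);

const stringToSign = 'GET\n\n\nMon, 01 Jan 2024 00:00:00 GMT\n/test-bucket/';
const signature = crypto.createHmac('sha256', 'testSecret')
    .update(stringToSign, 'binary').digest('base64');

backend.verifySignatureV2(stringToSign, signature, 'AKIAIOSFODNN7EXAMPLE',
    { algo: 'SHA256' }, (err, res) => {
        if (err) throw err;
        console.log(res?.message.body.userInfo.canonicalID);
    });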


@@ -1,145 +0,0 @@
/**
* Class that provides an internal indexing over the simple data provided by
* the authentication configuration file for the memory backend. This allows
* accessing the different authentication entities through various types of
* keys.
*
* @class Indexer
*/
class Indexer {
/**
* @constructor
* @param {object} authdata - the authentication config file's data
* @param {object[]} authdata.accounts - array of account objects
* @param {string=} authdata.accounts[].name - account name
* @param {string} authdata.accounts[].email - account email
* @param {string} authdata.accounts[].arn - IAM resource name
* @param {string} authdata.accounts[].canonicalID - account canonical ID
* @param {string} authdata.accounts[].shortid - short account ID
* @param {object[]=} authdata.accounts[].keys - array of key objects
* @param {string} authdata.accounts[].keys[].access - access key
* @param {string} authdata.accounts[].keys[].secret - secret key
* @return {undefined}
*/
constructor(authdata) {
this.accountsBy = {
canId: {},
accessKey: {},
email: {},
};
/*
* This may happen if the application is configured to use an
* authentication backend other than in-memory.
* In that case, leave the indexes empty here rather than failing
* downstream.
*/
if (!authdata) {
return;
}
this._build(authdata);
}
_indexAccount(account) {
const accountData = {
arn: account.arn,
canonicalID: account.canonicalID,
shortid: account.shortid,
accountDisplayName: account.name,
email: account.email.toLowerCase(),
keys: [],
};
this.accountsBy.canId[accountData.canonicalID] = accountData;
this.accountsBy.email[accountData.email] = accountData;
if (account.keys !== undefined) {
account.keys.forEach(key => {
accountData.keys.push(key);
this.accountsBy.accessKey[key.access] = accountData;
});
}
}
_build(authdata) {
authdata.accounts.forEach(account => {
this._indexAccount(account);
});
}
/**
* This method returns the account associated to a canonical ID.
*
* @param {string} canId - The canonicalId of the account
* @return {Object} account - The account object
* @return {Object} account.arn - The account's ARN
* @return {Object} account.canonicalID - The account's canonical ID
* @return {Object} account.shortid - The account's internal shortid
* @return {Object} account.accountDisplayName - The account's display name
* @return {Object} account.email - The account's lowercased email
*/
getEntityByCanId(canId) {
return this.accountsBy.canId[canId];
}
/**
* This method returns the entity (either an account or a user) associated
* to a canonical ID.
*
* @param {string} key - The accessKey of the entity
* @return {Object} entity - The entity object
* @return {Object} entity.arn - The entity's ARN
* @return {Object} entity.canonicalID - The canonical ID for the entity's
* account
* @return {Object} entity.shortid - The entity's internal shortid
* @return {Object} entity.accountDisplayName - The entity's account
* display name
* @return {Object} entity.IAMDisplayName - The user's display name
* (if the entity is an user)
* @return {Object} entity.email - The entity's lowercased email
*/
getEntityByKey(key) {
return this.accountsBy.accessKey[key];
}
/**
* This method returns the entity (either an account or a user) associated
* to an email address.
*
* @param {string} email - The email address
* @return {Object} entity - The entity object
* @return {Object} entity.arn - The entity's ARN
* @return {Object} entity.canonicalID - The canonical ID for the entity's
* account
* @return {Object} entity.shortid - The entity's internal shortid
* @return {Object} entity.accountDisplayName - The entity's account
* display name
* @return {Object} entity.IAMDisplayName - The user's display name
* (if the entity is an user)
* @return {Object} entity.email - The entity's lowercased email
*/
getEntityByEmail(email) {
const lowerCasedEmail = email.toLowerCase();
return this.accountsBy.email[lowerCasedEmail];
}
/**
* This method returns the secret key associated with the entity.
* @param {Object} entity - the entity object
* @param {string} accessKey - access key
* @returns {string} secret key
*/
getSecretKey(entity, accessKey) {
return entity.keys
.filter(kv => kv.access === accessKey)[0].secret;
}
/**
* This method returns the account display name associated with the entity.
* @param {Object} entity - the entity object
* @returns {string} account display name
*/
getAcctDisplayName(entity) {
return entity.accountDisplayName;
}
}
module.exports = Indexer;


@@ -0,0 +1,93 @@
import { Accounts, Account, Entity } from './types';
/**
* Class that provides an internal indexing over the simple data provided by
* the authentication configuration file for the memory backend. This allows
* accessing the different authentication entities through various types of
* keys.
*/
export default class Indexer {
accountsBy: {
canId: { [id: string]: Entity | undefined },
accessKey: { [id: string]: Entity | undefined },
email: { [id: string]: Entity | undefined },
}
constructor(authdata?: Accounts) {
this.accountsBy = {
canId: {},
accessKey: {},
email: {},
};
/*
* This may happen if the application is configured to use an
* authentication backend other than in-memory.
* In that case, leave the indexes empty here rather than failing
* downstream.
*/
if (!authdata) {
return;
}
this.#build(authdata);
}
#indexAccount(account: Account) {
const accountData: Entity = {
arn: account.arn,
canonicalID: account.canonicalID,
shortid: account.shortid,
accountDisplayName: account.name,
email: account.email.toLowerCase(),
keys: [],
};
this.accountsBy.canId[accountData.canonicalID] = accountData;
this.accountsBy.email[accountData.email] = accountData;
if (account.keys !== undefined) {
account.keys.forEach(key => {
accountData.keys.push(key);
this.accountsBy.accessKey[key.access] = accountData;
});
}
}
#build(authdata: Accounts) {
authdata.accounts.forEach(account => {
this.#indexAccount(account);
});
}
/** This method returns the account associated to a canonical ID. */
getEntityByCanId(canId: string): Entity | undefined {
return this.accountsBy.canId[canId];
}
/**
* This method returns the entity (either an account or a user) associated
* to a canonical ID.
* @param {string} key - The accessKey of the entity
*/
getEntityByKey(key: string): Entity | undefined {
return this.accountsBy.accessKey[key];
}
/**
* This method returns the entity (either an account or a user) associated
* to an email address.
*/
getEntityByEmail(email: string): Entity | undefined {
const lowerCasedEmail = email.toLowerCase();
return this.accountsBy.email[lowerCasedEmail];
}
/** This method returns the secret key associated with the entity. */
getSecretKey(entity: Entity, accessKey: string) {
const keys = entity.keys.filter(kv => kv.access === accessKey);
return keys[0].secret;
}
/** This method returns the account display name associated with the entity. */
getAcctDisplayName(entity: Entity) {
return entity.accountDisplayName;
}
}
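The indexer can also be used on its own; this sketch reuses the hypothetical authdata object from the backend example above:

import Indexer from './Indexer';

const indexer = new Indexer(authdata);
const entity = indexer.getEntityByKey('AKIAIOSFODNN7EXAMPLE');
if (entity) {
    console.log(indexer.getAcctDisplayName(entity)); // 'test-account'
    console.log(indexer.getSecretKey(entity, 'AKIAIOSFODNN7EXAMPLE')); // 'testSecret'
    // emails are indexed lowercased, so lookups are case-insensitive
    console.log(indexer.getEntityByEmail('TEST@example.com') === entity); // true
}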


@@ -0,0 +1,51 @@
import joi from 'joi';
export type Callback<Data = any> = (err?: Error | null | undefined, data?: Data) => void;
export type Credentials = { access: string; secret: string };
export type Base = {
arn: string;
canonicalID: string;
shortid: string;
email: string;
keys: Credentials[];
};
export type Account = Base & { name: string; users: any[] };
export type Accounts = { accounts: Account[] };
export type Entity = Base & { accountDisplayName: string };
const keys = ((): joi.ArraySchema => {
const str = joi.string().required();
const items = { access: str, secret: str };
return joi.array().items(items).required();
})();
const account = (() => {
return joi.object<Account>({
name: joi.string().required(),
email: joi.string().email().required(),
arn: joi.string().required(),
canonicalID: joi.string().required(),
shortid: joi
.string()
.regex(/^[0-9]{12}$/)
.required(),
keys: keys,
// backward-compat
users: joi.array(),
});
})();
const accounts = (() => {
return joi.object<Accounts>({
accounts: joi
.array()
.items(account)
.required()
.unique('arn')
.unique('email')
.unique('canonicalID'),
});
})();
export const validators = { keys, account, accounts };
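A quick sketch of running these shared joi validators against raw config data; the sample object is deliberately invalid to show the error path:

import { validators } from './types';

const raw = {
    accounts: [{
        name: 'test-account',
        email: 'not-an-email', // fails joi.string().email()
        arn: 'arn:aws:iam::123456789012:root',
        canonicalID: 'abcdef0123456789',
        shortid: '123456789012',
        keys: [{ access: 'AKIAIOSFODNN7EXAMPLE', secret: 'testSecret' }],
    }],
};
const { error } = validators.accounts.validate(raw, { abortEarly: false });
error?.details.forEach(d => console.log(d.type, d.message));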


@@ -1,18 +0,0 @@
const AuthLoader = require('./AuthLoader');
/**
* @deprecated please use {@link AuthLoader} class instead
*
* @param {object} authdata - the authentication config file's data
* @param {werelogs.API} logApi - object providing a constructor function
* for the Logger object
* @return {boolean} true on erroneous data,
* false on success
*/
function validateAuthConfig(authdata, logApi) {
const authLoader = new AuthLoader(logApi);
authLoader.addAccounts(authdata);
return !authLoader.validate();
}
module.exports = validateAuthConfig;


@@ -0,0 +1,16 @@
import { Logger } from 'werelogs';
import AuthLoader from './AuthLoader';
import { Accounts } from './types';
/**
* @deprecated please use {@link AuthLoader} class instead
* @return true on erroneous data, false on success
*/
export default function validateAuthConfig(
authdata: Accounts,
logApi?: { Logger: typeof Logger }
) {
const authLoader = new AuthLoader(logApi);
authLoader.addAccounts(authdata);
return !authLoader.validate();
}
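Usage is unchanged from the JavaScript version; a sketch, reusing the hypothetical authdata from the earlier examples:

import validateAuthConfig from './validateAuthConfig';

if (validateAuthConfig(authdata)) {
    // true signals erroneous data, mirroring the old behaviour
    throw new Error('invalid authentication config');
}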


@@ -1,6 +1,4 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
import * as crypto from 'crypto';
/** hashSignature for v2 Auth
* @param {string} stringToSign - built string to sign per AWS rules
@@ -8,11 +6,19 @@ const crypto = require('crypto');
* @param {string} algorithm - either SHA256 or SHA1
* @return {string} reconstructed signature
*/
function hashSignature(stringToSign, secretKey, algorithm) {
export function hashSignature(
stringToSign: string,
secretKey: string,
algorithm: 'SHA256' | 'SHA1'
): string {
const hmacObject = crypto.createHmac(algorithm, secretKey);
return hmacObject.update(stringToSign, 'binary').digest('base64');
}
const sha256Digest = (key: string | Buffer, data: string) => {
return crypto.createHmac('sha256', key).update(data, 'binary').digest();
};
/** calculateSigningKey for v4 Auth
* @param {string} secretKey - requester's secretKey
* @param {string} region - region included in request
@@ -20,16 +26,15 @@ function hashSignature(stringToSign, secretKey, algorithm) {
* @param {string} [service] - To specify another service than s3
* @return {string} signingKey - signingKey to calculate signature
*/
function calculateSigningKey(secretKey, region, scopeDate, service) {
const dateKey = crypto.createHmac('sha256', `AWS4${secretKey}`)
.update(scopeDate, 'binary').digest();
const dateRegionKey = crypto.createHmac('sha256', dateKey)
.update(region, 'binary').digest();
const dateRegionServiceKey = crypto.createHmac('sha256', dateRegionKey)
.update(service || 's3', 'binary').digest();
const signingKey = crypto.createHmac('sha256', dateRegionServiceKey)
.update('aws4_request', 'binary').digest();
export function calculateSigningKey(
secretKey: string,
region: string,
scopeDate: string,
service?: string
): Buffer {
const dateKey = sha256Digest(`AWS4${secretKey}`, scopeDate);
const dateRegionKey = sha256Digest(dateKey, region);
const dateRegionServiceKey = sha256Digest(dateRegionKey, service || 's3');
const signingKey = sha256Digest(dateRegionServiceKey, 'aws4_request');
return signingKey;
}
module.exports = { hashSignature, calculateSigningKey };
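The refactored helper can be sanity-checked against the worked signing-key example published in AWS's Signature Version 4 documentation:

import { calculateSigningKey } from './vaultUtilities';

// Inputs from AWS's SigV4 'deriving a signing key' example.
const secretKey = 'wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY';
const signingKey = calculateSigningKey(secretKey, 'us-east-1', '20150830', 'iam');
console.log(signingKey.toString('hex'));
// c4afb1cc5771d871763a393e44b703571b55cc28424d1a5e86da6ed3c154a4b9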


@@ -1,7 +1,5 @@
'use strict'; // eslint-disable-line strict
function algoCheck(signatureLength) {
let algo;
export default function algoCheck(signatureLength: number) {
let algo: 'sha256' | 'sha1';
// If the signature sent is 44 characters,
// this means that sha256 was used:
// 44 characters in base64
@@ -13,7 +11,6 @@ function algoCheck(signatureLength) {
if (signatureLength === SHA1LEN) {
algo = 'sha1';
}
// @ts-ignore
return algo;
}
module.exports = algoCheck;
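The length check works because v2 signatures are base64-encoded HMAC digests: a 32-byte SHA-256 digest encodes to 44 characters and a 20-byte SHA-1 digest to 28. For illustration:

import algoCheck from './algoCheck';

console.log(algoCheck(44)); // 'sha256'
console.log(algoCheck(28)); // 'sha1'
console.log(algoCheck(40)); // undefined -- hence the @ts-ignore on the return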


@@ -1,11 +0,0 @@
'use strict'; // eslint-disable-line strict
const headerAuthCheck = require('./headerAuthCheck');
const queryAuthCheck = require('./queryAuthCheck');
const authV2 = {
header: headerAuthCheck,
query: queryAuthCheck,
};
module.exports = authV2;

lib/auth/v2/authV2.ts

@@ -0,0 +1,2 @@
export * as header from './headerAuthCheck';
export * as query from './queryAuthCheck';


@@ -1,9 +1,9 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../errors');
import { Logger } from 'werelogs';
import errors from '../../errors';
const epochTime = new Date('1970-01-01').getTime();
function checkRequestExpiry(timestamp, log) {
export default function checkRequestExpiry(timestamp: number, log: Logger) {
// If timestamp is before epochTime, the request is invalid and return
// errors.AccessDenied
if (timestamp < epochTime) {
@@ -32,5 +32,3 @@ function checkRequestExpiry(timestamp, log) {
return undefined;
}
module.exports = checkRequestExpiry;
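A sketch of calling the expiry check in isolation; the logger setup is an assumption, and the allowed skew window is defined earlier in the file, outside this excerpt:

import { Logger } from 'werelogs';
import checkRequestExpiry from './checkRequestExpiry';

const log = new Logger('test');
// undefined means the timestamp is acceptable; otherwise an Arsenal
// error such as AccessDenied (pre-epoch) or RequestTimeTooSkewed.
const err = checkRequestExpiry(Date.now(), log);
console.log(err === undefined);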


@@ -1,11 +1,14 @@
'use strict'; // eslint-disable-line strict
import { Logger } from 'werelogs';
import utf8 from 'utf8';
import getCanonicalizedAmzHeaders from './getCanonicalizedAmzHeaders';
import getCanonicalizedResource from './getCanonicalizedResource';
const utf8 = require('utf8');
const getCanonicalizedAmzHeaders = require('./getCanonicalizedAmzHeaders');
const getCanonicalizedResource = require('./getCanonicalizedResource');
function constructStringToSign(request, data, log, clientType) {
export default function constructStringToSign(
request: any,
data: { [key: string]: string },
log: Logger,
clientType?: any
) {
/*
Build signature per AWS requirements:
StringToSign = HTTP-Verb + '\n' +
@@ -42,5 +45,3 @@ function constructStringToSign(request, data, log, clientType) {
+ getCanonicalizedResource(request, clientType);
return utf8.encode(stringToSign);
}
module.exports = constructStringToSign;


@@ -1,14 +1,12 @@
'use strict'; // eslint-disable-line strict
function getCanonicalizedAmzHeaders(headers, clientType) {
export default function getCanonicalizedAmzHeaders(headers: Headers, clientType: string) {
/*
Iterate through headers and pull any headers that are x-amz headers.
Need to include 'x-amz-date' here even though AWS docs
ambiguous on this.
*/
const filterFn = clientType === 'GCP' ?
val => val.substr(0, 7) === 'x-goog-' :
val => val.substr(0, 6) === 'x-amz-';
(val: string) => val.substr(0, 7) === 'x-goog-' :
(val: string) => val.substr(0, 6) === 'x-amz-';
const amzHeaders = Object.keys(headers)
.filter(filterFn)
.map(val => [val.trim(), headers[val].trim()]);
@@ -43,5 +41,3 @@ function getCanonicalizedAmzHeaders(headers, clientType) {
`${headerStr}${current[0]}:${current[1]}\n`,
'');
}
module.exports = getCanonicalizedAmzHeaders;


@@ -1,6 +1,4 @@
'use strict'; // eslint-disable-line strict
const url = require('url');
import * as url from 'url';
const gcpSubresources = [
'acl',
@@ -41,7 +39,7 @@ const awsSubresources = [
'website',
];
function getCanonicalizedResource(request, clientType) {
export default function getCanonicalizedResource(request: any, clientType: string) {
/*
This variable is used to determine whether to insert
a '?' or '&'. Once a query parameter is added to the resourceString,
@@ -117,5 +115,3 @@ function getCanonicalizedResource(request, clientType) {
}
return resourceString;
}
module.exports = getCanonicalizedResource;


@@ -1,12 +1,11 @@
'use strict'; // eslint-disable-line strict
import { Logger } from 'werelogs';
import errors from '../../errors';
import * as constants from '../../constants';
import constructStringToSign from './constructStringToSign';
import checkRequestExpiry from './checkRequestExpiry';
import algoCheck from './algoCheck';
const errors = require('../../errors');
const constants = require('../../constants');
const constructStringToSign = require('./constructStringToSign');
const checkRequestExpiry = require('./checkRequestExpiry');
const algoCheck = require('./algoCheck');
function check(request, log, data) {
export function check(request: any, log: Logger, data: { [key: string]: string }) {
log.trace('running header auth check');
const headers = request.headers;
@@ -52,6 +51,7 @@ function check(request, log, data) {
log.trace('invalid authorization header', { authInfo });
return { err: errors.MissingSecurityHeader };
}
// @ts-ignore
log.addDefaultFields({ accessKey });
const signatureFromRequest = authInfo.substring(semicolonIndex + 1).trim();
@@ -80,5 +80,3 @@ function check(request, log, data) {
},
};
}
module.exports = { check };


@@ -1,11 +1,10 @@
'use strict'; // eslint-disable-line strict
import { Logger } from 'werelogs';
import errors from '../../errors';
import * as constants from '../../constants';
import algoCheck from './algoCheck';
import constructStringToSign from './constructStringToSign';
const errors = require('../../errors');
const constants = require('../../constants');
const algoCheck = require('./algoCheck');
const constructStringToSign = require('./constructStringToSign');
function check(request, log, data) {
export function check(request: any, log: Logger, data: { [key: string]: string }) {
log.trace('running query auth check');
if (request.method === 'POST') {
log.debug('query string auth not supported for post requests');
@@ -51,6 +50,7 @@ function check(request, log, data) {
return { err: errors.RequestTimeTooSkewed };
}
const accessKey = data.AWSAccessKeyId;
// @ts-ignore
log.addDefaultFields({ accessKey });
const signatureFromRequest = decodeURIComponent(data.Signature);
@@ -82,5 +82,3 @@ function check(request, log, data) {
},
};
}
module.exports = { check };


@@ -1,11 +0,0 @@
'use strict'; // eslint-disable-line strict
const headerAuthCheck = require('./headerAuthCheck');
const queryAuthCheck = require('./queryAuthCheck');
const authV4 = {
header: headerAuthCheck,
query: queryAuthCheck,
};
module.exports = authV4;

lib/auth/v4/authV4.ts

@@ -0,0 +1,2 @@
export * as header from './headerAuthCheck';
export * as query from './queryAuthCheck';


@@ -1,73 +0,0 @@
'use strict'; // eslint-disable-line strict
/*
AWS's URI encoding rules:
URI encode every byte. Uri-Encode() must enforce the following rules:
URI encode every byte except the unreserved characters:
'A'-'Z', 'a'-'z', '0'-'9', '-', '.', '_', and '~'.
The space character is a reserved character and must be
encoded as "%20" (and not as "+").
Each Uri-encoded byte is formed by a '%' and the two-digit
hexadecimal value of the byte.
Letters in the hexadecimal value must be uppercase, for example "%1A".
Encode the forward slash character, '/',
everywhere except in the object key name.
For example, if the object key name is photos/Jan/sample.jpg,
the forward slash in the key name is not encoded.
See http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
*/
// converts utf8 character to hex and pads "%" before every two hex digits
function _toHexUTF8(char) {
const hexRep = Buffer.from(char, 'utf8').toString('hex').toUpperCase();
let res = '';
hexRep.split('').forEach((v, n) => {
// pad % before every 2 hex digits
if (n % 2 === 0) {
res += '%';
}
res += v;
});
return res;
}
function awsURIencode(input, encodeSlash, noEncodeStar) {
const encSlash = encodeSlash === undefined ? true : encodeSlash;
let encoded = '';
for (let i = 0; i < input.length; i++) {
let ch = input.charAt(i);
if ((ch >= 'A' && ch <= 'Z') ||
(ch >= 'a' && ch <= 'z') ||
(ch >= '0' && ch <= '9') ||
ch === '_' || ch === '-' ||
ch === '~' || ch === '.') {
encoded = encoded.concat(ch);
} else if (ch === ' ') {
encoded = encoded.concat('%20');
} else if (ch === '/') {
encoded = encoded.concat(encSlash ? '%2F' : ch);
} else if (ch === '*') {
encoded = encoded.concat(noEncodeStar ? '*' : '%2A');
} else {
if (ch >= '\uD800' && ch <= '\uDBFF') {
// If this character is a high surrogate peek the next character
// and join it with this one if the next character is a low
// surrogate.
// Otherwise the encoded URI will contain the two surrogates as
// two distinct UTF-8 sequences which is not valid UTF-8.
if (i + 1 < input.length) {
const ch2 = input.charAt(i + 1);
if (ch2 >= '\uDC00' && ch2 <= '\uDFFF') {
i++;
ch += ch2;
}
}
}
encoded = encoded.concat(_toHexUTF8(ch));
}
}
return encoded;
}
module.exports = awsURIencode;


@@ -0,0 +1,78 @@
/*
AWS's URI encoding rules:
URI encode every byte. Uri-Encode() must enforce the following rules:
URI encode every byte except the unreserved characters:
'A'-'Z', 'a'-'z', '0'-'9', '-', '.', '_', and '~'.
The space character is a reserved character and must be
encoded as "%20" (and not as "+").
Each Uri-encoded byte is formed by a '%' and the two-digit
hexadecimal value of the byte.
Letters in the hexadecimal value must be uppercase, for example "%1A".
Encode the forward slash character, '/',
everywhere except in the object key name.
For example, if the object key name is photos/Jan/sample.jpg,
the forward slash in the key name is not encoded.
See http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
*/
// converts utf8 character to hex and pads "%" before every two hex digits
function _toHexUTF8(char: string) {
const hexRep = Buffer.from(char, 'utf8').toString('hex').toUpperCase();
let res = '';
hexRep.split('').forEach((v, n) => {
// pad % before every 2 hex digits
if (n % 2 === 0) {
res += '%';
}
res += v;
});
return res;
}
export default function awsURIencode(
input: string,
encodeSlash?: boolean,
noEncodeStar?: boolean
) {
/**
* Duplicate query params are not supported by AWS S3 APIs. These params
* are parsed as Arrays by the Node.js HTTP parser, which breaks this method
*/
if (typeof input !== 'string') {
return '';
}
// precalc slash and star based on configs
const slash = encodeSlash === undefined || encodeSlash ? '%2F' : '/';
const star = noEncodeStar !== undefined && noEncodeStar ? '*' : '%2A';
const encoded: string[] = [];
const charArray = Array.from(input);
for (const ch of charArray) {
switch (true) {
case ch >= 'A' && ch <= 'Z':
case ch >= 'a' && ch <= 'z':
case ch >= '0' && ch <= '9':
case ch === '-':
case ch === '_':
case ch === '~':
case ch === '.':
encoded.push(ch);
break;
case ch === '/':
encoded.push(slash);
break;
case ch === '*':
encoded.push(star);
break;
case ch === ' ':
encoded.push('%20');
break;
default:
encoded.push(_toHexUTF8(ch));
break;
}
}
return encoded.join('');
}
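A few sample outputs of the rewritten encoder, matching the AWS rules quoted above:

import awsURIencode from './awsURIencode';

console.log(awsURIencode('photos/Jan/sample.jpg', false)); // photos/Jan/sample.jpg
console.log(awsURIencode('photos/Jan/sample.jpg'));        // photos%2FJan%2Fsample.jpg
console.log(awsURIencode('a b*'));                         // a%20b%2A
console.log(awsURIencode('a b*', true, true));             // a%20b*
console.log(awsURIencode('é'));                            // %C3%A9 (UTF-8 bytes, uppercase hex)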


@@ -1,17 +1,33 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
const createCanonicalRequest = require('./createCanonicalRequest');
import * as crypto from 'crypto';
import { Logger } from 'werelogs';
import createCanonicalRequest from './createCanonicalRequest';
/**
* constructStringToSign - creates V4 stringToSign
* @param {object} params - params object
* @returns {string} - stringToSign
*/
function constructStringToSign(params) {
const { request, signedHeaders, payloadChecksum, credentialScope, timestamp,
query, log, proxyPath } = params;
export default function constructStringToSign(params: {
request: any;
signedHeaders: any;
payloadChecksum: any;
credentialScope: string;
timestamp: string;
query: { [key: string]: string };
log?: Logger;
proxyPath?: string;
awsService: string;
}): string | Error {
const {
request,
signedHeaders,
payloadChecksum,
credentialScope,
timestamp,
query,
log,
proxyPath,
} = params;
const path = proxyPath || request.path;
const canonicalReqResult = createCanonicalRequest({
@@ -24,6 +40,8 @@ function constructStringToSign(params) {
service: params.awsService,
});
// TODO Why that line?
// @ts-ignore
if (canonicalReqResult instanceof Error) {
if (log) {
log.error('error creating canonicalRequest');
@@ -38,15 +56,5 @@ function constructStringToSign(params) {
.digest('hex');
const stringToSign = `AWS4-HMAC-SHA256\n${timestamp}\n` +
`${credentialScope}\n${canonicalHex}`;
console.log('!!!!!!!!!!!!!!!!!!!');
console.log(stringToSign);
console.log('!!!!!!!!!!!!!!!!!!!');
console.log(request);
console.log(canonicalReqResult);
console.log('!!!!!!!!!!!!!!!!!!!');
return stringToSign;
}
module.exports = constructStringToSign;


@@ -1,27 +1,33 @@
'use strict'; // eslint-disable-line strict
const awsURIencode = require('./awsURIencode');
const crypto = require('crypto');
const queryString = require('querystring');
import * as crypto from 'crypto';
import * as queryString from 'querystring';
import awsURIencode from './awsURIencode';
/**
* createCanonicalRequest - creates V4 canonical request
* @param {object} params - contains pHttpVerb (request type),
* @param params - contains pHttpVerb (request type),
* pResource (parsed from URL), pQuery (request query),
* pHeaders (request headers), pSignedHeaders (signed headers from request),
* payloadChecksum (from request)
* @returns {string} - canonicalRequest
* @returns - canonicalRequest
*/
function createCanonicalRequest(params) {
export default function createCanonicalRequest(
params: {
pHttpVerb: string;
pResource: string;
pQuery: { [key: string]: string };
pHeaders: any;
pSignedHeaders: any;
service: string;
payloadChecksum: string;
}
) {
const pHttpVerb = params.pHttpVerb;
const pResource = params.pResource;
const pQuery = params.pQuery;
const pHeaders = params.pHeaders;
const pSignedHeaders = params.pSignedHeaders;
const service = params.service;
let payloadChecksum = params.payloadChecksum;
if (!payloadChecksum) {
if (pHttpVerb === 'GET') {
payloadChecksum = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b' +
@@ -34,7 +40,7 @@ function createCanonicalRequest(params) {
if (/aws-sdk-java\/[0-9.]+/.test(pHeaders['user-agent'])) {
notEncodeStar = true;
}
let payload = queryString.stringify(pQuery, null, null, {
let payload = queryString.stringify(pQuery, undefined, undefined, {
encodeURIComponent: input => awsURIencode(input, true,
notEncodeStar),
});
@@ -61,11 +67,11 @@ function createCanonicalRequest(params) {
// signed headers
const signedHeadersList = pSignedHeaders.split(';');
signedHeadersList.sort((a, b) => a.localeCompare(b));
signedHeadersList.sort((a: any, b: any) => a.localeCompare(b));
const signedHeaders = signedHeadersList.join(';');
// canonical headers
const canonicalHeadersList = signedHeadersList.map(signedHeader => {
const canonicalHeadersList = signedHeadersList.map((signedHeader: any) => {
if (pHeaders[signedHeader] !== undefined) {
const trimmedHeader = pHeaders[signedHeader]
.trim().replace(/\s+/g, ' ');
@@ -87,5 +93,3 @@ function createCanonicalRequest(params) {
`${signedHeaders}\n${payloadChecksum}`;
return canonicalRequest;
}
module.exports = createCanonicalRequest;
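A sketch of feeding the builder directly; all values are illustrative, and the payload checksum is the SHA-256 of an empty body, the same constant the GET fallback above uses:

import createCanonicalRequest from './createCanonicalRequest';

const canonicalRequest = createCanonicalRequest({
    pHttpVerb: 'GET',
    pResource: '/test-bucket',
    pQuery: { 'list-type': '2' },
    pHeaders: {
        host: 's3.example.com',
        'x-amz-date': '20240101T000000Z',
    },
    pSignedHeaders: 'host;x-amz-date',
    service: 's3',
    payloadChecksum:
        'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b' +
        '934ca495991b7852b855',
});
// One component per line: verb, resource, query string,
// canonical headers, signed headers list, payload checksum.
console.log(canonicalRequest);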


@@ -1,27 +1,32 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../../lib/errors');
const constants = require('../../constants');
const constructStringToSign = require('./constructStringToSign');
const checkTimeSkew = require('./timeUtils').checkTimeSkew;
const convertUTCtoISO8601 = require('./timeUtils').convertUTCtoISO8601;
const convertAmzTimeToMs = require('./timeUtils').convertAmzTimeToMs;
const extractAuthItems = require('./validateInputs').extractAuthItems;
const validateCredentials = require('./validateInputs').validateCredentials;
const areSignedHeadersComplete =
require('./validateInputs').areSignedHeadersComplete;
import { Logger } from 'werelogs';
import errors from '../../../lib/errors';
import * as constants from '../../constants';
import constructStringToSign from './constructStringToSign';
import {
checkTimeSkew,
convertUTCtoISO8601,
convertAmzTimeToMs,
} from './timeUtils';
import {
extractAuthItems,
validateCredentials,
areSignedHeadersComplete,
} from './validateInputs';
/**
* V4 header auth check
* @param {object} request - HTTP request object
* @param {object} log - logging object
* @param {object} data - Parameters from queryString parsing or body of
* @param request - HTTP request object
* @param log - logging object
* @param data - Parameters from queryString parsing or body of
* POST request
* @param {string} awsService - Aws service ('iam' or 's3')
* @return {callback} calls callback
* @param awsService - Aws service ('iam' or 's3')
*/
function check(request, log, data, awsService) {
export function check(
request: any,
log: Logger,
data: { [key: string]: string },
awsService: string
) {
log.trace('running header auth check');
const token = request.headers['x-amz-security-token'];
@@ -62,16 +67,16 @@ function check(request, log, data, awsService) {
log.trace('authorization header from request', { authHeader });
const signatureFromRequest = authHeaderItems.signatureFromRequest;
const credentialsArr = authHeaderItems.credentialsArr;
const signedHeaders = authHeaderItems.signedHeaders;
const signatureFromRequest = authHeaderItems.signatureFromRequest!;
const credentialsArr = authHeaderItems.credentialsArr!;
const signedHeaders = authHeaderItems.signedHeaders!;
if (!areSignedHeadersComplete(signedHeaders, request.headers)) {
log.debug('signedHeaders are incomplete', { signedHeaders });
return { err: errors.AccessDenied };
}
let timestamp;
let timestamp: string | undefined;
// check request timestamp
const xAmzDate = request.headers['x-amz-date'];
if (xAmzDate) {
@@ -127,17 +132,6 @@ function check(request, log, data, awsService) {
return { err: errors.RequestTimeTooSkewed };
}
let proxyPath = null;
if (request.headers.proxy_path) {
try {
proxyPath = decodeURIComponent(request.headers.proxy_path);
} catch (err) {
log.debug('invalid proxy_path header', { proxyPath, err });
return { err: errors.InvalidArgument.customizeDescription(
'invalid proxy_path header') };
}
}
const stringToSign = constructStringToSign({
log,
request,
@@ -147,7 +141,6 @@ function check(request, log, data, awsService) {
timestamp,
payloadChecksum,
awsService: service,
proxyPath,
});
log.trace('constructed stringToSign', { stringToSign });
if (stringToSign instanceof Error) {
@@ -178,5 +171,3 @@ function check(request, log, data, awsService) {
},
};
}
module.exports = { check };


@@ -1,24 +1,18 @@
'use strict'; // eslint-disable-line strict
const constants = require('../../constants');
const errors = require('../../errors');
const constructStringToSign = require('./constructStringToSign');
const checkTimeSkew = require('./timeUtils').checkTimeSkew;
const convertAmzTimeToMs = require('./timeUtils').convertAmzTimeToMs;
const validateCredentials = require('./validateInputs').validateCredentials;
const extractQueryParams = require('./validateInputs').extractQueryParams;
const areSignedHeadersComplete =
require('./validateInputs').areSignedHeadersComplete;
import { Logger } from 'werelogs';
import * as constants from '../../constants';
import errors from '../../errors';
import constructStringToSign from './constructStringToSign';
import { checkTimeSkew, convertAmzTimeToMs } from './timeUtils';
import { validateCredentials, extractQueryParams } from './validateInputs';
import { areSignedHeadersComplete } from './validateInputs';
/**
* V4 query auth check
* @param {object} request - HTTP request object
* @param {object} log - logging object
* @param {object} data - Contains authentication params (GET or POST data)
* @return {callback} calls callback
* @param request - HTTP request object
* @param log - logging object
* @param data - Contains authentication params (GET or POST data)
*/
function check(request, log, data) {
export function check(request: any, log: Logger, data: { [key: string]: string }) {
const authParams = extractQueryParams(data, log);
if (Object.keys(authParams).length !== 5) {
@@ -33,11 +27,11 @@ function check(request, log, data) {
return { err: errors.InvalidToken };
}
const signedHeaders = authParams.signedHeaders;
const signatureFromRequest = authParams.signatureFromRequest;
const timestamp = authParams.timestamp;
const expiry = authParams.expiry;
const credential = authParams.credential;
const signedHeaders = authParams.signedHeaders!;
const signatureFromRequest = authParams.signatureFromRequest!;
const timestamp = authParams.timestamp!;
const expiry = authParams.expiry!;
const credential = authParams.credential!;
if (!areSignedHeadersComplete(signedHeaders, request.headers)) {
log.debug('signedHeaders are incomplete', { signedHeaders });
@@ -62,17 +56,6 @@ function check(request, log, data) {
return { err: errors.RequestTimeTooSkewed };
}
let proxyPath = null;
if (request.headers.proxy_path) {
try {
proxyPath = decodeURIComponent(request.headers.proxy_path);
} catch (err) {
log.debug('invalid proxy_path header', { proxyPath });
return { err: errors.InvalidArgument.customizeDescription(
'invalid proxy_path header') };
}
}
// In query v4 auth, the canonical request needs
// to include the query params OTHER THAN
// the signature so create a
@@ -98,7 +81,6 @@ function check(request, log, data) {
credentialScope:
`${scopeDate}/${region}/${service}/${requestType}`,
awsService: service,
proxyPath,
});
if (stringToSign instanceof Error) {
return { err: stringToSign };
@@ -122,5 +104,3 @@ function check(request, log, data) {
},
};
}
module.exports = { check };


@@ -1,33 +1,67 @@
const { Transform } = require('stream');
import { Transform } from 'stream';
import async from 'async';
import errors from '../../../errors';
import { Logger } from 'werelogs';
import Vault, { AuthV4RequestParams } from '../../Vault';
import { Callback } from '../../in_memory/types';
const async = require('async');
const errors = require('../../../errors');
import constructChunkStringToSign from './constructChunkStringToSign';
const constructChunkStringToSign = require('./constructChunkStringToSign');
export type TransformParams = {
accessKey: string;
signatureFromRequest: string;
region: string;
scopeDate: string;
timestamp: string;
credentialScope: string;
};
/**
* This class is designed to handle the chunks sent in a streaming
* v4 Auth request
*/
class V4Transform extends Transform {
export default class V4Transform extends Transform {
log: Logger;
cb: Callback;
accessKey: string;
region: string;
scopeDate: string;
timestamp: string;
credentialScope: string;
lastSignature: string;
currentSignature?: string;
haveMetadata: boolean;
seekingDataSize: number;
currentData?: any;
dataCursor: number;
currentMetadata: any[];
lastPieceDone: boolean;
lastChunk: boolean;
vault: Vault;
/**
* @constructor
* @param {object} streamingV4Params - info for chunk authentication
* @param {string} streamingV4Params.accessKey - requester's accessKey
* @param {string} streamingV4Params.signatureFromRequest - signature
* @param streamingV4Params - info for chunk authentication
* @param streamingV4Params.accessKey - requester's accessKey
* @param streamingV4Params.signatureFromRequest - signature
* sent with headers
* @param {string} streamingV4Params.region - region sent with auth header
* @param {string} streamingV4Params.scopeDate - date sent with auth header
* @param {string} streamingV4Params.timestamp - date parsed from headers
* @param streamingV4Params.region - region sent with auth header
* @param streamingV4Params.scopeDate - date sent with auth header
* @param streamingV4Params.timestamp - date parsed from headers
* in ISO 8601 format: YYYYMMDDTHHMMSSZ
* @param {string} streamingV4Params.credentialScope - items from auth
* @param streamingV4Params.credentialScope - items from auth
* header plus the string 'aws4_request' joined with '/':
* timestamp/region/aws-service/aws4_request
* @param {object} vault - Vault instance passed from CloudServer
* @param {object} log - logger object
* @param {function} cb - callback to api
* @param vault - Vault instance passed from CloudServer
* @param log - logger object
* @param cb - callback to api
*/
constructor(streamingV4Params, vault, log, cb) {
constructor(
streamingV4Params: TransformParams,
vault: Vault,
log: Logger,
cb: Callback,
) {
const { accessKey, signatureFromRequest, region, scopeDate, timestamp,
credentialScope } = streamingV4Params;
super({});
@@ -55,8 +89,8 @@ class V4Transform extends Transform {
/**
* This function will parse the metadata portion of the chunk
* @param {Buffer} remainingChunk - chunk sent from _transform
* @return {object} response - if error, will return 'err' key with
* @param remainingChunk - chunk sent from _transform
* @return response - if error, will return 'err' key with
* arsenal error value.
* if incomplete metadata, will return 'completeMetadata' key with
* value false
@@ -64,7 +98,7 @@ class V4Transform extends Transform {
* value true and the key 'unparsedChunk' with the remaining chunk without
* the parsed metadata piece
*/
_parseMetadata(remainingChunk) {
_parseMetadata(remainingChunk: Buffer) {
let remainingPlusStoredMetadata = remainingChunk;
// have metadata pieces so need to add to the front of
// remainingChunk
@@ -103,9 +137,8 @@ class V4Transform extends Transform {
'metadata format');
return { err: errors.InvalidArgument };
}
let dataSize = splitMeta[0];
// chunk-size is sent in hex
dataSize = Number.parseInt(dataSize, 16);
const dataSize = Number.parseInt(splitMeta[0], 16);
if (Number.isNaN(dataSize)) {
this.log.trace('chunk body did not contain valid size');
return { err: errors.InvalidArgument };
@@ -139,28 +172,30 @@ class V4Transform extends Transform {
/**
* Build the stringToSign and authenticate the chunk
* @param {Buffer} dataToSend - chunk sent from _transform or null
* @param dataToSend - chunk sent from _transform or null
* if last chunk without data
* @param {function} done - callback to _transform
* @return {function} executes callback with err if applicable
* @param done - callback to _transform
* @return executes callback with err if applicable
*/
_authenticate(dataToSend, done) {
_authenticate(dataToSend: Buffer | null, done: Callback) {
// use prior sig to construct new string to sign
const stringToSign = constructChunkStringToSign(this.timestamp,
this.credentialScope, this.lastSignature, dataToSend);
this.credentialScope, this.lastSignature, dataToSend ?? undefined);
this.log.trace('constructed chunk string to sign',
{ stringToSign });
// once used prior sig to construct string to sign, reassign
// lastSignature to current signature
this.lastSignature = this.currentSignature;
const vaultParams = {
this.lastSignature = this.currentSignature!;
const vaultParams: AuthV4RequestParams = {
log: this.log,
data: {
accessKey: this.accessKey,
signatureFromRequest: this.currentSignature,
signatureFromRequest: this.currentSignature!,
region: this.region,
scopeDate: this.scopeDate,
stringToSign,
// TODO FIXME This can not work
// @ts-expect-errors
timestamp: this.timestamp,
credentialScope: this.credentialScope,
},
@@ -181,12 +216,12 @@ class V4Transform extends Transform {
* use the metadata to authenticate with vault and send the
* data on to be stored if authentication passes
*
* @param {Buffer} chunk - chunk from request body
* @param {string} encoding - Data encoding
* @param {function} callback - Callback(err, justDataChunk, encoding)
* @return {function }executes callback with err if applicable
* @param chunk - chunk from request body
* @param _encoding - data encoding (unused)
* @param callback - Callback(err, justDataChunk, encoding)
* @return executes callback with err if applicable
*/
_transform(chunk, encoding, callback) {
_transform(chunk: Buffer, _encoding: string, callback: Callback) {
// 'chunk' here is the node streaming chunk
// transfer-encoding chunks should be of the format:
// string(IntHexBase(chunk-size)) + ";chunk-signature=" +
@@ -223,6 +258,8 @@ class V4Transform extends Transform {
}
// have metadata so reset unparsedChunk to remaining
// without metadata piece
// TODO Is that okay?
// @ts-expect-errors
unparsedChunk = parsedMetadataResults.unparsedChunk;
}
if (this.lastChunk) {
@@ -269,13 +306,11 @@ class V4Transform extends Transform {
// final callback
err => {
if (err) {
return this.cb(err);
return this.cb(err as any);
}
// get next chunk
return callback();
}
},
);
}
}
module.exports = V4Transform;
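
To make the chunk framing concrete, here is a minimal sketch (not part of the diff; the helper name is hypothetical) of the transfer-encoding format `_transform` expects: hex chunk size, ";chunk-signature=", the chunk signature, CRLF, the data, CRLF.

// Sketch only: frame one streaming-v4 chunk the way _transform parses it.
function frameChunk(data: Buffer, chunkSignature: string): Buffer {
    const header = `${data.length.toString(16)};chunk-signature=${chunkSignature}\r\n`;
    return Buffer.concat([Buffer.from(header), data, Buffer.from('\r\n')]);
}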


@@ -1,32 +0,0 @@
const crypto = require('crypto');
const constants = require('../../../constants');
/**
* Constructs stringToSign for chunk
* @param {string} timestamp - date parsed from headers
* in ISO 8601 format: YYYYMMDDTHHMMSSZ
* @param {string} credentialScope - items from auth
* header plus the string 'aws4_request' joined with '/':
* timestamp/region/aws-service/aws4_request
* @param {string} lastSignature - signature from headers or prior chunk
* @param {string} justDataChunk - data portion of chunk
* @returns {string} stringToSign
*/
function constructChunkStringToSign(timestamp,
credentialScope, lastSignature, justDataChunk) {
let currentChunkHash;
// for last chunk, there will be no data, so use emptyStringHash
if (!justDataChunk) {
currentChunkHash = constants.emptyStringHash;
} else {
currentChunkHash = crypto.createHash('sha256');
currentChunkHash = currentChunkHash
.update(justDataChunk, 'binary').digest('hex');
}
return `AWS4-HMAC-SHA256-PAYLOAD\n${timestamp}\n` +
`${credentialScope}\n${lastSignature}\n` +
`${constants.emptyStringHash}\n${currentChunkHash}`;
}
module.exports = constructChunkStringToSign;


@@ -0,0 +1,35 @@
import * as crypto from 'crypto';
import * as constants from '../../../constants';
/**
* Constructs stringToSign for chunk
* @param timestamp - date parsed from headers
* in ISO 8601 format: YYYYMMDDTHHMMSSZ
* @param credentialScope - items from auth
* header plus the string 'aws4_request' joined with '/':
* timestamp/region/aws-service/aws4_request
* @param lastSignature - signature from headers or prior chunk
* @param justDataChunk - data portion of chunk
* @returns stringToSign
*/
export default function constructChunkStringToSign(
timestamp: string,
credentialScope: string,
lastSignature: string,
justDataChunk?: Buffer | string,
) {
let currentChunkHash: string;
// for last chunk, there will be no data, so use emptyStringHash
if (!justDataChunk) {
currentChunkHash = constants.emptyStringHash;
} else {
const hash = crypto.createHash('sha256');
const temp = justDataChunk instanceof Buffer
? hash.update(justDataChunk)
: hash.update(justDataChunk, 'binary');
currentChunkHash = temp.digest('hex');
}
return `AWS4-HMAC-SHA256-PAYLOAD\n${timestamp}\n` +
`${credentialScope}\n${lastSignature}\n` +
`${constants.emptyStringHash}\n${currentChunkHash}`;
}
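
As a usage sketch with hypothetical placeholder values, signing the final (empty) chunk of a stream would look like this; note that emptyStringHash then appears twice in the result, once as the fixed hash field and once as the hash of the empty payload.

// All values below are illustrative placeholders.
const previousSignature = '0'.repeat(64); // signature of the prior chunk
const stringToSign = constructChunkStringToSign(
    '20160202T220410Z',
    '20160202/us-east-1/s3/aws4_request',
    previousSignature,
    undefined, // last chunk carries no data
);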


@@ -1,12 +1,11 @@
'use strict'; // eslint-disable-line strict
import { Logger } from 'werelogs';
/**
* Convert timestamp to milliseconds since Unix Epoch
* @param {string} timestamp of ISO8601Timestamp format without
* @param timestamp of ISO8601Timestamp format without
* dashes or colons, e.g. 20160202T220410Z
* @return {number} number of milliseconds since Unix Epoch
*/
function convertAmzTimeToMs(timestamp) {
export function convertAmzTimeToMs(timestamp: string) {
const arr = timestamp.split('');
// Convert to YYYY-MM-DDTHH:mm:ss.sssZ
const ISO8601time = `${arr.slice(0, 4).join('')}-${arr[4]}${arr[5]}` +
@@ -15,13 +14,12 @@ function convertAmzTimeToMs(timestamp) {
return Date.parse(ISO8601time);
}
/**
* Convert UTC timestamp to ISO 8601 timestamp
* @param {string} timestamp of UTC form: Fri, 10 Feb 2012 21:34:55 GMT
* @return {string} ISO8601 timestamp of form: YYYYMMDDTHHMMSSZ
* @param timestamp of UTC form: Fri, 10 Feb 2012 21:34:55 GMT
* @return ISO8601 timestamp of form: YYYYMMDDTHHMMSSZ
*/
function convertUTCtoISO8601(timestamp) {
export function convertUTCtoISO8601(timestamp: string | number) {
// convert to ISO string: YYYY-MM-DDTHH:mm:ss.sssZ.
const converted = new Date(timestamp).toISOString();
// Remove "-"s and "."s and milliseconds
@@ -30,13 +28,13 @@ function convertUTCtoISO8601(timestamp) {
/**
* Check whether timestamp predates request or is too old
* @param {string} timestamp of ISO8601Timestamp format without
* @param timestamp of ISO8601Timestamp format without
* dashes or colons, e.g. 20160202T220410Z
* @param {number} expiry - number of seconds signature should be valid
* @param {object} log - log for request
* @return {boolean} true if there is a time problem
* @param expiry - number of seconds signature should be valid
* @param log - log for request
* @return true if there is a time problem
*/
function checkTimeSkew(timestamp, expiry, log) {
export function checkTimeSkew(timestamp: string, expiry: number, log: Logger) {
const currentTime = Date.now();
const fifteenMinutes = (15 * 60 * 1000);
const parsedTimestamp = convertAmzTimeToMs(timestamp);
@@ -56,5 +54,3 @@ function checkTimeSkew(timestamp, expiry, log) {
}
return false;
}
module.exports = { convertAmzTimeToMs, convertUTCtoISO8601, checkTimeSkew };
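
A quick sketch of how these helpers relate (import path assumed):

import { convertAmzTimeToMs, convertUTCtoISO8601 } from './timeUtils'; // path assumed

// '20160202T220410Z' is rebuilt into '2016-02-02T22:04:10Z' before parsing
const ms = convertAmzTimeToMs('20160202T220410Z'); // Date.parse('2016-02-02T22:04:10Z')
// and the UTC form is compacted back to YYYYMMDDTHHMMSSZ
const compact = convertUTCtoISO8601('Fri, 10 Feb 2012 21:34:55 GMT'); // '20120210T213455Z'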


@@ -1,17 +1,19 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../../lib/errors');
import { Logger } from 'werelogs';
import errors from '../../../lib/errors';
/**
* Validate Credentials
* @param {array} credentials - contains accessKey, scopeDate,
* @param credentials - contains accessKey, scopeDate,
* region, service, requestType
* @param {string} timestamp - timestamp from request in
* @param timestamp - timestamp from request in
* the format of ISO 8601: YYYYMMDDTHHMMSSZ
* @param {object} log - logging object
* @return {boolean} true if credentials are correct format, false if not
* @param log - logging object
*/
function validateCredentials(credentials, timestamp, log) {
export function validateCredentials(
credentials: [string, string, string, string, string],
timestamp: string,
log: Logger
): Error | {} {
if (!Array.isArray(credentials) || credentials.length !== 5) {
log.warn('credentials in improper format', { credentials });
return errors.InvalidArgument;
@@ -58,12 +60,21 @@ function validateCredentials(credentials, timestamp, log) {
/**
* Extract and validate components from query object
* @param {object} queryObj - query object from request
* @param {object} log - logging object
* @return {object} object containing extracted query params for authV4
* @param queryObj - query object from request
* @param log - logging object
* @return object containing extracted query params for authV4
*/
function extractQueryParams(queryObj, log) {
const authParams = {};
export function extractQueryParams(
queryObj: { [key: string]: string | undefined },
log: Logger
) {
const authParams: {
signedHeaders?: string;
signatureFromRequest?: string;
timestamp?: string;
expiry?: number;
credential?: [string, string, string, string, string];
} = {};
// Do not need the algorithm sent back
if (queryObj['X-Amz-Algorithm'] !== 'AWS4-HMAC-SHA256') {
@@ -99,7 +110,7 @@ function extractQueryParams(queryObj, log) {
return authParams;
}
const expiry = Number.parseInt(queryObj['X-Amz-Expires'], 10);
const expiry = Number.parseInt(queryObj['X-Amz-Expires'] ?? 'nope', 10);
const sevenDays = 604800;
if (expiry && (expiry > 0 && expiry <= sevenDays)) {
authParams.expiry = expiry;
@@ -110,6 +121,7 @@ function extractQueryParams(queryObj, log) {
const credential = queryObj['X-Amz-Credential'];
if (credential && credential.length > 28 && credential.indexOf('/') > -1) {
// @ts-ignore
authParams.credential = credential.split('/');
} else {
log.warn('invalid credential param', { credential });
@@ -121,14 +133,17 @@ function extractQueryParams(queryObj, log) {
/**
* Extract and validate components from auth header
* @param {string} authHeader - authorization header from request
* @param {object} log - logging object
* @return {object} object containing extracted auth header items for authV4
* @param authHeader - authorization header from request
* @param log - logging object
* @return object containing extracted auth header items for authV4
*/
function extractAuthItems(authHeader, log) {
const authItems = {};
const authArray = authHeader
.replace('AWS4-HMAC-SHA256 ', '').split(',');
export function extractAuthItems(authHeader: string, log: Logger) {
const authItems: {
credentialsArr?: [string, string, string, string, string];
signedHeaders?: string;
signatureFromRequest?: string;
} = {};
const authArray = authHeader.replace('AWS4-HMAC-SHA256 ', '').split(',');
if (authArray.length < 3) {
return authItems;
@@ -138,8 +153,12 @@ function extractAuthItems(authHeader, log) {
const signedHeadersStr = authArray[1];
const signatureStr = authArray[2];
log.trace('credentials from request', { credentialStr });
if (credentialStr && credentialStr.trim().startsWith('Credential=')
&& credentialStr.indexOf('/') > -1) {
if (
credentialStr &&
credentialStr.trim().startsWith('Credential=') &&
credentialStr.indexOf('/') > -1
) {
// @ts-ignore
authItems.credentialsArr = credentialStr
.trim().replace('Credential=', '').split('/');
} else {
@@ -166,11 +185,11 @@ function extractAuthItems(authHeader, log) {
/**
* Checks whether the signed headers include the host header
* and all x-amz- and x-scal- headers in request
* @param {string} signedHeaders - signed headers sent with request
* @param {object} allHeaders - request.headers
* @return {boolean} true if all x-amz-headers included and false if not
* @param signedHeaders - signed headers sent with request
* @param allHeaders - request.headers
* @return true if all x-amz-headers included and false if not
*/
function areSignedHeadersComplete(signedHeaders, allHeaders) {
export function areSignedHeadersComplete(signedHeaders: string, allHeaders: Headers) {
const signedHeadersList = signedHeaders.split(';');
if (signedHeadersList.indexOf('host') === -1) {
return false;
@@ -185,6 +204,3 @@ function areSignedHeadersComplete(signedHeaders, allHeaders) {
}
return true;
}
module.exports = { validateCredentials, extractQueryParams,
areSignedHeadersComplete, extractAuthItems };
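
For illustration, a sketch of extractAuthItems on a well-formed v4 Authorization header (the credential and signature values are hypothetical):

import { Logger } from 'werelogs';
import { extractAuthItems } from './validateInputs'; // path assumed

const log = new Logger('example');
const authHeader = 'AWS4-HMAC-SHA256 ' +
    'Credential=AKIAEXAMPLE/20160202/us-east-1/s3/aws4_request, ' +
    'SignedHeaders=host;x-amz-content-sha256;x-amz-date, ' +
    `Signature=${'0'.repeat(64)}`;
const { credentialsArr, signedHeaders, signatureFromRequest } =
    extractAuthItems(authHeader, log);
// credentialsArr: ['AKIAEXAMPLE', '20160202', 'us-east-1', 's3', 'aws4_request']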


@@ -1,146 +0,0 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
// The min value here is to manage further backward compat if we
// need it
const iamSecurityTokenSizeMin = 128;
const iamSecurityTokenSizeMax = 128;
// Security token is a hex string (no real format from amazon)
const iamSecurityTokenPattern =
new RegExp(`^[a-f0-9]{${iamSecurityTokenSizeMin},` +
`${iamSecurityTokenSizeMax}}$`);
module.exports = {
// info about the iam security token
iamSecurityToken: {
min: iamSecurityTokenSizeMin,
max: iamSecurityTokenSizeMax,
pattern: iamSecurityTokenPattern,
},
// PublicId is used as the canonicalID for a request that contains
// no authentication information. Requestor can access
// only public resources
publicId: 'http://acs.amazonaws.com/groups/global/AllUsers',
zenkoServiceAccount: 'http://acs.zenko.io/accounts/service',
metadataFileNamespace: '/MDFile',
dataFileURL: '/DataFile',
passthroughFileURL: '/PassthroughFile',
// AWS states max size for user-defined metadata
// (x-amz-meta- headers) is 2 KB:
// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
// In testing, AWS seems to allow up to 88 more bytes,
// so we do the same.
maximumMetaHeadersSize: 2136,
emptyFileMd5: 'd41d8cd98f00b204e9800998ecf8427e',
// Version 2 changes the format of the data location property
// Version 3 adds the dataStoreName attribute
// Version 4 add the Creation-Time and Content-Language attributes,
// and add support for x-ms-meta-* headers in UserMetadata
// Version 5 adds the azureInfo structure
mdModelVersion: 5,
/*
* Splitter is used to build the object name for the overview of a
* multipart upload and to build the object names for each part of a
* multipart upload. These objects with large names are then stored in
* metadata in a "shadow bucket" to a real bucket. The shadow bucket
* contains all ongoing multipart uploads. We include in the object
* name some of the info we might need to pull about an open multipart
* upload or about an individual part with each piece of info separated
* by the splitter. We can then extract each piece of info by splitting
* the object name string with this splitter.
* For instance, assuming a splitter of '...!*!',
* the name of the upload overview would be:
* overview...!*!objectKey...!*!uploadId
* For instance, the name of a part would be:
* uploadId...!*!partNumber
*
* The sequence of characters used in the splitter should not occur
* elsewhere in the pieces of info to avoid splitting where not
* intended.
*
* Splitter is also used in adding bucketnames to the
* namespaceusersbucket. The object names added to the
* namespaceusersbucket are of the form:
* canonicalID...!*!bucketname
*/
splitter: '..|..',
usersBucket: 'users..bucket',
// MPU Bucket Prefix is used to create the name of the shadow
// bucket used for multipart uploads. There is one shadow mpu
// bucket per bucket and its name is the mpuBucketPrefix followed
// by the name of the final destination bucket for the object
// once the multipart upload is complete.
mpuBucketPrefix: 'mpuShadowBucket',
// since aws s3 does not allow capitalized buckets, these may be
// used for special internal purposes
permittedCapitalizedBuckets: {
METADATA: true,
},
// Setting a lower object key limit to account for:
// - Mongo key limit of 1012 bytes
// - Version ID in Mongo Key if versioned of 33
// - Max bucket name length if bucket match false of 63
// - Extra prefix slash for bucket prefix if bucket match of 1
objectKeyByteLimit: 915,
/* delimiter for location-constraint. The location constraint will be able
* to include the ingestion flag
*/
zenkoSeparator: ':',
/* eslint-disable camelcase */
externalBackends: { aws_s3: true, azure: true, gcp: true, pfs: true },
replicationBackends: { aws_s3: true, azure: true, gcp: true },
// hex digest of sha256 hash of empty string:
emptyStringHash: crypto.createHash('sha256')
.update('', 'binary').digest('hex'),
mpuMDStoredExternallyBackend: { aws_s3: true, gcp: true },
// AWS sets a minimum size limit for parts except for the last part.
// http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html
minimumAllowedPartSize: 5242880,
gcpMaximumAllowedPartCount: 1024,
// GCP Object Tagging Prefix
gcpTaggingPrefix: 'aws-tag-',
productName: 'APN/1.0 Scality/1.0 Scality CloudServer for Zenko',
legacyLocations: ['sproxyd', 'legacy'],
// healthcheck default call from nginx is every 2 seconds
// for external backends, don't call unless at least 1 minute
// (60,000 milliseconds) since last call
externalBackendHealthCheckInterval: 60000,
// some of the available data backends (if called directly rather
// than through the multiple backend gateway) need a key provided
// as a string as first parameter of the get/delete methods.
clientsRequireStringKey: { sproxyd: true, cdmi: true },
hasCopyPartBackends: { aws_s3: true, gcp: true },
versioningNotImplBackends: { azure: true, gcp: true },
// user metadata applied on zenko-created objects
zenkoIDHeader: 'x-amz-meta-zenko-instance-id',
// Default expiration value of the S3 pre-signed URL duration
// 604800 seconds (seven days).
defaultPreSignedURLExpiry: 7 * 24 * 60 * 60,
// Regex for ISO-8601 formatted date
shortIso8601Regex: /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z/,
longIso8601Regex: /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d{3}Z/,
supportedNotificationEvents: new Set([
's3:ObjectCreated:*',
's3:ObjectCreated:Put',
's3:ObjectCreated:Copy',
's3:ObjectCreated:CompleteMultipartUpload',
's3:ObjectRemoved:*',
's3:ObjectRemoved:Delete',
's3:ObjectRemoved:DeleteMarkerCreated',
]),
notificationArnPrefix: 'arn:scality:bucketnotif',
// HTTP server keep-alive timeout is set to a higher value than
// client's free sockets timeout to avoid the risk of triggering
// ECONNRESET errors if the server closes the connection at the
// exact moment clients attempt to reuse an established connection
// for a new request.
//
// Note: the ability to close inactive connections on the client
// after httpClientFreeSocketsTimeout milliseconds requires the
// use of "agentkeepalive" module instead of the regular node.js
// http.Agent.
httpServerKeepAliveTimeout: 60000,
httpClientFreeSocketTimeout: 55000,
};

lib/constants.ts Normal file

@@ -0,0 +1,131 @@
import * as crypto from 'crypto';
// The min value here is to manage further backward compat if we
// need it
const iamSecurityTokenSizeMin = 128;
const iamSecurityTokenSizeMax = 128;
// Security token is a hex string (no real format from amazon)
const iamSecurityTokenPattern = new RegExp(
`^[a-f0-9]{${iamSecurityTokenSizeMin},${iamSecurityTokenSizeMax}}$`,
);
// info about the iam security token
export const iamSecurityToken = {
min: iamSecurityTokenSizeMin,
max: iamSecurityTokenSizeMax,
pattern: iamSecurityTokenPattern,
};
// PublicId is used as the canonicalID for a request that contains
// no authentication information. Requestor can access
// only public resources
export const publicId = 'http://acs.amazonaws.com/groups/global/AllUsers';
export const zenkoServiceAccount = 'http://acs.zenko.io/accounts/service';
export const metadataFileNamespace = '/MDFile';
export const dataFileURL = '/DataFile';
// AWS states max size for user-defined metadata
// (x-amz-meta- headers) is 2 KB:
// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
// In testing, AWS seems to allow up to 88 more bytes,
// so we do the same.
export const maximumMetaHeadersSize = 2136;
export const emptyFileMd5 = 'd41d8cd98f00b204e9800998ecf8427e';
// Version 2 changes the format of the data location property
// Version 3 adds the dataStoreName attribute
export const mdModelVersion = 3;
/*
* Splitter is used to build the object name for the overview of a
* multipart upload and to build the object names for each part of a
* multipart upload. These objects with large names are then stored in
* metadata in a "shadow bucket" to a real bucket. The shadow bucket
* contains all ongoing multipart uploads. We include in the object
* name some of the info we might need to pull about an open multipart
* upload or about an individual part with each piece of info separated
* by the splitter. We can then extract each piece of info by splitting
* the object name string with this splitter.
* For instance, assuming a splitter of '...!*!',
* the name of the upload overview would be:
* overview...!*!objectKey...!*!uploadId
* For instance, the name of a part would be:
* uploadId...!*!partNumber
*
* The sequence of characters used in the splitter should not occur
* elsewhere in the pieces of info to avoid splitting where not
* intended.
*
* Splitter is also used in adding bucketnames to the
* namespaceusersbucket. The object names added to the
* namespaceusersbucket are of the form:
* canonicalID...!*!bucketname
*/
export const splitter = '..|..';
export const usersBucket = 'users..bucket';
// MPU Bucket Prefix is used to create the name of the shadow
// bucket used for multipart uploads. There is one shadow mpu
// bucket per bucket and its name is the mpuBucketPrefix followed
// by the name of the final destination bucket for the object
// once the multipart upload is complete.
export const mpuBucketPrefix = 'mpuShadowBucket';
// since aws s3 does not allow capitalized buckets, these may be
// used for special internal purposes
export const permittedCapitalizedBuckets = {
METADATA: true,
};
/* eslint-disable camelcase */
export const externalBackends = { aws_s3: true, azure: true, gcp: true, pfs: true }
export const hasCopyPartBackends = { aws_s3: true, gcp: true }
export const versioningNotImplBackends = { azure: true, gcp: true }
export const mpuMDStoredExternallyBackend = { aws_s3: true, gcp: true }
// AWS sets a minimum size limit for parts except for the last part.
// http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html
export const minimumAllowedPartSize = 5242880;
// hex digest of sha256 hash of empty string:
export const emptyStringHash = crypto.createHash('sha256').update('', 'binary').digest('hex');
// Default expiration value of the S3 pre-signed URL duration
// 604800 seconds (seven days).
export const legacyLocations = ['sproxyd', 'legacy'];
export const defaultPreSignedURLExpiry = 7 * 24 * 60 * 60;
// Regex for ISO-8601 formatted date
export const shortIso8601Regex = /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z/;
export const longIso8601Regex = /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d{3}Z/;
export const supportedNotificationEvents = new Set([
's3:ObjectCreated:*',
's3:ObjectCreated:Put',
's3:ObjectCreated:Copy',
's3:ObjectCreated:CompleteMultipartUpload',
's3:ObjectRemoved:*',
's3:ObjectRemoved:Delete',
's3:ObjectRemoved:DeleteMarkerCreated',
's3:ObjectTagging:*',
's3:ObjectTagging:Put',
's3:ObjectTagging:Delete',
's3:ObjectAcl:Put',
]);
export const notificationArnPrefix = 'arn:scality:bucketnotif';
// some of the available data backends (if called directly rather
// than through the multiple backend gateway) need a key provided
// as a string as first parameter of the get/delete methods.
export const clientsRequireStringKey = { sproxyd: true, cdmi: true };
// HTTP server keep-alive timeout is set to a higher value than
// client's free sockets timeout to avoid the risk of triggering
// ECONNRESET errors if the server closes the connection at the
// exact moment clients attempt to reuse an established connection
// for a new request.
//
// Note: the ability to close inactive connections on the client
// after httpClientFreeSocketsTimeout milliseconds requires the
// use of "agentkeepalive" module instead of the regular node.js
// http.Agent.
export const httpServerKeepAliveTimeout = 60000;
export const httpClientFreeSocketTimeout = 55000;
export const supportedLifecycleRules = [
'expiration',
'noncurrentVersionExpiration',
'abortIncompleteMultipartUpload',
];
// Maximum number of buckets to cache (bucket metadata)
export const maxCachedBuckets = process.env.METADATA_MAX_CACHED_BUCKETS ?
Number(process.env.METADATA_MAX_CACHED_BUCKETS) : 1000;
/** For policy resource arn check we allow empty account ID to not break compatibility */
export const policyArnAllowedEmptyAccountId = ['utapi', 'scuba'];
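
As a sketch of the splitter convention documented above (the bucket, key, and uploadId are hypothetical):

import { splitter, mpuBucketPrefix } from './constants'; // path assumed

const shadowBucket = `${mpuBucketPrefix}my-bucket`; // 'mpuShadowBucketmy-bucket'
const overviewKey = `overview${splitter}myObjectKey${splitter}uploadId123`;
const partKey = `uploadId123${splitter}00042`;
// splitting on '..|..' recovers each piece of info from the key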

lib/db.js

@@ -1,151 +0,0 @@
'use strict'; // eslint-disable-line strict
const writeOptions = { sync: true };
/**
* Like Error, but with a property set to true.
* TODO: this is copied from kineticlib, should consolidate with the
* future errors module
*
* Example: instead of:
* const err = new Error("input is not a buffer");
* err.badTypeInput = true;
* throw err;
* use:
* throw propError("badTypeInput", "input is not a buffer");
*
* @param {String} propName - the property name.
* @param {String} message - the Error message.
* @returns {Error} the Error object.
*/
function propError(propName, message) {
const err = new Error(message);
err[propName] = true;
return err;
}
/**
* Running transaction with multiple updates to be committed atomically
*/
class IndexTransaction {
/**
* Builds a new transaction
*
* @argument {Leveldb} db an open database to which the updates
* will be applied
*
* @returns {IndexTransaction} a new empty transaction
*/
constructor(db) {
this.operations = [];
this.db = db;
this.closed = false;
}
/**
* Adds a new operation to participate in this running transaction
*
* @argument {object} op an object with the following attributes:
* {
* type: 'put' or 'del',
* key: the object key,
* value: (optional for del) the value to store,
* }
*
* @throws {Error} an error described by the following properties
* - invalidTransactionVerb if op is not put or del
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
* - missingValue if putting without a value
*
* @returns {undefined}
*/
push(op) {
if (this.closed) {
throw propError('pushOnCommittedTransaction',
'can not add ops to already committed transaction');
}
if (op.type !== 'put' && op.type !== 'del') {
throw propError('invalidTransactionVerb',
`unknown action type: ${op.type}`);
}
if (op.key === undefined) {
throw propError('missingKey', 'missing key');
}
if (op.type === 'put' && op.value === undefined) {
throw propError('missingValue', 'missing value');
}
this.operations.push(op);
}
/**
* Adds a new put operation to this running transaction
*
* @argument {string} key - the key of the object to put
* @argument {string} value - the value to put
*
* @throws {Error} an error described by the following properties
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
* - missingValue if putting without a value
*
* @returns {undefined}
*
* @see push
*/
put(key, value) {
this.push({ type: 'put', key, value });
}
/**
* Adds a new del operation to this running transaction
*
* @argument {string} key - the key of the object to delete
*
* @throws {Error} an error described by the following properties
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
*
* @returns {undefined}
*
* @see push
*/
del(key) {
this.push({ type: 'del', key });
}
/**
* Applies the queued updates in this transaction atomically.
*
* @argument {function} cb function to be called when the commit
* finishes, taking an optional error argument
*
* @returns {undefined}
*/
commit(cb) {
if (this.closed) {
return cb(propError('alreadyCommitted',
'transaction was already committed'));
}
if (this.operations.length === 0) {
return cb(propError('emptyTransaction',
'tried to commit an empty transaction'));
}
this.closed = true;
// The array-of-operations variant of the `batch` method
// allows passing options such as `sync: true` whereas the
// chained form does not.
return this.db.batch(this.operations, writeOptions, cb);
}
}
module.exports = {
IndexTransaction,
};

lib/db.ts Normal file

@@ -0,0 +1,194 @@
/**
* Like Error, but with a property set to true.
* TODO: this is copied from kineticlib, should consolidate with the
* future errors module
*
* Example: instead of:
* const err = new Error("input is not a buffer");
* err.badTypeInput = true;
* throw err;
* use:
* throw propError("badTypeInput", "input is not a buffer");
*
* @param propName - the property name.
* @param message - the Error message.
* @returns the Error object.
*/
function propError(propName: string, message: string): Error {
const err = new Error(message);
err[propName] = true;
// @ts-ignore
err.is = { [propName]: true };
return err;
}
/**
* Running transaction with multiple updates to be committed atomically
*/
export class IndexTransaction {
operations: { type: 'put' | 'del'; key: string; value?: any }[];
db: any;
closed: boolean;
conditions: { [key: string]: string }[];
/**
* Builds a new transaction
*
* @argument {Leveldb} db an open database to which the updates
* will be applied
*
* @returns a new empty transaction
*/
constructor(db: any) {
this.operations = [];
this.db = db;
this.closed = false;
this.conditions = [];
}
/**
* Adds a new operation to participate in this running transaction
*
* @argument op an object with the following attributes:
* {
* type: 'put' or 'del',
* key: the object key,
* value: (optional for del) the value to store,
* }
*
* @throws an error described by the following properties
* - invalidTransactionVerb if op is not put or del
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
* - missingValue if putting without a value
*/
push(op: { type: 'put'; key: string; value: any }): void;
push(op: { type: 'del'; key: string }): void;
push(op: { type: 'put' | 'del'; key: string; value?: any }): void {
if (this.closed) {
throw propError(
'pushOnCommittedTransaction',
'can not add ops to already committed transaction'
);
}
if (op.type !== 'put' && op.type !== 'del') {
throw propError(
'invalidTransactionVerb',
`unknown action type: ${op.type}`
);
}
if (op.key === undefined) {
throw propError('missingKey', 'missing key');
}
if (op.type === 'put' && op.value === undefined) {
throw propError('missingValue', 'missing value');
}
this.operations.push(op);
}
/**
* Adds a new put operation to this running transaction
*
* @argument {string} key - the key of the object to put
* @argument {string} value - the value to put
*
* @throws {Error} an error described by the following properties
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
* - missingValue if putting without a value
* @see push
*/
put(key: string, value: any) {
this.push({ type: 'put', key, value });
}
/**
* Adds a new del operation to this running transaction
*
* @argument key - the key of the object to delete
*
* @throws an error described by the following properties
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
*
* @see push
*/
del(key: string) {
this.push({ type: 'del', key });
}
/**
* Adds a condition for the transaction
*
* @argument condition an object with the following attributes:
* {
* <condition>: the object key
* }
* example: { notExists: 'key1' }
*
* @throws an error described by the following properties
* - pushOnCommittedTransaction if already committed
* - missingCondition if the condition is empty
*
*/
addCondition(condition: { [key: string]: string }) {
if (this.closed) {
throw propError(
'pushOnCommittedTransaction',
'can not add conditions to already committed transaction'
);
}
if (condition === undefined || Object.keys(condition).length === 0) {
throw propError(
'missingCondition',
'missing condition for conditional put'
);
}
if (typeof condition.notExists !== 'string') {
throw propError(
'unsupportedConditionalOperation',
'missing key or supported condition'
);
}
this.conditions.push(condition);
}
/**
* Applies the queued updates in this transaction atomically.
*
* @argument cb function to be called when the commit
* finishes, taking an optional error argument
*
*/
commit(cb: (error: Error | null, data?: any) => void) {
if (this.closed) {
return cb(
propError(
'alreadyCommitted',
'transaction was already committed'
)
);
}
if (this.operations.length === 0) {
return cb(
propError(
'emptyTransaction',
'tried to commit an empty transaction'
)
);
}
this.closed = true;
const options = { sync: true, conditions: this.conditions };
// The array-of-operations variant of the `batch` method
// allows passing options such as `sync: true` whereas the
// chained form does not.
return this.db.batch(this.operations, options, cb);
}
}
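
A minimal usage sketch, assuming `db` is an open LevelDB-like handle whose batch(operations, options, cb) understands the `conditions` option:

declare const db: any; // stand-in for an open database handle
const transaction = new IndexTransaction(db);
transaction.put('key1', 'value1');
transaction.addCondition({ notExists: 'key1' }); // only apply if key1 is absent
transaction.commit(err => {
    if (err) {
        // failures are propError instances, e.g. err.is.emptyTransaction
        console.error('commit failed', err);
        return;
    }
    console.log('operations applied atomically');
});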

View File

@@ -1,13 +0,0 @@
function reshapeExceptionError(error) {
const { message, code, stack, name } = error;
return {
message,
code,
stack,
name,
};
}
module.exports = {
reshapeExceptionError,
};

lib/errorUtils.ts Normal file

@@ -0,0 +1,11 @@
export interface ErrorLike {
message: any;
code: any;
stack: any;
name: any;
}
export function reshapeExceptionError(error: ErrorLike) {
const { message, code, stack, name } = error;
return { message, code, stack, name };
}
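
Usage sketch (import path assumed):

import { reshapeExceptionError, ErrorLike } from './errorUtils'; // path assumed

try {
    JSON.parse('{oops');
} catch (err) {
    // keep only the serializable fields before logging
    console.error('parse failed', reshapeExceptionError(err as ErrorLike));
}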


@@ -1,35 +0,0 @@
'use strict'; // eslint-disable-line strict
class ArsenalError extends Error {
constructor(type, code, desc) {
super(type);
this.code = code;
this.description = desc;
this[type] = true;
}
customizeDescription(description) {
return new ArsenalError(this.message, this.code, description);
}
}
/**
* Generate an Errors instances object.
*
* @returns {Object.<string, ArsenalError>} - object field by arsenalError
* instances
*/
function errorsGen() {
const errors = {};
const errorsObj = require('../errors/arsenalErrors.json');
Object.keys(errorsObj)
.filter(index => index !== '_comment')
.forEach(index => {
errors[index] = new ArsenalError(index, errorsObj[index].code,
errorsObj[index].description);
});
return errors;
}
module.exports = errorsGen();

lib/errors/arsenalErrors.ts Normal file

File diff suppressed because it is too large

lib/errors/index.ts Normal file

@@ -0,0 +1,150 @@
import type { ServerResponse } from 'http';
import * as rawErrors from './arsenalErrors';
/** All possible errors names. */
export type Name = keyof typeof rawErrors
/** Object containing all errors names. It has the format { [Name]: "Name" } */
export type Names = { [Name_ in Name]: Name_ };
/** Mapping used to determine an error type. It has the format { [Name]: boolean } */
export type Is = { [_ in Name]: boolean };
/** Mapping of all possible Errors. It has the format { [Name]: Error } */
export type Errors = { [_ in Name]: ArsenalError };
// This object is reused constantly by createIs; we store it here
// to avoid recomputation.
const isBase = Object.fromEntries(
Object.keys(rawErrors).map(key => [key, false])
) as Is;
// This allows conditionally adding the old behavior of errors to properly
// test the migration.
// Activate CI tests with `ALLOW_UNSAFE_ERROR_COMPARISON=false yarn test`.
// Remove this mechanism in ARSN-176.
export const allowUnsafeErrComp = (
process.env.ALLOW_UNSAFE_ERROR_COMPARISON ?? 'true') === 'true'
// This contains some metaprog. Be careful.
// Proxy can be found on MDN.
// https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy
// While it could seem better to avoid metaprog, this allows us to enforce
// type-checking properly while avoiding all errors that could happen at runtime.
// Even if some errors are made in JavaScript, like using err.is.NonExistingError,
// the Proxy will return false.
const createIs = (type: Name): Is => {
const get = (is: Is, value: string | symbol) => is[value] ?? false;
const final = Object.freeze({ ...isBase, [type]: true })
return new Proxy(final, { get });
};
export class ArsenalError extends Error {
/** HTTP status code. Example: 401, 403, 500, ... */
#code: number;
/** Text description of the error. */
#description: string;
/** Type of the error. */
#type: Name;
/** Object used to determine the error type.
* Example: error.is.InternalError */
#is: Is;
private constructor(type: Name, code: number, description: string) {
super(type);
this.#code = code;
this.#description = description;
this.#type = type;
this.#is = createIs(type);
// This restores the old behavior of errors, to make sure they're now
// backward-compatible. Fortunately it's handled by TS, but it cannot
// be type-checked. This means we have to be extremely careful about
// what we're doing when using errors.
// Disables the feature when in CI tests but not in production.
if (allowUnsafeErrComp) {
this[type] = true;
}
}
/** Output the error as a JSON string */
toString() {
const errorType = this.message;
const errorMessage = this.#description;
return JSON.stringify({ errorType, errorMessage });
}
flatten() {
return {
is_arsenal_error: true,
code: this.#code,
description: this.#description,
type: this.#type,
stack: this.stack
}
}
static unflatten(flat_obj) {
if (!flat_obj.is_arsenal_error) {
return null;
}
const err = new ArsenalError(
flat_obj.type,
flat_obj.code,
flat_obj.description
)
err.stack = flat_obj.stack
return err;
}
/** Write the error in an HTTP response */
writeResponse(res: ServerResponse) {
res.writeHead(this.#code);
const asStr = this.toString();
res.end(asStr);
}
/** Clone the error with a new description.*/
customizeDescription(description: string): ArsenalError {
const type = this.#type;
const code = this.#code;
return new ArsenalError(type, code, description);
}
/** Used to determine the error type. Example: error.is.InternalError */
get is() {
return this.#is;
}
/** HTTP status code. Example: 401, 403, 500, ... */
get code() {
return this.#code;
}
/** Text description of the error. */
get description() {
return this.#description;
}
/**
* Type of the error, belonging to Name. `is` should be preferred over
* `type` in daily use, but `type` remains accessible for future use. */
get type() {
return this.#type;
}
/** Generate all possible errors. An instance is created by default. */
static errors() {
const errors = {}
Object.entries(rawErrors).forEach((value) => {
const name = value[0] as Name;
const error = value[1];
const { code, description } = error;
const get = () => new ArsenalError(name, code, description);
Object.defineProperty(errors, name, { get });
});
return errors as Errors
}
}
/** Mapping of all possible Errors.
* Use them with errors[error].customizeDescription for any customization. */
export default ArsenalError.errors();
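
A short sketch of the resulting API (import path assumed; NoSuchKey is one of the generated names):

import errors from './errors'; // path assumed

const err = errors.NoSuchKey.customizeDescription('missing object metadata');
err.is.NoSuchKey; // true; unknown names resolve to false through the Proxy
err.code;         // 404
err.toString();   // '{"errorType":"NoSuchKey","errorMessage":"missing object metadata"}'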


@@ -7,8 +7,8 @@
"test": "mocha --recursive --timeout 5500 tests/unit"
},
"dependencies": {
"mocha": "5.2.0",
"async": "~2.6.1",
"mocha": "2.5.3",
"async": "^2.6.0",
"node-forge": "^0.7.1"
}
}


@@ -17,9 +17,9 @@ describe('decyrptSecret', () => {
describe('parseServiceCredentials', () => {
const conf = {
users: [{ accessKey,
accountType: 'service-clueso',
secretKey,
userName: 'Search Service Account' }],
accountType: 'service-clueso',
secretKey,
userName: 'Search Service Account' }],
};
const auth = JSON.stringify({ privateKey });


@@ -1,6 +1,4 @@
'use strict'; // eslint-disable-line strict
const ciphers = [
export const ciphers = [
'DHE-RSA-AES128-GCM-SHA256',
'ECDHE-ECDSA-AES128-GCM-SHA256',
'ECDHE-RSA-AES256-GCM-SHA384',
@@ -28,7 +26,3 @@ const ciphers = [
'!EDH-RSA-DES-CBC3-SHA',
'!KRB5-DES-CBC3-SHA',
].join(':');
module.exports = {
ciphers,
};


@@ -29,16 +29,11 @@ c2CNfUEqyRbJF4pE9ZcdQReT5p/llmyhQdvq6cHH+cKJk63C6DHRVoStsnsUcvKe
bLxKsygK77ttjr61cxLoDJeGd5L5h1CPmwIBAg==
-----END DH PARAMETERS-----
*/
'use strict'; // eslint-disable-line strict
const dhparam =
export const dhparam =
'MIIBCAKCAQEAh99T77KGNuiY9N6xrCJ3QNv4SFADTa3CD+1VMTAdRJLHUNpglB+i' +
'AoTYiLDFZgtTCpx0ZZUD+JM3qiCZy0OK5/ZGlVD7sZmxjRtdpVK4qIPtwav8t0J7' +
'c2CNfUEqyRbJF4pE9ZcdQReT5p/llmyhQdvq6cHH+cKJk63C6DHRVoStsnsUcvKe' +
'23PLGZulKg8H3eRBxHamHkmyuEVDtoNhMIoJONsdXSpo5GgcD4EQMM8xb/qsnCxn' +
'6QIGTBvcHskxtlTZOfUPk4XQ6Yb3tQi2TurzkQHLln4U7p/GZs+D+6D3SgSPqr6P' +
'bLxKsygK77ttjr61cxLoDJeGd5L5h1CPmwIBAg==';
module.exports = {
dhparam,
};

lib/https/index.ts Normal file

@@ -0,0 +1,2 @@
export * as ciphers from './ciphers'
export * as dhparam from './dh2048'


@@ -1,83 +0,0 @@
'use strict'; // eslint-disable-line strict
const ipaddr = require('ipaddr.js');
/**
* checkIPinRangeOrMatch checks whether a given ip address is in an ip address
* range or matches the given ip address
* @param {string} cidr - ip address range or ip address
* @param {object} ip - parsed ip address
* @return {boolean} true if in range, false if not
*/
function checkIPinRangeOrMatch(cidr, ip) {
// If there is an exact match of the ip address, no need to check ranges
if (ip.toString() === cidr) {
return true;
}
let range;
try {
range = ipaddr.IPv4.parseCIDR(cidr);
} catch (err) {
try {
// not ipv4 so try ipv6
range = ipaddr.IPv6.parseCIDR(cidr);
} catch (err) {
// range is not valid ipv4 or ipv6
return false;
}
}
try {
return ip.match(range);
} catch (err) {
return false;
}
}
/**
* Parse IP address into object representation
* @param {string} ip - IPV4/IPV6/IPV4-mapped IPV6 address
* @return {object} parsedIp - Object representation of parsed IP
*/
function parseIp(ip) {
if (ipaddr.IPv4.isValid(ip)) {
return ipaddr.parse(ip);
}
if (ipaddr.IPv6.isValid(ip)) {
// also parses IPv6 mapped IPv4 addresses into IPv4 representation
return ipaddr.process(ip);
}
// not valid ip address according to module, so return empty object
// which will obviously not match a range of ip addresses that the parsedIp
// is being tested against
return {};
}
/**
* Checks if an IP address matches a given list of CIDR ranges
* @param {string[]} cidrList - List of CIDR ranges
* @param {string} ip - IP address
* @return {boolean} - true if there is match or false for no match
*/
function ipMatchCidrList(cidrList, ip) {
const parsedIp = parseIp(ip);
return cidrList.some(item => {
let cidr;
// patch the cidr if range is not specified
if (item.indexOf('/') === -1) {
if (item.startsWith('127.')) {
cidr = `${item}/8`;
} else if (ipaddr.IPv4.isValid(item)) {
cidr = `${item}/32`;
}
}
return checkIPinRangeOrMatch(cidr || item, parsedIp);
});
}
module.exports = {
checkIPinRangeOrMatch,
ipMatchCidrList,
parseIp,
};

lib/ipCheck.ts Normal file

@@ -0,0 +1,71 @@
import ipaddr from 'ipaddr.js';
/**
* checkIPinRangeOrMatch checks whether a given ip address is in an ip address
* range or matches the given ip address
* @param cidr - ip address range or ip address
* @param ip - parsed ip address
* @return true if in range, false if not
*/
export function checkIPinRangeOrMatch(
cidr: string,
ip: ipaddr.IPv4 | ipaddr.IPv6,
): boolean {
// If there is an exact match of the ip address, no need to check ranges
if (ip.toString() === cidr) {
return true;
}
try {
if (ip instanceof ipaddr.IPv6) {
const range = ipaddr.IPv6.parseCIDR(cidr);
return ip.match(range);
} else {
const range = ipaddr.IPv4.parseCIDR(cidr);
return ip.match(range);
}
} catch (error) {
return false;
}
}
/**
* Parse IP address into object representation
* @param ip - IPV4/IPV6/IPV4-mapped IPV6 address
* @return parsedIp - Object representation of parsed IP
*/
export function parseIp(ip: string): ipaddr.IPv4 | ipaddr.IPv6 | {} {
if (ipaddr.IPv4.isValid(ip)) {
return ipaddr.parse(ip);
}
if (ipaddr.IPv6.isValid(ip)) {
// also parses IPv6 mapped IPv4 addresses into IPv4 representation
return ipaddr.process(ip);
}
return {};
}
/**
* Checks if an IP address matches a given list of CIDR ranges
* @param cidrList - List of CIDR ranges
* @param ip - IP address
* @return - true if there is match or false for no match
*/
export function ipMatchCidrList(cidrList: string[], ip: string): boolean {
const parsedIp = parseIp(ip);
return cidrList.some((item) => {
let cidr: string | undefined;
// patch the cidr if range is not specified
if (item.indexOf('/') === -1) {
if (item.startsWith('127.')) {
cidr = `${item}/8`;
} else if (ipaddr.IPv4.isValid(item)) {
cidr = `${item}/32`;
}
}
return (
(parsedIp instanceof ipaddr.IPv4 ||
parsedIp instanceof ipaddr.IPv6) &&
checkIPinRangeOrMatch(cidr || item, parsedIp)
);
});
}
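
Usage sketch for the CIDR helpers (addresses are illustrative):

import { ipMatchCidrList } from './ipCheck'; // path assumed

const allowList = ['192.168.1.0/24', '10.0.0.1', '127.0.0.1'];
ipMatchCidrList(allowList, '192.168.1.42'); // true: inside the /24 range
ipMatchCidrList(allowList, '10.0.0.1');     // true: bare IPs are patched to /32
ipMatchCidrList(allowList, '127.0.0.2');    // true: 127.* entries are patched to /8
ipMatchCidrList(allowList, '8.8.8.8');      // false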


@@ -1,32 +0,0 @@
'use strict'; // eslint-disable-line
const debug = require('util').debuglog('jsutil');
// JavaScript utility functions
/**
* force <tt>func</tt> to be called only once, even if actually called
* multiple times. The cached result of the first call is then
* returned (if any).
*
* @note underscore.js provides this functionality but it is not worth
* adding a new dependency for such a small use case.
*
* @param {function} func function to call at most once
* @return {function} a callable wrapper mirroring <tt>func</tt> but
* only calls <tt>func</tt> at first invocation.
*/
module.exports.once = function once(func) {
const state = { called: false, res: undefined };
return function wrapper(...args) {
if (!state.called) {
state.called = true;
state.res = func.apply(func, args);
} else {
debug('function already called:', func,
'returning cached result:', state.res);
}
return state.res;
};
};

lib/jsutil.ts Normal file

@@ -0,0 +1,33 @@
import * as util from 'util';
const debug = util.debuglog('jsutil');
// JavaScript utility functions
/**
* force <tt>func</tt> to be called only once, even if actually called
* multiple times. The cached result of the first call is then
* returned (if any).
*
* @note underscore.js provides this functionality but it is not worth
* adding a new dependency for such a small use case.
*
* @param func function to call at most once
* @return a callable wrapper mirroring <tt>func</tt> but
* only calls <tt>func</tt> at first invocation.
*/
export function once<T>(func: (...args: any[]) => T): (...args: any[]) => T {
type State = { called: boolean; res: any };
const state: State = { called: false, res: undefined };
return function wrapper(...args: any[]) {
if (!state.called) {
state.called = true;
state.res = func.apply(func, args);
} else {
const m1 = 'function already called:';
const m2 = 'returning cached result:';
debug(m1, func, m2, state.res);
}
return state.res;
};
}
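
Usage sketch (import path assumed):

import { once } from './jsutil'; // path assumed

const init = once(() => {
    console.log('initializing');
    return 42;
});
init(); // logs 'initializing' and returns 42
init(); // returns the cached 42 without calling the function again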


@@ -1,230 +0,0 @@
const Redis = require('ioredis');
class RedisClient {
/**
* @constructor
* @param {Object} config - config
* @param {string} config.host - Redis host
* @param {number} config.port - Redis port
* @param {string} config.password - Redis password
* @param {werelogs.Logger} logger - logger instance
*/
constructor(config, logger) {
this._client = new Redis(config);
this._client.on('error', err =>
logger.trace('error from redis', {
error: err,
method: 'RedisClient.constructor',
redisHost: config.host,
redisPort: config.port,
})
);
return this;
}
/**
* scan a pattern and return matching keys
* @param {string} pattern - string pattern to match with all existing keys
* @param {number} [count=10] - scan count
* @param {callback} cb - callback (error, result)
* @return {undefined}
*/
scan(pattern, count = 10, cb) {
const params = { match: pattern, count };
const keys = [];
const stream = this._client.scanStream(params);
stream.on('data', resultKeys => {
for (let i = 0; i < resultKeys.length; i++) {
keys.push(resultKeys[i]);
}
});
stream.on('end', () => {
cb(null, keys);
});
}
/**
* increment value of a key by 1 and set a ttl
* @param {string} key - key holding the value
* @param {number} expiry - expiry in seconds
* @param {callback} cb - callback
* @return {undefined}
*/
incrEx(key, expiry, cb) {
return this._client
.multi([['incr', key], ['expire', key, expiry]])
.exec(cb);
}
/**
* increment value of a key by a given amount
* @param {string} key - key holding the value
* @param {number} amount - amount to increase by
* @param {callback} cb - callback
* @return {undefined}
*/
incrby(key, amount, cb) {
return this._client.incrby(key, amount, cb);
}
/**
* increment value of a key by a given amount and set a ttl
* @param {string} key - key holding the value
* @param {number} amount - amount to increase by
* @param {number} expiry - expiry in seconds
* @param {callback} cb - callback
* @return {undefined}
*/
incrbyEx(key, amount, expiry, cb) {
return this._client
.multi([['incrby', key, amount], ['expire', key, expiry]])
.exec(cb);
}
/**
* decrement value of a key by a given amount
* @param {string} key - key holding the value
* @param {number} amount - amount to decrease by
* @param {callback} cb - callback
* @return {undefined}
*/
decrby(key, amount, cb) {
return this._client.decrby(key, amount, cb);
}
/**
* get value stored at key
* @param {string} key - key holding the value
* @param {callback} cb - callback
* @return {undefined}
*/
get(key, cb) {
return this._client.get(key, cb);
}
/**
* Checks if a key exists
* @param {string} key - name of key
* @param {function} cb - callback
* If cb response returns 0, key does not exist.
* If cb response returns 1, key exists.
* @return {undefined}
*/
exists(key, cb) {
return this._client.exists(key, cb);
}
/**
* execute a batch of commands
* @param {string[]} cmds - list of commands
* @param {callback} cb - callback
* @return {undefined}
*/
batch(cmds, cb) {
return this._client.pipeline(cmds).exec(cb);
}
/**
* Add a value and its score to a sorted set. If no sorted set exists, this
* will create a new one for the given key.
* @param {string} key - name of key
* @param {integer} score - score used to order set
* @param {string} value - value to store
* @param {callback} cb - callback
* @return {undefined}
*/
zadd(key, score, value, cb) {
return this._client.zadd(key, score, value, cb);
}
/**
* Get number of elements in a sorted set.
* Note: using this on a key that does not exist will return 0.
* Note: using this on an existing key that isn't a sorted set will
* return an error WRONGTYPE.
* @param {string} key - name of key
* @param {function} cb - callback
* @return {undefined}
*/
zcard(key, cb) {
return this._client.zcard(key, cb);
}
/**
* Get the score for given value in a sorted set
* Note: using this on a key that does not exist will return nil.
* Note: using this on a value that does not exist in a valid sorted set key
* will return nil.
* @param {string} key - name of key
* @param {string} value - value within sorted set
* @param {function} cb - callback
* @return {undefined}
*/
zscore(key, value, cb) {
return this._client.zscore(key, value, cb);
}
/**
* Remove a value from a sorted set
* @param {string} key - name of key
* @param {string|array} value - value within sorted set. Can specify
* multiple values within an array
* @param {function} cb - callback
* The cb response returns number of values removed
* @return {undefined}
*/
zrem(key, value, cb) {
return this._client.zrem(key, value, cb);
}
/**
* Get specified range of elements in a sorted set
* @param {string} key - name of key
* @param {integer} start - start index (inclusive)
* @param {integer} end - end index (inclusive) (can use -1)
* @param {function} cb - callback
* @return {undefined}
*/
zrange(key, start, end, cb) {
return this._client.zrange(key, start, end, cb);
}
/**
* Get range of elements in a sorted set based off score
* @param {string} key - name of key
* @param {integer|string} min - min score value (inclusive)
* (can use "-inf")
* @param {integer|string} max - max score value (inclusive)
* (can use "+inf")
* @param {function} cb - callback
* @return {undefined}
*/
zrangebyscore(key, min, max, cb) {
return this._client.zrangebyscore(key, min, max, cb);
}
/**
* get TTL or expiration in seconds
* @param {string} key - name of key
* @param {function} cb - callback
* @return {undefined}
*/
ttl(key, cb) {
return this._client.ttl(key, cb);
}
clear(cb) {
return this._client.flushdb(cb);
}
disconnect(cb) {
return this._client.quit(cb);
}
listClients(cb) {
return this._client.client('list', cb);
}
}
module.exports = RedisClient;

lib/metrics/RedisClient.ts Normal file

@@ -0,0 +1,126 @@
import Redis from 'ioredis';
import { Logger } from 'werelogs';
export type Config = { host: string; port: number; password: string };
export type Callback = (error: Error | null, value?: any) => void;
export default class RedisClient {
_client: Redis.Redis;
constructor(config: Config, logger: Logger) {
this._client = new Redis(config);
this._client.on('error', err =>
logger.trace('error from redis', {
error: err,
method: 'RedisClient.constructor',
redisHost: config.host,
redisPort: config.port,
})
);
return this;
}
/** increment value of a key by 1 and set a ttl */
incrEx(key: string, expiry: number, cb: Callback) {
const exp = expiry.toString();
return this._client
.multi([['incr', key], ['expire', key, exp]])
.exec(cb);
}
/** increment value of a key by a given amount and set a ttl */
incrbyEx(key: string, amount: number, expiry: number, cb: Callback) {
const am = amount.toString();
const exp = expiry.toString();
return this._client
.multi([['incrby', key, am], ['expire', key, exp]])
.exec(cb);
}
/** execute a batch of commands */
batch(cmds: string[][], cb: Callback) {
return this._client.pipeline(cmds).exec(cb);
}
/**
* Checks if a key exists
* @param cb - callback
* If cb response returns 0, key does not exist.
* If cb response returns 1, key exists.
*/
exists(key: string, cb: Callback) {
return this._client.exists(key, cb);
}
/**
* Add a value and its score to a sorted set. If no sorted set exists, this
* will create a new one for the given key.
* @param score - score used to order set
*/
zadd(key: string, score: number, value: string, cb: Callback) {
return this._client.zadd(key, score, value, cb);
}
/**
* Get number of elements in a sorted set.
* Note: using this on a key that does not exist will return 0.
* Note: using this on an existing key that isn't a sorted set will
* return an error WRONGTYPE.
*/
zcard(key: string, cb: Callback) {
return this._client.zcard(key, cb);
}
/**
* Get the score for given value in a sorted set
* Note: using this on a key that does not exist will return nil.
* Note: using this on a value that does not exist in a valid sorted set key
* will return nil.
*/
zscore(key: string, value: string, cb: Callback) {
return this._client.zscore(key, value, cb);
}
/**
* Remove a value from a sorted set
* @param value - value within sorted set. Can specify multiple values within an array
* @param {function} cb - callback
* The cb response returns number of values removed
*/
zrem(key: string, value: string | string[], cb: Callback) {
return this._client.zrem(key, value, cb);
}
/**
* Get specified range of elements in a sorted set
* @param start - start index (inclusive)
* @param end - end index (inclusive) (can use -1)
*/
zrange(key: string, start: number, end: number, cb: Callback) {
return this._client.zrange(key, start, end, cb);
}
/**
* Get range of elements in a sorted set based off score
* @param min - min score value (inclusive)
* (can use "-inf")
* @param max - max score value (inclusive)
* (can use "+inf")
*/
zrangebyscore(
key: string,
min: number | string,
max: number | string,
cb: Callback,
) {
return this._client.zrangebyscore(key, min, max, cb);
}
clear(cb: Callback) {
return this._client.flushdb(cb);
}
disconnect() {
this._client.disconnect();
}
}
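
A short usage sketch, assuming a reachable Redis and a werelogs Logger:

import { Logger } from 'werelogs';
import RedisClient from './metrics/RedisClient'; // path assumed

const client = new RedisClient(
    { host: '127.0.0.1', port: 6379, password: '' },
    new Logger('example'),
);
// increment a counter and set a 15-minute TTL in a single MULTI
client.incrEx('s3:requests:1454450400000', 900, (err) => {
    if (err) {
        console.error('incrEx failed', err);
    }
    client.disconnect();
});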


@@ -1,231 +0,0 @@
const async = require('async');
class StatsClient {
/**
* @constructor
* @param {object} redisClient - RedisClient instance
* @param {number} interval - sampling interval by seconds
* @param {number} expiry - sampling duration by seconds
*/
constructor(redisClient, interval, expiry) {
this._redis = redisClient;
this._interval = interval;
this._expiry = expiry;
return this;
}
/*
* Utility function to use when callback is undefined
*/
_noop() {}
/**
* normalize to the nearest interval
* @param {object} d - Date instance
* @return {number} timestamp - normalized to the nearest interval
*/
_normalizeTimestamp(d) {
const s = d.getSeconds();
return d.setSeconds(s - s % this._interval, 0);
}
/**
* set timestamp to the previous interval
* @param {object} d - Date instance
* @return {number} timestamp - set to the previous interval
*/
_setPrevInterval(d) {
return d.setSeconds(d.getSeconds() - this._interval);
}
/**
* build redis key to get total number of occurrences on the server
* @param {string} name - key name identifier
* @param {Date} date - Date instance
* @return {string} key - key for redis
*/
buildKey(name, date) {
return `${name}:${this._normalizeTimestamp(date)}`;
}
/**
* reduce the array of values to a single value
* typical input looks like [[null, '1'], [null, '2'], [null, null]...]
* @param {array} arr - array of [error, value] pairs from a batch command
* @return {number} count - total count across the batch results
*/
_getCount(arr) {
return arr.reduce((prev, a) => {
let num = parseInt(a[1], 10);
num = Number.isNaN(num) ? 0 : num;
return prev + num;
}, 0);
}
/**
* report/record a new request received on the server
* @param {string} id - service identifier
* @param {number} incr - optional param increment
* @param {function} cb - callback
* @return {undefined}
*/
reportNewRequest(id, incr, cb) {
if (!this._redis) {
return undefined;
}
let callback;
let amount;
if (typeof incr === 'function') {
// In case where optional `incr` is not passed, but `cb` is passed
callback = incr;
amount = 1;
} else {
callback = (cb && typeof cb === 'function') ? cb : this._noop;
amount = (typeof incr === 'number') ? incr : 1;
}
const key = this.buildKey(`${id}:requests`, new Date());
return this._redis.incrbyEx(key, amount, this._expiry, callback);
}
/**
* Increment the given key by the given value.
* @param {String} key - The Redis key to increment
* @param {Number} incr - The value to increment by
* @param {function} [cb] - callback
* @return {undefined}
*/
incrementKey(key, incr, cb) {
const callback = cb || this._noop;
return this._redis.incrby(key, incr, callback);
}
/**
* Decrement the given key by the given value.
* @param {String} key - The Redis key to decrement
* @param {Number} decr - The value to decrement by
* @param {function} [cb] - callback
* @return {undefined}
*/
decrementKey(key, decr, cb) {
const callback = cb || this._noop;
return this._redis.decrby(key, decr, callback);
}
/**
* report/record a request that ended up being a 500 on the server
* @param {string} id - service identifier
* @param {callback} cb - callback
* @return {undefined}
*/
report500(id, cb) {
if (!this._redis) {
return undefined;
}
const callback = cb || this._noop;
const key = this.buildKey(`${id}:500s`, new Date());
return this._redis.incrEx(key, this._expiry, callback);
}
/**
* wrapper on `getStats` that handles a list of keys
* @param {object} log - Werelogs request logger
* @param {array} ids - service identifiers
* @param {callback} cb - callback to call with the err/result
* @return {undefined}
*/
getAllStats(log, ids, cb) {
if (!this._redis) {
return cb(null, {});
}
const statsRes = {
'requests': 0,
'500s': 0,
'sampleDuration': this._expiry,
};
let requests = 0;
let errors = 0;
// for now set concurrency to default of 10
return async.eachLimit(ids, 10, (id, done) => {
this.getStats(log, id, (err, res) => {
if (err) {
return done(err);
}
requests += res.requests;
errors += res['500s'];
return done();
});
}, error => {
if (error) {
log.error('error getting stats', {
error,
method: 'StatsClient.getAllStats',
});
return cb(null, statsRes);
}
statsRes.requests = requests;
statsRes['500s'] = errors;
return cb(null, statsRes);
});
}
/**
* get stats for the last x seconds, x being the sampling duration
* @param {object} log - Werelogs request logger
* @param {string} id - service identifier
* @param {callback} cb - callback to call with the err/result
* @return {undefined}
*/
getStats(log, id, cb) {
if (!this._redis) {
return cb(null, {});
}
const d = new Date();
const totalKeys = Math.floor(this._expiry / this._interval);
const reqsKeys = [];
const req500sKeys = [];
for (let i = 0; i < totalKeys; i++) {
reqsKeys.push(['get', this.buildKey(`${id}:requests`, d)]);
req500sKeys.push(['get', this.buildKey(`${id}:500s`, d)]);
this._setPrevInterval(d);
}
return async.parallel([
next => this._redis.batch(reqsKeys, next),
next => this._redis.batch(req500sKeys, next),
], (err, results) => {
/**
* Batch result is of the format
* [ [null, '1'], [null, '2'], [null, '3'] ] where each
* item is the result of each batch command.
* For each item in the result, index 0 signifies the error and
* index 1 contains the result
*/
const statsRes = {
'requests': 0,
'500s': 0,
'sampleDuration': this._expiry,
};
if (err) {
log.error('error getting stats', {
error: err,
method: 'StatsClient.getStats',
});
/**
* Redis for stats is not a critical component; any error here
* is ignored, since returning an InternalError
* could be mistaken for a problem with the health of the service
*/
return cb(null, statsRes);
}
statsRes.requests = this._getCount(results[0]);
statsRes['500s'] = this._getCount(results[1]);
return cb(null, statsRes);
});
}
}
module.exports = StatsClient;

lib/metrics/StatsClient.ts
@@ -0,0 +1,163 @@
import async from 'async';
import RedisClient from './RedisClient';
import { Logger } from 'werelogs';
export default class StatsClient {
_redis: RedisClient;
_interval: number;
_expiry: number;
/**
* @constructor
* @param redisClient - RedisClient instance
* @param interval - sampling interval in seconds
* @param expiry - sampling duration in seconds
*/
constructor(redisClient: RedisClient, interval: number, expiry: number) {
this._redis = redisClient;
this._interval = interval;
this._expiry = expiry;
return this;
}
/** Utility function to use when callback is undefined */
_noop() {}
/**
* normalize to the nearest interval
* @param d - Date instance
* @return timestamp - normalized to the nearest interval
*/
_normalizeTimestamp(d: Date): number {
const s = d.getSeconds();
return d.setSeconds(s - s % this._interval, 0);
}
/**
* set timestamp to the previous interval
* @param d - Date instance
* @return timestamp - set to the previous interval
*/
_setPrevInterval(d: Date): number {
return d.setSeconds(d.getSeconds() - this._interval);
}
/**
* build redis key to get total number of occurrences on the server
* @param name - key name identifier
* @param d - Date instance
* @return key - key for redis
*/
_buildKey(name: string, d: Date): string {
return `${name}:${this._normalizeTimestamp(d)}`;
}
/**
* reduce the array of values to a single value
* typical input looks like [[null, '1'], [null, '2'], [null, null]...]
* @param arr - array of [error, value] pairs from a batch command
* @return - total count across the batch results
*/
_getCount(arr: [any, string | null][]): number {
return arr.reduce((prev, a) => {
let num = parseInt(a[1] ?? '', 10);
num = Number.isNaN(num) ? 0 : num;
return prev + num;
}, 0);
}
/**
* report/record a new request received on the server
* @param id - service identifier
* @param incr - optional param increment
*/
reportNewRequest(
id: string,
incr?: number | ((error: Error | null, value?: any) => void),
cb?: (error: Error | null, value?: any) => void,
) {
if (!this._redis) {
return undefined;
}
let callback: (error: Error | null, value?: any) => void;
let amount: number;
if (typeof incr === 'function') {
// In case where optional `incr` is not passed, but `cb` is passed
callback = incr;
amount = 1;
} else {
callback = (cb && typeof cb === 'function') ? cb : this._noop;
amount = (typeof incr === 'number') ? incr : 1;
}
const key = this._buildKey(`${id}:requests`, new Date());
return this._redis.incrbyEx(key, amount, this._expiry, callback);
}
/**
* report/record a request that ended up being a 500 on the server
* @param id - service identifier
*/
report500(id: string, cb?: (error: Error | null, value?: any) => void) {
if (!this._redis) {
return undefined;
}
const callback = cb || this._noop;
const key = this._buildKey(`${id}:500s`, new Date());
return this._redis.incrEx(key, this._expiry, callback);
}
/**
* get stats for the last x seconds, x being the sampling duration
* @param log - Werelogs request logger
* @param id - service identifier
*/
getStats(log: Logger, id: string, cb: (error: Error | null, value?: any) => void) {
if (!this._redis) {
return cb(null, {});
}
const d = new Date();
const totalKeys = Math.floor(this._expiry / this._interval);
const reqsKeys: ['get', string][] = [];
const req500sKeys: ['get', string][] = [];
for (let i = 0; i < totalKeys; i++) {
reqsKeys.push(['get', this._buildKey(`${id}:requests`, d)]);
req500sKeys.push(['get', this._buildKey(`${id}:500s`, d)]);
this._setPrevInterval(d);
}
return async.parallel([
next => this._redis.batch(reqsKeys, next),
next => this._redis.batch(req500sKeys, next),
], (err, results) => {
/**
* Batch result is of the format
* [ [null, '1'], [null, '2'], [null, '3'] ] where each
* item is the result of each batch command.
* For each item in the result, index 0 signifies the error and
* index 1 contains the result
*/
const statsRes = {
'requests': 0,
'500s': 0,
'sampleDuration': this._expiry,
};
if (err) {
log.error('error getting stats', {
error: err,
method: 'StatsClient.getStats',
});
/**
* Redis for stats is not a critical component; any error here
* is ignored, since returning an InternalError
* could be mistaken for a problem with the health of the service
*/
return cb(null, statsRes);
}
statsRes.requests = this._getCount((results as any)[0]);
statsRes['500s'] = this._getCount((results as any)[1]);
return cb(null, statsRes);
});
}
}
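As a quick sketch of the key scheme this class relies on (the service name and numbers are illustrative): with interval = 5 and expiry = 30, reportNewRequest buckets counters into per-interval keys, and getStats reads back the last expiry / interval = 6 buckets:
// Timestamps are floored to the interval, so every request inside the
// same 5-second window increments the same Redis key.
const interval = 5;
const normalize = (d: Date): number => {
    const s = d.getSeconds();
    return d.setSeconds(s - s % interval, 0);
};
const key = `my-service:requests:${normalize(new Date())}`;
// e.g. 'my-service:requests:1700000005000'; getStats() then GETs the
// six most recent window keys and sums whatever counters still exist.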

@@ -1,230 +0,0 @@
const async = require('async');
const StatsClient = require('./StatsClient');
/**
* @class StatsModel
*
* @classdesc Extends StatsClient and overrides how timestamps are
* normalized: by minutes rather than by seconds
*/
class StatsModel extends StatsClient {
/**
* Utility method to convert 2d array rows to columns, and vice versa
* See also: https://docs.ruby-lang.org/en/2.0.0/Array.html#method-i-zip
* @param {array} arrays - 2d array of integers
* @return {array} converted array
*/
_zip(arrays) {
if (arrays.length > 0 && arrays.every(a => Array.isArray(a))) {
return arrays[0].map((_, i) => arrays.map(a => a[i]));
}
return [];
}
/**
* normalize to the nearest interval
* @param {object} d - Date instance
* @return {number} timestamp - normalized to the nearest interval
*/
_normalizeTimestamp(d) {
const m = d.getMinutes();
return d.setMinutes(m - m % (Math.floor(this._interval / 60)), 0, 0);
}
/**
* override the method to get the count as an array of integers separated
* by each interval
* typical input looks like [[null, '1'], [null, '2'], [null, null]...]
* @param {array} arr - each index contains the result of each batch command
* where index 0 signifies the error and index 1 contains the result
* @return {array} array of integers, ordered from most recent interval to
* oldest interval with length of (expiry / interval)
*/
_getCount(arr) {
const size = Math.floor(this._expiry / this._interval);
const array = arr.reduce((store, i) => {
let num = parseInt(i[1], 10);
num = Number.isNaN(num) ? 0 : num;
store.push(num);
return store;
}, []);
if (array.length < size) {
array.push(...Array(size - array.length).fill(0));
}
return array;
}
/**
* wrapper on `getStats` that handles a list of keys
* override the method to reduce the returned 2d array from `_getCount`
* @param {object} log - Werelogs request logger
* @param {array} ids - service identifiers
* @param {callback} cb - callback to call with the err/result
* @return {undefined}
*/
getAllStats(log, ids, cb) {
if (!this._redis) {
return cb(null, {});
}
const size = Math.floor(this._expiry / this._interval);
const statsRes = {
'requests': Array(size).fill(0),
'500s': Array(size).fill(0),
'sampleDuration': this._expiry,
};
const requests = [];
const errors = [];
if (ids.length === 0) {
return cb(null, statsRes);
}
// for now set concurrency to default of 10
return async.eachLimit(ids, 10, (id, done) => {
this.getStats(log, id, (err, res) => {
if (err) {
return done(err);
}
requests.push(res.requests);
errors.push(res['500s']);
return done();
});
}, error => {
if (error) {
log.error('error getting stats', {
error,
method: 'StatsModel.getAllStats',
});
return cb(null, statsRes);
}
statsRes.requests = this._zip(requests).map(arr =>
arr.reduce((acc, i) => acc + i, 0));
statsRes['500s'] = this._zip(errors).map(arr =>
arr.reduce((acc, i) => acc + i, 0));
return cb(null, statsRes);
});
}
/**
* Handles getting a list of global keys.
* @param {array} ids - Service identifiers
* @param {object} log - Werelogs request logger
* @param {function} cb - Callback
* @return {undefined}
*/
getAllGlobalStats(ids, log, cb) {
const reqsKeys = ids.map(key => (['get', key]));
return this._redis.batch(reqsKeys, (err, res) => {
const statsRes = { requests: 0 };
if (err) {
log.error('error getting metrics', {
error: err,
method: 'StatsClient.getAllGlobalStats',
});
return cb(null, statsRes);
}
statsRes.requests = res.reduce((sum, curr) => {
const [cmdErr, val] = curr;
if (cmdErr) {
// Log any individual request errors from the batch request.
log.error('error getting metrics', {
error: cmdErr,
method: 'StatsClient.getAllGlobalStats',
});
}
return sum + (Number.parseInt(val, 10) || 0);
}, 0);
return cb(null, statsRes);
});
}
/**
* normalize date timestamp to the nearest hour
* @param {Date} d - Date instance
* @return {number} timestamp - normalized to the nearest hour
*/
normalizeTimestampByHour(d) {
return d.setMinutes(0, 0, 0);
}
/**
* get previous hour to date given
* @param {Date} d - Date instance
* @return {number} timestamp - one hour prior to date passed
*/
_getDatePreviousHour(d) {
return d.setHours(d.getHours() - 1);
}
/**
* get list of sorted set key timestamps
* @param {number} epoch - epoch time
* @return {array} array of sorted set key timestamps
*/
getSortedSetHours(epoch) {
const timestamps = [];
let date = this.normalizeTimestampByHour(new Date(epoch));
while (timestamps.length < 24) {
timestamps.push(date);
date = this._getDatePreviousHour(new Date(date));
}
return timestamps;
}
/**
* get the normalized hour timestamp for given epoch time
* @param {number} epoch - epoch time
* @return {string} normalized hour timestamp for given time
*/
getSortedSetCurrentHour(epoch) {
return this.normalizeTimestampByHour(new Date(epoch));
}
/**
* helper method to add element to a sorted set, applying TTL if new set
* @param {string} key - name of key
* @param {integer} score - score used to order set
* @param {string} value - value to store
* @param {callback} cb - callback
* @return {undefined}
*/
addToSortedSet(key, score, value, cb) {
this._redis.exists(key, (err, resCode) => {
if (err) {
return cb(err);
}
if (resCode === 0) {
// milliseconds in a day
const msInADay = 24 * 60 * 60 * 1000;
const nearestHour = this.normalizeTimestampByHour(new Date());
// in seconds
const ttl = Math.ceil(
(msInADay - (Date.now() - nearestHour)) / 1000);
const cmds = [
['zadd', key, score, value],
['expire', key, ttl],
];
return this._redis.batch(cmds, (err, res) => {
if (err) {
return cb(err);
}
const cmdErr = res.find(r => r[0] !== null);
if (cmdErr) {
return cb(cmdErr);
}
const successResponse = res[0][1];
return cb(null, successResponse);
});
}
return this._redis.zadd(key, score, value, cb);
});
}
}
module.exports = StatsModel;

lib/metrics/StatsModel.ts
@@ -0,0 +1,125 @@
import StatsClient from './StatsClient';
/**
* @class StatsModel
*
* @classdesc Extends StatsClient and overrides how timestamps are
* normalized: by minutes rather than by seconds
*/
export default class StatsModel extends StatsClient {
/**
* normalize date timestamp to the nearest hour
* @param d - Date instance
* @return timestamp - normalized to the nearest hour
*/
normalizeTimestampByHour(d: Date): number {
return d.setMinutes(0, 0, 0);
}
/**
* get previous hour to date given
* @param d - Date instance
* @return timestamp - one hour prior to date passed
*/
_getDatePreviousHour(d: Date): number {
return d.setHours(d.getHours() - 1);
}
/**
* normalize to the nearest interval
* @param d - Date instance
* @return timestamp - normalized to the nearest interval
*/
_normalizeTimestamp(d: Date): number {
const m = d.getMinutes();
return d.setMinutes(m - m % (Math.floor(this._interval / 60)), 0, 0);
}
/**
* override the method to get the result as an array of integers separated
* by each interval
* typical input looks like [[null, '1'], [null, '2'], [null, null]...]
* @param arr - each index contains the result of each batch command
* where index 0 signifies the error and index 1 contains the result
* @return array of integers, ordered from most recent interval to
* oldest interval
*/
// @ts-ignore
// TODO change name or conform to parent class method
_getCount(arr: [any, string | null][]) {
return arr.reduce<number[]>((store, i) => {
let num = parseInt(i[1] ?? '', 10);
num = Number.isNaN(num) ? 0 : num;
store.push(num);
return store;
}, []);
}
/**
* get list of sorted set key timestamps
* @param epoch - epoch time
* @return array of sorted set key timestamps
*/
getSortedSetHours(epoch: number) {
const timestamps: number[] = [];
let date = this.normalizeTimestampByHour(new Date(epoch));
while (timestamps.length < 24) {
timestamps.push(date);
date = this._getDatePreviousHour(new Date(date));
}
return timestamps;
}
/**
* get the normalized hour timestamp for given epoch time
* @param epoch - epoch time
* @return normalized hour timestamp for given time
*/
getSortedSetCurrentHour(epoch: number) {
return this.normalizeTimestampByHour(new Date(epoch));
}
/**
* helper method to add element to a sorted set, applying TTL if new set
* @param key - name of key
* @param score - score used to order set
* @param value - value to store
* @param cb - callback
*/
addToSortedSet(
key: string,
score: number,
value: string,
cb: (error: Error | null, value?: any) => void,
) {
this._redis.exists(key, (err, resCode) => {
if (err) {
return cb(err);
}
if (resCode === 0) {
// milliseconds in a day
const msInADay = 24 * 60 * 60 * 1000;
const nearestHour = this.normalizeTimestampByHour(new Date());
// in seconds
const ttl = Math.ceil(
(msInADay - (Date.now() - nearestHour)) / 1000);
const cmds = [
['zadd', key, score.toString(), value],
['expire', key, ttl.toString()],
];
return this._redis.batch(cmds, (err, res) => {
if (err) {
return cb(err);
}
const cmdErr = res.find((r: any) => r[0] !== null);
if (cmdErr) {
return cb(cmdErr);
}
const successResponse = res[0][1];
return cb(null, successResponse);
});
}
return this._redis.zadd(key, score, value, cb);
});
}
}
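The TTL computed in addToSortedSet can be checked in isolation; a small sketch of the same arithmetic (times are illustrative):
// A new sorted set expires 24 hours after the start of the current
// hour: a set created at 10:45 lives until 10:00 the next day.
const msInADay = 24 * 60 * 60 * 1000;
const nearestHour = new Date().setMinutes(0, 0, 0);
const ttlSeconds = Math.ceil((msInADay - (Date.now() - nearestHour)) / 1000);
console.log(ttlSeconds); // at most 86400, shrinking as the hour progresses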

@@ -1,40 +0,0 @@
const promClient = require('prom-client');
const collectDefaultMetricsIntervalMs =
process.env.COLLECT_DEFAULT_METRICS_INTERVAL_MS !== undefined ?
Number.parseInt(process.env.COLLECT_DEFAULT_METRICS_INTERVAL_MS, 10) :
10000;
promClient.collectDefaultMetrics({ timeout: collectDefaultMetricsIntervalMs });
class ZenkoMetrics {
static createCounter(params) {
return new promClient.Counter(params);
}
static createGauge(params) {
return new promClient.Gauge(params);
}
static createHistogram(params) {
return new promClient.Histogram(params);
}
static createSummary(params) {
return new promClient.Summary(params);
}
static getMetric(name) {
return promClient.register.getSingleMetric(name);
}
static asPrometheus() {
return promClient.register.metrics();
}
static asPrometheusContentType() {
return promClient.register.contentType;
}
}
module.exports = ZenkoMetrics;

@@ -0,0 +1,35 @@
import promClient from 'prom-client';
export default class ZenkoMetrics {
static createCounter(params: promClient.CounterConfiguration<string>) {
return new promClient.Counter(params);
}
static createGauge(params: promClient.GaugeConfiguration<string>) {
return new promClient.Gauge(params);
}
static createHistogram(params: promClient.HistogramConfiguration<string>) {
return new promClient.Histogram(params);
}
static createSummary(params: promClient.SummaryConfiguration<string>) {
return new promClient.Summary(params);
}
static getMetric(name: string) {
return promClient.register.getSingleMetric(name);
}
static async asPrometheus() {
return promClient.register.metrics();
}
static asPrometheusContentType() {
return promClient.register.contentType;
}
static collectDefaultMetrics() {
return promClient.collectDefaultMetrics();
}
}
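A minimal usage sketch of these static wrappers; the metric name and label are hypothetical:
import ZenkoMetrics from './ZenkoMetrics';
ZenkoMetrics.collectDefaultMetrics();
const counter = ZenkoMetrics.createCounter({
    name: 'example_requests_total',
    help: 'Total number of requests handled',
    labelNames: ['method'],
});
counter.inc({ method: 'GET' });
// asPrometheus() is async because prom-client's register.metrics()
// returns a Promise in recent versions.
ZenkoMetrics.asPrometheus().then(output => console.log(output));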

lib/metrics/index.ts
@@ -0,0 +1,4 @@
export { default as StatsClient } from './StatsClient';
export { default as StatsModel } from './StatsModel';
export { default as RedisClient } from './RedisClient';
export { default as ZenkoMetrics } from './ZenkoMetrics';

@@ -1,23 +1,35 @@
const errors = require('../errors');
import errors from '../errors'
const validServices = {
aws: ['s3', 'iam', 'sts', 'ring'],
scality: ['utapi', 'sso'],
};
class ARN {
export default class ARN {
_partition: string;
_service: string;
_region: string | null;
_accountId?: string | null;
_resource: string;
/**
*
* Create an ARN object from its individual components
*
* @constructor
* @param {string} partition - ARN partition (e.g. 'aws')
* @param {string} service - service name in partition (e.g. 's3')
* @param {string} [region] - AWS region
* @param {string} [accountId] - AWS 12-digit account ID
* @param {string} resource - AWS resource path (e.g. 'foo/bar')
* @param partition - ARN partition (e.g. 'aws')
* @param service - service name in partition (e.g. 's3')
* @param [region] - AWS region
* @param [accountId] - AWS 12-digit account ID
* @param resource - AWS resource path (e.g. 'foo/bar')
*/
constructor(partition, service, region, accountId, resource) {
constructor(
partition: string,
service: string,
region: string | undefined | null,
accountId: string | undefined | null,
resource: string,
) {
this._partition = partition;
this._service = service;
this._region = region || null;
@@ -25,9 +37,9 @@ class ARN {
this._resource = resource;
}
static createFromString(arnStr) {
static createFromString(arnStr: string) {
const [arn, partition, service, region, accountId,
resourceType, resource] = arnStr.split(':');
resourceType, resource] = arnStr.split(':');
if (arn !== 'arn') {
return { error: errors.InvalidArgument.customizeDescription(
@@ -58,7 +70,7 @@ class ARN {
'must be a 12-digit number or "*"') };
}
const fullResource = (resource !== undefined ?
`${resourceType}:${resource}` : resourceType);
`${resourceType}:${resource}` : resourceType);
return new ARN(partition, service, region, accountId, fullResource);
}
@@ -98,9 +110,7 @@ class ARN {
toString() {
return ['arn', this.getPartition(), this.getService(),
this.getRegion(), this.getAccountId(), this.getResource()]
this.getRegion(), this.getAccountId(), this.getResource()]
.join(':');
}
}
module.exports = ARN;
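A short sketch of the round trip createFromString() enables; the ARN string is illustrative:
import ARN from './ARN';
const parsed = ARN.createFromString('arn:aws:s3::123456789012:foo/bar');
// createFromString() returns either { error } or an ARN instance, so
// callers must discriminate before using the result.
if (parsed instanceof ARN) {
    console.log(parsed.getAccountId()); // '123456789012'
    console.log(parsed.toString());     // 'arn:aws:s3::123456789012:foo/bar'
}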

@@ -1,22 +1,36 @@
const { legacyLocations } = require('../constants');
const escapeForXml = require('../s3middleware/escapeForXml');
import { RequestLogger } from 'werelogs';
import { legacyLocations } from '../constants';
import escapeForXml from '../s3middleware/escapeForXml';
type CloudServerConfig = any;
export default class BackendInfo {
_config: CloudServerConfig;
_requestEndpoint: string;
_objectLocationConstraint?: string;
_bucketLocationConstraint?: string;
_legacyLocationConstraint?: string;
class BackendInfo {
/**
* Represents the info necessary to evaluate which data backend to use
* on a data put call.
* @constructor
* @param {object} config - CloudServer config containing list of locations
* @param {string | undefined} objectLocationConstraint - location constraint
* @param config - CloudServer config containing list of locations
* @param objectLocationConstraint - location constraint
* for object based on user meta header
* @param {string | undefined } bucketLocationConstraint - location
* @param bucketLocationConstraint - location
* constraint for bucket based on bucket metadata
* @param {string} requestEndpoint - endpoint to which request was made
* @param {string | undefined } legacyLocationConstraint - legacy location
* constraint
* @param requestEndpoint - endpoint to which request was made
* @param legacyLocationConstraint - legacy location constraint
*/
constructor(config, objectLocationConstraint, bucketLocationConstraint,
requestEndpoint, legacyLocationConstraint) {
constructor(
config: CloudServerConfig,
objectLocationConstraint: string | undefined,
bucketLocationConstraint: string | undefined,
requestEndpoint: string,
legacyLocationConstraint: string | undefined,
) {
this._config = config;
this._objectLocationConstraint = objectLocationConstraint;
this._bucketLocationConstraint = bucketLocationConstraint;
@@ -27,15 +41,18 @@ class BackendInfo {
/**
* validate proposed location constraint against config
* @param {object} config - CloudServer config
* @param {string | undefined} locationConstraint - value of user
* @param config - CloudServer config
* @param locationConstraint - value of user
* metadata location constraint header or bucket location constraint
* @param {object} log - werelogs logger
* @return {boolean} - true if valid, false if not
* @param log - werelogs logger
* @return - true if valid, false if not
*/
static isValidLocationConstraint(config, locationConstraint, log) {
if (Object.keys(config.locationConstraints).
indexOf(locationConstraint) < 0) {
static isValidLocationConstraint(
config: CloudServerConfig,
locationConstraint: string | undefined,
log: RequestLogger,
) {
if (!locationConstraint || !(locationConstraint in config.locationConstraints)) {
log.trace('proposed locationConstraint is invalid',
{ locationConstraint });
return false;
@@ -45,16 +62,19 @@ class BackendInfo {
/**
* validate that request endpoint is listed in the restEndpoint config
* @param {object} config - CloudServer config
* @param {string} requestEndpoint - request endpoint
* @param {object} log - werelogs logger
* @return {boolean} - true if present, false if not
* @param config - CloudServer config
* @param requestEndpoint - request endpoint
* @param log - werelogs logger
* @return true if present, false if not
*/
static isRequestEndpointPresent(config, requestEndpoint, log) {
if (Object.keys(config.restEndpoints).
indexOf(requestEndpoint) < 0) {
static isRequestEndpointPresent(
config: CloudServerConfig,
requestEndpoint: string,
log: RequestLogger,
) {
if (!(requestEndpoint in config.restEndpoints)) {
log.trace('requestEndpoint does not match config restEndpoints',
{ requestEndpoint });
{ requestEndpoint });
return false;
}
return true;
@@ -63,17 +83,21 @@ class BackendInfo {
/**
* validate that locationConstraint for request Endpoint matches
* one config locationConstraint
* @param {object} config - CloudServer config
* @param {string} requestEndpoint - request endpoint
* @param {object} log - werelogs logger
* @return {boolean} - true if matches, false if not
* @param config - CloudServer config
* @param requestEndpoint - request endpoint
* @param log - werelogs logger
* @return - true if matches, false if not
*/
static isRequestEndpointValueValid(config, requestEndpoint, log) {
if (Object.keys(config.locationConstraints).
indexOf(config.restEndpoints[requestEndpoint]) < 0) {
static isRequestEndpointValueValid(
config: CloudServerConfig,
requestEndpoint: string,
log: RequestLogger,
) {
const restEndpoint = config.restEndpoints[requestEndpoint];
if (!(restEndpoint in config.locationConstraints)) {
log.trace('the default locationConstraint for request' +
'Endpoint does not match any config locationConstraint',
{ requestEndpoint });
{ requestEndpoint });
return false;
}
return true;
@@ -81,11 +105,11 @@ class BackendInfo {
/**
* validate that s3 server is running with a file or memory backend
* @param {object} config - CloudServer config
* @param {object} log - werelogs logger
* @return {boolean} - true if running with file/mem backend, false if not
* @param config - CloudServer config
* @param log - werelogs logger
* @return - true if running with file/mem backend, false if not
*/
static isMemOrFileBackend(config, log) {
static isMemOrFileBackend(config: CloudServerConfig, log: RequestLogger) {
if (config.backends.data === 'mem' || config.backends.data === 'file') {
log.trace('use data backend for the location', {
dataBackend: config.backends.data,
@@ -103,14 +127,18 @@ class BackendInfo {
* data backend for the location.
* - if locationConstraint for request Endpoint does not match
* any config locationConstraint, we will return an error
* @param {object} config - CloudServer config
* @param {string} requestEndpoint - request endpoint
* @param {object} log - werelogs logger
* @return {boolean} - true if valid, false if not
* @param config - CloudServer config
* @param requestEndpoint - request endpoint
* @param log - werelogs logger
* @return - true if valid, false if not
*/
static isValidRequestEndpointOrBackend(config, requestEndpoint, log) {
static isValidRequestEndpointOrBackend(
config: CloudServerConfig,
requestEndpoint: string,
log: RequestLogger,
) {
if (!BackendInfo.isRequestEndpointPresent(config, requestEndpoint,
log)) {
log)) {
return BackendInfo.isMemOrFileBackend(config, log);
}
return BackendInfo.isRequestEndpointValueValid(config, requestEndpoint,
@@ -119,20 +147,25 @@ class BackendInfo {
/**
* validate controlling BackendInfo Parameter
* @param {object} config - CloudServer config
* @param {string | undefined} objectLocationConstraint - value of user
* @param config - CloudServer config
* @param objectLocationConstraint - value of user
* metadata location constraint header
* @param {string | null} bucketLocationConstraint - location
* @param bucketLocationConstraint - location
* constraint from bucket metadata
* @param {string} requestEndpoint - endpoint of request
* @param {object} log - werelogs logger
* @return {object} - location constraint validity
* @param requestEndpoint - endpoint of request
* @param log - werelogs logger
* @return - location constraint validity
*/
static controllingBackendParam(config, objectLocationConstraint,
bucketLocationConstraint, requestEndpoint, log) {
static controllingBackendParam(
config: CloudServerConfig,
objectLocationConstraint: string | undefined,
bucketLocationConstraint: string | null,
requestEndpoint: string,
log: RequestLogger,
) {
if (objectLocationConstraint) {
if (BackendInfo.isValidLocationConstraint(config,
objectLocationConstraint, log)) {
objectLocationConstraint, log)) {
log.trace('objectLocationConstraint is valid');
return { isValid: true };
}
@@ -143,7 +176,7 @@ class BackendInfo {
}
if (bucketLocationConstraint) {
if (BackendInfo.isValidLocationConstraint(config,
bucketLocationConstraint, log)) {
bucketLocationConstraint, log)) {
log.trace('bucketLocationConstraint is valid');
return { isValid: true };
}
@@ -159,7 +192,7 @@ class BackendInfo {
return { isValid: true, legacyLocationConstraint };
}
if (!BackendInfo.isValidRequestEndpointOrBackend(config,
requestEndpoint, log)) {
requestEndpoint, log)) {
return { isValid: false, description: 'Endpoint Location Error - ' +
`Your endpoint "${requestEndpoint}" is not in restEndpoints ` +
'in your config OR the default location constraint for request ' +
@@ -167,7 +200,7 @@ class BackendInfo {
'match any config locationConstraint - Please update.' };
}
if (BackendInfo.isRequestEndpointPresent(config, requestEndpoint,
log)) {
log)) {
return { isValid: true };
}
return { isValid: true, defaultedToDataBackend: true };
@@ -175,16 +208,16 @@ class BackendInfo {
/**
* Return legacyLocationConstraint
* @param {object} config CloudServer config
* @return {string | undefined} legacyLocationConstraint;
* @param config CloudServer config
* @return legacyLocationConstraint;
*/
static getLegacyLocationConstraint(config) {
static getLegacyLocationConstraint(config: CloudServerConfig) {
return legacyLocations.find(ll => config.locationConstraints[ll]);
}
/**
* Return objectLocationConstraint
* @return {string | undefined} objectLocationConstraint;
* @return objectLocationConstraint;
*/
getObjectLocationConstraint() {
return this._objectLocationConstraint;
@@ -192,7 +225,7 @@ class BackendInfo {
/**
* Return bucketLocationConstraint
* @return {string | undefined} bucketLocationConstraint;
* @return bucketLocationConstraint;
*/
getBucketLocationConstraint() {
return this._bucketLocationConstraint;
@@ -200,7 +233,7 @@ class BackendInfo {
/**
* Return requestEndpoint
* @return {string} requestEndpoint;
* @return requestEndpoint;
*/
getRequestEndpoint() {
return this._requestEndpoint;
@@ -215,9 +248,9 @@ class BackendInfo {
* (4) default locationConstraint for requestEndpoint if requestEndpoint
* is listed in restEndpoints in config.json
* (5) default data backend
* @return {string} locationConstraint;
* @return locationConstraint;
*/
getControllingLocationConstraint() {
getControllingLocationConstraint(): string {
const objectLC = this.getObjectLocationConstraint();
const bucketLC = this.getBucketLocationConstraint();
const reqEndpoint = this.getRequestEndpoint();
@@ -236,5 +269,3 @@ class BackendInfo {
return this._config.backends.data;
}
}
module.exports = BackendInfo;
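A sketch of how this cascade is typically consumed; the config object, endpoint, and location names are hypothetical, and log is assumed to be a werelogs request logger:
// controllingBackendParam() checks, in order: the object-level
// constraint, the bucket-level constraint, a legacy constraint, and
// finally the request endpoint (falling back to a mem/file backend).
const check = BackendInfo.controllingBackendParam(
    config, 'us-east-1', null, 's3.example.com', log);
if (!check.isValid) {
    // description explains which validation step failed
    throw new Error(check.description);
}
const info = new BackendInfo(config, 'us-east-1', undefined,
    's3.example.com', check.legacyLocationConstraint);
console.log(info.getControllingLocationConstraint());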

@@ -1,237 +0,0 @@
/**
* Helper class to ease access to the Azure specific information for
* storage accounts mapped to buckets.
*/
class BucketAzureInfo {
/**
* @constructor
* @param {object} obj - Raw structure for the Azure info on storage account
* @param {string} obj.sku - SKU name of this storage account
* @param {string} obj.accessTier - Access Tier name of this storage account
* @param {string} obj.kind - Kind name of this storage account
* @param {string[]} obj.systemKeys - pair of shared keys for the system
* @param {string[]} obj.tenantKeys - pair of shared keys for the tenant
* @param {string} obj.subscriptionId - subscription ID the storage account
* belongs to
* @param {string} obj.resourceGroup - Resource group name the storage
* account belongs to
* @param {object} obj.deleteRetentionPolicy - Delete retention policy
* @param {boolean} obj.deleteRetentionPolicy.enabled -
* @param {number} obj.deleteRetentionPolicy.days -
* @param {object[]} obj.managementPolicies - Management policies for this
* storage account
* @param {boolean} obj.httpsOnly - Serve the content of this storage
* account through HTTPS only
* @param {object} obj.tags - Set of tags applied on this storage account
* @param {object[]} obj.networkACL - Network ACL of this storage account
* @param {string} obj.cname - CNAME of this storage account
* @param {boolean} obj.azureFilesAADIntegration - whether or not Azure
* Files AAD Integration is enabled for this storage account
* @param {boolean} obj.hnsEnabled - whether or not a hierarchical namespace
* is enabled for this storage account
* @param {object} obj.logging - service properties: logging
* @param {object} obj.hourMetrics - service properties: hourMetrics
* @param {object} obj.minuteMetrics - service properties: minuteMetrics
* @param {string} obj.serviceVersion - service properties: serviceVersion
*/
constructor(obj) {
this._data = {
sku: obj.sku,
accessTier: obj.accessTier,
kind: obj.kind,
systemKeys: obj.systemKeys,
tenantKeys: obj.tenantKeys,
subscriptionId: obj.subscriptionId,
resourceGroup: obj.resourceGroup,
deleteRetentionPolicy: obj.deleteRetentionPolicy,
managementPolicies: obj.managementPolicies,
httpsOnly: obj.httpsOnly,
tags: obj.tags,
networkACL: obj.networkACL,
cname: obj.cname,
azureFilesAADIntegration: obj.azureFilesAADIntegration,
hnsEnabled: obj.hnsEnabled,
logging: obj.logging,
hourMetrics: obj.hourMetrics,
minuteMetrics: obj.minuteMetrics,
serviceVersion: obj.serviceVersion,
};
}
getSku() {
return this._data.sku;
}
setSku(sku) {
this._data.sku = sku;
return this;
}
getAccessTier() {
return this._data.accessTier;
}
setAccessTier(accessTier) {
this._data.accessTier = accessTier;
return this;
}
getKind() {
return this._data.kind;
}
setKind(kind) {
this._data.kind = kind;
return this;
}
getSystemKeys() {
return this._data.systemKeys;
}
setSystemKeys(systemKeys) {
this._data.systemKeys = systemKeys;
return this;
}
getTenantKeys() {
return this._data.tenantKeys;
}
setTenantKeys(tenantKeys) {
this._data.tenantKeys = tenantKeys;
return this;
}
getSubscriptionId() {
return this._data.subscriptionId;
}
setSubscriptionId(subscriptionId) {
this._data.subscriptionId = subscriptionId;
return this;
}
getResourceGroup() {
return this._data.resourceGroup;
}
setResourceGroup(resourceGroup) {
this._data.resourceGroup = resourceGroup;
return this;
}
getDeleteRetentionPolicy() {
return this._data.deleteRetentionPolicy;
}
setDeleteRetentionPolicy(deleteRetentionPolicy) {
this._data.deleteRetentionPolicy = deleteRetentionPolicy;
return this;
}
getManagementPolicies() {
return this._data.managementPolicies;
}
setManagementPolicies(managementPolicies) {
this._data.managementPolicies = managementPolicies;
return this;
}
getHttpsOnly() {
return this._data.httpsOnly;
}
setHttpsOnly(httpsOnly) {
this._data.httpsOnly = httpsOnly;
return this;
}
getTags() {
return this._data.tags;
}
setTags(tags) {
this._data.tags = tags;
return this;
}
getNetworkACL() {
return this._data.networkACL;
}
setNetworkACL(networkACL) {
this._data.networkACL = networkACL;
return this;
}
getCname() {
return this._data.cname;
}
setCname(cname) {
this._data.cname = cname;
return this;
}
getAzureFilesAADIntegration() {
return this._data.azureFilesAADIntegration;
}
setAzureFilesAADIntegration(azureFilesAADIntegration) {
this._data.azureFilesAADIntegration = azureFilesAADIntegration;
return this;
}
getHnsEnabled() {
return this._data.hnsEnabled;
}
setHnsEnabled(hnsEnabled) {
this._data.hnsEnabled = hnsEnabled;
return this;
}
getLogging() {
return this._data.logging;
}
setLogging(logging) {
this._data.logging = logging;
return this;
}
getHourMetrics() {
return this._data.hourMetrics;
}
setHourMetrics(hourMetrics) {
this._data.hourMetrics = hourMetrics;
return this;
}
getMinuteMetrics() {
return this._data.minuteMetrics;
}
setMinuteMetrics(minuteMetrics) {
this._data.minuteMetrics = minuteMetrics;
return this;
}
getServiceVersion() {
return this._data.serviceVersion;
}
setServiceVersion(serviceVersion) {
this._data.serviceVersion = serviceVersion;
return this;
}
getValue() {
return this._data;
}
}
module.exports = BucketAzureInfo;

@@ -1,18 +1,65 @@
const assert = require('assert');
const uuid = require('uuid/v4');
import assert from 'assert';
import uuid from 'uuid/v4';
const { WebsiteConfiguration } = require('./WebsiteConfiguration');
const ReplicationConfiguration = require('./ReplicationConfiguration');
const LifecycleConfiguration = require('./LifecycleConfiguration');
const ObjectLockConfiguration = require('./ObjectLockConfiguration');
const BucketPolicy = require('./BucketPolicy');
const NotificationConfiguration = require('./NotificationConfiguration');
import { WebsiteConfiguration } from './WebsiteConfiguration';
import ReplicationConfiguration from './ReplicationConfiguration';
import LifecycleConfiguration from './LifecycleConfiguration';
import ObjectLockConfiguration from './ObjectLockConfiguration';
import BucketPolicy from './BucketPolicy';
import NotificationConfiguration from './NotificationConfiguration';
import { ACL as OACL } from './ObjectMD';
// WHEN UPDATING THIS NUMBER, UPDATE MODELVERSION.MD CHANGELOG
// MODELVERSION.MD can be found in S3 repo: lib/metadata/ModelVersion.md
const modelVersion = 13;
// WHEN UPDATING THIS NUMBER, UPDATE BucketInfoModelVersion.md CHANGELOG
// BucketInfoModelVersion.md can be found in the root of this repository
const modelVersion = 10;
export type CORS = {
id: string;
allowedMethods: string[];
allowedOrigins: string[];
allowedHeaders: string[];
maxAgeSeconds: number;
exposeHeaders: string[];
}[];
export type SSE = {
cryptoScheme: number;
algorithm: string;
masterKeyId: string;
configuredMasterKeyId: string;
mandatory: boolean;
};
export type VersioningConfiguration = {
Status: string;
MfaDelete: any;
};
export type ACL = OACL & { WRITE: string[] }
export default class BucketInfo {
_acl: ACL;
_name: string;
_owner: string;
_ownerDisplayName: string;
_creationDate: string;
_mdBucketModelVersion: number;
_transient: boolean;
_deleted: boolean;
_serverSideEncryption: SSE;
_versioningConfiguration: VersioningConfiguration;
_locationConstraint: string | null;
_websiteConfiguration?: WebsiteConfiguration | null;
_cors: CORS | null;
_replicationConfiguration?: any;
_lifecycleConfiguration?: any;
_bucketPolicy?: any;
_uid?: string;
_objectLockEnabled?: boolean;
_objectLockConfiguration?: any;
_notificationConfiguration?: any;
_tags?: { key: string; value: string }[] | null;
class BucketInfo {
/**
* Represents all bucket information.
* @constructor
@@ -33,14 +80,15 @@ class BucketInfo {
* algorithm to use
* @param {string} serverSideEncryption.masterKeyId -
* key to get master key
* @param {string} serverSideEncryption.configuredMasterKeyId -
* custom KMS key id specified by user
* @param {boolean} serverSideEncryption.mandatory -
* true for mandatory encryption
* bucket has been made
* @param {object} versioningConfiguration - versioning configuration
* @param {string} versioningConfiguration.Status - versioning status
* @param {object} versioningConfiguration.MfaDelete - versioning mfa delete
* @param {string} locationConstraint - locationConstraint for bucket that
* also includes the ingestion flag
* @param {string} locationConstraint - locationConstraint for bucket
* @param {WebsiteConfiguration} [websiteConfiguration] - website
* configuration
* @param {object[]} [cors] - collection of CORS rules to apply
@@ -56,23 +104,34 @@ class BucketInfo {
* @param {object} [lifecycleConfiguration] - lifecycle configuration
* @param {object} [bucketPolicy] - bucket policy
* @param {string} [uid] - unique identifier for the bucket, necessary
* @param {string} readLocationConstraint - readLocationConstraint for bucket
* addition for use with lifecycle operations
* @param {boolean} [isNFS] - whether the bucket is on NFS
* @param {object} [ingestionConfig] - object for ingestion status: en/dis
* @param {object} [azureInfo] - Azure storage account specific info
* @param {boolean} [objectLockEnabled] - true when object lock enabled
* @param {object} [objectLockConfiguration] - object lock configuration
* @param {object} [notificationConfiguration] - bucket notification configuration
* @param {object[]} [tags] - bucket tags
*/
constructor(name, owner, ownerDisplayName, creationDate,
mdBucketModelVersion, acl, transient, deleted,
serverSideEncryption, versioningConfiguration,
locationConstraint, websiteConfiguration, cors,
replicationConfiguration, lifecycleConfiguration,
bucketPolicy, uid, readLocationConstraint, isNFS,
ingestionConfig, azureInfo, objectLockEnabled,
objectLockConfiguration, notificationConfiguration) {
constructor(
name: string,
owner: string,
ownerDisplayName: string,
creationDate: string,
mdBucketModelVersion: number,
acl: ACL | undefined,
transient: boolean,
deleted: boolean,
serverSideEncryption: SSE,
versioningConfiguration: VersioningConfiguration,
locationConstraint: string,
websiteConfiguration?: WebsiteConfiguration | null,
cors?: CORS,
replicationConfiguration?: any,
lifecycleConfiguration?: any,
bucketPolicy?: any,
uid?: string,
objectLockEnabled?: boolean,
objectLockConfiguration?: any,
notificationConfiguration?: any,
tags?: { key: string; value: string }[],
) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof owner, 'string');
assert.strictEqual(typeof ownerDisplayName, 'string');
@@ -90,12 +149,15 @@ class BucketInfo {
}
if (serverSideEncryption) {
assert.strictEqual(typeof serverSideEncryption, 'object');
const { cryptoScheme, algorithm, masterKeyId, mandatory } =
serverSideEncryption;
const { cryptoScheme, algorithm, masterKeyId,
configuredMasterKeyId, mandatory } = serverSideEncryption;
assert.strictEqual(typeof cryptoScheme, 'number');
assert.strictEqual(typeof algorithm, 'string');
assert.strictEqual(typeof masterKeyId, 'string');
assert.strictEqual(typeof mandatory, 'boolean');
if (configuredMasterKeyId !== undefined) {
assert.strictEqual(typeof configuredMasterKeyId, 'string');
}
}
if (versioningConfiguration) {
assert.strictEqual(typeof versioningConfiguration, 'object');
@@ -110,19 +172,12 @@ class BucketInfo {
if (locationConstraint) {
assert.strictEqual(typeof locationConstraint, 'string');
}
if (ingestionConfig) {
assert.strictEqual(typeof ingestionConfig, 'object');
}
if (azureInfo) {
assert.strictEqual(typeof azureInfo, 'object');
}
if (readLocationConstraint) {
assert.strictEqual(typeof readLocationConstraint, 'string');
}
if (websiteConfiguration) {
assert(websiteConfiguration instanceof WebsiteConfiguration);
const { indexDocument, errorDocument, redirectAllRequestsTo,
routingRules } = websiteConfiguration;
const indexDocument = websiteConfiguration.getIndexDocument();
const errorDocument = websiteConfiguration.getErrorDocument();
const redirectAllRequestsTo = websiteConfiguration.getRedirectAllRequestsTo();
const routingRules = websiteConfiguration.getRoutingRules();
assert(indexDocument === undefined ||
typeof indexDocument === 'string');
assert(errorDocument === undefined ||
@@ -154,7 +209,7 @@ class BucketInfo {
if (notificationConfiguration) {
NotificationConfiguration.validateConfig(notificationConfiguration);
}
const aclInstance = acl || {
const aclInstance: ACL = acl || {
Canned: 'private',
FULL_CONTROL: [],
WRITE: [],
@@ -162,6 +217,9 @@ class BucketInfo {
READ: [],
READ_ACP: [],
};
if (tags) {
assert(Array.isArray(tags));
}
// IF UPDATING PROPERTIES, INCREMENT MODELVERSION NUMBER ABOVE
this._acl = aclInstance;
@@ -175,24 +233,22 @@ class BucketInfo {
this._serverSideEncryption = serverSideEncryption || null;
this._versioningConfiguration = versioningConfiguration || null;
this._locationConstraint = locationConstraint || null;
this._readLocationConstraint = readLocationConstraint || null;
this._websiteConfiguration = websiteConfiguration || null;
this._replicationConfiguration = replicationConfiguration || null;
this._cors = cors || null;
this._lifecycleConfiguration = lifecycleConfiguration || null;
this._bucketPolicy = bucketPolicy || null;
this._uid = uid || uuid();
this._isNFS = isNFS || null;
this._ingestion = ingestionConfig || null;
this._azureInfo = azureInfo || null;
this._objectLockEnabled = objectLockEnabled || false;
this._objectLockConfiguration = objectLockConfiguration || null;
this._notificationConfiguration = notificationConfiguration || null;
this._tags = tags || null;
return this;
}
/**
* Serialize the object
* @return {string} - stringified object
* @return - stringified object
*/
serialize() {
const bucketInfos = {
@@ -207,32 +263,31 @@ class BucketInfo {
serverSideEncryption: this._serverSideEncryption,
versioningConfiguration: this._versioningConfiguration,
locationConstraint: this._locationConstraint,
readLocationConstraint: this._readLocationConstraint,
websiteConfiguration: undefined,
cors: this._cors,
replicationConfiguration: this._replicationConfiguration,
lifecycleConfiguration: this._lifecycleConfiguration,
bucketPolicy: this._bucketPolicy,
uid: this._uid,
isNFS: this._isNFS,
ingestion: this._ingestion,
azureInfo: this._azureInfo,
objectLockEnabled: this._objectLockEnabled,
objectLockConfiguration: this._objectLockConfiguration,
notificationConfiguration: this._notificationConfiguration,
tags: this._tags,
};
if (this._websiteConfiguration) {
bucketInfos.websiteConfiguration =
this._websiteConfiguration.getConfig();
}
return JSON.stringify(bucketInfos);
const final = this._websiteConfiguration
? {
...bucketInfos,
websiteConfiguration: this._websiteConfiguration.getConfig(),
}
: bucketInfos;
return JSON.stringify(final);
}
/**
* deSerialize the JSON string
* @param {string} stringBucket - the stringified bucket
* @return {object} - parsed string
* @param stringBucket - the stringified bucket
* @return - parsed string
*/
static deSerialize(stringBucket) {
static deSerialize(stringBucket: string) {
const obj = JSON.parse(stringBucket);
const websiteConfig = obj.websiteConfiguration ?
new WebsiteConfiguration(obj.websiteConfiguration) : null;
@@ -241,14 +296,13 @@ class BucketInfo {
obj.transient, obj.deleted, obj.serverSideEncryption,
obj.versioningConfiguration, obj.locationConstraint, websiteConfig,
obj.cors, obj.replicationConfiguration, obj.lifecycleConfiguration,
obj.bucketPolicy, obj.uid, obj.readLocationConstraint, obj.isNFS,
obj.ingestion, obj.azureInfo, obj.objectLockEnabled,
obj.objectLockConfiguration, obj.notificationConfiguration);
obj.bucketPolicy, obj.uid, obj.objectLockEnabled,
obj.objectLockConfiguration, obj.notificationConfiguration, obj.tags);
}
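// Illustrative round trip (hypothetical values): serialize() emits the
// flat JSON document persisted in metadata, and deSerialize() rebuilds
// an equivalent instance, rewrapping any website configuration:
//   const bucket = new BucketInfo('example-bucket', 'ownerCanonicalId',
//       'Owner Name', new Date().toJSON(),
//       BucketInfo.currentModelVersion(), undefined, false, false,
//       undefined, undefined, 'us-east-1');
//   const copy = BucketInfo.deSerialize(bucket.serialize());
//   assert.deepStrictEqual(copy, bucket);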
/**
* Returns the current model version for the data structure
* @return {number} - the current model version set above in the file
* @return - the current model version set above in the file
*/
static currentModelVersion() {
return modelVersion;
@@ -257,92 +311,90 @@ class BucketInfo {
/**
* Create a BucketInfo from an object
*
* @param {object} data - object containing data
* @return {BucketInfo} Return a BucketInfo
* @param data - object containing data
* @return Return a BucketInfo
*/
static fromObj(data) {
static fromObj(data: any) {
return new BucketInfo(data._name, data._owner, data._ownerDisplayName,
data._creationDate, data._mdBucketModelVersion, data._acl,
data._transient, data._deleted, data._serverSideEncryption,
data._versioningConfiguration, data._locationConstraint,
data._websiteConfiguration, data._cors,
data._replicationConfiguration, data._lifecycleConfiguration,
data._bucketPolicy, data._uid, data._readLocationConstraint,
data._isNFS, data._ingestion, data._azureInfo,
data._objectLockEnabled, data._objectLockConfiguration,
data._notificationConfiguration);
data._bucketPolicy, data._uid, data._objectLockEnabled,
data._objectLockConfiguration, data._notificationConfiguration, data._tags);
}
/**
* Get the ACLs.
* @return {object} acl
* @return acl
*/
getAcl() {
return this._acl;
}
/**
* Set the canned acl's.
* @param {string} cannedACL - canned ACL being set
* @return {BucketInfo} - bucket info instance
* @param cannedACL - canned ACL being set
* @return - bucket info instance
*/
setCannedAcl(cannedACL) {
setCannedAcl(cannedACL: string) {
this._acl.Canned = cannedACL;
return this;
}
/**
* Set a specific ACL.
* @param {string} canonicalID - id for account being given access
* @param {string} typeOfGrant - type of grant being granted
* @return {BucketInfo} - bucket info instance
* @param canonicalID - id for account being given access
* @param typeOfGrant - type of grant being granted
* @return - bucket info instance
*/
setSpecificAcl(canonicalID, typeOfGrant) {
setSpecificAcl(canonicalID: string, typeOfGrant: string) {
this._acl[typeOfGrant].push(canonicalID);
return this;
}
/**
* Set all ACLs.
* @param {object} acl - new set of ACLs
* @return {BucketInfo} - bucket info instance
* @param acl - new set of ACLs
* @return - bucket info instance
*/
setFullAcl(acl) {
setFullAcl(acl: ACL) {
this._acl = acl;
return this;
}
/**
* Get the server side encryption information
* @return {object} serverSideEncryption
* @return serverSideEncryption
*/
getServerSideEncryption() {
return this._serverSideEncryption;
}
/**
* Set server side encryption information
* @param {object} serverSideEncryption - server side encryption information
* @return {BucketInfo} - bucket info instance
* @param serverSideEncryption - server side encryption information
* @return - bucket info instance
*/
setServerSideEncryption(serverSideEncryption) {
setServerSideEncryption(serverSideEncryption: SSE) {
this._serverSideEncryption = serverSideEncryption;
return this;
}
/**
* Get the versioning configuration information
* @return {object} versioningConfiguration
* @return versioningConfiguration
*/
getVersioningConfiguration() {
return this._versioningConfiguration;
}
/**
* Set versioning configuration information
* @param {object} versioningConfiguration - versioning information
* @return {BucketInfo} - bucket info instance
* @param versioningConfiguration - versioning information
* @return - bucket info instance
*/
setVersioningConfiguration(versioningConfiguration) {
setVersioningConfiguration(versioningConfiguration: VersioningConfiguration) {
this._versioningConfiguration = versioningConfiguration;
return this;
}
/**
* Check that versioning is 'Enabled' on the given bucket.
* @return {boolean} - `true` if versioning is 'Enabled', otherwise `false`
* @return - `true` if versioning is 'Enabled', otherwise `false`
*/
isVersioningEnabled() {
const versioningConfig = this.getVersioningConfiguration();
@@ -350,32 +402,32 @@ class BucketInfo {
}
/**
* Get the website configuration information
* @return {object} websiteConfiguration
* @return websiteConfiguration
*/
getWebsiteConfiguration() {
return this._websiteConfiguration;
}
/**
* Set website configuration information
* @param {object} websiteConfiguration - configuration for bucket website
* @return {BucketInfo} - bucket info instance
* @param websiteConfiguration - configuration for bucket website
* @return - bucket info instance
*/
setWebsiteConfiguration(websiteConfiguration) {
setWebsiteConfiguration(websiteConfiguration: WebsiteConfiguration) {
this._websiteConfiguration = websiteConfiguration;
return this;
}
/**
* Set replication configuration information
* @param {object} replicationConfiguration - replication information
* @return {BucketInfo} - bucket info instance
* @param replicationConfiguration - replication information
* @return - bucket info instance
*/
setReplicationConfiguration(replicationConfiguration) {
setReplicationConfiguration(replicationConfiguration: any) {
this._replicationConfiguration = replicationConfiguration;
return this;
}
/**
* Get replication configuration information
* @return {object|null} replication configuration information or `null` if
* @return replication configuration information or `null` if
* the bucket does not have a replication configuration
*/
getReplicationConfiguration() {
@@ -383,7 +435,7 @@ class BucketInfo {
}
/**
* Get lifecycle configuration information
* @return {object|null} lifecycle configuration information or `null` if
* @return lifecycle configuration information or `null` if
* the bucket does not have a lifecycle configuration
*/
getLifecycleConfiguration() {
@@ -391,16 +443,16 @@ class BucketInfo {
}
/**
* Set lifecycle configuration information
* @param {object} lifecycleConfiguration - lifecycle information
* @return {BucketInfo} - bucket info instance
* @param lifecycleConfiguration - lifecycle information
* @return - bucket info instance
*/
setLifecycleConfiguration(lifecycleConfiguration) {
setLifecycleConfiguration(lifecycleConfiguration: any) {
this._lifecycleConfiguration = lifecycleConfiguration;
return this;
}
/**
* Get bucket policy statement
* @return {object|null} bucket policy statement or `null` if the bucket
* @return bucket policy statement or `null` if the bucket
* does not have a bucket policy
*/
getBucketPolicy() {
@@ -408,16 +460,16 @@ class BucketInfo {
}
/**
* Set bucket policy statement
* @param {object} bucketPolicy - bucket policy
* @return {BucketInfo} - bucket info instance
* @param bucketPolicy - bucket policy
* @return - bucket info instance
*/
setBucketPolicy(bucketPolicy) {
setBucketPolicy(bucketPolicy: any) {
this._bucketPolicy = bucketPolicy;
return this;
}
/**
* Get object lock configuration
* @return {object|null} object lock configuration information or `null` if
* @return object lock configuration information or `null` if
* the bucket does not have an object lock configuration
*/
getObjectLockConfiguration() {
@@ -425,16 +477,16 @@ class BucketInfo {
}
/**
* Set object lock configuration
* @param {object} objectLockConfiguration - object lock information
* @return {BucketInfo} - bucket info instance
* @param objectLockConfiguration - object lock information
* @return - bucket info instance
*/
setObjectLockConfiguration(objectLockConfiguration) {
setObjectLockConfiguration(objectLockConfiguration: any) {
this._objectLockConfiguration = objectLockConfiguration;
return this;
}
/**
* Get notification configuration
* @return {object|null} notification configuration information or 'null' if
* @return notification configuration information or 'null' if
* the bucket does not have a notification configuration
*/
getNotificationConfiguration() {
@@ -442,41 +494,41 @@ class BucketInfo {
}
/**
* Set notification configuration
* @param {object} notificationConfiguration - bucket notification information
* @return {BucketInfo} - bucket info instance
* @param notificationConfiguration - bucket notification information
* @return - bucket info instance
*/
setNotificationConfiguration(notificationConfiguration) {
setNotificationConfiguration(notificationConfiguration: any) {
this._notificationConfiguration = notificationConfiguration;
return this;
}
/**
* Get cors resource
* @return {object[]} cors
* @return cors
*/
getCors() {
return this._cors;
}
/**
* Set cors resource
- * @param {object[]} rules - collection of CORS rules
- * @param {string} [rules.id] - optional id to identify rule
- * @param {string[]} rules[].allowedMethods - methods allowed for CORS
- * @param {string[]} rules[].allowedOrigins - origins allowed for CORS
- * @param {string[]} [rules[].allowedHeaders] - headers allowed in an
+ * @param rules - collection of CORS rules
+ * @param [rules.id] - optional id to identify rule
+ * @param rules[].allowedMethods - methods allowed for CORS
+ * @param rules[].allowedOrigins - origins allowed for CORS
+ * @param [rules[].allowedHeaders] - headers allowed in an
* OPTIONS request via the Access-Control-Request-Headers header
- * @param {number} [rules[].maxAgeSeconds] - seconds browsers should cache
+ * @param [rules[].maxAgeSeconds] - seconds browsers should cache
* OPTIONS response
- * @param {string[]} [rules[].exposeHeaders] - headers to expose to external
+ * @param [rules[].exposeHeaders] - headers to expose to external
* applications
- * @return {BucketInfo} - bucket info instance
+ * @return - bucket info instance
*/
- setCors(rules) {
+ setCors(rules: CORS) {
this._cors = rules;
return this;
}
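(Editor's note: the `CORS` type referenced by the new `setCors(rules: CORS)` signature is defined elsewhere in the codebase. As a minimal sketch of the rule shape documented above — every value below is illustrative, and `bucket` is an assumed pre-existing `BucketInfo` instance:

// Hypothetical CORS rules collection matching the documented shape;
// none of these values come from the codebase itself.
const corsRules = [
    {
        id: 'allow-example-origin', // optional id to identify the rule
        allowedMethods: ['GET', 'HEAD'],
        allowedOrigins: ['https://www.example.com'],
        allowedHeaders: ['Authorization'], // allowed in preflight OPTIONS
        maxAgeSeconds: 3600, // how long browsers may cache the OPTIONS response
        exposeHeaders: ['x-amz-request-id'], // exposed to client applications
    },
];
bucket.setCors(corsRules);
)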
/**
* get the server-side encryption algorithm
- * @return {string} - sse algorithm used by this bucket
+ * @return - sse algorithm used by this bucket
*/
getSseAlgorithm() {
if (!this._serverSideEncryption) {
@@ -486,7 +538,7 @@ class BucketInfo {
}
/**
* get the server side encryption master key Id
- * @return {string} - sse master key Id used by this bucket
+ * @return - sse master key Id used by this bucket
*/
getSseMasterKeyId() {
if (!this._serverSideEncryption) {
@@ -496,109 +548,98 @@ class BucketInfo {
}
/**
* Get bucket name.
- * @return {string} - bucket name
+ * @return - bucket name
*/
getName() {
return this._name;
}
/**
* Set bucket name.
- * @param {string} bucketName - new bucket name
- * @return {BucketInfo} - bucket info instance
+ * @param bucketName - new bucket name
+ * @return - bucket info instance
*/
- setName(bucketName) {
+ setName(bucketName: string) {
this._name = bucketName;
return this;
}
/**
* Get bucket owner.
- * @return {string} - bucket owner's canonicalID
+ * @return - bucket owner's canonicalID
*/
getOwner() {
return this._owner;
}
/**
* Set bucket owner.
- * @param {string} ownerCanonicalID - bucket owner canonicalID
- * @return {BucketInfo} - bucket info instance
+ * @param ownerCanonicalID - bucket owner canonicalID
+ * @return - bucket info instance
*/
- setOwner(ownerCanonicalID) {
+ setOwner(ownerCanonicalID: string) {
this._owner = ownerCanonicalID;
return this;
}
/**
* Get bucket owner display name.
- * @return {string} - bucket owner display name
+ * @return - bucket owner display name
*/
getOwnerDisplayName() {
return this._ownerDisplayName;
}
/**
* Set bucket owner display name.
- * @param {string} ownerDisplayName - bucket owner display name
- * @return {BucketInfo} - bucket info instance
+ * @param ownerDisplayName - bucket owner display name
+ * @return - bucket info instance
*/
- setOwnerDisplayName(ownerDisplayName) {
+ setOwnerDisplayName(ownerDisplayName: string) {
this._ownerDisplayName = ownerDisplayName;
return this;
}
/**
* Get bucket creation date.
- * @return {object} - bucket creation date
+ * @return - bucket creation date
*/
getCreationDate() {
return this._creationDate;
}
/**
* Set location constraint.
- * @param {string} location - bucket location constraint
- * @return {BucketInfo} - bucket info instance
+ * @param location - bucket location constraint
+ * @return - bucket info instance
*/
- setLocationConstraint(location) {
+ setLocationConstraint(location: string) {
this._locationConstraint = location;
return this;
}
/**
* Get location constraint.
- * @return {string} - bucket location constraint
+ * @return - bucket location constraint
*/
getLocationConstraint() {
return this._locationConstraint;
}
/**
* Get read location constraint.
* @return {string} - bucket read location constraint
*/
getReadLocationConstraint() {
if (this._readLocationConstraint) {
return this._readLocationConstraint;
}
return this._locationConstraint;
}
/**
* Set Bucket model version
*
- * @param {number} version - Model version
- * @return {BucketInfo} - bucket info instance
+ * @param version - Model version
+ * @return - bucket info instance
*/
- setMdBucketModelVersion(version) {
+ setMdBucketModelVersion(version: number) {
this._mdBucketModelVersion = version;
return this;
}
/**
* Get Bucket model version
*
- * @return {number} Bucket model version
+ * @return Bucket model version
*/
getMdBucketModelVersion() {
return this._mdBucketModelVersion;
}
/**
* Add transient flag.
- * @return {BucketInfo} - bucket info instance
+ * @return - bucket info instance
*/
addTransientFlag() {
this._transient = true;
@@ -606,7 +647,7 @@ class BucketInfo {
}
/**
* Remove transient flag.
- * @return {BucketInfo} - bucket info instance
+ * @return - bucket info instance
*/
removeTransientFlag() {
this._transient = false;
@@ -614,14 +655,14 @@ class BucketInfo {
}
/**
* Check transient flag.
- * @return {boolean} - depending on whether transient flag in place
+ * @return - whether the transient flag is set
*/
hasTransientFlag() {
return !!this._transient;
}
/**
* Add deleted flag.
- * @return {BucketInfo} - bucket info instance
+ * @return - bucket info instance
*/
addDeletedFlag() {
this._deleted = true;
@@ -629,7 +670,7 @@ class BucketInfo {
}
/**
* Remove deleted flag.
- * @return {BucketInfo} - bucket info instance
+ * @return - bucket info instance
*/
removeDeletedFlag() {
this._deleted = false;
@@ -637,14 +678,14 @@ class BucketInfo {
}
/**
* Check deleted flag.
- * @return {boolean} - depending on whether deleted flag in place
+ * @return - whether the deleted flag is set
*/
hasDeletedFlag() {
return !!this._deleted;
}
/**
* Check if the versioning mode is on.
- * @return {boolean} - versioning mode status
+ * @return - versioning mode status
*/
isVersioningOn() {
return this._versioningConfiguration &&
@@ -652,106 +693,54 @@ class BucketInfo {
}
/**
* Get unique id of bucket.
- * @return {string} - unique id
+ * @return - unique id
*/
getUid() {
return this._uid;
}
/**
- * Check if the bucket is an NFS bucket.
- * @return {boolean} - Whether the bucket is NFS or not
+ * Set unique id of bucket.
+ * @param uid - unique identifier for the bucket
+ * @return - bucket info instance
*/
- isNFS() {
- return this._isNFS;
- }
- /**
- * Set whether the bucket is an NFS bucket.
- * @param {boolean} isNFS - Whether the bucket is NFS or not
- * @return {BucketInfo} - bucket info instance
- */
- setIsNFS(isNFS) {
- this._isNFS = isNFS;
- return this;
- }
- /**
- * enable ingestion, set 'this._ingestion' to { status: 'enabled' }
- * @return {BucketInfo} - bucket info instance
- */
- enableIngestion() {
- this._ingestion = { status: 'enabled' };
- return this;
- }
- /**
- * disable ingestion, set 'this._ingestion' to { status: 'disabled' }
- * @return {BucketInfo} - bucket info instance
- */
- disableIngestion() {
- this._ingestion = { status: 'disabled' };
- return this;
- }
- /**
- * Get ingestion configuration
- * @return {object} - bucket ingestion configuration: Enabled or Disabled
- */
- getIngestion() {
- return this._ingestion;
- }
- /**
- * Check if bucket is an ingestion bucket
- * @return {boolean} - 'true' if bucket is ingestion bucket, 'false'
- * otherwise
- */
- isIngestionBucket() {
- const ingestionConfig = this.getIngestion();
- if (ingestionConfig) {
- return true;
- }
- return false;
- }
- /**
- * Check if ingestion is enabled
- * @return {boolean} - 'true' if ingestion is enabled, otherwise 'false'
- */
- isIngestionEnabled() {
- const ingestionConfig = this.getIngestion();
- return ingestionConfig ? ingestionConfig.status === 'enabled' : false;
- }
- /**
- * Return the Azure specific storage account information for this bucket
- * @return {object} - a structure suitable for {@link BucketAzureInfo}
- * constructor
- */
- getAzureInfo() {
- return this._azureInfo;
- }
- /**
- * Set the Azure specific storage account information for this bucket
- * @param {object} azureInfo - a structure suitable for
- * {@link BucketAzureInfo} construction
- * @return {BucketInfo} - bucket info instance
- */
- setAzureInfo(azureInfo) {
- this._azureInfo = azureInfo;
+ setUid(uid: string) {
+ this._uid = uid;
return this;
}
/**
* Check if object lock is enabled.
- * @return {boolean} - depending on whether object lock is enabled
+ * @return - whether object lock is enabled
*/
isObjectLockEnabled() {
return !!this._objectLockEnabled;
}
/**
* Set the value of objectLockEnabled field.
- * @param {boolean} enabled - true if object lock enabled else false.
- * @return {BucketInfo} - bucket info instance
+ * @param enabled - true if object lock is enabled, false otherwise.
+ * @return - bucket info instance
*/
- setObjectLockEnabled(enabled) {
+ setObjectLockEnabled(enabled: boolean) {
this._objectLockEnabled = enabled;
return this;
}
- }
- module.exports = BucketInfo;
+ /**
+ * Get the value of bucket tags
+ * @return - Array of bucket tags as {"key" : "key", "value": "value"}
+ */
+ getTags() {
+ return this._tags;
+ }
+ /**
+ * Set bucket tags
+ * @param tags - collection of tags
+ * @param tags[].key - key of the tag
+ * @param tags[].value - value of the tag
+ * @return - bucket info instance
+ */
+ setTags(tags: { key: string; value: string }[]) {
+ this._tags = tags;
+ return this;
+ }
+ }
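(Editor's note: every setter above returns `this`, so configuration can be chained fluently. A minimal sketch, assuming `bucket` is an existing `BucketInfo` instance — its constructor is outside this diff — and that all values are placeholders:

// Chained configuration using the fluent setters shown above.
bucket
    .setName('example-bucket')
    .setOwner('example-canonical-id') // owner's canonical ID
    .setOwnerDisplayName('example-owner')
    .setLocationConstraint('us-east-1')
    .setTags([{ key: 'team', value: 'storage' }])
    .setObjectLockEnabled(false);
)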


@@ -1,7 +1,6 @@
- const assert = require('assert');
- const errors = require('../errors');
- const { validateResourcePolicy } = require('../policy/policyValidator');
+ import assert from 'assert';
+ import errors, { ArsenalError } from '../errors';
+ import { validateResourcePolicy } from '../policy/policyValidator';
/**
* Format of json policy:
@@ -49,20 +48,22 @@ const objectActions = [
's3:PutObjectTagging',
];
- class BucketPolicy {
+ export default class BucketPolicy {
+ _json: string;
+ _policy: any;
/**
* Create a Bucket Policy instance
- * @param {string} json - the json policy
- * @return {object} - BucketPolicy instance
+ * @param json - the json policy
+ * @return - BucketPolicy instance
*/
- constructor(json) {
+ constructor(json: string) {
this._json = json;
this._policy = {};
}
/**
* Get the bucket policy
- * @return {object} - the bucket policy or error
+ * @return - the bucket policy or error
*/
getBucketPolicy() {
const policy = this._getPolicy();
@@ -71,9 +72,9 @@ class BucketPolicy {
/**
* Get the bucket policy array
- * @return {object} - contains error if policy validation fails
+ * @return - contains error if policy validation fails
*/
- _getPolicy() {
+ _getPolicy(): { error: ArsenalError } | any {
if (!this._json || this._json === '') {
return { error: errors.MalformedPolicy.customizeDescription(
'request json is empty or undefined') };
@@ -101,13 +102,13 @@ class BucketPolicy {
/**
* Validate action and resource are compatible
- * @return {error} - contains error or empty obj
+ * @return - contains error or empty obj
*/
- _validateActionResource() {
- const invalid = this._policy.Statement.every(s => {
- const actions = typeof s.Action === 'string' ?
+ _validateActionResource(): { error?: ArsenalError } {
+ const invalid = this._policy.Statement.every((s: any) => {
+ const actions: string[] = typeof s.Action === 'string' ?
[s.Action] : s.Action;
- const resources = typeof s.Resource === 'string' ?
+ const resources: string[] = typeof s.Resource === 'string' ?
[s.Resource] : s.Resource;
const objectAction = actions.some(a =>
a.includes('Object') || objectActions.includes(a));
@@ -129,15 +130,12 @@ class BucketPolicy {
/**
* Call resource policy schema validation function
- * @param {object} policy - the bucket policy object to validate
- * @return {undefined}
+ * @param policy - the bucket policy object to validate
*/
- static validatePolicy(policy) {
+ static validatePolicy(policy: any) {
// only the BucketInfo constructor calls this function
// and BucketInfo will always be passed an object
const validated = validateResourcePolicy(JSON.stringify(policy));
assert.deepStrictEqual(validated, { error: null, valid: true });
}
}
- module.exports = BucketPolicy;
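(Editor's note: `BucketPolicy` wraps the raw JSON policy string and only parses and validates it when read back. A minimal usage sketch, assuming the statement below satisfies the resource-policy schema enforced by `validateResourcePolicy` — the full schema is outside this diff, and all values are illustrative:

// Hypothetical policy document; every field value is an example only.
const json = JSON.stringify({
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Principal: '*',
        Action: 's3:GetObject',
        Resource: 'arn:aws:s3:::example-bucket/*',
    }],
});
const policy = new BucketPolicy(json);
// getBucketPolicy() returns the parsed policy, or an object with an
// `error` field if the JSON is empty, malformed, or fails validation.
const result = policy.getBucketPolicy();
)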
