484 Commits

williamlardier
5cd70d7cf1 ARSN-267: fix failing unit test
Node.js 16.17.0 introduced a change in the error handling of TLS
sockets. On error, the connection is now closed before the response is
sent, so handling the ECONNRESET error in the affected test will
unblock it until this is fixed by Node.js, if appropriate.

(cherry picked from commit a237e38c51)
2023-05-25 17:50:00 +00:00
gaspardmoindrot
e8a409e337 [ARSN-335] Implement GHAS 2023-05-16 21:21:49 +00:00
Nicolas Humbert
7d254a0556 ARSN-105 Disjointed reduced locations 2022-03-15 14:03:54 -04:00
Vianney Rancurel
5f8c92a0a2 ft: ARSN-87 some versioning exports are still missing for Armory 2022-02-18 17:09:27 -08:00
Taylor McKinnon
6861ac477a impr(ARSN-46): Rollback changes 2022-02-14 11:10:36 -08:00
Nicolas Humbert
90d6556229 ARSN-21 update package version 2022-02-07 18:13:46 +01:00
bert-e
f7802650ee Merge branch 'feature/ARSN-21/UpgradeToNode16' into q/7.4 2022-02-07 17:06:51 +00:00
Nicolas Humbert
d0684396b6 S3C-5450 log is not accurate anymore 2022-02-04 10:45:48 +01:00
Naren
9b9a8660d9 bf: ARSN-57 log correct client ip
Check the request header 'x-forwarded-for' if there is no request
configuration.
2022-01-28 17:03:47 -08:00
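A minimal sketch of the lookup this describes, with illustrative names (the config parameter and helper are assumptions, not Arsenal's actual API):

    // Prefer the configured source; fall back to the
    // 'x-forwarded-for' header, then to the socket address.
    function getClientIp(request, requestConfig) {
        const forwarded = request.headers['x-forwarded-for'];
        if (!requestConfig && forwarded) {
            // the header may hold a list: "client, proxy1, proxy2"
            return forwarded.split(',')[0].trim();
        }
        return request.socket.remoteAddress;
    }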
Ronnie Smith
8c3f304d9b feature: ARSN-21 upgrade to node 16 2022-01-24 14:26:11 -08:00
Ronnie Smith
efb3629eb0 feature: ARSN-54 use a less strict node engine 2022-01-20 15:20:43 -08:00
Ronnie Smith
6733d30439 feature: ARSN-54 revert node 16 2022-01-20 12:18:01 -08:00
bert-e
a1e14fccb1 Merge branch 'improvement/ARSN-21-Upgrade-Node-to-16' into q/7.4 2022-01-20 00:09:23 +00:00
bert-e
030f47a88a Merge branch 'bugfix/ARSN-35/add-http-header-too-large-error' into q/7.4 2022-01-19 00:48:15 +00:00
Taylor McKinnon
fc7711cca2 impr(ARSN-46): Add isAborted flag 2022-01-13 13:51:18 -08:00
Ronnie Smith
3919808d14 feature: ARSN-21 resolve broken tests 2022-01-11 14:18:56 -08:00
Dimitri Bourreau
b1dea67eef tests: ARSN-21 remove timeout 5500 from package.json script test
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:21:36 +01:00
Dimitri Bourreau
c3196181c1 chore: ARSN-21 add ioctl as optional dependency
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:20:14 +01:00
Dimitri Bourreau
c24ad4f887 chore: ARSN-21 remove ioctl
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:15:33 +01:00
Dimitri Bourreau
ad1c623c80 chore: ARSN-21 GitHub Actions run unit tests without --silent
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:14:08 +01:00
Dimitri Bourreau
9d81cad0aa tests: ARSN-21 update ws._server.connections with _connections
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-10 02:03:12 +01:00
Dimitri Bourreau
5f72738b7f improvement: ARSN-21 upgrade uuid from 3.3.2 to 3.4.0
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-09 00:38:07 +01:00
Dimitri Bourreau
70278f86ab improvement: ARSN-21 upgrade dependencies with yarn upgrade-interactive
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-07 14:35:33 +01:00
Dimitri Bourreau
083dd7454a improvement: ARSN-21 GitHub Actions should use Node 16 instead of 10
Signed-off-by: Dimitri Bourreau <contact@dimitribourreau.me>
2021-12-07 11:50:16 +01:00
Jonathan Gramain
5ce057a498 ARSN-42 bump version to 7.4.13 2021-11-18 18:19:59 -08:00
Jonathan Gramain
8c3f88e233 improvement: ARSN-42 get/set ObjectMD.nullUploadId
Add getNullUploadId/setNullUploadId helpers to ObjectMD, to store the
null version uploadId, so that it can be passed to the metadata layer
as "replayId" when deleting the null version from another master key
2021-11-18 14:16:19 -08:00
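A sketch of what such chainable getter/setter helpers on ObjectMD typically look like (simplified; the real model holds many more fields):

    class ObjectMD {
        constructor() {
            this._data = {}; // internal metadata blob
        }
        setNullUploadId(nullUploadId) {
            this._data.nullUploadId = nullUploadId;
            return this; // chainable, like other ObjectMD setters
        }
        getNullUploadId() {
            return this._data.nullUploadId;
        }
    }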
Jonathan Gramain
04581abbf6 ARSN-38 bump arsenal version 2021-11-03 15:45:30 -07:00
Jonathan Gramain
abfbe90a57 feature: ARSN-38 introduce replay prefix hidden in listings
- Add a new DB prefix for replay keys, similar to existing v1 vformat
  prefixes

- Hide this prefix for v0 listing algos DelimiterMaster and
  DelimiterVersions: skip keys beginning with this prefix, and update
  the "skipping" value to be able to skip the entire prefix after the
  streak length is reached (similar to how regular prefixes are
  skipped)

- fix an existing unit test in DelimiterVersions
2021-11-02 12:01:28 -07:00
Jonathan Gramain
b1c9474159 feature: ARSN-37 ObjectMD getUploadId/setUploadId
Add getter/setter for the "uploadId" field, used for MPUs in progress.
2021-11-01 17:25:57 -07:00
Ilke
8e8d771a64 bugfix: ARSN-35 add http header too large error 2021-10-29 20:17:42 -07:00
Rahul Padigela
f941132c8a chore: update version 2021-10-26 14:47:21 -07:00
Rahul Padigela
2246a9fbdc bugfix: ARSN-31 return empty string for invalid requests
This returns an empty string for invalid encoding requests: for
example, when duplicate query params in an HTTP URL are parsed by the
Node.js HTTP parser, they are converted into an Array, which breaks
the encoding method.
2021-10-25 16:59:09 -07:00
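The guard this describes boils down to a type check before encoding; a sketch (the helper name is illustrative):

    // Node's query parser turns duplicate params into an Array, so a
    // non-string value means the request is invalid for encoding.
    function safeEncode(value) {
        if (typeof value !== 'string') {
            return '';
        }
        return encodeURIComponent(value);
    }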
Rahul Padigela
86270d8495 test: test for invalid type for encoding strings 2021-10-25 16:59:03 -07:00
Thomas Carmet
4b08dd5263 ARSN-20 migrate to github actions
Co-authored-by: Ronnie <halfpint1170@gmail.com>
2021-09-23 11:37:04 -07:00
Thomas Carmet
36f6ca47e9 ARSN-17 align package.json with releases 2021-08-31 09:55:21 -07:00
Jonathan Gramain
c495ecacb0 feature: ARSN-12 bump arsenal version
Needed to ensure proper dependency update in Vault
2021-08-26 14:21:10 -07:00
anurag4DSB
8603ca5b99 feature: ARSN-12-introduce-cond-put-op
(cherry picked from commit f101a0f3a0)
2021-08-25 23:03:58 +02:00
Thomas Carmet
ef6197250c ARSN-11 update werelogs to tagged version 2021-08-12 10:03:26 -07:00
Ronnie Smith
836c65e91e bugfix: S3C-3810 Skip headers on 304 response 2021-07-30 15:24:31 -07:00
bert-e
ffbe46edfb Merge branch 'bugfix/S3C-4257_StartSeqCanBeNull' into q/7.4 2021-06-08 08:18:01 +00:00
Ronnie Smith
3ed07317e5 bugfix: S3C-4257 Start Seq can be null
* Return undefined if start seq is falsy
2021-06-07 19:49:13 -07:00
bert-e
0487a18623 Merge branch 'improvement/S3C-4336_add_BucketInfoModelVersion' into q/7.4 2021-05-10 20:18:35 +00:00
Taylor McKinnon
a4ccb94978 impr(S3C-4336): Add BucketInfoModelVersion.md from cloudserver 2021-05-10 13:01:46 -07:00
Ronnie Smith
3098fcf1e1 feature: S3C-4073 Add probe server to index 2021-05-06 21:16:48 -07:00
Ronnie Smith
41b3babc69 feature: S3C-4073 Add new probe server
* JsDocs for arsenal error
* ProbeServer as a replacement to HealthProbeServer
2021-04-30 12:53:38 -07:00
bert-e
403d9b5a08 Merge branch 'bugfix/S3C-4275-versionListingWithDelimiterInefficiency' into q/7.4 2021-04-14 01:17:37 +00:00
Jonathan Gramain
ecaf9f843a bugfix: S3C-4275 enable skip-scan for DelimiterVersions with a delimiter
Enable the skip-scan optimization to work for DelimiterVersions
listing algorithm when used with a delimiter.

For this to work, instead of returning FILTER_ACCEPT when encountering
a version that matches the master key (which resets the skip-scan
counter), return FILTER_SKIP to let the skip-scan counter increment
and eventually skip the entire listed common prefix after 100 entries.
2021-04-09 16:33:50 -07:00
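A sketch of the skip-scan contract this relies on (constant values and shapes are illustrative):

    const FILTER_SKIP = 0;   // caller may skip ahead
    const FILTER_ACCEPT = 1; // entry kept; resets the skip streak
    const MAX_STREAK = 100;  // threshold to jump over a whole prefix

    function scan(entries, filter) {
        let streak = 0;
        for (const entry of entries) {
            if (filter(entry) === FILTER_SKIP) {
                streak += 1;
                if (streak >= MAX_STREAK) {
                    // the caller repositions its cursor past the common
                    // prefix instead of scanning entry by entry
                    break;
                }
            } else {
                streak = 0;
            }
        }
    }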
Jonathan Gramain
3506fd9f4e bugfix: S3C-4275 more DelimiterVersions unit tests
Increase coverage for the DelimiterVersions listing algorithm to put
it on par with DelimiterMaster before attempting a fix: most existing
tests from DelimiterMaster have been copied and adapted to fit the
DelimiterVersions logic.
2021-04-09 16:32:15 -07:00
Ronnie Smith
d533bc4e0f Merge branch 'development/7.4' into feature/S3C-4262_BackportZenkoMetrics 2021-04-06 02:41:34 -07:00
Jonathan Gramain
c6976e996e build(deps-dev): Bump mocha from 2.5.3 to 8.0.1
Clean remaining references in a few test suites to have mocha not hang
after tests complete, since mocha 4+ does not force exit anymore if
there are active references.

Ref: https://boneskull.com/mocha-v4-nears-release/#mochawontforceexit
2021-04-02 11:48:27 -07:00
Ronnie Smith
1584c4acb1 feature S3C-4262 Backport zenko metrics 2021-04-01 20:03:39 -07:00
dependabot[bot]
f1345ec2ed build(deps-dev): Bump mocha from 2.5.3 to 8.0.1
Bumps [mocha](https://github.com/mochajs/mocha) from 2.5.3 to 8.0.1.
- [Release notes](https://github.com/mochajs/mocha/releases)
- [Changelog](https://github.com/mochajs/mocha/blob/master/CHANGELOG.md)
- [Commits](https://github.com/mochajs/mocha/compare/v2.5.3...v8.0.1)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-30 15:55:18 -07:00
alexandre merle
f17006b91e bugfix: S3C-3962: considering zero size as valid in stream response 2021-02-09 13:44:05 +01:00
alexandre merle
b3080e9ac6 S3C-3904: match api method with real aws s3 api call 2021-02-05 18:36:48 +01:00
alexandre merle
9484366844 bugfix: S3C-3904: better-s3-action-logs
Introduce a map meant to override default
actionMap values for S3; it will be used in logs
to monitor the S3 actions instead of the IAM
permissions needed for those actions.
2021-02-05 02:09:08 +01:00
alexandre merle
7358bd10f8 bugfix: S3C-2201: econnreset rest client keep alive
Use agentkeepalive to avoid ECONNRESET on client sockets; more info
in S3C-3114.

Fixes https://scality.atlassian.net/browse/S3C-2201
2021-01-25 20:26:25 +01:00
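A minimal sketch of wiring agentkeepalive into an HTTP client (option values are illustrative; option names per agentkeepalive v4):

    const http = require('http');
    const Agent = require('agentkeepalive');

    // Free sockets are recycled by the agent before the server's
    // keep-alive timeout fires, avoiding ECONNRESET on reuse.
    const keepaliveAgent = new Agent({
        freeSocketTimeout: 4000, // must stay below the server timeout
    });

    http.get({ host: 'localhost', port: 8000, agent: keepaliveAgent });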
Ilke
38f851e30e bf: S3C-3425 parse client ip 2020-12-17 09:22:54 -08:00
Rahul Padigela
1ee4a610fc improvement: S3C-3653 add server ip, port fields 2020-12-01 23:03:33 -08:00
Dora Korpar
8dfe60a1a7 Make action maps utility file 2020-11-13 15:47:14 -08:00
Dora Korpar
c08a6f69e0 imprv: S3C-3475 add s3 actions in logs 2020-11-11 18:52:41 -08:00
Jonathan Gramain
918a1d7c89 bugfix: S3C-3388 constants for HTTP connection timeouts
Add constants related to HTTP client/server connection timeouts with
values avoiding ECONNRESET errors due to the server closing
connections that clients are attempting to reuse at the same moment.
2020-10-15 12:17:00 -07:00
Jonathan Gramain
15140cd6bb bugfix: S3C-3388 network.http.Server.setKeepAliveTimeout()
Add a helper function to set the keep-alive timeout of the Node.js
HTTP server managed by the Server class.
2020-10-14 19:09:31 -07:00
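A sketch of such a helper, assuming the wrapped node server is held in a private field:

    class Server {
        setKeepAliveTimeout(keepAliveTimeout) {
            this._keepAliveTimeout = keepAliveTimeout;
            if (this._server) {
                // Node's http.Server exposes this setting directly
                this._server.keepAliveTimeout = keepAliveTimeout;
            }
            return this;
        }
    }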
Jonathan Gramain
0d328d18d1 bugfix: S3C-3402 remove wrong error log
Remove the error log 'rejected secure connection' emitted when client
certificate checks are disabled in the HTTPS server: the connection is
accepted even though the client is not authenticated, and the client
is still allowed to query the server.
2020-10-08 13:47:58 -07:00
bert-e
459839cb8a Merge branch 'dependabot/npm_and_yarn/development/7.4/lolex-6.0.0' into q/7.4 2020-07-21 00:44:29 +00:00
Jonathan Gramain
35f43b880e deps: replace lolex with latest version of @sinonjs/fake-timers
The lolex project has been renamed, hence use the new name.

Fix usage in unit tests to reflect the newest API.
2020-07-20 15:38:21 -07:00
dependabot[bot]
ffc632034d build(deps): Bump debug from 2.3.3 to 2.6.9
Bumps [debug](https://github.com/visionmedia/debug) from 2.3.3 to 2.6.9.
- [Release notes](https://github.com/visionmedia/debug/releases)
- [Changelog](https://github.com/visionmedia/debug/blob/2.6.9/CHANGELOG.md)
- [Commits](https://github.com/visionmedia/debug/compare/2.3.3...2.6.9)

Signed-off-by: dependabot[bot] <support@github.com>
2020-07-01 19:38:18 +00:00
bert-e
efdffd6b99 Merge branch 'dependabot/npm_and_yarn/development/7.4/ajv-6.12.2' into q/7.4 2020-07-01 19:33:58 +00:00
Jonathan Gramain
9ded1d2051 build(deps): ajv dep bump: updates for compatibility with version 6
- Run migration tool on userPolicySchema.json to json-schema draft-06:
  `ajv migrate -s userPolicySchema.json`

- add a call to addMetaSchema() now needed to load the meta-schema of
  draft-06
2020-07-01 11:22:57 -07:00
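The addMetaSchema() call mentioned above looks roughly like this with ajv 6:

    const Ajv = require('ajv');

    const ajv = new Ajv();
    // ajv 6 defaults to draft-07; schemas migrated to draft-06 need
    // their meta-schema registered explicitly.
    ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json'));
    const validate = ajv.compile(require('./userPolicySchema.json'));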
dependabot[bot]
310599249d build(deps-dev): Bump lolex from 1.5.2 to 6.0.0
Bumps [lolex](https://github.com/sinonjs/lolex) from 1.5.2 to 6.0.0.
- [Release notes](https://github.com/sinonjs/lolex/releases)
- [Changelog](https://github.com/sinonjs/fake-timers/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sinonjs/lolex/compare/v1.5.2...v6.0.0)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-30 20:14:18 +00:00
bert-e
f9dafb1f6b Merge branch 'dependabot/npm_and_yarn/development/7.4/temp-0.9.1' into q/7.4 2020-06-30 20:09:43 +00:00
dependabot[bot]
2943a1ebe8 build(deps): Bump ajv from 4.10.0 to 6.12.2
Bumps [ajv](https://github.com/epoberezkin/ajv) from 4.10.0 to 6.12.2.
- [Release notes](https://github.com/epoberezkin/ajv/releases)
- [Commits](https://github.com/epoberezkin/ajv/compare/4.10.0...v6.12.2)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-30 20:01:06 +00:00
dependabot[bot]
88c133b90a build(deps-dev): Bump temp from 0.8.3 to 0.9.1
Bumps [temp](https://github.com/bruce/node-temp) from 0.8.3 to 0.9.1.
- [Release notes](https://github.com/bruce/node-temp/releases)
- [Commits](https://github.com/bruce/node-temp/compare/v0.8.3...v0.9.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-30 20:00:55 +00:00
bert-e
5a50da6d90 Merge branch 'dependabot/npm_and_yarn/development/7.4/ipaddr.js-1.9.1' into q/7.4 2020-06-30 19:59:54 +00:00
dependabot[bot]
64390da174 build(deps): Bump socket.io from 1.7.4 to 2.3.0
Bumps [socket.io](https://github.com/socketio/socket.io) from 1.7.4 to 2.3.0.
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Commits](https://github.com/socketio/socket.io/compare/1.7.4...2.3.0)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-29 19:18:49 -07:00
bert-e
c5055d4e72 Merge branch 'dependabot/npm_and_yarn/development/7.4/socket.io-client-2.3.0' into q/7.4 2020-06-30 00:53:53 +00:00
bert-e
7b4a295d8a Merge branch 'dependabot/npm_and_yarn/development/7.4/simple-glob-0.2.0' into q/7.4 2020-06-30 00:52:17 +00:00
dependabot[bot]
60751e1363 build(deps): Bump ipaddr.js from 1.2.0 to 1.9.1
Bumps [ipaddr.js](https://github.com/whitequark/ipaddr.js) from 1.2.0 to 1.9.1.
- [Release notes](https://github.com/whitequark/ipaddr.js/releases)
- [Commits](https://github.com/whitequark/ipaddr.js/compare/v1.2.0...v1.9.1)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-29 03:30:22 +00:00
dependabot[bot]
58b44556f6 build(deps): Bump socket.io-client from 1.7.4 to 2.3.0
Bumps [socket.io-client](https://github.com/Automattic/socket.io-client) from 1.7.4 to 2.3.0.
- [Release notes](https://github.com/Automattic/socket.io-client/releases)
- [Commits](https://github.com/Automattic/socket.io-client/compare/1.7.4...2.3.0)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-29 03:30:06 +00:00
dependabot[bot]
2aa4a9b5aa build(deps): Bump simple-glob from 0.1.1 to 0.2.0
Bumps [simple-glob](https://github.com/jedmao/simple-glob) from 0.1.1 to 0.2.0.
- [Release notes](https://github.com/jedmao/simple-glob/releases)
- [Changelog](https://github.com/jedmao/simple-glob/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jedmao/simple-glob/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-29 03:29:48 +00:00
dependabot[bot]
59cc006882 build(deps): Bump xml2js from 0.4.19 to 0.4.23
Bumps [xml2js](https://github.com/Leonidas-from-XIV/node-xml2js) from 0.4.19 to 0.4.23.
- [Release notes](https://github.com/Leonidas-from-XIV/node-xml2js/releases)
- [Commits](https://github.com/Leonidas-from-XIV/node-xml2js/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2020-06-29 03:29:45 +00:00
Rahul Padigela
82b6017180 feature: add dependabot config file 2020-06-28 20:11:07 -07:00
Jonathan Gramain
3d064b9003 bugfix: S3C-2987 helper to get stream data as a JSON payload
Add a new helper function to get data from a stream as a JSON payload,
optionally validated against a joi schema.

Note: uses async/await, so updated the scality/Guidelines dependency
to please the linter
2020-06-24 17:09:27 -07:00
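A sketch of such a helper, assuming hypothetical names and joi's schema.validate() API (the real signature may differ):

    // Buffer a readable stream, JSON-parse it, and optionally validate
    // the result against a joi schema. Requires Node 10+ for async
    // iteration over streams.
    async function streamToJSON(stream, joiSchema) {
        const chunks = [];
        for await (const chunk of stream) {
            chunks.push(chunk);
        }
        const payload = JSON.parse(Buffer.concat(chunks).toString());
        if (joiSchema) {
            const { error, value } = joiSchema.validate(payload);
            if (error) {
                throw error;
            }
            return value;
        }
        return payload;
    }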
bert-e
6c53c023b8 Merge branch 'bugfix/S3C-2987-add-v0v1-vFormat' into q/7.4 2020-06-17 20:31:03 +00:00
Anurag Mittal
2b23c0d559 improvement: S3C-3044-add-audit-log-fields 2020-06-15 15:19:10 +02:00
Jonathan Gramain
709d1e3884 bugfix: S3C-2987 add v0v1 versioning key format 2020-06-03 17:28:10 -07:00
Jonathan Gramain
5f66ee992a bugfix: S3C-2899 handle MergeStream.destroy()
Make sure MergeStream destroys the non-ended input streams when
destroy() is called
2020-05-29 00:46:12 -07:00
bert-e
badaa8599b Merge branch 'bugfix/S3C-2899-vformatV1delimiterVersions' into q/7.4 2020-05-21 22:39:43 +00:00
bert-e
cd9bdcfa61 Merge branch 'bugfix/S3C-2899-vformatV1delimiterMaster' into q/7.4 2020-05-20 22:39:26 +00:00
Jonathan Gramain
d66d9245b9 bugfix: S3C-2899 implement v1 format for DelimiterVersions listing
Implement the v1 versioning key format for DelimiterVersions listing
method, in addition to v0.

Enhance existing unit tests to check the result of getMDParams()
2020-05-19 16:45:27 -07:00
Jonathan Gramain
fb89b4e683 bugfix: S3C-2899 support v1 in Delimiter, DelimiterMaster
The two listing methods Delimiter and DelimiterMaster now support v1
versioning key format in addition to v0.

Modify the listing algo classes to support buckets in v1 versioning
key format, in addition to v0.

Enhance existing unit tests to check the result of getMDParams()
2020-05-19 16:45:09 -07:00
Jonathan Gramain
1bda8559bc bugfix: S3C-2899 support vFormat v1 for MPU listing
Support listing MPUs stored with versioning key format v1
2020-05-19 16:44:42 -07:00
Jonathan Gramain
19dc603fe3 bugfix: S3C-2899 helper for v1 genMDParams() of master keys listing
New helper function to convert listing params from v0 to v1, when a
listing of master keys is requested. This logic is shared between
DelimiterMaster and MPU listing, hence a shared helper is useful.

Also, update the test function performListing to prepare for v1
testing of listing algos, by adding the vFormat parameter. Also check
that getMDParams() returns a valid object to enhance coverage.
2020-05-19 16:44:07 -07:00
Jonathan Gramain
cf4d90877f bugfix: S3C-2899 pass vFormat to listing params
Add an optional "vFormat" param to constructors of listing algo
classes, to specify the versioning key format used by the bucket to
list. Currently only v0 is supported.

Code cleanups done in the listing classes to prepare support for the
v1 format.
2020-05-15 23:33:08 -07:00
Jonathan Gramain
bf43c8498d bugfix: S3C-2899 update eslint-config-scality hash
This is to benefit from the longer line length now allowed (80 -> 120).
2020-05-15 23:29:59 -07:00
Jonathan Gramain
7a8437c30e bugfix: S3C-2899 tooling class to merge two sorted streams
Create class MergeStream to merge two sorted readable streams into one
readable stream, given a comparison function.

This class is used to implement listing in bucket versioning key
format v1, that requires listing master keys and version keys
synchronously.
2020-05-12 17:15:41 -07:00
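A condensed sketch of the merge logic (the real MergeStream is a proper Readable subclass with backpressure handling; streams in Node 10+ are async-iterable, which lets this skip the plumbing):

    async function* mergeSorted(a, b, cmp) {
        const ita = a[Symbol.asyncIterator]();
        const itb = b[Symbol.asyncIterator]();
        let [ra, rb] = await Promise.all([ita.next(), itb.next()]);
        while (!ra.done && !rb.done) {
            if (cmp(ra.value, rb.value) <= 0) {
                yield ra.value;
                ra = await ita.next();
            } else {
                yield rb.value;
                rb = await itb.next();
            }
        }
        // drain whichever input is not yet exhausted
        while (!ra.done) { yield ra.value; ra = await ita.next(); }
        while (!rb.done) { yield rb.value; rb = await itb.next(); }
    }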
Jonathan Gramain
4c3b4d1012 bugfix: S3C-2899 add constants to support versioning key formats
- add constant prefixes for master and version keys

- add versioning key format version numbers

Those constants will be shared between listing logic (in Arsenal) and
put/get/etc. logic (in Metadata), hence they need to live in Arsenal.
2020-05-11 15:27:08 -07:00
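A sketch of the kind of shared constants this describes (the prefix characters here are placeholders, not necessarily the actual values):

    // versioning key format version numbers
    const BucketVersioningKeyFormat = {
        v0: 'v0',
        v1: 'v1',
    };

    // v1 DB prefixes separating master keys from version keys
    const DbPrefixes = {
        Master: '\x7fM',
        Version: '\x7fV',
    };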
Jonathan Gramain
bbfc32e67e bugfix: S3C-2726 remove some default attributes from ObjectMD
Remove "nullVersionId", "isNull" and "isDeleteMarker" default values
from ObjectMD model, instead of the previous '' (empty string) default
value that was incorrect and could cause an issue by misinterpreting
the empty "nullVersionId" as an actual null version ID.
2020-04-21 14:23:45 -07:00
Ronnie Smith
5f6dda1aa1 bugfix: Remove tag regex to allow utf8 characters 2020-04-14 12:44:36 -07:00
Anurag Mittal
e1e2a4964a bugfix: S3C-2604-handle-multiple-specific-resources 2020-02-24 16:20:42 +01:00
Dora Korpar
0008b7989f bf: S3C-2502 move ip util to arsenal 2020-01-23 11:49:59 -08:00
Jonathan Gramain
d03f2d9ed8 bugfix: S3C-2541 LRU cache implementation
Add a generic implementation of a memory cache with a
least-recently-used eviction strategy, to be used to limit the number
of bucket info entries cached in repd process memory.
2019-12-20 16:13:44 -08:00
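A compact sketch of an LRU cache built on Map's insertion order (illustrative, not necessarily Arsenal's exact class):

    class LRUCache {
        constructor(maxEntries) {
            this._maxEntries = maxEntries;
            this._map = new Map(); // Map preserves insertion order
        }
        get(key) {
            if (!this._map.has(key)) {
                return undefined;
            }
            // re-insert to mark the entry as most recently used
            const value = this._map.get(key);
            this._map.delete(key);
            this._map.set(key, value);
            return value;
        }
        set(key, value) {
            if (this._map.has(key)) {
                this._map.delete(key);
            } else if (this._map.size >= this._maxEntries) {
                // evict the least recently used entry (first in order)
                this._map.delete(this._map.keys().next().value);
            }
            this._map.set(key, value);
        }
    }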
naren-scality
eb9559cb18 bugfix: S3C-2269 ArnMatch validation correction 2019-10-08 12:37:26 -07:00
naren-scality
a7b6fc8fb8 bugfix: S3C-1805 Bucket name validations corrected to support consecutive hyphens 2019-10-03 15:06:05 -07:00
Katherine Laue
f8bf038b81 improvement/S3C-2352 install yarn frozen lockfile 2019-08-08 11:09:21 -07:00
Katherine Laue
ae626b22ce Merge remote-tracking branch 'origin/development/7.4' into HEAD 2019-07-30 11:27:38 -07:00
Rahul Padigela
1d4bb01e1e improvement: S3C-2351 update joi to @hapi/joi 2019-07-29 15:46:13 -07:00
Katherine Laue
0e2a79cad3 improvement:S3C-2352-switch testing framework to yarn 2019-07-29 15:39:14 -07:00
Rahul Padigela
ce08806aea improvement: increase the limit of num. of allowed tags
This increases the limit on the number of allowed tags on an object
from 10 to 50, to be in line and retain compatibility with AWS S3.
2019-07-26 15:51:57 -07:00
anurag4dsb
470f38f7f9 bugfix: S3C-2335 Data Server closeSync 2019-07-17 16:12:43 -07:00
Rahul Padigela
9f2e74ec69 test: S3C-2127 skip versioning util test 2019-06-27 15:58:54 -07:00
Rahul Padigela
9894b88e5f improvement: S3C-2127 fix callback deprecation 2019-06-27 15:58:20 -07:00
Rahul Padigela
54f6a7aa42 improvement: S3C-2127 update packages for nodejs upgrade 2019-06-27 15:57:47 -07:00
Rahul Padigela
30ccf9a398 bugfix: S3C-2172 change error message for compatibility
When a delete bucket request is sent with an invalid bucket name
the server returns NoSuchBucket instead of InvalidBucketName error
to be compatible with AWS S3.
2019-05-22 16:54:10 -07:00
Jianqin Wang
bfb4a3034a S3C-2034: bump ioredis version to 4.9.5 to use redis 5.0 func 2019-05-20 14:44:14 -07:00
bert-e
c30250539f Merge branches 'w/7.4/feature/S3C-2002-admin-service' and 'q/722/7.4.3/feature/S3C-2002-admin-service' into tmp/octopus/q/7.4 2019-03-08 00:17:05 +00:00
bert-e
0eaae2bb2a Merge branch 'feature/S3C-2002-admin-service' into q/7.4.3 2019-03-08 00:17:05 +00:00
bert-e
436cb5109a Merge branch 'feature/S3C-2002-admin-service' into tmp/octopus/w/7.4/feature/S3C-2002-admin-service 2019-03-07 19:29:20 +00:00
Rahul Padigela
7a60ad9c21 feature: S3C-2002 introduce metadata service policy
This allows creating and assigning policies to users to access
metadata proxy over cloudsever.
2019-03-07 11:28:40 -08:00
Rahul Padigela
53d0ad38b8 bugfix: S3C-2017 remove CI badges
This commit removes the CI badges that are no longer active. It also
helps mitigate a corner-case bug in Bert-E.
2019-03-07 10:14:23 -08:00
Jonathan Gramain
63d4e3c3f5 bugfix: S3C-2006 fix crash in vault-md on listing
Listed entries may either be objects { value[, key] } or plain strings
for key-only listings. Fix this by checking the filtered entry type
before calling trimMetadata().
2019-03-01 11:22:35 -08:00
Jonathan Gramain
15b0d05493 S3C-1985: heuristic to limit metadata trim to large blob
Add a heuristic to only trim large metadata blobs of more than
10KB. This should limit the number of JSON (de)serializations to
only those blobs where we can hope for a significant reduction in size.

Also renamed "filterContent" to "trimMetadata"
2019-02-19 11:48:53 -08:00
David Pineau
9f544b2409 S3C-1985: Fix tests after memory consumption fix
In the associated issue's memory consumption fix, a warning is logged
in case of an unparseable entry from the listing.

This broke the unit tests for some specific listing algorithms, as they
had never taken a logger as constructor parameter, as opposed to their
base class.

Looking at the usage of these classes in the known client code
(CloudServer & Metadata), the logger was actually always provided,
which means we could make use of it and accept this forgotten
parameter.

Fixes tests for S3C-1985; fixes a silent prototype usage mismatch.
2019-02-19 15:45:39 +01:00
David Pineau
fcdbff62cc S3C-1985: Reduce memory usage of listing algorithms
The issue discovered in the field was that a key with a heavy amount
of data could exhaust the memory available to Node.js JavaScript code.
In the case of bucket handling, that would be MPU objects with a large
number of parts, listed in the "location" field.

A customer case revealed and triggered this corner case, loop-crashing
the processes responsible for listing the database, which were using
the Arsenal.algos.list.* extensions.

This commit fixes that issue by trimming the known heavy unused data:
the "location" field of the bucket's values.
Note that as the code previously didn't need to parse the value before
forwarding it back to the caller, we are now parsing it, removing the
unwanted fields, and re-stringifying it for each selected entry in the
listing.

A notable impact of this change is that CPU usage might go up
substantially while listing the contents of any bucket.
Additionally, for safety purposes, if the data cannot be parsed and
altered, it will be returned as-is, at the risk of keeping the
memory-consuming behavior in that corner case, while a warning is
logged on stdout.

Fixes S3C-1985
2019-02-19 15:22:54 +01:00
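Combining the two commits above, the trim logic is roughly as follows (threshold value and logging shape are illustrative):

    const TRIM_METADATA_MIN_BLOB_SIZE = 10000; // ~10KB heuristic

    function trimMetadata(value, logger) {
        if (typeof value !== 'string'
            || value.length < TRIM_METADATA_MIN_BLOB_SIZE) {
            return value; // small blobs are not worth re-serializing
        }
        try {
            const parsed = JSON.parse(value);
            delete parsed.location; // the known heavy, unused field
            return JSON.stringify(parsed);
        } catch (err) {
            // unparseable entry: warn and return as-is, at the risk of
            // keeping the memory-heavy behavior for this corner case
            logger.warn('failed to trim metadata entry', { error: err });
            return value;
        }
    }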
anurag4dsb
6edf027459 ft: S3C-1561 - add quotas to request context 2019-01-24 13:09:05 -08:00
Dora Korpar
90476ea9fd chore: increment package.json 2018-11-15 16:07:09 -08:00
Dora Korpar
b28a9fcec9 bf: S3C-1678-ipv6-check 2018-11-15 16:07:09 -08:00
Bennett Buchanan
89873b4c02 backport: S3C-1640 CRR retry feature 2018-10-19 11:32:56 -07:00
Rahul Padigela
879823c428 improvement: bump Arsenal version 2018-10-15 14:51:23 -07:00
Dora Korpar
0604c9daff ft: S3C 1171 list objects v2 2018-09-25 12:33:55 -07:00
Jeremy Desanlis
7290208a20 MD-661, ZENKO-945: fix delimiterMaster::filter
The return values of the delimiterMaster::filter function are used by
its client, metadata back-end like the mongoClient or Metadata, to
implement a skipScan mechanism: after a number of consecutive SKIP
return values, the clients changes their dataset to the next key range.

This algo, by allowing to skip values unwanted in the results
efficiently, gives good performance to master version listing.

The previous algo was broken, preventing it client to perform the
skipScan: it returns ACCEPT for versions, reseting the SKIP counter of
clients.

This commit changes the return values of this function to allow
delimiter clients to use the skipScan mechanism.
2018-08-22 11:10:14 -07:00
Jeremy Desanlis
eb2aef6064 ZENKO-945: delimiterMaster, add a deleteMarker test
The delimiterMaster::filter function will be modified. Add this test
to make sure the upcoming modification does not change the behavior of
this function.
2018-08-20 16:39:07 -07:00
David Pineau
6736508364 Merge remote-tracking branch 'origin/development/6.4' into development/7.4 2018-06-28 18:48:59 +02:00
David Pineau
c6292fcfe1 [Workflow] First branching: Use commit hashes instead of tags for dependencies 2018-06-28 18:48:17 +02:00
David Pineau
059dc71235 Merge remote-tracking branch 'origin/development/6.4' into development/7.4 2018-06-27 18:35:39 +02:00
David Pineau
41c272d7b1 [Workflow] Use tags instead of branches for dependencies 2018-06-27 18:34:36 +02:00
JianqinWang
dea1df2ee6 ft: list raft session buckets 2018-06-26 17:20:07 -07:00
alexandre-merle
d9bf780296 Merge pull request #450 from scality/fix/node-engine
FIX: Node engine
2018-03-26 13:23:16 +02:00
Alexandre Merle
ab701e1f33 feature: Update node version
Update node version to 6.13.1
2018-03-26 12:52:48 +02:00
Alexandre Merle
0c588da450 FIX: Node engine
Relax the node engine requirement to versions above the current one, to allow upgrades.
2018-03-26 12:38:25 +02:00
Rahul Padigela
200df1f50f Merge pull request #444 from scality/fix/statsclient-zero-byte
fix: StatsClient zero byte increment
2018-03-22 14:56:47 -07:00
philipyoo
d311ca61bc fix: StatsClient zero byte increment 2018-03-15 11:40:18 -07:00
ironman-machine
62289d388b merge #440 2018-03-13 04:49:57 +00:00
Dora Korpar
832fbb024e bf: minor lifecycle fixes 2018-03-12 19:06:06 -07:00
ironman-machine
449bf1a4f5 merge #431 2018-03-12 20:00:18 +00:00
Dora Korpar
8cd4601f55 bf: abortmpu days parsing 2018-03-12 20:00:18 +00:00
ironman-machine
94e15a8030 merge #423 2018-03-06 20:34:13 +00:00
Dora Korpar
417e316076 bf: fix get lifecycle xml 2018-03-06 20:34:13 +00:00
Rahul Padigela
eb56ed6192 Merge pull request #430 from scality/fwdport/7.2-7.4
Fwdport/7.2 7.4
2018-02-21 23:05:10 +05:30
Rahul Padigela
47ed80113f Merge remote-tracking branch 'origin/rel/7.2' into fwdport/7.2-7.4 2018-02-21 18:10:57 +05:30
ironman-machine
9d5d63a58a merge #424 2018-02-20 14:59:32 +00:00
ironman-machine
6e929c64bd merge #429 2018-02-20 13:12:21 +00:00
Flavien Lebarbe
0af6abf565 S3C-1026: S3 consumes too much tcp memory
retrieveData rework:
Use the proven 6.4 code with an eachSeries, replacing the recursion.
Isolate Azure in a separate function.
2018-02-20 17:20:06 +05:30
Alexandre Merle
2c83a05fd0 Merge remote-tracking branch 'origin/rel/7.2' into fwd/7.2-7.4 2018-02-20 12:03:14 +01:00
David Pineau
e3318ad7d5 Merge pull request #428 from scality/fwd/6.4-7.2
Fwd/6.4 7.2
2018-02-20 12:02:23 +01:00
Alexandre Merle
4becaac072 Merge remote-tracking branch 'origin/rel/6.4' into fwd/6.4-7.2 2018-02-20 11:58:56 +01:00
David Pineau
6ff44ece1f Merge pull request #426 from scality/fix/wrong-parameter-encode-uri
FIX: Wrong parameter encode url
2018-02-19 16:09:29 +01:00
Alexandre Merle
a72af2b7d1 FIX: Wrong parameter encode url
Fix a wrong argument, encoding the '/' instead of not encoding it
2018-02-19 11:48:03 +01:00
David Pineau
d6522c1a2d Merge pull request #422 from scality/EVE-817/addPensieveCredsTest
ft(test): EVE-817 add pensieveCreds tests
2018-02-13 17:42:23 +01:00
ironman-machine
5e3b5b9eb0 merge #421 2018-02-13 16:46:27 +00:00
Thibault Riviere
9d832ba2e8 ft(test): EVE-817 add pensieveCreds tests
Work from tcarmet, just adding it
2018-02-13 17:34:16 +01:00
Thomas Carmet
5b2ce43348 Merge remote-tracking branch 'origin/rel/7.2' into feature/EVE-817/7.4/setup-eve-pipeline 2018-02-13 10:42:42 +01:00
ThibaultRiviere
9fb1cc901c Merge pull request #418 from scality/feature/EVE-817/7.2/setup-eve-pipeline
FWD: EVE Pipeline on 7.2
2018-02-13 10:36:26 +01:00
Thomas Carmet
98b866cdd8 Merge remote-tracking branch 'origin/rel/6.4' into feature/EVE-817/7.2/setup-eve-pipeline 2018-02-12 17:41:18 +01:00
alexandre-merle
b6c051df89 Merge pull request #415 from scality/fwd/7.2-7.4
Fwd/7.2 7.4
2018-02-11 05:02:14 +01:00
ironman-machine
506bef141b merge #416 2018-02-09 19:30:53 +00:00
Alexandre Merle
b3e9cbf7ff Revert "bf: close/end readable/response streams on errors"
This reverts commit ba593850b9.
2018-02-09 18:40:37 +01:00
ironman-machine
76a036c73d merge #392 2018-02-09 14:13:25 +00:00
Rahul Padigela
ba593850b9 bf: close/end readable/response streams on errors
This fixes the leakage of sockets in CLOSE_WAIT state by closing the streams
and destroying the sockets when the client has abruptly closed the connection.

Upstream requests to Azure/AWS need to be aborted in the
AzureClient/AWSClient implementations. Currently the azure-storage
module doesn't have a clear way of aborting a request.
2018-02-09 14:13:25 +00:00
Alexandre Merle
d5202aec91 Merge remote-tracking branch 'origin/rel/7.2' into fwd/7.2-7.4 2018-02-08 22:04:29 +01:00
alexandre-merle
face851f94 Merge pull request #414 from scality/fwd/6.4-7.2
Fwd/6.4 7.2
2018-02-08 21:45:42 +01:00
Alexandre Merle
e5fe7075dd Merge remote-tracking branch 'origin/rel/6.4' into fwd/6.4-7.2 2018-02-08 16:33:40 +01:00
David Pineau
71e5a5776e Merge pull request #406 from scality/feature/S3C-1245-deps
S3C-1245 update dependencies
2018-02-05 14:43:13 +01:00
Anne Harper
fb1df3ec46 S3C-1245 update dependencies: version trick 2018-02-05 14:14:21 +01:00
Anne Harper
58c0578451 S3C-1245 update dependencies 2018-02-05 11:57:43 +01:00
David Pineau
c20a594061 Merge pull request #404 from scality/fwdport_7.2_to_master
Fwdport 7.2 to rel/7.4 (formerly master)
2018-02-02 15:49:51 +01:00
Thibault Riviere
0ff9d77eec Merge remote-tracking branch 'origin/rel/7.2' into fwdport_7.2_to_master 2018-01-31 14:26:29 +01:00
ironman-machine
d0c8aeb398 merge #398 2018-01-31 00:57:26 +00:00
Dora Korpar
6354123f0f ft: delete bucket lifecycle 2018-01-30 13:25:05 -08:00
Bennett Buchanan
b4d04ce1f5 Merge pull request #397 from scality/ft/S3C-1156-get-bucket-lifecycle
Ft/s3 c 1156 get bucket lifecycle
2018-01-30 13:21:51 -08:00
Dora Korpar
0df78fe030 ft: get bucket lifecycle 2018-01-29 14:23:36 -08:00
ironman-machine
84b2673814 merge #389 2018-01-29 21:57:18 +00:00
Dora Korpar
d28269cbf5 ft: Put bucket lifecycle 2018-01-29 10:35:16 -08:00
Lauren Spiegel
90c85c7dc7 Merge pull request #402 from scality/ft/bumpVersion
ft: bump version
2018-01-16 16:46:42 -08:00
Rahul Padigela
0a137be794 ft: bump version 2018-01-16 16:42:32 -08:00
ironman-machine
58a29072e6 merge #379 2018-01-16 22:50:02 +00:00
ironman-machine
251bd0fa42 merge #401 2018-01-16 19:42:02 +00:00
Thibault Riviere
ed99e5b903 ft(health): add Too many requests error 2018-01-16 15:54:00 +01:00
ironman-machine
048f9bf54c merge #396 2018-01-15 21:04:21 +00:00
Lauren Spiegel
0a2b66ec34 FT: Executable creator for credentials 2018-01-15 10:43:36 -08:00
Lauren Spiegel
1f5e71ba3b FT: Allow certain capitalized buckets 2018-01-15 10:13:38 -08:00
Lauren Spiegel
43cb132638 FT: Add getAttributes method
This is needed to validate search queries.
2018-01-15 10:13:38 -08:00
Rahul Padigela
e95bf801c5 Merge pull request #399 from scality/revert/fix/createSigTool
Revert "FIX: v4 signing tool"
2018-01-12 15:19:19 -08:00
Rahul Padigela
0e780fae7e Revert "FIX: v4 signing tool"
This reverts commit 526dcf4148.
2018-01-12 15:13:07 -08:00
Rahul Padigela
79e60c5bcb Merge pull request #394 from scality/fix/S3C-1144/enforce-two-roles-when-scality-endpoint
FIX: Two roles when scality replication endpoint
2018-01-12 14:45:54 -08:00
Bennett Buchanan
19e8bd11bd FIX: Two roles when scality replication endpoint 2018-01-12 13:47:15 -08:00
ironman-machine
c4ea3bf9a4 merge #395 2018-01-12 21:27:37 +00:00
Lauren Spiegel
526dcf4148 FIX: v4 signing tool 2018-01-11 17:12:01 -08:00
Rahul Padigela
0e5547197a Merge pull request #391 from scality/ft/S3C-1144/object-md-multiple-replica-versionIds
FT: Add multiple replication site version IDs
2018-01-10 09:19:12 -08:00
Bennett Buchanan
0eed5840bf FT: Single replication endpoint without default 2018-01-09 23:06:53 -08:00
Bennett Buchanan
0a06ec4cba FT: Add multiple replication site version IDs 2018-01-09 16:25:52 -08:00
Rahul Padigela
280c447a6f Merge pull request #390 from scality/ft/S3C-1144/replication-config-multiple-storage-classes
FT: Support multiple replication storage classes
2018-01-09 15:14:59 -08:00
Bennett Buchanan
2c1bf72cc6 FT: Support multiple replication storage classes 2018-01-08 12:12:13 -08:00
Jonathan Gramain
69922222b4 bf: stream response from getRaftLog()
Adapt LogConsumer.readRecords() to use the stream returned by the modified
BucketClient.getRaftLog() function. That allows end-to-end streaming,
hence supporting arbitrary-sized responses, and should avoid
toString() exceptions and excessive memory consumption.
2018-01-02 15:57:36 -08:00
Rahul Padigela
77c9ed6c5d Merge pull request #387 from scality/fwdport/7.2-master
Fwdport/7.2 master
2017-12-21 11:02:13 -08:00
Rahul Padigela
36e0d84f56 Merge remote-tracking branch 'origin/rel/7.2' into fwdport/7.2-master 2017-12-19 16:53:41 -08:00
ThibaultRiviere
f6706ca7db Merge pull request #386 from scality/forward/rel/7.1
Forward port from rel/7.1 to rel/7.2
2017-12-19 15:38:05 +01:00
ironman-machine
a31d38e1b5 merge #376 2017-12-16 01:45:18 +00:00
Alexander Chan
8d3f247b5d FT: Modifies V2 auth funcs to handle GCP
This feature modifies the V2 functions to handle signing Google Cloud
Storage requests.
+Adds a conditional argument, clientType, to specify the
headers/querystring handling method. Uses 'GCP' only when clientType ===
'GCP'; the default is 'AWS'.
+Adds additional tests to accompany these changes.
2017-12-14 09:45:24 -08:00
Rahul Padigela
4e51246b43 Merge pull request #385 from scality/rf/azure-hacky-subpart-trick
rf: avoid hacky trick to store number subparts
2017-12-13 15:08:28 -08:00
Electra Chong
4c282c42f9 rf: avoid hacky trick to store number subparts 2017-12-13 10:24:44 -08:00
ironman-machine
7158882b63 merge #369 2017-12-12 20:20:50 +00:00
philipyoo
7ee8391833 ft: extend StatsClient to increase by given amount 2017-12-12 10:25:00 -08:00
ironman-machine
a8ef4d5a22 merge #377 2017-12-12 03:46:31 +00:00
Guillaume Gimenez
56fdb5c511 ft: S3C-1065: Null part generator
This provides a new class, NullStream, that will be used to read null parts.
(null parts are parts with null keys which are generated by the NFS server when
growing a file with truncate)
2017-12-11 15:58:53 -08:00
Electra Chong
2a490f8a70 Merge pull request #384 from scality/revert/7.2-getService-cors
Revert/7.2 get service cors [S3C-1009]
2017-12-07 16:43:02 -08:00
Nicolas Humbert
0fde855c37 Merge remote-tracking branch 'origin/rel/7.1' into forward/rel/7.1 2017-12-07 16:27:56 -08:00
ironman-machine
4434e8a9ee merge #380 2017-12-06 22:47:01 +00:00
jeremyds
bdac98700a Merge pull request #383 from scality/S3C-1103-readonly
S3C-1101 FT: add read only backend error
2017-12-06 08:05:14 -08:00
Electra Chong
6f7d964dda revert: remove support for CORS for getService 2017-12-05 11:57:21 -08:00
Jeremy Desanlis
377539f977 S3C-1103 FT: add read only backend error
The feature to make the CDMI backend read-only requires a dedicated
Arsenal error. This new error will be thrown by all the data and
metadata callbacks that perform write access.
2017-12-04 09:01:33 -08:00
Nicolas Humbert
ad498fdb77 Merge remote-tracking branch 'origin/rel/7.0' into forward/rel/7.0 2017-11-30 11:37:22 -08:00
ironman-machine
c25c0884dc merge #378 2017-11-28 19:20:53 +00:00
Nicolas Humbert
607df9840b Merge remote-tracking branch 'origin/rel/6.4' into forward/rel/6.4 2017-11-27 11:08:25 -08:00
Rahul Padigela
78cbf36d7d Merge pull request #375 from scality/ft/S3C-1115/export-for-gcp
FT: Export constructStringtoSignV2 Module
2017-11-21 12:52:42 -08:00
ironman-machine
91fd9086d6 merge #373 2017-11-21 19:44:01 +00:00
Alexander Chan
42125aa7be FT: Export constructStringtoSignV2 Module
This feature will export the constructStringtoSignV2 module for use with
the Google Cloud Storage backend.
2017-11-21 10:34:57 -08:00
ThibaultRiviere
1ac024fca0 Merge pull request #372 from scality/fix/dependencies
fix(deps): use the 7.2 dependencies
2017-11-21 16:07:31 +01:00
ThibaultRiviere
2bbac71fad Merge pull request #372 from scality/fix/dependencies
fix(deps): use the 7.2 dependencies
2017-11-21 16:06:02 +01:00
Thibault Riviere
adad816b3a fix(deps): use the 7.2 dependencies 2017-11-20 12:03:52 +01:00
ironman-machine
2eee4fb6fe merge #367 2017-11-06 22:42:39 +00:00
Electra Chong
71db93185f fix: undefined stream.destroy call
To end streaming in case of error, we were calling an unofficial method of the stream API which was removed and does not exist in the version of Node we use. The method is officially re-added in Node v8, but until we upgrade we need to destroy the streams manually: by pushing null for readables and calling stream.end() for writables.
2017-11-06 13:16:09 -08:00
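A sketch of the manual teardown this describes, for Node versions predating an official stream.destroy() (the duck-typing check is illustrative):

    function endStreamOnError(stream) {
        if (typeof stream.push === 'function') {
            stream.push(null); // readable: signal end-of-stream
        } else if (typeof stream.end === 'function') {
            stream.end();      // writable: finish the stream
        }
    }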
ironman-machine
1270412d4b merge #365 2017-11-03 19:40:15 +00:00
Jonathan Gramain
a10c674f68 bf: convert contentLength to number
Make sure contentLength is converted to a number because it might
be a string.
2017-11-03 10:51:47 -07:00
ironman-machine
44800cf175 merge #366 2017-11-02 07:11:50 +00:00
Jonathan Gramain
51a4146876 Merge branch 'bf/S3C-1040-locations-max-issue' into fwd/bf/S3C-1040-locations-max-issue-master 2017-11-01 16:28:57 -07:00
Jonathan Gramain
3c54bd740f bf: sanity check on returned content-length
In responseStreamData() helper, ensure the content-length is correct
with respect to data locations aggregated size. Log an error and
return internal error to the client if not.

This should catch off-by-one errors when computing ranges of data
locations to fetch and return.
2017-10-31 00:03:28 -07:00
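The check amounts to comparing aggregated location sizes against the expected content length; a sketch with an assumed location shape:

    // Each data location is assumed to carry a byte size.
    function checkContentLength(locations, contentLength, logger) {
        const total = locations.reduce((sum, loc) => sum + loc.size, 0);
        if (total !== Number(contentLength)) {
            logger.error('content-length mismatch in data locations',
                { total, contentLength });
            return false; // caller replies with an internal error
        }
        return true;
    }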
Rahul Padigela
563bbfcb8b Merge pull request #364 from scality/ft/S3C-972/add-datastoreVersionId-to-replicationInfo
FT: Add dataStoreVersionId to replicationInfo
2017-10-26 16:34:22 -07:00
Bennett Buchanan
4942fab225 FT: Add dataStoreVersionId to replicationInfo 2017-10-25 17:36:57 -07:00
Rahul Padigela
91a828805b Merge pull request #359 from scality/ft/objmd-datastore-version-id
ft: add getter for dataStoreVersionId
2017-10-24 23:02:44 -07:00
Rahul Padigela
eb9b60c0ef Merge pull request #363 from scality/ft/no-need-action-mapping
FT: No need action mapping
2017-10-24 23:02:06 -07:00
alexandremerle
66acfbbab4 FT: No need action mapping
We currently use an action mapping with API methods; allow the option
to send the action directly, so that Arsenal does not have to be
modified for each new managed action.

Needed for https://scality.atlassian.net/browse/MD-292
2017-10-25 03:53:32 +02:00
Electra Chong
efe8ed76ba ft: add getter for dataStoreVersionId 2017-10-24 16:43:32 -07:00
Rahul Padigela
dad0d456d3 Merge pull request #361 from scality/fwd/rel/7.1
FWD: rel/7.1 to master
2017-10-23 15:49:42 -07:00
Rahul Padigela
4601794d49 Merge pull request #362 from scality/ft/versionUp
fix: update release version to alter npm cache
2017-10-23 11:25:32 -07:00
Rahul Padigela
36157fb688 ft: update release version to alter npm cache 2017-10-23 11:22:37 -07:00
Bennett Buchanan
639374522d Merge remote-tracking branch 'origin/rel/7.1' into fwd/rel/7.1 2017-10-20 17:33:53 -07:00
Rahul Padigela
2499ce7277 Merge pull request #357 from scality/ft/S3C-983-reverseProxyAuth
ft: support auth with proxy paths
2017-10-20 12:00:46 -07:00
Rahul Padigela
0bab4069cd Merge pull request #355 from scality/ft/S3C-760-ObjectMDStorageClassGetter
ft: add getters to storageClass/storageType
2017-10-20 12:00:20 -07:00
Rahul Padigela
03f82ea891 Merge pull request #360 from scality/fwd/rel/7.0
FIX: Check only defined replication rule IDs
2017-10-20 11:59:47 -07:00
Rahul Padigela
5949e12ffc Merge pull request #350 from scality/bf/catch-azureclient-error
Fix: Wrap Azure calls with try/catch
2017-10-20 11:59:05 -07:00
Lauren Spiegel
7ca3c0515a Merge pull request #346 from scality/ft/S3C-938-refactorAuthLoading
rf: improve zenko authentication data loading
2017-10-19 17:35:14 -07:00
Rahul Padigela
711d64d5f1 ft: support auth with proxy paths
This commit adds support for sending authenticated requests to a
server through a reverse proxy. The key change is that the signature
is calculated using the path that the final server (S3/Vault) receives.
2017-10-19 13:51:24 -07:00
Rahul Padigela
b22d12009a Merge pull request #356 from scality/rf/S3C-981-s3ConstantsForBackbeatEcho
rf: move a couple S3 constants to Arsenal
2017-10-19 13:50:56 -07:00
Bennett Buchanan
5dc752c6a9 FIX: Check only defined replication rule IDs 2017-10-19 10:51:54 -07:00
Jonathan Gramain
50a90d2b41 rf: improve zenko authentication data loading
This is to prepare for loading service account credentials deployed
with docker stack (or kubernetes), which will deploy individual
service account info as docker secrets in separate files (e.g. one
file for backbeat, one for clueso).

Use a new AuthLoader class to load authentication data. It's now
possible to load authentication data from multiple files, by setting
the S3AUTH_CONFIG environment variable to one or more file glob
patterns instead of a single file name. A plain file path is also a
valid glob, hence still supported naturally.

The format of authentication data in JSON has changed in the following
ways:

 - use proper ARNs for accounts. This breaks compatibility with
   previous examples of authdata.json - which may be in use by
   existing users relying on volumes - because there are too many
   compatibility quirks for it to be worth supporting the old format.
   A detailed log warning is shown instead, so that such users can
   quickly convert or fix their existing file to the new format.

 - use the joi module to validate the format of authdata.json instead
   of ad-hoc code, resulting in smaller code and deeper validation.

 - drop account users support in authdata.json: since there's no
   policy support, top-level account support is sufficient. Log a
   detailed error message if a users array is found in an otherwise
   valid account.
2017-10-18 15:40:00 -07:00
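A sketch of the multi-file glob loading this describes (using the glob module's sync API; the shape of the data and the validation are simplified):

    const fs = require('fs');
    const glob = require('glob');

    // S3AUTH_CONFIG may hold one or more glob patterns; a plain file
    // path is itself a valid glob, so both cases work the same way.
    function loadAuthData(globPatterns) {
        const accounts = [];
        [].concat(globPatterns).forEach(pattern => {
            glob.sync(pattern).forEach(filePath => {
                const data = JSON.parse(fs.readFileSync(filePath, 'utf8'));
                // the real code validates each file against a joi schema
                accounts.push(...data.accounts);
            });
        });
        return { accounts };
    }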
Dora Korpar
a1ce222a87 fix: Azure client error handling 2017-10-18 09:40:20 -07:00
Jonathan Gramain
a77bf3126d rf: move a couple S3 constants to Arsenal
They are needed for backbeat echo mode.
2017-10-17 17:09:49 -07:00
ironman-machine
300769dda6 merge #354 2017-10-17 23:55:59 +00:00
Jonathan Gramain
a65c554f64 ft: add getters to storageClass/storageType
These getters were originally in backbeat QueueEntry class, move them
to the ObjectMD model.
2017-10-17 16:49:07 -07:00
philipyoo
894d41a30b chore: move stats and redisclient from s3
Moving StatsClient and RedisClient from S3 repo to reuse classes.
2017-10-17 09:00:07 -07:00
Rahul Padigela
673da3de99 Merge pull request #326 from scality/ft/S3C-760-objectMDImportFromBlob
ft: metadata import capability in ObjectMD
2017-10-16 21:21:37 -07:00
Jonathan Gramain
0f535cc26a ft: metadata import capability in ObjectMD
The ObjectMD model class is now able to import metadata from a stored
blob and convert it to the latest version internally.

Add support for backbeat replication needs in ObjectMD, with a
new ObjectMDLocation class with helpers to manipulate a single data
location.

This class should also give a cleaner way to import and manipulate
object metadata in S3.

Note: removed 'version' parameter from ObjectMD constructor since it's
handled internally by the class and should not be exposed anymore.
2017-10-16 18:32:18 -07:00
ironman-machine
b1447906dd merge #351 2017-10-14 00:12:50 +00:00
mvaude
d7e4e3b7aa fix wrong condition in checkArnMatch 2017-10-13 15:30:58 +02:00
mvaude
b445a8487b MD-286 - add iam:policyArn condition key 2017-10-13 15:30:50 +02:00
ironman-machine
f5ad8b5428 merge #349 2017-10-10 19:05:37 +00:00
Rahul Padigela
af460a0939 Merge pull request #347 from scality/ft/S3C-972/add-storageType-to-replicationInfo
FT: Add storageType to replicationInfo
2017-10-10 10:51:19 -07:00
Bennett Buchanan
8cf3d091cb FT: Add storageType to replicationInfo 2017-10-09 18:44:04 -07:00
Lauren Spiegel
575c59bf2c Merge remote-tracking branch 'origin/rel/7.0' into forward7.0to7.1 2017-10-06 15:32:17 -07:00
Rahul Padigela
28c2492e50 Merge pull request #348 from scality/forward/6.4to7.0
Forward/6.4to7.0
2017-10-06 11:31:34 -07:00
Lauren Spiegel
1312b4d2e9 Merge remote-tracking branch 'origin/rel/6.4' into forward/6.4to7.0 2017-10-05 16:07:24 -07:00
ironman-machine
1f77deab61 merge #345 2017-10-05 19:39:37 +00:00
Jonathan Gramain
96823f0a06 ft: service account identification support
Add support in AuthInfo for service accounts, designated by a
canonical ID starting with the service accounts namespace URL (newly
defined). We can ask whether the authenticated account is a service
account in general, or a particular one by name.
2017-10-04 17:24:30 -07:00
ironman-machine
41a823a57e merge #333 2017-09-28 23:38:44 +00:00
ironman-machine
deb3bf3981 merge #343 2017-09-28 19:40:21 +00:00
Lauren Spiegel
09d6c7e5ae S3C-919 FT: Enhanced Invalid URI log 2017-09-27 12:01:09 -07:00
ironman-machine
a8cc170fdb merge #342 2017-09-25 17:58:38 +00:00
Lauren Spiegel
c7eb4c8e26 Fix: S3C-905 Quiet health check request logs 2017-09-22 10:14:19 -07:00
Jonathan Gramain
b31bc06e63 ft: ARN object model class
Will be used by backbeat for roles, and potentially can be used for other
components that manipulate ARNs.
2017-09-21 11:17:05 -07:00
Rahul Padigela
46d703de6b Merge pull request #341 from scality/ft/update-mpuUtils
ft: update mpu utils with optimizations
2017-09-19 16:46:06 -07:00
ironman-machine
bb3e63ea17 merge #337 2017-09-19 23:37:26 +00:00
Electra Chong
5d466e01b3 chore: extract convertToXml methods from s3 2017-09-19 15:31:34 -07:00
Electra Chong
7cbdac5f52 ft: update mpu utils with optimizations 2017-09-19 13:51:38 -07:00
philipyoo
1e2d9be8f7 chore: reflect new eslint changes
fix prefer-spread
fix space-unary-ops
fix no-prototype-builtins
fix no-useless-escape,operator-assignment
fix no-lonely-if, no-tabs
fix indent-legacy
fix no-restricted-globals

Based on the difference between `Number.isNaN()` and `isNaN()`:
`isNaN()` checks whether the passed value is not a number
or cannot be converted into a Number, while
`Number.isNaN()` only checks whether the value is equal to `NaN`.

To keep behavior identical, the changed files replicate the original
`isNaN()` semantics.
2017-09-18 18:05:58 -07:00
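The difference in one example:

    isNaN('foo');        // true: 'foo' cannot be converted to a Number
    Number.isNaN('foo'); // false: 'foo' is not the NaN value itself
    Number.isNaN(NaN);   // true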
Electra Chong
e89395c428 Merge pull request #340 from scality/ft/S3C-878/allowCorsForGetService
Allow CORS for Get Service
2017-09-15 17:21:03 -07:00
Vianney Rancurel
58ac3abe1a ft: allow CORS requests on Get Service
- allow OPTIONS for get service
- return corsHeaders for get service
2017-09-15 17:16:54 -07:00
ironman-machine
c22b937fe5 merge #332 2017-09-14 20:33:47 +00:00
Jonathan Gramain
4c1fa030bf ft: RoundRobin default port option
When a default port is provided to the constructor, set it on returned
host objects when the bootstrap list entry does not specify a port.
2017-09-14 12:19:07 -07:00
Rahul Padigela
7e2676f635 Merge pull request #336 from scality/clean/edit-gitignore
clean: Ignore log files like npm-debug.log
2017-09-14 12:18:14 -07:00
ironman-machine
e5cf9b1aec merge #338 2017-09-14 19:06:04 +00:00
Alexandre Merle
d1e7f05c7d MD-7: fix resource account id
Fixing resource account id by using the target account instead of
the requester one

See https://scality.atlassian.net/browse/MD-7
2017-09-14 14:02:06 +02:00
ironman-machine
4323bfaab0 merge #335 2017-09-14 01:00:05 +00:00
Alexandre Merle
9d9d21127c MD-69: Condition checking
Check conditions for principal evaluation

Fixes https://scality.atlassian.net/browse/MD-69
2017-09-14 00:49:24 +02:00
ironman-machine
dd9df1745c merge #331 2017-09-13 22:47:53 +00:00
Alexandre Merle
2fcf728d38 MD-5: Principal evaluation
Allow evaluating the principal field of policies for IAM roles

See https://scality.atlassian.net/browse/MD-5
2017-09-13 22:47:53 +00:00
Lauren Spiegel
e9993ed64e Merge pull request #334 from scality/ft/azure-putpart-utils
ft: put putPart utils in s3middleware
2017-09-12 11:29:30 -07:00
Dora Korpar
012e281366 ft: put putPart utils in s3middleware 2017-09-11 12:05:45 -07:00
philipyoo
0d62d5a161 clean: Ignore log files like npm-debug.log 2017-09-11 08:58:16 -07:00
Rahul Padigela
cc5dad3e83 Merge pull request #315 from scality/fix/return-logProxy-in-MetaFileClient
FT: Return logProxy from openRecordLog
2017-08-30 16:50:47 -07:00
Bennett Buchanan
62f2accc5c FT: Return logProxy from openRecordLog 2017-08-30 15:50:21 -07:00
ironman-machine
6ad2af98cd merge #330 2017-08-30 00:51:02 +00:00
Dora Korpar
286a599ae8 ft: add azure putpart utils 2017-08-29 13:54:01 -07:00
ironman-machine
ce834fffd7 merge #329 2017-08-26 02:15:08 +00:00
Electra Chong
7ff8b4dc29 ft: blacklist object prefixes 2017-08-25 17:48:48 -07:00
ironman-machine
21490a518f merge #328 2017-08-23 19:11:48 +00:00
Rached Ben Mustapha
68b2815859 Add MetadataFileServer.rawListKeys() 2017-08-23 19:11:48 +00:00
ironman-machine
3afe967cc9 merge #327 2017-08-23 07:23:09 +00:00
Jonathan Gramain
c3d419037c ft: log uid from sent X-Scal-Request-Uids header
Backbeat will send this header to provide S3 routes with the request
uid set while processing an entry.
2017-08-21 16:35:46 -07:00
Rahul Padigela
04ac2d2259 Merge pull request #325 from scality/fix/allow-configurable-storage-class
FIX: Allow configurable storageClass
2017-08-10 17:07:16 -07:00
Bennett Buchanan
2e725a579f FIX: Allow configurable storageClass 2017-08-09 22:55:06 -07:00
Rahul Padigela
74e3d17f5d Merge pull request #320 from scality/ft/S3C-704-roundRobinHelperFix
bf: force pickNextHost() to pick all hosts in turn
2017-08-09 14:19:12 -07:00
Rahul Padigela
e93197771f Merge pull request #324 from scality/fix/updateReplicationConfigFormat
fix: update replication configuration format
2017-08-09 12:23:57 -07:00
Rahul Padigela
0662b7c4b8 fix: update replication configuration format
This commit updates the format of the expected replication config
from S3 from name/endpoint to site/servers to support the new
bootstrap list changes.
2017-08-09 11:14:06 -07:00
alexandre-merle
cdde9c77a9 Merge pull request #323 from scality/fwd/7.0-to-master
Fwd/7.0 to master
2017-08-09 17:05:38 +02:00
Alexandre Merle
b85765fe15 Merge remote-tracking branch 'origin/rel/7.0' into fwd/7.0-to-master 2017-08-09 15:57:47 +02:00
alexandre-merle
f732734d86 Merge pull request #322 from scality/fix/rel/7.0
fix 7.0
2017-08-09 14:50:36 +02:00
Alexandre Merle
280d82eaf4 fix 7.0 2017-08-09 14:16:34 +02:00
Jonathan Gramain
e755c595c7 bf: force pickNextHost() to pick all hosts in turn
Previously the shuffling could return the same host, or at least fail
to go through all hosts once in turn. Fix this by not shuffling the
hosts array and doing round-robin before returning the next host.
2017-08-08 18:56:30 -07:00
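The core of the fix is a plain index rotation instead of shuffling; roughly:

    class RoundRobin {
        constructor(hostsList) {
            this._hostsList = hostsList;
            this._index = 0;
        }
        pickNextHost() {
            // walk the unshuffled list so every host is returned
            // exactly once per cycle
            const host = this._hostsList[this._index];
            this._index = (this._index + 1) % this._hostsList.length;
            return host;
        }
    }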
Rahul Padigela
ebfc2e5a08 Merge pull request #317 from scality/ft/S3C-704-roundRobinHelper
ft: round-robin helper class in network/utils/RoundRobin
2017-08-08 18:27:57 -07:00
Rahul Padigela
4d4c268a59 Merge pull request #316 from scality/S3C-586/check-scope-policy
S3C-586: [policies] Allow sso scope checking
2017-08-08 18:27:24 -07:00
Jonathan Gramain
a3fac5f9d7 ft: round-robin helper class in network/utils/RoundRobin
This is meant to be a generic round-robin manager (aka. bootstrap
list) to contact hosts in turn.

Blacklisting should be implemented in a future iteration.
2017-08-08 16:44:46 -07:00
Alexandre Merle
987da167ce FT: [policies] Allow sso scope checking
Allow Vault SSO to run the policy evaluator

See https://scality.atlassian.net/browse/S3C-586
2017-08-08 16:01:27 -07:00
David Pineau
fa6a021e40 Merge pull request #314 from scality/S3C-703/Fix-Werelogs-config-handling-design
S3C-703: Update werelogs config handling design
2017-08-08 20:09:45 +02:00
David Pineau
d5af432060 Adapt logging configuration handling to S3C-703
Also update documentation to be more concise about logger types.
2017-08-07 11:05:55 +02:00
Rahul Padigela
065aa904ca Merge pull request #313 from scality/ft/extract-md5Sum
ft: extract MD5Sum util
2017-08-04 16:44:37 -07:00
Electra Chong
99f1a3be4d ft: extract MD5Sum util 2017-08-04 16:31:55 -07:00
Rahul Padigela
f6d12274e5 Merge pull request #302 from scality/ft/disk-usage
Ft/disk usage
2017-07-31 12:16:22 -07:00
Rahul Padigela
586413e48f Merge pull request #312 from scality/ft/refresh-in-mem-auth
Allow refreshing in-mem auth data
2017-07-31 12:15:56 -07:00
Rached Ben Mustapha
6ab50c5f1a Implement getDiskUsage in Data server 2017-07-28 15:43:04 -07:00
Rached Ben Mustapha
a1e83c824e Implement getDiskUsage in MD server 2017-07-28 14:45:27 -07:00
Rached Ben Mustapha
570c3273ff Depend on diskusage 2017-07-28 14:44:28 -07:00
Rached Ben Mustapha
40c2db9727 Allow refreshing in-mem auth data 2017-07-28 14:21:05 -07:00
Rahul Padigela
2de4d6b7a0 Merge pull request #311 from scality/ft/azure/get
FIX for Azure GET in S3
2017-07-28 14:04:52 -07:00
Nicolas Humbert
24e59f5ff1 FIX for Azure GET in S3 2017-07-27 13:41:57 -07:00
Rahul Padigela
cfcf916d77 Merge pull request #310 from scality/ft/addParamToObjMD
Add dataStoreName attribute to ObjectMD
2017-07-25 17:13:25 -07:00
Dora Korpar
328aaec373 ft: Add dataStoreName attribute to ObjectMD 2017-07-25 15:57:33 -07:00
Rahul Padigela
96f79d8d4d Merge pull request #309 from scality/ft/S3C-635/support-mpu-for-big-files
FT: Add InvalidPartNumber error
2017-07-24 18:25:14 -07:00
Rahul Padigela
e6c6d75f4c Merge pull request #307 from scality/dev/update-models
update models
2017-07-24 18:16:09 -07:00
Jeremy Desanlis
d1b12cf579 ft: update models
Because I did not know the merge process in the Arsenal repository, I
merged a PR moving code from S3 to Arsenal too early. The problem is
that the S3 PR removing these models is not yet merged, and some
modifications have been made to this code in S3, causing the previous
S3 PR to fail.

This commit updates the models code and tests in Arsenal according to
what has been done in S3.
In the S3 repository, the ReplicationConfiguration source file required
the S3 config. Because that is not possible in the Arsenal repository,
the config is now a constructor parameter.
2017-07-24 18:06:52 -07:00
Rahul Padigela
6112ad8a77 Merge pull request #253 from scality/ft/S3C-301-recordLogAPI
Ft/s3 c 301 record log api
2017-07-24 17:57:04 -07:00
Bennett Buchanan
ea91113f6e FT: Add InvalidPartNumber error 2017-07-24 13:44:10 -07:00
Rahul Padigela
c21600c2ac Merge pull request #308 from scality/dev/delimiterTools-exportPath
DelimiterTools export path
2017-07-24 11:19:06 -07:00
Jeremy Desanlis
e18e184868 DelimiterTools export path
Export DelimiterTools outside the "algorithms.list" path, which
contains only delimiter classes.

Furthermore, an S3 unit test expects all the classes exported in this
path to be delimiter classes and iterates on them. Instead of adding a
conditional in the test, give this tool class its own export path.
2017-07-24 19:13:01 +02:00
Jonathan Gramain
f84c004550 ft: record log service
The record log is a metadata daemon service to keep a log of changes
on the main database. The changes are recorded atomically with the
main database operations, and can be queried using the readRecords API
call.

To enable logging of metadata operations, set the "recordLog.enabled"
option to true in config.json.
2017-07-21 15:59:42 -07:00
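Per the message above, enabling the feature is one option in the metadata daemon's config.json; a minimal snippet, assuming the surrounding file structure (the commit only specifies the "recordLog.enabled" key):

```json
{
    "recordLog": {
        "enabled": true
    }
}
```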
Rahul Padigela
65060d37e8 Merge pull request #298 from scality/dev/delimiter-s3sofs
Dev/delimiter s3sofs
2017-07-21 14:19:47 -07:00
Rahul Padigela
4627c26972 Merge pull request #306 from scality/rf/parameterize-retrieveData
Rf/parameterize retrieve data
2017-07-20 18:36:46 -07:00
Electra Chong
3e3c4952d3 rf: parameterize dataRetrievalFn call for Azure 2017-07-20 18:33:28 -07:00
Rahul Padigela
94ea1148ba Merge pull request #304 from stvngrcia/dev/doc/typo
Fixing typo in README
2017-07-20 11:00:09 -07:00
Jeremy Desanlis
9a6f302d72 export delimiter tools
When using the delimiter module, it is useful to check the filter
method's return value against the filter constants defined in the tools
source file.
This commit exports it as algorithms.list.DelimiterTools.
2017-07-20 14:10:57 +02:00
Jeremy Desanlis
d819b18be3 listing: handle non alphabetical order in delimiter.
For the S3SOFS feature, the listing results are not alphabetically
sorted, so we can't rely on a basic string comparison to know whether
we should skip a value when filtering a listing result.

This commit adds an alphabeticalOrder field to the Delimiter class. It
is set to true by default to avoid breaking anything.
2017-07-20 14:10:57 +02:00
Steven Garcia
9e372ffe50 fix: typo in README
Signed-off-by: Steven Garcia <steven.garcia@holbertonschool.com>
2017-07-19 17:21:50 -07:00
jeremyds
e8380a8786 Merge pull request #299 from scality/dev/models
Add Bucket and ObjecTMD models in Arsenal
2017-07-19 10:59:43 +02:00
Rahul Padigela
58ed9cb835 Merge pull request #303 from scality/fix/response-content-headers
bf: pull response content headers from query
2017-07-17 18:24:42 -07:00
Electra Chong
8fd19e83bd bf: pull response content headers from query
We used to pull them from the headers object, but they are supposed to be specified as query parameters.

It worked because of an accidental side effect with V2 auth where we assigned query parameters to request.headers. But this meant that getting the response content headers was failing with v4 auth.
2017-07-17 17:02:51 -07:00
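A sketch of the idea behind the fix, with a hypothetical helper name (the response-content-* override parameters are the standard S3 GET Object query parameters):

```javascript
// Read response overrides from the parsed query string, not from the
// request headers; illustrative code, not the actual Arsenal fix.
function getResponseHeadersFromQuery(query) {
    const overrides = {};
    [
        'response-content-type',
        'response-content-language',
        'response-expires',
        'response-cache-control',
        'response-content-disposition',
        'response-content-encoding',
    ].forEach(param => {
        if (query[param] !== undefined) {
            overrides[param] = query[param];
        }
    });
    return overrides;
}

console.log(getResponseHeadersFromQuery({
    'response-content-type': 'application/json',
}));
```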
Rahul Padigela
90ba85513d ft: add uuid module 2017-07-17 10:06:34 -07:00
Jeremy Desanlis
cbe1c59f73 Add S3 models in Arsenal
These models are needed for the S3SOFS feature in the cdmiclient
repository. The S3 repository does not export modules, and shared ones
live in Arsenal, so move these model modules there. Update the module
requirements accordingly now that these source files are in the Arsenal
repository.
2017-07-17 15:20:50 +02:00
Rahul Padigela
7df2ac30da Merge pull request #295 from scality/rf/s3header-validators
ft: extract s3 conditional header validation
2017-07-14 17:51:12 -07:00
Electra Chong
68fcc73f13 rf: extract s3 conditional header validation
Dependency of scality/S3#807 & scality/Mystique#18
2017-07-14 17:47:52 -07:00
Rahul Padigela
d5bfec0338 Merge pull request #301 from scality/ft/md-get-uuid
Expose UUID through the Metadata service
2017-07-13 16:15:59 -07:00
Rached Ben Mustapha
5054669c49 Cache uuid to avoid sync calls 2017-07-13 10:30:50 -07:00
Rahul Padigela
a874fdfa2e Merge pull request #300 from scality/compat/expect100
AUTH: Handle expect header stripping
2017-07-12 15:35:21 -07:00
Rached Ben Mustapha
e157974744 Expose UUID through the Metadata service 2017-07-12 15:32:17 -07:00
Lauren Spiegel
d4887b74be AUTH: Handle expect header stripping
If a load balancer strips off the expect header but the
expect header was included in the signed headers for v4 auth,
authentication will fail. We add the header value back here since
there is only one specified expect header value.
2017-07-12 11:46:54 -07:00
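A minimal sketch of that idea, assuming a lowercased headers object and the V4 SignedHeaders string (the function name is hypothetical); per RFC 7231, 100-continue is the only defined Expect value, which is why restoring it blindly is safe:

```javascript
// If 'expect' was signed but a load balancer stripped the header,
// put back the single specified value so V4 signature computation
// matches what the client signed. Illustrative, not Arsenal's code.
function restoreExpectHeader(headers, signedHeaders) {
    if (signedHeaders.split(';').includes('expect') &&
        headers.expect === undefined) {
        headers.expect = '100-continue';
    }
    return headers;
}

console.log(restoreExpectHeader({}, 'expect;host;x-amz-date'));
// => { expect: '100-continue' }
```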
Lauren Spiegel
5175c4fc27 Merge pull request #297 from scality/chore/bump-version
chore: bump package version
2017-07-11 15:49:27 -07:00
Electra Chong
ba73b6c668 chore: bump package version
To force ci builds to install new node_modules for previous release.
2017-07-11 15:33:43 -07:00
Rahul Padigela
e951d553a1 Merge pull request #280 from scality/ft/S3C-294-raftLogClient
add raft client on top of bucketclient API
2017-07-10 10:28:44 -07:00
Jonathan Gramain
ec3920b46a ft: add log consumer client for raft
Use this client to fetch logs from bucketd in a way consistent between
MetaData and bucketfile, on top of bucketclient API.
2017-07-07 17:03:29 -07:00
Rahul Padigela
35234db54b Merge pull request #296 from scality/fix/xml-res-content-length
fix: send accurate content-length for error xml
2017-07-07 11:21:55 -07:00
Electra Chong
3068eaca03 fix: send accurate content-length for error xml 2017-07-06 17:14:52 -07:00
Rahul Padigela
f3359a0998 Merge pull request #292 from scality/ft/S3C-350-jsonErrorResponse
ft: add JSON error response support
2017-07-05 17:24:36 -07:00
Rahul Padigela
1cd0ae2fe2 Merge pull request #293 from scality/ft/fixFlakyStringHashTest
test: increase timeout of stringHash test
2017-07-05 17:01:14 -07:00
Jonathan Gramain
c2d555cce1 test: increase timeout of stringHash test
This is an attempt to reduce flakiness on this test, which I have seen
failing regularly due to the 10s timeout being reached in the CI
environment.
2017-07-05 16:45:35 -07:00
Jonathan Gramain
7edab1330c ft: add JSON error response support
In addition to the XML error response, a JSON response will be used by
backbeat routes, because their success responses are in JSON
format. Having JSON as the success format and XML as the error format
confuses the AWS client that is going to be used for backbeat routes,
as it expects, and can be configured for, one format or the other, not
both.
2017-07-05 16:33:12 -07:00
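A sketch of the format switch described here, with made-up field names (the actual response shape is defined by the routes code):

```javascript
// Serialize the same error as S3-style XML or as JSON, depending on
// what the route's client expects.
function errorBody(code, message, format) {
    if (format === 'json') {
        return JSON.stringify({ code, message });
    }
    return '<?xml version="1.0" encoding="UTF-8"?>' +
        `<Error><Code>${code}</Code>` +
        `<Message>${message}</Message></Error>`;
}

// backbeat routes would pick 'json' so success and error responses
// share a single format, keeping the AWS client happy
console.log(errorBody('NoSuchKey',
    'The specified key does not exist.', 'json'));
```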
Lauren Spiegel
c890ddcb34 Merge pull request #289 from scality/ft/taggingUtils
Ft/tagging utils
2017-06-30 15:30:04 -07:00
Lauren Spiegel
47b19b556b FT: Add tagging and escapeForXml utils 2017-06-30 10:25:14 -07:00
Rahul Padigela
2d17f0f924 Merge pull request #290 from scality/fix/check-continue-cb
fix: continue handling req after writeContinue
2017-06-30 07:11:47 -07:00
Electra Chong
e535b84ac8 fix: continue handling req after writeContinue 2017-06-29 18:18:12 -07:00
Rahul Padigela
bc8b415728 Merge pull request #288 from scality/fx/httpserver
FIX: http server + typo
2017-06-29 16:54:20 -07:00
Rahul Padigela
4df3636f85 Merge pull request #284 from scality/port/ft/S3C-350-newObjectReplicateAction
Port/ft/s3 c 350 new object replicate action
2017-06-29 12:01:01 -07:00
Rahul Padigela
69589f9788 Merge pull request #263 from scality/ft/S3C-350-newObjectReplicateAction
ft: add new action type 'objectReplicate'
(cherry picked from commit 9c8acd2176e3791accc9a30aea4696074572edd6)
2017-06-28 16:35:21 -07:00
Jonathan Gramain
b68839da5b ft: add new action type 'objectReplicate'
This action maps to a standard policy action 's3:ReplicateObject', which
will be granted to backbeat on the destination.

(cherry picked from commit c3ff21e3791297488fe401ed47855b623ed9b317)
2017-06-28 16:34:41 -07:00
Nicolas Humbert
30305ffa5d FIX: http server 2017-06-28 16:26:41 -07:00
Rahul Padigela
c941b614b9 Merge pull request #278 from scality/rf/unsupportedCheck
Rf/unsupported check
2017-06-28 11:06:23 -07:00
Electra Chong
184959fca8 rf: move unsupported check to api
Since S3-related projects may diverge on support and rely on different parameters, move the check for unsupported queries and headers to the API.
2017-06-28 10:48:36 -07:00
Rahul Padigela
22a793f744 Merge pull request #286 from scality/port/ft/S3C-447-isMasterKeyHelper
PORT port/ft/S3C-447-is-master-key-helper to master
2017-06-28 10:14:57 -07:00
Rahul Padigela
96e898017b Merge pull request #285 from scality/port/ft/S3C-291/add-replication-route
PORT port/ft/S3C-291/add-replication-route to master
2017-06-28 10:14:23 -07:00
Rahul Padigela
df5d161ceb Merge pull request #266 from scality/ft/S3C-447-isMasterKeyHelper
ft: add isMasterKey() helper for use by backbeat queue populator
(cherry picked from commit c4031a46757a7716cf4a9b50a7a1ea13dd4f7ab4)
2017-06-27 13:51:49 -07:00
Jonathan Gramain
b86951039e ft: isMasterKey() helper
for use by backbeat queue populator

(cherry picked from commit 448efba4c8770002fb5f88ea44c0635b79595155)
2017-06-27 13:51:21 -07:00
Rahul Padigela
c1b051d275 Merge pull request #275 from scality/ft/S3C-291/add-replication-route
FT: Handle replication in GET and DELETE routes
(cherry picked from commit 124e091b070b2009541d9d4e92937d5443f07ed6)
2017-06-27 13:37:17 -07:00
Bennett Buchanan
38de3f0ec8 FT: Handle replication in GET and DELETE routes
(cherry picked from commit 18c33d3daf6015a2f3a5a3593c092c7956e19e29)
2017-06-27 13:36:21 -07:00
Rahul Padigela
58b2eee82c Merge pull request #283 from scality/port/ft/S3C-291-delete-bucket-replication
Port/ft/s3 c 291 delete bucket replication
2017-06-26 18:46:42 -07:00
Rahul Padigela
9b62e8388c Merge commit '739b7f081023a663f95a902316848b302f258577' into port/ft/S3C-291-delete-bucket-replication 2017-06-26 17:48:02 -07:00
Rahul Padigela
24ce223618 Merge pull request #282 from scality/port/ft/rpc-service-rest-api
Port/ft/rpc service rest api
2017-06-26 17:47:01 -07:00
Rahul Padigela
0e47affc0f Merge pull request #262 from scality/ft/S3C-350-versionSpecificPut
ft: add a new mode in versioning put request for backbeat replication
2017-06-26 17:28:46 -07:00
Rahul Padigela
2dc4cefec4 Merge commit '4e61e97e01e50719190f42314cfb1fa3cbf82b24' into port/ft/rpc-service-rest-api 2017-06-26 16:50:56 -07:00
Rahul Padigela
602b770741 Merge pull request #281 from scality/port/S3C-291-get-bucket-replication
Port s3c-291-get-bucket-replication
2017-06-26 16:50:09 -07:00
Jonathan Gramain
be23c28ba5 ft: extend versioning api
When both 'versioning' and 'versionId' options are provided, write a
new version with the specified versionId, and also create or update
the master version like done for new versioned puts.
2017-06-26 15:47:53 -07:00
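A condensed sketch of the decision this adds, assuming hypothetical option and result names (the real logic lives in the versioning request processor):

```javascript
// How a put is handled depending on the 'versioning' and 'versionId'
// options; illustrative only.
function putMode(options) {
    if (options.versioning && options.versionId) {
        // replication case: write the version under the supplied id
        // and also create/update the master version
        return { versionId: options.versionId, updateMaster: true };
    }
    if (options.versioning) {
        // regular versioned put: a new version id is generated
        return { versionId: 'generate-new', updateMaster: true };
    }
    // non-versioned put: only the master key is written
    return { versionId: null, updateMaster: true };
}

console.log(putMode({ versioning: true, versionId: 'v123' }));
```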
Rahul Padigela
5baf004bc0 Merge commit 'ab0fd3af5576949da6d1ea6ab0359ec944270957' into port/ft/S3-REP.1 2017-06-26 15:42:18 -07:00
Rahul Padigela
bd064a4453 Merge pull request #279 from scality/fix/vaultErrorLog
FIX: Error logs
2017-06-26 14:10:59 -07:00
Rahul Padigela
a2ea2d56bd Merge pull request #264 from scality/ft/addsevercheck
FT: adding more server handlers
2017-06-26 14:10:09 -07:00
Lauren Spiegel
6e1c729763 FIX: Error logs 2017-06-26 11:17:59 -07:00
Nicolas Humbert
80ef22b7b3 FT: adding more server handlers 2017-06-23 15:15:28 -07:00
Rahul Padigela
5aaf7fdea4 Merge pull request #276 from scality/rf/extract-s3auth
fix: missing parameter in buildArn call
2017-06-23 12:06:41 -07:00
Electra Chong
95bab6c415 fix: missing parameter in buildArn call 2017-06-23 11:55:06 -07:00
Rahul Padigela
e613d22199 Merge pull request #271 from scality/rf/extract-s3auth
ft: extract s3auth utils
2017-06-22 17:21:17 -07:00
Electra Chong
235d9c615b ft: extract s3auth utils 2017-06-22 14:55:16 -07:00
Rahul Padigela
554ff68124 Merge pull request #274 from scality/cleanup/S3C-349-generalizeInternalRoutes
cleanup: generalize internal routes handling
2017-06-22 14:45:20 -07:00
Jonathan Gramain
b35223eb0e cleanup: generalize internal routes handling
Every URL starting with '/_/' is now routed through a registered internal
service handler in the routing code, instead of being specifically checked
for a particular service. This will be useful for integrating backbeat
routes properly in the next step.
2017-06-22 14:16:14 -07:00
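A toy version of the dispatch scheme, with invented registration helpers (the real routing code differs):

```javascript
const internalHandlers = {};

// internal services register themselves under a name...
function registerInternalService(name, handler) {
    internalHandlers[name] = handler;
}

// ...and any request whose URL starts with '/_/<name>/' is routed to
// the matching handler instead of a hardcoded per-service check
function routeInternal(req, res) {
    if (!req.url.startsWith('/_/')) {
        res.statusCode = 400;
        return res.end();
    }
    const service = req.url.split('/')[2];
    const handler = internalHandlers[service];
    if (!handler) {
        res.statusCode = 404;
        return res.end();
    }
    return handler(req, res);
}

registerInternalService('backbeat',
    (req, res) => res.end('handled by backbeat routes'));
```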
Jonathan Gramain
5e960911fc cleanup: remove unnecessary test in routePUT 2017-06-22 14:16:10 -07:00
Rahul Padigela
b927b8193b Merge pull request #273 from scality/rf/addExtractedUtils
Rf/add extracted utils
2017-06-21 10:43:36 -07:00
Lauren Spiegel
19c1dcbb04 FT: Add s3validator functions from s3 2017-06-20 14:24:30 -07:00
Rahul Padigela
381d4552d1 Merge pull request #268 from scality/rf/extract-s3routes
Rf/extract s3routes [S3C-511]
2017-06-15 18:04:44 -07:00
Electra Chong
a9bb7c12a6 ft: extract routes from s3 2017-06-15 17:47:46 -07:00
Rahul Padigela
d83f2bfdbe Merge pull request #270 from scality/fwdport_6.4_to_master
Fwdport 6.4 to master
2017-06-15 08:16:48 -07:00
Thibault Riviere
0fd0c67f8f Merge remote-tracking branch 'origin/rel/6.4' into fwdport_6.4_to_master 2017-06-14 23:27:11 +02:00
Jonathan Gramain
739b7f0810 Merge remote-tracking branch 'origin/ft/S3C-291-delete-bucket-replication' into ft/S3-REP.1 2017-06-13 14:16:23 -07:00
Jonathan Gramain
4e61e97e01 Merge branch 'ft/rpc-service-rest-api' into ft/S3-REP.1 2017-06-13 14:12:14 -07:00
Bennett Buchanan
c3c3183d7b FT: Add deleteBucketReplication 2017-06-07 12:03:04 -07:00
Rahul Padigela
ab0fd3af55 Merge pull request #261 from scality/S3C-291-get-bucket-replication
FT: Add getBucketReplication
2017-06-07 11:59:45 -07:00
Jonathan Gramain
cfc15328e5 address review comments 2017-06-06 18:14:14 -07:00
Jonathan Gramain
31b3a89b59 ft: add a REST server for metadata RPC calls
This is meant as a convenience for scripting/debugging, using traditional
HTTP tools (curl etc.) rather than a socket.io client. The actual
client implementations should still use the socket.io client.
2017-06-06 14:38:15 -07:00
Rahul Padigela
c925940f76 Merge pull request #256 from scality/bf/dataStoreAPIArsenalErrors
bf: use arsenal errors in data store API
2017-06-06 14:37:14 -07:00
Jonathan Gramain
7eafbbaa80 bf: use arsenal errors in data store API
While adding consistency to error management, the goal is also to be able
to transmit these errors properly to the upper layers (above wrapper.js),
which expect proper arsenal errors, so that specific error types can be
checked (e.g. ObjNotFound).

+ unrelated: minor fix to a trace message in the auth module (not worth
doing a separate PR for this one)
2017-06-06 11:06:17 -07:00
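The point about checkable error types can be illustrated with the boolean flags declared on ArsenalError in index.d.ts (shown further down in this change set); the callback shape here is hypothetical:

```javascript
// Callers above wrapper.js can branch on a specific arsenal error
// type instead of string-matching error messages.
function onGetResult(err, value, callback) {
    if (err) {
        if (err.ObjNotFound) {
            // expected case: report a miss rather than a failure
            return callback(null, undefined);
        }
        return callback(err);
    }
    return callback(null, value);
}
```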
Bennett Buchanan
324ec1bb54 FT: Add getBucketReplication 2017-06-05 13:47:26 -07:00
Rahul Padigela
f31f98975a Merge pull request #259 from scality/S3C-405/trust-policy-validation
S3C-405: Trust policy validation
2017-06-01 12:29:05 -07:00
Alexandre Merle
4d227b97fc S3C-405: Trust policy validation
Allowing trust policy to be evaluated

Fix https://scality.atlassian.net/browse/S3C-405
2017-06-01 11:30:20 -07:00
Rahul Padigela
d3620ca76c Merge pull request #257 from scality/S3C-380/get-security-token
S3C-380: Get security token
2017-06-01 11:28:17 -07:00
Alexandre Merle
50d6617eef S3C-380: Get security token
This PR introduces the security token needed to
authenticate requests with temporary credentials.

See https://scality.atlassian.net/browse/S3C-380
2017-06-01 13:32:38 +02:00
Rahul Padigela
6fa6c3e366 Merge pull request #258 from scality/S3C-291-put-bucket-replication
FT: Add putBucketReplication
2017-05-26 18:31:42 -07:00
Bennett Buchanan
cee743b663 FT: Add putBucketReplication 2017-05-26 16:58:45 -07:00
Rahul Padigela
532aec28b0 Merge pull request #260 from scality/ft/S3C-432-restrict-utapi-policies-to-accounts
FT: Allow account ID in Utapi policy ARNs
2017-05-24 18:23:34 -07:00
Bennett Buchanan
25b71c45f7 FT: Allow accountID in Utapi ARNs 2017-05-24 17:18:12 -07:00
Rahul Padigela
695d116bb6 Merge pull request #249 from scality/chore/removeTS
chore: remove typescript support
2017-05-24 11:04:23 -07:00
Rahul Padigela
ac2c8880e7 chore: remove typescript support
Since Typescript is no longer used in the projects this file is being
removed to avoid unnecessary maintenance.
2017-05-24 09:48:36 -07:00
Rahul Padigela
58b96d325d Merge pull request #255 from scality/ft/deleteobjecttagging
FT: deleteObjectTagging
2017-05-09 15:09:54 -07:00
Nicolas Humbert
84ba16ee16 FT: deleteObjectTagging 2017-05-08 16:21:56 -07:00
Rahul Padigela
36a5a0e43f Merge pull request #254 from scality/ft/getobjecttagging
FT: getObjectTagging
2017-05-05 10:58:51 -07:00
Nicolas Humbert
6b72f83dd8 FT: getObjectTagging 2017-05-04 14:19:59 -07:00
Rahul Padigela
0d2d3615b8 Merge pull request #235 from scality/ft/S3C-158-dataServer
S3C-158 REST interface for datafile backend
2017-05-02 17:51:11 -07:00
Jonathan Gramain
3e0c1a852e ft: REST interface for datafile backend
This is how we will be able to handle data storage in a separate
storage daemon, running on another container or host. For now, a data
REST server will be spawned locally by the S3 server at startup (in the
S3 PR).

There are actually three parts: the REST client, the REST server, and
the DataFileStore which handles the storage logic. The DataFileStore
implementation comes from the original data/file implementation in S3
server.

The REST API uses a service base path named /DataFile, and does roughly:

 - a PUT directly on /DataFile URL creates a new object file and
   returns its new random hex-encoded key through its URL in a
   Location response header, along with a '201 Created' response
   code. The REST client extracts the key from this URL and returns it
   in the callback.

 - GET and DELETE on the URL returned in the 'Location' header shall
   do their duty, though the REST client appends the given key to the
   base path to recreate the URL.

Add generic HTTP range parsing code so that it can be shared with S3.

Add the possibility to configure the bind address for metadata and
data local ports, and make it localhost when not set.
2017-05-02 13:41:50 -07:00
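A rough end-to-end sketch of that flow using plain Node http (the host and port are made up; the real client is the REST client added by this commit):

```javascript
const http = require('http');

// PUT a blob on /DataFile; on '201 Created' the new random key comes
// back through the Location response header.
function putObject(body, cb) {
    const req = http.request({
        host: 'localhost',
        port: 9990, // hypothetical data server port
        method: 'PUT',
        path: '/DataFile',
        headers: { 'content-length': Buffer.byteLength(body) },
    }, res => {
        if (res.statusCode !== 201) {
            return cb(new Error(`unexpected status ${res.statusCode}`));
        }
        return cb(null, res.headers.location);
    });
    req.on('error', cb);
    req.end(body);
}

putObject('hello', (err, location) => {
    if (err) {
        throw err;
    }
    // GET (and DELETE) then operate on the returned Location URL
    http.get({ host: 'localhost', port: 9990, path: location },
        res => res.pipe(process.stdout));
});
```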
Jonathan Gramain
9730df0bbf ft: new utility function jsutil.once()
This helper forces a function to be called at most once
2017-05-02 13:41:44 -07:00
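A minimal sketch of what such a helper does (the real one is exported as jsutil.once in the index.js diff below):

```javascript
// Wrap a function so only the first call runs; later calls are
// ignored. Typical use: guard a callback that must not fire twice.
function once(fn) {
    let called = false;
    return function onceWrapper(...args) {
        if (called) {
            return undefined;
        }
        called = true;
        return fn.apply(this, args);
    };
}

const done = once(err => console.log('done, err =', err));
done(null);              // runs
done(new Error('late')); // silently ignored in this sketch
```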
Lauren Spiegel
06281f73fb Merge pull request #247 from scality/ft/replication-group-token
Ft/replication group token
2017-05-02 10:06:14 -07:00
Electra Chong
7ee2afa55d ft: use rep group id to build version ids 2017-05-01 17:35:27 -07:00
Rahul Padigela
b403d299b5 Merge pull request #248 from scality/ft/putobjecttagging
FT: putObjectTagging
2017-05-01 16:30:41 -07:00
Nicolas Humbert
1551698ea8 FT: putObjectTagging 2017-05-01 15:30:31 -07:00
Rahul Padigela
3fd4f64176 Merge pull request #245 from scality/ft/S3C-193-metadataServer-versioning
S3C-193 bucketfile versioning support in Arsenal
2017-04-24 14:22:18 -07:00
Vinh Tao
0b2f82c120 S3C-193 bucketfile versioning support in Arsenal
What it does: provides different layers for processing requests
- VersioningRequestProcessor: to process versioning information
- WriteCache: to ensure the atomicity and isolation of requests
- WriteGatheringManager: bucketfile's operation batching layer

This versioning support is meant to be used eventually for MetaData as
well, it is a slightly modified port of the original versioning code
in MetaData.

Author: Vinh Tao <vinh.tao@scality.com>

Changes made by: Jonathan Gramain <jonathan.gramain@scality.com>
2017-04-24 14:13:13 -07:00
Jonathan Gramain
d8500856d0 S3C-193 Refactor level-net RPC mechanism
It separates concerns of RPC management from LevelDB-specific RPC code
by segmenting the RPC code into services.

It should allow easier maintainability in general, while allowing
cleaner integration of versioning to bucketfile, and easier
extensibility (backbeat persistent queue service management is the
next step which will benefit from this refactoring).

S3C-193 + comment in openSub()

S3C-193 improve error message on connectivity issue

S3C-193 add doxygen to comment about request environment parameters and fix comment grammar
2017-04-24 14:13:04 -07:00
Jonathan Gramain
f9281c5156 S3C-193 reorganize level-net code structure
Put all files into a common rpc directory
2017-04-24 14:12:53 -07:00
Rahul Padigela
5f7ab7b290 Merge pull request #243 from scality/cleanup/versioningTestSetup
cleanup: make version id tests independent
2017-04-05 18:41:39 -07:00
Rahul Padigela
ac145f19a7 cleanup: make version id tests independent 2017-04-05 16:28:41 -07:00
Rahul Padigela
cf058841f6 Merge pull request #242 from scality/ft/version-encoding
rf: change encoding of version ids [S3C-184]
2017-04-05 16:10:51 -07:00
Electra Chong
53f0c58933 rf: change encoding of version ids 2017-04-05 15:51:31 -07:00
Rahul Padigela
466375c505 Merge pull request #240 from scality/fix/versioning
fix: wrong check of delimiter index
2017-04-04 12:06:58 -07:00
Vinh Tao
580db34dee fix: wrong check of delimiter index
fixes #241
2017-04-04 10:42:04 -07:00
Vinh Tao
4de83ad555 test: wrong check of delimiter index 2017-04-04 10:41:45 -07:00
Rahul Padigela
22fb66145a Merge pull request #239 from scality/ft/versioningpolicy
ft: request context with versioning actions
2017-04-04 10:40:42 -07:00
Nicolas Humbert
51c6f6af83 ft: request context with versioning actions
S3C-156
2017-04-03 11:12:23 -07:00
Electra Chong
149324b0c5 Merge pull request #228 from scality/ft/vsp
update with new versioning
2017-04-01 09:37:30 -07:00
Vinh Tao
c9f007d4e6 test: versioning listing extensions 2017-04-01 16:23:48 +02:00
Vinh Tao
6187cdeca0 ft: versioning listing extensions 2017-04-01 16:23:48 +02:00
Vinh Tao
771ebb060e rf: versioning constants and utilities 2017-04-01 16:23:48 +02:00
Rahul Padigela
b5ffc4c4ac Merge pull request #237 from scality/ft/objLocationConstraintPolicy
Add ObjLocationConstraint policy condition
2017-03-28 18:37:38 -07:00
Dora Korpar
a642a84a9f Add ObjLocationConstraint policy condition 2017-03-28 17:28:04 -07:00
Rahul Padigela
daaa57794b Merge pull request #238 from scality/ft/S3C-35-metadataServer-timeout
S3C-35 change metadata timeout from 5 to 30s
2017-03-28 17:10:48 -07:00
Jonathan Gramain
292ed358f8 S3C-35 change metadata timeout from 5 to 30s
The rationale is that tests have failed with timeout errors; it seems 5
seconds is too short for our test VMs. Hopefully 30s is long enough that
these errors will not occur again.
2017-03-28 15:26:17 -07:00
Lauren Spiegel
b659b9de77 Merge pull request #234 from scality/ft/S3C-35-metadataServer-2
S3C-35 init metadata storage in arsenal
2017-03-27 16:13:15 -07:00
Jonathan Gramain
96be9e04c6 S3C-35 init metadata storage in arsenal
Moves that code from S3 to Arsenal.
2017-03-24 11:42:47 -07:00
Lauren Spiegel
9867ffa1cc Merge pull request #236 from scality/fix/werelogsAsDependency
fix: werelogs as a dependency
2017-03-24 10:53:06 -07:00
Vinh Tao
f686b53cec fix: werelogs as a dependency 2017-03-24 17:28:30 +01:00
Rahul Padigela
6992f7c798 Merge pull request #230 from scality/ft/S3C-35-metadataServer
S3C-35 New communication channel for remote sublevel
2017-03-23 16:26:58 -07:00
Jonathan Gramain
d8f65786d9 S3C-35 New communication channel for remote sublevel
It allows the client to do the normal levelDB operations (put, get,
delete, list keys via createReadStream) as well as create new
sublevels. (Note that this is a purely virtual operation: nothing is
created initially, but the client gets a handle to manipulate the new
sublevel.)

It's built on top of socket.io, which provides a messaging abstraction
and helps maintain a reliable channel (detecting connection failures,
attempting to reconnect, etc.). It also allows using several namespaces
and "rooms" on the same communication channel, which may be useful
later.

There is an additional timeout for each operation, which triggers an
error after 5 seconds without an answer by default.

A custom object stream implementation has been added in order to list
keys more efficiently than the third-party "socket.io-stream" module
which does a round-trip for every entry. This implementation both
gathers written objects in a single packet, and can pipeline multiple
packets to the server while waiting for acks (max 5 by default, should
be configurable). This should help a lot when the remote latency is
high.

Note that levelDB createReadStream() becomes asynchronous: the user
must pass a callback, and the stream is returned as a callback
argument.

Unit tests added:

 - ping
 - basic CRUD test
 - sublevels (separation of namespace, nesting)
 - listing of keys (to the end and aborted by the client) + parallel rewrites
 - parallel random reads
 - parallel deletes
 - command timeout
2017-03-23 15:01:17 -07:00
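The asynchronous createReadStream() point is worth a tiny sketch; the sublevel handle here stands in for whatever object the RPC layer hands back (names are illustrative):

```javascript
// With the RPC-backed sublevel, the stream arrives via callback
// instead of being returned synchronously.
function listRange(subLevel, onEntry, onDone) {
    subLevel.createReadStream({ gte: 'a', lt: 'b' }, (err, stream) => {
        if (err) {
            return onDone(err);
        }
        stream.on('data', onEntry);
        stream.on('end', () => onDone(null));
        return undefined;
    });
}
```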
Rahul Padigela
b30d1421e9 Merge pull request #233 from scality/forward/rel/6.4
Forward/rel/6.4
2017-03-20 15:51:27 -07:00
Lauren Spiegel
6751b390ef Merge remote-tracking branch 'origin/rel/6.4' into forward/rel/6.4 2017-03-20 14:54:11 -07:00
Rahul Padigela
cc1fc929e9 Merge pull request #222 from scality/ft/node-v6
FT: Node v6
2017-02-28 11:22:08 -08:00
Alexandre Merle
9218459ead FT: Switch to node v6
Switch to node v6
2017-02-28 20:03:02 +01:00
David Pineau
b18d55c2b2 Merge pull request #227 from scality/fwd/6.4-to-master
Fwd/6.4 to master
2017-02-27 17:50:10 +01:00
Alexandre Merle
7fc614b7ba Merge remote-tracking branch 'origin/rel/6.4' into fwd/6.4-to-master
Conflicts:
	package.json
2017-02-27 14:20:54 +01:00
David Pineau
ec84aa5e43 Merge pull request #226 from scality/fwd/6.4-to-master
Fwd/6.4 to master
2017-02-27 09:43:49 +01:00
Alexandre Merle
2cf1da6d8b Merge remote-tracking branch 'origin/rel/6.4' into fwd/6.4-to-master 2017-02-24 11:14:24 +01:00
Rahul Padigela
9c87d228dc Merge pull request #217 from scality/DEV/getBucketLocation
DEV: getBucketLocation
2017-02-15 14:09:32 -08:00
Nicolas Humbert
78a9e45344 DEV: getBucketLocation 2017-01-25 16:01:07 -08:00
David Pineau
b9b569f5f4 Merge pull request #216 from scality/Forward-rel/6.4-to-master
Forward rel/6.4 to master
2017-01-19 13:19:23 +01:00
David Pineau
f8d85e501d Merge remote-tracking branch 'origin/rel/6.4' into Forward-rel/6.4-to-master 2017-01-19 12:15:49 +01:00
192 changed files with 27866 additions and 1734 deletions

10
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,10 @@
---
version: 2
updates:
- package-ecosystem: npm
directory: "/"
schedule:
interval: daily
time: "13:00"
open-pull-requests-limit: 10
target-branch: "development/7.4"

25
.github/workflows/codeql.yaml vendored Normal file

@@ -0,0 +1,25 @@
---
name: codeQL
on:
push:
branches: [development/*, stabilization/*, hotfix/*]
pull_request:
branches: [development/*, stabilization/*, hotfix/*]
workflow_dispatch:
jobs:
analyze:
name: Static analysis with CodeQL
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: javascript, typescript
- name: Build and analyze
uses: github/codeql-action/analyze@v2


@@ -0,0 +1,16 @@
---
name: dependency review
on:
pull_request:
branches: [development/*, stabilization/*, hotfix/*]
jobs:
dependency-review:
runs-on: ubuntu-latest
steps:
- name: 'Checkout Repository'
uses: actions/checkout@v3
- name: 'Dependency Review'
uses: actions/dependency-review-action@v3

47
.github/workflows/tests.yaml vendored Normal file

@@ -0,0 +1,47 @@
---
name: tests
on:
push:
branches-ignore:
- 'development/**'
jobs:
test:
runs-on: ubuntu-latest
services:
# Label used to access the service container
redis:
# Docker Hub image
image: redis
# Set health checks to wait until redis has started
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
# Maps port 6379 on service container to the host
- 6379:6379
steps:
- name: Checkout
uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: '16'
cache: 'yarn'
- name: install dependencies
run: yarn install --frozen-lockfile
- name: lint yaml
run: yarn --silent lint_yml
- name: lint javascript
run: yarn --silent lint -- --max-warnings 0
- name: lint markdown
run: yarn --silent lint_md
- name: run unit tests
run: yarn test
- name: run functional tests
run: yarn ft_test
- name: run executables tests
run: yarn install && yarn test
working-directory: 'lib/executables/pensieveCreds/'

11
.gitignore vendored

@@ -1 +1,12 @@
# Logs
*.log
# Dependency directory
node_modules/
*/node_modules/
# Build executables
*-win.exe
*-linux
*-macos


@@ -1,8 +1,5 @@
# Arsenal
[![CircleCI][badgepub]](https://circleci.com/gh/scality/Arsenal)
[![Scality CI][badgepriv]](http://ci.ironmann.io/gh/scality/Arsenal)
Common utilities for the S3 project components
Within this repository, you will be able to find the shared libraries for the
@@ -104,7 +101,7 @@ You can handle exit event on both master and workers by calling the
'onExit' method and setting the callback. This allows releasing resources
or saving state before exiting the process.
#### Silencing a singnal
#### Silencing a signal
```
import { Clustering } from 'arsenal';


@@ -2,13 +2,15 @@
general:
branches:
ignore:
- /^ultron\/.*/ # Ignore ultron/* branches
- /^ultron\/.*/ # Ignore ultron/* branches
machine:
node:
version: 6.13.1
services:
- redis
environment:
CXX: g++-4.9
node:
version: 4.5.0
dependencies:
override:
@@ -23,3 +25,4 @@ test:
- npm run --silent lint_md
- npm run --silent test
- npm run ft_test
- cd lib/executables/pensieveCreds && npm install && npm test


@@ -0,0 +1,82 @@
# BucketInfo Model Version History
## Model Version 0/1
### Properties
``` javascript
this._acl = aclInstance;
this._name = name;
this._owner = owner;
this._ownerDisplayName = ownerDisplayName;
this._creationDate = creationDate;
```
### Usage
No explicit references in the code since mdBucketModelVersion
property not added until Model Version 2
## Model Version 2
### Properties Added
``` javascript
this._mdBucketModelVersion = mdBucketModelVersion || 0
this._transient = transient || false;
this._deleted = deleted || false;
```
### Usage
Used to determine which splitter to use ( < 2 means old splitter)
## Model version 3
### Properties Added
```
this._serverSideEncryption = serverSideEncryption || null;
```
### Usage
Used to store the server bucket encryption info
## Model version 4
### Properties Added
```javascript
this._locationConstraint = LocationConstraint || null;
```
### Usage
Used to store the location constraint of the bucket
## Model version 5
### Properties Added
```javascript
this._websiteConfiguration = websiteConfiguration || null;
this._cors = cors || null;
```
### Usage
Used to store the bucket website configuration info
and to store CORS rules to apply to cross-domain requests
## Model version 6
### Properties Added
```javascript
this._lifecycleConfiguration = lifecycleConfiguration || null;
```
### Usage
Used to store the bucket lifecycle configuration info


@@ -56,6 +56,10 @@
"code": 400,
"description": "The provided token has expired."
},
"HttpHeadersTooLarge": {
"code": 400,
"description": "Your http headers exceed the maximum allowed http headers size."
},
"IllegalVersioningConfigurationException": {
"code": 400,
"description": "Indicates that the versioning configuration specified in the request is invalid."
@@ -120,6 +124,10 @@
"code": 400,
"description": "The list of parts was not in ascending order.Parts list must specified in order by part number."
},
"InvalidPartNumber": {
"code": 416,
"description": "The requested partnumber is not satisfiable."
},
"InvalidPayer": {
"code": 403,
"description": "All access to this object has been disabled."
@@ -152,6 +160,10 @@
"code": 400,
"description": "The storage class you specified is not valid."
},
"InvalidTag": {
"code": 400,
"description": "The Tag you have provided is invalid"
},
"InvalidTargetBucketForLogging": {
"code": 400,
"description": "The target bucket for logging does not exist, is not owned by you, or does not have the appropriate grants for the log-delivery group."
@@ -212,6 +224,10 @@
"code": 400,
"description": "Request body is empty"
},
"MissingRequiredParameter": {
"code": 400,
"description": "Your request is missing a required parameter."
},
"MissingSecurityElement": {
"code": 400,
"description": "The SOAP 1.1 request is missing a security element."
@@ -252,6 +268,10 @@
"code": 404,
"description": "Indicates that the version ID specified in the request does not match an existing version."
},
"ReplicationConfigurationNotFoundError": {
"code": 404,
"description": "The replication configuration was not found"
},
"NotImplemented": {
"code": 501,
"description": "A header you provided implies functionality that is not implemented."
@@ -699,5 +719,14 @@
"NotEnoughMapsInConfig:": {
"description": "NotEnoughMapsInConfig",
"code": 400
},
"TooManyRequests": {
"description": "TooManyRequests",
"code": 429
},
"_comment": "----------------------- cdmiclient -----------------------",
"ReadOnly": {
"description": "trying to write to read only back-end",
"code": 403
}
}


@@ -1,39 +0,0 @@
---
version: 0.2
branches:
default:
stage: pre-merge
stages:
pre-merge:
worker: &master-worker
type: docker
path: eve/workers/master
volumes:
- '/home/eve/workspace'
steps:
- Git:
name: fetch source
repourl: '%(prop:git_reference)s'
shallow: True
retryFetch: True
haltOnFailure: True
- ShellCommand:
name: install dependencies
command: npm install
- ShellCommand:
name: run lint yml
command: npm run --silent lint_yml
- ShellCommand:
name: run lint
command: npm run --silent lint -- --max-warnings 0
- ShellCommand:
name: run lint_md
command: npm run --silent lint_md
- ShellCommand:
name: run test
command: npm run --silent test
- ShellCommand:
name: run ft_test
command: npm run ft_test


@@ -1,55 +0,0 @@
FROM ubuntu:trusty
#
# Install apt packages needed by the buildchain
#
ENV LANG C.UTF-8
COPY buildbot_worker_packages.list arsenal_packages.list /tmp/
RUN apt-get update -q && apt-get -qy install curl apt-transport-https \
&& apt-get install -qy software-properties-common python-software-properties \
&& curl --silent https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - \
&& echo "deb https://deb.nodesource.com/node_6.x trusty main" > /etc/apt/sources.list.d/nodesource.list \
&& add-apt-repository ppa:ubuntu-toolchain-r/test \
&& apt-get update -q \
&& cat /tmp/buildbot_worker_packages.list | xargs apt-get install -qy \
&& cat /tmp/arsenal_packages.list | xargs apt-get install -qy \
&& pip install pip==9.0.1 \
&& rm -rf /var/lib/apt/lists/* \
&& rm -f /tmp/*_packages.list
#
# Install useful nodejs dependencies
#
RUN npm install mocha -g
#
# Add user eve
#
RUN adduser -u 1042 --home /home/eve --disabled-password --gecos "" eve \
&& adduser eve sudo \
&& sed -ri 's/(%sudo.*)ALL$/\1NOPASSWD:ALL/' /etc/sudoers
#
# Run buildbot-worker on startup
#
ARG BUILDBOT_VERSION=0.9.12
RUN pip install yamllint
RUN pip install buildbot-worker==$BUILDBOT_VERSION
USER eve
ENV HOME /home/eve
#
# Set up nodejs environment
#
ENV CXX=g++-4.9
ENV LANG C.UTF-8
WORKDIR /home/eve/workspace
CMD buildbot-worker create-worker . "$BUILDMASTER:$BUILDMASTER_PORT" "$WORKERNAME" "$WORKERPASS" \
&& sudo service redis-server start \
&& buildbot-worker start --nodaemon


@@ -1,3 +0,0 @@
nodejs
redis-server
g++-4.9


@@ -1,9 +0,0 @@
ca-certificates
git
libffi-dev
libssl-dev
python2.7
python2.7-dev
python-pip
software-properties-common
sudo

194
index.d.ts vendored

@@ -1,194 +0,0 @@
import { Logger } from 'werelogs';
interface Ciphers {
ciphers: string;
}
interface Dhparam {
dhparam: string;
}
declare module "arsenal" {
class ArsenalError extends Error {
code: number;
description: string;
'AccessDenied'?: boolean;
'AccountProblem'?: boolean;
'AmbiguousGrantByEmailAddress'?: boolean;
'BadDigest'?: boolean;
'BucketAlreadyExists'?: boolean;
'BucketAlreadyOwnedByYou'?: boolean;
'BucketNotEmpty'?: boolean;
'CredentialsNotSupported'?: boolean;
'CrossLocationLoggingProhibited'?: boolean;
'DeleteConflict'?: boolean;
'EntityTooSmall'?: boolean;
'EntityTooLarge'?: boolean;
'ExpiredToken'?: boolean;
'IllegalVersioningConfigurationException'?: boolean;
'IncompleteBody'?: boolean;
'IncorrectNumberOfFilesInPostRequest'?: boolean;
'InlineDataTooLarge'?: boolean;
'InternalError'?: boolean;
'InvalidAccessKeyId'?: boolean;
'InvalidAddressingHeader'?: boolean;
'InvalidArgument'?: boolean;
'InvalidBucketName'?: boolean;
'InvalidBucketState'?: boolean;
'InvalidDigest'?: boolean;
'InvalidEncryptionAlgorithmError'?: boolean;
'InvalidLocationConstraint'?: boolean;
'InvalidObjectState'?: boolean;
'InvalidPart'?: boolean;
'InvalidPartOrder'?: boolean;
'InvalidPayer'?: boolean;
'InvalidPolicyDocument'?: boolean;
'InvalidRange'?: boolean;
'InvalidRequest'?: boolean;
'InvalidSecurity'?: boolean;
'InvalidSOAPRequest'?: boolean;
'InvalidStorageClass'?: boolean;
'InvalidTargetBucketForLogging'?: boolean;
'InvalidToken'?: boolean;
'InvalidURI'?: boolean;
'KeyTooLong'?: boolean;
'LimitExceeded'?: boolean;
'MalformedACLError'?: boolean;
'MalformedPOSTRequest'?: boolean;
'MalformedXML'?: boolean;
'MaxMessageLengthExceeded'?: boolean;
'MaxPostPreDataLengthExceededError'?: boolean;
'MetadataTooLarge'?: boolean;
'MethodNotAllowed'?: boolean;
'MissingAttachment'?: boolean;
'MissingContentLength'?: boolean;
'MissingRequestBodyError'?: boolean;
'MissingSecurityElement'?: boolean;
'MissingSecurityHeader'?: boolean;
'NoLoggingStatusForKey'?: boolean;
'NoSuchBucket'?: boolean;
'NoSuchKey'?: boolean;
'NoSuchLifecycleConfiguration'?: boolean;
'NoSuchUpload'?: boolean;
'NoSuchVersion'?: boolean;
'NotImplemented'?: boolean;
'NotModified'?: boolean;
'NotSignedUp'?: boolean;
'NoSuchBucketPolicy'?: boolean;
'OperationAborted'?: boolean;
'PermanentRedirect'?: boolean;
'PreconditionFailed'?: boolean;
'Redirect'?: boolean;
'RestoreAlreadyInProgress'?: boolean;
'RequestIsNotMultiPartContent'?: boolean;
'RequestTimeout'?: boolean;
'RequestTimeTooSkewed'?: boolean;
'RequestTorrentOfBucketError'?: boolean;
'SignatureDoesNotMatch'?: boolean;
'ServiceUnavailable'?: boolean;
'SlowDown'?: boolean;
'TemporaryRedirect'?: boolean;
'TokenRefreshRequired'?: boolean;
'TooManyBuckets'?: boolean;
'TooManyParts'?: boolean;
'UnexpectedContent'?: boolean;
'UnresolvableGrantByEmailAddress'?: boolean;
'UserKeyMustBeSpecified'?: boolean;
'NoSuchEntity'?: boolean;
'WrongFormat'?: boolean;
'Forbidden'?: boolean;
'EntityDoesNotExist'?: boolean;
'EntityAlreadyExists'?: boolean;
'ServiceFailure'?: boolean;
'IncompleteSignature'?: boolean;
'InternalFailure'?: boolean;
'InvalidAction'?: boolean;
'InvalidClientTokenId'?: boolean;
'InvalidParameterCombination'?: boolean;
'InvalidParameterValue'?: boolean;
'InvalidQueryParameter'?: boolean;
'MalformedQueryString'?: boolean;
'MissingAction'?: boolean;
'MissingAuthenticationToken'?: boolean;
'MissingParameter'?: boolean;
'OptInRequired'?: boolean;
'RequestExpired'?: boolean;
'Throttling'?: boolean;
'AccountNotFound'?: boolean;
'ValidationError'?: boolean;
'MalformedPolicyDocument'?: boolean;
'InvalidInput'?: boolean;
'MPUinProgress'?: boolean;
'BadName'?: boolean;
'BadAccount'?: boolean;
'BadGroup'?: boolean;
'BadId'?: boolean;
'BadAccountName'?: boolean;
'BadNameFriendly'?: boolean;
'BadEmailAddress'?: boolean;
'BadPath'?: boolean;
'BadArn'?: boolean;
'BadCreateDate'?: boolean;
'BadLastUsedDate'?: boolean;
'BadNotBefore'?: boolean;
'BadNotAfter'?: boolean;
'BadSaltedPwd'?: boolean;
'ok'?: boolean;
'BadUser'?: boolean;
'BadSaltedPasswd'?: boolean;
'BadPasswdDate'?: boolean;
'BadCanonicalId'?: boolean;
'BadAlias'?: boolean;
'DBPutFailed'?: boolean;
'AccountEmailAlreadyUsed'?: boolean;
'AccountNameAlreadyUsed'?: boolean;
'UserEmailAlreadyUsed'?: boolean;
'UserNameAlreadyUsed'?: boolean;
'NoParentAccount'?: boolean;
'BadStringToSign'?: boolean;
'BadSignatureFromRequest'?: boolean;
'BadAlgorithm'?: boolean;
'SecretKeyDoesNotExist'?: boolean;
'InvalidRegion'?: boolean;
'ScopeDate'?: boolean;
'BadAccessKey'?: boolean;
'NoDict'?: boolean;
'BadSecretKey'?: boolean;
'BadSecretKeyValue'?: boolean;
'BadSecretKeyStatus'?: boolean;
'BadUrl'?: boolean;
'BadClientIdList'?: boolean;
'BadThumbprintList'?: boolean;
'BadObject'?: boolean;
'BadRole'?: boolean;
'BadSamlp'?: boolean;
'BadMetadataDocument'?: boolean;
'BadSessionIndex'?: boolean;
'Unauthorized'?: boolean;
'CacheUpdated'?: boolean;
'DBNotFound'?: boolean;
'DBAlreadyExists'?: boolean;
'ObjNotFound'?: boolean;
'PermissionDenied'?: boolean;
'BadRequest'?: boolean;
'RaftSessionNotLeader'?: boolean;
'RaftSessionLeaderNotConnected'?: boolean;
'NoLeaderForDB'?: boolean;
'RouteNotFound'?: boolean;
'NoMapsInConfig'?: boolean;
'DBAPINotReady'?: boolean;
'NotEnoughMapsInConfig:'?: boolean;
}
export var errors: { [key:string]: ArsenalError };
export class Clustering {
constructor(size: number, logger: Logger, timeout?: number);
start(cb: (cluster: Clustering) => void): Clustering;
}
namespace https {
var ciphers: Ciphers;
var dhparam: Dhparam;
}
}


@@ -6,6 +6,7 @@ module.exports = {
shuffle: require('./lib/shuffle'),
stringHash: require('./lib/stringHash'),
ipCheck: require('./lib/ipCheck'),
jsutil: require('./lib/jsutil'),
https: {
ciphers: require('./lib/https/ciphers.js'),
dhparam: require('./lib/https/dh2048.js'),
@@ -20,12 +21,24 @@ module.exports = {
.DelimiterMaster,
MPU: require('./lib/algos/list/MPU').MultipartUploads,
},
listTools: {
DelimiterTools: require('./lib/algos/list/tools'),
},
cache: {
LRUCache: require('./lib/algos/cache/LRUCache'),
},
stream: {
MergeStream: require('./lib/algos/stream/MergeStream'),
},
},
policies: {
evaluators: require('./lib/policyEvaluator/evaluator.js'),
validateUserPolicy: require('./lib/policy/policyValidator')
.validateUserPolicy,
evaluatePrincipal: require('./lib/policyEvaluator/principal'),
RequestContext: require('./lib/policyEvaluator/RequestContext.js'),
requestUtils: require('./lib/policyEvaluator/requestUtils'),
actionMaps: require('./lib/policyEvaluator/utils/actionMaps'),
},
Clustering: require('./lib/Clustering'),
testing: {
@@ -34,11 +47,89 @@ module.exports = {
versioning: {
VersioningConstants: require('./lib/versioning/constants.js')
.VersioningConstants,
VersioningUtils: require('./lib/versioning/utils.js').VersioningUtils,
Version: require('./lib/versioning/Version.js').Version,
VersionID: require('./lib/versioning/VersionID.js'),
WriteGatheringManager: require('./lib/versioning/WriteGatheringManager.js'),
WriteCache: require('./lib/versioning/WriteCache.js'),
VersioningRequestProcessor: require('./lib/versioning/VersioningRequestProcessor.js'),
},
network: {
http: {
server: require('./lib/network/http/server'),
},
rpc: require('./lib/network/rpc/rpc'),
level: require('./lib/network/rpc/level-net'),
rest: {
RESTServer: require('./lib/network/rest/RESTServer'),
RESTClient: require('./lib/network/rest/RESTClient'),
},
probe: {
ProbeServer: require('./lib/network/probe/ProbeServer'),
},
RoundRobin: require('./lib/network/RoundRobin'),
},
s3routes: {
routes: require('./lib/s3routes/routes'),
routesUtils: require('./lib/s3routes/routesUtils'),
},
s3middleware: {
userMetadata: require('./lib/s3middleware/userMetadata'),
convertToXml: require('./lib/s3middleware/convertToXml'),
escapeForXml: require('./lib/s3middleware/escapeForXml'),
tagging: require('./lib/s3middleware/tagging'),
validateConditionalHeaders:
require('./lib/s3middleware/validateConditionalHeaders')
.validateConditionalHeaders,
MD5Sum: require('./lib/s3middleware/MD5Sum'),
NullStream: require('./lib/s3middleware/nullStream'),
objectUtils: require('./lib/s3middleware/objectUtils'),
azureHelper: {
mpuUtils:
require('./lib/s3middleware/azureHelpers/mpuUtils'),
ResultsCollector:
require('./lib/s3middleware/azureHelpers/ResultsCollector'),
SubStreamInterface:
require('./lib/s3middleware/azureHelpers/SubStreamInterface'),
},
},
storage: {
metadata: {
MetadataFileServer:
require('./lib/storage/metadata/file/MetadataFileServer'),
MetadataFileClient:
require('./lib/storage/metadata/file/MetadataFileClient'),
LogConsumer:
require('./lib/storage/metadata/bucketclient/LogConsumer'),
},
data: {
file: {
DataFileStore:
require('./lib/storage/data/file/DataFileStore'),
},
},
utils: require('./lib/storage/utils'),
},
models: {
BucketInfo: require('./lib/models/BucketInfo'),
ObjectMD: require('./lib/models/ObjectMD'),
ObjectMDLocation: require('./lib/models/ObjectMDLocation'),
ARN: require('./lib/models/ARN'),
WebsiteConfiguration: require('./lib/models/WebsiteConfiguration'),
ReplicationConfiguration:
require('./lib/models/ReplicationConfiguration'),
LifecycleConfiguration:
require('./lib/models/LifecycleConfiguration'),
},
metrics: {
StatsClient: require('./lib/metrics/StatsClient'),
StatsModel: require('./lib/metrics/StatsModel'),
RedisClient: require('./lib/metrics/RedisClient'),
ZenkoMetrics: require('./lib/metrics/ZenkoMetrics'),
},
pensieve: {
credentialUtils: require('./lib/executables/pensieveCreds/utils'),
},
stream: {
readJSONStreamObject: require('./lib/stream/readJSONStreamObject'),
},
};

167
lib/algos/cache/LRUCache.js vendored Normal file

@@ -0,0 +1,167 @@
const assert = require('assert');
/**
* @class
* @classdesc Implements a key-value in-memory cache with a capped
* number of items and a Least Recently Used (LRU) strategy for
* eviction.
*/
class LRUCache {
/**
* @constructor
* @param {number} maxEntries - maximum number of entries kept in
* the cache
*/
constructor(maxEntries) {
assert(maxEntries >= 1);
this._maxEntries = maxEntries;
this.clear();
}
/**
* Add or update the value associated to a key in the cache,
* making it the most recently accessed for eviction purpose.
*
* @param {string} key - key to add
* @param {object} value - associated value (can be of any type)
* @return {boolean} true if the cache contained an entry with
* this key, false if it did not
*/
add(key, value) {
let entry = this._entryMap[key];
if (entry) {
entry.value = value;
// make the entry the most recently used by re-pushing it
// to the head of the LRU list
this._lruRemoveEntry(entry);
this._lruPushEntry(entry);
return true;
}
if (this._entryCount === this._maxEntries) {
// if the cache is already full, abide by the LRU strategy
// and remove the least recently used entry from the cache
// before pushing the new entry
this._removeEntry(this._lruTail);
}
entry = { key, value };
this._entryMap[key] = entry;
this._entryCount += 1;
this._lruPushEntry(entry);
return false;
}
/**
* Get the value associated to a key in the cache, making it the
* most recently accessed for eviction purpose.
*
* @param {string} key - key of which to fetch the associated value
* @return {object|undefined} - returns the associated value if
* exists in the cache, or undefined if not found - either if the
* key was never added or if it has been evicted from the cache.
*/
get(key) {
const entry = this._entryMap[key];
if (entry) {
// make the entry the most recently used by re-pushing it
// to the head of the LRU list
this._lruRemoveEntry(entry);
this._lruPushEntry(entry);
return entry.value;
}
return undefined;
}
/**
* Remove an entry from the cache if exists
*
* @param {string} key - key to remove
* @return {boolean} true if an entry has been removed, false if
* there was no entry with this key in the cache - either if the
* key was never added or if it has been evicted from the cache.
*/
remove(key) {
const entry = this._entryMap[key];
if (entry) {
this._removeEntry(entry);
return true;
}
return false;
}
/**
* Get the current number of cached entries
*
* @return {number} current number of cached entries
*/
count() {
return this._entryCount;
}
/**
* Remove all entries from the cache
*
* @return {undefined}
*/
clear() {
this._entryMap = {};
this._entryCount = 0;
this._lruHead = null;
this._lruTail = null;
}
/**
* Push an entry to the front of the LRU list, making it the most
* recently accessed
*
* @param {object} entry - entry to push
* @return {undefined}
*/
_lruPushEntry(entry) {
/* eslint-disable no-param-reassign */
entry._lruNext = this._lruHead;
entry._lruPrev = null;
if (this._lruHead) {
this._lruHead._lruPrev = entry;
}
this._lruHead = entry;
if (!this._lruTail) {
this._lruTail = entry;
}
/* eslint-enable no-param-reassign */
}
/**
* Remove an entry from the LRU list
*
* @param {object} entry - entry to remove
* @return {undefined}
*/
_lruRemoveEntry(entry) {
/* eslint-disable no-param-reassign */
if (entry._lruPrev) {
entry._lruPrev._lruNext = entry._lruNext;
} else {
this._lruHead = entry._lruNext;
}
if (entry._lruNext) {
entry._lruNext._lruPrev = entry._lruPrev;
} else {
this._lruTail = entry._lruPrev;
}
/* eslint-enable no-param-reassign */
}
/**
* Helper function to remove an existing entry from the cache
*
* @param {object} entry - cache entry to remove
* @return {undefined}
*/
_removeEntry(entry) {
this._lruRemoveEntry(entry);
delete this._entryMap[entry.key];
this._entryCount -= 1;
}
}
module.exports = LRUCache;
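For reference, a quick usage example of the class above (the behavior follows directly from the code shown):

```javascript
const LRUCache = require('./lib/algos/cache/LRUCache');

const cache = new LRUCache(2); // capacity of two entries
cache.add('a', 1);
cache.add('b', 2);
cache.get('a');     // touches 'a', so 'b' becomes least recently used
cache.add('c', 3);  // cache is full: 'b' is evicted
console.log(cache.get('b')); // undefined
console.log(cache.count());  // 2
```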

124
lib/algos/list/Extension.js Normal file

@@ -0,0 +1,124 @@
'use strict'; // eslint-disable-line strict
const { FILTER_SKIP, SKIP_NONE } = require('./tools');
// Use a heuristic to amortize the cost of JSON
// serialization/deserialization only on largest metadata where the
// potential for size reduction is high, considering the bulk of the
// blob size is due to the "location" field containing a large number
// of MPU parts.
//
// Measured on some standard metadata:
// - 100 parts -> 9K blob
// - 2000 parts -> 170K blob
//
// Using a 10K threshold should lead to a worst case of about 10M to
// store a raw listing of 1000 entries; even with some growth
// multiplication factor due to internal memory duplication, it
// should stay within reasonable memory limits.
const TRIM_METADATA_MIN_BLOB_SIZE = 10000;
/**
* Base class of listing extensions.
*/
class Extension {
/**
* This takes a list of parameters and a logger as the inputs.
* Derivatives should have their own format regarding parameters.
*
* @param {Object} parameters - listing parameter from applications
* @param {RequestLogger} logger - the logger
* @constructor
*/
constructor(parameters, logger) {
// inputs
this.parameters = parameters;
this.logger = logger;
// listing results
this.res = undefined;
this.keys = 0;
}
/**
* Filters-out non-requested optional fields from the value. This function
* shall be applied on any value that is to be returned as part of the
* result of a listing extension.
*
* @param {String} value - The JSON value of a listing item
*
* @return {String} The value that may have been trimmed of some
* heavy unused fields, or left untouched (depending on size
* heuristics)
*/
trimMetadata(value) {
let ret = undefined;
if (value.length >= TRIM_METADATA_MIN_BLOB_SIZE) {
try {
ret = JSON.parse(value);
delete ret.location;
ret = JSON.stringify(ret);
} catch (e) {
// Prefer returning unfiltered data rather than
// stopping the service in case of parsing failure.
// The risk of this approach is a potential
// reproduction of MD-692, where too much memory is
// used by repd.
this.logger.warn(
'Could not parse Object Metadata while listing',
{ err: e.toString() });
}
}
return ret || value;
}
/**
* Generates listing parameters that metadata can understand from the input
* parameters. What metadata can understand: gt, gte, lt, lte, limit, keys,
* values, reverse; we use the same set of parameters as levelup's.
* Derivatives should have their own conversion of their original listing
* parameters into metadata listing parameters.
*
* @return {object} - listing parameters for metadata
*/
genMDParams() {
return {};
}
/**
* This function receives a data entry from metadata and decides if it will
* include the entry in the listing result or not.
*
* @param {object} entry - a listing entry from metadata
* expected format: { key, value }
* @return {number} - result of filtering the entry:
* > 0: entry is accepted and included in the result
* = 0: entry is accepted but not included (skipping)
* < 0: entry is not accepted, listing should finish
*/
filter(entry) {
return entry ? FILTER_SKIP : FILTER_SKIP;
}
/**
* Provides the insight into why filter is skipping an entry. This could be
* because it is skipping a range of delimited keys or a range of specific
* version when doing master version listing.
*
* @return {string} - the insight: a common prefix or a master key,
* or SKIP_NONE if there is no insight
*/
skipping() {
return SKIP_NONE;
}
/**
* Get the listing results. Format depends on derivatives' specific logic.
* @return {Array} - The listed elements
*/
result() {
return this.res;
}
}
module.exports.default = Extension;


@@ -1,7 +1,10 @@
'use strict'; // eslint-disable-line strict
const checkLimit = require('./tools').checkLimit;
const { inc, checkLimit, listingParamsMasterKeysV0ToV1,
FILTER_END, FILTER_ACCEPT } = require('./tools');
const DEFAULT_MAX_KEYS = 1000;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
function numberDefault(num, defaultNum) {
const parsedNum = Number.parseInt(num, 10);
@@ -17,9 +20,12 @@ class MultipartUploads {
* Init and check parameters
* @param {Object} params - The parameters you sent to DBD
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
* @return {undefined}
*/
constructor(params, logger) {
constructor(params, logger, vFormat) {
this.params = params;
this.vFormat = vFormat || BucketVersioningKeyFormat.v0;
this.CommonPrefixes = [];
this.Uploads = [];
this.IsTruncated = false;
@@ -32,6 +38,44 @@ class MultipartUploads {
this.delimiter = params.delimiter;
this.splitter = params.splitter;
this.logger = logger;
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
getObjectKey: this.getObjectKeyV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
getObjectKey: this.getObjectKeyV1,
},
}[this.vFormat]);
}
genMDParamsV0() {
const params = {};
if (this.params.keyMarker) {
params.gt = `overview${this.params.splitter}` +
`${this.params.keyMarker}${this.params.splitter}`;
if (this.params.uploadIdMarker) {
params.gt += `${this.params.uploadIdMarker}`;
}
// advance so that lower bound does not include the supplied
// markers
params.gt = inc(params.gt);
}
if (this.params.prefix) {
if (params.gt === undefined || this.params.prefix > params.gt) {
delete params.gt;
params.gte = this.params.prefix;
}
params.lt = inc(this.params.prefix);
}
return params;
}
genMDParamsV1() {
const v0params = this.genMDParamsV0();
return listingParamsMasterKeysV0ToV1(v0params);
}
/**
@@ -78,19 +122,27 @@ class MultipartUploads {
}
}
getObjectKeyV0(obj) {
return obj.key;
}
getObjectKeyV1(obj) {
return obj.key.slice(DbPrefixes.Master.length);
}
/**
* This function applies filter on each element
* @param {String} obj - The key and value of the element
* @return {Boolean} - True: Continue, False: Stop
* @return {number} - > 0: Continue, < 0: Stop
*/
filter(obj) {
// Check first in case of maxkeys = 0
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 => IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return false;
return FILTER_END;
}
const key = obj.key;
const key = this.getObjectKey(obj);
const value = obj.value;
if (this.delimiter) {
const mpuPrefixSlice = `overview${this.splitter}`.length;
@@ -107,7 +159,11 @@ class MultipartUploads {
} else {
this.addUpload(value);
}
return true;
return FILTER_ACCEPT;
}
skipping() {
return '';
}
/**


@@ -1,12 +1,14 @@
'use strict'; // eslint-disable-line strict
const checkLimit = require('./tools').checkLimit;
const Extension = require('./Extension').default;
const { checkLimit, FILTER_END, FILTER_ACCEPT } = require('./tools');
const DEFAULT_MAX_KEYS = 10000;
/**
* Class of an extension doing the simple listing
*/
class List {
class List extends Extension {
/**
* Constructor
* Set the logger and the res
@@ -15,7 +17,7 @@ class List {
* @return {undefined}
*/
constructor(parameters, logger) {
this.logger = logger;
super(parameters, logger);
this.res = [];
if (parameters) {
this.maxKeys = checkLimit(parameters.maxKeys, DEFAULT_MAX_KEYS);
@@ -25,20 +27,45 @@ class List {
this.keys = 0;
}
genMDParams() {
const params = this.parameters ? {
gt: this.parameters.gt,
gte: this.parameters.gte || this.parameters.start,
lt: this.parameters.lt,
lte: this.parameters.lte || this.parameters.end,
keys: this.parameters.keys,
values: this.parameters.values,
} : {};
Object.keys(params).forEach(key => {
if (params[key] === null || params[key] === undefined) {
delete params[key];
}
});
return params;
}
/**
* Function apply on each element
* Just add it to the array
* @param {Object} elem - The data from the database
* @return {Boolean} - True = continue the stream
* @return {number} - > 0 : continue listing
* < 0 : listing done
*/
filter(elem) {
// Check first in case of maxkeys <= 0
if (this.keys >= this.maxKeys) {
return false;
return FILTER_END;
}
if (typeof elem === 'object') {
this.res.push({
key: elem.key,
value: this.trimMetadata(elem.value),
});
} else {
this.res.push(elem);
}
this.res.push(elem);
this.keys++;
return true;
return FILTER_ACCEPT;
}
/**


@@ -1,16 +1,10 @@
'use strict'; // eslint-disable-line strict
/**
* Find the next delimiter in the path
*
* @param {string} key - path of the object
* @param {string} delimiter - string to find
* @param {number} index - index to start at
* @return {number} delimiterIndex - returns -1 in case no delimiter is found
*/
function nextDelimiter(key, delimiter, index) {
return key.indexOf(delimiter, index);
}
const Extension = require('./Extension').default;
const { inc, listingParamsMasterKeysV0ToV1,
FILTER_END, FILTER_ACCEPT, FILTER_SKIP } = require('./tools');
const VSConst = require('../../versioning/constants').VersioningConstants;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
/**
* Find the common prefix in the path
@@ -36,38 +30,103 @@ function getCommonPrefix(key, delimiter, delimiterIndex) {
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
class Delimiter {
class Delimiter extends Extension {
/**
* Create a new Delimiter instance
* @constructor
* @param {Object} parameters - listing parameters
* @param {String} parameters.delimiter - delimiter per amazon format
* @param {String} parameters.start - prefix per amazon format
* @param {String} [parameters.gt] - NextMarker per amazon format
* @param {Number} [parameters.maxKeys] - number of keys to list
* @param {Object} parameters - listing parameters
* @param {String} [parameters.delimiter] - delimiter per amazon
* format
* @param {String} [parameters.prefix] - prefix per amazon
* format
* @param {String} [parameters.marker] - marker per amazon
* format
* @param {Number} [parameters.maxKeys] - number of keys to list
* @param {Boolean} [parameters.v2] - whether the v2 listing
* format is used
* @param {String} [parameters.startAfter] - marker per amazon
* format
* @param {String} [parameters.continuationToken] - obfuscated amazon
* token
* @param {Boolean} [parameters.alphabeticalOrder] - whether the result
* is alphabetically ordered
* @param {RequestLogger} logger - The logger of the
* request
* @param {String} [vFormat] - versioning key format
*/
constructor(parameters) {
constructor(parameters, logger, vFormat) {
super(parameters, logger);
// original listing parameters
this.delimiter = parameters.delimiter;
this.prefix = parameters.prefix;
this.marker = parameters.marker;
this.maxKeys = parameters.maxKeys || 1000;
this.startAfter = parameters.startAfter;
this.continuationToken = parameters.continuationToken;
this.alphabeticalOrder =
typeof parameters.alphabeticalOrder !== 'undefined' ?
parameters.alphabeticalOrder : true;
this.vFormat = vFormat || BucketVersioningKeyFormat.v0;
// results
this.CommonPrefixes = [];
this.Contents = [];
this.IsTruncated = false;
this.NextMarker = parameters.gt;
this.keys = 0;
this.NextMarker = parameters.marker;
this.NextContinuationToken =
parameters.continuationToken || parameters.startAfter;
this.startMarker = parameters.v2 ? 'startAfter' : 'marker';
this.continueMarker = parameters.v2 ? 'continuationToken' : 'marker';
this.nextContinueMarker = parameters.v2 ?
'NextContinuationToken' : 'NextMarker';
this.delimiter = parameters.delimiter;
this.prefix = parameters.start;
this.maxKeys = parameters.maxKeys || 1000;
if (this.delimiter !== undefined &&
this.NextMarker !== undefined &&
this.NextMarker.startsWith(this.prefix || '')) {
this[this.nextContinueMarker] !== undefined &&
this[this.nextContinueMarker].startsWith(this.prefix || '')) {
const nextDelimiterIndex =
this.NextMarker.indexOf(this.delimiter,
this.prefix
? this.prefix.length
: 0);
this.NextMarker =
this.NextMarker.slice(0, nextDelimiterIndex +
this.delimiter.length);
this[this.nextContinueMarker].indexOf(this.delimiter,
this.prefix ? this.prefix.length : 0);
this[this.nextContinueMarker] =
this[this.nextContinueMarker].slice(0, nextDelimiterIndex +
this.delimiter.length);
}
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
getObjectKey: this.getObjectKeyV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
getObjectKey: this.getObjectKeyV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
genMDParamsV0() {
const params = {};
if (this.prefix) {
params.gte = this.prefix;
params.lt = inc(this.prefix);
}
const startVal = this[this.continueMarker] || this[this.startMarker];
if (startVal) {
if (params.gte && params.gte > startVal) {
return params;
}
delete params.gte;
params.gt = startVal;
}
return params;
}
genMDParamsV1() {
const params = this.genMDParamsV0();
return listingParamsMasterKeysV0ToV1(params);
}
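As a worked example (illustrative values, not from the change): with prefix 'foo/' and marker 'foo/bar', genMDParamsV0() first derives the prefix bounds, then replaces the lower bound with the stricter marker:

// prefix: 'foo/', marker: 'foo/bar' (illustrative)
// step 1: params = { gte: 'foo/', lt: 'foo0' }   // inc('foo/') === 'foo0'
// step 2: 'foo/bar' > 'foo/', so the marker wins:
//         params = { gt: 'foo/bar', lt: 'foo0' }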
/**
@@ -90,34 +149,24 @@ class Delimiter {
* Increment the keys counter
* @param {String} key - The key to add
* @param {String} value - The value of the key
* @return {Boolean} - indicates if iteration should continue
* @return {number} - indicates if iteration should continue
*/
addContents(key, value) {
if (this._reachedMaxKeys()) {
return false;
return FILTER_END;
}
const tmp = JSON.parse(value);
this.Contents.push({
key,
value: {
Size: tmp['content-length'],
ETag: tmp['content-md5'],
LastModified: tmp['last-modified'],
Owner: {
DisplayName: tmp['owner-display-name'],
ID: tmp['owner-id'],
},
StorageClass: tmp['x-amz-storage-class'],
Initiated: tmp.initiated,
Initiator: tmp.initiator,
EventualStorageBucket: tmp.eventualStorageBucket,
partLocations: tmp.partLocations,
creationDate: tmp.creationDate,
},
});
this.NextMarker = key;
this.Contents.push({ key, value: this.trimMetadata(value) });
this[this.nextContinueMarker] = key;
++this.keys;
return true;
return FILTER_ACCEPT;
}
getObjectKeyV0(obj) {
return obj.key;
}
getObjectKeyV1(obj) {
return obj.key.slice(DbPrefixes.Master.length);
}
/**
@@ -129,21 +178,20 @@ class Delimiter {
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {Boolean} - indicates if iteration should continue
* @return {number} - indicates if iteration should continue
*/
filter(obj) {
const key = obj.key;
const key = this.getObjectKey(obj);
const value = obj.value;
if ((this.prefix && !key.startsWith(this.prefix))
|| (typeof this.NextMarker === 'string' &&
key <= this.NextMarker)) {
return true;
|| (this.alphabeticalOrder
&& typeof this[this.nextContinueMarker] === 'string'
&& key <= this[this.nextContinueMarker])) {
return FILTER_SKIP;
}
if (this.delimiter) {
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = nextDelimiter(key,
this.delimiter,
baseIndex);
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex === -1) {
return this.addContents(key, value);
}
@@ -161,15 +209,38 @@ class Delimiter {
addCommonPrefix(key, index) {
const commonPrefix = getCommonPrefix(key, this.delimiter, index);
if (this.CommonPrefixes.indexOf(commonPrefix) === -1
&& this.NextMarker !== commonPrefix) {
&& this[this.nextContinueMarker] !== commonPrefix) {
if (this._reachedMaxKeys()) {
return false;
return FILTER_END;
}
this.CommonPrefixes.push(commonPrefix);
this.NextMarker = commonPrefix;
this[this.nextContinueMarker] = commonPrefix;
++this.keys;
return FILTER_ACCEPT;
}
return true;
return FILTER_SKIP;
}
/**
* Skipping hint for the caller (e.g. repd) when listing a bucket in
* v0 versioning key format.
*
* @return {string} - the current marker (NextMarker or
* NextContinuationToken) past which the listing may safely resume
*/
skippingV0() {
return this[this.nextContinueMarker];
}
/**
* Skipping hint for the caller (e.g. repd) when listing a bucket in
* v1 versioning key format.
*
* @return {string} - the current marker prefixed with the master key
* prefix, past which the listing may safely resume
*/
skippingV1() {
return DbPrefixes.Master + this[this.nextContinueMarker];
}
/**
@@ -183,15 +254,20 @@ class Delimiter {
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
return {
const result = {
CommonPrefixes: this.CommonPrefixes,
Contents: this.Contents,
IsTruncated: this.IsTruncated,
NextMarker: (this.IsTruncated && this.delimiter)
? this.NextMarker
: undefined,
Delimiter: this.delimiter,
};
if (this.parameters.v2) {
result.NextContinuationToken = this.IsTruncated
? this.NextContinuationToken : undefined;
} else {
result.NextMarker = (this.IsTruncated && this.delimiter)
? this.NextMarker : undefined;
}
return result;
}
}
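To illustrate how the skipping() hint is meant to be consumed (driver semantics assumed here, not part of this change): on FILTER_SKIP the caller may seek past the returned range instead of iterating key by key.

const { FILTER_SKIP, SKIP_NONE, inc } = require('./tools');

// Hypothetical helper: returns the key to seek to, or null to keep
// iterating normally.
function seekHint(extension, entry) {
    if (extension.filter(entry) !== FILTER_SKIP) {
        return null;
    }
    const range = extension.skipping();
    if (range === SKIP_NONE) {
        return null;
    }
    return inc(range); // first key past the skipped prefix/range
}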

View File

@@ -1,84 +1,196 @@
'use strict'; // eslint-disable-line strict
const Delimiter = require('./delimiter').Delimiter;
const VSUtils = require('../../versioning/utils').VersioningUtils;
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { BucketVersioningKeyFormat } = VSConst;
const { FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } = require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes } = VSConst;
/**
* Extended delimiter class for versioning.
* Handle object listing with parameters. This extends the base class Delimiter
* to return the raw master versions of existing objects.
*/
class DelimiterMaster extends Delimiter {
/**
* Overriding the base function to extract the versionId of the entry.
*
* @param {string} key - the key of the entry
* @param {object} value - the value of the entry
* @return {undefined}
* Delimiter listing of master versions.
* @param {Object} parameters - listing parameters
* @param {String} parameters.delimiter - delimiter per amazon format
* @param {String} parameters.prefix - prefix per amazon format
* @param {String} parameters.marker - marker per amazon format
* @param {Number} parameters.maxKeys - number of keys to list
* @param {Boolean} parameters.v2 - indicates whether v2 format
* @param {String} parameters.startAfter - marker per amazon v2 format
* @param {String} parameters.continuationToken - obfuscated amazon token
* @param {RequestLogger} logger - The logger of the request
* @param {String} [vFormat] - versioning key format
*/
addContents(key, value) {
this.Contents.push({
key,
value: {
Size: value['content-length'],
ETag: value['content-md5'],
LastModified: value['last-modified'],
// <versioning>
VersionId: VSUtils.getts(value),
// </versioning>
Owner: {
DisplayName: value['owner-display-name'],
ID: value['owner-id'],
},
StorageClass: value['x-amz-storage-class'],
Initiated: value.initiated,
Initiator: value.initiator,
EventualStorageBucket: value.eventualStorageBucket,
partLocations: value.partLocations,
creationDate: value.creationDate,
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
// non-PHD master version or a version whose master is a PHD version
this.prvKey = undefined;
this.prvPHDKey = undefined;
this.inReplayPrefix = false;
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
filter: this.filterV0,
skipping: this.skippingV0,
},
});
this.NextMarker = key;
++this.keys;
[BucketVersioningKeyFormat.v1]: {
filter: this.filterV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
/**
* Overriding the filter function that formats the
* listing results based on the listing algorithm.
*
* @param {object} obj - metadata entry in the form of { key, value }
* @return {boolean} - continue filtering or return the formatted list
* Filter to apply on each iteration for buckets in v0 format,
* based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj) {
// Check first in case of maxkeys <= 0
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 => IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return false;
filterV0(obj) {
let key = obj.key;
const value = obj.value;
if (key.startsWith(DbPrefixes.Replay)) {
this.inReplayPrefix = true;
return FILTER_SKIP;
}
// <versioning>
const value = VSUtils.decodeVersion(obj.value);
// ignore it if the master version is a delete marker
if (VSUtils.isDeleteMarker(value)) {
return true;
this.inReplayPrefix = false;
/* Skip keys not starting with the prefix or not alphabetically
* ordered. */
if ((this.prefix && !key.startsWith(this.prefix))
|| (typeof this[this.nextContinueMarker] === 'string' &&
key <= this[this.nextContinueMarker])) {
return FILTER_SKIP;
}
// use the original object name for delimitering to work correctly
const key = VSUtils.getObjectNameFromMasterKey(obj.key);
// </versioning>
if (this.delimiter) {
const commonPrefixIndex =
key.indexOf(this.delimiter, this.searchStart);
if (commonPrefixIndex === -1) {
this.addContents(key, value);
} else {
this.addCommonPrefix(
key.substring(0, commonPrefixIndex + this.delimLen));
/* Skip version keys (<key><versionIdSeparator><version>) if we already
* have a master version. */
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex >= 0) {
key = key.slice(0, versionIdIndex);
/* - key === this.prvKey is triggered when a master version has
* been accepted for this key,
* - key === this.NextMarker or this.NextContinuationToken is
* triggered when a listing page ends on an accepted object and the
* next page starts with a version of this object. In that case
* prvKey defaults to undefined in the constructor, and comparing to
* NextMarker is the only way to know this version should not be
* accepted. This test is not redundant with the one at the
* beginning of this function: here we compare the key without the
* version suffix,
* - key.startsWith(NextMarker) happens because we set NextMarker
* to the common prefix instead of the whole key value.
* (TODO: remove this test once ZENKO-1048 is fixed)
* */
if (key === this.prvKey || key === this[this.nextContinueMarker] ||
(this.delimiter &&
key.startsWith(this[this.nextContinueMarker]))) {
/* master version already filtered */
return FILTER_SKIP;
}
} else {
this.addContents(key, value);
}
return true;
if (Version.isPHD(value)) {
/* master version is a PHD version: wait for the next one:
* - set prvKey to undefined so the next version is not skipped,
* - return FILTER_ACCEPT so the caller does not skip the next
* values in range (skip-scan mechanism in metadata backends such
* as Metadata or MongoClient). */
this.prvKey = undefined;
this.prvPHDKey = key;
return FILTER_ACCEPT;
}
if (Version.isDeleteMarker(value)) {
/* This entry is a deleteMarker which has not been filtered by the
* version test. Either:
* - it is a deleteMarker on the master version, we want to SKIP
* all the following entries with this key (no master version),
* - or a deleteMarker following a PHD (setting prvKey to undefined
* when an entry is a PHD avoids the skip on version for the
* next entry). In that case we expect the master version to
* follow. */
if (key === this.prvPHDKey) {
this.prvKey = undefined;
return FILTER_ACCEPT;
}
this.prvKey = key;
return FILTER_SKIP;
}
this.prvKey = key;
if (this.delimiter) {
// check if the key has the delimiter
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex >= 0) {
// try to add the prefix to the list
return this.addCommonPrefix(key, delimiterIndex);
}
}
return this.addContents(key, value);
}
/**
* Filter to apply on each iteration for buckets in v1 format,
* based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV1(obj) {
// Filtering master keys in v1 simply lists the master
// keys, as the state of the version keys does not change the
// result, so we can use the Delimiter method directly.
return super.filter(obj);
}
skippingBase() {
if (this[this.nextContinueMarker]) {
// next marker or next continuation token:
// - 'foo/' : skip past 'foo/'
// - 'foo'  : skip past 'foo' + VID_SEP
const index = this[this.nextContinueMarker].
lastIndexOf(this.delimiter);
if (index === this[this.nextContinueMarker].length - 1) {
return this[this.nextContinueMarker];
}
return this[this.nextContinueMarker] + VID_SEP;
}
return SKIP_NONE;
}
skippingV0() {
if (this.inReplayPrefix) {
return DbPrefixes.Replay;
}
return this.skippingBase();
}
skippingV1() {
const skipTo = this.skippingBase();
if (skipTo === SKIP_NONE) {
return SKIP_NONE;
}
return DbPrefixes.Master + skipTo;
}
}
module.exports = {
DelimiterMaster,
};
module.exports = { DelimiterMaster };
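To make the v0 state machine concrete, here is an illustrative key sequence (values assumed; VID_SEP rendered as '\0') and the decision filterV0 takes for each entry:

// 'foo'       master version       -> addContents, prvKey = 'foo'
// 'foo\0v2'   version key of 'foo' -> FILTER_SKIP (master already listed)
// 'foo\0v1'   older version        -> FILTER_SKIP
//
// if the master is a PHD placeholder instead:
// 'foo' (PHD) -> FILTER_ACCEPT, prvKey cleared so that the next
//                version key 'foo\0v2' is listed in its place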

View File

@@ -1,106 +1,279 @@
'use strict'; // eslint-disable-line strict
const Delimiter = require('./delimiter').Delimiter;
const VSUtils = require('../../versioning/utils').VersioningUtils;
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { inc, FILTER_END, FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } =
require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
const { DbPrefixes, BucketVersioningKeyFormat } = VSConst;
/**
* Extended delimiter class for versioning.
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
class DelimiterVersions extends Delimiter {
/**
* Constructor of the extension
* Init and check parameters
* @param {Object} parameters - parameters sent to DBD
* @param {RequestLogger} logger - werelogs request logger
* @param {object} latestVersions - latest versions of some keys
* @return {undefined}
*/
constructor(parameters, logger, latestVersions) {
super(parameters, logger);
this.NextVersionMarker = undefined; // next version marker
this.latestVersions = undefined; // final list of the latest versions
this._latestVersions = latestVersions; // reserved for caching
constructor(parameters, logger, vFormat) {
super(parameters, logger, vFormat);
// specific to version listing
this.keyMarker = parameters.keyMarker;
this.versionIdMarker = parameters.versionIdMarker;
// internal state
this.masterKey = undefined;
this.masterVersionId = undefined;
// listing results
this.NextMarker = parameters.keyMarker;
this.NextVersionIdMarker = undefined;
this.inReplayPrefix = false;
Object.assign(this, {
[BucketVersioningKeyFormat.v0]: {
genMDParams: this.genMDParamsV0,
filter: this.filterV0,
skipping: this.skippingV0,
},
[BucketVersioningKeyFormat.v1]: {
genMDParams: this.genMDParamsV1,
filter: this.filterV1,
skipping: this.skippingV1,
},
}[this.vFormat]);
}
/**
* Overriding the base function to not process the metadata entry here,
* leaving the job of extracting object's attributes to S3.
*
* @param {string} key - key of the entry
* @param {string} value - value of the entry
* @return {undefined}
*/
addContents(key, value) {
const components =
VSUtils.getObjectNameAndVersionIdFromVersionKey(key);
const objectName = components.objectName;
const versionId = components.versionId;
this.Contents.push({
key: objectName,
value,
});
this.NextMarker = objectName;
this.NextVersionMarker = versionId;
++this.keys;
// only include the latest versions of the keys in the resulting list
// this is not actually used now, it's reserved for caching in future
if (this._latestVersions) {
this.latestVersions[objectName] = this._latestVersions[objectName];
genMDParamsV0() {
const params = {};
if (this.parameters.prefix) {
params.gte = this.parameters.prefix;
params.lt = inc(this.parameters.prefix);
}
}
/**
* Overriding the base function to only do delimitering, not parsing value.
*
* @param {object} obj - the metadata entry in the form of { key, value }
* @return {boolean} - continue filtering or return the formatted list
*/
filter(obj) {
// Check first in case of maxkeys <= 0
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 => IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return false;
}
// <versioning>
const key = VSUtils.getObjectNameFromVersionKey(obj.key);
// </versioning>
if (this.delimiter) {
const commonPrefixIndex =
key.indexOf(this.delimiter, this.searchStart);
if (commonPrefixIndex === -1) {
this.addContents(obj.key, obj.value);
} else {
this.addCommonPrefix(key.substring(0,
commonPrefixIndex + this.delimLen));
if (this.parameters.keyMarker) {
if (params.gte && params.gte > this.parameters.keyMarker) {
return params;
}
delete params.gte;
if (this.parameters.versionIdMarker) {
// versionIdMarker should always come with keyMarker
// but not necessarily the other way around
params.gt = this.parameters.keyMarker
+ VID_SEP
+ this.parameters.versionIdMarker;
} else {
params.gt = inc(this.parameters.keyMarker + VID_SEP);
}
} else {
this.addContents(obj.key, obj.value);
}
return true;
return params;
}
genMDParamsV1() {
// return an array of two listing params sets to ask for
// synchronized listing of M and V ranges
const params = [{}, {}];
if (this.parameters.prefix) {
params[0].gte = DbPrefixes.Master + this.parameters.prefix;
params[0].lt = DbPrefixes.Master + inc(this.parameters.prefix);
params[1].gte = DbPrefixes.Version + this.parameters.prefix;
params[1].lt = DbPrefixes.Version + inc(this.parameters.prefix);
} else {
params[0].gte = DbPrefixes.Master;
params[0].lt = inc(DbPrefixes.Master); // stop after the last master key
params[1].gte = DbPrefixes.Version;
params[1].lt = inc(DbPrefixes.Version); // stop after the last version key
}
if (this.parameters.keyMarker) {
if (params[1].gte <= DbPrefixes.Version + this.parameters.keyMarker) {
delete params[0].gte;
delete params[1].gte;
params[0].gt = DbPrefixes.Master + inc(this.parameters.keyMarker + VID_SEP);
if (this.parameters.versionIdMarker) {
// versionIdMarker should always come with keyMarker
// but not necessarily the other way around
params[1].gt = DbPrefixes.Version
+ this.parameters.keyMarker
+ VID_SEP
+ this.parameters.versionIdMarker;
} else {
params[1].gt = DbPrefixes.Version
+ inc(this.parameters.keyMarker + VID_SEP);
}
}
}
return params;
}
/**
* This function formats the result to return
* @return {Object} - The result.
* Used to synchronize listing of M and V prefixes by object key
*
* @param {object} masterObj object listed from first range
* returned by genMDParamsV1() (the master keys range)
* @param {object} versionObj object listed from second range
* returned by genMDParamsV1() (the version keys range)
* @return {number} comparison result:
* * -1 if master key < version key
* * 1 if master key > version key
*/
compareObjects(masterObj, versionObj) {
const masterKey = masterObj.key.slice(DbPrefixes.Master.length);
const versionKey = versionObj.key.slice(DbPrefixes.Version.length);
return masterKey < versionKey ? -1 : 1;
}
/**
* Add a (key, versionId, value) tuple to the listing.
* Set the NextMarker to the current key
* Increment the keys counter
* @param {object} obj - the entry to add to the listing result
* @param {String} obj.key - The key to add
* @param {String} obj.versionId - versionId
* @param {String} obj.value - The value of the key
* @return {number} - indicates if iteration should continue
*/
addContents(obj) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.Contents.push({
key: obj.key,
value: this.trimMetadata(obj.value),
versionId: obj.versionId,
});
this.NextMarker = obj.key;
this.NextVersionIdMarker = obj.versionId;
++this.keys;
return FILTER_ACCEPT;
}
/**
* Filter to apply on each iteration if bucket is in v0
* versioning key format, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV0(obj) {
if (obj.key.startsWith(DbPrefixes.Replay)) {
this.inReplayPrefix = true;
return FILTER_SKIP;
}
this.inReplayPrefix = false;
if (Version.isPHD(obj.value)) {
// return accept to avoid skipping the next values in range
return FILTER_ACCEPT;
}
return this.filterCommon(obj.key, obj.value);
}
/**
* Filter to apply on each iteration if bucket is in v1
* versioning key format, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filterV1(obj) {
// this function receives both M and V keys, but their prefix
// length is the same so we can remove their prefix without
// looking at the type of key
return this.filterCommon(obj.key.slice(DbPrefixes.Master.length),
obj.value);
}
filterCommon(key, value) {
if (this.prefix && !key.startsWith(this.prefix)) {
return FILTER_SKIP;
}
let nonversionedKey;
let versionId = undefined;
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex < 0) {
nonversionedKey = key;
this.masterKey = key;
this.masterVersionId =
Version.from(value).getVersionId() || 'null';
versionId = this.masterVersionId;
} else {
nonversionedKey = key.slice(0, versionIdIndex);
versionId = key.slice(versionIdIndex + 1);
// skip a version key if it is the master version
if (this.masterKey === nonversionedKey && this.masterVersionId === versionId) {
return FILTER_SKIP;
}
this.masterKey = undefined;
this.masterVersionId = undefined;
}
if (this.delimiter) {
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = nonversionedKey.indexOf(this.delimiter, baseIndex);
if (delimiterIndex >= 0) {
return this.addCommonPrefix(nonversionedKey, delimiterIndex);
}
}
return this.addContents({ key: nonversionedKey, value, versionId });
}
skippingV0() {
if (this.inReplayPrefix) {
return DbPrefixes.Replay;
}
if (this.NextMarker) {
const index = this.NextMarker.lastIndexOf(this.delimiter);
if (index === this.NextMarker.length - 1) {
return this.NextMarker;
}
}
return SKIP_NONE;
}
skippingV1() {
const skipV0 = this.skippingV0();
if (skipV0 === SKIP_NONE) {
return SKIP_NONE;
}
// skip to the same object key in both M and V range listings
return [DbPrefixes.Master + skipV0,
DbPrefixes.Version + skipV0];
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done, doesn't show a NextMarker field if the output
* isn't truncated
* @return {Object} - following amazon format
*/
result() {
// Unset NextMarker when not truncated
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
return {
CommonPrefixes: this.CommonPrefixes,
Contents: this.Contents,
// <versioning>
LatestVersions: this.latestVersions,
// </versioning>
Versions: this.Contents,
IsTruncated: this.IsTruncated,
NextMarker: this.IsTruncated ? this.NextMarker : undefined,
NextVersionMarker: this.IsTruncated ?
this.NextVersionMarker : undefined,
NextKeyMarker: this.IsTruncated ? this.NextMarker : undefined,
NextVersionIdMarker: this.IsTruncated ?
this.NextVersionIdMarker : undefined,
Delimiter: this.delimiter,
};
}
}
module.exports = {
DelimiterVersions,
};
module.exports = { DelimiterVersions };
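As a worked example of the two synchronized ranges (prefix values assumed for readability; the real DbPrefixes values may differ): with prefix 'foo/', genMDParamsV1() returns:

// assuming DbPrefixes.Master === 'M' and DbPrefixes.Version === 'V':
// params[0] = { gte: 'Mfoo/', lt: 'Mfoo0' }  // master keys range
// params[1] = { gte: 'Vfoo/', lt: 'Vfoo0' }  // version keys range
// the two scans are then interleaved by object key (see compareObjects
// above and the merge stream further below)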

View File

@@ -1,3 +1,11 @@
const { DbPrefixes } = require('../../versioning/constants').VersioningConstants;
// constants for extensions
const SKIP_NONE = undefined; // to be in line with the values of NextMarker
const FILTER_ACCEPT = 1;
const FILTER_SKIP = 0;
const FILTER_END = -1;
/**
* This function checks if a number is valid
* To be valid, a number needs to be an integer and lower than the limit
@@ -13,6 +21,50 @@ function checkLimit(number, limit) {
return valid ? parsed : limit;
}
/**
* Increment the charCode of the last character of a valid string.
*
* @param {string} str - the input string
* @return {string} - the incremented string
* or the input if it is not valid
*/
function inc(str) {
return str ? (str.slice(0, str.length - 1) +
String.fromCharCode(str.charCodeAt(str.length - 1) + 1)) : str;
}
/**
* Transform listing parameters for v0 versioning key format to make
* it compatible with v1 format
*
* @param {object} v0params - listing parameters for v0 format
* @return {object} - listing parameters for v1 format
*/
function listingParamsMasterKeysV0ToV1(v0params) {
const v1params = Object.assign({}, v0params);
if (v0params.gt !== undefined) {
v1params.gt = `${DbPrefixes.Master}${v0params.gt}`;
} else if (v0params.gte !== undefined) {
v1params.gte = `${DbPrefixes.Master}${v0params.gte}`;
} else {
v1params.gte = DbPrefixes.Master;
}
if (v0params.lt !== undefined) {
v1params.lt = `${DbPrefixes.Master}${v0params.lt}`;
} else if (v0params.lte !== undefined) {
v1params.lte = `${DbPrefixes.Master}${v0params.lte}`;
} else {
v1params.lt = inc(DbPrefixes.Master); // stop after the last master key
}
return v1params;
}
module.exports = {
checkLimit,
inc,
listingParamsMasterKeysV0ToV1,
SKIP_NONE,
FILTER_END,
FILTER_SKIP,
FILTER_ACCEPT,
};
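A short illustration of the helpers above (values assumed; DbPrefixes.Master is shortened to 'M' for readability):

const { inc, listingParamsMasterKeysV0ToV1 } = require('./tools');

inc('foo/'); // => 'foo0': '/' (0x2f) bumped to '0' (0x30), the exclusive
             // upper bound for a 'foo/' prefix scan
inc('');     // => '': falsy input is returned unchanged

listingParamsMasterKeysV0ToV1({ gte: 'foo/', lt: 'foo0' });
// => { gte: 'Mfoo/', lt: 'Mfoo0' } with DbPrefixes.Master === 'M'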

View File

@@ -0,0 +1,106 @@
const stream = require('stream');
class MergeStream extends stream.Readable {
constructor(stream1, stream2, compare) {
super({ objectMode: true });
this._compare = compare;
this._streams = [stream1, stream2];
// peekItems elements represent the latest item consumed from
// the respective input stream but not yet pushed. It can also
// be one of the following special values:
// - undefined: stream hasn't started emitting items
// - null: EOF reached and no more item to peek
this._peekItems = [undefined, undefined];
this._streamEof = [false, false];
this._streamToResume = null;
stream1.on('data', item => this._onItem(stream1, item, 0, 1));
stream1.once('end', () => this._onEnd(stream1, 0, 1));
stream1.once('error', err => this._onError(stream1, err, 0, 1));
stream2.on('data', item => this._onItem(stream2, item, 1, 0));
stream2.once('end', () => this._onEnd(stream2, 1, 0));
stream2.once('error', err => this._onError(stream2, err, 1, 0));
}
_read() {
if (this._streamToResume) {
this._streamToResume.resume();
this._streamToResume = null;
}
}
_destroy(err, callback) {
for (let i = 0; i < 2; ++i) {
if (!this._streamEof[i]) {
this._streams[i].destroy();
}
}
callback();
}
_onItem(myStream, myItem, myIndex, otherIndex) {
this._peekItems[myIndex] = myItem;
const otherItem = this._peekItems[otherIndex];
if (otherItem === undefined) {
// wait for the other stream to wake up
return myStream.pause();
}
if (otherItem === null || this._compare(myItem, otherItem) <= 0) {
if (!this.push(myItem)) {
myStream.pause();
this._streamToResume = myStream;
}
return undefined;
}
const otherStream = this._streams[otherIndex];
const otherMore = this.push(otherItem);
if (this._streamEof[otherIndex]) {
this._peekItems[otherIndex] = null;
return this.push(myItem);
}
myStream.pause();
if (otherMore) {
return otherStream.resume();
}
this._streamToResume = otherStream;
return undefined;
}
_onEnd(myStream, myIndex, otherIndex) {
this._streamEof[myIndex] = true;
if (this._peekItems[myIndex] === undefined) {
this._peekItems[myIndex] = null;
}
const myItem = this._peekItems[myIndex];
const otherItem = this._peekItems[otherIndex];
if (otherItem === undefined) {
// wait for the other stream to wake up
return undefined;
}
if (otherItem === null) {
return this.push(null);
}
if (myItem === null || this._compare(myItem, otherItem) <= 0) {
this.push(otherItem);
this._peekItems[myIndex] = null;
}
if (this._streamEof[otherIndex]) {
return this.push(null);
}
const otherStream = this._streams[otherIndex];
return otherStream.resume();
}
_onError(myStream, err, myIndex, otherIndex) {
myStream.destroy();
if (this._streams[otherIndex]) {
this._streams[otherIndex].destroy();
}
this.emit('error', err);
}
}
module.exports = MergeStream;
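A minimal usage sketch (input data assumed): merging two already-sorted object streams with a comparator.

const stream = require('stream');
const MergeStream = require('./MergeStream'); // path assumed

const s1 = stream.Readable.from([{ key: 'a' }, { key: 'c' }]);
const s2 = stream.Readable.from([{ key: 'b' }, { key: 'd' }]);
const merged = new MergeStream(s1, s2, (x, y) => (x.key < y.key ? -1 : 1));
merged.on('data', item => console.log(item.key)); // a, b, c, d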

View File

@@ -49,6 +49,14 @@ class AuthInfo {
isRequesterPublicUser() {
return this.canonicalID === constants.publicId;
}
isRequesterAServiceAccount() {
return this.canonicalID.startsWith(
`${constants.zenkoServiceAccount}/`);
}
isRequesterThisServiceAccount(serviceName) {
return this.canonicalID ===
`${constants.zenkoServiceAccount}/${serviceName}`;
}
}
module.exports = AuthInfo;
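For illustration (the constants.zenkoServiceAccount value below is assumed; check lib/constants.js for the real one):

// assuming constants.zenkoServiceAccount === 'http://acs.zenko.io/accounts/service'
// and canonicalID === 'http://acs.zenko.io/accounts/service/clueso':
//   isRequesterAServiceAccount()              -> true
//   isRequesterThisServiceAccount('clueso')   -> true
//   isRequesterThisServiceAccount('backbeat') -> false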

lib/auth/Vault.js
View File

@@ -0,0 +1,282 @@
const errors = require('../errors');
const AuthInfo = require('./AuthInfo');
/** vaultSignatureCb parses the message from Vault and instantiates AuthInfo
* @param {object} err - error from vault
* @param {object} authInfo - info from vault
* @param {object} log - log for request
* @param {function} callback - callback to authCheck functions
* @param {object} [streamingV4Params] - present if v4 signature;
* items used to calculate signature on chunks if streaming auth
* @return {undefined}
*/
function vaultSignatureCb(err, authInfo, log, callback, streamingV4Params) {
// vaultclient API guarantees that it returns:
// - either `err`, an Error object with `code` and `message` properties set
// - or `err == null` and `authInfo` is an object with `message.code` and
// `message.message` properties set.
if (err) {
log.debug('received error message from auth provider',
{ errorMessage: err });
return callback(err);
}
log.debug('received info from Vault', { authInfo });
const info = authInfo.message.body;
const userInfo = new AuthInfo(info.userInfo);
const authorizationResults = info.authorizationResults;
const auditLog = { accountDisplayName: userInfo.getAccountDisplayName() };
const iamDisplayName = userInfo.getIAMdisplayName();
if (iamDisplayName) {
auditLog.IAMdisplayName = iamDisplayName;
}
log.addDefaultFields(auditLog);
return callback(null, userInfo, authorizationResults, streamingV4Params);
}
/**
* Class that provides common authentication methods against different
* authentication backends.
* @class Vault
*/
class Vault {
/**
* @constructor
* @param {object} client - authentication backend or vault client
* @param {string} implName - implementation name for auth backend
*/
constructor(client, implName) {
this.client = client;
this.implName = implName;
}
/**
* authenticateV2Request
*
* @param {object} params - the authentication parameters as returned by
* auth.extractParams
* @param {number} params.version - shall equal 2
* @param {string} params.data.accessKey - the user's accessKey
* @param {string} params.data.signatureFromRequest - the signature read
* from the request
* @param {string} params.data.stringToSign - the stringToSign
* @param {string} params.data.algo - the hashing algorithm used for the
* signature
* @param {string} params.data.authType - the type of authentication (query
* or header)
* @param {string} params.data.signatureVersion - the version of the
* signature (AWS or AWS4)
* @param {number} [params.data.signatureAge] - the age of the signature in
* ms
* @param {object} params.data.log - the logger object
* @param {RequestContext []} requestContexts - an array of RequestContext
* instances which contain information for policy authorization check
* @param {function} callback - callback with either error or user info
* @returns {undefined}
*/
authenticateV2Request(params, requestContexts, callback) {
params.log.debug('authenticating V2 request');
let serializedRCsArr;
if (requestContexts) {
serializedRCsArr = requestContexts.map(rc => rc.serialize());
}
this.client.verifySignatureV2(
params.data.stringToSign,
params.data.signatureFromRequest,
params.data.accessKey,
{
algo: params.data.algo,
reqUid: params.log.getSerializedUids(),
logger: params.log,
securityToken: params.data.securityToken,
requestContext: serializedRCsArr,
},
(err, userInfo) => vaultSignatureCb(err, userInfo,
params.log, callback)
);
}
/** authenticateV4Request
* @param {object} params - the authentication parameters as returned by
* auth.extractParams
* @param {number} params.version - shall equal 4
* @param {object} params.data.log - the logger object
* @param {string} params.data.accessKey - the user's accessKey
* @param {string} params.data.signatureFromRequest - the signature read
* from the request
* @param {string} params.data.region - the AWS region
* @param {string} params.data.stringToSign - the stringToSign
* @param {string} params.data.scopeDate - the timespan to allow the request
* @param {string} params.data.authType - the type of authentication (query
* or header)
* @param {string} params.data.signatureVersion - the version of the
* signature (AWS or AWS4)
* @param {number} params.data.signatureAge - the age of the signature in ms
* @param {number} params.data.timestamp - signature timestamp
* @param {string} params.data.credentialScope - credentialScope for signature
* @param {RequestContext[] | null} requestContexts - an array of
* RequestContext instances which contain information for policy
* authorization check, or null when authenticating a chunk in
* streaming v4 auth
* @param {function} callback - callback with either error or user info
* @return {undefined}
*/
authenticateV4Request(params, requestContexts, callback) {
params.log.debug('authenticating V4 request');
let serializedRCs;
if (requestContexts) {
serializedRCs = requestContexts.map(rc => rc.serialize());
}
const streamingV4Params = {
accessKey: params.data.accessKey,
signatureFromRequest: params.data.signatureFromRequest,
region: params.data.region,
scopeDate: params.data.scopeDate,
timestamp: params.data.timestamp,
credentialScope: params.data.credentialScope };
this.client.verifySignatureV4(
params.data.stringToSign,
params.data.signatureFromRequest,
params.data.accessKey,
params.data.region,
params.data.scopeDate,
{
reqUid: params.log.getSerializedUids(),
logger: params.log,
securityToken: params.data.securityToken,
requestContext: serializedRCs,
},
(err, userInfo) => vaultSignatureCb(err, userInfo,
params.log, callback, streamingV4Params)
);
}
/** getCanonicalIds -- call Vault to get canonicalIDs based on email
* addresses
* @param {array} emailAddresses - list of emailAddresses
* @param {object} log - log object
* @param {function} callback - callback with either error or an array
* of objects with each object containing the canonicalID and emailAddress
* of an account as properties
* @return {undefined}
*/
getCanonicalIds(emailAddresses, log, callback) {
log.trace('getting canonicalIDs from Vault based on emailAddresses',
{ emailAddresses });
this.client.getCanonicalIds(emailAddresses,
{ reqUid: log.getSerializedUids() },
(err, info) => {
if (err) {
log.debug('received error message from auth provider',
{ errorMessage: err });
return callback(err);
}
const infoFromVault = info.message.body;
log.trace('info received from vault', { infoFromVault });
const foundIds = [];
for (let i = 0; i < Object.keys(infoFromVault).length; i++) {
const key = Object.keys(infoFromVault)[i];
if (infoFromVault[key] === 'WrongFormat'
|| infoFromVault[key] === 'NotFound') {
return callback(errors.UnresolvableGrantByEmailAddress);
}
const obj = {};
obj.email = key;
obj.canonicalID = infoFromVault[key];
foundIds.push(obj);
}
return callback(null, foundIds);
});
}
/** getEmailAddresses -- call Vault to get email addresses based on
* canonicalIDs
* @param {array} canonicalIDs - list of canonicalIDs
* @param {object} log - log object
* @param {function} callback - callback with either error or an object
* with canonicalID keys and email address values
* @return {undefined}
*/
getEmailAddresses(canonicalIDs, log, callback) {
log.trace('getting emailAddresses from Vault based on canonicalIDs',
{ canonicalIDs });
this.client.getEmailAddresses(canonicalIDs,
{ reqUid: log.getSerializedUids() },
(err, info) => {
if (err) {
log.debug('received error message from vault',
{ errorMessage: err });
return callback(err);
}
const infoFromVault = info.message.body;
log.trace('info received from vault', { infoFromVault });
const result = {};
/* If the email address was not found in Vault, do not
send the canonicalID back to the API */
Object.keys(infoFromVault).forEach(key => {
if (infoFromVault[key] !== 'NotFound' &&
infoFromVault[key] !== 'WrongFormat') {
result[key] = infoFromVault[key];
}
});
return callback(null, result);
});
}
/** checkPolicies -- call Vault to evaluate policies
* @param {object} requestContextParams - parameters needed to construct
* requestContext in Vault
* @param {object} requestContextParams.constantParams - params that have
* the same value for each requestContext to be constructed in Vault
* @param {object} requestContextParams.paramaterize - params that have
* arrays as values since a requestContext needs to be constructed with
* each option in Vault
* @param {string} userArn - arn of requesting user
* @param {object} log - log object
* @param {function} callback - callback with either error or an array
* of authorization results
* @return {undefined}
*/
checkPolicies(requestContextParams, userArn, log, callback) {
log.trace('sending request context params to vault to evaluate ' +
'policies');
this.client.checkPolicies(requestContextParams, userArn, {
reqUid: log.getSerializedUids(),
}, (err, info) => {
if (err) {
log.debug('received error message from auth provider',
{ error: err });
return callback(err);
}
const result = info.message.body;
return callback(null, result);
});
}
checkHealth(log, callback) {
if (!this.client.healthcheck) {
const defResp = {};
defResp[this.implName] = { code: 200, message: 'OK' };
return callback(null, defResp);
}
return this.client.healthcheck(log.getSerializedUids(), (err, obj) => {
const respBody = {};
if (err) {
log.debug(`error from ${this.implName}`, { error: err });
respBody[this.implName] = {
error: err,
};
// error returned as null so async parallel doesn't return
// before all backends are checked
return callback(null, respBody);
}
respBody[this.implName] = {
code: 200,
message: 'OK',
body: obj,
};
return callback(null, respBody);
});
}
}
module.exports = Vault;
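A minimal wiring sketch (client construction and logger assumed; see the in-memory backend further below for one possible client):

const Vault = require('./Vault'); // path assumed

const vault = new Vault(authBackendClient, 'vaultMem'); // client assumed
vault.getCanonicalIds(['dev@example.com'], log, (err, accounts) => {
    // on success: [{ email: 'dev@example.com', canonicalID: '...' }]
});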

View File

@@ -1,15 +1,21 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
const errors = require('../errors');
const queryString = require('querystring');
const AuthInfo = require('./AuthInfo');
const v2 = require('./v2/authV2');
const v4 = require('./v4/authV4');
const constants = require('../constants');
const constructStringToSignV2 = require('./v2/constructStringToSign');
const constructStringToSignV4 = require('./v4/constructStringToSign');
const convertUTCtoISO8601 = require('./v4/timeUtils').convertUTCtoISO8601;
const crypto = require('crypto');
const vaultUtilities = require('./in_memory/vaultUtilities');
const backend = require('./in_memory/Backend');
const validateAuthConfig = require('./in_memory/validateAuthConfig');
const AuthLoader = require('./in_memory/AuthLoader');
const Vault = require('./Vault');
let vault = null;
const auth = {};
const checkFunctions = {
@@ -65,7 +71,7 @@ function extractParams(request, log, awsService, data) {
} else if (authHeader.startsWith('AWS4')) {
version = 'v4';
} else {
log.trace('missing authorization security header',
log.trace('invalid authorization security header',
{ header: authHeader });
return { err: errors.AccessDenied };
}
@@ -118,6 +124,7 @@ function doAuth(request, log, cb, awsService, requestContexts) {
requestContext.setSignatureVersion(res.params
.data.signatureVersion);
requestContext.setSignatureAge(res.params.data.signatureAge);
requestContext.setSecurityToken(res.params.data.securityToken);
});
}
@@ -137,7 +144,6 @@ function doAuth(request, log, cb, awsService, requestContexts) {
return cb(errors.InternalError);
}
/**
* This function will generate a version 4 header
*
@@ -147,10 +153,11 @@ function doAuth(request, log, cb, awsService, requestContexts) {
* @param {string} accessKey - the accessKey
* @param {string} secretKeyValue - the secretKey
* @param {string} awsService - Aws service related
* @param {string} [proxyPath] - path that gets proxied by reverse proxy
* @return {undefined}
*/
function generateV4Headers(request, data, accessKey, secretKeyValue,
awsService) {
awsService, proxyPath) {
Object.assign(request, { headers: {} });
const amzDate = convertUTCtoISO8601(Date.now());
// get date without time
@@ -181,8 +188,8 @@ function generateV4Headers(request, data, accessKey, secretKeyValue,
|| headerName === 'host'
).sort().join(';');
const params = { request, signedHeaders, payloadChecksum,
credentialScope, timestamp, query: data,
awsService: service };
credentialScope, timestamp, query: data,
awsService: service, proxyPath };
const stringToSign = constructStringToSignV4(params);
const signingKey = vaultUtilities.calculateSigningKey(secretKeyValue,
region,
@@ -205,5 +212,13 @@ module.exports = {
},
client: {
generateV4Headers,
constructStringToSignV2,
},
inMemory: {
backend,
validateAuthConfig,
AuthLoader,
},
AuthInfo,
Vault,
};

View File

@@ -0,0 +1,223 @@
const fs = require('fs');
const glob = require('simple-glob');
const joi = require('@hapi/joi');
const werelogs = require('werelogs');
const ARN = require('../../models/ARN');
/**
* Load authentication information from files or pre-loaded account
* objects
*
* @class AuthLoader
*/
class AuthLoader {
constructor(logApi) {
this._log = new (logApi || werelogs).Logger('S3');
this._authData = { accounts: [] };
// null: unknown validity, true/false: valid or invalid
this._isValid = null;
this._joiKeysValidator = joi.array()
.items({
access: joi.string().required(),
secret: joi.string().required(),
})
.required();
const accountsJoi = joi.array()
.items({
name: joi.string().required(),
email: joi.string().email().required(),
arn: joi.string().required(),
canonicalID: joi.string().required(),
shortid: joi.string().regex(/^[0-9]{12}$/).required(),
keys: this._joiKeysValidator,
// backward-compat
users: joi.array(),
})
.required()
.unique('arn')
.unique('email')
.unique('canonicalID');
this._joiValidator = joi.object({ accounts: accountsJoi });
}
/**
* add one or more accounts to the authentication info
*
* @param {object} authData - authentication data
* @param {object[]} authData.accounts - array of account data
* @param {string} authData.accounts[].name - account name
* @param {string} authData.accounts[].email - email address
* @param {string} authData.accounts[].arn - account ARN,
* e.g. 'arn:aws:iam::123456789012:root'
* @param {string} authData.accounts[].canonicalID - account
* canonical ID
* @param {string} authData.accounts[].shortid - account ID number,
* e.g. '123456789012'
* @param {object[]} authData.accounts[].keys - array of
* access/secret keys
* @param {string} authData.accounts[].keys[].access - access key
* @param {string} authData.accounts[].keys[].secret - secret key
* @param {string} [filePath] - optional file path info for
* logging purpose
* @return {undefined}
*/
addAccounts(authData, filePath) {
const isValid = this._validateData(authData, filePath);
if (isValid) {
this._authData.accounts =
this._authData.accounts.concat(authData.accounts);
// defer validity checking until the data is fetched to avoid
// logging the errors multiple times (we need to validate
// all accounts at once to detect duplicate values)
if (this._isValid) {
this._isValid = null;
}
} else {
this._isValid = false;
}
}
/**
* add account information from a file
*
* @param {string} filePath - file path containing JSON
* authentication info (see {@link addAccounts()} for format)
* @return {undefined}
*/
addFile(filePath) {
const authData = JSON.parse(fs.readFileSync(filePath));
this.addAccounts(authData, filePath);
}
/**
* add account information from a filesystem path
*
* @param {string|string[]} globPattern - filesystem glob pattern,
* can be a single string or an array of glob patterns. Globs
* can be simple file paths or can contain glob matching
* characters, like '/a/b/*.json'. The matching files are
* individually loaded as JSON and accounts are added. See
* {@link addAccounts()} for JSON format.
* @return {undefined}
*/
addFilesByGlob(globPattern) {
const files = glob(globPattern);
files.forEach(filePath => this.addFile(filePath));
}
/**
* perform validation on authentication info previously
* loaded. Note that it has to be done on the entire set after an
* update to catch duplicate account IDs or access keys.
*
* @return {boolean} true if authentication info is valid
* false otherwise
*/
validate() {
if (this._isValid === null) {
this._isValid = this._validateData(this._authData);
}
return this._isValid;
}
/**
* get authentication info as a plain JS object containing all accounts
* under the "accounts" attribute, with validation.
*
* @return {object|null} the validated authentication data
* null if invalid
*/
getData() {
return this.validate() ? this._authData : null;
}
_validateData(authData, filePath) {
const res = joi.validate(authData, this._joiValidator,
{ abortEarly: false });
if (res.error) {
this._dumpJoiErrors(res.error.details, filePath);
return false;
}
let allKeys = [];
let arnError = false;
const validatedAuth = res.value;
validatedAuth.accounts.forEach(account => {
// backward-compat: reject legacy ARNs starting with 'aws:' and
// log an error
if (account.arn.startsWith('aws:')) {
this._log.error(
'account must have a valid AWS ARN, legacy examples ' +
'starting with \'aws:\' are not supported anymore. ' +
'Please convert to a proper account entry (see ' +
'examples at https://github.com/scality/S3/blob/' +
'master/conf/authdata.json). Also note that support ' +
'for account users has been dropped.',
{ accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
if (account.users) {
this._log.error(
'support for account users has been dropped, consider ' +
'turning users into account entries (see examples at ' +
'https://github.com/scality/S3/blob/master/conf/' +
'authdata.json)',
{ accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
const arnObj = ARN.createFromString(account.arn);
if (arnObj.error) {
this._log.error(
'authentication config validation error',
{ reason: arnObj.error.description,
accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
if (!arnObj.isIAMAccount()) {
this._log.error(
'authentication config validation error',
{ reason: 'not an IAM account ARN',
accountName: account.name, accountArn: account.arn,
filePath });
arnError = true;
return;
}
allKeys = allKeys.concat(account.keys);
});
if (arnError) {
return false;
}
const uniqueKeysRes = joi.validate(
allKeys, this._joiKeysValidator.unique('access'));
if (uniqueKeysRes.error) {
this._dumpJoiErrors(uniqueKeysRes.error.details, filePath);
return false;
}
return true;
}
_dumpJoiErrors(errors, filePath) {
errors.forEach(err => {
const logInfo = { item: err.path, filePath };
if (err.type === 'array.unique') {
logInfo.reason = `duplicate value '${err.context.path}'`;
logInfo.dupValue = err.context.value[err.context.path];
} else {
logInfo.reason = err.message;
logInfo.context = err.context;
}
this._log.error('authentication config validation error',
logInfo);
});
}
}
module.exports = AuthLoader;
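A minimal usage sketch (file path assumed):

const AuthLoader = require('./AuthLoader'); // path assumed

const loader = new AuthLoader();
loader.addFile('/conf/authdata.json'); // path assumed
if (!loader.validate()) {
    throw new Error('invalid authentication configuration');
}
const authData = loader.getData(); // { accounts: [...] } or null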

View File

@@ -3,55 +3,73 @@
const crypto = require('crypto');
const errors = require('../../errors');
const accountsKeyedbyAccessKey =
require('./vault.json').accountsKeyedbyAccessKey;
const accountsKeyedbyCanID =
require('./vault.json').accountsKeyedbyCanID;
const accountsKeyedbyEmail =
require('./vault.json').accountsKeyedbyEmail;
const calculateSigningKey = require('./vaultUtilities').calculateSigningKey;
const hashSignature = require('./vaultUtilities').hashSignature;
const Indexer = require('./Indexer');
function _formatResponse(userInfoToSend) {
return {
message: {
body: { userInfo: userInfoToSend },
},
};
}
/**
* Class that provides a memory backend for verifying signatures and getting
* emails and canonical ids associated with an account.
*
* @class Backend
*/
class Backend {
/**
* @constructor
* @param {string} service - service identifier for constructing the ARN
* @param {Indexer} indexer - indexer instance for retrieving account info
* @param {function} formatter - function which accepts user info to send
* back and returns it in an object
*/
constructor(service, indexer, formatter) {
this.service = service;
this.indexer = indexer;
this.formatResponse = formatter;
}
const backend = {
/** verifySignatureV2
* @param {string} stringToSign - string to sign built per AWS rules
* @param {string} signatureFromRequest - signature sent with request
* @param {string} accessKey - user's accessKey
* @param {string} accessKey - account accessKey
* @param {object} options - contains algorithm (SHA1 or SHA256)
* @param {function} callback - callback with either error or user info
* @return {function} calls callback
*/
verifySignatureV2: (stringToSign, signatureFromRequest,
accessKey, options, callback) => {
const account = accountsKeyedbyAccessKey[accessKey];
if (!account) {
verifySignatureV2(stringToSign, signatureFromRequest,
accessKey, options, callback) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = account.secretKey;
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const reconstructedSig =
hashSignature(stringToSign, secretKey, options.algo);
if (signatureFromRequest !== reconstructedSig) {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: account.displayName,
canonicalID: account.canonicalID,
arn: account.arn,
IAMdisplayName: account.IAMdisplayName,
};
const vaultReturnObject = {
message: {
body: userInfoToSend,
},
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = this.formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
},
}
/** verifySignatureV4
* @param {string} stringToSign - string to sign built per AWS rules
* @param {string} signatureFromRequest - signature sent with request
* @param {string} accessKey - user's accessKey
* @param {string} accessKey - account accessKey
* @param {string} region - region specified in request credential
* @param {string} scopeDate - date specified in request credential
* @param {object} options - options to send to Vault
@@ -59,13 +77,13 @@ const backend = {
* @param {function} callback - callback with either error or user info
* @return {function} calls callback
*/
verifySignatureV4: (stringToSign, signatureFromRequest, accessKey,
region, scopeDate, options, callback) => {
const account = accountsKeyedbyAccessKey[accessKey];
if (!account) {
verifySignatureV4(stringToSign, signatureFromRequest, accessKey,
region, scopeDate, options, callback) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = account.secretKey;
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const signingKey = calculateSigningKey(secretKey, region, scopeDate);
const reconstructedSig = crypto.createHmac('sha256', signingKey)
.update(stringToSign, 'binary').digest('hex');
@@ -73,18 +91,14 @@ const backend = {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: account.displayName,
canonicalID: account.canonicalID,
arn: account.arn,
IAMdisplayName: account.IAMdisplayName,
};
const vaultReturnObject = {
message: {
body: userInfoToSend,
},
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = this.formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
},
}
/**
* Gets canonical ID's for a list of accounts
@@ -96,15 +110,16 @@ const backend = {
* object with email addresses as keys and canonical IDs
* as values
*/
getCanonicalIds: (emails, log, cb) => {
getCanonicalIds(emails, log, cb) {
const results = {};
emails.forEach(email => {
const lowercasedEmail = email.toLowerCase();
if (!accountsKeyedbyEmail[lowercasedEmail]) {
const entity = this.indexer.getEntityByEmail(lowercasedEmail);
if (!entity) {
results[email] = 'NotFound';
} else {
results[email] =
accountsKeyedbyEmail[lowercasedEmail].canonicalID;
entity.canonicalID;
}
});
const vaultReturnObject = {
@@ -113,12 +128,11 @@ const backend = {
},
};
return cb(null, vaultReturnObject);
},
}
/**
* Gets email addresses (referred to as display names for getACL's)
* for a list of accounts
* based on canonical IDs associated with account
* for a list of accounts based on canonical IDs associated with account
* @param {array} canonicalIDs - list of canonicalIDs
* @param {object} options - to send log id to vault
* @param {function} cb - callback to calling function
@@ -126,14 +140,14 @@ const backend = {
* an object from Vault containing account canonicalID
* as each object key and an email address as the value (or "NotFound")
*/
getEmailAddresses: (canonicalIDs, options, cb) => {
getEmailAddresses(canonicalIDs, options, cb) {
const results = {};
canonicalIDs.forEach(canonicalId => {
const foundAccount = accountsKeyedbyCanID[canonicalId];
if (!foundAccount || !foundAccount.email) {
const foundEntity = this.indexer.getEntityByCanId(canonicalId);
if (!foundEntity || !foundEntity.email) {
results[canonicalId] = 'NotFound';
} else {
results[canonicalId] = foundAccount.email;
results[canonicalId] = foundEntity.email;
}
});
const vaultReturnObject = {
@@ -142,7 +156,34 @@ const backend = {
},
};
return cb(null, vaultReturnObject);
},
};
}
}
module.exports = backend;
class S3AuthBackend extends Backend {
/**
* @constructor
* @param {object} authdata - the authentication config file's data
* @param {object[]} authdata.accounts - array of account objects
* @param {string=} authdata.accounts[].name - account name
* @param {string} authdata.accounts[].email - account email
* @param {string} authdata.accounts[].arn - IAM resource name
* @param {string} authdata.accounts[].canonicalID - account canonical ID
* @param {string} authdata.accounts[].shortid - short account ID
* @param {object[]=} authdata.accounts[].keys - array of key objects
* @param {string} authdata.accounts[].keys[].access - access key
* @param {string} authdata.accounts[].keys[].secret - secret key
* @return {undefined}
*/
constructor(authdata) {
super('s3', new Indexer(authdata), _formatResponse);
}
refreshAuthData(authData) {
this.indexer = new Indexer(authData);
}
}
module.exports = {
s3: S3AuthBackend,
};


@@ -0,0 +1,145 @@
/**
* Class that provides internal indexing of the simple data provided by
* the authentication configuration file for the memory backend. This
* allows accessing the different authentication entities through various
* types of keys.
*
* @class Indexer
*/
class Indexer {
/**
* @constructor
* @param {object} authdata - the authentication config file's data
* @param {object[]} authdata.accounts - array of account objects
* @param {string=} authdata.accounts[].name - account name
* @param {string} authdata.accounts[].email - account email
* @param {string} authdata.accounts[].arn - IAM resource name
* @param {string} authdata.accounts[].canonicalID - account canonical ID
* @param {string} authdata.accounts[].shortid - short account ID
* @param {object[]=} authdata.accounts[].keys - array of key objects
* @param {string} authdata.accounts[].keys[].access - access key
* @param {string} authdata.accounts[].keys[].secret - secret key
* @return {undefined}
*/
constructor(authdata) {
this.accountsBy = {
canId: {},
accessKey: {},
email: {},
};
/*
* This may happen if the application is configured to use an
* authentication backend other than in-memory. In that case we
* handle the missing data here instead of failing downstream.
*/
if (!authdata) {
return;
}
this._build(authdata);
}
_indexAccount(account) {
const accountData = {
arn: account.arn,
canonicalID: account.canonicalID,
shortid: account.shortid,
accountDisplayName: account.name,
email: account.email.toLowerCase(),
keys: [],
};
this.accountsBy.canId[accountData.canonicalID] = accountData;
this.accountsBy.email[accountData.email] = accountData;
if (account.keys !== undefined) {
account.keys.forEach(key => {
accountData.keys.push(key);
this.accountsBy.accessKey[key.access] = accountData;
});
}
}
_build(authdata) {
authdata.accounts.forEach(account => {
this._indexAccount(account);
});
}
/**
* This method returns the account associated to a canonical ID.
*
* @param {string} canId - The canonicalId of the account
* @return {Object} account - The account object
* @return {Object} account.arn - The account's ARN
* @return {Object} account.canonicalID - The account's canonical ID
* @return {Object} account.shortid - The account's internal shortid
* @return {Object} account.accountDisplayName - The account's display name
* @return {Object} account.email - The account's lowercased email
*/
getEntityByCanId(canId) {
return this.accountsBy.canId[canId];
}
/**
* This method returns the entity (either an account or a user) associated
* to an access key.
*
* @param {string} key - The accessKey of the entity
* @return {Object} entity - The entity object
* @return {Object} entity.arn - The entity's ARN
* @return {Object} entity.canonicalID - The canonical ID for the entity's
* account
* @return {Object} entity.shortid - The entity's internal shortid
* @return {Object} entity.accountDisplayName - The entity's account
* display name
* @return {Object} entity.IAMdisplayName - The user's display name
* (if the entity is a user)
* @return {Object} entity.email - The entity's lowercased email
*/
getEntityByKey(key) {
return this.accountsBy.accessKey[key];
}
/**
* This method returns the entity (either an account or a user) associated
* to an email address.
*
* @param {string} email - The email address
* @return {Object} entity - The entity object
* @return {Object} entity.arn - The entity's ARN
* @return {Object} entity.canonicalID - The canonical ID for the entity's
* account
* @return {Object} entity.shortid - The entity's internal shortid
* @return {Object} entity.accountDisplayName - The entity's account
* display name
* @return {Object} entity.IAMdisplayName - The user's display name
* (if the entity is a user)
* @return {Object} entity.email - The entity's lowercased email
*/
getEntityByEmail(email) {
const lowerCasedEmail = email.toLowerCase();
return this.accountsBy.email[lowerCasedEmail];
}
/**
* This method returns the secret key associated with the entity.
* @param {Object} entity - the entity object
* @param {string} accessKey - access key
* @returns {string} secret key
*/
getSecretKey(entity, accessKey) {
return entity.keys
.filter(kv => kv.access === accessKey)[0].secret;
}
/**
* This method returns the account display name associated with the entity.
* @param {Object} entity - the entity object
* @returns {string} account display name
*/
getAcctDisplayName(entity) {
return entity.accountDisplayName;
}
}
module.exports = Indexer;
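As a usage sketch (the account values below are illustrative, mirroring the constructor's documented authdata shape; they are not part of the module itself):

const Indexer = require('./Indexer');

// Hypothetical authdata following the documented shape.
const indexer = new Indexer({
    accounts: [{
        name: 'Bart',
        email: 'Sampleaccount1@Sampling.com', // lowercased on indexing
        arn: 'aws::iam:123456789012:root',
        canonicalID: 'accessKey1canonicalID',
        shortid: '123456789012',
        keys: [{ access: 'accessKey1', secret: 'verySecretKey1' }],
    }],
});

const entity = indexer.getEntityByKey('accessKey1');
// getSecretKey matches the access key against the entity's key list.
indexer.getSecretKey(entity, 'accessKey1'); // => 'verySecretKey1'
indexer.getEntityByEmail('SAMPLEACCOUNT1@sampling.com') === entity; // => true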


@@ -0,0 +1,18 @@
const AuthLoader = require('./AuthLoader');
/**
* @deprecated please use {@link AuthLoader} class instead
*
* @param {object} authdata - the authentication config file's data
* @param {werelogs.API} logApi - object providing a constructor function
* for the Logger object
* @return {boolean} true on erroneous data
* false on success
*/
function validateAuthConfig(authdata, logApi) {
const authLoader = new AuthLoader(logApi);
authLoader.addAccounts(authdata);
return !authLoader.validate();
}
module.exports = validateAuthConfig;


@@ -1,79 +0,0 @@
{
"accountsKeyedbyAccessKey": {
"accessKey1": {
"arn": "aws::iam:accessKey1:user/Bart",
"IAMdisplayName": "Bart",
"secretKey": "verySecretKey1",
"canonicalID": "accessKey1canonicalID",
"displayName": "accessKey1displayName"
},
"accessKey2": {
"arn": "aws::iam:accessKey2:user/Lisa",
"IAMdisplayName": "Lisa",
"secretKey": "verySecretKey2",
"canonicalID": "accessKey2canonicalID",
"displayName": "accessKey2displayName"
}
},
"accountsKeyedbyEmail": {
"sampleaccount1@sampling.com": {
"arn": "aws::iam:123456789012:root",
"createDate": "",
"saltedPwd": "",
"pwdlastUsed": "",
"pwdCreated": "",
"name": "",
"shortid": "123456789012",
"email": "sampleaccount1@sampling.com",
"canonicalID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
"secretKeyIdList": [],
"aliasList": [],
"oidcpdList": []
},
"sampleaccount2@sampling.com": {
"arn": "aws::iam:321456789012:root",
"createDate": "",
"saltedPwd": "",
"pwdlastUsed": "",
"pwdCreated": "",
"name": "",
"shortid": "321456789012",
"email": "sampleaccount2@sampling.com",
"canonicalID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2bf",
"secretKeyIdList": [],
"aliasList": [],
"oidcpdList": []
}
},
"accountsKeyedbyCanID": {
"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be": {
"arn": "aws::iam:123456789012:root",
"createDate": "",
"saltedPwd": "",
"pwdlastUsed": "",
"pwdCreated": "",
"name": "",
"shortid": "123456789012",
"email": "sampleaccount1@sampling.com",
"canonicalID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
"secretKeyIdList": [],
"aliasList": [],
"oidcpdList": []
},
"79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2bf": {
"arn": "aws::iam:321456789012:root",
"createDate": "",
"saltedPwd": "",
"pwdlastUsed": "",
"pwdCreated": "",
"name": "",
"shortid": "321456789012",
"email": "sampleaccount2@sampling.com",
"canonicalID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2bf",
"secretKeyIdList": [],
"aliasList": [],
"oidcpdList": []
}
}
}


@@ -5,7 +5,7 @@ const utf8 = require('utf8');
const getCanonicalizedAmzHeaders = require('./getCanonicalizedAmzHeaders');
const getCanonicalizedResource = require('./getCanonicalizedResource');
function constructStringToSign(request, data, log) {
function constructStringToSign(request, data, log, clientType) {
/*
Build signature per AWS requirements:
StringToSign = HTTP-Verb + '\n' +
@@ -36,10 +36,10 @@ function constructStringToSign(request, data, log) {
than here in stringToSign so we have replicated that.
*/
const date = query.Expires ? query.Expires : headers.date;
const combinedQueryHeaders = Object.assign(headers, query);
const combinedQueryHeaders = Object.assign({}, headers, query);
stringToSign += (date ? `${date}\n` : '\n')
+ getCanonicalizedAmzHeaders(combinedQueryHeaders)
+ getCanonicalizedResource(request);
+ getCanonicalizedAmzHeaders(combinedQueryHeaders, clientType)
+ getCanonicalizedResource(request, clientType);
return utf8.encode(stringToSign);
}
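The `Object.assign` fix above matters because the two-argument form mutates its first argument, so the original code silently injected query parameters into the shared `headers` object. A minimal illustration:

const headers = { date: 'Mon, 01 Jan 2018 00:00:00 GMT' };
Object.assign(headers, { Expires: '1514764800' });     // mutates `headers` in place
Object.assign({}, headers, { Expires: '1514764800' }); // merges into a fresh object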


@@ -1,13 +1,16 @@
'use strict'; // eslint-disable-line strict
function getCanonicalizedAmzHeaders(headers) {
function getCanonicalizedAmzHeaders(headers, clientType) {
/*
Iterate through headers and pull any headers that are x-amz headers.
Need to include 'x-amz-date' here even though the AWS docs
are ambiguous on this.
*/
const filterFn = clientType === 'GCP' ?
val => val.substr(0, 7) === 'x-goog-' :
val => val.substr(0, 6) === 'x-amz-';
const amzHeaders = Object.keys(headers)
.filter(val => val.substr(0, 6) === 'x-amz-')
.filter(filterFn)
.map(val => [val.trim(), headers[val].trim()]);
/*
AWS docs state that duplicate headers should be combined


@@ -2,7 +2,46 @@
const url = require('url');
function getCanonicalizedResource(request) {
const gcpSubresources = [
'acl',
'billing',
'compose',
'cors',
'encryption',
'lifecycle',
'location',
'logging',
'storageClass',
'tagging',
'upload_id',
'versioning',
'versions',
'websiteConfig',
];
const awsSubresources = [
'acl',
'cors',
'delete',
'lifecycle',
'location',
'logging',
'notification',
'partNumber',
'policy',
'requestPayment',
'tagging',
'torrent',
'uploadId',
'uploads',
'versionId',
'versioning',
'replication',
'versions',
'website',
];
function getCanonicalizedResource(request, clientType) {
/*
This variable is used to determine whether to insert
a '?' or '&'. Once a query parameter is added to the resourceString,
@@ -24,25 +63,8 @@ function getCanonicalizedResource(request) {
*/
// Specified subresources:
const subresources = [
'acl',
'cors',
'delete',
'lifecycle',
'location',
'logging',
'notification',
'partNumber',
'policy',
'requestPayment',
'torrent',
'uploadId',
'uploads',
'versionId',
'versioning',
'versions',
'website',
];
const subresources =
clientType === 'GCP' ? gcpSubresources : awsSubresources;
/*
If the request includes parameters in the query string,


@@ -1,6 +1,7 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../errors');
const constants = require('../../constants');
const constructStringToSign = require('./constructStringToSign');
const checkRequestExpiry = require('./checkRequestExpiry');
const algoCheck = require('./algoCheck');
@@ -9,6 +10,12 @@ function check(request, log, data) {
log.trace('running header auth check');
const headers = request.headers;
const token = headers['x-amz-security-token'];
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
// Check to make sure timestamp is within 15 minutes of current time
let timestamp = headers['x-amz-date'] ?
headers['x-amz-date'] : headers.date;
@@ -25,6 +32,7 @@ function check(request, log, data) {
if (err) {
return { err };
}
// Authorization Header should be
// in the format of 'AWS AccessKey:Signature'
const authInfo = headers.authorization;
@@ -67,6 +75,7 @@ function check(request, log, data) {
authType: 'REST-HEADER',
signatureVersion: 'AWS',
signatureAge: Date.now() - timestamp,
securityToken: token,
},
},
};


@@ -1,7 +1,7 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../errors');
const constants = require('../../constants');
const algoCheck = require('./algoCheck');
const constructStringToSign = require('./constructStringToSign');
@@ -11,6 +11,13 @@ function check(request, log, data) {
log.debug('query string auth not supported for post requests');
return { err: errors.NotImplemented };
}
const token = data.SecurityToken;
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
/*
Check whether request has expired or if
expires parameter is more than 604800000 milliseconds
@@ -20,7 +27,7 @@ function check(request, log, data) {
milliseconds to compare to Date.now()
*/
const expirationTime = parseInt(data.Expires, 10) * 1000;
if (isNaN(expirationTime)) {
if (Number.isNaN(expirationTime)) {
log.debug('invalid expires parameter',
{ expires: data.Expires });
return { err: errors.MissingSecurityHeader };
@@ -65,6 +72,7 @@ function check(request, log, data) {
algo,
authType: 'REST-QUERY-STRING',
signatureVersion: 'AWS',
securityToken: token,
},
},
};


@@ -35,6 +35,13 @@ function _toHexUTF8(char) {
function awsURIencode(input, encodeSlash, noEncodeStar) {
const encSlash = encodeSlash === undefined ? true : encodeSlash;
let encoded = '';
/**
* Duplicate query params are not supported by AWS S3 APIs. These params
* are parsed as Arrays by the Node.js query-string parser, which breaks
* this method
*/
if (typeof input !== 'string') {
return encoded;
}
for (let i = 0; i < input.length; i++) {
const ch = input.charAt(i);
if ((ch >= 'A' && ch <= 'Z') ||
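The type guard added above exists because Node's query-string parsing turns repeated parameters into arrays rather than strings; for example:

const querystring = require('querystring');
// Duplicate params yield an array value, which the character-by-character
// encoding loop cannot handle:
querystring.parse('acl=private&acl=public'); // => { acl: ['private', 'public'] }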


@@ -10,17 +10,13 @@ const createCanonicalRequest = require('./createCanonicalRequest');
* @returns {string} - stringToSign
*/
function constructStringToSign(params) {
const request = params.request;
const signedHeaders = params.signedHeaders;
const payloadChecksum = params.payloadChecksum;
const credentialScope = params.credentialScope;
const timestamp = params.timestamp;
const query = params.query;
const log = params.log;
const { request, signedHeaders, payloadChecksum, credentialScope, timestamp,
query, log, proxyPath } = params;
const path = proxyPath || request.path;
const canonicalReqResult = createCanonicalRequest({
pHttpVerb: request.method,
pResource: request.path,
pResource: path,
pQuery: query,
pHeaders: request.headers,
pSignedHeaders: signedHeaders,


@@ -48,7 +48,8 @@ function createCanonicalRequest(params) {
// canonical query string
let canonicalQueryStr = '';
if (pQuery && !((service === 'iam' || service === 'ring') &&
if (pQuery && !((service === 'iam' || service === 'ring' ||
service === 'sts') &&
pHttpVerb === 'POST')) {
const sortedQueryParams = Object.keys(pQuery).sort().map(key => {
const encodedKey = awsURIencode(key);


@@ -1,6 +1,7 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../../lib/errors');
const constants = require('../../constants');
const constructStringToSign = require('./constructStringToSign');
const checkTimeSkew = require('./timeUtils').checkTimeSkew;
@@ -22,6 +23,13 @@ const areSignedHeadersComplete =
*/
function check(request, log, data, awsService) {
log.trace('running header auth check');
const token = request.headers['x-amz-security-token'];
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
// authorization header
const authHeader = request.headers.authorization;
if (!authHeader) {
@@ -90,7 +98,7 @@ function check(request, log, data, awsService) {
log);
if (validationResult instanceof Error) {
log.debug('credentials in improper format', { credentialsArr,
timestamp, validationResult });
timestamp, validationResult });
return { err: validationResult };
}
// credentialsArr is [accessKey, date, region, aws-service, aws4_request]
@@ -153,6 +161,7 @@ function check(request, log, data, awsService) {
// chunk evaluation
credentialScope,
timestamp,
securityToken: token,
},
},
};


@@ -1,5 +1,6 @@
'use strict'; // eslint-disable-line strict
const constants = require('../../constants');
const errors = require('../../errors');
const constructStringToSign = require('./constructStringToSign');
@@ -23,6 +24,15 @@ function check(request, log, data) {
if (Object.keys(authParams).length !== 5) {
return { err: errors.InvalidArgument };
}
// AWS documentation does not specify query params as case-insensitive,
// so we treat them as case-sensitive
const token = data['X-Amz-Security-Token'];
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
const signedHeaders = authParams.signedHeaders;
const signatureFromRequest = authParams.signatureFromRequest;
const timestamp = authParams.timestamp;
@@ -38,7 +48,7 @@ function check(request, log, data) {
log);
if (validationResult instanceof Error) {
log.debug('credentials in improper format', { credential,
timestamp, validationResult });
timestamp, validationResult });
return { err: validationResult };
}
const accessKey = credential[0];
@@ -95,6 +105,7 @@ function check(request, log, data) {
authType: 'REST-QUERY-STRING',
signatureVersion: 'AWS4-HMAC-SHA256',
signatureAge: Date.now() - convertAmzTimeToMs(timestamp),
securityToken: token,
},
},
};


@@ -41,8 +41,9 @@ function validateCredentials(credentials, timestamp, log) {
{ scopeDate, timestampDate });
return errors.RequestTimeTooSkewed;
}
if (service !== 's3' && service !== 'iam' && service !== 'ring') {
log.warn('service in credentials is not one of s3/iam/ring', {
if (service !== 's3' && service !== 'iam' && service !== 'ring' &&
service !== 'sts') {
log.warn('service in credentials is not one of s3/iam/ring/sts', {
service,
});
return errors.InvalidArgument;


@@ -1,8 +1,87 @@
'use strict'; // eslint-disable-line strict
// The min value here is kept separate to allow for further backward
// compatibility if we need it
const iamSecurityTokenSizeMin = 128;
const iamSecurityTokenSizeMax = 128;
// The security token is a hex string (Amazon publishes no real format)
const iamSecurityTokenPattern =
new RegExp(`^[a-f0-9]{${iamSecurityTokenSizeMin},` +
`${iamSecurityTokenSizeMax}}$`);
module.exports = {
// info about the iam security token
iamSecurityToken: {
min: iamSecurityTokenSizeMin,
max: iamSecurityTokenSizeMax,
pattern: iamSecurityTokenPattern,
},
// PublicId is used as the canonicalID for a request that contains
// no authentication information. Requestor can access
// only public resources
publicId: 'http://acs.amazonaws.com/groups/global/AllUsers',
zenkoServiceAccount: 'http://acs.zenko.io/accounts/service',
metadataFileNamespace: '/MDFile',
dataFileURL: '/DataFile',
// AWS states max size for user-defined metadata
// (x-amz-meta- headers) is 2 KB:
// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
// In testing, AWS seems to allow up to 88 more bytes,
// so we do the same.
maximumMetaHeadersSize: 2136,
emptyFileMd5: 'd41d8cd98f00b204e9800998ecf8427e',
// Version 2 changes the format of the data location property
// Version 3 adds the dataStoreName attribute
mdModelVersion: 3,
/*
* Splitter is used to build the object name for the overview of a
* multipart upload and to build the object names for each part of a
* multipart upload. These objects with large names are then stored in
* metadata in a "shadow bucket" to a real bucket. The shadow bucket
* contains all ongoing multipart uploads. We include in the object
* name some of the info we might need to pull about an open multipart
* upload or about an individual part with each piece of info separated
* by the splitter. We can then extract each piece of info by splitting
* the object name string with this splitter.
* For instance, assuming a splitter of '...!*!',
* the name of the upload overview would be:
* overview...!*!objectKey...!*!uploadId
* For instance, the name of a part would be:
* uploadId...!*!partNumber
*
* The sequence of characters used in the splitter should not occur
* elsewhere in the pieces of info to avoid splitting where not
* intended.
*
* Splitter is also used in adding bucketnames to the
* namespaceusersbucket. The object names added to the
* namespaceusersbucket are of the form:
* canonicalID...!*!bucketname
*/
splitter: '..|..',
usersBucket: 'users..bucket',
// MPU Bucket Prefix is used to create the name of the shadow
// bucket used for multipart uploads. There is one shadow mpu
// bucket per bucket and its name is the mpuBucketPrefix followed
// by the name of the final destination bucket for the object
// once the multipart upload is complete.
mpuBucketPrefix: 'mpuShadowBucket',
// since aws s3 does not allow capitalized buckets, these may be
// used for special internal purposes
permittedCapitalizedBuckets: {
METADATA: true,
},
// HTTP server keep-alive timeout is set to a higher value than
// client's free sockets timeout to avoid the risk of triggering
// ECONNRESET errors if the server closes the connection at the
// exact moment clients attempt to reuse an established connection
// for a new request.
//
// Note: the ability to close inactive connections on the client
// after httpClientFreeSocketsTimeout milliseconds requires the
// use of "agentkeepalive" module instead of the regular node.js
// http.Agent.
httpServerKeepAliveTimeout: 60000,
httpClientFreeSocketTimeout: 55000,
};
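A minimal sketch of how these two timeouts are meant to be wired together, assuming the `agentkeepalive` package mentioned in the comment (the port and handler are illustrative; older agentkeepalive releases named the option `freeSocketKeepAliveTimeout` rather than `freeSocketTimeout`):

const http = require('http');
const Agent = require('agentkeepalive'); // assumed dependency, per the comment
const { httpServerKeepAliveTimeout,
    httpClientFreeSocketTimeout } = require('./constants');

// Server keeps idle connections open for 60s...
const server = http.createServer((req, res) => res.end('ok'));
server.keepAliveTimeout = httpServerKeepAliveTimeout;
server.listen(8000);

// ...while the client agent drops its idle sockets after 55s, so it
// never reuses a socket the server is about to close.
const agent = new Agent({ freeSocketTimeout: httpClientFreeSocketTimeout });
http.get({ port: 8000, agent }, res => res.resume());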


@@ -40,6 +40,7 @@ class IndexTransaction {
this.operations = [];
this.db = db;
this.closed = false;
this.conditions = [];
}
/**
@@ -118,6 +119,35 @@ class IndexTransaction {
this.push({ type: 'del', key });
}
/**
* Adds a condition for the transaction
*
* @argument {object} condition an object with the following attributes:
* {
* <condition>: the object key
* }
* example: { notExists: 'key1' }
*
* @throws {Error} an error described by the following properties
* - pushOnCommittedTransaction if the transaction is already committed
* - missingCondition if the condition is empty
* - unsupportedConditionalOperation if the condition type is not supported
*
* @returns {undefined}
*/
addCondition(condition) {
if (this.closed) {
throw propError('pushOnCommittedTransaction',
'can not add conditions to already committed transaction');
}
if (condition === undefined || Object.keys(condition).length === 0) {
throw propError('missingCondition', 'missing condition for conditional put');
}
if (typeof (condition.notExists) !== 'string') {
throw propError('unsupportedConditionalOperation', 'missing key or supported condition');
}
this.conditions.push(condition);
}
/**
* Applies the queued updates in this transaction atomically.
*
@@ -138,6 +168,7 @@ class IndexTransaction {
}
this.closed = true;
writeOptions.conditions = this.conditions;
// The array-of-operations variant of the `batch` method
allows passing options such as `sync: true` whereas the
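A usage sketch of the conditional put added above (`db` and `value` are assumed to exist, and only `notExists` conditions pass the type check):

const tx = new IndexTransaction(db);
tx.addCondition({ notExists: 'key1' }); // abort the batch if 'key1' already exists
tx.put('key1', value);
tx.commit(err => {
    if (err) {
        // either the condition failed or the transaction was misused,
        // e.g. committed twice
    }
});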


@@ -1,13 +1,65 @@
'use strict'; // eslint-disable-line strict
/**
* ArsenalError
*
* @extends {Error}
*/
class ArsenalError extends Error {
/**
* constructor.
*
* @param {string} type - Type of error or message
* @param {number} code - HTTP status code
* @param {string} desc - Verbose description of error
*/
constructor(type, code, desc) {
super(type);
/**
* HTTP status code of error
* @type {number}
*/
this.code = code;
/**
* Description of error
* @type {string}
*/
this.description = desc;
this[type] = true;
}
/**
* Output the error as a JSON string
* @returns {string} Error as JSON string
*/
toString() {
return JSON.stringify({
errorType: this.message,
errorMessage: this.description,
});
}
/**
* Write the error in an HTTP response
*
* @param {http.ServerResponse} res - HTTP response to write the error to
* @returns {undefined}
*/
writeResponse(res) {
res.writeHead(this.code);
res.end(this.toString());
}
/**
* customizeDescription returns a new ArsenalError with a new description
* with the same HTTP code and message.
*
* @param {string} description - New error description
* @returns {ArsenalError} New error
*/
customizeDescription(description) {
return new ArsenalError(this.message, this.code, description);
}
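A short sketch of how these helpers compose, assuming the usual `errors` map that instantiates one ArsenalError per error type:

const errors = require('../errors');

const err = errors.NoSuchKey.customizeDescription('no such version id');
err.code;       // 404, carried over from the original error
err.NoSuchKey;  // true, the boolean flag set by the constructor
err.toString(); // '{"errorType":"NoSuchKey","errorMessage":"no such version id"}'

// In an HTTP handler, the error can answer the request directly:
// err.writeResponse(res);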


@@ -0,0 +1,20 @@
# Get Pensieve Credentials Executable
## To make an executable file from getPensieveCreds.js
`npm install -g pkg`
`pkg getPensieveCreds.js`
This will build macOS, Linux, and Windows binaries.
If you just want Linux, for example:
`pkg getPensieveCreds.js --targets node6-linux-x64`
For further options, see https://github.com/zeit/pkg
## To run the executable file
Call the output executable file with an
argument that names the service you
are trying to get credentials for (e.g., clueso):
`./getPensieveCreds-linux serviceName`


@@ -0,0 +1,45 @@
const async = require('async');
const MetadataFileClient =
require('../../storage/metadata/file/MetadataFileClient');
const mdClient = new MetadataFileClient({
host: 's3-metadata',
port: '9993',
});
const { loadOverlayVersion, parseServiceCredentials } = require('./utils');
const serviceName = process.argv[2];
if (serviceName === undefined) {
throw new Error('Missing service name (e.g., clueso)');
}
const tokenKey = 'auth/zenko/remote-management-token';
const mdDb = mdClient.openDB(error => {
if (error) {
throw error;
}
const db = mdDb.openSub('PENSIEVE');
return async.waterfall([
cb => db.get('configuration/overlay-version', {}, cb),
(version, cb) => loadOverlayVersion(db, version, cb),
(conf, cb) => db.get(tokenKey, {}, (err, instanceAuth) => {
if (err) {
return cb(err);
}
const creds = parseServiceCredentials(conf, instanceAuth,
serviceName);
return cb(null, creds);
}),
], (err, creds) => {
db.disconnect();
if (err) {
throw err;
}
if (!creds) {
throw new Error('No credentials found');
}
process.stdout.write(`export AWS_ACCESS_KEY_ID="${creds.accessKey}"\n`);
process.stdout
.write(`export AWS_SECRET_ACCESS_KEY="${creds.secretKey}"`);
});
});


@@ -0,0 +1,14 @@
{
"name": "pensievecreds",
"version": "1.0.0",
"description": "Executable tool for Pensieve",
"main": "getPensieveCreds.js",
"scripts": {
"test": "mocha --recursive --timeout 5500 tests/unit"
},
"dependencies": {
"mocha": "2.5.3",
"async": "^2.6.0",
"node-forge": "^0.7.1"
}
}


@@ -0,0 +1,7 @@
{
"privateKey": "-----BEGIN RSA PRIVATE KEY-----\r\nMIIEowIBAAKCAQEAj13sSYE40lAX2qpBvfdGfcSVNtBf8i5FH+E8FAhORwwPu+2S\r\n3yBQbgwHq30WWxunGb1NmZL1wkVZ+vf12DtxqFRnMA08LfO4oO6oC4V8XfKeuHyJ\r\n1qlaKRINz6r9yDkTHtwWoBnlAINurlcNKgGD5p7D+G26Chbr/Oo0ZwHula9DxXy6\r\neH8/bJ5/BynyNyyWRPoAO+UkUdY5utkFCUq2dbBIhovMgjjikf5p2oWqnRKXc+JK\r\nBegr6lSHkkhyqNhTmd8+wA+8Cace4sy1ajY1t5V4wfRZea5vwl/HlyyKodvHdxng\r\nJgg6H61JMYPkplY6Gr9OryBKEAgq02zYoYTDfwIDAQABAoIBAAuDYGlavkRteCzw\r\nRU1LIVcSRWVcgIgDXTu9K8T0Ec0008Kkxomyn6LmxmroJbZ1VwsDH8s4eRH73ckA\r\nxrZxt6Pr+0lplq6eBvKtl8MtGhq1VDe+kJczjHEF6SQHOFAu/TEaPZrn2XMcGvRX\r\nO1BnRL9tepFlxm3u/06VRFYNWqqchM+tFyzLu2AuiuKd5+slSX7KZvVgdkY1ErKH\r\ngB75lPyhPb77C/6ptqUisVMSO4JhLhsD0+ekDVY982Sb7KkI+szdWSbtMx9Ek2Wo\r\ntXwJz7I8T7IbODy9aW9G+ydyhMDFmaEYIaDVFKJj5+fluNza3oQ5PtFNVE50GQJA\r\nsisGqfECgYEAwpkwt0KpSamSEH6qknNYPOwxgEuXWoFVzibko7is2tFPvY+YJowb\r\n68MqHIYhf7gHLq2dc5Jg1TTbGqLECjVxp4xLU4c95KBy1J9CPAcuH4xQLDXmeLzP\r\nJ2YgznRocbzAMCDAwafCr3uY9FM7oGDHAi5bE5W11xWx+9MlFExL3JkCgYEAvJp5\r\nf+JGN1W037bQe2QLYUWGszewZsvplnNOeytGQa57w4YdF42lPhMz6Kc/zdzKZpN9\r\njrshiIDhAD5NCno6dwqafBAW9WZl0sn7EnlLhD4Lwm8E9bRHnC9H82yFuqmNrzww\r\nzxBCQogJISwHiVz4EkU48B283ecBn0wT/fAa19cCgYEApKWsnEHgrhy1IxOpCoRh\r\nUhqdv2k1xDPN/8DUjtnAFtwmVcLa/zJopU/Zn4y1ZzSzjwECSTi+iWZRQ/YXXHPf\r\nl92SFjhFW92Niuy8w8FnevXjF6T7PYiy1SkJ9OR1QlZrXc04iiGBDazLu115A7ce\r\nanACS03OLw+CKgl6Q/RR83ECgYBCUngDVoimkMcIHHt3yJiP3ikeAKlRnMdJlsa0\r\nXWVZV4hCG3lDfRXsnEgWuimftNKf+6GdfYSvQdLdiQsCcjT5A4uLsQTByv5nf4uA\r\n1ZKOsFrmRrARzxGXhLDikvj7yP//7USkq+0BBGFhfuAvl7fMhPceyPZPehqB7/jf\r\nxX1LBQKBgAn5GgSXzzS0e06ZlP/VrKxreOHa5Z8wOmqqYQ0QTeczAbNNmuITdwwB\r\nNkbRqpVXRIfuj0BQBegAiix8om1W4it0cwz54IXBwQULxJR1StWxj3jo4QtpMQ+z\r\npVPdB1Ilb9zPV1YvDwRfdS1xsobzznAx56ecsXduZjs9mF61db8Q\r\n-----END RSA PRIVATE KEY-----\r\n",
"publicKey": "-----BEGIN PUBLIC KEY-----\r\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAj13sSYE40lAX2qpBvfdG\r\nfcSVNtBf8i5FH+E8FAhORwwPu+2S3yBQbgwHq30WWxunGb1NmZL1wkVZ+vf12Dtx\r\nqFRnMA08LfO4oO6oC4V8XfKeuHyJ1qlaKRINz6r9yDkTHtwWoBnlAINurlcNKgGD\r\n5p7D+G26Chbr/Oo0ZwHula9DxXy6eH8/bJ5/BynyNyyWRPoAO+UkUdY5utkFCUq2\r\ndbBIhovMgjjikf5p2oWqnRKXc+JKBegr6lSHkkhyqNhTmd8+wA+8Cace4sy1ajY1\r\nt5V4wfRZea5vwl/HlyyKodvHdxngJgg6H61JMYPkplY6Gr9OryBKEAgq02zYoYTD\r\nfwIDAQAB\r\n-----END PUBLIC KEY-----\r\n",
"accessKey": "QXP3VDG3SALNBX2QBJ1C",
"secretKey": "K5FyqZo5uFKfw9QBtn95o6vuPuD0zH/1seIrqPKqGnz8AxALNSx6EeRq7G1I6JJpS1XN13EhnwGn2ipsml3Uf2fQ00YgEmImG8wzGVZm8fWotpVO4ilN4JGyQCah81rNX4wZ9xHqDD7qYR5MyIERxR/osoXfctOwY7GGUjRKJfLOguNUlpaovejg6mZfTvYAiDF+PTO1sKUYqHt1IfKQtsK3dov1EFMBB5pWM7sVfncq/CthKN5M+VHx9Y87qdoP3+7AW+RCBbSDOfQgxvqtS7PIAf10mDl8k2kEURLz+RqChu4O4S0UzbEmtja7wa7WYhYKv/tM/QeW7kyNJMmnPg==",
"decryptedSecretKey": "n7PSZ3U6SgerF9PCNhXYsq3S3fRKVGdZTicGV8Ur"
}


@@ -0,0 +1,39 @@
const assert = require('assert');
const { parseServiceCredentials, decryptSecret } =
require('../../utils');
const { privateKey, accessKey, secretKey, decryptedSecretKey }
= require('../resources.json');
describe('decryptSecret', () => {
it('should decrypt a secret', () => {
const instanceCredentials = {
privateKey,
};
const result = decryptSecret(instanceCredentials, secretKey);
assert.strictEqual(result, decryptedSecretKey);
});
});
describe('parseServiceCredentials', () => {
const conf = {
users: [{ accessKey,
accountType: 'service-clueso',
secretKey,
userName: 'Search Service Account' }],
};
const auth = JSON.stringify({ privateKey });
it('should parse service credentials', () => {
const result = parseServiceCredentials(conf, auth, 'clueso');
const expectedResult = {
accessKey,
secretKey: decryptedSecretKey,
};
assert.deepStrictEqual(result, expectedResult);
});
it('should return undefined if no such service', () => {
const result = parseServiceCredentials(conf, auth, undefined);
assert.strictEqual(result, undefined);
});
});


@@ -0,0 +1,38 @@
const forge = require('node-forge');
function decryptSecret(instanceCredentials, secret) {
const privateKey = forge.pki.privateKeyFromPem(
instanceCredentials.privateKey);
const encryptedSecretKey = forge.util.decode64(secret);
return privateKey.decrypt(encryptedSecretKey, 'RSA-OAEP', {
md: forge.md.sha256.create(),
});
}
function loadOverlayVersion(db, version, cb) {
db.get(`configuration/overlay/${version}`, {}, (err, val) => {
if (err) {
return cb(err);
}
return cb(null, JSON.parse(val));
});
}
function parseServiceCredentials(conf, auth, serviceName) {
const instanceAuth = JSON.parse(auth);
const serviceAccount = (conf.users || []).find(
u => u.accountType === `service-${serviceName}`);
if (!serviceAccount) {
return undefined;
}
return {
accessKey: serviceAccount.accessKey,
secretKey: decryptSecret(instanceAuth, serviceAccount.secretKey),
};
}
module.exports = {
decryptSecret,
loadOverlayVersion,
parseServiceCredentials,
};

lib/jsutil.js

@@ -0,0 +1,32 @@
'use strict'; // eslint-disable-line
const debug = require('util').debuglog('jsutil');
// JavaScript utility functions
/**
* force <tt>func</tt> to be called only once, even if actually called
* multiple times. The cached result of the first call is then
* returned (if any).
*
* @note underscore.js provides this functionality, but it is not worth
* adding a new dependency for such a small use case.
*
* @param {function} func function to call at most once
* @return {function} a callable wrapper mirroring <tt>func</tt> but
* only calls <tt>func</tt> at first invocation.
*/
module.exports.once = function once(func) {
const state = { called: false, res: undefined };
return function wrapper(...args) {
if (!state.called) {
state.called = true;
state.res = func.apply(func, args);
} else {
debug('function already called:', func,
'returning cached result:', state.res);
}
return state.res;
};
};
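For example:

const jsutil = require('./jsutil');

let calls = 0;
const initOnce = jsutil.once(() => ++calls);

initOnce(); // => 1, the wrapped function runs
initOnce(); // => 1 again: the cached result is returned, `calls` stays at 1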

lib/metrics/RedisClient.js

@@ -0,0 +1,162 @@
const Redis = require('ioredis');
class RedisClient {
/**
* @constructor
* @param {Object} config - config
* @param {string} config.host - Redis host
* @param {number} config.port - Redis port
* @param {string} config.password - Redis password
* @param {werelogs.Logger} logger - logger instance
*/
constructor(config, logger) {
this._client = new Redis(config);
this._client.on('error', err =>
logger.trace('error from redis', {
error: err,
method: 'RedisClient.constructor',
redisHost: config.host,
redisPort: config.port,
})
);
return this;
}
/**
* increment value of a key by 1 and set a ttl
* @param {string} key - key holding the value
* @param {number} expiry - expiry in seconds
* @param {callback} cb - callback
* @return {undefined}
*/
incrEx(key, expiry, cb) {
return this._client
.multi([['incr', key], ['expire', key, expiry]])
.exec(cb);
}
/**
* increment value of a key by a given amount and set a ttl
* @param {string} key - key holding the value
* @param {number} amount - amount to increase by
* @param {number} expiry - expiry in seconds
* @param {callback} cb - callback
* @return {undefined}
*/
incrbyEx(key, amount, expiry, cb) {
return this._client
.multi([['incrby', key, amount], ['expire', key, expiry]])
.exec(cb);
}
/**
* execute a batch of commands
* @param {string[]} cmds - list of commands
* @param {callback} cb - callback
* @return {undefined}
*/
batch(cmds, cb) {
return this._client.pipeline(cmds).exec(cb);
}
/**
* Checks if a key exists
* @param {string} key - name of key
* @param {function} cb - callback
* If cb response returns 0, key does not exist.
* If cb response returns 1, key exists.
* @return {undefined}
*/
exists(key, cb) {
return this._client.exists(key, cb);
}
/**
* Add a value and its score to a sorted set. If no sorted set exists, this
* will create a new one for the given key.
* @param {string} key - name of key
* @param {integer} score - score used to order set
* @param {string} value - value to store
* @param {callback} cb - callback
* @return {undefined}
*/
zadd(key, score, value, cb) {
return this._client.zadd(key, score, value, cb);
}
/**
* Get number of elements in a sorted set.
* Note: using this on a key that does not exist will return 0.
* Note: using this on an existing key that isn't a sorted set will
* return an error WRONGTYPE.
* @param {string} key - name of key
* @param {function} cb - callback
* @return {undefined}
*/
zcard(key, cb) {
return this._client.zcard(key, cb);
}
/**
* Get the score for given value in a sorted set
* Note: using this on a key that does not exist will return nil.
* Note: using this on a value that does not exist in a valid sorted set key
* will return nil.
* @param {string} key - name of key
* @param {string} value - value within sorted set
* @param {function} cb - callback
* @return {undefined}
*/
zscore(key, value, cb) {
return this._client.zscore(key, value, cb);
}
/**
* Remove a value from a sorted set
* @param {string} key - name of key
* @param {string|array} value - value within sorted set. Can specify
* multiple values within an array
* @param {function} cb - callback
* The cb response returns number of values removed
* @return {undefined}
*/
zrem(key, value, cb) {
return this._client.zrem(key, value, cb);
}
/**
* Get specified range of elements in a sorted set
* @param {string} key - name of key
* @param {integer} start - start index (inclusive)
* @param {integer} end - end index (inclusive) (can use -1)
* @param {function} cb - callback
* @return {undefined}
*/
zrange(key, start, end, cb) {
return this._client.zrange(key, start, end, cb);
}
/**
* Get range of elements in a sorted set based off score
* @param {string} key - name of key
* @param {integer|string} min - min score value (inclusive)
* (can use "-inf")
* @param {integer|string} max - max score value (inclusive)
* (can use "+inf")
* @param {function} cb - callback
* @return {undefined}
*/
zrangebyscore(key, min, max, cb) {
return this._client.zrangebyscore(key, min, max, cb);
}
clear(cb) {
return this._client.flushdb(cb);
}
disconnect() {
this._client.disconnect();
}
}
module.exports = RedisClient;
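A usage sketch (host, port, and key names are illustrative; the logger is any werelogs-compatible instance):

const werelogs = require('werelogs');
const RedisClient = require('./RedisClient');

const redis = new RedisClient(
    { host: 'localhost', port: 6379, password: '' },
    new werelogs.Logger('RedisClient'));

// Count an event and let the counter expire after 15 minutes; the
// callback receives the MULTI results as [err, value] pairs.
redis.incrEx('s3:requests', 900, (err, res) => {
    if (!err) {
        // res looks like [[null, 1], [null, 1]]
    }
});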

lib/metrics/StatsClient.js

@@ -0,0 +1,163 @@
const async = require('async');
class StatsClient {
/**
* @constructor
* @param {object} redisClient - RedisClient instance
* @param {number} interval - sampling interval by seconds
* @param {number} expiry - sampling duration by seconds
*/
constructor(redisClient, interval, expiry) {
this._redis = redisClient;
this._interval = interval;
this._expiry = expiry;
return this;
}
/*
* Utility function to use when callback is undefined
*/
_noop() {}
/**
* normalize to the nearest interval
* @param {object} d - Date instance
* @return {number} timestamp - normalized to the nearest interval
*/
_normalizeTimestamp(d) {
const s = d.getSeconds();
return d.setSeconds(s - s % this._interval, 0);
}
/**
* set timestamp to the previous interval
* @param {object} d - Date instance
* @return {number} timestamp - set to the previous interval
*/
_setPrevInterval(d) {
return d.setSeconds(d.getSeconds() - this._interval);
}
/**
* build redis key to get total number of occurrences on the server
* @param {string} name - key name identifier
* @param {object} d - Date instance
* @return {string} key - key for redis
*/
_buildKey(name, d) {
return `${name}:${this._normalizeTimestamp(d)}`;
}
/**
* reduce the array of values to a single value
* typical input looks like [[null, '1'], [null, '2'], [null, null]...]
* @param {array} arr - array of batch results, one [err, value] pair each
* @return {number} - the summed count
*/
_getCount(arr) {
return arr.reduce((prev, a) => {
let num = parseInt(a[1], 10);
num = Number.isNaN(num) ? 0 : num;
return prev + num;
}, 0);
}
/**
* report/record a new request received on the server
* @param {string} id - service identifier
* @param {number} incr - optional param increment
* @param {function} cb - callback
* @return {undefined}
*/
reportNewRequest(id, incr, cb) {
if (!this._redis) {
return undefined;
}
let callback;
let amount;
if (typeof incr === 'function') {
// In case where optional `incr` is not passed, but `cb` is passed
callback = incr;
amount = 1;
} else {
callback = (cb && typeof cb === 'function') ? cb : this._noop;
amount = (typeof incr === 'number') ? incr : 1;
}
const key = this._buildKey(`${id}:requests`, new Date());
return this._redis.incrbyEx(key, amount, this._expiry, callback);
}
/**
* report/record a request that ended up being a 500 on the server
* @param {string} id - service identifier
* @param {callback} cb - callback
* @return {undefined}
*/
report500(id, cb) {
if (!this._redis) {
return undefined;
}
const callback = cb || this._noop;
const key = this._buildKey(`${id}:500s`, new Date());
return this._redis.incrEx(key, this._expiry, callback);
}
/**
* get stats for the last x seconds, x being the sampling duration
* @param {object} log - Werelogs request logger
* @param {string} id - service identifier
* @param {callback} cb - callback to call with the err/result
* @return {undefined}
*/
getStats(log, id, cb) {
if (!this._redis) {
return cb(null, {});
}
const d = new Date();
const totalKeys = Math.floor(this._expiry / this._interval);
const reqsKeys = [];
const req500sKeys = [];
for (let i = 0; i < totalKeys; i++) {
reqsKeys.push(['get', this._buildKey(`${id}:requests`, d)]);
req500sKeys.push(['get', this._buildKey(`${id}:500s`, d)]);
this._setPrevInterval(d);
}
return async.parallel([
next => this._redis.batch(reqsKeys, next),
next => this._redis.batch(req500sKeys, next),
], (err, results) => {
/**
* Batch result is of the format
* [ [null, '1'], [null, '2'], [null, '3'] ] where each
* item is the result of each batch command.
* For each item in the result, index 0 signifies the error and
* index 1 contains the result
*/
const statsRes = {
'requests': 0,
'500s': 0,
'sampleDuration': this._expiry,
};
if (err) {
log.error('error getting stats', {
error: err,
method: 'StatsClient.getStats',
});
/**
* Redis for stats is not a critical component; ignore
* any error here, as returning an InternalError
* would misrepresent the health of the service
*/
return cb(null, statsRes);
}
statsRes.requests = this._getCount(results[0]);
statsRes['500s'] = this._getCount(results[1]);
return cb(null, statsRes);
});
}
}
module.exports = StatsClient;
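A usage sketch, assuming a `redisClient` wired as above and a werelogs request logger `log`:

const StatsClient = require('./StatsClient');

// 5-second sampling interval over a 30-second window (6 intervals).
const stats = new StatsClient(redisClient, 5, 30);

stats.reportNewRequest('s3', 1, () => {});
stats.report500('s3');

stats.getStats(log, 's3', (err, res) => {
    // res => { 'requests': <n>, '500s': <m>, 'sampleDuration': 30 }
});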

lib/metrics/StatsModel.js

@@ -0,0 +1,120 @@
const StatsClient = require('./StatsClient');
/**
* @class StatsModel
*
* @classdesc Extends StatsClient, overriding timestamp normalization to
* work by minutes rather than by seconds
*/
class StatsModel extends StatsClient {
/**
* normalize date timestamp to the nearest hour
* @param {Date} d - Date instance
* @return {number} timestamp - normalized to the nearest hour
*/
normalizeTimestampByHour(d) {
return d.setMinutes(0, 0, 0);
}
/**
* get previous hour to date given
* @param {Date} d - Date instance
* @return {number} timestamp - one hour prior to date passed
*/
_getDatePreviousHour(d) {
return d.setHours(d.getHours() - 1);
}
/**
* normalize to the nearest interval
* @param {object} d - Date instance
* @return {number} timestamp - normalized to the nearest interval
*/
_normalizeTimestamp(d) {
const m = d.getMinutes();
return d.setMinutes(m - m % (Math.floor(this._interval / 60)), 0, 0);
}
/**
* override the method to get the result as an array of integers, one
* entry per interval
* typical input looks like [[null, '1'], [null, '2'], [null, null]...]
* @param {array} arr - each index contains the result of each batch command
* where index 0 signifies the error and index 1 contains the result
* @return {array} array of integers, ordered from most recent interval to
* oldest interval
*/
_getCount(arr) {
return arr.reduce((store, i) => {
let num = parseInt(i[1], 10);
num = Number.isNaN(num) ? 0 : num;
store.push(num);
return store;
}, []);
}
/**
* get list of sorted set key timestamps
* @param {number} epoch - epoch time
* @return {array} array of sorted set key timestamps
*/
getSortedSetHours(epoch) {
const timestamps = [];
let date = this.normalizeTimestampByHour(new Date(epoch));
while (timestamps.length < 24) {
timestamps.push(date);
date = this._getDatePreviousHour(new Date(date));
}
return timestamps;
}
/**
* get the normalized hour timestamp for given epoch time
* @param {number} epoch - epoch time
* @return {string} normalized hour timestamp for given time
*/
getSortedSetCurrentHour(epoch) {
return this.normalizeTimestampByHour(new Date(epoch));
}
/**
* helper method to add element to a sorted set, applying TTL if new set
* @param {string} key - name of key
* @param {integer} score - score used to order set
* @param {string} value - value to store
* @param {callback} cb - callback
* @return {undefined}
*/
addToSortedSet(key, score, value, cb) {
this._redis.exists(key, (err, resCode) => {
if (err) {
return cb(err);
}
if (resCode === 0) {
// milliseconds in a day
const msInADay = 24 * 60 * 60 * 1000;
const nearestHour = this.normalizeTimestampByHour(new Date());
// in seconds
const ttl = Math.ceil(
(msInADay - (Date.now() - nearestHour)) / 1000);
const cmds = [
['zadd', key, score, value],
['expire', key, ttl],
];
return this._redis.batch(cmds, (err, res) => {
if (err) {
return cb(err);
}
const cmdErr = res.find(r => r[0] !== null);
if (cmdErr) {
return cb(cmdErr);
}
const successResponse = res[0][1];
return cb(null, successResponse);
});
}
return this._redis.zadd(key, score, value, cb);
});
}
}
module.exports = StatsModel;
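A sketch of the sorted-set helper, again assuming a `redisClient` wired as above; the key and member are illustrative:

const StatsModel = require('./StatsModel');

const stats = new StatsModel(redisClient, 300, 900); // 5-minute intervals
const hour = stats.getSortedSetCurrentHour(Date.now());

// The first zadd on a new key also sets a TTL that expires the set at
// the end of the current day.
stats.addToSortedSet(`failed:${hour}`, Date.now(),
    JSON.stringify({ op: 'putObject' }),
    (err, res) => {
        // res is the number of new members added (1 on success)
    });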


@@ -0,0 +1,40 @@
const promClient = require('prom-client');
const collectDefaultMetricsIntervalMs =
process.env.COLLECT_DEFAULT_METRICS_INTERVAL_MS !== undefined ?
Number.parseInt(process.env.COLLECT_DEFAULT_METRICS_INTERVAL_MS, 10) :
10000;
promClient.collectDefaultMetrics({ timeout: collectDefaultMetricsIntervalMs });
class ZenkoMetrics {
static createCounter(params) {
return new promClient.Counter(params);
}
static createGauge(params) {
return new promClient.Gauge(params);
}
static createHistogram(params) {
return new promClient.Histogram(params);
}
static createSummary(params) {
return new promClient.Summary(params);
}
static getMetric(name) {
return promClient.register.getSingleMetric(name);
}
static asPrometheus() {
return promClient.register.metrics();
}
static asPrometheusContentType() {
return promClient.register.contentType;
}
}
module.exports = ZenkoMetrics;
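A usage sketch (the metric name and label are illustrative; in the prom-client releases contemporary with this wrapper, `register.metrics()` returns the exposition text synchronously, while newer releases return a Promise):

const ZenkoMetrics = require('./ZenkoMetrics');

const counter = ZenkoMetrics.createCounter({
    name: 'cloud_server_requests_total',
    help: 'Total number of requests received',
    labelNames: ['method'],
});
counter.inc({ method: 'GET' });

// Serving a /metrics endpoint:
// res.setHeader('Content-Type', ZenkoMetrics.asPrometheusContentType());
// res.end(ZenkoMetrics.asPrometheus());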

lib/models/ARN.js

@@ -0,0 +1,106 @@
const errors = require('../errors');
const validServices = {
aws: ['s3', 'iam', 'sts', 'ring'],
scality: ['utapi', 'sso'],
};
class ARN {
/**
*
* Create an ARN object from its individual components
*
* @constructor
* @param {string} partition - ARN partition (e.g. 'aws')
* @param {string} service - service name in partition (e.g. 's3')
* @param {string} [region] - AWS region
* @param {string} [accountId] - AWS 12-digit account ID
* @param {string} resource - AWS resource path (e.g. 'foo/bar')
*/
constructor(partition, service, region, accountId, resource) {
this._partition = partition;
this._service = service;
this._region = region || null;
this._accountId = accountId || null;
this._resource = resource;
}
static createFromString(arnStr) {
const [arn, partition, service, region, accountId,
resourceType, resource] = arnStr.split(':');
if (arn !== 'arn') {
return { error: errors.InvalidArgument.customizeDescription(
'bad ARN: must start with "arn:"') };
}
if (!partition) {
return { error: errors.InvalidArgument.customizeDescription(
'bad ARN: must include a partition name, like "aws" in ' +
'"arn:aws:..."') };
}
if (!service) {
return { error: errors.InvalidArgument.customizeDescription(
'bad ARN: must include a service name, like "s3" in ' +
'"arn:aws:s3:..."') };
}
if (validServices[partition] === undefined) {
return { error: errors.InvalidArgument.customizeDescription(
`bad ARN: unknown partition "${partition}", should be a ` +
'valid partition name like "aws" in "arn:aws:..."') };
}
if (!validServices[partition].includes(service)) {
return { error: errors.InvalidArgument.customizeDescription(
`bad ARN: unsupported ${partition} service "${service}"`) };
}
if (accountId && !/^([0-9]{12}|[*])$/.test(accountId)) {
return { error: errors.InvalidArgument.customizeDescription(
`bad ARN: bad account ID "${accountId}": ` +
'must be a 12-digit number or "*"') };
}
const fullResource = (resource !== undefined ?
`${resourceType}:${resource}` : resourceType);
return new ARN(partition, service, region, accountId, fullResource);
}
getPartition() {
return this._partition;
}
getService() {
return this._service;
}
getRegion() {
return this._region;
}
getAccountId() {
return this._accountId;
}
getResource() {
return this._resource;
}
isIAMAccount() {
return this.getService() === 'iam'
&& this.getAccountId() !== null
&& this.getAccountId() !== '*'
&& this.getResource() === 'root';
}
isIAMUser() {
return this.getService() === 'iam'
&& this.getAccountId() !== null
&& this.getAccountId() !== '*'
&& this.getResource().startsWith('user/');
}
isIAMRole() {
return this.getService() === 'iam'
&& this.getAccountId() !== null
&& this.getResource().startsWith('role');
}
toString() {
return ['arn', this.getPartition(), this.getService(),
this.getRegion(), this.getAccountId(), this.getResource()]
.join(':');
}
}
module.exports = ARN;
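For example:

const ARN = require('./ARN');

const arn = ARN.createFromString('arn:aws:iam::123456789012:user/Bart');
// On bad input, createFromString returns { error } instead of an instance.
arn.isIAMUser();    // => true
arn.getAccountId(); // => '123456789012'
arn.toString();     // => 'arn:aws:iam::123456789012:user/Bart'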

lib/models/BucketInfo.js

@@ -0,0 +1,526 @@
const assert = require('assert');
const { WebsiteConfiguration } = require('./WebsiteConfiguration');
const ReplicationConfiguration = require('./ReplicationConfiguration');
const LifecycleConfiguration = require('./LifecycleConfiguration');
// WHEN UPDATING THIS NUMBER, UPDATE MODELVERSION.MD CHANGELOG
const modelVersion = 6;
class BucketInfo {
/**
* Represents all bucket information.
* @constructor
* @param {string} name - bucket name
* @param {string} owner - bucket owner's name
* @param {string} ownerDisplayName - owner's display name
* @param {object} creationDate - creation date of bucket
* @param {number} mdBucketModelVersion - bucket model version
* @param {object} [acl] - bucket ACLs (no need to copy
* ACL object since referenced object will not be used outside of
* BucketInfo instance)
* @param {boolean} transient - flag indicating whether bucket is transient
* @param {boolean} deleted - flag indicating whether an attempt to delete
* the bucket has been made
* @param {object} serverSideEncryption - sse information for this bucket
* @param {number} serverSideEncryption.cryptoScheme -
* cryptoScheme used
* @param {string} serverSideEncryption.algorithm -
* algorithm to use
* @param {string} serverSideEncryption.masterKeyId -
* key to get master key
* @param {boolean} serverSideEncryption.mandatory -
* true for mandatory encryption
* @param {object} versioningConfiguration - versioning configuration
* @param {string} versioningConfiguration.Status - versioning status
* @param {object} versioningConfiguration.MfaDelete - versioning mfa delete
* @param {string} locationConstraint - locationConstraint for bucket
* @param {WebsiteConfiguration} [websiteConfiguration] - website
* configuration
* @param {object[]} [cors] - collection of CORS rules to apply
* @param {string} [cors[].id] - optional ID to identify rule
* @param {string[]} cors[].allowedMethods - methods allowed for CORS request
* @param {string[]} cors[].allowedOrigins - origins allowed for CORS request
* @param {string[]} [cors[].allowedHeaders] - headers allowed in an OPTIONS
* request via the Access-Control-Request-Headers header
* @param {number} [cors[].maxAgeSeconds] - seconds browsers should cache
* OPTIONS response
* @param {string[]} [cors[].exposeHeaders] - headers to expose to applications
* @param {object} [replicationConfiguration] - replication configuration
* @param {object} [lifecycleConfiguration] - lifecycle configuration
*/
constructor(name, owner, ownerDisplayName, creationDate,
mdBucketModelVersion, acl, transient, deleted,
serverSideEncryption, versioningConfiguration,
locationConstraint, websiteConfiguration, cors,
replicationConfiguration, lifecycleConfiguration) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof owner, 'string');
assert.strictEqual(typeof ownerDisplayName, 'string');
assert.strictEqual(typeof creationDate, 'string');
if (mdBucketModelVersion) {
assert.strictEqual(typeof mdBucketModelVersion, 'number');
}
if (acl) {
assert.strictEqual(typeof acl, 'object');
assert(Array.isArray(acl.FULL_CONTROL));
assert(Array.isArray(acl.WRITE));
assert(Array.isArray(acl.WRITE_ACP));
assert(Array.isArray(acl.READ));
assert(Array.isArray(acl.READ_ACP));
}
if (serverSideEncryption) {
assert.strictEqual(typeof serverSideEncryption, 'object');
const { cryptoScheme, algorithm, masterKeyId, mandatory } =
serverSideEncryption;
assert.strictEqual(typeof cryptoScheme, 'number');
assert.strictEqual(typeof algorithm, 'string');
assert.strictEqual(typeof masterKeyId, 'string');
assert.strictEqual(typeof mandatory, 'boolean');
}
if (versioningConfiguration) {
assert.strictEqual(typeof versioningConfiguration, 'object');
const { Status, MfaDelete } = versioningConfiguration;
assert(Status === undefined ||
Status === 'Enabled' ||
Status === 'Suspended');
assert(MfaDelete === undefined ||
MfaDelete === 'Enabled' ||
MfaDelete === 'Disabled');
}
if (locationConstraint) {
assert.strictEqual(typeof locationConstraint, 'string');
}
if (websiteConfiguration) {
assert(websiteConfiguration instanceof WebsiteConfiguration);
const { indexDocument, errorDocument, redirectAllRequestsTo,
routingRules } = websiteConfiguration;
assert(indexDocument === undefined ||
typeof indexDocument === 'string');
assert(errorDocument === undefined ||
typeof errorDocument === 'string');
assert(redirectAllRequestsTo === undefined ||
typeof redirectAllRequestsTo === 'object');
assert(routingRules === undefined ||
Array.isArray(routingRules));
}
if (cors) {
assert(Array.isArray(cors));
}
if (replicationConfiguration) {
ReplicationConfiguration.validateConfig(replicationConfiguration);
}
if (lifecycleConfiguration) {
LifecycleConfiguration.validateConfig(lifecycleConfiguration);
}
const aclInstance = acl || {
Canned: 'private',
FULL_CONTROL: [],
WRITE: [],
WRITE_ACP: [],
READ: [],
READ_ACP: [],
};
// IF UPDATING PROPERTIES, INCREMENT MODELVERSION NUMBER ABOVE
this._acl = aclInstance;
this._name = name;
this._owner = owner;
this._ownerDisplayName = ownerDisplayName;
this._creationDate = creationDate;
this._mdBucketModelVersion = mdBucketModelVersion || 0;
this._transient = transient || false;
this._deleted = deleted || false;
this._serverSideEncryption = serverSideEncryption || null;
this._versioningConfiguration = versioningConfiguration || null;
this._locationConstraint = locationConstraint || null;
this._websiteConfiguration = websiteConfiguration || null;
this._replicationConfiguration = replicationConfiguration || null;
this._cors = cors || null;
this._lifecycleConfiguration = lifecycleConfiguration || null;
return this;
}
/**
* Serialize the object
* @return {string} - stringified object
*/
serialize() {
const bucketInfos = {
acl: this._acl,
name: this._name,
owner: this._owner,
ownerDisplayName: this._ownerDisplayName,
creationDate: this._creationDate,
mdBucketModelVersion: this._mdBucketModelVersion,
transient: this._transient,
deleted: this._deleted,
serverSideEncryption: this._serverSideEncryption,
versioningConfiguration: this._versioningConfiguration,
locationConstraint: this._locationConstraint,
websiteConfiguration: undefined,
cors: this._cors,
replicationConfiguration: this._replicationConfiguration,
lifecycleConfiguration: this._lifecycleConfiguration,
};
if (this._websiteConfiguration) {
bucketInfos.websiteConfiguration =
this._websiteConfiguration.getConfig();
}
return JSON.stringify(bucketInfos);
}
/**
* deSerialize the JSON string
* @param {string} stringBucket - the stringified bucket
* @return {object} - parsed string
*/
static deSerialize(stringBucket) {
const obj = JSON.parse(stringBucket);
const websiteConfig = obj.websiteConfiguration ?
new WebsiteConfiguration(obj.websiteConfiguration) : null;
return new BucketInfo(obj.name, obj.owner, obj.ownerDisplayName,
obj.creationDate, obj.mdBucketModelVersion, obj.acl,
obj.transient, obj.deleted, obj.serverSideEncryption,
obj.versioningConfiguration, obj.locationConstraint, websiteConfig,
obj.cors, obj.replicationConfiguration, obj.lifecycleConfiguration);
}
/**
* Returns the current model version for the data structure
* @return {number} - the current model version set above in the file
*/
static currentModelVersion() {
return modelVersion;
}
/**
* Create a BucketInfo from an object
*
* @param {object} data - object containing data
* @return {BucketInfo} Return an BucketInfo
*/
static fromObj(data) {
return new BucketInfo(data._name, data._owner, data._ownerDisplayName,
data._creationDate, data._mdBucketModelVersion, data._acl,
data._transient, data._deleted, data._serverSideEncryption,
data._versioningConfiguration, data._locationConstraint,
data._websiteConfiguration, data._cors,
data._replicationConfiguration, data._lifecycleConfiguration);
}
/**
* Get the ACLs.
* @return {object} acl
*/
getAcl() {
return this._acl;
}
/**
* Set the canned acl's.
* @param {string} cannedACL - canned ACL being set
* @return {BucketInfo} - bucket info instance
*/
setCannedAcl(cannedACL) {
this._acl.Canned = cannedACL;
return this;
}
/**
* Set a specific ACL.
* @param {string} canonicalID - id for account being given access
* @param {string} typeOfGrant - type of grant being granted
* @return {BucketInfo} - bucket info instance
*/
setSpecificAcl(canonicalID, typeOfGrant) {
this._acl[typeOfGrant].push(canonicalID);
return this;
}
/**
* Set all ACLs.
* @param {object} acl - new set of ACLs
* @return {BucketInfo} - bucket info instance
*/
setFullAcl(acl) {
this._acl = acl;
return this;
}
/**
* Get the server side encryption information
* @return {object} serverSideEncryption
*/
getServerSideEncryption() {
return this._serverSideEncryption;
}
/**
* Set server side encryption information
* @param {object} serverSideEncryption - server side encryption information
* @return {BucketInfo} - bucket info instance
*/
setServerSideEncryption(serverSideEncryption) {
this._serverSideEncryption = serverSideEncryption;
return this;
}
/**
* Get the versioning configuration information
* @return {object} versioningConfiguration
*/
getVersioningConfiguration() {
return this._versioningConfiguration;
}
/**
* Set versioning configuration information
* @param {object} versioningConfiguration - versioning information
* @return {BucketInfo} - bucket info instance
*/
setVersioningConfiguration(versioningConfiguration) {
this._versioningConfiguration = versioningConfiguration;
return this;
}
/**
* Check that versioning is 'Enabled' on the given bucket.
* @return {boolean} - `true` if versioning is 'Enabled', otherwise `false`
*/
isVersioningEnabled() {
const versioningConfig = this.getVersioningConfiguration();
return versioningConfig ? versioningConfig.Status === 'Enabled' : false;
}
/**
* Get the website configuration information
* @return {object} websiteConfiguration
*/
getWebsiteConfiguration() {
return this._websiteConfiguration;
}
/**
* Set website configuration information
* @param {object} websiteConfiguration - configuration for bucket website
* @return {BucketInfo} - bucket info instance
*/
setWebsiteConfiguration(websiteConfiguration) {
this._websiteConfiguration = websiteConfiguration;
return this;
}
/**
* Set replication configuration information
* @param {object} replicationConfiguration - replication information
* @return {BucketInfo} - bucket info instance
*/
setReplicationConfiguration(replicationConfiguration) {
this._replicationConfiguration = replicationConfiguration;
return this;
}
/**
* Get replication configuration information
* @return {object|null} replication configuration information or `null` if
* the bucket does not have a replication configuration
*/
getReplicationConfiguration() {
return this._replicationConfiguration;
}
/**
* Get lifecycle configuration information
* @return {object|null} lifecycle configuration information or `null` if
* the bucket does not have a lifecycle configuration
*/
getLifecycleConfiguration() {
return this._lifecycleConfiguration;
}
/**
* Set lifecycle configuration information
* @param {object} lifecycleConfiguration - lifecycle information
* @return {BucketInfo} - bucket info instance
*/
setLifecycleConfiguration(lifecycleConfiguration) {
this._lifecycleConfiguration = lifecycleConfiguration;
return this;
}
/**
* Get cors resource
* @return {object[]} cors
*/
getCors() {
return this._cors;
}
/**
* Set cors resource
* @param {object[]} rules - collection of CORS rules
* @param {string} [rules.id] - optional id to identify rule
* @param {string[]} rules[].allowedMethods - methods allowed for CORS
* @param {string[]} rules[].allowedOrigins - origins allowed for CORS
* @param {string[]} [rules[].allowedHeaders] - headers allowed in an
* OPTIONS request via the Access-Control-Request-Headers header
* @param {number} [rules[].maxAgeSeconds] - seconds browsers should cache
* OPTIONS response
* @param {string[]} [rules[].exposeHeaders] - headers to expose to external
* applications
* @return {BucketInfo} - bucket info instance
*/
setCors(rules) {
this._cors = rules;
return this;
}
/**
* get the serverside encryption algorithm
* @return {string} - sse algorithm used by this bucket
*/
getSseAlgorithm() {
if (!this._serverSideEncryption) {
return null;
}
return this._serverSideEncryption.algorithm;
}
/**
* get the server side encryption master key Id
* @return {string} - sse master key Id used by this bucket
*/
getSseMasterKeyId() {
if (!this._serverSideEncryption) {
return null;
}
return this._serverSideEncryption.masterKeyId;
}
/**
* Get bucket name.
* @return {string} - bucket name
*/
getName() {
return this._name;
}
/**
* Set bucket name.
* @param {string} bucketName - new bucket name
* @return {BucketInfo} - bucket info instance
*/
setName(bucketName) {
this._name = bucketName;
return this;
}
/**
* Get bucket owner.
* @return {string} - bucket owner's canonicalID
*/
getOwner() {
return this._owner;
}
/**
* Set bucket owner.
* @param {string} ownerCanonicalID - bucket owner canonicalID
* @return {BucketInfo} - bucket info instance
*/
setOwner(ownerCanonicalID) {
this._owner = ownerCanonicalID;
return this;
}
/**
* Get bucket owner display name.
* @return {string} - bucket owner display name
*/
getOwnerDisplayName() {
return this._ownerDisplayName;
}
/**
* Set bucket owner display name.
* @param {string} ownerDisplayName - bucket owner display name
* @return {BucketInfo} - bucket info instance
*/
setOwnerDisplayName(ownerDisplayName) {
this._ownerDisplayName = ownerDisplayName;
return this;
}
/**
* Get bucket creation date.
* @return {object} - bucket creation date
*/
getCreationDate() {
return this._creationDate;
}
/**
* Set location constraint.
* @param {string} location - bucket location constraint
* @return {BucketInfo} - bucket info instance
*/
setLocationConstraint(location) {
this._locationConstraint = location;
return this;
}
/**
* Get location constraint.
* @return {string} - bucket location constraint
*/
getLocationConstraint() {
return this._locationConstraint;
}
/**
* Set Bucket model version
*
* @param {number} version - Model version
* @return {BucketInfo} - bucket info instance
*/
setMdBucketModelVersion(version) {
this._mdBucketModelVersion = version;
return this;
}
/**
* Get Bucket model version
*
* @return {number} Bucket model version
*/
getMdBucketModelVersion() {
return this._mdBucketModelVersion;
}
/**
* Add transient flag.
* @return {BucketInfo} - bucket info instance
*/
addTransientFlag() {
this._transient = true;
return this;
}
/**
* Remove transient flag.
* @return {BucketInfo} - bucket info instance
*/
removeTransientFlag() {
this._transient = false;
return this;
}
/**
* Check transient flag.
* @return {boolean} - `true` if the transient flag is set, otherwise `false`
*/
hasTransientFlag() {
return !!this._transient;
}
/**
* Add deleted flag.
* @return {BucketInfo} - bucket info instance
*/
addDeletedFlag() {
this._deleted = true;
return this;
}
/**
* Remove deleted flag.
* @return {BucketInfo} - bucket info instance
*/
removeDeletedFlag() {
this._deleted = false;
return this;
}
/**
* Check deleted flag.
* @return {boolean} - `true` if the deleted flag is set, otherwise `false`
*/
hasDeletedFlag() {
return !!this._deleted;
}
/**
* Check if the versioning mode is on.
* @return {boolean} - versioning mode status
*/
isVersioningOn() {
return this._versioningConfiguration &&
this._versioningConfiguration.Status === 'Enabled';
}
}
module.exports = BucketInfo;
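A minimal usage sketch (illustrative, not part of the file): round-tripping a bucket through serialize() and deSerialize(), assuming the positional constructor arguments shown in deSerialize() above; the require path is hypothetical.

const assert = require('assert');
const BucketInfo = require('./lib/models/BucketInfo'); // path hypothetical

// Argument order mirrors deSerialize() above; trailing optional
// settings are omitted.
const bucket = new BucketInfo('example-bucket', 'ownerCanonicalId',
    'example-owner', new Date().toJSON(),
    BucketInfo.currentModelVersion());
const blob = bucket.serialize(); // JSON string
const copy = BucketInfo.deSerialize(blob); // new BucketInfo instance
assert.strictEqual(copy.getName(), 'example-bucket');
assert.strictEqual(copy.isVersioningEnabled(), false);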

lib/models/LifecycleConfiguration.js
@@ -0,0 +1,728 @@
const assert = require('assert');
const UUID = require('uuid');
const errors = require('../errors');
/**
* Format of xml request:
<LifecycleConfiguration>
<Rule>
<ID>id1</ID>
<Filter>
<Prefix>logs/</Prefix>
</Filter>
<Status>Enabled</Status>
<Expiration>
<Days>365</Days>
</Expiration>
</Rule>
<Rule>
<ID>DeleteAfterBecomingNonCurrent</ID>
<Filter>
<And>
<Prefix>logs/</Prefix>
<Tag>
<Key>key1</Key>
<Value>value1</Value>
</Tag>
</And>
</Filter>
<Status>Enabled</Status>
<NoncurrentVersionExpiration>
<NoncurrentDays>1</NoncurrentDays>
</NoncurrentVersionExpiration>
<AbortIncompleteMultipartUploads>
<DaysAfterInitiation>1</DaysAfterInitiation>
</AbortIncompleteMultipartUploads>
</Rule>
</LifecycleConfiguration>
*/
/**
* Format of config:
config = {
rules = [
{
ruleID: <value>,
ruleStatus: <value>,
filter: {
rulePrefix: <value>,
tags: [
{
key: <value>,
val: <value>
},
{
key: <value>,
val: <value>
}
]
},
actions: [
{
actionName: <value>,
days: <value>,
date: <value>,
deleteMarker: <value>
},
{
actionName: <value>,
days: <value>,
date: <value>,
deleteMarker: <value>,
},
]
}
]
};
*/
class LifecycleConfiguration {
/**
* Create a Lifecycle Configuration instance
* @param {object} xml - the parsed XML request body
* @return {object} - LifecycleConfiguration instance
*/
constructor(xml) {
this._parsedXML = xml;
this._ruleIDs = [];
this._tagKeys = [];
this._config = {};
}
/**
* Get the lifecycle configuration
* @return {object} - the lifecycle configuration
*/
getLifecycleConfiguration() {
const rules = this._buildRulesArray();
if (rules.error) {
this._config.error = rules.error;
}
return this._config;
}
/**
* Build the this._config.rules array
* @return {object} - contains error if any rule returned an error
* or parsing failed
*/
_buildRulesArray() {
const rules = {};
this._config.rules = [];
if (!this._parsedXML || this._parsedXML === '') {
rules.error = errors.MalformedXML.customizeDescription(
'request xml is undefined or empty');
return rules;
}
if (!this._parsedXML.LifecycleConfiguration &&
this._parsedXML.LifecycleConfiguration !== '') {
rules.error = errors.MalformedXML.customizeDescription(
'request xml does not include LifecycleConfiguration');
return rules;
}
const lifecycleConf = this._parsedXML.LifecycleConfiguration;
const rulesArray = lifecycleConf.Rule;
if (!rulesArray || !Array.isArray(rulesArray)
|| rulesArray.length === 0) {
rules.error = errors.MissingRequiredParameter.customizeDescription(
'missing required key \'Rules\' in LifecycleConfiguration');
return rules;
}
if (rulesArray.length > 1000) {
rules.error = errors.MalformedXML.customizeDescription(
'request xml includes over max limit of 1000 rules');
return rules;
}
for (let i = 0; i < rulesArray.length; i++) {
const rule = this._parseRule(rulesArray[i]);
if (rule.error) {
rules.error = rule.error;
break;
} else {
this._config.rules.push(rule);
}
}
return rules;
}
/**
* Check that each xml rule is valid
* @param {object} rule - a rule object from Rule array from this._parsedXml
* @return {object} - contains error if any component returned an error
* or parsing failed, else contains parsed rule object
*
* Format of ruleObj:
* ruleObj = {
* ruleID: <value>,
* ruleStatus: <value>,
* filter: {
* rulePrefix: <value>,
* tags: [
* {
* key: <value>,
* val: <value>,
* }
* ]
* }
* actions: [
* {
* actionName: <value>,
* day: <value>,
* date: <value>,
* deleteMarker: <value>
* },
* ]
* }
*/
_parseRule(rule) {
const ruleObj = {};
if (rule.Transition || rule.NoncurrentVersionTransition) {
ruleObj.error = errors.NotImplemented.customizeDescription(
'Transition lifecycle action not yet implemented');
return ruleObj;
}
// Either Prefix or Filter must be included, but can be empty string
if ((!rule.Filter && rule.Filter !== '') &&
(!rule.Prefix && rule.Prefix !== '')) {
ruleObj.error = errors.MalformedXML.customizeDescription(
'Rule xml does not include valid Filter or Prefix');
return ruleObj;
}
if (rule.Filter && rule.Prefix) {
ruleObj.error = errors.MalformedXML.customizeDescription(
'Rule xml should not include both Filter and Prefix');
return ruleObj;
}
if (!rule.Status) {
ruleObj.error = errors.MissingRequiredParameter.
customizeDescription('Rule xml does not include Status');
return ruleObj;
}
const subFilter = rule.Filter ? rule.Filter[0] : rule.Prefix;
const id = this._parseID(rule.ID);
const status = this._parseStatus(rule.Status[0]);
const filter = this._parseFilter(subFilter);
const actions = this._parseAction(rule);
const rulePropArray = [id, status, filter, actions];
for (let i = 0; i < rulePropArray.length; i++) {
const prop = rulePropArray[i];
if (prop.error) {
ruleObj.error = prop.error;
break;
} else {
const propName = prop.propName;
// eslint-disable-next-line no-param-reassign
delete prop.propName;
ruleObj[propName] = prop[propName] || prop;
}
}
return ruleObj;
}
/**
* Check that filter component of rule is valid
* @param {object} filter - filter object from a rule object
* @return {object} - contains error if parsing failed, else contains
* parsed prefix and tag array
*
* Format of filterObj:
* filterObj = {
* error: <error>,
* propName: 'filter',
* rulePrefix: <value>,
* tags: [
* {
* key: <value>,
* val: <value>
* },
* {
* key: <value>,
* value: <value>
* }
* ]
* }
*/
_parseFilter(filter) {
const filterObj = {};
filterObj.propName = 'filter';
// if no Rule Prefix or Filter, rulePrefix is empty string
filterObj.rulePrefix = '';
if (Array.isArray(filter)) {
// if Prefix was included, not Filter, filter will be Prefix array
// if more than one Prefix is included, we ignore all but the last
filterObj.rulePrefix = filter.pop();
return filterObj;
}
if (filter.And && (filter.Prefix || filter.Tag) ||
(filter.Prefix && filter.Tag)) {
filterObj.error = errors.MalformedXML.customizeDescription(
'Filter should only include one of And, Prefix, or Tag key');
return filterObj;
}
if (filter.Prefix) {
filterObj.rulePrefix = filter.Prefix.pop();
return filterObj;
}
if (filter.Tag) {
const tagObj = this._parseTags(filter.Tag[0]);
if (tagObj.error) {
filterObj.error = tagObj.error;
return filterObj;
}
filterObj.tags = tagObj.tags;
return filterObj;
}
if (filter.And) {
const andF = filter.And[0];
if (!andF.Tag || (!andF.Prefix && andF.Tag.length < 2)) {
filterObj.error = errors.MalformedXML.customizeDescription(
'And should include Prefix and Tags or more than one Tag');
return filterObj;
}
if (andF.Prefix && andF.Prefix.length >= 1) {
filterObj.rulePrefix = andF.Prefix.pop();
}
const tagObj = this._parseTags(andF.Tag[0]);
if (tagObj.error) {
filterObj.error = tagObj.error;
return filterObj;
}
filterObj.tags = tagObj.tags;
return filterObj;
}
return filterObj;
}
/**
* Check that each tag object is valid
* @param {object} tags - a tag object from a filter object
* @return {object} - contains error if parsing failed, else contains
* parsed tags array
*
* Format of tagObj:
* tagObj = {
* error: <error>,
* tags: [
* {
* key: <value>,
* value: <value>,
* }
* ]
* }
*/
_parseTags(tags) {
const tagObj = {};
tagObj.tags = [];
// reset _tagKeys to empty because keys cannot overlap within a rule,
// but different rules can have the same tag keys
this._tagKeys = [];
if (!tags.Key || !tags.Value) {
tagObj.error = errors.MissingRequiredParameter.customizeDescription(
'Tag XML does not contain both Key and Value');
return tagObj;
}
if (tags.Key.length !== tags.Value.length) {
tagObj.error = errors.MalformedXML.customizeDescription(
'Tag XML should contain same number of Keys and Values');
return tagObj;
}
for (let i = 0; i < tags.Key.length; i++) {
if (tags.Key[i].length < 1 || tags.Key[i].length > 128) {
tagObj.error = errors.InvalidRequest.customizeDescription(
'Tag Key must be a length between 1 and 128 char');
break;
}
if (this._tagKeys.includes(tags.Key[i])) {
tagObj.error = errors.InvalidRequest.customizeDescription(
'Tag Keys must be unique');
break;
}
this._tagKeys.push(tags.Key[i]);
const tag = {
key: tags.Key[i],
val: tags.Value[i],
};
tagObj.tags.push(tag);
}
return tagObj;
}
/**
* Check that ID component of rule is valid
* @param {array} id - contains id string at first index or empty
* @return {object} - contains error if parsing failed or id is not unique,
* else contains parsed or generated id
*
* Format of idObj:
* idObj = {
* error: <error>,
* propName: 'ruleID',
* ruleID: <value>
* }
*/
_parseID(id) {
const idObj = {};
idObj.propName = 'ruleID';
if (id && id[0].length > 255) {
idObj.error = errors.InvalidArgument.customizeDescription(
'Rule ID is greater than 255 characters long');
return idObj;
}
if (!id || !id[0] || id[0] === '') {
// ID is an optional property; create one if not provided or empty.
// We generate a unique 48-character ID for the rule.
idObj.ruleID = Buffer.from(UUID.v4()).toString('base64');
} else {
idObj.ruleID = id[0];
}
// Each ID in a list of rules must be unique.
if (this._ruleIDs.includes(idObj.ruleID)) {
idObj.error = errors.InvalidRequest.customizeDescription(
'Rule ID must be unique');
return idObj;
}
this._ruleIDs.push(idObj.ruleID);
return idObj;
}
/**
* Check that status component of rule is valid
* @param {string} status - status string
* @return {object} - contains error if parsing failed, else contains
* parsed status
*
* Format of statusObj:
* statusObj = {
* error: <error>,
* propName: 'ruleStatus',
* ruleStatus: <value>
* }
*/
_parseStatus(status) {
const statusObj = {};
statusObj.propName = 'ruleStatus';
const validStatuses = ['Enabled', 'Disabled'];
if (!validStatuses.includes(status)) {
statusObj.error = errors.MalformedXML.customizeDescription(
'Status is not valid');
return statusObj;
}
statusObj.ruleStatus = status;
return statusObj;
}
/**
* Check that action component of rule is valid
* @param {object} rule - a rule object from Rule array from this._parsedXml
* @return {object} - contains error if parsing failed, else contains
* parsed action information
*
* Format of actionObj:
* actionsObj = {
* error: <error>,
* propName: 'action',
* actions: [
* {
* actionName: <value>,
* days: <value>,
* date: <value>,
* deleteMarker: <value>
* },
* ],
* }
*/
_parseAction(rule) {
const actionsObj = {};
actionsObj.propName = 'actions';
actionsObj.actions = [];
const validActions = ['AbortIncompleteMultipartUpload',
'Expiration', 'NoncurrentVersionExpiration'];
validActions.forEach(a => {
if (rule[a]) {
actionsObj.actions.push({ actionName: `${a}` });
}
});
if (actionsObj.actions.length === 0) {
actionsObj.error = errors.InvalidRequest.customizeDescription(
'Rule does not include valid action');
return actionsObj;
}
actionsObj.actions.forEach(a => {
const actionFn = `_parse${a.actionName}`;
const action = this[actionFn](rule);
if (action.error) {
actionsObj.error = action.error;
} else {
const actionTimes = ['days', 'date', 'deleteMarker'];
actionTimes.forEach(t => {
if (action[t]) {
// eslint-disable-next-line no-param-reassign
a[t] = action[t];
}
});
}
});
return actionsObj;
}
/**
* Check that AbortIncompleteMultipartUpload action is valid
* @param {object} rule - a rule object from Rule array from this._parsedXml
* @return {object} - contains error if parsing failed, else contains
* parsed action time
*
* Format of abortObj:
* abortObj = {
* error: <error>,
* days: <value>
* }
*/
_parseAbortIncompleteMultipartUpload(rule) {
const abortObj = {};
let filter = null;
if (rule.Filter && rule.Filter[0]) {
if (rule.Filter[0].And) {
filter = rule.Filter[0].And[0];
} else {
filter = rule.Filter[0];
}
}
if (filter && filter.Tag) {
abortObj.error = errors.InvalidRequest.customizeDescription(
'Tag-based filter cannot be used with ' +
'AbortIncompleteMultipartUpload action');
return abortObj;
}
const subAbort = rule.AbortIncompleteMultipartUpload[0];
if (!subAbort.DaysAfterInitiation) {
abortObj.error = errors.MalformedXML.customizeDescription(
'AbortIncompleteMultipartUpload action does not ' +
'include DaysAfterInitiation');
return abortObj;
}
const daysInt = parseInt(subAbort.DaysAfterInitiation[0], 10);
if (daysInt < 1) {
abortObj.error = errors.InvalidArgument.customizeDescription(
'DaysAfterInitiation is not a positive integer');
return abortObj;
}
abortObj.days = daysInt;
return abortObj;
}
/**
* Check that Expiration action is valid
* @param {object} rule - a rule object from Rule array from this._parsedXml
* @return {object} - contains error if parsing failed, else contains
* parsed action time
*
* Format of expObj:
* expObj = {
* error: <error>,
* days: <value>,
* date: <value>,
* deleteMarker: <value>
* }
*/
_parseExpiration(rule) {
const expObj = {};
const subExp = rule.Expiration[0];
if (!subExp.Date && !subExp.Days && !subExp.ExpiredObjectDeleteMarker) {
expObj.error = errors.MalformedXML.customizeDescription(
'Expiration action does not include an action time');
return expObj;
}
const eodm = 'ExpiredObjectDeleteMarker';
if (subExp.Date && (subExp.Days || subExp[eodm]) ||
(subExp.Days && subExp[eodm])) {
expObj.error = errors.MalformedXML.customizeDescription(
'Expiration action includes more than one time');
return expObj;
}
if (subExp.Date) {
const isoRegex = new RegExp('^(-?(?:[1-9][0-9]*)?[0-9]{4})-' +
'(1[0-2]|0[1-9])-(3[01]|0[1-9]|[12][0-9])T(2[0-3]|[01][0-9])' +
':([0-5][0-9]):([0-5][0-9])(.[0-9]+)?(Z)?$');
if (!isoRegex.test(subExp.Date[0])) {
expObj.error = errors.InvalidArgument.customizeDescription(
'Date must be in ISO 8601 format');
} else {
expObj.date = subExp.Date[0];
}
}
if (subExp.Days) {
const daysInt = parseInt(subExp.Days[0], 10);
if (daysInt < 1) {
expObj.error = errors.InvalidArgument.customizeDescription(
'Expiration days is not a positive integer');
} else {
expObj.days = daysInt;
}
}
if (subExp.ExpiredObjectDeleteMarker) {
let filter = null;
if (rule.Filter && rule.Filter[0]) {
if (rule.Filter[0].And) {
filter = rule.Filter[0].And[0];
} else {
filter = rule.Filter[0];
}
}
if (filter && filter.Tag) {
expObj.error = errors.InvalidRequest.customizeDescription(
'Tag-based filter cannot be used with ' +
'ExpiredObjectDeleteMarker action');
return expObj;
}
const validValues = ['true', 'false'];
if (!validValues.includes(subExp.ExpiredObjectDeleteMarker[0])) {
expObj.error = errors.MalformedXML.customizeDescription(
'ExpiredObjectDeleteMarker is not true or false');
} else {
expObj.deleteMarker = subExp.ExpiredObjectDeleteMarker[0];
}
}
return expObj;
}
/**
* Check that NoncurrentVersionExpiration action is valid
* @param {object} rule - a rule object from Rule array from this._parsedXml
* @return {object} - contains error if parsing failed, else contains
* parsed action time
*
* Format of nvExpObj:
* nvExpObj = {
* error: <error>,
* days: <value>,
* }
*/
_parseNoncurrentVersionExpiration(rule) {
const nvExpObj = {};
const subNVExp = rule.NoncurrentVersionExpiration[0];
if (!subNVExp.NoncurrentDays) {
nvExpObj.error = errors.MalformedXML.customizeDescription(
'NoncurrentVersionExpiration action does not include ' +
'NoncurrentDays');
return nvExpObj;
}
const daysInt = parseInt(subNVExp.NoncurrentDays[0], 10);
if (daysInt < 1) {
nvExpObj.error = errors.InvalidArgument.customizeDescription(
'NoncurrentDays is not a positive integer');
} else {
nvExpObj.days = daysInt;
}
return nvExpObj;
}
/**
* Validate the bucket metadata lifecycle configuration structure and
* value types
* @param {object} config - The lifecycle configuration to validate
* @return {undefined}
*/
static validateConfig(config) {
assert.strictEqual(typeof config, 'object');
const rules = config.rules;
assert.strictEqual(Array.isArray(rules), true);
rules.forEach(rule => {
const { ruleID, ruleStatus, filter, actions } = rule;
assert.strictEqual(typeof ruleID, 'string');
assert.strictEqual(typeof ruleStatus, 'string');
assert.strictEqual(typeof filter, 'object');
assert.strictEqual(Array.isArray(actions), true);
if (filter.rulePrefix) {
assert.strictEqual(typeof filter.rulePrefix, 'string');
}
if (filter.tags) {
assert.strictEqual(Array.isArray(filter.tags), true);
filter.tags.forEach(t => {
assert.strictEqual(typeof t.key, 'string');
assert.strictEqual(typeof t.val, 'string');
});
}
actions.forEach(a => {
assert.strictEqual(typeof a.actionName, 'string');
if (a.days) {
assert.strictEqual(typeof a.days, 'number');
}
if (a.date) {
assert.strictEqual(typeof a.date, 'string');
}
if (a.deleteMarker) {
assert.strictEqual(typeof a.deleteMarker, 'string');
}
});
});
}
/**
* Get XML representation of lifecycle configuration object
* @param {object} config - Lifecycle configuration object
* @return {string} - XML representation of config
*/
static getConfigXml(config) {
const rules = config.rules;
const rulesXML = rules.map(rule => {
const { ruleID, ruleStatus, filter, actions } = rule;
const ID = `<ID>${ruleID}</ID>`;
const Status = `<Status>${ruleStatus}</Status>`;
const { rulePrefix, tags } = filter;
const Prefix = rulePrefix ? `<Prefix>${rulePrefix}</Prefix>` : '';
let tagXML = '';
if (tags) {
const keysVals = tags.map(t => {
const { key, val } = t;
const Tag = `<Key>${key}</Key>` +
`<Value>${val}</Value>`;
return Tag;
}).join('');
tagXML = `<Tag>${keysVals}</Tag>`;
}
let Filter;
if (rulePrefix && !tags) {
Filter = Prefix;
} else if (tags && (rulePrefix || tags.length > 1)) {
Filter = `<Filter><And>${Prefix}${tagXML}</And></Filter>`;
} else {
// remaining condition is if only one or no tag
Filter = `<Filter>${tagXML}</Filter>`;
}
const Actions = actions.map(action => {
const { actionName, days, date, deleteMarker } = action;
let Action;
if (actionName === 'AbortIncompleteMultipartUpload') {
Action = `<${actionName}><DaysAfterInitiation>${days}` +
`</DaysAfterInitiation></${actionName}>`;
} else if (actionName === 'NoncurrentVersionExpiration') {
Action = `<${actionName}><NoncurrentDays>${days}` +
`</NoncurrentDays></${actionName}>`;
} else if (actionName === 'Expiration') {
const Days = days ? `<Days>${days}</Days>` : '';
const Date = date ? `<Date>${date}</Date>` : '';
const DelMarker = deleteMarker ?
`<ExpiredObjectDeleteMarker>${deleteMarker}` +
'</ExpiredObjectDeleteMarker>' : '';
Action = `<${actionName}>${Days}${Date}${DelMarker}` +
`</${actionName}>`;
}
return Action;
}).join('');
return `<Rule>${ID}${Status}${Filter}${Actions}</Rule>`;
}).join('');
return '<?xml version="1.0" encoding="UTF-8"?>' +
'<LifecycleConfiguration ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
`${rulesXML}` +
'</LifecycleConfiguration>';
}
}
module.exports = LifecycleConfiguration;
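A usage sketch (illustrative): LifecycleConfiguration expects an xml2js-style parse, in which every element value is an array, as the [0] indexing above assumes; the require path for the model is hypothetical.

const { parseString } = require('xml2js');
const LifecycleConfiguration =
    require('./lib/models/LifecycleConfiguration'); // path hypothetical

const xml = '<LifecycleConfiguration>' +
    '<Rule><ID>id1</ID><Filter><Prefix>logs/</Prefix></Filter>' +
    '<Status>Enabled</Status><Expiration><Days>365</Days></Expiration>' +
    '</Rule></LifecycleConfiguration>';

parseString(xml, (err, parsedXML) => {
    if (err) {
        throw err;
    }
    const config = new LifecycleConfiguration(parsedXML)
        .getLifecycleConfiguration();
    if (config.error) {
        throw config.error; // an Arsenal error, e.g. MalformedXML
    }
    // config.rules[0] => { ruleID: 'id1', ruleStatus: 'Enabled',
    //     filter: { rulePrefix: 'logs/' },
    //     actions: [{ actionName: 'Expiration', days: 365 }] }
});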

lib/models/ObjectMD.js
@@ -0,0 +1,945 @@
const constants = require('../constants');
const VersionIDUtils = require('../versioning/VersionID');
const ObjectMDLocation = require('./ObjectMDLocation');
/**
* Class to manage metadata object for regular s3 objects (instead of
* mpuPart metadata for example)
*/
class ObjectMD {
/**
* Create a new instance of ObjectMD. Parameter <tt>objMd</tt> is
* reserved for internal use, users should call
* {@link ObjectMD.createFromBlob()} to load from a stored
* metadata blob and check the returned value for errors.
*
* @constructor
* @param {ObjectMD|object} [objMd] - object metadata source,
* either an ObjectMD instance or a native JS object parsed from
* JSON
*/
constructor(objMd = undefined) {
this._initMd();
if (objMd !== undefined) {
if (objMd instanceof ObjectMD) {
this._updateFromObjectMD(objMd);
} else {
this._updateFromParsedJSON(objMd);
}
} else {
// set newly-created object md modified time to current time
this._data['last-modified'] = new Date().toJSON();
}
// set latest md model version now that we ensured
// backward-compat conversion
this._data['md-model-version'] = constants.mdModelVersion;
}
/**
* create an ObjectMD instance from stored metadata
*
* @param {String|Buffer} storedBlob - serialized metadata blob
* @return {object} a result object containing either a 'result'
* property whose value is a new ObjectMD instance on success, or
* an 'error' property on error
*/
static createFromBlob(storedBlob) {
try {
const objMd = JSON.parse(storedBlob);
return { result: new ObjectMD(objMd) };
} catch (err) {
return { error: err };
}
}
/**
* Returns metadata attributes for the current model
*
* @return {object} object with keys of existing attributes
* and value set to true
*/
static getAttributes() {
const sample = new ObjectMD();
const attributes = {};
Object.keys(sample.getValue()).forEach(key => {
attributes[key] = true;
});
return attributes;
}
getSerialized() {
return JSON.stringify(this.getValue());
}
_initMd() {
// initialize md with default values
this._data = {
'owner-display-name': '',
'owner-id': '',
'cache-control': '',
'content-disposition': '',
'content-encoding': '',
'expires': '',
'content-length': 0,
'content-type': '',
'content-md5': '',
// simple/no version. will expand once object versioning is
// introduced
'x-amz-version-id': 'null',
'x-amz-server-version-id': '',
// TODO: Handle this as a utility function for all object puts
// similar to normalizing request but after checkAuth so
// string to sign is not impacted. This is GH Issue#89.
'x-amz-storage-class': 'STANDARD',
'x-amz-server-side-encryption': '',
'x-amz-server-side-encryption-aws-kms-key-id': '',
'x-amz-server-side-encryption-customer-algorithm': '',
'x-amz-website-redirect-location': '',
'acl': {
Canned: 'private',
FULL_CONTROL: [],
WRITE_ACP: [],
READ: [],
READ_ACP: [],
},
'key': '',
'location': null,
// versionId, isNull, nullVersionId and isDeleteMarker
// should be undefined when not set explicitly
'isNull': undefined,
'nullVersionId': undefined,
'nullUploadId': undefined,
'isDeleteMarker': undefined,
'versionId': undefined,
'uploadId': undefined,
'tags': {},
'replicationInfo': {
status: '',
backends: [],
content: [],
destination: '',
storageClass: '',
role: '',
storageType: '',
dataStoreVersionId: '',
},
'dataStoreName': '',
};
}
_updateFromObjectMD(objMd) {
// We deep-copy only selected attributes here, those whose setters
// can change inner values, and leave the others as shallow
// copies. Since performance is a concern, we want to avoid
// the JSON.parse(JSON.stringify()) method.
Object.assign(this._data, objMd._data);
Object.assign(this._data.replicationInfo,
objMd._data.replicationInfo);
}
_updateFromParsedJSON(objMd) {
// objMd is a fresh JS object created for this purpose, so it's
// safe to just assign its top-level properties.
Object.assign(this._data, objMd);
this._convertToLatestModel();
}
_convertToLatestModel() {
// handle backward-compat stuff
if (typeof(this._data.location) === 'string') {
this.setLocation([{ key: this._data.location }]);
}
}
/**
* Set owner display name
*
* @param {string} displayName - Owner display name
* @return {ObjectMD} itself
*/
setOwnerDisplayName(displayName) {
this._data['owner-display-name'] = displayName;
return this;
}
/**
* Returns owner display name
*
* @return {string} Owner display name
*/
getOwnerDisplayName() {
return this._data['owner-display-name'];
}
/**
* Set owner id
*
* @param {string} id - Owner id
* @return {ObjectMD} itself
*/
setOwnerId(id) {
this._data['owner-id'] = id;
return this;
}
/**
* Returns owner id
*
* @return {string} owner id
*/
getOwnerId() {
return this._data['owner-id'];
}
/**
* Set cache control
*
* @param {string} cacheControl - Cache control
* @return {ObjectMD} itself
*/
setCacheControl(cacheControl) {
this._data['cache-control'] = cacheControl;
return this;
}
/**
* Returns cache control
*
* @return {string} Cache control
*/
getCacheControl() {
return this._data['cache-control'];
}
/**
* Set content disposition
*
* @param {string} contentDisposition - Content disposition
* @return {ObjectMD} itself
*/
setContentDisposition(contentDisposition) {
this._data['content-disposition'] = contentDisposition;
return this;
}
/**
* Returns content disposition
*
* @return {string} Content disposition
*/
getContentDisposition() {
return this._data['content-disposition'];
}
/**
* Set content encoding
*
* @param {string} contentEncoding - Content encoding
* @return {ObjectMD} itself
*/
setContentEncoding(contentEncoding) {
this._data['content-encoding'] = contentEncoding;
return this;
}
/**
* Returns content encoding
*
* @return {string} Content encoding
*/
getContentEncoding() {
return this._data['content-encoding'];
}
/**
* Set expiration date
*
* @param {string} expires - Expiration date
* @return {ObjectMD} itself
*/
setExpires(expires) {
this._data.expires = expires;
return this;
}
/**
* Returns expiration date
*
* @return {string} Expiration date
*/
getExpires() {
return this._data.expires;
}
/**
* Set content length
*
* @param {number} contentLength - Content length
* @return {ObjectMD} itself
*/
setContentLength(contentLength) {
this._data['content-length'] = contentLength;
return this;
}
/**
* Returns content length
*
* @return {number} Content length
*/
getContentLength() {
return this._data['content-length'];
}
/**
* Set content type
*
* @param {string} contentType - Content type
* @return {ObjectMD} itself
*/
setContentType(contentType) {
this._data['content-type'] = contentType;
return this;
}
/**
* Returns content type
*
* @return {string} Content type
*/
getContentType() {
return this._data['content-type'];
}
/**
* Set last modified date
*
* @param {string} lastModified - Last modified date
* @return {ObjectMD} itself
*/
setLastModified(lastModified) {
this._data['last-modified'] = lastModified;
return this;
}
/**
* Returns last modified date
*
* @return {string} Last modified date
*/
getLastModified() {
return this._data['last-modified'];
}
/**
* Set content md5 hash
*
* @param {string} contentMd5 - Content md5 hash
* @return {ObjectMD} itself
*/
setContentMd5(contentMd5) {
this._data['content-md5'] = contentMd5;
return this;
}
/**
* Returns content md5 hash
*
* @return {string} content md5 hash
*/
getContentMd5() {
return this._data['content-md5'];
}
/**
* Set version id
*
* @param {string} versionId - Version id
* @return {ObjectMD} itself
*/
setAmzVersionId(versionId) {
this._data['x-amz-version-id'] = versionId;
return this;
}
/**
* Returns version id
*
* @return {string} Version id
*/
getAmzVersionId() {
return this._data['x-amz-version-id'];
}
/**
* Set server version id
*
* @param {string} versionId - server version id
* @return {ObjectMD} itself
*/
setAmzServerVersionId(versionId) {
this._data['x-amz-server-version-id'] = versionId;
return this;
}
/**
* Returns server version id
*
* @return {string} server version id
*/
getAmzServerVersionId() {
return this._data['x-amz-server-version-id'];
}
/**
* Set storage class
*
* @param {string} storageClass - Storage class
* @return {ObjectMD} itself
*/
setAmzStorageClass(storageClass) {
this._data['x-amz-storage-class'] = storageClass;
return this;
}
/**
* Returns storage class
*
* @return {string} Storage class
*/
getAmzStorageClass() {
return this._data['x-amz-storage-class'];
}
/**
* Set server side encryption
*
* @param {string} serverSideEncryption - Server side encryption
* @return {ObjectMD} itself
*/
setAmzServerSideEncryption(serverSideEncryption) {
this._data['x-amz-server-side-encryption'] = serverSideEncryption;
return this;
}
/**
* Returns server side encryption
*
* @return {string} server side encryption
*/
getAmzServerSideEncryption() {
return this._data['x-amz-server-side-encryption'];
}
/**
* Set encryption key id
*
* @param {string} keyId - Encryption key id
* @return {ObjectMD} itself
*/
setAmzEncryptionKeyId(keyId) {
this._data['x-amz-server-side-encryption-aws-kms-key-id'] = keyId;
return this;
}
/**
* Returns encryption key id
*
* @return {string} Encryption key id
*/
getAmzEncryptionKeyId() {
return this._data['x-amz-server-side-encryption-aws-kms-key-id'];
}
/**
* Set encryption customer algorithm
*
* @param {string} algo - Encryption customer algorithm
* @return {ObjectMD} itself
*/
setAmzEncryptionCustomerAlgorithm(algo) {
this._data['x-amz-server-side-encryption-customer-algorithm'] = algo;
return this;
}
/**
* Returns Encryption customer algorithm
*
* @return {string} Encryption customer algorithm
*/
getAmzEncryptionCustomerAlgorithm() {
return this._data['x-amz-server-side-encryption-customer-algorithm'];
}
/**
* Set metadata redirectLocation value
*
* @param {string} redirectLocation - The website redirect location
* @return {ObjectMD} itself
*/
setRedirectLocation(redirectLocation) {
this._data['x-amz-website-redirect-location'] = redirectLocation;
return this;
}
/**
* Get metadata redirectLocation value
*
* @return {string} Website redirect location
*/
getRedirectLocation() {
return this._data['x-amz-website-redirect-location'];
}
/**
* Set access control list
*
* @param {object} acl - Access control list
* @param {string} acl.Canned -
* @param {string[]} acl.FULL_CONTROL -
* @param {string[]} acl.WRITE_ACP -
* @param {string[]} acl.READ -
* @param {string[]} acl.READ_ACP -
* @return {ObjectMD} itself
*/
setAcl(acl) {
this._data.acl = acl;
return this;
}
/**
* Returns access control list
*
* @return {object} Access control list
*/
getAcl() {
return this._data.acl;
}
/**
* Set object key
*
* @param {string} key - Object key
* @return {ObjectMD} itself
*/
setKey(key) {
this._data.key = key;
return this;
}
/**
* Returns object key
*
* @return {string} object key
*/
getKey() {
return this._data.key;
}
/**
* Set location
*
* @param {object[]} location - array of data locations (see
* constructor of {@link ObjectMDLocation} for a description of
* fields for each array object)
* @return {ObjectMD} itself
*/
setLocation(location) {
if (!Array.isArray(location) || location.length === 0) {
this._data.location = null;
} else {
this._data.location = location;
}
return this;
}
/**
* Returns location
*
* @return {object[]} location
*/
getLocation() {
const { location } = this._data;
return Array.isArray(location) ? location : [];
}
// Object metadata may contain multiple elements for a single part if
// the part was originally copied from another MPU. Here we reduce the
// locations array to a single element for each part.
getReducedLocations() {
const locations = this.getLocation();
const reducedLocations = [];
let partTotal = 0;
let start;
for (let i = 0; i < locations.length; i++) {
const currPart = new ObjectMDLocation(locations[i]);
if (i === 0) {
start = currPart.getPartStart();
}
const currPartNum = currPart.getPartNumber();
let nextPartNum = undefined;
if (i < locations.length - 1) {
const nextPart = new ObjectMDLocation(locations[i + 1]);
nextPartNum = nextPart.getPartNumber();
}
partTotal += currPart.getPartSize();
if (currPartNum !== nextPartNum) {
currPart.setPartSize(partTotal);
currPart.setPartStart(start);
reducedLocations.push(currPart.getValue());
start += partTotal;
partTotal = 0;
}
}
return reducedLocations;
}
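// Worked example (illustrative values): a part copied from another
// MPU may span several location entries sharing one part number:
//   [{ key: 'k1', start: 0, size: 100, dataStoreETag: '1:etagA' },
//    { key: 'k2', start: 100, size: 50, dataStoreETag: '1:etagA' },
//    { key: 'k3', start: 150, size: 200, dataStoreETag: '2:etagB' }]
// reduces to the last entry of each part, widened to the full part:
//   [{ key: 'k2', start: 0, size: 150, dataStoreETag: '1:etagA' },
//    { key: 'k3', start: 150, size: 200, dataStoreETag: '2:etagB' }]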
/**
* Set metadata isNull value
*
* @param {boolean} isNull - Whether new version is null or not
* @return {ObjectMD} itself
*/
setIsNull(isNull) {
this._data.isNull = isNull;
return this;
}
/**
* Get metadata isNull value
*
* @return {boolean} Whether new version is null or not
*/
getIsNull() {
return this._data.isNull || false;
}
/**
* Set metadata nullVersionId value
*
* @param {string} nullVersionId - The version id of the null version
* @return {ObjectMD} itself
*/
setNullVersionId(nullVersionId) {
this._data.nullVersionId = nullVersionId;
return this;
}
/**
* Get metadata nullVersionId value
*
* @return {string|undefined} The version id of the null version
*/
getNullVersionId() {
return this._data.nullVersionId;
}
/**
* Set metadata nullUploadId value
*
* @param {string} nullUploadId - The upload ID used to complete
* the MPU of the null version
* @return {ObjectMD} itself
*/
setNullUploadId(nullUploadId) {
this._data.nullUploadId = nullUploadId;
return this;
}
/**
* Get metadata nullUploadId value
*
* @return {string|undefined} The object nullUploadId
*/
getNullUploadId() {
return this._data.nullUploadId;
}
/**
* Set metadata isDeleteMarker value
*
* @param {boolean} isDeleteMarker - Whether object is a delete marker
* @return {ObjectMD} itself
*/
setIsDeleteMarker(isDeleteMarker) {
this._data.isDeleteMarker = isDeleteMarker;
return this;
}
/**
* Get metadata isDeleteMarker value
*
* @return {boolean} Whether object is a delete marker
*/
getIsDeleteMarker() {
return this._data.isDeleteMarker || false;
}
/**
* Set metadata versionId value
*
* @param {string} versionId - The object versionId
* @return {ObjectMD} itself
*/
setVersionId(versionId) {
this._data.versionId = versionId;
return this;
}
/**
* Get metadata versionId value
*
* @return {string|undefined} The object versionId
*/
getVersionId() {
return this._data.versionId;
}
/**
* Get metadata versionId value in encoded form (the one visible
* to the S3 API user)
*
* @return {string|undefined} The encoded object versionId
*/
getEncodedVersionId() {
return VersionIDUtils.encode(this.getVersionId());
}
/**
* Set metadata uploadId value
*
* @param {string} uploadId - The upload ID used to complete the MPU object
* @return {ObjectMD} itself
*/
setUploadId(uploadId) {
this._data.uploadId = uploadId;
return this;
}
/**
* Get metadata uploadId value
*
* @return {string|undefined} The object uploadId
*/
getUploadId() {
return this._data.uploadId;
}
/**
* Set tags
*
* @param {object} tags - tags object
* @return {ObjectMD} itself
*/
setTags(tags) {
this._data.tags = tags;
return this;
}
/**
* Returns tags
*
* @return {object} tags object
*/
getTags() {
return this._data.tags;
}
/**
* Set replication information
*
* @param {object} replicationInfo - replication information object
* @return {ObjectMD} itself
*/
setReplicationInfo(replicationInfo) {
const { status, backends, content, destination, storageClass, role,
storageType, dataStoreVersionId } = replicationInfo;
this._data.replicationInfo = {
status,
backends,
content,
destination,
storageClass: storageClass || '',
role,
storageType: storageType || '',
dataStoreVersionId: dataStoreVersionId || '',
};
return this;
}
/**
* Get replication information
*
* @return {object} replication object
*/
getReplicationInfo() {
return this._data.replicationInfo;
}
setReplicationStatus(status) {
this._data.replicationInfo.status = status;
return this;
}
setReplicationSiteStatus(site, status) {
const backend = this._data.replicationInfo.backends
.find(o => o.site === site);
if (backend) {
backend.status = status;
}
return this;
}
getReplicationSiteStatus(site) {
const backend = this._data.replicationInfo.backends
.find(o => o.site === site);
if (backend) {
return backend.status;
}
return undefined;
}
setReplicationDataStoreVersionId(versionId) {
this._data.replicationInfo.dataStoreVersionId = versionId;
return this;
}
setReplicationSiteDataStoreVersionId(site, versionId) {
const backend = this._data.replicationInfo.backends
.find(o => o.site === site);
if (backend) {
backend.dataStoreVersionId = versionId;
}
return this;
}
getReplicationSiteDataStoreVersionId(site) {
const backend = this._data.replicationInfo.backends
.find(o => o.site === site);
if (backend) {
return backend.dataStoreVersionId;
}
return undefined;
}
setReplicationBackends(backends) {
this._data.replicationInfo.backends = backends;
return this;
}
setReplicationStorageClass(storageClass) {
this._data.replicationInfo.storageClass = storageClass;
return this;
}
getReplicationDataStoreVersionId() {
return this._data.replicationInfo.dataStoreVersionId;
}
getReplicationStatus() {
return this._data.replicationInfo.status;
}
getReplicationBackends() {
return this._data.replicationInfo.backends;
}
getReplicationContent() {
return this._data.replicationInfo.content;
}
getReplicationRoles() {
return this._data.replicationInfo.role;
}
getReplicationStorageType() {
return this._data.replicationInfo.storageType;
}
getReplicationStorageClass() {
return this._data.replicationInfo.storageClass;
}
getReplicationTargetBucket() {
const destBucketArn = this._data.replicationInfo.destination;
return destBucketArn.split(':').slice(-1)[0];
}
/**
* Set dataStoreName
*
* @param {string} dataStoreName - name of data backend obj stored in
* @return {ObjectMD} itself
*/
setDataStoreName(dataStoreName) {
this._data.dataStoreName = dataStoreName;
return this;
}
/**
* Get dataStoreName
*
* @return {string} name of data backend obj stored in
*/
getDataStoreName() {
return this._data.dataStoreName;
}
/**
* Get dataStoreVersionId
*
* @return {string} external backend version id for data
*/
getDataStoreVersionId() {
const location = this.getLocation();
if (!location[0]) {
return undefined;
}
return location[0].dataStoreVersionId;
}
/**
* Set custom meta headers
*
* @param {object} metaHeaders - Meta headers
* @return {ObjectMD} itself
*/
setUserMetadata(metaHeaders) {
Object.keys(metaHeaders).forEach(key => {
if (key.startsWith('x-amz-meta-')) {
this._data[key] = metaHeaders[key];
}
});
// If this is a multipart object and the ACL is already parsed, update it
if (metaHeaders.acl) {
this.setAcl(metaHeaders.acl);
}
return this;
}
/**
* overrideMetadataValues (used for complete MPU and object copy)
*
* @param {object} headers - Headers
* @return {ObjectMD} itself
*/
overrideMetadataValues(headers) {
Object.assign(this._data, headers);
return this;
}
/**
* Returns metadata object
*
* @return {object} metadata object
*/
getValue() {
return this._data;
}
}
module.exports = ObjectMD;
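A brief usage sketch (illustrative): building metadata with the chainable setters and reloading it with createFromBlob(); the require path is hypothetical.

const ObjectMD = require('./lib/models/ObjectMD'); // path hypothetical

const md = new ObjectMD()
    .setKey('photos/cat.jpg')
    .setContentLength(1024)
    .setContentType('image/jpeg')
    .setTags({ project: 'demo' });
const blob = md.getSerialized();

// createFromBlob() returns { result } on success, { error } on bad JSON
const { result, error } = ObjectMD.createFromBlob(blob);
if (error) {
    throw error;
}
// result.getKey() === 'photos/cat.jpg'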

lib/models/ObjectMDLocation.js
@@ -0,0 +1,77 @@
/**
* Helper class to ease access to a single data location in metadata
* 'location' array
*/
class ObjectMDLocation {
/**
* @constructor
* @param {object} locationObj - single data location info
* @param {string} locationObj.key - data backend key
* @param {number} locationObj.start - index of first data byte of
* this part in the full object
* @param {number} locationObj.size - byte length of data part
* @param {string} locationObj.dataStoreName - type of data store
* @param {string} locationObj.dataStoreETag - internal ETag of
* data part
*/
constructor(locationObj) {
this._data = {
key: locationObj.key,
start: locationObj.start,
size: locationObj.size,
dataStoreName: locationObj.dataStoreName,
dataStoreETag: locationObj.dataStoreETag,
};
}
getKey() {
return this._data.key;
}
getDataStoreName() {
return this._data.dataStoreName;
}
setDataLocation(location) {
this._data.key = location.key;
this._data.dataStoreName = location.dataStoreName;
return this;
}
getDataStoreETag() {
return this._data.dataStoreETag;
}
getPartNumber() {
return Number.parseInt(this._data.dataStoreETag.split(':')[0], 10);
}
getPartETag() {
return this._data.dataStoreETag.split(':')[1];
}
getPartStart() {
return this._data.start;
}
setPartStart(start) {
this._data.start = start;
return this;
}
getPartSize() {
return this._data.size;
}
setPartSize(size) {
this._data.size = size;
return this;
}
getValue() {
return this._data;
}
}
module.exports = ObjectMDLocation;
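A short sketch (illustrative): dataStoreETag encodes '<partNumber>:<etag>', which getPartNumber() and getPartETag() split apart; the require path is hypothetical.

const ObjectMDLocation = require('./lib/models/ObjectMDLocation');

const part = new ObjectMDLocation({
    key: 'backendKey1',
    start: 0,
    size: 512,
    dataStoreName: 'file',
    dataStoreETag: '2:d41d8cd98f00b204e9800998ecf8427e',
});
// part.getPartNumber() === 2
// part.getPartETag() === 'd41d8cd98f00b204e9800998ecf8427e'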

lib/models/ReplicationConfiguration.js
@@ -0,0 +1,451 @@
const assert = require('assert');
const UUID = require('uuid');
const escapeForXml = require('../s3middleware/escapeForXml');
const errors = require('../errors');
const { isValidBucketName } = require('../s3routes/routesUtils');
const MAX_RULES = 1000;
const RULE_ID_LIMIT = 255;
const validStorageClasses = [
'STANDARD',
'STANDARD_IA',
'REDUCED_REDUNDANCY',
];
/**
Example XML request:
<ReplicationConfiguration>
<Role>IAM-role-ARN</Role>
<Rule>
<ID>Rule-1</ID>
<Status>rule-status</Status>
<Prefix>key-prefix</Prefix>
<Destination>
<Bucket>arn:aws:s3:::bucket-name</Bucket>
<StorageClass>
optional-destination-storage-class-override
</StorageClass>
</Destination>
</Rule>
<Rule>
<ID>Rule-2</ID>
...
</Rule>
...
</ReplicationConfiguration>
*/
class ReplicationConfiguration {
/**
* Create a ReplicationConfiguration instance
* @param {object} xml - The parsed XML
* @param {object} log - Werelogs logger
* @param {object} config - S3 server configuration
* @return {object} - ReplicationConfiguration instance
*/
constructor(xml, log, config) {
this._parsedXML = xml;
this._log = log;
this._config = config;
this._configPrefixes = [];
this._configIDs = [];
// The bucket metadata model of replication config. Note there is a
// single `destination` property because we can replicate to only one
// other bucket. Thus each rule is simplified to these properties.
this._role = null;
this._destination = null;
this._rules = null;
this._prevStorageClass = null;
this._hasScalityDestination = null;
}
/**
* Get the role of the bucket replication configuration
* @return {string|null} - The role if defined, otherwise `null`
*/
getRole() {
return this._role;
}
/**
* The bucket to replicate data to
* @return {string|null} - The bucket if defined, otherwise `null`
*/
getDestination() {
return this._destination;
}
/**
* The rules for replication configuration
* @return {object[]|null} - The rules array if defined, otherwise `null`
*/
getRules() {
return this._rules;
}
/**
* Get the replication configuration
* @return {object} - The replication configuration
*/
getReplicationConfiguration() {
return {
role: this.getRole(),
destination: this.getDestination(),
rules: this.getRules(),
};
}
/**
* Build the rule object from the parsed XML of the given rule
* @param {object} rule - The rule object from this._parsedXML
* @return {object} - The rule object to push into the `Rules` array
*/
_buildRuleObject(rule) {
const obj = {
prefix: rule.Prefix[0],
enabled: rule.Status[0] === 'Enabled',
};
// ID is an optional property, but create one if not provided or is ''.
// We generate a 48-character alphanumeric, unique ID for the rule.
obj.id = rule.ID && rule.ID[0] !== '' ? rule.ID[0] :
Buffer.from(UUID.v4()).toString('base64');
// StorageClass is an optional property.
if (rule.Destination[0].StorageClass) {
obj.storageClass = rule.Destination[0].StorageClass[0];
}
return obj;
}
/**
* Check if the Role field of the replication configuration is valid
* @param {string} ARN - The Role field value provided in the configuration
* @return {boolean} `true` if a valid role ARN, `false` otherwise
*/
_isValidRoleARN(ARN) {
// AWS accepts a range of values for the Role field. Though this does
// not encompass all constraints imposed by AWS, we have opted to
// enforce the following.
const arr = ARN.split(':');
const isValidRoleARN =
arr[0] === 'arn' &&
arr[1] === 'aws' &&
arr[2] === 'iam' &&
arr[3] === '' &&
(arr[4] === '*' || arr[4].length > 1) &&
arr[5].startsWith('role');
return isValidRoleARN;
}
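// For example (illustrative), 'arn:aws:iam::123456789012:role/example'
// splits into ['arn', 'aws', 'iam', '', '123456789012', 'role/example']
// and passes every check above.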
/**
* Check that the `Role` property of the configuration is valid
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseRole() {
const parsedRole = this._parsedXML.ReplicationConfiguration.Role;
if (!parsedRole) {
return errors.MalformedXML;
}
const role = parsedRole[0];
const rolesArr = role.split(',');
if (this._hasScalityDestination && rolesArr.length !== 2) {
return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' +
'Role must be a comma-separated list of two IAM roles');
}
if (!this._hasScalityDestination && rolesArr.length > 1) {
return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' +
'Role may not contain a comma separator');
}
const invalidRole = rolesArr.find(r => !this._isValidRoleARN(r));
if (invalidRole !== undefined) {
return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' +
`'${invalidRole}'`);
}
this._role = role;
return undefined;
}
/**
* Check that the `Rules` property array is valid
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseRules() {
// Note that the XML uses 'Rule' while the config object uses 'Rules'.
const { Rule } = this._parsedXML.ReplicationConfiguration;
if (!Rule || Rule.length < 1) {
return errors.MalformedXML;
}
if (Rule.length > MAX_RULES) {
return errors.InvalidRequest.customizeDescription(
'Number of defined replication rules cannot exceed 1000');
}
const err = this._parseEachRule(Rule);
if (err) {
return err;
}
return undefined;
}
/**
* Check that each rule in the `Rules` property array is valid
* @param {array} rules - The rule array from this._parsedXML
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseEachRule(rules) {
const rulesArr = [];
for (let i = 0; i < rules.length; i++) {
const err =
this._parseStatus(rules[i]) || this._parsePrefix(rules[i]) ||
this._parseID(rules[i]) || this._parseDestination(rules[i]);
if (err) {
return err;
}
rulesArr.push(this._buildRuleObject(rules[i]));
}
this._rules = rulesArr;
return undefined;
}
/**
* Check that the `Status` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseStatus(rule) {
const status = rule.Status && rule.Status[0];
if (!status || !['Enabled', 'Disabled'].includes(status)) {
return errors.MalformedXML;
}
return undefined;
}
/**
* Check that the `Prefix` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parsePrefix(rule) {
const prefix = rule.Prefix && rule.Prefix[0];
// An empty string prefix should be allowed.
if (!prefix && prefix !== '') {
return errors.MalformedXML;
}
if (prefix.length > 1024) {
return errors.InvalidArgument.customizeDescription('Rule prefix ' +
'cannot be longer than maximum allowed key length of 1024');
}
// Each Prefix in a list of rules must not overlap. For example, two
// prefixes 'TaxDocs' and 'TaxDocs/2015' are overlapping. An empty
// string prefix is expected to overlap with any other prefix.
for (let i = 0; i < this._configPrefixes.length; i++) {
const used = this._configPrefixes[i];
if (prefix.startsWith(used) || used.startsWith(prefix)) {
return errors.InvalidRequest.customizeDescription('Found ' +
`overlapping prefixes '${used}' and '${prefix}'`);
}
}
this._configPrefixes.push(prefix);
return undefined;
}
/**
* Check that the `ID` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseID(rule) {
const id = rule.ID && rule.ID[0];
if (id && id.length > RULE_ID_LIMIT) {
return errors.InvalidArgument
.customizeDescription('Rule Id cannot be greater than 255');
}
// Each ID in a list of rules must be unique.
if (this._configIDs.includes(id)) {
return errors.InvalidRequest.customizeDescription(
'Rule Id must be unique');
}
if (id !== undefined) {
this._configIDs.push(id);
}
return undefined;
}
/**
* Check that the `StorageClass` property is valid
* @param {object} destination - The destination object from this._parsedXML
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseStorageClass(destination) {
const { replicationEndpoints } = this._config;
// A default endpoint can be undefined only when a single replication
// endpoint is configured, so fall back to the first one.
const defaultEndpoint =
replicationEndpoints.find(endpoint => endpoint.default) ||
replicationEndpoints[0];
// StorageClass is optional.
if (destination.StorageClass === undefined) {
this._hasScalityDestination = defaultEndpoint.type === undefined;
return undefined;
}
const storageClasses = destination.StorageClass[0].split(',');
const isValidStorageClass = storageClasses.every(storageClass => {
if (validStorageClasses.includes(storageClass)) {
this._hasScalityDestination =
defaultEndpoint.type === undefined;
return true;
}
const endpoint = replicationEndpoints.find(endpoint =>
endpoint.site === storageClass);
if (endpoint) {
// If this._hasScalityDestination was not set to true in any
// previous iteration or by a prior rule's storage class, then
// check if the current endpoint is a Scality destination.
if (!this._hasScalityDestination) {
// If any endpoint does not have a type, then we know it is
// a Scality destination.
this._hasScalityDestination = endpoint.type === undefined;
}
return true;
}
return false;
});
if (!isValidStorageClass) {
return errors.MalformedXML;
}
return undefined;
}
/**
* Check that the `Bucket` property is valid
* @param {object} destination - The destination object from this._parsedXML
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseBucket(destination) {
const parsedBucketARN = destination.Bucket;
// If there is no Scality destination, we get the destination bucket
// from the location configuration.
if (!this._hasScalityDestination && !parsedBucketARN) {
return undefined;
}
if (!parsedBucketARN) {
return errors.MalformedXML;
}
const bucketARN = parsedBucketARN[0];
if (!bucketARN) {
return errors.InvalidArgument.customizeDescription(
'Destination bucket cannot be null or empty');
}
const arr = bucketARN.split(':');
const isValidARN =
arr[0] === 'arn' &&
arr[1] === 'aws' &&
arr[2] === 's3' &&
arr[3] === '' &&
arr[4] === '';
if (!isValidARN) {
return errors.InvalidArgument
.customizeDescription('Invalid bucket ARN');
}
if (!isValidBucketName(arr[5], [])) {
return errors.InvalidArgument
.customizeDescription('The specified bucket is not valid');
}
// We can replicate objects only to one destination bucket.
if (this._destination && this._destination !== bucketARN) {
return errors.InvalidRequest.customizeDescription(
'The destination bucket must be same for all rules');
}
this._destination = bucketARN;
return undefined;
}
/**
* Check that the `destination` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
_parseDestination(rule) {
const dest = rule.Destination && rule.Destination[0];
if (!dest) {
return errors.MalformedXML;
}
const err = this._parseStorageClass(dest) || this._parseBucket(dest);
if (err) {
return err;
}
return undefined;
}
/**
* Check that the request configuration is valid
* @return {object|undefined} - an error if invalid, otherwise `undefined`
*/
parseConfiguration() {
const err = this._parseRules();
if (err) {
return err;
}
return this._parseRole();
}
/**
* Get the XML representation of the configuration object
* @param {object} config - The bucket replication configuration
* @return {string} - The XML representation of the configuration
*/
static getConfigXML(config) {
const { role, destination, rules } = config;
const Role = `<Role>${escapeForXml(role)}</Role>`;
const Bucket = `<Bucket>${escapeForXml(destination)}</Bucket>`;
const rulesXML = rules.map(rule => {
const { prefix, enabled, storageClass, id } = rule;
const Prefix = prefix === '' ? '<Prefix/>' :
`<Prefix>${escapeForXml(prefix)}</Prefix>`;
const Status =
`<Status>${enabled ? 'Enabled' : 'Disabled'}</Status>`;
const StorageClass = storageClass ?
`<StorageClass>${storageClass}</StorageClass>` : '';
const Destination =
`<Destination>${Bucket}${StorageClass}</Destination>`;
// If the ID property was omitted in the configuration object, we
// create an ID for the rule. Hence it is always defined.
const ID = `<ID>${escapeForXml(id)}</ID>`;
return `<Rule>${ID}${Prefix}${Status}${Destination}</Rule>`;
}).join('');
return '<?xml version="1.0" encoding="UTF-8"?>' +
'<ReplicationConfiguration ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
`${rulesXML}${Role}` +
'</ReplicationConfiguration>';
}
/**
* Validate the bucket metadata replication configuration structure and
* value types
* @param {object} config - The replication configuration to validate
* @return {undefined}
*/
static validateConfig(config) {
assert.strictEqual(typeof config, 'object');
const { role, rules, destination } = config;
assert.strictEqual(typeof role, 'string');
assert.strictEqual(typeof destination, 'string');
assert.strictEqual(Array.isArray(rules), true);
rules.forEach(rule => {
assert.strictEqual(typeof rule, 'object');
const { prefix, enabled, id, storageClass } = rule;
assert.strictEqual(typeof prefix, 'string');
assert.strictEqual(typeof enabled, 'boolean');
assert(id === undefined || typeof id === 'string');
if (storageClass !== undefined) {
assert.strictEqual(typeof storageClass, 'string');
}
});
}
}
module.exports = ReplicationConfiguration;
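As a quick illustration of the two static helpers above, a minimal usage sketch; the require path and the ARN/rule values below are placeholders, not taken from the repository:

// Hypothetical usage sketch; require path and ARNs are illustrative.
const ReplicationConfiguration = require('./ReplicationConfiguration');

const config = {
    role: 'arn:aws:iam::123456789012:role/replication-role',
    destination: 'arn:aws:s3:::destination-bucket',
    rules: [{
        prefix: 'logs/',
        enabled: true,
        id: 'rule-1',
        storageClass: 'STANDARD',
    }],
};

// Throws an AssertionError if the structure or value types are wrong.
ReplicationConfiguration.validateConfig(config);

// Returns the <ReplicationConfiguration> XML document for the config.
const xml = ReplicationConfiguration.getConfigXML(config);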


@@ -0,0 +1,195 @@
class RoutingRule {
/**
* Represents a routing rule in a website configuration.
* @constructor
* @param {object} params - object containing redirect and condition objects
* @param {object} params.redirect - specifies how to redirect requests
* @param {string} [params.redirect.protocol] - protocol to use for redirect
* @param {string} [params.redirect.hostName] - hostname to use for redirect
* @param {string} [params.redirect.replaceKeyPrefixWith] - string to replace
* keyPrefixEquals specified in condition
* @param {string} [params.redirect.replaceKeyWith] - string to replace key
* @param {string} [params.redirect.httpRedirectCode] - http redirect code
* @param {object} [params.condition] - specifies conditions for a redirect
* @param {string} [params.condition.keyPrefixEquals] - key prefix that
* triggers a redirect
* @param {string} [params.condition.httpErrorCodeReturnedEquals] - http code
* that triggers a redirect
*/
constructor(params) {
if (params) {
this._redirect = params.redirect;
this._condition = params.condition;
}
}
/**
* Return copy of rule as plain object
* @return {object} rule;
*/
getRuleObject() {
const rule = {
redirect: this._redirect,
condition: this._condition,
};
return rule;
}
/**
* Return the condition object
* @return {object} condition;
*/
getCondition() {
return this._condition;
}
/**
* Return the redirect object
* @return {object} redirect;
*/
getRedirect() {
return this._redirect;
}
}
class WebsiteConfiguration {
/**
* Object that represents website configuration
* @constructor
* @param {object} params - object containing params to construct Object
* @param {string} params.indexDocument - key for index document object
* required when redirectAllRequestsTo is undefined
* @param {string} [params.errorDocument] - key for error document object
* @param {object} params.redirectAllRequestsTo - object containing info
* about how to redirect all requests
* @param {string} params.redirectAllRequestsTo.hostName - hostName to use
* when redirecting all requests
* @param {string} [params.redirectAllRequestsTo.protocol] - protocol to use
* when redirecting all requests ('http' or 'https')
* @param {(RoutingRule[]|object[])} params.routingRules - array of Routing
Rule instances or plain routing rule objects to cast as RoutingRules
*/
constructor(params) {
if (params) {
this._indexDocument = params.indexDocument;
this._errorDocument = params.errorDocument;
this._redirectAllRequestsTo = params.redirectAllRequestsTo;
this.setRoutingRules(params.routingRules);
}
}
/**
* Return plain object with configuration info
* @return {object} - Object copy of class instance
*/
getConfig() {
const websiteConfig = {
indexDocument: this._indexDocument,
errorDocument: this._errorDocument,
redirectAllRequestsTo: this._redirectAllRequestsTo,
};
if (this._routingRules) {
websiteConfig.routingRules =
this._routingRules.map(rule => rule.getRuleObject());
}
return websiteConfig;
}
/**
* Set the redirectAllRequestsTo
* @param {object} obj - object to set as redirectAllRequestsTo
* @param {string} obj.hostName - hostname for redirecting all requests
* @param {string} [obj.protocol] - protocol for redirecting all requests
* @return {undefined};
*/
setRedirectAllRequestsTo(obj) {
this._redirectAllRequestsTo = obj;
}
/**
* Return the redirectAllRequestsTo object
* @return {object} redirectAllRequestsTo;
*/
getRedirectAllRequestsTo() {
return this._redirectAllRequestsTo;
}
/**
* Set the index document object name
* @param {string} suffix - index document object key
* @return {undefined};
*/
setIndexDocument(suffix) {
this._indexDocument = suffix;
}
/**
* Get the index document object name
* @return {string} indexDocument
*/
getIndexDocument() {
return this._indexDocument;
}
/**
* Set the error document object name
* @param {string} key - error document object key
* @return {undefined};
*/
setErrorDocument(key) {
this._errorDocument = key;
}
/**
* Get the error document object name
* @return {string} errorDocument
*/
getErrorDocument() {
return this._errorDocument;
}
/**
* Set the whole RoutingRules array
* @param {array} array - array to set as instance's RoutingRules
* @return {undefined};
*/
setRoutingRules(array) {
if (array) {
this._routingRules = array.map(rule => {
if (rule instanceof RoutingRule) {
return rule;
}
return new RoutingRule(rule);
});
}
}
/**
* Add a RoutingRule instance to routingRules array
* @param {object} obj - rule to add to array
* @return {undefined};
*/
addRoutingRule(obj) {
if (!this._routingRules) {
this._routingRules = [];
}
if (obj && obj instanceof RoutingRule) {
this._routingRules.push(obj);
} else if (obj) {
this._routingRules.push(new RoutingRule(obj));
}
}
/**
* Get routing rules
* @return {RoutingRule[]} - array of RoutingRule instances
*/
getRoutingRules() {
return this._routingRules;
}
}
module.exports = {
RoutingRule,
WebsiteConfiguration,
};
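A minimal usage sketch for the two classes above; keys, hostnames and the require path are illustrative:

// Hypothetical usage sketch; values are illustrative placeholders.
const { WebsiteConfiguration, RoutingRule } =
    require('./WebsiteConfiguration');

const websiteConfig = new WebsiteConfiguration({
    indexDocument: 'index.html',
    errorDocument: 'error.html',
    // plain objects are cast to RoutingRule instances by the constructor
    routingRules: [{
        redirect: { replaceKeyPrefixWith: 'documents/' },
        condition: { keyPrefixEquals: 'docs/' },
    }],
});

// Rules can also be appended afterwards, as instances or plain objects.
websiteConfig.addRoutingRule(new RoutingRule({
    redirect: { hostName: 'example.com', httpRedirectCode: '301' },
    condition: { httpErrorCodeReturnedEquals: '404' },
}));

const plainConfig = websiteConfig.getConfig();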

lib/network/RoundRobin.js (new file, 171 lines)

@@ -0,0 +1,171 @@
const DEFAULT_STICKY_COUNT = 100;
/**
* Shuffle an array in-place
*
* @param {Array} array - The array to shuffle
* @return {undefined}
*/
function shuffle(array) {
for (let i = array.length - 1; i > 0; i--) {
const randIndex = Math.floor(Math.random() * (i + 1));
/* eslint-disable no-param-reassign */
const randIndexVal = array[randIndex];
array[randIndex] = array[i];
array[i] = randIndexVal;
/* eslint-enable no-param-reassign */
}
}
class RoundRobin {
/**
* @constructor
* @param {object[]|string[]} hostsList - list of hosts to query
* in round-robin fashion.
* @param {string} hostsList[].host - host name or IP address
* @param {number} [hostsList[].port] - port number to contact
* @param {object} [options] - options object
* @param {number} [options.stickyCount=100] - number of requests
* to send to the same host before switching to the next one
* @param {number} [options.defaultPort] - default port to use for
* hosts that do not specify one
* @param {Logger} [options.logger] - logger object
*/
constructor(hostsList, options) {
if (hostsList.length === 0) {
throw new Error(
'at least one host must be provided for round robin');
}
if (options && options.logger) {
this.logger = options.logger;
}
if (options && options.stickyCount) {
this.stickyCount = options.stickyCount;
} else {
this.stickyCount = DEFAULT_STICKY_COUNT;
}
if (options && options.defaultPort) {
this.defaultPort = Number.parseInt(options.defaultPort, 10);
}
this.hostsList = hostsList.map(item => this._validateHostObj(item));
// TODO: add blacklisting capability
shuffle(this.hostsList);
this.hostIndex = 0;
this.pickCount = 0;
}
_validateHostObj(hostItem) {
const hostItemObj = {};
if (typeof hostItem === 'string') {
const hostParts = hostItem.split(':');
if (hostParts.length > 2) {
throw new Error(`${hostItem}: ` +
'bad round robin item: expect "host[:port]"');
}
hostItemObj.host = hostParts[0];
hostItemObj.port = hostParts[1];
} else {
if (typeof hostItem !== 'object') {
throw new Error(`${hostItem}: bad round robin item: ` +
'must be a string or object');
}
hostItemObj.host = hostItem.host;
hostItemObj.port = hostItem.port;
}
if (typeof hostItemObj.host !== 'string') {
throw new Error(`${hostItemObj.host}: ` +
'bad round robin host name: not a string');
}
if (hostItemObj.port !== undefined) {
if (/^[0-9]+$/.exec(hostItemObj.port) === null) {
throw new Error(`'${hostItemObj.port}': ` +
'bad round robin host port: not a number');
}
const parsedPort = Number.parseInt(hostItemObj.port, 10);
if (parsedPort <= 0 || parsedPort > 65535) {
throw new Error(`'${hostItemObj.port}': bad round robin ` +
'host port: not a valid port number');
}
return {
host: hostItemObj.host,
port: parsedPort,
};
}
return { host: hostItemObj.host,
port: this.defaultPort };
}
/**
* return the next host within round-robin cycle
*
* The same host is returned up to {@link this.stickyCount} times,
* then the next host in the round-robin list is returned.
*
* Once all hosts have been returned once, the list is shuffled
* and a new round-robin cycle starts.
*
* @return {object} a host object with { host, port } attributes
*/
pickHost() {
if (this.logger) {
this.logger.debug('pick host',
{ host: this.getCurrentHost() });
}
const curHost = this.getCurrentHost();
++this.pickCount;
if (this.pickCount === this.stickyCount) {
this._roundRobinCurrentHost({ shuffle: true });
this.pickCount = 0;
}
return curHost;
}
/**
* return the next host within round-robin cycle
*
* stickyCount is ignored, the next host in the round-robin list
* is returned.
*
* Once all hosts have been returned once, the list is shuffled
* and a new round-robin cycle starts.
*
* @return {object} a host object with { host, port } attributes
*/
pickNextHost() {
// don't shuffle in this case because we want to force picking
// a different host, shuffling may return the same host again
this._roundRobinCurrentHost({ shuffle: false });
this.pickCount = 0;
return this.getCurrentHost();
}
/**
* return the current host in round-robin, without changing the
* round-robin state
*
* @return {object} a host object with { host, port } attributes
*/
getCurrentHost() {
return this.hostsList[this.hostIndex];
}
_roundRobinCurrentHost(params) {
this.hostIndex += 1;
if (this.hostIndex === this.hostsList.length) {
this.hostIndex = 0;
// re-shuffle the array when all entries have been
// returned once, if shuffle param is true
if (params.shuffle) {
shuffle(this.hostsList);
}
}
if (this.logger) {
this.logger.debug('round robin host',
{ newHost: this.getCurrentHost() });
}
}
}
module.exports = RoundRobin;
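A short sketch of how the class above is meant to be driven; the host addresses are illustrative:

// Hypothetical usage sketch; host addresses are illustrative.
const RoundRobin = require('./RoundRobin');

const hosts = new RoundRobin(
    ['10.0.0.1:8000', { host: '10.0.0.2', port: 8001 }, '10.0.0.3'],
    { stickyCount: 10, defaultPort: 8000 });

// pickHost() keeps returning the same host for up to stickyCount
// calls before rotating to the next one.
const { host, port } = hosts.pickHost();

// pickNextHost() forces an immediate rotation, e.g. to retry a
// failed request against a different host.
const failover = hosts.pickNextHost();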


@@ -6,6 +6,8 @@ const assert = require('assert');
const dhparam = require('../../https/dh2048').dhparam;
const ciphers = require('../../https/ciphers').ciphers;
const errors = require('../../errors');
const { checkSupportIPv6 } = require('./utils');
class Server {
@@ -20,6 +22,13 @@ class Server {
this._noDelay = true;
this._cbOnListening = () => {};
this._cbOnRequest = (req, res) => this._noHandlerCb(req, res);
this._cbOnCheckContinue = (req, res) => {
res.writeContinue();
this._cbOnRequest(req, res);
};
// AWS S3 does not respond with 417 Expectation Failed or any error
// when Expect header is received and the value is not 100-continue
this._cbOnCheckExpectation = (req, res) => this._cbOnRequest(req, res);
this._cbOnError = () => false;
this._cbOnStop = () => {};
this._https = {
@@ -32,8 +41,10 @@ class Server {
rejectUnauthorized: true,
};
this._port = port;
this._address = checkSupportIPv6() ? '::' : '0.0.0.0';
this._server = null;
this._logger = logger;
this._keepAliveTimeout = null; // null: use default node.js value
}
/**
@@ -48,6 +59,19 @@ class Server {
return this;
}
/**
* Set the keep-alive timeout after which inactive client
* connections are automatically closed (default should be
* 5 seconds in node.js)
*
* @param {number} keepAliveTimeout - keep-alive timeout in milliseconds
* @return {Server} - returns this
*/
setKeepAliveTimeout(keepAliveTimeout) {
this._keepAliveTimeout = keepAliveTimeout;
return this;
}
/**
* Getter to access to the http/https server
*
@@ -70,7 +94,7 @@ class Server {
* Setter to the listening port
*
* @param {number} port - Port to listen into
- * @return {Server} itself
+ * @return {undefined}
*/
setPort(port) {
this._port = port;
@@ -85,6 +109,25 @@ class Server {
return this._port;
}
/**
* Setter to the bind address
*
* @param {String} address - address bound to the socket
* @return {undefined}
*/
setBindAddress(address) {
this._address = address;
}
/**
* Getter to access the bind address
*
* @return {String} address bound to the socket
*/
getBindAddress() {
return this._address;
}
/**
* Getter to access to the noDelay (nagle algorithm) configuration
*
@@ -144,7 +187,7 @@ class Server {
* Function called when no handler specified in the server
*
* @param {http.IncomingMessage|https.IncomingMessage} req - Request object
- * @param {http.ServerResponse|https.ServerResponse} res - Response object
+ * @param {http.ServerResponse} res - Response object
* @return {undefined}
*/
_noHandlerCb(req, res) {
@@ -161,13 +204,11 @@ class Server {
/**
* Function called when request received
*
- * @param {http.IncomingMessage|https.IncomingMessage} req - Request object
- * @param {http.ServerResponse|https.ServerResponse} res - Response object
+ * @param {http.IncomingMessage} req - Request object
+ * @param {http.ServerResponse} res - Response object
* @return {undefined}
*/
_onRequest(req, res) {
- // Setting no delay of the socket to the value configured
- req.connection.setNoDelay(this.isNoDelay());
return this._cbOnRequest(req, res);
}
@@ -180,6 +221,8 @@ class Server {
this._logger.info('Server is listening', {
method: 'arsenal.network.Server._onListening',
address: this._server.address(),
serverIP: this._address,
serverPort: this._port,
});
this._cbOnListening();
}
@@ -244,6 +287,32 @@ class Server {
return this;
}
/**
* Set the checkExpectation handler callback
*
* @param {function} cb - Callback(req, res)
* @return {Server} itself
*/
onCheckExpectation(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnCheckExpectation = cb;
return this;
}
/**
* Set the checkContinue handler callback
*
* @param {function} cb - Callback(req, res)
* @return {Server} itself
*/
onCheckContinue(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnCheckContinue = cb;
return this;
}
/**
* Set the error handler callback, if this handler returns true when an
* error is triggered, the server will restart
@@ -278,7 +347,7 @@ class Server {
* @return {undefined}
*/
_onSecureConnection(sock) {
- if (!sock.authorized) {
+ if (this._https.requestCert && !sock.authorized) {
this._logger.error('rejected secure connection', {
address: sock.address(),
authorized: false,
@@ -302,6 +371,30 @@ class Server {
});
}
/**
* Function called when request with an HTTP Expect header is received,
* where the value is not 100-continue
*
* @param {http.IncomingMessage|https.IncomingMessage} req - Request object
* @param {http.ServerResponse} res - Response object
* @return {undefined}
*/
_onCheckExpectation(req, res) {
return this._cbOnCheckExpectation(req, res);
}
/**
* Function called when request with an HTTP Expect: 100-continue
* is received
*
* @param {http.IncomingMessage|https.IncomingMessage} req - Request object
* @param {http.ServerResponse} res - Response object
* @return {undefined}
*/
_onCheckContinue(req, res) {
return this._cbOnCheckContinue(req, res);
}
/**
* Function to start the Server
*
@@ -325,17 +418,30 @@ class Server {
this._server = http.createServer(
(req, res) => this._onRequest(req, res));
}
if (this._keepAliveTimeout) {
this._server.keepAliveTimeout = this._keepAliveTimeout;
}
this._server.on('error', err => this._onError(err));
this._server.on('secureConnection',
sock => this._onSecureConnection(sock));
this._server.on('connection', sock => {
// Setting no delay of the socket to the value configured
sock.setNoDelay(this.isNoDelay());
sock.on('error', err => this._logger.info(
'socket error - request rejected', { error: err }));
});
this._server.on('tlsClientError', (err, sock) =>
this._onClientError(err, sock));
this._server.on('clientError', (err, sock) =>
this._onClientError(err, sock));
this._server.on('checkContinue', (req, res) =>
this._onCheckContinue(req, res));
this._server.on('checkExpectation', (req, res) =>
this._onCheckExpectation(req, res));
this._server.on('listening', () => this._onListening());
}
- this._server.listen(this._port);
+ this._server.listen(this._port, this._address);
return this;
}
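The diff above is easier to follow with a usage sketch; this assumes the (port, logger) constructor signature implied by the RESTServer and ProbeServer subclasses later in this changeset:

// Hypothetical usage sketch; assumes a (port, logger) constructor as
// implied by the subclasses shown below.
const werelogs = require('werelogs');
const Server = require('./server');

const server = new Server(8000, new werelogs.Logger('Example'));
// defaults to '::' when IPv6 is supported, '0.0.0.0' otherwise
server.setBindAddress('0.0.0.0');
server.setKeepAliveTimeout(60 * 1000)
    .onRequest((req, res) => {
        res.writeHead(200);
        res.end('OK');
    })
    .start();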

lib/network/http/utils.js (new file, 119 lines)

@@ -0,0 +1,119 @@
'use strict'; // eslint-disable-line
const os = require('os');
const errors = require('../../errors');
/**
* Parse the Range header into an object
*
* @param {String} rangeHeader - The 'Range' header value
* @return {Object} object containing a range specification, with
* either of:
* - start and end attributes: a fully specified range request
* - a single start attribute: no end is specified in the range request
* - a suffix attribute: suffix range request
* - an error attribute of type errors.InvalidArgument if the range
* syntax is invalid
*/
function parseRangeSpec(rangeHeader) {
const rangeMatch = /^bytes=([0-9]+)?-([0-9]+)?$/.exec(rangeHeader);
if (rangeMatch) {
const rangeValues = rangeMatch.slice(1, 3);
if (rangeValues[0] === undefined) {
if (rangeValues[1] !== undefined) {
return { suffix: Number.parseInt(rangeValues[1], 10) };
}
} else {
const rangeSpec = { start: Number.parseInt(rangeValues[0], 10) };
if (rangeValues[1] === undefined) {
return rangeSpec;
}
rangeSpec.end = Number.parseInt(rangeValues[1], 10);
if (rangeSpec.start <= rangeSpec.end) {
return rangeSpec;
}
}
}
return { error: errors.InvalidArgument };
}
/**
* Convert a range specification as given by parseRangeSpec() into a
* fully specified absolute byte range
*
* @param {Object} rangeSpec - Parsed range specification as returned
* by parseRangeSpec()
* @param {Number} objectSize - Total byte size of the whole object
* @return {Object} object containing either:
* - a 'range' attribute which is a fully specified byte range [start,
*   end], as the inclusive absolute byte range to request from the
*   object
* - or no attribute if the requested range is a valid range request
*   for a whole empty object (non-zero suffix range)
* - or an 'error' attribute of type errors.InvalidRange if the
*   requested range is out of the object's boundaries.
*/
function getByteRangeFromSpec(rangeSpec, objectSize) {
if (rangeSpec.suffix !== undefined) {
if (rangeSpec.suffix === 0) {
// 0-byte suffix is always invalid (even on empty objects)
return { error: errors.InvalidRange };
}
if (objectSize === 0) {
// any other suffix range on an empty object returns the
// full object (0 bytes)
return {};
}
return { range: [Math.max(objectSize - rangeSpec.suffix, 0),
objectSize - 1] };
}
if (rangeSpec.start < objectSize) {
// test is false if end is undefined
return { range: [rangeSpec.start,
(rangeSpec.end < objectSize ?
rangeSpec.end : objectSize - 1)] };
}
return { error: errors.InvalidRange };
}
/**
* Convenience function that combines parseRangeSpec() and
* getByteRangeFromSpec()
*
* @param {String} rangeHeader - The 'Range' header value
* @param {Number} objectSize - Total byte size of the whole object
* @return {Object} object containing either:
* - a 'range' attribute which is a fully specified byte range [start,
* end], as the inclusive absolute byte range to request from the
* object
* - or no attribute if the requested range is either syntactically
* incorrect or is a valid range request for an empty object
* (non-zero suffix range)
* - or an 'error' attribute of type errors.InvalidRange if
* the requested range is out of the object's boundaries.
*/
function parseRange(rangeHeader, objectSize) {
const rangeSpec = parseRangeSpec(rangeHeader);
if (rangeSpec.error) {
// invalid range syntax is silently ignored in HTTP spec,
// hence returns the whole object
return {};
}
return getByteRangeFromSpec(rangeSpec, objectSize);
}
function checkSupportIPv6() {
const niList = os.networkInterfaces();
return Object.keys(niList).some(network =>
niList[network].some(intfc => intfc.family === 'IPv6'));
}
module.exports = {
parseRangeSpec,
getByteRangeFromSpec,
parseRange,
checkSupportIPv6,
};
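To make the range semantics above concrete, a few worked examples; the return values in the comments follow directly from the logic of parseRangeSpec() and parseRange():

// Worked examples for the helpers above.
const { parseRangeSpec, parseRange } = require('./utils');

parseRangeSpec('bytes=0-9');     // { start: 0, end: 9 }
parseRangeSpec('bytes=500-');    // { start: 500 }
parseRangeSpec('bytes=-100');    // { suffix: 100 }
parseRangeSpec('bytes=9-0');     // { error: errors.InvalidArgument }

// against a 1000-byte object:
parseRange('bytes=0-9', 1000);      // { range: [0, 9] }
parseRange('bytes=-100', 1000);     // { range: [900, 999] }
parseRange('bytes=990-2000', 1000); // { range: [990, 999] }
parseRange('bytes=2000-', 1000);    // { error: errors.InvalidRange }
parseRange('not-a-range', 1000);    // {} (invalid syntax is ignored)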


@@ -0,0 +1,109 @@
const httpServer = require('../http/server');
const werelogs = require('werelogs');
const errors = require('../../errors');
const DEFAULT_LIVE_ROUTE = '/_/live';
const DEFAULT_READY_ROUTE = '/_/ready';
const DEFAULT_METRICS_ROUTE = '/_/metrics';
/**
* ProbeDelegate is used to determine if a probe is successful or
* if any errors are present.
* If everything is working as intended, it is a no-op.
* Otherwise, return a string representing what is failing.
* @callback ProbeDelegate
* @param { import('http').ServerResponse } res - HTTP response for writing
* @param {werelogs.Logger} log - Werelogs instance for logging if you choose to
* @return {(string|undefined)} String representing issues to report. An empty
* string or undefined is used to represent no issues.
*/
/**
* @typedef {Object} ProbeServerParams
* @property {number} port - Port to run server on
* @property {string} [bindAddress] - Address to bind to, defaults to localhost
*/
/**
* ProbeServer is a generic server for handling probe checks or other
* generic responses.
*
* @extends {httpServer}
*/
class ProbeServer extends httpServer {
/**
* Create a new ProbeServer with parameters
*
* @param {ProbeServerParams} params - Parameters for server
*/
constructor(params) {
const logging = new werelogs.Logger('ProbeServer');
super(params.port, logging);
this.logging = logging;
this.setBindAddress(params.bindAddress || 'localhost');
// hooking our request processing function by calling the
// parent's method for that
this.onRequest(this._onRequest);
/**
* Map of routes to callback methods
* @type {Map<string, ProbeDelegate>}
*/
this._handlers = new Map();
}
/**
* Add request handler at the path
*
* @example <caption>If service is not connected</caption>
* addHandler(DEFAULT_LIVE_ROUTE, (res, log) => {
* if (!redisConnected) {
* return 'Redis is not connected';
* }
* res.writeHead(200)
* res.end()
* })
* @param {string|string[]} pathOrPaths - URL path(s) for where the request should be handled
* @param {ProbeDelegate} handler - Callback to handle request
* @returns {undefined}
*/
addHandler(pathOrPaths, handler) {
let paths = pathOrPaths;
if (typeof paths === 'string') {
paths = [paths];
}
for (const p of paths) {
this._handlers.set(p, handler);
}
}
_onRequest(req, res) {
const log = this.logging.newRequestLogger();
log.debug('request received', { method: req.method, url: req.url });
if (req.method !== 'GET') {
errors.MethodNotAllowed.writeResponse(res);
return;
}
if (!this._handlers.has(req.url)) {
errors.InvalidURI.writeResponse(res);
return;
}
const probeResponse = this._handlers.get(req.url)(res, log);
if (probeResponse !== undefined && probeResponse !== '') {
// Return an internal error with the response
errors.InternalError
.customizeDescription(probeResponse)
.writeResponse(res);
}
}
}
module.exports = {
ProbeServer,
DEFAULT_LIVE_ROUTE,
DEFAULT_READY_ROUTE,
DEFAULT_METRICS_ROUTE,
};
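A minimal liveness-probe sketch for the class above; isBackendUp() is a hypothetical application-level health check standing in for real logic:

// Hypothetical usage sketch; isBackendUp() is a stand-in check.
const { ProbeServer, DEFAULT_LIVE_ROUTE } = require('./ProbeServer');

const probeServer = new ProbeServer({ port: 8080 });
probeServer.addHandler(DEFAULT_LIVE_ROUTE, (res, log) => {
    if (!isBackendUp()) {
        // a non-empty string is reported back as an InternalError
        return 'backend is not reachable';
    }
    res.writeHead(200);
    res.end();
    return undefined;
});
probeServer.start();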


@@ -0,0 +1,312 @@
'use strict'; // eslint-disable-line
const assert = require('assert');
const http = require('http');
const werelogs = require('werelogs');
const constants = require('../../constants');
const utils = require('./utils');
const errors = require('../../errors');
const HttpAgent = require('agentkeepalive');
function setRequestUids(reqHeaders, reqUids) {
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['X-Scal-Request-Uids'] = reqUids;
}
function setRange(reqHeaders, range) {
const rangeStart = range[0] !== undefined ? range[0].toString() : '';
const rangeEnd = range[1] !== undefined ? range[1].toString() : '';
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['Range'] = `bytes=${rangeStart}-${rangeEnd}`;
}
function setContentType(reqHeaders, contentType) {
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['Content-Type'] = contentType;
}
function setContentLength(reqHeaders, size) {
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['Content-Length'] = size.toString();
}
function makeErrorFromHTTPResponse(response) {
const rawBody = response.read();
const body = (rawBody !== null ? rawBody.toString() : '');
let error;
try {
const fields = JSON.parse(body);
error = errors[fields.errorType]
.customizeDescription(fields.errorMessage);
} catch (err) {
error = new Error(body);
}
// error is always a newly created object, so we can modify its
// properties
error.remote = true;
return error;
}
/**
* @class
* @classdesc REST Client interface
*
* The API is usable when the object is constructed.
*/
class RESTClient {
/**
* Interface to the data file server
* @constructor
* @param {Object} params - Contains the basic configuration.
* @param {String} params.host - hostname or ip address of the
* RESTServer instance
* @param {Number} params.port - port number that the RESTServer
* instance listens to
* @param {Werelogs.API} [params.logApi] - logging API instance object
*/
constructor(params) {
assert(params.host);
assert(params.port);
this.host = params.host;
this.port = params.port;
this.setupLogging(params.logApi);
this.httpAgent = new HttpAgent({
keepAlive: true,
freeSocketTimeout: constants.httpClientFreeSocketTimeout,
});
}
/**
* Destroy the HTTP agent, forcing a close of the remaining open
* connections
*
* @return {undefined}
*/
destroy() {
this.httpAgent.destroy();
}
/*
* Create a dedicated logger for RESTClient, from the provided werelogs API
* instance.
*
* @param {werelogs.API} logApi - object providing a constructor function
* for the Logger object
* @return {undefined}
*/
setupLogging(logApi) {
this.logging = new (logApi || werelogs).Logger('DataFileRESTClient');
}
createLogger(reqUids) {
return reqUids ?
this.logging.newRequestLoggerFromSerializedUids(reqUids) :
this.logging.newRequestLogger();
}
doRequest(method, headers, key, log, responseCb) {
const reqHeaders = headers || {};
const urlKey = key || '';
const reqParams = {
hostname: this.host,
port: this.port,
method,
path: `${constants.dataFileURL}/${urlKey}`,
headers: reqHeaders,
agent: this.httpAgent,
};
log.debug(`about to send ${method} request`, {
hostname: reqParams.hostname,
port: reqParams.port,
path: reqParams.path,
headers: reqParams.headers });
const request = http.request(reqParams, responseCb);
// disable nagle algorithm
request.setNoDelay(true);
return request;
}
/**
* This sends a PUT request to the REST server
* @param {http.IncomingMessage} stream - Request with the data to send
* @param {string} stream.contentHash - hash of the data to send
* @param {Number} size - size of the data to send, in bytes
* @param {string} reqUids - The serialized request ids
* @param {RESTClient~putCallback} callback - callback
* @returns {undefined}
*/
put(stream, size, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
setContentType(headers, 'application/octet-stream');
setContentLength(headers, size);
const request = this.doRequest('PUT', headers, null, log, response => {
response.once('readable', () => {
// expects '201 Created'
if (response.statusCode !== 201) {
return callback(makeErrorFromHTTPResponse(response));
}
// retrieve the key from the Location response header
// containing the complete URL to the object, like
// /DataFile/abcdef.
const location = response.headers.location;
if (location === undefined) {
return callback(new Error(
'missing Location header in the response'));
}
const locationInfo = utils.explodePath(location);
if (!locationInfo) {
return callback(new Error(
`bad Location response header: ${location}`));
}
return callback(null, locationInfo.key);
});
}).on('finish', () => {
log.debug('finished sending PUT data to the REST server', {
component: 'RESTClient',
method: 'put',
contentLength: size,
});
}).on('error', callback);
stream.pipe(request);
stream.on('error', err => {
log.error('error from readable stream', {
error: err,
method: 'put',
component: 'RESTClient',
});
request.end();
});
}
/**
* send a GET request to the REST server
* @param {String} key - The key associated to the value
* @param {Number[]|undefined} range - optional [start, end] inclusive
* byte range specification, as defined in the HTTP/1.1 RFC
* @param {String} reqUids - The serialized request ids
* @param {RESTClient~getCallback} callback - callback
* @returns {undefined}
*/
get(key, range, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
if (range) {
setRange(headers, range);
}
const request = this.doRequest('GET', headers, key, log, response => {
response.once('readable', () => {
if (response.statusCode !== 200 &&
response.statusCode !== 206) {
return callback(makeErrorFromHTTPResponse(response));
}
return callback(null, response);
});
}).on('error', callback);
request.end();
}
/**
* Send a GET request to the REST server, for a specific action rather
* than an object. Response will be truncated at the high watermark for
* the internal buffer of the stream, which is 16KB.
*
* @param {String} action - The action to query
* @param {String} reqUids - The serialized request ids
* @param {RESTClient~getCallback} callback - callback
* @returns {undefined}
*/
getAction(action, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
const reqParams = {
hostname: this.host,
port: this.port,
method: 'GET',
path: `${constants.dataFileURL}?${action}`,
headers,
agent: this.httpAgent,
};
log.debug('about to send GET request', {
hostname: reqParams.hostname,
port: reqParams.port,
path: reqParams.path,
headers: reqParams.headers });
const request = http.request(reqParams, response => {
response.once('readable', () => {
if (response.statusCode !== 200 &&
response.statusCode !== 206) {
return callback(makeErrorFromHTTPResponse(response));
}
return callback(null, response.read().toString());
});
}).on('error', callback);
request.end();
}
/**
* send a DELETE request to the REST server
* @param {String} key - The key associated to the values
* @param {String} reqUids - The serialized request ids
* @param {RESTClient~deleteCallback} callback - callback
* @returns {undefined}
*/
delete(key, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
const request = this.doRequest(
'DELETE', headers, key, log, response => {
response.once('readable', () => {
if (response.statusCode !== 200 &&
response.statusCode !== 204) {
return callback(makeErrorFromHTTPResponse(response));
}
return callback(null);
});
}).on('error', callback);
request.end();
}
}
/**
* @callback RESTClient~putCallback
* @param {Error} err - The encountered error, if any
* @param {String} key - The key to access the data
*/
/**
* @callback RESTClient~getCallback
* @param {Error} err - The encountered error, if any
* @param {stream.Readable} stream - The stream of values fetched
*/
/**
* @callback RESTClient~deleteCallback
* @param {Error} err - The encountered error, if any
*/
module.exports = RESTClient;
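A put/get round trip with the client above might look like the following sketch; host, port, file path and size are illustrative (the size must match the actual byte count of the stream):

// Hypothetical usage sketch; connection and file details are
// illustrative placeholders.
const fs = require('fs');
const werelogs = require('werelogs');
const RESTClient = require('./RESTClient');

const client = new RESTClient({ host: 'localhost', port: 8000 });
const log = new werelogs.Logger('Example').newRequestLogger();
const reqUids = log.getSerializedUids();

client.put(fs.createReadStream('/tmp/data.bin'), 1024, reqUids,
    (err, key) => {
        if (err) {
            return log.error('put failed', { error: err });
        }
        // read back the first 100 bytes of the stored object
        return client.get(key, [0, 99], reqUids, (err, rs) => {
            if (err) {
                return log.error('get failed', { error: err });
            }
            return rs.pipe(process.stdout);
        });
    });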


@@ -0,0 +1,314 @@
'use strict'; // eslint-disable-line
const assert = require('assert');
const url = require('url');
const werelogs = require('werelogs');
const httpServer = require('../http/server');
const constants = require('../../constants');
const utils = require('./utils');
const httpUtils = require('../http/utils');
const errors = require('../../errors');
function setContentLength(response, contentLength) {
response.setHeader('Content-Length', contentLength.toString());
}
function setContentRange(response, byteRange, objectSize) {
const [start, end] = byteRange;
assert(start !== undefined && end !== undefined);
response.setHeader('Content-Range',
`bytes ${start}-${end}/${objectSize}`);
}
function sendError(res, log, error, optMessage) {
res.writeHead(error.code);
let message;
if (optMessage) {
message = optMessage;
} else {
message = error.description || '';
}
log.debug('sending back error response', { httpCode: error.code,
errorType: error.message,
error: message });
res.end(`${JSON.stringify({ errorType: error.message,
errorMessage: message })}\n`);
}
/**
* Parse the given url and return a pathInfo object. Sanity checks are
* performed.
*
* @param {String} urlStr - URL to parse
* @param {Boolean} expectKey - whether the command expects to see a
* key in the URL
* @return {Object} a pathInfo object with URL items containing the
* following attributes:
* - pathInfo.service {String} - The name of REST service ("DataFile")
* - pathInfo.key {String} - The requested key
*/
function parseURL(urlStr, expectKey) {
const urlObj = url.parse(urlStr);
const pathInfo = utils.explodePath(urlObj.path);
if (pathInfo.service !== constants.dataFileURL) {
throw errors.InvalidAction.customizeDescription(
`unsupported service '${pathInfo.service}'`);
}
if (expectKey && pathInfo.key === undefined) {
throw errors.MissingParameter.customizeDescription(
'URL is missing key');
}
if (!expectKey && pathInfo.key !== undefined) {
// note: we may implement rewrite functionality by allowing a
// key in the URL, though we may still provide the new key in
// the Location header to keep immutability property and
// atomicity of the update (we would just remove the old
// object when the new one has been written entirely in this
// case, saving a request over an equivalent PUT + DELETE).
throw errors.InvalidURI.customizeDescription(
'PUT url cannot contain a key');
}
return pathInfo;
}
/**
* @class
* @classdesc REST Server interface
*
* You have to call setup() to initialize the storage backend, then
* start() to start listening to the configured port.
*/
class RESTServer extends httpServer {
/**
* @constructor
* @param {Object} params - constructor params
* @param {Number} params.port - TCP port where the server listens to
* @param {arsenal.storage.data.file.Store} params.dataStore -
* data store object
* @param {String} [params.bindAddress='localhost'] - address
* bound to the socket
* @param {Object} params.log - logger configuration
*/
constructor(params) {
assert(params.port);
werelogs.configure({
level: params.log.logLevel,
dump: params.log.dumpLevel,
});
const logging = new werelogs.Logger('DataFileRESTServer');
super(params.port, logging);
this.logging = logging;
this.dataStore = params.dataStore;
this.setBindAddress(params.bindAddress || 'localhost');
this.setKeepAliveTimeout(constants.httpServerKeepAliveTimeout);
// hooking our request processing function by calling the
// parent's method for that
this.onRequest(this._onRequest);
this.reqMethods = {
PUT: this._onPut.bind(this),
GET: this._onGet.bind(this),
DELETE: this._onDelete.bind(this),
};
}
/**
* Setup the storage backend
*
* @param {function} callback - called when finished
* @return {undefined}
*/
setup(callback) {
this.dataStore.setup(callback);
}
/**
* Create a new request logger object
*
* @param {String} reqUids - serialized request UIDs (as received in
* the X-Scal-Request-Uids header)
* @return {werelogs.RequestLogger} new request logger
*/
createLogger(reqUids) {
return reqUids ?
this.logging.newRequestLoggerFromSerializedUids(reqUids) :
this.logging.newRequestLogger();
}
/**
* Main incoming request handler, dispatches to method-specific
* handlers
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @return {undefined}
*/
_onRequest(req, res) {
const reqUids = req.headers['x-scal-request-uids'];
const log = this.createLogger(reqUids);
log.debug('request received', { method: req.method,
url: req.url });
if (req.method in this.reqMethods) {
this.reqMethods[req.method](req, res, log);
} else {
// Method Not Allowed
sendError(res, log, errors.MethodNotAllowed);
}
}
/**
* Handler for PUT requests
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @param {werelogs.RequestLogger} log - logger object
* @return {undefined}
*/
_onPut(req, res, log) {
let size;
try {
parseURL(req.url, false);
const contentLength = req.headers['content-length'];
if (contentLength === undefined) {
throw errors.MissingContentLength;
}
size = Number.parseInt(contentLength, 10);
if (Number.isNaN(size)) {
throw errors.InvalidInput.customizeDescription(
'bad Content-Length');
}
} catch (err) {
return sendError(res, log, err);
}
this.dataStore.put(req, size, log, (err, key) => {
if (err) {
return sendError(res, log, err);
}
log.debug('sending back 201 response to PUT', { key });
res.setHeader('Location', `${constants.dataFileURL}/${key}`);
setContentLength(res, 0);
res.writeHead(201);
return res.end(() => {
log.debug('PUT response sent', { key });
});
});
return undefined;
}
/**
* Handler for GET requests
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @param {werelogs.RequestLogger} log - logger object
* @return {undefined}
*/
_onGet(req, res, log) {
let pathInfo;
let rangeSpec = undefined;
// Get request on the toplevel endpoint with ?action
if (req.url.startsWith(`${constants.dataFileURL}?`)) {
const queryParam = url.parse(req.url).query;
if (queryParam === 'diskUsage') {
return this.dataStore.getDiskUsage((err, result) => {
if (err) {
return sendError(res, log, err);
}
res.writeHead(200);
res.end(JSON.stringify(result));
return undefined;
});
}
}
// Get request on an actual object
try {
pathInfo = parseURL(req.url, true);
const rangeHeader = req.headers.range;
if (rangeHeader !== undefined) {
rangeSpec = httpUtils.parseRangeSpec(rangeHeader);
if (rangeSpec.error) {
// ignore header if syntax is invalid
rangeSpec = undefined;
}
}
} catch (err) {
return sendError(res, log, err);
}
this.dataStore.stat(pathInfo.key, log, (err, info) => {
if (err) {
return sendError(res, log, err);
}
let byteRange;
let contentLength;
if (rangeSpec) {
const { range, error } = httpUtils.getByteRangeFromSpec(
rangeSpec, info.objectSize);
if (error) {
return sendError(res, log, error);
}
byteRange = range;
}
if (byteRange) {
contentLength = byteRange[1] - byteRange[0] + 1;
} else {
contentLength = info.objectSize;
}
this.dataStore.get(pathInfo.key, byteRange, log, (err, rs) => {
if (err) {
return sendError(res, log, err);
}
log.debug('sending back 200/206 response with contents',
{ key: pathInfo.key });
setContentLength(res, contentLength);
res.setHeader('Accept-Ranges', 'bytes');
if (byteRange) {
// data is immutable, so objectSize is still correct
setContentRange(res, byteRange, info.objectSize);
res.writeHead(206);
} else {
res.writeHead(200);
}
rs.pipe(res);
return undefined;
});
return undefined;
});
return undefined;
}
/**
* Handler for DELETE requests
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @param {werelogs.RequestLogger} log - logger object
* @return {undefined}
*/
_onDelete(req, res, log) {
let pathInfo;
try {
pathInfo = parseURL(req.url, true);
} catch (err) {
return sendError(res, log, err);
}
this.dataStore.delete(pathInfo.key, log, err => {
if (err) {
return sendError(res, log, err);
}
log.debug('sending back 204 response to DELETE',
{ key: pathInfo.key });
res.writeHead(204);
return res.end(() => {
log.debug('DELETE response sent', { key: pathInfo.key });
});
});
return undefined;
}
}
module.exports = RESTServer;
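Standing the server up could look like the following sketch; DataFileStore and its require path are stand-ins for any data store object implementing the setup/put/get/stat/delete/getDiskUsage interface used above:

// Hypothetical usage sketch; DataFileStore is a stand-in data store.
const RESTServer = require('./RESTServer');
const DataFileStore = require('./DataFileStore');

const server = new RESTServer({
    port: 8000,
    dataStore: new DataFileStore({ dataPath: '/var/local/data' }),
    log: { logLevel: 'info', dumpLevel: 'error' },
});
server.setup(err => {
    if (err) {
        throw err;
    }
    server.start();
});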

lib/network/rest/utils.js (new file, 15 lines)

@@ -0,0 +1,15 @@
'use strict'; // eslint-disable-line
const errors = require('../../errors');
module.exports.explodePath = function explodePath(path) {
const pathMatch = /^(\/[a-zA-Z0-9]+)(\/([0-9a-f]*))?$/.exec(path);
if (pathMatch) {
return {
service: pathMatch[1],
key: (pathMatch[3] !== undefined && pathMatch[3].length > 0 ?
pathMatch[3] : undefined),
};
}
throw errors.InvalidURI.customizeDescription('malformed URI');
};
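Worked examples for the regular expression above (return values shown in comments):

// Worked examples; results follow the regular expression above.
const { explodePath } = require('./utils');

explodePath('/DataFile/678af2d0');
// => { service: '/DataFile', key: '678af2d0' }
explodePath('/DataFile');
// => { service: '/DataFile', key: undefined }
explodePath('/DataFile/not-hex');
// => throws errors.InvalidURI ('malformed URI')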


@@ -0,0 +1,132 @@
'use strict'; // eslint-disable-line
const assert = require('assert');
const rpc = require('./rpc.js');
/**
* @class
* @classdesc Wrap a LevelDB RPC client supporting sub-levels on top
* of a base RPC client.
*
* An additional "subLevel" request parameter is attached to RPC
* requests to tell the RPC service for which sub-level the request
* applies.
*
* openSub() can be used to open sub-levels, returning a new LevelDB
* RPC client object accessing the sub-level transparently.
*/
class LevelDbClient extends rpc.BaseClient {
/**
* @constructor
*
* @param {Object} params - constructor params
* @param {String} params.url - URL of the socket.io namespace,
* e.g. 'http://localhost:9990/metadata'
* @param {Logger} params.logger - logger object
* @param {Number} [params.callTimeoutMs] - timeout for remote calls
* @param {Number} [params.streamMaxPendingAck] - max number of
* in-flight output stream packets sent to the server without an ack
* received yet
* @param {Number} [params.streamAckTimeoutMs] - timeout for receiving
* an ack after an output stream packet is sent to the server
*/
constructor(params) {
super(params);
this.path = []; // start from the root sublevel
// transmit the sublevel information as a request param
this.addRequestInfoProducer(
dbClient => ({ subLevel: dbClient.path }));
}
/**
* return a handle to a sublevel database
*
* @note this function has no side-effect on the db, it just
* returns a handle properly configured to access the sublevel db
* from the client.
*
* @param {String} subName - name of sublevel
* @return {Object} a handle to the sublevel database that has the
* same API as its parent
*/
openSub(subName) {
const subDbClient = new LevelDbClient({ url: this.url,
logger: this.logger });
// make the same exposed RPC calls available from the sub-level object
Object.assign(subDbClient, this);
// listeners should not be duplicated on sublevel
subDbClient.removeAllListeners();
// copy and append the new sublevel to the path
subDbClient.path = subDbClient.path.slice();
subDbClient.path.push(subName);
return subDbClient;
}
}
/**
* @class
* @classdesc Wrap a LevelDB RPC service supporting sub-levels on top
* of a base RPC service.
*
* An additional "subLevel" request parameter received from the RPC
* client is automatically parsed, and the requested sub-level of the
* database is opened and attached to the call environment in
* env.subDb (env is passed as first parameter of received RPC calls).
*/
class LevelDbService extends rpc.BaseService {
/**
* @constructor
*
* @param {Object} params - constructor parameters
* @param {String} params.namespace - socket.io namespace, a free
* string name that must start with '/'. The client will have to
* provide the same namespace in the URL
* (http://host:port/namespace)
* @param {Object} params.rootDb - root LevelDB database object to
* expose to remote clients
* @param {Object} params.logger - logger object
* @param {String} [params.apiVersion="1.0"] - Version number that
* is shared with clients in the manifest (may be used to ensure
* backward compatibility)
* @param {RPCServer} [params.server] - convenience parameter,
* calls server.registerServices() automatically
*/
constructor(params) {
assert(params.rootDb);
super(params);
this.rootDb = params.rootDb;
this.addRequestInfoConsumer((dbService, reqParams) => {
const env = {};
env.subLevel = reqParams.subLevel;
env.subDb = this.lookupSubLevel(reqParams.subLevel);
return env;
});
}
/**
* lookup a sublevel db given by the <tt>path</tt> array from the
* root leveldb handle.
*
* @param {String []} path - path to the sublevel, as a
* piecewise array of sub-levels
* @return {Object} the handle to the sublevel
*/
lookupSubLevel(path) {
let subDb = this.rootDb;
path.forEach(pathItem => {
subDb = subDb.sublevel(pathItem);
});
return subDb;
}
}
module.exports = {
LevelDbClient,
LevelDbService,
};
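As a sketch, a client opening a sub-level might look like this; the URL is illustrative and the put() call assumes the service exposes such a function in its manifest:

// Hypothetical usage sketch; the URL and the put() call are
// assumptions about what the remote service exposes.
const werelogs = require('werelogs');
const { LevelDbClient } = require('./level-net');

const dbClient = new LevelDbClient({
    url: 'http://localhost:9990/metadata',
    logger: new werelogs.Logger('Example'),
});
dbClient.connect(err => {
    if (err) {
        throw err;
    }
    // openSub() is client-side only: the returned handle simply tags
    // its RPC requests with the extended sub-level path.
    const bucketDb = dbClient.openSub('my-bucket');
    bucketDb.put('my-key', 'my-value', err => {
        /* ... */
    });
});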

lib/network/rpc/rpc.js (new file, 749 lines)

@@ -0,0 +1,749 @@
'use strict'; // eslint-disable-line
const http = require('http');
const io = require('socket.io');
const ioClient = require('socket.io-client');
const sioStream = require('./sio-stream');
const async = require('async');
const assert = require('assert');
const EventEmitter = require('events').EventEmitter;
const flattenError = require('./utils').flattenError;
const reconstructError = require('./utils').reconstructError;
const errors = require('../../errors');
const jsutil = require('../../jsutil');
const DEFAULT_CALL_TIMEOUT_MS = 30000;
// to handle recursion without a no-use-before-define warning
// eslint-disable-next-line prefer-const
let streamRPCJSONObj;
/**
* @brief get a client object that proxies RPC calls to a remote
* server through socket.io events
*
* Additional request environment parameters that are not passed as
* explicit RPC arguments can be passed using addRequestInfoProducer()
* method, directly or through sub-classing
*
* NOTE: synchronous calls on the server-side API (i.e. those which
* take no callback argument) become asynchronous on the client, take
* one additional parameter (the callback), then:
*
* - if it throws, the error is passed as callback's first argument,
* otherwise null is passed
* - the return value is passed as callback's second argument (unless
* an error occurred).
*/
class BaseClient extends EventEmitter {
/**
* @constructor
*
* @param {Object} params - constructor params
* @param {String} params.url - URL of the socket.io namespace,
* e.g. 'http://localhost:9990/metadata'
* @param {Logger} params.logger - logger object
* @param {Number} [params.callTimeoutMs] - timeout for remote calls
* @param {Number} [params.streamMaxPendingAck] - max number of
* in-flight output stream packets sent to the server without an ack
* received yet
* @param {Number} [params.streamAckTimeoutMs] - timeout for receiving
* an ack after an output stream packet is sent to the server
*/
constructor(params) {
const { url, logger, callTimeoutMs,
streamMaxPendingAck, streamAckTimeoutMs } = params;
assert(url);
assert(logger);
super();
this.url = url;
this.logger = logger;
this.callTimeoutMs = callTimeoutMs;
this.streamMaxPendingAck = streamMaxPendingAck;
this.streamAckTimeoutMs = streamAckTimeoutMs;
this.requestInfoProducers = [];
this.requestInfoProducers.push(
dbClient => ({ reqUids: dbClient.withReqUids }));
}
/**
* @brief internal RPC implementation w/o timeout
*
* @param {String} remoteCall - name of the remote function to call
* @param {Array} args - list of arguments to the remote function
* @param {function} cb - callback called when done
* @return {undefined}
*/
_call(remoteCall, args, cb) {
const wrapCb = (err, data) => {
cb(reconstructError(err),
this.socketStreams.decodeStreams(data));
};
this.logger.debug('remote call', { remoteCall, args });
this.socket.emit('call', remoteCall,
this.socketStreams.encodeStreams(args), wrapCb);
return undefined;
}
/**
* @brief call a remote function named <tt>remoteCall</tt>, with
* arguments <tt>args</tt> and callback <tt>cb</tt>
*
* <tt>cb</tt> is called when the remote function returns an ack, or
* when the timeout set by <tt>timeoutMs</tt> expires, whichever comes
* first. When an ack is received, the callback gets the arguments
* sent by the remote function in the ack response. In the case of
* timeout, it's passed a single Error argument with the code:
* 'ETIMEDOUT' property, and a self-described string in the 'info'
* property.
*
* @param {String} remoteCall - name of the remote function to call
* @param {Array} args - list of arguments to the remote function
* @param {function} cb - callback called when done or timeout
* @param {Number} timeoutMs - timeout in milliseconds
* @return {undefined}
*/
callTimeout(remoteCall, args, cb, timeoutMs = DEFAULT_CALL_TIMEOUT_MS) {
if (typeof cb !== 'function') {
throw new Error(`argument cb=${cb} is not a callback`);
}
async.timeout(this._call.bind(this), timeoutMs,
`operation ${remoteCall} timed out`)(remoteCall,
args, cb);
return undefined;
}
getCallTimeout() {
return this.callTimeoutMs;
}
setCallTimeout(newTimeoutMs) {
this.callTimeoutMs = newTimeoutMs;
}
/**
* connect to the remote RPC server
*
* @param {function} cb - callback when connection is complete or
* if there is an error
* @return {undefined}
*/
connect(cb) {
this.socket = ioClient(this.url);
this.socketStreams = sioStream.createSocket(
this.socket,
this.logger,
this.streamMaxPendingAck,
this.streamAckTimeoutMs);
const url = this.url;
this.socket.on('error', err => {
this.logger.warn('connectivity error to the RPC service',
{ url, error: err });
});
this.socket.on('connect', () => {
this.emit('connect');
});
this.socket.on('disconnect', () => {
this.emit('disconnect');
});
// only hard-coded call necessary to discover the others
this.createCall('getManifest');
this.getManifest((err, manifest) => {
if (err) {
this.logger.error('Error fetching manifest from RPC server',
{ error: err });
} else {
manifest.api.forEach(apiItem => {
this.createCall(apiItem.name);
});
}
if (cb) {
return cb(err);
}
return undefined;
});
}
/**
* disconnect this client from the RPC server. A disconnect event
* is emitted when done.
*
* @return {undefined}
*/
disconnect() {
this.socket.disconnect();
}
/**
* create a new RPC call with the given name
*
* This function should normally not be called by the user,
* because the API is automatically exposed by reading the
* manifest from the server.
*
* @param {String} remoteCall - name of the API call to create
* @return {undefined}
*/
createCall(remoteCall) {
this[remoteCall] = function onCall(...rpcArgs) {
const cb = rpcArgs.pop();
const args = { rpcArgs };
// produce the extra parameters for the request
this.requestInfoProducers.forEach(f => {
Object.assign(args, f(this));
});
this.callTimeout(remoteCall, args, cb, this.callTimeoutMs);
// reset temporary argument-passing sugar
this.withReqUids = undefined;
};
}
/**
* add a function that provides additional parameters to send
* along each request. It will be called before every single
* request, so the parameters can be dynamic.
*
* @param {function} f - function returning an object that
* contains the additional parameters for the request. It is
* called with the client object passed as a parameter.
* @return {undefined}
*/
addRequestInfoProducer(f) {
this.requestInfoProducers.push(f);
}
/**
* decorator function that adds information from the given logger
* object so that the remote end can reconstruct this information
* in the logs (namely the request UIDs). This call takes effect
* only for the next RPC call.
*
* The typical use case is:
* ```
* rpcClient.withRequestLogger(logger).callSomeFunction(params);
* ```
*
* @param {Object} logger - werelogs logger object
* @return {BaseClient} returns the original called client object
* so that the result can be chained with further calls
*/
withRequestLogger(logger) {
this.withReqUids = logger.getSerializedUids();
return this;
}
}
/**
* @class
* @classdesc RPC service class
*
* A service maps to a specific namespace and provides a set of RPC
* functions.
*
* Additional request environment parameters passed by the client
* should be parsed in helpers passed to addRequestInfoConsumer()
* method.
*
*/
class BaseService {
/**
* @constructor
*
* @param {Object} params - constructor parameters
* @param {String} params.namespace - socket.io namespace, a free
* string name that must start with '/'. The client will have to
* provide the same namespace in the URL
* (http://host:port/namespace)
* @param {Object} params.logger - logger object
* @param {String} [params.apiVersion="1.0"] - Version number that
* is shared with clients in the manifest (may be used to ensure
* backward compatibility)
* @param {RPCServer} [params.server] - convenience parameter,
* calls server.registerServices() automatically
*/
constructor(params) {
const { namespace, logger, apiVersion, server } = params;
assert(namespace);
assert(namespace.startsWith('/'));
assert(logger);
this.namespace = namespace;
this.logger = logger;
this.apiVersion = apiVersion || '1.0';
this.requestInfoConsumers = [];
// initialize with a single hard-coded API call, the user will
// register its own calls later
this.syncAPI = {};
this.asyncAPI = {};
this.registerSyncAPI({
getManifest: () => {
const exposedAPI = [];
Object.keys(this.syncAPI).forEach(callName => {
if (callName !== 'getManifest') {
exposedAPI.push({ name: callName });
}
});
Object.keys(this.asyncAPI).forEach(callName => {
exposedAPI.push({ name: callName });
});
return { apiVersion: this.apiVersion,
api: exposedAPI };
},
});
this.addRequestInfoConsumer((dbService, params) => {
const env = {};
if (params.reqUids) {
env.reqUids = params.reqUids;
env.requestLogger = dbService.logger
.newRequestLoggerFromSerializedUids(params.reqUids);
} else {
env.requestLogger = dbService.logger.newRequestLogger();
}
return env;
});
if (server) {
server.registerServices(this);
}
}
/**
* register a set of API functions that return a result synchronously
*
* @param {Object} apiExtension - Object mapping names to API
* function implementation. Each API function gets an
* environment object as first parameter that contains various
* useful attributes, while the rest of parameters are the RPC
* parameters as passed by the client in the call.
* @return {undefined}
*/
registerSyncAPI(apiExtension) {
Object.assign(this.syncAPI, apiExtension);
Object.keys(apiExtension).forEach(callName => {
this[callName] = function localCall(...args) {
const params = { rpcArgs: args };
if (this.requestParams) {
Object.assign(params, this.requestParams);
this.requestParams = undefined;
}
return this.onSyncCall(callName, params);
};
});
}
/**
* register a set of API functions that return a result through a
* callback passed as last argument
*
* @param {Object} apiExtension - Object mapping names to API
* function implementation. Each API function gets an
* environment object as first parameter that contains various
* useful attributes, while the rest of parameters are the RPC
* parameters as passed by the client in the call, followed by a
* callback function to call with an error status and optional
* additional response values.
* @return {undefined}
*/
registerAsyncAPI(apiExtension) {
Object.assign(this.asyncAPI, apiExtension);
Object.keys(apiExtension).forEach(callName => {
this[callName] = function localCall(...args) {
const cb = args.pop();
const params = { rpcArgs: args };
if (this.requestParams) {
Object.assign(params, this.requestParams);
this.requestParams = undefined;
}
return this.onAsyncCall(callName, params, cb);
};
});
}
withRequestParams(params) {
this.requestParams = params;
return this;
}
/**
* set the API version string, that is communicated to connecting
* clients in the manifest
*
* @param {String} apiVersion - arbitrary version string
* (suggested format "x.y")
* @return {undefined}
*/
setAPIVersion(apiVersion) {
this.apiVersion = apiVersion;
}
/**
* add a function to be called before each API call that is in
* charge of converting some extra request info (outside raw RPC
* arguments) into environment attributes directly usable by the
* API implementation
*
* @param {function} f - function to be called with two arguments:
* the service object and the params object received from the
* client, and which returns an object with the additional
* environment attributes
* @return {undefined}
*/
addRequestInfoConsumer(f) {
this.requestInfoConsumers.push(f);
}
_onCall(remoteCall, args, cb) {
if (remoteCall in this.asyncAPI) {
try {
this.onAsyncCall(remoteCall, args, (err, data) => {
cb(flattenError(err), data);
});
} catch (err) {
return cb(flattenError(err));
}
} else if (remoteCall in this.syncAPI) {
let result;
try {
result = this.onSyncCall(remoteCall, args);
return cb(null, result);
} catch (err) {
return cb(flattenError(err));
}
} else {
return cb(errors.InvalidArgument.customizeDescription(
`Unknown remote call ${remoteCall} ` +
`in namespace ${this.namespace}`));
}
return undefined;
}
_createCallEnv(params) {
const env = {};
this.requestInfoConsumers.forEach(f => {
const extraEnv = f(this, params);
Object.assign(env, extraEnv);
});
return env;
}
onSyncCall(remoteCall, params) {
const env = this._createCallEnv(params);
return this.syncAPI[remoteCall].apply(
this, [env].concat(params.rpcArgs));
}
onAsyncCall(remoteCall, params, cb) {
const env = this._createCallEnv(params);
this.asyncAPI[remoteCall].apply(
this, [env].concat(params.rpcArgs).concat(cb));
}
}
/**
* @brief create a server object that serves remote requests through
* socket.io events.
*
* Services associated to namespaces (aka. URL base path) must be
* registered thereafter on this server.
*
* Each service may customize the sending and reception of RPC
* messages through subclassing, e.g. LevelDbService looks up a
* particular sub-level before forwarding the RPC, providing it the
* target sub-level handle.
*
* @param {Object} params - params object
* @param {Object} params.logger - logger object
* @param {Number} [params.streamMaxPendingAck] - max number of
* in-flight output stream packets sent to the server without an ack
* received yet
* @param {Number} [params.streamAckTimeoutMs] - timeout for receiving
* an ack after an output stream packet is sent to the server
* @return {Object} a server object, not yet listening on a TCP port
* (you must call listen(port) on the returned object)
*/
function RPCServer(params) {
assert(params.logger);
const httpServer = http.createServer();
const server = io(httpServer);
const log = params.logger;
/**
* register a list of service objects on this server
*
* It's not necessary to call this function if you provided a
* "server" parameter to the service constructor.
*
* @param {...BaseService} serviceList - list of services to register
* @return {undefined}
*/
server.registerServices = function registerServices(...serviceList) {
serviceList.forEach(service => {
const sock = this.of(service.namespace);
sock.on('connection', conn => {
const streamsSocket = sioStream.createSocket(
conn,
params.logger,
params.streamMaxPendingAck,
params.streamAckTimeoutMs);
conn.on('error', err => {
log.error('error on socket.io connection',
{ namespace: service.namespace, error: err });
});
conn.on('call', (remoteCall, args, cb) => {
const decodedArgs = streamsSocket.decodeStreams(args);
service._onCall(remoteCall, decodedArgs, (err, res) => {
if (err) {
return cb(err);
}
const encodedRes = streamsSocket.encodeStreams(res);
return cb(err, encodedRes);
});
});
});
});
};
server.listen = function listen(port, bindAddress = undefined) {
httpServer.listen(port, bindAddress);
};
return server;
}
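// Example (hypothetical sketch): wiring a service into an RPC
// server; the port below is illustrative only:
//
//   const server = RPCServer({ logger });
//   server.registerServices(service);
//   server.listen(9000);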
function sendHTTPError(res, err) {
res.writeHead(err.code || 500);
return res.end(`${JSON.stringify({ error: err.message,
message: err.description })}\n`);
}
/**
* convert an input object stream to a JSON array streamed in output
*
* @param {stream.Readable} rstream - object input stream to serialize
* as a JSON array
* @param {stream.Writable} wstream - bytes output stream to write the
* serialized array to
* @param {function} cb - callback when done writing data
* @return {undefined}
*/
function objectStreamToJSON(rstream, wstream, cb) {
wstream.write('[');
let begin = true;
const cbOnce = jsutil.once(cb);
let writeInProgress = false;
let readEnd = false;
rstream.on('data', item => {
if (begin) {
begin = false;
} else {
wstream.write(',');
}
rstream.pause();
writeInProgress = true;
streamRPCJSONObj(item, wstream, err => {
writeInProgress = false;
if (err) {
return cbOnce(err);
}
if (readEnd) {
wstream.write(']');
return cbOnce(null);
}
return rstream.resume();
});
});
rstream.on('end', () => {
readEnd = true;
if (!writeInProgress) {
wstream.write(']');
cbOnce(null);
}
});
rstream.on('error', err => {
cbOnce(err);
});
}
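// Example: an object stream emitting { a: 1 } then { b: 2 } is
// serialized to the output stream as: [{"a":1},{"b":2}]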
/**
* stream the result as returned by the RPC call to a connected client
*
* It's similar to sending the raw contents of JSON.stringify() to the
* client, except that any embedded object with a pipe() method is
* treated as an object stream and will be sent as a JSON array of
* objects.
*
* Keep in mind that this function is only meant to be used in debug
* tools; it would require strengthening to be used in production
* mode.
*
* @param {Object} obj - js object to stream JSON-serialized
* @param {stream.Writable} wstream - output stream
* @param {function} cb - callback when all JSON data has been output
* or if there was an error
* @return {undefined}
*/
streamRPCJSONObj = function _streamRPCJSONObj(obj, wstream, cb) {
const cbOnce = jsutil.once(cb);
if (typeof obj === 'object') {
if (obj && obj.pipe !== undefined) {
// stream object streams as JSON arrays
return objectStreamToJSON(obj, wstream, cbOnce);
}
if (Array.isArray(obj)) {
let first = true;
wstream.write('[');
return async.eachSeries(obj, (child, done) => {
if (first) {
first = false;
} else {
wstream.write(',');
}
streamRPCJSONObj(child, wstream, done);
},
err => {
if (err) {
return cbOnce(err);
}
wstream.write(']');
return cbOnce(null);
});
}
if (obj) {
let first = true;
wstream.write('{');
return async.eachSeries(Object.keys(obj), (k, done) => {
if (obj[k] === undefined) {
return done();
}
if (first) {
first = false;
} else {
wstream.write(',');
}
wstream.write(`${JSON.stringify(k)}:`);
return streamRPCJSONObj(obj[k], wstream, done);
},
err => {
if (err) {
return cbOnce(err);
}
wstream.write('}');
return cbOnce(null);
});
}
}
// primitive types
if (obj === undefined) {
wstream.write('null'); // undefined elements (e.g. in arrays)
// are converted to JSON null values
} else {
wstream.write(JSON.stringify(obj));
}
return setImmediate(() => cbOnce(null));
};
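// Example: streamRPCJSONObj({ n: 42, items: objStream }, res, cb)
// writes {"n":42,"items":[...]} to "res", filling the "items" JSON
// array from the piped object stream as it is consumed.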
/**
* @brief create a server object that serves RPC requests through POST
* HTTP requests. This is intended to support functional testing; the
* RPCServer class is meant to be used for real traffic.
*
* Services associated with namespaces (i.e. URL base paths) must be
* registered afterwards on this server.
*
* @param {Object} params - params object
* @param {Object} params.logger - logger object
* @return {Object} a HTTP server object, not yet listening on a TCP
* port (you must call listen(port) on the returned object)
*/
function RESTServer(params) {
assert(params);
assert(params.logger);
const httpServer = http.createServer((req, res) => {
if (req.method !== 'POST') {
return sendHTTPError(
res, errors.MethodNotAllowed.customizeDescription(
'only POST requests are supported for RPC calls'));
}
const matchingService = httpServer.serviceList.find(
service => req.url === service.namespace);
if (!matchingService) {
return sendHTTPError(
res, errors.InvalidArgument.customizeDescription(
`unknown service in URL ${req.url}`));
}
const reqBody = [];
req.on('data', data => {
reqBody.push(data);
});
return req.on('end', () => {
if (reqBody.length === 0) {
return sendHTTPError(res, errors.MissingRequestBodyError);
}
try {
// concatenate body chunks before parsing: coercing the
// array itself to a string would join chunks with commas
const jsonReq = JSON.parse(Buffer.concat(reqBody).toString());
if (!jsonReq.call) {
throw errors.InvalidArgument.customizeDescription(
'missing "call" JSON attribute');
}
const args = jsonReq.args || {};
matchingService._onCall(jsonReq.call, args, (err, data) => {
if (err) {
return sendHTTPError(res, err);
}
res.writeHead(200);
if (data === undefined) {
return res.end();
}
res.write('{"result":');
return streamRPCJSONObj(data, res, err => {
if (err) {
return res.end(JSON.stringify(err));
}
return res.end('}\n');
});
});
return undefined;
} catch (err) {
return sendHTTPError(res, err);
}
});
});
httpServer.serviceList = [];
/**
* register a list of service objects on this server
*
* It's not necessary to call this function if you provided a
* "server" parameter to the service constructor.
*
* @param {...BaseService} serviceList - list of services to register
* @return {undefined}
*/
httpServer.registerServices = function registerServices(...serviceList) {
this.serviceList.push(...serviceList);
};
return httpServer;
}
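// Example (hypothetical sketch): querying a RESTServer with curl;
// the namespace, call name and port are illustrative only:
//
//   curl -X POST http://localhost:9000/some-namespace \
//       -d '{"call":"getObject","args":{"rpcArgs":["someKey"]}}'
//
// On success the response body has the form: {"result":...}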
module.exports = {
BaseClient,
BaseService,
RPCServer,
RESTServer,
};

View File

@@ -0,0 +1,442 @@
'use strict'; // eslint-disable-line
const uuid = require('uuid');
const stream = require('stream');
const debug = require('debug')('sio-stream');
const assert = require('assert');
const async = require('async');
const flattenError = require('./utils').flattenError;
const reconstructError = require('./utils').reconstructError;
const DEFAULT_MAX_PENDING_ACK = 4;
const DEFAULT_ACK_TIMEOUT_MS = 5000;
class SIOOutputStream extends stream.Writable {
constructor(socket, streamId, maxPendingAck, ackTimeoutMs) {
super({ objectMode: true });
this._initOutputStream(socket, streamId, maxPendingAck,
ackTimeoutMs);
}
_initOutputStream(socket, streamId, maxPendingAck, ackTimeoutMs) {
this.socket = socket;
this.streamId = streamId;
this.on('finish', () => {
this.socket._finish(this.streamId, err => {
// no-op on client ack; we may add handling here later
debug('ack finish', this.streamId, 'err', err);
});
});
this.on('error', err => {
debug('output stream error', this.streamId);
// notify remote of the error
this.socket._error(this.streamId, err);
});
// Used for queuing flow control: don't issue more than
// maxPendingAck write events that have not been acked yet
this.maxPendingAck = maxPendingAck;
this.ackTimeoutMs = ackTimeoutMs;
this.nPendingAck = 0;
}
_write(chunk, encoding, callback) {
return this._writev([{ chunk }], callback);
}
_writev(chunks, callback) {
const payload = chunks.map(chunk => chunk.chunk);
debug(`_writev(${JSON.stringify(payload)}, ...)`);
this.nPendingAck += 1;
const timeoutInfo =
`stream timeout: did not receive ack after ${this.ackTimeoutMs}ms`;
async.timeout(cb => {
this.socket._write(this.streamId, payload, cb);
}, this.ackTimeoutMs, timeoutInfo)(
err => {
debug(`ack stream-data ${this.streamId}
(${JSON.stringify(payload)}):`, err);
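// if the ack window was full when this packet was issued,
// the write callback was withheld (see below): release it
// now that an ack came back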
if (this.nPendingAck === this.maxPendingAck) {
callback();
}
this.nPendingAck -= 1;
if (err) {
// notify remote of the error (timeout notably)
debug('stream error:', err);
this.socket._error(this.streamId, err);
// stop the producer
this.socket.destroyStream(this.streamId);
}
});
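// ack window not full yet: accept more writes right away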
if (this.nPendingAck < this.maxPendingAck) {
callback();
}
}
}
class SIOInputStream extends stream.Readable {
constructor(socket, streamId) {
super({ objectMode: true });
this.socket = socket;
this.streamId = streamId;
this._readState = {
pushBuffer: [],
readable: false,
};
}
destroy() {
debug('destroy called', this.streamId);
this._destroyed = true;
this.pause();
this.removeAllListeners('data');
this.removeAllListeners('end');
this._readState = {
pushBuffer: [],
readable: false,
};
// do this in case the client piped this stream to other ones
this.unpipe();
// emit 'stream-hangup' event to notify the remote producer
// that we're not interested in further results
this.socket._hangup(this.streamId);
this.emit('close');
}
_pushData() {
debug('pushData _readState:', this._readState);
if (this._destroyed) {
return;
}
while (this._readState.pushBuffer.length > 0) {
const item = this._readState.pushBuffer.shift();
debug('pushing item', item);
if (!this.push(item)) {
this._readState.readable = false;
break;
}
}
}
_read(size) {
debug(`_read(${size})`);
this._readState.readable = true;
this._pushData();
}
_ondata(data) {
debug('_ondata', this.streamId, data);
if (this._destroyed) {
return;
}
this._readState.pushBuffer.push(...data);
if (this._readState.readable) {
this._pushData();
}
}
_onend() {
debug('_onend', this.streamId);
this._readState.pushBuffer.push(null);
if (this._readState.readable) {
this._pushData();
}
this.emit('close');
}
_onerror(receivedErr) {
debug('_onerror', this.streamId, 'error', receivedErr);
const err = reconstructError(receivedErr);
err.remote = true;
this.emit('error', err);
}
}
/**
* @class
* @classdesc manage a set of user streams over a socket.io connection
*/
class SIOStreamSocket {
constructor(socket, logger, maxPendingAck, ackTimeoutMs) {
assert(socket);
assert(logger);
/** @member {Object} socket.io connection */
this.socket = socket;
/** @member {Object} logger object */
this.logger = logger;
/** @member {Number} max number of in-flight output stream
* packets sent to the client without an ack received yet */
this.maxPendingAck = maxPendingAck;
/** @member {Number} timeout for receiving an ack after an
* output stream packet is sent to the client */
this.ackTimeoutMs = ackTimeoutMs;
/** @member {Object} map of stream proxies initiated by the
* remote side */
this.remoteStreams = {};
/** @member {Object} map of stream-like objects initiated
* locally and connected to the remote side */
this.localStreams = {};
const log = logger;
// stream data message, contains an array of one or more data objects
this.socket.on('stream-data', (payload, cb) => {
const { streamId, data } = payload;
log.debug('received \'stream-data\' event',
{ streamId, size: data.length });
const stream = this.remoteStreams[streamId];
if (!stream) {
log.debug('no such remote stream registered', { streamId });
return;
}
stream._ondata(data);
cb(null);
});
// signals normal end of stream to the consumer
this.socket.on('stream-end', (payload, cb) => {
const { streamId } = payload;
log.debug('received \'stream-end\' event', { streamId });
const stream = this.remoteStreams[streamId];
if (!stream) {
log.debug('no such remote stream registered', { streamId });
return;
}
stream._onend();
cb(null);
});
// error message sent by the stream producer to the consumer
this.socket.on('stream-error', payload => {
const { streamId, error } = payload;
log.debug('received \'stream-error\' event', { streamId, error });
const stream = this.remoteStreams[streamId];
if (!stream) {
log.debug('no such remote stream registered', { streamId });
return;
}
stream._onerror(error);
});
// hangup message sent by the stream consumer to the producer
this.socket.on('stream-hangup', payload => {
const { streamId } = payload;
log.debug('received \'stream-hangup\' event', { streamId });
const stream = this.localStreams[streamId];
if (!stream) {
log.debug('no such local stream registered' +
'(may have already reached the end)', { streamId });
return;
}
this.destroyStream(streamId);
});
}
/**
* @brief encode all stream-like objects found inside a user
* object into a serialized form that can be transmitted through a
* socket.io connection, then decoded back to a stream proxy
* object by the other end with decodeStreams()
*
* @param {Object} arg any flat object or value that may be or
* contain stream-like objects
* @return {Object} an object of the same nature as <tt>arg</tt> with
* streams encoded for transmission to the remote side
*/
encodeStreams(arg) {
if (!arg) {
return arg;
}
const log = this.logger;
const isReadStream = (typeof arg.pipe === 'function'
&& typeof (arg.read) === 'function');
let isWriteStream = (typeof arg.write === 'function');
if (isReadStream || isWriteStream) {
if (isReadStream && isWriteStream) {
// For now, consider that duplex streams are input
// streams for the purpose of supporting Transform
// streams in server -> client direction. If the need
// arises, we can implement full duplex streams later.
isWriteStream = false;
}
const streamId = uuid();
const encodedStream = {
$streamId: streamId,
readable: isReadStream,
writable: isWriteStream,
};
let transportStream;
if (isReadStream) {
transportStream = new SIOOutputStream(this, streamId,
this.maxPendingAck,
this.ackTimeoutMs);
} else {
transportStream = new SIOInputStream(this, streamId);
}
this.localStreams[streamId] = arg;
arg.once('close', () => {
log.debug('stream closed, removing from local streams',
{ streamId });
delete this.localStreams[streamId];
});
arg.on('error', error => {
log.error('stream error', { streamId, error });
});
if (isReadStream) {
arg.pipe(transportStream);
}
if (isWriteStream) {
transportStream.pipe(arg);
}
return encodedStream;
}
if (typeof arg === 'object') {
let encodedObj;
if (Array.isArray(arg)) {
encodedObj = [];
for (let k = 0; k < arg.length; ++k) {
encodedObj.push(this.encodeStreams(arg[k]));
}
} else {
encodedObj = {};
// user objects are simple flat objects and we want to
// copy all their properties
// eslint-disable-next-line
for (const k in arg) {
encodedObj[k] = this.encodeStreams(arg[k]);
}
}
return encodedObj;
}
return arg;
}
/**
* @brief decode all encoded stream markers (produced by
* encodeStreams()) found inside the object received from the
* remote side, turn them into actual readable/writable stream
* proxies that are forwarding data from/to the remote side stream
*
* @param {Object} arg the object as received from the remote side
* @return {Object} an object of the same nature as <tt>arg</tt> with
* stream markers decoded into actual readable/writable stream
* objects
*/
decodeStreams(arg) {
if (!arg) {
return arg;
}
const log = this.logger;
if (arg.$streamId !== undefined) {
if (arg.readable && arg.writable) {
throw new Error('duplex streams not supported');
}
const streamId = arg.$streamId;
let stream;
if (arg.readable) {
stream = new SIOInputStream(this, streamId);
} else if (arg.writable) {
stream = new SIOOutputStream(this, streamId,
this.maxPendingAck,
this.ackTimeoutMs);
} else {
throw new Error('can\'t decode stream that is neither ' +
'readable nor writable');
}
this.remoteStreams[streamId] = stream;
if (arg.readable) {
stream.once('close', () => {
log.debug('stream closed, removing from remote streams',
{ streamId });
delete this.remoteStreams[streamId];
});
}
if (arg.writable) {
stream.once('finish', () => {
log.debug('stream finished, removing from remote streams',
{ streamId });
delete this.remoteStreams[streamId];
});
}
stream.on('error', error => {
log.error('stream error', { streamId, error });
});
return stream;
}
if (typeof arg === 'object') {
let decodedObj;
if (Array.isArray(arg)) {
decodedObj = [];
for (let k = 0; k < arg.length; ++k) {
decodedObj.push(this.decodeStreams(arg[k]));
}
} else {
decodedObj = {};
// user objects are simple flat objects and we want to
// copy all their properties
// eslint-disable-next-line
for (const k in arg) {
decodedObj[k] = this.decodeStreams(arg[k]);
}
}
return decodedObj;
}
return arg;
}
_write(streamId, data, cb) {
this.logger.debug('emit \'stream-data\' event',
{ streamId, size: data.length });
this.socket.emit('stream-data', { streamId, data }, cb);
}
_finish(streamId, cb) {
this.logger.debug('emit \'stream-end\' event', { streamId });
this.socket.emit('stream-end', { streamId }, cb);
}
_error(streamId, error) {
this.logger.debug('emit \'stream-error\' event', { streamId, error });
this.socket.emit('stream-error', { streamId,
error: flattenError(error) });
}
_hangup(streamId) {
this.logger.debug('emit \'stream-hangup\' event', { streamId });
this.socket.emit('stream-hangup', { streamId });
}
destroyStream(streamId) {
this.logger.debug('destroyStream', { streamId });
if (!this.localStreams[streamId]) {
return;
}
if (this.localStreams[streamId].destroy) {
// a 'close' event shall be emitted by destroy()
this.localStreams[streamId].destroy();
}
// if no destroy function exists in the input stream, let it
// go through the end
}
}
module.exports.createSocket = function createSocket(
socket,
logger,
maxPendingAck = DEFAULT_MAX_PENDING_ACK,
ackTimeoutMs = DEFAULT_ACK_TIMEOUT_MS) {
return new SIOStreamSocket(socket, logger, maxPendingAck, ackTimeoutMs);
};
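// Example (hypothetical sketch): sending a readable stream over a
// socket.io connection; "conn", "logger" and the event name are
// illustrative only:
//
//   const sock = module.exports.createSocket(conn, logger);
//   const payload = sock.encodeStreams({ data: someReadableStream });
//   conn.emit('rpc-result', payload);
//   // the peer calls sock.decodeStreams(payload) and gets a readable
//   // stream proxy in place of the $streamId marker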

lib/network/rpc/utils.js
View File

@@ -0,0 +1,48 @@
'use strict'; // eslint-disable-line
/**
* @brief turn all <tt>err</tt> own and prototype attributes into own attributes
*
* This is done so that JSON.stringify() can properly serialize those
* attributes (e.g. err.notFound)
*
* @param {Error} err error object
* @return {Object} flattened object containing <tt>err</tt> attributes
*/
module.exports.flattenError = function flattenError(err) {
if (!err) {
return err;
}
const flattenedErr = {};
flattenedErr.message = err.message;
for (const k in err) {
if (!(k in flattenedErr)) {
flattenedErr[k] = err[k];
}
}
return flattenedErr;
};
/**
* @brief recreate a proper Error object from its flattened
* representation created with flattenError().
*
* @note Its internals may differ from the original Error object but
* its attributes should be the same.
*
* @param {Object} err flattened error object
* @return {Error} a reconstructed Error object inheriting <tt>err</tt>
* attributes
*/
module.exports.reconstructError = function reconstructError(err) {
if (!err) {
return err;
}
const reconstructedErr = new Error(err.message);
Object.keys(err).forEach(k => {
reconstructedErr[k] = err[k];
});
return reconstructedErr;
};
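// Example (sketch): round-tripping an error through JSON:
//
//   const err = new Error('not found');
//   err.notFound = true;
//   const wire = JSON.stringify(flattenError(err));
//   const back = reconstructError(JSON.parse(wire));
//   // back.message === 'not found' && back.notFound === true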

View File

@@ -5,6 +5,7 @@ const userPolicySchema = require('./userPolicySchema');
const errors = require('../errors');
const ajValidate = new Ajv({ allErrors: true });
ajValidate.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json'));
// compiles schema to functions and caches them for all cases
const userPolicyValidate = ajValidate.compile(userPolicySchema);

View File

@@ -1,9 +1,125 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"$schema": "http://json-schema.org/draft-06/schema#",
"type": "object",
"title": "AWS Policy schema.",
"description": "This schema describes a user policy per AWS policy grammar rules",
"definitions": {
"principalService": {
"type": "object",
"properties": {
"Service": {
"type": "string",
"const": "backbeat"
}
},
"additionalProperties": false
},
"principalAnonymous": {
"type": "string",
"pattern": "^\\*$"
},
"principalAWSAccountID": {
"type": "string",
"pattern": "^[0-9]{12}$"
},
"principalAWSAccountArn": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:root$"
},
"principalAWSUserArn": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:user/[\\w+=,.@ -]{1,64}$"
},
"principalAWSRoleArn": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:role/[\\w+=,.@ -]{1,64}$"
},
"principalFederatedSamlIdp": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:saml-provider/[\\w._-]{1,128}$"
},
"principalAWSItem": {
"type": "object",
"properties": {
"AWS": {
"oneOf": [
{
"$ref": "#/definitions/principalAWSAccountID"
},
{
"$ref": "#/definitions/principalAnonymous"
},
{
"$ref": "#/definitions/principalAWSAccountArn"
},
{
"$ref": "#/definitions/principalAWSUserArn"
},
{
"$ref": "#/definitions/principalAWSRoleArn"
},
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSAccountID"
}
},
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSAccountArn"
}
},
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSRoleArn"
}
},
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSUserArn"
}
}
]
}
},
"additionalProperties": false
},
"principalFederatedItem": {
"type": "object",
"properties": {
"Federated": {
"oneOf": [
{
"$ref": "#/definitions/principalFederatedSamlIdp"
}
]
}
},
"additionalProperties": false
},
"principalItem": {
"oneOf": [
{
"$ref": "#/definitions/principalAWSItem"
},
{
"$ref": "#/definitions/principalAnonymous"
},
{
"$ref": "#/definitions/principalFederatedItem"
},
{
"$ref": "#/definitions/principalService"
}
]
},
"actionItem": {
"type": "string",
"pattern": "^[^*:]+:([^:])+|^\\*{1}$"
@@ -187,9 +303,7 @@
"properties": {
"Version": {
"type": "string",
"enum": [
"2012-10-17"
]
"const": "2012-10-17"
},
"Statement": {
"oneOf": [
@@ -212,6 +326,12 @@
"Deny"
]
},
"Principal": {
"$ref": "#/definitions/principalItem"
},
"NotPrincipal": {
"$ref": "#/definitions/principalItem"
},
"Action": {
"oneOf": [
{
@@ -277,24 +397,41 @@
"Action",
"Resource"
]
}, {
},
{
"required": [
"Effect",
"Action",
"NotResource"
]
}, {
},
{
"required": [
"Effect",
"NotAction",
"Resource"
]
}, {
},
{
"required": [
"Effect",
"NotAction",
"NotResource"
]
},
{
"required": [
"Effect",
"Action",
"Principal"
]
},
{
"required": [
"Effect",
"Action",
"NotPrincipal"
]
}
]
}
@@ -315,6 +452,9 @@
"Deny"
]
},
"Principal": {
"$ref": "#/definitions/principalItem"
},
"Action": {
"oneOf": [
{
@@ -380,24 +520,41 @@
"Effect",
"Resource"
]
}, {
},
{
"required": [
"Action",
"Effect",
"NotResource"
]
}, {
},
{
"required": [
"Effect",
"NotAction",
"Resource"
]
}, {
},
{
"required": [
"Effect",
"NotAction",
"NotResource"
]
},
{
"required": [
"Effect",
"Action",
"Principal"
]
},
{
"required": [
"Effect",
"Action",
"NotPrincipal"
]
}
]
}
@@ -409,4 +566,4 @@
"Statement"
],
"additionalProperties": false
}
}

View File

@@ -7,77 +7,28 @@ const parseIp = require('../ipCheck').parseIp;
// For bucket head and object head:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/
// using-with-s3-actions.html
const _actionMap = {
bucketDelete: 's3:DeleteBucket',
bucketDeleteWebsite: 's3:DeleteBucketWebsite',
bucketGet: 's3:ListBucket',
bucketGetACL: 's3:GetBucketAcl',
bucketGetCors: 's3:GetBucketCORS',
bucketGetVersioning: 's3:GetBucketVersioning',
bucketGetWebsite: 's3:GetBucketWebsite',
bucketHead: 's3:ListBucket',
bucketPut: 's3:CreateBucket',
bucketPutACL: 's3:PutBucketAcl',
bucketPutCors: 's3:PutBucketCORS',
// for bucketDeleteCors need s3:PutBucketCORS permission
// see http://docs.aws.amazon.com/AmazonS3/latest/API/
// RESTBucketDELETEcors.html
bucketDeleteCors: 's3:PutBucketCORS',
bucketPutVersioning: 's3:PutBucketVersioning',
bucketPutWebsite: 's3:PutBucketWebsite',
completeMultipartUpload: 's3:PutObject',
initiateMultipartUpload: 's3:PutObject',
listMultipartUploads: 's3:ListBucketMultipartUploads',
listParts: 's3:ListMultipartUploadParts',
multipartDelete: 's3:AbortMultipartUpload',
objectDelete: 's3:DeleteObject',
objectGet: 's3:GetObject',
objectGetACL: 's3:GetObjectAcl',
objectHead: 's3:GetObject',
objectPut: 's3:PutObject',
objectPutACL: 's3:PutObjectAcl',
objectPutPart: 's3:PutObject',
serviceGet: 's3:ListAllMyBuckets',
};
const {
actionMapRQ,
actionMapIAM,
actionMapSSO,
actionMapSTS,
actionMapMetadata,
} = require('./utils/actionMaps');
const _actionMapIAM = {
attachGroupPolicy: 'iam:AttachGroupPolicy',
attachUserPolicy: 'iam:AttachUserPolicy',
createAccessKey: 'iam:CreateAccessKey',
createGroup: 'iam:CreateGroup',
createPolicy: 'iam:CreatePolicy',
createPolicyVersion: 'iam:CreatePolicyVersion',
createUser: 'iam:CreateUser',
deleteAccessKey: 'iam:DeleteAccessKey',
deleteGroup: 'iam:DeleteGroup',
deleteGroupPolicy: 'iam:DeleteGroupPolicy',
deletePolicy: 'iam:DeletePolicy',
deletePolicyVersion: 'iam:DeletePolicyVersion',
deleteUser: 'iam:DeleteUser',
detachGroupPolicy: 'iam:DetachGroupPolicy',
detachUserPolicy: 'iam:DetachUserPolicy',
getGroup: 'iam:GetGroup',
getGroupPolicy: 'iam:GetGroupPolicy',
getPolicy: 'iam:GetPolicy',
getPolicyVersion: 'iam:GetPolicyVersion',
getUser: 'iam:GetUser',
listAccessKeys: 'iam:ListAccessKeys',
listGroupPolicies: 'iam:ListGroupPolicies',
listGroups: 'iam:ListGroups',
listGroupsForUser: 'iam:ListGroupsForUser',
listPolicies: 'iam:ListPolicies',
listPolicyVersions: 'iam:ListPolicyVersions',
listUsers: 'iam:ListUsers',
putGroupPolicy: 'iam:PutGroupPolicy',
removeUserFromGroup: 'iam:RemoveUserFromGroup',
const _actionNeedQuotaCheck = {
objectPut: true,
objectPutPart: true,
};
function _findAction(service, method) {
if (service === 's3') {
return _actionMap[method];
return actionMapRQ[method];
}
if (service === 'iam') {
return _actionMapIAM[method];
return actionMapIAM[method];
}
if (service === 'sso') {
return actionMapSSO[method];
}
if (service === 'ring') {
return `ring:${method}`;
@@ -86,11 +37,17 @@ function _findAction(service, method) {
// currently only method is ListMetrics
return `utapi:${method}`;
}
if (service === 'sts') {
return actionMapSTS[method];
}
if (service === 'metadata') {
return actionMapMetadata[method];
}
return undefined;
}
function _buildArn(service, generalResource, specificResource, requesterInfo) {
// arn:partition:service:region:account-id:resourcetype/resource
if (service === 's3') {
// arn:aws:s3:::bucket/object
// General resource is bucketName
@@ -101,16 +58,20 @@ function _buildArn(service, generalResource, specificResource, requesterInfo) {
}
return 'arn:aws:s3:::';
}
if (service === 'iam') {
// arn:aws:iam::<account-id>:<resource-type><resource>
if (service === 'iam' || service === 'sts') {
// arn:aws:iam::<account-id>:<resource-type>/<resource>
let accountId = requesterInfo.accountid;
if (service === 'sts') {
accountId = requesterInfo.targetAccountId;
}
if (specificResource) {
return `arn:aws:iam::${requesterInfo.accountid}:` +
return `arn:aws:iam::${accountId}:` +
`${generalResource}${specificResource}`;
}
return `arn:aws:iam::${requesterInfo.accountid}:${generalResource}`;
return `arn:aws:iam::${accountId}:${generalResource}`;
}
if (service === 'ring') {
// arn:aws:iam::<account-id>:<resource-type><resource>
// arn:aws:iam::<account-id>:<resource-type>/<resource>
if (specificResource) {
return `arn:aws:ring::${requesterInfo.accountid}:` +
`${generalResource}/${specificResource}`;
@@ -121,9 +82,26 @@ function _buildArn(service, generalResource, specificResource, requesterInfo) {
// arn:scality:utapi:::resourcetype/resource
// (possible resource types are buckets, accounts or users)
if (specificResource) {
return `arn:scality:utapi:::${generalResource}/${specificResource}`;
return `arn:scality:utapi::${requesterInfo.accountid}:` +
`${generalResource}/${specificResource}`;
}
return `arn:scality:utapi:::${generalResource}/`;
return `arn:scality:utapi::${requesterInfo.accountid}:` +
`${generalResource}/`;
}
if (service === 'sso') {
if (specificResource) {
return `arn:scality:sso:::${generalResource}/${specificResource}`;
}
return `arn:scality:sso:::${generalResource}`;
}
if (service === 'metadata') {
// arn:scality:metadata::<account-id>:<resource-type>/<resource>
if (specificResource) {
return `arn:scality:metadata::${requesterInfo.accountid}:` +
`${generalResource}/${specificResource}`;
}
return `arn:scality:metadata::${requesterInfo.accountid}:` +
`${generalResource}/`;
}
return undefined;
}
@@ -146,6 +124,8 @@ function _buildArn(service, generalResource, specificResource, requesterInfo) {
* @param {string} signatureVersion - auth signature type used
* @param {string} authType - type of authentication used
* @param {number} signatureAge - age of signature in milliseconds
* @param {string} securityToken - auth security token (temporary credentials)
* @param {string} policyArn - policy arn
* @return {RequestContext} a RequestContext instance
*/
@@ -153,7 +133,8 @@ class RequestContext {
constructor(headers, query, generalResource, specificResource,
requesterIp, sslEnabled, apiMethod,
awsService, locationConstraint, requesterInfo,
signatureVersion, authType, signatureAge) {
signatureVersion, authType, signatureAge, securityToken, policyArn,
action) {
this._headers = headers;
this._query = query;
this._requesterIp = requesterIp;
@@ -178,7 +159,10 @@ class RequestContext {
this._signatureVersion = signatureVersion;
this._authType = authType;
this._signatureAge = signatureAge;
this._securityToken = securityToken;
this._policyArn = policyArn;
this._action = action;
this._needQuota = _actionNeedQuotaCheck[apiMethod] === true;
return this;
}
@@ -204,6 +188,9 @@ class RequestContext {
signatureAge: this._signatureAge,
locationConstraint: this._locationConstraint,
tokenIssueTime: this._tokenIssueTime,
securityToken: this._securityToken,
policyArn: this._policyArn,
action: this._action,
};
return JSON.stringify(requestInfo);
}
@@ -211,20 +198,25 @@ class RequestContext {
/**
* deSerialize the JSON string
* @param {string} stringRequest - the stringified requestContext
* @param {string} resource - individual specificResource
* @return {object} - parsed string
*/
static deSerialize(stringRequest) {
static deSerialize(stringRequest, resource) {
let obj;
try {
obj = JSON.parse(stringRequest);
} catch (err) {
return new Error(err);
}
if (resource) {
obj.specificResource = resource;
}
return new RequestContext(obj.headers, obj.query, obj.generalResource,
obj.specificResource, obj.requesterIp, obj.sslEnabled,
obj.apiMethod, obj.awsService, obj.locationConstraint,
obj.requesterInfo, obj.signatureVersion,
obj.authType, obj.signatureAge);
obj.authType, obj.signatureAge, obj.securityToken, obj.policyArn,
obj.action);
}
/**
@@ -232,6 +224,9 @@ class RequestContext {
* @return {string} action
*/
getAction() {
if (this._action) {
return this._action;
}
if (this._foundAction) {
return this._foundAction;
}
@@ -322,6 +317,26 @@ class RequestContext {
return parseIp(this._requesterIp);
}
getRequesterAccountId() {
return this._requesterInfo.accountid;
}
getRequesterEndArn() {
return this._requesterInfo.arn;
}
getRequesterExternalId() {
return this._requesterInfo.externalId;
}
getRequesterPrincipalArn() {
return this._requesterInfo.parentArn || this._requesterInfo.arn;
}
getRequesterType() {
return this._requesterInfo.principalType;
}
/**
* Set sslEnabled
* @param {boolean} sslEnabled - true if https used
@@ -495,6 +510,55 @@ class RequestContext {
getMultiFactorAuthAge() {
return this._multiFactorAuthAge;
}
/**
* Returns the authentication security token
*
* @return {string} security token
*/
getSecurityToken() {
return this._securityToken;
}
/**
* Set the authentication security token
*
* @param {string} token - Security token
* @return {RequestContext} itself
*/
setSecurityToken(token) {
this._securityToken = token;
return this;
}
/**
* Get the policy arn
*
* @return {string} policyArn - Policy arn
*/
getPolicyArn() {
return this._policyArn;
}
/**
* Set the policy arn
*
* @param {string} policyArn - Policy arn
* @return {RequestContext} itself
*/
setPolicyArn(policyArn) {
this._policyArn = policyArn;
return this;
}
/**
* Returns the quota check condition
*
* @returns {boolean} needQuota - check whether quota check is needed
*/
isQuotaCheckNeeded() {
return this._needQuota;
}
}
module.exports = RequestContext;

View File

@@ -38,7 +38,7 @@ function isResourceApplicable(requestContext, statementResource, log) {
// Pull just the relative id because there is no restriction that it
// does not contain ":"
const requestRelativeId = requestResourceArr.slice(5).join(':');
for (let i = 0; i < statementResource.length; i ++) {
for (let i = 0; i < statementResource.length; i++) {
// Handle variables (must handle BEFORE wildcards)
const policyResource =
substituteVariables(statementResource[i], requestContext);
@@ -73,7 +73,7 @@ function isActionApplicable(requestAction, statementAction, log) {
statementAction = [statementAction];
}
const length = statementAction.length;
for (let i = 0; i < length; i ++) {
for (let i = 0; i < length; i++) {
// No variables in actions so no need to handle
const regExStrOfStatementAction =
handleWildcards(statementAction[i]);
@@ -98,12 +98,12 @@ function isActionApplicable(requestAction, statementAction, log) {
* @param {Object} log - logger
* @return {boolean} true if meet conditions, false if not
*/
function meetConditions(requestContext, statementCondition, log) {
evaluators.meetConditions = (requestContext, statementCondition, log) => {
// The Condition portion of a policy is an object with different
// operators as keys
const operators = Object.keys(statementCondition);
const length = operators.length;
for (let i = 0; i < length; i ++) {
for (let i = 0; i < length; i++) {
const operator = operators[i];
const hasIfExistsCondition = operator.endsWith('IfExists');
// If has "IfExists" added to operator name, find operator name
@@ -119,8 +119,7 @@ function meetConditions(requestContext, statementCondition, log) {
const conditionsWithSameOperator = statementCondition[operator];
const conditionKeys = Object.keys(conditionsWithSameOperator);
const conditionKeysLength = conditionKeys.length;
for (let j = 0; j < conditionKeysLength;
j ++) {
for (let j = 0; j < conditionKeysLength; j++) {
const key = conditionKeys[j];
let value = conditionsWithSameOperator[key];
if (!Array.isArray(value)) {
@@ -165,13 +164,13 @@ function meetConditions(requestContext, statementCondition, log) {
// are the only operators where wildcards are allowed
if (!operatorFunction(keyBasedOnRequestContext, value)) {
log.trace('did not satisfy condition', { operator: bareOperator,
keyBasedOnRequestContext, policyValue: value });
keyBasedOnRequestContext, policyValue: value });
return false;
}
}
}
return true;
}
};
/**
* Evaluate whether a request is permitted under a policy.
@@ -222,7 +221,8 @@ evaluators.evaluatePolicy = (requestContext, policy, log) => {
continue;
}
// If do not meet conditions move on to next statement
if (currentStatement.Condition && !meetConditions(requestContext,
if (currentStatement.Condition &&
!evaluators.meetConditions(requestContext,
currentStatement.Condition, log)) {
continue;
}

View File

@@ -0,0 +1,176 @@
const { meetConditions } = require('./evaluator');
/**
* Class with methods to manage the policy 'principal' validation
*/
class Principal {
/**
* Function to evaluate conditions if needed
*
* @param {object} params - Evaluation parameters
* @param {object} statement - Statement policy field
* @return {boolean} true if the statement's conditions are met
*/
static _evaluateCondition(params, statement) {
if (statement.Condition) {
return meetConditions(params.rc, statement.Condition, params.log);
}
return true;
}
/**
* Checks principal field against valid principals array
*
* @param {object} params - Evaluation parameters
* @param {object} statement - Statement policy field
* @param {object} valids - Valid principal fields
* @return {string} result of principal evaluation, either 'Neutral',
* 'Allow' or 'Deny'
*/
static _evaluatePrincipalField(params, statement, valids) {
const reverse = !!statement.NotPrincipal;
const principal = statement.Principal || statement.NotPrincipal;
if (typeof principal === 'string' && principal === '*') {
if (reverse) {
// An anonymous NotPrincipal neutralizes everyone
return 'Neutral';
}
if (!Principal._evaluateCondition(params, statement)) {
return 'Neutral';
}
return statement.Effect;
} else if (typeof principal === 'string') {
return 'Deny';
}
let ref = [];
let toCheck = [];
if (valids.Federated && principal.Federated) {
ref = valids.Federated;
toCheck = principal.Federated;
} else if (valids.AWS && principal.AWS) {
ref = valids.AWS;
toCheck = principal.AWS;
} else if (valids.Service && principal.Service) {
ref = valids.Service;
toCheck = principal.Service;
} else {
if (reverse) {
return statement.Effect;
}
return 'Neutral';
}
toCheck = Array.isArray(toCheck) ? toCheck : [toCheck];
ref = Array.isArray(ref) ? ref : [ref];
if (toCheck.indexOf('*') !== -1) {
if (reverse) {
return 'Neutral';
}
if (!Principal._evaluateCondition(params, statement)) {
return 'Neutral';
}
return statement.Effect;
}
const len = ref.length;
for (let i = 0; i < len; ++i) {
if (toCheck.indexOf(ref[i]) !== -1) {
if (reverse) {
return 'Neutral';
}
if (!Principal._evaluateCondition(params, statement)) {
return 'Neutral';
}
return statement.Effect;
}
}
if (reverse) {
return statement.Effect;
}
return 'Neutral';
}
/**
* Function to evaluate principal of statements against a valid principal
* array
*
* @param {object} params - Evaluation parameters
* @param {object} valids - Valid principal fields
* @return {string} result of principal evaluation, either 'Allow' or 'Deny'
*/
static _evaluatePrincipal(params, valids) {
const doc = params.trustedPolicy;
let statements = doc.Statement;
if (!Array.isArray(statements)) {
statements = [statements];
}
const len = statements.length;
let authorized = 'Deny';
for (let i = 0; i < len; ++i) {
const statement = statements[i];
const result = Principal._evaluatePrincipalField(params,
statement, valids);
if (result === 'Deny') {
return 'Deny';
} else if (result === 'Allow') {
authorized = 'Allow';
}
}
return authorized;
}
/**
* Function to evaluate principal for a policy
*
* @param {object} params - Evaluation parameters
* @return {object} {
* result: 'Allow' or 'Deny',
* checkAction: true or false,
* }
*/
static evaluatePrincipal(params) {
let valids = null;
let checkAction = false;
const account = params.rc.getRequesterAccountId();
const targetAccount = params.targetAccountId;
const accountArn = `arn:aws:iam::${account}:root`;
const requesterArn = params.rc.getRequesterPrincipalArn();
const requesterEndArn = params.rc.getRequesterEndArn();
const requesterType = params.rc.getRequesterType();
if (account !== targetAccount) {
valids = {
AWS: [
account,
accountArn,
],
};
checkAction = true;
} else {
if (requesterType === 'User' || requesterType === 'AssumedRole' ||
requesterType === 'Federated') {
valids = {
AWS: [
account,
accountArn,
],
};
if (requesterType === 'User' ||
requesterType === 'AssumedRole') {
valids.AWS.push(requesterArn);
if (requesterEndArn !== requesterArn) {
valids.AWS.push(requesterEndArn);
}
} else {
valids.Federated = [requesterArn];
}
} else if (requesterType === 'Service') {
valids = { Service: requesterArn };
}
}
const result = Principal._evaluatePrincipal(params, valids);
return {
result,
checkAction,
};
}
}
module.exports = Principal;
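// Example (sketch): a statement whose Principal lists the requester's
// account root ARN evaluates to its Effect (no Condition involved):
//
//   Principal._evaluatePrincipalField({ rc, log }, {
//       Effect: 'Allow',
//       Principal: { AWS: 'arn:aws:iam::123456789012:root' },
//   }, { AWS: ['123456789012', 'arn:aws:iam::123456789012:root'] });
//   // => 'Allow'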

View File

@@ -0,0 +1,32 @@
const ipCheck = require('../ipCheck');
/**
* getClientIp - Gets the client IP from the request
* @param {object} request - http request object
* @param {object} s3config - s3 config
* @return {string} - returns client IP from the request
*/
function getClientIp(request, s3config) {
const requestConfig = s3config ? s3config.requests : {};
const remoteAddress = request.socket.remoteAddress;
const clientIp = requestConfig ? remoteAddress : request.headers['x-forwarded-for'] || remoteAddress;
if (requestConfig) {
const { trustedProxyCIDRs, extractClientIPFromHeader } = requestConfig;
/**
* if requests are configured to come via proxy,
* check the config for which proxies are trusted and
* which header to use to extract the client IP
*/
if (ipCheck.ipMatchCidrList(trustedProxyCIDRs, clientIp)) {
const ipFromHeader = request.headers[extractClientIPFromHeader];
if (ipFromHeader && ipFromHeader.trim().length) {
return ipFromHeader.split(',')[0].trim();
}
}
}
return clientIp;
}
module.exports = {
getClientIp,
};
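// Example (sketch): with s3config.requests set to
//   { trustedProxyCIDRs: ['192.168.100.0/22'],
//     extractClientIPFromHeader: 'x-forwarded-for' },
// a request proxied through 192.168.100.1 and carrying
// "x-forwarded-for: 10.0.0.1, 192.168.100.1" resolves to "10.0.0.1".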

View File

@@ -0,0 +1,172 @@
const sharedActionMap = {
bucketDelete: 's3:DeleteBucket',
bucketDeleteWebsite: 's3:DeleteBucketWebsite',
bucketGet: 's3:ListBucket',
bucketGetACL: 's3:GetBucketAcl',
bucketGetCors: 's3:GetBucketCORS',
bucketGetLifecycle: 's3:GetLifecycleConfiguration',
bucketGetLocation: 's3:GetBucketLocation',
bucketGetReplication: 's3:GetReplicationConfiguration',
bucketGetVersioning: 's3:GetBucketVersioning',
bucketGetWebsite: 's3:GetBucketWebsite',
bucketHead: 's3:ListBucket',
bucketPutACL: 's3:PutBucketAcl',
bucketPutCors: 's3:PutBucketCORS',
bucketPutLifecycle: 's3:PutLifecycleConfiguration',
bucketPutReplication: 's3:PutReplicationConfiguration',
bucketPutVersioning: 's3:PutBucketVersioning',
bucketPutWebsite: 's3:PutBucketWebsite',
listMultipartUploads: 's3:ListBucketMultipartUploads',
listParts: 's3:ListMultipartUploadParts',
multipartDelete: 's3:AbortMultipartUpload',
objectDelete: 's3:DeleteObject',
objectDeleteTagging: 's3:DeleteObjectTagging',
objectGet: 's3:GetObject',
objectGetACL: 's3:GetObjectAcl',
objectGetTagging: 's3:GetObjectTagging',
objectPut: 's3:PutObject',
objectPutACL: 's3:PutObjectAcl',
objectPutTagging: 's3:PutObjectTagging',
};
// action map used for request context
const actionMapRQ = Object.assign({
bucketPut: 's3:CreateBucket',
// for bucketDeleteCors need s3:PutBucketCORS permission
// see http://docs.aws.amazon.com/AmazonS3/latest/API/
// RESTBucketDELETEcors.html
bucketDeleteCors: 's3:PutBucketCORS',
bucketDeleteReplication: 's3:DeleteReplicationConfiguration',
bucketDeleteLifecycle: 's3:DeleteLifecycleConfiguration',
completeMultipartUpload: 's3:PutObject',
initiateMultipartUpload: 's3:PutObject',
objectDeleteVersion: 's3:DeleteObjectVersion',
objectDeleteTaggingVersion: 's3:DeleteObjectVersionTagging',
objectGetVersion: 's3:GetObjectVersion',
objectGetACLVersion: 's3:GetObjectVersionAcl',
objectGetTaggingVersion: 's3:GetObjectVersionTagging',
objectHead: 's3:GetObject',
objectPutACLVersion: 's3:PutObjectVersionAcl',
objectPutPart: 's3:PutObject',
objectPutTaggingVersion: 's3:PutObjectVersionTagging',
serviceGet: 's3:ListAllMyBuckets',
objectReplicate: 's3:ReplicateObject',
}, sharedActionMap);
// action map used for bucket policies
const actionMapBP = Object.assign({
bucketDeletePolicy: 's3:DeleteBucketPolicy',
bucketGetObjectLock: 's3:GetBucketObjectLockConfiguration',
bucketGetPolicy: 's3:GetBucketPolicy',
bucketPutObjectLock: 's3:PutBucketObjectLockConfiguration',
bucketPutPolicy: 's3:PutBucketPolicy',
objectGetLegalHold: 's3:GetObjectLegalHold',
objectGetRetention: 's3:GetObjectRetention',
objectPutLegalHold: 's3:PutObjectLegalHold',
objectPutRetention: 's3:PutObjectRetention',
}, sharedActionMap);
// action map for all relevant s3 actions
const actionMapS3 = Object.assign({
bucketGetNotification: 's3:GetBucketNotification',
bucketPutNotification: 's3:PutBucketNotification',
}, sharedActionMap, actionMapRQ, actionMapBP);
const actionMonitoringMapS3 = {
bucketDelete: 'DeleteBucket',
bucketDeleteCors: 'DeleteBucketCors',
bucketDeleteLifecycle: 'DeleteBucketLifecycle',
bucketDeleteReplication: 'DeleteBucketReplication',
bucketDeleteWebsite: 'DeleteBucketWebsite',
bucketGet: 'ListObjects',
bucketGetACL: 'GetBucketAcl',
bucketGetCors: 'GetBucketCors',
bucketGetLifecycle: 'GetBucketLifecycleConfiguration',
bucketGetLocation: 'GetBucketLocation',
bucketGetReplication: 'GetBucketReplication',
bucketGetVersioning: 'GetBucketVersioning',
bucketGetWebsite: 'GetBucketWebsite',
bucketHead: 'HeadBucket',
bucketPut: 'CreateBucket',
bucketPutACL: 'PutBucketAcl',
bucketPutCors: 'PutBucketCors',
bucketPutLifecycle: 'PutBucketLifecycleConfiguration',
bucketPutReplication: 'PutBucketReplication',
bucketPutVersioning: 'PutBucketVersioning',
bucketPutWebsite: 'PutBucketWebsite',
completeMultipartUpload: 'CompleteMultipartUpload',
initiateMultipartUpload: 'CreateMultipartUpload',
listMultipartUploads: 'ListMultipartUploads',
listParts: 'ListParts',
multiObjectDelete: 'DeleteObjects',
multipartDelete: 'AbortMultipartUpload',
objectCopy: 'CopyObject',
objectDelete: 'DeleteObject',
objectDeleteTagging: 'DeleteObjectTagging',
objectGet: 'GetObject',
objectGetACL: 'GetObjectAcl',
objectGetTagging: 'GetObjectTagging',
objectHead: 'HeadObject',
objectPut: 'PutObject',
objectPutACL: 'PutObjectAcl',
objectPutCopyPart: 'UploadPartCopy',
objectPutPart: 'UploadPart',
objectPutTagging: 'PutObjectTagging',
serviceGet: 'ListBuckets',
};
const actionMapIAM = {
attachGroupPolicy: 'iam:AttachGroupPolicy',
attachUserPolicy: 'iam:AttachUserPolicy',
createAccessKey: 'iam:CreateAccessKey',
createGroup: 'iam:CreateGroup',
createPolicy: 'iam:CreatePolicy',
createPolicyVersion: 'iam:CreatePolicyVersion',
createUser: 'iam:CreateUser',
deleteAccessKey: 'iam:DeleteAccessKey',
deleteGroup: 'iam:DeleteGroup',
deleteGroupPolicy: 'iam:DeleteGroupPolicy',
deletePolicy: 'iam:DeletePolicy',
deletePolicyVersion: 'iam:DeletePolicyVersion',
deleteUser: 'iam:DeleteUser',
detachGroupPolicy: 'iam:DetachGroupPolicy',
detachUserPolicy: 'iam:DetachUserPolicy',
getGroup: 'iam:GetGroup',
getGroupPolicy: 'iam:GetGroupPolicy',
getPolicy: 'iam:GetPolicy',
getPolicyVersion: 'iam:GetPolicyVersion',
getUser: 'iam:GetUser',
listAccessKeys: 'iam:ListAccessKeys',
listGroupPolicies: 'iam:ListGroupPolicies',
listGroups: 'iam:ListGroups',
listGroupsForUser: 'iam:ListGroupsForUser',
listPolicies: 'iam:ListPolicies',
listPolicyVersions: 'iam:ListPolicyVersions',
listUsers: 'iam:ListUsers',
putGroupPolicy: 'iam:PutGroupPolicy',
removeUserFromGroup: 'iam:RemoveUserFromGroup',
};
const actionMapSSO = {
SsoAuthorize: 'sso:Authorize',
};
const actionMapSTS = {
assumeRole: 'sts:AssumeRole',
};
const actionMapMetadata = {
admin: 'metadata:admin',
default: 'metadata:bucketd',
};
module.exports = {
actionMapRQ,
actionMapBP,
actionMapS3,
actionMonitoringMapS3,
actionMapIAM,
actionMapSSO,
actionMapSTS,
actionMapMetadata,
};

View File

@@ -14,8 +14,7 @@ const handleWildcardInResource =
*/
function checkArnMatch(policyArn, requestRelativeId, requestArnArr,
caseSensitive) {
let regExofArn = handleWildcardInResource(policyArn);
regExofArn = caseSensitive ? regExofArn : regExofArn.toLowerCase();
const regExofArn = handleWildcardInResource(policyArn);
// The relativeId is the last part of the ARN (for instance, a bucket and
// object name in S3)
// Join on ":" in case there were ":" in the relativeID at the end
@@ -26,16 +25,22 @@ function checkArnMatch(policyArn, requestRelativeId, requestArnArr,
// Check to see if the relative-id matches first since most likely
// to diverge. If not a match, the resource is not applicable so return
// false
if (!policyRelativeIdRegEx.test(requestRelativeId)) {
if (!policyRelativeIdRegEx.test(caseSensitive ?
requestRelativeId : requestRelativeId.toLowerCase())) {
return false;
}
// Check the other parts of the ARN to make sure they match. If not,
// return false.
for (let j = 0; j < 5; j ++) {
for (let j = 0; j < 5; j++) {
const segmentRegEx = new RegExp(regExofArn[j]);
const requestSegment = caseSensitive ? requestArnArr[j] :
requestArnArr[j].toLowerCase();
if (!segmentRegEx.test(requestSegment)) {
const policyArnArr = policyArn.split(':');
// We want to allow an empty account ID for utapi service ARNs to not
// break compatibility.
if (j === 4 && policyArnArr[2] === 'utapi' && policyArnArr[4] === '') {
continue;
} else if (!segmentRegEx.test(requestSegment)) {
return false;
}
}

View File

@@ -112,7 +112,7 @@ conditions.findConditionKey = (key, requestContext) => {
// (STANDARD, etc.)
map.set('s3:x-amz-storage-class', headers['x-amz-storage-class']);
// s3:VersionId -- version id of object
map.set('s3:VersionId', headers['x-amz-version-id']);
map.set('s3:VersionId', query.versionId);
// s3:LocationConstraint -- Used to restrict creation of bucket
// in certain region. Only applicable for CreateBucket
map.set('s3:LocationConstraint', requestContext.getLocationConstraint());
@@ -139,6 +139,13 @@ conditions.findConditionKey = (key, requestContext) => {
// so can use this in a deny policy to deny any requests that do not
// have a signed payload
map.set('s3:x-amz-content-sha256', headers['x-amz-content-sha256']);
// s3:ObjLocationConstraint is the location constraint set for an
// object on a PUT request using the "x-amz-meta-scal-location-constraint"
// header
map.set('s3:ObjLocationConstraint',
headers['x-amz-meta-scal-location-constraint']);
map.set('sts:ExternalId', requestContext.getRequesterExternalId());
map.set('iam:PolicyArn', requestContext.getPolicyArn());
return map.get(key);
};

View File

@@ -0,0 +1,46 @@
const Transform = require('stream').Transform;
const crypto = require('crypto');
/**
* This class is designed to compute an MD5 hash while data is
* being sent through a stream
*/
class MD5Sum extends Transform {
/**
* @constructor
*/
constructor() {
super({});
this.hash = crypto.createHash('md5');
this.completedHash = undefined;
}
/**
* This function will update the current md5 hash with the next chunk
*
* @param {Buffer|string} chunk - Chunk to compute
* @param {string} encoding - Data encoding
* @param {function} callback - Callback(err, chunk, encoding)
* @return {undefined}
*/
_transform(chunk, encoding, callback) {
this.hash.update(chunk, encoding);
callback(null, chunk, encoding);
}
/**
* This function will end the hash computation
*
* @param {function} callback - Callback(err)
* @return {undefined}
*/
_flush(callback) {
this.completedHash = this.hash.digest('hex');
this.emit('hashed');
callback(null);
}
}
module.exports = MD5Sum;
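// Example (sketch): computing an MD5 while piping data through:
//
//   const hashedStream = new MD5Sum();
//   source.pipe(hashedStream).pipe(destination);
//   hashedStream.on('hashed', () => {
//       log.debug('MD5 computed', { md5: hashedStream.completedHash });
//   });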

View File

@@ -0,0 +1,83 @@
const EventEmitter = require('events');
/**
* Class to collect results of streaming subparts.
* Emits a "done" event when streaming is complete and Azure has
* returned results for putting each of the subparts.
* Emits an "error" event if Azure returns an error for putting a
* subpart while streaming is in progress.
* @class ResultsCollector
*/
class ResultsCollector extends EventEmitter {
/**
* @constructor
*/
constructor() {
super();
this._results = [];
this._queue = 0;
this._streamingFinished = false;
}
/**
* ResultsCollector.pushResult - register result of putting one subpart
* and emit "done" or "error" events if appropriate
* @param {(Error|undefined)} err - error returned from Azure after
* putting a subpart
* @param {number} subPartIndex - the index of the subpart
* @emits ResultCollector#done
* @emits ResultCollector#error
* @return {undefined}
*/
pushResult(err, subPartIndex) {
this._results.push({
error: err,
subPartIndex,
});
this._queue--;
if (this._resultsComplete()) {
this.emit('done', err, this._results);
} else if (err) {
this.emit('error', err, subPartIndex);
}
}
/**
* ResultsCollector.pushOp - register operation to put another subpart
* @return {undefined}
*/
pushOp() {
this._queue++;
}
/**
* ResultsCollector.enableComplete - register streaming has finished,
* allowing ResultCollector#done event to be emitted when last result
* has been returned
* @return {undefined}
*/
enableComplete() {
this._streamingFinished = true;
}
_resultsComplete() {
return (this._queue === 0 && this._streamingFinished);
}
}
/**
* "done" event
* @event ResultCollector#done
* @type {(Error|undefined)} err - error returned by Azure putting last subpart
* @type {object[]} results - result for putting each of the subparts
* @property {Error} [results[].error] - error returned by Azure putting subpart
* @property {number} results[].subPartIndex - index of the subpart
*/
/**
* "error" event
* @event ResultCollector#error
* @type {(Error|undefined)} error - error returned by Azure for the last subpart
* @type {number} subPartIndex - index of the subpart
*/
module.exports = ResultsCollector;
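// Example (sketch): collecting the results of two subpart uploads:
//
//   const collector = new ResultsCollector();
//   collector.on('done', (err, results) => { /* all subparts acked */ });
//   collector.pushOp();            // subpart 0 upload started
//   collector.pushOp();            // subpart 1 upload started
//   collector.enableComplete();    // no further subparts will be queued
//   collector.pushResult(null, 0);
//   collector.pushResult(null, 1); // queue drained => 'done' emitted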

View File

@@ -0,0 +1,145 @@
const stream = require('stream');
class SubStream extends stream.PassThrough {
constructor(options) {
super(options);
this.on('stopStreamingToAzure', function stopStreamingToAzure() {
this._abortStreaming();
});
}
_abortStreaming() {
this.push(null);
this.end();
}
}
/**
* Interface for streaming subparts.
* @class SubStreamInterface
*/
class SubStreamInterface {
/**
* @constructor
* @param {stream.Readable} sourceStream - stream to read for data
*/
constructor(sourceStream) {
this._sourceStream = sourceStream;
this._totalLengthCounter = 0;
this._lengthCounter = 0;
this._subPartIndex = 0;
this._currentStream = new SubStream();
this._streamingAborted = false;
}
/**
* SubStreamInterface.pauseStreaming - pause data flow
* @return {undefined}
*/
pauseStreaming() {
this._sourceStream.pause();
}
/**
* SubStreamInterface.resumeStreaming - resume data flow
* @return {undefined}
*/
resumeStreaming() {
this._sourceStream.resume();
}
/**
* SubStreamInterface.endStreaming - signal end of data for last stream,
* to be called when source stream has ended
* @return {undefined}
*/
endStreaming() {
this._totalLengthCounter += this._lengthCounter;
this._currentStream.end();
}
/**
* SubStreamInterface.stopStreaming - destroy streams,
* to be called when streaming must be stopped externally
* @param {stream.Readable} [piper] - a stream that is piping data into
* source stream
* @return {undefined}
*/
stopStreaming(piper) {
this._streamingAborted = true;
if (piper) {
piper.unpipe();
}
this._currentStream.emit('stopStreamingToAzure');
}
/**
* SubStreamInterface.getLengthCounter - return length of bytes streamed
* for current subpart
* @return {number} - this._lengthCounter
*/
getLengthCounter() {
return this._lengthCounter;
}
/**
* SubStreamInterface.getTotalBytesStreamed - return total bytes streamed
* @return {number} - this._totalLengthCounter
*/
getTotalBytesStreamed() {
return this._totalLengthCounter;
}
/**
* SubStreamInterface.getCurrentStream - return subpart stream currently
* being written to from source stream
* @return {object} - this._currentStream
*/
getCurrentStream() {
return this._currentStream;
}
/**
* SubStreamInterface.transitionToNextStream - signal end of data for
* current stream, generate a new stream and start streaming to new stream
* @return {object} - return object containing new current stream and
* subpart index of current subpart
*/
transitionToNextStream() {
this.pauseStreaming();
this._currentStream.end();
this._totalLengthCounter += this._lengthCounter;
this._lengthCounter = 0;
this._subPartIndex++;
this._currentStream = new SubStream();
this.resumeStreaming();
return {
nextStream: this._currentStream,
subPartIndex: this._subPartIndex,
};
}
/**
* SubStreamInterface.write - write to the current stream
* @param {Buffer} chunk - a chunk of data
* @return {undefined}
*/
write(chunk) {
if (this._streamingAborted) {
// don't write
return;
}
const ready = this._currentStream.write(chunk);
if (!ready) {
this.pauseStreaming();
this._currentStream.once('drain', () => {
this.resumeStreaming();
});
}
this._lengthCounter += chunk.length;
}
}
module.exports = SubStreamInterface;
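// Example (sketch): cutting a source stream into subpart streams;
// "maxSize" and the upload hand-off are illustrative only:
//
//   const streamInterface = new SubStreamInterface(sourceStream);
//   sourceStream.on('data', chunk => {
//       if (streamInterface.getLengthCounter() + chunk.length > maxSize) {
//           const { nextStream } = streamInterface.transitionToNextStream();
//           // hand "nextStream" to the next subpart upload here
//       }
//       streamInterface.write(chunk);
//   });
//   sourceStream.on('end', () => streamInterface.endStreaming());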

View File

@@ -0,0 +1,229 @@
const assert = require('assert');
const crypto = require('crypto');
const stream = require('stream');
const ResultsCollector = require('./ResultsCollector');
const SubStreamInterface = require('./SubStreamInterface');
const objectUtils = require('../objectUtils');
const MD5Sum = require('../MD5Sum');
const errors = require('../../errors');
const azureMpuUtils = {};
azureMpuUtils.splitter = '|';
azureMpuUtils.overviewMpuKey = 'azure_mpu';
azureMpuUtils.maxSubPartSize = 104857600;
azureMpuUtils.zeroByteETag = crypto.createHash('md5').update('').digest('hex');
azureMpuUtils.padString = (str, category) => {
const _padFn = {
left: (str, padString) =>
`${padString}${str}`.substr(-padString.length),
right: (str, padString) =>
`${str}${padString}`.substr(0, padString.length),
};
// It's a little more performant if we add pre-generated strings for each
// type of padding we want to apply, instead of using string.repeat() to
// create the padding.
const padSpec = {
partNumber: {
padString: '00000',
direction: 'left',
},
subPart: {
padString: '00',
direction: 'left',
},
part: {
padString:
'%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%',
direction: 'right',
},
};
const { direction, padString } = padSpec[category];
return _padFn[direction](str, padString);
};
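// Examples: padString(3, 'partNumber') => '00003' and
// padString(1, 'subPart') => '01' (left-padded); 'part' right-pads
// its input with '%' up to 64 characters.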
// NOTE: If we want to extract the object name from these keys, we will need
// to use a similar method to _getKeyAndUploadIdFromMpuKey since the object
// name may have instances of the splitter used to delimit arguments
azureMpuUtils.getMpuSummaryKey = (objectName, uploadId) =>
`${objectName}${azureMpuUtils.splitter}${uploadId}`;
azureMpuUtils.getBlockId = (uploadId, partNumber, subPartIndex) => {
const paddedPartNumber = azureMpuUtils.padString(partNumber, 'partNumber');
const paddedSubPart = azureMpuUtils.padString(subPartIndex, 'subPart');
const splitter = azureMpuUtils.splitter;
const blockId = `${uploadId}${splitter}partNumber${paddedPartNumber}` +
`${splitter}subPart${paddedSubPart}${splitter}`;
return azureMpuUtils.padString(blockId, 'part');
};
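// e.g. (illustrative) getBlockId('myUploadId', 2, 0) yields
// 'myUploadId|partNumber00002|subPart00|' right-padded with '%' to 64
// characters, so all block ids share one length (Azure requires block ids
// within a single blob to be of equal length)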
azureMpuUtils.getSummaryPartId = (partNumber, eTag, size) => {
const paddedPartNumber = azureMpuUtils.padString(partNumber, 'partNumber');
const timestamp = Date.now();
const splitter = azureMpuUtils.splitter;
const summaryKey = `${paddedPartNumber}${splitter}${timestamp}` +
`${splitter}${eTag}${splitter}${size}${splitter}`;
return azureMpuUtils.padString(summaryKey, 'part');
};
azureMpuUtils.getSubPartInfo = dataContentLength => {
const numberFullSubParts =
Math.floor(dataContentLength / azureMpuUtils.maxSubPartSize);
const remainder = dataContentLength % azureMpuUtils.maxSubPartSize;
const numberSubParts = remainder ?
numberFullSubParts + 1 : numberFullSubParts;
const lastPartSize = remainder || azureMpuUtils.maxSubPartSize;
return {
expectedNumberSubParts: numberSubParts,
lastPartIndex: numberSubParts - 1,
lastPartSize,
};
};
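// Worked example (illustrative): for a 250 MiB part (262144000 bytes),
// floor(262144000 / 104857600) = 2 full subparts plus a 52428800-byte
// remainder, so the result is { expectedNumberSubParts: 3, lastPartIndex: 2,
// lastPartSize: 52428800 }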
azureMpuUtils.getSubPartSize = (subPartInfo, subPartIndex) => {
const { lastPartIndex, lastPartSize } = subPartInfo;
return subPartIndex === lastPartIndex ?
lastPartSize : azureMpuUtils.maxSubPartSize;
};
azureMpuUtils.getSubPartIds = (part, uploadId) =>
[...Array(part.numberSubParts).keys()].map(subPartIndex =>
azureMpuUtils.getBlockId(uploadId, part.partNumber, subPartIndex));
azureMpuUtils.putSinglePart = (errorWrapperFn, request, params, dataStoreName,
log, cb) => {
const { bucketName, partNumber, size, objectKey, contentMD5, uploadId }
= params;
const blockId = azureMpuUtils.getBlockId(uploadId, partNumber, 0);
const passThrough = new stream.PassThrough();
const options = {};
if (contentMD5) {
options.useTransactionalMD5 = true;
options.transactionalContentMD5 = contentMD5;
}
request.pipe(passThrough);
return errorWrapperFn('uploadPart', 'createBlockFromStream',
[blockId, bucketName, objectKey, passThrough, size, options,
(err, result) => {
if (err) {
log.error('Error from Azure data backend uploadPart',
{ error: err.message, dataStoreName });
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
if (err.code === 'InvalidMd5') {
return cb(errors.InvalidDigest);
}
if (err.code === 'Md5Mismatch') {
return cb(errors.BadDigest);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err.message}`)
);
}
const eTag = objectUtils.getHexMD5(result.headers['content-md5']);
return cb(null, eTag, size);
}], log, cb);
};
azureMpuUtils.putNextSubPart = (errorWrapperFn, partParams, subPartInfo,
subPartStream, subPartIndex, resultsCollector, log, cb) => {
const { uploadId, partNumber, bucketName, objectKey } = partParams;
const subPartSize = azureMpuUtils.getSubPartSize(
subPartInfo, subPartIndex);
const subPartId = azureMpuUtils.getBlockId(uploadId, partNumber,
subPartIndex);
resultsCollector.pushOp();
errorWrapperFn('uploadPart', 'createBlockFromStream',
[subPartId, bucketName, objectKey, subPartStream, subPartSize,
{}, err => resultsCollector.pushResult(err, subPartIndex)], log, cb);
};
azureMpuUtils.putSubParts = (errorWrapperFn, request, params,
dataStoreName, log, cb) => {
const subPartInfo = azureMpuUtils.getSubPartInfo(params.size);
const resultsCollector = new ResultsCollector();
const hashedStream = new MD5Sum();
const streamInterface = new SubStreamInterface(hashedStream);
log.trace('data length is greater than max subpart size; ' +
'putting multiple parts');
resultsCollector.on('error', (err, subPartIndex) => {
log.error(`Error putting subpart to Azure: ${subPartIndex}`,
{ error: err.message, dataStoreName });
streamInterface.stopStreaming(request);
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err}`));
});
resultsCollector.on('done', (err, results) => {
if (err) {
log.error('Error putting last subpart to Azure',
{ error: err.message, dataStoreName });
if (err.code === 'ContainerNotFound') {
return cb(errors.NoSuchBucket);
}
return cb(errors.InternalError.customizeDescription(
`Error returned from Azure: ${err}`));
}
const numberSubParts = results.length;
// check if we have streamed more parts than calculated; should not
// occur, but do a sanity assertion to detect any coding logic error
assert.strictEqual(numberSubParts, subPartInfo.expectedNumberSubParts,
`Fatal error: streamed ${numberSubParts} subparts but ` +
`expected ${subPartInfo.expectedNumberSubParts} subparts`);
const totalLength = streamInterface.getTotalBytesStreamed();
log.trace('successfully put subparts to Azure',
{ numberSubParts, totalLength });
hashedStream.on('hashed', () => cb(null, hashedStream.completedHash,
totalLength));
// in case the hashed event was already emitted before the
// event handler was registered:
if (hashedStream.completedHash) {
hashedStream.removeAllListeners('hashed');
return cb(null, hashedStream.completedHash, totalLength);
}
return undefined;
});
const currentStream = streamInterface.getCurrentStream();
// start first put to Azure before we start streaming the data
azureMpuUtils.putNextSubPart(errorWrapperFn, params, subPartInfo,
currentStream, 0, resultsCollector, log, cb);
request.pipe(hashedStream);
hashedStream.on('end', () => {
resultsCollector.enableComplete();
streamInterface.endStreaming();
});
hashedStream.on('data', data => {
const currentLength = streamInterface.getLengthCounter();
if (currentLength + data.length > azureMpuUtils.maxSubPartSize) {
const bytesToMaxSize = azureMpuUtils.maxSubPartSize - currentLength;
const firstChunk = bytesToMaxSize === 0 ? data :
data.slice(bytesToMaxSize);
if (bytesToMaxSize !== 0) {
// if we have not streamed full subpart, write enough of the
// data chunk to stream the correct length
streamInterface.write(data.slice(0, bytesToMaxSize));
}
const { nextStream, subPartIndex } =
streamInterface.transitionToNextStream();
azureMpuUtils.putNextSubPart(errorWrapperFn, params, subPartInfo,
nextStream, subPartIndex, resultsCollector, log, cb);
streamInterface.write(firstChunk);
} else {
streamInterface.write(data);
}
});
};
module.exports = azureMpuUtils;
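The 'data' handler above splits any chunk that straddles the 100 MiB subpart boundary. A standalone sketch of that split (a hypothetical helper, not part of the module):

// Split a chunk between the current subpart and the next one, mirroring
// the bytesToMaxSize logic of the 'data' handler.
function splitAtBoundary(currentLength, chunk, max = 104857600) {
    if (currentLength + chunk.length <= max) {
        return { current: chunk, next: null }; // fits in the current subpart
    }
    const bytesToMax = max - currentLength;
    return {
        // nothing left for the current subpart when it is already full
        current: bytesToMax === 0 ? null : chunk.slice(0, bytesToMax),
        next: chunk.slice(bytesToMax), // opens the next subpart
    };
}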


@@ -0,0 +1,107 @@
const querystring = require('querystring');
const escapeForXml = require('./escapeForXml');
const convertMethods = {};
convertMethods.completeMultipartUpload = xmlParams => {
const escapedBucketName = escapeForXml(xmlParams.bucketName);
return '<?xml version="1.0" encoding="UTF-8"?>' +
'<CompleteMultipartUploadResult ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
`<Location>http://${escapedBucketName}.` +
`${escapeForXml(xmlParams.hostname)}/` +
`${escapeForXml(xmlParams.objectKey)}</Location>` +
`<Bucket>${escapedBucketName}</Bucket>` +
`<Key>${escapeForXml(xmlParams.objectKey)}</Key>` +
`<ETag>${escapeForXml(xmlParams.eTag)}</ETag>` +
'</CompleteMultipartUploadResult>';
};
convertMethods.initiateMultipartUpload = xmlParams =>
'<?xml version="1.0" encoding="UTF-8"?>' +
'<InitiateMultipartUploadResult ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
`<Bucket>${escapeForXml(xmlParams.bucketName)}</Bucket>` +
`<Key>${escapeForXml(xmlParams.objectKey)}</Key>` +
`<UploadId>${escapeForXml(xmlParams.uploadId)}</UploadId>` +
'</InitiateMultipartUploadResult>';
convertMethods.listMultipartUploads = xmlParams => {
const xml = [];
const l = xmlParams.list;
xml.push('<?xml version="1.0" encoding="UTF-8"?>',
'<ListMultipartUploadsResult ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">',
`<Bucket>${escapeForXml(xmlParams.bucketName)}</Bucket>`
);
// For certain XML elements, if the value is `undefined`, AWS returns either
// an empty tag or omits the element entirely. Hence the `optional` key in
// the params.
const params = [
{ tag: 'KeyMarker', value: xmlParams.keyMarker },
{ tag: 'UploadIdMarker', value: xmlParams.uploadIdMarker },
{ tag: 'NextKeyMarker', value: l.NextKeyMarker, optional: true },
{ tag: 'NextUploadIdMarker', value: l.NextUploadIdMarker,
optional: true },
{ tag: 'Delimiter', value: l.Delimiter, optional: true },
{ tag: 'Prefix', value: xmlParams.prefix, optional: true },
];
params.forEach(param => {
if (param.value) {
xml.push(`<${param.tag}>${escapeForXml(param.value)}` +
`</${param.tag}>`);
} else if (!param.optional) {
xml.push(`<${param.tag} />`);
}
});
xml.push(`<MaxUploads>${escapeForXml(l.MaxKeys)}</MaxUploads>`,
`<IsTruncated>${escapeForXml(l.IsTruncated)}</IsTruncated>`
);
l.Uploads.forEach(upload => {
const val = upload.value;
let key = upload.key;
if (xmlParams.encoding === 'url') {
key = querystring.escape(key);
}
xml.push('<Upload>',
`<Key>${escapeForXml(key)}</Key>`,
`<UploadId>${escapeForXml(val.UploadId)}</UploadId>`,
'<Initiator>',
`<ID>${escapeForXml(val.Initiator.ID)}</ID>`,
`<DisplayName>${escapeForXml(val.Initiator.DisplayName)}` +
'</DisplayName>',
'</Initiator>',
'<Owner>',
`<ID>${escapeForXml(val.Owner.ID)}</ID>`,
`<DisplayName>${escapeForXml(val.Owner.DisplayName)}` +
'</DisplayName>',
'</Owner>',
`<StorageClass>${escapeForXml(val.StorageClass)}` +
'</StorageClass>',
`<Initiated>${escapeForXml(val.Initiated)}</Initiated>`,
'</Upload>'
);
});
l.CommonPrefixes.forEach(prefix => {
xml.push('<CommonPrefixes>',
`<Prefix>${escapeForXml(prefix)}</Prefix>`,
'</CommonPrefixes>'
);
});
xml.push('</ListMultipartUploadsResult>');
return xml.join('');
};
function convertToXml(method, xmlParams) {
return convertMethods[method](xmlParams);
}
module.exports = convertToXml;
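A usage sketch of the converter above; the require path is assumed for illustration:

const convertToXml = require('./lib/s3middleware/convertToXml'); // assumed path
const xml = convertToXml('initiateMultipartUpload', {
    bucketName: 'my-bucket',
    objectKey: 'photos/cat & dog.jpg',
    uploadId: 'abc123',
});
// the '&' in the key is escaped, so xml contains
// <Key>photos/cat &amp; dog.jpg</Key> and <UploadId>abc123</UploadId>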


@@ -0,0 +1,19 @@
/**
* Project: node-xml https://github.com/dylang/node-xml
* License: MIT https://github.com/dylang/node-xml/blob/master/LICENSE
*/
const XML_CHARACTER_MAP = {
'&': '&amp;',
'"': '&quot;',
"'": '&apos;',
'<': '&lt;',
'>': '&gt;',
};
function escapeForXml(string) {
return string && string.replace
? string.replace(/([&"<>'])/g, (str, item) => XML_CHARACTER_MAP[item])
: string;
}
module.exports = escapeForXml;
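Sample behavior, assuming the function is in scope: strings get the five reserved XML characters replaced, while non-string values pass through because they have no .replace method.

escapeForXml('a & b < "c"'); // 'a &amp; b &lt; &quot;c&quot;'
escapeForXml(42);            // 42 (returned as-is)
escapeForXml(undefined);     // undefined (falsy, returned as-is)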


@@ -0,0 +1,42 @@
const Readable = require('stream').Readable;
/**
 * This class produces zero-filled buffers for consumption by a reader
*/
class NullStream extends Readable {
/**
* Construct a producer of zero-filled buffers that emits as many bytes as
* specified by the range parameter, or by the size parameter if range is
* null or does not contain exactly 2 elements
* @constructor
* @param {integer} size - the number of null bytes to produce
* @param {array} range - a range specification that overrides size
*/
constructor(size, range) {
super({});
if (Array.isArray(range) && range.length === 2) {
this.bytesToRead = range[1] - range[0] + 1;
} else {
this.bytesToRead = size;
}
}
/**
* This function generates the stream of null bytes
*
* @param {integer} size - advisory amount of data to produce
* @returns {undefined}
*/
_read(size) {
const toRead = Math.min(size, this.bytesToRead);
const buffer = toRead > 0
? Buffer.alloc(toRead, 0)
: null;
this.bytesToRead -= toRead;
this.push(buffer);
}
}
module.exports = NullStream;
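A short usage sketch (illustrative): a NullStream of 10 bytes emits exactly 10 zero bytes; new NullStream(0, [2, 5]) would emit 4.

const ns = new NullStream(10);
let total = 0;
ns.on('data', chunk => { total += chunk.length; });
ns.on('end', () => console.log('bytes produced:', total)); // 10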


@@ -0,0 +1,9 @@
const objectUtils = {};
objectUtils.getHexMD5 = base64MD5 =>
Buffer.from(base64MD5, 'base64').toString('hex');
objectUtils.getBase64MD5 = hexMD5 =>
Buffer.from(hexMD5, 'hex').toString('base64');
module.exports = objectUtils;
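A round-trip sketch using the MD5 of the empty string, assuming the module is in scope:

objectUtils.getHexMD5('1B2M2Y8AsgTpgAmY7PhCfg==');
// -> 'd41d8cd98f00b204e9800998ecf8427e'
objectUtils.getBase64MD5('d41d8cd98f00b204e9800998ecf8427e');
// -> '1B2M2Y8AsgTpgAmY7PhCfg=='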

lib/s3middleware/tagging.js

@@ -0,0 +1,224 @@
const { parseString } = require('xml2js');
const errors = require('../errors');
const escapeForXml = require('./escapeForXml');
const errorInvalidArgument = errors.InvalidArgument
.customizeDescription('The header \'x-amz-tagging\' shall be ' +
'encoded as UTF-8 then URLEncoded URL query parameters without ' +
'tag name duplicates.');
const errorBadRequestLimit50 = errors.BadRequest
.customizeDescription('Object tags cannot be greater than 50');
/*
Format of xml request:
<Tagging>
<TagSet>
<Tag>
<Key>Tag Name</Key>
<Value>Tag Value</Value>
</Tag>
</TagSet>
</Tagging>
*/
const _validator = {
validateTagStructure: tag => tag
&& Object.keys(tag).length === 2
&& tag.Key && tag.Value
&& tag.Key.length === 1 && tag.Value.length === 1
&& tag.Key[0] !== undefined && tag.Value[0] !== undefined
&& typeof tag.Key[0] === 'string' && typeof tag.Value[0] === 'string',
validateXMLStructure: result =>
result && Object.keys(result).length === 1 &&
result.Tagging &&
result.Tagging.TagSet &&
result.Tagging.TagSet.length === 1 &&
(
result.Tagging.TagSet[0] === '' ||
result.Tagging.TagSet[0] &&
Object.keys(result.Tagging.TagSet[0]).length === 1 &&
result.Tagging.TagSet[0].Tag &&
Array.isArray(result.Tagging.TagSet[0].Tag)
),
validateKeyValue: (key, value) => {
if (key.length > 128) {
return errors.InvalidTag.customizeDescription('The TagKey you ' +
'have provided is too long, max 128');
}
if (value.length > 256) {
return errors.InvalidTag.customizeDescription('The TagValue you ' +
'have provided is too long, max 256');
}
return true;
},
};
/** _validateTags - Validate tags, returning an error if tags are invalid
* @param {object[]} tags - tags parsed from xml to be validated
* @param {string[]} tags[].Key - Name of the tag
* @param {string[]} tags[].Value - Value of the tag
* @return {(Error|object)} tagsResult - return object tags on success
* { key: value}; error on failure
*/
function _validateTags(tags) {
let result;
const tagsResult = {};
if (tags.length === 0) {
return tagsResult;
}
// Maximum number of tags per resource: 50
if (tags.length > 50) {
return errorBadRequestLimit50;
}
for (let i = 0; i < tags.length; i++) {
const tag = tags[i];
if (!_validator.validateTagStructure(tag)) {
return errors.MalformedXML;
}
const key = tag.Key[0];
const value = tag.Value[0];
if (!key) {
return errors.InvalidTag.customizeDescription('The TagKey you ' +
'have provided is invalid');
}
// Allowed characters are letters, whitespace, and numbers, plus
// the following special characters: + - = . _ : /
// Maximum key length: 128 Unicode characters
// Maximum value length: 256 Unicode characters
result = _validator.validateKeyValue(key, value);
if (result instanceof Error) {
return result;
}
tagsResult[key] = value;
}
// reject duplicate keys: repeats collapse in tagsResult, so the counts differ
if (tags.length > Object.keys(tagsResult).length) {
return errors.InvalidTag.customizeDescription('Cannot provide ' +
'multiple Tags with the same key');
}
return tagsResult;
}
/** parseTagXml - Parse and validate xml body, returning callback with object
* tags : { key: value}
* @param {string} xml - xml body to parse and validate
* @param {object} log - Werelogs logger
* @param {function} cb - callback to server
* @return {(Error|object)} - calls callback with tags object on success, error
* on failure
*/
function parseTagXml(xml, log, cb) {
parseString(xml, (err, result) => {
if (err) {
log.trace('xml parsing failed', {
error: err,
method: 'parseTagXml',
});
log.debug('invalid xml', { xml });
return cb(errors.MalformedXML);
}
if (!_validator.validateXMLStructure(result)) {
log.debug('xml validation failed', {
error: errors.MalformedXML,
method: '_validator.validateXMLStructure',
xml,
});
return cb(errors.MalformedXML);
}
// AWS does not return an error if no tags are present
if (result.Tagging.TagSet[0] === '') {
return cb(null, []);
}
const validationRes = _validateTags(result.Tagging.TagSet[0].Tag);
if (validationRes instanceof Error) {
log.debug('tag validation failed', {
error: validationRes,
method: '_validateTags',
xml,
});
return cb(validationRes);
}
// if no error, validation returns tags object
return cb(null, validationRes);
});
}
function convertToXml(objectTags) {
const xml = [];
xml.push('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>',
'<Tagging> <TagSet>');
if (objectTags && Object.keys(objectTags).length > 0) {
Object.keys(objectTags).forEach(key => {
xml.push(`<Tag><Key>${escapeForXml(key)}</Key>` +
`<Value>${escapeForXml(objectTags[key])}</Value></Tag>`);
});
}
xml.push('</TagSet> </Tagging>');
return xml.join('');
}
/** parseTagFromQuery - Parse and validate x-amz-tagging header (URL query
* parameter encoded), returning object tags: { key: value }
* @param {string} tagQuery - tag(s) URL query parameter encoded
* @return {(Error|object)} - tags object on success, error on failure
*/
function parseTagFromQuery(tagQuery) {
const tagsResult = {};
const pairs = tagQuery.split('&');
let key;
let value;
let emptyTag = 0;
if (pairs.length === 0) {
return tagsResult;
}
for (let i = 0; i < pairs.length; i++) {
const pair = pairs[i];
if (!pair) {
emptyTag++;
continue;
}
const pairArray = pair.split('=');
if (pairArray.length !== 2) {
return errorInvalidArgument;
}
try {
key = decodeURIComponent(pairArray[0]);
value = decodeURIComponent(pairArray[1]);
} catch (err) {
return errorInvalidArgument;
}
if (!key) {
return errorInvalidArgument;
}
const errorResult = _validator.validateKeyValue(key, value);
if (errorResult instanceof Error) {
return errorResult;
}
tagsResult[key] = value;
}
// return InvalidArgument error if using the same key multiple times
if (pairs.length - emptyTag > Object.keys(tagsResult).length) {
return errorInvalidArgument;
}
if (Object.keys(tagsResult).length > 50) {
return errorBadRequestLimit50;
}
return tagsResult;
}
module.exports = {
_validator,
parseTagXml,
convertToXml,
parseTagFromQuery,
};
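A sketch of parseTagFromQuery on a valid and a duplicate-key header value; the require path is assumed:

const { parseTagFromQuery } = require('./lib/s3middleware/tagging'); // assumed path
parseTagFromQuery('color=blue&team=infra');
// -> { color: 'blue', team: 'infra' }
parseTagFromQuery('color=blue&color=red');
// -> errorInvalidArgument: duplicate keys collapse in the result object,
//    so the pair count no longer matches the key count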


@@ -0,0 +1,27 @@
const constants = require('../constants');
const errors = require('../errors');
const userMetadata = {};
/**
* Pull user provided meta headers from request headers
* @param {object} headers - headers attached to the http request (lowercased)
* @return {(object|Error)} all user meta headers or MetadataTooLarge
*/
userMetadata.getMetaHeaders = headers => {
const metaHeaders = Object.create(null);
let totalLength = 0;
const metaHeaderKeys = Object.keys(headers).filter(h =>
h.startsWith('x-amz-meta-'));
const validHeaders = metaHeaderKeys.every(k => {
totalLength += k.length;
totalLength += headers[k].length;
metaHeaders[k] = headers[k];
return (totalLength <= constants.maximumMetaHeadersSize);
});
if (validHeaders) {
return metaHeaders;
}
return errors.MetadataTooLarge;
};
module.exports = userMetadata;
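For example (illustrative): only x-amz-meta-* headers are kept, and the cumulative key-plus-value length is checked against maximumMetaHeadersSize.

userMetadata.getMetaHeaders({
    'x-amz-meta-color': 'blue',
    'content-length': '42', // ignored: not an x-amz-meta- header
});
// -> { 'x-amz-meta-color': 'blue' }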


@@ -0,0 +1,124 @@
const errors = require('../errors');
function _matchesETag(item, contentMD5) {
return (item === contentMD5 || item === '*' || item === `"${contentMD5}"`);
}
function _checkEtagMatch(ifETagMatch, contentMD5) {
const res = { present: false, error: null };
if (ifETagMatch) {
res.present = true;
if (ifETagMatch.includes(',')) {
const items = ifETagMatch.split(',');
const anyMatch = items.some(item =>
_matchesETag(item, contentMD5));
if (!anyMatch) {
res.error = errors.PreconditionFailed;
}
} else if (!_matchesETag(ifETagMatch, contentMD5)) {
res.error = errors.PreconditionFailed;
}
}
return res;
}
function _checkEtagNoneMatch(ifETagNoneMatch, contentMD5) {
const res = { present: false, error: null };
if (ifETagNoneMatch) {
res.present = true;
if (ifETagNoneMatch.includes(',')) {
const items = ifETagNoneMatch.split(',');
const anyMatch = items.some(item =>
_matchesETag(item, contentMD5));
if (anyMatch) {
res.error = errors.NotModified;
}
} else if (_matchesETag(ifETagNoneMatch, contentMD5)) {
res.error = errors.NotModified;
}
}
return res;
}
function _checkModifiedSince(ifModifiedSinceTime, lastModified) {
const res = { present: false, error: null };
if (ifModifiedSinceTime) {
res.present = true;
const checkWith = (new Date(ifModifiedSinceTime)).getTime();
if (Number.isNaN(Number(checkWith))) {
res.error = errors.InvalidArgument;
} else if (lastModified <= checkWith) {
res.error = errors.NotModified;
}
}
return res;
}
function _checkUnmodifiedSince(ifUnmodifiedSinceTime, lastModified) {
const res = { present: false, error: null };
if (ifUnmodifiedSinceTime) {
res.present = true;
const checkWith = (new Date(ifUnmodifiedSinceTime)).getTime();
if (Number.isNaN(Number(checkWith))) {
res.error = errors.InvalidArgument;
} else if (lastModified > checkWith) {
res.error = errors.PreconditionFailed;
}
}
return res;
}
/**
* validateConditionalHeaders - validates 'if-modified-since',
* 'if-unmodified-since', 'if-match' or 'if-none-match' headers if included in
* request against last-modified date of object and/or ETag.
* @param {object} headers - headers from request object
* @param {string} lastModified - last modified date of object
 * @param {string} contentMD5 - content MD5 of object
* @return {object} object with error as key and arsenal error as value or
* empty object if no error
*/
function validateConditionalHeaders(headers, lastModified, contentMD5) {
let lastModifiedDate = new Date(lastModified);
lastModifiedDate.setMilliseconds(0);
lastModifiedDate = lastModifiedDate.getTime();
const ifMatchHeader = headers['if-match'] ||
headers['x-amz-copy-source-if-match'];
const ifNoneMatchHeader = headers['if-none-match'] ||
headers['x-amz-copy-source-if-none-match'];
const ifModifiedSinceHeader = headers['if-modified-since'] ||
headers['x-amz-copy-source-if-modified-since'];
const ifUnmodifiedSinceHeader = headers['if-unmodified-since'] ||
headers['x-amz-copy-source-if-unmodified-since'];
const etagMatchRes = _checkEtagMatch(ifMatchHeader, contentMD5);
const etagNoneMatchRes = _checkEtagNoneMatch(ifNoneMatchHeader, contentMD5);
const modifiedSinceRes = _checkModifiedSince(ifModifiedSinceHeader,
lastModifiedDate);
const unmodifiedSinceRes = _checkUnmodifiedSince(ifUnmodifiedSinceHeader,
lastModifiedDate);
// If the If-Unmodified-Since condition evaluates to false and If-Match is
// not present, return the error. Otherwise If-Unmodified-Since is silent
// when If-Match matches, and when If-Match does not match it yields the
// same error, so every case is covered.
if (!etagMatchRes.present && unmodifiedSinceRes.error) {
return unmodifiedSinceRes;
}
if (etagMatchRes.present && etagMatchRes.error) {
return etagMatchRes;
}
if (etagNoneMatchRes.present && etagNoneMatchRes.error) {
return etagNoneMatchRes;
}
if (modifiedSinceRes.present && modifiedSinceRes.error) {
return modifiedSinceRes;
}
return {};
}
module.exports = {
_checkEtagMatch,
_checkEtagNoneMatch,
_checkModifiedSince,
_checkUnmodifiedSince,
validateConditionalHeaders,
};
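For instance (illustrative values), an If-None-Match header matching the object's ETag short-circuits to NotModified:

const result = validateConditionalHeaders(
    { 'if-none-match': '"d41d8cd98f00b204e9800998ecf8427e"' },
    'Fri, 01 Jan 2021 00:00:00 GMT',
    'd41d8cd98f00b204e9800998ecf8427e');
// result.error is errors.NotModified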

lib/s3routes/routes.js

@@ -0,0 +1,246 @@
const assert = require('assert');
const errors = require('../errors');
const routeGET = require('./routes/routeGET');
const routePUT = require('./routes/routePUT');
const routeDELETE = require('./routes/routeDELETE');
const routeHEAD = require('./routes/routeHEAD');
const routePOST = require('./routes/routePOST');
const routeOPTIONS = require('./routes/routeOPTIONS');
const routesUtils = require('./routesUtils');
const routeWebsite = require('./routes/routeWebsite');
const requestUtils = require('../../lib/policyEvaluator/requestUtils');
const routeMap = {
GET: routeGET,
PUT: routePUT,
POST: routePOST,
DELETE: routeDELETE,
HEAD: routeHEAD,
OPTIONS: routeOPTIONS,
};
function isValidReqUids(reqUids) {
// baseline check, to avoid the risk of running into issues if
// users craft a large x-scal-request-uids header
return reqUids.length < 128;
}
function checkUnsupportedRoutes(reqMethod) {
const method = routeMap[reqMethod];
if (!method) {
return { error: errors.MethodNotAllowed };
}
return { method };
}
function checkBucketAndKey(bucketName, objectKey, method, reqQuery,
blacklistedPrefixes, log) {
// an empty bucket name is only allowed for List Buckets (GET with no object key)
if (!bucketName && !(method === 'GET' && !objectKey)) {
log.debug('empty bucket name', { method: 'routes' });
return (method !== 'OPTIONS') ?
errors.MethodNotAllowed : errors.AccessForbidden
.customizeDescription('CORSResponse: Bucket not found');
}
if (bucketName !== undefined && routesUtils.isValidBucketName(bucketName,
blacklistedPrefixes.bucket) === false) {
log.debug('invalid bucket name', { bucketName });
if (method === 'DELETE') {
return errors.NoSuchBucket;
}
return errors.InvalidBucketName;
}
if (objectKey !== undefined) {
const result = routesUtils.isValidObjectKey(objectKey,
blacklistedPrefixes.object);
if (!result.isValid) {
log.debug('invalid object key', { objectKey });
return errors.InvalidArgument.customizeDescription('Object key ' +
`must not start with "${result.invalidPrefix}".`);
}
}
if ((reqQuery.partNumber || reqQuery.uploadId)
&& objectKey === undefined) {
return errors.InvalidRequest
.customizeDescription('A key must be specified');
}
return undefined;
}
// TODO: ARSN-59 remove assertions or restrict it to dev environment only.
function checkTypes(req, res, params, logger, s3config) {
assert.strictEqual(typeof req, 'object',
'bad routes param: req must be an object');
assert.strictEqual(typeof res, 'object',
'bad routes param: res must be an object');
assert.strictEqual(typeof logger, 'object',
'bad routes param: logger must be an object');
assert.strictEqual(typeof params.api, 'object',
'bad routes param: api must be an object');
assert.strictEqual(typeof params.api.callApiMethod, 'function',
'bad routes param: api.callApiMethod must be a defined function');
assert.strictEqual(typeof params.internalHandlers, 'object',
'bad routes param: internalHandlers must be an object');
if (params.statsClient) {
assert.strictEqual(typeof params.statsClient, 'object',
'bad routes param: statsClient must be an object');
}
assert(Array.isArray(params.allEndpoints),
'bad routes param: allEndpoints must be an array');
assert(params.allEndpoints.length > 0,
'bad routes param: allEndpoints must have at least one endpoint');
params.allEndpoints.forEach(endpoint => {
assert.strictEqual(typeof endpoint, 'string',
'bad routes param: each item in allEndpoints must be a string');
});
assert(Array.isArray(params.websiteEndpoints),
'bad routes param: websiteEndpoints must be an array');
params.websiteEndpoints.forEach(endpoint => {
assert.strictEqual(typeof endpoint, 'string',
'bad routes param: each item in websiteEndpoints must be a string');
});
assert.strictEqual(typeof params.blacklistedPrefixes, 'object',
'bad routes param: blacklistedPrefixes must be an object');
assert(Array.isArray(params.blacklistedPrefixes.bucket),
'bad routes param: blacklistedPrefixes.bucket must be an array');
params.blacklistedPrefixes.bucket.forEach(pre => {
assert.strictEqual(typeof pre, 'string',
'bad routes param: each blacklisted bucket prefix must be a string');
});
assert(Array.isArray(params.blacklistedPrefixes.object),
'bad routes param: blacklistedPrefixes.object must be an array');
params.blacklistedPrefixes.object.forEach(pre => {
assert.strictEqual(typeof pre, 'string',
'bad routes param: each blacklisted object prefix must be a string');
});
assert.strictEqual(typeof params.dataRetrievalFn, 'function',
'bad routes param: dataRetrievalFn must be a defined function');
if (s3config) {
assert.strictEqual(typeof s3config, 'object', 'bad routes param: s3config must be an object');
}
}
/** routes - route request to appropriate method
* @param {Http.Request} req - http request object
* @param {Http.ServerResponse} res - http response sent to the client
* @param {object} params - additional routing parameters
* @param {object} params.api - all api methods and method to call an api method
* i.e. api.callApiMethod(methodName, request, response, log, callback)
 * @param {object} params.internalHandlers - internal handlers API object
* for queries beginning with '/_/'
* @param {StatsClient} [params.statsClient] - client to report stats to Redis
* @param {string[]} params.allEndpoints - all accepted REST endpoints
* @param {string[]} params.websiteEndpoints - all accepted website endpoints
* @param {object} params.blacklistedPrefixes - blacklisted prefixes
* @param {string[]} params.blacklistedPrefixes.bucket - bucket prefixes
* @param {string[]} params.blacklistedPrefixes.object - object prefixes
* @param {object} params.unsupportedQueries - object containing true/false
* values for whether queries are supported
* @param {function} params.dataRetrievalFn - function to retrieve data
* @param {RequestLogger} logger - werelogs logger instance
 * @param {object} [s3config] - s3 configuration
* @returns {undefined}
*/
function routes(req, res, params, logger, s3config) {
checkTypes(req, res, params, logger, s3config);
const {
api,
internalHandlers,
statsClient,
allEndpoints,
websiteEndpoints,
blacklistedPrefixes,
dataRetrievalFn,
} = params;
const clientInfo = {
clientIP: requestUtils.getClientIp(req, s3config),
clientPort: req.socket.remotePort,
httpCode: res.statusCode,
httpMessage: res.statusMessage,
httpMethod: req.method,
httpURL: req.url,
endpoint: req.endpoint,
};
let reqUids = req.headers['x-scal-request-uids'];
if (reqUids !== undefined && !isValidReqUids(reqUids)) {
// simply ignore invalid id (any user can provide an
// invalid request ID through a crafted header)
reqUids = undefined;
}
const log = (reqUids !== undefined ?
logger.newRequestLoggerFromSerializedUids(reqUids) :
logger.newRequestLogger());
if (!req.url.startsWith('/_/healthcheck')) {
log.info('received request', clientInfo);
}
log.end().addDefaultFields(clientInfo);
if (req.url.startsWith('/_/')) {
let internalServiceName = req.url.slice(3);
const serviceDelim = internalServiceName.indexOf('/');
if (serviceDelim !== -1) {
internalServiceName = internalServiceName.slice(0, serviceDelim);
}
if (internalHandlers[internalServiceName] === undefined) {
return routesUtils.responseXMLBody(
errors.InvalidURI, undefined, res, log);
}
return internalHandlers[internalServiceName](
clientInfo.clientIP, req, res, log, statsClient);
}
if (statsClient) {
// report new request for stats
statsClient.reportNewRequest('s3');
}
try {
const validHosts = allEndpoints.concat(websiteEndpoints);
routesUtils.normalizeRequest(req, validHosts);
} catch (err) {
log.debug('could not normalize request', { error: err.stack });
return routesUtils.responseXMLBody(
errors.InvalidURI.customizeDescription('Could not parse the ' +
'specified URI. Check your restEndpoints configuration.'),
undefined, res, log);
}
log.addDefaultFields({
bucketName: req.bucketName,
objectKey: req.objectKey,
bytesReceived: req.parsedContentLength || 0,
bodyLength: parseInt(req.headers['content-length'], 10) || 0,
});
const { error, method } = checkUnsupportedRoutes(req.method);
if (error) {
log.trace('error validating route or uri params', { error });
return routesUtils.responseXMLBody(error, null, res, log);
}
const bucketOrKeyError = checkBucketAndKey(req.bucketName, req.objectKey,
req.method, req.query, blacklistedPrefixes, log);
if (bucketOrKeyError) {
log.trace('error with bucket or key value',
{ error: bucketOrKeyError });
return routesUtils.responseXMLBody(bucketOrKeyError, null, res, log);
}
// bucket website request
if (websiteEndpoints && websiteEndpoints.indexOf(req.parsedHost) > -1) {
return routeWebsite(req, res, api, log, statsClient, dataRetrievalFn);
}
return method(req, res, api, log, statsClient, dataRetrievalFn);
}
module.exports = routes;
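A hypothetical wiring of routes() into an HTTP server; every value below is a placeholder stub shaped to satisfy checkTypes(), not a real CloudServer configuration:

const http = require('http');
const routes = require('./lib/s3routes/routes'); // assumed path

const params = {
    api: { callApiMethod: (name, req, res, log, cb) => cb(null) }, // stub
    internalHandlers: {},
    allEndpoints: ['s3.example.com'],
    websiteEndpoints: [],
    blacklistedPrefixes: { bucket: [], object: [] },
    dataRetrievalFn: () => {},
};
const logger = new (require('werelogs').Logger)('Example'); // assumed logger

http.createServer((req, res) => routes(req, res, params, logger))
    .listen(8000);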


@@ -0,0 +1,83 @@
const routesUtils = require('../routesUtils');
const errors = require('../../errors');
function routeDELETE(request, response, api, log, statsClient) {
log.debug('routing request', { method: 'routeDELETE' });
if (request.query.uploadId) {
if (request.objectKey === undefined) {
return routesUtils.responseNoBody(
errors.InvalidRequest.customizeDescription('A key must be ' +
'specified'), null, response, 200, log);
}
api.callApiMethod('multipartDelete', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders, response,
204, log);
});
} else if (request.objectKey === undefined) {
if (request.query.website !== undefined) {
return api.callApiMethod('bucketDeleteWebsite', request,
response, log, (err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 204, log);
});
} else if (request.query.cors !== undefined) {
return api.callApiMethod('bucketDeleteCors', request, response,
log, (err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 204, log);
});
} else if (request.query.replication !== undefined) {
return api.callApiMethod('bucketDeleteReplication', request,
response, log, (err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 204, log);
});
} else if (request.query.lifecycle !== undefined) {
return api.callApiMethod('bucketDeleteLifecycle', request,
response, log, (err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 204, log);
});
}
api.callApiMethod('bucketDelete', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders, response,
204, log);
});
} else {
if (request.query.tagging !== undefined) {
return api.callApiMethod('objectDeleteTagging', request,
response, log, (err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resHeaders,
response, 204, log);
});
}
api.callApiMethod('objectDelete', request, response, log,
(err, corsHeaders) => {
/*
* Since AWS expects a 204 regardless of the existence of
* the object, the errors NoSuchKey and NoSuchVersion should not
* be sent back as a response.
*/
if (err && !err.NoSuchKey && !err.NoSuchVersion) {
return routesUtils.responseNoBody(err, corsHeaders,
response, null, log);
}
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(null, corsHeaders, response,
204, log);
});
}
return undefined;
}
module.exports = routeDELETE;


@@ -0,0 +1,128 @@
const errors = require('../../errors');
const routesUtils = require('../routesUtils');
function routerGET(request, response, api, log, statsClient, dataRetrievalFn) {
log.debug('routing request', { method: 'routerGET' });
if (request.bucketName === undefined && request.objectKey !== undefined) {
routesUtils.responseXMLBody(errors.NoSuchBucket, null, response, log);
} else if (request.bucketName === undefined
&& request.objectKey === undefined) {
// GET service
api.callApiMethod('serviceGet', request, response, log, (err, xml) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log);
});
} else if (request.objectKey === undefined) {
// GET bucket ACL
if (request.query.acl !== undefined) {
api.callApiMethod('bucketGetACL', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.replication !== undefined) {
api.callApiMethod('bucketGetReplication', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.cors !== undefined) {
api.callApiMethod('bucketGetCors', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.versioning !== undefined) {
api.callApiMethod('bucketGetVersioning', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.website !== undefined) {
api.callApiMethod('bucketGetWebsite', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.lifecycle !== undefined) {
api.callApiMethod('bucketGetLifecycle', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.uploads !== undefined) {
// List MultipartUploads
api.callApiMethod('listMultipartUploads', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.location !== undefined) {
api.callApiMethod('bucketGetLocation', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else {
// GET bucket
api.callApiMethod('bucketGet', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
}
} else {
/* eslint-disable no-lonely-if */
if (request.query.acl !== undefined) {
// GET object ACL
api.callApiMethod('objectGetACL', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.tagging !== undefined) {
// GET object Tagging
api.callApiMethod('objectGetTagging', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
// List parts of an open multipart upload
} else if (request.query.uploadId !== undefined) {
api.callApiMethod('listParts', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else {
// GET object
api.callApiMethod('objectGet', request, response, log,
(err, dataGetInfo, resMetaHeaders, range) => {
let contentLength = 0;
if (resMetaHeaders && resMetaHeaders['Content-Length']) {
contentLength = resMetaHeaders['Content-Length'];
}
log.end().addDefaultFields({ contentLength });
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseStreamData(err, request.query,
resMetaHeaders, dataGetInfo, dataRetrievalFn, response,
range, log);
});
}
/* eslint-enable */
}
}
module.exports = routerGET;


@@ -0,0 +1,29 @@
const errors = require('../../errors');
const routesUtils = require('../routesUtils');
function routeHEAD(request, response, api, log, statsClient) {
log.debug('routing request', { method: 'routeHEAD' });
if (request.bucketName === undefined) {
log.trace('head request without bucketName');
routesUtils.responseXMLBody(errors.MethodNotAllowed,
null, response, log);
} else if (request.objectKey === undefined) {
// HEAD bucket
api.callApiMethod('bucketHead', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders, response,
200, log);
});
} else {
// HEAD object
api.callApiMethod('objectHead', request, response, log,
(err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseContentHeaders(err, {}, resHeaders,
response, log);
});
}
}
module.exports = routeHEAD;


@@ -0,0 +1,31 @@
const errors = require('../../errors');
const routesUtils = require('../routesUtils');
function routeOPTIONS(request, response, api, log, statsClient) {
log.debug('routing request', { method: 'routeOPTIONS' });
const corsMethod = request.headers['access-control-request-method'] || null;
if (!request.headers.origin) {
const msg = 'Insufficient information. Origin request header needed.';
const err = errors.BadRequest.customizeDescription(msg);
log.debug('missing origin', { method: 'routeOPTIONS', error: err });
return routesUtils.responseXMLBody(err, undefined, response, log);
}
if (['GET', 'PUT', 'HEAD', 'POST', 'DELETE'].indexOf(corsMethod) < 0) {
const msg = `Invalid Access-Control-Request-Method: ${corsMethod}`;
const err = errors.BadRequest.customizeDescription(msg);
log.debug('invalid Access-Control-Request-Method',
{ method: 'routeOPTIONS', error: err });
return routesUtils.responseXMLBody(err, undefined, response, log);
}
return api.callApiMethod('corsPreflight', request, response, log,
(err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resHeaders, response, 200,
log);
});
}
module.exports = routeOPTIONS;

Some files were not shown because too many files have changed in this diff.