Compare commits

...

63 Commits

Author SHA1 Message Date
Vianney Rancurel c55c4abe6c ft: ARSN-114 in_memory backend
Minor changes.
Create a basic test
2022-03-15 09:42:30 -07:00
Taylor McKinnon 13f33a81a6 ft(CLDSRV-102): Add Aborted MPU PUT 2022-02-03 15:29:35 -08:00
bbuchanan9 08c1a2046d Revert "bugfix: S3C-2052 Delete orphaned data"
This reverts commit d45fbdbf25e93fbc7fded81a718f29d9adca0bbd.
2019-08-28 17:02:26 -07:00
bbuchanan9 ab239ffa54 Revert "bugfix: S3C-2052 Delete orphaned data in APIs"
This reverts commit 5bf5fc861cab38e24209f86055f525aa627be67b.
2019-08-28 14:34:40 -07:00
bbuchanan9 ad1ee70c4e bugfix: S3C-2052 Delete orphaned data in APIs
Cleanup orphaned data in error cases for
the following APIs:

* objectPut
* objectCopy
* objectCopyPart
2019-08-13 16:08:23 -07:00
bbuchanan9 61bb75b276 bugfix: S3C-2052 Delete orphaned data 2019-08-09 10:31:56 -07:00
Dora Korpar 52bbe85463 ft: S3C-1171 list objects v2 2018-09-04 18:00:10 -07:00
Rahul Padigela 8921a0d9c7 fix: update mem versioning to reflect other backends
This commit fixes the memory backend implementation of versioning to be in line with
the versioning implementation for other backends: when versionId is specified, update
the existing version, and also update the master if the put is a newer or the same
version as the master. If the master is not available, create it.
2018-01-08 23:47:56 -08:00
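In condensed form, the rule described above (a sketch mirroring the in-memory metastore shown in the diffs below; names are illustrative):

```javascript
// keyMap stores both master keys ('<key>') and version keys
// ('<key>\0<versionId>'); see the metastore diff further down.
function putVersion(keyMap, objName, objVal, versionId) {
    const master = keyMap.get(objName);
    // Create the master if it is missing, or refresh it when the put
    // targets the version the master currently holds.
    if (!master || master.versionId === versionId) {
        keyMap.set(objName, objVal);
    }
    // Always write the version key itself.
    keyMap.set(`${objName}\u0000${versionId}`, objVal);
}
```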
Electra Chong 10a99a5f44 chore: require modules instead of import 2017-05-09 17:12:09 -07:00
Electra Chong d053da3f6c rf: clean up listing types, parsing
No longer use 'Basic' listing type.
Move JSON parsing for 'DelimiterVersions' to metadata wrapper to be consistent with what we do for other listing types.
Add some more assertions to listing tests.
2017-05-04 11:27:08 -07:00
Electra Chong 2ec6808e91 ft: parse & use repGroupId from config 2017-04-26 14:02:11 -07:00
Vinh Tao ad6faf8a5a ft: versioning for metadata local backends 2017-03-13 13:59:35 +01:00
Electra Chong a486fdd457 rf: metadata params for versioning
- reorder metadata wrapper params to precede callback
- pass callback params directly from bucketclient
2017-02-23 14:39:26 -08:00
Guillaume Gimenez ed2e4d6e9d fix listing corner cases
- comply with AWS on optional response fields
- raise the maxKeys hard limit

The validity of these tests may be verified against AWS S3 with the following
command:
```
AWS_ON_AIR=true npm run ft_test
```
2016-12-12 18:34:14 -08:00
Bennett Buchanan 21486bc9fd COMPAT: Return expected response for `MaxUploads` and `Delimiter` params
Currently, the value of `MaxUploads` for the listMultipartUpload API is
the number of MPUs in a bucket. This is not the behavior of AWS. This
commit changes the API's behavior so the value of `MaxUploads` can be
specified by the user using the `MaxUploads` parameter. This commit also
returns the expected default of 1000 if `MaxUploads` is not defined.
If `MaxUploads` is defined as `0`, the behavior of AWS is to return an
empty `Uploads` array and set `IsTruncated` to `false`. This commit also
adds this expected behavior.

Currently, when using the file-backend, if a `Delimiter` parameter is
defined and it does not produce a non-empty `CommonPrefixes` array,
`Delimiter` is set to ''. This causes the XML response
to omit the `Delimiter` key. For compatibility with AWS, this commit
tests an associated change in Arsenal that fixes the `Delimiter` response.

A new test has been added to verify these behaviors.

[squash me] Use object destructuring for variables

[squash me] Clean up code and comments

[squash me] Remove callbacks and return Promises everywhere

[squash me] Handle 0 `MaxUploads` object in `getExpectedObj()`

[squash me] Separate assertions from object creation

[squash me] Remove redundant assertion statements
2016-11-02 17:51:45 -07:00
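A sketch of the parameter handling this describes (illustrative names, not the repo's exact code):

```javascript
const DEFAULT_MAX_UPLOADS = 1000;

function resolveMaxUploads(params) {
    // Default to 1000 when the user does not pass MaxUploads.
    if (params.MaxUploads === undefined) {
        return DEFAULT_MAX_UPLOADS;
    }
    return Number.parseInt(params.MaxUploads, 10);
}

// MaxUploads === 0 must produce an empty, non-truncated listing:
// { Uploads: [], MaxUploads: 0, IsTruncated: false }
```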
Bennett Buchanan f56df90adb COMPAT: Handle `undefined` values for listMultipartUploads API
This commit fixes issues that were causing listMultipartUploads
to contain 'undefined' as XML content:
* `services.getMultipartUploadListing` in listMultipartUploads.js is
  passing an `undefined` DisplayName for the owner of the MPU and
  an `undefined` StorageClass for an upload. This results in the
  following JSON response for the listMultipartUploads API,
  respectively:

  ```javascript
  "Owner": { "DisplayName": "undefined", ... }
  "Uploads": [{ ... "StorageClass": "undefined", ... }]

  ```
* Changes in getMultipartUploadListing.js fix the
  issues stated in the above bullet point by retrieving the proper
  values for the `DisplayName` keys
* Changes in services.js (the addition of `multipartObjectMD.storageClass`)
  are required for the listMPU API to retrieve the proper `storageClass`
  value
* For values that could be `undefined` in the listMultipartUploads
  API, use empty XML elements if the value is, in fact, `undefined`
* Add tests that check for previously `undefined` values, and remove
  legacy 'list ongoing multipart uploads' tests which would
  be redundant with this commit
* Does not include certain XML elements if not defined (e.g., `Prefix`,
  `Delimiter`)

[squash me] Check for null, use 'x-amz-storage-class' field

[squash me] COMPAT: Do not include certain `undefined` XML elements
2016-10-31 11:07:28 -07:00
David Pineau a91fa0ecda Remove mentions of the forbidden word 2016-05-30 01:04:06 +02:00
Michael Zapata dd63d98ac9 fix(memory_metadata): potential RCEs and crashes
Switch from `Object`s to `Map`s allowing buckets to be created
without any risk
2016-05-24 18:21:18 +02:00
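A short illustration of the class of problem the switch avoids (hypothetical snippet):

```javascript
// With a plain Object, bucket names collide with prototype properties:
const buckets = {};
console.log('__proto__' in buckets);      // true — key is already "taken"
buckets['hasOwnProperty'] = 'my-bucket';  // shadows a method callers rely on

// A Map keeps keys and the prototype strictly separate:
const safe = new Map();
console.log(safe.has('__proto__'));       // false
safe.set('hasOwnProperty', 'my-bucket');  // just an ordinary key
```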
alexandre-merle ab5dc5c444 FT: Add backward compatibility
- Add backward compatibility for splitter
2016-05-22 19:58:39 +02:00
alexandre.merle dae83707f3 FT: Allow objectKey with splitter
- Change the splitter to a value that is invalid
  in bucket names, to avoid conflicts
- Replace splitting on the splitter with indexOf
  and lastIndexOf to find the parts of the key
2016-05-22 06:03:42 +02:00
Antonin Coulibaly 26902a0b83 FIX backend to use BucketInfo class
* Update bucketclient backend
* Update in_memory backend
* FIX the metadata wrapper (use the underscore naming)
2016-05-02 16:22:18 +02:00
Lauren Spiegel 4409b43622 (BucketInfo) - use the new BucketInfo methods
* update associated tests
2016-04-07 17:21:29 -07:00
Rahul Padigela 21ca9104c1 bf: sort keys by their unicode points
AWS orders keys by Unicode code points rather than a natural or
human sort. This commit removes the natural-compare-lite dependency,
which had been used as the sort comparison function until now.
2016-05-10 16:35:49 -07:00
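For instance, JavaScript's default string comparison matches this ordering for such keys:

```javascript
// Code-point order puts uppercase before lowercase and 'key10' before
// 'key2', unlike a natural/human sort.
const keys = ['key10', 'key2', 'Zebra', 'apple'];
keys.sort(); // → ['Zebra', 'apple', 'key10', 'key2']
```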
Rahul Padigela f38c39bdc2 bf: convert sync calls to async
The in-memory backend emulators are being used as if they were async
when they are actually synchronous. Wrapping these calls with async.js
results in a 'Maximum call stack size exceeded' error, as all functions
are called on the same tick. Wrapping them with process.nextTick()
ensures they are called on the next tick of the event loop.

Fix #527
2016-05-09 10:34:07 -07:00
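The pattern, in miniature (a sketch, not the repo's code):

```javascript
// Synchronous: iterating over many items with async.js-style helpers can
// nest every callback on the same tick, growing the stack until it
// overflows.
function getSync(store, key, cb) {
    cb(null, store.get(key));
}

// Deferred: process.nextTick() runs the callback after the current stack
// unwinds, so the iteration no longer nests.
function getAsync(store, key, cb) {
    process.nextTick(() => cb(null, store.get(key)));
}
```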
Lauren Spiegel c2b401d7c3 Refactor Bucket class as new BucketInfo class 2016-04-04 14:34:27 -07:00
Antonin Coulibaly 70a36337ec (linter) CLEAN lint errors 2016-04-26 14:53:50 +02:00
Michael Zapata f73ad7cc3b feat(routes): add a getBucketAndObject route
Allows parallelizing operations, avoiding many unnecessary exchanges
between S3 and Metadata, by sending back an object of this form:
between S3 and Metadata, by sending back an object of this form:
```json
{
  "bucket": {},
  "obj": {}
}
```
with `bucket` and `obj` being of the form previously available.
2016-04-08 17:52:43 +02:00
claire d3f7d01162 Handle CORS and policy metadata
Cleaned up response to s3cmd info call.
Closes #418
2016-03-29 14:41:09 -07:00
Lauren Spiegel a8b1d06818 BF: Handle keys with spaces
This closes #381.
2016-03-04 18:36:51 -08:00
Rahul Padigela a8a0af06b1 FT: Multipart Upload Listing 2016-02-25 10:41:00 -08:00
Antonin Coulibaly d9100dd72b (errors) In_memory_backend - Add arsenal errors
* Update associated tests
2016-03-03 15:29:05 +01:00
Michael Zapata 426c4628af style(lib): improve readability 2016-03-02 15:07:39 +01:00
Lauren Spiegel f83f2fab27 Fix next marker in bucket listing.
NextMarker should return the last item in the listing.
The last item could be a common prefix or a content key.
This closes #318.
2016-02-18 11:45:24 -08:00
Michael Zapata 527ecdd00a cleanup(code): follow guidelines more closely 2016-02-06 17:29:54 +01:00
Lauren Spiegel 8dad5d526b Use info from Vault throughout S3
This closes #75 and closes #180.
2016-01-28 14:42:56 -08:00
Lauren Spiegel b7ecaa93ec Shorten key for mpu overview.
Now the MPU overview key stores only the object key
and uploadId. The rest of the data is stored as the object
value and returned in the listing function.

This works with metadata branch dev/IntegrateMPU
and bucketclient branch dev/uriencode.
2016-01-22 12:33:35 -08:00
Lauren Spiegel d12a21249d Shorten list parts key.
The only items in the list parts key are now the uploadId and the partNumber.
The remaining info (size, etag, last modified and data locations) is
now sent as the value of the object to MD and returned in the listing
function.

This branch works with metadata branch dev/dev/ShortenMPUKeys
2016-01-21 17:46:27 -08:00
Lauren Spiegel 6bede12bba URI Encode in bucket client and metadata should decode upon receipt.
MPU works with this branch, dev/IntegrateMPU branch of metadata
and dev/uriencode branch of bucketclient
2016-01-20 16:58:19 -08:00
Lauren Spiegel c4cb173841 URI encode and decode mpu keys
With these changes, MPU works with metadata.

URI encode and decode part keys and replace '-' and '.'
in key names.
2016-01-14 17:32:17 -08:00
Rached Ben Mustapha aa94555b02 FT: Forward logging context to bucket backends 2016-01-14 17:33:55 +01:00
Lauren Spiegel 879ac76ebc Move constant strings out of config 2016-01-14 11:42:23 -08:00
Lauren Spiegel 5a91754821 HF: Fix getServiceIntegration due to rebase issue
The branch that was merged in for getServiceIntegration
did not have the final changes.  This commit fixes that.

This uses the ListObject functionality, as users are actually
`buckets` in our metadata backend.
2015-12-28 17:59:35 -08:00
Lauren Spiegel c41adcc6ad Modify getService to integrate with metadata
This uses the ListObject functionality, as users are actually
`buckets` in our metadata backend.
2015-12-28 17:59:35 -08:00
Michael Zapata 6274b7b8c4 fix(MDwrapper): add updateBucket() method
This allows us to put new ACLs when needed in IM-Metadata.
2016-01-11 15:06:25 +01:00
Michael Zapata a60594a86d fix(MDwrapper): serialization on metadata backend
Centralize the parsing in one place, while making the memory backend
easier to work with.

Allow the use of the new scality/bucketclient version.
2016-01-07 13:59:11 +01:00
Rahul Padigela 3a1e2938ae BF: Conditionally check delete marker in object MD
x-amz-delete-marker is not always present in object metadata,
so we have to check for its existence conditionally.
2016-01-05 17:21:37 +01:00
Michael Zapata fd5a3563c7 fix(UIDs): remove bucketUID and objectUID
Working with hashes in the database made things harder to debug when
integrating other components. This change allows for easier debugging
while removing an unneeded overhead as well as offering a boost in
readability in the internal API.
2016-01-04 16:57:49 +01:00
Adrien Vergé 1f5b23f72f Config: Introduce a config.json parser
Some parameters need to be settable to adapt to a given environment:
port to listen on, bucketd connection info, vault connection info, etc.

This commit introduces a `config.json` file that is parsed through the
`lib/Config` class. The default location of the JSON file can be
overridden using the S3_CONFIG_FILE environment variable, similarly to
other projects (Vault, MetaData...)
2016-01-04 17:15:14 +01:00
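A minimal sketch of that pattern (not the actual `lib/Config` implementation):

```javascript
const fs = require('fs');
const path = require('path');

// S3_CONFIG_FILE overrides the default config.json location.
const configPath = process.env.S3_CONFIG_FILE
    || path.join(__dirname, 'config.json');
const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
```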
Michael Zapata e8925f3e31 fix(MD): stringify objectMD sent to bucketclient
The Node HTTP API cannot handle sending a raw object, hence the
stringification.

Beware: this is a breaking change, complemented by a
refactor of the way we handle `Date` instances, since the
stringification erases that kind of info.
2015-12-21 17:17:13 +01:00
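For example, a round trip through JSON turns a `Date` into a plain ISO string:

```javascript
const md = { lastModified: new Date() };
const received = JSON.parse(JSON.stringify(md)); // what survives the wire
console.log(received.lastModified instanceof Date); // false — now a string
```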
Rahul Padigela 0ca1c1f050 BF: ETag should be hex and enclosed in quotes
Content-MD5 will be stored as hex (if it is base64-encoded, it will be
converted into hex) and the ETag response will be the content-md5 hex
enclosed in quotes. This fixes #68.
2015-12-21 12:16:41 -08:00
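For instance, converting a base64 Content-MD5 to the quoted-hex ETag form (illustrative values):

```javascript
// MD5 of 'hello world', base64-encoded as in a Content-MD5 header:
const md5B64 = 'XrY7u+Ae7tCTyyK7j1rNww==';
const etag = `"${Buffer.from(md5B64, 'base64').toString('hex')}"`;
console.log(etag); // "5eb63bbbe01eeed093cb22bb8f5acdc3" (with quotes)
```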
Michael Zapata f9f2fbb0e2 style(s3): edit lines over 80 columns 2015-12-17 14:43:01 +01:00
Michael Zapata e9277ce008 feat(MDwrapper): wrap listObject operations
We use an ugly hack to get back bucketUIDs when needed; they will be
dropped soon enough, cf. #116.

- Add listObject function in the wrapper
- Remove listObject methods in Bucket
- Update the listObject API to have more encapsulation
- Remove calls to said methods
- Replace them by the wrapper function
- Update associated unit tests
2015-12-12 16:52:27 +01:00
Michael Zapata 5f6d904bd5 feature(MDwrapper): wrap deleteObjectMD operations
We use an ugly hack to get back bucketUIDs when needed; they will be
dropped soon enough, cf. #116.

- Add deleteObject function in the wrapper
- Remove deleteObject methods in Bucket
- Remove calls to said methods
- Replace them by the wrapper function
- Update associated unit tests
2015-12-11 20:14:40 +01:00
Michael Zapata 9a644a0559 feature(MDwrapper): wrap getObjectMD operations
We use an ugly hack to get back bucketUIDs when needed; they will be
dropped soon enough, cf. #116.

- Add getObject function in the wrapper
- Remove getObject methods in Bucket
- Remove calls to said methods
- Replace them by the wrapper function
- Update associated unit tests
2015-12-11 17:41:59 +01:00
Michael Zapata 812f1c7492 feature(MDwrapper): wrap putObjectMD operations
We use an ugly hack to get back bucketUIDs when needed; they will be
dropped soon enough, cf. #116.

- Add putObject function in the wrapper
- Remove putObject methods in Bucket
- Remove calls to said methods
- Replace them by the wrapper function
- Update associated unit tests
2015-12-11 14:57:50 +01:00
Michael Zapata 1bd47c7b35 feature(MDwrapper): wrap deleteBucket operations
Bucket deletion is abstracted away, allowing for multiple backends.
2015-12-09 12:37:35 +01:00
Michael Zapata d96a2b07cc feature(MDwrapper): wrap createBucket operations
All bucket creation is abstracted away, allowing for multiple backends.
2015-12-08 15:56:18 +01:00
Michael Zapata 179aaa4501 feature(MDwrapper): wrap getBucket operations
Remove all hardcoded bits, and do that in the backend.
2015-12-08 15:39:50 +01:00
Lauren Spiegel 3cc563d02f FT: Implement list multipart uploads 2015-11-23 17:05:17 -08:00
Lauren Spiegel 339c7c2459 Refactor multipart upload so more efficient 2015-11-25 10:16:25 -08:00
Lauren Spiegel addd7b5fe3 FT: Implement list multipart uploads 2015-11-23 17:05:17 -08:00
Michael Zapata 7f020f3e3c refactor(acl): separate acl logic from services
Edit Bucket constructor to avoid having bucket logic outside of it.
2015-12-08 12:01:59 +01:00
Michael Zapata 7d332764be refactor(structure): reorganise directories
Compartmentalize every component into a meaningful directory, to prepare
for future backend implementations
2015-11-25 15:13:20 +01:00
13 changed files with 766 additions and 18 deletions

View File

@@ -106,6 +106,13 @@ module.exports = {
             require('./lib/storage/metadata/file/MetadataFileClient'),
         LogConsumer:
             require('./lib/storage/metadata/bucketclient/LogConsumer'),
+        inMemory: {
+            metastore:
+                require('./lib/storage/metadata/in_memory/metastore'),
+            metadata: require('./lib/storage/metadata/in_memory/metadata'),
+            bucketUtilities:
+                require('./lib/storage/metadata/in_memory/bucket_utilities'),
+        },
     },
     data: {
         file: {

View File

@@ -0,0 +1,9 @@
module.exports = {
    Basic: require('./basic').List,
    Delimiter: require('./delimiter').Delimiter,
    DelimiterVersions: require('./delimiterVersions')
        .DelimiterVersions,
    DelimiterMaster: require('./delimiterMaster')
        .DelimiterMaster,
    MPU: require('./MPU').MultipartUploads,
};

View File

@@ -3,8 +3,6 @@ const errors = require('../../errors');
 const BucketInfo = require('../../models/BucketInfo');
 const BucketClientInterface = require('./bucketclient/BucketClientInterface');
-const BucketFileInterface = require('./file/BucketFileInterface');
-const MongoClientInterface = require('./mongoclient/MongoClientInterface');
 const metastore = require('./in_memory/metastore');

 let CdmiMetadata;
@@ -71,25 +69,10 @@ class MetadataWrapper {
         if (clientName === 'mem') {
             this.client = metastore;
             this.implName = 'memorybucket';
-        } else if (clientName === 'file') {
-            this.client = new BucketFileInterface(params, logger);
-            this.implName = 'bucketfile';
         } else if (clientName === 'scality') {
             this.client = new BucketClientInterface(params, bucketclient,
                 logger);
             this.implName = 'bucketclient';
-        } else if (clientName === 'mongodb') {
-            this.client = new MongoClientInterface({
-                replicaSetHosts: params.mongodb.replicaSetHosts,
-                writeConcern: params.mongodb.writeConcern,
-                replicaSet: params.mongodb.replicaSet,
-                readPreference: params.mongodb.readPreference,
-                database: params.mongodb.database,
-                replicationGroupId: params.replicationGroupId,
-                path: params.mongodb.path,
-                logger,
-            });
-            this.implName = 'mongoclient';
         } else if (clientName === 'cdmi') {
             if (!CdmiMetadata) {
                 throw new Error('Unauthorized backend');

View File

@@ -0,0 +1,34 @@
const ListResult = require('./ListResult');

class ListMultipartUploadsResult extends ListResult {
    constructor() {
        super();
        this.Uploads = [];
        this.NextKeyMarker = undefined;
        this.NextUploadIdMarker = undefined;
    }

    addUpload(uploadInfo) {
        this.Uploads.push({
            key: decodeURIComponent(uploadInfo.key),
            value: {
                UploadId: uploadInfo.uploadId,
                Initiator: {
                    ID: uploadInfo.initiatorID,
                    DisplayName: uploadInfo.initiatorDisplayName,
                },
                Owner: {
                    ID: uploadInfo.ownerID,
                    DisplayName: uploadInfo.ownerDisplayName,
                },
                StorageClass: uploadInfo.storageClass,
                Initiated: uploadInfo.initiated,
            },
        });
        this.MaxKeys += 1;
    }
}

module.exports = {
    ListMultipartUploadsResult,
};

View File

@@ -0,0 +1,27 @@
class ListResult {
    constructor() {
        this.IsTruncated = false;
        this.NextMarker = undefined;
        this.CommonPrefixes = [];
        /*
        Note: this.MaxKeys will get incremented as
        keys are added so that when response is returned,
        this.MaxKeys will equal total keys in response
        (with each CommonPrefix counting as 1 key)
        */
        this.MaxKeys = 0;
    }

    addCommonPrefix(prefix) {
        if (!this.hasCommonPrefix(prefix)) {
            this.CommonPrefixes.push(prefix);
            this.MaxKeys += 1;
        }
    }

    hasCommonPrefix(prefix) {
        return (this.CommonPrefixes.indexOf(prefix) !== -1);
    }
}

module.exports = ListResult;

View File

@@ -0,0 +1,62 @@
# bucket_mem design
## RATIONALE
The bucket API will be used for managing buckets behind the S3 interface.
We plan to have only 2 backends using this interface:
* One production backend
* One debug backend purely in memory
One important remark here is that we don't want an abstraction but a
duck-typing style interface (different classes MemoryBucket and Bucket having
the same methods putObjectMD(), getObjectMD(), etc).
Notes about the memory backend: The backend is currently a simple key/value
store in memory. The functions actually use nextTick() to emulate the future
asynchronous behavior of the production backend.
## BUCKET API
The bucket API is a very simple API with 5 functions:
- putObjectMD(): put metadata for an object in the bucket
- getObjectMD(): get metadata from the bucket
- deleteObjectMD(): delete metadata for an object from the bucket
- deleteBucketMD(): delete a bucket
- getBucketListObjects(): perform the complex bucket listing AWS search
function with various flavors. This function returns a response in a
ListBucketResult object.
getBucketListObjects(prefix, marker, delimiter, maxKeys, callback) behaves as
follows:
prefix (not required): Limits the response to keys that begin with the
specified prefix. You can use prefixes to separate a bucket into different
groupings of keys. (You can think of using prefix to make groups in the same
way you'd use a folder in a file system.)
marker (not required): Specifies the key to start with when listing objects in
a bucket. Amazon S3 returns object keys in alphabetical order, starting with
the key after the marker.
delimiter (not required): A delimiter is a character you use to group keys.
All keys that contain the same string between the prefix, if specified, and the
first occurrence of the delimiter after the prefix are grouped under a single
result element, CommonPrefixes. If you don't specify the prefix parameter, then
the substring starts at the beginning of the key. The keys that are grouped
under CommonPrefixes are not returned elsewhere in the response.
maxKeys: Sets the maximum number of keys returned in the response body. You can
add this to your request if you want to retrieve fewer than the default 1000
keys. The response might contain fewer keys but will never contain more. If
there are additional keys that satisfy the search criteria but were not
returned because maxKeys was exceeded, the response contains an attribute of
IsTruncated set to true and a NextMarker. To return the additional keys, call
the function again using NextMarker as your marker argument.
Any key that does not contain the delimiter will be returned individually in
Contents rather than in CommonPrefixes.
If there is an error, the error subfield is returned in the response.
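For example, a call grouping keys under a "folder" prefix might look like this (illustrative values):

```javascript
getBucketListObjects('photos/2016/', undefined, '/', 1000, (err, result) => {
    // result.CommonPrefixes → e.g. ['photos/2016/jan/', 'photos/2016/jun/']
    // result.Contents       → keys under the prefix with no further '/'
    // result.IsTruncated / result.NextMarker → set when maxKeys was hit
});
```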

View File

@@ -0,0 +1,51 @@
function markerFilterMPU(allMarkers, array) {
    const { keyMarker, uploadIdMarker } = allMarkers;
    for (let i = 0; i < array.length; i++) {
        // If the keyMarker is the same as the key,
        // check the uploadIdMarker. If uploadIdMarker is the same
        // as or alphabetically after the uploadId of the item,
        // eliminate the item.
        if (uploadIdMarker && keyMarker === array[i].key) {
            const laterId =
                [uploadIdMarker, array[i].uploadId].sort()[1];
            if (array[i].uploadId === laterId) {
                break;
            } else {
                array.shift();
                i--;
            }
        } else {
            // If the keyMarker is alphabetically after the key
            // of the item in the array, eliminate the item from the array.
            const laterItem =
                [keyMarker, array[i].key].sort()[1];
            if (keyMarker === array[i].key || keyMarker === laterItem) {
                array.shift();
                i--;
            } else {
                break;
            }
        }
    }
    return array;
}

function prefixFilter(prefix, array) {
    for (let i = 0; i < array.length; i++) {
        if (array[i].indexOf(prefix) !== 0) {
            array.splice(i, 1);
            i--;
        }
    }
    return array;
}

function isKeyInContents(responseObject, key) {
    return responseObject.Contents.some(val => val.key === key);
}

module.exports = {
    markerFilterMPU,
    prefixFilter,
    isKeyInContents,
};

View File

@@ -0,0 +1,148 @@
const errors = require('../../../errors');

const { markerFilterMPU, prefixFilter } = require('./bucket_utilities');
const { ListMultipartUploadsResult } = require('./ListMultipartUploadsResult');
const { metadata } = require('./metadata');

const defaultMaxKeys = 1000;

function getMultipartUploadListing(bucket, params, callback) {
    const { delimiter, keyMarker,
        uploadIdMarker, prefix, queryPrefixLength, splitter } = params;
    const splitterLen = splitter.length;
    const maxKeys = params.maxKeys !== undefined ?
        Number.parseInt(params.maxKeys, 10) : defaultMaxKeys;
    const response = new ListMultipartUploadsResult();
    const keyMap = metadata.keyMaps.get(bucket.getName());
    if (prefix) {
        response.Prefix = prefix;
        if (typeof prefix !== 'string') {
            return callback(errors.InvalidArgument);
        }
    }
    if (keyMarker) {
        response.KeyMarker = keyMarker;
        if (typeof keyMarker !== 'string') {
            return callback(errors.InvalidArgument);
        }
    }
    if (uploadIdMarker) {
        response.UploadIdMarker = uploadIdMarker;
        if (typeof uploadIdMarker !== 'string') {
            return callback(errors.InvalidArgument);
        }
    }
    if (delimiter) {
        response.Delimiter = delimiter;
        if (typeof delimiter !== 'string') {
            return callback(errors.InvalidArgument);
        }
    }
    if (maxKeys && typeof maxKeys !== 'number') {
        return callback(errors.InvalidArgument);
    }
    // Sort uploads alphabetically by objectKey and, if the objectKey is
    // the same, in ascending order by time initiated
    let uploads = [];
    keyMap.forEach((val, key) => {
        uploads.push(key);
    });
    uploads.sort((a, b) => {
        const aIndex = a.indexOf(splitter);
        const bIndex = b.indexOf(splitter);
        const aObjectKey = a.substring(aIndex + splitterLen);
        const bObjectKey = b.substring(bIndex + splitterLen);
        const aInitiated = keyMap.get(a).initiated;
        const bInitiated = keyMap.get(b).initiated;
        if (aObjectKey === bObjectKey) {
            if (Date.parse(aInitiated) >= Date.parse(bInitiated)) {
                return 1;
            }
            if (Date.parse(aInitiated) < Date.parse(bInitiated)) {
                return -1;
            }
        }
        return (aObjectKey < bObjectKey) ? -1 : 1;
    });
    // Edit the uploads array so it only
    // contains keys that contain the prefix
    uploads = prefixFilter(prefix, uploads);
    uploads = uploads.map(stringKey => {
        const index = stringKey.indexOf(splitter);
        const index2 = stringKey.indexOf(splitter, index + splitterLen);
        const storedMD = keyMap.get(stringKey);
        return {
            key: stringKey.substring(index + splitterLen, index2),
            uploadId: stringKey.substring(index2 + splitterLen),
            bucket: storedMD.eventualStorageBucket,
            initiatorID: storedMD.initiator.ID,
            initiatorDisplayName: storedMD.initiator.DisplayName,
            ownerID: storedMD['owner-id'],
            ownerDisplayName: storedMD['owner-display-name'],
            storageClass: storedMD['x-amz-storage-class'],
            initiated: storedMD.initiated,
        };
    });
    // If keyMarker specified, edit the uploads array so it
    // only contains keys that occur alphabetically after the marker.
    // If there is also an uploadIdMarker specified, filter to eliminate
    // any uploads that share the keyMarker and have an uploadId before
    // the uploadIdMarker.
    if (keyMarker) {
        const allMarkers = {
            keyMarker,
            uploadIdMarker,
        };
        uploads = markerFilterMPU(allMarkers, uploads);
    }
    // Iterate through uploads and filter uploads
    // with keys containing delimiter
    // into response.CommonPrefixes and filter remaining uploads
    // into response.Uploads
    for (let i = 0; i < uploads.length; i++) {
        const currentUpload = uploads[i];
        // If hit maxKeys, stop adding keys to response
        if (response.MaxKeys >= maxKeys) {
            response.IsTruncated = true;
            break;
        }
        // If a delimiter is specified, find its
        // index in the current key AFTER THE OCCURRENCE OF THE PREFIX
        // THAT WAS SENT IN THE QUERY (not the prefix including the splitter
        // and other elements)
        let delimiterIndexAfterPrefix = -1;
        const currentKeyWithoutPrefix =
            currentUpload.key.slice(queryPrefixLength);
        let sliceEnd;
        if (delimiter) {
            delimiterIndexAfterPrefix = currentKeyWithoutPrefix
                .indexOf(delimiter);
            sliceEnd = delimiterIndexAfterPrefix + queryPrefixLength;
        }
        // If delimiter occurs in current key, add key to
        // response.CommonPrefixes.
        // Otherwise add upload to response.Uploads
        if (delimiterIndexAfterPrefix > -1) {
            const keySubstring = currentUpload.key.slice(0, sliceEnd + 1);
            response.addCommonPrefix(keySubstring);
        } else {
            response.NextKeyMarker = currentUpload.key;
            response.NextUploadIdMarker = currentUpload.uploadId;
            response.addUpload(currentUpload);
        }
    }
    // `response.MaxKeys` should be the value from the original `MaxUploads`
    // parameter specified by the user (or else the default 1000). Redefine it
    // here, so it does not equal the value of `uploads.length`.
    response.MaxKeys = maxKeys;
    // If `response.MaxKeys` is 0, `response.IsTruncated` should be `false`.
    response.IsTruncated = maxKeys === 0 ? false : response.IsTruncated;
    return callback(null, response);
}

module.exports = getMultipartUploadListing;

View File

@@ -0,0 +1,8 @@
const metadata = {
    buckets: new Map(),
    keyMaps: new Map(),
};

module.exports = {
    metadata,
};

View File

@@ -0,0 +1,334 @@
const errors = require('../../../errors');

const list = require('../../../algos/list/exportAlgos');
const genVID =
    require('../../../versioning/VersionID').generateVersionId;
const getMultipartUploadListing = require('./getMultipartUploadListing');
const { metadata } = require('./metadata');

const defaultMaxKeys = 1000;
let uidCounter = 0;

function generateVersionId() {
    return genVID(uidCounter++, undefined);
}
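
// A version key is '<objectKey>\0<versionId>'. The NUL separator sorts
// before every printable character, so an object's versions group directly
// after its master key in the sorted key map.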
function formatVersionKey(key, versionId) {
    return `${key}\0${versionId}`;
}
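
// inc('foo\0') returns 'foo\u0001', the smallest string greater than every
// version key of 'foo'; deleteObject below uses it as an exclusive upper
// bound when scanning for an object's remaining versions.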
function inc(str) {
    return str ? (str.slice(0, str.length - 1) +
        String.fromCharCode(str.charCodeAt(str.length - 1) + 1)) : str;
}

const metastore = {
    createBucket: (bucketName, bucketMD, log, cb) => {
        process.nextTick(() => {
            metastore.getBucketAttributes(bucketName, log, (err, bucket) => {
                // TODO Check whether user already owns the bucket,
                // if so return "BucketAlreadyOwnedByYou"
                // If not owned by user, return "BucketAlreadyExists"
                if (bucket) {
                    return cb(errors.BucketAlreadyExists);
                }
                metadata.buckets.set(bucketName, bucketMD);
                metadata.keyMaps.set(bucketName, new Map());
                return cb();
            });
        });
    },

    putBucketAttributes: (bucketName, bucketMD, log, cb) => {
        process.nextTick(() => {
            metastore.getBucketAttributes(bucketName, log, err => {
                if (err) {
                    return cb(err);
                }
                metadata.buckets.set(bucketName, bucketMD);
                return cb();
            });
        });
    },

    getBucketAttributes: (bucketName, log, cb) => {
        process.nextTick(() => {
            if (!metadata.buckets.has(bucketName)) {
                return cb(errors.NoSuchBucket);
            }
            return cb(null, metadata.buckets.get(bucketName));
        });
    },

    deleteBucket: (bucketName, log, cb) => {
        process.nextTick(() => {
            metastore.getBucketAttributes(bucketName, log, err => {
                if (err) {
                    return cb(err);
                }
                if (metadata.keyMaps.has(bucketName)
                    && metadata.keyMaps.get(bucketName).size > 0) {
                    return cb(errors.BucketNotEmpty);
                }
                metadata.buckets.delete(bucketName);
                metadata.keyMaps.delete(bucketName);
                return cb(null);
            });
        });
    },

    putObject: (bucketName, objName, objVal, params, log, cb) => {
        process.nextTick(() => {
            // Ignore the PUT done by AbortMPU
            if (params && params.isAbort) {
                return cb(null);
            }
            return metastore.getBucketAttributes(bucketName, log, err => {
                if (err) {
                    return cb(err);
                }
                /*
                valid combinations of versioning options:
                - !versioning && !versionId: normal non-versioning put
                - versioning && !versionId: create a new version
                - versionId: update (PUT/DELETE) an existing version,
                  and also update the master version if the put version
                  is newer than or the same as the master.
                  if versionId === '' update master version
                */
                if (params && params.versionId) {
                    objVal.versionId = params.versionId; // eslint-disable-line
                    const mst = metadata.keyMaps.get(bucketName).get(objName);
                    if ((mst && mst.versionId === params.versionId) || !mst) {
                        metadata.keyMaps.get(bucketName).set(objName, objVal);
                    }
                    // eslint-disable-next-line
                    objName = formatVersionKey(objName, params.versionId);
                    metadata.keyMaps.get(bucketName).set(objName, objVal);
                    return cb(null, `{"versionId":"${objVal.versionId}"}`);
                }
                if (params && params.versioning) {
                    const versionId = generateVersionId();
                    objVal.versionId = versionId; // eslint-disable-line
                    metadata.keyMaps.get(bucketName).set(objName, objVal);
                    // eslint-disable-next-line
                    objName = formatVersionKey(objName, versionId);
                    metadata.keyMaps.get(bucketName).set(objName, objVal);
                    return cb(null, `{"versionId":"${versionId}"}`);
                }
                if (params && params.versionId === '') {
                    const versionId = generateVersionId();
                    objVal.versionId = versionId; // eslint-disable-line
                    metadata.keyMaps.get(bucketName).set(objName, objVal);
                    return cb(null, `{"versionId":"${objVal.versionId}"}`);
                }
                metadata.keyMaps.get(bucketName).set(objName, objVal);
                return cb(null);
            });
        });
    },

    getBucketAndObject: (bucketName, objName, params, log, cb) => {
        process.nextTick(() => {
            metastore.getBucketAttributes(bucketName, log, (err, bucket) => {
                if (err) {
                    return cb(err, { bucket });
                }
                if (params && params.versionId) {
                    // eslint-disable-next-line
                    objName = formatVersionKey(objName, params.versionId);
                }
                if (!metadata.keyMaps.has(bucketName)
                    || !metadata.keyMaps.get(bucketName).has(objName)) {
                    return cb(null, { bucket: bucket.serialize() });
                }
                return cb(null, {
                    bucket: bucket.serialize(),
                    obj: JSON.stringify(
                        metadata.keyMaps.get(bucketName).get(objName)
                    ),
                });
            });
        });
    },

    getObject: (bucketName, objName, params, log, cb) => {
        process.nextTick(() => {
            metastore.getBucketAttributes(bucketName, log, err => {
                if (err) {
                    return cb(err);
                }
                if (params && params.versionId) {
                    // eslint-disable-next-line
                    objName = formatVersionKey(objName, params.versionId);
                }
                if (!metadata.keyMaps.has(bucketName)
                    || !metadata.keyMaps.get(bucketName).has(objName)) {
                    return cb(errors.NoSuchKey);
                }
                return cb(null, metadata.keyMaps.get(bucketName).get(objName));
            });
        });
    },

    deleteObject: (bucketName, objName, params, log, cb) => {
        process.nextTick(() => {
            metastore.getBucketAttributes(bucketName, log, err => {
                if (err) {
                    return cb(err);
                }
                if (!metadata.keyMaps.get(bucketName).has(objName)) {
                    return cb(errors.NoSuchKey);
                }
                if (params && params.versionId) {
                    const baseKey = inc(formatVersionKey(objName, ''));
                    const vobjName = formatVersionKey(objName,
                        params.versionId);
                    metadata.keyMaps.get(bucketName).delete(vobjName);
                    const mst = metadata.keyMaps.get(bucketName).get(objName);
                    if (mst.versionId === params.versionId) {
                        const keys = [];
                        metadata.keyMaps.get(bucketName).forEach((val, key) => {
                            if (key < baseKey && key > vobjName) {
                                keys.push(key);
                            }
                        });
                        if (keys.length === 0) {
                            metadata.keyMaps.get(bucketName).delete(objName);
                            return cb();
                        }
                        const key = keys.sort()[0];
                        const value = metadata.keyMaps.get(bucketName).get(key);
                        metadata.keyMaps.get(bucketName).set(objName, value);
                    }
                    return cb();
                }
                metadata.keyMaps.get(bucketName).delete(objName);
                return cb();
            });
        });
    },

    _hasDeleteMarker(key, keyMap) {
        const objectMD = keyMap.get(key);
        if (objectMD['x-amz-delete-marker'] !== undefined) {
            return (objectMD['x-amz-delete-marker'] === true);
        }
        return false;
    },

    listObject(bucketName, params, log, cb) {
        process.nextTick(() => {
            const {
                prefix,
                marker,
                delimiter,
                maxKeys,
                continuationToken,
                startAfter,
            } = params;
            if (prefix && typeof prefix !== 'string') {
                return cb(errors.InvalidArgument);
            }
            if (marker && typeof marker !== 'string') {
                return cb(errors.InvalidArgument);
            }
            if (delimiter && typeof delimiter !== 'string') {
                return cb(errors.InvalidArgument);
            }
            if (maxKeys && typeof maxKeys !== 'number') {
                return cb(errors.InvalidArgument);
            }
            if (continuationToken && typeof continuationToken !== 'string') {
                return cb(errors.InvalidArgument);
            }
            if (startAfter && typeof startAfter !== 'string') {
                return cb(errors.InvalidArgument);
            }
            // An undefined maxKeys would be set by a default parameter,
            // but a null maxKeys would not, so replace null with the
            // default explicitly.
            let numKeys = maxKeys;
            if (numKeys === null) {
                numKeys = defaultMaxKeys;
            }
            if (!metadata.keyMaps.has(bucketName)) {
                return cb(errors.NoSuchBucket);
            }
            // The listing params produced by the extension restrict the key
            // range, so when a marker is specified only keys that occur
            // alphabetically after it are collected
            const listingType = params.listingType;
            const extension = new list[listingType](params, log);
            const listingParams = extension.genMDParams();
            const keys = [];
            metadata.keyMaps.get(bucketName).forEach((val, key) => {
                if (listingParams.gt && listingParams.gt >= key) {
                    return null;
                }
                if (listingParams.gte && listingParams.gte > key) {
                    return null;
                }
                if (listingParams.lt && key >= listingParams.lt) {
                    return null;
                }
                if (listingParams.lte && key > listingParams.lte) {
                    return null;
                }
                return keys.push(key);
            });
            keys.sort();
            // Iterate through keys array and filter keys containing
            // delimiter into response.CommonPrefixes and filter remaining
            // keys into response.Contents
            for (let i = 0; i < keys.length; ++i) {
                const currentKey = keys[i];
                // Do not list objects with delete markers
                if (this._hasDeleteMarker(currentKey,
                    metadata.keyMaps.get(bucketName))) {
                    continue;
                }
                const objMD = metadata.keyMaps.get(bucketName).get(currentKey);
                const value = JSON.stringify(objMD);
                const obj = {
                    key: currentKey,
                    value,
                };
                // Calling extension.filter(obj) adds the obj to the listing
                // result if it is not filtered out; it returns a negative
                // value once max keys is reached.
                if (extension.filter(obj) < 0) {
                    break;
                }
            }
            return cb(null, extension.result());
        });
    },

    listMultipartUploads(bucketName, listingParams, log, cb) {
        process.nextTick(() => {
            metastore.getBucketAttributes(bucketName, log, (err, bucket) => {
                if (bucket === undefined) {
                    // No ongoing multipart uploads; return an empty listing
                    return cb(null, {
                        IsTruncated: false,
                        NextMarker: undefined,
                        MaxKeys: 0,
                    });
                }
                return getMultipartUploadListing(bucket, listingParams, cb);
            });
        });
    },
};

module.exports = metastore;

View File

@@ -3,7 +3,7 @@
     "engines": {
         "node": ">=16"
     },
-    "version": "7.10.13",
+    "version": "7.10.14",
     "description": "Common utilities for the S3 project components",
     "main": "index.js",
     "repository": {

View File

@@ -0,0 +1,26 @@
{
    "acl": {
        "Canned": "private",
        "FULL_CONTROL": [],
        "WRITE": [],
        "WRITE_ACP": [],
        "READ": [],
        "READ_ACP": []
    },
    "name": "BucketName",
    "owner": "9d8fe19a78974c56dceb2ea4a8f01ed0f5fecb9d29f80e9e3b84104e4a3ea520",
    "ownerDisplayName": "anonymousCoward",
    "creationDate": "2018-06-04T17:45:42.592Z",
    "mdBucketModelVersion": 8,
    "transient": false,
    "deleted": false,
    "serverSideEncryption": null,
    "versioningConfiguration": null,
    "websiteConfiguration": null,
    "locationConstraint": "us-east-1",
    "readLocationConstraint": "us-east-1",
    "cors": null,
    "replicationConfiguration": null,
    "lifecycleConfiguration": null,
    "uid": "fea97818-6a9a-11e8-9777-e311618cc5d4"
}

View File

@@ -0,0 +1,59 @@
const async = require('async');
const assert = require('assert');
const werelogs = require('werelogs');

const MetadataWrapper =
    require('../../../../../lib/storage/metadata/MetadataWrapper');
const fakeBucketInfo = require('./FakeBucketInfo.json');

describe('InMemory', () => {
    const fakeBucket = 'fake';
    const logger = new werelogs.Logger('Injector');
    const memBackend = new MetadataWrapper(
        'mem', {}, null, logger);

    before(done => {
        memBackend.createBucket(fakeBucket, fakeBucketInfo, logger, done);
    });

    after(done => {
        memBackend.deleteBucket(fakeBucket, logger, done);
    });

    it('basic', done => {
        async.waterfall([
            next => {
                memBackend.putObjectMD(fakeBucket, 'foo', 'bar', {}, logger,
                    err => {
                        if (err) {
                            return next(err);
                        }
                        return next();
                    });
            },
            next => {
                memBackend.getObjectMD(fakeBucket, 'foo', {}, logger,
                    (err, data) => {
                        if (err) {
                            return next(err);
                        }
                        assert.deepEqual(data, 'bar');
                        return next();
                    });
            },
            next => {
                memBackend.deleteObjectMD(fakeBucket, 'foo', {}, logger,
                    err => {
                        if (err) {
                            return next(err);
                        }
                        return next();
                    });
            },
            next => {
                memBackend.getObjectMD(fakeBucket, 'foo', {}, logger, err => {
                    if (err) {
                        assert.deepEqual(err.message, 'NoSuchKey');
                        return next();
                    }
                    return next(new Error('unexpected success'));
                });
            },
        ], done);
    });
});