Compare commits
49 Commits
dependabot...development
Author | SHA1
---|---
Vitaliy Filippov | b5711e9cbf
Vitaliy Filippov | 36dc6298d2
Vitaliy Filippov | bc2d637578
Vitaliy Filippov | b543695048
Vitaliy Filippov | 90024d044d
Vitaliy Filippov | 451ab33f68
Vitaliy Filippov | c86107e912
Vitaliy Filippov | 0a5962f256
Vitaliy Filippov | 0e292791c6
Vitaliy Filippov | fc07729bd0
Vitaliy Filippov | 4527dd6795
Vitaliy Filippov | 05fb581023
Vitaliy Filippov | 956739a04e
Vitaliy Filippov | 7ad0888a66
Vitaliy Filippov | bf01ba4ed1
Vitaliy Filippov | ab019e7e50
Vitaliy Filippov | 3797695e74
Vitaliy Filippov | c8084196c4
bert-e | b72e918ff9
bert-e | 22887f47d8
bert-e | 0cd10a73f3
bert-e | e139406612
Maha Benzekri | d91853a38b
Mickael Bourgois | a7e798f909
Mickael Bourgois | 3a1ba29869
Mickael Bourgois | dbb9b6d787
Mickael Bourgois | fce76f0934
Mickael Bourgois | 0e39aaac09
Mickael Bourgois | 0b14c93fac
Mickael Bourgois | ab2960bbf4
Mickael Bourgois | 7305b112e2
Mickael Bourgois | cd9e2e757b
Mickael Bourgois | ca0904f584
Mickael Bourgois | 0dd3dd35e6
bert-e | bf7e4b7e23
bert-e | 92f4794727
Jonathan Gramain | c6ef85e3a1
Jonathan Gramain | c0fe0cfbcf
bert-e | 9c936f2b83
bert-e | d26bac2ebc
Jonathan Gramain | cfb9db5178
Jonathan Gramain | 2ce004751a
Jonathan Gramain | 539219e046
Jonathan Gramain | be49e55db5
bert-e | e6b240421b
bert-e | 81739e3ecf
Jonathan Gramain | c475503248
bert-e | 7acbd5d2fb
Jonathan Gramain | 8d726322e5
```diff
@@ -150,6 +150,9 @@ jobs:
           provenance: false
           tags: |
             ghcr.io/${{ github.repository }}:${{ github.sha }}
+          labels: |
+            git.repository=${{ github.repository }}
+            git.commit-sha=${{ github.sha }}
           cache-from: type=gha,scope=cloudserver
           cache-to: type=gha,mode=max,scope=cloudserver
       - name: Build and push pykmip image
@@ -159,6 +162,9 @@ jobs:
           context: .github/pykmip
           tags: |
             ghcr.io/${{ github.repository }}/pykmip:${{ github.sha }}
+          labels: |
+            git.repository=${{ github.repository }}
+            git.commit-sha=${{ github.sha }}
           cache-from: type=gha,scope=pykmip
           cache-to: type=gha,mode=max,scope=pykmip
       - name: Build and push MongoDB
```
README.md (175 changed lines)
````diff
@@ -1,10 +1,7 @@
-# Zenko CloudServer
+# Zenko CloudServer with Vitastor Backend
 
 ![Zenko CloudServer logo](res/scality-cloudserver-logo.png)
 
-[![Docker Pulls][badgedocker]](https://hub.docker.com/r/zenko/cloudserver)
-[![Docker Pulls][badgetwitter]](https://twitter.com/zenko)
-
 ## Overview
 
 CloudServer (formerly S3 Server) is an open-source Amazon S3-compatible
@@ -14,137 +11,71 @@ Scality’s Open Source Multi-Cloud Data Controller.
 CloudServer provides a single AWS S3 API interface to access multiple
 backend data storage both on-premise or public in the cloud.
 
-CloudServer is useful for Developers, either to run as part of a
-continous integration test environment to emulate the AWS S3 service locally
-or as an abstraction layer to develop object storage enabled
-application on the go.
+This repository contains a fork of CloudServer with [Vitastor](https://git.yourcmc.ru/vitalif/vitastor)
+backend support.
 
-## Learn more at [www.zenko.io/cloudserver](https://www.zenko.io/cloudserver/)
+## Quick Start with Vitastor
 
-## [May I offer you some lovely documentation?](http://s3-server.readthedocs.io/en/latest/)
+Vitastor Backend is in experimental status, however you can already try to
+run it and write or read something, or even mount it with [GeeseFS](https://github.com/yandex-cloud/geesefs),
+it works too 😊.
 
-## Docker
+Installation instructions:
 
-[Run your Zenko CloudServer with Docker](https://hub.docker.com/r/zenko/cloudserver/)
+### Install Vitastor
 
-## Contributing
+Refer to [Vitastor Quick Start Manual](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/docs/intro/quickstart.en.md).
 
-In order to contribute, please follow the
-[Contributing Guidelines](
-https://github.com/scality/Guidelines/blob/master/CONTRIBUTING.md).
+### Install Zenko with Vitastor Backend
 
-## Installation
-
-### Dependencies
-
-Building and running the Zenko CloudServer requires node.js 10.x and yarn v1.17.x
-. Up-to-date versions can be found at
-[Nodesource](https://github.com/nodesource/distributions).
-
-### Clone source code
-
-```shell
-git clone https://github.com/scality/S3.git
-```
-
-### Install js dependencies
-
-Go to the ./S3 folder,
-
-```shell
-yarn install --frozen-lockfile
-```
-
-If you get an error regarding installation of the diskUsage module,
-please install g++.
-
-If you get an error regarding level-down bindings, try clearing your yarn cache:
-
-```shell
-yarn cache clean
-```
+- Clone this repository: `git clone https://git.yourcmc.ru/vitalif/zenko-cloudserver-vitastor`
+- Install dependencies: `npm install --omit dev` or just `npm install`
+- Clone Vitastor repository: `git clone https://git.yourcmc.ru/vitalif/vitastor`
+- Build Vitastor node.js binding by running `npm install` in `node-binding` subdirectory of Vitastor repository.
+  You need `node-gyp` and `vitastor-client-dev` (Vitastor client library) for it to succeed.
+- Symlink Vitastor module to Zenko: `ln -s /path/to/vitastor/node-binding /path/to/zenko/node_modules/vitastor`
+
+### Install and Configure MongoDB
+
+Refer to [MongoDB Manual](https://www.mongodb.com/docs/manual/installation/).
+
+### Setup Zenko
+
+- Create a separate pool for S3 object data in your Vitastor cluster: `vitastor-cli create-pool s3-data`
+- Retrieve ID of the new pool from `vitastor-cli ls-pools --detail s3-data`
+- In another pool, create an image for storing Vitastor volume metadata: `vitastor-cli create -s 10G s3-volume-meta`
+- Copy `config.json.vitastor` to `config.json`, adjust it to match your domain
+- Copy `authdata.json.example` to `authdata.json` - this is where you set S3 access & secret keys,
+  and also adjust them if you want to. Scality seems to use a separate auth service "Scality Vault" for
+  access keys, but it's not published, so let's use a file for now.
+- Copy `locationConfig.json.vitastor` to `locationConfig.json` - this is where you set Vitastor cluster access data.
+  You should put correct values for `pool_id` (pool ID from the second step) and `metadata_image` (from the third step)
+  in this file.
+
+Note: `locationConfig.json` in this version corresponds to storage classes (like STANDARD, COLD, etc)
+instead of "locations" (zones like us-east-1) as it was in original Zenko CloudServer.
+
+### Start Zenko
+
+Start the S3 server with: `node index.js`
+
+If you use default settings, Zenko CloudServer starts on port 8000.
+The default access key is `accessKey1` with a secret key of `verySecretKey1`.
+
+Now you can access your S3 with `s3cmd` or `geesefs`:
+
+```
+s3cmd --access_key=accessKey1 --secret_key=verySecretKey1 --host=http://localhost:8000 mb s3://testbucket
+```
+
+```
+AWS_ACCESS_KEY_ID=accessKey1 \
+AWS_SECRET_ACCESS_KEY=verySecretKey1 \
+geesefs --endpoint http://localhost:8000 testbucket mountdir
+```
 
-## Run it with a file backend
-
-```shell
-yarn start
-```
-
-This starts a Zenko CloudServer on port 8000. Two additional ports 9990 and
-9991 are also open locally for internal transfer of metadata and data,
-respectively.
-
-The default access key is accessKey1 with
-a secret key of verySecretKey1.
-
-By default the metadata files will be saved in the
-localMetadata directory and the data files will be saved
-in the localData directory within the ./S3 directory on your
-machine. These directories have been pre-created within the
-repository. If you would like to save the data or metadata in
-different locations of your choice, you must specify them with absolute paths.
-So, when starting the server:
-
-```shell
-mkdir -m 700 $(pwd)/myFavoriteDataPath
-mkdir -m 700 $(pwd)/myFavoriteMetadataPath
-export S3DATAPATH="$(pwd)/myFavoriteDataPath"
-export S3METADATAPATH="$(pwd)/myFavoriteMetadataPath"
-yarn start
-```
-
-## Run it with multiple data backends
-
-```shell
-export S3DATA='multiple'
-yarn start
-```
-
-This starts a Zenko CloudServer on port 8000.
-The default access key is accessKey1 with
-a secret key of verySecretKey1.
-
-With multiple backends, you have the ability to
-choose where each object will be saved by setting
-the following header with a locationConstraint on
-a PUT request:
-
-```shell
-'x-amz-meta-scal-location-constraint':'myLocationConstraint'
-```
-
-If no header is sent with a PUT object request, the
-location constraint of the bucket will determine
-where the data is saved. If the bucket has no location
-constraint, the endpoint of the PUT request will be
-used to determine location.
-
-See the Configuration section in our documentation
-[here](http://s3-server.readthedocs.io/en/latest/GETTING_STARTED/#configuration)
-to learn how to set location constraints.
-
-## Run it with an in-memory backend
-
-```shell
-yarn run mem_backend
-```
-
-This starts a Zenko CloudServer on port 8000.
-The default access key is accessKey1 with
-a secret key of verySecretKey1.
-
-## Run it with Vault user management
-
-Note: Vault is proprietary and must be accessed separately.
-
-```shell
-export S3VAULT=vault
-yarn start
-```
-
-This starts a Zenko CloudServer using Vault for user management.
-
-[badgetwitter]: https://img.shields.io/twitter/follow/zenko.svg?style=social&label=Follow
-[badgedocker]: https://img.shields.io/docker/pulls/scality/s3server.svg
-[badgepub]: https://circleci.com/gh/scality/S3.svg?style=svg
-[badgepriv]: http://ci.ironmann.io/gh/scality/S3.svg?style=svg&circle-token=1f105b7518b53853b5b7cf72302a3f75d8c598ae
+# Author & License
+
+- [Zenko CloudServer](https://s3-server.readthedocs.io/en/latest/) author is Scality, licensed under [Apache License, version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+- [Vitastor](https://git.yourcmc.ru/vitalif/vitastor/) and Zenko Vitastor backend author is Vitaliy Filippov, licensed under [VNPL-1.1](https://git.yourcmc.ru/vitalif/vitastor/src/branch/master/VNPL-1.1.txt)
+  (a "network copyleft" license based on AGPL/SSPL, but worded in a better way)
````
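Taken together, the quick-start steps from the README above can be sketched as one shell session. The pool and image names (`s3-data`, `s3-volume-meta`) follow the README; the checkout locations `/opt/zenko` and `/opt/vitastor` are hypothetical placeholders, and a running Vitastor cluster with `vitastor-cli` available is assumed.

```shell
# Hypothetical checkout locations; adjust to your environment.
ZENKO=/opt/zenko
VITASTOR=/opt/vitastor

git clone https://git.yourcmc.ru/vitalif/zenko-cloudserver-vitastor "$ZENKO"
git clone https://git.yourcmc.ru/vitalif/vitastor "$VITASTOR"

# Build the Vitastor node.js binding (needs node-gyp and vitastor-client-dev)
# and link it into Zenko's node_modules.
(cd "$ZENKO" && npm install)
(cd "$VITASTOR/node-binding" && npm install)
ln -s "$VITASTOR/node-binding" "$ZENKO/node_modules/vitastor"

# Vitastor-side preparation, as described in the README.
vitastor-cli create-pool s3-data
vitastor-cli ls-pools --detail s3-data   # note the pool ID for locationConfig.json
vitastor-cli create -s 10G s3-volume-meta

# Configuration files, then start the server on port 8000.
cd "$ZENKO"
cp config.json.vitastor config.json
cp authdata.json.example authdata.json
cp locationConfig.json.vitastor locationConfig.json
node index.js
```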
```diff
@@ -1,46 +0,0 @@
-#!/usr/bin/env node
-'use strict'; // eslint-disable-line strict
-
-const {
-    startWSManagementClient,
-    startPushConnectionHealthCheckServer,
-} = require('../lib/management/push');
-
-const logger = require('../lib/utilities/logger');
-
-const {
-    PUSH_ENDPOINT: pushEndpoint,
-    INSTANCE_ID: instanceId,
-    MANAGEMENT_TOKEN: managementToken,
-} = process.env;
-
-if (!pushEndpoint) {
-    logger.error('missing push endpoint env var');
-    process.exit(1);
-}
-
-if (!instanceId) {
-    logger.error('missing instance id env var');
-    process.exit(1);
-}
-
-if (!managementToken) {
-    logger.error('missing management token env var');
-    process.exit(1);
-}
-
-startPushConnectionHealthCheckServer(err => {
-    if (err) {
-        logger.error('could not start healthcheck server', { error: err });
-        process.exit(1);
-    }
-    const url = `${pushEndpoint}/${instanceId}/ws?metrics=1`;
-    startWSManagementClient(url, managementToken, err => {
-        if (err) {
-            logger.error('connection failed, exiting', { error: err });
-            process.exit(1);
-        }
-        logger.info('no more connection, exiting');
-        process.exit(0);
-    });
-});
```
```diff
@@ -1,46 +0,0 @@
-#!/usr/bin/env node
-'use strict'; // eslint-disable-line strict
-
-const {
-    startWSManagementClient,
-    startPushConnectionHealthCheckServer,
-} = require('../lib/management/push');
-
-const logger = require('../lib/utilities/logger');
-
-const {
-    PUSH_ENDPOINT: pushEndpoint,
-    INSTANCE_ID: instanceId,
-    MANAGEMENT_TOKEN: managementToken,
-} = process.env;
-
-if (!pushEndpoint) {
-    logger.error('missing push endpoint env var');
-    process.exit(1);
-}
-
-if (!instanceId) {
-    logger.error('missing instance id env var');
-    process.exit(1);
-}
-
-if (!managementToken) {
-    logger.error('missing management token env var');
-    process.exit(1);
-}
-
-startPushConnectionHealthCheckServer(err => {
-    if (err) {
-        logger.error('could not start healthcheck server', { error: err });
-        process.exit(1);
-    }
-    const url = `${pushEndpoint}/${instanceId}/ws?proxy=1`;
-    startWSManagementClient(url, managementToken, err => {
-        if (err) {
-            logger.error('connection failed, exiting', { error: err });
-            process.exit(1);
-        }
-        logger.info('no more connection, exiting');
-        process.exit(0);
-    });
-});
```
```diff
@@ -4,6 +4,7 @@
     "metricsPort": 8002,
     "metricsListenOn": [],
     "replicationGroupId": "RG001",
+    "workers": 4,
     "restEndpoints": {
         "localhost": "us-east-1",
         "127.0.0.1": "us-east-1",
@@ -101,6 +102,14 @@
         "readPreference": "primary",
         "database": "metadata"
     },
+    "authdata": "authdata.json",
+    "backends": {
+        "auth": "file",
+        "data": "file",
+        "metadata": "mongodb",
+        "kms": "file",
+        "quota": "none"
+    },
     "externalBackends": {
         "aws_s3": {
             "httpAgent": {
```
```diff
@@ -0,0 +1,71 @@
+{
+    "port": 8000,
+    "listenOn": [],
+    "metricsPort": 8002,
+    "metricsListenOn": [],
+    "replicationGroupId": "RG001",
+    "restEndpoints": {
+        "localhost": "STANDARD",
+        "127.0.0.1": "STANDARD",
+        "yourhostname.ru": "STANDARD"
+    },
+    "websiteEndpoints": [
+        "static.yourhostname.ru"
+    ],
+    "replicationEndpoints": [ {
+        "site": "zenko",
+        "servers": ["127.0.0.1:8000"],
+        "default": true
+    } ],
+    "log": {
+        "logLevel": "info",
+        "dumpLevel": "error"
+    },
+    "healthChecks": {
+        "allowFrom": ["127.0.0.1/8", "::1"]
+    },
+    "backends": {
+        "metadata": "mongodb"
+    },
+    "mongodb": {
+        "replicaSetHosts": "127.0.0.1:27017",
+        "writeConcern": "majority",
+        "replicaSet": "rs0",
+        "readPreference": "primary",
+        "database": "s3",
+        "authCredentials": {
+            "username": "s3",
+            "password": ""
+        }
+    },
+    "externalBackends": {
+        "aws_s3": {
+            "httpAgent": {
+                "keepAlive": false,
+                "keepAliveMsecs": 1000,
+                "maxFreeSockets": 256,
+                "maxSockets": null
+            }
+        },
+        "gcp": {
+            "httpAgent": {
+                "keepAlive": true,
+                "keepAliveMsecs": 1000,
+                "maxFreeSockets": 256,
+                "maxSockets": null
+            }
+        }
+    },
+    "requests": {
+        "viaProxy": false,
+        "trustedProxyCIDRs": [],
+        "extractClientIPFromHeader": ""
+    },
+    "bucketNotificationDestinations": [
+        {
+            "resource": "target1",
+            "type": "dummy",
+            "host": "localhost:6000"
+        }
+    ]
+}
```
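Alongside the config file above, the README's "Setup Zenko" step requires each storage class in `locationConfig.json` to carry `pool_id` and `metadata_image`. That can be sanity-checked mechanically; note the JSON shape below is an assumption modeled only on the keys the README names, not the actual `locationConfig.json.vitastor` schema.

```javascript
// Hypothetical locationConfig.json contents, modeled only on the keys the
// README calls out (pool_id, metadata_image); the real schema may differ.
const locationConfig = {
    STANDARD: {
        type: 'vitastor',
        details: {
            config_path: '/etc/vitastor/vitastor.conf',
            pool_id: 3,
            metadata_image: 's3-volume-meta',
        },
    },
};

// Returns [storageClass, missingField] pairs for incomplete entries.
function missingVitastorFields(config) {
    const problems = [];
    for (const [storageClass, entry] of Object.entries(config)) {
        const details = entry.details || {};
        for (const field of ['pool_id', 'metadata_image']) {
            if (!(field in details)) {
                problems.push([storageClass, field]);
            }
        }
    }
    return problems;
}

console.log(missingVitastorFields(locationConfig)); // [] when every class is complete
```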
```diff
@@ -116,7 +116,7 @@ const constants = {
     ],
 
     // user metadata header to set object locationConstraint
-    objectLocationConstraintHeader: 'x-amz-meta-scal-location-constraint',
+    objectLocationConstraintHeader: 'x-amz-storage-class',
     lastModifiedHeader: 'x-amz-meta-x-scal-last-modified',
     legacyLocations: ['sproxyd', 'legacy'],
     // declare here all existing service accounts and their properties
@@ -205,9 +205,6 @@ const constants = {
     ],
     allowedUtapiEventFilterStates: ['allow', 'deny'],
     allowedRestoreObjectRequestTierValues: ['Standard'],
-    validStorageClasses: [
-        'STANDARD',
-    ],
     lifecycleListing: {
         CURRENT_TYPE: 'current',
         NON_CURRENT_TYPE: 'noncurrent',
```
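The constants change above means an object's placement is now requested through the standard `x-amz-storage-class` header instead of the Scality-specific `x-amz-meta-scal-location-constraint` metadata header. A minimal sketch of the selection behavior described in the original README (fall back to the bucket's location constraint when the header is absent or unknown); this is illustrative only, not the actual CloudServer code path:

```javascript
// Mirrors the renamed constant from lib/constants.js in this diff.
const objectLocationConstraintHeader = 'x-amz-storage-class';

// Pick a storage class / location for a PUT: the header wins when it names a
// configured class, otherwise the bucket's own location constraint applies.
function pickLocation(requestHeaders, locationConfig, bucketDefault) {
    const requested = requestHeaders[objectLocationConstraintHeader];
    if (requested && locationConfig[requested]) {
        return requested;
    }
    return bucketDefault; // fall back to the bucket's location constraint
}

const locations = { STANDARD: {}, COLD: {} };
console.log(pickLocation({ 'x-amz-storage-class': 'COLD' }, locations, 'STANDARD')); // COLD
console.log(pickLocation({}, locations, 'STANDARD')); // STANDARD
```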
```diff
@@ -6,129 +6,39 @@
 #
 alabaster==0.7.12 \
     --hash=sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359 \
-    --hash=sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02
+    --hash=sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02 \
     # via sphinx
 babel==2.6.0 \
     --hash=sha256:6778d85147d5d85345c14a26aada5e478ab04e39b078b0745ee6870c2b5cf669 \
-    --hash=sha256:8cba50f48c529ca3fa18cf81fa9403be176d374ac4d60738b839122dfaaa3d23
+    --hash=sha256:8cba50f48c529ca3fa18cf81fa9403be176d374ac4d60738b839122dfaaa3d23 \
     # via sphinx
 certifi==2018.10.15 \
     --hash=sha256:339dc09518b07e2fa7eda5450740925974815557727d6bd35d319c1524a04a4c \
-    --hash=sha256:6d58c986d22b038c8c0df30d639f23a3e6d172a05c3583e766f4c0b785c0986a
+    --hash=sha256:6d58c986d22b038c8c0df30d639f23a3e6d172a05c3583e766f4c0b785c0986a \
     # via requests
-charset-normalizer==3.3.2 \
-    --hash=sha256:06435b539f889b1f6f4ac1758871aae42dc3a8c0e24ac9e60c2384973ad73027 \
-    --hash=sha256:06a81e93cd441c56a9b65d8e1d043daeb97a3d0856d177d5c90ba85acb3db087 \
-    --hash=sha256:0a55554a2fa0d408816b3b5cedf0045f4b8e1a6065aec45849de2d6f3f8e9786 \
-    --hash=sha256:0b2b64d2bb6d3fb9112bafa732def486049e63de9618b5843bcdd081d8144cd8 \
-    --hash=sha256:10955842570876604d404661fbccbc9c7e684caf432c09c715ec38fbae45ae09 \
-    --hash=sha256:122c7fa62b130ed55f8f285bfd56d5f4b4a5b503609d181f9ad85e55c89f4185 \
-    --hash=sha256:1ceae2f17a9c33cb48e3263960dc5fc8005351ee19db217e9b1bb15d28c02574 \
-    --hash=sha256:1d3193f4a680c64b4b6a9115943538edb896edc190f0b222e73761716519268e \
-    --hash=sha256:1f79682fbe303db92bc2b1136016a38a42e835d932bab5b3b1bfcfbf0640e519 \
-    --hash=sha256:2127566c664442652f024c837091890cb1942c30937add288223dc895793f898 \
-    --hash=sha256:22afcb9f253dac0696b5a4be4a1c0f8762f8239e21b99680099abd9b2b1b2269 \
-    --hash=sha256:25baf083bf6f6b341f4121c2f3c548875ee6f5339300e08be3f2b2ba1721cdd3 \
-    --hash=sha256:2e81c7b9c8979ce92ed306c249d46894776a909505d8f5a4ba55b14206e3222f \
-    --hash=sha256:3287761bc4ee9e33561a7e058c72ac0938c4f57fe49a09eae428fd88aafe7bb6 \
-    --hash=sha256:34d1c8da1e78d2e001f363791c98a272bb734000fcef47a491c1e3b0505657a8 \
-    --hash=sha256:37e55c8e51c236f95b033f6fb391d7d7970ba5fe7ff453dad675e88cf303377a \
-    --hash=sha256:3d47fa203a7bd9c5b6cee4736ee84ca03b8ef23193c0d1ca99b5089f72645c73 \
-    --hash=sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc \
-    --hash=sha256:42cb296636fcc8b0644486d15c12376cb9fa75443e00fb25de0b8602e64c1714 \
-    --hash=sha256:45485e01ff4d3630ec0d9617310448a8702f70e9c01906b0d0118bdf9d124cf2 \
-    --hash=sha256:4a78b2b446bd7c934f5dcedc588903fb2f5eec172f3d29e52a9096a43722adfc \
-    --hash=sha256:4ab2fe47fae9e0f9dee8c04187ce5d09f48eabe611be8259444906793ab7cbce \
-    --hash=sha256:4d0d1650369165a14e14e1e47b372cfcb31d6ab44e6e33cb2d4e57265290044d \
-    --hash=sha256:549a3a73da901d5bc3ce8d24e0600d1fa85524c10287f6004fbab87672bf3e1e \
-    --hash=sha256:55086ee1064215781fff39a1af09518bc9255b50d6333f2e4c74ca09fac6a8f6 \
-    --hash=sha256:572c3763a264ba47b3cf708a44ce965d98555f618ca42c926a9c1616d8f34269 \
-    --hash=sha256:573f6eac48f4769d667c4442081b1794f52919e7edada77495aaed9236d13a96 \
-    --hash=sha256:5b4c145409bef602a690e7cfad0a15a55c13320ff7a3ad7ca59c13bb8ba4d45d \
-    --hash=sha256:6463effa3186ea09411d50efc7d85360b38d5f09b870c48e4600f63af490e56a \
-    --hash=sha256:65f6f63034100ead094b8744b3b97965785388f308a64cf8d7c34f2f2e5be0c4 \
-    --hash=sha256:663946639d296df6a2bb2aa51b60a2454ca1cb29835324c640dafb5ff2131a77 \
-    --hash=sha256:6897af51655e3691ff853668779c7bad41579facacf5fd7253b0133308cf000d \
-    --hash=sha256:68d1f8a9e9e37c1223b656399be5d6b448dea850bed7d0f87a8311f1ff3dabb0 \
-    --hash=sha256:6ac7ffc7ad6d040517be39eb591cac5ff87416c2537df6ba3cba3bae290c0fed \
-    --hash=sha256:6b3251890fff30ee142c44144871185dbe13b11bab478a88887a639655be1068 \
-    --hash=sha256:6c4caeef8fa63d06bd437cd4bdcf3ffefe6738fb1b25951440d80dc7df8c03ac \
-    --hash=sha256:6ef1d82a3af9d3eecdba2321dc1b3c238245d890843e040e41e470ffa64c3e25 \
-    --hash=sha256:753f10e867343b4511128c6ed8c82f7bec3bd026875576dfd88483c5c73b2fd8 \
-    --hash=sha256:7cd13a2e3ddeed6913a65e66e94b51d80a041145a026c27e6bb76c31a853c6ab \
-    --hash=sha256:7ed9e526742851e8d5cc9e6cf41427dfc6068d4f5a3bb03659444b4cabf6bc26 \
-    --hash=sha256:7f04c839ed0b6b98b1a7501a002144b76c18fb1c1850c8b98d458ac269e26ed2 \
-    --hash=sha256:802fe99cca7457642125a8a88a084cef28ff0cf9407060f7b93dca5aa25480db \
-    --hash=sha256:80402cd6ee291dcb72644d6eac93785fe2c8b9cb30893c1af5b8fdd753b9d40f \
-    --hash=sha256:8465322196c8b4d7ab6d1e049e4c5cb460d0394da4a27d23cc242fbf0034b6b5 \
-    --hash=sha256:86216b5cee4b06df986d214f664305142d9c76df9b6512be2738aa72a2048f99 \
-    --hash=sha256:87d1351268731db79e0f8e745d92493ee2841c974128ef629dc518b937d9194c \
-    --hash=sha256:8bdb58ff7ba23002a4c5808d608e4e6c687175724f54a5dade5fa8c67b604e4d \
-    --hash=sha256:8c622a5fe39a48f78944a87d4fb8a53ee07344641b0562c540d840748571b811 \
-    --hash=sha256:8d756e44e94489e49571086ef83b2bb8ce311e730092d2c34ca8f7d925cb20aa \
-    --hash=sha256:8f4a014bc36d3c57402e2977dada34f9c12300af536839dc38c0beab8878f38a \
-    --hash=sha256:9063e24fdb1e498ab71cb7419e24622516c4a04476b17a2dab57e8baa30d6e03 \
-    --hash=sha256:90d558489962fd4918143277a773316e56c72da56ec7aa3dc3dbbe20fdfed15b \
-    --hash=sha256:923c0c831b7cfcb071580d3f46c4baf50f174be571576556269530f4bbd79d04 \
-    --hash=sha256:95f2a5796329323b8f0512e09dbb7a1860c46a39da62ecb2324f116fa8fdc85c \
-    --hash=sha256:96b02a3dc4381e5494fad39be677abcb5e6634bf7b4fa83a6dd3112607547001 \
-    --hash=sha256:9f96df6923e21816da7e0ad3fd47dd8f94b2a5ce594e00677c0013018b813458 \
-    --hash=sha256:a10af20b82360ab00827f916a6058451b723b4e65030c5a18577c8b2de5b3389 \
-    --hash=sha256:a50aebfa173e157099939b17f18600f72f84eed3049e743b68ad15bd69b6bf99 \
-    --hash=sha256:a981a536974bbc7a512cf44ed14938cf01030a99e9b3a06dd59578882f06f985 \
-    --hash=sha256:a9a8e9031d613fd2009c182b69c7b2c1ef8239a0efb1df3f7c8da66d5dd3d537 \
-    --hash=sha256:ae5f4161f18c61806f411a13b0310bea87f987c7d2ecdbdaad0e94eb2e404238 \
-    --hash=sha256:aed38f6e4fb3f5d6bf81bfa990a07806be9d83cf7bacef998ab1a9bd660a581f \
-    --hash=sha256:b01b88d45a6fcb69667cd6d2f7a9aeb4bf53760d7fc536bf679ec94fe9f3ff3d \
-    --hash=sha256:b261ccdec7821281dade748d088bb6e9b69e6d15b30652b74cbbac25e280b796 \
-    --hash=sha256:b2b0a0c0517616b6869869f8c581d4eb2dd83a4d79e0ebcb7d373ef9956aeb0a \
-    --hash=sha256:b4a23f61ce87adf89be746c8a8974fe1c823c891d8f86eb218bb957c924bb143 \
-    --hash=sha256:bd8f7df7d12c2db9fab40bdd87a7c09b1530128315d047a086fa3ae3435cb3a8 \
-    --hash=sha256:beb58fe5cdb101e3a055192ac291b7a21e3b7ef4f67fa1d74e331a7f2124341c \
-    --hash=sha256:c002b4ffc0be611f0d9da932eb0f704fe2602a9a949d1f738e4c34c75b0863d5 \
-    --hash=sha256:c083af607d2515612056a31f0a8d9e0fcb5876b7bfc0abad3ecd275bc4ebc2d5 \
-    --hash=sha256:c180f51afb394e165eafe4ac2936a14bee3eb10debc9d9e4db8958fe36afe711 \
-    --hash=sha256:c235ebd9baae02f1b77bcea61bce332cb4331dc3617d254df3323aa01ab47bd4 \
-    --hash=sha256:cd70574b12bb8a4d2aaa0094515df2463cb429d8536cfb6c7ce983246983e5a6 \
-    --hash=sha256:d0eccceffcb53201b5bfebb52600a5fb483a20b61da9dbc885f8b103cbe7598c \
-    --hash=sha256:d965bba47ddeec8cd560687584e88cf699fd28f192ceb452d1d7ee807c5597b7 \
-    --hash=sha256:db364eca23f876da6f9e16c9da0df51aa4f104a972735574842618b8c6d999d4 \
-    --hash=sha256:ddbb2551d7e0102e7252db79ba445cdab71b26640817ab1e3e3648dad515003b \
-    --hash=sha256:deb6be0ac38ece9ba87dea880e438f25ca3eddfac8b002a2ec3d9183a454e8ae \
-    --hash=sha256:e06ed3eb3218bc64786f7db41917d4e686cc4856944f53d5bdf83a6884432e12 \
-    --hash=sha256:e27ad930a842b4c5eb8ac0016b0a54f5aebbe679340c26101df33424142c143c \
-    --hash=sha256:e537484df0d8f426ce2afb2d0f8e1c3d0b114b83f8850e5f2fbea0e797bd82ae \
-    --hash=sha256:eb00ed941194665c332bf8e078baf037d6c35d7c4f3102ea2d4f16ca94a26dc8 \
-    --hash=sha256:eb6904c354526e758fda7167b33005998fb68c46fbc10e013ca97f21ca5c8887 \
-    --hash=sha256:eb8821e09e916165e160797a6c17edda0679379a4be5c716c260e836e122f54b \
-    --hash=sha256:efcb3f6676480691518c177e3b465bcddf57cea040302f9f4e6e191af91174d4 \
-    --hash=sha256:f27273b60488abe721a075bcca6d7f3964f9f6f067c8c4c605743023d7d3944f \
-    --hash=sha256:f30c3cb33b24454a82faecaf01b19c18562b1e89558fb6c56de4d9118a032fd5 \
-    --hash=sha256:fb69256e180cb6c8a894fee62b3afebae785babc1ee98b81cdf68bbca1987f33 \
-    --hash=sha256:fd1abc0d89e30cc4e02e4064dc67fcc51bd941eb395c502aac3ec19fab46b519 \
-    --hash=sha256:ff8fa367d09b717b2a17a052544193ad76cd49979c805768879cb63d9ca50561
+chardet==3.0.4 \
+    --hash=sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae \
+    --hash=sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691 \
     # via requests
 commonmark==0.5.4 \
-    --hash=sha256:34d73ec8085923c023930dfc0bcd1c4286e28a2a82de094bb72fabcc0281cbe5
+    --hash=sha256:34d73ec8085923c023930dfc0bcd1c4286e28a2a82de094bb72fabcc0281cbe5 \
     # via recommonmark
 docutils==0.14 \
     --hash=sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6 \
     --hash=sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274 \
-    --hash=sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6
-    # via
-    # recommonmark
-    # sphinx
+    --hash=sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6 \
+    # via recommonmark, sphinx
 idna==2.7 \
     --hash=sha256:156a6814fb5ac1fc6850fb002e0852d56c0c8d2531923a51032d1b70760e186e \
-    --hash=sha256:684a38a6f903c1d71d6d5fac066b58d7768af4de2b832e426ec79c30daa94a16
+    --hash=sha256:684a38a6f903c1d71d6d5fac066b58d7768af4de2b832e426ec79c30daa94a16 \
     # via requests
 imagesize==1.1.0 \
     --hash=sha256:3f349de3eb99145973fefb7dbe38554414e5c30abd0c8e4b970a7c9d09f3a1d8 \
-    --hash=sha256:f3832918bc3c66617f92e35f5d70729187676313caa60c187eb0f28b8fe5e3b5
+    --hash=sha256:f3832918bc3c66617f92e35f5d70729187676313caa60c187eb0f28b8fe5e3b5 \
     # via sphinx
 jinja2==2.10 \
     --hash=sha256:74c935a1b8bb9a3947c50a54766a969d4846290e1e788ea44c1392163723c3bd \
-    --hash=sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4
+    --hash=sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4 \
     # via sphinx
 markupsafe==1.1.0 \
     --hash=sha256:048ef924c1623740e70204aa7143ec592504045ae4429b59c30054cb31e3c432 \
@@ -158,51 +68,52 @@ markupsafe==1.1.0 \
     --hash=sha256:efdc45ef1afc238db84cb4963aa689c0408912a0239b0721cb172b4016eb31d6 \
     --hash=sha256:f137c02498f8b935892d5c0172560d7ab54bc45039de8805075e19079c639a9c \
     --hash=sha256:f82e347a72f955b7017a39708a3667f106e6ad4d10b25f237396a7115d8ed5fd \
-    --hash=sha256:fb7c206e01ad85ce57feeaaa0bf784b97fa3cad0d4a5737bc5295785f5c613a1
+    --hash=sha256:fb7c206e01ad85ce57feeaaa0bf784b97fa3cad0d4a5737bc5295785f5c613a1 \
     # via jinja2
 packaging==18.0 \
     --hash=sha256:0886227f54515e592aaa2e5a553332c73962917f2831f1b0f9b9f4380a4b9807 \
-    --hash=sha256:f95a1e147590f204328170981833854229bb2912ac3d5f89e2a8ccd2834800c9
+    --hash=sha256:f95a1e147590f204328170981833854229bb2912ac3d5f89e2a8ccd2834800c9 \
     # via sphinx
 pygments==2.2.0 \
     --hash=sha256:78f3f434bcc5d6ee09020f92ba487f95ba50f1e3ef83ae96b9d5ffa1bab25c5d \
-    --hash=sha256:dbae1046def0efb574852fab9e90209b23f556367b5a320c0bcb871c77c3e8cc
+    --hash=sha256:dbae1046def0efb574852fab9e90209b23f556367b5a320c0bcb871c77c3e8cc \
     # via sphinx
 pyparsing==2.3.0 \
     --hash=sha256:40856e74d4987de5d01761a22d1621ae1c7f8774585acae358aa5c5936c6c90b \
```
|
--hash=sha256:40856e74d4987de5d01761a22d1621ae1c7f8774585acae358aa5c5936c6c90b \
|
||||||
--hash=sha256:f353aab21fd474459d97b709e527b5571314ee5f067441dc9f88e33eecd96592
|
--hash=sha256:f353aab21fd474459d97b709e527b5571314ee5f067441dc9f88e33eecd96592 \
|
||||||
# via packaging
|
# via packaging
|
||||||
pytz==2018.7 \
|
pytz==2018.7 \
|
||||||
--hash=sha256:31cb35c89bd7d333cd32c5f278fca91b523b0834369e757f4c5641ea252236ca \
|
--hash=sha256:31cb35c89bd7d333cd32c5f278fca91b523b0834369e757f4c5641ea252236ca \
|
||||||
--hash=sha256:8e0f8568c118d3077b46be7d654cc8167fa916092e28320cde048e54bfc9f1e6
|
--hash=sha256:8e0f8568c118d3077b46be7d654cc8167fa916092e28320cde048e54bfc9f1e6 \
|
||||||
# via babel
|
# via babel
|
||||||
recommonmark==0.4.0 \
|
recommonmark==0.4.0 \
|
||||||
--hash=sha256:6e29c723abcf5533842376d87c4589e62923ecb6002a8e059eb608345ddaff9d \
|
--hash=sha256:6e29c723abcf5533842376d87c4589e62923ecb6002a8e059eb608345ddaff9d \
|
||||||
--hash=sha256:cd8bf902e469dae94d00367a8197fb7b81fcabc9cfb79d520e0d22d0fbeaa8b7
|
--hash=sha256:cd8bf902e469dae94d00367a8197fb7b81fcabc9cfb79d520e0d22d0fbeaa8b7
|
||||||
# via -r requirements.in
|
requests==2.20.1 \
|
||||||
requests==2.32.2 \
|
--hash=sha256:65b3a120e4329e33c9889db89c80976c5272f56ea92d3e74da8a463992e3ff54 \
|
||||||
--hash=sha256:dd951ff5ecf3e3b3aa26b40703ba77495dab41da839ae72ef3c8e5d8e2433289 \
|
--hash=sha256:ea881206e59f41dbd0bd445437d792e43906703fff75ca8ff43ccdb11f33f263 \
|
||||||
--hash=sha256:fc06670dd0ed212426dfeb94fc1b983d917c4f9847c863f313c9dfaaffb7c23c
|
|
||||||
# via sphinx
|
# via sphinx
|
||||||
six==1.11.0 \
|
six==1.11.0 \
|
||||||
--hash=sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9 \
|
--hash=sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9 \
|
||||||
--hash=sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb
|
--hash=sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb \
|
||||||
# via
|
# via packaging, sphinx
|
||||||
# packaging
|
|
||||||
# sphinx
|
|
||||||
snowballstemmer==1.2.1 \
|
snowballstemmer==1.2.1 \
|
||||||
--hash=sha256:919f26a68b2c17a7634da993d91339e288964f93c274f1343e3bbbe2096e1128 \
|
--hash=sha256:919f26a68b2c17a7634da993d91339e288964f93c274f1343e3bbbe2096e1128 \
|
||||||
--hash=sha256:9f3bcd3c401c3e862ec0ebe6d2c069ebc012ce142cce209c098ccb5b09136e89
|
--hash=sha256:9f3bcd3c401c3e862ec0ebe6d2c069ebc012ce142cce209c098ccb5b09136e89 \
|
||||||
# via sphinx
|
# via sphinx
|
||||||
sphinx==1.8.2 \
|
sphinx==1.8.2 \
|
||||||
--hash=sha256:120732cbddb1b2364471c3d9f8bfd4b0c5b550862f99a65736c77f970b142aea \
|
--hash=sha256:120732cbddb1b2364471c3d9f8bfd4b0c5b550862f99a65736c77f970b142aea \
|
||||||
--hash=sha256:b348790776490894e0424101af9c8413f2a86831524bd55c5f379d3e3e12ca64
|
--hash=sha256:b348790776490894e0424101af9c8413f2a86831524bd55c5f379d3e3e12ca64
|
||||||
# via -r requirements.in
|
|
||||||
sphinxcontrib-websupport==1.1.0 \
|
sphinxcontrib-websupport==1.1.0 \
|
||||||
--hash=sha256:68ca7ff70785cbe1e7bccc71a48b5b6d965d79ca50629606c7861a21b206d9dd \
|
--hash=sha256:68ca7ff70785cbe1e7bccc71a48b5b6d965d79ca50629606c7861a21b206d9dd \
|
||||||
--hash=sha256:9de47f375baf1ea07cdb3436ff39d7a9c76042c10a769c52353ec46e4e8fc3b9
|
--hash=sha256:9de47f375baf1ea07cdb3436ff39d7a9c76042c10a769c52353ec46e4e8fc3b9 \
|
||||||
|
# via sphinx
|
||||||
|
typing==3.6.6 \
|
||||||
|
--hash=sha256:4027c5f6127a6267a435201981ba156de91ad0d1d98e9ddc2aa173453453492d \
|
||||||
|
--hash=sha256:57dcf675a99b74d64dacf6fba08fb17cf7e3d5fdff53d4a30ea2a5e7e52543d4 \
|
||||||
|
--hash=sha256:a4c8473ce11a65999c8f59cb093e70686b6c84c98df58c1dae9b3b196089858a \
|
||||||
# via sphinx
|
# via sphinx
|
||||||
urllib3==1.24.1 \
|
urllib3==1.24.1 \
|
||||||
--hash=sha256:61bf29cada3fc2fbefad4fdf059ea4bd1b4a86d2b6d15e1c7c0b582b9752fe39 \
|
--hash=sha256:61bf29cada3fc2fbefad4fdf059ea4bd1b4a86d2b6d15e1c7c0b582b9752fe39 \
|
||||||
--hash=sha256:de9529817c93f27c8ccbfead6985011db27bd0ddfcdb2d86f3f663385c6a9c22
|
--hash=sha256:de9529817c93f27c8ccbfead6985011db27bd0ddfcdb2d86f3f663385c6a9c22 \
|
||||||
# via requests
|
# via requests
|
||||||
|
|
index.js | 12
@@ -1,10 +1,10 @@
 'use strict'; // eslint-disable-line strict

-/**
- * Catch uncaught exceptions and add timestamp to aid debugging
- */
-process.on('uncaughtException', err => {
-    process.stderr.write(`${new Date().toISOString()}: Uncaught exception: \n${err.stack}`);
-});
+require('werelogs').stderrUtils.catchAndTimestampStderr(
+    undefined,
+    // Do not exit as workers have their own listener that will exit
+    // But primary don't have another listener
+    require('cluster').isPrimary ? 1 : null,
+);

 require('./lib/server.js')();
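The new code delegates timestamping of uncaught exceptions to werelogs. A minimal sketch of what such a helper does (the helper name and the meaning of the exit-code argument are inferred from the call above; werelogs' actual internals may differ):

```javascript
// Sketch of a catchAndTimestampStderr-style helper (assumed behavior):
// prefix the uncaught error with an ISO timestamp on stderr, then
// optionally exit with the given code.
function formatUncaught(err, now = new Date()) {
    return `${now.toISOString()}: Uncaught exception:\n${err.stack}\n`;
}

function catchAndTimestampStderr(dateFn, exitCode) {
    process.on('uncaughtException', err => {
        process.stderr.write(formatUncaught(err, dateFn ? dateFn() : new Date()));
        // null/undefined means "do not exit" (workers exit via their own listener)
        if (exitCode !== null && exitCode !== undefined) {
            process.exit(exitCode);
        }
    });
}
```

Passing `require('cluster').isPrimary ? 1 : null` as in the diff would thus make only the primary process exit with code 1.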
lib/Config.js | 245
@@ -107,6 +107,47 @@ function parseSproxydConfig(configSproxyd) {
     return joi.attempt(configSproxyd, joiSchema, 'bad config');
 }

+function parseRedisConfig(redisConfig) {
+    const joiSchema = joi.object({
+        password: joi.string().allow(''),
+        host: joi.string(),
+        port: joi.number(),
+        retry: joi.object({
+            connectBackoff: joi.object({
+                min: joi.number().required(),
+                max: joi.number().required(),
+                jitter: joi.number().required(),
+                factor: joi.number().required(),
+                deadline: joi.number().required(),
+            }),
+        }),
+        // sentinel config
+        sentinels: joi.alternatives().try(
+            joi.string()
+                .pattern(/^[a-zA-Z0-9.-]+:[0-9]+(,[a-zA-Z0-9.-]+:[0-9]+)*$/)
+                .custom(hosts => hosts.split(',').map(item => {
+                    const [host, port] = item.split(':');
+                    return { host, port: Number.parseInt(port, 10) };
+                })),
+            joi.array().items(
+                joi.object({
+                    host: joi.string().required(),
+                    port: joi.number().required(),
+                })
+            ).min(1),
+        ),
+        name: joi.string(),
+        sentinelPassword: joi.string().allow(''),
+    })
+        .and('host', 'port')
+        .and('sentinels', 'name')
+        .xor('host', 'sentinels')
+        .without('sentinels', ['host', 'port'])
+        .without('host', ['sentinels', 'sentinelPassword']);
+
+    return joi.attempt(redisConfig, joiSchema, 'bad config');
+}
+
 function restEndpointsAssert(restEndpoints, locationConstraints) {
     assert(typeof restEndpoints === 'object',
         'bad config: restEndpoints must be an object of endpoints');
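The schema above accepts sentinels either as an array of `{ host, port }` objects or as a compact `host:port,host:port` string. The string expansion performed by the `.custom()` callback can be isolated as a standalone function (logic copied verbatim from the diff):

```javascript
// Expansion of the "host:port,host:port" sentinel string, as done by the
// .custom() callback in the joi schema above.
function parseSentinelString(hosts) {
    return hosts.split(',').map(item => {
        const [host, port] = item.split(':');
        return { host, port: Number.parseInt(port, 10) };
    });
}
```

For example, `parseSentinelString('10.0.0.1:16379,10.0.0.2:16379')` yields two `{ host, port }` objects with numeric ports.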
@@ -336,7 +377,7 @@ function dmfLocationConstraintAssert(locationObj) {
 function locationConstraintAssert(locationConstraints) {
     const supportedBackends =
         ['mem', 'file', 'scality',
-         'mongodb', 'dmf', 'azure_archive'].concat(Object.keys(validExternalBackends));
+         'mongodb', 'dmf', 'azure_archive', 'vitastor'].concat(Object.keys(validExternalBackends));
     assert(typeof locationConstraints === 'object',
         'bad config: locationConstraints must be an object');
     Object.keys(locationConstraints).forEach(l => {
@@ -461,27 +502,23 @@ function locationConstraintAssert(locationConstraints) {
                 locationConstraints[l].details.connector.hdclient);
         }
     });
-    assert(Object.keys(locationConstraints)
-        .includes('us-east-1'), 'bad locationConfig: must ' +
-        'include us-east-1 as a locationConstraint');
 }

 function parseUtapiReindex(config) {
     const {
         enabled,
         schedule,
-        sentinel,
+        redis,
         bucketd,
         onlyCountLatestWhenObjectLocked,
     } = config;
     assert(typeof enabled === 'boolean',
         'bad config: utapi.reindex.enabled must be a boolean');
-    assert(typeof sentinel === 'object',
-        'bad config: utapi.reindex.sentinel must be an object');
-    assert(typeof sentinel.port === 'number',
-        'bad config: utapi.reindex.sentinel.port must be a number');
-    assert(typeof sentinel.name === 'string',
-        'bad config: utapi.reindex.sentinel.name must be a string');
+    const parsedRedis = parseRedisConfig(redis);
+    assert(Array.isArray(parsedRedis.sentinels),
+        'bad config: utapi reindex redis config requires a list of sentinels');
     assert(typeof bucketd === 'object',
         'bad config: utapi.reindex.bucketd must be an object');
     assert(typeof bucketd.port === 'number',
@@ -499,6 +536,13 @@ function parseUtapiReindex(config) {
             'bad config: utapi.reindex.schedule must be a valid ' +
             `cron schedule. ${e.message}.`);
     }
+    return {
+        enabled,
+        schedule,
+        redis: parsedRedis,
+        bucketd,
+        onlyCountLatestWhenObjectLocked,
+    };
 }

 function requestsConfigAssert(requestsConfig) {
@@ -586,7 +630,6 @@ class Config extends EventEmitter {
         // Read config automatically
         this._getLocationConfig();
         this._getConfig();
-        this._configureBackends();
     }

     _getLocationConfig() {
@@ -798,11 +841,11 @@ class Config extends EventEmitter {
             this.websiteEndpoints = config.websiteEndpoints;
         }

-        this.clusters = false;
-        if (config.clusters !== undefined) {
-            assert(Number.isInteger(config.clusters) && config.clusters > 0,
-                'bad config: clusters must be a positive integer');
-            this.clusters = config.clusters;
+        this.workers = false;
+        if (config.workers !== undefined) {
+            assert(Number.isInteger(config.workers) && config.workers > 0,
+                'bad config: workers must be a positive integer');
+            this.workers = config.workers;
         }

         if (config.usEastBehavior !== undefined) {
@@ -1040,8 +1083,7 @@ class Config extends EventEmitter {
             assert(typeof config.localCache.port === 'number',
                 'config: bad port for localCache. port must be a number');
             if (config.localCache.password !== undefined) {
-                assert(
-                    this._verifyRedisPassword(config.localCache.password),
+                assert(typeof config.localCache.password === 'string',
                     'config: vad password for localCache. password must' +
                     ' be a string');
             }
@@ -1067,55 +1109,7 @@ class Config extends EventEmitter {
         }

         if (config.redis) {
-            if (config.redis.sentinels) {
-                this.redis = { sentinels: [], name: null };
-
-                assert(typeof config.redis.name === 'string',
-                    'bad config: redis sentinel name must be a string');
-                this.redis.name = config.redis.name;
-                assert(Array.isArray(config.redis.sentinels) ||
-                    typeof config.redis.sentinels === 'string',
-                    'bad config: redis sentinels must be an array or string');
-
-                if (typeof config.redis.sentinels === 'string') {
-                    config.redis.sentinels.split(',').forEach(item => {
-                        const [host, port] = item.split(':');
-                        this.redis.sentinels.push({ host,
-                            port: Number.parseInt(port, 10) });
-                    });
-                } else if (Array.isArray(config.redis.sentinels)) {
-                    config.redis.sentinels.forEach(item => {
-                        const { host, port } = item;
-                        assert(typeof host === 'string',
-                            'bad config: redis sentinel host must be a string');
-                        assert(typeof port === 'number',
-                            'bad config: redis sentinel port must be a number');
-                        this.redis.sentinels.push({ host, port });
-                    });
-                }
-
-                if (config.redis.sentinelPassword !== undefined) {
-                    assert(
-                        this._verifyRedisPassword(config.redis.sentinelPassword));
-                    this.redis.sentinelPassword = config.redis.sentinelPassword;
-                }
-            } else {
-                // check for standalone configuration
-                this.redis = {};
-                assert(typeof config.redis.host === 'string',
-                    'bad config: redis.host must be a string');
-                assert(typeof config.redis.port === 'number',
-                    'bad config: redis.port must be a number');
-                this.redis.host = config.redis.host;
-                this.redis.port = config.redis.port;
-            }
-            if (config.redis.password !== undefined) {
-                assert(
-                    this._verifyRedisPassword(config.redis.password),
-                    'bad config: invalid password for redis. password must ' +
-                    'be a string');
-                this.redis.password = config.redis.password;
-            }
+            this.redis = parseRedisConfig(config.redis);
         }
         if (config.scuba) {
             this.scuba = {};
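All of the hand-rolled validation above collapses into the single `parseRedisConfig` call. For illustration, two hedged examples of config shapes that schema accepts (the values and sentinel group name are hypothetical, not from the source); `.xor('host', 'sentinels')` makes the two forms mutually exclusive:

```javascript
// Standalone form: host/port (tied together by .and('host', 'port')).
const standalone = { host: 'localhost', port: 6379, password: '' };

// Sentinel form: name + sentinels (tied together by .and('sentinels', 'name')),
// with sentinels as either an array or a "host:port,host:port" string.
const sentinelBased = {
    name: 'mymaster',
    sentinels: '10.0.0.1:16379,10.0.0.2:16379',
    sentinelPassword: '',
};
```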
@@ -1183,50 +1177,8 @@ class Config extends EventEmitter {
             assert(config.redis, 'missing required property of utapi ' +
                 'configuration: redis');
             if (config.utapi.redis) {
-                if (config.utapi.redis.sentinels) {
-                    this.utapi.redis = { sentinels: [], name: null };
-
-                    assert(typeof config.utapi.redis.name === 'string',
-                        'bad config: redis sentinel name must be a string');
-                    this.utapi.redis.name = config.utapi.redis.name;
-
-                    assert(Array.isArray(config.utapi.redis.sentinels),
-                        'bad config: redis sentinels must be an array');
-                    config.utapi.redis.sentinels.forEach(item => {
-                        const { host, port } = item;
-                        assert(typeof host === 'string',
-                            'bad config: redis sentinel host must be a string');
-                        assert(typeof port === 'number',
-                            'bad config: redis sentinel port must be a number');
-                        this.utapi.redis.sentinels.push({ host, port });
-                    });
-                } else {
-                    // check for standalone configuration
-                    this.utapi.redis = {};
-                    assert(typeof config.utapi.redis.host === 'string',
-                        'bad config: redis.host must be a string');
-                    assert(typeof config.utapi.redis.port === 'number',
-                        'bad config: redis.port must be a number');
-                    this.utapi.redis.host = config.utapi.redis.host;
-                    this.utapi.redis.port = config.utapi.redis.port;
-                }
-                if (config.utapi.redis.retry !== undefined) {
-                    if (config.utapi.redis.retry.connectBackoff !== undefined) {
-                        const { min, max, jitter, factor, deadline } = config.utapi.redis.retry.connectBackoff;
-                        assert.strictEqual(typeof min, 'number',
-                            'utapi.redis.retry.connectBackoff: min must be a number');
-                        assert.strictEqual(typeof max, 'number',
-                            'utapi.redis.retry.connectBackoff: max must be a number');
-                        assert.strictEqual(typeof jitter, 'number',
-                            'utapi.redis.retry.connectBackoff: jitter must be a number');
-                        assert.strictEqual(typeof factor, 'number',
-                            'utapi.redis.retry.connectBackoff: factor must be a number');
-                        assert.strictEqual(typeof deadline, 'number',
-                            'utapi.redis.retry.connectBackoff: deadline must be a number');
-                    }
-
-                    this.utapi.redis.retry = config.utapi.redis.retry;
-                } else {
+                this.utapi.redis = parseRedisConfig(config.utapi.redis);
+                if (this.utapi.redis.retry === undefined) {
                     this.utapi.redis.retry = {
                         connectBackoff: {
                             min: 10,
@@ -1237,22 +1189,6 @@ class Config extends EventEmitter {
                         },
                     };
                 }
-                if (config.utapi.redis.password !== undefined) {
-                    assert(
-                        this._verifyRedisPassword(config.utapi.redis.password),
-                        'config: invalid password for utapi redis. password' +
-                        ' must be a string');
-                    this.utapi.redis.password = config.utapi.redis.password;
-                }
-                if (config.utapi.redis.sentinelPassword !== undefined) {
-                    assert(
-                        this._verifyRedisPassword(
-                            config.utapi.redis.sentinelPassword),
-                        'config: invalid password for utapi redis. password' +
-                        ' must be a string');
-                    this.utapi.redis.sentinelPassword =
-                        config.utapi.redis.sentinelPassword;
-                }
             }
             if (config.utapi.metrics) {
                 this.utapi.metrics = config.utapi.metrics;
@@ -1322,8 +1258,7 @@ class Config extends EventEmitter {
         }

         if (config.utapi && config.utapi.reindex) {
-            parseUtapiReindex(config.utapi.reindex);
-            this.utapi.reindex = config.utapi.reindex;
+            this.utapi.reindex = parseUtapiReindex(config.utapi.reindex);
         }
     }

@@ -1368,6 +1303,8 @@ class Config extends EventEmitter {
             }
         }

+        this.authdata = config.authdata || 'authdata.json';
+
         this.kms = {};
         if (config.kms) {
             assert(typeof config.kms.userName === 'string');
@@ -1587,25 +1524,6 @@ class Config extends EventEmitter {
             this.outboundProxy.certs = certObj.certs;
         }

-        this.managementAgent = {};
-        this.managementAgent.port = 8010;
-        this.managementAgent.host = 'localhost';
-        if (config.managementAgent !== undefined) {
-            if (config.managementAgent.port !== undefined) {
-                assert(Number.isInteger(config.managementAgent.port)
-                    && config.managementAgent.port > 0,
-                    'bad config: managementAgent port must be a positive ' +
-                    'integer');
-                this.managementAgent.port = config.managementAgent.port;
-            }
-            if (config.managementAgent.host !== undefined) {
-                assert.strictEqual(typeof config.managementAgent.host, 'string',
-                    'bad config: management agent host must ' +
-                    'be a string');
-                this.managementAgent.host = config.managementAgent.host;
-            }
-        }
-
         // Ephemeral token to protect the reporting endpoint:
         // try inherited from parent first, then hardcoded in conf file,
         // then create a fresh one as last resort.
@@ -1695,6 +1613,8 @@ class Config extends EventEmitter {
                 'bad config: maxScannedLifecycleListingEntries must be greater than 2');
             this.maxScannedLifecycleListingEntries = config.maxScannedLifecycleListingEntries;
         }
+
+        this._configureBackends(config);
     }

     _setTimeOptions() {
@@ -1733,35 +1653,37 @@ class Config extends EventEmitter {
     }

     _getAuthData() {
-        return require(findConfigFile(process.env.S3AUTH_CONFIG || 'authdata.json'));
+        return JSON.parse(fs.readFileSync(findConfigFile(process.env.S3AUTH_CONFIG || this.authdata), { encoding: 'utf-8' }));
     }

-    _configureBackends() {
+    _configureBackends(config) {
+        const backends = config.backends || {};
         /**
         * Configure the backends for Authentication, Data and Metadata.
         */
-        let auth = 'mem';
-        let data = 'multiple';
-        let metadata = 'file';
-        let kms = 'file';
-        let quota = 'none';
+        let auth = backends.auth || 'mem';
+        let data = backends.data || 'multiple';
+        let metadata = backends.metadata || 'file';
+        let kms = backends.kms || 'file';
+        let quota = backends.quota || 'none';
         if (process.env.S3BACKEND) {
             const validBackends = ['mem', 'file', 'scality', 'cdmi'];
             assert(validBackends.indexOf(process.env.S3BACKEND) > -1,
                 'bad environment variable: S3BACKEND environment variable ' +
                 'should be one of mem/file/scality/cdmi'
             );
-            auth = process.env.S3BACKEND;
+            auth = process.env.S3BACKEND == 'scality' ? 'scality' : 'mem';
             data = process.env.S3BACKEND;
             metadata = process.env.S3BACKEND;
             kms = process.env.S3BACKEND;
         }
         if (process.env.S3VAULT) {
             auth = process.env.S3VAULT;
+            auth = (auth === 'file' || auth === 'mem' || auth === 'cdmi' ? 'mem' : auth);
         }

         if (auth === 'file' || auth === 'mem' || auth === 'cdmi') {
             // Auth only checks for 'mem' since mem === file
-            auth = 'mem';
             let authData;
             if (process.env.SCALITY_ACCESS_KEY_ID &&
             process.env.SCALITY_SECRET_ACCESS_KEY) {
@@ -1790,10 +1712,10 @@ class Config extends EventEmitter {
                 'should be one of mem/file/scality/multiple'
             );
             data = process.env.S3DATA;
-        }
             if (data === 'scality' || data === 'multiple') {
                 data = 'multiple';
             }
+        }
         assert(this.locationConstraints !== undefined &&
             this.restEndpoints !== undefined,
             'bad config: locationConstraints and restEndpoints must be set'
@@ -1817,10 +1739,6 @@ class Config extends EventEmitter {
         };
     }

-    _verifyRedisPassword(password) {
-        return typeof password === 'string';
-    }
-
     setAuthDataAccounts(accounts) {
         this.authData.accounts = accounts;
         this.emit('authdata-update');
@@ -1955,6 +1873,7 @@ class Config extends EventEmitter {

 module.exports = {
     parseSproxydConfig,
+    parseRedisConfig,
     locationConstraintAssert,
     ConfigObject: Config,
     config: new Config(),
@@ -45,9 +45,8 @@ function checkLocationConstraint(request, locationConstraint, log) {
     } else if (parsedHost && restEndpoints[parsedHost]) {
         locationConstraintChecked = restEndpoints[parsedHost];
     } else {
-        log.trace('no location constraint provided on bucket put;' +
-            'setting us-east-1');
-        locationConstraintChecked = 'us-east-1';
+        locationConstraintChecked = Object.keys(locationConstraints)[0];
+        log.trace('no location constraint provided on bucket put; setting ' + locationConstraintChecked);
     }

     if (!locationConstraints[locationConstraintChecked]) {
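Together with the dropped `us-east-1` assertion in `locationConstraintAssert`, this fallback means a bucket PUT without a location constraint now lands on the first configured constraint rather than a hardcoded `us-east-1`. As a standalone sketch (function name is illustrative):

```javascript
// Sketch of the new fallback rule: pick the first configured location
// constraint. In JavaScript, Object.keys returns string keys in
// insertion order, so "first" means first in the location config.
function defaultLocationConstraint(locationConstraints) {
    return Object.keys(locationConstraints)[0];
}
```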
@@ -6,6 +6,7 @@ const convertToXml = s3middleware.convertToXml;
 const { pushMetric } = require('../utapi/utilities');
 const collectCorsHeaders = require('../utilities/collectCorsHeaders');
 const { hasNonPrintables } = require('../utilities/stringChecks');
+const { config } = require('../Config');
 const { cleanUpBucket } = require('./apiUtils/bucket/bucketCreation');
 const constants = require('../../constants');
 const services = require('../services');
@@ -65,7 +66,7 @@ function initiateMultipartUpload(authInfo, request, log, callback) {
     const websiteRedirectHeader =
         request.headers['x-amz-website-redirect-location'];
     if (request.headers['x-amz-storage-class'] &&
-        !constants.validStorageClasses.includes(request.headers['x-amz-storage-class'])) {
+        !config.locationConstraints[request.headers['x-amz-storage-class']]) {
         log.trace('invalid storage-class header');
         monitoring.promMetrics('PUT', bucketName,
             errors.InvalidStorageClass.code, 'initiateMultipartUpload');
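This hunk (and the matching ones in the object-copy and object-put handlers below) changes what counts as a valid `x-amz-storage-class`: validity is now keyed on the configured `config.locationConstraints` instead of the static `constants.validStorageClasses` list. Reduced to a predicate (name and shape are illustrative, not from the source):

```javascript
// Hypothetical reduction of the changed condition: the header passes iff
// it is absent or names a configured location constraint.
function isAcceptedStorageClass(header, locationConstraints) {
    return !header || Boolean(locationConstraints[header]);
}
```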
@@ -508,8 +508,9 @@ function multiObjectDelete(authInfo, request, log, callback) {
         if (bucketShield(bucketMD, 'objectDelete')) {
             return next(errors.NoSuchBucket);
         }
-        if (!isBucketAuthorized(bucketMD, 'objectDelete', canonicalID, authInfo, log, request,
-            request.actionImplicitDenies)) {
+        // The implicit deny flag is ignored in the DeleteObjects API, as authorization only
+        // affects the objects.
+        if (!isBucketAuthorized(bucketMD, 'objectDelete', canonicalID, authInfo, log, request)) {
             log.trace("access denied due to bucket acl's");
             // if access denied at the bucket level, no access for
             // any of the objects so all results will be error results
@@ -249,7 +249,7 @@ function objectCopy(authInfo, request, sourceBucket,
     const responseHeaders = {};

     if (request.headers['x-amz-storage-class'] &&
-        !constants.validStorageClasses.includes(request.headers['x-amz-storage-class'])) {
+        !config.locationConstraints[request.headers['x-amz-storage-class']]) {
         log.trace('invalid storage-class header');
         monitoring.promMetrics('PUT', destBucketName,
             errors.InvalidStorageClass.code, 'copyObject');
@@ -3,6 +3,7 @@ const { errors, versioning } = require('arsenal');

 const constants = require('../../constants');
 const aclUtils = require('../utilities/aclUtils');
+const { config } = require('../Config');
 const { cleanUpBucket } = require('./apiUtils/bucket/bucketCreation');
 const { getObjectSSEConfiguration } = require('./apiUtils/bucket/bucketEncryption');
 const collectCorsHeaders = require('../utilities/collectCorsHeaders');
@@ -71,7 +72,7 @@ function objectPut(authInfo, request, streamingV4Params, log, callback) {
         query,
     } = request;
     if (headers['x-amz-storage-class'] &&
-        !constants.validStorageClasses.includes(headers['x-amz-storage-class'])) {
+        !config.locationConstraints[headers['x-amz-storage-class']]) {
         log.trace('invalid storage-class header');
         monitoring.promMetrics('PUT', request.bucketName,
             errors.InvalidStorageClass.code, 'putObject');
@@ -1,4 +1,3 @@
-const vaultclient = require('vaultclient');
 const { auth } = require('arsenal');

 const { config } = require('../Config');
@@ -21,6 +20,7 @@ function getVaultClient(config) {
             port,
             https: true,
         });
+        const vaultclient = require('vaultclient');
         vaultClient = new vaultclient.Client(host, port, true, key, cert, ca);
     } else {
         logger.info('vaultclient configuration', {
@@ -28,6 +28,7 @@ function getVaultClient(config) {
             port,
             https: false,
         });
+        const vaultclient = require('vaultclient');
         vaultClient = new vaultclient.Client(host, port);
     }

@@ -49,10 +50,6 @@ function getMemBackend(config) {
 }

 switch (config.backends.auth) {
-case 'mem':
-    implName = 'vaultMem';
-    client = getMemBackend(config);
-    break;
 case 'multiple':
     implName = 'vaultChain';
     client = new ChainBackend('s3', [
@@ -60,9 +57,14 @@ case 'multiple':
         getVaultClient(config),
     ]);
     break;
-default: // vault
+case 'vault':
     implName = 'vault';
     client = getVaultClient(config);
+    break;
+default: // mem
+    implName = 'vaultMem';
+    client = getMemBackend(config);
+    break;
 }

 module.exports = new Vault(client, implName);
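The vault hunks do two things: they move `require('vaultclient')` from module load time into `getVaultClient`, so the process can start without the package installed, and they reorder the `switch` so that `'vault'` is an explicit case while the `default` branch now falls back to the in-memory backend. A minimal sketch of the resulting dispatch, with backend construction reduced to the implementation name:

```javascript
// Sketch of the post-change dispatch: 'vault' is explicit, anything
// unrecognized (including 'mem') falls back to the in-memory backend.
// Returns only the implName; real code also constructs a client.
function pickAuthBackend(name) {
    switch (name) {
    case 'multiple':
        return 'vaultChain';
    case 'vault':
        return 'vault';
    default: // mem
        return 'vaultMem';
    }
}
```

The practical effect of swapping the default is that a missing or misspelled `backends.auth` value now selects the harmless in-memory backend instead of trying to reach Vault.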
@@ -8,20 +8,6 @@ const inMemory = require('./in_memory/backend').backend;
 const file = require('./file/backend');
 const KMIPClient = require('arsenal').network.kmipClient;
 const Common = require('./common');
-let scalityKMS;
-let scalityKMSImpl;
-try {
-    // eslint-disable-next-line import/no-unresolved
-    const ScalityKMS = require('scality-kms');
-    scalityKMS = new ScalityKMS(config.kms);
-    scalityKMSImpl = 'scalityKms';
-} catch (error) {
-    logger.warn('scality kms unavailable. ' +
-    'Using file kms backend unless mem specified.',
-    { error });
-    scalityKMS = file;
-    scalityKMSImpl = 'fileKms';
-}
-
 let client;
 let implName;
@@ -33,8 +19,9 @@ if (config.backends.kms === 'mem') {
     client = file;
     implName = 'fileKms';
 } else if (config.backends.kms === 'scality') {
-    client = scalityKMS;
-    implName = scalityKMSImpl;
+    const ScalityKMS = require('scality-kms');
+    client = new ScalityKMS(config.kms);
+    implName = 'scalityKms';
 } else if (config.backends.kms === 'kmip') {
     const kmipConfig = { kmip: config.kmip };
     if (!kmipConfig.kmip) {
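The KMS hunks remove the load-time `try { require('scality-kms') } catch` probe, which silently fell back to the file backend, and instead `require` the package only inside the `'scality'` branch, so a missing package now fails loudly when that backend is actually selected. For reference, a self-contained sketch of the probe-and-fallback pattern being removed (the stand-in client value is illustrative):

```javascript
// Sketch of the removed pattern: probe an optional dependency at
// resolution time and fall back to a stand-in backend when it is
// not installed.
function resolveKms() {
    try {
        // eslint-disable-next-line import/no-unresolved
        const ScalityKMS = require('scality-kms'); // optional package
        return { implName: 'scalityKms', client: new ScalityKMS({}) };
    } catch (error) {
        // Package missing (or failed to load): use a file-backend stand-in.
        return { implName: 'fileKms', client: 'file-backend-stub' };
    }
}

const kms = resolveKms();
```

Which branch runs depends on whether `scality-kms` is installed; the change in the diff trades this silent degradation for an explicit failure.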
@@ -1,131 +0,0 @@
-/**
- * Target service that should handle a message
- * @readonly
- * @enum {number}
- */
-const MessageType = {
-    /** Message that contains a configuration overlay */
-    CONFIG_OVERLAY_MESSAGE: 1,
-    /** Message that requests a metrics report */
-    METRICS_REQUEST_MESSAGE: 2,
-    /** Message that contains a metrics report */
-    METRICS_REPORT_MESSAGE: 3,
-    /** Close the virtual TCP socket associated to the channel */
-    CHANNEL_CLOSE_MESSAGE: 4,
-    /** Write data to the virtual TCP socket associated to the channel */
-    CHANNEL_PAYLOAD_MESSAGE: 5,
-};
-
-/**
- * Target service that should handle a message
- * @readonly
- * @enum {number}
- */
-const TargetType = {
-    /** Let the dispatcher choose the most appropriate message */
-    TARGET_ANY: 0,
-};
-
-const headerSize = 3;
-
-class ChannelMessageV0 {
-    /**
-     * @param {Buffer} buffer Message bytes
-     */
-    constructor(buffer) {
-        this.messageType = buffer.readUInt8(0);
-        this.channelNumber = buffer.readUInt8(1);
-        this.target = buffer.readUInt8(2);
-        this.payload = buffer.slice(headerSize);
-    }
-
-    /**
-     * @returns {number} Message type
-     */
-    getType() {
-        return this.messageType;
-    }
-
-    /**
-     * @returns {number} Channel number if applicable
-     */
-    getChannelNumber() {
-        return this.channelNumber;
-    }
-
-    /**
-     * @returns {number} Target service, or 0 to choose automatically
-     */
-    getTarget() {
-        return this.target;
-    }
-
-    /**
-     * @returns {Buffer} Message payload if applicable
-     */
-    getPayload() {
-        return this.payload;
-    }
-
-    /**
-     * Creates a wire representation of a channel close message
-     *
-     * @param {number} channelId Channel number
-     *
-     * @returns {Buffer} wire representation
-     */
-    static encodeChannelCloseMessage(channelId) {
-        const buf = Buffer.alloc(headerSize);
-        buf.writeUInt8(MessageType.CHANNEL_CLOSE_MESSAGE, 0);
-        buf.writeUInt8(channelId, 1);
-        buf.writeUInt8(TargetType.TARGET_ANY, 2);
-        return buf;
-    }
-
-    /**
-     * Creates a wire representation of a channel data message
-     *
-     * @param {number} channelId Channel number
-     * @param {Buffer} data Payload
-     *
-     * @returns {Buffer} wire representation
-     */
-    static encodeChannelDataMessage(channelId, data) {
-        const buf = Buffer.alloc(data.length + headerSize);
-        buf.writeUInt8(MessageType.CHANNEL_PAYLOAD_MESSAGE, 0);
-        buf.writeUInt8(channelId, 1);
-        buf.writeUInt8(TargetType.TARGET_ANY, 2);
-        data.copy(buf, headerSize);
-        return buf;
-    }
-
-    /**
-     * Creates a wire representation of a metrics message
-     *
-     * @param {object} body Metrics report
-     *
-     * @returns {Buffer} wire representation
-     */
-    static encodeMetricsReportMessage(body) {
-        const report = JSON.stringify(body);
-        const buf = Buffer.alloc(report.length + headerSize);
-        buf.writeUInt8(MessageType.METRICS_REPORT_MESSAGE, 0);
-        buf.writeUInt8(0, 1);
-        buf.writeUInt8(TargetType.TARGET_ANY, 2);
-        buf.write(report, headerSize);
-        return buf;
-    }
-
-    /**
-     * Protocol name used for subprotocol negociation
-     */
-    static get protocolName() {
-        return 'zenko-secure-channel-v0';
-    }
-}
-
-module.exports = {
-    ChannelMessageV0,
-    MessageType,
-    TargetType,
-};
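The deleted `ChannelMessageV0` class frames every message as a fixed 3-byte header, one byte each for message type, channel number, and target, followed by the raw payload. A runnable encode/decode round trip of that wire format, using the same constants as the removed file:

```javascript
// Round trip of the removed ChannelMessageV0 wire format:
// [messageType, channelNumber, target] header + payload bytes.
const HEADER_SIZE = 3;
const CHANNEL_PAYLOAD_MESSAGE = 5; // MessageType.CHANNEL_PAYLOAD_MESSAGE
const TARGET_ANY = 0;              // TargetType.TARGET_ANY

function encodeChannelDataMessage(channelId, data) {
    const buf = Buffer.alloc(data.length + HEADER_SIZE);
    buf.writeUInt8(CHANNEL_PAYLOAD_MESSAGE, 0);
    buf.writeUInt8(channelId, 1);
    buf.writeUInt8(TARGET_ANY, 2);
    data.copy(buf, HEADER_SIZE); // payload follows the header
    return buf;
}

function decodeMessage(buffer) {
    return {
        messageType: buffer.readUInt8(0),
        channelNumber: buffer.readUInt8(1),
        target: buffer.readUInt8(2),
        payload: buffer.slice(HEADER_SIZE),
    };
}

const wire = encodeChannelDataMessage(7, Buffer.from('hello'));
const msg = decodeMessage(wire);
```

One-byte fields cap the protocol at 256 message types, channels, and targets, which is why the class also carries a `protocolName` for subprotocol negotiation: any richer format would need a new version.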
@@ -1,94 +0,0 @@
-const WebSocket = require('ws');
-const arsenal = require('arsenal');
-
-const logger = require('../utilities/logger');
-const _config = require('../Config').config;
-const { patchConfiguration } = require('./configuration');
-const { reshapeExceptionError } = arsenal.errorUtils;
-
-
-const managementAgentMessageType = {
-    /** Message that contains the loaded overlay */
-    NEW_OVERLAY: 1,
-};
-
-const CONNECTION_RETRY_TIMEOUT_MS = 5000;
-
-
-function initManagementClient() {
-    const { host, port } = _config.managementAgent;
-
-    const ws = new WebSocket(`ws://${host}:${port}/watch`);
-
-    ws.on('open', () => {
-        logger.info('connected with management agent');
-    });
-
-    ws.on('close', (code, reason) => {
-        logger.info('disconnected from management agent', { reason });
-        setTimeout(initManagementClient, CONNECTION_RETRY_TIMEOUT_MS);
-    });
-
-    ws.on('error', error => {
-        logger.error('error on connection with management agent', { error });
-    });
-
-    ws.on('message', data => {
-        const method = 'initManagementclient::onMessage';
-        const log = logger.newRequestLogger();
-        let msg;
-
-        if (!data) {
-            log.error('message without data', { method });
-            return;
-        }
-        try {
-            msg = JSON.parse(data);
-        } catch (err) {
-            log.error('data is an invalid json', { method, err, data });
-            return;
-        }
-
-        if (msg.payload === undefined) {
-            log.error('message without payload', { method });
-            return;
-        }
-        if (typeof msg.messageType !== 'number') {
-            log.error('messageType is not an integer', {
-                type: typeof msg.messageType,
-                method,
-            });
-            return;
-        }
-
-        switch (msg.messageType) {
-        case managementAgentMessageType.NEW_OVERLAY:
-            patchConfiguration(msg.payload, log, err => {
-                if (err) {
-                    log.error('failed to patch overlay', {
-                        error: reshapeExceptionError(err),
-                        method,
-                    });
-                }
-            });
-            return;
-        default:
-            log.error('new overlay message with unmanaged message type', {
-                method,
-                type: msg.messageType,
-            });
-            return;
-        }
-    });
-}
-
-function isManagementAgentUsed() {
-    return process.env.MANAGEMENT_USE_AGENT === '1';
-}
-
-
-module.exports = {
-    managementAgentMessageType,
-    initManagementClient,
-    isManagementAgentUsed,
-};
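Before dispatching, the removed agent client validates each incoming websocket frame in four steps: non-empty data, parseable JSON, a defined `payload`, and a numeric `messageType`. That guard chain can be lifted into a pure helper (the helper name and return shape are ours, chosen to make it testable without a socket):

```javascript
// Defensive parse mirroring the removed agent client's 'message' handler:
// returns { msg } on success or { error } naming the first failed check.
function parseAgentMessage(data) {
    if (!data) {
        return { error: 'message without data' };
    }
    let msg;
    try {
        msg = JSON.parse(data);
    } catch (err) {
        return { error: 'data is an invalid json' };
    }
    if (msg.payload === undefined) {
        return { error: 'message without payload' };
    }
    if (typeof msg.messageType !== 'number') {
        return { error: 'messageType is not an integer' };
    }
    return { msg };
}
```

Checking `payload === undefined` rather than truthiness matters here: an overlay payload of `{}` or `0` is still a valid message.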
@@ -1,240 +0,0 @@
-const arsenal = require('arsenal');
-
-const { buildAuthDataAccount } = require('../auth/in_memory/builder');
-const _config = require('../Config').config;
-const metadata = require('../metadata/wrapper');
-
-const { getStoredCredentials } = require('./credentials');
-
-const latestOverlayVersionKey = 'configuration/overlay-version';
-const managementDatabaseName = 'PENSIEVE';
-const replicatorEndpoint = 'zenko-cloudserver-replicator';
-const { decryptSecret } = arsenal.pensieve.credentialUtils;
-const { patchLocations } = arsenal.patches.locationConstraints;
-const { reshapeExceptionError } = arsenal.errorUtils;
-const { replicationBackends } = require('arsenal').constants;
-
-function overlayHasVersion(overlay) {
-    return overlay && overlay.version !== undefined;
-}
-
-function remoteOverlayIsNewer(cachedOverlay, remoteOverlay) {
-    return (overlayHasVersion(remoteOverlay) &&
-            (!overlayHasVersion(cachedOverlay) ||
-             remoteOverlay.version > cachedOverlay.version));
-}
-
-/**
- * Updates the live {Config} object with the new overlay configuration.
- *
- * No-op if this version was already applied to the live {Config}.
- *
- * @param {object} newConf Overlay configuration to apply
- * @param {werelogs~Logger} log Request-scoped logger
- * @param {function} cb Function to call with (error, newConf)
- *
- * @returns {undefined}
- */
-function patchConfiguration(newConf, log, cb) {
-    if (newConf.version === undefined) {
-        log.debug('no remote configuration created yet');
-        return process.nextTick(cb, null, newConf);
-    }
-
-    if (_config.overlayVersion !== undefined &&
-        newConf.version <= _config.overlayVersion) {
-        log.debug('configuration version already applied',
-            { configurationVersion: newConf.version });
-        return process.nextTick(cb, null, newConf);
-    }
-    return getStoredCredentials(log, (err, creds) => {
-        if (err) {
-            return cb(err);
-        }
-        const accounts = [];
-        if (newConf.users) {
-            newConf.users.forEach(u => {
-                if (u.secretKey && u.secretKey.length > 0) {
-                    const secretKey = decryptSecret(creds, u.secretKey);
-                    // accountType will be service-replication or service-clueso
-                    let serviceName;
-                    if (u.accountType && u.accountType.startsWith('service-')) {
-                        serviceName = u.accountType.split('-')[1];
-                    }
-                    const newAccount = buildAuthDataAccount(
-                        u.accessKey, secretKey, u.canonicalId, serviceName,
-                        u.userName);
-                    accounts.push(newAccount.accounts[0]);
-                }
-            });
-        }
-
-        const restEndpoints = Object.assign({}, _config.restEndpoints);
-        if (newConf.endpoints) {
-            newConf.endpoints.forEach(e => {
-                restEndpoints[e.hostname] = e.locationName;
-            });
-        }
-
-        if (!restEndpoints[replicatorEndpoint]) {
-            restEndpoints[replicatorEndpoint] = 'us-east-1';
-        }
-
-        const locations = patchLocations(newConf.locations, creds, log);
-        if (Object.keys(locations).length !== 0) {
-            try {
-                _config.setLocationConstraints(locations);
-            } catch (error) {
-                const exceptionError = reshapeExceptionError(error);
-                log.error('could not apply configuration version location ' +
-                    'constraints', { error: exceptionError,
-                        method: 'getStoredCredentials' });
-                return cb(exceptionError);
-            }
-            try {
-                const locationsWithReplicationBackend = Object.keys(locations)
-                // NOTE: In Orbit, we don't need to have Scality location in our
-                // replication endpoind config, since we do not replicate to
-                // any Scality Instance yet.
-                    .filter(key => replicationBackends
-                        [locations[key].type])
-                    .reduce((obj, key) => {
-                        /* eslint no-param-reassign:0 */
-                        obj[key] = locations[key];
-                        return obj;
-                    }, {});
-                _config.setReplicationEndpoints(
-                    locationsWithReplicationBackend);
-            } catch (error) {
-                const exceptionError = reshapeExceptionError(error);
-                log.error('could not apply replication endpoints',
-                    { error: exceptionError, method: 'getStoredCredentials' });
-                return cb(exceptionError);
-            }
-        }
-
-        _config.setAuthDataAccounts(accounts);
-        _config.setRestEndpoints(restEndpoints);
-        _config.setPublicInstanceId(newConf.instanceId);
-
-        if (newConf.browserAccess) {
-            if (Boolean(_config.browserAccessEnabled) !==
-                Boolean(newConf.browserAccess.enabled)) {
-                _config.browserAccessEnabled =
-                    Boolean(newConf.browserAccess.enabled);
-                _config.emit('browser-access-enabled-change');
-            }
-        }
-
-        _config.overlayVersion = newConf.version;
-
-        log.info('applied configuration version',
-            { configurationVersion: _config.overlayVersion });
-
-        return cb(null, newConf);
-    });
-}
-
-/**
- * Writes configuration version to the management database
- *
- * @param {object} cachedOverlay Latest stored configuration version
- *  for freshness comparison purposes
- * @param {object} remoteOverlay New configuration version
- * @param {werelogs~Logger} log Request-scoped logger
- * @param {function} cb Function to call with (error, remoteOverlay)
- *
- * @returns {undefined}
- */
-function saveConfigurationVersion(cachedOverlay, remoteOverlay, log, cb) {
-    if (remoteOverlayIsNewer(cachedOverlay, remoteOverlay)) {
-        const objName = `configuration/overlay/${remoteOverlay.version}`;
-        metadata.putObjectMD(managementDatabaseName, objName, remoteOverlay,
-            {}, log, error => {
-                if (error) {
-                    const exceptionError = reshapeExceptionError(error);
-                    log.error('could not save configuration',
-                        { error: exceptionError,
-                            method: 'saveConfigurationVersion',
-                            configurationVersion: remoteOverlay.version });
-                    cb(exceptionError);
-                    return;
-                }
-                metadata.putObjectMD(managementDatabaseName,
-                    latestOverlayVersionKey, remoteOverlay.version, {}, log,
-                    error => {
-                        if (error) {
-                            log.error('could not save configuration version', {
-                                configurationVersion: remoteOverlay.version,
-                            });
-                        }
-                        cb(error, remoteOverlay);
-                    });
-            });
-    } else {
-        log.debug('no remote configuration to cache yet');
-        process.nextTick(cb, null, remoteOverlay);
-    }
-}
-
-/**
- * Loads the latest cached configuration overlay from the management
- * database, without contacting the Orbit API.
- *
- * @param {werelogs~Logger} log Request-scoped logger
- * @param {function} callback Function called with (error, cachedOverlay)
- *
- * @returns {undefined}
- */
-function loadCachedOverlay(log, callback) {
-    return metadata.getObjectMD(managementDatabaseName,
-        latestOverlayVersionKey, {}, log, (err, version) => {
-            if (err) {
-                if (err.is.NoSuchKey) {
-                    return process.nextTick(callback, null, {});
-                }
-                return callback(err);
-            }
-            return metadata.getObjectMD(managementDatabaseName,
-                `configuration/overlay/${version}`, {}, log, (err, conf) => {
-                    if (err) {
-                        if (err.is.NoSuchKey) {
-                            return process.nextTick(callback, null, {});
-                        }
-                        return callback(err);
-                    }
-                    return callback(null, conf);
-                });
-        });
-}
-
-function applyAndSaveOverlay(overlay, log) {
-    patchConfiguration(overlay, log, err => {
-        if (err) {
-            log.error('could not apply pushed overlay', {
-                error: reshapeExceptionError(err),
-                method: 'applyAndSaveOverlay',
-            });
-            return;
-        }
-        saveConfigurationVersion(null, overlay, log, err => {
-            if (err) {
-                log.error('could not cache overlay version', {
-                    error: reshapeExceptionError(err),
-                    method: 'applyAndSaveOverlay',
-                });
-                return;
-            }
-            log.info('overlay push processed');
-        });
-    });
-}
-
-module.exports = {
-    loadCachedOverlay,
-    managementDatabaseName,
-    patchConfiguration,
-    saveConfigurationVersion,
-    remoteOverlayIsNewer,
-    applyAndSaveOverlay,
-};
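The freshness check at the top of the removed configuration module is worth looking at in isolation: a remote overlay wins only if it carries a version at all and that version is strictly greater than the cached one (or there is no cached version). Lifted out verbatim so it runs standalone:

```javascript
// Freshness check from the removed configuration module, copied as-is.
function overlayHasVersion(overlay) {
    return overlay && overlay.version !== undefined;
}

function remoteOverlayIsNewer(cachedOverlay, remoteOverlay) {
    return (overlayHasVersion(remoteOverlay) &&
            (!overlayHasVersion(cachedOverlay) ||
             remoteOverlay.version > cachedOverlay.version));
}
```

The strict `>` makes `saveConfigurationVersion` idempotent: replaying the same overlay version is a no-op, and only genuinely newer overlays get written back to the PENSIEVE database.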
@@ -1,145 +0,0 @@
-const arsenal = require('arsenal');
-const forge = require('node-forge');
-const request = require('../utilities/request');
-
-const metadata = require('../metadata/wrapper');
-
-const managementDatabaseName = 'PENSIEVE';
-const tokenConfigurationKey = 'auth/zenko/remote-management-token';
-const tokenRotationDelay = 3600 * 24 * 7 * 1000; // 7 days
-const { reshapeExceptionError } = arsenal.errorUtils;
-
-/**
- * Retrieves Orbit API token from the management database.
- *
- * The token is used to authenticate stat posting and
- *
- * @param {werelogs~Logger} log Request-scoped logger to be able to trace
- *   initialization process
- * @param {function} callback Function called with (error, result)
- *
- * @returns {undefined}
- */
-function getStoredCredentials(log, callback) {
-    metadata.getObjectMD(managementDatabaseName, tokenConfigurationKey, {},
-        log, callback);
-}
-
-function issueCredentials(managementEndpoint, instanceId, log, callback) {
-    log.info('registering with API to get token');
-
-    const keyPair = forge.pki.rsa.generateKeyPair({ bits: 2048, e: 0x10001 });
-    const privateKey = forge.pki.privateKeyToPem(keyPair.privateKey);
-    const publicKey = forge.pki.publicKeyToPem(keyPair.publicKey);
-
-    const postData = {
-        publicKey,
-    };
-
-    request.post(`${managementEndpoint}/${instanceId}/register`,
-        { body: postData, json: true }, (error, response, body) => {
-            if (error) {
-                return callback(error);
-            }
-            if (response.statusCode !== 201) {
-                log.error('could not register instance', {
-                    statusCode: response.statusCode,
-                });
-                return callback(arsenal.errors.InternalError);
-            }
-            /* eslint-disable no-param-reassign */
-            body.privateKey = privateKey;
-            /* eslint-enable no-param-reassign */
-            return callback(null, body);
-        });
-}
-
-function confirmInstanceCredentials(
-    managementEndpoint, instanceId, creds, log, callback) {
-    const postData = {
-        serial: creds.serial || 0,
-        publicKey: creds.publicKey,
-    };
-
-    const opts = {
-        headers: {
-            'x-instance-authentication-token': creds.token,
-        },
-        body: postData,
-    };
-
-    request.post(`${managementEndpoint}/${instanceId}/confirm`,
-        opts, (error, response) => {
-            if (error) {
-                return callback(error);
-            }
-            if (response.statusCode === 200) {
-                return callback(null, instanceId, creds.token);
-            }
-            return callback(arsenal.errors.InternalError);
-        });
-}
-
-/**
- * Initializes credentials and PKI in the management database.
- *
- * In case the management database is new and empty, the instance
- * is registered as new against the Orbit API with newly-generated
- * RSA key pair.
- *
- * @param {string} managementEndpoint API endpoint
- * @param {string} instanceId UUID of this deployment
- * @param {werelogs~Logger} log Request-scoped logger to be able to trace
- *   initialization process
- * @param {function} callback Function called with (error, result)
- *
- * @returns {undefined}
- */
-function initManagementCredentials(
-    managementEndpoint, instanceId, log, callback) {
-    getStoredCredentials(log, (error, value) => {
-        if (error) {
-            if (error.is.NoSuchKey) {
-                return issueCredentials(managementEndpoint, instanceId, log,
-                    (error, value) => {
-                        if (error) {
-                            log.error('could not issue token',
-                                { error: reshapeExceptionError(error),
-                                    method: 'initManagementCredentials' });
-                            return callback(error);
-                        }
-                        log.debug('saving token');
-                        return metadata.putObjectMD(managementDatabaseName,
-                            tokenConfigurationKey, value, {}, log, error => {
-                                if (error) {
-                                    log.error('could not save token',
-                                        { error: reshapeExceptionError(error),
-                                            method: 'initManagementCredentials',
-                                        });
-                                    return callback(error);
-                                }
-                                log.info('saved token locally, ' +
-                                    'confirming instance');
-                                return confirmInstanceCredentials(
-                                    managementEndpoint, instanceId, value, log,
-                                    callback);
-                            });
-                    });
-            }
-            log.debug('could not get token', { error });
-            return callback(error);
-        }
-
-        log.info('returning existing token');
-        if (Date.now() - value.issueDate > tokenRotationDelay) {
-            log.warn('management API token is too old, should re-issue');
-        }
-
-        return callback(null, instanceId, value.token);
-    });
-}
-
-module.exports = {
-    getStoredCredentials,
-    initManagementCredentials,
-};
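Note that the removed `initManagementCredentials` only warns, and does not re-issue, when a stored token is older than `tokenRotationDelay`. The age check can be extracted into a pure helper (the helper name and the injectable `now` parameter are ours, added for testability):

```javascript
// Age check from the removed credentials module, as a pure function.
const tokenRotationDelay = 3600 * 24 * 7 * 1000; // 7 days in ms

function tokenNeedsRotation(issueDate, now = Date.now()) {
    // Strictly older than the rotation delay triggers the warning.
    return now - issueDate > tokenRotationDelay;
}
```

Injecting `now` instead of calling `Date.now()` inline is what makes a time-dependent check like this deterministic under test.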
@ -1,138 +0,0 @@
|
||||||
const arsenal = require('arsenal');
|
|
||||||
const async = require('async');
|
|
||||||
|
|
||||||
const metadata = require('../metadata/wrapper');
|
|
||||||
const logger = require('../utilities/logger');
|
|
||||||
|
|
||||||
const {
|
|
||||||
loadCachedOverlay,
|
|
||||||
managementDatabaseName,
|
|
||||||
patchConfiguration,
|
|
||||||
} = require('./configuration');
|
|
||||||
const { initManagementCredentials } = require('./credentials');
|
|
||||||
const { startWSManagementClient } = require('./push');
|
|
||||||
const { startPollingManagementClient } = require('./poll');
|
|
||||||
const { reshapeExceptionError } = arsenal.errorUtils;
|
|
||||||
const { isManagementAgentUsed } = require('./agentClient');
|
|
||||||
|
|
||||||
const initRemoteManagementRetryDelay = 10000;
|
|
||||||
|
|
||||||
const managementEndpointRoot =
|
|
||||||
process.env.MANAGEMENT_ENDPOINT ||
|
|
||||||
'https://api.zenko.io';
|
|
||||||
const managementEndpoint = `${managementEndpointRoot}/api/v1/instance`;
|
|
||||||
|
|
||||||
const pushEndpointRoot =
|
|
||||||
process.env.PUSH_ENDPOINT ||
|
|
||||||
'https://push.api.zenko.io';
|
|
||||||
const pushEndpoint = `${pushEndpointRoot}/api/v1/instance`;
|
|
||||||
|
|
||||||
function initManagementDatabase(log, callback) {
|
|
||||||
// XXX choose proper owner names
|
|
||||||
const md = new arsenal.models.BucketInfo(managementDatabaseName, 'owner',
|
|
||||||
'owner display name', new Date().toJSON());
|
|
||||||
|
|
||||||
metadata.createBucket(managementDatabaseName, md, log, error => {
|
|
||||||
if (error) {
|
|
||||||
if (error.is.BucketAlreadyExists) {
|
|
||||||
log.info('created management database');
|
|
||||||
return callback();
|
|
||||||
}
|
|
||||||
log.error('could not initialize management database',
|
|
||||||
{ error: reshapeExceptionError(error),
|
|
||||||
method: 'initManagementDatabase' });
|
|
||||||
return callback(error);
|
|
||||||
}
|
|
||||||
log.info('initialized management database');
|
|
||||||
return callback();
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
function startManagementListeners(instanceId, token) {
|
|
||||||
const mode = process.env.MANAGEMENT_MODE || 'push';
|
|
||||||
if (mode === 'push') {
|
|
||||||
const url = `${pushEndpoint}/${instanceId}/ws`;
|
|
||||||
startWSManagementClient(url, token);
|
|
||||||
} else {
|
|
        startPollingManagementClient(managementEndpoint, instanceId, token);
    }
}

/**
 * Initializes Orbit-based management by:
 * - creating the management database in metadata
 * - generating a key pair for credentials encryption
 * - generating an instance-unique ID
 * - getting an authentication token for the API
 * - loading and applying the latest cached overlay configuration
 * - starting a configuration update and metrics push background task
 *
 * @param {werelogs~Logger} log Request-scoped logger to be able to trace
 * initialization process
 * @param {function} callback Function to call once the overlay is loaded
 * (overlay)
 *
 * @returns {undefined}
 */
function initManagement(log, callback) {
    if ((process.env.REMOTE_MANAGEMENT_DISABLE &&
         process.env.REMOTE_MANAGEMENT_DISABLE !== '0')
        || process.env.S3BACKEND === 'mem') {
        log.info('remote management disabled');
        return;
    }

    /* Temporary check before fully moving to the process management agent. */
    if (isManagementAgentUsed() ^ (typeof callback === 'function')) {
        let msg = 'misuse of initManagement function: ';
        msg += `MANAGEMENT_USE_AGENT: ${process.env.MANAGEMENT_USE_AGENT}`;
        msg += `, callback type: ${typeof callback}`;
        throw new Error(msg);
    }

    async.waterfall([
        // eslint-disable-next-line arrow-body-style
        cb => { return isManagementAgentUsed() ? metadata.setup(cb) : cb(); },
        cb => initManagementDatabase(log, cb),
        cb => metadata.getUUID(log, cb),
        (instanceId, cb) => initManagementCredentials(
            managementEndpoint, instanceId, log, cb),
        (instanceId, token, cb) => {
            if (!isManagementAgentUsed()) {
                cb(null, instanceId, token, {});
                return;
            }
            loadCachedOverlay(log, (err, overlay) => cb(err, instanceId,
                token, overlay));
        },
        (instanceId, token, overlay, cb) => {
            if (!isManagementAgentUsed()) {
                cb(null, instanceId, token, overlay);
                return;
            }
            patchConfiguration(overlay, log,
                err => cb(err, instanceId, token, overlay));
        },
    ], (error, instanceId, token, overlay) => {
        if (error) {
            log.error('could not initialize remote management, retrying later',
                { error: reshapeExceptionError(error),
                  method: 'initManagement' });
            setTimeout(initManagement,
                initRemoteManagementRetryDelay,
                logger.newRequestLogger());
        } else {
            log.info(`this deployment's Instance ID is ${instanceId}`);
            log.end('management init done');
            startManagementListeners(instanceId, token);
            if (callback) {
                callback(overlay);
            }
        }
    });
}

module.exports = {
    initManagement,
    initManagementDatabase,
};
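The misuse check in `initManagement` relies on a JavaScript subtlety worth spelling out: the bitwise `^` operator coerces its boolean operands to 0/1, so the expression is truthy exactly when agent mode and callback presence disagree. A hypothetical standalone sketch (not CloudServer code, `misusedInit` is an illustrative name):

```javascript
// `^` coerces booleans to 0/1, so this is truthy only when exactly one
// of the two conditions holds -- i.e. agent mode without a callback, or
// a callback without agent mode.
function misusedInit(agentUsed, callback) {
    return Boolean(agentUsed ^ (typeof callback === 'function'));
}

console.log(misusedInit(true, () => {}));   // agent mode with callback: ok
console.log(misusedInit(false, undefined)); // no agent, no callback: ok
console.log(misusedInit(true, undefined));  // mismatch: misuse
```

Note that `===` binds tighter than `^`, so the parentheses in the sketch only make the original expression's existing grouping explicit.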
@@ -1,157 +0,0 @@
const arsenal = require('arsenal');
const async = require('async');
const request = require('../utilities/request');

const _config = require('../Config').config;
const logger = require('../utilities/logger');
const metadata = require('../metadata/wrapper');
const {
    loadCachedOverlay,
    patchConfiguration,
    saveConfigurationVersion,
} = require('./configuration');
const { reshapeExceptionError } = arsenal.errorUtils;

const pushReportDelay = 30000;
const pullConfigurationOverlayDelay = 60000;

function loadRemoteOverlay(
    managementEndpoint, instanceId, remoteToken, cachedOverlay, log, cb) {
    log.debug('loading remote overlay');
    const opts = {
        headers: {
            'x-instance-authentication-token': remoteToken,
            'x-scal-request-id': log.getSerializedUids(),
        },
        json: true,
    };
    request.get(`${managementEndpoint}/${instanceId}/config/overlay`, opts,
        (error, response, body) => {
            if (error) {
                return cb(error);
            }
            if (response.statusCode === 200) {
                return cb(null, cachedOverlay, body);
            }
            if (response.statusCode === 404) {
                return cb(null, cachedOverlay, {});
            }
            return cb(arsenal.errors.AccessForbidden, cachedOverlay, {});
        });
}

// TODO save only after successful patch
function applyConfigurationOverlay(
    managementEndpoint, instanceId, remoteToken, log) {
    async.waterfall([
        wcb => loadCachedOverlay(log, wcb),
        (cachedOverlay, wcb) => patchConfiguration(cachedOverlay,
            log, wcb),
        (cachedOverlay, wcb) =>
            loadRemoteOverlay(managementEndpoint, instanceId, remoteToken,
                cachedOverlay, log, wcb),
        (cachedOverlay, remoteOverlay, wcb) =>
            saveConfigurationVersion(cachedOverlay, remoteOverlay, log, wcb),
        (remoteOverlay, wcb) => patchConfiguration(remoteOverlay,
            log, wcb),
    ], error => {
        if (error) {
            log.error('could not apply managed configuration',
                { error: reshapeExceptionError(error),
                  method: 'applyConfigurationOverlay' });
        }
        setTimeout(applyConfigurationOverlay, pullConfigurationOverlayDelay,
            managementEndpoint, instanceId, remoteToken,
            logger.newRequestLogger());
    });
}

function postStats(managementEndpoint, instanceId, remoteToken, report, next) {
    const toURL = `${managementEndpoint}/${instanceId}/stats`;
    const toOptions = {
        json: true,
        headers: {
            'content-type': 'application/json',
            'x-instance-authentication-token': remoteToken,
        },
        body: report,
    };
    const toCallback = (err, response, body) => {
        if (err) {
            logger.info('could not post stats', { error: err });
        }
        if (response && response.statusCode !== 201) {
            logger.info('could not post stats', {
                body,
                statusCode: response.statusCode,
            });
        }
        if (next) {
            next(null, instanceId, remoteToken);
        }
    };
    return request.post(toURL, toOptions, toCallback);
}

function getStats(next) {
    const fromURL = `http://localhost:${_config.port}/_/report`;
    const fromOptions = {
        headers: {
            'x-scal-report-token': process.env.REPORT_TOKEN,
        },
    };
    return request.get(fromURL, fromOptions, next);
}

function pushStats(managementEndpoint, instanceId, remoteToken, next) {
    if (process.env.PUSH_STATS === 'false') {
        return;
    }

    getStats((err, res, report) => {
        if (err) {
            logger.info('could not retrieve stats', { error: err });
            return;
        }

        logger.debug('report', { report });
        postStats(
            managementEndpoint,
            instanceId,
            remoteToken,
            report,
            next
        );
        return;
    });

    setTimeout(pushStats, pushReportDelay,
        managementEndpoint, instanceId, remoteToken);
}

/**
 * Starts background task that updates configuration and pushes stats.
 *
 * Periodically polls for configuration updates, and pushes stats at
 * a fixed interval.
 *
 * @param {string} managementEndpoint API endpoint
 * @param {string} instanceId UUID of this deployment
 * @param {string} remoteToken API authentication token
 *
 * @returns {undefined}
 */
function startPollingManagementClient(
    managementEndpoint, instanceId, remoteToken) {
    metadata.notifyBucketChange(() => {
        pushStats(managementEndpoint, instanceId, remoteToken);
    });

    pushStats(managementEndpoint, instanceId, remoteToken);
    applyConfigurationOverlay(managementEndpoint, instanceId, remoteToken,
        logger.newRequestLogger());
}

module.exports = {
    startPollingManagementClient,
};
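The `loadRemoteOverlay` function above maps HTTP status codes to three outcomes. A hedged sketch of that mapping as a pure helper (`interpretOverlayResponse` is a hypothetical name, not part of the file):

```javascript
// 200: the fetched body is the remote overlay; 404: nothing published yet,
// degrade to an empty overlay; anything else is treated as AccessForbidden,
// matching the error returned via arsenal.errors in the deleted file.
function interpretOverlayResponse(statusCode, body) {
    if (statusCode === 200) {
        return { overlay: body };
    }
    if (statusCode === 404) {
        return { overlay: {} };
    }
    return { error: 'AccessForbidden' };
}

console.log(interpretOverlayResponse(404, null)); // falls back to {}
```

Treating 404 as "empty overlay" rather than an error lets a fresh deployment start before any configuration has been published.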
@@ -1,301 +0,0 @@
const arsenal = require('arsenal');
const HttpsProxyAgent = require('https-proxy-agent');
const net = require('net');
const request = require('../utilities/request');
const { URL } = require('url');
const WebSocket = require('ws');
const assert = require('assert');
const http = require('http');

const _config = require('../Config').config;
const logger = require('../utilities/logger');
const metadata = require('../metadata/wrapper');

const { reshapeExceptionError } = arsenal.errorUtils;
const { isManagementAgentUsed } = require('./agentClient');
const { applyAndSaveOverlay } = require('./configuration');
const {
    ChannelMessageV0,
    MessageType,
} = require('./ChannelMessageV0');

const {
    CONFIG_OVERLAY_MESSAGE,
    METRICS_REQUEST_MESSAGE,
    CHANNEL_CLOSE_MESSAGE,
    CHANNEL_PAYLOAD_MESSAGE,
} = MessageType;

const PING_INTERVAL_MS = 10000;
const subprotocols = [ChannelMessageV0.protocolName];

const cloudServerHost = process.env.SECURE_CHANNEL_DEFAULT_FORWARD_TO_HOST
    || 'localhost';
const cloudServerPort = process.env.SECURE_CHANNEL_DEFAULT_FORWARD_TO_PORT
    || _config.port;

let overlayMessageListener = null;
let connected = false;

// No wildcard nor cidr/mask match for now
function createWSAgent(pushEndpoint, env, log) {
    const url = new URL(pushEndpoint);
    const noProxy = (env.NO_PROXY || env.no_proxy
        || '').split(',');

    if (noProxy.includes(url.hostname)) {
        log.info('push server ws has proxy exclusion', { noProxy });
        return null;
    }

    if (url.protocol === 'https:' || url.protocol === 'wss:') {
        const httpsProxy = (env.HTTPS_PROXY || env.https_proxy);
        if (httpsProxy) {
            log.info('push server ws using https proxy', { httpsProxy });
            return new HttpsProxyAgent(httpsProxy);
        }
    } else if (url.protocol === 'http:' || url.protocol === 'ws:') {
        const httpProxy = (env.HTTP_PROXY || env.http_proxy);
        if (httpProxy) {
            log.info('push server ws using http proxy', { httpProxy });
            return new HttpsProxyAgent(httpProxy);
        }
    }

    const allProxy = (env.ALL_PROXY || env.all_proxy);
    if (allProxy) {
        log.info('push server ws using wildcard proxy', { allProxy });
        return new HttpsProxyAgent(allProxy);
    }

    log.info('push server ws not using proxy');
    return null;
}

/**
 * Starts background task that updates configuration and pushes stats.
 *
 * Receives pushed WebSocket messages on configuration updates, and
 * sends stat messages in response to API solicitations.
 *
 * @param {string} url API endpoint
 * @param {string} token API authentication token
 * @param {function} cb end-of-connection callback
 *
 * @returns {undefined}
 */
function startWSManagementClient(url, token, cb) {
    logger.info('connecting to push server', { url });
    function _logError(error, errorMessage, method) {
        if (error) {
            logger.error(`management client error: ${errorMessage}`,
                { error: reshapeExceptionError(error), method });
        }
    }

    const socketsByChannelId = [];
    const headers = {
        'x-instance-authentication-token': token,
    };
    const agent = createWSAgent(url, process.env, logger);

    const ws = new WebSocket(url, subprotocols, { headers, agent });
    let pingTimeout = null;

    function sendPing() {
        if (ws.readyState === ws.OPEN) {
            ws.ping(err => _logError(err, 'failed to send a ping', 'sendPing'));
        }
        pingTimeout = setTimeout(() => ws.terminate(), PING_INTERVAL_MS);
    }

    function initiatePing() {
        clearTimeout(pingTimeout);
        setTimeout(sendPing, PING_INTERVAL_MS);
    }

    function pushStats(options) {
        if (process.env.PUSH_STATS === 'false') {
            return;
        }
        const fromURL = `http://${cloudServerHost}:${cloudServerPort}/_/report`;
        const fromOptions = {
            json: true,
            headers: {
                'x-scal-report-token': process.env.REPORT_TOKEN,
                'x-scal-report-skip-cache': Boolean(options && options.noCache),
            },
        };
        request.get(fromURL, fromOptions, (err, response, body) => {
            if (err) {
                _logError(err, 'failed to get metrics report', 'pushStats');
                return;
            }
            ws.send(ChannelMessageV0.encodeMetricsReportMessage(body),
                err => _logError(err, 'failed to send metrics report message',
                    'pushStats'));
        });
    }

    function closeChannel(channelId) {
        const socket = socketsByChannelId[channelId];
        if (socket) {
            socket.destroy();
            delete socketsByChannelId[channelId];
        }
    }

    function receiveChannelData(channelId, payload) {
        let socket = socketsByChannelId[channelId];
        if (!socket) {
            socket = net.createConnection(cloudServerPort, cloudServerHost);

            socket.on('data', data => {
                ws.send(ChannelMessageV0.
                    encodeChannelDataMessage(channelId, data), err =>
                    _logError(err, 'failed to send channel data message',
                        'receiveChannelData'));
            });

            socket.on('connect', () => {
            });

            socket.on('drain', () => {
            });

            socket.on('error', error => {
                logger.error('failed to connect to S3', {
                    code: error.code,
                    host: error.address,
                    port: error.port,
                });
            });

            socket.on('end', () => {
                socket.destroy();
                socketsByChannelId[channelId] = null;
                ws.send(ChannelMessageV0.encodeChannelCloseMessage(channelId),
                    err => _logError(err,
                        'failed to send channel close message',
                        'receiveChannelData'));
            });

            socketsByChannelId[channelId] = socket;
        }
        socket.write(payload);
    }

    function browserAccessChangeHandler() {
        if (!_config.browserAccessEnabled) {
            socketsByChannelId.forEach(s => s.close());
        }
    }

    ws.on('open', () => {
        connected = true;
        logger.info('connected to push server');

        metadata.notifyBucketChange(() => {
            pushStats({ noCache: true });
        });
        _config.on('browser-access-enabled-change', browserAccessChangeHandler);

        initiatePing();
    });

    const cbOnce = cb ? arsenal.jsutil.once(cb) : null;

    ws.on('close', () => {
        logger.info('disconnected from push server, reconnecting in 10s');
        metadata.notifyBucketChange(null);
        _config.removeListener('browser-access-enabled-change',
            browserAccessChangeHandler);
        setTimeout(startWSManagementClient, 10000, url, token);
        connected = false;

        if (cbOnce) {
            process.nextTick(cbOnce);
        }
    });

    ws.on('error', err => {
        connected = false;
        logger.error('error from push server connection', {
            error: err,
            errorMessage: err.message,
        });
        if (cbOnce) {
            process.nextTick(cbOnce, err);
        }
    });

    ws.on('ping', () => {
        ws.pong(err => _logError(err, 'failed to send a pong'));
    });

    ws.on('pong', () => {
        initiatePing();
    });

    ws.on('message', data => {
        const log = logger.newRequestLogger();
        const message = new ChannelMessageV0(data);
        switch (message.getType()) {
        case CONFIG_OVERLAY_MESSAGE:
            if (!isManagementAgentUsed()) {
                applyAndSaveOverlay(JSON.parse(message.getPayload()), log);
            } else if (overlayMessageListener) {
                overlayMessageListener(message.getPayload().toString());
            }
            break;
        case METRICS_REQUEST_MESSAGE:
            pushStats();
            break;
        case CHANNEL_CLOSE_MESSAGE:
            closeChannel(message.getChannelNumber());
            break;
        case CHANNEL_PAYLOAD_MESSAGE:
            // browserAccessEnabled defaults to true unless explicitly false
            if (_config.browserAccessEnabled !== false) {
                receiveChannelData(
                    message.getChannelNumber(), message.getPayload());
            }
            break;
        default:
            logger.error('unknown message type from push server',
                { messageType: message.getType() });
        }
    });
}

function addOverlayMessageListener(callback) {
    assert(typeof callback === 'function');
    overlayMessageListener = callback;
}

function startPushConnectionHealthCheckServer(cb) {
    const server = http.createServer((req, res) => {
        if (req.url !== '/_/healthcheck') {
            res.writeHead(404);
            res.write('Not Found');
        } else if (connected) {
            res.writeHead(200);
            res.write('Connected');
        } else {
            res.writeHead(503);
            res.write('Not Connected');
        }
        res.end();
    });

    server.listen(_config.port, cb);
}

module.exports = {
    createWSAgent,
    startWSManagementClient,
    startPushConnectionHealthCheckServer,
    addOverlayMessageListener,
};
@@ -2,9 +2,9 @@ const MetadataWrapper = require('arsenal').storage.metadata.MetadataWrapper;
 const { config } = require('../Config');
 const logger = require('../utilities/logger');
 const constants = require('../../constants');
-const bucketclient = require('bucketclient');

 const clientName = config.backends.metadata;
+let bucketclient;
 let params;
 if (clientName === 'mem') {
     params = {};
@@ -21,6 +21,7 @@ if (clientName === 'mem') {
         noDbOpen: null,
     };
 } else if (clientName === 'scality') {
+    bucketclient = require('bucketclient');
     params = {
         bucketdBootstrap: config.bucketd.bootstrap,
         bucketdLog: config.bucketd.log,
@@ -18,11 +18,6 @@ const locationStorageCheck =
     require('./api/apiUtils/object/locationStorageCheck');
 const vault = require('./auth/vault');
 const metadata = require('./metadata/wrapper');
-const { initManagement } = require('./management');
-const {
-    initManagementClient,
-    isManagementAgentUsed,
-} = require('./management/agentClient');

 const HttpAgent = require('agentkeepalive');
 const QuotaService = require('./quotas/quotas');
@@ -56,7 +51,6 @@ const STATS_INTERVAL = 5; // 5 seconds
 const STATS_EXPIRY = 30; // 30 seconds
 const statsClient = new StatsClient(localCacheClient, STATS_INTERVAL,
     STATS_EXPIRY);
-const enableRemoteManagement = true;

 class S3Server {
     /**
@@ -327,26 +321,13 @@ class S3Server {
             QuotaService?.setup(log);
         }

-        // TODO this should wait for metadata healthcheck to be ok
-        // TODO only do this in cluster master
-        if (enableRemoteManagement) {
-            if (!isManagementAgentUsed()) {
-                setTimeout(() => {
-                    initManagement(logger.newRequestLogger());
-                }, 5000);
-            } else {
-                initManagementClient();
-            }
-        }
-
         this.started = true;
     });
 }

 function main() {
-    // TODO: change config to use workers prop. name for clarity
-    let workers = _config.clusters || 1;
+    let workers = _config.workers || 1;
     if (process.env.S3BACKEND === 'mem') {
         workers = 1;
     }
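The `main()` change above renames the worker-count source from `clusters` to `workers`, resolving the old TODO. A minimal sketch of the resulting resolution logic (the function name is hypothetical, the rules are those of the diff):

```javascript
// Worker count comes from the `workers` config property, defaulting to 1;
// the in-memory backend is always forced down to a single worker.
function resolveWorkers(config, env) {
    let workers = config.workers || 1;
    if (env.S3BACKEND === 'mem') {
        workers = 1;
    }
    return workers;
}

console.log(resolveWorkers({ workers: 4 }, {})); // honors the config
console.log(resolveWorkers({ workers: 4 }, { S3BACKEND: 'mem' })); // mem: 1
```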
@@ -1,16 +1,12 @@
+require('werelogs').stderrUtils.catchAndTimestampStderr();
 const _config = require('../Config').config;
 const { utapiVersion, UtapiServer: utapiServer } = require('utapi');
+const vault = require('../auth/vault');

 // start utapi server
 if (utapiVersion === 1 && _config.utapi) {
     const fullConfig = Object.assign({}, _config.utapi,
-        { redis: _config.redis });
-    if (_config.vaultd) {
-        Object.assign(fullConfig, { vaultd: _config.vaultd });
-    }
-    if (_config.https) {
-        Object.assign(fullConfig, { https: _config.https });
-    }
+        { redis: _config.redis, vaultclient: vault });
     // copy healthcheck IPs
     if (_config.healthChecks) {
         Object.assign(fullConfig, { healthChecks: _config.healthChecks });
@@ -1,3 +1,4 @@
+require('werelogs').stderrUtils.catchAndTimestampStderr();
 const UtapiReindex = require('utapi').UtapiReindex;
 const { config } = require('../Config');

@@ -1,3 +1,4 @@
+require('werelogs').stderrUtils.catchAndTimestampStderr();
 const UtapiReplay = require('utapi').UtapiReplay;
 const _config = require('../Config').config;

@@ -1,7 +1,11 @@
-const { Werelogs } = require('werelogs');
+const { configure, Werelogs } = require('werelogs');

 const _config = require('../Config.js').config;

+configure({
+    level: _config.log.logLevel,
+    dump: _config.log.dumpLevel,
+});
 const werelogs = new Werelogs({
     level: _config.log.logLevel,
     dump: _config.log.dumpLevel,
@@ -0,0 +1,12 @@
+{
+    "STANDARD": {
+        "type": "vitastor",
+        "objectId": "std",
+        "legacyAwsBehavior": true,
+        "details": {
+            "config_path": "/etc/vitastor/vitastor.conf",
+            "pool_id": 3,
+            "metadata_image": "s3-volume-meta"
+        }
+    }
+}
@@ -1,179 +0,0 @@
const Uuid = require('uuid');
const WebSocket = require('ws');

const logger = require('./lib/utilities/logger');
const { initManagement } = require('./lib/management');
const _config = require('./lib/Config').config;
const { managementAgentMessageType } = require('./lib/management/agentClient');
const { addOverlayMessageListener } = require('./lib/management/push');
const { saveConfigurationVersion } = require('./lib/management/configuration');


// TODO: auth?
// TODO: werelogs with a specific name.

const CHECK_BROKEN_CONNECTIONS_FREQUENCY_MS = 15000;


class ManagementAgentServer {
    constructor() {
        this.port = _config.managementAgent.port || 8010;
        this.wss = null;
        this.loadedOverlay = null;

        this.stop = this.stop.bind(this);
        process.on('SIGINT', this.stop);
        process.on('SIGHUP', this.stop);
        process.on('SIGQUIT', this.stop);
        process.on('SIGTERM', this.stop);
        process.on('SIGPIPE', () => {});
    }

    start(_cb) {
        const cb = _cb || function noop() {};

        /* Define the REPORT_TOKEN env variable needed by the management
         * module. */
        process.env.REPORT_TOKEN = process.env.REPORT_TOKEN
            || _config.reportToken
            || Uuid.v4();

        initManagement(logger.newRequestLogger(), overlay => {
            let error = null;

            if (overlay) {
                this.loadedOverlay = overlay;
                this.startServer();
            } else {
                error = new Error('failed to init management');
            }
            return cb(error);
        });
    }

    stop() {
        if (!this.wss) {
            process.exit(0);
            return;
        }
        this.wss.close(() => {
            logger.info('server shutdown');
            process.exit(0);
        });
    }

    startServer() {
        this.wss = new WebSocket.Server({
            port: this.port,
            clientTracking: true,
            path: '/watch',
        });

        this.wss.on('connection', this.onConnection.bind(this));
        this.wss.on('listening', this.onListening.bind(this));
        this.wss.on('error', this.onError.bind(this));

        setInterval(this.checkBrokenConnections.bind(this),
            CHECK_BROKEN_CONNECTIONS_FREQUENCY_MS);

        addOverlayMessageListener(this.onNewOverlay.bind(this));
    }

    onConnection(socket, request) {
        function heartbeat() {
            this.isAlive = true;
        }
        logger.info('client connected to watch route', {
            ip: request.connection.remoteAddress,
        });

        /* eslint-disable no-param-reassign */
        socket.isAlive = true;
        socket.on('pong', heartbeat.bind(socket));

        if (socket.readyState !== socket.OPEN) {
            logger.error('client socket not in ready state', {
                state: socket.readyState,
                client: socket._socket._peername,
            });
            return;
        }

        const msg = {
            messageType: managementAgentMessageType.NEW_OVERLAY,
            payload: this.loadedOverlay,
        };
        socket.send(JSON.stringify(msg), error => {
            if (error) {
                logger.error('failed to send remoteOverlay to client', {
                    error,
                    client: socket._socket._peername,
                });
            }
        });
    }

    onListening() {
        logger.info('websocket server listening',
            { port: this.port });
    }

    onError(error) {
        logger.error('websocket server error', { error });
    }

    _sendNewOverlayToClient(client) {
        if (client.readyState !== client.OPEN) {
            logger.error('client socket not in ready state', {
                state: client.readyState,
                client: client._socket._peername,
            });
            return;
        }

        const msg = {
            messageType: managementAgentMessageType.NEW_OVERLAY,
            payload: this.loadedOverlay,
        };
        client.send(JSON.stringify(msg), error => {
            if (error) {
                logger.error(
                    'failed to send remoteOverlay to management agent client', {
                        error, client: client._socket._peername,
                    });
            }
        });
    }

    onNewOverlay(remoteOverlay) {
        const remoteOverlayObj = JSON.parse(remoteOverlay);
        saveConfigurationVersion(
            this.loadedOverlay, remoteOverlayObj, logger, err => {
                if (err) {
                    logger.error('failed to save remote overlay', { err });
                    return;
                }
                this.loadedOverlay = remoteOverlayObj;
                this.wss.clients.forEach(
                    this._sendNewOverlayToClient.bind(this)
                );
            });
    }

    checkBrokenConnections() {
        this.wss.clients.forEach(client => {
            if (!client.isAlive) {
                logger.info('close broken connection', {
                    client: client._socket._peername,
                });
                client.terminate();
                return;
            }
            client.isAlive = false;
            client.ping();
        });
    }
}

const server = new ManagementAgentServer();
server.start();
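The `checkBrokenConnections` method above implements the standard ws heartbeat pattern: a client that has not answered the previous ping is terminated, and every survivor is reset so the next pong must revive it. A hypothetical distillation of that sweep as a pure function (`sweepClients` is not part of the deleted file):

```javascript
// Return the clients whose isAlive flag is still false (they never answered
// the last ping); mark everyone else not-alive again, so that only a 'pong'
// handler setting isAlive back to true keeps them out of the next sweep.
function sweepClients(clients) {
    const broken = [];
    clients.forEach(client => {
        if (!client.isAlive) {
            broken.push(client);
            return;
        }
        client.isAlive = false; // a 'pong' handler would set this back
    });
    return broken;
}

const healthy = { isAlive: true };
const silent = { isAlive: false };
console.log(sweepClients([healthy, silent]).length); // one broken client
```

In the real server the broken clients are additionally `terminate()`d and the survivors re-`ping()`ed.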
package.json (50 lines changed)
@@ -1,6 +1,6 @@
 {
   "name": "@zenko/cloudserver",
-  "version": "8.8.26",
+  "version": "8.8.27",
   "description": "Zenko CloudServer, an open-source Node.js implementation of a server handling the Amazon S3 protocol",
   "main": "index.js",
   "engines": {
@@ -21,14 +21,13 @@
   "dependencies": {
     "@azure/storage-blob": "^12.12.0",
     "@hapi/joi": "^17.1.0",
-    "arsenal": "git+https://github.com/scality/arsenal#8.1.130",
-    "async": "~2.5.0",
-    "aws-sdk": "2.905.0",
-    "bucketclient": "scality/bucketclient#8.1.9",
+    "arsenal": "git+https://git.yourcmc.ru/vitalif/zenko-arsenal.git#development/8.1",
+    "async": "^2.5.0",
+    "aws-sdk": "^2.905.0",
     "bufferutil": "^4.0.6",
     "commander": "^2.9.0",
     "cron-parser": "^2.11.0",
-    "diskusage": "1.1.3",
+    "diskusage": "^1.1.3",
     "google-auto-auth": "^0.9.1",
     "http-proxy": "^1.17.0",
     "http-proxy-agent": "^4.0.1",
@@ -38,38 +37,45 @@
     "mongodb": "^5.2.0",
     "node-fetch": "^2.6.0",
     "node-forge": "^0.7.1",
-    "npm-run-all": "~4.1.5",
+    "npm-run-all": "^4.1.5",
     "prom-client": "14.2.0",
     "request": "^2.81.0",
-    "scubaclient": "git+https://github.com/scality/scubaclient.git",
+    "scubaclient": "git+https://git.yourcmc.ru/vitalif/zenko-scubaclient.git",
-    "sql-where-parser": "~2.2.1",
+    "sql-where-parser": "^2.2.1",
-    "utapi": "github:scality/utapi#8.1.14",
+    "utapi": "git+https://git.yourcmc.ru/vitalif/zenko-utapi.git",
     "utf-8-validate": "^5.0.8",
-    "utf8": "~2.1.1",
+    "utf8": "^2.1.1",
     "uuid": "^8.3.2",
-    "vaultclient": "scality/vaultclient#8.3.20",
-    "werelogs": "scality/werelogs#8.1.4",
+    "werelogs": "git+https://git.yourcmc.ru/vitalif/zenko-werelogs.git#development/8.1",
     "ws": "^5.1.0",
-    "xml2js": "~0.4.16"
+    "xml2js": "^0.4.16"
+  },
+  "overrides": {
+    "ltgt": "^2.2.0"
   },
   "devDependencies": {
+    "@babel/core": "^7.25.2",
+    "@babel/preset-env": "^7.25.3",
+    "babel-loader": "^9.1.3",
     "bluebird": "^3.3.1",
     "eslint": "^8.14.0",
-    "eslint-config-airbnb-base": "^13.1.0",
+    "eslint-config-airbnb-base": "^15.0.0",
-    "eslint-config-scality": "scality/Guidelines#8.2.0",
+    "eslint-config-scality": "git+https://git.yourcmc.ru/vitalif/zenko-eslint-config-scality.git",
     "eslint-plugin-import": "^2.14.0",
     "eslint-plugin-mocha": "^10.1.0",
     "express": "^4.17.1",
-    "ioredis": "4.9.5",
+    "ioredis": "^4.9.5",
-    "istanbul": "1.0.0-alpha.2",
+    "istanbul": "^1.0.0-alpha.2",
-    "istanbul-api": "1.0.0-alpha.13",
+    "istanbul-api": "^1.0.0-alpha.13",
     "lolex": "^1.4.0",
-    "mocha": "^2.3.4",
+    "mocha": ">=3.1.2",
     "mocha-junit-reporter": "^1.23.1",
     "mocha-multi-reporters": "^1.1.7",
-    "node-mocks-http": "1.5.2",
+    "node-mocks-http": "^1.5.2",
     "sinon": "^13.0.1",
|
"sinon": "^13.0.1",
|
||||||
"tv4": "^1.2.7"
|
"tv4": "^1.2.7",
|
||||||
|
"webpack": "^5.93.0",
|
||||||
|
"webpack-cli": "^5.1.4"
|
||||||
},
|
},
|
||||||
"scripts": {
|
"scripts": {
|
||||||
"cloudserver": "S3METADATA=mongodb npm-run-all --parallel start_dataserver start_s3server",
|
"cloudserver": "S3METADATA=mongodb npm-run-all --parallel start_dataserver start_s3server",
|
||||||
@@ -1,82 +0,0 @@
-const assert = require('assert');
-
-const {
-    createWSAgent,
-} = require('../../../lib/management/push');
-
-const proxy = 'http://proxy:3128/';
-const logger = { info: () => {} };
-
-function testVariableSet(httpProxy, httpsProxy, allProxy, noProxy) {
-    return () => {
-        it(`should use ${httpProxy} environment variable`, () => {
-            let agent = createWSAgent('https://pushserver', {
-                [httpProxy]: 'http://proxy:3128',
-            }, logger);
-            assert.equal(agent, null);
-
-            agent = createWSAgent('http://pushserver', {
-                [httpProxy]: proxy,
-            }, logger);
-            assert.equal(agent.proxy.href, proxy);
-        });
-
-        it(`should use ${httpsProxy} environment variable`, () => {
-            let agent = createWSAgent('http://pushserver', {
-                [httpsProxy]: proxy,
-            }, logger);
-            assert.equal(agent, null);
-
-            agent = createWSAgent('https://pushserver', {
-                [httpsProxy]: proxy,
-            }, logger);
-            assert.equal(agent.proxy.href, proxy);
-        });
-
-        it(`should use ${allProxy} environment variable`, () => {
-            let agent = createWSAgent('http://pushserver', {
-                [allProxy]: proxy,
-            }, logger);
-            assert.equal(agent.proxy.href, proxy);
-
-            agent = createWSAgent('https://pushserver', {
-                [allProxy]: proxy,
-            }, logger);
-            assert.equal(agent.proxy.href, proxy);
-        });
-
-        it(`should use ${noProxy} environment variable`, () => {
-            let agent = createWSAgent('http://pushserver', {
-                [noProxy]: 'pushserver',
-            }, logger);
-            assert.equal(agent, null);
-
-            agent = createWSAgent('http://pushserver', {
-                [noProxy]: 'pushserver',
-                [httpProxy]: proxy,
-            }, logger);
-            assert.equal(agent, null);
-
-            agent = createWSAgent('http://pushserver', {
-                [noProxy]: 'pushserver2',
-                [httpProxy]: proxy,
-            }, logger);
-            assert.equal(agent.proxy.href, proxy);
-        });
-    };
-}
-
-describe('Websocket connection agent', () => {
-    describe('with no proxy env', () => {
-        it('should handle empty proxy environment', () => {
-            const agent = createWSAgent('https://pushserver', {}, logger);
-            assert.equal(agent, null);
-        });
-    });
-
-    describe('with lowercase proxy env',
-        testVariableSet('http_proxy', 'https_proxy', 'all_proxy', 'no_proxy'));
-
-    describe('with uppercase proxy env',
-        testVariableSet('HTTP_PROXY', 'HTTPS_PROXY', 'ALL_PROXY', 'NO_PROXY'));
-});
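The deleted tests above pin down how `createWSAgent` resolves the standard proxy environment variables. As a rough illustration only, the selection rules they encode can be sketched as follows; `pickProxy` is a hypothetical helper, not part of CloudServer's API, and the real implementation in `lib/management/push` additionally wraps the chosen URL in a proxy agent object:

```javascript
// Sketch of the proxy-selection rules exercised by the tests above.
function pickProxy(targetUrl, env) {
    // Both lowercase and uppercase variable names are honored,
    // matching the two testVariableSet suites.
    const get = name => env[name] || env[name.toUpperCase()];
    const noProxy = get('no_proxy');
    if (noProxy) {
        const { hostname } = new URL(targetUrl);
        if (noProxy.split(',').includes(hostname)) {
            return null; // target host is explicitly excluded
        }
    }
    // all_proxy covers both schemes; otherwise the proxy variable
    // must match the scheme of the target URL.
    return get('all_proxy')
        || (targetUrl.startsWith('https:')
            ? get('https_proxy')
            : get('http_proxy'))
        || null;
}
```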
@ -1,239 +0,0 @@
|
||||||
const assert = require('assert');
|
|
||||||
const crypto = require('crypto');
|
|
||||||
|
|
||||||
const { DummyRequestLogger } = require('../helpers');
|
|
||||||
const log = new DummyRequestLogger();
|
|
||||||
|
|
||||||
const metadata = require('../../../lib/metadata/wrapper');
|
|
||||||
const managementDatabaseName = 'PENSIEVE';
|
|
||||||
const tokenConfigurationKey = 'auth/zenko/remote-management-token';
|
|
||||||
|
|
||||||
const { privateKey, accessKey, decryptedSecretKey, secretKey, canonicalId,
|
|
||||||
userName } = require('./resources.json');
|
|
||||||
const shortid = '123456789012';
|
|
||||||
const email = 'customaccount1@setbyenv.com';
|
|
||||||
const arn = 'arn:aws:iam::123456789012:root';
|
|
||||||
const { config } = require('../../../lib/Config');
|
|
||||||
|
|
||||||
const {
|
|
||||||
remoteOverlayIsNewer,
|
|
||||||
patchConfiguration,
|
|
||||||
} = require('../../../lib/management/configuration');
|
|
||||||
|
|
||||||
const {
|
|
||||||
initManagementDatabase,
|
|
||||||
} = require('../../../lib/management/index');
|
|
||||||
|
|
||||||
function initManagementCredentialsMock(cb) {
|
|
||||||
return metadata.putObjectMD(managementDatabaseName,
|
|
||||||
tokenConfigurationKey, { privateKey }, {},
|
|
||||||
log, error => cb(error));
|
|
||||||
}
|
|
||||||
|
|
||||||
function getConfig() {
|
|
||||||
return config;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Original Config
|
|
||||||
const overlayVersionOriginal = Object.assign({}, config.overlayVersion);
|
|
||||||
const authDataOriginal = Object.assign({}, config.authData);
|
|
||||||
const locationConstraintsOriginal = Object.assign({},
|
|
||||||
config.locationConstraints);
|
|
||||||
const restEndpointsOriginal = Object.assign({}, config.restEndpoints);
|
|
||||||
const browserAccessEnabledOriginal = config.browserAccessEnabled;
|
|
||||||
const instanceId = '19683e55-56f7-4a4c-98a7-706c07e4ec30';
|
|
||||||
const publicInstanceId = crypto.createHash('sha256')
|
|
||||||
.update(instanceId)
|
|
||||||
.digest('hex');
|
|
||||||
|
|
||||||
function resetConfig() {
|
|
||||||
config.overlayVersion = overlayVersionOriginal;
|
|
||||||
config.authData = authDataOriginal;
|
|
||||||
config.locationConstraints = locationConstraintsOriginal;
|
|
||||||
config.restEndpoints = restEndpointsOriginal;
|
|
||||||
config.browserAccessEnabled = browserAccessEnabledOriginal;
|
|
||||||
}
|
|
||||||
|
|
||||||
function assertConfig(actualConf, expectedConf) {
|
|
||||||
Object.keys(expectedConf).forEach(key => {
|
|
||||||
assert.deepStrictEqual(actualConf[key], expectedConf[key]);
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
describe('patchConfiguration', () => {
|
|
||||||
before(done => initManagementDatabase(log, err => {
|
|
||||||
if (err) {
|
|
||||||
return done(err);
|
|
||||||
}
|
|
||||||
return initManagementCredentialsMock(done);
|
|
||||||
}));
|
|
||||||
beforeEach(() => {
|
|
||||||
resetConfig();
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should modify config using the new config', done => {
|
|
||||||
const newConf = {
|
|
||||||
version: 1,
|
|
||||||
instanceId,
|
|
||||||
users: [
|
|
||||||
{
|
|
||||||
secretKey,
|
|
||||||
accessKey,
|
|
||||||
canonicalId,
|
|
||||||
userName,
|
|
||||||
},
|
|
||||||
],
|
|
||||||
endpoints: [
|
|
||||||
{
|
|
||||||
hostname: '1.1.1.1',
|
|
||||||
locationName: 'us-east-1',
|
|
||||||
},
|
|
||||||
],
|
|
||||||
locations: {
|
|
||||||
'us-east-1': {
|
|
||||||
name: 'us-east-1',
|
|
||||||
objectId: 'us-east-1',
|
|
||||||
locationType: 'location-file-v1',
|
|
||||||
legacyAwsBehavior: true,
|
|
||||||
details: {},
|
|
||||||
},
|
|
||||||
},
|
|
||||||
browserAccess: {
|
|
||||||
enabled: true,
|
|
||||||
},
|
|
||||||
};
|
|
||||||
return patchConfiguration(newConf, log, err => {
|
|
||||||
assert.ifError(err);
|
|
||||||
const actualConf = getConfig();
|
|
||||||
const expectedConf = {
|
|
||||||
overlayVersion: 1,
|
|
||||||
publicInstanceId,
|
|
||||||
browserAccessEnabled: true,
|
|
||||||
authData: {
|
|
||||||
accounts: [{
|
|
||||||
name: userName,
|
|
||||||
email,
|
|
||||||
arn,
|
|
||||||
canonicalID: canonicalId,
|
|
||||||
shortid,
|
|
||||||
keys: [{
|
|
||||||
access: accessKey,
|
|
||||||
secret: decryptedSecretKey,
|
|
||||||
}],
|
|
||||||
}],
|
|
||||||
},
|
|
||||||
locationConstraints: {
|
|
||||||
'us-east-1': {
|
|
||||||
type: 'file',
|
|
||||||
objectId: 'us-east-1',
|
|
||||||
legacyAwsBehavior: true,
|
|
||||||
isTransient: false,
|
|
||||||
sizeLimitGB: null,
|
|
||||||
details: { supportsVersioning: true },
|
|
||||||
name: 'us-east-1',
|
|
||||||
locationType: 'location-file-v1',
|
|
||||||
},
|
|
||||||
},
|
|
||||||
};
|
|
||||||
assertConfig(actualConf, expectedConf);
|
|
||||||
assert.deepStrictEqual(actualConf.restEndpoints['1.1.1.1'],
|
|
||||||
'us-east-1');
|
|
||||||
return done();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should apply second configuration if version (2) is greater than ' +
|
|
||||||
'overlayVersion (1)', done => {
|
|
||||||
const newConf1 = {
|
|
||||||
version: 1,
|
|
||||||
instanceId,
|
|
||||||
};
|
|
||||||
const newConf2 = {
|
|
||||||
version: 2,
|
|
||||||
instanceId,
|
|
||||||
browserAccess: {
|
|
||||||
enabled: true,
|
|
||||||
},
|
|
||||||
};
|
|
||||||
patchConfiguration(newConf1, log, err => {
|
|
||||||
assert.ifError(err);
|
|
||||||
return patchConfiguration(newConf2, log, err => {
|
|
||||||
assert.ifError(err);
|
|
||||||
const actualConf = getConfig();
|
|
||||||
const expectedConf = {
|
|
||||||
overlayVersion: 2,
|
|
||||||
browserAccessEnabled: true,
|
|
||||||
};
|
|
||||||
assertConfig(actualConf, expectedConf);
|
|
||||||
return done();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
it('should not apply the second configuration if version equals ' +
|
|
||||||
'overlayVersion', done => {
|
|
||||||
const newConf1 = {
|
|
||||||
version: 1,
|
|
||||||
instanceId,
|
|
||||||
};
|
|
||||||
const newConf2 = {
|
|
||||||
version: 1,
|
|
||||||
instanceId,
|
|
||||||
browserAccess: {
|
|
||||||
enabled: true,
|
|
||||||
},
|
|
||||||
};
|
|
||||||
patchConfiguration(newConf1, log, err => {
|
|
||||||
assert.ifError(err);
|
|
||||||
return patchConfiguration(newConf2, log, err => {
|
|
||||||
assert.ifError(err);
|
|
||||||
const actualConf = getConfig();
|
|
||||||
const expectedConf = {
|
|
||||||
overlayVersion: 1,
|
|
||||||
browserAccessEnabled: undefined,
|
|
||||||
};
|
|
||||||
assertConfig(actualConf, expectedConf);
|
|
||||||
return done();
|
|
||||||
});
|
|
||||||
});
|
|
||||||
});
|
|
||||||
});
|
|
||||||
|
|
||||||
describe('remoteOverlayIsNewer', () => {
|
|
||||||
it('should return remoteOverlayIsNewer equals false if remote overlay ' +
|
|
||||||
'is less than the cached', () => {
|
|
||||||
const cachedOverlay = {
|
|
||||||
version: 2,
|
|
||||||
};
|
|
||||||
const remoteOverlay = {
|
|
||||||
version: 1,
|
|
||||||
};
|
|
||||||
const isRemoteOverlayNewer = remoteOverlayIsNewer(cachedOverlay,
|
|
||||||
remoteOverlay);
|
|
||||||
assert.equal(isRemoteOverlayNewer, false);
|
|
||||||
});
|
|
||||||
it('should return remoteOverlayIsNewer equals false if remote overlay ' +
|
|
||||||
'and the cached one are equal', () => {
|
|
||||||
const cachedOverlay = {
|
|
||||||
version: 1,
|
|
||||||
};
|
|
||||||
const remoteOverlay = {
|
|
||||||
version: 1,
|
|
||||||
};
|
|
||||||
const isRemoteOverlayNewer = remoteOverlayIsNewer(cachedOverlay,
|
|
||||||
remoteOverlay);
|
|
||||||
assert.equal(isRemoteOverlayNewer, false);
|
|
||||||
});
|
|
||||||
it('should return remoteOverlayIsNewer equals true if remote overlay ' +
|
|
||||||
'version is greater than the cached one ', () => {
|
|
||||||
const cachedOverlay = {
|
|
||||||
version: 0,
|
|
||||||
};
|
|
||||||
const remoteOverlay = {
|
|
||||||
version: 1,
|
|
||||||
};
|
|
||||||
const isRemoteOverlayNewer = remoteOverlayIsNewer(cachedOverlay,
|
|
||||||
remoteOverlay);
|
|
||||||
assert.equal(isRemoteOverlayNewer, true);
|
|
||||||
});
|
|
||||||
});
|
|
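The three `remoteOverlayIsNewer` cases above reduce to a strict version comparison: the remote overlay only wins when its version is greater than the cached one. A minimal sketch consistent with those expectations (an assumption about the shape, not the shipped code in `lib/management/configuration`):

```javascript
// Hypothetical sketch of the comparison the deleted tests above check:
// a remote overlay is "newer" only on a strictly greater version.
function remoteOverlayIsNewer(cachedOverlay, remoteOverlay) {
    return (remoteOverlay.version || 0) > (cachedOverlay.version || 0);
}
```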
@@ -1,9 +0,0 @@
-{
-    "privateKey": "-----BEGIN RSA PRIVATE KEY-----\r\nMIIEowIBAAKCAQEAj13sSYE40lAX2qpBvfdGfcSVNtBf8i5FH+E8FAhORwwPu+2S\r\n3yBQbgwHq30WWxunGb1NmZL1wkVZ+vf12DtxqFRnMA08LfO4oO6oC4V8XfKeuHyJ\r\n1qlaKRINz6r9yDkTHtwWoBnlAINurlcNKgGD5p7D+G26Chbr/Oo0ZwHula9DxXy6\r\neH8/bJ5/BynyNyyWRPoAO+UkUdY5utkFCUq2dbBIhovMgjjikf5p2oWqnRKXc+JK\r\nBegr6lSHkkhyqNhTmd8+wA+8Cace4sy1ajY1t5V4wfRZea5vwl/HlyyKodvHdxng\r\nJgg6H61JMYPkplY6Gr9OryBKEAgq02zYoYTDfwIDAQABAoIBAAuDYGlavkRteCzw\r\nRU1LIVcSRWVcgIgDXTu9K8T0Ec0008Kkxomyn6LmxmroJbZ1VwsDH8s4eRH73ckA\r\nxrZxt6Pr+0lplq6eBvKtl8MtGhq1VDe+kJczjHEF6SQHOFAu/TEaPZrn2XMcGvRX\r\nO1BnRL9tepFlxm3u/06VRFYNWqqchM+tFyzLu2AuiuKd5+slSX7KZvVgdkY1ErKH\r\ngB75lPyhPb77C/6ptqUisVMSO4JhLhsD0+ekDVY982Sb7KkI+szdWSbtMx9Ek2Wo\r\ntXwJz7I8T7IbODy9aW9G+ydyhMDFmaEYIaDVFKJj5+fluNza3oQ5PtFNVE50GQJA\r\nsisGqfECgYEAwpkwt0KpSamSEH6qknNYPOwxgEuXWoFVzibko7is2tFPvY+YJowb\r\n68MqHIYhf7gHLq2dc5Jg1TTbGqLECjVxp4xLU4c95KBy1J9CPAcuH4xQLDXmeLzP\r\nJ2YgznRocbzAMCDAwafCr3uY9FM7oGDHAi5bE5W11xWx+9MlFExL3JkCgYEAvJp5\r\nf+JGN1W037bQe2QLYUWGszewZsvplnNOeytGQa57w4YdF42lPhMz6Kc/zdzKZpN9\r\njrshiIDhAD5NCno6dwqafBAW9WZl0sn7EnlLhD4Lwm8E9bRHnC9H82yFuqmNrzww\r\nzxBCQogJISwHiVz4EkU48B283ecBn0wT/fAa19cCgYEApKWsnEHgrhy1IxOpCoRh\r\nUhqdv2k1xDPN/8DUjtnAFtwmVcLa/zJopU/Zn4y1ZzSzjwECSTi+iWZRQ/YXXHPf\r\nl92SFjhFW92Niuy8w8FnevXjF6T7PYiy1SkJ9OR1QlZrXc04iiGBDazLu115A7ce\r\nanACS03OLw+CKgl6Q/RR83ECgYBCUngDVoimkMcIHHt3yJiP3ikeAKlRnMdJlsa0\r\nXWVZV4hCG3lDfRXsnEgWuimftNKf+6GdfYSvQdLdiQsCcjT5A4uLsQTByv5nf4uA\r\n1ZKOsFrmRrARzxGXhLDikvj7yP//7USkq+0BBGFhfuAvl7fMhPceyPZPehqB7/jf\r\nxX1LBQKBgAn5GgSXzzS0e06ZlP/VrKxreOHa5Z8wOmqqYQ0QTeczAbNNmuITdwwB\r\nNkbRqpVXRIfuj0BQBegAiix8om1W4it0cwz54IXBwQULxJR1StWxj3jo4QtpMQ+z\r\npVPdB1Ilb9zPV1YvDwRfdS1xsobzznAx56ecsXduZjs9mF61db8Q\r\n-----END RSA PRIVATE KEY-----\r\n",
-    "publicKey": "-----BEGIN PUBLIC KEY-----\r\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAj13sSYE40lAX2qpBvfdG\r\nfcSVNtBf8i5FH+E8FAhORwwPu+2S3yBQbgwHq30WWxunGb1NmZL1wkVZ+vf12Dtx\r\nqFRnMA08LfO4oO6oC4V8XfKeuHyJ1qlaKRINz6r9yDkTHtwWoBnlAINurlcNKgGD\r\n5p7D+G26Chbr/Oo0ZwHula9DxXy6eH8/bJ5/BynyNyyWRPoAO+UkUdY5utkFCUq2\r\ndbBIhovMgjjikf5p2oWqnRKXc+JKBegr6lSHkkhyqNhTmd8+wA+8Cace4sy1ajY1\r\nt5V4wfRZea5vwl/HlyyKodvHdxngJgg6H61JMYPkplY6Gr9OryBKEAgq02zYoYTD\r\nfwIDAQAB\r\n-----END PUBLIC KEY-----\r\n",
-    "accessKey": "QXP3VDG3SALNBX2QBJ1C",
-    "secretKey": "K5FyqZo5uFKfw9QBtn95o6vuPuD0zH/1seIrqPKqGnz8AxALNSx6EeRq7G1I6JJpS1XN13EhnwGn2ipsml3Uf2fQ00YgEmImG8wzGVZm8fWotpVO4ilN4JGyQCah81rNX4wZ9xHqDD7qYR5MyIERxR/osoXfctOwY7GGUjRKJfLOguNUlpaovejg6mZfTvYAiDF+PTO1sKUYqHt1IfKQtsK3dov1EFMBB5pWM7sVfncq/CthKN5M+VHx9Y87qdoP3+7AW+RCBbSDOfQgxvqtS7PIAf10mDl8k2kEURLz+RqChu4O4S0UzbEmtja7wa7WYhYKv/tM/QeW7kyNJMmnPg==",
-    "decryptedSecretKey": "n7PSZ3U6SgerF9PCNhXYsq3S3fRKVGdZTicGV8Ur",
-    "canonicalId": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
-    "userName": "orbituser"
-}
@@ -1,43 +0,0 @@
-const assert = require('assert');
-
-const { getCapabilities } = require('../../../lib/utilities/reportHandler');
-
-// Ensures that expected features are enabled even if they
-// rely on optional dependencies (such as secureChannelOptimizedPath)
-describe('report handler', () => {
-    it('should report current capabilities', () => {
-        const c = getCapabilities();
-        assert.strictEqual(c.locationTypeDigitalOcean, true);
-        assert.strictEqual(c.locationTypeS3Custom, true);
-        assert.strictEqual(c.locationTypeSproxyd, true);
-        assert.strictEqual(c.locationTypeHyperdriveV2, true);
-        assert.strictEqual(c.locationTypeLocal, true);
-        assert.strictEqual(c.preferredReadLocation, true);
-        assert.strictEqual(c.managedLifecycle, true);
-        assert.strictEqual(c.secureChannelOptimizedPath, true);
-        assert.strictEqual(c.s3cIngestLocation, true);
-    });
-
-    [
-        { value: 'true', result: true },
-        { value: 'TRUE', result: true },
-        { value: 'tRuE', result: true },
-        { value: '1', result: true },
-        { value: 'false', result: false },
-        { value: 'FALSE', result: false },
-        { value: 'FaLsE', result: false },
-        { value: '0', result: false },
-        { value: 'foo', result: false },
-        { value: '', result: true },
-        { value: undefined, result: true },
-    ].forEach(param =>
-        it(`should allow set local file system capability ${param.value}`, () => {
-            const OLD_ENV = process.env;
-
-            if (param.value !== undefined) process.env.LOCAL_VOLUME_CAPABILITY = param.value;
-            assert.strictEqual(getCapabilities().locationTypeLocal, param.result);
-
-            process.env = OLD_ENV;
-        })
-    );
-});
@@ -1,72 +0,0 @@
-const assert = require('assert');
-
-const {
-    ChannelMessageV0,
-    MessageType,
-    TargetType,
-} = require('../../../lib/management/ChannelMessageV0');
-
-const {
-    CONFIG_OVERLAY_MESSAGE,
-    METRICS_REQUEST_MESSAGE,
-    METRICS_REPORT_MESSAGE,
-    CHANNEL_CLOSE_MESSAGE,
-    CHANNEL_PAYLOAD_MESSAGE,
-} = MessageType;
-
-const { TARGET_ANY } = TargetType;
-
-describe('ChannelMessageV0', () => {
-    describe('codec', () => {
-        it('should roundtrip metrics report', () => {
-            const b = ChannelMessageV0.encodeMetricsReportMessage({ a: 1 });
-            const m = new ChannelMessageV0(b);
-
-            assert.strictEqual(METRICS_REPORT_MESSAGE, m.getType());
-            assert.strictEqual(0, m.getChannelNumber());
-            assert.strictEqual(m.getTarget(), TARGET_ANY);
-            assert.strictEqual(m.getPayload().toString(), '{"a":1}');
-        });
-
-        it('should roundtrip channel data', () => {
-            const data = new Buffer('dummydata');
-            const b = ChannelMessageV0.encodeChannelDataMessage(50, data);
-            const m = new ChannelMessageV0(b);
-
-            assert.strictEqual(CHANNEL_PAYLOAD_MESSAGE, m.getType());
-            assert.strictEqual(50, m.getChannelNumber());
-            assert.strictEqual(m.getTarget(), TARGET_ANY);
-            assert.strictEqual(m.getPayload().toString(), 'dummydata');
-        });
-
-        it('should roundtrip channel close', () => {
-            const b = ChannelMessageV0.encodeChannelCloseMessage(3);
-            const m = new ChannelMessageV0(b);
-
-            assert.strictEqual(CHANNEL_CLOSE_MESSAGE, m.getType());
-            assert.strictEqual(3, m.getChannelNumber());
-            assert.strictEqual(m.getTarget(), TARGET_ANY);
-        });
-    });
-
-    describe('decoder', () => {
-        it('should parse metrics request', () => {
-            const b = new Buffer([METRICS_REQUEST_MESSAGE, 0, 0]);
-            const m = new ChannelMessageV0(b);
-
-            assert.strictEqual(METRICS_REQUEST_MESSAGE, m.getType());
-            assert.strictEqual(0, m.getChannelNumber());
-            assert.strictEqual(m.getTarget(), TARGET_ANY);
-        });
-
-        it('should parse overlay push', () => {
-            const b = new Buffer([CONFIG_OVERLAY_MESSAGE, 0, 0, 34, 65, 34]);
-            const m = new ChannelMessageV0(b);
-
-            assert.strictEqual(CONFIG_OVERLAY_MESSAGE, m.getType());
-            assert.strictEqual(0, m.getChannelNumber());
-            assert.strictEqual(m.getTarget(), TARGET_ANY);
-            assert.strictEqual(m.getPayload().toString(), '"A"');
-        });
-    });
-});
@@ -0,0 +1,254 @@
+const assert = require('assert');
+const { parseRedisConfig } = require('../../../lib/Config');
+
+describe('parseRedisConfig', () => {
+    [
+        {
+            desc: 'with host and port',
+            input: {
+                host: 'localhost',
+                port: 6479,
+            },
+        },
+        {
+            desc: 'with host, port and password',
+            input: {
+                host: 'localhost',
+                port: 6479,
+                password: 'mypass',
+            },
+        },
+        {
+            desc: 'with host, port and an empty password',
+            input: {
+                host: 'localhost',
+                port: 6479,
+                password: '',
+            },
+        },
+        {
+            desc: 'with host, port and an empty retry config',
+            input: {
+                host: 'localhost',
+                port: 6479,
+                retry: {
+                },
+            },
+        },
+        {
+            desc: 'with host, port and a custom retry config',
+            input: {
+                host: 'localhost',
+                port: 6479,
+                retry: {
+                    connectBackoff: {
+                        min: 10,
+                        max: 1000,
+                        jitter: 0.1,
+                        factor: 1.5,
+                        deadline: 10000,
+                    },
+                },
+            },
+        },
+        {
+            desc: 'with a single sentinel and no sentinel password',
+            input: {
+                name: 'myname',
+                sentinels: [
+                    {
+                        host: 'localhost',
+                        port: 16479,
+                    },
+                ],
+            },
+        },
+        {
+            desc: 'with two sentinels and a sentinel password',
+            input: {
+                name: 'myname',
+                sentinels: [
+                    {
+                        host: '10.20.30.40',
+                        port: 16479,
+                    },
+                    {
+                        host: '10.20.30.41',
+                        port: 16479,
+                    },
+                ],
+                sentinelPassword: 'mypass',
+            },
+        },
+        {
+            desc: 'with a sentinel and an empty sentinel password',
+            input: {
+                name: 'myname',
+                sentinels: [
+                    {
+                        host: '10.20.30.40',
+                        port: 16479,
+                    },
+                ],
+                sentinelPassword: '',
+            },
+        },
+        {
+            desc: 'with a basic production-like config with sentinels',
+            input: {
+                name: 'scality-s3',
+                password: '',
+                sentinelPassword: '',
+                sentinels: [
+                    {
+                        host: 'storage-1',
+                        port: 16379,
+                    },
+                    {
+                        host: 'storage-2',
+                        port: 16379,
+                    },
+                    {
+                        host: 'storage-3',
+                        port: 16379,
+                    },
+                    {
+                        host: 'storage-4',
+                        port: 16379,
+                    },
+                    {
+                        host: 'storage-5',
+                        port: 16379,
+                    },
+                ],
+            },
+        },
+        {
+            desc: 'with a single sentinel passed as a string',
+            input: {
+                name: 'myname',
+                sentinels: '10.20.30.40:16479',
+            },
+            output: {
+                name: 'myname',
+                sentinels: [
+                    {
+                        host: '10.20.30.40',
+                        port: 16479,
+                    },
+                ],
+            },
+        },
+        {
+            desc: 'with a list of sentinels passed as a string',
+            input: {
+                name: 'myname',
+                sentinels: '10.20.30.40:16479,another-host:16480,10.20.30.42:16481',
+                sentinelPassword: 'mypass',
+            },
+            output: {
+                name: 'myname',
+                sentinels: [
+                    {
+                        host: '10.20.30.40',
+                        port: 16479,
+                    },
+                    {
+                        host: 'another-host',
+                        port: 16480,
+                    },
+                    {
+                        host: '10.20.30.42',
+                        port: 16481,
+                    },
+                ],
+                sentinelPassword: 'mypass',
+            },
+        },
+    ].forEach(testCase => {
+        it(`should parse a valid config ${testCase.desc}`, () => {
+            const redisConfig = parseRedisConfig(testCase.input);
+            assert.deepStrictEqual(redisConfig, testCase.output || testCase.input);
+        });
+    });
+
+    [
+        {
+            desc: 'that is empty',
+            input: {},
+        },
+        {
+            desc: 'with only a host',
+            input: {
+                host: 'localhost',
+            },
+        },
+        {
+            desc: 'with only a port',
+            input: {
+                port: 6479,
+            },
+        },
+        {
+            desc: 'with a custom retry config with missing values',
+            input: {
+                host: 'localhost',
+                port: 6479,
+                retry: {
+                    connectBackoff: {
+                    },
+                },
+            },
+        },
+        {
+            desc: 'with a sentinel but no name',
+            input: {
+                sentinels: [
+                    {
+                        host: 'localhost',
+                        port: 16479,
+                    },
+                ],
+            },
+        },
+        {
+            desc: 'with a sentinel but an empty name',
+            input: {
+                name: '',
+                sentinels: [
+                    {
+                        host: 'localhost',
+                        port: 16479,
+                    },
+                ],
+            },
+        },
+        {
+            desc: 'with an empty list of sentinels',
+            input: {
+                name: 'myname',
+                sentinels: [],
+            },
+        },
+        {
+            desc: 'with an empty list of sentinels passed as a string',
+            input: {
+                name: 'myname',
+                sentinels: '',
+            },
+        },
+        {
+            desc: 'with an invalid list of sentinels passed as a string (missing port)',
+            input: {
+                name: 'myname',
+                sentinels: '10.20.30.40:16479,10.20.30.50',
+            },
+        },
+    ].forEach(testCase => {
+        it(`should fail to parse an invalid config ${testCase.desc}`, () => {
+            assert.throws(() => {
+                parseRedisConfig(testCase.input);
+            });
+        });
+    });
+});
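The string-valued `sentinels` cases in the new tests above imply a `host:port,host:port` expansion step. A minimal sketch of that expansion, assuming `parseSentinels` as a hypothetical helper name (the real parsing and validation, including the rejection of empty lists and missing names, live in `lib/Config`'s `parseRedisConfig`):

```javascript
// Sketch: expand a "host:port,host:port" string into the object form
// shown in the expected outputs above; arrays pass through unchanged.
function parseSentinels(sentinels) {
    if (Array.isArray(sentinels)) {
        return sentinels;
    }
    return sentinels.split(',').map(entry => {
        const [host, port] = entry.split(':');
        if (!host || !port) {
            // matches the "missing port" failure case in the tests
            throw new Error(`invalid sentinel "${entry}"`);
        }
        return { host, port: Number(port) };
    });
}
```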
@@ -0,0 +1,48 @@
+module.exports = {
+    entry: './index.js',
+    target: 'node',
+    devtool: 'source-map',
+    output: {
+        filename: 'zenko-vitastor.js'
+    },
+    module: {
+        rules: [
+            {
+                test: /.jsx?$/,
+                use: {
+                    loader: 'babel-loader',
+                    options: {
+                        presets: [ [ "@babel/preset-env", { "targets": { "node": "12.0" }, "exclude": [ "transform-regenerator" ] } ] ],
+                    }
+                }
+            },
+            {
+                test: /.json$/,
+                type: 'json'
+            }
+        ]
+    },
+    externals: {
+        'leveldown': 'commonjs leveldown',
+        'bufferutil': 'commonjs bufferutil',
+        'diskusage': 'commonjs diskusage',
+        'utf-8-validate': 'commonjs utf-8-validate',
+        'fcntl': 'commonjs fcntl',
+        'ioctl': 'commonjs ioctl',
+        'vitastor': 'commonjs vitastor',
+        'vaultclient': 'commonjs vaultclient',
+        'bucketclient': 'commonjs bucketclient',
+        'scality-kms': 'commonjs scality-kms',
+        'sproxydclient': 'commonjs sproxydclient',
+        'hdclient': 'commonjs hdclient',
+        'cdmiclient': 'commonjs cdmiclient',
+        'kerberos': 'commonjs kerberos',
+        '@mongodb-js/zstd': 'commonjs @mongodb-js/zstd',
+        '@aws-sdk/credential-providers': 'commonjs @aws-sdk/credential-providers',
+        'snappy': 'commonjs snappy',
+        'mongodb-client-encryption': 'commonjs mongodb-client-encryption'
+    },
+    node: {
+        __dirname: false
+    }
+};