Compare commits


31 Commits

Author SHA1 Message Date
Mickael Bourgois d095b043e0
CLDSRV-544: bump version 2024-06-30 21:15:52 +02:00
Mickael Bourgois 5243426ce9
CLDSRV-544 Add timestamp on stderr utapi v1
(cherry picked from commit ca0904f584)
2024-06-30 21:15:52 +02:00
Mickael Bourgois 9d5af75a54
CLDSRV-544: Add timestamp on stderr
The previous version would not exit the master of the cluster
Now it exits as it should do

(cherry picked from commit 0dd3dd35e6)
2024-06-30 21:15:52 +02:00
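For context, the werelogs helper these commits wire in takes an optional exit code as its second argument; a minimal sketch of the call, mirroring the index.js diff below rather than any werelogs documentation:

    // Timestamp anything written to stderr; exit with code 1 only when
    // running as the cluster primary. Workers pass null because they
    // already have their own exit listener.
    require('werelogs').stderrUtils.catchAndTimestampStderr(
        undefined, // use the default timestamp formatter
        require('cluster').isPrimary ? 1 : null,
    );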
Francois Ferrand e008083d68
Use official docker build steps
The docker-build step from `scality/workflows/` fails to login to
 ghcr, as it picks up the old registry creds.

Issue: CLDSRV-524
(cherry picked from commit b824fc0828)
2024-05-27 17:49:39 +02:00
Francois Ferrand 9938fcdd10
Build pykmip image
Issue: CLDSRV-524
(cherry picked from commit a2e6d91cf2)
2024-05-27 17:49:39 +02:00
Francois Ferrand ab733e07a3
Upgrade actions
- artifacts@v4
- cache@v4
- checkout@v4
- codeql@v3
- dependency-review@v4
- login@v3
- setup-buildx@v3
- setup-node@v4
- setup-python@v5

Issue: CLDSRV-524
(cherry picked from commit c1060853dd)
2024-05-27 17:49:34 +02:00
Francois Ferrand 478d208298
Migrate to ghcr
Issue: CLDSRV-524
(cherry picked from commit 227d6edd09)
2024-05-27 17:48:54 +02:00
Taylor McKinnon a4e31f421b Bump version to 7.70.21-8 2024-05-22 09:38:46 -07:00
Taylor McKinnon 7d67a04438 Disable git clone protection to work around git bug affecting git-lfs 2024-05-22 09:38:46 -07:00
Rahul Padigela 185b4a3efc improvement CLDSRV-466 add timestamp for exceptions
(cherry picked from commit b1b2d2ada6)
2024-05-22 08:31:30 -07:00
Taylor McKinnon 1d9567263f bf(CLDSRV-529): Bump version 2024-05-16 12:24:52 -07:00
Taylor McKinnon 702beed37c bf(CLDSRV-529): Bump utapi
(cherry picked from commit 53f2a159fa)
2024-05-16 12:24:30 -07:00
Will Toozs c2ca91229a
CLDSRV-531: bump version 2024-05-14 16:55:28 +02:00
Will Toozs 8107d2b986
CLDSRV-531: change requestType to bucketPutLifecycle 2024-05-14 16:54:58 +02:00
Taylor McKinnon 53d143efa7 impr(CLDSRV-525): Bump version to 7.70.21-5 2024-04-16 09:59:59 -07:00
Taylor McKinnon 2ec6968565 impr(CLDSRV-467): Add new Utapi Reindex option `utapi.reindex.onlyCountLatestWhenObjectLocked`
(cherry picked from commit 818b1e60d1)
2024-04-16 09:58:58 -07:00
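The new flag is read from the utapi.reindex section of the Cloudserver config. A hypothetical config.json excerpt; all values are illustrative, only the key names follow the parseUtapiReindex validation shown in the diffs below:

    "utapi": {
        "reindex": {
            "enabled": true,
            "schedule": "0 0 * * 6",
            "sentinel": { "host": "localhost", "port": 16379, "name": "scality-s3" },
            "bucketd": { "host": "localhost", "port": 9000 },
            "onlyCountLatestWhenObjectLocked": true
        }
    }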
Taylor McKinnon 5bdc7b97d4 impr(CLDSRV-525): Bump Utapi to 7.70.3 2024-04-16 09:53:46 -07:00
tmacro 4eb77f9327 CLDSRV-474: fix CI fail
(cherry picked from commit e109b0fca7)
2024-02-21 12:15:40 -08:00
Taylor McKinnon 5239f013d9 Bump cloudserver to 7.70.21-4 2024-02-21 12:09:32 -08:00
Taylor McKinnon 49d1d65f37 possible => unsupported
(cherry picked from commit 59b87479df)
2024-02-21 12:05:31 -08:00
Taylor McKinnon 5cdcee201b bf(CLDSRV-463): Strictly validate checksum algorithm headers
(cherry picked from commit 1e9ee0ef0b)
2024-02-21 12:05:26 -08:00
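The validation itself is a small helper that returns either null or an Arsenal BadRequest error (full source appears in the diffs below); a minimal usage sketch, with the require path assumed relative to the repo root:

    const validateChecksumHeaders =
        require('./lib/api/apiUtils/object/validateChecksumHeaders');

    // Streaming/trailing checksum algorithms are rejected up front...
    validateChecksumHeaders({
        'x-amz-content-sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER',
    }); // => errors.BadRequest ('unsupported checksum algorithm')

    // ...while a plain 64-character sha256 digest still passes.
    validateChecksumHeaders({
        'x-amz-content-sha256':
            'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
    }); // => null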
Jonathan Gramain 121352ebfb CLDSRV-458 [HOTFIX 9.2.0] version bump to 7.70.21-3 2023-11-07 13:14:59 -08:00
Jonathan Gramain d5e2a7a894 bf: CLDSRV-458 fix bucketd params on null version update
On in-place updates of "legacy" null versions (those without the
"isNull2" attribute, using the "nullVersionId" chain instead of null
keys), we mustn't pass the "isNull" query parameter when sending the
update request to bucketd. Otherwise, it creates a null key which
causes issues when deleting the null version later.

Use a helper to pass the right set of parameters in all request types
that update versions in-place.

(cherry picked from commit 3985e2a712)
2023-11-07 13:12:24 -08:00
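Concretely, the call sites that used to build these parameters inline now delegate to the new helper (see the objectPutTagging / objectPutACL / objectPutRetention diffs below):

    // Before, repeated at each call site: isNull was always sent in
    // non-compat mode, even for legacy null versions.
    const params = {};
    if (objectMD.versionId) {
        params.versionId = objectMD.versionId;
        if (!config.nullVersionCompatMode) {
            params.isNull = objectMD.isNull || false;
        }
    }

    // After: one helper decides when isNull may be sent. For a legacy
    // null version (isNull set but no isNull2), it returns only
    // { versionId }, so bucketd does not create a spurious null key.
    const params = getVersionSpecificMetadataOptions(
        objectMD, config.nullVersionCompatMode);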
Nicolas Humbert de66293473 bump project version 2023-10-27 11:11:15 +02:00
Nicolas Humbert 067644b8db CLDSRV-462 Expiration header is not compatible with legacy object md
Before the Object Metadata refactor done around May 31, 2017 (c22e44f63d), if no tags were set, the object tag was stored as undefined.

After the commit, if no tags are set, the object tag is stored as an empty object '{}'.

When the expiration response headers were implemented on 812b09afef around Nov 22, 2021, the empty object was handled, but not the undefined tag logic, which made the expiration response headers not backward compatible.

We need to address both cases: the undefined property and the empty object '{}'.

(cherry picked from commit 61fe64a3ac)
2023-10-27 11:09:27 +02:00
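In other words, the TagSet construction has to tolerate both metadata shapes; the guard from the expirationHeaders diff below, shown in isolation:

    // Legacy metadata (pre-c22e44f63d): params.tags is undefined.
    // Modern metadata: empty tags are stored as an empty object {}.
    const TagSet = params.tags
        ? Object.keys(params.tags)
            .map(key => ({ Key: key, Value: params.tags[key] }))
        : [];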
Maha Benzekri c0dfd6fe5e
re-enabling null version compat tests 2023-10-05 18:02:22 +02:00
benzekrimaha 962dede838
Update tests/functional/aws-node-sdk/test/bucket/putBucketPolicy.js
Co-authored-by: William <91462779+williamlardier@users.noreply.github.com>
2023-10-05 14:57:04 +02:00
Maha Benzekri 85fb1fe606
fix image build problems 2023-10-04 17:23:08 +02:00
Maha Benzekri 570602b902
CLDSRV-452: CLDSRV version bump 2023-10-04 15:03:07 +02:00
Maha Benzekri 3af5d1b692
CLDSRV-452: Add Id and principal tests 2023-10-04 15:01:48 +02:00
Maha Benzekri 677605e48c
CLDSRV-452:Bump arsenal version 2023-10-04 14:55:25 +02:00
32 changed files with 738 additions and 201 deletions

View File

@@ -16,17 +16,10 @@ runs:
     run: |-
       set -exu;
       mkdir -p /tmp/artifacts/${{ github.job }}/;
-  - uses: actions/setup-node@v2
+  - uses: actions/setup-node@v4
     with:
       node-version: '16'
      cache: 'yarn'
   - name: install dependencies
     shell: bash
     run: yarn install --ignore-engines --frozen-lockfile --network-concurrency 1
-  - uses: actions/cache@v2
-    with:
-      path: ~/.cache/pip
-      key: ${{ runner.os }}-pip
-  - name: Install python deps
-    shell: bash
-    run: pip install docker-compose

View File

@@ -34,4 +34,4 @@ gcpbackendmismatch_GCP_SERVICE_KEY
 gcpbackend_GCP_SERVICE_KEYFILE
 gcpbackendmismatch_GCP_SERVICE_KEYFILE
 gcpbackendnoproxy_GCP_SERVICE_KEYFILE
 gcpbackendproxy_GCP_SERVICE_KEYFILE

View File

@@ -62,6 +62,6 @@ services:
   pykmip:
     network_mode: "host"
     profiles: ['pykmip']
-    image: registry.scality.com/cloudserver-dev/pykmip
+    image: ${PYKMIP_IMAGE:-ghcr.io/scality/cloudserver/pykmip}
     volumes:
       - /tmp/artifacts/${JOB_NAME}:/artifacts

View File

@@ -10,36 +10,59 @@ on:
 jobs:
   build-federation-image:
-    uses: scality/workflows/.github/workflows/docker-build.yaml@v1
-    secrets: inherit
-    with:
-      push: true
-      registry: registry.scality.com
-      namespace: ${{ github.event.repository.name }}
-      name: ${{ github.event.repository.name }}
-      context: .
-      file: images/svc-base/Dockerfile
-      tag: ${{ github.event.inputs.tag }}-svc-base
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+      - name: Login to GitHub Registry
+        uses: docker/login-action@v3
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ github.token }}
+      - name: Build and push image for federation
+        uses: docker/build-push-action@v5
+        with:
+          push: true
+          context: .
+          file: images/svc-base/Dockerfile
+          tags: |
+            ghcr.io/${{ github.repository }}:${{ github.event.inputs.tag }}-svc-base
+          cache-from: type=gha,scope=federation
+          cache-to: type=gha,mode=max,scope=federation
   build-image:
-    uses: scality/workflows/.github/workflows/docker-build.yaml@v1
-    secrets: inherit
-    with:
-      push: true
-      registry: registry.scality.com
-      namespace: ${{ github.event.repository.name }}
-      name: ${{ github.event.repository.name }}
-      context: .
-      file: Dockerfile
-      tag: ${{ github.event.inputs.tag }}
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+      - name: Login to GitHub Registry
+        uses: docker/login-action@v3
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ github.token }}
+      - name: Build and push image
+        uses: docker/build-push-action@v5
+        with:
+          push: true
+          context: .
+          tags: |
+            ghcr.io/${{ github.repository }}:${{ github.event.inputs.tag }}
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
   github-release:
     runs-on: ubuntu-latest
     steps:
       - name: Create Release
-        uses: softprops/action-gh-release@v1
+        uses: softprops/action-gh-release@v2
         env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          GITHUB_TOKEN: ${{ github.token }}
         with:
           name: Release ${{ github.event.inputs.tag }}
           tag_name: ${{ github.event.inputs.tag }}

View File

@@ -65,23 +65,24 @@ env:
   ENABLE_LOCAL_CACHE: "true"
   REPORT_TOKEN: "report-token-1"
   REMOTE_MANAGEMENT_DISABLE: "1"
+  # https://github.com/git-lfs/git-lfs/issues/5749
+  GIT_CLONE_PROTECTION_ACTIVE: 'false'

 jobs:
   linting-coverage:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
-      - uses: actions/setup-node@v2
+        uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
         with:
           node-version: '16'
           cache: yarn
       - name: install dependencies
         run: yarn install --frozen-lockfile --network-concurrency 1
-      - uses: actions/setup-python@v4
+      - uses: actions/setup-python@v5
         with:
           python-version: '3.9'
-      - uses: actions/cache@v2
+      - uses: actions/cache@v4
         with:
           path: ~/.cache/pip
           key: ${{ runner.os }}-pip

@@ -114,7 +115,7 @@ jobs:
           find . -name "*junit*.xml" -exec cp {} artifacts/junit/ ";"
         if: always()
       - name: Upload files to artifacts
-        uses: scality/action-artifacts@v2
+        uses: scality/action-artifacts@v4
         with:
           method: upload
           url: https://artifacts.scality.net

@@ -127,64 +128,78 @@ jobs:
     runs-on: ubuntu-20.04
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v1.6.0
+        uses: docker/setup-buildx-action@v3
       - name: Login to GitHub Registry
-        uses: docker/login-action@v1.10.0
+        uses: docker/login-action@v3
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Login to Registry
-        uses: docker/login-action@v1
-        with:
-          registry: registry.scality.com
-          username: ${{ secrets.REGISTRY_LOGIN }}
-          password: ${{ secrets.REGISTRY_PASSWORD }}
+          password: ${{ github.token }}
       - name: Build and push cloudserver image
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v5
         with:
           push: true
           context: .
           provenance: false
           tags: |
-            ghcr.io/${{ github.repository }}/cloudserver:${{ github.sha }}
-            registry.scality.com/cloudserver-dev/cloudserver:${{ github.sha }}
+            ghcr.io/${{ github.repository }}:${{ github.sha }}
           cache-from: type=gha,scope=cloudserver
           cache-to: type=gha,mode=max,scope=cloudserver
+      - name: Build and push pykmip image
+        uses: docker/build-push-action@v5
+        with:
+          push: true
+          context: .github/pykmip
+          tags: |
+            ghcr.io/${{ github.repository }}/pykmip:${{ github.sha }}
+          cache-from: type=gha,scope=pykmip
+          cache-to: type=gha,mode=max,scope=pykmip
   build-federation-image:
-    uses: scality/workflows/.github/workflows/docker-build.yaml@v1
-    secrets: inherit
-    with:
-      push: true
-      registry: registry.scality.com
-      namespace: cloudserver-dev
-      name: cloudserver
-      context: .
-      file: images/svc-base/Dockerfile
-      tag: ${{ github.sha }}-svc-base
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+      - name: Login to GitHub Registry
+        uses: docker/login-action@v3
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ github.token }}
+      - name: Build and push image for federation
+        uses: docker/build-push-action@v5
+        with:
+          push: true
+          context: .
+          file: images/svc-base/Dockerfile
+          tags: |
+            ghcr.io/${{ github.repository }}:${{ github.sha }}-svc-base
+          cache-from: type=gha,scope=federation
+          cache-to: type=gha,mode=max,scope=federation
   multiple-backend:
     runs-on: ubuntu-latest
     needs: build
     env:
-      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}/cloudserver:${{ github.sha }}
+      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}:${{ github.sha }}
       S3BACKEND: mem
       S3_LOCATION_FILE: /usr/src/app/tests/locationConfig/locationConfigTests.json
       S3DATA: multiple
       JOB_NAME: ${{ github.job }}
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
+        uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
         with:
           python-version: 3.9
       - name: Setup CI environment
         uses: ./.github/actions/setup-ci
       - name: Setup CI services
-        run: docker-compose up -d
+        run: docker compose up -d
         working-directory: .github/docker
       - name: Run multiple backend test
         run: |-

@@ -194,7 +209,7 @@ jobs:
         env:
           S3_LOCATION_FILE: tests/locationConfig/locationConfigTests.json
       - name: Upload logs to artifacts
-        uses: scality/action-artifacts@v3
+        uses: scality/action-artifacts@v4
         with:
           method: upload
           url: https://artifacts.scality.net

@@ -217,18 +232,16 @@ jobs:
     env:
       S3BACKEND: file
       S3VAULT: mem
-      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}/cloudserver:${{ github.sha }}
+      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}:${{ github.sha }}
       MPU_TESTING: "yes"
       ENABLE_NULL_VERSION_COMPAT_MODE: "${{ matrix.enable-null-compat }}"
       JOB_NAME: ${{ matrix.job-name }}
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
+        uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
         with:
-          python-version: |
-            2.7
-            3.9
+          python-version: 3.9
       - name: Setup CI environment
         uses: ./.github/actions/setup-ci
       - name: Setup matrix job artifacts directory

@@ -236,24 +249,20 @@ jobs:
         run: |
           set -exu
           mkdir -p /tmp/artifacts/${{ matrix.job-name }}/
-      - name: Setup python2 test environment
+      - name: Setup python test environment
         run: |
           sudo apt-get install -y libdigest-hmac-perl
-          pip install virtualenv==20.21.0
-          virtualenv -p $(which python2) ~/.virtualenv/py2
-          source ~/.virtualenv/py2/bin/activate
-          pip install 's3cmd==1.6.1'
+          pip install 's3cmd==2.3.0'
       - name: Setup CI services
-        run: docker-compose up -d
+        run: docker compose up -d
         working-directory: .github/docker
       - name: Run file ft tests
         run: |-
           set -o pipefail;
           bash wait_for_local_port.bash 8000 40
-          source ~/.virtualenv/py2/bin/activate
           yarn run ft_test | tee /tmp/artifacts/${{ matrix.job-name }}/tests.log
       - name: Upload logs to artifacts
-        uses: scality/action-artifacts@v3
+        uses: scality/action-artifacts@v4
         with:
           method: upload
           url: https://artifacts.scality.net

@@ -267,20 +276,20 @@ jobs:
     needs: build
     env:
       ENABLE_UTAPI_V2: t
       S3BACKEND: mem
       BUCKET_DENY_FILTER: utapi-event-filter-deny-bucket
-      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}/cloudserver:${{ github.sha }}
+      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}:${{ github.sha }}
       JOB_NAME: ${{ github.job }}
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
+        uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
         with:
           python-version: 3.9
       - name: Setup CI environment
         uses: ./.github/actions/setup-ci
       - name: Setup CI services
-        run: docker-compose up -d
+        run: docker compose up -d
         working-directory: .github/docker
       - name: Run file utapi v2 tests
         run: |-

@@ -288,7 +297,7 @@ jobs:
           bash wait_for_local_port.bash 8000 40
           yarn run test_utapi_v2 | tee /tmp/artifacts/${{ github.job }}/tests.log
       - name: Upload logs to artifacts
-        uses: scality/action-artifacts@v3
+        uses: scality/action-artifacts@v4
         with:
           method: upload
           url: https://artifacts.scality.net

@@ -304,12 +313,13 @@ jobs:
       S3BACKEND: file
       S3VAULT: mem
       MPU_TESTING: true
-      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}/cloudserver:${{ github.sha }}
+      CLOUDSERVER_IMAGE: ghcr.io/${{ github.repository }}:${{ github.sha }}
+      PYKMIP_IMAGE: ghcr.io/${{ github.repository }}/pykmip:${{ github.sha }}
       JOB_NAME: ${{ github.job }}
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
+        uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
         with:
           python-version: 3.9
       - name: Setup CI environment

@@ -318,7 +328,7 @@ jobs:
         run: cp -r ./certs /tmp/ssl-kmip
         working-directory: .github/pykmip
       - name: Setup CI services
-        run: docker-compose --profile pykmip up -d
+        run: docker compose --profile pykmip up -d
         working-directory: .github/docker
       - name: Run file KMIP tests
         run: |-

@@ -327,7 +337,7 @@ jobs:
           bash wait_for_local_port.bash 5696 40
           yarn run ft_kmip | tee /tmp/artifacts/${{ github.job }}/tests.log
       - name: Upload logs to artifacts
-        uses: scality/action-artifacts@v3
+        uses: scality/action-artifacts@v4
         with:
           method: upload
           url: https://artifacts.scality.net

View File

@@ -177,6 +177,16 @@ const constants = {
     assumedRoleArnResourceType: 'assumed-role',
     // Session name of the backbeat lifecycle assumed role session.
     backbeatLifecycleSessionName: 'backbeat-lifecycle',
+    unsupportedSignatureChecksums: new Set([
+        'STREAMING-UNSIGNED-PAYLOAD-TRAILER',
+        'STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER',
+        'STREAMING-AWS4-ECDSA-P256-SHA256-PAYLOAD',
+        'STREAMING-AWS4-ECDSA-P256-SHA256-PAYLOAD-TRAILER',
+    ]),
+    supportedSignatureChecksums: new Set([
+        'UNSIGNED-PAYLOAD',
+        'STREAMING-AWS4-HMAC-SHA256-PAYLOAD',
+    ]),
 };

 module.exports = constants;

View File

@@ -1,4 +1,4 @@
-FROM registry.scality.com/federation/nodesvc-base:7.10.6.0
+FROM ghcr.io/scality/federation/nodesvc-base:7.10.6.0

 ENV S3_CONFIG_FILE=${CONF_DIR}/config.json
 ENV S3_LOCATION_FILE=${CONF_DIR}/locationConfig.json

View File

@@ -1,3 +1,10 @@
 'use strict'; // eslint-disable-line strict

+require('werelogs').stderrUtils.catchAndTimestampStderr(
+    undefined,
+    // Do not exit as workers have their own listener that will exit
+    // But primary don't have another listener
+    require('cluster').isPrimary ? 1 : null,
+);
+
 require('./lib/server.js')();

View File

@@ -289,7 +289,14 @@ function locationConstraintAssert(locationConstraints) {
         'include us-east-1 as a locationConstraint');
 }

-function parseUtapiReindex({ enabled, schedule, sentinel, bucketd }) {
+function parseUtapiReindex(config) {
+    const {
+        enabled,
+        schedule,
+        sentinel,
+        bucketd,
+        onlyCountLatestWhenObjectLocked,
+    } = config;
     assert(typeof enabled === 'boolean',
         'bad config: utapi.reindex.enabled must be a boolean');
     assert(typeof sentinel === 'object',

@@ -304,6 +311,10 @@ function parseUtapiReindex(config) {
         'bad config: utapi.reindex.bucketd.port must be a number');
     assert(typeof schedule === 'string',
         'bad config: utapi.reindex.schedule must be a string');
+    if (onlyCountLatestWhenObjectLocked !== undefined) {
+        assert(typeof onlyCountLatestWhenObjectLocked === 'boolean',
+            'bad config: utapi.reindex.onlyCountLatestWhenObjectLocked must be a boolean');
+    }
     try {
         cronParser.parseExpression(schedule);
     } catch (e) {

View File

@@ -37,8 +37,10 @@ const AMZ_ABORT_ID_HEADER = 'x-amz-abort-rule-id';

 function _generateExpHeadersObjects(rules, params, datetime) {
     const tags = {
-        TagSet: Object.keys(params.tags)
-            .map(key => ({ Key: key, Value: params.tags[key] })),
+        TagSet: params.tags
+            ? Object.keys(params.tags)
+                .map(key => ({ Key: key, Value: params.tags[key] }))
+            : [],
     };

     const objectInfo = { Key: params.key };

View File

@@ -0,0 +1,32 @@
+const { errors } = require('arsenal');
+
+const { unsupportedSignatureChecksums, supportedSignatureChecksums } = require('../../../../constants');
+
+function validateChecksumHeaders(headers) {
+    // If the x-amz-trailer header is present the request is using one of the
+    // trailing checksum algorithms, which are not supported.
+    if (headers['x-amz-trailer'] !== undefined) {
+        return errors.BadRequest.customizeDescription('trailing checksum is not supported');
+    }
+
+    const signatureChecksum = headers['x-amz-content-sha256'];
+    if (signatureChecksum === undefined) {
+        return null;
+    }
+
+    if (supportedSignatureChecksums.has(signatureChecksum)) {
+        return null;
+    }
+
+    // If the value is not one of the possible checksum algorithms
+    // the only other valid value is the actual sha256 checksum of the payload.
+    // Do a simple sanity check of the length to guard against future algos.
+    // If the value is an unknown algo, then it will fail checksum validation.
+    if (!unsupportedSignatureChecksums.has(signatureChecksum) && signatureChecksum.length === 64) {
+        return null;
+    }
+
+    return errors.BadRequest.customizeDescription('unsupported checksum algorithm');
+}
+
+module.exports = validateChecksumHeaders;

View File

@@ -360,60 +360,86 @@ function versioningPreprocessing(bucketName, bucketMD, objectKey, objMD,
         });
 }

+/** Return options to pass to Metadata layer for version-specific
+ * operations with the given requested version ID
+ *
+ * @param {object} objectMD - object metadata
+ * @param {boolean} nullVersionCompatMode - if true, behaves in null
+ * version compatibility mode
+ * @return {object} options object with params:
+ * {string} [options.versionId] - specific versionId to update
+ * {boolean} [options.isNull=true|false|undefined] - if set, tells the
+ * Metadata backend if we're updating or deleting a new-style null
+ * version (stored in master or null key), or not a null version.
+ */
+function getVersionSpecificMetadataOptions(objectMD, nullVersionCompatMode) {
+    // Use the internal versionId if it is a "real" null version (not
+    // non-versioned)
+    //
+    // If the target object is non-versioned: do not specify a
+    // "versionId" attribute nor "isNull"
+    //
+    // If the target version is a null version, i.e. has the "isNull"
+    // attribute:
+    //
+    // - send the "isNull=true" param to Metadata if the version is
+    //   already a null key put by a non-compat mode Cloudserver, to
+    //   let Metadata know that the null key is to be updated or
+    //   deleted. This is the case if the "isNull2" metadata attribute
+    //   exists
+    //
+    // - otherwise, do not send the "isNull" parameter to hint
+    //   Metadata that it is a legacy null version
+    //
+    // If the target version is not a null version and is versioned:
+    //
+    // - send the "isNull=false" param to Metadata in non-compat
+    //   mode (mandatory for v1 format)
+    //
+    // - otherwise, do not send the "isNull" parameter to hint
+    //   Metadata that an existing null version may not be stored in a
+    //   null key
+    if (objectMD.versionId === undefined) {
+        return {};
+    }
+    const options = { versionId: objectMD.versionId };
+    if (objectMD.isNull) {
+        if (objectMD.isNull2) {
+            options.isNull = true;
+        }
+    } else if (!nullVersionCompatMode) {
+        options.isNull = false;
+    }
+    return options;
+}
+
 /** preprocessingVersioningDelete - return versioning information for S3 to
  * manage deletion of objects and versions, including creation of delete markers
  * @param {string} bucketName - name of bucket
  * @param {object} bucketMD - bucket metadata
  * @param {object} objectMD - obj metadata
  * @param {string} [reqVersionId] - specific version ID sent as part of request
- * @param {boolean} nullVersionCompatMode - if true, behaves in null
- * version compatibility mode and return appropriate values:
- * - in normal mode, returns an 'isNull' boolean sent to Metadata (true or false)
- * - in compatibility mode, does not return an 'isNull' property
+ * @param {boolean} nullVersionCompatMode - if true, behaves in null version compatibility mode
  * @return {object} options object with params:
  * {boolean} [options.deleteData=true|undefined] - whether to delete data (if undefined
  * means creating a delete marker instead)
  * {string} [options.versionId] - specific versionId to delete
  * {boolean} [options.isNull=true|false|undefined] - if set, tells the
- * Metadata backend if we're deleting a null version or not a null
- * version. Not set if `nullVersionCompatMode` is true.
+ * Metadata backend if we're deleting a new-style null version (stored
+ * in master or null key), or not a null version.
  */
 function preprocessingVersioningDelete(bucketName, bucketMD, objectMD, reqVersionId, nullVersionCompatMode) {
-    const options = {};
+    let options = {};
+    if (bucketMD.getVersioningConfiguration() && reqVersionId) {
+        options = getVersionSpecificMetadataOptions(objectMD, nullVersionCompatMode);
+    }
     if (!bucketMD.getVersioningConfiguration() || reqVersionId) {
         // delete data if bucket is non-versioned or the request
         // deletes a specific version
         options.deleteData = true;
     }
-    if (bucketMD.getVersioningConfiguration() && reqVersionId) {
-        if (reqVersionId === 'null') {
-            // deleting the 'null' version if it exists:
-            //
-            // - use its internal versionId if it is a "real" null
-            //   version (not non-versioned)
-            //
-            // - send the "isNull" param to Metadata if:
-            //
-            //   - in non-compat mode (mandatory for v1 format)
-            //
-            //   - OR if the version is already a null key put by a
-            //     non-compat mode Cloudserver, to let Metadata know that
-            //     the null key is to be deleted. This is the case if the
-            //     "isNull2" param is set.
-            if (objectMD.versionId !== undefined) {
-                options.versionId = objectMD.versionId;
-                if (objectMD.isNull2) {
-                    options.isNull = true;
-                }
-            }
-        } else {
-            // deleting a specific version
-            options.versionId = reqVersionId;
-            if (!nullVersionCompatMode) {
-                options.isNull = false;
-            }
-        }
-    }
     return options;
 }

@@ -424,5 +450,6 @@ module.exports = {
     processVersioningState,
     getMasterState,
     versioningPreprocessing,
+    getVersionSpecificMetadataOptions,
     preprocessingVersioningDelete,
 };

View File

@@ -18,7 +18,7 @@ function bucketDeleteLifecycle(authInfo, request, log, callback) {
     const metadataValParams = {
         authInfo,
         bucketName,
-        requestType: 'bucketDeleteLifecycle',
+        requestType: 'bucketPutLifecycle',
         request,
     };
     return metadataValidateBucket(metadataValParams, log, (err, bucket) => {

View File

@@ -1,7 +1,7 @@
 const async = require('async');
 const { errors } = require('arsenal');

-const { decodeVersionId, getVersionIdResHeader }
+const { decodeVersionId, getVersionIdResHeader, getVersionSpecificMetadataOptions }
     = require('./apiUtils/object/versioning');
 const { metadataValidateBucketAndObj } = require('../metadata/metadataUtils');

@@ -75,13 +75,7 @@ function objectDeleteTagging(authInfo, request, log, callback) {
         (bucket, objectMD, next) => {
             // eslint-disable-next-line no-param-reassign
             objectMD.tags = {};
-            const params = {};
-            if (objectMD.versionId) {
-                params.versionId = objectMD.versionId;
-                if (!config.nullVersionCompatMode) {
-                    params.isNull = objectMD.isNull || false;
-                }
-            }
+            const params = getVersionSpecificMetadataOptions(objectMD, config.nullVersionCompatMode);
             const replicationInfo = getReplicationInfo(objectKey, bucket, true,
                 0, REPLICATION_ACTION, objectMD);
             if (replicationInfo) {

View File

@@ -15,6 +15,8 @@ const kms = require('../kms/wrapper');
 const { config } = require('../Config');
 const { setExpirationHeaders } = require('./apiUtils/object/expirationHeaders');
 const monitoring = require('../utilities/metrics');
+const validateChecksumHeaders = require('./apiUtils/object/validateChecksumHeaders');
 const writeContinue = require('../utilities/writeContinue');

 const versionIdUtils = versioning.VersionID;

@@ -69,6 +71,11 @@ function objectPut(authInfo, request, streamingV4Params, log, callback) {
         ));
     }

+    const checksumHeaderErr = validateChecksumHeaders(headers);
+    if (checksumHeaderErr) {
+        return callback(checksumHeaderErr);
+    }
+
     log.trace('owner canonicalID to send to data', { canonicalID });
     return metadataValidateBucketAndObj(valParams, log,

View File

@@ -7,7 +7,7 @@ const { pushMetric } = require('../utapi/utilities');
 const collectCorsHeaders = require('../utilities/collectCorsHeaders');
 const constants = require('../../constants');
 const vault = require('../auth/vault');
-const { decodeVersionId, getVersionIdResHeader }
+const { decodeVersionId, getVersionIdResHeader, getVersionSpecificMetadataOptions }
     = require('./apiUtils/object/versioning');
 const { metadataValidateBucketAndObj } = require('../metadata/metadataUtils');
 const monitoring = require('../utilities/metrics');

@@ -281,13 +281,7 @@ function objectPutACL(authInfo, request, log, cb) {
         },
         function addAclsToObjMD(bucket, objectMD, ACLParams, next) {
             // Add acl's to object metadata
-            const params = {};
-            if (objectMD.versionId) {
-                params.versionId = objectMD.versionId;
-                if (!config.nullVersionCompatMode) {
-                    params.isNull = objectMD.isNull || false;
-                }
-            }
+            const params = getVersionSpecificMetadataOptions(objectMD, config.nullVersionCompatMode);
             acl.addObjectACL(bucket, objectKey, objectMD,
                 ACLParams, params, log, err => next(err, bucket, objectMD));
         },

View File

@@ -2,7 +2,7 @@ const async = require('async');
 const { errors, s3middleware } = require('arsenal');

 const collectCorsHeaders = require('../utilities/collectCorsHeaders');
-const { decodeVersionId, getVersionIdResHeader } =
+const { decodeVersionId, getVersionIdResHeader, getVersionSpecificMetadataOptions } =
     require('./apiUtils/object/versioning');
 const getReplicationInfo = require('./apiUtils/object/getReplicationInfo');
 const metadata = require('../metadata/wrapper');

@@ -86,13 +86,7 @@ function objectPutLegalHold(authInfo, request, log, callback) {
         (bucket, legalHold, objectMD, next) => {
             // eslint-disable-next-line no-param-reassign
             objectMD.legalHold = legalHold;
-            const params = {};
-            if (objectMD.versionId) {
-                params.versionId = objectMD.versionId;
-                if (!config.nullVersionCompatMode) {
-                    params.isNull = objectMD.isNull || false;
-                }
-            }
+            const params = getVersionSpecificMetadataOptions(objectMD, config.nullVersionCompatMode);
             const replicationInfo = getReplicationInfo(objectKey, bucket, true,
                 0, REPLICATION_ACTION, objectMD);
             if (replicationInfo) {

View File

@@ -19,6 +19,8 @@ const locationConstraintCheck
 const monitoring = require('../utilities/metrics');
 const writeContinue = require('../utilities/writeContinue');
 const { getObjectSSEConfiguration } = require('./apiUtils/bucket/bucketEncryption');
+const validateChecksumHeaders = require('./apiUtils/object/validateChecksumHeaders');

 const skipError = new Error('skip');

 // We pad the partNumbers so that the parts will be sorted in numerical order.

@@ -64,6 +66,11 @@ function objectPutPart(authInfo, request, streamingV4Params, log,
         return cb(errors.EntityTooLarge);
     }

+    const checksumHeaderErr = validateChecksumHeaders(request.headers);
+    if (checksumHeaderErr) {
+        return cb(checksumHeaderErr);
+    }
+
     // Note: Part sizes cannot be less than 5MB in size except for the last.
     // However, we do not check this value here because we cannot know which
     // part will be the last until a complete MPU request is made. Thus, we let

View File

@@ -1,7 +1,7 @@
 const async = require('async');
 const { errors, s3middleware } = require('arsenal');

-const { decodeVersionId, getVersionIdResHeader } =
+const { decodeVersionId, getVersionIdResHeader, getVersionSpecificMetadataOptions } =
     require('./apiUtils/object/versioning');
 const { ObjectLockInfo, checkUserGovernanceBypass, hasGovernanceBypassHeader } =
     require('./apiUtils/object/objectLockHelpers');

@@ -116,13 +116,7 @@ function objectPutRetention(authInfo, request, log, callback) {
             /* eslint-disable no-param-reassign */
             objectMD.retentionMode = retentionInfo.mode;
             objectMD.retentionDate = retentionInfo.date;
-            const params = {};
-            if (objectMD.versionId) {
-                params.versionId = objectMD.versionId;
-                if (!config.nullVersionCompatMode) {
-                    params.isNull = objectMD.isNull || false;
-                }
-            }
+            const params = getVersionSpecificMetadataOptions(objectMD, config.nullVersionCompatMode);
             const replicationInfo = getReplicationInfo(objectKey, bucket, true,
                 0, REPLICATION_ACTION, objectMD);
             if (replicationInfo) {

View File

@@ -1,7 +1,7 @@
 const async = require('async');
 const { errors, s3middleware } = require('arsenal');

-const { decodeVersionId, getVersionIdResHeader } =
+const { decodeVersionId, getVersionIdResHeader, getVersionSpecificMetadataOptions } =
     require('./apiUtils/object/versioning');
 const { metadataValidateBucketAndObj } = require('../metadata/metadataUtils');

@@ -81,13 +81,7 @@ function objectPutTagging(authInfo, request, log, callback) {
         (bucket, tags, objectMD, next) => {
             // eslint-disable-next-line no-param-reassign
             objectMD.tags = tags;
-            const params = {};
-            if (objectMD.versionId) {
-                params.versionId = objectMD.versionId;
-                if (!config.nullVersionCompatMode) {
-                    params.isNull = objectMD.isNull || false;
-                }
-            }
+            const params = getVersionSpecificMetadataOptions(objectMD, config.nullVersionCompatMode);
             const replicationInfo = getReplicationInfo(objectKey, bucket, true,
                 0, REPLICATION_ACTION, objectMD);
             if (replicationInfo) {

View File

@@ -1,3 +1,4 @@
+require('werelogs').stderrUtils.catchAndTimestampStderr();
 const _config = require('../Config').config;
 const { utapiVersion, UtapiServer: utapiServer } = require('utapi');

View File

@@ -1,3 +1,4 @@
+require('werelogs').stderrUtils.catchAndTimestampStderr();
 const UtapiReindex = require('utapi').UtapiReindex;
 const { config } = require('../Config');

View File

@@ -1,3 +1,4 @@
+require('werelogs').stderrUtils.catchAndTimestampStderr();
 const UtapiReplay = require('utapi').UtapiReplay;
 const _config = require('../Config').config;

View File

@@ -1,6 +1,6 @@
 {
   "name": "s3",
-  "version": "7.70.21",
+  "version": "7.70.21-9",
   "description": "S3 connector",
   "main": "index.js",
   "engines": {

@@ -20,7 +20,7 @@
   "homepage": "https://github.com/scality/S3#readme",
   "dependencies": {
     "@hapi/joi": "^17.1.0",
-    "arsenal": "git+https://github.com/scality/arsenal#7.70.4",
+    "arsenal": "git+https://github.com/scality/arsenal#7.70.4-1",
     "async": "~2.5.0",
     "aws-sdk": "2.905.0",
     "azure-storage": "^2.1.0",

@@ -35,11 +35,11 @@
     "moment": "^2.26.0",
     "npm-run-all": "~4.1.5",
     "prom-client": "14.2.0",
-    "utapi": "git+https://github.com/scality/utapi#7.10.12",
+    "utapi": "git+https://github.com/scality/utapi#7.70.4",
     "utf8": "~2.1.1",
     "uuid": "^3.0.1",
     "vaultclient": "scality/vaultclient#7.10.13",
-    "werelogs": "scality/werelogs#8.1.0",
+    "werelogs": "scality/werelogs#8.1.0-1",
     "xml2js": "~0.4.16"
   },
   "devDependencies": {

View File

@@ -30,6 +30,33 @@ function getPolicyParams(paramToChange) {
     };
 }

+function getPolicyParamsWithId(paramToChange, policyId) {
+    const newParam = {};
+    const bucketPolicy = {
+        Version: '2012-10-17',
+        Id: policyId,
+        Statement: [basicStatement],
+    };
+    if (paramToChange) {
+        newParam[paramToChange.key] = paramToChange.value;
+        bucketPolicy.Statement[0] = Object.assign({}, basicStatement, newParam);
+    }
+    return {
+        Bucket: bucket,
+        Policy: JSON.stringify(bucketPolicy),
+    };
+}
+
+function generateRandomString(length) {
+    // All allowed characters matching the regex in arsenal
+    const allowedCharacters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+=,.@ -/';
+    const allowedCharactersLength = allowedCharacters.length;
+    return [...Array(length)]
+        .map(() => allowedCharacters[~~(Math.random() * allowedCharactersLength)])
+        .join('');
+}
+
 // Check for the expected error response code and status code.
 function assertError(err, expectedErr, cb) {
     if (expectedErr === null) {

@@ -102,5 +129,31 @@ describe('aws-sdk test put bucket policy', () => {
             s3.putBucketPolicy(params, err =>
                 assertError(err, 'MalformedPolicy', done));
         });
+
+        it('should return MalformedPolicy because Id is not a string',
+        done => {
+            const params = getPolicyParamsWithId(null, 59);
+            s3.putBucketPolicy(params, err =>
+                assertError(err, 'MalformedPolicy', done));
+        });
+
+        it('should put a bucket policy on bucket since Id is a string',
+        done => {
+            const params = getPolicyParamsWithId(null, 'cd3ad3d9-2776-4ef1-a904-4c229d1642e');
+            s3.putBucketPolicy(params, err =>
+                assertError(err, null, done));
+        });
+
+        it('should allow bucket policy with pincipal arn less than 2048 characters', done => {
+            const params = getPolicyParams({ key: 'Principal', value: { AWS: `arn:aws:iam::767707094035:user/${generateRandomString(150)}` } }); // eslint-disable-line max-len
+            s3.putBucketPolicy(params, err =>
+                assertError(err, null, done));
+        });
+
+        it('should not allow bucket policy with pincipal arn more than 2048 characters', done => {
+            const params = getPolicyParams({ key: 'Principal', value: { AWS: `arn:aws:iam::767707094035:user/${generateRandomString(2020)}` } }); // eslint-disable-line max-len
+            s3.putBucketPolicy(params, err =>
+                assertError(err, 'MalformedPolicy', done));
+        });
     });
 });

View File

@@ -0,0 +1,156 @@
+const assert = require('assert');
+const async = require('async');
+
+const BucketUtility = require('../../lib/utility/bucket-util');
+
+const {
+    removeAllVersions,
+    versioningEnabled,
+} = require('../../lib/utility/versioning-util.js');
+
+// This series of tests can only be enabled on an environment that has
+// two Cloudserver instances, with one of them in null version
+// compatibility mode. This is why they have to be explicitly enabled,
+// which is done in a particular Integration test suite. This test
+// suite makes the most sense in Integration because it tests the
+// combination of Cloudserver requests to bucketd and the behavior of
+// bucketd based on those requests.
+const describeSkipIfNotExplicitlyEnabled =
+    process.env.ENABLE_LEGACY_NULL_VERSION_COMPAT_TESTS ? describe : describe.skip;
+
+describeSkipIfNotExplicitlyEnabled('legacy null version compatibility tests', () => {
+    const bucketUtilCompat = new BucketUtility('default', {
+        endpoint: 'http://127.0.0.1:8001',
+    });
+    const s3Compat = bucketUtilCompat.s3;
+    const bucketUtil = new BucketUtility('default', {});
+    const s3 = bucketUtil.s3;
+    const bucket = `legacy-null-version-compat-${Date.now()}`;
+
+    // In this series of tests, we first create a non-current null
+    // version in legacy format (with "nullVersionId" field in the
+    // master and no "isNull2" metadata attribute), by using the
+    // Cloudserver endpoint that is configured with null version
+    // compatibility mode enabled.
+    beforeEach(done => async.series([
+        next => s3Compat.createBucket({
+            Bucket: bucket,
+        }, next),
+        next => s3Compat.putObject({
+            Bucket: bucket,
+            Key: 'obj',
+            Body: 'nullbody',
+        }, next),
+        next => s3Compat.putBucketVersioning({
+            Bucket: bucket,
+            VersioningConfiguration: versioningEnabled,
+        }, next),
+        next => s3Compat.putObject({
+            Bucket: bucket,
+            Key: 'obj',
+            Body: 'versionedbody',
+        }, next),
+    ], done));
+
+    afterEach(done => {
+        removeAllVersions({ Bucket: bucket }, err => {
+            if (err) {
+                return done(err);
+            }
+            return s3Compat.deleteBucket({ Bucket: bucket }, done);
+        });
+    });
+
+    it('updating ACL of legacy null version with non-compat cloudserver', done => {
+        async.series([
+            next => s3.putObjectAcl({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+                ACL: 'public-read',
+            }, next),
+            next => s3.getObjectAcl({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+            }, (err, acl) => {
+                assert.ifError(err);
+                // check that we fetched the updated null version
+                assert.strictEqual(acl.Grants.length, 2);
+                next();
+            }),
+            next => s3.deleteObject({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+            }, next),
+            next => s3.listObjectVersions({
+                Bucket: bucket,
+            }, (err, listing) => {
+                assert.ifError(err);
+                // check that the null version has been correctly deleted
+                assert(listing.Versions.every(version => version.VersionId !== 'null'));
+                next();
+            }),
+        ], done);
+    });
+
+    it('updating tags of legacy null version with non-compat cloudserver', done => {
+        const tagSet = [
+            {
+                Key: 'newtag',
+                Value: 'newtagvalue',
+            },
+        ];
+        async.series([
+            next => s3.putObjectTagging({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+                Tagging: {
+                    TagSet: tagSet,
+                },
+            }, next),
+            next => s3.getObjectTagging({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+            }, (err, tagging) => {
+                assert.ifError(err);
+                assert.deepStrictEqual(tagging.TagSet, tagSet);
+                next();
+            }),
+            next => s3.deleteObjectTagging({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+            }, err => {
+                assert.ifError(err);
+                next();
+            }),
+            next => s3.getObjectTagging({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+            }, (err, tagging) => {
+                assert.ifError(err);
+                assert.deepStrictEqual(tagging.TagSet, []);
+                next();
+            }),
+            next => s3.deleteObject({
+                Bucket: bucket,
+                Key: 'obj',
+                VersionId: 'null',
+            }, next),
+            next => s3.listObjectVersions({
+                Bucket: bucket,
+            }, (err, listing) => {
+                assert.ifError(err);
+                // check that the null version has been correctly deleted
+                assert(listing.Versions.every(version => version.VersionId !== 'null'));
+                next();
+            }),
+        ], done);
+    });
+});

View File

@@ -0,0 +1,70 @@
+const assert = require('assert');
+
+const { makeS3Request } = require('../utils/makeRequest');
+const HttpRequestAuthV4 = require('../utils/HttpRequestAuthV4');
+
+const bucket = 'testunsupportedchecksumsbucket';
+const objectKey = 'key';
+const objData = Buffer.alloc(1024, 'a');
+
+const authCredentials = {
+    accessKey: 'accessKey1',
+    secretKey: 'verySecretKey1',
+};
+
+const itSkipIfAWS = process.env.AWS_ON_AIR ? it.skip : it;
+
+describe('unsupported checksum requests:', () => {
+    before(done => {
+        makeS3Request({
+            method: 'PUT',
+            authCredentials,
+            bucket,
+        }, err => {
+            assert.ifError(err);
+            done();
+        });
+    });
+
+    after(done => {
+        makeS3Request({
+            method: 'DELETE',
+            authCredentials,
+            bucket,
+        }, err => {
+            assert.ifError(err);
+            done();
+        });
+    });
+
+    itSkipIfAWS('should respond with BadRequest for trailing checksum', done => {
+        const req = new HttpRequestAuthV4(
+            `http://localhost:8000/${bucket}/${objectKey}`,
+            Object.assign(
+                {
+                    method: 'PUT',
+                    headers: {
+                        'content-length': objData.length,
+                        'x-amz-content-sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER',
+                        'x-amz-trailer': 'x-amz-checksum-sha256',
+                    },
+                },
+                authCredentials
+            ),
+            res => {
+                assert.strictEqual(res.statusCode, 400);
+                res.on('data', () => {});
+                res.on('end', done);
+            }
+        );
+
+        req.on('error', err => {
+            assert.ifError(err);
+        });
+
+        req.write(objData);
+
+        req.once('drain', () => {
+            req.end();
+        });
+    });
+});

View File

@@ -165,7 +165,9 @@ function readJsonFromChild(child, lineFinder, cb) {
         const findBrace = data.indexOf('{', findLine);
         const findEnd = findEndJson(data, findBrace);
         const endJson = data.substring(findBrace, findEnd + 1)
-            .replace(/"/g, '\\"').replace(/'/g, '"');
+            .replace(/"/g, '\\"').replace(/'/g, '"')
+            .replace(/b'/g, '\'')
+            .replace(/b"/g, '"');
         return cb(JSON.parse(endJson));
     });
 }

@@ -344,18 +346,18 @@ describe('s3cmd getService', () => {
     it("should have response headers matching AWS's response headers",
         done => {
-            provideLineOfInterest(['ls', '--debug'], 'DEBUG: Response: {',
+            provideLineOfInterest(['ls', '--debug'], '\'headers\': {',
                 parsedObject => {
-                    assert(parsedObject.headers['x-amz-id-2']);
-                    assert(parsedObject.headers['transfer-encoding']);
-                    assert(parsedObject.headers['x-amz-request-id']);
-                    const gmtDate = new Date(parsedObject.headers.date)
+                    assert(parsedObject['x-amz-id-2']);
+                    assert(parsedObject['transfer-encoding']);
+                    assert(parsedObject['x-amz-request-id']);
+                    const gmtDate = new Date(parsedObject.date)
                         .toUTCString();
-                    assert.strictEqual(parsedObject.headers.date, gmtDate);
+                    assert.strictEqual(parsedObject.date, gmtDate);
                     assert.strictEqual(parsedObject
-                        .headers['content-type'], 'application/xml');
+                        ['content-type'], 'application/xml');
                     assert.strictEqual(parsedObject
-                        .headers['set-cookie'], undefined);
+                        ['set-cookie'], undefined);
                     done();
                 });
         });

@@ -395,11 +397,11 @@ describe('s3cmd getObject', function toto() {
     });

     it('get non existing file in existing bucket, should fail', done => {
-        exec(['get', `s3://${bucket}/${nonexist}`, 'fail'], done, 12);
+        exec(['get', `s3://${bucket}/${nonexist}`, 'fail'], done, 64);
     });

     it('get file in non existing bucket, should fail', done => {
-        exec(['get', `s3://${nonexist}/${nonexist}`, 'fail2'], done, 12);
+        exec(['get', `s3://${nonexist}/${nonexist}`, 'fail2'], done, 64);
     });
 });

@@ -511,7 +513,7 @@ describe('s3cmd delObject', () => {
     it('delete an already deleted object, should return a 204', done => {
         provideLineOfInterest(['rm', `s3://${bucket}/${upload}`, '--debug'],
-            'DEBUG: Response: {', parsedObject => {
+            'DEBUG: Response:\n{', parsedObject => {
                 assert.strictEqual(parsedObject.status, 204);
                 done();
             });

@@ -519,14 +521,14 @@ describe('s3cmd delObject', () => {
     it('delete non-existing object, should return a 204', done => {
         provideLineOfInterest(['rm', `s3://${bucket}/${nonexist}`, '--debug'],
-            'DEBUG: Response: {', parsedObject => {
+            'DEBUG: Response:\n{', parsedObject => {
                 assert.strictEqual(parsedObject.status, 204);
                 done();
             });
     });

     it('try to get the deleted object, should fail', done => {
-        exec(['get', `s3://${bucket}/${upload}`, download], done, 12);
+        exec(['get', `s3://${bucket}/${upload}`, download], done, 64);
     });
 });

@@ -621,7 +623,7 @@ describe('s3cmd multipart upload', function titi() {
     });

     it('should not be able to get deleted object', done => {
-        exec(['get', `s3://${bucket}/${MPUpload}`, download], done, 12);
+        exec(['get', `s3://${bucket}/${MPUpload}`, download], done, 64);
     });
 });

@@ -660,7 +662,7 @@ MPUploadSplitter.forEach(file => {
     });

     it('should not be able to get deleted object', done => {
-        exec(['get', `s3://${bucket}/${file}`, download], done, 12);
+        exec(['get', `s3://${bucket}/${file}`, download], done, 64);
     });
 });

@@ -728,7 +730,7 @@ describe('s3cmd info', () => {
     // test that POLICY and CORS are returned as 'none'
     it('should find that policy has a value of none', done => {
-        checkRawOutput(['info', `s3://${bucket}`], 'policy', 'none',
+        checkRawOutput(['info', `s3://${bucket}`], 'Policy', 'none',
             'stdout', foundIt => {
                 assert(foundIt);
                 done();

@@ -736,7 +738,7 @@ describe('s3cmd info', () => {
     });

     it('should find that cors has a value of none', done => {
-        checkRawOutput(['info', `s3://${bucket}`], 'cors', 'none',
+        checkRawOutput(['info', `s3://${bucket}`], 'CORS', 'none',
            'stdout', foundIt => {
                 assert(foundIt);
                 done();

@@ -762,7 +764,7 @@ describe('s3cmd info', () => {
     });

     it('should find that cors has a value', done => {
-        checkRawOutput(['info', `s3://${bucket}`], 'cors', corsConfig,
+        checkRawOutput(['info', `s3://${bucket}`], 'CORS', corsConfig,
             'stdout', foundIt => {
                 assert(foundIt, 'Did not find value for cors');
                 done();

View File

@@ -103,6 +103,16 @@ describe('generateExpirationHeaders', () => {
             },
             {},
         ],
+        [
+            'should provide correct headers for compatibility with legacy objects missing the tags property',
+            {
+                lifecycleConfig: lifecycleExpirationDays,
+                objectParams: { key: 'object', date: objectDate },
+            },
+            {
+                'x-amz-expiration': `expiry-date="${expectedDaysExpiryDate}", rule-id="test-days"`,
+            },
+        ],
         [
             'should return correct headers for object (days)',
             {

View File

@@ -0,0 +1,75 @@
+const assert = require('assert');
+
+const validateChecksumHeaders = require('../../../../lib/api/apiUtils/object/validateChecksumHeaders');
+const { unsupportedSignatureChecksums, supportedSignatureChecksums } = require('../../../../constants');
+
+const passingCases = [
+    {
+        description: 'should return null if no checksum headers are present',
+        headers: {},
+    },
+    {
+        description: 'should return null if UNSIGNED-PAYLOAD is used',
+        headers: {
+            'x-amz-content-sha256': 'UNSIGNED-PAYLOAD',
+        },
+    },
+    {
+        description: 'should return null if a sha256 checksum is used',
+        headers: {
+            'x-amz-content-sha256': 'thisIs64CharactersLongAndThatsAllWeCheckFor1234567890abcdefghijk',
+        },
+    },
+];
+
+supportedSignatureChecksums.forEach(checksum => {
+    passingCases.push({
+        description: `should return null if ${checksum} is used`,
+        headers: {
+            'x-amz-content-sha256': checksum,
+        },
+    });
+});
+
+const failingCases = [
+    {
+        description: 'should return BadRequest if a trailing checksum is used',
+        headers: {
+            'x-amz-trailer': 'test',
+        },
+    },
+    {
+        description: 'should return BadRequest if an unknown algo is used',
+        headers: {
+            'x-amz-content-sha256': 'UNSUPPORTED-CHECKSUM',
+        },
+    },
+];
+
+unsupportedSignatureChecksums.forEach(checksum => {
+    failingCases.push({
+        description: `should return BadRequest if ${checksum} is used`,
+        headers: {
+            'x-amz-content-sha256': checksum,
+        },
+    });
+});
+
+describe('validateChecksumHeaders', () => {
+    passingCases.forEach(testCase => {
+        it(testCase.description, () => {
+            const result = validateChecksumHeaders(testCase.headers);
+            assert.ifError(result);
+        });
+    });
+
+    failingCases.forEach(testCase => {
+        it(testCase.description, () => {
+            const result = validateChecksumHeaders(testCase.headers);
+            assert(result instanceof Error, 'Expected an error to be returned');
+            assert.strictEqual(result.is.BadRequest, true);
+            assert.strictEqual(result.code, 400);
+        });
+    });
+});

View File

@@ -5,6 +5,7 @@ const { config } = require('../../../../lib/Config');
 const INF_VID = versioning.VersionID.getInfVid(config.replicationGroupId);

 const { processVersioningState, getMasterState,
+    getVersionSpecificMetadataOptions,
     preprocessingVersioningDelete } =
     require('../../../../lib/api/apiUtils/object/versioning');

@@ -527,6 +528,68 @@ describe('versioning helpers', () => {
                 }))));
     });

+    describe('getVersionSpecificMetadataOptions', () => {
+        [
+            {
+                description: 'object put before versioning was first enabled',
+                objMD: {},
+                expectedRes: {},
+                expectedResCompat: {},
+            },
+            {
+                description: 'non-null object version',
+                objMD: {
+                    versionId: 'v1',
+                },
+                expectedRes: {
+                    versionId: 'v1',
+                    isNull: false,
+                },
+                expectedResCompat: {
+                    versionId: 'v1',
+                },
+            },
+            {
+                description: 'legacy null object version',
+                objMD: {
+                    versionId: 'vnull',
+                    isNull: true,
+                },
+                expectedRes: {
+                    versionId: 'vnull',
+                },
+                expectedResCompat: {
+                    versionId: 'vnull',
+                },
+            },
+            {
+                description: 'null object version in null key',
+                objMD: {
+                    versionId: 'vnull',
+                    isNull: true,
+                    isNull2: true,
+                },
+                expectedRes: {
+                    versionId: 'vnull',
+                    isNull: true,
+                },
+                expectedResCompat: {
+                    versionId: 'vnull',
+                    isNull: true,
+                },
+            },
+        ].forEach(testCase =>
+            [false, true].forEach(nullVersionCompatMode =>
+                it(`${testCase.description}${nullVersionCompatMode ? ' (null compat)' : ''}`,
+                () => {
+                    const options = getVersionSpecificMetadataOptions(
+                        testCase.objMD, nullVersionCompatMode);
+                    const expectedResAttr = nullVersionCompatMode ?
+                        'expectedResCompat' : 'expectedRes';
+                    assert.deepStrictEqual(options, testCase[expectedResAttr]);
+                })));
+    });
+
     describe('preprocessingVersioningDelete', () => {
         [
             {

View File

@@ -426,9 +426,9 @@ arraybuffer.slice@~0.0.7:
   optionalDependencies:
     ioctl "^2.0.2"

-"arsenal@git+https://github.com/scality/arsenal#7.70.4":
-  version "7.70.4"
-  resolved "git+https://github.com/scality/arsenal#c4cc5a2c3dfa4a8d6d565c4029ec05cbb0bf1a3e"
+"arsenal@git+https://github.com/scality/arsenal#7.70.4-1":
+  version "7.70.4-1"
+  resolved "git+https://github.com/scality/arsenal#09a474d3eae9db23bcfed760fa70aafd961a2ce7"
   dependencies:
     "@types/async" "^3.2.12"
     "@types/utf8" "^3.0.1"

@@ -5161,9 +5161,9 @@ user-home@^2.0.0:
   dependencies:
     os-homedir "^1.0.0"

-"utapi@git+https://github.com/scality/utapi#7.10.12":
-  version "7.10.12"
-  resolved "git+https://github.com/scality/utapi#347cf3c1cb088bc14bea082227100f93d1b11597"
+"utapi@git+https://github.com/scality/utapi#7.70.4":
+  version "7.70.4"
+  resolved "git+https://github.com/scality/utapi#960d990e899bc6d90f9e835ac2befdb319f6ee0b"
   dependencies:
     "@hapi/joi" "^17.1.1"
     "@senx/warp10" "^1.0.14"

@@ -5292,6 +5292,12 @@ werelogs@scality/werelogs#8.1.0:
   dependencies:
     safe-json-stringify "1.0.3"

+werelogs@scality/werelogs#8.1.0-1:
+  version "8.1.0-1"
+  resolved "https://codeload.github.com/scality/werelogs/tar.gz/1a3a7b12aa15e9b72f7fa3231b7c3adbd74f75b3"
+  dependencies:
+    safe-json-stringify "1.0.3"
+
 werelogs@scality/werelogs#GA7.2.0.5:
   version "7.2.0"
   resolved "https://codeload.github.com/scality/werelogs/tar.gz/bc034589ebf7810d6e6d61932f94327976de6eef"