Compare commits

No commits in common. "c4f2f674e790063215191dc02e18a67af8b5e90a" and "42-hackathon" have entirely different histories.

188 changed files with 25421 additions and 65851 deletions

.eslintrc (new file, 1 line)

@@ -0,0 +1 @@
{ "extends": "scality" }

.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
node_modules/

.yamllint (new file, 3 lines)

@@ -0,0 +1,3 @@
extends: default
rules:
document-start: {level: error}

CONTRIBUTING.md (new file, 5 lines)

@@ -0,0 +1,5 @@
# Contributing rules
Please follow the
[Contributing Guidelines](
https://github.com/scality/Guidelines/blob/master/CONTRIBUTING.md).

@@ -176,18 +176,7 @@
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright 2016 Scality
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

README.md (new file, 151 lines)

@@ -0,0 +1,151 @@
# Arsenal
[![CircleCI][badgepub]](https://circleci.com/gh/scality/Arsenal)
[![Scality CI][badgepriv]](http://ci.ironmann.io/gh/scality/Arsenal)
Common utilities for the S3 project components.
Within this repository, you will find the shared libraries for the multiple
components making up the whole project.
* [Guidelines](#guidelines)
* [Shuffle](#shuffle) to shuffle an array.
* [Errors](#errors) to load an object of error instances.
- [errors/arsenalErrors.json](errors/arsenalErrors.json)
## Guidelines
Please read our coding and workflow guidelines at
[scality/Guidelines](https://github.com/scality/Guidelines).
### Contributing
In order to contribute, please follow the
[Contributing Guidelines](
https://github.com/scality/Guidelines/blob/master/CONTRIBUTING.md).
## Shuffle
### Usage
``` js
import { shuffle } from 'arsenal';
let array = [1, 2, 3, 4, 5];
shuffle(array);
console.log(array);
//[5, 3, 1, 2, 4]
```
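A shuffle utility of this kind is typically an in-place Fisher-Yates shuffle. The following is a minimal sketch of the technique, not necessarily Arsenal's exact implementation:

```javascript
// In-place Fisher-Yates shuffle: walk the array from the end, swapping
// each element with a uniformly random element at or before it.
function shuffle(array) {
    for (let i = array.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [array[i], array[j]] = [array[j], array[i]]; // swap
    }
    return array;
}
```

Each of the n! permutations is equally likely, and the shuffle runs in O(n) with no extra allocation.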
## Errors
### Usage
``` js
import { errors } from 'arsenal';
console.log(errors.AccessDenied);
//{ [Error: AccessDenied]
// code: 403,
// description: 'Access Denied',
// AccessDenied: true }
```
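Such an errors object can be built from a JSON map shaped like errors/arsenalErrors.json. The sketch below is a hypothetical construction (not necessarily Arsenal's code), with two definitions inlined to stand in for the full file:

```javascript
// Definitions map shaped like errors/arsenalErrors.json (two entries
// inlined here for illustration).
const definitions = {
    AccessDenied: { code: 403, description: 'Access Denied' },
    NoSuchKey: { code: 404, description: 'The specified key does not exist.' },
};

// Build one Error instance per definition, carrying the HTTP code, the
// description, and a boolean flag named after the error.
const errors = {};
Object.keys(definitions)
    .filter(name => !name.startsWith('_')) // the real file also holds "_comment" entries
    .forEach(name => {
        const err = new Error(name);
        err.code = definitions[name].code;
        err.description = definitions[name].description;
        err[name] = true; // allows checks like errors.AccessDenied.AccessDenied
        errors[name] = err;
    });
```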
## Clustering
The Clustering class can be used to set up a cluster of workers. The class
creates at least one worker and logs every worker event (started, exited).
It also provides a watchdog that restarts workers on failure until the
stop() method is called.
### Usage
#### Simple
``` js
import { Clustering } from 'arsenal';
const cluster = new Clustering(clusterSize, logger);
cluster.start(current => {
// Put here the logic of every worker.
// 'current' is the Clustering instance, worker id is accessible by
// current.getIndex()
});
```
The callback will be called every time a worker is started/restarted.
#### Handle exit
``` js
import { Clustering } from 'arsenal';
const cluster = new Clustering(clusterSize, logger);
cluster.start(current => {
// Put here the logic of every worker.
// 'current' is the Clustering instance, worker id is accessible by
// current.getIndex()
}).onExit(current => {
if (current.isMaster()) {
// Master process exiting
} else {
const id = current.getIndex();
// Worker (id) exiting
}
});
```
You can handle the exit event on both the master and the workers by calling
the 'onExit' method and setting the callback. This lets you release resources
or save state before the process exits.
#### Silencing a signal
``` js
import { Clustering } from 'arsenal';
const cluster = new Clustering(clusterSize, logger);
cluster.start(current => {
// Put here the logic of every worker.
// 'current' is the Clustering instance, worker id is accessible by
// current.getIndex()
}).onExit((current, signal) => {
if (signal !== 'SIGTERM') {
process.exit(current.getStatus());
}
});
```
You can silence stop signals by simply not exiting in the exit callback.
#### Shutdown timeout
``` js
import { Clustering } from 'arsenal';
const cluster = new Clustering(clusterSize, logger, 1000);
cluster.start(current => {
// Put here the logic of every worker.
// 'current' is the Clustering instance, worker id is accessible by
// current.getIndex()
}).onExit((current, signal) => {
if (signal === 'SIGTERM') {
// releasing resources
}
});
```
By default, the shutdown timeout is set to 5000 milliseconds. This timeout is
used only when you explicitly call the stop() method. The window lets the
application release its resources, but if the timeout expires before the
application has finished its cleanup, a 'SIGKILL' signal is sent to the
process (which results in an immediate termination and cannot be caught).
[badgepub]: https://circleci.com/gh/scality/Arsenal.svg?style=svg
[badgepriv]: http://ci.ironmann.io/gh/scality/Arsenal.svg?style=svg&circle-token=c3d2570682cba6763a97ea0bc87521941413d75c

File diff suppressed because it is too large.

@@ -1,17 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" width="102" height="20">
<script/>
<linearGradient id="a" x2="0" y2="100%">
<stop offset="0" stop-color="#bbb" stop-opacity=".1"/>
<stop offset="1" stop-opacity=".1"/>
</linearGradient>
<rect rx="3" width="102" height="20" fill="#555"/>
<rect rx="3" x="64" width="38" height="20" fill="#dab226"/>
<path fill="#dab226" d="M64 0h4v20h-4z"/>
<rect rx="3" width="102" height="20" fill="url(#a)"/>
<g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="11">
<text x="32" y="15" fill="#010101" fill-opacity=".3">document</text>
<text x="32" y="14">document</text>
<text x="82.5" y="15" fill="#010101" fill-opacity=".3">87%</text>
<text x="82.5" y="14">87%</text>
</g>
</svg>


circle.yml (new file, 23 lines)

@@ -0,0 +1,23 @@
---
general:
branches:
ignore:
- /^ultron\/.*/ # Ignore ultron/* branches
machine:
node:
version: 6.9.5
environment:
CXX: g++-4.9
dependencies:
pre:
- sudo pip install yamllint
test:
override:
- npm run --silent lint_yml
- npm run --silent lint -- --max-warnings 0
- npm run --silent lint_md
- npm run --silent test
- npm run ft_test

File diff suppressed because it is too large.

@@ -1,19 +0,0 @@
{
"coverage": "87.23%",
"expectCount": 47,
"actualCount": 41,
"files": {
"kinetic/Kinetic.js": {
"expectCount": 47,
"actualCount": 41,
"undocumentLines": [
66,
13,
41,
683,
15,
25
]
}
}
}

@@ -1,132 +0,0 @@
/* Tomorrow Theme */
/* Original theme - https://github.com/chriskempson/tomorrow-theme */
/* Pretty printing styles. Used with prettify.js. */
/* SPAN elements with the classes below are added by prettyprint. */
/* plain text */
.pln {
color: #4d4d4c; }
@media screen {
/* string content */
.str {
color: #718c00; }
/* a keyword */
.kwd {
color: #8959a8; }
/* a comment */
.com {
color: #8e908c; }
/* a type name */
.typ {
color: #4271ae; }
/* a literal value */
.lit {
color: #f5871f; }
/* punctuation */
.pun {
color: #4d4d4c; }
/* lisp open bracket */
.opn {
color: #4d4d4c; }
/* lisp close bracket */
.clo {
color: #4d4d4c; }
/* a markup tag name */
.tag {
color: #c82829; }
/* a markup attribute name */
.atn {
color: #f5871f; }
/* a markup attribute value */
.atv {
color: #3e999f; }
/* a declaration */
.dec {
color: #f5871f; }
/* a variable name */
.var {
color: #c82829; }
/* a function name */
.fun {
color: #4271ae; } }
/* Use higher contrast and text-weight for printable form. */
@media print, projection {
.str {
color: #060; }
.kwd {
color: #006;
font-weight: bold; }
.com {
color: #600;
font-style: italic; }
.typ {
color: #404;
font-weight: bold; }
.lit {
color: #044; }
.pun, .opn, .clo {
color: #440; }
.tag {
color: #006;
font-weight: bold; }
.atn {
color: #404; }
.atv {
color: #060; } }
/* Style */
/*
pre.prettyprint {
background: white;
font-family: Consolas, Monaco, 'Andale Mono', monospace;
font-size: 12px;
line-height: 1.5;
border: 1px solid #ccc;
padding: 10px; }
*/
/* Specify class=linenums on a pre to get line numbering */
ol.linenums {
margin-top: 0;
margin-bottom: 0; }
/* IE indents via margin-left */
li.L0,
li.L1,
li.L2,
li.L3,
li.L4,
li.L5,
li.L6,
li.L7,
li.L8,
li.L9 {
/* */ }
/* Alternate shading for lines */
li.L1,
li.L3,
li.L5,
li.L7,
li.L9 {
/* */ }

@@ -1,944 +0,0 @@
@import url(https://fonts.googleapis.com/css?family=Roboto:400,300,700);
* {
margin: 0;
padding: 0;
text-decoration: none;
}
html
{
font-family: 'Roboto', sans-serif;
overflow: auto;
font-size: 14px;
/*color: #4d4e53;*/
color: rgba(0, 0, 0, .68);
background-color: #fff;
}
a {
/*color: #0095dd;*/
/*color:rgb(37, 138, 175);*/
color: #039BE5;
}
code a:hover {
text-decoration: underline;
}
ul, ol {
padding-left: 20px;
}
ul li {
list-style: disc;
margin: 4px 0;
}
ol li {
margin: 4px 0;
}
h1 {
margin-bottom: 10px;
font-size: 34px;
font-weight: 300;
border-bottom: solid 1px #ddd;
}
h2 {
margin-top: 24px;
margin-bottom: 10px;
font-size: 20px;
border-bottom: solid 1px #ddd;
font-weight: 300;
}
h3 {
position: relative;
font-size: 16px;
margin-bottom: 12px;
background-color: #E2E2E2;
padding: 4px;
font-weight: 300;
}
del {
text-decoration: line-through;
}
p {
margin-bottom: 15px;
line-height: 1.5;
}
p > code {
background-color: #f5f5f5;
border-radius: 3px;
}
pre > code {
display: block;
}
pre.prettyprint, pre > code {
padding: 4px;
margin: 1em 0;
background-color: #f5f5f5;
border-radius: 3px;
}
pre.prettyprint > code {
margin: 0;
}
p > code,
li > code {
padding: 0 4px;
border-radius: 3px;
}
.import-path pre.prettyprint,
.import-path pre.prettyprint code {
margin: 0;
padding: 0;
border: none;
background: white;
}
.layout-container {
/*display: flex;*/
/*flex-direction: row;*/
/*justify-content: flex-start;*/
/*align-items: stretch;*/
}
.layout-container > header {
height: 40px;
line-height: 40px;
font-size: 16px;
padding: 0 10px;
margin: 0;
position: fixed;
width: 100%;
z-index: 1;
background-color: white;
top: 0;
border-bottom: solid 1px #E02130;
}
.layout-container > header > a{
margin: 0 5px;
}
.layout-container > header > a.repo-url-github {
font-size: 0;
display: inline-block;
width: 20px;
height: 38px;
background: url("../image/github.png") no-repeat center;
background-size: 20px;
vertical-align: top;
}
.navigation {
position: fixed;
top: 0;
left: 0;
box-sizing: border-box;
width: 250px;
height: 100%;
padding-top: 40px;
padding-left: 15px;
padding-bottom: 2em;
margin-top:1em;
overflow-x: scroll;
box-shadow: rgba(255, 255, 255, 1) -1px 0 0 inset;
border-right: 1px solid rgba(0, 0, 0, 0.1);
}
.navigation ul {
padding: 0;
}
.navigation li {
list-style: none;
margin: 4px 0;
white-space: nowrap;
}
.navigation .nav-dir-path {
margin-top: 0.7em;
margin-bottom: 0.25em;
font-size: 0.8em;
color: #aaa;
}
.kind-class,
.kind-interface,
.kind-function,
.kind-typedef,
.kind-variable,
.kind-external {
margin-left: 0.75em;
width: 1.2em;
height: 1.2em;
display: inline-block;
text-align: center;
border-radius: 0.2em;
margin-right: 0.2em;
font-weight: bold;
}
.kind-class {
color: #009800;
background-color: #bfe5bf;
}
.kind-interface {
color: #fbca04;
background-color: #fef2c0;
}
.kind-function {
color: #6b0090;
background-color: #d6bdde;
}
.kind-variable {
color: #eb6420;
background-color: #fad8c7;
}
.kind-typedef {
color: #db001e;
background-color: #edbec3;
}
.kind-external {
color: #0738c3;
background-color: #bbcbea;
}
h1 .version,
h1 .url a {
font-size: 14px;
color: #aaa;
}
.content {
margin-top: 40px;
margin-left: 250px;
padding: 10px 50px 10px 20px;
}
.header-notice {
font-size: 14px;
color: #aaa;
margin: 0;
}
.expression-extends .prettyprint {
margin-left: 10px;
background: white;
}
.extends-chain {
border-bottom: 1px solid#ddd;
padding-bottom: 10px;
margin-bottom: 10px;
}
.extends-chain span:nth-of-type(1) {
padding-left: 10px;
}
.extends-chain > div {
margin: 5px 0;
}
.description table {
font-size: 14px;
border-spacing: 0;
border: 0;
border-collapse: collapse;
}
.description thead {
background: #999;
color: white;
}
.description table td,
.description table th {
border: solid 1px #ddd;
padding: 4px;
font-weight: normal;
}
.flat-list ul {
padding-left: 0;
}
.flat-list li {
display: inline;
list-style: none;
}
table.summary {
width: 100%;
margin: 10px 0;
border-spacing: 0;
border: 0;
border-collapse: collapse;
}
table.summary thead {
background: #999;
color: white;
}
table.summary td {
border: solid 1px #ddd;
padding: 4px 10px;
}
table.summary tbody td:nth-child(1) {
text-align: right;
white-space: nowrap;
min-width: 64px;
vertical-align: top;
}
table.summary tbody td:nth-child(2) {
width: 100%;
border-right: none;
}
table.summary tbody td:nth-child(3) {
white-space: nowrap;
border-left: none;
vertical-align: top;
}
table.summary td > div:nth-of-type(2) {
padding-top: 4px;
padding-left: 15px;
}
table.summary td p {
margin-bottom: 0;
}
.inherited-summary thead td {
padding-left: 2px;
}
.inherited-summary thead a {
color: white;
}
.inherited-summary .summary tbody {
display: none;
}
.inherited-summary .summary .toggle {
padding: 0 4px;
font-size: 12px;
cursor: pointer;
}
.inherited-summary .summary .toggle.closed:before {
content: "▶";
}
.inherited-summary .summary .toggle.opened:before {
content: "▼";
}
.member, .method {
margin-bottom: 24px;
}
table.params {
width: 100%;
margin: 10px 0;
border-spacing: 0;
border: 0;
border-collapse: collapse;
}
table.params thead {
background: #eee;
color: #aaa;
}
table.params td {
padding: 4px;
border: solid 1px #ddd;
}
table.params td p {
margin: 0;
}
.content .detail > * {
margin: 15px 0;
}
.content .detail > h3 {
color: black;
}
.content .detail > div {
margin-left: 10px;
}
.content .detail > .import-path {
margin-top: -8px;
}
.content .detail + .detail {
margin-top: 30px;
}
.content .detail .throw td:first-child {
padding-right: 10px;
}
.content .detail h4 + :not(pre) {
padding-left: 0;
margin-left: 10px;
}
.content .detail h4 + ul li {
list-style: none;
}
.return-param * {
display: inline;
}
.argument-params {
margin-bottom: 20px;
}
.return-type {
padding-right: 10px;
font-weight: normal;
}
.return-desc {
margin-left: 10px;
margin-top: 4px;
}
.return-desc p {
margin: 0;
}
.deprecated, .experimental, .instance-docs {
border-left: solid 5px orange;
padding-left: 4px;
margin: 4px 0;
}
tr.listen p,
tr.throw p,
tr.emit p{
margin-bottom: 10px;
}
.version, .since {
color: #aaa;
}
h3 .right-info {
position: absolute;
right: 4px;
font-size: 14px;
}
.version + .since:before {
content: '| ';
}
.see {
margin-top: 10px;
}
.see h4 {
margin: 4px 0;
}
.content .detail h4 + .example-doc {
margin: 6px 0;
}
.example-caption {
position: relative;
bottom: -1px;
display: inline-block;
padding: 4px;
font-style: italic;
background-color: #f5f5f5;
font-weight: bold;
border-radius: 3px;
border-bottom-left-radius: 0;
border-bottom-right-radius: 0;
}
.example-caption + pre.source-code {
margin-top: 0;
border-top-left-radius: 0;
}
footer, .file-footer {
text-align: right;
font-style: italic;
font-weight: 100;
font-size: 13px;
margin-right: 50px;
margin-left: 270px;
border-top: 1px solid #ddd;
padding-top: 30px;
margin-top: 20px;
padding-bottom: 10px;
}
pre.source-code {
background: #f5f5f5;
padding: 4px;
}
pre.raw-source-code > code {
padding: 0;
margin: 0;
}
pre.source-code.line-number {
padding: 0;
}
pre.source-code ol {
background: #eee;
padding-left: 40px;
}
pre.source-code li {
background: white;
padding-left: 4px;
list-style: decimal;
margin: 0;
}
pre.source-code.line-number li.active {
background: rgb(255, 255, 150);
}
pre.source-code.line-number li.error-line {
background: #ffb8bf;
}
table.files-summary {
width: 100%;
margin: 10px 0;
border-spacing: 0;
border: 0;
border-collapse: collapse;
text-align: right;
}
table.files-summary tbody tr:hover {
background: #eee;
}
table.files-summary td:first-child,
table.files-summary td:nth-of-type(2) {
text-align: left;
}
table.files-summary[data-use-coverage="false"] td.coverage {
display: none;
}
table.files-summary thead {
background: #999;
color: white;
}
table.files-summary td {
border: solid 1px #ddd;
padding: 4px 10px;
vertical-align: top;
}
table.files-summary td.identifiers > span {
display: block;
margin-top: 4px;
}
table.files-summary td.identifiers > span:first-child {
margin-top: 0;
}
table.files-summary .coverage-count {
font-size: 12px;
color: #aaa;
display: inline-block;
min-width: 40px;
}
.total-coverage-count {
position: relative;
bottom: 2px;
font-size: 12px;
color: #666;
font-weight: 500;
padding-left: 5px;
}
table.test-summary thead {
background: #999;
color: white;
}
table.test-summary thead .test-description {
width: 50%;
}
table.test-summary {
width: 100%;
margin: 10px 0;
border-spacing: 0;
border: 0;
border-collapse: collapse;
}
table.test-summary thead .test-count {
width: 3em;
}
table.test-summary tbody tr:hover {
background-color: #eee;
}
table.test-summary td {
border: solid 1px #ddd;
padding: 4px 10px;
vertical-align: top;
}
table.test-summary td p {
margin: 0;
}
table.test-summary tr.test-describe .toggle {
display: inline-block;
float: left;
margin-right: 4px;
cursor: pointer;
}
table.test-summary tr.test-describe .toggle.opened:before {
content: '▼';
}
table.test-summary tr.test-describe .toggle.closed:before {
content: '▶';
}
table.test-summary .test-target > span {
display: block;
margin-top: 4px;
}
table.test-summary .test-target > span:first-child {
margin-top: 0;
}
.inner-link-active {
background: rgb(255, 255, 150);
}
/* search box */
.search-box {
position: absolute;
top: 10px;
right: 50px;
padding-right: 8px;
padding-bottom: 10px;
line-height: normal;
font-size: 12px;
}
.search-box img {
width: 20px;
vertical-align: top;
}
.search-input {
display: inline;
visibility: hidden;
width: 0;
padding: 2px;
height: 1.5em;
outline: none;
background: transparent;
border: 1px #0af;
border-style: none none solid none;
vertical-align: bottom;
}
.search-input-edge {
display: none;
width: 1px;
height: 5px;
background-color: #0af;
vertical-align: bottom;
}
.search-result {
position: absolute;
display: none;
height: 600px;
width: 100%;
padding: 0;
margin-top: 5px;
margin-left: 24px;
background: white;
box-shadow: 1px 1px 4px rgb(0,0,0);
white-space: nowrap;
overflow-y: scroll;
}
.search-result-import-path {
color: #aaa;
font-size: 12px;
}
.search-result li {
list-style: none;
padding: 2px 4px;
}
.search-result li a {
display: block;
}
.search-result li.selected {
background: #ddd;
}
.search-result li.search-separator {
background: rgb(37, 138, 175);
color: white;
}
.search-box.active .search-input {
visibility: visible;
transition: width 0.2s ease-out;
width: 300px;
}
.search-box.active .search-input-edge {
display: inline-block;
}
/* coverage badge */
.esdoc-coverage {
display: inline-block;
height: 20px;
vertical-align: top;
}
h1 .esdoc-coverage {
position: relative;
top: -4px;
}
.esdoc-coverage-wrap {
color: white;
font-size: 12px;
font-weight: 500;
}
.esdoc-coverage-label {
padding: 3px 4px 3px 6px;
background: linear-gradient(to bottom, #5e5e5e 0%,#4c4c4c 100%);
border-radius: 4px 0 0 4px;
display: inline-block;
height: 20px;
box-sizing: border-box;
line-height: 14px;
}
.esdoc-coverage-ratio {
padding: 3px 6px 3px 4px;
border-radius: 0 4px 4px 0;
display: inline-block;
height: 20px;
box-sizing: border-box;
line-height: 14px;
}
.esdoc-coverage-low {
background: linear-gradient(to bottom, #db654f 0%,#c9533d 100%);
}
.esdoc-coverage-middle {
background: linear-gradient(to bottom, #dab226 0%,#c9a179 100%);
}
.esdoc-coverage-high {
background: linear-gradient(to bottom, #4fc921 0%,#3eb810 100%);
}
.github-markdown .manual-toc {
padding-left: 0;
}
/** manual */
.manual-root .navigation {
padding-left: 0;
}
.navigation .manual-toc-title {
margin: 0;
padding: 0.5em 0 0.5em 1em;
border: none;
font-size: 1em;
font-weight: normal;
}
.navigation .manual-toc-title:first-child {
margin-top: 0;
}
.navigation .manual-toc {
display: none;
margin-left: 0.5em;
margin-top: -0.25em;
}
.github-markdown .manual-toc-title a {
color: inherit;
}
.manual-breadcrumb-list {
font-size: 0.8em;
margin-bottom: 1em;
}
.manual-toc-title a:hover {
color: #039BE5;
}
.manual-toc li {
margin: 0.75em 0;
list-style-type: none;
}
.manual-toc .indent-h1 {
margin-left: 0;
}
.manual-toc .indent-h2 {
margin-left: 1em;
}
.manual-toc .indent-h3 {
margin-left: 3em;
}
.manual-toc .indent-h4 {
margin-left: 4em;
}
.manual-toc .indent-h5 {
margin-left: 5em;
}
.manual-nav li {
margin: 0.75em 0;
}
.manual-dot {
margin-left: 0.75em;
width: 0.6em;
height: 0.6em;
display: inline-block;
border-radius: 0.3em;
margin-right: 0.3em;
background-color: #bfe5bf;
}
/* github markdown */
.github-markdown {
font-size: 16px;
}
.github-markdown h1,
.github-markdown h2,
.github-markdown h3,
.github-markdown h4,
.github-markdown h5 {
margin-top: 1em;
margin-bottom: 16px;
font-weight: bold;
padding: 0;
}
.github-markdown h1:nth-of-type(1) {
margin-top: 0;
}
.github-markdown h1 {
font-size: 2em;
padding-bottom: 0.3em;
}
.github-markdown h2 {
font-size: 1.75em;
padding-bottom: 0.3em;
}
.github-markdown h3 {
font-size: 1.5em;
background-color: transparent;
}
.github-markdown h4 {
font-size: 1.25em;
}
.github-markdown h5 {
font-size: 1em;
}
.github-markdown ul, .github-markdown ol {
padding-left: 2em;
}
.github-markdown pre > code {
font-size: 0.85em;
}
.github-markdown table {
margin-bottom: 1em;
border-collapse: collapse;
border-spacing: 0;
}
.github-markdown table tr {
background-color: #fff;
border-top: 1px solid #ccc;
}
.github-markdown table th,
.github-markdown table td {
padding: 6px 13px;
border: 1px solid #ddd;
}
.github-markdown table tr:nth-child(2n) {
background-color: #f8f8f8;
}
/** badge(.svg) does not have border */
.github-markdown img:not([src*=".svg"]) {
max-width: 100%;
box-shadow: 1px 1px 1px rgba(0,0,0,0.5);
}

dump.json (2620 lines changed)

File diff suppressed because one or more lines are too long

errors/arsenalErrors.json (new file, 715 lines)

@@ -0,0 +1,715 @@
{
"_comment": "------------------- Amazon errors ------------------",
"AccessDenied": {
"code": 403,
"description": "Access Denied"
},
"AccessForbidden": {
"code": 403,
"description": "Access Forbidden"
},
"AccountProblem": {
"code": 403,
"description": "There is a problem with your AWS account that prevents the operation from completing successfully. Please use Contact Us."
},
"AmbiguousGrantByEmailAddress": {
"code": 400,
"description": "The email address you provided is associated with more than one account."
},
"BadDigest": {
"code": 400,
"description": "The Content-MD5 you specified did not match what we received."
},
"BucketAlreadyExists": {
"code": 409,
"description": "The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again."
},
"BucketAlreadyOwnedByYou": {
"code": 409,
"description": "Your previous request to create the named bucket succeeded and you already own it. You get this error in all AWS regions except US Standard, us-east-1. In us-east-1 region, you will get 200 OK, but it is no-op (if bucket exists S3 will not do anything)."
},
"BucketNotEmpty": {
"code": 409,
"description": "The bucket you tried to delete is not empty."
},
"CredentialsNotSupported": {
"code": 400,
"description": "This request does not support credentials."
},
"CrossLocationLoggingProhibited": {
"code": 403,
"description": "Cross-location logging not allowed. Buckets in one geographic location cannot log information to a bucket in another location."
},
"DeleteConflict": {
"code": 409,
"description": "The request was rejected because it attempted to delete a resource that has attached subordinate entities. The error message describes these entities."
},
"EntityTooSmall": {
"code": 400,
"description": "Your proposed upload is smaller than the minimum allowed object size."
},
"EntityTooLarge": {
"code": 400,
"description": "Your proposed upload exceeds the maximum allowed object size."
},
"ExpiredToken": {
"code": 400,
"description": "The provided token has expired."
},
"IllegalVersioningConfigurationException": {
"code": 400,
"description": "Indicates that the versioning configuration specified in the request is invalid."
},
"IncompleteBody": {
"code": 400,
"description": "You did not provide the number of bytes specified by the Content-Length HTTP header."
},
"IncorrectNumberOfFilesInPostRequest": {
"code": 400,
"description": "POST requires exactly one file upload per request."
},
"InlineDataTooLarge": {
"code": 400,
"description": "Inline data exceeds the maximum allowed size."
},
"InternalError": {
"code": 500,
"description": "We encountered an internal error. Please try again."
},
"InvalidAccessKeyId": {
"code": 403,
"description": "The AWS access key Id you provided does not exist in our records."
},
"InvalidAddressingHeader": {
"code": 400,
"description": "You must specify the Anonymous role."
},
"InvalidArgument": {
"code": 400,
"description": "Invalid Argument"
},
"InvalidBucketName": {
"code": 400,
"description": "The specified bucket is not valid."
},
"InvalidBucketState": {
"code": 409,
"description": "The request is not valid with the current state of the bucket."
},
"InvalidDigest": {
"code": 400,
"description": "The Content-MD5 you specified is not valid."
},
"InvalidEncryptionAlgorithmError": {
"code": 400,
"description": "The encryption request you specified is not valid. The valid value is AES256."
},
"InvalidLocationConstraint": {
"code": 400,
"description": "The specified location constraint is not valid."
},
"InvalidObjectState": {
"code": 403,
"description": "The operation is not valid for the current state of the object."
},
"InvalidPart": {
"code": 400,
"description": "One or more of the specified parts could not be found. The part might not have been uploaded, or the specified entity tag might not have matched the part's entity tag."
},
"InvalidPartOrder": {
"code": 400,
"description": "The list of parts was not in ascending order. Parts must be specified in order by part number."
},
"InvalidPartNumber": {
"code": 416,
"description": "The requested part number is not satisfiable."
},
"InvalidPayer": {
"code": 403,
"description": "All access to this object has been disabled."
},
"InvalidPolicyDocument": {
"code": 400,
"description": "The content of the form does not meet the conditions specified in the policy document."
},
"InvalidRange": {
"code": 416,
"description": "The requested range cannot be satisfied."
},
"InvalidRedirectLocation": {
"code": 400,
"description": "The website redirect location must have a prefix of 'http://' or 'https://' or '/'."
},
"InvalidRequest": {
"code": 400,
"description": "SOAP requests must be made over an HTTPS connection."
},
"InvalidSecurity": {
"code": 403,
"description": "The provided security credentials are not valid."
},
"InvalidSOAPRequest": {
"code": 400,
"description": "The SOAP request body is invalid."
},
"InvalidStorageClass": {
"code": 400,
"description": "The storage class you specified is not valid."
},
"InvalidTag": {
"code": 400,
"description": "The Tag you have provided is invalid"
},
"InvalidTargetBucketForLogging": {
"code": 400,
"description": "The target bucket for logging does not exist, is not owned by you, or does not have the appropriate grants for the log-delivery group."
},
"InvalidToken": {
"code": 400,
"description": "The provided token is malformed or otherwise invalid."
},
"InvalidURI": {
"code": 400,
"description": "Couldn't parse the specified URI."
},
"KeyTooLong": {
"code": 400,
"description": "Your key is too long."
},
"LimitExceeded": {
"code": 409,
"description": " The request was rejected because it attempted to create resources beyond the current AWS account limits. The error message describes the limit exceeded."
},
"MalformedACLError": {
"code": 400,
"description": "The XML you provided was not well-formed or did not validate against our published schema."
},
"MalformedPOSTRequest": {
"code": 400,
"description": "The body of your POST request is not well-formed multipart/form-data."
},
"MalformedXML": {
"code": 400,
"description": "The XML you provided was not well-formed or did not validate against our published schema."
},
"MaxMessageLengthExceeded": {
"code": 400,
"description": "Your request was too big."
},
"MaxPostPreDataLengthExceededError": {
"code": 400,
"description": "Your POST request fields preceding the upload file were too large."
},
"MetadataTooLarge": {
"code": 400,
"description": "Your metadata headers exceed the maximum allowed metadata size."
},
"MethodNotAllowed": {
"code": 405,
"description": "The specified method is not allowed against this resource."
},
"MissingAttachment": {
"code": 400,
"description": "A SOAP attachment was expected, but none were found."
},
"MissingContentLength": {
"code": 411,
"description": "You must provide the Content-Length HTTP header."
},
"MissingRequestBodyError": {
"code": 400,
"description": "Request body is empty"
},
"MissingSecurityElement": {
"code": 400,
"description": "The SOAP 1.1 request is missing a security element."
},
"MissingSecurityHeader": {
"code": 400,
"description": "Your request is missing a required header."
},
"NoLoggingStatusForKey": {
"code": 400,
"description": "There is no such thing as a logging status subresource for a key."
},
"NoSuchBucket": {
"code": 404,
"description": "The specified bucket does not exist."
},
"NoSuchCORSConfiguration": {
"code": 404,
"description": "The CORS configuration does not exist"
},
"NoSuchKey": {
"code": 404,
"description": "The specified key does not exist."
},
"NoSuchLifecycleConfiguration": {
"code": 404,
"description": "The lifecycle configuration does not exist."
},
"NoSuchWebsiteConfiguration": {
"code": 404,
"description": "The specified bucket does not have a website configuration"
},
"NoSuchUpload": {
"code": 404,
"description": "The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed."
},
"NoSuchVersion": {
"code": 404,
"description": "Indicates that the version ID specified in the request does not match an existing version."
},
"ReplicationConfigurationNotFoundError": {
"code": 404,
"description": "The replication configuration was not found"
},
"NotImplemented": {
"code": 501,
"description": "A header you provided implies functionality that is not implemented."
},
"NotModified": {
"code": 304,
"description": "Not Modified."
},
"NotSignedUp": {
"code": 403,
"description": "Your account is not signed up for the S3 service. You must sign up before you can use S3. "
},
"NoSuchBucketPolicy": {
"code": 404,
"description": "The specified bucket does not have a bucket policy."
},
"OperationAborted": {
"code": 409,
"description": "A conflicting conditional operation is currently in progress against this resource. Try again."
},
"PermanentRedirect": {
"code": 301,
"description": "The bucket you are attempting to access must be addressed using the specified endpoint. Send all future requests to this endpoint."
},
"PreconditionFailed": {
"code": 412,
"description": "At least one of the preconditions you specified did not hold."
},
"Redirect": {
"code": 307,
"description": "Temporary redirect."
},
"RestoreAlreadyInProgress": {
"code": 409,
"description": "Object restore is already in progress."
},
"RequestIsNotMultiPartContent": {
"code": 400,
"description": "Bucket POST must be of the enclosure-type multipart/form-data."
},
"RequestTimeout": {
"code": 400,
"description": "Your socket connection to the server was not read from or written to within the timeout period."
},
"RequestTimeTooSkewed": {
"code": 403,
"description": "The difference between the request time and the server's time is too large."
},
"RequestTorrentOfBucketError": {
"code": 400,
"description": "Requesting the torrent file of a bucket is not permitted."
},
"SignatureDoesNotMatch": {
"code": 403,
"description": "The request signature we calculated does not match the signature you provided."
},
"_comment" : {
"note" : "This is an AWS S3 specific error. We are opting to use the more general 'ServiceUnavailable' error used throughout AWS (IAM/EC2) to have uniformity of error messages even though we are potentially compromising S3 compatibility.",
"ServiceUnavailable": {
"code": 503,
"description": "Reduce your request rate."
}
},
"ServiceUnavailable": {
"code": 503,
"description": "The request has failed due to a temporary failure of the server."
},
"SlowDown": {
"code": 503,
"description": "Reduce your request rate."
},
"TemporaryRedirect": {
"code": 307,
"description": "You are being redirected to the bucket while DNS updates."
},
"TokenRefreshRequired": {
"code": 400,
"description": "The provided token must be refreshed."
},
"TooManyBuckets": {
"code": 400,
"description": "You have attempted to create more buckets than allowed."
},
"TooManyParts": {
"code": 400,
"description": "You have attempted to upload more parts than allowed."
},
"UnexpectedContent": {
"code": 400,
"description": "This request does not support content."
},
"UnresolvableGrantByEmailAddress": {
"code": 400,
"description": "The email address you provided does not match any account on record."
},
"UserKeyMustBeSpecified": {
"code": 400,
"description": "The bucket POST must contain the specified field name. If it is specified, check the order of the fields."
},
"NoSuchEntity": {
"code": 404,
"description": "The request was rejected because it referenced an entity that does not exist. The error message describes the entity."
},
"WrongFormat": {
"code": 400,
"description": "Data entered by the user has a wrong format."
},
"Forbidden": {
"code": 403,
"description": "Authentication failed."
},
"EntityDoesNotExist": {
"code": 404,
"description": "Not found."
},
"EntityAlreadyExists": {
"code": 409,
"description": "The request was rejected because it attempted to create a resource that already exists."
},
"ServiceFailure": {
"code": 500,
"description": "Server error: the request processing has failed because of an unknown error, exception or failure."
},
"IncompleteSignature": {
"code": 400,
"description": "The request signature does not conform to AWS standards."
},
"InternalFailure": {
"code": 500,
"description": "The request processing has failed because of an unknown error, exception or failure."
},
"InvalidAction": {
"code": 400,
"description": "The action or operation requested is invalid. Verify that the action is typed correctly."
},
"InvalidClientTokenId": {
"code": 403,
"description": "The X.509 certificate or AWS access key ID provided does not exist in our records."
},
"InvalidParameterCombination": {
"code": 400,
"description": "Parameters that must not be used together were used together."
},
"InvalidParameterValue": {
"code": 400,
"description": "An invalid or out-of-range value was supplied for the input parameter."
},
"InvalidQueryParameter": {
"code": 400,
"description": "The AWS query string is malformed or does not adhere to AWS standards."
},
"MalformedQueryString": {
"code": 404,
"description": "The query string contains a syntax error."
},
"MissingAction": {
"code": 400,
"description": "The request is missing an action or a required parameter."
},
"MissingAuthenticationToken": {
"code": 403,
"description": "The request must contain either a valid (registered) AWS access key ID or X.509 certificate."
},
"MissingParameter": {
"code": 400,
"description": "A required parameter for the specified action is not supplied."
},
"OptInRequired": {
"code": 403,
"description": "The AWS access key ID needs a subscription for the service."
},
"RequestExpired": {
"code": 400,
"description": "The request reached the service more than 15 minutes after the date stamp on the request or more than 15 minutes after the request expiration date (such as for pre-signed URLs), or the date stamp on the request is more than 15 minutes in the future."
},
"Throttling": {
"code": 400,
"description": "The request was denied due to request throttling."
},
"AccountNotFound": {
"code": 404,
"description": "No account was found in Vault, please contact your system administrator."
},
"ValidationError": {
"code": 400,
"description": "The specified value is invalid."
},
"MalformedPolicyDocument": {
"code": 400,
"description": "Syntax errors in policy."
},
"InvalidInput": {
"code": 400,
"description": "The request was rejected because an invalid or out-of-range value was supplied for an input parameter."
},
"_comment": "-------------- Special non-AWS S3 errors --------------",
"MPUinProgress": {
"code": 409,
"description": "The bucket you tried to delete has an ongoing multipart upload."
},
"_comment": "-------------- Internal project errors --------------",
"_comment": "----------------------- Vault -----------------------",
"_comment": "#### formatErrors ####",
"BadName": {
"description": "name not ok",
"code": 5001
},
"BadAccount": {
"description": "account not ok",
"code": 5002
},
"BadGroup": {
"description": "group not ok",
"code": 5003
},
"BadId": {
"description": "id not ok",
"code": 5004
},
"BadAccountName": {
"description": "accountName not ok",
"code": 5005
},
"BadNameFriendly": {
"description": "nameFriendly not ok",
"code": 5006
},
"BadEmailAddress": {
"description": "email address not ok",
"code": 5007
},
"BadPath": {
"description": "path not ok",
"code": 5008
},
"BadArn": {
"description": "arn not ok",
"code": 5009
},
"BadCreateDate": {
"description": "createDate not ok",
"code": 5010
},
"BadLastUsedDate": {
"description": "lastUsedDate not ok",
"code": 5011
},
"BadNotBefore": {
"description": "notBefore not ok",
"code": 5012
},
"BadNotAfter": {
"description": "notAfter not ok",
"code": 5013
},
"BadSaltedPwd": {
"description": "salted password not ok",
"code": 5014
},
"ok": {
"description": "No error",
"code": 200
},
"BadUser": {
"description": "user not ok",
"code": 5016
},
"BadSaltedPasswd": {
"description": "salted password not ok",
"code": 5017
},
"BadPasswdDate": {
"description": "password date not ok",
"code": 5018
},
"BadCanonicalId": {
"description": "canonicalId not ok",
"code": 5019
},
"BadAlias": {
"description": "alias not ok",
"code": 5020
},
"_comment": "#### internalErrors ####",
"DBPutFailed": {
"description": "DB put failed",
"code": 5021
},
"_comment": "#### alreadyExistErrors ####",
"AccountEmailAlreadyUsed": {
"description": "an other account already uses that email",
"code": 5022
},
"AccountNameAlreadyUsed": {
"description": "an other account already uses that name",
"code": 5023
},
"UserEmailAlreadyUsed": {
"description": "an other user already uses that email",
"code": 5024
},
"UserNameAlreadyUsed": {
"description": "an other user already uses that name",
"code": 5025
},
"_comment": "#### doesntExistErrors ####",
"NoParentAccount": {
"description": "parent account does not exist",
"code": 5026
},
"_comment": "#### authErrors ####",
"BadStringToSign": {
"description": "stringToSign not ok'",
"code": 5027
},
"BadSignatureFromRequest": {
"description": "signatureFromRequest not ok",
"code": 5028
},
"BadAlgorithm": {
"description": "hashAlgorithm not ok",
"code": 5029
},
"SecretKeyDoesNotExist": {
"description": "secret key does not exist",
"code": 5030
},
"InvalidRegion": {
"description": "Region was not provided or is not recognized by the system",
"code": 5031
},
"ScopeDate": {
"description": "scope date is missing, or format is invalid",
"code": 5032
},
"BadAccessKey": {
"description": "access key not ok",
"code": 5033
},
"NoDict": {
"description": "no dictionary of params provided for signature verification",
"code": 5034
},
"BadSecretKey": {
"description": "secretKey not ok",
"code": 5035
},
"BadSecretKeyValue": {
"description": "secretKey value not ok",
"code": 5036
},
"BadSecretKeyStatus": {
"description": "secretKey status not ok",
"code": 5037
},
"_comment": "#### OidcpErrors ####",
"BadUrl": {
"description": "url not ok",
"code": 5038
},
"BadClientIdList": {
"description": "client id list not ok'",
"code": 5039
},
"BadThumbprintList": {
"description": "thumbprint list not ok'",
"code": 5040
},
"BadObject": {
"description": "Object not ok'",
"code": 5041
},
"_comment": "#### RoleErrors ####",
"BadRole": {
"description": "role not ok",
"code": 5042
},
"_comment": "#### SamlpErrors ####",
"BadSamlp": {
"description": "samlp not ok",
"code": 5043
},
"BadMetadataDocument": {
"description": "metadata document not ok",
"code": 5044
},
"BadSessionIndex": {
"description": "session index not ok",
"code": 5045
},
"Unauthorized": {
"description": "not authenticated",
"code": 401
},
"_comment": "--------------------- MetaData ---------------------",
"_comment": "#### formatErrors ####",
"CacheUpdated": {
"description": "The cache has been updated",
"code": 500
},
"DBNotFound": {
"description": "This DB does not exist",
"code": 404
},
"DBAlreadyExists": {
"description": "This DB already exist",
"code": 409
},
"ObjNotFound": {
"description": "This object does not exist",
"code": 404
},
"PermissionDenied": {
"description": "Permission denied",
"code": 403
},
"BadRequest": {
"description": "BadRequest",
"code": 400
},
"RaftSessionNotLeader": {
"description": "NotLeader",
"code": 500
},
"RaftSessionLeaderNotConnected": {
"description": "RaftSessionLeaderNotConnected",
"code": 400
},
"NoLeaderForDB": {
"description": "NoLeaderForDB",
"code": 400
},
"RouteNotFound": {
"description": "RouteNotFound",
"code": 404
},
"NoMapsInConfig": {
"description": "NoMapsInConfig",
"code": 404
},
"DBAPINotReady": {
"message": "DBAPINotReady",
"code": 500
},
"NotEnoughMapsInConfig:": {
"description": "NotEnoughMapsInConfig",
"code": 400
}
}
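This table is the raw material for Arsenal's `errors` module. A minimal sketch of how such JSON entries can be turned into `Error` instances — the `ArsenalError` class and the `rawErrors` subset below are illustrative assumptions for the demo, not the library's exact internals:

```javascript
// Illustrative sketch: turn JSON error entries (shape as above) into
// Error instances. ArsenalError and rawErrors are assumptions here,
// not the library's actual implementation.
const rawErrors = {
    NoSuchVersion: {
        code: 404,
        description: 'Indicates that the version ID specified in the ' +
            'request does not match an existing version.',
    },
    ServiceUnavailable: {
        code: 503,
        description: 'The request has failed due to a temporary ' +
            'failure of the server.',
    },
    _comment: 'skipped by the loader',
};

class ArsenalError extends Error {
    constructor(name, code, description) {
        super(name);
        this.code = code;               // HTTP status or internal code
        this.description = description;
        this[name] = true;              // enables `if (err.NoSuchVersion)`
    }
}

const errors = {};
Object.keys(rawErrors)
    .filter(name => !name.startsWith('_'))  // drop "_comment" entries
    .forEach(name => {
        const { code, description } = rawErrors[name];
        errors[name] = new ArsenalError(name, code, description);
    });
```

Callers can then branch either on `err.code` or on the boolean flag named after the error.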


@@ -1,741 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<base data-ice="baseUrl" href="../../">
<title data-ice="title">kinetic/Kinetic.js | API Document</title>
<link type="text/css" rel="stylesheet" href="css/style.css">
<link type="text/css" rel="stylesheet" href="css/prettify-tomorrow.css">
<script src="script/prettify/prettify.js"></script>
<script src="script/manual.js"></script>
</head>
<body class="layout-container" data-ice="rootContainer">
<header>
<a href="./">Home</a>
<a href="identifiers.html">Reference</a>
<a href="source.html">Source</a>
<a data-ice="repoURL" href="git+https://github.com/scality/IronMan-Arsenal.git">Repository</a>
<div class="search-box">
<span>
<img src="./image/search.png">
<span class="search-input-edge"></span><input class="search-input"><span class="search-input-edge"></span>
</span>
<ul class="search-result"></ul>
</div>
</header>
<nav class="navigation" data-ice="nav"><div>
<ul>
<li data-ice="doc"><span data-ice="kind" class="kind-class">C</span><span data-ice="name"><span><a href="class/kinetic/Kinetic.js~Kinetic.html">Kinetic</a></span></span></li>
<li data-ice="doc"><span data-ice="kind" class="kind-variable">V</span><span data-ice="name"><span><a href="variable/index.html#static-variable-kinetic">kinetic</a></span></span></li>
</ul>
</div>
</nav>
<div class="content" data-ice="content"><h1 data-ice="title">kinetic/Kinetic.js</h1>
<pre class="source-code line-number raw-source-code"><code class="prettyprint linenums" data-ice="content">import protobuf from &apos;protobufjs&apos;;
import crypto from &apos;crypto&apos;;
const VERSION = 0x46;
const protoFilePath = __dirname + &apos;/kinetic.proto&apos;;
const buildName = &apos;com.seagate.kinetic.proto&apos;;
/**
* Represents the Kinetic Protocol Data Structure.
* @constructor
*/
class Kinetic {
constructor() {
this._version = VERSION;
this.logs = {
UTILIZATIONS: 0,
TEMPERATURES: 1,
CAPACITIES: 2,
CONFIGURATION: 3,
STATISTICS: 4,
MESSAGES: 5,
LIMITS: 6,
DEVICE: 7,
};
this.op = {
PUT: 4,
PUT_RESPONSE: 3,
GET: 2,
GET_RESPONSE: 1,
NOOP: 30,
NOOP_RESPONSE: 29,
DELETE: 6,
DELETE_RESPONSE: 5,
SET_CLUSTER_VERSION: 22,
SETUP_RESPONSE: 21,
FLUSH: 32,
FLUSH_RESPONSE: 31,
GETLOG: 24,
GETLOG_RESPONSE: 23,
};
this.errors = {
INVALID_STATUS_CODE: -1,
NOT_ATTEMPTED: 0,
SUCCESS: 1,
HMAC_FAILURE: 2,
NOT_AUTHORIZED: 3,
VERSION_FAILURE: 4,
INTERNAL_ERROR: 5,
HEADER_REQUIRED: 6,
NOT_FOUND: 7,
VERSION_MISMATCH: 8,
SERVICE_BUSY: 9,
EXPIRED: 10,
DATA_ERROR: 11,
PERM_DATA_ERROR: 12,
REMOTE_CONNECTION_ERROR: 13,
NO_SPACE: 14,
NO_SUCH_HMAC_ALGORITHM: 15,
INVALID_REQUEST: 16,
NESTED_OPERATION_ERRORS: 17,
DEVICE_LOCKED: 18,
DEVICE_ALREADY_UNLOCKED: 19,
CONNECTION_TERMINATED: 20,
INVALID_BATCH: 21,
};
this.build = protobuf.loadProtoFile(protoFilePath).build(buildName);
return this;
}
/**
* Slice the buffer with the offset and the limit.
* @param {Object} obj - an object buffer with offset and limit.
* @returns {Buffer} sliced buffer from the buffer structure with the offset
* and the limit.
*/
getSlice(obj) {
return obj.buffer.slice(obj.offset, obj.limit);
}
/**
* Sets the actual protobuf message for the Kinetic Protocol Data Unit.
* @param {Object} pbMessage - the well-formatted kinetic protobuf structure.
* @returns {Kinetic} to allow for a functional style.
*/
setProtobuf(pbMessage) {
this._message = pbMessage;
return this;
}
/**
* Sets the chunk for the Kinetic Protocol Data Unit.
* @param {Buffer} chunk - the data chunk.
* @returns {Kinetic} to allow for a functional style.
*/
setChunk(chunk) {
this._chunk = chunk;
return this;
}
/**
* Sets the general protobuf message for the Kinetic Protocol Data Unit.
* @param {Object} command - the well-formatted general kinetic protobuf
* structure.
* @returns {Kinetic} setting the protobuf message.
*/
setCommand(command) {
const message = new this.build.Command(command);
return this.setProtobuf(message);
}
/**
* Sets the HMAC for the Kinetic Protocol Data Unit integrity.
* Note: the shared secret is currently hardcoded in this method.
* @returns {Kinetic} to allow for a functional style.
*/
setHMAC() {
this._hmac = crypto.createHmac(&apos;sha1&apos;, &apos;asdfasdf&apos;)
.update(this.getProtobuf().toBuffer()).digest();
return this;
}
/**
* Gets the actual version of the kinetic protocol.
* @returns {Number} the current version of the kinetic protocol.
*/
getVersion() {
return this._version;
}
/**
* Gets the actual protobuf message.
* @returns {Object} Kinetic protobuf message.
*/
getProtobuf() {
return this._message;
}
/**
* Gets the actual protobuf message size.
* @returns {Number} Size of the kinetic protobuf message.
*/
getProtobufSize() {
return this.getProtobuf().calculate();
}
/**
* Gets the actual chunk.
* @returns {Buffer} Chunk.
*/
getChunk() {
return this._chunk;
}
/**
* Gets the actual chunk size.
* @returns {Number} Chunk size.
*/
getChunkSize() {
return this._chunk.length;
}
/**
* Gets the general build template.
* @returns {Object} General kinetic protobuf structure.
*/
getCommand() {
return this.build.Command;
}
/**
* Gets the actual HMAC.
* @returns {Buffer} HMAC.
*/
getHMAC() {
return this._hmac;
}
/**
* Gets the actual request messageType.
* @returns {Number} The code number of the request.
*/
getMessageType() {
return this.getProtobuf().header.messageType;
}
/**
* Gets the actual key.
* @returns {Buffer} Key.
*/
getKey() {
return this.getSlice(this.getProtobuf().body.keyValue.key);
}
/**
* Gets the version of the data unit in the database.
* @returns {Buffer} Version of the data unit in the database.
*/
getDbVersion() {
return this.getSlice(this.getProtobuf().body.keyValue.dbVersion);
}
/**
* Gets the new version of the data unit.
* @returns {Buffer} New version of the data unit.
*/
getNewVersion() {
return this.getSlice(this.getProtobuf().body.keyValue.newVersion);
}
/**
* Gets the detailed error message.
* @returns {Buffer} Detailed error message.
*/
getErrorMessage() {
return this.getSlice(this.getProtobuf().status.detailedMessage);
}
/**
* Gets the logs message.
* @returns {Buffer} Logs message.
*/
getGetLogMessage() {
return this.getSlice(this.getProtobuf().body.getLog.messages);
}
/**
* Gets the operation name from its code.
* @param {Number} opCode - the operation code.
* @returns {String} operation name.
*/
getOp(opCode) {
return this.getKeyByValue(this.op, opCode);
}
/**
* Gets the error name from its code.
* @param {Number} errorCode - the error code.
* @returns {String} error name.
*/
getError(errorCode) {
return this.getKeyByValue(this.errors, errorCode);
}
/**
* Gets the log type name from its code.
* @param {Number} logCode - the log type code.
* @returns {String} log type name.
*/
getLogType(logCode) {
return this.getKeyByValue(this.logs, logCode);
}
/**
* Gets the key of an object from its value.
* @param {Object} object - the corresponding object.
* @param {*} value - the corresponding value.
* @returns {String} object key.
*/
getKeyByValue(object, value) {
return Object.keys(object).find(key =&gt; object[key] === value);
}
/**
* Compare two buffers.
* @param {Buffer} buf0 - the first buffer to compare.
* @param {Buffer} buf1 - the second buffer to compare.
* @returns {Boolean} true if the buffers are equal, false otherwise.
*/
diff(buf0, buf1) {
if (buf0.length !== buf1.length) {
return false;
}
for (let i = 0; i &lt; buf0.length; i++) {
if (buf0[i] !== buf1[i])
return false;
}
return true;
}
/**
* Test the HMAC integrity between the actual instance and the given HMAC.
* @param {Buffer} hmac - the non instance hmac to compare.
* @returns {Boolean} true if the HMACs are the same;
* returns errors.HMAC_FAILURE if they differ or are missing.
*/
hmacIntegrity(hmac) {
if (hmac === undefined || this.getHMAC() === undefined)
return this.errors.HMAC_FAILURE;
if (this.diff(hmac, this.getHMAC()) === false)
return this.errors.HMAC_FAILURE;
return true;
}
/**
* Getting logs and stats request following the kinetic protocol.
* @param {number} incrementTCP - monotonically increasing number for each
* request in a TCP connection.
* @param {Array} types - array of the requested log types.
* @param {number} clusterVersion - version of the cluster
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
getLog(incrementTCP, types, clusterVersion) {
const identity = (new Date).getTime();
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;GETLOG&quot;,
&quot;connectionID&quot;: identity,
&quot;sequence&quot;: incrementTCP,
&quot;clusterVersion&quot;: clusterVersion,
},
&quot;body&quot;: {
&quot;getLog&quot;: {
&quot;types&quot;: types,
},
},
});
}
/**
* Getting logs and stats response following the kinetic protocol.
* @param {(String|number)} response - response code (SUCCESS, FAIL)
* @param {(String|Buffer)} errorMessage - detailed error message.
* @param {object} responseLogs - object containing the requested logs.
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
getLogResponse(response, errorMessage, responseLogs) {
return this.setCommand({
&quot;header&quot;: {
&quot;ackSequence&quot;: this.getProtobuf().header.sequence,
&quot;messageType&quot;: &quot;GETLOG_RESPONSE&quot;,
},
&quot;body&quot;: {
&quot;getLog&quot;: responseLogs,
},
&quot;status&quot;: {
&quot;code&quot;: response,
&quot;detailedMessage&quot;: errorMessage,
},
});
}
/**
* Flush all data request following the kinetic protocol.
* @param {number} incrementTCP - monotonically increasing number for each
* request in a TCP connection.
* @param {number} clusterVersion - version of the cluster
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
flush(incrementTCP, clusterVersion) {
const identity = (new Date).getTime();
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;FLUSHALLDATA&quot;,
&quot;connectionID&quot;: identity,
&quot;sequence&quot;: incrementTCP,
&quot;clusterVersion&quot;: clusterVersion,
},
&quot;body&quot;: { },
});
}
/**
* Flush all data response following the kinetic protocol.
* @param {(String|number)} response - response code (SUCCESS, FAIL)
* @param {(String|Buffer)} errorMessage - detailed error message.
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
flushResponse(response, errorMessage) {
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;FLUSHALLDATA_RESPONSE&quot;,
&quot;ackSequence&quot;: this.getProtobuf().header.sequence,
},
&quot;status&quot;: {
&quot;code&quot;: response,
&quot;detailedMessage&quot;: errorMessage,
},
});
}
/**
* set clusterVersion request following the kinetic protocol.
* @param {number} incrementTCP - monotonically increasing number for each
* request in a TCP connection.
* @param {number} clusterVersion - The version number of this cluster
* definition
* @param {number} oldClusterVersion - The old version number of this
* cluster definition
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
setClusterVersion(incrementTCP, clusterVersion, oldClusterVersion) {
const identity = (new Date).getTime();
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;SETUP&quot;,
&quot;connectionID&quot;: identity,
&quot;sequence&quot;: incrementTCP,
&quot;clusterVersion&quot;: oldClusterVersion,
},
&quot;body&quot;: {
&quot;setup&quot;: {
&quot;newClusterVersion&quot;: clusterVersion,
},
},
});
}
/**
* Setup response request following the kinetic protocol.
* @param {(String|number)} response - response code (SUCCESS, FAIL)
* @param {(String|Buffer)} errorMessage - detailed error message.
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
setupResponse(response, errorMessage) {
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;SETUP_RESPONSE&quot;,
&quot;ackSequence&quot;: this.getProtobuf().header.sequence,
},
&quot;status&quot;: {
&quot;code&quot;: response,
&quot;detailedMessage&quot;: errorMessage,
},
});
}
/**
* NOOP request following the kinetic protocol.
* @param {number} incrementTCP - monotonically increasing number for each
* request in a TCP connection
* @param {number} clusterVersion - The version number of this cluster
* definition
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
noOp(incrementTCP, clusterVersion) {
const identity = (new Date).getTime();
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;NOOP&quot;,
&quot;connectionID&quot;: identity,
&quot;sequence&quot;: incrementTCP,
&quot;clusterVersion&quot;: clusterVersion,
},
});
}
/**
* Response for the NOOP request following the kinetic protocol.
* @param {(String|number)} response - response code (SUCCESS, FAIL)
* @param {(String|Buffer)} errorMessage - detailed error message.
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
noOpResponse(response, errorMessage) {
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;NOOP_RESPONSE&quot;,
&quot;ackSequence&quot;: this.getProtobuf().header.sequence,
},
&quot;status&quot;: {
&quot;code&quot;: response,
&quot;detailedMessage&quot;: errorMessage,
},
});
}
/**
* PUT request following the kinetic protocol.
* @param {(String|Buffer)} key - key of the item to put.
* @param {number} incrementTCP - monotonically increasing number for each
* request in a TCP connection
* @param {(String|Buffer)} dbVersion - version of the item in the
* database.
* @param {(String|Buffer)} newVersion - new version of the item to put.
* @param {number} clusterVersion - The version number of this cluster
* definition
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
put(key, incrementTCP, dbVersion, newVersion, clusterVersion) {
const identity = (new Date).getTime();
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;PUT&quot;,
&quot;connectionID&quot;: identity,
&quot;sequence&quot;: incrementTCP,
&quot;clusterVersion&quot;: clusterVersion,
},
&quot;body&quot;: {
&quot;keyValue&quot;: {
&quot;key&quot;: key,
&quot;newVersion&quot;: newVersion,
&quot;dbVersion&quot;: dbVersion,
},
},
});
}
/**
* Response for the PUT request following the kinetic protocol.
* @param {(String|number)} response - response code (SUCCESS, FAIL)
* @param {(String|Buffer)} errorMessage - detailed error message.
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
putResponse(response, errorMessage) {
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;PUT_RESPONSE&quot;,
&quot;ackSequence&quot;: this.getProtobuf().header.sequence,
},
&quot;body&quot;: {
&quot;keyValue&quot;: { },
},
&quot;status&quot;: {
&quot;code&quot;: response,
&quot;detailedMessage&quot;: errorMessage,
},
});
}
/**
* GET request following the kinetic protocol.
* @param {(String|Buffer)} key - key of the item to get.
* @param {number} incrementTCP - monotonically increasing number for each
* request in a TCP connection
* @param {number} clusterVersion - The version number of this cluster
* definition
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
get(key, incrementTCP, clusterVersion) {
const identity = (new Date).getTime();
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;GET&quot;,
&quot;connectionID&quot;: identity,
&quot;sequence&quot;: incrementTCP,
&quot;clusterVersion&quot;: clusterVersion,
},
&quot;body&quot;: {
&quot;keyValue&quot;: {
&quot;key&quot;: key,
},
},
});
}
/**
* Response for the GET request following the kinetic protocol.
* @param {(String|number)} response - response code (SUCCESS, FAIL)
* @param {(String|Buffer)} errorMessage - Detailed error message.
* @param {(String|Buffer)} dbVersion - The version of the item in the
* database.
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
getResponse(response, errorMessage, dbVersion) {
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;GET_RESPONSE&quot;,
&quot;ackSequence&quot;: this.getProtobuf().header.sequence,
},
&quot;body&quot;: {
&quot;keyValue&quot;: {
&quot;key&quot;: this.getProtobuf().body.keyValue.key,
&quot;dbVersion&quot;: dbVersion,
},
},
&quot;status&quot;: {
&quot;code&quot;: response,
&quot;detailedMessage&quot;: errorMessage,
},
});
}
/**
* DELETE request following the kinetic protocol.
* @param {(String|Buffer)} key - key of the item to delete.
* @param {number} incrementTCP - monotonically increasing number for each
* request in a TCP connection
* @param {number} clusterVersion - The version number of this cluster
* definition
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
delete(key, incrementTCP, clusterVersion) {
const identity = (new Date).getTime();
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;DELETE&quot;,
&quot;connectionID&quot;: identity,
&quot;sequence&quot;: incrementTCP,
&quot;clusterVersion&quot;: clusterVersion,
},
&quot;body&quot;: {
&quot;keyValue&quot;: {
&quot;key&quot;: key,
},
},
});
}
/**
* Response for the DELETE request following the kinetic protocol.
* @param {(String|number)} response - response code (SUCCESS, FAIL)
* @param {(String|Buffer)} errorMessage - Detailed error message.
* @returns {Kinetic} this - message structure following the kinetic
* protocol
*/
deleteResponse(response, errorMessage) {
return this.setCommand({
&quot;header&quot;: {
&quot;messageType&quot;: &quot;DELETE_RESPONSE&quot;,
&quot;ackSequence&quot;: this.getProtobuf().header.sequence,
},
&quot;body&quot;: {
&quot;keyValue&quot;: { },
},
&quot;status&quot;: {
&quot;code&quot;: response,
&quot;detailedMessage&quot;: errorMessage,
},
});
}
/**
* Sends data following Kinetic protocol.
* @param {Socket} sock - Socket to send data through.
*/
send(sock) {
const buf = new Buffer(9);
buf.writeInt8(this.getVersion(), 0);
// BE stands for Big Endian
buf.writeInt32BE(this.getProtobufSize(), 1);
buf.writeInt32BE(this.getChunkSize(), 5);
sock.write(Buffer.concat(
[buf, this.getProtobuf().toBuffer(), this.getChunk()]
));
}
/**
* Creates the Kinetic Protocol Data Structure from a buffer.
* @param {Buffer} data - The data received by the socket.
*/
parse(data) {
const version = data.readInt8(0);
const pbMsgLen = data.readInt32BE(1);
const chunkLen = data.readInt32BE(5);
if (version !== this.getVersion()) {
return this.errors.VERSION_FAILURE;
}
try {
this.setProtobuf(
this.getCommand().decode(data.slice(9, pbMsgLen + 9))
);
this.setChunk(data.slice(pbMsgLen + 9, chunkLen + pbMsgLen + 9));
} catch (e) {
return e;
}
if (this.getChunkSize() !== chunkLen) {
return this.errors.DATA_ERROR;
}
return this.errors.SUCCESS;
}
}
export default Kinetic;
</code></pre>
</div>
<footer class="footer">
Generated by <a href="https://esdoc.org">ESDoc<span data-ice="esdocVersion">(0.4.1)</span></a>
</footer>
<script src="script/search_index.js"></script>
<script src="script/search.js"></script>
<script src="script/pretty-print.js"></script>
<script src="script/inherited-summary.js"></script>
<script src="script/test-summary.js"></script>
<script src="script/inner-link.js"></script>
<script src="script/patch-for-local.js"></script>
</body>
</html>
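The `send()` and `parse()` methods listed above share a 9-byte framing header: one version byte (0x46), then two big-endian 32-bit lengths for the protobuf message and the value chunk. A self-contained sketch of just that framing, with plain Buffers standing in for an encoded protobuf message (no protobufjs needed):

```javascript
// Sketch of the Kinetic PDU framing used by send()/parse() above.
// Plain Buffers stand in for a real protobuf-encoded message.
const VERSION = 0x46;

function frame(messageBuf, chunkBuf) {
    const header = Buffer.alloc(9);
    header.writeInt8(VERSION, 0);              // protocol version
    header.writeInt32BE(messageBuf.length, 1); // protobuf message size
    header.writeInt32BE(chunkBuf.length, 5);   // chunk size
    return Buffer.concat([header, messageBuf, chunkBuf]);
}

function parse(data) {
    if (data.readInt8(0) !== VERSION) {
        throw new Error('VERSION_FAILURE');
    }
    const pbMsgLen = data.readInt32BE(1);
    const chunkLen = data.readInt32BE(5);
    return {
        message: data.slice(9, 9 + pbMsgLen),
        chunk: data.slice(9 + pbMsgLen, 9 + pbMsgLen + chunkLen),
    };
}
```

Because both lengths travel in the fixed header, a receiver can validate `getChunkSize()` against the framed length before trusting the payload, which is exactly the `DATA_ERROR` check `parse()` performs in the class above.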


@@ -1,125 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<base data-ice="baseUrl">
<title data-ice="title">Index | API Document</title>
<link type="text/css" rel="stylesheet" href="css/style.css">
<link type="text/css" rel="stylesheet" href="css/prettify-tomorrow.css">
<script src="script/prettify/prettify.js"></script>
<script src="script/manual.js"></script>
</head>
<body class="layout-container" data-ice="rootContainer">
<header>
<a href="./">Home</a>
<a href="identifiers.html">Reference</a>
<a href="source.html">Source</a>
<a data-ice="repoURL" href="git+https://github.com/scality/IronMan-Arsenal.git">Repository</a>
<div class="search-box">
<span>
<img src="./image/search.png">
<span class="search-input-edge"></span><input class="search-input"><span class="search-input-edge"></span>
</span>
<ul class="search-result"></ul>
</div>
</header>
<nav class="navigation" data-ice="nav"><div>
<ul>
<li data-ice="doc"><span data-ice="kind" class="kind-class">C</span><span data-ice="name"><span><a href="class/kinetic/Kinetic.js~Kinetic.html">Kinetic</a></span></span></li>
<li data-ice="doc"><span data-ice="kind" class="kind-variable">V</span><span data-ice="name"><span><a href="variable/index.html#static-variable-kinetic">kinetic</a></span></span></li>
</ul>
</div>
</nav>
<div class="content" data-ice="content"><h1>References</h1>
<div data-ice="classSummary"><h2 id="class">Class Summary</h2><table class="summary" data-ice="summary">
<thead><tr><td data-ice="title" colspan="3">Static Public Class Summary</td></tr></thead>
<tbody>
<tr data-ice="target">
<td>
<span class="access" data-ice="access">public</span>
<span class="override" data-ice="override"></span>
</td>
<td>
<div>
<p>
<span data-ice="name"><span><a href="class/kinetic/Kinetic.js~Kinetic.html">Kinetic</a></span></span>
</p>
</div>
<div>
<div data-ice="description"><p>Represents the Kinetic Protocol Data Structure.</p>
</div>
</div>
</td>
<td>
</td>
</tr>
</tbody>
</table>
</div>
<div data-ice="variableSummary"><h2 id="variable">Variable Summary</h2><table class="summary" data-ice="summary">
<thead><tr><td data-ice="title" colspan="3">Static Public Variable Summary</td></tr></thead>
<tbody>
<tr data-ice="target">
<td>
<span class="access" data-ice="access">public</span>
<span class="override" data-ice="override"></span>
</td>
<td>
<div>
<p>
<span data-ice="name"><span><a href="variable/index.html#static-variable-kinetic">kinetic</a></span></span><span data-ice="signature">: <span><a href="class/kinetic/Kinetic.js~Kinetic.html">Kinetic</a></span></span>
</p>
</div>
<div>
</div>
</td>
<td>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<footer class="footer">
Generated by <a href="https://esdoc.org">ESDoc<span data-ice="esdocVersion">(0.4.1)</span></a>
</footer>
<script src="script/search_index.js"></script>
<script src="script/search.js"></script>
<script src="script/pretty-print.js"></script>
<script src="script/inherited-summary.js"></script>
<script src="script/test-summary.js"></script>
<script src="script/inner-link.js"></script>
<script src="script/patch-for-local.js"></script>
</body>
</html>


@ -1,17 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" width="102" height="20">
<script/>
<linearGradient id="a" x2="0" y2="100%">
<stop offset="0" stop-color="#bbb" stop-opacity=".1"/>
<stop offset="1" stop-opacity=".1"/>
</linearGradient>
<rect rx="3" width="102" height="20" fill="#555"/>
<rect rx="3" x="64" width="38" height="20" fill="@color@"/>
<path fill="@color@" d="M64 0h4v20h-4z"/>
<rect rx="3" width="102" height="20" fill="url(#a)"/>
<g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="11">
<text x="32" y="15" fill="#010101" fill-opacity=".3">document</text>
<text x="32" y="14">document</text>
<text x="82.5" y="15" fill="#010101" fill-opacity=".3">@ratio@</text>
<text x="82.5" y="14">@ratio@</text>
</g>
</svg>


@ -1,63 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<base data-ice="baseUrl">
<title data-ice="title">API Document</title>
<link type="text/css" rel="stylesheet" href="css/style.css">
<link type="text/css" rel="stylesheet" href="css/prettify-tomorrow.css">
<script src="script/prettify/prettify.js"></script>
<script src="script/manual.js"></script>
</head>
<body class="layout-container" data-ice="rootContainer">
<header>
<a href="./">Home</a>
<a href="identifiers.html">Reference</a>
<a href="source.html">Source</a>
<a data-ice="repoURL" href="git+https://github.com/scality/IronMan-Arsenal.git">Repository</a>
<div class="search-box">
<span>
<img src="./image/search.png">
<span class="search-input-edge"></span><input class="search-input"><span class="search-input-edge"></span>
</span>
<ul class="search-result"></ul>
</div>
</header>
<nav class="navigation" data-ice="nav"><div>
<ul>
<li data-ice="doc"><span data-ice="kind" class="kind-class">C</span><span data-ice="name"><span><a href="class/kinetic/Kinetic.js~Kinetic.html">Kinetic</a></span></span></li>
<li data-ice="doc"><span data-ice="kind" class="kind-variable">V</span><span data-ice="name"><span><a href="variable/index.html#static-variable-kinetic">kinetic</a></span></span></li>
</ul>
</div>
</nav>
<div class="content" data-ice="content"><div data-ice="index" class="github-markdown"><h1 id="ironman-arsenal">IronMan-Arsenal</h1>
<p>Common utilities for the IronMan project components</p>
<p>Within this repository, you will be able to find the shared libraries for the
multiple components making up the whole Project.</p>
<h2 id="guidelines">Guidelines</h2>
<p>Please read our coding and workflow guidelines at
<a href="https://github.com/scality/IronMan-Guidelines">scality/IronMan-Guidelines</a>.</p>
</div>
</div>
<footer class="footer">
Generated by <a href="https://esdoc.org">ESDoc<span data-ice="esdocVersion">(0.4.1)</span></a>
</footer>
<script src="script/search_index.js"></script>
<script src="script/search.js"></script>
<script src="script/pretty-print.js"></script>
<script src="script/inherited-summary.js"></script>
<script src="script/test-summary.js"></script>
<script src="script/inner-link.js"></script>
<script src="script/patch-for-local.js"></script>
</body>
</html>

index.js Normal file

@ -0,0 +1,94 @@
module.exports = {
auth: require('./lib/auth/auth'),
constants: require('./lib/constants'),
db: require('./lib/db'),
errors: require('./lib/errors.js'),
shuffle: require('./lib/shuffle'),
stringHash: require('./lib/stringHash'),
ipCheck: require('./lib/ipCheck'),
jsutil: require('./lib/jsutil'),
https: {
ciphers: require('./lib/https/ciphers.js'),
dhparam: require('./lib/https/dh2048.js'),
},
algorithms: {
list: {
Basic: require('./lib/algos/list/basic').List,
Delimiter: require('./lib/algos/list/delimiter').Delimiter,
DelimiterVersions: require('./lib/algos/list/delimiterVersions')
.DelimiterVersions,
DelimiterMaster: require('./lib/algos/list/delimiterMaster')
.DelimiterMaster,
MPU: require('./lib/algos/list/MPU').MultipartUploads,
},
listTools: {
DelimiterTools: require('./lib/algos/list/tools'),
},
},
policies: {
evaluators: require('./lib/policyEvaluator/evaluator.js'),
validateUserPolicy: require('./lib/policy/policyValidator')
.validateUserPolicy,
RequestContext: require('./lib/policyEvaluator/RequestContext.js'),
},
Clustering: require('./lib/Clustering'),
testing: {
matrix: require('./lib/testing/matrix.js'),
},
versioning: {
VersioningConstants: require('./lib/versioning/constants.js')
.VersioningConstants,
Version: require('./lib/versioning/Version.js').Version,
VersionID: require('./lib/versioning/VersionID.js'),
},
network: {
http: {
server: require('./lib/network/http/server'),
},
rpc: require('./lib/network/rpc/rpc'),
level: require('./lib/network/rpc/level-net'),
rest: {
RESTServer: require('./lib/network/rest/RESTServer'),
RESTClient: require('./lib/network/rest/RESTClient'),
},
RoundRobin: require('./lib/network/RoundRobin'),
},
s3routes: {
routes: require('./lib/s3routes/routes'),
routesUtils: require('./lib/s3routes/routesUtils'),
},
s3middleware: {
userMetadata: require('./lib/s3middleware/userMetadata'),
escapeForXml: require('./lib/s3middleware/escapeForXml'),
tagging: require('./lib/s3middleware/tagging'),
validateConditionalHeaders:
require('./lib/s3middleware/validateConditionalHeaders')
.validateConditionalHeaders,
MD5Sum: require('./lib/s3middleware/MD5Sum'),
},
storage: {
metadata: {
MetadataFileServer:
require('./lib/storage/metadata/file/MetadataFileServer'),
MetadataFileClient:
require('./lib/storage/metadata/file/MetadataFileClient'),
LogConsumer:
require('./lib/storage/metadata/bucketclient/LogConsumer'),
},
data: {
file: {
DataFileStore:
require('./lib/storage/data/file/DataFileStore'),
},
},
utils: require('./lib/storage/utils'),
},
models: {
BucketInfo: require('./lib/models/BucketInfo'),
ObjectMD: require('./lib/models/ObjectMD'),
WebsiteConfiguration: require('./lib/models/WebsiteConfiguration'),
ReplicationConfiguration:
require('./lib/models/ReplicationConfiguration'),
},
};

lib/Clustering.js Normal file

@ -0,0 +1,263 @@
'use strict'; // eslint-disable-line
const cluster = require('cluster');
class Clustering {
/**
* Constructor
*
* @param {number} size Cluster size
* @param {Logger} logger Logger object
* @param {number} [shutdownTimeout=5000] Change the default shutdown
* timeout for releasing resources
* @return {Clustering} itself
*/
constructor(size, logger, shutdownTimeout) {
this._size = size;
if (size < 1) {
throw new Error('Cluster size must be greater than or equal to 1');
}
this._shutdownTimeout = shutdownTimeout || 5000;
this._logger = logger;
this._shutdown = false;
this._workers = new Array(size).fill(undefined);
this._workersTimeout = new Array(size).fill(undefined);
this._workersStatus = new Array(size).fill(undefined);
this._status = 0;
this._exitCb = undefined; // Exit callback
this._index = undefined;
}
/**
* Method called after a stop() call
*
* @private
* @return {undefined}
*/
_afterStop() {
// Assuming all workers shut down gracefully
this._status = 0;
const size = this._size;
for (let i = 0; i < size; ++i) {
// If the process returned an error code or was killed by a signal,
// set the status
if (typeof this._workersStatus[i] === 'number') {
this._status = this._workersStatus[i];
break;
} else if (typeof this._workersStatus[i] === 'string') {
this._status = 1;
break;
}
}
if (this._exitCb) {
return this._exitCb(this);
}
return process.exit(this.getStatus());
}
/**
* Method called when a worker exited
*
* @param {Cluster.worker} worker - Current worker
* @param {number} i - Worker index
* @param {number} code - Exit code
* @param {string} signal - Exit signal
* @return {undefined}
*/
_workerExited(worker, i, code, signal) {
// If the worker:
// - was killed by a signal
// - returned an error code
// - or just stopped
if (signal) {
this._logger.info('Worker killed by signal', {
signal,
id: i,
childPid: worker.process.pid,
});
this._workersStatus[i] = signal;
} else if (code !== 0) {
this._logger.error('Worker exit with code', {
code,
id: i,
childPid: worker.process.pid,
});
this._workersStatus[i] = code;
} else {
this._logger.info('Worker shutdown gracefully', {
id: i,
childPid: worker.process.pid,
});
this._workersStatus[i] = undefined;
}
this._workers[i] = undefined;
if (this._workersTimeout[i]) {
clearTimeout(this._workersTimeout[i]);
this._workersTimeout[i] = undefined;
}
// If we don't trigger the stop method, the watchdog
// will automatically restart the worker
if (this._shutdown === false) {
return process.nextTick(() => this.startWorker(i));
}
// Check if a worker is still running
if (!this._workers.every(cur => cur === undefined)) {
return undefined;
}
return this._afterStop();
}
/**
* Method to start a worker
*
* @param {number} i Index of the starting worker
* @return {undefined}
*/
startWorker(i) {
if (!cluster.isMaster) {
return;
}
// Fork a new worker
this._workers[i] = cluster.fork();
// Listen for message from the worker
this._workers[i].on('message', msg => {
// If the worker is ready, send it its id
if (msg === 'ready') {
this._workers[i].send({ msg: 'setup', id: i });
}
});
this._workers[i].on('exit', (code, signal) =>
this._workerExited(this._workers[i], i, code, signal));
// Triggered when the worker has started
this._workers[i].on('online', () => {
this._logger.info('Worker started', {
id: i,
childPid: this._workers[i].process.pid,
});
});
}
/**
* Method to put handler on cluster exit
*
* @param {function} cb - Callback(Clustering, [exitSignal])
* @return {Clustering} Itself
*/
onExit(cb) {
this._exitCb = cb;
return this;
}
/**
* Method to start the cluster (if master) or to run the callback
* (if worker)
*
* @param {function} cb - Callback to run the worker
* @return {Clustering} itself
*/
start(cb) {
process.on('SIGINT', () => this.stop('SIGINT'));
process.on('SIGHUP', () => this.stop('SIGHUP'));
process.on('SIGQUIT', () => this.stop('SIGQUIT'));
process.on('SIGTERM', () => this.stop('SIGTERM'));
process.on('SIGPIPE', () => {});
process.on('exit', (code, signal) => {
if (this._exitCb) {
this._status = code || 0;
return this._exitCb(this, signal);
}
return process.exit(code || 0);
});
process.on('uncaughtException', err => {
this._logger.fatal('caught error', {
error: err.message,
stack: err.stack.split('\n').map(str => str.trim()),
});
process.exit(1);
});
if (!cluster.isMaster) {
// Wait for a message from the master to
// learn this worker's id
process.on('message', msg => {
if (msg.msg === 'setup') {
this._index = msg.id;
cb(this);
}
});
// Send a message to the master to let it know
// the worker has started
process.send('ready');
} else {
for (let i = 0; i < this._size; ++i) {
this.startWorker(i);
}
}
return this;
}
/**
* Method to get workers
*
* @return {Cluster.Worker[]} Workers
*/
getWorkers() {
return this._workers;
}
/**
* Method to get the status of the cluster
*
* @return {number} Status code
*/
getStatus() {
return this._status;
}
/**
* Method to return if it's the master process
*
* @return {boolean} - True if master, false otherwise
*/
isMaster() {
return this._index === undefined;
}
/**
* Method to get index of the worker
*
* @return {number|undefined} Worker index, undefined if it's master
*/
getIndex() {
return this._index;
}
/**
* Method to stop the cluster
*
* @param {string} signal - Set internally when processes killed by signal
* @return {undefined}
*/
stop(signal) {
if (!cluster.isMaster) {
if (this._exitCb) {
return this._exitCb(this, signal);
}
return process.exit(0);
}
this._shutdown = true;
return this._workers.forEach((worker, i) => {
if (!worker) {
return undefined;
}
this._workersTimeout[i] = setTimeout(() => {
// Kill the worker if the SIGTERM was ignored or takes too long
process.kill(worker.process.pid, 'SIGKILL');
}, this._shutdownTimeout);
// Send SIGTERM to the process, allowing it to release resources
// and save some states
return process.kill(worker.process.pid, 'SIGTERM');
});
}
}
module.exports = Clustering;
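The status bookkeeping in `_afterStop()` can be seen in isolation. The sketch below is illustrative only (`aggregateStatus` is an invented name, not part of the class): a numeric worker status is an exit code, a string status is a kill signal (mapped to status 1), and `undefined` means a clean shutdown.

```javascript
// Standalone sketch of the exit-status aggregation done in _afterStop():
// the first non-clean worker status decides the cluster's exit status.
function aggregateStatus(workersStatus) {
    for (let i = 0; i < workersStatus.length; ++i) {
        if (typeof workersStatus[i] === 'number') {
            // worker exited with an error code
            return workersStatus[i];
        }
        if (typeof workersStatus[i] === 'string') {
            // worker was killed by a signal
            return 1;
        }
    }
    // every worker shut down gracefully
    return 0;
}

aggregateStatus([undefined, undefined]); // → 0
aggregateStatus([undefined, 2]); // → 2
aggregateStatus(['SIGKILL', 2]); // → 1
```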


@ -0,0 +1,75 @@
'use strict'; // eslint-disable-line strict
const { FILTER_SKIP, SKIP_NONE } = require('./tools');
/**
* Base class of listing extensions.
*/
class Extension {
/**
* This takes a list of parameters and a logger as the inputs.
* Derivatives should have their own format regarding parameters.
*
* @param {Object} parameters - listing parameter from applications
* @param {RequestLogger} logger - the logger
* @constructor
*/
constructor(parameters, logger) {
// inputs
this.parameters = parameters;
this.logger = logger;
// listing results
this.res = undefined;
this.keys = 0;
}
/**
* Generates listing parameters that metadata can understand from the input
* parameters. What metadata can understand: gt, gte, lt, lte, limit, keys,
* values, reverse; we use the same set of parameters as levelup's.
* Derivatives should have their own conversion of their original listing
* parameters into metadata listing parameters.
*
* @return {object} - listing parameters for metadata
*/
genMDParams() {
return {};
}
/**
* This function receives a data entry from metadata and decides if it will
* include the entry in the listing result or not.
*
* @param {object} entry - a listing entry from metadata
* expected format: { key, value }
* @return {number} - result of filtering the entry:
* > 0: entry is accepted and included in the result
* = 0: entry is accepted but not included (skipping)
* < 0: entry is not accepted, listing should finish
*/
filter(entry) { // eslint-disable-line no-unused-vars
return FILTER_SKIP;
}
/**
* Provides the insight into why filter is skipping an entry. This could be
* because it is skipping a range of delimited keys or a range of specific
* versions when doing master version listing.
*
* @return {string} - the insight: a common prefix or a master key,
* or SKIP_NONE if there is no insight
*/
skipping() {
return SKIP_NONE;
}
/**
* Get the listing results. Format depends on derivatives' specific logic.
* @return {Array} - The listed elements
*/
result() {
return this.res;
}
}
module.exports.default = Extension;
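A derivative wires its own state into the `filter()`/`result()` contract described above. The sketch below is a standalone illustration, not part of Arsenal (`TakeN` and `max` are invented names, and the real `Extension` also takes a logger):

```javascript
// Return codes mirroring lib/algos/list/tools.js
const FILTER_ACCEPT = 1;
const FILTER_END = -1;

// Invented example extension: accept entries until `max` keys are
// collected, then tell the caller to stop listing.
class TakeN {
    constructor(parameters) {
        this.max = parameters.max;
        this.res = [];
        this.keys = 0;
    }
    filter(entry) {
        if (this.keys >= this.max) {
            return FILTER_END;
        }
        this.res.push(entry);
        this.keys += 1;
        return FILTER_ACCEPT;
    }
    result() {
        return this.res;
    }
}

const ext = new TakeN({ max: 2 });
['a', 'b', 'c'].every(key => ext.filter({ key }) !== FILTER_END);
// ext.result() → [{ key: 'a' }, { key: 'b' }]
```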

lib/algos/list/MPU.js Normal file

@ -0,0 +1,159 @@
'use strict'; // eslint-disable-line strict
const { inc, checkLimit, FILTER_END, FILTER_ACCEPT } = require('./tools');
const DEFAULT_MAX_KEYS = 1000;
function numberDefault(num, defaultNum) {
const parsedNum = Number.parseInt(num, 10);
return Number.isNaN(parsedNum) ? defaultNum : parsedNum;
}
/**
* Class for the MultipartUploads extension
*/
class MultipartUploads {
/**
* Constructor of the extension
* Init and check parameters
* @param {Object} params - The parameters you sent to DBD
* @param {RequestLogger} logger - The logger of the request
* @return {undefined}
*/
constructor(params, logger) {
this.params = params;
this.CommonPrefixes = [];
this.Uploads = [];
this.IsTruncated = false;
this.NextKeyMarker = '';
this.NextUploadIdMarker = '';
this.prefixLength = 0;
this.queryPrefixLength = numberDefault(params.queryPrefixLength, 0);
this.keys = 0;
this.maxKeys = checkLimit(params.maxKeys, DEFAULT_MAX_KEYS);
this.delimiter = params.delimiter;
this.splitter = params.splitter;
this.logger = logger;
}
genMDParams() {
const params = {};
if (this.params.keyMarker) {
params.gt = `overview${this.params.splitter}` +
`${this.params.keyMarker}${this.params.splitter}`;
if (this.params.uploadIdMarker) {
params.gt += `${this.params.uploadIdMarker}`;
}
// advance so that lower bound does not include the supplied
// markers
params.gt = inc(params.gt);
}
if (this.params.prefix) {
if (params.gt === undefined || this.params.prefix > params.gt) {
delete params.gt;
params.gte = this.params.prefix;
}
params.lt = inc(this.params.prefix);
}
return params;
}
/**
* This function adds the elements to the Uploads
* Set the NextKeyMarker to the current key
* Increment the keys counter
* @param {String} value - The value of the key
* @return {undefined}
*/
addUpload(value) {
const tmp = JSON.parse(value);
this.Uploads.push({
key: tmp.key,
value: {
UploadId: tmp.uploadId,
Initiator: {
ID: tmp.initiator.ID,
DisplayName: tmp.initiator.DisplayName,
},
Owner: {
ID: tmp['owner-id'],
DisplayName: tmp['owner-display-name'],
},
StorageClass: tmp['x-amz-storage-class'],
Initiated: tmp.initiated,
},
});
this.NextKeyMarker = tmp.key;
this.NextUploadIdMarker = tmp.uploadId;
++this.keys;
}
/**
* This function adds a common prefix to the CommonPrefixes array
* Set the NextKeyMarker to the current commonPrefix
* Increment the keys counter
* @param {String} commonPrefix - The commonPrefix to add
* @return {undefined}
*/
addCommonPrefix(commonPrefix) {
if (this.CommonPrefixes.indexOf(commonPrefix) === -1) {
this.CommonPrefixes.push(commonPrefix);
this.NextKeyMarker = commonPrefix;
++this.keys;
}
}
/**
* This function applies filter on each element
* @param {String} obj - The key and value of the element
* @return {number} - > 0: Continue, < 0: Stop
*/
filter(obj) {
// Check first in case of maxkeys = 0
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 => IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return FILTER_END;
}
const key = obj.key;
const value = obj.value;
if (this.delimiter) {
const mpuPrefixSlice = `overview${this.splitter}`.length;
const mpuKey = key.slice(mpuPrefixSlice);
const commonPrefixIndex = mpuKey.indexOf(this.delimiter,
this.queryPrefixLength);
if (commonPrefixIndex === -1) {
this.addUpload(value);
} else {
this.addCommonPrefix(mpuKey.substring(0,
commonPrefixIndex + this.delimiter.length));
}
} else {
this.addUpload(value);
}
return FILTER_ACCEPT;
}
skipping() {
return '';
}
/**
* Returns the formatted result
* @return {Object} - The result.
*/
result() {
return {
CommonPrefixes: this.CommonPrefixes,
Uploads: this.Uploads,
IsTruncated: this.IsTruncated,
NextKeyMarker: this.NextKeyMarker,
MaxKeys: this.maxKeys,
NextUploadIdMarker: this.NextUploadIdMarker,
Delimiter: this.delimiter,
};
}
}
module.exports = {
MultipartUploads,
};
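`genMDParams()` above leans on `inc` from `./tools`, whose body is truncated in this excerpt. The sketch below is an assumption consistent with how it is used (an exclusive upper bound for a prefix range), obtained by bumping the string's last character:

```javascript
// Sketch of `inc` (the real one lives in lib/algos/list/tools.js, not
// shown here): a string greater than every string prefixed by `str`.
function inc(str) {
    if (!str) {
        return str;
    }
    return str.slice(0, -1) +
        String.fromCharCode(str.charCodeAt(str.length - 1) + 1);
}

inc('photos/'); // → 'photos0': every 'photos/...' key sorts below it
```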

lib/algos/list/basic.js Normal file

@ -0,0 +1,75 @@
'use strict'; // eslint-disable-line strict
const Extension = require('./Extension').default;
const { checkLimit, FILTER_END, FILTER_ACCEPT } = require('./tools');
const DEFAULT_MAX_KEYS = 10000;
/**
* Class of an extension doing the simple listing
*/
class List extends Extension {
/**
* Constructor
* Set the logger and the res
* @param {Object} parameters - The parameters you sent to DBD
* @param {RequestLogger} logger - The logger of the request
* @return {undefined}
*/
constructor(parameters, logger) {
super(parameters, logger);
this.res = [];
if (parameters) {
this.maxKeys = checkLimit(parameters.maxKeys, DEFAULT_MAX_KEYS);
} else {
this.maxKeys = DEFAULT_MAX_KEYS;
}
this.keys = 0;
}
genMDParams() {
const params = {
gt: this.parameters.gt,
gte: this.parameters.gte || this.parameters.start,
lt: this.parameters.lt,
lte: this.parameters.lte || this.parameters.end,
keys: this.parameters.keys,
values: this.parameters.values,
};
Object.keys(params).forEach(key => {
if (params[key] === null || params[key] === undefined) {
delete params[key];
}
});
return params;
}
/**
* Function apply on each element
* Just add it to the array
* @param {Object} elem - The data from the database
* @return {number} - > 0 : continue listing
* < 0 : listing done
*/
filter(elem) {
// Check first in case of maxkeys <= 0
if (this.keys >= this.maxKeys) {
return FILTER_END;
}
this.res.push(elem);
this.keys++;
return FILTER_ACCEPT;
}
/**
* Function returning the result
* @return {Array} - The listed elements
*/
result() {
return this.res;
}
}
module.exports = {
List,
};
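The `start`/`end` aliasing in `List#genMDParams` can be exercised standalone. This free-function rewrite is for illustration only; it mirrors the method's body without the class around it:

```javascript
// `start` and `end` are aliases for `gte` and `lte`; empty bounds are
// dropped before handing the range to metadata.
function genMDParams(parameters) {
    const params = {
        gt: parameters.gt,
        gte: parameters.gte || parameters.start,
        lt: parameters.lt,
        lte: parameters.lte || parameters.end,
    };
    Object.keys(params).forEach(key => {
        if (params[key] === null || params[key] === undefined) {
            delete params[key];
        }
    });
    return params;
}

genMDParams({ start: 'a', end: 'z' }); // → { gte: 'a', lte: 'z' }
```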

lib/algos/list/delimiter.js Normal file

@ -0,0 +1,224 @@
'use strict'; // eslint-disable-line strict
const Extension = require('./Extension').default;
const { inc, FILTER_END, FILTER_ACCEPT, FILTER_SKIP } = require('./tools');
/**
* Find the next delimiter in the path
*
* @param {string} key - path of the object
* @param {string} delimiter - string to find
* @param {number} index - index to start at
* @return {number} delimiterIndex - returns -1 in case no delimiter is found
*/
function nextDelimiter(key, delimiter, index) {
return key.indexOf(delimiter, index);
}
/**
* Find the common prefix in the path
*
* @param {String} key - path of the object
* @param {String} delimiter - separator
* @param {Number} delimiterIndex - 'folder' index in the path
* @return {String} - CommonPrefix
*/
function getCommonPrefix(key, delimiter, delimiterIndex) {
return key.substring(0, delimiterIndex + delimiter.length);
}
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
class Delimiter extends Extension {
/**
* Create a new Delimiter instance
* @constructor
* @param {Object} parameters - listing parameters
* @param {String} [parameters.delimiter] - delimiter per amazon
* format
* @param {String} [parameters.prefix] - prefix per amazon
* format
* @param {String} [parameters.marker] - marker per amazon
* format
* @param {Number} [parameters.maxKeys] - number of keys to list
* @param {Boolean} [parameters.alphabeticalOrder] - Whether the result
* is alphabetically ordered or not
*/
constructor(parameters) {
super(parameters);
// original listing parameters
this.delimiter = parameters.delimiter;
this.prefix = parameters.prefix;
this.marker = parameters.marker;
this.maxKeys = parameters.maxKeys || 1000;
this.alphabeticalOrder =
typeof parameters.alphabeticalOrder !== 'undefined' ?
parameters.alphabeticalOrder : true;
// results
this.CommonPrefixes = [];
this.Contents = [];
this.IsTruncated = false;
this.NextMarker = parameters.marker;
if (this.delimiter !== undefined &&
this.NextMarker !== undefined &&
this.NextMarker.startsWith(this.prefix || '')) {
const nextDelimiterIndex =
this.NextMarker.indexOf(this.delimiter,
this.prefix
? this.prefix.length
: 0);
this.NextMarker =
this.NextMarker.slice(0, nextDelimiterIndex +
this.delimiter.length);
}
}
genMDParams() {
const params = {};
if (this.prefix) {
params.gte = this.prefix;
params.lt = inc(this.prefix);
}
if (this.marker) {
if (params.gte && params.gte > this.marker) {
return params;
}
delete params.gte;
params.gt = this.marker;
}
return params;
}
/**
* check if the max keys count has been reached and set the
* final state of the result if it is the case
* @return {Boolean} - indicates if the iteration has to stop
*/
_reachedMaxKeys() {
if (this.keys >= this.maxKeys) {
// In cases of maxKeys <= 0 -> IsTruncated = false
this.IsTruncated = this.maxKeys > 0;
return true;
}
return false;
}
/**
* Add a (key, value) tuple to the listing
* Set the NextMarker to the current key
* Increment the keys counter
* @param {String} key - The key to add
* @param {String} value - The value of the key
* @return {number} - indicates if iteration should continue
*/
addContents(key, value) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.Contents.push({ key, value });
this.NextMarker = key;
++this.keys;
return FILTER_ACCEPT;
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj) {
const key = obj.key;
const value = obj.value;
if ((this.prefix && !key.startsWith(this.prefix))
|| (this.alphabeticalOrder
&& typeof this.NextMarker === 'string'
&& key <= this.NextMarker)) {
return FILTER_SKIP;
}
if (this.delimiter) {
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = nextDelimiter(key,
this.delimiter,
baseIndex);
if (delimiterIndex === -1) {
return this.addContents(key, value);
}
return this.addCommonPrefix(key, delimiterIndex);
}
return this.addContents(key, value);
}
/**
* Add a Common Prefix in the list
* @param {String} key - object name
* @param {Number} index - after prefix starting point
* @return {Boolean} - indicates if iteration should continue
*/
addCommonPrefix(key, index) {
const commonPrefix = getCommonPrefix(key, this.delimiter, index);
if (this.CommonPrefixes.indexOf(commonPrefix) === -1
&& this.NextMarker !== commonPrefix) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.CommonPrefixes.push(commonPrefix);
this.NextMarker = commonPrefix;
++this.keys;
return FILTER_ACCEPT;
}
return FILTER_SKIP;
}
/**
* If repd happens to want to skip listing, here is an idea.
*
* @return {string} - the present range (NextMarker) if repd believes
* that it's enough and should move on
*/
skipping() {
return this.NextMarker;
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done, doesn't show a NextMarker field if the output
* isn't truncated
* @return {Object} - following amazon format
*/
result() {
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
return {
CommonPrefixes: this.CommonPrefixes,
Contents: this.Contents,
IsTruncated: this.IsTruncated,
NextMarker: (this.IsTruncated && this.delimiter)
? this.NextMarker
: undefined,
Delimiter: this.delimiter,
};
}
}
module.exports = { Delimiter };
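The prefix/delimiter grouping above can be seen end to end on a flat, sorted key list. The function below is an illustrative condensation, not the `Delimiter` class itself: keys containing the delimiter past the prefix collapse into one CommonPrefix, the rest are listed as Contents.

```javascript
// Illustrative condensation of the Delimiter grouping logic.
function listWithDelimiter(keys, delimiter, prefix) {
    const CommonPrefixes = [];
    const Contents = [];
    const base = prefix || '';
    keys.forEach(key => {
        if (!key.startsWith(base)) {
            return; // outside the requested prefix
        }
        const idx = key.indexOf(delimiter, base.length);
        if (idx === -1) {
            Contents.push(key); // a plain 'file'
        } else {
            // a 'folder': dedupe on the common prefix
            const cp = key.substring(0, idx + delimiter.length);
            if (CommonPrefixes.indexOf(cp) === -1) {
                CommonPrefixes.push(cp);
            }
        }
    });
    return { CommonPrefixes, Contents };
}

listWithDelimiter(
    ['a.txt', 'photos/feb.jpg', 'photos/jan.jpg', 'readme'], '/');
// → { CommonPrefixes: ['photos/'], Contents: ['a.txt', 'readme'] }
```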


@ -0,0 +1,94 @@
'use strict'; // eslint-disable-line strict
const Delimiter = require('./delimiter').Delimiter;
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } = require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
/**
* Handle object listing with parameters. This extends the base class Delimiter
* to return the raw master versions of existing objects.
*/
class DelimiterMaster extends Delimiter {
/**
* Delimiter listing of master versions.
* @param {Object} parameters - listing parameters
* @param {String} parameters.delimiter - delimiter per amazon format
* @param {String} parameters.prefix - prefix per amazon format
* @param {String} parameters.marker - marker per amazon format
* @param {Number} parameters.maxKeys - number of keys to list
*/
constructor(parameters) {
super(parameters);
this.prvPHDKey = undefined;
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj) {
let key = obj.key;
const value = obj.value;
if ((this.prefix && !key.startsWith(this.prefix))
|| (typeof this.NextMarker === 'string' &&
key <= this.NextMarker)) {
return FILTER_SKIP;
}
const versionIdIndex = key.indexOf(VID_SEP);
if (versionIdIndex >= 0) {
// generally we do not accept a specific version,
// we only do when the master version is a PHD version
key = key.slice(0, versionIdIndex);
if (key !== this.prvPHDKey) {
return FILTER_ACCEPT; // trick repd to not increase its streak
}
}
if (Version.isPHD(value)) {
// master version is a PHD version: wait for the next version
this.prvPHDKey = key;
return FILTER_ACCEPT; // trick repd to not increase its streak
}
if (Version.isDeleteMarker(value)) {
// version is a delete marker, ignore
return FILTER_ACCEPT; // trick repd to not increase its streak
}
// non-PHD master version or a version whose master is a PHD version
this.prvPHDKey = undefined;
if (this.delimiter) {
// check if the key has the delimiter
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex >= 0) {
// try to add the prefix to the list
return this.addCommonPrefix(key, delimiterIndex);
}
}
return this.addContents(key, value);
}
skipping() {
if (this.NextMarker) {
// next marker:
// - foo/ : skipping foo/
// - foo : skipping foo.
const index = this.NextMarker.lastIndexOf(this.delimiter);
if (index === this.NextMarker.length - 1) {
return this.NextMarker;
}
return this.NextMarker + VID_SEP;
}
return SKIP_NONE;
}
}
module.exports = { DelimiterMaster };
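The key-splitting step in `filter()` above is small enough to sketch on its own. `VID_SEP`'s real value comes from `lib/versioning/constants.js`, which is outside this excerpt, so `'\0'` below is only a placeholder, and `masterKeyOf` is an invented helper name:

```javascript
// Placeholder separator for the sketch (real value: VSConst.VersionId
// .Separator in lib/versioning/constants.js).
const VID_SEP = '\0';

// A key 'foo' + VID_SEP + versionId names one version of 'foo';
// DelimiterMaster strips that suffix before comparing the entry
// against the previous PHD master key.
function masterKeyOf(key) {
    const idx = key.indexOf(VID_SEP);
    return idx >= 0 ? key.slice(0, idx) : key;
}

masterKeyOf(`foo${VID_SEP}v1`); // → 'foo'
masterKeyOf('bar'); // → 'bar' (already a master key)
```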


@ -0,0 +1,165 @@
'use strict'; // eslint-disable-line strict
const Delimiter = require('./delimiter').Delimiter;
const Version = require('../../versioning/Version').Version;
const VSConst = require('../../versioning/constants').VersioningConstants;
const { inc, FILTER_END, FILTER_ACCEPT, FILTER_SKIP, SKIP_NONE } =
require('./tools');
const VID_SEP = VSConst.VersionId.Separator;
function formatVersionKey(key, versionId) {
return `${key}${VID_SEP}${versionId}`;
}
/**
* Handle object listing with parameters
*
* @prop {String[]} CommonPrefixes - 'folders' defined by the delimiter
* @prop {String[]} Contents - 'files' to list
* @prop {Boolean} IsTruncated - truncated listing flag
* @prop {String|undefined} NextMarker - marker per amazon format
* @prop {Number} keys - count of listed keys
* @prop {String|undefined} delimiter - separator per amazon format
* @prop {String|undefined} prefix - prefix per amazon format
* @prop {Number} maxKeys - number of keys to list
*/
class DelimiterVersions extends Delimiter {
constructor(parameters) {
super(parameters);
// specific to version listing
this.keyMarker = parameters.keyMarker;
this.versionIdMarker = parameters.versionIdMarker;
// internal state
this.masterKey = undefined;
this.masterVersionId = undefined;
// listing results
this.NextMarker = parameters.keyMarker;
this.NextVersionIdMarker = undefined;
}
genMDParams() {
const params = {};
if (this.parameters.prefix) {
params.gte = this.parameters.prefix;
params.lt = inc(this.parameters.prefix);
}
if (this.parameters.keyMarker) {
if (params.gte && params.gte > this.parameters.keyMarker) {
return params;
}
delete params.gte;
if (this.parameters.versionIdMarker) {
// versionIdMarker should always come with keyMarker
// but may not be the other way around
params.gt = formatVersionKey(this.parameters.keyMarker,
this.parameters.versionIdMarker);
} else {
params.gt = inc(this.parameters.keyMarker + VID_SEP);
}
}
return params;
}
/**
* Add a (key, versionId, value) tuple to the listing.
* Set the NextMarker to the current key
* Increment the keys counter
* @param {object} obj - the entry to add to the listing result
* @param {String} obj.key - The key to add
* @param {String} obj.versionId - versionId
* @param {String} obj.value - The value of the key
* @return {Boolean} - indicates if iteration should continue
*/
addContents(obj) {
if (this._reachedMaxKeys()) {
return FILTER_END;
}
this.Contents.push(obj);
this.NextMarker = obj.key;
this.NextVersionIdMarker = obj.versionId;
++this.keys;
return FILTER_ACCEPT;
}
/**
* Filter to apply on each iteration, based on:
* - prefix
* - delimiter
* - maxKeys
* The marker is being handled directly by levelDB
* @param {Object} obj - The key and value of the element
* @param {String} obj.key - The key of the element
* @param {String} obj.value - The value of the element
* @return {number} - indicates if iteration should continue
*/
filter(obj) {
if (Version.isPHD(obj.value)) {
return FILTER_ACCEPT; // trick repd to not increase its streak
}
if (this.prefix && !obj.key.startsWith(this.prefix)) {
return FILTER_SKIP;
}
let key = obj.key; // original key
let versionId = undefined; // versionId
const versionIdIndex = obj.key.indexOf(VID_SEP);
if (versionIdIndex < 0) {
this.masterKey = obj.key;
this.masterVersionId =
Version.from(obj.value).getVersionId() || 'null';
versionId = this.masterVersionId;
} else {
// eslint-disable-next-line
key = obj.key.slice(0, versionIdIndex);
// eslint-disable-next-line
versionId = obj.key.slice(versionIdIndex + 1);
if (this.masterKey === key && this.masterVersionId === versionId) {
return FILTER_ACCEPT; // trick repd to not increase its streak
}
this.masterKey = undefined;
this.masterVersionId = undefined;
}
if (this.delimiter) {
const baseIndex = this.prefix ? this.prefix.length : 0;
const delimiterIndex = key.indexOf(this.delimiter, baseIndex);
if (delimiterIndex >= 0) {
return this.addCommonPrefix(key, delimiterIndex);
}
}
return this.addContents({ key, value: obj.value, versionId });
}
skipping() {
if (this.NextMarker) {
const index = this.NextMarker.lastIndexOf(this.delimiter);
if (index === this.NextMarker.length - 1) {
return this.NextMarker;
}
}
return SKIP_NONE;
}
/**
* Return an object containing all mandatory fields to use once the
* iteration is done; the NextKeyMarker and NextVersionIdMarker fields
* are omitted if the output isn't truncated
* @return {Object} - following amazon format
*/
result() {
/* NextMarker is only provided when delimiter is used.
* specified in v1 listing documentation
* http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
*/
return {
CommonPrefixes: this.CommonPrefixes,
Versions: this.Contents,
IsTruncated: this.IsTruncated,
NextKeyMarker: this.IsTruncated ? this.NextMarker : undefined,
NextVersionIdMarker: this.IsTruncated ?
this.NextVersionIdMarker : undefined,
Delimiter: this.delimiter,
};
}
}
module.exports = { DelimiterVersions };
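The listing logic above splits entries on `VID_SEP`, which is defined elsewhere in Arsenal along with `formatVersionKey`. As a hedged, self-contained sketch (the NUL separator below is an assumption; only the layout matters), version keys can be modeled as:

```javascript
// Assumed version-key layout: `${key}${VID_SEP}${versionId}`, with a
// separator that sorts before any printable character, so a master key
// is immediately followed by its version keys in a sorted listing.
const VID_SEP = '\u0000'; // assumption: NUL separator

function formatVersionKey(key, versionId) {
    return `${key}${VID_SEP}${versionId}`;
}

const versionKey = formatVersionKey('photo.jpg', '0001');
// Splitting mirrors the logic in filter() above:
const sepIndex = versionKey.indexOf(VID_SEP);
const key = versionKey.slice(0, sepIndex);      // 'photo.jpg'
const versionId = versionKey.slice(sepIndex + 1); // '0001'
```

Because the separator sorts below every printable character, `'photo.jpg' < versionKey` always holds, which is what lets `skipping()` jump over a whole key's versions at once.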

lib/algos/list/tools.js Normal file
@ -0,0 +1,41 @@
// constants for extensions
const SKIP_NONE = undefined; // to be inline with the values of NextMarker
const FILTER_ACCEPT = 1;
const FILTER_SKIP = 0;
const FILTER_END = -1;
/**
* This function checks if a number is valid
* To be valid, a number must be an integer and, if a limit is
* specified, not greater than that limit
* If the number is not valid, the limit is returned
* @param {Number} number - The number to check
* @param {Number} limit - The limit to respect
* @return {Number} - The parsed number, or the limit
*/
function checkLimit(number, limit) {
const parsed = Number.parseInt(number, 10);
const valid = !Number.isNaN(parsed) && (!limit || parsed <= limit);
return valid ? parsed : limit;
}
/**
* Increment the charCode of the last character of a valid string.
*
* @param {string} str - the input string
* @return {string} - the incremented string
* or the input unchanged if it is empty
*/
function inc(str) {
return str ? (str.slice(0, str.length - 1) +
String.fromCharCode(str.charCodeAt(str.length - 1) + 1)) : str;
}
module.exports = {
checkLimit,
inc,
SKIP_NONE,
FILTER_END,
FILTER_SKIP,
FILTER_ACCEPT,
};
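A minimal usage sketch of these two helpers (values are illustrative): `inc` turns a prefix into an exclusive upper bound, so a prefix listing scans the key range `[prefix, inc(prefix))`, and `checkLimit` clamps a user-supplied maxKeys to a service-wide cap.

```javascript
// Self-contained copies of the helpers above, plus usage.
function checkLimit(number, limit) {
    const parsed = Number.parseInt(number, 10);
    const valid = !Number.isNaN(parsed) && (!limit || parsed <= limit);
    return valid ? parsed : limit;
}
function inc(str) {
    return str ? (str.slice(0, str.length - 1) +
        String.fromCharCode(str.charCodeAt(str.length - 1) + 1)) : str;
}

// Every key starting with 'photos/' sorts inside [gte, lt):
const range = { gte: 'photos/', lt: inc('photos/') }; // lt === 'photos0'
const inRange = k => k >= range.gte && k < range.lt;

// maxKeys clamped to a cap (1000 here is an illustrative limit):
const maxKeys = checkLimit('5000', 1000); // -> 1000
```

This range trick is exactly what `getSearchParams`-style code above does with `params.gte` and `params.lt`.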

lib/auth/AuthInfo.js Normal file
@ -0,0 +1,54 @@
'use strict'; // eslint-disable-line strict
const constants = require('../constants');
/**
* Class containing requester's information received from Vault
* @param {object} info from Vault including arn, canonicalID,
* shortid, email, accountDisplayName and IAMdisplayName (if applicable)
* @return {AuthInfo} an AuthInfo instance
*/
class AuthInfo {
constructor(objectFromVault) {
// amazon resource name for IAM user (if applicable)
this.arn = objectFromVault.arn;
// account canonicalID
this.canonicalID = objectFromVault.canonicalID;
// shortid for account (also contained in ARN)
this.shortid = objectFromVault.shortid;
// email for account or user as applicable
this.email = objectFromVault.email;
// display name for account
this.accountDisplayName = objectFromVault.accountDisplayName;
// display name for user (if applicable)
this.IAMdisplayName = objectFromVault.IAMdisplayName;
}
getArn() {
return this.arn;
}
getCanonicalID() {
return this.canonicalID;
}
getShortid() {
return this.shortid;
}
getEmail() {
return this.email;
}
getAccountDisplayName() {
return this.accountDisplayName;
}
getIAMdisplayName() {
return this.IAMdisplayName;
}
// Check whether requester is an IAM user versus an account
isRequesterAnIAMUser() {
return !!this.IAMdisplayName;
}
isRequesterPublicUser() {
return this.canonicalID === constants.publicId;
}
}
module.exports = AuthInfo;
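The `IAMdisplayName` check above is the whole user-vs-account distinction. A trimmed, self-contained sketch (the class is inlined here to avoid the `constants` dependency; values are fabricated):

```javascript
// Trimmed copy of AuthInfo for illustration; the real class also has
// getters and a public-user check against constants.publicId.
class AuthInfoSketch {
    constructor(objectFromVault) {
        this.canonicalID = objectFromVault.canonicalID;
        this.IAMdisplayName = objectFromVault.IAMdisplayName;
    }
    isRequesterAnIAMUser() {
        // IAMdisplayName is only set for IAM users, never for accounts
        return !!this.IAMdisplayName;
    }
}

const account = new AuthInfoSketch({ canonicalID: 'abcd1234' });
const iamUser = new AuthInfoSketch({ canonicalID: 'abcd1234',
    IAMdisplayName: 'bob' });
// account.isRequesterAnIAMUser() === false
// iamUser.isRequesterAnIAMUser() === true
```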

lib/auth/Vault.js Normal file
@ -0,0 +1,276 @@
const errors = require('../errors');
const AuthInfo = require('./AuthInfo');
/** vaultSignatureCb parses the message from Vault and instantiates
* an AuthInfo object from it
* @param {object} err - error from vault
* @param {object} authInfo - info from vault
* @param {object} log - log for request
* @param {function} callback - callback to authCheck functions
* @param {object} [streamingV4Params] - present if v4 signature;
* items used to calculate signature on chunks if streaming auth
* @return {undefined}
*/
function vaultSignatureCb(err, authInfo, log, callback, streamingV4Params) {
// vaultclient API guarantees that it returns:
// - either `err`, an Error object with `code` and `message` properties set
// - or `err == null` and `info` is an object with `message.code` and
// `message.message` properties set.
if (err) {
log.debug('received error message from auth provider',
{ errorMessage: err });
return callback(err);
}
log.debug('received info from Vault', { authInfo });
const info = authInfo.message.body;
const userInfo = new AuthInfo(info.userInfo);
const authorizationResults = info.authorizationResults;
return callback(null, userInfo, authorizationResults, streamingV4Params);
}
/**
* Class that provides common authentication methods against different
* authentication backends.
* @class Vault
*/
class Vault {
/**
* @constructor
* @param {object} client - authentication backend or vault client
* @param {string} implName - implementation name for auth backend
*/
constructor(client, implName) {
this.client = client;
this.implName = implName;
}
/**
* authenticateV2Request
*
* @param {object} params - the authentication parameters as returned by
* auth.extractParams
* @param {number} params.version - shall equal 2
* @param {string} params.data.accessKey - the user's accessKey
* @param {string} params.data.signatureFromRequest - the signature read
* from the request
* @param {string} params.data.stringToSign - the stringToSign
* @param {string} params.data.algo - the hashing algorithm used for the
* signature
* @param {string} params.data.authType - the type of authentication (query
* or header)
* @param {string} params.data.signatureVersion - the version of the
* signature (AWS or AWS4)
* @param {number} [params.data.signatureAge] - the age of the signature in
* ms
* @param {object} params.log - the logger object
* @param {RequestContext []} requestContexts - an array of RequestContext
* instances which contain information for policy authorization check
* @param {function} callback - callback with either error or user info
* @returns {undefined}
*/
authenticateV2Request(params, requestContexts, callback) {
params.log.debug('authenticating V2 request');
let serializedRCsArr;
if (requestContexts) {
serializedRCsArr = requestContexts.map(rc => rc.serialize());
}
this.client.verifySignatureV2(
params.data.stringToSign,
params.data.signatureFromRequest,
params.data.accessKey,
{
algo: params.data.algo,
reqUid: params.log.getSerializedUids(),
logger: params.log,
securityToken: params.data.securityToken,
requestContext: serializedRCsArr,
},
(err, userInfo) => vaultSignatureCb(err, userInfo,
params.log, callback)
);
}
/** authenticateV4Request
* @param {object} params - the authentication parameters as returned by
* auth.extractParams
* @param {number} params.version - shall equal 4
* @param {object} params.log - the logger object
* @param {string} params.data.accessKey - the user's accessKey
* @param {string} params.data.signatureFromRequest - the signature read
* from the request
* @param {string} params.data.region - the AWS region
* @param {string} params.data.stringToSign - the stringToSign
* @param {string} params.data.scopeDate - the timespan to allow the request
* @param {string} params.data.authType - the type of authentication (query
* or header)
* @param {string} params.data.signatureVersion - the version of the
* signature (AWS or AWS4)
* @param {number} params.data.signatureAge - the age of the signature in ms
* @param {number} params.data.timestamp - signature timestamp
* @param {string} params.data.credentialScope - credentialScope for
* signature
* @param {RequestContext[] | null} requestContexts - an array of
* RequestContext instances which contain information for policy
* authorization checks, or null when authenticating a chunk in
* streaming V4 auth
* @param {function} callback - callback with either error or user info
* @return {undefined}
*/
authenticateV4Request(params, requestContexts, callback) {
params.log.debug('authenticating V4 request');
let serializedRCs;
if (requestContexts) {
serializedRCs = requestContexts.map(rc => rc.serialize());
}
const streamingV4Params = {
accessKey: params.data.accessKey,
signatureFromRequest: params.data.signatureFromRequest,
region: params.data.region,
scopeDate: params.data.scopeDate,
timestamp: params.data.timestamp,
credentialScope: params.data.credentialScope };
this.client.verifySignatureV4(
params.data.stringToSign,
params.data.signatureFromRequest,
params.data.accessKey,
params.data.region,
params.data.scopeDate,
{
reqUid: params.log.getSerializedUids(),
logger: params.log,
securityToken: params.data.securityToken,
requestContext: serializedRCs,
},
(err, userInfo) => vaultSignatureCb(err, userInfo,
params.log, callback, streamingV4Params)
);
}
/** getCanonicalIds -- call Vault to get canonicalIDs based on email
* addresses
* @param {array} emailAddresses - list of emailAddresses
* @param {object} log - log object
* @param {function} callback - callback with either error or an array
* of objects with each object containing the canonicalID and emailAddress
* of an account as properties
* @return {undefined}
*/
getCanonicalIds(emailAddresses, log, callback) {
log.trace('getting canonicalIDs from Vault based on emailAddresses',
{ emailAddresses });
this.client.getCanonicalIds(emailAddresses,
{ reqUid: log.getSerializedUids() },
(err, info) => {
if (err) {
log.debug('received error message from auth provider',
{ errorMessage: err });
return callback(err);
}
const infoFromVault = info.message.body;
log.trace('info received from vault', { infoFromVault });
const foundIds = [];
const keys = Object.keys(infoFromVault);
for (let i = 0; i < keys.length; i++) {
const key = keys[i];
if (infoFromVault[key] === 'WrongFormat'
|| infoFromVault[key] === 'NotFound') {
return callback(errors.UnresolvableGrantByEmailAddress);
}
const obj = {};
obj.email = key;
obj.canonicalID = infoFromVault[key];
foundIds.push(obj);
}
return callback(null, foundIds);
});
}
/** getEmailAddresses -- call Vault to get email addresses based on
* canonicalIDs
* @param {array} canonicalIDs - list of canonicalIDs
* @param {object} log - log object
* @param {function} callback - callback with either error or an object
* with canonicalID keys and email address values
* @return {undefined}
*/
getEmailAddresses(canonicalIDs, log, callback) {
log.trace('getting emailAddresses from Vault based on canonicalIDs',
{ canonicalIDs });
this.client.getEmailAddresses(canonicalIDs,
{ reqUid: log.getSerializedUids() },
(err, info) => {
if (err) {
log.debug('received error message from vault',
{ errorMessage: err });
return callback(err);
}
const infoFromVault = info.message.body;
log.trace('info received from vault', { infoFromVault });
const result = {};
/* If the email address was not found in Vault, do not
send the canonicalID back to the API */
Object.keys(infoFromVault).forEach(key => {
if (infoFromVault[key] !== 'NotFound' &&
infoFromVault[key] !== 'WrongFormat') {
result[key] = infoFromVault[key];
}
});
return callback(null, result);
});
}
/** checkPolicies -- call Vault to evaluate policies
* @param {object} requestContextParams - parameters needed to construct
* requestContext in Vault
* @param {object} requestContextParams.constantParams - params that have
* the same value for each requestContext to be constructed in Vault
* @param {object} requestContextParams.parameterize - params that have
* arrays as values since a requestContext needs to be constructed with
* each option in Vault
* @param {string} userArn - arn of requesting user
* @param {object} log - log object
* @param {function} callback - callback with either error or an array
* of authorization results
* @return {undefined}
*/
checkPolicies(requestContextParams, userArn, log, callback) {
log.trace('sending request context params to vault to ' +
'evaluate policies');
this.client.checkPolicies(requestContextParams, userArn, {
reqUid: log.getSerializedUids(),
}, (err, info) => {
if (err) {
log.debug('received error message from auth provider',
{ error: err });
return callback(err);
}
const result = info.message.body;
return callback(null, result);
});
}
checkHealth(log, callback) {
if (!this.client.healthcheck) {
const defResp = {};
defResp[this.implName] = { code: 200, message: 'OK' };
return callback(null, defResp);
}
return this.client.healthcheck(log.getSerializedUids(), (err, obj) => {
const respBody = {};
if (err) {
log.debug(`error from ${this.implName}`, { error: err });
respBody[this.implName] = {
error: err,
};
// error returned as null so async parallel doesn't return
// before all backends are checked
return callback(null, respBody);
}
respBody[this.implName] = {
code: 200,
message: 'OK',
body: obj,
};
return callback(null, respBody);
});
}
}
module.exports = Vault;
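`checkHealth` above deliberately reports backend errors inside the response body and passes `null` as the callback error, so an `async.parallel`-style aggregator doesn't abort before every backend has answered. A hedged sketch of that pattern (the aggregator below is illustrative, not Arsenal code):

```javascript
// Minimal parallel health aggregator: each named check reports into a
// shared result object, and a failing check never short-circuits the
// others because its error is folded into the body, not the callback.
function checkAll(namedChecks, done) {
    const results = {};
    let pending = Object.keys(namedChecks).length;
    Object.keys(namedChecks).forEach(name => {
        namedChecks[name]((err, body) => {
            // mirror Vault.checkHealth: keep the error in the body
            results[name] = err ? { error: err.message }
                : { code: 200, message: 'OK', body };
            if (--pending === 0) {
                done(null, results);
            }
        });
    });
}
```

With this shape, the caller always gets one entry per backend, healthy or not.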

lib/auth/auth.js Normal file
@ -0,0 +1,220 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
const errors = require('../errors');
const queryString = require('querystring');
const AuthInfo = require('./AuthInfo');
const v2 = require('./v2/authV2');
const v4 = require('./v4/authV4');
const constants = require('../constants');
const constructStringToSignV4 = require('./v4/constructStringToSign');
const convertUTCtoISO8601 = require('./v4/timeUtils').convertUTCtoISO8601;
const vaultUtilities = require('./in_memory/vaultUtilities');
const backend = require('./in_memory/Backend');
const validateAuthConfig = require('./in_memory/validateAuthConfig');
const Vault = require('./Vault');
let vault = null;
const auth = {};
const checkFunctions = {
v2: {
headers: v2.header.check,
query: v2.query.check,
},
v4: {
headers: v4.header.check,
query: v4.query.check,
},
};
// If no auth information is provided in request, then user is part of
// 'All Users Group' so use this group as the canonicalID for the publicUser
const publicUserInfo = new AuthInfo({ canonicalID: constants.publicId });
function setAuthHandler(handler) {
vault = handler;
return auth;
}
/**
* This function will check validity of request parameters to authenticate
*
* @param {Http.Request} request - Http request object
* @param {object} log - Logger object
* @param {string} awsService - Aws service related
* @param {object} data - Parameters from queryString parsing or body of
* POST request
*
* @return {object} ret
* @return {object} ret.err - arsenal.errors object if any error was found
* @return {object} ret.params - auth parameters to use later on for signature
* computation and check
* @return {object} ret.params.version - the auth scheme version
* (undefined, 2, 4)
* @return {object} ret.params.data - the auth scheme's specific data
*/
function extractParams(request, log, awsService, data) {
log.trace('entered', { method: 'Arsenal.auth.server.extractParams' });
const authHeader = request.headers.authorization;
let version = null;
let method = null;
// Identify auth version and method to dispatch to the right check function
if (authHeader) {
method = 'headers';
// TODO: Check for security token header to handle temporary security
// credentials
if (authHeader.startsWith('AWS ')) {
version = 'v2';
} else if (authHeader.startsWith('AWS4')) {
version = 'v4';
} else {
log.trace('invalid authorization security header',
{ header: authHeader });
return { err: errors.AccessDenied };
}
} else if (data.Signature) {
method = 'query';
version = 'v2';
} else if (data['X-Amz-Algorithm']) {
method = 'query';
version = 'v4';
}
// Here, either both values are set, or none is set
if (version !== null && method !== null) {
if (!checkFunctions[version] || !checkFunctions[version][method]) {
log.trace('invalid auth version or method',
{ version, authMethod: method });
return { err: errors.NotImplemented };
}
log.trace('identified auth method', { version, authMethod: method });
return checkFunctions[version][method](request, log, data, awsService);
}
// no auth info identified
log.debug('assuming public user');
return { err: null, params: publicUserInfo };
}
/**
* This function will check validity of request parameters to authenticate
*
* @param {Http.Request} request - Http request object
* @param {object} log - Logger object
* @param {function} cb - the callback
* @param {string} awsService - Aws service related
* @param {RequestContext[] | null} requestContexts - array of RequestContext
* or null if no requestContexts to be sent to Vault (for instance,
* in multi-object delete request)
* @return {undefined}
*/
function doAuth(request, log, cb, awsService, requestContexts) {
const res = extractParams(request, log, awsService, request.query);
if (res.err) {
return cb(res.err);
} else if (res.params instanceof AuthInfo) {
return cb(null, res.params);
}
if (requestContexts) {
requestContexts.forEach(requestContext => {
requestContext.setAuthType(res.params.data.authType);
requestContext.setSignatureVersion(res.params
.data.signatureVersion);
requestContext.setSignatureAge(res.params.data.signatureAge);
requestContext.setSecurityToken(res.params.data.securityToken);
});
}
// Corner cases managed, we're left with normal auth
res.params.log = log;
if (res.params.version === 2) {
return vault.authenticateV2Request(res.params, requestContexts, cb);
}
if (res.params.version === 4) {
return vault.authenticateV4Request(res.params, requestContexts, cb,
awsService);
}
log.error('authentication method not found', {
method: 'Arsenal.auth.doAuth',
});
return cb(errors.InternalError);
}
/**
* This function will generate a version 4 header
*
* @param {Http.Request} request - Http request object
* @param {object} data - Parameters from queryString parsing or body of
* POST request
* @param {string} accessKey - the accessKey
* @param {string} secretKeyValue - the secretKey
* @param {string} awsService - Aws service related
* @return {undefined}
*/
function generateV4Headers(request, data, accessKey, secretKeyValue,
awsService) {
Object.assign(request, { headers: {} });
const amzDate = convertUTCtoISO8601(Date.now());
// get date without time
const scopeDate = amzDate.slice(0, amzDate.indexOf('T'));
const region = 'us-east-1';
const service = awsService || 'iam';
const credentialScope =
`${scopeDate}/${region}/${service}/aws4_request`;
const timestamp = amzDate;
const algorithm = 'AWS4-HMAC-SHA256';
let payload = '';
if (request.method === 'POST') {
payload = queryString.stringify(data, null, null, {
encodeURIComponent,
});
}
const payloadChecksum = crypto.createHash('sha256')
.update(payload, 'binary').digest('hex');
request.setHeader('host', request._headers.host);
request.setHeader('x-amz-date', amzDate);
request.setHeader('x-amz-content-sha256', payloadChecksum);
Object.assign(request.headers, request._headers);
const signedHeaders = Object.keys(request._headers)
.filter(headerName =>
headerName.startsWith('x-amz-')
|| headerName.startsWith('x-scal-')
|| headerName === 'host'
).sort().join(';');
const params = { request, signedHeaders, payloadChecksum,
credentialScope, timestamp, query: data,
awsService: service };
const stringToSign = constructStringToSignV4(params);
const signingKey = vaultUtilities.calculateSigningKey(secretKeyValue,
region,
scopeDate,
service);
const signature = crypto.createHmac('sha256', signingKey)
.update(stringToSign, 'binary').digest('hex');
const authorizationHeader = `${algorithm} Credential=${accessKey}` +
`/${credentialScope}, SignedHeaders=${signedHeaders}, ` +
`Signature=${signature}`;
request.setHeader('authorization', authorizationHeader);
Object.assign(request, { headers: {} });
}
module.exports = {
setHandler: setAuthHandler,
server: {
extractParams,
doAuth,
},
client: {
generateV4Headers,
},
inMemory: {
backend,
validateAuthConfig,
},
AuthInfo,
Vault,
};
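The version/method identification in `extractParams` boils down to a small decision table. A standalone sketch of that branching (the string return values are illustrative stand-ins for the error/public-user outcomes):

```javascript
// Identify the auth scheme from the Authorization header or the query
// string, mirroring the dispatch in extractParams above.
function identifyAuth(authHeader, query) {
    if (authHeader) {
        if (authHeader.startsWith('AWS ')) {
            return { version: 'v2', method: 'headers' };
        }
        if (authHeader.startsWith('AWS4')) {
            return { version: 'v4', method: 'headers' };
        }
        return 'accessDenied'; // unrecognized security header
    }
    if (query.Signature) {
        return { version: 'v2', method: 'query' };
    }
    if (query['X-Amz-Algorithm']) {
        return { version: 'v4', method: 'query' };
    }
    return 'publicUser'; // no auth info: 'All Users Group'
}
```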

lib/auth/in_memory/Backend.js Normal file
@ -0,0 +1,245 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
const errors = require('../../errors');
const calculateSigningKey = require('./vaultUtilities').calculateSigningKey;
const hashSignature = require('./vaultUtilities').hashSignature;
const Indexer = require('./Indexer');
function _buildArn(service, generalResource, specificResource) {
return `arn:aws:${service}:::${generalResource}/${specificResource}`;
}
function _formatResponse(userInfoToSend) {
return {
message: {
body: { userInfo: userInfoToSend },
},
};
}
/**
* Class that provides a memory backend for verifying signatures and getting
* emails and canonical ids associated with an account.
*
* @class Backend
*/
class Backend {
/**
* @constructor
* @param {string} service - service identifier for constructing the arn
* @param {Indexer} indexer - indexer instance for retrieving account info
* @param {function} formatter - function which accepts user info to send
* back and returns it in an object
*/
constructor(service, indexer, formatter) {
this.service = service;
this.indexer = indexer;
this.formatResponse = formatter;
}
/** verifySignatureV2
* @param {string} stringToSign - string to sign built per AWS rules
* @param {string} signatureFromRequest - signature sent with request
* @param {string} accessKey - user's accessKey
* @param {object} options - contains algorithm (SHA1 or SHA256)
* @param {function} callback - callback with either error or user info
* @return {function} calls callback
*/
verifySignatureV2(stringToSign, signatureFromRequest,
accessKey, options, callback) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const reconstructedSig =
hashSignature(stringToSign, secretKey, options.algo);
if (signatureFromRequest !== reconstructedSig) {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = this.formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
}
/** verifySignatureV4
* @param {string} stringToSign - string to sign built per AWS rules
* @param {string} signatureFromRequest - signature sent with request
* @param {string} accessKey - user's accessKey
* @param {string} region - region specified in request credential
* @param {string} scopeDate - date specified in request credential
* @param {object} options - options to send to Vault
* (just contains reqUid for logging in Vault)
* @param {function} callback - callback with either error or user info
* @return {function} calls callback
*/
verifySignatureV4(stringToSign, signatureFromRequest, accessKey,
region, scopeDate, options, callback) {
const entity = this.indexer.getEntityByKey(accessKey);
if (!entity) {
return callback(errors.InvalidAccessKeyId);
}
const secretKey = this.indexer.getSecretKey(entity, accessKey);
const signingKey = calculateSigningKey(secretKey, region, scopeDate);
const reconstructedSig = crypto.createHmac('sha256', signingKey)
.update(stringToSign, 'binary').digest('hex');
if (signatureFromRequest !== reconstructedSig) {
return callback(errors.SignatureDoesNotMatch);
}
const userInfoToSend = {
accountDisplayName: this.indexer.getAcctDisplayName(entity),
canonicalID: entity.canonicalID,
arn: entity.arn,
IAMdisplayName: entity.IAMdisplayName,
};
const vaultReturnObject = this.formatResponse(userInfoToSend);
return callback(null, vaultReturnObject);
}
/**
* Gets canonical ID's for a list of accounts
* based on email associated with account
* @param {array} emails - list of email addresses
* @param {object} log - log object
* @param {function} cb - callback to calling function
* @returns {function} callback with either error or
* object with email addresses as keys and canonical IDs
* as values
*/
getCanonicalIds(emails, log, cb) {
const results = {};
emails.forEach(email => {
const lowercasedEmail = email.toLowerCase();
const entity = this.indexer.getEntityByEmail(lowercasedEmail);
if (!entity) {
results[email] = 'NotFound';
} else {
results[email] =
entity.canonicalID;
}
});
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
/**
* Gets email addresses (referred to as display names for getACLs)
* for a list of accounts based on canonical IDs associated with account
* @param {array} canonicalIDs - list of canonicalIDs
* @param {object} options - to send log id to vault
* @param {function} cb - callback to calling function
* @returns {function} callback with either error or
* an object from Vault containing account canonicalID
* as each object key and an email address as the value (or "NotFound")
*/
getEmailAddresses(canonicalIDs, options, cb) {
const results = {};
canonicalIDs.forEach(canonicalId => {
const foundEntity = this.indexer.getEntityByCanId(canonicalId);
if (!foundEntity || !foundEntity.email) {
results[canonicalId] = 'NotFound';
} else {
results[canonicalId] = foundEntity.email;
}
});
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
/**
* Mocks Vault's response to a policy evaluation request
* Since policies not actually implemented in memory backend,
* we allow users to proceed with request.
* @param {object} requestContextParams - parameters needed to construct
* requestContext in Vault
* @param {object} requestContextParams.constantParams -
* params that have the
* same value for each requestContext to be constructed in Vault
* @param {object} requestContextParams.parameterize - params that have
* arrays as values since a requestContext needs to be constructed with
* each option in Vault
* @param {object[]} requestContextParams.parameterize.specificResource -
* specific resources parameterized as an array of objects containing
* properties `key` and optional `versionId`
* properties `key` and optional `versionId`
* @param {string} userArn - arn of requesting user
* @param {object} log - log object
* @param {function} cb - callback with either error or an array
* of authorization results
* @returns {undefined}
* @callback called with (err, vaultReturnObject)
*/
checkPolicies(requestContextParams, userArn, log, cb) {
let results;
const parameterizeParams = requestContextParams.parameterize;
if (parameterizeParams && parameterizeParams.specificResource) {
// object is parameterized
results = parameterizeParams.specificResource.map(obj => ({
isAllowed: true,
arn: _buildArn(this.service, requestContextParams
.constantParams.generalResource, obj.key),
versionId: obj.versionId,
}));
} else {
results = [{
isAllowed: true,
arn: _buildArn(this.service, requestContextParams
.constantParams.generalResource, requestContextParams
.constantParams.specificResource),
}];
}
const vaultReturnObject = {
message: {
body: results,
},
};
return cb(null, vaultReturnObject);
}
}
class S3AuthBackend extends Backend {
/**
* @constructor
* @param {object} authdata - the authentication config file's data
* @param {object[]} authdata.accounts - array of account objects
* @param {string=} authdata.accounts[].name - account name
* @param {string} authdata.accounts[].email - account email
* @param {string} authdata.accounts[].arn - IAM resource name
* @param {string} authdata.accounts[].canonicalID - account canonical ID
* @param {string} authdata.accounts[].shortid - short account ID
* @param {object[]=} authdata.accounts[].keys - array of key objects
* @param {string} authdata.accounts[].keys[].access - access key
* @param {string} authdata.accounts[].keys[].secret - secret key
* @param {object[]=} authdata.accounts[].users - array of user objects:
* note, same properties as account except no canonical ID / sas token
* @param {string=} authdata.accounts[].sasToken - Azure SAS token
* @return {undefined}
*/
constructor(authdata) {
super('s3', new Indexer(authdata), _formatResponse);
}
refreshAuthData(authData) {
this.indexer = new Indexer(authData);
}
}
module.exports = {
s3: S3AuthBackend,
};

lib/auth/in_memory/Indexer.js Normal file
@ -0,0 +1,180 @@
/**
* Class that provides an internal indexing over the simple data provided by
* the authentication configuration file for the memory backend. This allows
* accessing the different authentication entities through various types of
* keys.
*
* @class Indexer
*/
class Indexer {
/**
* @constructor
* @param {object} authdata - the authentication config file's data
* @param {object[]} authdata.accounts - array of account objects
* @param {string=} authdata.accounts[].name - account name
* @param {string} authdata.accounts[].email - account email
* @param {string} authdata.accounts[].arn - IAM resource name
* @param {string} authdata.accounts[].canonicalID - account canonical ID
* @param {string} authdata.accounts[].shortid - short account ID
* @param {object[]=} authdata.accounts[].keys - array of key objects
* @param {string} authdata.accounts[].keys[].access - access key
* @param {string} authdata.accounts[].keys[].secret - secret key
* @param {object[]=} authdata.accounts[].users - array of user objects:
* note, same properties as account except no canonical ID / sas token
* @param {string=} authdata.accounts[].sasToken - Azure SAS token
* @return {undefined}
*/
constructor(authdata) {
this.accountsBy = {
canId: {},
accessKey: {},
email: {},
};
this.usersBy = {
accessKey: {},
email: {},
};
/*
* This may happen if the application is configured to use another
* authentication backend than in-memory.
* As such, we handle the missing data here rather than failing later.
*/
if (!authdata) {
return;
}
this._build(authdata);
}
_indexUser(account, user) {
const userData = {
arn: account.arn,
canonicalID: account.canonicalID,
shortid: account.shortid,
accountDisplayName: account.accountDisplayName,
IAMdisplayName: user.name,
email: user.email.toLowerCase(),
keys: [],
};
this.usersBy.email[userData.email] = userData;
user.keys.forEach(key => {
userData.keys.push(key);
this.usersBy.accessKey[key.access] = userData;
});
}
_indexAccount(account) {
const accountData = {
arn: account.arn,
canonicalID: account.canonicalID,
shortid: account.shortid,
accountDisplayName: account.name,
email: account.email.toLowerCase(),
keys: [],
};
this.accountsBy.canId[accountData.canonicalID] = accountData;
this.accountsBy.email[accountData.email] = accountData;
if (account.keys !== undefined) {
account.keys.forEach(key => {
accountData.keys.push(key);
this.accountsBy.accessKey[key.access] = accountData;
});
}
if (account.users !== undefined) {
account.users.forEach(user => {
this._indexUser(accountData, user);
});
}
}
_build(authdata) {
authdata.accounts.forEach(account => {
this._indexAccount(account);
});
}
/**
* This method returns the account associated to a canonical ID.
*
* @param {string} canId - The canonicalId of the account
* @return {Object} account - The account object
* @return {Object} account.arn - The account's ARN
* @return {Object} account.canonicalID - The account's canonical ID
* @return {Object} account.shortid - The account's internal shortid
* @return {Object} account.accountDisplayName - The account's display name
* @return {Object} account.email - The account's lowercased email
*/
getEntityByCanId(canId) {
return this.accountsBy.canId[canId];
}
/**
 * This method returns the entity (either an account or a user) associated
 * with an access key.
 *
 * @param {string} key - The accessKey of the entity
 * @return {Object} entity - The entity object
 * @return {string} entity.arn - The entity's ARN
 * @return {string} entity.canonicalID - The canonical ID for the entity's
 * account
 * @return {string} entity.shortid - The entity's internal shortid
 * @return {string} entity.accountDisplayName - The entity's account
 * display name
 * @return {string} entity.IAMDisplayName - The user's display name
 * (if the entity is a user)
 * @return {string} entity.email - The entity's lowercased email
*/
getEntityByKey(key) {
if (this.accountsBy.accessKey.hasOwnProperty(key)) {
return this.accountsBy.accessKey[key];
}
return this.usersBy.accessKey[key];
}
/**
* This method returns the entity (either an account or a user) associated
* to an email address.
*
* @param {string} email - The email address
* @return {Object} entity - The entity object
 * @return {string} entity.arn - The entity's ARN
 * @return {string} entity.canonicalID - The canonical ID for the entity's
 * account
 * @return {string} entity.shortid - The entity's internal shortid
 * @return {string} entity.accountDisplayName - The entity's account
 * display name
 * @return {string} entity.IAMDisplayName - The user's display name
 * (if the entity is a user)
 * @return {string} entity.email - The entity's lowercased email
*/
getEntityByEmail(email) {
const lowerCasedEmail = email.toLowerCase();
if (this.usersBy.email.hasOwnProperty(lowerCasedEmail)) {
return this.usersBy.email[lowerCasedEmail];
}
return this.accountsBy.email[lowerCasedEmail];
}
/**
* This method returns the secret key associated with the entity.
* @param {Object} entity - the entity object
* @param {string} accessKey - access key
* @returns {string} secret key
*/
getSecretKey(entity, accessKey) {
return entity.keys
.filter(kv => kv.access === accessKey)[0].secret;
}
/**
* This method returns the account display name associated with the entity.
* @param {Object} entity - the entity object
* @returns {string} account display name
*/
getAcctDisplayName(entity) {
return entity.accountDisplayName;
}
}
module.exports = Indexer;
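The per-key maps that `_indexAccount` and `_indexUser` populate all reference a single shared record, which is what lets `getEntityByCanId`, `getEntityByKey`, and `getEntityByEmail` resolve in constant time. Below is a standalone sketch of that pattern using hypothetical sample data (the `Indexer` constructor itself is not shown in this excerpt):

```javascript
'use strict'; // eslint-disable-line strict

// Lookup tables keyed three different ways, all pointing at one record.
const accountsBy = { canId: {}, email: {}, accessKey: {} };

function indexAccount(account) {
    const accountData = {
        arn: account.arn,
        canonicalID: account.canonicalID,
        email: account.email.toLowerCase(),
        keys: [],
    };
    accountsBy.canId[accountData.canonicalID] = accountData;
    accountsBy.email[accountData.email] = accountData;
    (account.keys || []).forEach(key => {
        accountData.keys.push(key);
        accountsBy.accessKey[key.access] = accountData;
    });
}

// Hypothetical account entry, shaped like the auth config file.
indexAccount({
    arn: 'arn:aws:iam::123456789012:root',
    canonicalID: 'abcd1234',
    email: 'Admin@Example.com',
    keys: [{ access: 'AKIDEXAMPLE', secret: 'notARealSecret' }],
});

// All three indexes resolve to the same in-memory object.
const byKey = accountsBy.accessKey.AKIDEXAMPLE;
const byEmail = accountsBy.email['admin@example.com'];
```

Because each map stores a reference rather than a copy, any mutation of the record (e.g. pushing another key) is visible through every index.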


@ -0,0 +1,194 @@
const werelogs = require('werelogs');
function _incr(count) {
if (count !== undefined) {
return count + 1;
}
return 1;
}
/**
* This function ensures that the field `name` inside `container` is of the
* expected `type` inside `obj`. If any error is found, an entry is added into
* the error collector object.
*
* @param {object} data - the error collector object
* @param {string} container - the name of the entity that contains
* what we're checking
* @param {string} name - the name of the entity we're checking for
* @param {string} type - expected typename of the entity we're checking
* @param {object} obj - the object we're checking the fields of
* @return {boolean} true if the type is Ok and no error found
* false if an error was found and reported
*/
function _checkType(data, container, name, type, obj) {
if ((type === 'array' && !Array.isArray(obj[name]))
|| (type !== 'array' && typeof obj[name] !== type)) {
data.errors.push({
txt: 'property is not of the expected type',
obj: {
entity: container,
property: name,
type: typeof obj[name],
expectedType: type,
},
});
return false;
}
return true;
}
/**
 * This function ensures that the field `name` exists inside `obj`, whose
 * containing entity is named `container`, and that it has the expected
 * `type`. If any error is found, an entry is added into the error
* collector object.
*
* @param {object} data - the error collector object
* @param {string} container - the name of the entity that contains
* what we're checking
* @param {string} name - the name of the entity we're checking for
* @param {string} type - expected typename of the entity we're checking
* @param {object} obj - the object we're checking the fields of
* @return {boolean} true if the field exists and type is Ok
* false if an error was found and reported
*/
function _checkExists(data, container, name, type, obj) {
if (obj[name] === undefined) {
data.errors.push({
txt: 'missing property in auth entity',
obj: {
entity: container,
property: name,
},
});
return false;
}
return _checkType(data, container, name, type, obj);
}
function _checkUser(data, userObj) {
if (_checkExists(data, 'User', 'arn', 'string', userObj)) {
// eslint-disable-next-line no-param-reassign
data.arns[userObj.arn] = _incr(data.arns[userObj.arn]);
}
if (_checkExists(data, 'User', 'email', 'string', userObj)) {
// eslint-disable-next-line no-param-reassign
data.emails[userObj.email] = _incr(data.emails[userObj.email]);
}
if (_checkExists(data, 'User', 'keys', 'array', userObj)) {
userObj.keys.forEach(keyObj => {
// eslint-disable-next-line no-param-reassign
data.keys[keyObj.access] = _incr(data.keys[keyObj.access]);
});
}
}
function _checkAccount(data, accountObj, checkSas) {
if (_checkExists(data, 'Account', 'email', 'string', accountObj)) {
// eslint-disable-next-line no-param-reassign
data.emails[accountObj.email] = _incr(data.emails[accountObj.email]);
}
if (_checkExists(data, 'Account', 'arn', 'string', accountObj)) {
// eslint-disable-next-line no-param-reassign
data.arns[accountObj.arn] = _incr(data.arns[accountObj.arn]);
}
if (_checkExists(data, 'Account', 'canonicalID', 'string', accountObj)) {
// eslint-disable-next-line no-param-reassign
data.canonicalIds[accountObj.canonicalID] =
_incr(data.canonicalIds[accountObj.canonicalID]);
}
if (checkSas &&
_checkExists(data, 'Account', 'sasToken', 'string', accountObj)) {
// eslint-disable-next-line no-param-reassign
data.sasTokens[accountObj.sasToken] =
_incr(data.sasTokens[accountObj.sasToken]);
}
if (accountObj.users) {
if (_checkType(data, 'Account', 'users', 'array', accountObj)) {
accountObj.users.forEach(userObj => _checkUser(data, userObj));
}
}
if (accountObj.keys) {
if (_checkType(data, 'Account', 'keys', 'array', accountObj)) {
accountObj.keys.forEach(keyObj => {
// eslint-disable-next-line no-param-reassign
data.keys[keyObj.access] = _incr(data.keys[keyObj.access]);
});
}
}
}
function _dumpCountError(property, obj, log) {
let count = 0;
Object.keys(obj).forEach(key => {
if (obj[key] > 1) {
log.error('property should be unique', {
property,
value: key,
count: obj[key],
});
++count;
}
});
return count;
}
function _dumpErrors(checkData, log) {
let nerr = _dumpCountError('CanonicalID', checkData.canonicalIds, log);
nerr += _dumpCountError('Email', checkData.emails, log);
nerr += _dumpCountError('ARN', checkData.arns, log);
nerr += _dumpCountError('AccessKey', checkData.keys, log);
nerr += _dumpCountError('SAS Token', checkData.sasTokens, log);
if (checkData.errors.length > 0) {
checkData.errors.forEach(msg => {
log.error(msg.txt, msg.obj);
});
}
if (checkData.errors.length === 0 && nerr === 0) {
return false;
}
log.fatal('invalid authentication config file (cannot start)');
return true;
}
/**
* @param {object} authdata - the authentication config file's data
* @param {werelogs.API} logApi - object providing a constructor function
* for the Logger object
 * @param {(boolean|null)} checkSas - whether to check the Azure SAS token
 * of each account
* @return {boolean} true on erroneous data
* false on success
*/
function validateAuthConfig(authdata, logApi, checkSas) {
const checkData = {
errors: [],
        emails: {},
        arns: {},
        canonicalIds: {},
        keys: {},
        sasTokens: {},
};
const log = new (logApi || werelogs).Logger('S3');
if (authdata.accounts === undefined) {
checkData.errors.push({
txt: 'no "accounts" array defined in Auth config',
});
return _dumpErrors(checkData, log);
}
authdata.accounts.forEach(account => {
_checkAccount(checkData, account, checkSas);
});
return _dumpErrors(checkData, log);
}
module.exports = validateAuthConfig;
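The counting helpers above reduce uniqueness validation to a histogram: `_incr` tallies every occurrence of a value, and `_dumpCountError` flags any value seen more than once. A minimal standalone sketch of that pattern with made-up data:

```javascript
'use strict'; // eslint-disable-line strict

// Tally one more occurrence of a value (mirrors _incr above).
function incr(count) {
    return count !== undefined ? count + 1 : 1;
}

// Hypothetical email list with one duplicate.
const emails = {};
['a@x.com', 'b@x.com', 'a@x.com'].forEach(email => {
    emails[email] = incr(emails[email]);
});

// Any value counted more than once violates the uniqueness constraint.
const duplicates = Object.keys(emails).filter(key => emails[key] > 1);
```

The same single pass over the config therefore both builds the histogram and (via a second cheap pass) reports every duplicate, rather than comparing entries pairwise.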


@ -0,0 +1,35 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
/** hashSignature for v2 Auth
* @param {string} stringToSign - built string to sign per AWS rules
* @param {string} secretKey - user's secretKey
* @param {string} algorithm - either SHA256 or SHA1
* @return {string} reconstructed signature
*/
function hashSignature(stringToSign, secretKey, algorithm) {
const hmacObject = crypto.createHmac(algorithm, secretKey);
return hmacObject.update(stringToSign, 'binary').digest('base64');
}
/** calculateSigningKey for v4 Auth
* @param {string} secretKey - requester's secretKey
* @param {string} region - region included in request
* @param {string} scopeDate - scopeDate included in request
* @param {string} [service] - To specify another service than s3
* @return {string} signingKey - signingKey to calculate signature
*/
function calculateSigningKey(secretKey, region, scopeDate, service) {
const dateKey = crypto.createHmac('sha256', `AWS4${secretKey}`)
.update(scopeDate, 'binary').digest();
const dateRegionKey = crypto.createHmac('sha256', dateKey)
.update(region, 'binary').digest();
const dateRegionServiceKey = crypto.createHmac('sha256', dateRegionKey)
.update(service || 's3', 'binary').digest();
const signingKey = crypto.createHmac('sha256', dateRegionServiceKey)
.update('aws4_request', 'binary').digest();
return signingKey;
}
module.exports = { hashSignature, calculateSigningKey };

lib/auth/v2/algoCheck.js

@ -0,0 +1,19 @@
'use strict'; // eslint-disable-line strict
function algoCheck(signatureLength) {
let algo;
// If the signature sent is 44 characters,
// this means that sha256 was used:
// 44 characters in base64
const SHA256LEN = 44;
const SHA1LEN = 28;
if (signatureLength === SHA256LEN) {
algo = 'sha256';
}
if (signatureLength === SHA1LEN) {
algo = 'sha1';
}
return algo;
}
module.exports = algoCheck;

lib/auth/v2/authV2.js

@ -0,0 +1,11 @@
'use strict'; // eslint-disable-line strict
const headerAuthCheck = require('./headerAuthCheck');
const queryAuthCheck = require('./queryAuthCheck');
const authV2 = {
header: headerAuthCheck,
query: queryAuthCheck,
};
module.exports = authV2;


@ -0,0 +1,36 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../errors');
const epochTime = new Date('1970-01-01').getTime();
function checkRequestExpiry(timestamp, log) {
// If timestamp is before epochTime, the request is invalid and return
// errors.AccessDenied
if (timestamp < epochTime) {
log.debug('request time is invalid', { timestamp });
return errors.AccessDenied;
}
// If timestamp is not within 15 minutes of current time, or if
// timestamp is more than 15 minutes in the future, the request
// has expired and return errors.RequestTimeTooSkewed
const currentTime = Date.now();
log.trace('request timestamp', { requestTimestamp: timestamp });
log.trace('current timestamp', { currentTimestamp: currentTime });
const fifteenMinutes = (15 * 60 * 1000);
if (currentTime - timestamp > fifteenMinutes) {
log.trace('request timestamp is not within 15 minutes of current time');
log.debug('request time too skewed', { timestamp });
return errors.RequestTimeTooSkewed;
}
if (currentTime + fifteenMinutes < timestamp) {
log.trace('request timestamp is more than 15 minutes into future');
log.debug('request time too skewed', { timestamp });
return errors.RequestTimeTooSkewed;
}
return undefined;
}
module.exports = checkRequestExpiry;
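The expiry rule above boils down to a symmetric 15-minute window around the server clock. A condensed re-statement of just the window test (a sketch, not the module's exact error returns):

```javascript
'use strict'; // eslint-disable-line strict

const fifteenMinutes = 15 * 60 * 1000;

// A timestamp is skewed if it is more than 15 minutes in the past
// OR more than 15 minutes in the future.
function isSkewed(timestamp, now) {
    return (now - timestamp > fifteenMinutes)
        || (now + fifteenMinutes < timestamp);
}

const now = Date.now();
const fresh = isSkewed(now - 60 * 1000, now);      // 1 minute old: ok
const stale = isSkewed(now - 16 * 60 * 1000, now); // 16 minutes old: skewed
```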


@ -0,0 +1,46 @@
'use strict'; // eslint-disable-line strict
const utf8 = require('utf8');
const getCanonicalizedAmzHeaders = require('./getCanonicalizedAmzHeaders');
const getCanonicalizedResource = require('./getCanonicalizedResource');
function constructStringToSign(request, data, log) {
/*
Build signature per AWS requirements:
StringToSign = HTTP-Verb + '\n' +
Content-MD5 + '\n' +
Content-Type + '\n' +
Date (or Expiration for query Auth) + '\n' +
CanonicalizedAmzHeaders +
CanonicalizedResource;
*/
log.trace('constructing string to sign');
let stringToSign = `${request.method}\n`;
const headers = request.headers;
const query = data;
const contentMD5 = headers['content-md5'] ?
headers['content-md5'] : query['Content-MD5'];
stringToSign += (contentMD5 ? `${contentMD5}\n` : '\n');
const contentType = headers['content-type'] ?
headers['content-type'] : query['Content-Type'];
stringToSign += (contentType ? `${contentType}\n` : '\n');
/*
AWS docs are conflicting on whether to include x-amz-date header here
if present in request.
s3cmd includes x-amz-date in amzHeaders rather
than here in stringToSign so we have replicated that.
*/
const date = query.Expires ? query.Expires : headers.date;
const combinedQueryHeaders = Object.assign({}, headers, query);
stringToSign += (date ? `${date}\n` : '\n')
+ getCanonicalizedAmzHeaders(combinedQueryHeaders)
+ getCanonicalizedResource(request);
return utf8.encode(stringToSign);
}
module.exports = constructStringToSign;


@ -0,0 +1,44 @@
'use strict'; // eslint-disable-line strict
function getCanonicalizedAmzHeaders(headers) {
/*
Iterate through headers and pull any headers that are x-amz headers.
    Need to include 'x-amz-date' here even though the AWS docs
    are ambiguous on this.
*/
const amzHeaders = Object.keys(headers)
.filter(val => val.substr(0, 6) === 'x-amz-')
.map(val => [val.trim(), headers[val].trim()]);
/*
AWS docs state that duplicate headers should be combined
in the same header with values concatenated with
a comma separation.
Node combines duplicate headers and concatenates the values
with a comma AND SPACE separation.
Could replace all occurrences of ', ' with ',' but this
would remove spaces that might be desired
(for instance, in date header).
Opted to proceed without this parsing since it does not appear
that the AWS clients use duplicate headers.
*/
// If there are no amz headers, just return an empty string
if (amzHeaders.length === 0) {
return '';
}
// Sort the amz headers by key (first item in tuple)
amzHeaders.sort((a, b) => {
if (a[0] > b[0]) {
return 1;
}
return -1;
});
// Build headerString
return amzHeaders.reduce((headerStr, current) =>
`${headerStr}${current[0]}:${current[1]}\n`,
'');
}
module.exports = getCanonicalizedAmzHeaders;
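A condensed re-statement of the filter, sort, and reduce pipeline above, applied to a hypothetical header set to show the canonical output shape (`name:value\n` pairs in key order):

```javascript
'use strict'; // eslint-disable-line strict

// Keep only x-amz- headers, trim, sort by name, join as name:value lines.
function canonicalizeAmzHeaders(headers) {
    const amz = Object.keys(headers)
        .filter(name => name.startsWith('x-amz-'))
        .map(name => [name.trim(), headers[name].trim()])
        .sort((a, b) => (a[0] > b[0] ? 1 : -1));
    return amz.reduce((str, [name, value]) => `${str}${name}:${value}\n`, '');
}

const result = canonicalizeAmzHeaders({
    'content-type': 'text/plain',
    'x-amz-date': 'Tue, 27 Mar 2007 19:36:42 +0000',
    'x-amz-acl': 'public-read',
});
```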


@ -0,0 +1,101 @@
'use strict'; // eslint-disable-line strict
const url = require('url');
function getCanonicalizedResource(request) {
/*
This variable is used to determine whether to insert
a '?' or '&'. Once a query parameter is added to the resourceString,
it changes to '&' before any new query parameter is added.
*/
let queryChar = '?';
// If bucket specified in hostname, add to resourceString
let resourceString = request.gotBucketNameFromHost ?
`/${request.bucketName}` : '';
// Add the path to the resourceString
resourceString += url.parse(request.url).pathname;
/*
If request includes a specified subresource,
add to the resourceString: (a) a '?', (b) the subresource,
and (c) its value (if any).
Separate multiple subresources with '&'.
Subresources must be in alphabetical order.
*/
// Specified subresources:
const subresources = [
'acl',
'cors',
'delete',
'lifecycle',
'location',
'logging',
'notification',
'partNumber',
'policy',
'requestPayment',
'tagging',
'torrent',
'uploadId',
'uploads',
'versionId',
'versioning',
'replication',
'versions',
'website',
];
/*
    If the request includes query-string parameters that override
    response headers, include them in the resourceString
    along with their values.
AWS is ambiguous about format. Used alphabetical order.
*/
const overridingParams = [
'response-cache-control',
'response-content-disposition',
'response-content-encoding',
'response-content-language',
'response-content-type',
'response-expires',
];
// Check which specified subresources are present in query string,
// build array with them
const query = request.query;
const presentSubresources = Object.keys(query).filter(val =>
subresources.indexOf(val) !== -1);
// Sort the array and add the subresources and their value (if any)
// to the resourceString
presentSubresources.sort();
resourceString = presentSubresources.reduce((prev, current) => {
const ch = (query[current] !== '' ? '=' : '');
const ret = `${prev}${queryChar}${current}${ch}${query[current]}`;
queryChar = '&';
return ret;
}, resourceString);
// Add the overriding parameters to our resourceString
resourceString = overridingParams.reduce((prev, current) => {
if (query[current]) {
const ret = `${prev}${queryChar}${current}=${query[current]}`;
queryChar = '&';
return ret;
}
return prev;
}, resourceString);
/*
Per AWS, the delete query string parameter must be included when
you create the CanonicalizedResource for a multi-object Delete request.
Unclear what this means for a single item delete request.
*/
if (request.query.delete) {
// Addresses adding '?' instead of '&' if no other params added.
resourceString += `${queryChar}delete=${query.delete}`;
}
return resourceString;
}
module.exports = getCanonicalizedResource;


@ -0,0 +1,84 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../errors');
const constants = require('../../constants');
const constructStringToSign = require('./constructStringToSign');
const checkRequestExpiry = require('./checkRequestExpiry');
const algoCheck = require('./algoCheck');
function check(request, log, data) {
log.trace('running header auth check');
const headers = request.headers;
const token = headers['x-amz-security-token'];
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
// Check to make sure timestamp is within 15 minutes of current time
let timestamp = headers['x-amz-date'] ?
headers['x-amz-date'] : headers.date;
timestamp = Date.parse(timestamp);
if (!timestamp) {
log.debug('missing or invalid date header',
{ method: 'auth/v2/headerAuthCheck.check' });
return { err: errors.AccessDenied.
customizeDescription('Authentication requires a valid Date or ' +
'x-amz-date header') };
}
const err = checkRequestExpiry(timestamp, log);
if (err) {
return { err };
}
// Authorization Header should be
// in the format of 'AWS AccessKey:Signature'
const authInfo = headers.authorization;
if (!authInfo) {
log.debug('missing authorization security header');
return { err: errors.MissingSecurityHeader };
}
    const colonIndex = authInfo.indexOf(':');
    if (colonIndex < 0) {
        log.debug('invalid authorization header', { authInfo });
        return { err: errors.InvalidArgument };
    }
    const accessKey = colonIndex > 4 ?
        authInfo.substring(4, colonIndex).trim() : undefined;
    if (typeof accessKey !== 'string' || accessKey.length === 0) {
        log.trace('invalid authorization header', { authInfo });
        return { err: errors.MissingSecurityHeader };
    }
    log.addDefaultFields({ accessKey });
    const signatureFromRequest = authInfo.substring(colonIndex + 1).trim();
log.trace('signature from request', { signatureFromRequest });
const stringToSign = constructStringToSign(request, data, log);
log.trace('constructed string to sign', { stringToSign });
const algo = algoCheck(signatureFromRequest.length);
log.trace('algo for calculating signature', { algo });
if (algo === undefined) {
return { err: errors.InvalidArgument };
}
return {
err: null,
params: {
version: 2,
data: {
accessKey,
signatureFromRequest,
stringToSign,
algo,
authType: 'REST-HEADER',
signatureVersion: 'AWS',
signatureAge: Date.now() - timestamp,
securityToken: token,
},
},
};
}
module.exports = { check };


@ -0,0 +1,81 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../errors');
const constants = require('../../constants');
const algoCheck = require('./algoCheck');
const constructStringToSign = require('./constructStringToSign');
function check(request, log, data) {
log.trace('running query auth check');
if (request.method === 'POST') {
log.debug('query string auth not supported for post requests');
return { err: errors.NotImplemented };
}
const token = data.SecurityToken;
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
/*
Check whether request has expired or if
expires parameter is more than 100000000 milliseconds
(1 day and 4 hours) in the future.
Expires time is provided in seconds so need to
multiply by 1000 to obtain
milliseconds to compare to Date.now()
*/
const expirationTime = parseInt(data.Expires, 10) * 1000;
if (isNaN(expirationTime)) {
log.debug('invalid expires parameter',
{ expires: data.Expires });
return { err: errors.MissingSecurityHeader };
}
const currentTime = Date.now();
// 100000000 ms (one day and 4 hours).
if (expirationTime > currentTime + 100000000) {
log.debug('expires parameter too far in future',
{ expires: request.query.Expires });
return { err: errors.AccessDenied };
}
if (currentTime > expirationTime) {
log.debug('current time exceeds expires time',
{ expires: request.query.Expires });
return { err: errors.RequestTimeTooSkewed };
}
const accessKey = data.AWSAccessKeyId;
log.addDefaultFields({ accessKey });
const signatureFromRequest = decodeURIComponent(data.Signature);
log.trace('signature from request', { signatureFromRequest });
if (!accessKey || !signatureFromRequest) {
log.debug('invalid access key/signature parameters');
return { err: errors.MissingSecurityHeader };
}
const stringToSign = constructStringToSign(request, data, log);
log.trace('constructed string to sign', { stringToSign });
const algo = algoCheck(signatureFromRequest.length);
log.trace('algo for calculating signature', { algo });
if (algo === undefined) {
return { err: errors.InvalidArgument };
}
return {
err: null,
params: {
version: 2,
data: {
accessKey,
signatureFromRequest,
stringToSign,
algo,
authType: 'REST-QUERY-STRING',
signatureVersion: 'AWS',
securityToken: token,
},
},
};
}
module.exports = { check };

lib/auth/v4/authV4.js

@ -0,0 +1,11 @@
'use strict'; // eslint-disable-line strict
const headerAuthCheck = require('./headerAuthCheck');
const queryAuthCheck = require('./queryAuthCheck');
const authV4 = {
header: headerAuthCheck,
query: queryAuthCheck,
};
module.exports = authV4;


@ -0,0 +1,57 @@
'use strict'; // eslint-disable-line strict
/*
AWS's URI encoding rules:
URI encode every byte. Uri-Encode() must enforce the following rules:
URI encode every byte except the unreserved characters:
'A'-'Z', 'a'-'z', '0'-'9', '-', '.', '_', and '~'.
The space character is a reserved character and must be
encoded as "%20" (and not as "+").
Each Uri-encoded byte is formed by a '%' and the two-digit
hexadecimal value of the byte.
Letters in the hexadecimal value must be uppercase, for example "%1A".
Encode the forward slash character, '/',
everywhere except in the object key name.
For example, if the object key name is photos/Jan/sample.jpg,
the forward slash in the key name is not encoded.
See http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
*/
// converts utf8 character to hex and pads "%" before every two hex digits
function _toHexUTF8(char) {
const hexRep = Buffer.from(char, 'utf8').toString('hex').toUpperCase();
let res = '';
hexRep.split('').forEach((v, n) => {
// pad % before every 2 hex digits
if (n % 2 === 0) {
res += '%';
}
res += v;
});
return res;
}
function awsURIencode(input, encodeSlash) {
const encSlash = encodeSlash === undefined ? true : encodeSlash;
let encoded = '';
for (let i = 0; i < input.length; i++) {
const ch = input.charAt(i);
if ((ch >= 'A' && ch <= 'Z') ||
(ch >= 'a' && ch <= 'z') ||
(ch >= '0' && ch <= '9') ||
ch === '_' || ch === '-' ||
ch === '~' || ch === '.') {
encoded = encoded.concat(ch);
} else if (ch === ' ') {
encoded = encoded.concat('%20');
} else if (ch === '/') {
encoded = encoded.concat(encSlash ? '%2F' : ch);
} else {
encoded = encoded.concat(_toHexUTF8(ch));
}
}
return encoded;
}
module.exports = awsURIencode;
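A spot-check of the encoding rules: spaces become `%20` (never `+`), and multi-byte UTF-8 characters are percent-encoded byte by byte with uppercase hex. The helper below mirrors `_toHexUTF8` above, using a regex instead of a loop to insert the `%` before each byte:

```javascript
'use strict'; // eslint-disable-line strict

// Percent-encode every UTF-8 byte of a character with uppercase hex.
function toHexUTF8(char) {
    return Buffer.from(char, 'utf8').toString('hex').toUpperCase()
        .replace(/(..)/g, '%$1');
}

const encodedAccent = toHexUTF8('é'); // 'é' is bytes C3 A9 in UTF-8
const encodedSpace = toHexUTF8(' ');  // single byte 0x20
```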


@ -0,0 +1,48 @@
'use strict'; // eslint-disable-line strict
const crypto = require('crypto');
const createCanonicalRequest = require('./createCanonicalRequest');
/**
* constructStringToSign - creates V4 stringToSign
* @param {object} params - params object
* @returns {string} - stringToSign
*/
function constructStringToSign(params) {
const request = params.request;
const signedHeaders = params.signedHeaders;
const payloadChecksum = params.payloadChecksum;
const credentialScope = params.credentialScope;
const timestamp = params.timestamp;
const query = params.query;
const log = params.log;
const canonicalReqResult = createCanonicalRequest({
pHttpVerb: request.method,
pResource: request.path,
pQuery: query,
pHeaders: request.headers,
pSignedHeaders: signedHeaders,
payloadChecksum,
service: params.awsService,
});
if (canonicalReqResult instanceof Error) {
if (log) {
log.error('error creating canonicalRequest');
}
return canonicalReqResult;
}
if (log) {
log.debug('constructed canonicalRequest', { canonicalReqResult });
}
const sha256 = crypto.createHash('sha256');
const canonicalHex = sha256.update(canonicalReqResult, 'binary')
.digest('hex');
const stringToSign = `AWS4-HMAC-SHA256\n${timestamp}\n` +
`${credentialScope}\n${canonicalHex}`;
return stringToSign;
}
module.exports = constructStringToSign;


@ -0,0 +1,82 @@
'use strict'; // eslint-disable-line strict
const awsURIencode = require('./awsURIencode');
const crypto = require('crypto');
const queryString = require('querystring');
/**
* createCanonicalRequest - creates V4 canonical request
* @param {object} params - contains pHttpVerb (request type),
* pResource (parsed from URL), pQuery (request query),
* pHeaders (request headers), pSignedHeaders (signed headers from request),
* payloadChecksum (from request)
* @returns {string} - canonicalRequest
*/
function createCanonicalRequest(params) {
const pHttpVerb = params.pHttpVerb;
const pResource = params.pResource;
const pQuery = params.pQuery;
const pHeaders = params.pHeaders;
const pSignedHeaders = params.pSignedHeaders;
const service = params.service;
let payloadChecksum = params.payloadChecksum;
if (!payloadChecksum) {
if (pHttpVerb === 'GET') {
payloadChecksum = 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b' +
'934ca495991b7852b855';
} else if (pHttpVerb === 'POST') {
let payload = queryString.stringify(pQuery, null, null, {
encodeURIComponent: awsURIencode,
});
payload = payload.replace(/%20/g, '+');
payloadChecksum = crypto.createHash('sha256')
.update(payload, 'binary').digest('hex').toLowerCase();
}
}
    const canonicalURI = pResource ? awsURIencode(pResource, false) : '/';
// canonical query string
let canonicalQueryStr = '';
if (pQuery && !((service === 'iam' || service === 'ring') &&
pHttpVerb === 'POST')) {
const sortedQueryParams = Object.keys(pQuery).sort().map(key => {
const encodedKey = awsURIencode(key);
const value = pQuery[key] ? awsURIencode(pQuery[key]) : '';
return `${encodedKey}=${value}`;
});
canonicalQueryStr = sortedQueryParams.join('&');
}
// signed headers
const signedHeadersList = pSignedHeaders.split(';');
signedHeadersList.sort((a, b) => a.localeCompare(b));
const signedHeaders = signedHeadersList.join(';');
// canonical headers
const canonicalHeadersList = signedHeadersList.map(signedHeader => {
if (pHeaders[signedHeader] !== undefined) {
const trimmedHeader = pHeaders[signedHeader]
.trim().replace(/\s+/g, ' ');
return `${signedHeader}:${trimmedHeader}\n`;
}
// nginx will strip the actual expect header so add value of
// header back here if it was included as a signed header
if (signedHeader === 'expect') {
return `${signedHeader}:100-continue\n`;
}
// handle case where signed 'header' is actually query param
return `${signedHeader}:${pQuery[signedHeader]}\n`;
});
const canonicalHeaders = canonicalHeadersList.join('');
const canonicalRequest = `${pHttpVerb}\n${canonicalURI}\n` +
`${canonicalQueryStr}\n${canonicalHeaders}\n` +
`${signedHeaders}\n${payloadChecksum}`;
return canonicalRequest;
}
module.exports = createCanonicalRequest;
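The canonical query-string step above can be exercised in isolation: keys are sorted, values encoded, and empty values still emit `key=`. This sketch substitutes `encodeURIComponent` for `awsURIencode` (adequate for these ASCII-only hypothetical params):

```javascript
'use strict'; // eslint-disable-line strict

// Hypothetical query params; empty strings mark value-less subresources.
const pQuery = { uploads: '', partNumber: '2', acl: '' };

// Sort keys, encode key and value, keep 'key=' for empty values.
const canonicalQueryStr = Object.keys(pQuery).sort().map(key => {
    const value = pQuery[key] ? encodeURIComponent(pQuery[key]) : '';
    return `${encodeURIComponent(key)}=${value}`;
}).join('&');
```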


@ -0,0 +1,170 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../../lib/errors');
const constants = require('../../constants');
const constructStringToSign = require('./constructStringToSign');
const checkTimeSkew = require('./timeUtils').checkTimeSkew;
const convertUTCtoISO8601 = require('./timeUtils').convertUTCtoISO8601;
const convertAmzTimeToMs = require('./timeUtils').convertAmzTimeToMs;
const extractAuthItems = require('./validateInputs').extractAuthItems;
const validateCredentials = require('./validateInputs').validateCredentials;
const areSignedHeadersComplete =
require('./validateInputs').areSignedHeadersComplete;
/**
* V4 header auth check
* @param {object} request - HTTP request object
* @param {object} log - logging object
* @param {object} data - Parameters from queryString parsing or body of
* POST request
* @param {string} awsService - Aws service ('iam' or 's3')
 * @return {object} - result object with `err` and, on success, `params`
*/
function check(request, log, data, awsService) {
log.trace('running header auth check');
const token = request.headers['x-amz-security-token'];
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
// authorization header
const authHeader = request.headers.authorization;
if (!authHeader) {
log.debug('missing authorization header');
return { err: errors.MissingSecurityHeader };
}
const authHeaderItems = extractAuthItems(authHeader, log);
if (Object.keys(authHeaderItems).length < 3) {
log.debug('invalid authorization header', { authHeader });
return { err: errors.InvalidArgument };
}
const payloadChecksum = request.headers['x-amz-content-sha256'];
if (!payloadChecksum && awsService !== 'iam') {
log.debug('missing payload checksum');
return { err: errors.MissingSecurityHeader };
}
if (payloadChecksum === 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD') {
log.trace('requesting streaming v4 auth');
if (request.method !== 'PUT') {
log.debug('streaming v4 auth for put only',
{ method: 'auth/v4/headerAuthCheck.check' });
return { err: errors.InvalidArgument };
}
if (!request.headers['x-amz-decoded-content-length']) {
return { err: errors.MissingSecurityHeader };
}
}
log.trace('authorization header from request', { authHeader });
const signatureFromRequest = authHeaderItems.signatureFromRequest;
const credentialsArr = authHeaderItems.credentialsArr;
const signedHeaders = authHeaderItems.signedHeaders;
if (!areSignedHeadersComplete(signedHeaders, request.headers)) {
log.debug('signedHeaders are incomplete', { signedHeaders });
return { err: errors.AccessDenied };
}
let timestamp;
// check request timestamp
const xAmzDate = request.headers['x-amz-date'];
if (xAmzDate) {
const xAmzDateArr = xAmzDate.split('T');
        // check that x-amz-date has the correct format and is after epochTime
if (xAmzDateArr.length === 2 && xAmzDateArr[0].length === 8
&& xAmzDateArr[1].length === 7
&& Number.parseInt(xAmzDateArr[0], 10) > 19700101) {
            // format of x-amz-date is ISO 8601: YYYYMMDDTHHMMSSZ
timestamp = request.headers['x-amz-date'];
}
} else if (request.headers.date) {
timestamp = convertUTCtoISO8601(request.headers.date);
}
if (!timestamp) {
log.debug('missing or invalid date header',
{ method: 'auth/v4/headerAuthCheck.check' });
return { err: errors.AccessDenied.
customizeDescription('Authentication requires a valid Date or ' +
'x-amz-date header') };
}
const validationResult = validateCredentials(credentialsArr, timestamp,
log);
if (validationResult instanceof Error) {
log.debug('credentials in improper format', { credentialsArr,
timestamp, validationResult });
return { err: validationResult };
}
// credentialsArr is [accessKey, date, region, aws-service, aws4_request]
const scopeDate = credentialsArr[1];
const region = credentialsArr[2];
const service = credentialsArr[3];
const accessKey = credentialsArr.shift();
const credentialScope = credentialsArr.join('/');
// In AWS Signature Version 4, the signing key is valid for up to seven days
    // (see Introduction to Signing Requests).
// Therefore, a signature is also valid for up to seven days or
// less if specified by a bucket policy.
// Since policies are not yet implemented, we will have a 15
// minute default like in v2 Auth.
// See http://docs.aws.amazon.com/AmazonS3/latest/API/
// bucket-policy-s3-sigv4-conditions.html
// TODO: When implementing bucket policies,
// note that expiration can be shortened so
// expiry is as set out in the policy.
// 15 minutes in seconds
const expiry = (15 * 60);
const isTimeSkewed = checkTimeSkew(timestamp, expiry, log);
if (isTimeSkewed) {
return { err: errors.RequestTimeTooSkewed };
}
const stringToSign = constructStringToSign({
log,
request,
query: data,
signedHeaders,
credentialScope,
timestamp,
payloadChecksum,
awsService: service,
});
log.trace('constructed stringToSign', { stringToSign });
if (stringToSign instanceof Error) {
return { err: stringToSign };
}
return {
err: null,
params: {
version: 4,
data: {
accessKey,
signatureFromRequest,
region,
service,
scopeDate,
stringToSign,
authType: 'REST-HEADER',
signatureVersion: 'AWS4-HMAC-SHA256',
signatureAge: Date.now() - convertAmzTimeToMs(timestamp),
// credentialScope and timestamp needed for streaming V4
// chunk evaluation
credentialScope,
timestamp,
securityToken: token,
},
},
};
}
module.exports = { check };
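The credential-scope handling above (shifting the access key off `credentialsArr`, then joining the remaining scope components) can be sketched in isolation. The access key and scope values below are sample data, not from any real request:

```javascript
// Sketch of how the V4 Credential value is decomposed, as in the
// check above. The sample values are illustrative only.
const credentialsArr = ['AKIAIOSFODNN7EXAMPLE', '20160202', 'us-east-1',
    's3', 'aws4_request'];

// credentialsArr is [accessKey, date, region, aws-service, aws4_request]
const scopeDate = credentialsArr[1];
const region = credentialsArr[2];
const service = credentialsArr[3];
// shift() removes the access key, leaving only the scope components
const accessKey = credentialsArr.shift();
const credentialScope = credentialsArr.join('/');
// accessKey is 'AKIAIOSFODNN7EXAMPLE'
// credentialScope is '20160202/us-east-1/s3/aws4_request'
```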


@ -0,0 +1,114 @@
'use strict'; // eslint-disable-line strict
const constants = require('../../constants');
const errors = require('../../errors');
const constructStringToSign = require('./constructStringToSign');
const checkTimeSkew = require('./timeUtils').checkTimeSkew;
const convertAmzTimeToMs = require('./timeUtils').convertAmzTimeToMs;
const validateCredentials = require('./validateInputs').validateCredentials;
const extractQueryParams = require('./validateInputs').extractQueryParams;
const areSignedHeadersComplete =
require('./validateInputs').areSignedHeadersComplete;
/**
* V4 query auth check
* @param {object} request - HTTP request object
* @param {object} log - logging object
 * @param {object} data - Contains authentication params (GET or POST data)
 * @return {object} object with either an error or authentication parameters
*/
function check(request, log, data) {
const authParams = extractQueryParams(data, log);
if (Object.keys(authParams).length !== 5) {
return { err: errors.InvalidArgument };
}
    // AWS documentation does not specify query params as case-insensitive,
    // so we treat them as case-sensitive
const token = data['X-Amz-Security-Token'];
if (token && !constants.iamSecurityToken.pattern.test(token)) {
log.debug('invalid security token', { token });
return { err: errors.InvalidToken };
}
const signedHeaders = authParams.signedHeaders;
const signatureFromRequest = authParams.signatureFromRequest;
const timestamp = authParams.timestamp;
const expiry = authParams.expiry;
const credential = authParams.credential;
if (!areSignedHeadersComplete(signedHeaders, request.headers)) {
log.debug('signedHeaders are incomplete', { signedHeaders });
return { err: errors.AccessDenied };
}
const validationResult = validateCredentials(credential, timestamp,
log);
if (validationResult instanceof Error) {
log.debug('credentials in improper format', { credential,
timestamp, validationResult });
return { err: validationResult };
}
const accessKey = credential[0];
const scopeDate = credential[1];
const region = credential[2];
const service = credential[3];
const requestType = credential[4];
const isTimeSkewed = checkTimeSkew(timestamp, expiry, log);
if (isTimeSkewed) {
return { err: errors.RequestTimeTooSkewed };
}
// In query v4 auth, the canonical request needs
// to include the query params OTHER THAN
// the signature so create a
// copy of the query object and remove
// the X-Amz-Signature property.
const queryWithoutSignature = Object.assign({}, data);
delete queryWithoutSignature['X-Amz-Signature'];
// For query auth, instead of a
// checksum of the contents, the
// string 'UNSIGNED-PAYLOAD' should be
// added to the canonicalRequest in
// building string to sign
const payloadChecksum = 'UNSIGNED-PAYLOAD';
const stringToSign = constructStringToSign({
log,
request,
query: queryWithoutSignature,
signedHeaders,
payloadChecksum,
timestamp,
credentialScope:
`${scopeDate}/${region}/${service}/${requestType}`,
awsService: service,
});
if (stringToSign instanceof Error) {
return { err: stringToSign };
}
log.trace('constructed stringToSign', { stringToSign });
return {
err: null,
params: {
version: 4,
data: {
accessKey,
signatureFromRequest,
region,
scopeDate,
stringToSign,
authType: 'REST-QUERY-STRING',
signatureVersion: 'AWS4-HMAC-SHA256',
signatureAge: Date.now() - convertAmzTimeToMs(timestamp),
securityToken: token,
},
},
};
}
module.exports = { check };
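The query-copy step above, which drops `X-Amz-Signature` before building the canonical request, behaves like this. The query object is a hypothetical example:

```javascript
// Sketch of the signature-removal step used when building the
// canonical request for query auth. The query values are hypothetical.
const data = {
    'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',
    'X-Amz-Date': '20160202T220410Z',
    'X-Amz-Signature': 'deadbeef',
};
const queryWithoutSignature = Object.assign({}, data);
delete queryWithoutSignature['X-Amz-Signature'];
// the copy no longer has the signature; the original is untouched
```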

lib/auth/v4/timeUtils.js Normal file

@ -0,0 +1,60 @@
'use strict'; // eslint-disable-line strict
/**
* Convert timestamp to milliseconds since Unix Epoch
* @param {string} timestamp of ISO8601Timestamp format without
* dashes or colons, e.g. 20160202T220410Z
* @return {number} number of milliseconds since Unix Epoch
*/
function convertAmzTimeToMs(timestamp) {
const arr = timestamp.split('');
// Convert to YYYY-MM-DDTHH:mm:ss.sssZ
const ISO8601time = `${arr.slice(0, 4).join('')}-${arr[4]}${arr[5]}` +
`-${arr.slice(6, 11).join('')}:${arr[11]}${arr[12]}:${arr[13]}` +
`${arr[14]}.000Z`;
return Date.parse(ISO8601time);
}
/**
* Convert UTC timestamp to ISO 8601 timestamp
* @param {string} timestamp of UTC form: Fri, 10 Feb 2012 21:34:55 GMT
* @return {string} ISO8601 timestamp of form: YYYYMMDDTHHMMSSZ
*/
function convertUTCtoISO8601(timestamp) {
// convert to ISO string: YYYY-MM-DDTHH:mm:ss.sssZ.
const converted = new Date(timestamp).toISOString();
// Remove "-"s and "."s and milliseconds
return converted.split('.')[0].replace(/-|:/g, '').concat('Z');
}
/**
* Check whether timestamp predates request or is too old
* @param {string} timestamp of ISO8601Timestamp format without
* dashes or colons, e.g. 20160202T220410Z
* @param {number} expiry - number of seconds signature should be valid
* @param {object} log - log for request
* @return {boolean} true if there is a time problem
*/
function checkTimeSkew(timestamp, expiry, log) {
const currentTime = Date.now();
const fifteenMinutes = (15 * 60 * 1000);
const parsedTimestamp = convertAmzTimeToMs(timestamp);
if ((currentTime + fifteenMinutes) < parsedTimestamp) {
log.debug('current time pre-dates timestamp', {
parsedTimestamp,
currentTimeInMilliseconds: currentTime });
return true;
}
const expiryInMilliseconds = expiry * 1000;
if (currentTime > parsedTimestamp + expiryInMilliseconds) {
log.debug('signature has expired', {
parsedTimestamp,
expiry,
currentTimeInMilliseconds: currentTime });
return true;
}
return false;
}
module.exports = { convertAmzTimeToMs, convertUTCtoISO8601, checkTimeSkew };
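A quick sanity check of the two conversions above. The functions are copied inline so the sketch is self-contained:

```javascript
// Inline copies of the conversions above, for a self-contained check.
function convertAmzTimeToMs(timestamp) {
    const arr = timestamp.split('');
    // Convert to YYYY-MM-DDTHH:mm:ss.sssZ
    const ISO8601time = `${arr.slice(0, 4).join('')}-${arr[4]}${arr[5]}` +
        `-${arr.slice(6, 11).join('')}:${arr[11]}${arr[12]}:${arr[13]}` +
        `${arr[14]}.000Z`;
    return Date.parse(ISO8601time);
}
function convertUTCtoISO8601(timestamp) {
    const converted = new Date(timestamp).toISOString();
    return converted.split('.')[0].replace(/-|:/g, '').concat('Z');
}

const iso = convertUTCtoISO8601('Fri, 10 Feb 2012 21:34:55 GMT');
// iso === '20120210T213455Z'
const ms = convertAmzTimeToMs(iso);
// ms === Date.parse('2012-02-10T21:34:55.000Z')
```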


@ -0,0 +1,188 @@
'use strict'; // eslint-disable-line strict
const errors = require('../../../lib/errors');
/**
* Validate Credentials
* @param {array} credentials - contains accessKey, scopeDate,
* region, service, requestType
* @param {string} timestamp - timestamp from request in
* the format of ISO 8601: YYYYMMDDTHHMMSSZ
* @param {object} log - logging object
 * @return {object} empty object if credentials are in the correct format,
 * error object if not
*/
function validateCredentials(credentials, timestamp, log) {
if (!Array.isArray(credentials) || credentials.length !== 5) {
log.warn('credentials in improper format', { credentials });
return errors.InvalidArgument;
}
    // credentials[2] (region) is intentionally not read
const accessKey = credentials[0];
const scopeDate = credentials[1];
const service = credentials[3];
const requestType = credentials[4];
if (accessKey.length < 1) {
log.warn('accessKey provided is wrong format', { accessKey });
return errors.InvalidArgument;
}
// The scope date (format YYYYMMDD) must be same date as the timestamp
// on the request from the x-amz-date param (if queryAuthCheck)
// or from the x-amz-date header or date header (if headerAuthCheck)
// Format of timestamp is ISO 8601: YYYYMMDDTHHMMSSZ.
// http://docs.aws.amazon.com/AmazonS3/latest/API/
// sigv4-query-string-auth.html
// http://docs.aws.amazon.com/general/latest/gr/
// sigv4-date-handling.html
// convert timestamp to format of scopeDate YYYYMMDD
const timestampDate = timestamp.split('T')[0];
if (scopeDate.length !== 8 || scopeDate !== timestampDate) {
log.warn('scope date must be the same date as the timestamp date',
{ scopeDate, timestampDate });
return errors.RequestTimeTooSkewed;
}
if (service !== 's3' && service !== 'iam' && service !== 'ring') {
log.warn('service in credentials is not one of s3/iam/ring', {
service,
});
return errors.InvalidArgument;
}
if (requestType !== 'aws4_request') {
log.warn('requestType contained in params is not aws4_request',
{ requestType });
return errors.InvalidArgument;
}
return {};
}
/**
* Extract and validate components from query object
* @param {object} queryObj - query object from request
* @param {object} log - logging object
* @return {object} object containing extracted query params for authV4
*/
function extractQueryParams(queryObj, log) {
const authParams = {};
// Do not need the algorithm sent back
if (queryObj['X-Amz-Algorithm'] !== 'AWS4-HMAC-SHA256') {
log.warn('algorithm param incorrect',
{ algo: queryObj['X-Amz-Algorithm'] });
return authParams;
}
const signedHeaders = queryObj['X-Amz-SignedHeaders'];
// At least "host" must be included in signed headers
if (signedHeaders && signedHeaders.length > 3) {
authParams.signedHeaders = signedHeaders;
} else {
log.warn('missing signedHeaders');
return authParams;
}
const signature = queryObj['X-Amz-Signature'];
if (signature && signature.length === 64) {
authParams.signatureFromRequest = signature;
} else {
log.warn('missing signature');
return authParams;
}
const timestamp = queryObj['X-Amz-Date'];
if (timestamp && timestamp.length === 16) {
authParams.timestamp = timestamp;
} else {
log.warn('missing or invalid timestamp',
{ timestamp: queryObj['X-Amz-Date'] });
return authParams;
}
const expiry = Number.parseInt(queryObj['X-Amz-Expires'], 10);
if (expiry && (expiry > 0 && expiry < 604801)) {
authParams.expiry = expiry;
} else {
log.warn('invalid expiry', { expiry });
return authParams;
}
const credential = queryObj['X-Amz-Credential'];
if (credential && credential.length > 28 && credential.indexOf('/') > -1) {
authParams.credential = credential.split('/');
} else {
log.warn('invalid credential param', { credential });
return authParams;
}
return authParams;
}
/**
* Extract and validate components from auth header
* @param {string} authHeader - authorization header from request
* @param {object} log - logging object
* @return {object} object containing extracted auth header items for authV4
*/
function extractAuthItems(authHeader, log) {
const authItems = {};
const authArray = authHeader
.replace('AWS4-HMAC-SHA256 ', '').split(',');
if (authArray.length < 3) {
return authItems;
}
// extract authorization components
const credentialStr = authArray[0];
const signedHeadersStr = authArray[1];
const signatureStr = authArray[2];
log.trace('credentials from request', { credentialStr });
if (credentialStr && credentialStr.trim().startsWith('Credential=')
&& credentialStr.indexOf('/') > -1) {
authItems.credentialsArr = credentialStr
.trim().replace('Credential=', '').split('/');
} else {
log.warn('missing credentials');
}
log.trace('signed headers from request', { signedHeadersStr });
if (signedHeadersStr && signedHeadersStr.trim()
.startsWith('SignedHeaders=')) {
authItems.signedHeaders = signedHeadersStr
.trim().replace('SignedHeaders=', '');
} else {
log.warn('missing signed headers');
}
log.trace('signature from request', { signatureStr });
if (signatureStr && signatureStr.trim().startsWith('Signature=')) {
authItems.signatureFromRequest = signatureStr
.trim().replace('Signature=', '');
} else {
log.warn('missing signature');
}
return authItems;
}
/**
* Checks whether the signed headers include the host header
* and all x-amz- and x-scal- headers in request
* @param {string} signedHeaders - signed headers sent with request
* @param {object} allHeaders - request.headers
* @return {boolean} true if all x-amz-headers included and false if not
*/
function areSignedHeadersComplete(signedHeaders, allHeaders) {
const signedHeadersList = signedHeaders.split(';');
if (signedHeadersList.indexOf('host') === -1) {
return false;
}
const headers = Object.keys(allHeaders);
for (let i = 0; i < headers.length; i++) {
if ((headers[i].startsWith('x-amz-')
|| headers[i].startsWith('x-scal-'))
&& signedHeadersList.indexOf(headers[i]) === -1) {
return false;
}
}
return true;
}
module.exports = { validateCredentials, extractQueryParams,
areSignedHeadersComplete, extractAuthItems };
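The `areSignedHeadersComplete` check above can be exercised directly; the function is copied inline so the sketch is self-contained and the header values are illustrative:

```javascript
// Inline copy of areSignedHeadersComplete for a self-contained check.
function areSignedHeadersComplete(signedHeaders, allHeaders) {
    const signedHeadersList = signedHeaders.split(';');
    if (signedHeadersList.indexOf('host') === -1) {
        return false;
    }
    const headers = Object.keys(allHeaders);
    for (let i = 0; i < headers.length; i++) {
        if ((headers[i].startsWith('x-amz-')
            || headers[i].startsWith('x-scal-'))
            && signedHeadersList.indexOf(headers[i]) === -1) {
            return false;
        }
    }
    return true;
}

const headers = { host: 'example.com', 'x-amz-date': '20160202T220410Z' };
const complete = areSignedHeadersComplete('host;x-amz-date', headers);
// complete === true: host and every x-amz- header are signed
const missing = areSignedHeadersComplete('host', headers);
// missing === false: x-amz-date was sent but not signed
```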

lib/constants.js Normal file

@ -0,0 +1,32 @@
'use strict'; // eslint-disable-line strict
// The min value here is to manage further backward compat if we
// need it
const iamSecurityTokenSizeMin = 128;
const iamSecurityTokenSizeMax = 128;
// Security token is a hex string (no real format from amazon)
const iamSecurityTokenPattern =
new RegExp(`^[a-f0-9]{${iamSecurityTokenSizeMin},` +
`${iamSecurityTokenSizeMax}}$`);
module.exports = {
// info about the iam security token
iamSecurityToken: {
min: iamSecurityTokenSizeMin,
max: iamSecurityTokenSizeMax,
pattern: iamSecurityTokenPattern,
},
// PublicId is used as the canonicalID for a request that contains
// no authentication information. Requestor can access
// only public resources
publicId: 'http://acs.amazonaws.com/groups/global/AllUsers',
metadataFileNamespace: '/MDFile',
dataFileURL: '/DataFile',
// AWS states max size for user-defined metadata
// (x-amz-meta- headers) is 2 KB:
// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
// In testing, AWS seems to allow up to 88 more bytes,
// so we do the same.
maximumMetaHeadersSize: 2136,
emptyFileMd5: 'd41d8cd98f00b204e9800998ecf8427e',
};
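The token pattern above accepts exactly 128 lowercase hex characters; a quick probe (with made-up token strings) shows the behaviour:

```javascript
// Rebuild the security-token pattern as above and probe it with
// sample strings. The token values here are synthetic.
const iamSecurityTokenPattern = new RegExp('^[a-f0-9]{128,128}$');

const valid = 'ab'.repeat(64);      // 128 lowercase hex chars
const tooShort = 'ab'.repeat(63);   // 126 chars: rejected
const upperCase = 'AB'.repeat(64);  // right length, wrong case: rejected

// pattern.test(valid) === true; the other two fail
```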

lib/db.js Normal file

@ -0,0 +1,151 @@
'use strict'; // eslint-disable-line strict
const writeOptions = { sync: true };
/**
* Like Error, but with a property set to true.
* TODO: this is copied from kineticlib, should consolidate with the
* future errors module
*
* Example: instead of:
* const err = new Error("input is not a buffer");
* err.badTypeInput = true;
* throw err;
* use:
* throw propError("badTypeInput", "input is not a buffer");
*
* @param {String} propName - the property name.
* @param {String} message - the Error message.
* @returns {Error} the Error object.
*/
function propError(propName, message) {
const err = new Error(message);
err[propName] = true;
return err;
}
/**
* Running transaction with multiple updates to be committed atomically
*/
class IndexTransaction {
/**
* Builds a new transaction
*
* @argument {Leveldb} db an open database to which the updates
* will be applied
*
* @returns {IndexTransaction} a new empty transaction
*/
constructor(db) {
this.operations = [];
this.db = db;
this.closed = false;
}
/**
* Adds a new operation to participate in this running transaction
*
* @argument {object} op an object with the following attributes:
* {
* type: 'put' or 'del',
* key: the object key,
* value: (optional for del) the value to store,
* }
*
* @throws {Error} an error described by the following properties
* - invalidTransactionVerb if op is not put or del
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
* - missingValue if putting without a value
*
* @returns {undefined}
*/
push(op) {
if (this.closed) {
throw propError('pushOnCommittedTransaction',
'can not add ops to already committed transaction');
}
if (op.type !== 'put' && op.type !== 'del') {
throw propError('invalidTransactionVerb',
`unknown action type: ${op.type}`);
}
if (op.key === undefined) {
throw propError('missingKey', 'missing key');
}
if (op.type === 'put' && op.value === undefined) {
throw propError('missingValue', 'missing value');
}
this.operations.push(op);
}
/**
* Adds a new put operation to this running transaction
*
* @argument {string} key - the key of the object to put
* @argument {string} value - the value to put
*
* @throws {Error} an error described by the following properties
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
* - missingValue if putting without a value
*
* @returns {undefined}
*
* @see push
*/
put(key, value) {
this.push({ type: 'put', key, value });
}
/**
* Adds a new del operation to this running transaction
*
* @argument {string} key - the key of the object to delete
*
* @throws {Error} an error described by the following properties
* - pushOnCommittedTransaction if already committed
* - missingKey if the key is missing from the op
*
* @returns {undefined}
*
* @see push
*/
del(key) {
this.push({ type: 'del', key });
}
/**
* Applies the queued updates in this transaction atomically.
*
* @argument {function} cb function to be called when the commit
* finishes, taking an optional error argument
*
* @returns {undefined}
*/
commit(cb) {
if (this.closed) {
return cb(propError('alreadyCommitted',
'transaction was already committed'));
}
if (this.operations.length === 0) {
return cb(propError('emptyTransaction',
'tried to commit an empty transaction'));
}
this.closed = true;
        // The array-of-operations variant of the `batch` method
        // allows passing options such as `sync: true` whereas the
        // chained form does not.
return this.db.batch(this.operations, writeOptions, cb);
}
}
module.exports = {
IndexTransaction,
};
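A usage sketch of `IndexTransaction` with a stub database. The class is copied inline with its validation checks omitted so the sketch stays short, and `fakeDb` stands in for an open LevelDB handle; neither is part of the library:

```javascript
// Minimal inline copy of IndexTransaction (validation omitted).
const writeOptions = { sync: true };
class IndexTransaction {
    constructor(db) {
        this.operations = [];
        this.db = db;
        this.closed = false;
    }
    push(op) {
        this.operations.push(op);
    }
    put(key, value) {
        this.push({ type: 'put', key, value });
    }
    del(key) {
        this.push({ type: 'del', key });
    }
    commit(cb) {
        this.closed = true;
        return this.db.batch(this.operations, writeOptions, cb);
    }
}

// Stub database that records what batch() receives.
const captured = {};
const fakeDb = {
    batch(operations, options, cb) {
        captured.operations = operations;
        captured.options = options;
        cb(null);
    },
};

const txn = new IndexTransaction(fakeDb);
txn.put('key1', 'value1');
txn.del('key2');
let commitErr = 'not called';
txn.commit(err => { commitErr = err; });
// both ops reach fakeDb.batch in one call, with { sync: true }
```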

lib/errors.js Normal file

@ -0,0 +1,35 @@
'use strict'; // eslint-disable-line strict
class ArsenalError extends Error {
constructor(type, code, desc) {
super(type);
this.code = code;
this.description = desc;
this[type] = true;
}
customizeDescription(description) {
return new ArsenalError(this.message, this.code, description);
}
}
/**
* Generate an Errors instances object.
*
 * @returns {Object.<string, ArsenalError>} - object mapping error names to
 * ArsenalError instances
*/
function errorsGen() {
const errors = {};
const errorsObj = require('../errors/arsenalErrors.json');
Object.keys(errorsObj)
.filter(index => index !== '_comment')
.forEach(index => {
errors[index] = new ArsenalError(index, errorsObj[index].code,
errorsObj[index].description);
});
return errors;
}
module.exports = errorsGen();
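The `ArsenalError` pattern above can be exercised like this. The class is copied inline, and a tiny hypothetical error table replaces `errors/arsenalErrors.json` so the sketch is self-contained:

```javascript
// Inline copy of ArsenalError with a one-entry stand-in for
// errors/arsenalErrors.json.
class ArsenalError extends Error {
    constructor(type, code, desc) {
        super(type);
        this.code = code;
        this.description = desc;
        this[type] = true;
    }
    customizeDescription(description) {
        return new ArsenalError(this.message, this.code, description);
    }
}
const errorsObj = {
    AccessDenied: { code: 403, description: 'Access Denied' },
};
const errors = {};
Object.keys(errorsObj).forEach(index => {
    errors[index] = new ArsenalError(index, errorsObj[index].code,
        errorsObj[index].description);
});

const custom = errors.AccessDenied
    .customizeDescription('Authentication requires a valid Date header');
// custom keeps the code and the AccessDenied marker property, while
// the original instance's description is left unchanged
```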

lib/https/ciphers.js Normal file

@ -0,0 +1,34 @@
'use strict'; // eslint-disable-line strict
const ciphers = [
'DHE-RSA-AES128-GCM-SHA256',
'ECDHE-ECDSA-AES128-GCM-SHA256',
'ECDHE-RSA-AES256-GCM-SHA384',
'ECDHE-ECDSA-AES256-GCM-SHA384',
'ECDHE-RSA-AES128-SHA256',
'DHE-RSA-AES128-SHA256',
'ECDHE-RSA-AES256-SHA384',
'DHE-RSA-AES256-SHA384',
'ECDHE-RSA-AES256-SHA256',
'DHE-RSA-AES256-SHA256',
'HIGH',
'!aNULL',
'!eNULL',
'!EXPORT',
'!DES',
'!RC4',
'!MD5',
'!SHA1',
'!PSK',
'!aECDH',
'!SRP',
'!IDEA',
'!EDH-DSS-DES-CBC3-SHA',
'!EDH-RSA-DES-CBC3-SHA',
'!KRB5-DES-CBC3-SHA',
].join(':');
module.exports = {
ciphers,
};

lib/https/dh2048.js Normal file

@ -0,0 +1,44 @@
/*
PKCS#3 DH Parameters: (2048 bit)
prime:
00:87:df:53:ef:b2:86:36:e8:98:f4:de:b1:ac:22:
77:40:db:f8:48:50:03:4d:ad:c2:0f:ed:55:31:30:
1d:44:92:c7:50:da:60:94:1f:a2:02:84:d8:88:b0:
c5:66:0b:53:0a:9c:74:65:95:03:f8:93:37:aa:20:
99:cb:43:8a:e7:f6:46:95:50:fb:b1:99:b1:8d:1b:
5d:a5:52:b8:a8:83:ed:c1:ab:fc:b7:42:7b:73:60:
8d:7d:41:2a:c9:16:c9:17:8a:44:f5:97:1d:41:17:
93:e6:9f:e5:96:6c:a1:41:db:ea:e9:c1:c7:f9:c2:
89:93:ad:c2:e8:31:d1:56:84:ad:b2:7b:14:72:f2:
9e:db:73:cb:19:9b:a5:2a:0f:07:dd:e4:41:c4:76:
a6:1e:49:b2:b8:45:43:b6:83:61:30:8a:09:38:db:
1d:5d:2a:68:e4:68:1c:0f:81:10:30:cf:31:6f:fa:
ac:9c:2c:67:e9:02:06:4c:1b:dc:1e:c9:31:b6:54:
d9:39:f5:0f:93:85:d0:e9:86:f7:b5:08:b6:4e:ea:
f3:91:01:cb:96:7e:14:ee:9f:c6:66:cf:83:fb:a0:
f7:4a:04:8f:aa:be:8f:6c:bc:4a:b3:28:0a:ef:bb:
6d:8e:be:b5:73:12:e8:0c:97:86:77:92:f9:87:50:
8f:9b
generator: 2 (0x2)
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEAh99T77KGNuiY9N6xrCJ3QNv4SFADTa3CD+1VMTAdRJLHUNpglB+i
AoTYiLDFZgtTCpx0ZZUD+JM3qiCZy0OK5/ZGlVD7sZmxjRtdpVK4qIPtwav8t0J7
c2CNfUEqyRbJF4pE9ZcdQReT5p/llmyhQdvq6cHH+cKJk63C6DHRVoStsnsUcvKe
23PLGZulKg8H3eRBxHamHkmyuEVDtoNhMIoJONsdXSpo5GgcD4EQMM8xb/qsnCxn
6QIGTBvcHskxtlTZOfUPk4XQ6Yb3tQi2TurzkQHLln4U7p/GZs+D+6D3SgSPqr6P
bLxKsygK77ttjr61cxLoDJeGd5L5h1CPmwIBAg==
-----END DH PARAMETERS-----
*/
'use strict'; // eslint-disable-line strict
const dhparam =
'MIIBCAKCAQEAh99T77KGNuiY9N6xrCJ3QNv4SFADTa3CD+1VMTAdRJLHUNpglB+i' +
'AoTYiLDFZgtTCpx0ZZUD+JM3qiCZy0OK5/ZGlVD7sZmxjRtdpVK4qIPtwav8t0J7' +
'c2CNfUEqyRbJF4pE9ZcdQReT5p/llmyhQdvq6cHH+cKJk63C6DHRVoStsnsUcvKe' +
'23PLGZulKg8H3eRBxHamHkmyuEVDtoNhMIoJONsdXSpo5GgcD4EQMM8xb/qsnCxn' +
'6QIGTBvcHskxtlTZOfUPk4XQ6Yb3tQi2TurzkQHLln4U7p/GZs+D+6D3SgSPqr6P' +
'bLxKsygK77ttjr61cxLoDJeGd5L5h1CPmwIBAg==';
module.exports = {
dhparam,
};

lib/ipCheck.js Normal file

@ -0,0 +1,83 @@
'use strict'; // eslint-disable-line strict
const ipaddr = require('ipaddr.js');
/**
* checkIPinRangeOrMatch checks whether a given ip address is in an ip address
* range or matches the given ip address
* @param {string} cidr - ip address range or ip address
* @param {object} ip - parsed ip address
* @return {boolean} true if in range, false if not
*/
function checkIPinRangeOrMatch(cidr, ip) {
// If there is an exact match of the ip address, no need to check ranges
if (ip.toString() === cidr) {
return true;
}
let range;
try {
range = ipaddr.IPv4.parseCIDR(cidr);
} catch (err) {
try {
// not ipv4 so try ipv6
range = ipaddr.IPv6.parseCIDR(cidr);
} catch (err) {
// range is not valid ipv4 or ipv6
return false;
}
}
try {
return ip.match(range);
} catch (err) {
return false;
}
}
/**
* Parse IP address into object representation
* @param {string} ip - IPV4/IPV6/IPV4-mapped IPV6 address
* @return {object} parsedIp - Object representation of parsed IP
*/
function parseIp(ip) {
if (ipaddr.IPv4.isValid(ip)) {
return ipaddr.parse(ip);
}
if (ipaddr.IPv6.isValid(ip)) {
// also parses IPv6 mapped IPv4 addresses into IPv4 representation
return ipaddr.process(ip);
}
// not valid ip address according to module, so return empty object
// which will obviously not match a range of ip addresses that the parsedIp
// is being tested against
return {};
}
/**
 * Checks if an IP address matches a given list of CIDR ranges
* @param {string[]} cidrList - List of CIDR ranges
* @param {string} ip - IP address
* @return {boolean} - true if there is match or false for no match
*/
function ipMatchCidrList(cidrList, ip) {
const parsedIp = parseIp(ip);
return cidrList.some(item => {
let cidr;
// patch the cidr if range is not specified
if (item.indexOf('/') === -1) {
if (item.startsWith('127.')) {
cidr = `${item}/8`;
} else if (ipaddr.IPv4.isValid(item)) {
cidr = `${item}/32`;
}
}
return checkIPinRangeOrMatch(cidr || item, parsedIp);
});
}
module.exports = {
checkIPinRangeOrMatch,
ipMatchCidrList,
parseIp,
};
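The CIDR-patching rule inside `ipMatchCidrList` above (a bare address gets a default mask: `/8` for loopback, `/32` otherwise) can be shown as pure string logic. The helper name `patchCidr` is introduced here for illustration, and the ipaddr.js validity check is omitted:

```javascript
// Sketch of the CIDR-patching rule from ipMatchCidrList. The
// `patchCidr` helper is illustrative; the original also validates
// the address with ipaddr.js before appending /32.
function patchCidr(item) {
    if (item.indexOf('/') === -1) {
        if (item.startsWith('127.')) {
            return `${item}/8`;
        }
        return `${item}/32`;
    }
    return item;
}

const loopback = patchCidr('127.0.0.1');   // '127.0.0.1/8'
const single = patchCidr('10.0.0.5');      // '10.0.0.5/32'
const ranged = patchCidr('10.0.0.0/24');   // unchanged
```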

lib/jsutil.js Normal file

@ -0,0 +1,32 @@
'use strict'; // eslint-disable-line
const debug = require('util').debuglog('jsutil');
// JavaScript utility functions
/**
* force <tt>func</tt> to be called only once, even if actually called
* multiple times. The cached result of the first call is then
* returned (if any).
*
* @note underscore.js provides this functionality but not worth
* adding a new dependency for such a small use case.
*
* @param {function} func function to call at most once
* @return {function} a callable wrapper mirroring <tt>func</tt> but
* only calls <tt>func</tt> at first invocation.
*/
module.exports.once = function once(func) {
const state = { called: false, res: undefined };
return function wrapper(...args) {
if (!state.called) {
state.called = true;
state.res = func.apply(func, args);
} else {
debug('function already called:', func,
'returning cached result:', state.res);
}
return state.res;
};
};
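The cached-result behaviour of `once` above is easy to demonstrate; the function is copied inline so the sketch is self-contained:

```javascript
// Inline copy of once() (debug logging dropped) showing that the
// wrapped function runs a single time and later calls get the
// cached result.
function once(func) {
    const state = { called: false, res: undefined };
    return function wrapper(...args) {
        if (!state.called) {
            state.called = true;
            state.res = func.apply(func, args);
        }
        return state.res;
    };
}

let calls = 0;
const init = once(x => { calls += 1; return x * 2; });
const first = init(21);  // runs func: 42
const second = init(99); // cached: still 42, func not called again
// calls === 1
```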

lib/models/BucketInfo.js Normal file

@ -0,0 +1,502 @@
const assert = require('assert');
const { WebsiteConfiguration } = require('./WebsiteConfiguration');
const ReplicationConfiguration = require('./ReplicationConfiguration');
// WHEN UPDATING THIS NUMBER, UPDATE MODELVERSION.MD CHANGELOG
const modelVersion = 5;
class BucketInfo {
/**
* Represents all bucket information.
* @constructor
* @param {string} name - bucket name
* @param {string} owner - bucket owner's name
* @param {string} ownerDisplayName - owner's display name
* @param {object} creationDate - creation date of bucket
* @param {number} mdBucketModelVersion - bucket model version
* @param {object} [acl] - bucket ACLs (no need to copy
* ACL object since referenced object will not be used outside of
* BucketInfo instance)
* @param {boolean} transient - flag indicating whether bucket is transient
* @param {boolean} deleted - flag indicating whether an attempt to
* delete the bucket has been made
* @param {object} serverSideEncryption - sse information for this bucket
* @param {number} serverSideEncryption.cryptoScheme -
* cryptoScheme used
* @param {string} serverSideEncryption.algorithm -
* algorithm to use
* @param {string} serverSideEncryption.masterKeyId -
* key to get master key
* @param {boolean} serverSideEncryption.mandatory -
* true for mandatory encryption
* @param {object} versioningConfiguration - versioning configuration
* @param {string} versioningConfiguration.Status - versioning status
* @param {object} versioningConfiguration.MfaDelete - versioning mfa delete
* @param {string} locationConstraint - locationConstraint for bucket
* @param {WebsiteConfiguration} [websiteConfiguration] - website
* configuration
* @param {object[]} [cors] - collection of CORS rules to apply
* @param {string} [cors[].id] - optional ID to identify rule
* @param {string[]} cors[].allowedMethods - methods allowed for CORS request
* @param {string[]} cors[].allowedOrigins - origins allowed for CORS request
* @param {string[]} [cors[].allowedHeaders] - headers allowed in an OPTIONS
* request via the Access-Control-Request-Headers header
* @param {number} [cors[].maxAgeSeconds] - seconds browsers should cache
* OPTIONS response
* @param {string[]} [cors[].exposeHeaders] - headers to expose to
* applications
* @param {object} [replicationConfiguration] - replication configuration
*/
constructor(name, owner, ownerDisplayName, creationDate,
mdBucketModelVersion, acl, transient, deleted,
serverSideEncryption, versioningConfiguration,
locationConstraint, websiteConfiguration, cors,
replicationConfiguration) {
assert.strictEqual(typeof name, 'string');
assert.strictEqual(typeof owner, 'string');
assert.strictEqual(typeof ownerDisplayName, 'string');
assert.strictEqual(typeof creationDate, 'string');
if (mdBucketModelVersion) {
assert.strictEqual(typeof mdBucketModelVersion, 'number');
}
if (acl) {
assert.strictEqual(typeof acl, 'object');
assert(Array.isArray(acl.FULL_CONTROL));
assert(Array.isArray(acl.WRITE));
assert(Array.isArray(acl.WRITE_ACP));
assert(Array.isArray(acl.READ));
assert(Array.isArray(acl.READ_ACP));
}
if (serverSideEncryption) {
assert.strictEqual(typeof serverSideEncryption, 'object');
const { cryptoScheme, algorithm, masterKeyId, mandatory } =
serverSideEncryption;
assert.strictEqual(typeof cryptoScheme, 'number');
assert.strictEqual(typeof algorithm, 'string');
assert.strictEqual(typeof masterKeyId, 'string');
assert.strictEqual(typeof mandatory, 'boolean');
}
if (versioningConfiguration) {
assert.strictEqual(typeof versioningConfiguration, 'object');
const { Status, MfaDelete } = versioningConfiguration;
assert(Status === undefined ||
Status === 'Enabled' ||
Status === 'Suspended');
assert(MfaDelete === undefined ||
MfaDelete === 'Enabled' ||
MfaDelete === 'Disabled');
}
if (locationConstraint) {
assert.strictEqual(typeof locationConstraint, 'string');
}
if (websiteConfiguration) {
assert(websiteConfiguration instanceof WebsiteConfiguration);
const { indexDocument, errorDocument, redirectAllRequestsTo,
routingRules } = websiteConfiguration;
assert(indexDocument === undefined ||
typeof indexDocument === 'string');
assert(errorDocument === undefined ||
typeof errorDocument === 'string');
assert(redirectAllRequestsTo === undefined ||
typeof redirectAllRequestsTo === 'object');
assert(routingRules === undefined ||
Array.isArray(routingRules));
}
if (cors) {
assert(Array.isArray(cors));
}
if (replicationConfiguration) {
ReplicationConfiguration.validateConfig(replicationConfiguration);
}
const aclInstance = acl || {
Canned: 'private',
FULL_CONTROL: [],
WRITE: [],
WRITE_ACP: [],
READ: [],
READ_ACP: [],
};
// IF UPDATING PROPERTIES, INCREMENT MODELVERSION NUMBER ABOVE
this._acl = aclInstance;
this._name = name;
this._owner = owner;
this._ownerDisplayName = ownerDisplayName;
this._creationDate = creationDate;
this._mdBucketModelVersion = mdBucketModelVersion || 0;
this._transient = transient || false;
this._deleted = deleted || false;
this._serverSideEncryption = serverSideEncryption || null;
this._versioningConfiguration = versioningConfiguration || null;
this._locationConstraint = locationConstraint || null;
this._websiteConfiguration = websiteConfiguration || null;
this._replicationConfiguration = replicationConfiguration || null;
this._cors = cors || null;
return this;
}
/**
* Serialize the object
* @return {string} - stringified object
*/
serialize() {
const bucketInfos = {
acl: this._acl,
name: this._name,
owner: this._owner,
ownerDisplayName: this._ownerDisplayName,
creationDate: this._creationDate,
mdBucketModelVersion: this._mdBucketModelVersion,
transient: this._transient,
deleted: this._deleted,
serverSideEncryption: this._serverSideEncryption,
versioningConfiguration: this._versioningConfiguration,
locationConstraint: this._locationConstraint,
websiteConfiguration: undefined,
cors: this._cors,
replicationConfiguration: this._replicationConfiguration,
};
if (this._websiteConfiguration) {
bucketInfos.websiteConfiguration =
this._websiteConfiguration.getConfig();
}
return JSON.stringify(bucketInfos);
}
/**
* deSerialize the JSON string
* @param {string} stringBucket - the stringified bucket
* @return {object} - parsed string
*/
static deSerialize(stringBucket) {
const obj = JSON.parse(stringBucket);
const websiteConfig = obj.websiteConfiguration ?
new WebsiteConfiguration(obj.websiteConfiguration) : null;
return new BucketInfo(obj.name, obj.owner, obj.ownerDisplayName,
obj.creationDate, obj.mdBucketModelVersion, obj.acl,
obj.transient, obj.deleted, obj.serverSideEncryption,
obj.versioningConfiguration, obj.locationConstraint, websiteConfig,
obj.cors, obj.replicationConfiguration);
}
/**
* Returns the current model version for the data structure
* @return {number} - the current model version set above in the file
*/
static currentModelVersion() {
return modelVersion;
}
/**
* Create a BucketInfo from an object
*
* @param {object} data - object containing data
* @return {BucketInfo} Return an BucketInfo
*/
static fromObj(data) {
return new BucketInfo(data._name, data._owner, data._ownerDisplayName,
data._creationDate, data._mdBucketModelVersion, data._acl,
data._transient, data._deleted, data._serverSideEncryption,
data._versioningConfiguration, data._locationConstraint,
data._websiteConfiguration, data._cors,
data._replicationConfiguration);
}
/**
* Get the ACLs.
* @return {object} acl
*/
getAcl() {
return this._acl;
}
/**
* Set the canned acl's.
* @param {string} cannedACL - canned ACL being set
* @return {BucketInfo} - bucket info instance
*/
setCannedAcl(cannedACL) {
this._acl.Canned = cannedACL;
return this;
}
/**
* Set a specific ACL.
* @param {string} canonicalID - id for account being given access
* @param {string} typeOfGrant - type of grant being granted
* @return {BucketInfo} - bucket info instance
*/
setSpecificAcl(canonicalID, typeOfGrant) {
this._acl[typeOfGrant].push(canonicalID);
return this;
}
/**
* Set all ACLs.
* @param {object} acl - new set of ACLs
* @return {BucketInfo} - bucket info instance
*/
setFullAcl(acl) {
this._acl = acl;
return this;
}
/**
* Get the server side encryption information
* @return {object} serverSideEncryption
*/
getServerSideEncryption() {
return this._serverSideEncryption;
}
/**
* Set server side encryption information
* @param {object} serverSideEncryption - server side encryption information
* @return {BucketInfo} - bucket info instance
*/
setServerSideEncryption(serverSideEncryption) {
this._serverSideEncryption = serverSideEncryption;
return this;
}
/**
* Get the versioning configuration information
* @return {object} versioningConfiguration
*/
getVersioningConfiguration() {
return this._versioningConfiguration;
}
/**
* Set versioning configuration information
* @param {object} versioningConfiguration - versioning information
* @return {BucketInfo} - bucket info instance
*/
setVersioningConfiguration(versioningConfiguration) {
this._versioningConfiguration = versioningConfiguration;
return this;
}
/**
* Check that versioning is 'Enabled' on the given bucket.
* @return {boolean} - `true` if versioning is 'Enabled', otherwise `false`
*/
isVersioningEnabled() {
const versioningConfig = this.getVersioningConfiguration();
return versioningConfig ? versioningConfig.Status === 'Enabled' : false;
}
/**
* Get the website configuration information
* @return {object} websiteConfiguration
*/
getWebsiteConfiguration() {
return this._websiteConfiguration;
}
/**
* Set website configuration information
* @param {object} websiteConfiguration - configuration for bucket website
* @return {BucketInfo} - bucket info instance
*/
setWebsiteConfiguration(websiteConfiguration) {
this._websiteConfiguration = websiteConfiguration;
return this;
}
/**
* Set replication configuration information
* @param {object} replicationConfiguration - replication information
* @return {BucketInfo} - bucket info instance
*/
setReplicationConfiguration(replicationConfiguration) {
this._replicationConfiguration = replicationConfiguration;
return this;
}
/**
* Get replication configuration information
* @return {object|null} replication configuration information or `null` if
* the bucket does not have a replication configuration
*/
getReplicationConfiguration() {
return this._replicationConfiguration;
}
/**
* Get cors resource
* @return {object[]} cors
*/
getCors() {
return this._cors;
}
/**
* Set cors resource
* @param {object[]} rules - collection of CORS rules
* @param {string} [rules.id] - optional id to identify rule
* @param {string[]} rules[].allowedMethods - methods allowed for CORS
* @param {string[]} rules[].allowedOrigins - origins allowed for CORS
* @param {string[]} [rules[].allowedHeaders] - headers allowed in an
* OPTIONS request via the Access-Control-Request-Headers header
* @param {number} [rules[].maxAgeSeconds] - seconds browsers should cache
* OPTIONS response
* @param {string[]} [rules[].exposeHeaders] - headers to expose to external
* applications
* @return {BucketInfo} - bucket info instance
*/
setCors(rules) {
this._cors = rules;
return this;
}
/**
* Get the server-side encryption algorithm
* @return {string} - sse algorithm used by this bucket
*/
getSseAlgorithm() {
if (!this._serverSideEncryption) {
return null;
}
return this._serverSideEncryption.algorithm;
}
/**
* Get the server-side encryption master key Id
* @return {string} - sse master key Id used by this bucket
*/
getSseMasterKeyId() {
if (!this._serverSideEncryption) {
return null;
}
return this._serverSideEncryption.masterKeyId;
}
/**
* Get bucket name.
* @return {string} - bucket name
*/
getName() {
return this._name;
}
/**
* Set bucket name.
* @param {string} bucketName - new bucket name
* @return {BucketInfo} - bucket info instance
*/
setName(bucketName) {
this._name = bucketName;
return this;
}
/**
* Get bucket owner.
* @return {string} - bucket owner's canonicalID
*/
getOwner() {
return this._owner;
}
/**
* Set bucket owner.
* @param {string} ownerCanonicalID - bucket owner canonicalID
* @return {BucketInfo} - bucket info instance
*/
setOwner(ownerCanonicalID) {
this._owner = ownerCanonicalID;
return this;
}
/**
* Get bucket owner display name.
* @return {string} - bucket owner display name
*/
getOwnerDisplayName() {
return this._ownerDisplayName;
}
/**
* Set bucket owner display name.
* @param {string} ownerDisplayName - bucket owner display name
* @return {BucketInfo} - bucket info instance
*/
setOwnerDisplayName(ownerDisplayName) {
this._ownerDisplayName = ownerDisplayName;
return this;
}
/**
* Get bucket creation date.
* @return {object} - bucket creation date
*/
getCreationDate() {
return this._creationDate;
}
/**
* Set location constraint.
* @param {string} location - bucket location constraint
* @return {BucketInfo} - bucket info instance
*/
setLocationConstraint(location) {
this._locationConstraint = location;
return this;
}
/**
* Get location constraint.
* @return {string} - bucket location constraint
*/
getLocationConstraint() {
return this._locationConstraint;
}
/**
* Set Bucket model version
*
* @param {number} version - Model version
* @return {BucketInfo} - bucket info instance
*/
setMdBucketModelVersion(version) {
this._mdBucketModelVersion = version;
return this;
}
/**
* Get Bucket model version
*
* @return {number} Bucket model version
*/
getMdBucketModelVersion() {
return this._mdBucketModelVersion;
}
/**
* Add transient flag.
* @return {BucketInfo} - bucket info instance
*/
addTransientFlag() {
this._transient = true;
return this;
}
/**
* Remove transient flag.
* @return {BucketInfo} - bucket info instance
*/
removeTransientFlag() {
this._transient = false;
return this;
}
/**
* Check transient flag.
* @return {boolean} - whether the transient flag is set
*/
hasTransientFlag() {
return !!this._transient;
}
/**
* Add deleted flag.
* @return {BucketInfo} - bucket info instance
*/
addDeletedFlag() {
this._deleted = true;
return this;
}
/**
* Remove deleted flag.
* @return {BucketInfo} - bucket info instance
*/
removeDeletedFlag() {
this._deleted = false;
return this;
}
/**
* Check deleted flag.
* @return {boolean} - whether the deleted flag is set
*/
hasDeletedFlag() {
return !!this._deleted;
}
/**
* Check if the versioning mode is on.
* @return {boolean} - versioning mode status
*/
isVersioningOn() {
return !!(this._versioningConfiguration &&
this._versioningConfiguration.Status === 'Enabled');
}
}
module.exports = BucketInfo;
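The setters above all return `this`, so a bucket's metadata can be built up with chained calls. A minimal sketch of that fluent pattern, using a small stand-in class (not the real BucketInfo, which takes its full state through its constructor):

```javascript
// Illustrative stand-in for the fluent-setter pattern used by BucketInfo:
// each setter mutates a private field and returns `this`, so calls chain.
class MiniBucketInfo {
    constructor() {
        this._versioningConfiguration = null;
        this._transient = false;
    }
    setVersioningConfiguration(versioningConfiguration) {
        this._versioningConfiguration = versioningConfiguration;
        return this;
    }
    addTransientFlag() {
        this._transient = true;
        return this;
    }
    hasTransientFlag() {
        return !!this._transient;
    }
    // Same check as BucketInfo.isVersioningEnabled(): Status must be
    // exactly 'Enabled'; a missing configuration counts as disabled.
    isVersioningEnabled() {
        const config = this._versioningConfiguration;
        return config ? config.Status === 'Enabled' : false;
    }
}

const bucket = new MiniBucketInfo()
    .setVersioningConfiguration({ Status: 'Enabled' })
    .addTransientFlag();
// bucket.isVersioningEnabled() === true
// bucket.hasTransientFlag() === true
```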

lib/models/ObjectMD.js (new file, 668 lines)
// Version 2 changes the format of the data location property
// Version 3 adds the dataStoreName attribute
const modelVersion = 3;
/**
* Class to manage the metadata of regular S3 objects (as opposed to,
* for example, MPU part metadata)
*/
module.exports = class ObjectMD {
/**
* @constructor
*
* @param {number} version - Version of the metadata model
*/
constructor(version) {
const now = new Date().toJSON();
this._data = {
'md-model-version': version || modelVersion,
'owner-display-name': '',
'owner-id': '',
'cache-control': '',
'content-disposition': '',
'content-encoding': '',
'expires': '',
'content-length': 0,
'content-type': '',
'last-modified': now,
'content-md5': '',
// simple/no version. will expand once object versioning is
// introduced
'x-amz-version-id': 'null',
'x-amz-server-version-id': '',
// TODO: Handle this as a utility function for all object puts
// similar to normalizing request but after checkAuth so
// string to sign is not impacted. This is GH Issue#89.
'x-amz-storage-class': 'STANDARD',
'x-amz-server-side-encryption': '',
'x-amz-server-side-encryption-aws-kms-key-id': '',
'x-amz-server-side-encryption-customer-algorithm': '',
'x-amz-website-redirect-location': '',
'acl': {
Canned: 'private',
FULL_CONTROL: [],
WRITE_ACP: [],
READ: [],
READ_ACP: [],
},
'key': '',
'location': [],
'isNull': '',
'nullVersionId': '',
'isDeleteMarker': '',
'versionId': undefined, // If no versionId, it should be undefined
'tags': {},
'replicationInfo': {
status: '',
content: [],
destination: '',
storageClass: '',
role: '',
},
'dataStoreName': '',
};
}
/**
* Returns metadata model version
*
* @return {number} Metadata model version
*/
getModelVersion() {
return this._data['md-model-version'];
}
/**
* Set owner display name
*
* @param {string} displayName - Owner display name
* @return {ObjectMD} itself
*/
setOwnerDisplayName(displayName) {
this._data['owner-display-name'] = displayName;
return this;
}
/**
* Returns owner display name
*
* @return {string} Owner display name
*/
getOwnerDisplayName() {
return this._data['owner-display-name'];
}
/**
* Set owner id
*
* @param {string} id - Owner id
* @return {ObjectMD} itself
*/
setOwnerId(id) {
this._data['owner-id'] = id;
return this;
}
/**
* Returns owner id
*
* @return {string} owner id
*/
getOwnerId() {
return this._data['owner-id'];
}
/**
* Set cache control
*
* @param {string} cacheControl - Cache control
* @return {ObjectMD} itself
*/
setCacheControl(cacheControl) {
this._data['cache-control'] = cacheControl;
return this;
}
/**
* Returns cache control
*
* @return {string} Cache control
*/
getCacheControl() {
return this._data['cache-control'];
}
/**
* Set content disposition
*
* @param {string} contentDisposition - Content disposition
* @return {ObjectMD} itself
*/
setContentDisposition(contentDisposition) {
this._data['content-disposition'] = contentDisposition;
return this;
}
/**
* Returns content disposition
*
* @return {string} Content disposition
*/
getContentDisposition() {
return this._data['content-disposition'];
}
/**
* Set content encoding
*
* @param {string} contentEncoding - Content encoding
* @return {ObjectMD} itself
*/
setContentEncoding(contentEncoding) {
this._data['content-encoding'] = contentEncoding;
return this;
}
/**
* Returns content encoding
*
* @return {string} Content encoding
*/
getContentEncoding() {
return this._data['content-encoding'];
}
/**
* Set expiration date
*
* @param {string} expires - Expiration date
* @return {ObjectMD} itself
*/
setExpires(expires) {
this._data.expires = expires;
return this;
}
/**
* Returns expiration date
*
* @return {string} Expiration date
*/
getExpires() {
return this._data.expires;
}
/**
* Set content length
*
* @param {number} contentLength - Content length
* @return {ObjectMD} itself
*/
setContentLength(contentLength) {
this._data['content-length'] = contentLength;
return this;
}
/**
* Returns content length
*
* @return {number} Content length
*/
getContentLength() {
return this._data['content-length'];
}
/**
* Set content type
*
* @param {string} contentType - Content type
* @return {ObjectMD} itself
*/
setContentType(contentType) {
this._data['content-type'] = contentType;
return this;
}
/**
* Returns content type
*
* @return {string} Content type
*/
getContentType() {
return this._data['content-type'];
}
/**
* Set last modified date
*
* @param {string} lastModified - Last modified date
* @return {ObjectMD} itself
*/
setLastModified(lastModified) {
this._data['last-modified'] = lastModified;
return this;
}
/**
* Returns last modified date
*
* @return {string} Last modified date
*/
getLastModified() {
return this._data['last-modified'];
}
/**
* Set content md5 hash
*
* @param {string} contentMd5 - Content md5 hash
* @return {ObjectMD} itself
*/
setContentMd5(contentMd5) {
this._data['content-md5'] = contentMd5;
return this;
}
/**
* Returns content md5 hash
*
* @return {string} content md5 hash
*/
getContentMd5() {
return this._data['content-md5'];
}
/**
* Set version id
*
* @param {string} versionId - Version id
* @return {ObjectMD} itself
*/
setAmzVersionId(versionId) {
this._data['x-amz-version-id'] = versionId;
return this;
}
/**
* Returns version id
*
* @return {string} Version id
*/
getAmzVersionId() {
return this._data['x-amz-version-id'];
}
/**
* Set server version id
*
* @param {string} versionId - server version id
* @return {ObjectMD} itself
*/
setAmzServerVersionId(versionId) {
this._data['x-amz-server-version-id'] = versionId;
return this;
}
/**
* Returns server version id
*
* @return {string} server version id
*/
getAmzServerVersionId() {
return this._data['x-amz-server-version-id'];
}
/**
* Set storage class
*
* @param {string} storageClass - Storage class
* @return {ObjectMD} itself
*/
setAmzStorageClass(storageClass) {
this._data['x-amz-storage-class'] = storageClass;
return this;
}
/**
* Returns storage class
*
* @return {string} Storage class
*/
getAmzStorageClass() {
return this._data['x-amz-storage-class'];
}
/**
* Set server side encryption
*
* @param {string} serverSideEncryption - Server side encryption
* @return {ObjectMD} itself
*/
setAmzServerSideEncryption(serverSideEncryption) {
this._data['x-amz-server-side-encryption'] = serverSideEncryption;
return this;
}
/**
* Returns server side encryption
*
* @return {string} server side encryption
*/
getAmzServerSideEncryption() {
return this._data['x-amz-server-side-encryption'];
}
/**
* Set encryption key id
*
* @param {string} keyId - Encryption key id
* @return {ObjectMD} itself
*/
setAmzEncryptionKeyId(keyId) {
this._data['x-amz-server-side-encryption-aws-kms-key-id'] = keyId;
return this;
}
/**
* Returns encryption key id
*
* @return {string} Encryption key id
*/
getAmzEncryptionKeyId() {
return this._data['x-amz-server-side-encryption-aws-kms-key-id'];
}
/**
* Set encryption customer algorithm
*
* @param {string} algo - Encryption customer algorithm
* @return {ObjectMD} itself
*/
setAmzEncryptionCustomerAlgorithm(algo) {
this._data['x-amz-server-side-encryption-customer-algorithm'] = algo;
return this;
}
/**
* Returns Encryption customer algorithm
*
* @return {string} Encryption customer algorithm
*/
getAmzEncryptionCustomerAlgorithm() {
return this._data['x-amz-server-side-encryption-customer-algorithm'];
}
/**
* Set metadata redirectLocation value
*
* @param {string} redirectLocation - The website redirect location
* @return {ObjectMD} itself
*/
setRedirectLocation(redirectLocation) {
this._data['x-amz-website-redirect-location'] = redirectLocation;
return this;
}
/**
* Get metadata redirectLocation value
*
* @return {string} Website redirect location
*/
getRedirectLocation() {
return this._data['x-amz-website-redirect-location'];
}
/**
* Set access control list
*
* @param {object} acl - Access control list
* @param {string} acl.Canned -
* @param {string[]} acl.FULL_CONTROL -
* @param {string[]} acl.WRITE_ACP -
* @param {string[]} acl.READ -
* @param {string[]} acl.READ_ACP -
* @return {ObjectMD} itself
*/
setAcl(acl) {
this._data.acl = acl;
return this;
}
/**
* Returns access control list
*
* @return {object} Access control list
*/
getAcl() {
return this._data.acl;
}
/**
* Set object key
*
* @param {string} key - Object key
* @return {ObjectMD} itself
*/
setKey(key) {
this._data.key = key;
return this;
}
/**
* Returns object key
*
* @return {string} object key
*/
getKey() {
return this._data.key;
}
/**
* Set location
*
* @param {string[]} location - location
* @return {ObjectMD} itself
*/
setLocation(location) {
this._data.location = location;
return this;
}
/**
* Returns location
*
* @return {string[]} location
*/
getLocation() {
return this._data.location;
}
/**
* Set metadata isNull value
*
* @param {boolean} isNull - Whether new version is null or not
* @return {ObjectMD} itself
*/
setIsNull(isNull) {
this._data.isNull = isNull;
return this;
}
/**
* Get metadata isNull value
*
* @return {boolean} Whether new version is null or not
*/
getIsNull() {
return this._data.isNull;
}
/**
* Set metadata nullVersionId value
*
* @param {string} nullVersionId - The version id of the null version
* @return {ObjectMD} itself
*/
setNullVersionId(nullVersionId) {
this._data.nullVersionId = nullVersionId;
return this;
}
/**
* Get metadata nullVersionId value
*
* @return {string} The version id of the null version
*/
getNullVersionId() {
return this._data.nullVersionId;
}
/**
* Set metadata isDeleteMarker value
*
* @param {boolean} isDeleteMarker - Whether object is a delete marker
* @return {ObjectMD} itself
*/
setIsDeleteMarker(isDeleteMarker) {
this._data.isDeleteMarker = isDeleteMarker;
return this;
}
/**
* Get metadata isDeleteMarker value
*
* @return {boolean} Whether object is a delete marker
*/
getIsDeleteMarker() {
return this._data.isDeleteMarker;
}
/**
* Set metadata versionId value
*
* @param {string} versionId - The object versionId
* @return {ObjectMD} itself
*/
setVersionId(versionId) {
this._data.versionId = versionId;
return this;
}
/**
* Get metadata versionId value
*
* @return {string} The object versionId
*/
getVersionId() {
return this._data.versionId;
}
/**
* Set tags
*
* @param {object} tags - tags object
* @return {ObjectMD} itself
*/
setTags(tags) {
this._data.tags = tags;
return this;
}
/**
* Returns tags
*
* @return {object} tags object
*/
getTags() {
return this._data.tags;
}
/**
* Set replication information
*
* @param {object} replicationInfo - replication information object
* @return {ObjectMD} itself
*/
setReplicationInfo(replicationInfo) {
const { status, content, destination, storageClass, role } =
replicationInfo;
this._data.replicationInfo = {
status,
content,
destination,
storageClass: storageClass || '',
role,
};
return this;
}
/**
* Get replication information
*
* @return {object} replication object
*/
getReplicationInfo() {
return this._data.replicationInfo;
}
/**
* Set dataStoreName
*
* @param {string} dataStoreName - name of the data backend storing the object
* @return {ObjectMD} itself
*/
setDataStoreName(dataStoreName) {
this._data.dataStoreName = dataStoreName;
return this;
}
/**
* Get dataStoreName
*
* @return {string} name of the data backend storing the object
*/
getDataStoreName() {
return this._data.dataStoreName;
}
/**
* Set custom meta headers
*
* @param {object} metaHeaders - Meta headers
* @return {ObjectMD} itself
*/
setUserMetadata(metaHeaders) {
Object.keys(metaHeaders).forEach(key => {
if (key.startsWith('x-amz-meta-')) {
this._data[key] = metaHeaders[key];
}
});
// If a multipart object and the acl is already parsed, we update it
if (metaHeaders.acl) {
this.setAcl(metaHeaders.acl);
}
return this;
}
/**
* overrideMetadataValues (used for complete MPU and object copy)
*
* @param {object} headers - Headers
* @return {ObjectMD} itself
*/
overrideMetadataValues(headers) {
Object.assign(this._data, headers);
return this;
}
/**
* Returns metadata object
*
* @return {object} metadata object
*/
getValue() {
return this._data;
}
};
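The filtering rule applied by `setUserMetadata` above can be isolated into a small standalone helper for illustration (the function name is ours; the real method writes into `this._data` and also folds a parsed `acl` back in):

```javascript
// Standalone sketch of the setUserMetadata() filtering rule: only headers
// prefixed with 'x-amz-meta-' are kept as custom user metadata; all other
// headers are ignored.
function filterUserMetadata(metaHeaders) {
    const result = {};
    Object.keys(metaHeaders).forEach(key => {
        if (key.startsWith('x-amz-meta-')) {
            result[key] = metaHeaders[key];
        }
    });
    return result;
}

// filterUserMetadata({ 'x-amz-meta-color': 'blue', 'content-type': 'a/b' })
// -> { 'x-amz-meta-color': 'blue' }
```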

New file (423 lines)
const assert = require('assert');
const UUID = require('uuid');
const escapeForXml = require('../s3middleware/escapeForXml');
const errors = require('../errors');
const { isValidBucketName } = require('../s3routes/routesUtils');
const MAX_RULES = 1000;
const RULE_ID_LIMIT = 255;
const validStorageClasses = [
undefined,
'STANDARD',
'STANDARD_IA',
'REDUCED_REDUNDANCY',
];
/**
Example XML request:
<ReplicationConfiguration>
<Role>IAM-role-ARN</Role>
<Rule>
<ID>Rule-1</ID>
<Status>rule-status</Status>
<Prefix>key-prefix</Prefix>
<Destination>
<Bucket>arn:aws:s3:::bucket-name</Bucket>
<StorageClass>
optional-destination-storage-class-override
</StorageClass>
</Destination>
</Rule>
<Rule>
<ID>Rule-2</ID>
...
</Rule>
...
</ReplicationConfiguration>
*/
class ReplicationConfiguration {
/**
* Create a ReplicationConfiguration instance
* @param {object} xml - The parsed XML
* @param {object} log - Werelogs logger
* @param {object} config - S3 server configuration
* @return {object} - ReplicationConfiguration instance
*/
constructor(xml, log, config) {
this._parsedXML = xml;
this._log = log;
this._config = config;
this._configPrefixes = [];
this._configIDs = [];
// The bucket metadata model of replication config. Note there is a
// single `destination` property because we can replicate to only one
// other bucket. Thus each rule is simplified to these properties.
this._role = null;
this._destination = null;
this._rules = null;
}
/**
* Get the role of the bucket replication configuration
* @return {string|null} - The role if defined, otherwise `null`
*/
getRole() {
return this._role;
}
/**
* Get the destination bucket for replication
* @return {string|null} - The bucket if defined, otherwise `null`
*/
getDestination() {
return this._destination;
}
/**
* Get the replication rules
* @return {object[]|null} - The rules if defined, otherwise `null`
*/
getRules() {
return this._rules;
}
/**
* Get the replication configuration
* @return {object} - The replication configuration
*/
getReplicationConfiguration() {
return {
role: this.getRole(),
destination: this.getDestination(),
rules: this.getRules(),
};
}
/**
* Build the rule object from the parsed XML of the given rule
* @param {object} rule - The rule object from this._parsedXML
* @return {object} - The rule object to push into the `Rules` array
*/
_buildRuleObject(rule) {
const obj = {
prefix: rule.Prefix[0],
enabled: rule.Status[0] === 'Enabled',
};
// ID is an optional property, but create one if not provided or is ''.
// We generate a unique 48-character base64 ID for the rule.
obj.id = rule.ID && rule.ID[0] !== '' ? rule.ID[0] :
Buffer.from(UUID.v4()).toString('base64');
// StorageClass is an optional property.
if (rule.Destination[0].StorageClass) {
obj.storageClass = rule.Destination[0].StorageClass[0];
}
return obj;
}
/**
* Check if the Role field of the replication configuration is valid
* @param {string} ARN - The Role field value provided in the configuration
* @return {boolean} `true` if a valid role ARN, `false` otherwise
*/
_isValidRoleARN(ARN) {
// AWS accepts a range of values for the Role field. Though this does
// not encompass all constraints imposed by AWS, we have opted to
// enforce the following.
const arr = ARN.split(':');
const isValidRoleARN =
arr[0] === 'arn' &&
arr[1] === 'aws' &&
arr[2] === 'iam' &&
arr[3] === '' &&
(arr[4] === '*' || arr[4].length > 1) &&
arr[5].startsWith('role');
return isValidRoleARN;
}
/**
* Check that the `Role` property of the configuration is valid
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseRole() {
const parsedRole = this._parsedXML.ReplicationConfiguration.Role;
if (!parsedRole) {
return errors.MalformedXML;
}
const role = parsedRole[0];
const rolesArr = role.split(',');
if (rolesArr.length !== 2) {
return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' +
'Role must be a comma-separated list of two IAM roles');
}
const invalidRole = rolesArr.find(r => !this._isValidRoleARN(r));
if (invalidRole !== undefined) {
return errors.InvalidArgument.customizeDescription(
'Invalid Role specified in replication configuration: ' +
`'${invalidRole}'`);
}
this._role = role;
return undefined;
}
/**
* Check that the `Rules` property array is valid
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseRules() {
// Note that the XML uses 'Rule' while the config object uses 'Rules'.
const { Rule } = this._parsedXML.ReplicationConfiguration;
if (!Rule || Rule.length < 1) {
return errors.MalformedXML;
}
if (Rule.length > MAX_RULES) {
return errors.InvalidRequest.customizeDescription(
'Number of defined replication rules cannot exceed 1000');
}
const err = this._parseEachRule(Rule);
if (err) {
return err;
}
return undefined;
}
/**
* Check that each rule in the `Rules` property array is valid
* @param {array} rules - The rule array from this._parsedXML
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseEachRule(rules) {
const rulesArr = [];
for (let i = 0; i < rules.length; i++) {
const err =
this._parseStatus(rules[i]) || this._parsePrefix(rules[i]) ||
this._parseID(rules[i]) || this._parseDestination(rules[i]);
if (err) {
return err;
}
rulesArr.push(this._buildRuleObject(rules[i]));
}
this._rules = rulesArr;
return undefined;
}
/**
* Check that the `Status` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseStatus(rule) {
const status = rule.Status && rule.Status[0];
if (!status || !['Enabled', 'Disabled'].includes(status)) {
return errors.MalformedXML;
}
return undefined;
}
/**
* Check that the `Prefix` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parsePrefix(rule) {
const prefix = rule.Prefix && rule.Prefix[0];
// An empty string prefix should be allowed.
if (!prefix && prefix !== '') {
return errors.MalformedXML;
}
if (prefix.length > 1024) {
return errors.InvalidArgument.customizeDescription('Rule prefix ' +
'cannot be longer than maximum allowed key length of 1024');
}
// Each Prefix in a list of rules must not overlap. For example, two
// prefixes 'TaxDocs' and 'TaxDocs/2015' are overlapping. An empty
// string prefix is expected to overlap with any other prefix.
for (let i = 0; i < this._configPrefixes.length; i++) {
const used = this._configPrefixes[i];
if (prefix.startsWith(used) || used.startsWith(prefix)) {
return errors.InvalidRequest.customizeDescription('Found ' +
`overlapping prefixes '${used}' and '${prefix}'`);
}
}
this._configPrefixes.push(prefix);
return undefined;
}
/**
* Check that the `ID` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseID(rule) {
const id = rule.ID && rule.ID[0];
if (id && id.length > RULE_ID_LIMIT) {
return errors.InvalidArgument
.customizeDescription('Rule Id cannot be greater than 255');
}
// Each ID in a list of rules must be unique.
if (this._configIDs.includes(id)) {
return errors.InvalidRequest.customizeDescription(
'Rule Id must be unique');
}
this._configIDs.push(id);
return undefined;
}
/**
* Check that the `StorageClass` is a valid class
* @param {string} storageClass - The storage class to validate
* @return {boolean} `true` if valid, otherwise `false`
*/
_isValidStorageClass(storageClass) {
if (!this._config) {
return validStorageClasses.includes(storageClass);
}
const replicationEndpoints = this._config.replicationEndpoints
.map(endpoint => endpoint.site);
return replicationEndpoints.includes(storageClass) ||
validStorageClasses.includes(storageClass);
}
/**
* Check that the `StorageClass` property is valid
* @param {object} destination - The destination object from this._parsedXML
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseStorageClass(destination) {
const storageClass = destination.StorageClass &&
destination.StorageClass[0];
if (!this._isValidStorageClass(storageClass)) {
return errors.MalformedXML;
}
return undefined;
}
/**
* Check that the `Bucket` property is valid
* @param {object} destination - The destination object from this._parsedXML
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseBucket(destination) {
const parsedBucketARN = destination.Bucket;
if (!parsedBucketARN) {
return errors.MalformedXML;
}
const bucketARN = parsedBucketARN[0];
if (!bucketARN) {
return errors.InvalidArgument.customizeDescription(
'Destination bucket cannot be null or empty');
}
const arr = bucketARN.split(':');
const isValidARN =
arr[0] === 'arn' &&
arr[1] === 'aws' &&
arr[2] === 's3' &&
arr[3] === '' &&
arr[4] === '';
if (!isValidARN) {
return errors.InvalidArgument
.customizeDescription('Invalid bucket ARN');
}
if (!isValidBucketName(arr[5], [])) {
return errors.InvalidArgument
.customizeDescription('The specified bucket is not valid');
}
// We can replicate objects only to one destination bucket.
if (this._destination && this._destination !== bucketARN) {
return errors.InvalidRequest.customizeDescription(
'The destination bucket must be same for all rules');
}
this._destination = bucketARN;
return undefined;
}
/**
* Check that the `destination` property is valid
* @param {object} rule - The rule object from this._parsedXML
* @return {Error|undefined} - An error if invalid, otherwise `undefined`
*/
_parseDestination(rule) {
const dest = rule.Destination && rule.Destination[0];
if (!dest) {
return errors.MalformedXML;
}
const err = this._parseBucket(dest) || this._parseStorageClass(dest);
if (err) {
return err;
}
return undefined;
}
/**
* Check that the request configuration is valid
* @return {undefined}
*/
parseConfiguration() {
const err = this._parseRole() || this._parseRules();
if (err) {
return err;
}
return undefined;
}
/**
* Get the XML representation of the configuration object
* @param {object} config - The bucket replication configuration
* @return {string} - The XML representation of the configuration
*/
static getConfigXML(config) {
const { role, destination, rules } = config;
const Role = `<Role>${escapeForXml(role)}</Role>`;
const Bucket = `<Bucket>${escapeForXml(destination)}</Bucket>`;
const rulesXML = rules.map(rule => {
const { prefix, enabled, storageClass, id } = rule;
const Prefix = prefix === '' ? '<Prefix/>' :
`<Prefix>${escapeForXml(prefix)}</Prefix>`;
const Status =
`<Status>${enabled ? 'Enabled' : 'Disabled'}</Status>`;
const StorageClass = storageClass ?
`<StorageClass>${storageClass}</StorageClass>` : '';
const Destination =
`<Destination>${Bucket}${StorageClass}</Destination>`;
// If the ID property was omitted in the configuration object, we
// create an ID for the rule. Hence it is always defined.
const ID = `<ID>${escapeForXml(id)}</ID>`;
return `<Rule>${ID}${Prefix}${Status}${Destination}</Rule>`;
}).join('');
return '<?xml version="1.0" encoding="UTF-8"?>' +
'<ReplicationConfiguration ' +
'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
`${rulesXML}${Role}` +
'</ReplicationConfiguration>';
}
/**
* Validate the bucket metadata replication configuration structure and
* value types
* @param {object} config - The replication configuration to validate
* @return {undefined}
*/
static validateConfig(config) {
assert.strictEqual(typeof config, 'object');
const { role, rules, destination } = config;
assert.strictEqual(typeof role, 'string');
assert.strictEqual(typeof destination, 'string');
assert.strictEqual(Array.isArray(rules), true);
rules.forEach(rule => {
assert.strictEqual(typeof rule, 'object');
const { prefix, enabled, id, storageClass } = rule;
assert.strictEqual(typeof prefix, 'string');
assert.strictEqual(typeof enabled, 'boolean');
assert(id === undefined || typeof id === 'string');
if (storageClass !== undefined) {
assert.strictEqual(typeof storageClass, 'string');
}
});
}
}
module.exports = ReplicationConfiguration;
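The overlap rule enforced by `_parsePrefix` can be isolated into a small helper for illustration (a hypothetical function, not part of the module): two rule prefixes overlap when either is a prefix of the other, and the empty prefix overlaps with everything.

```javascript
// Standalone sketch of the prefix-overlap check performed by _parsePrefix():
// returns the first clashing pair found, or null when all prefixes are
// mutually non-overlapping.
function findOverlap(prefixes) {
    const used = [];
    for (const prefix of prefixes) {
        // Overlap when either string is a prefix of the other; note that
        // ''.startsWith-style checks make the empty prefix clash with all.
        const clash = used.find(u =>
            prefix.startsWith(u) || u.startsWith(prefix));
        if (clash !== undefined) {
            return { clash, prefix };
        }
        used.push(prefix);
    }
    return null;
}

// findOverlap(['TaxDocs', 'TaxDocs/2015']) -> overlap detected
// findOverlap(['TaxDocs/', 'ProjectDocs/']) -> null
```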

New file (195 lines)
class RoutingRule {
/**
* Represents a routing rule in a website configuration.
* @constructor
* @param {object} params - object containing redirect and condition objects
* @param {object} params.redirect - specifies how to redirect requests
* @param {string} [params.redirect.protocol] - protocol to use for redirect
* @param {string} [params.redirect.hostName] - hostname to use for redirect
* @param {string} [params.redirect.replaceKeyPrefixWith] - string to replace
* keyPrefixEquals specified in condition
* @param {string} [params.redirect.replaceKeyWith] - string to replace key
* @param {string} [params.redirect.httpRedirectCode] - http redirect code
* @param {object} [params.condition] - specifies conditions for a redirect
* @param {string} [params.condition.keyPrefixEquals] - key prefix that
* triggers a redirect
* @param {string} [params.condition.httpErrorCodeReturnedEquals] - http code
* that triggers a redirect
*/
constructor(params) {
if (params) {
this._redirect = params.redirect;
this._condition = params.condition;
}
}
/**
* Return copy of rule as plain object
* @return {object} rule;
*/
getRuleObject() {
const rule = {
redirect: this._redirect,
condition: this._condition,
};
return rule;
}
/**
* Return the condition object
* @return {object} condition;
*/
getCondition() {
return this._condition;
}
/**
* Return the redirect object
* @return {object} redirect;
*/
getRedirect() {
return this._redirect;
}
}
class WebsiteConfiguration {
/**
* Object that represents website configuration
* @constructor
* @param {object} params - object containing params to construct the configuration
* @param {string} params.indexDocument - key for index document object
* required when redirectAllRequestsTo is undefined
* @param {string} [params.errorDocument] - key for error document object
* @param {object} params.redirectAllRequestsTo - object containing info
* about how to redirect all requests
* @param {string} params.redirectAllRequestsTo.hostName - hostName to use
* when redirecting all requests
* @param {string} [params.redirectAllRequestsTo.protocol] - protocol to use
* when redirecting all requests ('http' or 'https')
* @param {(RoutingRule[]|object[])} params.routingRules - array of
* RoutingRule instances or plain rule objects to cast as RoutingRules
*/
constructor(params) {
if (params) {
this._indexDocument = params.indexDocument;
this._errorDocument = params.errorDocument;
this._redirectAllRequestsTo = params.redirectAllRequestsTo;
this.setRoutingRules(params.routingRules);
}
}
/**
* Return plain object with configuration info
* @return {object} - Object copy of class instance
*/
getConfig() {
const websiteConfig = {
indexDocument: this._indexDocument,
errorDocument: this._errorDocument,
redirectAllRequestsTo: this._redirectAllRequestsTo,
};
if (this._routingRules) {
websiteConfig.routingRules =
this._routingRules.map(rule => rule.getRuleObject());
}
return websiteConfig;
}
/**
* Set the redirectAllRequestsTo
* @param {object} obj - object to set as redirectAllRequestsTo
* @param {string} obj.hostName - hostname for redirecting all requests
     * @param {string} [obj.protocol] - protocol for redirecting all requests
* @return {undefined};
*/
setRedirectAllRequestsTo(obj) {
this._redirectAllRequestsTo = obj;
}
/**
* Return the redirectAllRequestsTo object
* @return {object} redirectAllRequestsTo;
*/
getRedirectAllRequestsTo() {
return this._redirectAllRequestsTo;
}
/**
* Set the index document object name
* @param {string} suffix - index document object key
* @return {undefined};
*/
setIndexDocument(suffix) {
this._indexDocument = suffix;
}
/**
* Get the index document object name
* @return {string} indexDocument
*/
getIndexDocument() {
return this._indexDocument;
}
/**
* Set the error document object name
* @param {string} key - error document object key
* @return {undefined};
*/
setErrorDocument(key) {
this._errorDocument = key;
}
/**
* Get the error document object name
* @return {string} errorDocument
*/
getErrorDocument() {
return this._errorDocument;
}
/**
* Set the whole RoutingRules array
* @param {array} array - array to set as instance's RoutingRules
* @return {undefined};
*/
setRoutingRules(array) {
if (array) {
this._routingRules = array.map(rule => {
if (rule instanceof RoutingRule) {
return rule;
}
return new RoutingRule(rule);
});
}
}
/**
* Add a RoutingRule instance to routingRules array
* @param {object} obj - rule to add to array
* @return {undefined};
*/
addRoutingRule(obj) {
if (!this._routingRules) {
this._routingRules = [];
}
if (obj && obj instanceof RoutingRule) {
this._routingRules.push(obj);
} else if (obj) {
this._routingRules.push(new RoutingRule(obj));
}
}
/**
* Get routing rules
* @return {RoutingRule[]} - array of RoutingRule instances
*/
getRoutingRules() {
return this._routingRules;
}
}
module.exports = {
RoutingRule,
WebsiteConfiguration,
};
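To illustrate the casting behavior of `setRoutingRules()`, here is a standalone sketch (not the module above): plain objects are wrapped in `RoutingRule` instances, while instances pass through untouched. The `RoutingRule` class is re-declared in miniature so the snippet runs on its own.

```javascript
// Minimal sketch mirroring setRoutingRules(): cast plain routing rule
// objects to RoutingRule instances, leaving existing instances as-is.
class RoutingRule {
    constructor(params) {
        if (params) {
            this._redirect = params.redirect;
            this._condition = params.condition;
        }
    }
    getRuleObject() {
        return { redirect: this._redirect, condition: this._condition };
    }
}

function castRoutingRules(array) {
    return array.map(rule =>
        (rule instanceof RoutingRule ? rule : new RoutingRule(rule)));
}

const rules = castRoutingRules([
    { redirect: { hostName: 'example.com' },
      condition: { keyPrefixEquals: 'docs/' } },
]);
```

Either form can then be serialized back to a plain object with `getRuleObject()`, which is what `getConfig()` relies on.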

lib/network/RoundRobin.js (new file, 167 lines)

@@ -0,0 +1,167 @@
const DEFAULT_STICKY_COUNT = 100;
/**
* Shuffle an array in-place
*
* @param {Array} array - The array to shuffle
* @return {undefined}
*/
function shuffle(array) {
for (let i = array.length - 1; i > 0; i--) {
const randIndex = Math.floor(Math.random() * (i + 1));
/* eslint-disable no-param-reassign */
const randIndexVal = array[randIndex];
array[randIndex] = array[i];
array[i] = randIndexVal;
/* eslint-enable no-param-reassign */
}
}
class RoundRobin {
/**
* @constructor
* @param {object[]|string[]} hostsList - list of hosts to query
* in round-robin fashion.
* @param {string} hostsList[].host - host name or IP address
* @param {number} [hostsList[].port] - port number to contact
* @param {object} [options] - options object
* @param {number} [options.stickyCount=100] - number of requests
* to send to the same host before switching to the next one
* @param {Logger} [options.logger] - logger object
*/
constructor(hostsList, options) {
if (hostsList.length === 0) {
throw new Error(
'at least one host must be provided for round robin');
}
this.hostsList = hostsList.map(item => this._validateHostObj(item));
if (options && options.logger) {
this.logger = options.logger;
}
if (options && options.stickyCount) {
this.stickyCount = options.stickyCount;
} else {
this.stickyCount = DEFAULT_STICKY_COUNT;
}
// TODO: add blacklisting capability
shuffle(this.hostsList);
this.hostIndex = 0;
this.pickCount = 0;
}
_validateHostObj(hostItem) {
const hostItemObj = {};
if (typeof hostItem === 'string') {
const hostParts = hostItem.split(':');
if (hostParts.length > 2) {
throw new Error(`${hostItem}: ` +
'bad round robin item: expect "host[:port]"');
}
hostItemObj.host = hostParts[0];
hostItemObj.port = hostParts[1];
} else {
if (typeof hostItem !== 'object') {
throw new Error(`${hostItem}: bad round robin item: ` +
'must be a string or object');
}
hostItemObj.host = hostItem.host;
hostItemObj.port = hostItem.port;
}
if (typeof hostItemObj.host !== 'string') {
throw new Error(`${hostItemObj.host}: ` +
'bad round robin host name: not a string');
}
if (hostItemObj.port !== undefined) {
if (/^[0-9]+$/.exec(hostItemObj.port) === null) {
throw new Error(`'${hostItemObj.port}': ` +
'bad round robin host port: not a number');
}
const parsedPort = Number.parseInt(hostItemObj.port, 10);
if (parsedPort <= 0 || parsedPort > 65535) {
throw new Error(`'${hostItemObj.port}': bad round robin ` +
'host port: not a valid port number');
}
return {
host: hostItemObj.host,
port: parsedPort,
};
}
return { host: hostItemObj.host };
}
/**
* return the next host within round-robin cycle
*
* The same host is returned up to {@link this.stickyCount} times,
* then the next host in the round-robin list is returned.
*
* Once all hosts have been returned once, the list is shuffled
* and a new round-robin cycle starts.
*
* @return {object} a host object with { host, port } attributes
*/
pickHost() {
if (this.logger) {
this.logger.debug('pick host',
{ host: this.getCurrentHost() });
}
const curHost = this.getCurrentHost();
++this.pickCount;
if (this.pickCount === this.stickyCount) {
this._roundRobinCurrentHost({ shuffle: true });
this.pickCount = 0;
}
return curHost;
}
/**
* return the next host within round-robin cycle
*
* stickyCount is ignored, the next host in the round-robin list
* is returned.
*
* Once all hosts have been returned once, the list is shuffled
* and a new round-robin cycle starts.
*
* @return {object} a host object with { host, port } attributes
*/
pickNextHost() {
// don't shuffle in this case because we want to force picking
// a different host, shuffling may return the same host again
this._roundRobinCurrentHost({ shuffle: false });
this.pickCount = 0;
return this.getCurrentHost();
}
/**
* return the current host in round-robin, without changing the
* round-robin state
*
* @return {object} a host object with { host, port } attributes
*/
getCurrentHost() {
return this.hostsList[this.hostIndex];
}
_roundRobinCurrentHost(params) {
this.hostIndex += 1;
if (this.hostIndex === this.hostsList.length) {
this.hostIndex = 0;
// re-shuffle the array when all entries have been
// returned once, if shuffle param is true
if (params.shuffle) {
shuffle(this.hostsList);
}
}
if (this.logger) {
this.logger.debug('round robin host',
{ newHost: this.getCurrentHost() });
}
}
}
module.exports = RoundRobin;
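The sticky behavior of `pickHost()` can be seen in a simplified, deterministic sketch (shuffling is omitted here so the pick order is predictable; this is an illustration, not the exported class):

```javascript
// Simplified sketch of the sticky round-robin policy above: the same
// host is returned `stickyCount` times before advancing to the next one.
class StickyRoundRobin {
    constructor(hostsList, stickyCount) {
        this.hostsList = hostsList;
        this.stickyCount = stickyCount;
        this.hostIndex = 0;
        this.pickCount = 0;
    }
    pickHost() {
        const curHost = this.hostsList[this.hostIndex];
        ++this.pickCount;
        if (this.pickCount === this.stickyCount) {
            // advance to the next host, wrapping around the list
            this.hostIndex = (this.hostIndex + 1) % this.hostsList.length;
            this.pickCount = 0;
        }
        return curHost;
    }
}

const rr = new StickyRoundRobin([{ host: 'a' }, { host: 'b' }], 3);
const picks = [];
for (let i = 0; i < 7; i++) {
    picks.push(rr.pickHost().host);
}
// picks is now ['a', 'a', 'a', 'b', 'b', 'b', 'a']
```

The real class additionally reshuffles the host list each time a full cycle completes, which spreads load evenly across hosts over time.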

lib/network/http/server.js (new file, 440 lines)

@@ -0,0 +1,440 @@
'use strict'; // eslint-disable-line
const http = require('http');
const https = require('https');
const assert = require('assert');
const dhparam = require('../../https/dh2048').dhparam;
const ciphers = require('../../https/ciphers').ciphers;
const errors = require('../../errors');
class Server {
/**
* @constructor
*
     * @param {number} port - Port to listen on
* @param {werelogs.Logger} logger - Logger object
*/
constructor(port, logger) {
assert.strictEqual(typeof port, 'number', 'Port must be a number');
this._noDelay = true;
this._cbOnListening = () => {};
this._cbOnRequest = (req, res) => this._noHandlerCb(req, res);
this._cbOnCheckContinue = (req, res) => {
res.writeContinue();
this._cbOnRequest(req, res);
};
// AWS S3 does not respond with 417 Expectation Failed or any error
// when Expect header is received and the value is not 100-continue
this._cbOnCheckExpectation = (req, res) => this._cbOnRequest(req, res);
this._cbOnError = () => false;
this._cbOnStop = () => {};
this._https = {
ciphers,
dhparam,
cert: null,
key: null,
ca: null,
requestCert: false,
rejectUnauthorized: true,
};
this._port = port;
this._address = '::';
this._server = null;
this._logger = logger;
}
/**
     * Setter for noDelay; setting it to true disables the Nagle TCP
     * algorithm, reducing latency for each request
*
* @param {boolean} value - { true: Disable, false: Enable }
* @return {Server} itself
*/
setNoDelay(value) {
this._noDelay = value;
return this;
}
/**
     * Getter to access the http/https server
*
* @return {http.Server|https.Server} http/https server
*/
getServer() {
return this._server;
}
/**
     * Getter to access the current authority certificate
*
* @return {string} Authority certificate
*/
getAuthorityCertificate() {
return this._https.ca;
}
/**
     * Setter for the listening port
     *
     * @param {number} port - Port to listen on
* @return {undefined}
*/
setPort(port) {
this._port = port;
}
/**
     * Getter to access the listening port
*
* @return {number} listening port
*/
getPort() {
return this._port;
}
/**
     * Setter for the bind address
*
* @param {String} address - address bound to the socket
* @return {undefined}
*/
setBindAddress(address) {
this._address = address;
}
/**
* Getter to access the bind address
*
* @return {String} address bound to the socket
*/
getBindAddress() {
return this._address;
}
/**
     * Getter to access the noDelay (Nagle algorithm) configuration
*
* @return {boolean} { true: Disable, false: Enable }
*/
isNoDelay() {
return this._noDelay;
}
/**
     * Getter to know whether the server runs under https or http
*
* @return {boolean} { true: Https server, false: http server }
*/
isHttps() {
return !!this._https.cert && !!this._https.key;
}
/**
* Setter for the https configuration
*
* @param {string} [cert] - Content of the certificate
* @param {string} [key] - Content of the key
* @param {string} [ca] - Content of the authority certificate
     * @param {boolean} [twoWay] - Enable two-way TLS authentication, which
     * means each client needs to present an SSL certificate
* @return {Server} itself
*/
setHttps(cert, key, ca, twoWay) {
this._https = {
ciphers,
dhparam,
cert: null,
key: null,
ca: null,
requestCert: false,
rejectUnauthorized: true,
};
if (cert && key) {
assert.strictEqual(typeof cert, 'string');
assert.strictEqual(typeof key, 'string');
this._https.cert = cert;
this._https.key = key;
}
if (ca) {
assert.strictEqual(typeof ca, 'string');
this._https.ca = [ca];
}
if (twoWay) {
assert.strictEqual(typeof twoWay, 'boolean');
this._https.requestCert = twoWay;
}
return this;
}
/**
* Function called when no handler specified in the server
*
* @param {http.IncomingMessage|https.IncomingMessage} req - Request object
* @param {http.ServerResponse|https.ServerResponse} res - Response object
* @return {undefined}
*/
_noHandlerCb(req, res) {
// if no handler on the Server, send back an internal error
const err = errors.InternalError;
const msg = `${err.message}: No handler in Server`;
res.writeHead(err.code, {
'Content-Type': 'text/plain',
'Content-Length': msg.length,
});
return res.end(msg);
}
/**
* Function called when request received
*
* @param {http.IncomingMessage|https.IncomingMessage} req - Request object
* @param {http.ServerResponse|https.ServerResponse} res - Response object
* @return {undefined}
*/
_onRequest(req, res) {
return this._cbOnRequest(req, res);
}
/**
* Function called when the Server is listening
*
* @return {undefined}
*/
_onListening() {
this._logger.info('Server is listening', {
method: 'arsenal.network.Server._onListening',
address: this._server.address(),
});
this._cbOnListening();
}
/**
* Function called when the Server sends back an error
*
* @param {Error} err - Error to be sent back
* @return {undefined}
*/
_onError(err) {
this._logger.error('Server error', {
method: 'arsenal.network.Server._onError',
port: this._port,
error: err.stack || err,
});
if (this._cbOnError) {
if (this._cbOnError(err) === true) {
process.nextTick(() => this.start());
}
}
}
/**
* Function called when the Server is stopped
*
* @return {undefined}
*/
_onClose() {
if (this._server.listening) {
this._logger.info('Server is stopped', {
address: this._server.address(),
});
}
this._server = null;
this._cbOnStop();
}
/**
* Set the listening callback
*
* @param {function} cb - Callback()
* @return {Server} itself
*/
onListening(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnListening = cb;
return this;
}
/**
* Set the request handler callback
*
* @param {function} cb - Callback(req, res)
* @return {Server} itself
*/
onRequest(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnRequest = cb;
return this;
}
/**
* Set the checkExpectation handler callback
*
* @param {function} cb - Callback(req, res)
* @return {Server} itself
*/
onCheckExpectation(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnCheckExpectation = cb;
return this;
}
/**
* Set the checkContinue handler callback
*
* @param {function} cb - Callback(req, res)
* @return {Server} itself
*/
onCheckContinue(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnCheckContinue = cb;
return this;
}
/**
* Set the error handler callback, if this handler returns true when an
* error is triggered, the server will restart
*
* @param {function} cb - Callback(err)
* @return {Server} itself
*/
onError(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnError = cb;
return this;
}
/**
* Set the stop handler callback
*
* @param {function} cb - Callback()
* @return {Server} itself
*/
onStop(cb) {
assert.strictEqual(typeof cb, 'function',
'Callback must be a function');
this._cbOnStop = cb;
return this;
}
/**
     * Function called when a secure connection is established
*
* @param {tls.TlsSocket} sock - socket
* @return {undefined}
*/
_onSecureConnection(sock) {
if (!sock.authorized) {
this._logger.error('rejected secure connection', {
address: sock.address(),
authorized: false,
error: sock.authorizationError,
});
}
}
/**
     * Function called when an error comes from the client request
*
* @param {Error} err - Error
* @param {net.Socket|tls.TlsSocket} sock - Socket
* @return {undefined}
*/
_onClientError(err, sock) {
this._logger.error('client error', {
method: 'arsenal.network.Server._onClientError',
error: err.stack || err,
address: sock.address(),
});
}
/**
* Function called when request with an HTTP Expect header is received,
* where the value is not 100-continue
*
* @param {http.IncomingMessage|https.IncomingMessage} req - Request object
* @param {http.ServerResponse|https.ServerResponse} res - Response object
* @return {undefined}
*/
_onCheckExpectation(req, res) {
return this._cbOnCheckExpectation(req, res);
}
/**
* Function called when request with an HTTP Expect: 100-continue
* is received
*
* @param {http.IncomingMessage|https.IncomingMessage} req - Request object
* @param {http.ServerResponse|https.ServerResponse} res - Response object
* @return {undefined}
*/
_onCheckContinue(req, res) {
return this._cbOnCheckContinue(req, res);
}
/**
* Function to start the Server
*
* @return {Server} itself
*/
start() {
if (!this._server) {
if (this.isHttps()) {
this._logger.info('starting Server under https', {
method: 'arsenal.network.Server.start',
port: this._port,
});
this._https.agent = new https.Agent(this._https);
this._server = https.createServer(this._https,
(req, res) => this._onRequest(req, res));
} else {
this._logger.info('starting Server under http', {
method: 'arsenal.network.Server.start',
port: this._port,
});
this._server = http.createServer(
(req, res) => this._onRequest(req, res));
}
this._server.on('error', err => this._onError(err));
this._server.on('secureConnection',
sock => this._onSecureConnection(sock));
this._server.on('connection', sock => {
// Setting no delay of the socket to the value configured
sock.setNoDelay(this.isNoDelay());
sock.on('error', err => this._logger.info(
'socket error - request rejected', { error: err }));
});
this._server.on('tlsClientError', (err, sock) =>
this._onClientError(err, sock));
this._server.on('clientError', (err, sock) =>
this._onClientError(err, sock));
this._server.on('checkContinue', (req, res) =>
this._onCheckContinue(req, res));
this._server.on('checkExpectation', (req, res) =>
this._onCheckExpectation(req, res));
this._server.on('listening', () => this._onListening());
}
this._server.listen(this._port, this._address);
return this;
}
/**
* Function to stop the Server
*
* @return {Server} itself
*/
stop() {
if (this._server) {
this._server.close(() => this._onClose());
}
return this;
}
}
module.exports = Server;
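The constructor's comment about the Expect header relies on how node's http server splits incoming requests across events: `checkContinue` for `Expect: 100-continue`, `checkExpectation` for any other Expect value, and the plain `request` event otherwise. A tiny sketch of that dispatch decision (an illustration only, not part of the module):

```javascript
// Sketch of the Expect-header dispatch the Server class hooks into:
// node emits 'checkContinue' for "Expect: 100-continue", and
// 'checkExpectation' for any other Expect value; requests without an
// Expect header go to the regular 'request' handler.
function dispatchExpect(expectHeader) {
    if (expectHeader === undefined) {
        return 'request';
    }
    if (expectHeader.toLowerCase() === '100-continue') {
        return 'checkContinue';
    }
    return 'checkExpectation';
}
```

This is why the Server class wires `_cbOnCheckExpectation` to the regular request handler by default: like AWS S3, it silently accepts unexpected Expect values instead of replying 417 Expectation Failed.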

lib/network/http/utils.js (new file, 109 lines)

@@ -0,0 +1,109 @@
'use strict'; // eslint-disable-line
const errors = require('../../errors');
/**
* Parse the Range header into an object
*
* @param {String} rangeHeader - The 'Range' header value
* @return {Object} object containing a range specification, with
* either of:
* - start and end attributes: a fully specified range request
* - a single start attribute: no end is specified in the range request
* - a suffix attribute: suffix range request
* - an error attribute of type errors.InvalidArgument if the range
* syntax is invalid
*/
function parseRangeSpec(rangeHeader) {
const rangeMatch = /^bytes=([0-9]+)?-([0-9]+)?$/.exec(rangeHeader);
if (rangeMatch) {
const rangeValues = rangeMatch.slice(1, 3);
if (rangeValues[0] === undefined) {
if (rangeValues[1] !== undefined) {
return { suffix: Number.parseInt(rangeValues[1], 10) };
}
} else {
const rangeSpec = { start: Number.parseInt(rangeValues[0], 10) };
if (rangeValues[1] === undefined) {
return rangeSpec;
}
rangeSpec.end = Number.parseInt(rangeValues[1], 10);
if (rangeSpec.start <= rangeSpec.end) {
return rangeSpec;
}
}
}
return { error: errors.InvalidArgument };
}
/**
* Convert a range specification as given by parseRangeSpec() into a
* fully specified absolute byte range
*
 * @param {Object} rangeSpec - Parsed range specification as returned
* by parseRangeSpec()
* @param {Number} objectSize - Total byte size of the whole object
* @return {Object} object containing either:
 * - a 'range' attribute which is a fully specified byte range
 *   [start, end], as the inclusive absolute byte range to request
 *   from the object
 * - or no attribute if the requested range is a valid range request
 *   for a whole empty object (non-zero suffix range)
* - or an 'error' attribute of type errors.InvalidRange if the
* requested range is out of object's boundaries.
*/
function getByteRangeFromSpec(rangeSpec, objectSize) {
if (rangeSpec.suffix !== undefined) {
if (rangeSpec.suffix === 0) {
// 0-byte suffix is always invalid (even on empty objects)
return { error: errors.InvalidRange };
}
if (objectSize === 0) {
// any other suffix range on an empty object returns the
// full object (0 bytes)
return {};
}
return { range: [Math.max(objectSize - rangeSpec.suffix, 0),
objectSize - 1] };
}
if (rangeSpec.start < objectSize) {
// test is false if end is undefined
return { range: [rangeSpec.start,
(rangeSpec.end < objectSize ?
rangeSpec.end : objectSize - 1)] };
}
return { error: errors.InvalidRange };
}
/**
* Convenience function that combines parseRangeSpec() and
* getByteRangeFromSpec()
*
* @param {String} rangeHeader - The 'Range' header value
* @param {Number} objectSize - Total byte size of the whole object
* @return {Object} object containing either:
* - a 'range' attribute which is a fully specified byte range [start,
* end], as the inclusive absolute byte range to request from the
* object
* - or no attribute if the requested range is either syntactically
* incorrect or is a valid range request for an empty object
* (non-zero suffix range)
* - or an 'error' attribute instead of type errors.InvalidRange if
* the requested range is out of object's boundaries.
*/
function parseRange(rangeHeader, objectSize) {
const rangeSpec = parseRangeSpec(rangeHeader);
if (rangeSpec.error) {
// invalid range syntax is silently ignored in HTTP spec,
// hence returns the whole object
return {};
}
return getByteRangeFromSpec(rangeSpec, objectSize);
}
module.exports = { parseRangeSpec,
getByteRangeFromSpec,
parseRange };
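The grammar accepted by `parseRangeSpec()` covers three Range-header shapes. Here is a standalone sketch of the same parsing logic (simplified: a plain string stands in for the arsenal error object, and the function is renamed to mark it as an illustration):

```javascript
// Standalone sketch of the Range-header grammar handled above:
// "bytes=0-99"  -> { start: 0, end: 99 }   fully specified range
// "bytes=100-"  -> { start: 100 }          open-ended range
// "bytes=-500"  -> { suffix: 500 }         suffix range (last 500 bytes)
function parseRangeSpecSketch(rangeHeader) {
    const rangeMatch = /^bytes=([0-9]+)?-([0-9]+)?$/.exec(rangeHeader);
    if (rangeMatch) {
        const [start, end] = rangeMatch.slice(1, 3);
        if (start === undefined) {
            if (end !== undefined) {
                return { suffix: Number.parseInt(end, 10) };
            }
        } else {
            const rangeSpec = { start: Number.parseInt(start, 10) };
            if (end === undefined) {
                return rangeSpec;
            }
            rangeSpec.end = Number.parseInt(end, 10);
            if (rangeSpec.start <= rangeSpec.end) {
                return rangeSpec;
            }
        }
    }
    // reversed ranges and malformed headers are rejected
    return { error: 'InvalidArgument' };
}
```

`getByteRangeFromSpec()` then clamps the parsed spec against the object size, e.g. a `{ suffix: 500 }` spec on a 300-byte object resolves to the whole object, `[0, 299]`.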

@@ -0,0 +1,297 @@
'use strict'; // eslint-disable-line
const assert = require('assert');
const http = require('http');
const werelogs = require('werelogs');
const constants = require('../../constants');
const utils = require('./utils');
const errors = require('../../errors');
function setRequestUids(reqHeaders, reqUids) {
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['X-Scal-Request-Uids'] = reqUids;
}
function setRange(reqHeaders, range) {
const rangeStart = range[0] !== undefined ? range[0].toString() : '';
const rangeEnd = range[1] !== undefined ? range[1].toString() : '';
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['Range'] = `bytes=${rangeStart}-${rangeEnd}`;
}
function setContentType(reqHeaders, contentType) {
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['Content-Type'] = contentType;
}
function setContentLength(reqHeaders, size) {
// inhibit 'assignment to property of function parameter' -
// this is what we want
// eslint-disable-next-line
reqHeaders['Content-Length'] = size.toString();
}
function makeErrorFromHTTPResponse(response) {
const rawBody = response.read();
const body = (rawBody !== null ? rawBody.toString() : '');
let error;
try {
const fields = JSON.parse(body);
error = errors[fields.errorType]
.customizeDescription(fields.errorMessage);
} catch (err) {
error = new Error(body);
}
// error is always a newly created object, so we can modify its
// properties
error.remote = true;
return error;
}
/**
* @class
* @classdesc REST Client interface
*
* The API is usable when the object is constructed.
*/
class RESTClient {
/**
* Interface to the data file server
* @constructor
* @param {Object} params - Contains the basic configuration.
* @param {String} params.host - hostname or ip address of the
* RESTServer instance
* @param {Number} params.port - port number that the RESTServer
* instance listens to
* @param {Werelogs.API} [params.logApi] - logging API instance object
*/
constructor(params) {
assert(params.host);
assert(params.port);
this.host = params.host;
this.port = params.port;
this.setupLogging(params.logApi);
this.httpAgent = new http.Agent({ keepAlive: true });
}
/*
* Create a dedicated logger for RESTClient, from the provided werelogs API
* instance.
*
* @param {werelogs.API} logApi - object providing a constructor function
* for the Logger object
* @return {undefined}
*/
setupLogging(logApi) {
this.logging = new (logApi || werelogs).Logger('DataFileRESTClient');
}
createLogger(reqUids) {
return reqUids ?
this.logging.newRequestLoggerFromSerializedUids(reqUids) :
this.logging.newRequestLogger();
}
doRequest(method, headers, key, log, responseCb) {
const reqHeaders = headers || {};
const urlKey = key || '';
const reqParams = {
hostname: this.host,
port: this.port,
method,
path: `${constants.dataFileURL}/${urlKey}`,
headers: reqHeaders,
agent: this.httpAgent,
};
log.debug(`about to send ${method} request`, {
hostname: reqParams.hostname,
port: reqParams.port,
path: reqParams.path,
headers: reqParams.headers });
const request = http.request(reqParams, responseCb);
// disable nagle algorithm
request.setNoDelay(true);
return request;
}
/**
     * This sends a PUT request to the REST server
     * @param {http.IncomingMessage} stream - Request with the data to send
     * @param {string} stream.contentHash - hash of the data to send
     * @param {Number} size - size of the data to send, in bytes
* @param {string} reqUids - The serialized request ids
* @param {RESTClient~putCallback} callback - callback
* @returns {undefined}
*/
put(stream, size, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
setContentType(headers, 'application/octet-stream');
setContentLength(headers, size);
const request = this.doRequest('PUT', headers, null, log, response => {
response.once('readable', () => {
// expects '201 Created'
if (response.statusCode !== 201) {
return callback(makeErrorFromHTTPResponse(response));
}
// retrieve the key from the Location response header
// containing the complete URL to the object, like
// /DataFile/abcdef.
const location = response.headers.location;
if (location === undefined) {
return callback(new Error(
'missing Location header in the response'));
}
const locationInfo = utils.explodePath(location);
if (!locationInfo) {
return callback(new Error(
`bad Location response header: ${location}`));
}
return callback(null, locationInfo.key);
});
}).on('finish', () => {
log.debug('finished sending PUT data to the REST server', {
component: 'RESTClient',
method: 'put',
contentLength: size,
});
}).on('error', callback);
stream.pipe(request);
stream.on('error', err => {
log.error('error from readable stream', {
error: err,
method: 'put',
component: 'RESTClient',
});
request.end();
});
}
/**
* send a GET request to the REST server
* @param {String} key - The key associated to the value
     * @param {Number[]|undefined} range - range (if any): a
* [start, end] inclusive range specification, as defined in
* HTTP/1.1 RFC.
* @param {String} reqUids - The serialized request ids
* @param {RESTClient~getCallback} callback - callback
* @returns {undefined}
*/
get(key, range, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
if (range) {
setRange(headers, range);
}
const request = this.doRequest('GET', headers, key, log, response => {
response.once('readable', () => {
if (response.statusCode !== 200 &&
response.statusCode !== 206) {
return callback(makeErrorFromHTTPResponse(response));
}
return callback(null, response);
});
}).on('error', callback);
request.end();
}
/**
* Send a GET request to the REST server, for a specific action rather
* than an object. Response will be truncated at the high watermark for
* the internal buffer of the stream, which is 16KB.
*
* @param {String} action - The action to query
* @param {String} reqUids - The serialized request ids
* @param {RESTClient~getCallback} callback - callback
* @returns {undefined}
*/
getAction(action, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
const reqParams = {
hostname: this.host,
port: this.port,
method: 'GET',
path: `${constants.dataFileURL}?${action}`,
headers,
agent: this.httpAgent,
};
log.debug('about to send GET request', {
hostname: reqParams.hostname,
port: reqParams.port,
path: reqParams.path,
headers: reqParams.headers });
const request = http.request(reqParams, response => {
response.once('readable', () => {
if (response.statusCode !== 200 &&
response.statusCode !== 206) {
return callback(makeErrorFromHTTPResponse(response));
}
return callback(null, response.read().toString());
});
}).on('error', callback);
request.end();
}
/**
* send a DELETE request to the REST server
* @param {String} key - The key associated to the values
* @param {String} reqUids - The serialized request ids
* @param {RESTClient~deleteCallback} callback - callback
* @returns {undefined}
*/
delete(key, reqUids, callback) {
const log = this.createLogger(reqUids);
const headers = {};
setRequestUids(headers, reqUids);
const request = this.doRequest(
'DELETE', headers, key, log, response => {
response.once('readable', () => {
if (response.statusCode !== 200 &&
response.statusCode !== 204) {
return callback(makeErrorFromHTTPResponse(response));
}
return callback(null);
});
}).on('error', callback);
request.end();
}
}
/**
* @callback RESTClient~putCallback
 * @param {Error} err - The encountered error
* @param {String} key - The key to access the data
*/
/**
* @callback RESTClient~getCallback
 * @param {Error} err - The encountered error
* @param {stream.Readable} stream - The stream of values fetched
*/
/**
* @callback RESTClient~deleteCallback
 * @param {Error} err - The encountered error
*/
module.exports = RESTClient;
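The error-body convention that `makeErrorFromHTTPResponse()` decodes is a small JSON envelope with `errorType` and `errorMessage` fields (the same shape `sendError()` in the server writes). A standalone sketch of that decoding (simplified: arsenal's error registry is replaced with a `type` property on a plain `Error`):

```javascript
// Sketch of how an HTTP error body from the REST server is turned into
// an Error object: JSON payloads with errorType/errorMessage map to a
// typed error, anything else becomes a generic Error with the raw body
// as its message.
function makeErrorFromBody(body) {
    let error;
    try {
        const fields = JSON.parse(body);
        error = new Error(fields.errorMessage);
        error.type = fields.errorType;
    } catch (err) {
        // body was not JSON: fall back to a generic error
        error = new Error(body);
        error.type = 'Unknown';
    }
    error.remote = true; // mark the error as coming from the remote server
    return error;
}
```

The `remote` flag lets callers distinguish server-reported errors from local network failures when deciding whether to retry.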

@@ -0,0 +1,314 @@
'use strict'; // eslint-disable-line
const assert = require('assert');
const url = require('url');
const werelogs = require('werelogs');
const httpServer = require('../http/server');
const constants = require('../../constants');
const utils = require('./utils');
const httpUtils = require('../http/utils');
const errors = require('../../errors');
function setContentLength(response, contentLength) {
response.setHeader('Content-Length', contentLength.toString());
}
function setContentRange(response, byteRange, objectSize) {
const [start, end] = byteRange;
assert(start !== undefined && end !== undefined);
response.setHeader('Content-Range',
`bytes ${start}-${end}/${objectSize}`);
}
function sendError(res, log, error, optMessage) {
res.writeHead(error.code);
let message;
if (optMessage) {
message = optMessage;
} else {
message = error.description || '';
}
log.debug('sending back error response', { httpCode: error.code,
errorType: error.message,
error: message });
res.end(`${JSON.stringify({ errorType: error.message,
errorMessage: message })}\n`);
}
/**
* Parse the given url and return a pathInfo object. Sanity checks are
* performed.
*
* @param {String} urlStr - URL to parse
* @param {Boolean} expectKey - whether the command expects to see a
* key in the URL
* @return {Object} a pathInfo object with URL items containing the
* following attributes:
* - pathInfo.service {String} - The name of REST service ("DataFile")
* - pathInfo.key {String} - The requested key
*/
function parseURL(urlStr, expectKey) {
const urlObj = url.parse(urlStr);
const pathInfo = utils.explodePath(urlObj.path);
if (pathInfo.service !== constants.dataFileURL) {
throw errors.InvalidAction.customizeDescription(
`unsupported service '${pathInfo.service}'`);
}
if (expectKey && pathInfo.key === undefined) {
throw errors.MissingParameter.customizeDescription(
'URL is missing key');
}
if (!expectKey && pathInfo.key !== undefined) {
// note: we may implement rewrite functionality by allowing a
// key in the URL, though we may still provide the new key in
// the Location header to keep immutability property and
// atomicity of the update (we would just remove the old
// object when the new one has been written entirely in this
// case, saving a request over an equivalent PUT + DELETE).
throw errors.InvalidURI.customizeDescription(
'PUT url cannot contain a key');
}
return pathInfo;
}
/**
* @class
* @classdesc REST Server interface
*
* You have to call setup() to initialize the storage backend, then
* start() to start listening to the configured port.
*/
class RESTServer extends httpServer {
/**
* @constructor
* @param {Object} params - constructor params
* @param {Number} params.port - TCP port where the server listens to
* @param {arsenal.storage.data.file.Store} params.dataStore -
* data store object
* @param {Number} [params.bindAddress='localhost'] - address
* bound to the socket
* @param {Object} [params.log] - logger configuration
*/
constructor(params) {
assert(params.port);
werelogs.configure({
level: params.log.logLevel,
dump: params.log.dumpLevel,
});
const logging = new werelogs.Logger('DataFileRESTServer');
super(params.port, logging);
this.logging = logging;
this.dataStore = params.dataStore;
this.setBindAddress(params.bindAddress || 'localhost');
// hooking our request processing function by calling the
// parent's method for that
this.onRequest(this._onRequest);
this.reqMethods = {
PUT: this._onPut.bind(this),
GET: this._onGet.bind(this),
DELETE: this._onDelete.bind(this),
};
}
/**
* Setup the storage backend
*
* @param {function} callback - called when finished
* @return {undefined}
*/
setup(callback) {
this.dataStore.setup(callback);
}
/**
* Create a new request logger object
*
* @param {String} reqUids - serialized request UIDs (as received in
* the X-Scal-Request-Uids header)
* @return {werelogs.RequestLogger} new request logger
*/
createLogger(reqUids) {
return reqUids ?
this.logging.newRequestLoggerFromSerializedUids(reqUids) :
this.logging.newRequestLogger();
}
/**
* Main incoming request handler, dispatches to method-specific
* handlers
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @return {undefined}
*/
_onRequest(req, res) {
const reqUids = req.headers['x-scal-request-uids'];
const log = this.createLogger(reqUids);
log.debug('request received', { method: req.method,
url: req.url });
if (req.method in this.reqMethods) {
this.reqMethods[req.method](req, res, log);
} else {
// Method Not Allowed
sendError(res, log, errors.MethodNotAllowed);
}
}
/**
* Handler for PUT requests
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @param {werelogs.RequestLogger} log - logger object
* @return {undefined}
*/
_onPut(req, res, log) {
let size;
try {
parseURL(req.url, false);
const contentLength = req.headers['content-length'];
if (contentLength === undefined) {
throw errors.MissingContentLength;
}
size = Number.parseInt(contentLength, 10);
if (isNaN(size)) {
throw errors.InvalidInput.customizeDescription(
'bad Content-Length');
}
} catch (err) {
return sendError(res, log, err);
}
this.dataStore.put(req, size, log, (err, key) => {
if (err) {
return sendError(res, log, err);
}
log.debug('sending back 201 response to PUT', { key });
res.setHeader('Location', `${constants.dataFileURL}/${key}`);
setContentLength(res, 0);
res.writeHead(201);
return res.end(() => {
log.debug('PUT response sent', { key });
});
});
return undefined;
}
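The Content-Length checks at the top of `_onPut()` can be exercised on their own; this sketch inlines the same validation, throwing plain `Error`s in place of the arsenal error objects (the helper name `parseContentLength` is invented for this illustration):

```javascript
// Same validation as _onPut(): the header must be present and parse as a
// base-10 integer, otherwise the request is rejected before reading data.
function parseContentLength(headers) {
    const contentLength = headers['content-length'];
    if (contentLength === undefined) {
        throw new Error('MissingContentLength');
    }
    const size = Number.parseInt(contentLength, 10);
    if (isNaN(size)) {
        throw new Error('bad Content-Length');
    }
    return size;
}
```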
/**
* Handler for GET requests
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @param {werelogs.RequestLogger} log - logger object
* @return {undefined}
*/
_onGet(req, res, log) {
let pathInfo;
let rangeSpec = undefined;
// Get request on the toplevel endpoint with ?action
if (req.url.startsWith(`${constants.dataFileURL}?`)) {
const queryParam = url.parse(req.url).query;
if (queryParam === 'diskUsage') {
                return this.dataStore.getDiskUsage((err, result) => {
if (err) {
return sendError(res, log, err);
}
res.writeHead(200);
res.end(JSON.stringify(result));
return undefined;
});
}
}
// Get request on an actual object
try {
pathInfo = parseURL(req.url, true);
const rangeHeader = req.headers.range;
if (rangeHeader !== undefined) {
rangeSpec = httpUtils.parseRangeSpec(rangeHeader);
if (rangeSpec.error) {
// ignore header if syntax is invalid
rangeSpec = undefined;
}
}
} catch (err) {
return sendError(res, log, err);
}
this.dataStore.stat(pathInfo.key, log, (err, info) => {
if (err) {
return sendError(res, log, err);
}
let byteRange;
let contentLength;
if (rangeSpec) {
const { range, error } = httpUtils.getByteRangeFromSpec(
rangeSpec, info.objectSize);
if (error) {
return sendError(res, log, error);
}
byteRange = range;
}
if (byteRange) {
contentLength = byteRange[1] - byteRange[0] + 1;
} else {
contentLength = info.objectSize;
}
this.dataStore.get(pathInfo.key, byteRange, log, (err, rs) => {
if (err) {
return sendError(res, log, err);
}
log.debug('sending back 200/206 response with contents',
{ key: pathInfo.key });
setContentLength(res, contentLength);
res.setHeader('Accept-Ranges', 'bytes');
if (byteRange) {
// data is immutable, so objectSize is still correct
setContentRange(res, byteRange, info.objectSize);
res.writeHead(206);
} else {
res.writeHead(200);
}
rs.pipe(res);
return undefined;
});
return undefined;
});
return undefined;
}
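The range arithmetic in the GET path uses inclusive byte ranges, so the content length is `last - first + 1`; here it is in standalone form (the helper name is invented, and the `Content-Range` string follows the standard `bytes first-last/size` HTTP format rather than the `setContentRange` helper, which is not shown in this excerpt):

```javascript
// Inclusive byte range to response metadata, as computed in _onGet():
// content length is last - first + 1, and Content-Range keeps the full
// object size ('bytes first-last/size').
function rangeResponseMeta(byteRange, objectSize) {
    if (!byteRange) {
        return { status: 200, contentLength: objectSize };
    }
    const [first, last] = byteRange;
    return {
        status: 206,
        contentLength: last - first + 1,
        contentRange: `bytes ${first}-${last}/${objectSize}`,
    };
}
```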
/**
* Handler for DELETE requests
*
* @param {http.IncomingMessage} req - HTTP request object
* @param {http.ServerResponse} res - HTTP response object
* @param {werelogs.RequestLogger} log - logger object
* @return {undefined}
*/
_onDelete(req, res, log) {
let pathInfo;
try {
pathInfo = parseURL(req.url, true);
} catch (err) {
return sendError(res, log, err);
}
this.dataStore.delete(pathInfo.key, log, err => {
if (err) {
return sendError(res, log, err);
}
log.debug('sending back 204 response to DELETE',
{ key: pathInfo.key });
res.writeHead(204);
return res.end(() => {
log.debug('DELETE response sent', { key: pathInfo.key });
});
});
return undefined;
}
}
module.exports = RESTServer;
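The verb dispatch performed by `_onRequest` through the `reqMethods` table can be reduced to a small standalone sketch; `makeDispatcher` and the toy handlers below are invented for illustration and are not part of the module's API:

```javascript
// Minimal sketch of the dispatch pattern in RESTServer._onRequest:
// handlers live in a table keyed by HTTP method, and any verb absent
// from the table falls through to a MethodNotAllowed-style error.
function makeDispatcher(handlers) {
    return function dispatch(method, req) {
        if (method in handlers) {
            return handlers[method](req);
        }
        return { error: 'MethodNotAllowed' };
    };
}

// Toy handlers standing in for _onGet/_onDelete (illustrative only).
const dispatch = makeDispatcher({
    GET: req => ({ status: 200, url: req.url }),
    DELETE: req => ({ status: 204, url: req.url }),
});
```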

lib/network/rest/utils.js Normal file
'use strict'; // eslint-disable-line
const errors = require('../../errors');
module.exports.explodePath = function explodePath(path) {
const pathMatch = /^(\/[a-zA-Z0-9]+)(\/([0-9a-f]*))?$/.exec(path);
if (pathMatch) {
return {
service: pathMatch[1],
key: (pathMatch[3] !== undefined && pathMatch[3].length > 0 ?
pathMatch[3] : undefined),
};
}
throw errors.InvalidURI.customizeDescription('malformed URI');
};
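The path regex above can be exercised standalone; this sketch inlines the same pattern, throwing a plain `Error` in place of `errors.InvalidURI`, to show what `explodePath()` accepts:

```javascript
// Standalone copy of the explodePath() logic: a '/service' component of
// alphanumerics, optionally followed by '/<hex key>'; an empty key part
// (trailing slash) yields key === undefined.
function explodePath(path) {
    const pathMatch = /^(\/[a-zA-Z0-9]+)(\/([0-9a-f]*))?$/.exec(path);
    if (pathMatch) {
        return {
            service: pathMatch[1],
            key: (pathMatch[3] !== undefined && pathMatch[3].length > 0 ?
                pathMatch[3] : undefined),
        };
    }
    throw new Error('malformed URI'); // stands in for errors.InvalidURI
}
```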

'use strict'; // eslint-disable-line
const assert = require('assert');
const rpc = require('./rpc.js');
/**
* @class
* @classdesc Wrap a LevelDB RPC client supporting sub-levels on top
* of a base RPC client.
*
* An additional "subLevel" request parameter is attached to RPC
* requests to tell the RPC service for which sub-level the request
* applies.
*
* openSub() can be used to open sub-levels, returning a new LevelDB
* RPC client object accessing the sub-level transparently.
*/
class LevelDbClient extends rpc.BaseClient {
/**
* @constructor
*
* @param {Object} params - constructor params
* @param {String} params.url - URL of the socket.io namespace,
* e.g. 'http://localhost:9990/metadata'
* @param {Logger} params.logger - logger object
* @param {Number} [params.callTimeoutMs] - timeout for remote calls
* @param {Number} [params.streamMaxPendingAck] - max number of
* in-flight output stream packets sent to the server without an ack
* received yet
* @param {Number} [params.streamAckTimeoutMs] - timeout for receiving
* an ack after an output stream packet is sent to the server
*/
constructor(params) {
super(params);
this.path = []; // start from the root sublevel
// transmit the sublevel information as a request param
this.addRequestInfoProducer(
dbClient => ({ subLevel: dbClient.path }));
}
/**
* return a handle to a sublevel database
*
* @note this function has no side-effect on the db, it just
* returns a handle properly configured to access the sublevel db
* from the client.
*
* @param {String} subName - name of sublevel
* @return {Object} a handle to the sublevel database that has the
* same API as its parent
*/
openSub(subName) {
const subDbClient = new LevelDbClient({ url: this.url,
logger: this.logger });
// make the same exposed RPC calls available from the sub-level object
Object.assign(subDbClient, this);
// listeners should not be duplicated on sublevel
subDbClient.removeAllListeners();
// copy and append the new sublevel to the path
subDbClient.path = subDbClient.path.slice();
subDbClient.path.push(subName);
return subDbClient;
}
}
/**
* @class
* @classdesc Wrap a LevelDB RPC service supporting sub-levels on top
* of a base RPC service.
*
* An additional "subLevel" request parameter received from the RPC
* client is automatically parsed, and the requested sub-level of the
* database is opened and attached to the call environment in
* env.subDb (env is passed as first parameter of received RPC calls).
*/
class LevelDbService extends rpc.BaseService {
/**
* @constructor
*
* @param {Object} params - constructor parameters
* @param {String} params.namespace - socket.io namespace, a free
* string name that must start with '/'. The client will have to
* provide the same namespace in the URL
* (http://host:port/namespace)
* @param {Object} params.rootDb - root LevelDB database object to
* expose to remote clients
* @param {Object} params.logger - logger object
* @param {String} [params.apiVersion="1.0"] - Version number that
* is shared with clients in the manifest (may be used to ensure
* backward compatibility)
* @param {RPCServer} [params.server] - convenience parameter,
* calls server.registerServices() automatically
*/
constructor(params) {
assert(params.rootDb);
super(params);
this.rootDb = params.rootDb;
this.addRequestInfoConsumer((dbService, reqParams) => {
const env = {};
env.subLevel = reqParams.subLevel;
env.subDb = this.lookupSubLevel(reqParams.subLevel);
return env;
});
}
/**
* lookup a sublevel db given by the <tt>path</tt> array from the
* root leveldb handle.
*
* @param {String []} path - path to the sublevel, as a
* piecewise array of sub-levels
* @return {Object} the handle to the sublevel
*/
lookupSubLevel(path) {
let subDb = this.rootDb;
path.forEach(pathItem => {
subDb = subDb.sublevel(pathItem);
});
return subDb;
}
}
module.exports = {
LevelDbClient,
LevelDbService,
};
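A sketch of the sub-level bookkeeping the two classes share: `openSub()` copies the parent's path before appending, and `lookupSubLevel()` walks that path from the root. The toy `sublevel()` db below is a stand-in for a real LevelDB handle:

```javascript
// Sub-level path handling, as in LevelDbClient.openSub(): copy the
// parent path, then append, so parent and child never share one array.
function openSub(parent, subName) {
    const child = { path: parent.path.slice() };
    child.path.push(subName);
    return child;
}

// Path resolution, as in LevelDbService.lookupSubLevel(): walk the path
// from the root db, opening one sub-level per component.
function lookupSubLevel(rootDb, path) {
    let subDb = rootDb;
    path.forEach(pathItem => {
        subDb = subDb.sublevel(pathItem);
    });
    return subDb;
}

// Toy db exposing sublevel() and remembering its own full name.
function toyDb(name) {
    return { name, sublevel: sub => toyDb(`${name}/${sub}`) };
}

const root = { path: [] };
const users = openSub(root, 'users');
const sessions = openSub(users, 'sessions');
```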

lib/network/rpc/rpc.js Normal file
'use strict'; // eslint-disable-line
const http = require('http');
const io = require('socket.io');
const ioClient = require('socket.io-client');
const sioStream = require('./sio-stream');
const async = require('async');
const assert = require('assert');
const EventEmitter = require('events').EventEmitter;
const flattenError = require('./utils').flattenError;
const reconstructError = require('./utils').reconstructError;
const errors = require('../../errors');
const jsutil = require('../../jsutil');
const DEFAULT_CALL_TIMEOUT_MS = 30000;
// to handle recursion without triggering the no-use-before-define warning
// eslint-disable-next-line prefer-const
let streamRPCJSONObj;
/**
* @brief get a client object that proxies RPC calls to a remote
* server through socket.io events
*
* Additional request environment parameters that are not passed as
* explicit RPC arguments can be passed using addRequestInfoProducer()
* method, directly or through sub-classing
*
* NOTE: synchronous calls on the server-side API (i.e those which
* take no callback argument) become asynchronous on the client, take
* one additional parameter (the callback), then:
*
* - if it throws, the error is passed as callback's first argument,
* otherwise null is passed
* - the return value is passed as callback's second argument (unless
* an error occurred).
*/
class BaseClient extends EventEmitter {
/**
* @constructor
*
* @param {Object} params - constructor params
* @param {String} params.url - URL of the socket.io namespace,
* e.g. 'http://localhost:9990/metadata'
* @param {Logger} params.logger - logger object
* @param {Number} [params.callTimeoutMs] - timeout for remote calls
* @param {Number} [params.streamMaxPendingAck] - max number of
* in-flight output stream packets sent to the server without an ack
* received yet
* @param {Number} [params.streamAckTimeoutMs] - timeout for receiving
* an ack after an output stream packet is sent to the server
*/
constructor(params) {
const { url, logger, callTimeoutMs,
streamMaxPendingAck, streamAckTimeoutMs } = params;
assert(url);
assert(logger);
super();
this.url = url;
this.logger = logger;
this.callTimeoutMs = callTimeoutMs;
this.streamMaxPendingAck = streamMaxPendingAck;
this.streamAckTimeoutMs = streamAckTimeoutMs;
this.requestInfoProducers = [];
this.requestInfoProducers.push(
dbClient => ({ reqUids: dbClient.withReqUids }));
}
/**
* @brief internal RPC implementation w/o timeout
*
* @param {String} remoteCall - name of the remote function to call
* @param {Array} args - list of arguments to the remote function
* @param {function} cb - callback called when done
* @return {undefined}
*/
_call(remoteCall, args, cb) {
const wrapCb = (err, data) => {
cb(reconstructError(err),
this.socketStreams.decodeStreams(data));
};
this.logger.debug('remote call', { remoteCall, args });
this.socket.emit('call', remoteCall,
this.socketStreams.encodeStreams(args), wrapCb);
return undefined;
}
/**
* @brief call a remote function named <tt>remoteCall</tt>, with
* arguments <tt>args</tt> and callback <tt>cb</tt>
*
* <tt>cb</tt> is called when the remote function returns an ack, or
* when the timeout set by <tt>timeoutMs</tt> expires, whichever comes
* first. When an ack is received, the callback gets the arguments
* sent by the remote function in the ack response. In the case of
 * timeout, it's passed a single Error argument whose 'code' property
 * is set to 'ETIMEDOUT', and whose 'info' property contains a
 * self-descriptive string.
*
* @param {String} remoteCall - name of the remote function to call
* @param {Array} args - list of arguments to the remote function
* @param {function} cb - callback called when done or timeout
* @param {Number} timeoutMs - timeout in milliseconds
* @return {undefined}
*/
callTimeout(remoteCall, args, cb, timeoutMs = DEFAULT_CALL_TIMEOUT_MS) {
if (typeof cb !== 'function') {
throw new Error(`argument cb=${cb} is not a callback`);
}
async.timeout(this._call.bind(this), timeoutMs,
`operation ${remoteCall} timed out`)(remoteCall,
args, cb);
return undefined;
}
getCallTimeout() {
return this.callTimeoutMs;
}
setCallTimeout(newTimeoutMs) {
this.callTimeoutMs = newTimeoutMs;
}
/**
* connect to the remote RPC server
*
* @param {function} cb - callback when connection is complete or
* if there is an error
* @return {undefined}
*/
connect(cb) {
this.socket = ioClient(this.url);
this.socketStreams = sioStream.createSocket(
this.socket,
this.logger,
this.streamMaxPendingAck,
this.streamAckTimeoutMs);
const url = this.url;
this.socket.on('error', err => {
this.logger.warn('connectivity error to the RPC service',
{ url, error: err });
});
this.socket.on('connect', () => {
this.emit('connect');
});
this.socket.on('disconnect', () => {
this.emit('disconnect');
});
        // the only hard-coded call, needed to discover the others
this.createCall('getManifest');
this.getManifest((err, manifest) => {
if (err) {
this.logger.error('Error fetching manifest from RPC server',
{ error: err });
} else {
manifest.api.forEach(apiItem => {
this.createCall(apiItem.name);
});
}
if (cb) {
return cb(err);
}
return undefined;
});
}
/**
* disconnect this client from the RPC server. A disconnect event
* is emitted when done.
*
* @return {undefined}
*/
disconnect() {
this.socket.disconnect();
}
/**
* create a new RPC call with the given name
*
* This function should normally not be called by the user,
* because the API is automatically exposed by reading the
* manifest from the server.
*
* @param {String} remoteCall - name of the API call to create
* @return {undefined}
*/
createCall(remoteCall) {
this[remoteCall] = function onCall(...rpcArgs) {
const cb = rpcArgs.pop();
const args = { rpcArgs };
// produce the extra parameters for the request
this.requestInfoProducers.forEach(f => {
Object.assign(args, f(this));
});
this.callTimeout(remoteCall, args, cb, this.callTimeoutMs);
// reset temporary argument-passing sugar
this.withReqUids = undefined;
};
}
/**
* add a function that provides additional parameters to send
* along each request. It will be called before every single
* request, so the parameters can be dynamic.
*
* @param {function} f - function returning an object that
* contains the additional parameters for the request. It is
* called with the client object passed as a parameter.
* @return {undefined}
*/
addRequestInfoProducer(f) {
this.requestInfoProducers.push(f);
}
/**
* decorator function that adds information from the given logger
* object so that the remote end can reconstruct this information
* in the logs (namely the request UIDs). This call takes effect
* only for the next RPC call.
*
* The typical use case is:
* ```
* rpcClient.withRequestLogger(logger).callSomeFunction(params);
* ```
*
* @param {Object} logger - werelogs logger object
* @return {BaseClient} returns the original called client object
* so that the result can be chained with further calls
*/
withRequestLogger(logger) {
this.withReqUids = logger.getSerializedUids();
return this;
}
}
/**
* @class
* @classdesc RPC service class
*
* A service maps to a specific namespace and provides a set of RPC
* functions.
*
* Additional request environment parameters passed by the client
* should be parsed in helpers passed to addRequestInfoConsumer()
* method.
*
*/
class BaseService {
/**
* @constructor
*
* @param {Object} params - constructor parameters
* @param {String} params.namespace - socket.io namespace, a free
* string name that must start with '/'. The client will have to
* provide the same namespace in the URL
* (http://host:port/namespace)
* @param {Object} params.logger - logger object
* @param {String} [params.apiVersion="1.0"] - Version number that
* is shared with clients in the manifest (may be used to ensure
* backward compatibility)
* @param {RPCServer} [params.server] - convenience parameter,
* calls server.registerServices() automatically
*/
constructor(params) {
const { namespace, logger, apiVersion, server } = params;
assert(namespace);
assert(namespace.startsWith('/'));
assert(logger);
this.namespace = namespace;
this.logger = logger;
this.apiVersion = apiVersion || '1.0';
this.requestInfoConsumers = [];
// initialize with a single hard-coded API call, the user will
// register its own calls later
this.syncAPI = {};
this.asyncAPI = {};
this.registerSyncAPI({
getManifest: () => {
const exposedAPI = [];
Object.keys(this.syncAPI).forEach(callName => {
if (callName !== 'getManifest') {
exposedAPI.push({ name: callName });
}
});
Object.keys(this.asyncAPI).forEach(callName => {
exposedAPI.push({ name: callName });
});
return { apiVersion: this.apiVersion,
api: exposedAPI };
},
});
this.addRequestInfoConsumer((dbService, params) => {
const env = {};
if (params.reqUids) {
env.reqUids = params.reqUids;
env.requestLogger = dbService.logger
.newRequestLoggerFromSerializedUids(params.reqUids);
} else {
env.requestLogger = dbService.logger.newRequestLogger();
}
return env;
});
if (server) {
server.registerServices(this);
}
}
/**
* register a set of API functions that return a result synchronously
*
* @param {Object} apiExtension - Object mapping names to API
* function implementation. Each API function gets an
* environment object as first parameter that contains various
* useful attributes, while the rest of parameters are the RPC
* parameters as passed by the client in the call.
* @return {undefined}
*/
registerSyncAPI(apiExtension) {
Object.assign(this.syncAPI, apiExtension);
Object.keys(apiExtension).forEach(callName => {
this[callName] = function localCall(...args) {
const params = { rpcArgs: args };
if (this.requestParams) {
Object.assign(params, this.requestParams);
this.requestParams = undefined;
}
return this.onSyncCall(callName, params);
};
});
}
/**
* register a set of API functions that return a result through a
* callback passed as last argument
*
* @param {Object} apiExtension - Object mapping names to API
* function implementation. Each API function gets an
* environment object as first parameter that contains various
* useful attributes, while the rest of parameters are the RPC
* parameters as passed by the client in the call, followed by a
* callback function to call with an error status and optional
* additional response values.
* @return {undefined}
*/
registerAsyncAPI(apiExtension) {
Object.assign(this.asyncAPI, apiExtension);
Object.keys(apiExtension).forEach(callName => {
this[callName] = function localCall(...args) {
const cb = args.pop();
const params = { rpcArgs: args };
if (this.requestParams) {
Object.assign(params, this.requestParams);
this.requestParams = undefined;
}
return this.onAsyncCall(callName, params, cb);
};
});
}
withRequestParams(params) {
this.requestParams = params;
return this;
}
/**
* set the API version string, that is communicated to connecting
* clients in the manifest
*
* @param {String} apiVersion - arbitrary version string
* (suggested format "x.y")
* @return {undefined}
*/
setAPIVersion(apiVersion) {
this.apiVersion = apiVersion;
}
/**
* add a function to be called before each API call that is in
* charge of converting some extra request info (outside raw RPC
* arguments) into environment attributes directly usable by the
* API implementation
*
* @param {function} f - function to be called with two arguments:
* the service object and the params object received from the
* client, and which returns an object with the additional
* environment attributes
* @return {undefined}
*/
addRequestInfoConsumer(f) {
this.requestInfoConsumers.push(f);
}
_onCall(remoteCall, args, cb) {
if (remoteCall in this.asyncAPI) {
try {
this.onAsyncCall(remoteCall, args, (err, data) => {
cb(flattenError(err), data);
});
} catch (err) {
return cb(flattenError(err));
}
} else if (remoteCall in this.syncAPI) {
let result;
try {
result = this.onSyncCall(remoteCall, args);
return cb(null, result);
} catch (err) {
return cb(flattenError(err));
}
} else {
return cb(errors.InvalidArgument.customizeDescription(
`Unknown remote call ${remoteCall} ` +
`in namespace ${this.namespace}`));
}
return undefined;
}
_createCallEnv(params) {
const env = {};
this.requestInfoConsumers.forEach(f => {
const extraEnv = f(this, params);
Object.assign(env, extraEnv);
});
return env;
}
onSyncCall(remoteCall, params) {
const env = this._createCallEnv(params);
return this.syncAPI[remoteCall].apply(
this, [env].concat(params.rpcArgs));
}
onAsyncCall(remoteCall, params, cb) {
const env = this._createCallEnv(params);
this.asyncAPI[remoteCall].apply(
this, [env].concat(params.rpcArgs).concat(cb));
}
}
/**
* @brief create a server object that serves remote requests through
* socket.io events.
*
 * Services associated with namespaces (i.e. URL base paths) must
 * then be registered on this server.
*
* Each service may customize the sending and reception of RPC
* messages through subclassing, e.g. LevelDbService looks up a
* particular sub-level before forwarding the RPC, providing it the
* target sub-level handle.
*
* @param {Object} params - params object
* @param {Object} params.logger - logger object
* @param {Number} [params.streamMaxPendingAck] - max number of
* in-flight output stream packets sent to the server without an ack
* received yet
* @param {Number} [params.streamAckTimeoutMs] - timeout for receiving
* an ack after an output stream packet is sent to the server
* @return {Object} a server object, not yet listening on a TCP port
* (you must call listen(port) on the returned object)
*/
function RPCServer(params) {
assert(params.logger);
const httpServer = http.createServer();
const server = io(httpServer);
const log = params.logger;
/**
* register a list of service objects on this server
*
* It's not necessary to call this function if you provided a
* "server" parameter to the service constructor.
*
* @param {BaseService} serviceList - list of services to register
* @return {undefined}
*/
server.registerServices = function registerServices(...serviceList) {
serviceList.forEach(service => {
const sock = this.of(service.namespace);
sock.on('connection', conn => {
const streamsSocket = sioStream.createSocket(
conn,
params.logger,
params.streamMaxPendingAck,
params.streamAckTimeoutMs);
conn.on('error', err => {
log.error('error on socket.io connection',
{ namespace: service.namespace, error: err });
});
conn.on('call', (remoteCall, args, cb) => {
const decodedArgs = streamsSocket.decodeStreams(args);
service._onCall(remoteCall, decodedArgs, (err, res) => {
if (err) {
return cb(err);
}
const encodedRes = streamsSocket.encodeStreams(res);
return cb(err, encodedRes);
});
});
});
});
};
server.listen = function listen(port, bindAddress = undefined) {
httpServer.listen(port, bindAddress);
};
return server;
}
function sendHTTPError(res, err) {
res.writeHead(err.code || 500);
return res.end(`${JSON.stringify({ error: err.message,
message: err.description })}\n`);
}
/**
* convert an input object stream to a JSON array streamed in output
*
* @param {stream.Readable} rstream - object input stream to serialize
* as a JSON array
* @param {stream.Writable} wstream - bytes output stream to write the
* serialized array to
* @param {function} cb - callback when done writing data
* @return {undefined}
*/
function objectStreamToJSON(rstream, wstream, cb) {
wstream.write('[');
let begin = true;
const cbOnce = jsutil.once(cb);
let writeInProgress = false;
let readEnd = false;
rstream.on('data', item => {
if (begin) {
begin = false;
} else {
wstream.write(',');
}
rstream.pause();
writeInProgress = true;
streamRPCJSONObj(item, wstream, err => {
writeInProgress = false;
if (err) {
return cbOnce(err);
}
if (readEnd) {
wstream.write(']');
return cbOnce(null);
}
return rstream.resume();
});
});
rstream.on('end', () => {
readEnd = true;
if (!writeInProgress) {
wstream.write(']');
cbOnce(null);
}
});
rstream.on('error', err => {
cbOnce(err);
});
}
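The comma placement that `objectStreamToJSON()` performs over an async object stream can be shown synchronously over a plain array; the streaming, pause/resume, and error handling are deliberately elided in this sketch:

```javascript
// Same '[', a ',' before every item but the first, then ']' sequencing
// as objectStreamToJSON(), applied synchronously to an array of items.
function itemsToJSONArray(items, write) {
    write('[');
    let begin = true;
    items.forEach(item => {
        if (begin) {
            begin = false;
        } else {
            write(',');
        }
        write(JSON.stringify(item));
    });
    write(']');
}

let out = '';
itemsToJSONArray([{ a: 1 }, 'x', 2], s => { out += s; });
```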
/**
* stream the result as returned by the RPC call to a connected client
*
* It's similar to sending the raw contents of JSON.stringify() to the
* client, except that any embedded object with pipe() method is
* considered as an object stream and will be sent as a JSON array of
* objects.
*
* Keep in mind that this function is only meant to be used in debug
 * tools; it would require strengthening to be used in production
* mode.
*
* @param {Object} obj - js object to stream JSON-serialized
* @param {stream.Writable} wstream - output stream
* @param {function} cb - callback when all JSON data has been output
* or if there was an error
* @return {undefined}
*/
streamRPCJSONObj = function _streamRPCJSONObj(obj, wstream, cb) {
const cbOnce = jsutil.once(cb);
if (typeof(obj) === 'object') {
if (obj && obj.pipe !== undefined) {
// stream object streams as JSON arrays
return objectStreamToJSON(obj, wstream, cbOnce);
}
if (Array.isArray(obj)) {
let first = true;
wstream.write('[');
return async.eachSeries(obj, (child, done) => {
if (first) {
first = false;
} else {
wstream.write(',');
}
streamRPCJSONObj(child, wstream, done);
},
err => {
if (err) {
return cbOnce(err);
}
wstream.write(']');
return cbOnce(null);
});
}
if (obj) {
let first = true;
wstream.write('{');
return async.eachSeries(Object.keys(obj), (k, done) => {
if (obj[k] === undefined) {
return done();
}
if (first) {
first = false;
} else {
wstream.write(',');
}
wstream.write(`${JSON.stringify(k)}:`);
return streamRPCJSONObj(obj[k], wstream, done);
},
err => {
if (err) {
return cbOnce(err);
}
wstream.write('}');
return cbOnce(null);
});
}
}
// primitive types
if (obj === undefined) {
wstream.write('null'); // if undefined elements are present in
// arrays, convert them to JSON null
// objects
} else {
wstream.write(JSON.stringify(obj));
}
return setImmediate(() => cbOnce(null));
};
/**
* @brief create a server object that serves RPC requests through POST
* HTTP requests. This is intended to help functional testing, the
* RPCServer class is meant to be used on real traffic.
*
 * Services associated with namespaces (i.e. URL base paths) must
 * then be registered on this server.
*
* @param {Object} params - params object
* @param {Object} params.logger - logger object
* @return {Object} a HTTP server object, not yet listening on a TCP
* port (you must call listen(port) on the returned object)
*/
function RESTServer(params) {
assert(params);
assert(params.logger);
const httpServer = http.createServer((req, res) => {
if (req.method !== 'POST') {
return sendHTTPError(
res, errors.MethodNotAllowed.customizeDescription(
'only POST requests are supported for RPC calls'));
}
const matchingService = httpServer.serviceList.find(
service => req.url === service.namespace);
if (!matchingService) {
return sendHTTPError(
res, errors.InvalidArgument.customizeDescription(
`unknown service in URL ${req.url}`));
}
const reqBody = [];
req.on('data', data => {
reqBody.push(data);
});
return req.on('end', () => {
if (reqBody.length === 0) {
return sendHTTPError(res, errors.MissingRequestBodyError);
}
try {
const jsonReq = JSON.parse(reqBody);
if (!jsonReq.call) {
throw errors.InvalidArgument.customizeDescription(
'missing "call" JSON attribute');
}
const args = jsonReq.args || {};
matchingService._onCall(jsonReq.call, args, (err, data) => {
if (err) {
return sendHTTPError(res, err);
}
res.writeHead(200);
if (data === undefined) {
return res.end();
}
res.write('{"result":');
return streamRPCJSONObj(data, res, err => {
if (err) {
return res.end(JSON.stringify(err));
}
return res.end('}\n');
});
});
return undefined;
} catch (err) {
return sendHTTPError(res, err);
}
});
});
httpServer.serviceList = [];
/**
* register a list of service objects on this server
*
* It's not necessary to call this function if you provided a
* "server" parameter to the service constructor.
*
* @param {BaseService} serviceList - list of services to register
* @return {undefined}
*/
httpServer.registerServices = function registerServices(...serviceList) {
this.serviceList.push.apply(this.serviceList, serviceList);
};
return httpServer;
}
module.exports = {
BaseClient,
BaseService,
RPCServer,
RESTServer,
};
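How the manifest-driven discovery in `BaseClient.connect()`/`createCall()` fits together can be sketched with a synchronous stand-in for the socket.io transport; the toy service and its `ping` call are illustrative only, not part of the real API:

```javascript
// Manifest-driven call discovery, as in BaseClient: only getManifest()
// is hard-coded; every other proxy method is created from the manifest.
function createClient(service) {
    const client = {};
    function createCall(name) {
        client[name] = (...args) => service[name](...args);
    }
    createCall('getManifest');
    const manifest = client.getManifest();
    manifest.api.forEach(apiItem => createCall(apiItem.name));
    return client;
}

// Toy in-process "service"; the real client emits socket.io events.
const service = {
    getManifest: () => ({ apiVersion: '1.0', api: [{ name: 'ping' }] }),
    ping: x => `pong:${x}`,
};
const client = createClient(service);
```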

'use strict'; // eslint-disable-line
const uuid = require('uuid');
const stream = require('stream');
const debug = require('debug')('sio-stream');
const assert = require('assert');
const async = require('async');
const flattenError = require('./utils').flattenError;
const reconstructError = require('./utils').reconstructError;
const DEFAULT_MAX_PENDING_ACK = 4;
const DEFAULT_ACK_TIMEOUT_MS = 5000;
class SIOOutputStream extends stream.Writable {
constructor(socket, streamId, maxPendingAck, ackTimeoutMs) {
super({ objectMode: true });
this._initOutputStream(socket, streamId, maxPendingAck,
ackTimeoutMs);
}
_initOutputStream(socket, streamId, maxPendingAck, ackTimeoutMs) {
this.socket = socket;
this.streamId = streamId;
this.on('finish', () => {
this.socket._finish(this.streamId, err => {
                // no-op on client ack for now; we may add
                // more handling here later
debug('ack finish', this.streamId, 'err', err);
});
});
this.on('error', err => {
debug('output stream error', this.streamId);
// notify remote of the error
this.socket._error(this.streamId, err);
});
// This is used for queuing flow control, don't issue more
// than maxPendingAck requests (events) that have not been
// acked yet
this.maxPendingAck = maxPendingAck;
this.ackTimeoutMs = ackTimeoutMs;
this.nPendingAck = 0;
}
_write(chunk, encoding, callback) {
return this._writev([{ chunk }], callback);
}
_writev(chunks, callback) {
const payload = chunks.map(chunk => chunk.chunk);
debug(`_writev(${JSON.stringify(payload)}, ...)`);
this.nPendingAck += 1;
const timeoutInfo =
`stream timeout: did not receive ack after ${this.ackTimeoutMs}ms`;
async.timeout(cb => {
this.socket._write(this.streamId, payload, cb);
}, this.ackTimeoutMs, timeoutInfo)(
err => {
debug(`ack stream-data ${this.streamId}
(${JSON.stringify(payload)}):`, err);
if (this.nPendingAck === this.maxPendingAck) {
callback();
}
this.nPendingAck -= 1;
if (err) {
// notify remote of the error (timeout notably)
debug('stream error:', err);
this.socket._error(this.streamId, err);
// stop the producer
this.socket.destroyStream(this.streamId);
}
});
if (this.nPendingAck < this.maxPendingAck) {
callback();
}
}
}
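The ack window logic in `_writev()` (at most `maxPendingAck` un-acked packets in flight; the write callback is deferred exactly when the window is full) can be isolated into a small counter sketch; `createAckWindow` is a name invented for this illustration:

```javascript
// Ack-based flow control as in SIOOutputStream._writev(): send() says
// whether the producer may continue immediately (window not full), and
// ack() says whether a producer blocked on a full window should resume.
function createAckWindow(maxPendingAck) {
    let nPendingAck = 0;
    return {
        send() {
            nPendingAck += 1;
            return nPendingAck < maxPendingAck; // false => await an ack
        },
        ack() {
            const wasFull = nPendingAck === maxPendingAck;
            nPendingAck -= 1;
            return wasFull; // true => resume the blocked producer
        },
        pending() { return nPendingAck; },
    };
}

const win = createAckWindow(2);
```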
class SIOInputStream extends stream.Readable {
constructor(socket, streamId) {
super({ objectMode: true });
this.socket = socket;
this.streamId = streamId;
this._readState = {
pushBuffer: [],
readable: false,
};
}
destroy() {
debug('destroy called', this.streamId);
this._destroyed = true;
this.pause();
this.removeAllListeners('data');
this.removeAllListeners('end');
this._readState = {
pushBuffer: [],
readable: false,
};
// do this in case the client piped this stream to other ones
this.unpipe();
// emit 'stream-hangup' event to notify the remote producer
// that we're not interested in further results
this.socket._hangup(this.streamId);
this.emit('close');
}
_pushData() {
debug('pushData _readState:', this._readState);
if (this._destroyed) {
return;
}
while (this._readState.pushBuffer.length > 0) {
const item = this._readState.pushBuffer.shift();
debug('pushing item', item);
if (!this.push(item)) {
this._readState.readable = false;
break;
}
}
}
_read(size) {
debug(`_read(${size})`);
this._readState.readable = true;
this._pushData();
}
_ondata(data) {
debug('_ondata', this.streamId, data);
if (this._destroyed) {
return;
}
this._readState.pushBuffer.push.apply(this._readState.pushBuffer,
data);
if (this._readState.readable) {
this._pushData();
}
}
_onend() {
debug('_onend', this.streamId);
this._readState.pushBuffer.push(null);
if (this._readState.readable) {
this._pushData();
}
this.emit('close');
}
_onerror(receivedErr) {
debug('_onerror', this.streamId, 'error', receivedErr);
const err = reconstructError(receivedErr);
err.remote = true;
this.emit('error', err);
}
}
/**
* @class
* @classdesc manage a set of user streams over a socket.io connection
*/
class SIOStreamSocket {
constructor(socket, logger, maxPendingAck, ackTimeoutMs) {
assert(socket);
assert(logger);
/** @member {Object} socket.io connection */
this.socket = socket;
/** @member {Object} logger object */
this.logger = logger;
/** @member {Number} max number of in-flight output stream
* packets sent to the client without an ack received yet */
this.maxPendingAck = maxPendingAck;
/** @member {Number} timeout for receiving an ack after an
* output stream packet is sent to the client */
this.ackTimeoutMs = ackTimeoutMs;
/** @member {Object} map of stream proxies initiated by the
* remote side */
this.remoteStreams = {};
/** @member {Object} map of stream-like objects initiated
* locally and connected to the remote side */
this.localStreams = {};
const log = logger;
// stream data message, contains an array of one or more data objects
this.socket.on('stream-data', (payload, cb) => {
const { streamId, data } = payload;
log.debug('received \'stream-data\' event',
{ streamId, size: data.length });
const stream = this.remoteStreams[streamId];
if (!stream) {
log.debug('no such remote stream registered', { streamId });
return;
}
stream._ondata(data);
cb(null);
});
// signals normal end of stream to the consumer
this.socket.on('stream-end', (payload, cb) => {
const { streamId } = payload;
log.debug('received \'stream-end\' event', { streamId });
const stream = this.remoteStreams[streamId];
if (!stream) {
log.debug('no such remote stream registered', { streamId });
return;
}
stream._onend();
cb(null);
});
// error message sent by the stream producer to the consumer
this.socket.on('stream-error', payload => {
const { streamId, error } = payload;
log.debug('received \'stream-error\' event', { streamId, error });
const stream = this.remoteStreams[streamId];
if (!stream) {
log.debug('no such remote stream registered', { streamId });
return;
}
stream._onerror(error);
});
// hangup message sent by the stream consumer to the producer
this.socket.on('stream-hangup', payload => {
const { streamId } = payload;
log.debug('received \'stream-hangup\' event', { streamId });
const stream = this.localStreams[streamId];
if (!stream) {
log.debug('no such local stream registered' +
'(may have already reached the end)', { streamId });
return;
}
this.destroyStream(streamId);
});
}
/**
* @brief encode all stream-like objects found inside a user
 * object into a serialized form that can be transmitted through a
* socket.io connection, then decoded back to a stream proxy
* object by the other end with decodeStreams()
*
* @param {Object} arg any flat object or value that may be or
* contain stream-like objects
 * @return {Object} an object of the same nature as <tt>arg</tt>, with
 * streams encoded for transmission to the remote side
*/
encodeStreams(arg) {
if (!arg) {
return arg;
}
const log = this.logger;
const isReadStream = (typeof(arg.pipe) === 'function'
&& typeof (arg.read) === 'function');
let isWriteStream = (typeof(arg.write) === 'function');
if (isReadStream || isWriteStream) {
if (isReadStream && isWriteStream) {
// For now, consider that duplex streams are input
// streams for the purpose of supporting Transform
// streams in server -> client direction. If the need
// arises, we can implement full duplex streams later.
isWriteStream = false;
}
const streamId = uuid();
const encodedStream = {
$streamId: streamId,
readable: isReadStream,
writable: isWriteStream,
};
let transportStream;
if (isReadStream) {
transportStream = new SIOOutputStream(this, streamId,
this.maxPendingAck,
this.ackTimeoutMs);
} else {
transportStream = new SIOInputStream(this, streamId);
}
this.localStreams[streamId] = arg;
arg.once('close', () => {
log.debug('stream closed, removing from local streams',
{ streamId });
delete this.localStreams[streamId];
});
arg.on('error', error => {
log.error('stream error', { streamId, error });
});
if (isReadStream) {
arg.pipe(transportStream);
}
if (isWriteStream) {
transportStream.pipe(arg);
}
return encodedStream;
}
if (typeof(arg) === 'object') {
let encodedObj;
if (Array.isArray(arg)) {
encodedObj = [];
for (let k = 0; k < arg.length; ++k) {
encodedObj.push(this.encodeStreams(arg[k]));
}
} else {
encodedObj = {};
// user objects are simple flat objects and we want to
// copy all their properties
// eslint-disable-next-line
for (const k in arg) {
encodedObj[k] = this.encodeStreams(arg[k]);
}
}
return encodedObj;
}
return arg;
}
/**
* @brief decode all encoded stream markers (produced by
* encodeStreams()) found inside the object received from the
* remote side, turn them into actual readable/writable stream
* proxies that are forwarding data from/to the remote side stream
*
* @param {Object} arg the object as received from the remote side
 * @return {Object} an object of the same nature as <tt>arg</tt>, with
 * stream markers decoded into actual readable/writable stream
 * objects
*/
decodeStreams(arg) {
if (!arg) {
return arg;
}
const log = this.logger;
if (arg.$streamId !== undefined) {
if (arg.readable && arg.writable) {
throw new Error('duplex streams not supported');
}
const streamId = arg.$streamId;
let stream;
if (arg.readable) {
stream = new SIOInputStream(this, streamId);
} else if (arg.writable) {
stream = new SIOOutputStream(this, streamId,
this.maxPendingAck,
this.ackTimeoutMs);
} else {
throw new Error('can\'t decode stream neither readable ' +
'nor writable');
}
this.remoteStreams[streamId] = stream;
if (arg.readable) {
stream.once('close', () => {
log.debug('stream closed, removing from remote streams',
{ streamId });
delete this.remoteStreams[streamId];
});
}
if (arg.writable) {
stream.once('finish', () => {
log.debug('stream finished, removing from remote streams',
{ streamId });
delete this.remoteStreams[streamId];
});
}
stream.on('error', error => {
log.error('stream error', { streamId, error });
});
return stream;
}
if (typeof(arg) === 'object') {
let decodedObj;
if (Array.isArray(arg)) {
decodedObj = [];
for (let k = 0; k < arg.length; ++k) {
decodedObj.push(this.decodeStreams(arg[k]));
}
} else {
decodedObj = {};
// user objects are simple flat objects and we want to
// copy all their properties
// eslint-disable-next-line
for (const k in arg) {
decodedObj[k] = this.decodeStreams(arg[k]);
}
}
return decodedObj;
}
return arg;
}
_write(streamId, data, cb) {
this.logger.debug('emit \'stream-data\' event',
{ streamId, size: data.length });
this.socket.emit('stream-data', { streamId, data }, cb);
}
_finish(streamId, cb) {
this.logger.debug('emit \'stream-end\' event', { streamId });
this.socket.emit('stream-end', { streamId }, cb);
}
_error(streamId, error) {
this.logger.debug('emit \'stream-error\' event', { streamId, error });
this.socket.emit('stream-error', { streamId,
error: flattenError(error) });
}
_hangup(streamId) {
this.logger.debug('emit \'stream-hangup\' event', { streamId });
this.socket.emit('stream-hangup', { streamId });
}
destroyStream(streamId) {
this.logger.debug('destroyStream', { streamId });
if (!this.localStreams[streamId]) {
return;
}
if (this.localStreams[streamId].destroy) {
// a 'close' event shall be emitted by destroy()
this.localStreams[streamId].destroy();
}
// if no destroy function exists in the input stream, let it
// go through the end
}
}
module.exports.createSocket = function createSocket(
socket,
logger,
maxPendingAck = DEFAULT_MAX_PENDING_ACK,
ackTimeoutMs = DEFAULT_ACK_TIMEOUT_MS) {
return new SIOStreamSocket(socket, logger, maxPendingAck, ackTimeoutMs);
};
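Both `encodeStreams()` and `decodeStreams()` rely on the same recursive walk over arrays and flat user objects, transforming the leaves they recognize and copying everything else. A standalone sketch of that traversal (`mapLeaves` is a hypothetical name, not part of the module):

```javascript
// Hypothetical standalone version of the recursive traversal shared by
// encodeStreams()/decodeStreams(): apply fn to values it knows how to
// transform, recurse into arrays and plain objects, pass through the rest.
function mapLeaves(arg, fn) {
    if (!arg) {
        return arg;
    }
    const mapped = fn(arg);
    if (mapped !== arg) {
        return mapped; // fn recognized and transformed this value
    }
    if (typeof arg === 'object') {
        if (Array.isArray(arg)) {
            return arg.map(item => mapLeaves(item, fn));
        }
        const obj = {};
        // user objects are simple flat objects: copy every property
        Object.keys(arg).forEach(k => {
            obj[k] = mapLeaves(arg[k], fn);
        });
        return obj;
    }
    return arg;
}
```

In the real module, `fn` is the part that detects stream-like objects (or `$streamId` markers) and substitutes a transport stream.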

lib/network/rpc/utils.js Normal file

@ -0,0 +1,48 @@
'use strict'; // eslint-disable-line
/**
* @brief turn all <tt>err</tt> own and prototype attributes into own attributes
*
* This is done so that JSON.stringify() can properly serialize those
* attributes (e.g. err.notFound)
*
* @param {Error} err error object
* @return {Object} flattened object containing <tt>err</tt> attributes
*/
module.exports.flattenError = function flattenError(err) {
if (!err) {
return err;
}
const flattenedErr = {};
flattenedErr.message = err.message;
for (const k in err) {
if (!(k in flattenedErr)) {
flattenedErr[k] = err[k];
}
}
return flattenedErr;
};
/**
* @brief recreate a proper Error object from its flattened
* representation created with flattenError().
*
* @note Its internals may differ from the original Error object but
* its attributes should be the same.
*
* @param {Object} err flattened error object
* @return {Error} a reconstructed Error object inheriting <tt>err</tt>
* attributes
*/
module.exports.reconstructError = function reconstructError(err) {
if (!err) {
return err;
}
const reconstructedErr = new Error(err.message);
Object.keys(err).forEach(k => {
reconstructedErr[k] = err[k];
});
return reconstructedErr;
};
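These two helpers are meant to round-trip an Error, including custom attributes such as `err.notFound`, through JSON. A self-contained usage sketch (the helper bodies are restated inline):

```javascript
// Inline restatement of flattenError()/reconstructError() with a
// round-trip through JSON, the way the RPC layer uses them.
function flattenError(err) {
    if (!err) {
        return err;
    }
    const flattenedErr = { message: err.message };
    for (const k in err) { // eslint-disable-line
        if (!(k in flattenedErr)) {
            flattenedErr[k] = err[k];
        }
    }
    return flattenedErr;
}
function reconstructError(err) {
    if (!err) {
        return err;
    }
    const reconstructedErr = new Error(err.message);
    Object.keys(err).forEach(k => {
        reconstructedErr[k] = err[k];
    });
    return reconstructedErr;
}
// custom attributes survive serialization; `message` is copied explicitly
// because it is a non-enumerable own property of Error
const original = new Error('key not found');
original.notFound = true;
const wire = JSON.parse(JSON.stringify(flattenError(original)));
const rebuilt = reconstructError(wire);
```

The rebuilt object is a genuine Error instance carrying the same `message` and `notFound` attributes as the original.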


@ -0,0 +1,117 @@
'use strict'; // eslint-disable-line strict
const Ajv = require('ajv');
const userPolicySchema = require('./userPolicySchema');
const errors = require('../errors');
const ajValidate = new Ajv({ allErrors: true });
// compiles schema to functions and caches them for all cases
const userPolicyValidate = ajValidate.compile(userPolicySchema);
const errDict = {
required: {
Version: 'Policy document must be version 2012-10-17 or greater.',
Action: 'Policy statement must contain actions.',
},
pattern: {
Action: 'Actions/Conditions must be prefaced by a vendor,' +
' e.g., iam, sdb, ec2, etc.',
Resource: 'Resource must be in ARN format or "*".',
},
minItems: {
Resource: 'Policy statement must contain resources.',
},
};
// parse ajv errors and return early with the first relevant error
function _parseErrors(ajvErrors) {
    // a copy is needed as we have to assign a custom error description
const parsedErr = Object.assign({}, errors.MalformedPolicyDocument);
parsedErr.description = 'Syntax errors in policy.';
ajvErrors.some(err => {
const resource = err.dataPath;
const field = err.params ? err.params.missingProperty : undefined;
const errType = err.keyword;
if (errType === 'type' && (resource === '.Statement' ||
resource === '.Statement.Resource' ||
resource === '.Statement.NotResource')) {
// skip this as this doesn't have enough error context
return false;
}
if (err.keyword === 'required' && field && errDict.required[field]) {
parsedErr.description = errDict.required[field];
} else if (err.keyword === 'pattern' &&
(resource === '.Statement.Action' ||
resource === '.Statement.NotAction')) {
parsedErr.description = errDict.pattern.Action;
} else if (err.keyword === 'pattern' &&
(resource === '.Statement.Resource' ||
resource === '.Statement.NotResource')) {
parsedErr.description = errDict.pattern.Resource;
} else if (err.keyword === 'minItems' &&
(resource === '.Statement.Resource' ||
resource === '.Statement.NotResource')) {
parsedErr.description = errDict.minItems.Resource;
}
return true;
});
return parsedErr;
}
// parse JSON safely without throwing an exception
function _safeJSONParse(s) {
try {
return JSON.parse(s);
} catch (e) {
return e;
}
}
// validates policy using the validation schema
function _validatePolicy(type, policy) {
if (type === 'user') {
const parseRes = _safeJSONParse(policy);
if (parseRes instanceof Error) {
return { error: Object.assign({}, errors.MalformedPolicyDocument),
valid: false };
}
userPolicyValidate(parseRes);
if (userPolicyValidate.errors) {
return { error: _parseErrors(userPolicyValidate.errors),
valid: false };
}
return { error: null, valid: true };
}
// TODO: add support for resource policies
return { error: errors.NotImplemented, valid: false };
}
/**
* @typedef ValidationResult
* @type Object
* @property {Array|null} error - list of validation errors or null
* @property {Bool} valid - true/false depending on the validation result
*/
/**
* Validates user policy
 * @param {String} policy - policy json
 * @returns {ValidationResult} - result of the validation
*/
function validateUserPolicy(policy) {
return _validatePolicy('user', policy);
}
/**
* Validates resource policy
 * @param {String} policy - policy json
 * @returns {ValidationResult} - result of the validation
*/
function validateResourcePolicy(policy) {
return _validatePolicy('resource', policy);
}
module.exports = {
validateUserPolicy,
validateResourcePolicy,
};
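The parse-without-throwing pattern in `_safeJSONParse()` lets callers branch on `instanceof Error` instead of wrapping every parse site in try/catch. Restated standalone (`safeJSONParse` is the same helper, renamed for this sketch):

```javascript
// Return the caught exception instead of throwing, so callers can test
// the result with `instanceof Error` -- the pattern _validatePolicy()
// relies on before running the schema validation.
function safeJSONParse(s) {
    try {
        return JSON.parse(s);
    } catch (e) {
        return e;
    }
}

const ok = safeJSONParse('{"Version": "2012-10-17"}');
const bad = safeJSONParse('{not json');
```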


@ -0,0 +1,543 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"title": "AWS Policy schema.",
"description": "This schema describes a user policy per AWS policy grammar rules",
"definitions": {
"principalService": {
"type": "object",
"properties": {
"Service": {
"type": "string",
"enum": [
"backbeat"
]
}
},
"additionalProperties": false
},
"principalAnonymous": {
"type": "string",
"pattern": "^\\*$"
},
"principalAWSAccountID": {
"type": "string",
"pattern": "^[0-9]{12}$"
},
"principalAWSAccountArn": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:root$"
},
"principalAWSUserArn": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:user/[\\w+=,.@ -]{1,64}$"
},
"principalAWSRoleArn": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:role/[\\w+=,.@ -]{1,64}$"
},
"principalFederatedSamlIdp": {
"type": "string",
"pattern": "^arn:aws:iam::[0-9]{12}:saml-provider/[\\w._-]{1,128}$"
},
"principalAWSItem": {
"type": "object",
"properties": {
"AWS": {
"oneOf": [
{ "$ref": "#/definitions/principalAWSAccountID" },
{ "$ref": "#/definitions/principalAnonymous" },
{ "$ref": "#/definitions/principalAWSAccountArn" },
{ "$ref": "#/definitions/principalAWSUserArn" },
{ "$ref": "#/definitions/principalAWSRoleArn" },
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSAccountID"
}
},
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSAccountArn"
}
},
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSRoleArn"
}
},
{
"type": "array",
"minItems": 1,
"items": {
"$ref": "#/definitions/principalAWSUserArn"
}
}
]
}
},
"additionalProperties": false
},
"principalFederatedItem": {
"type": "object",
"properties": {
"Federated": {
"oneOf": [
{ "$ref": "#/definitions/principalFederatedSamlIdp" }
]
}
},
"additionalProperties": false
},
"principalItem": {
"oneOf": [
{ "$ref": "#/definitions/principalAWSItem" },
{ "$ref": "#/definitions/principalAnonymous" },
{ "$ref": "#/definitions/principalFederatedItem" },
{ "$ref": "#/definitions/principalService" }
]
},
"actionItem": {
"type": "string",
"pattern": "^[^*:]+:([^:])+|^\\*{1}$"
},
"resourceItem": {
"type": "string",
"pattern": "^\\*|arn:(aws|scality)(:(\\*{1}|[a-z0-9\\*\\-]{2,})*?){3}:((?!\\$\\{\\}).)*?$"
},
"conditions": {
"type": "object",
"properties": {
"StringEquals": {
"type": "object"
},
"StringNotEquals": {
"type": "object"
},
"StringEqualsIgnoreCase": {
"type": "object"
},
"StringNotEqualsIgnoreCase": {
"type": "object"
},
"StringLike": {
"type": "object"
},
"StringNotLike": {
"type": "object"
},
"NumericEquals": {
"type": "object"
},
"NumericNotEquals": {
"type": "object"
},
"NumericLessThan": {
"type": "object"
},
"NumericLessThanEquals": {
"type": "object"
},
"NumericGreaterThan": {
"type": "object"
},
"NumericGreaterThanEquals": {
"type": "object"
},
"DateEquals": {
"type": "object"
},
"DateNotEquals": {
"type": "object"
},
"DateLessThan": {
"type": "object"
},
"DateLessThanEquals": {
"type": "object"
},
"DateGreaterThan": {
"type": "object"
},
"DateGreaterThanEquals": {
"type": "object"
},
"Bool": {
"type": "object"
},
"BinaryEquals": {
"type": "object"
},
"BinaryNotEquals": {
"type": "object"
},
"IpAddress": {
"type": "object"
},
"NotIpAddress": {
"type": "object"
},
"ArnEquals": {
"type": "object"
},
"ArnNotEquals": {
"type": "object"
},
"ArnLike": {
"type": "object"
},
"ArnNotLike": {
"type": "object"
},
"Null": {
"type": "object"
},
"StringEqualsIfExists": {
"type": "object"
},
"StringNotEqualsIfExists": {
"type": "object"
},
"StringEqualsIgnoreCaseIfExists": {
"type": "object"
},
"StringNotEqualsIgnoreCaseIfExists": {
"type": "object"
},
"StringLikeIfExists": {
"type": "object"
},
"StringNotLikeIfExists": {
"type": "object"
},
"NumericEqualsIfExists": {
"type": "object"
},
"NumericNotEqualsIfExists": {
"type": "object"
},
"NumericLessThanIfExists": {
"type": "object"
},
"NumericLessThanEqualsIfExists": {
"type": "object"
},
"NumericGreaterThanIfExists": {
"type": "object"
},
"NumericGreaterThanEqualsIfExists": {
"type": "object"
},
"DateEqualsIfExists": {
"type": "object"
},
"DateNotEqualsIfExists": {
"type": "object"
},
"DateLessThanIfExists": {
"type": "object"
},
"DateLessThanEqualsIfExists": {
"type": "object"
},
"DateGreaterThanIfExists": {
"type": "object"
},
"DateGreaterThanEqualsIfExists": {
"type": "object"
},
"BoolIfExists": {
"type": "object"
},
"BinaryEqualsIfExists": {
"type": "object"
},
"BinaryNotEqualsIfExists": {
"type": "object"
},
"IpAddressIfExists": {
"type": "object"
},
"NotIpAddressIfExists": {
"type": "object"
},
"ArnEqualsIfExists": {
"type": "object"
},
"ArnNotEqualsIfExists": {
"type": "object"
},
"ArnLikeIfExists": {
"type": "object"
},
"ArnNotLikeIfExists": {
"type": "object"
}
},
"additionalProperties": false
}
},
"properties": {
"Version": {
"type": "string",
"enum": [
"2012-10-17"
]
},
"Statement": {
"oneOf": [
{
"type": [
"array"
],
"minItems": 1,
"items": {
"type": "object",
"properties": {
"Sid": {
"type": "string",
"pattern": "^[a-zA-Z0-9]+$"
},
"Effect": {
"type": "string",
"enum": [
"Allow",
"Deny"
]
},
"Principal": {
"$ref": "#/definitions/principalItem"
},
"NotPrincipal": {
"$ref": "#/definitions/principalItem"
},
"Action": {
"oneOf": [
{
"$ref": "#/definitions/actionItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/actionItem"
}
}
]
},
"NotAction": {
"oneOf": [
{
"$ref": "#/definitions/actionItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/actionItem"
}
}
]
},
"Resource": {
"oneOf": [
{
"$ref": "#/definitions/resourceItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/resourceItem"
},
"minItems": 1
}
]
},
"NotResource": {
"oneOf": [
{
"$ref": "#/definitions/resourceItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/resourceItem"
},
"minItems": 1
}
]
},
"Condition": {
"$ref": "#/definitions/conditions"
}
},
"oneOf": [
{
"required": [
"Effect",
"Action",
"Resource"
]
}, {
"required": [
"Effect",
"Action",
"NotResource"
]
}, {
"required": [
"Effect",
"NotAction",
"Resource"
]
}, {
"required": [
"Effect",
"NotAction",
"NotResource"
]
}, {
"required": [
"Effect",
"Action",
"Principal"
]
}, {
"required": [
"Effect",
"Action",
"NotPrincipal"
]
}
]
}
},
{
"type": [
"object"
],
"properties": {
"Sid": {
"type": "string",
"pattern": "^[a-zA-Z0-9]+$"
},
"Effect": {
"type": "string",
"enum": [
"Allow",
"Deny"
]
},
"Principal": {
"$ref": "#/definitions/principalItem"
},
"Action": {
"oneOf": [
{
"$ref": "#/definitions/actionItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/actionItem"
}
}
]
},
"NotAction": {
"oneOf": [
{
"$ref": "#/definitions/actionItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/actionItem"
}
}
]
},
"Resource": {
"oneOf": [
{
"$ref": "#/definitions/resourceItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/resourceItem"
},
"minItems": 1
}
]
},
"NotResource": {
"oneOf": [
{
"$ref": "#/definitions/resourceItem"
},
{
"type": "array",
"items": {
"$ref": "#/definitions/resourceItem"
},
"minItems": 1
}
]
},
"Condition": {
"$ref": "#/definitions/conditions"
}
},
"oneOf": [
{
"required": [
"Action",
"Effect",
"Resource"
]
}, {
"required": [
"Action",
"Effect",
"NotResource"
]
}, {
"required": [
"Effect",
"NotAction",
"Resource"
]
}, {
"required": [
"Effect",
"NotAction",
"NotResource"
]
}, {
"required": [
"Effect",
"Action",
"Principal"
]
}, {
"required": [
"Effect",
"Action",
"NotPrincipal"
]
}
]
}
]
}
},
"required": [
"Version",
"Statement"
],
"additionalProperties": false
}
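The `actionItem` and `resourceItem` patterns from the schema above can be exercised directly; this sketch copies both patterns verbatim and checks a few representative values:

```javascript
// Patterns copied verbatim from the schema's "actionItem" and
// "resourceItem" definitions.
const actionItem = new RegExp('^[^*:]+:([^:])+|^\\*{1}$');
const resourceItem = new RegExp(
    '^\\*|arn:(aws|scality)(:(\\*{1}|[a-z0-9\\*\\-]{2,})*?){3}' +
    ':((?!\\$\\{\\}).)*?$');

// an action must be vendor-prefixed, or the lone wildcard
const actionOk = actionItem.test('s3:GetObject');
const wildcardOk = actionItem.test('*');
const actionBad = actionItem.test('GetObject'); // missing vendor prefix

// a resource must be ARN-shaped, or the lone wildcard
const resourceOk = resourceItem.test('arn:aws:s3:::mybucket/mykey');
const resourceBad = resourceItem.test('not-an-arn');
```

This matches the custom error text in `errDict.pattern`: actions must be prefaced by a vendor, and resources must be in ARN format or `"*"`.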


@ -0,0 +1,553 @@
'use strict'; // eslint-disable-line strict
const parseIp = require('../ipCheck').parseIp;
// http://docs.aws.amazon.com/IAM/latest/UserGuide/list_s3.html
// For MPU actions:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html
// For bucket head and object head:
// http://docs.aws.amazon.com/AmazonS3/latest/dev/
// using-with-s3-actions.html
const _actionMap = {
bucketDelete: 's3:DeleteBucket',
bucketDeleteWebsite: 's3:DeleteBucketWebsite',
bucketGet: 's3:ListBucket',
bucketGetACL: 's3:GetBucketAcl',
bucketGetCors: 's3:GetBucketCORS',
bucketGetVersioning: 's3:GetBucketVersioning',
bucketGetWebsite: 's3:GetBucketWebsite',
bucketGetLocation: 's3:GetBucketLocation',
bucketHead: 's3:ListBucket',
bucketPut: 's3:CreateBucket',
bucketPutACL: 's3:PutBucketAcl',
bucketPutCors: 's3:PutBucketCORS',
// for bucketDeleteCors need s3:PutBucketCORS permission
// see http://docs.aws.amazon.com/AmazonS3/latest/API/
// RESTBucketDELETEcors.html
bucketDeleteCors: 's3:PutBucketCORS',
bucketPutVersioning: 's3:PutBucketVersioning',
bucketPutWebsite: 's3:PutBucketWebsite',
bucketPutReplication: 's3:PutReplicationConfiguration',
bucketGetReplication: 's3:GetReplicationConfiguration',
bucketDeleteReplication: 's3:DeleteReplicationConfiguration',
completeMultipartUpload: 's3:PutObject',
initiateMultipartUpload: 's3:PutObject',
listMultipartUploads: 's3:ListBucketMultipartUploads',
listParts: 's3:ListMultipartUploadParts',
multipartDelete: 's3:AbortMultipartUpload',
objectDelete: 's3:DeleteObject',
objectDeleteVersion: 's3:DeleteObjectVersion',
objectDeleteTagging: 's3:DeleteObjectTagging',
objectDeleteTaggingVersion: 's3:DeleteObjectVersionTagging',
objectGet: 's3:GetObject',
objectGetVersion: 's3:GetObjectVersion',
objectGetACL: 's3:GetObjectAcl',
objectGetACLVersion: 's3:GetObjectVersionAcl',
objectGetTagging: 's3:GetObjectTagging',
objectGetTaggingVersion: 's3:GetObjectVersionTagging',
objectHead: 's3:GetObject',
objectPut: 's3:PutObject',
objectPutACL: 's3:PutObjectAcl',
objectPutACLVersion: 's3:PutObjectVersionAcl',
objectPutPart: 's3:PutObject',
objectPutTagging: 's3:PutObjectTagging',
objectPutTaggingVersion: 's3:PutObjectVersionTagging',
serviceGet: 's3:ListAllMyBuckets',
objectReplicate: 's3:ReplicateObject',
};
const _actionMapIAM = {
attachGroupPolicy: 'iam:AttachGroupPolicy',
attachUserPolicy: 'iam:AttachUserPolicy',
createAccessKey: 'iam:CreateAccessKey',
createGroup: 'iam:CreateGroup',
createPolicy: 'iam:CreatePolicy',
createPolicyVersion: 'iam:CreatePolicyVersion',
createUser: 'iam:CreateUser',
deleteAccessKey: 'iam:DeleteAccessKey',
deleteGroup: 'iam:DeleteGroup',
deleteGroupPolicy: 'iam:DeleteGroupPolicy',
deletePolicy: 'iam:DeletePolicy',
deletePolicyVersion: 'iam:DeletePolicyVersion',
deleteUser: 'iam:DeleteUser',
detachGroupPolicy: 'iam:DetachGroupPolicy',
detachUserPolicy: 'iam:DetachUserPolicy',
getGroup: 'iam:GetGroup',
getGroupPolicy: 'iam:GetGroupPolicy',
getPolicy: 'iam:GetPolicy',
getPolicyVersion: 'iam:GetPolicyVersion',
getUser: 'iam:GetUser',
listAccessKeys: 'iam:ListAccessKeys',
listGroupPolicies: 'iam:ListGroupPolicies',
listGroups: 'iam:ListGroups',
listGroupsForUser: 'iam:ListGroupsForUser',
listPolicies: 'iam:ListPolicies',
listPolicyVersions: 'iam:ListPolicyVersions',
listUsers: 'iam:ListUsers',
putGroupPolicy: 'iam:PutGroupPolicy',
removeUserFromGroup: 'iam:RemoveUserFromGroup',
};
const _actionMapSSO = {
SsoAuthorize: 'sso:Authorize',
};
function _findAction(service, method) {
if (service === 's3') {
return _actionMap[method];
}
if (service === 'iam') {
return _actionMapIAM[method];
}
if (service === 'sso') {
return _actionMapSSO[method];
}
if (service === 'ring') {
return `ring:${method}`;
}
if (service === 'utapi') {
        // currently the only method is ListMetrics
return `utapi:${method}`;
}
return undefined;
}
function _buildArn(service, generalResource, specificResource, requesterInfo) {
// arn:partition:service:region:account-id:resourcetype/resource
if (service === 's3') {
// arn:aws:s3:::bucket/object
// General resource is bucketName
if (generalResource && specificResource) {
return `arn:aws:s3:::${generalResource}/${specificResource}`;
} else if (generalResource) {
return `arn:aws:s3:::${generalResource}`;
}
return 'arn:aws:s3:::';
}
if (service === 'iam') {
// arn:aws:iam::<account-id>:<resource-type><resource>
if (specificResource) {
return `arn:aws:iam::${requesterInfo.accountid}:` +
`${generalResource}${specificResource}`;
}
return `arn:aws:iam::${requesterInfo.accountid}:${generalResource}`;
}
if (service === 'ring') {
// arn:aws:iam::<account-id>:<resource-type><resource>
if (specificResource) {
return `arn:aws:ring::${requesterInfo.accountid}:` +
`${generalResource}/${specificResource}`;
}
return `arn:aws:ring::${requesterInfo.accountid}:${generalResource}`;
}
if (service === 'utapi') {
// arn:scality:utapi:::resourcetype/resource
// (possible resource types are buckets, accounts or users)
if (specificResource) {
return `arn:scality:utapi::${requesterInfo.accountid}:` +
`${generalResource}/${specificResource}`;
}
return `arn:scality:utapi::${requesterInfo.accountid}:` +
`${generalResource}/`;
}
if (service === 'sso') {
if (specificResource) {
return `arn:scality:sso:::${generalResource}/${specificResource}`;
}
return `arn:scality:sso:::${generalResource}`;
}
return undefined;
}
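The s3 branch of `_buildArn()` is the most common path; a standalone restatement of just that branch (`buildS3Arn` is a hypothetical name for this sketch):

```javascript
// Hypothetical standalone version of the s3 branch of _buildArn():
// produce arn:aws:s3:::bucket/object, arn:aws:s3:::bucket, or the
// bare service ARN when no bucket is involved.
function buildS3Arn(generalResource, specificResource) {
    // generalResource is the bucket name, specificResource the object key
    if (generalResource && specificResource) {
        return `arn:aws:s3:::${generalResource}/${specificResource}`;
    }
    if (generalResource) {
        return `arn:aws:s3:::${generalResource}`;
    }
    return 'arn:aws:s3:::';
}
```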
/**
* Class containing RequestContext for policy auth check
* @param {object} headers - request headers
 * @param {object} query - request query
 * @param {string} generalResource - bucket name from the request, if any
 * (for s3), or 'accounts', 'buckets' or 'users' (for utapi)
 * @param {string} specificResource - object name from the request, if any
 * (for s3), or bucket name (for utapi)
* @param {string} requesterIp - ip of requester
* @param {boolean} sslEnabled - whether request was https
* @param {string} apiMethod - type of request
* @param {string} awsService - service receiving request
* @param {string} locationConstraint - location constraint
* for put bucket operation
* @param {object} requesterInfo - info about entity making request
* @param {string} signatureVersion - auth signature type used
* @param {string} authType - type of authentication used
* @param {number} signatureAge - age of signature in milliseconds
* @param {string} securityToken - auth security token (temporary credentials)
* @return {RequestContext} a RequestContext instance
*/
class RequestContext {
constructor(headers, query, generalResource, specificResource,
requesterIp, sslEnabled, apiMethod,
awsService, locationConstraint, requesterInfo,
signatureVersion, authType, signatureAge, securityToken) {
this._headers = headers;
this._query = query;
this._requesterIp = requesterIp;
this._sslEnabled = sslEnabled;
this._apiMethod = apiMethod;
this._awsService = awsService;
this._generalResource = generalResource;
this._specificResource = specificResource;
this._locationConstraint = locationConstraint;
// Not implemented
this._multiFactorAuthPresent = null;
// Not implemented
this._multiFactorAuthAge = null;
// Not implemented
this._tokenIssueTime = null;
// Remainder not set when originally instantiated
// (unless if instantiated from deSerialize)
this._requesterInfo = requesterInfo;
// See http://docs.aws.amazon.com/AmazonS3/latest/
// API/bucket-policy-s3-sigv4-conditions.html
this._signatureVersion = signatureVersion;
this._authType = authType;
this._signatureAge = signatureAge;
this._securityToken = securityToken;
return this;
}
/**
* Serialize the object
* @return {string} - stringified object
*/
serialize() {
const requestInfo = {
apiMethod: this._apiMethod,
headers: this._headers,
query: this._query,
            requesterInfo: this._requesterInfo,
requesterIp: this._requesterIp,
sslEnabled: this._sslEnabled,
awsService: this._awsService,
generalResource: this._generalResource,
specificResource: this._specificResource,
multiFactorAuthPresent: this._multiFactorAuthPresent,
multiFactorAuthAge: this._multiFactorAuthAge,
signatureVersion: this._signatureVersion,
authType: this._authType,
signatureAge: this._signatureAge,
locationConstraint: this._locationConstraint,
tokenIssueTime: this._tokenIssueTime,
securityToken: this._securityToken,
};
return JSON.stringify(requestInfo);
}
    /**
     * deSerialize the JSON string
     * @param {string} stringRequest - the stringified requestContext
     * @return {RequestContext|Error} - a RequestContext instance, or an
     * Error if the string cannot be parsed
     */
static deSerialize(stringRequest) {
let obj;
try {
obj = JSON.parse(stringRequest);
} catch (err) {
return new Error(err);
}
return new RequestContext(obj.headers, obj.query, obj.generalResource,
obj.specificResource, obj.requesterIp, obj.sslEnabled,
obj.apiMethod, obj.awsService, obj.locationConstraint,
obj.requesterInfo, obj.signatureVersion,
obj.authType, obj.signatureAge, obj.securityToken);
}
/**
* Get the request action
* @return {string} action
*/
getAction() {
if (this._foundAction) {
return this._foundAction;
}
this._foundAction = _findAction(this._awsService, this._apiMethod);
return this._foundAction;
}
/**
* Get the resource impacted by the request
* @return {string} arn for the resource
*/
getResource() {
if (this._foundResource) {
return this._foundResource;
}
this._foundResource =
_buildArn(this._awsService, this._generalResource,
this._specificResource, this._requesterInfo);
return this._foundResource;
}
/**
* Set headers
* @param {object} headers - request headers
* @return {RequestContext} - RequestContext instance
*/
setHeaders(headers) {
this._headers = headers;
return this;
}
/**
* Get headers
* @return {object} request headers
*/
getHeaders() {
return this._headers;
}
/**
* Set query
* @param {object} query - request query
* @return {RequestContext} - RequestContext instance
*/
setQuery(query) {
this._query = query;
return this;
}
/**
* Get query
* @return {object} request query
*/
getQuery() {
return this._query;
}
/**
* Set requesterInfo
* @param {object} requesterInfo - info about entity making request
* @return {RequestContext} - RequestContext instance
*/
setRequesterInfo(requesterInfo) {
this._requesterInfo = requesterInfo;
return this;
}
/**
* Get requesterInfo
* @return {object} requesterInfo
*/
getRequesterInfo() {
return this._requesterInfo;
}
/**
* Set requesterIp
* @param {string} requesterIp - ip address of requester
* @return {RequestContext} - RequestContext instance
*/
setRequesterIp(requesterIp) {
this._requesterIp = requesterIp;
return this;
}
/**
* Get requesterIp
* @return {object} requesterIp - parsed requesterIp
*/
getRequesterIp() {
return parseIp(this._requesterIp);
}
/**
* Set sslEnabled
* @param {boolean} sslEnabled - true if https used
* @return {RequestContext} - RequestContext instance
*/
setSslEnabled(sslEnabled) {
this._sslEnabled = sslEnabled;
return this;
}
/**
* Get sslEnabled
* @return {boolean} true if sslEnabled, false if not
*/
getSslEnabled() {
return !!this._sslEnabled;
}
/**
* Set signatureVersion
* @param {string} signatureVersion - "AWS" identifies Signature Version 2
* and "AWS4-HMAC-SHA256" identifies Signature Version 4
* @return {RequestContext} - RequestContext instance
*/
setSignatureVersion(signatureVersion) {
this._signatureVersion = signatureVersion;
return this;
}
/**
* Get signatureVersion
*
* @return {string} authentication signature version
* "AWS" identifies Signature Version 2 and
* "AWS4-HMAC-SHA256" identifies Signature Version 4
*/
getSignatureVersion() {
return this._signatureVersion;
}
/**
* Set authType
* @param {string} authType - REST-HEADER, REST-QUERY-STRING or POST
* @return {RequestContext} - RequestContext instance
*/
setAuthType(authType) {
this._authType = authType;
return this;
}
/**
* Get authType
* @return {string} authentication type:
* REST-HEADER, REST-QUERY-STRING or POST
*/
getAuthType() {
return this._authType;
}
/**
* Set signatureAge
* @param {number} signatureAge -- age of signature in milliseconds
     * Note that for v2 query auth this will be undefined (such
     * requests are pre-signed and only carry an expires time, so the
     * signature age is not known)
* @return {RequestContext} - RequestContext instance
*/
setSignatureAge(signatureAge) {
this._signatureAge = signatureAge;
return this;
}
/**
* Get signatureAge
* @return {number} age of signature in milliseconds
* Note that for v2 query auth this will be undefined (since these
* requests are pre-signed and only come with an expires time so
* do not know age)
*/
getSignatureAge() {
return this._signatureAge;
}
/**
* Set locationConstraint
* @param {string} locationConstraint - bucket region constraint
* @return {RequestContext} - RequestContext instance
*/
setLocationConstraint(locationConstraint) {
this._locationConstraint = locationConstraint;
return this;
}
/**
* Get locationConstraint
* @return {string} location constraint of put bucket request
*/
getLocationConstraint() {
return this._locationConstraint;
}
/**
* Set awsService
* @param {string} awsService - aws service receiving request
* @return {RequestContext} - RequestContext instance
*/
setAwsService(awsService) {
this._awsService = awsService;
return this;
}
/**
* Get awsService
* @return {string} awsService receiving request
*/
getAwsService() {
return this._awsService;
}
/**
* Set tokenIssueTime
* @param {string} tokenIssueTime - Date/time that
* temporary security credentials were issued
* Only present in requests that are signed using
* temporary security credentials.
* @return {RequestContext} - RequestContext instance
*/
setTokenIssueTime(tokenIssueTime) {
this._tokenIssueTime = tokenIssueTime;
return this;
}
/**
* Get tokenIssueTime
* @return {string} tokenIssueTime
*/
getTokenIssueTime() {
return this._tokenIssueTime;
}
/**
* Set multiFactorAuthPresent
* @param {boolean} multiFactorAuthPresent - sets out whether MFA used
* for request
* @return {RequestContext} - RequestContext instance
*/
setMultiFactorAuthPresent(multiFactorAuthPresent) {
this._multiFactorAuthPresent = multiFactorAuthPresent;
return this;
}
/**
* Get multiFactorAuthPresent
* @return {boolean} multiFactorAuthPresent
*/
getMultiFactorAuthPresent() {
return this._multiFactorAuthPresent;
}
/**
* Set multiFactorAuthAge
* @param {number} multiFactorAuthAge - seconds since
* MFA credentials were issued
* @return {RequestContext} - RequestContext instance
*/
setMultiFactorAuthAge(multiFactorAuthAge) {
this._multiFactorAuthAge = multiFactorAuthAge;
return this;
}
/**
* Get multiFactorAuthAge
* @return {number} multiFactorAuthAge - seconds since
* MFA credentials were issued
*/
getMultiFactorAuthAge() {
return this._multiFactorAuthAge;
}
/**
* Returns the authentication security token
*
* @return {string} security token
*/
getSecurityToken() {
return this._securityToken;
}
/**
* Set the authentication security token
*
* @param {string} token - Security token
* @return {RequestContext} itself
*/
setSecurityToken(token) {
this._securityToken = token;
return this;
}
}
module.exports = RequestContext;
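Every setter above returns `this`, so calls can be chained fluently. A minimal sketch of that pattern, using a hypothetical `MiniContext` stand-in (not the real `RequestContext`, whose constructor is not shown here):

```javascript
// Stand-in class illustrating the chainable-setter pattern:
// each setter stores its value and returns `this`.
class MiniContext {
    setSslEnabled(sslEnabled) {
        this._sslEnabled = sslEnabled;
        return this; // returning the instance enables chaining
    }
    getSslEnabled() {
        return !!this._sslEnabled;
    }
    setAuthType(authType) {
        this._authType = authType;
        return this;
    }
    getAuthType() {
        return this._authType;
    }
}

// Chained configuration in a single expression
const ctx = new MiniContext()
    .setSslEnabled(true)
    .setAuthType('REST-HEADER');
```

The real class follows the same convention, which is why callers can build up a request context in one expression.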

'use strict'; // eslint-disable-line strict
const substituteVariables = require('./utils/variables.js');
const handleWildcards = require('./utils/wildcards.js').handleWildcards;
const conditions = require('./utils/conditions.js');
const findConditionKey = conditions.findConditionKey;
const convertConditionOperator = conditions.convertConditionOperator;
const checkArnMatch = require('./utils/checkArnMatch.js');
const evaluators = {};
const operatorsWithVariables = ['StringEquals', 'StringNotEquals',
'StringEqualsIgnoreCase', 'StringNotEqualsIgnoreCase',
'StringLike', 'StringNotLike', 'ArnEquals', 'ArnNotEquals',
'ArnLike', 'ArnNotLike'];
const operatorsWithNegation = ['StringNotEquals',
'StringNotEqualsIgnoreCase', 'StringNotLike', 'ArnNotEquals',
'ArnNotLike', 'NumericNotEquals'];
/**
* Check whether resource in policy statement applies to request resource
* @param {object} requestContext - info about request
* @param {string | [string]} statementResource - Resource(s) impacted
* by policy statement
* @param {object} log - logger
* @return {boolean} true if applicable, false if not
*/
function isResourceApplicable(requestContext, statementResource, log) {
const resource = requestContext.getResource();
if (!Array.isArray(statementResource)) {
// eslint-disable-next-line no-param-reassign
statementResource = [statementResource];
}
// ARN format:
// arn:partition:service:region:namespace:relative-id
const requestResourceArr = resource.split(':');
// Pull just the relative id because there is no restriction that it
// does not contain ":"
const requestRelativeId = requestResourceArr.slice(5).join(':');
for (let i = 0; i < statementResource.length; i ++) {
// Handle variables (must handle BEFORE wildcards)
const policyResource =
substituteVariables(statementResource[i], requestContext);
// Handle wildcards
const arnSegmentsMatch =
checkArnMatch(policyResource, requestRelativeId,
requestResourceArr, true);
if (arnSegmentsMatch) {
log.trace('policy resource is applicable to request',
{ requestResource: resource, policyResource });
return true;
}
}
log.trace('no policy resource is applicable to request',
{ requestResource: resource });
// If no match found, no resource is applicable
return false;
}
/**
* Check whether action in policy statement applies to request
* @param {string} requestAction - type of client request
* @param {string | [string]} statementAction - Action(s) impacted
* by policy statement
* @param {Object} log - logger
* @return {boolean} true if applicable, false if not
*/
function isActionApplicable(requestAction, statementAction, log) {
if (!Array.isArray(statementAction)) {
// eslint-disable-next-line no-param-reassign
statementAction = [statementAction];
}
const length = statementAction.length;
for (let i = 0; i < length; i ++) {
// No variables in actions so no need to handle
const regExStrOfStatementAction =
handleWildcards(statementAction[i]);
const actualRegEx = new RegExp(regExStrOfStatementAction, 'i');
if (actualRegEx.test(requestAction)) {
log.trace('policy action is applicable to request action', {
requestAction, policyAction: statementAction[i],
});
return true;
}
}
log.trace('no action in policy applicable to request action',
{ requestAction });
// If no match found, return false
return false;
}
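`isActionApplicable` turns each policy action into a case-insensitive regex via `handleWildcards` (defined elsewhere) and tests the request action against it. A sketch of that matching, assuming `handleWildcards` expands `*` to `.*` and `?` to `.` — `toActionRegExp` here is a hypothetical helper, not the real implementation:

```javascript
// Hypothetical helper mirroring the assumed wildcard expansion:
// escape regex metacharacters, then expand policy wildcards.
function toActionRegExp(policyAction) {
    const escaped = policyAction.replace(/[.+^${}()|[\]\\]/g, '\\$&');
    const pattern = escaped.replace(/\*/g, '.*').replace(/\?/g, '.');
    // 'i' flag: actions are matched case-insensitively, as above
    return new RegExp(`^${pattern}$`, 'i');
}

toActionRegExp('s3:Get*').test('s3:GetObject'); // matches
toActionRegExp('s3:Get*').test('s3:PutObject'); // does not match
```

So a statement with `"Action": "s3:Get*"` covers `s3:GetObject`, `s3:GetBucketAcl`, and so on.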
/**
* Check whether request meets policy conditions
* @param {RequestContext} requestContext - info about request
* @param {Object} statementCondition - Condition statement from policy
* @param {Object} log - logger
* @return {boolean} true if meet conditions, false if not
*/
function meetConditions(requestContext, statementCondition, log) {
// The Condition portion of a policy is an object with different
// operators as keys
const operators = Object.keys(statementCondition);
const length = operators.length;
for (let i = 0; i < length; i ++) {
const operator = operators[i];
const hasIfExistsCondition = operator.endsWith('IfExists');
// If has "IfExists" added to operator name, find operator name
// without "IfExists"
const bareOperator = hasIfExistsCondition ? operator.slice(0, -8) :
operator;
const operatorCanHaveVariables =
operatorsWithVariables.indexOf(bareOperator) > -1;
const isNegationOperator =
operatorsWithNegation.indexOf(bareOperator) > -1;
// Loop through conditions with the same operator
// Note: this should be the actual operator name, not the bareOperator
const conditionsWithSameOperator = statementCondition[operator];
const conditionKeys = Object.keys(conditionsWithSameOperator);
const conditionKeysLength = conditionKeys.length;
for (let j = 0; j < conditionKeysLength; j ++) {
const key = conditionKeys[j];
let value = conditionsWithSameOperator[key];
if (!Array.isArray(value)) {
value = [value];
}
// Handle variables
if (operatorCanHaveVariables) {
value = value.map(item =>
substituteVariables(item, requestContext));
}
// Pull key using requestContext
// TODO: If applicable to S3, handle policy set operations
// where a keyBasedOnRequestContext returns multiple values and
// condition has "ForAnyValue" or "ForAllValues".
// (see http://docs.aws.amazon.com/IAM/latest/UserGuide/
// reference_policies_multi-value-conditions.html)
const keyBasedOnRequestContext =
findConditionKey(key, requestContext);
// Handle IfExists and negation operators
if ((keyBasedOnRequestContext === undefined ||
keyBasedOnRequestContext === null) &&
(hasIfExistsCondition || isNegationOperator)) {
log.trace('satisfies condition due to IfExists operator or ' +
'negation operator', { method: 'evaluators.evaluatePolicy' });
continue;
}
// If no IfExists qualifier, the key does not exist and the
// condition operator is not Null, the
// condition is not met so return false.
if ((keyBasedOnRequestContext === null ||
keyBasedOnRequestContext === undefined) &&
bareOperator !== 'Null') {
log.trace('condition not satisfied due to ' +
'missing info', { operator,
conditionKey: key, policyValue: value });
return false;
}
// Translate operator into function using bareOperator
const operatorFunction = convertConditionOperator(bareOperator);
// Note: Wildcards are handled in the comparison operator function
// itself since StringLike, StringNotLike, ArnLike and ArnNotLike
// are the only operators where wildcards are allowed
if (!operatorFunction(keyBasedOnRequestContext, value)) {
log.trace('did not satisfy condition', { operator: bareOperator,
keyBasedOnRequestContext, policyValue: value });
return false;
}
}
}
return true;
}
/**
* Evaluate whether a request is permitted under a policy.
* @param {RequestContext} requestContext - Info necessary to
* evaluate permission
* See http://docs.aws.amazon.com/IAM/latest/UserGuide/
* reference_policies_evaluation-logic.html#policy-eval-reqcontext
* @param {object} policy - An IAM or resource policy
* @param {object} log - logger
* @return {string} Allow if permitted, Deny if not permitted or Neutral
* if not applicable
*/
evaluators.evaluatePolicy = (requestContext, policy, log) => {
// TODO: For bucket policies need to add Principal evaluation
let verdict = 'Neutral';
if (!Array.isArray(policy.Statement)) {
// eslint-disable-next-line no-param-reassign
policy.Statement = [policy.Statement];
}
for (let i = 0; i < policy.Statement.length; i++) {
const currentStatement = policy.Statement[i];
// If affirmative resource is in policy and request resource is
// not applicable, move on to next statement
if (currentStatement.Resource && !isResourceApplicable(requestContext,
currentStatement.Resource, log)) {
continue;
}
// If NotResource is in policy and resource matches NotResource
// in policy, move on to next statement
if (currentStatement.NotResource &&
isResourceApplicable(requestContext,
currentStatement.NotResource, log)) {
continue;
}
// If affirmative action is in policy and request action is not
// applicable, move on to next statement
if (currentStatement.Action &&
!isActionApplicable(requestContext.getAction(),
currentStatement.Action, log)) {
continue;
}
// If NotAction is in policy and action matches NotAction in policy,
// move on to next statement
if (currentStatement.NotAction &&
isActionApplicable(requestContext.getAction(),
currentStatement.NotAction, log)) {
continue;
}
// If do not meet conditions move on to next statement
if (currentStatement.Condition && !meetConditions(requestContext,
currentStatement.Condition, log)) {
continue;
}
if (currentStatement.Effect === 'Deny') {
log.trace('Deny statement applies');
// Once have Deny, return Deny since deny overrides an allow
return 'Deny';
}
log.trace('Allow statement applies');
// If statement is applicable, conditions are met and Effect is
// to Allow, set verdict to Allow
verdict = 'Allow';
}
log.trace('result of evaluating single policy', { verdict });
return verdict;
};
/**
* Evaluate whether a request is permitted under all applicable policies.
* @param {RequestContext} requestContext - Info necessary to
* evaluate permission
* See http://docs.aws.amazon.com/IAM/latest/UserGuide/
* reference_policies_evaluation-logic.html#policy-eval-reqcontext
* @param {[object]} allPolicies - all applicable IAM or resource policies
* @param {object} log - logger
* @return {string} Allow if permitted, Deny if not permitted.
* Default is to Deny. Deny overrides an Allow
*/
evaluators.evaluateAllPolicies = (requestContext, allPolicies, log) => {
log.trace('evaluating all policies');
let verdict = 'Deny';
for (let i = 0; i < allPolicies.length; i++) {
const singlePolicyVerdict =
evaluators.evaluatePolicy(requestContext, allPolicies[i], log);
// If there is any Deny, just return Deny
if (singlePolicyVerdict === 'Deny') {
return 'Deny';
}
if (singlePolicyVerdict === 'Allow') {
verdict = 'Allow';
}
}
log.trace('result of evaluating all policies', { verdict });
return verdict;
};
module.exports = evaluators;
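`evaluateAllPolicies` implements the standard IAM combination rule: default Deny, an explicit Deny in any policy wins, and an Allow is only returned if at least one policy allows and none denies. The core of that logic can be sketched as a standalone function (a simplified illustration, not the exported API):

```javascript
// Combine per-policy verdicts ('Allow' | 'Deny' | 'Neutral')
// using deny-overrides semantics with a default of Deny.
function combineVerdicts(verdicts) {
    let verdict = 'Deny'; // default is to deny
    for (const v of verdicts) {
        if (v === 'Deny') {
            return 'Deny'; // an explicit Deny always wins
        }
        if (v === 'Allow') {
            verdict = 'Allow'; // remember the Allow, keep scanning for a Deny
        }
    }
    return verdict; // all Neutral => default Deny
}
```

Note that `'Neutral'` (no applicable statement) never grants access on its own.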

'use strict'; // eslint-disable-line strict
const handleWildcardInResource =
require('./wildcards.js').handleWildcardInResource;
/**
* Checks whether an ARN from a request matches an ARN in a policy,
* comparing each portion of the request ARN against the policy ARN
* @param {string} policyArn - arn from policy
* @param {string} requestRelativeId - last part of the arn from the request
* @param {[string]} requestArnArr - all parts of request arn split on ":"
* @param {boolean} caseSensitive - whether the comparison should be
* case sensitive
* @return {boolean} true if match, false if not
*/
function checkArnMatch(policyArn, requestRelativeId, requestArnArr,
caseSensitive) {
let regExofArn = handleWildcardInResource(policyArn);
// handleWildcardInResource returns an array of per-segment regex
// strings, so lowercase each segment for case-insensitive comparison
regExofArn = caseSensitive ? regExofArn :
regExofArn.map(segment => segment.toLowerCase());
// The relativeId is the last part of the ARN (for instance, a bucket and
// object name in S3)
// Join on ":" in case there were ":" in the relativeID at the end
// of the arn
const policyRelativeId = caseSensitive ? regExofArn.slice(5).join(':') :
regExofArn.slice(5).join(':').toLowerCase();
const policyRelativeIdRegEx = new RegExp(policyRelativeId);
// Check to see if the relative-id matches first since most likely
// to diverge. If not a match, the resource is not applicable so return
// false
if (!policyRelativeIdRegEx.test(requestRelativeId)) {
return false;
}
// Check the other parts of the ARN to make sure they match. If not,
// return false.
for (let j = 0; j < 5; j ++) {
const segmentRegEx = new RegExp(regExofArn[j]);
const requestSegment = caseSensitive ? requestArnArr[j] :
requestArnArr[j].toLowerCase();
const policyArnArr = policyArn.split(':');
// We want to allow an empty account ID for utapi service ARNs to not
// break compatibility.
if (j === 4 && policyArnArr[2] === 'utapi' && policyArnArr[4] === '') {
continue;
} else if (!segmentRegEx.test(requestSegment)) {
return false;
}
}
// If there were matches on all parts of the ARN, return true
return true;
}
module.exports = checkArnMatch;
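The `slice(5).join(':')` idiom used above (and in the evaluators) matters because the relative-id is the only ARN segment allowed to contain `":"`. Splitting on `":"` and rejoining everything from index 5 onward reassembles it losslessly:

```javascript
// ARN format: arn:partition:service:region:namespace:relative-id
const arn = 'arn:aws:s3:::my-bucket/backups:2016:06';
const parts = arn.split(':');
// parts[0..4] are the fixed segments; everything after is relative-id
const relativeId = parts.slice(5).join(':');
// relativeId is 'my-bucket/backups:2016:06', with the inner ":" preserved
```

A plain `parts[5]` would have truncated the key name at the first embedded colon.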

'use strict'; // eslint-disable-line strict
const checkIPinRangeOrMatch = require('../../ipCheck').checkIPinRangeOrMatch;
const handleWildcards = require('./wildcards.js').handleWildcards;
const checkArnMatch = require('./checkArnMatch.js');
const conditions = {};
/**
* findConditionKey finds the value of a condition key based on requestContext
* @param {string} key - condition key name
* @param {RequestContext} requestContext - info sent with request
* @return {string} condition key value
*/
conditions.findConditionKey = (key, requestContext) => {
// TODO: Consider combining with findVariable function if no benefit
// to keeping separate
const headers = requestContext.getHeaders();
const query = requestContext.getQuery();
const requesterInfo = requestContext.getRequesterInfo();
const map = new Map();
// Possible AWS Condition keys (http://docs.aws.amazon.com/IAM/latest/
// UserGuide/reference_policies_elements.html#AvailableKeys)
// aws:CurrentTime Used for date/time conditions
// (see Date Condition Operators).
map.set('aws:CurrentTime', new Date().toISOString());
// aws:EpochTime Used for date/time conditions
// (see Date Condition Operators).
map.set('aws:EpochTime', Date.now().toString());
// aws:TokenIssueTime Date/time that temporary security
// credentials were issued (see Date Condition Operators).
// Only present in requests that are signed using temporary security
// credentials.
map.set('aws:TokenIssueTime', requestContext.getTokenIssueTime());
// aws:MultiFactorAuthPresent Used to check whether MFA was used
// (see Boolean Condition Operators).
// Note: This key is only present if MFA was used. So, the following
// will not work:
// "Condition" :
// { "Bool" : { "aws:MultiFactorAuthPresent" : false } }
// Instead use:
// "Condition" :
// { "Null" : { "aws:MultiFactorAuthPresent" : true } }
map.set('aws:MultiFactorAuthPresent',
requestContext.getMultiFactorAuthPresent());
// aws:MultiFactorAuthAge Used to check how many seconds since
// MFA credentials were issued. If MFA was not used,
// this key is not present
map.set('aws:MultiFactorAuthAge', requestContext.getMultiFactorAuthAge());
// aws:principaltype states whether the principal is an account,
// user, federated, or assumed role
// Note: Docs for conditions have "PrincipalType" but simulator
// and docs for variables have lowercase
map.set('aws:principaltype', requesterInfo.principaltype);
// aws:Referer Used to check who referred the client browser to
// the address the request is being sent to. Only supported by some
// services, such as S3. Value comes from the referer header in the
// HTTPS request made to AWS.
map.set('aws:referer', headers.referer);
// aws:SecureTransport Used to check whether the request was sent
// using SSL (see Boolean Condition Operators).
map.set('aws:SecureTransport',
requestContext.getSslEnabled() ? 'true' : 'false');
// aws:SourceArn Used to check the source of the request,
// using the ARN of the source. N/A here.
map.set('aws:SourceArn', undefined);
// aws:SourceIp Used to check the requester's IP address
// (see IP Address Condition Operators)
map.set('aws:SourceIp', requestContext.getRequesterIp());
// aws:SourceVpc Used to restrict access to a specific
// AWS Virtual Private Cloud. N/A here.
map.set('aws:SourceVpc', undefined);
// aws:SourceVpce Used to limit access to a specific VPC endpoint
// N/A here
map.set('aws:SourceVpce', undefined);
// aws:UserAgent Used to check the requester's client app.
// (see String Condition Operators)
map.set('aws:UserAgent', headers['user-agent']);
// aws:userid Used to check the requester's unique user ID.
// (see String Condition Operators)
map.set('aws:userid', requesterInfo.userid);
// aws:username Used to check the requester's friendly user name.
// (see String Condition Operators)
map.set('aws:username', requesterInfo.username);
// Possible condition keys for S3:
// s3:x-amz-acl is acl request for bucket or object put request
map.set('s3:x-amz-acl', headers['x-amz-acl']);
// s3:x-amz-grant-PERMISSION (where permission can be:
// read, write, read-acp, write-acp or full-control)
// Value is the value of that header (ex. id of grantee)
map.set('s3:x-amz-grant-read', headers['x-amz-grant-read']);
map.set('s3:x-amz-grant-write', headers['x-amz-grant-write']);
map.set('s3:x-amz-grant-read-acp', headers['x-amz-grant-read-acp']);
map.set('s3:x-amz-grant-write-acp', headers['x-amz-grant-write-acp']);
map.set('s3:x-amz-grant-full-control', headers['x-amz-grant-full-control']);
// s3:x-amz-copy-source is x-amz-copy-source header if applicable on
// a put object
map.set('s3:x-amz-copy-source', headers['x-amz-copy-source']);
// s3:x-amz-metadata-directive is x-amz-metadata-directive header if
// applicable on a put object copy. Determines whether metadata will
// be copied from the original object or replaced. Values are "COPY" or
// "REPLACE". Default is "COPY"
map.set('s3:x-amz-metadata-directive', headers['x-amz-metadata-directive']);
// s3:x-amz-server-side-encryption -- Used to require that object put
// use server side encryption. Value is the encryption algo such as
// "AES256"
map.set('s3:x-amz-server-side-encryption',
headers['x-amz-server-side-encryption']);
// s3:x-amz-storage-class -- x-amz-storage-class header value
// (STANDARD, etc.)
map.set('s3:x-amz-storage-class', headers['x-amz-storage-class']);
// s3:VersionId -- version id of object
map.set('s3:VersionId', query.versionId);
// s3:LocationConstraint -- Used to restrict creation of bucket
// in certain region. Only applicable for CreateBucket
map.set('s3:LocationConstraint', requestContext.getLocationConstraint());
// s3:delimiter is delimiter for listing request
map.set('s3:delimiter', query.delimiter);
// s3:max-keys is max-keys for listing request
map.set('s3:max-keys', query['max-keys']);
// s3:prefix is prefix for listing request
map.set('s3:prefix', query.prefix);
// s3 auth v4 additional condition keys
// (See http://docs.aws.amazon.com/AmazonS3/latest/API/
// bucket-policy-s3-sigv4-conditions.html)
// s3:signatureversion -- Either "AWS" for v2 or
// "AWS4-HMAC-SHA256" for v4
map.set('s3:signatureversion', requestContext.getSignatureVersion());
// s3:authType -- Method of authentication: either "REST-HEADER",
// "REST-QUERY-STRING" or "POST"
map.set('s3:authType', requestContext.getAuthType());
// s3:signatureAge is the length of time, in milliseconds,
// that a signature is valid in an authenticated request. So,
// can use this to limit the age to less than 7 days
map.set('s3:signatureAge', requestContext.getSignatureAge());
// s3:x-amz-content-sha256 - Valid value is "UNSIGNED-PAYLOAD"
// so can use this in a deny policy to deny any requests that do not
// have a signed payload
map.set('s3:x-amz-content-sha256', headers['x-amz-content-sha256']);
// s3:ObjLocationConstraint is the location constraint set for an
// object on a PUT request using the "x-amz-meta-scal-location-constraint"
// header
map.set('s3:ObjLocationConstraint',
headers['x-amz-meta-scal-location-constraint']);
return map.get(key);
};
// Wildcards are allowed in certain string comparison and arn comparisons
// Permitted in StringLike, StringNotLike, ArnLike and ArnNotLike
// This restriction almost matches up with where variables can be used in
// conditions so converting ${*}, ${?} and ${$} as part of the wildcard
// transformation instead of the variable substitution works
// (except for the StringEquals, StringNotEquals, ArnEquals and
// ArnNotEquals conditions where wildcards are
// not allowed but variables are allowed). For those 4 operators, we switch
// out ${*}, ${?} and ${$} in the convertConditionOperator function.
function convertSpecialChars(string) {
function characterMap(char) {
const map = {
'${*}': '*',
'${?}': '?',
'${$}': '$',
};
return map[char];
}
return string.replace(/(\$\{\*\})|(\$\{\?\})|(\$\{\$\})/g,
characterMap);
}
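The effect of `convertSpecialChars` can be replicated directly with the same regex and character map, which makes the escaping convention easy to see: `${*}`, `${?}` and `${$}` are the policy-language escapes for literal `*`, `?` and `$`:

```javascript
// Replicates convertSpecialChars: map each escape sequence
// back to the literal character it stands for.
function unescapeSpecialChars(str) {
    const map = { '${*}': '*', '${?}': '?', '${$}': '$' };
    return str.replace(/(\$\{\*\})|(\$\{\?\})|(\$\{\$\})/g, m => map[m]);
}

unescapeSpecialChars('report${*}${?}${$}'); // 'report*?$'
```

In equality operators the result is compared literally, whereas in the Like operators the same sequences are handled during wildcard expansion instead.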
/**
* convertToEpochTime checks whether epoch or ISO time and converts to epoch
* if necessary
* @param {string | array} time - value or values to be converted
* @return {string | array} converted value or values
*/
function convertToEpochTime(time) {
function convertSingle(item) {
// If ISO time
if (item.indexOf(':') > -1) {
return new Date(item).getTime().toString();
}
return item;
}
if (!Array.isArray(time)) {
return convertSingle(time);
}
return time.map(single => convertSingle(single));
}
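The `":"` test in `convertToEpochTime` works because ISO 8601 timestamps always contain colons while epoch strings never do. A self-contained sketch of the single-value conversion:

```javascript
// Normalize a date condition value to an epoch-milliseconds string.
function toEpoch(item) {
    // ISO 8601 timestamps contain ':'; epoch strings do not
    if (item.indexOf(':') > -1) {
        return new Date(item).getTime().toString();
    }
    return item; // already epoch, pass through unchanged
}

toEpoch('2016-01-01T00:00:00Z'); // '1451606400000'
toEpoch('1451606400000');        // unchanged
```

Both forms then flow into the numeric comparison operators, which is why the Date operators can delegate to the Numeric ones.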
/**
* convertConditionOperator converts a string operator into a function
* each function takes a string key and array of values as arguments.
* Variables in the value are handled before calling this function but
* wildcards and switching ${$}, ${*} and ${?} are handled here because
* whether wildcards allowed depends on operator
* @param {string} operator - condition operator
* Possible Condition Operators:
* (http://docs.aws.amazon.com/IAM/latest/UserGuide/
* reference_policies_elements.html)
* @return {boolean} true if condition passes and false if not
*/
conditions.convertConditionOperator = operator => {
// Policy Validator checks that the condition operator
// is only one of these strings so should not have undefined
// or security issue with object assignment
const operatorMap = {
StringEquals: function stringEquals(key, value) {
return value.some(item => {
const switchedOutChars = convertSpecialChars(item);
return switchedOutChars === key;
});
},
StringNotEquals: function stringNotEquals(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.StringEquals(key, value);
},
StringEqualsIgnoreCase: function stringEqualsIgnoreCase(key, value) {
const lowerKey = key.toLowerCase();
return value.some(item => {
const switchedOutChars = convertSpecialChars(item);
return switchedOutChars.toLowerCase() === lowerKey;
});
},
StringNotEqualsIgnoreCase:
function stringNotEqualsIgnoreCase(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.StringEqualsIgnoreCase(key, value);
},
StringLike: function stringLike(key, value) {
return value.some(item => {
const wildItem = handleWildcards(item);
const wildRegEx = new RegExp(wildItem);
return wildRegEx.test(key);
});
},
StringNotLike: function stringNotLike(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.StringLike(key, value);
},
NumericEquals: function numericEquals(key, value) {
const numberKey = Number.parseInt(key, 10);
if (Number.isNaN(numberKey)) {
return false;
}
return value.some(item => {
const numberItem = Number.parseInt(item, 10);
if (Number.isNaN(numberItem)) {
return false;
}
return numberKey === numberItem;
});
},
NumericNotEquals: function numericNotEquals(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.NumericEquals(key, value);
},
NumericLessThan: function lessThan(key, value) {
const numberKey = Number.parseInt(key, 10);
if (Number.isNaN(numberKey)) {
return false;
}
return value.some(item => {
const numberItem = Number.parseInt(item, 10);
if (Number.isNaN(numberItem)) {
return false;
}
return numberKey < numberItem;
});
},
NumericLessThanEquals: function lessThanOrEquals(key, value) {
const numberKey = Number.parseInt(key, 10);
if (Number.isNaN(numberKey)) {
return false;
}
return value.some(item => {
const numberItem = Number.parseInt(item, 10);
if (Number.isNaN(numberItem)) {
return false;
}
return numberKey <= numberItem;
});
},
NumericGreaterThan: function greaterThan(key, value) {
const numberKey = Number.parseInt(key, 10);
if (Number.isNaN(numberKey)) {
return false;
}
return value.some(item => {
const numberItem = Number.parseInt(item, 10);
if (Number.isNaN(numberItem)) {
return false;
}
return numberKey > numberItem;
});
},
NumericGreaterThanEquals: function greaterThanOrEquals(key, value) {
const numberKey = Number.parseInt(key, 10);
if (Number.isNaN(numberKey)) {
return false;
}
return value.some(item => {
const numberItem = Number.parseInt(item, 10);
if (Number.isNaN(numberItem)) {
return false;
}
return numberKey >= numberItem;
});
},
DateEquals: function dateEquals(key, value) {
const epochKey = convertToEpochTime(key);
const epochValues = convertToEpochTime(value);
// eslint-disable-next-line new-cap
return operatorMap.NumericEquals(epochKey, epochValues);
},
DateNotEquals: function dateNotEquals(key, value) {
const epochKey = convertToEpochTime(key);
const epochValues = convertToEpochTime(value);
// eslint-disable-next-line new-cap
return operatorMap.NumericNotEquals(epochKey, epochValues);
},
DateLessThan: function dateLessThan(key, value) {
const epochKey = convertToEpochTime(key);
const epochValues = convertToEpochTime(value);
// eslint-disable-next-line new-cap
return operatorMap.NumericLessThan(epochKey, epochValues);
},
DateLessThanEquals: function dateLessThanEquals(key, value) {
const epochKey = convertToEpochTime(key);
const epochValues = convertToEpochTime(value);
// eslint-disable-next-line new-cap
return operatorMap.NumericLessThanEquals(epochKey, epochValues);
},
DateGreaterThan: function dateGreaterThan(key, value) {
const epochKey = convertToEpochTime(key);
const epochValues = convertToEpochTime(value);
// eslint-disable-next-line new-cap
return operatorMap.NumericGreaterThan(epochKey, epochValues);
},
DateGreaterThanEquals: function dateGreaterThanEquals(key, value) {
const epochKey = convertToEpochTime(key);
const epochValues = convertToEpochTime(value);
// eslint-disable-next-line new-cap
return operatorMap.NumericGreaterThanEquals(epochKey, epochValues);
},
Bool: function bool(key, value) {
// Tested with policy validator and it just appears to be a string
// comparison (can send in values other than true or false)
// eslint-disable-next-line new-cap
return operatorMap.StringEquals(key, value);
},
BinaryEquals: function binaryEquals(key, value) {
const base64Key = Buffer.from(key, 'utf8').toString('base64');
return value.some(item => item === base64Key);
},
BinaryNotEquals: function binaryNotEquals(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.BinaryEquals(key, value);
},
IpAddress: function ipAddress(key, value) {
return value.some(item => checkIPinRangeOrMatch(item, key));
},
NotIpAddress: function notIpAddress(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.IpAddress(key, value);
},
// Note that ARN operators are for comparing a source ARN
// against a given value (such as an EC2 instance) so N/A here.
ArnEquals: function ArnEquals(key, value) {
// eslint-disable-next-line new-cap
return operatorMap.StringEquals(key, value);
},
ArnNotEquals: function ArnNotEquals(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.StringEquals(key, value);
},
ArnLike: function ArnLike(key, value) {
// ARN format:
// arn:partition:service:region:namespace:relative-id
const requestArnArr = key.split(':');
// Pull just the relative id because there is no restriction that it
// does not contain ":"
const requestRelativeId = requestArnArr.slice(5).join(':');
return value.some(policyArn => checkArnMatch(policyArn,
requestRelativeId, requestArnArr, false));
},
ArnNotLike: function ArnNotLike(key, value) {
// eslint-disable-next-line new-cap
return !operatorMap.ArnLike(key, value);
},
Null: function nullOperator(key, value) {
// Null is used to check if a condition key is present.
// The policy statement value should be either true (the key does not
// exist, i.e. it is null) or false (the key exists and its value is
// not null).
if ((key === undefined || key === null)
&& value[0] === 'true' ||
(key !== undefined && key !== null)
&& value[0] === 'false') {
return true;
}
return false;
},
};
return operatorMap[operator];
};
module.exports = conditions;
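The `Null` operator's truth table is worth spelling out, since it is the recommended way to test for `aws:MultiFactorAuthPresent` (see the comment in `findConditionKey`). A standalone replication of the logic shown above:

```javascript
// Replicates the Null operator: value[0] === 'true' asserts the key
// is absent; value[0] === 'false' asserts it is present.
function nullCheck(key, value) {
    const keyMissing = key === undefined || key === null;
    return (keyMissing && value[0] === 'true') ||
        (!keyMissing && value[0] === 'false');
}

nullCheck(undefined, ['true']);  // true: key absent, policy expects absence
nullCheck('abc', ['false']);     // true: key present, policy expects presence
nullCheck(undefined, ['false']); // false: key absent but presence expected
```

So `{ "Null": { "aws:MultiFactorAuthPresent": true } }` matches exactly the requests made without MFA.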

'use strict'; // eslint-disable-line strict
// FUNCTIONS TO TRANSLATE VARIABLES
// Variables are ONLY used in Resource element and in Condition element
// For Resource Element: variable can appear as the LAST PART of the ARN.
// For Comparison Element: in any condition that involves
// the string operators (StringEquals, StringLike, StringNotLike, etc.)
// or the ARN operators (ArnEquals, ArnLike, etc.).
/**
* findVariable finds the value of a variable based on the requestContext
* @param {string} variable - variable name
* @param {RequestContext} requestContext - info sent with request
* @return {string} variable value
*/
function findVariable(variable, requestContext) {
// See http://docs.aws.amazon.com/IAM/latest/UserGuide/
// reference_policies_variables.html
const headers = requestContext.getHeaders();
const query = requestContext.getQuery();
const requesterInfo = requestContext.getRequesterInfo();
const map = new Map();
// aws:CurrentTime can be used for conditions
// that check the date and time.
map.set('aws:CurrentTime', new Date().toISOString());
// aws:EpochTime for use with date/time conditions
map.set('aws:EpochTime', Date.now());
// aws:TokenIssueTime is date and time that temp security credentials
// were issued. can be used with date/time conditions.
// this key is only available in requests that are signed using
// temporary security credentials.
map.set('aws:TokenIssueTime', requestContext.getTokenIssueTime());
// aws:principaltype states whether the principal is an account,
// user, federated, or assumed role
map.set('aws:principaltype', requesterInfo.principaltype);
// aws:SecureTransport is boolean value that represents whether the
// request was sent using SSL
map.set('aws:SecureTransport',
requestContext.getSslEnabled() ? 'true' : 'false');
// aws:SourceIp is requester's IP address, for use with IP address
// conditions
map.set('aws:SourceIp', requestContext.getRequesterIp());
// aws:UserAgent is information about the requester's client application
map.set('aws:UserAgent', headers['user-agent']);
// aws:userid is unique ID for the current user
map.set('aws:userid', requesterInfo.userid);
// aws:username is friendly name of the current user
map.set('aws:username', requesterInfo.username);
// ec2:SourceInstanceARN is the Amazon EC2 instance from which the
// request was made. Present only when the request comes from an Amazon
// EC2 instance using an IAM role associated with an EC2
// instance profile. N/A here.
map.set('ec2:SourceInstanceARN', undefined);
// s3 - specific:
// s3:prefix is prefix for listing request
map.set('s3:prefix', query.prefix);
// s3:max-keys is max-keys for listing request
map.set('s3:max-keys', query['max-keys']);
// s3:x-amz-acl is acl request for bucket or object put request
map.set('s3:x-amz-acl', query['x-amz-acl']);
return map.get(variable);
}
/**
* substituteVariables replaces variable values for variables in the form of
* ${variablename}
 * @param {string} string - string potentially containing a variable
* @param {RequestContext} requestContext - info sent with request
* @return {string} string with variable values substituted for variables
*/
function substituteVariables(string, requestContext) {
const arr = string.split('');
let startOfVariable = arr.indexOf('$');
while (startOfVariable > -1) {
if (arr[startOfVariable + 1] !== '{') {
startOfVariable = arr.indexOf('$', startOfVariable + 1);
continue;
}
const end = arr.indexOf('}', startOfVariable + 1);
// If there is no end to the variable, we're done looking for
// substitutions so return
if (end === -1) {
return arr.join('');
}
const variableContent = arr.slice(startOfVariable + 2, end).join('');
// If a variable is not one of the known variables or is
// undefined, leave the original string '${whatever}'.
// This also means that ${*}, ${?} and ${$} will remain as they are
// here and will be converted as part of the wildcard transformation
const value = findVariable(variableContent, requestContext);
// Length of item being replaced is the variable content plus ${}
let replacingLength = variableContent.length + 3;
if (value !== undefined) {
arr.splice(startOfVariable, replacingLength, value);
// If we do replace, we are replacing with one array index
// so use 1 for replacing length in substitutionEnd
replacingLength = 1;
}
const substitutionEnd = startOfVariable + replacingLength;
startOfVariable = arr.indexOf('$', substitutionEnd);
}
return arr.join('');
}
module.exports = substituteVariables;
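The splice-based substitution loop above can be exercised on its own; in this sketch a plain object stands in for the RequestContext-backed `findVariable` lookup (the map contents are illustrative):

```javascript
// Sketch of substituteVariables with a stubbed variable map instead of a
// full RequestContext; the loop body mirrors the module above.
function substituteVariables(string, variables) {
    const arr = string.split('');
    let startOfVariable = arr.indexOf('$');
    while (startOfVariable > -1) {
        if (arr[startOfVariable + 1] !== '{') {
            startOfVariable = arr.indexOf('$', startOfVariable + 1);
            continue;
        }
        const end = arr.indexOf('}', startOfVariable + 1);
        if (end === -1) {
            return arr.join('');
        }
        const variableContent = arr.slice(startOfVariable + 2, end).join('');
        const value = variables[variableContent];
        // Length of item being replaced is the variable content plus ${}
        let replacingLength = variableContent.length + 3;
        if (value !== undefined) {
            arr.splice(startOfVariable, replacingLength, value);
            replacingLength = 1;
        }
        startOfVariable = arr.indexOf('$', startOfVariable + replacingLength);
    }
    return arr.join('');
}

console.log(substituteVariables('home/${aws:username}/docs',
    { 'aws:username': 'alice' })); // home/alice/docs
console.log(substituteVariables('x/${nope}', {})); // x/${nope} (left as-is)
```

Unknown variables stay in their `${...}` form, exactly as the comment in the module describes, so `${*}` and friends survive for the later wildcard pass.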

@@ -0,0 +1,62 @@
'use strict'; // eslint-disable-line strict
const wildcards = {};
// * represents any combo of characters
// ? represents any single character
// TODO: Note that there are special rules for * in Principal.
// Handle when working with bucket policies.
/**
* Converts string into a string that has all regEx characters escaped except
* for those needed to check for AWS wildcards. Converted string can then
* be used for a regEx comparison.
* @param {string} string - any input string
* @return {string} converted string
*/
wildcards.handleWildcards = string => {
    // Replace all '*' with '.*?' (lazily allow any combo of characters)
    // and all '?' with '.{1}' (allow for any one character)
// If *, ? or $ are enclosed in ${}, keep literal *, ?, or $
function characterMap(char) {
const map = {
'\\*': '.*?',
'\\?': '.{1}',
'\\$\\{\\*\\}': '\\*',
'\\$\\{\\?\\}': '\\?',
'\\$\\{\\$\\}': '\\$',
};
return map[char];
}
// Escape all regExp special characters
let regExStr = string.replace(/[\\^$*+?.()|[\]{}]/g, '\\$&');
// Replace the AWS special characters with regExp equivalents
regExStr = regExStr.replace(
// eslint-disable-next-line max-len
/(\\\*)|(\\\?)|(\\\$\\\{\\\*\\\})|(\\\$\\\{\\\?\\\})|(\\\$\\\{\\\$\\\})/g,
characterMap);
return `^${regExStr}$`;
};
/**
* Converts each portion of an ARN into a converted regEx string
* to compare against each portion of the ARN from the request
* @param {string} arn - arn for requested resource
 * @return {string[]} array of strings to be used for regEx comparisons
*/
wildcards.handleWildcardInResource = arn => {
// Wildcards can be part of the resource ARN.
// Wildcards do NOT span segments of the ARN (separated by ":")
// Example: all elements in specific bucket:
// "Resource": "arn:aws:s3:::my_corporate_bucket/*"
// ARN format:
// arn:partition:service:region:namespace:relative-id
const arnArr = arn.split(':');
return arnArr.map(portion => wildcards.handleWildcards(portion));
};
module.exports = wildcards;
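A sketch exercising the conversion above (`handleWildcards` copied verbatim from the module): an AWS `*` becomes a lazy `.*?` inside a fully anchored pattern, and `${*}` survives as a literal star.

```javascript
// handleWildcards, inlined so the sketch is self-contained.
function handleWildcards(string) {
    function characterMap(char) {
        const map = {
            '\\*': '.*?',
            '\\?': '.{1}',
            '\\$\\{\\*\\}': '\\*',
            '\\$\\{\\?\\}': '\\?',
            '\\$\\{\\$\\}': '\\$',
        };
        return map[char];
    }
    // Escape all regExp special characters, then swap in the AWS wildcards
    let regExStr = string.replace(/[\\^$*+?.()|[\]{}]/g, '\\$&');
    regExStr = regExStr.replace(
        // eslint-disable-next-line max-len
        /(\\\*)|(\\\?)|(\\\$\\\{\\\*\\\})|(\\\$\\\{\\\?\\\})|(\\\$\\\{\\\$\\\})/g,
        characterMap);
    return `^${regExStr}$`;
}

const regEx = new RegExp(handleWildcards('my_corporate_bucket/*'));
console.log(regEx.test('my_corporate_bucket/report.pdf')); // true
console.log(regEx.test('another_bucket/report.pdf')); // false
```

Anchoring with `^` and `$` is what keeps the wildcard from spanning beyond the compared segment, matching AWS's policy semantics.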

@@ -0,0 +1,46 @@
const Transform = require('stream').Transform;
const crypto = require('crypto');
/**
 * This class is designed to compute an MD5 hash while simultaneously
 * sending the data through a stream
*/
class MD5Sum extends Transform {
/**
* @constructor
*/
constructor() {
super({});
this.hash = crypto.createHash('md5');
this.completedHash = undefined;
}
/**
* This function will update the current md5 hash with the next chunk
*
* @param {Buffer|string} chunk - Chunk to compute
* @param {string} encoding - Data encoding
* @param {function} callback - Callback(err, chunk, encoding)
* @return {undefined}
*/
_transform(chunk, encoding, callback) {
this.hash.update(chunk, encoding);
callback(null, chunk, encoding);
}
/**
* This function will end the hash computation
*
 * @param {function} callback - Callback(err)
* @return {undefined}
*/
_flush(callback) {
this.completedHash = this.hash.digest('hex');
this.emit('hashed');
callback(null);
}
}
module.exports = MD5Sum;

@@ -0,0 +1,19 @@
/**
* Project: node-xml https://github.com/dylang/node-xml
* License: MIT https://github.com/dylang/node-xml/blob/master/LICENSE
*/
const XML_CHARACTER_MAP = {
'&': '&amp;',
'"': '&quot;',
"'": '&apos;',
'<': '&lt;',
'>': '&gt;',
};
function escapeForXml(string) {
return string && string.replace
? string.replace(/([&"<>'])/g, (str, item) => XML_CHARACTER_MAP[item])
: string;
}
module.exports = escapeForXml;
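A usage sketch for `escapeForXml` above (map and function copied verbatim); note that non-string values fall through the `string && string.replace` guard untouched:

```javascript
// Escape map and function, inlined from the module above.
const XML_CHARACTER_MAP = {
    '&': '&amp;',
    '"': '&quot;',
    "'": '&apos;',
    '<': '&lt;',
    '>': '&gt;',
};
function escapeForXml(string) {
    return string && string.replace
        ? string.replace(/([&"<>'])/g, (str, item) => XML_CHARACTER_MAP[item])
        : string;
}

console.log(escapeForXml('5 < 6 & "seven"')); // 5 &lt; 6 &amp; &quot;seven&quot;
console.log(escapeForXml(42)); // 42 (non-strings pass through untouched)
```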

lib/s3middleware/tagging.js Normal file
@@ -0,0 +1,226 @@
const { parseString } = require('xml2js');
const errors = require('../errors');
const escapeForXml = require('./escapeForXml');
const tagRegex = new RegExp(/[^a-zA-Z0-9 +-=._:/]/g);
const errorInvalidArgument = errors.InvalidArgument
.customizeDescription('The header \'x-amz-tagging\' shall be ' +
'encoded as UTF-8 then URLEncoded URL query parameters without ' +
'tag name duplicates.');
const errorBadRequestLimit10 = errors.BadRequest
.customizeDescription('Object tags cannot be greater than 10');
/*
Format of xml request:
<Tagging>
<TagSet>
<Tag>
<Key>Tag Name</Key>
<Value>Tag Value</Value>
</Tag>
</TagSet>
</Tagging>
*/
const _validator = {
validateTagStructure: tag => tag
&& Object.keys(tag).length === 2
&& tag.Key && tag.Value
&& tag.Key.length === 1 && tag.Value.length === 1
&& tag.Key[0] !== undefined && tag.Value[0] !== undefined
&& typeof tag.Key[0] === 'string' && typeof tag.Value[0] === 'string',
validateXMLStructure: result =>
result && Object.keys(result).length === 1 &&
result.Tagging &&
result.Tagging.TagSet &&
result.Tagging.TagSet.length === 1 &&
(
result.Tagging.TagSet[0] === '' ||
result.Tagging.TagSet[0] &&
Object.keys(result.Tagging.TagSet[0]).length === 1 &&
result.Tagging.TagSet[0].Tag &&
Array.isArray(result.Tagging.TagSet[0].Tag)
),
validateKeyValue: (key, value) => {
if (key.length > 128 || key.match(tagRegex)) {
return errors.InvalidTag.customizeDescription('The TagKey you ' +
'have provided is invalid');
}
if (value.length > 256 || value.match(tagRegex)) {
return errors.InvalidTag.customizeDescription('The TagValue you ' +
'have provided is invalid');
}
return true;
},
};
/** _validateTags - Validate tags, returning an error if tags are invalid
* @param {object[]} tags - tags parsed from xml to be validated
* @param {string[]} tags[].Key - Name of the tag
* @param {string[]} tags[].Value - Value of the tag
* @return {(Error|object)} tagsResult - return object tags on success
* { key: value}; error on failure
*/
function _validateTags(tags) {
let result;
const tagsResult = {};
if (tags.length === 0) {
return tagsResult;
}
// Maximum number of tags per resource: 10
if (tags.length > 10) {
return errorBadRequestLimit10;
}
for (let i = 0; i < tags.length; i++) {
const tag = tags[i];
if (!_validator.validateTagStructure(tag)) {
return errors.MalformedXML;
}
const key = tag.Key[0];
const value = tag.Value[0];
if (!key) {
return errors.InvalidTag.customizeDescription('The TagKey you ' +
'have provided is invalid');
}
// Allowed characters are letters, whitespace, and numbers, plus
// the following special characters: + - = . _ : /
// Maximum key length: 128 Unicode characters
// Maximum value length: 256 Unicode characters
result = _validator.validateKeyValue(key, value);
if (result instanceof Error) {
return result;
}
tagsResult[key] = value;
}
// not repeating keys
if (tags.length > Object.keys(tagsResult).length) {
return errors.InvalidTag.customizeDescription('Cannot provide ' +
'multiple Tags with the same key');
}
return tagsResult;
}
/** parseTagXml - Parse and validate xml body, returning callback with object
* tags : { key: value}
* @param {string} xml - xml body to parse and validate
* @param {object} log - Werelogs logger
* @param {function} cb - callback to server
* @return {(Error|object)} - calls callback with tags object on success, error
* on failure
*/
function parseTagXml(xml, log, cb) {
parseString(xml, (err, result) => {
if (err) {
log.trace('xml parsing failed', {
error: err,
method: 'parseTagXml',
});
log.debug('invalid xml', { xml });
return cb(errors.MalformedXML);
}
if (!_validator.validateXMLStructure(result)) {
log.debug('xml validation failed', {
error: errors.MalformedXML,
method: '_validator.validateXMLStructure',
xml,
});
return cb(errors.MalformedXML);
}
// AWS does not return error if no tag
if (result.Tagging.TagSet[0] === '') {
return cb(null, []);
}
const validationRes = _validateTags(result.Tagging.TagSet[0].Tag);
if (validationRes instanceof Error) {
log.debug('tag validation failed', {
error: validationRes,
method: '_validateTags',
xml,
});
return cb(validationRes);
}
// if no error, validation returns tags object
return cb(null, validationRes);
});
}
function convertToXml(objectTags) {
const xml = [];
xml.push('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>',
'<Tagging> <TagSet>');
if (objectTags && Object.keys(objectTags).length > 0) {
Object.keys(objectTags).forEach(key => {
xml.push(`<Tag><Key>${escapeForXml(key)}</Key>` +
`<Value>${escapeForXml(objectTags[key])}</Value></Tag>`);
});
}
xml.push('</TagSet> </Tagging>');
return xml.join('');
}
/** parseTagFromQuery - Parse and validate x-amz-tagging header (URL query
* parameter encoded), returning callback with object tags : { key: value}
* @param {string} tagQuery - tag(s) URL query parameter encoded
* @return {(Error|object)} - calls callback with tags object on success, error
* on failure
*/
function parseTagFromQuery(tagQuery) {
const tagsResult = {};
const pairs = tagQuery.split('&');
let key;
let value;
let emptyTag = 0;
if (pairs.length === 0) {
return tagsResult;
}
for (let i = 0; i < pairs.length; i++) {
const pair = pairs[i];
if (!pair) {
            emptyTag++;
continue;
}
const pairArray = pair.split('=');
if (pairArray.length !== 2) {
return errorInvalidArgument;
}
try {
key = decodeURIComponent(pairArray[0]);
value = decodeURIComponent(pairArray[1]);
} catch (err) {
return errorInvalidArgument;
}
if (!key) {
return errorInvalidArgument;
}
const errorResult = _validator.validateKeyValue(key, value);
if (errorResult instanceof Error) {
return errorResult;
}
tagsResult[key] = value;
}
// return InvalidArgument error if using the same key multiple times
if (pairs.length - emptyTag > Object.keys(tagsResult).length) {
return errorInvalidArgument;
}
if (Object.keys(tagsResult).length > 10) {
return errorBadRequestLimit10;
}
return tagsResult;
}
module.exports = {
_validator,
parseTagXml,
convertToXml,
parseTagFromQuery,
};

@@ -0,0 +1,27 @@
const constants = require('../constants');
const errors = require('../errors');
const userMetadata = {};
/**
* Pull user provided meta headers from request headers
* @param {object} headers - headers attached to the http request (lowercased)
* @return {(object|Error)} all user meta headers or MetadataTooLarge
*/
userMetadata.getMetaHeaders = headers => {
const metaHeaders = Object.create(null);
let totalLength = 0;
const metaHeaderKeys = Object.keys(headers).filter(h =>
h.startsWith('x-amz-meta-'));
const validHeaders = metaHeaderKeys.every(k => {
totalLength += k.length;
totalLength += headers[k].length;
metaHeaders[k] = headers[k];
return (totalLength <= constants.maximumMetaHeadersSize);
});
if (validHeaders) {
return metaHeaders;
}
return errors.MetadataTooLarge;
};
module.exports = userMetadata;
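A sketch of `getMetaHeaders` above with a stubbed `constants` object; the size limit value is an assumption standing in for `constants.maximumMetaHeadersSize`, and a plain `Error` stands in for `errors.MetadataTooLarge`:

```javascript
// Stub for Arsenal's constants module (the limit value is illustrative).
const constants = { maximumMetaHeadersSize: 2136 };

const getMetaHeaders = headers => {
    const metaHeaders = Object.create(null);
    let totalLength = 0;
    // Only x-amz-meta-* headers count as user metadata
    const metaHeaderKeys = Object.keys(headers).filter(h =>
        h.startsWith('x-amz-meta-'));
    const validHeaders = metaHeaderKeys.every(k => {
        totalLength += k.length;
        totalLength += headers[k].length;
        metaHeaders[k] = headers[k];
        return (totalLength <= constants.maximumMetaHeadersSize);
    });
    // Stand-in for errors.MetadataTooLarge
    return validHeaders ? metaHeaders : new Error('MetadataTooLarge');
};

const picked = getMetaHeaders({
    'content-type': 'text/plain',
    'x-amz-meta-color': 'blue',
});
console.log(picked['x-amz-meta-color']); // blue
```

Using `every` lets the scan bail out as soon as the running length exceeds the limit, instead of copying every header first.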

@@ -0,0 +1,124 @@
const errors = require('../errors');
function _matchesETag(item, contentMD5) {
return (item === contentMD5 || item === '*' || item === `"${contentMD5}"`);
}
function _checkEtagMatch(ifETagMatch, contentMD5) {
const res = { present: false, error: null };
if (ifETagMatch) {
res.present = true;
if (ifETagMatch.includes(',')) {
const items = ifETagMatch.split(',');
const anyMatch = items.some(item =>
_matchesETag(item, contentMD5));
if (!anyMatch) {
res.error = errors.PreconditionFailed;
}
} else if (!_matchesETag(ifETagMatch, contentMD5)) {
res.error = errors.PreconditionFailed;
}
}
return res;
}
function _checkEtagNoneMatch(ifETagNoneMatch, contentMD5) {
const res = { present: false, error: null };
if (ifETagNoneMatch) {
res.present = true;
if (ifETagNoneMatch.includes(',')) {
const items = ifETagNoneMatch.split(',');
const anyMatch = items.some(item =>
_matchesETag(item, contentMD5));
if (anyMatch) {
res.error = errors.NotModified;
}
} else if (_matchesETag(ifETagNoneMatch, contentMD5)) {
res.error = errors.NotModified;
}
}
return res;
}
function _checkModifiedSince(ifModifiedSinceTime, lastModified) {
const res = { present: false, error: null };
if (ifModifiedSinceTime) {
res.present = true;
const checkWith = (new Date(ifModifiedSinceTime)).getTime();
if (isNaN(checkWith)) {
res.error = errors.InvalidArgument;
} else if (lastModified <= checkWith) {
res.error = errors.NotModified;
}
}
return res;
}
function _checkUnmodifiedSince(ifUnmodifiedSinceTime, lastModified) {
const res = { present: false, error: null };
if (ifUnmodifiedSinceTime) {
res.present = true;
const checkWith = (new Date(ifUnmodifiedSinceTime)).getTime();
if (isNaN(checkWith)) {
res.error = errors.InvalidArgument;
} else if (lastModified > checkWith) {
res.error = errors.PreconditionFailed;
}
}
return res;
}
/**
* validateConditionalHeaders - validates 'if-modified-since',
* 'if-unmodified-since', 'if-match' or 'if-none-match' headers if included in
* request against last-modified date of object and/or ETag.
* @param {object} headers - headers from request object
* @param {string} lastModified - last modified date of object
* @param {object} contentMD5 - content MD5 of object
* @return {object} object with error as key and arsenal error as value or
* empty object if no error
*/
function validateConditionalHeaders(headers, lastModified, contentMD5) {
let lastModifiedDate = new Date(lastModified);
lastModifiedDate.setMilliseconds(0);
lastModifiedDate = lastModifiedDate.getTime();
const ifMatchHeader = headers['if-match'] ||
headers['x-amz-copy-source-if-match'];
const ifNoneMatchHeader = headers['if-none-match'] ||
headers['x-amz-copy-source-if-none-match'];
const ifModifiedSinceHeader = headers['if-modified-since'] ||
headers['x-amz-copy-source-if-modified-since'];
const ifUnmodifiedSinceHeader = headers['if-unmodified-since'] ||
headers['x-amz-copy-source-if-unmodified-since'];
const etagMatchRes = _checkEtagMatch(ifMatchHeader, contentMD5);
const etagNoneMatchRes = _checkEtagNoneMatch(ifNoneMatchHeader, contentMD5);
const modifiedSinceRes = _checkModifiedSince(ifModifiedSinceHeader,
lastModifiedDate);
const unmodifiedSinceRes = _checkUnmodifiedSince(ifUnmodifiedSinceHeader,
lastModifiedDate);
    // If the If-Unmodified-Since condition evaluates to false and If-Match
    // is not present, return that error. Otherwise, If-Unmodified-Since is
    // silent when If-Match matches, and when If-Match does not match it
    // yields the same error, so all cases are covered.
if (!etagMatchRes.present && unmodifiedSinceRes.error) {
return unmodifiedSinceRes;
}
if (etagMatchRes.present && etagMatchRes.error) {
return etagMatchRes;
}
if (etagNoneMatchRes.present && etagNoneMatchRes.error) {
return etagNoneMatchRes;
}
if (modifiedSinceRes.present && modifiedSinceRes.error) {
return modifiedSinceRes;
}
return {};
}
module.exports = {
_checkEtagMatch,
_checkEtagNoneMatch,
_checkModifiedSince,
_checkUnmodifiedSince,
validateConditionalHeaders,
};

lib/s3routes/routes.js Normal file
@@ -0,0 +1,206 @@
const assert = require('assert');
const errors = require('../errors');
const routeGET = require('./routes/routeGET');
const routePUT = require('./routes/routePUT');
const routeDELETE = require('./routes/routeDELETE');
const routeHEAD = require('./routes/routeHEAD');
const routePOST = require('./routes/routePOST');
const routeOPTIONS = require('./routes/routeOPTIONS');
const routesUtils = require('./routesUtils');
const routeWebsite = require('./routes/routeWebsite');
const routeMap = {
GET: routeGET,
PUT: routePUT,
POST: routePOST,
DELETE: routeDELETE,
HEAD: routeHEAD,
OPTIONS: routeOPTIONS,
};
function checkUnsupportedRoutes(reqMethod) {
const method = routeMap[reqMethod];
if (!method) {
return { error: errors.MethodNotAllowed };
}
return { method };
}
function checkBucketAndKey(bucketName, objectKey, method, reqQuery,
blacklistedPrefixes, log) {
    // if the bucket name is empty and the request is not a GET service
    // (list buckets)
if (!bucketName && !(method === 'GET' && !objectKey)) {
log.debug('empty bucket name', { method: 'routes' });
return (method !== 'OPTIONS') ?
errors.MethodNotAllowed : errors.AccessForbidden
.customizeDescription('CORSResponse: Bucket not found');
}
if (bucketName !== undefined && routesUtils.isValidBucketName(bucketName,
blacklistedPrefixes.bucket) === false) {
log.debug('invalid bucket name', { bucketName });
return errors.InvalidBucketName;
}
if ((reqQuery.partNumber || reqQuery.uploadId)
&& objectKey === undefined) {
return errors.InvalidRequest
.customizeDescription('A key must be specified');
}
return undefined;
}
function checkTypes(req, res, params, logger) {
assert.strictEqual(typeof req, 'object',
'bad routes param: req must be an object');
assert.strictEqual(typeof res, 'object',
'bad routes param: res must be an object');
assert.strictEqual(typeof logger, 'object',
'bad routes param: logger must be an object');
assert.strictEqual(typeof params.api, 'object',
'bad routes param: api must be an object');
assert.strictEqual(typeof params.api.callApiMethod, 'function',
'bad routes param: api.callApiMethod must be a defined function');
assert.strictEqual(typeof params.internalHandlers, 'object',
'bad routes param: internalHandlers must be an object');
if (params.statsClient) {
assert.strictEqual(typeof params.statsClient, 'object',
'bad routes param: statsClient must be an object');
}
assert(Array.isArray(params.allEndpoints),
'bad routes param: allEndpoints must be an array');
assert(params.allEndpoints.length > 0,
'bad routes param: allEndpoints must have at least one endpoint');
params.allEndpoints.forEach(endpoint => {
assert.strictEqual(typeof endpoint, 'string',
'bad routes param: each item in allEndpoints must be a string');
});
    assert(Array.isArray(params.websiteEndpoints),
        'bad routes param: websiteEndpoints must be an array');
params.websiteEndpoints.forEach(endpoint => {
assert.strictEqual(typeof endpoint, 'string',
'bad routes param: each item in websiteEndpoints must be a string');
});
assert.strictEqual(typeof params.blacklistedPrefixes, 'object',
'bad routes param: blacklistedPrefixes must be an object');
assert(Array.isArray(params.blacklistedPrefixes.bucket),
'bad routes param: blacklistedPrefixes.bucket must be an array');
params.blacklistedPrefixes.bucket.forEach(pre => {
assert.strictEqual(typeof pre, 'string',
'bad routes param: each blacklisted bucket prefix must be a string');
});
assert(Array.isArray(params.blacklistedPrefixes.object),
'bad routes param: blacklistedPrefixes.object must be an array');
params.blacklistedPrefixes.object.forEach(pre => {
assert.strictEqual(typeof pre, 'string',
'bad routes param: each blacklisted object prefix must be a string');
});
assert.strictEqual(typeof params.dataRetrievalFn, 'function',
'bad routes param: dataRetrievalFn must be a defined function');
}
/** routes - route request to appropriate method
* @param {Http.Request} req - http request object
* @param {Http.ServerResponse} res - http response sent to the client
* @param {object} params - additional routing parameters
* @param {object} params.api - all api methods and method to call an api method
* i.e. api.callApiMethod(methodName, request, response, log, callback)
 * @param {object} params.internalHandlers - internal handlers API object
* for queries beginning with '/_/'
* @param {StatsClient} [params.statsClient] - client to report stats to Redis
* @param {string[]} params.allEndpoints - all accepted REST endpoints
* @param {string[]} params.websiteEndpoints - all accepted website endpoints
* @param {object} params.blacklistedPrefixes - blacklisted prefixes
* @param {string[]} params.blacklistedPrefixes.bucket - bucket prefixes
* @param {string[]} params.blacklistedPrefixes.object - object prefixes
* @param {object} params.unsupportedQueries - object containing true/false
* values for whether queries are supported
* @param {function} params.dataRetrievalFn - function to retrieve data
* @param {RequestLogger} logger - werelogs logger instance
* @returns {undefined}
*/
function routes(req, res, params, logger) {
checkTypes(req, res, params, logger);
const {
api,
internalHandlers,
statsClient,
allEndpoints,
websiteEndpoints,
blacklistedPrefixes,
dataRetrievalFn,
} = params;
const clientInfo = {
clientIP: req.socket.remoteAddress,
clientPort: req.socket.remotePort,
httpMethod: req.method,
httpURL: req.url,
endpoint: req.endpoint,
};
const log = logger.newRequestLogger();
log.info('received request', clientInfo);
log.end().addDefaultFields(clientInfo);
if (req.url.startsWith('/_/')) {
let internalServiceName = req.url.slice(3);
const serviceDelim = internalServiceName.indexOf('/');
if (serviceDelim !== -1) {
internalServiceName = internalServiceName.slice(0, serviceDelim);
}
if (internalHandlers[internalServiceName] === undefined) {
return routesUtils.responseXMLBody(
errors.InvalidURI, undefined, res, log);
}
return internalHandlers[internalServiceName](
clientInfo.clientIP, req, res, log, statsClient);
}
if (statsClient) {
// report new request for stats
statsClient.reportNewRequest();
}
try {
const validHosts = allEndpoints.concat(websiteEndpoints);
routesUtils.normalizeRequest(req, validHosts);
} catch (err) {
log.trace('could not normalize request', { error: err.stack });
return routesUtils.responseXMLBody(
errors.InvalidURI, undefined, res, log);
}
log.addDefaultFields({
bucketName: req.bucketName,
objectKey: req.objectKey,
bytesReceived: req.parsedContentLength || 0,
bodyLength: parseInt(req.headers['content-length'], 10) || 0,
});
    const { error, method } = checkUnsupportedRoutes(req.method);
if (error) {
log.trace('error validating route or uri params', { error });
return routesUtils.responseXMLBody(error, null, res, log);
}
const bucketOrKeyError = checkBucketAndKey(req.bucketName, req.objectKey,
req.method, req.query, blacklistedPrefixes, log);
if (bucketOrKeyError) {
log.trace('error with bucket or key value',
{ error: bucketOrKeyError });
return routesUtils.responseXMLBody(bucketOrKeyError, null, res, log);
}
// bucket website request
if (websiteEndpoints && websiteEndpoints.indexOf(req.parsedHost) > -1) {
return routeWebsite(req, res, api, log, statsClient, dataRetrievalFn);
}
return method(req, res, api, log, statsClient, dataRetrievalFn);
}
module.exports = routes;
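The `'/_/'` branch in `routes()` above strips the prefix and takes the first path segment as the internal handler name; a tiny sketch of just that parsing (the function name here is illustrative):

```javascript
// Mirrors the internal-route parsing in routes() above.
function internalServiceNameOf(url) {
    if (!url.startsWith('/_/')) {
        return null; // not an internal route
    }
    let name = url.slice(3);
    const serviceDelim = name.indexOf('/');
    if (serviceDelim !== -1) {
        name = name.slice(0, serviceDelim);
    }
    return name;
}

console.log(internalServiceNameOf('/_/healthcheck/deep')); // healthcheck
console.log(internalServiceNameOf('/bucket/key')); // null
```

Anything after the first segment is left for the handler itself to interpret, which is why only the leading segment is used for the `internalHandlers` lookup.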

@@ -0,0 +1,78 @@
const routesUtils = require('../routesUtils');
const errors = require('../../errors');
function routeDELETE(request, response, api, log, statsClient) {
log.debug('routing request', { method: 'routeDELETE' });
if (request.query.uploadId) {
if (request.objectKey === undefined) {
return routesUtils.responseNoBody(
errors.InvalidRequest.customizeDescription('A key must be ' +
'specified'), null, response, 200, log);
}
api.callApiMethod('multipartDelete', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders, response,
204, log);
});
} else {
if (request.objectKey === undefined) {
if (request.query.website !== undefined) {
return api.callApiMethod('bucketDeleteWebsite', request,
response, log, (err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 204, log);
});
} else if (request.query.cors !== undefined) {
return api.callApiMethod('bucketDeleteCors', request, response,
log, (err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 204, log);
});
} else if (request.query.replication !== undefined) {
return api.callApiMethod('bucketDeleteReplication', request,
response, log, (err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 204, log);
});
}
api.callApiMethod('bucketDelete', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders, response,
204, log);
});
} else {
if (request.query.tagging !== undefined) {
return api.callApiMethod('objectDeleteTagging', request,
response, log, (err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resHeaders,
response, 204, log);
});
}
api.callApiMethod('objectDelete', request, response, log,
(err, corsHeaders) => {
/*
* Since AWS expects a 204 regardless of the existence of
                 * the object, the errors NoSuchKey and NoSuchVersion should not
* be sent back as a response.
*/
if (err && !err.NoSuchKey && !err.NoSuchVersion) {
return routesUtils.responseNoBody(err, corsHeaders,
response, null, log);
}
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(null, corsHeaders, response,
204, log);
});
}
}
return undefined;
}
module.exports = routeDELETE;

@@ -0,0 +1,119 @@
const errors = require('../../errors');
const routesUtils = require('../routesUtils');
function routerGET(request, response, api, log, statsClient, dataRetrievalFn) {
log.debug('routing request', { method: 'routerGET' });
if (request.bucketName === undefined && request.objectKey !== undefined) {
routesUtils.responseXMLBody(errors.NoSuchBucket, null, response, log);
} else if (request.bucketName === undefined
&& request.objectKey === undefined) {
// GET service
api.callApiMethod('serviceGet', request, response, log, (err, xml) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log);
});
} else if (request.objectKey === undefined) {
// GET bucket ACL
if (request.query.acl !== undefined) {
api.callApiMethod('bucketGetACL', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.replication !== undefined) {
api.callApiMethod('bucketGetReplication', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.cors !== undefined) {
api.callApiMethod('bucketGetCors', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.versioning !== undefined) {
api.callApiMethod('bucketGetVersioning', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.website !== undefined) {
api.callApiMethod('bucketGetWebsite', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.uploads !== undefined) {
// List MultipartUploads
api.callApiMethod('listMultipartUploads', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.location !== undefined) {
api.callApiMethod('bucketGetLocation', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else {
// GET bucket
api.callApiMethod('bucketGet', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
}
} else {
if (request.query.acl !== undefined) {
// GET object ACL
api.callApiMethod('objectGetACL', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else if (request.query.tagging !== undefined) {
// GET object Tagging
api.callApiMethod('objectGetTagging', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
// List parts of an open multipart upload
} else if (request.query.uploadId !== undefined) {
api.callApiMethod('listParts', request, response, log,
(err, xml, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders);
});
} else {
// GET object
api.callApiMethod('objectGet', request, response, log,
(err, dataGetInfo, resMetaHeaders, range) => {
let contentLength = 0;
if (resMetaHeaders && resMetaHeaders['Content-Length']) {
contentLength = resMetaHeaders['Content-Length'];
}
log.end().addDefaultFields({ contentLength });
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseStreamData(err, request.query,
resMetaHeaders, dataGetInfo, dataRetrievalFn, response,
range, log);
});
}
}
}
module.exports = routerGET;

@@ -0,0 +1,29 @@
const errors = require('../../errors');
const routesUtils = require('../routesUtils');
function routeHEAD(request, response, api, log, statsClient) {
log.debug('routing request', { method: 'routeHEAD' });
if (request.bucketName === undefined) {
log.trace('head request without bucketName');
routesUtils.responseXMLBody(errors.MethodNotAllowed,
null, response, log);
} else if (request.objectKey === undefined) {
// HEAD bucket
api.callApiMethod('bucketHead', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders, response,
200, log);
});
} else {
// HEAD object
api.callApiMethod('objectHead', request, response, log,
(err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseContentHeaders(err, {}, resHeaders,
response, log);
});
}
}
module.exports = routeHEAD;

@@ -0,0 +1,31 @@
const errors = require('../../errors');
const routesUtils = require('../routesUtils');
function routeOPTIONS(request, response, api, log, statsClient) {
    log.debug('routing request', { method: 'routeOPTIONS' });
const corsMethod = request.headers['access-control-request-method'] || null;
if (!request.headers.origin) {
const msg = 'Insufficient information. Origin request header needed.';
const err = errors.BadRequest.customizeDescription(msg);
log.debug('missing origin', { method: 'routeOPTIONS', error: err });
return routesUtils.responseXMLBody(err, undefined, response, log);
}
if (['GET', 'PUT', 'HEAD', 'POST', 'DELETE'].indexOf(corsMethod) < 0) {
const msg = `Invalid Access-Control-Request-Method: ${corsMethod}`;
const err = errors.BadRequest.customizeDescription(msg);
log.debug('invalid Access-Control-Request-Method',
{ method: 'routeOPTIONS', error: err });
return routesUtils.responseXMLBody(err, undefined, response, log);
}
return api.callApiMethod('corsPreflight', request, response, log,
(err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resHeaders, response, 200,
log);
});
}
module.exports = routeOPTIONS;

const errors = require('../../errors');
const routesUtils = require('../routesUtils');
/* eslint-disable no-param-reassign */
function routePOST(request, response, api, log) {
log.debug('routing request', { method: 'routePOST' });
const invalidMultiObjectDelReq = request.query.delete !== undefined
&& request.bucketName === undefined;
if (invalidMultiObjectDelReq) {
return routesUtils.responseNoBody(errors.MethodNotAllowed, null,
response, null, log);
}
request.post = '';
const invalidInitiateMpuReq = request.query.uploads !== undefined
&& request.objectKey === undefined;
const invalidCompleteMpuReq = request.query.uploadId !== undefined
&& request.objectKey === undefined;
if (invalidInitiateMpuReq || invalidCompleteMpuReq) {
return routesUtils.responseNoBody(errors.InvalidURI, null,
response, null, log);
}
// POST initiate multipart upload
if (request.query.uploads !== undefined) {
return api.callApiMethod('initiateMultipartUpload', request,
response, log, (err, result, corsHeaders) =>
routesUtils.responseXMLBody(err, result, response, log,
corsHeaders));
}
// POST complete multipart upload
if (request.query.uploadId !== undefined) {
return api.callApiMethod('completeMultipartUpload', request,
response, log, (err, result, resHeaders) =>
routesUtils.responseXMLBody(err, result, response, log,
resHeaders));
}
// POST multiObjectDelete
if (request.query.delete !== undefined) {
return api.callApiMethod('multiObjectDelete', request, response,
log, (err, xml, corsHeaders) =>
routesUtils.responseXMLBody(err, xml, response, log,
corsHeaders));
}
return routesUtils.responseNoBody(errors.NotImplemented, null, response,
200, log);
}
/* eslint-enable no-param-reassign */
module.exports = routePOST;

const errors = require('../../errors');
const routesUtils = require('../routesUtils');
/* eslint-disable no-param-reassign */
function routePUT(request, response, api, log, statsClient) {
log.debug('routing request', { method: 'routePUT' });
if (request.objectKey === undefined) {
// PUT bucket - PUT bucket ACL
// content-length for object is handled separately below
const contentLength = request.headers['content-length'];
if ((contentLength && (isNaN(contentLength) || contentLength < 0)) ||
contentLength === '') {
log.debug('invalid content-length header');
return routesUtils.responseNoBody(
errors.BadRequest, null, response, null, log);
}
// PUT bucket ACL
if (request.query.acl !== undefined) {
api.callApiMethod('bucketPutACL', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 200, log);
});
} else if (request.query.versioning !== undefined) {
api.callApiMethod('bucketPutVersioning', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseNoBody(err, corsHeaders, response, 200,
log);
});
} else if (request.query.website !== undefined) {
api.callApiMethod('bucketPutWebsite', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 200, log);
});
} else if (request.query.cors !== undefined) {
api.callApiMethod('bucketPutCors', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, corsHeaders,
response, 200, log);
});
} else if (request.query.replication !== undefined) {
api.callApiMethod('bucketPutReplication', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseNoBody(err, corsHeaders, response, 200,
log);
});
} else {
// PUT bucket
return api.callApiMethod('bucketPut', request, response, log,
(err, corsHeaders) => {
routesUtils.statsReport500(err, statsClient);
const location = { Location: `/${request.bucketName}` };
const resHeaders = corsHeaders ?
Object.assign({}, location, corsHeaders) : location;
return routesUtils.responseNoBody(err, resHeaders,
response, 200, log);
});
}
} else {
// PUT object, PUT object ACL, PUT object multipart or
// PUT object copy
// if content-md5 is not present in the headers, try to
// parse content-md5 from meta headers
if (request.headers['content-md5'] === '') {
log.debug('empty content-md5 header', {
method: 'routePUT',
});
return routesUtils
.responseNoBody(errors.InvalidDigest, null, response, 200, log);
}
if (request.headers['content-md5']) {
request.contentMD5 = request.headers['content-md5'];
} else {
request.contentMD5 = routesUtils.parseContentMD5(request.headers);
}
if (request.contentMD5 && request.contentMD5.length !== 32) {
request.contentMD5 = Buffer.from(request.contentMD5, 'base64')
.toString('hex');
if (request.contentMD5 && request.contentMD5.length !== 32) {
log.debug('invalid md5 digest', {
contentMD5: request.contentMD5,
});
return routesUtils
.responseNoBody(errors.InvalidDigest, null, response, 200,
log);
}
}
if (request.query.partNumber) {
if (request.headers['x-amz-copy-source']) {
api.callApiMethod('objectPutCopyPart', request, response, log,
(err, xml, additionalHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseXMLBody(err, xml, response, log,
additionalHeaders);
});
} else {
api.callApiMethod('objectPutPart', request, response, log,
(err, calculatedHash, corsHeaders) => {
if (err) {
return routesUtils.responseNoBody(err, corsHeaders,
response, 200, log);
}
// ETag's hex should always be enclosed in quotes
const resMetaHeaders = corsHeaders || {};
resMetaHeaders.ETag = `"${calculatedHash}"`;
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resMetaHeaders,
response, 200, log);
});
}
} else if (request.query.acl !== undefined) {
api.callApiMethod('objectPutACL', request, response, log,
(err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resHeaders,
response, 200, log);
});
} else if (request.query.tagging !== undefined) {
api.callApiMethod('objectPutTagging', request, response, log,
(err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resHeaders,
response, 200, log);
});
} else if (request.headers['x-amz-copy-source']) {
return api.callApiMethod('objectCopy', request, response, log,
(err, xml, additionalHeaders) => {
routesUtils.statsReport500(err, statsClient);
routesUtils.responseXMLBody(err, xml, response, log,
additionalHeaders);
});
} else {
if (request.headers['content-length'] === undefined &&
request.headers['x-amz-decoded-content-length'] === undefined) {
return routesUtils.responseNoBody(errors.MissingContentLength,
null, response, 411, log);
}
if (Number.isNaN(request.parsedContentLength) ||
request.parsedContentLength < 0) {
return routesUtils.responseNoBody(errors.BadRequest,
null, response, 400, log);
}
log.end().addDefaultFields({
contentLength: request.parsedContentLength,
});
api.callApiMethod('objectPut', request, response, log,
(err, resHeaders) => {
routesUtils.statsReport500(err, statsClient);
return routesUtils.responseNoBody(err, resHeaders,
response, 200, log);
});
}
}
return undefined;
}
/* eslint-enable no-param-reassign */
module.exports = routePUT;

const errors = require('../../errors');
const routesUtils = require('../routesUtils');
function routerWebsite(request, response, api, log, statsClient,
dataRetrievalFn) {
log.debug('routing request', { method: 'routerWebsite' });
// website endpoint only supports GET and HEAD and must have a bucket
// http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
if ((request.method !== 'GET' && request.method !== 'HEAD')
|| !request.bucketName) {
return routesUtils.errorHtmlResponse(errors.MethodNotAllowed,
false, request.bucketName, response, null, log);
}
if (request.method === 'GET') {
return api.callApiMethod('websiteGet', request, response, log,
(err, userErrorPageFailure, dataGetInfo, resMetaHeaders,
redirectInfo, key) => {
routesUtils.statsReport500(err, statsClient);
// request being redirected
if (redirectInfo) {
// note that key might have been modified in websiteGet
// api to add index document
return routesUtils.redirectRequest(redirectInfo,
key, request.connection.encrypted,
response, request.headers.host, resMetaHeaders, log);
}
// user has their own error page
if (err && dataGetInfo) {
return routesUtils.streamUserErrorPage(err, dataGetInfo,
dataRetrievalFn, response, resMetaHeaders, log);
}
// send default error html response
if (err) {
return routesUtils.errorHtmlResponse(err,
userErrorPageFailure, request.bucketName,
response, resMetaHeaders, log);
}
// no error, stream data
return routesUtils.responseStreamData(null, request.query,
resMetaHeaders, dataGetInfo, dataRetrievalFn, response,
null, log);
});
}
if (request.method === 'HEAD') {
return api.callApiMethod('websiteHead', request, response, log,
(err, resMetaHeaders, redirectInfo, key) => {
routesUtils.statsReport500(err, statsClient);
if (redirectInfo) {
return routesUtils.redirectRequest(redirectInfo,
key, request.connection.encrypted,
response, request.headers.host, resMetaHeaders, log);
}
// could redirect on err so check for redirectInfo first
if (err) {
return routesUtils.errorHeaderResponse(err, response,
resMetaHeaders, log);
}
return routesUtils.responseContentHeaders(err, {}, resMetaHeaders,
response, log);
});
}
return undefined;
}
module.exports = routerWebsite;

lib/s3routes/routesUtils.js Normal file
const url = require('url');
const ipCheck = require('../ipCheck');
/**
* setCommonResponseHeaders - Set HTTP response headers
* @param {object} headers - key and value of new headers to add
* @param {object} response - http response object
* @param {object} log - Werelogs logger
* @return {object} response - response object with additional headers
*/
function setCommonResponseHeaders(headers, response, log) {
if (headers && typeof headers === 'object') {
log.trace('setting response headers', { headers });
Object.keys(headers).forEach(key => {
if (headers[key] !== undefined) {
try {
response.setHeader(key, headers[key]);
} catch (e) {
log.debug('header cannot be added ' +
'to the response', { header: headers[key],
error: e.stack, method: 'setCommonResponseHeaders' });
}
}
});
}
response.setHeader('server', 'S3 Server');
// to be expanded in further implementation of logging of requests
response.setHeader('x-amz-id-2', log.getSerializedUids());
response.setHeader('x-amz-request-id', log.getSerializedUids());
return response;
}
/**
* okHeaderResponse - Response with only headers, no body
* @param {object} headers - key and value of new headers to add
* @param {object} response - http response object
* @param {number} httpCode -- http response code
* @param {object} log - Werelogs logger
* @return {object} response - response object with additional headers
*/
function okHeaderResponse(headers, response, httpCode, log) {
log.trace('sending success header response');
setCommonResponseHeaders(headers, response, log);
log.debug('response http code', { httpCode });
response.writeHead(httpCode);
return response.end(() => {
log.end().info('responded to request', {
httpCode: response.statusCode,
});
});
}
const XMLResponseBackend = {
/**
* okXMLResponse - Response with XML body
* @param {string} xml - XML body as string
* @param {object} response - http response object
* @param {object} log - Werelogs logger
* @param {object} additionalHeaders -- additional headers to add
* to response
* @return {object} response - response object with additional headers
*/
okResponse: function okXMLResponse(xml, response, log,
additionalHeaders) {
const bytesSent = Buffer.byteLength(xml);
log.trace('sending success xml response');
log.addDefaultFields({
bytesSent,
});
setCommonResponseHeaders(additionalHeaders, response, log);
response.writeHead(200, { 'Content-type': 'application/xml' });
log.debug('response http code', { httpCode: 200 });
log.trace('xml response', { xml });
return response.end(xml, 'utf8', () => {
log.end().info('responded with XML', {
httpCode: response.statusCode,
});
});
},
errorResponse: function errorXMLResponse(errCode, response, log,
corsHeaders) {
log.trace('sending error xml response', { errCode });
/*
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchKey</Code>
<Message>The resource you requested does not exist</Message>
<Resource>/mybucket/myfoto.jpg</Resource>
<RequestId>4442587FB7D0A2F9</RequestId>
</Error>
*/
const xml = [];
xml.push(
'<?xml version="1.0" encoding="UTF-8"?>',
'<Error>',
`<Code>${errCode.message}</Code>`,
`<Message>${errCode.description}</Message>`,
'<Resource></Resource>',
`<RequestId>${log.getSerializedUids()}</RequestId>`,
'</Error>'
);
const xmlStr = xml.join('');
const bytesSent = Buffer.byteLength(xmlStr);
log.addDefaultFields({
bytesSent,
});
setCommonResponseHeaders(corsHeaders, response, log);
response.writeHead(errCode.code,
{ 'Content-Type': 'application/xml',
'Content-Length': bytesSent });
return response.end(xmlStr, 'utf8', () => {
log.end().info('responded with error XML', {
httpCode: response.statusCode,
});
});
},
};
const JSONResponseBackend = {
/**
* okJSONResponse - Response with JSON body
* @param {string} json - JSON body as string
* @param {object} response - http response object
* @param {object} log - Werelogs logger
* @param {object} additionalHeaders -- additional headers to add
* to response
* @return {object} response - response object with additional headers
*/
okResponse: function okJSONResponse(json, response, log,
additionalHeaders) {
const bytesSent = Buffer.byteLength(json);
log.trace('sending success json response');
log.addDefaultFields({
bytesSent,
});
setCommonResponseHeaders(additionalHeaders, response, log);
response.writeHead(200, { 'Content-type': 'application/json' });
log.debug('response http code', { httpCode: 200 });
log.trace('json response', { json });
return response.end(json, 'utf8', () => {
log.end().info('responded with JSON', {
httpCode: response.statusCode,
});
});
},
errorResponse: function errorJSONResponse(errCode, response, log,
corsHeaders) {
log.trace('sending error json response', { errCode });
/*
{
"code": "NoSuchKey",
"message": "The resource you requested does not exist",
"resource": "/mybucket/myfoto.jpg",
"requestId": "4442587FB7D0A2F9"
}
*/
const jsonStr =
`{"code":"${errCode.message}",` +
`"message":"${errCode.description}",` +
'"resource":null,' +
`"requestId":"${log.getSerializedUids()}"}`;
const bytesSent = Buffer.byteLength(jsonStr);
log.addDefaultFields({
bytesSent,
});
setCommonResponseHeaders(corsHeaders, response, log);
response.writeHead(errCode.code,
{ 'Content-Type': 'application/json',
'Content-Length': bytesSent });
return response.end(jsonStr, 'utf8', () => {
log.end().info('responded with error JSON', {
httpCode: response.statusCode,
});
});
},
};
/**
* Modify response headers for an objectGet or objectHead request
* @param {object} overrideParams - parameters in this object override common
* headers. These are extracted from the request's query object
* @param {object} resHeaders - object with common response headers
* @param {object} response - router's response object
* @param {array | undefined} range - range in form of [start, end]
* or undefined if no range header
* @param {object} log - Werelogs logger
* @return {object} response - modified response object
*/
function okContentHeadersResponse(overrideParams, resHeaders,
response, range, log) {
const addHeaders = {};
if (process.env.ALLOW_INVALID_META_HEADERS) {
const headersArr = Object.keys(resHeaders);
const length = headersArr.length;
for (let i = 0; i < length; i++) {
const headerName = headersArr[i];
if (headerName.startsWith('x-amz-')) {
const translatedHeaderName = headerName.replace(/\//g, '|+2f');
// eslint-disable-next-line no-param-reassign
resHeaders[translatedHeaderName] =
resHeaders[headerName];
if (translatedHeaderName !== headerName) {
// eslint-disable-next-line no-param-reassign
delete resHeaders[headerName];
}
}
}
}
Object.assign(addHeaders, resHeaders);
if (overrideParams['response-content-type']) {
addHeaders['Content-Type'] = overrideParams['response-content-type'];
}
if (overrideParams['response-content-language']) {
addHeaders['Content-Language'] =
overrideParams['response-content-language'];
}
if (overrideParams['response-expires']) {
addHeaders.Expires = overrideParams['response-expires'];
}
if (overrideParams['response-cache-control']) {
addHeaders['Cache-Control'] = overrideParams['response-cache-control'];
}
if (overrideParams['response-content-disposition']) {
addHeaders['Content-Disposition'] =
overrideParams['response-content-disposition'];
}
if (overrideParams['response-content-encoding']) {
addHeaders['Content-Encoding'] =
overrideParams['response-content-encoding'];
}
setCommonResponseHeaders(addHeaders, response, log);
const httpCode = range ? 206 : 200;
log.debug('response http code', { httpCode });
response.writeHead(httpCode);
return response;
}
function retrieveData(locations, dataRetrievalFn,
response, logger, errorHandlerFn) {
if (locations.length === 0) {
return response.end();
}
if (errorHandlerFn === undefined) {
// eslint-disable-next-line
errorHandlerFn = () => { response.connection.destroy(); };
}
const current = locations.shift();
if (current.azureStreamingOptions) {
// pipe data directly from source to response
response.on('error', err => {
logger.error('error piping data from source');
errorHandlerFn(err);
});
return dataRetrievalFn(current, response, logger, err => {
if (err) {
logger.error('failed to get object from source', {
error: err,
method: 'retrieveData',
backend: 'Azure',
});
return errorHandlerFn(err);
}
return undefined;
});
}
return dataRetrievalFn(current, response, logger,
(err, readable) => {
if (err) {
logger.error('failed to get object', {
error: err,
method: 'retrieveData',
});
return errorHandlerFn(err);
}
readable.on('error', err => {
logger.error('error piping data from source');
errorHandlerFn(err);
});
readable.on('end', () => {
process.nextTick(retrieveData,
locations, dataRetrievalFn, response, logger);
});
readable.pipe(response, { end: false });
return undefined;
});
}
function _responseBody(responseBackend, errCode, payload, response, log,
additionalHeaders) {
if (errCode && !response.headersSent) {
return responseBackend.errorResponse(errCode, response, log,
additionalHeaders);
}
if (!response.headersSent) {
return responseBackend.okResponse(payload, response, log,
additionalHeaders);
}
return undefined;
}
const routesUtils = {
/**
* @param {string} errCode - S3 error Code
* @param {string} xml - xml body as string conforming to S3's spec.
* @param {object} response - router's response object
* @param {object} log - Werelogs logger
* @param {object} [additionalHeaders] - additionalHeaders to add
* to response
* @return {function} - error or success response utility
*/
responseXMLBody(errCode, xml, response, log, additionalHeaders) {
return _responseBody(XMLResponseBackend, errCode, xml, response,
log, additionalHeaders);
},
/**
* @param {string} errCode - S3 error Code
* @param {string} json - JSON body as string conforming to S3's spec.
* @param {object} response - router's response object
* @param {object} log - Werelogs logger
* @param {object} [additionalHeaders] - additionalHeaders to add
* to response
* @return {function} - error or success response utility
*/
responseJSONBody(errCode, json, response, log, additionalHeaders) {
return _responseBody(JSONResponseBackend, errCode, json, response,
log, additionalHeaders);
},
/**
* @param {string} errCode - S3 error Code
* @param {string} resHeaders - headers to be set for the response
* @param {object} response - router's response object
* @param {number} httpCode - httpCode to set in response
* If none provided, defaults to 200.
* @param {object} log - Werelogs logger
* @return {function} - error or success response utility
*/
responseNoBody(errCode, resHeaders, response, httpCode = 200, log) {
if (errCode && !response.headersSent) {
return XMLResponseBackend.errorResponse(errCode, response, log,
resHeaders);
}
if (!response.headersSent) {
return okHeaderResponse(resHeaders, response, httpCode, log);
}
return undefined;
},
/**
* @param {string} errCode - S3 error Code
* @param {object} overrideParams - parameters in this object override
* common headers. These are extracted from the request's query object
* @param {string} resHeaders - headers to be set for the response
* @param {object} response - router's response object
* @param {object} log - Werelogs logger
* @return {object} - router's response object
*/
responseContentHeaders(errCode, overrideParams, resHeaders, response,
log) {
if (errCode && !response.headersSent) {
return XMLResponseBackend.errorResponse(errCode, response, log,
resHeaders);
}
if (!response.headersSent) {
// undefined is passed as the range argument since
// okContentHeadersResponse also serves responseStreamData,
// which does supply a range
okContentHeadersResponse(overrideParams, resHeaders, response,
undefined, log);
}
return response.end(() => {
log.end().info('responded with content headers', {
httpCode: response.statusCode,
});
});
},
/**
* @param {string} errCode - S3 error Code
* @param {object} overrideParams - parameters in this object override
* common headers. These are extracted from the request's query object
* @param {string} resHeaders - headers to be set for the response
* @param {array | null} dataLocations --
* - array of locations to get streams from sproxyd
* - null if no data for object and only metadata
* @param {function} dataRetrievalFn - function to handle streaming data
* @param {http.ServerResponse} response - response sent to the client
* @param {array | undefined} range - range in format of [start, end]
* if range header contained in request or undefined if not
* @param {object} log - Werelogs logger
* @return {undefined}
*/
responseStreamData(errCode, overrideParams, resHeaders, dataLocations,
dataRetrievalFn, response, range, log) {
if (errCode && !response.headersSent) {
return XMLResponseBackend.errorResponse(errCode, response, log,
resHeaders);
}
if (!response.headersSent) {
okContentHeadersResponse(overrideParams, resHeaders, response,
range, log);
}
if (dataLocations === null) {
return response.end(() => {
log.end().info('responded with only metadata', {
httpCode: response.statusCode,
});
});
}
response.on('finish', () => {
log.end().info('responded with streamed content', {
httpCode: response.statusCode,
});
});
return retrieveData(dataLocations, dataRetrievalFn, response, log);
},
/**
* @param {object} err -- arsenal error object
* @param {array} dataLocations --
* - array of locations to get streams from backend
* @param {function} dataRetrievalFn - function to handle streaming data
* @param {http.ServerResponse} response - response sent to the client
* @param {object} corsHeaders - CORS-related response headers
* @param {object} log - Werelogs logger
* @return {undefined}
*/
streamUserErrorPage(err, dataLocations, dataRetrievalFn, response,
corsHeaders, log) {
setCommonResponseHeaders(corsHeaders, response, log);
response.writeHead(err.code, { 'Content-type': 'text/html' });
response.on('finish', () => {
log.end().info('responded with streamed content', {
httpCode: response.statusCode,
});
});
return retrieveData(dataLocations, dataRetrievalFn, response, log);
},
/**
* @param {object} err - arsenal error object
* @param {boolean} userErrorPageFailure - whether there was a failure
* retrieving the user's error page
* @param {string} bucketName - bucketName from request
* @param {http.ServerResponse} response - response sent to the client
* @param {object} corsHeaders - CORS-related response headers
* @param {object} log - Werelogs logger
* @return {undefined}
*/
errorHtmlResponse(err, userErrorPageFailure, bucketName, response,
corsHeaders, log) {
log.trace('sending generic html error page',
{ err });
setCommonResponseHeaders(corsHeaders, response, log);
response.writeHead(err.code, { 'Content-type': 'text/html' });
const html = [];
// response.statusMessage will provide standard message for status
// code so we must set the response status code before creating html
html.push(
'<html>',
'<head>',
`<title>${err.code} ${response.statusMessage}</title>`,
'</head>',
'<body>',
`<h1>${err.code} ${response.statusMessage}</h1>`,
'<ul>',
`<li>Code: ${err.message}</li>`,
`<li>Message: ${err.description}</li>`
);
if (!userErrorPageFailure && bucketName) {
html.push(`<li>BucketName: ${bucketName}</li>`);
}
html.push(
`<li>RequestId: ${log.getSerializedUids()}</li>`,
// AWS response contains HostId here.
// TODO: consider adding
'</ul>'
);
if (userErrorPageFailure) {
html.push(
'<h3>An Error Occurred While Attempting ',
'to Retrieve a Custom ',
'Error Document</h3>',
'<ul>',
`<li>Code: ${err.message}</li>`,
`<li>Message: ${err.description}</li>`,
'</ul>'
);
}
html.push(
'<hr/>',
'</body>',
'</html>'
);
return response.end(html.join(''), 'utf8', () => {
log.end().info('responded with error html', {
httpCode: response.statusCode,
});
});
},
/**
* @param {object} err - arsenal error object
* @param {http.ServerResponse} response - response sent to the client
* @param {object} corsHeaders - CORS-related response headers
* @param {object} log - Werelogs logger
* @return {undefined}
*/
errorHeaderResponse(err, response, corsHeaders, log) {
log.trace('sending error header response',
{ err });
setCommonResponseHeaders(corsHeaders, response, log);
response.setHeader('x-amz-error-code', err.message);
response.setHeader('x-amz-error-message', err.description);
response.writeHead(err.code);
return response.end(() => {
log.end().info('responded with error headers', {
httpCode: response.statusCode,
});
});
},
/**
* redirectRequest - redirectRequest based on rule
* @param {object} routingInfo - info for routing
* @param {string} [routingInfo.hostName] - redirect host
* @param {string} [routingInfo.protocol] - protocol for redirect
* (http or https)
* @param {number} [routingInfo.httpRedirectCode] - redirect http code
* @param {string} [routingInfo.replaceKeyPrefixWith] - replacement prefix
* @param {string} [routingInfo.replaceKeyWith] - replacement key
* @param {string} [routingInfo.prefixFromRule] - key prefix to be replaced
* @param {boolean} [routingInfo.justPath] - whether to just send the
* path as the redirect location header rather than full protocol plus
* hostname plus path (AWS only sends path when redirect is based on
* x-amz-website-redirect-location header and redirect is to key in
* same bucket)
* @param {boolean} [routingInfo.redirectLocationHeader] - whether redirect
* rule came from an x-amz-website-redirect-location header
* @param {string} objectKey - key name (may have been modified in
* websiteGet api to include index document)
* @param {boolean} encrypted - whether request was https
* @param {object} response - response object
* @param {string} hostHeader - host sent in original request.headers
* @param {object} corsHeaders - CORS-related response headers
* @param {object} log - Werelogs instance
* @return {undefined}
*/
redirectRequest(routingInfo, objectKey, encrypted, response, hostHeader,
corsHeaders, log) {
const { justPath, redirectLocationHeader, hostName, protocol,
httpRedirectCode, replaceKeyPrefixWith,
replaceKeyWith, prefixFromRule } = routingInfo;
// use the rule's protocol if set, else derive it from the connection
const redirectProtocol = protocol || (encrypted ? 'https' : 'http');
const redirectCode = httpRedirectCode || 301;
const redirectHostName = hostName || hostHeader;
setCommonResponseHeaders(corsHeaders, response, log);
let redirectKey = objectKey;
// will only have either replaceKeyWith defined or replaceKeyPrefixWith
// defined. not both and might have neither
if (replaceKeyWith !== undefined) {
redirectKey = replaceKeyWith;
}
if (replaceKeyPrefixWith !== undefined) {
if (prefixFromRule !== undefined) {
// if here with prefixFromRule defined, means that
// passed condition
// and objectKey starts with this prefix. replace just first
// instance in objectKey with the replaceKeyPrefixWith value
redirectKey = objectKey.replace(prefixFromRule,
replaceKeyPrefixWith);
} else {
redirectKey = replaceKeyPrefixWith + objectKey;
}
}
let redirectLocation = justPath ? `/${redirectKey}` :
`${redirectProtocol}://${redirectHostName}/${redirectKey}`;
if (!redirectKey && redirectLocationHeader) {
// remove hanging slash
redirectLocation = redirectLocation.slice(0, -1);
}
log.end().info('redirecting request', {
httpCode: redirectCode,
redirectLocation,
});
response.writeHead(redirectCode, {
Location: redirectLocation,
});
response.end();
return undefined;
},
/**
* Get bucket name and object name from the request
* @param {object} request - http request object
* @param {string} pathname - http request path parsed from request url
* @param {string[]} validHosts - all region endpoints + websiteEndpoints
* @returns {object} result - returns object containing bucket
* name and objectKey as key
*/
getResourceNames(request, pathname, validHosts) {
return this.getNamesFromReq(request, pathname,
routesUtils.getBucketNameFromHost(request, validHosts));
},
/**
* Get bucket name and/or object name from the path of a request
* @param {object} request - http request object
* @param {string} pathname - http request path parsed from request url
* @param {string} bucketNameFromHost - name of bucket from host name
* @returns {object} resources - returns object w. bucket and object as keys
*/
getNamesFromReq(request, pathname,
bucketNameFromHost) {
const resources = {
bucket: undefined,
object: undefined,
host: undefined,
gotBucketNameFromHost: undefined,
path: undefined,
};
// If there are spaces in a key name, s3cmd sends them as "+"s.
// Actual "+"s are uri encoded as "%2B" so by switching "+"s to
// spaces here, you still retain any "+"s in the final decoded path
const pathWithSpacesInsteadOfPluses = pathname.replace(/\+/g, ' ');
const path = decodeURIComponent(pathWithSpacesInsteadOfPluses);
resources.path = path;
let fullHost;
if (request.headers && request.headers.host) {
const reqHost = request.headers.host;
const bracketIndex = reqHost.indexOf(']');
const colonIndex = reqHost.lastIndexOf(':');
const hostLength = colonIndex > bracketIndex ?
colonIndex : reqHost.length;
fullHost = reqHost.slice(0, hostLength);
} else {
fullHost = undefined;
}
if (bucketNameFromHost) {
resources.bucket = bucketNameFromHost;
const bucketNameLength = bucketNameFromHost.length;
resources.host = fullHost.slice(bucketNameLength + 1);
// Slice off leading '/'
resources.object = path.slice(1);
resources.gotBucketNameFromHost = true;
} else {
resources.host = fullHost;
const urlArr = path.split('/');
if (urlArr.length > 1) {
resources.bucket = urlArr[1];
resources.object = urlArr.slice(2).join('/');
} else if (urlArr.length === 1) {
resources.bucket = urlArr[0];
}
}
// remove any empty strings or nulls
if (resources.bucket === '' || resources.bucket === null) {
resources.bucket = undefined;
}
if (resources.object === '' || resources.object === null) {
resources.object = undefined;
}
return resources;
},
/**
* Get bucket name from the request of a virtually hosted bucket
* @param {object} request - HTTP request object
 * @param {string[]} validHosts - all region endpoints + websiteEndpoints
 * @return {string|undefined} - returns bucket name if dns-style query,
 *                              returns undefined if path-style query
 * @throws {Error} in case the type of query could not be inferred
*/
getBucketNameFromHost(request, validHosts) {
const headers = request.headers;
if (headers === undefined || headers.host === undefined) {
throw new Error('bad request: no host in headers');
}
const reqHost = headers.host;
const bracketIndex = reqHost.indexOf(']');
const colonIndex = reqHost.lastIndexOf(':');
const hostLength = colonIndex > bracketIndex ?
colonIndex : reqHost.length;
// If request is made using IPv6 (indicated by presence of brackets),
// surrounding brackets should not be included in host var
const host = bracketIndex > -1 ?
reqHost.slice(1, hostLength - 1) : reqHost.slice(0, hostLength);
// parseIp returns empty object if host is not valid IP
// If host is an IP address, it's path-style
if (Object.keys(ipCheck.parseIp(host)).length !== 0) {
return undefined;
}
let bucketName;
for (let i = 0; i < validHosts.length; ++i) {
if (host === validHosts[i]) {
// It's path-style
return undefined;
} else if (host.endsWith(`.${validHosts[i]}`)) {
const potentialBucketName = host.split(`.${validHosts[i]}`)[0];
if (!bucketName) {
bucketName = potentialBucketName;
} else {
// bucketName should be shortest so that takes into account
// most specific potential hostname
bucketName =
potentialBucketName.length < bucketName.length ?
potentialBucketName : bucketName;
}
}
}
if (bucketName) {
return bucketName;
}
throw new Error(
`bad request: hostname ${host} is not in valid endpoints`
);
},
/**
* Modify http request object
* @param {object} request - http request object
* @param {string[]} validHosts - all region endpoints + websiteEndpoints
* @return {object} request object with additional attributes
*/
normalizeRequest(request, validHosts) {
/* eslint-disable no-param-reassign */
const parsedUrl = url.parse(request.url, true);
request.query = parsedUrl.query;
// TODO: make the namespace come from a config variable.
request.namespace = 'default';
// Parse bucket and/or object names from request
const resources = this.getResourceNames(request, parsedUrl.pathname,
validHosts);
request.gotBucketNameFromHost = resources.gotBucketNameFromHost;
request.bucketName = resources.bucket;
request.objectKey = resources.object;
request.parsedHost = resources.host;
request.path = resources.path;
// For streaming v4 auth, the total body content length
// without the chunk metadata is sent as
// the x-amz-decoded-content-length
const contentLength = request.headers['x-amz-decoded-content-length'] ?
request.headers['x-amz-decoded-content-length'] :
request.headers['content-length'];
request.parsedContentLength =
Number.parseInt(contentLength, 10);
if (process.env.ALLOW_INVALID_META_HEADERS) {
const headersArr = Object.keys(request.headers);
const length = headersArr.length;
if (headersArr.indexOf('x-invalid-metadata') > 1) {
for (let i = 0; i < length; i++) {
const headerName = headersArr[i];
if (headerName.startsWith('x-amz-')) {
const translatedHeaderName =
headerName.replace(/\|\+2f/g, '/');
request.headers[translatedHeaderName] =
request.headers[headerName];
if (translatedHeaderName !== headerName) {
delete request.headers[headerName];
}
}
}
}
}
return request;
},
/**
* Validate bucket name per naming rules and restrictions
* @param {string} bucketname - name of the bucket to be created
* @param {string[]} prefixBlacklist - prefixes reserved for internal use
* @return {boolean} - returns true/false by testing
* bucket name against validation rules
*/
isValidBucketName(bucketname, prefixBlacklist) {
const ipAddressRegex = new RegExp(/^(\d+\.){3}\d+$/);
const dnsRegex = new RegExp(/^[a-z0-9]+([\.\-]{1}[a-z0-9]+)*$/);
// Must be at least 3 and no more than 63 characters long.
if (bucketname.length < 3 || bucketname.length > 63) {
return false;
}
// Certain prefixes may be reserved, for example for shadow buckets
// used for multipart uploads
if (prefixBlacklist.some(prefix => bucketname.startsWith(prefix))) {
return false;
}
// Must not contain more than one consecutive period
if (bucketname.indexOf('..') > -1) {
return false;
}
// Must not be an ip address
if (bucketname.match(ipAddressRegex)) {
return false;
}
// Must be dns compatible
return !!bucketname.match(dnsRegex);
},
/**
* Parse content-md5 from meta headers
 * @param {object} headers - request headers
* @return {string} - returns content-md5 string
*/
parseContentMD5(headers) {
if (headers['x-amz-meta-s3cmd-attrs']) {
const metaHeadersArr = headers['x-amz-meta-s3cmd-attrs'].split('/');
for (let i = 0; i < metaHeadersArr.length; i++) {
const tmpArr = metaHeadersArr[i].split(':');
if (tmpArr[0] === 'md5') {
return tmpArr[1];
}
}
}
return '';
},
/**
* Report 500 to stats when an Internal Error occurs
* @param {object} err - Arsenal error
* @param {object} statsClient - StatsClient instance
* @returns {undefined}
*/
statsReport500(err, statsClient) {
if (statsClient && err && err.code === 500) {
statsClient.report500();
}
return undefined;
},
};
module.exports = routesUtils;
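The bucket/key extraction above can be exercised standalone. The helper below is a hypothetical re-implementation of the core split performed by `getNamesFromReq` (not the Arsenal function itself): it shows how the same object key is recovered whether the bucket name came from the Host header (virtual-hosted style) or from the first path segment (path-style).

```javascript
// Hypothetical sketch of the split done by getNamesFromReq:
// virtual-hosted style takes the bucket from the Host header and the
// whole path (minus leading '/') as the key; path-style takes the
// first path segment as the bucket. "+" is turned into a space before
// URI-decoding, as in the original, so an encoded "%2B" still decodes
// to a literal "+".
function namesFromPath(pathname, bucketNameFromHost) {
    const path = decodeURIComponent(pathname.replace(/\+/g, ' '));
    if (bucketNameFromHost) {
        return {
            bucket: bucketNameFromHost,
            object: path.slice(1) || undefined,
        };
    }
    const urlArr = path.split('/');
    return {
        bucket: urlArr[1] || undefined,
        object: urlArr.slice(2).join('/') || undefined,
    };
}
```

For example, `namesFromPath('/mybucket/a/b+c', undefined)` yields bucket `mybucket` and key `a/b c`, while the same key requested virtual-hosted style arrives as `namesFromPath('/a/b+c', 'mybucket')`.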

lib/shuffle.js
'use strict'; // eslint-disable-line strict
const randomBytes = require('crypto').randomBytes;
/*
 * This set of functions allows us to create an efficient shuffle
 * of our array, since Math.random() is not enough (32 bits of
 * entropy are less than enough when the entropy needed grows with the
 * factorial of the array length).
*
* Many thanks to @jmunoznaranjo for providing us with a solid solution.
*/
/*
* Returns the lowest number of bits required to represent a positive base-10
* number. Sync function.
* @param {number} number - a positive integer
* @return {number} the lowest number of bits
* @throws Error if number < 0
*/
function bitsNeeded(number) {
if (number < 0) {
throw new Error('Input must be greater than or equal to zero');
} else if (number === 0) {
return 1;
} else {
return Math.floor(Math.log2(number)) + 1;
}
}
/*
 * Returns a 'numBits'-long sequence of 1s *as a base-10 integer*.
 * Sync function.
 * @param {number} numBits - a positive integer
 * @return {number} the sequence of 1s as an integer, 0 if numBits === 0
 * @throws Error if numBits < 0
*/
function createMaskOnes(numBits) {
if (numBits < 0) {
throw new Error('Input must be greater than or equal to zero');
}
return Math.pow(2, numBits) - 1;
}
/*
* Returns a buffer of cryptographically secure pseudo-random bytes. The
* source of bytes is nodejs' crypto.randomBytes. Sync function.
 * @param {number} numBytes - the number of bytes to return
 * @return {Buffer} a buffer with 'numBytes' pseudo-random bytes
* @throws Error if numBytes < 0 or if insufficient entropy
*/
function nextBytes(numBytes) {
if (numBytes < 0) {
throw new Error('Input must be greater than or equal to zero');
}
try {
return randomBytes(numBytes);
} catch (ex) {
throw new Error('Insufficient entropy');
}
}
/*
* Returns the number of bytes needed to store a number of bits. Sync function.
* @param {number} numBits - a positive integer
* @return {number} the number of bytes needed
* @throws Error if numBits < 0
*/
function bitsToBytes(numBits) {
if (numBits < 0) {
throw new Error('Input must be greater than or equal to zero');
}
return Math.ceil(numBits / 8);
}
/*
* Returns a cryptographically secure pseudo-random integer in range [min,max].
* The source of randomness underneath is nodejs' crypto.randomBytes.
* Sync function.
* @param {number} min - minimum possible value of the returned integer
* @param {number} max - maximum possible value of the returned integer
 * @return {number} - a pseudo-random integer in [min,max]
 * @throws Error if max < min
*/
function randomRange(min, max) {
if (max < min) {
throw new Error('Invalid range');
}
if (min === max) {
return min;
}
const range = (max - min);
const bits = bitsNeeded(range);
// decide how many bytes we need to draw from nextBytes: drawing
// fewer bytes means being more efficient
const bytes = bitsToBytes(bits);
// we use a mask as an optimization: it increases the chances for the
// candidate to be in range
const mask = createMaskOnes(bits);
let candidate;
do {
candidate = parseInt(nextBytes(bytes).toString('hex'), 16) & mask;
} while (candidate > range);
return (candidate + min);
}
/**
* This shuffles an array of any length, using sufficient entropy
* in every single case.
* @param {Array} array - Any type of array
 * @return {Array} - The shuffled array
*/
module.exports = function shuffle(array) {
for (let i = array.length - 1; i > 0; i--) {
const randIndex = randomRange(0, i);
/* eslint-disable no-param-reassign */
const randIndexVal = array[randIndex];
array[randIndex] = array[i];
array[i] = randIndexVal;
/* eslint-enable no-param-reassign */
}
return array;
};

'use strict'; // eslint-disable-line
const fs = require('fs');
const crypto = require('crypto');
const async = require('async');
const diskusage = require('diskusage');
const werelogs = require('werelogs');
const errors = require('../../../errors');
const stringHash = require('../../../stringHash');
const jsutil = require('../../../jsutil');
const storageUtils = require('../../utils');
// The FOLDER_HASH constant refers to the number of base directories
// used for directory hashing of stored objects.
//
// It MUST not be changed on anything else than a clean new storage
// backend.
//
// It may be changed for such system if the default hash value is too
// low for the estimated number of objects to be stored and the file
// system performance (be cautious though), and cannot be changed once
// the system contains data.
const FOLDER_HASH = 3511;
/**
* @class
* @classdesc File-based object blob store
*
* Each object/part becomes a file and the files are stored in a
* directory hash structure under the configured dataPath.
*/
class DataFileStore {
/**
* @constructor
* @param {Object} dataConfig - configuration of the file backend
* @param {String} dataConfig.dataPath - absolute path where to
* store the files
* @param {Boolean} [dataConfig.noSync=false] - If true, disable
* sync calls that ensure files and directories are fully
* written on the physical drive before returning an
* answer. Used to speed up unit tests, may have other uses.
* @param {werelogs.API} [logApi] - object providing a constructor function
* for the Logger object
*/
constructor(dataConfig, logApi) {
this.logger = new (logApi || werelogs).Logger('DataFileStore');
this.dataPath = dataConfig.dataPath;
this.noSync = dataConfig.noSync || false;
}
/**
* Setup the storage backend before starting to read or write
* files in it.
*
* The function ensures that dataPath is accessible and
* pre-creates the directory hashes under dataPath.
*
* @param {function} callback - called when done with no argument
* @return {undefined}
*/
setup(callback) {
fs.access(this.dataPath,
    fs.constants.F_OK | fs.constants.R_OK | fs.constants.W_OK, err => {
if (err) {
this.logger.error('Data path is not readable or writable',
{ error: err });
return callback(err);
}
// Create FOLDER_HASH subdirectories
const subDirs = Array.from({ length: FOLDER_HASH },
(v, k) => (k).toString());
this.logger.info(`pre-creating ${subDirs.length} subdirs...`);
if (!this.noSync) {
storageUtils.setDirSyncFlag(this.dataPath, this.logger);
}
async.eachSeries(subDirs, (subDirName, next) => {
fs.mkdir(`${this.dataPath}/${subDirName}`, err => {
// If already exists, move on
if (err && err.code !== 'EEXIST') {
return next(err);
}
return next();
});
},
err => {
if (err) {
this.logger.error('Error creating subdirs',
{ error: err });
return callback(err);
}
this.logger.info('data file store init complete, ' +
'go forth and store data.');
return callback();
});
return undefined;
});
}
/**
* Get the filesystem path to a stored object file from its key
*
* @param {String} key - the object key
* @return {String} the absolute path to the file containing the
* object contents
*/
getFilePath(key) {
const hash = stringHash(key);
const folderHashPath = ((hash % FOLDER_HASH)).toString();
return `${this.dataPath}/${folderHashPath}/${key}`;
}
/**
* Put a new object to the storage backend
*
* @param {stream.Readable} dataStream - input stream providing the
* object data
* @param {Number} size - Total byte size of the data to put
* @param {werelogs.RequestLogger} log - logging object
* @param {DataFileStore~putCallback} callback - called when done
* @return {undefined}
*/
put(dataStream, size, log, callback) {
const key = crypto.pseudoRandomBytes(20).toString('hex');
const filePath = this.getFilePath(key);
log.debug('starting to write data', { method: 'put', key, filePath });
dataStream.pause();
fs.open(filePath, 'wx', (err, fd) => {
if (err) {
log.error('error opening filePath',
{ method: 'put', key, filePath, error: err });
return callback(errors.InternalError.customizeDescription(
`filesystem error: open() returned ${err.code}`));
}
const cbOnce = jsutil.once(callback);
// disable autoClose so that we can close(fd) only after
// fsync() has been called
const fileStream = fs.createWriteStream(filePath,
{ fd,
autoClose: false });
fileStream.on('finish', () => {
function ok() {
log.debug('finished writing data',
{ method: 'put', key, filePath });
return cbOnce(null, key);
}
if (this.noSync) {
fs.close(fd);
return ok();
}
fs.fsync(fd, err => {
fs.close(fd);
if (err) {
log.error('fsync error',
{ method: 'put', key, filePath,
error: err });
return cbOnce(
errors.InternalError.customizeDescription(
'filesystem error: fsync() returned ' +
`${err.code}`));
}
return ok();
});
return undefined;
}).on('error', err => {
log.error('error streaming data on write',
{ method: 'put', key, filePath, error: err });
// destroying the write stream forces a close(fd)
fileStream.destroy();
return cbOnce(errors.InternalError.customizeDescription(
`write stream error: ${err.code}`));
});
dataStream.resume();
dataStream.pipe(fileStream);
dataStream.on('error', err => {
log.error('error streaming data on read',
{ method: 'put', key, filePath, error: err });
// destroying the write stream forces a close(fd)
fileStream.destroy();
return cbOnce(errors.InternalError.customizeDescription(
`read stream error: ${err.code}`));
});
return undefined;
});
}
/**
* Get info about a stored object (see DataFileStore~statCallback
* to know which info is returned)
*
* @param {String} key - key of the object
* @param {werelogs.RequestLogger} log - logging object
* @param {DataFileStore~statCallback} callback - called when done
* @return {undefined}
*/
stat(key, log, callback) {
const filePath = this.getFilePath(key);
log.debug('stat file', { key, filePath });
fs.stat(filePath, (err, stat) => {
if (err) {
if (err.code === 'ENOENT') {
return callback(errors.ObjNotFound);
}
log.error('error on \'stat\' of file',
{ key, filePath, error: err });
return callback(errors.InternalError.customizeDescription(
`filesystem error: stat() returned ${err.code}`));
}
const info = { objectSize: stat.size };
return callback(null, info);
});
}
/**
* Retrieve data of a stored object
*
* @param {String} key - key of the object
* @param {Object} [byteRange] - optional absolute inclusive byte
* range to retrieve.
* @param {werelogs.RequestLogger} log - logging object
* @param {DataFileStore~getCallback} callback - called when done
* @return {undefined}
*/
get(key, byteRange, log, callback) {
const filePath = this.getFilePath(key);
const readStreamOptions = {
flags: 'r',
encoding: null,
fd: null,
autoClose: true,
};
if (byteRange) {
readStreamOptions.start = byteRange[0];
readStreamOptions.end = byteRange[1];
}
log.debug('opening readStream to get data',
{ method: 'get', key, filePath, byteRange });
const cbOnce = jsutil.once(callback);
const rs = fs.createReadStream(filePath, readStreamOptions)
.on('error', err => {
if (err.code === 'ENOENT') {
return cbOnce(errors.ObjNotFound);
}
log.error('error retrieving file',
{ method: 'get', key, filePath,
error: err });
return cbOnce(
errors.InternalError.customizeDescription(
`filesystem read error: ${err.code}`));
})
.on('open', () => { cbOnce(null, rs); });
}
/**
* Delete a stored object
*
* @param {String} key - key of the object
* @param {werelogs.RequestLogger} log - logging object
* @param {DataFileStore~deleteCallback} callback - called when done
* @return {undefined}
*/
delete(key, log, callback) {
const filePath = this.getFilePath(key);
log.debug('deleting file', { method: 'delete', key, filePath });
return fs.unlink(filePath, err => {
if (err) {
if (err.code === 'ENOENT') {
return callback(errors.ObjNotFound);
}
log.error('error deleting file', { method: 'delete',
key, filePath,
error: err });
return callback(errors.InternalError.customizeDescription(
`filesystem error: unlink() returned ${err.code}`));
}
return callback();
});
}
/**
* Retrieve disk usage information
*
* @param {DataFileStore~diskUsageCallback} callback - called when done
* @return {undefined}
*/
getDiskUsage(callback) {
diskusage.check(this.dataPath, callback);
}
}
/**
* @callback DataFileStore~putCallback
* @param {Error} - The encountered error
* @param {String} key - The key to access the data
*/
/**
* @callback DataFileStore~statCallback
* @param {Error} - The encountered error
* @param {Object} info - Information about the object
* @param {Number} info.objectSize - Byte size of the object
*/
/**
* @callback DataFileStore~getCallback
* @param {Error} - The encountered error
* arsenal.errors.ObjNotFound is returned if the object does not exist
* @param {stream.Readable} stream - The stream of requested object data
*/
/**
* @callback DataFileStore~deleteCallback
* @param {Error} - The encountered error
* arsenal.errors.ObjNotFound is returned if the object does not exist
*/
/**
* @callback DataFileStore~diskUsageCallback
* @param {Error} - The encountered error
* @param {object} - The disk usage info
*/
module.exports = DataFileStore;
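The directory-hash layout that `getFilePath` implements can be sketched on its own. The `stringHash` below is a plain djb2-style stand-in, an assumption for illustration; Arsenal ships its own `stringHash`, so real paths will differ.

```javascript
// Sketch of DataFileStore's on-disk layout: each key hashes to one of
// FOLDER_HASH flat subdirectories under dataPath.
const FOLDER_HASH = 3511;

// djb2-style stand-in for Arsenal's stringHash (assumed, not identical)
function stringHash(str) {
    let hash = 5381;
    for (let i = 0; i < str.length; i++) {
        hash = (((hash << 5) + hash) + str.charCodeAt(i)) >>> 0;
    }
    return hash;
}

function filePathFor(dataPath, key) {
    const folderHashPath = (stringHash(key) % FOLDER_HASH).toString();
    return `${dataPath}/${folderHashPath}/${key}`;
}
```

Because the subdirectory count is baked into every stored path, changing FOLDER_HASH on a non-empty store would orphan existing files, which is why the constant must never change once the system contains data.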

'use strict'; // eslint-disable-line
const stream = require('stream');
const werelogs = require('werelogs');
const errors = require('../../../errors');
class ListRecordStream extends stream.Transform {
constructor(logger) {
super({ objectMode: true });
this.logger = logger;
}
_transform(itemObj, encoding, callback) {
itemObj.entries.forEach(entry => {
// eslint-disable-next-line no-param-reassign
entry.type = entry.type || 'put';
});
this.push(itemObj);
callback();
}
}
/**
* @class
* @classdesc Proxy object to access raft log API
*/
class LogConsumer {
/**
* @constructor
*
* @param {Object} params - constructor params
* @param {bucketclient.RESTClient} params.bucketClient - client
* object to bucketd
* @param {Number} params.raftSession - raft session ID to query
* @param {werelogs.API} [params.logApi] - object providing a constructor
* function for the Logger object
*/
constructor(params) {
this.setupLogging(params.logApi);
this.bucketClient = params.bucketClient;
this.raftSession = params.raftSession;
}
/**
* Create a dedicated logger for LogConsumer, from the provided werelogs
* API instance.
*
* @param {werelogs.API} logApi - object providing a constructor
* function for the Logger object
* @return {undefined}
*/
setupLogging(logApi) {
const api = logApi || werelogs;
this.logger = new api.Logger('LogConsumer');
}
/**
* Prune the oldest records in the record log
*
* Note: not implemented yet
*
* @param {Object} params - params object
* @param {Function} cb - callback when done
* @return {undefined}
*/
pruneRecords(params, cb) {
setImmediate(() => cb(errors.NotImplemented));
}
/**
* Read a series of log records from raft
*
* @param {Object} [params] - params object
* @param {Number} [params.startSeq] - fetch starting from this
* sequence number
* @param {Number} [params.limit] - maximum number of log records
* to return
* @param {function} cb - callback function, called with an error
* object or null and an object as 2nd parameter
* @return {undefined}
*/
readRecords(params, cb) {
const recordStream = new ListRecordStream(this.logger);
const _params = params || {};
this.bucketClient.getRaftLog(
this.raftSession, _params.startSeq, _params.limit,
false, null, (err, data) => {
if (err) {
if (err.code === 404) {
// no such raft session, log and ignore
this.logger.warn('raft session does not exist yet',
{ raftId: this.raftSession });
return cb(null, { info: { start: null,
end: null } });
}
if (err.code === 416) {
// requested range not satisfiable
this.logger.debug('no new log record to process',
{ raftId: this.raftSession });
return cb(null, { info: { start: null,
end: null } });
}
this.logger.error(
'Error handling record log request', { error: err });
return cb(err);
}
let logResponse;
try {
logResponse = JSON.parse(data);
} catch (err) {
this.logger.error('received malformed JSON',
{ params });
return cb(errors.InternalError);
}
logResponse.log.forEach(entry => recordStream.write(entry));
recordStream.end();
return cb(null, { info: logResponse.info,
log: recordStream });
}, this.logger.newRequestLogger());
}
}
module.exports = LogConsumer;

'use strict'; // eslint-disable-line
const assert = require('assert');
const constants = require('../../../constants');
const levelNet = require('../../../network/rpc/level-net');
const { RecordLogProxy } = require('./RecordLog.js');
const werelogs = require('werelogs');
class MetadataFileClient {
/**
* Construct a metadata client
*
* @param {Object} params - constructor params
* @param {String} params.host - name or IP address of metadata
* server host
* @param {Number} params.port - TCP port to connect to the metadata
* server
* @param {werelogs.API} [params.logApi] - object providing a constructor
* function for the Logger object
* @param {Number} [params.callTimeoutMs] - timeout for remote calls
*/
constructor(params) {
assert.notStrictEqual(params.host, undefined);
assert.notStrictEqual(params.port, undefined);
this.host = params.host;
this.port = params.port;
this.callTimeoutMs = params.callTimeoutMs;
this.setupLogging(params.logApi);
}
/**
* Create a dedicated logger for MetadataFileClient, from the provided
* werelogs API instance.
*
* @param {werelogs.API} logApi - object providing a constructor
* function for the Logger object
* @return {undefined}
*/
setupLogging(logApi) {
const api = logApi || werelogs;
this.logger = new api.Logger('MetadataFileClient');
}
/**
* Open the remote metadata database (backed by leveldb)
*
* @param {function} [done] called when done
* @return {Object} handle to the remote database
*/
openDB(done) {
const url = `http://${this.host}:${this.port}` +
`${constants.metadataFileNamespace}/metadata`;
this.logger.info(`connecting to metadata service at ${url}`);
const dbClient = new levelNet.LevelDbClient({
url,
logger: this.logger,
callTimeoutMs: this.callTimeoutMs,
});
dbClient.connect(done);
return dbClient;
}
/**
* Open a new or existing record log and access its API through
* RPC calls.
*
 * @param {Object} [params] - open params
* @param {String} [params.logName] - name of log to open (default
* "main")
* @param {Function} done - callback expecting an error argument,
* or null and the opened log proxy object on success
* @return {undefined}
*/
openRecordLog(params, done) {
const _params = params || {};
const url = `http://${this.host}:${this.port}` +
`${constants.metadataFileNamespace}/recordLog`;
this.logger.info('connecting to record log service', { url });
const logProxy = new RecordLogProxy({
url,
name: _params.logName,
logger: this.logger,
callTimeoutMs: this.callTimeoutMs,
});
logProxy.connect(err => {
if (err) {
this.logger.error('error connecting to record log service',
{ url, error: err.stack });
return done(err);
}
this.logger.info('connected to record log service', { url });
return done(null, logProxy);
});
return logProxy;
}
}
module.exports = MetadataFileClient;
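`openDB` and `openRecordLog` target the same host and port and differ only in the namespace-suffixed path. A sketch of that URL composition follows, where the namespace value is an assumption for illustration (the real value comes from Arsenal's `constants` module):

```javascript
// Assumed namespace prefix for illustration only; Arsenal reads the
// real value from constants.metadataFileNamespace.
const metadataFileNamespace = '/MDFile';

function serviceUrl(host, port, service) {
    return `http://${host}:${port}${metadataFileNamespace}/${service}`;
}
```

With that helper, `openDB` would point a LevelDbClient at `serviceUrl(host, port, 'metadata')`, while `openRecordLog` would point a RecordLogProxy at `serviceUrl(host, port, 'recordLog')`.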

Some files were not shown because too many files have changed in this diff