Compare commits


No commits in common. "5da4b90c21441415f17a0d93e1a672c19954ff05" and "c5b209904e1c8806648e08c1aa6be5aac82bde1b" have entirely different histories.

37 changed files with 1145 additions and 1716 deletions

View File

@@ -1,87 +0,0 @@
# General support information
GitHub Issues are **reserved** for actionable bug reports (including
documentation inaccuracies), and feature requests.
**All questions** (regarding configuration, usecases, performance, community,
events, setup and usage recommendations, among other things) should be asked on
the **[Zenko Forum](http://forum.zenko.io/)**.
> Questions opened as GitHub issues will systematically be closed, and moved to
> the [Zenko Forum](http://forum.zenko.io/).
--------------------------------------------------------------------------------
## Avoiding duplicates
When reporting a new issue/requesting a feature, make sure that we do not have
any duplicates already open:
- search the issue list for this repository (use the search bar, select
"Issues" on the left pane after searching);
- if there is a duplicate, please do not open your issue, and add a comment
to the existing issue instead.
--------------------------------------------------------------------------------
## Bug report information
(delete this section (everything between the lines) if you're not reporting a
bug but requesting a feature)
### Description
Briefly describe the problem you are having in a few paragraphs.
### Steps to reproduce the issue
Please provide steps to reproduce, including full log output
### Actual result
Describe the results you received
### Expected result
Describe the results you expected
### Additional information
- Node.js version,
- Docker version,
- npm version,
- distribution/OS,
- optional: anything else you deem helpful to us.
--------------------------------------------------------------------------------
## Feature Request
(delete this section (everything between the lines) if you're not requesting
a feature but reporting a bug)
### Proposal
Describe the feature
### Current behavior
What currently happens
### Desired behavior
What you would like to happen
### Usecase
Please provide usecases for changing the current behavior
### Additional information
- Is this request for your company? Y/N
- If Y: Company name:
- Are you using any Scality Enterprise Edition products (RING, Zenko EE)? Y/N
- Are you willing to contribute this feature yourself?
- Position/Title:
- How did you hear about us?
--------------------------------------------------------------------------------

View File

@@ -1,21 +0,0 @@
FROM node:6-slim
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN apt-get update \
&& apt-get install -y jq --no-install-recommends \
&& npm install --production \
&& rm -rf /var/lib/apt/lists/* \
&& npm cache clear --force \
&& rm -rf ~/.node-gyp \
&& rm -rf /tmp/npm-*
# Keep the .git directory in order to properly report version
COPY . /usr/src/app
ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]
CMD [ "npm", "start" ]
EXPOSE 8100

View File

@@ -3,8 +3,9 @@
![Utapi logo](res/utapi-logo.png)

[![Circle CI][badgepub]](https://circleci.com/gh/scality/utapi)
+[![Scality CI][badgepriv]](http://ci.ironmann.io/gh/scality/utapi)

-Service Utilization API for tracking resource usage and metrics reporting.
+Service Utilization API for tracking resource usage and metrics reporting

## Design
@@ -87,13 +88,13 @@ Server is running.
1. Create an IAM user

```
-aws iam --endpoint-url <endpoint> create-user --user-name <user-name>
+aws iam --endpoint-url <endpoint> create-user --user-name utapiuser
```

2. Create access key for the user

```
-aws iam --endpoint-url <endpoint> create-access-key --user-name <user-name>
+aws iam --endpoint-url <endpoint> create-access-key --user-name utapiuser
```

3. Define a managed IAM policy
@@ -202,11 +203,12 @@ Server is running.
5. Attach user to the managed policy

```
-aws --endpoint-url <endpoint> iam attach-user-policy --user-name
-<user-name> --policy-arn <policy arn>
+aws --endpoint-url <endpoint> iam attach-user-policy --user-name utapiuser
+--policy-arn <policy arn>
```

-Now the user has access to ListMetrics request in Utapi on all buckets.
+Now the user `utapiuser` has access to ListMetrics request in Utapi on all
+buckets.

### Signing request with Auth V4
@@ -222,18 +224,16 @@ following urls for reference.
You may also view examples making a request with Auth V4 using various languages
and AWS SDKs [here](/examples).

-Alternatively, you can use a nifty command line tool available in Scality's
-CloudServer.
+Alternatively, you can use a nifty command line tool available in Scality's S3.

-You can git clone the CloudServer repo from here
-https://github.com/scality/cloudserver and follow the instructions in the README
-to install the dependencies.
+You can git clone S3 repo from here https://github.com/scality/S3.git and follow
+the instructions in README to install the dependencies.

-If you have CloudServer running inside a docker container you can docker exec
-into the CloudServer container as
+If you have S3 running inside a docker container you can docker exec into the S3
+container as

```
-docker exec -it <container-id> bash
+docker exec -it <container id> bash
```

and then run the command
@@ -271,7 +271,7 @@ Usage: list_metrics [options]
    -v, --verbose
```

-An example call to list metrics for a bucket `demo` to Utapi in a https enabled
+A typical call to list metrics for a bucket `demo` to Utapi in a https enabled
deployment would be

```
@@ -283,7 +283,7 @@ Both start and end times are time expressed as UNIX epoch timestamps **expressed
in milliseconds**.

Keep in mind, since Utapi metrics are normalized to the nearest 15 min.
-interval, start time and end time need to be in the specific format as follows.
+interval, so start time and end time need to be in specific format as follows.

#### Start time
@@ -297,7 +297,7 @@ Date: Tue Oct 11 2016 17:35:25 GMT-0700 (PDT)
Unix timestamp (milliseconds): 1476232525320

-Here's an example JS method to get a start timestamp
+Here's a typical JS method to get start timestamp

```javascript
function getStartTimestamp(t) {
@@ -317,7 +317,7 @@ seconds and milliseconds set to 59 and 999 respectively. So valid end timestamps
would look something like `09:14:59:999`, `09:29:59:999`, `09:44:59:999` and
`09:59:59:999`.

-Here's an example JS method to get an end timestamp
+Here's a typical JS method to get an end timestamp

```javascript
function getEndTimestamp(t) {
@@ -342,3 +342,4 @@ In order to contribute, please follow the
https://github.com/scality/Guidelines/blob/master/CONTRIBUTING.md).

[badgepub]: http://circleci.com/gh/scality/utapi.svg?style=svg
+[badgepriv]: http://ci.ironmann.io/gh/scality/utapi.svg?style=svg
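The `getStartTimestamp`/`getEndTimestamp` helpers are cut off by the hunk context above. As a rough illustration of the 15-minute normalization both sides of this diff describe, here is a minimal Python sketch (function names and the UTC handling are our assumptions, not the repo's):

```python
from datetime import datetime, timezone

def start_timestamp(dt):
    # Floor to the enclosing 15-minute interval, in epoch milliseconds.
    floored = dt.replace(minute=dt.minute - dt.minute % 15, second=0, microsecond=0)
    return int(floored.replace(tzinfo=timezone.utc).timestamp() * 1000)

def end_timestamp(dt):
    # Last millisecond of the same interval, i.e. mm:59:999.
    return start_timestamp(dt) + 15 * 60 * 1000 - 1

t = datetime(2016, 10, 12, 0, 35, 25)  # Tue Oct 11 2016 17:35:25 PDT, in UTC
print(start_timestamp(t))  # 1476232200000 -> 00:30:00.000
print(end_timestamp(t))    # 1476233099999 -> 00:44:59.999
```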

View File

@@ -1,47 +0,0 @@
#!/bin/bash
# set -e stops the execution of a script if a command or pipeline has an error
set -e
# modifying config.json
JQ_FILTERS_CONFIG="."
if [[ "$LOG_LEVEL" ]]; then
if [[ "$LOG_LEVEL" == "info" || "$LOG_LEVEL" == "debug" || "$LOG_LEVEL" == "trace" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .log.logLevel=\"$LOG_LEVEL\""
echo "Log level has been modified to $LOG_LEVEL"
else
echo "The log level you provided is incorrect (info/debug/trace)"
fi
fi
if [[ "$WORKERS" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .workers=\"$WORKERS\""
fi
if [[ "$REDIS_HOST" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .redis.host=\"$REDIS_HOST\""
fi
if [[ "$REDIS_PORT" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .redis.port=\"$REDIS_PORT\""
fi
if [[ "$VAULTD_HOST" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .vaultd.host=\"$VAULTD_HOST\""
fi
if [[ "$VAULTD_PORT" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .vaultd.port=\"$VAULTD_PORT\""
fi
if [[ "$HEALTHCHECKS_ALLOWFROM" ]]; then
JQ_FILTERS_CONFIG="$JQ_FILTERS_CONFIG | .healthChecks.allowFrom=[\"$HEALTHCHECKS_ALLOWFROM\"]"
fi
if [[ $JQ_FILTERS_CONFIG != "." ]]; then
jq "$JQ_FILTERS_CONFIG" config.json > config.json.tmp
mv config.json.tmp config.json
fi
exec "$@"
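The deleted entrypoint rewrites `config.json` by composing one jq filter per recognized environment variable. As a rough illustration of that env-to-config mapping, here is a Python sketch (not part of the repo; it mirrors the variables handled above, leaving out `HEALTHCHECKS_ALLOWFROM`, which becomes a list):

```python
import json
import os

# Env var -> config.json path, mirroring the jq filters in the entrypoint above.
ENV_TO_PATH = {
    'LOG_LEVEL': ('log', 'logLevel'),
    'WORKERS': ('workers',),
    'REDIS_HOST': ('redis', 'host'),
    'REDIS_PORT': ('redis', 'port'),
    'VAULTD_HOST': ('vaultd', 'host'),
    'VAULTD_PORT': ('vaultd', 'port'),
}

def apply_env_overrides(path='config.json'):
    with open(path) as f:
        config = json.load(f)
    for var, keys in ENV_TO_PATH.items():
        value = os.environ.get(var)
        if not value:
            continue
        if var == 'LOG_LEVEL' and value not in ('info', 'debug', 'trace'):
            continue  # the entrypoint also rejects unknown levels
        node = config
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value  # the jq filters also write these as strings
    with open(path, 'w') as f:
        json.dump(config, f, indent=4)
```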

View File

@@ -1,90 +0,0 @@
import sys, os, base64, datetime, hashlib, hmac, calendar, json
import requests # pip install requests
access_key = '9EQTVVVCLSSG6QBMNKO5'
secret_key = 'T5mK/skkkwJ/mTjXZnHyZ5UzgGIN=k9nl4dyTmDH'
method = 'POST'
service = 's3'
host = 'localhost:8100'
region = 'us-east-1'
canonical_uri = '/buckets'
canonical_querystring = 'Action=ListMetrics&Version=20160815'
content_type = 'application/x-amz-json-1.0'
algorithm = 'AWS4-HMAC-SHA256'
t = datetime.datetime.utcnow()
amz_date = t.strftime('%Y%m%dT%H%M%SZ')
date_stamp = t.strftime('%Y%m%d')
# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
def getSignatureKey(key, date_stamp, regionName, serviceName):
kDate = sign(('AWS4' + key).encode('utf-8'), date_stamp)
kRegion = sign(kDate, regionName)
kService = sign(kRegion, serviceName)
kSigning = sign(kService, 'aws4_request')
return kSigning
def get_start_time(t):
start = t.replace(minute=t.minute - t.minute % 15, second=0, microsecond=0)
return calendar.timegm(start.utctimetuple()) * 1000;
def get_end_time(t):
end = t.replace(minute=t.minute - t.minute % 15, second=0, microsecond=0)
return calendar.timegm(end.utctimetuple()) * 1000 - 1;
start_time = get_start_time(datetime.datetime(2016, 1, 1, 0, 0, 0, 0))
end_time = get_end_time(datetime.datetime(2016, 2, 1, 0, 0, 0, 0))
# Request parameters for listing Utapi bucket metrics--passed in a JSON block.
bucketListing = {
'buckets': [ 'utapi-test' ],
'timeRange': [ start_time, end_time ],
}
request_parameters = json.dumps(bucketListing)
payload_hash = hashlib.sha256(request_parameters).hexdigest()
canonical_headers = \
'content-type:{0}\nhost:{1}\nx-amz-content-sha256:{2}\nx-amz-date:{3}\n' \
.format(content_type, host, payload_hash, amz_date)
signed_headers = 'content-type;host;x-amz-content-sha256;x-amz-date'
canonical_request = '{0}\n{1}\n{2}\n{3}\n{4}\n{5}' \
.format(method, canonical_uri, canonical_querystring, canonical_headers,
signed_headers, payload_hash)
credential_scope = '{0}/{1}/{2}/aws4_request' \
.format(date_stamp, region, service)
string_to_sign = '{0}\n{1}\n{2}\n{3}' \
.format(algorithm, amz_date, credential_scope,
hashlib.sha256(canonical_request).hexdigest())
signing_key = getSignatureKey(secret_key, date_stamp, region, service)
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'),
hashlib.sha256).hexdigest()
authorization_header = \
'{0} Credential={1}/{2}, SignedHeaders={3}, Signature={4}' \
.format(algorithm, access_key, credential_scope, signed_headers, signature)
# The 'host' header is added automatically by the Python 'requests' library.
headers = {
'Content-Type': content_type,
'X-Amz-Content-Sha256': payload_hash,
'X-Amz-Date': amz_date,
'Authorization': authorization_header
}
endpoint = 'http://' + host + canonical_uri + '?' + canonical_querystring;
r = requests.post(endpoint, data=request_parameters, headers=headers)
print (r.text)
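Note that this deleted example targets Python 2: under Python 3, `hashlib.sha256` requires `bytes`, so the two places that hash a `str` need an explicit encode. A hedged sketch of just those lines:

```python
# Python 3 adaptation (assumed): hash bytes rather than str.
payload_hash = hashlib.sha256(request_parameters.encode('utf-8')).hexdigest()
# ...and later, when building string_to_sign:
hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()
```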

View File

@@ -3,13 +3,10 @@ FROM warp10io/warp10:2.6.0
ENV S6_VERSION 2.0.0.1
ENV WARP10_CONF_TEMPLATES ${WARP10_HOME}/conf.templates/standalone
-ENV SENSISION_DATA_DIR /data/sensision

# Modify Warp 10 default config
ENV standalone.host 0.0.0.0
-ENV standalone.port 4802
ENV warpscript.repository.directory /usr/local/share/warpscript
-ENV warp.token.file /static.tokens
ENV warpscript.extension.protobuf io.warp10.ext.protobuf.ProtobufWarpScriptExtension
ENV warpscript.extension.macrovalueencoder 'io.warp10.continuum.ingress.MacroValueEncoder$Extension'
# ENV warpscript.extension.debug io.warp10.script.ext.debug.DebugWarpScriptExtension
@@ -23,5 +20,5 @@ ADD https://dl.bintray.com/senx/maven/io/warp10/warp10-ext-protobuf/1.1.0-uberja
ADD ./images/warp10/s6 /etc
ADD ./warpscript /usr/local/share/warpscript
-ADD ./images/warp10/static.tokens /

CMD /init

View File

@@ -15,8 +15,3 @@ ensureDir "$WARP10_DATA_DIR/conf"
ensureDir "$WARP10_DATA_DIR/data/leveldb"
ensureDir "$WARP10_DATA_DIR/data/datalog"
ensureDir "$WARP10_DATA_DIR/data/datalog_done"
-ensureDir "$SENSISION_DATA_DIR"
-ensureDir "$SENSISION_DATA_DIR/logs"
-ensureDir "$SENSISION_DATA_DIR/conf"
-ensureDir "/var/run/sensision"

View File

@@ -1,6 +1,5 @@
#!/usr/bin/with-contenv sh
-echo "Installing warp 10 config"
for path in $WARP10_CONF_TEMPLATES/*; do
    name="$(basename $path .template)"
    if [ ! -f "$WARP10_DATA_DIR/conf/$name" ]; then
@@ -8,7 +7,3 @@ for path in $WARP10_CONF_TEMPLATES/*; do
        echo "Copied $name to $WARP10_DATA_DIR/conf/$name"
    fi
done
-echo "Installing sensision config"
-cp ${SENSISION_HOME}/templates/sensision.template ${SENSISION_DATA_DIR}/conf/sensision.conf
-cp ${SENSISION_HOME}/templates/log4j.properties.template ${SENSISION_DATA_DIR}/conf/log4j.properties

View File

@@ -15,9 +15,3 @@ ensure_link "$WARP10_HOME/etc/conf.d" "$WARP10_DATA_DIR/conf"
ensure_link "$WARP10_HOME/leveldb" "$WARP10_DATA_DIR/data/leveldb"
ensure_link "$WARP10_HOME/datalog" "$WARP10_DATA_DIR/data/datalog"
ensure_link "$WARP10_HOME/datalog_done" "$WARP10_DATA_DIR/data/datalog_done"
-ensure_link "$SENSISION_HOME/etc" "${SENSISION_DATA_DIR}/conf"
-ensure_link "$SENSISION_HOME/logs" "${SENSISION_DATA_DIR}/logs"
-ensure_link /var/run/sensision/metrics ${SENSISION_HOME}/metrics
-ensure_link /var/run/sensision/targets ${SENSISION_HOME}/targets
-ensure_link /var/run/sensision/queued ${SENSISION_HOME}/queued

View File

@@ -1,9 +0,0 @@
#!/usr/bin/with-contenv sh
chmod 1733 "$SENSISION_HOME/metrics"
chmod 1733 "$SENSISION_HOME/targets"
chmod 700 "$SENSISION_HOME/queued"
sed -i 's/@warp:WriteToken@/'"writeTokenStatic"'/' $SENSISION_DATA_DIR/conf/sensision.conf
sed -i -e "s_^sensision\.home.*_sensision\.home = ${SENSISION_HOME}_" $SENSISION_DATA_DIR/conf/sensision.conf
sed -i -e 's_^sensision\.qf\.url\.default.*_sensision\.qf\.url\.default=http://127.0.0.1:4802/api/v0/update_' $SENSISION_DATA_DIR/conf/sensision.conf

View File

@@ -1,26 +0,0 @@
#!/usr/bin/with-contenv sh
JAVA="/usr/bin/java"
JAVA_OPTS=""
VERSION=1.0.21
SENSISION_CONFIG=${SENSISION_DATA_DIR}/conf/sensision.conf
SENSISION_JAR=${SENSISION_HOME}/bin/sensision-${VERSION}.jar
SENSISION_CP=${SENSISION_HOME}/etc:${SENSISION_JAR}
SENSISION_CLASS=io.warp10.sensision.Main
export MALLOC_ARENA_MAX=1
if [ -z "$SENSISION_HEAP" ]; then
SENSISION_HEAP=64m
fi
SENSISION_CMD="${JAVA} ${JAVA_OPTS} -Xmx${SENSISION_HEAP} -Dsensision.server.port=0 ${SENSISION_OPTS} -Dsensision.config=${SENSISION_CONFIG} -cp ${SENSISION_CP} ${SENSISION_CLASS}"
if [ -n "$ENABLE_SENSISION" ]; then
echo "Starting Sensision with $SENSISION_CMD ..."
exec $SENSISION_CMD | tee -a ${SENSISION_HOME}/logs/sensision.log
else
echo "Sensision is disabled"
# wait indefinitely
exec tail -f /dev/null
fi

View File

@@ -1,37 +1,13 @@
#!/usr/bin/with-contenv sh
-export SENSISIONID=warp10
-export MALLOC_ARENA_MAX=1

JAVA="/usr/bin/java"
WARP10_JAR=${WARP10_HOME}/bin/warp10-${WARP10_VERSION}.jar
WARP10_CLASS=io.warp10.standalone.Warp
WARP10_CP="${WARP10_HOME}/etc:${WARP10_JAR}:${WARP10_HOME}/lib/*"
WARP10_CONFIG_DIR="$WARP10_DATA_DIR/conf"
CONFIG_FILES="$(find ${WARP10_CONFIG_DIR} -not -path "*/\.*" -name "*.conf" | sort | tr '\n' ' ' 2> /dev/null)"
-LOG4J_CONF=${WARP10_HOME}/etc/log4j.properties

-if [ -z "$WARP10_HEAP" ]; then
-    WARP10_HEAP=1g
-fi
-if [ -z "$WARP10_HEAP_MAX" ]; then
-    WARP10_HEAP_MAX=4g
-fi
-JAVA_OPTS="-Djava.awt.headless=true -Xms${WARP10_HEAP} -Xmx${WARP10_HEAP_MAX} -XX:+UseG1GC ${JAVA_OPTS}"
-SENSISION_OPTS=
-if [ -n "$ENABLE_SENSISION" ]; then
-    _SENSISION_LABELS=
-    # Expects a comma seperated list of key=value ex key=value,foo=bar
-    if [ -n "$SENSISION_LABELS" ]; then
-        _SENSISION_LABELS="-Dsensision.default.labels=$SENSISION_LABELS"
-    fi
-    SENSISION_OPTS="-Dsensision.server.port=0 ${_SENSISION_LABELS} -Dsensision.events.dir=/var/run/sensision/metrics -Dfile.encoding=UTF-8"
-fi
-WARP10_CMD="${JAVA} -Dlog4j.configuration=file:${LOG4J_CONF} ${JAVA_OPTS} ${SENSISION_OPTS} -cp ${WARP10_CP} ${WARP10_CLASS} ${CONFIG_FILES}"
+WARP10_CMD="${JAVA} ${JAVA_OPTS} -cp ${WARP10_CP} ${WARP10_CLASS} ${CONFIG_FILES}"

echo "Starting Warp 10 with $WARP10_CMD ..."
exec $WARP10_CMD | tee -a ${WARP10_HOME}/logs/warp10.log

View File

@@ -1,9 +0,0 @@
token.write.0.name=writeTokenStatic
token.write.0.producer=42424242-4242-4242-4242-424242424242
token.write.0.owner=42424242-4242-4242-4242-424242424242
token.write.0.app=utapi
token.read.0.name=readTokenStatic
token.read.0.owner=42424242-4242-4242-4242-424242424242
token.read.0.app=utapi

View File

@@ -1,4 +1,5 @@
/* eslint-disable global-require */
// eslint-disable-line strict
let toExport;

View File

@@ -81,17 +81,6 @@ class Datastore {
        return this._client.call((backend, done) => backend.incr(key, done), cb);
    }

-    /**
-     * increment value of a key by the provided value
-     * @param {string} key - key holding the value
-     * @param {string} value - value containing the data
-     * @param {callback} cb - callback
-     * @return {undefined}
-     */
-    incrby(key, value, cb) {
-        return this._client.incrby(key, value, cb);
-    }

    /**
     * decrement value of a key by 1
     * @param {string} key - key holding the value

View File

@@ -97,7 +97,6 @@ const metricObj = {
    buckets: 'bucket',
    accounts: 'accountId',
    users: 'userId',
-    location: 'location',
};

class UtapiClient {
@@ -121,17 +120,13 @@ class UtapiClient {
        const api = (config || {}).logApi || werelogs;
        this.log = new api.Logger('UtapiClient');
        // By default, we push all resource types
-        this.metrics = ['buckets', 'accounts', 'users', 'service', 'location'];
+        this.metrics = ['buckets', 'accounts', 'users', 'service'];
        this.service = 's3';
        this.disableOperationCounters = false;
        this.enabledOperationCounters = [];
        this.disableClient = true;

        if (config) {
-            this.disableClient = false;
-            this.expireMetrics = config.expireMetrics;
-            this.expireMetricsTTL = config.expireMetricsTTL || 0;
            if (config.metrics) {
                const message = 'invalid property in UtapiClient configuration';
                assert(Array.isArray(config.metrics), `${message}: metrics `
@@ -159,6 +154,9 @@ class UtapiClient {
            if (config.enabledOperationCounters) {
                this.enabledOperationCounters = config.enabledOperationCounters;
            }
+            this.disableClient = false;
+            this.expireMetrics = config.expireMetrics;
+            this.expireMetricsTTL = config.expireMetricsTTL || 0;
        }
    }
@@ -1116,69 +1114,6 @@ class UtapiClient {
        });
    }

-    /**
-     *
-     * @param {string} location - name of data location
-     * @param {number} updateSize - size in bytes to update location metric by,
-     * could be negative, indicating deleted object
-     * @param {string} reqUid - Request Unique Identifier
-     * @param {function} callback - callback to call
-     * @return {undefined}
-     */
-    pushLocationMetric(location, updateSize, reqUid, callback) {
-        const log = this.log.newRequestLoggerFromSerializedUids(reqUid);
-        const params = {
-            level: 'location',
-            service: 's3',
-            location,
-        };
-        this._checkMetricTypes(params);
-        const action = (updateSize < 0) ? 'decrby' : 'incrby';
-        const size = (updateSize < 0) ? -updateSize : updateSize;
-        return this.ds[action](generateKey(params, 'locationStorage'), size,
-            err => {
-                if (err) {
-                    log.error('error pushing metric', {
-                        method: 'UtapiClient.pushLocationMetric',
-                        error: err,
-                    });
-                    return callback(errors.InternalError);
-                }
-                return callback();
-            });
-    }
-
-    /**
-     *
-     * @param {string} location - name of data backend to get metric for
-     * @param {string} reqUid - Request Unique Identifier
-     * @param {function} callback - callback to call
-     * @return {undefined}
-     */
-    getLocationMetric(location, reqUid, callback) {
-        const log = this.log.newRequestLoggerFromSerializedUids(reqUid);
-        const params = {
-            level: 'location',
-            service: 's3',
-            location,
-        };
-        const redisKey = generateKey(params, 'locationStorage');
-        return this.ds.get(redisKey, (err, bytesStored) => {
-            if (err) {
-                log.error('error getting metric', {
-                    method: 'UtapiClient: getLocationMetric',
-                    error: err,
-                });
-                return callback(errors.InternalError);
-            }
-            // if err and bytesStored are null, key does not exist yet
-            if (bytesStored === null) {
-                return callback(null, 0);
-            }
-            return callback(null, bytesStored);
-        });
-    }

    /**
     * Get storage used by bucket/account/user/service
     * @param {object} params - params for the metrics

View File

@@ -32,7 +32,7 @@ def get_options():
    parser.add_argument("-n", "--sentinel-cluster-name", default='scality-s3', help="Redis cluster name")
    parser.add_argument("-s", "--bucketd-addr", default='http://127.0.0.1:9000', help="URL of the bucketd server")
    parser.add_argument("-w", "--worker", default=10, help="Number of workers")
-    parser.add_argument("-b", "--bucket", default=None, help="Bucket to be processed")
+    parser.add_argument("-b", "--bucket", default=False, help="Bucket to be processed")
    return parser.parse_args()

def chunks(iterable, size):
@@ -119,7 +119,7 @@ class BucketDClient:
            else:
                is_truncated = len(payload) > 0

-    def list_buckets(self, name = None):
+    def list_buckets(self):
        def get_next_marker(p):
            if p is None:
@@ -135,14 +135,8 @@
            buckets = []
            for result in payload['Contents']:
                match = re.match("(\w+)..\|..(\w+.*)", result['key'])
-                bucket = Bucket(*match.groups())
-                if name is None or bucket.name == name:
-                    buckets.append(bucket)
-            if buckets:
-                yield buckets
-                if name is not None:
-                    # Break on the first matching bucket if a name is given
-                    break
+                buckets.append(Bucket(*match.groups()))
+            yield buckets

    def list_mpus(self, bucket):
@@ -334,15 +328,12 @@ def log_report(resource, name, obj_count, total_size):
if __name__ == '__main__':
    options = get_options()
-    if options.bucket is not None and not options.bucket.strip():
-        print('You must provide a bucket name with the --bucket flag')
-        sys.exit(1)
    bucket_client = BucketDClient(options.bucketd_addr)
    redis_client = get_redis_client(options)
    account_reports = {}
    observed_buckets = set()
    with ThreadPoolExecutor(max_workers=options.worker) as executor:
-        for batch in bucket_client.list_buckets(options.bucket):
+        for batch in bucket_client.list_buckets():
            bucket_reports = {}
            jobs = [executor.submit(index_bucket, bucket_client, b) for b in batch]
            for job in futures.as_completed(jobs):
@@ -358,15 +349,29 @@
                log_report('buckets', bucket, report['obj_count'], report['total_size'])
            pipeline.execute()

-    recorded_buckets = set(get_resources_from_redis(redis_client, 'buckets'))
-    if options.bucket is None:
-        stale_buckets = recorded_buckets.difference(observed_buckets)
-    elif observed_buckets and options.bucket in recorded_buckets:
-        # The provided bucket does not exist, so clean up any metrics
-        stale_buckets = { options.bucket }
-    else:
-        stale_buckets = set()
+    # Update total account reports in chunks
+    for chunk in chunks(account_reports.items(), ACCOUNT_UPDATE_CHUNKSIZE):
+        pipeline = redis_client.pipeline(transaction=False)  # No transaction to reduce redis load
+        for userid, report in chunk:
+            update_redis(pipeline, 'accounts', userid, report['obj_count'], report['total_size'])
+            log_report('accounts', userid, report['obj_count'], report['total_size'])
+        pipeline.execute()
+
+    observed_accounts = set(account_reports.keys())
+    recorded_accounts = set(get_resources_from_redis(redis_client, 'accounts'))
+    recorded_buckets = set(get_resources_from_redis(redis_client, 'buckets'))
+
+    # Stale accounts and buckets are ones that do not appear in the listing, but have recorded values
+    stale_accounts = recorded_accounts.difference(observed_accounts)
+    _log.info('Found %s stale accounts' % len(stale_accounts))
+    for chunk in chunks(stale_accounts, ACCOUNT_UPDATE_CHUNKSIZE):
+        pipeline = redis_client.pipeline(transaction=False)  # No transaction to reduce redis load
+        for account in chunk:
+            update_redis(pipeline, 'accounts', account, 0, 0)
+            log_report('accounts', account, 0, 0)
+        pipeline.execute()
+
+    stale_buckets = recorded_buckets.difference(observed_buckets)
    _log.info('Found %s stale buckets' % len(stale_buckets))
    for chunk in chunks(stale_buckets, ACCOUNT_UPDATE_CHUNKSIZE):
        pipeline = redis_client.pipeline(transaction=False)  # No transaction to reduce redis load
@@ -374,26 +379,3 @@
            update_redis(pipeline, 'buckets', bucket, 0, 0)
            log_report('buckets', bucket, 0, 0)
        pipeline.execute()
-
-    # Account metrics are not updated if a bucket is specified
-    if options.bucket is None:
-        # Update total account reports in chunks
-        for chunk in chunks(account_reports.items(), ACCOUNT_UPDATE_CHUNKSIZE):
-            pipeline = redis_client.pipeline(transaction=False)  # No transaction to reduce redis load
-            for userid, report in chunk:
-                update_redis(pipeline, 'accounts', userid, report['obj_count'], report['total_size'])
-                log_report('accounts', userid, report['obj_count'], report['total_size'])
-            pipeline.execute()
-
-        observed_accounts = set(account_reports.keys())
-        recorded_accounts = set(get_resources_from_redis(redis_client, 'accounts'))
-
-        # Stale accounts and buckets are ones that do not appear in the listing, but have recorded values
-        stale_accounts = recorded_accounts.difference(observed_accounts)
-        _log.info('Found %s stale accounts' % len(stale_accounts))
-        for chunk in chunks(stale_accounts, ACCOUNT_UPDATE_CHUNKSIZE):
-            pipeline = redis_client.pipeline(transaction=False)  # No transaction to reduce redis load
-            for account in chunk:
-                update_redis(pipeline, 'accounts', account, 0, 0)
-                log_report('accounts', account, 0, 0)
-            pipeline.execute()
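Both sides of this hunk push account totals and stale-resource resets through `redis_client.pipeline(transaction=False)` in `ACCOUNT_UPDATE_CHUNKSIZE` batches, trading atomicity for lower Redis load. A self-contained sketch of that pattern with redis-py (the key layout here is invented for illustration):

```python
import itertools

import redis  # pip install redis

def chunks(iterable, size):
    # Same shape as the script's helper: yield fixed-size batches.
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

client = redis.Redis()
reports = {'account-a': (10, 4096), 'account-b': (20, 8192)}

for chunk in chunks(reports.items(), 500):
    # transaction=False pipelines commands without MULTI/EXEC,
    # which is cheaper for bulk, non-atomic updates.
    pipe = client.pipeline(transaction=False)
    for account, (obj_count, total_size) in chunk:
        pipe.hset(f'report:{account}', mapping={
            'obj_count': obj_count,
            'total_size': total_size,
        })
    pipe.execute()
```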

View File

@@ -65,10 +65,10 @@ const keys = {
 */
function getSchemaPrefix(params, timestamp) {
    const {
-        bucket, accountId, userId, level, service, location,
+        bucket, accountId, userId, level, service,
    } = params;
    // `service` property must remain last because other objects also include it
-    const id = bucket || accountId || userId || location || service;
+    const id = bucket || accountId || userId || service;
    const prefix = timestamp ? `${service}:${level}:${timestamp}:${id}:`
        : `${service}:${level}:${id}:`;
    return prefix;
@@ -83,13 +83,9 @@ function getSchemaPrefix(params, timestamp) {
 */
function generateKey(params, metric, timestamp) {
    const prefix = getSchemaPrefix(params, timestamp);
-    if (params.location) {
-        return `${prefix}locationStorage`;
-    }
    return keys[metric](prefix);
}

/**
 * Returns a list of the counters for a metric type
 * @param {object} params - object with metric type and id as a property

View File

@@ -51,10 +51,7 @@ class RedisClient extends EventEmitter {
            Object.values(this._inFlightTimeouts)
                .forEach(clearTimeout);
        }
-        if (this._redis !== null) {
-            await this._redis.quit();
-            this._redis = null;
-        }
+        await this._redis.quit();
    }, callback);
}
@@ -106,26 +103,29 @@
        this.emit('error', error);
    }

    _createCommandTimeout() {
        let timer;
-        let onTimeout;

        const cancelTimeout = jsutil.once(() => {
            clearTimeout(timer);
-            this.off('timeout', onTimeout);
            this._inFlightTimeouts.delete(timer);
        });

        const timeout = new Promise((_, reject) => {
-            timer = setTimeout(this.emit.bind(this, 'timeout'), COMMAND_TIMEOUT);
+            timer = setTimeout(
+                () => {
+                    this.emit('timeout');
+                    this._initClient();
+                },
+                COMMAND_TIMEOUT,
+            );
            this._inFlightTimeouts.add(timer);
-            onTimeout = () => {
+            this.once('timeout', () => {
                moduleLogger.warn('redis command timed out');
                cancelTimeout();
-                this._initClient();
                reject(errors.OperationTimedOut);
-            };
-            this.once('timeout', onTimeout);
+            });
        });

        return { timeout, cancelTimeout };

View File

@@ -26,8 +26,8 @@ async function listMetric(ctx, params) {
    // A separate request will be made to warp 10 per requested resource
    const results = await Promise.all(
-        resources.map(async ({ resource, id }) => {
-            const labels = { [labelName]: id };
+        resources.map(async resource => {
+            const labels = { [labelName]: resource };
            const options = {
                params: {
                    start,

View File

@@ -5,7 +5,7 @@ const { ipCheck } = require('arsenal');
const config = require('../config');
const { logger, buildRequestLogger } = require('../utils');
const errors = require('../errors');
-const { translateAndAuthorize } = require('../vault');
+const { authenticateRequest, vault } = require('../vault');

const oasOptions = {
    controllers: path.join(__dirname, './API/'),
@@ -44,17 +44,6 @@ function loggerMiddleware(req, res, next) {
    return next();
}

-function responseLoggerMiddleware(req, res, next) {
-    const info = {
-        httpCode: res.statusCode,
-        httpMessage: res.statusMessage,
-    };
-    req.logger.end('finished handling request', info);
-    if (next !== undefined) {
-        next();
-    }
-}
-
// next is purposely not called as all error responses are handled here
// eslint-disable-next-line no-unused-vars
function errorMiddleware(err, req, res, next) {
@@ -81,7 +70,15 @@ function errorMiddleware(err, req, res, next) {
            message,
        },
    });
-    responseLoggerMiddleware(req, res);
+}
+
+function responseLoggerMiddleware(req, res, next) {
+    const info = {
+        httpCode: res.statusCode,
+        httpMessage: res.statusMessage,
+    };
+    req.logger.end('finished handling request', info);
+    return next();
}

// eslint-disable-next-line no-unused-vars
@@ -114,7 +111,7 @@ async function authV4Middleware(request, response, params) {
    let authorizedResources;

    try {
-        [passed, authorizedResources] = await translateAndAuthorize(request, action, params.level, requestedResources);
+        [passed, authorizedResources] = await authenticateRequest(request, action, params.level, requestedResources);
    } catch (error) {
        request.logger.error('error during authentication', { error });
        throw errors.InternalError;
@@ -125,14 +122,17 @@
        throw errors.AccessDenied;
    }

-    switch (request.ctx.operationId) {
-    case 'listMetrics':
-        params.body[params.level] = authorizedResources;
-        break;
-    default:
-        [params.resource] = authorizedResources;
-        break;
+    if (params.level === 'accounts') {
+        request.logger.debug('converting account ids to canonical ids');
+        authorizedResources = await vault.getCanonicalIds(
+            authorizedResources,
+            request.logger.logger,
+        );
+    }
+
+    // authorizedResources is only defined on non-account credentials
+    if (request.ctx.operationId === 'listMetrics' && authorizedResources !== undefined) {
+        params.body[params.level] = authorizedResources;
    }
}

View File

@@ -1,28 +1,22 @@
const assert = require('assert');
-const async = require('async');
const BaseTask = require('./BaseTask');
const { UtapiMetric } = require('../models');
const config = require('../config');
-const { checkpointLagSecs } = require('../constants');
const {
-    LoggerContext, shardFromTimestamp, convertTimestamp, InterpolatedClock, now,
+    LoggerContext, shardFromTimestamp, convertTimestamp, InterpolatedClock,
} = require('../utils');
+const { checkpointLagSecs } = require('../constants');

const logger = new LoggerContext({
    module: 'IngestShard',
});

+const now = () => convertTimestamp(new Date().getTime());
const checkpointLagMicroseconds = convertTimestamp(checkpointLagSecs);

class IngestShardTask extends BaseTask {
    constructor(...options) {
-        super({
-            warp10: {
-                requestTimeout: 30000,
-                connectTimeout: 30000,
-            },
-            ...options,
-        });
+        super(...options);
        this._defaultSchedule = config.ingestionSchedule;
        this._defaultLag = config.ingestionLagSeconds;
    }
@@ -41,7 +35,7 @@ class IngestShardTask extends BaseTask {
            return;
        }
-        await async.eachLimit(toIngest, 10,
+        await Promise.all(toIngest.map(
            async shard => {
                if (await this._cache.shardExists(shard)) {
                    const metrics = await this._cache.getMetricsForShard(shard);
@@ -74,7 +68,8 @@
            } else {
                logger.warn('shard does not exist', { shard });
            }
-        });
+        },
+        ));
    }
}

View File

@@ -141,20 +141,7 @@ class MigrateTask extends BaseTask {
            timestamp,
            timestamp,
        ));
-        let numberOfObjects;
-        if (numberOfObjectsResp.length === 1) {
-            numberOfObjects = MigrateTask._parseMetricValue(numberOfObjectsResp[0]);
-        } else {
-            numberOfObjects = numberOfObjectsOffset;
-            logger.warn('Could not retrieve value for numberOfObjects, falling back to last seen value',
-                {
-                    metricLevel: level,
-                    resource,
-                    metricTimestamp: timestamp,
-                    lastSeen: numberOfObjectsOffset,
-                });
-        }
+        const numberOfObjects = MigrateTask._parseMetricValue(numberOfObjectsResp[0]);

        let incomingBytes = 0;
        let outgoingBytes = 0;
@@ -196,7 +183,7 @@
            stop: -1,
        });
-        if (resp.result && (resp.result.length === 0 || resp.result[0] === '' || resp.result[0] === '[]')) {
+        if (resp.result && (resp.result.length === 0 || resp.result[0] === '')) {
            return null;
        }

View File

@@ -20,7 +20,7 @@ class InterpolatedClock {
    }

    getTs() {
-        const ts = Date.now();
+        const ts = new Date().now();
        if (ts === this._now) {
            // If this is the same millisecond as the last call
            this._step += 1;

View File

@@ -2,7 +2,6 @@ const assert = require('assert');
const { auth, policies } = require('arsenal');
const vaultclient = require('vaultclient');
const config = require('./config');
-const errors = require('./errors');

/**
@class Vault
@@ -84,14 +83,7 @@
                    reject(err);
                    return;
                }
-                if (!res.message || !res.message.body) {
-                    reject(errors.InternalError);
-                    return;
-                }
-                resolve(res.message.body.map(acc => ({
-                    resource: acc.accountId,
-                    id: acc.canonicalId,
-                })));
+                resolve(res);
            }));
    }
}
@@ -99,14 +91,6 @@
const vault = new Vault(config);
auth.setHandler(vault);

-async function translateResourceIds(level, resources, log) {
-    if (level === 'accounts') {
-        return vault.getCanonicalIds(resources, log);
-    }
-    return resources.map(resource => ({ resource, id: resource }));
-}
-
async function authenticateRequest(request, action, level, resources) {
    const policyContext = new policies.RequestContext(
        request.headers,
@@ -130,11 +114,10 @@ async function authenticateRequest(request, action, level, resources) {
            return;
        }
        // Will only have res if request is from a user rather than an account
-        let authorizedResources = resources;
        if (res) {
            try {
-                authorizedResources = res.reduce(
-                    (authed, result) => {
+                const authorizedResources = (res || [])
+                    .reduce((authed, result) => {
                        if (result.isAllowed) {
                            // result.arn should be of format:
                            // arn:scality:utapi:::resourcetype/resource
@@ -145,32 +128,24 @@
                            request.logger.trace('access granted for resource', { resource });
                        }
                        return authed;
-                    }, [],
-                );
+                    }, []);
+                resolve([
+                    authorizedResources.length !== 0,
+                    authorizedResources,
+                ]);
            } catch (err) {
                reject(err);
            }
        } else {
            request.logger.trace('granted access to all resources');
+            resolve([true]);
        }
-        resolve([
-            authorizedResources.length !== 0,
-            authorizedResources,
-        ]);
    }, 's3', [policyContext]);
    });
}

-async function translateAndAuthorize(request, action, level, resources) {
-    const [authed, authorizedResources] = await authenticateRequest(request, action, level, resources);
-    const translated = await translateResourceIds(level, authorizedResources, request.logger.logger);
-    return [authed, translated];
-}
-
module.exports = {
    authenticateRequest,
-    translateAndAuthorize,
    Vault,
    vault,
};

View File

@@ -3,7 +3,7 @@
    "engines": {
        "node": ">=10.19.0"
    },
-    "version": "8.1.0",
+    "version": "7.8.0",
    "description": "API for tracking resource utilization and reporting metrics",
    "main": "index.js",
    "repository": {
@@ -34,7 +34,7 @@
        "node-schedule": "^1.3.2",
        "oas-tools": "^2.1.8",
        "uuid": "^3.3.2",
-        "vaultclient": "scality/vaultclient#ff9e92f",
+        "vaultclient": "scality/vaultclient#21d03b1",
        "werelogs": "scality/werelogs#0a4c576"
    },
    "devDependencies": {

View File

@@ -266,10 +266,6 @@ class Router {
     */
    _processSecurityChecks(utapiRequest, route, cb) {
        const log = utapiRequest.getLog();
-        if (process.env.UTAPI_AUTH === 'false') {
-            // Zenko route request does not need to go through Vault
-            return this._startRequest(utapiRequest, route, cb);
-        }
        return this._authSquared(utapiRequest, err => {
            if (err) {
                log.trace('error from vault', { errors: err });

View File

@@ -21,9 +21,6 @@ const config = {
    localCache: redisLocal,
    component: 's3',
};
-const location = 'foo-backend';
-const incrby = 100;
-const decrby = -30;

function isSortedSetKey(key) {
    return key.endsWith('storageUtilized') || key.endsWith('numberOfObjects');
@@ -79,29 +76,6 @@ function setMockData(data, timestamp, cb) {
    return cb();
}

-function getLocationObject(bytesValue) {
-    const obj = {};
-    obj[`s3:location:${location}:locationStorage`] = `${bytesValue}`;
-    return obj;
-}
-
-function testLocationMetric(c, params, expected, cb) {
-    const { location, updateSize } = params;
-    if (updateSize) {
-        c.pushLocationMetric(location, updateSize, REQUID, err => {
-            assert.equal(err, null);
-            assert.deepStrictEqual(memoryBackend.data, expected);
-            return cb();
-        });
-    } else {
-        c.getLocationMetric(location, REQUID, (err, bytesStored) => {
-            assert.equal(err, null);
-            assert.strictEqual(bytesStored, expected);
-            return cb();
-        });
-    }
-}

describe('UtapiClient:: enable/disable client', () => {
    it('should disable client when no redis config is provided', () => {
        const c = new UtapiClient();
@@ -773,27 +747,3 @@ tests.forEach(test => {
        });
    });
});
-
-describe('UtapiClient:: location quota metrics', () => {
-    beforeEach(function beFn() {
-        this.currentTest.c = new UtapiClient(config);
-        this.currentTest.c.setDataStore(ds);
-    });
-
-    afterEach(() => memoryBackend.flushDb());
-
-    it('should increment location metric', function itFn(done) {
-        const expected = getLocationObject(incrby);
-        testLocationMetric(this.test.c, { location, updateSize: incrby },
-            expected, done);
-    });
-
-    it('should decrement location metric', function itFn(done) {
-        const expected = getLocationObject(decrby);
-        testLocationMetric(this.test.c, { location, updateSize: decrby },
-            expected, done);
-    });
-
-    it('should list location metric', function itFn(done) {
-        const expected = 0;
-        testLocationMetric(this.test.c, { location }, expected, done);
-    });
-});

View File

@@ -38,16 +38,14 @@ describe('Test middleware', () => {
    });

    describe('test errorMiddleware', () => {
-        let req;
        let resp;

        beforeEach(() => {
-            req = templateRequest();
            resp = new ExpressResponseStub();
        });

        it('should set a default code and message', () => {
-            middleware.errorMiddleware({}, req, resp);
+            middleware.errorMiddleware({}, null, resp);
            assert.strictEqual(resp._status, 500);
            assert.deepStrictEqual(resp._body, {
                error: {
@@ -58,7 +56,7 @@
        });

        it('should set the correct info from an error', () => {
-            middleware.errorMiddleware({ code: 123, message: 'Hello World!', utapiError: true }, req, resp);
+            middleware.errorMiddleware({ code: 123, message: 'Hello World!', utapiError: true }, null, resp);
            assert.deepStrictEqual(resp._body, {
                error: {
                    code: '123',
@@ -68,7 +66,7 @@
        });

        it("should replace an error's message if it's internal and not in development mode", () => {
-            middleware.errorMiddleware({ code: 123, message: 'Hello World!' }, req, resp);
+            middleware.errorMiddleware({ code: 123, message: 'Hello World!' }, null, resp);
            assert.deepStrictEqual(resp._body, {
                error: {
                    code: '123',
@@ -76,16 +74,5 @@
                },
            });
        });
-
-        it('should call responseLoggerMiddleware after response', () => {
-            const spy = sinon.spy();
-            req.logger.end = spy;
-            resp.statusMessage = 'Hello World!';
-            middleware.errorMiddleware({ code: 123 }, req, resp);
-            assert(spy.calledOnceWith('finished handling request', {
-                httpCode: 123,
-                httpMessage: 'Hello World!',
-            }));
-        });
    });
});

View File

@@ -1,38 +0,0 @@
const assert = require('assert');
const sinon = require('sinon');
const { InterpolatedClock } = require('../../../../libV2/utils');
describe('Test InterpolatedClock', () => {
let fakeClock;
let iClock;
beforeEach(() => {
fakeClock = sinon.useFakeTimers();
iClock = new InterpolatedClock();
});
afterEach(() => {
fakeClock.restore();
});
it('should get the current timestamp', () => {
const ts = iClock.getTs();
assert(Number.isInteger(ts));
assert.strictEqual(ts, 0);
});
it('should interpolate microseconds if called too fast', () => {
const initial = iClock.getTs();
const second = iClock.getTs();
assert.strictEqual(second - initial, 1);
});
it('should not interpolate if last call >= 1ms ago', () => {
const initial = iClock.getTs();
fakeClock.tick(1);
const second = iClock.getTs();
assert.strictEqual(second - initial, 1000);
});
});
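The deleted test above pins down the clock's contract: `getTs()` returns integer microsecond timestamps, two calls inside the same millisecond differ by 1, and a 1 ms tick advances the result by 1000. A minimal Python sketch of that contract (our naming; the repo's implementation is the JS class touched earlier in this diff):

```python
import time

class InterpolatedClock:
    """Monotonic microsecond timestamps; interpolates within one millisecond."""

    def __init__(self):
        self._now = None   # last observed millisecond
        self._step = 0     # microseconds interpolated inside that millisecond

    def get_ts(self):
        ts = int(time.time() * 1000)
        if ts == self._now:
            self._step += 1  # same millisecond: step forward one microsecond
        else:
            self._now = ts
            self._step = 0
        return ts * 1000 + self._step
```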

View File

@@ -8,7 +8,7 @@ function decode(type, data, includeDefaults = true) {
    if (!Type) {
        throw new Error(`Unknown type ${type}`);
    }
-    const msg = Type.decode(Buffer.from(data));
+    const msg = Type.decode(Buffer.from(data, 'hex'));
    return Type.toObject(msg, {
        longs: Number,
        defaults: includeDefaults,

View File

@@ -7,10 +7,6 @@ class ExpressResponseStub {
        this._redirect = null;
    }

-    get statusCode() {
-        return this._status;
-    }

    status(code) {
        this._status = code;
        return this;

View File

@@ -17,22 +17,18 @@ message Event {
'>
PROTOC 'proto' STORE
<%
-    'iso8859-1' ->BYTES
-    !$proto 'Event' PB->
+    HEX-> !$proto 'Event' PB->
%>
'macro' STORE

-'0a0b6d794f7065726174696f6e32096d796163636f756e74480150ffffffffffffffffff01'
-HEX-> 'iso8859-1' BYTES-> @macro
+'0a0b6d794f7065726174696f6e32096d796163636f756e74480150ffffffffffffffffff01' @macro
DUP 'op' GET 'myOperation' == ASSERT
DUP 'acc' GET 'myaccount' == ASSERT
DUP 'objD' GET 1 == ASSERT
'sizeD' GET -1 == ASSERT

-'0a0568656c6c6f120568656c6c6f1a0568656c6c6f220568656c6c6f2a0568656c6c6f320568656c6c6f3a0568656c6c6f420568656c6c6f4801500158016000'
-HEX-> 'iso8859-1' BYTES-> @macro
+'0a0568656c6c6f120568656c6c6f1a0568656c6c6f220568656c6c6f2a0568656c6c6f320568656c6c6f3a0568656c6c6f420568656c6c6f4801500158016000' @macro
DUP "op" GET "hello" == ASSERT
DUP "id" GET "hello" == ASSERT
DUP "bck" GET "hello" == ASSERT

View File

@@ -33,8 +33,7 @@ PROTOC 'proto' STORE
!$info INFO
SAVE 'context' STORE
<%
-    'iso8859-1' ->BYTES
-    !$proto 'Record' PB->
+    HEX-> !$proto 'Record' PB->
%>
<% // catch any exception
    RETHROW
@@ -46,7 +45,7 @@ PROTOC 'proto' STORE
'macro' STORE

// Unit tests
-'081e101e181e20002a0d0a097075744f626a656374101e' HEX-> 'iso8859-1' BYTES->
+'081e101e181e20002a0d0a097075744f626a656374101e'
@macro
{ 'outB' 0 'ops' { 'putObject' 30 } 'sizeD' 30 'inB' 30 'objD' 30 } == ASSERT

View File

@@ -17,7 +17,7 @@ message Event {
'>
PROTOC 'proto' STORE
<%
-    !$proto 'Event' ->PB
+    !$proto 'Event' ->PB ->HEX
%>
'macro' STORE
@@ -28,7 +28,6 @@ PROTOC 'proto' STORE
    'sizeD' -1
} @macro
-->HEX
'0a0b6d794f7065726174696f6e32096d796163636f756e74480150ffffffffffffffffff01' == ASSERT

{
@@ -46,7 +45,6 @@ PROTOC 'proto' STORE
    "outB" 0
} @macro
-->HEX
'0a0568656c6c6f120568656c6c6f1a0568656c6c6f220568656c6c6f2a0568656c6c6f320568656c6c6f3a0568656c6c6f420568656c6c6f4801500158016000' == ASSERT
$macro

View File

@@ -34,7 +34,7 @@ PROTOC 'proto' STORE
!$info INFO
SAVE 'context' STORE
<%
-    !$proto 'Record' ->PB
+    !$proto 'Record' ->PB ->HEX
%>
<% // catch any exception
    RETHROW
@@ -49,7 +49,6 @@ PROTOC 'proto' STORE
{ 'outB' 0 'ops' { 'putObject' 30 } 'sizeD' 30 'inB' 30 'objD' 30 }
@macro
-->HEX
'081e101e181e20002a0d0a097075744f626a656374101e' == ASSERT
$macro

yarn.lock

File diff suppressed because it is too large.