Compare commits


1 commit

Author:  William Abernathy
SHA1:    23583a07cf
Date:    2019-01-30 15:56:31 -08:00
Message: documentation: S3C-1922_Remove_scality/s3server_refs

    S3Server references were out of date (changed to CloudServer).
    Fixing them revealed most of the links were broken, a lot of the
    language was inelegant, and the organization made the document
    hard to follow. Hence the voluminous corrections.
4 changed files with 776 additions and 838 deletions

View File

@ -1,11 +1,14 @@
Docker
======
- `Environment Variables <#environment-variables>`__
- `Tunables and setup tips <#tunables-and-setup-tips>`__
- `Examples for continuous integration with
Docker <#continuous-integration-with-docker-hosted CloudServer>`__
- `Examples for going in production with Docker <#in-production-with-docker-hosted CloudServer>`__
- `Environment Variables <environment-variables>`__
- `Tunables and setup tips <tunables-and-setup-tips>`__
- `Examples for continuous integration with Docker
<continuous-integration-with-docker-hosted-cloudserver>`__
- `Examples for going into production with Docker
<in-production-w-a-Docker-hosted-cloudserver>`__
.. _environment-variables:
Environment Variables
---------------------
@ -15,21 +18,23 @@ S3DATA
S3DATA=multiple
^^^^^^^^^^^^^^^
Allows you to run Scality Zenko CloudServer with multiple data backends, defined
This variable enables running CloudServer with multiple data backends, defined
as regions.
When using multiple data backends, a custom ``locationConfig.json`` file is
mandatory. It will allow you to set custom regions. You will then need to
provide associated rest_endpoints for each custom region in your
``config.json`` file.
`Learn more about multiple backends configuration <../GETTING_STARTED/#location-configuration>`__
If you are using Scality RING endpoints, please refer to your customer
documentation.
For multiple data backends, a custom locationConfig.json file is required.
This file enables you to set custom regions. You must provide associated
rest_endpoints for each custom region in config.json.
Running it with an AWS S3 hosted backend
""""""""""""""""""""""""""""""""""""""""
To run CloudServer with an S3 AWS backend, you will have to add a new section
to your ``locationConfig.json`` file with the ``aws_s3`` location type:
`Learn more about multiple-backend configurations <./GETTING_STARTED#location-configuration>`__
If you are using Scality RING endpoints, refer to your customer documentation.
Running CloudServer with an AWS S3-Hosted Backend
"""""""""""""""""""""""""""""""""""""""""""""""""
To run CloudServer with an S3 AWS backend, add a new section to the
``locationConfig.json`` file with the ``aws_s3`` location type:
.. code:: json
@ -45,10 +50,9 @@ to your ``locationConfig.json`` file with the ``aws_s3`` location type:
}
(...)
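As a rough sketch, such an entry might look like the following (the location
name, bucket name, and credentials profile are placeholders; check the
location configuration reference for your version for the exact keys):

.. code:: json

    "aws-example-location": {
        "type": "aws_s3",
        "legacyAwsBehavior": true,
        "details": {
            "awsEndpoint": "s3.amazonaws.com",
            "bucketName": "your-aws-bucket",
            "bucketMatch": true,
            "credentialsProfile": "aws_example_profile"
        }
    }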
You will also have to edit your AWS credentials file to be able to use your
command line tool of choice. This file should mention credentials for all the
backends you're using. You can use several profiles when using multiple
profiles.
Edit your AWS credentials file so that your preferred command-line tool can
use it. This file must contain credentials for all the backends in use; you
can define a separate profile for each backend.
.. code:: ini
@ -59,110 +63,114 @@ profiles.
aws_access_key_id={{YOUR_ACCESS_KEY}}
aws_secret_access_key={{YOUR_SECRET_KEY}}
Just as you need to mount your locationConfig.json, you will need to mount your
AWS credentials file at run time:
``-v ~/.aws/credentials:/root/.aws/credentials`` on Linux, OS X, or Unix or
As with locationConfig.json, the AWS credentials file must be mounted at
run time: ``-v ~/.aws/credentials:/root/.aws/credentials`` on Unix-like
systems (Linux, OS X, etc.), or
``-v C:\Users\USERNAME\.aws\credentials:/root/.aws/credentials`` on Windows.
NOTE: One account can't copy to another account with a source and
destination on real AWS unless the account associated with the
access Key/secret Key pairs used for the destination bucket has rights
to get in the source bucket. ACL's would have to be updated
on AWS directly to enable this.
.. note:: One account cannot copy to another account with a source and
destination on real AWS unless the account associated with the
accessKey/secretKey pairs used for the destination bucket has source
bucket access privileges. To enable this, update ACLs directly on AWS.
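For example, with the AWS CLI you might grant the destination account read
access on the source bucket (the bucket name and canonical user IDs below are
placeholders):

.. code-block:: shell

    $ aws s3api put-bucket-acl --bucket source-bucket \
        --grant-full-control id=<source-account-canonical-id> \
        --grant-read id=<destination-account-canonical-id>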
S3BACKEND
~~~~~~~~~
S3BACKEND=file
^^^^^^^^^^^^^^
When storing file data, for it to be persistent you must mount docker volumes
for both data and metadata. See `this section <#using-docker-volumes-in-production>`__
For stored file data to persist, you must mount Docker volumes
for both data and metadata. See
`In Production with a Docker-Hosted CloudServer <in-production-w-a-Docker-hosted-cloudserver>`__
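For example, a minimal sketch (the host directories are illustrative):

.. code-block:: shell

    $ docker run -d --name cloudserver -p 8000:8000 -e S3BACKEND=file \
        -v $(pwd)/data:/usr/src/app/localData \
        -v $(pwd)/metadata:/usr/src/app/localMetadata \
        scality/cloudserver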
S3BACKEND=mem
^^^^^^^^^^^^^
This is ideal for testing - no data will remain after container is shutdown.
This is ideal for testing: no data remains after the container is shut down.
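For example:

.. code-block:: shell

    $ docker run -d --name cloudserver -p 8000:8000 -e S3BACKEND=mem scality/cloudserver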
ENDPOINT
~~~~~~~~
This variable specifies your endpoint. If you have a domain such as
new.host.com, by specifying that here, you and your users can direct s3
server requests to new.host.com.
This variable specifies the endpoint. To direct CloudServer requests to
new.host.com, for example, specify the endpoint with:
.. code-block:: shell
$ docker run -d --name s3server -p 8000:8000 -e ENDPOINT=new.host.com scality/s3server
$ docker run -d --name cloudserver -p 8000:8000 -e ENDPOINT=new.host.com scality/cloudserver
Note: In your ``/etc/hosts`` file on Linux, OS X, or Unix with root
permissions, make sure to associate 127.0.0.1 with ``new.host.com``
.. note:: On Unix-like systems (Linux, OS X, etc.), edit /etc/hosts
to associate 127.0.0.1 with new.host.com.
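For example, the resulting /etc/hosts line might read:

.. code-block:: shell

    127.0.0.1 localhost new.host.com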
SCALITY\_ACCESS\_KEY\_ID and SCALITY\_SECRET\_ACCESS\_KEY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These variables specify authentication credentials for an account named
"CustomAccount".
“CustomAccount”.
You can set credentials for many accounts by editing
``conf/authdata.json`` (see below for further info), but if you just
want to specify one set of your own, you can use these environment
variables.
Set account credentials for multiple accounts by editing conf/authdata.json
(see below for further details). To specify one set for personal use, set these
environment variables:
.. code-block:: shell
docker run -d --name s3server -p 8000:8000 -e SCALITY_ACCESS_KEY_ID=newAccessKey
-e SCALITY_SECRET_ACCESS_KEY=newSecretKey scality/s3server
$ docker run -d --name cloudserver -p 8000:8000 -e SCALITY_ACCESS_KEY_ID=newAccessKey \
-e SCALITY_SECRET_ACCESS_KEY=newSecretKey scality/cloudserver
Note: Anything in the ``authdata.json`` file will be ignored. Note: The
old ``ACCESS_KEY`` and ``SECRET_KEY`` environment variables are now
deprecated
.. note:: These variables take precedence over the contents of the
authdata.json file, which is ignored.
.. note:: The ACCESS_KEY and SECRET_KEY environment variables are
deprecated.
LOG\_LEVEL
~~~~~~~~~~
This variable allows you to change the log level: info, debug or trace.
The default is info. Debug will give you more detailed logs and trace
will give you the most detailed.
This variable changes the log level. There are three levels: info, debug,
and trace. The default is info. Debug provides more detailed logs, and trace
provides the most detailed logs.
.. code-block:: shell
$ docker run -d --name s3server -p 8000:8000 -e LOG_LEVEL=trace scality/s3server
$ docker run -d --name cloudserver -p 8000:8000 -e LOG_LEVEL=trace scality/cloudserver
SSL
~~~
This variable set to true allows you to run S3 with SSL:
Setting this variable to true runs CloudServer with SSL.
**Note1**: You also need to specify the ENDPOINT environment variable.
**Note2**: In your ``/etc/hosts`` file on Linux, OS X, or Unix with root
permissions, make sure to associate 127.0.0.1 with ``<YOUR_ENDPOINT>``
If SSL is set to true:
**Warning**: These certs, being self-signed (and the CA being generated
inside the container) will be untrusted by any clients, and could
disappear on a container upgrade. That's ok as long as it's for quick
testing. Also, best security practice for non-testing would be to use an
extra container to do SSL/TLS termination such as haproxy/nginx/stunnel
to limit what an exploit on either component could expose, as well as
certificates in a mounted volume
* The ENDPOINT environment variable must also be specified.
.. code-block:: shell
* On Unix-like systems (Linux, OS X, etc.), 127.0.0.1 must be associated with
<YOUR_ENDPOINT> in /etc/hosts.
$ docker run -d --name s3server -p 8000:8000 -e SSL=TRUE -e ENDPOINT=<YOUR_ENDPOINT>
scality/s3server
.. Warning:: Self-signed certs with a CA generated within the container are
suitable for testing purposes only. Clients cannot trust them, and they may
disappear altogether on a container upgrade. The best security practice for
production environments is to use an extra container, such as
haproxy/nginx/stunnel, for SSL/TLS termination and to pull certificates
from a mounted volume, limiting what an exploit on either component
can expose.
More information about how to use S3 server with SSL
`here <https://s3.scality.com/v1.0/page/scality-with-ssl>`__
.. code:: shell
$ docker run -d --name cloudserver -p 8000:8000 -e SSL=TRUE -e ENDPOINT=<YOUR_ENDPOINT> \
scality/cloudserver
For more information about using CloudServer with SSL, see `Using SSL <./GETTING_STARTED#Using SSL>`__.
LISTEN\_ADDR
~~~~~~~~~~~~
This variable instructs the Zenko CloudServer, and its data and metadata
components to listen on the specified address. This allows starting the data
or metadata servers as standalone services, for example.
This variable causes CloudServer and its data and metadata components to
listen on the specified address. This allows starting the data or metadata
servers as standalone services, for example.
.. code-block:: shell
.. code:: shell
$ docker run -d --name s3server-data -p 9991:9991 -e LISTEN_ADDR=0.0.0.0
scality/s3server npm run start_dataserver
$ docker run -d --name cloudserver-data -p 9991:9991 -e LISTEN_ADDR=0.0.0.0 \
scality/cloudserver npm run start_dataserver
DATA\_HOST and METADATA\_HOST
@ -172,10 +180,10 @@ These variables configure the data and metadata servers to use,
usually when they are running on another host and only starting the stateless
Zenko CloudServer.
.. code-block:: shell
.. code:: shell
$ docker run -d --name s3server -e DATA_HOST=s3server-data
-e METADATA_HOST=s3server-metadata scality/s3server npm run start_s3server
$ docker run -d --name cloudserver -e DATA_HOST=cloudserver-data \
-e METADATA_HOST=cloudserver-metadata scality/cloudserver npm run start_s3server
REDIS\_HOST
~~~~~~~~~~~
@ -183,21 +191,23 @@ REDIS\_HOST
Use this variable to connect to the Redis cache server on a host other than
localhost.
.. code-block:: shell
.. code:: shell
$ docker run -d --name s3server -p 8000:8000
-e REDIS_HOST=my-redis-server.example.com scality/s3server
$ docker run -d --name cloudserver -p 8000:8000 \
-e REDIS_HOST=my-redis-server.example.com scality/cloudserver
REDIS\_PORT
~~~~~~~~~~~
Use this variable to connect to the redis cache server on another port than
the default 6379.
Use this variable to connect to the Redis cache server on a port other
than the default 6379.
.. code-block:: shell
.. code:: shell
$ docker run -d --name s3server -p 8000:8000
-e REDIS_PORT=6379 scality/s3server
$ docker run -d --name cloudserver -p 8000:8000 \
-e REDIS_PORT=6379 scality/cloudserver
.. _tunables-and-setup-tips:
Tunables and Setup Tips
-----------------------
@ -205,61 +215,57 @@ Tunables and Setup Tips
Using Docker Volumes
~~~~~~~~~~~~~~~~~~~~
Zenko CloudServer runs with a file backend by default.
CloudServer runs with a file backend by default, meaning that data is
stored inside the CloudServer's Docker container.
So, by default, the data is stored inside your Zenko CloudServer Docker
container.
However, if you want your data and metadata to persist, you **MUST** use
Docker volumes to host your data and metadata outside your Zenko CloudServer
Docker container. Otherwise, the data and metadata will be destroyed
when you erase the container.
For data and metadata to persist, they must be hosted in Docker volumes
outside the CloudServer's Docker container. Otherwise, the data and metadata
are destroyed when the container is erased.
.. code-block:: shell
$ docker run -v $(pwd)/data:/usr/src/app/localData -v $(pwd)/metadata:/usr/src/app/localMetadata
-p 8000:8000 -d scality/s3server
$ docker run -v $(pwd)/data:/usr/src/app/localData -v $(pwd)/metadata:/usr/src/app/localMetadata \
-p 8000:8000 -d scality/cloudserver
This command mounts the host directory, ``./data``, into the container
at ``/usr/src/app/localData`` and the host directory, ``./metadata``, into
the container at ``/usr/src/app/localMetaData``. It can also be any host
mount point, like ``/mnt/data`` and ``/mnt/metadata``.
This command mounts the ./data host directory to the container
at /usr/src/app/localData and the ./metadata host directory to
the container at /usr/src/app/localMetaData.
Adding modifying or deleting accounts or users credentials
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. tip:: These host directories can be mounted to any accessible mount
point, such as /mnt/data and /mnt/metadata, for example.
1. Create locally a customized ``authdata.json`` based on our ``/conf/authdata.json``.
Adding, Modifying, or Deleting Accounts or Credentials
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2. Use `Docker
Volume <https://docs.docker.com/engine/tutorials/dockervolumes/>`__
to override the default ``authdata.json`` through a docker file mapping.
1. Create a customized authdata.json file locally based on /conf/authdata.json.
2. Use `Docker volumes <https://docs.docker.com/storage/volumes/>`__
to override the default ``authdata.json`` through a Docker file mapping.
For example:
.. code-block:: shell
$ docker run -v $(pwd)/authdata.json:/usr/src/app/conf/authdata.json -p 8000:8000 -d
scality/s3server
$ docker run -v $(pwd)/authdata.json:/usr/src/app/conf/authdata.json -p 8000:8000 -d \
scality/cloudserver
Specifying your own host name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Specifying a Host Name
~~~~~~~~~~~~~~~~~~~~~~
To specify a host name (e.g. s3.domain.name), you can provide your own
`config.json <https://github.com/scality/S3/blob/master/config.json>`__
using `Docker
Volume <https://docs.docker.com/engine/tutorials/dockervolumes/>`__.
To specify a host name (for example, s3.domain.name), provide your own
`config.json <https://github.com/scality/cloudserver/blob/master/config.json>`__
file using `Docker volumes <https://docs.docker.com/storage/volumes/>`__.
First add a new key-value pair in the restEndpoints section of your
config.json. The key in the key-value pair should be the host name you
would like to add and the value is the default location\_constraint for
this endpoint.
First, add a new key-value pair to the restEndpoints section of your
config.json. Make the key the host name you want, and the value the default
location\_constraint for this endpoint.
For example, ``s3.example.com`` is mapped to ``us-east-1`` which is one
of the ``location_constraints`` listed in your locationConfig.json file
`here <https://github.com/scality/S3/blob/master/locationConfig.json>`__.
More information about location configuration
`here <https://github.com/scality/S3/blob/master/README.md#location-configuration>`__
For more information about location configuration, see:
`GETTING STARTED <./GETTING_STARTED#location-configuration>`__
.. code:: json
@ -267,31 +273,31 @@ More information about location configuration
"localhost": "file",
"127.0.0.1": "file",
...
"s3.example.com": "us-east-1"
"cloudserver.example.com": "us-east-1"
},
Then, run your Scality S3 Server using `Docker
Volume <https://docs.docker.com/engine/tutorials/dockervolumes/>`__:
Next, run CloudServer using a `Docker volume
<https://docs.docker.com/engine/tutorials/dockervolumes/>`__:
.. code-block:: shell
$ docker run -v $(pwd)/config.json:/usr/src/app/config.json -p 8000:8000 -d scality/s3server
$ docker run -v $(pwd)/config.json:/usr/src/app/config.json -p 8000:8000 -d scality/cloudserver
Your local ``config.json`` file will override the default one through a
docker file mapping.
The local ``config.json`` file overrides the default one through a Docker
file mapping.
Running as an unprivileged user
Running as an Unprivileged User
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Zenko CloudServer runs as root by default.
CloudServer runs as root by default.
You can change that by modifing the dockerfile and specifying a user
before the entrypoint.
To change this, modify the dockerfile and specify a user before the
entry point.
The user needs to exist within the container, and own the folder
**/usr/src/app** for Scality Zenko CloudServer to run properly.
The user must exist within the container, and must own the
/usr/src/app directory for CloudServer to run.
For instance, you can modify these lines in the dockerfile:
For example, the following dockerfile lines can be modified:
.. code-block:: shell
@ -305,54 +311,58 @@ For instance, you can modify these lines in the dockerfile:
USER scality
ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]
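For example, a sketch of the relevant Dockerfile lines once a dedicated,
non-root user has been added (the user name and exact commands are
illustrative and depend on the base image):

.. code-block:: shell

    # Create an unprivileged user that owns the application directory
    RUN useradd -ms /bin/bash scality && chown -R scality /usr/src/app

    USER scality
    ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]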
Continuous integration with Docker hosted CloudServer
-----------------------------------------------------
.. _continuous-integration-with-docker-hosted-cloudserver:
When you start the Docker Scality Zenko CloudServer image, you can adjust the
configuration of the Scality Zenko CloudServer instance by passing one or more
environment variables on the docker run command line.
Continuous Integration with a Docker-Hosted CloudServer
-------------------------------------------------------
Sample ways to run it for CI are:
When you start the Docker CloudServer image, you can adjust the
configuration of the CloudServer instance by passing one or more
environment variables on the ``docker run`` command line.
- With custom locations (one in-memory, one hosted on AWS), and custom
credentials mounted:
To run CloudServer for CI with custom locations (one in-memory,
one hosted on AWS), and custom credentials mounted:
.. code-block:: shell
docker run --name CloudServer -p 8000:8000
-v $(pwd)/locationConfig.json:/usr/src/app/locationConfig.json
-v $(pwd)/authdata.json:/usr/src/app/conf/authdata.json
-v ~/.aws/credentials:/root/.aws/credentials
-e S3DATA=multiple -e S3BACKEND=mem scality/s3server
$ docker run --name CloudServer -p 8000:8000 \
-v $(pwd)/locationConfig.json:/usr/src/app/locationConfig.json \
-v $(pwd)/authdata.json:/usr/src/app/conf/authdata.json \
-v ~/.aws/credentials:/root/.aws/credentials \
-e S3DATA=multiple -e S3BACKEND=mem scality/cloudserver
- With custom locations, (one in-memory, one hosted on AWS, one file),
and custom credentials set as environment variables
(see `this section <#scality-access-key-id-and-scality-secret-access-key>`__):
To run CloudServer for CI with custom locations (one in-memory, one
hosted on AWS, and one file) and custom credentials `set as environment
variables <./GETTING_STARTED#scality-access-key-id-and-scality-secret-access-key>`__:
.. code-block:: shell
docker run --name CloudServer -p 8000:8000
-v $(pwd)/locationConfig.json:/usr/src/app/locationConfig.json
-v ~/.aws/credentials:/root/.aws/credentials
-v $(pwd)/data:/usr/src/app/localData -v $(pwd)/metadata:/usr/src/app/localMetadata
-e SCALITY_ACCESS_KEY_ID=accessKey1
-e SCALITY_SECRET_ACCESS_KEY=verySecretKey1
-e S3DATA=multiple -e S3BACKEND=mem scality/s3server
$ docker run --name CloudServer -p 8000:8000 \
-v $(pwd)/locationConfig.json:/usr/src/app/locationConfig.json \
-v ~/.aws/credentials:/root/.aws/credentials \
-v $(pwd)/data:/usr/src/app/localData -v $(pwd)/metadata:/usr/src/app/localMetadata \
-e SCALITY_ACCESS_KEY_ID=accessKey1 \
-e SCALITY_SECRET_ACCESS_KEY=verySecretKey1 \
-e S3DATA=multiple -e S3BACKEND=mem scality/cloudserver
In production with Docker hosted CloudServer
--------------------------------------------
.. _in-production-w-a-Docker-hosted-cloudserver:
In production, we expect that data will be persistent, that you will use the
multiple backends capabilities of Zenko CloudServer, and that you will have a
custom endpoint for your local storage, and custom credentials for your local
storage:
In Production with a Docker-Hosted CloudServer
----------------------------------------------
In production, data must persist, so use CloudServer's multiple-backend
capabilities with a custom endpoint and custom credentials for local
storage. Configure these with:
.. code-block:: shell
docker run -d --name CloudServer
-v $(pwd)/data:/usr/src/app/localData -v $(pwd)/metadata:/usr/src/app/localMetadata
-v $(pwd)/locationConfig.json:/usr/src/app/locationConfig.json
-v $(pwd)/authdata.json:/usr/src/app/conf/authdata.json
-v ~/.aws/credentials:/root/.aws/credentials -e S3DATA=multiple
-e ENDPOINT=custom.endpoint.com
-p 8000:8000 -d scality/s3server
$ docker run -d --name CloudServer \
-v $(pwd)/data:/usr/src/app/localData -v $(pwd)/metadata:/usr/src/app/localMetadata \
-v $(pwd)/locationConfig.json:/usr/src/app/locationConfig.json \
-v $(pwd)/authdata.json:/usr/src/app/conf/authdata.json \
-v ~/.aws/credentials:/root/.aws/credentials -e S3DATA=multiple \
-e ENDPOINT=custom.endpoint.com \
-p 8000:8000 scality/cloudserver

View File

@ -6,210 +6,206 @@ Getting Started
|CircleCI| |Scality CI|
Dependencies
------------
Building and running the Scality Zenko CloudServer requires node.js 6.9.5 and
npm v3. Up-to-date versions can be found at
`Nodesource <https://github.com/nodesource/distributions>`__.
Installation
------------
Dependencies
~~~~~~~~~~~~
1. Clone the source code
Building and running the Scality Zenko CloudServer requires node.js 6.9.5 and
npm v3. Up-to-date versions can be found at
`Nodesource <https://github.com/nodesource/distributions>`__.
.. code-block:: shell
Clone source code
~~~~~~~~~~~~~~~~~
$ git clone https://github.com/scality/cloudserver.git
2. Go to the cloudserver directory and use npm to install the js dependencies.
.. code-block:: shell
$ cd cloudserver
$ npm install
Running CloudServer with a File Backend
---------------------------------------
.. code-block:: shell
git clone https://github.com/scality/S3.git
$ npm start
Install js dependencies
~~~~~~~~~~~~~~~~~~~~~~~
This starts a Zenko CloudServer on port 8000. Two additional ports, 9990
and 9991, are also open locally for internal transfer of metadata and
data, respectively.
Go to the ./S3 folder,
The default access key is accessKey1. The secret key is verySecretKey1.
By default, metadata files are saved in the localMetadata directory and
data files are saved in the localData directory in the local ./cloudserver
directory. These directories are pre-created within the repository. To
save data or metadata in different locations, you must specify them using
absolute paths. Thus, when starting the server:
.. code-block:: shell
npm install
$ mkdir -m 700 $(pwd)/myFavoriteDataPath
$ mkdir -m 700 $(pwd)/myFavoriteMetadataPath
$ export S3DATAPATH="$(pwd)/myFavoriteDataPath"
$ export S3METADATAPATH="$(pwd)/myFavoriteMetadataPath"
$ npm start
Run it with a file backend
--------------------------
Running Cloudserver with Multiple Data Backends
-----------------------------------------------
.. code-block:: shell
npm start
$ export S3DATA='multiple'
$ npm start
This starts an Zenko CloudServer on port 8000. Two additional ports 9990 and
9991 are also open locally for internal transfer of metadata and data,
respectively.
This starts a Zenko CloudServer on port 8000.
The default access key is accessKey1 with a secret key of
verySecretKey1.
The default access key is accessKey1. The secret key is verySecretKey1.
By default the metadata files will be saved in the localMetadata
directory and the data files will be saved in the localData directory
within the ./S3 directory on your machine. These directories have been
pre-created within the repository. If you would like to save the data or
metadata in different locations of your choice, you must specify them
with absolute paths. So, when starting the server:
.. code-block:: shell
mkdir -m 700 $(pwd)/myFavoriteDataPath
mkdir -m 700 $(pwd)/myFavoriteMetadataPath
export S3DATAPATH="$(pwd)/myFavoriteDataPath"
export S3METADATAPATH="$(pwd)/myFavoriteMetadataPath"
npm start
Run it with multiple data backends
----------------------------------
.. code-block:: shell
export S3DATA='multiple'
npm start
This starts an Zenko CloudServer on port 8000. The default access key is
accessKey1 with a secret key of verySecretKey1.
With multiple backends, you have the ability to choose where each object
will be saved by setting the following header with a locationConstraint
on a PUT request:
With multiple backends, you can choose where each object is saved by setting
the following header with a location constraint in a PUT request:
.. code-block:: shell
'x-amz-meta-scal-location-constraint':'myLocationConstraint'
If no header is sent with a PUT object request, the location constraint
of the bucket will determine where the data is saved. If the bucket has
no location constraint, the endpoint of the PUT request will be used to
determine location.
If no header is sent with a PUT object request, the bucket's location
constraint determines where the data is saved. If the bucket has no
location constraint, the endpoint of the PUT request determines location.
See the Configuration section below to learn how to set location
constraints.
See the Configuration_ section to set location constraints.
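For example, with the AWS CLI (the bucket, key, and file names are
placeholders; ``--metadata`` adds the ``x-amz-meta-`` prefix automatically):

.. code-block:: shell

    $ aws --endpoint-url http://localhost:8000 s3api put-object \
        --bucket mybucket --key myobject --body ./myfile \
        --metadata scal-location-constraint=myLocationConstraint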
Run it with an in-memory backend
--------------------------------
Run Cloudserver with an In-Memory Backend
-----------------------------------------
.. code-block:: shell
npm run mem_backend
$ npm run mem_backend
This starts an Zenko CloudServer on port 8000. The default access key is
accessKey1 with a secret key of verySecretKey1.
This starts a Zenko CloudServer on port 8000.
Run it for continuous integration testing or in production with Docker
----------------------------------------------------------------------
The default access key is accessKey1. The secret key is verySecretKey1.
`DOCKER <../DOCKER/>`__
Run Cloudserver for Continuous Integration Testing or in Production with Docker
-------------------------------------------------------------------------------
`DOCKER <./DOCKER>`__
Testing
-------
~~~~~~~
You can run the unit tests with the following command:
Run unit tests with the command:
.. code-block:: shell
npm test
$ npm test
You can run the multiple backend unit tests with:
Run multiple-backend unit tests with:
.. code-block:: shell
CI=true S3DATA=multiple npm start
npm run multiple_backend_test
$ CI=true S3DATA=multiple npm start
$ npm run multiple_backend_test
You can run the linter with:
Run the linter with:
.. code-block:: shell
npm run lint
$ npm run lint
Running functional tests locally:
Running Functional Tests Locally
--------------------------------
For the AWS backend and Azure backend tests to pass locally,
you must modify tests/locationConfigTests.json so that awsbackend
specifies a bucketname of a bucket you have access to based on
your credentials profile and modify "azurebackend" with details
for your Azure account.
To pass AWS and Azure backend tests locally, modify
tests/locationConfig/locationConfigTests.json so that ``awsbackend``
specifies the bucketname of a bucket you have access to based on your
credentials, and modify ``azurebackend`` with details for your Azure account.
The test suite requires additional tools, **s3cmd** and **Redis**
installed in the environment the tests are running in.
- Install `s3cmd <http://s3tools.org/download>`__
- Install `redis <https://redis.io/download>`__ and start Redis.
- Add localCache section to your ``config.json``:
1. Install `s3cmd <http://s3tools.org/download>`__
::
2. Install `redis <https://redis.io/download>`__ and start Redis.
"localCache": {
3. Add localCache section to ``config.json``:
.. code:: json
"localCache": {
"host": REDIS_HOST,
"port": REDIS_PORT
}
}
where ``REDIS_HOST`` is your Redis instance IP address (``"127.0.0.1"``
if your Redis is running locally) and ``REDIS_PORT`` is your Redis
instance port (``6379`` by default)
where ``REDIS_HOST`` is the Redis instance IP address (``"127.0.0.1"``
if Redis is running locally) and ``REDIS_PORT`` is the Redis instance
port (``6379`` by default)
- Add the following to the etc/hosts file on your machine:
4. Add the following to the local etc/hosts file:
.. code-block:: shell
.. code-block:: shell
127.0.0.1 bucketwebsitetester.s3-website-us-east-1.amazonaws.com
127.0.0.1 bucketwebsitetester.s3-website-us-east-1.amazonaws.com
- Start the Zenko CloudServer in memory and run the functional tests:
5. Start Zenko CloudServer in memory and run the functional tests:
.. code-block:: shell
.. code-block:: shell
CI=true npm run mem_backend
CI=true npm run ft_test
$ CI=true npm run mem_backend
$ CI=true npm run ft_test
.. _Configuration:
Configuration
-------------
There are three configuration files for your Scality Zenko CloudServer:
There are three configuration files for Zenko CloudServer:
1. ``conf/authdata.json``, described above for authentication
* ``conf/authdata.json``, for authentication.
2. ``locationConfig.json``, to set up configuration options for
* ``locationConfig.json``, to configure where data is saved.
where data will be saved
* ``config.json``, for general configuration options.
3. ``config.json``, for general configuration options
.. _location-configuration:
Location Configuration
~~~~~~~~~~~~~~~~~~~~~~
You must specify at least one locationConstraint in your
locationConfig.json (or leave as pre-configured).
You must specify at least one locationConstraint in locationConfig.json
(or leave it as pre-configured).
You must also specify 'us-east-1' as a locationConstraint so if you only
define one locationConstraint, that would be it. If you put a bucket to
an unknown endpoint and do not specify a locationConstraint in the put
bucket call, us-east-1 will be used.
You must also specify 'us-east-1' as a locationConstraint. If you put a
bucket to an unknown endpoint and do not specify a locationConstraint in
the PUT bucket call, us-east-1 is used.
For instance, the following locationConstraint will save data sent to
For instance, the following locationConstraint saves data sent to
``myLocationConstraint`` to the file backend:
.. code:: json
"myLocationConstraint": {
"type": "file",
"legacyAwsBehavior": false,
"details": {}
},
"myLocationConstraint": {
"type": "file",
"legacyAwsBehavior": false,
"details": {}
},
Each locationConstraint must include the ``type``,
``legacyAwsBehavior``, and ``details`` keys. ``type`` indicates which
backend will be used for that region. Currently, mem, file, and scality
are the supported backends. ``legacyAwsBehavior`` indicates whether the
region will have the same behavior as the AWS S3 'us-east-1' region. If
the locationConstraint type is scality, ``details`` should contain
connector information for sproxyd. If the locationConstraint type is mem
or file, ``details`` should be empty.
Each locationConstraint must include the ``type``, ``legacyAwsBehavior``,
and ``details`` keys. ``type`` indicates which backend is used for that
region. Supported backends are mem, file, and scality. ``legacyAwsBehavior``
indicates whether the region behaves the same as the AWS S3 'us-east-1'
region. If the locationConstraint type is ``scality``, ``details`` must
contain connector information for sproxyd. If the locationConstraint type
is ``mem`` or ``file``, ``details`` must be empty.
Once you have your locationConstraints in your locationConfig.json, you
can specify a default locationConstraint for each of your endpoints.
Once locationConstraints is set in locationConfig.json, specify a default
locationConstraint for each endpoint.
For instance, the following sets the ``localhost`` endpoint to the
``myLocationConstraint`` data backend defined above:
@ -220,26 +216,24 @@ For instance, the following sets the ``localhost`` endpoint to the
"localhost": "myLocationConstraint"
},
If you would like to use an endpoint other than localhost for your
Scality Zenko CloudServer, that endpoint MUST be listed in your
``restEndpoints``. Otherwise if your server is running with a:
To use an endpoint other than localhost for Zenko CloudServer, the endpoint
must be listed in ``restEndpoints``. Otherwise, if the server is running
with a:
- **file backend**: your default location constraint will be ``file``
- **memory backend**: your default location constraint will be ``mem``
* **file backend**: The default location constraint is ``file``
* **memory backend**: The default location constraint is ``mem``
Endpoints
~~~~~~~~~
Note that our Zenko CloudServer supports both:
The Zenko CloudServer supports endpoints that are rendered in either:
- path-style: http://myhostname.com/mybucket
- hosted-style: http://mybucket.myhostname.com
* path style: http://myhostname.com/mybucket or
* hosted style: http://mybucket.myhostname.com
However, hosted-style requests will not hit the server if you are using
an ip address for your host. So, make sure you are using path-style
requests in that case. For instance, if you are using the AWS SDK for
JavaScript, you would instantiate your client like this:
However, if an IP address is specified for the host, hosted-style requests
cannot reach the server. Use path-style requests in that case. For example,
if you are using the AWS SDK for JavaScript, instantiate your client like this:
.. code:: js
@ -248,87 +242,99 @@ JavaScript, you would instantiate your client like this:
s3ForcePathStyle: true,
});
Setting your own access key and secret key pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Setting Your Own Access and Secret Key Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can set credentials for many accounts by editing
``conf/authdata.json`` but if you want to specify one set of your own
credentials, you can use ``SCALITY_ACCESS_KEY_ID`` and
``SCALITY_SECRET_ACCESS_KEY`` environment variables.
Credentials can be set for many accounts by editing ``conf/authdata.json``,
but to specify a single set of your own credentials, use the
``SCALITY_ACCESS_KEY_ID`` and ``SCALITY_SECRET_ACCESS_KEY`` environment variables.
_`scality-access-key-id-and-scality-secret-access-key`
SCALITY\_ACCESS\_KEY\_ID and SCALITY\_SECRET\_ACCESS\_KEY
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These variables specify authentication credentials for an account named
"CustomAccount".
“CustomAccount”.
Note: Anything in the ``authdata.json`` file will be ignored.
.. note:: Anything in the ``authdata.json`` file is ignored.
.. code-block:: shell
SCALITY_ACCESS_KEY_ID=newAccessKey SCALITY_SECRET_ACCESS_KEY=newSecretKey npm start
$ SCALITY_ACCESS_KEY_ID=newAccessKey SCALITY_SECRET_ACCESS_KEY=newSecretKey npm start
.. _Using_SSL:
Scality with SSL
~~~~~~~~~~~~~~~~~~~~~~
Using SSL
~~~~~~~~~
If you wish to use https with your local Zenko CloudServer, you need to set up
SSL certificates. Here is a simple guide of how to do it.
To use https with your local CloudServer, you must set up
SSL certificates.
Deploying Zenko CloudServer
^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Deploy CloudServer using `our DockerHub page
<https://hub.docker.com/r/zenko/cloudserver/>`__ (run it with a file
backend).
First, you need to deploy **Zenko CloudServer**. This can be done very easily
via `our **DockerHub**
page <https://hub.docker.com/r/scality/s3server/>`__ (you want to run it
with a file backend).
.. Note:: If Docker is not installed locally, follow the
`instructions to install it for your distribution
<https://docs.docker.com/engine/installation/>`__
*Note:* *- If you don't have docker installed on your machine, here
are the `instructions to install it for your
distribution <https://docs.docker.com/engine/installation/>`__*
2. Update the CloudServer container's config
Updating your Zenko CloudServer container's config
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Add your certificates to your container. To do this,
#. exec inside the CloudServer container.
You're going to add your certificates to your container. In order to do
so, you need to exec inside your Zenko CloudServer container. Run a
``$> docker ps`` and find your container's id (the corresponding image
name should be ``scality/s3server``. Copy the corresponding container id
(here we'll use ``894aee038c5e``, and run:
#. Run ``$> docker ps`` to find the container's ID (the corresponding
image name is ``scality/cloudserver``).
#. Copy the corresponding container ID (``894aee038c5e`` in the present
example), and run:
.. code-block:: shell
.. code-block:: shell
$> docker exec -it 894aee038c5e bash
$> docker exec -it 894aee038c5e bash
You're now inside your container, using an interactive terminal :)
This puts you inside your container, using an interactive terminal.
Generate SSL key and certificates
**********************************
3. Generate the SSL key and certificates. The paths where the different
files are stored are defined after the ``-out`` option in each of the
following commands.
There are 5 steps to this generation. The paths where the different
files are stored are defined after the ``-out`` option in each command
#. Generate a private key for your certificate signing request (CSR):
.. code-block:: shell
.. code-block:: shell
# Generate a private key for your CSR
$> openssl genrsa -out ca.key 2048
# Generate a self signed certificate for your local Certificate Authority
$> openssl req -new -x509 -extensions v3_ca -key ca.key -out ca.crt -days 99999 -subj "/C=US/ST=Country/L=City/O=Organization/CN=scality.test"
$> openssl genrsa -out ca.key 2048
# Generate a key for Zenko CloudServer
$> openssl genrsa -out test.key 2048
# Generate a Certificate Signing Request for S3 Server
$> openssl req -new -key test.key -out test.csr -subj "/C=US/ST=Country/L=City/O=Organization/CN=*.scality.test"
# Generate a local-CA-signed certificate for S3 Server
$> openssl x509 -req -in test.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test.crt -days 99999 -sha256
#. Generate a self-signed certificate for your local certificate
authority (CA):
Update Zenko CloudServer ``config.json``
****************************************
.. code:: shell
Add a ``certFilePaths`` section to ``./config.json`` with the
appropriate paths:
$> openssl req -new -x509 -extensions v3_ca -key ca.key -out ca.crt -days 99999 -subj "/C=US/ST=Country/L=City/O=Organization/CN=scality.test"
.. code:: json
#. Generate a key for the CloudServer:
.. code:: shell
$> openssl genrsa -out test.key 2048
#. Generate a CSR for CloudServer:
.. code:: shell
$> openssl req -new -key test.key -out test.csr -subj "/C=US/ST=Country/L=City/O=Organization/CN=*.scality.test"
#. Generate a certificate for CloudServer signed by the local CA:
.. code:: shell
$> openssl x509 -req -in test.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test.crt -days 99999 -sha256
4. Update Zenko CloudServer ``config.json``. Add a ``certFilePaths``
section to ``./config.json`` with appropriate paths:
.. code:: json
"certFilePaths": {
"key": "./test.key",
@ -336,42 +342,36 @@ appropriate paths:
"ca": "./ca.crt"
}
Run your container with the new config
**************************************
5. Run your container with the new config.
First, you need to exit your container. Simply run ``$> exit``. Then,
you need to restart your container. Normally, a simple
``$> docker restart s3server`` should do the trick.
#. Exit the container by running ``$> exit``.
Update your host config
^^^^^^^^^^^^^^^^^^^^^^^
#. Restart the container with ``$> docker restart cloudserver``.
Associates local IP addresses with hostname
*******************************************
6. Update the host configuration by adding s3.scality.test
to /etc/hosts:
In your ``/etc/hosts`` file on Linux, OS X, or Unix (with root
permissions), edit the line of localhost so it looks like this:
.. code:: bash
::
127.0.0.1 localhost s3.scality.test
127.0.0.1 localhost s3.scality.test
7. Copy the local certificate authority (the ca.crt file generated in
step 3) from your container. Choose the path to save this file to (in the
present example, ``/root/ca.crt``), and run:
Copy the local certificate authority from your container
********************************************************
.. code:: shell
In the above commands, it's the file named ``ca.crt``. Choose the path
you want to save this file at (here we chose ``/root/ca.crt``), and run
something like:
$> docker cp 894aee038c5e:/usr/src/app/ca.crt /root/ca.crt
.. code-block:: shell
.. note:: Your container ID will be different, and your path to
ca.crt may be different.
$> docker cp 894aee038c5e:/usr/src/app/ca.crt /root/ca.crt
Test the Config
^^^^^^^^^^^^^^^
Test your config
^^^^^^^^^^^^^^^^^
If aws-sdk is not installed, run ``$> npm install aws-sdk``.
If you do not have aws-sdk installed, run ``$> npm install aws-sdk``. In
a ``test.js`` file, paste the following script:
Paste the following script into a file named "test.js":
.. code:: js
@ -411,8 +411,13 @@ a ``test.js`` file, paste the following script:
});
});
Now run that script with ``$> nodejs test.js``. If all goes well, it
should output ``SSL is cool!``. Enjoy that added security!
Now run this script with:
.. code::
$> nodejs test.js
On success, the script outputs ``SSL is cool!``.
.. |CircleCI| image:: https://circleci.com/gh/scality/S3.svg?style=svg

View File

@ -4,479 +4,415 @@ Integrations
High Availability
=================
`Docker swarm <https://docs.docker.com/engine/swarm/>`__ is a
clustering tool developped by Docker and ready to use with its
containers. It allows to start a service, which we define and use as a
means to ensure Zenko CloudServer's continuous availability to the end user.
Indeed, a swarm defines a manager and n workers among n+1 servers. We
will do a basic setup in this tutorial, with just 3 servers, which
already provides a strong service resiliency, whilst remaining easy to
do as an individual. We will use NFS through docker to share data and
`Docker Swarm <https://docs.docker.com/engine/swarm/>`__ is a clustering tool
developed by Docker for use with its containers. It can be used to start
services, which we define to ensure CloudServer's continuous availability to
end users. A swarm defines a manager and *n* workers among *n* + 1 servers.
This tutorial shows how to perform a basic setup with three servers, which
provides strong service resiliency, while remaining easy to use and
maintain. We will use NFS through Docker to share data and
metadata between the different servers.
You will see that the steps of this tutorial are defined as **On
Server**, **On Clients**, **On All Machines**. This refers respectively
to NFS Server, NFS Clients, or NFS Server and Clients. In our example,
the IP of the Server will be **10.200.15.113**, while the IPs of the
Clients will be **10.200.15.96 and 10.200.15.97**
Sections are labeled **On Server**, **On Clients**, or
**On All Machines**, referring respectively to NFS server, NFS clients, or
NFS server and clients. In the present example, the server's IP address is
**10.200.15.113** and the client IP addresses are **10.200.15.96** and
**10.200.15.97**.
Installing docker
-----------------
1. Install Docker (on All Machines)
Any version from docker 1.12.6 onwards should work; we used Docker
17.03.0-ce for this tutorial.
Docker 17.03.0-ce is used for this tutorial. Docker 1.12.6 and later will
likely work, but have not been tested.
On All Machines
~~~~~~~~~~~~~~~
* On Ubuntu 14.04
Install Docker CE for Ubuntu as `documented at Docker
<https://docs.docker.com/install/linux/docker-ce/ubuntu/>`__.
Install the aufs dependency as recommended by Docker. The required
commands are:
On Ubuntu 14.04
^^^^^^^^^^^^^^^
.. code:: sh
The docker website has `solid
documentation <https://docs.docker.com/engine/installation/linux/ubuntu/>`__.
We have chosen to install the aufs dependency, as recommended by Docker.
Here are the required commands:
$> sudo apt-get update
$> sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
$> sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$> sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$> sudo apt-get update
$> sudo apt-get install docker-ce
* On CentOS 7
Install Docker CE as `documented at Docker
<https://docs.docker.com/install/linux/docker-ce/centos/>`__.
The required commands are:
.. code:: sh
$> sudo yum install -y yum-utils
$> sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$> sudo yum makecache fast
$> sudo yum install docker-ce
$> sudo systemctl start docker
2. Install NFS on Client(s)
NFS clients mount Docker volumes over the NFS server's shared folders.
If the NFS commons are installed, manual mounts are no longer needed.
* On Ubuntu 14.04
Install the NFS commons with apt-get:
.. code:: sh
$> sudo apt-get install nfs-common
* On CentOS 7
Install the NFS utils; then start required services:
.. code:: sh
$> yum install nfs-utils
$> sudo systemctl enable rpcbind
$> sudo systemctl enable nfs-server
$> sudo systemctl enable nfs-lock
$> sudo systemctl enable nfs-idmap
$> sudo systemctl start rpcbind
$> sudo systemctl start nfs-server
$> sudo systemctl start nfs-lock
$> sudo systemctl start nfs-idmap
3. Install NFS (on Server)
The NFS server hosts the data and metadata. The packages to install on it
differ slightly from those installed on the clients.
* On Ubuntu 14.04
Install the NFS server-specific package and the NFS commons:
.. code:: sh
$> sudo apt-get install nfs-kernel-server nfs-common
* On CentOS 7
Install the NFS utils and start the required services:
.. code:: sh
$> yum install nfs-utils
$> sudo systemctl enable rpcbind
$> sudo systemctl enable nfs-server
$> sudo systemctl enable nfs-lock
$> sudo systemctl enable nfs-idmap
$> sudo systemctl start rpcbind
$> sudo systemctl start nfs-server
$> sudo systemctl start nfs-lock
$> sudo systemctl start nfs-idmap
For both distributions:
#. Choose where shared data and metadata from the local
`CloudServer <http://www.zenko.io/cloudserver/>`__ shall be stored (the
present example uses /var/nfs/data and /var/nfs/metadata). Set permissions
on these folders for sharing over NFS:
.. code:: sh
$> mkdir -p /var/nfs/data /var/nfs/metadata
$> chmod -R 777 /var/nfs/
#. The /etc/exports file configures network permissions and r-w-x permissions
for NFS access. Edit /etc/exports, adding the following lines:
.. code:: sh
/var/nfs/data 10.200.15.96(rw,sync,no_root_squash) 10.200.15.97(rw,sync,no_root_squash)
/var/nfs/metadata 10.200.15.96(rw,sync,no_root_squash) 10.200.15.97(rw,sync,no_root_squash)
Ubuntu applies the no\_subtree\_check option by default, so both
folders are declared with the same permissions, even though they're in
the same tree.
#. Export this new NFS table:
.. code:: sh
$> sudo exportfs -a
#. Edit the ``MountFlags`` option in the Docker config in
/lib/systemd/system/docker.service to enable NFS mount from Docker volumes
on other machines:
.. code:: sh
MountFlags=shared
#. Restart the NFS server and Docker daemons to apply these changes.
* On Ubuntu 14.04
.. code:: sh
$> sudo service nfs-kernel-server restart
$> sudo service docker restart
* On CentOS 7
.. code:: sh
$> sudo systemctl restart nfs-server
$> sudo systemctl daemon-reload
$> sudo systemctl restart docker
4. Set Up a Docker Swarm
* On all machines and distributions:
Set up the Docker volumes to be mounted to the NFS server for CloudServer's
data and metadata storage. The following commands must be replicated on all
machines:
.. code:: sh
$> docker volume create --driver local --opt type=nfs --opt o=addr=10.200.15.113,rw --opt device=:/var/nfs/data --name data
$> docker volume create --driver local --opt type=nfs --opt o=addr=10.200.15.113,rw --opt device=:/var/nfs/metadata --name metadata
There is no need to ``docker exec`` these volumes to mount them: the
Docker Swarm manager does this when the Docker service is started.
* On a server:
To start a Docker service on a Docker Swarm cluster, initialize the cluster
(that is, define a manager), prompt workers/nodes to join in, and then start
the service.
Initialize the swarm cluster, and review its response:
.. code:: sh
$> docker swarm init --advertise-addr 10.200.15.113
Swarm initialized: current node (db2aqfu3bzfzzs9b1kfeaglmq) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-5yxxencrdoelr7mpltljn325uz4v6fe1gojl14lzceij3nujzu-2vfs9u6ipgcq35r90xws3stka \
10.200.15.113:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
* On clients:
Copy and paste the command provided by your Docker Swarm init. A successful
request/response will resemble:
.. code:: sh
$> docker swarm join --token SWMTKN-1-5yxxencrdoelr7mpltljn325uz4v6fe1gojl14lzceij3nujzu-2vfs9u6ipgcq35r90xws3stka 10.200.15.113:2377
This node joined a swarm as a worker.
Set Up Docker Swarm on Clients on a Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Start the service on the Swarm cluster.
.. code:: sh
$> sudo apt-get update
$> sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
$> sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$> sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$> sudo apt-get update
$> sudo apt-get install docker-ce
$> docker service create --name s3 --replicas 1 --mount type=volume,source=data,target=/usr/src/app/localData --mount type=volume,source=metadata,target=/usr/src/app/localMetadata -p 8000:8000 scality/cloudserver
On CentOS 7
^^^^^^^^^^^
The docker website has `solid
documentation <https://docs.docker.com/engine/installation/linux/centos/>`__.
Here are the required commands:
.. code:: sh
$> sudo yum install -y yum-utils
$> sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$> sudo yum makecache fast
$> sudo yum install docker-ce
$> sudo systemctl start docker
Configure NFS
-------------
On Clients
~~~~~~~~~~
Your NFS Clients will mount Docker volumes over your NFS Server's shared
folders. Hence, you don't have to mount anything manually, you just have
to install the NFS commons:
On Ubuntu 14.04
^^^^^^^^^^^^^^^
Simply install the NFS commons:
.. code:: sh
$> sudo apt-get install nfs-common
On CentOS 7
^^^^^^^^^^^
Install the NFS utils, and then start the required services:
.. code:: sh
$> yum install nfs-utils
$> sudo systemctl enable rpcbind
$> sudo systemctl enable nfs-server
$> sudo systemctl enable nfs-lock
$> sudo systemctl enable nfs-idmap
$> sudo systemctl start rpcbind
$> sudo systemctl start nfs-server
$> sudo systemctl start nfs-lock
$> sudo systemctl start nfs-idmap
On Server
~~~~~~~~~
Your NFS Server will be the machine to physically host the data and
metadata. The package(s) we will install on it is slightly different
from the one we installed on the clients.
On Ubuntu 14.04
^^^^^^^^^^^^^^^
Install the NFS server specific package and the NFS commons:
.. code:: sh
$> sudo apt-get install nfs-kernel-server nfs-common
On CentOS 7
^^^^^^^^^^^
Same steps as with the client: install the NFS utils and start the
required services:
.. code:: sh
$> yum install nfs-utils
$> sudo systemctl enable rpcbind
$> sudo systemctl enable nfs-server
$> sudo systemctl enable nfs-lock
$> sudo systemctl enable nfs-idmap
$> sudo systemctl start rpcbind
$> sudo systemctl start nfs-server
$> sudo systemctl start nfs-lock
$> sudo systemctl start nfs-idmap
On Ubuntu 14.04 and CentOS 7
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Choose where your shared data and metadata from your local `Zenko CloudServer
<http://www.zenko.io/cloudserver/>`__ will be stored.
We chose to go with /var/nfs/data and /var/nfs/metadata. You also need
to set proper sharing permissions for these folders as they'll be shared
over NFS:
.. code:: sh
$> mkdir -p /var/nfs/data /var/nfs/metadata
$> chmod -R 777 /var/nfs/
Now you need to update your **/etc/exports** file. This is the file that
configures network permissions and rwx permissions for NFS access. By
default, Ubuntu applies the no\_subtree\_check option, so we declared
both folders with the same permissions, even though they're in the same
tree:
.. code:: sh
$> sudo vim /etc/exports
In this file, add the following lines:
.. code:: sh
/var/nfs/data 10.200.15.96(rw,sync,no_root_squash) 10.200.15.97(rw,sync,no_root_squash)
/var/nfs/metadata 10.200.15.96(rw,sync,no_root_squash) 10.200.15.97(rw,sync,no_root_squash)
Export this new NFS table:
.. code:: sh
$> sudo exportfs -a
Eventually, you need to allow for NFS mount from Docker volumes on other
machines. You need to change the Docker config in
**/lib/systemd/system/docker.service**:
.. code:: sh
$> sudo vim /lib/systemd/system/docker.service
In this file, change the **MountFlags** option:
.. code:: sh
MountFlags=shared
Now you just need to restart the NFS server and docker daemons so your
changes apply.
On Ubuntu 14.04
^^^^^^^^^^^^^^^
Restart your NFS Server and docker services:
.. code:: sh
$> sudo service nfs-kernel-server restart
$> sudo service docker restart
On CentOS 7
^^^^^^^^^^^
Restart your NFS Server and docker daemons:
.. code:: sh
$> sudo systemctl restart nfs-server
$> sudo systemctl daemon-reload
$> sudo systemctl restart docker
Set up your Docker Swarm service
--------------------------------
On All Machines
~~~~~~~~~~~~~~~
On Ubuntu 14.04 and CentOS 7
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We will now set up the Docker volumes that will be mounted to the NFS
Server and serve as data and metadata storage for Zenko CloudServer. These two
commands have to be replicated on all machines:
.. code:: sh
$> docker volume create --driver local --opt type=nfs --opt o=addr=10.200.15.113,rw --opt device=:/var/nfs/data --name data
$> docker volume create --driver local --opt type=nfs --opt o=addr=10.200.15.113,rw --opt device=:/var/nfs/metadata --name metadata
There is no need to ""docker exec" these volumes to mount them: the
Docker Swarm manager will do it when the Docker service will be started.
On Server
^^^^^^^^^
To start a Docker service on a Docker Swarm cluster, you first have to
initialize that cluster (i.e.: define a manager), then have the
workers/nodes join in, and then start the service. Initialize the swarm
cluster, and look at the response:
.. code:: sh
$> docker swarm init --advertise-addr 10.200.15.113
Swarm initialized: current node (db2aqfu3bzfzzs9b1kfeaglmq) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-5yxxencrdoelr7mpltljn325uz4v6fe1gojl14lzceij3nujzu-2vfs9u6ipgcq35r90xws3stka \
10.200.15.113:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
On Clients
^^^^^^^^^^
Simply copy/paste the command provided by your docker swarm init. When
all goes well, you'll get something like this:
.. code:: sh
$> docker swarm join --token SWMTKN-1-5yxxencrdoelr7mpltljn325uz4v6fe1gojl14lzceij3nujzu-2vfs9u6ipgcq35r90xws3stka 10.200.15.113:2377
This node joined a swarm as a worker.
On Server
^^^^^^^^^
Start the service on your swarm cluster!
.. code:: sh
$> docker service create --name s3 --replicas 1 --mount type=volume,source=data,target=/usr/src/app/localData --mount type=volume,source=metadata,target=/usr/src/app/localMetadata -p 8000:8000 scality/s3server
If you run a docker service ls, you should have the following output:
On a successful installation, ``docker service ls`` returns the following
output:
.. code:: sh
$> docker service ls
ID NAME MODE REPLICAS IMAGE
ocmggza412ft s3 replicated 1/1 scality/s3server:latest
ocmggza412ft s3 replicated 1/1 scality/cloudserver:latest
If your service won't start, consider disabling apparmor/SELinux.
If the service does not start, consider disabling apparmor/SELinux.
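For example (a temporary, test-only measure; the exact commands depend on
the distribution):

.. code:: sh

    # CentOS 7: switch SELinux to permissive mode for the current boot
    $> sudo setenforce 0

    # Ubuntu 14.04: unload AppArmor profiles for the current session
    $> sudo service apparmor teardown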
Testing your High Availability S3Server
---------------------------------------
Testing the High-Availability CloudServer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On all machines (client/server) and distributions (Ubuntu and CentOS),
determine where CloudServer is running using ``docker ps``. CloudServer can
operate on any node of the Swarm cluster, manager or worker. When you find
it, you can kill it with ``docker stop <container id>``. It will respawn
on a different node. Now, if one server falls, or if Docker stops
unexpectedly, the end user can still access the local CloudServer.
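As a concrete sketch, the whole failover test fits in three commands
(container IDs and node names will differ on your cluster):

.. code:: sh

   # On the node currently running CloudServer: find the container
   $> docker ps | grep cloudserver
   # Kill it; Swarm reschedules the task on another node
   $> docker stop <container id>
   # From the manager: watch the replacement task start
   $> docker service ps s3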
Troubleshooting
~~~~~~~~~~~~~~~
To troubleshoot the service, run:
.. code:: sh
$> docker service ps s3
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
0ar81cw4lvv8chafm8pw48wbc s3.1 scality/cloudserver localhost.localdomain.localdomain Running Running 7 days ago
cvmf3j3bz8w6r4h0lf3pxo6eu \_ s3.1 scality/cloudserver localhost.localdomain.localdomain Shutdown Failed 7 days ago "task: non-zero exit (137)"
If the error is truncated, view the error in detail by inspecting the
Docker task ID:
.. code:: sh
$> docker inspect cvmf3j3bz8w6r4h0lf3pxo6eu
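On recent Docker releases, you can also read the task logs straight from the
manager, which is often faster than inspecting individual tasks (assuming your
Docker version ships ``docker service logs``):

.. code:: sh

   $> docker service logs s3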
Off you go!
~~~~~~~~~~~
Let us know how you use this and if you'd like any specific developments
around it. Even better: come and contribute to our `Github repository
<https://github.com/scality/s3/>`__! We look forward to meeting you!
S3FS
====
You can export buckets as a filesystem with s3fs on CloudServer.
`s3fs <https://github.com/s3fs-fuse/s3fs-fuse>`__ is an open source
tool, available both on Debian and RedHat distributions, that enables
you to mount an S3 bucket on a filesystem-like backend. This tutorial uses
an Ubuntu 14.04 host to deploy and use s3fs over CloudServer.
Deploying Zenko CloudServer with SSL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, deploy CloudServer with a file backend using `our DockerHub page
<https://hub.docker.com/r/scality/cloudserver/>`__.
.. note::
If Docker is not installed on your machine, follow
`these instructions <https://docs.docker.com/engine/installation/>`__
to install it for your distribution.
You must also set up SSL with CloudServer to use s3fs. See `Using SSL
<./GETTING_STARTED#Using_SSL>`__ for instructions.
s3fs Setup
~~~~~~~~~~
Installing s3fs
---------------
Follow the instructions in the s3fs `README
<https://github.com/s3fs-fuse/s3fs-fuse/blob/master/README.md#installation-from-pre-built-packages>`__.
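For example, on recent Debian and Ubuntu releases the pre-built package can
usually be installed directly from the distribution repositories (a
convenience sketch; package availability and version depend on your
distribution):

.. code:: sh

   $> sudo apt-get install s3fs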
Check that s3fs is properly installed. A version check should return
a response resembling:
.. code:: sh
$> s3fs --version
Amazon Simple Storage Service File System V1.80(commit:d40da2c) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Configuring s3fs
----------------
s3fs expects you to provide it with a password file. Our file is
``/etc/passwd-s3fs``. The structure for this file is
``ACCESSKEYID:SECRETKEYID``, so, for CloudServer, you can run:
.. code:: sh
$> echo 'accessKey1:verySecretKey1' > /etc/passwd-s3fs
$> chmod 600 /etc/passwd-s3fs
Using CloudServer with s3fs
---------------------------
1. Use /mnt/tests3fs as a mount point.
.. code:: sh
$> mkdir /mnt/tests3fs
2. Create a bucket on your local CloudServer. In the present example it is
named “tests3fs”.
.. code:: sh
$> s3cmd mb s3://tests3fs
   .. note:: If you have never used s3cmd with CloudServer, our README
      provides a `recommended config
      <https://github.com/scality/S3/blob/master/README.md#s3cmd>`__.
3. Mount the bucket to your mount point with s3fs:
.. code:: sh
$> s3fs tests3fs /mnt/tests3fs -o passwd_file=/etc/passwd-s3fs -o url="https://s3.scality.test:8000/" -o use_path_request_style
The structure of this command is:
``s3fs BUCKET_NAME PATH/TO/MOUNTPOINT -o OPTIONS``. Of these mandatory
options:
* ``passwd_file`` specifies the path to the password file.
* ``url`` specifies the host name used by your SSL provider.
* ``use_path_request_style`` forces the path style (by default,
s3fs uses DNS-style subdomains).
Once the bucket is mounted, files added to the mount point or
objects added to the bucket will appear in both locations.
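To confirm that the bucket is mounted, a standard filesystem check is enough
(a quick sketch using the example mount point above):

.. code:: sh

   # The bucket should show up as a fuse.s3fs filesystem
   $> mount | grep tests3fs
   $> df -h /mnt/tests3fs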
Example
-------
Create two files, and then a directory with a file in our mount point:
.. code:: sh
$> touch /mnt/tests3fs/file1 /mnt/tests3fs/file2
$> mkdir /mnt/tests3fs/dir1
$> touch /mnt/tests3fs/dir1/file3
Now, use s3cmd to show what is in CloudServer:
.. code:: sh
$> s3cmd ls -r s3://tests3fs
2017-02-28 17:28 0 s3://tests3fs/dir1/
2017-02-28 17:29 0 s3://tests3fs/dir1/file3
2017-02-28 17:28 0 s3://tests3fs/file1
2017-02-28 17:28 0 s3://tests3fs/file2
Now you can enjoy a filesystem view on your local CloudServer.
Duplicity
=========
How to back up your files with CloudServer.
Installing
-----------
Installing Duplicity and its Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To install `Duplicity <http://duplicity.nongnu.org/>`__,
go to `this site <https://code.launchpad.net/duplicity/0.7-series>`__.
Download the latest tarball. Decompress it and follow the instructions
in the README.
.. code:: sh
$> tar zxvf duplicity-0.7.11.tar.gz
$> cd duplicity-0.7.11
$> python setup.py install
You may receive error messages indicating the need to install some or all
of the following dependencies:
.. code:: sh
$> apt-get install python-dev python-pip python-lockfile
$> pip install -U boto
Testing the Installation
------------------------
1. Check that CloudServer is running. Run ``$> docker ps``. You should
see one container named ``scality/cloudserver``. If you do not, run
``$> docker start cloudserver`` and check again.
2. Duplicity uses a module called “Boto” to send requests to S3. Boto
requires a configuration file located in ``/etc/boto.cfg`` to store
your credentials and preferences. A minimal configuration, which you
can fine-tune by `following these instructions
<http://boto.cloudhackers.com/en/latest/getting_started.html>`__, is
shown here:
::
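       # The [Credentials] section below is a sketch: it assumes the default
       # CloudServer keys (accessKey1 / verySecretKey1) used earlier in this
       # tutorial. Replace them with your own keys.
       [Credentials]
       aws_access_key_id = accessKey1
       aws_secret_access_key = verySecretKey1

       [Boto]
       # Set to True if CloudServer is served over SSL
       is_secure = False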
# If using SSL, unmute and provide absolute path to local CA certificate
# ca_certificates_file = /absolute/path/to/ca.crt
   .. note:: To set up SSL with CloudServer, see `Using SSL
      <./GETTING_STARTED#Using_SSL>`__ in GETTING STARTED.
3. At this point all requirements to run CloudServer as a backend to Duplicity
have been met. A local folder/file should back up to the local S3.
Try it with the decompressed Duplicity folder:
.. code:: sh
$> duplicity duplicity-0.7.11 "s3://127.0.0.1:8000/testbucket/"
.. note:: Duplicity will prompt for a symmetric encryption passphrase.
Save it carefully, as you will need it to recover your data.
Alternatively, you can add the ``--no-encryption`` flag
and the data will be stored plain.
If this command is successful, you will receive an output resembling:
.. code:: sh
--------------[ Backup Statistics ]--------------
StartTime 1486486547.13 (Tue Feb 7 16:55:47 2017)
EndTime 1486486547.40 (Tue Feb 7 16:55:47 2017)
ElapsedTime 0.27 (0.27 seconds)
SourceFiles 388
SourceFileSize 6634529 (6.33 MB)
NewFiles 388
NewFileSize 6634529 (6.33 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 388
RawDeltaSize 6392865 (6.10 MB)
TotalDestinationSizeChange 2003677 (1.91 MB)
Errors 0
-------------------------------------------------
Congratulations! You can now back up to your local S3 through Duplicity.
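To verify the round trip, you can restore the backup into a scratch directory.
This is a sketch: ``restored-duplicity`` is just an example target folder, and
Duplicity will prompt for the passphrase used at backup time.

.. code:: sh

   $> duplicity restore "s3://127.0.0.1:8000/testbucket/" restored-duplicity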
Automating Backups
------------------
The easiest way to back up files periodically is to write a bash script
and add it to your crontab. A suggested script follows.
.. code:: sh
# Export your passphrase so you don't have to type anything
export PASSPHRASE="mypassphrase"
# To use a GPG key, put it here and uncomment the line below
#GPG_KEY=
# Define your backup bucket, with localhost specified
DEST="s3://127.0.0.1:8000/testbuckets3server/"
DEST="s3://127.0.0.1:8000/testbucketcloudserver/"
# Define the absolute path to the folder to back up
SOURCE=/root/testfolder
# Set to "full" for full backups, and "incremental" for incremental backups
# Warning: you must perform one full backup before you can perform
# incremental ones on top of it
FULL=incremental
# How long to keep backups. If you don't want to delete old backups, keep
# this value empty; otherwise, the syntax is "1Y" for one year, "1M" for
# one month, "1D" for one day.
OLDER_THAN="1Y"
# is_running checks whether Duplicity is currently completing a task
is_running=$(ps -ef | grep duplicity | grep python | wc -l)
# If Duplicity is already completing a task, this will not run
if [ $is_running -eq 0 ]; then
echo "Backup for ${SOURCE} started"
# To delete backups older than a certain time, do it here
if [ "$OLDER_THAN" != "" ]; then
echo "Removing backups older than ${OLDER_THAN}"
duplicity remove-older-than ${OLDER_THAN} ${DEST}
# Forget the passphrase...
unset PASSPHRASE
Put this file in ``/usr/local/sbin/backup.sh``. Run ``crontab -e`` and
paste your configuration into the file that opens. If you're unfamiliar
with Cron, here is a good `HowTo
<https://help.ubuntu.com/community/CronHowto>`__. If the folder being
backed up is modified regularly throughout the work day, you can schedule
incremental backups every 5 minutes from 8 AM to 9 PM, Monday through
Friday, by pasting the following line into crontab:
.. code:: sh
*/5 8-20 * * 1-5 /usr/local/sbin/backup.sh
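Also remember to make the script executable before its first scheduled run
(an easy step to miss):

.. code:: sh

   $> chmod +x /usr/local/sbin/backup.sh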
Adding or removing files from the folder being backed up will result in
incremental backups in the bucket.
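To check that the scheduled backups are accumulating as expected, Duplicity
can list the backup chains stored in the bucket (the destination URL matches
the ``DEST`` value in the script above):

.. code:: sh

   $> duplicity collection-status "s3://127.0.0.1:8000/testbucketcloudserver/"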