docs: use 2.0 docs

release-2.0
Yicheng Qin 2015-01-14 17:09:37 -08:00
parent 6bda827b67
commit 245e23ca47
12 changed files with 553 additions and 1896 deletions


@@ -8,7 +8,7 @@ In the early 2.0.0-alpha series, we're providing this tool early to encourage ad
 ### Data Migration Tips
-* Keep the environment variables and etcd instance flags the same (much as [the upgrade document](../upgrade.md) suggests), particularly `--name`/`ETCD_NAME`.
+* Keep the environment variables and etcd instance flags the same, particularly `--name`/`ETCD_NAME`.
 * Don't change the cluster configuration. If there's a plan to add or remove machines, it's probably best to arrange for that after the migration, rather than before or at the same time.
 ### Running the tool

File diff suppressed because it is too large


@@ -1,356 +0,0 @@
# Clustering Guide
## Overview
Starting an etcd cluster statically requires that each member knows another in the cluster. In a number of cases, you might not know the IPs of your cluster members ahead of time. In these cases, you can bootstrap an etcd cluster with the help of a discovery service.
This guide will cover the following mechanisms for bootstrapping an etcd cluster:
* [Static](#static)
* [etcd Discovery](#etcd-discovery)
* [DNS Discovery](#dns-discovery)
Each of the bootstrapping mechanisms will be used to create a three machine etcd cluster with the following details:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
## Static
As we know the cluster members, their addresses and the size of the cluster before starting, we can use an offline bootstrap configuration by setting the `initial-cluster` flag. Each machine will get either the following environment variables or command line flags:
```
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380"
ETCD_INITIAL_CLUSTER_STATE=new
```
```
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
Note that the URLs specified in `initial-cluster` are the _advertised peer URLs_, i.e. they should match the value of `initial-advertise-peer-urls` on the respective nodes.
If you are spinning up multiple clusters (or creating and destroying a single cluster) with the same configuration for testing purposes, it is highly recommended that you specify a unique `initial-cluster-token` for each cluster. By doing this, etcd can generate unique cluster IDs and member IDs for the clusters even if they otherwise have the exact same configuration. This protects you from cross-cluster interaction, which might corrupt your clusters.
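For illustration, the `initial-cluster` value used below can be assembled from the member table at the top of this guide; a minimal shell sketch (names and addresses are the ones from this document):

```sh
# Assemble the -initial-cluster flag value from the member table above.
CLUSTER=""
for member in "infra0=http://10.0.1.10:2380" \
              "infra1=http://10.0.1.11:2380" \
              "infra2=http://10.0.1.12:2380"; do
  # Append with a comma separator, except before the first entry.
  CLUSTER="${CLUSTER:+$CLUSTER,}$member"
done
echo "$CLUSTER"
```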
On each machine you would start etcd with these flags:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
The command line parameters starting with `-initial-cluster` will be ignored on subsequent runs of etcd. You are free to remove the environment variables or command line flags after the initial bootstrap process. If you need to make changes to the configuration later (for example, adding or removing members to/from the cluster), see the [runtime configuration](runtime-configuration.md) guide.
### Error Cases
In the following example, we have not included our new host in the list of enumerated nodes. If this is a new cluster, the node _must_ be added to the list of initial cluster members.
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-initial-cluster infra0=http://10.0.1.10:2380 \
-initial-cluster-state new
etcd: infra1 not listed in the initial cluster config
exit 1
```
In this example, we are attempting to map a node (infra0) to a different address (127.0.0.1:2380) than the one listed for it in the cluster configuration (10.0.1.10:2380). If this node is to listen on multiple addresses, all addresses _must_ be reflected in the `initial-cluster` configuration directive.
```
$ etcd -name infra0 -initial-advertise-peer-urls http://127.0.0.1:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state=new
etcd: error setting up initial cluster: infra0 has different advertised URLs in the cluster and advertised peer URLs list
exit 1
```
If you configure a peer with a different set of cluster members and attempt to join this cluster, you will get a cluster ID mismatch and etcd will exit.
```
$ etcd -name infra3 -initial-advertise-peer-urls http://10.0.1.13:2380 \
-listen-peer-urls http://10.0.1.13:2380 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra3=http://10.0.1.13:2380 \
-initial-cluster-state=new
etcd: conflicting cluster ID to the target cluster (c6ab534d07e8fcc4 != bc25ea2a74fb18b0). Exiting.
exit 1
```
## Discovery
In a number of cases, you might not know the IPs of your cluster peers ahead of time. This is common when utilizing cloud providers or when your network uses DHCP. In these cases, rather than specifying a static configuration, you can use an existing etcd cluster to bootstrap a new one. We call this process "discovery".
There are two methods that can be used for discovery:
* etcd discovery service
* DNS SRV records
### etcd Discovery
#### Lifetime of a Discovery URL
A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery URL, always create a new one for each new cluster.
Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime] guide.
[runtime]: https://github.com/coreos/etcd/blob/master/Documentation/2.0/runtime-configuration.md
#### Custom etcd Discovery Service
Discovery uses an existing cluster to bootstrap itself. If you are using your own etcd cluster you can create a URL like so:
```
$ curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
```
By setting the size key to the URL, you create a discovery URL with an expected cluster size of 3.
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
In this case the discovery URL is `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83`, and each etcd member will register itself under that directory as it starts.
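The long hex string in the URL above is just a token chosen by whoever creates the key; any unique string works. A hedged sketch for minting a fresh token on a Unix-like system (assumes `/dev/urandom` and the standard `od`/`tr` utilities):

```sh
# Generate a random 32-character hex token for a brand new cluster.
TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$TOKEN"
```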
Now we start etcd with those relevant flags for each member:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
This will cause each member to register itself with the custom etcd discovery service and begin the cluster once all machines have been registered.
#### Public etcd Discovery Service
If you do not have access to an existing cluster, you can use the public discovery service hosted at `discovery.etcd.io`. You can create a private discovery URL using the "new" endpoint like so:
```
$ curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will create the cluster with an initial expected size of 3 members. If you do not specify a size, a default of 3 will be used.
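The path component of the returned URL is the cluster's token; a small sketch pulling it out with shell parameter expansion (URL taken from the example above):

```sh
# Split the token out of a discovery URL returned by the 'new' endpoint.
DISCOVERY_URL="https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de"
TOKEN="${DISCOVERY_URL##*/}"  # drop everything up to and including the last '/'
echo "$TOKEN"
```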
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
[fall-back]: proxy.md#fallback-to-proxy-mode-with-discovery-service
[proxy]: proxy.md
The discovery URL can be specified either as an environment variable:
```
ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
or as a command line flag:
```
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
Now we start etcd with those relevant flags for each member:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will cause each member to register itself with the discovery service and begin the cluster once all members have been registered.
You can use the environment variable `ETCD_DISCOVERY_PROXY` to cause etcd to use an HTTP proxy to connect to the discovery service.
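For example (a sketch; the proxy address is hypothetical, the other flags follow the examples above):

```sh
$ ETCD_DISCOVERY_PROXY=http://proxy.example.com:8080 etcd -name infra0 \
-initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```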
#### Error and Warning Cases
##### Discovery Server Errors
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd: error: the cluster doesnt have a size configuration value in https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de/_config
exit 1
```
##### User Errors
This error will occur if the discovery cluster already has the configured number of members, and `discovery-fallback` is explicitly disabled:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de \
-discovery-fallback exit
etcd: discovery: cluster is full
exit 1
```
##### Warnings
This is a harmless warning notifying you that the discovery URL will be
ignored on this machine.
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcdserver: discovery token ignored since a cluster has already been initialized. Valid log found at /var/lib/etcd
```
### DNS Discovery
DNS [SRV records](http://www.ietf.org/rfc/rfc2052.txt) can be used as a discovery mechanism.
The `-discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
The following DNS SRV records are looked up in the listed order:
* _etcd-server-ssl._tcp.example.com
* _etcd-server._tcp.example.com
If `_etcd-server-ssl._tcp.example.com` is found then etcd will attempt the bootstrapping process over SSL.
#### Create DNS SRV records
```
$ dig +noall +answer SRV _etcd-server._tcp.example.com
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra0.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra1.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra2.example.com.
```
```
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```
#### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP addresses; the bootstrap process will resolve DNS A records.
```
$ etcd -name infra0 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://infra0.example.com:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://infra0.example.com:2379 \
-listen-client-urls http://infra0.example.com:2379 \
-listen-peer-urls http://infra0.example.com:2380
```
```
$ etcd -name infra1 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://infra1.example.com:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://infra1.example.com:2379 \
-listen-client-urls http://infra1.example.com:2379 \
-listen-peer-urls http://infra1.example.com:2380
```
```
$ etcd -name infra2 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://infra2.example.com:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://infra2.example.com:2379 \
-listen-client-urls http://infra2.example.com:2379 \
-listen-peer-urls http://infra2.example.com:2380
```
You can also bootstrap the cluster using IP addresses instead of domain names:
```
$ etcd -name infra0 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://10.0.1.10:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://10.0.1.10:2379 \
-listen-client-urls http://10.0.1.10:2379 \
-listen-peer-urls http://10.0.1.10:2380
```
```
$ etcd -name infra1 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://10.0.1.11:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://10.0.1.11:2379 \
-listen-client-urls http://10.0.1.11:2379 \
-listen-peer-urls http://10.0.1.11:2380
```
```
$ etcd -name infra2 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://10.0.1.12:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://10.0.1.12:2379 \
-listen-client-urls http://10.0.1.12:2379 \
-listen-peer-urls http://10.0.1.12:2380
```
#### etcd proxy configuration
DNS SRV records can also be used to configure the list of peers for an etcd server running in proxy mode:
```
$ etcd --proxy on -discovery-srv example.com
```
# 0.4 to 2.0+ Migration Guide
In etcd 2.0 we introduced the ability to listen on more than one address and to advertise multiple addresses. This makes using etcd easier when you have complex networking, such as private and public networks on various cloud providers.
To make understanding this feature easier, we changed the naming of some flags, but we support the old flags to make the migration from the old to new version easier.
|Old Flag |New Flag |Migration Behavior |
|-----------------------|-----------------------|---------------------------------------------------------------------------------------|
|-peer-addr |-initial-advertise-peer-urls |If specified, peer-addr will be used as the only peer URL. Error if both flags specified.|
|-addr |-advertise-client-urls |If specified, addr will be used as the only client URL. Error if both flags specified.|
|-peer-bind-addr |-listen-peer-urls |If specified, peer-bind-addr will be used as the only peer bind URL. Error if both flags specified.|
|-bind-addr |-listen-client-urls |If specified, bind-addr will be used as the only client bind URL. Error if both flags specified.|
|-peers |none |Deprecated. The -initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
|-peers-file |none |Deprecated. The -initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
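The renames in the table above amount to a simple lookup; a minimal sketch (the function name is illustrative, not part of etcd):

```sh
# Map a deprecated 0.4 flag to its 2.0 replacement, per the table above.
old_to_new() {
  case "$1" in
    -peer-addr)         printf '%s\n' "-initial-advertise-peer-urls" ;;
    -addr)              printf '%s\n' "-advertise-client-urls" ;;
    -peer-bind-addr)    printf '%s\n' "-listen-peer-urls" ;;
    -bind-addr)         printf '%s\n' "-listen-client-urls" ;;
    -peers|-peers-file) printf '%s\n' "deprecated: use -initial-cluster" ;;
    *)                  printf '%s\n' "unknown flag" ;;
  esac
}
old_to_new -peer-addr
```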


@@ -15,7 +15,7 @@ Using an out-of-date data directory can lead to inconsistency as the member had
 For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
 Once removed the member can be re-added with an empty data directory.
-[members-api]: https://github.com/coreos/etcd/blob/master/Documentation/2.0/other_apis.md#members-api
+[members-api]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#members-api
 #### Contents
@@ -129,7 +129,7 @@ etcd -name node1 \
 -advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
 ```
-[change peer url]: https://github.com/coreos/etcd/blob/master/Documentation/2.0/other_apis.md#change-the-peer-urls-of-a-member
+[change peer url]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#change-the-peer-urls-of-a-member
 ### Disaster Recovery


@@ -2,23 +2,22 @@
 ## Running a Single Machine Cluster
-These examples will use a single machine cluster to show you the basics of the etcd REST API.
+These examples will use a single member cluster to show you the basics of the etcd REST API.
 Let's start etcd:
 ```sh
-./bin/etcd -data-dir machine0 -name machine0
+./bin/etcd
 ```
-This will bring up etcd listening on default ports (4001 for client communication and 7001 for server-to-server communication).
-The `-data-dir machine0` argument tells etcd to write machine configuration, logs and snapshots to the `./machine0/` directory.
-The `-name machine0` tells the rest of the cluster that this machine is named machine0.
+This will bring up etcd listening on the IANA assigned ports and listening on localhost.
+The IANA assigned ports for etcd are 2379 for client communication and 2380 for server-to-server communication.
 ## Getting the etcd version
 The etcd version of a specific instance can be obtained from the `/version` endpoint.
 ```sh
-curl -L http://127.0.0.1:4001/version
+curl -L http://127.0.0.1:2379/version
 ```
 ## Key Space Operations
@@ -26,14 +25,13 @@ curl -L http://127.0.0.1:4001/version
 The primary API of etcd is a hierarchical key space.
 The key space consists of directories and keys which are generically referred to as "nodes".
 ### Setting the value of a key
 Let's set the first key-value pair in the datastore.
 In this case the key is `/message` and the value is `Hello world`.
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/message -XPUT -d value="Hello world"
+curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello world"
 ```
 ```json
@@ -61,7 +59,7 @@ etcd uses a file-system-like structure to represent the key-value pairs, therefo
 In this case, a successful request was made that attempted to change the node's value to `Hello world`.
 4. `node.createdIndex`: an index is a unique, monotonically-incrementing integer created for each change to etcd.
-This specific index reflects the point in the etcd state machine at which a given key was created.
+This specific index reflects the point in the etcd state member at which a given key was created.
 You may notice that in this example the index is `2` even though it is the first request you sent to the server.
 This is because there are internal commands that also change the state behind the scenes, like adding and syncing servers.
@@ -77,7 +75,7 @@ etcd includes a few HTTP headers in responses that provide global information ab
 ```
 X-Etcd-Index: 35
 X-Raft-Index: 5398
-X-Raft-Term: 0
+X-Raft-Term: 1
 ```
 - `X-Etcd-Index` is the current etcd index as explained above.
@@ -92,7 +90,7 @@ X-Raft-Term: 0
 We can get the value that we just set in `/message` by issuing a `GET` request:
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/message
+curl http://127.0.0.1:2379/v2/keys/message
 ```
 ```json
@@ -113,7 +111,7 @@ curl -L http://127.0.0.1:4001/v2/keys/message
 You can change the value of `/message` from `Hello world` to `Hello etcd` with another `PUT` request to the key:
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/message -XPUT -d value="Hello etcd"
+curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello etcd"
 ```
 ```json
@@ -141,7 +139,7 @@ Here we introduce a new field: `prevNode`. The `prevNode` field represents what
 You can remove the `/message` key with a `DELETE` request:
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/message -XDELETE
+curl http://127.0.0.1:2379/v2/keys/message -XDELETE
 ```
 ```json
@@ -168,7 +166,7 @@ Keys in etcd can be set to expire after a specified number of seconds.
 You can do this by setting a TTL (time to live) on the key when sending a `PUT` request:
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar -d ttl=5
+curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -d ttl=5
 ```
 ```json
@@ -191,12 +189,12 @@ Note the two new fields in response:
 2. The `ttl` is the specified time to live for the key, in seconds.
-_NOTE_: Keys can only be expired by a cluster leader, so if a machine gets disconnected from the cluster, its keys will not expire until it rejoins.
+_NOTE_: Keys can only be expired by a cluster leader, so if a member gets disconnected from the cluster, its keys will not expire until it rejoins.
 Now you can try to get the key by sending a `GET` request:
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/foo
+curl http://127.0.0.1:2379/v2/keys/foo
 ```
 If the TTL has expired, the key will have been deleted, and you will be returned a 100.
@@ -210,10 +208,10 @@ If the TTL has expired, the key will have been deleted, and you will be returned
 }
 ```
-The TTL could be unset to avoid expiration through update operation:
+The TTL can be unset to avoid expiration through update operation:
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar -d ttl= -d prevExist=true
+curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -d ttl= -d prevExist=true
 ```
 ```json
@@ -245,7 +243,7 @@ This also works for child keys by passing `recursive=true` in curl.
 In one terminal, we send a `GET` with `wait=true` :
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/foo?wait=true
+curl http://127.0.0.1:2379/v2/keys/foo?wait=true
 ```
 Now we are waiting for any changes at path `/foo`.
@@ -253,7 +251,7 @@ Now we are waiting for any changes at path `/foo`.
 In another terminal, we set a key `/foo` with value `bar`:
 ```sh
-curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=bar
+curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar
 ```
 The first terminal should get the notification and return with the same response as the set request:
@@ -279,26 +277,69 @@ The first terminal should get the notification and return with the same response
 However, the watch command can do more than this.
 Using the index, we can watch for commands that have happened in the past.
 This is useful for ensuring you don't miss events between watch commands.
+Typically, we watch again from the (modifiedIndex + 1) of the node we got.
 Let's try to watch for the set command of index 7 again:
 ```sh
-curl -L 'http://127.0.0.1:4001/v2/keys/foo?wait=true&waitIndex=7'
+curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'
 ```
 The watch command returns immediately with the same response as previously.
+**Note**: etcd only keeps the responses of the most recent 1000 events.
+It is recommended to send the response to another thread to process immediately
+instead of blocking the watch while processing the result.
+If we miss all the 1000 events, we need to recover the current state of the
+watching key space. First, we do a get and then start to watch from the (etcdIndex + 1).
+For example, we set `/foo="bar"` 2000 times and try to wait from index 7.
+```sh
+curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'
+```
+We get an "index is outdated" response, since we missed the 1000 events kept in etcd.
+```
+{"errorCode":401,"message":"The event in requested index is outdated and cleared","cause":"the requested history has been cleared [1003/7]","index":2002}
+```
+To start the watch, first we need to fetch the current state of key `/foo` and the etcdIndex.
+```sh
+curl 'http://127.0.0.1:2379/v2/keys/foo' -vv
+```
+```
+< HTTP/1.1 200 OK
+< Content-Type: application/json
+< X-Etcd-Cluster-Id: 7e27652122e8b2ae
+< X-Etcd-Index: 2002
+< X-Raft-Index: 2615
+< X-Raft-Term: 2
+< Date: Mon, 05 Jan 2015 18:54:43 GMT
+< Transfer-Encoding: chunked
+<
+{"action":"get","node":{"key":"/foo","value":"","modifiedIndex":2002,"createdIndex":2002}}
+```
+The `X-Etcd-Index` is important. It is the index when we got the value of `/foo`.
+So we can watch again from the (`X-Etcd-Index` + 1) without missing an event after the last get.
+```sh
+curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2003'
+```
### Atomically Creating In-Order Keys ### Atomically Creating In-Order Keys
Using `POST` on a directory, you can create keys with key names that are created in-order. Using `POST` on a directory, you can create keys with key names that are created in-order.
This can be used in a variety of useful patterns, like implementing queues of keys which need to be processed in strict order. This can be used in a variety of useful patterns, like implementing queues of keys which need to be processed in strict order.
An example use case is the [locking module][lockmod] which uses it to ensure clients get fair access to a mutex. An example use case would be ensuring clients get fair access to a mutex.
Creating an in-order key is easy: Creating an in-order key is easy:
```sh ```sh
curl http://127.0.0.1:4001/v2/keys/queue -XPOST -d value=Job1 curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value=Job1
``` ```
```json ```json
@ -317,7 +358,7 @@ If you create another entry some time later, it is guaranteed to have a key name
Also note the key names use the global etcd index, so the next key can be more than `previous + 1`. Also note the key names use the global etcd index, so the next key can be more than `previous + 1`.
```sh ```sh
curl http://127.0.0.1:4001/v2/keys/queue -XPOST -d value=Job2 curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value=Job2
``` ```
```json ```json
@ -335,7 +376,7 @@ curl http://127.0.0.1:4001/v2/keys/queue -XPOST -d value=Job2
To enumerate the in-order keys as a sorted list, use the "sorted" parameter. To enumerate the in-order keys as a sorted list, use the "sorted" parameter.
```sh ```sh
curl -s 'http://127.0.0.1:4001/v2/keys/queue?recursive=true&sorted=true' curl -s 'http://127.0.0.1:2379/v2/keys/queue?recursive=true&sorted=true'
``` ```
```json ```json
@ -364,8 +405,6 @@ curl -s 'http://127.0.0.1:4001/v2/keys/queue?recursive=true&sorted=true'
} }
``` ```
[lockmod]: #lock
### Using a directory TTL ### Using a directory TTL
@ -373,7 +412,7 @@ Like keys, directories in etcd can be set to expire after a specified number of
You can do this by setting a TTL (time to live) on a directory when it is created with a `PUT`: You can do this by setting a TTL (time to live) on a directory when it is created with a `PUT`:
```sh ```sh
curl -L http://127.0.0.1:4001/v2/keys/dir -XPUT -d ttl=30 -d dir=true curl http://127.0.0.1:2379/v2/keys/dir -XPUT -d ttl=30 -d dir=true
``` ```
```json ```json
@ -394,13 +433,13 @@ The directory's TTL can be refreshed by making an update.
You can do this by making a PUT with `prevExist=true` and a new TTL. You can do this by making a PUT with `prevExist=true` and a new TTL.
```sh ```sh
curl -L http://127.0.0.1:4001/v2/keys/dir -XPUT -d ttl=30 -d dir=true -d prevExist=true curl http://127.0.0.1:2379/v2/keys/dir -XPUT -d ttl=30 -d dir=true -d prevExist=true
``` ```
Keys that are under this directory work as usual, but when the directory expires, a watcher on a key under the directory will get an expire event: Keys that are under this directory work as usual, but when the directory expires, a watcher on a key under the directory will get an expire event:
```sh ```sh
curl 'http://127.0.0.1:4001/v2/keys/dir/asdf?consistent=true&wait=true' curl 'http://127.0.0.1:2379/v2/keys/dir/asdf?consistent=true&wait=true'
``` ```
```json ```json
@ -440,14 +479,14 @@ Here is a simple example.
Let's create a key-value pair first: `foo=one`. Let's create a key-value pair first: `foo=one`.
```sh ```sh
curl -L http://127.0.0.1:4001/v2/keys/foo -XPUT -d value=one curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=one
``` ```
Now let's try some invalid `CompareAndSwap` commands. Now let's try some invalid `CompareAndSwap` commands.
Trying to set this existing key with `prevExist=false` fails as expected: Trying to set this existing key with `prevExist=false` fails as expected:
```sh ```sh
curl -L http://127.0.0.1:4001/v2/keys/foo?prevExist=false -XPUT -d value=three curl http://127.0.0.1:2379/v2/keys/foo?prevExist=false -XPUT -d value=three
``` ```
The error code explains the problem: The error code explains the problem:
@ -464,7 +503,7 @@ The error code explains the problem:
Now let's provide a `prevValue` parameter:

```sh
curl http://127.0.0.1:2379/v2/keys/foo?prevValue=two -XPUT -d value=three
```
This will try to compare the previous value of the key and the previous value we provided. If they are equal, the value of the key will change to three.
Note: the condition prevIndex=0 always passes.
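The compare rule can be sketched locally with a shell variable standing in for the stored key; this mimics the semantics only and does not call the etcd API:

```sh
# Local sketch of CompareAndSwap: the swap succeeds only when the caller's
# prevValue matches the stored value. "value" stands in for the etcd store.
value="one"
cas() {
  if [ "$value" = "$1" ]; then
    value="$2"
    echo "swapped to $2"
  else
    echo "compare failed: stored value is $value, not $1"
  fi
}
cas two three   # mismatched prevValue, like the failing request above
cas one two     # matching prevValue succeeds
```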
Let's try a valid condition:

```sh
curl http://127.0.0.1:2379/v2/keys/foo?prevValue=one -XPUT -d value=two
```
The response should be:

### Atomic Compare-and-Delete

The current comparable conditions are `prevValue` and `prevIndex`.
Here is a simple example. Let's first create a key: `foo=one`.

```sh
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=one
```
Now let's try some `CompareAndDelete` commands.

Trying to delete the key with `prevValue=two` fails as expected:

```sh
curl http://127.0.0.1:2379/v2/keys/foo?prevValue=two -XDELETE
```
The error code explains the problem:
As does a `CompareAndDelete` with a mismatched `prevIndex`:

```sh
curl http://127.0.0.1:2379/v2/keys/foo?prevIndex=1 -XDELETE
```
And now a valid `prevValue` condition:

```sh
curl http://127.0.0.1:2379/v2/keys/foo?prevValue=one -XDELETE
```

The successful response will look something like:
### Creating Directories

But there are cases where you will want to create a directory or remove one.
Creating a directory is just like a key except you cannot provide a value and must add the `dir=true` parameter.

```sh
curl http://127.0.0.1:2379/v2/keys/dir -XPUT -d dir=true
```
### Listing a Directory

In this example, let's first create some keys:
We already have `/foo=two` so now we'll create another one called `/foo_dir/foo` with the value of `bar`:

```sh
curl http://127.0.0.1:2379/v2/keys/foo_dir/foo -XPUT -d value=bar
```
Now we can list the keys under root `/`:

```sh
curl http://127.0.0.1:2379/v2/keys/
```
We should see the response as an array of items:

Here we can see `/foo` is a key-value pair under `/` and `/foo_dir` is a directory.
We can also recursively get all the contents under a directory by adding `recursive=true`.

```sh
curl http://127.0.0.1:2379/v2/keys/?recursive=true
```
### Deleting a Directory

Now let's try to delete the directory `/foo_dir`.

You can remove an empty directory using the `DELETE` verb and the `dir=true` parameter.

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo_dir?dir=true' -XDELETE
```

```json
{
    "action": "delete",
    "node": {
        "createdIndex": 30,
        "dir": true,
        "key": "/foo_dir",
        "modifiedIndex": 31
    },
    "prevNode": {
        "createdIndex": 30,
        "key": "/foo_dir",
        "dir": true,
        "modifiedIndex": 30
    }
}
```

To delete a directory that holds keys, you must add `recursive=true`.

```sh
curl http://127.0.0.1:2379/v2/keys/dir?recursive=true -XDELETE
```
### Hidden Nodes

A key or directory whose name starts with an underscore (`_`) is hidden. The hidden item will not be listed when sending a `GET` request for a directory.
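etcd hides keys whose names start with an underscore, like `/_message` below. A plain-shell illustration of that filtering (a sketch of the behavior, not how etcd implements it):

```sh
# Names whose final path component starts with "_" are treated as hidden
# and skipped when building the visible listing.
keys="/_message /message /foo"
visible=""
for k in $keys; do
  case "${k##*/}" in
    _*) ;;                       # hidden: skip
    *) visible="$visible $k" ;;  # visible: keep
  esac
done
echo "visible:$visible"
```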
First we'll add a hidden key named `/_message`:

```sh
curl http://127.0.0.1:2379/v2/keys/_message -XPUT -d value="Hello hidden world"
```
Next we'll add a regular key named `/message`:

```sh
curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello world"
```
Now let's try to get a listing of keys under the root directory, `/`:

```sh
curl http://127.0.0.1:2379/v2/keys/
```
You can also use etcd to store small configuration files, JSON documents, XML documents, etc directly. For example you can use curl to upload a simple text file and encode it:
```
echo "Hello\nWorld" > afile.txt
curl http://127.0.0.1:2379/v2/keys/afile -XPUT --data-urlencode value@afile.txt
```
Followers in a cluster can be behind the leader in their copy of the keyspace.
If your application wants or needs the most up-to-date version of a key then it should ensure it reads from the current leader. By using the `consistent=true` flag in your GET requests, etcd will make sure you are talking to the current master.

As an example of how a member can be behind the leader let's start with a three member cluster: L, F1, and F2. A client makes a write to L and F1 acknowledges the request. The client is told the write was successful and the keyspace is updated. Meanwhile F2 has partitioned from the network and will have an out-of-date version of the keyspace until the partition resolves.
The read will take a very similar path to a write and will have a similar speed. If you are unsure if you need this feature feel free to email etcd-dev for advice.
## Statistics

An etcd cluster keeps track of a number of statistics including latency, bandwidth and uptime. These are exposed via the statistics endpoint to understand the internal health of a cluster.
### Leader Statistics

The leader has a view of the entire cluster and keeps track of two interesting statistics: latency to each of its followers and the number of failed and successful Raft RPC requests.
You can grab these statistics from the `/v2/stats/leader` endpoint:

```sh
curl http://127.0.0.1:2379/v2/stats/leader
```
```json
{
    "id": "2c7d3e0b8627375b",
    "leaderInfo": {
        "leader": "8a69d5f6b7814500",
        "startTime": "2014-10-24T13:15:51.184719899-07:00",
        "uptime": "7m17.859616962s"
    },
    "name": "infra1",
    "recvAppendRequestCnt": 3949,
    "recvBandwidthRate": 561.5729321100841,
    "recvPkgRate": 9.008227977383449,
    "sendAppendRequestCnt": 0,
    "startTime": "2014-10-24T13:15:50.070369454-07:00",
    "state": "StateFollower"
}
```
### Self Statistics
Each node keeps a number of internal statistics:

- `id`: the unique identifier for the member
- `leaderInfo.leader`: id of the current leader member
- `leaderInfo.uptime`: amount of time the leader has been leader
- `name`: this member's name
- `recvAppendRequestCnt`: number of append requests this node has processed
- `recvBandwidthRate`: number of bytes per second this node is receiving (follower only)
- `recvPkgRate`: number of requests per second this node is receiving (follower only)
- `sendAppendRequestCnt`: number of requests that this node has sent
- `sendBandwidthRate`: number of bytes per second this node is sending (leader only). This value is undefined on single member clusters.
- `sendPkgRate`: number of requests per second this node is sending (leader only). This value is undefined on single member clusters.
- `state`: either leader or follower
- `startTime`: the time when this node was started
This is an example response from a follower member:

```sh
curl http://127.0.0.1:2379/v2/stats/self
```
```json
{
    "id": "eca0338f4ea31566",
    "leaderInfo": {
        "leader": "8a69d5f6b7814500",
        "startTime": "2014-10-24T13:15:51.186620747-07:00",
        "uptime": "10m59.322358947s"
    },
    "name": "node3",
    "recvAppendRequestCnt": 5944,
    "recvBandwidthRate": 570.6254930219969,
    "recvPkgRate": 9.00892789741075,
    "sendAppendRequestCnt": 0,
    "startTime": "2014-10-24T13:15:50.072007085-07:00",
    "state": "StateFollower"
}
```
And this is an example response from a leader member:

```sh
curl http://127.0.0.1:2379/v2/stats/self
```
```json
{
    "id": "eca0338f4ea31566",
    "leaderInfo": {
        "leader": "8a69d5f6b7814500",
        "startTime": "2014-10-24T13:15:51.186620747-07:00",
        "uptime": "10m47.012122091s"
    },
    "name": "node3",
    "recvAppendRequestCnt": 5835,
    "recvBandwidthRate": 584.1485698657176,
    "recvPkgRate": 9.17390765395709,
    "sendAppendRequestCnt": 0,
    "startTime": "2014-10-24T13:15:50.072007085-07:00",
    "state": "StateLeader"
}
```
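When polling these endpoints from scripts, a single field can be pulled out of a saved response without extra tooling; a minimal sketch with `sed`, assuming `jq` is unavailable and using an abbreviated copy of the JSON above:

```sh
# Extract the "state" field from a (shortened) stats response with sed.
resp='{"name": "node3", "state": "StateFollower"}'
state=$(printf '%s' "$resp" | sed -n 's/.*"state": *"\([^"]*\)".*/\1/p')
echo "$state"
```

For anything beyond one flat field, a real JSON parser is the safer choice.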
### Store Statistics

Operations that modify the store's state like create, delete, set and update are seen by the entire cluster and the number will increase on all nodes. Operations like get and watch are node local and will only be seen on this node.
```sh
curl http://127.0.0.1:2379/v2/stats/store
```
## Cluster Config

See the [other etcd APIs][other-apis] for details on the cluster management.

[other-apis]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md
The major flag changes are mostly related to bootstrapping. The new `initial-*` flags replace the old bootstrap flags:
- `-peers-file` is replaced by `-initial-cluster`.
The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/configuration.md.
#### Data Dir

- Default data dir location has changed from {$hostname}.etcd to {name}.etcd.
- The disk format within the data dir has changed. etcd 2.0 should be able to auto upgrade the old data format. Instructions on doing so manually are in the [migration tool doc][migrationtooldoc].

[migrationtooldoc]: https://github.com/coreos/etcd/blob/master/Documentation/0_4_migration_tool.md
#### Standby

Standby mode was intended for large clusters that had a subset of the members active in the consensus process.

Proxy mode in 2.0 will provide similar functionality, and with improved control over which machines act as proxies due to the operator specifically configuring them. Proxies also support read only or read/write modes for increased security and durability.

[proxymode]: https://github.com/coreos/etcd/blob/master/Documentation/proxy.md
#### Discovery Service

A size key needs to be provided inside a [discovery token][discoverytoken].

[discoverytoken]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#custom-etcd-discovery-service
#### HTTP Admin API

`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/member API][memberapi] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.

[memberapi]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md
#### HTTP Key Value API

- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
# Clustering Guide
## Overview
Starting an etcd cluster statically requires that each member knows another in the cluster. In a number of cases, you might not know the IPs of your cluster members ahead of time. In these cases, you can bootstrap an etcd cluster with the help of a discovery service.
This guide will cover the following mechanisms for bootstrapping an etcd cluster:
* [Static](#static)
* [etcd Discovery](#etcd-discovery)
* [DNS Discovery](#dns-discovery)
Each of the bootstrapping mechanisms will be used to create a three machine etcd cluster with the following details:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
## Static
As we know the cluster members, their addresses and the size of the cluster before starting, we can use an offline bootstrap configuration by setting the `initial-cluster` flag. Each machine will get either the following command line or environment variables:
```
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380"
ETCD_INITIAL_CLUSTER_STATE=new
```
```
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
Note that the URLs specified in `initial-cluster` are the _advertised peer URLs_, i.e. they should match the value of `initial-advertise-peer-urls` on the respective nodes.
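Because every member must be started with the identical `initial-cluster` value, it can help to assemble the string once from the member table above; a small sketch in plain shell (pure string work, nothing here contacts etcd):

```sh
# Build "name=peerURL,..." from the member table; every member should be
# started with this exact same value.
cluster=""
for member in infra0=10.0.1.10 infra1=10.0.1.11 infra2=10.0.1.12; do
  name=${member%%=*}
  ip=${member#*=}
  cluster="${cluster:+$cluster,}$name=http://$ip:2380"
done
echo "$cluster"
```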
If you are spinning up multiple clusters (or creating and destroying a single cluster) with same configuration for testing purpose, it is highly recommended that you specify a unique `initial-cluster-token` for the different clusters. By doing this, etcd can generate unique cluster IDs and member IDs for the clusters even if they otherwise have the exact same configuration. This can protect you from cross-cluster-interaction, which might corrupt your clusters.
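For instance, two throwaway test clusters sharing the same member list would each get their own token (the token names here are illustrative):

```sh
# Distinct tokens let etcd derive distinct cluster IDs for two clusters
# that otherwise share the exact same member configuration.
token_a="etcd-cluster-test-1"
token_b="etcd-cluster-test-2"
echo "cluster A: etcd ... -initial-cluster-token $token_a"
echo "cluster B: etcd ... -initial-cluster-token $token_b"
```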
On each machine you would start etcd with these flags:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
  -listen-peer-urls http://10.0.1.10:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
  -listen-peer-urls http://10.0.1.11:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
  -listen-peer-urls http://10.0.1.12:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```
The command line parameters starting with `-initial-cluster` will be ignored on subsequent runs of etcd. You are free to remove the environment variables or command line flags after the initial bootstrap process. If you need to make changes to the configuration later (for example, adding or removing members to/from the cluster), see the [runtime configuration](runtime-configuration.md) guide.
### Error Cases
In the following example, we have not included our new host in the list of enumerated nodes. If this is a new cluster, the node _must_ be added to the list of initial cluster members.
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
  -listen-peer-urls http://10.0.1.11:2380 \
-initial-cluster infra0=http://10.0.1.10:2380 \
-initial-cluster-state new
etcd: infra1 not listed in the initial cluster config
exit 1
```
In this example, we are attempting to map a node (infra0) on a different address (127.0.0.1:2380) than its enumerated address in the cluster list (10.0.1.10:2380). If this node is to listen on multiple addresses, all addresses _must_ be reflected in the "initial-cluster" configuration directive.
```
$ etcd -name infra0 -initial-advertise-peer-urls http://127.0.0.1:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state=new
etcd: error setting up initial cluster: infra0 has different advertised URLs in the cluster and advertised peer URLs list
exit 1
```
If you configure a peer with a different set of configuration flags and attempt to join this cluster, you will get a cluster ID mismatch and etcd will exit.
```
$ etcd -name infra3 -initial-advertise-peer-urls http://10.0.1.13:2380 \
-listen-peer-urls http://10.0.1.13:2380 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra3=http://10.0.1.13:2380 \
-initial-cluster-state=new
etcd: conflicting cluster ID to the target cluster (c6ab534d07e8fcc4 != bc25ea2a74fb18b0). Exiting.
exit 1
```
## Discovery
In a number of cases, you might not know the IPs of your cluster peers ahead of time. This is common when utilizing cloud providers or when your network uses DHCP. In these cases, rather than specifying a static configuration, you can use an existing etcd cluster to bootstrap a new one. We call this process "discovery".
There are two methods that can be used for discovery:
* etcd discovery service
* DNS SRV records
### etcd Discovery
#### Lifetime of a Discovery URL
A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery URL, you should always create discovery URLs for new clusters.
Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime] guide.
[runtime]: https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md
#### Custom etcd Discovery Service
Discovery uses an existing cluster to bootstrap itself. If you are using your own etcd cluster you can create a URL like so:
```
$ curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
```
By setting the size key to the URL, you create a discovery URL with an expected cluster size of 3.
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
The URL you will use in this case will be `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` and the etcd members will use that directory for registration as they start.
Now we start etcd with those relevant flags for each member:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
This will cause each member to register itself with the custom etcd discovery service and begin the cluster once all machines have been registered.
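You can watch registration progress by reading the discovery directory directly; a minimal sketch, assuming the same custom discovery URL used above:

```
# List the members that have registered so far under the discovery token.
# (Assumes the custom discovery URL created above.)
curl https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```

Each registered member appears as a node under this key; once the number of nodes reaches the configured size, the cluster begins bootstrapping.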
#### Public etcd Discovery Service
If you do not have access to an existing cluster, you can use the public discovery service hosted at `discovery.etcd.io`. You can create a private discovery URL using the "new" endpoint like so:
```
$ curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will create the cluster with an initial expected size of 3 members. If you do not specify a size, a default of 3 will be used.
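If you need a different cluster size, pass it to the "new" endpoint explicitly; for example, to request a discovery URL for a five-member cluster:

```
# Request a discovery URL sized for five members instead of the default three.
curl 'https://discovery.etcd.io/new?size=5'
```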
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
[fall-back]: proxy.md#fallback-to-proxy-mode-with-discovery-service
[proxy]: proxy.md
You can pass the resulting discovery URL to each member either as an environment variable:
```
ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
or as a command-line flag:
```
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
Now we start etcd with those relevant flags for each member:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will cause each member to register itself with the discovery service and begin the cluster once all members have been registered.
You can use the environment variable `ETCD_DISCOVERY_PROXY` to cause etcd to use an HTTP proxy to connect to the discovery service.
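For example, a sketch using a hypothetical proxy at `proxy.example.com:8080` (substitute your own proxy address):

```
# Route discovery-service traffic through an HTTP proxy.
# proxy.example.com:8080 is a placeholder, not a real endpoint.
ETCD_DISCOVERY_PROXY=http://proxy.example.com:8080 etcd -name infra0 \
  -initial-advertise-peer-urls http://10.0.1.10:2380 \
  -listen-peer-urls http://10.0.1.10:2380 \
  -discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```

Only the connection to the discovery service goes through the proxy; peer and client traffic is unaffected.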
#### Error and Warning Cases
##### Discovery Server Errors
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd: error: the cluster doesn't have a size configuration value in https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de/_config
exit 1
```
##### User Errors
This error will occur if the discovery cluster already has the configured number of members, and `discovery-fallback` is explicitly disabled.
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de \
-discovery-fallback exit
etcd: discovery: cluster is full
exit 1
```
##### Warnings
This is a harmless warning notifying you that the discovery URL will be
ignored on this machine.
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcdserver: discovery token ignored since a cluster has already been initialized. Valid log found at /var/lib/etcd
```
### DNS Discovery
DNS [SRV records](http://www.ietf.org/rfc/rfc2052.txt) can be used as a discovery mechanism.
The `-discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
The following DNS SRV records are looked up in the listed order:
* _etcd-server-ssl._tcp.example.com
* _etcd-server._tcp.example.com
If `_etcd-server-ssl._tcp.example.com` is found, etcd will attempt the bootstrapping process over SSL.
#### Create DNS SRV records
```
$ dig +noall +answer SRV _etcd-server._tcp.example.com
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra0.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra1.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra2.example.com.
```
```
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```
#### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP addresses; the bootstrap process will resolve DNS A records.
```
$ etcd -name infra0 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://infra0.example.com:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://infra0.example.com:2379 \
-listen-client-urls http://infra0.example.com:2379 \
-listen-peer-urls http://infra0.example.com:2380
```
```
$ etcd -name infra1 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://infra1.example.com:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://infra1.example.com:2379 \
-listen-client-urls http://infra1.example.com:2379 \
-listen-peer-urls http://infra1.example.com:2380
```
```
$ etcd -name infra2 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://infra2.example.com:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://infra2.example.com:2379 \
-listen-client-urls http://infra2.example.com:2379 \
-listen-peer-urls http://infra2.example.com:2380
```
You can also bootstrap the cluster using IP addresses instead of domain names:
```
$ etcd -name infra0 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://10.0.1.10:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://10.0.1.10:2379 \
-listen-client-urls http://10.0.1.10:2379 \
-listen-peer-urls http://10.0.1.10:2380
```
```
$ etcd -name infra1 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://10.0.1.11:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://10.0.1.11:2379 \
-listen-client-urls http://10.0.1.11:2379 \
-listen-peer-urls http://10.0.1.11:2380
```
```
$ etcd -name infra2 \
-discovery-srv example.com \
-initial-advertise-peer-urls http://10.0.1.12:2380 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster-state new \
-advertise-client-urls http://10.0.1.12:2379 \
-listen-client-urls http://10.0.1.12:2379 \
-listen-peer-urls http://10.0.1.12:2380
```
#### etcd proxy configuration
DNS SRV records can also be used to configure the list of peers for an etcd server running in proxy mode:
```
$ etcd -proxy on -discovery-srv example.com
```
# 0.4 to 2.0+ Migration Guide
In etcd 2.0 we introduced the ability to listen on more than one address and to advertise multiple addresses. This makes using etcd easier when you have complex networking, such as private and public networks on various cloud providers.
To make this feature easier to understand, we renamed some flags, but the old flags are still supported to ease migration from the old version to the new one.
|Old Flag |New Flag |Migration Behavior |
|-----------------------|-----------------------|---------------------------------------------------------------------------------------|
|-peer-addr |-initial-advertise-peer-urls |If specified, peer-addr will be used as the only peer URL. Error if both flags specified.|
|-addr |-advertise-client-urls |If specified, addr will be used as the only client URL. Error if both flags specified.|
|-peer-bind-addr |-listen-peer-urls |If specified, peer-bind-addr will be used as the only peer bind URL. Error if both flags specified.|
|-bind-addr |-listen-client-urls |If specified, bind-addr will be used as the only client bind URL. Error if both flags specified.|
|-peers |none |Deprecated. The -initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
|-peers-file |none |Deprecated. The -initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
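As a sketch of the mapping, here is an old 0.4-style invocation next to its 2.0 equivalent (addresses are the example values from above; flag values are illustrative):

```
# 0.4-style flags (still accepted to ease migration):
etcd -name infra0 -peer-addr 10.0.1.10:2380 -addr 10.0.1.10:2379

# Equivalent 2.0-style flags:
etcd -name infra0 \
  -initial-advertise-peer-urls http://10.0.1.10:2380 \
  -advertise-client-urls http://10.0.1.10:2379
```

Remember that specifying both the old and new form of the same flag is an error.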