Compare commits

...

1548 Commits

Author SHA1 Message Date
Gyu-Ho Lee
fc00305a2e version: bump to v3.0.15 2016-11-10 13:12:43 -08:00
Gyu-Ho Lee
f322fe7f0d clientv3, ctlv3: document range end requirement 2016-11-10 13:10:18 -08:00
Gyu-Ho Lee
049fcd30ea integration: test wrong watcher range 2016-11-10 13:09:13 -08:00
Gyu-Ho Lee
1b702e79db mvcc: return -1 for wrong watcher range key >= end
Fix https://github.com/coreos/etcd/issues/6819.
2016-11-10 13:08:51 -08:00
Anthony Romano
b87190d9dc integration: test canceling a watcher on disconnected stream 2016-11-10 13:07:24 -08:00
Anthony Romano
83b493f945 clientv3: let watchers cancel when reconnecting 2016-11-10 13:06:47 -08:00
Gyu-Ho Lee
9b69cbd989 version: bump to v3.0.14+git 2016-11-04 13:06:36 -07:00
Gyu-Ho Lee
8a37349097 version: bump to v3.0.14 2016-11-04 10:54:14 -07:00
Xiang Li
9a0e4dfe4f ctlv3: fix migration 2016-11-03 09:47:41 -07:00
Timothy St. Clair
f60469af16 ctlv3: Add a no-ttl flag to etcdctl migrate to discard keys on transform. 2016-11-03 09:47:39 -07:00
Gyu-Ho Lee
932370d8ca version: bump to v3.0.13+git 2016-10-24 11:22:50 -07:00
Gyu-Ho Lee
c99d0d4b25 version: bump to v3.0.13 2016-10-24 11:04:43 -07:00
Gyu-Ho Lee
d78216f528 e2e: remove 'ctlV3GetFailPerm' 2016-10-24 11:04:13 -07:00
Hongchao Deng
c05c027a24 etcdctl: fix migrate in outputting client.Node to json
Using Printf will try to parse the string and replace special
characters. In the migrate code, we want to just output the raw
JSON string of client.Node.
For example,
    Printf("%\\") => %!\(MISSING)
    Print("%\\") => %\
Thus, we should use Print instead.
2016-10-20 10:51:16 -07:00
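The Printf/Print difference above is purely about format-string parsing; a minimal, standalone Go sketch (not the migrate code itself) reproducing the two outputs:

```
package main

import "fmt"

func main() {
	// Printf parses its first argument as a format string, so a raw value
	// containing '%' gets mangled.
	fmt.Printf("%\\") // prints: %!\(MISSING)
	fmt.Println()

	// Print writes the value verbatim, which is what migrate needs for raw JSON.
	fmt.Print("%\\") // prints: %\
	fmt.Println()
}
```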
Gyu-Ho Lee
3fd64f913a auth: fix return type on 'hasRootRole' 2016-10-12 13:59:27 -07:00
Xiang Li
f935290bbc mvcc: fix rev inconsistency
Try:

./etcdctl put foo bar
./etcdctl del foo
./etcdctl compact 3

restart etcd

./etcdctl get foo
mvcc: required revision has been compacted

The error is unexpected when ranging over the head revision.

Internally, we incorrectly set the current revision smaller than the
compacted revision when we remove all keys around the compacted revision.

This commit fixes the issue by recovering the current revision to at least
the compacted revision.
2016-10-12 13:08:26 -07:00
Hitoshi Mitake
ca91f898a2 auth, e2e, clientv3: the root role should be granted access to every key
This commit changes the semantics of the root role. The role should be
able to access every key.

Partially fixes https://github.com/coreos/etcd/issues/6355
2016-10-11 12:19:46 -07:00
Gyu-Ho Lee
fcbada7798 Merge pull request #6622 from luxas/backport_arm_fixes
Backport arm fixes
2016-10-11 12:15:58 -07:00
Jared Hulbert
fad9bdc3e1 etcdserver: atomic access alignment
Most fields accessed with sync/atomic functions are 64-bit aligned, but a couple
are not. This makes the comments out of date and therefore misleading.

Affected fields reordered, comments scrubbed and updated.
2016-10-11 11:48:43 +03:00
Jared Hulbert
198ccb8b7b raftpb: atomic access alignment
The Entry struct has misaligned fields that are accessed atomically. The
misalignment is caused by the EntryType enum, which the Protocol Buffers
spec forces to be a 32-bit int.

Moving the order of the fields without renumbering them in the .proto file
seems to align the go structure without changing the wire format.
2016-10-11 11:48:43 +03:00
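The constraint behind these alignment commits: on 32-bit platforms, sync/atomic requires 64-bit operands to be 64-bit aligned, which in practice means keeping atomically accessed 64-bit fields at the front of a struct. A minimal sketch with illustrative field names:

```
package main

import "sync/atomic"

// stats is illustrative only; the point is the field ordering.
type stats struct {
	applied uint64 // accessed via sync/atomic; keep first so it stays 64-bit aligned
	pending int32  // smaller fields follow the atomically accessed ones
}

func (s *stats) markApplied() uint64 {
	return atomic.AddUint64(&s.applied, 1)
}

func main() {
	var s stats
	_ = s.markApplied()
}
```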
Jared Hulbert
dc5d5c6ac8 raft: atomic access alignment
The relevant structures are properly aligned; however, there is no comment
highlighting the need to keep them aligned, as is present elsewhere in the
codebase.

Adding note to keep alignment, in line with similar comments in the codebase.
2016-10-11 11:48:43 +03:00
Gyu-Ho Lee
f771eaca47 version: bump to v3.0.12+git 2016-10-07 16:42:12 -07:00
Gyu-Ho Lee
2d1e2e8e64 version: bump to v3.0.12 2016-10-07 15:14:25 -07:00
Gyu-Ho Lee
6412758177 v3rpc: remove redundant locks 2016-10-07 15:13:56 -07:00
Xiang Li
836c8159f6 v3rpc: lock progress and prevKV map correctly 2016-10-07 15:13:12 -07:00
Gyu-Ho Lee
e406e6e8f4 etcdctl/ctlv3: add 'prev-kv' flag to watch command 2016-10-07 14:23:09 -07:00
Gyu-Ho Lee
2fa2c6284e clientv3: add 'prevKV' field to watch request 2016-10-07 14:22:58 -07:00
Gyu-Ho Lee
2862c4fa12 v3rpc: implement 'prev-kv' watch 2016-10-07 14:22:19 -07:00
Gyu-Ho Lee
6f89fbf8b5 etcdserver: use mvcc.WatchableKV for prev-kv watch 2016-10-07 14:22:00 -07:00
Gyu-Ho Lee
6ae7ec9a3f *: regenerate proto 2016-10-07 14:21:19 -07:00
Gyu-Ho Lee
4a35b1b20a etcdserverpb: add 'prev_kv' to WatchCreateRequest 2016-10-07 14:20:46 -07:00
Gyu-Ho Lee
c859c97ee2 mvccpb: add 'prev_kv' field 2016-10-07 14:19:59 -07:00
Gyu-Ho Lee
a091c629e1 version: bump to v3.0.11+git 2016-10-07 13:25:21 -07:00
Gyu-Ho Lee
96de94a584 version: bump to v3.0.11 2016-10-07 11:27:48 -07:00
Gyu-Ho Lee
e9cd8410d7 integration: add 'prevKV' to TestV3DeleteRange 2016-10-07 11:03:19 -07:00
Gyu-Ho Lee
e37ede1d2e etcdserver: handle 'PrevKV' 2016-10-07 11:00:48 -07:00
Gyu-Ho Lee
4420a29ac4 etcdctl/ctlv3: add 'prev-kv' flag 2016-10-07 10:56:06 -07:00
Gyu-Ho Lee
0544d4bfd0 clientv3: add WithPrevKV OpOption 2016-10-07 10:54:45 -07:00
Gyu-Ho Lee
fe7379f102 clientv3: add Op.prevKV 2016-10-07 10:51:01 -07:00
Gyu-Ho Lee
c76df5052b *: update proto to add 'prev_kv' 2016-10-07 10:47:47 -07:00
Xiang Li
3299cad1c3 *: add put prevkv 2016-10-07 10:39:08 -07:00
Anthony Romano
d9ab018c49 integration: test a canceled watch won't return a closing error 2016-10-05 14:19:36 -07:00
Anthony Romano
e853451cd2 clientv3: only return closing error to watcher if context is not canceled
Fixes #6503
2016-10-05 14:19:32 -07:00
Anthony Romano
1becf9d2f5 clientv3: fix race on watch initial revision
The initial revision was being updated in the substream goroutine defer;
this was racing with the resume path fetching the initial revision when
the substream closes during resume. Instead, update the initial revision
whenever the substream processes a new watch response. Since the substream
cannot receive a watch response while it is resuming, the write to the
initial revision is ordered to always happen after the resume read.

Fixes #6586
2016-10-05 10:56:36 -07:00
Anthony Romano
1a712cf187 clientv3: make IsProgressNotify() false on compact event and closed channel
Fixes #6549
2016-10-04 15:13:02 -07:00
Gyu-Ho Lee
023f335f67 wal: set PageWriter offset in file encoder 2016-10-04 15:12:47 -07:00
Gyu-Ho Lee
bf0da78b63 pkg/ioutil: configure pageOffset in NewPageWriter 2016-10-04 15:12:46 -07:00
Anthony Romano
e8473850a2 integration: test canceling watchers when disconnected 2016-10-04 15:12:37 -07:00
Anthony Romano
b836d187fd clientv3: simplify watch synchronization
Was more complicated than it needed to be and didn't really work in the
first place. Restructured watcher registration to use a queue.
2016-10-04 15:12:18 -07:00
Gyu-Ho Lee
9b09229c4d version: bump to v3.0.10+git 2016-09-23 11:13:45 -07:00
Gyu-Ho Lee
546c0f7ed6 version: bump to v3.0.10 2016-09-23 10:49:03 -07:00
sharat
adbad1c9b5 ctlv3: close snapshot file before rename (Windows) 2016-09-23 09:11:02 -07:00
Anthony Romano
273b986751 clientv3: process closed watcherStreams in watcherGrpcStream run loop
Was racing with Watch() when closing the grpc stream on no watchers.

Fixes #6476
2016-09-21 15:52:20 -07:00
Gyu-Ho Lee
5b205729b9 rafthttp: add v3.0.0 to supported streams 2016-09-16 21:54:55 +09:00
Anthony Romano
fe900b09dd version: bump to v3.0.9+git 2016-09-15 15:10:23 -07:00
Anthony Romano
494c012659 version: bump to v3.0.9 2016-09-15 12:56:33 -07:00
Anthony Romano
4abc381ebe clientv3: drain buffered WatchResponses before resuming
Otherwise, the watcherStream can receive WatchResponses in the
middle of a resume, corrupting the stream.

Fixes #6364
2016-09-15 12:38:15 -07:00
Anthony Romano
73c8fdac53 integration: fix compilation for backported Election test 2016-09-15 11:45:37 -07:00
sharat
ee2717493a ctlv3: fix line parsing for Windows 2016-09-15 11:25:53 -07:00
Xiang Li
2435eb9ecd clientv3: balancer panics when call up after close
Fix the issue by adding a simple guard variable.
2016-09-15 18:46:26 +09:00
Anthony Romano
8fb533dabe embed: warn on domain name in listener 2016-09-15 18:46:19 +09:00
Anthony Romano
2f0f5ac504 Revert "Merge pull request #6365 from heyitsanthony/fix-dns-bind"
This reverts commit af5ab7b351, reversing
changes made to da6a0f0594.
2016-09-15 18:43:46 +09:00
Jason E. Aten
9ab811d478 auth: fix range handling bugs.
Test 15, counting from zero, in TestGetMergedPerms
in etcd/auth/range_perm_cache_test.go, was incorrectly
trying to assert that [a, b) merged with [b, "")
should be [a, b). Added a test specifically for
this. This patch fixes the incorrect larger test
and the bugs in the code that it was hiding.

Fixes #6359
2016-09-15 18:41:56 +09:00
Anthony Romano
e0a99fb4ba version: bump to v3.0.8+git 2016-09-09 15:56:31 -07:00
Anthony Romano
d40982fc91 version: bump to v3.0.8 2016-09-09 13:14:44 -07:00
Gyu-Ho Lee
fe3a1cc31b wal: fix error type 2016-09-09 09:11:25 +09:00
Gyu-Ho Lee
70713706a1 wal: fix err shadowing (go vet) 2016-09-09 09:07:48 +09:00
Xiang Li
0054e7e89b etcdctl: restore should create a snapshot
Restore should create a snapshot, so the new db file
can be sent to a newly joined member.
2016-09-09 09:03:51 +09:00
Anthony Romano
97f718b504 fileutil: windows OpenDir
Windows needs to open a directory with write access to fsync but the go
runtime won't open directories that way.
2016-09-09 09:01:56 +09:00
Anthony Romano
202da9270e wal: fsync directory after wal file rename
Fixes #6368
2016-09-09 09:01:49 +09:00
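Both commits deal with making the WAL rename durable: the directory itself has to be synced after the rename, and Windows needs a special open mode to allow that. A minimal, Unix-flavored sketch of the directory fsync idea (the Windows OpenDir handling is separate):

```
package main

import "os"

// fsyncDir opens a directory and syncs it so a preceding rename inside it
// is durable. This is a sketch of the idea, not the fileutil implementation.
func fsyncDir(dir string) error {
	d, err := os.Open(dir)
	if err != nil {
		return err
	}
	defer d.Close()
	return d.Sync()
}

func main() {
	_ = fsyncDir(".")
}
```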
Anthony Romano
6e83ec0ed7 etcdmain: reject binding listeners to domain names
Fixes #6336
2016-09-07 08:08:35 +09:00
Jason E. Aten
5c44cdfdaa etcdctl/ctlv3: don't crash when we should prompt for pw.
When 'etcdctl --user name get blah' is invoked and
prompts for a password, don't panic.

Addresses the segfault part of #6343
2016-09-04 09:02:50 +09:00
Anthony Romano
09a239f040 e2e: add quoted key/value to txn test 2016-09-04 09:02:47 +09:00
Anthony Romano
3faff8b2e2 etcdctl: fix quoted string handling in txn and watch
Fixes #6315
2016-09-04 09:02:28 +09:00
Anthony Romano
2345fda18e version: bump to v3.0.7+git 2016-08-31 16:41:06 -07:00
Gyu-Ho Lee
5695120efc version: bump to v3.0.7 2016-08-31 09:49:24 -07:00
Gyu-Ho Lee
183293e061 wal: lowercase segmentSizeBytes 2016-08-31 09:48:30 -07:00
Jason E. Aten
4b48876f0e clientv3/concurrency: allow election on prefixes of keys.
After winning an election or obtaining a lock, we
auto-append a slash after the provided key prefix.
This avoids the previous deadlock due to waiting
on the wrong key.

Fixes #6278

Conflicts:
	clientv3/concurrency/election.go
	clientv3/concurrency/mutex.go
2016-08-31 09:46:05 -07:00
Aaron Lehmann
5089bf58fb wal: hold file lock while renaming WAL directory on non-Windows
Windows requires this lock to be released before the directory is
renamed. But on unix-like operating systems, releasing the lock and
trying to reacquire it immediately can be flaky if a process is forked
around the same time. The file descriptors are marked as close-on-exec
by the Go runtime, but there is a window between the fork and exec where
another process will be holding the lock.
2016-08-31 09:39:57 -07:00
Anthony Romano
480a347179 wal: use page buffered writer for writing records
Forces torn writes to only happen on sector boundaries.

Fixes #6271
2016-08-30 21:06:36 -07:00
Anthony Romano
59e560c7a7 ioutil: add page buffered writer
A buffered writer that only writes full pages or when explicitly flushed.
2016-08-30 21:06:33 -07:00
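The page-buffering idea: only whole pages are handed to the underlying writer, so a torn write can only land on a page boundary, while partial pages stay buffered until an explicit flush. A rough sketch, not the pkg/ioutil implementation:

```
package main

import (
	"bytes"
	"io"
)

type pageWriter struct {
	w        io.Writer
	pageSize int
	buf      []byte
}

// Write buffers p and forwards only whole pages to the underlying writer.
func (pw *pageWriter) Write(p []byte) (int, error) {
	pw.buf = append(pw.buf, p...)
	full := (len(pw.buf) / pw.pageSize) * pw.pageSize
	if full > 0 {
		if _, err := pw.w.Write(pw.buf[:full]); err != nil {
			return 0, err
		}
		pw.buf = pw.buf[full:]
	}
	return len(p), nil
}

// Flush writes out any buffered partial page.
func (pw *pageWriter) Flush() error {
	if len(pw.buf) == 0 {
		return nil
	}
	_, err := pw.w.Write(pw.buf)
	pw.buf = pw.buf[:0]
	return err
}

func main() {
	var out bytes.Buffer
	pw := &pageWriter{w: &out, pageSize: 4096}
	pw.Write(make([]byte, 5000)) // 4096 bytes go through, the rest stays buffered
	pw.Flush()                   // remaining bytes written
}
```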
Xiang Li
0bd9bea2e9 etcdserver: allow zero kv index for cluster upgrade
If a user upgrades etcd from 2.3.x to 3.0 and shuts down the
cluster immediately without triggering any new backend writes,
then the consistent index in the backend would be zero.

The user cannot restart etcdserver due to today's strict index
match checking. We now have to loosen this a bit for this case.
2016-08-30 21:05:20 -07:00
Anthony Romano
bd7581ac59 wal: zero out wal tail past its first zero record
Whenever the WAL is opened for writes, it should write zeroes to its tail
starting from the first zero record. Otherwise, if there are entries past
the first zero record due to a torn write, any new writes that overlap the
old entries will lead to a garbage record on the tail and cause a CRC
mismatch.
2016-08-26 14:27:53 -07:00
Anthony Romano
db378c3d26 wal: test for truncation on torn writes 2016-08-26 14:27:51 -07:00
Anthony Romano
23740162dc fileutil: add ZeroToEnd for zeroing files 2016-08-26 14:27:49 -07:00
Anthony Romano
96422a955f discovery: reject IP address records in SRVGetCluster
Was incorrectly trimming the trailing '.' from the target; this in turn
caused the etcd server to accept any SRV record with an IP target
instead of only targets with A records.
2016-08-24 09:14:47 -07:00
Gyu-Ho Lee
6fd996fdac version: bump to v3.0.6+git 2016-08-19 12:38:13 -07:00
Gyu-Ho Lee
9efa00d103 version: bump to v3.0.6 2016-08-19 12:03:02 -07:00
Xiang Li
72d30f4c34 *: minor cleanup for lease 2016-08-19 11:53:38 -07:00
Xiang Li
2e92779777 mvcc: attach keys to leases after recover all state
The previous logic is wrong. When we have history like Put(foo, bar, lease1)
and Put(foo, bar, lease2), we will end up attaching foo to two leases, 1 and
2. Similar things can happen for detach by clearing the lease of a key.

Now we try to fix this by starting to attach leases at the end of the recovery.
We use a map to keep the last lease attachment state.
2016-08-19 11:49:05 -07:00
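The recovery fix works by replaying the key history in order, remembering only the most recent lease for each key, and attaching each key exactly once at the end. A rough sketch of that bookkeeping with simplified, hypothetical types:

```
package main

// kvRecord is a simplified stand-in for a replayed key-value revision.
type kvRecord struct {
	Key   string
	Lease int64 // 0 means the key has no lease
}

// lastLeasePerKey keeps only the final lease attachment state per key, so a
// key touched by several revisions is attached to at most one lease.
func lastLeasePerKey(history []kvRecord) map[string]int64 {
	pending := map[string]int64{}
	for _, kv := range history {
		if kv.Lease == 0 {
			delete(pending, kv.Key) // a later revision detached the key
			continue
		}
		pending[kv.Key] = kv.Lease // later attachments override earlier ones
	}
	return pending
}

func main() {
	h := []kvRecord{{"foo", 1}, {"foo", 2}} // foo ends up attached to lease 2 only
	_ = lastLeasePerKey(h)
}
```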
Xiang Li
404415b1e3 lease: do lease deletion in the kv txn 2016-08-19 11:49:05 -07:00
Xiang Li
07e421d245 lease: delete kvs in a txn 2016-08-19 11:49:05 -07:00
Xiang Li
a7d6e29275 etcdserver: always recover lessor first 2016-08-19 11:49:05 -07:00
Gyu-Ho Lee
1a8b295dab vendor: update grpc/grpc-go for clientconn patch 2016-08-19 11:46:51 -07:00
Anthony Romano
ffc45cc066 rafthttp: fix race between streamReader.stop() and connection closer 2016-08-19 11:45:39 -07:00
Gyu-Ho Lee
0db1ba8093 version: bump to v3.0.5+git 2016-08-19 11:11:10 -07:00
Gyu-Ho Lee
43f7c94ac8 version: bump to v3.0.5 2016-08-19 10:20:37 -07:00
Hongchao Deng
93d13fb5b4 integration: NewClusterV3 should launch cluster before creating clients 2016-08-18 14:54:45 -07:00
Gyu-Ho Lee
6a1e3e73dd vendor: boltdb/bolt v1.3.0 for Go 1.7
In case somebody wants to build this branch with Go 1.7
2016-08-18 14:41:34 -07:00
Xiang Li
ec576ee5ac mvcc: fix count 2016-08-16 12:13:33 -07:00
Anthony Romano
606d79afc4 clientv3: use failfast and retry wrappers for at-most-once rpcs 2016-08-16 12:12:44 -07:00
Anthony Romano
f4d15a430c integration: treat client TLS connecting to insecure server as timeout 2016-08-16 12:09:42 -07:00
Anthony Romano
4a841459f1 clientv3: respect up/down notifications from grpc
Fixes #5842
2016-08-16 12:09:38 -07:00
Gyu-Ho Lee
ee8c577fc0 vendor: update grpc 2016-08-16 12:09:16 -07:00
Anthony Romano
8ae0f94cd7 clientv3: only block on New() when DialTimeout > 0
Fixes #6162
2016-08-12 12:03:33 -07:00
Anthony Romano
69a97863a9 clientv3: handle watchGrpcStream shutdown if prior to goroutine start
Fixes #6141
2016-08-09 20:59:09 -07:00
Anthony Romano
12c7e4a9f8 clientv3: close watcher stream once all watchers detach
Fixes #6134
2016-08-09 10:44:21 -07:00
Anthony Romano
23cced240b transport: add ServerName to TLSConfig and add ValidateSecureEndpoints
ServerName prevents accepting forged SRV records with cross-domain
credentials. ValidateSecureEndpoints prevents downgrade attacks from SRV
records.
2016-08-04 11:00:28 -07:00
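Pinning ServerName ties certificate verification to the discovery domain, so a forged SRV record pointing at a host whose certificate was issued for a different domain fails the TLS handshake. A minimal sketch of the client-side idea (function and parameter names are illustrative):

```
package discoverytls

import (
	"crypto/tls"
	"crypto/x509"
)

// dialWithPinnedServerName verifies the peer certificate against the
// discovery domain rather than whatever host the SRV record named.
func dialWithPinnedServerName(target, domain string, cas *x509.CertPool) (*tls.Conn, error) {
	cfg := &tls.Config{
		ServerName: domain, // e.g. the --discovery-srv domain
		RootCAs:    cas,
	}
	return tls.Dial("tcp", target, cfg)
}
```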
Anthony Romano
e73c928d85 etcdctl: set ServerName for TLS when using --discovery-srv 2016-08-04 11:00:25 -07:00
Anthony Romano
779ad90f9a Documentation: update clustering guide about PKI SRV record forging 2016-08-04 11:00:22 -07:00
Anthony Romano
dca1740be5 etcdmain: check TLS on gateway SRV records 2016-08-04 11:00:15 -07:00
Anthony Romano
487b34d857 embed: use ServerName on TLS DNS discovery w/o CA file 2016-08-04 10:56:11 -07:00
Gyu-Ho Lee
a31283cf51 v2http: use guest access in non-TLS mode
Fix https://github.com/coreos/etcd/issues/6075.
2016-08-04 10:52:42 -07:00
Gyu-Ho Lee
b722bedf8a version: bump to v3.0.4+git 2016-07-27 15:30:31 -07:00
Gyu-Ho Lee
d53923c636 version: bump to v3.0.4 2016-07-27 13:40:42 -07:00
Gyu-Ho Lee
9356665d60 *: regenerate proto files for grpc-gateway 2016-07-27 13:40:07 -07:00
Gyu-Ho Lee
0932d17395 scripts/genproto: use latest grpc-gateway c8ec92d0 2016-07-27 13:39:00 -07:00
Gyu-Ho Lee
2a3ea3f996 Dockerfile-release: add '/var/lib/etcd/'
We have '/var/etcd/' in the Dockerfile for historical reasons.
In most cases, users store data in '/var/lib/etcd/'.
2016-07-27 13:38:58 -07:00
Anthony Romano
e5a5e5f7c6 etcdserver, api, membership: don't race on setting version
Fixes #6029
2016-07-27 09:39:39 -07:00
Gyu-Ho Lee
00bdd907d5 Documentation: fix links in upgrades 2016-07-26 13:16:15 -07:00
Gyu-Ho Lee
8eab756d3f *: regenerate proto 2016-07-25 21:36:07 -07:00
Xiang Li
3d9b1d1635 scripts:genproto.sh: update grpc-gateway 2016-07-25 21:31:33 -07:00
Xiang Li
4218193dd7 etcdserverpb: add missing deleterange annotation 2016-07-25 21:31:30 -07:00
Dongsu Park
6499d01c9b etcdmain: correctly check return values from SdNotify()
SdNotify() now returns 2 values, sent and err. So startEtcdOrProxyV2()
needs to check the 2 return values correctly. As the 2 values are
independent of each other, error checking needs to be slightly updated
too.

SdNotifyNoSocket, which was previously provided by go-systemd, does not
exist any more. In that case (false, nil) will be returned instead.
2016-07-21 11:00:37 -07:00
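With the new go-systemd API, the boolean and the error have to be checked independently: an error means the notification failed, while (false, nil) just means there was no notification socket. A minimal sketch, assuming go-systemd's daemon.SdNotify(unsetEnvironment bool, state string) (bool, error) signature:

```
package main

import (
	"log"

	"github.com/coreos/go-systemd/daemon"
)

func notifyReady() {
	sent, err := daemon.SdNotify(false, "READY=1")
	if err != nil {
		log.Printf("failed to notify systemd for readiness: %v", err)
	}
	if !sent {
		// (false, nil): no notification socket, e.g. not running under systemd
		log.Printf("readiness notification not sent")
	}
}

func main() { notifyReady() }
```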
Dongsu Park
83b39b4f6b vendor: update go-systemd
Godeps.json and vendor need to be updated according to the newest
go-systemd, as SdNotify() in go-systemd has changed its API.
2016-07-21 11:00:34 -07:00
Anthony Romano
21092ca715 integration: change timeouts for TestWatchWithProgressNotify
a) 2 * progress interval was passing with dropped notifies
b) waitResponse was waiting so long that it expected a dropped notify
2016-07-21 10:59:54 -07:00
Anthony Romano
a4e79d7ebf v3rpc: don't elide next progress notification on progress notification
Fixes #5878
2016-07-21 10:59:51 -07:00
Anthony Romano
846883a979 rpctypes, clientv3: retry RPC on EtcdStopped
Fixes #5983
2016-07-21 10:59:27 -07:00
Anthony Romano
c7a3edb90f fileutil: rework purge tests so they don't poll
Fixes #5966
2016-07-21 10:57:06 -07:00
Gyu-Ho Lee
f308a27e91 e2e: test auth enabled with CN name cert 2016-07-21 10:55:56 -07:00
Gyu-Ho Lee
1d37154793 v2http: test with 'ClientCertAuthEnabled' 2016-07-21 10:55:54 -07:00
Gyu-Ho Lee
092d069d3e v2http: set 'ClientCertAuthEnabled' in client.go 2016-07-21 10:55:51 -07:00
Gyu-Ho Lee
ab5c4e23bd v2http: add 'ClientCertAuthEnabled' in handlers 2016-07-21 10:55:44 -07:00
Gyu-Ho Lee
59bf6693c7 embed: set 'ClientCertAuthEnabled' 2016-07-21 10:55:30 -07:00
Gyu-Ho Lee
affcbfbf06 etcdserver: add 'ClientCertAuthEnabled' option 2016-07-21 10:52:14 -07:00
Gyu-Ho Lee
e81df2648c v2http: move 'testdata' from 'etcdhttp' 2016-07-21 10:52:09 -07:00
rob boll
27a450235a v2http: client cert cn authentication
introduce client certificate authentication using certificate cn.
2016-07-21 10:52:06 -07:00
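CN-based authentication boils down to reading the Common Name from the verified client certificate attached to the TLS connection. A minimal sketch of that extraction (the real v2http handler wiring is more involved):

```
package certauth

import "net/http"

// clientCertCN returns the Common Name of the first verified client
// certificate on the request's TLS connection, or "" if there is none.
func clientCertCN(r *http.Request) string {
	if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
		return ""
	}
	return r.TLS.PeerCertificates[0].Subject.CommonName
}
```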
rob boll
42454f9ed8 v2http: refactor http basic auth
refactor http basic auth code to combine basic auth extraction and validation
2016-07-21 10:52:04 -07:00
Anthony Romano
7ea8860670 e2e: use a single member cluster in TestCtlV3Migrate
Occasionally migrate would fail because a minority node would be missing
v2 keys. Instead, just use a single member cluster.

Fixes #5992
2016-07-21 10:50:49 -07:00
jesse.millan
2fb72029ef etcdctl: Add support for formatting output of ls command in json
The ls command will check for and honor json or extended output formats.

Fixes #5993
2016-07-21 10:50:47 -07:00
Xiang Li
77af59796d clientv3/integration: fix race in TestWatchCompactRevision 2016-07-21 10:50:46 -07:00
Anthony Romano
b732f96e07 integration: drain keepalives in TestLeaseKeepAliveCloseAfterDisconnectRevoke
Fixes #5900
2016-07-21 10:50:44 -07:00
Gyu-Ho Lee
602198105d *: regenerate proto 2016-07-18 11:08:51 -07:00
Gyu-Ho Lee
e513cbd562 vendor: update 'gogo/protobuf' 2016-07-18 11:06:58 -07:00
Gyu-Ho Lee
4198369dd0 scripts: update gogo/protobuf, use 'gofast' plugin
- Fix https://github.com/coreos/etcd/issues/5942
- Partial fix for https://github.com/coreos/etcd/issues/5865
2016-07-18 11:06:55 -07:00
Gyu-Ho Lee
debecc1868 vendor: change to 'grpc-ecosystem' from 'gengo' 2016-07-18 11:06:33 -07:00
Gyu-Ho Lee
140fc04c62 *: regenerate proto files 2016-07-18 11:06:17 -07:00
Gyu-Ho Lee
7e34665774 scripts: update genproto with grpc-ecosystem 2016-07-18 11:03:54 -07:00
Gyu-Ho Lee
be541f3641 Documentation: change to grpc-ecosystem 2016-07-18 11:03:52 -07:00
Gyu-Ho Lee
e582416994 embed: change import path to 'grpc-ecosystem' 2016-07-18 11:03:50 -07:00
Xiang Li
842145ecb3 *: fix issue found in fast lease renew 2016-07-18 11:03:20 -07:00
Gyu-Ho Lee
d68936c4da version: bump to v3.0.3+git 2016-07-15 11:51:50 -07:00
Gyu-Ho Lee
24a90baff8 version: bump to v3.0.3 2016-07-15 11:26:14 -07:00
Anthony Romano
6b7891d5f1 integration: add FailFast(false) to failing tests 2016-07-14 19:01:17 -07:00
Anthony Romano
129b271ff8 clientv3: use grpc.FailFast(false) for all calls 2016-07-14 19:00:46 -07:00
Anthony Romano
a11ee983c4 vendor: update grpc
Fixes #5871
2016-07-14 18:47:02 -07:00
Anthony Romano
bec58d5f58 integration: test grpc error equivalence with Error() 2016-07-14 18:47:00 -07:00
Anthony Romano
4b6f9b79e6 rpctypes: test error equivalence with Error()
grpc.Errorf() now returns *rpcError, which makes comparisons shallow.
2016-07-14 18:46:58 -07:00
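Because the error returned over gRPC is a distinct pointer value, equality checks against the exported rpctypes errors have to go through Error() strings rather than ==. A minimal sketch, assuming the rpctypes.ErrEmptyKey sentinel and its import path in this tree:

```
package clientutil

import "github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"

// isEmptyKeyErr reports whether err is the server's "empty key" error,
// comparing by message since the values are not pointer-identical.
func isEmptyKeyErr(err error) bool {
	return err != nil && err.Error() == rpctypes.ErrEmptyKey.Error()
}
```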
Xiang Li
f7ec7f025b embed: only get initial cluster setting if the member is not init 2016-07-14 13:01:29 -07:00
Gyu-Ho Lee
34c76a47c1 Revert "Dockerfile: use 'ENTRYPOINT' instead of 'CMD'" 2016-07-14 12:24:06 -07:00
Xiang Li
525653ff51 raft: do not change RecentActive when resetState for progress 2016-07-12 09:59:42 -07:00
Xiang Li
a647b79038 etcdserver: fix TestSnap 2016-07-11 13:59:12 -07:00
Xiang Li
9bc1d08753 etcdctl: only takes 127.0.0.1:2379 as default endpoint 2016-07-11 13:41:53 -07:00
Gyu-Ho Lee
6a79bda691 e2e: add basic upgrade tests 2016-07-11 13:41:50 -07:00
Gyu-Ho Lee
1edfcd6859 test: add upgrade test flag 2016-07-11 13:41:47 -07:00
Gyu-Ho Lee
f51fdbccec version: bump to v3.0.2+git 2016-07-08 12:09:09 -07:00
Gyu-Ho Lee
faeeb2fc75 version: bump to v3.0.2 2016-07-08 11:45:18 -07:00
Xiang Li
d50c487132 v3rpc: lock progress and prevKV map correctly 2016-07-08 10:16:10 -07:00
Anthony Romano
b837feffe4 client/integration: test v2 client one shot operations 2016-07-07 17:30:09 -07:00
Anthony Romano
4d89640195 client: make set/delete one shot operations
Old behavior would retry set and delete even if there's an error. This
can lead to the client returning an error for deleting twice, instead
of returning an error for an indeterminate state.

Fixes #5832
2016-07-07 17:30:04 -07:00
westhood
1292d453c3 clientv3: fix sync base
It is not correct to use WithPrefix. Range end will change in every
internal batch.
2016-07-07 14:21:43 -07:00
westhood
ec20b381ed clientv3: add public function to get prefix range end 2016-07-07 14:21:41 -07:00
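The fix replaces WithPrefix with an explicit, fixed range end so the end key does not shift as internal batches advance the start key; GetPrefixRangeEnd exposes that computed end. A minimal sketch of fetching one batch that way (batch size and sort options are illustrative):

```
package syncutil

import (
	"context"

	"github.com/coreos/etcd/clientv3"
)

// firstBatch fetches one page under prefix with a fixed range end, so later
// pages can keep the same end while moving the start key forward.
func firstBatch(cli *clientv3.Client, prefix string) (*clientv3.GetResponse, error) {
	end := clientv3.GetPrefixRangeEnd(prefix)
	return cli.Get(context.Background(), prefix,
		clientv3.WithRange(end),
		clientv3.WithLimit(100),
		clientv3.WithSort(clientv3.SortByKey, clientv3.SortAscend))
}
```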
Secret
37cc3f5262 Dockerfile: use 'ENTRYPOINT' instead of 'CMD'
use entrypoint, so people can specify flags to etcd
without providing the binary.

Signed-off-by: Secret <haichuang221@163.com>
2016-07-05 11:40:47 -07:00
Xiang Li
7f1940e5ed etcdserver: commit before sending snapshot 2016-07-05 11:06:54 -07:00
Xiang Li
caccf8e5e6 v3rpc: do not panic on user error for watch 2016-07-05 11:06:35 -07:00
Anthony Romano
ef65dfe2eb wal: release wal locks before renaming directory on init
Fixes #5852
2016-07-05 11:05:51 -07:00
Gyu-Ho Lee
ff6c6916f2 etcdserver/api: print only major.minor version API
Before

2016-07-01 14:57:50.927170 I | api: enabled capabilities for version 3.0.0

After

2016-07-01 14:57:50.927170 I | api: enabled capabilities for version 3.0
2016-07-01 15:19:53 -07:00
Gyu-Ho Lee
3dfe8765d3 version: bump to v3.0.1+git 2016-07-01 14:53:20 -07:00
Gyu-Ho Lee
a4a52cb15d version: bump to v3.0.1 2016-07-01 13:58:37 -07:00
Gyu-Ho Lee
014970930a *: test, docs with go1.6+
etcd v3 uses http/2, which doesn't work well with go1.5
2016-07-01 11:59:37 -07:00
Geert-Johan Riemer
4628be982c Documentation: fix typo in api_grpc_gateway.md 2016-07-01 11:59:35 -07:00
Anthony Romano
ff55e5a188 etcdserver: exit on missing backend only if semver is >= 3.0.0 2016-07-01 11:59:32 -07:00
Gyu-Ho Lee
bf0898266c release: fix Dockerfile etcd binary paths
release script uses binary files in 'release/image-docker',
not the ones in "bin/". Tested with v3.0.0 release.
2016-06-30 12:27:34 -07:00
Gyu-Ho Lee
b9d69f7698 version: bump to v3.0.0+git 2016-06-30 11:37:05 -07:00
Gyu-Ho Lee
6f48bda7ac version: bump to v3.0.0 2016-06-30 10:04:59 -07:00
Gyu-Ho Lee
316534e09e *: remove beta from docs 2016-06-30 10:04:34 -07:00
Jeff Zellner
3cecbdb464 hack: install goreman in tls-setup example 2016-06-30 09:33:19 -07:00
Jeff Zellner
62f11e43ee hack: add tls-setup example generated certs to gitignore 2016-06-30 09:33:12 -07:00
Anthony Romano
064c1585ee Merge pull request #5822 from raoofm/patch-9
Doc: fix typo in dev-guide.md
2016-06-30 09:06:32 -07:00
Raoof Mohammed
15300a1eb8 Doc: fix typo in dev-guide.md 2016-06-30 10:36:50 -04:00
Gyu-Ho Lee
58dd047ee4 ctlv3: make flags, commands formats consistent
1. Capitalize first letter
2. Remove period at the end

(followed the pattern in linux coreutil man page)
2016-06-29 16:16:56 -07:00
Anthony Romano
4b42ea6cd7 clientv3: only use closeErr on watch when donec is closed
Fixes #5800
2016-06-28 17:48:44 -07:00
Gyu-Ho Lee
53c27ae621 benchmark: fix Compact request 2016-06-28 14:15:32 -07:00
Xiang Li
269de67bde mvcc: do not hash consistent index 2016-06-28 12:29:36 -07:00
Anthony Romano
8bbccf1047 clientv3, ctl3, clientv3/integration: add compact response to compact 2016-06-28 12:29:32 -07:00
Gyu-Ho Lee
c00e97ea49 Merge pull request #5785 from gyuho/doc_update
Documentation/upgrades: upgrade 3.0 doc
2016-06-27 15:46:53 -07:00
Gyu-Ho Lee
c8a7d281ee Documentation/upgrades: upgrade 3.0 doc 2016-06-27 15:45:44 -07:00
Anthony Romano
a5c2cd2708 Merge pull request #5784 from heyitsanthony/doc-todos
Documentation: clear out some TODOs
2016-06-27 15:20:30 -07:00
Anthony Romano
11fdf2dd18 Documentation: clear out some TODOs 2016-06-27 15:00:18 -07:00
Gyu-Ho Lee
3b300f42e9 Merge pull request #5781 from gyuho/compact_client
*: compact with physical in client side
2016-06-27 14:46:54 -07:00
Anthony Romano
b4162f8a45 Merge pull request #5782 from heyitsanthony/doc-lint
Documentation: conform to header style
2016-06-27 12:20:31 -07:00
Gyu-Ho Lee
f63e6875bd e2e: test 'physical' flag in compact cmd 2016-06-27 12:07:49 -07:00
Gyu-Ho Lee
76e2bf03b8 etcdctl: v3 compact with physical flag 2016-06-27 12:07:46 -07:00
Gyu-Ho Lee
859e336d68 clientv3: configurable physical in compact 2016-06-27 12:04:04 -07:00
Anthony Romano
35229eb2d3 Documentation: conform to header style 2016-06-27 12:00:24 -07:00
Xiang Li
bbed3ecc8d Merge pull request #5780 from xiang90/check_i
etcdserver: check index of the kv when restarting
2016-06-27 11:44:10 -07:00
Xiang Li
9614dc6e71 etcdserver: check index of the kv when restarting 2016-06-27 10:27:27 -07:00
Hitoshi Mitake
aab905f7cc Merge pull request #5776 from mitake/commit-title
test: more accurate checking of commit title
2016-06-27 14:48:10 +09:00
Xiang Li
cfc171d5f7 Merge pull request #5777 from xiang90/c_be
etcdserver: refuse to restart if backend file is missing
2016-06-26 22:34:08 -07:00
Xiang Li
891ddcba6e etcdserver: refuse to restart if backend file is missing 2016-06-26 21:16:51 -07:00
Hitoshi Mitake
555028f3d1 test: more accurate checking of commit title 2016-06-26 21:05:47 -07:00
Anthony Romano
efcf03f0b1 Merge pull request #5773 from heyitsanthony/integration-unixsock
integration: use unix sockets for all connections
2016-06-25 16:17:20 -07:00
Gyu-Ho Lee
ae6e879812 Merge pull request #5774 from gyuho/raft_minor_fix
raft: len(entries) before Lock, use firstIndex
2016-06-25 11:13:16 -07:00
Gyu-Ho Lee
6a48961895 raft: len(entries) before Lock, use firstIndex
- To avoid unnecessary locking in case len(entries) == 0
- use firstIndex method
2016-06-24 23:50:00 -07:00
Anthony Romano
13d0ea7f54 integration: use unix domain sockets for all connections 2016-06-24 21:18:19 -07:00
Anthony Romano
bbb84ff709 discovery: use pkg/transport to create http transport 2016-06-24 21:04:39 -07:00
Anthony Romano
54d56e2531 pkg/types: accept unix and unixs schemes 2016-06-24 21:04:39 -07:00
Anthony Romano
fc1a226d15 pkg/transport: unix domain socket listener and transport 2016-06-24 21:04:31 -07:00
Xiang Li
0c820dc7ba Merge pull request #5769 from xiang90/unstable
doc: add unstable
2016-06-24 13:38:54 -07:00
Gyu-Ho Lee
40f62ab4a5 Merge pull request #5771 from gyuho/docker
*: separate Dockerfile for quay build trigger
2016-06-24 13:26:05 -07:00
Xiang Li
0da05a896f doc: add experimental apis 2016-06-24 13:05:53 -07:00
Gyu-Ho Lee
8a71f749d7 *: separate Dockerfile for quay build trigger
Fix https://quay.io/repository/coreos/etcd-git/build/d75d80b1-7d8d-42bd-af07-645b7da3a118.
2016-06-24 12:55:10 -07:00
Gyu-Ho Lee
3424f95b03 Merge pull request #5770 from gyuho/op_guide
*: move 'Project detail' to op-guide
2016-06-24 10:50:03 -07:00
Gyu-Ho Lee
862b3fe2be *: move 'Project detail' to op-guide 2016-06-24 10:47:12 -07:00
Anthony Romano
aeb5b3c82b Merge pull request #5766 from heyitsanthony/eschew-you
doc: remove you/your from current docs
2016-06-24 09:29:26 -07:00
Anthony Romano
e1b9ccb1d7 doc: eschew "you" for current docs 2016-06-24 09:28:12 -07:00
Anthony Romano
d284a45a4b Merge pull request #5765 from heyitsanthony/autotls-security
doc: auto-tls example in security guide
2016-06-24 09:17:38 -07:00
Anthony Romano
9bde740cf9 doc: auto-tls example in security guide 2016-06-24 09:15:46 -07:00
Xiang Li
c1d2149a0f Merge pull request #5767 from mitake/build
build: remove needless output
2016-06-24 09:03:14 -07:00
Gyu-Ho Lee
15b267fbfd Merge pull request #5768 from gyuho/raft_comment
raft: fix comment, method name in progress
2016-06-24 08:38:02 -07:00
Gyu-Ho Lee
33f7e7583b raft: fix comment,method name to needSnapshotAbort
And 'maybeSnapshotAbort' does not 'unset'
the pendingSnapshot. 'resetState', which is called after this
method, is the one that unsets pendingSnapshot. So this changes
the method name.
2016-06-24 07:54:10 -07:00
Hitoshi Mitake
abc1cb945b build: remove needless output
The current build script outputs its name to stdout because it
checks argv[0].

$ ./build
./build

The line is a little bit mysterious so this commit removes it.
2016-06-24 13:54:53 +09:00
Gyu-Ho Lee
a7189ef073 Merge pull request #5762 from gyuho/member_auth
Documentation/demo: add member, auth example
2016-06-23 16:10:57 -07:00
Anthony Romano
78d9ae1820 Merge pull request #5763 from heyitsanthony/local-tester-fp
local-tester: support failpoints
2016-06-23 13:01:26 -07:00
Xiang Li
9b4dc92fdc Merge pull request #5761 from xiang90/proxy_v2
*: make it clear that proxy only supports v2 api now
2016-06-23 12:35:04 -07:00
Xiang Li
755d192ff7 *: make it clear that proxy only supports v2 api now 2016-06-23 12:06:42 -07:00
Anthony Romano
244266708b local-tester: support failpoints 2016-06-23 12:04:11 -07:00
Gyu-Ho Lee
b2a8acdf10 Documentation/demo: add member, auth example 2016-06-23 11:50:37 -07:00
Gyu-Ho Lee
9664df1b5e Merge pull request #5760 from gyuho/peer-urls
*: change ctlv3 flag peerURLs to 'peer-urls'
2016-06-23 10:12:28 -07:00
Gyu-Ho Lee
f9d250ad1b e2e: update flag to 'peer-urls' 2016-06-23 09:53:30 -07:00
Gyu-Ho Lee
fa74a0d3bb etcdctl: change peerURLs flag to 'peer-urls' 2016-06-23 09:52:25 -07:00
Xiang Li
c949811752 Merge pull request #5758 from dannysauer/master
index is incremented in Watcher; remove double-increment
2016-06-23 07:57:21 -07:00
Xiang Li
5247702d8d Merge pull request #5755 from nekto0n/reuse-timer
Reuse timer in backend.run.
2016-06-23 07:28:09 -07:00
Danny Sauer
a998fb4af1 etcdctl: index is incremented in Watcher; remove double-increment 2016-06-23 08:54:34 -05:00
Nikita Vetoshkin
dbc7c2cf4e backend: reuse timer in run().
Benchmarks:

```
import (
	"testing"
	"time"
)

func BenchmarkTimeAfter(b *testing.B) {
	b.ReportAllocs()
	for n := 0; n < b.N; n++ {
		select {
		case <- time.After(1 * time.Millisecond):
		}
	}
}

func BenchmarkTimerReset(b *testing.B) {
	b.ReportAllocs()
	t := time.NewTimer(1 * time.Millisecond)
	for n := 0; n < b.N; n++ {
		select {
		case <- t.C:
		}
		t.Reset(1 * time.Millisecond)
	}
}
```

Running reveals that each time.After iteration results in 3 allocs, while the reused timer allocates nothing:

```
BenchmarkTimeAfter-4 	    2000	   1112134 ns/op	     192 B/op	       3 allocs/op
BenchmarkTimerReset-4	    2000	   1109774 ns/op	       0 B/op	       0 allocs/op
```
2016-06-23 18:49:41 +05:00
Gyu-Ho Lee
b945a3fcc8 Merge pull request #5753 from gyuho/example
clientv3: add auth example
2016-06-22 20:27:30 -07:00
Gyu-Ho Lee
2da5bdd4df clientv3: add auth example 2016-06-22 20:06:13 -07:00
Gyu-Ho Lee
e4ab1540c8 Merge pull request #5752 from gyuho/mkdir
Make mkdir consistent
2016-06-22 16:16:38 -07:00
Gyu-Ho Lee
4a0f922a6c pkg/transport: use TouchDirAll 2016-06-22 15:57:55 -07:00
Gyu-Ho Lee
6cfc03a5f9 wal: use CreateDirAll 2016-06-22 15:57:55 -07:00
Gyu-Ho Lee
c363fd288b etcdserver: use CreateDirAll 2016-06-22 15:57:47 -07:00
Gyu-Ho Lee
5720fe812e etcdctl: use CreateDirAll 2016-06-22 15:55:56 -07:00
Gyu-Ho Lee
187faba3e0 pkg/fileutil: fix TouchDirAll, add CreateDirAll
os.MkdirAll never returns os.ErrExist.
Also add another function to ensure the deepest
directory is empty.
2016-06-22 15:54:17 -07:00
Gyu-Ho Lee
df9a52e53f Merge pull request #5702 from gyuho/vet
*: go vet, go lint fixes
2016-06-22 14:52:34 -07:00
Anthony Romano
6fbf8be3ac Merge pull request #5751 from heyitsanthony/fail-bad-commit-msg
test: check commit titles
2016-06-22 14:03:15 -07:00
Anthony Romano
b7253992d4 test: check commit titles 2016-06-22 13:30:22 -07:00
Gyu-Ho Lee
c1e3601776 raftexample: fixes from go vet, go lint 2016-06-22 12:04:15 -07:00
Gyu-Ho Lee
e221699fd8 rafthttp: fix from go vet, go lint 2016-06-22 12:04:15 -07:00
Gyu-Ho Lee
725ded40f7 etcdserver: fix from go vet, go lint 2016-06-22 12:04:15 -07:00
Gyu-Ho Lee
e2138179e3 client: fix from go vet, go lint 2016-06-22 12:04:15 -07:00
Gyu-Ho Lee
6557ef7cd8 *: copy all exported members in tls.Config
Without this, go vet complains

assignment copies lock value to n: crypto/tls.Config contains sync.Once
contains sync.Mutex
2016-06-22 12:04:08 -07:00
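The vet warning comes from copying a struct that embeds locking state, so the fix copies the exported fields instead of dereferencing the whole value. A minimal sketch of that pattern (modern Go also offers tls.Config.Clone, which postdates this change):

```
package tlsutil

import "crypto/tls"

// copyExported copies only the exported fields that are needed, avoiding a
// whole-struct copy that would also copy the embedded sync state.
func copyExported(c *tls.Config) *tls.Config {
	return &tls.Config{
		Certificates: c.Certificates,
		RootCAs:      c.RootCAs,
		ServerName:   c.ServerName,
		MinVersion:   c.MinVersion,
		// ...remaining exported fields as needed
	}
}
```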
Anthony Romano
84c416491e Merge pull request #5739 from heyitsanthony/serialize-txn
etcdserver: make serialized txns auth-aware
2016-06-22 11:49:56 -07:00
Gyu-Ho Lee
caffcb7fbb *: go vet fix in go tip 2016-06-22 11:10:59 -07:00
Anthony Romano
30cfa30490 etcdserver: make serialized txns auth-aware 2016-06-22 10:51:42 -07:00
Anthony Romano
aafb2e9430 etcdserver: add lock to authApplier so serialized requests don't race 2016-06-22 10:51:42 -07:00
Gyu-Ho Lee
27ef4baa9c Merge pull request #5749 from gyuho/manual
*: misc typos and go vet fixes
2016-06-22 10:45:02 -07:00
James Shubin
6480066054 *: misc typos and go vet fixes 2016-06-22 10:32:13 -07:00
Xiang Li
8d259d3cf1 Merge pull request #5745 from xiang90/count_client
clientv3: add withCount support
2016-06-22 10:04:06 -07:00
Xiang Li
82991074bf Merge pull request #5733 from mitake/user-detail
etcdctl: a flag for getting detailed information of a user
2016-06-22 09:26:00 -07:00
Hitoshi Mitake
0e7690780f etcdctl: a flag for getting detailed information of a user
This commit adds a new flag, --detail, to the etcdctl user get command. The
flag enables printing the detailed permission information of the user,
as in the example below:

$ ETCDCTL_API=3 bin/etcdctl --user root:p user get u1
User: u1
Roles: r1 r2
$ ETCDCTL_API=3 bin/etcdctl --user root:p user get u1 --detail
User: u1

Role r1
KV Read:
        [k1, k5)
KV Write:
        [k1, k5)

Role r2
KV Read:
        a
        b
        [k8, k9)
KV Write:
        a
        b
        [k8, k9)
2016-06-22 13:29:48 +09:00
Xiang Li
6496ae005d clientv3: add withCount support 2016-06-21 21:17:35 -07:00
Xiang Li
0b5ea3ec94 Merge pull request #5742 from xiang90/count
*: support count in range query
2016-06-21 19:42:08 -07:00
Xiang Li
def21f11a9 *: support count in range query 2016-06-21 16:20:55 -07:00
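Count support means a range query can report how many keys match without returning them. A minimal client-side sketch, assuming clientv3's WithCountOnly option that maps to this flag:

```
package countutil

import (
	"context"

	"github.com/coreos/etcd/clientv3"
)

// countPrefix returns the number of keys under prefix without transferring
// the keys or values themselves.
func countPrefix(cli *clientv3.Client, prefix string) (int64, error) {
	resp, err := cli.Get(context.Background(), prefix,
		clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		return 0, err
	}
	return resp.Count, nil
}
```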
Anthony Romano
5a6ad1ea76 Merge pull request #5738 from heyitsanthony/fp
build with failpoints
2016-06-21 15:02:45 -07:00
Anthony Romano
de68818f03 etcdserver: add some failpoints 2016-06-21 14:43:20 -07:00
Anthony Romano
7f8ffd7dbe test, build: support failpoints 2016-06-21 14:43:20 -07:00
Anthony Romano
6009e88077 test, build: make build script source-able without doing a build 2016-06-21 14:35:20 -07:00
Gyu-Ho Lee
99957e9831 Merge pull request #5736 from gyuho/cleanup
etcdctl/ctlv3: minor clean ups
2016-06-21 13:31:20 -07:00
Gyu-Ho Lee
80aa5978ca etcdctl/ctlv3: minor clean ups
- Fix typo
- Improve command ordering (elect should be below lock)
- Update migrate command description
2016-06-21 13:12:01 -07:00
Gyu-Ho Lee
c01c36bcfd Merge pull request #5735 from gyuho/auth_doc
etcdctl/ctlv3: document auth,user,role
2016-06-21 12:49:31 -07:00
Gyu-Ho Lee
e5d9ca5180 etcdctl/ctlv3: document auth,user,role 2016-06-21 12:46:42 -07:00
Xiang Li
22bae02fe5 Merge pull request #5734 from xiang90/learning
doc: move docs to learning
2016-06-21 11:02:04 -07:00
Xiang Li
7c12949b41 doc: move docs to learning 2016-06-21 10:49:46 -07:00
Xiang Li
1b8e83ae60 Merge pull request #5732 from mitake/e2e-user-role-dyn-update
e2e: add test cases for updating user and role during operations
2016-06-21 09:54:18 -07:00
Hitoshi Mitake
4106e56d91 e2e: check role revoking during operations 2016-06-21 15:52:36 +09:00
Hitoshi Mitake
68bcbdc84e e2e: check user deletion during operations 2016-06-21 15:03:04 +09:00
Xiang Li
d017814eaa Merge pull request #5722 from mitake/auth-v3-check-test
e2e: check runtime permission changing
2016-06-20 22:42:43 -07:00
Gyu-Ho Lee
8920e7c4d5 Merge pull request #5731 from gyuho/grpc_log
*: use capnslog for grpclog
2016-06-20 20:35:28 -07:00
Gyu-Ho Lee
a1c7a7df5e *: use capnslog for grpclog 2016-06-20 20:35:03 -07:00
Hitoshi Mitake
6fe4d9d30a e2e: check runtime permission changing
This commit extends the test for checking runtime permission
grant/revoke.
2016-06-21 11:55:09 +09:00
Gyu-Ho Lee
6d81601df3 vendor: update capnslog 2016-06-20 19:39:15 -07:00
Gyu-Ho Lee
0cc59f3976 Merge pull request #5730 from gyuho/cli_dep
*: codegangsta/cli to urfave/cli
2016-06-20 16:54:15 -07:00
Gyu-Ho Lee
bdca594495 etcdctl/ctlv2: use latest Action interface 2016-06-20 16:34:28 -07:00
Xiang Li
1e0ff8555e Merge pull request #5729 from xiang90/fix_bench
benchmark: fix watch bench
2016-06-20 16:02:09 -07:00
Gyu-Ho Lee
0ae9d444f9 ctlv2: use urfave/cli in ctlv2 2016-06-20 15:17:03 -07:00
Gyu-Ho Lee
c4df15ff3e vendor: codegangsta/cli to urfave/cli
For https://github.com/coreos/etcd/issues/3901.
2016-06-20 15:06:20 -07:00
Anthony Romano
ce180bbaf1 Merge pull request #5685 from heyitsanthony/multictx-watcher
clientv3: watch with arbitrary ctx values
2016-06-20 14:52:40 -07:00
Gyu-Ho Lee
d5696cb6ef Merge pull request #5712 from gyuho/curl_v3
e2e: grpc-gateway cURL tests
2016-06-20 14:48:29 -07:00
Gyu-Ho Lee
b4f0a8853b e2e: grpc-gateway cURL tests 2016-06-20 14:29:10 -07:00
Anthony Romano
1097d63ff7 clientv3/integration: test WithRequireLeader on Watch 2016-06-20 14:26:16 -07:00
Xiang Li
2bd5d66596 benchmark: fix watch bench 2016-06-20 14:00:46 -07:00
Gyu-Ho Lee
a01f5a2786 Merge pull request #5728 from gyuho/log_dir
etcd-agent: set up directory for etcd logs
2016-06-20 13:25:25 -07:00
Anthony Romano
722f5b2a8c clientv3: watch with arbitrary ctx values
Sets up a new watch stream for every unique set of ctx values.

Fixes #5354
2016-06-20 12:44:51 -07:00
Xiang Li
e5583b26eb Merge pull request #5711 from xiang90/client_bytes
*: add client network metrics
2016-06-20 12:03:18 -07:00
Gyu-Ho Lee
50f2f984e4 etcd-agent: set up directory for etcd logs 2016-06-20 11:32:14 -07:00
Xiang Li
35fd81e465 *: add client network metrics 2016-06-20 11:18:06 -07:00
Xiang Li
fb1f1ce1fd Merge pull request #5727 from xiang90/fix_watch_bench
benchmark: correctly count number of watchers
2016-06-20 11:00:18 -07:00
Xiang Li
2a2dd1075f benchmark: correctly count number of watchers 2016-06-20 10:37:17 -07:00
Xiang Li
729f5b45fd Merge pull request #5720 from xiang90/report_recv
*: fix pending events metrics
2016-06-20 06:44:16 -07:00
Xiang Li
6e717775a8 Merge pull request #5723 from mitake/etcdctl-misc
etcdctl: slightly enhance output of role revoke-permission
2016-06-20 06:14:28 -07:00
Hitoshi Mitake
0173564122 etcdctl: slightly enhance output of role revoke-permission 2016-06-20 16:57:50 +09:00
Xiang Li
6f28b43806 *: fix pending events metrics 2016-06-19 23:00:39 -07:00
Xiang Li
8111e0f7dc Merge pull request #5716 from ajityagaty/get_filtering
v3api: Add a flag to RangeRequest to return only the keys.
2016-06-19 14:50:15 -07:00
Ajit Yagaty
ad5d55dd4c v3api: Add a flag to RangeRequest to return only the keys.
Currently the user can't list only the keys in a prefix search. In
order to support such operations, the filtering is done on the
server side to reduce the encoding and network transfer costs.
2016-06-19 14:18:39 -07:00
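With the keys-only flag the server strips values before encoding the response, so listing key names under a prefix no longer pays for value transfer. A minimal client-side sketch, assuming the clientv3 WithKeysOnly option that corresponds to this flag:

```
package keysutil

import (
	"context"

	"github.com/coreos/etcd/clientv3"
)

// listKeys returns only the key names under prefix; values are filtered out
// on the server side.
func listKeys(cli *clientv3.Client, prefix string) ([]string, error) {
	resp, err := cli.Get(context.Background(), prefix,
		clientv3.WithPrefix(), clientv3.WithKeysOnly())
	if err != nil {
		return nil, err
	}
	keys := make([]string, 0, len(resp.Kvs))
	for _, kv := range resp.Kvs {
		keys = append(keys, string(kv.Key))
	}
	return keys, nil
}
```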
Gyu-Ho Lee
23621387fc Merge pull request #5714 from gyuho/wal_dir
*: use fileutil.TouchDirAll
2016-06-19 12:02:46 -07:00
Gyu-Ho Lee
d37e564eaa etcdserver: use TouchDirAll 2016-06-19 11:26:52 -07:00
Xiang Li
ce50ee14d8 Merge pull request #5710 from xiang90/rm_la
*: remove old flag support
2016-06-19 06:57:28 -07:00
Gyu-Ho Lee
eaa72dfa0b Merge pull request #5709 from gyuho/docker
update: Dockerfile, documentation
2016-06-18 19:57:34 -07:00
Gyu-Ho Lee
b03c832bed Merge pull request #5698 from gyuho/documentation
Documentation: grpc-gateway
2016-06-17 15:33:18 -07:00
Gyu-Ho Lee
3ddfa16c46 Documentation: update container.md 2016-06-17 15:22:13 -07:00
Gyu-Ho Lee
eec706b9ae etcdserverpb: generate Swagger API JSON 2016-06-17 15:19:32 -07:00
Gyu-Ho Lee
09e5db5a46 Documentation: add grpc-gateway doc 2016-06-17 15:19:28 -07:00
Xiang Li
8ea6be38ba *: remove old flag support
Support for these legacy flags is here only because we do not want
CoreOS updates to break people.

Now people will be aware that they are switching to etcd3, so we do not need
to support 0.x flags any more.
2016-06-17 14:51:45 -07:00
Gyu-Ho Lee
c25ff426af Dockerfile: build image with alpine 2016-06-17 14:42:40 -07:00
Anthony Romano
6dcd020d7d Merge pull request #5707 from heyitsanthony/test-all
test: don't hardcode packages for testing
2016-06-17 14:08:46 -07:00
Xiang Li
3f6619ada9 Merge pull request #5708 from xiang90/pending
*: add pending/failed proposal metrics
2016-06-17 13:48:39 -07:00
Anthony Romano
9feb3d0e51 etcd-tester: fix goword warnings 2016-06-17 13:37:35 -07:00
Anthony Romano
f7b84d69a4 etcd-agent/client: fixup godocs 2016-06-17 13:37:35 -07:00
Anthony Romano
ea21b8ee1f lessor: fix go vet, goword warnings, and unreliable test 2016-06-17 13:37:25 -07:00
Anthony Romano
016be1ef31 contrib/recipes: fix govet and goword warnings 2016-06-17 13:13:09 -07:00
Xiang Li
598fa7a10e *: add pending/failed proposal metrics 2016-06-17 13:09:38 -07:00
Xiang Li
aa503f84d5 Merge pull request #5705 from xiang90/metrics_peer
*: add peer prefix for network metrics between peers
2016-06-17 12:38:06 -07:00
Xiang Li
bd8627c8ab Merge pull request #5706 from xiang90/app_metrics
etcdserver: add applied metrics
2016-06-17 12:26:23 -07:00
Xiang Li
6af0917812 *: add peer prefix for network metrics between peers 2016-06-17 11:59:49 -07:00
Xiang Li
57474697af etcdserver: add applied metrics 2016-06-17 11:52:50 -07:00
Anthony Romano
74b13aab61 grpcproxy: fix go vet warnings 2016-06-17 11:41:49 -07:00
Anthony Romano
6c0882145a test: don't use hardcoded package lists for testing 2016-06-17 11:41:49 -07:00
Xiang Li
e4f56c4eb6 Merge pull request #5701 from xiang90/rm_exp
*: make auto-compaction-retention non-experimental
2016-06-17 11:02:43 -07:00
Gyu-Ho Lee
8bb0ce54e6 Merge pull request #5704 from gyuho/agent_fix
etcd-agent: fix test
2016-06-17 10:58:14 -07:00
Gyu-Ho Lee
61659302db Merge pull request #5703 from gyuho/grpc_proto
Update gRPC, gogo/protobuf
2016-06-17 10:56:38 -07:00
Gyu-Ho Lee
63c13e8b98 etcd-agent: fix test 2016-06-17 10:47:15 -07:00
Gyu-Ho Lee
63901be674 *: regenerate proto 2016-06-17 10:22:28 -07:00
Gyu-Ho Lee
d03a3d141e vendor: update gRPC dependency 2016-06-17 10:22:16 -07:00
Gyu-Ho Lee
b0d7455fb1 scripts: use latest gogo/protobuf for proto files
For https://github.com/coreos/etcd/issues/5671.
2016-06-17 10:21:18 -07:00
Xiang Li
d68664841c *: make auto-compaction-retention non-experimental 2016-06-17 10:04:31 -07:00
Xiang Li
3488555bc3 Merge pull request #5674 from mitake/auth-v3-get-users-roles
*: support getting all users and roles in auth v3
2016-06-17 06:51:47 -07:00
Hitoshi Mitake
18253e2723 *: support getting all users and roles in auth v3
This commit expands the RPCs for getting a user and a role to support listing
all users and roles. etcdctl v3 now supports getting all users and
roles with the newly added option --all, e.g. etcdctl user get --all
2016-06-17 16:22:41 +09:00
Gyu-Ho Lee
cc4f35887c Merge pull request #5699 from gyuho/readme
README: more demos, links
2016-06-16 22:57:31 -07:00
Gyu-Ho Lee
dde2aea214 Documentation: add 'migrate' command example 2016-06-16 19:47:57 -07:00
Gyu-Ho Lee
1066c9b806 README: add dash, play.etcd.io, animated demo link 2016-06-16 19:47:25 -07:00
Xiang Li
2d08e093c1 Merge pull request #5696 from xiang90/fix_panic
etcdserver: fix panic when getting header of raft request
2016-06-16 13:58:50 -07:00
Xiang Li
adff458895 etcdserver: fix panic when getting header of raft request 2016-06-16 13:42:10 -07:00
Gyu-Ho Lee
b3558894f2 Merge pull request #5695 from gyuho/proto
*: use latest protodoc, regenerate
2016-06-16 12:35:44 -07:00
Xiang Li
c98ca2db43 Merge pull request #5493 from mqliang/cache
proxy: cache range request in proxy
2016-06-16 12:30:13 -07:00
Xiang Li
5f5c3c8f82 Merge pull request #5694 from xiang90/comp
etcdserver: only pause compaction when sending snapshot
2016-06-16 12:26:55 -07:00
Xiang Li
7f9adfd5b8 Merge pull request #5692 from xiang90/fix_live
raft: make tick unblock and fix potential live lock
2016-06-16 12:26:31 -07:00
Gyu-Ho Lee
0bae7b635c *: regenerate proto, doc 2016-06-16 11:57:46 -07:00
Gyu-Ho Lee
d26d006fd6 scripts: use latest protodoc to skip grpc-gateway
protodoc now skips grpc-gateway options
2016-06-16 11:57:05 -07:00
Xiang Li
1c6070ccc7 Merge pull request #5693 from Jiaweizdev/update-port-number-in-proxy-doc
doc: update port number in proxy doc
2016-06-16 08:59:39 -07:00
Xiang Li
699e76b631 etcdserver: only pause compaction when sending snapshot 2016-06-16 08:57:02 -07:00
Jiawei Zhang
fb165fcc58 doc: update port number 2016-06-16 17:13:52 +02:00
Xiang Li
848f539536 raft: make tick unblock and fix potential live lock 2016-06-16 08:01:06 -07:00
Xiang Li
49266dca2d Merge pull request #5690 from xiang90/fix_s
etcdserver: save state before save snapshot
2016-06-15 22:36:30 -07:00
Xiang Li
9c78cda088 etcdserver: save state before save snapshot 2016-06-15 22:00:33 -07:00
Hitoshi Mitake
b07fbbf27c Merge pull request #5687 from mitake/auth-v3-txn-2
etcdserver: permission checking of Txn() in authApplierV3
2016-06-16 12:51:10 +09:00
Hitoshi Mitake
cdf1a2ee2c etcdserver: permission checking of Txn() in authApplierV3 2016-06-15 20:10:16 -07:00
Anthony Romano
5385ca0a43 Merge pull request #5659 from heyitsanthony/bridge-more-errors
bridge: packet corruption and reordering
2016-06-15 19:23:22 -07:00
Anthony Romano
11869905ae bridge: packet corruption and reordering
With bonus bridge connection code refactor.
2016-06-15 17:08:19 -07:00
Gyu-Ho Lee
555976ea84 Merge pull request #5684 from gyuho/test
etcd-agent: SIGQUIT when cleanup
2016-06-15 16:13:20 -07:00
Xiang Li
bc69142940 Merge pull request #5683 from xiang90/fix_refresh
store: copy old value when refresh + cas
2016-06-15 16:11:26 -07:00
Gyu-Ho Lee
bd604a029e etcd-agent: SIGQUIT when cleanup 2016-06-15 16:03:25 -07:00
Xiang Li
df56f9d6f9 store: copy old value when refresh + cas 2016-06-15 15:32:58 -07:00
Xiang Li
b607b36a6c Merge pull request #5648 from ingvagabund/doc-nits
docs: Clustering.md: Switch "command line" and "environment variables"
2016-06-15 15:24:20 -07:00
Xiang Li
c505f03c62 Merge pull request #5682 from cdancy/patch-2
Documentation: add gradle-etcd-rest-plugin to libraries-and-tools.md
2016-06-15 15:02:51 -07:00
Christopher Dancy
f392370f73 Documentation: add gradle-etcd-rest-plugin to libraries-and-tools.md
Add link to the gradle-etcd-rest-plugin client under the 'Gradle plugins' sub-section.

Fixes #5681
2016-06-15 17:59:50 -04:00
Gyu-Ho Lee
7d666ab8b9 Merge pull request #5677 from gyuho/minor_etcdserver_fix
etcdserver: preallocate slices
2016-06-15 13:20:06 -07:00
Gyu-Ho Lee
32d766d749 etcdserver: preallocate slice 2016-06-15 13:03:10 -07:00
Anthony Romano
b98fa063c8 Merge pull request #5672 from heyitsanthony/applier-auth-layer
auth, etcdserver: separate auth checking apply from core apply
2016-06-15 10:06:34 -07:00
Anthony Romano
16db9e68a2 auth, etcdserver: separate auth checking apply from core apply 2016-06-15 09:03:27 -07:00
mqliang
5676c5cf26 proxy: serve range request from proxy cache if set serializable 2016-06-15 14:12:36 +08:00
mqliang
eca38c109a vendor: add groupcache lru package 2016-06-15 14:12:36 +08:00
Xiang Li
16d86fd4f8 Merge pull request #5669 from xiang90/proto-gw
main: add grpc-gateway support
2016-06-14 17:46:00 -07:00
Xiang Li
7f569a163c test: go vet should only test the go code in the dir 2016-06-14 17:09:06 -07:00
Xiang Li
252adc0caf *: update dependencies 2016-06-14 17:09:06 -07:00
Xiang Li
5a7b7f7595 main: add grpc-gateway support
Now etcd can serve HTTP JSON requests at /v3alpha/
2016-06-14 17:09:06 -07:00
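With the gateway enabled, v3 RPCs become plain HTTP+JSON calls under the /v3alpha/ prefix mentioned above. A rough sketch of a put through the gateway; the kv/put path and the base64-encoded key/value follow the gateway's JSON mapping and are assumptions here, not taken from the commit:

```
package main

import (
	"net/http"
	"strings"
)

func main() {
	// base64("foo") = "Zm9v", base64("bar") = "YmFy"
	body := strings.NewReader(`{"key":"Zm9v","value":"YmFy"}`)
	resp, err := http.Post("http://127.0.0.1:2379/v3alpha/kv/put",
		"application/json", body)
	if err != nil {
		return
	}
	resp.Body.Close()
}
```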
Gyu-Ho Lee
a6fec46c0e Merge pull request #5652 from gyuho/version
etcdctl/*: print API version
2016-06-14 16:04:24 -07:00
Gyu-Ho Lee
1e38ab1706 etcdctl: print API version (v2, v3 separate) 2016-06-14 15:33:39 -07:00
Xiang Li
6958334db2 Merge pull request #5662 from xiang90/auth_delete
*: support deleteRange perm checking
2016-06-13 20:13:43 -07:00
Anthony Romano
c97107cf81 Merge pull request #5660 from heyitsanthony/fix-watch-test
e2e: don't Put() after watchTest finishes
2016-06-13 19:39:30 -07:00
Xiang Li
a571bd0271 Merge pull request #5661 from xiang90/fix_subset
auth: fix remove subset when there are equal ranges
2016-06-13 19:03:10 -07:00
Xiang Li
c75fa6fdc9 *: support deleteRange perm checking 2016-06-13 17:49:13 -07:00
Xiang Li
e67613830e auth: fix remove subset when there are equal ranges 2016-06-13 17:13:55 -07:00
Anthony Romano
d78ef8bc72 e2e: don't Put() after watchTest finishes
Fixes #5598
2016-06-13 16:55:02 -07:00
Xiang Li
a26ebfb675 Merge pull request #5654 from xiang90/auth_key
auth: add key support in merge func
2016-06-13 16:53:36 -07:00
Xiang Li
38546a9d24 auth: use bytes equal when possible 2016-06-13 16:37:21 -07:00
Xiang Li
390c89b7f9 auth: remove the special checking case for key auth 2016-06-13 16:37:20 -07:00
Xiang Li
9be65414eb auth: add key support in merge func 2016-06-13 16:37:20 -07:00
Gyu-Ho Lee
2a018240e7 Merge pull request #5657 from gyuho/cleanup
etcd-tester: cleanup in compact error, log level
2016-06-13 15:15:52 -07:00
Gyu-Ho Lee
84953365a2 etcd-tester: cleanup in compact error, log level 2016-06-13 14:54:53 -07:00
Gyu-Ho Lee
18851e70b6 Merge pull request #5656 from gyuho/auth_bytes
make auth key, rangeEnd typed like mvcc ([]byte)
2016-06-13 14:41:19 -07:00
Gyu-Ho Lee
5d6af0b51f etcdserver: key, rangeEnd in []byte for auth 2016-06-13 14:21:25 -07:00
Gyu-Ho Lee
e9d2eb2b54 auth: key, range in []byte type
Fix https://github.com/coreos/etcd/issues/5655.
2016-06-13 14:21:22 -07:00
Gyu-Ho Lee
70a2add2b0 Merge pull request #5650 from gyuho/wal_update
wal: use bytes.Equal, other minor updates
2016-06-13 09:05:53 -07:00
Gyu-Ho Lee
b4aa4607cb wal: use bytes.Equal, other minor updates
- Replace reflect.Equal with bytes.Equal where possible
- Remove some TODOs
- Some minor simplifications
2016-06-13 01:33:53 -07:00
Jan Chaloupka
2e29bea8fe docs: Clustering.md: Switch command line and environment variables to reflect the order of examples right below 2016-06-13 10:23:21 +02:00
Xiang Li
f25b3dbfc8 Merge pull request #5640 from xiang90/permcheck
auth: clean permission checking
2016-06-12 18:26:21 -07:00
Gyu-Ho Lee
667093bbd1 Merge pull request #5645 from gyuho/wal_simple
wal: simplify boolean return
2016-06-11 11:10:59 -07:00
Gyu-Ho Lee
3243795522 wal: simplify boolean return 2016-06-11 10:36:52 -07:00
Xiang Li
4aaf7f94cf Merge pull request #5643 from hongchaodeng/doc-fix
v3 docs: ErrCompaction -> ErrCompacted
2016-06-11 01:40:58 -07:00
Hongchao Deng
c11418b56c docs: v3 api, ErrCompaction -> ErrCompacted 2016-06-10 21:53:06 -07:00
Gyu-Ho Lee
bdb5a321d1 Merge pull request #5642 from gyuho/client
vendor: update grpc dependency
2016-06-10 21:09:52 -07:00
Gyu-Ho Lee
5225a4e4bc clientv3: fix client for grpc change
Fix https://github.com/coreos/etcd/issues/5638.
2016-06-10 20:40:46 -07:00
Gyu-Ho Lee
b2a531d5a3 vendor: update grpc dependency
For 59486d9c17
2016-06-10 20:40:06 -07:00
Xiang Li
1bbe09eb3c auth: clean permission checking 2016-06-10 19:23:20 -07:00
Gyu-Ho Lee
cff5851956 Merge pull request #5639 from mitake/email
MAINTAINERS: updating email address of Hitoshi Mitake
2016-06-10 18:40:41 -07:00
Hitoshi Mitake
6b80f0ad7e MAINTAINERS: updating email address of Hitoshi Mitake
I'm mainly using the updated email address for work.
2016-06-10 18:12:39 -07:00
Xiang Li
ae366ba4f1 Merge pull request #5637 from xiang90/auth_clean
auth: cleanup get perm func
2016-06-10 18:12:07 -07:00
Xiang Li
f99ff5d513 auth: cleanup get perm func 2016-06-10 16:36:51 -07:00
Xiang Li
3eab6bef6a Merge pull request #5635 from xiang90/cl
auth: clean up range_perm_cache.go
2016-06-10 16:08:54 -07:00
Xiang Li
c802c23e6d Merge pull request #5636 from xiang90/mt
MAINTAINERS: add Hitoshi as a maintainer of auth pkg
2016-06-10 16:07:04 -07:00
Xiang Li
43db5515e7 MAINTAINERS: add Hitoshi as a maintainer of auth pkg 2016-06-10 15:55:57 -07:00
Gyu-Ho Lee
c6fae5d566 Merge pull request #5631 from raoofm/patch-8
Doc: Fault tolerance table
2016-06-10 15:49:36 -07:00
Gyu-Ho Lee
175c67a552 Merge pull request #5634 from gyuho/wal
wal: PrivateFileMode/DirMode as in pkg/fileutil
2016-06-10 15:41:43 -07:00
Xiang Li
65ff76882b Merge pull request #5624 from xiang90/warn_apply
etcdserver: warn heavy apply
2016-06-10 15:28:27 -07:00
Gyu-Ho Lee
47d5257622 pkg/fileutil: expose PrivateFileMode/DirMode 2016-06-10 15:22:14 -07:00
Xiang Li
77efe4cda9 auth: clean up range_perm_cache.go 2016-06-10 15:21:04 -07:00
Gyu-Ho Lee
4570eddc2c wal: PrivateFileMode/DirMode as in pkg/fileutil
To make it consistent with pkg/fileutil
2016-06-10 15:20:57 -07:00
Xiang Li
3210bb8181 Merge pull request #5632 from xiang90/auth_store_cleanup
auth: cleanup store.go
2016-06-10 14:49:56 -07:00
Xiang Li
a92ea417b4 Merge pull request #5534 from gyuho/readme
README: minor fix in README
2016-06-10 14:46:15 -07:00
Xiang Li
64eccd519d etcdserver: warn heavy apply 2016-06-10 14:43:34 -07:00
Hitoshi Mitake
bb6102c00c Merge pull request #5630 from xiang90/del_user
auth: add del functions for user/role
2016-06-10 14:28:36 -07:00
Xiang Li
f8c1a50195 auth: cleanup store.go 2016-06-10 14:19:29 -07:00
Hitoshi Mitake
2781553a9e Merge pull request #5615 from mitake/auth-v3-consistent-token
auth, etcdserver: make auth tokens consistent for all nodes
2016-06-10 14:19:21 -07:00
Raoof Mohammed
37ac90c419 Doc: Fault tolerance table 2016-06-10 17:12:36 -04:00
Xiang Li
8776962008 auth: add del functions for user/role 2016-06-10 14:11:00 -07:00
Hitoshi Mitake
ead5096fa9 auth, etcdserver: make auth tokens consistent for all nodes
Currently auth tokens are generated randomly in the replicated state machine
layer. It means an auth token generated on node A cannot be
used on node B, which is problematic for load balancing and
failover. This commit moves the token generation logic from the state
machine to the API layer (before Raft) and lets all nodes share a single
token.

The Raft log index is also added to a token to ensure the uniqueness of
the token and to detect activation of the token in the cluster (some
nodes can receive the token before generating and installing it
in their state machines).

This commit also lets authStore own the simple-token state. This is
required for unit tests: the simple-token state must be cleaned up
after each test (otherwise a succeeding test can create a duplicate
token, which causes a panic).
2016-06-10 13:55:37 -07:00
Xiang Li
65abcc1a59 Merge pull request #5629 from xiang90/put_role
auth: cleanup
2016-06-10 13:53:34 -07:00
Xiang Li
cf99d596f5 auth: cleanup get user and get role usage 2016-06-10 13:34:40 -07:00
Xiang Li
0914d65c1f auth: add put role 2016-06-10 13:20:48 -07:00
Anthony Romano
e854fa1856 Merge pull request #5622 from heyitsanthony/e2e-auth-keys
e2e: auth key put test
2016-06-10 12:17:38 -07:00
Gyu-Ho Lee
cd569d640b Merge pull request #5600 from lucab/to-upstream/armored-sigs
doc: sign release artifacts in armor mode
2016-06-10 12:11:53 -07:00
Xiang Li
aa56e47712 Merge pull request #5625 from xiang90/put_user
auth: add put_user
2016-06-10 12:10:21 -07:00
Anthony Romano
1e22137a9a e2e: test auth is respected for Puts 2016-06-10 11:43:06 -07:00
Anthony Romano
b3a0b0502c etcdserver: respect auth on serialized Range 2016-06-10 11:43:05 -07:00
Xiang Li
ae30ab7897 auth: add put_user 2016-06-10 11:27:42 -07:00
Xiang Li
247103c40b Merge pull request #5623 from xiang90/get_role
auth: add getRole
2016-06-10 11:17:59 -07:00
Xiang Li
1958598a18 auth: add getRole 2016-06-10 10:59:34 -07:00
Xiang Li
c459073c6d Merge pull request #5620 from xiang90/auth_recover
auth: implement recover
2016-06-10 10:35:03 -07:00
Gyu-Ho Lee
05f9d1b716 Merge pull request #5610 from gyuho/handle_timeout_error
etcd-tester: do not exit for compaction timeout
2016-06-10 09:47:54 -07:00
Gyu-Ho Lee
5631acdb8f etcd-tester: do not exit for compact timeout
Temporary fix for https://github.com/coreos/etcd/issues/5606.
2016-06-10 09:44:45 -07:00
Xiang Li
ca4e78687e auth: implement recover 2016-06-10 09:37:37 -07:00
Anthony Romano
bdc7035c10 Merge pull request #5617 from liggitt/preallocation
fileutil: avoid double preallocation
2016-06-09 22:27:17 -07:00
Jordan Liggitt
4f7622fb9a fileutil: avoid double preallocation 2016-06-10 00:27:59 -04:00
Gyu-Ho Lee
d4ac09de0f Merge pull request #5612 from gyuho/index_bench
mvcc: add keyIndex, treeIndex Restore benchmark
2016-06-09 16:09:56 -07:00
Xiang Li
6e32e8501a Merge pull request #5613 from xiang90/rootrole
*: add admin permission checking
2016-06-09 16:00:37 -07:00
Xiang Li
7da1940dce Merge pull request #5607 from xiang90/raft_user
raft: add docker/swarmkit as notable raft users
2016-06-09 15:39:09 -07:00
Xiang Li
f1c6fa48f5 *: add admin permission checking 2016-06-09 15:25:09 -07:00
Gyu-Ho Lee
6bbd8b7efb mvcc: add keyIndex benchmark test
Useful later when trying to optimize our restore operations.
2016-06-09 14:13:18 -07:00
Anthony Romano
a7c5058953 Merge pull request #5608 from heyitsanthony/clientv3-auth-opts
clientv3: use separate dialopts for auth dial
2016-06-09 12:56:59 -07:00
Anthony Romano
349eaf117a clientv3: use separate dialopts for auth dial
Needs to use a different balancer from the main client connection
because of the way grpc uses the Notify channel.
2016-06-09 10:38:57 -07:00
Xiang Li
ab65d2b848 raft: add docker/swarmkit as notable raft users 2016-06-09 10:10:44 -07:00
Anthony Romano
78c957df41 Merge pull request #5603 from heyitsanthony/clientv3-close-keepalive
clientv3: close keepalive channel if TTL locally exceeded
2016-06-09 09:44:32 -07:00
Anthony Romano
0554ef9c39 clientv3/integration: tests for closing lease channel 2016-06-09 09:12:59 -07:00
Anthony Romano
e534532523 clientv3: close keep alive channel if no response within TTL 2016-06-09 09:12:59 -07:00
Xiang Li
fb0df211f0 Merge pull request #5586 from xiang90/root
auth: add root user and root role
2016-06-09 00:23:45 -07:00
Xiang Li
da2f2a5189 auth: add root user and root role 2016-06-08 19:55:08 -07:00
Gyu-Ho Lee
a548cab828 Merge pull request #5602 from gyuho/get_leader
clientv3/integration: WaitLeader to follower
2016-06-08 17:03:25 -07:00
Gyu-Ho Lee
753073198f clientv3/integration: WaitLeader to follower
Fix https://github.com/coreos/etcd/issues/5601.
2016-06-08 16:45:32 -07:00
Xiang Li
77dee97c2f Merge pull request #5578 from mitake/auth-v3-range
auth, etcdserver: permission of range requests
2016-06-08 16:33:25 -07:00
Hitoshi Mitake
253e313c09 *: support granting and revoking range
This commit adds a feature for granting and revoking range of keys,
not a single key.

Example:
$ ETCDCTL_API=3 bin/etcdctl role grant r1 readwrite k1 k3
Role r1 updated
$ ETCDCTL_API=3 bin/etcdctl role get r1
Role r1
KV Read:
        [a, b)
        [k1, k3)
        [k2, k4)
KV Write:
        [a, b)
        [k1, k3)
        [k2, k4)
$ ETCDCTL_API=3 bin/etcdctl --user u1:p get k1 k4
k1
v1
$ ETCDCTL_API=3 bin/etcdctl --user u1:p get k1 k5
Error:  etcdserver: permission denied
2016-06-08 14:58:25 -07:00
Gyu-Ho Lee
9dad78c68f Merge pull request #5599 from gyuho/e2e_fix
e2e: fix race in ranging test tables
2016-06-08 14:46:02 -07:00
Gyu-Ho Lee
bd5e1ea1c0 e2e: fix race in ranging test tables
Fix https://github.com/coreos/etcd/issues/5598.

Race conditions were detected when iterating over the test table
because the go func closure doesn't receive the 'puts' index
as an argument. This can cause the test to run the wrong put
operations.
2016-06-08 13:44:05 -07:00
Anthony Romano
87d105c036 Merge pull request #5596 from heyitsanthony/wal-warn-slow-fsync
wal: warn if sync exceeds a second
2016-06-08 13:07:13 -07:00
Hitoshi Mitake
6bb96074da auth, etcdserver: permission of range requests
Currently the auth mechanism doesn't support permissions on range
requests; it just checks exact matching of key names even for range
queries. This commit adds a mechanism for setting permissions on range
queries. A range query over [begin1, end1) is allowed if the user has a
permission to read a range [begin2, end2) such that [begin1, end1) is a
subset of [begin2, end2). Range delete requests will follow the same
rule.
2016-06-08 11:57:32 -07:00
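A minimal Go sketch of the subset rule described above: a query over [begin1, end1) is permitted only if it lies inside a granted range [begin2, end2). Illustrative only, not the actual range_perm_cache code:

package auth // hypothetical package for illustration

// rangeAllowed reports whether the query range [qBegin, qEnd) is a subset of
// the granted range [pBegin, pEnd); keys compare lexicographically as strings.
func rangeAllowed(qBegin, qEnd, pBegin, pEnd string) bool {
	return pBegin <= qBegin && qEnd <= pEnd
}

// Example: rangeAllowed("k1", "k3", "k1", "k5") == true,
//          rangeAllowed("k1", "k5", "k1", "k3") == false.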
Gyu-Ho Lee
35329a1674 Merge pull request #5597 from gyuho/btree_dep
*: update google/btree dependency
2016-06-08 11:39:29 -07:00
Gyu-Ho Lee
0b7e5c70a5 *: update google/btree dependency 2016-06-08 11:23:49 -07:00
Anthony Romano
39eaa37dcf wal: warn if sync exceeds a second 2016-06-08 11:03:18 -07:00
Anthony Romano
ff2b24a8ac Merge pull request #5583 from heyitsanthony/grpc-nuke-waitstate
clientv3: use grpc balancer
2016-06-08 09:45:44 -07:00
Anthony Romano
4a13c9f9b3 clientv3: use grpc balancer 2016-06-08 09:24:13 -07:00
Luca Bruno
e551aec339 doc: sign release artifacts in armor mode
The release guide's artifact-signing steps default to binary
signatures while producing .asc files.
This commit changes to armored signatures, which also matches appc
requirements.

Fixes #5594
2016-06-08 17:51:54 +02:00
Xiang Li
66a6ed63cb Merge pull request #5585 from xiang90/token_cleanup
etcdserver: make usernameFromCtx more go style
2016-06-08 08:08:58 -07:00
Xiang Li
4d56f54898 Merge pull request #5590 from xiang90/user
auth: add getuser
2016-06-08 08:08:36 -07:00
Anthony Romano
7abc8f21eb integration: update tests for new grpc reconnection interface 2016-06-08 01:04:59 -07:00
Anthony Romano
62f8ec25c0 clientv3: use grpc reconnection logic 2016-06-08 01:04:59 -07:00
Anthony Romano
1823702cc6 integration: bridge connections to grpc server
Tests need to disconnect the network connection for the client to check
reconnection paths, but closing a grpc connection closes the logical
connection. To disconnect the client, instead have a bridge between the
server and the client that can monitor and reset connections.
2016-06-08 00:34:53 -07:00
Anthony Romano
b382c2c86f vendor: update grpc 2016-06-07 22:46:43 -07:00
Xiang Li
c6496dcff6 auth: add getuser 2016-06-07 22:43:04 -07:00
Gyu-Ho Lee
3e057129e2 Merge pull request #5588 from purpleidea/fix/test-typo
e2e: tests: fix small typo
2016-06-07 22:25:57 -07:00
James Shubin
0048782d97 e2e: tests: fix small typo
Found when trying to get the e2e tests to run on Fedora, which they
don't because of https://github.com/kr/pty/issues/21
2016-06-08 01:14:11 -04:00
Gyu-Ho Lee
2da6fb6616 Merge pull request #5587 from gyuho/function
etcd-tester: retry for 'etcdserver: not capable'
2016-06-07 22:01:07 -07:00
Gyu-Ho Lee
350673f1f8 etcd-tester: retry for 'etcdserver: not capable'
Fix https://github.com/coreos/etcd/issues/5573.

Currently the stresser starts at the same time as the cluster.
If the stressers are launched too early, they all
exit with the error 'etcdserver: not capable', which
means the cluster is not ready yet. This adds additional
error checking so the stresser can retry.
2016-06-07 21:56:04 -07:00
Xiang Li
cc1155c93b etcdserver: make usernameFromCtx more go style 2016-06-07 21:17:32 -07:00
Gyu-Ho Lee
9a14b796e0 Merge pull request #5582 from gyuho/watch_range_end
etcdctl: support watch with range_end
2016-06-07 17:08:49 -07:00
Gyu-Ho Lee
7eaf73d273 e2e: test watch command with 2 args 2016-06-07 16:52:19 -07:00
Gyu-Ho Lee
624d5eb0cb etcdctl: support range_end for watch command
Fix https://github.com/coreos/etcd/issues/5575.
2016-06-07 16:52:15 -07:00
Gyu-Ho Lee
50ef8f148c Merge pull request #5579 from gyuho/request_union
RequestOp, ResponseOp
2016-06-07 13:54:59 -07:00
Gyu-Ho Lee
1610391449 *: following changes for proto update 2016-06-07 13:33:03 -07:00
Gyu-Ho Lee
1e4d3603db clientv3,ctlv3: following changes for proto change 2016-06-07 13:32:36 -07:00
Gyu-Ho Lee
6e149e3485 etcdserver: following updates for proto change 2016-06-07 13:32:07 -07:00
Gyu-Ho Lee
ca630a0803 etcdserverpb: RequestOp, ResponseOp
Fix https://github.com/coreos/etcd/issues/5504.
2016-06-07 13:31:10 -07:00
Xiang Li
0d1133178f Merge pull request #5574 from xiang90/auth
auth: make naming consistent
2016-06-07 11:24:29 -07:00
Xiang Li
83ce1051ff auth: make naming consistent 2016-06-07 10:54:50 -07:00
Anthony Romano
4984d82d27 Merge pull request #5570 from heyitsanthony/rafthttp-snapshot-tests
rafthttp: snapshot testing
2016-06-06 16:02:22 -07:00
Anthony Romano
7f461b2df9 Merge pull request #5572 from heyitsanthony/fallocate-eintr-fallback
pkg/fileutil: fall back to truncate() if fallocate is interrupted
2016-06-06 15:24:42 -07:00
Anthony Romano
dc91da50b5 rafthttp: snapshot tests 2016-06-06 11:38:11 -07:00
Anthony Romano
93f114c76c snap: return errors if Message's snapshot is not entirely read 2016-06-06 11:38:11 -07:00
Anthony Romano
3aadb25c31 pkg/ioutil: exact readcloser
NewExactReadCloser wraps a ReadCloser so it returns errors if the exact number
of bytes is not read.
2016-06-06 11:38:10 -07:00
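A minimal Go sketch of an exact-read wrapper like the one described above (simplified; not the actual pkg/ioutil implementation):

package ioutil // hypothetical package for illustration

import (
	"fmt"
	"io"
)

// exactReadCloser errors out if the total bytes read differ from the expected
// total by the time Close is called.
type exactReadCloser struct {
	rc        io.ReadCloser
	br, total int64
}

func (e *exactReadCloser) Read(p []byte) (int, error) {
	n, err := e.rc.Read(p)
	e.br += int64(n)
	if e.br > e.total {
		return n, fmt.Errorf("read %d bytes, expected at most %d", e.br, e.total)
	}
	return n, err
}

func (e *exactReadCloser) Close() error {
	if err := e.rc.Close(); err != nil {
		return err
	}
	if e.br != e.total {
		return fmt.Errorf("read %d bytes, expected exactly %d", e.br, e.total)
	}
	return nil
}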
Anthony Romano
5be39d2c84 wal: don't preallocate on old tail file
This code is only there to handle an edge case where the tail wasn't already
preallocated (e.g., by an old etcd version or after a crash). It also triggers
tmpfs corruption, so remove it.
2016-06-06 11:31:25 -07:00
Gyu-Ho Lee
9022137d2b Merge pull request #5567 from gyuho/wal_type
wal: minor fixes
2016-06-06 10:31:03 -07:00
Anthony Romano
54aac4ab7e pkg/fileutil: fall back to truncate() if fallocate is interrupted
Fixes #5558
2016-06-06 09:52:34 -07:00
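A minimal Go sketch of the fallback described above, assuming Linux; this is not the actual pkg/fileutil code:

package fileutil // hypothetical package for illustration

import (
	"os"
	"syscall"
)

// preallocate tries fallocate(2) first and falls back to truncate when the
// call is interrupted or unsupported by the underlying filesystem.
func preallocate(f *os.File, sizeInBytes int64) error {
	err := syscall.Fallocate(int(f.Fd()), 0, 0, sizeInBytes)
	if err == syscall.EINTR || err == syscall.ENOTSUP {
		return f.Truncate(sizeInBytes)
	}
	return err
}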
Gyu-Ho Lee
008081ffb5 wal: minor fixes
- remove unnecessary type cast
- simplify modulo operations
2016-06-06 09:43:19 -07:00
Gyu-Ho Lee
c63eaf45f9 Merge pull request #5566 from ktateish/fix-single-dash
*: replace '-' with '--' for long options
2016-06-05 21:38:29 -07:00
Katsuyuki Tateishi
8b75a33398 *: replace '-' with '--' for long options
A long option should have double dashes (cf. #4595),
and so should the error messages.
2016-06-06 12:25:45 +09:00
Gyu-Ho Lee
3c2a47ea64 Merge pull request #5565 from gyuho/raft_doc
raft: small fix in doc
2016-06-05 19:16:08 -07:00
Gyu-Ho Lee
843c53192a raft: small fix in doc
'MsgBeat' is an internal type to signal the leader, not the message type
that gets sent to its followers. 'MsgHeartbeat' is the type sent to followers.
2016-06-05 17:47:46 -07:00
Xiang Li
2baca91ee2 Merge pull request #5564 from mitake/auth-v3-cleaning
cleaning auth v3
2016-06-05 08:40:06 -07:00
Hitoshi Mitake
94f22e8a07 *: rename RPCs and structs related to revoking
This commit renames RPCs and structs related to revoking.
1. UserRevoke -> UserRevokeRole
2. RoleRevoke -> RoleRevokePermission
2016-06-05 16:57:23 +09:00
Hitoshi Mitake
60fc1e4d4e auth, etcdserver: error codes for revoking non existing role and permission
This commit adds error codes for representing the revocation of a
non-existent role (from a user) and permission (from a role).
2016-06-05 16:41:10 +09:00
Xiang Li
8bebd8caa9 Merge pull request #5559 from gyuho/docker_guide
Documentation: add docker guide for v3
2016-06-04 19:14:48 -07:00
Gyu-Ho Lee
2f00b1e071 Documentation: add docker guide for v3 2016-06-04 16:43:44 -07:00
Xiang Li
429b2eee58 Merge pull request #5548 from mitake/auth-v3-revoke-delete
revoke user, revoke role, and delete role in auth v3
2016-06-03 21:44:37 -07:00
Hitoshi Mitake
c7a1423d45 *: support deleting a role in auth v3
This commit implements RoleDelete() RPC for supporting deleting a role
in auth v3. It also adds a new subcommand "role delete" to etcdctl.
2016-06-04 13:42:45 +09:00
Hitoshi Mitake
0cb1343109 *: support revoking a key from a role in auth v3
This commit implements RoleRevoke() RPC for supporting revoking a key
from a role in auth v3. It also adds a new subcommand "role revoke" to
etcdctl.
2016-06-04 13:42:45 +09:00
Hitoshi Mitake
957b07c408 *: support revoking a role from a user in auth v3
This commit implements UserRevoke() RPC for supporting revoking a role
from a user in auth v3. It also adds a new subcommand "user revoke" to
etcdctl.
2016-06-04 13:39:26 +09:00
Gyu-Ho Lee
3f1af453b9 Merge pull request #5560 from gyuho/lease_test
clientv3/integration: test lease closed connection
2016-06-03 18:23:03 -07:00
Gyu-Ho Lee
0cb4dd4331 clientv3/integration: test lease closed connection
Tests if lease operations return ErrConnClosed when
the client is closed.
2016-06-03 16:41:32 -07:00
Xiang Li
6a35833fc3 Merge pull request #5450 from luxas/more_arches
travis: Catch compilation errors in CI for arm and ppc64le
2016-06-03 16:12:28 -07:00
Anthony Romano
c093234e3a Merge pull request #5557 from heyitsanthony/fix-watcher-cancel
mvcc: don't cancel watcher if stream is already closed
2016-06-03 11:28:54 -07:00
Anthony Romano
88afb0b0a6 Merge pull request #5543 from heyitsanthony/clientv3-unblock-reconnect
clientv3: don't hold client lock while dialing
2016-06-03 11:28:44 -07:00
Xiang Li
6187d812da Merge pull request #5556 from xiang90/r_test
raft: fix TestNodeStepUnblock
2016-06-03 11:13:43 -07:00
Anthony Romano
f57b4eb46d mvcc: don't cancel watcher if stream is already closed
Close() already cancels all the watchers but doesn't bother to clear out
the bookkeeping maps so Cancel() may try to cancel twice.

Fixes #5533
2016-06-03 11:12:46 -07:00
Anthony Romano
7dfe7db243 clientv3: panic if ActiveConnection tries to return non-nil connection 2016-06-03 10:25:20 -07:00
Anthony Romano
267d1cb16f clientv3: fix watch to reconnect on failure
It was spinning before.
2016-06-03 10:25:20 -07:00
Anthony Romano
5f5a203e27 clientv3: don't hold client lock while dialing
Holding the client lock causes async reconnect to block while the client is dialing.

This was also causing problems with the Close error message, so
now Close() will return the last dial error (if any) instead of
clearing it out with a cancel().

Fixes #5416
2016-06-03 10:25:20 -07:00
Xiang Li
500296d0fb raft: fix TestNodeStepUnblock
The test cases have side effects. We need to stop testing if one of the tests
fails. Also, the timeout should be much longer to avoid false positives.
2016-06-03 10:22:11 -07:00
Xiang Li
948dc5e425 Merge pull request #5552 from ktateish/fix-wrong-link
Fix wrong links
2016-06-03 10:06:13 -07:00
Xiang Li
634b9584ef Merge pull request #5555 from xiang90/fix_rm
rafthttp: report error to correct chan
2016-06-03 09:48:43 -07:00
Xiang Li
5183631f17 rafthttp: report error to correct chan 2016-06-03 09:18:02 -07:00
Lucas Käldström
95fc21e38b travis: Catch compilation errors in CI for arm and ppc64le 2016-06-03 18:46:36 +03:00
Katsuyuki Tateishi
5bff4d85d6 Doc: fix links using url for internal doc 2016-06-03 22:26:01 +09:00
Katsuyuki Tateishi
9585daf0a9 Doc: fix wrong links and remove unused or duplicate ones 2016-06-03 22:23:57 +09:00
Xiang Li
b3fee0abff Merge pull request #5539 from mitake/auth-v3-get-role
*: support getting role in auth v3
2016-06-02 21:48:45 -07:00
Hitoshi Mitake
10ee69b44c *: support getting role in auth v3
This commit implements the RoleGet() RPC of etcdserver and adds a new
subcommand "role get" to etcdctl v3. It will list the permissions that
are granted to a given role.

$ ETCDCTL_API=3 bin/etcdctl role get r1
Role r1
KV Read:
        b
        d
KV Write:
        a
        c
        d
2016-06-03 13:03:54 +09:00
Xiang Li
755567cb3d Merge pull request #5547 from xiang90/int
integration: always return active client
2016-06-02 15:52:38 -07:00
Xiang Li
bbfe7f401f integration: always return active client
In the integration test, we sometimes stop/restart an etcd server.
Our client now has internal connection monitoring logic that might
set conn to nil when there is a connection failure and the redial
also fails.

Change randClient to always return a client with an active connection
to make the integration test reliable.
2016-06-02 14:49:32 -07:00
Xiang Li
85691dbbe5 Merge pull request #5546 from raoofm/patch-6
Doc: fix link for migrate command in v2-migration
2016-06-02 14:21:36 -07:00
Raoof Mohammed
6ac67ecd5c Doc: fix link for migrate command in v2-migration
Doc: fix link for migrate command in v2-migration
2016-06-02 17:19:43 -04:00
Anthony Romano
6d96dd581a Merge pull request #5545 from heyitsanthony/revert-more-i64
Revert "etcdserverpb: make RangeResponse.More an int64"
2016-06-02 14:09:31 -07:00
Anthony Romano
84a487f723 Revert "etcdserverpb: make RangeResponse.More an int64"
This reverts commit 84e1ab8765.
2016-06-02 13:43:40 -07:00
Xiang Li
3005f2717f Merge pull request #5541 from xiang90/tls
transport: require tls12
2016-06-02 10:11:57 -07:00
Xiang Li
8b28c647ea transport: require tls12 2016-06-02 09:38:56 -07:00
Xiang Li
51a048e6b3 Merge pull request #5540 from xiang90/fix_snap
snap: fix write snap
2016-06-02 09:12:50 -07:00
Xiang Li
2b77e9a086 Merge pull request #5538 from rustyrobot/fix-header-formatting
doc: fix header formatting
2016-06-02 07:58:27 -07:00
Xiang Li
ab0ccdc4df snap: fix write snap
Do not use writeFile since it does not sync the file before closing.
This can lead to silent file corruption when the disk is full.
2016-06-02 07:38:48 -07:00
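For illustration, a minimal Go helper that syncs before closing, which is the property the fix above relies on (a sketch, not the snap package code):

package snap // hypothetical package for illustration

import "os"

// writeAndSync writes data and fsyncs it before closing, so a full disk
// surfaces as an error instead of silent corruption.
func writeAndSync(path string, data []byte, perm os.FileMode) error {
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
	if err != nil {
		return err
	}
	if _, err = f.Write(data); err != nil {
		f.Close()
		return err
	}
	if err = f.Sync(); err != nil {
		f.Close()
		return err
	}
	return f.Close()
}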
Evgeny L
9098f27745 doc: fix header formatting 2016-06-02 16:15:08 +03:00
Gyu-Ho Lee
ab3398f7fd README: minor fix in README 2016-06-01 23:33:59 -07:00
Xiang Li
29d2caf14a Merge pull request #5532 from xiang90/rh
rafthttp: simplify initialization funcs
2016-06-01 22:31:19 -07:00
Xiang Li
a047aa4a81 rafthttp: rename to to peerID 2016-06-01 22:12:47 -07:00
Xiang Li
c25c00fcf9 rafthttp: simplify initialization funcs 2016-06-01 21:47:46 -07:00
Gyu-Ho Lee
2fcac66605 Merge pull request #5530 from gyuho/build_script
scripts: include v2 README in the release
2016-06-01 20:59:33 -07:00
Xiang Li
140e2a18fb Merge pull request #5492 from mitake/auth-v3-user-get
*: support getting user in etcdctl v3
2016-06-01 20:27:18 -07:00
Hitoshi Mitake
5609fdb9a8 *: support getting user in etcdctl v3
This commit adds a new subcommand "user get" to etcdctl v3. It will
list the roles that are granted to a given user.

Example:
$ ETCDCTL_API=3 bin/etcdctl user get u1
User: u1
Roles: r1 r2 r3

This commit also modifies the layout of InternalRaftRequest for
frequent updates of auth-related members.
2016-06-02 12:10:19 +09:00
Anthony Romano
b95c5b7da9 Merge pull request #5526 from heyitsanthony/more-to-int64
etcdserverpb: make RangeResponse.More an int64
2016-06-01 20:03:15 -07:00
Gyu-Ho Lee
232c1914d2 scripts: include v2 README in the release 2016-06-01 19:12:34 -07:00
Anthony Romano
84e1ab8765 etcdserverpb: make RangeResponse.More an int64 2016-06-01 17:10:23 -07:00
Xiang Li
9fee7732f6 Merge pull request #5468 from swingbach/master
implemented leader lease when quorum check is on.
2016-06-01 16:10:41 -07:00
swingbach@gmail.com
337ef64ed5 raft: implemented leader lease when quorum check is on 2016-06-02 06:17:27 +08:00
Anthony Romano
fb64c8ccfe Merge pull request #5521 from heyitsanthony/clientv3-hide-retrydial
clientv3: hide retry dial api
2016-06-01 13:00:02 -07:00
Gyu-Ho Lee
bea4268a0b Merge pull request #5520 from gyuho/grpc_dep
vendor: update grpc dependency
2016-06-01 11:43:23 -07:00
Gyu-Ho Lee
c451a1b350 Merge pull request #5519 from gyuho/etcdctlv3_README
etcdctl: v3 as default README
2016-06-01 11:41:17 -07:00
Gyu-Ho Lee
240757729c etcdctl: make v3 as default README 2016-06-01 11:36:21 -07:00
Anthony Romano
22744566f4 clientv3: hide retry dial api 2016-06-01 11:36:16 -07:00
Gyu-Ho Lee
542b7dff64 vendor: update grpc dependency 2016-06-01 11:24:03 -07:00
Xiang Li
a6144bdf3e Merge pull request #5507 from xiang90/failure
doc: add failures guide
2016-06-01 11:07:22 -07:00
Xiang Li
fc33fd1aa6 doc: add failures guide 2016-06-01 11:06:44 -07:00
Gyu-Ho Lee
47ef5f7ca5 Merge pull request #5510 from gyuho/clientv3_fix
clientv3: watch resp with error when client close
2016-06-01 11:01:30 -07:00
Gyu-Ho Lee
75dc10574a clientv3: watch resp with error when client close 2016-06-01 10:39:48 -07:00
Xiang Li
9ed3b446ca Merge pull request #5509 from heyitsanthony/clientv3-fix-concurrent-close
clientv3: fix deadlock on Get with concurrent Close
2016-06-01 07:37:28 -07:00
Xiang Li
36fcc9e9d4 Merge pull request #5515 from xiang90/logging
*: more logging on critical state change
2016-06-01 07:04:36 -07:00
Anthony Romano
a83051d0fc clientv3: don't panic on Get if NewKV is created with a closed client 2016-06-01 05:53:21 -07:00
Anthony Romano
1d88130522 clientv3: fix deadlock on Get with concurrent Close 2016-06-01 05:53:21 -07:00
Anthony Romano
5cb7400cee Merge pull request #5508 from heyitsanthony/bench-stm-lock
concurrency, benchmark: additional stm support
2016-06-01 05:48:50 -07:00
Xiang Li
8528c8c599 *: more logging on critical state change
Add more logging for easier debugging.
2016-05-31 23:31:03 -07:00
Anthony Romano
fc06dd1452 Merge pull request #5480 from heyitsanthony/fix-migrate-nov2
etcdctl: improve error message on migration without v2 keys
2016-05-31 15:18:56 -07:00
Xiang Li
2d4c7d6886 Merge pull request #5506 from xiang90/r_rafthttp
rafthttp: simplify streamReader initialization
2016-05-31 15:00:52 -07:00
Anthony Romano
51551abef5 concurrency, benchmark: read-committed STM isolation policy 2016-05-31 14:35:27 -07:00
Anthony Romano
f34a9350c3 benchmark: benchmark stm workload with distributed mutex 2016-05-31 14:35:27 -07:00
Anthony Romano
bb2a3ea8d8 benchmark: respect stm isolation mode flag 2016-05-31 14:35:27 -07:00
Anthony Romano
7709cd84bb Merge pull request #5505 from heyitsanthony/v3rpc-watcher-close
v3rpc: fix race on ctrl channel when watcher stream closes
2016-05-31 14:24:10 -07:00
Gyu-Ho Lee
cc837dfc6d Merge pull request #5503 from gyuho/fix_clientv3
clientv3: handle nil connection after *Client.Close (KV)
2016-05-31 12:38:20 -07:00
Xiang Li
86269ab5bf rafthttp: simplify streamReader initialization 2016-05-31 12:13:37 -07:00
Gyu-Ho Lee
7b5657cf1a clientv3: check if KV.Client is closed
For https://github.com/coreos/etcd/issues/5495.
2016-05-31 12:00:19 -07:00
Gyu-Ho Lee
d116c116fe clientv3: getRemote comment about release 2016-05-31 12:00:19 -07:00
Gyu-Ho Lee
b0d4a0a9bd integration: skip closed client in Terminate 2016-05-31 12:00:15 -07:00
Gyu-Ho Lee
283318d547 v3rpc: add ErrConnClosed for closed client
For https://github.com/coreos/etcd/issues/5495.
2016-05-31 11:15:01 -07:00
Anthony Romano
09e8f5782e v3rpc: fix race on closing watcher stream ctrl channel
Sometimes close would race with the recvLoop, leading the
recvLoop to write to a closed channel.
2016-05-31 11:07:31 -07:00
Anthony Romano
41d3cea9b3 integration: test closing stream while creating watchers 2016-05-31 11:02:15 -07:00
Anthony Romano
310ebdd3e1 Merge pull request #5498 from heyitsanthony/wal-tmpfile-fixes
wal: improve tmp file handling
2016-05-31 11:01:29 -07:00
Xiang Li
e39f436728 Merge pull request #5494 from xiang90/refactor_rafthttp
rafthttp: remove the newPipeline func
2016-05-31 09:35:47 -07:00
Xiang Li
eb9b281741 Merge pull request #5502 from jonboulle/master
MAINTAINERS: remove extraneous space
2016-05-31 07:07:10 -07:00
Anthony Romano
05cc3c3dbb wal: limit number of tmp file names
This fixes a space leak when the etcd server is restarted at shorter and shorter
intervals, causing the tmp files to stack up.
2016-05-31 06:25:23 -07:00
Anthony Romano
71a9d6fc8b wal: don't warn when opening wal directory with stale tmp files 2016-05-31 06:25:23 -07:00
Anthony Romano
6686833e51 e2e: check for empty string as etcdctl backup result
Was checking for an ignored wal file warning. Added support for
TMPDIR since repeated runs were failing on leftover test data.
2016-05-31 06:25:23 -07:00
Jonathan Boulle
ad95ceea2f MAINTAINERS: remove extraneous space 2016-05-31 12:11:53 +02:00
Xiang Li
6f8cc58214 Merge pull request #5490 from mitake/errcode
etcdserver, auth: not return grpc error code directly in the apply phase
2016-05-30 22:00:54 -07:00
Xiang Li
cc2e0fad3e Merge pull request #5497 from purpleidea/feat/doc-clarify
docs: fix ordering of sentence so it's logical and more clear
2016-05-30 21:52:04 -07:00
Anthony Romano
2cd3a3bd59 etcdctl: improve error message on migration without v2 keys
Fixes #5478
2016-05-30 19:14:04 -07:00
Anthony Romano
9c767cbf98 Merge pull request #5464 from heyitsanthony/fix-victim-watchers
mvcc: tighten up watcher cancelation and revision handling
2016-05-30 20:09:39 -06:00
James Shubin
4aab13ac06 docs: fix ordering of sentence so it's logical and more clear 2016-05-30 22:07:31 -04:00
Hitoshi Mitake
5144318af0 etcdserver, auth: not return grpc error code directly in the apply phase
The current permission checking mechanism doesn't return its error code
properly. The internal error (code = 13) is returned to the client and the
retry mechanism doesn't work well. This commit fixes the problem.
2016-05-31 11:04:34 +09:00
Xiang Li
ba68d7bbe6 rafthttp: make newRemote simpler 2016-05-30 16:24:26 -07:00
Xiang Li
efe0ee7e59 rafthttp: remove the newPipeline func
Using a struct to initialize the pipeline is better when we have many
fields to fill in.
2016-05-30 16:19:50 -07:00
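For illustration, the pattern the cleanup above points to: once a type has many fields, a struct literal with named fields reads better than a long positional constructor (the type and field names here are assumptions, not the actual rafthttp pipeline):

package main

import "fmt"

// pipeline stands in for a type with many configuration fields.
type pipeline struct {
	peerID  uint64
	retries int
	errorc  chan error
}

func main() {
	// Named fields make each value's meaning obvious at the call site.
	p := &pipeline{
		peerID:  1,
		retries: 3,
		errorc:  make(chan error, 1),
	}
	fmt.Println(p.peerID, p.retries)
}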
Xiang Li
815bc5307f Merge pull request #5489 from linuxcer/master
etcdserver: fix typo in server.go
2016-05-30 15:20:02 -07:00
Chengfei Zhang
29cc568659 etcdserver: fix typo in server.go 2016-05-31 05:54:30 +08:00
Gyu-Ho Lee
4e5c24fcf9 Merge pull request #5487 from gyuho/mvcc_proto
mvcc: delete EXPIRE event type
2016-05-29 18:22:15 -07:00
Gyu-Ho Lee
c43e59338f etcdctl/ctlv3: remove mvccpb.EXPIRE in mirror cmd 2016-05-29 15:11:29 -07:00
Gyu-Ho Lee
3266c809e4 mvcc: delete EXPIRE event type
Addressing https://github.com/coreos/etcd/pull/5484#discussion_r65005236.
etcd v3 doesn't expire keys. It's either PUT or DELETE.
2016-05-29 14:54:38 -07:00
Xiang Li
84e7fa149e Merge pull request #5439 from mitake/auth-v3-permcheck
do permission check in raft log apply phase
2016-05-28 19:05:31 -07:00
Xiang Li
0184288479 Merge pull request #5419 from xiang90/raft_doc
raft: initial readme
2016-05-28 18:38:10 -07:00
Xiang Li
5b2e130f09 raft: initial readme 2016-05-28 18:37:21 -07:00
Gyu-Ho Lee
a86ae1d969 Merge pull request #5483 from gyuho/client_typo
clientv3: fix panic message in OpPut
2016-05-28 12:11:56 -07:00
Gyu-Ho Lee
9a0fe2620e clientv3: fix panic message in OpPut 2016-05-28 11:55:28 -07:00
Hitoshi Mitake
8e821cdc70 *: do permission check in raft log apply phase
This commit lets etcdserver check permissions during its log applying
phase. With this change, permission checking of operations is
supported.

Currently, put and range are supported. However, multi-key
permission checks for range aren't supported yet.
2016-05-29 00:05:48 +09:00
Hitoshi Mitake
90e9652f70 etcdserver: return error of apply result without touching response
The current etcdserver tries to return result.resp even if result.err is
not nil. A situation where result.resp == nil and result.err != nil can
happen, and it results in an error like the one below:

18:49:57 etcd1 | interface conversion: proto.Message is nil, not *etcdserverpb.PutResponse

This commit lets the functions return result.err if it is not nil.
2016-05-29 00:05:48 +09:00
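A minimal sketch of the check described above: return result.err before touching result.resp, since the response may be nil (the applyResult shape here is an assumption for illustration):

package main

import (
	"errors"
	"fmt"
)

// applyResult stands in for etcdserver's internal apply result.
type applyResult struct {
	resp interface{}
	err  error
}

func respOf(r applyResult) (interface{}, error) {
	if r.err != nil {
		// r.resp may be nil here; asserting its type first would panic with
		// "interface conversion: ... is nil".
		return nil, r.err
	}
	return r.resp, nil
}

func main() {
	_, err := respOf(applyResult{err: errors.New("etcdserver: permission denied")})
	fmt.Println(err)
}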
Anthony Romano
cfb3f96c2b mvcc: tighten up watcher cancelation and revision handling
Makes w.cur into w.minrev, the minimum revision for the next update, and
retries cancelation if the watcher isn't found (because it's being processed
by moveVictims).

Fixes: #5459
2016-05-27 17:19:32 -07:00
Anthony Romano
c438310634 v3rpc: make watcher wait for its send goroutine to finish 2016-05-27 16:54:26 -07:00
Gyu-Ho Lee
20fc3e968f Merge pull request #5465 from gyuho/compact1
etcd-tester: log more for compact errors
2016-05-27 16:16:04 -07:00
Gyu-Ho Lee
099dd1d1fb Merge pull request #5477 from gyuho/readme
README: fix write/sec number
2016-05-27 15:52:27 -07:00
Gyu-Ho Lee
c13bf42ac6 README: fix write/sec number 2016-05-27 15:50:04 -07:00
Gyu-Ho Lee
0313484f17 Merge pull request #5476 from gyuho/latency_dodc
Documentation: add average latency numbers
2016-05-27 15:47:26 -07:00
Gyu-Ho Lee
79fac9ee6f Documentation: add average latency numbers 2016-05-27 15:46:35 -07:00
Anthony Romano
f7fbcf8209 Merge pull request #5475 from heyitsanthony/doc-pkgs
*: add missing godoc package descriptions
2016-05-27 16:37:49 -06:00
Anthony Romano
fc7da09d67 *: add missing godoc package descriptions
Fixes #4074
2016-05-27 15:15:26 -07:00
Gyu-Ho Lee
0df5bb0002 Merge pull request #5445 from gyuho/performance_doc
Documentation: add benchmark to performance.md
2016-05-27 15:09:03 -07:00
Gyu-Ho Lee
33daeb7464 Documentation: add benchmark to performance.md
Fix https://github.com/coreos/etcd/issues/5433.
2016-05-27 15:05:54 -07:00
Xiang Li
d8f325dabf Merge pull request #5472 from xiang90/fix_cap
integration: move cap enabling to init
2016-05-27 11:42:07 -07:00
Xiang Li
ac2859057a integration: move cap enabling to init 2016-05-27 11:12:07 -07:00
Xiang Li
2d47211589 Merge pull request #5471 from xiang90/proxy_rand
httpproxy: init the rand that we use to randomize endpoints
2016-05-27 10:46:42 -07:00
Xiang Li
c73e8fd946 httpproxy: init the rand that we use to randomize endpoints
This actually does not change anything; the endpoints are already
randomized before being fed into the proxy. But it makes the proxy safer.
2016-05-27 10:28:03 -07:00
Xiang Li
45b872fe5d Merge pull request #5470 from dnaeon/gru
docs: add Gru to the list of projects using etcd
2016-05-27 10:18:55 -07:00
Marin Atanasov Nikolov
6e4fa5e773 docs: add Gru to the list of projects using etcd 2016-05-27 20:17:57 +03:00
Gyu-Ho Lee
04039eb006 etcd-tester: more logs for compact operations 2016-05-27 09:55:13 -07:00
Gyu-Ho Lee
3ed5d28e2e etcd-tester: fix, clean up multiple things (#5462)
* etcd-tester: more logging, fix typo

* etcd-tester: fix prevCompactRev scope

Fix https://github.com/coreos/etcd/issues/5440.

* etcd-tester: move utils to bottom, clean up logs

And remove stresser operation inside defrag

* etcd-tester: separate update revision call

* etcd-tester: fix cleanup when case is -1
2016-05-26 11:37:49 -07:00
Xiang Li
6acb3d67fb Merge pull request #5448 from xiang90/fix_refrsh
etcd: fix refresh feature
2016-05-26 09:53:13 -07:00
Anthony Romano
44b59e24eb Merge pull request #5455 from heyitsanthony/clientv3-url-endpoints
clientv3: handle URL scheme when given in endpoint
2016-05-26 10:25:27 -06:00
Gyu-Ho Lee
d117684086 Merge pull request #5453 from gyuho/protobuf_etcdctlv3
etcdctl/ctlv3: protobuf write-out for member list
2016-05-25 22:39:54 -07:00
Gyu-Ho Lee
5cba7080bc etcdctl/ctlv3: protobuf write-out for member list
Fix https://github.com/coreos/etcd/issues/5297.
2016-05-25 22:23:57 -07:00
Gyu-Ho Lee
86591d64c5 etcdctl: doc member list, others protobuf output 2016-05-25 22:17:45 -07:00
Gyu-Ho Lee
d7fa07cffa Merge pull request #5456 from gyuho/tester_fix
etcd-tester: fix compact timeout
2016-05-25 18:53:07 -07:00
Gyu-Ho Lee
4c7af825c7 etcd-tester: timeout per number of compact entries
Fix https://github.com/coreos/etcd/issues/5440.
2016-05-25 18:37:13 -07:00
Gyu-Ho Lee
5ab27e99f2 Merge pull request #5454 from gyuho/document_issue_5401
etcdserverpb: document how to prefix, range query
2016-05-25 17:07:53 -07:00
Anthony Romano
9dc0782f45 clientv3: handle URL scheme when given in endpoint
Fixes #5427
2016-05-25 18:01:36 -06:00
Gyu-Ho Lee
8a718f3e56 etcdserverpb: document prefix, range query
Fix https://github.com/coreos/etcd/issues/5401.
2016-05-25 16:53:36 -07:00
Xiang Li
53084ebead etcd: fix refresh feature
When using refresh, the etcd v2 store watch is broken. Although with refresh
the store should not trigger current watchers, it should still add events into
the watchhub to keep a complete history. The current store fails to add the event
into the watchhub, which causes issues.
2016-05-25 13:33:31 -07:00
Xiang Li
9ea1705563 Merge pull request #5441 from mqliang/Rlock-GET
store: use Rlock when GET
2016-05-25 11:29:36 -07:00
Gyu-Ho Lee
84ded59f08 Merge pull request #5443 from raoofm/patch-5
Doc: fix typo in v2-migration.md
2016-05-24 09:35:42 -07:00
Raoof Mohammed
5002114127 Doc: fix typo in v2-migration.md 2016-05-24 11:44:40 -04:00
mqliang
ffd3cb78d4 store: use Rlock when GET 2016-05-24 17:13:29 +08:00
Gyu-Ho Lee
f86dc5c7f7 Merge pull request #5438 from gyuho/proxy_log
proxy/httpproxy: fix v2 proxy log header
2016-05-23 16:49:26 -07:00
Xiang Li
340df26883 Merge pull request #5435 from xiang90/cap
api: add v3rpc capability
2016-05-23 15:50:08 -07:00
Gyu-Ho Lee
dd8a36820e proxy/httpproxy: fix v2 proxy log header
Replace all with capnslog
2016-05-23 15:45:49 -07:00
Xiang Li
1c544c3ba5 api: add v3rpc capability 2016-05-23 14:45:08 -07:00
Gyu-Ho Lee
663db2bbf8 Merge pull request #5410 from gyuho/e2e_migrate
e2e: test migrate command
2016-05-23 14:42:51 -07:00
Gyu-Ho Lee
23b14a8c8d e2e: add migrate cmd test 2016-05-23 14:27:51 -07:00
Gyu-Ho Lee
96d06d4f2c e2e: add Restart, Start, grpcEndpoints methods 2016-05-23 14:27:48 -07:00
Gyu-Ho Lee
6a8c65cba9 Merge pull request #5436 from gyuho/v3_doc
Documentation: updates for v3 release
2016-05-23 12:29:39 -07:00
Gyu-Ho Lee
fd7685f3a1 Documentation: add clientv3 links to libraries 2016-05-23 12:01:38 -07:00
Gyu-Ho Lee
d57164d0c8 README: throughput number in v3, add Doorman
Our v3 benchmark shows etcd v3 can do 40k writes per second;
the 1k throughput number is for etcd v2. Also adds YouTube's Doorman
to the example project list.
2016-05-23 12:00:03 -07:00
Gyu-Ho Lee
3351ea1ae2 Procfile: v3 as default 2016-05-23 11:59:23 -07:00
Xiang Li
ad9d18faa9 Merge pull request #5411 from xiang90/m_doc
doc: add app migration doc
2016-05-23 11:56:34 -07:00
Xiang Li
a62e4e1e3a doc: add app migration doc 2016-05-23 11:53:44 -07:00
Gyu-Ho Lee
a3a4f51d90 Merge pull request #5434 from gyuho/log_integration
integration: add logs for debugging
2016-05-23 11:52:08 -07:00
Xiang Li
4df91ae755 Merge pull request #5424 from gyuho/slice_pre_alloc
rafthttp: replace append with pre-allocated slice
2016-05-23 11:30:07 -07:00
Gyu-Ho Lee
ddbe46543d integration: add logs for debugging 2016-05-23 11:23:41 -07:00
Gyu-Ho Lee
f20573b576 Merge pull request #5426 from gyuho/log_compaction_done
mvcc: log when compaction is done
2016-05-21 09:33:50 -07:00
Gyu-Ho Lee
bf8cf39daf mvcc: use capnslog 2016-05-20 22:31:22 -07:00
Anthony Romano
4882330fd7 Merge pull request #5417 from heyitsanthony/watcher-victims
mvcc: reuse watcher batch from notify on blocked watch channel
2016-05-20 19:59:38 -07:00
Anthony Romano
394ce5f3b8 mvcc: move blocked unsynced watchers to victim list 2016-05-20 15:56:02 -07:00
Anthony Romano
5984e46364 mvcc: move blocked sync watcher work to victim list
Instead of holding the store lock while doing a lot of work, like when syncing
unsynced watchers, the work from a blocked synced notify can be reused and
dispatched without holding the store lock for long.
2016-05-20 15:56:02 -07:00
Gyu-Ho Lee
c9264c5e65 rafthttp: replace append with pre-allocated slice 2016-05-20 15:20:55 -07:00
Xiang Li
1226946e2d Merge pull request #5423 from purpleidea/feat/typos3
clientv3: fix typo
2016-05-20 14:45:20 -07:00
James Shubin
374b3ee40b clientv3: fix typo 2016-05-20 17:18:52 -04:00
Gyu-Ho Lee
4c36054610 Merge pull request #5420 from purpleidea/feat/typos2
Fix typos
2016-05-20 11:30:38 -07:00
James Shubin
edca3cbe44 clientv3: Fix typos
Found randomly when going through docs. HTH
2016-05-20 14:06:29 -04:00
Anthony Romano
0b34b236d6 mvcc: benchmark for synced watchers 2016-05-19 23:31:27 -07:00
Xiang Li
751d5fa486 Merge pull request #5414 from swingbach/master
raft: fix tiny mistake of message type
2016-05-19 23:15:15 -07:00
swingbach@gmail.com
ff9d16a2e0 raft: fix tiny mistake of message type 2016-05-20 14:04:08 +08:00
Xiang Li
4ee60d6671 Merge pull request #5413 from mitake/test
test: remove a directory correctly
2016-05-19 21:58:14 -07:00
Hitoshi Mitake
1727f278f2 test: remove a directory correctly
Currently, rm in the test script cannot remove gopath/src correctly,
which results in a test failure.
2016-05-20 13:42:36 +09:00
Xiang Li
e9f3e809a6 Merge pull request #5409 from xiang90/doc
etcdctl: add migrate command into readme
2016-05-19 16:54:10 -07:00
Xiang Li
628a38d906 etcdctl: add migrate command into readme 2016-05-19 16:53:47 -07:00
Gyu-Ho Lee
82c6408f38 Merge pull request #5406 from gyuho/clientv3_slice
clientv3/concurrency: preallocate slice in stm
2016-05-19 14:57:19 -07:00
Gyu-Ho Lee
fa1e40c120 clientv3/concurrency: preallocate slice in stm 2016-05-19 14:42:19 -07:00
Gyu-Ho Lee
8c17674cda Merge pull request #5404 from gyuho/watch_optimize
mvcc: remove defer in watchable store
2016-05-19 14:08:37 -07:00
Gyu-Ho Lee
be4fb634a1 Merge pull request #5279 from gyuho/demo
Documentation: add animated quick demo
2016-05-19 14:03:27 -07:00
Gyu-Ho Lee
aa85cf037f mvcc: remove defer in watchable store 2016-05-19 13:51:51 -07:00
Xiang Li
54536af135 Merge pull request #5405 from gyuho/watch_client
clientv3: preallocate watch streams slice
2016-05-19 13:21:44 -07:00
Gyu-Ho Lee
f9306fb817 clientv3: preallocate watch streams slice
To avoid slice growth when appending
2016-05-19 12:55:55 -07:00
Xiang Li
edb11881f8 Merge pull request #5391 from xiang90/migrate
etcdctl: add migrate command
2016-05-19 12:33:11 -07:00
Xiang Li
6f2e7875aa etcdctl: add migrate command
The migrate command accepts a datadir and an optional user-provided
transformer function that transforms v2 keys into v3 keys.

The migrate command then builds a v3 backend state based on the existing
v2 keys and the output of the transformer function.
2016-05-19 12:17:15 -07:00
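Purely as an illustration of what a user-provided transformer might do, a hypothetical key mapping; this does not show the actual etcdctl migrate transformer interface:

package main

import (
	"fmt"
	"strings"
)

// transformKey is a hypothetical transformation: strip an assumed "/registry"
// prefix when turning a v2 key path into a v3 key.
func transformKey(v2Key string) string {
	return strings.TrimPrefix(v2Key, "/registry")
}

func main() {
	fmt.Println(transformKey("/registry/services/default/db"))
}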
Gyu-Ho Lee
61a7d3efb3 Merge pull request #5392 from gyuho/watch_bench
benchmark: fix watch command
2016-05-19 10:12:24 -07:00
Gyu-Ho Lee
9ca84e814f benchmark: fix watch command
Fix https://github.com/coreos/etcd/issues/5099.
2016-05-19 09:57:35 -07:00
Xiang Li
8e4a83c830 Merge pull request #5400 from rkrambovitis/patch-2
doc: fix https omission in documentation.
2016-05-19 08:07:27 -07:00
Robert Krambovitis
38ebb6b475 doc: fix https omission in documentation.
doc: added missing (http)s to the TLS setup guide

This fixes a minor documentation omission, where the first
initial-advertise-peer-url in the TLS setup appears to be http.
2016-05-19 18:04:52 +03:00
Xiang Li
9ea181e561 Merge pull request #5388 from swingbach/master
raft: add more assertions on dueling candidates test case
2016-05-19 06:59:35 -07:00
swingbach@gmail.com
1e54117580 raft: add more comments for dueling candidates test case 2016-05-19 13:51:20 +08:00
swingbach@gmail.com
c703ccab63 raft: add more assertions for dueling candidates test case 2016-05-19 13:50:14 +08:00
Anthony Romano
62b4d1cef7 Merge pull request #5394 from heyitsanthony/clientv3-no-close-conn
clientv3: don't reuse closed connection and ignore "transport is closing"
2016-05-18 15:52:21 -07:00
Anthony Romano
e4a2dcad9e clientv3/integration: ignore closing transport in TestKVPutStoppedServerAndClose
The grpc "transport is closing" error is raised when the host is unreachable;
there's no good way to avoid it for a Put.

Fixes #5343
2016-05-18 14:49:39 -07:00
Anthony Romano
782a8802c0 clientv3: avoid reusing closed connection in KV 2016-05-18 14:46:17 -07:00
Gyu-Ho Lee
26783f51b1 Documentation: add animated quick demo 2016-05-18 11:28:27 -07:00
Gyu-Ho Lee
dc073e1aa7 Merge pull request #5383 from gyuho/kvstore_byte_pool
mvcc: use buffer bytes to encode consistent index
2016-05-18 10:32:33 -07:00
Gyu-Ho Lee
77775e8e92 mvcc: preallocate bytes buffer for saveIndex 2016-05-18 10:01:57 -07:00
Gyu-Ho Lee
90498b3756 Merge pull request #5385 from gyuho/fix_backup_test
e2e: wait for member publishing after backup
2016-05-17 21:57:52 -07:00
Gyu-Ho Lee
f2b2e0761a e2e: wait for member publishing after backup 2016-05-17 21:39:04 -07:00
Gyu-Ho Lee
81b4e6d332 Merge pull request #5384 from mitake/genproto
scripts: pass -u to go get in genproto.sh
2016-05-17 20:49:36 -07:00
Hitoshi Mitake
db9ccb75bf scripts: pass -u to go get in genproto.sh
Currently genproto.sh doesn't pass the -u option to go get. This is
problematic because the script depends on a specific version of
gogoproto: it causes a build error if a repository already has
an old version of gogoproto that doesn't have the specified commit
($SHA). This commit lets the script pass -u to go get to avoid the
error.
2016-05-18 11:38:51 +09:00
Gyu-Ho Lee
7678fc153a Merge pull request #5382 from gyuho/rafthttp_timeout
rafthttp: fix TestSendMessageWhenStreamIsBroken
2016-05-17 16:22:02 -07:00
Gyu-Ho Lee
d20cb40f4f rafthttp: fix TestSendMessageWhenStreamIsBroken
Fix https://github.com/coreos/etcd/issues/5381.

In case CI is slow and takes more than 10ms.
2016-05-17 16:03:54 -07:00
Gyu-Ho Lee
ecf192556e Merge pull request #5380 from gyuho/backup_e2e_test
e2e: v2 backup test
2016-05-17 15:56:24 -07:00
Gyu-Ho Lee
06950e41b4 e2e: v2 backup test
Fix https://github.com/coreos/etcd/issues/5367.
2016-05-17 15:35:39 -07:00
Anthony Romano
fb8d12a9cd Merge pull request #5379 from heyitsanthony/fix-snapshot-close-wal
etcdserver: wait for snapshots before closing raft
2016-05-17 15:19:41 -07:00
Anthony Romano
73204e9637 etcdserver: wait for snapshots before closing raft
Fixes #5374
2016-05-17 15:04:25 -07:00
Anthony Romano
1a06f5dab5 Merge pull request #5359 from mischief/bolt-openbsd
mvcc: set bolt options to nil for non-linux systems
2016-05-17 13:32:37 -07:00
Gyu-Ho Lee
f65331b456 Merge pull request #5376 from gyuho/e2e_typo
e2e: add 'force-new-cluster' flag, fix typo
2016-05-17 13:29:58 -07:00
Gyu-Ho Lee
00a2dca619 Merge pull request #5378 from gyuho/boltdb_update
vendor: update boltdb to v1.2.1
2016-05-17 13:26:29 -07:00
Gyu-Ho Lee
86c85b88ad Merge pull request #5377 from purpleidea/bug/typos
clientv3: fix typos
2016-05-17 12:51:13 -07:00
Gyu-Ho Lee
dd8e81070a e2e: add force-new-cluster flag 2016-05-17 12:48:26 -07:00
Gyu-Ho Lee
63e6228a0b e2e: fix typo(isClientAuthTLS to isClientAutoTLS) 2016-05-17 12:47:21 -07:00
Nick Owens
e4e4c9dc2c mvcc: set bolt options to nil for non-linux systems 2016-05-17 12:46:44 -07:00
Gyu-Ho Lee
bc5f626e56 vendor: update boltdb to v1.2.1 2016-05-17 12:42:38 -07:00
James Shubin
42f3b4964f clientv3: fix typos 2016-05-17 15:39:56 -04:00
Gyu-Ho Lee
0269afd643 Merge pull request #5375 from gyuho/admin_guide_typo
Documentation/v2: fix typo for updating a member
2016-05-17 11:47:09 -07:00
Gyu-Ho Lee
e2fe80393e Documentation/v2: fix typo for updating a member
Fix https://github.com/coreos/etcd/issues/5358.
2016-05-17 11:44:39 -07:00
Gyu-Ho Lee
3c78523643 Merge pull request #5373 from gyuho/table-write-out
Documentation: write-out=table for v3 commands
2016-05-17 10:46:50 -07:00
Gyu-Ho Lee
6a0148e214 Documentation: write-out=table for v3 commands 2016-05-17 10:45:18 -07:00
Gyu-Ho Lee
3c8301358c Merge pull request #5371 from gyuho/auth_doc
Documentation/v2: fix auth_api.md bug
2016-05-17 10:22:12 -07:00
xiaohuang
21c9da1ed4 Documentation/v2: fix auth_api.md bug
The guest role's read and write permission is "/*", not "*"; the same applies to the other roles.
2016-05-17 09:42:38 -07:00
Xiang Li
7014f6861d Merge pull request #5361 from mitake/auth-v3-token-credential
RFC: *: attach auth token as a gRPC credential
2016-05-16 21:45:44 -07:00
Hitoshi Mitake
6259318521 *: attach auth token as a gRPC credential
This commit adds the functionality of attaching an auth token to the gRPC
connection as a per-RPC credential.

To do this, this commit lets clientv3.Client.Dial() create a
dedicated gRPC connection for authentication. Over the dedicated
connection, the client calls the Authenticate() RPC and obtains its
token. The token is attached to the main gRPC connection with
grpc.WithPerRPCCredentials().

This commit also adds a new option, --username, to etcdctl (v3). With
this option, etcdctl attaches its auth token to the main gRPC
connection (currently it is not used at all).
2016-05-17 13:26:12 +09:00
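A minimal Go sketch of attaching a token as a per-RPC gRPC credential, as described above; the metadata key name and dial options are assumptions rather than the exact clientv3 code:

package example // illustration only

import (
	"context"

	"google.golang.org/grpc"
)

// authToken satisfies grpc/credentials.PerRPCCredentials and sends the token
// as request metadata on every RPC.
type authToken struct{ token string }

func (t authToken) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
	return map[string]string{"token": t.token}, nil // metadata key is an assumption
}

func (t authToken) RequireTransportSecurity() bool { return false }

func dialWithToken(endpoint, token string) (*grpc.ClientConn, error) {
	return grpc.Dial(endpoint,
		grpc.WithInsecure(), // illustration only; use TLS in real deployments
		grpc.WithPerRPCCredentials(authToken{token: token}),
	)
}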
Anthony Romano
327b01169c Merge pull request #5353 from heyitsanthony/clientv3-throttle-reconn
clientv3: throttle reconnection rate
2016-05-16 13:41:28 -07:00
Anthony Romano
f6e5fe6877 Merge pull request #5368 from heyitsanthony/sshot-hash
v3rpc, etcdctl: snapshot integrity hash
2016-05-16 13:09:02 -07:00
Anthony Romano
798718c49b etcdctl: verify snapshot hash on restore
Fixes #4097
2016-05-16 12:08:08 -07:00
Anthony Romano
ac2e3e43bf v3rpc: add sha trailer to snapshot 2016-05-16 11:15:03 -07:00
Anthony Romano
e8101ddf09 clientv3: throttle reconnection rate
Client was reconnecting after establishing connections because the lease
and watch APIs were thrashing. Instead, wait a little before accepting
new reconnect requests.
2016-05-16 11:14:45 -07:00
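A minimal Go sketch of throttling reconnects with golang.org/x/time/rate, the dependency vendored in the commit just below; the limit and burst values are assumptions:

package example // illustration only

import (
	"context"
	"time"

	"golang.org/x/time/rate"
)

// reconnectLimiter allows roughly one reconnect per second with a burst of one.
var reconnectLimiter = rate.NewLimiter(rate.Every(time.Second), 1)

// waitForReconnect blocks until a token is available or the context is canceled.
func waitForReconnect(ctx context.Context) error {
	return reconnectLimiter.Wait(ctx)
}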
Anthony Romano
3c3bb3f97c godep: add golang.org/x/time/rate 2016-05-16 11:14:45 -07:00
Xiang Li
a663828a32 Merge pull request #5366 from xiang90/fix_restore
raft: do not panic when removing all the nodes from cluster
2016-05-16 10:45:48 -07:00
Xiang Li
29c77dee74 Merge pull request #5298 from purpleidea/feat/newurlsmap
pkg/types: Build a urls map from a string map
2016-05-16 10:39:14 -07:00
Anthony Romano
8ffbaef502 Merge pull request #5364 from heyitsanthony/fix-election-wait
integration: fix TestElectionWait
2016-05-16 10:30:17 -07:00
Anthony Romano
e52fc2d07e Merge pull request #5363 from heyitsanthony/fix-test-wait
test: fix wait on integration tests
2016-05-16 10:28:45 -07:00
Xiang Li
910781ef5b raft: do not panic when removing all the nodes from cluster 2016-05-16 10:04:17 -07:00
Anthony Romano
c21b885dd5 integration: fix TestElectionWait
Elections are now per-session, so waiting on the same election with the
same client will not block like before.

Fixes #5362
2016-05-16 07:32:42 -07:00
Anthony Romano
e312bb675c test: fix wait on integration tests
A typo was causing failed tests to look like they passed on CI.
2016-05-16 06:32:38 -07:00
Xiang Li
46481b17fc Merge pull request #5356 from xiang90/grpc-proxy
proxy: initial grpc kv service proxy
2016-05-14 12:31:06 -07:00
Anthony Romano
2d3a8541d0 Merge pull request #5355 from heyitsanthony/cluster-security-doc
doc: add TLS examples to clustering guide
2016-05-14 10:44:06 -07:00
James Shubin
d41ce0a97c pkg/types: Add tests for NewURLsMapFromStringMap 2016-05-14 10:48:56 -04:00
James Shubin
17e23769d9 pkg/types: gofmt existing code 2016-05-14 09:33:58 -04:00
James Shubin
029fe6bf47 pkg/types: Build a urls map from a string map
This adds a simple transformation function which is helpful when
manipulating the different etcd internal data representations.
2016-05-14 09:33:58 -04:00
Xiang Li
ec2ac72585 proxy: initial grpc kv service proxy 2016-05-13 23:00:29 -07:00
Anthony Romano
25850e0070 doc: add TLS examples to clustering guide
Fixes #3595
2016-05-13 17:10:41 -07:00
Xiang Li
deb21d3da5 Merge pull request #5352 from xiang90/p
integration: remove parallel testing
2016-05-13 13:24:36 -07:00
Gyu-Ho Lee
410c5cd828 Merge pull request #5351 from gyuho/allow_null_key
etcdctl/ctlv3: allow empty key
2016-05-13 12:26:59 -07:00
Xiang Li
c7c0e1eb7a integration: remove parallel testing
We cannot do testing in parallel since leak testing will detect the goroutines
in other tests running in parallel.
2016-05-13 12:01:25 -07:00
Gyu-Ho Lee
002090daec e2e: test empty key for get command 2016-05-13 11:30:36 -07:00
Gyu-Ho Lee
3ec627d1a8 etcdctl/ctlv3: allow empty key
Fix https://github.com/coreos/etcd/issues/5323.
2016-05-13 11:29:58 -07:00
Anthony Romano
8c953499fa Merge pull request #5349 from heyitsanthony/clientv3-conc-fixups
clientv3/concurrency: ctx-izations and session leader ids
2016-05-13 10:33:55 -07:00
Anthony Romano
120020fa9c clientv3/concurrency: use session id for election keys to avoid deadlock 2016-05-13 10:07:35 -07:00
Anthony Romano
393725fe5f clientv3/concurrency: ctx-ize Leader(), Resign(), and Unlock() 2016-05-13 10:07:35 -07:00
Anthony Romano
2e93c65c96 bridge: fix command line flag handling
The flag package expects flags in os.Args[1:] and stops on non-flag arguments,
but bridge was expecting the forwarding address in os.Args[1].
2016-05-13 10:07:35 -07:00
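For illustration, how the flag package orders arguments; a sketch, not the bridge code, and the flag name is an assumption:

package main

import (
	"flag"
	"fmt"
)

func main() {
	delay := flag.Duration("delay", 0, "artificial delay before forwarding")
	// flag.Parse reads os.Args[1:] and stops at the first non-flag argument,
	// so the forwarding address must come after the flags, e.g.:
	//   bridge -delay=10ms 127.0.0.1:2379
	flag.Parse()
	addr := flag.Arg(0)
	fmt.Println(*delay, addr)
}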
Xiang Li
4d2424210f Merge pull request #5313 from xiang90/fix_raft_abort
raft: simplify leadership transfer
2016-05-13 09:26:01 -07:00
Anthony Romano
4612e2d59a Merge pull request #5340 from heyitsanthony/etcd-runner-election
etcd-runner: election mode
2016-05-12 22:53:35 -07:00
Anthony Romano
4fe91ed1e2 etcd-runner: election mode 2016-05-12 22:32:33 -07:00
Anthony Romano
215afb9b1d etcd-runner: refactor round code 2016-05-12 22:32:33 -07:00
Gyu-Ho Lee
66e5e4f298 Merge pull request #5344 from gyuho/license_authors
*: update LICENSE header
2016-05-12 21:18:35 -07:00
Gyu-Ho Lee
71e6c4b06a .header: update to 'etcd Authors' 2016-05-12 20:56:50 -07:00
Gyu-Ho Lee
ef44f71da9 *: update LICENSE header 2016-05-12 20:51:48 -07:00
Gyu-Ho Lee
c538e0f9a9 etcdctl: update LICENSE header 2016-05-12 20:51:39 -07:00
Gyu-Ho Lee
2a44b9636a auth: update LICENSE header 2016-05-12 20:51:14 -07:00
Gyu-Ho Lee
fd9e07a529 clientv3: update LICENSE header 2016-05-12 20:50:58 -07:00
Gyu-Ho Lee
9d9f02c1ee mvcc: update LICENSE header 2016-05-12 20:50:33 -07:00
Gyu-Ho Lee
3d523e34b1 tools: update LICENSE header 2016-05-12 20:50:17 -07:00
Gyu-Ho Lee
4a5befc2de wal: update LICENSE header 2016-05-12 20:50:04 -07:00
Gyu-Ho Lee
abb4cd5646 etcdserver: update LICENSE header 2016-05-12 20:49:40 -07:00
Gyu-Ho Lee
bd71a60875 rafthttp: update LICENSE header 2016-05-12 20:49:28 -07:00
Gyu-Ho Lee
fe884f8209 raft: update LICENSE header 2016-05-12 20:49:15 -07:00
Gyu-Ho Lee
8b77de4e99 pkg: update LICENSE header 2016-05-12 20:48:53 -07:00
Xiang Li
a880e9c7cb Merge pull request #5332 from xiang90/sl
*: cancel required leader streams when member lost its leader
2016-05-12 20:24:34 -07:00
Gyu-Ho Lee
15c5259e2d Merge pull request #5328 from gyuho/require_leader
requireHasLeader client side
2016-05-12 19:53:43 -07:00
Xiang Li
9c103dd0de *: cancel required leader streams when member lost its leader 2016-05-12 19:42:21 -07:00
Gyu-Ho Lee
68eaf4083a clientv3: WithRequireLeader 2016-05-12 19:25:42 -07:00
Gyu-Ho Lee
431c4e7b3b Merge pull request #5342 from gyuho/grpc_dep
cmd/vendor: update grpc (upstream)
2016-05-12 19:23:31 -07:00
Gyu-Ho Lee
711be0a567 cmd/vendor: update grpc (upstream) 2016-05-12 19:02:30 -07:00
Gyu-Ho Lee
f4d1501198 Merge pull request #5337 from gyuho/configurable_monitor_interval
etcdmain: gateway monitor-interval flag
2016-05-12 18:58:52 -07:00
Gyu-Ho Lee
a32aabc377 proxy/tcpproxy: add more logs 2016-05-12 17:48:36 -07:00
Gyu-Ho Lee
750273afd9 Merge pull request #5339 from gyuho/protodoc_fix
*: fix protodoc, consistent casing in api doc
2016-05-12 17:39:06 -07:00
Anthony Romano
78d46b71fa Merge pull request #5336 from heyitsanthony/fix-clientv3-failput-close-crash
clientv3: fix Close after failed Put
2016-05-12 17:32:49 -07:00
Gyu-Ho Lee
9a6daefb3e etcdmain: add retry-delay flag 2016-05-12 17:03:00 -07:00
Gyu-Ho Lee
62e5ffac13 Merge pull request #5338 from gyuho/proxy_log
httpproxy: fix capnslog log path
2016-05-12 16:58:32 -07:00
Gyu-Ho Lee
b1f95c314b *: fix protodoc, consistent casing in api doc
There was a bug in protodoc.
This changes the git SHA to use the latest protodoc,
and makes the letter casing consistent with the original
Protocol Buffers files. Go capitalizes the member variables,
but the protocol buffer documentation should be the same as
the original proto files.
2016-05-12 16:23:29 -07:00
Anthony Romano
527aa1a499 clientv3: fix Close after failed Put
Was crashing on a nil connection. Reworked the shutdown path a little so
there's only one connection close site.
2016-05-12 16:16:27 -07:00
Gyu-Ho Lee
25d9169e9a httpproxy: fix capnslog log path
We changed the package path, so the log paths need to be updated as well.
2016-05-12 15:56:40 -07:00
Gyu-Ho Lee
fb65d04291 Merge pull request #5329 from gyuho/typo_integration
integration: fix NewClientV3 error messages
2016-05-12 10:49:14 -07:00
Gyu-Ho Lee
78ae4b92a6 integration: fix NewClientV3 error messages 2016-05-12 10:26:27 -07:00
Xiang Li
2e011053b1 Merge pull request #5326 from mortonfox/patch-1
README: Update link to configuration.md
2016-05-11 22:33:54 -07:00
Morton Fox
9c05f92f2e README: Update link to configuration.md
The file, along with all other documentation files, has moved into the Documentation folder.
2016-05-12 00:57:30 -04:00
Anthony Romano
9acb7ab41c Merge pull request #5325 from heyitsanthony/fix-partial-wal-init
wal: atomically initialize wal directory
2016-05-11 18:01:04 -07:00
Xiang Li
6fc3106e68 Merge pull request #5324 from xiang90/partitioned
*: etcd member rejects unary call with leader requirement when it does not have leader
2016-05-11 17:48:06 -07:00
Anthony Romano
17391336af wal: atomically initialize wal directory
Fixes #5270
2016-05-11 16:50:17 -07:00
Xiang Li
19221b33cc *: etcd member rejects unary call with leader requirement when it does not have leader 2016-05-11 16:34:34 -07:00
Anthony Romano
be0c38ec2b Merge pull request #5322 from heyitsanthony/port-docs
scrub legacy ports and update tls information
2016-05-11 16:32:45 -07:00
Anthony Romano
dcb3b7aecf *: scrub legacy ports from code and scripts 2016-05-11 13:46:30 -07:00
Anthony Romano
db8f5771f1 doc: scrub legacy ports and TLS information for v3 2016-05-11 13:46:29 -07:00
Anthony Romano
b03a2f0323 Merge pull request #5318 from heyitsanthony/watcher-latency
batch watcher sync to reduce request latency
2016-05-11 12:53:20 -07:00
Anthony Romano
080272be17 mvcc: limit total watchers synced per sync
Fixes #4567
2016-05-11 11:16:43 -07:00
Anthony Romano
f5165a0149 benchmark: make number of watcher streams configurable in watch-get
Each stream uses a client goroutine and a grpc stream; the setup causes
considerable client-side latency on the first get requests.
2016-05-11 11:16:43 -07:00
Anthony Romano
2aa4dd52cc benchmark: use separate connection for get in watch-get
The watcher traffic interferes with the get latency when sharing connections.
2016-05-11 11:16:43 -07:00
Xiang Li
ca105a1c89 Merge pull request #5319 from xiang90/fix_rafthttp_test
*: fix TestTransportErrorc
2016-05-11 11:01:43 -07:00
Gyu-Ho Lee
e90313c9c2 Merge pull request #5321 from gyuho/doc_fix
*: fix minor typos
2016-05-11 10:58:36 -07:00
Gyu-Ho Lee
3104507eb2 *: fix minor typos 2016-05-11 10:55:38 -07:00
Gyu-Ho Lee
b2eb90024f Merge pull request #5320 from gyuho/issue518
v2/README: add known bugs
2016-05-11 10:45:40 -07:00
Xiang Li
aaefd52afa Merge pull request #5092 from xiang90/etcdlet
*: gateway initial commit
2016-05-11 10:36:02 -07:00
Gyu-Ho Lee
5023996d02 v2/README: add known bugs
For https://github.com/coreos/etcd/issues/518.
2016-05-11 10:35:41 -07:00
Xiang Li
00b660cc53 Merge pull request #5309 from xiang90/d_metrics
*: add disk operation metrics for monitoring
2016-05-11 10:18:39 -07:00
Xiang Li
4d0f474034 *: fix TestTransportErrorc
CI can be slow. We should just wait longer.
2016-05-11 10:09:40 -07:00
Xiang Li
a300be92dc *: initial support for gateway
etcd gateway is a simple L4 gateway that forwards TCP connections to
the given endpoints.
2016-05-11 09:44:50 -07:00
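A minimal Go sketch of the kind of L4 TCP forwarding described above; illustrative only, with example addresses, not the actual gateway implementation:

package main

import (
	"io"
	"log"
	"net"
)

func main() {
	l, err := net.Listen("tcp", "127.0.0.1:23790") // example listen address
	if err != nil {
		log.Fatal(err)
	}
	for {
		in, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go forward(in, "127.0.0.1:2379") // example backend endpoint
	}
}

// forward copies bytes in both directions between the accepted connection and
// one backend endpoint, closing both sides when either direction ends.
func forward(in net.Conn, endpoint string) {
	out, err := net.Dial("tcp", endpoint)
	if err != nil {
		in.Close()
		return
	}
	go func() { io.Copy(out, in); out.Close() }()
	io.Copy(in, out)
	in.Close()
}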
Xiang Li
0fb7cb8b00 *: add disk operation metrics for monitoring 2016-05-11 09:36:45 -07:00
Gyu-Ho Lee
5ddb532072 Merge pull request #5314 from gyuho/test-script
test: fix typo, clean-up print statements
2016-05-10 23:51:05 -07:00
Gyu-Ho Lee
fd7e2b20b0 test: fix typo, clean-up print statements 2016-05-10 23:05:58 -07:00
Xiang Li
82a6de8b69 raft: simplify leadership transfer 2016-05-10 20:03:42 -07:00
Xiang Li
62d4c6d357 Merge pull request #5312 from ajityagaty/backup
etcdctl: Add --wal-dir and --backup-wal-dir options to backup command.
2016-05-10 19:51:30 -07:00
Ajit Yagaty
23f9d72870 etcdctl: Add --wal-dir and --backup-wal-dir options to backup command.
If the WAL is stored in a separate directory, then the backup command
needs a --wal-dir option to pick the path to the WAL directory.
The user might also want to store the backup of data and the WAL separately,
for which the --backup-wal-dir option is provided.
2016-05-10 18:38:56 -07:00
Gyu-Ho Lee
d8215c8892 Merge pull request #5310 from gyuho/timeout_v2
etcdctl/ctlv2: total-timeout for Sync
2016-05-10 15:02:33 -07:00
Gyu-Ho Lee
62a9209088 etcdctl/ctlv2: total-timeout for Sync
Fix https://github.com/coreos/etcd/issues/4897.
2016-05-10 14:20:05 -07:00
Anthony Romano
6b2d7f9412 Merge pull request #5308 from heyitsanthony/fix-init-notify
etcdmain: notify systemd when etcd is ready to accept requests
2016-05-10 13:55:06 -07:00
Anthony Romano
8c4958dd60 etcdmain: notify systemd when etcd is ready to accept requests
Fixes #5151
2016-05-10 13:36:46 -07:00
Xiang Li
5cbd8cefc9 Merge pull request #5291 from xiang90/c_i
*: add proposalsCommitted metrics
2016-05-10 12:51:28 -07:00
Xiang Li
ab11415d25 *: add proposalsCommitted metrics 2016-05-10 10:56:25 -07:00
Anthony Romano
dad1197c89 Merge pull request #5303 from heyitsanthony/bench-watch-unsync
benchmark: watch-get for testing unsynced watcher/get contention
2016-05-10 10:31:45 -07:00
Anthony Romano
467de8cb4f benchmark: watch-get for testing unsynced watcher/get contention 2016-05-10 10:24:40 -07:00
Gyu-Ho Lee
efcba23d21 Merge pull request #5301 from gyuho/simple_member
etcdctl/ctlv3: make 'table' printer configurable
2016-05-10 10:12:54 -07:00
Gyu-Ho Lee
3e088b3b40 etcdctl/ctlv3: make 'table' printer configurable
Fix https://github.com/coreos/etcd/issues/5296.
2016-05-10 10:02:02 -07:00
Xiang Li
8daad8e06e Merge pull request #5305 from ajityagaty/conf_file
Doc: Add the new '--config-file' detail to configuration.md file
2016-05-10 07:58:55 -07:00
Ajit Yagaty
97a2ebe3a2 Doc: Add the new '--config-file' detail to configuration.md file
Add a description about the --config-file option into the
configuration.md file.
2016-05-10 07:50:37 -07:00
Xiang Li
fa6670488d Merge pull request #5302 from xiang90/conf-file
*: move sample config file to root directory
2016-05-10 07:46:38 -07:00
Xiang Li
4ae47ad934 Merge pull request #5294 from xiang90/r_metrics
*: simplify network metrics
2016-05-09 22:50:45 -07:00
Xiang Li
98dbdd5fbb *: simplify network metrics 2016-05-09 22:37:12 -07:00
Xiang Li
00398ec98d *: move sample config file to root directory 2016-05-09 21:36:09 -07:00
Xiang Li
07c04c7c75 Merge pull request #5280 from ajityagaty/server_config_file
etcd: Configuration file for etcd server.
2016-05-09 19:52:09 -07:00
Ajit Yagaty
8bc5ab9f8d etcd: Configuration file for etcd server.
Added a new command-line option to the etcd server to read in a YAML-based
configuration file. I've also added an example configuration
file with comments and a set of test cases.
2016-05-09 18:17:27 -07:00
Xiang Li
0d43a2b7e7 Merge pull request #5295 from ajityagaty/auth_disable
auth: Adding support for "auth disable" command.
2016-05-07 23:09:37 -07:00
Ajit Yagaty
adc981c53d auth: Adding support for "auth disable" command.
Added support for the auth disable command in the server, along with the
etcdctl command and a corresponding test case.
2016-05-07 19:21:49 -07:00
Anthony Romano
53491aac0a Merge pull request #5250 from heyitsanthony/fix-wal-write-tear
wal: repair torn writes
2016-05-06 17:14:56 -07:00
Anthony Romano
cd9e6a1d4f wal: lock WAL file while repairing 2016-05-06 16:57:55 -07:00
Anthony Romano
774030e1b2 wal: repair torn writes
Fixes #5230
2016-05-06 16:54:08 -07:00
Gyu-Ho Lee
c9c2cdfeaf Merge pull request #5293 from heyitsanthony/fix-compact-cancel-crash
etcdserver: fix nil dereference in physical Compact on proposal timeout
2016-05-06 16:25:03 -07:00
Anthony Romano
824ffded12 etcdserver: fix nil dereference in physical Compact on proposal timeout
Fixes #5292
2016-05-06 15:38:18 -07:00
Xiang Li
34fbec118a Merge pull request #5289 from xiang90/has_leader_metrics
*: add has leader metrics
2016-05-06 14:45:49 -07:00
Xiang Li
4481016953 Merge pull request #5290 from coreos/vv
*: bump to v3.0.0-beta.0+git
2016-05-06 14:10:52 -07:00
Gyu-Ho Lee
ebaa54bf6e *: bump to v3.0.0-beta.0+git 2016-05-06 14:04:01 -07:00
Xiang Li
824478be5f *: add has leader metrics 2016-05-06 13:59:19 -07:00
Gyu-Ho Lee
ffd1fa6f52 Merge pull request #5288 from gyuho/version_bump
*: bump to 3.0.0-beta.0
2016-05-06 13:29:19 -07:00
Xiang Li
faca29fc3b Merge pull request #5287 from xiang90/l_metrics
*: add leader changes to metrics
2016-05-06 13:27:08 -07:00
Xiang Li
76d073a2b5 *: add leader changes to metrics 2016-05-06 13:12:20 -07:00
Gyu-Ho Lee
74ea9ea5cd *: bump to 3.0.0-beta.0 2016-05-06 13:09:50 -07:00
Gyu-Ho Lee
d17aaae714 Merge pull request #5265 from gyuho/fix_5246
v2http: allow empty role for GET '/users'
2016-05-06 11:58:21 -07:00
Gyu-Ho Lee
3c2d0a229c v2http: allow empty role for GET /users
Fix https://github.com/coreos/etcd/issues/5246.
2016-05-06 11:39:38 -07:00
Anthony Romano
879cfe7666 Merge pull request #5278 from heyitsanthony/fix-clientv3-disconnects
clientv3: fix disconnect breakage
2016-05-05 19:53:08 -07:00
Anthony Romano
712090fc09 clientv3: keep watcher client active if reconnect has network error
Otherwise watchers created after a long disconnect period will always
close immediately.
2016-05-05 19:30:11 -07:00
Anthony Romano
22c3a439bc clientv3: do not stop lease client on lost receive stream
Fixes #5242
2016-05-05 19:30:11 -07:00
Anthony Romano
cdc8f99658 clientv3: rework reconnection logic
Avoids a goroutine flood for tight loops with a dead connection.
Now uses request ctx when reconnecting for immediate retry.
2016-05-05 19:30:11 -07:00
Anthony Romano
cc37632003 Merge pull request #5285 from heyitsanthony/fix-windows-sha
build: set git sha on windows builds
2016-05-05 18:39:36 -07:00
Anthony Romano
5d86525230 build: set git sha on windows builds 2016-05-05 18:18:07 -07:00
Xiang Li
93d84b9076 Merge pull request #5284 from xiang90/perf_doc
doc: add performance.md
2016-05-05 16:02:39 -07:00
Xiang Li
b033167094 doc: add performance.md 2016-05-05 14:58:34 -07:00
Xiang Li
98031a3b6e Merge pull request #5249 from xiang90/metrics
*: add metrics for grpc api
2016-05-05 14:19:46 -07:00
Xiang Li
063307ec0a *: add metrics for grpc api 2016-05-05 13:45:52 -07:00
Gyu-Ho Lee
61add11b05 Merge pull request #5259 from gyuho/functional-test
etcd-tester: refactor
2016-05-05 11:15:18 -07:00
Gyu-Ho Lee
cc7dd9b729 etcd-tester: refactor 2016-05-05 10:55:42 -07:00
Anthony Romano
3bcd2b5b9f Merge pull request #5271 from heyitsanthony/fix-rafthttp-active-race
rafthttp: fix race on peer status activeSince
2016-05-04 13:49:58 -07:00
Anthony Romano
c5af1d7a88 rafthttp: fix race on peer status activeSince 2016-05-04 11:48:16 -07:00
Anthony Romano
b24d0032d2 Merge pull request #5269 from heyitsanthony/fix-httpproxy-race
httpproxy: fix race on getting close notifier channel
2016-05-04 09:49:19 -07:00
Anthony Romano
a76f5f5ed2 httpproxy: fix race on getting close notifier channel
Fixes #5267
2016-05-04 09:32:26 -07:00
Xiang Li
53ed8750ce Merge pull request #5266 from maciej/scala_etcd_client
libraries-and-tools.md: add Scala-based maciej/etcd-client
2016-05-04 09:15:18 -07:00
Maciej Bilas
aeff5507e6 libraries-and-tools.md: add Scala-based maciej/etcd-client 2016-05-04 02:56:17 +02:00
Anthony Romano
b7761530e1 Merge pull request #5251 from heyitsanthony/fix-watch-panic
clientv3: gracefully handle watcher resume on compacted revision
2016-05-03 15:00:39 -07:00
Gyu-Ho Lee
b53aaf4c82 Merge pull request #5262 from gyuho/more_logging
*: more detailed timeout logging
2016-05-03 14:13:46 -07:00
Gyu-Ho Lee
9bf601a921 etcdserver: log timeout 2016-05-03 13:39:31 -07:00
Xiang Li
0f5b8c39b4 Merge pull request #5263 from gyuho/autotls-flag
etcdmain: add auto-tls flag to help.go
2016-05-03 13:14:23 -07:00
Gyu-Ho Lee
56dd991b4e etcdmain: add auto-tls flag to help.go 2016-05-03 12:40:02 -07:00
Gyu-Ho Lee
864cbd36bf Merge pull request #5261 from gyuho/typo
*: typo, remove string type assertions
2016-05-03 11:29:13 -07:00
Anthony Romano
1a0d1ab4ab Merge pull request #5260 from glevand/for-merge-build
build: Simplify host detection
2016-05-03 11:28:46 -07:00
Gyu-Ho Lee
a288188001 *: typo, remove string type assertions 2016-05-03 10:59:57 -07:00
Gyu-Ho Lee
5d8d684a91 Merge pull request #5257 from gyuho/proto_fix
*: fix protodoc, re-run genproto script, typos in proto files
2016-05-03 10:46:46 -07:00
Gyu-Ho Lee
fd27f9cd28 Merge pull request #5256 from gyuho/fix_build
build: set GitSHA version in cmd directory
2016-05-03 10:13:07 -07:00
Geoff Levand
4ecb560604 build: Simplify host detection
Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-05-03 09:54:44 -07:00
Anthony Romano
8b52fd0d2d clientv3: gracefully handle watcher resume on compacted revision
Fixes #5239
2016-05-03 09:30:53 -07:00
Xiang Li
b7639b00e0 Merge pull request #5252 from xiang90/client-tls
*: support auto tls on client side
2016-05-03 09:22:11 -07:00
Xiang Li
c5bf6a9d9e e2e: add test for auto client tls 2016-05-03 08:35:02 -07:00
Gyu-Ho Lee
015acabdbb *: rerun genproto -g 2016-05-02 23:02:31 -07:00
Gyu-Ho Lee
6222d46233 scripts/genproto.sh: update protodoc git SHA
To use protodoc with the fix
58fed2ed06.

This correctly parses the order of values in 'directories' flag.
2016-05-02 23:00:40 -07:00
Gyu-Ho Lee
36acde620e build: set GitSHA version in cmd directory
Fix https://github.com/coreos/etcd/issues/5255.
2016-05-02 22:16:40 -07:00
Xiang Li
1f5c5abe6d Merge pull request #5253 from xiang90/fix_raft_test
raft: fix flaky test
2016-05-02 21:33:51 -07:00
Xiang Li
2fa5b913fe raft: fix flaky test
We recently changed the randomized election timeout from (et, 2*et-1] to
[et, 2*et-2], where et is the user-set election timeout.

So 2*et might trigger two elections instead of one. We need to fix the test
code accordingly.

Thanks to the TiKV folks for finding this issue. We probably need to randomize
the etcd/raft tests more.
2016-05-02 21:08:19 -07:00
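As a rough illustration of the range described in the message above (not the actual raft code), a timeout in [et, 2*et-2] can be drawn like this:

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomizedElectionTimeout sketches drawing a timeout from [et, 2*et-2]
// for a user-set election timeout et (assumes et >= 2).
func randomizedElectionTimeout(et int, r *rand.Rand) int {
	return et + r.Intn(et-1) // Intn(et-1) yields 0..et-2, so the sum is et..2*et-2
}

func main() {
	r := rand.New(rand.NewSource(1))
	fmt.Println(randomizedElectionTimeout(10, r))
}
```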
Xiang Li
973ad5aa7c *: support auto tls on client side 2016-05-02 16:17:49 -07:00
Gyu-Ho Lee
fee71b18a3 Merge pull request #5248 from gyuho/hash_with_revision
functional-tester: use revision from hash method
2016-05-02 15:30:26 -07:00
Gyu-Ho Lee
064c1ff0f3 etcdserver/api/v3rpc: use Revision from Hash API 2016-05-02 15:06:39 -07:00
Gyu-Ho Lee
7a6d9ea01a mvcc: Hash to return Revision 2016-05-02 15:04:24 -07:00
Xiang Li
a8139e2b0e Merge pull request #5247 from joshix/faqhead
Documentation/v2: Add newline before heading in faq.md
2016-05-02 11:43:58 -07:00
Joshua Wood
92d673ea59 Documentation/v2: Add newline before heading in faq.md
Minor rewrite to the heading text for clarity.

Matches downstream coreos-inc/coreos-pages#648 and
coreos-inc/coreos-pages#649.
2016-05-02 11:15:18 -07:00
Xiang Li
b9ea5f6d90 Merge pull request #5241 from claws/avoid_differences_in_gnu_and_bsd_cut
use sed instead of cut to accommodate GNU and BSD differences
2016-04-30 20:58:57 -07:00
Chris Laws
c071104fc4 script: fix build script regression to work on OSX
Use sed instead of cut to accommodate GNU and BSD differences

Fixes: #5240
2016-05-01 13:06:07 +09:30
Xiang Li
28f3cb0f14 Merge pull request #5171 from xiang90/runner
etcd-runner: initial commit
2016-04-30 19:39:53 -07:00
Xiang Li
73ecb61ff4 etcd-runner: initial commit 2016-04-30 17:24:03 -07:00
Xiang Li
262de75a7e Merge pull request #5238 from xiang90/bench_watch_put
mvcc: add benchmark for watch put and improve it
2016-04-30 10:46:38 -07:00
Xiang Li
ad327e01d0 mvcc: add benchmark for watch put and improve it 2016-04-29 19:58:37 -07:00
Xiang Li
b58f8dd64b Merge pull request #5237 from brian-brazil/master
Improve some debug metrics.
2016-04-29 17:53:54 -07:00
Brian Brazil
ea1d0f3e0d etcdserver: Improve some debug metrics.
The _total suffix is by convention for counters;
don't use it on a gauge. Clarify the help string.
Tweak the metric name so it'll sort with related metrics
and be a little more understandable.

Remove open file descriptor metric, as Prometheus client_golang
provides that out of the box as process_open_fds which is also
more up to date. Both only support Linux, so there's no loss of
platform support.

Fixes #5229
2016-04-30 01:29:13 +01:00
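A hedged sketch of the naming convention referenced above, using hypothetical metric names with prometheus/client_golang:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	// Counter: the _total suffix is conventional for monotonically increasing values.
	proposalsAppliedTotal = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_proposals_applied_total",
		Help: "Total number of applied proposals.",
	})
	// Gauge: a current value that can go up and down; no _total suffix.
	proposalsPending = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_proposals_pending",
		Help: "Current number of pending proposals.",
	})
)

func init() {
	prometheus.MustRegister(proposalsAppliedTotal, proposalsPending)
}
```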
Xiang Li
c89e348fbc Merge pull request #5232 from glevand/master
Add arm64 travis builds
2016-04-29 16:41:09 -07:00
Gyu-Ho Lee
552a5af10f Merge pull request #5236 from gyuho/wait_purge
pkg/fileutil: wait up to 300ms for purge test
2016-04-29 16:37:09 -07:00
Geoff Levand
b79bb6f164 travis: Enable arm64 builds
Set up a Travis test matrix on a new variable 'TARGET', which specifies the CI
target.  Update the script section with a conditional that runs the needed
commands for each target.

Also, set go_import_path to make cloned repos work, enable the trusty VM, and
enable verbose builds when testing.

Signed-off-by: Geoff Levand <geoff@infradead.org>
2016-04-29 15:31:30 -07:00
Gyu-Ho Lee
4ab1500a6d pkg/fileutil: wait up to 300ms for purge test
Fix https://github.com/coreos/etcd/issues/5231.

The issue shows that a slow CI can take more than 200ms
for purging. This increases the loop iterations to wait
up to 300ms in case the disk is slow.
2016-04-29 15:24:44 -07:00
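A sketch of the poll-until-timeout pattern the fix relies on, with a hypothetical condition callback (not the actual test code):

```go
package waitutil

import "time"

// waitUpTo300ms polls cond until it holds or a ~300ms budget is exhausted,
// instead of relying on a single fixed sleep.
func waitUpTo300ms(cond func() bool) bool {
	for i := 0; i < 30; i++ { // 30 iterations * 10ms = 300ms budget
		if cond() {
			return true
		}
		time.Sleep(10 * time.Millisecond)
	}
	return false
}
```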
Anthony Romano
00d6f104b5 Merge pull request #5235 from ronabop/get_put_typo
simple typo in README docs for getting started
2016-04-29 14:24:21 -07:00
Ronald Gundlach-Chmara
c97b74a72f doc: fix etcdctl example in README
repeated put rather than put followed by get
2016-04-29 21:20:14 +00:00
Gyu-Ho Lee
634a9e833e Merge pull request #5233 from gyuho/client-doc
clientv3: fix README, add error handling example
2016-04-29 13:56:37 -07:00
Gyu-Ho Lee
0c5bcd5d80 clientv3: fix README, add error handling example 2016-04-29 13:34:16 -07:00
Gyu-Ho Lee
33968059e9 Merge pull request #5222 from gyuho/error_interface
rpctypes: error interface
2016-04-29 13:01:54 -07:00
Gyu-Ho Lee
ec1fdd3938 integration: test with new server errors 2016-04-29 12:00:26 -07:00
Gyu-Ho Lee
b3ebe66c97 clientv3/integration: tests with new errors 2016-04-29 12:00:26 -07:00
Gyu-Ho Lee
6049c95dc9 clientv3: auth with rpctypes.Error 2016-04-29 12:00:26 -07:00
Gyu-Ho Lee
506cf1f03f etcdserver/api/v3rpc: use new errors 2016-04-29 12:00:26 -07:00
Gyu-Ho Lee
2b361cf06b rpctypes: define a new error interface 2016-04-29 12:00:22 -07:00
Gyu-Ho Lee
d893a78c38 test: add v3rpc, rpctypes 2016-04-29 11:00:02 -07:00
Anthony Romano
8e099ab713 Merge pull request #5225 from heyitsanthony/local-tester
local-tester: procfile, faults, and network bridge
2016-04-29 10:27:56 -07:00
Xiang Li
b8850cec93 Merge pull request #5228 from xiang90/fix_d
mvcc: fix watch deleteRange
2016-04-29 10:23:46 -07:00
Anthony Romano
29eca4eb88 Merge pull request #5223 from heyitsanthony/kv-less-reconnect
clientv3: better serialization for kv and txn connection retry
2016-04-29 10:02:17 -07:00
Anthony Romano
c0ff77e809 local-tester: procfile, faults, and network bridge
Creates a local fault-injected cluster and stresser for etcd.

Usage: goreman -f tools/local-tester/Procfile start
2016-04-29 09:57:02 -07:00
Xiang Li
3ddcc21179 mvcc: fix watch deleteRange 2016-04-29 09:40:28 -07:00
Anthony Romano
c26eb3f241 clientv3: better serialization for kv and txn connection retry
If the grpc connection is restored between an rpc network failure
and trying to reestablish the connection, the connection retry would
end up resetting good connections if many operations were
in-flight at the time of network failure.
2016-04-29 09:26:32 -07:00
Xiang Li
60425de0ff Merge pull request #5227 from raoofm/patch-3
Doc: Update production-users.md
2016-04-29 08:25:07 -07:00
Raoof Mohammed
db8588ab93 Doc: Update production-users.md
Update the Backups policy
2016-04-29 11:23:51 -04:00
Xiang Li
51ad5f00bf Merge pull request #5226 from raoofm/patch-2
Doc: Update production-users.md
2016-04-29 08:01:07 -07:00
Raoof Mohammed
419ae757d2 Update production-users.md 2016-04-29 10:58:24 -04:00
Gyu-Ho Lee
4480eb6d49 Merge pull request #5217 from gyuho/rpc_types
*: return rpctypes.Err in clientv3
2016-04-28 15:58:47 -07:00
Gyu-Ho Lee
f148f4b2b9 clientv3/integration: tests error types (rpctypes) 2016-04-28 15:42:27 -07:00
Gyu-Ho Lee
2e3d79a7bf clientv3: convert errors to rpctypes on returning
For https://github.com/coreos/etcd/issues/5211.
2016-04-28 15:39:37 -07:00
Gyu-Ho Lee
f613052435 rpctypes: Error function to convert clientv3 error 2016-04-28 12:16:13 -07:00
Gyu-Ho Lee
bef5be42b5 integration: add quota backend bytes option 2016-04-28 12:15:31 -07:00
Anthony Romano
11ec94b7e8 Merge pull request #5218 from heyitsanthony/fix-issue-3699
integration: wait for ReadyNotify in Issue3699 test
2016-04-28 10:48:08 -07:00
Anthony Romano
7c666b533a Merge pull request #5221 from heyitsanthony/parallel-e2e-integration
test: run e2e and integration tests in parallel
2016-04-28 10:30:40 -07:00
Anthony Romano
85edd66c65 test: run e2e and integration tests in parallel 2016-04-28 10:17:40 -07:00
Anthony Romano
8291110049 rafthttp: do not create new connections after stopping transport 2016-04-28 10:10:52 -07:00
Xiang Li
d1e11842df Merge pull request #5219 from xiang90/req_timeout
etcdserver: add timeout for processing v3 request
2016-04-28 09:25:08 -07:00
Xiang Li
6ee5f9c677 etcdserver: add timeout for processing v3 request 2016-04-28 08:52:17 -07:00
Anthony Romano
d814e9dc35 integration: wait for ReadyNotify in Issue3699 test
Fixes #5147
2016-04-27 22:04:07 -07:00
Anthony Romano
8df52dc6fa Merge pull request #5216 from heyitsanthony/lease-header-err
v3rpc: only fill lease grant header if no error
2016-04-27 16:51:16 -07:00
Anthony Romano
06ea8aee11 v3rpc: only fill lease grant header if no error
Was panicking under cluster fault injection.
2016-04-27 16:28:40 -07:00
Xiang Li
ca83793876 Merge pull request #5169 from xiang90/ready
etcdserver: do not serve requests before finish the first internal proposal
2016-04-27 16:05:12 -07:00
Xiang Li
434f2c356d etcdserver: do not serve requests before finish the first internal proposal 2016-04-27 15:46:31 -07:00
Gyu-Ho Lee
e50df7c19b Merge pull request #5215 from gyuho/finish_doc
Finish v2 documentation cleaning
2016-04-27 14:07:59 -07:00
Gyu-Ho Lee
c697aa7c60 Documentation: remove the rest
Remove:
1. auth_api.md
2. docker_guide.md
3. faq.md
4. implementation-faq.md
5. internal-protocol-versioning.md
2016-04-27 13:48:11 -07:00
Gyu-Ho Lee
8b3d1562f9 Documentation: remove admin_guide out of v2 2016-04-27 13:48:07 -07:00
Gyu-Ho Lee
c25c8573ac Merge pull request #5212 from gyuho/doc_fix
v2 documentation link fix
2016-04-27 13:18:39 -07:00
Gyu-Ho Lee
954535c2b4 Documentation: move members_api.md 2016-04-27 11:49:41 -07:00
Gyu-Ho Lee
42c09a95a0 Documentation: remove other_apis from v3 2016-04-27 11:40:48 -07:00
Gyu-Ho Lee
a2ab18fce5 Documentation: move api.md to v2 2016-04-27 11:40:48 -07:00
Gyu-Ho Lee
5464665107 Documentation: del backward_compatibility from v3 2016-04-27 11:40:48 -07:00
Gyu-Ho Lee
04fda9d25f Documentation: fix proxy link and delete from v3 2016-04-27 11:40:44 -07:00
Gyu-Ho Lee
95bac2dc3c Documentation: remove v2 snapshot migration doc 2016-04-27 11:31:44 -07:00
Gyu-Ho Lee
01927cc26a *: remove v2 specific authentication doc 2016-04-27 11:30:51 -07:00
Gyu-Ho Lee
f4b8e878ed Documentation: delete upgrade_2_* from v3 doc dir 2016-04-27 11:29:36 -07:00
Gyu-Ho Lee
63c5725fef Documentation: fix errorcode link to v2 2016-04-27 11:28:48 -07:00
Xiang Li
afd2cc7373 Merge pull request #5206 from xiang90/lease_header
v3rpc: fill lease header
2016-04-27 11:18:00 -07:00
Anthony Romano
08f6c0775a Merge pull request #5199 from heyitsanthony/safe-lock-retry
clientv3/concurrency: use session lease id for mutex keys
2016-04-27 11:10:46 -07:00
Gyu-Ho Lee
07daa9fdc0 Merge pull request #5201 from gyuho/auth_test
auth: add basic tests
2016-04-27 10:57:20 -07:00
Xiang Li
c3de53c23c v3rpc: fill lease header 2016-04-27 10:30:23 -07:00
Gyu-Ho Lee
14415c2187 auth: add tests 2016-04-27 10:13:36 -07:00
Gyu-Ho Lee
81ac766bb4 Merge pull request #5174 from gyuho/restart
etcd-tester: match more grpc errors
2016-04-27 09:47:55 -07:00
Gyu-Ho Lee
de7c18909f etcd-tester: match more grpc errors
To prevent stressers from returning from failure injections
2016-04-27 09:34:05 -07:00
Xiang Li
8a4c9c9da1 Merge pull request #5205 from clearbit/rh-error-newline
etcdctl: Add a newline so that errors don't bleed into each other.
2016-04-27 07:31:08 -07:00
Rob Holland
a00be40db2 etcdctl: Add a newline so that errors don't bleed into each other. 2016-04-27 14:25:57 +01:00
Anthony Romano
ecb0e2bd38 Merge pull request #5203 from heyitsanthony/fix-lease-leak
clientv3: check stream context in lease keep alive send loop
2016-04-26 20:42:31 -07:00
Anthony Romano
30a9229f38 clientv3: check stream context in lease keep alive send loop
If no leases are being kept alive, a connection reset would leak
the send routine since it would only test the stream when sending
keep alives.

Fixes #5200
2016-04-26 20:10:09 -07:00
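A minimal sketch (not the clientv3 implementation) of the idea above: the send loop also selects on the stream's context, so it exits on a connection reset even when there are no keepalives to send:

```go
package keepalive

import (
	"time"

	"golang.org/x/net/context"
)

// sendLoop sends a keepalive on each tick and stops as soon as the stream
// context is canceled, so the routine cannot be leaked by an idle stream.
func sendLoop(ctx context.Context, ticks <-chan time.Time, send func() error) error {
	for {
		select {
		case <-ticks:
			if err := send(); err != nil {
				return err
			}
		case <-ctx.Done():
			return ctx.Err() // stream closed; stop instead of leaking the routine
		}
	}
}
```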
Anthony Romano
22797c7185 clientv3/concurrency: use session lease id for mutex keys
With randomized keys, if the connection goes down, but the session remains,
the client would need complicated recovery logic to avoid deadlock.
Instead, bind the session's lease id to the lock entry; if a session tries
to reacquire the lock it will reassume its old place in the wait list.
2016-04-26 17:37:26 -07:00
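A sketch of the key-layout idea above; the format is illustrative, not the exact encoding used by the concurrency package:

```go
package lock

import "fmt"

// lockKey embeds the session's lease ID in the lock ownership key, so a
// client reconnecting under the same session rejoins the wait list at its
// original position instead of needing recovery logic for a random key.
func lockKey(prefix string, leaseID int64) string {
	return fmt.Sprintf("%s/%016x", prefix, leaseID)
}
```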
Gyu-Ho Lee
c8ab6c348a Merge pull request #5196 from gyuho/password_check
etcdserver/auth: check empty password
2016-04-26 15:56:17 -07:00
Gyu-Ho Lee
bba08f6f79 e2e: add tests for issue 5182
For https://github.com/coreos/etcd/issues/5182.
2016-04-26 15:37:19 -07:00
Gyu-Ho Lee
07685bcf97 etcdserver/auth: check empty password in merge
Fix https://github.com/coreos/etcd/issues/5182.
2016-04-26 15:37:15 -07:00
Anthony Romano
78c96e893e Merge pull request #5198 from heyitsanthony/readme-3.0
doc: focus on v3 in README
2016-04-26 14:47:22 -07:00
Anthony Romano
dc55c312b0 doc: focus on v3 in README and clone old v2 docs
Fixes #5192
2016-04-26 14:41:59 -07:00
Anthony Romano
ce76c28805 Merge pull request #5197 from heyitsanthony/fix-lease-revoke-keepalive
etcdserver: respond with ttl=0 for revoked lease keep alive
2016-04-26 14:13:54 -07:00
Anthony Romano
af1a0b60e2 etcdserver: respond with ttl=0 for revoked lease keep alive
Fixes #5172
2016-04-26 13:53:20 -07:00
Xiang Li
26e52d2bce Merge pull request #5190 from xiang90/deb_metrics
*: add debugging metrics
2016-04-26 10:27:05 -07:00
Xiang Li
67645095e9 *: add debugging metrics 2016-04-26 09:52:56 -07:00
Xiang Li
7161eeed8b Merge pull request #5191 from xiang90/github-folder
.github: add pull request and issue template
2016-04-25 16:22:48 -07:00
Xiang Li
cc27c3a1e6 .github: add pull request and issue template 2016-04-25 16:22:13 -07:00
Anthony Romano
d923b59190 Merge pull request #5189 from heyitsanthony/storage-to-mvcc
*: rename storage package to mvcc
2016-04-25 15:52:08 -07:00
Anthony Romano
b7ac758969 *: rename storage package to mvcc 2016-04-25 15:25:51 -07:00
Xiang Li
1440007608 Merge pull request #5187 from xiang90/doc_security
doc: add link to security
2016-04-25 14:32:12 -07:00
Gyu-Ho Lee
1d5bfd95dc Merge pull request #5188 from gyuho/gogoproto-dependency
Update gogo/proto, grpc dependency
2016-04-25 14:29:42 -07:00
Gyu-Ho Lee
12d01bb1eb vendor: update grpc, gogo/protobuf 2016-04-25 14:10:58 -07:00
Gyu-Ho Lee
4b31acf0e0 *: update generated Proto 2016-04-25 14:08:33 -07:00
Gyu-Ho Lee
82ef33a8d3 scripts: update genproto with new gogoproto hash 2016-04-25 14:07:40 -07:00
Xiang Li
4b296bf51c doc: add link to security 2016-04-25 13:54:38 -07:00
Xiang Li
9ec176a9b0 Merge pull request #5176 from xiang90/lease_client
clientv3: KeepAliveOnce should have a per-call ctx
2016-04-25 11:45:58 -07:00
Xiang Li
6de5b45b2f Merge pull request #5185 from joshix/dochds
Documentation/doc.md: Make headings boring :)
2016-04-25 11:45:20 -07:00
Josh Wood
2a38cb5ad8 Documentation/doc.md: Make headings boring :)
Make the heading sentences that introduce each list of documents
a little more standard language, remove implied 2nd person, reduce
exclamations.
2016-04-25 10:58:25 -07:00
Xiang Li
8a82ddadb9 Merge pull request #5181 from xiang90/cluster_doc
docs: move clustering doc
2016-04-25 10:50:29 -07:00
Xiang Li
cbd79c666e clientv3: KeepAliveOnce should have a per-call ctx
KeepAliveOnce should have a per-call ctx. Now we have a per-API
ctx, but we might do RPC calls multiple times in a for loop.

To avoid unnecessary routine leaks, use a per-call ctx.
2016-04-25 10:46:47 -07:00
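A sketch of the per-call context pattern described above; doRPC is a hypothetical placeholder for the RPC being issued in the loop:

```go
package leaseutil

import "golang.org/x/net/context"

// callInLoop gives each RPC its own cancelable context derived from the
// long-lived per-API context, so an abandoned call releases its resources
// rather than leaking a routine tied to the API context.
func callInLoop(apiCtx context.Context, n int, doRPC func(context.Context) error) error {
	for i := 0; i < n; i++ {
		callCtx, cancel := context.WithCancel(apiCtx)
		err := doRPC(callCtx)
		cancel() // release this call's resources before the next iteration
		if err != nil {
			return err
		}
	}
	return nil
}
```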
Xiang Li
1b98074897 docs: move clustering doc 2016-04-25 10:35:29 -07:00
Gyu-Ho Lee
1378e72bc2 Merge pull request #5184 from gyuho/typo
*: fix flag location, minor typo
2016-04-25 09:59:24 -07:00
Xiang Li
3ae956eb89 Merge pull request #5179 from xiang90/doc_di
doc: link to recovery.md
2016-04-25 09:47:30 -07:00
Gyu-Ho Lee
3ad8e91e00 *: fix flag location, minor typo 2016-04-25 09:41:11 -07:00
Xiang Li
663aca701d Merge pull request #5177 from xiang90/lease_client_2
clientv3: retry on switchRemoteAndStream
2016-04-25 09:36:06 -07:00
Xiang Li
736e1d6c33 doc: link to recovery.md 2016-04-23 22:40:35 -07:00
Xiang Li
844208d7dd clientv3: retry on switchRemoteAndStream
If switchRemoteAndStream fails, the whole lease API fails since
the internal routine exits. We should only fail the whole API when
there is a fatal error. For example, we should fail if we fail to
connect to all the endpoints the user provided.

If we connect to an endpoint, but fail to create a stream, we should
retry instead of returning error to fail the entire API.
2016-04-23 21:55:34 -07:00
Gyu-Ho Lee
f8673b5f60 Merge pull request #5170 from gyuho/tester
etcd-tester: flag consistency-check
2016-04-22 22:26:17 -07:00
Gyu-Ho Lee
151d0d3831 etcd-tester: flag consistency-check 2016-04-22 22:22:12 -07:00
Anthony Romano
90f91ac8ac Merge pull request #5162 from heyitsanthony/disaster-doc
doc: v3 disaster recovery doc
2016-04-22 19:48:45 -07:00
Anthony Romano
579c1342e6 doc: v3 disaster recovery doc 2016-04-22 19:49:39 -07:00
Anthony Romano
50471d0c5c Merge pull request #5168 from heyitsanthony/fix-pipeline-leak
etcdserver: stop raft after stopping apply scheduler
2016-04-22 19:11:34 -07:00
Anthony Romano
08d879341d etcdserver: stop raft after stopping apply scheduler
Was causing a pipeline leak.
2016-04-22 17:15:13 -07:00
Xiang Li
e51e146a19 Merge pull request #5167 from xiang90/doc_reorg
docs: update docs.md and create subdirs
2016-04-22 17:03:16 -07:00
Xiang Li
bfd6465ea3 docs: update docs.md and create subdirs 2016-04-22 16:58:03 -07:00
Xiang Li
45bf7fb960 Merge pull request #5165 from xiang90/race
raft: fix detected race in node.go
2016-04-22 16:12:33 -07:00
Xiang Li
59c5110b73 raft: fix detected race in node.go 2016-04-22 15:45:33 -07:00
Gyu-Ho Lee
0dd9c2520b Merge pull request #5164 from gyuho/sleep_for_slow_network
etcd-tester: wait more for slow network recovery
2016-04-22 15:36:50 -07:00
Gyu-Ho Lee
6a0664d701 etcd-tester: wait more for slow network recovery
For https://github.com/coreos/etcd/issues/5121.
2016-04-22 15:24:47 -07:00
Gyu-Ho Lee
da1138f8de Merge pull request #5160 from gyuho/close_db
etcdctl/ctlv3: close bolt.DB in snapshot status
2016-04-22 12:10:28 -07:00
Josh Wood
d49c044666 Merge pull request #5135 from heyitsanthony/maintenance-doc
doc: v3 maintenance
2016-04-22 12:02:39 -07:00
Gyu-Ho Lee
53abaf86c6 etcdctl/ctlv3: close bolt.DB in snapshot status 2016-04-22 11:43:52 -07:00
Gyu-Ho Lee
3ffbc4c8dd Merge pull request #5158 from gyuho/functional-test-check
etcd-tester: reset success var for every case
2016-04-22 09:37:27 -07:00
Gyu-Ho Lee
0feb88cee1 etcd-tester: change var success->failed
A previous success would overwrite a later failure.
Make it simpler by changing the variable to 'failed'.
2016-04-22 09:27:37 -07:00
Xiang Li
af30795752 Merge pull request #5157 from mitake/5155
etcdserver: remove a data race of ServerStat
2016-04-22 09:19:47 -07:00
Hitoshi Mitake
24077fb3f6 etcdserver: remove a data race of ServerStat
It seems that ServerStats.BecomeLeader() is missing a lock.

Fix https://github.com/coreos/etcd/issues/5155
2016-04-22 23:41:38 +09:00
Xiang Li
69bc0f76bc Merge pull request #5152 from heyitsanthony/fix-quota-test
integration: wait for alarm in TestV3StorageQuotaApply
2016-04-21 21:26:46 -07:00
Anthony Romano
2927c90fae integration: wait for alarm in TestV3StorageQuotaApply
Fixes #4974
2016-04-21 20:53:43 -07:00
Gyu-Ho Lee
f73cdf4035 Merge pull request #5153 from gyuho/api_doc
*: change Protocol Buffer documentation title
2016-04-21 20:03:37 -07:00
Gyu-Ho Lee
2751a10db6 *: change Protocol Buffer documentation title 2016-04-21 19:58:41 -07:00
Gyu-Ho Lee
fdf6335416 Merge pull request #5117 from gyuho/proto_gen
*: Protocol Buffer docs auto-generate script
2016-04-21 19:33:41 -07:00
Gyu-Ho Lee
753630dc37 *: Protocol Buffer docs auto-generate script 2016-04-21 19:14:21 -07:00
Anthony Romano
b8c35e3af8 doc: v3 maintenance 2016-04-21 17:02:46 -07:00
Xiang Li
d32113a0e5 Merge pull request #5150 from xiang90/doc_f
doc: front page of etcd3 doc
2016-04-21 16:49:18 -07:00
Xiang Li
e38710b5f9 doc: front page of etcd3 doc 2016-04-21 16:42:16 -07:00
Gyu-Ho Lee
0c191b71ec Merge pull request #5146 from gyuho/help
etcdmain: quota-backend-bytes in help.go
2016-04-21 13:24:20 -07:00
Gyu-Ho Lee
fa61bf86d7 etcdmain: add quota-backend-bytes to help.go 2016-04-21 13:05:54 -07:00
Gyu-Ho Lee
79a91b3450 Merge pull request #5145 from gyuho/skip_compact
etcd-tester: skip compaction after different hash
2016-04-21 11:09:19 -07:00
Xiang Li
4e175a98c3 Merge pull request #5144 from xiang90/l
*: fix invalid access to backend struct
2016-04-21 10:14:39 -07:00
Xiang Li
c0cf44f134 backend: protect backend access with lock 2016-04-21 09:34:31 -07:00
Xiang Li
4991cda202 etcdserver: fix the leaky snapshot routine issue 2016-04-21 08:48:11 -07:00
Xiang Li
8684d96914 Merge pull request #5124 from mitake/auth-v3-authenticate
*: support authenticate in v3 auth
2016-04-20 21:07:09 -07:00
Hitoshi Mitake
131e3806bb *: support authenticate in v3 auth
This commit implements the Authenticate() API of the auth package. It does
authentication based on its authUsers bucket and generates a token for
subsequent RPCs.
2016-04-21 12:32:19 +09:00
Gyu-Ho Lee
e835d24bea etcd-tester: skip compaction after different hash
When hashes don't match, there could be some nodes
falling behind and the compact request can then error
with 'future revision compact'.
2016-04-20 17:13:51 -07:00
Gyu-Ho Lee
05d5459b1d Merge pull request #5143 from gyuho/mirror-make-e2e
e2e: make-mirror
2016-04-20 15:33:41 -07:00
Gyu-Ho Lee
6eb25751ec e2e: make-mirror 2016-04-20 15:13:45 -07:00
Gyu-Ho Lee
29dfca883f Merge pull request #5141 from gyuho/alarm_test
e2e: test alarm
2016-04-20 12:09:43 -07:00
Gyu-Ho Lee
d976121e35 e2e: test alarm 2016-04-20 11:38:53 -07:00
Anthony Romano
20db51bfb2 Merge pull request #5138 from heyitsanthony/v2api-refactor
etcdserver: v2api refactor
2016-04-20 11:07:37 -07:00
Gyu-Ho Lee
b37a0ad9e7 Merge pull request #5137 from gyuho/member_add_test
e2e: add member add/update test
2016-04-20 10:38:43 -07:00
Xiang Li
f1440f1d63 Merge pull request #5140 from xiang90/fix_d
backend: update db.size after defrag
2016-04-20 10:31:37 -07:00
Anthony Romano
0fe24e7ffc etcdserver: rename v3demo_server to v3_server
Not much of a demo any more.
2016-04-20 10:29:22 -07:00
Anthony Romano
ebace2eb1b etcdserver: split out v2 Do() API from core server code 2016-04-20 10:29:22 -07:00
Anthony Romano
41382bc3f0 etcdserver: split out v2 raft apply interface 2016-04-20 10:29:22 -07:00
Anthony Romano
1fe4c34398 Merge pull request #5131 from heyitsanthony/etcdctl-get-json
etcdctl: print full json response for Get
2016-04-20 10:21:48 -07:00
Gyu-Ho Lee
0893dbf7c1 e2e: add member add/update test 2016-04-20 10:05:55 -07:00
Xiang Li
bfc6309222 Merge pull request #5129 from xiang90/pipe_test
make TestPipelineKeepSendingWhenPostError reliable
2016-04-20 10:02:17 -07:00
Xiang Li
74d50884bb backend: update db.size after defrag 2016-04-20 10:01:38 -07:00
Anthony Romano
d2a58cbb0a etcdctl: print full json response for Get
Otherwise parsing get/txn output with json is somewhat complicated
because in some cases there's a json message and sometimes not.
Likewise, a get on an absent key has to return the current revision for
some algorithms to work.
2016-04-20 09:56:32 -07:00
Xiang Li
fb137f11c5 rafthttp: make TestPipelineKeepSendingWhenPostError reliable 2016-04-20 09:38:47 -07:00
Anthony Romano
0c40f4a7e3 Merge pull request #5136 from heyitsanthony/test-display-gosimple
test: display failure output for gosimple
2016-04-19 23:25:12 -07:00
Anthony Romano
46dfa682e7 test: display failure output for gosimple 2016-04-19 22:58:37 -07:00
Xiang Li
32a486b462 Merge pull request #5127 from xiang90/down_build
doc: build
2016-04-19 13:38:04 -07:00
Xiang Li
d1067d39c7 doc: build 2016-04-19 13:37:50 -07:00
Xiang Li
8af9c88377 Merge pull request #5122 from xiang90/lease_doc
doc: add lease section to interacting doc
2016-04-19 13:31:16 -07:00
Xiang Li
16630529f7 Merge pull request #5125 from xiang90/dev_cluster
doc: add local_cluster doc
2016-04-19 10:51:16 -07:00
Xiang Li
531ee93878 doc: add local_cluster doc 2016-04-19 10:50:54 -07:00
Xiang Li
6d06c060b4 doc: add lease section to interacting doc 2016-04-19 08:18:59 -07:00
Xiang Li
668ea89980 Merge pull request #5126 from judwhite/patch-2
raft/doc.go: add missing }
2016-04-19 07:25:31 -07:00
Jud White
a9cfbd5414 raft/doc.go: add missing } 2016-04-19 04:21:33 -05:00
Xiang Li
bf9cccfc34 Merge pull request #5118 from ajityagaty/fsync_osx
fileutil: Sync on HFS/OSX needs to be handled differently.
2016-04-18 22:22:53 -07:00
Ajit Yagaty
8b6de5f85d fileutil: Sync on HFS/OSX needs to be handled differently.
A call to file.Sync on OS X doesn't guarantee actual persistence to the
physical drive media, as the data can be cached in the drive's
buffers. Hence calls to file.Sync need to be replaced with
fcntl(F_FULLFSYNC).
2016-04-18 21:49:04 -07:00
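A darwin-only sketch of that approach, assuming the raw fcntl call via the syscall package (illustrative, not necessarily the exact etcd code):

```go
//go:build darwin

// Package fsyncdarwin sketches issuing fcntl(F_FULLFSYNC) instead of relying
// on file.Sync alone.
package fsyncdarwin

import (
	"os"
	"syscall"
)

// fullSync asks the drive itself to flush its buffers to stable storage.
func fullSync(f *os.File) error {
	_, _, errno := syscall.Syscall(syscall.SYS_FCNTL, f.Fd(), uintptr(syscall.F_FULLFSYNC), uintptr(0))
	if errno != 0 {
		return errno
	}
	return nil
}
```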
Xiang Li
d16628bf50 Merge pull request #5120 from magicwang-cn/master
etcdserver: close response body when getting cluster information
2016-04-18 19:44:19 -07:00
magicwang-cn
97c71f44fd etcdserver: close response body when getting cluster information 2016-04-19 10:03:40 +08:00
Xiang Li
c4892c7f51 Merge pull request #5105 from xiang90/get_started
doc: add write/read example for interact doc
2016-04-18 14:27:53 -07:00
Xiang Li
a2ac639176 doc: add write/read example for interact doc 2016-04-18 13:42:12 -07:00
Gyu-Ho Lee
8a0fa5622e Merge pull request #5114 from gyuho/snapshot_test
*: add Snapshot e2e test
2016-04-18 09:27:07 -07:00
Anthony Romano
b494ad3a0d Merge pull request #5112 from heyitsanthony/protobuf-comments
storagepb, etcdserverpb: improve documentation for RPC message fields
2016-04-17 23:53:53 -07:00
Anthony Romano
42245a5518 storagepb, etcdserverpb: improve documentation for RPC message fields 2016-04-17 23:33:00 -07:00
Gyu-Ho Lee
ea6a747fc1 Merge pull request #5116 from ajityagaty/typo_fix
etcdctlv3: Fix for typo in alarm command handling.
2016-04-17 20:30:22 -07:00
Ajit Yagaty
68dd22d93d etcdctlv3: Fix for typo in alarm command handling. 2016-04-17 19:31:39 -07:00
Xiang Li
9504df2917 Merge pull request #5115 from gyuho/gc
v3rpc: bytes-key map look-up gc optimization
2016-04-17 13:21:47 -07:00
Gyu-Ho Lee
86f580fa8f v3rpc: bytes-key map look-up gc optimization
This change
f5f5a8b620
just got merged into Go 1.6.1, where Go specially optimizes x =
m[string(k)] when k is []byte.
2016-04-17 10:52:19 -07:00
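A small illustration of that optimization (the map and key names are hypothetical):

```go
package lookup

// lookup indexes a map[string]V directly with string(k), which lets the
// compiler (Go 1.6.1+) avoid allocating a temporary string for the []byte key;
// converting to a string variable first would force the allocation.
func lookup(m map[string]int, k []byte) (int, bool) {
	v, ok := m[string(k)] // conversion inside the index expression: no allocation
	return v, ok
}
```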
Gyu-Ho Lee
a2afb513dd *: add snapshot e2e test 2016-04-16 13:27:10 -07:00
Anthony Romano
d4ff9364d4 Merge pull request #4861 from heyitsanthony/nfs-lock
pkg/fileutil: fix linux file locks over NFS
2016-04-16 08:59:10 -07:00
Xiang Li
11e8d01035 Merge pull request #5113 from ajityagaty/remove_lease_id_casts
clientv3: Remove superfluous LeaseID casts in integration tests.
2016-04-16 07:22:06 -07:00
Xiang Li
f15b5aa4e6 Merge pull request #5034 from ZhuPeng/proxy-http2
Enable http2 support between proxy and member
2016-04-16 07:04:41 -07:00
Ajit Yagaty
da5bd04a1a clientv3: Remove superfluous LeaseID casts in integration tests.
The integration tests under clientv3 have superfluous LeaseID casts
that are not needed, as the ID field of the lease responses is of
type LeaseID now.
2016-04-15 17:48:20 -07:00
Xiang Li
73b48dd8eb Merge pull request #5111 from Amit-PivotalLabs/fix-etcdctl-unset-env
etcdctl: unset ETCDCTL_API env var properly
2016-04-15 16:32:42 -07:00
Amit Kumar Gupta
c629a30f1f etcdctl: unset ETCDCTL_API env var properly 2016-04-15 15:43:00 -07:00
Gyu-Ho Lee
4ed5f66a7a Merge pull request #5109 from gyuho/member_remove_test
e2e: add member remove test
2016-04-15 15:04:00 -07:00
Gyu-Ho Lee
caf0e9b9b1 Merge pull request #5110 from gyuho/error_when_db_not_exist
etcdctl: snapshot status error for non-existent file
2016-04-15 14:44:25 -07:00
Gyu-Ho Lee
59a88d1cf6 e2e: add member remove test 2016-04-15 14:43:32 -07:00
Gyu-Ho Lee
a78ece4ac2 etcdctl: snapshot status error for non-existent file 2016-04-15 14:15:16 -07:00
Anthony Romano
3ee99a496f Merge pull request #5096 from heyitsanthony/clientv3-run-examples
test, clientv3: run examples as integration tests
2016-04-15 12:42:44 -07:00
Anthony Romano
9bfa0172f5 test, clientv3: run examples as integration tests 2016-04-15 11:51:30 -07:00
Gyu-Ho Lee
d4dae7e9e9 Merge pull request #5101 from gyuho/watch_bench_fix
benchmark: ensure all watcher receivers to finish
2016-04-15 11:49:24 -07:00
Gyu-Ho Lee
ad226f2020 benchmark: ensure all watcher receivers to finish
Fix https://github.com/coreos/etcd/issues/5099.
2016-04-15 11:11:14 -07:00
Anthony Romano
c1455a4f10 Merge pull request #5090 from ajityagaty/lease_id
clientv3: Use LeaseID in all the client APIs.
2016-04-15 10:48:29 -07:00
Xiang Li
da153d3f3c Merge pull request #5091 from xiang90/r_h
doc: add response header doc into api
2016-04-15 09:57:48 -07:00
Xiang Li
3b72c3da53 doc: add response header doc into api 2016-04-15 09:54:30 -07:00
Gyu-Ho Lee
81a5fc16ef Merge pull request #5095 from gyuho/govet_fix
*: fix govet -shadow in go tip
2016-04-15 09:41:24 -07:00
Gyu-Ho Lee
376234f196 Merge pull request #5094 from gyuho/watch_range_example
*: add more examples to clientv3, pkg/adt
2016-04-15 09:10:25 -07:00
Gyu-Ho Lee
641a1a66e1 *: fix govet -shadow in go tip 2016-04-15 07:39:52 -07:00
Gyu-Ho Lee
ae27b991b1 *: add more examples to clientv3, pkg/adt 2016-04-14 23:46:50 -07:00
Ajit Yagaty
06a4086bf9 clientv3: Use LeaseID in all the client APIs.
In order to use the LeaseID type instead of int64, we have to convert
the protobuf lease responses into client lease responses.
2016-04-14 23:09:46 -07:00
Gyu-Ho Lee
4ee7cad116 Merge pull request #5093 from gyuho/fix_test
functional-tester/etcd-tester: fix error check
2016-04-14 21:45:44 -07:00
Gyu-Ho Lee
8515ae30fb functional-tester/etcd-tester: fix error check 2016-04-14 21:31:12 -07:00
朱鹏
67db28f979 proxy: enable http2 for connecting to members
Enable HTTP/2 when the transport specifies a custom TLS config, in which
case it is not enabled automatically.

Issue 5033
2016-04-15 10:16:26 +08:00
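A sketch of explicitly enabling HTTP/2 on a transport carrying a custom TLS config via golang.org/x/net/http2 (illustrative, not the proxy's exact code):

```go
package proxyutil

import (
	"crypto/tls"
	"net/http"

	"golang.org/x/net/http2"
)

// newHTTP2Transport builds a transport with a custom TLS config; Go does not
// upgrade such a transport to HTTP/2 automatically, so it is configured here.
func newHTTP2Transport(cfg *tls.Config) (*http.Transport, error) {
	tr := &http.Transport{TLSClientConfig: cfg}
	if err := http2.ConfigureTransport(tr); err != nil {
		return nil, err
	}
	return tr, nil
}
```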
Anthony Romano
6c1cc1d4ea Merge pull request #5089 from heyitsanthony/fix-func-tester-timeout
etcd-tester: return error if first compaction times out
2016-04-14 17:24:22 -07:00
Anthony Romano
21233416e8 etcd-tester: return error if first compaction times out
Fixes #5081
2016-04-14 17:11:53 -07:00
Xiang Li
74153ffa45 Merge pull request #5082 from xiang90/kv_d
doc: add doc for kv message
2016-04-14 15:17:04 -07:00
Xiang Li
df37c75bb9 doc: add doc for kv message 2016-04-14 15:16:23 -07:00
Anthony Romano
f2e915f56e Merge pull request #5086 from heyitsanthony/test-race-rafthttp
test: check races on rafthttp
2016-04-14 14:21:20 -07:00
Anthony Romano
57448622d9 Merge pull request #5085 from heyitsanthony/hide-yaml
clientv3: make YamlConfig struct private
2016-04-14 14:10:20 -07:00
Anthony Romano
01be6933c6 test: check races on rafthttp
The data race in net/http has been fixed for a while.
2016-04-14 13:45:31 -07:00
Gyu-Ho Lee
cfbb8a71db Merge pull request #5084 from gyuho/typo
clientv3: fix example code format, more examples
2016-04-14 12:30:44 -07:00
Anthony Romano
04ef861c3d clientv3: make YamlConfig struct private 2016-04-14 12:26:01 -07:00
Gyu-Ho Lee
81e344bef9 clientv3: fix example code format, more examples 2016-04-14 12:13:07 -07:00
Gyu-Ho Lee
6bbdebb281 Merge pull request #5076 from gyuho/more_e2e
*: add, clean up e2e tests
2016-04-14 11:59:13 -07:00
Gyu-Ho Lee
6a3b5fe70c Merge pull request #5083 from ajityagaty/role_grant_test
e2e: Test case for the etcdctlv3 'role grant' command.
2016-04-14 11:53:21 -07:00
Gyu-Ho Lee
fefb58dc90 e2e: clean up, add more tests 2016-04-14 11:42:57 -07:00
Ajit Yagaty
4495559ad6 e2e: Test case for the etcdctlv3 'role grant' command.
Adding a test case to test the 'role grant' sub-command.
2016-04-14 11:31:07 -07:00
Xiang Li
ba1c0a2b12 Merge pull request #5080 from xiang90/up
proxy: initial userspace tcp proxy
2016-04-14 10:41:46 -07:00
Xiang Li
4a913ae60a proxy: initial userspace tcp proxy 2016-04-14 10:14:30 -07:00
Gyu-Ho Lee
da1132662a Merge pull request #5078 from ajityagaty/role_cmd_tests
e2e: Test case for the etcdctlv3 role command.
2016-04-14 09:44:07 -07:00
Xiang Li
27844a6aef Merge pull request #5079 from mitake/auth-fix
auth: remove index out of range in role grant
2016-04-14 08:07:18 -07:00
Hitoshi Mitake
a016220648 auth: remove index out of range in role grant
Fixes https://github.com/coreos/etcd/issues/5077
2016-04-14 22:02:10 +09:00
Ajit Yagaty
3b7c8d752c e2e: Test case for the etcdctlv3 role command.
New test cases have been added to test the 'role' and 'user'
sub-commands of etcdctlv3 utility.
2016-04-14 01:54:22 -07:00
Xiang Li
ac95cc32ef Merge pull request #5075 from xiang90/p
proxy: move http related thing to httpproxy
2016-04-13 22:44:29 -07:00
Anthony Romano
e913792d0f Merge pull request #5073 from heyitsanthony/etcdctl-docs
doc: document many etcdctl commands
2016-04-13 22:08:22 -07:00
Anthony Romano
cd05ac4217 doc: document many etcdctl commands
documents defrag, compaction, lease, snapshot status, member, endpoint
2016-04-13 21:50:59 -07:00
Anthony Romano
b20d171ee1 Merge pull request #5074 from heyitsanthony/fix-compact-current-rev
storage: have Range on rev=0 work even if compacted to current revision
2016-04-13 21:15:55 -07:00
Xiang Li
66d2ae7a39 proxy: move http related thing to httpproxy 2016-04-13 21:09:26 -07:00
Anthony Romano
d72bcdc156 storage: have Range on rev=0 work even if compacted to current revision 2016-04-13 21:00:35 -07:00
Anthony Romano
e6ff5a38e1 Merge pull request #5072 from heyitsanthony/fix-ep-json
etcdctl: respect --write-out=json for endpoint status command
2016-04-13 19:12:26 -07:00
Gyu-Ho Lee
793fb2cf64 Merge pull request #4673 from gyuho/slow
functional-tester: add latency test (simulate slow network)
2016-04-13 17:07:30 -07:00
Anthony Romano
f07350735d etcdctl: respect --write-out=json for endpoint status command 2016-04-13 17:04:31 -07:00
Gyu-Ho Lee
6af40ea1e1 functional-tester: add latency test (simulate slow network)
Fix https://github.com/coreos/etcd/issues/4666.
2016-04-13 17:00:09 -07:00
Gyu-Ho Lee
e9aa8ff235 Merge pull request #5071 from gyuho/member_api_change
*: Member api change
2016-04-13 16:45:10 -07:00
Anthony Romano
3dcfe79cc0 Merge pull request #5070 from heyitsanthony/member-doc
etcdctl: display required arguments for member commands in usage
2016-04-13 16:40:16 -07:00
Gyu-Ho Lee
7a2ef3eb00 *: regenerate proto buffers 2016-04-13 16:24:07 -07:00
Gyu-Ho Lee
2c6176b5f2 *: remove MemberLeader API in client side (fix examples) 2016-04-13 16:23:57 -07:00
Gyu-Ho Lee
b78886239e *: remove IsLeader field in Member API server side 2016-04-13 16:23:33 -07:00
Anthony Romano
90df7fd738 etcdctl: display required arguments for member commands in usage 2016-04-13 16:18:00 -07:00
Anthony Romano
22812badc2 Merge pull request #5069 from heyitsanthony/fix-snapshot-status-json
etcdctl: respect -write-out=json for snapshot status
2016-04-13 15:57:39 -07:00
Anthony Romano
b90e30b28e etcdctl: respect -write-out=json for snapshot status 2016-04-13 13:37:32 -07:00
Anthony Romano
a553ea8ba7 Merge pull request #5068 from heyitsanthony/lease-fixups
etcdctl: improve lease command documentation and exit codes
2016-04-13 13:20:06 -07:00
Anthony Romano
993f25f055 Merge pull request #5065 from heyitsanthony/errexit-defrag
etcdctl: return non-zero exit code if defrag fails on any endpoint
2016-04-13 13:19:43 -07:00
Anthony Romano
721ed6ba2b etcdctl: return non-zero exit code if defrag fails on any endpoint 2016-04-13 12:39:43 -07:00
Anthony Romano
855a5116a2 etcdctl: improve lease command documentation and exit codes 2016-04-13 12:38:21 -07:00
Gyu-Ho Lee
c0971a6ebc Merge pull request #5066 from gyuho/compaction_test
e2e: compaction test
2016-04-13 12:35:20 -07:00
Gyu-Ho Lee
3f0863a1e9 e2e: compact test 2016-04-13 12:07:48 -07:00
Gyu-Ho Lee
c8e860c4fa Merge pull request #5055 from gyuho/get_rev
*: add rev flag to get command
2016-04-13 12:05:48 -07:00
Xiang Li
3fef0eb0d8 Merge pull request #5061 from xiang90/grpc_d
*:update dependencies
2016-04-13 11:40:14 -07:00
Xiang Li
5157b713ed Merge pull request #5064 from raoofm/patch-1
Documentation: v3 mem benchmark total watch value
2016-04-13 11:35:23 -07:00
Gyu-Ho Lee
60548b85c4 *: add rev flag to get command 2016-04-13 11:32:29 -07:00
Gyu-Ho Lee
15e865e024 Merge pull request #5062 from gyuho/govet-mutex
etcd-tester: fix govet
2016-04-13 11:19:20 -07:00
Gyu-Ho Lee
cb280bae91 etcd-tester: fix govet 2016-04-13 11:12:31 -07:00
Raoof Mohammed
61cfe68247 Documentation: v3 mem benchmark total watch value
Updating Documentation/benchmarks/etcd-3-watch-memory-benchmark.md with the correct 'total watching' value
2016-04-13 14:12:10 -04:00
Gyu-Ho Lee
52c4595899 Merge pull request #5060 from gyuho/ineffassign
*: fixes based on ineffassign
2016-04-13 10:59:58 -07:00
Xiang Li
7c5ec417c3 *:update dependencies 2016-04-13 10:47:24 -07:00
Gyu-Ho Lee
89f8e66682 *: fixes based on ineffassign 2016-04-13 10:41:58 -07:00
Gyu-Ho Lee
35d2d7b23e Merge pull request #5059 from gyuho/elect_e2e_test
e2e: add elect command test
2016-04-13 10:25:28 -07:00
Gyu-Ho Lee
1224044553 e2e: add elect command test 2016-04-13 10:00:56 -07:00
Anthony Romano
228e772b3a Merge pull request #5056 from heyitsanthony/expect-signal
pkg/expect, e2e: support sending Signals to expect process, test etcdctl lock
2016-04-13 09:42:41 -07:00
Anthony Romano
8763bd1e97 e2e: etcdctlv3 lock test 2016-04-13 09:26:16 -07:00
Anthony Romano
604a73c833 e2e: remove sh in spawnCmd
Certain shells claim the ppid of expect processes, which interferes with
signals.
2016-04-13 09:12:40 -07:00
Anthony Romano
fcb5ba98d0 pkg/expect: support sending Signals to expect process 2016-04-13 09:11:57 -07:00
Anthony Romano
18992bac4f Merge pull request #5057 from heyitsanthony/e2e-v3-cleanup
e2e: cleanup error and prefix arg handling for ctlv3 tests
2016-04-13 09:09:13 -07:00
Anthony Romano
209f573083 e2e: cleanup error and prefix arg handling for ctlv3 tests 2016-04-12 23:48:13 -07:00
Xiang Li
2985396768 Merge pull request #5053 from xiang90/ctl_i
etcdctl: move endpoint-health and status into endpoint command
2016-04-12 16:50:03 -07:00
Xiang Li
ae9b251d99 etcdctl: move endpoint-health and status into endpoint command 2016-04-12 16:30:26 -07:00
Anthony Romano
0ca949ce90 Merge pull request #5051 from heyitsanthony/fix-user-list
etcdctl: don't panic on ListUser with roles
2016-04-12 14:24:08 -07:00
Anthony Romano
c9ce92f635 client: accept roles in response for ListUser
Fixes #5046
2016-04-12 12:48:43 -07:00
Xiang Li
a8b7d0b63c Merge pull request #5050 from xiang90/b_v
etcdserver: save cluster version into backend
2016-04-12 12:05:02 -07:00
Xiang Li
e9735b7bd0 etcdserver: save cluster version into backend 2016-04-12 11:37:22 -07:00
Anthony Romano
f13e558ab4 e2e: test etcdtl user list on root user 2016-04-12 11:15:06 -07:00
Anthony Romano
095a755e4d Merge pull request #5049 from heyitsanthony/fix-grant-roles-existing
etcdctl: don't crash on duplicate role in user grant
2016-04-12 11:05:08 -07:00
Anthony Romano
a12fd9cc92 etcdctl: print grant/revoke error instead of scanning roles for changes
Fixes #5045
2016-04-12 10:49:05 -07:00
Anthony Romano
a0d653b630 e2e: test etcdctl v2 double user grant
Crashes in 2.3.1
2016-04-12 10:49:05 -07:00
Xiang Li
040f7b90c7 Merge pull request #5048 from xiang90/fix_c
rafthttp: fix comment in msgappv2
2016-04-12 10:20:04 -07:00
Xiang Li
f2d558644d rafthttp: fix comment in msgappv2 2016-04-12 10:14:06 -07:00
Xiang Li
ef0d5c3d7d Merge pull request #4957 from mqliang/memberStatus
etcdctlv3: expose store size and raft status in 'etcdctl status' command
2016-04-12 08:46:45 -07:00
mqliang
ff311ba0a7 etcdctlv3: print db size and raft status in 'etcdctl status' command 2016-04-12 22:58:22 +08:00
mqliang
a9a06438f9 etcdctlv3: expose db size and raft status in server side 2016-04-12 22:49:15 +08:00
mqliang
1044fbce2c etcdctlv3: update auto-generated files 2016-04-12 22:48:47 +08:00
mqliang
c3da2631bf etcdctlv3: add db size and raft status in protobuffer 2016-04-12 22:47:27 +08:00
Xiang Li
17e32b6aa9 Merge pull request #5041 from xiang90/snap_info
snapshot status
2016-04-11 23:13:08 -07:00
Gyu-Ho Lee
50219d4def Merge pull request #5042 from gyuho/ts
benchmark: return time series with missing periods filled in
2016-04-11 23:09:36 -07:00
Xiang Li
8e3d99cd3e Merge pull request #5043 from mitake/auth-trivial
little cleaning of v3 auth
2016-04-11 23:09:22 -07:00
Gyu-Ho Lee
2aab6ff2eb benchmark: return time series with missing periods filled in 2016-04-11 23:07:45 -07:00
Xiang Li
94d436c5d1 vendor: add go-humanize 2016-04-11 22:55:47 -07:00
Xiang Li
b5292f6fce etcdctl: add snapshot status support 2016-04-11 22:55:47 -07:00
Hitoshi Mitake
0b4749ea65 auth: remove needless logging during creating a new user 2016-04-12 14:52:31 +09:00
Hitoshi Mitake
bfd49023a1 auth: sort key permissions of role struct for effective searching 2016-04-12 14:52:31 +09:00
Anthony Romano
b1c3e7edbf Merge pull request #4982 from heyitsanthony/godep-update-script
scripts: updatedep.sh to update vendored dependencies
2016-04-11 19:47:18 -07:00
Anthony Romano
4481e54017 Merge pull request #5040 from heyitsanthony/fix-txn-rev
etcdserver: set txn header revision to store revision following txn
2016-04-11 19:41:54 -07:00
Anthony Romano
c5b8e8dc88 etcdserver: set txn header revision to store revision following txn 2016-04-11 17:03:05 -07:00
Anthony Romano
8c2225f251 Merge pull request #5038 from heyitsanthony/sshot-docs
doc: document etcdctl snapshot command
2016-04-11 16:21:09 -07:00
Anthony Romano
195100a769 Merge pull request #5039 from heyitsanthony/fix-write-out
etcdctl: respect --write-out
2016-04-11 16:19:43 -07:00
Xiang Li
3a695a82a3 Merge pull request #5036 from xiang90/r_t
raft: add a test case for Test Slice
2016-04-11 16:02:13 -07:00
Anthony Romano
e5a2bd58ec etcdctl: respect --write-out
Support got clobbered about a month ago.
2016-04-11 16:01:38 -07:00
Anthony Romano
6e8d01f956 doc: document etcdctl snapshot command 2016-04-11 15:58:20 -07:00
Xiang Li
0a684c10ad Merge pull request #5025 from xiang90/no_dup_resp
etcdserver: do not send out out of date appResp
2016-04-11 14:41:52 -07:00
Xiang Li
3bad47d691 Merge pull request #5018 from xiang90/b
etcdserver: set backend to cluster
2016-04-11 13:02:57 -07:00
Anthony Romano
be822b05d2 Merge pull request #5012 from heyitsanthony/snap-api
*: snapshot RPC
2016-04-11 13:00:18 -07:00
Anthony Romano
e838c26f8a etcdctl: use snapshot RPC in snapshot command 2016-04-11 12:32:53 -07:00
Anthony Romano
b97b5843a3 Merge pull request #5035 from heyitsanthony/fix-unused-output
test: display unused output if unused source found
2016-04-11 11:36:23 -07:00
Xiang Li
174a996c37 Merge pull request #5032 from mitake/auth-user-grant
*: support granting a role to a user in v3 auth
2016-04-11 11:10:10 -07:00
Xiang Li
9423125ce1 raft: add a test case for Test Slice 2016-04-11 10:04:03 -07:00
Anthony Romano
2113b77635 test: display unused output if unused source found
unused exits non-zero if it finds unused source, which causes the test
script's set -e to abort the script.
2016-04-11 09:55:22 -07:00
Anthony Romano
d5766eab3e clientv3: add Snapshot to Maintenance 2016-04-11 09:51:17 -07:00
Anthony Romano
a6b6fcf1c4 etcdserverpb, v3rpc: add Snapshot to Maintenance RPC service 2016-04-11 09:51:16 -07:00
Hitoshi Mitake
7ba2646d37 *: support granting a role to a user in v3 auth 2016-04-11 15:53:30 +09:00
Gyu-Ho Lee
af1b3f061a Merge pull request #5024 from ajityagaty/user_cmd_tests
e2e: Test cases for the etcdctlv3 user commands.
2016-04-10 23:50:12 -07:00
Gyu-Ho Lee
6c8428c393 Merge pull request #5031 from gyuho/cleanup
*: clean up from go vet, misspell
2016-04-10 23:32:46 -07:00
Gyu-Ho Lee
9108af9046 *: clean up from go vet, misspell 2016-04-10 23:16:56 -07:00
Gyu-Ho Lee
1f4f3667a4 Merge pull request #5021 from gyuho/vendor_doc
*: client vendoring README
2016-04-10 22:15:36 -07:00
Xiang Li
935999a80e Merge pull request #5030 from mitake/auth-trivial
trivial updates for v3 auth
2016-04-10 21:47:08 -07:00
Hitoshi Mitake
53bb79f240 auth: remove needless field from protobuf define
The field tombstone won't be used in the future because of the design
change.
2016-04-11 13:02:34 +09:00
Hitoshi Mitake
097cec8194 etcdctl: let some v3 auth related functions be private
They don't need to be public.
2016-04-11 13:01:19 +09:00
Xiang Li
27480f9ea4 Merge pull request #4966 from mitake/auth-role-grant
*: support granting key permission to role in v3 auth
2016-04-10 20:31:05 -07:00
Hitoshi Mitake
02033b4c47 *: support granting key permission to role in v3 auth 2016-04-11 12:23:19 +09:00
Anthony Romano
f5f0280a63 Merge pull request #5028 from heyitsanthony/etcdmain-unsupported-envvar
etcdmain: start on unsupported arch when ETCD_UNSUPPORTED_ARCH is set
2016-04-10 19:55:34 -07:00
Anthony Romano
c4caa65c51 etcdmain: start on unsupported arch when ETCD_UNSUPPORTED_ARCH is set 2016-04-10 19:36:04 -07:00
Anthony Romano
130567832f Merge pull request #4734 from luxas/32bit_alignments
etcdserver: align 64-bit atomics on 8-byte boundary
2016-04-10 19:18:15 -07:00
Ajit Yagaty
603c14db9d e2e: Test cases for the etcdctlv3 user commands.
New test cases have been added to test the "user" sub-commands of
the etcdctlv3 utility.
2016-04-10 17:46:04 -07:00
Gyu-Ho Lee
0d9039f192 Merge pull request #5026 from lodevil/master
KeepAliveOnce error fix (when the lease not found)
2016-04-10 17:35:52 -07:00
lolynx
e3fd246414 clientv3: fix KeepAliveOnce return error message 2016-04-11 08:13:36 +08:00
Xiang Li
de7692b2b2 etcdserver: do not send out out of date appResp 2016-04-09 23:30:00 -07:00
Xiang Li
3c0ac9d600 etcdserver: set backend to cluster 2016-04-08 21:46:45 -07:00
Gyu-Ho Lee
78554c6de6 *: client vendoring README 2016-04-08 19:48:17 -07:00
Xiang Li
345bdc3db6 Merge pull request #5017 from xiang90/member
membership: save/update the whole member information into backend
2016-04-08 13:30:35 -07:00
Xiang Li
a406c9fa3d membership: save/update the whole member information into backend 2016-04-08 13:14:37 -07:00
Gyu-Ho Lee
fe810e7b43 Merge pull request #5015 from gyuho/semaphore-ci-badge
README: change semaphore CI status badge
2016-04-08 12:10:05 -07:00
Gyu-Ho Lee
9bb99b5f72 Merge pull request #5016 from gyuho/lease_simple
*: clean up from gosimple
2016-04-08 12:09:56 -07:00
Gyu-Ho Lee
953a08d841 *: clean up from gosimple 2016-04-08 11:55:03 -07:00
Xiang Li
4997ed36b4 Merge pull request #5011 from xiang90/r_c
raft: fix issues reported by golint
2016-04-08 11:46:12 -07:00
Gyu-Ho Lee
97730778e5 README: change semaphore CI status badge
We just reset Semaphore CI, and the badge URL has changed.
2016-04-08 11:24:30 -07:00
Xiang Li
b70e6a6bf1 Merge pull request #4916 from es-chow/transfer-leader
raft: transfer leader feature
2016-04-08 08:01:24 -07:00
es-chow
ac059eb8cb raft: transfer leader feature 2016-04-08 16:56:32 +08:00
Gyu-Ho Lee
4041bbe571 Merge pull request #5008 from gyuho/gosimple_unused
clean up with gosimple and unused
2016-04-07 23:31:21 -07:00
Gyu-Ho Lee
fb85da92e8 *: fix based on gosimple and unused 2016-04-07 23:16:37 -07:00
Gyu-Ho Lee
9aec045fce test, travis: integrate gosimple and unused 2016-04-07 23:16:33 -07:00
Xiang Li
1b41ee9c99 raft: fix issues reported by golint 2016-04-07 22:14:56 -07:00
Xiang Li
49f9b5470e Merge pull request #5009 from xiang90/sp
*: fix misspell
2016-04-07 22:09:41 -07:00
Xiang Li
9c7fb9c360 *: fix misspell 2016-04-07 21:57:06 -07:00
Xiang Li
71a492e59e Merge pull request #5005 from xiang90/clu_storage
membership: update attr in membership pkg
2016-04-07 21:40:32 -07:00
Xiang Li
b13b77f362 membership: update attr in membership pkg 2016-04-07 21:25:32 -07:00
Anthony Romano
2fe3e1e850 Merge pull request #5007 from heyitsanthony/hush-caps
v2http: only report capabilities on update
2016-04-07 20:31:37 -07:00
Anthony Romano
2b7ad35fa0 v2http: only report capabilities on update 2016-04-07 20:14:30 -07:00
Anthony Romano
1f5794c117 Merge pull request #4997 from heyitsanthony/fix-race-consistent
etcdserver: fix race on consistent index
2016-04-07 20:12:22 -07:00
Anthony Romano
4d2d2cabb9 etcdserver: fix race on consistent index 2016-04-07 19:53:08 -07:00
Gyu-Ho Lee
004ff3d4f0 Merge pull request #5006 from gyuho/watch_type
clientv3/integration: use clientv3.Event type
2016-04-07 19:48:18 -07:00
Gyu-Ho Lee
a9f1d5dfa6 clientv3/integration: use clientv3 event types
Fix https://github.com/coreos/etcd/issues/5001.
2016-04-07 19:29:32 -07:00
Gyu-Ho Lee
8b320e7c55 Merge pull request #4999 from gyuho/test
*: log, expect by capability check
2016-04-07 19:06:15 -07:00
Xiang Li
1c12b66e35 Merge pull request #5000 from xiang90/clu_storage
membership: save/update/delete member when backend is provided
2016-04-07 18:00:11 -07:00
Gyu-Ho Lee
868a3e279d Merge pull request #5002 from gyuho/agent_test
etcd-agent: fix etcd agent tests, remove unused listener
2016-04-07 17:19:44 -07:00
Gyu-Ho Lee
d78345244b *: log, expect by capability check 2016-04-07 17:18:51 -07:00
Gyu-Ho Lee
139f23fd13 etcd-agent: fix etcd agent tests, remove unused listener 2016-04-07 17:04:24 -07:00
Xiang Li
29623cccb2 membership: save/update/delete member when backend is provided 2016-04-07 16:34:43 -07:00
Anthony Romano
c91c7ca3bf Merge pull request #4961 from heyitsanthony/rename-lease-create
*: rename lease Create to Grant
2016-04-07 14:51:22 -07:00
Xiang Li
f31105bc08 Merge pull request #4994 from xiang90/clu
etcdserver: move membership related code to membership pkg
2016-04-07 14:39:18 -07:00
Xiang Li
bf2289ae00 etcdserver: move membership related code to membership pkg 2016-04-07 14:21:37 -07:00
Gyu-Ho Lee
5d4ee7ac5f Merge pull request #4995 from gyuho/proxy-clean
proxy: simplify channel receive, add missing function call
2016-04-07 12:32:24 -07:00
Anthony Romano
dc17eaace7 *: rename Lease Create to Grant
Creating a lease through the client API interface union looked like
"c.Create(...)"-- the method name wasn't very descriptive.
2016-04-07 12:28:14 -07:00
Gyu-Ho Lee
6abbdcdc06 proxy: simplify channel receive, add missing function call 2016-04-07 12:24:17 -07:00
Gyu-Ho Lee
ee4ff1e448 Merge pull request #4976 from gyuho/lease_testing
e2e: lease tests, fix minor format string
2016-04-07 11:25:51 -07:00
Gyu-Ho Lee
84bf6e7462 e2e: lease tests, fix minor format string 2016-04-07 11:18:49 -07:00
Gyu-Ho Lee
a38617d93a Merge pull request #4992 from gyuho/e2e_clean
e2e: clean up, return all lines in error
2016-04-07 10:59:08 -07:00
Gyu-Ho Lee
2779341250 e2e: clean up, return all lines in error
1. change file names
2. now, if a sub-command errors, the test receives all lines from stdout and stderr.

Expected output:

```
read /dev/ptmx: input/output error (expected key2, got ["key1\r\n" "val1\r\n" ""])
```

3. change how we check the gRPC timeout (only bypass the timeout error when the given timeout is 0)
2016-04-07 10:41:56 -07:00
Anthony Romano
ac232ac9a7 scripts: updatedep.sh to update vendored dependencies
Running godep in the vendored cmd directory will try to pull etcd in
as a dependency. As a fix, this script safely vendors into cmd.
2016-04-07 10:28:33 -07:00
Xiang Li
2e5ee26300 Merge pull request #4987 from hongchaodeng/ev
expose APIs to recognize event type
2016-04-07 10:18:25 -07:00
Xiang Li
21eda79451 Merge pull request #4991 from endocode/kayrus/faq_doc
docs: fixed markdown formatting in faq.md
2016-04-07 09:59:35 -07:00
kayrus
5c782a2086 docs: fixed markdown formatting in faq.md 2016-04-07 18:51:33 +02:00
Hongchao Deng
aa11dafaf8 clientv3: expose event type in user API
- add another layer of abstraction in clientv3 so users are not exposed to the internal storagepb types
- provide commonly used routines IsCreate(), IsModify() on event
2016-04-07 09:47:04 -07:00
Xiang Li
a5f341e886 Merge pull request #4989 from xiang90/clu
*: move Cluster interface to api
2016-04-07 08:33:52 -07:00
Xiang Li
030865abe3 *: move Cluster interface to api 2016-04-07 08:05:47 -07:00
Gyu-Ho Lee
b137df77f1 Merge pull request #4985 from gyuho/unused
*: clean up unused vars, functions
2016-04-06 21:49:58 -07:00
Gyu-Ho Lee
6e6d64fb9b *: clean up unused vars, functions
With help from https://github.com/dominikh/go-unused.
IsNetTimeoutError seems useful, so moved to pkg/netutil.
2016-04-06 21:33:55 -07:00
Gyu-Ho Lee
79a09e6857 Merge pull request #4984 from gyuho/watch-range
clientv3/integration: fix watch range test typo
2016-04-06 21:32:30 -07:00
Gyu-Ho Lee
e72591b4a2 clientv3/integration: fix watch range test typo 2016-04-06 21:12:07 -07:00
Anthony Romano
7408bc2504 Merge pull request #4948 from heyitsanthony/update-grpc
vendor: update grpc
2016-04-06 17:55:53 -07:00
Gyu-Ho Lee
82e58e602d Merge pull request #4983 from gyuho/expect_line
pkg/expect: ExpectFunc, LineCount
2016-04-06 16:10:02 -07:00
Gyu-Ho Lee
679e5e379b pkg/expect: ExpectFunc, LineCount
ExpectFunc makes expect more extensible. LineCount makes it possible to check that a command produced no output.
2016-04-06 15:56:00 -07:00
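For context, a minimal sketch (not the actual pkg/expect API; the type and field names are assumed) of how an ExpectFunc-style matcher and LineCount could sit on top of a buffered reader:

```go
package expect

import "bufio"

// process is a hypothetical stand-in for the spawned process handle.
type process struct {
	reader *bufio.Reader
	lines  int
}

// ExpectFunc reads lines until f accepts one, returning the matching line.
func (p *process) ExpectFunc(f func(string) bool) (string, error) {
	for {
		l, err := p.reader.ReadString('\n')
		if l != "" {
			p.lines++
		}
		if f(l) {
			return l, nil
		}
		if err != nil {
			return "", err // typically io.EOF once the process exits
		}
	}
}

// LineCount reports how many lines have been consumed so far, which lets a
// test assert that a command produced no output at all.
func (p *process) LineCount() int { return p.lines }
```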
Xiang Li
62990fb5fa Merge pull request #4970 from tamird/fix-raft-past-election
raft: correct regression in `pastElectionTimeout`
2016-04-06 08:03:38 -07:00
Tamir Duberstein
68db18667a raft: correct doc comment 2016-04-06 08:43:42 -04:00
Tamir Duberstein
5250784b09 raft: use rand.Intn instead of rand.Int and mod
This provides a better random distribution and is easier to read.
2016-04-06 08:43:42 -04:00
Anthony Romano
6b0eb9c3c0 godeps: update grpc dependency 2016-04-06 01:30:06 -07:00
Anthony Romano
34375ef851 Merge pull request #4950 from heyitsanthony/revendor
vendor: only vendor on emitted binaries
2016-04-05 21:36:51 -07:00
Anthony Romano
b1d41016b2 vendor: only vendor on emitted binaries
Moves the vendor/ directory to cmd/vendor. Vendored binaries are built
from cmd/, which is backed by symlinks pointing back to repo root.
2016-04-05 21:01:16 -07:00
Gyu-Ho Lee
b9e933b850 Merge pull request #4971 from gyuho/e2e_more
e2e: clean up to test tables, endpoint-health test
2016-04-05 14:34:35 -07:00
Gyu-Ho Lee
e3599e4145 e2e: clean up to test tables, endpoint-health test 2016-04-05 13:33:37 -07:00
Gyu-Ho Lee
01c303113d Merge pull request #4964 from gyuho/get_sort_e2e
e2e: get with sort order, target
2016-04-04 23:22:04 -07:00
Gyu-Ho Lee
3e39f36b34 e2e: get with sort order, target 2016-04-04 23:10:03 -07:00
Xiang Li
c3bca3739f Merge pull request #4926 from mitake/auth-role-add
*: support adding role in auth v3
2016-04-04 18:44:16 -07:00
Xiang Li
21096bf27f Merge pull request #4963 from xiang90/ht
*: mv etcdhttp into api pkg
2016-04-04 18:40:29 -07:00
Xiang Li
8662aaada4 Merge pull request #4958 from mitake/progrep-race
etcdserver, clientv3: let progressReportIntervalMilliseconds be private
2016-04-04 18:04:57 -07:00
Hitoshi Mitake
2b17a3919c *: support adding role in auth v3 2016-04-05 09:28:17 +09:00
Hitoshi Mitake
88306c9fa7 etcdserver, clientv3: let progressReportIntervalMilliseconds be private
progressReportIntervalMilliseconds (formerly ProgressReportIntervalMilliseconds) is accessed by multiple goroutines and is reported as a data race.

To avoid this report, this commit wraps the variable with accessor functions that use atomic operations, so the race is no longer reported.
2016-04-05 09:12:17 +09:00
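A minimal sketch of the accessor pattern this commit describes, assuming the variable is an int32 in the v3rpc package; the exact names are illustrative:

```go
package v3rpc

import (
	"sync/atomic"
	"time"
)

// progressReportIntervalMilliseconds is read by the watch server and shortened
// by tests, so it is only ever touched through atomic operations.
var progressReportIntervalMilliseconds = int32(10 * 60 * 1000) // 10 minutes

// GetProgressReportInterval reads the interval without racing with writers.
func GetProgressReportInterval() time.Duration {
	return time.Duration(atomic.LoadInt32(&progressReportIntervalMilliseconds)) * time.Millisecond
}

// SetProgressReportInterval is used by tests to shrink the interval.
func SetProgressReportInterval(newTimeout time.Duration) {
	atomic.StoreInt32(&progressReportIntervalMilliseconds, int32(newTimeout/time.Millisecond))
}
```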
Xiang Li
2c50eb240e *: mv etcdhttp into api pkg 2016-04-04 16:31:35 -07:00
Gyu-Ho Lee
bfbe0fac8c Merge pull request #4951 from gyuho/watch_prefix
e2e: watch by prefix
2016-04-04 15:11:32 -07:00
Gyu-Ho Lee
9de5b8db80 e2e: watch by prefix 2016-04-04 14:52:54 -07:00
Anthony Romano
b3247356c1 Merge pull request #4956 from heyitsanthony/txn-serialize
etcdserver: serializable transactions
2016-04-04 09:51:09 -07:00
Gyu-Ho Lee
98504fe863 Merge pull request #4959 from gyuho/ctl_doc
etcdctl: READMEv3 doc about prefix
2016-04-04 08:28:41 -07:00
Gyu-Ho Lee
1543e7bd95 etcdctl: READMEv3 doc about prefix 2016-04-04 07:00:49 -07:00
Anthony Romano
fab3c8e705 etcdserver: serializable transactions
Support case where txn doesn't have to go through quorum.
2016-04-04 04:21:42 -07:00
Anthony Romano
46e877b8bb Merge pull request #4955 from mitake/e2e-test
e2e: import fmt in etcdctlv3_test.go
2016-04-04 01:37:21 -07:00
Hitoshi Mitake
4ff81678ac e2e: import fmt in etcdctlv3_test.go 2016-04-04 17:00:33 +09:00
Xiang Li
b6ac21374e Merge pull request #4952 from ajityagaty/snap_db_file_fix
snap: Do not complain about db file.
2016-04-03 17:54:03 -07:00
Ajit Yagaty
c12f263577 snap: Do not complain about db file.
Currently the snapshotter throws a warning if a file without the
.snap suffix is found. Fix it to allow known files to exist in
the snap folder.
2016-04-03 17:28:04 -07:00
Gyu-Ho Lee
e8a4ed01e2 Merge pull request #4949 from gyuho/delete
*: add del by prefix with e2e tests
2016-04-03 12:09:16 -07:00
Anthony Romano
3abd137dc5 Merge pull request #4945 from heyitsanthony/fix-exit-status
e2e, pkg/expect: distinguish between Stop and Close
2016-04-03 12:02:59 -07:00
Anthony Romano
dc420d660e e2e, pkg/expect: distinguish between Stop and Close
Fixes #4928
2016-04-03 11:45:02 -07:00
Gyu-Ho Lee
9afae9e2c1 *: add del by prefix with e2e tests 2016-04-03 11:41:49 -07:00
Gyu-Ho Lee
bb69dd324e Merge pull request #4939 from gyuho/e2e_txn_version
e2e: etcdctlv3 version, txn basic tests
2016-04-03 11:09:57 -07:00
Xiang Li
73b0d398e4 Merge pull request #4946 from xiang90/b
vendor: update boltdb to 1.2.0
2016-04-03 10:59:51 -07:00
Gyu-Ho Lee
f4eaa3f8fb pkg/expect: replace SendLine with Send method 2016-04-03 10:57:35 -07:00
Gyu-Ho Lee
c280871714 e2e: etcdctlv3 version, txn basic tests 2016-04-03 10:57:31 -07:00
Xiang Li
37c1edc952 vendor: update boltdb to 1.2.0 2016-04-03 10:47:07 -07:00
Xiang Li
19136afc2b Merge pull request #4798 from mqliang/memberStatus
etcdctlv3: initial implementation of 'etcdctl member status' command
2016-04-03 08:48:23 -07:00
mqliang
d80af00785 etcdctlv3: implement the 'etcdctl status' command 2016-04-03 13:55:58 +08:00
mqliang
f3ca17ea03 etcdctlv3: implement the client side functionality 2016-04-03 13:46:34 +08:00
mqliang
1d5d2494ed etcdctlv3: implement status rpc in server side 2016-04-03 13:46:01 +08:00
mqliang
bbca61252f etcdctlv3: update auto-generated files 2016-04-03 13:45:17 +08:00
mqliang
3c62bfb7a3 etcdctlv3: add status rpc in protobuf file 2016-04-03 13:44:45 +08:00
Gyu-Ho Lee
6770b9c67a Merge pull request #4944 from gyuho/delete_num
etcdctl: print number of deleted keys
2016-04-02 21:13:46 -07:00
Gyu-Ho Lee
e8877ab180 etcdctl: print number of deleted keys 2016-04-02 20:54:37 -07:00
Gyu-Ho Lee
584d90cd5d Merge pull request #4912 from gyuho/defrag
functional-tester: defrag every 500 round
2016-04-02 18:58:41 -07:00
Gyu-Ho Lee
b866337f25 functional-tester: defrag every 500 round
Fix https://github.com/coreos/etcd/issues/4665.
2016-04-02 18:51:26 -07:00
Xiang Li
d2ce6836af Merge pull request #4942 from xiang90/def
backend: reset count in defrag
2016-04-02 18:43:03 -07:00
Gyu-Ho Lee
c09f23c46d *: clean up bool comparison 2016-04-02 18:27:54 -07:00
Xiang Li
2b54b73b90 backend: reset count in defrag 2016-04-02 17:25:05 -07:00
Gyu-Ho Lee
b0cc0e443c *: clean up if, bool comparison 2016-04-02 12:55:11 -07:00
Gyu-Ho Lee
dc0061e4db e2e: add Get tests 2016-04-01 22:45:27 -07:00
Anthony Romano
ff01a4de65 Merge pull request #4936 from heyitsanthony/compact-barrier-restore
etcdserver, storage: don't ack physical compaction on error or snap restore
2016-04-01 20:18:12 -07:00
Anthony Romano
6f707b857a etcdserver, storage: don't ack physical compaction on error or snap restore
Snapshot recovery will reset the FIFO; reschedule the physical acknowledgment
instead of acknowledging on scheduler teardown.
2016-04-01 16:32:05 -07:00
Gyu-Ho Lee
eea56d037e etcdserver: fix govet error 2016-04-01 16:01:47 -07:00
Xiang Li
3083b6d11e Merge pull request #4933 from xiang90/m
MAINTAINERS: update maintainers list
2016-04-01 15:34:57 -07:00
Anthony Romano
623c7b4df4 Merge pull request #4930 from heyitsanthony/fix-wal-corrupt
wal: fix tail corruption
2016-04-01 15:23:52 -07:00
Xiang Li
c0e614b0bd MAINTAINERS: update maintainers list 2016-04-01 15:12:08 -07:00
Anthony Romano
bfe3a3d08e wal: fix tail corruption
On ReadAll, WAL seeks to the end of the last record in the tail. If the tail did not
end with preallocated space, the decoder would report 0 as the last offset and begin
writing at offset 0 of the tail.

Fixes #4903
2016-04-01 15:05:52 -07:00
Xiang Li
e1b561cb7c Merge pull request #4929 from xiang90/rand
raft: lower split vote rate
2016-04-01 12:35:59 -07:00
Xiang Li
5d431b4782 raft: lower split vote rate 2016-04-01 12:11:03 -07:00
Xiang Li
bf6d905a5a Merge pull request #4923 from xiang90/conf
clientv3: support read conf from file
2016-04-01 10:09:51 -07:00
Xiang Li
f05f7b475e vendor: add yaml dependencies 2016-04-01 09:36:11 -07:00
Xiang Li
802de5f9f8 clientv3: support read conf from file 2016-04-01 09:36:11 -07:00
Anthony Romano
307cb5167c Merge pull request #4925 from heyitsanthony/wal-dump-lock
etcd-dump-logs: don't try to acquire wal file locks
2016-03-31 22:24:54 -07:00
Anthony Romano
7fffd6ffd2 etcd-dump-logs: don't try to acquire wal file locks
can now dump logs from a running etcd instance
2016-03-31 21:51:20 -07:00
Gyu-Ho Lee
c43910f835 Merge pull request #4910 from gyuho/compact_test
etcd-tester: no error for compact double-send
2016-03-31 21:43:26 -07:00
Anthony Romano
bdaba136a9 Merge pull request #4915 from heyitsanthony/hash-barrier
etcdserver: force backend commit before acking physical compaction
2016-03-31 21:36:57 -07:00
Gyu-Ho Lee
f9b90e13ac etcd-tester: no error for compact double-send
When a compactKV request was halted before final acknowledgement, the tester used to just continue on the next endpoint. But compactKV can be requested twice, with the first request already replicated and applied by the time the second one is applied (returning a compact-revision error). This change skips that case by parsing the error message.
2016-03-31 21:29:02 -07:00
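A hedged sketch of the skip described above: a duplicate compaction that reports the revision as already compacted is treated as success. The error-string check and helper name are assumptions for illustration:

```go
package tester

import (
	"strings"
	"time"

	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
	"golang.org/x/net/context"
)

// compactKV sends Compact to every endpoint. If an earlier request already
// compacted the keyspace, a later duplicate returns a "required revision has
// been compacted" error, which is treated as success instead of a failure.
func compactKV(clients []pb.KVClient, rev int64) error {
	for _, c := range clients {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		_, err := c.Compact(ctx, &pb.CompactionRequest{Revision: rev})
		cancel()
		if err != nil && !strings.Contains(err.Error(), "required revision has been compacted") {
			return err
		}
	}
	return nil
}
```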
Anthony Romano
81de5648d9 etcdserver: force backend commit before acking physical compaction 2016-03-31 21:25:40 -07:00
Gyu-Ho Lee
2f785015a5 Merge pull request #4922 from gyuho/ctl_test
e2e: basic v3 watch test
2016-03-31 18:18:29 -07:00
Gyu-Ho Lee
b98f67095e e2e: add basic v3 watch test 2016-03-31 18:04:14 -07:00
Gyu-Ho Lee
d898c68f2c pkg/expect: add SendLine for interactive mode 2016-03-31 15:34:30 -07:00
Xiang Li
1d698f093f Merge pull request #4921 from xiang90/tls
*: move basic tls util funcs to tlsutil pkg
2016-03-31 09:59:25 -07:00
Xiang Li
eb3919e8cf *: move basic tls util funcs to tlsutil pkg 2016-03-31 09:45:45 -07:00
Xiang Li
de801b500b Merge pull request #4920 from mitake/auth-user-password
*: support changing password in v3 auth
2016-03-30 23:45:50 -07:00
Hitoshi Mitake
73166b41e9 *: support changing password in v3 auth
This commit adds a functionality for updating password of existing
users.
2016-03-31 15:28:15 +09:00
Gyu-Ho Lee
f328c75ba7 Merge pull request #4919 from gyuho/expects
*: ctl v3 tests with multi expects
2016-03-30 22:23:21 -07:00
Gyu-Ho Lee
a6c6bbd81c e2e: ctl tests with multi expects 2016-03-30 22:09:23 -07:00
Xiang Li
324afd7fde Merge pull request #4918 from mitake/auth-user-messages
etcdctl: print messages for successful auth operations
2016-03-30 22:03:14 -07:00
Hitoshi Mitake
2ad9b5692f etcdctl: print messages for successful auth operations
This commit lets etcdctl v3 follow the manner of etcdctl v2.
2016-03-31 13:56:01 +09:00
Xiang Li
59bb65182a Merge pull request #4917 from mitake/auth-user-delete
*: support deleting user in v3 auth
2016-03-30 21:36:17 -07:00
Hitoshi Mitake
d8888ded12 *: support deleting user in v3 auth
This commit adds a functionality of user deletion. It can be invoked
with the new user delete command.

Example usage:
$ ETCDCTL_API=3 etcdctl user delete usr1
2016-03-31 13:18:51 +09:00
Anthony Romano
93c3f920ca Merge pull request #4909 from heyitsanthony/pkg-expect
e2e: replace gexpect with simpler expect
2016-03-30 15:36:41 -07:00
Anthony Romano
eb3351533a godep: remove gexpect 2016-03-30 15:14:24 -07:00
Anthony Romano
5022dce31a e2e: use pkg/expect 2016-03-30 15:14:24 -07:00
Anthony Romano
5707f6b997 pkg/expect: add expect package 2016-03-30 15:14:24 -07:00
Anthony Romano
b539d3a411 test: check formatting for all relevant packages in pkg/ 2016-03-30 15:14:24 -07:00
Gyu-Ho Lee
6cf198d1b1 Merge pull request #4911 from heyitsanthony/physical-already
etcdserver, storage: wait for physical compaction if already compacted
2016-03-30 14:27:21 -07:00
Anthony Romano
7b37bd332c etcdserver, storage: wait for physical compaction if already compacted 2016-03-30 13:59:52 -07:00
Anthony Romano
7ce5c2b9ff Merge pull request #4902 from heyitsanthony/alarm-ctl
etcdctl: alarm command
2016-03-30 13:55:29 -07:00
Xiang Li
14f146b9f7 Merge pull request #4908 from xiang90/c
*: simplify consistent index handling
2016-03-30 13:53:21 -07:00
Xiang Li
eddc741b5e *: simplify consistent index handling 2016-03-30 13:38:28 -07:00
Anthony Romano
2aca3252e8 etcdctl: alarm command 2016-03-30 13:33:52 -07:00
Anthony Romano
c91b2d098d clientv3: AlarmList and AlarmDisarm 2016-03-30 13:33:52 -07:00
Anthony Romano
dd5b73cfee alarms: support Get of all alarms 2016-03-30 13:33:52 -07:00
Anthony Romano
cd02cef5e9 etcdserver: only warn on new and disarmed alarms
Listing alarms was generating warning output.
2016-03-30 13:33:52 -07:00
Xiang Li
0f64e01f6b Merge pull request #4864 from cdancy/patch-1
Update libraries-and-tools.md
2016-03-30 13:02:09 -07:00
Christopher Dancy
4e2a4b17b5 Documentation: add etcd-rest to libraries-and-tools.md
Add link to the etcd-rest client under the 'Java libraries' sub-section.

Fixes #4906
2016-03-30 15:56:20 -04:00
Anthony Romano
a5172974da Merge pull request #4863 from heyitsanthony/ft-check-compact
etcd-tester: check compaction revision
2016-03-30 10:08:05 -07:00
Gyu-Ho Lee
1eb375d296 Merge pull request #4880 from gyuho/drain
*: drain http.Response.Body before closing
2016-03-30 10:02:52 -07:00
Gyu-Ho Lee
1bee31a3bb Merge pull request #4905 from gyuho/vendor_doc
*: document client package vendoring guide
2016-03-30 10:02:32 -07:00
Anthony Romano
4c65f3fe7a etcd-tester: check compaction revision
Faster than waiting 30 seconds between rounds.
2016-03-30 09:45:30 -07:00
Anthony Romano
4b35cb9462 etcdserver, storage: optionally wait for Compaction completion in RPC 2016-03-30 09:45:30 -07:00
Gyu-Ho Lee
a42d1dc1fe *: drain http.Response.Body before closing 2016-03-30 09:35:47 -07:00
Gyu-Ho Lee
b8d3b15206 *: document client package vendoring guide 2016-03-30 09:34:41 -07:00
Xiang Li
12d8d33a1c Merge pull request #4879 from mitake/auth-user-error
etcdserver: return internal error in a case of not auth specific errors
2016-03-30 08:04:41 -07:00
Hitoshi Mitake
8ee8d755bb etcdserver: return internal error in a case of not auth specific errors 2016-03-30 23:44:22 +09:00
Hitoshi Mitake
443c677357 etcdserver: extract togRPCError() to a separated file
It is used from multiple files in v3rpc package.
2016-03-30 22:53:20 +09:00
Anthony Romano
96ee00a322 etcdserverpb: make alarm memberId uint64
To be consistent with Cluster API
2016-03-29 20:15:39 -07:00
Anthony Romano
2deed74494 Merge pull request #4901 from heyitsanthony/config-dbsize
etcdserver: configurable backend size quota
2016-03-29 18:55:12 -07:00
Anthony Romano
9b2c963179 etcdserver: configurable backend size quota
Configurable with the flag --experimental-quota-backend-bytes and
through ServerConfig.QuotaBackendBytes.

Fixes #4894
2016-03-29 18:39:25 -07:00
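A hedged sketch of how a byte-size quota could gate writes before they are applied; the type and field names here are assumptions for illustration, not the actual etcdserver implementation:

```go
package quota

// backendQuota rejects a proposal when applying it could push the backend
// past the configured byte limit (e.g. --experimental-quota-backend-bytes).
type backendQuota struct {
	maxBackendBytes int64
	sizeFn          func() int64 // reports the current backend size in bytes
}

// Available reports whether a request costing reqBytes still fits under quota.
func (b *backendQuota) Available(reqBytes int64) bool {
	if b.maxBackendBytes <= 0 {
		return true // quota disabled
	}
	return b.sizeFn()+reqBytes <= b.maxBackendBytes
}
```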
Xiang Li
b0956d5dbf Merge pull request #4891 from mitake/auth-prefix
*: add Auth prefix to auth related requests and responses
2016-03-29 17:24:12 -07:00
Gyu-Ho Lee
d00811428d Merge pull request #4898 from gyuho/context_err
client: return context error
2016-03-29 17:22:40 -07:00
Gyu-Ho Lee
8d0d10cce5 client: return original ctx error
Fix https://github.com/coreos/etcd/issues/3209.
2016-03-29 16:57:48 -07:00
Gyu-Ho Lee
00f222ecad Merge pull request #4892 from gyuho/help
etcdmain: add missing flag doc
2016-03-29 10:30:33 -07:00
Xiang Li
870b5c5ea7 Merge pull request #4219 from endocode/kayrus/username_environment
Handle ETCDCTL_USERNAME environment
2016-03-29 10:24:43 -07:00
kayrus
720502b25f etcdctl: Handle ETCDCTL_USERNAME environment 2016-03-29 19:06:31 +02:00
Gyu-Ho Lee
92f4aced25 etcdmain: add peer-auto-tls doc 2016-03-29 09:40:57 -07:00
Xiang Li
bb8619f4f7 Merge pull request #4895 from xiang90/client_doc
client: doc that client is thread-safe
2016-03-29 09:36:01 -07:00
Xiang Li
9d49d35090 client: doc that client is thread-safe 2016-03-29 09:28:53 -07:00
Anthony Romano
d533c14881 Merge pull request #4876 from heyitsanthony/integration-races
*: fix races from clientv3/integration tests
2016-03-29 09:10:53 -07:00
Xiang Li
75babb82b6 Merge pull request #4888 from xiang90/fix_raft
rafthttp: do not block on proposal
2016-03-29 07:37:18 -07:00
Anthony Romano
161bc5e19c clientv3: fix race when setting grpc Logger
grpc only permits SetLogger on init()
2016-03-28 23:30:03 -07:00
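A hedged sketch of the pattern gRPC requires, installing the logger once at package init instead of per client; the discard logger and prefix are illustrative choices, not necessarily the actual fix:

```go
package clientv3

import (
	"io/ioutil"
	"log"

	"google.golang.org/grpc/grpclog"
)

func init() {
	// grpclog.SetLogger may only be called safely before gRPC is used, so
	// install the client's logger once at package init instead of in New().
	grpclog.SetLogger(log.New(ioutil.Discard, "clientv3", log.LstdFlags))
}
```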
Hitoshi Mitake
987568c65c *: add Auth prefix to auth related requests and responses 2016-03-29 14:32:19 +09:00
Anthony Romano
1637b37132 Merge pull request #4890 from heyitsanthony/fix-4889
clientv3/integration: get quorum before watching in TestKVCompact
2016-03-28 22:30:58 -07:00
Anthony Romano
096abb3f37 clientv3/integration: get quorum before watching in TestKVCompact
Fixes #4889
2016-03-28 22:18:10 -07:00
Xiang Li
660eef8a95 Merge pull request #4872 from ajityagaty/cli_opts_aliases
etcdctl: Add aliases for command flags.
2016-03-28 22:04:00 -07:00
Xiang Li
0c137b344b rafthttp: do not block on proposal 2016-03-28 21:40:12 -07:00
Ajit Yagaty
2e3856740d etcdctl: Add aliases for command flags.
Add aliases to the flags that are supplied to the sub commands.
2016-03-28 20:57:34 -07:00
Anthony Romano
c53380cd2a Merge pull request #4886 from heyitsanthony/move-hash
v3rpc: move Hash RPC to Maintenance service
2016-03-28 19:35:03 -07:00
Anthony Romano
3fbacf4be2 v3rpc: move Hash RPC to Maintenance service 2016-03-28 17:15:58 -07:00
Xiang Li
495bef8b4c Merge pull request #4885 from xiang90/log_doc
doc/dev: add logging doc
2016-03-28 17:00:41 -07:00
Anthony Romano
4bdfc0a46d clientv3: fix race on writing watch channel over return channel
Found in TestElectionFailover
2016-03-28 16:08:18 -07:00
Anthony Romano
5ee85bea7c v3rpc: fix race on watch progress map
Found in TestElectionWait
2016-03-28 16:08:18 -07:00
Anthony Romano
813afc3d11 rafthttp: fix race between AddRemote and Send 2016-03-28 16:08:18 -07:00
Anthony Romano
91dc6b29a6 clientv3/integration: fix race when setting progress report interval 2016-03-28 16:08:18 -07:00
Anthony Romano
2c83362e63 clientv3: fix race in KV reconnection logic 2016-03-28 16:08:18 -07:00
Anthony Romano
e129223dbe clientv3: fix race in watcher resume 2016-03-28 16:08:18 -07:00
Anthony Romano
47db0a2f2e test: add race detection to clientv3 integration tests 2016-03-28 16:08:18 -07:00
Xiang Li
ffc7488af2 doc/dev: add logging doc 2016-03-28 15:34:51 -07:00
Anthony Romano
6e3a0948e4 Merge pull request #4868 from heyitsanthony/api-quota
etcdserver: storage quotas
2016-03-28 15:15:57 -07:00
Anthony Romano
a403a94d7b etcdserver: cap new keys on space alarm 2016-03-28 14:56:26 -07:00
Anthony Romano
9e7f47c490 etcdserver: Alarm RPC
Alarms are events that nodes can use to relay health information to
the rest of the cluster. A node may Activate an alarm and that alarm
will stay set until Deactivated.
2016-03-28 14:56:26 -07:00
Anthony Romano
ae077a2183 backend: add UnsafeForEach to BatchTx
Useful for efficiently iterating over an entire bucket.
2016-03-28 14:56:26 -07:00
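A hedged sketch of an UnsafeForEach over a boltdb bucket from inside a batch transaction; the receiver type is an assumption, while bolt's Bucket.ForEach signature is the real API:

```go
package backend

import "github.com/boltdb/bolt"

// batchTx is an illustrative stand-in for the backend's batch transaction.
type batchTx struct {
	tx *bolt.Tx
}

// UnsafeForEach walks every key/value pair in bucketName, stopping on the
// first error returned by visitor. The caller must hold the tx lock.
func (t *batchTx) UnsafeForEach(bucketName []byte, visitor func(k, v []byte) error) error {
	b := t.tx.Bucket(bucketName)
	if b == nil {
		return nil // nothing to iterate
	}
	return b.ForEach(visitor)
}
```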
Anthony Romano
9c8253c543 etcdserver, v3rpc: space quotas 2016-03-28 14:56:26 -07:00
Anthony Romano
fc346041e5 Merge pull request #4883 from heyitsanthony/fix-4874
integration: don't call rand.Intn in TestSTMConflict on 0
2016-03-28 13:36:19 -07:00
Anthony Romano
94e77cfa5d etcdserver: move v3 raft apply functions to interface 2016-03-28 13:16:21 -07:00
Anthony Romano
384c3ec907 integration: don't call rand.Intn in TestSTMConflict on 0
Fixes #4874
2016-03-28 13:06:07 -07:00
Xiang Li
2b83d9c2e5 Merge pull request #4882 from xiang90/ctl_combine
*: combine etcdctl and etcdctlv3
2016-03-28 11:42:25 -07:00
Xiang Li
87d9f06a45 *: combine etcdctl and etcdctlv3 2016-03-28 11:28:05 -07:00
Gyu-Ho Lee
83ada7232a Merge pull request #4871 from gyuho/windows_file_lock_20160326
pkg/fileutil: lock file on Windows
2016-03-27 12:38:38 -07:00
Xiang Li
fa98d8d337 Merge pull request #4845 from mitake/auth-user
*: support adding user in v3 auth
2016-03-27 07:51:10 -07:00
Hitoshi Mitake
8874545a1e *: support adding user in v3 auth
This commit adds a new subcommand "user add" to etcdctlv3. With the command, users can create a user for authentication.

Example of usage:
$ etcdctlv3 user add user1
Password of user1:
Type password of user1 again for confirmation:
2016-03-27 18:11:42 +09:00
Gyu-Ho Lee
3f1a1c3192 pkg/fileutil: lock file on Windows 2016-03-27 00:35:44 -07:00
Gyu-Ho Lee
68b38e7ade Merge pull request #4875 from gyuho/clientv3_disable_grpclog
clientv3: disable client side grpc log
2016-03-26 22:57:37 -07:00
Gyu-Ho Lee
29fccb3221 clientv3: configurable grpc logger 2016-03-26 22:38:53 -07:00
Xiang Li
b8fc61bcec Merge pull request #4869 from ajityagaty/insecure_skip_tls_verify
etcdctlv3: Add insecure-skip-tls-verify flag.
2016-03-26 12:12:55 -07:00
Xiang Li
9c3242c6df Merge pull request #4862 from mitake/procfiles
Procfile, V3DemoProcfile: add default endpoint of v3 to Procfile remo…
2016-03-26 08:21:01 -07:00
Hitoshi Mitake
7418c1af24 V3DemoProcfile: remove the obsolete flag
The flag --experimental-v3demo has already been removed, so V3DemoProcfile cannot be used. This commit removes it.
2016-03-26 08:15:58 -07:00
Ajit Yagaty
4e39db4158 etcdctlv3: Add insecure-skip-tls-verify flag.
The user can specify the insecure-skip-tls-verify flag to skip the server certificate verification step.
2016-03-25 19:28:41 -07:00
Anthony Romano
877030ea9d pkg/fileutil: fix linux file locks over NFS
Fixes #4853
2016-03-25 16:28:29 -07:00
Xiang Li
36db6cd982 Merge pull request #4867 from xiang90/ctl_env
etcdctlv3: accept env for global configuration flags
2016-03-25 15:32:06 -07:00
Xiang Li
a120ca16c0 etcdctlv3: accept env for global configuration flags 2016-03-25 14:23:32 -07:00
Xiang Li
92a73e727b Merge pull request #4857 from xiang90/warn_tls
etcdmain: warn on contradictory TLS settings
2016-03-25 09:38:11 -07:00
Xiang Li
5449edc025 Merge pull request #4817 from mqliang/time-out
etcdctlv3: add timeout support
2016-03-25 07:30:48 -07:00
mqliang
f165f8b44e etcdctlv3: add timeout support
add timeout setting support for etcdctlv3
2016-03-25 16:24:49 +08:00
Anthony Romano
20a267dc6a Merge pull request #4860 from heyitsanthony/tester-sched
tools/functional-tester: --schedule-cases flag
2016-03-24 22:05:05 -07:00
Anthony Romano
4a17097d00 tools/functional-tester: --schedule-cases flag
Command line argument for specifying a schedule of test cases per round.
Default is run each test case once each round.
2016-03-24 19:43:23 -07:00
Xiang Li
05dc2dac70 Merge pull request #4859 from xiang90/ctl_secure
etcdctlv3: support secure connection without key/cert
2016-03-24 16:47:23 -07:00
Xiang Li
0865688c27 etcdctlv3: support secure connection without key/cert 2016-03-24 16:29:33 -07:00
Xiang Li
6285455f85 etcdmain: warn on contradictory TLS settings 2016-03-24 10:21:47 -07:00
Xiang Li
7d2545c72e Merge pull request #4856 from xiang90/fail_key_cert
etcdmain: etcd should fail to start when https is enabled but tls con…
2016-03-24 10:10:43 -07:00
Xiang Li
5ee3729738 etcdmain: etcd should fail to start when https is enabled but tls config is not given 2016-03-24 09:57:25 -07:00
Anthony Romano
d16bfa5e54 Merge pull request #4854 from mitake/genproto
scripts: update genproto.sh for vendor
2016-03-24 08:53:55 -07:00
Xiang Li
d0d3b32210 Merge pull request #4850 from xiang90/rm_demo
*: enable v3 by default
2016-03-23 23:48:29 -07:00
Hitoshi Mitake
0436223793 scripts: update genproto.sh for vendor
The current genproto.sh assumes Godep, so its output files have obsolete parts.
2016-03-24 14:30:46 +09:00
Xiang Li
70a9391378 *: enable v3 by default 2016-03-23 17:01:36 -07:00
Gyu-Ho Lee
2fec88ebfc Merge pull request #4851 from gyuho/fix_functional_tester
functional-tester: add GRPCURLs for cluster config
2016-03-23 16:47:33 -07:00
Gyu-Ho Lee
9fb60deb7c functional-tester: add GRPCURLs for cluster config
gRPC and the v2 client address share the same host and port, but gRPC does not work with a scheme specified. This fixes it by adding another member field for gRPC without the scheme, as we had before.
2016-03-23 16:28:05 -07:00
Xiang Li
333ac5789a Merge pull request #4831 from xiang90/tlx
*: http and https on the same port
2016-03-23 15:59:58 -07:00
Gyu-Ho Lee
e4ac8edb2f Merge pull request #4849 from gyuho/functional_test
functional-tester: set gRPC endpoint for stresser
2016-03-23 15:32:14 -07:00
Xiang Li
4d2227e5ab e2e: combine cfg.isClientTLS and cfg.isClientBoth 2016-03-23 15:30:58 -07:00
Gyu-Ho Lee
012143e703 functional-tester: set gRPC endpoint for stresser 2016-03-23 15:23:19 -07:00
Xiang Li
9d55420a00 e2e: add an e2e test for TLS/non-TLS on the same port 2016-03-23 13:43:47 -07:00
Anthony Romano
ca5dff6682 Merge pull request #4848 from heyitsanthony/rename-compare-created
clientv3: rename comparison from CreatedRevision to CreateRevision
2016-03-23 10:32:49 -07:00
Xiang Li
900a61b023 *: http and https on the same port 2016-03-23 10:28:38 -07:00
Anthony Romano
489779d905 clientv3: rename comparison from CreatedRevision to CreateRevision
To match protobuf naming
2016-03-23 09:50:46 -07:00
Xiang Li
88e738fcb6 Merge pull request #4844 from ajityagaty/polish_naming_conventions
clientv3: Renaming SortByCreatedRev to maintain consistency.
2016-03-23 09:27:34 -07:00
Xiang Li
0181725e55 Merge pull request #4846 from jonboulle/master
docs: "master election" -> "leader election"
2016-03-23 09:27:08 -07:00
Anthony Romano
6081a29c13 Merge pull request #4843 from heyitsanthony/go-vendor
*: migrate Godeps to vendor/
2016-03-23 06:01:26 -07:00
Jonathan Boulle
5f72a28157 docs: "master election" -> "leader election" 2016-03-23 12:23:01 +01:00
Anthony Romano
86a477c2f6 doc: update client README to use vendor/ 2016-03-22 18:02:10 -07:00
Anthony Romano
2f22ac662c travis: use GO15VENDOREXPERIMENT 2016-03-22 18:02:10 -07:00
Ajit Yagaty
2bb417bfff clientv3: Renaming SortByCreatedRev to maintain consistency.
Renamed SortByCreatedRev to SortByCreateRevision to be consistent
with the naming used for SortByModRevision.
2016-03-22 17:56:24 -07:00
Anthony Romano
45cf31650c test: ignore vendor/ directory on license check 2016-03-22 17:33:46 -07:00
Anthony Romano
fb3510b276 build: enable vendor experiment for go1.5 2016-03-22 17:33:46 -07:00
Anthony Romano
bd832e5b0a *: migrate Godeps to vendor/ 2016-03-22 17:10:28 -07:00
Gyu-Ho Lee
e9b9b228e7 Merge pull request #4842 from gyuho/serial
etcdctlv3: get command with consistency flag
2016-03-22 17:07:29 -07:00
Gyu-Ho Lee
a10662210a e2e: etcdctlv3 with serializable read 2016-03-22 16:52:33 -07:00
Gyu-Ho Lee
5686340d26 etcdctlv3: get command with consistency flag
As we do in benchmark tool.
2016-03-22 16:52:28 -07:00
Xiang Li
096a89117a Merge pull request #4840 from ajityagaty/polish_naming_conventions
clientv3: Fix inconsistent naming convention in v3 client.
2016-03-22 15:27:12 -07:00
Xiang Li
70e709c5f4 Merge pull request #4812 from xiang90/ping
etcdctlv3: implement endpoint-health command
2016-03-22 15:22:53 -07:00
Xiang Li
43221f0b7a etcdctlv3: implement endpoint-health command
endpoint-health checks the health of an endpoint.

It can generate 3 outputs:

1. cannot connect to the member through endpoint

2. connected to the member, but member failed to commit any proposals

3. connected to the member, and member committed a proposal
2016-03-22 15:09:50 -07:00
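A hedged sketch of the three-way check: dial the endpoint, then try to commit a trivial proposal and classify the outcome. The key name, timeouts, and use of Put are illustrative, not necessarily what the command actually does:

```go
package command

import (
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

// checkEndpointHealth prints one of the three outcomes described above.
func checkEndpointHealth(ep string) {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{ep}, DialTimeout: 5 * time.Second})
	if err != nil {
		fmt.Printf("%s is unhealthy: cannot connect (%v)\n", ep, err)
		return
	}
	defer cli.Close()

	// A Put must go through consensus, so success means the member can
	// commit proposals; an error means it is reachable but cannot commit.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	_, err = cli.Put(ctx, "health", "good")
	cancel()
	if err != nil {
		fmt.Printf("%s is connected but failed to commit proposals: %v\n", ep, err)
		return
	}
	fmt.Printf("%s is healthy: committed a proposal\n", ep)
}
```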
Ajit Yagaty
606889a002 clientv3: Fix inconsistent naming convention in v3 client.
In order to have a consistent naming for variable/function names
pertaining to ModifiedRevision, all occurrences have been renamed
to ModRevision.
2016-03-22 14:58:11 -07:00
Gyu-Ho Lee
499b893704 Merge pull request #4838 from gyuho/dial
etcdctlv3: add dial timeout flag
2016-03-22 13:26:19 -07:00
Gyu-Ho Lee
8396da3e83 etcdctlv3: add dial timeout flag
Fix https://github.com/coreos/etcd/issues/4836.
2016-03-22 13:15:26 -07:00
Gyu-Ho Lee
2b44df5440 Merge pull request #4833 from gyuho/govet
etcdmain: fix shadowed variables
2016-03-22 10:01:31 -07:00
Xiang Li
565eb61cd3 Merge pull request #4834 from gyuho/godep_pb
Godeps: semantic versioning cheggaaa/pb
2016-03-22 09:09:58 -07:00
Nick Owens
a054ae320b Merge pull request #4830 from mischief/proxy-env
pkg/transport: use ProxyFromEnvironment when constructing a transport
2016-03-22 01:14:03 -07:00
Xiang Li
afb1bc242b Merge pull request #4822 from mitake/auth-backend
auth, etcdserver: add a method for updating backend during apply snap…
2016-03-21 23:34:46 -07:00
Hitoshi Mitake
4e39f690f2 auth, etcdserver: add a method for recovering from backend during apply snapshot
This commit adds a new method Recovery() to auth.AuthStore for recovering auth state from the backend during apply snapshot. It follows the manner of the lessor.
2016-03-22 15:17:40 +09:00
Gyu-Ho Lee
bb9a7f5a7c Godeps: semantic versioning cheggaaa/pb
Fix https://github.com/coreos/etcd/issues/4832.
2016-03-21 22:06:16 -07:00
Gyu-Ho Lee
2364d71ea2 etcdmain: fix shadowed variables 2016-03-21 21:55:06 -07:00
Nick Owens
d80a546ed4 pkg/transport: use ProxyFromEnvironment when constructing a transport
this allows use of HTTP_PROXY/HTTPS_PROXY for etcdctl.
2016-03-21 21:02:42 -07:00
Gyu-Ho Lee
e73ac5bdd7 Merge pull request #4826 from philips/buildv3-by-default
build: build etcdctlv3 by default
2016-03-21 17:04:11 -07:00
Gyu-Ho Lee
0ac4eba60e Merge pull request #4829 from gyuho/server_closure
etcdmain: fix blocking m.Server closure
2016-03-21 16:50:13 -07:00
Gyu-Ho Lee
cdb7cfd74b etcdmain: fix blocking m.Server closure 2016-03-21 16:39:20 -07:00
Xiang Li
7e3fc182d5 Merge pull request #4828 from xiang90/cmux
*: gRPC + HTTP on the same port
2016-03-21 15:37:44 -07:00
Xiang Li
7c3432a79f Godep: add cmux dependency 2016-03-21 14:33:37 -07:00
Xiang Li
d3809abe42 *: gRPC + HTTP on the same port
We use cmux to do this since we want to do http+https on the same
port in the near future too.
2016-03-21 14:29:25 -07:00
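A minimal sketch of multiplexing gRPC and plain HTTP on one listener with cmux, as the commit describes; the import path and handler wiring are assumptions for illustration:

```go
package main

import (
	"net"
	"net/http"

	"github.com/cockroachdb/cmux"
	"google.golang.org/grpc"
)

func serve(l net.Listener, gs *grpc.Server, h http.Handler) error {
	m := cmux.New(l)
	// gRPC requests are HTTP/2 with content-type application/grpc.
	grpcL := m.Match(cmux.HTTP2HeaderField("content-type", "application/grpc"))
	// Everything else (v2 API, metrics, etc.) falls back to plain HTTP.
	httpL := m.Match(cmux.HTTP1Fast())

	go gs.Serve(grpcL)
	go http.Serve(httpL, h)
	return m.Serve() // blocks, demultiplexing connections onto the matchers
}
```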
Anthony Romano
adebd91114 Merge pull request #4785 from heyitsanthony/gce-fallocate
wal: extend WAL file to segment size on fallocate
2016-03-21 13:08:53 -07:00
Anthony Romano
3fed78ae7b Merge pull request #4484 from heyitsanthony/auto-tls
automatic peer TLS
2016-03-21 12:59:29 -07:00
Anthony Romano
0df732c052 wal: pre-create segment files
Pipeline file creation and allocation so it overlaps writes to the log.

Fixes #4773
2016-03-21 11:56:53 -07:00
Anthony Romano
24b806d2ee wal: preallocate WAL files with initial size equal to segment size
Avoids having to update file size metadata during fdatasync on common path.

Fixes #4755
2016-03-21 11:56:53 -07:00
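A hedged, Linux-only sketch of preallocating a segment to its full size at creation so fdatasync on the hot path need not flush size-metadata changes; the constant, function name, and fallback are simplified assumptions:

```go
package wal

import (
	"os"
	"syscall"
)

const segmentSizeBytes = 64 * 1024 * 1024 // 64MB WAL segment size

// preallocateSegment reserves the full segment size up front so later appends
// within the segment do not grow the file during fdatasync.
func preallocateSegment(f *os.File) error {
	if err := syscall.Fallocate(int(f.Fd()), 0, 0, segmentSizeBytes); err != nil {
		// Fall back to a plain size extension (e.g. when fallocate is unsupported).
		return f.Truncate(segmentSizeBytes)
	}
	return nil
}
```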
Xiang Li
8f653572ac Merge pull request #4827 from xiang90/fix_ctl
etcdctlv3: use godep dir for tablewriter dependency
2016-03-21 11:50:40 -07:00
Xiang Li
5a0bb40a41 etcdctlv3: use godep dir for tablewriter dependency 2016-03-21 11:47:55 -07:00
Anthony Romano
d1ee12566b e2e: test auto tls 2016-03-21 11:44:14 -07:00
Brandon Philips
7d2aee8eca build: build etcdctlv3 by default
Any reason not to? It makes demoing etcd easier with the V3 procfile.
2016-03-21 11:42:01 -07:00
Gyu-Ho Lee
c8c0c728a0 Merge pull request #4825 from gyuho/key_link
Documentation: add public key link to release doc
2016-03-21 11:39:45 -07:00
Anthony Romano
e9b2bd751d etcdmain: add --peer-auto-tls option
Lets the peer generate its own (unsigned) certs.
2016-03-21 11:38:23 -07:00
Anthony Romano
a69c709839 pkg/transport: generate certs 2016-03-21 11:38:23 -07:00
Gyu-Ho Lee
86164374ef Merge pull request #4824 from gyuho/dash
*: replace '-' with '--' in doc
2016-03-21 11:33:22 -07:00
Gyu-Ho Lee
f58d1348a7 Documentation: add public key link to release doc 2016-03-21 11:28:49 -07:00
Gyu-Ho Lee
67c2384bdf *: replace '-' with '--' in doc
Fix https://github.com/coreos/etcd/issues/4595.
2016-03-21 11:12:43 -07:00
Anthony Romano
aafe717f2f fileutil: support file extending preallocate 2016-03-21 09:42:30 -07:00
Anthony Romano
7879429a94 Merge pull request #4802 from heyitsanthony/bench-stm
benchmark: STM benchmark
2016-03-20 16:10:54 -07:00
Anthony Romano
1383da1030 benchmark: STM benchmark 2016-03-20 12:21:29 -07:00
Gyu-Ho Lee
053bc83fe4 Merge pull request #4810 from gyuho/client_serialized_read
clientv3: set Serializable from Op
2016-03-19 14:35:55 -07:00
Gyu-Ho Lee
c740b60db8 Merge pull request #4814 from gyuho/godoc
*: minor updates for godoc
2016-03-19 14:29:48 -07:00
Gyu-Ho Lee
dae7e009b0 *: godoc clean up 2016-03-19 14:19:23 -07:00
Gyu-Ho Lee
0555a6112d Merge pull request #4813 from gyuho/clientv3
clientv3/concurrency: fix godoc
2016-03-18 19:07:54 -07:00
Gyu-Ho Lee
dc9af8e3f3 Merge pull request #4807 from gyuho/go1.4-build
Drop go1.4 support for development
2016-03-18 19:07:44 -07:00
Gyu-Ho Lee
21b33de810 rafthttp: drop go1.4 tests 2016-03-18 18:46:11 -07:00
Gyu-Ho Lee
5bba773199 pkg/testutil: drop go1.4 goroutine leak exception 2016-03-18 18:45:47 -07:00
Gyu-Ho Lee
0a82c06a2c pkg/types: drop go1.4 tests 2016-03-18 18:45:29 -07:00
Gyu-Ho Lee
33e22fa8d7 pkg/httputil: drop go1.4 tests 2016-03-18 18:45:12 -07:00
Gyu-Ho Lee
25e47db416 client: drop go1.4 tests 2016-03-18 18:44:56 -07:00
Gyu-Ho Lee
896cba5cb9 README: go1.5 for Go development 2016-03-18 18:44:40 -07:00
Xiang Li
6aa17f0c76 Merge pull request #4775 from heyitsanthony/wal-locks
fileutil, wal: refactor file locking
2016-03-18 18:26:31 -07:00
Xiang Li
16270dba4f Merge pull request #4805 from gyuho/drop-go1.4
travis: drop go1.4
2016-03-18 18:24:55 -07:00
Gyu-Ho Lee
4e4f0ab619 clientv3/concurrency: fix godoc 2016-03-18 16:34:58 -07:00
Gyu-Ho Lee
ac9376ea16 *: bump to v2.3.0+git 2016-03-18 16:32:04 -07:00
Gyu-Ho Lee
f38a611b55 clientv3: set Serializable from Op
Fix https://github.com/coreos/etcd/issues/4809.
2016-03-18 15:56:48 -07:00
Gyu-Ho Lee
5badbab8b7 travis: drop go1.4
Fix https://github.com/coreos/etcd/issues/4790.
2016-03-18 13:33:12 -07:00
Anthony Romano
7397e14c0a fileutil, wal: refactor file locking
The file lock interface was more verbose than it needed to be, while also making it difficult to support systems (e.g., Windows) that only permit locked writes on the single fd holding the lock.
2016-03-16 15:02:15 -07:00
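A hedged, Linux-only sketch of the direction this refactor points in: open the file and take a non-blocking flock on the same fd, so the lock holder also holds the writable descriptor. The names are illustrative:

```go
package fileutil

import (
	"os"
	"syscall"
)

// LockedFile couples a file with the flock held on its descriptor.
type LockedFile struct{ *os.File }

// TryLockFile opens path and acquires an exclusive, non-blocking lock on it.
// It fails immediately if another process already holds the lock.
func TryLockFile(path string, flag int, perm os.FileMode) (*LockedFile, error) {
	f, err := os.OpenFile(path, flag, perm)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		f.Close()
		return nil, err
	}
	return &LockedFile{f}, nil
}
```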
Lucas Käldström
a44645e13d etcdserver: align 64-bit atomics on 8-byte boundary 2016-03-10 07:24:33 +02:00
1972 changed files with 136414 additions and 135084 deletions

.github/ISSUE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,8 @@
# Bug reporting
A good bug report has some very specific qualities, so please read over our short document on
[reporting bugs][report_bugs] before you submit your bug report.
To ask a question, go ahead and ignore this.
[report_bugs]: ../Documentation/reporting_bugs.md

.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,5 @@
# Contributing guidelines
Please read our [contribution workflow][contributing] before submitting a pull request.
[contributing]: ../CONTRIBUTING.md#contribution-flow

.gitignore vendored

@@ -10,3 +10,4 @@
/hack/insta-discovery/.env
*.test
tools/functional-tester/docker/bin
hack/tls-setup/certs


@@ -1,4 +1,4 @@
// Copyright 2016 CoreOS, Inc.
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.


@@ -1,15 +1,34 @@
dist: trusty
language: go
go_import_path: github.com/coreos/etcd
sudo: false
go:
- 1.4
- 1.5
- 1.6
- tip
env:
global:
- GO15VENDOREXPERIMENT=1
matrix:
- TARGET=amd64
- TARGET=arm64
- TARGET=arm
- TARGET=ppc64le
matrix:
fast_finish: true
allow_failures:
- go: tip
exclude:
- go: 1.6
env: TARGET=arm64
- go: tip
env: TARGET=arm
- go: tip
env: TARGET=arm64
- go: tip
env: TARGET=ppc64le
addons:
apt:
@@ -20,6 +39,17 @@ addons:
before_install:
- go get -v github.com/chzchzchz/goword
- go get -v honnef.co/go/simple/cmd/gosimple
- go get -v honnef.co/go/unused/cmd/unused
# disable godep restore override
install:
- pushd cmd/ && go get -t -v ./... && popd
script:
- ./test
- >
if [ "${TARGET}" == "amd64" ]; then
GOARCH="${TARGET}" ./test;
else
GOARCH="${TARGET}" ./build;
fi


@@ -12,7 +12,7 @@ etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
- Fork the repository on GitHub
- Read the README.md for build instructions
## Reporting Bugs and Creating Issues
## Reporting bugs and creating issues
Reporting bugs is one of the best ways to contribute. However, a good bug report
has some very specific qualities, so please read over our short document on
@@ -39,7 +39,7 @@ The coding style suggested by the Golang community is used in etcd. See the [sty
Please follow this style to make etcd easy to review, maintain and develop.
### Format of the Commit Message
### Format of the commit message
We follow a rough convention for commit messages that is designed to answer two
questions: what changed and why. The subject line should feature the what and


@@ -1,2 +1,6 @@
FROM golang:onbuild
EXPOSE 4001 7001 2379 2380
FROM golang
ADD . /go/src/github.com/coreos/etcd
ADD cmd/vendor /go/src/github.com/coreos/etcd/vendor
RUN go install github.com/coreos/etcd
EXPOSE 2379 2380
ENTRYPOINT ["etcd"]

Dockerfile-release Normal file

@@ -0,0 +1,11 @@
FROM alpine:latest
ADD etcd /usr/local/bin/
ADD etcdctl /usr/local/bin/
RUN mkdir -p /var/etcd/
RUN mkdir -p /var/lib/etcd/
EXPOSE 2379 2380
# Define default command.
CMD ["/usr/local/bin/etcd"]


@@ -1,303 +0,0 @@
# Administration
## Data Directory
### Lifecycle
When first started, etcd stores its configuration into a data directory specified by the data-dir configuration parameter.
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
The write ahead log and snapshot files are used during member operation and to recover after a restart.
Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster.
It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
If a member's data directory is ever lost or corrupted, then the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
A user should avoid restarting an etcd member with a data directory from an out-of-date backup.
Using an out-of-date data directory can lead to inconsistency, as the member had previously agreed to store information via raft and would then re-join claiming it needs that information again.
For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
Once removed the member can be re-added with an empty data directory.
### Contents
The data directory has two sub-directories in it:
1. wal: write ahead log files are stored here. For details see the [wal package documentation][wal-pkg]
2. snap: log snapshots are stored here. For details see the [snap package documentation][snap-pkg]
If `--wal-dir` flag is set, etcd will write the write ahead log files to the specified directory instead of data directory.
## Cluster Management
### Lifecycle
If you are spinning up multiple clusters for testing it is recommended that you specify a unique initial-cluster-token for the different clusters.
This can protect you from cluster corruption in case of mis-configuration because two members started with different cluster tokens will refuse members from each other.
### Monitoring
It is important to monitor your production etcd cluster for healthy information and runtime metrics.
#### Health Monitoring
At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health": "true"}`, then the cluster is healthy. Please note that the `/health` endpoint is still experimental as of etcd 2.2.
```
$ curl -L http://127.0.0.1:2379/health
{"health": "true"}
```
You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.
```
$./etcdctl cluster-health
member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379
cluster is healthy
```
#### Runtime Metrics
etcd uses [Prometheus][prometheus] for metrics reporting in the server. You can read more through the runtime metrics [doc][metrics].
### Debugging
Debugging a distributed system can be difficult. etcd provides several ways to make debug
easier.
#### Enabling Debug Logging
When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
```
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
$ # debug logging enabled
$
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```
#### Debugging Variables
Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain
`cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
`cmdline` is the command line arguments passed into etcd.
`file_descriptor_limit` is the max number of file descriptors etcd can utilize.
`memstats` is explained in detail in the [Go runtime documentation][golang-memstats].
`raft.status` is useful when you want to debug low level raft issues if you are familiar with raft internals. In most cases, you do not need to check `raft.status`.
```json
{
"cmdline": ["./etcd"],
"file_descriptor_limit": 0,
"memstats": {"Alloc":4105744,"TotalAlloc":42337320,"Sys":12560632,"...":"..."},
"raft.status": {"id":"ce2a822cea30bfca","term":5,"vote":"ce2a822cea30bfca","commit":23509,"lead":"ce2a822cea30bfca","raftState":"StateLeader","progress":{"ce2a822cea30bfca":{"match":23509,"next":23510,"state":"ProgressStateProbe"}}}
}
```
### Optimal Cluster Size
The recommended etcd cluster size is 3, 5 or 7, which is decided by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance degrades since data needs to be replicated to more machines.
#### Fault Tolerance Table
It is recommended to have an odd number of members in a cluster. Having an odd cluster size doesn't change the number needed for majority, but you gain a higher tolerance for failure by adding the extra member. You can see this in practice when comparing even and odd sized clusters:
| Cluster Size | Majority | Failure Tolerance |
|--------------|------------|-------------------|
| 1 | 1 | 0 |
| 3 | 2 | 1 |
| 4 | 3 | 1 |
| 5 | 3 | **2** |
| 6 | 4 | 2 |
| 7 | 4 | **3** |
| 8 | 5 | 3 |
| 9 | 5 | **4** |
As you can see, adding another member to bring the size of cluster up to an odd size is always worth it. During a network partition, an odd number of members also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
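For reference, a tiny sketch showing how the majority and failure-tolerance columns above follow from the cluster size:

```go
package main

import "fmt"

func main() {
	for size := 1; size <= 9; size++ {
		majority := size/2 + 1       // quorum needed to commit
		tolerance := size - majority // members that can fail
		fmt.Printf("size=%d majority=%d tolerance=%d\n", size, majority, tolerance)
	}
}
```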
#### Changing Cluster Size
After your cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-reconfig], which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
### Member Migration
When there is a scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing the data and changing the member ID.
The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
* Stop the member process.
* Copy the data directory of the now-idle member to the new machine.
* Update the peer URLs for the replaced member to reflect the new machine according to the [runtime reconfiguration instructions][update-member].
* Start etcd on the new machine, using the same configuration and the copy of the data directory.
This example will walk you through the process of migrating the infra1 member to a new machine:
|Name|Peer URL|
|------|--------------|
|infra0|10.0.1.10:2380|
|infra1|10.0.1.11:2380|
|infra2|10.0.1.12:2380|
```sh
$ export ETCDCTL_ENDPOINT=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
```
```sh
$ etcdctl member list
84194f7c5edd8b37: name=infra0 peerURLs=http://10.0.1.10:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.10:2379
b4db3bf5e495e255: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.11:2379
bc1083c870280d44: name=infra2 peerURLs=http://10.0.1.12:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.12:2379
```
#### Stop the member etcd process
```sh
$ ssh 10.0.1.11
```
```sh
$ kill `pgrep etcd`
```
#### Copy the data directory of the now-idle member to the new machine
```
$ tar -cvzf infra1.etcd.tar.gz %data_dir%
```
```sh
$ scp infra1.etcd.tar.gz 10.0.1.13:~/
```
#### Update the peer URLs for that member to reflect the new machine
```sh
$ curl http://10.0.1.10:2379/v2/members/b4db3bf5e495e255 -XPUT \
-H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.1.13:2380"]}'
```
Or use `etcdctl member update` command
```sh
$ etcdctl member update b4db3bf5e495e255 http://10.0.1.13:2380
```
#### Start etcd on the new machine, using the same configuration and the copy of the data directory
```sh
$ ssh 10.0.1.13
```
```sh
$ tar -xzvf infra1.etcd.tar.gz -C %data_dir%
```
```
etcd -name infra1 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
```
### Disaster Recovery
etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N-1)/2_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would be normally impossible for the cluster to restore quorum and continue functioning.
To recover from such scenarios, etcd provides functionality to backup and restore the datastore and recreate the cluster without data loss.
#### Backing up the datastore
**NB:** Windows users must stop etcd before running the backup command.
The first step of the recovery is to backup the data directory on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data directory used by etcd. For example:
```sh
etcdctl backup \
--data-dir %data_dir% \
--backup-dir %backup_data_dir%
```
This command will rewrite some of the metadata contained in the backup (specifically, the node ID and cluster ID), which means that the node will lose its former identity. In order to recreate a cluster from the backup, you will need to start a new, single-node cluster. The metadata is rewritten to prevent the new node from inadvertently being joined onto an existing cluster.
#### Restoring a backup
To restore a backup using the procedure created above, start etcd with the `-force-new-cluster` option and pointing to the backup directory. This will initialize a new, single-member cluster with the default advertised peer URLs, but preserve the entire contents of the etcd data store. Continuing from the previous example:
```sh
etcd \
-data-dir=%backup_data_dir% \
-force-new-cluster \
...
```
Now etcd should be available on this node and serving the original datastore.
Once you have verified that etcd has started successfully, shut it down and move the data back to the previous location (you may wish to make another copy as well to be safe):
```sh
pkill etcd
rm -fr %data_dir%
mv %backup_data_dir% %data_dir%
etcd \
-data-dir=%data_dir% \
...
```
#### Restoring the cluster
Now that the node is running successfully, [change its advertised peer URLs][update-member], as the `--force-new-cluster` option has set the peer URL to the default listening on localhost.
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details. **NB:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
### Client Request Timeout
etcd sets different timeouts for various types of client requests. The timeout value is not tunable now, which will be improved soon (https://github.com/coreos/etcd/issues/2038).
#### Get requests
Timeout is not set for get requests, because etcd serves the result locally in a non-blocking way.
**Note**: QuorumGet request is a different type, which is mentioned in the following sections.
#### Watch requests
Timeout is not set for watch requests. etcd will not stop a watch request until client cancels it, or the connection is broken.
#### Delete, Put, Post, QuorumGet requests
The default timeout is 5 seconds. It should be large enough to allow all key modifications if the majority of cluster is functioning.
If the request times out, it indicates two possibilities:
1. the server the request sent to was not functioning at that time.
2. the majority of the cluster is not functioning.
If timeouts happen repeatedly, administrators should check the status of the cluster and resolve the issue as soon as possible.
### Best Practices
#### Maximum OS threads
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [has changed in Go 1.5][golang1.5-runtime]).
When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable GOMAXPROCS to the desired number when starting etcd. For more information on this variable, see the [Go runtime documentation][golang-runtime].
[add-a-member]: runtime-configuration.md#add-a-new-member
[golang1.5-runtime]: https://golang.org/doc/go1.5#runtime
[golang-memstats]: https://golang.org/pkg/runtime/#MemStats
[golang-runtime]: https://golang.org/pkg/runtime
[metrics]: metrics.md
[prometheus]: http://prometheus.io/
[remove-a-member]: runtime-configuration.md#remove-a-member
[runtime-reconfig]: runtime-configuration.md#cluster-reconfiguration-operations
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
[update-member]: runtime-configuration.md#update-a-member
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal

File diff suppressed because it is too large.


@@ -1,511 +0,0 @@
# v2 Auth and Security
## etcd Resources
There are three types of resources in etcd
1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
### Permission Resources
#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.
#### Roles
Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.
The special static ROOT role (named `root`) has full permissions on all key-value resources, plus the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
There is also a special GUEST role, named 'guest'. These are the permissions given to unauthenticated requests to etcd. This role is created automatically, and by default allows access to the full keyspace for backward compatibility (etcd did not previously authenticate any actions). This role can be modified by a ROOT role holder at any time, to reduce the capabilities of unauthenticated users.
#### Permissions
There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.
A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.
### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` is for that exact key or directory only, not its children. `/foo*` is a prefix that matches `/foo`, all keys under it, and any key with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is permission on the full keyspace.
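A small sketch of the matching rule described above, assuming patterns ending in `*` are prefixes and anything else is an exact key; purely illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether key is covered by a v2 auth permission pattern.
func matches(pattern, key string) bool {
	if strings.HasSuffix(pattern, "*") {
		return strings.HasPrefix(key, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == key
}

func main() {
	fmt.Println(matches("/foo", "/foo"))      // true: exact key
	fmt.Println(matches("/foo", "/foo/bar"))  // false: not recursive
	fmt.Println(matches("/foo*", "/foobar"))  // true: prefix match
	fmt.Println(matches("/foo/*", "/foobar")) // false: different prefix
	fmt.Println(matches("*", "/anything"))    // true: full keyspace
}
```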
### Settings Resources
Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).
## v2 Auth
### Basic Auth
We only support [Basic Auth][basic-auth] for the first version. Clients need to attach the Basic Auth credentials to the HTTP `Authorization` header.
### Authorization field for operations
The `Authorization` header is added to requests to `/v2/keys` and `/v2/auth`. The status code 401 Unauthorized is added to the set of responses from the v2 API:
Authorization: Basic {encoded string}
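With curl, for example, the header can be produced with the `-u` flag; this is only a sketch, and the user, password, and endpoint address are placeholders:
```
curl -u alice:alicePW http://127.0.0.1:2379/v2/keys/foo
```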
### Future Work
Other types of auth can be considered in the future (e.g., signed certs, public keys), but the `Authorization:` header allows for such other types.
### Things out of Scope for etcd Permissions
* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (e.g., users modifying keys outside work hours)
## API endpoints
An Error JSON corresponds to:
{
"name": "ErrErrorName",
"description" : "The longer helpful description of the error."
}
#### Enable and Disable Authentication
**Get auth status**
GET /v2/auth/enable
Sent Headers:
Possible Status Codes:
200 OK
200 Body:
{
"enabled": true
}
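A minimal sketch of checking the status with curl (the endpoint address is a placeholder):
```
curl http://127.0.0.1:2379/v2/auth/enable
# e.g. {"enabled": false} before auth has been enabled
```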
**Enable auth**
PUT /v2/auth/enable
Sent Headers:
Put Body: (empty)
Possible Status Codes:
200 OK
400 Bad Request (if root user has not been created)
409 Conflict (already enabled)
200 Body: (empty)
**Disable auth**
DELETE /v2/auth/enable
Sent Headers:
Authorization: Basic <RootAuthString>
Possible Status Codes:
200 OK
401 Unauthorized (if not a root user)
409 Conflict (already disabled)
200 Body: (empty)
#### Users
The User JSON object is formed as follows:
```
{
"user": "userName",
"password": "password",
"roles": [
"role1",
"role2"
],
"grant": [],
"revoke": []
}
```
Password is only passed when necessary.
**Get a List of Users**
GET/HEAD /v2/auth/users
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"users": [
{
"user": "alice",
"roles": [
{
"role": "root",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
}
]
},
{
"user": "bob",
"roles": [
{
"role": "guest",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
}
]
}
]
}
**Get User Details**
GET/HEAD /v2/auth/users/alice
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"user" : "alice",
"roles" : [
{
"role": "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
},
{
"role": "etcd",
"permissions" : {
"kv" : {
"read": [ "*" ],
"write": [ "*" ]
}
}
}
]
}
**Create Or Update A User**
A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are required.
PUT /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
JSON struct, above, matching the appropriate name
* Starting password and roles when creating.
* Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent users)
409 Conflict (when granting duplicated roles or revoking non-existent roles)
200 Headers:
Content-type: application/json
200 Body:
JSON state of the user
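For illustration, creating such a user with curl might look like the following sketch (the credentials, role, and endpoint address are placeholders):
```
curl -u root:rootPW -X PUT http://127.0.0.1:2379/v2/auth/users/charlie \
  -H "Content-Type: application/json" \
  -d '{"user": "charlie", "password": "charliePW", "roles": ["fleet"]}'
```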
**Remove A User**
DELETE /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root user when auth is enabled)
404 Not Found
200 Headers:
200 Body: (empty)
#### Roles
A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read" : [ "/fleet/" ],
"write": [ "/fleet/" ]
}
},
"grant" : {"kv": {...}},
"revoke": {"kv": {...}}
}
```
**Get Role Details**
GET/HEAD /v2/auth/roles/fleet
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
}
**Get a list of Roles**
GET/HEAD /v2/auth/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"roles": [
{
"role": "fleet",
"permissions": {
"kv": {
"read": ["/fleet/"],
"write": ["/fleet/"]
}
}
},
{
"role": "etcd",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
},
{
"role": "quay",
"permissions": {
"kv": {
"read": ["*"],
"write": ["*"]
}
}
}
]
}
**Create Or Update A Role**
PUT /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
Initial desired JSON state, including the role name for verification and:
* Starting permission set if creating
* Granted/Revoked permission set if updating
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent roles)
409 Conflict (when granting duplicated permission or revoking non-existent permission)
200 Body:
JSON state of the role
**Remove A Role**
DELETE /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root)
404 Not Found
200 Headers:
200 Body: (empty)
## Example Workflow
Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
### Create root role
```
PUT /v2/auth/users/root
Put Body:
{"user" : "root", "password": "betterRootPW!"}
```
### Enable auth
```
PUT /v2/auth/enable
```
### Modify guest role (revoke write permission)
```
PUT /v2/auth/roles/guest
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "guest",
"revoke" : {
"kv" : {
"write": [
"*"
]
}
}
}
```
### Create Roles for the Applications
Create the rkt role fully specified:
```
PUT /v2/auth/roles/rkt
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "rkt",
"permissions" : {
"kv": {
"read": [
"/rkt/*"
],
"write": [
"/rkt/*"
]
}
}
}
```
But let's make fleet just a basic role for now:
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "fleet"
}
```
### Optional: Grant some permissions to the roles
Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that this was unnecessary in the rkt case, since the rkt role was created with its permissions fully specified. This step is optional.)
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "fleet",
"grant" : {
"kv" : {
"read": [
"/rkt/fleet",
"/fleet/*"
]
}
}
}
```
### Create Users
As before, let's create the rkt user with its role all at once, and the fleet user separately.
```
PUT /v2/auth/users/rktuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
```
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "fleetuser", "password" : "fleetpw"}
```
### Optional: Grant Roles to Users
Likewise, let's explicitly grant fleetuser access.
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user": "fleetuser", "grant": ["fleet"]}
```
#### Start to use fleetuser and rktuser
For example:
```
PUT /v2/keys/rkt/RktData
Headers:
Authorization: Basic <rktuser:rktpw>
Body:
value=launch
```
Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.
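For instance, a hypothetical write by `rktuser` outside the granted `/rkt/*` prefix would be rejected (shown in the same style as above):
```
PUT /v2/keys/fleet/server1
Headers:
    Authorization: Basic <rktuser:rktpw>
Body:
    value=aws
Response:
    401 Unauthorized
```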
[basic-auth]: https://en.wikipedia.org/wiki/Basic_access_authentication

View File

@@ -1,71 +0,0 @@
# Backward Compatibility
The main goal of the etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provided a new set of APIs.
The other main focus of this release was a more reliable Raft implementation, but as this change is internal, it should not have any notable effect on users.
## Command Line Flags Changes
The major flag changes are mostly related to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports.
- `-addr` is replaced by `-advertise-client-urls`.
- `-bind-addr` is replaced by `-listen-client-urls`.
- `-peer-addr` is replaced by `-initial-advertise-peer-urls`.
- `-peer-bind-addr` is replaced by `-listen-peer-urls`.
- `-peers` is replaced by `-initial-cluster`.
- `-peers-file` is replaced by `-initial-cluster`.
- `-peer-heartbeat-interval` is replaced by `-heartbeat-interval`.
- `-peer-election-timeout` is replaced by `-election-timeout`.
The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/configuration.md.
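As a rough sketch of the mapping (the member name, address, and ports are illustrative, not taken from a real deployment; the new-style ports use the IANA assignments mentioned above):
```
# old-style flags
etcd -name infra0 -addr 10.0.1.10:4001 -peer-addr 10.0.1.10:7001
# roughly equivalent new-style flags
etcd --name infra0 \
  --advertise-client-urls http://10.0.1.10:2379 \
  --initial-advertise-peer-urls http://10.0.1.10:2380
```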
## Data Directory Naming
The default data dir location has changed from {$hostname}.etcd to {name}.etcd.
## Key-Value API
### Read consistency flag
The consistent flag for read operations is removed in etcd 2.0.0. Normal read operations now provide the same consistency guarantees as the 0.4.6 read operations with the consistent flag set.
The read consistency guarantees are:
The consistent read guarantees sequential consistency within one client that talks to one etcd server: reads and writes from one client to one etcd member are observed in order. If a client successfully writes a value to an etcd server, it will be able to read that value back from the same server immediately.
Each etcd member proxies write requests to the leader and only returns the result to the user after the result is applied on the local member. Thus, after a write succeeds, the user is guaranteed to see the value on the member it sent the request to.
Reads do not provide linearizability. If you want linearizable reads, you need to set the quorum option to true.
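A minimal sketch of a linearizable (quorum) read over the v2 HTTP API, assuming a local endpoint and a key named `foo`:
```
curl http://127.0.0.1:2379/v2/keys/foo?quorum=true
```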
**Previous behavior**
We added an option for consistent reads in the old version of etcd because etcd 0.x redirects write requests to the leader. When the user gets the result back from the leader, the member it originally sent the request to might not have applied the write yet. With the consistent flag set to true, the client always sends read requests to the leader, so a client is able to see its own last write when consistent=true is enabled. There are no ordering guarantees among different clients.
## Standby
etcd 0.4's standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of the problems standby was solving.
Standby mode was intended for large clusters that had a subset of the members acting in the consensus process. Overall this process was too magical and allowed operators to back themselves into a corner.
Proxy mode in 2.0 will provide similar functionality, with improved control over which machines act as proxies because the operator configures them explicitly. Proxies also support read-only or read/write modes for increased security and durability.
[proxymode]: proxy.md
## Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
## HTTP Admin API
`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/members API][members-api] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.
[members-api]: members_api.md
## HTTP Key Value API
- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
- Expiration time is in UTC instead of local time.

View File

@@ -72,6 +72,6 @@ With the benchmark result, we can calculate roughly that `c1 = 17kb`, `c2 = 18kb
| 5k | 50 | 10 | 2.5M | 5710MB |
| 1k | 50 | 100 | 5M | 2380MB |
| 2k | 50 | 100 | 10M | 4672MB |
| 5k | 50 | 100 | 50M | *OOM* |
| 5k | 50 | 100 | 25M | *OOM* |
[rss]: https://en.wikipedia.org/wiki/Resident_set_size

View File

@@ -1,4 +1,4 @@
# Branch Management
# Branch management
## Guide
@@ -13,7 +13,7 @@ The etcd team has adopted a *rolling release model* and supports one stable vers
The `master` branch is our development branch. All new features land here first.
If you want to try new features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
To try new and experimental features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
Before the release of the next stable version, feature PRs will be frozen. We will focus on the testing, bug-fix and documentation for one to two weeks.

View File

@@ -1,434 +0,0 @@
# Clustering Guide
## Overview
Starting an etcd cluster statically requires that each member knows another in the cluster. In a number of cases, you might not know the IPs of your cluster members ahead of time. In these cases, you can bootstrap an etcd cluster with the help of a discovery service.
Once an etcd cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-conf]. To better understand the design behind runtime reconfiguration, we suggest you read [the runtime configuration design document][runtime-reconf-design].
This guide will cover the following mechanisms for bootstrapping an etcd cluster:
* [Static](#static)
* [etcd Discovery](#etcd-discovery)
* [DNS Discovery](#dns-discovery)
Each of the bootstrapping mechanisms will be used to create a three machine etcd cluster with the following details:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
## Static
As we know the cluster members, their addresses and the size of the cluster before starting, we can use an offline bootstrap configuration by setting the `initial-cluster` flag. Each machine will get either the following command line or environment variables:
```
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380"
ETCD_INITIAL_CLUSTER_STATE=new
```
```
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
Note that the URLs specified in `initial-cluster` are the _advertised peer URLs_, i.e. they should match the value of `initial-advertise-peer-urls` on the respective nodes.
If you are spinning up multiple clusters (or creating and destroying a single cluster) with the same configuration for testing purposes, it is highly recommended that you specify a unique `initial-cluster-token` for the different clusters. By doing this, etcd can generate unique cluster IDs and member IDs for the clusters even if they otherwise have the exact same configuration. This protects you from cross-cluster interaction, which might corrupt your clusters.
etcd listens on [`listen-client-urls`][conf-listen-client] to accept client traffic. Each etcd member advertises the URLs specified in [`advertise-client-urls`][conf-adv-client] to other members, proxies, and clients. Please make sure the `advertise-client-urls` are reachable from the intended clients. A common mistake is setting `advertise-client-urls` to localhost or leaving it as the default when you want remote clients to reach etcd.
On each machine you would start etcd with these flags:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
The command line parameters starting with `--initial-cluster` will be ignored on subsequent runs of etcd. You are free to remove the environment variables or command line flags after the initial bootstrap process. If you need to make changes to the configuration later (for example, adding or removing members to/from the cluster), see the [runtime configuration][runtime-conf] guide.
### Error Cases
In the following example, we have not included our new host in the list of enumerated nodes. If this is a new cluster, the node _must_ be added to the list of initial cluster members.
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls https://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--initial-cluster infra0=http://10.0.1.10:2380 \
--initial-cluster-state new
etcd: infra1 not listed in the initial cluster config
exit 1
```
In this example, we are attempting to map a node (infra0) on a different address (127.0.0.1:2380) than its enumerated address in the cluster list (10.0.1.10:2380). If this node is to listen on multiple addresses, all addresses _must_ be reflected in the "initial-cluster" configuration directive.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://127.0.0.1:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state=new
etcd: error setting up initial cluster: infra0 has different advertised URLs in the cluster and advertised peer URLs list
exit 1
```
If you configure a peer with a different configuration and attempt to join this cluster, you will get a cluster ID mismatch and etcd will exit.
```
$ etcd --name infra3 --initial-advertise-peer-urls http://10.0.1.13:2380 \
--listen-peer-urls http://10.0.1.13:2380 \
--listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.13:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra3=http://10.0.1.13:2380 \
--initial-cluster-state=new
etcd: conflicting cluster ID to the target cluster (c6ab534d07e8fcc4 != bc25ea2a74fb18b0). Exiting.
exit 1
```
## Discovery
In a number of cases, you might not know the IPs of your cluster peers ahead of time. This is common when utilizing cloud providers or when your network uses DHCP. In these cases, rather than specifying a static configuration, you can use an existing etcd cluster to bootstrap a new one. We call this process "discovery".
There are two methods that can be used for discovery:
* etcd discovery service
* DNS SRV records
### etcd Discovery
To better understand the design of the discovery service protocol, we suggest reading [the discovery protocol documentation][discovery-proto].
#### Lifetime of a Discovery URL
A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery URL, you should always create discovery URLs for new clusters.
Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime-conf] guide.
#### Custom etcd Discovery Service
Discovery uses an existing cluster to bootstrap itself. If you are using your own etcd cluster, you can create a URL like so:
```
$ curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
```
By setting the size key under this URL, you create a discovery URL with an expected cluster size of 3.
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
The URL you will use in this case will be `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` and the etcd members will use the `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` directory for registration as they start.
**Each member must have a different name flag specified; otherwise discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice.**
Now we start etcd with those relevant flags for each member:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
This will cause each member to register itself with the custom etcd discovery service and begin the cluster once all machines have been registered.
#### Public etcd Discovery Service
If you do not have access to an existing cluster, you can use the public discovery service hosted at `discovery.etcd.io`. You can create a private discovery URL using the "new" endpoint like so:
```
$ curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will create the cluster with an initial expected size of 3 members. If you do not specify a size, a default of 3 will be used.
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
```
ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
**Each member must have a different name flag specified; otherwise discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice.**
Now we start etcd with those relevant flags for each member:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will cause each member to register itself with the discovery service and begin the cluster once all members have been registered.
You can use the environment variable `ETCD_DISCOVERY_PROXY` to cause etcd to use an HTTP proxy to connect to the discovery service.
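A sketch of such a setup follows; the proxy address is a placeholder, and the remaining flags mirror the infra0 example above:
```
ETCD_DISCOVERY_PROXY=http://proxy.example.com:3128 etcd --name infra0 \
  --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --listen-peer-urls http://10.0.1.10:2380 \
  --listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://10.0.1.10:2379 \
  --discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```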
#### Error and Warning Cases
##### Discovery Server Errors
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd: error: the cluster doesnt have a size configuration value in https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de/_config
exit 1
```
##### User Errors
This error will occur if the discovery cluster already has the configured number of members and `discovery-fallback` is explicitly disabled.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de \
--discovery-fallback exit
etcd: discovery: cluster is full
exit 1
```
##### Warnings
This is a harmless warning notifying you that the discovery URL will be
ignored on this machine.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcdserver: discovery token ignored since a cluster has already been initialized. Valid log found at /var/lib/etcd
```
### DNS Discovery
DNS [SRV records][rfc-srv] can be used as a discovery mechanism.
The `-discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
The following DNS SRV records are looked up in the listed order:
* _etcd-server-ssl._tcp.example.com
* _etcd-server._tcp.example.com
If `_etcd-server-ssl._tcp.example.com` is found then etcd will attempt the bootstrapping process over SSL.
To help clients discover the etcd cluster, the following DNS SRV records are looked up in the listed order:
* _etcd-client._tcp.example.com
* _etcd-client-ssl._tcp.example.com
If `_etcd-client-ssl._tcp.example.com` is found, clients will attempt to communicate with the etcd cluster over SSL.
#### Create DNS SRV records
```
$ dig +noall +answer SRV _etcd-server._tcp.example.com
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra0.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra1.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra2.example.com.
```
```
$ dig +noall +answer SRV _etcd-client._tcp.example.com
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra0.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra1.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra2.example.com.
```
```
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```
#### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP addresses; the bootstrap process will resolve DNS A records.
The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets. The etcd member reads the resolved address to find out if it belongs to the cluster defined in the SRV records.
```
$ etcd --name infra0 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra0.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra0.example.com:2379 \
--listen-client-urls http://infra0.example.com:2379 \
--listen-peer-urls http://infra0.example.com:2380
```
```
$ etcd --name infra1 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra1.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra1.example.com:2379 \
--listen-client-urls http://infra1.example.com:2379 \
--listen-peer-urls http://infra1.example.com:2380
```
```
$ etcd --name infra2 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra2.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra2.example.com:2379 \
--listen-client-urls http://infra2.example.com:2379 \
--listen-peer-urls http://infra2.example.com:2380
```
You can also bootstrap the cluster using IP addresses instead of domain names:
```
$ etcd --name infra0 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.10:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.10:2379 \
--listen-client-urls http://10.0.1.10:2379 \
--listen-peer-urls http://10.0.1.10:2380
```
```
$ etcd --name infra1 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.11:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.11:2379 \
--listen-client-urls http://10.0.1.11:2379 \
--listen-peer-urls http://10.0.1.11:2380
```
```
$ etcd --name infra2 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.12:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.12:2379 \
--listen-client-urls http://10.0.1.12:2379 \
--listen-peer-urls http://10.0.1.12:2380
```
#### etcd proxy configuration
DNS SRV records can also be used to configure the list of peers for an etcd server running in proxy mode:
```
$ etcd --proxy on --discovery-srv example.com
```
#### etcd client configuration
DNS SRV records can also be used to help clients discover the etcd cluster.
The official [etcd/client][client] supports [DNS Discovery][client-discoverer].
`etcdctl` also supports DNS Discovery by specifying the `--discovery-srv` option.
```
$ etcdctl --discovery-srv example.com set foo bar
```
#### Error Cases
You might see an error like `cannot find local etcd $name from SRV records.`. This means the etcd member failed to find itself in the cluster defined by the SRV records. The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
# 0.4 to 2.0+ Migration Guide
In etcd 2.0 we introduced the ability to listen on more than one address and to advertise multiple addresses. This makes using etcd easier when you have complex networking, such as private and public networks on various cloud providers.
To make understanding this feature easier, we changed the naming of some flags, but we support the old flags to make the migration from the old to new version easier.
|Old Flag |New Flag |Migration Behavior |
|-----------------------|-----------------------|---------------------------------------------------------------------------------------|
|-peer-addr |--initial-advertise-peer-urls |If specified, peer-addr will be used as the only peer URL. Error if both flags specified.|
|-addr |--advertise-client-urls |If specified, addr will be used as the only client URL. Error if both flags specified.|
|-peer-bind-addr |--listen-peer-urls |If specified, peer-bind-addr will be used as the only peer bind URL. Error if both flags specified.|
|-bind-addr |--listen-client-urls |If specified, bind-addr will be used as the only client bind URL. Error if both flags specified.|
|-peers |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
|-peers-file |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
[client]: /client
[client-discoverer]: https://godoc.org/github.com/coreos/etcd/client#Discoverer
[conf-adv-client]: configuration.md#-advertise-client-urls
[conf-listen-client]: configuration.md#-listen-client-urls
[discovery-proto]: discovery_protocol.md
[fall-back]: proxy.md#fallback-to-proxy-mode-with-discovery-service
[proxy]: proxy.md
[rfc-srv]: http://www.ietf.org/rfc/rfc2052.txt
[runtime-conf]: runtime-configuration.md
[runtime-reconf-design]: runtime-reconf-design.md

View File

@@ -1,282 +0,0 @@
# Configuration Flags
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.
The format of the environment variable for flag `--my-flag` is `ETCD_MY_FLAG`. This applies to all flags.
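For example, the following two invocations are equivalent (the value shown is illustrative):
```
etcd --heartbeat-interval 100
ETCD_HEARTBEAT_INTERVAL=100 etcd
```
If both are supplied, the command-line value takes precedence, as noted above.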
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication. Some legacy code and documentation still references ports 4001 and 7001, but all new etcd use and discussion should adopt the assigned ports.
To start etcd automatically with custom settings at boot on Linux, using a [systemd][systemd-intro] unit is highly recommended.
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
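A minimal, hypothetical unit file sketch follows; the path, binary location, and environment values are placeholders, not an official configuration:
```
# /etc/systemd/system/etcd.service (hypothetical)
[Unit]
Description=etcd key-value store
After=network.target

[Service]
Environment=ETCD_NAME=infra0
Environment=ETCD_DATA_DIR=/var/lib/etcd
ExecStart=/usr/local/bin/etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```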
## Member Flags
### --name
+ Human-readable name for this member.
+ default: "default"
+ env variable: ETCD_NAME
+ This value is used to identify this node's own entries in the `--initial-cluster` flag (e.g. `default=http://localhost:2380` or `default=http://localhost:2380,default=http://localhost:7001`). It needs to match the key used in that flag if you're using [static bootstrapping][build-cluster]. When using discovery, each member must have a unique name. `Hostname` or `machine-id` can be a good choice.
### --data-dir
+ Path to the data directory.
+ default: "${name}.etcd"
+ env variable: ETCD_DATA_DIR
### --wal-dir
+ Path to the dedicated wal directory. If this flag is set, etcd will write the WAL files to this directory rather than the data directory. This allows a dedicated disk to be used, and helps avoid I/O contention between WAL writes and other I/O operations.
+ default: ""
+ env variable: ETCD_WAL_DIR
### --snapshot-count
+ Number of committed transactions to trigger a snapshot to disk.
+ default: "10000"
+ env variable: ETCD_SNAPSHOT_COUNT
### --heartbeat-interval
+ Time (in milliseconds) of a heartbeat interval.
+ default: "100"
+ env variable: ETCD_HEARTBEAT_INTERVAL
### --election-timeout
+ Time (in milliseconds) for an election to timeout. See [Documentation/tuning.md](tuning.md#time-parameters) for details.
+ default: "1000"
+ env variable: ETCD_ELECTION_TIMEOUT
### --listen-peer-urls
+ List of URLs to listen on for peer traffic. This flag tells etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_LISTEN_PEER_URLS
+ example: "http://10.0.0.1:2380"
+ invalid example: "http://example.com:2380" (domain name is invalid for binding)
### --listen-client-urls
+ List of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from clients on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_LISTEN_CLIENT_URLS
+ example: "http://10.0.0.1:2379"
+ invalid example: "http://example.com:2379" (domain name is invalid for binding)
### --max-snapshots
+ Maximum number of snapshot files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_SNAPSHOTS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
### --max-wals
+ Maximum number of wal files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_WALS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
### --cors
+ Comma-separated white list of origins for CORS (cross-origin resource sharing).
+ default: none
+ env variable: ETCD_CORS
## Clustering Flags
Flags with the `--initial` prefix are used when bootstrapping a new member ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery], or [runtime reconfiguration][reconfig]) and are ignored when restarting an existing member.
Flags with the `--discovery` prefix need to be set when using the [discovery service][discovery].
### --initial-advertise-peer-urls
+ List of this member's peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members. These URLs can contain domain names.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_INITIAL_ADVERTISE_PEER_URLS
+ example: "http://example.com:2380, http://10.0.0.1:2380"
### --initial-cluster
+ Initial cluster configuration for bootstrapping.
+ default: "default=http://localhost:2380,default=http://localhost:7001"
+ env variable: ETCD_INITIAL_CLUSTER
+ The key is the value of the `--name` flag for each node provided. The default uses `default` for the key because this is the default for the `--name` flag.
### --initial-cluster-state
+ Initial cluster state ("new" or "existing"). Set to `new` for all members present during initial static or DNS bootstrapping. If this option is set to `existing`, etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely.
+ default: "new"
+ env variable: ETCD_INITIAL_CLUSTER_STATE
### --initial-cluster-token
+ Initial cluster token for the etcd cluster during bootstrap.
+ default: "etcd-cluster"
+ env variable: ETCD_INITIAL_CLUSTER_TOKEN
### --advertise-client-urls
+ List of this member's client URLs to advertise to the rest of the cluster. These URLs can contain domain names.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_ADVERTISE_CLIENT_URLS
+ example: "http://example.com:2379, http://10.0.0.1:2379"
+ Be careful if you are advertising URLs such as http://localhost:2379 from a cluster member and are using the proxy feature of etcd. This will cause loops, because the proxy will be forwarding requests to itself until its resources (memory, file descriptors) are eventually depleted.
### --discovery
+ Discovery URL used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY
### --discovery-srv
+ DNS srv domain used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY_SRV
### --discovery-fallback
+ Expected behavior ("exit" or "proxy") when the discovery service fails.
+ default: "proxy"
+ env variable: ETCD_DISCOVERY_FALLBACK
### --discovery-proxy
+ HTTP proxy to use for traffic to discovery service.
+ default: none
+ env variable: ETCD_DISCOVERY_PROXY
### --strict-reconfig-check
+ Reject reconfiguration requests that would cause quorum loss.
+ default: false
+ env variable: ETCD_STRICT_RECONFIG_CHECK
## Proxy Flags
Flags with the `--proxy` prefix configure etcd to run in [proxy mode][proxy].
### --proxy
+ Proxy mode setting ("off", "readonly" or "on").
+ default: "off"
+ env variable: ETCD_PROXY
### --proxy-failure-wait
+ Time (in milliseconds) an endpoint will be held in a failed state before being reconsidered for proxied requests.
+ default: 5000
+ env variable: ETCD_PROXY_FAILURE_WAIT
### --proxy-refresh-interval
+ Time (in milliseconds) of the endpoints refresh interval.
+ default: 30000
+ env variable: ETCD_PROXY_REFRESH_INTERVAL
### --proxy-dial-timeout
+ Time (in milliseconds) for a dial to timeout or 0 to disable the timeout.
+ default: 1000
+ env variable: ETCD_PROXY_DIAL_TIMEOUT
### --proxy-write-timeout
+ Time (in milliseconds) for a write to timeout or 0 to disable the timeout.
+ default: 5000
+ env variable: ETCD_PROXY_WRITE_TIMEOUT
### --proxy-read-timeout
+ Time (in milliseconds) for a read to timeout or 0 to disable the timeout.
+ Don't change this value if you use watches, because they rely on long-polling requests.
+ default: 0
+ env variable: ETCD_PROXY_READ_TIMEOUT
## Security Flags
The security flags help to [build a secure etcd cluster][security].
### --ca-file [DEPRECATED]
+ Path to the client server TLS CA file. `--ca-file ca.crt` can be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will behave the same.
+ default: none
+ env variable: ETCD_CA_FILE
### --cert-file
+ Path to the client server TLS cert file.
+ default: none
+ env variable: ETCD_CERT_FILE
### --key-file
+ Path to the client server TLS key file.
+ default: none
+ env variable: ETCD_KEY_FILE
### --client-cert-auth
+ Enable client cert authentication.
+ default: false
+ env variable: ETCD_CLIENT_CERT_AUTH
### --trusted-ca-file
+ Path to the client server TLS trusted CA key file.
+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE
### --peer-ca-file [DEPRECATED]
+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` can be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will behave the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
### --peer-cert-file
+ Path to the peer server TLS cert file.
+ default: none
+ env variable: ETCD_PEER_CERT_FILE
### --peer-key-file
+ Path to the peer server TLS key file.
+ default: none
+ env variable: ETCD_PEER_KEY_FILE
### --peer-client-cert-auth
+ Enable peer client cert authentication.
+ default: false
+ env variable: ETCD_PEER_CLIENT_CERT_AUTH
### --peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: none
+ env variable: ETCD_PEER_TRUSTED_CA_FILE
## Logging Flags
### --debug
+ Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages)
+ env variable: ETCD_DEBUG
### --log-package-levels
+ Set individual etcd subpackages to specific log levels, for example `etcdserver=WARNING,security=DEBUG`.
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
## Unsafe Flags
Please be CAUTIOUS when using unsafe flags because they can break the guarantees given by the consensus protocol.
For example, etcd may panic if other members in the cluster are still alive.
Follow the instructions carefully when using these flags.
### --force-new-cluster
+ Force the creation of a new one-member cluster. It commits configuration changes that forcibly remove all existing members from the cluster and add the local member. It needs to be set to [restore a backup][restore].
+ default: false
+ env variable: ETCD_FORCE_NEW_CLUSTER
## Experimental Flags
### --experimental-v3demo
+ Enable experimental [v3 demo API][rfc-v3].
+ default: false
+ env variable: ETCD_EXPERIMENTAL_V3DEMO
## Miscellaneous Flags
### --version
+ Print the version and exit.
+ default: false
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via an HTTP server. The address is the client URL + "/debug/pprof".
+ default: false
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[proxy]: proxy.md
[restore]: admin_guide.md#restoring-a-backup
[rfc-v3]: rfc/v3api.md
[security]: security.md
[tuning]: tuning.md#time-parameters

454
Documentation/demo.md Normal file
View File

@@ -0,0 +1,454 @@
# Demo
This series of examples shows the basic procedures for working with an etcd cluster.
## Set up a cluster
<img src="https://storage.googleapis.com/etcd/demo/01_etcd_clustering_2016051001.gif" alt="01_etcd_clustering_2016050601"/>
On each etcd node, specify the cluster members:
```
TOKEN=token-01
CLUSTER_STATE=new
NAME_1=machine-1
NAME_2=machine-2
NAME_3=machine-3
HOST_1=10.240.0.17
HOST_2=10.240.0.18
HOST_3=10.240.0.19
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
```
Run this on each machine:
```
# For machine 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
# For machine 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
# For machine 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```
Or use our public discovery service:
```
curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/a81b5818e67a6ea83e9d4daea5ecbc92
# grab this token
TOKEN=token-01
CLUSTER_STATE=new
NAME_1=machine-1
NAME_2=machine-2
NAME_3=machine-3
HOST_1=10.240.0.17
HOST_2=10.240.0.18
HOST_3=10.240.0.19
DISCOVERY=https://discovery.etcd.io/a81b5818e67a6ea83e9d4daea5ecbc92
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--discovery ${DISCOVERY} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--discovery ${DISCOVERY} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--discovery ${DISCOVERY} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```
Now etcd is ready! To connect to etcd with etcdctl:
```
export ETCDCTL_API=3
HOST_1=10.240.0.17
HOST_2=10.240.0.18
HOST_3=10.240.0.19
ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
etcdctl --endpoints=$ENDPOINTS member list
```
## Access etcd
<img src="https://storage.googleapis.com/etcd/demo/02_etcdctl_access_etcd_2016051001.gif" alt="02_etcdctl_access_etcd_2016051001"/>
`put` command to write:
```
etcdctl --endpoints=$ENDPOINTS put foo "Hello World!"
```
`get` to read from etcd:
```
etcdctl --endpoints=$ENDPOINTS get foo
etcdctl --endpoints=$ENDPOINTS --write-out="json" get foo
```
## Get by prefix
<img src="https://storage.googleapis.com/etcd/demo/03_etcdctl_get_by_prefix_2016050501.gif" alt="03_etcdctl_get_by_prefix_2016050501"/>
```
etcdctl --endpoints=$ENDPOINTS put web1 value1
etcdctl --endpoints=$ENDPOINTS put web2 value2
etcdctl --endpoints=$ENDPOINTS put web3 value3
etcdctl --endpoints=$ENDPOINTS get web --prefix
```
## Delete
<img src="https://storage.googleapis.com/etcd/demo/04_etcdctl_delete_2016050601.gif" alt="04_etcdctl_delete_2016050601"/>
```
etcdctl --endpoints=$ENDPOINTS put key myvalue
etcdctl --endpoints=$ENDPOINTS del key
etcdctl --endpoints=$ENDPOINTS put k1 value1
etcdctl --endpoints=$ENDPOINTS put k2 value2
etcdctl --endpoints=$ENDPOINTS del k --prefix
```
## Transactional write
`txn` to wrap multiple requests into one transaction:
<img src="https://storage.googleapis.com/etcd/demo/05_etcdctl_transaction_2016050501.gif" alt="05_etcdctl_transaction_2016050501"/>
```
etcdctl --endpoints=$ENDPOINTS put user1 bad
etcdctl --endpoints=$ENDPOINTS txn --interactive
compares:
value("user1") = "bad"
success requests (get, put, delete):
del user1
failure requests (get, put, delete):
put user1 good
```
## Watch
`watch` to get notified of future changes:
<img src="https://storage.googleapis.com/etcd/demo/06_etcdctl_watch_2016050501.gif" alt="06_etcdctl_watch_2016050501"/>
```
etcdctl --endpoints=$ENDPOINTS watch stock1
etcdctl --endpoints=$ENDPOINTS put stock1 1000
etcdctl --endpoints=$ENDPOINTS watch stock --prefix
etcdctl --endpoints=$ENDPOINTS put stock1 10
etcdctl --endpoints=$ENDPOINTS put stock2 20
```
## Lease
`lease` to write with TTL:
<img src="https://storage.googleapis.com/etcd/demo/07_etcdctl_lease_2016050501.gif" alt="07_etcdctl_lease_2016050501"/>
```
etcdctl --endpoints=$ENDPOINTS lease grant 300
# lease 2be7547fbc6a5afa granted with TTL(300s)
etcdctl --endpoints=$ENDPOINTS put sample value --lease=2be7547fbc6a5afa
etcdctl --endpoints=$ENDPOINTS get sample
etcdctl --endpoints=$ENDPOINTS lease keep-alive 2be7547fbc6a5afa
etcdctl --endpoints=$ENDPOINTS lease revoke 2be7547fbc6a5afa
# or after 300 seconds
etcdctl --endpoints=$ENDPOINTS get sample
```
## Distributed locks
`lock` for distributed lock:
<img src="https://storage.googleapis.com/etcd/demo/08_etcdctl_lock_2016050501.gif" alt="08_etcdctl_lock_2016050501"/>
```
etcdctl --endpoints=$ENDPOINTS lock mutex1
# another client with the same name blocks
etcdctl --endpoints=$ENDPOINTS lock mutex1
```
## Elections
`elect` for leader election:
<img src="https://storage.googleapis.com/etcd/demo/09_etcdctl_elect_2016050501.gif" alt="09_etcdctl_elect_2016050501"/>
```
etcdctl --endpoints=$ENDPOINTS elect one p1
# another client with the same name blocks
etcdctl --endpoints=$ENDPOINTS elect one p2
```
## Cluster status
Specify the initial cluster configuration for each machine:
<img src="https://storage.googleapis.com/etcd/demo/10_etcdctl_endpoint_2016050501.gif" alt="10_etcdctl_endpoint_2016050501"/>
```
etcdctl --write-out=table --endpoints=$ENDPOINTS endpoint status
+------------------+------------------+---------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------+------------------+---------+---------+-----------+-----------+------------+
| 10.240.0.17:2379 | 4917a7ab173fabe7 | 3.0.0 | 45 kB | true | 4 | 16726 |
| 10.240.0.18:2379 | 59796ba9cd1bcd72 | 3.0.0 | 45 kB | false | 4 | 16726 |
| 10.240.0.19:2379 | 94df724b66343e6c | 3.0.0 | 45 kB | false | 4 | 16726 |
+------------------+------------------+---------+---------+-----------+-----------+------------+
```
```
etcdctl --endpoints=$ENDPOINTS endpoint health
10.240.0.17:2379 is healthy: successfully committed proposal: took = 3.345431ms
10.240.0.19:2379 is healthy: successfully committed proposal: took = 3.767967ms
10.240.0.18:2379 is healthy: successfully committed proposal: took = 4.025451ms
```
## Snapshot
`snapshot` to save point-in-time snapshot of etcd database:
<img src="https://storage.googleapis.com/etcd/demo/11_etcdctl_snapshot_2016051001.gif" alt="11_etcdctl_snapshot_2016051001"/>
```
etcdctl --endpoints=$ENDPOINTS snapshot save my.db
Snapshot saved at my.db
```
```
etcdctl --write-out=table --endpoints=$ENDPOINTS snapshot status my.db
+---------+----------+------------+------------+
| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |
+---------+----------+------------+------------+
| c55e8b8 | 9 | 13 | 25 kB |
+---------+----------+------------+------------+
```
## Migrate
`migrate` to transform etcd v2 to v3 data:
<img src="https://storage.googleapis.com/etcd/demo/12_etcdctl_migrate_2016061602.gif" alt="12_etcdctl_migrate_2016061602"/>
```
# write key in etcd version 2 store
export ETCDCTL_API=2
etcdctl --endpoints=http://$ENDPOINT set foo bar
# read key in etcd v2
etcdctl --endpoints=$ENDPOINTS --output="json" get foo
# stop etcd node to migrate, one by one
# migrate v2 data
export ETCDCTL_API=3
etcdctl --endpoints=$ENDPOINT migrate --data-dir="default.etcd" --wal-dir="default.etcd/member/wal"
# restart etcd node after migrate, one by one
# confirm that the key got migrated
etcdctl --endpoints=$ENDPOINTS get /foo
```
## Member
`member` to add, remove, or update membership:
<img src="https://storage.googleapis.com/etcd/demo/13_etcdctl_member_2016062301.gif" alt="13_etcdctl_member_2016062301"/>
```
# For each machine
TOKEN=my-etcd-token-1
CLUSTER_STATE=new
NAME_1=etcd-node-1
NAME_2=etcd-node-2
NAME_3=etcd-node-3
HOST_1=10.240.0.13
HOST_2=10.240.0.14
HOST_3=10.240.0.15
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
# For node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 \
--listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} \
--initial-cluster-token ${TOKEN}
# For node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 \
--listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} \
--initial-cluster-token ${TOKEN}
# For node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 \
--listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} \
--initial-cluster-token ${TOKEN}
```
Then replace a member with `member remove` and `member add` commands:
```
# get member ID
export ETCDCTL_API=3
HOST_1=10.240.0.13
HOST_2=10.240.0.14
HOST_3=10.240.0.15
etcdctl --endpoints=${HOST_1}:2379,${HOST_2}:2379,${HOST_3}:2379 member list
# remove the member
MEMBER_ID=278c654c9a6dfd3b
etcdctl --endpoints=${HOST_1}:2379,${HOST_2}:2379,${HOST_3}:2379 \
member remove ${MEMBER_ID}
# add a new member (node 4)
export ETCDCTL_API=3
NAME_1=etcd-node-1
NAME_2=etcd-node-2
NAME_4=etcd-node-4
HOST_1=10.240.0.13
HOST_2=10.240.0.14
HOST_4=10.240.0.16 # new member
etcdctl --endpoints=${HOST_1}:2379,${HOST_2}:2379 \
member add ${NAME_4} \
--peer-urls=http://${HOST_4}:2380
```
Next, start the new member with `--initial-cluster-state existing` flag:
```
# [WARNING] If the new member starts from the same disk space,
# make sure to remove the data directory of the old member
#
# restart with 'existing' flag
TOKEN=my-etcd-token-1
CLUSTER_STATE=existing
NAME_1=etcd-node-1
NAME_2=etcd-node-2
NAME_4=etcd-node-4
HOST_1=10.240.0.13
HOST_2=10.240.0.14
HOST_4=10.240.0.16 # new member
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_4}=http://${HOST_4}:2380
THIS_NAME=${NAME_4}
THIS_IP=${HOST_4}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 \
--listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} \
--initial-cluster-token ${TOKEN}
```
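The third membership operation, `member update`, changes the peer URLs a member advertises without removing it. A minimal sketch, assuming the `--peer-urls` flag behaves as it does for `member add` above; the member ID and address below are placeholders taken from `member list` output:
```
# update the peer URLs advertised by an existing member
export ETCDCTL_API=3
MEMBER_ID=278c654c9a6dfd3b   # example ID, as printed by 'member list'
etcdctl --endpoints=${HOST_1}:2379,${HOST_2}:2379 \
  member update ${MEMBER_ID} \
  --peer-urls=http://10.240.0.17:2380   # example new peer URL for that member
```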
## Auth
`auth`, `user`, and `role` for authentication:
<img src="https://storage.googleapis.com/etcd/demo/14_etcdctl_auth_2016062301.gif" alt="14_etcdctl_auth_2016062301"/>
```
export ETCDCTL_API=3
ENDPOINTS=localhost:2379
etcdctl --endpoints=${ENDPOINTS} role add root
etcdctl --endpoints=${ENDPOINTS} role grant-permission root readwrite foo
etcdctl --endpoints=${ENDPOINTS} role get root
etcdctl --endpoints=${ENDPOINTS} user add root
etcdctl --endpoints=${ENDPOINTS} user grant-role root root
etcdctl --endpoints=${ENDPOINTS} user get root
etcdctl --endpoints=${ENDPOINTS} auth enable
# now all client requests go through auth
etcdctl --endpoints=${ENDPOINTS} --user=root:123 put foo bar
etcdctl --endpoints=${ENDPOINTS} get foo
etcdctl --endpoints=${ENDPOINTS} --user=root:123 get foo
etcdctl --endpoints=${ENDPOINTS} --user=root:123 get foo1
```
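Authentication can be switched back off with `auth disable`, which must be issued with the root user's credentials (the `123` password here is whatever was chosen at `user add root`):
```
etcdctl --endpoints=${ENDPOINTS} --user=root:123 auth disable
```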


@@ -0,0 +1,38 @@
## Why grpc-gateway
etcd v3 uses [gRPC][grpc] for its messaging protocol. The etcd project includes a gRPC-based [Go client][go-client] and a command line utility, [etcdctl][etcdctl], for communicating with an etcd cluster through gRPC. For languages with no gRPC support, etcd provides a JSON [grpc-gateway][grpc-gateway]. This gateway serves a RESTful proxy that translates HTTP/JSON requests into gRPC messages.
## Using grpc-gateway
The gateway accepts a [JSON mapping][json-mapping] for etcd's [protocol buffer][api-ref] message definitions. Note that `key` and `value` fields are defined as byte arrays and therefore must be base64 encoded in JSON.
```bash
<<COMMENT
https://www.base64encode.org/
foo is 'Zm9v' in Base64
bar is 'YmFy'
COMMENT
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
curl -L http://localhost:2379/v3alpha/kv/range \
-X POST -d '{"key": "Zm9v"}'
```
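Instead of an online encoder, the base64 strings can be produced and decoded locally with the standard `base64` utility; a quick sketch (GNU coreutils flags assumed):
```bash
# encode the key and value for the JSON payload
echo -n foo | base64    # Zm9v
echo -n bar | base64    # YmFy
# decode a key or value returned by the range call
echo Zm9v | base64 --decode    # foo
```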
## Swagger
Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].
[api-ref]: ./api_reference_v3.md
[go-client]: https://github.com/coreos/etcd/tree/master/clientv3
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
[grpc]: http://www.grpc.io/
[grpc-gateway]: https://github.com/grpc-ecosystem/grpc-gateway
[json-mapping]: https://developers.google.com/protocol-buffers/docs/proto3#json
[swagger]: http://swagger.io/
[swagger-doc]: apispec/swagger/rpc.swagger.json


@@ -0,0 +1,835 @@
### etcd API Reference
This is generated documentation. Please read the proto files for more details.
##### service `Auth` (etcdserver/etcdserverpb/rpc.proto)
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
| AuthEnable | AuthEnableRequest | AuthEnableResponse | AuthEnable enables authentication. |
| AuthDisable | AuthDisableRequest | AuthDisableResponse | AuthDisable disables authentication. |
| Authenticate | AuthenticateRequest | AuthenticateResponse | Authenticate processes an authenticate request. |
| UserAdd | AuthUserAddRequest | AuthUserAddResponse | UserAdd adds a new user. |
| UserGet | AuthUserGetRequest | AuthUserGetResponse | UserGet gets detailed user information. |
| UserList | AuthUserListRequest | AuthUserListResponse | UserList gets a list of all users. |
| UserDelete | AuthUserDeleteRequest | AuthUserDeleteResponse | UserDelete deletes a specified user. |
| UserChangePassword | AuthUserChangePasswordRequest | AuthUserChangePasswordResponse | UserChangePassword changes the password of a specified user. |
| UserGrantRole | AuthUserGrantRoleRequest | AuthUserGrantRoleResponse | UserGrantRole grants a role to a specified user. |
| UserRevokeRole | AuthUserRevokeRoleRequest | AuthUserRevokeRoleResponse | UserRevokeRole revokes a role of specified user. |
| RoleAdd | AuthRoleAddRequest | AuthRoleAddResponse | RoleAdd adds a new role. |
| RoleGet | AuthRoleGetRequest | AuthRoleGetResponse | RoleGet gets detailed role information. |
| RoleList | AuthRoleListRequest | AuthRoleListResponse | RoleList gets lists of all roles. |
| RoleDelete | AuthRoleDeleteRequest | AuthRoleDeleteResponse | RoleDelete deletes a specified role. |
| RoleGrantPermission | AuthRoleGrantPermissionRequest | AuthRoleGrantPermissionResponse | RoleGrantPermission grants a permission of a specified key or range to a specified role. |
| RoleRevokePermission | AuthRoleRevokePermissionRequest | AuthRoleRevokePermissionResponse | RoleRevokePermission revokes a key or range permission of a specified role. |
##### service `Cluster` (etcdserver/etcdserverpb/rpc.proto)
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
| MemberAdd | MemberAddRequest | MemberAddResponse | MemberAdd adds a member into the cluster. |
| MemberRemove | MemberRemoveRequest | MemberRemoveResponse | MemberRemove removes an existing member from the cluster. |
| MemberUpdate | MemberUpdateRequest | MemberUpdateResponse | MemberUpdate updates the member configuration. |
| MemberList | MemberListRequest | MemberListResponse | MemberList lists all the members in the cluster. |
##### service `KV` (etcdserver/etcdserverpb/rpc.proto)
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
| Range | RangeRequest | RangeResponse | Range gets the keys in the range from the key-value store. |
| Put | PutRequest | PutResponse | Put puts the given key into the key-value store. A put request increments the revision of the key-value store and generates one event in the event history. |
| DeleteRange | DeleteRangeRequest | DeleteRangeResponse | DeleteRange deletes the given range from the key-value store. A delete request increments the revision of the key-value store and generates a delete event in the event history for every deleted key. |
| Txn | TxnRequest | TxnResponse | Txn processes multiple requests in a single transaction. A txn request increments the revision of the key-value store and generates events with the same revision for every completed request. It is not allowed to modify the same key several times within one txn. |
| Compact | CompactionRequest | CompactionResponse | Compact compacts the event history in the etcd key-value store. The key-value store should be periodically compacted or the event history will continue to grow indefinitely. |
##### service `Lease` (etcdserver/etcdserverpb/rpc.proto)
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
| LeaseGrant | LeaseGrantRequest | LeaseGrantResponse | LeaseGrant creates a lease which expires if the server does not receive a keepAlive within a given time to live period. All keys attached to the lease will be expired and deleted if the lease expires. Each expired key generates a delete event in the event history. |
| LeaseRevoke | LeaseRevokeRequest | LeaseRevokeResponse | LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted. |
| LeaseKeepAlive | LeaseKeepAliveRequest | LeaseKeepAliveResponse | LeaseKeepAlive keeps the lease alive by streaming keep alive requests from the client to the server and streaming keep alive responses from the server to the client. |
##### service `Maintenance` (etcdserver/etcdserverpb/rpc.proto)
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
| Alarm | AlarmRequest | AlarmResponse | Alarm activates, deactivates, and queries alarms regarding cluster health. |
| Status | StatusRequest | StatusResponse | Status gets the status of the member. |
| Defragment | DefragmentRequest | DefragmentResponse | Defragment defragments a member's backend database to recover storage space. |
| Hash | HashRequest | HashResponse | Hash returns the hash of the local KV state for consistency checking purpose. This is designed for testing; do not use this in production when there are ongoing transactions. |
| Snapshot | SnapshotRequest | SnapshotResponse | Snapshot sends a snapshot of the entire backend from a member over a stream to a client. |
##### service `Watch` (etcdserver/etcdserverpb/rpc.proto)
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
| Watch | WatchRequest | WatchResponse | Watch watches for events happening or that have happened. Both input and output are streams; the input stream is for creating and canceling watchers and the output stream sends events. One watch RPC can watch on multiple key ranges, streaming events for several watches at once. The entire event history can be watched starting from the last compaction revision. |
##### message `AlarmMember` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| memberID | memberID is the ID of the member associated with the raised alarm. | uint64 |
| alarm | alarm is the type of alarm which has been raised. | AlarmType |
##### message `AlarmRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| action | action is the kind of alarm request to issue. The action may GET alarm statuses, ACTIVATE an alarm, or DEACTIVATE a raised alarm. | AlarmAction |
| memberID | memberID is the ID of the member associated with the alarm. If memberID is 0, the alarm request covers all members. | uint64 |
| alarm | alarm is the type of alarm to consider for this request. | AlarmType |
##### message `AlarmResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| alarms | alarms is a list of alarms associated with the alarm request. | (slice of) AlarmMember |
##### message `AuthDisableRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `AuthDisableResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthEnableRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `AuthEnableResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthRoleAddRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | name is the name of the role to add to the authentication system. | string |
##### message `AuthRoleAddResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthRoleDeleteRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| role | | string |
##### message `AuthRoleDeleteResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthRoleGetRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| role | | string |
##### message `AuthRoleGetResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| perm | | (slice of) authpb.Permission |
##### message `AuthRoleGrantPermissionRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | name is the name of the role which will be granted the permission. | string |
| perm | perm is the permission to grant to the role. | authpb.Permission |
##### message `AuthRoleGrantPermissionResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthRoleListRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `AuthRoleListResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| roles | | (slice of) string |
##### message `AuthRoleRevokePermissionRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| role | | string |
| key | | string |
| range_end | | string |
##### message `AuthRoleRevokePermissionResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthUserAddRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | | string |
| password | | string |
##### message `AuthUserAddResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthUserChangePasswordRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | name is the name of the user whose password is being changed. | string |
| password | password is the new password for the user. | string |
##### message `AuthUserChangePasswordResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthUserDeleteRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | name is the name of the user to delete. | string |
##### message `AuthUserDeleteResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthUserGetRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | | string |
##### message `AuthUserGetResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| roles | | (slice of) string |
##### message `AuthUserGrantRoleRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| user | user is the name of the user which should be granted a given role. | string |
| role | role is the name of the role to grant to the user. | string |
##### message `AuthUserGrantRoleResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthUserListRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `AuthUserListResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| users | | (slice of) string |
##### message `AuthUserRevokeRoleRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | | string |
| role | | string |
##### message `AuthUserRevokeRoleResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `AuthenticateRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | | string |
| password | | string |
##### message `AuthenticateResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| token | token is an authorized token that can be used in succeeding RPCs | string |
##### message `CompactionRequest` (etcdserver/etcdserverpb/rpc.proto)
CompactionRequest compacts the key-value store up to a given revision. All superseded keys with a revision less than the compaction revision will be removed.
| Field | Description | Type |
| ----- | ----------- | ---- |
| revision | revision is the key-value store revision for the compaction operation. | int64 |
| physical | physical is set so the RPC will wait until the compaction is physically applied to the local database such that compacted entries are totally removed from the backend database. | bool |
##### message `CompactionResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `Compare` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| result | result is logical comparison operation for this comparison. | CompareResult |
| target | target is the key-value field to inspect for the comparison. | CompareTarget |
| key | key is the subject key for the comparison operation. | bytes |
| target_union | | oneof |
| version | version is the version of the given key | int64 |
| create_revision | create_revision is the creation revision of the given key | int64 |
| mod_revision | mod_revision is the last modified revision of the given key. | int64 |
| value | value is the value of the given key, in bytes. | bytes |
##### message `DefragmentRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `DefragmentResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `DeleteRangeRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | key is the first key to delete in the range. | bytes |
| range_end | range_end is the key following the last key to delete for the range [key, range_end). If range_end is not given, the range is defined to contain only the key argument. If range_end is '\0', the range is all keys greater than or equal to the key argument. | bytes |
| prev_kv | If prev_kv is set, etcd gets the previous key-value pairs before deleting them. The previous key-value pairs will be returned in the delete response. | bool |
##### message `DeleteRangeResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| deleted | deleted is the number of keys deleted by the delete range request. | int64 |
| prev_kvs | if prev_kv is set in the request, the previous key-value pairs will be returned. | (slice of) mvccpb.KeyValue |
##### message `HashRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `HashResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| hash | hash is the hash value computed from the responding member's key-value store. | uint32 |
##### message `LeaseGrantRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| TTL | TTL is the advisory time-to-live in seconds. | int64 |
| ID | ID is the requested ID for the lease. If ID is set to 0, the lessor chooses an ID. | int64 |
##### message `LeaseGrantResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| ID | ID is the lease ID for the granted lease. | int64 |
| TTL | TTL is the server chosen lease time-to-live in seconds. | int64 |
| error | | string |
##### message `LeaseKeepAliveRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the lease ID for the lease to keep alive. | int64 |
##### message `LeaseKeepAliveResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| ID | ID is the lease ID from the keep alive request. | int64 |
| TTL | TTL is the new time-to-live for the lease. | int64 |
##### message `LeaseRevokeRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the lease ID to revoke. When the ID is revoked, all associated keys will be deleted. | int64 |
##### message `LeaseRevokeResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `Member` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the member ID for this member. | uint64 |
| name | name is the human-readable name of the member. If the member is not started, the name will be an empty string. | string |
| peerURLs | peerURLs is the list of URLs the member exposes to the cluster for communication. | (slice of) string |
| clientURLs | clientURLs is the list of URLs the member exposes to clients for communication. If the member is not started, clientURLs will be empty. | (slice of) string |
##### message `MemberAddRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| peerURLs | peerURLs is the list of URLs the added member will use to communicate with the cluster. | (slice of) string |
##### message `MemberAddResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| member | member is the member information for the added member. | Member |
##### message `MemberListRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `MemberListResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| members | members is a list of all members associated with the cluster. | (slice of) Member |
##### message `MemberRemoveRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the member ID of the member to remove. | uint64 |
##### message `MemberRemoveResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `MemberUpdateRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the member ID of the member to update. | uint64 |
| peerURLs | peerURLs is the new list of URLs the member will use to communicate with the cluster. | (slice of) string |
##### message `MemberUpdateResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
##### message `PutRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | key is the key, in bytes, to put into the key-value store. | bytes |
| value | value is the value, in bytes, to associate with the key in the key-value store. | bytes |
| lease | lease is the lease ID to associate with the key in the key-value store. A lease value of 0 indicates no lease. | int64 |
| prev_kv | If prev_kv is set, etcd gets the previous key-value pair before changing it. The previous key-value pair will be returned in the put response. | bool |
##### message `PutResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| prev_kv | if prev_kv is set in the request, the previous key-value pair will be returned. | mvccpb.KeyValue |
##### message `RangeRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | key is the first key for the range. If range_end is not given, the request only looks up key. | bytes |
| range_end | range_end is the upper bound on the requested range [key, range_end). If range_end is '\0', the range is all keys >= key. If range_end is the given key plus one (the key with its last byte incremented), then the range request returns all keys with the given key as a prefix. If both key and range_end are '\0', then the range request returns all keys. | bytes |
| limit | limit is a limit on the number of keys returned for the request. | int64 |
| revision | revision is the point-in-time of the key-value store to use for the range. If revision is less or equal to zero, the range is over the newest key-value store. If the revision has been compacted, ErrCompacted is returned as a response. | int64 |
| sort_order | sort_order is the order for returned sorted results. | SortOrder |
| sort_target | sort_target is the key-value field to use for sorting. | SortTarget |
| serializable | serializable sets the range request to use serializable member-local reads. Range requests are linearizable by default; linearizable requests have higher latency and lower throughput than serializable requests but reflect the current consensus of the cluster. For better performance, in exchange for possible stale reads, a serializable range request is served locally without needing to reach consensus with other nodes in the cluster. | bool |
| keys_only | keys_only when set returns only the keys and not the values. | bool |
| count_only | count_only when set returns only the count of the keys in the range. | bool |
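For example, ranging over every key prefixed with `foo` means setting `range_end` to `foo` with its last byte incremented, i.e. `fop`. A sketch over the grpc-gateway, assuming the gateway accepts the proto field name `range_end` in JSON, as in the earlier put/range examples:
```bash
# "foo" is "Zm9v" and "fop" is "Zm9w" in base64
curl -L http://localhost:2379/v3alpha/kv/range \
  -X POST -d '{"key": "Zm9v", "range_end": "Zm9w"}'
```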
##### message `RangeResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| kvs | kvs is the list of key-value pairs matched by the range request. kvs is empty when count is requested. | (slice of) mvccpb.KeyValue |
| more | more indicates if there are more keys to return in the requested range. | bool |
| count | count is set to the number of keys within the range when requested. | int64 |
##### message `RequestOp` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| request | request is a union of request types accepted by a transaction. | oneof |
| request_range | | RangeRequest |
| request_put | | PutRequest |
| request_delete_range | | DeleteRangeRequest |
##### message `ResponseHeader` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| cluster_id | cluster_id is the ID of the cluster which sent the response. | uint64 |
| member_id | member_id is the ID of the member which sent the response. | uint64 |
| revision | revision is the key-value store revision when the request was applied. | int64 |
| raft_term | raft_term is the raft term when the request was applied. | uint64 |
##### message `ResponseOp` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| response | response is a union of response types returned by a transaction. | oneof |
| response_range | | RangeResponse |
| response_put | | PutResponse |
| response_delete_range | | DeleteRangeResponse |
##### message `SnapshotRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `SnapshotResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | header has the current key-value store information. The first header in the snapshot stream indicates the point in time of the snapshot. | ResponseHeader |
| remaining_bytes | remaining_bytes is the number of blob bytes to be sent after this message | uint64 |
| blob | blob contains the next chunk of the snapshot in the snapshot stream. | bytes |
##### message `StatusRequest` (etcdserver/etcdserverpb/rpc.proto)
Empty field.
##### message `StatusResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| version | version is the cluster protocol version used by the responding member. | string |
| dbSize | dbSize is the size of the backend database, in bytes, of the responding member. | int64 |
| leader | leader is the member ID which the responding member believes is the current leader. | uint64 |
| raftIndex | raftIndex is the current raft index of the responding member. | uint64 |
| raftTerm | raftTerm is the current raft term of the responding member. | uint64 |
##### message `TxnRequest` (etcdserver/etcdserverpb/rpc.proto)
From google paxosdb paper: Our implementation hinges around a powerful primitive which we call MultiOp. All other database operations except for iteration are implemented as a single call to MultiOp. A MultiOp is applied atomically and consists of three components:
1. A list of tests called guard. Each test in guard checks a single entry in the database. It may check for the absence or presence of a value, or compare with a given value. Two different tests in the guard may apply to the same or different entries in the database. All tests in the guard are applied and MultiOp returns the results. If all tests are true, MultiOp executes t op (see item 2 below), otherwise it executes f op (see item 3 below).
2. A list of database operations called t op. Each operation in the list is either an insert, delete, or lookup operation, and applies to a single database entry. Two different operations in the list may apply to the same or different entries in the database. These operations are executed if guard evaluates to true.
3. A list of database operations called f op. Like t op, but executed if guard evaluates to false.
| Field | Description | Type |
| ----- | ----------- | ---- |
| compare | compare is a list of predicates representing a conjunction of terms. If the comparisons succeed, then the success requests will be processed in order, and the response will contain their respective responses in order. If the comparisons fail, then the failure requests will be processed in order, and the response will contain their respective responses in order. | (slice of) Compare |
| success | success is a list of requests which will be applied when compare evaluates to true. | (slice of) RequestOp |
| failure | failure is a list of requests which will be applied when compare evaluates to false. | (slice of) RequestOp |
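As a sketch of what such a request can look like on the wire, the following grpc-gateway call puts a new value only if `foo` still equals `bar`, and otherwise reads it back. The `/v3alpha/kv/txn` path, the field names, the `VALUE` compare target, and the implicit default `EQUAL` result are assumed to follow the same proto/JSON mapping as the put and range examples above (`bmV3` is `new` in base64):
```bash
curl -L http://localhost:2379/v3alpha/kv/txn \
  -X POST -d '{
    "compare": [{"target": "VALUE", "key": "Zm9v", "value": "YmFy"}],
    "success": [{"request_put": {"key": "Zm9v", "value": "bmV3"}}],
    "failure": [{"request_range": {"key": "Zm9v"}}]
  }'
```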
##### message `TxnResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| succeeded | succeeded is set to true if the compare evaluated to true or false otherwise. | bool |
| responses | responses is a list of responses corresponding to the results from applying success if succeeded is true or failure if succeeded is false. | (slice of) ResponseOp |
##### message `WatchCancelRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| watch_id | watch_id is the watcher id to cancel so that no more events are transmitted. | int64 |
##### message `WatchCreateRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | key is the key to register for watching. | bytes |
| range_end | range_end is the end of the range [key, range_end) to watch. If range_end is not given, only the key argument is watched. If range_end is equal to '\0', all keys greater than or equal to the key argument are watched. | bytes |
| start_revision | start_revision is an optional revision to watch from (inclusive). No start_revision is "now". | int64 |
| progress_notify | progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server may decide how often it will send notifications based on current load. | bool |
| prev_kv | If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned. | bool |
##### message `WatchRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| request_union | request_union is a request to either create a new watcher or cancel an existing watcher. | oneof |
| create_request | | WatchCreateRequest |
| cancel_request | | WatchCancelRequest |
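Because Watch is a bidirectional stream, a gateway client sends a body wrapping `create_request` and then reads the streamed responses. A rough sketch; the `/v3alpha/watch` path is assumed to follow the same pattern as the other gateway endpoints, and streaming support over the gateway may vary by version, so treat this as illustrative:
```bash
# -N disables curl's buffering so streamed events show up as they arrive
curl -N http://localhost:2379/v3alpha/watch \
  -X POST -d '{"create_request": {"key": "Zm9v"}}'
# in another terminal: etcdctl put foo bar
```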
##### message `WatchResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| watch_id | watch_id is the ID of the watcher that corresponds to the response. | int64 |
| created | created is set to true if the response is for a create watch request. The client should record the watch_id and expect to receive events for the created watcher from the same stream. All events sent to the created watcher will attach with the same watch_id. | bool |
| canceled | canceled is set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher. | bool |
| compact_revision | compact_revision is set to the minimum index if a watcher tries to watch at a compacted index. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The client should treat the watcher as canceled and should not try to create any watcher with the same start_revision again. | int64 |
| events | | (slice of) mvccpb.Event |
##### message `Event` (mvcc/mvccpb/kv.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| type | type is the kind of event. If type is a PUT, it indicates new data has been stored to the key. If type is a DELETE, it indicates the key was deleted. | EventType |
| kv | kv holds the KeyValue for the event. A PUT event contains current kv pair. A PUT event with kv.Version=1 indicates the creation of a key. A DELETE/EXPIRE event contains the deleted key with its modification revision set to the revision of deletion. | KeyValue |
| prev_kv | prev_kv holds the key-value pair before the event happens. | KeyValue |
##### message `KeyValue` (mvcc/mvccpb/kv.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | key is the key in bytes. An empty key is not allowed. | bytes |
| create_revision | create_revision is the revision of last creation on this key. | int64 |
| mod_revision | mod_revision is the revision of last modification on this key. | int64 |
| version | version is the version of the key. A deletion resets the version to zero and any modification of the key increases its version. | int64 |
| value | value is the value held by the key, in bytes. | bytes |
| lease | lease is the ID of the lease that attached to key. When the attached lease expires, the key will be deleted. If lease is 0, then no lease is attached to the key. | int64 |
##### message `Lease` (lease/leasepb/lease.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | | int64 |
| TTL | | int64 |
##### message `Permission` (auth/authpb/auth.proto)
Permission is a single entity
| Field | Description | Type |
| ----- | ----------- | ---- |
| permType | | Type |
| key | | bytes |
| range_end | | bytes |
##### message `Role` (auth/authpb/auth.proto)
Role is a single entry in the bucket authRoles
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | | bytes |
| keyPermission | | (slice of) Permission |
##### message `User` (auth/authpb/auth.proto)
User is a single entry in the bucket authUsers
| Field | Description | Type |
| ----- | ----------- | ---- |
| name | | bytes |
| password | | bytes |
| roles | | (slice of) string |

File diff suppressed because it is too large


@@ -0,0 +1,8 @@
# Experimental APIs and features
For the most part, the etcd project is stable, but we are still moving fast! We believe in a release-fast philosophy and want early feedback on features still in development and stabilizing. Thus there are, and will be more, experimental features and APIs. Over the next few releases we plan to improve these features based on early feedback from the community, or abandon them if there is little interest. If you are running a production system, please do not rely on any experimental features or APIs.
## Current experimental APIs and features
- v3 auth API: expect to be stable in 3.1 release
- etcd gateway: expect to be stable in 3.1 release


@@ -0,0 +1,243 @@
# Interacting with etcd
Users mostly interact with etcd by putting or getting the value of a key. This section describes how to do that by using etcdctl, a command line tool for interacting with the etcd server. The concepts described here should also apply to the gRPC APIs or client library APIs.
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable.
``` bash
export ETCDCTL_API=3
```
## Write a key
Applications store data in the etcd cluster by writing to keys. Every stored key is replicated to all etcd cluster members through the Raft protocol to achieve consistency and reliability.
Here is the command to set the value of key `foo` to `bar`:
``` bash
$ etcdctl put foo bar
OK
```
## Read keys
Applications can read values of keys from an etcd cluster. Queries may read a single key, or a range of keys.
Suppose the etcd cluster has stored the following keys:
```
foo = bar
foo1 = bar1
foo3 = bar3
```
Here is the command to read the value of key `foo`:
```bash
$ etcdctl get foo
foo
bar
```
Here is the command to range over the keys from `foo` to `foo9`:
```bash
$ etcdctl get foo foo9
foo
bar
foo1
bar1
foo3
bar3
```
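Keys sharing a prefix can also be fetched with the `--prefix` flag instead of spelling out the range end; a sketch, assuming the flag is available in this etcdctl build:
```bash
$ etcdctl get --prefix foo
foo
bar
foo1
bar1
foo3
bar3
```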
## Read past version of keys
Applications may want to read superseded versions of a key. For example, an application may wish to roll back to an old configuration by accessing an earlier version of a key. Alternatively, an application may want a consistent view over multiple keys through multiple requests by accessing key history.
Since every modification to the etcd cluster key-value store increments the global revision of an etcd cluster, an application can read superseded keys by providing an older etcd revision.
Suppose an etcd cluster already has the following keys:
``` bash
$ etcdctl put foo bar # revision = 2
$ etcdctl put foo1 bar1 # revision = 3
$ etcdctl put foo bar_new # revision = 4
$ etcdctl put foo1 bar1_new # revision = 5
```
Here is an example of accessing past versions of keys:
```bash
$ etcdctl get foo foo9 # access the most recent versions of keys
foo
bar_new
foo1
bar1_new
$ etcdctl get --rev=4 foo foo9 # access the versions of keys at revision 4
foo
bar_new
foo1
bar1
$ etcdctl get --rev=3 foo foo9 # access the versions of keys at revision 3
foo
bar
foo1
bar1
$ etcdctl get --rev=2 foo foo9 # access the versions of keys at revision 2
foo
bar
$ etcdctl get --rev=1 foo foo9 # access the versions of keys at revision 1
```
## Delete keys
Applications can delete a key or a range of keys from an etcd cluster.
Here is the command to delete key `foo`:
```bash
$ etcdctl del foo
1 # one key is deleted
```
Here is the command to delete keys ranging from `foo` to `foo9`:
```bash
$ etcdctl del foo foo9
2 # two keys are deleted
```
## Watch key changes
Applications can watch on a key or a range of keys to monitor for any updates.
Here is the command to watch on key `foo`:
```bash
$ etcdctl watch foo
# in another terminal: etcdctl put foo bar
foo
bar
```
Here is the command to watch on a range key from `foo` to `foo9`:
```bash
$ etcdctl watch foo foo9
# in another terminal: etcdctl put foo bar
foo
bar
# in another terminal: etcdctl put foo1 bar1
foo1
bar1
```
## Watch historical changes of keys
Applications may want to watch for historical changes of keys in etcd. For example, an application may wish to receive all the modifications of a key; if the application stays connected to etcd, then `watch` is good enough. However, if the application or etcd fails, a change may happen during the failure, and the application will not receive the update in real time. To guarantee the update is delivered, the application must be able to watch for historical changes to keys. To do this, an application can specify a historical revision on a watch, just like reading past versions of keys.
Suppose we finished the following sequence of operations:
``` bash
etcdctl put foo bar # revision = 2
etcdctl put foo1 bar1 # revision = 3
etcdctl put foo bar_new # revision = 4
etcdctl put foo1 bar1_new # revision = 5
```
Here is an example to watch the historical changes:
```bash
# watch for changes on key `foo` since revision 2
$ etcdctl watch --rev=2 foo
PUT
foo
bar
PUT
foo
bar_new
# watch for changes on key `foo` since revision 3
$ etcdctl watch --rev=3 foo
PUT
foo
bar_new
```
## Compacted revisions
As we mentioned, etcd keeps revisions so that applications can read past versions of keys. However, to avoid accumulating an unbounded amount of history, it is important to compact past revisions. After compacting, etcd removes historical revisions, releasing resources for future use. All superseded data with revisions before the compacted revision will be unavailable.
Here is the command to compact the revisions:
```bash
$ etcdctl compact 5
compacted revision 5
# any revisions before the compacted one are not accessible
$ etcdctl get --rev=4 foo
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted
```
## Grant leases
Applications can grant leases for keys from an etcd cluster. When a key is attached to a lease, its lifetime is bound to the lease's lifetime which in turn is governed by a time-to-live (TTL). Each lease has a minimum time-to-live (TTL) value specified by the application at grant time. The lease's actual TTL value is at least the minimum TTL and is chosen by the etcd cluster. Once a lease's TTL elapses, the lease expires and all attached keys are deleted.
Here is the command to grant a lease:
```
# grant a lease with 10 second TTL
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
# attach key foo to lease 32695410dcc0ca06
$ etcdctl put --lease=32695410dcc0ca06 foo bar
OK
```
## Revoke leases
Applications revoke leases by lease ID. Revoking a lease deletes all of its attached keys.
Suppose we finished the following sequence of operations:
```
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
$ etcdctl put --lease=32695410dcc0ca06 foo bar
OK
```
Here is the command to revoke the same lease:
```
$ etcdctl lease revoke 32695410dcc0ca06
lease 32695410dcc0ca06 revoked
$ etcdctl get foo
# empty response since foo is deleted due to lease revocation
```
## Keep leases alive
Applications can keep a lease alive by refreshing its TTL so it does not expire.
Suppose we finished the following sequence of operations:
```
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
```
Here is the command to keep the same lease alive:
```
$ etcdctl lease keep-alive 32695410dcc0ca06
lease 32695410dcc0ca06 keepalived with TTL(10)
lease 32695410dcc0ca06 keepalived with TTL(10)
lease 32695410dcc0ca06 keepalived with TTL(10)
...
```


@@ -0,0 +1,90 @@
# Setup a local cluster
For testing and development deployments, the quickest and easiest way is to set up a local cluster. For a production deployment, refer to the [clustering][clustering] section.
## Local standalone cluster
Deploying an etcd cluster as a standalone cluster is straightforward. Start it with just one command:
```
$ ./etcd
...
```
The started etcd member listens on `localhost:2379` for client requests.
To interact with the started cluster, use etcdctl:
```
# use API version 3
$ export ETCDCTL_API=3
$ ./etcdctl put foo bar
OK
$ ./etcdctl get foo
bar
```
## Local multi-member cluster
A Procfile is provided to easily set up a local multi-member cluster. Start a multi-member cluster with a few commands:
```
# install the goreman program to control Procfile-based applications.
$ go get github.com/mattn/goreman
$ goreman -f Procfile start
...
```
The started members listen on `localhost:12379`, `localhost:22379`, and `localhost:32379` for client requests respectively.
To interact with the started cluster, use etcdctl:
```
# use API version 3
$ export ETCDCTL_API=3
$ etcdctl --write-out=table --endpoints=localhost:12379 member list
+------------------+---------+--------+------------------------+------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:12380 | http://127.0.0.1:12379 |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
$ etcdctl --endpoints=localhost:12379 put foo bar
OK
```
To exercise etcd's fault tolerance, kill a member:
```
# kill etcd2
$ goreman run stop etcd2
$ etcdctl --endpoints=localhost:12379 put key hello
OK
$ etcdctl --endpoints=localhost:12379 get key
hello
# try to get key from the killed member
$ etcdctl --endpoints=localhost:22379 get key
2016/04/18 23:07:35 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:22379: getsockopt: connection refused"; Reconnecting to "localhost:22379"
Error: grpc: timed out trying to connect
# restart the killed member
$ goreman run restart etcd2
# get the key from restarted member
$ etcdctl --endpoints=localhost:22379 get key
hello
```
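To confirm that every member is serving again, the endpoints can be probed directly; a sketch, assuming the `endpoint health` subcommand is available in this etcdctl build:
```
# check each endpoint
$ etcdctl --endpoints=localhost:12379,localhost:22379,localhost:32379 endpoint health
# each endpoint reports whether it is healthy and how long the check took
```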
To learn more about interacting with etcd, read [interacting with etcd section][interacting].
[interacting]: ./interacting_v3.md
[clustering]: ../op-guide/clustering.md


@@ -0,0 +1,113 @@
# Discovery service protocol
The discovery service protocol helps a new etcd member discover all the other members during the cluster bootstrap phase using a shared discovery URL.
The discovery service protocol is _only_ used in the cluster bootstrap phase, and cannot be used for runtime reconfiguration or cluster monitoring.
The protocol uses a new discovery token to bootstrap one _unique_ etcd cluster. Remember that one discovery token can represent only one etcd cluster. Once the discovery protocol on a token has started, even if it fails halfway, the token must not be used to bootstrap another etcd cluster.
The rest of this article will walk through the discovery process with examples that correspond to a self-hosted discovery cluster. The public discovery service, discovery.etcd.io, functions the same way, but with a layer of polish to abstract away ugly URLs, generate UUIDs automatically, and provide some protections against excessive requests. At its core, the public discovery service still uses an etcd cluster as the data store as described in this document.
## Protocol workflow
The idea of the discovery protocol is to use an internal etcd cluster to coordinate the bootstrap of a new cluster. First, all new members interact with the discovery service and help generate the expected member list. Then each new member bootstraps its server using this list, which performs the same function as the -initial-cluster flag.
In the following example workflow, we list each step of the protocol as curl commands for ease of understanding.
By convention the etcd discovery protocol uses the key prefix `_etcd/registry`. If `http://example.com` hosts an etcd cluster for the discovery service, the full URL to the discovery keyspace will be `http://example.com/v2/keys/_etcd/registry`. We will use this as the URL prefix in the examples.
### Creating a new discovery token
Generate a unique token that will identify the new cluster. This will be used as a unique prefix in discovery keyspace in the following steps. An easy way to do this is to use `uuidgen`:
```
UUID=$(uuidgen)
```
### Specifying the expected cluster size
A cluster size must be specified for the discovery token. The size is used by the discovery service to know when it has found all the members that will initially form the cluster.
```
curl -X PUT http://example.com/v2/keys/_etcd/registry/${UUID}/_config/size -d value=${cluster_size}
```
Usually the cluster size is 3, 5 or 7. Check [optimal cluster size][cluster-size] for more details.
### Bringing up etcd processes
Given the discovery URL, use it as the `-discovery` flag and bring up the etcd processes. Every etcd process given a `-discovery` flag follows the next few steps internally.
### Registering itself
The first thing an etcd process does is register itself under the discovery URL as a member. This is done by creating its member ID as a key under the discovery URL.
```
curl -X PUT http://example.com/v2/keys/_etcd/registry/${UUID}/${member_id}?prevExist=false -d value="${member_name}=${member_peer_url_1}&${member_name}=${member_peer_url_2}"
```
### Checking the status
It checks the expected cluster size and the registration status under the discovery URL, and decides what the next action is.
```
curl -X GET http://example.com/v2/keys/_etcd/registry/${UUID}/_config/size
curl -X GET http://example.com/v2/keys/_etcd/registry/${UUID}
```
If not enough members have registered yet, it waits for the remaining members to appear.
If the number of registered members is larger than the expected size N, it treats the first N registered members as the member list for the cluster. If the member itself is in the member list, the discovery procedure succeeds and it fetches all its peers through the member list. If it is not in the member list, the discovery procedure fails because the cluster is already full.
In the etcd implementation, a member may check the cluster status even before registering itself, so it can fail fast if the cluster is already full.
### Waiting for all members
The wait process is described in detail in the [etcd API documentation][api].
```
curl -X GET http://example.com/v2/keys/_etcd/registry/${UUID}?wait=true&waitIndex=${current_etcd_index}
```
It keeps waiting until all members are found.
## Public discovery service
CoreOS Inc. hosts a public discovery service at https://discovery.etcd.io/ , which provides some nice features for ease of use.
### Mask key prefix
The public discovery service redirects `https://discovery.etcd.io/${UUID}` to the corresponding key under `/v2/keys/_etcd/registry` on the backing etcd cluster. It masks the registry key prefix to keep discovery URLs short and readable.
### Get new token
```
GET /new
Sent query:
size=${cluster_size}
Possible status codes:
200 OK
400 Bad Request
200 Body:
generated discovery url
```
The generation process in the service follows the steps from [Creating a New Discovery Token][new-discovery-token] to [Specifying the Expected Cluster Size][expected-cluster-size].
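Putting it together, bootstrapping through the public service typically looks like this; the returned token is shown as a placeholder:
```
# request a discovery URL for a 3-member cluster
curl 'https://discovery.etcd.io/new?size=3'
# => https://discovery.etcd.io/<token>
# pass the returned URL to each new member
etcd --name <member_name> ... --discovery https://discovery.etcd.io/<token>
```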
### Check discovery status
```
GET /${UUID}
```
The status for this discovery token, including the machines that have been registered, can be checked by requesting the value of the UUID.
### Open-source repository
The repository is located at https://github.com/coreos/discovery.etcd.io. It could be used to build a custom discovery service.
[api]: ../v2/api.md#waiting-for-a-change
[cluster-size]: ../v2/admin_guide.md#optimal-cluster-size
[expected-cluster-size]: #specifying-the-expected-cluster-size
[new-discovery-token]: #creating-a-new-discovery-token


@@ -0,0 +1,29 @@
# Logging conventions
etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*. A log message's level is determined according to these conventions:
* Error: Data has been lost, a request has failed for a bad reason, or a required resource has been lost
* Examples:
* A failure to allocate disk space for WAL
* Warning: (Hopefully) temporary conditions that may cause errors but may recover on their own. A replica disappearing (that may reconnect) is a warning.
* Examples:
* Failure to send raft message to a remote peer
* Failure to receive heartbeat message within the configured election timeout
* Notice: Normal, but important (uncommon) log information.
* Examples:
* Add a new node into the cluster
* Add a new user into auth subsystem
* Info: Normal, working log information, everything is fine, but helpful notices for auditing or common operations.
* Examples:
* Startup configuration
* Start to do snapshot
* Debug: Everything is still fine, but even common operations may be logged, producing a larger volume of less important notices.
* Examples:
* Send a normal message to a remote peer
* Write a log entry to disk
[capnslog]: https://github.com/coreos/pkg/tree/master/capnslog


@@ -0,0 +1,109 @@
# etcd release guide
This guide describes how to release a new version of etcd.
The procedure includes some manual steps for sanity checking but it can probably be further scripted. Please keep this document up-to-date if making changes to the release process.
## Prepare release
Set the desired version as an environment variable for the following steps. Here is an example to release 2.3.0:
```
export VERSION=v2.3.0
export PREV_VERSION=v2.2.5
```
All release version numbers follow the format of [semantic versioning 2.0.0](http://semver.org/).
### Major, minor version release, or its pre-release
- Ensure the relevant milestone on GitHub is complete. All referenced issues should be closed, or moved elsewhere.
- Remove this release from [roadmap](https://github.com/coreos/etcd/blob/master/ROADMAP.md), if necessary.
- Ensure the latest upgrade documentation is available.
- Bump the [hardcoded MinClusterVersion in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L29), if necessary.
- Add feature capability maps for the new version, if necessary.
### Patch version release
- Discuss which commits should be backported to the patch release. The commits should not include merge commits.
- Cherry-pick these commits, starting from the oldest, into the stable branch.
## Write release note
- Write an introduction for the new release: for example, what major bugs were fixed, what new features were introduced, or what performance improvements were made.
- Write a changelog for the release. The changelog should be straightforward and easy to understand for the end user.
- Put `[GH XXXX]` at the head of each change line to reference the pull request that introduces the change, and link it to the pull request.
## Tag version
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`.
- Ensure all tests on the CI system pass.
- Manually check that etcd builds on Linux, Darwin, and Windows.
- Manually check that upgrading an etcd cluster from the previous minor version works well.
- Manually check new features work well.
- Add a signed tag through `git tag -s ${VERSION}`.
- Sanity check tag correctness through `git show tags/$VERSION`.
- Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
## Build release binaries and images
- Ensure `actool` is available, or install it with `go get github.com/appc/spec/actool`.
- Ensure `docker` is available.
Run the release script in the root directory:
```
./scripts/release.sh ${VERSION}
```
It generates all release binaries and images under the `./release` directory.
## Sign binaries and images
The etcd project key must be used to sign the generated binaries and images. `$SUBKEYID` is the key ID of the etcd project Yubikey. Connect the key and run `gpg2 --card-status` to get the ID.
The following commands are used to sign a public release:
```
cd release
for i in etcd-*{.zip,.tar.gz}; do gpg2 --default-key $SUBKEYID --armor --output ${i}.asc --detach-sign ${i}; done
for i in etcd-*{.zip,.tar.gz}; do gpg2 --verify ${i}.asc ${i}; done
```
The public key for GPG signing can be found at [CoreOS Application Signing Key](https://coreos.com/security/app-signing-key)
## Publish release page in GitHub
- Set release title as the version name.
- Follow the format of previous release pages.
- Attach the generated binaries, aci image and signatures.
- Select whether it is a pre-release.
- Publish the release!
## Publish docker image in Quay.io
- Push docker image:
```
docker login quay.io
docker push quay.io/coreos/etcd:${VERSION}
```
- Add `latest` tag to the new image on [quay.io](https://quay.io/repository/coreos/etcd?tag=latest&tab=tags) if this is a stable release.
## Announce to the etcd-dev Googlegroup
- Follow the format of [previous release emails](https://groups.google.com/forum/#!forum/etcd-dev).
- Make sure to include a list of authors that contributed since the previous release - something like the following might be handy:
```
git log ...${PREV_VERSION} --pretty=format:"%an" | sort | uniq | tr '\n' ',' | sed -e 's#,#, #g' -e 's#, $##'
```
- Send email to etcd-dev@googlegroups.com
## Post release
- Create a new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`.

View File

@@ -1,106 +0,0 @@
# etcd release guide
The guide talks about how to release a new version of etcd.
The procedure includes some manual steps for sanity checking but it can probably be further scripted. Please keep this document up-to-date if you want to make changes to the release process.
## Prepare Release
Set desired version as environment variable for following steps. Here is an example to release 2.3.0:
```
export VERSION=v2.3.0
export PREV_VERSION=v2.2.5
```
All releases version numbers follow the format of [semantic versioning 2.0.0](http://semver.org/).
### Major, Minor Version Release, or its Pre-release
- Ensure the relevant milestone on GitHub is complete. All referenced issues should be closed, or moved elsewhere.
- Remove this release from [roadmap](https://github.com/coreos/etcd/blob/master/ROADMAP.md), if necessary.
- Ensure the latest upgrade documentation is available.
- Bump [hardcoded MinClusterVerion in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L29), if necessary.
- Add feature capability maps for the new version, if necessary.
### Patch Version Release
- Discuss about commits that are backported to the patch release. The commits should not include merge commits.
- Cherry-pick these commits starting from the oldest one into stable branch.
## Write Release Note
- Write introduction for the new release. For example, what major bug we fix, what new features we introduce or what performance improvement we make.
- Write changelog for the last release. ChangeLog should be straightforward and easy to understand for the end-user.
- Put `[GH XXXX]` at the head of change line to reference Pull Request that introduces the change. Moreover, add a link on it to jump to the Pull Request.
## Tag Version
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`.
- Ensure all tests on CI system are passed.
- Manually check etcd is buildable in Linux, Darwin and Windows.
- Manually check upgrade etcd cluster of previous minor version works well.
- Manually check new features work well.
- Add a signed tag through `git tag -s ${VERSION}`.
- Sanity check tag correctness through `git show tags/$VERSION`.
- Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
## Build Release Binaries and Images
- Ensure `actool` is available, or installing it through `go get github.com/appc/spec/actool`.
- Ensure `docker` is available.
Run release script in root directory:
```
./scripts/release.sh ${VERSION}
```
It generates all release binaries and images under directory ./release.
## Sign Binaries and Images
etcd project key must be used to sign the generated binaries and images.`$SUBKEYID` is the key ID of etcd project Yubikey. Connect the key and run `gpg2 --card-status` to get the ID.
The following commands are used for public release sign:
```
cd release
for i in etcd-*{.zip,.tar.gz}; do gpg2 --default-key $SUBKEYID --output ${i}.asc --detach-sign ${i}; done
for i in etcd-*{.zip,.tar.gz}; do gpg2 --verify ${i}.asc ${i}; done
```
## Publish Release Page in GitHub
- Set release title as the version name.
- Follow the format of previous release pages.
- Attach the generated binaries, aci image and signatures.
- Select whether it is a pre-release.
- Publish the release!
## Publish Docker Image in Quay.io
- Push docker image:
```
docker login quay.io
docker push quay.io/coreos/etcd:${VERSION}
```
- Add `latest` tag to the new image on [quay.io](https://quay.io/repository/coreos/etcd?tag=latest&tab=tags) if this is a stable release.
## Announce to etcd-dev Googlegroup
- Follow the format of [previous release emails](https://groups.google.com/forum/#!forum/etcd-dev).
- Make sure to include a list of authors that contributed since the previous release - something like the following might be handy:
```
git log ...${PREV_VERSION} --pretty=format:"%an" | sort | uniq | tr '\n' ',' | sed -e 's#,#, #g' -e 's#, $##'
```
- Send email to etcd-dev@googlegroups.com
## Post Release
- Create new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`.

56
Documentation/dl_build.md Normal file
View File

@@ -0,0 +1,56 @@
# Download and build
## System requirements
The etcd performance benchmarks run etcd on 8 vCPU, 16GB RAM, 50GB SSD GCE instances, but any relatively modern machine with low latency storage and a few gigabytes of memory should suffice for most use cases. Applications with large v2 data stores will require more memory than a large v3 data store since data is kept in anonymous memory instead of memory mapped from a file. For running etcd on a cloud provider, we suggest at least a medium instance on AWS or a standard-1 instance on GCE.
## Download the pre-built binary
The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, appc, and Docker. Instructions for using these binaries are on the [GitHub releases page][github-release].
## Build the latest version
For those wanting to try the very latest version, build etcd from the `master` branch.
[Go](https://golang.org/) version 1.6+ (with HTTP2 support) is required to build the latest version of etcd.
Here are the commands to build an etcd binary from the `master` branch:
```
# go is required
$ go version
go version go1.6 darwin/amd64
# GOPATH should be set correctly
$ echo $GOPATH
/Users/example/go
$ mkdir -p $GOPATH/src/github.com/coreos
$ cd $GOPATH/src/github.com/coreos
$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd
...
```
## Test the installation
Check the etcd binary is built correctly by starting etcd and setting a key.
Start etcd:
```
$ ./bin/etcd
```
Set a key:
```
$ ETCDCTL_API=3 ./bin/etcdctl put foo bar
OK
```
If OK is printed, then etcd is working!
[github-release]: https://github.com/coreos/etcd/releases/
[go]: https://golang.org/doc/install

73
Documentation/docs.md Normal file
View File

@@ -0,0 +1,73 @@
# Documentation
etcd is a distributed key-value store designed to reliably and quickly preserve and provide access to critical data. It enables reliable distributed coordination through distributed locking, leader elections, and write barriers. An etcd cluster is intended for high availability and permanent data storage and retrieval.
## Getting started
New etcd users and developers should get started by [downloading and building][download_build] etcd. After getting etcd, follow this [quick demo][demo] to see the basics of creating and working with an etcd cluster.
## Developing with etcd
The easiest way to get started using etcd as a distributed key-value store is to [set up a local cluster][local_cluster].
- [Setting up local clusters][local_cluster]
- [Interacting with etcd][interacting]
- [API references][api_ref]
- [gRPC gateway][api_grpc_gateway]
- [Experimental features and APIs][experimental]
## Operating etcd clusters
Administrators who need to create reliable and scalable key-value stores for the developers they support should begin with a [cluster on multiple machines][clustering].
- [Setting up clusters][clustering]
- [Run etcd clusters inside containers][container]
- [Configuration][conf]
- [Security][security]
- Monitoring
- [Maintenance][maintenance]
- [Understand failures][failures]
- [Disaster recovery][recovery]
- [Performance][performance]
- [Versioning][versioning]
- [Supported platform][supported_platform]
## Learning
To learn more about the concepts and internals behind etcd, read the following pages:
- Why etcd (TODO)
- [Understand data model][data_model]
- [Understand APIs][understand_apis]
- [Glossary][glossary]
- Internals (TODO)
## Upgrading and compatibility
- [Migrate applications from using API v2 to API v3][v2_migration]
- [Updating v2.3 to v3.0][v3_upgrade]
## Troubleshooting
[api_ref]: dev-guide/api_reference_v3.md
[api_grpc_gateway]: dev-guide/api_grpc_gateway.md
[clustering]: op-guide/clustering.md
[conf]: op-guide/configuration.md
[data_model]: learning/data_model.md
[demo]: demo.md
[download_build]: dl_build.md
[failures]: op-guide/failures.md
[glossary]: learning/glossary.md
[interacting]: dev-guide/interacting_v3.md
[local_cluster]: dev-guide/local_cluster.md
[performance]: op-guide/performance.md
[recovery]: op-guide/recovery.md
[maintenance]: op-guide/maintenance.md
[security]: op-guide/security.md
[v2_migration]: op-guide/v2-migration.md
[container]: op-guide/container.md
[understand_apis]: learning/api.md
[versioning]: op-guide/versioning.md
[supported_platform]: op-guide/supported-platform.md
[experimental]: dev-guide/experimental_apis.md
[v3_upgrade]: upgrades/upgrade_3_0.md

View File

@@ -1,83 +0,0 @@
# FAQ
## 1) How come I can read an old version of the data when a majority of the members are down?
In situations where a client connects to a minority, etcd
favors by default availability over consistency. This means that even though
data might be “out of date”, it is still better to return something versus
nothing.
In order to confirm that a read is up to date with a majority of the cluster,
the client can use the `quorum=true` parameter on reads of keys. This means
that a majority of the cluster is checked on reads before returning the data,
otherwise the read will timeout and fail.
## 2) With quorum=false, doesn't this mean that if my client switched the member it was connected to, that it could experience a logical ordering where the cluster goes backwards in time?
Yes, but this could be handled at the etcd client implementation via
remembering the last seen index. The “index” is the cluster's single
irrevocable sequence of the entire modification history. The client could
remember the last seen index, and determine via comparing the index returned on
the GET whether or not the state of the key-value pair is before or after its
last seen state.
## 3) What happens if a watch is registered on a minority member?
The watch will stay untriggered, even as modifications are occurring in the
majority quorum. This is an open issue, and is being addressed in v3. There are
multiple ways to work around the watch trigger not firing.
1) build a signaling mechanism independent of etcd. This could be as simple as
a “pulse” to the client to reissue a GET with quorum=true for the most recent
version of the data.
2) poll on the `/v2/keys` endpoint and check that the raft-index is increasing every
timeout.
## 4) What is a proxy used for?
A proxy is a redirection server to the etcd cluster. The proxy handles the
redirection of a client to the current configuration of the etcd cluster. A
typical use case is to start a proxy on a machine, and on first boot up of the
proxy specify both the `--proxy` flag and the `--initial-cluster` flag.
From there, any etcdctl client that starts up automatically speaks to the local
proxy and the proxy redirects operations to the current configuration of the
cluster it was originally paired with.
In the v2 spec of etcd, proxies cannot be promoted to members of the cluster.
They also cannot be promoted to followers or at any point become part of the
replication of the etcd cluster itself.
## 5) How is cluster membership and health handled in etcd v2?
The design goal of etcd is that reconfiguration is simply an API, and health
monitoring and addition/removal of members is up to the individual application
and their integration with the reconfiguration API.
Thus, a member that is down, even infinitely, will never be automatically
removed from the etcd cluster member list.
This makes sense because it's usually an application level / administrative
action to determine whether a reconfiguration should happen based on health.
For more information, refer to the [runtime reconfiguration design document][runtime-reconf-design].
## 6) how does --endpoint work with etcdctl?
The `--endpoint` flag can specify any number of etcd cluster members in a comma
separated list. This list might be a subset, equal to, or more than the actual
etcd cluster member list itself.
If only one peer is specified via the `--endpoint` flag, the etcdctl discovers the
rest of the cluster via the member list of that one peer, and then it randomly
chooses a member to use. Again, the client can use the `quorum=true` flag on
reads, which will always fail when using a member in the minority.
If peers from multiple clusters are specified via the `--endpoint` flag, etcdctl
will randomly choose a peer, and the request will simply get routed to one of
the clusters. This is probably not what you want.
Note: --peers flag is now deprecated and --endpoint should be used instead,
as it might confuse users to give etcdctl a peerURL.
[runtime-reconf-design]: runtime-reconf-design.md

View File

@@ -0,0 +1,57 @@
# etcd3 API
NOTE: this doc is not finished!
## Response header
All responses from the etcd API have a [response header][response_header] attached. The response header includes the metadata of the response.
```proto
message ResponseHeader {
uint64 cluster_id = 1;
uint64 member_id = 2;
int64 revision = 3;
uint64 raft_term = 4;
}
```
* Cluster_ID - the ID of the cluster that generates the response
* Member_ID - the ID of the member that generates the response
* Revision - the revision of the key-value store when the response is generated
* Raft_Term - the Raft term of the member when the response is generated
An application may read the Cluster_ID (Member_ID) field to ensure it is communicating with the intended cluster (member).
Applications can use the `Revision` to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make a `time travel query` and wish to know the latest revision at the time of the request.
Applications can use `Raft_Term` to detect when the cluster completes a new leader election.
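As an illustrative sketch (assuming a local member on the default client URL and an etcdctl build that supports the JSON write-out format), the response header can be inspected from the command line:
```
$ ETCDCTL_API=3 etcdctl put foo bar
OK
$ ETCDCTL_API=3 etcdctl get foo -w json
```
The JSON output includes a `header` object carrying the `cluster_id`, `member_id`, `revision`, and `raft_term` fields described above.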
## Key-Value API
The key-value API is used to manipulate key-value pairs stored inside etcd. The key-value API is defined as a [gRPC service][kv-service]. The key-value pair is defined as structured data in [protobuf format][kv-proto].
### Key-Value pair
A key-value pair is the smallest unit that the key-value API can manipulate. Each key-value pair has a number of fields:
```protobuf
message KeyValue {
bytes key = 1;
int64 create_revision = 2;
int64 mod_revision = 3;
int64 version = 4;
bytes value = 5;
int64 lease = 6;
}
```
* Key - key in bytes. An empty key is not allowed.
* Value - value in bytes.
* Version - version is the version of the key. A deletion resets the version to zero and any modification of the key increases its version.
* Create_Revision - revision of the last creation on the key.
* Mod_Revision - revision of the last modification on the key.
* Lease - the ID of the lease attached to the key. If lease is 0, then no lease is attached to the key.
[kv-proto]: https://github.com/coreos/etcd/blob/master/mvcc/mvccpb/kv.proto
[kv-service]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[response_header]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto

View File

@@ -0,0 +1,63 @@
# KV API guarantees
etcd is a consistent and durable key value store with mini-transaction (TODO: link to txn doc when we have it) support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.
### APIs to consider
* Read APIs
* range
* watch
* Write APIs
* put
* delete
* Combination (read-modify-write) APIs
* txn
### etcd specific definitions
#### Operation completed
An etcd operation is considered complete when it is committed through consensus, and therefore “executed” -- permanently stored -- by the etcd storage engine. The client knows an operation is completed when it receives a response from the etcd server. Note that the client may be uncertain about the status of an operation if it times out, or there is a network disruption between the client and the etcd member. etcd may also abort operations when there is a leader election. etcd does not send `abort` responses to clients' outstanding requests in this event.
#### Revision
An etcd operation that modifies the key value store is assigned a single increasing revision. A transaction operation might modify the key value store multiple times, but only one revision is assigned. The revision attribute of a key value pair that is modified by the operation has the same value as the revision of the operation. The revision can be used as a logical clock for the key value store. A key value pair that has a larger revision is modified after a key value pair with a smaller revision. Two key value pairs that have the same revision are modified by an operation "concurrently".
### Guarantees provided
#### Atomicity
All API requests are atomic; an operation either completes entirely or not at all. For watch requests, all events generated by one operation will be in one watch response. Watch never observes partial events for a single operation.
#### Consistency
All API calls ensure [sequential consistency][seq_consistency], the strongest consistency guarantee available from distributed systems. No matter which etcd member server a client makes requests to, a client reads the same events in the same order. If two members complete the same number of operations, the state of the two members is consistent.
For watch operations, etcd guarantees to return the same value for the same key across all members for the same revision. For range operations, etcd has a similar guarantee for [linearized][Linearizability] access; serialized access may be behind the quorum state, so that the later revision is not yet available.
As with all distributed systems, it is impossible for etcd to ensure [strict consistency][strict_consistency]. etcd does not guarantee that it will return to a read the “most recent” value (as measured by a wall clock when a request is completed) available on any cluster member.
#### Isolation
etcd ensures [serializable isolation][serializable_isolation], which is the highest isolation level available in distributed systems. Read operations will never observe any intermediate data.
#### Durability
Any completed operations are durable. All accessible data is also durable data. A read will never return data that has not been made durable.
#### Linearizability
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another, that operation must also precede the other operation in the sequence.
For example, consider a client completing a write at time point 1 (*t1*). A client issuing a read at *t2* (for *t2* > *t1*) should receive a value at least as recent as the previous write, completed at *t1*. However, the read might actually complete only by *t3*, and the returned value, current at *t2* when the read began, might be "stale" by *t3*.
etcd does not ensure linearizability for watch operations. Users are expected to verify the revision of watch responses to ensure correct ordering.
etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request's consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus.
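As a sketch of this trade-off (assuming an etcdctl build whose `get` command supports the `--consistency` flag), the two read modes can be compared from the command line:
```
$ ETCDCTL_API=3 etcdctl get foo                     # linearizable read (default); goes through consensus
$ ETCDCTL_API=3 etcdctl get foo --consistency="s"   # serializable read; may lag the quorum state
```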
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #linearizability

View File

@@ -0,0 +1,25 @@
# Data model
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generate a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
### Logical view
The store's logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys so range queries are inexpensive.
The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of keys can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to recover space, revisions before the compaction revision will be removed.
A key's lifetime spans a generation. Each key may have one or multiple generations. Creating a key increments the generation of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key's current generation. Each modification of a key creates a new version of the key. Once a compaction happens, any generation that ended before the compaction revision will be removed, and values set before the compaction revision, except the latest one, will be removed.
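The revision and version behavior described above can be observed with a short sketch (assuming a fresh single-member store so that the first write lands at store revision 2; exact revision numbers will differ on an existing store):
```
$ ETCDCTL_API=3 etcdctl put foo bar       # creates foo: version 1, new store revision
$ ETCDCTL_API=3 etcdctl put foo baz       # modifies foo: version 2, another new revision
$ ETCDCTL_API=3 etcdctl get foo --rev=2   # time travel query: reads the older value stored at revision 2
$ ETCDCTL_API=3 etcdctl compact 3         # compacts; revisions before 3 are no longer accessible
```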
### Physical view
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
The key of a stored key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, thus one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
etcd also keeps a secondary in-memory [btree][btree] index to speed up range queries over keys. The keys in the btree index are the keys of the store exposed to the user. The value is a pointer to the modification of the persistent b+tree. Compaction removes dead pointers.
[persistent-ds]: https://en.wikipedia.org/wiki/Persistent_data_structure
[btree]: https://en.wikipedia.org/wiki/B-tree
[b+tree]: https://en.wikipedia.org/wiki/B%2B_tree

View File

@@ -1,4 +1,4 @@
# Libraries and Tools
# Libraries and tools
**Tools**
@@ -17,7 +17,8 @@
**Go libraries**
- [etcd/client](https://github.com/coreos/etcd/blob/master/client) - the officially maintained Go client
- [etcd/clientv3](https://github.com/coreos/etcd/blob/master/clientv3) - the officially maintained Go client for v3
- [etcd/client](https://github.com/coreos/etcd/blob/master/client) - the officially maintained Go client for v2
- [go-etcd](https://github.com/coreos/go-etcd) - the deprecated official client. May be useful for older (<2.0.0) versions of etcd.
**Java libraries**
@@ -27,6 +28,11 @@
- [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2
- [jurmous/etcd4j](https://github.com/jurmous/etcd4j) - Supports v2, Async/Sync, waits and SSL
- [AdoHe/etcd4j](http://github.com/AdoHe/etcd4j) - Supports v2 (enhance for real production cluster)
- [cdancy/etcd-rest](https://github.com/cdancy/etcd-rest) - Uses jclouds to provide a complete implementation of v2 API.
**Scala libraries**
- [maciej/etcd-client](https://github.com/maciej/etcd-client) - Supports v2. Akka HTTP-based fully async client
**Python libraries**
@@ -87,6 +93,10 @@
- [efrecon/etcd-tcl](https://github.com/efrecon/etcd-tcl) - Supports v2, except wait.
**Gradle Plugins**
- [gradle-etcd-rest-plugin](https://github.com/cdancy/gradle-etcd-rest-plugin) - Supports v2
**Chef Integration**
- [coderanger/etcd-chef](https://github.com/coderanger/etcd-chef)
@@ -122,3 +132,4 @@
- [spf13/viper](https://github.com/spf13/viper) - Go configuration library, reads values from ENV, pflags, files, and etcd with optional encryption
- [lytics/metafora](https://github.com/lytics/metafora) - Go distributed task library
- [ryandoyle/nss-etcd](https://github.com/ryandoyle/nss-etcd) - A GNU libc NSS module for resolving names from etcd.
- [Gru](https://github.com/dnaeon/gru) - Orchestration made easy with Go

View File

@@ -1,134 +1,136 @@
# Metrics
**NOTE: The metrics feature is considered experimental. We may add/change/remove metrics without warning in future releases.**
etcd uses [Prometheus][prometheus] for metrics reporting. The metrics can be used for real-time monitoring and debugging. etcd does not persist its metrics; if a member restarts, the metrics will be reset.
etcd uses [Prometheus][prometheus] for metrics reporting in the server. The metrics can be used for real-time monitoring and debugging.
etcd only stores these data in memory. If a member restarts, metrics will reset.
The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics` of etcd. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).
The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics`. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).
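For example, a minimal sketch against a local member (assuming the default client URL):
```
$ curl -L http://localhost:2379/metrics
```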
Follow the [Prometheus getting started doc][prometheus-getting-started] to spin up a Prometheus server to collect etcd metrics.
The naming of metrics follows the suggested [best practice of Prometheus][prometheus-naming]. A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
The naming of metrics follows the suggested [Prometheus best practices][prometheus-naming]. A metric name has an `etcd` or `etcd_debugging` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
etcd now exposes the following metrics:
## etcd namespace metrics
## etcdserver
The metrics under the `etcd` prefix are for monitoring and alerting. They are stable high level metrics. If there is any change of these metrics, it will be included in release notes.
| Name | Description | Type |
|-----------------------------------------|--------------------------------------------------|-----------|
| file_descriptors_used_total | The total number of file descriptors used | Gauge |
| proposal_durations_seconds | The latency distributions of committing proposal | Histogram |
| pending_proposal_total | The total number of pending proposals | Gauge |
| proposal_failed_total | The total number of failed proposals | Counter |
Metrics that are etcd2 related are documented [v2 metrics guide][v2-http-metrics].
High file descriptors (`file_descriptors_used_total`) usage (near the file descriptors limitation of the process) indicates a potential out of file descriptors issue. That might cause etcd fails to create new WAL files and panics.
### Server
[Proposal][glossary-proposal] durations (`proposal_durations_seconds`) provides a histogram about the proposal commit latency. Latency can be introduced into this process by network and disk IO.
These metrics describe the status of the etcd server. In order to detect outages or problems for troubleshooting, the server metrics of every production etcd cluster should be closely monitored.
Pending proposal (`pending_proposal_total`) gives you an idea about how many proposal are in the queue and waiting for commit. An increasing pending number indicates a high client load or an unstable cluster.
All these metrics are prefixed with `etcd_server_`
Failed proposals (`proposal_failed_total`) are normally related to two issues: temporary failures related to a leader election or longer duration downtime caused by a loss of quorum in the cluster.
| Name | Description | Type |
|---------------------------|----------------------------------------------------------|---------|
| has_leader | Whether or not a leader exists. 1 is existence, 0 is not.| Gauge |
| leader_changes_seen_total | The number of leader changes seen. | Counter |
| proposals_committed_total | The total number of consensus proposals committed. | Gauge |
| proposals_applied_total | The total number of consensus proposals applied. | Gauge |
| proposals_pending | The current number of pending proposals. | Gauge |
| proposals_failed_total | The total number of failed proposals seen. | Counter |
## wal
`has_leader` indicates whether the member has a leader. If a member does not have a leader, it is
totally unavailable. If all the members in the cluster do not have any leader, the entire cluster
is totally unavailable.
| Name | Description | Type |
|------------------------------------|--------------------------------------------------|-----------|
| fsync_durations_seconds | The latency distributions of fsync called by wal | Histogram |
| last_index_saved | The index of the last entry saved by wal | Gauge |
`leader_changes_seen_total` counts the number of leader changes the member has seen since its start. Rapid leadership changes impact the performance of etcd significantly. It also signals that the leader is unstable, perhaps due to network connectivity issues or excessive load hitting the etcd cluster.
Abnormally high fsync duration (`fsync_durations_seconds`) indicates disk issues and might cause the cluster to be unstable.
`proposals_committed_total` records the total number of consensus proposals committed. This gauge should increase over time if the cluster is healthy. Several healthy members of an etcd cluster may have different total committed proposals at once. This discrepancy may be due to recovering from peers after starting, lagging behind the leader, or being the leader and therefore having the most commits. It is important to monitor this metric across all the members in the cluster; a consistently large lag between a single member and its leader indicates that member is slow or unhealthy.
`proposals_applied_total` records the total number of consensus proposals applied. The etcd server applies every committed proposal asynchronously. The difference between `proposals_committed_total` and `proposals_applied_total` should usually be small (within a few thousands even under high load). If the difference between them continues to rise, it indicates that the etcd server is overloaded. This might happen when applying expensive queries like heavy range queries or large txn operations.
## http requests
`proposals_pending` indicates how many proposals are queued to commit. Rising pending proposals suggests there is a high client load or the member cannot commit proposals.
These metrics describe the serving of requests (non-watch events) served by etcd members in non-proxy mode: total
incoming requests, request failures and processing latency (inc. raft rounds for storage). They are useful for tracking
user-generated traffic hitting the etcd cluster .
`proposals_failed_total` are normally related to two issues: temporary failures related to a leader election or longer downtime caused by a loss of quorum in the cluster.
All these metrics are prefixed with `etcd_http_`
### Disk
These metrics describe the status of the disk operations.
All these metrics are prefixed with `etcd_disk_`.
| Name | Description | Type |
|------------------------------------|-------------------------------------------------------|-----------|
| wal_fsync_duration_seconds | The latency distributions of fsync called by wal | Histogram |
| backend_commit_duration_seconds | The latency distributions of commit called by backend.| Histogram |
A `wal_fsync` is called when etcd persists its log entries to disk before applying them.
A `backend_commit` is called when etcd commits an incremental snapshot of its most recent changes to disk.
High disk operation latencies (`wal_fsync_duration_seconds` or `backend_commit_duration_seconds`) often indicate disk issues. It may cause high request latency or make the cluster unstable.
### Network
These metrics describe the status of the network.
All these metrics are prefixed with `etcd_network_`
| Name | Description | Type |
|---------------------------|--------------------------------------------------------------------|---------------|
| peer_sent_bytes_total | The total number of bytes sent to the peer with ID `To`. | Counter(To) |
| peer_received_bytes_total | The total number of bytes received from the peer with ID `From`. | Counter(From) |
| peer_round_trip_time_seconds | Round-Trip-Time histogram between peers. | Histogram(To) |
| client_grpc_sent_bytes_total | The total number of bytes sent to grpc clients. | Counter |
| client_grpc_received_bytes_total| The total number of bytes received to grpc clients. | Counter |
`peer_sent_bytes_total` counts the total number of bytes sent to a specific peer. Usually the leader member sends more data than other members since it is responsible for transmitting replicated data.
`peer_received_bytes_total` counts the total number of bytes received from a specific peer. Usually follower members receive data only from the leader member.
### gRPC requests
These metrics describe the requests served by a specific etcd member: total received requests, total failed requests, and processing latency. They are useful for tracking user-generated traffic hitting the etcd cluster.
All these metrics are prefixed with `etcd_grpc_`
| Name | Description | Type |
|--------------------------------|-----------------------------------------------------------------------------------------|--------------------|
| received_total | Total number of events after parsing and auth. | Counter(method) |
| failed_total | Total number of failed events.   | Counter(method,error) |
| successful_duration_second | Bucketed handling times of the requests, including raft rounds for writes. | Histogram(method) |
|--------------------------------|-------------------------------------------------------------------------------------|------------------------|
| requests_total | Total number of received requests | Counter(method) |
| requests_failed_total | Total number of failed requests.   | Counter(method,error) |
| unary_requests_duration_seconds | Bucketed handling duration of the requests. | Histogram(method) |
Example Prometheus queries that may be useful from these metrics (across all etcd members):
* `sum(rate(etcd_http_failed_total{job="etcd"}[1m]) by (method) / sum(rate(etcd_http_events_received_total{job="etcd"})[1m]) by (method)`
* `sum(rate(etcd_grpc_requests_failed_total{job="etcd"}[1m]) by (grpc_method) / sum(rate(etcd_grpc_total{job="etcd"})[1m]) by (grpc_method)`
Shows the fraction of events that failed by HTTP method across all members, across a time window of `1m`.
Shows the fraction of events that failed by gRPC method across all members, across a time window of `1m`.
* `sum(rate(etcd_http_received_total{job="etcd",method="GET})[1m]) by (method)`
`sum(rate(etcd_http_received_total{job="etcd",method~="GET})[1m]) by (method)`
* `sum(rate(etcd_grpc_requests_total{job="etcd",grpc_method="PUT"})[1m]) by (grpc_method)`
Shows the rate of successful readonly/write queries across all servers, across a time window of `1m`.
Shows the rate of PUT requests across all members, across a time window of `1m`.
* `histogram_quantile(0.9, sum(increase(etcd_http_successful_processing_seconds{job="etcd",method="GET"}[5m]) ) by (le))`
`histogram_quantile(0.9, sum(increase(etcd_http_successful_processing_seconds{job="etcd",method!="GET"}[5m]) ) by (le))`
* `histogram_quantile(0.9, sum(rate(etcd_grpc_unary_requests_duration_seconds{job="etcd",grpc_method="PUT"}[5m]) ) by (le))`
Show the 0.90-tile latency (in seconds) of read/write (respectively) event handling across all members, with a window of `5m`.
Show the 0.90-tile latency (in seconds) of PUT request handling across all members, with a window of `5m`.
## snapshot
## etcd_debugging namespace metrics
The metrics under the `etcd_debugging` prefix are for debugging. They are very implementation dependent and volatile. They might be changed or removed without any warning in new etcd releases. Some of the metrics might be moved to the `etcd` prefix when they become more stable.
### Snapshot
| Name | Description | Type |
|--------------------------------------------|------------------------------------------------------------|-----------|
| snapshot_save_total_durations_seconds | The total latency distributions of save called by snapshot | Histogram |
| snapshot_save_total_duration_seconds | The total latency distributions of save called by snapshot | Histogram |
Abnormally high snapshot duration (`snapshot_save_total_durations_seconds`) indicates disk issues and might cause the cluster to be unstable.
Abnormally high snapshot duration (`snapshot_save_total_duration_seconds`) indicates disk issues and might cause the cluster to be unstable.
## Prometheus supplied metrics
## rafthttp
The Prometheus client library provides a number of metrics under the `go` and `process` namespaces. There are a few that are particularly interesting.
| Name | Description | Type | Labels |
|-----------------------------------|--------------------------------------------|--------------|--------------------------------|
| message_sent_latency_seconds | The latency distributions of messages sent | HistogramVec | sendingType, msgType, remoteID |
| message_sent_failed_total | The total number of failed messages sent | Summary | sendingType, msgType, remoteID |
| Name | Description | Type |
|-----------------------------------|--------------------------------------------|--------------|
| process_open_fds | Number of open file descriptors. | Gauge |
| process_max_fds | Maximum number of open file descriptors. | Gauge |
Heavy file descriptor (`process_open_fds`) usage (i.e., near the process's file descriptor limit, `process_max_fds`) indicates a potential file descriptor exhaustion issue. If the file descriptors are exhausted, etcd may panic because it cannot create new WAL files.
Abnormally high message duration (`message_sent_latency_seconds`) indicates network issues and might cause the cluster to be unstable.
An increase in message failures (`message_sent_failed_total`) indicates more severe network issues and might cause the cluster to be unstable.
Label `sendingType` is the connection type to send messages. `message`, `msgapp` and `msgappv2` use HTTP streaming, while `pipeline` does HTTP request for each message.
Label `msgType` is the type of raft message. `MsgApp` is log replication message; `MsgSnap` is snapshot install message; `MsgProp` is proposal forward message; the others are used to maintain raft internal status. If you have a large snapshot, you would expect a long msgSnap sending latency. For other types of messages, you would expect low latency, which is comparable to your ping latency if you have enough network bandwidth.
Label `remoteID` is the member ID of the message destination.
## proxy
etcd members operating in proxy mode do not do store operations. They forward all requests
to cluster instances.
Tracking the rate of requests coming from a proxy allows one to pin down which machine is performing most reads/writes.
All these metrics are prefixed with `etcd_proxy_`
| Name | Description | Type |
|---------------------------|-----------------------------------------------------------------------------------------|--------------------|
| requests_total | Total number of requests by this proxy instance. | Counter(method) |
| handled_total | Total number of fully handled requests, with responses from etcd members. | Counter(method) |
| dropped_total | Total number of dropped requests due to forwarding errors to etcd members.  | Counter(method,error) |
| handling_duration_seconds | Bucketed handling times by HTTP method, including round trip to member instances. | Histogram(method) |
Example Prometheus queries that may be useful from these metrics (across all etcd servers):
* `sum(rate(etcd_proxy_handled_total{job="etcd"}[1m])) by (method)`
Rate of requests (by HTTP method) handled by all proxies, across a window of `1m`.
* `histogram_quantile(0.9, sum(increase(etcd_proxy_events_handling_time_seconds_bucket{job="etcd",method="GET"}[5m])) by (le))`
`histogram_quantile(0.9, sum(increase(etcd_proxy_events_handling_time_seconds_bucket{job="etcd",method!="GET"}[5m])) by (le))`
Show the 0.90-tile latency (in seconds) of handling of user requests across all proxy machines, with a window of `5m`.
* `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`
Number of failed requests on the proxy. This should be 0; spikes here indicate connectivity issues to the etcd cluster.
[glossary-proposal]: glossary.md#proposal
[glossary-proposal]: learning/glossary.md#proposal
[prometheus]: http://prometheus.io/
[prometheus-getting-started](http://prometheus.io/docs/introduction/getting_started/)
[prometheus-getting-started]: http://prometheus.io/docs/introduction/getting_started/
[prometheus-naming]: http://prometheus.io/docs/practices/naming/
[v2-http-metrics]: v2/metrics.md#http-requests

View File

@@ -0,0 +1,474 @@
# Clustering Guide
## Overview
Starting an etcd cluster statically requires that each member knows another in the cluster. In a number of cases, the IPs of the cluster members may be unknown ahead of time. In these cases, the etcd cluster can be bootstrapped with the help of a discovery service.
Once an etcd cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-conf]. To better understand the design behind runtime reconfiguration, we suggest reading [the runtime configuration design document][runtime-reconf-design].
This guide will cover the following mechanisms for bootstrapping an etcd cluster:
* [Static](#static)
* [etcd Discovery](#etcd-discovery)
* [DNS Discovery](#dns-discovery)
Each of the bootstrapping mechanisms will be used to create a three machine etcd cluster with the following details:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
## Static
As we know the cluster members, their addresses and the size of the cluster before starting, we can use an offline bootstrap configuration by setting the `initial-cluster` flag. Each machine will get either the following environment variables or command line flags:
```
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380"
ETCD_INITIAL_CLUSTER_STATE=new
```
```
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
Note that the URLs specified in `initial-cluster` are the _advertised peer URLs_, i.e. they should match the value of `initial-advertise-peer-urls` on the respective nodes.
If spinning up multiple clusters (or creating and destroying a single cluster) with the same configuration for testing purposes, it is highly recommended that each cluster is given a unique `initial-cluster-token`. By doing this, etcd can generate unique cluster IDs and member IDs for the clusters even if they otherwise have the exact same configuration. This can protect etcd from cross-cluster interaction, which might corrupt the clusters.
etcd listens on [`listen-client-urls`][conf-listen-client] to accept client traffic. The etcd member advertises the URLs specified in [`advertise-client-urls`][conf-adv-client] to other members, proxies, and clients. Please make sure the `advertise-client-urls` are reachable from the intended clients. A common mistake is setting `advertise-client-urls` to localhost or leaving it as the default when remote clients should reach etcd.
On each machine, start etcd with these flags:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
The command line parameters starting with `--initial-cluster` will be ignored on subsequent runs of etcd. Feel free to remove the environment variables or command line flags after the initial bootstrap process. If the configuration needs changes later (for example, adding or removing members to/from the cluster), see the [runtime configuration][runtime-conf] guide.
### TLS
etcd supports encrypted communication through the TLS protocol. TLS channels can be used for encrypted internal cluster communication between peers as well as encrypted client traffic. This section provides examples for setting up a cluster with peer and client TLS. Additional information detailing etcd's TLS support can be found in the [security guide][security-guide].
#### Self-signed certificates
A cluster using self-signed certificates both encrypts traffic and authenticates its connections. To start a cluster with self-signed certificates, each cluster member should have a unique key pair (`member.crt`, `member.key`) signed by a shared cluster CA certificate (`ca.crt`) for both peer connections and client connections. Certificates may be generated by following the etcd [TLS setup][tls-setup] example.
On each machine, etcd would be started with these flags:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls https://10.0.1.10:2380 \
--listen-client-urls https://10.0.1.10:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://10.0.1.10:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=https://10.0.1.10:2380,infra1=https://10.0.1.11:2380,infra2=https://10.0.1.12:2380 \
--initial-cluster-state new \
--client-cert-auth --trusted-ca-file=/path/to/ca-client.crt \
--cert-file=/path/to/infra0-client.crt --key-file=/path/to/infra0-client.key \
--peer-client-cert-auth --peer-trusted-ca-file=ca-peer.crt \
--peer-cert-file=/path/to/infra0-peer.crt --peer-key-file=/path/to/infra0-peer.key
```
```
$ etcd --name infra1 --initial-advertise-peer-urls https://10.0.1.11:2380 \
--listen-peer-urls https://10.0.1.11:2380 \
--listen-client-urls https://10.0.1.11:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://10.0.1.11:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=https://10.0.1.10:2380,infra1=https://10.0.1.11:2380,infra2=https://10.0.1.12:2380 \
--initial-cluster-state new \
--client-cert-auth --trusted-ca-file=/path/to/ca-client.crt \
--cert-file=/path/to/infra1-client.crt --key-file=/path/to/infra1-client.key \
--peer-client-cert-auth --peer-trusted-ca-file=ca-peer.crt \
--peer-cert-file=/path/to/infra1-peer.crt --peer-key-file=/path/to/infra1-peer.key
```
```
$ etcd --name infra2 --initial-advertise-peer-urls https://10.0.1.12:2380 \
--listen-peer-urls https://10.0.1.12:2380 \
--listen-client-urls https://10.0.1.12:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://10.0.1.12:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=https://10.0.1.10:2380,infra1=https://10.0.1.11:2380,infra2=https://10.0.1.12:2380 \
--initial-cluster-state new \
--client-cert-auth --trusted-ca-file=/path/to/ca-client.crt \
--cert-file=/path/to/infra2-client.crt --key-file=/path/to/infra2-client.key \
--peer-client-cert-auth --peer-trusted-ca-file=ca-peer.crt \
--peer-cert-file=/path/to/infra2-peer.crt --peer-key-file=/path/to/infra2-peer.key
```
#### Automatic certificates
If the cluster needs encrypted communication but does not require authenticated connections, etcd can be configured to automatically generate its keys. On initialization, each member creates its own set of keys based on its advertised IP addresses and hosts.
On each machine, etcd would be started with these flags:
```
$ etcd --name infra0 --initial-advertise-peer-urls https://10.0.1.10:2380 \
--listen-peer-urls https://10.0.1.10:2380 \
--listen-client-urls https://10.0.1.10:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://10.0.1.10:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=https://10.0.1.10:2380,infra1=https://10.0.1.11:2380,infra2=https://10.0.1.12:2380 \
--initial-cluster-state new \
--auto-tls \
--peer-auto-tls
```
```
$ etcd --name infra1 --initial-advertise-peer-urls https://10.0.1.11:2380 \
--listen-peer-urls https://10.0.1.11:2380 \
--listen-client-urls https://10.0.1.11:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://10.0.1.11:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=https://10.0.1.10:2380,infra1=https://10.0.1.11:2380,infra2=https://10.0.1.12:2380 \
--initial-cluster-state new \
--auto-tls \
--peer-auto-tls
```
```
$ etcd --name infra2 --initial-advertise-peer-urls https://10.0.1.12:2380 \
--listen-peer-urls https://10.0.1.12:2380 \
--listen-client-urls https://10.0.1.12:2379,https://127.0.0.1:2379 \
--advertise-client-urls https://10.0.1.12:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=https://10.0.1.10:2380,infra1=https://10.0.1.11:2380,infra2=https://10.0.1.12:2380 \
--initial-cluster-state new \
--auto-tls \
--peer-auto-tls
```
### Error cases
In the following example, we have not included our new host in the list of enumerated nodes. If this is a new cluster, the node _must_ be added to the list of initial cluster members.
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls https://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--initial-cluster infra0=http://10.0.1.10:2380 \
--initial-cluster-state new
etcd: infra1 not listed in the initial cluster config
exit 1
```
In this example, we are attempting to map a node (infra0) on a different address (127.0.0.1:2380) than its enumerated address in the cluster list (10.0.1.10:2380). If this node is to listen on multiple addresses, all addresses _must_ be reflected in the "initial-cluster" configuration directive.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://127.0.0.1:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state=new
etcd: error setting up initial cluster: infra0 has different advertised URLs in the cluster and advertised peer URLs list
exit 1
```
If a peer is configured with a different set of configuration arguments and attempts to join this cluster, etcd will report a cluster ID mismatch and exit.
```
$ etcd --name infra3 --initial-advertise-peer-urls http://10.0.1.13:2380 \
--listen-peer-urls http://10.0.1.13:2380 \
--listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.13:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra3=http://10.0.1.13:2380 \
--initial-cluster-state=new
etcd: conflicting cluster ID to the target cluster (c6ab534d07e8fcc4 != bc25ea2a74fb18b0). Exiting.
exit 1
```
## Discovery
In a number of cases, the IPs of the cluster peers may not be known ahead of time. This is common when utilizing cloud providers or when the network uses DHCP. In these cases, rather than specifying a static configuration, use an existing etcd cluster to bootstrap a new one. We call this process "discovery".
There are two methods that can be used for discovery:
* etcd discovery service
* DNS SRV records
### etcd discovery
To better understand the design of the discovery service protocol, we suggest reading the discovery service protocol [documentation][discovery-proto].
#### Lifetime of a discovery URL
A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery URL, always create discovery URLs for new clusters.
Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime-conf] guide.
#### Custom etcd discovery service
Discovery uses an existing cluster to bootstrap itself. If using a private etcd cluster, create a URL like so:
```
$ curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
```
By setting the size key to the URL, a discovery URL is created with an expected cluster size of 3.
The URL to use in this case will be `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` and the etcd members will use the `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` directory for registration as they start.
**Each member must have a different name flag specified; `Hostname` or `machine-id` can be a good choice. Otherwise discovery will fail due to duplicated names.**
Now we start etcd with those relevant flags for each member:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
This will cause each member to register itself with the custom etcd discovery service and begin the cluster once all machines have been registered.
#### Public etcd discovery service
If no existing cluster is available, use the public discovery service hosted at `discovery.etcd.io`. To create a private discovery URL using the "new" endpoint, use the command:
```
$ curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will create the cluster with an initial expected size of 3 members. If no size is specified, a default of 3 is used. The returned URL is then passed to each member, either as the `ETCD_DISCOVERY` environment variable or as the `--discovery` flag:
```
ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
**Each member must have a different name flag specified; otherwise discovery will fail due to duplicate names. `Hostname` or `machine-id` can be a good choice.**
Now we start etcd with those relevant flags for each member:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will cause each member to register itself with the discovery service and begin the cluster once all members have been registered.
Use the environment variable `ETCD_DISCOVERY_PROXY` to cause etcd to use an HTTP proxy to connect to the discovery service.
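For example, a sketch reusing the infra0 command above; the proxy address here is an assumption:
```
$ ETCD_DISCOVERY_PROXY=http://proxy.example.com:3128 etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```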
#### Error and warning cases
##### Discovery server errors
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd: error: the cluster doesnt have a size configuration value in https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de/_config
exit 1
```
##### Warnings
This is a harmless warning indicating the discovery URL will be ignored on this machine.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcdserver: discovery token ignored since a cluster has already been initialized. Valid log found at /var/lib/etcd
```
### DNS discovery
DNS [SRV records][rfc-srv] can be used as a discovery mechanism.
The `-discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
The following DNS SRV records are looked up in the listed order:
* _etcd-server-ssl._tcp.example.com
* _etcd-server._tcp.example.com
If `_etcd-server-ssl._tcp.example.com` is found then etcd will attempt the bootstrapping process over TLS.
To help clients discover the etcd cluster, the following DNS SRV records are looked up in the listed order:
* _etcd-client._tcp.example.com
* _etcd-client-ssl._tcp.example.com
If `_etcd-client-ssl._tcp.example.com` is found, clients will attempt to communicate with the etcd cluster over SSL/TLS.
If etcd is using TLS without a custom certificate authority, the discovery domain (e.g., example.com) must match the SRV record domain (e.g., infra1.example.com). This is to mitigate attacks that forge SRV records to point to a different domain; the domain would have a valid certificate under PKI but be controlled by an unknown third party.
#### Create DNS SRV records
```
$ dig +noall +answer SRV _etcd-server._tcp.example.com
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra0.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra1.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra2.example.com.
```
```
$ dig +noall +answer SRV _etcd-client._tcp.example.com
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra0.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra1.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra2.example.com.
```
```
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```
#### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP addresses; the bootstrap process will resolve DNS A records.
The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets. The etcd member reads the resolved address to find out if it belongs to the cluster defined in the SRV records.
```
$ etcd --name infra0 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra0.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra0.example.com:2379 \
--listen-client-urls http://infra0.example.com:2379 \
--listen-peer-urls http://infra0.example.com:2380
```
```
$ etcd --name infra1 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra1.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra1.example.com:2379 \
--listen-client-urls http://infra1.example.com:2379 \
--listen-peer-urls http://infra1.example.com:2380
```
```
$ etcd --name infra2 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra2.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra2.example.com:2379 \
--listen-client-urls http://infra2.example.com:2379 \
--listen-peer-urls http://infra2.example.com:2380
```
The cluster can also bootstrap using IP addresses instead of domain names:
```
$ etcd --name infra0 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.10:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.10:2379 \
--listen-client-urls http://10.0.1.10:2379 \
--listen-peer-urls http://10.0.1.10:2380
```
```
$ etcd --name infra1 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.11:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.11:2379 \
--listen-client-urls http://10.0.1.11:2379 \
--listen-peer-urls http://10.0.1.11:2380
```
```
$ etcd --name infra2 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.12:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.12:2379 \
--listen-client-urls http://10.0.1.12:2379 \
--listen-peer-urls http://10.0.1.12:2380
```
### Proxy
When the `--proxy` flag is set, etcd runs in [proxy mode][proxy]. This proxy mode only supports the etcd v2 API; there are no plans to support the v3 API. Instead, for v3 API support, there will be a new proxy with enhanced features following the etcd 3.0 release.
To set up an etcd cluster with proxies using the v2 API, please read the [clustering doc in the etcd 2.3 release][clustering_etcd2].
[conf-adv-client]: configuration.md#--advertise-client-urls
[conf-listen-client]: configuration.md#--listen-client-urls
[discovery-proto]: ../dev-internal/discovery_protocol.md
[rfc-srv]: http://www.ietf.org/rfc/rfc2052.txt
[runtime-conf]: runtime-configuration.md
[runtime-reconf-design]: runtime-reconf-design.md
[proxy]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/proxy.md
[clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md
[security-guide]: security.md
[tls-setup]: /hack/tls-setup


@@ -0,0 +1,290 @@
# Configuration flags
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.
The environment variable for flag `--my-flag` has the format `ETCD_MY_FLAG`; this applies to all flags.
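For example, the following two invocations are equivalent ways of setting the data directory (the path is illustrative):
```
# flag form
$ etcd --data-dir /var/lib/etcd
# environment variable form
$ ETCD_DATA_DIR=/var/lib/etcd etcd
```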
The [official etcd ports][iana-ports] are 2379 for client requests and 2380 for peer communication. The etcd ports can be set to accept TLS traffic, non-TLS traffic, or both TLS and non-TLS traffic.
To start etcd automatically using custom settings at startup in Linux, using a [systemd][systemd-intro] unit is highly recommended.
## Member flags
### --name
+ Human-readable name for this member.
+ default: "default"
+ env variable: ETCD_NAME
+ This value is referenced as this node's own entries listed in the `--initial-cluster` flag (e.g., `default=http://localhost:2380`). This needs to match the key used in the flag if using [static bootstrapping][build-cluster]. When using discovery, each member must have a unique name. `Hostname` or `machine-id` can be a good choice.
### --data-dir
+ Path to the data directory.
+ default: "${name}.etcd"
+ env variable: ETCD_DATA_DIR
### --wal-dir
+ Path to the dedicated wal directory. If this flag is set, etcd will write the WAL files to the walDir rather than the dataDir. This allows a dedicated disk to be used, and helps avoid io competition between logging and other IO operations.
+ default: ""
+ env variable: ETCD_WAL_DIR
### --snapshot-count
+ Number of committed transactions to trigger a snapshot to disk.
+ default: "10000"
+ env variable: ETCD_SNAPSHOT_COUNT
### --heartbeat-interval
+ Time (in milliseconds) of a heartbeat interval.
+ default: "100"
+ env variable: ETCD_HEARTBEAT_INTERVAL
### --election-timeout
+ Time (in milliseconds) for an election to timeout. See [Documentation/tuning.md][tuning] for details.
+ default: "1000"
+ env variable: ETCD_ELECTION_TIMEOUT
### --listen-peer-urls
+ List of URLs to listen on for peer traffic. This flag tells etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2380"
+ env variable: ETCD_LISTEN_PEER_URLS
+ example: "http://10.0.0.1:2380"
+ invalid example: "http://example.com:2380" (domain name is invalid for binding)
### --listen-client-urls
+ List of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from clients on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2379"
+ env variable: ETCD_LISTEN_CLIENT_URLS
+ example: "http://10.0.0.1:2379"
+ invalid example: "http://example.com:2379" (domain name is invalid for binding)
### --max-snapshots
+ Maximum number of snapshot files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_SNAPSHOTS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or some preference for safety) is recommended.
### --max-wals
+ Maximum number of wal files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_WALS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or some preference for safety) is recommended.
### --cors
+ Comma-separated white list of origins for CORS (cross-origin resource sharing).
+ default: none
+ env variable: ETCD_CORS
## Clustering flags
`--initial` prefix flags are used when bootstrapping a new member ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]), and are ignored when restarting an existing member.
`--discovery` prefix flags need to be set when using [discovery service][discovery].
### --initial-advertise-peer-urls
+ List of this member's peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members. These URLs can contain domain names.
+ default: "http://localhost:2380"
+ env variable: ETCD_INITIAL_ADVERTISE_PEER_URLS
+ example: "http://example.com:2380, http://10.0.0.1:2380"
### --initial-cluster
+ Initial cluster configuration for bootstrapping.
+ default: "default=http://localhost:2380"
+ env variable: ETCD_INITIAL_CLUSTER
+ The key is the value of the `--name` flag for each node provided. The default uses `default` for the key because this is the default for the `--name` flag.
### --initial-cluster-state
+ Initial cluster state ("new" or "existing"). Set to `new` for all members present during initial static or DNS bootstrapping. If this option is set to `existing`, etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely.
+ default: "new"
+ env variable: ETCD_INITIAL_CLUSTER_STATE
### --initial-cluster-token
+ Initial cluster token for the etcd cluster during bootstrap.
+ default: "etcd-cluster"
+ env variable: ETCD_INITIAL_CLUSTER_TOKEN
### --advertise-client-urls
+ List of this member's client URLs to advertise to the rest of the cluster. These URLs can contain domain names.
+ default: "http://localhost:2379"
+ env variable: ETCD_ADVERTISE_CLIENT_URLS
+ example: "http://example.com:2379, http://10.0.0.1:2379"
+ Be careful when advertising URLs such as http://localhost:2379 from a cluster member while using the proxy feature of etcd. This will cause loops, because the proxy will keep forwarding requests to itself until its resources (memory, file descriptors) are eventually depleted.
### --discovery
+ Discovery URL used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY
### --discovery-srv
+ DNS srv domain used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY_SRV
### --discovery-fallback
+ Expected behavior ("exit" or "proxy") when the discovery service fails. "proxy" supports v2 API only.
+ default: "proxy"
+ env variable: ETCD_DISCOVERY_FALLBACK
### --discovery-proxy
+ HTTP proxy to use for traffic to discovery service.
+ default: none
+ env variable: ETCD_DISCOVERY_PROXY
### --strict-reconfig-check
+ Reject reconfiguration requests that would cause quorum loss.
+ default: false
+ env variable: ETCD_STRICT_RECONFIG_CHECK
### --auto-compaction-retention
+ Auto compaction retention for the mvcc key-value store, in hours. 0 means disable auto compaction.
+ default: 0
+ env variable: ETCD_AUTO_COMPACTION_RETENTION
## Proxy flags
`--proxy` prefix flags configure etcd to run in [proxy mode][proxy]. Proxy mode supports the v2 API only.
### --proxy
+ Proxy mode setting ("off", "readonly" or "on").
+ default: "off"
+ env variable: ETCD_PROXY
### --proxy-failure-wait
+ Time (in milliseconds) an endpoint will be held in a failed state before being reconsidered for proxied requests.
+ default: 5000
+ env variable: ETCD_PROXY_FAILURE_WAIT
### --proxy-refresh-interval
+ Time (in milliseconds) of the endpoints refresh interval.
+ default: 30000
+ env variable: ETCD_PROXY_REFRESH_INTERVAL
### --proxy-dial-timeout
+ Time (in milliseconds) for a dial to timeout, or 0 to disable the timeout.
+ default: 1000
+ env variable: ETCD_PROXY_DIAL_TIMEOUT
### --proxy-write-timeout
+ Time (in milliseconds) for a write to timeout or 0 to disable the timeout.
+ default: 5000
+ env variable: ETCD_PROXY_WRITE_TIMEOUT
### --proxy-read-timeout
+ Time (in milliseconds) for a read to timeout or 0 to disable the timeout.
+ Don't change this value if using watches, because watches use long polling requests.
+ default: 0
+ env variable: ETCD_PROXY_READ_TIMEOUT
## Security flags
The security flags help to [build a secure etcd cluster][security].
### --ca-file [DEPRECATED]
+ Path to the client server TLS CA file. `--ca-file ca.crt` could be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_CA_FILE
### --cert-file
+ Path to the client server TLS cert file.
+ default: none
+ env variable: ETCD_CERT_FILE
### --key-file
+ Path to the client server TLS key file.
+ default: none
+ env variable: ETCD_KEY_FILE
### --client-cert-auth
+ Enable client cert authentication.
+ default: false
+ env variable: ETCD_CLIENT_CERT_AUTH
### --trusted-ca-file
+ Path to the client server TLS trusted CA key file.
+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE
### --auto-tls
+ Client TLS using generated certificates
+ default: false
+ env variable: ETCD_AUTO_TLS
### --peer-ca-file [DEPRECATED]
+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` could be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
### --peer-cert-file
+ Path to the peer server TLS cert file.
+ default: none
+ env variable: ETCD_PEER_CERT_FILE
### --peer-key-file
+ Path to the peer server TLS key file.
+ default: none
+ env variable: ETCD_PEER_KEY_FILE
### --peer-client-cert-auth
+ Enable peer client cert authentication.
+ default: false
+ env variable: ETCD_PEER_CLIENT_CERT_AUTH
### --peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: none
+ env variable: ETCD_PEER_TRUSTED_CA_FILE
### --peer-auto-tls
+ Peer TLS using generated certificates
+ default: false
+ env variable: ETCD_PEER_AUTO_TLS
## Logging flags
### --debug
+ Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages)
+ env variable: ETCD_DEBUG
### --log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
## Unsafe flags
Please be CAUTIOUS when using unsafe flags because they can break the guarantees given by the consensus protocol.
For example, etcd may panic if other members in the cluster are still alive.
Follow the instructions when using these flags.
### --force-new-cluster
+ Force the creation of a new one-member cluster. It commits configuration changes forcing the removal of all existing members in the cluster and the addition of itself. It needs to be set to [restore a backup][restore].
+ default: false
+ env variable: ETCD_FORCE_NEW_CLUSTER
## Miscellaneous flags
### --version
+ Print the version and exit.
+ default: false
### --config-file
+ Load server configuration from a file.
+ default: none
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof"
+ default: false
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[proxy]: ../v2/proxy.md
[restore]: ../v2/admin_guide.md#restoring-a-backup
[security]: security.md
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
[tuning]: ../tuning.md#time-parameters


@@ -0,0 +1,61 @@
# Run etcd clusters inside containers
The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static).
## Docker
In order to expose the etcd API to clients outside of the Docker host, use the host IP address of the container. Please see [`docker inspect`](https://docs.docker.com/engine/reference/commandline/inspect) for more detail on how to get the IP address. Alternatively, specify the `--net=host` flag to the `docker run` command to skip placing the container inside of a separate network stack.
```
# For each machine
ETCD_VERSION=v3.0.0
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-0
NAME_2=etcd-node-1
NAME_3=etcd-node-2
HOST_1=10.20.30.1
HOST_2=10.20.30.2
HOST_3=10.20.30.3
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
# For node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
sudo docker run --net=host --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
# For node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
sudo docker run --net=host --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
# For node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
sudo docker run --net=host --name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=data.etcd --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```
To run `etcdctl` using API version 3:
```
docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar"
```
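If a v3-capable `etcdctl` binary is also installed on the Docker host, the cluster can be queried directly from the host; a sketch reusing the `HOST_*` variables above:
```
# query cluster health from the host; assumes a local etcdctl on the PATH
export ETCDCTL_API=3
etcdctl --endpoints=http://${HOST_1}:2379,http://${HOST_2}:2379,http://${HOST_3}:2379 endpoint health
```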


@@ -0,0 +1,44 @@
# Understand failures
Failures are common in a large deployment of machines. A machine fails when its hardware or software malfunctions. Multiple machines fail together when there are power failures or network issues. Multiple kinds of failures can also happen at once; it is almost impossible to enumerate all possible failure cases.
In this section, we catalog kinds of failures and discuss how etcd is designed to tolerate these failures. Most users, if not all, can map a particular failure into one kind of failure. To prepare for rare or [unrecoverable failures][unrecoverable], always [back up][backup] the etcd cluster.
## Minor followers failure
When fewer than half of the followers fail, the etcd cluster can still accept requests and make progress without any major disruption. For example, two follower failures will not affect a five member etcd cluster's operation. However, clients will lose connectivity to the failed members. Client libraries should hide these interruptions from users for read requests by automatically reconnecting to other members. Operators should expect the system load on the other members to increase due to the reconnections.
## Leader failure
When a leader fails, the etcd cluster automatically elects a new leader. The election does not happen instantly once the leader fails. It takes about an election timeout to elect a new leader since the failure detection model is timeout based.
During the leader election the cluster cannot process any writes. Write requests sent during the election are queued for processing until a new leader is elected.
Writes already sent to the old leader but not yet committed may be lost. The new leader has the power to rewrite any uncommitted entries from the previous leader. From the user perspective, some write requests might time out after a new leader election. However, no committed writes are ever lost.
The new leader extends timeouts automatically for all leases. This mechanism ensures a lease will not expire before the granted TTL even if it was granted by the old leader.
## Majority failure
When the majority members of the cluster fail, the etcd cluster fails and cannot accept more writes.
The etcd cluster can only recover from a majority failure once the majority of members become available. If a majority of members cannot come back online, then the operator must start [disaster recovery][unrecoverable] to recover the cluster.
Once a majority of members works, the etcd cluster elects a new leader automatically and returns to a healthy state. The new leader extends timeouts automatically for all leases. This mechanism ensures no lease expires due to server side unavailability.
## Network partition
A network partition is similar to a minor followers failure or a leader failure. A network partition divides the etcd cluster into two parts; one with a member majority and the other with a member minority. The majority side becomes the available cluster and the minority side is unavailable; there is no “split-brain” in etcd.
If the leader is on the majority side, then from the majority point of view the failure is a minority follower failure. If the leader is on the minority side, then it is a leader failure. The leader on the minority side steps down and the majority side elects a new leader.
Once the network partition clears, the minority side automatically recognizes the leader from the majority side and recovers its state.
## Failure during bootstrapping
A cluster bootstrap is only successful if all required members successfully start. If any failure happens during bootstrapping, remove the data directories on all members and re-bootstrap the cluster with a new cluster-token or new discovery token.
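A sketch of that re-bootstrap for one member, assuming the default data directory layout (`${name}.etcd`) and the illustrative addresses used elsewhere in these docs:
```
# stop etcd, remove the stale data directory, then bootstrap again with a new token
$ rm -rf infra0.etcd
$ etcd --name infra0 --initial-cluster-token etcd-cluster-2 --initial-cluster-state new \
--initial-advertise-peer-urls http://10.0.1.10:2380 --listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379 --advertise-client-urls http://10.0.1.10:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```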
Of course, it is possible to recover a failed bootstrapped cluster like recovering a running cluster. However, it almost always takes more time and resources to recover that cluster than bootstrapping a new one, since there is no data to recover.
[backup]: maintenance.md#snapshot-backup
[unrecoverable]: recovery.md#disaster-recovery


@@ -0,0 +1,115 @@
# Maintenance
## Overview
An etcd cluster needs periodic maintenance to remain reliable. Depending on an etcd application's needs, this maintenance can usually be automated and performed without downtime or significantly degraded performance.
All etcd maintenance manages storage resources consumed by the etcd keyspace. Failure to adequately control the keyspace size is guarded by storage space quotas; if an etcd member runs low on space, a quota will trigger cluster-wide alarms which will put the system into a limited-operation maintenance mode. To avoid running out of space for writes to the keyspace, the etcd keyspace history must be compacted. Storage space itself may be reclaimed by defragmenting etcd members. Finally, periodic snapshot backups of etcd member state makes it possible to recover any unintended logical data loss or corruption caused by operational error.
## History compaction
Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace.
The keyspace can be compacted automatically with `etcd`'s time windowed history retention policy, or manually with `etcdctl`. The `etcdctl` method provides fine-grained control over the compacting process whereas automatic compacting fits applications that only need key history for some length of time.
`etcd` can be set to automatically compact the keyspace with the `--auto-compaction-retention` option, given a period in hours:
```sh
# keep one hour of history
$ etcd --auto-compaction-retention=1
```
An `etcdctl` initiated compaction works as follows:
```sh
# compact up to revision 3
$ etcdctl compact 3
```
Revisions prior to the compaction revision become inaccessible:
```sh
$ etcdctl get --rev=2 somekey
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted
```
## Defragmentation
After compacting the keyspace, the backend database may exhibit internal fragmentation. Any internal fragmentation is space that is free for the backend to use but still consumes storage space. The process of defragmentation releases this storage space back to the file system. Defragmentation is issued on a per-member basis so that cluster-wide latency spikes may be avoided.
Compacting old revisions internally fragments `etcd` by leaving gaps in the backend database. Fragmented space is available for use by `etcd` but unavailable to the host filesystem.
To defragment an etcd member, use the `etcdctl defrag` command:
```sh
$ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
```
## Space quota
The space quota in `etcd` ensures the cluster operates in a reliable fashion. Without a space quota, `etcd` may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the keyspace's backend database for any member exceeds the space quota, `etcd` raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. After freeing enough space in the keyspace, the alarm can be disarmed and the cluster will resume normal operation.
By default, `etcd` sets a conservative space quota suitable for most applications, but it may be configured on the command line, in bytes:
```sh
# set a very small 16MB quota
$ etcd --quota-backend-bytes=16777216
```
The space quota can be triggered with a loop:
```sh
# fill keyspace
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | etcdctl put key || break; done
...
Error: rpc error: code = 8 desc = etcdserver: mvcc: database space exceeded
# confirm quota space is exceeded
$ etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | bf9071f4639c75cc | 2.3.0+git | 18 MB | true | 2 | 3332 |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# confirm alarm is raised
$ etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data will put the cluster back within the quota limits so the alarm can be disarmed:
```sh
# get current revision
$ etcdctl --endpoints=:2379 endpoint status
[{"Endpoint":"127.0.0.1:2379","Status":{"header":{"cluster_id":8925027824743593106,"member_id":13803658152347727308,"revision":1516,"raft_term":2},"version":"2.3.0+git","dbSize":17973248,"leader":13803658152347727308,"raftIndex":6359,"raftTerm":2}}]
# compact away all old revisions
$ etcdctl compact 1516
compacted revision 1516
# defragment away excessive space
$ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ etcdctl put newkey 123
OK
```
## Snapshot backup
Snapshotting the `etcd` cluster on a regular basis serves as a durable backup for an etcd keyspace. By taking periodic snapshots of an etcd member's backend database, an `etcd` cluster can be recovered to a point in time with a known good state.
A snapshot is taken with `etcdctl`:
```sh
$ etcdctl snapshot save backup.db
$ etcdctl --write-out=table snapshot status backup.db
+----------+----------+------------+------------+
| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| fe01cf57 | 10 | 7 | 2.1 MB |
+----------+----------+------------+------------+
```
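Because the value of a backup degrades over time, snapshots are usually taken on a schedule; a sketch using cron, where the schedule, output path, and endpoint are assumptions:
```sh
# crontab entry: hourly snapshot of the local member into /var/backups
0 * * * * ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 snapshot save /var/backups/etcd-$(date +\%F-\%H).db
```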


@@ -0,0 +1,74 @@
# Performance
## Understanding performance
etcd provides stable, sustained high performance. Two factors define performance: latency and throughput. Latency is the time taken to complete an operation. Throughput is the total operations completed within some time period. Usually average latency increases as the overall throughput increases when etcd accepts concurrent client requests. In common cloud environments, like a standard `n-4` on Google Compute Engine (GCE) or a comparable machine type on AWS, a three member etcd cluster finishes a request in less than one millisecond under light load, and can complete more than 30,000 requests per second under heavy load.
etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanent storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
There are other sub-systems which impact the overall performance of etcd. Each serialized etcd request must run through etcd's boltdb-backed MVCC storage engine, which usually takes tens of microseconds to finish. Periodically etcd incrementally snapshots its recently applied requests, merging them back with the previous on-disk snapshot. This process may lead to a latency spike. Although this is usually not a problem on SSDs, it may double the observed latency on HDD. Likewise, inflight compactions can impact etcd's performance. Fortunately, the impact is often insignificant since the compaction is staggered so it does not compete for resources with regular requests. The RPC system, gRPC, gives etcd a well-defined, extensible API, but it also introduces additional latency, especially for local reads.
## Benchmarks
Benchmarking etcd performance can be done with the [benchmark](https://github.com/coreos/etcd/tree/master/tools/benchmark) CLI tool included with etcd.
For some baseline performance numbers, we consider a three member etcd cluster with the following hardware configuration:
- Google Cloud Compute Engine
- 3 machines of 8 vCPUs + 16GB Memory + 50GB SSD
- 1 machine(client) of 16 vCPUs + 30GB Memory + 50GB SSD
- Ubuntu 15.10
- etcd v3 master branch (commit SHA d8f325d), Go 1.6.2
With this configuration, etcd can approximately write:
| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Memory |
|----------------|-------------------|---------------------|-----------------------|-------------------|--------------------|-------------------|-----------------------------|--------|
| 10,000 | 8 | 256 | 1 | 1 | leader only | 525 | 2ms | 35 MB |
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 25,000 | 30ms | 35 MB |
| 100,000 | 8 | 256 | 100 | 1000 | all members | 33,000 | 25ms | 35 MB |
Sample commands are:
```
# assuming IP_1 is leader, write requests to the leader
benchmark --endpoints={IP_1} --conns=1 --clients=1 \
put --key-size=8 --sequential-keys --total=10000 --val-size=256
benchmark --endpoints={IP_1} --conns=100 --clients=1000 \
put --key-size=8 --sequential-keys --total=100000 --val-size=256
# write to all members
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
put --key-size=8 --sequential-keys --total=100000 --val-size=256
```
Linearizable read requests go through a quorum of cluster members for consensus to fetch the most recent data. Serializable read requests are cheaper than linearizable reads since they are served by any single etcd member, instead of a quorum of members, in exchange for possibly serving stale data. etcd can read:
| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average latency per request | Average read QPS |
|--------------------|-------------------|---------------------|-----------------------|-------------------|-------------|-----------------------------|------------------|
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 2ms | 560 |
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 0.4ms | 7,500 |
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 15ms | 43,000 |
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 9ms | 93,000 |
Sample commands are:
```
# Linearizable read requests
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=1 --clients=1 \
range YOUR_KEY --consistency=l --total=10000
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
range YOUR_KEY --consistency=l --total=100000
# Serializable read requests for each member and sum up the numbers
for endpoint in {IP_1} {IP_2} {IP_3}; do
benchmark --endpoints=$endpoint --conns=1 --clients=1 \
range YOUR_KEY --consistency=s --total=10000
done
for endpoint in {IP_1} {IP_2} {IP_3}; do
benchmark --endpoints=$endpoint --conns=100 --clients=1000 \
range YOUR_KEY --consistency=s --total=100000
done
```
We encourage running the benchmark test when setting up an etcd cluster for the first time in a new environment to ensure the cluster achieves adequate performance; cluster latency and throughput can be sensitive to minor environment differences.
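The benchmark CLI is built from the etcd source tree; a sketch, assuming a working Go toolchain and `GOPATH`:
```
# fetch and build the benchmark tool, then confirm it runs
$ go get github.com/coreos/etcd/tools/benchmark
$ $GOPATH/bin/benchmark --help
```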


@@ -0,0 +1,63 @@
## Disaster recovery
etcd is designed to withstand machine failures. An etcd cluster automatically recovers from temporary failures (e.g., machine reboots) and tolerates up to *(N-1)/2* permanent failures for a cluster of N members. When a member permanently fails, whether due to hardware failure or disk corruption, it loses access to the cluster. If the cluster permanently loses more than *(N-1)/2* members then it disastrously fails, irrevocably losing quorum. Once quorum is lost, the cluster cannot reach consensus and therefore cannot continue accepting updates.
To recover from disastrous failure, etcd v3 provides snapshot and restore facilities to recreate the cluster without v3 key data loss. To recover v2 keys, refer to the [v2 admin guide][v2_recover].
[v2_recover]: ../v2/admin_guide.md#disaster-recovery
### Snapshotting the keyspace
Recovering a cluster first needs a snapshot of the keyspace from an etcd member. A snapshot may either be taken from a live member with the `etcdctl snapshot save` command or by copying the `member/snap/db` file from an etcd data directory. For example, the following command snapshots the keyspace served by `$ENDPOINT` to the file `snapshot.db`:
```sh
$ etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
```
### Restoring a cluster
To restore a cluster, all that is needed is a single snapshot "db" file. A cluster restore with `etcdctl snapshot restore` creates new etcd data directories; all members should restore using the same snapshot. Restoring overwrites some snapshot metadata (specifically, the member ID and cluster ID); the member loses its former identity. This metadata overwrite prevents the new member from inadvertently joining an existing cluster. Therefore in order to start a cluster from a snapshot, the restore must start a new logical cluster.
Snapshot integrity may be optionally verified at restore time. If the snapshot is taken with `etcdctl snapshot save`, it will have an integrity hash that is checked by `etcdctl snapshot restore`. If the snapshot is copied from the data directory, there is no integrity hash and it will only restore by using `--skip-hash-check`.
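For example, a sketch of restoring from a database file copied out of a member's data directory; the file path and member flags are illustrative and follow the example below:
```sh
$ etcdctl snapshot restore member/snap/db --skip-hash-check \
  --name m1 \
  --initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls http://host1:2380
```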
A restore initializes a new member of a new cluster, with a fresh cluster configuration using `etcd`'s cluster configuration flags, but preserves the contents of the etcd keyspace. Continuing from the previous example, the following creates new etcd data directories (`m1.etcd`, `m2.etcd`, `m3.etcd`) for a three member cluster:
```sh
$ etcdctl snapshot restore snapshot.db \
--name m1 \
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls http://host1:2380
$ etcdctl snapshot restore snapshot.db \
--name m2 \
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls http://host2:2380
$ etcdctl snapshot restore snapshot.db \
--name m3 \
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls http://host3:2380
```
Next, start `etcd` with the new data directories:
```sh
$ etcd \
--name m1 \
--listen-client-urls http://host1:2379 \
--advertise-client-urls http://host1:2379 \
--listen-peer-urls http://host1:2380 &
$ etcd \
--name m2 \
--listen-client-urls http://host2:2379 \
--advertise-client-urls http://host2:2379 \
--listen-peer-urls http://host2:2380 &
$ etcd \
--name m3 \
--listen-client-urls http://host3:2379 \
--advertise-client-urls http://host3:2379 \
--listen-peer-urls http://host3:2380 &
```
Now the restored etcd cluster should be available and serving the keyspace given by the snapshot.


@@ -0,0 +1,185 @@
# Runtime reconfiguration
etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.
Reconfiguration requests can only be processed when a majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster; the majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and will need to [restart from majority failure][majority failure].
To better understand the design behind runtime reconfiguration, we suggest reading [the runtime reconfiguration document][runtime-reconf].
## Reconfiguration use cases
Let's walk through some common reasons for reconfiguring a cluster. Most of these just involve combinations of adding or removing a member, which are explained below under [Cluster Reconfiguration Operations][cluster-reconf].
### Cycle or upgrade multiple machines
If multiple cluster members need to move due to planned maintenance (hardware upgrades, network downtime, etc.), it is recommended to modify members one at a time.
It is safe to remove the leader, however there is a brief period of downtime while the election process takes place. If the cluster holds more than 50MB, it is recommended to [migrate the member's data directory][member migration].
### Change the cluster size
Increasing the cluster size can enhance [failure tolerance][fault tolerance table] and provide better read performance. Since clients can read from any member, increasing the number of members increases the overall read throughput.
Decreasing the cluster size can improve the write performance of a cluster, with a trade-off of decreased resilience. Writes into the cluster are replicated to a majority of members of the cluster before being considered committed. Decreasing the cluster size lowers the majority, and each write is committed more quickly.
### Replace a failed machine
If a machine fails due to hardware failure, data directory corruption, or some other fatal situation, it should be replaced as soon as possible. Machines that have failed but haven't been removed adversely affect the quorum and reduce the tolerance for an additional failure.
To replace the machine, follow the instructions for [removing the member][remove member] from the cluster, and then [add a new member][add member] in its place. If the cluster holds more than 50MB, it is recommended to [migrate the failed member's data directory][member migration] if it is still accessible.
### Restart cluster from majority failure
If the majority of the cluster is lost or all of the nodes have changed IP addresses, then manual action is necessary to recover safely.
The basic steps in the recovery process include [creating a new cluster using the old data][disaster recovery], forcing a single member to act as the leader, and finally using runtime configuration to [add new members][add member] to this new cluster one at a time.
## Cluster reconfiguration operations
Now that we have the use cases in mind, let us lay out the operations involved in each.
Before making any change, the simple majority (quorum) of etcd members must be available.
This is essentially the same requirement as for any other write to etcd.
All changes to the cluster are done one at a time:
* To update a single member's peerURLs, make an update operation
* To replace a single member, make an add then a remove operation
* To increase from 3 to 5 members, make two add operations
* To decrease from 5 to 3, make two remove operations
All of these examples will use the `etcdctl` command line tool that ships with etcd.
To change membership without `etcdctl`, use the [v2 HTTP members API][member-api] or the [v3 gRPC members API][member-api-grpc].
### Update a member
#### Update advertise client URLs
To update the advertise client URLs of a member, simply restart that member with an updated client URLs flag (`--advertise-client-urls`) or environment variable (`ETCD_ADVERTISE_CLIENT_URLS`). The restarted member will self-publish the updated URLs.
A wrongly updated client URL will not affect the health of the etcd cluster.
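A minimal sketch of such a restart; the member name, data directory, and URL are illustrative:
```sh
# restart the member with the updated client URL; the existing data directory is reused
$ etcd --name node1 --data-dir node1.etcd \
  --listen-client-urls http://10.0.1.10:2379 \
  --advertise-client-urls http://10.0.1.10:2379
```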
#### Update advertise peer URLs
To update the advertise peer URLs of a member, first update it explicitly via the member command and then restart the member. The additional action is required since updating peer URLs changes the cluster-wide configuration and can affect the health of the etcd cluster.
To update the peer URLs, first, we need to find the target member's ID. To list all members with `etcdctl`:
```sh
$ etcdctl member list
6e3bd23ae5f1eae0: name=node2 peerURLs=http://localhost:23802 clientURLs=http://127.0.0.1:23792
924e2e83e93f2560: name=node3 peerURLs=http://localhost:23803 clientURLs=http://127.0.0.1:23793
a8266ecf031671f3: name=node1 peerURLs=http://localhost:23801 clientURLs=http://127.0.0.1:23791
```
In this example, let's `update` the member with ID a8266ecf031671f3 and change its peerURLs value to http://10.0.1.10:2380:
```sh
$ etcdctl member update a8266ecf031671f3 http://10.0.1.10:2380
Updated member with ID a8266ecf031671f3 in cluster
```
### Remove a member
Let us say the member ID we want to remove is a8266ecf031671f3.
We then use the `remove` command to perform the removal:
```sh
$ etcdctl member remove a8266ecf031671f3
Removed member a8266ecf031671f3 from cluster
```
The target member will stop itself at this point and print out the removal in the log:
```
etcd: this member has been permanently removed from the cluster. Exiting.
```
It is safe to remove the leader, however the cluster will be inactive while a new leader is elected. This duration is normally the period of election timeout plus the voting process.
### Add a new member
Adding a member is a two step process:
* Add the new member to the cluster via the [HTTP members API][member-api], the [gRPC members API][member-api-grpc], or the `etcdctl member add` command.
* Start the new member with the new cluster configuration, including a list of the updated members (existing members + the new member).
Using `etcdctl` let's add the new member to the cluster by specifying its [name][conf-name] and [advertised peer URLs][conf-adv-peer]:
```sh
$ etcdctl member add infra3 http://10.0.1.13:2380
added member 9bf1b35fc7761a23 to cluster
ETCD_NAME="infra3"
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380"
ETCD_INITIAL_CLUSTER_STATE=existing
```
`etcdctl` has informed the cluster about the new member and printed out the environment variables needed to successfully start it.
Now start the new etcd process with the relevant flags for the new member:
```sh
$ export ETCD_NAME="infra3"
$ export ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380"
$ export ETCD_INITIAL_CLUSTER_STATE=existing
$ etcd --listen-client-urls http://10.0.1.13:2379 --advertise-client-urls http://10.0.1.13:2379 --listen-peer-urls http://10.0.1.13:2380 --initial-advertise-peer-urls http://10.0.1.13:2380 --data-dir %data_dir%
```
The new member will run as a part of the cluster and immediately begin catching up with the rest of the cluster.
If adding multiple members the best practice is to configure a single member at a time and verify it starts correctly before adding more new members.
If adding a new member to a 1-node cluster, the cluster cannot make progress before the new member starts because it needs two members as majority to agree on the consensus. This behavior only happens between the time `etcdctl member add` informs the cluster about the new member and the new member successfully establishing a connection to the existing one.
#### Error cases when adding members
In the following case we have not included our new host in the list of enumerated nodes.
If this is a new cluster, the node must be added to the list of initial cluster members.
```sh
$ etcd --name infra3 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state existing
etcdserver: assign ids error: the member count is unequal
exit 1
```
In this case we give a different address (10.0.1.14:2380) from the one used to join the cluster (10.0.1.13:2380).
```sh
$ etcd --name infra4 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra4=http://10.0.1.14:2380 \
--initial-cluster-state existing
etcdserver: assign ids error: unmatched member while checking PeerURLs
exit 1
```
When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any active member in the cluster:
```sh
$ etcd
etcd: this member has been permanently removed from the cluster. Exiting.
exit 1
```
### Strict reconfiguration check mode (`-strict-reconfig-check`)
As described above, the best practice for adding new members is to configure a single member at a time and verify it starts correctly before adding more new members. This step-by-step approach is very important because if a newly added member is not configured correctly (for example, the peer URLs are incorrect), the cluster can lose quorum. The quorum loss happens since the newly added member is counted in the quorum even if that member is not reachable from the other existing members. Quorum loss might also happen if there is a connectivity issue or there are operational issues.
To avoid this problem, etcd provides the `-strict-reconfig-check` option. If this option is passed to etcd, etcd rejects reconfiguration requests if the number of started members would be less than a quorum of the reconfigured cluster.
It is recommended to enable this option. However, it is disabled by default for compatibility reasons.
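A minimal sketch of enabling it on startup; the other flags follow the earlier clustering examples:
```sh
$ etcd --strict-reconfig-check --name infra0 \
  --initial-advertise-peer-urls http://10.0.1.10:2380 --listen-peer-urls http://10.0.1.10:2380 \
  --listen-client-urls http://10.0.1.10:2379 --advertise-client-urls http://10.0.1.10:2379 \
  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```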
[add member]: #add-a-new-member
[cluster-reconf]: #cluster-reconfiguration-operations
[conf-adv-peer]: configuration.md#-initial-advertise-peer-urls
[conf-name]: configuration.md#-name
[disaster recovery]: recovery.md
[fault tolerance table]: ../v2/admin_guide.md#fault-tolerance-table
[majority failure]: #restart-cluster-from-majority-failure
[member-api]: ../v2/members_api.md
[member-api-grpc]: ../dev-guide/api_reference_v3.md#service-cluster-etcdserveretcdserverpbrpcproto
[member migration]: ../v2/admin_guide.md#member-migration
[remove member]: #remove-a-member
[runtime-reconf]: runtime-reconf-design.md


@@ -0,0 +1,50 @@
# Design of runtime reconfiguration
Runtime reconfiguration is one of the hardest and most error prone features in a distributed system, especially in a consensus based system like etcd.
Read on to learn about the design of etcd's runtime reconfiguration commands and how we tackled these problems.
## Two phase config changes keep the cluster safe
In etcd, every runtime reconfiguration has to go through [two phases][add-member] for safety reasons. For example, to add a member, first inform the cluster of the new configuration and then start the new member.
Phase 1 - Inform cluster of new configuration
To add a member to an etcd cluster, make an API call requesting that a new member be added to the cluster. This is the only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
Phase 2 - Start new member
To join the etcd member into the existing cluster, specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, the cluster has reached the expected configuration.
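A sketch of both phases with `etcdctl`, using illustrative names and URLs; the full procedure is in the [runtime configuration guide][add-member]:
```sh
# phase 1: inform the cluster of the new configuration
$ etcdctl member add infra3 http://10.0.1.13:2380
# phase 2: start the new member against the expanded cluster
$ etcd --name infra3 --initial-cluster-state existing \
  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380 \
  --initial-advertise-peer-urls http://10.0.1.13:2380 --listen-peer-urls http://10.0.1.13:2380 \
  --listen-client-urls http://10.0.1.13:2379 --advertise-client-urls http://10.0.1.13:2379
```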
By splitting the process into two discrete phases, users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change, it will not be accepted by the cluster.
Without the explicit workflow around cluster membership etcd would be vulnerable to unexpected cluster membership changes. For example, if etcd is running under an init system such as systemd, etcd would be restarted after being removed via the membership API, and attempt to rejoin the cluster on startup. This cycle would continue every time a member is removed via the API and systemd is set to restart etcd after failing, which is unexpected.
We expect runtime reconfiguration to be an infrequent operation. We decided to keep it explicit and user-driven to ensure configuration safety and keep the cluster always running smoothly under explicit control.
## Permanent loss of quorum requires new cluster
If a cluster permanently loses a majority of its members, a new cluster will need to be started from an old data directory to recover the previous state.
It is entirely possible to force-remove the failed members from the existing cluster to recover. However, we decided not to support this method since it bypasses the normal consensus committing phase, which is unsafe. If the member to remove is not actually dead, or is force-removed through different members in the same cluster, etcd will end up with a diverged cluster with the same clusterID. This is very dangerous and hard to debug/fix afterwards.
With a correct deployment, the possibility of permanent majority loss is very low. But it is a severe enough problem that it is worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and preparing for permanent majority loss before putting etcd into production.
## Do not use public discovery service for runtime reconfiguration
The public discovery service should only be used for bootstrapping a cluster. To join a member to an existing cluster, use the runtime reconfiguration API.
Discovery service is designed for bootstrapping an etcd cluster in the cloud environment, when the IP addresses of all the members are not known beforehand. After successfully bootstrapping a cluster, the IP addresses of all the members are known. Technically, the discovery service should no longer be needed.
It seems that using the public discovery service would be a convenient way to do runtime reconfiguration, since the discovery service already has all the cluster configuration information. However, relying on the public discovery service brings problems:
1. it introduces external dependencies for the entire life-cycle of the cluster, not just bootstrap time. If there is a network issue between the cluster and the public discovery service, the cluster will suffer from it.
2. the public discovery service must reflect the correct runtime configuration of the cluster during its life-cycle. It has to provide a security mechanism to prevent bad actions, which is hard.
3. the public discovery service would have to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.
To have a discovery service that supports runtime reconfiguration, the best choice is to build a private one.
[add-member]: runtime-configuration.md#add-a-new-member
[disaster-recovery]: recovery.md

View File

@@ -0,0 +1,224 @@
# Security model
etcd supports automatic TLS as well as authentication through client certificates, for both client-to-server and peer (server-to-server / cluster) communication.
To get up and running, first have a CA certificate and a signed key pair for one member. It is recommended to create and sign a new key pair for every member in a cluster.
For convenience, the [cfssl] tool provides an easy interface to certificate generation, and we provide an example using the tool [here][tls-setup]. Alternatively, try this [guide to generating self-signed key pairs][tls-guide].
## Basic setup
etcd takes several certificate related configuration options, either through command-line flags or environment variables:
**Client-to-server communication:**
`--cert-file=<path>`: Certificate used for SSL/TLS connections **to** etcd. When this option is set, advertise-client-urls can use the HTTPS schema.
`--key-file=<path>`: Key for the certificate. Must be unencrypted.
`--client-cert-auth`: When this is set, etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail.
`--trusted-ca-file=<path>`: Trusted certificate authority.
`--auto-tls`: Use automatically generated self-signed certificates for TLS connections with clients.
**Peer (server-to-server / cluster) communication:**
The peer options work the same way as the client-to-server options:
`--peer-cert-file=<path>`: Certificate used for SSL/TLS connections between peers. This will be used both for listening on the peer address as well as sending requests to other peers.
`--peer-key-file=<path>`: Key for the certificate. Must be unencrypted.
`--peer-client-cert-auth`: When set, etcd will check all incoming peer requests from the cluster for valid client certificates signed by the supplied CA.
`--peer-trusted-ca-file=<path>`: Trusted certificate authority.
`--peer-auto-tls`: Use automatically generated self-signed certificates for TLS connections between peers.
If either a client-to-server or peer certificate is supplied the key must also be set. All of these configuration options are also available through the environment variables, `ETCD_CA_FILE`, `ETCD_PEER_CA_FILE` and so on.
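As a hedged illustration, the client-to-server flags above can be supplied through the corresponding environment variables instead of command-line flags (the certificate paths here are placeholders):
```sh
# Equivalent to --cert-file, --key-file, --trusted-ca-file and --client-cert-auth
ETCD_CERT_FILE=/path/to/server.crt \
ETCD_KEY_FILE=/path/to/server.key \
ETCD_TRUSTED_CA_FILE=/path/to/ca.crt \
ETCD_CLIENT_CERT_AUTH=true \
etcd --name infra0 --data-dir infra0 \
  --advertise-client-urls=https://127.0.0.1:2379 --listen-client-urls=https://127.0.0.1:2379
```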
## Example 1: Client-to-server transport security with HTTPS
For this, have a CA certificate (`ca.crt`) and signed key pair (`server.crt`, `server.key`) ready.
Let us configure etcd to provide simple HTTPS transport security step by step:
```sh
$ etcd --name infra0 --data-dir infra0 \
--cert-file=/path/to/server.crt --key-file=/path/to/server.key \
--advertise-client-urls=https://127.0.0.1:2379 --listen-client-urls=https://127.0.0.1:2379
```
This should start up fine and it will be possible to test the configuration by speaking HTTPS to etcd:
```sh
$ curl --cacert /path/to/ca.crt https://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -v
```
The command should show that the handshake succeeded. Since we use self-signed certificates with our own certificate authority, the CA must be passed to curl using the `--cacert` option. Another possibility would be to add the CA certificate to the system's trusted certificates directory (usually in `/etc/pki/tls/certs` or `/etc/ssl/certs`).
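For instance, on a Debian/Ubuntu-style system the CA could be added to the trust store roughly as follows (paths and tooling vary by distribution, so treat this only as a sketch):
```sh
$ sudo cp /path/to/ca.crt /usr/local/share/ca-certificates/etcd-ca.crt
$ sudo update-ca-certificates
```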
**OSX 10.9+ Users**: curl 7.30.0 on OSX 10.9+ doesn't understand certificates passed in on the command line.
Instead, import the dummy ca.crt directly into the keychain or add the `-k` flag to curl to ignore errors.
To test without the `-k` flag, run `open ./fixtures/ca/ca.crt` and follow the prompts.
Please remove this certificate after testing!
If there is a workaround, let us know.
## Example 2: Client-to-server authentication with HTTPS client certificates
For now we've given the etcd client the ability to verify the server identity and provide transport security. We can however also use client certificates to prevent unauthorized access to etcd.
The clients will provide their certificates to the server and the server will check whether the cert is signed by the supplied CA and decide whether to serve the request.
The same files mentioned in the first example are needed for this, as well as a key pair for the client (`client.crt`, `client.key`) signed by the same certificate authority.
```sh
$ etcd --name infra0 --data-dir infra0 \
--client-cert-auth --trusted-ca-file=/path/to/ca.crt --cert-file=/path/to/server.crt --key-file=/path/to/server.key \
--advertise-client-urls https://127.0.0.1:2379 --listen-client-urls https://127.0.0.1:2379
```
Now try the same request as above to this server:
```sh
$ curl --cacert /path/to/ca.crt https://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -v
```
The request should be rejected by the server:
```
...
routines:SSL3_READ_BYTES:sslv3 alert bad certificate
...
```
To make it succeed, we need to give the CA signed client certificate to the server:
```sh
$ curl --cacert /path/to/ca.crt --cert /path/to/client.crt --key /path/to/client.key \
-L https://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -v
```
The output should include:
```
...
SSLv3, TLS handshake, CERT verify (15):
...
TLS handshake, Finished (20)
```
And also the response from the server:
```json
{
"action": "set",
"node": {
"createdIndex": 12,
"key": "/foo",
"modifiedIndex": 12,
"value": "bar"
}
}
```
## Example 3: Transport security & client certificates in a cluster
etcd supports the same model as above for **peer communication**, meaning the communication between etcd members in a cluster.
Assuming we have our `ca.crt` and two members with their own keypairs (`member1.crt` & `member1.key`, `member2.crt` & `member2.key`) signed by this CA, we launch etcd as follows:
```sh
DISCOVERY_URL=... # from https://discovery.etcd.io/new
# member1
$ etcd --name infra1 --data-dir infra1 \
--peer-client-cert-auth --peer-trusted-ca-file=/path/to/ca.crt --peer-cert-file=/path/to/member1.crt --peer-key-file=/path/to/member1.key \
--initial-advertise-peer-urls=https://10.0.1.10:2380 --listen-peer-urls=https://10.0.1.10:2380 \
--discovery ${DISCOVERY_URL}
# member2
$ etcd --name infra2 --data-dir infra2 \
--peer-client-cert-auth --peer-trusted-ca-file=/path/to/ca.crt --peer-cert-file=/path/to/member2.crt --peer-key-file=/path/to/member2.key \
--initial-advertise-peer-urls=https://10.0.1.11:2380 --listen-peer-urls=https://10.0.1.11:2380 \
--discovery ${DISCOVERY_URL}
```
The etcd members will form a cluster and all communication between members in the cluster will be encrypted and authenticated using the client certificates. The output of etcd will show that the addresses it connects to use HTTPS.
## Example 4: Automatic self-signed transport security
For cases where communication encryption, but not authentication, is needed, etcd supports encrypting its messages with automatically generated self-signed certificates. This simplifies deployment because there is no need for managing certificates and keys outside of etcd.
Configure etcd to use self-signed certificates for client and peer connections with the flags `--auto-tls` and `--peer-auto-tls`:
```sh
DISCOVERY_URL=... # from https://discovery.etcd.io/new
# member1
$ etcd --name infra1 --data-dir infra1 \
--auto-tls --peer-auto-tls \
--initial-advertise-peer-urls=https://10.0.1.10:2380 --listen-peer-urls=https://10.0.1.10:2380 \
--discovery ${DISCOVERY_URL}
# member2
$ etcd --name infra2 --data-dir infra2 \
--auto-tls --peer-auto-tls \
--initial-advertise-peer-urls=https://10.0.1.11:2380 --listen-peer-urls=https://10.0.1.11:2380 \
--discovery ${DISCOVERY_URL}
```
Self-signed certificates do not authenticate identity so curl will return an error:
```sh
curl: (60) SSL certificate problem: Invalid certificate chain
```
To disable certificate chain checking, invoke curl with the `-k` flag:
```sh
$ curl -k https://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -v
```
## Notes for etcd proxy
etcd proxy terminates the TLS from its client if the connection is secure, and uses the proxy's own key/cert specified in `--peer-key-file` and `--peer-cert-file` to communicate with etcd members.
The proxy communicates with etcd members through both the `--advertise-client-urls` and `--advertise-peer-urls` of a given member. It forwards client requests to the etcd members' advertised client URLs, and it syncs the initial cluster configuration through the etcd members' advertised peer URLs.
When client authentication is enabled for an etcd member, the administrator must ensure that the peer certificate specified in the proxy's `--peer-cert-file` option is valid for that authentication. The proxy's peer certificate must also be valid for peer authentication if peer authentication is enabled.
## Frequently asked questions
### I'm seeing a SSLv3 alert handshake failure when using TLS client authentication?
The `crypto/tls` package of `golang` checks the key usage of the certificate public key before using it.
To use the certificate public key to do client auth, we need to add `clientAuth` to `Extended Key Usage` when creating the certificate public key.
Here is how to do it:
Add the following section to openssl.cnf:
```
[ ssl_client ]
...
extendedKeyUsage = clientAuth
...
```
When creating the cert be sure to reference it in the `-extensions` flag:
```
$ openssl ca -config openssl.cnf -policy policy_anything -extensions ssl_client -out certs/machine.crt -infiles machine.csr
```
### With peer certificate authentication I receive "certificate is valid for 127.0.0.1, not $MY_IP"
Make sure to sign the certificates with a Subject Name that matches the member's public IP address. The `etcd-ca` tool, for example, provides an `--ip=` option for its `new-cert` command.
If the certificate is signed for the member's FQDN in its Subject Name, use Subject Alternative Names (often called IP SANs) to add the IP address. The `etcd-ca` tool provides a `--domain=` option for its `new-cert` command, and openssl can do [it][alt-name] too.
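For example, with openssl the IP SAN can be declared in the extensions section used when signing; the section name and addresses below are purely illustrative:
```
[ ssl_peer ]
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = IP:10.0.1.10, DNS:infra0.example.com
```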
[cfssl]: https://github.com/cloudflare/cfssl
[tls-setup]: /hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName

View File

@@ -0,0 +1,14 @@
## Supported platform
### 32-bit and other unsupported systems
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See #[358][358] for more information.
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unsupported architectures will print
a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to
the target architecture.
Currently only the amd64 architecture is officially supported by `etcd`.
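For example, a test run on an unsupported ARM machine can opt in explicitly (a sketch; the value must match the target architecture, and this is not recommended for production):
```sh
$ ETCD_UNSUPPORTED_ARCH=arm ./etcd
```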
[358]: https://github.com/coreos/etcd/issues/358

View File

@@ -0,0 +1,47 @@
# Migrate applications from using API v2 to API v3
The data store v2 is still accessible from the API v2 after upgrading to etcd3. Thus, it will work as before and require no application changes. With etcd 3, applications use the new grpc API v3 to access the mvcc store, which provides more features and improved performance. The mvcc store and the old store v2 are separate and isolated; writes to the store v2 will not affect the mvcc store and, similarly, writes to the mvcc store will not affect the store v2.
Migrating an application from the API v2 to the API v3 involves two steps: 1) migrate the client library and, 2) migrate the data. If the application can rebuild the data, then migrating the data is unnecessary.
## Migrate client library
API v3 is different from API v2, thus application developers need to use a new client library to send requests to etcd API v3. The documentation of the client v3 is available at https://godoc.org/github.com/coreos/etcd/clientv3.
There are some notable differences between API v2 and API v3:
- Transaction: In v3, etcd provides multi-key conditional transactions. Applications should use transactions in place of `Compare-And-Swap` operations.
- Flat key space: There are no directories in API v3, only keys. For example, "/a/b/c/" is a key. Range queries support getting all keys matching a given prefix.
- Compacted responses: Operations like `Delete` no longer return previous values. To get the deleted value, a transaction can be used to atomically get the key and then delete its value.
- Leases: A replacement for v2 TTLs; the TTL is bound to a lease and keys attach to the lease. When the TTL expires, the lease is revoked and all attached keys are removed.
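As a quick sketch of the lease model described in the last item above (the lease ID shown is example output and will differ on every run):
```sh
$ ETCDCTL_API=3 etcdctl lease grant 60
lease 32695410dcc0ca06 granted with TTL(60s)
$ ETCDCTL_API=3 etcdctl put --lease=32695410dcc0ca06 foo bar
OK
```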
## Migrate data
Application data can be migrated either offline or online. Offline migration is much simpler than online migration and is recommended.
### Offline migration
Offline migration is very simple but requires etcd downtime. If an etcd downtime window spanning from seconds to minutes is acceptable, offline migration is a good choice and is easy to automate.
First, all members in the etcd cluster must converge to the same state. This can be achieved by stopping all applications that write keys to etcd. Alternatively, if the applications must remain running, configure etcd to listen on a different client URL and restart all etcd members. To check whether the states have converged, wait a few seconds and use the `ETCDCTL_API=3 etcdctl endpoint status` command to confirm that the `raft index` of all members matches (or differs by at most 1 due to an internal sync raft command).
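A minimal sketch of that check, assuming three members with the endpoints below (placeholders):
```sh
# compare the raft index reported for each endpoint
$ ETCDCTL_API=3 etcdctl --endpoints=10.0.1.10:2379,10.0.1.11:2379,10.0.1.12:2379 endpoint status
```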
Second, migrate the v2 keys into v3 with the [migrate][migrate_command] (`ETCDCTL_API=3 etcdctl migrate`) command. The migrate command writes keys in the v2 store to a user-provided transformer program and reads back transformed keys. It then writes transformed keys into the mvcc store. This usually takes at most tens of seconds.
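A hedged example of the migrate step, assuming the member's data directory is `/var/lib/etcd` and an optional transformer program at `./transformer` (both placeholders):
```sh
# run against the member's data directory (with the member stopped)
$ ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd --transformer=./transformer
```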
Restart the etcd members and everything should just work.
### Online migration
If the application cannot tolerate any downtime, then it must migrate online. The implementation of online migration will vary from application to application but the overall idea is the same.
First, write application code using the v3 API. The application must support two modes: a migration mode and a normal mode. The application starts in migration mode. When running in migration mode, the application reads keys using the v3 API first, and, if it cannot find the key, it retries with the API v2. In normal mode, the application only reads keys using the v3 API. The application writes keys over the API v3 in both modes. To acknowledge a switch from migration mode to normal mode, the application watches on a switch mode key. When the switch key's value turns to `true`, the application switches over from migration mode to normal mode.
Second, start a background job to migrate data from the store v2 to the mvcc store by reading keys from the API v2 and writing keys to the API v3.
After finishing data migration, the background job writes `true` into the switch mode key to notify the application that it may switch modes.
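For instance, the coordination could be as simple as a key watched by the application and written by the migration job; the key name `/migration/switch-mode` is an arbitrary example:
```sh
# application side: watch for the mode switch
$ ETCDCTL_API=3 etcdctl watch /migration/switch-mode

# migration job: flip the switch once the copy is complete
$ ETCDCTL_API=3 etcdctl put /migration/switch-mode true
```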
Online migration can be difficult when the application logic depends on store v2 indexes. Applications will need additional logic to convert mvcc store revisions to store v2 indexes.
[migrate_command]: ../../etcdctl/README.md#migrate-options

View File

@@ -0,0 +1,17 @@
## Versioning
### Service versioning
etcd uses [semantic versioning](http://semver.org).
New minor versions may add additional features to the API.
Get the running etcd cluster version with `etcdctl`:
```sh
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 endpoint status
```
### API versioning
The `v3` API responses should not change after the 3.0.0 release but new features will be added over time.

View File

@@ -1,6 +1,6 @@
# Production Users
# Production users
This document tracks people and use cases for etcd in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using various etcd applications, operation environments, and cluster sizes. The etcd development team may reach out periodically to check-in on your experience and update this list.
This document tracks people and use cases for etcd in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using various etcd applications, operation environments, and cluster sizes. The etcd development team may reach out periodically to check-in on how etcd is working in the field and update this list.
## discovery.etcd.io
@@ -48,4 +48,15 @@ CyCore Systems provides architecture and engineering for computing systems. Thi
Radius Intelligence uses Kubernetes running CoreOS to containerize and scale internal toolsets. Examples include running [JetBrains TeamCity][teamcity] and internal AWS security and cost reporting tools. etcd clusters back these clusters as well as provide some basic environment bootstrapping configuration keys.
## Vonage
- *Application*: system configuration for microservices, scheduling, locks (future - service discovery)
- *Launched*: August 2015
- *Cluster Size*: 2 clusters of 5 members in 2 DCs, n local proxies 1-to-1 with microservice, (ssl and SRV look up)
- *Order of Data Size*: kilobytes
- *Operator*: Vonage [devAdmin][raoofm]
- *Environment*: VMWare, AWS
- *Backups*: Daily snapshots on VMs. Backups done for upgrades.
[teamcity]: https://www.jetbrains.com/teamcity/
[raoofm]:https://github.com/raoofm

View File

@@ -1,153 +0,0 @@
# Proxy
etcd can run as a transparent proxy. Doing so allows for easy discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participate in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.
etcd currently supports two proxy modes: `readwrite` and `readonly`. The default mode is `readwrite`, which forwards both read and write requests to the etcd cluster. A `readonly` etcd proxy only forwards read requests to the etcd cluster, and returns `HTTP 501` to all write requests.
The proxy will shuffle the list of cluster members periodically to avoid sending all connections to a single member.
The member list used by an etcd proxy consists of all client URLs advertised in the cluster. These client URLs are specified in each etcd cluster member's `advertise-client-urls` option.
An etcd proxy examines several command-line options to discover its peer URLs. In order of precedence, these options are `discovery`, `discovery-srv`, and `initial-cluster`. The `initial-cluster` option is set to a comma-separated list of one or more etcd peer URLs used temporarily in order to discover the permanent cluster.
After establishing a list of peer URLs in this manner, the proxy retrieves the list of client URLs from the first reachable peer. These client URLs are specified by the `advertise-client-urls` option to etcd peers. The proxy then continues to connect to the first reachable etcd cluster member every thirty seconds to refresh the list of client URLs.
While etcd proxies therefore do not need to be given the `advertise-client-urls` option, as they retrieve this configuration from the cluster, this implies that `initial-cluster` must be set correctly for every proxy, and the `advertise-client-urls` option must be set correctly for every non-proxy, first-order cluster peer. Otherwise, requests to any etcd proxy would be forwarded improperly. Take special care not to set the `advertise-client-urls` option to URLs that point to the proxy itself, as such a configuration will cause the proxy to enter a loop, forwarding requests to itself until resources are exhausted. To correct either case, stop etcd and restart it with the correct URLs.
[This example Procfile][procfile] illustrates the difference in the etcd peer and proxy command lines used to configure and start a cluster with one proxy under the [goreman process management utility][goreman].
To summarize etcd proxy startup and peer discovery:
1. etcd proxies execute the following steps in order until the cluster *peer-urls* are known:
1. If `discovery` is set for the proxy, ask the given discovery service for
the *peer-urls*. The *peer-urls* will be the combined
`initial-advertise-peer-urls` of all first-order, non-proxy cluster
members.
2. If `discovery-srv` is set for the proxy, the *peer-urls* are discovered
from DNS.
3. If `initial-cluster` is set for the proxy, that will become the value of
*peer-urls*.
4. Otherwise use the default value of
`http://localhost:2380,http://localhost:7001`.
2. These *peer-urls* are used to contact the (non-proxy) members of the cluster
to find their *client-urls*. The *client-urls* will thus be the combined
`advertise-client-urls` of all cluster members (i.e. non-proxies).
3. Requests from clients of the proxy will be forwarded (proxied) to these
*client-urls*.
Always start the first-order etcd cluster members first, then any proxies. A proxy must be able to reach the cluster members to retrieve its configuration, and will attempt connections somewhat aggressively in the absence of such a channel. Starting the members before any proxy ensures the proxy can discover the client URLs when it later starts.
## Using an etcd proxy
To start etcd in proxy mode, you need to provide three flags: `proxy`, `listen-client-urls`, and `initial-cluster` (or `discovery`).
To start a readwrite proxy, set `-proxy on`; to start a readonly proxy, set `-proxy readonly`.
The proxy will listen on `listen-client-urls` and forward requests to the etcd cluster discovered from the `initial-cluster` flag or the `discovery` URL.
### Start an etcd proxy with a static configuration
To start a proxy that will connect to a statically defined etcd cluster, specify the `initial-cluster` flag:
```
etcd --proxy on \
--listen-client-urls http://127.0.0.1:8080 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```
### Start an etcd proxy with the discovery service
If you bootstrap an etcd cluster using the [discovery service][discovery-service], you can also start the proxy with the same `discovery`.
To start a proxy using the discovery service, specify the `discovery` flag. The proxy will wait until the etcd cluster defined at the `discovery` url finishes bootstrapping, and then start to forward the requests.
```
etcd --proxy on \
--listen-client-urls http://127.0.0.1:8080 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de \
```
## Fallback to proxy mode with discovery service
If you bootstrap an etcd cluster using [discovery service][discovery-service] with more than the expected number of etcd members, the extra etcd processes will fall back to being `readwrite` proxies by default. They will forward the requests to the cluster as described above. For example, if you create a discovery url with `size=5`, and start ten etcd processes using that same discovery url, the result will be a cluster with five etcd members and five proxies. Note that this behaviour can be disabled with the `discovery-fallback='exit'` flag.
## Promote a proxy to a member of etcd cluster
A proxy is a part of the etcd cluster that does not participate in consensus. A proxy will never automatically promote itself to an etcd member that participates in consensus.
If you want to promote a proxy to an etcd member, there are four steps you need to follow:
- use etcdctl to add the proxy node as an etcd member into the existing cluster
- stop the etcd proxy process or service
- remove the existing proxy data directory
- restart the etcd process with new member configuration
## Example
We assume you have a one member etcd cluster with one proxy. The cluster information is listed below:
|Name|Address|
|------|---------|
|infra0|10.0.1.10|
|proxy0|10.0.1.11|
This example walks through promoting one proxy to an etcd member. The cluster will become a two-member cluster after finishing the four steps.
### Add a new member into the existing cluster
First, use etcdctl to add the member to the cluster, which will output the environment variables needed to correctly configure the new member:
``` bash
$ etcdctl -endpoint http://10.0.1.10:2379 member add infra1 http://10.0.1.11:2380
added member 9bf1b35fc7761a23 to cluster
ETCD_NAME="infra1"
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
ETCD_INITIAL_CLUSTER_STATE=existing
```
### Stop the proxy process
Stop the existing proxy so we can wipe its state on disk and reload it with the new configuration:
``` bash
ps aux | grep etcd
kill %etcd_proxy_pid%
```
or (if you are running etcd proxy as etcd service under systemd)
``` bash
sudo systemctl stop etcd
```
### Remove the existing proxy data dir
``` bash
rm -rf %data_dir%/proxy
```
### Start etcd as a new member
Finally, start the reconfigured member and make sure it joins the cluster correctly:
``` bash
$ export ETCD_NAME="infra1"
$ export ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
$ export ETCD_INITIAL_CLUSTER_STATE=existing
$ etcd --listen-client-urls http://10.0.1.11:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--listen-peer-urls http://10.0.1.11:2380 \
--initial-advertise-peer-urls http://10.0.1.11:2380 \
--data-dir %data_dir%
```
If you are running etcd under systemd, you should modify the service file with correct configuration and restart the service:
``` bash
sudo systemctl restart etcd
```
If an error occurs, check the [add member troubleshooting doc][runtime-configuration].
[discovery-service]: clustering.md#discovery
[goreman]: https://github.com/mattn/goreman
[procfile]: /Procfile
[runtime-configuration]: runtime-configuration.md#error-cases-when-adding-members

View File

@@ -1,24 +1,24 @@
# Reporting Bugs
# Reporting bugs
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue][issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
To make your bug report accurate and easy to understand, please try to create bug reports that are:
To make the bug report accurate and easy to understand, please try to create bug reports that are:
- Specific. Include as much details as possible: which version, what environment, what configuration, etc. You can also attach etcd log (the starting log with etcd configuration is especially important).
- Specific. Include as many details as possible: which version, what environment, what configuration, etc. If the bug is related to running the etcd server, please attach the etcd log (the starting log with etcd configuration is especially important).
- Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce, please includes the steps that might lead to the problem. You can also attach the affected etcd data dir and stack strace to the bug report.
- Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce; please include the steps that might lead to the problem. If possible, please attach the affected etcd data dir and stack trace to the bug report.
- Isolated. Please try to isolate and reproduce the bug with minimum dependencies. It would significantly slow down the speed to fix a bug if too many dependencies are involved in a bug report. Debugging external systems that rely on etcd is out of scope, but we are happy to point you in the right direction or help you interact with etcd in the correct manner.
- Isolated. Please try to isolate and reproduce the bug with minimum dependencies. It would significantly slow down the speed to fix a bug if too many dependencies are involved in a bug report. Debugging external systems that rely on etcd is out of scope, but we are happy to provide guidance in the right direction or help with using etcd itself.
- Unique. Do not duplicate existing bug report.
- Scoped. One bug per report. Do not follow up with another bug inside one report.
You might also want to read [Elika Etemads article on filing good bug reports][filing-good-bugs] before creating a bug report.
It may be worthwhile to read [Elika Etemads article on filing good bug reports][filing-good-bugs] before creating a bug report.
We might ask you for further information to locate a bug. A duplicated bug report will be closed.
We might ask for further information to locate a bug. A duplicated bug report will be closed.
## Frequently Asked Questions
## Frequently asked questions
### How to get a stack trace
@@ -39,7 +39,7 @@ $ sudo systemctl cat etcd2
$ sudo journalctl -u etcd2
```
Due to an upstream systemd bug, journald may miss the last few log lines when its process exit. If journalctl tells you that etcd stops without fatal or panic message, you could try `sudo journalctl -f -t etcd2` to get full log.
Due to an upstream systemd bug, journald may miss the last few log lines when its processes exit. If journalctl says etcd stopped without fatal or panic message, try `sudo journalctl -f -t etcd2` to get full log.
[etcd-issue]: https://github.com/coreos/etcd/issues/new
[filing-good-bugs]: http://fantasai.inkedblade.net/style/talks/filing-good-bugs/

View File

@@ -208,4 +208,4 @@ WatchResponse {
```
[api-protobuf]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/master/storage/storagepb/kv.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/master/mvcc/mvccpb/kv.proto

View File

@@ -1,11 +1,10 @@
# Tuning
The default settings in etcd should work well for installations on a local network where the average network latency is low.
However, when using etcd across multiple data centers or over networks with high latency you may need to tweak the heartbeat interval and election timeout settings.
The default settings in etcd should work well for installations on a local network where the average network latency is low. However, when using etcd across multiple data centers or over networks with high latency, the heartbeat interval and election timeout settings may need tuning.
The network isn't the only source of latency. Each request and response may be impacted by slow disks on both the leader and follower. Each of these timeouts represents the total time from request to successful response from the other machine.
## Time Parameters
## Time parameters
The underlying distributed consensus protocol relies on two separate time parameters to ensure that nodes can handoff leadership if one stalls or goes offline.
The first parameter is called the *Heartbeat Interval*.
@@ -24,24 +23,24 @@ On the other side, a too high heartbeat interval leads to high election timeout.
The easiest way to measure round-trip time (RTT) is to use [PING utility][ping].
The election timeout should be set based on the heartbeat interval and average round-trip time between members.
Election timeouts must be at least 10 times the round-trip time so it can account for variance in your network.
For example, if the round-trip time between your members is 10ms then you should have at least a 100ms election timeout.
Election timeouts must be at least 10 times the round-trip time so it can account for variance in the network.
For example, if the round-trip time between members is 10ms then the election timeout should be at least 100ms.
You should also set your election timeout to at least 5 to 10 times your heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms you should set your election timeout to at least 250ms - 500ms.
The election timeout should be set to at least 5 to 10 times the heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms, set the election timeout to at least 250ms - 500ms.
The upper limit of election timeout is 50000ms (50s), which should only be used when deploying a globally-distributed etcd cluster.
A reasonable round-trip time for the continental United States is 130ms, and the time between US and Japan is around 350-400ms.
If your network has uneven performance or regular packet delays/loss then it is possible that a couple of retries may be necessary to successfully send a packet. So 5s is a safe upper limit of global round-trip time.
If the network has uneven performance or regular packet delays/loss then it is possible that a couple of retries may be necessary to successfully send a packet. So 5s is a safe upper limit of global round-trip time.
As the election timeout should be an order of magnitude bigger than broadcast time, in the case of ~5s for a globally distributed cluster, then 50 seconds becomes a reasonable maximum.
The heartbeat interval and election timeout value should be the same for all members in one cluster. Setting different values for etcd members may disrupt cluster stability.
You can override the default values on the command line:
The default values can be overridden on the command line:
```sh
# Command line arguments:
$ etcd -heartbeat-interval=100 -election-timeout=500
$ etcd --heartbeat-interval=100 --election-timeout=500
# Environment variables:
$ ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=500 etcd
@@ -58,15 +57,15 @@ A complete history works well for lightly used clusters but clusters that are he
To avoid having a huge log etcd makes periodic snapshots.
These snapshots provide a way for etcd to compact the log by saving the current state of the system and removing old logs.
### Snapshot Tuning
### Snapshot tuning
Creating snapshots can be expensive so they're only created after a given number of changes to etcd.
By default, snapshots will be made after every 10,000 changes.
If etcd's memory usage and disk usage are too high, you can lower the snapshot threshold by setting the following on the command line:
If etcd's memory usage and disk usage are too high, try lowering the snapshot threshold by setting the following on the command line:
```sh
# Command line arguments:
$ etcd -snapshot-count=5000
$ etcd --snapshot-count=5000
# Environment variables:
$ ETCD_SNAPSHOT_COUNT=5000 etcd

View File

@@ -0,0 +1,119 @@
## Upgrade etcd from 2.3 to 3.0
In the general case, upgrading from etcd 2.3 to 3.0 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.3 processes and replace them with etcd v3.0 processes
- after running all v3.0 processes, new features in v3.0 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade Checklists
#### Upgrade Requirements
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.0) before upgrading to 3.0.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
#### Preparation
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version.
#### Mixed Versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.0. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
It might take up to 2 minutes for the newly upgraded member to catch up with the existing cluster when the total data size is larger than 50MB. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.0, the cluster will be upgraded to v3.0, and downgrade from this completed state is **not possible**. If any single member is still v2.3, however, the cluster and its operations remain “v2.3”, and it is possible from this mixed cluster state to return to using a v2.3 etcd binary on all members.
Please [backup the data directory](../v2/admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade Procedure
This example details the upgrade of a three-member v2.3 etcd cluster running on a local machine.
#### 1. Check upgrade requirements.
Is the cluster healthy and running v2.3.x?
```
$ etcdctl cluster-health
member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
member 8211f1d0f64f3269 is healthy: got healthy result from http://localhost:12379
cluster is healthy
$ curl http://localhost:2379/version
{"etcdserver":"2.3.x","etcdcluster":"2.3.0"}
```
#### 2. Stop the existing etcd process
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
2016-06-27 15:21:48.624124 E | rafthttp: failed to dial 8211f1d0f64f3269 on stream Message (dial tcp 127.0.0.1:12380: getsockopt: connection refused)
2016-06-27 15:21:48.624175 I | rafthttp: the connection with 8211f1d0f64f3269 became inactive
```
It's a good idea at this point to [backup the etcd data directory](../v2/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur:
```
$ etcdctl backup \
--data-dir /var/lib/etcd \
--backup-dir /tmp/etcd_backup
```
#### 3. Drop-in etcd v3.0 binary and start the new etcd process
The new v3.0 etcd will publish its information to the cluster:
```
09:58:25.938673 I | etcdserver: published {Name:infra1 ClientURLs:[http://localhost:12379]} to cluster 524400597fb1d5f6
```
Verify that each member, and then the entire cluster, becomes healthy with the new v3.0 etcd binary:
```
$ etcdctl cluster-health
member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
member 8211f1d0f64f3269 is healthy: got healthy result from http://localhost:12379
cluster is healthy
```
Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.0:
```
2016-06-27 15:22:05.679644 W | etcdserver: the local etcd version 2.3.7 is not up-to-date
2016-06-27 15:22:05.679660 W | etcdserver: member 8211f1d0f64f3269 has a higher version 3.0.0
```
#### 4. Repeat step 2 to step 3 for all other members
#### 5. Finish
When all members are upgraded, the cluster will report upgrading to 3.0 successfully:
```
2016-06-27 15:22:19.873751 N | membership: updated the cluster version from 2.3 to 3.0
2016-06-27 15:22:19.914574 I | api: enabled capabilities for version 3.0.0
```
```
$ ETCDCTL_API=3 etcdctl endpoint health
127.0.0.1:12379 is healthy: successfully committed proposal: took = 18.440155ms
127.0.0.1:32379 is healthy: successfully committed proposal: took = 13.651368ms
127.0.0.1:22379 is healthy: successfully committed proposal: took = 18.513301ms
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev

165
Documentation/v2/README.md Normal file
View File

@@ -0,0 +1,165 @@
# etcd2
[![Go Report Card](https://goreportcard.com/badge/github.com/coreos/etcd)](https://goreportcard.com/report/github.com/coreos/etcd)
[![Build Status](https://travis-ci.org/coreos/etcd.svg?branch=master)](https://travis-ci.org/coreos/etcd)
[![Build Status](https://semaphoreci.com/api/v1/coreos/etcd/branches/master/shields_badge.svg)](https://semaphoreci.com/coreos/etcd)
[![Docker Repository on Quay.io](https://quay.io/repository/coreos/etcd-git/status "Docker Repository on Quay.io")](https://quay.io/repository/coreos/etcd-git)
**Note**: The `master` branch may be in an *unstable or even broken state* during development. Please use [releases][github-release] instead of the `master` branch in order to get stable binaries.
![etcd Logo](../../logos/etcd-horizontal-color.png)
etcd is a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being:
* *Simple*: curl'able user-facing API (HTTP+JSON)
* *Secure*: optional SSL client cert authentication
* *Fast*: benchmarked 1000s of writes/s per instance
* *Reliable*: properly distributed using Raft
etcd is written in Go and uses the [Raft][raft] consensus algorithm to manage a highly-available replicated log.
etcd is used [in production by many companies](./production-users.md), and the development team stands behind it in critical deployment scenarios, where etcd is frequently teamed with applications such as [Kubernetes][k8s], [fleet][fleet], [locksmith][locksmith], [vulcand][vulcand], and many others.
See [etcdctl][etcdctl] for a simple command line client.
Or feel free to just use `curl`, as in the examples below.
[raft]: https://raft.github.io/
[k8s]: http://kubernetes.io/
[fleet]: https://github.com/coreos/fleet
[locksmith]: https://github.com/coreos/locksmith
[vulcand]: https://github.com/vulcand/vulcand
[etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
## Getting Started
### Getting etcd
The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, AppC (ACI), and Docker. Instructions for using these binaries are on the [GitHub releases page][github-release].
For those wanting to try the very latest version, you can build the latest version of etcd from the `master` branch.
You will first need [*Go*](https://golang.org/) installed on your machine (version 1.5+ is required).
All development occurs on `master`, including new features and bug fixes.
Bug fixes are first targeted at `master` and subsequently ported to release branches, as described in the [branch management][branch-management] guide.
[github-release]: https://github.com/coreos/etcd/releases/
[branch-management]: branch_management.md
### Running etcd
First start a single-member cluster of etcd:
```sh
./bin/etcd
```
This will bring up etcd listening on port 2379 for client communication and on port 2380 for server-to-server communication.
Next, let's set a single key, and then retrieve it:
```
curl -L http://127.0.0.1:2379/v2/keys/mykey -XPUT -d value="this is awesome"
curl -L http://127.0.0.1:2379/v2/keys/mykey
```
You have successfully started an etcd and written a key to the store.
### etcd TCP ports
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication. To maintain compatibility, some etcd configuration and documentation continues to refer to the legacy ports 4001 and 7001, but all new etcd use and discussion should adopt the IANA-assigned ports. The legacy ports 4001 and 7001 will be fully deprecated, and support for their use removed, in future etcd releases.
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
### Running local etcd cluster
First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.
Our [Procfile script](./Procfile) will set up a local example cluster. You can start it with:
```sh
goreman start
```
This will bring up 3 etcd members `infra1`, `infra2` and `infra3` and etcd proxy `proxy`, which runs locally and composes a cluster.
You can write a key to the cluster and retrieve the value back from any member or proxy.
### Next Steps
Now it's time to dig into the full etcd API and other guides.
- Explore the full [API][api].
- Set up a [multi-machine cluster][clustering].
- Learn the [config format, env variables and flags][configuration].
- Find [language bindings and tools][libraries-and-tools].
- Use TLS to [secure an etcd cluster][security].
- [Tune etcd][tuning].
- [Upgrade from 0.4.9+ to 2.2.0][upgrade].
[api]: ./api.md
[clustering]: ./clustering.md
[configuration]: ./configuration.md
[libraries-and-tools]: ./libraries-and-tools.md
[security]: ./security.md
[tuning]: ./tuning.md
[upgrade]: ./04_to_2_snapshot_migration.md
## Contact
- Mailing list: [etcd-dev](https://groups.google.com/forum/?hl=en#!forum/etcd-dev)
- IRC: #[etcd](irc://irc.freenode.org:6667/#etcd) on freenode.org
- Planning/Roadmap: [milestones](https://github.com/coreos/etcd/milestones), [roadmap](../../ROADMAP.md)
- Bugs: [issues](https://github.com/coreos/etcd/issues)
## Contributing
See [CONTRIBUTING](../../CONTRIBUTING.md) for details on submitting patches and the contribution workflow.
## Reporting bugs
See [reporting bugs](reporting_bugs.md) for details about reporting any issue you may encounter.
## Known bugs
[GH518](https://github.com/coreos/etcd/issues/518) is a known bug. Issue is that:
```
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d dir=true -d prevExist=true
```
If the previous node is a key and the client tries to overwrite it with `dir=true`, it does not give a warning such as `Not a directory`. Instead, the key is set to an empty value.
## Project Details
### Versioning
#### Service Versioning
etcd uses [semantic versioning](http://semver.org).
New minor versions may add additional features to the API.
You can get the version of etcd by issuing a request to /version:
```sh
curl -L http://127.0.0.1:2379/version
```
#### API Versioning
The `v2` API responses should not change after the 2.0.0 release but new features will be added over time.
#### 32-bit and other unsupported systems
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See #[358][358] for more information.
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unsupported architectures will print
a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to
the target architecture.
Currently only the amd64 architecture is officially supported by `etcd`.
[358]: https://github.com/coreos/etcd/issues/358
### License
etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.

View File

@@ -0,0 +1,310 @@
# Administration
## Data Directory
### Lifecycle
When first started, etcd stores its configuration into a data directory specified by the data-dir configuration parameter.
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
The write ahead log and snapshot files are used during member operation and to recover after a restart.
Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster.
It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
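A hedged example of pointing the write ahead log at a dedicated device (the mount point and data directory below are placeholders):
```sh
# keep wal files on a dedicated disk, separate from the main data directory
$ etcd --name infra0 --data-dir /var/lib/etcd \
  --wal-dir /mnt/wal-disk/etcd-wal
```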
If a member's data directory is ever lost or corrupted then the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
A user should avoid restarting an etcd member with a data directory from an out-of-date backup.
Using an out-of-date data directory can lead to inconsistency, because the member had already agreed to store some information via raft and then re-joins claiming it needs that information again.
For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
Once removed the member can be re-added with an empty data directory.
### Contents
The data directory has two sub-directories in it:
1. wal: write ahead log files are stored here. For details see the [wal package documentation][wal-pkg]
2. snap: log snapshots are stored here. For details see the [snap package documentation][snap-pkg]
If `--wal-dir` flag is set, etcd will write the write ahead log files to the specified directory instead of data directory.
## Cluster Management
### Lifecycle
If you are spinning up multiple clusters for testing, it is recommended to specify a unique initial-cluster-token for each cluster.
This protects against cluster corruption in case of misconfiguration, because two members started with different cluster tokens will reject requests from each other.
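For example, two unrelated test clusters might be started with distinct tokens (names, URLs, and token values below are illustrative):
```sh
# test cluster A
$ etcd --name infra0 --initial-cluster-token etcd-cluster-a-test \
  --initial-cluster infra0=http://10.0.1.10:2380 --initial-advertise-peer-urls http://10.0.1.10:2380
# a second, unrelated test cluster gets a different token
$ etcd --name other0 --initial-cluster-token etcd-cluster-b-test \
  --initial-cluster other0=http://10.0.2.10:2380 --initial-advertise-peer-urls http://10.0.2.10:2380
```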
### Monitoring
It is important to monitor your production etcd cluster for health information and runtime metrics.
#### Health Monitoring
At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health": "true"}`, then the cluster is healthy. Please note that the `/health` endpoint is still experimental as of etcd 2.2.
```
$ curl -L http://127.0.0.1:2379/health
{"health": "true"}
```
You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.
```
$./etcdctl cluster-health
member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379
cluster is healthy
```
#### Runtime Metrics
etcd uses [Prometheus][prometheus] for metrics reporting in the server. You can read more through the runtime metrics [doc][metrics].
### Debugging
Debugging a distributed system can be difficult. etcd provides several ways to make debugging easier.
#### Enabling Debug Logging
When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
```
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
$ # debug logging enabled
$
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```
#### Debugging Variables
Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables include `cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
`cmdline` is the command line arguments passed into etcd.
`file_descriptor_limit` is the max number of file descriptors etcd can utilize.
`memstats` is explained in detail in the [Go runtime documentation][golang-memstats].
`raft.status` is useful for debugging low-level raft issues if you are familiar with raft internals. In most cases, you do not need to check `raft.status`.
```json
{
"cmdline": ["./etcd"],
"file_descriptor_limit": 0,
"memstats": {"Alloc":4105744,"TotalAlloc":42337320,"Sys":12560632,"...":"..."},
"raft.status": {"id":"ce2a822cea30bfca","term":5,"vote":"ce2a822cea30bfca","commit":23509,"lead":"ce2a822cea30bfca","raftState":"StateLeader","progress":{"ce2a822cea30bfca":{"match":23509,"next":23510,"state":"ProgressStateProbe"}}}
}
```
### Optimal Cluster Size
The recommended etcd cluster size is 3, 5 or 7, which is decided by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance decreases since data needs to be replicated to more machines.
#### Fault Tolerance Table
It is recommended to have an odd number of members in a cluster. Having an odd cluster size doesn't change the number needed for majority, but you gain a higher tolerance for failure by adding the extra member. You can see this in practice when comparing even and odd sized clusters:
| Cluster Size | Majority | Failure Tolerance |
|--------------|------------|-------------------|
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 3 | 2 | **1** |
| 4 | 3 | 1 |
| 5 | 3 | **2** |
| 6 | 4 | 2 |
| 7 | 4 | **3** |
| 8 | 5 | 3 |
| 9 | 5 | **4** |
As you can see, adding another member to bring the size of cluster up to an odd size is always worth it. During a network partition, an odd number of members also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
#### Changing Cluster Size
After your cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-reconfig], which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
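A hedged sketch of those commands (the member ID, name, and peer URL below are illustrative):
```sh
# list current members to find the ID to act on
$ etcdctl member list
# remove a member by its ID
$ etcdctl member remove a1b2c3d4e5f60718
# add a new member by name and peer URL
$ etcdctl member add infra3 http://10.0.1.13:2380
```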
### Member Migration
When there is scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing the data or changing the member ID.
The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
* Stop the member process.
* Copy the data directory of the now-idle member to the new machine.
* Update the peer URLs for the replaced member to reflect the new machine according to the [runtime reconfiguration instructions][update-a-member].
* Start etcd on the new machine, using the same configuration and the copy of the data directory.
This example will walk you through the process of migrating the infra1 member to a new machine:
|Name|Peer URL|
|------|--------------|
|infra0|10.0.1.10:2380|
|infra1|10.0.1.11:2380|
|infra2|10.0.1.12:2380|
```sh
$ export ETCDCTL_ENDPOINT=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
```
```sh
$ etcdctl member list
84194f7c5edd8b37: name=infra0 peerURLs=http://10.0.1.10:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.10:2379
b4db3bf5e495e255: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.11:2379
bc1083c870280d44: name=infra2 peerURLs=http://10.0.1.12:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.12:2379
```
#### Stop the member etcd process
```sh
$ ssh 10.0.1.11
```
```sh
$ kill `pgrep etcd`
```
#### Copy the data directory of the now-idle member to the new machine
```
$ tar -cvzf infra1.etcd.tar.gz %data_dir%
```
```sh
$ scp infra1.etcd.tar.gz 10.0.1.13:~/
```
#### Update the peer URLs for that member to reflect the new machine
```sh
$ curl http://10.0.1.10:2379/v2/members/b4db3bf5e495e255 -XPUT \
-H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.1.13:2380"]}'
```
Or use `etcdctl member update` command
```sh
$ etcdctl member update b4db3bf5e495e255 http://10.0.1.13:2380
```
#### Start etcd on the new machine, using the same configuration and the copy of the data directory
```sh
$ ssh 10.0.1.13
```
```sh
$ tar -xzvf infra1.etcd.tar.gz -C %data_dir%
```
```
etcd -name infra1 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
```
### Disaster Recovery
etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N-1)/2_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would normally be impossible for the cluster to restore quorum and continue functioning.
To recover from such scenarios, etcd provides functionality to backup and restore the datastore and recreate the cluster without data loss.
#### Backing up the datastore
**NB:** Windows users must stop etcd before running the backup command.
The first step of the recovery is to backup the data directory and wal directory, if stored separately, on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data (and wal) directory used by etcd. For example:
```sh
etcdctl backup \
--data-dir %data_dir% \
[--wal-dir %wal_dir%] \
--backup-dir %backup_data_dir% \
[--backup-wal-dir %backup_wal_dir%]
```
This command will rewrite some of the metadata contained in the backup (specifically, the node ID and cluster ID), which means that the node will lose its former identity. In order to recreate a cluster from the backup, you will need to start a new, single-node cluster. The metadata is rewritten to prevent the new node from inadvertently being joined onto an existing cluster.
#### Restoring a backup
To restore from the backup created above, start etcd with the `-force-new-cluster` option, pointing it at the backup directory. This will initialize a new, single-member cluster with the default advertised peer URLs, but preserve the entire contents of the etcd data store. Continuing from the previous example:
```sh
etcd \
-data-dir=%backup_data_dir% \
[-wal-dir=%backup_wal_dir%] \
-force-new-cluster \
...
```
Now etcd should be available on this node and serving the original datastore.
Once you have verified that etcd has started successfully, shut it down and move the data and wal, if stored separately, back to the previous location (you may wish to make another copy as well to be safe):
```sh
pkill etcd
rm -fr %data_dir%
rm -fr %wal_dir%
mv %backup_data_dir% %data_dir%
mv %backup_wal_dir% %wal_dir%
etcd \
-data-dir=%data_dir% \
[-wal-dir=%wal_dir%] \
...
```
#### Restoring the cluster
Now that the node is running successfully, [change its advertised peer URLs][update-a-member], as the `--force-new-cluster` option has reset the peer URL to the default (listening on localhost).
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details. **NB:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
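As a rough sketch (the member name and addresses below are placeholders, and the exact flags follow the [add a new member][add-a-member] guide), growing the restored cluster back to full size looks like this: register the new member first, then start it with `-initial-cluster-state existing`:
```sh
# On any existing member: register the new member and note the printed cluster settings
$ etcdctl member add infra4 http://10.0.1.14:2380

# On the new machine: start etcd as an addition to the existing cluster
$ etcd -name infra4 \
  -initial-advertise-peer-urls http://10.0.1.14:2380 \
  -listen-peer-urls http://10.0.1.14:2380 \
  -listen-client-urls http://10.0.1.14:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://10.0.1.14:2379 \
  -initial-cluster "<existing-member>=<existing-peer-url>,infra4=http://10.0.1.14:2380" \
  -initial-cluster-state existing
```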
### Client Request Timeout
etcd sets different timeouts for various types of client requests. The timeout values are not currently tunable; this will be improved (see https://github.com/coreos/etcd/issues/2038).
#### Get requests
Timeout is not set for get requests, because etcd serves the result locally in a non-blocking way.
**Note**: A QuorumGet request is a different request type, covered in the following sections.
#### Watch requests
Timeout is not set for watch requests. etcd will not stop a watch request until the client cancels it or the connection is broken.
#### Delete, Put, Post, QuorumGet requests
The default timeout is 5 seconds. This should be large enough to allow all key modifications to complete if a majority of the cluster is functioning.
If the request times out, it indicates two possibilities:
1. the server the request was sent to was not functioning at that time.
2. the majority of the cluster is not functioning.
If timeouts happen repeatedly, administrators should check the status of the cluster and resolve the problem as soon as possible.
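When timeouts recur, a quick way to see whether individual members are reachable and whether the cluster still has a healthy quorum is:
```sh
$ etcdctl cluster-health
```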
### Best Practices
#### Maximum OS threads
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [has changed in Go 1.5][golang1.5-runtime]).
When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable GOMAXPROCS to the desired number when starting etcd. For more information on this variable, see the [Go runtime documentation][golang-runtime].
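For example, to allow etcd to use up to four OS threads (the other flag values are only illustrative):
```sh
GOMAXPROCS=4 etcd -name infra0 -data-dir /var/lib/etcd
```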
[add-a-member]: runtime-configuration.md#add-a-new-member
[golang1.5-runtime]: https://golang.org/doc/go1.5#runtime
[golang-memstats]: https://golang.org/pkg/runtime/#MemStats
[golang-runtime]: https://golang.org/pkg/runtime
[metrics]: metrics.md
[prometheus]: http://prometheus.io/
[remove-a-member]: runtime-configuration.md#remove-a-member
[runtime-reconfig]: runtime-configuration.md#cluster-reconfiguration-operations
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
[update-a-member]: runtime-configuration.md#update-a-member
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal

Documentation/v2/api.md (new file, 1131 lines): diff suppressed because it is too large.

@@ -0,0 +1,511 @@
# v2 Auth and Security
## etcd Resources
There are three types of resources in etcd:
1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
### Permission Resources
#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.
#### Roles
Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.
The special static ROOT role (named `root`) has full permissions on all key-value resources, as well as the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
There is also a special GUEST role, named `guest`. It holds the permissions given to unauthenticated requests to etcd. This role is created automatically and, for backward compatibility, by default allows access to the full keyspace (etcd did not previously authenticate any actions). A holder of the ROOT role can modify this role at any time to reduce the capabilities of unauthenticated users.
#### Permissions
There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.
A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.
### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` applies to that exact key or directory only, not its children. `/foo*` is a prefix that matches `/foo`, all keys under it, and any key with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is a permission on the full keyspace.
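For illustration, a hypothetical permission list (the full Role JSON format is described under Roles below) granting read access to the exact key `/app/config` and recursive read/write access under `/app/data/`:
```
{
  "role": "app",
  "permissions": {
    "kv": {
      "read":  ["/app/config", "/app/data/*"],
      "write": ["/app/data/*"]
    }
  }
}
```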
### Settings Resources
Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).
## v2 Auth
### Basic Auth
Only [Basic Auth][basic-auth] is supported in the first version. The client needs to attach the Basic Auth credentials to the HTTP `Authorization` header.
### Authorization field for operations
The `Authorization: Basic {encoded string}` header is added to requests to `/v2/keys` and `/v2/auth`, and the response code 401 Unauthorized is added to the set of responses from the v2 API.
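For example, `curl` can build this header from a `user:password` pair (the credentials here are placeholders):
```
curl -u root:rootpw http://127.0.0.1:2379/v2/auth/enable
```
`-u` base64-encodes `root:rootpw` and sends it as `Authorization: Basic <encoded string>`.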
### Future Work
Other types of auth may be considered in the future (e.g. signed certs, public keys), and the `Authorization:` header allows for such extensions.
### Things out of Scope for etcd Permissions
* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (e.g. users modifying keys outside work hours)
## API endpoints
An Error JSON corresponds to:
{
"name": "ErrErrorName",
"description" : "The longer helpful description of the error."
}
#### Enable and Disable Authentication
**Get auth status**
GET /v2/auth/enable
Sent Headers:
Possible Status Codes:
200 OK
200 Body:
{
"enabled": true
}
**Enable auth**
PUT /v2/auth/enable
Sent Headers:
Put Body: (empty)
Possible Status Codes:
200 OK
400 Bad Request (if root user has not been created)
409 Conflict (already enabled)
200 Body: (empty)
**Disable auth**
DELETE /v2/auth/enable
Sent Headers:
Authorization: Basic <RootAuthString>
Possible Status Codes:
200 OK
401 Unauthorized (if not a root user)
409 Conflict (already disabled)
200 Body: (empty)
#### Users
The User JSON object is formed as follows:
```
{
"user": "userName",
"password": "password",
"roles": [
"role1",
"role2"
],
"grant": [],
"revoke": []
}
```
Password is only passed when necessary.
**Get a List of Users**
GET/HEAD /v2/auth/users
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"users": [
{
"user": "alice",
"roles": [
{
"role": "root",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
},
{
"user": "bob",
"roles": [
{
"role": "guest",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
}
]
}
**Get User Details**
GET/HEAD /v2/auth/users/alice
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"user" : "alice",
"roles" : [
{
"role": "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
},
{
"role": "etcd",
"permissions" : {
"kv" : {
"read": [ "/*" ],
"write": [ "/*" ]
}
}
}
]
}
**Create Or Update A User**
A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are.
PUT /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
JSON struct, above, matching the appropriate name
* Starting password and roles when creating.
* Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent users)
409 Conflict (when granting duplicated roles or revoking non-existent roles)
200 Headers:
Content-type: application/json
200 Body:
JSON state of the user
**Remove A User**
DELETE /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root user when auth is enabled)
404 Not Found
200 Headers:
200 Body: (empty)
#### Roles
A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read" : [ "/fleet/" ],
"write": [ "/fleet/" ]
}
},
"grant" : {"kv": {...}},
"revoke": {"kv": {...}}
}
```
**Get Role Details**
GET/HEAD /v2/auth/roles/fleet
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
}
**Get a list of Roles**
GET/HEAD /v2/auth/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"roles": [
{
"role": "fleet",
"permissions": {
"kv": {
"read": ["/fleet/"],
"write": ["/fleet/"]
}
}
},
{
"role": "etcd",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
},
{
"role": "quay",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
}
**Create Or Update A Role**
PUT /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
Initial desired JSON state, including the role name for verification and:
* Starting permission set if creating
* Granted/Revoked permission set if updating
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent roles)
409 Conflict (when granting duplicated permission or revoking non-existent permission)
200 Body:
JSON state of the role
**Remove A Role**
DELETE /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root)
404 Not Found
200 Headers:
200 Body: (empty)
## Example Workflow
Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
### Create root role
```
PUT /v2/auth/users/root
Put Body:
{"user" : "root", "password": "betterRootPW!"}
```
### Enable auth
```
PUT /v2/auth/enable
```
### Modify guest role (revoke write permission)
```
PUT /v2/auth/roles/guest
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "guest",
"revoke" : {
"kv" : {
"write": [
"/*"
]
}
}
}
```
### Create Roles for the Applications
Create the rkt role fully specified:
```
PUT /v2/auth/roles/rkt
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "rkt",
"permissions" : {
"kv": {
"read": [
"/rkt/*"
],
"write": [
"/rkt/*"
]
}
}
}
```
But let's make fleet just a basic role for now:
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "fleet"
}
```
### Optional: Grant some permissions to the roles
Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rkt case. So this step is optional.)
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "fleet",
"grant" : {
"kv" : {
"read": [
"/rkt/fleet",
"/fleet/*"
]
}
}
}
```
### Create Users
Same as before, let's use rocket all at once and fleet separately
```
PUT /v2/auth/users/rktuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
```
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "fleetuser", "password" : "fleetpw"}
```
### Optional: Grant Roles to Users
Likewise, let's explicitly grant fleetuser access.
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user": "fleetuser", "grant": ["fleet"]}
```
#### Start to use fleetuser and rktuser
For example:
```
PUT /v2/keys/rkt/RktData
Headers:
Authorization: Basic <rktuser:rktpw>
Body:
value=launch
```
Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.
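For example, a write by rktuser under the fleet prefix (the key name here is hypothetical) would be rejected:
```
PUT /v2/keys/fleet/units/foo
Headers:
    Authorization: Basic <rktuser:rktpw>
Body:
    value=data

401 Unauthorized
```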
[basic-auth]: https://en.wikipedia.org/wiki/Basic_access_authentication


@@ -0,0 +1,71 @@
# Backward Compatibility
The main goal of etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provide a new set of APIs.
The other main focus of this release was a more reliable Raft implementation, but as this change is internal it should not have any notable effects for users.
## Command Line Flags Changes
The major flag changes are mostly related to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports.
- `-addr` is replaced by `-advertise-client-urls`.
- `-bind-addr` is replaced by `-listen-client-urls`.
- `-peer-addr` is replaced by `-initial-advertise-peer-urls`.
- `-peer-bind-addr` is replaced by `-listen-peer-urls`.
- `-peers` is replaced by `-initial-cluster`.
- `-peers-file` is replaced by `-initial-cluster`.
- `-peer-heartbeat-interval` is replaced by `-heartbeat-interval`.
- `-peer-election-timeout` is replaced by `-election-timeout`.
The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/v2/configuration.md.
## Data Directory Naming
The default data dir location has changed from {$hostname}.etcd to {name}.etcd.
## Key-Value API
### Read consistency flag
The consistent flag for read operations is removed in etcd 2.0.0. The normal read operations provide the same consistency guarantees as the 0.4.6 read operations with the consistent flag set.
The read consistency guarantees are:
Consistent reads guarantee sequential consistency within one client that talks to one etcd server: reads and writes from one client to one etcd member are observed in order. If a client successfully writes a value to an etcd server, it can read that value back from the same server immediately.
Each etcd member proxies the request to the leader and only returns the result to the user after the result is applied on the local member. Thus, after a write succeeds, the user is guaranteed to see the value on the member the request was sent to.
Reads do not provide linearizability. If you want linearizable reads, you need to set the quorum option to true.
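For example, a quorum (linearizable) read through the v2 HTTP API sets the `quorum` query parameter:
```
curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```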
**Previous behavior**
We added an option for consistent reads in the old version of etcd because etcd 0.x redirects write requests to the leader. When the user gets the result back from the leader, the member the request was originally sent to might not have applied the write yet. With the consistent flag set to true, the client always sends read requests to the leader, so a client can see its own last write when consistent=true is enabled. There are no ordering guarantees among different clients.
## Standby
etcd 0.4's standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of the problems standby was solving.
Standby mode was intended for large clusters that had a subset of the members acting in the consensus process. Overall this process was too magical and allowed for operators to back themselves into a corner.
Proxy mode in 2.0 will provide similar functionality, with improved control over which machines act as proxies, since the operator configures them explicitly. Proxies also support read-only or read/write modes for increased security and durability.
[proxymode]: proxy.md
## Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
## HTTP Admin API
`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/members API][members-api] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.
[members-api]: members_api.md
## HTTP Key Value API
- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
- Expiration time is in UTC instead of local time.


@@ -0,0 +1,18 @@
# Benchmarks
etcd benchmarks will be published regularly and tracked for each release below:
- [etcd v2.1.0-alpha][2.1]
- [etcd v2.2.0-rc][2.2]
- [etcd v3 demo][3.0]
# Memory Usage Benchmarks
It records expected memory usage in different scenarios.
- [etcd v2.2.0-rc][2.2-mem]
[2.1]: etcd-2-1-0-alpha-benchmarks.md
[2.2]: etcd-2-2-0-rc-benchmarks.md
[2.2-mem]: etcd-2-2-0-rc-memory-benchmarks.md
[3.0]: etcd-3-demo-benchmarks.md


@@ -0,0 +1,52 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.1.0 alpha
## etcd Cluster
3 etcd members, each runs on a single machine
## Testing
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
## Performance
### reading one single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 1534 | 0.7 |
| 64 | 64 | leader only | 10125 | 9.1 |
| 64 | 256 | leader only | 13892 | 27.1 |
| 256 | 1 | leader only | 1530 | 0.8 |
| 256 | 64 | leader only | 10106 | 10.1 |
| 256 | 256 | leader only | 14667 | 27.0 |
| 64 | 64 | all servers | 24200 | 3.9 |
| 64 | 256 | all servers | 33300 | 11.8 |
| 256 | 64 | all servers | 24800 | 3.9 |
| 256 | 256 | all servers | 33000 | 11.5 |
### writing one single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 60 | 21.4 |
| 64 | 64 | leader only | 1742 | 46.8 |
| 64 | 256 | leader only | 3982 | 90.5 |
| 256 | 1 | leader only | 58 | 20.3 |
| 256 | 64 | leader only | 1770 | 47.8 |
| 256 | 256 | leader only | 4157 | 105.3 |
| 64 | 64 | all servers | 1028 | 123.4 |
| 64 | 256 | all servers | 3260 | 123.8 |
| 256 | 64 | all servers | 1033 | 121.5 |
| 256 | 256 | all servers | 3061 | 119.3 |
[boom]: https://github.com/rakyll/boom
[hack-benchmark]: /hack/benchmark/


@@ -0,0 +1,69 @@
# Benchmarking etcd v2.2.0
## Physical Machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted as etcd data directory
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0 members, each runs on a single machine.
Detailed versions:
```
etcd Version: 2.2.0
Git SHA: e4561dd
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool](https://github.com/rakyll/boom) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
The performance is calculated from the results of 100 benchmark rounds.
## Performance
### Single Key Read Performance
| key size in bytes | number of clients | target etcd server | average read QPS | read QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 2303 | 200 | 0.49 | 0.06 |
| 64 | 64 | leader only | 15048 | 685 | 7.60 | 0.46 |
| 64 | 256 | leader only | 14508 | 434 | 29.76 | 1.05 |
| 256 | 1 | leader only | 2162 | 214 | 0.52 | 0.06 |
| 256 | 64 | leader only | 14789 | 792 | 7.69| 0.48 |
| 256 | 256 | leader only | 14424 | 512 | 29.92 | 1.42 |
| 64 | 64 | all servers | 45752 | 2048 | 2.47 | 0.14 |
| 64 | 256 | all servers | 46592 | 1273 | 10.14 | 0.59 |
| 256 | 64 | all servers | 45332 | 1847 | 2.48| 0.12 |
| 256 | 256 | all servers | 46485 | 1340 | 10.18 | 0.74 |
### Single Key Write Performance
| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 55 | 4 | 24.51 | 13.26 |
| 64 | 64 | leader only | 2139 | 125 | 35.23 | 3.40 |
| 64 | 256 | leader only | 4581 | 581 | 70.53 | 10.22 |
| 256 | 1 | leader only | 56 | 4 | 22.37| 4.33 |
| 256 | 64 | leader only | 2052 | 151 | 36.83 | 4.20 |
| 256 | 256 | leader only | 4442 | 560 | 71.59 | 10.03 |
| 64 | 64 | all servers | 1625 | 85 | 58.51 | 5.14 |
| 64 | 256 | all servers | 4461 | 298 | 89.47 | 36.48 |
| 256 | 64 | all servers | 1599 | 94 | 60.11| 6.43 |
| 256 | 256 | all servers | 4315 | 193 | 88.98 | 7.01 |
## Performance Changes
- Because etcd now records metrics for each API call, read QPS performance seems to see a minor decrease in most scenarios. This minimal performance impact was judged a reasonable investment for the breadth of monitoring and debugging information returned.
- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.


@@ -0,0 +1,72 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0-rc members, each runs on a single machine.
Detailed versions:
```
etcd Version: 2.2.0-alpha.1+git
Git SHA: 59a5a7e
Go Version: go1.4.2
Go OS/Arch: linux/amd64
```
Also, we use 3 etcd 2.1.0 alpha-stage members to form a cluster to establish a performance baseline. etcd's commit head is at [c7146bd5][c7146bd5], which is the same as the one used in the [etcd 2.1 benchmark][etcd-2.1-benchmark].
## Testing
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
## Performance
### reading one single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 2804 (-5%) | 0.4 (+0%) |
| 64 | 64 | leader only | 17816 (+0%) | 5.7 (-6%) |
| 64 | 256 | leader only | 18667 (-6%) | 20.4 (+2%) |
| 256 | 1 | leader only | 2181 (-15%) | 0.5 (+25%) |
| 256 | 64 | leader only | 17435 (-7%) | 6.0 (+9%) |
| 256 | 256 | leader only | 18180 (-8%) | 21.3 (+3%) |
| 64 | 64 | all servers | 46965 (-4%) | 2.1 (+0%) |
| 64 | 256 | all servers | 55286 (-6%) | 7.4 (+6%) |
| 256 | 64 | all servers | 46603 (-6%) | 2.1 (+5%) |
| 256 | 256 | all servers | 55291 (-6%) | 7.3 (+4%) |
### writing one single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 76 (+22%) | 19.4 (-15%) |
| 64 | 64 | leader only | 2461 (+45%) | 31.8 (-32%) |
| 64 | 256 | leader only | 4275 (+1%) | 69.6 (-10%) |
| 256 | 1 | leader only | 64 (+20%) | 16.7 (-30%) |
| 256 | 64 | leader only | 2385 (+30%) | 31.5 (-19%) |
| 256 | 256 | leader only | 4353 (-3%) | 74.0 (+9%) |
| 64 | 64 | all servers | 2005 (+81%) | 49.8 (-55%) |
| 64 | 256 | all servers | 4868 (+35%) | 81.5 (-40%) |
| 256 | 64 | all servers | 1925 (+72%) | 47.7 (-59%) |
| 256 | 256 | all servers | 4975 (+36%) | 70.3 (-36%) |
### performance changes explanation
- read QPS in most scenarios is decreased by 5~8%. The reason is that etcd records store metrics for each store operation. The metrics are important for monitoring and debugging, so this is acceptable.
- write QPS to the leader is increased by 20~30%. This is because we decoupled the raft main loop and the entry apply loop, which prevents them from blocking each other.
- write QPS to all servers is increased by 30~80% because followers can receive the latest commit index earlier and commit proposals faster.
[boom]: https://github.com/rakyll/boom
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
[hack-benchmark]: /hack/benchmark/


@@ -0,0 +1,47 @@
## Physical machine
GCE n1-standard-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 7.5 GB memory
- 2x CPUs
## etcd
```
etcd Version: 2.2.0-rc.0+git
Git SHA: 103cb5c
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Start a 3-member etcd cluster, each member using 2 cores.
The key name length is always 64 bytes, which is a reasonable average key length.
## Memory Maximal Usage
- etcd may use maximal memory if one follower is dead and the leader keeps sending snapshots.
- `max RSS` is the maximal memory usage recorded in 3 runs.
| value bytes | key number | data size(MB) | max RSS(MB) | max RSS/data rate on leader |
|-------------|-------------|---------------|-------------|-----------------------------|
| 128 | 50000 | 6 | 433 | 72x |
| 128 | 100000 | 12 | 659 | 54x |
| 128 | 200000 | 24 | 1466 | 61x |
| 1024 | 50000 | 48 | 1253 | 26x |
| 1024 | 100000 | 96 | 2344 | 24x |
| 1024 | 200000 | 192 | 4361 | 22x |
## Data Size Threshold
- When etcd reaches the data size threshold, it may trigger leader elections easily and drop part of its proposals.
- In most cases, the etcd cluster should work smoothly as long as it doesn't hit the threshold. If it doesn't work well due to insufficient resources, decrease its data size.
| value bytes | key number limitation | suggested data size threshold(MB) | consumed RSS(MB) |
|-------------|-----------------------|-----------------------------------|------------------|
| 128 | 400K | 48 | 2400 |
| 1024 | 300K | 292 | 6500 |


@@ -0,0 +1,42 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.2.0
## etcd Cluster
1 etcd member running in v3 demo mode
## Testing
Use [etcd v3 benchmark tool][etcd-v3-benchmark].
## Performance
### reading one single key
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2716 | 0.4 |
| 256 | 64 | 16623 | 6.1 |
| 256 | 256 | 16622 | 21.7 |
The performance is nearly the same as that with an empty server handler.
### reading one single key after putting
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2269 | 0.5 |
| 256 | 64 | 13582 | 8.6 |
| 256 | 256 | 13262 | 47.5 |
The performance with an empty server handler is not affected by one put, so the performance degradation should be caused by the storage package.
[etcd-v3-benchmark]: /tools/benchmark/


@@ -0,0 +1,77 @@
# Watch Memory Usage Benchmark
*NOTE*: The watch features are under active development, and their memory usage may change as that development progresses. We do not expect it to significantly increase beyond the figures stated below.
A primary goal of etcd is supporting a very large number of watchers doing a massively large amount of watching. etcd aims to support O(10k) clients, O(100K) watch streams (O(10) streams per client) and O(10M) total watchings (O(100) watching per stream). The memory consumed by each individual watching accounts for the largest portion of etcd's overall usage, and is therefore the focus of current and future optimizations.
Three related components of etcd watch consume physical memory: each `grpc.Conn`, each watch stream, and each instance of the watching activity. `grpc.Conn` maintains the actual TCP connection and other gRPC connection state. Each `grpc.Conn` consumes O(10kb) of memory, and might have multiple watch streams attached.
Each watch stream is an independent HTTP2 connection which consumes another O(10kb) of memory.
Multiple watchings might share one watch stream.
Watching is the actual struct that tracks the changes on the key-value store. Each watching should only consume < O(1kb).
```
+-------+
| watch |
+---------> | foo |
| +-------+
+------+-----+
| stream |
+--------------> | |
| +------+-----+ +-------+
| | | watch |
| +---------> | bar |
+-----+------+ +-------+
| | +------------+
| conn +-------> | stream |
| | | |
+-----+------+ +------------+
|
|
|
| +------------+
+--------------> | stream |
| |
+------------+
```
The theoretical memory consumption of watch can be approximated with the formula:
`memory = c1 * number_of_conn + c2 * avg_number_of_stream_per_conn + c3 * avg_number_of_watch_stream`
## Testing Environment
etcd version
- git head https://github.com/coreos/etcd/commit/185097ffaa627b909007e772c175e8fefac17af3
GCE n1-standard-2 machine type
- 7.5 GB memory
- 2x CPUs
## Overall memory usage
The overall memory usage captures how much [RSS][rss] etcd consumes with the client watchers. While the result may vary by as much as 10%, it is still meaningful, since the goal is to learn about the rough memory usage and the pattern of allocations.
From the benchmark results, we can calculate roughly that `c1 = 17kb`, `c2 = 18kb` and `c3 = 350bytes`. So each additional client connection consumes 17kb of memory, each additional stream consumes 18kb of memory, and each additional watching only adds about 350 bytes. A single etcd server can maintain millions of watchings with a few GB of memory in the normal case.
| clients | streams per client | watchings per stream | total watching | memory usage |
|---------|---------|-----------|----------------|--------------|
| 1k | 1 | 1 | 1k | 50MB |
| 2k | 1 | 1 | 2k | 90MB |
| 5k | 1 | 1 | 5k | 200MB |
| 1k | 10 | 1 | 10k | 217MB |
| 2k | 10 | 1 | 20k | 417MB |
| 5k | 10 | 1 | 50k | 980MB |
| 1k | 50 | 1 | 50k | 1001MB |
| 2k | 50 | 1 | 100k | 1960MB |
| 5k | 50 | 1 | 250k | 4700MB |
| 1k | 50 | 10 | 500k | 1171MB |
| 2k | 50 | 10 | 1M | 2371MB |
| 5k | 50 | 10 | 2.5M | 5710MB |
| 1k | 50 | 100 | 5M | 2380MB |
| 2k | 50 | 100 | 10M | 4672MB |
| 5k | 50 | 100 | 50M | *OOM* |
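As a rough sanity check of the formula and constants, take the 1k clients x 10 streams per client x 1 watching per stream row (10k streams and 10k watchings in total):
```
 1,000 conns     * 17KB  ~  17MB
10,000 streams   * 18KB  ~ 180MB
10,000 watchings * 350B  ~ 3.5MB
                  total  ~ 200MB   (measured: 217MB)
```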
[rss]: https://en.wikipedia.org/wiki/Resident_set_size


@@ -0,0 +1,98 @@
# Storage Memory Usage Benchmark
<!---todo: link storage to storage design doc-->
Two components of etcd storage consume physical memory. The etcd process allocates an *in-memory index* to speed key lookup. The process's *page cache*, managed by the operating system, stores recently-accessed data from disk for quick re-use.
The in-memory index holds all the keys in a [B-tree][btree] data structure, along with pointers to the on-disk data (the values). Each key in the B-tree may contain multiple pointers, pointing to different versions of its values. The theoretical memory consumption of the in-memory index can hence be approximated with the formula:
`N * (c1 + avg_key_size) + N * (avg_versions_of_key) * (c2 + size_of_pointer)`
where `c1` is the key metadata overhead and `c2` is the version metadata overhead.
The graph shows the detailed structure of the in-memory index B-tree.
```
In mem index
+------------+
| key || ... |
+--------------+ | || |
| | +------------+
| | | v1 || ... |
| disk <----------------| || | Tree Node
| | +------------+
| | | v2 || ... |
| <----------------+ || |
| | +------------+
+--------------+ +-----+ | | |
| | | | |
| +------------+
|
|
^
------+
| ... |
| |
+-----+
| ... | Tree Node
| |
+-----+
| ... |
| |
------+
```
[Page cache memory][pagecache] is managed by the operating system and is not covered in detail in this document.
## Testing Environment
etcd version
- git head https://github.com/coreos/etcd/commit/776e9fb7be7eee5e6b58ab977c8887b4fe4d48db
GCE n1-standard-2 machine type
- 7.5 GB memory
- 2x CPUs
## In-memory index memory usage
In this test, we only benchmark the memory usage of the in-memory index. The goal is to find `c1` and `c2` mentioned above and to understand the hard limit of memory consumption of the storage.
We measure memory consumption via Go's `runtime.ReadMemStats`, taking the difference in total allocated bytes before and after creating the index. This cannot perfectly reflect the memory usage of the in-memory index itself, but it shows the rough consumption pattern.
| N | versions | key size | memory usage |
|------|----------|----------|--------------|
| 100K | 1 | 64bytes | 22MB |
| 100K | 5 | 64bytes | 39MB |
| 1M | 1 | 64bytes | 218MB |
| 1M | 5 | 64bytes | 432MB |
| 100K | 1 | 256bytes | 41MB |
| 100K | 5 | 256bytes | 65MB |
| 1M | 1 | 256bytes | 409MB |
| 1M | 5 | 256bytes | 506MB |
Based on the results, we can calculate `c1=120bytes` and `c2=30bytes`. Only two sets of data are needed to solve for `c1` and `c2`, since they are the only unknown variables in the formula; the values `c1=120bytes` and `c2=30bytes` are the averages over the 4 sets of `c1` and `c2` we calculated. The key metadata overhead is still relatively nontrivial (50%) for small key-value pairs. However, this is a significant improvement over the old store, which had at least 1000% overhead.
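As a sanity check, plugging the 1M-key, single-version, 64-byte-key row into the formula (assuming 8-byte pointers) gives a figure close to the measured 218MB:
```
1M * (120B + 64B)   ~ 184MB   (keys + key metadata)
1M * 1 * (30B + 8B) ~  38MB   (versions + pointers)
              total ~ 222MB   (measured: 218MB)
```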
## Overall memory usage
The overall memory usage captures how much RSS etcd consumes with the storage. The value size should have very little impact on the overall memory usage of etcd, since we keep values on disk and only retain hot values in memory, managed by the OS page cache.
| N | versions | key size | value size | memory usage |
|------|----------|----------|------------|--------------|
| 100K | 1 | 64bytes | 256bytes | 40MB |
| 100K | 5 | 64bytes | 256bytes | 89MB |
| 1M | 1 | 64bytes | 256bytes | 470MB |
| 1M | 5 | 64bytes | 256bytes | 880MB |
| 100K | 1 | 64bytes | 1KB | 102MB |
| 100K | 5 | 64bytes | 1KB | 164MB |
| 1M | 1 | 64bytes | 1KB | 587MB |
| 1M | 5 | 64bytes | 1KB | 836MB |
Based on the result, we know the value size does not significantly impact the memory consumption. There is some minor increase due to more data held in the OS page cache.
[btree]: https://en.wikipedia.org/wiki/B-tree
[pagecache]: https://en.wikipedia.org/wiki/Page_cache


@@ -0,0 +1,26 @@
# Branch Management
## Guide
* New development occurs on the [master branch][master].
* Master branch should always have a green build!
* Backwards-compatible bug fixes should target the master branch and subsequently be ported to stable branches.
* Once the master branch is ready for release, it will be tagged and become the new stable branch.
The etcd team has adopted a *rolling release model* and supports one stable version of etcd.
### Master branch
The `master` branch is our development branch. All new features land here first.
If you want to try new features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
Before the release of the next stable version, feature PRs will be frozen. We will focus on testing, bug fixes, and documentation for one to two weeks.
### Stable branches
All branches with prefix `release-` are considered _stable_ branches.
After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing backwards-compatible bugs for the latest stable release, but not for previous releases. A _patch_ release, incorporating any bug fixes, will be cut roughly every two weeks, given any patches.
[master]: https://github.com/coreos/etcd/tree/master


@@ -0,0 +1,435 @@
# Clustering Guide
## Overview
Starting an etcd cluster statically requires that each member knows the other members in the cluster. In a number of cases, you might not know the IPs of your cluster members ahead of time. In these cases, you can bootstrap an etcd cluster with the help of a discovery service.
Once an etcd cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-conf]. To better understand the design behind runtime reconfiguration, we suggest you read [the runtime configuration design document][runtime-reconf-design].
This guide will cover the following mechanisms for bootstrapping an etcd cluster:
* [Static](#static)
* [etcd Discovery](#etcd-discovery)
* [DNS Discovery](#dns-discovery)
Each of the bootstrapping mechanisms will be used to create a three machine etcd cluster with the following details:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
## Static
As we know the cluster members, their addresses and the size of the cluster before starting, we can use an offline bootstrap configuration by setting the `initial-cluster` flag. Each machine will get either the following command line or environment variables:
```
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380"
ETCD_INITIAL_CLUSTER_STATE=new
```
```
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
Note that the URLs specified in `initial-cluster` are the _advertised peer URLs_, i.e. they should match the value of `initial-advertise-peer-urls` on the respective nodes.
If you are spinning up multiple clusters (or creating and destroying a single cluster) with the same configuration for testing purposes, it is highly recommended that you specify a unique `initial-cluster-token` for the different clusters. By doing this, etcd can generate unique cluster IDs and member IDs for the clusters even if they otherwise have the exact same configuration. This protects you from cross-cluster interaction, which might corrupt your clusters.
etcd listens on [`listen-client-urls`][conf-listen-client] to accept client traffic. Each etcd member advertises the URLs specified in [`advertise-client-urls`][conf-adv-client] to other members, proxies, and clients. Please make sure the `advertise-client-urls` are reachable from the intended clients. A common mistake is setting `advertise-client-urls` to localhost or leaving it as the default when you want remote clients to reach etcd.
On each machine you would start etcd with these flags:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state new
```
The command line parameters starting with `--initial-cluster` will be ignored on subsequent runs of etcd. You are free to remove the environment variables or command line flags after the initial bootstrap process. If you need to make changes to the configuration later (for example, adding or removing members to/from the cluster), see the [runtime configuration][runtime-conf] guide.
### Error Cases
In the following example, we have not included our new host in the list of enumerated nodes. If this is a new cluster, the node _must_ be added to the list of initial cluster members.
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls https://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--initial-cluster infra0=http://10.0.1.10:2380 \
--initial-cluster-state new
etcd: infra1 not listed in the initial cluster config
exit 1
```
In this example, we are attempting to map a node (infra0) on a different address (127.0.0.1:2380) than its enumerated address in the cluster list (10.0.1.10:2380). If this node is to listen on multiple addresses, all addresses _must_ be reflected in the "initial-cluster" configuration directive.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://127.0.0.1:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
--initial-cluster-state=new
etcd: error setting up initial cluster: infra0 has different advertised URLs in the cluster and advertised peer URLs list
exit 1
```
If you configure a peer with a different set of configuration options and attempt to join this cluster, you will get a cluster ID mismatch and etcd will exit.
```
$ etcd --name infra3 --initial-advertise-peer-urls http://10.0.1.13:2380 \
--listen-peer-urls http://10.0.1.13:2380 \
--listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.13:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra3=http://10.0.1.13:2380 \
--initial-cluster-state=new
etcd: conflicting cluster ID to the target cluster (c6ab534d07e8fcc4 != bc25ea2a74fb18b0). Exiting.
exit 1
```
## Discovery
In a number of cases, you might not know the IPs of your cluster peers ahead of time. This is common when utilizing cloud providers or when your network uses DHCP. In these cases, rather than specifying a static configuration, you can use an existing etcd cluster to bootstrap a new one. We call this process "discovery".
There are two methods that can be used for discovery:
* etcd discovery service
* DNS SRV records
### etcd Discovery
To better understand the design of the discovery service protocol, we suggest you read [the discovery protocol documentation][discovery-proto].
#### Lifetime of a Discovery URL
A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery URL, you should always create discovery URLs for new clusters.
Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime-conf] guide.
#### Custom etcd Discovery Service
Discovery uses an existing cluster to bootstrap itself. If you are using your own etcd cluster you can create a URL like so:
```
$ curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
```
By setting the size key to the URL, you create a discovery URL with an expected cluster size of 3.
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
The URL you will use in this case will be `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` and the etcd members will use the `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` directory for registration as they start.
**Each member must have a different name flag specified; `Hostname` or `machine-id` can be a good choice. Otherwise, discovery will fail due to duplicated names.**
Now we start etcd with those relevant flags for each member:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
This will cause each member to register itself with the custom etcd discovery service and begin the cluster once all machines have been registered.
#### Public etcd Discovery Service
If you do not have access to an existing cluster, you can use the public discovery service hosted at `discovery.etcd.io`. You can create a private discovery URL using the "new" endpoint like so:
```
$ curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will create the cluster with an initial expected size of 3 members. If you do not specify a size, a default of 3 will be used.
If you bootstrap an etcd cluster using discovery service with more than the expected number of etcd members, the extra etcd processes will [fall back][fall-back] to being [proxies][proxy] by default.
```
ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
**Each member must have a different name flag specified; `Hostname` or `machine-id` can be a good choice. Otherwise, discovery will fail due to duplicated names.**
Now we start etcd with those relevant flags for each member:
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd --name infra1 --initial-advertise-peer-urls http://10.0.1.11:2380 \
--listen-peer-urls http://10.0.1.11:2380 \
--listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd --name infra2 --initial-advertise-peer-urls http://10.0.1.12:2380 \
--listen-peer-urls http://10.0.1.12:2380 \
--listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.12:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will cause each member to register itself with the discovery service and begin the cluster once all members have been registered.
You can use the environment variable `ETCD_DISCOVERY_PROXY` to cause etcd to use an HTTP proxy to connect to the discovery service.
#### Error and Warning Cases
##### Discovery Server Errors
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd: error: the cluster doesnt have a size configuration value in https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de/_config
exit 1
```
##### User Errors
This error will occur if the discovery cluster already has the configured number of members, and `discovery-fallback` is explicitly disabled.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de \
--discovery-fallback exit
etcd: discovery: cluster is full
exit 1
```
##### Warnings
This is a harmless warning notifying you that the discovery URL will be
ignored on this machine.
```
$ etcd --name infra0 --initial-advertise-peer-urls http://10.0.1.10:2380 \
--listen-peer-urls http://10.0.1.10:2380 \
--listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.0.1.10:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcdserver: discovery token ignored since a cluster has already been initialized. Valid log found at /var/lib/etcd
```
### DNS Discovery
DNS [SRV records][rfc-srv] can be used as a discovery mechanism.
The `-discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
The following DNS SRV records are looked up in the listed order:
* _etcd-server-ssl._tcp.example.com
* _etcd-server._tcp.example.com
If `_etcd-server-ssl._tcp.example.com` is found then etcd will attempt the bootstrapping process over SSL.
To help clients discover the etcd cluster, the following DNS SRV records are looked up in the listed order:
* _etcd-client._tcp.example.com
* _etcd-client-ssl._tcp.example.com
If `_etcd-client-ssl._tcp.example.com` is found, clients will attempt to communicate with the etcd cluster over SSL.
#### Create DNS SRV records
```
$ dig +noall +answer SRV _etcd-server._tcp.example.com
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra0.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra1.example.com.
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra2.example.com.
```
```
$ dig +noall +answer SRV _etcd-client._tcp.example.com
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra0.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra1.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra2.example.com.
```
```
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```
#### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP addresses; the bootstrap process will resolve DNS A records.
The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets. The etcd member reads the resolved address to find out if it belongs to the cluster defined in the SRV records.
```
$ etcd --name infra0 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra0.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra0.example.com:2379 \
--listen-client-urls http://infra0.example.com:2379 \
--listen-peer-urls http://infra0.example.com:2380
```
```
$ etcd --name infra1 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra1.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra1.example.com:2379 \
--listen-client-urls http://infra1.example.com:2379 \
--listen-peer-urls http://infra1.example.com:2380
```
```
$ etcd --name infra2 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://infra2.example.com:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://infra2.example.com:2379 \
--listen-client-urls http://infra2.example.com:2379 \
--listen-peer-urls http://infra2.example.com:2380
```
You can also bootstrap the cluster using IP addresses instead of domain names:
```
$ etcd --name infra0 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.10:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.10:2379 \
--listen-client-urls http://10.0.1.10:2379 \
--listen-peer-urls http://10.0.1.10:2380
```
```
$ etcd --name infra1 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.11:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.11:2379 \
--listen-client-urls http://10.0.1.11:2379 \
--listen-peer-urls http://10.0.1.11:2380
```
```
$ etcd --name infra2 \
--discovery-srv example.com \
--initial-advertise-peer-urls http://10.0.1.12:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \
--advertise-client-urls http://10.0.1.12:2379 \
--listen-client-urls http://10.0.1.12:2379 \
--listen-peer-urls http://10.0.1.12:2380
```
#### etcd proxy configuration
DNS SRV records can also be used to configure the list of peers for an etcd server running in proxy mode:
```
$ etcd --proxy on --discovery-srv example.com
```
#### etcd client configuration
DNS SRV records can also be used to help clients discover the etcd cluster.
The official [etcd/client][client] supports [DNS Discovery][client-discoverer].
`etcdctl` also supports DNS Discovery by specifying the `--discovery-srv` option.
```
$ etcdctl --discovery-srv example.com set foo bar
```
#### Error Cases
You might see an error like `cannot find local etcd $name from SRV records.`. This means the etcd member failed to find itself in the cluster defined by the SRV records. The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
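One way to debug this, assuming the `example.com` domain from the examples above, is to compare the SRV targets with the address that `--initial-advertise-peer-urls` resolves to:
```
$ dig +noall +answer SRV _etcd-server._tcp.example.com
$ dig +noall +answer infra0.example.com
```
The A record of the advertised peer URL must appear among the resolved SRV targets.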
# 0.4 to 2.0+ Migration Guide
In etcd 2.0 we introduced the ability to listen on more than one address and to advertise multiple addresses. This makes using etcd easier when you have complex networking, such as private and public networks on various cloud providers.
To make this feature easier to understand, we changed the naming of some flags, but we still support the old flags to ease migration from the old version to the new one.
|Old Flag |New Flag |Migration Behavior |
|-----------------------|-----------------------|---------------------------------------------------------------------------------------|
|-peer-addr |--initial-advertise-peer-urls |If specified, peer-addr will be used as the only peer URL. Error if both flags specified.|
|-addr |--advertise-client-urls |If specified, addr will be used as the only client URL. Error if both flags specified.|
|-peer-bind-addr |--listen-peer-urls |If specified, peer-bind-addr will be used as the only peer bind URL. Error if both flags specified.|
|-bind-addr |--listen-client-urls |If specified, bind-addr will be used as the only client bind URL. Error if both flags specified.|
|-peers |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
|-peers-file |none |Deprecated. The --initial-cluster flag provides a similar concept with different semantics. Please read this guide on cluster startup.|
[client]: /client
[client-discoverer]: https://godoc.org/github.com/coreos/etcd/client#Discoverer
[conf-adv-client]: configuration.md#-advertise-client-urls
[conf-listen-client]: configuration.md#-listen-client-urls
[discovery-proto]: discovery_protocol.md
[fall-back]: proxy.md#fallback-to-proxy-mode-with-discovery-service
[proxy]: proxy.md
[rfc-srv]: http://www.ietf.org/rfc/rfc2052.txt
[runtime-conf]: runtime-configuration.md
[runtime-reconf-design]: runtime-reconf-design.md


@@ -0,0 +1,282 @@
# Configuration Flags
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment.
The format of environment variable for flag `--my-flag` is `ETCD_MY_FLAG`. It applies to all flags.
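For example, the following two invocations are equivalent ways of setting the data directory (a sketch; the command-line flag wins if both are given):
```
$ etcd --data-dir /var/lib/etcd
$ ETCD_DATA_DIR=/var/lib/etcd etcd
```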
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication. Some legacy code and documentation still references ports 4001 and 7001, but all new etcd use and discussion should adopt the assigned ports.
To start etcd automatically with custom settings on Linux, using a [systemd][systemd-intro] unit is highly recommended.
## Member Flags
### --name
+ Human-readable name for this member.
+ default: "default"
+ env variable: ETCD_NAME
+ This value is referenced as this node's own entry in the `--initial-cluster` flag (e.g. `default=http://localhost:2380` or `default=http://localhost:2380,default=http://localhost:7001`). It needs to match the key used in that flag if you're using [static bootstrapping][build-cluster]. When using discovery, each member must have a unique name. `Hostname` or `machine-id` can be a good choice.
### --data-dir
+ Path to the data directory.
+ default: "${name}.etcd"
+ env variable: ETCD_DATA_DIR
### --wal-dir
+ Path to the dedicated wal directory. If this flag is set, etcd will write the WAL files to this directory rather than the data directory. This allows a dedicated disk to be used, and helps avoid I/O competition between WAL logging and other IO operations.
+ default: ""
+ env variable: ETCD_WAL_DIR
### --snapshot-count
+ Number of committed transactions to trigger a snapshot to disk.
+ default: "10000"
+ env variable: ETCD_SNAPSHOT_COUNT
### --heartbeat-interval
+ Time (in milliseconds) of a heartbeat interval.
+ default: "100"
+ env variable: ETCD_HEARTBEAT_INTERVAL
### --election-timeout
+ Time (in milliseconds) for an election to timeout. See [tuning.md](tuning.md#time-parameters) for details.
+ default: "1000"
+ env variable: ETCD_ELECTION_TIMEOUT
### --listen-peer-urls
+ List of URLs to listen on for peer traffic. This flag tells etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_LISTEN_PEER_URLS
+ example: "http://10.0.0.1:2380"
+ invalid example: "http://example.com:2380" (domain name is invalid for binding)
### --listen-client-urls
+ List of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from clients on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_LISTEN_CLIENT_URLS
+ example: "http://10.0.0.1:2379"
+ invalid example: "http://example.com:2379" (domain name is invalid for binding)
### --max-snapshots
+ Maximum number of snapshot files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_SNAPSHOTS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
### --max-wals
+ Maximum number of wal files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_WALS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
### --cors
+ Comma-separated white list of origins for CORS (cross-origin resource sharing).
+ default: none
+ env variable: ETCD_CORS
## Clustering Flags
`--initial` prefix flags are used when bootstrapping a new member ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]) and are ignored when restarting an existing member.
`--discovery` prefix flags need to be set when using [discovery service][discovery].
### --initial-advertise-peer-urls
+ List of this member's peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members. These URLs can contain domain names.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_INITIAL_ADVERTISE_PEER_URLS
+ example: "http://example.com:2380, http://10.0.0.1:2380"
### --initial-cluster
+ Initial cluster configuration for bootstrapping.
+ default: "default=http://localhost:2380,default=http://localhost:7001"
+ env variable: ETCD_INITIAL_CLUSTER
+ The key is the value of the `--name` flag for each node provided. The default uses `default` for the key because this is the default for the `--name` flag.
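For example, a member named `infra0` could be started as follows (a sketch reusing the peer addresses from the clustering examples; note the `infra0=` key matching `--name`):
```
$ etcd --name infra0 \
  --initial-advertise-peer-urls http://10.0.1.10:2380 \
  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```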
### --initial-cluster-state
+ Initial cluster state ("new" or "existing"). Set to `new` for all members present during initial static or DNS bootstrapping. If this option is set to `existing`, etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely.
+ default: "new"
+ env variable: ETCD_INITIAL_CLUSTER_STATE
### --initial-cluster-token
+ Initial cluster token for the etcd cluster during bootstrap.
+ default: "etcd-cluster"
+ env variable: ETCD_INITIAL_CLUSTER_TOKEN
### --advertise-client-urls
+ List of this member's client URLs to advertise to the rest of the cluster. These URLs can contain domain names.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_ADVERTISE_CLIENT_URLS
+ example: "http://example.com:2379, http://10.0.0.1:2379"
+ Be careful if you are advertising URLs such as http://localhost:2379 from a cluster member and are using the proxy feature of etcd. This will cause loops, because the proxy will be forwarding requests to itself until its resources (memory, file descriptors) are eventually depleted.
### --discovery
+ Discovery URL used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY
### --discovery-srv
+ DNS srv domain used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY_SRV
### --discovery-fallback
+ Expected behavior ("exit" or "proxy") when the discovery service fails.
+ default: "proxy"
+ env variable: ETCD_DISCOVERY_FALLBACK
### --discovery-proxy
+ HTTP proxy to use for traffic to discovery service.
+ default: none
+ env variable: ETCD_DISCOVERY_PROXY
### --strict-reconfig-check
+ Reject reconfiguration requests that would cause quorum loss.
+ default: false
+ env variable: ETCD_STRICT_RECONFIG_CHECK
## Proxy Flags
`--proxy` prefix flags configure etcd to run in [proxy mode][proxy].
### --proxy
+ Proxy mode setting ("off", "readonly" or "on").
+ default: "off"
+ env variable: ETCD_PROXY
### --proxy-failure-wait
+ Time (in milliseconds) an endpoint will be held in a failed state before being reconsidered for proxied requests.
+ default: 5000
+ env variable: ETCD_PROXY_FAILURE_WAIT
### --proxy-refresh-interval
+ Time (in milliseconds) of the endpoints refresh interval.
+ default: 30000
+ env variable: ETCD_PROXY_REFRESH_INTERVAL
### --proxy-dial-timeout
+ Time (in milliseconds) for a dial to timeout, or 0 to disable the timeout.
+ default: 1000
+ env variable: ETCD_PROXY_DIAL_TIMEOUT
### --proxy-write-timeout
+ Time (in milliseconds) for a write to timeout or 0 to disable the timeout.
+ default: 5000
+ env variable: ETCD_PROXY_WRITE_TIMEOUT
### --proxy-read-timeout
+ Time (in milliseconds) for a read to timeout or 0 to disable the timeout.
+ Don't change this value if you use watches, because watches rely on long polling requests.
+ default: 0
+ env variable: ETCD_PROXY_READ_TIMEOUT
## Security Flags
The security flags help to [build a secure etcd cluster][security].
### --ca-file [DEPRECATED]
+ Path to the client server TLS CA file. `--ca-file ca.crt` could be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_CA_FILE
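As a sketch, the deprecated form and its replacement below are expected to behave the same:
```
# deprecated
$ etcd --ca-file ca.crt
# equivalent replacement
$ etcd --trusted-ca-file ca.crt --client-cert-auth
```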
### --cert-file
+ Path to the client server TLS cert file.
+ default: none
+ env variable: ETCD_CERT_FILE
### --key-file
+ Path to the client server TLS key file.
+ default: none
+ env variable: ETCD_KEY_FILE
### --client-cert-auth
+ Enable client cert authentication.
+ default: false
+ env variable: ETCD_CLIENT_CERT_AUTH
### --trusted-ca-file
+ Path to the client server TLS trusted CA cert file.
+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE
### --peer-ca-file [DEPRECATED]
+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` could be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
### --peer-cert-file
+ Path to the peer server TLS cert file.
+ default: none
+ env variable: ETCD_PEER_CERT_FILE
### --peer-key-file
+ Path to the peer server TLS key file.
+ default: none
+ env variable: ETCD_PEER_KEY_FILE
### --peer-client-cert-auth
+ Enable peer client cert authentication.
+ default: false
+ env variable: ETCD_PEER_CLIENT_CERT_AUTH
### --peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: none
+ env variable: ETCD_PEER_TRUSTED_CA_FILE
## Logging Flags
### --debug
+ Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages)
+ env variable: ETCD_DEBUG
### --log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
## Unsafe Flags
Please be CAUTIOUS when using unsafe flags because they will break the guarantees given by the consensus protocol.
For example, etcd may panic if other members in the cluster are still alive.
Follow the instructions carefully when using these flags.
### --force-new-cluster
+ Force the creation of a new one-member cluster. It commits configuration changes that forcibly remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
+ default: false
+ env variable: ETCD_FORCE_NEW_CLUSTER
## Experimental Flags
### --experimental-v3demo
+ Enable experimental [v3 demo API][rfc-v3].
+ default: false
+ env variable: ETCD_EXPERIMENTAL_V3DEMO
## Miscellaneous Flags
### --version
+ Print the version and exit.
+ default: false
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof"
+ default: false
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[proxy]: proxy.md
[restore]: admin_guide.md#restoring-a-backup
[rfc-v3]: rfc/v3api.md
[security]: security.md
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
[tuning]: tuning.md#time-parameters


@@ -0,0 +1,109 @@
# etcd release guide
This guide describes how to release a new version of etcd.
The procedure includes some manual steps for sanity checking, but it can probably be further scripted. Please keep this document up-to-date if you change the release process.
## Prepare Release
Set the desired version as an environment variable for the following steps. Here is an example to release 2.1.3:
```
export VERSION=v2.1.3
export PREV_VERSION=v2.1.2
```
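The post-release step below also refers to `${VERSION_MAJOR}` and `${VERSION_MINOR}`; they are not exported anywhere else, so set them here as well (values assume the v2.1.3 example):
```
export VERSION_MAJOR=2
export VERSION_MINOR=1
```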
All release version numbers follow the format of [semantic versioning 2.0.0](http://semver.org/).
### Major, Minor Version Release, or its Pre-release
- Ensure the relevant milestone on GitHub is complete. All referenced issues should be closed, or moved elsewhere.
- Remove this release from [roadmap](https://github.com/coreos/etcd/blob/master/ROADMAP.md), if necessary.
- Ensure the latest upgrade documentation is available.
- Bump [hardcoded MinClusterVersion in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L29), if necessary.
- Add feature capability maps for the new version, if necessary.
### Patch Version Release
- Discuss which commits should be backported to the patch release. The commits should not include merge commits.
- Cherry-pick these commits, starting from the oldest one, into the stable branch.
## Write Release Note
- Write an introduction for the new release. For example, what major bugs we fixed, what new features we introduced, or what performance improvements we made.
- Write a changelog for the release. The changelog should be straightforward and easy for the end-user to understand.
- Put `[GH XXXX]` at the head of each change line to reference the pull request that introduces the change. Moreover, link it so readers can jump to the pull request.
## Tag Version
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`.
- Ensure all tests on the CI system pass.
- Manually check that etcd builds on Linux, Darwin and Windows.
- Manually check that upgrading an etcd cluster from the previous minor version works well.
- Manually check new features work well.
- Add a signed tag through `git tag -s ${VERSION}`.
- Sanity check tag correctness through `git show tags/$VERSION`.
- Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
## Build Release Binaries and Images
- Ensure `actool` is available, or install it with `go get github.com/appc/spec/actool`.
- Ensure `docker` is available.
Run the release script in the root directory:
```
./scripts/release.sh ${VERSION}
```
It generates all release binaries and images under the directory ./release.
## Sign Binaries and Images
Choose appropriate private key to sign the generated binaries and images.
The following commands are used to sign a public release:
```
cd release
# personal GPG is okay for now
for i in etcd-*{.zip,.tar.gz}; do gpg --sign ${i}; done
# use `CoreOS ACI Builder <release@coreos.com>` secret key
gpg -u 88182190 -a --output etcd-${VERSION}-linux-amd64.aci.asc --detach-sig etcd-${VERSION}-linux-amd64.aci
```
## Publish Release Page in GitHub
- Set release title as the version name.
- Follow the format of previous release pages.
- Attach the generated binaries, aci image and signatures.
- Select whether it is a pre-release.
- Publish the release!
## Publish Docker Image in Quay.io
- Push docker image:
```
docker login quay.io
docker push quay.io/coreos/etcd:${VERSION}
```
- Add `latest` tag to the new image on [quay.io](https://quay.io/repository/coreos/etcd?tag=latest&tab=tags) if this is a stable release.
## Announce to etcd-dev Googlegroup
- Follow the format of [previous release emails](https://groups.google.com/forum/#!forum/etcd-dev).
- Make sure to include a list of authors that contributed since the previous release - something like the following might be handy:
```
git log ...${PREV_VERSION} --pretty=format:"%an" | sort | uniq | tr '\n' ',' | sed -e 's#,#, #g' -e 's#, $##'
```
- Send email to etcd-dev@googlegroups.com
## Post Release
- Create new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`.

84
Documentation/v2/faq.md Normal file

@@ -0,0 +1,84 @@
# FAQ
## 1) Why can an etcd client read an old version of data when a majority of the etcd cluster members are down?
In situations where a client connects to a minority, etcd by default favors availability over consistency. This means that even though data might be “out of date”, it is still better to return something versus nothing.
In order to confirm that a read is up to date with a majority of the cluster,
the client can use the `quorum=true` parameter on reads of keys. This means
that a majority of the cluster is checked on reads before returning the data,
otherwise the read will timeout and fail.
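For example, a quorum read of a key through the v2 HTTP API might look like this (assuming a member listening on the default client port):
```
curl "http://127.0.0.1:2379/v2/keys/foo?quorum=true"
```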
## 2) With quorum=false, doesn't this mean that if my client switches the member it is connected to, it could experience a logical ordering where the cluster goes backwards in time?
Yes, but this could be handled at the etcd client implementation via
remembering the last seen index. The “index” is the cluster's single
irrevocable sequence of the entire modification history. The client could
remember the last seen index, and determine via comparing the index returned on
the GET whether or not the state of the key-value pair is before or after its
last seen state.
## 3) What happens if a watch is registered on a minority member?
The watch will stay untriggered, even as modifications are occurring in the
majority quorum. This is an open issue, and is being addressed in v3. There are
multiple ways to work around the watch trigger not firing.
1) build a signaling mechanism independent of etcd. This could be as simple as
a “pulse” to the client to reissue a GET with quorum=true for the most recent
version of the data.
2) poll on the `/v2/keys` endpoint and check that the raft-index is increasing every
timeout (see the sketch below).
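A minimal sketch of the second workaround, assuming a member listening on the default client port, is to inspect the response headers and confirm that the raft index keeps increasing:
```
# -D - prints the response headers, which include the X-Etcd-Index and X-Raft-Index values
curl -s -D - -o /dev/null "http://127.0.0.1:2379/v2/keys/foo"
```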
## 4) What is a proxy used for?
A proxy is a redirection server to the etcd cluster. The proxy handles the
redirection of a client to the current configuration of the etcd cluster. A
typical use case is to start a proxy on a machine, and on first boot up of the
proxy specify both the `--proxy` flag and the `--initial-cluster` flag.
From there, any etcdctl client that starts up automatically speaks to the local
proxy and the proxy redirects operations to the current configuration of the
cluster it was originally paired with.
In the v2 spec of etcd, proxies cannot be promoted to members of the cluster.
They also cannot be promoted to followers or at any point become part of the
replication of the etcd cluster itself.
## 5) How is cluster membership and health handled in etcd v2?
The design goal of etcd is that reconfiguration is simply an API, and health
monitoring and addition/removal of members is up to the individual application
and their integration with the reconfiguration API.
Thus, a member that is down, even infinitely, will never be automatically
removed from the etcd cluster member list.
This makes sense because it's usually an application level / administrative
action to determine whether a reconfiguration should happen based on health.
For more information, refer to the [runtime reconfiguration design document][runtime-reconf-design].
## 6) How does --endpoint work with etcdctl?
The `--endpoint` flag can specify any number of etcd cluster members in a comma
separated list. This list might be a subset, equal to, or more than the actual
etcd cluster member list itself.
If only one peer is specified via the `--endpoint` flag, etcdctl discovers the rest of the cluster via the member list of that one peer, and then it randomly chooses a member to use. Again, the client can use the `quorum=true` flag on reads, which will always fail when using a member in the minority.
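For example, following the flag form used in this FAQ, listing the keyspace through whichever member etcdctl picks might look like this (a sketch):
```
etcdctl --endpoint http://10.0.1.10:2379,http://10.0.1.11:2379 ls /
```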
If peers from multiple clusters are specified via the `--endpoint` flag, etcdctl
will randomly choose a peer, and the request will simply get routed to one of
the clusters. This is probably not what you want.
Note: the --peers flag is now deprecated and --endpoint should be used instead, as giving etcdctl a peer URL might confuse users.
[runtime-reconf-design]: runtime-reconf-design.md


@@ -0,0 +1,35 @@
# Glossary
This document defines the various terms used in etcd documentation, command line and source code.
## Node
Node is an instance of the raft state machine.
It has a unique identification, and records other nodes' progress internally when it is the leader.
## Member
Member is an instance of etcd. It hosts a node, and provides service to clients.
## Cluster
Cluster consists of several members.
The node in each member follows the raft consensus protocol to replicate logs. The cluster receives proposals from members, commits them, and applies them to the local store.
## Peer
Peer is another member of the same cluster.
## Proposal
A proposal is a request (for example a write request, a configuration change request) that needs to go through raft protocol.
## Client
Client is a caller of the cluster's HTTP API.
## Machine (deprecated)
The alternative name for a member in etcd before 2.0.


@@ -0,0 +1,124 @@
# Libraries and Tools
**Tools**
- [etcdctl](https://github.com/coreos/etcd/tree/master/etcdctl) - A command line client for etcd
- [etcd-backup](https://github.com/fanhattan/etcd-backup) - A powerful command line utility for dumping/restoring etcd - Supports v2
- [etcd-dump](https://npmjs.org/package/etcd-dump) - Command line utility for dumping/restoring etcd.
- [etcd-fs](https://github.com/xetorthio/etcd-fs) - FUSE filesystem for etcd
- [etcddir](https://github.com/rekby/etcddir) - Realtime sync between etcd and a local directory. Works with Windows and Linux.
- [etcd-browser](https://github.com/henszey/etcd-browser) - A web-based key/value editor for etcd using AngularJS
- [etcd-lock](https://github.com/datawisesystems/etcd-lock) - Master election & distributed r/w lock implementation using etcd - Supports v2
- [etcd-console](https://github.com/matishsiao/etcd-console) - A web-based key/value editor for etcd using PHP
- [etcd-viewer](https://github.com/nikfoundas/etcd-viewer) - An etcd key-value store editor/viewer written in Java
- [etcdtool](https://github.com/mickep76/etcdtool) - Export/Import/Edit etcd directory as JSON/YAML/TOML and Validate directory using JSON schema
- [etcd-rest](https://github.com/mickep76/etcd-rest) - Create generic REST API in Go using etcd as a backend with validation using JSON schema
- [etcdsh](https://github.com/kamilhark/etcdsh) - A command line client with support for command history and tab completion. Supports v2
**Go libraries**
- [etcd/client](https://github.com/coreos/etcd/blob/master/client) - the officially maintained Go client
- [go-etcd](https://github.com/coreos/go-etcd) - the deprecated official client. May be useful for older (<2.0.0) versions of etcd.
**Java libraries**
- [boonproject/etcd](https://github.com/boonproject/boon/blob/master/etcd/README.md) - Supports v2, Async/Sync and waits
- [justinsb/jetcd](https://github.com/justinsb/jetcd)
- [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2
- [jurmous/etcd4j](https://github.com/jurmous/etcd4j) - Supports v2, Async/Sync, waits and SSL
- [AdoHe/etcd4j](http://github.com/AdoHe/etcd4j) - Supports v2 (enhance for real production cluster)
**Python libraries**
- [jplana/python-etcd](https://github.com/jplana/python-etcd) - Supports v2
- [russellhaering/txetcd](https://github.com/russellhaering/txetcd) - a Twisted Python library
- [cholcombe973/autodock](https://github.com/cholcombe973/autodock) - A docker deployment automation tool
- [lisael/aioetcd](https://github.com/lisael/aioetcd) - (Python 3.4+) Asyncio coroutines client (Supports v2)
**Node libraries**
- [stianeikeland/node-etcd](https://github.com/stianeikeland/node-etcd) - Supports v2 (w Coffeescript)
- [lavagetto/nodejs-etcd](https://github.com/lavagetto/nodejs-etcd) - Supports v2
- [deedubs/node-etcd-config](https://github.com/deedubs/node-etcd-config) - Supports v2
**Ruby libraries**
- [iconara/etcd-rb](https://github.com/iconara/etcd-rb)
- [jpfuentes2/etcd-ruby](https://github.com/jpfuentes2/etcd-ruby)
- [ranjib/etcd-ruby](https://github.com/ranjib/etcd-ruby) - Supports v2
**C libraries**
- [jdarcy/etcd-api](https://github.com/jdarcy/etcd-api) - Supports v2
- [shafreeck/cetcd](https://github.com/shafreeck/cetcd) - Supports v2
**C++ libraries**
- [edwardcapriolo/etcdcpp](https://github.com/edwardcapriolo/etcdcpp) - Supports v2
- [suryanathan/etcdcpp](https://github.com/suryanathan/etcdcpp) - Supports v2 (with waits)
**Clojure libraries**
- [aterreno/etcd-clojure](https://github.com/aterreno/etcd-clojure)
- [dwwoelfel/cetcd](https://github.com/dwwoelfel/cetcd) - Supports v2
- [rthomas/clj-etcd](https://github.com/rthomas/clj-etcd) - Supports v2
**Erlang libraries**
- [marshall-lee/etcd.erl](https://github.com/marshall-lee/etcd.erl)
**.Net Libraries**
- [wangjia184/etcdnet](https://github.com/wangjia184/etcdnet) - Supports v2
- [drusellers/etcetera](https://github.com/drusellers/etcetera)
**PHP Libraries**
- [linkorb/etcd-php](https://github.com/linkorb/etcd-php)
**Haskell libraries**
- [wereHamster/etcd-hs](https://github.com/wereHamster/etcd-hs)
**R libraries**
- [ropensci/etseed](https://github.com/ropensci/etseed)
**Tcl libraries**
- [efrecon/etcd-tcl](https://github.com/efrecon/etcd-tcl) - Supports v2, except wait.
**Chef Integration**
- [coderanger/etcd-chef](https://github.com/coderanger/etcd-chef)
**Chef Cookbook**
- [spheromak/etcd-cookbook](https://github.com/spheromak/etcd-cookbook)
**BOSH Releases**
- [cloudfoundry-community/etcd-boshrelease](https://github.com/cloudfoundry-community/etcd-boshrelease)
- [cloudfoundry/cf-release](https://github.com/cloudfoundry/cf-release/tree/master/jobs/etcd)
**Projects using etcd**
- [binocarlos/yoda](https://github.com/binocarlos/yoda) - etcd + ZeroMQ
- [calavera/active-proxy](https://github.com/calavera/active-proxy) - HTTP Proxy configured with etcd
- [derekchiang/etcdplus](https://github.com/derekchiang/etcdplus) - A set of distributed synchronization primitives built upon etcd
- [go-discover](https://github.com/flynn/go-discover) - service discovery in Go
- [gleicon/goreman](https://github.com/gleicon/goreman/tree/etcd) - Branch of the Go Foreman clone with etcd support
- [garethr/hiera-etcd](https://github.com/garethr/hiera-etcd) - Puppet hiera backend using etcd
- [mattn/etcd-vim](https://github.com/mattn/etcd-vim) - SET and GET keys from inside vim
- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [scrz](https://github.com/scrz/scrz) - Container manager, stores configuration in etcd.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.
- [duedil-ltd/discodns](https://github.com/duedil-ltd/discodns) - Simple DNS nameserver using etcd as a database for names and records.
- [skynetservices/skydns](https://github.com/skynetservices/skydns) - RFC compliant DNS server
- [xordataexchange/crypt](https://github.com/xordataexchange/crypt) - Securely store values in etcd using GPG encryption
- [spf13/viper](https://github.com/spf13/viper) - Go configuration library, reads values from ENV, pflags, files, and etcd with optional encryption
- [lytics/metafora](https://github.com/lytics/metafora) - Go distributed task library
- [ryandoyle/nss-etcd](https://github.com/ryandoyle/nss-etcd) - A GNU libc NSS module for resolving names from etcd.

143
Documentation/v2/metrics.md Normal file

@@ -0,0 +1,143 @@
# Metrics
etcd uses [Prometheus][prometheus] for metrics reporting. The metrics can be used for real-time monitoring and debugging. etcd does not persist its metrics; if a member restarts, the metrics will be reset.
The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics`. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).
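For example, against a member listening on the default client port:
```
curl http://127.0.0.1:2379/metrics
```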
Follow the [Prometheus getting started doc][prometheus-getting-started] to spin up a Prometheus server to collect etcd metrics.
The naming of metrics follows the suggested [Prometheus best practices][prometheus-naming]. A metric name has an `etcd` or `etcd_debugging` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
## etcd namespace metrics
The metrics under the `etcd` prefix are for monitoring and alerting. They are stable, high level metrics. Any change to these metrics will be noted in the release notes.
### http requests
These metrics describe the requests (non-watch events) served by etcd members in non-proxy mode: total
incoming requests, request failures and processing latency (incl. raft rounds for storage). They are useful for tracking
user-generated traffic hitting the etcd cluster.
All these metrics are prefixed with `etcd_http_`
| Name | Description | Type |
|--------------------------------|-----------------------------------------------------------------------------------------|--------------------|
| received_total | Total number of events after parsing and auth. | Counter(method) |
| failed_total | Total number of failed events.   | Counter(method,error) |
| successful_duration_seconds | Bucketed handling times of the requests, including raft rounds for writes. | Histogram(method) |
Example Prometheus queries that may be useful from these metrics (across all etcd members):
* `sum(rate(etcd_http_failed_total{job="etcd"}[1m])) by (method) / sum(rate(etcd_http_received_total{job="etcd"}[1m])) by (method)`
Shows the fraction of events that failed by HTTP method across all members, across a time window of `1m`.
* `sum(rate(etcd_http_received_total{job="etcd",method="GET"}[1m])) by (method)`
`sum(rate(etcd_http_received_total{job="etcd",method!="GET"}[1m])) by (method)`
Shows the rate of readonly/write queries across all servers, across a time window of `1m`.
* `histogram_quantile(0.9, sum(rate(etcd_http_successful_duration_seconds_bucket{job="etcd",method="GET"}[5m])) by (le))`
`histogram_quantile(0.9, sum(rate(etcd_http_successful_duration_seconds_bucket{job="etcd",method!="GET"}[5m])) by (le))`
Show the 0.90-tile latency (in seconds) of read/write (respectively) event handling across all members, with a window of `5m`.
### proxy
etcd members operating in proxy mode do not directly perform store operations. They forward all requests to cluster instances.
Tracking the rate of requests coming from a proxy allows one to pin down which machine is performing most reads/writes.
All these metrics are prefixed with `etcd_proxy_`
| Name | Description | Type |
|---------------------------|-----------------------------------------------------------------------------------------|--------------------|
| requests_total | Total number of requests by this proxy instance. | Counter(method) |
| handled_total | Total number of fully handled requests, with responses from etcd members. | Counter(method) |
| dropped_total | Total number of dropped requests due to forwarding errors to etcd members.  | Counter(method,error) |
| handling_duration_seconds | Bucketed handling times by HTTP method, including round trip to member instances. | Histogram(method) |
Example Prometheus queries that may be useful from these metrics (across all etcd servers):
* `sum(rate(etcd_proxy_handled_total{job="etcd"}[1m])) by (method)`
Rate of requests (by HTTP method) handled by all proxies, across a window of `1m`.
* `histogram_quantile(0.9, sum(rate(etcd_proxy_handling_duration_seconds_bucket{job="etcd",method="GET"}[5m])) by (le))`
`histogram_quantile(0.9, sum(rate(etcd_proxy_handling_duration_seconds_bucket{job="etcd",method!="GET"}[5m])) by (le))`
Show the 0.90-tile latency (in seconds) of handling of user requests across all proxy machines, with a window of `5m`.
* `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`
Number of failed requests on the proxy. This should be 0; spikes here indicate connectivity issues to the etcd cluster.
## etcd_debugging namespace metrics
The metrics under the `etcd_debugging` prefix are for debugging. They are very implementation dependent and volatile. They might be changed or removed without any warning in new etcd releases. Some of the metrics might be moved to the `etcd` prefix when they become more stable.
### etcdserver
| Name | Description | Type |
|-----------------------------------------|--------------------------------------------------|-----------|
| proposal_duration_seconds | The latency distributions of committing proposal | Histogram |
| proposals_pending | The current number of pending proposals | Gauge |
| proposals_failed_total | The total number of failed proposals | Counter |
[Proposal][glossary-proposal] duration (`proposal_duration_seconds`) provides a proposal commit latency histogram. The reported latency reflects network and disk IO delays in etcd.
Proposals pending (`proposals_pending`) indicates how many proposals are queued for commit. Rising pending proposals suggests there is a high client load or the cluster is unstable.
Failed proposals (`proposals_failed_total`) are normally related to two issues: temporary failures related to a leader election or longer duration downtime caused by a loss of quorum in the cluster.
### wal
| Name | Description | Type |
|------------------------------------|--------------------------------------------------|-----------|
| fsync_duration_seconds | The latency distributions of fsync called by wal | Histogram |
| last_index_saved | The index of the last entry saved by wal | Gauge |
Abnormally high fsync duration (`fsync_duration_seconds`) indicates disk issues and might cause the cluster to be unstable.
### snapshot
| Name | Description | Type |
|--------------------------------------------|------------------------------------------------------------|-----------|
| snapshot_save_total_duration_seconds | The total latency distributions of save called by snapshot | Histogram |
Abnormally high snapshot duration (`snapshot_save_total_duration_seconds`) indicates disk issues and might cause the cluster to be unstable.
### rafthttp
| Name | Description | Type | Labels |
|-----------------------------------|--------------------------------------------|--------------|--------------------------------|
| message_sent_latency_seconds | The latency distributions of messages sent | HistogramVec | sendingType, msgType, remoteID |
| message_sent_failed_total | The total number of failed messages sent | Summary | sendingType, msgType, remoteID |
Abnormally high message duration (`message_sent_latency_seconds`) indicates network issues and might cause the cluster to be unstable.
An increase in message failures (`message_sent_failed_total`) indicates more severe network issues and might cause the cluster to be unstable.
Label `sendingType` is the connection type used to send messages. `message`, `msgapp` and `msgappv2` use HTTP streaming, while `pipeline` issues an HTTP request for each message.
Label `msgType` is the type of raft message. `MsgApp` is log replication messages; `MsgSnap` is snapshot install messages; `MsgProp` is proposal forward messages; the others maintain internal raft status. Given large snapshots, a lengthy `MsgSnap` transmission latency should be expected. For other types of messages, given enough network bandwidth, latencies comparable to ping latency should be expected.
Label `remoteID` is the member ID of the message destination.
## Prometheus supplied metrics
The Prometheus client library provides a number of metrics under the `go` and `process` namespaces. There are a few that are particularly interesting.
| Name | Description | Type |
|-----------------------------------|--------------------------------------------|--------------|
| process_open_fds | Number of open file descriptors. | Gauge |
| process_max_fds | Maximum number of open file descriptors. | Gauge |
Heavy file descriptor (`process_open_fds`) usage (i.e., near the process's file descriptor limit, `process_max_fds`) indicates a potential file descriptor exhaustion issue. If the file descriptors are exhausted, etcd may panic because it cannot create new WAL files.
[glossary-proposal]: glossary.md#proposal
[prometheus]: http://prometheus.io/
[prometheus-getting-started]: http://prometheus.io/docs/introduction/getting_started/
[prometheus-naming]: http://prometheus.io/docs/practices/naming/


@@ -0,0 +1,62 @@
# FreeBSD
Starting with version 0.1.2, both etcd and etcdctl have been ported to FreeBSD and can
be installed either via the packages or ports system. Their versions have recently been
updated to 0.2.0, so you can now enjoy using etcd and etcdctl on FreeBSD 10.0 (RC4 as
of now) and 9.x, where they have been tested. They might also work when installed from
ports on earlier versions of FreeBSD, but your mileage may vary.
## Installation
### Using pkgng package system
1. If you do not have pkgng installed, install it by running `pkg` and answering 'Y'
when asked
2. Update your repository data with `pkg update`
3. Install etcd with `pkg install coreos-etcd coreos-etcdctl`
4. Verify successful installation with `pkg info | grep etcd` and you should get:
```
r@fbsd-10:/ # pkg info | grep etcd
coreos-etcd-0.2.0              Highly-available key value store and service discovery
coreos-etcdctl-0.2.0           Simple commandline client for etcd
r@fbsd-10:/ #
```
5. You're ready to use etcd and etcdctl! For more information about using pkgng, please
see: http://www.freebsd.org/doc/handbook/pkgng-intro.html
 
### Using ports system
1. If you do not have ports installed, install them with `portsnap fetch extract` (it
may take some time depending on your hardware and network connection)
2. Build etcd with `cd /usr/ports/devel/etcd && make install clean`; you
will get an option to build and install the documentation and etcdctl with it.
3. If you haven't installed etcdctl with it, and you would like to install it later, you can build it
with `cd /usr/ports/devel/etcdctl && make install clean`
4. Verify successful installation with `pkg info | grep etcd` and you should get:
 
```
r@fbsd-10:/ # pkg info | grep etcd
coreos-etcd-0.2.0              Highly-available key value store and service discovery
coreos-etcdctl-0.2.0           Simple commandline client for etcd
r@fbsd-10:/ #
```
5. You're ready to use etcd and etcdctl! For more information about using the ports system,
please see: https://www.freebsd.org/doc/handbook/ports-using.html
## Issues
If you find any issues with the build/install procedure, or you've found a problem that
you've verified is local to the FreeBSD version only (for example, by not being able to
reproduce it on any other platform, like OSX or Linux), please send a
problem report using this page: http://www.freebsd.org/send-pr.html


@@ -0,0 +1,51 @@
# Production Users
This document tracks people and use cases for etcd in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using various etcd applications, operation environments, and cluster sizes. The etcd development team may reach out periodically to check-in on your experience and update this list.
## discovery.etcd.io
- *Application*: https://github.com/coreos/discovery.etcd.io
- *Launched*: Feb. 2014
- *Cluster Size*: 5 members, 5 discovery proxies
- *Order of Data Size*: 100s of Megabytes
- *Operator*: CoreOS, brandon.philips@coreos.com
- *Environment*: AWS
- *Backups*: Periodic async to S3
discovery.etcd.io is the longest continuously running etcd backed service that we know about. It is the basis of automatic cluster bootstrap and was launched in Feb. 2014: https://coreos.com/blog/etcd-0.3.0-released/.
## OpenTable
- *Application*: OpenTable internal service discovery and cluster configuration management
- *Launched*: May 2014
- *Cluster Size*: 3 members each in 6 independent clusters; approximately 50 nodes reading / writing
- *Order of Data Size*: 10s of MB
- *Operator*: OpenTable, Inc; sschlansker@opentable.com
- *Environment*: AWS, VMWare
- *Backups*: None, all data can be re-created if necessary.
## cycoresys.com
- *Application*: multiple
- *Launched*: Jul. 2014
- *Cluster Size*: 3 members, _n_ proxies
- *Order of Data Size*: 100s of kilobytes
- *Operator*: CyCore Systems, Inc, sys@cycoresys.com
- *Environment*: Baremetal
- *Backups*: Periodic sync to Ceph RadosGW and DigitalOcean VM
CyCore Systems provides architecture and engineering for computing systems. This cluster provides microservices, virtual machines, databases, and storage clusters to a number of clients. It is built on CoreOS machines, with each machine in the cluster running etcd as a peer or proxy.
## Radius Intelligence
- *Application*: multiple internal tools, Kubernetes clusters, bootstrappable system configs
- *Launched*: June 2015
- *Cluster Size*: 2 clusters of 5 and 3 members; approximately a dozen nodes read/write
- *Order of Data Size*: 100s of kilobytes
- *Operator*: Radius Intelligence; jcderr@radius.com
- *Environment*: AWS, CoreOS, Kubernetes
- *Backups*: None, all data can be recreated if necessary.
Radius Intelligence uses Kubernetes running CoreOS to containerize and scale internal toolsets. Examples include running [JetBrains TeamCity][teamcity] and internal AWS security and cost reporting tools. etcd clusters back these clusters as well as provide some basic environment bootstrapping configuration keys.
[teamcity]: https://www.jetbrains.com/teamcity/

153
Documentation/v2/proxy.md Normal file

@@ -0,0 +1,153 @@
# Proxy
etcd can run as a transparent proxy. Doing so allows for easy discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participate in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.
etcd currently supports two proxy modes: `readwrite` and `readonly`. The default mode is `readwrite`, which forwards both read and write requests to the etcd cluster. A `readonly` etcd proxy only forwards read requests to the etcd cluster, and returns `HTTP 501` to all write requests.
The proxy will shuffle the list of cluster members periodically to avoid sending all connections to a single member.
The member list used by an etcd proxy consists of all client URLs advertised in the cluster. These client URLs are specified in each etcd cluster member's `advertise-client-urls` option.
An etcd proxy examines several command-line options to discover its peer URLs. In order of precedence, these options are `discovery`, `discovery-srv`, and `initial-cluster`. The `initial-cluster` option is set to a comma-separated list of one or more etcd peer URLs used temporarily in order to discover the permanent cluster.
After establishing a list of peer URLs in this manner, the proxy retrieves the list of client URLs from the first reachable peer. These client URLs are specified by the `advertise-client-urls` option to etcd peers. The proxy then continues to connect to the first reachable etcd cluster member every thirty seconds to refresh the list of client URLs.
While etcd proxies therefore do not need to be given the `advertise-client-urls` option, as they retrieve this configuration from the cluster, this implies that `initial-cluster` must be set correctly for every proxy, and the `advertise-client-urls` option must be set correctly for every non-proxy, first-order cluster peer. Otherwise, requests to any etcd proxy would be forwarded improperly. Take special care not to set the `advertise-client-urls` option to URLs that point to the proxy itself, as such a configuration will cause the proxy to enter a loop, forwarding requests to itself until resources are exhausted. To correct either case, stop etcd and restart it with the correct URLs.
[This example Procfile][procfile] illustrates the difference in the etcd peer and proxy command lines used to configure and start a cluster with one proxy under the [goreman process management utility][goreman].
To summarize etcd proxy startup and peer discovery:
1. etcd proxies execute the following steps in order until the cluster *peer-urls* are known:
1. If `discovery` is set for the proxy, ask the given discovery service for
the *peer-urls*. The *peer-urls* will be the combined
`initial-advertise-peer-urls` of all first-order, non-proxy cluster
members.
2. If `discovery-srv` is set for the proxy, the *peer-urls* are discovered
from DNS.
3. If `initial-cluster` is set for the proxy, that will become the value of
*peer-urls*.
4. Otherwise use the default value of
`http://localhost:2380,http://localhost:7001`.
2. These *peer-urls* are used to contact the (non-proxy) members of the cluster
to find their *client-urls*. The *client-urls* will thus be the combined
`advertise-client-urls` of all cluster members (i.e. non-proxies).
3. Requests from clients of the proxy will be forwarded (proxied) to these
*client-urls*.
Always start the first-order etcd cluster members first, then any proxies. A proxy must be able to reach the cluster members to retrieve its configuration, and will attempt connections somewhat aggressively in the absence of such a channel. Starting the members before any proxy ensures the proxy can discover the client URLs when it later starts.
## Using an etcd proxy
To start etcd in proxy mode, you need to provide three flags: `proxy`, `listen-client-urls`, and `initial-cluster` (or `discovery`).
To start a readwrite proxy, set `-proxy on`; to start a readonly proxy, set `-proxy readonly`.
The proxy will listen on `listen-client-urls` and forward requests to the etcd cluster discovered from the `initial-cluster` flag or the `discovery` url.
### Start an etcd proxy with a static configuration
To start a proxy that will connect to a statically defined etcd cluster, specify the `initial-cluster` flag:
```
etcd --proxy on \
--listen-client-urls http://127.0.0.1:2379 \
--initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```
### Start an etcd proxy with the discovery service
If you bootstrap an etcd cluster using the [discovery service][discovery-service], you can also start the proxy with the same `discovery`.
To start a proxy using the discovery service, specify the `discovery` flag. The proxy will wait until the etcd cluster defined at the `discovery` url finishes bootstrapping, and then start to forward the requests.
```
etcd --proxy on \
--listen-client-urls http://127.0.0.1:2379 \
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
## Fallback to proxy mode with discovery service
If you bootstrap an etcd cluster using [discovery service][discovery-service] with more than the expected number of etcd members, the extra etcd processes will fall back to being `readwrite` proxies by default. They will forward the requests to the cluster as described above. For example, if you create a discovery url with `size=5`, and start ten etcd processes using that same discovery url, the result will be a cluster with five etcd members and five proxies. Note that this behaviour can be disabled with the `discovery-fallback='exit'` flag.
## Promote a proxy to a member of etcd cluster
A proxy is the part of the etcd cluster that does not participate in consensus. A proxy will never automatically promote itself to an etcd member that participates in consensus.
If you want to promote a proxy to an etcd member, there are four steps you need to follow:
- use etcdctl to add the proxy node as an etcd member into the existing cluster
- stop the etcd proxy process or service
- remove the existing proxy data directory
- restart the etcd process with new member configuration
## Example
We assume you have a one member etcd cluster with one proxy. The cluster information is listed below:
|Name|Address|
|------|---------|
|infra0|10.0.1.10|
|proxy0|10.0.1.11|
This example walks you through promoting one proxy to an etcd member. The cluster will become a two-member cluster after finishing the four steps.
### Add a new member into the existing cluster
First, use etcdctl to add the member to the cluster, which will output the environment variables needed to correctly configure the new member:
``` bash
$ etcdctl -endpoint http://10.0.1.10:2379 member add infra1 http://10.0.1.11:2380
added member 9bf1b35fc7761a23 to cluster
ETCD_NAME="infra1"
ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
ETCD_INITIAL_CLUSTER_STATE=existing
```
### Stop the proxy process
Stop the existing proxy so we can wipe its state on disk and reload it with the new configuration:
``` bash
ps aux | grep etcd
kill %etcd_proxy_pid%
```
or (if you are running etcd proxy as etcd service under systemd)
``` bash
sudo systemctl stop etcd
```
### Remove the existing proxy data dir
``` bash
rm -rf %data_dir%/proxy
```
### Start etcd as a new member
Finally, start the reconfigured member and make sure it joins the cluster correctly:
``` bash
$ export ETCD_NAME="infra1"
$ export ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
$ export ETCD_INITIAL_CLUSTER_STATE=existing
$ etcd --listen-client-urls http://10.0.1.11:2379 \
--advertise-client-urls http://10.0.1.11:2379 \
--listen-peer-urls http://10.0.1.11:2380 \
--initial-advertise-peer-urls http://10.0.1.11:2380 \
--data-dir %data_dir%
```
If you are running etcd under systemd, you should modify the service file with correct configuration and restart the service:
``` bash
sudo systemctl restart etcd
```
If an error occurs, check the [add member troubleshooting doc][runtime-configuration].
[discovery-service]: clustering.md#discovery
[goreman]: https://github.com/mattn/goreman
[procfile]: /Procfile
[runtime-configuration]: runtime-configuration.md#error-cases-when-adding-members


@@ -0,0 +1,45 @@
# Reporting Bugs
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
To make your bug report accurate and easy to understand, please try to create bug reports that are:
- Specific. Include as many details as possible: which version, what environment, what configuration, etc. You can also attach the etcd log (the starting log with the etcd configuration is especially important).
- Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce; please include the steps that might lead to the problem. You can also attach the affected etcd data dir and stack trace to the bug report.
- Isolated. Please try to isolate and reproduce the bug with minimum dependencies. Involving too many dependencies in a bug report significantly slows down the fix. Debugging external systems that rely on etcd is out of scope, but we are happy to point you in the right direction or help you interact with etcd in the correct manner.
- Unique. Do not duplicate an existing bug report.
- Scoped. One bug per report. Do not follow up with another bug inside one report.
You might also want to read [Elika Etemad's article on filing good bug reports][filing-good-bugs] before creating a bug report.
We might ask you for further information to locate a bug. A duplicated bug report will be closed.
## Frequently Asked Questions
### How to get a stack trace
``` bash
$ kill -QUIT $PID
```
### How to get etcd version
``` bash
$ etcd --version
```
### How to get etcd configuration and log when it runs as systemd service etcd2.service
``` bash
$ sudo systemctl cat etcd2
$ sudo journalctl -u etcd2
```
Due to an upstream systemd bug, journald may miss the last few log lines when its process exits. If journalctl tells you that etcd stopped without a fatal or panic message, you can try `sudo journalctl -f -t etcd2` to get the full log.
[etcd-issue]: https://github.com/coreos/etcd/issues/new
[filing-good-bugs]: http://fantasai.inkedblade.net/style/talks/filing-good-bugs/


@@ -0,0 +1,211 @@
# Overview
The etcd v3 API is designed to give users a more efficient and cleaner abstraction compared to etcd v2. There are a number of semantic and protocol changes in this new API. For an overview [see Xiang Li's video](https://youtu.be/J5AioGtEPeQ?t=211).
To prove out the design of the v3 API the team has also built [a number of example recipes](https://github.com/coreos/etcd/tree/master/contrib/recipes), there is a [video discussing these recipes too](https://www.youtube.com/watch?v=fj-2RY-3yVU&feature=youtu.be&t=590).
# Design
1. Flatten binary key-value space
2. Keep the event history until compaction
- access to old version of keys
- user controlled history compaction
3. Support range query
- Pagination support with limit argument
- Support consistency guarantee across multiple range queries
4. Replace TTL key with Lease
- more efficient/ low cost keep alive
- a logical group of TTL keys
5. Replace CAS/CAD with multi-object Txn
- MUCH MORE powerful and flexible
6. Support efficient watching with multiple ranges
7. RPC API supports the complete set of APIs.
- more efficient than JSON/HTTP
- additional txn/lease support
8. HTTP API supports a subset of APIs.
- easy for people to try out etcd
- easy for people to write simple etcd application
## Notes
### Request Size Limitation
The max request size is around 1MB. Since etcd replicates requests in a streaming fashion, a very large
request might block other requests for a long time. The use case for etcd is to store small configuration
values, so we prevent users from submitting large requests. This also applies to Txn requests. We might loosen
the limit a little in the future or make it configurable.
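As a client-side illustration of this limit in the Go client, a caller can reject oversized values before sending them. This is only a sketch: the ~1MB figure is taken from the note above rather than read from the server, and the guard checks only the value size.
```go
package etcdexamples

import (
	"fmt"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

// maxValueBytes mirrors the ~1MB request limit described above; it is not an etcd API constant.
const maxValueBytes = 1 << 20

// putSmall refuses to send values that would likely exceed the request size limit.
func putSmall(cli *clientv3.Client, key, value string) error {
	if len(value) > maxValueBytes {
		return fmt.Errorf("value for %q is %d bytes, above the ~1MB request limit", key, len(value))
	}
	_, err := cli.Put(context.Background(), key, value)
	return err
}
```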
## Protobuf Defined API
[api protobuf][api-protobuf]
[kv protobuf][kv-protobuf]
## Examples
### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
PutResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
}
```
### Get a key (assume we have foo=bar)
```
Get ( RangeRequest { key = foo } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
kvs = {
{
key = foo,
value = bar,
create_revision = 1,
mod_revision = 1,
version = 1;
},
},
}
```
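In the Go client (clientv3), the Put and Get examples above correspond roughly to the program below. This is a minimal sketch: the endpoint `localhost:2379` is an assumption, and the printed fields come from the response types the client returns.
```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

func main() {
	// Connect to a single member (the endpoint is an assumption for this sketch).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// Put(PutRequest{key=foo, value=bar}); a put is always successful.
	putResp, err := cli.Put(ctx, "foo", "bar")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("put at revision:", putResp.Header.Revision)

	// Get(RangeRequest{key=foo}); each kv carries create_revision, mod_revision, version.
	getResp, err := cli.Get(ctx, "foo")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range getResp.Kvs {
		fmt.Printf("%s=%s (mod_revision=%d, version=%d)\n", kv.Key, kv.Value, kv.ModRevision, kv.Version)
	}
}
```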
### Range over a key space (assume we have foo0=bar0… foo100=bar100)
```
Range ( RangeRequest { key = foo, end_key = foo80, limit = 30 } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 100,
raft_term = 0x1,
kvs = {
{
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
...,
{
key = foo30,
value = bar30,
create_revision = 30,
mod_revision = 30,
version = 1;
},
},
}
```
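The same range query can be issued from the Go client with the range-end and limit options; a sketch, again assuming a connected `*clientv3.Client`:
```go
package etcdexamples

import (
	"fmt"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

// rangeFoo mirrors RangeRequest{key=foo, end_key=foo80, limit=30}.
func rangeFoo(cli *clientv3.Client) error {
	resp, err := cli.Get(context.Background(), "foo",
		clientv3.WithRange("foo80"), // half-open interval [foo, foo80)
		clientv3.WithLimit(30),      // at most 30 kvs in this response
	)
	if err != nil {
		return err
	}
	fmt.Println("revision:", resp.Header.Revision, "more:", resp.More)
	for _, kv := range resp.Kvs {
		fmt.Printf("%s=%s\n", kv.Key, kv.Value)
	}
	return nil
}
```
The `More` flag reports that the limit cut the result short, which is the hook for the pagination mentioned in the design list.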
### Finish a txn (assume we have foo0=bar0, foo1=bar1)
```
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
compare = {
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = greater, key = foo1, mod_revision = 1}}
},
// if the comparison succeeds, put foo2 = bar2
success = {PutRequest { key = foo2, value = success }},
// if the comparison fails, put foo2=fail
failure = {PutRequest { key = foo2, value = failure }},
)
TxnResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
succeeded = true,
responses = {
// response of PUT foo2=success
{
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
}
}
}
```
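The transaction above maps onto the Go client's Txn builder; a sketch assuming a connected `*clientv3.Client`, with the same keys and comparisons as the example:
```go
package etcdexamples

import (
	"fmt"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

// txnFoo: if mod_revision(foo0) == 1 and mod_revision(foo1) > 1,
// put foo2=success, otherwise put foo2=failure.
func txnFoo(cli *clientv3.Client) error {
	resp, err := cli.Txn(context.Background()).
		If(
			clientv3.Compare(clientv3.ModRevision("foo0"), "=", 1),
			clientv3.Compare(clientv3.ModRevision("foo1"), ">", 1),
		).
		Then(clientv3.OpPut("foo2", "success")).
		Else(clientv3.OpPut("foo2", "failure")).
		Commit()
	if err != nil {
		return err
	}
	fmt.Println("succeeded:", resp.Succeeded, "revision:", resp.Header.Revision)
	return nil
}
```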
### Watch on a key/range
```
Watch( WatchRequest{
key = foo,
end_key = fop, // prefix foo
start_revision = 20,
end_revision = 10000,
// server decided notification frequency
progress_notification = true,
}
… // this can be a watch request stream
)
// put (foo0=bar0) event at 3
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
}
// a notification at 2000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 2000,
raft_term = 0x1,
// nil event as notification
}
// put (foo0=bar3000) event at 3000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3000,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar3000,
create_revision = 1,
mod_revision = 3000,
version = 2;
},
}
```
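Watching the foo prefix from the Go client looks roughly like the sketch below; `WithPrefix` derives the same `fop` range end shown above and `WithRev` sets the start revision (a connected `*clientv3.Client` is assumed):
```go
package etcdexamples

import (
	"fmt"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

// watchFoo streams events for keys with prefix "foo", starting at revision 20.
func watchFoo(cli *clientv3.Client) {
	rch := cli.Watch(context.Background(), "foo",
		clientv3.WithPrefix(), // equivalent to end_key = "fop"
		clientv3.WithRev(20),  // start_revision = 20
	)
	for wresp := range rch {
		for _, ev := range wresp.Events {
			fmt.Printf("%s %s=%s (mod_revision=%d)\n",
				ev.Type, ev.Kv.Key, ev.Kv.Value, ev.Kv.ModRevision)
		}
	}
}
```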
[api-protobuf]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/master/storage/storagepb/kv.proto

View File

@@ -0,0 +1,75 @@
# Tuning
The default settings in etcd should work well for installations on a local network where the average network latency is low.
However, when using etcd across multiple data centers or over networks with high latency you may need to tweak the heartbeat interval and election timeout settings.
The network isn't the only source of latency. Each request and response may be impacted by slow disks on both the leader and follower. Each of these timeouts represents the total time from request to successful response from the other machine.
## Time Parameters
The underlying distributed consensus protocol relies on two separate time parameters to ensure that nodes can handoff leadership if one stalls or goes offline.
The first parameter is called the *Heartbeat Interval*.
This is the frequency with which the leader will notify followers that it is still the leader.
As a best practice, the parameter should be set to roughly the round-trip time between members.
By default, etcd uses a `100ms` heartbeat interval.
The second parameter is the *Election Timeout*.
This timeout is how long a follower node will go without hearing a heartbeat before attempting to become leader itself.
By default, etcd uses a `1000ms` election timeout.
Adjusting these values is a trade off.
The heartbeat interval is recommended to be around the maximum of the average round-trip time (RTT) between members, normally around 0.5-1.5x the round-trip time.
If the heartbeat interval is too low, etcd will send unnecessary messages that increase the usage of CPU and network resources.
On the other hand, a heartbeat interval that is too high leads to a high election timeout, and a higher election timeout takes longer to detect a leader failure.
The easiest way to measure round-trip time (RTT) is to use the [PING utility][ping].
The election timeout should be set based on the heartbeat interval and average round-trip time between members.
Election timeouts must be at least 10 times the round-trip time so they can account for variance in your network.
For example, if the round-trip time between your members is 10ms then you should have at least a 100ms election timeout.
You should also set your election timeout to at least 5 to 10 times your heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms you should set your election timeout to at least 250ms - 500ms.
The upper limit of election timeout is 50000ms (50s), which should only be used when deploying a globally-distributed etcd cluster.
A reasonable round-trip time for the continental United States is 130ms, and the time between US and Japan is around 350-400ms.
If your network has uneven performance or regular packet delays/loss, it is possible that a couple of retries may be necessary to successfully send a packet, so 5s is a safe upper limit on the global round-trip time.
As the election timeout should be an order of magnitude bigger than the broadcast time, 50 seconds becomes a reasonable maximum for the ~5s round-trip time of a globally distributed cluster.
The heartbeat interval and election timeout value should be the same for all members in one cluster. Setting different values for etcd members may disrupt cluster stability.
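The sizing rules above reduce to simple arithmetic. The helper below is only a sketch of that guidance for illustration; the multipliers and the 50s cap come from this document, not from etcd itself:
```go
package tuning

import "time"

// suggestTimeouts applies the guidance above: a heartbeat interval of roughly the
// member-to-member RTT, and an election timeout of at least 10x the RTT and at
// least 5x the heartbeat interval, capped at the 50s maximum.
func suggestTimeouts(rtt time.Duration) (heartbeat, election time.Duration) {
	heartbeat = rtt // the text above suggests 0.5x-1.5x the RTT

	election = 10 * rtt
	if min := 5 * heartbeat; election < min {
		election = min
	}
	if max := 50 * time.Second; election > max {
		election = max
	}
	return heartbeat, election
}
```
For the 10ms RTT example above this yields at least a 100ms election timeout; the chosen values are then passed to etcd in milliseconds, as shown next.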
You can override the default values on the command line:
```sh
# Command line arguments:
$ etcd -heartbeat-interval=100 -election-timeout=500
# Environment variables:
$ ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=500 etcd
```
The values are specified in milliseconds.
## Snapshots
etcd appends all key changes to a log file.
This log grows forever and is a complete linear history of every change made to the keys.
A complete history works well for lightly used clusters, but heavily used clusters would carry around a large log.
To avoid having a huge log, etcd makes periodic snapshots.
These snapshots provide a way for etcd to compact the log by saving the current state of the system and removing old logs.
### Snapshot Tuning
Creating snapshots can be expensive so they're only created after a given number of changes to etcd.
By default, snapshots will be made after every 10,000 changes.
If etcd's memory usage and disk usage are too high, you can lower the snapshot threshold by setting the following on the command line:
```sh
# Command line arguments:
$ etcd -snapshot-count=5000
# Environment variables:
$ ETCD_SNAPSHOT_COUNT=5000 etcd
```
[ping]: https://en.wikipedia.org/wiki/Ping_(networking_utility)

210
Godeps/Godeps.json generated
View File

@@ -1,210 +0,0 @@
{
"ImportPath": "github.com/coreos/etcd",
"GoVersion": "go1.5.1",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "bitbucket.org/ww/goautoneg",
"Comment": "null-5",
"Rev": "75cd24fc2f2c2a2088577d12123ddee5f54e0675"
},
{
"ImportPath": "github.com/akrennmair/gopcap",
"Rev": "00e11033259acb75598ba416495bb708d864a010"
},
{
"ImportPath": "github.com/beorn7/perks/quantile",
"Rev": "b965b613227fddccbfffe13eae360ed3fa822f8d"
},
{
"ImportPath": "github.com/bgentry/speakeasy",
"Rev": "36e9cfdd690967f4f690c6edcc9ffacd006014a0"
},
{
"ImportPath": "github.com/boltdb/bolt",
"Comment": "v1.1.0-81-g0fd4c05",
"Rev": "0fd4c0547d204c7b1cad6db6f3adad5f2cf453e5"
},
{
"ImportPath": "github.com/cheggaaa/pb",
"Rev": "da1f27ad1d9509b16f65f52fd9d8138b0f2dc7b2"
},
{
"ImportPath": "github.com/codegangsta/cli",
"Comment": "1.2.0-183-gb5232bb",
"Rev": "b5232bb2934f606f9f27a1305f1eea224e8e8b88"
},
{
"ImportPath": "github.com/coreos/gexpect",
"Rev": "5173270e159f5aa8fbc999dc7e3dcb50f4098a69"
},
{
"ImportPath": "github.com/coreos/go-semver/semver",
"Rev": "568e959cd89871e61434c1143528d9162da89ef2"
},
{
"ImportPath": "github.com/coreos/go-systemd/daemon",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/go-systemd/journal",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/go-systemd/util",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/pkg/capnslog",
"Rev": "2c77715c4df99b5420ffcae14ead08f52104065d"
},
{
"ImportPath": "github.com/cpuguy83/go-md2man/md2man",
"Comment": "v1.0.4",
"Rev": "71acacd42f85e5e82f70a55327789582a5200a90"
},
{
"ImportPath": "github.com/gogo/protobuf/proto",
"Comment": "v0.1-118-ge8904f5",
"Rev": "e8904f58e872a473a5b91bc9bf3377d223555263"
},
{
"ImportPath": "github.com/golang/glog",
"Rev": "44145f04b68cf362d9c4df2182967c2275eaefed"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3"
},
{
"ImportPath": "github.com/google/btree",
"Rev": "cc6329d4279e3f025a53a83c397d2339b5705c45"
},
{
"ImportPath": "github.com/inconshreveable/mousetrap",
"Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
},
{
"ImportPath": "github.com/jonboulle/clockwork",
"Rev": "72f9bd7c4e0c2a40055ab3d0f09654f730cce982"
},
{
"ImportPath": "github.com/kballard/go-shellquote",
"Rev": "d8ec1a69a250a17bb0e419c386eac1f3711dc142"
},
{
"ImportPath": "github.com/kr/pty",
"Comment": "release.r56-29-gf7ee69f",
"Rev": "f7ee69f31298ecbe5d2b349c711e2547a617d398"
},
{
"ImportPath": "github.com/mattn/go-runewidth",
"Comment": "travisish-46-gd6bea18",
"Rev": "d6bea18f789704b5f83375793155289da36a3c7f"
},
{
"ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"Rev": "fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a"
},
{
"ImportPath": "github.com/olekukonko/tablewriter",
"Rev": "cca8bbc0798408af109aaaa239cbd2634846b340"
},
{
"ImportPath": "github.com/olekukonko/ts",
"Rev": "ecf753e7c962639ab5a1fb46f7da627d4c0a04b8"
},
{
"ImportPath": "github.com/prometheus/client_golang/prometheus",
"Comment": "0.7.0-52-ge51041b",
"Rev": "e51041b3fa41cece0dca035740ba6411905be473"
},
{
"ImportPath": "github.com/prometheus/client_model/go",
"Comment": "model-0.0.2-12-gfa8ad6f",
"Rev": "fa8ad6fec33561be4280a8f0514318c79d7f6cb6"
},
{
"ImportPath": "github.com/prometheus/common/expfmt",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/common/model",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/procfs",
"Rev": "454a56f35412459b5e684fd5ec0f9211b94f002a"
},
{
"ImportPath": "github.com/russross/blackfriday",
"Comment": "v1.4-2-g300106c",
"Rev": "300106c228d52c8941d4b3de6054a6062a86dda3"
},
{
"ImportPath": "github.com/shurcooL/sanitized_anchor_name",
"Rev": "10ef21a441db47d8b13ebcc5fd2310f636973c77"
},
{
"ImportPath": "github.com/spacejam/loghisto",
"Rev": "323309774dec8b7430187e46cd0793974ccca04a"
},
{
"ImportPath": "github.com/spf13/cobra",
"Rev": "1c44ec8d3f1552cac48999f9306da23c4d8a288b"
},
{
"ImportPath": "github.com/spf13/pflag",
"Rev": "08b1a584251b5b62f458943640fc8ebd4d50aaa5"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Rev": "9cc77fa25329013ce07362c7742952ff887361f2"
},
{
"ImportPath": "github.com/ugorji/go/codec",
"Rev": "f1f1a805ed361a0e078bb537e4ea78cd37dcf065"
},
{
"ImportPath": "github.com/xiang90/probing",
"Rev": "6a0cc1ae81b4cc11db5e491e030e4b98fba79c19"
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/net/context",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/http2",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/internal/timeseries",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/trace",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "9c60d1c508f5134d1ca726b4641db998f2523357"
},
{
"ImportPath": "google.golang.org/grpc",
"Rev": "b88c12e7caf74af3928de99a864aaa9916fa5aad"
}
]
}

2
Godeps/_workspace/.gitignore generated vendored
View File

@@ -1,2 +0,0 @@
/pkg
/bin

Some files were not shown because too many files have changed in this diff.