Compare commits

...

470 Commits

Author SHA1 Message Date
Sam Batschelet 973882f697 version: 3.3.27
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-10-15 08:32:31 -04:00
Piotr Tabor e82c2fd178
Merge pull request #13386 from hexfusion/cp-13376-release-3.3
[release-3.3] Dockerfile: bump debian bullseye-20210927
2021-10-04 08:40:11 +02:00
Sam Batschelet 24801f5c27 Dockerfile: bump debian bullseye-20210927
fixes: CVE-2021-3711, CVE-2021-35942, CVE-2019-9893

Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-10-04 00:48:07 -04:00
Sam Batschelet 984d71c8f4 version: 3.3.26
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-10-03 23:48:44 -04:00
Piotr Tabor 9530a81d62
Merge pull request #12552 from kolyshkin/3.3-fix-lock
[3.3 backport] pkg/fileutil: fix constant for linux locking
2021-01-16 22:16:46 +01:00
Moritz Both ec81adb216 pkg/fileutil: fix constant for linux locking
The constant F_OFD_GETLK is 36, not 37, according to
/usr/include/bits/fcntl-linux.h.
Credits go to joakim-tjernlund, who dug deep enough
to find this.

Fixes #31182
2020-12-14 10:53:41 -08:00
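For reference, a minimal Go sketch of the corrected open file description (OFD) lock commands; the values follow fcntl-linux.h, and the real definitions live in pkg/fileutil:

```go
package fileutil

// OFD lock commands, per /usr/include/bits/fcntl-linux.h.
const (
	F_OFD_GETLK  = 36 // previously mis-defined as 37, which is F_OFD_SETLK
	F_OFD_SETLK  = 37
	F_OFD_SETLKW = 38
)
```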
Jingyi Hu 7d1277644e
Merge pull request #12357 from cfc4n/automated-cherry-pick-of-#12264-upstream-release-3.3
Automated cherry pick of #12264
2020-11-17 01:58:10 +08:00
CFC4N c54c59d339
clientv3: get AuthToken automatically when clientConn is ready.
fixes: #11954
2020-09-30 17:17:46 +08:00
Gyuho Lee 2c834459e1 version: 3.3.25
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-24 12:33:27 -07:00
Gyuho Lee 43d6162d3f
Merge pull request #12246 from SVilgelm/fix-import-path
Fix import path to fileutils in listener
2020-08-24 12:31:29 -07:00
Gyuho Lee d01dda54dd
Merge pull request #12251 from spzala/automated-cherry-pick-of-#12242-upstream-release-3.3
Automated cherry pick of #12242
2020-08-24 12:30:42 -07:00
Sahdev P. Zala 864d9f4127 pkg: file stat warning
Provide a warning and documentation instead of enforcing file permissions.
2020-08-24 11:32:31 -04:00
Sergey Vilgelm 386ebbb704
Fix import path to fileutils in listener
transport/listener: change the import path of fileutil

Version 3.3 still uses the github.com/coreos/etcd prefix, but the transport/listener package
used the go.etcd.io/etcd path prefix.
2020-08-22 07:27:15 -05:00
Gyuho Lee bdd57848dc scripts/release: logging release version
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 11:45:15 -07:00
Gyuho Lee fd9a5b0be5 go.mod/sum: delete temporarily
Release version name is being overwritten by the scripts...

Will add back after release.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 11:44:47 -07:00
Gyuho Lee f9e5264765 version: v3.3.24
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 09:32:00 -07:00
Gyuho Lee f78bdce575
Merge pull request #12215 from wenjiaswe/automated-cherry-pick-of-#12106-upstream-release-3.3
Automated cherry pick of #12106
2020-08-13 21:37:14 -07:00
Yuchen Zhou cc5cc3ae40 etcdserver: change protobuf field type from int to int64 (#12000) 2020-08-13 15:55:41 -07:00
Gyuho Lee 5bc8f1650c etcdserver: add OS level FD metrics
Similar counts are exposed via Prometheus.
This adds the ones that are perceived by the etcd server.

e.g.

os_fd_limit 120000
os_fd_used 14
process_cpu_seconds_total 0.31
process_max_fds 120000
process_open_fds 17

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-12 18:40:03 -07:00
Gyuho Lee 0bed5fffd4 pkg/runtime: optimize FDUsage by removing sort
No need to sort when we just want the counts.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-12 18:39:10 -07:00
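A minimal sketch of the optimization, assuming the counting happens over /proc/self/fd as in pkg/runtime (the function name is illustrative):

```go
package runtime

import "os"

// countFDs reports the number of open file descriptors by counting entries
// under /proc/self/fd. Readdirnames returns names unsorted, which is all a
// count needs; sorting the entries first was pure overhead.
func countFDs() (uint64, error) {
	f, err := os.Open("/proc/self/fd")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	names, err := f.Readdirnames(-1)
	if err != nil {
		return 0, err
	}
	return uint64(len(names)), nil
}
```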
Gyuho Lee 4873f5516b version: add "3.3.23"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-07-16 15:15:48 -07:00
Sahdev Zala b16bfbed53
Merge pull request #12128 from spzala/automated-cherry-pick-of-#12012-upstream-release-3.3
Automated cherry pick of #12012
2020-07-13 10:53:04 -04:00
Hitoshi Mitake 604be01b61 Documentation: note on data encryption 2020-07-13 09:51:28 -04:00
Gyuho Lee bfc2267eba
Merge pull request #12113 from spzala/automated-cherry-pick-of-#12018-upstream-release-3.3
Automated cherry pick of #12018
2020-07-07 10:32:07 -07:00
Sahdev P. Zala ac37d3499e pkg: consider umask when use MkdirAll
os.MkdirAll applies the process umask when creating directories, so make
sure that the desired permission is set after creating a directory with
MkdirAll. Use the existing TouchDirAll function, which checks the permission
when the dir already exists and when creating a new one.
2020-07-07 12:02:55 -04:00
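A minimal sketch of the idea, not the exact upstream patch (the helper name is illustrative):

```go
package fileutil

import "os"

const PrivateDirMode = 0700

// mkdirAllWithMode: os.MkdirAll creates directories with mode &^ umask, so
// the result can be more restrictive than requested, and if the dir already
// exists MkdirAll leaves its mode untouched. Re-applying the mode afterwards
// enforces the desired permission in both cases.
func mkdirAllWithMode(dir string) error {
	if err := os.MkdirAll(dir, PrivateDirMode); err != nil {
		return err
	}
	return os.Chmod(dir, PrivateDirMode)
}
```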
Gyuho Lee e542d1aed8
Merge pull request #12090 from tangcong/automated-cherry-pick-of-#11997-origin-release-3.3
Automated cherry pick of #11997
2020-07-06 13:00:48 -07:00
Gyuho Lee 140edf0dc6
Merge pull request #12104 from spzala/automated-cherry-pick-of-#12092-upstream-release-3.3
Automated cherry pick of #12092
2020-07-06 11:47:52 -07:00
Gyuho Lee 6c15e40dbd
Merge pull request #12057 from spzala/automated-cherry-pick-of-#11608-upstream-release-3.3
Automated cherry pick of #11608
2020-07-06 11:47:44 -07:00
Gyuho Lee 13f92b45d6
Merge pull request #12087 from spzala/automated-cherry-pick-of-#11807-upstream-release-3.3
Automated cherry pick of #11807
2020-07-06 11:47:36 -07:00
Sahdev Zala 1255e3f0c8
Update grpc_proxy.go
Use plog.Warningf instead of zap, which was only added in 3.4.
2020-07-05 12:31:58 -04:00
Hitoshi Mitake 4ae0875b34 etcdmain: let grpc proxy warn about insecure-skip-tls-verify 2020-07-05 12:10:07 -04:00
tangcong 44b0318929 pkg/fileutil: print desired file permission in error log 2020-06-29 10:00:23 +08:00
Sahdev P. Zala abd80f383e wal: fix panic when decoder not set
Handle the related panic and clarify doc.
2020-06-27 17:23:17 -04:00
Gyuho Lee 3076b616ab
Merge pull request #12075 from cfc4n/automated-cherry-pick-of-#11987-upstream-release-3.3
Automated cherry pick of #11987
2020-06-26 11:29:41 -07:00
Gyuho Lee c88a2c8cc1
Merge pull request #12078 from cfc4n/automated-cherry-pick-of-#11980-upstream-release-3.3
Automated cherry pick of #11980
2020-06-26 11:28:47 -07:00
Gyuho Lee 0b74a4dbdb
Merge pull request #12082 from spzala/automated-cherry-pick-of-#11945-upstream-release-3.3
Automated cherry pick of #11945
2020-06-26 11:28:28 -07:00
Gyuho Lee e959cda568
Merge pull request #12083 from spzala/automated-cherry-pick-of-#11793-upstream-release-3.3
Automated cherry pick of #11793
2020-06-26 11:28:17 -07:00
Sahdev P. Zala a3e242c085 Discovery: do not allow passing negative cluster size
When an etcd instance attempts to perform service discovery and a
cluster size with a negative value is provided, the etcd instance
will panic without recovery.
2020-06-26 14:04:51 -04:00
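A minimal sketch of the validation this describes (function name and error wording are illustrative):

```go
package etcdserver

import (
	"fmt"
	"strconv"
)

// parseClusterSize rejects non-positive sizes up front, instead of letting a
// negative value panic deeper in the discovery path.
func parseClusterSize(s string) (int, error) {
	size, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid cluster size %q: %v", s, err)
	}
	if size <= 0 {
		return 0, fmt.Errorf("invalid cluster size %d: must be positive", size)
	}
	return size, nil
}
```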
Gyuho Lee bccb40b7d9 wal: check out of range slice in "ReadAll", "decoder"
wal: add slice bound checks in decoder

CHANGELOG-3.5: add wal slice bound check
CHANGELOG-3.5: add "decodeRecord"

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-06-25 21:00:05 -04:00
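A minimal sketch of the kind of bound check added here, under the assumption that a record length decoded from the WAL is validated before slicing (names are illustrative):

```go
package wal

import "errors"

// errBadRecordLength surfaces corrupt input as an error instead of an
// out-of-range panic in ReadAll/decodeRecord.
var errBadRecordLength = errors.New("wal: record length out of range")

func sliceRecord(buf []byte, recLen int64) ([]byte, error) {
	if recLen < 0 || recLen > int64(len(buf)) {
		return nil, errBadRecordLength
	}
	return buf[:recLen], nil
}
```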
Changxin Miao 6be5c54c94 pkg: Fix dir permission check on Windows 2020-06-25 20:21:54 -04:00
cfc4n ba7ff1eea9 auth: Customize simpleTokenTTL settings.
see https://github.com/etcd-io/etcd/issues/11978 for more detail.
2020-06-25 20:17:49 +08:00
cfc4n 8c885ad9a9 mvcc: a chanLen of 1024 is too big and uses more memory; 128 seems to be enough. Sometimes the consumption speed exceeds the production speed.
See https://github.com/etcd-io/etcd/issues/11906 for more detail.
2020-06-25 19:51:51 +08:00
Gyuho Lee cdc1c8f02f
Merge pull request #12050 from spzala/automated-cherry-pick-of-#11845-upstream-release-3.3
Automated cherry pick of #11845
2020-06-24 20:42:14 -07:00
Gyuho Lee 94857c925a
Merge pull request #12052 from spzala/automated-cherry-pick-of-#11830-upstream-release-3.3
Automated cherry pick of #11830
2020-06-24 20:42:06 -07:00
Gyuho Lee 56bf4c4779
Merge pull request #12053 from spzala/automated-cherry-pick-of-#11841-upstream-release-3.3
Automated cherry pick of #11841
2020-06-24 20:41:58 -07:00
Gyuho Lee 2e601c4611
Merge pull request #12058 from spzala/automated-cherry-pick-of-#11818-upstream-release-3.3
Automated cherry pick of #11818
2020-06-24 20:41:21 -07:00
Gyuho Lee 6992211021
Merge pull request #12059 from spzala/automated-cherry-pick-of-#11787-upstream-release-3.3
Automated cherry pick of #11787
2020-06-24 20:41:12 -07:00
Gyuho Lee 829f484165
Merge pull request #12063 from cfc4n/automated-cherry-pick-of-#11986-upstream-release-3.3
Automated cherry pick of #11986
2020-06-24 20:40:45 -07:00
Gyuho Lee 05f5b69673
Merge pull request #12067 from cfc4n/automated-cherry-pick-of-#12005-upstream-release-3.3
Automated cherry pick of #12005
2020-06-24 20:40:13 -07:00
Gyuho Lee d18eeef0e7
Merge pull request #12069 from cfc4n/release-3.3
go.mod: fix incorrect package dependency when etcd clientv3 is used as a library.
2020-06-24 20:40:02 -07:00
Gyuho Lee 1a79fe3758
Merge pull request #12071 from spzala/automated-cherry-pick-of-#12060-upstream-release-3.3
Automated cherry pick of #12060
2020-06-24 20:39:25 -07:00
Gyuho Lee 599beaee41
Merge pull request #12073 from spzala/automated-cherry-pick-of-#11798-upstream-release-3.3
Automated cherry pick of #11798
2020-06-24 20:39:00 -07:00
Sahdev P. Zala bde76af5fa pkg: check file stats
modify file util.
2020-06-24 21:28:16 -04:00
Xiang Li b85fc84c26 doc: add TLS related warnings 2020-06-24 16:41:53 -04:00
CFC4N c3780bb216 go.mod: fix incorrect package dependency when etcd clientv3 is used as a library.
Fixes: https://github.com/etcd-io/etcd/issues/12068
2020-06-24 21:45:06 +08:00
cfc4n 999df4e5a1 auth: fix returning the incorrect result 'ErrUserNotFound' when a client request comes without a username or with an empty username.
Fixes https://github.com/etcd-io/etcd/issues/12004.
2020-06-24 19:10:51 +08:00
cfc4n c4db372810 etcdserver: FDUsage: set the ticker to 10 minutes instead of 5 seconds. This ticker checks file descriptor requirements by counting all fds in use, and logs a message when usage reaches limit/5*4 (it only logs a message). With more than 10K fds open, FDUsage() is slow enough to hurt performance, so the interval needs to be increased.
See https://github.com/etcd-io/etcd/issues/11969 for more detail.
2020-06-24 13:21:30 +08:00
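A minimal sketch of the adjusted monitor loop; fdLimit and fdUsage are hypothetical stand-ins for the real helpers:

```go
package etcdserver

import (
	"log"
	"time"
)

// monitorFDUsage samples every 10 minutes (was 5s, which is expensive with
// >10K open fds) and warns once usage reaches 80% of the limit
// (used >= limit/5*4).
func monitorFDUsage(stopc <-chan struct{}, fdLimit, fdUsage func() uint64) {
	ticker := time.NewTicker(10 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			limit, used := fdLimit(), fdUsage()
			if used >= limit/5*4 {
				log.Printf("80%% of the file descriptor limit is used [used = %d, limit = %d]", used, limit)
			}
		case <-stopc:
			return
		}
	}
}
```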
Sahdev P. Zala 64f8b86e0d embed: fix compaction runtime err
Handle negative value input which currently gives a runtime error.
2020-06-23 14:47:58 -04:00
Hitoshi Mitake 585814082b etcdserver: don't let InternalAuthenticateRequest have password 2020-06-23 14:16:44 -04:00
Hitoshi Mitake c511894ee5
Merge pull request #12051 from spzala/automated-cherry-pick-of-#11796-upstream-release-3.3
Automated cherry pick of #11796
2020-06-23 23:21:45 +09:00
Hitoshi Mitake a89c2512ea etcdctl, etcdmain: warn about --insecure-skip-tls-verify options 2020-06-22 19:53:45 -04:00
Hitoshi Mitake 9e00f6f37f Documentation: note on the policy of insecure by default 2020-06-22 19:51:04 -04:00
Hitoshi Mitake da1d42d111 Documentation: note on password strength 2020-06-22 19:48:51 -04:00
Xiang Li f6b822dfe8 etcdmain: best effort detection of self pointing in tcp proxy 2020-06-22 19:39:34 -04:00
Gyuho Lee 3bf09a5859
Merge pull request #11758 from jingyih/automated-cherry-pick-of-#11754-upstream-release-3.3
Automated cherry pick of #11754 on release-3.3
2020-06-21 23:21:55 -07:00
Gyuho Lee 282cce72fd version: 3.3.22
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-05-20 15:42:36 -07:00
tangcong a9d14cbb64 wal: add TestValidSnapshotEntriesAfterPurgeWal testcase
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-05-20 15:08:10 -07:00
tangcong 8ce10ea4a5 wal: fix crc mismatch crash bug 2020-05-20 11:39:00 -07:00
Gyuho Lee 669285f515 rafthttp: log snapshot downloads
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-05-20 11:01:13 -07:00
Gyuho Lee 1205851db7 version: 3.3.21
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-05-18 11:30:17 -07:00
Gyuho Lee 672314546b rafthttp: improve snapshot logging
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-05-18 11:30:01 -07:00
Gyuho Lee 924b8128c2 *: make sure snapshot save downloads SHA256 checksum
ref. https://github.com/etcd-io/etcd/pull/11896

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-05-18 02:27:01 -07:00
Gyuho Lee 9caec0d124 etcdserver,wal: fix inconsistencies in WAL and snapshot
ref. https://github.com/etcd-io/etcd/issues/10219

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-05-18 02:26:57 -07:00
Gyuho Lee 23337471d7
Merge pull request #11856 from tangcong/automated-cherry-pick-of-#11817-origin-release-3.3
Automated cherry pick of #11817 on release-3.3
2020-05-07 20:00:11 -07:00
tangcong 5f799922a8 mvcc: fix deadlock bug 2020-05-08 10:12:36 +08:00
Changxin Miao 1b5e2f4305
Update grpc-gateway to 1.3.1 (#11843) 2020-05-06 15:32:08 -07:00
Jingyi Hu 7e20b9ff91
Merge pull request #11753 from tangcong/automated-cherry-pick-of-#11652-#11670-#11710-origin-release-3.3
Automated cherry pick of #11652 #11670 #11710
2020-04-10 23:21:34 +08:00
Changxin Miao 8781e1d44c etcdserver: watch stream got closed once one request is not permitted (#11708) 2020-04-06 07:09:15 -07:00
tangcong 294e714489 *: fix cherry-pick conflict 2020-04-06 10:47:14 +08:00
tangcong 64fc4cc244 auth: ensure RoleGrantPermission is compatible with older versions 2020-04-06 09:20:52 +08:00
tangcong 27dffc6d01 etcdserver: print warn log when failed to apply request 2020-04-06 09:20:45 +08:00
tangcong acd9422459 auth: cleanup saveConsistentIndex in NewAuthStore 2020-04-06 09:16:58 +08:00
tangcong e7291a1dab auth: print warning log when error is ErrAuthOldRevision 2020-04-06 09:16:58 +08:00
shawwang 06a2f816e9 auth: add new metric 'etcd_debugging_auth_revision' 2020-04-06 09:16:38 +08:00
tangcong 140bf5321d *: fix auth revision corruption bug 2020-04-06 09:16:06 +08:00
Gyuho Lee 9fd7e2b802 version: 3.3.20
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-04-01 10:49:03 -07:00
Gyuho Lee 1aa5da9121 wal: add "etcd_wal_writes_bytes_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-04-01 10:49:00 -07:00
Gyuho Lee 89ecd19414 pkg/ioutil: add "FlushN"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-04-01 10:36:03 -07:00
Gyuho Lee 67da93f739 version: 3.3.19
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:23:32 -07:00
Gyuho Lee a463bd54ae words: whitelist "racey"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:23:16 -07:00
Gyuho Lee cd200b49a2 Revert "version: 3.3.19"
This reverts commit acb9746d66.
2020-03-18 17:22:12 -07:00
Gyuho Lee 508808010c travis.yaml: use Go 1.12.12
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:21:49 -07:00
Gyuho Lee acb9746d66 version: 3.3.19
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:18:46 -07:00
Gyuho Lee 07562e235c Revert "version: 3.3.19"
This reverts commit 3f6b978b0496080e8067e0d2d1270134a9a51ef8.
2020-03-18 17:18:34 -07:00
Gyuho Lee 10d50e0662 words: whitelist "hasleader"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:18:31 -07:00
Gyuho Lee f9c89209f3 version: 3.3.19
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:18:31 -07:00
Gyuho Lee d9027cecf2 etcdserver/api/v3rpc: handle api version metadata, add metrics
ref.
https://github.com/etcd-io/etcd/pull/11687

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:18:31 -07:00
Gyuho Lee 6f7ee076ea clientv3: embed api version in metadata
ref.
https://github.com/etcd-io/etcd/pull/11687

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>

clientv3: fix racy writes to context key

=== RUN   TestWatchOverlapContextCancel

==================

WARNING: DATA RACE

Write at 0x00c42110dd40 by goroutine 99:

  runtime.mapassign()

      /usr/local/go/src/runtime/hashmap.go:485 +0x0

  github.com/coreos/etcd/clientv3.metadataSet()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/ctx.go:61 +0x8c

  github.com/coreos/etcd/clientv3.withVersion()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/ctx.go:47 +0x137

  github.com/coreos/etcd/clientv3.newStreamClientInterceptor.func1()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/client.go:309 +0x81

  google.golang.org/grpc.NewClientStream()

      /go/src/github.com/coreos/etcd/gopath/src/google.golang.org/grpc/stream.go:101 +0x10e

  github.com/coreos/etcd/etcdserver/etcdserverpb.(*watchClient).Watch()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/etcdserver/etcdserverpb/rpc.pb.go:3193 +0xe9

  github.com/coreos/etcd/clientv3.(*watchGrpcStream).openWatchClient()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:788 +0x143

  github.com/coreos/etcd/clientv3.(*watchGrpcStream).newWatchClient()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:700 +0x5c3

  github.com/coreos/etcd/clientv3.(*watchGrpcStream).run()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:431 +0x12b

Previous read at 0x00c42110dd40 by goroutine 130:

  reflect.maplen()

      /usr/local/go/src/runtime/hashmap.go:1165 +0x0

  reflect.Value.MapKeys()

      /usr/local/go/src/reflect/value.go:1090 +0x43b

  fmt.(*pp).printValue()

      /usr/local/go/src/fmt/print.go:741 +0x1885

  fmt.(*pp).printArg()

      /usr/local/go/src/fmt/print.go:682 +0x1b1

  fmt.(*pp).doPrintf()

      /usr/local/go/src/fmt/print.go:998 +0x1cad

  fmt.Sprintf()

      /usr/local/go/src/fmt/print.go:196 +0x77

  github.com/coreos/etcd/clientv3.streamKeyFromCtx()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:825 +0xc8

  github.com/coreos/etcd/clientv3.(*watcher).Watch()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:265 +0x426

  github.com/coreos/etcd/clientv3/integration.testWatchOverlapContextCancel.func1()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:959 +0x23e

Goroutine 99 (running) created at:

  github.com/coreos/etcd/clientv3.(*watcher).newWatcherGrpcStream()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:236 +0x59d

  github.com/coreos/etcd/clientv3.(*watcher).Watch()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/watch.go:278 +0xbb6

  github.com/coreos/etcd/clientv3/integration.testWatchOverlapContextCancel.func1()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:959 +0x23e

Goroutine 130 (running) created at:

  github.com/coreos/etcd/clientv3/integration.testWatchOverlapContextCancel()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:979 +0x76d

  github.com/coreos/etcd/clientv3/integration.TestWatchOverlapContextCancel()

      /go/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/clientv3/integration/watch_test.go:922 +0x44

  testing.tRunner()

      /usr/local/go/src/testing/testing.go:657 +0x107

==================

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 17:18:29 -07:00
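A minimal sketch of the race fix pattern, assuming the cure is copy-on-write for the outgoing gRPC metadata (the header key here is illustrative, not the real one):

```go
package clientv3

import (
	"context"

	"google.golang.org/grpc/metadata"
)

// withVersion copies the outgoing metadata instead of mutating a map that
// another goroutine may be reading (e.g. via fmt.Sprintf on the context).
func withVersion(ctx context.Context) context.Context {
	md, ok := metadata.FromOutgoingContext(ctx)
	if ok {
		md = md.Copy() // never write to the shared map in place
	} else {
		md = metadata.New(nil)
	}
	md.Set("client-api-version", "3.3")
	return metadata.NewOutgoingContext(ctx, md)
}
```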
Gyuho Lee 30aaceb1c3 etcdserver/api/etcdhttp: log server-side /health checks
ref.
https://github.com/etcd-io/etcd/pull/11704

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-03-18 16:28:18 -07:00
Sam Batschelet 1228d6c1e7 proxy/grpcproxy: add return on error for metrics handler
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2020-03-16 12:06:33 -04:00
Sahdev Zala eb1df6d9d2
Merge pull request #11665 from jingyih/automated-cherry-pick-of-#11638-upstream-release-3.3
Automated cherry pick of #11638 on release-3.3
2020-03-11 19:22:55 -04:00
jingyih c58133b2d4 etcdctl: fix member add command
Use the member information from the member add response, which is
guaranteed to be up to date.
2020-02-29 07:21:17 -08:00
Jingyi Hu e21e355d91
Merge pull request #11632 from jingyih/automated-cherry-pick-of-#11630-upstream-release-3.3
Automated cherry pick of #11630 to release-3.3
2020-02-16 08:35:50 +08:00
jingyih 7b1a92cb7c mvcc/backend: check for nil boltOpenOptions
Check if boltOpenOptions is nil before using it.
2020-02-15 00:27:08 -08:00
Wenjia b0a4038b79
Merge pull request #11623 from jpbetz/automated-cherry-pick-of-#11613-origin-release-3.3
Automated cherry pick of #11613 to release-3.3
2020-02-13 13:15:03 -08:00
Joe Betz b3d9e29096
mvcc/backend: Delete orphaned db.tmp files before defrag 2020-02-13 12:32:04 -08:00
Hitoshi Mitake 70853d60e7
Merge pull request #11378 from jingyih/automated-cherry-pick-of-#10218-#10468-upstream-release-3.3
Automated cherry pick of #10218 #10468 on release 3.3
2020-01-26 01:40:43 +09:00
Gyuho Lee 3c8740a793 version: 3.3.18
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-11-26 20:19:50 -08:00
Wenjia 9cd3eefd01
Merge pull request #11394 from jingyih/automated-cherry-pick-of-#11374-upstream-release-3.3
Automated cherry pick of #11374 on release 3.3
2019-11-26 15:02:17 -08:00
yoyinzyc 47d3dea2a9 mvcc: update to "etcd_debugging_mvcc_total_put_size_in_bytes" 2019-11-26 14:09:04 -08:00
yoyinzyc aaa85715c3 mvcc: add "etcd_mvcc_put_size_in_bytes" to monitor the throughput of put request. 2019-11-26 14:07:50 -08:00
Jingyi Hu e1508f94b6 integration: disable TestV3AuthOldRevConcurrent
Disable TestV3AuthOldRevConcurrent for now. See
https://github.com/etcd-io/etcd/pull/10468#issuecomment-463253361
2019-11-20 16:45:48 -08:00
Jingyi Hu 5a4821721e etcdserver: remove auth validation loop
Remove the auth validation loop in v3_server.raftRequest(). Re-validation
when the ErrAuthOldRevision error occurs should be handled on the client side.
2019-11-20 16:45:48 -08:00
Maxim Vladimirskiy 95095f8406 etcdserver: Remove infinite loop in doSerialize
Once chk(ai) fails with auth.ErrAuthOldRevision it always will, regardless
of how many times you retry. So the error is better returned, failing the
pending request and making the client re-authenticate.
2019-11-20 16:45:47 -08:00
Gyuho Lee 5cf80a6229 clientv3: fix retry/streamer error message
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-10-31 10:08:53 -07:00
Gyuho Lee 069bce1384
Merge pull request #11314 from jingyih/automated-cherry-pick-of-#11308-upstream-release-3.3
Automated cherry pick of #11308 on release-3.3
2019-10-31 10:08:08 -07:00
Jingyi Hu 7c164a8948 etcdserver: wait purge file loop during shutdown
To prevent the purge file loop from accidentally acquiring the file lock
and removing files during server shutdown.
2019-10-30 16:47:06 -07:00
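A minimal sketch of the shutdown ordering under this fix, with hypothetical names; the point is that shutdown waits for the purge goroutine to exit before proceeding:

```go
package etcdserver

import "sync"

// purgeRunner tracks the purge-file goroutine so shutdown can wait for it,
// preventing it from grabbing the file lock and deleting WAL/snapshot files
// while the server is going down.
type purgeRunner struct {
	wg    sync.WaitGroup
	stopc chan struct{}
}

func (p *purgeRunner) start(purge func(stop <-chan struct{})) {
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		purge(p.stopc)
	}()
}

func (p *purgeRunner) stop() {
	close(p.stopc)
	p.wg.Wait() // the purge loop has fully exited past this point
}
```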
Gyuho Lee ff5fb05bec scripts/release: list GPG key only when tagging is needed
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-10-23 11:13:05 -07:00
Gyuho Lee a977795f2d
Merge pull request #11253 from YoyinZyc/automated-cherry-pick-of-#11247-origin-release-3.3
Automated cherry pick of #11247
2019-10-18 10:30:16 -07:00
Gyuho Lee aedfe5458a
Merge pull request #11261 from wenjiaswe/automated-cherry-pick-of-#10257-upstream-release-3.3
cherry pick "etcd_cluster_version" metric" (#10257, #11233, #11254, #11265) to release-3.3
2019-10-17 12:40:08 -07:00
Wenjia Zhang e7888805e1 Add cluster version fix #11233, #11254, #11265 2019-10-16 13:27:07 -07:00
Gyuho Lee 7fbfdc2b6a tests/e2e: test cluster version
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-10-15 18:07:53 -07:00
Gyuho Lee 5c19bd24f0 etcdserver/*: add "etcd_cluster_version" metric
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-10-15 18:05:33 -07:00
Joe Betz 683a643fba Add version, tag and branch checks to release script 2019-10-14 12:57:35 -07:00
Gyuho Lee 660dc83e19
Merge pull request #11245 from YoyinZyc/prevent-darwin-build-3.3
scripts: avoid release builds on darwin machine.
2019-10-11 12:37:34 -07:00
yoyinzyc ffcddac2ff scripts: avoid release builds on darwin machine. 2019-10-11 11:27:57 -07:00
Joe Betz 6d8052314b
version: v3.3.17 2019-10-11 10:23:13 -07:00
Gyuho Lee 5e4d852e95
Merge pull request #11236 from YoyinZyc/change-git-clone
scripts: use https for git clone.
2019-10-10 22:52:46 -07:00
yoyinzyc 3827d6bd2d scripts: use https for git clone. 2019-10-10 16:46:37 -07:00
Joe Betz 3fae828623
vendor: v3.3.16 2019-10-10 10:59:40 -07:00
Gyuho Lee 011bd86bd6
Merge pull request #11196 from andyliuliming/release-3.3-cherry
etcdserver: cherry-pick skip client san verification option for 3.3 version.
2019-10-09 09:40:58 -07:00
Andy Liu a311a80699 helper document update. 2019-10-09 13:15:56 +08:00
Joe Betz ef61a56c0c
Merge pull request #11215 from jpbetz/automated-cherry-pick-of-#11184-origin-release-3.3
Automated cherry pick of #11184
2019-10-08 18:47:12 -07:00
Joe Betz a2f585d80c
clientv3: Set authority used in cert checks to host of endpoint 2019-10-08 18:25:08 -07:00
Joe Betz 7558b41ccd
Merge pull request #11216 from jpbetz/automated-cherry-pick-of-#11211-origin-release-3.3
Automated cherry pick of #11211
2019-10-08 17:16:01 -07:00
Joe Betz 4b227b6e71
clientv3: Replace endpoint.ParseHostPort with net.SplitHostPort to fix IPv6 client endpoints 2019-10-08 16:12:13 -07:00
Gyuho Lee 1be7ab4ee2
Merge pull request #11201 from jingyih/automated-cherry-pick-of-#11194-origin-release-3.3
Automated cherry pick of #11194 on release-3.3
2019-10-03 16:03:30 -07:00
Jingyi Hu 02a27c0851 etcdctl: fix member add command 2019-10-03 13:52:57 -07:00
Andy Liu d851911f86 etcdserver: add unit test. 2019-10-03 16:06:30 +08:00
Andy Liu 86b1686c7e etcdserver: cherry-pick skip client san verification option for 3.3 version.
Co-authored-by: Martin Weindel <martin.weindel@sap.com>
Co-authored-by: Jingyi Hu <jingyih@google.com>
Co-authored-by: Liming Liu <andyliuliming@outlook.com>
2019-10-03 10:12:22 +08:00
Jingyi Hu 943832af44
Merge pull request #11134 from jingyih/automated-cherry-pick-of-#11126-origin-release-3.3
Automated cherry pick of #11126 on release-3.3
2019-09-07 00:03:44 -07:00
Jingyi Hu 8a8efa73e6 mvcc: add store revision metrics
Add experimental metrics etcd_debugging_mvcc_current_revision and
etcd_debugging_mvcc_compact_revision.
2019-09-06 17:19:46 -07:00
Gyuho Lee a4f18a40b0
Merge pull request #11056 from jingyih/update_bbolt
vendor: update bbolt to v1.3.3
2019-08-20 09:08:39 -07:00
Gyuho Lee 7f067ceafd
Merge pull request #11055 from jingyih/fix_gofmt_bom
*: Fix gofmt bom
2019-08-19 22:39:08 -07:00
Jingyi Hu 9244d2ba86 vendor: update bbolt to v1.3.3 2019-08-19 20:55:55 -07:00
Jingyi Hu ffb43dff5b bom: regenerate 2019-08-19 20:35:44 -07:00
Jingyi Hu 74cf4ae9a2 scripts: fix updatebom.sh
Remove "./cmd/vendor".
2019-08-19 20:29:49 -07:00
Jingyi Hu 81fc7c23c2 *: fix gofmt 2019-08-19 20:22:15 -07:00
Gyuho Lee 94745a4eed version: 3.3.15
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-19 11:26:22 -07:00
Gyuho Lee e94188bc55 vendor: regenerate
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-19 11:26:16 -07:00
Gyuho Lee aa1e17aac3 go.mod: remove, change back to "glide"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-19 11:26:12 -07:00
Gyuho Lee 5cf5d88a18 version: 3.3.14
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-16 16:21:44 -07:00
Gyuho Lee af8cb6c5b9 Documentation/upgrades: special upgrade guides for >= 3.3.14
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-16 16:21:11 -07:00
Gyuho Lee 9dd98b7c90 version: 3.3.14-rc.0
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:03:43 -07:00
Gyuho Lee 2f3aa893ec vendor: regenerate
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:02:26 -07:00
Gyuho Lee d65219c1ef go.mod: regenerate
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:02:03 -07:00
Gyuho Lee b9c976eed8 gitignore: track vendor directory
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 15:00:46 -07:00
Gyuho Lee b196734290 *: test with Go 1.12.9
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-15 14:42:32 -07:00
Gyuho Lee 1aa4af83c0 version: 3.3.14-beta.0
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:52:26 -07:00
Gyuho Lee 95a5c57754 tests/e2e: add missing curl
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:31:53 -07:00
Gyuho Lee 082c5e0705 e2e: move
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:22:39 -07:00
Gyuho Lee 33668f4eff test: do not run "v2store" tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 11:12:46 -07:00
Gyuho Lee c7c09c61d0 test: bump up timeout for e2e tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:52:16 -07:00
Gyuho Lee 4f1e65418f travis: fix functional tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:40:16 -07:00
Gyuho Lee e16b21be7b functional: add back, travis
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 0e96b34d9f auth: fix tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 3c2b1cd76a travis: do not run functional for now
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 37d10dd8b8 travis: skip windows build
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 84508f7c98 test: fix repo path
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee be3babffb7 tests/e2e: fix
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 61065db065 build: remove tools
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 0ddda8c72e integration: fix tests
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee b889245252 integration: fix "HashKVRequest"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 6e37ece3b9 functional: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee f68fac655e travis.yml: fix, run e2e
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee dbfc7bd612 integration: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee e5c2dff346 etcdserver: detect leader change on reads
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:10 -07:00
Gyuho Lee 9561f6b3b6 clientv3: rewrite based on 3.4
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 09:32:06 -07:00
Gyuho Lee a317433854 raft: fix compile error in "Panic"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 04:05:07 -07:00
Gyuho Lee 7eb9a29e26 pkg/*: add
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 04:05:04 -07:00
Gyuho Lee 5a678bb4e3 etcdserver/api/v3rpc: support watch fragmentation
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:22:29 -07:00
Gyuho Lee 92a750432f tests: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:22:29 -07:00
Gyuho Lee d167714b36 *: regenerate proto
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:22:23 -07:00
Gyuho Lee 9f7294f1e0 etcdserver/etcdserverpb/rpc.proto: add watch progress/fragment
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:17:29 -07:00
Gyuho Lee 830bba337f vendor: regenerate, upgrade gRPC to 1.23.0
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:16:44 -07:00
Gyuho Lee 27cf72b231 go.mod: migrate to Go module
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:16:09 -07:00
Gyuho Lee d7fc66bcbb scripts: update release, genproto, dep
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:14:34 -07:00
Gyuho Lee cc1591aa4e Makefile/build: sync with 3.4 branch
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-14 01:13:22 -07:00
Gyuho Lee 08124105ad *: use new adt.IntervalTree interface
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:15:49 -07:00
Gyuho Lee ffe90b9ff3 pkg/adt: remove TODO
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:28 -07:00
xkey 036bd1ab09 pkg/adt: fix interval tree black-height property based on rbtree
Author: xkey <xk33430@ly.com>
ref. https://github.com/etcd-io/etcd/pull/10978

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:21 -07:00
Gyuho Lee 33e4877b56 pkg/adt: document textbook implementation with pseudo-code
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:15 -07:00
Gyuho Lee c25f746f77 pkg/adt: mask test failure, add TODO
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:02:07 -07:00
Gyuho Lee f4341fd35c pkg/adt: add "IntervalTree.Delete" failure case
Described in https://github.com/etcd-io/etcd/issues/10877.

"black-height" property: Every path from a node to any descendant leaf node must have the same number of black nodes.

Expected

    After deleting 11 (requires rebalancing):
                            [510,511]
                             /      \
                   ----------        --------------------------
                  /                                            \
              [383,384]                                       [830,831]
              /       \                                      /          \
             /         \                                    /            \
      [261,262](red)  [410,411]                     [647,648]           [899,900](red)
          /               \                              \                      /    \
         /                 \                              \                    /      \
      [82,83]           [292,293]                      [815,816](red)   [888,889]    [972,973]
            \                                                           /
             \                                                         /
          [238,239](red)                                       [953,954](red)

Got

    After deleting 11 (requires rebalancing):
                            [510,511]
                             /      \
                   ----------        --------------------------
                  /                                            \
              [82,83]                                       [830,831]
                    \                                      /          \
                     \                                    /            \
                  [383,384]                        [647,648]            [899,900]
                  /       \                              \                  /    \
                 /         \                              \                /      \
           [261,262]      [410,411]                      [815,816]   [888,889]    [972,973]
             /   \                                                                  /
            /     \                                                                /
     [238,239]   [292,293]                                                  [953,954]

This violates "black-height" property.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:58 -07:00
Gyuho Lee b3152365bb pkg/adt: test node "11" deletion
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:51 -07:00
Gyuho Lee d938435e44 pkg/adt: README "IntervalTree.Delete" test case images
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:43 -07:00
Gyuho Lee 594e7d6627 pkg/adt: README initial commit
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:35 -07:00
Gyuho Lee 266214d19e pkg/adt: add "visitLevel", make "IntervalTree" interface, more tests
Make "IntervalTree" an interface to abstract range tree interface

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 11:01:16 -07:00
Gyuho Lee 0b37ae05b1 pkg: clean up code format
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2019-08-09 11:00:44 -07:00
Gyuho Lee 3aef9a1a8f travis: update
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-09 10:57:38 -07:00
Gyuho Lee 4527f4c4b0 etcdserver: add "etcd_server_snapshot_apply_inflights_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-08 15:13:14 -07:00
Gyuho Lee 1c8fab7365 etcdserver/api: add "etcd_network_snapshot_send_inflights_total", "etcd_network_snapshot_receive_inflights_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-08-08 15:12:08 -07:00
Gyuho Lee 789ff21b18
Merge pull request #10570 from sbenderli/cherry-pick-of-#8334
raft: cherry pick of #8334 to release-3.3
2019-07-23 11:42:42 -07:00
Gyuho Lee d12f13279f
Merge pull request #10827 from yznima/pr-race-3.3
Raft HTTP: fix pause/resume race condition
2019-07-23 10:59:02 -07:00
Nima Yahyazadeh 9f1d6ca1c9 Raft HTTP: fix pause/resume race condition
(cherry picked from commit b1812a410f)
2019-06-17 13:33:27 -04:00
Gyuho Lee 5832014353
Merge pull request #10793 from jingyih/automated-cherry-pick-of-#10788-origin-release-3.3
Automated cherry pick of #10788 on release-3.3
2019-06-05 14:39:55 -07:00
Jingyi Hu d005486359 ctlv3: add missing newline in EndpointHealth
To make the output consistent with the output before #9540.
2019-06-05 14:36:57 -07:00
Gyuho Lee 89429703db
Merge pull request #10782 from jingyih/cherrypick_9540_to_release3p3
ctlv3: cherry pick of #9540 to release 3.3
2019-06-04 09:55:19 -07:00
Gyuho Lee f835a85965 ctlv3: support "write-out" for "endpoint health" command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2019-06-03 17:01:54 -07:00
Gyuho Lee b0babe5d1e
Merge pull request #10718 from rohitsardesai83/release-3.3
etcd: Replace ghodss/yaml with sigs.k8s.io/yaml in 3.3
2019-05-29 13:47:56 -07:00
Rohit Sardesai 8ed3e70d7c etcd: Replace ghodss/yaml with sigs.k8s.io/yaml 2019-05-29 23:03:16 +05:30
Gyuho Lee 98d3084268 version: bump up 3.3.13
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-05-02 10:22:46 -07:00
Gyuho Lee b7001c05bc clientv3: fix race condition in "Endpoints" methods
From https://github.com/etcd-io/etcd/pull/10595.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-05-02 10:17:58 -07:00
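A minimal sketch of the fix pattern from #10595, assuming a mutex-guarded endpoint list that hands out copies (type and method names are illustrative):

```go
package clientv3

import "sync"

// endpoints guards the endpoint list so SetEndpoints cannot race with
// concurrent readers.
type endpoints struct {
	mu  sync.RWMutex
	eps []string
}

func (e *endpoints) Set(eps ...string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.eps = eps
}

func (e *endpoints) Get() []string {
	e.mu.RLock()
	defer e.mu.RUnlock()
	out := make([]string, len(e.eps))
	copy(out, e.eps) // hand out a copy so callers cannot mutate shared state
	return out
}
```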
Gyuho Lee f179d4d6a3 etcdserver: improve heartbeat send failures logging
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-05-02 10:02:28 -07:00
Luc Perkins c46aa44143
Documentation metadata for 3.3 branch (#10692)
* Update Documentation folder

Signed-off-by: lucperkins <lucperkins@gmail.com>

* Re-add README file

Signed-off-by: lucperkins <lucperkins@gmail.com>
2019-04-30 14:03:05 -07:00
Davanum Srinivas ad7c2cddb0 vendor: add missing files
Change-Id: I53b30e9317de6cd058833d743bc88c46686cea20
2019-04-25 15:45:49 -04:00
Davanum Srinivas 6499c14cb6 vendor: Run scripts/updatedeps.sh to cleanup unused code 2019-04-25 15:45:49 -04:00
Davanum Srinivas 6e91e3559c client: Switch to case sensitive unmarshalling to be compatible with ugorji
Using lessons learned from k8s changes:
https://github.com/kubernetes/kubernetes/pull/65034

Change-Id: Ia17a8f94ae6ed00c5af2595c2b48d3c9a0344427
2019-04-25 15:45:49 -04:00
Davanum Srinivas 7ff7e0aadd *: update bill-of-materials
Change-Id: Ibfa24e28cacd58388f7606a945c8ac35e1c34580
2019-04-25 15:45:49 -04:00
Davanum Srinivas 02ccf2013d vendor: Add json-iterator and its dependencies
Change-Id: I1f3fc00f95efadd6da9b4c248156f8460ae0ff97
2019-04-25 15:45:49 -04:00
Davanum Srinivas 20bd0c064c scripts: Remove generated code and script
Change-Id: Iac4601443bcad71920fd96b97bfe21c16116577a
2019-04-25 15:45:49 -04:00
Davanum Srinivas 69e0daf809 client: Replace ugorji/codec with json-iterator/go
We need to use the stdlib-compatible one that is case-sensitive, etc.

Change-Id: Id0df573a70e09967ac7d8c0a63d99d6a49ce82f1
2019-04-25 15:45:49 -04:00
Joe Betz 5f4a45596e
Merge pull request #10656 from jpbetz/automated-cherry-pick-of-#10646-release-3.3
Automated cherry pick of #10646
2019-04-18 14:10:02 -07:00
Yingnan Zhang 38bf1bdbe0
mvcc: fix db_compaction_total_duration_milliseconds 2019-04-17 16:31:06 -07:00
Shreyas Rao e206a8b495 wal: Add test for Verify
Signed-off-by: Shreyas Rao <shreyas.sriganesh.rao@sap.com>
2019-04-12 06:56:08 -04:00
shreyas-s-rao cf4836fb2c wal: add Verify function to perform corruption check on wal contents
Signed-off-by: Shreyas Rao <shreyas.sriganesh.rao@sap.com>
2019-04-12 06:56:08 -04:00
Sam Batschelet 43386ac29b *: Change gRPC proxy to expose etcd server endpoint /metrics
This PR resolves an issue where the `/metrics` endpoints exposed by the proxy were not returning metrics of the etcd member servers but of the proxy itself.

Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-04-11 17:07:40 -04:00
Sam Batschelet 332e995ccd travis: fix tests by using proper code path
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-04-11 16:19:36 -04:00
Gyuho Lee ad5e169dcf
Merge pull request #10597 from purpleidea/3.3/fatal-corruption
etcdserver: Use panic instead of fatal on no space left error
2019-03-29 14:54:46 -07:00
James Shubin 7814718c73 etcdserver: Use panic instead of fatal on no space left error
When using the embed package to embed etcd, sometimes the storage prefix
being used might be full. In this case, this code path triggers, causing
an `etcdserver: create wal error: no space left on device` error, which
causes a fatal. A fatal differs from a panic in that it also calls
os.Exit(1). In this situation, the calling program that embeds the etcd
server will be abruptly killed, which prevents it from cleaning up
safely and giving a proper error message. Depending on what the calling
program is, this can cause corruption and data loss.

This patch switches the fatal to a panic. Ideally this would be a
regular error which would get propagated upwards to the StartEtcd
command, but in the meantime at least this can be caught with recover().

This fixes the most common fatal that I've experienced, but there are
surely more that need looking into. If possible, the errors should be
threaded down into the code path so that embedding etcd can be more
robust.

Fixes: https://github.com/etcd-io/etcd/issues/10588

This is a cherry-picked version of upstream: 368f70a37c
2019-03-29 17:45:48 -04:00
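A minimal sketch of why this matters to embedders; startEtcdOrPanic is a hypothetical stand-in for the start path that now panics instead of calling a fatal logger:

```go
package main

import "fmt"

// startEtcdOrPanic stands in for the embedded start path that now panics
// (rather than logging fatal, i.e. os.Exit(1)) on "no space left on device".
func startEtcdOrPanic() {
	panic("etcdserver: create wal error: no space left on device")
}

// runEmbedded shows the difference: a panic can be recovered for cleanup and
// a proper error message; os.Exit(1) cannot.
func runEmbedded() (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("embedded etcd failed: %v", r)
		}
	}()
	startEtcdOrPanic()
	return nil
}

func main() {
	if err := runEmbedded(); err != nil {
		fmt.Println("shutting down cleanly:", err)
	}
}
```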
shawnli ec22eb908a raft: cherry pick of #8334 to release-3.3 2019-03-21 16:02:30 -04:00
Gyuho Lee c6964428ff travis.yml: update Go 1.10.8
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-02-07 10:45:15 -08:00
Gyuho Lee d57e8b8d97 version: 3.3.12
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-02-07 10:41:58 -08:00
Iskander Sharipov e634184dc6 etcdctl: fix strings.HasPrefix args order
Signed-off-by: Iskander Sharipov <quasilyte@gmail.com>
2019-02-07 10:41:44 -08:00
Gyuho Lee 410a879601 version: 3.3.11+git
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-02-07 10:41:33 -08:00
Gyuho Lee 2cf9e51d2a version: 3.3.11
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2019-01-11 11:12:25 -08:00
Sam Batschelet 15903736d5 auth: fix cherry-pick
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-01-09 13:10:32 -05:00
Sam Batschelet c7f744d6d3 auth: disable CommonName auth for gRPC-gateway
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2019-01-08 21:01:25 +00:00
Gyuho Lee e6b2f00047
Merge pull request #10335 from gyuho/release-3.3-patch
[Cherry pick 3.3] grpcproxy: fix memory leak
2018-12-17 20:37:04 -08:00
Igor German 59cc0f9ac5 grpcproxy: fix memory leak
use set instead of slice as interval value

fixes #10326
2018-12-17 19:00:57 -08:00
Gyuho Lee 3a7b8b31fd travis: use Go 1.10.7
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-12-17 19:00:22 -08:00
Gyuho Lee 6f250f9a47 version: 3.3.10+git
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-10 13:30:14 -07:00
Gyuho Lee 27fc7e2296 version: 3.3.10
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-10 10:17:54 -07:00
Gyuho Lee eb932c2083 travis.yml: use Go 1.10.4
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-10 10:17:36 -07:00
Gyuho Lee 957700f444 etcdserver: add "etcd_server_read_indexes_failed_total"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 18:22:02 -07:00
Gyuho Lee b45f5306dc rafthttp: probe all raft transports
This PR adds another probing routine to monitor the connection
for Raft message transports. Previously, we only monitored
snapshot transports.

In our production cluster, we found one TCP connection had >8-sec
latencies to a remote peer, but "etcd_network_peer_round_trip_time_seconds"
metrics showed a <1-sec latency distribution, which means the etcd server
was not sampling enough while such latency spikes happened
outside of the snapshot pipeline connection.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 18:18:27 -07:00
Gyuho Lee 8491137b55 etcdserver: add "etcd_server_health_success/failures"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-09 17:54:30 -07:00
Jingyi Hu ebe950fc1c
Merge pull request #10161 from jingyih/automated-cherry-pick-of-#10153-origin-release-3.3
clientv3: automated cherry pick of #10153 to release-3.3
2018-10-08 18:37:52 -07:00
yura 20d83e405f clientv3: concurrency.Mutex.Lock() - preserve invariant
Convenient invariant:
- if werr == nil then the lock is supposed to be held at the moment.

While we cannot be confident in the stronger invariant ('is exactly locked'),
it was inconvenient that the previous code could return `werr == nil` after
Mutex.Unlock.

It could happen when ctx was canceled/timed out exactly after waitDeletes
successfully returned werr == nil and before `<-ctx.Done()` was checked.
While such a situation is very rare, it is still possible.

fixes #10111
2018-10-08 16:42:26 -07:00
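A minimal sketch of the preserved invariant; acquire and release are hypothetical stand-ins for the etcd session operations:

```go
package concurrency

import "context"

// lockSketch preserves "err == nil => lock held": if ctx fired while we were
// acquiring, undo the acquisition and report the ctx error, instead of
// returning nil for a lock the caller no longer owns.
func lockSketch(ctx context.Context, acquire func(context.Context) error, release func()) error {
	if err := acquire(ctx); err != nil {
		return err
	}
	select {
	case <-ctx.Done():
		release()
		return ctx.Err()
	default:
		return nil
	}
}
```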
Wenjia cb57901e03
Merge pull request #10041 from wenjiaswe/automated-cherry-pick-of-#9997-upstream-release-3.3
Automated cherry pick of #9997
2018-10-03 13:52:02 -07:00
Gyuho Lee d838e24f80 etcdserver/api/rafthttp: add v3 snapshot send/receive metrics
Distribution would be:
0.1 second or more
...
25.6 seconds or more
51.2 seconds or more

etcd_network_snapshot_send_success
etcd_network_snapshot_send_failures
etcd_network_snapshot_send_total_duration_seconds
etcd_network_snapshot_receive_success
etcd_network_snapshot_receive_failures
etcd_network_snapshot_receive_total_duration_seconds

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-03 11:12:42 -07:00
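The listed distribution (0.1 second or more up to 51.2 seconds or more) is what prometheus.ExponentialBuckets(0.1, 2, 10) produces; a sketch of one such histogram definition, shown for the send-duration metric:

```go
package rafthttp

import "github.com/prometheus/client_golang/prometheus"

// ExponentialBuckets(0.1, 2, 10) yields 0.1, 0.2, 0.4, ..., 25.6, 51.2.
var snapshotSendSeconds = prometheus.NewHistogram(prometheus.HistogramOpts{
	Namespace: "etcd",
	Subsystem: "network",
	Name:      "snapshot_send_total_duration_seconds",
	Help:      "Total latency distributions of v3 snapshot sends.",
	Buckets:   prometheus.ExponentialBuckets(0.1, 2, 10),
})
```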
Gyuho Lee 7ec9ff62b5 etcdserver/api/snap: add v3 snapshot fsync metrics
etcd_snap_db_fsync_duration_seconds_count
etcd_snap_db_save_total_duration_seconds_bucket

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-03 11:12:41 -07:00
Gyuho Lee dc02dc2ede tests/Dockerfile: update, fix GOPATH
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-10-01 01:30:23 -07:00
Gyuho Lee 40ed18a457
Merge pull request #10122 from jingyih/cherry-pick-of-#10109-origin-release-3.3
etcdctl: cherry pick of #10109 to release-3.3
2018-09-25 17:30:01 -07:00
Jingyi Hu 60d546e309 etcdctl: cherry pick of #10109 to release-3.3
Add snapshot file integrity verification in snapshot status.
2018-09-25 16:50:47 -07:00
Gyuho Lee e774f7309c
Merge pull request #10093 from jingyih/remove_duplicated_import
etcdserver: remove duplicated imports
2018-09-13 20:57:09 -07:00
Jingyi Hu 9eee0b078e etcdserver: remove duplicated imports
Removed duplicated imports of package 'context' in server.go
2018-09-13 20:44:03 -07:00
Gyuho Lee d1acb5a5c8 etcdserver: add "etcd_server_id"
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:50:17 -07:00
Gyuho Lee 73c1100b04 etcdserver: clarify read index wait timeout warnings
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:38:59 -07:00
Gyuho Lee c577335a64 rafthttp: clarify "became inactive" warning
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2018-08-29 14:34:15 -07:00
Gyuho Lee f69413e9ee
Merge pull request #10027 from hexfusion/cherry-pick-a205cfe
etcdserver: cherry-pick #9861 to release-3.3
2018-08-20 12:54:43 -07:00
Gyuho Lee 0dc4632e28 Merge pull request #9861 from gyuho/race
etcdserver/api/v3rpc: remove duplicate gRPC logger set
2018-08-17 22:32:10 -04:00
Gyuho Lee f8fc923fc0
Merge pull request #10004 from jingyih/automated-cherry-pick-of-#9990-origin-release-3.3
Automated cherry pick of #9990
2018-08-15 06:37:33 -07:00
Jingyi Hu 264bb51a9a etcdserver: code clean up
Code clean up in interceptor.go
2018-08-14 17:08:45 -07:00
Jingyi Hu c6c0d03522 vendor: add go-grpc-middleware
Rebased to master PR #9994.  Fixed a Go format issue in
v3rpc/interceptor.go.  Updated vendor to include go-grpc-middleware.
2018-08-14 17:08:45 -07:00
Jingyi Hu 94f81368ae etcdserver: add grpc interceptor to log info on incoming requests to etcd server
To improve debuggability of etcd v3, added a grpc interceptor to log
info on incoming requests to the etcd server. The log output includes
remote client info, request content (with the value field redacted), request
handling latency, response size, etc. Uses the zap logger if available,
otherwise uses capnslog.

Also did some cleanup on the chaining of grpc interceptors on the server
side.
2018-08-14 16:20:13 -07:00
Gyuho Lee 051587f56f version: bump up to 3.3.9+git 2018-07-24 10:17:06 -07:00
Gyuho Lee fca8add78a version: 3.3.9
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-24 09:48:32 -07:00
Gyuho Lee ea40e9f059 etcdserver: add "etcd_server_go_version" metric
Currently, one has to look at server logs manually
to see what Go version was used to build the etcd server.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-23 16:39:24 -07:00
Gyuho Lee fbc0510a4e clientv3: fix keepalive send interval when response queue is full
The client should update the next keepalive send time
even when the lease keepalive response queue becomes full.

Otherwise, the client sends a keepalive request every 500ms
regardless of TTL, when the send is only expected to happen
at an interval of TTL / 3 at minimum.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-23 08:51:18 -07:00
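A minimal sketch of the fix, with a simplified stand-in for the response type: the next send time is advanced before attempting delivery, so a full queue no longer resets the schedule:

```go
package clientv3

import "time"

type keepAliveState struct {
	nextKeepAlive time.Time
	responses     chan int64 // simplified stand-in for the real response type
}

// onKeepAliveResp advances nextKeepAlive to now + TTL/3 before attempting
// delivery, so a full response queue no longer drops the schedule back to
// the 500ms retry interval.
func (s *keepAliveState) onKeepAliveResp(ttlSeconds int64) {
	s.nextKeepAlive = time.Now().Add(time.Duration(ttlSeconds) * time.Second / 3)
	select {
	case s.responses <- ttlSeconds:
	default:
		// queue full: the response is dropped, but the updated
		// nextKeepAlive still keeps the send interval at TTL/3
	}
}
```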
Gyuho Lee 267a62199c
Merge pull request #9940 from wenjiaswe/automated-cherry-pick-of-#9761-upstream-release-3.3
Automated cherry pick of #9761
2018-07-19 18:27:15 -07:00
Wenjia 143fc4ce79
added "now := time.Now()" 2018-07-19 17:27:40 -07:00
Wenjia 7f421efe48
remove "github.com/gogo/protobuf/plugin/stringer" 2018-07-19 17:15:32 -07:00
Gyuho Lee d509620793 etcdserver: rename to "heartbeat_send_failures_total"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:58:14 -07:00
Gyuho Lee d5654ba459 mvcc: add "etcd_mvcc_hash_(rev)_duration_seconds"
etcd_mvcc_hash_duration_seconds
etcd_mvcc_hash_rev_duration_seconds

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:57:04 -07:00
Gyuho Lee da304d7aae mvcc/backend: fix defrag duration scale
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:54:26 -07:00
Gyuho Lee 978727a963 mvcc/backend: add "etcd_disk_backend_defrag_duration_seconds"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:54:26 -07:00
Gyuho Lee 4ad350482e mvcc/backend: document metrics ExponentialBuckets
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:53:31 -07:00
Gyuho Lee f7367d94ff mvcc/backend: clean up mutex, logging
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:53:31 -07:00
Gyuho Lee e43224c3b6 etcdserver: add "etcd_server_slow_apply_total"
{"level":"warn","ts":1527101858.6985068,"caller":"etcdserver/util.go:115","msg":"apply request took too long","took":0.114101529,"expected-duration":0.1,"prefix":"","request":"header:<ID:1029181977902852337> put:<key:\"\\000\\000...

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:52:37 -07:00
Gyuho Lee 4c7bf51030 etcdserver: add "etcd_server_heartbeat_failures_total"
{"level":"warn","ts":1527101858.4149103,"caller":"etcdserver/raft.go:370","msg":"failed to send out heartbeat; took too long, server is overloaded likely from slow disk","heartbeat-interval":0.1,"expected-duration":0.2,"exceeded-duration":0.025771662}
{"level":"warn","ts":1527101858.4149644,"caller":"etcdserver/raft.go:370","msg":"failed to send out heartbeat; took too long, server is overloaded likely from slow disk","heartbeat-interval":0.1,"expected-duration":0.2,"exceeded-duration":0.034015766}

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 16:51:08 -07:00
Gyuho Lee ffe52f74c0 e2e: log errors TestV3CurlCipherSuitesMismatch for now
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 10:11:10 -07:00
Gyuho Lee 1da638c4dc Makefile: use Go 1.10.3 by default
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 10:01:27 -07:00
Gyuho Lee 82ce873987 *: use Go 1.10.3 for testing
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-19 09:56:59 -07:00
Gyuho Lee adfd0d3fe7 mvcc: avoid unnecessary metrics update
https://github.com/coreos/etcd/pull/9300

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:51:08 -07:00
Gyuho Lee a410463a0b mvcc: add "etcd_mvcc_db_total_size_in_use_in_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:36:18 -07:00
Gyuho Lee 1da3603e31 mvcc: add "etcd_mvcc_db_total_size_in_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 14:35:48 -07:00
Gyuho Lee 72c51d3e12 etcdserver: add "etcd_server_quota_backend_bytes"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 13:26:49 -07:00
Gyuho Lee 4481238224 etcdserver: add "etcd_server_slow_read_indexes_total"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 13:00:08 -07:00
Gyuho Lee 82e670766a etcdserver: clarify read index warnings
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-07-03 12:53:21 -07:00
Gyuho Lee 09addbdaa0 tests: update test scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-18 14:08:36 -07:00
Gyuho Lee 4ea2271f86 version: 3.3.8+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 10:21:06 -07:00
Gyuho Lee 33245c6b5b version: 3.3.8
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:41:56 -07:00
Gyuho Lee 4c18c56bf6 travis: use Go 1.9.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:41:41 -07:00
Gyuho Lee cb46e9ee0b gitignore: ignore "docs" and "vendor"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-15 09:34:20 -07:00
Jordan Liggitt 1fea97b898 clientv3: backoff on reestablishing watches when Unavailable errors are encountered 2018-06-14 10:47:46 -07:00
Gyuho Lee 5227545764 tests/semaphore.test.bash: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:39:38 -07:00
Gyuho Lee 1ba7c71975 Makefile: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-13 14:39:02 -07:00
Joe Betz b7c19232bc etcdserver: Fix txn request 'took too long' warnings to use loggable request stringer 2018-06-12 09:33:33 -07:00
Joe Betz 07f833ae3e etcdserver: Add response byte size and range response count to took too long warning 2018-06-11 11:26:26 -07:00
Joe Betz ef154094b3 etcdserver: Replace value contents with value_size in request took too long warning 2018-06-08 09:49:43 -07:00
Gyuho Lee 21f186a40b version: bump up to 3.3.7+git 2018-06-06 10:08:16 -07:00
Gyuho Lee 56536de551 version: 3.3.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:19 -07:00
Gyuho Lee a0ebf8cb1c e2e: test client-side cipher suites with curl
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:19 -07:00
Gyuho Lee 13715724b8 etcdmain: add "--cipher-suites" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:50:15 -07:00
Gyuho Lee 22d65d8cc2 embed: support custom cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:18:16 -07:00
Gyuho Lee 6c2add4142 integration: test client-side TLS cipher suites
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:11:16 -07:00
Gyuho Lee 6a3842776b pkg/transport: add "TLSInfo.CipherSuites" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:10:35 -07:00
Gyuho Lee 641bddca0f pkg/tlsutil: add "GetCipherSuite"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 18:10:16 -07:00
Gyuho Lee 21a1162ad1 tests/e2e: test move-leader command with TLS
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 13:56:31 -07:00
Gyuho Lee e2cb9cbaec ctlv3: support TLS endpoints for move-leader command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-06-05 13:56:05 -07:00
Joe Betz 243074c5c5 scripts/release: Fix docker push for 3.1 releases, remove inaccurate warning at the end of release script 2018-05-31 14:44:29 -07:00
Gyuho Lee 26a73f2fa1 version: bump up to 3.3.6+git 2018-05-31 11:57:20 -07:00
Gyuho Lee 932c3c01f9 version: 3.3.6
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-31 11:41:42 -07:00
Gyuho Lee 41888ddbaa mvcc: fix panic by allowing future revision watcher from restore operation
This also happens without gRPC proxy.

Fix panic when gRPC proxy leader watcher is restored:

```
go test -v -tags cluster_proxy -cpu 4 -race -run TestV3WatchRestoreSnapshotUnsync

=== RUN   TestV3WatchRestoreSnapshotUnsync
panic: watcher minimum revision 9223372036854775805 should not exceed current revision 16

goroutine 156 [running]:
github.com/coreos/etcd/mvcc.(*watcherGroup).chooseAll(0xc4202b8720, 0x10, 0xffffffffffffffff, 0x1)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:242 +0x3b5
github.com/coreos/etcd/mvcc.(*watcherGroup).choose(0xc4202b8720, 0x200, 0x10, 0xffffffffffffffff, 0xc420253378, 0xc420253378)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:225 +0x289
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchers(0xc4202b86e0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:340 +0x237
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchersLoop(0xc4202b86e0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:214 +0x280
created by github.com/coreos/etcd/mvcc.newWatchableStore
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:90 +0x477
exit status 2
FAIL	github.com/coreos/etcd/integration	2.551s
```

gRPC proxy spawns a watcher with a key "proxy-namespace__lostleader"
and watch revision "int64(math.MaxInt64 - 2)" to detect leader loss.
But when the partitioned node restores, this watcher triggers a
panic with "watcher minimum revision ... should not exceed current ...".

This check was added a long time ago, by my PR, when there was no gRPC proxy:

https://github.com/coreos/etcd/pull/4043#discussion_r48457145

> we can remove this checking actually. it is impossible for a unsynced watching to have a future rev. or we should just panic here.

However, now it's possible that an unsynced watcher has a future
revision, when it was moved from a synced watcher group through a
restore operation.

This PR adds a "restore" flag to indicate that a watcher was moved
from the synced watcher group by a restore operation. Otherwise,
a watcher with a future revision in an unsynced watcher group
would still panic.

Example logs with future revision watcher from restore operation:

```
{"level":"info","ts":1527196358.9057755,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
{"level":"info","ts":1527196358.910349,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
```

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-31 11:41:34 -07:00
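A minimal sketch of the idea behind the fix, using simplified hypothetical types rather than etcd's actual mvcc structures:

```go
package mvccsketch

// watcher is a simplified stand-in for an mvcc watcher.
type watcher struct {
	minRev   int64
	restored bool // set when moved out of the synced group by restore
}

// chooseMinRev keeps the old panic for genuinely impossible states, but
// tolerates a future revision on a watcher that a restore operation
// moved into the unsynced group: it simply waits for the store to catch up.
func chooseMinRev(w *watcher, curRev int64) int64 {
	if w.minRev > curRev && !w.restored {
		panic("watcher minimum revision should not exceed current revision")
	}
	return w.minRev
}
```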
Sam Batschelet 7292963ae7 auth: fix panic using WithRoot and improve JWT coverage
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-23 23:45:24 -07:00
Hitoshi Mitake 37767bc6e2 auth: a new auth token provider nop
This commit adds a new auth token provider named nop. The nop provider
refuses every Authenticate() request, so that only CN-based
authentication is allowed. If the tokenOpts parameter of
auth.NewTokenProvider() is empty, this provider is used.
2018-05-23 15:48:39 -07:00
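As a rough illustration (hypothetical, simplified interface, not etcd's actual auth.TokenProvider), a nop provider is just a provider whose Authenticate always fails:

```go
package authsketch

import (
	"context"
	"errors"
)

// nopTokenProvider rejects every token-based authentication attempt,
// leaving TLS CommonName-based auth as the only way in.
type nopTokenProvider struct{}

func (p *nopTokenProvider) Authenticate(ctx context.Context, user, password string) (string, error) {
	return "", errors.New("nop token provider: token authentication is disabled")
}
```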
Joe Betz d659771bb8 scripts: Fix remote tag check, gcloud login and umask in release script 2018-05-09 11:08:23 -07:00
Gyuho Lee 39d01e716f version: 3.3.5+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-09 11:07:52 -07:00
Gyuho Lee 70c8726202 version: 3.3.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-09 09:23:59 -07:00
Gyuho Lee aaca01a0fa tests/e2e: separate coverage tests for exec commands
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-03 18:48:16 -07:00
Gyuho Lee bc2d400b4c etcdctl/ctlv3: fix watch with exec commands
The following command was failing because the parser incorrectly
picked up the second "watch" string in the exec command, thus
passing the wrong exec command.

```
ETCDCTL_API=3 ./bin/etcdctl watch aaa -- echo watch event received

panic: runtime error: slice bounds out of range

goroutine 1 [running]:
github.com/coreos/etcd/etcdctl/ctlv3/command.parseWatchArgs(0xc42002e080, 0x8, 0x8, 0xc420206a20, 0x5, 0x6, 0x0, 0x0, 0x0, 0x0, ...)
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/command/watch_command.go:303 +0xbed
github.com/coreos/etcd/etcdctl/ctlv3/command.watchCommandFunc(0xc4202a7180, 0xc420206a20, 0x5, 0x6)
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/command/watch_command.go:73 +0x11d
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).execute(0xc4202a7180, 0xc420206960, 0x6, 0x6, 0xc4202a7180, 0xc420206960)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1363de0, 0xc420128638, 0xc420185e01, 0xc420185ee8)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:852 +0x30a
github.com/coreos/etcd/vendor/github.com/spf13/cobra.(*Command).Execute(0x1363de0, 0x0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/coreos/etcd/etcdctl/ctlv3.Start()
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/ctlv3/ctl_nocov.go:25 +0x8e
main.main()
	/home/gyuho/go/src/github.com/coreos/etcd/etcdctl/main.go:40 +0x17b
```

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-03 18:48:08 -07:00
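The parsing rule that avoids this class of bug is simple; a self-contained sketch (not etcd's actual parseWatchArgs) that splits at the first "--" so nothing inside the exec command is re-interpreted:

```go
package ctlsketch

// splitWatchArgs separates the watch arguments from the exec command at
// the first "--"; everything after the separator is passed through
// verbatim, so a literal "watch" in the command is harmless.
func splitWatchArgs(args []string) (watchArgs, execCmd []string) {
	for i, a := range args {
		if a == "--" {
			return args[:i], args[i+1:]
		}
	}
	return args, nil
}
```

For example, splitWatchArgs([]string{"aaa", "--", "echo", "watch", "event", "received"}) yields ["aaa"] and ["echo", "watch", "event", "received"].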
Gyuho Lee 913a98567e tests: use Go 1.9.6
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-05-01 10:22:04 -07:00
Gyuho Lee 3f888b8085 functional/tester: handle retries in "caseUntilSnapshot"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-30 14:37:20 -07:00
Gyuho Lee c15c8c6116 functional.yaml: use lower ports
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-30 13:36:36 -07:00
Joe Betz f535bb64f3 scripts: Fix a few etcd release script bugs and make it reentrant. 2018-04-25 10:04:43 -07:00
Eric Chiang f01d690e6f etcdmain: document peer-cert-allowed-cn flag 2018-04-24 13:57:51 -07:00
Gyuho Lee d09fa9c537 version: 3.3.4+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-24 13:56:13 -07:00
Gyuho Lee fdde8705f5 version: 3.3.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-24 12:05:29 -07:00
Joe Betz 600b2d1967 scripts: Add scripts/release that performs 'etcd-release-runbook' (https://goo.gl/Gxwysq) style release workflow 2018-04-24 12:05:18 -07:00
Gyuho Lee 870138accb etcdserver: log skipping initial election tick
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:59:01 -07:00
Gyuho Lee 758203bd86 etcdmain: add "--initial-election-tick-advance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:58:57 -07:00
Gyuho Lee 8886a6397c embed: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:26:48 -07:00
Gyuho Lee ea829611b5 integration: set InitialElectionTickAdvance to true by default
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:22:16 -07:00
Gyuho Lee b923c74fe5 etcdserver: add "InitialElectionTickAdvance"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-23 10:21:51 -07:00
Maciej Borsz 7cbc2f1068 etcdserver: add is_leader prometheus metric that is 1 on the leader.
Before this change, we had no way to find the leader using the /metrics
endpoint. This commit adds a metric to do that.
2018-04-19 14:59:31 -07:00
Gyuho Lee 78109152b9 integration: re-overwrite "httptest.Server" TLS.Certificates
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-17 06:17:46 -07:00
rob boll 08dc184618 pkg/transport: don't set certificates on tls config 2018-04-17 06:17:38 -07:00
Gyuho Lee 48f4ee9268 functional: create symlinks for build
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 16:05:36 -07:00
Gyuho Lee 07a34aa76b travis: run build tests for "functional"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:56:30 -07:00
Gyuho Lee 2cabb82375 snapshot: remove tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 15:24:02 -07:00
Gyuho Lee 56a9778bc2 functional: initial commit (copied from master)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:19:22 -07:00
Gyuho Lee 5abe521e77 snapshot: initial commit (for functional tests)
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 13:19:19 -07:00
Gyuho Lee 3c4ace2d27 test: simplify
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-12 11:09:25 -07:00
disksing 095fc0b411 etcdserver/stats: make all fields guarded by mutex. 2018-04-11 19:49:00 -07:00
disksing d40abbb502 etcdserver/stats: fix stats data race. 2018-04-11 19:49:00 -07:00
Gyuho Lee c19be730fd test: remove build flag "-a"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-11 10:17:31 -07:00
Gyuho Lee 99e4a5ffae cmd/vendor: add "go.uber.org/zap"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:46:00 -07:00
Gyuho Lee 3736a126df pkg/proxy: move from "pkg/transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:43:23 -07:00
Gyuho Lee 074e417770 tools: remove
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:43:16 -07:00
Gyuho Lee dd9f05567d travis: update
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:34:27 -07:00
Gyuho Lee a28cf17f25 test/*: clean up semaphore scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 23:33:50 -07:00
Gyuho Lee cdbb8ffdc1 etcdserver: fix "lease_expired_total" metrics
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-10 17:57:35 -07:00
Gyuho Lee 68ba797549 tests: move test scripts
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-04-09 11:33:23 -07:00
Gyuho Lee 5d97bccff2 semaphore.sh: update Go version
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:20:26 -07:00
Gyuho Lee e5ec25fe0b travis: use Go 1.9.5
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:07:35 -07:00
Gyuho Lee c522f6060f version: 3.3.3+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-29 09:07:10 -07:00
Gyuho Lee e348b1aedd version: 3.3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-28 13:00:06 -07:00
Gyuho Lee 4355d91fcc Documentation/upgrades: backport all upgrade guides
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-27 10:32:43 -07:00
Gyuho Lee ce7b86b65a compactor: simplify interval logic on periodic compactor
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-26 05:37:31 -07:00
Iwasaki Yudai d70a218b19 compactor: adjust interval for period <1-hour 2018-03-26 05:37:24 -07:00
Gyuho Lee e029de320a compactor: clean up
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-22 11:03:22 -07:00
Gyuho Lee 863a56a998 rafthttp: add missing "peer_sent_failures_total" metrics call
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-14 12:44:38 -04:00
Gyuho Lee 3282d90707 etcdserver: adjust election ticks on restart
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:56 -08:00
Gyuho Lee b2d5c6c7bd etcdserver: make "advanceTicks" method
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:50 -08:00
Gyuho Lee 6fe7316ec4 rafthttp: add "ActivePeers" to "Transport"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-10 20:05:35 -08:00
Gyuho Lee 40e02256c7 version: 3.3.2+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 14:49:14 -08:00
Gyuho Lee c9d46ab379 version: 3.3.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 12:57:09 -08:00
Gyuho Lee d1da2023b9 clientv3/integration: test "rpctypes.ErrLeaseTTLTooLarge"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 10:34:34 -08:00
Iwasaki Yudai eaa0050d4d *: enforce max lease TTL with 9,000,000,000 seconds
math.MaxInt64 / time.Second is 9,223,372,036. 9,000,000,000 is easier to
remember/document.
2018-03-08 10:34:12 -08:00
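The arithmetic above is easy to double-check in Go:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// Largest whole-second TTL that fits in int64 nanoseconds:
	fmt.Println(math.MaxInt64 / int64(time.Second)) // 9223372036
}
```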
Gyuho Lee 99a12662c1 *: remove unused env vars
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-08 01:35:36 -08:00
Gyuho Lee e6d44fa3f2 hack/scripts-dev: fix indentation in run.sh
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:32:27 -08:00
Gyuho Lee 43caf2b28a hack/scripts-dev: sync with master branch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:18:58 -08:00
Gyuho Lee bfb7a155b4 travis: update Go version string
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 14:04:14 -08:00
Gyuho Lee f76ef3ce8d e2e: fix missing "apiPrefix"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-07 00:03:02 -08:00
Gyuho Lee 462ba8bb09 embed: fix wrong compactor imports
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-06 23:26:45 -08:00
Gyuho Lee 146ed08052 Documentation/op-guide: highlight defrag operation "--endpoints" flag
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:15:05 -08:00
Gyuho Lee 1bc974d536 etcdctl: highlight "defrag" command caveats
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-05 11:15:02 -08:00
Gyuho Lee 3e3468d1fa e2e: add "Election" grpc-gateway test cases
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:50 -08:00
Gyuho Lee 207f19354b e2e: add "spawnWithExpectLines"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:41 -08:00
Gyuho Lee bb8a5377ce api/v3election: error on missing "leader" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:34 -08:00
Gyuho Lee 8291e16128 Documentation: make "Consul" section more objective
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:22 -08:00
Gyuho Lee a5b31087e8 etcdserver: enable "CheckQuorum" when starting with "ForceNewCluster"
We enable "raft.Config.CheckQuorum" by default in other
Raft initial starts. So should start with "ForceNewCluster".

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-03-02 10:40:08 -08:00
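A sketch of the knob in question (CheckQuorum is a real raft.Config field; the other values here are illustrative):

```go
package raftsketch

import "github.com/coreos/etcd/raft"

func newConfig(id uint64, storage *raft.MemoryStorage) *raft.Config {
	return &raft.Config{
		ID:              id,
		ElectionTick:    10,
		HeartbeatTick:   1,
		Storage:         storage,
		MaxSizePerMsg:   1024 * 1024,
		MaxInflightMsgs: 256,
		CheckQuorum:     true, // now set for ForceNewCluster starts too
	}
}
```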
Rob Day cec79dd706 httpproxy: cancel requests when client closes a connection 2018-03-02 10:39:46 -08:00
Gyuho Lee 3641af83e7 semaphore: release test version 2018-02-27 11:29:58 -08:00
Gyuho Lee 240fda5128 embed: fix revision-based compaction with default value
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-21 09:35:00 -08:00
Gyuho Lee d627301735 embed: document/validate compaction mode
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-21 09:34:59 -08:00
Gyuho Lee 534c31b4ca version: 3.3.1+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 14:36:11 -08:00
Gyuho Lee 28f3f26c0e version: 3.3.1
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:29:11 -08:00
Gyuho Lee 4737f3a620 hack/scripts-dev: Makefile with Go 1.9.4, 1.8.7
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:56 -08:00
Gyuho Lee bc6e235052 travis: use Go 1.9.4 with TARGET_GO_VERSION
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:56 -08:00
Gyuho Lee 13c5cedfb8 semaphore: use Go 1.9.4, update release upgrade test version
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-12 09:28:55 -08:00
Xiang 9942f904fb etcdserver: improve request took too long warning 2018-02-06 16:58:04 -08:00
Iwasaki Yudai eaf7d631ad mvcc: restore unsynced watchers
In case syncWatchersLoop() starts before Restore() is called,
watchers already added by that moment are moved to s.synced by the loop.
However, there was broken logic that moved watchers from s.synced
to s.unsynced without setting keyWatchers of the watcherGroup.
Eventually syncWatchers() fails to pick up those watchers from s.unsynced
and no events are sent to them, because newWatcherBatch(), called
in that function, internally uses wg.watcherSetByKey(), which requires
a proper keyWatchers value.
2018-02-06 11:34:46 -08:00
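The invariant the fix restores, as a simplified hypothetical sketch: moves between watcher groups must go through add/delete so the per-key index stays consistent:

```go
package wgsketch

type watcher struct{ key string }

type watcherGroup struct {
	watchers    map[*watcher]struct{}
	keyWatchers map[string]map[*watcher]struct{}
}

// add registers w in both the flat set and the per-key index; skipping
// the index is exactly the bug described above.
func (wg *watcherGroup) add(w *watcher) {
	wg.watchers[w] = struct{}{}
	if wg.keyWatchers[w.key] == nil {
		wg.keyWatchers[w.key] = make(map[*watcher]struct{})
	}
	wg.keyWatchers[w.key][w] = struct{}{}
}

func (wg *watcherGroup) delete(w *watcher) {
	delete(wg.watchers, w)
	delete(wg.keyWatchers[w.key], w)
}

// moveToUnsynced is how a restore should demote a watcher.
func moveToUnsynced(synced, unsynced *watcherGroup, w *watcher) {
	synced.delete(w)
	unsynced.add(w)
}
```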
Gyuho Lee 21a1a28c18 hack: sync with etcd master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:07:01 -08:00
Gyuho Lee c932e9e2ba tools/functional-tester: update README for local docker testing
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:35 -08:00
Gyuho Lee cf96d8a130 Dockerfile-functional-tester: initial commit
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:25 -08:00
Gyuho Lee a3ec84e311 gitignore: add ".Dockerfile-functional-tester"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:06:12 -08:00
Gyuho Lee 29aca652bf test: configure advertise ports in functional_pass
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:42 -08:00
Gyuho Lee bbfd0077e8 etcd-tester: set advertise ports, delay w/ network faults
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:33 -08:00
Gyuho Lee 18df07754f etcd-agent: use "pkg/transport.Proxy"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:04:10 -08:00
Gyuho Lee 56178a8a06 test: remove "use-root" in functional_pass
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:58 -08:00
Gyuho Lee a9a616a09f etcd-agent: remove "use-root"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:41 -08:00
Gyuho Lee abdfa87ae5 functional-tester: remove old assets
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:03:29 -08:00
Gyuho Lee a4cbba89ff pkg/transport: implement "Proxy"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:02:34 -08:00
Gyuho Lee 0bc06d72df pkg/transport: add "fixtures" for TLS tests
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-06 10:02:25 -08:00
Iwasaki Yudai a1fbed5abc *: Remove 8GiB quota limitation from documents
Also mention that in v3.3 change log.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-02 14:28:26 -08:00
Gyuho Lee 665fb01f95 version: 3.3.0+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-01 14:14:07 -08:00
Gyuho Lee c23606781f version: 3.3.0
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-02-01 10:03:36 -08:00
Gyuho Lee afa01aaef0 etcdmain: define "defaultGRPCMaxCallSendMsgSize"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:50:27 -08:00
Gyuho Lee d20e5a6bb5 Documentation/op-guide: highlight defragment operation
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:42:37 -08:00
Gyuho Lee 6931dd8442 Documentation/op-guide: revert "--discovery-srv-name" doc changes
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:41:42 -08:00
Gyuho Lee f320348682 Documentation: sync with etcd master
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-30 09:37:57 -08:00
Rene Zbinden d7e6dd77bb grpcproxy: configure --max-send-bytes and --max-recv-bytes for grpc proxy 2018-01-30 09:33:16 -08:00
Gyuho Lee 50d2a00f01 etcdserver: clarify warnings on backend open taking >10 seconds
If the db file is 10 GiB, opening it can take more than 1 second.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-26 10:55:16 -08:00
Gyuho Lee c5bba152ee etcdserver: add detailed errors in "ValidateClusterAndAssignIDs"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:21 -08:00
Gyuho Lee dbde4e986b pkg/netutil: return error from "URLStringsEqual"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:14 -08:00
Gyuho Lee f9b7fccf1b etcdserver: add error details on DNS resolution failure on advertise URLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-25 12:00:07 -08:00
Gyuho Lee 9deb838ddb semaphore,travis: test with Go 1.9.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-23 14:03:24 -08:00
Gyuho Lee baf7320e10 version: 3.3.0-rc.4+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-23 14:03:07 -08:00
Gyuho Lee ea6360f550 version: 3.3.0-rc.4
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:32:00 -08:00
Gyuho Lee 2aa3d91759 clientv3/integration: add TestMemberAddUpdateWrongURLs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:31:45 -08:00
Gyuho Lee 7973612c6e words: whitelist "rafthttp"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:29:51 -08:00
Gyuho Lee 1c91ddc6f4 clientv3: prevent no-scheme URLs to cluster APIs
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-22 11:27:25 -08:00
Gyuho Lee 8a18cc96d0 etcdserver/api/v3rpc: debug-log client disconnect on TLS, http/2 stream CANCEL
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-19 12:50:20 -08:00
Gyuho Lee a90f301ba8 version: 3.3.0-rc.3+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-19 12:48:21 -08:00
Gyuho Lee 374dc5743f version: 3.3.0-rc.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:23:08 -08:00
Gyuho Lee 55505617df proxy/grpcproxy: remove "Errors" field
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
Gyuho Lee a9317d3d77 e2e: remove "/health" "errors" field test
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
Gyuho Lee 02d362ccde etcdserver/api/etcdhttp: remove "errors" field in /health
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 15:22:54 -08:00
Jordan Liggitt d292337d14 api/etcdhttp: change /health type back to string for backwards compatibility 2018-01-17 12:44:38 -08:00
Gyuho Lee 7974f008f3 etcdctl: document "ETCD_WATCH_*"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:44:22 -08:00
Gyuho Lee 4a3f99415e e2e: test ETCD_WATCH_VALUE
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:44:07 -08:00
Gyuho Lee 6340564c84 ctlv3: set ETCD_WATCH_* on watch exec
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:55 -08:00
Gyuho Lee 6735028ec0 ctlv3: exit on exec watch error
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:45 -08:00
Gyuho Lee 906f098053 ctlv3: set ETCD_WATCH_KEY, ETCD_WATCH_VALUE on exec watch
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:38 -08:00
Gyuho Lee 8a66237693 ctlv3: handle pkg/flags warnings
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:27 -08:00
Gyuho Lee d37afffb98 etcdctl: document watch with ETCDCTL_WATCH_*
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:12 -08:00
Gyuho Lee 7e2759da8d e2e: add watch tests with ETCDCTL_WATCH_*
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:43:02 -08:00
Gyuho Lee ad4df985fc ctlv3: support ETCDCTL_WATCH_KEY, ETCDCTL_WATCH_RANGE_END
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-17 12:42:54 -08:00
Gyuho Lee 2df89c8bf6 Documentation/op-guide: clarify security.md on TLS auth
Make it more accurate (just as pkg/transport/listener_tls.go does).

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-12 15:23:06 -08:00
Hitoshi Mitake 6178c45066 etcdctl: don't ask password twice for etcdctl endpoint health --cluster
Currently, etcdctl endpoint health --cluster asks for the password twice
if auth is enabled. This is because the command creates two client
instances: one for checking endpoint health and another for getting
cluster members with MemberList(). The latter client doesn't need to be
authenticated because MemberList() is a public RPC. This commit makes
the latter an unauthenticated client.

Fix https://github.com/coreos/etcd/issues/9094
2018-01-12 09:59:31 -08:00
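A sketch of the resulting client split, using the real clientv3.Config fields (error handling trimmed): only the health-check client carries credentials, while the member-listing client is anonymous:

```go
package healthsketch

import (
	"context"

	"github.com/coreos/etcd/clientv3"
)

// listMembers deliberately omits Username/Password: MemberList() is a
// public RPC, so this client never triggers a password prompt.
func listMembers(eps []string) (*clientv3.MemberListResponse, error) {
	cli, err := clientv3.New(clientv3.Config{Endpoints: eps})
	if err != nil {
		return nil, err
	}
	defer cli.Close()
	return cli.MemberList(context.Background())
}
```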
Gyuho Lee 9ccae0f81a etcd-tester: update stresser weights with txn stresser
Large key writes (stressEntries[1].weight) should not take this
much weight. It was triggering "database size exceeded" errors.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-12 09:41:51 -08:00
Gyuho Lee a5079cc381 version: 3.3.0-rc.2+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 14:16:08 -08:00
Gyuho Lee 9e079d8f02 version: 3.3.0-rc.2
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 11:18:46 -08:00
Gyuho Lee bd57c9ca5b etcd-tester: fix "writeTxn" key selection
Found when debugging https://github.com/coreos/etcd/issues/9130.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-11 11:18:05 -08:00
Gyu-Ho Lee 58c402a47b test: limit stress-qps for slow CI machines, add txn flags
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2018-01-09 14:18:45 -08:00
Gyu-Ho Lee 3ce73b70bc etcd-tester: add txn stresser
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2018-01-09 14:18:33 -08:00
Gyuho Lee ee3c81d8d3 ctlv3: add "snapshot restore --wal-dir"
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-09 11:12:29 -08:00
Sahdev P. Zala 2dfabfbef6 DocCommand: use regex wildcard
The current command as written produces no output in the macOS terminal
or bash shell. Using a regex wildcard works fine on macOS and Linux.
2018-01-09 09:11:16 -08:00
Gyuho Lee bf83d5269f clientv3/integration: fix typos
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-09 09:11:15 -08:00
Sam Batschelet a609b1eb47 integration: add constant RequestWaitTimeout. 2018-01-09 09:11:15 -08:00
Iwasaki Yudai 1ae0c0b47d mvcc: check nil before setting FillPercent to avoid panic
Since CreateBucketIfNotExists() can return nil when it gets an error,
FillPercent must be accessed only after a nil check, so as not to
cause a panic.
2018-01-08 13:08:03 -08:00
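The guard itself is small; a sketch against the bolt API (CreateBucketIfNotExists returns (*Bucket, error)):

```go
package bucketsketch

import bolt "github.com/coreos/bbolt"

// setFillPercent only touches FillPercent after the error check, since
// the returned bucket is nil whenever err is non-nil.
func setFillPercent(tx *bolt.Tx, name []byte) error {
	b, err := tx.CreateBucketIfNotExists(name)
	if err != nil {
		return err // b is nil here; b.FillPercent would panic
	}
	b.FillPercent = 0.9
	return nil
}
```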
Sahdev P. Zala ec43197344 etcdserver/api/v3rpc: debug user cancellation and log warning for rest
A context error with the cancel code typically indicates user
cancellation, which should be logged at debug level. For other error
codes we should display a warning.

Fixes #9085
2018-01-08 10:14:37 -08:00
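A sketch of the leveling rule with grpc's status and codes packages (the commit's rule covers the cancel code; the function name here is illustrative):

```go
package rpcsketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// logLevelFor demotes user cancellations to debug level; every other
// error code still produces a warning.
func logLevelFor(err error) string {
	if status.Code(err) == codes.Canceled {
		return "debug"
	}
	return "warning"
}
```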
Quentin MACHU 70ba0518f1 embed: enable extensive metrics if specified 2018-01-07 18:48:59 -08:00
Gyuho Lee e330f5004f etcdmain: unset ETCD_UNSUPPORTED_ARCH after arch check
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-05 03:38:35 +00:00
Gyuho Lee 0ec5023b7b pkg/expect: fix deadlock in mac OS
bufio.NewReader.ReadString blocks even after the process has received
syscall.SIGKILL. Remove the ptyMu mutex and make ReadString return
when the *os.File is closed.

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 14:34:01 -08:00
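The unblock pattern, as a minimal sketch: closing the pty's *os.File is what forces the pending ReadString to return:

```go
package expectsketch

import "os"

// stop kills the child and closes the pty file; the Close wakes up any
// goroutine blocked in bufio.Reader.ReadString on that file.
func stop(pty *os.File, proc *os.Process) error {
	if err := proc.Kill(); err != nil {
		return err
	}
	return pty.Close()
}
```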
Gyuho Lee 0f69520622 version: bump up to 3.3.0-rc.1+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 14:33:10 -08:00
Gyuho Lee d3c2acf090 version: bump up to 3.3.0-rc.1
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:27:15 -08:00
Gyuho Lee 5e35f79087 clientv3/integration: fix TestKVLargeRequests with -tags cluster_proxy
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:07:24 -08:00
Gyuho Lee 6dff1a9398 tools/functional-tester: remove duplicate grpclog set
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
Gyuho Lee 325913d6fb etcdserver/api/v3rpc: set grpclog once
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
Gyuho Lee 24c9fb0527 etcdserver,embed: discard gRPC info logs when debug is off
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
Gyuho Lee 8511db5e2b etcdserver/api/v3rpc: log stream error with debug level
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2018-01-02 11:02:17 -08:00
Gyuho Lee 3193f3c9ab clientv3/leasing: fix racey waitSession
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 17:51:03 -08:00
Gyuho Lee bdc508cadf grpc-proxy: add "--debug" flag to "etcd grpc-proxy start" command
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 14:44:10 -08:00
Gyuho Lee d5a0609412 embed: only discard infos when debug flag is off
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-21 14:44:02 -08:00
Gyuho Lee 67af1a2138 CHANGELOG: remove rc in release-3.3
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:32:15 -08:00
Gyuho Lee 66d68a8fdb *: update release upgrade test versions
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:16:59 -08:00
Gyuho Lee ebaa83c985 version: bump up to 3.3.0+git
Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
2017-12-20 14:16:49 -08:00
2277 changed files with 455243 additions and 317387 deletions

.gitignore: 6 changes

```diff
@@ -1,13 +1,13 @@
 /agent-*
 /coverage
 /covdir
+/docs
 /gopath
 /gopath.proto
 /go-bindata
 /release
 /machine*
 /bin
-.Dockerfile-test
 .vagrant
 *.etcd
 *.log
@@ -15,8 +15,6 @@
 *.swp
 /hack/insta-discovery/.env
 *.test
-tools/functional-tester/docker/bin
-hack/scripts-dev/docker-dns/.Dockerfile
-hack/scripts-dev/docker-dns-srv/.Dockerfile
 hack/tls-setup/certs
 .idea
+*.bak
```

```diff
@@ -1,16 +0,0 @@
-#!/usr/bin/env bash
-TEST_SUFFIX=$(date +%s | base64 | head -c 15)
-TEST_OPTS="RELEASE_TEST=y INTEGRATION=y PASSES='build unit release integration_e2e functional' MANUAL_VER=v3.2.11"
-if [ "$TEST_ARCH" == "386" ]; then
-  TEST_OPTS="GOARCH=386 PASSES='build unit integration_e2e'"
-fi
-docker run \
-  --rm \
-  --volume=`pwd`:/go/src/github.com/coreos/etcd \
-  gcr.io/etcd-development/etcd-test:go1.9.2 \
-  /bin/bash -c "${TEST_OPTS} ./test 2>&1 | tee test-${TEST_SUFFIX}.log"
-! egrep "(--- FAIL:|panic: test timed out|appears to have leaked)" -B50 -A10 test-${TEST_SUFFIX}.log
```

```diff
@@ -6,8 +6,10 @@ sudo: required
 services: docker
 go:
-- 1.9.2
-- tip
+- 1.12.12
+
+env:
+- GO111MODULE=on
 
 notifications:
   on_success: never
@@ -15,74 +17,50 @@ notifications:
 env:
   matrix:
-  - TARGET=amd64
-  - TARGET=amd64-go-tip
-  - TARGET=darwin-amd64
-  - TARGET=windows-amd64
-  - TARGET=arm64
-  - TARGET=arm
-  - TARGET=386
-  - TARGET=ppc64le
+  - TARGET=linux-amd64-integration-1-cpu
+  - TARGET=linux-amd64-integration-4-cpu
+  - TARGET=linux-amd64-functional
+  - TARGET=linux-amd64-unit
+  - TARGET=linux-amd64-e2e
+  - TARGET=all-build
+  - TARGET=linux-386-unit
 
 matrix:
   fast_finish: true
   allow_failures:
-  - go: tip
-    env: TARGET=amd64-go-tip
-  exclude:
-  - go: 1.9.2
-    env: TARGET=amd64-go-tip
-  - go: tip
-    env: TARGET=amd64
-  - go: tip
-    env: TARGET=darwin-amd64
-  - go: tip
-    env: TARGET=windows-amd64
-  - go: tip
-    env: TARGET=arm
-  - go: tip
-    env: TARGET=arm64
-  - go: tip
-    env: TARGET=386
-  - go: tip
-    env: TARGET=ppc64le
-
-before_install:
-- docker pull gcr.io/etcd-development/etcd-test:go1.9.2
+  - go: 1.12.12
+    env: TARGET=linux-386-unit
 
 install:
-- pushd cmd/etcd && go get -t -v ./... && popd
+- go get -t -v -d ./...
 
 script:
+- echo "TRAVIS_GO_VERSION=${TRAVIS_GO_VERSION}"
 - >
   case "${TARGET}" in
-  amd64)
-    docker run --rm \
-      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
-      /bin/bash -c "GOARCH=amd64 ./test"
+  linux-amd64-integration-1-cpu)
+    GOARCH=amd64 CPU=1 PASSES='integration' ./test
     ;;
-  amd64-go-tip)
-    GOARCH=amd64 ./test
+  linux-amd64-integration-4-cpu)
+    GOARCH=amd64 CPU=4 PASSES='integration' ./test
     ;;
-  darwin-amd64)
-    docker run --rm \
-      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
-      /bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=darwin GOARCH=amd64 ./build"
+  linux-amd64-functional)
+    ./build && GOARCH=amd64 PASSES='functional' ./test
     ;;
-  windows-amd64)
-    docker run --rm \
-      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
-      /bin/bash -c "GO_BUILD_FLAGS='-a -v' GOOS=windows GOARCH=amd64 ./build"
+  linux-amd64-unit)
+    ./build && GOARCH=amd64 PASSES='unit' ./test
     ;;
-  386)
-    docker run --rm \
-      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
-      /bin/bash -c "GOARCH=386 PASSES='build unit' ./test"
+  linux-amd64-e2e)
+    GOARCH=amd64 PASSES='build release e2e' MANUAL_VER=v3.3.13 ./test
     ;;
-  *)
-    # test building out of gopath
-    docker run --rm \
-      --volume=`pwd`:/go/src/github.com/coreos/etcd gcr.io/etcd-development/etcd-test:go1.9.2 \
-      /bin/bash -c "GO_BUILD_FLAGS='-a -v' GOARCH='${TARGET}' ./build"
+  all-build)
+    GOARCH=386 PASSES='build' ./test \
+      && GO_BUILD_FLAGS='-v' GOOS=darwin GOARCH=amd64 ./build \
+      && GO_BUILD_FLAGS='-v' GOARCH=arm ./build \
+      && GO_BUILD_FLAGS='-v' GOARCH=arm64 ./build \
+      && GO_BUILD_FLAGS='-v' GOARCH=ppc64le ./build
+    ;;
+  linux-386-unit)
+    GOARCH=386 ./build && GOARCH=386 PASSES='unit' ./test
     ;;
   esac
```

.words: 5 changes

```diff
@@ -25,6 +25,8 @@ healthcheck
 iff
 inflight
 keepalive
+hasleader
+racey
 keepalives
 keyspace
 linearization
@@ -33,6 +35,7 @@ mutex
 prefetching
 protobuf
 prometheus
+rafthttp
 repin
 serializable
 teardown
@@ -40,4 +43,4 @@ too_many_pings
 uncontended
 unprefixed
 unlisting
 WithDialer
```


@@ -1,750 +0,0 @@
## [v3.3.0](https://github.com/coreos/etcd/releases/tag/v3.3.0) (2018-01-??)
**v3.3.0 is not yet released; expected to be released in January 2018.**
## [v3.3.0-rc.0](https://github.com/coreos/etcd/releases/tag/v3.3.0-rc.0) (2017-12-20)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.0...v3.3.0) and [v3.3 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_3.md) for any breaking changes.
### Improved
- Use [`coreos/bbolt`](https://github.com/coreos/bbolt/releases) to replace [`boltdb/bolt`](https://github.com/boltdb/bolt#project-status).
- Fix [etcd database size grows until `mvcc: database space exceeded`](https://github.com/coreos/etcd/issues/8009).
- [Reduce memory allocation](https://github.com/coreos/etcd/pull/8428) on [Range operations](https://github.com/coreos/etcd/pull/8475).
- [Rate limit](https://github.com/coreos/etcd/pull/8099) and [randomize](https://github.com/coreos/etcd/pull/8101) lease revoke on restart or leader elections.
- Prevent [spikes in Raft proposal rate](https://github.com/coreos/etcd/issues/8096).
- Support `clientv3` balancer failover under [network faults/partitions](https://github.com/coreos/etcd/issues/8711).
- Better warning on [mismatched `--initial-cluster`](https://github.com/coreos/etcd/pull/8083) flag.
### Changed(Breaking Changes)
- Require [Go 1.9+](https://github.com/coreos/etcd/issues/6174).
- Compile with *Go 1.9.2*.
- Deprecate [`golang.org/x/net/context`](https://github.com/coreos/etcd/pull/8511).
- Require [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) [**`v1.7.4`**](https://github.com/grpc/grpc-go/releases/tag/v1.7.4) or [**`v1.7.5+`**](https://github.com/grpc/grpc-go/releases/tag/v1.7.5):
- Deprecate [`metadata.Incoming/OutgoingContext`](https://github.com/coreos/etcd/pull/7896).
- Deprecate `grpclog.Logger`, upgrade to [`grpclog.LoggerV2`](https://github.com/coreos/etcd/pull/8533).
- Deprecate [`grpc.ErrClientConnTimeout`](https://github.com/coreos/etcd/pull/8505) errors in `clientv3`.
- Use [`MaxRecvMsgSize` and `MaxSendMsgSize`](https://github.com/coreos/etcd/pull/8437) to limit message size, in etcd server.
- Upgrade [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) `v1.2.2` to `v1.3.0`.
- Translate [gRPC status error in v3 client `Snapshot` API](https://github.com/coreos/etcd/pull/9038).
- Upgrade [`github.com/ugorji/go/codec`](https://github.com/ugorji/go) for v2 `client`.
- [Regenerated](https://github.com/coreos/etcd/pull/8721) v2 `client` source code with latest `ugorji/go/codec`.
- Fix [`/health` endpoint JSON output](https://github.com/coreos/etcd/pull/8312).
- v3 `etcdctl` [`lease timetolive LEASE_ID`](https://github.com/coreos/etcd/issues/9028) on expired lease now prints [`lease LEASE_ID already expired`](https://github.com/coreos/etcd/pull/9047).
- <=3.2 prints `lease LEASE_ID granted with TTL(0s), remaining(-1s)`.
### Added(`etcd`)
- Add [`--experimental-enable-v2v3`](https://github.com/coreos/etcd/pull/8407) flag to [emulate v2 API with v3](https://github.com/coreos/etcd/issues/6925).
- Add [`--experimental-corrupt-check-time`](https://github.com/coreos/etcd/pull/8420) flag to [raise corrupt alarm monitoring](https://github.com/coreos/etcd/issues/7125).
- Add [`--experimental-initial-corrupt-check`](https://github.com/coreos/etcd/pull/8554) flag to [check database hash before serving client/peer traffic](https://github.com/coreos/etcd/issues/8313).
- Add [`--max-txn-ops`](https://github.com/coreos/etcd/pull/7976) flag to [configure maximum number operations in transaction](https://github.com/coreos/etcd/issues/7826).
- Add [`--max-request-bytes`](https://github.com/coreos/etcd/pull/7968) flag to [configure maximum client request size](https://github.com/coreos/etcd/issues/7923).
- If not configured, it defaults to 1.5 MiB.
- Add [`--client-crl-file`, `--peer-crl-file`](https://github.com/coreos/etcd/pull/8124) flags for [Certificate revocation list](https://github.com/coreos/etcd/issues/4034).
- Add [`--peer-require-cn`](https://github.com/coreos/etcd/pull/8616) flag to support [CN-based auth for inter-peer connection](https://github.com/coreos/etcd/issues/8262).
- Add [`--listen-metrics-urls`](https://github.com/coreos/etcd/pull/8242) flag for additional `/metrics` endpoints.
- Support [additional (non) TLS `/metrics` endpoints for a TLS-enabled cluster](https://github.com/coreos/etcd/pull/8282).
- e.g. `--listen-metrics-urls=https://localhost:2378,http://localhost:9379` to serve `/metrics` in secure port 2378 and insecure port 9379.
- Useful for [bypassing critical APIs when monitoring etcd](https://github.com/coreos/etcd/issues/8060).
- Add [`--auto-compaction-mode`](https://github.com/coreos/etcd/pull/8123) flag to [support revision-based compaction](https://github.com/coreos/etcd/issues/8098).
- Change `--auto-compaction-retention` flag to [accept string values](https://github.com/coreos/etcd/pull/8563) with [finer granularity](https://github.com/coreos/etcd/issues/8503).
- Add [`--grpc-keepalive-min-time`, `--grpc-keepalive-interval`, `--grpc-keepalive-timeout`](https://github.com/coreos/etcd/pull/8535) flags to configure server-side keepalive policies.
- Serve [`/health` endpoint as unhealthy](https://github.com/coreos/etcd/pull/8272) when [alarm is raised](https://github.com/coreos/etcd/issues/8207).
- Provide [error information in `/health`](https://github.com/coreos/etcd/pull/8312).
- e.g. `{"health":false,"errors":["NOSPACE"]}`.
- Move [logging setup to embed package](https://github.com/coreos/etcd/pull/8810)
- Disable gRPC server log by default.
- Use [monotonic time in Go 1.9](https://github.com/coreos/etcd/pull/8507) for `lease` package.
- Warn on [empty hosts in advertise URLs](https://github.com/coreos/etcd/pull/8384).
- Address [advertise client URLs accepts empty hosts](https://github.com/coreos/etcd/issues/8379).
- etcd `v3.4` will exit on this error.
- e.g. `--advertise-client-urls=http://:2379`.
- Warn on [shadowed environment variables](https://github.com/coreos/etcd/pull/8385).
- Address [error on shadowed environment variables](https://github.com/coreos/etcd/issues/8380).
- etcd `v3.4` will exit on this error.
### Added(API)
- Support [ranges in transaction comparisons](https://github.com/coreos/etcd/pull/8025) for [disconnected linearized reads](https://github.com/coreos/etcd/issues/7924).
- Add [nested transactions](https://github.com/coreos/etcd/pull/8102) to extend [proxy use cases](https://github.com/coreos/etcd/issues/7857).
- Add [lease comparison target in transaction](https://github.com/coreos/etcd/pull/8324).
- Add [lease list](https://github.com/coreos/etcd/pull/8358).
- Add [hash by revision](https://github.com/coreos/etcd/pull/8263) for [better corruption checking against boltdb](https://github.com/coreos/etcd/issues/8016).
### Added(`etcd/clientv3`)
- Add [health balancer](https://github.com/coreos/etcd/pull/8545) to fix [watch API hangs](https://github.com/coreos/etcd/issues/7247), improve [endpoint switch under network faults](https://github.com/coreos/etcd/issues/7941).
- [Refactor balancer](https://github.com/coreos/etcd/pull/8840) and add [client-side keepalive pings](https://github.com/coreos/etcd/pull/8199) to handle [network partitions](https://github.com/coreos/etcd/issues/8711).
- Add [`MaxCallSendMsgSize` and `MaxCallRecvMsgSize`](https://github.com/coreos/etcd/pull/9047) fields to [`clientv3.Config`](https://godoc.org/github.com/coreos/etcd/clientv3#Config).
- Fix [exceeded response size limit error in client-side](https://github.com/coreos/etcd/issues/9043).
- Address [kubernetes#51099](https://github.com/kubernetes/kubernetes/issues/51099).
- `MaxCallSendMsgSize` default value is 2 MiB, if not configured.
- `MaxCallRecvMsgSize` default value is `math.MaxInt32`, if not configured.
- Accept [`Compare_LEASE`](https://github.com/coreos/etcd/pull/8324) in [`clientv3.Compare`](https://godoc.org/github.com/coreos/etcd/clientv3#Compare).
- Add [`LeaseValue` helper](https://github.com/coreos/etcd/pull/8488) to `Cmp` `LeaseID` values in `Txn`.
- Add [`MoveLeader`](https://github.com/coreos/etcd/pull/8153) to `Maintenance`.
- Add [`HashKV`](https://github.com/coreos/etcd/pull/8351) to `Maintenance`.
- Add [`Leases`](https://github.com/coreos/etcd/pull/8358) to `Lease`.
- Add [`clientv3/ordering`](https://github.com/coreos/etcd/pull/8092) to enforce [ordering in serialized requests](https://github.com/coreos/etcd/issues/7623).
### Added(v2 `etcdctl`)
- Add [`backup --with-v3`](https://github.com/coreos/etcd/pull/8479) flag.
### Added(v3 `etcdctl`)
- Add [`--discovery-srv`](https://github.com/coreos/etcd/pull/8462) flag.
- Add [`--keepalive-time`, `--keepalive-timeout`](https://github.com/coreos/etcd/pull/8663) flags.
- Add [`lease list`](https://github.com/coreos/etcd/pull/8358) command.
- Add [`lease keep-alive --once`](https://github.com/coreos/etcd/pull/8775) flag.
- Make [`lease timetolive LEASE_ID`](https://github.com/coreos/etcd/issues/9028) on expired lease print [`lease LEASE_ID already expired`](https://github.com/coreos/etcd/pull/9047).
- <=3.2 prints `lease LEASE_ID granted with TTL(0s), remaining(-1s)`.
- Add [`defrag --data-dir`](https://github.com/coreos/etcd/pull/8367) flag.
- Add [`move-leader`](https://github.com/coreos/etcd/pull/8153) command.
- Add [`endpoint hashkv`](https://github.com/coreos/etcd/pull/8351) command.
- Add [`endpoint --cluster`](https://github.com/coreos/etcd/pull/8143) flag, equivalent to [v2 `etcdctl cluster-health`](https://github.com/coreos/etcd/issues/8117).
- Make `endpoint health` command terminate with [non-zero exit code on unhealthy status](https://github.com/coreos/etcd/pull/8342).
- Add [`lock --ttl`](https://github.com/coreos/etcd/pull/8370) flag.
- Support [`watch [key] [range_end] -- [exec-command…]`](https://github.com/coreos/etcd/pull/8919), equivalent to [v2 `etcdctl exec-watch`](https://github.com/coreos/etcd/issues/8814).
- Enable [`clientv3.WithRequireLeader(context.Context)` for `watch`](https://github.com/coreos/etcd/pull/8672) command.
- Print [`"del"` instead of `"delete"`](https://github.com/coreos/etcd/pull/8297) in `txn` interactive mode.
- Print [`ETCD_INITIAL_ADVERTISE_PEER_URLS` in `member add`](https://github.com/coreos/etcd/pull/8332).
### Added(metrics)
- Add [`etcd --listen-metrics-urls`](https://github.com/coreos/etcd/pull/8242) flag for additional `/metrics` endpoints.
- Useful for [bypassing critical APIs when monitoring etcd](https://github.com/coreos/etcd/issues/8060).
- Add [`etcd_server_version`](https://github.com/coreos/etcd/pull/8960) Prometheus metric.
- To replace [Kubernetes `etcd-version-monitor`](https://github.com/coreos/etcd/issues/8948).
- Add [`etcd_debugging_mvcc_db_compaction_keys_total`](https://github.com/coreos/etcd/pull/8280) Prometheus metric.
- Add [`etcd_debugging_server_lease_expired_total`](https://github.com/coreos/etcd/pull/8064) Prometheus metric.
- To improve [lease revoke monitoring](https://github.com/coreos/etcd/issues/8050).
- Document [Prometheus 2.0 rules](https://github.com/coreos/etcd/pull/8879).
- Initialize gRPC server [metrics with zero values](https://github.com/coreos/etcd/pull/8878).
### Added(`grpc-proxy`)
- Add [`grpc-proxy start --experimental-leasing-prefix`](https://github.com/coreos/etcd/pull/8341) flag:
- For disconnected linearized reads.
- Based on [V system leasing](https://github.com/coreos/etcd/issues/6065).
- See ["Disconnected consistent reads with etcd" blog post](https://coreos.com/blog/coreos-labs-disconnected-consistent-reads-with-etcd).
- Add [`grpc-proxy start --experimental-serializable-ordering`](https://github.com/coreos/etcd/pull/8315) flag.
- To ensure serializable reads have monotonically increasing store revisions across endpoints.
- Add [`grpc-proxy start --metrics-addr`](https://github.com/coreos/etcd/pull/8242) flag for an additional `/metrics` endpoint.
- Set `--metrics-addr=http://[HOST]:9379` to serve `/metrics` in insecure port 9379.
- Serve [`/health` endpoint in grpc-proxy](https://github.com/coreos/etcd/pull/8322).
- Add [`grpc-proxy start --debug`](https://github.com/coreos/etcd/pull/8994) flag.
### Added(gRPC gateway)
- Replace [gRPC gateway](https://github.com/grpc-ecosystem/grpc-gateway) endpoint with [`/v3beta`](https://github.com/coreos/etcd/pull/8880).
- To deprecate [`/v3alpha`](https://github.com/coreos/etcd/issues/8125) in `v3.4`.
- Support ["authorization" token](https://github.com/coreos/etcd/pull/7999).
- Support [websocket for bi-directional streams](https://github.com/coreos/etcd/pull/8257).
- Fix [`Watch` API with gRPC gateway](https://github.com/coreos/etcd/issues/8237).
- Upgrade gRPC gateway to [v1.3.0](https://github.com/coreos/etcd/issues/8838).
### Added(`etcd/raft`)
- Add [non-voting member](https://github.com/coreos/etcd/pull/8751).
- To implement [Raft thesis 4.2.1 Catching up new servers](https://github.com/coreos/etcd/issues/8568).
- `Learner` node does not vote or promote itself.
### Added/Fixed(Security/Auth)
- Add [CRL based connection rejection](https://github.com/coreos/etcd/pull/8124) to manage [revoked certs](https://github.com/coreos/etcd/issues/4034).
- Document [TLS authentication changes](https://github.com/coreos/etcd/pull/8895):
- [Server accepts connections if IP matches, without checking DNS entries](https://github.com/coreos/etcd/pull/8223). For instance, if peer cert contains IP addresses and DNS names in Subject Alternative Name (SAN) field, and the remote IP address matches one of those IP addresses, server just accepts connection without further checking the DNS names.
- [Server supports reverse-lookup on wildcard DNS `SAN`](https://github.com/coreos/etcd/pull/8281). For instance, if peer cert contains only DNS names (no IP addresses) in Subject Alternative Name (SAN) field, server first reverse-lookups the remote IP address to get a list of names mapping to that address (e.g. `nslookup IPADDR`). Then accepts the connection if those names have a matching name with peer cert's DNS names (either by exact or wildcard match). If none is matched, server forward-lookups each DNS entry in peer cert (e.g. look up `example.default.svc` when the entry is `*.example.default.svc`), and accepts connection only when the host's resolved addresses have the matching IP address with the peer's remote IP address.
- Add [`etcd --peer-require-cn`](https://github.com/coreos/etcd/pull/8616) flag.
- To support [CommonName(CN) based auth](https://github.com/coreos/etcd/issues/8262) for inter peer connection.
- [Swap priority](https://github.com/coreos/etcd/pull/8594) of cert CommonName(CN) and username + password.
- To address ["username and password specified in the request should take priority over CN in the cert"](https://github.com/coreos/etcd/issues/8584).
- Protect [lease revoke with auth](https://github.com/coreos/etcd/pull/8031).
- Provide user's role on [auth permission error](https://github.com/coreos/etcd/pull/8164).
- Fix [auth store panic with disabled token](https://github.com/coreos/etcd/pull/8695).
- Update `golang.org/x/crypto/bcrypt` (see [golang/crypto@6c586e1](https://github.com/golang/crypto/commit/6c586e17d90a7d08bbbc4069984180dce3b04117)).
### Fixed(v2)
- [Fail-over v2 client](https://github.com/coreos/etcd/pull/8519) to next endpoint on [oneshot failure](https://github.com/coreos/etcd/issues/8515).
- [Put back `/v2/machines`](https://github.com/coreos/etcd/pull/8062) endpoint for python-etcd wrapper.
### Fixed(v3)
- Fix [range/put/delete operation metrics](https://github.com/coreos/etcd/pull/8054) with transaction:
- `etcd_debugging_mvcc_range_total`
- `etcd_debugging_mvcc_put_total`
- `etcd_debugging_mvcc_delete_total`
- `etcd_debugging_mvcc_txn_total`
- Fix [`etcd_debugging_mvcc_keys_total`](https://github.com/coreos/etcd/pull/8390) on restore.
- Fix [`etcd_debugging_mvcc_db_total_size_in_bytes`](https://github.com/coreos/etcd/pull/8120) on restore.
- Also change to [`prometheus.NewGaugeFunc`](https://github.com/coreos/etcd/pull/8150).
- Fix [backend database in-memory index corruption](https://github.com/coreos/etcd/pull/8127) issue on restore (only 3.2.0 is affected).
- Fix [watch restore from snapshot](https://github.com/coreos/etcd/pull/8427).
- Fix ["put at-most-once" in `clientv3`](https://github.com/coreos/etcd/pull/8335).
- Handle [empty key permission](https://github.com/coreos/etcd/pull/8514) in `etcdctl`.
- [Fix server crash](https://github.com/coreos/etcd/pull/8010) on [invalid transaction request from gRPC gateway](https://github.com/coreos/etcd/issues/7889).
- Fix [`clientv3.WatchResponse.Canceled`](https://github.com/coreos/etcd/pull/8283) on [compacted watch request](https://github.com/coreos/etcd/issues/8231).
- Handle [WAL renaming failure on Windows](https://github.com/coreos/etcd/pull/8286).
- Make [peer dial timeout longer](https://github.com/coreos/etcd/pull/8599).
- See [coreos/etcd-operator#1300](https://github.com/coreos/etcd-operator/issues/1300) for more detail.
- Make server [wait up to request time-out](https://github.com/coreos/etcd/pull/8267) with [pending RPCs](https://github.com/coreos/etcd/issues/8224).
- Fix [`grpc.Server` panic on `GracefulStop`](https://github.com/coreos/etcd/pull/8987) with [TLS-enabled server](https://github.com/coreos/etcd/issues/8916).
- Fix ["multiple peer URLs cannot start" issue](https://github.com/coreos/etcd/issues/8383).
- Fix server-side auth so [concurrent auth operations do not return old revision error](https://github.com/coreos/etcd/pull/8442).
- Fix [`concurrency/stm` `Put` with serializable snapshot](https://github.com/coreos/etcd/pull/8439).
- Use store revision from first fetch to resolve write conflicts instead of modified revision.
- Fix [`grpc-proxy` Snapshot API error handling](https://github.com/coreos/etcd/commit/dbd16d52fbf81e5fd806d21ff5e9148d5bf203ab).
- Fix [`grpc-proxy` KV API `PrevKv` flag handling](https://github.com/coreos/etcd/pull/8366).
- Fix [`grpc-proxy` KV API `KeysOnly` flag handling](https://github.com/coreos/etcd/pull/8552).
- Upgrade [`coreos/go-systemd`](https://github.com/coreos/go-systemd/releases) to `v15` (see https://github.com/coreos/go-systemd/releases/tag/v15).
### Other
- Support previous two minor versions (see our [new release policy](https://github.com/coreos/etcd/pull/8805)).
- `v3.3.x` is the last release cycle that supports `ACI`:
- AppC was [officially suspended](https://github.com/appc/spec#-disclaimer-), as of late 2016.
- [`acbuild`](https://github.com/containers/build#this-project-is-currently-unmaintained) is not maintained anymore.
- `*.aci` files won't be available from etcd `v3.4` release.
- Add container registry [`gcr.io/etcd-development/etcd`](https://gcr.io/etcd-development/etcd).
- [quay.io/coreos/etcd](https://quay.io/coreos/etcd) is still supported as secondary.
## [v3.2.12](https://github.com/coreos/etcd/releases/tag/v3.2.12) (2017-12-20)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.11...v3.2.12) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix [error message of `Revision` compactor](https://github.com/coreos/etcd/pull/8999) in server-side.
### Added(`etcd/clientv3`,`etcdctl/v3`)
- Add [`MaxCallSendMsgSize` and `MaxCallRecvMsgSize`](https://github.com/coreos/etcd/pull/9047) fields to [`clientv3.Config`](https://godoc.org/github.com/coreos/etcd/clientv3#Config).
- Fix [exceeded response size limit error in client-side](https://github.com/coreos/etcd/issues/9043).
- Address [kubernetes#51099](https://github.com/kubernetes/kubernetes/issues/51099).
- `MaxCallSendMsgSize` default value is 2 MiB, if not configured.
- `MaxCallRecvMsgSize` default value is `math.MaxInt32`, if not configured.
### Other
- Pin [grpc v1.7.5](https://github.com/grpc/grpc-go/releases/tag/v1.7.5), [grpc-gateway v1.3.0](https://github.com/grpc-ecosystem/grpc-gateway/releases/tag/v1.3.0).
- No code change, just to be explicit about recommended versions.
## [v3.2.11](https://github.com/coreos/etcd/releases/tag/v3.2.11) (2017-12-05)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.10...v3.2.11) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix racey grpc-go's server handler transport `WriteStatus` call to prevent [TLS-enabled etcd server crash](https://github.com/coreos/etcd/issues/8904):
- Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) `v1.7.3` to `v1.7.4`.
- Add [gRPC RPC failure warnings](https://github.com/coreos/etcd/pull/8939) to help debug such issues in the future.
- Remove `--listen-metrics-urls` flag in monitoring document (non-released in `v3.2.x`, planned for `v3.3.x`).
### Added
- Provide [more cert details](https://github.com/coreos/etcd/pull/8952/files) on TLS handshake failures.
## [v3.1.11](https://github.com/coreos/etcd/releases/tag/v3.1.11) (2017-11-28)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.10...v3.1.11) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- [#8411](https://github.com/coreos/etcd/issues/8411),[#8806](https://github.com/coreos/etcd/pull/8806) mvcc: fix watch restore from snapshot
- [#8009](https://github.com/coreos/etcd/issues/8009),[#8902](https://github.com/coreos/etcd/pull/8902) backport coreos/bbolt v1.3.1-coreos.5
## [v3.2.10](https://github.com/coreos/etcd/releases/tag/v3.2.10) (2017-11-16)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.9...v3.2.10) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Replace backend key-value database `boltdb/bolt` with [`coreos/bbolt`](https://github.com/coreos/bbolt/releases) to address [backend database size issue](https://github.com/coreos/etcd/issues/8009).
- Fix `clientv3` balancer to handle [network partitions](https://github.com/coreos/etcd/issues/8711):
- Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) `v1.2.1` to `v1.7.3`.
- Upgrade [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) `v1.2` to `v1.3`.
- Revert [discovery SRV auth `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651) to support non-wildcard subject alternative names in the certs (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more contexts).
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `etcd.local` (**not `*.etcd.local`**) as an entry in Subject Alternative Name (SAN) field.
## [v3.2.9](https://github.com/coreos/etcd/releases/tag/v3.2.9) (2017-10-06)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.8...v3.2.9) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed(Security)
- Compile with [Go 1.8.4](https://groups.google.com/d/msg/golang-nuts/sHfMg4gZNps/a-HDgDDDAAAJ).
- Update `golang.org/x/crypto/bcrypt` (see [golang/crypto@6c586e1](https://github.com/golang/crypto/commit/6c586e17d90a7d08bbbc4069984180dce3b04117)).
- Fix discovery SRV bootstrapping to [authenticate `ServerName` with `*.{ROOT_DOMAIN}`](https://github.com/coreos/etcd/pull/8651), in order to support sub-domain wildcard matching (see [issue #8445](https://github.com/coreos/etcd/issues/8445) for more contexts).
- For instance, `etcd --discovery-srv=etcd.local` will only authenticate peers/clients when the provided certs have root domain `*.etcd.local` as an entry in Subject Alternative Name (SAN) field.
## [v3.2.8](https://github.com/coreos/etcd/releases/tag/v3.2.8) (2017-09-29)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.7...v3.2.8) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix v2 client failover to next endpoint on mutable operation.
- Fix grpc-proxy to respect `KeysOnly` flag.
## [v3.2.7](https://github.com/coreos/etcd/releases/tag/v3.2.7) (2017-09-01)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.6...v3.2.7) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix server-side auth so concurrent auth operations do not return old revision error.
- Fix concurrency/stm Put with serializable snapshot
- Use store revision from first fetch to resolve write conflicts instead of modified revision.
## [v3.2.6](https://github.com/coreos/etcd/releases/tag/v3.2.6) (2017-08-21)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.5...v3.2.6).
### Fixed
- Fix watch restore from snapshot.
- Fix `etcd_debugging_mvcc_keys_total` inconsistency.
- Fix multiple URLs for `--listen-peer-urls` flag.
- Add `--enable-pprof` flag to etcd configuration file format.
## [v3.2.5](https://github.com/coreos/etcd/releases/tag/v3.2.5) (2017-08-04)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.4...v3.2.5) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Changed
- Use reverse lookup to match wildcard DNS SAN.
- Return non-zero exit code on unhealthy `endpoint health`.
### Fixed
- Fix unreachable /metrics endpoint when `--enable-v2=false`.
- Fix grpc-proxy to respect `PrevKv` flag.
### Added
- Add container registry `gcr.io/etcd-development/etcd`.
## [v3.2.4](https://github.com/coreos/etcd/releases/tag/v3.2.4) (2017-07-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.3...v3.2.4) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Do not block on active client stream when stopping server
- Fix gRPC proxy Snapshot RPC error handling
## [v3.2.3](https://github.com/coreos/etcd/releases/tag/v3.2.3) (2017-07-14)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.2...v3.2.3) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Let clients establish unlimited streams
### Added
- Tag docker images with minor versions
- e.g. `docker pull quay.io/coreos/etcd:v3.2` to fetch latest v3.2 versions
## [v3.1.10](https://github.com/coreos/etcd/releases/tag/v3.1.10) (2017-07-14)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.9...v3.1.10) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Compile with `Go 1.8.3` to fix panic on `net/http.CloseNotify`.
### Added
- Tag docker images with minor versions.
  - e.g. `docker pull quay.io/coreos/etcd:v3.1` to fetch latest v3.1 versions.
## [v3.2.2](https://github.com/coreos/etcd/releases/tag/v3.2.2) (2017-07-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.1...v3.2.2) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Improved
- Rate-limit lease revoke on expiration.
- Extend leases on promote to avoid queueing effect on lease expiration.
### Fixed
- Use user-provided listen address to connect to gRPC gateway:
  - `net.Listener` rewrites IPv4 0.0.0.0 to IPv6 [::], breaking IPv6 disabled hosts.
  - Only v3.2.0, v3.2.1 are affected.
- Accept connection with matched IP SAN but no DNS match.
  - Don't check DNS entries in certs if there's a matching IP.
- Fix `tools/benchmark` watch command.
## [v3.2.1](https://github.com/coreos/etcd/releases/tag/v3.2.1) (2017-06-23)
See [code changes](https://github.com/coreos/etcd/compare/v3.2.0...v3.2.1) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Fixed
- Fix backend database in-memory index corruption issue on restore (only 3.2.0 is affected).
- Fix gRPC gateway Txn marshaling issue.
- Fix backend database size debugging metrics.
## [v3.2.0](https://github.com/coreos/etcd/releases/tag/v3.2.0) (2017-06-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.0...v3.2.0) and [v3.2 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md) for any breaking changes.
### Improved
- Improve backend read concurrency.
### Added
- Embedded etcd
  - `Etcd.Peers` field is now `[]*peerListener`.
- RPCs
  - Add Election, Lock service.
- Native client `etcdserver/api/v3client`
  - client "embedded" in the server.
- gRPC proxy
  - Proxy endpoint discovery.
  - Namespaces.
  - Coalesce lease requests.
- v3 client
  - STM prefetching.
  - Add namespace feature.
  - Add `ErrOldCluster` with server version checking.
  - Translate `WithPrefix()` into `WithFromKey()` for empty key.
- v3 etcdctl
  - Add `check perf` command.
  - Add `--from-key` flag to role grant-permission command.
  - `lock` command takes an optional command to execute (see the sketch after this list).
- etcd flags
  - Add `--enable-v2` flag to configure v2 backend (enabled by default).
  - Add `--auth-token` flag.
- `etcd gateway`
  - Support DNS SRV priority.
- Auth
  - Support Watch API.
  - JWT tokens.
- Logging, monitoring
  - Server warns large snapshot operations.
  - Add `etcd_debugging_server_lease_expired_total` metrics.
- Security
  - Deny incoming peer certs with wrong IP SAN.
  - Resolve TLS `DNSNames` when SAN checking.
  - Reload TLS certificates on every client connection.
- Release
  - Annotate acbuild with supports-systemd-notify.
  - Add `nsswitch.conf` to Docker container image.
  - Add ppc64le, arm64(experimental) builds.
  - Compile with `Go 1.8.3`.
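A minimal sketch of the optional command to `lock`, assuming a reachable local member:

```bash
# Acquires the lock named "mutex1", runs the given command while holding it,
# and releases the lock when the command exits.
ETCDCTL_API=3 etcdctl lock mutex1 echo "holding mutex1"
```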
### Changed
- v3 client
  - `LeaseTimeToLive` returns TTL=-1 resp on lease not found.
  - `clientv3.NewFromConfigFile` is moved to `clientv3/yaml.NewConfig`.
  - concurrency package's elections updated to match RPC interfaces.
  - Let client dial endpoints not in the balancer.
- Dependencies
  - Update [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) to `v1.2.1`.
  - Update [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) to `v1.2.0`.
### Fixed
- Allow v2 snapshot over 512MB.
## [v3.1.9](https://github.com/coreos/etcd/releases/tag/v3.1.9) (2017-06-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.8...v3.1.9) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Fixed
- Allow v2 snapshot over 512MB.
## [v3.1.8](https://github.com/coreos/etcd/releases/tag/v3.1.8) (2017-05-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.7...v3.1.8) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
## [v3.1.7](https://github.com/coreos/etcd/releases/tag/v3.1.7) (2017-04-28)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.6...v3.1.7) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
## [v3.1.6](https://github.com/coreos/etcd/releases/tag/v3.1.6) (2017-04-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.5...v3.1.6) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Remove auth check in Status API.
### Fixed
- Fill in Auth API response header.
## [v3.1.5](https://github.com/coreos/etcd/releases/tag/v3.1.5) (2017-03-27)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.4...v3.1.5) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Added
- Add `/etc/nsswitch.conf` file to alpine-based Docker image.
### Fixed
- Fix raft memory leak issue.
- Fix Windows file path issues.
## [v3.1.4](https://github.com/coreos/etcd/releases/tag/v3.1.4) (2017-03-22)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.3...v3.1.4) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
## [v3.1.3](https://github.com/coreos/etcd/releases/tag/v3.1.3) (2017-03-10)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.2...v3.1.3) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Use machine default host when advertise URLs are default values (`localhost:2379,2380`) AND if listen URL is `0.0.0.0`.
### Fixed
- Fix `etcd gateway` schema handling in DNS discovery.
- Fix sd_notify behaviors in `gateway`, `grpc-proxy`.
## [v3.1.2](https://github.com/coreos/etcd/releases/tag/v3.1.2) (2017-02-24)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.1...v3.1.2) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Use IPv4 default host, by default (when IPv4 and IPv6 are available).
### Fixed
- Fix `etcd gateway` with multiple endpoints.
## [v3.1.1](https://github.com/coreos/etcd/releases/tag/v3.1.1) (2017-02-17)
See [code changes](https://github.com/coreos/etcd/compare/v3.1.0...v3.1.1) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Changed
- Compile with `Go 1.7.5`.
## [v2.3.8](https://github.com/coreos/etcd/releases/tag/v2.3.8) (2017-02-17)
See [code changes](https://github.com/coreos/etcd/compare/v2.3.7...v2.3.8).
### Changed
- Compile with `Go 1.7.5`.
## [v3.1.0](https://github.com/coreos/etcd/releases/tag/v3.1.0) (2017-01-20)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.0...v3.1.0) and [v3.1 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_1.md) for any breaking changes.
### Improved
- Faster linearizable reads (implements Raft read-index).
- v3 authentication API is now stable.
### Added
- Automatic leadership transfer when leader steps down.
- etcd flags
  - `--strict-reconfig-check` flag is set by default.
  - Add `--log-output` flag.
  - Add `--metrics` flag.
- v3 client
  - Add `SetEndpoints` method; update endpoints at runtime.
  - Add `Sync` method; auto-update endpoints at runtime.
  - Add `Lease TimeToLive` API; fetch lease information.
  - Replace `Config.Logger` field with global logger.
  - Get API responses are sorted in ascending order by default.
- v3 etcdctl
  - Add `lease timetolive` command.
  - Add `--print-value-only` flag to get command.
  - Add `--dest-prefix` flag to make-mirror command.
  - `get` command responses are sorted in ascending order by default.
- `recipes` now conform to sessions defined in `clientv3/concurrency`.
- ACI has symlinks to `/usr/local/bin/etcd*`.
- Experimental gRPC proxy feature.
### Changed
- Deprecated following gRPC metrics in favor of [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus):
  - `etcd_grpc_requests_total`
  - `etcd_grpc_requests_failed_total`
  - `etcd_grpc_active_streams`
  - `etcd_grpc_unary_requests_duration_seconds`
- etcd uses default route IP if advertise URL is not given.
- Cluster rejects removing members if quorum will be lost.
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given.
- `TLSConfig.ServerName` is ignored with user-provided certificates for backwards compatibility; to be deprecated.
  - For example, `etcd --discovery-srv=example.com` will only authenticate peers/clients when the provided certs have root domain `example.com` as an entry in Subject Alternative Name (SAN) field.
- Discovery now has upper limit for waiting on retries.
- Warn on binding listeners through domain names; to be deprecated.
## [v3.0.16](https://github.com/coreos/etcd/releases/tag/v3.0.16) (2016-11-13)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.15...v3.0.16) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.15](https://github.com/coreos/etcd/releases/tag/v3.0.15) (2016-11-11)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.14...v3.0.15) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Fixed
- Fix cancel watch request with wrong range end.
## [v3.0.14](https://github.com/coreos/etcd/releases/tag/v3.0.14) (2016-11-04)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.13...v3.0.14) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Added
- v3 `etcdctl migrate` command now supports `--no-ttl` flag to discard keys on transform.
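A hedged sketch of the flag; the `--data-dir` value is an assumption for illustration:

```bash
# Transform the v2 store into the v3 mvcc store, discarding TTL keys
# instead of converting them.
ETCDCTL_API=3 etcdctl migrate --data-dir=/var/lib/etcd --no-ttl
```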
## [v3.0.13](https://github.com/coreos/etcd/releases/tag/v3.0.13) (2016-10-24)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.12...v3.0.13) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.12](https://github.com/coreos/etcd/releases/tag/v3.0.12) (2016-10-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.11...v3.0.12) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.11](https://github.com/coreos/etcd/releases/tag/v3.0.11) (2016-10-07)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.10...v3.0.11) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Added
- Server returns previous key-value (optional)
  - `clientv3.WithPrevKV` option
  - v3 etcdctl `put,watch,del --prev-kv` flag
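A minimal sketch of the etcdctl side:

```bash
# With --prev-kv, the server returns the overwritten or deleted key-value.
ETCDCTL_API=3 etcdctl put foo bar --prev-kv
ETCDCTL_API=3 etcdctl del foo --prev-kv
```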
## [v3.0.10](https://github.com/coreos/etcd/releases/tag/v3.0.10) (2016-09-23)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.9...v3.0.10) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.9](https://github.com/coreos/etcd/releases/tag/v3.0.9) (2016-09-15)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.8...v3.0.9) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Added
- Warn on domain names on listen URLs (v3.2 will reject domain names).
## [v3.0.8](https://github.com/coreos/etcd/releases/tag/v3.0.8) (2016-09-09)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.7...v3.0.8) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- Allow only IP addresses in listen URLs (domain names are rejected).
## [v3.0.7](https://github.com/coreos/etcd/releases/tag/v3.0.7) (2016-08-31)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.6...v3.0.7) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- SRV records only allow A records (RFC 2052).
## [v3.0.6](https://github.com/coreos/etcd/releases/tag/v3.0.6) (2016-08-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.5...v3.0.6) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.5](https://github.com/coreos/etcd/releases/tag/v3.0.5) (2016-08-19)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.4...v3.0.5) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- SRV records (e.g., infra1.example.com) must match the discovery domain (i.e., example.com) if no custom certificate authority is given.
## [v3.0.4](https://github.com/coreos/etcd/releases/tag/v3.0.4) (2016-07-27)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.3...v3.0.4) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- v2 auth can now use common name from TLS certificate when `--client-cert-auth` is enabled.
### Added
- v2 `etcdctl ls` command now supports `--output=json`.
- Add /var/lib/etcd directory to etcd official Docker image.
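A short sketch of the new v2 output mode, with the flag spelled as in the entry above:

```bash
# List the root directory, emitting JSON instead of the plain listing.
etcdctl ls / --output=json
```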
## [v3.0.3](https://github.com/coreos/etcd/releases/tag/v3.0.3) (2016-07-15)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.2...v3.0.3) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- Revert Dockerfile to use `CMD`, instead of `ENTRYPOINT`, to support `etcdctl` run.
  - Docker commands for v3.0.2 won't work without specifying executable binary paths.
- v3 etcdctl default endpoints are now `127.0.0.1:2379`.
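A hedged illustration of the `CMD` vs. `ENTRYPOINT` difference (image tags assumed to exist on quay.io):

```bash
# v3.0.3 uses CMD, so the default command can simply be replaced:
docker run quay.io/coreos/etcd:v3.0.3 /usr/local/bin/etcdctl --version
# v3.0.2 uses ENTRYPOINT, so running etcdctl requires overriding the entrypoint:
docker run --entrypoint /usr/local/bin/etcdctl quay.io/coreos/etcd:v3.0.2 --version
```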
## [v3.0.2](https://github.com/coreos/etcd/releases/tag/v3.0.2) (2016-07-08)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.1...v3.0.2) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
### Changed
- Dockerfile uses `ENTRYPOINT`, instead of `CMD`, to run etcd without binary path specified.
## [v3.0.1](https://github.com/coreos/etcd/releases/tag/v3.0.1) (2016-07-01)
See [code changes](https://github.com/coreos/etcd/compare/v3.0.0...v3.0.1) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.
## [v3.0.0](https://github.com/coreos/etcd/releases/tag/v3.0.0) (2016-06-30)
See [code changes](https://github.com/coreos/etcd/compare/v2.3.0...v3.0.0) and [v3.0 upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md) for any breaking changes.

View File

@ -0,0 +1,53 @@
FROM ubuntu:17.10
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get -y update \
&& apt-get -y install \
build-essential \
gcc \
apt-utils \
pkg-config \
software-properties-common \
apt-transport-https \
libssl-dev \
sudo \
bash \
curl \
wget \
tar \
git \
&& apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y autoremove \
&& apt-get -y autoclean
ENV GOROOT /usr/local/go
ENV GOPATH /go
ENV PATH ${GOPATH}/bin:${GOROOT}/bin:${PATH}
ENV GO_VERSION REPLACE_ME_GO_VERSION
ENV GO_DOWNLOAD_URL https://storage.googleapis.com/golang
RUN rm -rf ${GOROOT} \
&& curl -s ${GO_DOWNLOAD_URL}/go${GO_VERSION}.linux-amd64.tar.gz | tar -v -C /usr/local/ -xz \
&& mkdir -p ${GOPATH}/src ${GOPATH}/bin \
&& go version
RUN mkdir -p ${GOPATH}/src/github.com/coreos/etcd
ADD . ${GOPATH}/src/github.com/coreos/etcd
RUN go get -v github.com/coreos/gofail \
&& pushd ${GOPATH}/src/github.com/coreos/etcd \
&& GO_BUILD_FLAGS="-v" ./build \
&& cp ./bin/etcd /etcd \
&& cp ./bin/etcdctl /etcdctl \
&& GO_BUILD_FLAGS="-v" FAILPOINTS=1 ./build \
&& cp ./bin/etcd /etcd-failpoints \
&& ./tools/functional-tester/build \
&& cp ./bin/etcd-agent /etcd-agent \
&& cp ./bin/etcd-tester /etcd-tester \
&& cp ./bin/etcd-runner /etcd-runner \
&& go build -v -o /benchmark ./cmd/tools/benchmark \
&& go build -v -o /etcd-test-proxy ./cmd/tools/etcd-test-proxy \
&& popd \
&& rm -rf ${GOPATH}/src/github.com/coreos/etcd

View File

@ -1,4 +1,5 @@
-FROM alpine:latest
+# TODO: move to k8s.gcr.io/build-image/debian-base:bullseye-v1.y.z when patched
+FROM debian:bullseye-20210927
 ADD etcd /usr/local/bin/
 ADD etcdctl /usr/local/bin/

View File

@ -1,10 +1,17 @@
-FROM aarch64/ubuntu:16.04
+# TODO: move to k8s.gcr.io/build-image/debian-base-arm64:bullseye-1.y.z when patched
+FROM arm64v8/debian:bullseye-20210927
 ADD etcd /usr/local/bin/
 ADD etcdctl /usr/local/bin/
 ADD var/etcd /var/etcd
 ADD var/lib/etcd /var/lib/etcd
+# Alpine Linux doesn't use pam, which means that there is no /etc/nsswitch.conf,
+# but Golang relies on /etc/nsswitch.conf to check the order of DNS resolving
+# (see https://github.com/golang/go/commit/9dee7771f561cf6aee081c0af6658cc81fac3918)
+# To fix this we just create /etc/nsswitch.conf and add the following line:
+RUN echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
 EXPOSE 2379 2380
 # Define default command.

View File

@ -1,10 +1,17 @@
-FROM ppc64le/ubuntu:16.04
+# TODO: move to k8s.gcr.io/build-image/debian-base-ppc64le:bullseye-1.y.z when patched
+FROM ppc64le/debian:bullseye-20210927
 ADD etcd /usr/local/bin/
 ADD etcdctl /usr/local/bin/
 ADD var/etcd /var/etcd
 ADD var/lib/etcd /var/lib/etcd
+# Alpine Linux doesn't use pam, which means that there is no /etc/nsswitch.conf,
+# but Golang relies on /etc/nsswitch.conf to check the order of DNS resolving
+# (see https://github.com/golang/go/commit/9dee7771f561cf6aee081c0af6658cc81fac3918)
+# To fix this we just create /etc/nsswitch.conf and add the following line:
+RUN echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf
 EXPOSE 2379 2380
 # Define default command.

Documentation/_index.md Normal file
View File

@ -0,0 +1,3 @@
---
title: etcd version 3.3.12
---

View File

@ -1,18 +0,0 @@
# Benchmarks
etcd benchmarks will be published regularly and tracked for each release below:
- [etcd v2.1.0-alpha][2.1]
- [etcd v2.2.0-rc][2.2]
- [etcd v3 demo][3.0]
# Memory Usage Benchmarks
It records expected memory usage in different scenarios.
- [etcd v2.2.0-rc][2.2-mem]
[2.1]: etcd-2-1-0-alpha-benchmarks.md
[2.2]: etcd-2-2-0-rc-benchmarks.md
[2.2-mem]: etcd-2-2-0-rc-memory-benchmarks.md
[3.0]: etcd-3-demo-benchmarks.md

View File

@ -0,0 +1,3 @@
---
title: Benchmarks
---

View File

@ -1,3 +1,7 @@
+---
+title: Benchmarking etcd v2.1.0
+---
 ## Physical machines
 GCE n1-highcpu-2 machine type

View File

@ -1,4 +1,6 @@
-# Benchmarking etcd v2.2.0
+---
+title: Benchmarking etcd v2.2.0
+---
 ## Physical Machines
@ -26,7 +28,7 @@ Go OS/Arch: linux/amd64
 Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
-The performance is calulated through results of 100 benchmark rounds.
+The performance is calculated through results of 100 benchmark rounds.
 ## Performance

View File

@ -1,4 +1,8 @@
-## Physical machines
+---
+title: Benchmarking etcd v2.2.0-rc
+---
+## Physical machine
 GCE n1-highcpu-2 machine type

View File

@ -1,3 +1,7 @@
+---
+title: Benchmarking etcd v2.2.0-rc-memory
+---
 ## Physical machine
 GCE n1-standard-2 machine type

View File

@ -1,3 +1,7 @@
+---
+title: Benchmarking etcd v3
+---
 ## Physical machines
 GCE n1-highcpu-2 machine type

View File

@ -1,4 +1,6 @@
-# Watch Memory Usage Benchmark
+---
+title: Watch Memory Usage Benchmark
+---
 *NOTE*: The watch features are under active development, and their memory usage may change as that development progresses. We do not expect it to significantly increase beyond the figures stated below.

View File

@ -1,4 +1,6 @@
-# Storage Memory Usage Benchmark
+---
+title: Storage Memory Usage Benchmark
+---
 <!---todo: link storage to storage design doc-->
 Two components of etcd storage consume physical memory. The etcd process allocates an *in-memory index* to speed key lookup. The process's *page cache*, managed by the operating system, stores recently-accessed data from disk for quick re-use.

View File

@ -1,4 +1,6 @@
-# Branch management
+---
+title: Branch management
+---
 ## Guide

View File

@ -1,4 +1,6 @@
-# Demo
+---
+title: Demo
+---
 This series of examples shows the basic procedures for working with an etcd cluster.

View File

@ -0,0 +1,3 @@
---
title: Developer guide
---

View File

@ -1,4 +1,6 @@
-### etcd concurrency API Reference
+---
+title: etcd concurrency API Reference
+---
 This is a generated documentation. Please read the proto files for more.
@ -20,7 +22,7 @@ The lock service exposes client-side locking facilities as a gRPC interface.
 | Field | Description | Type |
 | ----- | ----------- | ---- |
 | name | name is the identifier for the distributed shared lock to be acquired. | bytes |
-| lease | lease is the ID of the lease that will be attached to ownership of the lock. If the lease expires or is revoked and currently holds the lock, the lock is automatically released. Calls to Lock with the same lease will be treated as a single acquistion; locking twice with the same lease is a no-op. | int64 |
+| lease | lease is the ID of the lease that will be attached to ownership of the lock. If the lease expires or is revoked and currently holds the lock, the lock is automatically released. Calls to Lock with the same lease will be treated as a single acquisition; locking twice with the same lease is a no-op. | int64 |

View File

@ -1,16 +1,29 @@
+---
+title: Why gRPC gateway
+---
-## Why grpc-gateway
-etcd v3 uses [gRPC][grpc] for its messaging protocol. The etcd project includes a gRPC-based [Go client][go-client] and a command line utility, [etcdctl][etcdctl], for communicating with an etcd cluster through gRPC. For languages with no gRPC support, etcd provides a JSON [grpc-gateway][grpc-gateway]. This gateway serves a RESTful proxy that translates HTTP/JSON requests into gRPC messages.
+etcd v3 uses [gRPC][grpc] for its messaging protocol. The etcd project includes a gRPC-based [Go client][go-client] and a command line utility, [etcdctl][etcdctl], for communicating with an etcd cluster through gRPC. For languages with no gRPC support, etcd provides a JSON [gRPC gateway][grpc-gateway]. This gateway serves a RESTful proxy that translates HTTP/JSON requests into gRPC messages.
-## Using grpc-gateway
+## Using gRPC gateway
 The gateway accepts a [JSON mapping][json-mapping] for etcd's [protocol buffer][api-ref] message definitions. Note that `key` and `value` fields are defined as byte arrays and therefore must be base64 encoded in JSON. The following examples use `curl`, but any HTTP/JSON client should work all the same.
+### Notes
+gRPC gateway endpoint has changed since etcd v3.3:
+- etcd v3.2 or before uses only `[CLIENT-URL]/v3alpha/*`.
+- etcd v3.3 uses `[CLIENT-URL]/v3beta/*` while keeping `[CLIENT-URL]/v3alpha/*`.
+- etcd v3.4 uses `[CLIENT-URL]/v3/*` while keeping `[CLIENT-URL]/v3beta/*`.
+  - **`[CLIENT-URL]/v3alpha/*` is deprecated**.
+- etcd v3.5 or later uses only `[CLIENT-URL]/v3/*`.
+  - **`[CLIENT-URL]/v3beta/*` is deprecated**.
+gRPC-gateway does not support authentication using TLS Common Name.
 ### Put and get keys
-Use the `/v3beta/kv/range` and `/v3beta/kv/put` services to read and write keys:
+Use the `/v3/kv/range` and `/v3/kv/put` services to read and write keys:
 ```bash
 <<COMMENT
@ -19,85 +32,94 @@ foo is 'Zm9v' in Base64
 bar is 'YmFy'
 COMMENT
-curl -L http://localhost:2379/v3beta/kv/put \
+curl -L http://localhost:2379/v3/kv/put \
 -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
 # {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"}}
-curl -L http://localhost:2379/v3beta/kv/range \
+curl -L http://localhost:2379/v3/kv/range \
 -X POST -d '{"key": "Zm9v"}'
 # {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
 # get all keys prefixed with "foo"
-curl -L http://localhost:2379/v3beta/kv/range \
+curl -L http://localhost:2379/v3/kv/range \
 -X POST -d '{"key": "Zm9v", "range_end": "Zm9w"}'
 # {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"3"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}],"count":"1"}
 ```
 ### Watch keys
-Use the `/v3beta/watch` service to watch keys:
+Use the `/v3/watch` service to watch keys:
 ```bash
-curl http://localhost:2379/v3beta/watch \
+curl -N http://localhost:2379/v3/watch \
 -X POST -d '{"create_request": {"key":"Zm9v"} }' &
 # {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"1","raft_term":"2"},"created":true}}
-curl -L http://localhost:2379/v3beta/kv/put \
+curl -L http://localhost:2379/v3/kv/put \
 -X POST -d '{"key": "Zm9v", "value": "YmFy"}' >/dev/null 2>&1
 # {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"2"},"events":[{"kv":{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}}]}}
 ```
 ### Transactions
-Issue a transaction with `/v3beta/kv/txn`:
+Issue a transaction with `/v3/kv/txn`:
 ```bash
+# target CREATE
-curl -L http://localhost:2379/v3beta/kv/txn \
+curl -L http://localhost:2379/v3/kv/txn \
 -X POST \
 -d '{"compare":[{"target":"CREATE","key":"Zm9v","createRevision":"2"}],"success":[{"requestPut":{"key":"Zm9v","value":"YmFy"}}]}'
 # {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"3","raft_term":"2"},"succeeded":true,"responses":[{"response_put":{"header":{"revision":"3"}}}]}
 ```
+```bash
+# target VERSION
+curl -L http://localhost:2379/v3/kv/txn \
+-X POST \
+-d '{"compare":[{"version":"4","result":"EQUAL","target":"VERSION","key":"Zm9v"}],"success":[{"requestRange":{"key":"Zm9v"}}]}'
+# {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"6","raft_term":"3"},"succeeded":true,"responses":[{"response_range":{"header":{"revision":"6"},"kvs":[{"key":"Zm9v","create_revision":"2","mod_revision":"6","version":"4","value":"YmF6"}],"count":"1"}}]}
+```
 ### Authentication
-Set up authentication with the `/v3beta/auth` service:
+Set up authentication with the `/v3/auth` service:
 ```bash
 # create root user
-curl -L http://localhost:2379/v3beta/auth/user/add \
+curl -L http://localhost:2379/v3/auth/user/add \
 -X POST -d '{"name": "root", "password": "pass"}'
 # {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
 # create root role
-curl -L http://localhost:2379/v3beta/auth/role/add \
+curl -L http://localhost:2379/v3/auth/role/add \
 -X POST -d '{"name": "root"}'
 # {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
 # grant root role
-curl -L http://localhost:2379/v3beta/auth/user/grant \
+curl -L http://localhost:2379/v3/auth/user/grant \
 -X POST -d '{"user": "root", "role": "root"}'
 # {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
 # enable auth
-curl -L http://localhost:2379/v3beta/auth/enable -X POST -d '{}'
+curl -L http://localhost:2379/v3/auth/enable -X POST -d '{}'
 # {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"}}
 ```
-Authenticate with etcd for an authentication token using `/v3beta/auth/authenticate`:
+Authenticate with etcd for an authentication token using `/v3/auth/authenticate`:
 ```bash
 # get the auth token for the root user
-curl -L http://localhost:2379/v3beta/auth/authenticate \
+curl -L http://localhost:2379/v3/auth/authenticate \
 -X POST -d '{"name": "root", "password": "pass"}'
 # {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"1","raft_term":"2"},"token":"sssvIpwfnLAcWAQH.9"}
 ```
 Set the `Authorization` header to the authentication token to fetch a key using authentication credentials:
 ```bash
-curl -L http://localhost:2379/v3beta/kv/put \
+curl -L http://localhost:2379/v3/kv/put \
 -H 'Authorization : sssvIpwfnLAcWAQH.9' \
 -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
 # {"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
 ```
@ -108,9 +130,8 @@ Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][
 [api-ref]: ./api_reference_v3.md
 [go-client]: https://github.com/coreos/etcd/tree/master/clientv3
 [etcdctl]: https://github.com/coreos/etcd/tree/master/etcdctl
-[grpc]: http://www.grpc.io/
+[grpc]: https://www.grpc.io/
 [grpc-gateway]: https://github.com/grpc-ecosystem/grpc-gateway
 [json-mapping]: https://developers.google.com/protocol-buffers/docs/proto3#json
 [swagger]: http://swagger.io/
 [swagger-doc]: apispec/swagger/rpc.swagger.json
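Per the notes in this document, the only client-visible difference across gateway generations is the path prefix; a hedged sketch, assuming a local member whose build serves the prefix used:

```bash
# Keys and values are base64 in the JSON mapping: "foo" -> Zm9v, "bar" -> YmFy.
echo -n foo | base64
# The same put against the v3.3-era prefix and the v3.4+ prefix:
curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
curl -L http://localhost:2379/v3/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```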

View File

@ -1,4 +1,6 @@
-### etcd API Reference
+---
+title: etcd API Reference
+---
 This is a generated documentation. Please read the proto files for more.
@ -69,8 +71,8 @@ This is a generated documentation. Please read the proto files for more.
 | Alarm | AlarmRequest | AlarmResponse | Alarm activates, deactivates, and queries alarms regarding cluster health. |
 | Status | StatusRequest | StatusResponse | Status gets the status of the member. |
 | Defragment | DefragmentRequest | DefragmentResponse | Defragment defragments a member's backend database to recover storage space. |
-| Hash | HashRequest | HashResponse | Hash computes the hash of the KV's backend. This is designed for testing; do not use this in production when there are ongoing transactions. |
+| Hash | HashRequest | HashResponse | Hash computes the hash of whole backend keyspace, including key, lease, and other buckets in storage. This is designed for testing ONLY! Do not rely on this in production with ongoing transactions, since Hash operation does not hold MVCC locks. Use "HashKV" API instead for "key" bucket consistency checks. |
-| HashKV | HashKVRequest | HashKVResponse | HashKV computes the hash of all MVCC keys up to a given revision. |
+| HashKV | HashKVRequest | HashKVResponse | HashKV computes the hash of all MVCC keys up to a given revision. It only iterates "key" bucket in backend storage. |
 | Snapshot | SnapshotRequest | SnapshotResponse | Snapshot sends a snapshot of the entire backend from a member over a stream to a client. |
 | MoveLeader | MoveLeaderRequest | MoveLeaderResponse | MoveLeader requests current leader node to transfer its leadership to transferee. |
@ -226,8 +228,8 @@ Empty field.
 | Field | Description | Type |
 | ----- | ----------- | ---- |
 | role | | string |
-| key | | string |
+| key | | bytes |
-| range_end | | string |
+| range_end | | bytes |
@ -476,6 +478,31 @@ Empty field.
+##### message `LeaseCheckpoint` (etcdserver/etcdserverpb/rpc.proto)
+| Field | Description | Type |
+| ----- | ----------- | ---- |
+| ID | ID is the lease ID to checkpoint. | int64 |
+| remaining_TTL | Remaining_TTL is the remaining time until expiry of the lease. | int64 |
+##### message `LeaseCheckpointRequest` (etcdserver/etcdserverpb/rpc.proto)
+| Field | Description | Type |
+| ----- | ----------- | ---- |
+| checkpoints | | (slice of) LeaseCheckpoint |
+##### message `LeaseCheckpointResponse` (etcdserver/etcdserverpb/rpc.proto)
+| Field | Description | Type |
+| ----- | ----------- | ---- |
+| header | | ResponseHeader |
 ##### message `LeaseGrantRequest` (etcdserver/etcdserverpb/rpc.proto)
 | Field | Description | Type |
@ -706,7 +733,7 @@ Empty field.
 | count_only | count_only when set returns only the count of the keys in the range. | bool |
 | min_mod_revision | min_mod_revision is the lower bound for returned key mod revisions; all keys with lesser mod revisions will be filtered away. | int64 |
 | max_mod_revision | max_mod_revision is the upper bound for returned key mod revisions; all keys with greater mod revisions will be filtered away. | int64 |
-| min_create_revision | min_create_revision is the lower bound for returned key create revisions; all keys with lesser create trevisions will be filtered away. | int64 |
+| min_create_revision | min_create_revision is the lower bound for returned key create revisions; all keys with lesser create revisions will be filtered away. | int64 |
 | max_create_revision | max_create_revision is the upper bound for returned key create revisions; all keys with greater create revisions will be filtered away. | int64 |
@ -740,7 +767,7 @@ Empty field.
 | ----- | ----------- | ---- |
 | cluster_id | cluster_id is the ID of the cluster which sent the response. | uint64 |
 | member_id | member_id is the ID of the member which sent the response. | uint64 |
-| revision | revision is the key-value store revision when the request was applied. | int64 |
+| revision | revision is the key-value store revision when the request was applied. For watch progress responses, the header.revision indicates progress. All future events recieved in this stream are guaranteed to have a higher revision number than the header.revision number. | int64 |
 | raft_term | raft_term is the raft term when the request was applied. | uint64 |
@ -785,10 +812,13 @@ Empty field.
 | ----- | ----------- | ---- |
 | header | | ResponseHeader |
 | version | version is the cluster protocol version used by the responding member. | string |
-| dbSize | dbSize is the size of the backend database, in bytes, of the responding member. | int64 |
+| dbSize | dbSize is the size of the backend database physically allocated, in bytes, of the responding member. | int64 |
 | leader | leader is the member ID which the responding member believes is the current leader. | uint64 |
-| raftIndex | raftIndex is the current raft index of the responding member. | uint64 |
+| raftIndex | raftIndex is the current raft committed index of the responding member. | uint64 |
 | raftTerm | raftTerm is the current raft term of the responding member. | uint64 |
+| raftAppliedIndex | raftAppliedIndex is the current raft applied index of the responding member. | uint64 |
+| errors | errors contains alarm/health information and status. | (slice of) string |
+| dbSizeInUse | dbSizeInUse is the size of the backend database logically in use, in bytes, of the responding member. | int64 |
@ -832,6 +862,16 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
 | progress_notify | progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server may decide how often it will send notifications based on current load. | bool |
 | filters | filters filter the events at server side before it sends back to the watcher. | (slice of) FilterType |
 | prev_kv | If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned. | bool |
+| watch_id | If watch_id is provided and non-zero, it will be assigned to this watcher. Since creating a watcher in etcd is not a synchronous operation, this can be used ensure that ordering is correct when creating multiple watchers on the same stream. Creating a watcher with an ID already in use on the stream will cause an error to be returned. | int64 |
+| fragment | fragment enables splitting large revisions into multiple watch responses. | bool |
+##### message `WatchProgressRequest` (etcdserver/etcdserverpb/rpc.proto)
+Requests the a watch stream progress status be sent in the watch response stream as soon as possible.
+Empty field.
@ -842,6 +882,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
 | request_union | request_union is a request to either create a new watcher or cancel an existing watcher. | oneof |
 | create_request | | WatchCreateRequest |
 | cancel_request | | WatchCancelRequest |
+| progress_request | | WatchProgressRequest |
@ -855,6 +896,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
 | canceled | canceled is set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher. | bool |
 | compact_revision | compact_revision is set to the minimum index if a watcher tries to watch at a compacted index. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The client should treat the watcher as canceled and should not try to create any watcher with the same start_revision again. | int64 |
 | cancel_reason | cancel_reason indicates the reason for canceling the watcher. | string |
+| fragment | framgment is true if large watch response was split over multiple responses. | bool |
 | events | | (slice of) mvccpb.Event |
@ -888,6 +930,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
 | Field | Description | Type |
 | ----- | ----------- | ---- |
 | ID | | int64 |
 | TTL | | int64 |
+| RemainingTTL | | int64 |
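The new `watch_id`, `fragment`, and progress fields can be exercised through the JSON gateway; a hedged sketch, assuming a server new enough to serve `/v3/watch` and these fields (proto3 maps int64 to JSON strings):

```bash
# Create a watcher with a caller-chosen ID, fragmenting large revisions and
# requesting periodic progress notifications ("Zm9v" is base64 for "foo").
curl -N http://localhost:2379/v3/watch -X POST \
  -d '{"create_request": {"key": "Zm9v", "watch_id": "7", "fragment": true, "progress_notify": true}}'
```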

View File

@ -15,7 +15,7 @@
 "version": "version not set"
 },
 "paths": {
-"/v3beta/auth/authenticate": {
+"/v3/auth/authenticate": {
 "post": {
 "tags": [
 "Auth"
@ -34,7 +34,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthenticateResponse"
 }
@ -42,7 +42,7 @@
 }
 }
 },
-"/v3beta/auth/disable": {
+"/v3/auth/disable": {
 "post": {
 "tags": [
 "Auth"
@ -61,7 +61,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthDisableResponse"
 }
@ -69,7 +69,7 @@
 }
 }
 },
-"/v3beta/auth/enable": {
+"/v3/auth/enable": {
 "post": {
 "tags": [
 "Auth"
@ -88,7 +88,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthEnableResponse"
 }
@ -96,7 +96,7 @@
 }
 }
 },
-"/v3beta/auth/role/add": {
+"/v3/auth/role/add": {
 "post": {
 "tags": [
 "Auth"
@ -115,7 +115,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthRoleAddResponse"
 }
@ -123,7 +123,7 @@
 }
 }
 },
-"/v3beta/auth/role/delete": {
+"/v3/auth/role/delete": {
 "post": {
 "tags": [
 "Auth"
@ -142,7 +142,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthRoleDeleteResponse"
 }
@ -150,7 +150,7 @@
 }
 }
 },
-"/v3beta/auth/role/get": {
+"/v3/auth/role/get": {
 "post": {
 "tags": [
 "Auth"
@ -169,7 +169,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthRoleGetResponse"
 }
@ -177,7 +177,7 @@
 }
 }
 },
-"/v3beta/auth/role/grant": {
+"/v3/auth/role/grant": {
 "post": {
 "tags": [
 "Auth"
@ -196,7 +196,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthRoleGrantPermissionResponse"
 }
@ -204,7 +204,7 @@
 }
 }
 },
-"/v3beta/auth/role/list": {
+"/v3/auth/role/list": {
 "post": {
 "tags": [
 "Auth"
@ -223,7 +223,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthRoleListResponse"
 }
@ -231,7 +231,7 @@
 }
 }
 },
-"/v3beta/auth/role/revoke": {
+"/v3/auth/role/revoke": {
 "post": {
 "tags": [
 "Auth"
@ -250,7 +250,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthRoleRevokePermissionResponse"
 }
@ -258,7 +258,7 @@
 }
 }
 },
-"/v3beta/auth/user/add": {
+"/v3/auth/user/add": {
 "post": {
 "tags": [
 "Auth"
@ -277,7 +277,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthUserAddResponse"
 }
@ -285,7 +285,7 @@
 }
 }
 },
-"/v3beta/auth/user/changepw": {
+"/v3/auth/user/changepw": {
 "post": {
 "tags": [
 "Auth"
@ -304,7 +304,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthUserChangePasswordResponse"
 }
@ -312,7 +312,7 @@
 }
 }
 },
-"/v3beta/auth/user/delete": {
+"/v3/auth/user/delete": {
 "post": {
 "tags": [
 "Auth"
@ -331,7 +331,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthUserDeleteResponse"
 }
@ -339,7 +339,7 @@
 }
 }
 },
-"/v3beta/auth/user/get": {
+"/v3/auth/user/get": {
 "post": {
 "tags": [
 "Auth"
@ -358,7 +358,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthUserGetResponse"
 }
@ -366,7 +366,7 @@
 }
 }
 },
-"/v3beta/auth/user/grant": {
+"/v3/auth/user/grant": {
 "post": {
 "tags": [
 "Auth"
@ -385,7 +385,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthUserGrantRoleResponse"
 }
@ -393,7 +393,7 @@
 }
 }
 },
-"/v3beta/auth/user/list": {
+"/v3/auth/user/list": {
 "post": {
 "tags": [
 "Auth"
@ -412,7 +412,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthUserListResponse"
 }
@ -420,7 +420,7 @@
 }
 }
 },
-"/v3beta/auth/user/revoke": {
+"/v3/auth/user/revoke": {
 "post": {
 "tags": [
 "Auth"
@ -439,7 +439,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbAuthUserRevokeRoleResponse"
 }
@ -447,7 +447,7 @@
 }
 }
 },
-"/v3beta/cluster/member/add": {
+"/v3/cluster/member/add": {
 "post": {
 "tags": [
 "Cluster"
@ -466,7 +466,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbMemberAddResponse"
 }
@ -474,7 +474,7 @@
 }
 }
 },
-"/v3beta/cluster/member/list": {
+"/v3/cluster/member/list": {
 "post": {
 "tags": [
 "Cluster"
@ -493,7 +493,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbMemberListResponse"
 }
@ -501,7 +501,7 @@
 }
 }
 },
-"/v3beta/cluster/member/remove": {
+"/v3/cluster/member/remove": {
 "post": {
 "tags": [
 "Cluster"
@ -520,7 +520,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbMemberRemoveResponse"
 }
@ -528,7 +528,7 @@
 }
 }
 },
-"/v3beta/cluster/member/update": {
+"/v3/cluster/member/update": {
 "post": {
 "tags": [
 "Cluster"
@ -547,7 +547,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbMemberUpdateResponse"
 }
@ -555,7 +555,7 @@
 }
 }
 },
-"/v3beta/kv/compaction": {
+"/v3/kv/compaction": {
 "post": {
 "tags": [
 "KV"
@ -574,7 +574,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbCompactionResponse"
 }
@ -582,7 +582,7 @@
 }
 }
 },
-"/v3beta/kv/deleterange": {
+"/v3/kv/deleterange": {
 "post": {
 "tags": [
 "KV"
@ -601,7 +601,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbDeleteRangeResponse"
 }
@ -609,13 +609,13 @@
 }
 }
 },
-"/v3beta/kv/lease/leases": {
+"/v3/kv/lease/leases": {
 "post": {
 "tags": [
 "Lease"
 ],
 "summary": "LeaseLeases lists all existing leases.",
-"operationId": "LeaseLeases",
+"operationId": "LeaseLeases2",
 "parameters": [
 {
 "name": "body",
@ -628,7 +628,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbLeaseLeasesResponse"
 }
@ -636,13 +636,13 @@
 }
 }
 },
-"/v3beta/kv/lease/revoke": {
+"/v3/kv/lease/revoke": {
 "post": {
 "tags": [
 "Lease"
 ],
 "summary": "LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted.",
-"operationId": "LeaseRevoke",
+"operationId": "LeaseRevoke2",
 "parameters": [
 {
 "name": "body",
@ -655,7 +655,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbLeaseRevokeResponse"
 }
@ -663,13 +663,13 @@
 }
 }
 },
-"/v3beta/kv/lease/timetolive": {
+"/v3/kv/lease/timetolive": {
 "post": {
 "tags": [
 "Lease"
 ],
 "summary": "LeaseTimeToLive retrieves lease information.",
-"operationId": "LeaseTimeToLive",
+"operationId": "LeaseTimeToLive2",
 "parameters": [
 {
 "name": "body",
@ -682,7 +682,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbLeaseTimeToLiveResponse"
 }
@ -690,7 +690,7 @@
 }
 }
 },
-"/v3beta/kv/put": {
+"/v3/kv/put": {
 "post": {
 "tags": [
 "KV"
@ -709,7 +709,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbPutResponse"
 }
@ -717,7 +717,7 @@
 }
 }
 },
-"/v3beta/kv/range": {
+"/v3/kv/range": {
 "post": {
 "tags": [
 "KV"
@ -736,7 +736,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbRangeResponse"
 }
@ -744,7 +744,7 @@
 }
 }
 },
-"/v3beta/kv/txn": {
+"/v3/kv/txn": {
 "post": {
 "tags": [
 "KV"
@ -763,7 +763,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbTxnResponse"
 }
@ -771,7 +771,7 @@
 }
 }
 },
-"/v3beta/lease/grant": {
+"/v3/lease/grant": {
 "post": {
 "tags": [
 "Lease"
@ -790,7 +790,7 @@
 ],
 "responses": {
 "200": {
-"description": "(empty)",
+"description": "A successful response.",
 "schema": {
 "$ref": "#/definitions/etcdserverpbLeaseGrantResponse"
 }
@ -798,7 +798,7 @@
 }
 }
 },
-"/v3beta/lease/keepalive": {
+"/v3/lease/keepalive": {
 "post": {
 "tags": [
 "Lease"
@ -807,7 +807,7 @@
 "operationId": "LeaseKeepAlive",
 "parameters": [
 {
-"description": "(streaming inputs)",
+"description": " (streaming inputs)",
"name": "body", "name": "body",
"in": "body", "in": "body",
"required": true, "required": true,
@ -818,7 +818,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(streaming responses)", "description": "A successful response.(streaming responses)",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbLeaseKeepAliveResponse" "$ref": "#/definitions/etcdserverpbLeaseKeepAliveResponse"
} }
@ -826,7 +826,88 @@
} }
} }
}, },
"/v3beta/maintenance/alarm": { "/v3/lease/leases": {
"post": {
"tags": [
"Lease"
],
"summary": "LeaseLeases lists all existing leases.",
"operationId": "LeaseLeases",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseLeasesRequest"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseLeasesResponse"
}
}
}
}
},
"/v3/lease/revoke": {
"post": {
"tags": [
"Lease"
],
"summary": "LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted.",
"operationId": "LeaseRevoke",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseRevokeRequest"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseRevokeResponse"
}
}
}
}
},
"/v3/lease/timetolive": {
"post": {
"tags": [
"Lease"
],
"summary": "LeaseTimeToLive retrieves lease information.",
"operationId": "LeaseTimeToLive",
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseTimeToLiveRequest"
}
}
],
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseTimeToLiveResponse"
}
}
}
}
},
"/v3/maintenance/alarm": {
"post": { "post": {
"tags": [ "tags": [
"Maintenance" "Maintenance"
@ -845,7 +926,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(empty)", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbAlarmResponse" "$ref": "#/definitions/etcdserverpbAlarmResponse"
} }
@ -853,7 +934,7 @@
} }
} }
}, },
"/v3beta/maintenance/defragment": { "/v3/maintenance/defragment": {
"post": { "post": {
"tags": [ "tags": [
"Maintenance" "Maintenance"
@ -872,7 +953,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(empty)", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbDefragmentResponse" "$ref": "#/definitions/etcdserverpbDefragmentResponse"
} }
@ -880,12 +961,12 @@
} }
} }
}, },
"/v3beta/maintenance/hash": { "/v3/maintenance/hash": {
"post": { "post": {
"tags": [ "tags": [
"Maintenance" "Maintenance"
], ],
"summary": "HashKV computes the hash of all MVCC keys up to a given revision.", "summary": "HashKV computes the hash of all MVCC keys up to a given revision.\nIt only iterates \"key\" bucket in backend storage.",
"operationId": "HashKV", "operationId": "HashKV",
"parameters": [ "parameters": [
{ {
@ -899,7 +980,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(empty)", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbHashKVResponse" "$ref": "#/definitions/etcdserverpbHashKVResponse"
} }
@ -907,7 +988,7 @@
} }
} }
}, },
"/v3beta/maintenance/snapshot": { "/v3/maintenance/snapshot": {
"post": { "post": {
"tags": [ "tags": [
"Maintenance" "Maintenance"
@ -926,7 +1007,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(streaming responses)", "description": "A successful response.(streaming responses)",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbSnapshotResponse" "$ref": "#/definitions/etcdserverpbSnapshotResponse"
} }
@ -934,7 +1015,7 @@
} }
} }
}, },
"/v3beta/maintenance/status": { "/v3/maintenance/status": {
"post": { "post": {
"tags": [ "tags": [
"Maintenance" "Maintenance"
@ -953,7 +1034,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(empty)", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbStatusResponse" "$ref": "#/definitions/etcdserverpbStatusResponse"
} }
@ -961,7 +1042,7 @@
} }
} }
}, },
"/v3beta/maintenance/transfer-leadership": { "/v3/maintenance/transfer-leadership": {
"post": { "post": {
"tags": [ "tags": [
"Maintenance" "Maintenance"
@ -980,7 +1061,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(empty)", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbMoveLeaderResponse" "$ref": "#/definitions/etcdserverpbMoveLeaderResponse"
} }
@ -988,7 +1069,7 @@
} }
} }
}, },
"/v3beta/watch": { "/v3/watch": {
"post": { "post": {
"tags": [ "tags": [
"Watch" "Watch"
@ -997,7 +1078,7 @@
"operationId": "Watch", "operationId": "Watch",
"parameters": [ "parameters": [
{ {
"description": "(streaming inputs)", "description": " (streaming inputs)",
"name": "body", "name": "body",
"in": "body", "in": "body",
"required": true, "required": true,
@ -1008,7 +1089,7 @@
], ],
"responses": { "responses": {
"200": { "200": {
"description": "(streaming responses)", "description": "A successful response.(streaming responses)",
"schema": { "schema": {
"$ref": "#/definitions/etcdserverpbWatchResponse" "$ref": "#/definitions/etcdserverpbWatchResponse"
} }
@ -1286,10 +1367,12 @@
"type": "object", "type": "object",
"properties": { "properties": {
"key": { "key": {
"type": "string" "type": "string",
"format": "byte"
}, },
"range_end": { "range_end": {
"type": "string" "type": "string",
"format": "byte"
}, },
"role": { "role": {
"type": "string" "type": "string"
@ -2017,7 +2100,7 @@
"format": "int64" "format": "int64"
}, },
"min_create_revision": { "min_create_revision": {
"description": "min_create_revision is the lower bound for returned key create revisions; all keys with\nlesser create trevisions will be filtered away.", "description": "min_create_revision is the lower bound for returned key create revisions; all keys with\nlesser create revisions will be filtered away.",
"type": "string", "type": "string",
"format": "int64" "format": "int64"
}, },
@ -2112,7 +2195,7 @@
"format": "uint64" "format": "uint64"
}, },
"revision": { "revision": {
"description": "revision is the key-value store revision when the request was applied.", "description": "revision is the key-value store revision when the request was applied.\nFor watch progress responses, the header.revision indicates progress. All future events\nrecieved in this stream are guaranteed to have a higher revision number than the\nheader.revision number.",
"type": "string", "type": "string",
"format": "int64" "format": "int64"
} }
@ -2164,10 +2247,22 @@
"type": "object", "type": "object",
"properties": { "properties": {
"dbSize": { "dbSize": {
"description": "dbSize is the size of the backend database, in bytes, of the responding member.", "description": "dbSize is the size of the backend database physically allocated, in bytes, of the responding member.",
"type": "string", "type": "string",
"format": "int64" "format": "int64"
}, },
"dbSizeInUse": {
"description": "dbSizeInUse is the size of the backend database logically in use, in bytes, of the responding member.",
"type": "string",
"format": "int64"
},
"errors": {
"description": "errors contains alarm/health information and status.",
"type": "array",
"items": {
"type": "string"
}
},
"header": { "header": {
"$ref": "#/definitions/etcdserverpbResponseHeader" "$ref": "#/definitions/etcdserverpbResponseHeader"
}, },
@ -2176,8 +2271,13 @@
"type": "string", "type": "string",
"format": "uint64" "format": "uint64"
}, },
"raftAppliedIndex": {
"description": "raftAppliedIndex is the current raft applied index of the responding member.",
"type": "string",
"format": "uint64"
},
"raftIndex": { "raftIndex": {
"description": "raftIndex is the current raft index of the responding member.", "description": "raftIndex is the current raft committed index of the responding member.",
"type": "string", "type": "string",
"format": "uint64" "format": "uint64"
}, },
@ -2259,6 +2359,11 @@
"$ref": "#/definitions/WatchCreateRequestFilterType" "$ref": "#/definitions/WatchCreateRequestFilterType"
} }
}, },
"fragment": {
"description": "fragment enables splitting large revisions into multiple watch responses.",
"type": "boolean",
"format": "boolean"
},
"key": { "key": {
"description": "key is the key to register for watching.", "description": "key is the key to register for watching.",
"type": "string", "type": "string",
@ -2283,9 +2388,18 @@
"description": "start_revision is an optional revision to watch from (inclusive). No start_revision is \"now\".", "description": "start_revision is an optional revision to watch from (inclusive). No start_revision is \"now\".",
"type": "string", "type": "string",
"format": "int64" "format": "int64"
},
"watch_id": {
"description": "If watch_id is provided and non-zero, it will be assigned to this watcher.\nSince creating a watcher in etcd is not a synchronous operation,\nthis can be used ensure that ordering is correct when creating multiple\nwatchers on the same stream. Creating a watcher with an ID already in\nuse on the stream will cause an error to be returned.",
"type": "string",
"format": "int64"
} }
} }
}, },
"etcdserverpbWatchProgressRequest": {
"description": "Requests the a watch stream progress status be sent in the watch response stream as soon as\npossible.",
"type": "object"
},
"etcdserverpbWatchRequest": { "etcdserverpbWatchRequest": {
"type": "object", "type": "object",
"properties": { "properties": {
@ -2294,6 +2408,9 @@
}, },
"create_request": { "create_request": {
"$ref": "#/definitions/etcdserverpbWatchCreateRequest" "$ref": "#/definitions/etcdserverpbWatchCreateRequest"
},
"progress_request": {
"$ref": "#/definitions/etcdserverpbWatchProgressRequest"
} }
} }
}, },
@ -2325,6 +2442,11 @@
"$ref": "#/definitions/mvccpbEvent" "$ref": "#/definitions/mvccpbEvent"
} }
}, },
"fragment": {
"description": "framgment is true if large watch response was split over multiple responses.",
"type": "boolean",
"format": "boolean"
},
"header": { "header": {
"$ref": "#/definitions/etcdserverpbResponseHeader" "$ref": "#/definitions/etcdserverpbResponseHeader"
}, },

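The rename above moves every gRPC gateway route from `/v3beta/` to `/v3/`. As a rough sketch of what calling one of the renamed routes looks like (not part of the diff; it assumes a local etcd v3.4+ server with its gRPC gateway on `localhost:2379`, and relies on the gateway encoding protobuf `bytes` fields such as `key` as base64):

```go
package main

import (
	"bytes"
	"encoding/base64"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// The gRPC gateway encodes protobuf bytes fields (key, range_end, value) as base64.
	key := base64.StdEncoding.EncodeToString([]byte("foo"))
	body := bytes.NewBufferString(fmt.Sprintf(`{"key": %q}`, key))

	// /v3/kv/range replaces the /v3beta/kv/range path shown on the left of the diff.
	resp, err := http.Post("http://localhost:2379/v3/kv/range", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(out)) // JSON-encoded RangeResponse, e.g. {"header":{...},"kvs":[...]}
}
```

Clients pinned to the old `/v3beta/` prefix would need the path updated; the request and response bodies themselves are unchanged by the rename.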

@ -15,13 +15,13 @@
"application/json" "application/json"
], ],
"paths": { "paths": {
"/v3beta/election/campaign": { "/v3/election/campaign": {
"post": { "post": {
"summary": "Campaign waits to acquire leadership in an election, returning a LeaderKey\nrepresenting the leadership if successful. The LeaderKey can then be used\nto issue new values on the election, transactionally guard API requests on\nleadership still being held, and resign from the election.", "summary": "Campaign waits to acquire leadership in an election, returning a LeaderKey\nrepresenting the leadership if successful. The LeaderKey can then be used\nto issue new values on the election, transactionally guard API requests on\nleadership still being held, and resign from the election.",
"operationId": "Campaign", "operationId": "Campaign",
"responses": { "responses": {
"200": { "200": {
"description": "", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/v3electionpbCampaignResponse" "$ref": "#/definitions/v3electionpbCampaignResponse"
} }
@ -42,13 +42,13 @@
] ]
} }
}, },
"/v3beta/election/leader": { "/v3/election/leader": {
"post": { "post": {
"summary": "Leader returns the current election proclamation, if any.", "summary": "Leader returns the current election proclamation, if any.",
"operationId": "Leader", "operationId": "Leader",
"responses": { "responses": {
"200": { "200": {
"description": "", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/v3electionpbLeaderResponse" "$ref": "#/definitions/v3electionpbLeaderResponse"
} }
@ -69,13 +69,13 @@
] ]
} }
}, },
"/v3beta/election/observe": { "/v3/election/observe": {
"post": { "post": {
"summary": "Observe streams election proclamations in-order as made by the election's\nelected leaders.", "summary": "Observe streams election proclamations in-order as made by the election's\nelected leaders.",
"operationId": "Observe", "operationId": "Observe",
"responses": { "responses": {
"200": { "200": {
"description": "(streaming responses)", "description": "A successful response.(streaming responses)",
"schema": { "schema": {
"$ref": "#/definitions/v3electionpbLeaderResponse" "$ref": "#/definitions/v3electionpbLeaderResponse"
} }
@ -96,13 +96,13 @@
] ]
} }
}, },
"/v3beta/election/proclaim": { "/v3/election/proclaim": {
"post": { "post": {
"summary": "Proclaim updates the leader's posted value with a new value.", "summary": "Proclaim updates the leader's posted value with a new value.",
"operationId": "Proclaim", "operationId": "Proclaim",
"responses": { "responses": {
"200": { "200": {
"description": "", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/v3electionpbProclaimResponse" "$ref": "#/definitions/v3electionpbProclaimResponse"
} }
@ -123,13 +123,13 @@
] ]
} }
}, },
"/v3beta/election/resign": { "/v3/election/resign": {
"post": { "post": {
"summary": "Resign releases election leadership so other campaigners may acquire\nleadership on the election.", "summary": "Resign releases election leadership so other campaigners may acquire\nleadership on the election.",
"operationId": "Resign", "operationId": "Resign",
"responses": { "responses": {
"200": { "200": {
"description": "", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/v3electionpbResignResponse" "$ref": "#/definitions/v3electionpbResignResponse"
} }
@ -168,7 +168,7 @@
"revision": { "revision": {
"type": "string", "type": "string",
"format": "int64", "format": "int64",
"description": "revision is the key-value store revision when the request was applied." "description": "revision is the key-value store revision when the request was applied.\nFor watch progress responses, the header.revision indicates progress. All future events\nrecieved in this stream are guaranteed to have a higher revision number than the\nheader.revision number."
}, },
"raft_term": { "raft_term": {
"type": "string", "type": "string",


@ -15,13 +15,13 @@
"application/json" "application/json"
], ],
"paths": { "paths": {
"/v3beta/lock/lock": { "/v3/lock/lock": {
"post": { "post": {
"summary": "Lock acquires a distributed shared lock on a given named lock.\nOn success, it will return a unique key that exists so long as the\nlock is held by the caller. This key can be used in conjunction with\ntransactions to safely ensure updates to etcd only occur while holding\nlock ownership. The lock is held until Unlock is called on the key or the\nlease associate with the owner expires.", "summary": "Lock acquires a distributed shared lock on a given named lock.\nOn success, it will return a unique key that exists so long as the\nlock is held by the caller. This key can be used in conjunction with\ntransactions to safely ensure updates to etcd only occur while holding\nlock ownership. The lock is held until Unlock is called on the key or the\nlease associate with the owner expires.",
"operationId": "Lock", "operationId": "Lock",
"responses": { "responses": {
"200": { "200": {
"description": "", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/v3lockpbLockResponse" "$ref": "#/definitions/v3lockpbLockResponse"
} }
@ -42,13 +42,13 @@
] ]
} }
}, },
"/v3beta/lock/unlock": { "/v3/lock/unlock": {
"post": { "post": {
"summary": "Unlock takes a key returned by Lock and releases the hold on lock. The\nnext Lock caller waiting for the lock will then be woken up and given\nownership of the lock.", "summary": "Unlock takes a key returned by Lock and releases the hold on lock. The\nnext Lock caller waiting for the lock will then be woken up and given\nownership of the lock.",
"operationId": "Unlock", "operationId": "Unlock",
"responses": { "responses": {
"200": { "200": {
"description": "", "description": "A successful response.",
"schema": { "schema": {
"$ref": "#/definitions/v3lockpbUnlockResponse" "$ref": "#/definitions/v3lockpbUnlockResponse"
} }
@ -87,7 +87,7 @@
"revision": { "revision": {
"type": "string", "type": "string",
"format": "int64", "format": "int64",
"description": "revision is the key-value store revision when the request was applied." "description": "revision is the key-value store revision when the request was applied.\nFor watch progress responses, the header.revision indicates progress. All future events\nrecieved in this stream are guaranteed to have a higher revision number than the\nheader.revision number."
}, },
"raft_term": { "raft_term": {
"type": "string", "type": "string",
@ -107,7 +107,7 @@
"lease": { "lease": {
"type": "string", "type": "string",
"format": "int64", "format": "int64",
"description": "lease is the ID of the lease that will be attached to ownership of the\nlock. If the lease expires or is revoked and currently holds the lock,\nthe lock is automatically released. Calls to Lock with the same lease will\nbe treated as a single acquistion; locking twice with the same lease is a\nno-op." "description": "lease is the ID of the lease that will be attached to ownership of the\nlock. If the lease expires or is revoked and currently holds the lock,\nthe lock is automatically released. Calls to Lock with the same lease will\nbe treated as a single acquisition; locking twice with the same lease is a\nno-op."
} }
} }
}, },


@ -1,7 +1,9 @@
# Experimental APIs and features ---
title: Experimental APIs and features
---
For the most part, the etcd project is stable, but we are still moving fast! We believe in the release fast philosophy. We want to get early feedback on features still in development and stabilizing. Thus, there are, and will be more, experimental features and APIs. We plan to improve these features based on the early feedback from the community, or abandon them if there is little interest, in the next few releases. Please do not rely on any experimental features or APIs in a production environment. For the most part, the etcd project is stable, but we are still moving fast! We believe in the release fast philosophy. We want to get early feedback on features still in development and stabilizing. Thus, there are, and will be more, experimental features and APIs. We plan to improve these features based on the early feedback from the community, or abandon them if there is little interest, in the next few releases. Please do not rely on any experimental features or APIs in a production environment.
## The current experimental API/features are: ## The current experimental API/features are:
- [KV ordering](https://godoc.org/github.com/coreos/etcd/clientv3/ordering) wrapper. When an etcd client switches endpoints, responses to serializable reads may go backward in time if the new endpoint is lagging behind the rest of the cluster. The ordering wrapper caches the current cluster revision from response headers. If a response revision is less than the cached revision, the client selects another endpoint and reissues the read. Enable in grpcproxy with `--experimental-serializable-ordering`. - [KV ordering](https://godoc.org/github.com/etcd-io/etcd/clientv3/ordering) wrapper. When an etcd client switches endpoints, responses to serializable reads may go backward in time if the new endpoint is lagging behind the rest of the cluster. The ordering wrapper caches the current cluster revision from response headers. If a response revision is less than the cached revision, the client selects another endpoint and reissues the read. Enable in grpcproxy with `--experimental-serializable-ordering`.
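A minimal client-side sketch of wiring in the wrapper, assuming the `clientv3/ordering` package's `NewKV` and `NewOrderViolationSwitchEndpointClosure` helpers and a three-member cluster on localhost; treat the exact signatures as illustrative rather than authoritative:

```go
package main

import (
	"context"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/ordering"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"localhost:2379", "localhost:22379", "localhost:32379"},
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Wrap the raw KV: if a response's revision is lower than the highest
	// revision already seen, the violation callback fires; this particular
	// callback switches endpoints so the read can be reissued elsewhere.
	kv := ordering.NewKV(cli.KV, ordering.NewOrderViolationSwitchEndpointClosure(*cli))

	// Serializable reads through the wrapper can no longer go back in time.
	if _, err := kv.Get(context.TODO(), "foo", clientv3.WithSerializable()); err != nil {
		panic(err)
	}
}
```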


@ -1,4 +1,6 @@
# gRPC naming and discovery ---
title: gRPC naming and discovery
---
etcd provides a gRPC resolver to support an alternative name system that fetches endpoints from etcd for discovering gRPC services. The underlying mechanism is based on watching updates to keys prefixed with the service name. etcd provides a gRPC resolver to support an alternative name system that fetches endpoints from etcd for discovering gRPC services. The underlying mechanism is based on watching updates to keys prefixed with the service name.
@ -8,8 +10,8 @@ The etcd client provides a gRPC resolver for resolving gRPC endpoints with an et
```go ```go
import ( import (
"github.com/coreos/etcd/clientv3" "go.etcd.io/etcd/clientv3"
etcdnaming "github.com/coreos/etcd/clientv3/naming" etcdnaming "go.etcd.io/etcd/clientv3/naming"
"google.golang.org/grpc" "google.golang.org/grpc"
) )
@ -19,7 +21,7 @@ import (
cli, cerr := clientv3.NewFromURL("http://localhost:2379") cli, cerr := clientv3.NewFromURL("http://localhost:2379")
r := &etcdnaming.GRPCResolver{Client: cli} r := &etcdnaming.GRPCResolver{Client: cli}
b := grpc.RoundRobin(r) b := grpc.RoundRobin(r)
conn, gerr := grpc.Dial("my-service", grpc.WithBalancer(b)) conn, gerr := grpc.Dial("my-service", grpc.WithBalancer(b), grpc.WithBlock(), ...)
``` ```
## Managing service endpoints ## Managing service endpoints
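Endpoints under a service name are managed through the same resolver. A sketch along the lines of the full document's example, assuming an etcd reachable at `localhost:2379`:

```go
package main

import (
	"context"

	"go.etcd.io/etcd/clientv3"
	etcdnaming "go.etcd.io/etcd/clientv3/naming"
	"google.golang.org/grpc/naming"
)

func main() {
	cli, err := clientv3.NewFromURL("http://localhost:2379")
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	r := &etcdnaming.GRPCResolver{Client: cli}

	// Adding an endpoint writes a key prefixed with the service name,
	// which watching resolvers pick up.
	if err := r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Add, Addr: "1.2.3.4"}); err != nil {
		panic(err)
	}

	// Deleting the endpoint removes it from the resolver's view.
	if err := r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Delete, Addr: "1.2.3.4"}); err != nil {
		panic(err)
	}
}
```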


@ -1,8 +1,12 @@
# Interacting with etcd ---
title: Interacting with etcd
---
Users mostly interact with etcd by putting or getting the value of a key. This section describes how to do that by using etcdctl, a command line tool for interacting with etcd server. The concepts described here should apply to the gRPC APIs or client library APIs. Users mostly interact with etcd by putting or getting the value of a key. This section describes how to do that by using etcdctl, a command line tool for interacting with etcd server. The concepts described here should apply to the gRPC APIs or client library APIs.
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable. However note that any key that was created using the v2 API will not be able to be queried via the v3 API. A v3 API ```etcdctl get``` of a v2 key will exit with 0 and no key data, this is the expected behaviour. The API version used by etcdctl to speak to etcd may be set to version `2` or `3` via the `ETCDCTL_API` environment variable. By default, etcdctl on master (3.4) uses the v3 API and earlier versions (3.3 and earlier) default to the v2 API.
Note that any key that was created using the v2 API will not be able to be queried via the v3 API. A v3 API ```etcdctl get``` of a v2 key will exit with 0 and no key data; this is the expected behaviour.
```bash ```bash
@ -355,6 +359,26 @@ foo # key
bar_latest # value of foo key after modification bar_latest # value of foo key after modification
``` ```
## Watch progress
Applications may want to check the progress of a watch to determine how up-to-date the watch stream is. For example, if a watch is used to update a cache, it can be useful to know if the cache is stale compared to the revision from a quorum read.
Progress requests can be issued using the "progress" command in an interactive watch session to ask the etcd server to send a progress notify update in the watch stream:
```bash
$ etcdctl watch -i
$ watch a
$ progress
progress notify: 1
# in another terminal: etcdctl put x 0
# in another terminal: etcdctl put y 1
$ progress
progress notify: 3
```
Note: The revision number in the progress notify response is the revision from the local etcd server node that the watch stream is connected to. If this node is partitioned and not part of quorum, this progress notify revision might be lower than the revision returned by a quorum read against a non-partitioned etcd server node.
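The same progress request can be issued programmatically; a minimal sketch assuming the v3.4+ `clientv3` package, whose `Watcher` interface exposes `RequestProgress`:

```go
package main

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.Background()
	wch := cli.Watch(ctx, "a")

	// Ask the server for a progress notification on this watch stream;
	// it arrives as an event-free WatchResponse.
	if err := cli.RequestProgress(ctx); err != nil {
		panic(err)
	}

	for resp := range wch {
		if resp.IsProgressNotify() {
			fmt.Println("progress notify:", resp.Header.Revision)
			continue
		}
		for _, ev := range resp.Events {
			fmt.Printf("%s %q : %q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```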
## Compacted revisions ## Compacted revisions
As we mentioned, etcd keeps revisions so that applications can read past versions of keys. However, to avoid accumulating an unbounded amount of history, it is important to compact past revisions. After compacting, etcd removes historical revisions, releasing resources for future use. All superseded data with revisions before the compacted revision will be unavailable. As we mentioned, etcd keeps revisions so that applications can read past versions of keys. However, to avoid accumulating an unbounded amount of history, it is important to compact past revisions. After compacting, etcd removes historical revisions, releasing resources for future use. All superseded data with revisions before the compacted revision will be unavailable.


@ -1,10 +1,11 @@
# System limits ---
title: System limits
---
## Request size limit ## Request size limit
etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. For the time being, etcd guarantees to support RPC requests with up to 1MB of data. In the future, the size limit may be loosened or made configurable. etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. By default, the maximum size of any request is 1.5 MiB. This limit is configurable through the `--max-request-bytes` flag on the etcd server.
## Storage size limit ## Storage size limit
The default storage size limit is 2GB, configurable with `--quota-backend-bytes` flag; supports up to 8GB. The default storage size limit is 2GB, configurable with the `--quota-backend-bytes` flag. 8GB is a suggested maximum size for normal environments, and etcd warns at startup if the configured value exceeds it.
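Both limits can also be set when embedding etcd; a sketch assuming the `embed` package's `MaxRequestBytes` and `QuotaBackendBytes` config fields (the values here are illustrative, matching the default and suggested maximum above):

```go
package main

import (
	"log"

	"go.etcd.io/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"

	// Counterparts of the CLI flags described above.
	cfg.MaxRequestBytes = 1572864                   // --max-request-bytes: 1.5 MiB
	cfg.QuotaBackendBytes = 8 * 1024 * 1024 * 1024 // --quota-backend-bytes: 8 GB

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
	log.Println("etcd is ready with the configured limits")
}
```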


@ -1,4 +1,6 @@
# Set up a local cluster ---
title: Set up a local cluster
---
For testing and development deployments, the quickest and easiest way is to configure a local cluster. For a production deployment, refer to the [clustering][clustering] section. For testing and development deployments, the quickest and easiest way is to configure a local cluster. For a production deployment, refer to the [clustering][clustering] section.
@ -21,14 +23,7 @@ The running etcd member listens on `localhost:2379` for client requests.
Use `etcdctl` to interact with the running cluster: Use `etcdctl` to interact with the running cluster:
1. Configure the environment to have `ETCDCTL_API=3` so `etcdctl` uses the etcd API version 3 instead of defaulting to version 2. 1. Store an example key-value pair in the cluster:
```
# use API version 3
$ export ETCDCTL_API=3
```
2. Store an example key-value pair in the cluster:
``` ```
$ ./etcdctl put foo bar $ ./etcdctl put foo bar
@ -37,7 +32,7 @@ Use `etcdctl` to interact with the running cluster:
If OK is printed, storing the key-value pair was successful. If OK is printed, storing the key-value pair was successful.
3. Retrieve the value of `foo`: 2. Retrieve the value of `foo`:
``` ```
$ ./etcdctl get foo $ ./etcdctl get foo
@ -70,14 +65,7 @@ A `Procfile` at the base of the etcd git repository is provided to easily config
Use `etcdctl` to interact with the running cluster: Use `etcdctl` to interact with the running cluster:
1. Configure the environment to have `ETCDCTL_API=3` so `etcdctl` uses the etcd API version 3 instead of defaulting to version 2. 1. Print the list of members:
```
# use API version 3
$ export ETCDCTL_API=3
```
2. Print the list of members:
``` ```
$ etcdctl --write-out=table --endpoints=localhost:2379 member list $ etcdctl --write-out=table --endpoints=localhost:2379 member list
@ -94,7 +82,7 @@ Use `etcdctl` to interact with the running cluster:
+------------------+---------+--------+------------------------+------------------------+ +------------------+---------+--------+------------------------+------------------------+
``` ```
3. Store an example key-value pair in the cluster: 2. Store an example key-value pair in the cluster:
``` ```
$ etcdctl put foo bar $ etcdctl put foo bar


@ -1,4 +1,6 @@
# Discovery service protocol ---
title: Discovery service protocol
---
The discovery service protocol helps a new etcd member discover all other members during the cluster bootstrap phase using a shared discovery URL. The discovery service protocol helps a new etcd member discover all other members during the cluster bootstrap phase using a shared discovery URL.


@ -1,4 +1,6 @@
# Logging conventions ---
title: Logging conventions
---
etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*. A log message's level is determined according to these conventions: etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*. A log message's level is determined according to these conventions:


@ -1,4 +1,6 @@
# etcd release guide ---
title: etcd release guide
---
The guide talks about how to release a new version of etcd. The guide talks about how to release a new version of etcd.
@ -13,7 +15,8 @@ release and for ensuring the stability of the release branch.
| Releases | Manager | | Releases | Manager |
| -------- | ------- | | -------- | ------- |
| 3.1 patch (post 3.1.0) | Joe Betz [@jpbetz](https://github.com/jpbetz) | | 3.1 patch (post 3.1.0) | Joe Betz [@jpbetz](https://github.com/jpbetz) |
| 3.2 patch (post 3.2.0) | Gyuho Lee [@gyuho](https://github.com/gyuho) | | 3.2 patch (post 3.2.0) | Joe Betz [@jpbetz](https://github.com/jpbetz) |
| 3.3 patch (post 3.3.0) | Gyuho Lee [@gyuho](https://github.com/gyuho) |
## Prepare release ## Prepare release
@ -29,9 +32,9 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
### Major, minor version release, or its pre-release ### Major, minor version release, or its pre-release
- Ensure the relevant milestone on GitHub is complete. All referenced issues should be closed, or moved elsewhere. - Ensure the relevant milestone on GitHub is complete. All referenced issues should be closed, or moved elsewhere.
- Remove this release from [roadmap](https://github.com/coreos/etcd/blob/master/ROADMAP.md), if necessary. - Remove this release from [roadmap](https://github.com/etcd-io/etcd/blob/master/ROADMAP.md), if necessary.
- Ensure the latest upgrade documentation is available. - Ensure the latest upgrade documentation is available.
- Bump [hardcoded MinClusterVersion in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L29), if necessary. - Bump [hardcoded MinClusterVersion in the repository](https://github.com/etcd-io/etcd/blob/master/version/version.go#L29), if necessary.
- Add feature capability maps for the new version, if necessary. - Add feature capability maps for the new version, if necessary.
### Patch version release ### Patch version release
@ -49,14 +52,14 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Tag version ## Tag version
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`. - Bump [hardcoded Version in the repository](https://github.com/etcd-io/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`.
- Ensure all tests on the CI system pass. - Ensure all tests on the CI system pass.
- Manually check that etcd builds on Linux, Darwin, and Windows. - Manually check that etcd builds on Linux, Darwin, and Windows.
- Manually check that upgrading an etcd cluster from the previous minor version works well. - Manually check that upgrading an etcd cluster from the previous minor version works well.
- Manually check that new features work well. - Manually check that new features work well.
- Add a signed tag through `git tag -s ${VERSION}`. - Add a signed tag through `git tag -s ${VERSION}`.
- Sanity check tag correctness through `git show tags/$VERSION`. - Sanity check tag correctness through `git show tags/$VERSION`.
- Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/coreos/etcd". - Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/etcd-io/etcd".
## Build release binaries and images ## Build release binaries and images
@ -79,15 +82,15 @@ The following commands are used for public release sign:
``` ```
cd release cd release
for i in etcd-*{.zip,.tar.gz,.aci}; do gpg2 --default-key $SUBKEYID --armor --output ${i}.asc --detach-sign ${i}; done for i in etcd-*{.zip,.tar.gz}; do gpg2 --default-key $SUBKEYID --armor --output ${i}.asc --detach-sign ${i}; done
for i in etcd-*{.zip,.tar.gz,.aci}; do gpg2 --verify ${i}.asc ${i}; done for i in etcd-*{.zip,.tar.gz}; do gpg2 --verify ${i}.asc ${i}; done
# sign zipped source code files # sign zipped source code files
wget https://github.com/coreos/etcd/archive/${VERSION}.zip wget https://github.com/etcd-io/etcd/archive/${VERSION}.zip
gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.zip.asc --detach-sign ${VERSION}.zip gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.zip.asc --detach-sign ${VERSION}.zip
gpg2 --verify ${VERSION}.zip.asc ${VERSION}.zip gpg2 --verify ${VERSION}.zip.asc ${VERSION}.zip
wget https://github.com/coreos/etcd/archive/${VERSION}.tar.gz wget https://github.com/etcd-io/etcd/archive/${VERSION}.tar.gz
gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.tar.gz.asc --detach-sign ${VERSION}.tar.gz gpg2 --armor --default-key $SUBKEYID --output ${VERSION}.tar.gz.asc --detach-sign ${VERSION}.tar.gz
gpg2 --verify ${VERSION}.tar.gz.asc ${VERSION}.tar.gz gpg2 --verify ${VERSION}.tar.gz.asc ${VERSION}.tar.gz
``` ```
@ -99,7 +102,7 @@ The public key for GPG signing can be found at [CoreOS Application Signing Key](
- Set release title as the version name. - Set release title as the version name.
- Follow the format of previous release pages. - Follow the format of previous release pages.
- Attach the generated binaries, aci image and signatures. - Attach the generated binaries and signatures.
- Select whether it is a pre-release. - Select whether it is a pre-release.
- Publish the release! - Publish the release!
@ -155,5 +158,5 @@ git log ...${PREV_VERSION} --pretty=format:"%an" | sort | uniq | tr '\n' ',' | s
## Post release ## Post release
- Create new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/coreos/etcd". - Create new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/etcd-io/etcd".
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`. - Bump [hardcoded Version in the repository](https://github.com/etcd-io/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`.


@ -1,4 +1,7 @@
# Download and build ---
title: Download and build
weight: 1
---
## System requirements ## System requirements
@ -15,7 +18,7 @@ For those wanting to try the very latest version, build etcd from the `master` b
To build `etcd` from the `master` branch without a `GOPATH` using the official `build` script: To build `etcd` from the `master` branch without a `GOPATH` using the official `build` script:
```sh ```sh
$ git clone https://github.com/coreos/etcd.git $ git clone https://github.com/etcd-io/etcd.git
$ cd etcd $ cd etcd
$ ./build $ ./build
``` ```
@ -26,16 +29,8 @@ To build a vendored `etcd` from the `master` branch via `go get`:
# GOPATH should be set # GOPATH should be set
$ echo $GOPATH $ echo $GOPATH
/Users/example/go /Users/example/go
$ go get github.com/coreos/etcd/cmd/etcd $ go get -v go.etcd.io/etcd
``` $ go get -v go.etcd.io/etcd/etcdctl
To build `etcd` from the `master` branch without vendoring (may not build due to upstream conflicts):
```sh
# GOPATH should be set
$ echo $GOPATH
/Users/example/go
$ go get github.com/coreos/etcd
``` ```
## Test the installation ## Test the installation
@ -44,14 +39,14 @@ Check the etcd binary is built correctly by starting etcd and setting a key.
### Starting etcd ### Starting etcd
If etcd is built without using GOPATH, run the following: If etcd is built without using `go get`, run the following:
``` ```sh
$ ./bin/etcd $ ./bin/etcd
``` ```
If etcd is built using GOPATH, run the following: If etcd is built using `go get`, run the following:
``` ```sh
$ $GOPATH/bin/etcd $ $GOPATH/bin/etcd
``` ```
@ -59,14 +54,16 @@ $ $GOPATH/bin/etcd
Run the following: Run the following:
``` ```sh
$ ETCDCTL_API=3 ./bin/etcdctl put foo bar $ ./bin/etcdctl put foo bar
OK OK
``` ```
(or `$GOPATH/bin/etcdctl put foo bar` if etcdctl was installed with `go get`)
If OK is printed, then etcd is working! If OK is printed, then etcd is working!
[github-release]: https://github.com/coreos/etcd/releases/ [github-release]: https://github.com/etcd-io/etcd/releases/
[go]: https://golang.org/doc/install [go]: https://golang.org/doc/install
[build-script]: ../build [build-script]: ../build
[cmd-directory]: ../cmd [cmd-directory]: ../cmd


@ -1,114 +0,0 @@
# Documentation
etcd is a distributed key-value store designed to reliably and quickly preserve and provide access to critical data. It enables reliable distributed coordination through distributed locking, leader elections, and write barriers. An etcd cluster is intended for high availability and permanent data storage and retrieval.
## Getting started
New etcd users and developers should get started by [downloading and building][download_build] etcd. After getting etcd, follow this [quick demo][demo] to see the basics of creating and working with an etcd cluster.
## Developing with etcd
The easiest way to get started using etcd as a distributed key-value store is to [set up a local cluster][local_cluster].
- [Setting up local clusters][local_cluster]
- [Interacting with etcd][interacting]
- gRPC [etcd core][api_ref] and [etcd concurrency][api_concurrency_ref] API references
- [HTTP JSON API through the gRPC gateway][api_grpc_gateway]
- [gRPC naming and discovery][grpc_naming]
- [Client][namespace_client] and [proxy][namespace_proxy] namespacing
- [Embedding etcd][embed_etcd]
- [Experimental features and APIs][experimental]
- [System limits][system-limit]
## Operating etcd clusters
Administrators who need a fault-tolerant etcd cluster for either development or production should begin with a [cluster on multiple machines][clustering].
### Setting up etcd
- [Configuration flags][conf]
- [Multi-member cluster][clustering]
- [gRPC proxy][grpc_proxy]
- [L4 gateway][gateway]
### System configuration
- [Supported systems][supported_platforms]
- [Hardware recommendations][hardware]
- [Performance benchmarking][performance]
- [Tuning][tuning]
### Platform guides
- [Amazon Web Services][aws_platform]
- [Container Linux, systemd][container_linux_platform]
- [FreeBSD][freebsd_platform]
- [Docker container][container_docker]
- [rkt container][container_rkt]
### Security
- [TLS][security]
- [Role-based access control][authentication]
### Maintenance and troubleshooting
- [Frequently asked questions][faq]
- [Monitoring][monitoring]
- [Maintenance][maintenance]
- [Failure modes][failures]
- [Disaster recovery][recovery]
- [Upgrading][upgrading]
## Learning
To learn more about the concepts and internals behind etcd, read the following pages:
- [Why etcd?][why]
- [Understand data model][data_model]
- [Understand APIs][understand_apis]
- [Glossary][glossary]
- Internals
- [Auth subsystem][auth_design]
[api_ref]: dev-guide/api_reference_v3.md
[api_concurrency_ref]: dev-guide/api_concurrency_reference_v3.md
[api_grpc_gateway]: dev-guide/api_grpc_gateway.md
[clustering]: op-guide/clustering.md
[conf]: op-guide/configuration.md
[system-limit]: dev-guide/limit.md
[faq]: faq.md
[why]: learning/why.md
[data_model]: learning/data_model.md
[demo]: demo.md
[download_build]: dl_build.md
[embed_etcd]: https://godoc.org/github.com/coreos/etcd/embed
[grpc_naming]: dev-guide/grpc_naming.md
[failures]: op-guide/failures.md
[gateway]: op-guide/gateway.md
[glossary]: learning/glossary.md
[namespace_client]: https://godoc.org/github.com/coreos/etcd/clientv3/namespace
[namespace_proxy]: op-guide/grpc_proxy.md#namespacing
[grpc_proxy]: op-guide/grpc_proxy.md
[hardware]: op-guide/hardware.md
[interacting]: dev-guide/interacting_v3.md
[local_cluster]: dev-guide/local_cluster.md
[performance]: op-guide/performance.md
[recovery]: op-guide/recovery.md
[maintenance]: op-guide/maintenance.md
[security]: op-guide/security.md
[monitoring]: op-guide/monitoring.md
[v2_migration]: op-guide/v2-migration.md
[container_rkt]: op-guide/container.md#rkt
[container_docker]: op-guide/container.md#docker
[understand_apis]: learning/api.md
[versioning]: op-guide/versioning.md
[supported_platforms]: op-guide/supported-platform.md
[container_linux_platform]: platforms/container-linux-systemd.md
[freebsd_platform]: platforms/freebsd.md
[aws_platform]: platforms/aws.md
[experimental]: dev-guide/experimental_apis.md
[authentication]: op-guide/authentication.md
[auth_design]: learning/auth_design.md
[tuning]: tuning.md
[upgrading]: upgrades/upgrading-etcd.md


@ -1,4 +1,6 @@
# Frequently Asked Questions (FAQ) ---
title: Frequently Asked Questions (FAQ)
---
## etcd, general ## etcd, general
@ -22,7 +24,7 @@ A member's advertised peer URLs come from `--initial-advertise-peer-urls` on ini
### System requirements ### System requirements
Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a 2GB default storage size quota, configurable up to 8GB. To avoid swapping or running out of memory, the machine should have at least as much RAM as the quota. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations. Since etcd writes data to disk, SSD is highly recommended. To prevent performance degradation or unintentionally overloading the key-value store, etcd enforces a configurable storage size quota set to 2GB by default. To avoid swapping or running out of memory, the machine should have at least as much RAM as the quota. 8GB is a suggested maximum size for normal environments, and etcd warns at startup if the configured value exceeds it. At CoreOS, an etcd cluster is usually deployed on dedicated CoreOS Container Linux machines with dual-core processors, 2GB of RAM, and 80GB of SSD *at the very least*. **Note that performance is intrinsically workload dependent; please test before production deployment**. See [hardware][hardware-setup] for more recommendations.
Most stable production environment is Linux operating system with amd64 architecture; see [supported platform][supported-platform] for more. Most stable production environment is Linux operating system with amd64 architecture; see [supported platform][supported-platform] for more.
@ -102,6 +104,12 @@ To recover from the low space quota alarm:
2. [Defragment][maintenance-defragment] every etcd endpoint. 2. [Defragment][maintenance-defragment] every etcd endpoint.
3. [Disarm][maintenance-disarm] the alarm. 3. [Disarm][maintenance-disarm] the alarm.
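A minimal `clientv3` sketch of the three recovery steps above, assuming an endpoint at `localhost:2379`; passing an empty `AlarmMember` to `AlarmDisarm` asks the client to disarm all active alarms:

```go
package main

import (
	"context"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	ctx := context.Background()

	// 1. Compact away old revisions up to the current revision.
	status, err := cli.Status(ctx, "localhost:2379")
	if err != nil {
		panic(err)
	}
	if _, err := cli.Compact(ctx, status.Header.Revision); err != nil {
		panic(err)
	}

	// 2. Defragment the endpoint to reclaim the freed space.
	if _, err := cli.Defragment(ctx, "localhost:2379"); err != nil {
		panic(err)
	}

	// 3. Disarm the NOSPACE alarm once space is reclaimed.
	if _, err := cli.AlarmDisarm(ctx, &clientv3.AlarmMember{}); err != nil {
		panic(err)
	}
}
```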
### What does the etcd warning "etcdserver/api/v3rpc: transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:2379->127.0.0.1:43020: read: connection reset by peer" mean?
This is a gRPC-side warning emitted when a server receives a TCP RST flag while client-side streams are being prematurely closed. For example, a client closes its connection while the gRPC server has not yet processed all HTTP/2 frames in the TCP queue. Some data may have been lost on the server side, but that is OK as long as the client connection has already been closed.
Only [old versions of gRPC](https://github.com/grpc/grpc-go/issues/1362) log this. etcd [>=v3.2.13 logs this at DEBUG level by default](https://github.com/etcd-io/etcd/pull/9080), so it is only visible with the `--debug` flag enabled.
## Performance ## Performance
### How should I benchmark etcd? ### How should I benchmark etcd?
@ -141,14 +149,14 @@ etcd sends a snapshot of its complete key-value store to refresh slow followers
[supported-platform]: ./op-guide/supported-platform.md [supported-platform]: ./op-guide/supported-platform.md
[wal_fsync_duration_seconds]: ./metrics.md#disk [wal_fsync_duration_seconds]: ./metrics.md#disk
[tuning]: ./tuning.md [tuning]: ./tuning.md
[new_issue]: https://github.com/coreos/etcd/issues/new [new_issue]: https://github.com/etcd-io/etcd/issues/new
[backend_commit_metrics]: ./metrics.md#disk [backend_commit_metrics]: ./metrics.md#disk
[raft]: https://raft.github.io/raft.pdf [raft]: https://raft.github.io/raft.pdf
[backup]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md#snapshotting-the-keyspace [backup]: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md#snapshotting-the-keyspace
[chubby]: http://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf [chubby]: http://static.googleusercontent.com/media/research.google.com/en//archive/chubby-osdi06.pdf
[runtime reconfiguration]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md [runtime reconfiguration]: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/runtime-configuration.md
[benchmark]: https://github.com/coreos/etcd/tree/master/tools/benchmark [benchmark]: https://github.com/coreos/etcd/tree/master/tools/benchmark
[benchmark-result]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/performance.md [benchmark-result]: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/performance.md
[api-mvcc]: learning/api.md#revisions [api-mvcc]: learning/api.md#revisions
[maintenance-compact]: op-guide/maintenance.md#history-compaction [maintenance-compact]: op-guide/maintenance.md#history-compaction
[maintenance-defragment]: op-guide/maintenance.md#defragmentation [maintenance-defragment]: op-guide/maintenance.md#defragmentation


@ -1,8 +1,11 @@
# Libraries and tools ---
title: Libraries and tools
weight: 2
---
**Tools** **Tools**
- [etcdctl](https://github.com/coreos/etcd/tree/master/etcdctl) - A command line client for etcd - [etcdctl](https://github.com/etcd-io/etcd/tree/master/etcdctl) - A command line client for etcd
- [etcd-backup](https://github.com/fanhattan/etcd-backup) - A powerful command line utility for dumping/restoring etcd - Supports v2 - [etcd-backup](https://github.com/fanhattan/etcd-backup) - A powerful command line utility for dumping/restoring etcd - Supports v2
- [etcd-dump](https://npmjs.org/package/etcd-dump) - Command line utility for dumping/restoring etcd. - [etcd-dump](https://npmjs.org/package/etcd-dump) - Command line utility for dumping/restoring etcd.
- [etcd-fs](https://github.com/xetorthio/etcd-fs) - FUSE filesystem for etcd - [etcd-fs](https://github.com/xetorthio/etcd-fs) - FUSE filesystem for etcd
@ -15,17 +18,18 @@
- [etcd-rest](https://github.com/mickep76/etcd-rest) - Create generic REST API in Go using etcd as a backend with validation using JSON schema - [etcd-rest](https://github.com/mickep76/etcd-rest) - Create generic REST API in Go using etcd as a backend with validation using JSON schema
- [etcdsh](https://github.com/kamilhark/etcdsh) - A command line client with support of command history and tab completion. Supports v2 - [etcdsh](https://github.com/kamilhark/etcdsh) - A command line client with support of command history and tab completion. Supports v2
- [etcdloadtest](https://github.com/sinsharat/etcdloadtest) - A command line load test client for etcd version 3.0 and above. - [etcdloadtest](https://github.com/sinsharat/etcdloadtest) - A command line load test client for etcd version 3.0 and above.
- [lucas](https://github.com/ringtail/lucas) - A web-based key-value viewer for kubernetes etcd3.0+ cluster.
**Go libraries** **Go libraries**
- [etcd/clientv3](https://github.com/coreos/etcd/blob/master/clientv3) - the officially maintained Go client for v3 - [etcd/clientv3](https://github.com/etcd-io/etcd/blob/master/clientv3) - the officially maintained Go client for v3
- [etcd/client](https://github.com/coreos/etcd/blob/master/client) - the officially maintained Go client for v2 - [etcd/client](https://github.com/etcd-io/etcd/blob/master/client) - the officially maintained Go client for v2
- [go-etcd](https://github.com/coreos/go-etcd) - the deprecated official client. May be useful for older (<2.0.0) versions of etcd. - [go-etcd](https://github.com/coreos/go-etcd) - the deprecated official client. May be useful for older (<2.0.0) versions of etcd.
- [encWrapper](https://github.com/lumjjb/etcd/tree/enc_wrapper/clientwrap/encwrapper) - encWrapper is an encryption wrapper for the etcd client Keys API/KV. - [encWrapper](https://github.com/lumjjb/etcd/tree/enc_wrapper/clientwrap/encwrapper) - encWrapper is an encryption wrapper for the etcd client Keys API/KV.
**Java libraries** **Java libraries**
- [coreos/jetcd](https://github.com/coreos/jetcd) - Supports v3 - [coreos/jetcd](https://github.com/etcd-io/jetcd) - Supports v3
- [boonproject/etcd](https://github.com/boonproject/boon/blob/master/etcd/README.md) - Supports v2, Async/Sync and waits - [boonproject/etcd](https://github.com/boonproject/boon/blob/master/etcd/README.md) - Supports v2, Async/Sync and waits
- [justinsb/jetcd](https://github.com/justinsb/jetcd) - [justinsb/jetcd](https://github.com/justinsb/jetcd)
- [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2 - [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2
@ -53,6 +57,7 @@
- [txaio-etcd](https://github.com/crossbario/txaio-etcd) - Asynchronous etcd v3-only client library for Twisted (today) and asyncio (future) - [txaio-etcd](https://github.com/crossbario/txaio-etcd) - Asynchronous etcd v3-only client library for Twisted (today) and asyncio (future)
- [dims/etcd3-gateway](https://github.com/dims/etcd3-gateway) - etcd v3 API library using the HTTP grpc gateway - [dims/etcd3-gateway](https://github.com/dims/etcd3-gateway) - etcd v3 API library using the HTTP grpc gateway
- [aioetcd3](https://github.com/gaopeiliang/aioetcd3) - (Python 3.6+) etcd v3 API for asyncio - [aioetcd3](https://github.com/gaopeiliang/aioetcd3) - (Python 3.6+) etcd v3 API for asyncio
- [Revolution1/etcd3-py](https://github.com/Revolution1/etcd3-py) - (python2.7 and python3.5+) Python client for etcd v3, using gRPC-JSON-Gateway
**Node libraries** **Node libraries**
@ -88,17 +93,20 @@
**Erlang libraries** **Erlang libraries**
- [marshall-lee/etcd.erl](https://github.com/marshall-lee/etcd.erl) - [marshall-lee/etcd.erl](https://github.com/marshall-lee/etcd.erl) - Supports v2
- [zhongwencool/eetcd](https://github.com/zhongwencool/eetcd) - Supports v3+ (GRPC only)
**.Net Libraries** **.Net Libraries**
- [wangjia184/etcdnet](https://github.com/wangjia184/etcdnet) - Supports v2 - [wangjia184/etcdnet](https://github.com/wangjia184/etcdnet) - Supports v2
- [drusellers/etcetera](https://github.com/drusellers/etcetera) - [drusellers/etcetera](https://github.com/drusellers/etcetera)
- [shubhamranjan/dotnet-etcd](https://github.com/shubhamranjan/dotnet-etcd) - Supports v3+ (GRPC only)
**PHP Libraries** **PHP Libraries**
- [linkorb/etcd-php](https://github.com/linkorb/etcd-php) - [linkorb/etcd-php](https://github.com/linkorb/etcd-php)
- [activecollab/etcd](https://github.com/activecollab/etcd) - [activecollab/etcd](https://github.com/activecollab/etcd)
- [ouqiang/etcd-php](https://github.com/ouqiang/etcd-php) - Client for v3 gRPC gateway
**Haskell libraries** **Haskell libraries**
@ -138,6 +146,7 @@
- [cloudfoundry/cf-release](https://github.com/cloudfoundry/cf-release/tree/master/jobs/etcd) - [cloudfoundry/cf-release](https://github.com/cloudfoundry/cf-release/tree/master/jobs/etcd)
**Projects using etcd** **Projects using etcd**
- [etcd Raft users](../raft/README.md#notable-users) - projects using etcd's raft library implementation. - [etcd Raft users](../raft/README.md#notable-users) - projects using etcd's raft library implementation.
- [apache/celix](https://github.com/apache/celix) - an implementation of the OSGi specification adapted to C and C++ - [apache/celix](https://github.com/apache/celix) - an implementation of the OSGi specification adapted to C and C++
- [binocarlos/yoda](https://github.com/binocarlos/yoda) - etcd + ZeroMQ - [binocarlos/yoda](https://github.com/binocarlos/yoda) - etcd + ZeroMQ
@ -152,7 +161,6 @@
- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.
- [duedil-ltd/discodns](https://github.com/duedil-ltd/discodns) - Simple DNS nameserver using etcd as a database for names and records.
@ -165,4 +173,9 @@
- [Vitess](http://vitess.io/) - Vitess is a database clustering system for horizontal scaling of MySQL.
- [lclarkmichalek/etcdhcp](https://github.com/lclarkmichalek/etcdhcp) - DHCP server that uses etcd for persistence and coordination.
- [openstack/networking-vpp](https://github.com/openstack/networking-vpp) - A networking driver that programs the [FD.io VPP dataplane](https://wiki.fd.io/view/VPP) to provide [OpenStack](https://www.openstack.org/) cloud virtual networking
- [OpenStack](https://github.com/openstack/governance/blob/master/reference/base-services.rst) - OpenStack services can rely on etcd as a base service.
- [CoreDNS](https://github.com/coredns/coredns/tree/master/plugin/etcd) - CoreDNS is a DNS server that chains plugins, part of CNCF and Kubernetes
- [Uber M3](https://github.com/m3db/m3) - M3: Uber's Open Source, Large-scale Metrics Platform for Prometheus
- [Rook](https://github.com/rook/rook) - Storage Orchestration for Kubernetes
- [Patroni](https://github.com/zalando/patroni) - A template for PostgreSQL High Availability with ZooKeeper, etcd, or Consul
- [Trillian](https://github.com/google/trillian) - Trillian implements a Merkle tree whose contents are served from a data storage layer, to allow scalability to extremely large trees.

View File

@ -0,0 +1,3 @@
---
title: Learning
---

View File

@ -1,4 +1,6 @@
---
title: etcd3 API
---
This document is meant to give an overview of the etcd3 API's central design. It is by no means all encompassing, but intended to focus on the basic ideas needed to understand etcd without the distraction of less common API calls. All etcd3 APIs are defined in [gRPC services][grpc-service], which categorize remote procedure calls (RPCs) understood by the etcd server. A full listing of all etcd RPCs is documented in markdown in the [gRPC API listing][grpc-api].
@ -472,10 +474,10 @@ message LeaseKeepAliveResponse {
* ID - the lease that was refreshed with a new TTL.
* TTL - the new time-to-live, in seconds, that the lease has remaining.
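As a concrete illustration, here is a minimal Go sketch (assuming the `clientv3` package and a placeholder local endpoint) that grants a lease and refreshes it once; the keep-alive response carries the `ID` and remaining `TTL` fields described above:

```
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Grant a lease with a 10-second TTL.
	grant, err := cli.Grant(context.TODO(), 10)
	if err != nil {
		panic(err)
	}

	// Refresh the lease once; the response reports the new remaining TTL.
	ka, err := cli.KeepAliveOnce(context.TODO(), grant.ID)
	if err != nil {
		panic(err)
	}
	fmt.Printf("lease %x refreshed, TTL remaining: %d seconds\n", ka.ID, ka.TTL)
}
```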
[elections]: https://github.com/etcd-io/etcd/blob/master/clientv3/concurrency/election.go
[kv-proto]: https://github.com/etcd-io/etcd/blob/master/mvcc/mvccpb/kv.proto
[grpc-api]: ../dev-guide/api_reference_v3.md
[grpc-service]: https://github.com/etcd-io/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[locks]: https://github.com/etcd-io/etcd/blob/master/clientv3/concurrency/mutex.go
[mvcc]: https://en.wikipedia.org/wiki/Multiversion_concurrency_control
[stm]: https://github.com/etcd-io/etcd/blob/master/clientv3/concurrency/stm.go

View File

@ -1,4 +1,6 @@
---
title: KV API guarantees
---
etcd is a consistent and durable key value store with [mini-transaction][txn] support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.
@ -51,7 +53,7 @@ Linearizability (also known as Atomic Consistency or External Consistency) is a
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another, that operation must also precede the other operation in the sequence.
For example, consider a client completing a write at time point 1 (*t1*). A client issuing a read at *t2* (for *t2* > *t1*) should receive a value at least as recent as the previous write, completed at *t1*. However, the read might actually complete only by *t3*. Linearizability guarantees the read returns the most current value. Without the linearizability guarantee, the returned value, current at *t2* when the read began, might be "stale" by *t3* because a concurrent write might happen between *t2* and *t3*.
etcd does not ensure linearizability for watch operations. Users are expected to verify the revision of watch responses to ensure correct ordering.

View File

@ -1,4 +1,6 @@
---
title: etcd v3 authentication design
---
## Why not reuse the v2 auth system?
@ -26,7 +28,7 @@ The metadata for auth should also be stored and managed in the storage controlle
The authentication mechanism in the etcd v2 protocol has a tricky part because the metadata consistency should work as in the above, but does not: each permission check is processed by the etcd member that receives the client request (etcdserver/api/v2http/client.go), including follower members. Therefore, it's possible the check may be based on stale metadata.
This staleness means that auth configuration cannot be reflected as soon as operators execute etcdctl. Therefore there is no way to know how long the stale metadata is active. Practically, the configuration change is reflected immediately after the command execution. However, in some cases of heavy load, the inconsistent state can be prolonged and it might result in counter-intuitive situations for users and developers. It requires a workaround like this: https://github.com/etcd-io/etcd/pull/4317#issuecomment-179037582
### Inconsistent permissions are unsafe for linearized requests
@ -38,7 +40,7 @@ Therefore, the permission checking logic should be added to the state machine of
### Authentication
At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The response will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.
The client connection used to request the authentication token is typically thrown away; it cannot carry the new token's credentials. This is because gRPC doesn't provide a way for adding per RPC credential after creation of the connection (calling `grpc.Dial()`). Therefore, a client cannot assign a token to its connection that is obtained through the connection. The client needs a new connection for using the token.
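In practice the Go client hides this dance. A minimal sketch (endpoint and credentials are placeholders): pass `Username` and `Password` in `clientv3.Config`, and the client performs the authentication and manages the token-carrying connection internally:

```
package main

import (
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// clientv3 performs the Authenticate RPC and attaches the resulting
	// token to subsequent RPCs on the caller's behalf.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://10.0.0.1:2379"}, // placeholder
		Username:    "root",                            // placeholder
		Password:    "rootpw",                          // placeholder
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err) // dial or authentication failure
	}
	defer cli.Close()
}
```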

View File

@ -0,0 +1,114 @@
---
title: etcd client architecture
weight: 1
---
## Introduction
etcd server has proven its robustness with years of failure injection testing. Most complex application logic is already handled by etcd server and its data stores (e.g. cluster membership is transparent to clients, with the Raft layer forwarding proposals to the leader). Although server components are correct, their composition with the client requires a different set of intricate protocols to guarantee correctness and high availability under faulty conditions. Ideally, etcd server provides one logical cluster view of many physical machines, and the client implements automatic failover between replicas. This document describes client architectural decisions and their implementation details.
## Glossary
**clientv3** --- Official Go client for the etcd v3 API.
**clientv3-grpc1.0** --- Official client implementation, with [grpc-go v1.0.x](https://github.com/grpc/grpc-go/releases/tag/v1.0.0), which is used in latest etcd v3.1.
**clientv3-grpc1.7** --- Official client implementation, with [grpc-go v1.7.x](https://github.com/grpc/grpc-go/releases/tag/v1.7.0), which is used in latest etcd v3.2 and v3.3.
**clientv3-grpc1.14** --- Official client implementation, with [grpc-go v1.14.x](https://github.com/grpc/grpc-go/releases/tag/v1.14.0), which is used in latest etcd v3.4.
**Balancer** --- etcd client load balancer that implements retry and failover mechanism. etcd client should automatically balance loads between multiple endpoints.
**Endpoints** --- A list of etcd server endpoints that clients can connect to. Typically, 3 or 5 client URLs of an etcd cluster.
**Pinned endpoint** --- When configured with multiple endpoints, <= v3.3 client balancer chooses only one endpoint to establish a TCP connection, in order to conserve total open connections to etcd cluster. In v3.4, balancer round-robins pinned endpoints for every request, thus distributing loads more evenly.
**Client Connection** --- TCP connection that has been established to an etcd server, via gRPC Dial.
**Sub Connection** --- gRPC SubConn interface. Each sub-connection contains a list of addresses. Balancer creates a SubConn from a list of resolved addresses. gRPC ClientConn can map to multiple SubConn (e.g. example.com resolves to `10.10.10.1` and `10.10.10.2` of two sub-connections). etcd v3.4 balancer employs internal resolver to establish one sub-connection for each endpoint.
**Transient disconnect** --- When gRPC server returns a status error of [code Unavailable](https://godoc.org/google.golang.org/grpc/codes#Code).
## Client requirements
**Correctness** --- Requests may fail in the presence of server faults. However, the client never violates consistency guarantees: global ordering properties, never writing corrupted data, at-most-once semantics for mutable operations, watch never observing partial events, and so on.
**Liveness** --- Servers may fail or disconnect briefly. Clients should make progress either way. Clients should [never deadlock](https://github.com/etcd-io/etcd/issues/8980) waiting for a server to come back online, unless configured to do so. Ideally, clients detect unavailable servers with HTTP/2 pings and fail over to other nodes with clear error messages.
**Effectiveness** --- Clients should operate effectively with minimal resources: previous TCP connections should be [gracefully closed](https://github.com/etcd-io/etcd/issues/9212) after an endpoint switch. The failover mechanism should effectively predict the next replica to connect to, without wastefully retrying on failed nodes.
**Portability** --- Official client should be clearly documented and its implementation be applicable to other language bindings. Error handling between different language bindings should be consistent. Since etcd is fully committed to gRPC, implementation should be closely aligned with gRPC long-term design goals (e.g. pluggable retry policy should be compatible with [gRPC retry](https://github.com/grpc/proposal/blob/master/A6-client-retries.md)). Upgrades between two client versions should be non-disruptive.
## Client overview
The etcd client implements the following components:
* balancer that establishes gRPC connections to an etcd cluster,
* API client that sends RPCs to an etcd server, and
* error handler that decides whether to retry a failed request or switch endpoints.
Languages may differ in how to establish an initial connection (e.g. configure TLS), how to encode and send Protocol Buffer messages to the server, how to handle stream RPCs, and so on. However, errors returned from the etcd server will be the same, so error handling and retry policy should be consistent as well.
For example, the etcd server may return `"rpc error: code = Unavailable desc = etcdserver: request timed out"`, which is a transient error that expects retries. Or it may return `rpc error: code = InvalidArgument desc = etcdserver: key is not provided`, which means the request was invalid and should not be retried. Go clients can parse errors with `google.golang.org/grpc/status.FromError`, and Java clients with `io.grpc.Status.fromThrowable`.
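A short Go sketch of this classification, assuming a reachable cluster at a placeholder endpoint; `status` and `codes` are the upstream gRPC packages named above:

```
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	if _, err := cli.Get(context.TODO(), "foo"); err != nil {
		if s, ok := status.FromError(err); ok {
			switch s.Code() {
			case codes.Unavailable, codes.DeadlineExceeded:
				log.Println("transient error, safe to retry:", s.Message())
			case codes.InvalidArgument:
				log.Println("invalid request, do not retry:", s.Message())
			default:
				log.Println("unhandled error:", s.Code(), s.Message())
			}
		}
	}
}
```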
### clientv3-grpc1.0: Balancer Overview
`clientv3-grpc1.0` maintains multiple TCP connections when configured with multiple etcd endpoints. It then picks one address and uses it to send all client requests. The pinned address is maintained until the client object is closed (see *Figure 1*). When the client receives an error, it randomly picks another endpoint and retries.
{{< figure src="/img/client-architecture-balancer-figure-01.png" >}}
### clientv3-grpc1.0: Balancer Limitation
`clientv3-grpc1.0` opening multiple TCP connections may provide faster balancer failover but requires more resources. The balancer does not understand a node's health status or cluster membership, so it is possible that the balancer gets stuck with one failed or partitioned node.
### clientv3-grpc1.7: Balancer Overview
`clientv3-grpc1.7` maintains only one TCP connection to a chosen etcd server. When given multiple cluster endpoints, a client first tries to connect to them all. As soon as one connection is up, balancer pins the address, closing others (see **Figure 2**).
{{< figure src="/img/client-architecture-balancer-figure-02.png" >}}
The pinned address is to be maintained until the client object is closed. An error, from server or client network fault, is sent to client error handler (see **Figure 3**).
{{< figure src="/img/client-architecture-balancer-figure-03.png" >}}
The client error handler takes an error from gRPC server, and decides whether to retry on the same endpoint, or to switch to other addresses, based on the error code and message (see **Figure 4** and **Figure 5**).
{{< figure src="/img/client-architecture-balancer-figure-04.png" >}}
{{< figure src="/img/client-architecture-balancer-figure-05.png" >}}
Stream RPCs, such as Watch and KeepAlive, are often requested with no timeouts. Instead, client can send periodic HTTP/2 pings to check the status of a pinned endpoint; if the server does not respond to the ping, balancer switches to other endpoints (see **Figure 6**).
{{< figure src="/img/client-architecture-balancer-figure-06.png" >}}
### clientv3-grpc1.7: Balancer Limitation
`clientv3-grpc1.7` balancer sends HTTP/2 keepalives to detect disconnects from streaming requests. It is a simple gRPC server ping mechanism and does not reason about cluster membership, thus unable to detect network partitions. Since partitioned gRPC server can still respond to client pings, balancer may get stuck with a partitioned node. Ideally, keepalive ping detects partition and triggers endpoint switch, before request time-out (see [issue #8673](https://github.com/etcd-io/etcd/issues/8673) and **Figure 7**).
{{< figure src="/img/client-architecture-balancer-figure-07.png" >}}
`clientv3-grpc1.7` balancer maintains a list of unhealthy endpoints. Disconnected addresses are added to the “unhealthy” list and considered unavailable until after a wait duration, which is hard-coded to the dial timeout with a default value of 5 seconds. The balancer can have false positives on which endpoints are unhealthy. For instance, endpoint A may come back right after being blacklisted, but still be unusable for the next 5 seconds (see **Figure 8**).
`clientv3-grpc1.0` suffered from the same problems described above.
{{< figure src="/img/client-architecture-balancer-figure-08.png" >}}
Upstream gRPC Go had already migrated to a new balancer interface. For example, the `clientv3-grpc1.7` underlying balancer implementation uses the new gRPC balancer and tries to be consistent with old balancer behaviors. While its compatibility has been maintained reasonably well, the etcd client still [suffered from subtle breaking changes](https://github.com/grpc/grpc-go/issues/1649). Furthermore, the gRPC maintainers recommend [not relying on the old balancer interface](https://github.com/grpc/grpc-go/issues/1942#issuecomment-375368665). In general, to get better support from upstream, it is best to be in sync with the latest gRPC releases. And new features, such as retry policy, may not be backported to the gRPC 1.7 branch. Thus, both etcd server and client must migrate to the latest gRPC versions.
### clientv3-grpc1.14: Balancer Overview
`clientv3-grpc1.7` is so tightly coupled with the old gRPC interface that every single gRPC dependency upgrade broke client behavior. The majority of development and debugging efforts were devoted to fixing those client behavior changes. As a result, its implementation has become overly complicated with bad assumptions about server connectivity.
The primary goal of `clientv3-grpc1.14` is to simplify balancer failover logic; rather than maintaining a list of unhealthy endpoints, which may be stale, simply round-robin to the next endpoint whenever the client gets disconnected from the current endpoint. It does not assume endpoint status, so no more complicated status tracking is needed (see *Figure 8* and above). Upgrading to `clientv3-grpc1.14` should be no issue; all changes were internal while keeping all the backward compatibilities.
Internally, when given multiple endpoints, `clientv3-grpc1.14` creates multiple sub-connections (one sub-connection per endpoint), while `clientv3-grpc1.7` creates only one connection to a pinned endpoint (see *Figure 9*). For instance, in a 5-node cluster, the `clientv3-grpc1.14` balancer would require 5 TCP connections, while `clientv3-grpc1.7` only requires one. By preserving the pool of TCP connections, `clientv3-grpc1.14` may consume more resources but provides a more flexible load balancer with better failover performance. The default balancing policy is round robin but can be easily extended to support other types of balancers (e.g. power of two, pick leader, etc.). `clientv3-grpc1.14` uses the gRPC resolver group and implements the balancer picker policy, in order to delegate complex balancing work to upstream gRPC. On the other hand, `clientv3-grpc1.7` manually handles each gRPC connection and balancer failover, which complicates the implementation. `clientv3-grpc1.14` implements retry in the gRPC interceptor chain, which automatically handles gRPC internal errors and enables more advanced retry policies like backoff, while `clientv3-grpc1.7` manually interprets gRPC errors for retries.
{{< figure src="/img/client-architecture-balancer-figure-09.png" >}}
### clientv3-grpc1.14: Balancer Limitation
Improvements can be made by caching the status of each endpoint. For instance, balancer can ping each server in advance to maintain a list of healthy candidates, and use this information when doing round-robin. Or when disconnected, balancer can prioritize healthy endpoints. This may complicate the balancer implementation, thus can be addressed in later versions.
Client-side keepalive ping still does not reason about network partitions. A streaming request may get stuck with a partitioned node. An advanced health checking service needs to be implemented to understand the cluster membership (see [issue #8673](https://github.com/etcd-io/etcd/issues/8673) for more detail).
Currently, retry logic is handled manually as an interceptor. This may be simplified via [official gRPC retries](https://github.com/grpc/proposal/blob/master/A6-client-retries.md).

View File

@ -0,0 +1,157 @@
---
title: Client feature matrix
---
## Features
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
Automatic retry | Yes | .
Retry backoff | Yes | .
Automatic failover | Yes | .
Load balancer | Round-Robin | .
`WithRequireLeader(context.Context)` | Yes | .
`TLS` | Yes | Yes
`SetEndpoints` | Yes | .
`Sync` endpoints | Yes | .
`AutoSyncInterval` | Yes | .
`KeepAlive` ping | Yes | .
`MaxCallSendMsgSize` | Yes | .
`MaxCallRecvMsgSize` | Yes | .
`RejectOldCluster` | Yes | .
## [KV](https://godoc.org/go.etcd.io/etcd/clientv3#KV)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Put` | Yes | .
`Get` | Yes | .
`Delete` | Yes | .
`Compact` | Yes | .
`Do(Op)` | Yes | .
`Txn` | Yes | .
## [Lease](https://godoc.org/go.etcd.io/etcd/clientv3#Lease)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Grant` | Yes | .
`Revoke` | Yes | .
`TimeToLive` | Yes | .
`Leases` | Yes | .
`KeepAlive` | Yes | .
`KeepAliveOnce` | Yes | .
## [Watcher](https://godoc.org/go.etcd.io/etcd/clientv3#Watcher)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Watch` | Yes | Yes
`RequestProgress` | Yes | .
## [Cluster](https://godoc.org/go.etcd.io/etcd/clientv3#Cluster)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`MemberList` | Yes | Yes
`MemberAdd` | Yes | Yes
`MemberRemove` | Yes | Yes
`MemberUpdate` | Yes | Yes
## [Maintenance](https://godoc.org/go.etcd.io/etcd/clientv3#Maintenance)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`AlarmList` | Yes | Yes
`AlarmDisarm` | Yes | ·
`Defragment` | Yes | ·
`Status` | Yes | ·
`HashKV` | Yes | ·
`Snapshot` | Yes | ·
`MoveLeader` | Yes | ·
## [Auth](https://godoc.org/go.etcd.io/etcd/clientv3#Auth)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`AuthEnable` | Yes | .
`AuthDisable` | Yes | .
`UserAdd` | Yes | .
`UserDelete` | Yes | .
`UserChangePassword` | Yes | .
`UserGrantRole` | Yes | .
`UserGet` | Yes | .
`UserList` | Yes | .
`UserRevokeRole` | Yes | .
`RoleAdd` | Yes | .
`RoleGrantPermission` | Yes | .
`RoleGet` | Yes | .
`RoleList` | Yes | .
`RoleRevokePermission` | Yes | .
`RoleDelete` | Yes | .
## [clientv3util](https://godoc.org/go.etcd.io/etcd/clientv3/clientv3util)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`KeyExists` | Yes | No
`KeyMissing` | Yes | No
## [Concurrency](https://godoc.org/go.etcd.io/etcd/clientv3/concurrency)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Session` | Yes | No
`NewMutex(Session, prefix)` | Yes | No
`NewElection(Session, prefix)` | Yes | No
`NewLocker(Session, prefix)` | Yes | No
`STM Isolation SerializableSnapshot` | Yes | No
`STM Isolation Serializable` | Yes | No
`STM Isolation RepeatableReads` | Yes | No
`STM Isolation ReadCommitted` | Yes | No
`STM Get` | Yes | No
`STM Put` | Yes | No
`STM Rev` | Yes | No
`STM Del` | Yes | No
## [Leasing](https://godoc.org/go.etcd.io/etcd/clientv3/leasing)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`NewKV(Client, prefix)` | Yes | No
## [Mirror](https://godoc.org/go.etcd.io/etcd/clientv3/mirror)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`SyncBase` | Yes | No
`SyncUpdates` | Yes | No
## [Namespace](https://godoc.org/go.etcd.io/etcd/clientv3/namespace)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`KV` | Yes | No
`Lease` | Yes | No
`Watcher` | Yes | No
## [Naming](https://godoc.org/go.etcd.io/etcd/clientv3/naming)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`GRPCResolver` | Yes | No
## [Ordering](https://godoc.org/go.etcd.io/etcd/clientv3/ordering)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`KV` | Yes | No
## [Snapshot](https://godoc.org/go.etcd.io/etcd/clientv3/snapshot)
Feature | `clientv3-grpc1.14` | `jetcd v0.0.2`
:-------|:--------------------|:--------------
`Save` | Yes | No
`Status` | Yes | No
`Restore` | Yes | No

View File

@ -1,4 +1,6 @@
---
title: Data model
---
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.
@ -8,9 +10,9 @@ etcd stores data in a multiversion [persistent][persistent-ds] key-value store.
The store's logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys so range queries are inexpensive.
The key space maintains multiple **revisions**. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of a key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to save space, revisions before the compact revision will be removed. Revisions are monotonically increasing over the lifetime of a cluster.
A key's life spans a generation, from creation to deletion. Each key may have one or multiple generations. Creating a key increments the **version** of that key, starting at 1 if the key does not exist at the current revision. Deleting a key generates a key tombstone, concluding the key's current generation by resetting its version to 0. Each modification of a key increments its version; so, versions are monotonically increasing within a key's generation. Once a compaction happens, any generation ended before the compaction revision will be removed, and values set before the compaction revision except the latest one will be removed.
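A small Go sketch of the model above (placeholder endpoint): two writes to one key create versions 1 and 2 within a generation, and the older value stays readable at its revision:

```
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.TODO()
	first, err := cli.Put(ctx, "demo", "v1") // creates the key: version 1
	if err != nil {
		panic(err)
	}
	if _, err = cli.Put(ctx, "demo", "v2"); err != nil { // same generation: version 2
		panic(err)
	}

	// Read the key as of the older revision ("time travel query").
	old, err := cli.Get(ctx, "demo", clientv3.WithRev(first.Header.Revision))
	if err != nil {
		panic(err)
	}
	fmt.Printf("at rev %d: %s=%s (version %d)\n",
		first.Header.Revision, old.Kvs[0].Key, old.Kvs[0].Value, old.Kvs[0].Version)
}
```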
### Physical view

View File

@ -1,4 +1,6 @@
---
title: Glossary
---
This document defines the various terms used in etcd documentation, command line and source code.

View File

@ -0,0 +1,106 @@
---
title: Learner
---
## Background
Membership reconfiguration has been one of the biggest operational challenges. Let's review the common challenges.
A newly joined etcd member starts with no data, thus demanding more updates from the leader until it catches up with the leader's logs. The leader's network is then more likely to be overloaded, blocking or dropping leader heartbeats to followers. In such a case, a follower may election-timeout to start a new leader election. That is, a cluster with a new member is more vulnerable to leader election. Both leader election and the subsequent update propagation to the new member are prone to causing periods of cluster unavailability (see **Figure 1** below).
{{< figure src="/img/server-learner-figure-01.png" >}}
What if a network partition happens? It depends on the leader's partition. If the leader still maintains the active quorum, the cluster would continue to operate (see **Figure 2**).
{{< figure src="/img/server-learner-figure-02.png" >}}
What if the leader becomes isolated from the rest of the cluster? The leader monitors the progress of each follower. When the leader loses connectivity from the quorum, it reverts back to follower, which affects cluster availability (see **Figure 3**).
{{< figure src="/img/server-learner-figure-03.png" >}}
When a new node is added to a 3-node cluster, the cluster size becomes 4 and the quorum size becomes 3. What if a new node had joined the cluster, and then a network partition happens? It depends on which partition the new member gets located in after the partition. If the new node happens to be located in the same partition as the leader's, the leader still maintains the active quorum of 3. No leadership election happens, and no cluster availability gets affected (see **Figure 4**).
{{< figure src="/img/server-learner-figure-04.png" >}}
If the cluster is 2-and-2 partitioned, then neither partition maintains the quorum of 3. In this case, leadership election happens (see **Figure 5**).
{{< figure src="/img/server-learner-figure-05.png" >}}
What if a network partition happens first, and then a new member gets added? A partitioned 3-node cluster already has one disconnected follower. When a new member is added, the quorum changes from 2 to 3. Now, this cluster has only 2 active nodes out of 4, thus losing quorum and starting a new leadership election (see **Figure 6**).
{{< figure src="/img/server-learner-figure-06.png" >}}
Since the member add operation can change the size of quorum, it is always recommended to “member remove” first to replace an unhealthy node.
Adding a new member to a 1-node cluster changes the quorum size to 2, immediately causing a leader election when the previous leader finds out quorum is not active. This is because the “member add” operation is a 2-step process where the user needs to apply the “member add” command first, and then start the new node process (see **Figure 7**).
{{< figure src="/img/server-learner-figure-07.png" >}}
An even worse case is when an added member is misconfigured. Membership reconfiguration is a two-step process: “etcdctl member add” and starting an etcd server process with the given peer URL. That is, “member add” command is applied regardless of URL, even when the URL value is invalid. If the first step is applied with invalid URLs, the second step cannot even start the new etcd. Once the cluster loses quorum, there is no way to revert the membership change (see **Figure 8**).
{{< figure src="/img/server-learner-figure-08.png" >}}
The same applies to a multi-node cluster. For example, the cluster has two members down (one is failed, the other is misconfigured) and two members up, but now it requires at least 3 votes to change the cluster membership (see **Figure 9**).
{{< figure src="/img/server-learner-figure-09.png" >}}
As seen above, a simple misconfiguration can fail the whole cluster into an inoperative state. In such a case, an operator needs to manually recreate the cluster with the `etcd --force-new-cluster` flag. As etcd has become a mission-critical service for [Kubernetes](https://kubernetes.io), even the slightest outage may have significant impact on users. What can we do better to make such operations easier? Among other things, leader election is most critical to cluster availability: can we make membership reconfiguration less disruptive by not changing the size of quorum? Can a new node be idle, only requesting the minimum updates from the leader, until it catches up? Can membership misconfiguration always be reversible and handled in a more secure way (a wrong member add command run should never fail the cluster)? Should a user worry about network topology when adding a new member? Can the member add API work regardless of the location of nodes and ongoing network partitions?
## Raft learner
In order to mitigate such availability gaps described in the previous section, [Raft §4.2.1](https://ramcloud.stanford.edu/~ongaro/thesis.pdf) introduces a new node state “Learner”, which joins the cluster as a **non-voting member** until it catches up to the leader's logs.
## Features in v3.4
An operator should do the minimum amount of work possible to add a new learner node. The `member add --learner` command adds a new learner, which joins the cluster as a non-voting member but still receives all data from the leader (see **Figure 10**).
{{< figure src="/img/server-learner-figure-10.png" >}}
When a learner has caught up with the leader's progress, the learner can be promoted to a voting member using the `member promote` API, which then counts towards the quorum (see **Figure 11**).
{{< figure src="/img/server-learner-figure-11.png" >}}
etcd server validates the promote request to ensure its operational safety. Only after its log has caught up to the leader's can a learner be promoted to a voting member (see **Figure 12**).
{{< figure src="/img/server-learner-figure-12.png" >}}
A learner only serves as a standby node until promoted: leadership cannot be transferred to a learner, and a learner rejects client reads and writes (the client balancer should not route requests to a learner). This means a learner does not need to issue Read Index requests to the leader. Such a limitation simplifies the initial learner implementation in the v3.4 release (see **Figure 13**).
{{< figure src="/img/server-learner-figure-13.png" >}}
In addition, etcd limits the total number of learners that a cluster can have, and avoids overloading the leader with log replication. A learner never promotes itself. While etcd provides learner status information and safety checks, the cluster operator must make the final decision whether to promote a learner or not.
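Besides `etcdctl`, the same flow is exposed in the v3.4 Go client. A sketch with placeholder URLs, using `MemberAddAsLearner` and `MemberPromote`:

```
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.TODO()

	// Add a new member as a non-voting learner.
	resp, err := cli.MemberAddAsLearner(ctx, []string{"http://10.0.0.4:2380"}) // placeholder peer URL
	if err != nil {
		log.Fatal(err)
	}

	// ... start the new etcd process and wait for it to catch up ...

	// Promote it to a voting member once its log is in sync with the leader.
	if _, err := cli.MemberPromote(ctx, resp.Member.ID); err != nil {
		log.Fatal(err) // rejected if the learner has not caught up yet
	}
}
```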
## Features in v3.5
**Make learner state only and default** --- Defaulting a new member state to learner will greatly improve membership reconfiguration safety, because learner does not change the size of quorum. Misconfiguration will always be reversible without losing the quorum.
**Make voting-member promotion fully automatic** --- Once a learner catches up to the leader's logs, a cluster can automatically promote the learner. etcd requires certain thresholds to be defined by the user, and once the requirements are satisfied, the learner promotes itself to a voting member. From a user's perspective, the “member add” command would work the same way as today but with greater safety provided by the learner feature.
**Make learner standby failover node** --- A learner joins as a standby node, and gets automatically promoted when the cluster availability is affected.
**Make learner read-only** --- A learner can serve as a read-only node that never gets promoted. In a weak consistency mode, a learner only receives data from the leader and never processes writes. Serving reads locally without consensus overhead would greatly decrease the workload on the leader but may serve stale data. In a strong consistency mode, a learner requests a read index from the leader to serve the latest data, but still rejects writes.
## Learner vs. mirror maker
etcd implements “mirror maker” using the watch API to continuously relay key creates and updates to a separate cluster. Mirroring usually has low latency overhead once it completes the initial synchronization. Learner and mirroring overlap in that both can be used to replicate existing data for read-only use. However, mirroring does not guarantee linearizability. During network disconnects, previous key-values might have been discarded, and clients are expected to verify watch responses for correct ordering. Thus, there is no ordering guarantee in mirroring. Use mirroring for minimum latency (e.g. cross data center) at the cost of consistency. Use a learner to retain all historical data and its ordering.
## Appendix: learner implementation in v3.4
### Expose "Learner" node type to "MemberAdd" API
The etcd client adds a flag to the “MemberAdd” API for the learner node, and the etcd server handler applies the membership change entry with the `pb.ConfChangeAddLearnerNode` type. Once the command has been applied, a server joins the cluster with the `etcd --initial-cluster-state=existing` flag. This learner node can neither vote nor count towards quorum.
etcd server must not transfer leadership to a learner, since it may still lag behind and does not count towards quorum. etcd server limits the number of learners that a cluster can have to one: the more learners there are, the more data the leader has to propagate. Clients may talk to a learner node, but a learner rejects all requests other than serializable reads and the member status API. This is for simplicity of the initial implementation. In the future, a learner can be extended as a read-only server that continuously mirrors cluster data. The client balancer must provide a helper function to exclude learner node endpoints; otherwise, requests sent to a learner may fail. The client's member sync call should take the learner node type into account, as should the client's endpoint update call.
`MemberList` and `MemberStatus` responses should indicate which node is a learner.
### Add "MemberPromote" API
Internally in Raft, a second `MemberAdd` call to a learner node promotes it to a voting member. The leader maintains the progress of each follower and learner. If the learner has not completed receiving its snapshot message, the promote request is rejected. The promote request is accepted if and only if: the learner node is in a healthy state, and the learner is in sync with the leader or the delta is within the threshold (e.g. the number of entries to replicate to the learner is less than 1/10 of the snapshot count, which means it is less likely that the leader would need to send a snapshot to the learner even after promotion). All this logic is hard-coded in the `etcdserver` package and is not configurable.
## Reference
* Original GitHub issue ([issue #9161](https://github.com/etcd-io/etcd/issues/9161))
* Use case ([issue #3715](https://github.com/etcd-io/etcd/issues/3715))
* Use case ([issue #8888](https://github.com/etcd-io/etcd/issues/8888))
* Use case ([issue #10114](https://github.com/etcd-io/etcd/issues/10114))

View File

@ -1,6 +1,8 @@
---
title: etcd versus other key-value stores
---
The name "etcd" originated from two ideas, the unix "/etc" folder and "d"istibuted systems. The "/etc" folder is a place to store configuration data for a single system whereas etcd stores configuration information for large scale distributed systems. Hence, a "d"istributed "/etc" is "etcd". The name "etcd" originated from two ideas, the unix "/etc" folder and "d"istributed systems. The "/etc" folder is a place to store configuration data for a single system whereas etcd stores configuration information for large scale distributed systems. Hence, a "d"istributed "/etc" is "etcd".
etcd is designed as a general substrate for large scale distributed systems. These are systems that will never tolerate split-brain operation and are willing to sacrifice availability to achieve this end. etcd stores metadata in a consistent and fault-tolerant way. An etcd cluster is meant to provide key-value storage with best of class stability, reliability, scalability and performance.
@ -47,7 +49,7 @@ When considering features, support, and stability, new applications planning to
### Consul
Consul is an end-to-end service discovery framework. It provides built-in health checking, failure detection, and DNS services. In addition, Consul exposes a key value store with RESTful HTTP APIs. [As it stands in Consul 1.0][dbtester-comparison-results], the storage system does not scale as well as other systems like etcd or Zookeeper in key-value operations; systems requiring millions of keys will suffer from high latencies and memory pressure. The key value API is missing, most notably, multi-version keys, conditional transactions, and reliable streaming watches.
etcd and Consul solve different problems. If looking for a distributed consistent key value store, etcd is a better choice over Consul. If looking for end-to-end cluster service discovery, etcd will not have enough features; choose Kubernetes, Consul, or SmartStack.
@ -76,18 +78,18 @@ In theory, its possible to build these primitives atop any storage systems pr
For distributed coordination, choosing etcd can help prevent operational headaches and save engineering effort.
[production-users]: ../production-users.md
[grpc]: https://www.grpc.io
[consul-bulletproof]: https://www.consul.io/docs/internals/sessions.html
[curator]: http://curator.apache.org/
[cockroach]: https://github.com/cockroachdb/cockroach
[spanner]: https://cloud.google.com/spanner/
[tidb]: https://github.com/pingcap/tidb
[etcd-v3lock]: https://godoc.org/github.com/etcd-io/etcd/etcdserver/api/v3lock/v3lockpb
[etcd-v3election]: https://godoc.org/github.com/etcd-io/etcd/etcdserver/api/v3election/v3electionpb
[etcd-etcdctl-lock]: ../../etcdctl/README.md#lock-lockname-command-arg1-arg2-
[etcd-etcdctl-elect]: ../../etcdctl/README.md#elect-options-election-name-proposal
[etcd-mvcc]: data_model.md
[etcd-recipe]: https://godoc.org/github.com/etcd-io/etcd/contrib/recipes
[consul-lock]: https://www.consul.io/docs/commands/lock.html
[newsql-leader]: http://dl.acm.org/citation.cfm?id=2960999
[etcd-reconfig]: ../op-guide/runtime-configuration.md
@ -112,4 +114,5 @@ For distributed coordination, choosing etcd can help prevent operational headach
[zk-bindings]: https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#ch_bindings
[container-linux]: https://coreos.com/why
[locksmith]: https://github.com/coreos/locksmith
[kubernetes]: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
[dbtester-comparison-results]: https://github.com/coreos/dbtester/tree/master/test-results/2018Q1-02-etcd-zookeeper-consul

View File

@ -1,4 +1,7 @@
---
title: Metrics
weight: 3
---
etcd uses [Prometheus][prometheus] for metrics reporting. The metrics can be used for real-time monitoring and debugging. etcd does not persist its metrics; if a member restarts, the metrics will be reset.
@ -99,7 +102,7 @@ Abnormally high snapshot duration (`snapshot_save_total_duration_seconds`) indic
## Prometheus supplied metrics
The Prometheus client library provides a number of metrics under the `go` and `process` namespaces. There are a few that are particularly interesting.
| Name | Description | Type |
|-----------------------------------|--------------------------------------------|--------------|
@ -113,4 +116,4 @@ Heavy file descriptor (`process_open_fds`) usage (i.e., near the process's file
[prometheus-getting-started]: http://prometheus.io/docs/introduction/getting_started/
[prometheus-naming]: http://prometheus.io/docs/practices/naming/
[v2-http-metrics]: v2/metrics.md#http-requests
[go-grpc-prometheus]: https://github.com/grpc-ecosystem/go-grpc-prometheus

View File

@ -0,0 +1,3 @@
---
title: Operations guide
---

View File

@ -1,4 +1,6 @@
---
title: Role-based access control
---
## Overview
@ -32,7 +34,7 @@ Creating a user is as easy as
$ etcdctl user add myusername
```
Creating a new user will prompt for a new password. The password can be supplied from standard input when an option `--interactive=false` is given. `--new-user-password` can also be used for supplying the password.
Roles can be granted and revoked for a user with:
@ -122,12 +124,12 @@ $ etcdctl role remove myrolename
## Enabling authentication
The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
Make sure the root user is created:
```
$ etcdctl user add root
Password of root:
```
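The same minimal setup can be scripted against the Go client. A sketch with placeholder credentials; on some versions the root role must be created and granted explicitly before auth can be enabled:

```
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.TODO()
	if _, err := cli.UserAdd(ctx, "root", "rootpw"); err != nil { // placeholder password
		log.Fatal(err)
	}
	// Create and grant the root role; the add errors harmlessly if the
	// role already exists on the running version.
	if _, err := cli.RoleAdd(ctx, "root"); err != nil {
		log.Println("role add:", err)
	}
	if _, err := cli.UserGrantRole(ctx, "root", "root"); err != nil {
		log.Fatal(err)
	}
	if _, err := cli.AuthEnable(ctx); err != nil {
		log.Fatal(err)
	}
}
```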
@ -157,8 +159,20 @@ The password can be taken from a prompt:
$ etcdctl --user user get foo
```
The password can also be taken from a command line flag `--password`:
```
$ etcdctl --user user --password password get foo
```
Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
## Using TLS Common Name
As of version v3.2, if an etcd server is launched with the option `--client-cert-auth=true`, the field of Common Name (CN) in the client's TLS cert will be used as an etcd user. In this case, the common name authenticates the user and the client does not need a password. Note that if both of 1. `--client-cert-auth=true` is passed and CN is provided by the client, and 2. username and password are provided by the client, the username and password based authentication is prioritized. Note that this feature cannot be used with gRPC-proxy and gRPC-gateway. This is because gRPC-proxy terminates TLS from its client so all the clients share a cert of the proxy. gRPC-gateway uses a TLS connection internally for transforming HTTP request to gRPC request so it shares the same limitation. Therefore the clients cannot provide their CN to the server correctly. gRPC-proxy will return an error and stop if a given cert has a non-empty CN, indicating that the client has a non-empty CN in its cert.
As of version v3.3, if an etcd server is launched with the option `--peer-cert-allowed-cn`, filtering of inter-peer connections by CN is enabled. Nodes can only join the etcd cluster if their CN matches the allowed one.
See the [etcd security page](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/security.md) for more details.
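From the client side, CN-based authentication only requires presenting a client certificate. A Go sketch assuming placeholder certificate paths:

```
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Load the client certificate whose CN names the etcd user.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // placeholder paths
	if err != nil {
		log.Fatal(err)
	}
	caBytes, err := ioutil.ReadFile("ca.crt") // placeholder CA path
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caBytes)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://10.0.0.1:2379"}, // placeholder
		DialTimeout: 5 * time.Second,
		TLS: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// No Username/Password needed: the server derives the user from the CN.
	if _, err := cli.Get(context.TODO(), "foo"); err != nil {
		log.Fatal(err)
	}
}
```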
## Notes on password strength
The `etcdctl` command line interface and the etcd API don't check the strength (length, mix of letters and numbers, etc.) of a password when creating a new user or updating the password of an existing user. Administrators need to enforce password strength requirements themselves.

View File

@ -1,4 +1,6 @@
---
title: Clustering Guide
---
## Overview
@ -342,8 +344,8 @@ etcdserver: discovery token ignored since a cluster has already been initialized
### DNS discovery
DNS [SRV records][rfc-srv] can be used as a discovery mechanism.
The `--discovery-srv` flag can be used to set the DNS domain name where the discovery SRV records can be found.
Setting `--discovery-srv example.com` causes DNS SRV records to be looked up in the listed order:
* _etcd-server-ssl._tcp.example.com
* _etcd-server._tcp.example.com
@ -357,8 +359,21 @@ To help clients discover the etcd cluster, the following DNS SRV records are loo
If `_etcd-client-ssl._tcp.example.com` is found, clients will attempt to communicate with the etcd cluster over SSL/TLS.
If etcd is using TLS, the discovery SRV record (e.g. `example.com`) must be included in the SSL certificate DNS SAN along with the hostname, or clustering will fail with log messages like the following:
```
[...] rejected connection from "10.0.1.11:53162" (error "remote error: tls: bad certificate", ServerName "example.com")
```
If etcd is using TLS without a custom certificate authority, the discovery domain (e.g., example.com) must match the SRV record domain (e.g., infra1.example.com). This is to mitigate attacks that forge SRV records to point to a different domain; the domain would have a valid certificate under PKI but be controlled by an unknown third party. If etcd is using TLS without a custom certificate authority, the discovery domain (e.g., example.com) must match the SRV record domain (e.g., infra1.example.com). This is to mitigate attacks that forge SRV records to point to a different domain; the domain would have a valid certificate under PKI but be controlled by an unknown third party.
The `--discovery-srv-name` flag additionally configures a suffix to the SRV name that is queried during discovery.
Use this flag to differentiate between multiple etcd clusters under the same domain.
For example, if `--discovery-srv=example.com` and `--discovery-srv-name=foo` are set, the following DNS SRV queries are made:
* _etcd-server-ssl-foo._tcp.example.com
* _etcd-server-foo._tcp.example.com
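The records can be checked with `dig` (a sketch, assuming the SRV records above have been created):
```sh
$ dig +noall +answer _etcd-server-ssl-foo._tcp.example.com SRV
$ dig +noall +answer _etcd-server-foo._tcp.example.com SRV
```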
#### Create DNS SRV records #### Create DNS SRV records
``` ```
@ -384,7 +399,8 @@ infra2.example.com. 300 IN A 10.0.1.12
#### Bootstrap the etcd cluster using DNS #### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP address, the bootstrap process will resolve DNS A records. etcd cluster members can advertise domain names or IP address, the bootstrap process will resolve DNS A records.
Since 3.2 (3.1 prints warnings) `--listen-peer-urls` and `--listen-client-urls` will reject domain names for the network interface binding.
The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets. The etcd member reads the resolved address to find out if it belongs to the cluster defined in the SRV records. The resolved address in `--initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets. The etcd member reads the resolved address to find out if it belongs to the cluster defined in the SRV records.
@ -395,8 +411,8 @@ $ etcd --name infra0 \
--initial-cluster-token etcd-cluster-1 \ --initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \ --initial-cluster-state new \
--advertise-client-urls http://infra0.example.com:2379 \ --advertise-client-urls http://infra0.example.com:2379 \
--listen-client-urls http://infra0.example.com:2379 \ --listen-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://infra0.example.com:2380 --listen-peer-urls http://0.0.0.0:2380
``` ```
``` ```
@ -406,8 +422,8 @@ $ etcd --name infra1 \
--initial-cluster-token etcd-cluster-1 \ --initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \ --initial-cluster-state new \
--advertise-client-urls http://infra1.example.com:2379 \ --advertise-client-urls http://infra1.example.com:2379 \
--listen-client-urls http://infra1.example.com:2379 \ --listen-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://infra1.example.com:2380 --listen-peer-urls http://0.0.0.0:2380
``` ```
``` ```
@ -417,8 +433,8 @@ $ etcd --name infra2 \
--initial-cluster-token etcd-cluster-1 \ --initial-cluster-token etcd-cluster-1 \
--initial-cluster-state new \ --initial-cluster-state new \
--advertise-client-urls http://infra2.example.com:2379 \ --advertise-client-urls http://infra2.example.com:2379 \
--listen-client-urls http://infra2.example.com:2379 \ --listen-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://infra2.example.com:2380 --listen-peer-urls http://0.0.0.0:2380
``` ```
The cluster can also bootstrap using IP addresses instead of domain names: The cluster can also bootstrap using IP addresses instead of domain names:


@ -1,6 +1,13 @@
# Configuration flags ---
title: Configuration flags
---
etcd is configurable through command-line flags and environment variables. Options set on the command line take precedence over those from the environment. etcd is configurable through a configuration file, various command-line flags, and environment variables.
A reusable configuration file is a YAML file with the names and values of one or more command-line flags described below. In order to use this file, specify the file path as a value to the `--config-file` flag. The [sample configuration file][sample-config-file] can be used as a starting point to create a new configuration file as needed.
Options set on the command line take precedence over those from the environment. If a configuration file is provided, other command line flags and environment variables will be ignored.
For example, `etcd --config-file etcd.conf.yml.sample --data-dir /tmp` will ignore the `--data-dir` flag.
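A minimal sketch of this precedence (the keys mirror the flag names documented below; values are illustrative):
```sh
# Write a minimal configuration file; keys mirror the command-line flags.
cat > etcd.conf.yml <<'EOF'
name: infra0
data-dir: /var/lib/etcd
listen-client-urls: http://127.0.0.1:2379
advertise-client-urls: http://127.0.0.1:2379
EOF

# Once --config-file is given, other flags and environment variables are ignored.
etcd --config-file etcd.conf.yml
```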
The format of environment variable for flag `--my-flag` is `ETCD_MY_FLAG`. It applies to all flags. The format of environment variable for flag `--my-flag` is `ETCD_MY_FLAG`. It applies to all flags.
@ -42,14 +49,14 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ env variable: ETCD_ELECTION_TIMEOUT + env variable: ETCD_ELECTION_TIMEOUT
### --listen-peer-urls ### --listen-peer-urls
+ List of URLs to listen on for peer traffic. This flag tells the etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. Scheme can be either http or https.If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. The etcd will respond to requests from any of the listed addresses and ports. + List of URLs to listen on for peer traffic. This flag tells the etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. Scheme can be http or https. Alternatively, use `unix://<file-path>` or `unixs://<file-path>` for unix sockets. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. The etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2380" + default: "http://localhost:2380"
+ env variable: ETCD_LISTEN_PEER_URLS + env variable: ETCD_LISTEN_PEER_URLS
+ example: "http://10.0.0.1:2380" + example: "http://10.0.0.1:2380"
+ invalid example: "http://example.com:2380" (domain name is invalid for binding) + invalid example: "http://example.com:2380" (domain name is invalid for binding)
### --listen-client-urls ### --listen-client-urls
+ List of URLs to listen on for client traffic. This flag tells the etcd to accept incoming requests from the clients on the specified scheme://IP:port combinations. Scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. The etcd will respond to requests from any of the listed addresses and ports. + List of URLs to listen on for client traffic. This flag tells the etcd to accept incoming requests from the clients on the specified scheme://IP:port combinations. Scheme can be either http or https. Alternatively, use `unix://<file-path>` or `unixs://<file-path>` for unix sockets. If 0.0.0.0 is specified as the IP, etcd listens to the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. The etcd will respond to requests from any of the listed addresses and ports.
+ default: "http://localhost:2379" + default: "http://localhost:2379"
+ env variable: ETCD_LISTEN_CLIENT_URLS + env variable: ETCD_LISTEN_CLIENT_URLS
+ example: "http://10.0.0.1:2379" + example: "http://10.0.0.1:2379"
@ -77,6 +84,16 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ default: 0 + default: 0
+ env variable: ETCD_QUOTA_BACKEND_BYTES + env variable: ETCD_QUOTA_BACKEND_BYTES
### --backend-batch-limit
+ BackendBatchLimit is the maximum number of operations before committing the backend transaction.
+ default: 0
+ env variable: ETCD_BACKEND_BATCH_LIMIT
### --backend-batch-interval
+ BackendBatchInterval is the maximum time before committing the backend transaction.
+ default: 0
+ env variable: ETCD_BACKEND_BATCH_INTERVAL
### --max-txn-ops ### --max-txn-ops
+ Maximum number of operations permitted in a transaction. + Maximum number of operations permitted in a transaction.
+ default: 128 + default: 128
@ -104,7 +121,7 @@ To start etcd automatically using custom settings at startup in Linux, using a [
## Clustering flags ## Clustering flags
`--initial` prefix flags are used in bootstrapping ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]) a new member, and ignored when restarting an existing member. `--initial-advertise-peer-urls`, `--initial-cluster`, `--initial-cluster-state`, and `--initial-cluster-token` flags are used in bootstrapping ([static bootstrap][build-cluster], [discovery-service bootstrap][discovery] or [runtime reconfiguration][reconfig]) a new member, and ignored when restarting an existing member.
`--discovery` prefix flags need to be set when using [discovery service][discovery]. `--discovery` prefix flags need to be set when using [discovery service][discovery].
@ -150,6 +167,11 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ default: "" + default: ""
+ env variable: ETCD_DISCOVERY_SRV + env variable: ETCD_DISCOVERY_SRV
### --discovery-srv-name
+ Suffix to the DNS srv name queried when bootstrapping using DNS.
+ default: ""
+ env variable: ETCD_DISCOVERY_SRV_NAME
### --discovery-fallback ### --discovery-fallback
+ Expected behavior ("exit" or "proxy") when discovery services fails. "proxy" supports v2 API only. + Expected behavior ("exit" or "proxy") when discovery services fails. "proxy" supports v2 API only.
+ default: "proxy" + default: "proxy"
@ -162,7 +184,7 @@ To start etcd automatically using custom settings at startup in Linux, using a [
### --strict-reconfig-check ### --strict-reconfig-check
+ Reject reconfiguration requests that would cause quorum loss. + Reject reconfiguration requests that would cause quorum loss.
+ default: false + default: true
+ env variable: ETCD_STRICT_RECONFIG_CHECK + env variable: ETCD_STRICT_RECONFIG_CHECK
### --auto-compaction-retention ### --auto-compaction-retention
@ -171,7 +193,7 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ env variable: ETCD_AUTO_COMPACTION_RETENTION + env variable: ETCD_AUTO_COMPACTION_RETENTION
### --auto-compaction-mode ### --auto-compaction-mode
+ Interpret 'auto-compaction-retention' as one of: periodic|revision. 'periodic' for duration based retention, defaulting to hours if no time unit is provided (e.g. '5m'). 'revision' for revision number based retention. + Interpret 'auto-compaction-retention' as one of: 'periodic', 'revision'. 'periodic' for duration based retention, defaulting to hours if no time unit is provided (e.g. '5m'). 'revision' for revision number based retention.
+ default: periodic + default: periodic
+ env variable: ETCD_AUTO_COMPACTION_MODE + env variable: ETCD_AUTO_COMPACTION_MODE
@ -241,6 +263,7 @@ The security flags help to [build a secure etcd cluster][security].
+ Enable client cert authentication. + Enable client cert authentication.
+ default: false + default: false
+ env variable: ETCD_CLIENT_CERT_AUTH + env variable: ETCD_CLIENT_CERT_AUTH
+ CN authentication is not supported by gRPC-gateway.
### --client-crl-file ### --client-crl-file
+ Path to the client certificate revocation list file. + Path to the client certificate revocation list file.
@ -266,12 +289,12 @@ The security flags help to [build a secure etcd cluster][security].
+ env variable: ETCD_PEER_CA_FILE + env variable: ETCD_PEER_CA_FILE
### --peer-cert-file ### --peer-cert-file
+ Path to the peer server TLS cert file. + Path to the peer server TLS cert file. This is the cert for peer-to-peer traffic, used both for server and client.
+ default: "" + default: ""
+ env variable: ETCD_PEER_CERT_FILE + env variable: ETCD_PEER_CERT_FILE
### --peer-key-file ### --peer-key-file
+ Path to the peer server TLS key file. + Path to the peer server TLS key file. This is the key for peer-to-peer traffic, used both for server and client.
+ default: "" + default: ""
+ env variable: ETCD_PEER_KEY_FILE + env variable: ETCD_PEER_KEY_FILE
@ -300,8 +323,32 @@ The security flags help to [build a secure etcd cluster][security].
+ default: none + default: none
+ env variable: ETCD_PEER_CERT_ALLOWED_CN + env variable: ETCD_PEER_CERT_ALLOWED_CN
### --cipher-suites
+ Comma-separated list of supported TLS cipher suites between server/client and peers.
+ default: ""
+ env variable: ETCD_CIPHER_SUITES
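For example (one possible pair of Go TLS cipher suite names; adjust the list to the deployment's security policy):
```sh
$ etcd --cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```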
### --experimental-peer-skip-client-san-verification
+ Skip verification of SAN field in client certificate for peer connections.
+ default: false
+ env variable: ETCD_EXPERIMENTAL_PEER_SKIP_CLIENT_SAN_VERIFICATION
## Logging flags ## Logging flags
### --logger
**Available from v3.4**
+ Specify 'zap' for structured logging or 'capnslog'.
+ default: capnslog
+ env variable: ETCD_LOGGER
### --log-outputs
+ Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd, or a list of comma-separated output targets.
+ default: default
+ env variable: ETCD_LOG_OUTPUTS
+ 'default' uses the 'stderr' config for v3.4 during the zap logger migration
### --debug ### --debug
+ Drop the default log level to DEBUG for all subpackages. + Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages) + default: false (INFO for all packages)
@ -319,7 +366,7 @@ For example, it may panic if other members in the cluster are still alive.
Follow the instructions when using these flags. Follow the instructions when using these flags.
### --force-new-cluster ### --force-new-cluster
+ Force to create a new one-member cluster. It commits configuration changes forcing to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore]. + Force to create a new one-member cluster. It commits configuration changes forcing to remove all existing members in the cluster and add itself, but is strongly discouraged. Please review the [disaster recovery][recovery] documentation for preferred v3 recovery procedures.
+ default: false + default: false
+ env variable: ETCD_FORCE_NEW_CLUSTER + env variable: ETCD_FORCE_NEW_CLUSTER
@ -332,33 +379,51 @@ Follow the instructions when using these flags.
### --config-file ### --config-file
+ Load server configuration from a file. + Load server configuration from a file.
+ default: "" + default: ""
+ example: [sample configuration file][sample-config-file]
+ env variable: ETCD_CONFIG_FILE
## Profiling flags ## Profiling flags
### --enable-pprof ### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/" + Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ default: false + default: false
+ env variable: ETCD_ENABLE_PPROF
### --metrics ### --metrics
+ Set level of detail for exported metrics, specify 'extensive' to include histogram metrics. + Set level of detail for exported metrics, specify 'extensive' to include histogram metrics.
+ default: basic + default: basic
+ env variable: ETCD_METRICS
### --listen-metrics-urls ### --listen-metrics-urls
+ List of URLs to listen on for metrics. + List of additional URLs to listen on that will respond to both the `/metrics` and `/health` endpoints.
+ default: "" + default: ""
+ env variable: ETCD_LISTEN_METRICS_URLS
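A sketch of exposing and probing such an additional endpoint (the port is arbitrary):
```sh
$ etcd --listen-metrics-urls=http://127.0.0.1:2378 &

# Both endpoints respond on the additional URL:
$ curl -s http://127.0.0.1:2378/health
$ curl -s http://127.0.0.1:2378/metrics | head
```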
## Auth flags ## Auth flags
### --auth-token ### --auth-token
+ Specify a token type and token specific options, especially for JWT. Its format is "type,var1=val1,var2=val2,...". Possible type is 'simple' or 'jwt'. Possible variables are 'sign-method' for specifying a sign method of jwt (its possible values are 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512', 'RS256', 'RS384', 'RS512', 'PS256', 'PS384', or 'PS512'), 'pub-key' for specifying a path to a public key for verifying jwt, and 'priv-key' for specifying a path to a private key for signing jwt. + Specify a token type and token specific options, especially for JWT. Its format is "type,var1=val1,var2=val2,...". Possible type is 'simple' or 'jwt'. Possible variables are 'sign-method' for specifying a sign method of jwt (its possible values are 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512', 'RS256', 'RS384', 'RS512', 'PS256', 'PS384', or 'PS512'), 'pub-key' for specifying a path to a public key for verifying jwt, 'priv-key' for specifying a path to a private key for signing jwt, and 'ttl' for specifying TTL of jwt tokens.
+ Example option of JWT: '--auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512' + For asymmetric algorithms ('RS', 'PS', 'ES'), the public key is optional, as the private key contains enough information to both sign and verify tokens.
+ Example option of JWT: '--auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512,ttl=10m'
+ default: "simple" + default: "simple"
+ env variable: ETCD_AUTH_TOKEN
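A sketch of wiring up JWT tokens with an RSA key pair (key file names are illustrative):
```sh
# Generate a private key for signing and derive the public key for verification.
$ openssl genrsa -out app.rsa 2048
$ openssl rsa -in app.rsa -pubout -out app.rsa.pub

# Sign tokens with RS512 and expire them after 10 minutes.
$ etcd --auth-token jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS512,ttl=10m
```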
### --bcrypt-cost
+ Specify the cost / strength of the bcrypt algorithm for hashing auth passwords. Valid values are between 4 and 31.
+ default: 10
+ env variable: (not supported)
## Experimental flags ## Experimental flags
### --experimental-backend-bbolt-freelist-type
+ The freelist type that the etcd backend (bbolt) uses (array and map are supported types).
+ default: array
+ env variable: ETCD_EXPERIMENTAL_BACKEND_BBOLT_FREELIST_TYPE
### --experimental-corrupt-check-time ### --experimental-corrupt-check-time
+ Duration of time between cluster corruption check passes + Duration of time between cluster corruption check passes
+ default: 0s + default: 0s
+ env variable: ETCD_EXPERIMENTAL_CORRUPT_CHECK_TIME
[build-cluster]: clustering.md#static [build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md [reconfig]: runtime-configuration.md
@ -369,3 +434,5 @@ Follow the instructions when using these flags.
[security]: security.md [security]: security.md
[systemd-intro]: http://freedesktop.org/wiki/Software/systemd/ [systemd-intro]: http://freedesktop.org/wiki/Software/systemd/
[tuning]: ../tuning.md#time-parameters [tuning]: ../tuning.md#time-parameters
[sample-config-file]: ../../etcd.conf.yml.sample
[recovery]: recovery.md#disaster-recovery


@ -1,4 +1,6 @@
# Run etcd clusters inside containers ---
title: Run etcd clusters inside containers
---
The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static). The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static).
@ -17,14 +19,14 @@ export NODE1=192.168.1.21
Trust the CoreOS [App Signing Key](https://coreos.com/security/app-signing-key/). Trust the CoreOS [App Signing Key](https://coreos.com/security/app-signing-key/).
``` ```
sudo rkt trust --prefix coreos.com/etcd sudo rkt trust --prefix quay.io/coreos/etcd
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E # gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
``` ```
Run the `v3.1.2` version of etcd or specify another release version. Run the `v3.2` version of etcd or specify another release version.
``` ```
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380 sudo rkt run --net=default:IP=${NODE1} quay.io/coreos/etcd:v3.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
``` ```
List the cluster member. List the cluster member.
@ -45,13 +47,13 @@ export NODE3=172.16.28.23
``` ```
# node 1 # node 1
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.1.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380 sudo rkt run --net=default:IP=${NODE1} quay.io/coreos/etcd:v3.2 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 2 # node 2
sudo rkt run --net=default:IP=${NODE2} coreos.com/etcd:v3.1.2 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380 sudo rkt run --net=default:IP=${NODE2} quay.io/coreos/etcd:v3.2 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 3 # node 3
sudo rkt run --net=default:IP=${NODE3} coreos.com/etcd:v3.1.2 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380 sudo rkt run --net=default:IP=${NODE3} quay.io/coreos/etcd:v3.2 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
``` ```
Verify the cluster is healthy and can be reached. Verify the cluster is healthy and can be reached.


@ -43,8 +43,8 @@ ANNOTATIONS {
# alert if more than 1% of gRPC method calls have failed within the last 5 minutes # alert if more than 1% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m])) IF 100 * (sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.01 / sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m]))) > 1
FOR 10m FOR 10m
LABELS { LABELS {
severity = "warning" severity = "warning"
@ -56,8 +56,8 @@ ANNOTATIONS {
# alert if more than 5% of gRPC method calls have failed within the last 5 minutes # alert if more than 5% of gRPC method calls have failed within the last 5 minutes
ALERT HighNumberOfFailedGRPCRequests ALERT HighNumberOfFailedGRPCRequests
IF sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m])) IF 100 * (sum by(grpc_method) (rate(etcd_grpc_requests_failed_total{job="etcd"}[5m]))
/ sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m])) > 0.05 / sum by(grpc_method) (rate(etcd_grpc_total{job="etcd"}[5m]))) > 5
FOR 5m FOR 5m
LABELS { LABELS {
severity = "critical" severity = "critical"
@ -79,47 +79,6 @@ ANNOTATIONS {
description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method }} are slow", description = "on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method }} are slow",
} }
# HTTP requests alerts
# ====================
# alert if more than 1% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.01
FOR 10m
LABELS {
severity = "warning"
}
ANNOTATIONS {
summary = "a high number of HTTP requests are failing",
description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}
# alert if more than 5% of requests to an HTTP endpoint have failed within the last 5 minutes
ALERT HighNumberOfFailedHTTPRequests
IF sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method)
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.05
FOR 5m
LABELS {
severity = "critical"
}
ANNOTATIONS {
summary = "a high number of HTTP requests are failing",
description = "{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}",
}
# alert if the 99th percentile of HTTP requests take more than 150ms
ALERT HTTPRequestsSlow
IF histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m])) > 0.15
FOR 10m
LABELS {
severity = "warning"
}
ANNOTATIONS {
summary = "slow HTTP requests",
description = "on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow",
}
# file descriptor alerts # file descriptor alerts
# ====================== # ======================


@ -1,143 +1,134 @@
# these rules synced manually from https://github.com/etcd-io/etcd/blob/master/Documentation/etcd-mixin/mixin.libsonnet
groups: groups:
- name: etcd3_alert.rules - name: etcd
rules: rules:
- alert: InsufficientMembers - alert: etcdInsufficientMembers
expr: count(up{job="etcd"} == 0) > (count(up{job="etcd"}) / 2 - 1) annotations:
message: 'etcd cluster "{{ $labels.job }}": insufficient members ({{ $value
}}).'
expr: |
sum(up{job=~".*etcd.*"} == bool 1) by (job) < ((count(up{job=~".*etcd.*"}) by (job) + 1) / 2)
for: 3m for: 3m
labels: labels:
severity: critical severity: critical
- alert: etcdNoLeader
annotations: annotations:
description: If one more etcd member goes down the cluster will be unavailable message: 'etcd cluster "{{ $labels.job }}": member {{ $labels.instance }} has
summary: etcd cluster insufficient members no leader.'
- alert: NoLeader expr: |
expr: etcd_server_has_leader{job="etcd"} == 0 etcd_server_has_leader{job=~".*etcd.*"} == 0
for: 1m for: 1m
labels: labels:
severity: critical severity: critical
- alert: etcdHighNumberOfLeaderChanges
annotations: annotations:
description: etcd member {{ $labels.instance }} has no leader message: 'etcd cluster "{{ $labels.job }}": instance {{ $labels.instance }}
summary: etcd member has no leader has seen {{ $value }} leader changes within the last hour.'
- alert: HighNumberOfLeaderChanges expr: |
expr: increase(etcd_server_leader_changes_seen_total{job="etcd"}[1h]) > 3 rate(etcd_server_leader_changes_seen_total{job=~".*etcd.*"}[15m]) > 3
for: 15m
labels: labels:
severity: warning severity: warning
- alert: etcdHighNumberOfFailedGRPCRequests
annotations: annotations:
description: etcd instance {{ $labels.instance }} has seen {{ $value }} leader message: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{
changes within the last hour $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
summary: a high number of leader changes within the etcd cluster are happening expr: |
- alert: HighNumberOfFailedGRPCRequests 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code!="OK"}[5m])) BY (job, instance, grpc_service, grpc_method)
expr: sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method) /
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.01 sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) BY (job, instance, grpc_service, grpc_method)
> 1
for: 10m for: 10m
labels: labels:
severity: warning severity: warning
- alert: etcdHighNumberOfFailedGRPCRequests
annotations: annotations:
description: '{{ $value }}% of requests for {{ $labels.grpc_method }} failed message: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{
on etcd instance {{ $labels.instance }}' $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
summary: a high number of gRPC requests are failing expr: |
- alert: HighNumberOfFailedGRPCRequests 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code!="OK"}[5m])) BY (job, instance, grpc_service, grpc_method)
expr: sum(rate(grpc_server_handled_total{grpc_code!="OK",job="etcd"}[5m])) BY (grpc_service, grpc_method) /
/ sum(rate(grpc_server_handled_total{job="etcd"}[5m])) BY (grpc_service, grpc_method) > 0.05 sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) BY (job, instance, grpc_service, grpc_method)
> 5
for: 5m for: 5m
labels: labels:
severity: critical severity: critical
- alert: etcdGRPCRequestsSlow
annotations: annotations:
description: '{{ $value }}% of requests for {{ $labels.grpc_method }} failed message: 'etcd cluster "{{ $labels.job }}": gRPC requests to {{ $labels.grpc_method
on etcd instance {{ $labels.instance }}' }} are taking {{ $value }}s on etcd instance {{ $labels.instance }}.'
summary: a high number of gRPC requests are failing expr: |
- alert: GRPCRequestsSlow histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job=~".*etcd.*", grpc_type="unary"}[5m])) by (job, instance, grpc_service, grpc_method, le))
expr: histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job="etcd",grpc_type="unary"}[5m])) by (grpc_service, grpc_method, le))
> 0.15 > 0.15
for: 10m for: 10m
labels: labels:
severity: critical severity: critical
- alert: etcdMemberCommunicationSlow
annotations: annotations:
description: on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method message: 'etcd cluster "{{ $labels.job }}": member communication with {{ $labels.To
}} are slow }} is taking {{ $value }}s on etcd instance {{ $labels.instance }}.'
summary: slow gRPC requests expr: |
- alert: HighNumberOfFailedHTTPRequests histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket{job=~".*etcd.*"}[5m]))
expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method) > 0.01
for: 10m
labels:
severity: warning
annotations:
description: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
instance {{ $labels.instance }}'
summary: a high number of HTTP requests are failing
- alert: HighNumberOfFailedHTTPRequests
expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
BY (method) > 0.05
for: 5m
labels:
severity: critical
annotations:
description: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
instance {{ $labels.instance }}'
summary: a high number of HTTP requests are failing
- alert: HTTPRequestsSlow
expr: histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m]))
> 0.15 > 0.15
for: 10m for: 10m
labels: labels:
severity: warning severity: warning
- alert: etcdHighNumberOfFailedProposals
annotations: annotations:
description: on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method message: 'etcd cluster "{{ $labels.job }}": {{ $value }} proposal failures within
}} are slow the last hour on etcd instance {{ $labels.instance }}.'
summary: slow HTTP requests expr: |
- record: instance:fd_utilization rate(etcd_server_proposals_failed_total{job=~".*etcd.*"}[15m]) > 5
expr: process_open_fds / process_max_fds for: 15m
- alert: FdExhaustionClose
expr: predict_linear(instance:fd_utilization[1h], 3600 * 4) > 1
for: 10m
labels: labels:
severity: warning severity: warning
- alert: etcdHighFsyncDurations
annotations: annotations:
description: '{{ $labels.job }} instance {{ $labels.instance }} will exhaust message: 'etcd cluster "{{ $labels.job }}": 99th percentile fsync durations are
its file descriptors soon' {{ $value }}s on etcd instance {{ $labels.instance }}.'
summary: file descriptors soon exhausted expr: |
- alert: FdExhaustionClose histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=~".*etcd.*"}[5m]))
expr: predict_linear(instance:fd_utilization[10m], 3600) > 1
for: 10m
labels:
severity: critical
annotations:
description: '{{ $labels.job }} instance {{ $labels.instance }} will exhaust
its file descriptors soon'
summary: file descriptors soon exhausted
- alert: EtcdMemberCommunicationSlow
expr: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[5m]))
> 0.15
for: 10m
labels:
severity: warning
annotations:
description: etcd instance {{ $labels.instance }} member communication with
{{ $labels.To }} is slow
summary: etcd member communication is slow
- alert: HighNumberOfFailedProposals
expr: increase(etcd_server_proposals_failed_total{job="etcd"}[1h]) > 5
labels:
severity: warning
annotations:
description: etcd instance {{ $labels.instance }} has seen {{ $value }} proposal
failures within the last hour
summary: a high number of proposals within the etcd cluster are failing
- alert: HighFsyncDurations
expr: histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))
> 0.5 > 0.5
for: 10m for: 10m
labels: labels:
severity: warning severity: warning
- alert: etcdHighCommitDurations
annotations: annotations:
description: etcd instance {{ $labels.instance }} fync durations are high message: 'etcd cluster "{{ $labels.job }}": 99th percentile commit durations
summary: high fsync durations {{ $value }}s on etcd instance {{ $labels.instance }}.'
- alert: HighCommitDurations expr: |
expr: histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket{job=~".*etcd.*"}[5m]))
> 0.25 > 0.25
for: 10m for: 10m
labels: labels:
severity: warning severity: warning
- alert: etcdHighNumberOfFailedHTTPRequests
annotations: annotations:
description: etcd instance {{ $labels.instance }} commit durations are high message: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
summary: high commit durations instance {{ $labels.instance }}'
expr: |
sum(rate(etcd_http_failed_total{job=~".*etcd.*", code!="404"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job=~".*etcd.*"}[5m]))
BY (method) > 0.01
for: 10m
labels:
severity: warning
- alert: etcdHighNumberOfFailedHTTPRequests
annotations:
message: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
instance {{ $labels.instance }}.'
expr: |
sum(rate(etcd_http_failed_total{job=~".*etcd.*", code!="404"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job=~".*etcd.*"}[5m]))
BY (method) > 0.05
for: 10m
labels:
severity: critical
- alert: etcdHTTPRequestsSlow
annotations:
message: etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method
}} are slow.
expr: |
histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m]))
> 0.15
for: 10m
labels:
severity: warning


@ -1,4 +1,6 @@
# Failure modes ---
title: Failure modes
---
Failures are common in a large deployment of machines. A machine fails when its hardware or software malfunctions. Multiple machines fail together when there are power failures or network issues. Multiple kinds of failures can also happen at once; it is almost impossible to enumerate all possible failure cases. Failures are common in a large deployment of machines. A machine fails when its hardware or software malfunctions. Multiple machines fail together when there are power failures or network issues. Multiple kinds of failures can also happen at once; it is almost impossible to enumerate all possible failure cases.


@ -1,10 +1,12 @@
# etcd gateway ---
title: etcd gateway
---
## What is etcd gateway ## What is etcd gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses. etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses. It does not terminate TLS connections, do TLS handshakes on behalf of its clients, or verify if the connection is secured.
The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available enpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future. The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available endpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
## When to use etcd gateway ## When to use etcd gateway
@ -60,7 +62,7 @@ infra2.example.com. 300 IN A 10.0.1.12
Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the command: Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the command:
```bash ```bash
$ etcd gateway --discovery-srv=example.com $ etcd gateway start --discovery-srv=example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...] 2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
``` ```
@ -72,7 +74,7 @@ $ etcd gateway --discovery-srv=example.com
* Comma-separated list of etcd server targets for forwarding client connections. * Comma-separated list of etcd server targets for forwarding client connections.
* Default: `127.0.0.1:2379` * Default: `127.0.0.1:2379`
* Invalid example: `https://127.0.0.1:2379` (gateway does not terminate TLS) * Invalid example: `https://127.0.0.1:2379` (gateway does not terminate TLS). Note that the gateway does not verify the HTTP scheme or inspect the requests; it only forwards requests to the given endpoints.
#### --discovery-srv #### --discovery-srv
@ -101,5 +103,5 @@ $ etcd gateway --discovery-srv=example.com
#### --trusted-ca-file #### --trusted-ca-file
* Path to the client TLS CA file for the etcd cluster. Used to authenticate endpoints. * Path to the client TLS CA file for the etcd cluster, used to verify the endpoints returned from SRV discovery. Note that it is ONLY used for authenticating the discovered endpoints rather than creating connections for data transfer. The gateway never terminates TLS connections or creates TLS connections on behalf of its clients.
* Default: (not set) * Default: (not set)

File diff suppressed because it is too large


@ -1,4 +1,6 @@
# gRPC proxy ---
title: gRPC proxy
---
The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests. The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests.
@ -85,7 +87,7 @@ Start the etcd gRPC proxy to use these static endpoints with the command:
$ etcd grpc-proxy start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com --listen-addr=127.0.0.1:2379 $ etcd grpc-proxy start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com --listen-addr=127.0.0.1:2379
``` ```
The etcd gRPC proxy starts and listens on port 8080. It forwards client requests to one of the three endpoints provided above. The etcd gRPC proxy starts and listens on port 2379. It forwards client requests to one of the three endpoints provided above.
Sending requests through the proxy: Sending requests through the proxy:
@ -194,7 +196,7 @@ $ ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 get my-prefix/my-key
## TLS termination ## TLS termination
Terminate TLS from a secure etcd cluster with the grpc proxy by serving an unencrypted local endpoint. Terminate TLS from a secure etcd cluster with the gRPC proxy by serving an unencrypted local endpoint.
To try it out, start a single member etcd cluster with client https: To try it out, start a single member etcd cluster with client https:
@ -211,7 +213,7 @@ $ ETCDCTL_API=3 etcdctl --endpoints=http://localhost:2379 endpoint status
$ ETCDCTL_API=3 etcdctl --endpoints=https://localhost:2379 --cert=client.crt --key=client.key --cacert=ca.crt endpoint status $ ETCDCTL_API=3 etcdctl --endpoints=https://localhost:2379 --cert=client.crt --key=client.key --cacert=ca.crt endpoint status
``` ```
Next, start a grpc proxy on `localhost:12379` by connecting to the etcd endpoint `https://localhost:2379` using the client certificates: Next, start a gRPC proxy on `localhost:12379` by connecting to the etcd endpoint `https://localhost:2379` using the client certificates:
```sh ```sh
$ etcd grpc-proxy start --endpoints=https://localhost:2379 --listen-addr localhost:12379 --cert client.crt --key client.key --cacert=ca.crt --insecure-skip-tls-verify & $ etcd grpc-proxy start --endpoints=https://localhost:2379 --listen-addr localhost:12379 --cert client.crt --key client.key --cacert=ca.crt --insecure-skip-tls-verify &
@ -223,3 +225,28 @@ Finally, test the TLS termination by putting a key into the proxy over http:
$ ETCDCTL_API=3 etcdctl --endpoints=http://localhost:12379 put abc def $ ETCDCTL_API=3 etcdctl --endpoints=http://localhost:12379 put abc def
# OK # OK
``` ```
## Metrics and Health
The gRPC proxy exposes `/health` and Prometheus `/metrics` endpoints for the etcd members defined by `--endpoints`. Alternatively, define an additional URL that will respond to both the `/metrics` and `/health` endpoints with the `--metrics-addr` flag.
```bash
$ etcd grpc-proxy start \
--endpoints https://localhost:2379 \
--metrics-addr https://0.0.0.0:4443 \
--listen-addr 127.0.0.1:23790 \
--key client.key \
--key-file proxy-server.key \
--cert client.crt \
--cert-file proxy-server.crt \
--cacert ca.pem \
--trusted-ca-file proxy-ca.pem
```
### Known issue
The main interface of the proxy serves both HTTP/2 and HTTP/1.1. If the proxy is set up with TLS as shown in the above example, a client such as cURL talking to the listening interface must explicitly set the protocol to HTTP/1.1 on requests to `/metrics` or `/health`. When using the `--metrics-addr` flag, the secondary interface does not have this requirement.
```bash
$ curl --cacert proxy-ca.pem --key proxy-client.key --cert proxy-client.crt https://127.0.0.1:23790/metrics --http1.1
```


@ -1,4 +1,6 @@
# Hardware recommendations ---
title: Hardware recommendations
---
etcd usually runs well with limited resources for development or testing purposes; it's common to develop with etcd on a laptop or a cheap cloud machine. However, when running etcd clusters in production, some hardware guidelines are useful for proper administration. These suggestions are not hard rules; they serve as a good starting point for a robust production deployment. As always, deployments should be tested with simulated workloads before running in production. etcd usually runs well with limited resources for development or testing purposes; it's common to develop with etcd on a laptop or a cheap cloud machine. However, when running etcd clusters in production, some hardware guidelines are useful for proper administration. These suggestions are not hard rules; they serve as a good starting point for a robust production deployment. As always, deployments should be tested with simulated workloads before running in production.
@ -48,7 +50,7 @@ Example application workload: A 50-node Kubernetes cluster
| Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) | | Provider | Type | vCPUs | Memory (GB) | Max concurrent IOPS | Disk bandwidth (MB/s) |
|----------|------|-------|--------|------|----------------| |----------|------|-------|--------|------|----------------|
| AWS | m4.large | 2 | 8 | 3600 | 56.25 | | AWS | m4.large | 2 | 8 | 3600 | 56.25 |
| GCE | n1-standard-1 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 | | GCE | n1-standard-2 + 50GB PD SSD | 2 | 7.5 | 1500 | 25 |
### Medium cluster ### Medium cluster


@ -1,4 +1,6 @@
# Maintenance ---
title: Maintenance
---
## Overview ## Overview
@ -6,25 +8,27 @@ An etcd cluster needs periodic maintenance to remain reliable. Depending on an e
All etcd maintenance manages storage resources consumed by the etcd keyspace. Failure to adequately control the keyspace size is guarded by storage space quotas; if an etcd member runs low on space, a quota will trigger cluster-wide alarms which will put the system into a limited-operation maintenance mode. To avoid running out of space for writes to the keyspace, the etcd keyspace history must be compacted. Storage space itself may be reclaimed by defragmenting etcd members. Finally, periodic snapshot backups of etcd member state makes it possible to recover any unintended logical data loss or corruption caused by operational error. All etcd maintenance manages storage resources consumed by the etcd keyspace. Failure to adequately control the keyspace size is guarded by storage space quotas; if an etcd member runs low on space, a quota will trigger cluster-wide alarms which will put the system into a limited-operation maintenance mode. To avoid running out of space for writes to the keyspace, the etcd keyspace history must be compacted. Storage space itself may be reclaimed by defragmenting etcd members. Finally, periodic snapshot backups of etcd member state makes it possible to recover any unintended logical data loss or corruption caused by operational error.
## History compaction ## Raft log retention
`etcd --snapshot-count` configures the number of applied Raft entries to hold in memory before compaction. When the `--snapshot-count` threshold is reached, the server first persists snapshot data onto disk, and then truncates old entries. When a slow follower requests logs from before a compacted index, the leader sends the snapshot, forcing the follower to overwrite its state.
A higher `--snapshot-count` holds more Raft entries in memory until the snapshot, causing [recurrent higher memory usage](https://github.com/kubernetes/kubernetes/issues/60589#issuecomment-371977156). Since the leader retains the latest Raft entries for longer, a slow follower has more time to catch up before a leader snapshot. `--snapshot-count` is a tradeoff between higher memory usage and better availability for slow followers.
Since v3.2, the default value of `--snapshot-count` has [changed from 10,000 to 100,000](https://github.com/etcd-io/etcd/pull/7160).
Performance-wise, a `--snapshot-count` greater than 100,000 may impact write throughput. A higher number of in-memory objects can slow down the [Go GC mark phase `runtime.scanobject`](https://golang.org/src/runtime/mgc.go), and infrequent memory reclamation makes allocation slow. Performance varies depending on the workloads and system environments. In general, too-frequent compaction hurts cluster availability and write throughput, while too-infrequent compaction places excessive pressure on the Go garbage collector. See https://www.slideshare.net/mitakeh/understanding-performance-aspects-of-etcd-and-raft for more research results.
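A sketch of the tradeoff expressed as flags (values illustrative):
```sh
# Fewer retained entries: lower memory, but slow followers fall back to snapshots sooner.
$ etcd --snapshot-count=10000

# More retained entries (the v3.2+ default): higher memory, more catch-up headroom.
$ etcd --snapshot-count=100000
```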
## History compaction: v3 API Key-Value Database
Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace. Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace.
The keyspace can be compacted automatically with `etcd`'s time windowed history retention policy, or manually with `etcdctl`. The `etcdctl` method provides fine-grained control over the compacting process whereas automatic compacting fits applications that only need key history for some length of time. The keyspace can be compacted automatically with `etcd`'s time windowed history retention policy, or manually with `etcdctl`. The `etcdctl` method provides fine-grained control over the compacting process whereas automatic compacting fits applications that only need key history for some length of time.
`etcd` can be set to automatically compact the keyspace with the `--auto-compaction` option with a period of hours:
```sh
# keep one hour of history
$ etcd --auto-compaction-retention=1
```
An `etcdctl` initiated compaction works as follows: An `etcdctl` initiated compaction works as follows:
```sh ```sh
# compact up to revision 3 # compact up to revision 3
$ etcdctl compact 3 $ etcdctl compact 3
``` ```
Revisions prior to the compaction revision become inaccessible: Revisions prior to the compaction revision become inaccessible:
@ -34,11 +38,43 @@ $ etcdctl get --rev=2 somekey
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted
``` ```
### Auto Compaction
`etcd` can be set to automatically compact the keyspace with the `--auto-compaction-*` option with a period of hours:
```sh
# keep one hour of history
$ etcd --auto-compaction-retention=1
```
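The two modes described below can also be set explicitly, for example (values drawn from the discussion that follows):
```sh
# Periodic: keep a 72-hour window of history.
$ etcd --auto-compaction-mode=periodic --auto-compaction-retention=72h

# Revision-based: always retain the latest 1000 revisions.
$ etcd --auto-compaction-mode=revision --auto-compaction-retention=1000
```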
[v3.0.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.0.md) and [v3.1.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.1.md) with `--auto-compaction-retention=10` run periodic compaction on the v3 key-value store every 10 hours. The compactor only supports periodic compaction. It records the latest revisions every 5 minutes, until it reaches the first compaction period (e.g. 10 hours). In order to retain the key-value history of the last compaction period, it uses the last revision that was fetched before the compaction period, from the revision records that were collected every 5 minutes. When `--auto-compaction-retention=10`, the compactor uses revision 100 as the compaction revision, where revision 100 is the latest revision fetched 10 hours ago. If compaction succeeds or the requested revision has already been compacted, it resets the period timer and starts over with new historical revision records (e.g. restarts revision collection and compaction for the next 10-hour period). If compaction fails, it retries in 5 minutes.
The [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md) compactor runs [every hour](https://github.com/etcd-io/etcd/pull/7875). The compactor only supports periodic compaction and continues to record the latest revisions every 5 minutes. Every hour, it uses the last revision that was fetched before the compaction period, from the revision records that were collected every 5 minutes. That is, every hour, the compactor discards historical data created before the compaction period, and the retention window moves forward one hour. For instance, when hourly writes are 100 and `--auto-compaction-retention=10`, v3.1 compacts revisions 1000, 2000, and 3000 every 10 hours, while v3.2.x, v3.3.0, v3.3.1, and v3.3.2 compact revisions 1000, 1100, and 1200 every hour. If compaction succeeds or the requested revision has already been compacted, it removes the used compacted revision from the historical revision records (e.g. starts the next revision collection and compaction from the previously collected revisions). If compaction fails, it retries in 5 minutes.
In [v3.3.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), [v3.3.1](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), and [v3.3.2](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), `--auto-compaction-mode=revision --auto-compaction-retention=1000` automatically runs `Compact` on `"latest revision" - 1000` every 5 minutes (when the latest revision is 30000, it compacts on revision 29000). For instance, `--auto-compaction-mode=periodic --auto-compaction-retention=72h` automatically runs `Compact` with a 72-hour retention window, every 7.2 hours, and `--auto-compaction-mode=periodic --auto-compaction-retention=30m` runs `Compact` with a 30-minute retention window, every 3 minutes. The periodic compactor continues to record the latest revisions every 1/10 of the given compaction period (e.g. every 1 hour when `--auto-compaction-mode=periodic --auto-compaction-retention=10h`). For every 1/10 of the given compaction period, the compactor uses the last revision that was fetched before the compaction period to discard historical data, and the retention window moves forward by 1/10 of the period. For instance, when hourly writes are 100 and `--auto-compaction-retention=10`, v3.1 compacts revisions 1000, 2000, and 3000 every 10 hours, while v3.2.x, v3.3.0, v3.3.1, and v3.3.2 compact revisions 1000, 1100, and 1200 every hour. Furthermore, when writes per minute are 1000, v3.3.0, v3.3.1, and v3.3.2 with `--auto-compaction-mode=periodic --auto-compaction-retention=30m` compact revisions 30000, 33000, and 36000 every 3 minutes, with finer granularity.
When `--auto-compaction-retention=10h`, etcd first waits 10 hours for the first compaction, and then compacts every hour (1/10 of 10 hours) afterwards, like this:
```
0Hr (rev = 1)
1hr (rev = 10)
...
8hr (rev = 80)
9hr (rev = 90)
10hr (rev = 100, Compact(1))
11hr (rev = 110, Compact(10))
...
```
Whether compaction succeeds or not, this process repeats every 1/10 of the given compaction period. If compaction succeeds, it just removes the compacted revision from the historical revision records.
In [v3.3.3](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md), `--auto-compaction-mode=revision --auto-compaction-retention=1000` automatically runs `Compact` on `"latest revision" - 1000` every 5 minutes (when the latest revision is 30000, it compacts on revision 29000). Previously, `--auto-compaction-mode=periodic --auto-compaction-retention=72h` automatically ran `Compact` with a 72-hour retention window every 7.2 hours. **Now, `Compact` happens every 1 hour, but still with a 72-hour retention window.** Previously, `--auto-compaction-mode=periodic --auto-compaction-retention=30m` automatically ran `Compact` with a 30-minute retention window every 3 minutes. **Now, `Compact` happens every 30 minutes, but still with a 30-minute retention window.** The periodic compactor keeps recording the latest revisions every compaction period when the given period is less than 1 hour, or every 1 hour when the given compaction period is greater than 1 hour (e.g. every 1 hour when `--auto-compaction-mode=periodic --auto-compaction-retention=24h`). For every compaction period or 1 hour, the compactor uses the last revision that was fetched before the compaction period to discard historical data, and the retention window moves forward by the given compaction period or hour. For instance, when hourly writes are 100 and `--auto-compaction-mode=periodic --auto-compaction-retention=24h`, `v3.2.x`, `v3.3.0`, `v3.3.1`, and `v3.3.2` compact revisions 2400, 2640, and 2880 every 2.4 hours, while `v3.3.3` *or later* compacts revisions 2400, 2500, and 2600 every 1 hour. Furthermore, when `--auto-compaction-mode=periodic --auto-compaction-retention=30m` and writes per minute are about 1000, `v3.3.0`, `v3.3.1`, and `v3.3.2` compact revisions 30000, 33000, and 36000 every 3 minutes, while `v3.3.3` *or later* compacts revisions 30000, 60000, and 90000 every 30 minutes.
## Defragmentation ## Defragmentation
After compacting the keyspace, the backend database may exhibit internal fragmentation. Any internal fragmentation is space that is free to use by the backend but still consumes storage space. Compacting old revisions internally fragments `etcd` by leaving gaps in the backend database. Fragmented space is available for use by `etcd` but unavailable to the host filesystem. In other words, deleting application data does not reclaim the space on disk.

The process of defragmentation releases this storage space back to the file system. Defragmentation is issued on a per-member basis so that cluster-wide latency spikes may be avoided.
To defragment an etcd member, use the `etcdctl defrag` command:
```
$ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
```
**Note that defragmentation of a live member blocks the system from reading and writing data while it rebuilds its state.**
**Note that a defragmentation request is not replicated over the cluster; it is applied only to the local node. Specify all members in the `--endpoints` flag, or use the `--cluster` flag to automatically find all cluster members.**
Run defragment operations for all endpoints in the cluster associated with the default endpoint:
```bash
$ etcdctl defrag --cluster
Finished defragmenting etcd member[http://127.0.0.1:2379]
Finished defragmenting etcd member[http://127.0.0.1:22379]
Finished defragmenting etcd member[http://127.0.0.1:32379]
```
To defragment an etcd data directory directly, while etcd is not running, use the command:
``` sh
$ etcdctl defrag --data-dir <path-to-etcd-data-dir>
```
```sh
$ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
...
# confirm alarm is raised
$ ETCDCTL_API=3 etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data and defragmenting the backend database will put the cluster back within the quota limits:

```sh
# get current revision
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
# compact away all old revisions
$ ETCDCTL_API=3 etcdctl compact $rev
compacted revision 1516
# defragment away excessive space
$ ETCDCTL_API=3 etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ ETCDCTL_API=3 etcdctl put newkey 123
OK
```
The metric `etcd_mvcc_db_total_size_in_use_in_bytes` indicates the actual database usage after a history compaction, while `etcd_debugging_mvcc_db_total_size_in_bytes` shows the database size including free space waiting for defragmentation. The latter increases only when the former is close to it, meaning when both of these metrics are close to the quota, a history compaction is required to avoid triggering the space quota.
`etcd_debugging_mvcc_db_total_size_in_bytes` is renamed to `etcd_mvcc_db_total_size_in_bytes` from v3.4.
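As a quick check, both metrics can be scraped from a member's `/metrics` endpoint; the endpoint address below is illustrative:

```bash
# compare logical usage against the physical database size on one member
$ curl -s http://127.0.0.1:2379/metrics | grep 'db_total_size'
```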
## Snapshot backup
Snapshotting the `etcd` cluster on a regular basis serves as a durable backup for an etcd keyspace. By taking periodic snapshots of an etcd member's backend database, an `etcd` cluster can be recovered to a point in time with a known good state.
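For example, a snapshot can be taken from a live member with `etcdctl snapshot save` (the endpoint below is illustrative):

```bash
$ ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 snapshot save backup.db
```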
```sh
$ etcdctl --write-out=table snapshot status backup.db
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| fe01cf57 |       10 |          7 | 2.1 MB     |
+----------+----------+------------+------------+
```

---
title: Monitoring etcd
---
Each etcd server provides local monitoring information on its client port through http endpoints. The monitoring data is useful for both system health checking and cluster debugging.
```
Showing top 10 nodes out of 157 (cum >= 10ms)
      flat  flat%   sum%        cum   cum%
     130ms 27.08% 27.08%      130ms 27.08%  runtime.futex
      70ms 14.58% 41.67%       70ms 14.58%  syscall.Syscall
      20ms  4.17% 45.83%       20ms  4.17%  github.com/coreos/etcd/vendor/golang.org/x/net/http2/hpack.huffmanDecode
      20ms  4.17% 50.00%       30ms  6.25%  runtime.pcvalue
      20ms  4.17% 54.17%       50ms 10.42%  runtime.schedule
      10ms  2.08% 56.25%       10ms  2.08%  github.com/coreos/etcd/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).AuthInfoFromCtx
      10ms  2.08% 58.33%       10ms  2.08%  github.com/coreos/etcd/vendor/github.com/coreos/etcd/etcdserver.(*EtcdServer).Lead
      10ms  2.08% 60.42%       10ms  2.08%  github.com/coreos/etcd/vendor/github.com/coreos/etcd/pkg/wait.(*timeList).Trigger
      10ms  2.08% 62.50%       10ms  2.08%  github.com/coreos/etcd/vendor/github.com/prometheus/client_golang/prometheus.(*MetricVec).hashLabelValues
      10ms  2.08% 64.58%       10ms  2.08%  github.com/coreos/etcd/vendor/golang.org/x/net/http2.(*Framer).WriteHeaders
```
The `/debug/requests` endpoint gives gRPC traces and performance statistics through a web browser. For example, it can show the trace of a `Range` request for the key `abc`.
## Metrics endpoint

Each etcd server exports metrics under the `/metrics` path on its client port and optionally on locations given by `--listen-metrics-urls`.

The metrics can be fetched with `curl`:
```sh
$ curl -L http://localhost:2379/metrics
...
etcd_disk_backend_commit_duration_seconds_bucket{le="0.016"} 406464
...
```
## Health Check
Since v3.3.0, in addition to responding to the `/metrics` endpoint, any locations specified by `--listen-metrics-urls` will also respond to the `/health` endpoint. This can be useful if the standard endpoint is configured with mutual (client) TLS authentication, but a load balancer or monitoring service still needs access to the health check.
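As a sketch, assuming a plaintext metrics listener on port 2381 (the port is an assumption for illustration; the exact response body may vary by version), the health endpoint can be probed without client certificates:

```bash
# expose /metrics and /health on a separate plaintext port
$ etcd --listen-metrics-urls=http://127.0.0.1:2381 ...

# probe health without presenting a client certificate
$ curl http://127.0.0.1:2381/health
{"health":"true"}
```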
## Prometheus

Running a [Prometheus][prometheus] monitoring service is the easiest way to ingest and record etcd's metrics.
Then import the default [etcd dashboard template][template] and customize. For instance, if the Prometheus data source name is `my-etcd`, the `datasource` field values in JSON also need to be `my-etcd`.
Sample dashboard:

![](./etcd-sample-grafana.png)
[prometheus]: https://prometheus.io/
[grafana]: http://grafana.org/
[template]: ./grafana.json

---
title: Performance
---
## Understanding performance

etcd provides stable, sustained high performance. Two factors define performance: latency and throughput. Latency is the time taken to complete an operation. Throughput is the total operations completed within some time period. Usually average latency increases as the overall throughput increases when etcd accepts concurrent client requests. In common cloud environments, like a standard `n-4` on Google Compute Engine (GCE) or a comparable machine type on AWS, a three member etcd cluster finishes a request in less than one millisecond under light load, and can complete more than 30,000 requests per second under heavy load.

etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanent storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
There are other sub-systems which impact the overall performance of etcd. Each serialized etcd request must run through etcd's boltdb-backed MVCC storage engine, which usually takes tens of microseconds to finish. Periodically etcd incrementally snapshots its recently applied requests, merging them back with the previous on-disk snapshot. This process may lead to a latency spike. Although this is usually not a problem on SSDs, it may double the observed latency on HDD. Likewise, inflight compactions can impact etcd's performance. Fortunately, the impact is often insignificant since the compaction is staggered so it does not compete for resources with regular requests. The RPC system, gRPC, gives etcd a well-defined, extensible API, but it also introduces additional latency, especially for local reads.
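These factors can be measured with the `benchmark` tool shipped in the etcd repository under `tools/benchmark`; the endpoint and flag values below are illustrative, not tuned recommendations:

```bash
# measure put throughput with 1,000 clients over 100 gRPC connections
$ benchmark --endpoints=127.0.0.1:2379 --conns=100 --clients=1000 \
    put --key-size=8 --sequential-keys --total=100000 --val-size=256
```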

---
title: Disaster recovery
---
etcd is designed to withstand machine failures. An etcd cluster automatically recovers from temporary failures (e.g., machine reboots) and tolerates up to *(N-1)/2* permanent failures for a cluster of N members. When a member permanently fails, whether due to hardware failure or disk corruption, it loses access to the cluster. If the cluster permanently loses more than *(N-1)/2* members then it disastrously fails, irrevocably losing quorum. Once quorum is lost, the cluster cannot reach consensus and therefore cannot continue accepting updates.
Now the restored etcd cluster should be available and serving the keyspace given by the snapshot.
## Restoring a cluster from membership mis-reconfiguration with wrong URLs
Previously, etcd panicked on [membership mis-reconfiguration with wrong URLs](https://github.com/etcd-io/etcd/issues/9173) (v3.2.15 or later returns an [error early on the client side](https://github.com/etcd-io/etcd/pull/9174) before the etcd server panics).

The recommended way is to restore from a [snapshot](#snapshotting-the-keyspace). `--force-new-cluster` can be used to overwrite the cluster membership while keeping existing application data, but it is strongly discouraged because it will panic if other members from the previous cluster are still alive. Make sure to save snapshots periodically.
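A rough sketch of both recovery paths, with illustrative paths and no other flags shown:

```bash
# preferred: rebuild the member from a previously saved snapshot
$ ETCDCTL_API=3 etcdctl snapshot restore backup.db --data-dir /var/lib/etcd-restored

# discouraged: keep the data but discard the old membership, forming a one-member cluster
$ etcd --force-new-cluster --data-dir /var/lib/etcd
```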

---
title: Runtime reconfiguration
---
etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.
`etcdctl` adds a new member to the cluster by specifying the member's [name][conf-name] and [advertised peer URLs][conf-adv-peer]:

```sh
$ etcdctl member add infra3 --peer-urls=http://10.0.1.13:2380
added member 9bf1b35fc7761a23 to cluster

ETCD_NAME="infra3"
...
```

---
title: Design of runtime reconfiguration
---
Runtime reconfiguration is one of the hardest and most error prone features in a distributed system, especially in a consensus based system like etcd.
## Two phase config changes keep the cluster safe

In etcd, every runtime reconfiguration has to go through [two phases][add-member] for safety reasons. For example, to add a member, first inform the cluster of the new configuration and then start the new member.

Phase 1 - Inform cluster of new configuration

To add a member into an etcd cluster, make an API call to request a new member to be added to the cluster. This is the only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.

Phase 2 - Start new member
If a cluster permanently loses a majority of its members, a new cluster will need to be started from an old data directory to recover the previous state.
It is entirely possible to force removing the failed members from the existing cluster to recover. However, we decided not to support this method since it bypasses the normal consensus committing phase, which is unsafe. If the member to remove is not actually dead, or is force removed through different members in the same cluster, etcd will end up with a diverged cluster with the same clusterID. This is very dangerous and hard to debug/fix afterwards.

With a correct deployment, the possibility of permanent majority loss is very low. But it is a severe enough problem that it is worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and preparing for permanent majority loss before putting etcd into production.
## Do not use public discovery service for runtime reconfiguration

The public discovery service should only be used for bootstrapping a cluster. To join a member into an existing cluster, use the runtime reconfiguration API.

The discovery service is designed for bootstrapping an etcd cluster in a cloud environment, when the IP addresses of all the members are not known beforehand. After successfully bootstrapping a cluster, the IP addresses of all the members are known. Technically, the discovery service should no longer be needed.

It may seem that the public discovery service is a convenient way to do runtime reconfiguration, since the discovery service already has all the cluster configuration information. However, relying on a public discovery service brings trouble:

1. it introduces external dependencies for the entire life-cycle of the cluster, not just bootstrap time. If there is a network issue between the cluster and the public discovery service, the cluster will suffer from it.
2. the public discovery service must reflect the correct runtime configuration of the cluster during its life-cycle. It has to provide security mechanisms to avoid bad actions, and that is hard.
3. the public discovery service has to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.

---
title: Transport security model
---
etcd supports automatic TLS as well as authentication through client certificates, both for clients to server as well as peer (server to server / cluster) communication. **Note that etcd doesn't enable [RBAC based authentication][auth] or the authentication feature in the transport layer by default, to reduce friction for users getting started with the database. Further, changing this default would be a breaking change for a project that has been established since 2013. An etcd cluster which doesn't enable security features can expose its data to any client.**
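As a minimal sketch of switching on RBAC once a cluster is up (see the [authentication guide][auth]; a password prompt follows the first command):

```bash
# create the root user, then enable authentication cluster-wide
$ etcdctl user add root
$ etcdctl auth enable
```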
To get up and running, first have a CA certificate and a signed key pair for one member. It is recommended to create and sign a new key pair for every member in a cluster.
If either a client-to-server or peer certificate is supplied, the key must also be set. All of these configuration options are also available through the environment variables, `ETCD_CA_FILE`, `ETCD_PEER_CA_FILE` and so on.
`--cipher-suites`: Comma-separated list of supported TLS cipher suites between server/client and peers (empty will be auto-populated by Go). Available from v3.2.22+, v3.3.7+, and v3.4+.
## Example 1: Client-to-server transport security with HTTPS

For this, have a CA certificate (`ca.crt`) and signed key pair (`server.crt`, `server.key`) ready.
Specify cipher suites to block [weak TLS cipher suites](https://github.com/etcd-io/etcd/issues/8320).
The TLS handshake will fail when the client hello is requested with invalid cipher suites.
For instance:
```bash
$ etcd \
--cert-file ./server.crt \
--key-file ./server.key \
--trusted-ca-file ./ca.crt \
--cipher-suites TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```
Then, client requests must specify one of the cipher suites accepted by the server:
```bash
# valid cipher suite
$ curl \
--cacert ./ca.crt \
--cert ./server.crt \
--key ./server.key \
-L [CLIENT-URL]/metrics \
--ciphers ECDHE-RSA-AES128-GCM-SHA256
# request succeeds
etcd_server_version{server_version="3.2.22"} 1
...
```
```bash
# invalid cipher suite
$ curl \
--cacert ./ca.crt \
--cert ./server.crt \
--key ./server.key \
-L [CLIENT-URL]/metrics \
--ciphers ECDHE-RSA-DES-CBC3-SHA
# request fails with
(35) error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
```
## Example 3: Transport security & client certificates in a cluster

etcd supports the same model as above for **peer communication**, that is, the communication between etcd members in a cluster.
## Notes for TLS authentication

Since [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [TLS certificates get reloaded on every client connection](https://github.com/etcd-io/etcd/pull/7829). This is useful when replacing expired certs without stopping etcd servers; it can be done by overwriting old certs with new ones. Refreshing certs for every connection should not have too much overhead, but can be improved in the future with a caching layer. Example tests can be found [here](https://github.com/coreos/etcd/blob/b041ce5d514a4b4aaeefbffb008f0c7570a18986/integration/v3_grpc_test.go#L1601-L1757).

Since [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [server denies incoming peer certs with wrong IP `SAN`](https://github.com/etcd-io/etcd/pull/7687). For instance, if a peer cert contains any IP addresses in the Subject Alternative Name (SAN) field, the server authenticates a peer only when the remote IP address matches one of those IP addresses. This is to prevent unauthorized endpoints from joining the cluster. For example, peer B's CSR (with `cfssl`) is:
```json
{
  "CN": "etcd peer",
  "hosts": [
    "10.138.0.27"
  ],
```
when peer B's actual IP address is `10.138.0.2`, not `10.138.0.27`. When peer B tries to join the cluster, peer A will reject B with the error `x509: certificate is valid for 10.138.0.27, not 10.138.0.2`, because B's remote IP address does not match the one in the Subject Alternative Name (SAN) field.

Since [v3.2.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v320-2017-06-09), [server resolves TLS `DNSNames` when checking `SAN`](https://github.com/etcd-io/etcd/pull/7767). For instance, if a peer cert contains only DNS names (no IP addresses) in the Subject Alternative Name (SAN) field, the server authenticates a peer only when forward-lookups (`dig b.com`) on those DNS names have a matching IP with the remote IP address. For example, peer B's CSR (with `cfssl`) is:
```json
{
  "CN": "etcd peer",
  "hosts": [
    "b.com"
  ],
```
when peer B's remote IP address is `10.138.0.2`. When peer B tries to join the cluster, peer A looks up the incoming host `b.com` to get the list of IP addresses (e.g. `dig b.com`), and rejects B if the list does not contain the IP `10.138.0.2`, with the error `tls: 10.138.0.2 does not match any of DNSNames ["b.com"]`.

Since [v3.2.2](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v322-2017-07-07), [server accepts connections if IP matches, without checking DNS entries](https://github.com/etcd-io/etcd/pull/8223). For instance, if a peer cert contains IP addresses and DNS names in the Subject Alternative Name (SAN) field, and the remote IP address matches one of those IP addresses, the server just accepts the connection without further checking the DNS names. For example, peer B's CSR (with `cfssl`) is:
```json
{
  "CN": "etcd peer",
  "hosts": [
    "invalid.domain",
    "10.138.0.2"
  ],
```
when peer B's remote IP address is `10.138.0.2` and `invalid.domain` is an invalid host. When peer B tries to join the cluster, peer A successfully authenticates B, since the Subject Alternative Name (SAN) field has a valid matching IP address. See [issue#8206](https://github.com/etcd-io/etcd/issues/8206) for more detail.

Since [v3.2.5](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md#v325-2017-08-04), [server supports reverse-lookup on wildcard DNS `SAN`](https://github.com/etcd-io/etcd/pull/8281). For instance, if a peer cert contains only DNS names (no IP addresses) in the Subject Alternative Name (SAN) field, the server first reverse-lookups the remote IP address to get a list of names mapping to that address (e.g. `nslookup IPADDR`). Then it accepts the connection if those names have a matching name with the peer cert's DNS names (either by exact or wildcard match). If none is matched, the server forward-lookups each DNS entry in the peer cert (e.g. look up `example.default.svc` when the entry is `*.example.default.svc`), and accepts the connection only when the host's resolved addresses have a matching IP address with the peer's remote IP address. For example, peer B's CSR (with `cfssl`) is:
```json
{
  "CN": "etcd peer",
  "hosts": [
    "*.example.default.svc",
    "*.example.default.svc.cluster.local"
  ],
```
when peer B's remote IP address is `10.138.0.2`. When peer B tries to join the cluster, peer A reverse-lookups the IP `10.138.0.2` to get the list of host names, and matches the host names (either exactly or by wildcard) against peer B's cert DNS names in the Subject Alternative Name (SAN) field. If none of the reverse/forward lookups work, it returns the error `"tls: "10.138.0.2" does not match any of DNSNames ["*.example.default.svc","*.example.default.svc.cluster.local"]`. See [issue#8268](https://github.com/etcd-io/etcd/issues/8268) for more detail.
[v3.3.0](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md) adds the [`etcd --peer-cert-allowed-cn`](https://github.com/etcd-io/etcd/pull/8616) flag to support [CN(Common Name)-based auth for inter-peer connections](https://github.com/etcd-io/etcd/issues/8262). Kubernetes TLS bootstrapping involves generating dynamic certificates for etcd members and other system components (e.g. API server, kubelet, etc.). Maintaining different CAs for each component provides tighter access control to the etcd cluster but is often tedious. When the `--peer-cert-allowed-cn` flag is specified, a node can only join with a matching common name, even with shared CAs. For example, each member in a 3-node cluster is set up with CSRs (with `cfssl`) as below:
```json
{
"CN": "etcd.local",
"hosts": [
"m1.etcd.local",
"127.0.0.1",
"localhost"
],
```
```json
{
"CN": "etcd.local",
"hosts": [
"m2.etcd.local",
"127.0.0.1",
"localhost"
],
```
```json
{
"CN": "etcd.local",
"hosts": [
"m3.etcd.local",
"127.0.0.1",
"localhost"
],
```
Then only peers with matching common names will be authenticated if `--peer-cert-allowed-cn etcd.local` is given; nodes with different CNs in their CSRs or with a different `--peer-cert-allowed-cn` will be rejected:
```bash
$ etcd --peer-cert-allowed-cn m1.etcd.local
I | embed: rejected connection from "127.0.0.1:48044" (error "CommonName authentication failed", ServerName "m1.etcd.local")
I | embed: rejected connection from "127.0.0.1:55702" (error "remote error: tls: bad certificate", ServerName "m3.etcd.local")
```
Each process should be started with:
```bash
etcd --peer-cert-allowed-cn etcd.local
I | pkg/netutil: resolving m3.etcd.local:32380 to 127.0.0.1:32380
I | pkg/netutil: resolving m2.etcd.local:22380 to 127.0.0.1:22380
I | pkg/netutil: resolving m1.etcd.local:2380 to 127.0.0.1:2380
I | etcdserver: published {Name:m3 ClientURLs:[https://m3.etcd.local:32379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | etcdserver: published {Name:m1 ClientURLs:[https://m1.etcd.local:2379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | etcdserver: published {Name:m2 ClientURLs:[https://m2.etcd.local:22379]} to cluster 9db03f09b20de32b
I | embed: ready to serve client requests
I | embed: serving client requests on 127.0.0.1:32379
I | embed: serving client requests on 127.0.0.1:22379
I | embed: serving client requests on 127.0.0.1:2379
```
[v3.2.19](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.2.md) and [v3.3.4](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md) fix TLS reload when the [certificate SAN field only includes IP addresses but no domain names](https://github.com/etcd-io/etcd/issues/9541). For example, a member is set up with CSRs (with `cfssl`) as below:
```json
{
"CN": "etcd.local",
"hosts": [
"127.0.0.1"
],
```
In Go, the server calls `(*tls.Config).GetCertificate` for TLS reload if and only if the server's `(*tls.Config).Certificates` field is not empty, or `(*tls.ClientHelloInfo).ServerName` is not empty with a valid SNI from the client. Previously, etcd always populated `(*tls.Config).Certificates` on the initial client TLS handshake, as non-empty. Thus, the client was always expected to supply a matching SNI in order to pass the TLS verification and to trigger `(*tls.Config).GetCertificate` to reload TLS assets.
However, a certificate whose SAN field does [not include any domain names but only IP addresses](https://github.com/etcd-io/etcd/issues/9541) would request `*tls.ClientHelloInfo` with an empty `ServerName` field, thus failing to trigger the TLS reload on initial TLS handshake; this becomes a problem when expired certificates need to be replaced online.
Now, `(*tls.Config).Certificates` is created empty on the initial TLS client handshake, first to trigger `(*tls.Config).GetCertificate`, and then to populate the rest of the certificates on every new TLS connection, even when the client SNI is empty (e.g. when the cert only includes IPs).
## Notes for Host Whitelist
The `etcd --host-whitelist` flag specifies acceptable hostnames from HTTP client requests. The client origin policy protects against ["DNS Rebinding"](https://en.wikipedia.org/wiki/DNS_rebinding) attacks on insecure etcd servers. That is, any website can simply create an authorized DNS name and direct DNS to `"localhost"` (or any other address). Then, all HTTP endpoints of an etcd server listening on `"localhost"` become accessible, and are thus vulnerable to DNS rebinding attacks. See [CVE-2018-5702](https://bugs.chromium.org/p/project-zero/issues/detail?id=1447#c2) for more detail.

The client origin policy works as follows:

1. If the client connection is secure via HTTPS, allow any hostnames.
2. If the client connection is not secure and `"HostWhitelist"` is not empty, only allow HTTP requests whose Host field is listed in the whitelist.
Note that the client origin policy is enforced whether authentication is enabled or not, for tighter controls.
By default, `etcd --host-whitelist` and `embed.Config.HostWhitelist` are set *empty* to allow all hostnames. Note that when specifying hostnames, loopback addresses are not added automatically. To allow loopback interfaces, add them to whitelist manually (e.g. `"localhost"`, `"127.0.0.1"`, etc.).
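For example, with illustrative hostnames:

```bash
# accept insecure HTTP requests only when the Host header matches the whitelist
$ etcd --host-whitelist example.com,localhost,127.0.0.1
```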
## Frequently asked questions
The certificate needs to be signed for the member's FQDN in its Subject Name; use Subject Alternative Names (short IP SANs) to add the IP address. The `etcd-ca` tool provides a `--domain=` option for its `new-cert` command, and openssl can make [it][alt-name] too.
### Does etcd encrypt data stored on disk drives?
No. etcd doesn't encrypt key/value data stored on disk drives. If a user needs to encrypt data stored in etcd, there are some options:
* Let client applications encrypt and decrypt the data
* Use a feature of underlying storage systems for encrypting stored data like [dm-crypt]
### I'm seeing a log warning that "directory X exist without recommended permission -rwx------"
When etcd creates certain new directories, it sets the file permission to 700 to prevent unprivileged access as much as possible. However, if the user has already created a directory with their own preference, etcd uses the existing directory and logs a warning message if its permission differs from 700.
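To silence the warning for a pre-existing directory, tighten its permissions manually; the path below is illustrative:

```bash
$ chmod 700 /var/lib/etcd
```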
[cfssl]: https://github.com/cloudflare/cfssl
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
[auth]: authentication.md
[dm-crypt]: https://en.wikipedia.org/wiki/Dm-crypt

---
title: Supported systems
---
## Current support
The following table lists etcd support status for common architectures and operating systems:
| Architecture | Operating System | Status       | Maintainers                 |
| ------------ | ---------------- | ------------ | --------------------------- |
| amd64        | Darwin           | Experimental | etcd maintainers            |
| amd64        | Linux            | Stable       | etcd maintainers            |
| amd64        | Windows          | Experimental |                             |
| arm64        | Linux            | Experimental | @glevand                    |
| 386          | Linux            | Unstable     |                             |
| ppc64le      | Linux            | Stable       | etcd maintainers, @mkumatag |
* etcd-maintainers are listed in https://github.com/etcd-io/etcd/blob/master/MAINTAINERS.

Experimental platforms appear to work in practice and have some platform specific code in etcd, but do not fully conform to the stable support policy. Unstable platforms have been lightly tested, but less than experimental. Unlisted architecture and operating system pairs are currently unsupported; caveat emptor.

---
title: Migrate applications from using API v2 to API v3
---
The data store v2 is still accessible from the API v2 after upgrading to etcd3. Thus, it will work as before and require no application changes. With etcd 3, applications use the new grpc API v3 to access the mvcc store, which provides more features and improved performance. The mvcc store and the old store v2 are separate and isolated; writes to the store v2 will not affect the mvcc store and, similarly, writes to the mvcc store will not affect the store v2.

---
title: Versioning
---
## Service versioning

---
title: Platforms
---

---
title: Amazon Web Services
---
This guide assumes operational knowledge of Amazon Web Services (AWS), specifically Amazon Elastic Compute Cloud (EC2). This guide provides an introduction to design considerations when designing an etcd deployment on AWS EC2 and how AWS specific features may be utilized in that context.

---
title: Container Linux with systemd
---
The following guide shows how to run etcd with [systemd][systemd-docs] under [Container Linux][container-linux-docs].

---
title: FreeBSD
---
Starting with version 0.1.2 both etcd and etcdctl have been ported to FreeBSD and can be installed either via packages or ports system. Their versions have been recently updated to 0.2.0 so now etcd and etcdctl can be enjoyed on FreeBSD 10.0 (RC4 as of now) and 9.x, where they have been tested. They might also work when installed from ports on earlier versions of FreeBSD, but it is untested; caveat emptor.
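Assuming the packages are still published under these names (an assumption; verify with `pkg search etcd` first), installation via the package system would look like:

```bash
# hypothetical package names; check the ports tree for the current ones
$ pkg install coreos-etcd coreos-etcdctl
```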

---
title: Production users
---
This document tracks people and use cases for etcd in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using various etcd applications, operation environments, and cluster sizes. The etcd development team may reach out periodically to check-in on how etcd is working in the field and update this list.
At [Branch][branch], we use kubernetes heavily as our core microservice platform.
- *Environment*: Bare Metal
- *Backups*: None, all data is considered ephemeral.
## Transwarp
- *Application*: Transwarp Data Cloud, Transwarp Operating System, Transwarp Data Hub, Sophon
- *Launched*: January 2016
- *Cluster Size*: Multiple clusters, multiple sizes
- *Order of Data Size*: Megabytes
- *Operator*: Transwarp Operating System
- *Environment*: Bare Metal, Container
- *Backups*: backup scripts

---
title: Reporting bugs
---
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
Due to an upstream systemd bug, journald may miss the last few log lines when its processes exit. If journalctl says etcd stopped without a fatal or panic message, try `sudo journalctl -f -t etcd2` to get the full log.

[etcd-issue]: https://github.com/etcd-io/etcd/issues/new
[filing-good-bugs]: http://fantasai.inkedblade.net/style/talks/filing-good-bugs/

---
title: etcd v3 API
---
The etcd v3 API is designed to give users a more efficient and cleaner abstraction compared to etcd v2. There are a number of semantic and protocol changes in this new API. For an overview [see Xiang Li's video](https://youtu.be/J5AioGtEPeQ?t=211).
To prove out the design of the v3 API the team has also built [a number of example recipes](https://github.com/coreos/etcd/tree/master/contrib/recipes), there is a [video discussing these recipes too](https://www.youtube.com/watch?v=fj-2RY-3yVU&feature=youtu.be&t=590).
# Design
1. Flatten binary key-value space
2. Keep the event history until compaction
- access to old version of keys
- user controlled history compaction
3. Support range query
- Pagination support with limit argument
- Support consistency guarantee across multiple range queries
4. Replace TTL key with Lease
- more efficient/ low cost keep alive
- a logical group of TTL keys
5. Replace CAS/CAD with multi-object Txn
- MUCH MORE powerful and flexible
6. Support efficient watching with multiple ranges
7. RPC API supports the complete set of APIs.
- more efficient than JSON/HTTP
- additional txn/lease support
8. HTTP API supports a subset of APIs.
- easy for people to try out etcd
- easy for people to write simple etcd applications
## Notes
### Request Size Limitation
The max request size is around 1MB. Since etcd replicates requests in a streaming fashion, a very large request might block other requests for a long time. The use case for etcd is to store small configuration values, so we prevent users from submitting large requests. This also applies to Txn requests. We might loosen the size limit a little bit in the future, or make it configurable.
## Protobuf Defined API
[api protobuf][api-protobuf]
[kv protobuf][kv-protobuf]
## Examples
### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
PutResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
}
```
### Get a key (assume we have foo=bar)
```
Get ( RangeRequest { key = foo } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
kvs = {
{
key = foo,
value = bar,
create_revision = 1,
mod_revision = 1,
version = 1;
},
},
}
```
### Range over a key space (assume we have foo0=bar0… foo100=bar100)
```
Range ( RangeRequest { key = foo, end_key = foo80, limit = 30 } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 100,
raft_term = 0x1,
kvs = {
{
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
...,
{
key = foo30,
value = bar30,
create_revision = 30,
mod_revision = 30,
version = 1;
},
},
}
```
### Finish a txn (assume we have foo0=bar0, foo1=bar1)
```
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
compare = {
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = greater, key = foo1, mod_revision = 1}}
},
// if the comparison succeeds, put foo2 = bar2
success = {PutRequest { key = foo2, value = success }},
// if the comparison fails, put foo2=fail
failure = {PutRequest { key = foo2, value = failure }},
)
TxnResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
succeeded = true,
responses = {
// response of PUT foo2=success
{
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
}
}
}
```
### Watch on a key/range
```
Watch( WatchRequest{
key = foo,
end_key = fop, // prefix foo
start_revision = 20,
end_revision = 10000,
// server decided notification frequency
progress_notification = true,
}
… // this can be a watch request stream
)
// put (foo0=bar0) event at 3
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
}
// a notification at 2000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 2000,
raft_term = 0x1,
// nil event as notification
}
// put (foo0=bar3000) event at 3000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3000,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar3000,
create_revision = 1,
mod_revision = 3000,
version = 2;
},
}
```
[api-protobuf]: https://github.com/etcd-io/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/etcd-io/etcd/blob/master/mvcc/mvccpb/kv.proto

---
title: Tuning
---
The default settings in etcd should work well for installations on a local network where the average network latency is low. However, when using etcd across multiple data centers or over networks with high latency, the heartbeat interval and election timeout settings may need tuning.
@ -71,12 +73,12 @@ dropped MsgAppResp to 247ae21ff9436b2d since streamMsg's sending buffer is full
These errors may be resolved by prioritizing etcd's peer traffic over its client traffic. On Linux, peer traffic can be prioritized by using the traffic control mechanism:
```sh
tc qdisc add dev eth0 root handle 1: prio bands 3
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip sport 2379 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip dport 2379 0xffff flowid 1:1
```
[ping]: https://en.wikipedia.org/wiki/Ping_(networking_utility)


@ -0,0 +1,3 @@
---
title: Upgrading
---


@ -1,4 +1,6 @@
---
title: Upgrade etcd from 2.3 to 3.0
---
In the general case, upgrading from etcd 2.3 to 3.0 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.3 processes and replace them with etcd v3.0 processes
@ -8,6 +10,8 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.
@ -122,8 +126,8 @@ $ ETCDCTL_API=3 etcdctl endpoint health
## Known Issues
- etcd < v3.1 does not work properly if built with Go > v1.7. See [Issue 6951](https://github.com/etcd-io/etcd/issues/6951) for additional information.
- If an error such as `transport: http2Client.notifyError got notified that the client transport was broken unexpected EOF.` shows up in the etcd server logs, be sure etcd is a pre-built release or built with (etcd v3.1+ & go v1.7+) or (etcd <v3.1 & go v1.6.x).
- Adding a v3 node to v2.3 cluster during upgrades is not supported and could trigger panics. See [Issue 7429](https://github.com/etcd-io/etcd/issues/7429) for additional information. Mixed versions of etcd members are only allowed during v3 migration. Finish upgrades before making any membership changes.
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -1,4 +1,6 @@
---
title: Upgrade etcd from 3.0 to 3.1
---
In the general case, upgrading from etcd 3.0 to 3.1 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.0 processes and replace them with etcd v3.1 processes
@ -8,6 +10,8 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
#### Monitoring
Following metrics from v3.0.x have been deprecated in favor of [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus):


@ -1,4 +1,6 @@
---
title: Upgrade etcd from 3.1 to 3.2
---
In the general case, upgrading from etcd 3.1 to 3.2 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.1 processes and replace them with etcd v3.2 processes
@ -8,13 +10,21 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
Highlighted breaking changes in 3.2.
#### Changed default `snapshot-count` value
Higher `--snapshot-count` holds more Raft entries in memory until snapshot, thus causing [recurrent higher memory usage](https://github.com/kubernetes/kubernetes/issues/60589#issuecomment-371977156). Since the leader retains the latest Raft entries for longer, a slow follower has more time to catch up before a leader snapshot. `--snapshot-count` is a tradeoff between higher memory usage and better availability of slow followers.
Since v3.2, the default value of `--snapshot-count` has [changed from 10,000 to 100,000](https://github.com/etcd-io/etcd/pull/7160).
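As a hedged sketch, the same knob can be set programmatically through the `embed` package (the value below is just the v3.2+ default):
```go
import "github.com/coreos/etcd/embed"

cfg := embed.NewConfig()
// equivalent to `etcd --snapshot-count=100000`; larger values hold more
// Raft entries in memory, smaller values snapshot more often and give
// slow followers less time to catch up
cfg.SnapshotCount = 100000
```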
#### Changed gRPC dependency (>=3.2.10)
3.2.10 or later now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5` (<=3.2.9 requires `v1.2.1`).
##### Deprecated `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
@ -35,9 +45,9 @@ clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (not implement grpclog.LoggerV2 interface)
```
##### Deprecated `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.2 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/etcd-io/etcd/issues/8504)).
Before
@ -64,9 +74,9 @@ if err == context.DeadlineExceeded {
}
```
#### Changed maximum request size limits (>=3.2.10)
3.2.10 and 3.2.11 allow custom request size limits on the server side. >=3.2.12 allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), client response size was limited to only 4 MiB.
Server-side request limits can be configured with the `--max-request-bytes` flag:
@ -137,9 +147,9 @@ err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received messag
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
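A hedged sketch of setting these client-side limits via `clientv3.Config` (>=3.2.12; the byte values are illustrative):
```go
import "github.com/coreos/etcd/clientv3"

cli, err := clientv3.New(clientv3.Config{
	Endpoints: []string{"localhost:2379"},
	// client-side request (send) limit in bytes; defaults to 2 MiB
	MaxCallSendMsgSize: 2 * 1024 * 1024,
	// client-side response (receive) limit; defaults to math.MaxInt32
	MaxCallRecvMsgSize: 4 * 1024 * 1024,
})
```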
#### Changed raw gRPC client wrappers
3.2.12 or later changes the function signatures of the `clientv3` gRPC client wrappers. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/etcd-io/etcd/pull/9047).
Before and after
@ -160,15 +170,9 @@ Before and after
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Changed `clientv3.Lease.TimeToLive` API
Previously, the `clientv3.Lease.TimeToLive` API returned `lease.ErrLeaseNotFound` on a non-existent lease ID. 3.2 instead returns TTL=-1 in its response and no error (see [#7305](https://github.com/etcd-io/etcd/pull/7305)).
Before
@ -188,7 +192,7 @@ resp.TTL == -1
err == nil
```
#### Moved `clientv3.NewFromConfigFile` to `clientv3.yaml.NewConfig`
`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.
@ -206,6 +210,12 @@ import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```
#### Change in `--listen-peer-urls` and `--listen-client-urls`
3.2 now rejects domain names for `--listen-peer-urls` and `--listen-client-urls` (3.1 only prints out warnings), since a domain name is invalid for network interface binding. Make sure that those URLs are properly formatted as `scheme://IP:port`.
See [issue #6336](https://github.com/etcd-io/etcd/issues/6336) for more context.
### Server upgrade checklists
#### Upgrade requirements


@ -1,4 +1,6 @@
---
title: Upgrade etcd from 3.2 to 3.3
---
In the general case, upgrading from etcd 3.2 to 3.3 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.2 processes and replace them with etcd v3.3 processes
@ -8,9 +10,24 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
Highlighted breaking changes in 3.3.
#### Changed value type of `etcd --auto-compaction-retention` flag to `string`
Changed `--auto-compaction-retention` flag to [accept string values](https://github.com/etcd-io/etcd/pull/8563) with [finer granularity](https://github.com/etcd-io/etcd/issues/8503). Now that `--auto-compaction-retention` accepts string values, the `auto-compaction-retention` field in an etcd configuration YAML file must be changed to `string` type. Previously, `--config-file etcd.config.yaml` could have an `auto-compaction-retention: 24` field; it must now be `auto-compaction-retention: "24"` or `auto-compaction-retention: "24h"`. If configured as `--auto-compaction-mode periodic --auto-compaction-retention "24h"`, the time duration value for the `--auto-compaction-retention` flag must be valid for Go's [`time.ParseDuration`](https://golang.org/pkg/time/#ParseDuration) function.
```diff
# etcd.config.yaml
+auto-compaction-mode: periodic
-auto-compaction-retention: 24
+auto-compaction-retention: "24"
+# Or
+auto-compaction-retention: "24h"
```
#### Changed `etcdserver.EtcdServer.ServerConfig` field type from `*etcdserver.ServerConfig` to `etcdserver.ServerConfig`
`etcdserver.EtcdServer` has changed the type of its member field `*etcdserver.ServerConfig` to `etcdserver.ServerConfig`. And `etcdserver.NewServer` now takes `etcdserver.ServerConfig`, instead of `*etcdserver.ServerConfig`.
@ -40,7 +57,9 @@ func (e *EtcdServer) Start() error {
...
```
#### Added `embed.Config.LogOutput` struct
**Note that this field has been renamed to `embed.Config.LogOutputs` in `[]string` type in v3.4. Please see [v3.4 upgrade guide](https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrade_3_4.md) for more details.**
Field `LogOutput` is added to `embed.Config`:
@ -63,6 +82,8 @@ WARNING: 2017/11/02 11:35:51 grpc: addrConn.resetTransport failed to create clie
From v3.3, gRPC server logs are disabled by default.
**Note that `embed.Config.SetupLogging` method has been deprecated in v3.4. Please see [v3.4 upgrade guide](https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrade_3_4.md) for more details.**
```go
import "github.com/coreos/etcd/embed"
@ -72,48 +93,38 @@ cfg.SetupLogging()
Set `embed.Config.Debug` field to `true` to enable gRPC server logs.
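A hedged sketch of the v3.3-era fields described above (values illustrative; `LogOutput` becomes `LogOutputs` in v3.4):
```go
import "github.com/coreos/etcd/embed"

cfg := embed.NewConfig()
cfg.LogOutput = "default" // or "stderr", "stdout", or a file path
cfg.Debug = true          // re-enable gRPC server logs, off by default in 3.3
```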
#### Changed `/health` endpoint response
Previously, `[endpoint]:[client-port]/health` returned a manually marshaled JSON value. 3.3 now defines the [`etcdhttp.Health`](https://godoc.org/github.com/coreos/etcd/etcdserver/api/etcdhttp#Health) struct.
Note that in v3.3.0-rc.0, v3.3.0-rc.1, and v3.3.0-rc.2, `etcdhttp.Health` had a boolean `"health"` field and an `"errors"` field. For backward compatibility, the `"health"` field was reverted to `string` type and the `"errors"` field was removed. Further health information will be provided in separate APIs.
```bash
$ curl http://localhost:2379/health
{"health":"true"}
```
#### Changed gRPC gateway HTTP endpoints (replaced `/v3alpha` with `/v3beta`)
Before
```bash
curl -L http://localhost:2379/v3alpha/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
After
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
Requests to `/v3alpha` endpoints will redirect to `/v3beta`, and `/v3alpha` will be removed in the 3.4 release.
#### Changed maximum request size limits
3.3 now allows custom request size limits for both server and **client side**. In previous versions (v3.2.10, v3.2.11), client response size was limited to only 4 MiB.
Server-side request limits can be configured with the `--max-request-bytes` flag:
@ -184,9 +195,9 @@ err.Error() == "rpc error: code = ResourceExhausted desc = grpc: received messag
**If not specified, client-side send limit defaults to 2 MiB (1.5 MiB + gRPC overhead bytes) and receive limit to `math.MaxInt32`**. Please see [clientv3 godoc](https://godoc.org/github.com/coreos/etcd/clientv3#Config) for more detail.
#### Changed raw gRPC client wrapper function signatures
3.3 changes the function signatures of the `clientv3` gRPC client wrappers. This change was needed to support [custom `grpc.CallOption` on message size limits](https://github.com/etcd-io/etcd/pull/9047).
Before and after
@ -207,7 +218,7 @@ Before and after
+func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
```
#### Changed clientv3 `Snapshot` API error type
Previously, the clientv3 `Snapshot` API returned a raw [`grpc/*status.statusError`] type error. v3.3 now translates those errors to corresponding public error types, to be consistent with other APIs.
@ -253,7 +264,7 @@ _, err = io.Copy(f, rc)
err == context.DeadlineExceeded
```
#### Changed `etcdctl lease timetolive` command output
Previously, the `lease timetolive LEASE_ID` command on an expired lease printed `-1s` for remaining seconds. 3.3 now outputs a clearer message.
@ -270,7 +281,7 @@ After
lease 2d8257079fa1bc0c already expired
```
#### Changed `golang.org/x/net/context` imports
`clientv3` has deprecated `golang.org/x/net/context`. If a project vendors `golang.org/x/net/context` in other code (e.g. etcd generated protocol buffer code) and imports `github.com/coreos/etcd/clientv3`, it requires Go 1.9+ to compile.
@ -288,11 +299,11 @@ import "context"
cli.Put(context.Background(), "f", "v")
```
#### Changed gRPC dependency
3.3 now requires [grpc/grpc-go](https://github.com/grpc/grpc-go/releases) `v1.7.5`.
##### Deprecated `grpclog.Logger`
`grpclog.Logger` has been deprecated in favor of [`grpclog.LoggerV2`](https://github.com/grpc/grpc-go/blob/master/grpclog/loggerv2.go). `clientv3.Logger` is now `grpclog.LoggerV2`.
@ -313,9 +324,9 @@ clientv3.SetLogger(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
// log.New above cannot be used (not implement grpclog.LoggerV2 interface)
```
##### Deprecated `grpc.ErrClientConnTimeout`
Previously, a `grpc.ErrClientConnTimeout` error was returned on client dial time-outs. 3.3 instead returns `context.DeadlineExceeded` (see [#8504](https://github.com/etcd-io/etcd/issues/8504)).
Before
@ -342,7 +353,7 @@ if err == context.DeadlineExceeded {
}
```
#### Changed official container registry
etcd now uses [`gcr.io/etcd-development/etcd`](https://gcr.io/etcd-development/etcd) as a primary container registry, and [`quay.io/coreos/etcd`](https://quay.io/coreos/etcd) as secondary.
@ -358,6 +369,52 @@ After
docker pull gcr.io/etcd-development/etcd:v3.3.0
```
### Upgrades to >= v3.3.14
[v3.3.14](https://github.com/etcd-io/etcd/releases/tag/v3.3.14) had to include some features from 3.4, while trying to minimize the differences in the client balancer implementation. This release fixes ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available" (kubernetes#72102)](https://github.com/kubernetes/kubernetes/issues/72102).
`grpc.ErrClientConnClosing` has been [deprecated in gRPC >= 1.10](https://github.com/grpc/grpc-go/pull/1854).
```diff
import (
+ "go.etcd.io/etcd/clientv3"
"google.golang.org/grpc"
+ "google.golang.org/grpc/codes"
+ "google.golang.org/grpc/status"
)
_, err := kvc.Get(ctx, "a")
-if err == grpc.ErrClientConnClosing {
+if clientv3.IsConnCanceled(err) {
// or
+s, ok := status.FromError(err)
+if ok {
+ if s.Code() == codes.Canceled
```
[The new client balancer](https://github.com/etcd-io/etcd/blob/master/Documentation/learning/design-client.md) uses an asynchronous resolver to pass endpoints to the gRPC dial function. As a result, [v3.3.14](https://github.com/etcd-io/etcd/releases/tag/v3.3.14) or later requires `grpc.WithBlock` dial option to wait until the underlying connection is up.
```diff
import (
"time"
"go.etcd.io/etcd/clientv3"
+ "google.golang.org/grpc"
)
+// "grpc.WithBlock()" to block until the underlying connection is up
ccfg := clientv3.Config{
Endpoints: []string{"localhost:2379"},
DialTimeout: time.Second,
+ DialOptions: []grpc.DialOption{grpc.WithBlock()},
DialKeepAliveTime: time.Second,
DialKeepAliveTimeout: 500 * time.Millisecond,
}
```
Please see [CHANGELOG](https://github.com/etcd-io/etcd/blob/master/CHANGELOG-3.3.md) for a full list of changes.
### Server upgrade checklists
#### Upgrade requirements


@ -0,0 +1,343 @@
---
title: Upgrade etcd from 3.4 to 3.5
---
In the general case, upgrading from etcd 3.4 to 3.5 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v3.4 processes and replace them with etcd v3.5 processes
- after running all v3.5 processes, new features in v3.5 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade checklists
**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when etcd restores from existing snapshots but no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server had migrated from v2 with no previous v3 data. This also prevents accidental v3 data loss (e.g. `db` file might have been moved). etcd requires that post v3 migration can only happen with v3 data. Do not upgrade to newer v3 versions until v3.0 server contains v3 data.
Highlighted breaking changes in 3.5.
#### Deprecated `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metrics
v3.4 promoted the `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metric to `etcd_mvcc_db_total_size_in_bytes`, in order to encourage etcd storage monitoring. v3.5 completely deprecates `etcd_debugging_mvcc_db_total_size_in_bytes`.
```diff
-etcd_debugging_mvcc_db_total_size_in_bytes
+etcd_mvcc_db_total_size_in_bytes
```
Note that `etcd_debugging_*` namespace metrics have been marked as experimental. As we improve the monitoring guide, we will promote more metrics.
#### Deprecated `etcd --logger capnslog`
v3.4 defaults to `--logger=zap` in order to support multiple log outputs and structured logging.
**`etcd --logger=capnslog` has been deprecated in v3.5**, and now `--logger=zap` is the default.
```diff
-etcd --logger=capnslog
+etcd --logger=zap --log-outputs=stderr
+# to write logs to stderr and a.log file at the same time
+etcd --logger=zap --log-outputs=stderr,a.log
```
v3.4 added `etcd --logger=zap` support for structured logging and multiple log outputs. The main motivation is to promote automated etcd monitoring, rather than combing through server logs when things start breaking. Future development will make etcd log as little as possible and make etcd easier to monitor with metrics and alerts.
#### Deprecated `etcd --log-output`
v3.4 renamed [`etcd --log-output` to `--log-outputs`](https://github.com/etcd-io/etcd/pull/9624) to support multiple log outputs.
**`etcd --log-output` has been deprecated in v3.5.**
```diff
-etcd --log-output=stderr
+etcd --log-outputs=stderr
```
#### Deprecated `etcd --log-package-levels`
**`etcd --log-package-levels` flag for `capnslog` has been deprecated.**
Now, **`etcd --logger=zap`** is the default.
```diff
-etcd --log-package-levels 'etcdmain=CRITICAL,etcdserver=DEBUG'
+etcd --logger=zap --log-outputs=stderr
```
#### Deprecated `[CLIENT-URL]/config/local/log`
**`/config/local/log` endpoint is being deprecated in v3.5, as is `etcd --log-package-levels` flag.**
```diff
-$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
-# debug logging enabled
```
#### Changed gRPC gateway HTTP endpoints (deprecated `/v3beta`)
Before
```bash
curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
After
```bash
curl -L http://localhost:2379/v3/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```
`/v3beta` has been removed in the 3.5 release.
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.5, the running cluster must be 3.4 or greater. If it's before 3.4, please [upgrade to 3.4](upgrade_3_4.md) before upgrading to 3.5.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.
#### Preparation
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).
#### Mixed versions
While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.5. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
#### Limitations
Note: If the cluster only has v3 data and no v2 data, it is not subject to this limitation.
If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait for 2 minutes between upgrading each member.
For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.
#### Downgrade
If all members have been upgraded to v3.5, the cluster will be upgraded to v3.5, and downgrading from this completed state is **not possible**. If any single member is still v3.4, however, the cluster and its operations remain "v3.4", and it is possible from this mixed cluster state to return to using a v3.4 etcd binary on all members.
Please [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup) to make downgrading the cluster possible even after it has been completely upgraded.
### Upgrade procedure
This example shows how to upgrade a 3-member v3.4 etcd cluster running on a local machine.
#### Step 1: check upgrade requirements
Is the cluster healthy and running v3.4.x?
```bash
etcdctl --endpoints=localhost:2379,localhost:22379,localhost:32379 endpoint health
<<COMMENT
localhost:2379 is healthy: successfully committed proposal: took = 2.118638ms
localhost:22379 is healthy: successfully committed proposal: took = 3.631388ms
localhost:32379 is healthy: successfully committed proposal: took = 2.157051ms
COMMENT
curl http://localhost:2379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT
curl http://localhost:22379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT
curl http://localhost:32379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT
```
#### Step 2: download snapshot backup from leader
[Download the snapshot backup](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur.
The etcd leader is guaranteed to have the latest application data; therefore, fetch the snapshot from the leader:
```bash
curl -sL http://localhost:2379/metrics | grep etcd_server_is_leader
<<COMMENT
# HELP etcd_server_is_leader Whether or not this member is a leader. 1 if is, 0 otherwise.
# TYPE etcd_server_is_leader gauge
etcd_server_is_leader 1
COMMENT
curl -sL http://localhost:22379/metrics | grep etcd_server_is_leader
<<COMMENT
etcd_server_is_leader 0
COMMENT
curl -sL http://localhost:32379/metrics | grep etcd_server_is_leader
<<COMMENT
etcd_server_is_leader 0
COMMENT
etcdctl --endpoints=localhost:2379 snapshot save backup.db
<<COMMENT
{"level":"info","ts":1526585787.148433,"caller":"snapshot/v3_snapshot.go:109","msg":"created temporary db file","path":"backup.db.part"}
{"level":"info","ts":1526585787.1485257,"caller":"snapshot/v3_snapshot.go:120","msg":"fetching snapshot","endpoint":"localhost:2379"}
{"level":"info","ts":1526585787.1519694,"caller":"snapshot/v3_snapshot.go:133","msg":"fetched snapshot","endpoint":"localhost:2379","took":0.003502721}
{"level":"info","ts":1526585787.1520295,"caller":"snapshot/v3_snapshot.go:142","msg":"saved","path":"backup.db"}
Snapshot saved at backup.db
COMMENT
```
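The same backup can also be taken programmatically through the clientv3 Maintenance API; a minimal sketch, assuming the leader endpoint identified above (error handling abbreviated):
```go
package main

import (
	"context"
	"io"
	"os"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// connect to the leader endpoint found via the metrics check above
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// stream a point-in-time snapshot, like `etcdctl snapshot save backup.db`
	rc, err := cli.Snapshot(context.Background())
	if err != nil {
		panic(err)
	}
	defer rc.Close()

	f, err := os.Create("backup.db")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if _, err := io.Copy(f, rc); err != nil {
		panic(err)
	}
}
```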
#### Step 3: stop one existing etcd server
When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
```
{"level":"info","ts":1526587281.2001143,"caller":"etcdserver/server.go:2249","msg":"updating cluster version","from":"3.0","to":"3.4"}
{"level":"info","ts":1526587281.2010646,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","from":"3.0","from":"3.4"}
{"level":"info","ts":1526587281.2012327,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":1526587281.2013083,"caller":"etcdserver/server.go:2272","msg":"cluster version is updated","cluster-version":"3.4"}
^C{"level":"info","ts":1526587299.0717514,"caller":"osutil/interrupt_unix.go:63","msg":"received signal; shutting down","signal":"interrupt"}
{"level":"info","ts":1526587299.0718873,"caller":"embed/etcd.go:285","msg":"closing etcd server","name":"s1","data-dir":"/tmp/etcd/s1","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
{"level":"info","ts":1526587299.0722554,"caller":"etcdserver/server.go:1341","msg":"leadership transfer starting","local-member-id":"7339c4e5e833c029","current-leader-member-id":"7339c4e5e833c029","transferee-member-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.0723994,"caller":"raft/raft.go:1107","msg":"7339c4e5e833c029 [term 3] starts to transfer leadership to 729934363faa4a24"}
{"level":"info","ts":1526587299.0724802,"caller":"raft/raft.go:1113","msg":"7339c4e5e833c029 sends MsgTimeoutNow to 729934363faa4a24 immediately as 729934363faa4a24 already has up-to-date log"}
{"level":"info","ts":1526587299.0737045,"caller":"raft/raft.go:797","msg":"7339c4e5e833c029 [term: 3] received a MsgVote message with higher term from 729934363faa4a24 [term: 4]"}
{"level":"info","ts":1526587299.0737681,"caller":"raft/raft.go:656","msg":"7339c4e5e833c029 became follower at term 4"}
{"level":"info","ts":1526587299.073831,"caller":"raft/raft.go:882","msg":"7339c4e5e833c029 [logterm: 3, index: 9, vote: 0] cast MsgVote for 729934363faa4a24 [logterm: 3, index: 9] at term 4"}
{"level":"info","ts":1526587299.0738947,"caller":"raft/node.go:312","msg":"raft.node: 7339c4e5e833c029 lost leader 7339c4e5e833c029 at term 4"}
{"level":"info","ts":1526587299.0748374,"caller":"raft/node.go:306","msg":"raft.node: 7339c4e5e833c029 elected leader 729934363faa4a24 at term 4"}
{"level":"info","ts":1526587299.1726425,"caller":"etcdserver/server.go:1362","msg":"leadership transfer finished","local-member-id":"7339c4e5e833c029","old-leader-member-id":"7339c4e5e833c029","new-leader-member-id":"729934363faa4a24","took":0.100389359}
{"level":"info","ts":1526587299.1728148,"caller":"rafthttp/peer.go:333","msg":"stopping remote peer","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1751974,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1752589,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.177348,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1774004,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.177515,"caller":"rafthttp/pipeline.go:86","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1777067,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015","error":"read tcp 127.0.0.1:34636->127.0.0.1:32380: use of closed network connection"}
{"level":"info","ts":1526587299.1778402,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1780295,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015","error":"read tcp 127.0.0.1:34634->127.0.0.1:32380: use of closed network connection"}
{"level":"info","ts":1526587299.1780987,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.1781602,"caller":"rafthttp/peer.go:340","msg":"stopped remote peer","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.1781986,"caller":"rafthttp/peer.go:333","msg":"stopping remote peer","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1802843,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1803446,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1824749,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.18255,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.18261,"caller":"rafthttp/pipeline.go:86","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1827736,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24","error":"read tcp 127.0.0.1:51482->127.0.0.1:22380: use of closed network connection"}
{"level":"info","ts":1526587299.182845,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1830168,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24","error":"context canceled"}
{"level":"warn","ts":1526587299.1831107,"caller":"rafthttp/peer_status.go:65","msg":"peer became inactive","peer-id":"729934363faa4a24","error":"failed to read 729934363faa4a24 on stream Message (context canceled)"}
{"level":"info","ts":1526587299.1831737,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.1832306,"caller":"rafthttp/peer.go:340","msg":"stopped remote peer","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1837125,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"b548c2511513015","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1840093,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"b548c2511513015","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1842315,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"729934363faa4a24","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1844475,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"729934363faa4a24","cluster-id":"7dee9ba76d59ed53"}
{"level":"info","ts":1526587299.2056687,"caller":"embed/etcd.go:473","msg":"stopping serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":1526587299.205819,"caller":"embed/etcd.go:480","msg":"stopped serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":1526587299.2058413,"caller":"embed/etcd.go:289","msg":"closed etcd server","name":"s1","data-dir":"/tmp/etcd/s1","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
```
#### Step 4: restart the etcd server with same configuration
Restart the etcd server with same configuration but with the new etcd binary.
```diff
-etcd-old --name s1 \
+etcd-new --name s1 \
--data-dir /tmp/etcd/s1 \
--listen-client-urls http://localhost:2379 \
--advertise-client-urls http://localhost:2379 \
--listen-peer-urls http://localhost:2380 \
--initial-advertise-peer-urls http://localhost:2380 \
--initial-cluster s1=http://localhost:2380,s2=http://localhost:22380,s3=http://localhost:32380 \
--initial-cluster-token tkn \
--initial-cluster-state new
```
The new v3.5 etcd will publish its information to the cluster. At this point, the cluster still operates using the v3.4 protocol, which is the lowest common version.
> `{"level":"info","ts":1526586617.1647713,"caller":"membership/cluster.go:485","msg":"set initial cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","cluster-version":"3.0"}`
> `{"level":"info","ts":1526586617.1648536,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.0"}`
> `{"level":"info","ts":1526586617.1649303,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","from":"3.0","from":"3.4"}`
> `{"level":"info","ts":1526586617.1649797,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}`
> `{"level":"info","ts":1526586617.2107732,"caller":"etcdserver/server.go:1770","msg":"published local member to cluster through raft","local-member-id":"7339c4e5e833c029","local-member-attributes":"{Name:s1 ClientURLs:[http://localhost:2379]}","request-path":"/0/members/7339c4e5e833c029/attributes","cluster-id":"7dee9ba76d59ed53","publish-timeout":7}`
Verify that each member, and then the entire cluster, becomes healthy with the new v3.5 etcd binary:
```bash
etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
<<COMMENT
localhost:32379 is healthy: successfully committed proposal: took = 2.337471ms
localhost:22379 is healthy: successfully committed proposal: took = 1.130717ms
localhost:2379 is healthy: successfully committed proposal: took = 2.124843ms
COMMENT
```
Un-upgraded members will log warnings like the following until the entire cluster is upgraded.
This is expected and will cease after all etcd cluster members are upgraded to v3.5:
```
:41.942121 W | etcdserver: member 7339c4e5e833c029 has a higher version 3.5.0
:45.945154 W | etcdserver: the local etcd version 3.4.0 is not up-to-date
```
#### Step 5: repeat *step 3* and *step 4* for the rest of the members
When all members are upgraded, the cluster will report upgrading to 3.5 successfully:
Member 1:
> `{"level":"info","ts":1526586949.0920913,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`
> `{"level":"info","ts":1526586949.0921566,"caller":"etcdserver/server.go:2272","msg":"cluster version is updated","cluster-version":"3.5"}`
Member 2:
> `{"level":"info","ts":1526586949.092117,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"729934363faa4a24","from":"3.4","from":"3.5"}`
> `{"level":"info","ts":1526586949.0923078,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`
Member 3:
> `{"level":"info","ts":1526586949.0921423,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"b548c2511513015","from":"3.4","from":"3.5"}`
> `{"level":"info","ts":1526586949.0922918,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`
```bash
etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
<<COMMENT
localhost:2379 is healthy: successfully committed proposal: took = 492.834µs
localhost:22379 is healthy: successfully committed proposal: took = 1.015025ms
localhost:32379 is healthy: successfully committed proposal: took = 1.853077ms
COMMENT
curl http://localhost:2379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT
curl http://localhost:22379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT
curl http://localhost:32379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT
```
[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev


@ -1,13 +1,18 @@
---
title: Upgrading etcd clusters and applications
---
This section contains documents specific to upgrading etcd clusters and applications.
## Moving from etcd API v2 to API v3
* [Migrate applications from using API v2 to API v3][migrate-apps]
## Upgrading an etcd v3.x cluster
* [Upgrade etcd from 3.0 to 3.1][upgrade-3-1]
* [Upgrade etcd from 3.1 to 3.2][upgrade-3-2]
* [Upgrade etcd from 3.2 to 3.3][upgrade-3-3]
* [Upgrade etcd from 3.3 to 3.4][upgrade-3-4]
## Upgrading from etcd v2.3
* [Upgrade a v2.3 cluster to v3.0][upgrade-cluster]
@ -17,3 +22,5 @@ This section contains documents specific to upgrading etcd clusters and applicat
[upgrade-cluster]: upgrade_3_0.md
[upgrade-3-1]: upgrade_3_1.md
[upgrade-3-2]: upgrade_3_2.md
[upgrade-3-3]: upgrade_3_3.md
[upgrade-3-4]: upgrade_3_4.md


@ -1,36 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Snapshot Migration
You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.
To get started get the newest data snapshot from the 0.4.9+ cluster:
```
curl http://cluster.example.com:4001/v2/migration/snapshot > backup.snap
```
Now, import the snapshot into your new cluster:
```
etcdctl --endpoint new_cluster.example.com import --snap backup.snap
```
If you have a large amount of data, you can specify more concurrent workers to copy data in parallel by using the `-c` flag.
If you have hidden keys to copy, you can use the `--hidden` flag to specify them. For example fleet uses `/_coreos.com/fleet` so to import those keys use `--hidden /_coreos.com`.
And the data will quickly copy into the new cluster:
```
entering dir: /
entering dir: /foo
entering dir: /foo/bar
copying key: /foo/bar/1 1
entering dir: /
entering dir: /foo2
entering dir: /foo2/bar2
copying key: /foo2/bar2/2 2
```


@ -1,85 +0,0 @@
# Documentation
etcd is a distributed key-value store designed to reliably and quickly preserve and provide access to critical data. It enables reliable distributed coordination through distributed locking, leader elections, and write barriers. An etcd cluster is intended for high availability and permanent data storage and retrieval.
This is the etcd v2 documentation set. For more recent versions, please see the [etcd v3 guides][etcd-v3].
## Communicating with etcd v2
Reading and writing into the etcd keyspace is done via a simple, RESTful HTTP API, or using language-specific libraries that wrap the HTTP API with higher level primitives.
### Reading and Writing
- [Client API Documentation][api]
- [Libraries, Tools, and Language Bindings][libraries]
- [Admin API Documentation][admin-api]
- [Members API][members-api]
### Security, Auth, Access control
- [Security Model][security]
- [Auth and Security][auth_api]
- [Authentication Guide][authentication]
## etcd v2 Cluster Administration
Configuration values are distributed within the cluster for your applications to read. Values can be changed programmatically and smart applications can reconfigure automatically. You'll never again have to run a configuration management tool on every machine in order to change a single config value.
### General Info
- [etcd Proxies][proxy]
- [Production Users][production-users]
- [Admin Guide][admin_guide]
- [Configuration Flags][configuration]
- [Frequently Asked Questions][faq]
### Initial Setup
- [Tuning etcd Clusters][tuning]
- [Discovery Service Protocol][discovery_protocol]
- [Running etcd under Docker][docker_guide]
### Live Reconfiguration
- [Runtime Configuration][runtime-configuration]
### Debugging etcd
- [Metrics Collection][metrics]
- [Error Code][errorcode]
- [Reporting Bugs][reporting_bugs]
### Migration
- [Upgrade etcd to 2.3][upgrade_2_3]
- [Upgrade etcd to 2.2][upgrade_2_2]
- [Upgrade to etcd 2.1][upgrade_2_1]
- [Snapshot Migration (0.4.x to 2.x)][04_to_2_snapshot_migration]
- [Backward Compatibility][backward_compatibility]
[etcd-v3]: ../docs.md
[api]: api.md
[libraries]: libraries-and-tools.md
[admin-api]: other_apis.md
[members-api]: members_api.md
[security]: security.md
[auth_api]: auth_api.md
[authentication]: authentication.md
[proxy]: proxy.md
[production-users]: production-users.md
[admin_guide]: admin_guide.md
[configuration]: configuration.md
[faq]: faq.md
[tuning]: tuning.md
[discovery_protocol]: discovery_protocol.md
[docker_guide]: docker_guide.md
[runtime-configuration]: runtime-configuration.md
[metrics]: metrics.md
[errorcode]: errorcode.md
[reporting_bugs]: reporting_bugs.md
[upgrade_2_3]: upgrade_2_3.md
[upgrade_2_2]: upgrade_2_2.md
[upgrade_2_1]: upgrade_2_1.md
[04_to_2_snapshot_migration]: 04_to_2_snapshot_migration.md
[backward_compatibility]: backward_compatibility.md


@@ -1,317 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Administration
## Data Directory
### Lifecycle
When first started, etcd stores its configuration into a data directory specified by the data-dir configuration parameter.
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
The write ahead log and snapshot files are used during member operation and to recover after a restart.
Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster.
It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
If a member's data directory is ever lost or corrupted then the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
A user should avoid restarting an etcd member with a data directory from an out-of-date backup.
Using an out-of-date data directory can lead to inconsistency, because the member had already agreed to store information via raft but would then re-join claiming it needs that information again.
For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
Once removed the member can be re-added with an empty data directory.
### Contents
The data directory has two sub-directories in it:
1. wal: write ahead log files are stored here. For details see the [wal package documentation][wal-pkg]
2. snap: log snapshots are stored here. For details see the [snap package documentation][snap-pkg]
If `--wal-dir` flag is set, etcd will write the write ahead log files to the specified directory instead of data directory.
## Cluster Management
### Lifecycle
If you are spinning up multiple clusters for testing it is recommended that you specify a unique initial-cluster-token for the different clusters.
This can protect you from cluster corruption in case of mis-configuration, because two members started with different cluster tokens will refuse to join each other.
### Monitoring
It is important to monitor your production etcd cluster for health information and runtime metrics.
#### Health Monitoring
At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health":true}`, then the cluster is healthy.
```
$ curl -L http://127.0.0.1:2379/health
{"health":true}
```
You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.
```
$./etcdctl cluster-health
member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379
cluster is healthy
```
#### Runtime Metrics
etcd uses [Prometheus][prometheus] for metrics reporting in the server. You can read more through the runtime metrics [doc][metrics].
### Debugging
Debugging a distributed system can be difficult. etcd provides several ways to make debugging easier.
#### Enabling Debug Logging
When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
```
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
$ # debug logging enabled
$
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```
#### Debugging Variables
Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain `cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
`cmdline` is the command line arguments passed into etcd.
`file_descriptor_limit` is the max number of file descriptors etcd can utilize.
`memstats` is explained in detail in the [Go runtime documentation][golang-memstats].
`raft.status` is useful when you want to debug low level raft issues if you are familiar with raft internals. In most cases, you do not need to check `raft.status`.
```json
{
"cmdline": ["./etcd"],
"file_descriptor_limit": 0,
"memstats": {"Alloc":4105744,"TotalAlloc":42337320,"Sys":12560632,"...":"..."},
"raft.status": {"id":"ce2a822cea30bfca","term":5,"vote":"ce2a822cea30bfca","commit":23509,"lead":"ce2a822cea30bfca","raftState":"StateLeader","progress":{"ce2a822cea30bfca":{"match":23509,"next":23510,"state":"ProgressStateProbe"}}}
}
```
### Optimal Cluster Size
The recommended etcd cluster size is 3, 5 or 7, determined by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance drops because data must be replicated to more machines.
#### Fault Tolerance Table
It is recommended to have an odd number of members in a cluster. Having an odd cluster size doesn't change the number needed for majority, but you gain a higher tolerance for failure by adding the extra member. You can see this in practice when comparing even and odd sized clusters:
| Cluster Size | Majority | Failure Tolerance |
|--------------|------------|-------------------|
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 3 | 2 | **1** |
| 4 | 3 | 1 |
| 5 | 3 | **2** |
| 6 | 4 | 2 |
| 7 | 4 | **3** |
| 8 | 5 | 3 |
| 9 | 5 | **4** |
As you can see, adding another member to bring the size of cluster up to an odd size is always worth it. During a network partition, an odd number of members also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
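The majority column is just ⌊N/2⌋+1 and the failure tolerance is what remains; a quick shell check of the arithmetic:
```sh
# quorum math for a cluster of size N (N=5 chosen as an example)
N=5
MAJORITY=$(( N / 2 + 1 ))        # 3
TOLERANCE=$(( N - MAJORITY ))    # 2
echo "size=$N majority=$MAJORITY tolerance=$TOLERANCE"
```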
#### Changing Cluster Size
After your cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-reconfig], which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
### Member Migration
When there is a scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing the data or changing the member ID.
The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
* Stop the member process.
* Copy the data directory of the now-idle member to the new machine.
* Update the peer URLs for the replaced member to reflect the new machine according to the [runtime reconfiguration instructions][update-a-member].
* Start etcd on the new machine, using the same configuration and the copy of the data directory.
This example will walk you through the process of migrating the infra1 member to a new machine:
|Name|Peer URL|
|------|--------------|
|infra0|10.0.1.10:2380|
|infra1|10.0.1.11:2380|
|infra2|10.0.1.12:2380|
```sh
$ export ETCDCTL_ENDPOINT=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
```
```sh
$ etcdctl member list
84194f7c5edd8b37: name=infra0 peerURLs=http://10.0.1.10:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.10:2379
b4db3bf5e495e255: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.11:2379
bc1083c870280d44: name=infra2 peerURLs=http://10.0.1.12:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.12:2379
```
#### Stop the member etcd process
```sh
$ ssh 10.0.1.11
```
```sh
$ kill `pgrep etcd`
```
#### Copy the data directory of the now-idle member to the new machine
```sh
$ tar -cvzf infra1.etcd.tar.gz %data_dir%
```
```sh
$ scp infra1.etcd.tar.gz 10.0.1.13:~/
```
#### Update the peer URLs for that member to reflect the new machine
```sh
$ curl http://10.0.1.10:2379/v2/members/b4db3bf5e495e255 -XPUT \
-H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.1.13:2380"]}'
```
Or use the `etcdctl member update` command:
```sh
$ etcdctl member update b4db3bf5e495e255 http://10.0.1.13:2380
```
#### Start etcd on the new machine, using the same configuration and the copy of the data directory
```sh
$ ssh 10.0.1.13
```
```sh
$ tar -xzvf infra1.etcd.tar.gz -C %data_dir%
```
```sh
etcd -name infra1 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
```
### Disaster Recovery
etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N-1)/2_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would normally be impossible for the cluster to restore quorum and continue functioning.
To recover from such scenarios, etcd provides functionality to backup and restore the datastore and recreate the cluster without data loss.
#### Backing up the datastore
**Note:** Windows users must stop etcd before running the backup command.
The first step of the recovery is to back up the data directory and wal directory, if stored separately, on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data (and wal) directory used by etcd. For example:
```sh
etcdctl backup \
--data-dir %data_dir% \
[--wal-dir %wal_dir%] \
--backup-dir %backup_data_dir%
[--backup-wal-dir %backup_wal_dir%]
```
This command will rewrite some of the metadata contained in the backup (specifically, the node ID and cluster ID), which means that the node will lose its former identity. In order to recreate a cluster from the backup, you will need to start a new, single-node cluster. The metadata is rewritten to prevent the new node from inadvertently being joined onto an existing cluster.
#### Restoring a backup
To restore a backup using the procedure created above, start etcd with the `-force-new-cluster` option, pointing it at the backup directory. This will initialize a new, single-member cluster with the default advertised peer URLs, but preserve the entire contents of the etcd data store. Continuing from the previous example:
```sh
etcd \
-data-dir=%backup_data_dir% \
[-wal-dir=%backup_wal_dir%] \
-force-new-cluster \
...
```
Now etcd should be available on this node and serving the original datastore.
Once you have verified that etcd has started successfully, shut it down and move the data and wal, if stored separately, back to the previous location (you may wish to make another copy as well to be safe):
```sh
pkill etcd
rm -fr %data_dir%
rm -fr %wal_dir%
mv %backup_data_dir% %data_dir%
mv %backup_wal_dir% %wal_dir%
etcd \
-data-dir=%data_dir% \
[-wal-dir=%wal_dir%] \
...
```
#### Restoring the cluster
Now that the node is running successfully, [change its advertised peer URLs][update-a-member], as the `--force-new-cluster` option has reset the peer URL to the default of listening on localhost.
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details.
**Note:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
### Client Request Timeout
etcd sets different timeouts for various types of client requests. The timeout values are not tunable now; this will be improved (https://github.com/coreos/etcd/issues/2038).
#### Get requests
Timeout is not set for get requests, because etcd serves the result locally in a non-blocking way.
**Note**: QuorumGet requests are a different type, covered in the following sections.
#### Watch requests
Timeout is not set for watch requests. etcd will not stop a watch request until client cancels it, or the connection is broken.
#### Delete, Put, Post, QuorumGet requests
The default timeout is 5 seconds. It should be large enough to allow all key modifications if the majority of the cluster is functioning.
If the request times out, it indicates two possibilities:
1. the server the request was sent to was not functioning at that time.
2. the majority of the cluster is not functioning.
If timeouts happen several times continuously, administrators should check the status of the cluster and resolve it as soon as possible.
### Best Practices
#### Maximum OS threads
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [has changed in Go 1.5][golang1.5-runtime]).
When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable GOMAXPROCS to the desired number when starting etcd. For more information on this variable, see the [Go runtime documentation][golang-runtime].
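For example, a minimal sketch (the thread count is hypothetical; match it to the machine's core count):
```sh
# allow the Go runtime to execute etcd code on up to 4 OS threads
GOMAXPROCS=4 etcd -name infra0 -data-dir /var/lib/etcd
```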
[add-a-member]: runtime-configuration.md#add-a-new-member
[golang1.5-runtime]: https://golang.org/doc/go1.5#runtime
[golang-memstats]: https://golang.org/pkg/runtime/#MemStats
[golang-runtime]: https://golang.org/pkg/runtime
[metrics]: metrics.md
[prometheus]: http://prometheus.io/
[remove-a-member]: runtime-configuration.md#remove-a-member
[runtime-reconfig]: runtime-configuration.md#cluster-reconfiguration-operations
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
[update-a-member]: runtime-configuration.md#update-a-member
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal



@@ -1,97 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# etcd3 API
TODO: API doc
## Data Model
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.
etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generate a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
### Logical View
The store's logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys, so range queries are inexpensive.
The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of a key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to recover space, revisions before the compaction revision will be removed.
A key's lifetime spans a generation. Each key may have one or multiple generations. Creating a key increments the generation of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key's current generation. Each modification of a key creates a new version of the key. Once a compaction happens, any generation that ended before the compaction revision will be removed, and values set before the compaction revision, except the latest one, will be removed.
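As an illustration of revisions and time-travel reads, a hedged sketch using the etcd3 CLI (key, values, and revision numbers are hypothetical):
```sh
ETCDCTL_API=3 etcdctl put foo v1         # suppose this creates revision 10
ETCDCTL_API=3 etcdctl put foo v2         # a new revision, 11
ETCDCTL_API=3 etcdctl get foo --rev=10   # still returns "v1" (time travel)
ETCDCTL_API=3 etcdctl get foo            # returns the latest value, "v2"
```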
### Physical View
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision, to be efficient. A single revision may correspond to multiple keys in the tree.
The key of each key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, thus one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
etcd also keeps a secondary in-memory [btree][btree] index to speed up range queries over keys. The keys in the btree index are the keys of the store exposed to the user. The value is a pointer to the modification of the persistent b+tree. Compaction removes dead pointers.
## KV API Guarantees
etcd is a consistent and durable key value store with mini-transaction (TODO: link to txn doc when we have it) support. The key value store is exposed through the KV APIs. etcd tries to ensure the strongest consistency and durability guarantees for a distributed system. This specification enumerates the KV API guarantees made by etcd.
### APIs to consider
* Read APIs
* range
* watch
* Write APIs
* put
* delete
* Combination (read-modify-write) APIs
* txn
### etcd Specific Definitions
#### operation completed
An etcd operation is considered complete when it is committed through consensus, and therefore “executed” -- permanently stored -- by the etcd storage engine. The client knows an operation is completed when it receives a response from the etcd server. Note that the client may be uncertain about the status of an operation if it times out, or there is a network disruption between the client and the etcd member. etcd may also abort operations when there is a leader election. etcd does not send `abort` responses to clients' outstanding requests in this event.
#### revision
An etcd operation that modifies the key value store is assigned a single increasing revision. A transaction operation might modify the key value store multiple times, but only one revision is assigned. The revision attribute of a key value pair modified by the operation has the same value as the revision of the operation. The revision can be used as a logical clock for the key value store. A key value pair that has a larger revision was modified after a key value pair with a smaller revision. Two key value pairs that have the same revision were modified by an operation "concurrently".
### Guarantees Provided
#### Atomicity
All API requests are atomic; an operation either completes entirely or not at all. For watch requests, all events generated by one operation will be in one watch response. Watch never observes partial events for a single operation.
#### Consistency
All API calls ensure [sequential consistency][seq_consistency], the strongest consistency guarantee available from distributed systems. No matter which etcd member server a client makes requests to, a client reads the same events in the same order. If two members complete the same number of operations, the state of the two members is consistent.
For watch operations, etcd guarantees to return the same value for the same key across all members for the same revision. For range operations, etcd has a similar guarantee for [linearized][Linearizability] access; serialized access may be behind the quorum state, so that the later revision is not yet available.
As with all distributed systems, it is impossible for etcd to ensure [strict consistency][strict_consistency]. etcd does not guarantee that it will return to a read the “most recent” value (as measured by a wall clock when a request is completed) available on any cluster member.
#### Isolation
etcd ensures [serializable isolation][serializable_isolation], which is the highest isolation level available in distributed systems. Read operations will never observe any intermediate data.
#### Durability
Any completed operations are durable. All accessible data is also durable data. A read will never return data that has not been made durable.
#### Linearizability
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another, that operation must also precede the other operation in the sequence.
For example, consider a client completing a write at time point 1 (*t1*). A client issuing a read at *t2* (for *t2* > *t1*) should receive a value at least as recent as the previous write, completed at *t1*. However, the read might actually complete only by *t3*, and the returned value, current at *t2* when the read began, might be "stale" by *t3*.
etcd does not ensure linearizability for watch operations. Users are expected to verify the revision of watch responses to ensure correct ordering.
etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request's consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus.
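With the etcd3 CLI this is a one-flag choice; a sketch (key name hypothetical):
```sh
# linearizable read (the default): goes through Raft consensus
ETCDCTL_API=3 etcdctl get foo --consistency=l
# serializable read: served from local state, lower latency, possibly stale
ETCDCTL_API=3 etcdctl get foo --consistency=s
```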
[persistent-ds]: https://en.wikipedia.org/wiki/Persistent_data_structure
[btree]: https://en.wikipedia.org/wiki/B-tree
[b+tree]: https://en.wikipedia.org/wiki/B%2B_tree
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #linearizability


@@ -1,516 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# v2 Auth and Security
## etcd Resources
There are three types of resources in etcd:
1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
### Permission Resources
#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.
#### Roles
Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.
The special static ROOT (named `root`) role has full permissions on all key-value resources, plus the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
There is also a special GUEST role, named `guest`. These are the permissions given to unauthenticated requests to etcd. This role will be created automatically, and by default allows access to the full keyspace due to backward compatibility. (etcd did not previously authenticate any actions.) This role can be modified by a ROOT role holder at any time, to reduce the capabilities of unauthenticated users.
#### Permissions
There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.
A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.
### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` is for that exact key or directory, not its children or recursively. `/foo*` is a prefix that matches `/foo` recursively, all keys thereunder, and keys with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is permission on the full keyspace.
### Settings Resources
Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).
## v2 Auth
### Basic Auth
We only support [Basic Auth][basic-auth] for the first version. Clients need to attach the basic auth credentials to the HTTP Authorization header.
### Authorization field for operations
Added to requests to /v2/keys, /v2/auth
Add code 401 Unauthorized to the set of responses from the v2 API
Authorization: Basic {encoded string}
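For example, a minimal sketch of an authenticated request; `curl -u` constructs the `Authorization: Basic ...` header (credentials hypothetical):
```sh
curl -u 'root:rootpw' 'http://127.0.0.1:2379/v2/keys/foo'
```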
### Future Work
Other types of auth can be considered for the future (eg, signed certs, public keys) but the `Authorization:` header allows for other such types
### Things out of Scope for etcd Permissions
* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (eg: users modifying keys outside work hours)
## API endpoints
An Error JSON corresponds to:

    {
      "name": "ErrErrorName",
      "description": "The longer helpful description of the error."
    }
#### Enable and Disable Authentication
**Get auth status**
GET /v2/auth/enable
Sent Headers:
Possible Status Codes:
200 OK
200 Body:
{
"enabled": true
}
**Enable auth**
PUT /v2/auth/enable
Sent Headers:
Put Body: (empty)
Possible Status Codes:
200 OK
400 Bad Request (if root user has not been created)
409 Conflict (already enabled)
200 Body: (empty)
**Disable auth**
DELETE /v2/auth/enable
Sent Headers:
Authorization: Basic <RootAuthString>
Possible Status Codes:
200 OK
401 Unauthorized (if not a root user)
409 Conflict (already disabled)
200 Body: (empty)
#### Users
The User JSON object is formed as follows:
```
{
"user": "userName",
"password": "password",
"roles": [
"role1",
"role2"
],
"grant": [],
"revoke": []
}
```
Password is only passed when necessary.
**Get a List of Users**
GET/HEAD /v2/auth/users
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"users": [
{
"user": "alice",
"roles": [
{
"role": "root",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
},
{
"user": "bob",
"roles": [
{
"role": "guest",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
}
]
}
**Get User Details**
GET/HEAD /v2/auth/users/alice
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"user" : "alice",
"roles" : [
{
"role": "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
},
{
"role": "etcd",
"permissions" : {
"kv" : {
"read": [ "/*" ],
"write": [ "/*" ]
}
}
}
]
}
**Create Or Update A User**
A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are needed.
PUT /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
JSON struct, above, matching the appropriate name
* Starting password and roles when creating.
* Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent users)
409 Conflict (when granting duplicated roles or revoking non-existent roles)
200 Headers:
Content-type: application/json
200 Body:
JSON state of the user
**Remove A User**
DELETE /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root user when auth is enabled)
404 Not Found
200 Headers:
200 Body: (empty)
#### Roles
A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read" : [ "/fleet/" ],
"write": [ "/fleet/" ]
}
},
"grant" : {"kv": {...}},
"revoke": {"kv": {...}}
}
```
**Get Role Details**
GET/HEAD /v2/auth/roles/fleet
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
}
**Get a list of Roles**
GET/HEAD /v2/auth/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"roles": [
{
"role": "fleet",
"permissions": {
"kv": {
"read": ["/fleet/"],
"write": ["/fleet/"]
}
}
},
{
"role": "etcd",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
},
{
"role": "quay",
"permissions": {
"kv": {
"read": ["/*"],
"write": ["/*"]
}
}
}
]
}
**Create Or Update A Role**
PUT /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
Initial desired JSON state, including the role name for verification and:
* Starting permission set if creating
* Granted/Revoked permission set if updating
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent roles)
409 Conflict (when granting duplicated permission or revoking non-existent permission)
200 Body:
JSON state of the role
**Remove A Role**
DELETE /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root)
404 Not Found
200 Headers:
200 Body: (empty)
## Example Workflow
Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
### Create root role
```
PUT /v2/auth/users/root
Put Body:
{"user" : "root", "password": "betterRootPW!"}
```
### Enable auth
```
PUT /v2/auth/enable
```
### Modify guest role (revoke write permission)
```
PUT /v2/auth/roles/guest
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "guest",
"revoke" : {
"kv" : {
"write": [
"/*"
]
}
}
}
```
### Create Roles for the Applications
Create the rkt role fully specified:
```
PUT /v2/auth/roles/rkt
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "rkt",
"permissions" : {
"kv": {
"read": [
"/rkt/*"
],
"write": [
"/rkt/*"
]
}
}
}
```
But let's make fleet just a basic role for now:
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "fleet"
}
```
### Optional: Grant some permissions to the roles
Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rkt case. So this step is optional.)
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "fleet",
"grant" : {
"kv" : {
"read": [
"/rkt/fleet",
"/fleet/*"
]
}
}
}
```
### Create Users
Same as before, let's set up rkt all at once and fleet separately
```
PUT /v2/auth/users/rktuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
```
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "fleetuser", "password" : "fleetpw"}
```
### Optional: Grant Roles to Users
Likewise, let's explicitly grant fleetuser access.
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user": "fleetuser", "grant": ["fleet"]}
```
#### Start to use fleetuser and rktuser
For example:
```
PUT /v2/keys/rkt/RktData
Headers:
Authorization: Basic <rktuser:rktpw>
Body:
value=launch
```
Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.
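For instance, a hypothetical denied write: `rktuser` holds only the rkt role, so writing under `/fleet/` is rejected:
```sh
curl -u 'rktuser:rktpw' 'http://127.0.0.1:2379/v2/keys/fleet/Config' \
  -XPUT -d value=launch
# -> 401 Unauthorized
```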
[basic-auth]: https://en.wikipedia.org/wiki/Basic_access_authentication


@@ -1,185 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Authentication Guide
## Overview
Authentication -- having users and roles in etcd -- was added in etcd 2.1. This guide will help you set up basic authentication in etcd.
etcd before 2.1 was a completely open system; anyone with access to the API could change keys. In order to preserve backward compatibility and upgradability, this feature is off by default.
For a full discussion of the RESTful API, see [the authentication API documentation][auth-api].
## Special Users and Roles
There is one special user, `root`, and there are two special roles, `root` and `guest`.
### User `root`
User `root` must be created before security can be activated. It has the `root` role and allows for the changing of anything inside etcd. The idea behind the `root` user is for recovery purposes -- a password is generated and stored somewhere -- and the root role is granted to the administrator accounts on the system. In the future, for troubleshooting and recovery, we will need to assume some access to the system, and future documentation will assume this root user (though anyone with the role will suffice).
### Role `root`
Role `root` cannot be modified, but it may be granted to any user. Having access via the root role not only allows global read-write access (as was the case before 2.1) but allows modification of the authentication policy and all administrative things, like modifying the cluster membership.
### Role `guest`
The `guest` role defines the permissions granted to any request that does not provide authentication. This will be created on security activation (if it doesn't already exist) to have full access to all keys, as was true in etcd 2.0. It may be modified at any time, and cannot be removed.
## Working with users
The `user` subcommand for `etcdctl` handles all things having to do with user accounts.
A listing of users can be found with
```
$ etcdctl user list
```
Creating a user is as easy as
```
$ etcdctl user add myusername
```
And there will be a prompt for a new password.
Roles can be granted and revoked for a user with
```
$ etcdctl user grant myusername -roles foo,bar,baz
$ etcdctl user revoke myusername -roles bar,baz
```
We can look at this user with
```
$ etcdctl user get myusername
```
And the password for a user can be changed with
```
$ etcdctl user passwd myusername
```
Which will prompt again for a new password.
To delete an account, there's always
```
$ etcdctl user remove myusername
```
## Working with roles
The `role` subcommand for `etcdctl` handles all things having to do with access controls for particular roles, as were granted to individual users.
A listing of roles can be found with
```
$ etcdctl role list
```
A new role can be created with
```
$ etcdctl role add myrolename
```
A role has no password; we are merely defining a new set of access rights.
Roles are granted access to various parts of the keyspace, a single path at a time.
Reading a path is simple; if the path ends in `*`, that key **and all keys prefixed with it** are granted to holders of this role. If it does not end in `*`, only that key and that key alone is granted.
Access can be granted as either read, write, or both, as in the following examples:
```
# Give read access to keys under the /foo directory
$ etcdctl role grant myrolename -path '/foo/*' -read
# Give write-only access to the key at /foo/bar
$ etcdctl role grant myrolename -path '/foo/bar' -write
# Give full access to keys under /pub
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```
Beware that
```
# Give full access to keys under /pub??
$ etcdctl role grant myrolename -path '/pub*' -readwrite
```
Without the trailing slash, this grant may also include keys under `/publishing`, for example. To cover both the key and its children, grant `/pub` and `/pub/*`.
To see what's granted, we can look at the role at any time:
```
$ etcdctl role get myrolename
```
Revocation of permissions is done the same logical way:
```
$ etcdctl role revoke myrolename -path '/foo/bar' -write
```
As is removing a role entirely
```
$ etcdctl role remove myrolename
```
## Enabling authentication
The minimal steps to enable auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
Make sure the root user is created:
```
$ etcdctl user add root
New password:
```
And enable authentication
```
$ etcdctl auth enable
```
After this, etcd is running with authentication enabled. To disable it for any reason, use the reciprocal command:
```
$ etcdctl -u root:rootpw auth disable
```
It would also be good to check what guests (unauthenticated users) are allowed to do:
```
$ etcdctl -u root:rootpw role get guest
```
And modify this role appropriately, depending on your policies.
## Using `etcdctl` to authenticate
`etcdctl` supports an authentication flag similar to `curl`'s.
```
$ etcdctl -u user:password get foo
```
or if you prefer to be prompted:
```
$ etcdctl -u user get foo
```
Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
[auth-api]: auth_api.md


@@ -1,77 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../docs.md#documentation
# Backward Compatibility
The main goal of the etcd 2.0 release was to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provided a new set of APIs.
The other main focus of this release was a more reliable Raft implementation, but as this change is internal it should not have any notable effects for users.
## Command Line Flags Changes
The major flag changes are mostly related to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports.
- `-addr` is replaced by `-advertise-client-urls`.
- `-bind-addr` is replaced by `-listen-client-urls`.
- `-peer-addr` is replaced by `-initial-advertise-peer-urls`.
- `-peer-bind-addr` is replaced by `-listen-peer-urls`.
- `-peers` is replaced by `-initial-cluster`.
- `-peers-file` is replaced by `-initial-cluster`.
- `-peer-heartbeat-interval` is replaced by `-heartbeat-interval`.
- `-peer-election-timeout` is replaced by `-election-timeout`.
The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/v2/configuration.md.
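For example, a hedged sketch of the same member started with the old and the new flag names (addresses hypothetical):
```sh
# etcd 0.4.x style:
#   etcd -name infra0 -peer-addr 10.0.1.10:7001 -addr 10.0.1.10:4001
# etcd 2.0 style:
etcd -name infra0 \
  -initial-advertise-peer-urls http://10.0.1.10:2380 \
  -listen-peer-urls http://10.0.1.10:2380 \
  -advertise-client-urls http://10.0.1.10:2379 \
  -listen-client-urls http://10.0.1.10:2379
```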
## Data Directory Naming
The default data dir location has changed from {$hostname}.etcd to {name}.etcd.
## Key-Value API
### Read consistency flag
The consistent flag for read operations was removed in etcd 2.0.0. The normal read operations provide the same consistency guarantees as the 0.4.6 read operations with the consistent flag set.
The read consistency guarantees are:
Consistent reads guarantee sequential consistency within one client that talks to one etcd server. Reads and writes from one client to one etcd member should be observed in order. If a client writes a value to an etcd server successfully, it should be able to get the value out of the server immediately.
Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
Reads do not provide linearizability. If you want linearizable reads, you need to set the quorum option to true.
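A linearizable (quorum) read via the v2 HTTP API, for example:
```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```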
**Previous behavior**
We added an option for a consistent read in the old version of etcd because etcd 0.x redirected write requests to the leader. When the user got the result back from the leader, the member it sent the request to originally might not have applied the write yet. With the consistent flag set to true, the client would always send read requests to the leader, so one client could see its own last write when consistent=true was enabled. There are no ordering guarantees among different clients.
## Standby
etcd 0.4's standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of the problems standby was solving.
Standby mode was intended for large clusters that had a subset of the members acting in the consensus process. Overall this process was too magical and allowed operators to back themselves into a corner.
Proxy mode in 2.0 will provide similar functionality, and with improved control over which machines act as proxies due to the operator specifically configuring them. Proxies also support read only or read/write modes for increased security and durability.
[proxymode]: proxy.md
## Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
## HTTP Admin API
`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/members API][members-api] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.
[members-api]: members_api.md
## HTTP Key Value API
- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
- Expiration time is in UTC instead of local time.


@@ -1,23 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
# Benchmarks
etcd benchmarks will be published regularly and tracked for each release below:
- [etcd v2.1.0-alpha][2.1]
- [etcd v2.2.0-rc][2.2]
- [etcd v3 demo][3.0]
# Memory Usage Benchmarks
It records expected memory usage in different scenarios.
- [etcd v2.2.0-rc][2.2-mem]
[2.1]: etcd-2-1-0-alpha-benchmarks.md
[2.2]: etcd-2-2-0-rc-benchmarks.md
[2.2-mem]: etcd-2-2-0-rc-memory-benchmarks.md
[3.0]: etcd-3-demo-benchmarks.md


@@ -1,57 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.1.0 alpha
## etcd Cluster
3 etcd members, each runs on a single machine
## Testing
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
## Performance
### reading one single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 1534 | 0.7 |
| 64 | 64 | leader only | 10125 | 9.1 |
| 64 | 256 | leader only | 13892 | 27.1 |
| 256 | 1 | leader only | 1530 | 0.8 |
| 256 | 64 | leader only | 10106 | 10.1 |
| 256 | 256 | leader only | 14667 | 27.0 |
| 64 | 64 | all servers | 24200 | 3.9 |
| 64 | 256 | all servers | 33300 | 11.8 |
| 256 | 64 | all servers | 24800 | 3.9 |
| 256 | 256 | all servers | 33000 | 11.5 |
### writing one single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 60 | 21.4 |
| 64 | 64 | leader only | 1742 | 46.8 |
| 64 | 256 | leader only | 3982 | 90.5 |
| 256 | 1 | leader only | 58 | 20.3 |
| 256 | 64 | leader only | 1770 | 47.8 |
| 256 | 256 | leader only | 4157 | 105.3 |
| 64 | 64 | all servers | 1028 | 123.4 |
| 64 | 256 | all servers | 3260 | 123.8 |
| 256 | 64 | all servers | 1033 | 121.5 |
| 256 | 256 | all servers | 3061 | 119.3 |
[boom]: https://github.com/rakyll/boom
[hack-benchmark]: ../../../hack/benchmark/


@@ -1,77 +0,0 @@
**This is the documentation for etcd2 releases. Read [etcd3 doc][v3-docs] for etcd3 releases.**
[v3-docs]: ../../docs.md#documentation
# Benchmarking etcd v2.2.0
## Physical Machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted as etcd data directory
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0 members, each runs on a single machine.
Detailed versions:
```
etcd Version: 2.2.0
Git SHA: e4561dd
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool][boom] with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack] for the patch and the steps to reproduce our procedures.
The performance is calculated from the results of 100 benchmark rounds.
## Performance
### Single Key Read Performance
| key size in bytes | number of clients | target etcd server | average read QPS | read QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 2303 | 200 | 0.49 | 0.06 |
| 64 | 64 | leader only | 15048 | 685 | 7.60 | 0.46 |
| 64 | 256 | leader only | 14508 | 434 | 29.76 | 1.05 |
| 256 | 1 | leader only | 2162 | 214 | 0.52 | 0.06 |
| 256 | 64 | leader only | 14789 | 792 | 7.69| 0.48 |
| 256 | 256 | leader only | 14424 | 512 | 29.92 | 1.42 |
| 64 | 64 | all servers | 45752 | 2048 | 2.47 | 0.14 |
| 64 | 256 | all servers | 46592 | 1273 | 10.14 | 0.59 |
| 256 | 64 | all servers | 45332 | 1847 | 2.48| 0.12 |
| 256 | 256 | all servers | 46485 | 1340 | 10.18 | 0.74 |
### Single Key Write Performance
| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th Percentile Latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 55 | 4 | 24.51 | 13.26 |
| 64 | 64 | leader only | 2139 | 125 | 35.23 | 3.40 |
| 64 | 256 | leader only | 4581 | 581 | 70.53 | 10.22 |
| 256 | 1 | leader only | 56 | 4 | 22.37| 4.33 |
| 256 | 64 | leader only | 2052 | 151 | 36.83 | 4.20 |
| 256 | 256 | leader only | 4442 | 560 | 71.59 | 10.03 |
| 64 | 64 | all servers | 1625 | 85 | 58.51 | 5.14 |
| 64 | 256 | all servers | 4461 | 298 | 89.47 | 36.48 |
| 256 | 64 | all servers | 1599 | 94 | 60.11| 6.43 |
| 256 | 256 | all servers | 4315 | 193 | 88.98 | 7.01 |
## Performance Changes
- Because etcd now records metrics for each API call, read QPS performance seems to see a minor decrease in most scenarios. This minimal performance impact was judged a reasonable investment for the breadth of monitoring and debugging information returned.
- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.
[boom]: https://github.com/rakyll/boom
[hack]: ../../../hack/benchmark/
