Compare commits

...

204 Commits

Author SHA1 Message Date
Gyu-Ho Lee
61fc123e7a version: bump up to 3.2.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-22 09:47:21 -07:00
Anthony Romano
71d2008385 mvcc: use GaugeFunc metric to load db size when requested
Relying on mvcc to set the db size metric can cause it to
miss size changes when a txn commits after the last write
has completed, just before a quiescent period. Instead, load the
db size on demand.

Fixes #8146
2017-06-22 09:47:01 -07:00
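The pattern behind this fix, sketched in Go: a prometheus.GaugeFunc computes its value at scrape time instead of relying on write-path updates. The metric names mirror etcd's, but dbSize here is a stand-in for the backend's size accessor.

```go
import (
	"sync/atomic"

	"github.com/prometheus/client_golang/prometheus"
)

// dbSize stands in for the backend's current on-disk size.
var dbSize int64

// The gauge's value is computed at scrape time, so it is current even
// when no write txn has committed recently.
var dbTotalSize = prometheus.NewGaugeFunc(prometheus.GaugeOpts{
	Namespace: "etcd_debugging",
	Subsystem: "mvcc",
	Name:      "db_total_size_in_bytes",
	Help:      "Total size of the underlying database in bytes.",
}, func() float64 { return float64(atomic.LoadInt64(&dbSize)) })

func init() { prometheus.MustRegister(dbTotalSize) }
```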
Anthony Romano
79794bf556 integration: test mvcc db size metric is updated following defrag 2017-06-22 09:46:54 -07:00
Gyu-Ho Lee
db0ca8963f test: run basic functional tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 17:15:22 -07:00
Gyu-Ho Lee
27a3356c74 etcd-tester: add 'exit-on-failure'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 17:15:16 -07:00
Anthony Romano
4526284326 mvcc: restore into tree index with one key index
Clobbering the mvcc kvindex with new keyIndexes for each restore
chunk would cause index corruption by dropping historical information.
2017-06-20 10:58:42 -07:00
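A standalone model of the corruption, not the actual treeIndex code: merging each chunk's revisions into the existing entry preserves history, while assigning a fresh entry per chunk clobbers it.

```go
// kvIndex maps a key to its historical revisions.
type kvIndex map[string][]int64

// restoreChunk merges a chunk into the index. Appending preserves
// revisions restored by earlier chunks; assigning a fresh slice
// (idx[key] = []int64{rev}) would drop that history, which is the
// corruption described above.
func (idx kvIndex) restoreChunk(kvs map[string]int64) {
	for key, rev := range kvs {
		idx[key] = append(idx[key], rev)
	}
}
```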
Anthony Romano
0b0b1992b8 mvcc: test restore and deletes with small chunk sizes 2017-06-20 10:58:35 -07:00
Anthony Romano
ed7ef5be8b mvcc: set db size metric on restore
Fixes #8080
2017-06-20 10:58:16 -07:00
Anthony Romano
ff5be50ee5 integration: test mvcc db size metric is set on restore 2017-06-20 10:58:10 -07:00
Anthony Romano
a032b3b914 v3rpc: treat nil txn request op as error
Fixes #7889
2017-06-20 10:57:41 -07:00
Anthony Romano
9388a27649 dev-guide: add txn json example 2017-06-20 10:57:35 -07:00
Anthony Romano
af1d732916 e2e: test txn over grpc json 2017-06-20 10:57:27 -07:00
Gyu-Ho Lee
939aa66b48 test: 'FAIL' on release binary download failure
I see CI failing to download release binaries,
but the exit code doesn't trigger a CI job failure.

We need the 'FAIL' string.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-20 10:55:19 -07:00
Gyu-Ho Lee
3365dd4ff0 Documentation/op-guide: fix failed RPC rate, leader election metrics
This fixes the failed RPC rate query; the subtraction is unnecessary
because we already query by the status code.
Also adds grpc_method to make the query more specific. Most of the
time, the failure recovers within 10 seconds, which is our
Prometheus scrape interval, so the 'rate' query might not cover
that time window and shows 0s, but the failure still shows up in the graph.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-15 12:00:40 -07:00
Gyu-Ho Lee
959d55ae80 bill-of-materials: regenerate with multi licenses
Fix https://github.com/coreos/etcd/issues/8086.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-14 08:44:11 -07:00
Geoff Levand
3e1992140a build-aci: Fix ACI image name
The appc discovery spec states that the architecture specifier in the ACI
image file name will be an ACI architecture value.  Our build scripts were
using GOARCH in the image name, which is incorrect for arm64/aarch64.
See: https://github.com/appc/spec/blob/master/spec/discovery.md

Fixes errors like these on arm64 machines:

  $ rkt --debug --insecure-options=image fetch coreos.com/etcd:v3.2.0-rc.1
  image: remote fetching from URL "https://github.com/coreos/etcd/releases/download/v3.2.0-rc.1/etcd-v3.2.0-rc.1-linux-aarch64.aci"
  fetch: bad HTTP status code: 404

Signed-off-by: Geoff Levand <geoff@infradead.org>
2017-06-14 08:43:58 -07:00
Gyu-Ho Lee
b547b982b9 Documentation/upgrades: link to previous guides
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 13:04:10 -07:00
Gyu-Ho Lee
56477ca998 version: bump up to 3.2.0+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 13:03:56 -07:00
Gyu-Ho Lee
66722b1ada version: bump up to 3.2.0
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 10:59:09 -07:00
Anthony Romano
963339d265 rafthttp: permit very large v2 snapshots
v2 snapshots were hitting the 512MB message decode limit, causing
sending snapshots to new members to fail for being too big.
2017-06-09 10:49:51 -07:00
Anthony Romano
c87594f27c etcdserver: use same ReadView for read-only txns
A read-only txn isn't serialized by raft, but it uses a fresh
read txn for every mvcc access prior to executing its request ops.
If a write txn modifies the keys matching the read txn's comparisons,
the read txn may return inconsistent results.

To fix, use the same read-only mvcc txn for the duration of the etcd
txn. Probably gets a modest txn speedup as well since there are
fewer read txn allocations.
2017-06-09 09:50:43 -07:00
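A sketch of the fixed shape against the 3.2-era mvcc interfaces, with an illustrative readOp struct: one TxnRead spans every read op, so a concurrent write cannot change the view mid-txn.

```go
import "github.com/coreos/etcd/mvcc"

type readOp struct{ key, rangeEnd []byte }

// applyReadOnlyTxn evaluates all ops against a single mvcc read txn
// instead of opening a fresh one per access.
func applyReadOnlyTxn(kv mvcc.KV, ops []readOp) ([]*mvcc.RangeResult, error) {
	txn := kv.Read() // held for the whole etcd txn
	defer txn.End()

	rrs := make([]*mvcc.RangeResult, 0, len(ops))
	for _, op := range ops {
		rr, err := txn.Range(op.key, op.rangeEnd, mvcc.RangeOptions{})
		if err != nil {
			return nil, err
		}
		rrs = append(rrs, rr)
	}
	return rrs, nil
}
```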
Anthony Romano
e72ad5dd2a mvcc: create TxnWrites from TxnRead with NewReadOnlyTxnWrite
Already used internally by mvcc, but needed by etcdserver txns.
2017-06-09 09:50:37 -07:00
Anthony Romano
3eb5d24cab integration: test txn comparison and concurrent put ordering 2017-06-09 09:50:30 -07:00
Gyu-Ho Lee
8b9041a938 Documentation/op-guide: do not use host network, fix indentation
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-09 09:14:21 -07:00
Anthony Romano
864ffec88c v2http: put back /v2/machines and mark as non-deprecated
This reverts commit 2bb33181b6. python-etcd
seems to depend on /v2/machines and the maintainer vanished. Plus, it is
prefixed with /v2/ so it probably can't be deprecated anyway.
2017-06-08 12:05:59 -07:00
Gyu-Ho Lee
12bc2bba36 etcdserver: add leaseExpired debugging metrics
Fix https://github.com/coreos/etcd/issues/8050.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-08 11:23:12 -07:00
Gyu-Ho Lee
3a43afce5a Documentation/op-guide: fix 'grpc_code' field in metrics
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-08 10:16:07 -07:00
Anthony Romano
0e56ea37e7 fileutil: return immediately if preallocating 0 bytes
fallocate will return EINVAL, causing zeroing to the end of a
0 byte file to fail.

Fixes #8045
2017-06-07 12:59:35 -07:00
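The guard is a one-line early return; a sketch, with the platform-specific allocation path elided:

```go
import "os"

// preallocate returns early for a zero-byte request: fallocate(2)
// rejects length 0 with EINVAL, and there is nothing to zero anyway.
func preallocate(f *os.File, sizeInBytes int64, extendFile bool) error {
	if sizeInBytes == 0 {
		return nil
	}
	// ... fallocate / truncate logic would follow here ...
	return nil
}
```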
Anthony Romano
743192aa3b *: clear rarer shellcheck errors on scripts
Clean up the tail of the warnings
2017-06-06 10:44:59 -07:00
Anthony Romano
e8b156578f travis: add shellcheck 2017-06-06 10:44:53 -07:00
Anthony Romano
61f3338ce7 test: shellcheck 2017-06-06 10:44:46 -07:00
Anthony Romano
effffdbdca test, osutil: disable setting SIG_DFL on linux if built with cov tag
Was causing etcd to terminate before finishing writing its
coverage profile.
2017-06-06 09:47:22 -07:00
Gyu-Ho Lee
9bac803bee Documentation/op-guide: fix typo in grafana.json
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-06-06 09:47:15 -07:00
Anthony Romano
9169ad0d7d *: fix go tool vet -all -shadow errors 2017-06-06 09:47:06 -07:00
Anthony Romano
482a7839d9 test: speedup and strengthen go vet checking
Was iterating over every file, reloading everything. Instead,
analyze the package directories. On my machine, the time for
vet checking goes from 34s to 3s. Scans more code too.
2017-06-06 09:46:54 -07:00
Anthony Romano
ba3058ca79 op-guide: document CN certs in security.md 2017-06-06 09:46:47 -07:00
Anthony Romano
0e90e504f5 scripts, Documentation: fix swagger generation
Changes to the genproto to support splitting out the grpc-gateway broke
swagger generation.
2017-06-02 11:05:21 -07:00
Anthony Romano
998fa0de76 Documentation, scripts: regen RPC docs
Was missing the new cancel_reason field. Also includes updated protodoc
sha to fix generating documentation for upcoming txn compare range patchset.
2017-06-02 10:27:49 -07:00
Anthony Romano
c273735729 op-guide: document configuration flags for gateway 2017-06-01 15:59:49 -07:00
Anthony Romano
c85f736522 mvcc: time restore in restore benchmark
This never worked.
2017-06-01 14:59:31 -07:00
Anthony Romano
a375ff172e mvcc: chunk reads for restoring
Loading all keys at once would cause etcd to use twice as much
memory as it would need to serve the keys, causing RSS to spike on
boot. Instead, load the keys into the mvcc store by chunk. Uses
pipelining for some concurrency.

Fixes #7822
2017-06-01 14:59:27 -07:00
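A minimal sketch of the pipelining idea, with readChunk and index as stand-ins for the backend reader and the kvindex insert: the producer fetches the next chunk while the consumer indexes the current one, so peak memory stays near one chunk.

```go
type keyValue struct {
	Key, Value  []byte
	ModRevision int64
}

// restoreChunked pipelines backend reads with index rebuilding.
func restoreChunked(readChunk func(limit int) []keyValue, index func(kv *keyValue)) {
	const chunkKeys = 10000
	kvc := make(chan []keyValue, 1) // buffer of 1 gives the pipelining

	go func() { // producer: read the next chunk from the backend
		defer close(kvc)
		for {
			kvs := readChunk(chunkKeys)
			if len(kvs) == 0 {
				return
			}
			kvc <- kvs
		}
	}()

	for kvs := range kvc { // consumer: index while the next chunk loads
		for i := range kvs {
			index(&kvs[i])
		}
	}
}
```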
Anthony Romano
1893af9bbd integration: use unixs:// if client port configured for tls 2017-06-01 09:47:08 -07:00
Anthony Romano
b4c655677a clientv3: support unixs:// scheme
For using TLS without giving a TLSConfig to the client.
2017-06-01 09:47:03 -07:00
Anthony Romano
c2160adf1d clientv3/integration: test dialing to TLS without a TLS config times out
etcdctl was getting ctx errors from timing out trying to issue RPCs to
a TLS endpoint without using TLS for transmission. The client should
immediately bail out with a timeout error.
2017-06-01 09:46:57 -07:00
Anthony Romano
5ada311416 clientv3: use Endpoints[0] to initialize grpc creds
Dialing out without specifying TLS creds but giving https uses some
default behavior that depends on passing an endpoint with https to
Dial(), so it's not enough to completely rely on the balancer to supply
endpoints.

Fixes #8008

Also ctx-izes grpc.Dial
2017-06-01 09:46:48 -07:00
Anthony Romano
f042cd7d9c vendor: ghodss/yaml v1.0.0 2017-05-30 14:44:30 -07:00
Anthony Romano
f0a400a3a8 vendor: kr/pty v1.0.0 2017-05-30 14:44:23 -07:00
Anthony Romano
6066977280 op-guide: update performance.md
It's been a year, time to refresh with 3.2.0 data.
2017-05-30 10:16:19 -07:00
Anthony Romano
fc88eccc74 vendor: use v0.2.0 of go-semver 2017-05-30 10:15:23 -07:00
Gyu-Ho Lee
5cb28a7d83 Documentation: add 'yaml.NewConfig' change in 3.2
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-30 10:14:55 -07:00
Anthony Romano
de57e88643 Documentation: add FAQ entry for "database space exceeded" errors
Also moves miscategorized cluster id mismatch entry from "performance"
to "operation".
2017-05-26 09:13:13 -07:00
Anthony Romano
967fc70173 Merge pull request #7983 from heyitsanthony/etcdctl-lock-exec
etcdctl: support exec on lock
2017-05-25 10:26:48 -07:00
Gyu-Ho Lee
4a8d32eaa6 Merge pull request #7984 from gyuho/3.2
*: bump up test Go runtime, etcd versions before 3.2 release
2017-05-24 17:20:48 -07:00
Anthony Romano
643c2a310d etcdctl: support exec on lock
The lock command is clumsy to use from the command line, needing mkfifo,
wait, etc. Instead, make like consul and support launching a command if
one is given.
2017-05-24 16:47:00 -07:00
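Roughly, once the session holds the lock the command is exec'd and the lock released on exit; a hedged sketch in which unlock stands in for the session cleanup:

```go
import (
	"os"
	"os/exec"
)

// runUnderLock launches the user's command while the lock is held and
// releases the lock when the command exits.
func runUnderLock(args []string, unlock func() error) error {
	defer unlock()
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	return cmd.Run()
}
```

With this, an invocation like `etcdctl lock <lockname> <command>` can replace the mkfifo/wait dance.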
Gyu-Ho Lee
c3a191b38d e2e: use version.Cluster for release test
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-24 15:20:18 -07:00
Gyu-Ho Lee
83efd2c745 ROADMAP: make 'release-3.2' stable branch
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-24 14:31:43 -07:00
Gyu-Ho Lee
307331cc31 test: release tests with v3.2+
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-24 14:31:30 -07:00
Gyu-Ho Lee
2abd22a13b travis: run tests with Go 1.8.3
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-24 14:28:33 -07:00
Anthony Romano
2a4db4307f Merge pull request #7982 from heyitsanthony/watch-latency-clients
benchmark: support multiple clients/conns in watch-latency benchmark
2017-05-24 13:23:07 -07:00
Anthony Romano
ebd6e8c4b1 benchmark: support multiple clients/conns in watch-latency benchmark 2017-05-24 11:31:43 -07:00
Gyu-Ho Lee
8c1ab62bc5 Merge pull request #7975 from raoofm/patch-11
doc: modify vonage usecase, adding kubernetes and vault
2017-05-24 10:40:47 -07:00
Anthony Romano
8d2b340629 Merge pull request #7966 from heyitsanthony/close-kv-err
etcdserver: close mvcc.KV on init error path
2017-05-23 12:59:20 -07:00
Gyu-Ho Lee
0b449a24bb Merge pull request #7956 from gyuho/container-linux
Documentation: add systemd, Container Linux guide
2017-05-23 12:38:37 -07:00
Raoof Mohammed
a1804390b1 doc: modify usecase
adding kubernetes and vault
2017-05-23 14:57:10 -04:00
Gyu-Ho Lee
8b290c680a Documentation: add systemd, Container Linux guide
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-23 11:27:27 -07:00
Anthony Romano
c1c9a2c96c etcdserver: close mvcc.KV on init error path
Scheduled compaction will panic if KV is not stopped before
closing the backend.
2017-05-23 10:41:37 -07:00
Anthony Romano
f75e333264 Merge pull request #7958 from heyitsanthony/perm-prefix
etcdctl: improve role --prefix flag
2017-05-22 12:19:16 -07:00
Gyu-Ho Lee
378bac79e1 Merge pull request #7963 from tlossen/patch-1
documentation: fixed typo
2017-05-22 08:29:25 -07:00
Tim Lossen
20a747ea09 Documentation/learning: fixed typo
(repeated word)
2017-05-22 17:26:34 +02:00
Hitoshi Mitake
4cd5e7ebb2 Merge pull request #7809 from mitake/auth-watch
protect watch with auth
2017-05-20 13:23:30 +09:00
Hitoshi Mitake
881903b6d3 e2e: add a new test case for protecting watch with auth 2017-05-20 11:34:45 +09:00
Hitoshi Mitake
939912c425 clientv3, etcdserver: support auth in Watch() 2017-05-20 11:34:45 +09:00
Anthony Romano
cbd3807b30 Merge pull request #7959 from heyitsanthony/regen-protodoc
Documentation, scripts: regenerate protobuf docs with updated protodoc
2017-05-19 15:20:44 -07:00
Anthony Romano
10b1ba7886 Documentation, scripts: regenerate protobuf docs with updated protodoc 2017-05-19 14:57:16 -07:00
Anthony Romano
2f1467cb27 etcdctl: sync README with etcdctl role command, add prefix example, fix typo
Fixes #7951
2017-05-19 13:53:46 -07:00
Anthony Romano
bd680c3302 ctlv3: add --prefix support to role revoke-permission, cleanup role flag handling 2017-05-19 13:53:46 -07:00
Gyu-Ho Lee
fd7de051a4 version: bump up to 3.2.0-rc.1+git
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-19 12:39:23 -07:00
Gyu-Ho Lee
9d7ed0e63a version: bump up to 3.2.0-rc.1
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-19 11:46:15 -07:00
Gyu-Ho Lee
b82ef007f5 Merge pull request #7955 from gyuho/timeout
integration: bump up 'TestV3LeaseRequireLeader' timeout to 5-sec
2017-05-18 17:11:23 -07:00
Gyu-Ho Lee
29bbcdd110 integration: bump up 'TestV3LeaseRequireLeader' timeout to 5-sec
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-18 16:44:57 -07:00
Gyu-Ho Lee
0afc51c762 Merge pull request #7939 from gyuho/test
etcd-tester: add '-failpoints' to configure gofail
2017-05-18 12:53:07 -07:00
Gyu-Ho Lee
4a8fbb9d5d Merge pull request #7954 from gyuho/m
*: remove unused, fix typos
2017-05-18 12:36:24 -07:00
Gyu-Ho Lee
d690634bd6 *: remove unused, fix typos
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-18 12:11:18 -07:00
Gyu-Ho Lee
62b44a85f8 etcd-tester: add '-failpoints' to configure gofail
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-18 11:59:07 -07:00
Gyu-Ho Lee
e7d705b25f Merge pull request #7953 from gyuho/aaa
etcd-tester: use 'debugutil.PProfHandlers'
2017-05-18 11:26:40 -07:00
Gyu-Ho Lee
e1640cc72f etcd-tester: use 'debugutil.PProfHandlers'
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-18 11:21:24 -07:00
Anthony Romano
a6a1eb8378 Merge pull request #7949 from heyitsanthony/godocs
*: fill out missing package godocs
2017-05-18 10:23:26 -07:00
Anthony Romano
33c375dc44 *: fill out blank package godocs
Mostly one-liner short descriptions, but also includes some typo fixes
and some examples.
2017-05-18 09:41:13 -07:00
Anthony Romano
1f2dcbb935 Merge pull request #7948 from heyitsanthony/remove-proxy-alpha
op-guide: remove alpha from grpc proxy
2017-05-18 09:31:34 -07:00
Anthony Romano
c6cf88ef7f op-guide: remove alpha from grpc proxy 2017-05-17 22:27:06 -07:00
Anthony Romano
4e84bd2e3c Merge pull request #7946 from heyitsanthony/report-weighted
report: add NewWeightedReport
2017-05-17 21:04:53 -07:00
Anthony Romano
c09f0ca9d4 report: add NewWeightedReport
Reports with weighted results.
2017-05-17 16:07:20 -07:00
Xiang Li
218ee40f11 Merge pull request #7945 from xiang90/snapshot_error
etcdserver: more logging on snapshot close path
2017-05-17 15:36:53 -07:00
Xiang
32c252f003 etcdserver: more logging on snapshot close path 2017-05-17 14:48:52 -07:00
Anthony Romano
f4641accc3 Merge pull request #7943 from heyitsanthony/tcpproxy-init-msg
tcpproxy: display endpoints, not pointers, in ready to proxy string
2017-05-17 12:20:46 -07:00
Anthony Romano
b7cda38653 Merge pull request #7935 from heyitsanthony/bridge-latency
bridge: add tx-delay and rx-delay
2017-05-17 11:07:22 -07:00
Anthony Romano
5bd9b9614f tcpproxy: display endpoints, not pointers, in ready to proxy string
The switch to *net.SRV for endpoints caused the ready string to emit
pointers instead of endpoint strings.

Fixes #7942
2017-05-17 10:51:35 -07:00
Anthony Romano
201fd70afc Merge pull request #7934 from heyitsanthony/bench-rpc-mutex
benchmark: add rpc mutexes to stm benchmark
2017-05-17 10:44:00 -07:00
Gyu-Ho Lee
1763f7d4d1 Merge pull request #7919 from gyuho/log-dir
functional-tester: use log-dir as data-dir in etcd-agent
2017-05-16 13:46:57 -07:00
Anthony Romano
271785cd55 Merge pull request #7937 from heyitsanthony/e2e-close-timeout
e2e: Stop() lock/elect etcdctl process if Close times out
2017-05-16 12:34:36 -07:00
Anthony Romano
8f0d4092c3 e2e: Stop() lock/elect etcdctl process if Close times out
Gets backtrace by sending SIGQUIT if Close hangs after sending a SIGINT.
2017-05-16 11:31:23 -07:00
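A sketch of the stop path described above, assuming a waited channel that delivers the process's exit result:

```go
import (
	"os"
	"syscall"
	"time"
)

// stopWithBacktrace interrupts the process and, if it fails to exit in
// time, sends SIGQUIT so the Go runtime dumps goroutine stacks before
// dying, which is the backtrace the test wants.
func stopWithBacktrace(p *os.Process, waited <-chan error) error {
	p.Signal(syscall.SIGINT)
	select {
	case err := <-waited:
		return err
	case <-time.After(5 * time.Second):
		p.Signal(syscall.SIGQUIT) // runtime prints goroutine stacks on SIGQUIT
		return <-waited
	}
}
```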
Gyu-Ho Lee
c6219a209d Merge pull request #7933 from gyuho/travis
travis: test builds in other OSes
2017-05-15 22:25:52 -07:00
Anthony Romano
22db11f876 bridge: add tx-delay and rx-delay
Injects transmit and receive latencies.
2017-05-15 17:02:27 -07:00
Gyu-Ho Lee
d826f95c77 travis: test builds in other OSes
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-15 16:55:27 -07:00
Anthony Romano
b6e4858a25 benchmark: add rate limiting to stm 2017-05-15 15:42:54 -07:00
Anthony Romano
6526097bfc benchmark: add rpc locks to stm benchmark 2017-05-15 15:42:26 -07:00
Gyu-Ho Lee
3e7feb4033 Merge pull request #7931 from gyuho/aaa
pkg/osutil: fix missing 'syscall' import
2017-05-15 14:47:46 -07:00
Gyu-Ho Lee
fba225cee5 pkg/osutil: fix missing 'syscall' import
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-15 14:11:54 -07:00
Gyu-Ho Lee
95078c296d Merge pull request #7932 from gyuho/vet
*: remove unnecessary fmt.Sprint
2017-05-15 14:01:23 -07:00
Gyu-Ho Lee
e15020055e *: remove unnecessary fmt.Sprint
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-15 13:23:31 -07:00
Anthony Romano
74fd7709ad Merge pull request #7904 from heyitsanthony/osutil-exit
osutil: force SIG_DFL before resending terminating signal
2017-05-15 12:14:37 -07:00
Anthony Romano
31e3899663 Merge pull request #7925 from heyitsanthony/fix-windows-mmap
backend: force initial mmap size to 0 for windows
2017-05-13 21:42:58 -07:00
Anthony Romano
8516d8ccc5 backend: force initial mmap size to 0 for windows
boltdb on windows allocates a file with the full mmap size even if the
db is empty. Force the initial mmap size to 0 so there's no huge initial
db file on windows.

Fixes #7910
2017-05-12 14:34:07 -07:00
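A sketch of the platform switch, using boltdb's Options.InitialMmapSize:

```go
import (
	"runtime"

	"github.com/boltdb/bolt"
)

// openBackend forces InitialMmapSize to 0 on windows, where boltdb
// materializes the whole mmap size as file space up front; elsewhere a
// large initial mmap avoids remapping as the db grows.
func openBackend(path string, mmapBytes int) (*bolt.DB, error) {
	opts := &bolt.Options{InitialMmapSize: mmapBytes}
	if runtime.GOOS == "windows" {
		opts.InitialMmapSize = 0
	}
	return bolt.Open(path, 0600, opts)
}
```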
Anthony Romano
6ce9aed8c5 Merge pull request #7881 from heyitsanthony/testctl-logging
e2e: more debugging output for lock and elect tests
2017-05-12 12:01:08 -07:00
Anthony Romano
7a1739a3e8 osutil: force SIG_DFL before resending terminating signal
The go runtime won't always reinstall the default signal handler on the
SIGTERM path, so it's possible the signal won't terminate the process.
Instead, force SIG_DFL for the signal.
2017-05-12 11:56:27 -07:00
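The mechanism, sketched with the standard library (unix-only):

```go
import (
	"os/signal"
	"syscall"
)

// exitBySignal restores the default handler and re-raises the signal,
// so the process dies with the correct signal exit status.
func exitBySignal(sig syscall.Signal) {
	signal.Reset(sig)                   // back to SIG_DFL
	syscall.Kill(syscall.Getpid(), sig) // default action now terminates
}
```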
Anthony Romano
5b4677b7d7 integration: reset default logging level in TestRestartRemoved 2017-05-12 10:22:29 -07:00
Anthony Romano
b9f5a00b13 e2e: more debugging output for lock and elect etcdctl tests
Meant to debug #6464 and #6934

Dumps the output from the etcd/etcdctl servers and SIGQUITs to get a
golang backtrace in case of a hung process.
2017-05-12 10:22:29 -07:00
Anthony Romano
90893735cf Merge pull request #7917 from heyitsanthony/refactor-backend-paths
snap, etcdserver: tighten up snapshot path handling
2017-05-12 09:33:37 -07:00
Gyu-Ho Lee
2e3d27e910 functional-tester: use log-dir as data-dir in etcd-agent
Persistent data should be configured on the agent side.
There is no need to specify the data-dir on the tester side.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-12 08:30:46 -07:00
fanmin shi
f337754e72 Merge pull request #7914 from fanminshi/doc_snap_warning
*: faq for snapshot warning and dynamically determining snapshotWarningTimeout
2017-05-11 16:48:12 -07:00
Gyu-Ho Lee
aa58aff18c Merge pull request #7918 from gyuho/archive-path
etcd-agent: store failure_archive in log dir
2017-05-11 16:34:43 -07:00
Gyu-Ho Lee
0bcab05465 etcd-agent: store failure_archive in log dir
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-11 16:30:04 -07:00
Anthony Romano
71d7c85b6b expect: reload DEBUG_EXPECT for each process
Lets e2e test cases selectively turn on expect debugging to get
full application output written to stdout.
2017-05-11 16:09:31 -07:00
fanmin shi
16e92d1379 faq: explains "snapshotting is taking more..." warning 2017-05-11 15:25:44 -07:00
fanmin shi
8468b38631 backend: dynamically set snapshotWarningTimeout based on db size 2017-05-11 15:25:35 -07:00
Anthony Romano
7a65cb5847 Merge pull request #7916 from heyitsanthony/snip-extra-doc
clientv3: remove duplicate documentation for Do()
2017-05-11 14:45:35 -07:00
Anthony Romano
f6cd4d4f5b snap, etcdserver: tighten up snapshot path handling
Computing the snapshot file path is error prone; snapshot recovery was
constructing file paths missing a path separator so the snapshot
would never be loaded. Instead, refactor the backend path handling
to use helper functions where possible.
2017-05-11 13:46:59 -07:00
Anthony Romano
63c7e9f840 clientv3: remove duplicate documentation for Do() 2017-05-11 13:25:26 -07:00
Gyu-Ho Lee
f63eb2f6a4 Merge pull request #7913 from gyuho/srv
pkg/srv: fix error checks from resolveTCPAddr
2017-05-11 12:12:01 -07:00
Gyu-Ho Lee
3505c254e1 pkg/srv: fix error checks from resolveTCPAddr
So that 'terr' can be returned later.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-11 10:53:03 -07:00
Anthony Romano
386374a6d0 Merge pull request #7908 from heyitsanthony/concurrency-proxy
grpcproxy: forward v3lock and v3election requests
2017-05-10 16:41:06 -07:00
fanmin shi
066062a5e0 Merge pull request #7902 from fanminshi/fix_runner
etcd-runner: remove mutex on validate() and release() in global.go
2017-05-10 13:12:09 -07:00
Anthony Romano
00da3ca725 integration: add lock and election services to proxy tests 2017-05-10 13:06:27 -07:00
Anthony Romano
713e006bc6 adapter: adapters for lock and election services 2017-05-10 12:51:05 -07:00
Anthony Romano
fd01db9e60 grpcproxy, etcdmain: add lock and election services to proxy 2017-05-10 12:19:09 -07:00
fanmin shi
b44bd6d2a9 etcd-runner: fix race on nextc 2017-05-10 11:21:17 -07:00
fanmin shi
47f5b7c3ad Merge pull request #7876 from fanminshi/fix_7628
etcdserver: renaming db happens after snapshot persists to wal and snap files
2017-05-09 16:15:41 -07:00
fanmin shi
87d99fe038 etcd-runner: remove mutex on validate() and release() in global.go
The election runner can deadlock in an atomic release().

Suppose the election runner has two clients, A and B.
If A is the leader and B is a follower, B obtains the lock
for release() and waits for A to close(nextc), which signals that
the next round is ready. However, A can only close(nextc) if it
obtains the lock for release(); hence deadlock.

This PR removes the atomicity of validate() and release() in global.go
and gives the responsibility of locking to each runner.

FIXES #7891
2017-05-09 15:38:13 -07:00
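An illustrative reduction of the deadlock shape, deliberately runnable (it trips Go's deadlock detector by design):

```go
package main

import "sync"

// The follower holds the lock while waiting on nextc, but the leader
// needs that same lock before it can close(nextc).
func main() {
	var mu sync.Mutex
	nextc := make(chan struct{})
	held := make(chan struct{})

	go func() { // follower: blocks inside release()
		mu.Lock()
		defer mu.Unlock()
		close(held)
		<-nextc // waits for the leader's round signal while holding mu
	}()

	<-held    // the follower now owns mu
	mu.Lock() // leader: can never acquire mu, so close(nextc) is unreachable
	close(nextc)
	mu.Unlock()
}
```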
fanmin shi
dfdaf082c5 etcdserver: add a test to ensure renaming db happens before persisting wal and snap files 2017-05-09 14:00:22 -07:00
fanmin shi
8b7b7222dd etcdserver: renaming db happens after snapshot persists to wal and snap files
In the case that a follower receives a snapshot from the leader
and crashes before renaming xxx.snap.db to db, but after the
snapshot has been persisted to .wal and .snap, restarting the
follower results in loading the old db with the new .wal and .snap.
This causes an index mismatch between the snap metadata index
and the consistent index from the db.

This PR forces an ordering where saving/renaming the db must
happen after the snapshot is persisted to the wal and snap files.
This guarantees the wal and snap files are newer than the db.
On server restart, the etcd server checks if snap index > db consistent index;
if so, it attempts to load xxx.snap.db (where xxx = snap index)
if one exists, and panics otherwise.

FIXES #7628
2017-05-09 14:00:12 -07:00
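The ordering, sketched with a stand-in for the .wal/.snap writer:

```go
import "os"

// saveSnapshotDB persists the snapshot metadata first and only then
// promotes the temp db file, so wal/snap are never older than the db.
// persistWALAndSnap stands in for writing the .wal and .snap files.
func saveSnapshotDB(persistWALAndSnap func() error, tmpDBPath, dbPath string) error {
	if err := persistWALAndSnap(); err != nil { // 1. .wal and .snap on disk
		return err
	}
	return os.Rename(tmpDBPath, dbPath) // 2. rename xxx.snap.db -> db last
}
```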
Xiang Li
a53a9e167e Merge pull request #7898 from yudai/nit_remove_dup
v3rpc: remove duplicated error case for lease.ErrLeaseNotFound
2017-05-09 12:35:31 -07:00
Xiang Li
b8875515a4 Merge pull request #7890 from yudai/keep_ka_loop_running
clientv3: Do not stop keep alive loop on server side errors
2017-05-09 11:00:21 -07:00
Gyu-Ho Lee
01a985eda5 Merge pull request #7897 from gyuho/bom
scripts: add 'BOM' update script
2017-05-09 10:52:42 -07:00
Iwasaki Yudai
010ffc0692 v3rpc: remove duplicated error case for lease.ErrLeaseNotFound 2017-05-08 20:09:41 -07:00
Gyu-Ho Lee
8c9f01ef53 scripts: add 'BOM' update script
Need this script when we add external dependencies.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-08 17:59:11 -07:00
Iwasaki Yudai
aa85b0cea7 clientv3: Do not stop keep alive loop on server side errors 2017-05-08 15:47:34 -07:00
Anthony Romano
aac2292ab5 Merge pull request #7882 from heyitsanthony/srv-priority
gateway: DNS SRV priority
2017-05-08 14:17:04 -07:00
Gyu-Ho Lee
3a2e7653f2 Merge pull request #7879 from gyuho/http-server
embed: gracefully close peer handler
2017-05-08 14:00:45 -07:00
Anthony Romano
c232814003 etcdmain, tcpproxy: srv-priority policy
Adds DNS SRV weighting and priorities to gateway.

Partially addresses #4378
2017-05-08 11:35:18 -07:00
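A sketch of SRV-based endpoint discovery; the service name follows etcd's `_etcd-client._tcp` convention, and net.LookupSRV already returns records sorted by priority and randomized by weight (RFC 2782), so honoring SRV priority is largely a matter of preserving that order:

```go
import (
	"fmt"
	"net"
)

// discoverEndpoints resolves gateway endpoints from DNS SRV records,
// keeping the resolver's priority/weight ordering.
func discoverEndpoints(domain string) ([]string, error) {
	_, srvs, err := net.LookupSRV("etcd-client", "tcp", domain)
	if err != nil {
		return nil, err
	}
	eps := make([]string, 0, len(srvs))
	for _, s := range srvs {
		eps = append(eps, net.JoinHostPort(s.Target, fmt.Sprintf("%d", s.Port)))
	}
	return eps, nil
}
```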
fanmin shi
2655540481 Merge pull request #7892 from fanminshi/add_snashot_duration_metric
backend: add prometheus metric for large snapshot duration.
2017-05-08 11:22:51 -07:00
Xiang Li
25eef5a6e4 Merge pull request #7893 from philips/readme-tagline
README: use the same tagline from github
2017-05-08 09:11:08 -07:00
Gyu-Ho Lee
7d21d6c894 embed: gracefully close peer handlers on shutdown
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-06 07:47:23 -07:00
Xiang Li
af7d051019 Merge pull request #7885 from luedigernet/fix-TestEvent
Fix watch_test.go TestEvent
2017-05-05 23:31:59 -07:00
Brandon Philips
90af2ff302 README: use the same tagline from github
Just be consistent with the messaging and use of etcd
2017-05-05 18:07:26 -07:00
fanmin shi
230106dd3c backend: add prometheus metric for large snapshot duration.
FIXES #7878
2017-05-05 17:27:33 -07:00
Luediger Reinhard
8b081ce9b3 clientv3: check IsModify
Fix watch_test.go TestEvent

Prior to this fix, the isModify case of the table-driven test was never checked.
2017-05-05 19:39:59 +02:00
Anthony Romano
07ad18178d pkg/srv: package for SRV utilities
Trying to decouple the v2 client from SRV code. Can't move
into discovery/ since that creates a circular dependency. So,
give up and move all the SRV code into a new package.
2017-05-05 09:27:59 -07:00
Xiang Li
db6f45e939 Merge pull request #7830 from aaronlehmann/new-nodes-start-active
raft: Set the RecentActive flag for newly added nodes
2017-05-05 08:59:25 -07:00
fanmin shi
1f8de1aab0 Merge pull request #7877 from fanminshi/warning_on_snapshotting
backend: print snapshotting duration warning every 30s
2017-05-04 18:03:47 -07:00
fanmin shi
f7f30f2361 backend: print snapshotting duration warning every 30s
FIXES #7870
2017-05-04 16:41:03 -07:00
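A sketch of the periodic warning, assuming a donec channel closed when the snapshot send finishes:

```go
import (
	"log"
	"time"
)

// warnLongSnapshot logs every 30 seconds for as long as the snapshot
// is in flight, rather than warning once, so operators can see that a
// long transfer is still progressing.
func warnLongSnapshot(donec <-chan struct{}) {
	start := time.Now()
	t := time.NewTicker(30 * time.Second)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			log.Printf("snapshotting is taking more than %v to finish ...", time.Since(start))
		case <-donec:
			return
		}
	}
}
```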
Aaron Lehmann
9451fa1f9c raft: Add unit test TestAddNodeCheckQuorum
This test verifies that adding a node does not cause the leader to step
down until at least one full ElectionTick cycle elapses.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
2017-05-04 15:04:30 -07:00
Xiang Li
c3b96f8a69 Merge pull request #7875 from yudai/compact_every_time
compactor: Make periodic compactor run every hour
2017-05-04 13:24:27 -07:00
Iwasaki Yudai
60dbad5a85 compactor: Make periodic compactor run every hour
Closes #7868.
2017-05-04 10:32:51 -07:00
Gyu-Ho Lee
505bf8c708 Merge pull request #7864 from gyuho/doc-link-fixes
*: run 'marker' in CI
2017-05-04 09:14:06 -07:00
Anthony Romano
2e32d2142d Merge pull request #7869 from heyitsanthony/fix-lease-require-leader-test
clientv3/integration: drain keepalives before waiting for leader loss
2017-05-04 08:29:16 -07:00
Gyu-Ho Lee
282c6fd17d Documentation: remove '[]' from '[DEPRECATED]'
To make 'marker' pass the tests

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-04 08:26:01 -07:00
Gyu-Ho Lee
c2959c998f test: run 'marker' to find broken links
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-04 08:26:00 -07:00
Gyu-Ho Lee
e9a63473a0 scripts,travis: install 'marker' for CI tests
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-04 08:26:00 -07:00
Gyu-Ho Lee
7f05e220a4 Merge pull request #7874 from gyuho/scripts
integration/fixtures-expired: do not force 'rm'
2017-05-03 19:39:00 -07:00
Gyu-Ho Lee
4edbae4a91 integration/fixtures-expired: do not force 'rm'
To make the gencerts.sh script safer.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 18:45:44 -07:00
Gyu-Ho Lee
3b251b0ed3 Merge pull request #7871 from gyuho/fix-doc-2
*: fix broken links in markdown
2017-05-03 16:58:38 -07:00
Gyu-Ho Lee
4203320d04 *: fix other broken links in markdown
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 16:57:44 -07:00
Gyu-Ho Lee
feb930e357 Documentation/v3: fix broken links
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 16:57:38 -07:00
Gyu-Ho Lee
e4e057f8f7 Documentation/v2: fix broken links
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 15:37:53 -07:00
Anthony Romano
9fee35b02d Merge pull request #7842 from heyitsanthony/fix-switch-race
clientv3: don't race on upc/downc/switch endpoints in balancer
2017-05-03 13:48:00 -07:00
Anthony Romano
f6d0dda187 clientv3/integration: drain keepalives before waiting for leader loss
500ms keepalive delay on proxy side causes client to sometimes send
a second keepalive since it waits more than 500ms for the first response.

Fixes #7658
2017-05-03 13:22:45 -07:00
Anthony Romano
8f40517adb integration: close proxy's lease client 2017-05-03 13:22:24 -07:00
Gyu-Ho Lee
61c5a0c6ae Merge pull request #7867 from gyuho/fix-tls-test
integration: clean up TLS reload tests, fix no-file while renaming
2017-05-03 12:43:41 -07:00
Gyu-Ho Lee
85fa594265 integration: clean up TLS reload tests, fix no-file while renaming
Fix https://github.com/coreos/etcd/issues/7865.

It is also possible to have a mismatched key file
while renaming directories.

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-03 11:59:09 -07:00
Gyu-Ho Lee
c2d6a92b01 Merge pull request #7853 from gyuho/revert
Documentation/upgrades: revert KeepAlive interface change
2017-05-03 11:04:15 -07:00
Anthony Romano
24e85b2454 Merge pull request #7852 from heyitsanthony/revert-lease-err-ka
Revert "Merge pull request #7732 from heyitsanthony/lease-err-ka"
2017-05-03 11:03:17 -07:00
Anthony Romano
27b3bf230b Merge pull request #7863 from heyitsanthony/stm-apis
concurrency: provide old STM functions as deprecated
2017-05-03 10:19:13 -07:00
fanmin shi
de2e959b27 Merge pull request #7856 from fanminshi/fix_consistent_index_update
etcdserver: apply() sets consistIndex for any entry type
2017-05-03 09:07:16 -07:00
Anthony Romano
31d5d610fc concurrency: provide old STM functions as deprecated
semver
2017-05-03 02:07:01 -07:00
fanmin shi
e33b10a666 etcdserver: add a test to ensure config change also update ConsistIndex 2017-05-02 16:51:40 -07:00
Anthony Romano
61abf25859 integration: close accepted connection on stopc path
Connection pausing added another exit condition in the listener
path, causing the bridge to leak connections instead of closing
them when signalled to close. Also adds some additional Close
paranoia.

Fixes #7823
2017-05-02 16:46:43 -07:00
Anthony Romano
43e5f892f6 clientv3: don't race on upc/downc/switch endpoints in balancer
If the balancer update notification loop starts with a downed
connection and endpoints are switched while the old connection is up,
the balancer can potentially wait forever for an up connection without
refreshing the connections to reflect the current endpoints.

Instead, fetch upc/downc together, only caring about a single transition,
either from down->up or up->down, for each iteration.

Simple way to reproduce failures: add time.Sleep(time.Second) to the
beginning of the update notification loop.
2017-05-02 16:43:24 -07:00
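A sketch of the fixed loop shape, on an illustrative notifier struct rather than the balancer's actual fields:

```go
import "sync"

type connNotifier struct {
	mu         sync.Mutex
	upc, downc chan struct{}
}

// waitOneTransition snapshots both channels under the lock, then waits
// for exactly one transition (down->up or up->down) per iteration;
// re-reading the fields separately is what allowed the race above.
func (n *connNotifier) waitOneTransition() {
	n.mu.Lock()
	upc, downc := n.upc, n.downc
	n.mu.Unlock()

	select {
	case <-upc: // connection came up; caller may refresh endpoints
	case <-downc: // connection went down; caller waits for the next up
	}
}
```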
fanmin shi
5533c3058a etcdserver: apply() sets consistIndex for any entry type
Previously, apply() didn't set consistIndex for the EntryConfChange type.
This causes a misalignment between consistIndex and the applied index,
where an EntryConfChange entry sets the applied index but not consistIndex.

Suppose addMember() is called and the leader reflects that change:
1. The applied index and consistIndex are now misaligned.
2. A new follower node joins.
3. The leader sends the snapshot to the follower,
	where the applied index is the snapshot metadata index.
4. The follower node saves the snapshot and database (which includes consistIndex) from the leader.
5. Restarting the follower loads the snapshot and database.
6. The follower checks the snapshot metadata index (same as the applied index) against the database consistIndex,
	finds they don't match, and then panics.

FIXES #7834
2017-05-02 14:57:36 -07:00
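The shape of the fix, sketched with stand-in callbacks for the etcdserver internals:

```go
import "github.com/coreos/etcd/raft/raftpb"

// applyAll advances the consistent index for every entry type, keeping
// it aligned with the applied index; setIndex and apply stand in for
// etcdserver's consistIndex setter and entry dispatch.
func applyAll(es []raftpb.Entry, setIndex func(uint64), apply func(raftpb.Entry)) {
	for _, e := range es {
		setIndex(e.Index) // previously skipped for EntryConfChange
		apply(e)
	}
}
```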
Gyu-Ho Lee
72d2adca62 Merge pull request #7854 from gyuho/lease-retry
integration: ensure revoke completes before TimeToLive
2017-05-02 12:56:56 -07:00
Gyu-Ho Lee
01b6cdf13d integration: ensure revoke completes before TimeToLive
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-02 12:56:26 -07:00
Xiang Li
24f0423088 Merge pull request #7855 from tessr/master
raft: add chain core to notable users list
2017-05-02 11:30:03 -07:00
Tess Rinearson
3d504737e4 add chain core to raft users list 2017-05-02 11:23:25 -07:00
Gyu-Ho Lee
bb42ba5f4e Documentation/upgrades: revert KeepAlive interface change
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-02 09:45:06 -07:00
Anthony Romano
6dd8fb6f24 Revert "Merge pull request #7732 from heyitsanthony/lease-err-ka"
This reverts commit fbbc4a4979, reversing
changes made to f254e38385.

Fixes #7851
2017-05-02 09:36:16 -07:00
Gyu-Ho Lee
fdf445b5a0 Merge pull request #7848 from gyuho/close-grpcc
embed: fix blocking Close before gRPC server start
2017-05-01 18:44:20 -07:00
Anthony Romano
f065d8e258 Merge pull request #7845 from heyitsanthony/single-node-docker
Documentation: add documentation for single node docker etcd
2017-05-01 16:42:19 -07:00
Gyu-Ho Lee
b0e9d24fb6 embed: fix blocking Close before gRPC server start
If 'StartEtcd' returns before starting the gRPC server
(e.g. snapshot mismatch, misconfiguration),
receiving from grpcServerC blocks forever. This patch
just closes the channel so Close does not block on grpcServerC
and proceeds to the next stop operations.

This was masking the issues in https://github.com/coreos/etcd/issues/7834

Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
2017-05-01 16:41:13 -07:00
Anthony Romano
b1720b779c Merge pull request #7846 from heyitsanthony/build-aci-annotate
scripts: annotate ACI with supports-systemd-notify via acbuild
2017-05-01 16:04:03 -07:00
Anthony Romano
6c1ce697a6 scripts: annotate ACI with supports-systemd-notify via acbuild
Fixes #7840
2017-05-01 12:59:08 -07:00
Anthony Romano
3f1f5e5215 Merge pull request #7844 from heyitsanthony/v2-docker-tag
Documentation/v2: pin docker guide to use latest 2.3.x
2017-05-01 12:54:03 -07:00
Anthony Romano
b8f08d400d Documentation: add documentation for single node docker etcd
Fixes #7843
2017-05-01 12:36:16 -07:00
Anthony Romano
066f9bf7e3 Documentation/v2: pin docker guide to use latest 2.3.x 2017-05-01 11:46:39 -07:00
Gyu-Ho Lee
f0ca65a95d version: bump up to 3.2.0-rc.0+git 2017-04-28 11:06:53 -07:00
Aaron Lehmann
52613b262b raft: Set the RecentActive flag for newly added nodes
I found that enabling the CheckQuorum flag led to spurious leader
elections when new nodes joined. It looks like in the time between a new
node joining the cluster, and that node first communicating with the
leader, the quorum check could fail because the new node looks inactive.
To solve this, set the RecentActive flag when nodes are first added.
This gives a grace period for the node to communicate before it causes
the quorum check to fail.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
2017-04-27 11:19:29 -07:00
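A standalone model of the behavior (the real Progress type lives in the raft package):

```go
// progress models per-peer liveness tracking in the raft leader.
type progress struct {
	next         uint64
	recentActive bool
}

type tracker map[uint64]*progress

// addNode registers a new peer as recently active, giving it a grace
// period before the next CheckQuorum tick can count it as dead.
func (t tracker) addNode(id, lastIndex uint64) {
	t[id] = &progress{next: lastIndex + 1, recentActive: true}
}

// quorumActive reports whether a majority of peers look alive.
func (t tracker) quorumActive() bool {
	act := 0
	for _, pr := range t {
		if pr.recentActive {
			act++
		}
	}
	return act >= len(t)/2+1
}
```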
204 changed files with 4569 additions and 1926 deletions

View File

@@ -4,7 +4,7 @@ go_import_path: github.com/coreos/etcd
sudo: false
go:
- 1.8.1
- 1.8.3
- tip
notifications:
@@ -14,6 +14,8 @@ notifications:
env:
matrix:
- TARGET=amd64
- TARGET=darwin-amd64
- TARGET=windows-amd64
- TARGET=arm64
- TARGET=arm
- TARGET=386
@@ -24,6 +26,10 @@ matrix:
allow_failures:
- go: tip
exclude:
- go: tip
env: TARGET=darwin-amd64
- go: tip
env: TARGET=windows-amd64
- go: tip
env: TARGET=arm
- go: tip
@@ -35,10 +41,13 @@ matrix:
addons:
apt:
sources:
- debian-sid
packages:
- libpcap-dev
- libaspell-dev
- libhunspell-dev
- shellcheck
before_install:
- go get -v -u github.com/chzchzchz/goword
@@ -46,6 +55,7 @@ before_install:
- go get -v -u honnef.co/go/tools/cmd/gosimple
- go get -v -u honnef.co/go/tools/cmd/unused
- go get -v -u honnef.co/go/tools/cmd/staticcheck
- ./scripts/install-marker.sh amd64
# disable godep restore override
install:
@@ -57,6 +67,12 @@ script:
amd64)
GOARCH=amd64 ./test
;;
darwin-amd64)
GO_BUILD_FLAGS="-a -v" GOPATH="" GOOS=darwin GOARCH=amd64 ./build
;;
windows-amd64)
GO_BUILD_FLAGS="-a -v" GOPATH="" GOOS=windows GOARCH=amd64 ./build
;;
386)
GOARCH=386 PASSES="build unit" ./test
;;

View File

@@ -6,7 +6,7 @@ This is a generated documentation. Please read the proto files for more.
##### service `Lock` (etcdserver/api/v3lock/v3lockpb/v3lock.proto)
for grpc-gateway The lock service exposes client-side locking facilities as a gRPC interface.
The lock service exposes client-side locking facilities as a gRPC interface.
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
@@ -51,7 +51,7 @@ for grpc-gateway The lock service exposes client-side locking facilities as a gR
##### service `Election` (etcdserver/api/v3election/v3electionpb/v3election.proto)
for grpc-gateway The election service exposes client-side election facilities as a gRPC interface.
The election service exposes client-side election facilities as a gRPC interface.
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |

View File

@@ -38,6 +38,15 @@ curl -L http://localhost:2379/v3alpha/kv/put \
# {"result":{"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"2","raft_term":"2"},"events":[{"kv":{"key":"Zm9v","create_revision":"2","mod_revision":"2","version":"1","value":"YmFy"}}]}}
```
Use `curl` to issue a transaction:
```bash
curl -L http://localhost:2379/v3alpha/kv/txn \
-X POST \
-d '{"compare":[{"target":"CREATE","key":"Zm9v","createRevision":"2"}],"success":[{"requestPut":{"key":"Zm9v","value":"YmFy"}}]}'
# {"header":{"cluster_id":"12585971608760269493","member_id":"13847567121247652255","revision":"3","raft_term":"2"},"succeeded":true,"responses":[{"response_put":{"header":{"revision":"3"}}}]}
```
## Swagger
Generated [Swagger][swagger] API definitions can be found at [rpc.swagger.json][swagger-doc].

View File

@@ -40,8 +40,6 @@ This is a generated documentation. Please read the proto files for more.
##### service `KV` (etcdserver/etcdserverpb/rpc.proto)
for grpc-gateway
| Method | Request Type | Response Type | Description |
| ------ | ------------ | ------------- | ----------- |
| Range | RangeRequest | RangeResponse | Range gets the keys in the range from the key-value store. |
@@ -94,8 +92,6 @@ for grpc-gateway
##### message `AlarmRequest` (etcdserver/etcdserverpb/rpc.proto)
default, used to query if any alarm is active space quota is exhausted
| Field | Description | Type |
| ----- | ----------- | ---- |
| action | action is the kind of alarm request to issue. The action may GET alarm statuses, ACTIVATE an alarm, or DEACTIVATE a raised alarm. | AlarmAction |
@@ -637,7 +633,7 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | default, no sorting lowest target value first highest target value first key is the first key for the range. If range_end is not given, the request only looks up key. | bytes |
| key | key is the first key for the range. If range_end is not given, the request only looks up key. | bytes |
| range_end | range_end is the upper bound on the requested range [key, range_end). If range_end is '\0', the range is all keys >= key. If range_end is key plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"), then the range request gets all keys prefixed with key. If both key and range_end are '\0', then the range request returns all keys. | bytes |
| limit | limit is a limit on the number of keys returned for the request. When limit is set to 0, it is treated as no limit. | int64 |
| revision | revision is the point-in-time of the key-value store to use for the range. If revision is less or equal to zero, the range is over the newest key-value store. If the revision has been compacted, ErrCompacted is returned as a response. | int64 |
@@ -770,7 +766,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| range_end | range_end is the end of the range [key, range_end) to watch. If range_end is not given, only the key argument is watched. If range_end is equal to '\0', all keys greater than or equal to the key argument are watched. If the range_end is one bit larger than the given key, then all keys with the prefix (the given key) will be watched. | bytes |
| start_revision | start_revision is an optional revision to watch from (inclusive). No start_revision is "now". | int64 |
| progress_notify | progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server may decide how often it will send notifications based on current load. | bool |
| filters | filter out put event. filter out delete event. filters filter the events at server side before it sends back to the watcher. | (slice of) FilterType |
| filters | filters filter the events at server side before it sends back to the watcher. | (slice of) FilterType |
| prev_kv | If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned. | bool |
@@ -794,6 +790,7 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| created | created is set to true if the response is for a create watch request. The client should record the watch_id and expect to receive events for the created watcher from the same stream. All events sent to the created watcher will attach with the same watch_id. | bool |
| canceled | canceled is set to true if the response is for a cancel watch request. No further events will be sent to the canceled watcher. | bool |
| compact_revision | compact_revision is set to the minimum index if a watcher tries to watch at a compacted index. This happens when creating a watcher at a compacted revision or the watcher cannot catch up with the progress of the key-value store. The client should treat the watcher as canceled and should not try to create any watcher with the same start_revision again. | int64 |
| cancel_reason | cancel_reason indicates the reason for canceling the watcher. | string |
| events | | (slice of) mvccpb.Event |

View File

@@ -2179,6 +2179,10 @@
"format": "int64",
"description": "compact_revision is set to the minimum index if a watcher tries to watch\nat a compacted index.\n\nThis happens when creating a watcher at a compacted revision or the watcher cannot\ncatch up with the progress of the key-value store. \n\nThe client should treat the watcher as canceled and should not try to create any\nwatcher with the same start_revision again."
},
"cancel_reason": {
"type": "string",
"description": "cancel_reason indicates the reason for canceling the watcher."
},
"events": {
"type": "array",
"items": {

View File

@@ -3,7 +3,7 @@
etcd uses the [capnslog][capnslog] library for logging application output categorized into *levels*. A log message's level is determined according to these conventions:
* Error: Data has been lost, a request has failed for a bad reason, or a required resource has been lost
* Examples:
* Examples:
* A failure to allocate disk space for WAL
* Warning: (Hopefully) Temporary conditions that may cause errors, but may work fine. A replica disappearing (that may reconnect) is a warning.
@@ -26,4 +26,4 @@ etcd uses the [capnslog][capnslog] library for logging application output catego
* Send a normal message to a remote peer
* Write a log entry to disk
[capnslog]: [https://github.com/coreos/pkg/tree/master/capnslog]
[capnslog]: https://github.com/coreos/pkg/tree/master/capnslog

View File

@@ -42,6 +42,7 @@ Administrators who need to create reliable and scalable key-value stores for the
- [Supported systems][supported_platforms]
- [Docker container][container_docker]
- [Container Linux, systemd][container_linux_platform]
- [rkt container][container_rkt]
- [Amazon Web Services][aws_platform]
- [FreeBSD][freebsd_platform]
@@ -101,6 +102,7 @@ Answers to [common questions] about etcd.
[understand_apis]: learning/api.md
[versioning]: op-guide/versioning.md
[supported_platforms]: op-guide/supported-platform.md
[container_linux_platform]: platforms/container-linux-systemd.md
[freebsd_platform]: platforms/freebsd.md
[aws_platform]: platforms/aws.md
[experimental]: dev-guide/experimental_apis.md

View File

@@ -78,10 +78,26 @@ On the other hand, if the downed member is removed from cluster membership first
etcd sets `strict-reconfig-check` in order to reject reconfiguration requests that would cause quorum loss. Abandoning quorum is really risky (especially when the cluster is already unhealthy). Although it may be tempting to disable quorum checking if there's quorum loss to add a new member, this could lead to full fledged cluster inconsistency. For many applications, this will make the problem even worse ("disk geometry corruption" being a candidate for most terrifying).
### Why does etcd lose its leader from disk latency spikes?
#### Why does etcd lose its leader from disk latency spikes?
This is intentional; disk latency is part of leader liveness. Suppose the cluster leader takes a minute to fsync a raft log update to disk, but the etcd cluster has a one second election timeout. Even though the leader can process network messages within the election interval (e.g., send heartbeats), it's effectively unavailable because it can't commit any new proposals; it's waiting on the slow disk. If the cluster frequently loses its leader due to disk latencies, try [tuning][tuning] the disk settings or etcd time parameters.
#### What does the etcd warning "request ignored (cluster ID mismatch)" mean?
Every new etcd cluster generates a new cluster ID based on the initial cluster configuration and a user-provided unique `initial-cluster-token` value. By having unique cluster ID's, etcd is protected from cross-cluster interaction which could corrupt the cluster.
Usually this warning happens after tearing down an old cluster, then reusing some of the peer addresses for the new cluster. If any etcd process from the old cluster is still running it will try to contact the new cluster. The new cluster will recognize a cluster ID mismatch, then ignore the request and emit this warning. This warning is often cleared by ensuring peer addresses among distinct clusters are disjoint.
#### What does "mvcc: database space exceeded" mean and how do I fix it?
The [multi-version concurrency control][api-mvcc] data model in etcd keeps an exact history of the keyspace. Without periodically compacting this history (e.g., by setting `--auto-compaction`), etcd will eventually exhaust its storage space. If etcd runs low on storage space, it raises a space quota alarm to protect the cluster from further writes. So long as the alarm is raised, etcd responds to write requests with the error `mvcc: database space exceeded`.
To recover from the low space quota alarm:
1. [Compact][maintenance-compact] etcd's history.
2. [Defragment][maintenance-defragment] every etcd endpoint.
3. [Disarm][maintenance-disarm] the alarm.
### Performance
#### How should I benchmark etcd?
@@ -112,11 +128,10 @@ A slow network can also cause this issue. If network metrics among the etcd mach
If none of the above suggestions clear the warnings, please [open an issue][new_issue] with detailed logging, monitoring, metrics and optionally workload information.
#### What does the etcd warning "request ignored (cluster ID mismatch)" mean?
#### What does the etcd warning "snapshotting is taking more than x seconds to finish ..." mean?
Every new etcd cluster generates a new cluster ID based on the initial cluster configuration and a user-provided unique `initial-cluster-token` value. By having unique cluster ID's, etcd is protected from cross-cluster interaction which could corrupt the cluster.
etcd sends a snapshot of its complete key-value store to refresh slow followers and for [backups][backup]. Slow snapshot transfer times increase MTTR; if the cluster is ingesting data with high throughput, slow followers may livelock by needing a new snapshot before finishing receiving a snapshot. To catch slow snapshot performance, etcd warns when sending a snapshot takes more than thirty seconds and exceeds the expected transfer time for a 1Gbps connection.
Usually this warning happens after tearing down an old cluster, then reusing some of the peer addresses for the new cluster. If any etcd process from the old cluster is still running it will try to contact the new cluster. The new cluster will recognize a cluster ID mismatch, then ignore the request and emit this warning. This warning is often cleared by ensuring peer addresses among distinct clusters are disjoint.
[hardware-setup]: ./op-guide/hardware.md
[supported-platform]: ./op-guide/supported-platform.md
@@ -130,3 +145,7 @@ Usually this warning happens after tearing down an old cluster, then reusing som
[runtime reconfiguration]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/runtime-configuration.md
[benchmark]: https://github.com/coreos/etcd/tree/master/tools/benchmark
[benchmark-result]: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/performance.md
[api-mvcc]: learning/api.md#revisions
[maintenance-compact]: op-guide/maintenance.md#history-compaction
[maintenance-defragment]: op-guide/maintenance.md#defragmentation
[maintenance-disarm]: ../etcdctl/README.md#alarm-disarm

View File

@@ -348,7 +348,7 @@ message Event {
Watches are long-running requests and use gRPC streams to stream event data. A watch stream is bi-directional; the client writes to the stream to establish watches and reads to receive watch event. A single watch stream can multiplex many distinct watches by tagging events with per-watch identifiers. This multiplexing helps reducing the memory footprint and connection overhead on the core etcd cluster.
Watches make three guarantees about events:
* Ordered - events are ordered by revision; an event will never appear on a watch if it precedes an event in time that has already already been posted.
* Ordered - events are ordered by revision; an event will never appear on a watch if it precedes an event in time that has already been posted.
* Reliable - a sequence of events will never drop any subsequence of events; if there are events ordered in time as a < b < c, then if the watch receives events a and c, it is guaranteed to receive b.
* Atomic - a list of events is guaranteed to encompass complete revisions; updates in the same revision over multiple keys will not be split over several lists of events.

View File

@@ -185,7 +185,10 @@ To start etcd automatically using custom settings at startup in Linux, using a [
The security flags help to [build a secure etcd cluster][security].
### --ca-file [DEPRECATED]
### --ca-file
**DEPRECATED**
+ Path to the client server TLS CA file. `--ca-file ca.crt` could be replaced by `--trusted-ca-file ca.crt --client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_CA_FILE
@@ -215,7 +218,10 @@ The security flags help to [build a secure etcd cluster][security].
+ default: false
+ env variable: ETCD_AUTO_TLS
### --peer-ca-file [DEPRECATED]
### --peer-ca-file
**DEPRECATED**
+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` could be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth` and etcd will perform the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
@@ -299,7 +305,7 @@ Follow the instructions when using these flags.
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
[proxy]: ../v2/proxy.md
[restore]: ../v2/admin_guide.md#restoring-a-backup
[security]: security.md

View File

@@ -68,6 +68,37 @@ Production clusters which refer to peers by DNS name known to the local resolver
In order to expose the etcd API to clients outside of Docker host, use the host IP address of the container. Please see [`docker inspect`](https://docs.docker.com/engine/reference/commandline/inspect) for more detail on how to get the IP address. Alternatively, specify `--net=host` flag to `docker run` command to skip placing the container inside of a separate network stack.
### Running a single node etcd
Use the host IP address when configuring etcd:
```
export NODE1=192.168.1.21
```
Run the latest version of etcd:
```
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:latest \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name node1 \
--initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 \
--advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 \
--initial-cluster node1=http://${NODE1}:2380
```
List the cluster member:
```
etcdctl --endpoints=http://${NODE1}:2379 member list
```
### Running a 3 node etcd cluster
```
# For each machine
ETCD_VERSION=latest
@@ -85,41 +116,47 @@ DATA_DIR=/var/lib/etcd
# For node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
sudo docker run --net=host \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
# For node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
sudo docker run --net=host \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
# For node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
sudo docker run --net=host \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
docker run \
-p 2379:2379 \
-p 2380:2380 \
--volume=${DATA_DIR}:/etcd-data \
--name etcd quay.io/coreos/etcd:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://${THIS_IP}:2380 \
--advertise-client-urls http://${THIS_IP}:2379 --listen-client-urls http://${THIS_IP}:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}
```
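For reference, the commands above assume shell variables along these lines (a sketch; the names, hosts, and token are placeholders, not values prescribed by this guide):
```
# hypothetical values for illustration
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=node1; HOST_1=192.168.1.21
NAME_2=node2; HOST_2=192.168.1.22
NAME_3=node3; HOST_3=192.168.1.23
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
```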
To run `etcdctl` using API version 3:
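The invocation elided here presumably resembles the following (a sketch; assumes the container above is named `etcd`):
```
docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar"
```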
@@ -141,17 +178,19 @@ rkt run \
--volume etcd-ssl-certs-bundle,kind=host,source=/etc/ssl/certs/ca-certificates.crt \
--mount volume=etcd-ssl-certs-bundle,target=/etc/ssl/certs/ca-certificates.crt \
quay.io/coreos/etcd:latest -- --name my-name \
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
--discovery https://discovery.etcd.io/c11fbcdc16972e45253491a24fcf45e1
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
--discovery https://discovery.etcd.io/c11fbcdc16972e45253491a24fcf45e1
```
```
docker run \
--volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
quay.io/coreos/etcd:latest \
/usr/local/bin/etcd --name my-name \
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
--discovery https://discovery.etcd.io/86a9ff6c8cb8b4c4544c1a2f88f8b801
-p 2379:2379 \
-p 2380:2380 \
--volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt \
quay.io/coreos/etcd:latest \
/usr/local/bin/etcd --name my-name \
--initial-advertise-peer-urls http://localhost:2380 --listen-peer-urls http://localhost:2380 \
--advertise-client-urls http://localhost:2379 --listen-client-urls http://localhost:2379 \
--discovery https://discovery.etcd.io/86a9ff6c8cb8b4c4544c1a2f88f8b801
```

View File

@@ -10,8 +10,7 @@ The gateway supports multiple etcd server endpoints and works on a simple round-
Every application that accesses etcd must first have the address of an etcd cluster client endpoint. If multiple applications on the same server access the same etcd cluster, every application still needs to know the advertised client endpoints of the etcd cluster. If the etcd cluster is reconfigured to have different endpoints, every application may also need to update its endpoint list. This wide-scale reconfiguration is both tedious and error prone.
etcd gateway solves this problem by serving as a stable local endpoint. A typical etcd gateway configuration has
each machine running a gateway listening on a local address and every etcd application connecting to its local gateway. The upshot is only the gateway needs to update its endpoints instead of updating each and every application.
etcd gateway solves this problem by serving as a stable local endpoint. A typical etcd gateway configuration has each machine running a gateway listening on a local address and every etcd application connecting to its local gateway. The upshot is only the gateway needs to update its endpoints instead of updating each and every application.
In summary, to automatically propagate cluster endpoint changes, the etcd gateway runs on every machine serving multiple applications accessing the same etcd cluster.
@@ -64,3 +63,43 @@ Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the
$ etcd gateway --discovery-srv=example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```
## Configuration flags
### etcd cluster
#### --endpoints
* Comma-separated list of etcd server targets for forwarding client connections.
* Default: `127.0.0.1:2379`
* Invalid example: `https://127.0.0.1:2379` (gateway does not terminate TLS)
#### --discovery-srv
* DNS domain used to bootstrap cluster endpoints through SRV records.
* Default: (not set)
### Network
#### --listen-addr
* Interface and port to bind for accepting client requests.
* Default: `127.0.0.1:23790`
#### --retry-delay
* Duration of delay before retrying to connect to failed endpoints.
* Default: 1m0s
* Invalid example: "123" (a duration requires a time unit, e.g. "123s")
### Security
#### --insecure-discovery
* Accept SRV records that are insecure or susceptible to man-in-the-middle attacks.
* Default: `false`
#### --trusted-ca-file
* Path to the client TLS CA file for the etcd cluster. Used to authenticate endpoints.
* Default: (not set)
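Putting the flags together, a typical invocation might look like this (a sketch; the endpoints are hypothetical):
```
etcd gateway start --endpoints=10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379 \
  --listen-addr=127.0.0.1:23790 --retry-delay=30s
```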

View File

@@ -114,18 +114,21 @@
"span": 5,
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate({grpc_type=\"unary\",grpc_code!=\"OK\"} [1m]))",
"targets": [
{
"expr": "sum(rate(grpc_server_started_total{grpc_type=\"unary\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{instance}} RPC Rate",
"legendFormat": "RPC Rate",
"metric": "grpc_server_started_total",
"refId": "A",
"step": 2
},
{
"expr": "sum(rate(grpc_server_started_total{grpc_type=\"unary\",grpc_code!=\"OK\"} [1m])) - sum(rate(grpc_server_handled_total{grpc_type=\"unary\"} [1m]))",
"expr": "sum(rate(grpc_server_handled_total{grpc_type=\"unary\",grpc_code!=\"OK\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{instance}} RPC Failed Rate",
"legendFormat": "RPC Failed Rate",
"metric": "grpc_server_handled_total",
"refId": "B",
"step": 2
@@ -197,7 +200,7 @@
"stack": true,
"steppedLine": false,
"targets": [{
"expr": "sum(grpc_server_started_total {grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\",grpc_code!=\"OK\"}) - sum(grpc_server_handled_total {grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"})",
"expr": "sum(grpc_server_started_total{grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total{grpc_service=\"etcdserverpb.Watch\",grpc_type=\"bidi_stream\"})",
"intervalFactor": 2,
"legendFormat": "Watch Streams",
"metric": "grpc_server_handled_total",
@@ -205,7 +208,7 @@
"step": 4
},
{
"expr": "sum(grpc_server_started_total {grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total {grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"})",
"expr": "sum(grpc_server_started_total{grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"}) - sum(grpc_server_handled_total{grpc_service=\"etcdserverpb.Lease\",grpc_type=\"bidi_stream\"})",
"intervalFactor": 2,
"legendFormat": "Lease Streams",
"metric": "grpc_server_handled_total",
@@ -361,7 +364,7 @@
"stack": false,
"steppedLine": true,
"targets": [{
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket [5m])) by (instance, le))",
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))",
"hide": false,
"intervalFactor": 2,
"legendFormat": "{{instance}} WAL fsync",
@@ -370,7 +373,7 @@
"step": 4
},
{
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket [5m])) by (instance, le))",
"expr": "histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le))",
"intervalFactor": 2,
"legendFormat": "{{instance}} DB fsync",
"metric": "etcd_disk_backend_commit_duration_seconds_bucket",
@@ -522,7 +525,7 @@
"stack": true,
"steppedLine": false,
"targets": [{
"expr": "rate(etcd_network_client_grpc_received_bytes_total [1m])",
"expr": "rate(etcd_network_client_grpc_received_bytes_total[5m])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Client Traffic In",
"metric": "etcd_network_client_grpc_received_bytes_total",
@@ -595,7 +598,7 @@
"stack": true,
"steppedLine": false,
"targets": [{
"expr": "rate(etcd_network_client_grpc_sent_bytes_total [1m])",
"expr": "rate(etcd_network_client_grpc_sent_bytes_total[5m])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Client Traffic Out",
"metric": "etcd_network_client_grpc_sent_bytes_total",
@@ -668,7 +671,7 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate(etcd_network_peer_received_bytes_total [1m])) by (instance)",
"expr": "sum(rate(etcd_network_peer_received_bytes_total[5m])) by (instance)",
"intervalFactor": 2,
"legendFormat": "{{instance}} Peer Traffic In",
"metric": "etcd_network_peer_received_bytes_total",
@@ -742,7 +745,7 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate(etcd_network_peer_sent_bytes_total [1m])) by (instance)",
"expr": "sum(rate(etcd_network_peer_sent_bytes_total[5m])) by (instance)",
"hide": false,
"interval": "",
"intervalFactor": 2,
@@ -822,7 +825,7 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "sum(rate(etcd_server_proposals_failed_total [1m]))",
"expr": "sum(rate(etcd_server_proposals_failed_total[5m]))",
"intervalFactor": 2,
"legendFormat": "Proposal Failure Rate",
"metric": "etcd_server_proposals_failed_total",
@@ -838,7 +841,7 @@
"step": 2
},
{
"expr": "sum(rate(etcd_server_proposals_committed_total [1m]))",
"expr": "sum(rate(etcd_server_proposals_committed_total[5m]))",
"intervalFactor": 2,
"legendFormat": "Proposal Commit Rate",
"metric": "etcd_server_proposals_committed_total",
@@ -846,7 +849,7 @@
"step": 2
},
{
"expr": "sum(rate(etcd_server_proposals_applied_total [1m]))",
"expr": "sum(rate(etcd_server_proposals_applied_total[5m]))",
"intervalFactor": 2,
"legendFormat": "Proposal Apply Rate",
"refId": "D",
@@ -922,9 +925,9 @@
"stack": false,
"steppedLine": false,
"targets": [{
"expr": "etcd_server_leader_changes_seen_total",
"expr": "changes(etcd_server_leader_changes_seen_total[1d])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Leader Change Seen",
"legendFormat": "{{instance}} Total Leader Elections Per Day",
"metric": "etcd_server_leader_changes_seen_total",
"refId": "A",
"step": 2
@@ -932,7 +935,7 @@
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Rate Leader Elections",
"title": "Total Leader Elections Per Day",
"tooltip": {
"msResolution": false,
"shared": true,
@@ -1009,4 +1012,4 @@
"version": 215,
"links": [],
"gnetId": null
}
}
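Before importing the dashboard, the reworked expressions can be sanity-checked against the Prometheus HTTP API (a sketch; assumes a Prometheus server scraping etcd at localhost:9090):
```
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m]))'
```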

View File

@@ -1,7 +1,5 @@
# gRPC proxy
*This is an alpha feature; we are looking for early feedback.*
The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests.
The gRPC proxy supports multiple etcd server endpoints. When the proxy starts, it randomly picks one etcd server endpoint to use. This endpoint serves all requests until the proxy detects an endpoint failure. If the gRPC proxy detects an endpoint failure, it switches to a different endpoint, if available, to hide failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
@@ -101,7 +99,7 @@ bar
## Client endpoint synchronization and name resolution
The proxy supports registering its endpoints for discovery by writing to a user-defined endpoint. This serves two purposes. First, it allows clients to synchronize their endpoints against a set of proxy endpoints for high availability. Second, it is an endpoint provider for etcd [gRPC naming][dev-guide/grpc_naming.md].
The proxy supports registering its endpoints for discovery by writing to a user-defined endpoint. This serves two purposes. First, it allows clients to synchronize their endpoints against a set of proxy endpoints for high availability. Second, it is an endpoint provider for etcd [gRPC naming](../dev-guide/grpc_naming.md).
Register proxy(s) by providing a user-defined prefix:
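The registration example elided here likely resembles the following (a sketch; the flag names are assumptions based on the proxy's flag set, and all values are hypothetical):
```
etcd grpc-proxy start --endpoints=localhost:2379 \
  --listen-addr=127.0.0.1:23790 \
  --advertise-client-url=127.0.0.1:23790 \
  --resolver-prefix="___grpc_proxy_endpoint" \
  --resolver-ttl=60
```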

View File

@@ -17,58 +17,54 @@ For some baseline performance numbers, we consider a three member etcd cluster w
- Google Cloud Compute Engine
- 3 machines of 8 vCPUs + 16GB Memory + 50GB SSD
- 1 machine (client) of 16 vCPUs + 30GB Memory + 50GB SSD
- Ubuntu 15.10
- etcd v3 master branch (commit SHA d8f325d), Go 1.6.2
- Ubuntu 17.04
- etcd 3.2.0, go 1.8.3
With this configuration, etcd can approximately write:
| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Memory |
|----------------|-------------------|---------------------|-----------------------|-------------------|--------------------|-------------------|-----------------------------|--------|
| 10,000 | 8 | 256 | 1 | 1 | leader only | 525 | 2ms | 35 MB |
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 25,000 | 30ms | 35 MB |
| 100,000 | 8 | 256 | 100 | 1000 | all members | 33,000 | 25ms | 35 MB |
| Number of keys | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Target etcd server | Average write QPS | Average latency per request | Average server RSS |
|---------------:|------------------:|--------------------:|----------------------:|------------------:|--------------------|------------------:|----------------------------:|-------------------:|
| 10,000 | 8 | 256 | 1 | 1 | leader only | 583 | 1.6ms | 48 MB |
| 100,000 | 8 | 256 | 100 | 1000 | leader only | 44,341 | 22ms | 124MB |
| 100,000 | 8 | 256 | 100 | 1000 | all members | 50,104 | 20ms | 126MB |
Sample commands are:
```
# assuming IP_1 is leader, write requests to the leader
benchmark --endpoints={IP_1} --conns=1 --clients=1 \
```sh
# write to leader
benchmark --endpoints=${HOST_1} --target-leader --conns=1 --clients=1 \
put --key-size=8 --sequential-keys --total=10000 --val-size=256
benchmark --endpoints={IP_1} --conns=100 --clients=1000 \
benchmark --endpoints=${HOST_1} --target-leader --conns=100 --clients=1000 \
put --key-size=8 --sequential-keys --total=100000 --val-size=256
# write to all members
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
put --key-size=8 --sequential-keys --total=100000 --val-size=256
```
Linearizable read requests go through a quorum of cluster members for consensus to fetch the most recent data. Serializable read requests are cheaper than linearizable reads since they are served by any single etcd member, instead of a quorum of members, in exchange for possibly serving stale data. etcd can read:
| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average latency per request | Average read QPS |
|--------------------|-------------------|---------------------|-----------------------|-------------------|-------------|-----------------------------|------------------|
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 2ms | 560 |
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 0.4ms | 7,500 |
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 15ms | 43,000 |
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 9ms | 93,000 |
| Number of requests | Key size in bytes | Value size in bytes | Number of connections | Number of clients | Consistency | Average read QPS | Average latency per request |
|-------------------:|------------------:|--------------------:|----------------------:|------------------:|-------------|-----------------:|----------------------------:|
| 10,000 | 8 | 256 | 1 | 1 | Linearizable | 1,353 | 0.7ms |
| 10,000 | 8 | 256 | 1 | 1 | Serializable | 2,909 | 0.3ms |
| 100,000 | 8 | 256 | 100 | 1000 | Linearizable | 141,578 | 5.5ms |
| 100,000 | 8 | 256 | 100 | 1000 | Serializable | 185,758 | 2.2ms |
Sample commands are:
```
# Linearizable read requests
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=1 --clients=1 \
```sh
# Single connection read requests
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
range YOUR_KEY --consistency=l --total=10000
benchmark --endpoints={IP_1},{IP_2},{IP_3} --conns=100 --clients=1000 \
range YOUR_KEY --consistency=l --total=100000
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=1 --clients=1 \
range YOUR_KEY --consistency=s --total=10000
# Serializable read requests for each member and sum up the numbers
for endpoint in {IP_1} {IP_2} {IP_3}; do
benchmark --endpoints=$endpoint --conns=1 --clients=1 \
range YOUR_KEY --consistency=s --total=10000
done
for endpoint in {IP_1} {IP_2} {IP_3}; do
benchmark --endpoints=$endpoint --conns=100 --clients=1000 \
range YOUR_KEY --consistency=s --total=100000
done
# Many concurrent read requests
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
range YOUR_KEY --consistency=l --total=100000
benchmark --endpoints=${HOST_1},${HOST_2},${HOST_3} --conns=100 --clients=1000 \
range YOUR_KEY --consistency=s --total=100000
```
We encourage running the benchmark test when setting up an etcd cluster for the first time in a new environment to ensure the cluster achieves adequate performance; cluster latency and throughput can be sensitive to minor environment differences.
We encourage running the benchmark test when setting up an etcd cluster for the first time in a new environment to ensure the cluster achieves adequate performance; cluster latency and throughput can be sensitive to minor environment differences.

View File

@@ -16,7 +16,7 @@ etcd takes several certificate related configuration options, either through com
`--key-file=<path>`: Key for the certificate. Must be unencrypted.
`--client-cert-auth`: When this is set, etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail.
`--client-cert-auth`: When this is set, etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail. If [authentication][auth] is enabled, the certificate provides credentials for the user name given by the Common Name field.
`--trusted-ca-file=<path>`: Trusted certificate authority.
@@ -222,3 +222,4 @@ The certificate needs to be signed for the member's FQDN in its Subject Name, us
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
[auth]: authentication.md

View File

@@ -0,0 +1,203 @@
# Run etcd on Container Linux with systemd
The following guide shows how to run etcd with [systemd][systemd-docs] under [Container Linux][container-linux-docs].
## Provisioning an etcd cluster
Cluster bootstrapping in Container Linux is simplest with [Ignition][container-linux-ignition]; `coreos-metadata.service` dynamically fetches the machine's IP for discovery. Note that etcd's discovery service protocol is only meant for bootstrapping, and cannot be used with runtime reconfiguration or cluster monitoring.
The [Container Linux Config Transpiler][container-linux-ct] compiles etcd configuration files into Ignition configuration files:
```yaml container-linux-config:norender
etcd:
version: 3.2.0
name: s1
data_dir: /var/lib/etcd
advertise_client_urls: http://{PUBLIC_IPV4}:2379
initial_advertise_peer_urls: http://{PRIVATE_IPV4}:2380
listen_client_urls: http://0.0.0.0:2379
listen_peer_urls: http://{PRIVATE_IPV4}:2380
discovery: https://discovery.etcd.io/<token>
```
`ct` would produce the following Ignition Config:
```
$ ct --platform=gce --in-file /tmp/ct-etcd.cnf
{"ignition":{"version":"2.0.0","config"...
```
```json ignition-config
{
"ignition":{"version":"2.0.0","config":{}},
"storage":{},
"systemd":{
"units":[{
"name":"etcd-member.service",
"enable":true,
"dropins":[{
"name":"20-clct-etcd-member.conf",
"contents":"[Unit]\nRequires=coreos-metadata.service\nAfter=coreos-metadata.service\n\n[Service]\nEnvironmentFile=/run/metadata/coreos\nEnvironment=\"ETCD_IMAGE_TAG=v3.1.8\"\nExecStart=\nExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \\\n --name=\"s1\" \\\n --data-dir=\"/var/lib/etcd\" \\\n --listen-peer-urls=\"http://${COREOS_GCE_IP_LOCAL_0}:2380\" \\\n --listen-client-urls=\"http://0.0.0.0:2379\" \\\n --initial-advertise-peer-urls=\"http://${COREOS_GCE_IP_LOCAL_0}:2380\" \\\n --advertise-client-urls=\"http://${COREOS_GCE_IP_EXTERNAL_0}:2379\" \\\n --discovery=\"https://discovery.etcd.io/\u003ctoken\u003e\""}]}]},
"networkd":{},
"passwd":{}}
```
To avoid accidental misconfiguration, the transpiler helpfully verifies etcd configurations when generating Ignition files:
```yaml container-linux-config:norender
etcd:
version: 3.2.0
name: s1
data_dir_x: /var/lib/etcd
advertise_client_urls: http://{PUBLIC_IPV4}:2379
initial_advertise_peer_urls: http://{PRIVATE_IPV4}:2380
listen_client_urls: http://0.0.0.0:2379
listen_peer_urls: http://{PRIVATE_IPV4}:2380
discovery: https://discovery.etcd.io/<token>
```
```
$ ct --platform=gce --in-file /tmp/ct-etcd.cnf
warning at line 3, column 2
Config has unrecognized key: data_dir_x
```
See [Container Linux Provisioning][container-linux-provision] for more details.
## etcd 3.x service
[Container Linux][container-linux-docs] does not include etcd 3.x binaries by default. Different versions of etcd 3.x can be fetched via `etcd-member.service`.
Confirm unit file exists:
```
systemctl cat etcd-member.service
```
Check if the etcd service is running:
```
systemctl status etcd-member.service
```
Example systemd drop-in unit to override the default service settings:
```bash
cat > /tmp/20-cl-etcd-member.conf <<EOF
[Service]
Environment="ETCD_IMAGE_TAG=v3.2.0"
Environment="ETCD_DATA_DIR=/var/lib/etcd"
Environment="ETCD_SSL_DIR=/etc/ssl/certs"
Environment="ETCD_OPTS=--name s1 \
--listen-client-urls https://10.240.0.1:2379 \
--advertise-client-urls https://10.240.0.1:2379 \
--listen-peer-urls https://10.240.0.1:2380 \
--initial-advertise-peer-urls https://10.240.0.1:2380 \
--initial-cluster s1=https://10.240.0.1:2380,s2=https://10.240.0.2:2380,s3=https://10.240.0.3:2380 \
--initial-cluster-token mytoken \
--initial-cluster-state new \
--client-cert-auth \
--trusted-ca-file /etc/ssl/certs/etcd-root-ca.pem \
--cert-file /etc/ssl/certs/s1.pem \
--key-file /etc/ssl/certs/s1-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file /etc/ssl/certs/etcd-root-ca.pem \
--peer-cert-file /etc/ssl/certs/s1.pem \
--peer-key-file /etc/ssl/certs/s1-key.pem \
--auto-compaction-retention 1"
EOF
mv /tmp/20-cl-etcd-member.conf /etc/systemd/system/etcd-member.service.d/20-cl-etcd-member.conf
```
Or use a Container Linux Config:
```yaml container-linux-config:norender
systemd:
units:
- name: etcd-member.service
dropins:
- name: conf1.conf
contents: |
[Service]
Environment="ETCD_SSL_DIR=/etc/ssl/certs"
etcd:
version: 3.2.0
name: s1
data_dir: /var/lib/etcd
listen_client_urls: https://0.0.0.0:2379
advertise_client_urls: https://{PUBLIC_IPV4}:2379
listen_peer_urls: https://{PRIVATE_IPV4}:2380
initial_advertise_peer_urls: https://{PRIVATE_IPV4}:2380
initial_cluster: s1=https://{PRIVATE_IPV4}:2380,s2=https://10.240.0.2:2380,s3=https://10.240.0.3:2380
initial_cluster_token: mytoken
initial_cluster_state: new
client_cert_auth: true
trusted_ca_file: /etc/ssl/certs/etcd-root-ca.pem
cert-file: /etc/ssl/certs/s1.pem
key-file: /etc/ssl/certs/s1-key.pem
peer-client-cert-auth: true
peer-trusted-ca-file: /etc/ssl/certs/etcd-root-ca.pem
peer-cert-file: /etc/ssl/certs/s1.pem
peer-key-file: /etc/ssl/certs/s1-key.pem
auto-compaction-retention: 1
```
```
$ ct --platform=gce --in-file /tmp/ct-etcd.cnf
{"ignition":{"version":"2.0.0","config"...
```
To see all runtime drop-in changes for system units:
```
systemd-delta --type=extended
```
To enable and start:
```
systemctl daemon-reload
systemctl enable --now etcd-member.service
```
To see the logs:
```
journalctl --unit etcd-member.service --lines 10
```
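Once the member is up, check its health with `etcdctl` (a sketch; assumes the TLS material from the drop-in above and a v3 `etcdctl` on the host):
```
ETCDCTL_API=3 etcdctl --endpoints=https://10.240.0.1:2379 \
  --cacert=/etc/ssl/certs/etcd-root-ca.pem \
  --cert=/etc/ssl/certs/s1.pem \
  --key=/etc/ssl/certs/s1-key.pem \
  endpoint health
```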
To stop and disable the service:
```
systemctl disable --now etcd-member.service
```
## etcd 2.x service
[Container Linux][container-linux-docs] includes a unit file `etcd2.service` for etcd 2.x, which will be removed in the near future. See [Container Linux FAQ][container-linux-faq] for more details.
Confirm unit file is installed:
```
systemctl cat etcd2.service
```
Check if the etcd service is running:
```
systemctl status etcd2.service
```
To stop and disable:
```
systemctl disable --now etcd2.service
```
[systemd-docs]: https://github.com/systemd/systemd
[container-linux-docs]: https://coreos.com/os/docs/latest
[container-linux-faq]: https://github.com/coreos/docs/blob/master/etcd/os-faq.md
[container-linux-provision]: https://github.com/coreos/docs/blob/master/os/provisioning.md
[container-linux-ignition]: https://github.com/coreos/docs/blob/master/ignition/what-is-ignition.md
[container-linux-ct]: https://github.com/coreos/container-linux-config-transpiler

View File

@@ -59,7 +59,7 @@ Radius Intelligence uses Kubernetes running CoreOS to containerize and scale int
## Vonage
- *Application*: system configuration for microservices, scheduling, locks (future - service discovery)
- *Application*: kubernetes, vault backend, system configuration for microservices, scheduling, locks (future - service discovery)
- *Launched*: August 2015
- *Cluster Size*: 2 clusters of 5 members in 2 DCs, n local proxies 1-to-1 with microservice, (ssl and SRV look up)
- *Order of Data Size*: kilobytes
@@ -104,7 +104,7 @@ PD(Placement Driver) is the central controller in the TiDB cluster. It saves the
## QingCloud
- *Application*: [QingCloud](qingcloud) appcenter cluster for service discovery as [metad](metad) backend.
- *Application*: [QingCloud][qingcloud] appcenter cluster for service discovery as [metad][metad] backend.
- *Launched*: December 2016
- *Cluster Size*: 1 cluster of 3 members per user.
- *Order of Data Size*: kilobytes
@@ -186,7 +186,7 @@ In [hyper.sh][hyper.sh], the container service is backed by [hypernetes][hyperne
- *Cluster Size*: 1000+ deployments, each deployment contains a 3 node cluster.
- *Order of Data Size*: 100s of Megabytes
- *Operator*: daocloud.io
- *Environment*: Baremetal and virtual machines
- *Environment*: Baremetal and virtual machines
- *Backups*: None, all data can be recreated if necessary.
In [DaoCloud][DaoCloud], we use Docker and Swarm to deploy and run our applications, and we use etcd to save metadata for service discovery.
@@ -203,8 +203,9 @@ In [DaoCloud][DaoCloud], we use Docker and Swarm to deploy and run our applicati
- *Environment*: AWS, Kubernetes
- *Backups*: EBS volume backups
At Branch, we use kubernetes heavily as our core microservice platform for staging and production.
[Branch]:https://branch.io
At [Branch][branch], we use kubernetes heavily as our core microservice platform for staging and production.
[branch]: https://branch.io
## Baidu Waimai
@@ -213,7 +214,7 @@ At Branch, we use kubernetes heavily as our core microservice platform for stagi
- *Cluster Size*: 3 clusters of 5 members
- *Order of Data Size*: several gigabytes
- *Operator*: Baidu Waimai Operations Department
- *Environment*: CentOS 6.5
- *Environment*: CentOS 6.5
- *Backups*: backup scripts
## Salesforce.com

View File

@@ -1,6 +1,6 @@
# Reporting bugs
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
If any part of the etcd project has bugs or documentation mistakes, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
To make the bug report accurate and easy to understand, please try to create bug reports that are:

View File

@@ -10,7 +10,7 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.0) before upgrading to 3.0.
To upgrade an existing etcd deployment to 3.0, the running cluster must be 2.3 or greater. If it's before 2.3, please upgrade to [2.3](https://github.com/coreos/etcd/releases/tag/v2.3.8) before upgrading to 3.0.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl cluster-health` command before proceeding.
@@ -52,7 +52,7 @@ member 8211f1d0f64f3269 is healthy: got healthy result from http://localhost:123
cluster is healthy
$ curl http://localhost:2379/version
{"etcdserver":"2.3.x","etcdcluster":"2.3.0"}
{"etcdserver":"2.3.x","etcdcluster":"2.3.8"}
```
#### 2. Stop the existing etcd process

View File

@@ -10,7 +10,7 @@ Before [starting an upgrade](#upgrade-procedure), read through the rest of this
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please upgrade to [3.0](https://github.com/coreos/etcd/releases/tag/v3.0.16) before upgrading to 3.1.
To upgrade an existing etcd deployment to 3.1, the running cluster must be 3.0 or greater. If it's before 3.0, please [upgrade to 3.0](upgrade_3_0.md) before upgrading to 3.1.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.

View File

@@ -30,37 +30,27 @@ resp.TTL == -1
err == nil
```
Previously, the `clientv3.Lease.KeepAlive` interface did not return errors (see [#7488](https://github.com/coreos/etcd/issues/7488) and [#7732](https://github.com/coreos/etcd/pull/7732)).
`clientv3.NewFromConfigFile` is moved to `yaml.NewConfig`.
Before
```go
// clientv3
type Lease interface {
KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error)
KeepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAliveResponse, error)
}
import "github.com/coreos/etcd/clientv3"
clientv3.NewFromConfigFile
```
After
```go
// clientv3
type Lease interface {
KeepAlive(ctx context.Context, id LeaseID) LeaseKeepAliveChan
KeepAliveOnce(ctx context.Context, id LeaseID) LeaseKeepAliveResponse
}
// check error
for ka := range <-LeaseKeepAliveChan { ka.Err }
LeaseKeepAliveResponse.Err
import clientv3yaml "github.com/coreos/etcd/clientv3/yaml"
clientv3yaml.NewConfig
```
### Server upgrade checklists
#### Upgrade requirements
To upgrade an existing etcd deployment to 3.2, the running cluster must be 3.1 or greater. If it's before 3.1, please upgrade to [3.1](https://github.com/coreos/etcd/releases/tag/v3.1.7) before upgrading to 3.2.
To upgrade an existing etcd deployment to 3.2, the running cluster must be 3.1 or greater. If it's before 3.1, please [upgrade to 3.1](upgrade_3_1.md) before upgrading to 3.2.
Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.

View File

@@ -67,13 +67,13 @@ You have successfully started an etcd and written a key to the store.
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication. To maintain compatibility, some etcd configuration and documentation continues to refer to the legacy ports 4001 and 7001, but all new etcd use and discussion should adopt the IANA-assigned ports. The legacy ports 4001 and 7001 will be fully deprecated, and support for their use removed, in future etcd releases.
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
### Running local etcd cluster
First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.
Our [Procfile script](./Procfile) will set up a local example cluster. You can start it with:
Our [Procfile script](../../V2Procfile) will set up a local example cluster. You can start it with:
```sh
goreman start
@@ -162,4 +162,4 @@ Currently only the amd64 architecture is officially supported by `etcd`.
### License
etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.
etcd is under the Apache 2.0 license. See the [LICENSE](../../LICENSE) file for details.

View File

@@ -18,7 +18,7 @@ A key's lifetime spans a generation. Each key may have one or multiple generat
### Physical View
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
etcd stores the physical data as key-value pairs in a persistent [b+tree][b+tree]. Each revision of the store's state only contains the delta from its previous revision to be efficient. A single revision may correspond to multiple keys in the tree.
The key of a key-value pair is a 3-tuple (major, sub, type). Major is the store revision holding the key. Sub differentiates among keys within the same revision. Type is an optional suffix for special values (e.g., `t` if the value contains a tombstone). The value of the key-value pair contains the modification from the previous revision, i.e., one delta from the previous revision. The b+tree is ordered by key in lexical byte-order. Ranged lookups over revision deltas are fast; this enables quickly finding modifications from one specific revision to another. Compaction removes out-of-date key-value pairs.
@@ -73,7 +73,7 @@ Any completed operations are durable. All accessible data is also durable data.
#### Linearizability
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
Linearizability (also known as Atomic Consistency or External Consistency) is a consistency level between strict consistency and sequential consistency.
For linearizability, suppose each operation receives a timestamp from a loosely synchronized global clock. Operations are linearized if and only if they always complete as though they were executed in a sequential order and each operation appears to complete in the order specified by the program. Likewise, if an operation's timestamp precedes another, that operation must also precede the other operation in the sequence.
@@ -83,10 +83,10 @@ etcd does not ensure linearizability for watch operations. Users are expected to
etcd ensures linearizability for all other operations by default. Linearizability comes with a cost, however, because linearized requests must go through the Raft consensus process. To obtain lower latencies and higher throughput for read requests, clients can configure a request's consistency mode to `serializable`, which may access stale data with respect to quorum, but removes the performance penalty of linearized accesses' reliance on live consensus.
[persistent-ds]: [https://en.wikipedia.org/wiki/Persistent_data_structure]
[btree]: [https://en.wikipedia.org/wiki/B-tree]
[b+tree]: [https://en.wikipedia.org/wiki/B%2B_tree]
[seq_consistency]: [https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency]
[strict_consistency]: [https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency]
[serializable_isolation]: [https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable]
[Linearizability]: [#Linearizability]
[persistent-ds]: https://en.wikipedia.org/wiki/Persistent_data_structure
[btree]: https://en.wikipedia.org/wiki/B-tree
[b+tree]: https://en.wikipedia.org/wiki/B%2B_tree
[seq_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Sequential_consistency
[strict_consistency]: https://en.wikipedia.org/wiki/Consistency_model#Strict_consistency
[serializable_isolation]: https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable
[Linearizability]: #linearizability

View File

@@ -32,7 +32,7 @@ The consistent flag for read operations is removed in etcd 2.0.0. The normal rea
The read consistency guarantees are:
The consistent read guarantees sequential consistency within one client that talks to one etcd server. Read/Write from one client to one etcd member should be observed in order. If one client writes a value to an etcd server successfully, it should be able to get the value out of the server immediately.
The consistent read guarantees sequential consistency within one client that talks to one etcd server. Read/Write from one client to one etcd member should be observed in order. If one client writes a value to an etcd server successfully, it should be able to get the value out of the server immediately.
Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
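For example, a write through the v2 HTTP API is immediately readable from the same member (a sketch):
```
curl -s -X PUT http://127.0.0.1:2379/v2/keys/foo -d value=bar
curl -s http://127.0.0.1:2379/v2/keys/foo
```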
@@ -56,6 +56,7 @@ Proxy mode in 2.0 will provide similar functionality, and with improved control
## Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
## HTTP Admin API

View File

@@ -176,7 +176,10 @@ To start etcd automatically using custom settings at startup in Linux, using a [
The security flags help to [build a secure etcd cluster][security].
### --ca-file [DEPRECATED]
### --ca-file
**DEPRECATED**
+ Path to the client server TLS CA file. `--ca-file ca.crt` can be replaced by `--trusted-ca-file ca.crt --client-cert-auth`, and etcd will behave the same (see the sketch below).
+ default: none
+ env variable: ETCD_CA_FILE
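Concretely, the replacement swaps the one deprecated flag for two current ones (a sketch):
```
# deprecated
etcd --ca-file ca.crt
# equivalent replacement
etcd --trusted-ca-file ca.crt --client-cert-auth
```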
@@ -201,7 +204,10 @@ The security flags help to [build a secure etcd cluster][security].
+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE
### --peer-ca-file [DEPRECATED]
### --peer-ca-file
**DEPRECATED**
+ Path to the peer server TLS CA file. `--peer-ca-file ca.crt` can be replaced by `--peer-trusted-ca-file ca.crt --peer-client-cert-auth`, and etcd will behave the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
@@ -234,7 +240,7 @@ The security flags help to [build a secure etcd cluster][security].
+ env variable: ETCD_DEBUG
### --log-package-levels
+ Set individual etcd subpackages to specific log levels. For example, `etcdserver=WARNING,security=DEBUG`.
+ Set individual etcd subpackages to specific log levels. For example, `etcdserver=WARNING,security=DEBUG`.
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
@@ -272,7 +278,7 @@ Follow the instructions when using these flags.
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
[proxy]: proxy.md
[reconfig]: runtime-configuration.md
[restore]: admin_guide.md#restoring-a-backup

View File

@@ -16,7 +16,7 @@ This will run the latest release version of etcd. You can specify version if nee
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd \
--name etcd quay.io/coreos/etcd:v2.3.8 \
-name etcd0 \
-advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -48,7 +48,7 @@ The main difference being the value used for the `-initial-cluster` flag, which
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd \
--name etcd quay.io/coreos/etcd:v2.3.8 \
-name etcd0 \
-advertise-client-urls http://192.168.12.50:2379,http://192.168.12.50:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -63,7 +63,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd \
--name etcd quay.io/coreos/etcd:v2.3.8 \
-name etcd1 \
-advertise-client-urls http://192.168.12.51:2379,http://192.168.12.51:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -78,7 +78,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd \
--name etcd quay.io/coreos/etcd:v2.3.8 \
-name etcd2 \
-advertise-client-urls http://192.168.12.52:2379,http://192.168.12.52:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \

View File

@@ -115,7 +115,6 @@
- [mattn/etcdenv](https://github.com/mattn/etcdenv) - "env" shebang with etcd integration
- [kelseyhightower/confd](https://github.com/kelseyhightower/confd) - Manage local app config files using templates and data from etcd
- [configdb](https://git.autistici.org/ai/configdb/tree/master) - A REST relational abstraction on top of arbitrary database backends, aimed at storing configs and inventories.
- [scrz](https://github.com/scrz/scrz) - Container manager, stores configuration in etcd.
- [fleet](https://github.com/coreos/fleet) - Distributed init system
- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) - Container cluster manager introduced by Google.
- [mailgun/vulcand](https://github.com/mailgun/vulcand) - HTTP proxy that uses etcd as a configuration backend.

View File

@@ -1,6 +1,6 @@
# Reporting Bugs
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue][issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue][etcd-issue]. We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that an issue reporting the same problem does not already exist.
To make your bug report accurate and easy to understand, please try to create bug reports that are:

View File

@@ -7,25 +7,25 @@ To prove out the design of the v3 API the team has also built [a number of examp
# Design
1. Flatten binary key-value space
2. Keep the event history until compaction
- access to old version of keys
- user controlled history compaction
3. Support range query
- Pagination support with limit argument
- Support consistency guarantee across multiple range queries
4. Replace TTL key with Lease
- more efficient/ low cost keep alive
- a logical group of TTL keys
5. Replace CAS/CAD with multi-object Txn
- MUCH MORE powerful and flexible
6. Support efficient watching with multiple ranges
7. RPC API supports the complete set of APIs.
7. RPC API supports the complete set of APIs.
- more efficient than JSON/HTTP
- additional txn/lease support
@@ -56,7 +56,7 @@ the size in the future a little bit or make it configurable.
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
PutResponse {
PutResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
@@ -119,7 +119,7 @@ RangeResponse {
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
compare = {
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = greater, key = foo1, mod_revision = 1}}
},
// if the comparison succeeds, put foo2 = bar2
@@ -156,7 +156,7 @@ Watch( WatchRequest{
end_revision = 10000,
// server decided notification frequency
progress_notification = true,
}
}
… // this can be a watch request stream
)
@@ -176,7 +176,7 @@ WatchResponse {
},
}
// a notification at 2000
WatchResponse {
cluster_id = 0x1000,
@@ -185,9 +185,9 @@ WatchResponse {
raft_term = 0x1,
// nil event as notification
}
// put (foo0=bar3000) event at 3000
WatchResponse {
cluster_id = 0x1000,
@@ -204,8 +204,8 @@ WatchResponse {
},
}
```
[api-protobuf]: https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/master/storage/storagepb/kv.proto
[api-protobuf]: https://github.com/coreos/etcd/blob/release-2.3/etcdserver/etcdserverpb/rpc.proto
[kv-protobuf]: https://github.com/coreos/etcd/blob/release-2.3/storage/storagepb/kv.proto

View File

@@ -16,7 +16,7 @@ etcd takes several certificate related configuration options, either through com
`--key-file=<path>`: Key for the certificate. Must be unencrypted.
`--client-cert-auth`: When this is set, etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail.
`--client-cert-auth`: When this is set, etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail. If [authentication][auth] is enabled, the certificate provides credentials for the user name given by the Common Name field.
`--trusted-ca-file=<path>`: Trusted certificate authority.
@@ -191,3 +191,4 @@ If you need your certificate to be signed for your member's FQDN in its Subject
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
[auth]: authentication.md

View File

@@ -11,7 +11,7 @@
![etcd Logo](logos/etcd-horizontal-color.png)
etcd is a distributed, consistent key-value store for shared configuration and service discovery, with a focus on being:
etcd is a distributed reliable key-value store for the most critical data of a distributed system, with a focus on being:
* *Simple*: well-defined, user-facing API (gRPC)
* *Secure*: automatic TLS with optional client cert authentication
@@ -75,7 +75,7 @@ That's it! etcd is now running and serving client requests. For more
The [official etcd ports][iana-ports] are 2379 for client requests, and 2380 for peer communication.
[iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
[iana-ports]: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt
### Running a local etcd cluster
@@ -133,5 +133,3 @@ See [reporting bugs](Documentation/reporting_bugs.md) for details about reportin
### License
etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.

View File

@@ -6,7 +6,7 @@ This document defines a high level roadmap for etcd development.
The dates below should not be considered authoritative, but rather indicative of the projected timeline of the project. The [milestones defined in GitHub](https://github.com/coreos/etcd/milestones) represent the most up-to-date and issue-for-issue plans.
etcd 3.1 is our current stable branch. The roadmap below outlines new features that will be added to etcd, and while subject to change, define what future stable will look like.
etcd 3.2 is our current stable branch. The roadmap below outlines new features that will be added to etcd, and while subject to change, define what future stable will look like.
### etcd 3.2 (2017-May)
- Stable scalable proxy

View File

@@ -1,212 +1,379 @@
[
{
"project": "bitbucket.org/ww/goautoneg",
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 1
}
]
},
{
"project": "github.com/beorn7/perks/quantile",
"license": "MIT License",
"confidence": 0.989
"licenses": [
{
"type": "MIT License",
"confidence": 0.9891304347826086
}
]
},
{
"project": "github.com/bgentry/speakeasy",
"license": "MIT License",
"confidence": 0.944
"licenses": [
{
"type": "MIT License",
"confidence": 0.9441624365482234
}
]
},
{
"project": "github.com/boltdb/bolt",
"license": "MIT License",
"confidence": 1
"licenses": [
{
"type": "MIT License",
"confidence": 1
}
]
},
{
"project": "github.com/cockroachdb/cmux",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/coreos/etcd",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/coreos/go-semver/semver",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/coreos/go-systemd",
"license": "Apache License 2.0",
"confidence": 0.997
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 0.9966703662597114
}
]
},
{
"project": "github.com/coreos/pkg",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/cpuguy83/go-md2man/md2man",
"license": "MIT License",
"confidence": 1
"licenses": [
{
"type": "MIT License",
"confidence": 1
}
]
},
{
"project": "github.com/dgrijalva/jwt-go",
"license": "MIT License",
"confidence": 0.989
"licenses": [
{
"type": "MIT License",
"confidence": 0.9891304347826086
}
]
},
{
"project": "github.com/dustin/go-humanize",
"license": "MIT License",
"confidence": 0.969
"licenses": [
{
"type": "MIT License",
"confidence": 0.96875
}
]
},
{
"project": "github.com/ghodss/yaml",
"license": "MIT License and BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 1
"licenses": [
{
"type": "MIT License and BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 1
}
]
},
{
"project": "github.com/gogo/protobuf/proto",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.909
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.9090909090909091
}
]
},
{
"project": "github.com/golang/groupcache/lru",
"license": "Apache License 2.0",
"confidence": 0.997
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 0.9966703662597114
}
]
},
{
"project": "github.com/golang/protobuf",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.92
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.92
}
]
},
{
"project": "github.com/google/btree",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/grpc-ecosystem/go-grpc-prometheus",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/grpc-ecosystem/grpc-gateway",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.979
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.979253112033195
}
]
},
{
"project": "github.com/inconshreveable/mousetrap",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "MIT License and BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 1
},
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/jonboulle/clockwork",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/mattn/go-runewidth",
"license": "MIT License",
"confidence": 1
"licenses": [
{
"type": "MIT License",
"confidence": 1
}
]
},
{
"project": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/olekukonko/tablewriter",
"license": "MIT License",
"confidence": 0.989
"licenses": [
{
"type": "MIT License",
"confidence": 0.9891304347826086
}
]
},
{
"project": "github.com/prometheus/client_golang/prometheus",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/prometheus/client_model/go",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/prometheus/common",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/prometheus/procfs",
"license": "Apache License 2.0",
"confidence": 1
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 1
}
]
},
{
"project": "github.com/russross/blackfriday",
"license": "BSD 2-clause \"Simplified\" License",
"confidence": 0.963
},
{
"project": "github.com/shurcooL/sanitized_anchor_name",
"license": "MIT License",
"confidence": 1
"licenses": [
{
"type": "BSD 2-clause \"Simplified\" License",
"confidence": 0.9626168224299065
}
]
},
{
"project": "github.com/spf13/cobra",
"license": "Apache License 2.0",
"confidence": 0.957
"licenses": [
{
"type": "Apache License 2.0",
"confidence": 0.9573241061130334
}
]
},
{
"project": "github.com/spf13/pflag",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.966
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.9663865546218487
}
]
},
{
"project": "github.com/ugorji/go/codec",
"license": "MIT License",
"confidence": 0.995
"licenses": [
{
"type": "MIT License",
"confidence": 0.9946524064171123
}
]
},
{
"project": "github.com/urfave/cli",
"license": "MIT License",
"confidence": 1
"licenses": [
{
"type": "MIT License",
"confidence": 1
}
]
},
{
"project": "github.com/xiang90/probing",
"license": "MIT License",
"confidence": 1
"licenses": [
{
"type": "MIT License",
"confidence": 1
}
]
},
{
"project": "golang.org/x/crypto",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.966
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.9663865546218487
}
]
},
{
"project": "golang.org/x/net",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.966
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.9663865546218487
}
]
},
{
"project": "golang.org/x/text",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.966
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.9663865546218487
}
]
},
{
"project": "golang.org/x/time/rate",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.966
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.9663865546218487
}
]
},
{
"project": "google.golang.org/grpc",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.979
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.979253112033195
}
]
},
{
"project": "gopkg.in/cheggaaa/pb.v1",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.992
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 0.9916666666666667
}
]
},
{
"project": "gopkg.in/yaml.v2",
"license": "Apache License 2.0 and MIT License",
"confidence": 1
},
{
"project": "bitbucket.org/ww/goautoneg",
"license": "BSD 3-clause \"New\" or \"Revised\" License",
"confidence": 1
"licenses": [
{
"type": "The Unlicense",
"confidence": 0.35294117647058826
},
{
"type": "MIT License",
"confidence": 0.8975609756097561
}
]
}
]

View File

@@ -1,18 +1,26 @@
[
{
"project": "bitbucket.org/ww/goautoneg",
"license": "BSD 3-clause \"New\" or \"Revised\" License"
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License"
}
]
},
{
"project": "github.com/ghodss/yaml",
"license": "MIT License and BSD 3-clause \"New\" or \"Revised\" License"
"licenses": [
{
"type": "MIT License and BSD 3-clause \"New\" or \"Revised\" License"
}
]
},
{
"project": "github.com/inconshreveable/mousetrap",
"license": "Apache License 2.0"
},
{
"project": "gopkg.in/yaml.v2",
"license": "Apache License 2.0 and MIT License"
"licenses": [
{
"type": "Apache License 2.0"
}
]
}
]

build
View File

@@ -3,9 +3,7 @@
# set some environment variables
ORG_PATH="github.com/coreos"
REPO_PATH="${ORG_PATH}/etcd"
export GO15VENDOREXPERIMENT="1"
eval $(go env)
GIT_SHA=`git rev-parse --short HEAD || echo "GitNotFound"`
if [ ! -z "$FAILPOINTS" ]; then
GIT_SHA="$GIT_SHA"-FAILPOINTS
@@ -17,11 +15,7 @@ GO_LDFLAGS="$GO_LDFLAGS -X ${REPO_PATH}/cmd/vendor/${REPO_PATH}/version.GitSHA=$
# enable/disable failpoints
toggle_failpoints() {
FAILPKGS="etcdserver/ mvcc/backend/"
mode="disable"
if [ ! -z "$FAILPOINTS" ]; then mode="enable"; fi
if [ ! -z "$1" ]; then mode="$1"; fi
mode="$1"
if which gofail >/dev/null 2>&1; then
gofail "$mode" $FAILPKGS
elif [ "$mode" != "disable" ]; then
@@ -30,19 +24,26 @@ toggle_failpoints() {
fi
}
toggle_failpoints_default() {
mode="disable"
if [ ! -z "$FAILPOINTS" ]; then mode="enable"; fi
toggle_failpoints "$mode"
}
etcd_build() {
out="bin"
if [ -n "${BINDIR}" ]; then out="${BINDIR}"; fi
toggle_failpoints
toggle_failpoints_default
# Static compilation is useful when etcd is run in a container
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcd ${REPO_PATH}/cmd/etcd || return
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcdctl ${REPO_PATH}/cmd/etcdctl || return
}
etcd_setup_gopath() {
CDIR=$(cd `dirname "$0"` && pwd)
d=$(dirname "$0")
CDIR=$(cd "$d" && pwd)
cd "$CDIR"
etcdGOPATH=${CDIR}/gopath
etcdGOPATH="${CDIR}/gopath"
# preserve old gopath to support building with unvendored tooling deps (e.g., gofail)
if [ -n "$GOPATH" ]; then
GOPATH=":$GOPATH"
@@ -53,7 +54,7 @@ etcd_setup_gopath() {
ln -s ${CDIR}/cmd/vendor ${etcdGOPATH}/src
}
toggle_failpoints
toggle_failpoints_default
# only build when called directly, not sourced
if echo "$0" | grep "build$" >/dev/null; then

View File

@@ -14,8 +14,27 @@
package client
import (
"github.com/coreos/etcd/pkg/srv"
)
// Discoverer is an interface that wraps the Discover method.
type Discoverer interface {
// Discover looks up the etcd servers for the domain.
Discover(domain string) ([]string, error)
}
type srvDiscover struct{}
// NewSRVDiscover constructs a new Discoverer that uses the stdlib to lookup SRV records.
func NewSRVDiscover() Discoverer {
return &srvDiscover{}
}
func (d *srvDiscover) Discover(domain string) ([]string, error) {
srvs, err := srv.GetClient("etcd-client", domain)
if err != nil {
return nil, err
}
return srvs.Endpoints, nil
}
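
Following this refactor, the SRV lookup logic lives in pkg/srv and client keeps only the thin Discoverer wrapper shown above. A minimal usage sketch, assuming a DNS zone (the hypothetical example.com) that publishes _etcd-client._tcp SRV records:

package main

import (
	"fmt"
	"log"

	"github.com/coreos/etcd/client"
)

func main() {
	d := client.NewSRVDiscover()
	// Discover resolves the etcd-client SRV service for the domain and
	// returns the advertised endpoints.
	endpoints, err := d.Discover("example.com") // hypothetical domain
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("discovered endpoints:", endpoints)
}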

View File

@@ -1,65 +0,0 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"fmt"
"net"
"net/url"
)
var (
// indirection for testing
lookupSRV = net.LookupSRV
)
type srvDiscover struct{}
// NewSRVDiscover constructs a new Discoverer that uses the stdlib to lookup SRV records.
func NewSRVDiscover() Discoverer {
return &srvDiscover{}
}
// Discover looks up the etcd servers for the domain.
func (d *srvDiscover) Discover(domain string) ([]string, error) {
var urls []*url.URL
updateURLs := func(service, scheme string) error {
_, addrs, err := lookupSRV(service, "tcp", domain)
if err != nil {
return err
}
for _, srv := range addrs {
urls = append(urls, &url.URL{
Scheme: scheme,
Host: net.JoinHostPort(srv.Target, fmt.Sprintf("%d", srv.Port)),
})
}
return nil
}
errHTTPS := updateURLs("etcd-client-ssl", "https")
errHTTP := updateURLs("etcd-client", "http")
if errHTTPS != nil && errHTTP != nil {
return nil, fmt.Errorf("dns lookup errors: %s and %s", errHTTPS, errHTTP)
}
endpoints := make([]string, len(urls))
for i := range urls {
endpoints[i] = urls[i].String()
}
return endpoints, nil
}

View File

@@ -1,102 +0,0 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"errors"
"net"
"reflect"
"testing"
)
func TestSRVDiscover(t *testing.T) {
defer func() { lookupSRV = net.LookupSRV }()
tests := []struct {
withSSL []*net.SRV
withoutSSL []*net.SRV
expected []string
}{
{
[]*net.SRV{},
[]*net.SRV{},
[]string{},
},
{
[]*net.SRV{
{Target: "10.0.0.1", Port: 2480},
{Target: "10.0.0.2", Port: 2480},
{Target: "10.0.0.3", Port: 2480},
},
[]*net.SRV{},
[]string{"https://10.0.0.1:2480", "https://10.0.0.2:2480", "https://10.0.0.3:2480"},
},
{
[]*net.SRV{
{Target: "10.0.0.1", Port: 2480},
{Target: "10.0.0.2", Port: 2480},
{Target: "10.0.0.3", Port: 2480},
},
[]*net.SRV{
{Target: "10.0.0.1", Port: 7001},
},
[]string{"https://10.0.0.1:2480", "https://10.0.0.2:2480", "https://10.0.0.3:2480", "http://10.0.0.1:7001"},
},
{
[]*net.SRV{
{Target: "10.0.0.1", Port: 2480},
{Target: "10.0.0.2", Port: 2480},
{Target: "10.0.0.3", Port: 2480},
},
[]*net.SRV{
{Target: "10.0.0.1", Port: 7001},
},
[]string{"https://10.0.0.1:2480", "https://10.0.0.2:2480", "https://10.0.0.3:2480", "http://10.0.0.1:7001"},
},
{
[]*net.SRV{
{Target: "a.example.com", Port: 2480},
{Target: "b.example.com", Port: 2480},
{Target: "c.example.com", Port: 2480},
},
[]*net.SRV{},
[]string{"https://a.example.com:2480", "https://b.example.com:2480", "https://c.example.com:2480"},
},
}
for i, tt := range tests {
lookupSRV = func(service string, proto string, domain string) (string, []*net.SRV, error) {
if service == "etcd-client-ssl" {
return "", tt.withSSL, nil
}
if service == "etcd-client" {
return "", tt.withoutSSL, nil
}
return "", nil, errors.New("Unknown service in mock")
}
d := NewSRVDiscover()
endpoints, err := d.Discover("example.com")
if err != nil {
t.Fatalf("%d: err: %#v", i, err)
}
if !reflect.DeepEqual(endpoints, tt.expected) {
t.Errorf("#%d: endpoints = %v, want %v", i, endpoints, tt.expected)
}
}
}

View File

@@ -77,7 +77,6 @@ func newSimpleBalancer(eps []string) *simpleBalancer {
for i := range eps {
addrs[i].Addr = getHost(eps[i])
}
notifyCh <- addrs
sb := &simpleBalancer{
addrs: addrs,
notifyCh: notifyCh,
@@ -89,6 +88,7 @@ func newSimpleBalancer(eps []string) *simpleBalancer {
updateAddrsC: make(chan struct{}, 1),
host2ep: getHost2ep(eps),
}
close(sb.downc)
go sb.updateNotifyLoop()
return sb
}
@@ -170,38 +170,51 @@ func (b *simpleBalancer) updateNotifyLoop() {
for {
b.mu.RLock()
upc := b.upc
upc, downc, addr := b.upc, b.downc, b.pinAddr
b.mu.RUnlock()
var downc chan struct{}
// downc or upc should be closed
select {
case <-downc:
downc = nil
default:
}
select {
case <-upc:
var addr string
b.mu.RLock()
addr = b.pinAddr
// Up() sets pinAddr and downc as a pair under b.mu
downc = b.downc
b.mu.RUnlock()
if addr == "" {
break
}
// close opened connections that are not pinAddr
// this ensures only one connection is open per client
upc = nil
default:
}
switch {
case downc == nil && upc == nil:
// stale
select {
case b.notifyCh <- []grpc.Address{{Addr: addr}}:
case <-b.stopc:
return
default:
}
case downc == nil:
b.notifyAddrs()
select {
case <-upc:
case <-b.updateAddrsC:
b.notifyAddrs()
case <-b.stopc:
return
}
case upc == nil:
select {
// close connections that are not the pinned address
case b.notifyCh <- []grpc.Address{{Addr: addr}}:
case <-downc:
case <-b.stopc:
return
}
select {
case <-downc:
case <-b.updateAddrsC:
case <-b.stopc:
return
}
case <-b.updateAddrsC:
b.notifyAddrs()
continue
}
select {
case <-downc:
b.notifyAddrs()
case <-b.updateAddrsC:
b.notifyAddrs()
case <-b.stopc:
return
}
}
}
@@ -231,23 +244,20 @@ func (b *simpleBalancer) Up(addr grpc.Address) func(error) {
if !hasAddr(b.addrs, addr.Addr) {
return func(err error) {}
}
if b.pinAddr == "" {
// notify waiting Get()s and pin first connected address
close(b.upc)
b.downc = make(chan struct{})
b.pinAddr = addr.Addr
// notify client that a connection is up
b.readyOnce.Do(func() { close(b.readyc) })
if b.pinAddr != "" {
return func(err error) {}
}
// notify waiting Get()s and pin first connected address
close(b.upc)
b.downc = make(chan struct{})
b.pinAddr = addr.Addr
// notify client that a connection is up
b.readyOnce.Do(func() { close(b.readyc) })
return func(err error) {
b.mu.Lock()
if b.pinAddr == addr.Addr {
b.upc = make(chan struct{})
close(b.downc)
b.pinAddr = ""
}
b.upc = make(chan struct{})
close(b.downc)
b.pinAddr = ""
b.mu.Unlock()
}
}
@@ -280,6 +290,8 @@ func (b *simpleBalancer) Get(ctx context.Context, opts grpc.BalancerGetOptions)
b.mu.RUnlock()
select {
case <-ch:
case <-b.donec:
return grpc.Address{Addr: ""}, nil, grpc.ErrClientConnClosing
case <-ctx.Done():
return grpc.Address{Addr: ""}, nil, ctx.Err()
}

View File

@@ -182,7 +182,7 @@ func parseEndpoint(endpoint string) (proto string, host string, scheme string) {
host = url.Host
switch url.Scheme {
case "http", "https":
case "unix":
case "unix", "unixs":
proto = "unix"
host = url.Host + url.Path
default:
@@ -197,7 +197,7 @@ func (c *Client) processCreds(scheme string) (creds *credentials.TransportCreden
case "unix":
case "http":
creds = nil
case "https":
case "https", "unixs":
if creds != nil {
break
}
@@ -322,7 +322,7 @@ func (c *Client) dial(endpoint string, dopts ...grpc.DialOption) (*grpc.ClientCo
opts = append(opts, c.cfg.DialOptions...)
conn, err := grpc.Dial(host, opts...)
conn, err := grpc.DialContext(c.ctx, host, opts...)
if err != nil {
return nil, err
}
@@ -367,7 +367,9 @@ func newClient(cfg *Config) (*Client, error) {
}
client.balancer = newSimpleBalancer(cfg.Endpoints)
conn, err := client.dial("", grpc.WithBalancer(client.balancer))
// use Endpoints[0] so that, for https:// endpoints without any TLS config
// given, grpc will assume the ServerName is in the endpoint.
conn, err := client.dial(cfg.Endpoints[0], grpc.WithBalancer(client.balancer))
if err != nil {
client.cancel()
client.balancer.Close()
@@ -503,3 +505,11 @@ func toErr(ctx context.Context, err error) error {
}
return err
}
func canceledByCaller(stopCtx context.Context, err error) bool {
if stopCtx.Err() == nil || err == nil {
return false
}
return err == context.Canceled || err == context.DeadlineExceeded
}
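
The hunk above adds the unixs scheme, dials through the client context, and passes Endpoints[0] so gRPC can derive a ServerName for TLS. A sketch of a caller exercising the new scheme; the address and the permissive tls.Config are illustrative only:

package main

import (
	"crypto/tls"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// "unixs" is to "unix" what "https" is to "http": a TLS-secured
	// unix-socket endpoint (the address below is illustrative).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"unixs://localhost:12345"},
		DialTimeout: 5 * time.Second,
		TLS:         &tls.Config{InsecureSkipVerify: true}, // do not use in production
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
}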

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package clientv3util contains utility functions derived from clientv3.
package clientv3util
import (

View File

@@ -51,9 +51,12 @@ func NewSession(client *v3.Client, opts ...SessionOption) (*Session, error) {
}
ctx, cancel := context.WithCancel(ops.ctx)
keepAlive := client.KeepAlive(ctx, id)
donec := make(chan struct{})
keepAlive, err := client.KeepAlive(ctx, id)
if err != nil || keepAlive == nil {
return nil, err
}
donec := make(chan struct{})
s := &Session{client: client, opts: ops, id: id, cancel: cancel, donec: donec}
// keep the lease alive until client error or cancelled context
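
Since client.KeepAlive now returns an error, NewSession can fail fast instead of handing back a session whose lease is already dead. A sketch of the caller side, assuming cli is a connected client:

package main

import (
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

func newSession(cli *clientv3.Client) *concurrency.Session {
	// a failed lease grant or keepalive registration surfaces here as err
	s, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	return s
}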

View File

@@ -369,3 +369,18 @@ func respToValue(resp *v3.GetResponse) string {
}
return string(resp.Kvs[0].Value)
}
// NewSTMRepeatable is deprecated.
func NewSTMRepeatable(ctx context.Context, c *v3.Client, apply func(STM) error) (*v3.TxnResponse, error) {
return NewSTM(c, apply, WithAbortContext(ctx), WithIsolation(RepeatableReads))
}
// NewSTMSerializable is deprecated.
func NewSTMSerializable(ctx context.Context, c *v3.Client, apply func(STM) error) (*v3.TxnResponse, error) {
return NewSTM(c, apply, WithAbortContext(ctx), WithIsolation(Serializable))
}
// NewSTMReadCommitted is deprecated.
func NewSTMReadCommitted(ctx context.Context, c *v3.Client, apply func(STM) error) (*v3.TxnResponse, error) {
return NewSTM(c, apply, WithAbortContext(ctx), WithIsolation(ReadCommitted))
}
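
The deprecated wrappers expand to NewSTM with functional options, so old call sites keep compiling while new code states the isolation level explicitly. A sketch of the equivalent direct call; the keys are placeholders:

package main

import (
	"context"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

func copyKey(ctx context.Context, cli *clientv3.Client) error {
	apply := func(stm concurrency.STM) error {
		// read-modify-write executed atomically by the STM retry loop
		stm.Put("bar", stm.Get("foo"))
		return nil
	}
	// equivalent to the deprecated NewSTMSerializable(ctx, cli, apply)
	_, err := concurrency.NewSTM(cli, apply,
		concurrency.WithAbortContext(ctx),
		concurrency.WithIsolation(concurrency.Serializable))
	return err
}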

View File

@@ -100,13 +100,12 @@ func ExampleLease_keepAlive() {
}
// the key 'foo' will be kept forever
ch := cli.KeepAlive(context.TODO(), resp.ID)
ka := <-ch
if ka.Err != nil {
log.Fatal(ka.Err)
ch, kaerr := cli.KeepAlive(context.TODO(), resp.ID)
if kaerr != nil {
log.Fatal(kaerr)
}
ka := <-ch
fmt.Println("ttl:", ka.TTL)
// Output: ttl: 5
}
@@ -132,9 +131,9 @@ func ExampleLease_keepAliveOnce() {
}
// to renew the lease only once
ka := cli.KeepAliveOnce(context.TODO(), resp.ID)
if ka.Err != nil {
log.Fatal(ka.Err)
ka, kaerr := cli.KeepAliveOnce(context.TODO(), resp.ID)
if kaerr != nil {
log.Fatal(kaerr)
}
fmt.Println("ttl:", ka.TTL)

View File

@@ -30,7 +30,7 @@ import (
"google.golang.org/grpc"
)
func ExampleMetrics_range() {
func ExampleClient_metrics() {
cli, err := clientv3.New(clientv3.Config{
Endpoints: endpoints,
DialOptions: []grpc.DialOption{

View File

@@ -66,6 +66,22 @@ func TestDialTLSExpired(t *testing.T) {
}
}
// TestDialTLSNoConfig ensures the client fails to dial (times out)
// when TLS endpoints (https, unixs) are given but no TLS config is provided.
func TestDialTLSNoConfig(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1, ClientTLS: &testTLSInfo})
defer clus.Terminate(t)
// expect 'signed by unknown authority'
_, err := clientv3.New(clientv3.Config{
Endpoints: []string{clus.Members[0].GRPCAddr()},
DialTimeout: time.Second,
})
if err != grpc.ErrClientConnTimeout {
t.Fatalf("expected %v, got %v", grpc.ErrClientConnTimeout, err)
}
}
// TestDialSetEndpoints ensures SetEndpoints can replace unavailable endpoints with available ones.
func TestDialSetEndpointsBeforeFail(t *testing.T) {
testDialSetEndpoints(t, true)

View File

@@ -104,14 +104,14 @@ func TestLeaseKeepAliveOnce(t *testing.T) {
t.Errorf("failed to create lease %v", err)
}
ka := lapi.KeepAliveOnce(context.Background(), resp.ID)
if ka.Err != nil {
t.Errorf("failed to keepalive lease %v", ka.Err)
_, err = lapi.KeepAliveOnce(context.Background(), resp.ID)
if err != nil {
t.Errorf("failed to keepalive lease %v", err)
}
ka = lapi.KeepAliveOnce(context.Background(), clientv3.LeaseID(0))
if ka.Err != rpctypes.ErrLeaseNotFound {
t.Errorf("expected %v, got %v", rpctypes.ErrLeaseNotFound, ka.Err)
_, err = lapi.KeepAliveOnce(context.Background(), clientv3.LeaseID(0))
if err != rpctypes.ErrLeaseNotFound {
t.Errorf("expected %v, got %v", rpctypes.ErrLeaseNotFound, err)
}
}
@@ -129,7 +129,10 @@ func TestLeaseKeepAlive(t *testing.T) {
t.Errorf("failed to create lease %v", err)
}
rc := lapi.KeepAlive(context.Background(), resp.ID)
rc, kerr := lapi.KeepAlive(context.Background(), resp.ID)
if kerr != nil {
t.Errorf("failed to keepalive lease %v", kerr)
}
kresp, ok := <-rc
if !ok {
@@ -160,7 +163,11 @@ func TestLeaseKeepAliveOneSecond(t *testing.T) {
if err != nil {
t.Errorf("failed to create lease %v", err)
}
rc := cli.KeepAlive(context.Background(), resp.ID)
rc, kerr := cli.KeepAlive(context.Background(), resp.ID)
if kerr != nil {
t.Errorf("failed to keepalive lease %v", kerr)
}
for i := 0; i < 3; i++ {
if _, ok := <-rc; !ok {
t.Errorf("chan is closed, want not closed")
@@ -186,7 +193,10 @@ func TestLeaseKeepAliveHandleFailure(t *testing.T) {
t.Errorf("failed to create lease %v", err)
}
rc := lapi.KeepAlive(context.Background(), resp.ID)
rc, kerr := lapi.KeepAlive(context.Background(), resp.ID)
if kerr != nil {
t.Errorf("failed to keepalive lease %v", kerr)
}
kresp := <-rc
if kresp.ID != resp.ID {
@@ -220,7 +230,7 @@ func TestLeaseKeepAliveHandleFailure(t *testing.T) {
type leaseCh struct {
lid clientv3.LeaseID
ch clientv3.LeaseKeepAliveChan
ch <-chan *clientv3.LeaseKeepAliveResponse
}
// TestLeaseKeepAliveNotFound ensures a revoked lease won't stop other keep alives
@@ -237,7 +247,10 @@ func TestLeaseKeepAliveNotFound(t *testing.T) {
if rerr != nil {
t.Fatal(rerr)
}
kach := cli.KeepAlive(context.Background(), resp.ID)
kach, kaerr := cli.KeepAlive(context.Background(), resp.ID)
if kaerr != nil {
t.Fatal(kaerr)
}
lchs = append(lchs, leaseCh{resp.ID, kach})
}
@@ -362,7 +375,10 @@ func TestLeaseKeepAliveCloseAfterDisconnectRevoke(t *testing.T) {
if err != nil {
t.Fatal(err)
}
rc := cli.KeepAlive(context.Background(), resp.ID)
rc, kerr := cli.KeepAlive(context.Background(), resp.ID)
if kerr != nil {
t.Fatal(kerr)
}
kresp := <-rc
if kresp.ID != resp.ID {
t.Fatalf("ID = %x, want %x", kresp.ID, resp.ID)
@@ -381,10 +397,9 @@ func TestLeaseKeepAliveCloseAfterDisconnectRevoke(t *testing.T) {
// some keep-alives may still be buffered; drain until close
timer := time.After(time.Duration(kresp.TTL) * time.Second)
loop := true
for loop {
for kresp != nil {
select {
case _, loop = <-rc:
case kresp = <-rc:
case <-timer:
t.Fatalf("keepalive channel did not close")
}
@@ -408,7 +423,10 @@ func TestLeaseKeepAliveInitTimeout(t *testing.T) {
}
// keep client disconnected
clus.Members[0].Stop(t)
rc := cli.KeepAlive(context.Background(), resp.ID)
rc, kerr := cli.KeepAlive(context.Background(), resp.ID)
if kerr != nil {
t.Fatal(kerr)
}
select {
case ka, ok := <-rc:
if ok {
@@ -436,7 +454,10 @@ func TestLeaseKeepAliveTTLTimeout(t *testing.T) {
if err != nil {
t.Fatal(err)
}
rc := cli.KeepAlive(context.Background(), resp.ID)
rc, kerr := cli.KeepAlive(context.Background(), resp.ID)
if kerr != nil {
t.Fatal(kerr)
}
if kresp := <-rc; kresp.ID != resp.ID {
t.Fatalf("ID = %x, want %x", kresp.ID, resp.ID)
}
@@ -559,7 +580,10 @@ func TestLeaseRenewLostQuorum(t *testing.T) {
kctx, kcancel := context.WithCancel(context.Background())
defer kcancel()
ka := cli.KeepAlive(kctx, r.ID)
ka, err := cli.KeepAlive(kctx, r.ID)
if err != nil {
t.Fatal(err)
}
// consume first keepalive so next message sends when cluster is down
<-ka
lastKa := time.Now()
@@ -606,9 +630,9 @@ func TestLeaseKeepAliveLoopExit(t *testing.T) {
}
cli.Close()
ka := cli.KeepAlive(ctx, resp.ID)
if resp, ok := <-ka; ok {
t.Fatalf("expected closed channel, got response %+v", resp)
_, err = cli.KeepAlive(ctx, resp.ID)
if _, ok := err.(clientv3.ErrKeepAliveHalted); !ok {
t.Fatalf("expected %T, got %v(%T)", clientv3.ErrKeepAliveHalted{}, err, err)
}
}
@@ -683,9 +707,15 @@ func TestLeaseWithRequireLeader(t *testing.T) {
t.Fatal(err2)
}
// kaReqLeader close if the leader is lost
kaReqLeader := c.KeepAlive(clientv3.WithRequireLeader(context.TODO()), lid1.ID)
kaReqLeader, kerr1 := c.KeepAlive(clientv3.WithRequireLeader(context.TODO()), lid1.ID)
if kerr1 != nil {
t.Fatal(kerr1)
}
// kaWait will wait even if the leader is lost
kaWait := c.KeepAlive(context.TODO(), lid2.ID)
kaWait, kerr2 := c.KeepAlive(context.TODO(), lid2.ID)
if kerr2 != nil {
t.Fatal(kerr2)
}
select {
case <-kaReqLeader:
@@ -699,6 +729,12 @@ func TestLeaseWithRequireLeader(t *testing.T) {
}
clus.Members[1].Stop(t)
// kaReqLeader may issue multiple requests while waiting for the first
// response from proxy server; drain any stray keepalive responses
time.Sleep(100 * time.Millisecond)
for len(kaReqLeader) > 0 {
<-kaReqLeader
}
select {
case resp, ok := <-kaReqLeader:

View File

@@ -51,11 +51,6 @@ type KV interface {
// Compact compacts etcd KV history before the given rev.
Compact(ctx context.Context, rev int64, opts ...CompactOption) (*CompactResponse, error)
// Do applies a single Op on KV without a transaction.
// Do is useful when declaring operations to be issued at a later time
// whereas Get/Put/Delete are for better suited for when the operation
// should be immediately issued at time of declaration.
// Do applies a single Op on KV without a transaction.
// Do is useful when creating arbitrary operations to be issued at a
// later time; the user can range over the operations, calling Do to

View File

@@ -41,10 +41,8 @@ type LeaseGrantResponse struct {
// LeaseKeepAliveResponse is used to convert the protobuf keepalive response.
type LeaseKeepAliveResponse struct {
*pb.ResponseHeader
ID LeaseID
TTL int64
Err error
Deadline time.Time
ID LeaseID
TTL int64
}
// LeaseTimeToLiveResponse is used to convert the protobuf lease timetolive response.
@@ -71,12 +69,24 @@ const (
// NoLease is a lease ID for the absence of a lease.
NoLease LeaseID = 0
// retryConnWait is how long to wait before retrying on a lost leader
// or keep alive loop failure.
// retryConnWait is how long to wait before retrying a request after an error
retryConnWait = 500 * time.Millisecond
)
type LeaseKeepAliveChan <-chan LeaseKeepAliveResponse
// ErrKeepAliveHalted is returned if client keep alive loop halts with an unexpected error.
//
// This usually means that automatic lease renewal via KeepAlive is broken, but KeepAliveOnce will still work as expected.
type ErrKeepAliveHalted struct {
Reason error
}
func (e ErrKeepAliveHalted) Error() string {
s := "etcdclient: leases keep alive halted"
if e.Reason != nil {
s += ": " + e.Reason.Error()
}
return s
}
type Lease interface {
// Grant creates a new lease.
@@ -88,24 +98,12 @@ type Lease interface {
// TimeToLive retrieves the lease information of the given lease ID.
TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption) (*LeaseTimeToLiveResponse, error)
// KeepAlive keeps the given lease alive forever. If the keepalive response posted to
// the channel is not consumed immediately, the lease client will continue sending keep alive requests
// to the etcd server at least every second until latest response is consumed.
//
// The KeepAlive channel closes if the underlying keep alive stream is interrupted in some
// way the client cannot handle itself; the error will be posted in the last keep
// alive message before closing. If there is no keepalive response within the
// lease's time-out, the channel will close with no error. In most cases calling
// KeepAlive again will re-establish keepalives with the target lease if it has not
// expired.
KeepAlive(ctx context.Context, id LeaseID) LeaseKeepAliveChan
// KeepAlive keeps the given lease alive forever.
KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error)
// KeepAliveOnce renews the lease once. The response corresponds to the
// first message from calling KeepAlive. If the response has a recoverable
// error, KeepAliveOnce will retry the RPC with a new keep alive message.
//
// In most of the cases, Keepalive should be used instead of KeepAliveOnce.
KeepAliveOnce(ctx context.Context, id LeaseID) LeaseKeepAliveResponse
// KeepAliveOnce renews the lease once. In most of the cases, Keepalive
// should be used instead of KeepAliveOnce.
KeepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAliveResponse, error)
// Close releases all resources Lease keeps for efficient communication
// with the etcd server.
@@ -115,8 +113,9 @@ type Lease interface {
type lessor struct {
mu sync.Mutex // guards all fields
// donec is closed when all goroutines are torn down from Close()
donec chan struct{}
// donec is closed and loopErr is set when recvKeepAliveLoop stops
donec chan struct{}
loopErr error
remote pb.LeaseClient
@@ -138,7 +137,7 @@ type lessor struct {
// keepAlive multiplexes a keepalive for a lease over multiple channels
type keepAlive struct {
chs []chan<- LeaseKeepAliveResponse
chs []chan<- *LeaseKeepAliveResponse
ctxs []context.Context
// deadline is the time the keep alive channels close if no response
deadline time.Time
@@ -220,22 +219,24 @@ func (l *lessor) TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption
}
}
func (l *lessor) KeepAlive(ctx context.Context, id LeaseID) LeaseKeepAliveChan {
ch := make(chan LeaseKeepAliveResponse, leaseResponseChSize)
func (l *lessor) KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error) {
ch := make(chan *LeaseKeepAliveResponse, leaseResponseChSize)
l.mu.Lock()
// ensure that recvKeepAliveLoop is still running
select {
case <-l.donec:
err := l.loopErr
l.mu.Unlock()
close(ch)
return ch
return ch, ErrKeepAliveHalted{Reason: err}
default:
}
ka, ok := l.keepAlives[id]
if !ok {
// create fresh keep alive
ka = &keepAlive{
chs: []chan<- LeaseKeepAliveResponse{ch},
chs: []chan<- *LeaseKeepAliveResponse{ch},
ctxs: []context.Context{ctx},
deadline: time.Now().Add(l.firstKeepAliveTimeout),
nextKeepAlive: time.Now(),
@@ -251,51 +252,24 @@ func (l *lessor) KeepAlive(ctx context.Context, id LeaseID) LeaseKeepAliveChan {
go l.keepAliveCtxCloser(id, ctx, ka.donec)
l.firstKeepAliveOnce.Do(func() {
go func() {
defer func() {
l.mu.Lock()
for _, ka := range l.keepAlives {
ka.Close(nil)
}
close(l.donec)
l.mu.Unlock()
}()
for l.stopCtx.Err() == nil {
err := l.recvKeepAliveLoop()
if err == context.Canceled {
// canceled by user; no error like WatchChan
err = nil
}
l.mu.Lock()
for _, ka := range l.keepAlives {
ka.Close(err)
}
l.keepAlives = make(map[LeaseID]*keepAlive)
l.mu.Unlock()
select {
case <-l.stopCtx.Done():
case <-time.After(retryConnWait):
}
}
}()
go l.recvKeepAliveLoop()
go l.deadlineLoop()
})
return ch
return ch, nil
}
func (l *lessor) KeepAliveOnce(ctx context.Context, id LeaseID) LeaseKeepAliveResponse {
func (l *lessor) KeepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAliveResponse, error) {
for {
resp := l.keepAliveOnce(ctx, id)
if resp.Err == nil {
resp, err := l.keepAliveOnce(ctx, id)
if err == nil {
if resp.TTL <= 0 {
resp.Err = rpctypes.ErrLeaseNotFound
err = rpctypes.ErrLeaseNotFound
}
return resp
return resp, err
}
if isHaltErr(ctx, resp.Err) {
return resp
if isHaltErr(ctx, err) {
return nil, toErr(ctx, err)
}
}
}
@@ -365,7 +339,7 @@ func (l *lessor) closeRequireLeader() {
continue
}
// remove all channels that required a leader from keepalive
newChs := make([]chan<- LeaseKeepAliveResponse, len(ka.chs)-reqIdxs)
newChs := make([]chan<- *LeaseKeepAliveResponse, len(ka.chs)-reqIdxs)
newCtxs := make([]context.Context, len(newChs))
newIdx := 0
for i := range ka.chs {
@@ -379,62 +353,84 @@ func (l *lessor) closeRequireLeader() {
}
}
func (l *lessor) keepAliveOnce(ctx context.Context, id LeaseID) LeaseKeepAliveResponse {
func (l *lessor) keepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAliveResponse, error) {
cctx, cancel := context.WithCancel(ctx)
defer cancel()
stream, err := l.remote.LeaseKeepAlive(cctx, grpc.FailFast(false))
if err != nil {
return LeaseKeepAliveResponse{Err: toErr(ctx, err)}
return nil, toErr(ctx, err)
}
err = stream.Send(&pb.LeaseKeepAliveRequest{ID: int64(id)})
if err != nil {
return LeaseKeepAliveResponse{Err: toErr(ctx, err)}
return nil, toErr(ctx, err)
}
resp, rerr := stream.Recv()
if rerr != nil {
return LeaseKeepAliveResponse{Err: toErr(ctx, rerr)}
return nil, toErr(ctx, rerr)
}
return LeaseKeepAliveResponse{
karesp := &LeaseKeepAliveResponse{
ResponseHeader: resp.GetHeader(),
ID: LeaseID(resp.ID),
TTL: resp.TTL,
Deadline: time.Now().Add(time.Duration(resp.TTL) * time.Second),
}
return karesp, nil
}
func (l *lessor) recvKeepAliveLoop() (gerr error) {
stream, serr := l.resetRecv()
for serr == nil {
resp, err := stream.Recv()
if err == nil {
l.recvKeepAlive(resp)
continue
defer func() {
l.mu.Lock()
close(l.donec)
l.loopErr = gerr
for _, ka := range l.keepAlives {
ka.Close()
}
err = toErr(l.stopCtx, err)
if err == rpctypes.ErrNoLeader {
l.closeRequireLeader()
select {
case <-time.After(retryConnWait):
case <-l.stopCtx.Done():
l.keepAlives = make(map[LeaseID]*keepAlive)
l.mu.Unlock()
}()
for {
stream, err := l.resetRecv()
if err != nil {
if canceledByCaller(l.stopCtx, err) {
return err
}
} else if isHaltErr(l.stopCtx, err) {
return err
} else {
for {
resp, err := stream.Recv()
if err != nil {
if canceledByCaller(l.stopCtx, err) {
return err
}
if toErr(l.stopCtx, err) == rpctypes.ErrNoLeader {
l.closeRequireLeader()
}
break
}
l.recvKeepAlive(resp)
}
}
select {
case <-time.After(retryConnWait):
continue
case <-l.stopCtx.Done():
return l.stopCtx.Err()
}
stream, serr = l.resetRecv()
}
return serr
}
// resetRecv opens a new lease stream and starts sending LeaseKeepAliveRequests
func (l *lessor) resetRecv() (pb.Lease_LeaseKeepAliveClient, error) {
sctx, cancel := context.WithCancel(l.stopCtx)
stream, err := l.remote.LeaseKeepAlive(sctx, grpc.FailFast(false))
if err = toErr(sctx, err); err != nil {
if err != nil {
cancel()
return nil, err
}
@@ -458,7 +454,6 @@ func (l *lessor) recvKeepAlive(resp *pb.LeaseKeepAliveResponse) {
ResponseHeader: resp.GetHeader(),
ID: LeaseID(resp.ID),
TTL: resp.TTL,
Deadline: time.Now().Add(time.Duration(resp.TTL) * time.Second),
}
l.mu.Lock()
@@ -472,7 +467,7 @@ func (l *lessor) recvKeepAlive(resp *pb.LeaseKeepAliveResponse) {
if karesp.TTL <= 0 {
// lease expired; close all keep alive channels
delete(l.keepAlives, karesp.ID)
ka.Close(nil)
ka.Close()
return
}
@@ -481,7 +476,7 @@ func (l *lessor) recvKeepAlive(resp *pb.LeaseKeepAliveResponse) {
ka.deadline = time.Now().Add(time.Duration(karesp.TTL) * time.Second)
for _, ch := range ka.chs {
select {
case ch <- *karesp:
case ch <- karesp:
ka.nextKeepAlive = nextKeepAlive
default:
}
@@ -502,7 +497,7 @@ func (l *lessor) deadlineLoop() {
for id, ka := range l.keepAlives {
if ka.deadline.Before(now) {
// waited too long for response; lease may be expired
ka.Close(nil)
ka.Close()
delete(l.keepAlives, id)
}
}
@@ -544,18 +539,9 @@ func (l *lessor) sendKeepAliveLoop(stream pb.Lease_LeaseKeepAliveClient) {
}
}
func (ka *keepAlive) Close(err error) {
func (ka *keepAlive) Close() {
close(ka.donec)
for _, ch := range ka.chs {
if err != nil {
// try to post error if buffer space available
select {
case ch <- LeaseKeepAliveResponse{Err: err}:
default:
}
}
close(ch)
}
// so keepAliveCtxClose doesn't double-close ka.chs
ka.chs, ka.ctxs = nil, nil
}
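
Taken together, these lease changes move error reporting from an Err field on each response to ordinary return values, with a halted keepalive loop reported as ErrKeepAliveHalted. A migration sketch for callers (cli and id assumed to exist):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
)

func keepLeaseAlive(cli *clientv3.Client, id clientv3.LeaseID) {
	ch, err := cli.KeepAlive(context.Background(), id)
	if err != nil {
		if _, halted := err.(clientv3.ErrKeepAliveHalted); halted {
			log.Fatalf("keepalive loop stopped: %v", err)
		}
		log.Fatal(err)
	}
	// the channel now carries *LeaseKeepAliveResponse and closes when the
	// lease expires or the keepalive stream cannot be re-established
	for resp := range ch {
		fmt.Println("renewed lease, ttl:", resp.TTL)
	}
}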

56
clientv3/naming/doc.go Normal file
View File

@@ -0,0 +1,56 @@
// Copyright 2017 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package naming provides an etcd-backed gRPC resolver for discovering gRPC services.
//
// To use, first import the packages:
//
// import (
// "github.com/coreos/etcd/clientv3"
// etcdnaming "github.com/coreos/etcd/clientv3/naming"
//
// "google.golang.org/grpc"
// "google.golang.org/grpc/naming"
// )
//
// First, register new endpoint addresses for a service:
//
// func etcdAdd(c *clientv3.Client, service, addr string) error {
// r := &etcdnaming.GRPCResolver{Client: c}
// return r.Update(c.Ctx(), service, naming.Update{Op: naming.Add, Addr: addr})
// }
//
// Dial an RPC service using the etcd gRPC resolver and a gRPC Balancer:
//
// func etcdDial(c *clientv3.Client, service string) (*grpc.ClientConn, error) {
// r := &etcdnaming.GRPCResolver{Client: c}
// b := grpc.RoundRobin(r)
// return grpc.Dial(service, grpc.WithBalancer(b))
// }
//
// Optionally, force delete an endpoint:
//
// func etcdDelete(c *clientv3, service, addr string) error {
// r := &etcdnaming.GRPCResolver{Client: c}
// return r.Update(c.Ctx(), "my-service", naming.Update{Op: naming.Delete, Addr: "1.2.3.4"})
// }
//
// Or register an expiring endpoint with a lease:
//
// func etcdLeaseAdd(c *clientv3.Client, lid clientv3.LeaseID, service, addr string) error {
// r := &etcdnaming.GRPCResolver{Client: c}
// return r.Update(c.Ctx(), service, naming.Update{Op: naming.Add, Addr: addr}, clientv3.WithLease(lid))
// }
//
package naming

View File

@@ -24,6 +24,7 @@ import (
mvccpb "github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
)
const (
@@ -65,6 +66,9 @@ type WatchResponse struct {
Created bool
closeErr error
// cancelReason is a reason of canceling watch
cancelReason string
}
// IsCreate returns true if the event tells that the key is newly created.
@@ -85,6 +89,9 @@ func (wr *WatchResponse) Err() error {
case wr.CompactRevision != 0:
return v3rpc.ErrCompacted
case wr.Canceled:
if len(wr.cancelReason) != 0 {
return v3rpc.Error(grpc.Errorf(codes.FailedPrecondition, "%s", wr.cancelReason))
}
return v3rpc.ErrFutureRev
}
return nil
@@ -520,10 +527,6 @@ func (w *watchGrpcStream) nextResume() *watcherStream {
// dispatchEvent sends a WatchResponse to the appropriate watcher stream
func (w *watchGrpcStream) dispatchEvent(pbresp *pb.WatchResponse) bool {
ws, ok := w.substreams[pbresp.WatchId]
if !ok {
return false
}
events := make([]*Event, len(pbresp.Events))
for i, ev := range pbresp.Events {
events[i] = (*Event)(ev)
@@ -534,6 +537,11 @@ func (w *watchGrpcStream) dispatchEvent(pbresp *pb.WatchResponse) bool {
CompactRevision: pbresp.CompactRevision,
Created: pbresp.Created,
Canceled: pbresp.Canceled,
cancelReason: pbresp.CancelReason,
}
ws, ok := w.substreams[pbresp.WatchId]
if !ok {
return false
}
select {
case ws.recvc <- wr:
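
With cancelReason carried on the response, a server-side watch cancellation (for example, a permission failure) now surfaces through WatchResponse.Err() as a FailedPrecondition error. A sketch of the consuming side, assuming cli is a connected client:

package main

import (
	"context"
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

func watchKey(cli *clientv3.Client) {
	for wr := range cli.Watch(context.Background(), "key") {
		if err := wr.Err(); err != nil {
			// includes the server's cancel reason when Canceled is set
			fmt.Println("watch error:", err)
			return
		}
		for _, ev := range wr.Events {
			fmt.Printf("%s %q=%q\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}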

View File

@@ -42,7 +42,7 @@ func TestEvent(t *testing.T) {
ModRevision: 4,
},
},
isModify: false,
isModify: true,
}}
for i, tt := range tests {
if tt.isCreate && !tt.ev.IsCreate() {

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package yaml handles yaml-formatted clientv3 configuration data.
package yaml
import (

View File

@@ -1,3 +1,18 @@
// Copyright 2013-2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Semantic Versions http://semver.org
package semver
import (
@@ -29,35 +44,21 @@ func splitOff(input *string, delim string) (val string) {
return val
}
func New(version string) *Version {
return Must(NewVersion(version))
}
func NewVersion(version string) (*Version, error) {
v := Version{}
dotParts := strings.SplitN(version, ".", 3)
if len(dotParts) != 3 {
return nil, errors.New(fmt.Sprintf("%s is not in dotted-tri format", version))
if err := v.Set(version); err != nil {
return nil, err
}
v.Metadata = splitOff(&dotParts[2], "+")
v.PreRelease = PreRelease(splitOff(&dotParts[2], "-"))
parsed := make([]int64, 3, 3)
for i, v := range dotParts[:3] {
val, err := strconv.ParseInt(v, 10, 64)
parsed[i] = val
if err != nil {
return nil, err
}
}
v.Major = parsed[0]
v.Minor = parsed[1]
v.Patch = parsed[2]
return &v, nil
}
// Must is a helper for wrapping NewVersion and will panic if err is not nil.
func Must(v *Version, err error) *Version {
if err != nil {
panic(err)
@@ -65,45 +66,99 @@ func Must(v *Version, err error) *Version {
return v
}
func (v *Version) String() string {
// Set parses and updates v from the given version string. Implements flag.Value
func (v *Version) Set(version string) error {
metadata := splitOff(&version, "+")
preRelease := PreRelease(splitOff(&version, "-"))
dotParts := strings.SplitN(version, ".", 3)
if len(dotParts) != 3 {
return fmt.Errorf("%s is not in dotted-tri format", version)
}
parsed := make([]int64, 3, 3)
for i, v := range dotParts[:3] {
val, err := strconv.ParseInt(v, 10, 64)
parsed[i] = val
if err != nil {
return err
}
}
v.Metadata = metadata
v.PreRelease = preRelease
v.Major = parsed[0]
v.Minor = parsed[1]
v.Patch = parsed[2]
return nil
}
func (v Version) String() string {
var buffer bytes.Buffer
base := fmt.Sprintf("%d.%d.%d", v.Major, v.Minor, v.Patch)
buffer.WriteString(base)
fmt.Fprintf(&buffer, "%d.%d.%d", v.Major, v.Minor, v.Patch)
if v.PreRelease != "" {
buffer.WriteString(fmt.Sprintf("-%s", v.PreRelease))
fmt.Fprintf(&buffer, "-%s", v.PreRelease)
}
if v.Metadata != "" {
buffer.WriteString(fmt.Sprintf("+%s", v.Metadata))
fmt.Fprintf(&buffer, "+%s", v.Metadata)
}
return buffer.String()
}
func (v *Version) LessThan(versionB Version) bool {
versionA := *v
cmp := recursiveCompare(versionA.Slice(), versionB.Slice())
if cmp == 0 {
cmp = preReleaseCompare(versionA, versionB)
func (v *Version) UnmarshalYAML(unmarshal func(interface{}) error) error {
var data string
if err := unmarshal(&data); err != nil {
return err
}
if cmp == -1 {
return true
}
return false
return v.Set(data)
}
/* Slice converts the comparable parts of the semver into a slice of strings */
func (v *Version) Slice() []int64 {
func (v Version) MarshalJSON() ([]byte, error) {
return []byte(`"` + v.String() + `"`), nil
}
func (v *Version) UnmarshalJSON(data []byte) error {
l := len(data)
if l == 0 || string(data) == `""` {
return nil
}
if l < 2 || data[0] != '"' || data[l-1] != '"' {
return errors.New("invalid semver string")
}
return v.Set(string(data[1 : l-1]))
}
// Compare tests if v is less than, equal to, or greater than versionB,
// returning -1, 0, or +1 respectively.
func (v Version) Compare(versionB Version) int {
if cmp := recursiveCompare(v.Slice(), versionB.Slice()); cmp != 0 {
return cmp
}
return preReleaseCompare(v, versionB)
}
// Equal tests if v is equal to versionB.
func (v Version) Equal(versionB Version) bool {
return v.Compare(versionB) == 0
}
// LessThan tests if v is less than versionB.
func (v Version) LessThan(versionB Version) bool {
return v.Compare(versionB) < 0
}
// Slice converts the comparable parts of the semver into a slice of integers.
func (v Version) Slice() []int64 {
return []int64{v.Major, v.Minor, v.Patch}
}
func (p *PreRelease) Slice() []string {
preRelease := string(*p)
func (p PreRelease) Slice() []string {
preRelease := string(p)
return strings.Split(preRelease, ".")
}
@@ -119,7 +174,7 @@ func preReleaseCompare(versionA Version, versionB Version) int {
return -1
}
// If there is a prelease, check and compare each part.
// If there is a prerelease, check and compare each part.
return recursivePreReleaseCompare(a.Slice(), b.Slice())
}
@@ -141,9 +196,12 @@ func recursiveCompare(versionA []int64, versionB []int64) int {
}
func recursivePreReleaseCompare(versionA []string, versionB []string) int {
// Handle slice length disparity.
// A larger set of pre-release fields has a higher precedence than a smaller set,
// if all of the preceding identifiers are equal.
if len(versionA) == 0 {
// Nothing to compare to, so we return 0
if len(versionB) > 0 {
return -1
}
return 0
} else if len(versionB) == 0 {
// We're longer than versionB so return 1.
@@ -153,7 +211,8 @@ func recursivePreReleaseCompare(versionA []string, versionB []string) int {
a := versionA[0]
b := versionB[0]
aInt := false; bInt := false
aInt := false
bInt := false
aI, err := strconv.Atoi(versionA[0])
if err == nil {
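
The rewrite centralizes parsing in Set (so it can back flag.Value and the JSON/YAML unmarshalers) and derives Equal/LessThan from the new Compare. A brief usage sketch:

package main

import (
	"encoding/json"
	"fmt"

	"github.com/coreos/go-semver/semver"
)

func main() {
	a := semver.New("3.2.0")          // panics on malformed input via Must
	b := semver.New("3.2.1-rc.1+git") // metadata and pre-release split off first

	fmt.Println(a.LessThan(*b)) // true
	fmt.Println(a.Compare(*b))  // -1

	out, _ := json.Marshal(b) // Version now implements json.Marshaler
	fmt.Println(string(out))  // "3.2.1-rc.1+git"

	var v semver.Version
	if err := v.Set("1.2.3"); err != nil { // flag.Value-style parsing
		panic(err)
	}
	fmt.Println(v.Equal(semver.Version{Major: 1, Minor: 2, Patch: 3})) // true
}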

View File

@@ -1,3 +1,17 @@
// Copyright 2013-2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package semver
import (

View File

@@ -45,7 +45,11 @@ func indirect(v reflect.Value, decodingNull bool) (json.Unmarshaler, encoding.Te
break
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
if v.CanSet() {
v.Set(reflect.New(v.Type().Elem()))
} else {
v = reflect.New(v.Type().Elem())
}
}
if v.Type().NumMethod() > 0 {
if u, ok := v.Interface().(json.Unmarshaler); ok {

View File

@@ -15,12 +15,12 @@ import (
func Marshal(o interface{}) ([]byte, error) {
j, err := json.Marshal(o)
if err != nil {
return nil, fmt.Errorf("error marshaling into JSON: ", err)
return nil, fmt.Errorf("error marshaling into JSON: %v", err)
}
y, err := JSONToYAML(j)
if err != nil {
return nil, fmt.Errorf("error converting JSON to YAML: ", err)
return nil, fmt.Errorf("error converting JSON to YAML: %v", err)
}
return y, nil
@@ -48,7 +48,7 @@ func JSONToYAML(j []byte) ([]byte, error) {
var jsonObj interface{}
// We are using yaml.Unmarshal here (instead of json.Unmarshal) because the
// Go JSON library doesn't try to pick the right number type (int, float,
// etc.) when unmarshling to interface{}, it just picks float64
// etc.) when unmarshalling to interface{}, it just picks float64
// universally. go-yaml does go through the effort of picking the right
// number type, so we can preserve number type throughout this process.
err := yaml.Unmarshal(j, &jsonObj)

View File

@@ -1,3 +1,5 @@
// +build !windows
package pty
import "syscall"

76
cmd/vendor/github.com/kr/pty/pty_dragonfly.go generated vendored Normal file
View File

@@ -0,0 +1,76 @@
package pty
import (
"errors"
"os"
"strings"
"syscall"
"unsafe"
)
// same code as pty_darwin.go
func open() (pty, tty *os.File, err error) {
p, err := os.OpenFile("/dev/ptmx", os.O_RDWR, 0)
if err != nil {
return nil, nil, err
}
sname, err := ptsname(p)
if err != nil {
return nil, nil, err
}
err = grantpt(p)
if err != nil {
return nil, nil, err
}
err = unlockpt(p)
if err != nil {
return nil, nil, err
}
t, err := os.OpenFile(sname, os.O_RDWR, 0)
if err != nil {
return nil, nil, err
}
return p, t, nil
}
func grantpt(f *os.File) error {
_, err := isptmaster(f.Fd())
return err
}
func unlockpt(f *os.File) error {
_, err := isptmaster(f.Fd())
return err
}
func isptmaster(fd uintptr) (bool, error) {
err := ioctl(fd, syscall.TIOCISPTMASTER, 0)
return err == nil, err
}
var (
emptyFiodgnameArg fiodgnameArg
ioctl_FIODNAME = _IOW('f', 120, unsafe.Sizeof(emptyFiodgnameArg))
)
func ptsname(f *os.File) (string, error) {
name := make([]byte, _C_SPECNAMELEN)
fa := fiodgnameArg{Name: (*byte)(unsafe.Pointer(&name[0])), Len: _C_SPECNAMELEN, Pad_cgo_0: [4]byte{0, 0, 0, 0}}
err := ioctl(f.Fd(), ioctl_FIODNAME, uintptr(unsafe.Pointer(&fa)))
if err != nil {
return "", err
}
for i, c := range name {
if c == 0 {
s := "/dev/" + string(name[:i])
return strings.Replace(s, "ptm", "pts", -1), nil
}
}
return "", errors.New("TIOCPTYGNAME string not NUL-terminated")
}

View File

@@ -1,4 +1,4 @@
// +build !linux,!darwin,!freebsd
// +build !linux,!darwin,!freebsd,!dragonfly
package pty

View File

@@ -1,3 +1,5 @@
// +build !windows
package pty
import (

17
cmd/vendor/github.com/kr/pty/types_dragonfly.go generated vendored Normal file
View File

@@ -0,0 +1,17 @@
// +build ignore
package pty
/*
#define _KERNEL
#include <sys/conf.h>
#include <sys/param.h>
#include <sys/filio.h>
*/
import "C"
const (
_C_SPECNAMELEN = C.SPECNAMELEN /* max length of devicename */
)
type fiodgnameArg C.struct_fiodname_args

View File

@@ -1,3 +1,5 @@
// +build !windows
package pty
import (

14
cmd/vendor/github.com/kr/pty/ztypes_dragonfly_amd64.go generated vendored Normal file
View File

@@ -0,0 +1,14 @@
// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs types_dragonfly.go
package pty
const (
_C_SPECNAMELEN = 0x3f
)
type fiodgnameArg struct {
Name *byte
Len uint32
Pad_cgo_0 [4]byte
}

12
cmd/vendor/github.com/kr/pty/ztypes_mipsx.go generated vendored Normal file
View File

@@ -0,0 +1,12 @@
// Created by cgo -godefs - DO NOT EDIT
// cgo -godefs types.go
// +build linux
// +build mips mipsle mips64 mips64le
package pty
type (
_C_int int32
_C_uint uint32
)

View File

@@ -15,8 +15,7 @@ package blackfriday
import (
"bytes"
"github.com/shurcooL/sanitized_anchor_name"
"unicode"
)
// Parse block-level data.
@@ -243,7 +242,7 @@ func (p *parser) prefixHeader(out *bytes.Buffer, data []byte) int {
}
if end > i {
if id == "" && p.flags&EXTENSION_AUTO_HEADER_IDS != 0 {
id = sanitized_anchor_name.Create(string(data[i:end]))
id = SanitizedAnchorName(string(data[i:end]))
}
work := func() bool {
p.inline(out, data[i:end])
@@ -1364,7 +1363,7 @@ func (p *parser) paragraph(out *bytes.Buffer, data []byte) int {
id := ""
if p.flags&EXTENSION_AUTO_HEADER_IDS != 0 {
id = sanitized_anchor_name.Create(string(data[prev:eol]))
id = SanitizedAnchorName(string(data[prev:eol]))
}
p.r.Header(out, work, level, id)
@@ -1428,3 +1427,24 @@ func (p *parser) paragraph(out *bytes.Buffer, data []byte) int {
p.renderParagraph(out, data[:i])
return i
}
// SanitizedAnchorName returns a sanitized anchor name for the given text.
//
// It implements the algorithm specified in the package comment.
func SanitizedAnchorName(text string) string {
var anchorName []rune
futureDash := false
for _, r := range text {
switch {
case unicode.IsLetter(r) || unicode.IsNumber(r):
if futureDash && len(anchorName) > 0 {
anchorName = append(anchorName, '-')
}
futureDash = false
anchorName = append(anchorName, unicode.ToLower(r))
default:
futureDash = true
}
}
return string(anchorName)
}

32
cmd/vendor/github.com/russross/blackfriday/doc.go generated vendored Normal file
View File

@@ -0,0 +1,32 @@
// Package blackfriday is a Markdown processor.
//
// It translates plain text with simple formatting rules into HTML or LaTeX.
//
// Sanitized Anchor Names
//
// Blackfriday includes an algorithm for creating sanitized anchor names
// corresponding to a given input text. This algorithm is used to create
// anchors for headings when EXTENSION_AUTO_HEADER_IDS is enabled. The
// algorithm is specified below, so that other packages can create
// compatible anchor names and links to those anchors.
//
// The algorithm iterates over the input text, interpreted as UTF-8,
// one Unicode code point (rune) at a time. All runes that are letters (category L)
// or numbers (category N) are considered valid characters. They are mapped to
// lower case, and included in the output. All other runes are considered
// invalid characters. Invalid characters that precede the first valid character,
// as well as invalid characters that follow the last valid character,
// are dropped completely. All other sequences of invalid characters
// between two valid characters are replaced with a single dash character '-'.
//
// SanitizedAnchorName exposes this functionality, and can be used to
// create compatible links to the anchor names generated by blackfriday.
// This algorithm is also implemented in a small standalone package at
// github.com/shurcooL/sanitized_anchor_name. It can be useful for clients
// that want a small package and don't need full functionality of blackfriday.
package blackfriday
// NOTE: Keep Sanitized Anchor Name algorithm in sync with package
// github.com/shurcooL/sanitized_anchor_name.
// Otherwise, users of sanitized_anchor_name will get anchor names
// that are incompatible with those generated by blackfriday.
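
A quick sketch of the exported algorithm in use; the inputs are arbitrary examples:

package main

import (
	"fmt"

	"github.com/russross/blackfriday"
)

func main() {
	// letters and numbers are lower-cased; every other run of characters
	// collapses to a single '-'; leading/trailing invalid runs are dropped
	fmt.Println(blackfriday.SanitizedAnchorName("Hello, World!")) // hello-world
	fmt.Println(blackfriday.SanitizedAnchorName(" e.g. 100% done ")) // e-g-100-done
}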

View File

@@ -13,9 +13,6 @@
//
//
// Blackfriday markdown processor.
//
// Translates plain text with simple formatting rules into HTML or LaTeX.
package blackfriday
import (

View File

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2015 Dmitri Shuralyov
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,29 +0,0 @@
// Package sanitized_anchor_name provides a func to create sanitized anchor names.
//
// Its logic can be reused by multiple packages to create interoperable anchor names
// and links to those anchors.
//
// At this time, it does not try to ensure that generated anchor names
// are unique, that responsibility falls on the caller.
package sanitized_anchor_name // import "github.com/shurcooL/sanitized_anchor_name"
import "unicode"
// Create returns a sanitized anchor name for the given text.
func Create(text string) string {
var anchorName []rune
var futureDash = false
for _, r := range []rune(text) {
switch {
case unicode.IsLetter(r) || unicode.IsNumber(r):
if futureDash && len(anchorName) > 0 {
anchorName = append(anchorName, '-')
}
futureDash = false
anchorName = append(anchorName, unicode.ToLower(r))
default:
futureDash = true
}
}
return string(anchorName)
}

View File

@@ -90,16 +90,20 @@ func (in *input) charinfoNFKC(p int) (uint16, int) {
}
func (in *input) hangul(p int) (r rune) {
var size int
if in.bytes == nil {
if !isHangulString(in.str[p:]) {
return 0
}
r, _ = utf8.DecodeRuneInString(in.str[p:])
r, size = utf8.DecodeRuneInString(in.str[p:])
} else {
if !isHangul(in.bytes[p:]) {
return 0
}
r, _ = utf8.DecodeRune(in.bytes[p:])
r, size = utf8.DecodeRune(in.bytes[p:])
}
if size != hangulUTF8Size {
return 0
}
return r
}

View File

@@ -30,7 +30,8 @@ var (
)
const (
checkCompactionInterval = 5 * time.Minute
checkCompactionInterval = 5 * time.Minute
executeCompactionInterval = time.Hour
)
type Compactable interface {
@@ -41,6 +42,8 @@ type RevGetter interface {
Rev() int64
}
// Periodic compacts the log by purging revisions older than
// the configured retention time. Compaction happens hourly.
type Periodic struct {
clock clockwork.Clock
periodInHour int
@@ -85,11 +88,12 @@ func (t *Periodic) Run() {
continue
}
}
if clock.Now().Sub(last) < time.Duration(t.periodInHour)*time.Hour {
if clock.Now().Sub(last) < executeCompactionInterval {
continue
}
rev := t.getRev(t.periodInHour)
rev, remaining := t.getRev(t.periodInHour)
if rev < 0 {
continue
}
@@ -97,7 +101,7 @@ func (t *Periodic) Run() {
plog.Noticef("Starting auto-compaction at revision %d", rev)
_, err := t.c.Compact(t.ctx, &pb.CompactionRequest{Revision: rev})
if err == nil || err == mvcc.ErrCompacted {
t.revs = make([]int64, 0)
t.revs = remaining
last = clock.Now()
plog.Noticef("Finished auto-compaction at revision %d", rev)
} else {
@@ -124,10 +128,10 @@ func (t *Periodic) Resume() {
t.paused = false
}
func (t *Periodic) getRev(h int) int64 {
func (t *Periodic) getRev(h int) (int64, []int64) {
i := len(t.revs) - int(time.Duration(h)*time.Hour/checkCompactionInterval)
if i < 0 {
return -1
return -1, t.revs
}
return t.revs[i]
return t.revs[i], t.revs[i+1:]
}
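
Periodic samples Rev() every checkCompactionInterval and keeps a sliding window of revisions; getRev now also returns the suffix that remains relevant after compacting. A standalone sketch of the window arithmetic, with constants mirroring the ones above:

package main

import (
	"fmt"
	"time"
)

const checkCompactionInterval = 5 * time.Minute

// getRev mirrors the selection above: one sampled revision per interval,
// so h retention hours correspond to the last h*12 samples.
func getRev(revs []int64, h int) (int64, []int64) {
	i := len(revs) - int(time.Duration(h)*time.Hour/checkCompactionInterval)
	if i < 0 {
		return -1, revs // not enough history collected yet
	}
	return revs[i], revs[i+1:]
}

func main() {
	revs := make([]int64, 36) // 3 hours of samples: revisions 1..36
	for i := range revs {
		revs[i] = int64(i + 1)
	}
	rev, remaining := getRev(revs, 2) // 2-hour retention window
	fmt.Println(rev, len(remaining))  // 13 23
}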

View File

@@ -26,12 +26,14 @@ import (
)
func TestPeriodic(t *testing.T) {
retentionHours := 2
fc := clockwork.NewFakeClock()
rg := &fakeRevGetter{testutil.NewRecorderStream(), 0}
compactable := &fakeCompactable{testutil.NewRecorderStream()}
tb := &Periodic{
clock: fc,
periodInHour: 1,
periodInHour: retentionHours,
rg: rg,
c: compactable,
}
@@ -40,31 +42,26 @@ func TestPeriodic(t *testing.T) {
defer tb.Stop()
n := int(time.Hour / checkCompactionInterval)
// collect 3 hours of revisions
for i := 0; i < 3; i++ {
// advance one (hour - checkCompactionInterval), one revision for each interval
for j := 0; j < n-1; j++ {
_, err := rg.Wait(1)
if err != nil {
t.Fatal(err)
}
// collect 5 hours of revisions
for i := 0; i < 5; i++ {
// advance one hour, one revision for each interval
for j := 0; j < n; j++ {
rg.Wait(1)
fc.Advance(checkCompactionInterval)
}
_, err := rg.Wait(1)
if err != nil {
t.Fatal(err)
// compaction doesn't happen until 2 hours elapse
if i+1 < retentionHours {
continue
}
// ready to acknowledge hour "i"
// block until compactor calls clock.After()
fc.BlockUntil(1)
// unblock the After()
fc.Advance(checkCompactionInterval)
a, err := compactable.Wait(1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(a[0].Params[0], &pb.CompactionRequest{Revision: int64(i*n) + 1}) {
t.Errorf("compact request = %v, want %v", a[0].Params[0], &pb.CompactionRequest{Revision: int64(i*n) + 1})
expectedRevision := int64(1 + (i+1)*n - retentionHours*n)
if !reflect.DeepEqual(a[0].Params[0], &pb.CompactionRequest{Revision: expectedRevision}) {
t.Errorf("compact request = %v, want %v", a[0].Params[0], &pb.CompactionRequest{Revision: expectedRevision})
}
}
@@ -92,8 +89,8 @@ func TestPeriodicPause(t *testing.T) {
// tb will collect 3 hours of revisions but not compact since paused
n := int(time.Hour / checkCompactionInterval)
for i := 0; i < 3*n; i++ {
fc.Advance(checkCompactionInterval)
rg.Wait(1)
fc.Advance(checkCompactionInterval)
}
// tb ends up waiting for the clock
@@ -106,14 +103,15 @@ func TestPeriodicPause(t *testing.T) {
// tb resumes to being blocked on the clock
tb.Resume()
// unblock clock, will kick off a compaction at hour 3
// unblock clock, will kick off a compaction at hour 3:05
rg.Wait(1)
fc.Advance(checkCompactionInterval)
a, err := compactable.Wait(1)
if err != nil {
t.Fatal(err)
}
// compact the revision from hour 2
wreq := &pb.CompactionRequest{Revision: int64(2*n + 1)}
// compact the revision from hour 2:05
wreq := &pb.CompactionRequest{Revision: int64(1 + 2*n + 1)}
if !reflect.DeepEqual(a[0].Params[0], wreq) {
t.Errorf("compact request = %v, want %v", a[0].Params[0], wreq.Revision)
}

View File

@@ -288,14 +288,11 @@ func (rc *raftNode) startRaft() {
rc.node = raft.StartNode(c, startPeers)
}
ss := &stats.ServerStats{}
ss.Initialize()
rc.transport = &rafthttp.Transport{
ID: types.ID(rc.id),
ClusterID: 0x1000,
Raft: rc,
ServerStats: ss,
ServerStats: stats.NewServerStats("", ""),
LeaderStats: stats.NewLeaderStats(strconv.Itoa(rc.id)),
ErrorC: make(chan error),
}

17
contrib/recipes/doc.go Normal file
View File

@@ -0,0 +1,17 @@
// Copyright 2017 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package recipe contains experimental client-side distributed
// synchronization primitives.
package recipe

View File

@@ -1,103 +0,0 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package discovery
import (
"fmt"
"net"
"net/url"
"strings"
"github.com/coreos/etcd/pkg/types"
)
var (
// indirection for testing
lookupSRV = net.LookupSRV
resolveTCPAddr = net.ResolveTCPAddr
)
// SRVGetCluster gets the cluster information via DNS discovery.
// TODO(barakmich): Currently ignores priority and weight (as they don't make as much sense for a bootstrap)
// Also sees each entry as a separate instance.
func SRVGetCluster(name, dns string, apurls types.URLs) (string, error) {
tempName := int(0)
tcp2ap := make(map[string]url.URL)
// First, resolve the apurls
for _, url := range apurls {
tcpAddr, err := resolveTCPAddr("tcp", url.Host)
if err != nil {
plog.Errorf("couldn't resolve host %s during SRV discovery", url.Host)
return "", err
}
tcp2ap[tcpAddr.String()] = url
}
stringParts := []string{}
updateNodeMap := func(service, scheme string) error {
_, addrs, err := lookupSRV(service, "tcp", dns)
if err != nil {
return err
}
for _, srv := range addrs {
port := fmt.Sprintf("%d", srv.Port)
host := net.JoinHostPort(srv.Target, port)
tcpAddr, err := resolveTCPAddr("tcp", host)
if err != nil {
plog.Warningf("couldn't resolve host %s during SRV discovery", host)
continue
}
n := ""
url, ok := tcp2ap[tcpAddr.String()]
if ok {
n = name
}
if n == "" {
n = fmt.Sprintf("%d", tempName)
tempName++
}
// SRV records have a trailing dot but URL shouldn't.
shortHost := strings.TrimSuffix(srv.Target, ".")
urlHost := net.JoinHostPort(shortHost, port)
stringParts = append(stringParts, fmt.Sprintf("%s=%s://%s", n, scheme, urlHost))
plog.Noticef("got bootstrap from DNS for %s at %s://%s", service, scheme, urlHost)
if ok && url.Scheme != scheme {
plog.Errorf("bootstrap at %s from DNS for %s has scheme mismatch with expected peer %s", scheme+"://"+urlHost, service, url.String())
}
}
return nil
}
failCount := 0
err := updateNodeMap("etcd-server-ssl", "https")
srvErr := make([]string, 2)
if err != nil {
srvErr[0] = fmt.Sprintf("error querying DNS SRV records for _etcd-server-ssl %s", err)
failCount++
}
err = updateNodeMap("etcd-server", "http")
if err != nil {
srvErr[1] = fmt.Sprintf("error querying DNS SRV records for _etcd-server %s", err)
failCount++
}
if failCount == 2 {
plog.Warningf(srvErr[0])
plog.Warningf(srvErr[1])
plog.Errorf("SRV discovery failed: too many errors querying DNS SRV records")
return "", err
}
return strings.Join(stringParts, ","), nil
}
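
The lookupSRV and resolveTCPAddr variables above are test seams. A same-package test sketch that stubs DNS (all record values are hypothetical) could look like this:

```go
func TestSRVGetClusterStub(t *testing.T) {
	defer func() {
		lookupSRV = net.LookupSRV
		resolveTCPAddr = net.ResolveTCPAddr
	}()
	lookupSRV = func(service, proto, domain string) (string, []*net.SRV, error) {
		// hypothetical record: one peer serving on port 2380
		return "", []*net.SRV{{Target: "peer0.example.com.", Port: 2380}}, nil
	}
	resolveTCPAddr = func(network, addr string) (*net.TCPAddr, error) {
		return &net.TCPAddr{IP: net.ParseIP("10.0.0.1"), Port: 2380}, nil
	}
	s, err := SRVGetCluster("infra0", "example.com", nil)
	if err != nil {
		t.Fatal(err)
	}
	// both the etcd-server-ssl and etcd-server lookups succeed, so expect
	// "0=https://peer0.example.com:2380,1=http://peer0.example.com:2380"
	t.Log(s)
}
```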

View File

@@ -38,6 +38,7 @@ func TestCtlV3AuthCertCN(t *testing.T) { testCtl(t, authTestCertCN, wi
func TestCtlV3AuthRevokeWithDelete(t *testing.T) { testCtl(t, authTestRevokeWithDelete) }
func TestCtlV3AuthInvalidMgmt(t *testing.T) { testCtl(t, authTestInvalidMgmt) }
func TestCtlV3AuthFromKeyPerm(t *testing.T) { testCtl(t, authTestFromKeyPerm) }
func TestCtlV3AuthAndWatch(t *testing.T) { testCtl(t, authTestWatch) }
func authEnableTest(cx ctlCtx) {
if err := authEnable(cx); err != nil {
@@ -661,3 +662,80 @@ func authTestFromKeyPerm(cx ctlCtx) {
}
}
}
func authTestWatch(cx ctlCtx) {
if err := authEnable(cx); err != nil {
cx.t.Fatal(err)
}
cx.user, cx.pass = "root", "root"
authSetupTestUser(cx)
// grant a key range
if err := ctlV3RoleGrantPermission(cx, "test-role", grantingPerm{true, true, "key", "key4", false}); err != nil {
cx.t.Fatal(err)
}
tests := []struct {
puts []kv
args []string
wkv []kv
want bool
}{
{ // watch 1 key, should be successful
[]kv{{"key", "value"}},
[]string{"key", "--rev", "1"},
[]kv{{"key", "value"}},
true,
},
{ // watch 3 keys by range, should be successful
[]kv{{"key1", "val1"}, {"key3", "val3"}, {"key2", "val2"}},
[]string{"key", "key3", "--rev", "1"},
[]kv{{"key1", "val1"}, {"key2", "val2"}},
true,
},
{ // watch 1 key, should not be successful
[]kv{},
[]string{"key5", "--rev", "1"},
[]kv{},
false,
},
{ // watch 3 keys by range, should not be successful
[]kv{},
[]string{"key", "key6", "--rev", "1"},
[]kv{},
false,
},
}
cx.user, cx.pass = "test-user", "pass"
for i, tt := range tests {
donec := make(chan struct{})
go func(i int, puts []kv) {
defer close(donec)
for j := range puts {
if err := ctlV3Put(cx, puts[j].key, puts[j].val, ""); err != nil {
cx.t.Fatalf("watchTest #%d-%d: ctlV3Put error (%v)", i, j, err)
}
}
}(i, tt.puts)
var err error
if tt.want {
err = ctlV3Watch(cx, tt.args, tt.wkv...)
} else {
err = ctlV3WatchFailPerm(cx, tt.args)
}
if err != nil {
if cx.dialTimeout > 0 && !isGRPCTimedout(err) {
cx.t.Errorf("watchTest #%d: ctlV3Watch error (%v)", i, err)
}
}
<-donec
}
}

View File

@@ -23,9 +23,19 @@ import (
"github.com/coreos/etcd/pkg/expect"
)
func TestCtlV3Elect(t *testing.T) { testCtl(t, testElect) }
func TestCtlV3Elect(t *testing.T) {
oldenv := os.Getenv("EXPECT_DEBUG")
defer os.Setenv("EXPECT_DEBUG", oldenv)
os.Setenv("EXPECT_DEBUG", "1")
testCtl(t, testElect)
}
func testElect(cx ctlCtx) {
// debugging for #6934
sig := cx.epc.withStopSignal(debugLockSignal)
defer cx.epc.withStopSignal(sig)
name := "a"
holder, ch, err := ctlV3Elect(cx, name, "p1")
@@ -70,7 +80,7 @@ func testElect(cx ctlCtx) {
if err = blocked.Signal(os.Interrupt); err != nil {
cx.t.Fatal(err)
}
if err = blocked.Close(); err != nil {
if err = closeWithTimeout(blocked, time.Second); err != nil {
cx.t.Fatal(err)
}
@@ -78,7 +88,7 @@ func testElect(cx ctlCtx) {
if err = holder.Signal(os.Interrupt); err != nil {
cx.t.Fatal(err)
}
if err = holder.Close(); err != nil {
if err = closeWithTimeout(holder, time.Second); err != nil {
cx.t.Fatal(err)
}
@@ -102,6 +112,7 @@ func ctlV3Elect(cx ctlCtx, name, proposal string) (*expect.ExpectProcess, <-chan
close(outc)
return proc, outc, err
}
proc.StopSignal = debugLockSignal
go func() {
s, xerr := proc.ExpectFunc(func(string) bool { return true })
if xerr != nil {

View File

@@ -16,16 +16,49 @@ package e2e
import (
"os"
"runtime"
"strings"
"syscall"
"testing"
"time"
"github.com/coreos/etcd/pkg/expect"
)
func TestCtlV3Lock(t *testing.T) { testCtl(t, testLock) }
// debugLockSignal forces SIGQUIT to debug etcdctl elect and lock failures
var debugLockSignal os.Signal
func init() {
// hacks to ignore SIGQUIT debugging for some builds
switch {
case os.Getenv("COVERDIR") != "":
// SIGQUIT interferes with coverage collection
debugLockSignal = syscall.SIGTERM
case runtime.GOARCH == "ppc64le":
// ppc64le's signal handling won't kill processes with SIGQUIT
// in the same way as amd64/i386, so processes won't terminate
// as expected. Since this debugging code is for CI, just ignore
// ppc64le.
debugLockSignal = syscall.SIGKILL
default:
// stack dumping OK
debugLockSignal = syscall.SIGQUIT
}
}
func TestCtlV3Lock(t *testing.T) {
oldenv := os.Getenv("EXPECT_DEBUG")
defer os.Setenv("EXPECT_DEBUG", oldenv)
os.Setenv("EXPECT_DEBUG", "1")
testCtl(t, testLock)
}
func testLock(cx ctlCtx) {
// debugging for #6464
sig := cx.epc.withStopSignal(debugLockSignal)
defer cx.epc.withStopSignal(sig)
name := "a"
holder, ch, err := ctlV3Lock(cx, name)
@@ -70,7 +103,7 @@ func testLock(cx ctlCtx) {
if err = blocked.Signal(os.Interrupt); err != nil {
cx.t.Fatal(err)
}
if err = blocked.Close(); err != nil {
if err = closeWithTimeout(blocked, time.Second); err != nil {
cx.t.Fatal(err)
}
@@ -78,7 +111,7 @@ func testLock(cx ctlCtx) {
if err = holder.Signal(os.Interrupt); err != nil {
cx.t.Fatal(err)
}
if err = holder.Close(); err != nil {
if err = closeWithTimeout(holder, time.Second); err != nil {
cx.t.Fatal(err)
}
@@ -102,6 +135,7 @@ func ctlV3Lock(cx ctlCtx, name string) (*expect.ExpectProcess, <-chan string, er
close(outc)
return proc, outc, err
}
proc.StopSignal = debugLockSignal
go func() {
s, xerr := proc.ExpectFunc(func(string) bool { return true })
if xerr != nil {

View File

@@ -86,7 +86,7 @@ func watchTest(cx ctlCtx) {
}
}
func ctlV3Watch(cx ctlCtx, args []string, kvs ...kv) error {
func setupWatchArgs(cx ctlCtx, args []string) []string {
cmdArgs := append(cx.PrefixArgs(), "watch")
if cx.interactive {
cmdArgs = append(cmdArgs, "--interactive")
@@ -94,6 +94,12 @@ func ctlV3Watch(cx ctlCtx, args []string, kvs ...kv) error {
cmdArgs = append(cmdArgs, args...)
}
return cmdArgs
}
func ctlV3Watch(cx ctlCtx, args []string, kvs ...kv) error {
cmdArgs := setupWatchArgs(cx, args)
proc, err := spawnCmd(cmdArgs)
if err != nil {
return err
@@ -116,3 +122,28 @@ func ctlV3Watch(cx ctlCtx, args []string, kvs ...kv) error {
}
return proc.Stop()
}
func ctlV3WatchFailPerm(cx ctlCtx, args []string) error {
cmdArgs := setupWatchArgs(cx, args)
proc, err := spawnCmd(cmdArgs)
if err != nil {
return err
}
if cx.interactive {
wl := strings.Join(append([]string{"watch"}, args...), " ") + "\r"
if err = proc.Send(wl); err != nil {
return err
}
}
// TODO(mitake): once the error message accurately includes
// "permission denied", the string argument of proc.Expect() below
// should be updated.
_, err = proc.Expect("watch is canceled by the server")
if err != nil {
return err
}
return proc.Close()
}

View File

@@ -23,6 +23,7 @@ import (
"github.com/coreos/etcd/pkg/fileutil"
"github.com/coreos/etcd/pkg/testutil"
"github.com/coreos/etcd/version"
)
// TestReleaseUpgrade ensures that changes to the master branch do not affect
@@ -53,7 +54,7 @@ func TestReleaseUpgrade(t *testing.T) {
// so there's a window at boot time where it doesn't have V3rpcCapability enabled
// poll /version until etcdcluster is >2.3.x before making v3 requests
for i := 0; i < 7; i++ {
if err = cURLGet(epc, cURLReq{endpoint: "/version", expected: `"etcdcluster":"3.1`}); err != nil {
if err = cURLGet(epc, cURLReq{endpoint: "/version", expected: `"etcdcluster":"` + version.Cluster(version.Version)}); err != nil {
t.Logf("#%d: v3 is not ready yet (%v)", i, err)
time.Sleep(time.Second)
continue

View File

@@ -20,6 +20,7 @@ import (
"net/url"
"os"
"strings"
"time"
"github.com/coreos/etcd/etcdserver"
"github.com/coreos/etcd/pkg/expect"
@@ -553,3 +554,25 @@ func (epc *etcdProcessCluster) grpcEndpoints() []string {
}
return eps
}
func (epc *etcdProcessCluster) withStopSignal(sig os.Signal) os.Signal {
ret := epc.procs[0].proc.StopSignal
for _, p := range epc.procs {
p.proc.StopSignal = sig
}
return ret
}
func closeWithTimeout(p *expect.ExpectProcess, d time.Duration) error {
errc := make(chan error, 1)
go func() { errc <- p.Close() }()
select {
case err := <-errc:
return err
case <-time.After(d):
p.Stop()
// retry close after stopping to collect SIGQUIT data, if any
closeWithTimeout(p, time.Second)
}
return fmt.Errorf("took longer than %v to Close process %+v", d, p)
}

View File

@@ -20,6 +20,8 @@ import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/pkg/testutil"
"github.com/grpc-ecosystem/grpc-gateway/runtime"
)
func TestV3CurlPutGetNoTLS(t *testing.T) { testCurlPutGetGRPCGateway(t, &configNoTLS) }
@@ -111,3 +113,52 @@ func TestV3CurlWatch(t *testing.T) {
t.Fatal(err)
}
}
func TestV3CurlTxn(t *testing.T) {
defer testutil.AfterTest(t)
epc, err := newEtcdProcessCluster(&configNoTLS)
if err != nil {
t.Fatalf("could not start etcd process cluster (%v)", err)
}
defer func() {
if cerr := epc.Close(); cerr != nil {
t.Fatalf("error closing etcd processes (%v)", cerr)
}
}()
txn := &pb.TxnRequest{
Compare: []*pb.Compare{
{
Key: []byte("foo"),
Result: pb.Compare_EQUAL,
Target: pb.Compare_CREATE,
TargetUnion: &pb.Compare_CreateRevision{0},
},
},
Success: []*pb.RequestOp{
{
Request: &pb.RequestOp_RequestPut{
RequestPut: &pb.PutRequest{
Key: []byte("foo"),
Value: []byte("bar"),
},
},
},
},
}
m := &runtime.JSONPb{}
jsonDat, jerr := m.Marshal(txn)
if jerr != nil {
t.Fatal(jerr)
}
expected := `"succeeded":true,"responses":[{"response_put":{"header":{"revision":"2"}}}]`
if err = cURLPost(epc, cURLReq{endpoint: "/v3alpha/kv/txn", value: string(jsonDat), expected: expected}); err != nil {
t.Fatalf("failed txn with curl (%v)", err)
}
// this malformed request was crashing the etcd server
malformed := `{"compare":[{"result":0,"target":1,"key":"Zm9v","TargetUnion":null}],"success":[{"Request":{"RequestPut":{"key":"Zm9v","value":"YmFy"}}}]}`
if err = cURLPost(epc, cURLReq{endpoint: "/v3alpha/kv/txn", value: malformed, expected: "error"}); err != nil {
t.Fatalf("failed put with curl (%v)", err)
}
}

View File

@@ -22,10 +22,10 @@ import (
"net/url"
"strings"
"github.com/coreos/etcd/discovery"
"github.com/coreos/etcd/etcdserver"
"github.com/coreos/etcd/pkg/cors"
"github.com/coreos/etcd/pkg/netutil"
"github.com/coreos/etcd/pkg/srv"
"github.com/coreos/etcd/pkg/transport"
"github.com/coreos/etcd/pkg/types"
@@ -321,11 +321,15 @@ func (cfg *Config) PeerURLsMapAndToken(which string) (urlsmap types.URLsMap, tok
urlsmap[cfg.Name] = cfg.APUrls
token = cfg.Durl
case cfg.DNSCluster != "":
var clusterStr string
clusterStr, err = discovery.SRVGetCluster(cfg.Name, cfg.DNSCluster, cfg.APUrls)
if err != nil {
return nil, "", err
clusterStrs, cerr := srv.GetCluster("etcd-server", cfg.Name, cfg.DNSCluster, cfg.APUrls)
if cerr != nil {
plog.Errorf("couldn't resolve during SRV discovery (%v)", cerr)
return nil, "", cerr
}
for _, s := range clusterStrs {
plog.Noticef("got bootstrap from DNS for etcd-server at %s", s)
}
clusterStr := strings.Join(clusterStrs, ",")
if strings.Contains(clusterStr, "https://") && cfg.PeerTLSInfo.CAFile == "" {
cfg.PeerTLSInfo.ServerName = cfg.DNSCluster
}

View File

@@ -15,12 +15,16 @@
package embed
import (
"context"
"crypto/tls"
"fmt"
"io/ioutil"
defaultLog "log"
"net"
"net/http"
"path/filepath"
"sync"
"time"
"github.com/coreos/etcd/etcdserver"
"github.com/coreos/etcd/etcdserver/api/v2http"
@@ -51,7 +55,7 @@ const (
// Etcd contains a running etcd server and its listeners.
type Etcd struct {
Peers []net.Listener
Peers []*peerListener
Clients []net.Listener
Server *etcdserver.EtcdServer
@@ -63,6 +67,12 @@ type Etcd struct {
closeOnce sync.Once
}
type peerListener struct {
net.Listener
serve func() error
close func(context.Context) error
}
// StartEtcd launches the etcd server and HTTP handlers for client/server communication.
// The returned Etcd.Server is not guaranteed to have joined the cluster. Wait
// on the Etcd.Server.ReadyNotify() channel to know when it completes and is ready for use.
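
A minimal embedded-server sketch following that contract (the data directory and timeout are illustrative):

```go
package main

import (
	"log"
	"time"

	"github.com/coreos/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd" // illustrative data directory

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()

	select {
	case <-e.Server.ReadyNotify():
		log.Println("server is ready")
	case <-time.After(60 * time.Second):
		e.Server.Stop() // trigger a shutdown if startup hangs
		log.Fatal("server took too long to start")
	}
	log.Fatal(<-e.Err()) // block until the server errors out
}
```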
@@ -70,13 +80,21 @@ func StartEtcd(inCfg *Config) (e *Etcd, err error) {
if err = inCfg.Validate(); err != nil {
return nil, err
}
serving := false
e = &Etcd{cfg: *inCfg, stopc: make(chan struct{})}
cfg := &e.cfg
defer func() {
if e != nil && err != nil {
e.Close()
e = nil
if e == nil || err == nil {
return
}
if !serving {
// errored before starting gRPC server for serveCtx.grpcServerC
for _, sctx := range e.sctxs {
close(sctx.grpcServerC)
}
}
e.Close()
e = nil
}()
if e.Peers, err = startPeerListeners(cfg); err != nil {
@@ -130,6 +148,25 @@ func StartEtcd(inCfg *Config) (e *Etcd, err error) {
return
}
// configure peer handlers after rafthttp.Transport started
ph := v2http.NewPeerHandler(e.Server)
for i := range e.Peers {
srv := &http.Server{
Handler: ph,
ReadTimeout: 5 * time.Minute,
ErrorLog: defaultLog.New(ioutil.Discard, "", 0), // do not log user error
}
pl := e.Peers[i] // capture per iteration; the loop variable is shared by these closures
pl.serve = func() error {
return srv.Serve(pl.Listener)
}
pl.close = func(ctx context.Context) error {
// gracefully shutdown http.Server
// close open listeners, idle connections
// until context cancel or time-out
return srv.Shutdown(ctx)
}
}
// buffer channel so goroutines on closed connections won't wait forever
e.errc = make(chan error, len(e.Peers)+len(e.Clients)+2*len(e.sctxs))
@@ -137,6 +174,7 @@ func StartEtcd(inCfg *Config) (e *Etcd, err error) {
if err = e.serve(); err != nil {
return
}
serving = true
return
}
@@ -159,24 +197,30 @@ func (e *Etcd) Close() {
for _, sctx := range e.sctxs {
sctx.cancel()
}
for i := range e.Peers {
if e.Peers[i] != nil {
e.Peers[i].Close()
}
}
for i := range e.Clients {
if e.Clients[i] != nil {
e.Clients[i].Close()
}
}
// close rafthttp transports
if e.Server != nil {
e.Server.Stop()
}
// close all idle connections in peer handler (wait up to 1-second)
for i := range e.Peers {
if e.Peers[i] != nil && e.Peers[i].close != nil {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
e.Peers[i].close(ctx)
cancel()
}
}
}
func (e *Etcd) Err() <-chan error { return e.errc }
func startPeerListeners(cfg *Config) (plns []net.Listener, err error) {
func startPeerListeners(cfg *Config) (peers []*peerListener, err error) {
if cfg.PeerAutoTLS && cfg.PeerTLSInfo.Empty() {
phosts := make([]string, len(cfg.LPUrls))
for i, u := range cfg.LPUrls {
@@ -194,17 +238,16 @@ func startPeerListeners(cfg *Config) (plns []net.Listener, err error) {
plog.Infof("peerTLS: %s", cfg.PeerTLSInfo)
}
plns = make([]net.Listener, len(cfg.LPUrls))
peers = make([]*peerListener, len(cfg.LPUrls))
defer func() {
if err == nil {
return
}
for i := range plns {
if plns[i] == nil {
continue
for i := range peers {
if peers[i] != nil && peers[i].close != nil {
plog.Info("stopping listening for peers on ", cfg.LPUrls[i].String())
peers[i].close(context.Background())
}
plns[i].Close()
plog.Info("stopping listening for peers on ", cfg.LPUrls[i].String())
}
}()
@@ -217,12 +260,18 @@ func startPeerListeners(cfg *Config) (plns []net.Listener, err error) {
plog.Warningf("The scheme of peer url %s is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for this url.", u.String())
}
}
if plns[i], err = rafthttp.NewListener(u, &cfg.PeerTLSInfo); err != nil {
peers[i] = &peerListener{close: func(context.Context) error { return nil }}
peers[i].Listener, err = rafthttp.NewListener(u, &cfg.PeerTLSInfo)
if err != nil {
return nil, err
}
// once serve, overwrite with 'http.Server.Shutdown'
pl := peers[i] // capture per iteration for the close closure
pl.close = func(context.Context) error {
return pl.Listener.Close()
}
plog.Info("listening for peers on ", u.String())
}
return plns, nil
return peers, nil
}
func startClientListeners(cfg *Config) (sctxs map[string]*serveCtx, err error) {
@@ -327,11 +376,10 @@ func (e *Etcd) serve() (err error) {
}
// Start the peer server in a goroutine
ph := v2http.NewPeerHandler(e.Server)
for _, l := range e.Peers {
go func(l net.Listener) {
e.errHandler(servePeerHTTP(l, ph))
}(l)
for _, pl := range e.Peers {
go func(l *peerListener) {
e.errHandler(l.serve())
}(pl)
}
// Start a client server goroutine for each listen address

View File

@@ -21,7 +21,6 @@ import (
"net"
"net/http"
"strings"
"time"
"github.com/coreos/etcd/etcdserver"
"github.com/coreos/etcd/etcdserver/api/v3client"
@@ -161,17 +160,6 @@ func grpcHandlerFunc(grpcServer *grpc.Server, otherHandler http.Handler) http.Ha
})
}
func servePeerHTTP(l net.Listener, handler http.Handler) error {
logger := defaultLog.New(ioutil.Discard, "etcdhttp", 0)
// TODO: add debug flag; enable logging when debug flag is set
srv := &http.Server{
Handler: handler,
ReadTimeout: 5 * time.Minute,
ErrorLog: logger, // do not log user error
}
return srv.Serve(l)
}
type registerHandlerFunc func(context.Context, *gw.ServeMux, string, []grpc.DialOption) error
func (sctx *serveCtx) registerGateway(opts []grpc.DialOption) (*gw.ServeMux, error) {

embed/serve_test.go (new file, 38 lines)
View File

@@ -0,0 +1,38 @@
// Copyright 2017 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package embed
import (
"io/ioutil"
"os"
"testing"
"github.com/coreos/etcd/auth"
)
// TestStartEtcdWrongToken ensures that StartEtcd with a wrong config returns an error.
func TestStartEtcdWrongToken(t *testing.T) {
tdir, err := ioutil.TempDir(os.TempDir(), "token-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tdir)
cfg := NewConfig()
cfg.Dir = tdir
cfg.AuthToken = "wrong-token"
if _, err = StartEtcd(cfg); err != auth.ErrInvalidAuthOpts {
t.Fatalf("expected %v, got %v", auth.ErrInvalidAuthOpts, err)
}
}

View File

@@ -790,7 +790,7 @@ Prints a line of JSON encoding the database hash, revision, total keys, and size
## Concurrency commands
### LOCK \<lockname\>
### LOCK \<lockname\> [command arg1 arg2 ...]
LOCK acquires a distributed mutex with a given name. Once the lock is acquired, it will be held until etcdctl is terminated.
@@ -798,13 +798,24 @@ LOCK acquires a distributed named mutex with a given name. Once the lock is acqu
Once the lock is acquired, the result for the GET on the unique lock holder key is displayed.
If a command is given, it will be launched with environment variables `ETCD_LOCK_KEY` and `ETCD_LOCK_REV` set to the lock's holder key and revision.
#### Example
Acquire lock with standard output display:
```bash
./etcdctl lock mylock
# mylock/1234534535445
```
Acquire lock and execute `echo lock acquired`:
```bash
./etcdctl lock mylock echo lock acquired
# lock acquired
```
#### Remarks
LOCK returns a zero exit code only if it is terminated by a signal and releases the lock.
@@ -961,25 +972,42 @@ RPC: RoleGrantPermission
#### Options
- from-key -- grant a permission of keys that are greater than or equal to the given key using byte compare
- prefix -- grant a prefix permission
#### Ouptut
#### Output
`Role <role name> updated`.
`Role <role name> updated`.
#### Examples
Grant read and write permission on the key `foo` to role `myrole`:
```bash
./etcdctl --user=root:123 role grant-permission myrole readwrite foo
# Role myrole updated
```
Grant read permission on the wildcard key pattern `foo/*` to role `myrole`:
```bash
./etcdctl --user=root:123 role grant-permission --prefix myrole readwrite foo/
# Role myrole updated
```
### ROLE REVOKE-PERMISSION \<role name\> \<permission type\> \<key\> [endkey]
`role revoke-permission` revokes a key from a role.
RPC: RoleRevokePermission
#### Options
- from-key -- revoke a permission of keys that are greater than or equal to the given key using byte compare
- prefix -- revoke a prefix permission
#### Output
`Permission of key <key> is revoked from role <role name>` for single key. `Permission of range [<key>, <endkey>) is revoked from role <role name>` for a key range. Exit code is zero.

View File

@@ -331,6 +331,6 @@ etcdctl is under the Apache 2.0 license. See the [LICENSE][license] file for det
[authentication]: ../Documentation/v2/authentication.md
[etcd]: https://github.com/coreos/etcd
[github-release]: https://github.com/coreos/etcd/releases/
[license]: https://github.com/coreos/etcdctl/blob/master/LICENSE
[license]: ../LICENSE
[semver]: http://semver.org/
[username-flag]: #--username--u

View File

@@ -150,8 +150,8 @@ func newCheckPerfCommand(cmd *cobra.Command, args []string) {
}
go func() {
cctx, _ := context.WithTimeout(context.Background(), time.Duration(cfg.duration)*time.Second)
cctx, ccancel := context.WithTimeout(context.Background(), time.Duration(cfg.duration)*time.Second)
defer ccancel()
for limit.Wait(cctx) == nil {
binary.PutVarint(k, int64(rand.Int63n(math.MaxInt64)))
requests <- v3.OpPut(checkPerfPrefix+string(k), v)

View File

@@ -148,12 +148,13 @@ func leaseKeepAliveCommandFunc(cmd *cobra.Command, args []string) {
}
id := leaseFromArgs(args[0])
respc := mustClientFromCmd(cmd).KeepAlive(context.TODO(), id)
respc, kerr := mustClientFromCmd(cmd).KeepAlive(context.TODO(), id)
if kerr != nil {
ExitWithError(ExitBadConnection, kerr)
}
for resp := range respc {
if resp.Err != nil {
ExitWithError(ExitError, resp.Err)
}
display.KeepAlive(resp)
display.KeepAlive(*resp)
}
if _, ok := (display).(*simplePrinter); ok {

View File

@@ -16,7 +16,9 @@ package command
import (
"errors"
"fmt"
"os"
"os/exec"
"os/signal"
"syscall"
@@ -29,7 +31,7 @@ import (
// NewLockCommand returns the cobra command for "lock".
func NewLockCommand() *cobra.Command {
c := &cobra.Command{
Use: "lock <lockname>",
Use: "lock <lockname> [exec-command arg1 arg2 ...]",
Short: "Acquires a named lock",
Run: lockCommandFunc,
}
@@ -37,16 +39,16 @@ func NewLockCommand() *cobra.Command {
}
func lockCommandFunc(cmd *cobra.Command, args []string) {
if len(args) != 1 {
ExitWithError(ExitBadArgs, errors.New("lock takes one lock name argument."))
if len(args) == 0 {
ExitWithError(ExitBadArgs, errors.New("lock takes a lock name argument and an optional command to execute."))
}
c := mustClientFromCmd(cmd)
if err := lockUntilSignal(c, args[0]); err != nil {
if err := lockUntilSignal(c, args[0], args[1:]); err != nil {
ExitWithError(ExitError, err)
}
}
func lockUntilSignal(c *clientv3.Client, lockname string) error {
func lockUntilSignal(c *clientv3.Client, lockname string, cmdArgs []string) error {
s, err := concurrency.NewSession(c)
if err != nil {
return err
@@ -69,6 +71,18 @@ func lockUntilSignal(c *clientv3.Client, lockname string) error {
return err
}
if len(cmdArgs) > 0 {
cmd := exec.Command(cmdArgs[0], cmdArgs[1:]...)
cmd.Env = append(environLockResponse(m), os.Environ()...)
cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
err := cmd.Run()
unlockErr := m.Unlock(context.TODO())
if err != nil {
return err
}
return unlockErr
}
k, kerr := c.Get(ctx, m.Key())
if kerr != nil {
return kerr
@@ -76,7 +90,6 @@ func lockUntilSignal(c *clientv3.Client, lockname string) error {
if len(k.Kvs) == 0 {
return errors.New("lock lost on init")
}
display.Get(*k)
select {
@@ -87,3 +100,10 @@ func lockUntilSignal(c *clientv3.Client, lockname string) error {
return errors.New("session expired")
}
func environLockResponse(m *concurrency.Mutex) []string {
return []string{
"ETCD_LOCK_KEY=" + m.Key(),
fmt.Sprintf("ETCD_LOCK_REV=%d", m.Header().Revision),
}
}
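
For illustration, a command launched via `etcdctl lock` could read the injected variables like this (a sketch; only the variable names come from environLockResponse above):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// both variables are set by `etcdctl lock <name> <cmd>` for the child process
	fmt.Println("holder key:", os.Getenv("ETCD_LOCK_KEY"))
	fmt.Println("revision:  ", os.Getenv("ETCD_LOCK_REV"))
}
```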

View File

@@ -167,10 +167,10 @@ func makeEndpointStatusTable(statusList []epStatus) (hdr []string, rows [][]stri
hdr = []string{"endpoint", "ID", "version", "db size", "is leader", "raft term", "raft index"}
for _, status := range statusList {
rows = append(rows, []string{
fmt.Sprint(status.Ep),
status.Ep,
fmt.Sprintf("%x", status.Resp.Header.MemberId),
fmt.Sprint(status.Resp.Version),
fmt.Sprint(humanize.Bytes(uint64(status.Resp.DbSize))),
status.Resp.Version,
humanize.Bytes(uint64(status.Resp.DbSize)),
fmt.Sprint(status.Resp.Leader == status.Resp.Header.MemberId),
fmt.Sprint(status.Resp.RaftTerm),
fmt.Sprint(status.Resp.RaftIndex),

View File

@@ -23,8 +23,8 @@ import (
)
var (
grantPermissionPrefix bool
permFromKey bool
rolePermPrefix bool
rolePermFromKey bool
)
// NewRoleCommand returns the cobra command for "role".
@@ -83,8 +83,8 @@ func newRoleGrantPermissionCommand() *cobra.Command {
Run: roleGrantPermissionCommandFunc,
}
cmd.Flags().BoolVar(&grantPermissionPrefix, "prefix", false, "grant a prefix permission")
cmd.Flags().BoolVar(&permFromKey, "from-key", false, "grant a permission of keys that are greater than or equal to the given key using byte compare")
cmd.Flags().BoolVar(&rolePermPrefix, "prefix", false, "grant a prefix permission")
cmd.Flags().BoolVar(&rolePermFromKey, "from-key", false, "grant a permission of keys that are greater than or equal to the given key using byte compare")
return cmd
}
@@ -96,7 +96,8 @@ func newRoleRevokePermissionCommand() *cobra.Command {
Run: roleRevokePermissionCommandFunc,
}
cmd.Flags().BoolVar(&permFromKey, "from-key", false, "grant a permission of keys that are greater than or equal to the given key using byte compare")
cmd.Flags().BoolVar(&rolePermPrefix, "prefix", false, "revoke a prefix permission")
cmd.Flags().BoolVar(&rolePermFromKey, "from-key", false, "revoke a permission of keys that are greater than or equal to the given key using byte compare")
return cmd
}
@@ -169,27 +170,10 @@ func roleGrantPermissionCommandFunc(cmd *cobra.Command, args []string) {
ExitWithError(ExitBadArgs, err)
}
rangeEnd := ""
if 4 <= len(args) {
if grantPermissionPrefix {
ExitWithError(ExitBadArgs, fmt.Errorf("don't pass both of --prefix option and range end to grant permission command"))
}
if permFromKey {
ExitWithError(ExitBadArgs, fmt.Errorf("don't pass both of --from-key option and range end to grant permission command"))
}
rangeEnd = args[3]
} else if grantPermissionPrefix {
if permFromKey {
ExitWithError(ExitBadArgs, fmt.Errorf("don't pass both of --from-key option and --prefix option to grant permission command"))
}
rangeEnd = clientv3.GetPrefixRangeEnd(args[2])
} else if permFromKey {
rangeEnd = "\x00"
rangeEnd, rerr := rangeEndFromPermFlags(args[2:])
if rerr != nil {
ExitWithError(ExitBadArgs, rerr)
}
resp, err := mustClientFromCmd(cmd).Auth.RoleGrantPermission(context.TODO(), args[0], args[2], rangeEnd, perm)
if err != nil {
ExitWithError(ExitError, err)
@@ -204,16 +188,36 @@ func roleRevokePermissionCommandFunc(cmd *cobra.Command, args []string) {
ExitWithError(ExitBadArgs, fmt.Errorf("role revoke-permission command requires role name and key [endkey] as its argument."))
}
rangeEnd := ""
if 3 <= len(args) {
rangeEnd = args[2]
} else if permFromKey {
rangeEnd = "\x00"
rangeEnd, rerr := rangeEndFromPermFlags(args[1:])
if rerr != nil {
ExitWithError(ExitBadArgs, rerr)
}
resp, err := mustClientFromCmd(cmd).Auth.RoleRevokePermission(context.TODO(), args[0], args[1], rangeEnd)
if err != nil {
ExitWithError(ExitError, err)
}
display.RoleRevokePermission(args[0], args[1], rangeEnd, *resp)
}
func rangeEndFromPermFlags(args []string) (string, error) {
if len(args) == 1 {
if rolePermPrefix {
if rolePermFromKey {
return "", fmt.Errorf("--from-key and --prefix flags are mutually exclusive")
}
return clientv3.GetPrefixRangeEnd(args[0]), nil
}
if rolePermFromKey {
return "\x00", nil
}
// single key case
return "", nil
}
if rolePermPrefix {
return "", fmt.Errorf("unexpected endkey argument with --prefix flag")
}
if rolePermFromKey {
return "", fmt.Errorf("unexpected endkey argument with --from-key flag")
}
return args[1], nil
}
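
A same-package test sketch covering the helper's three shapes; the "fop" value assumes clientv3.GetPrefixRangeEnd's increment-last-byte behavior:

```go
func TestRangeEndFromPermFlags(t *testing.T) {
	// single key: no range end
	rolePermPrefix, rolePermFromKey = false, false
	if end, err := rangeEndFromPermFlags([]string{"foo"}); err != nil || end != "" {
		t.Fatalf("expected empty range end, got %q (%v)", end, err)
	}
	// --prefix: "foo" covers ["foo", "fop")
	rolePermPrefix = true
	if end, err := rangeEndFromPermFlags([]string{"foo"}); err != nil || end != "fop" {
		t.Fatalf("expected \"fop\", got %q (%v)", end, err)
	}
	// --from-key: "\x00" means every key >= "foo"
	rolePermPrefix, rolePermFromKey = false, true
	if end, err := rangeEndFromPermFlags([]string{"foo"}); err != nil || end != "\x00" {
		t.Fatalf("expected \\x00, got %q (%v)", end, err)
	}
}
```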

View File

@@ -129,6 +129,9 @@ func getWatchChan(c *clientv3.Client, args []string) (clientv3.WatchChan, error)
func printWatchCh(ch clientv3.WatchChan) {
for resp := range ch {
if resp.Canceled {
fmt.Fprintf(os.Stderr, "watch was canceled (%v)\n", resp.Err())
}
display.Watch(resp)
}
}

View File

@@ -149,7 +149,7 @@ func TestConfigFileClusteringFlags(t *testing.T) {
Durl string `json:"discovery"`
}{
{
// Use default name and generate a default inital-cluster
// Use default name and generate a default initial-cluster
},
{
Name: "non-default",

View File

@@ -91,17 +91,28 @@ func stripSchema(eps []string) []string {
return endpoints
}
func startGateway(cmd *cobra.Command, args []string) {
endpoints := gatewayEndpoints
if eps := discoverEndpoints(gatewayDNSCluster, gatewayCA, gatewayInsecureDiscovery); len(eps) != 0 {
endpoints = eps
func startGateway(cmd *cobra.Command, args []string) {
srvs := discoverEndpoints(gatewayDNSCluster, gatewayCA, gatewayInsecureDiscovery)
if len(srvs.Endpoints) == 0 {
// no endpoints discovered, fall back to provided endpoints
srvs.Endpoints = gatewayEndpoints
}
// Strip the schema from the endpoints because we start just a TCP proxy
srvs.Endpoints = stripSchema(srvs.Endpoints)
if len(srvs.SRVs) == 0 {
for _, ep := range srvs.Endpoints {
h, p, err := net.SplitHostPort(ep)
if err != nil {
plog.Fatalf("error parsing endpoint %q", ep)
}
var port uint16
fmt.Sscanf(p, "%d", &port)
srvs.SRVs = append(srvs.SRVs, &net.SRV{Target: h, Port: port})
}
}
// Strip the schema from the endpoints because we start just a TCP proxy
endpoints = stripSchema(endpoints)
if len(endpoints) == 0 {
if len(srvs.Endpoints) == 0 {
plog.Fatalf("no endpoints found")
}
@@ -113,7 +124,7 @@ func startGateway(cmd *cobra.Command, args []string) {
tp := tcpproxy.TCPProxy{
Listener: l,
Endpoints: endpoints,
Endpoints: srvs.SRVs,
MonitorInterval: getewayRetryDelay,
}

View File

@@ -24,6 +24,8 @@ import (
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/clientv3/namespace"
"github.com/coreos/etcd/etcdserver/api/v3election/v3electionpb"
"github.com/coreos/etcd/etcdserver/api/v3lock/v3lockpb"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/pkg/debugutil"
"github.com/coreos/etcd/pkg/transport"
@@ -106,8 +108,9 @@ func startGRPCProxy(cmd *cobra.Command, args []string) {
os.Exit(1)
}
if eps := discoverEndpoints(grpcProxyDNSCluster, grpcProxyCA, grpcProxyInsecureDiscovery); len(eps) != 0 {
grpcProxyEndpoints = eps
srvs := discoverEndpoints(grpcProxyDNSCluster, grpcProxyCA, grpcProxyInsecureDiscovery)
if len(srvs.Endpoints) != 0 {
grpcProxyEndpoints = srvs.Endpoints
}
l, err := net.Listen("tcp", grpcProxyListenAddr)
@@ -153,6 +156,8 @@ func startGRPCProxy(cmd *cobra.Command, args []string) {
leasep, _ := grpcproxy.NewLeaseProxy(client)
mainp := grpcproxy.NewMaintenanceProxy(client)
authp := grpcproxy.NewAuthProxy(client)
electionp := grpcproxy.NewElectionProxy(client)
lockp := grpcproxy.NewLockProxy(client)
server := grpc.NewServer(
grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
@@ -164,6 +169,8 @@ func startGRPCProxy(cmd *cobra.Command, args []string) {
pb.RegisterLeaseServer(server, leasep)
pb.RegisterMaintenanceServer(server, mainp)
pb.RegisterAuthServer(server, authp)
v3electionpb.RegisterElectionServer(server, electionp)
v3lockpb.RegisterLockServer(server, lockp)
errc := make(chan error)

View File

@@ -18,22 +18,23 @@ import (
"fmt"
"os"
"github.com/coreos/etcd/client"
"github.com/coreos/etcd/pkg/srv"
"github.com/coreos/etcd/pkg/transport"
)
func discoverEndpoints(dns string, ca string, insecure bool) (endpoints []string) {
func discoverEndpoints(dns string, ca string, insecure bool) (s srv.SRVClients) {
if dns == "" {
return nil
return s
}
endpoints, err := client.NewSRVDiscover().Discover(dns)
srvs, err := srv.GetClient("etcd-client", dns)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
endpoints := srvs.Endpoints
plog.Infof("discovered the cluster %s from %s", endpoints, dns)
if insecure {
return endpoints
return *srvs
}
// confirm TLS connections are good
tlsInfo := transport.TLSInfo{
@@ -46,5 +47,19 @@ func discoverEndpoints(dns string, ca string, insecure bool) (endpoints []string
plog.Warningf("%v", err)
}
plog.Infof("using discovered endpoints %v", endpoints)
return endpoints
// map endpoints back to SRVClients struct with SRV data
eps := make(map[string]struct{})
for _, ep := range endpoints {
eps[ep] = struct{}{}
}
for i := range srvs.Endpoints {
if _, ok := eps[srvs.Endpoints[i]]; !ok {
continue
}
s.Endpoints = append(s.Endpoints, srvs.Endpoints[i])
s.SRVs = append(s.SRVs, srvs.SRVs[i])
}
return s
}

Some files were not shown because too many files have changed in this diff.