Compare commits


212 Commits

Author SHA1 Message Date
Gyu-Ho Lee
a48106fbc0 version: bump up to v3.0.17+git 2017-01-20 11:40:18 -08:00
Gyu-Ho Lee
cc198e22d3 version: bump to v3.0.17 2017-01-20 11:02:34 -08:00
Gyu-Ho Lee
fcf813427b lease/leasehttp: pass min TTL in TestRenewHTTP 2017-01-20 11:02:00 -08:00
Xiang Li
518efab61c etcdctlv3: snapshot restore works with lease key 2017-01-20 10:36:02 -08:00
Anthony Romano
42f9a5ef74 etcdserver, lease: tie lease min ttl to election timeout 2017-01-20 10:35:46 -08:00
Gyu-Ho Lee
21509633ba version: bump to v3.0.16+git 2017-01-20 10:19:14 -08:00
Gyu-Ho Lee
a23109a0c6 version: bump to v3.0.16 2017-01-13 11:29:12 -08:00
Anthony Romano
219a4e9ad5 clientv3: don't reset keepalive stream on grant failure
Was triggering cancelation errors on outstanding KeepAlives if Grant
had to retry.
2017-01-13 11:28:17 -08:00
Anthony Romano
3d050630f4 v3api, rpctypes: add ErrTimeoutDueToConnectionLost
Lack of a gRPC error code was causing this to look like a halting error to the client.
2017-01-13 11:27:56 -08:00
Anthony Romano
9c66ed2798 clientv3: don't reset stream on keepaliveonce or revoke failure
Would cause the keepalive loop to cancel out.

Fixes #7082
2017-01-13 10:42:01 -08:00
Gyu-Ho Lee
a9e2d3d4d3 *: remove 'tools/etcd-top' to drop pcap.h 2016-12-07 10:34:34 -08:00
Gyu-Ho Lee
41e329cd35 *: drop breaking-changes from master branch 2016-12-07 10:34:28 -08:00
Gyu-Ho Lee
3a8b524d36 travis, test: use Go 1.6.4, skip 'gosimple' 2016-12-07 10:28:53 -08:00
Anthony Romano
11668f53db integration: use RequireLeader for TestV3LeaseFailover
Giving Renew() the default request timeout causes TestV3LeaseFailover
to miss its timing constraints. Since it only needs to wait until the
leader recognizes the leader is lost, use RequireLeader to cancel the
keepalive stream before the request times out.
2016-12-07 10:06:28 -08:00
Anthony Romano
7ceca7e046 clientv3/integration: test lease keepalive works following quorum loss 2016-12-07 10:06:28 -08:00
Anthony Romano
395bd2313c v3rpc, etcdserver, leasehttp: ctxize Renew with request timeout
Would retry a few times before returning a 'not primary' error that
the client should never see. Instead, use proper timeouts and
then return a request timeout error on failure.

Fixes #6922
2016-12-07 10:06:19 -08:00
Gyu-Ho Lee
b357569bc6 version: bump to v3.0.15+git 2016-11-11 11:17:31 -08:00
Gyu-Ho Lee
fc00305a2e version: bump to v3.0.15 2016-11-10 13:12:43 -08:00
Gyu-Ho Lee
f322fe7f0d clientv3, ctlv3: document range end requirement 2016-11-10 13:10:18 -08:00
Gyu-Ho Lee
049fcd30ea integration: test wrong watcher range 2016-11-10 13:09:13 -08:00
Gyu-Ho Lee
1b702e79db mvcc: return -1 for wrong watcher range key >= end
Fix https://github.com/coreos/etcd/issues/6819.
2016-11-10 13:08:51 -08:00
Anthony Romano
b87190d9dc integration: test canceling a watcher on disconnected stream 2016-11-10 13:07:24 -08:00
Anthony Romano
83b493f945 clientv3: let watchers cancel when reconnecting 2016-11-10 13:06:47 -08:00
Gyu-Ho Lee
9b69cbd989 version: bump to v3.0.14+git 2016-11-04 13:06:36 -07:00
Gyu-Ho Lee
8a37349097 version: bump to v3.0.14 2016-11-04 10:54:14 -07:00
Xiang Li
9a0e4dfe4f ctlv3: fix migration 2016-11-03 09:47:41 -07:00
Timothy St. Clair
f60469af16 ctlv3: Add a no-ttl flag to etcdctl migrate to discard keys on transform. 2016-11-03 09:47:39 -07:00
Gyu-Ho Lee
932370d8ca version: bump to v3.0.13+git 2016-10-24 11:22:50 -07:00
Gyu-Ho Lee
c99d0d4b25 version: bump to v3.0.13 2016-10-24 11:04:43 -07:00
Gyu-Ho Lee
d78216f528 e2e: remove 'ctlV3GetFailPerm' 2016-10-24 11:04:13 -07:00
Hongchao Deng
c05c027a24 etcdctl: fix migrate in outputting client.Node to json
Using Printf will try to parse the string and replace special
characters. In the migrate code, we want to just output the raw
JSON string of client.Node.
For example,
    Printf("%\\") => %!\(MISSING)
    Print("%\\") => %\
Thus, we should use Print instead.
2016-10-20 10:51:16 -07:00
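A minimal standalone sketch of the Printf/Print difference described above; the JSON payload here is made up for illustration:

```go
package main

import "fmt"

func main() {
	// hypothetical raw JSON output containing a literal '%' and '\'
	raw := `{"key":"/foo","value":"100%\\"}`

	fmt.Printf(raw) // Printf scans for verbs: the '%\' prints as %!\(MISSING)
	fmt.Println()
	fmt.Print(raw) // Print emits the string verbatim
	fmt.Println()
}
```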
Gyu-Ho Lee
3fd64f913a auth: fix return type on 'hasRootRole' 2016-10-12 13:59:27 -07:00
Xiang Li
f935290bbc mvcc: fix rev inconsistency
Try:

./etcdctl put foo bar
./etcdctl del foo
./etcdctl compact 3

restart etcd

./etcdctl get foo
mvcc: required revision has been compacted

The error is unexpected when ranging over the head revision.

Internally, we incorrectly set the current revision smaller than the
compacted revision when we remove all keys around the compacted revision.

This commit fixes the issue by recovering the current revision to at
least the compacted revision.
2016-10-12 13:08:26 -07:00
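The fix described above amounts to clamping the recovered revision; a hypothetical sketch (function name and signature are illustrative, not etcd's mvcc code):

```go
package main

import "fmt"

// restoreCurrentRev never lets the recovered current revision fall below
// the compacted revision, even when every key near the compaction point
// has been deleted.
func restoreCurrentRev(scannedRev, compactedRev int64) int64 {
	if scannedRev < compactedRev {
		return compactedRev
	}
	return scannedRev
}

func main() {
	fmt.Println(restoreCurrentRev(2, 3)) // 3: clamped up to the compacted revision
}
```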
Hitoshi Mitake
ca91f898a2 auth, e2e, clientv3: the root role should be granted access to every key
This commit changes the semantics of the root role. The role should be
able to access every key.

Partially fixes https://github.com/coreos/etcd/issues/6355
2016-10-11 12:19:46 -07:00
Gyu-Ho Lee
fcbada7798 Merge pull request #6622 from luxas/backport_arm_fixes
Backport arm fixes
2016-10-11 12:15:58 -07:00
Jared Hulbert
fad9bdc3e1 etcdserver: atomic access alignment
Most fields accessed with sync/atomic functions are 64-bit aligned, but a couple
are not.  This makes comments out of date and therefore misleading.

Affected fields reordered, comments scrubbed and updated.
2016-10-11 11:48:43 +03:00
Jared Hulbert
198ccb8b7b raftpb: atomic access alignment
The Entry struct has misaligned fields that are accessed atomically.  The
misalignment is caused by the EntryType enum, which the Protocol Buffers
spec forces to be a 32-bit int.

Moving the order of the fields without renumbering them in the .proto file
seems to align the go structure without changing the wire format.
2016-10-11 11:48:43 +03:00
Jared Hulbert
dc5d5c6ac8 raft: atomic access alignment
The relevant structures are properly aligned, however, there is no comment
highlighting the need to keep it aligned as is present elsewhere in the
codebase.

Adding note to keep alignment, in line with similar comments in the codebase.
2016-10-11 11:48:43 +03:00
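For context on the three alignment commits above: Go's sync/atomic package guarantees 64-bit alignment only for the first word of an allocated struct on 32-bit platforms, so 64-bit atomically-accessed fields must be ordered first. A sketch with illustrative types (not etcd's actual structs):

```go
package main

import "sync/atomic"

// On GOARCH=386 and arm, 64-bit atomic operations panic on addresses
// that are not 64-bit aligned.

type misaligned struct {
	typ uint32 // a 32-bit field first (like a protobuf enum) ...
	n   int64  // ... can leave this at a 4-byte offset on 32-bit platforms
}

type aligned struct {
	n   int64  // keep 64-bit atomically-accessed fields first
	typ uint32
}

func main() {
	a := &aligned{}
	atomic.AddInt64(&a.n, 1) // safe on all platforms
	_ = &misaligned{}        // atomic.AddInt64 on .n may panic on 32-bit
}
```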
Gyu-Ho Lee
f771eaca47 version: bump to v3.0.12+git 2016-10-07 16:42:12 -07:00
Gyu-Ho Lee
2d1e2e8e64 version: bump to v3.0.12 2016-10-07 15:14:25 -07:00
Gyu-Ho Lee
6412758177 v3rpc: remove redundant locks 2016-10-07 15:13:56 -07:00
Xiang Li
836c8159f6 v3rpc: lock progress and prevKV map correctly 2016-10-07 15:13:12 -07:00
Gyu-Ho Lee
e406e6e8f4 etcdctl/ctlv3: add 'prev-kv' flag to watch command 2016-10-07 14:23:09 -07:00
Gyu-Ho Lee
2fa2c6284e clientv3: add 'prevKV' field to watch request 2016-10-07 14:22:58 -07:00
Gyu-Ho Lee
2862c4fa12 v3rpc: implement 'prev-kv' watch 2016-10-07 14:22:19 -07:00
Gyu-Ho Lee
6f89fbf8b5 etcdserver: use mvcc.WatchableKV for prev-kv watch 2016-10-07 14:22:00 -07:00
Gyu-Ho Lee
6ae7ec9a3f *: regenerate proto 2016-10-07 14:21:19 -07:00
Gyu-Ho Lee
4a35b1b20a etcdserverpb: add 'prev_kv' to WatchCreateRequest 2016-10-07 14:20:46 -07:00
Gyu-Ho Lee
c859c97ee2 mvccpb: add 'prev_kv' field 2016-10-07 14:19:59 -07:00
Gyu-Ho Lee
a091c629e1 version: bump to v3.0.11+git 2016-10-07 13:25:21 -07:00
Gyu-Ho Lee
96de94a584 version: bump to v3.0.11 2016-10-07 11:27:48 -07:00
Gyu-Ho Lee
e9cd8410d7 integration: add 'prevKV' to TestV3DeleteRange 2016-10-07 11:03:19 -07:00
Gyu-Ho Lee
e37ede1d2e etcdserver: handle 'PrevKV' 2016-10-07 11:00:48 -07:00
Gyu-Ho Lee
4420a29ac4 etcdctl/ctlv3: add 'prev-kv' flag 2016-10-07 10:56:06 -07:00
Gyu-Ho Lee
0544d4bfd0 clientv3: add WithPrevKV OpOption 2016-10-07 10:54:45 -07:00
Gyu-Ho Lee
fe7379f102 clientv3: add Op.prevKV 2016-10-07 10:51:01 -07:00
Gyu-Ho Lee
c76df5052b *: update proto to add 'prev_kv' 2016-10-07 10:47:47 -07:00
Xiang Li
3299cad1c3 *: add put prevkv 2016-10-07 10:39:08 -07:00
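Taken together, the 'prev-kv' commits above let put and watch return the key-value pair as it was before the mutation. A usage sketch against the clientv3 API (the endpoint is an assumption):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // assumed local member
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// WithPrevKV returns the pair that this Put overwrote.
	resp, err := cli.Put(context.TODO(), "foo", "bar2", clientv3.WithPrevKV())
	if err == nil && resp.PrevKv != nil {
		fmt.Printf("previous value: %s\n", resp.PrevKv.Value)
	}

	// Watch events carry the previous KV the same way.
	for wresp := range cli.Watch(context.TODO(), "foo", clientv3.WithPrevKV()) {
		for _, ev := range wresp.Events {
			if ev.PrevKv != nil {
				fmt.Printf("%s changed (was %q)\n", ev.Kv.Key, ev.PrevKv.Value)
			}
		}
	}
}
```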
Anthony Romano
d9ab018c49 integration: test a canceled watch won't return a closing error 2016-10-05 14:19:36 -07:00
Anthony Romano
e853451cd2 clientv3: only return closing error to watcher if context is not canceled
Fixes #6503
2016-10-05 14:19:32 -07:00
Anthony Romano
1becf9d2f5 clientv3: fix race on watch initial revision
The initial revision was being updated in the substream goroutine defer;
this was racing with the resume path fetching the initial revision when
the substream closes during resume. Instead, update the initial revision
whenever the substream processes a new watch response. Since the substream
cannot receive a watch response while it is resuming, the write to the
initial revision is ordered to always happen after the resume read.

Fixes #6586
2016-10-05 10:56:36 -07:00
Anthony Romano
1a712cf187 clientv3: make IsProgressNotify() false on compact event and closed channel
Fixes #6549
2016-10-04 15:13:02 -07:00
Gyu-Ho Lee
023f335f67 wal: set PageWriter offset in file encoder 2016-10-04 15:12:47 -07:00
Gyu-Ho Lee
bf0da78b63 pkg/ioutil: configure pageOffset in NewPageWriter 2016-10-04 15:12:46 -07:00
Anthony Romano
e8473850a2 integration: test canceling watchers when disconnected 2016-10-04 15:12:37 -07:00
Anthony Romano
b836d187fd clientv3: simplify watch synchronization
Was more complicated than it needed to be and didn't really work in the
first place. Restructured watcher registration to use a queue.
2016-10-04 15:12:18 -07:00
Gyu-Ho Lee
9b09229c4d version: bump to v3.0.10+git 2016-09-23 11:13:45 -07:00
Gyu-Ho Lee
546c0f7ed6 version: bump to v3.0.10 2016-09-23 10:49:03 -07:00
sharat
adbad1c9b5 ctlv3: close snapshot file before rename (Windows) 2016-09-23 09:11:02 -07:00
Anthony Romano
273b986751 clientv3: process closed watcherStreams in watcherGrpcStream run loop
Was racing with Watch() when closing the grpc stream on no watchers.

Fixes #6476
2016-09-21 15:52:20 -07:00
Gyu-Ho Lee
5b205729b9 rafthttp: add v3.0.0 to supported streams 2016-09-16 21:54:55 +09:00
Anthony Romano
fe900b09dd version: bump to v3.0.9+git 2016-09-15 15:10:23 -07:00
Anthony Romano
494c012659 version: bump to v3.0.9 2016-09-15 12:56:33 -07:00
Anthony Romano
4abc381ebe clientv3: drain buffered WatchResponses before resuming
Otherwise, the watcherStream can receive WatchResponses in the
middle of a resume, corrupting the stream.

Fixes #6364
2016-09-15 12:38:15 -07:00
Anthony Romano
73c8fdac53 integration: fix compilation for backported Election test 2016-09-15 11:45:37 -07:00
sharat
ee2717493a ctlv3: fix line parsing for Windows 2016-09-15 11:25:53 -07:00
Xiang Li
2435eb9ecd clientv3: balancer panics when call up after close
Fix the issue by adding a simple guard variable.
2016-09-15 18:46:26 +09:00
Anthony Romano
8fb533dabe embed: warn on domain name in listener 2016-09-15 18:46:19 +09:00
Anthony Romano
2f0f5ac504 Revert "Merge pull request #6365 from heyitsanthony/fix-dns-bind"
This reverts commit af5ab7b351, reversing
changes made to da6a0f0594.
2016-09-15 18:43:46 +09:00
Jason E. Aten
9ab811d478 auth: fix range handling bugs.
Test 15, counting from zero, in TestGetMergedPerms
in etcd/auth/range_perm_cache_test.go, was incorrectly
trying to assert that [a, b) merged with [b, "")
should be [a, b). Added a test specifically for
this. This patch fixes the incorrect larger test
and the bugs in the code that it was hiding.

Fixes #6359
2016-09-15 18:41:56 +09:00
Anthony Romano
e0a99fb4ba version: bump to v3.0.8+git 2016-09-09 15:56:31 -07:00
Anthony Romano
d40982fc91 version: bump to v3.0.8 2016-09-09 13:14:44 -07:00
Gyu-Ho Lee
fe3a1cc31b wal: fix error type 2016-09-09 09:11:25 +09:00
Gyu-Ho Lee
70713706a1 wal: fix err shadowing (go vet) 2016-09-09 09:07:48 +09:00
Xiang Li
0054e7e89b etcdctl: restore should create a snapshot
Restore should create a snapshot, so the new db file
can be sent to a newly joined member.
2016-09-09 09:03:51 +09:00
Anthony Romano
97f718b504 fileutil: windows OpenDir
Windows needs to open a directory with write access to fsync but the go
runtime won't open directories that way.
2016-09-09 09:01:56 +09:00
Anthony Romano
202da9270e wal: fsync directory after wal file rename
Fixes #6368
2016-09-09 09:01:49 +09:00
Anthony Romano
6e83ec0ed7 etcdmain: reject binding listeners to domain names
Fixes #6336
2016-09-07 08:08:35 +09:00
Jason E. Aten
5c44cdfdaa etcdctl/ctlv3: don't crash when we should prompt for pw.
When 'etcdctl --user name get blah' is invoked to
prompt for a password, don't panic.

Addresses the segfault part of #6343
2016-09-04 09:02:50 +09:00
Anthony Romano
09a239f040 e2e: add quoted key/value to txn test 2016-09-04 09:02:47 +09:00
Anthony Romano
3faff8b2e2 etcdctl: fix quoted string handling in txn and watch
Fixes #6315
2016-09-04 09:02:28 +09:00
Anthony Romano
2345fda18e version: bump to v3.0.7+git 2016-08-31 16:41:06 -07:00
Gyu-Ho Lee
5695120efc version: bump to v3.0.7 2016-08-31 09:49:24 -07:00
Gyu-Ho Lee
183293e061 wal: lowercase segmentSizeBytes 2016-08-31 09:48:30 -07:00
Jason E. Aten
4b48876f0e clientv3/concurrency: allow election on prefixes of keys.
After winning an election or obtaining a lock, we
auto-append a slash after the provided key prefix.
This avoids the previous deadlock due to waiting
on the wrong key.

Fixes #6278

Conflicts:
	clientv3/concurrency/election.go
	clientv3/concurrency/mutex.go
2016-08-31 09:46:05 -07:00
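A sketch of the prefix normalization idea in the commit above; the helper name is hypothetical and the real change lives inside clientv3/concurrency:

```go
package main

import (
	"fmt"
	"strings"
)

// withSlash pins the prefix boundary so that a range over pfx+"/" only sees
// this election's candidate keys, not keys of a sibling prefix that merely
// shares leading bytes (e.g. "job" vs "jobs").
func withSlash(pfx string) string {
	return strings.TrimRight(pfx, "/") + "/"
}

func main() {
	fmt.Println(withSlash("my-election"))  // "my-election/"
	fmt.Println(withSlash("my-election/")) // unchanged: "my-election/"
}
```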
Aaron Lehmann
5089bf58fb wal: hold file lock while renaming WAL directory on non-Windows
Windows requires this lock to be released before the directory is
renamed. But on unix-like operating systems, releasing the lock and
trying to reacquire it immediately can be flaky if a process is forked
around the same time. The file descriptors are marked as close-on-exec
by the Go runtime, but there is a window between the fork and exec where
another process will be holding the lock.
2016-08-31 09:39:57 -07:00
Anthony Romano
480a347179 wal: use page buffered writer for writing records
Forces torn writes to only happen on sector boundaries.

Fixes #6271
2016-08-30 21:06:36 -07:00
Anthony Romano
59e560c7a7 ioutil: add page buffered writer
A buffered writer that only writes full pages or when explicitly flushed.
2016-08-30 21:06:33 -07:00
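A rough sketch of the page-buffering idea, not etcd's actual pkg/ioutil implementation; the 4096-byte page size is an assumption:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

const pageSize = 4096 // assumed page size

// pageWriter forwards only whole-page multiples to the underlying writer;
// a partial tail page stays buffered until Flush is called explicitly, so
// torn writes can only land on page boundaries.
type pageWriter struct {
	w   io.Writer
	buf bytes.Buffer
}

func (p *pageWriter) Write(b []byte) (int, error) {
	p.buf.Write(b)
	if n := (p.buf.Len() / pageSize) * pageSize; n > 0 {
		if _, err := p.w.Write(p.buf.Next(n)); err != nil {
			return 0, err
		}
	}
	return len(b), nil
}

func (p *pageWriter) Flush() error {
	_, err := p.w.Write(p.buf.Next(p.buf.Len()))
	return err
}

func main() {
	var out bytes.Buffer
	pw := &pageWriter{w: &out}
	pw.Write(make([]byte, 5000)) // one full page is written through
	fmt.Println(out.Len())       // 4096
	pw.Flush()                   // the 904-byte tail follows on flush
	fmt.Println(out.Len())       // 5000
}
```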
Xiang Li
0bd9bea2e9 etcdserver: allow zero kv index for cluster upgrade
If a user upgrades etcd from 2.3.x to 3.0 and shuts down the
cluster immediately without triggering any new backend writes,
then the consistent index in the backend would be zero.

The user cannot restart etcdserver due to today's strict index
match checking. We now have to loosen this a bit for this case.
2016-08-30 21:05:20 -07:00
Anthony Romano
bd7581ac59 wal: zero out wal tail past its first zero record
Whenever the WAL is opened for writes, it should write zeroes to its tail
starting from the first zero record. Otherwise, if there are entries past
the first zero record due to a torn write, any new writes that overlap the
old entries will lead to a garbage record on the tail and cause a CRC
mismatch.
2016-08-26 14:27:53 -07:00
Anthony Romano
db378c3d26 wal: test for truncation on torn writes 2016-08-26 14:27:51 -07:00
Anthony Romano
23740162dc fileutil: add ZeroToEnd for zeroing files 2016-08-26 14:27:49 -07:00
Anthony Romano
96422a955f discovery: reject IP address records in SRVGetCluster
Was incorrectly trimming the trailing '.' from the target; this in turn
caused the etcd server to accept any SRV record with an IP target
instead of only targets with A records.
2016-08-24 09:14:47 -07:00
Gyu-Ho Lee
6fd996fdac version: bump to v3.0.6+git 2016-08-19 12:38:13 -07:00
Gyu-Ho Lee
9efa00d103 version: bump to v3.0.6 2016-08-19 12:03:02 -07:00
Xiang Li
72d30f4c34 *: minor cleanup for lease 2016-08-19 11:53:38 -07:00
Xiang Li
2e92779777 mvcc: attach keys to leases after recover all state
The previous logic is wrong. When we have history like Put(foo, bar, lease1),
and Put(foo, bar, lease2), we will end up with attaching foo to two leases 1 and
2. Similar things can happen for detach by clearing the lease of a key.

Now we try to fix this by starting to attach leases at the end of the recovery.
We use a map to keep the last lease attachment state.
2016-08-19 11:49:05 -07:00
Xiang Li
404415b1e3 lease: do lease deletion in the kv txn 2016-08-19 11:49:05 -07:00
Xiang Li
07e421d245 lease: delete kvs in a txn 2016-08-19 11:49:05 -07:00
Xiang Li
a7d6e29275 etcdserver: always recover lessor first 2016-08-19 11:49:05 -07:00
Gyu-Ho Lee
1a8b295dab vendor: update grpc/grpc-go for clientconn patch 2016-08-19 11:46:51 -07:00
Anthony Romano
ffc45cc066 rafthttp: fix race between streamReader.stop() and connection closer 2016-08-19 11:45:39 -07:00
Gyu-Ho Lee
0db1ba8093 version: bump to v3.0.5+git 2016-08-19 11:11:10 -07:00
Gyu-Ho Lee
43f7c94ac8 version: bump to v3.0.5 2016-08-19 10:20:37 -07:00
Hongchao Deng
93d13fb5b4 integration: NewClusterV3 should launch cluster before creating clients 2016-08-18 14:54:45 -07:00
Gyu-Ho Lee
6a1e3e73dd vendor: boltdb/bolt v1.3.0 for Go 1.7
In case somebody wants to build this branch with Go 1.7
2016-08-18 14:41:34 -07:00
Xiang Li
ec576ee5ac mvcc: fix count 2016-08-16 12:13:33 -07:00
Anthony Romano
606d79afc4 clientv3: use failfast and retry wrappers for at-most-once rpcs 2016-08-16 12:12:44 -07:00
Anthony Romano
f4d15a430c integration: treat client TLS connecting to insecure server as timeout 2016-08-16 12:09:42 -07:00
Anthony Romano
4a841459f1 clientv3: respect up/down notifications from grpc
Fixes #5842
2016-08-16 12:09:38 -07:00
Gyu-Ho Lee
ee8c577fc0 vendor: update grpc 2016-08-16 12:09:16 -07:00
Anthony Romano
8ae0f94cd7 clientv3: only block on New() when DialTimeout > 0
Fixes #6162
2016-08-12 12:03:33 -07:00
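A usage sketch of the behavior described above (endpoint assumed): with a nonzero DialTimeout, New blocks until the connection is up or the timeout fires; with DialTimeout zero, it returns immediately and connects lazily.

```go
package main

import (
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // assumed endpoint
		DialTimeout: 5 * time.Second,            // 0 would make New non-blocking
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer cli.Close()
}
```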
Anthony Romano
69a97863a9 clientv3: handle watchGrpcStream shutdown if prior to goroutine start
Fixes #6141
2016-08-09 20:59:09 -07:00
Anthony Romano
12c7e4a9f8 clientv3: close watcher stream once all watchers detach
Fixes #6134
2016-08-09 10:44:21 -07:00
Anthony Romano
23cced240b transport: add ServerName to TLSConfig and add ValidateSecureEndpoints
ServerName prevents accepting forged SRV records with cross-domain
credentials. ValidateSecureEndpoints prevents downgrade attacks from SRV
records.
2016-08-04 11:00:28 -07:00
Anthony Romano
e73c928d85 etcdctl: set ServerName for TLS when using --discovery-srv 2016-08-04 11:00:25 -07:00
Anthony Romano
779ad90f9a Documentation: update clustering guide about PKI SRV record forging 2016-08-04 11:00:22 -07:00
Anthony Romano
dca1740be5 etcdmain: check TLS on gateway SRV records 2016-08-04 11:00:15 -07:00
Anthony Romano
487b34d857 embed: use ServerName on TLS DNS discovery w/o CA file 2016-08-04 10:56:11 -07:00
Gyu-Ho Lee
a31283cf51 v2http: use guest access in non-TLS mode
Fix https://github.com/coreos/etcd/issues/6075.
2016-08-04 10:52:42 -07:00
Gyu-Ho Lee
b722bedf8a version: bump to v3.0.4+git 2016-07-27 15:30:31 -07:00
Gyu-Ho Lee
d53923c636 version: bump to v3.0.4 2016-07-27 13:40:42 -07:00
Gyu-Ho Lee
9356665d60 *: regenerate proto files for grpc-gateway 2016-07-27 13:40:07 -07:00
Gyu-Ho Lee
0932d17395 scripts/genproto: use latest grpc-gateway c8ec92d0 2016-07-27 13:39:00 -07:00
Gyu-Ho Lee
2a3ea3f996 Dockerfile-release: add '/var/lib/etcd/'
We have '/var/etcd/' in the Dockerfile for historical reasons.
In most cases, users store data in '/var/lib/etcd/'.
2016-07-27 13:38:58 -07:00
Anthony Romano
e5a5e5f7c6 etcdserver, api, membership: don't race on setting version
Fixes #6029
2016-07-27 09:39:39 -07:00
Gyu-Ho Lee
00bdd907d5 Documentation: fix links in upgrades 2016-07-26 13:16:15 -07:00
Gyu-Ho Lee
8eab756d3f *: regenerate proto 2016-07-25 21:36:07 -07:00
Xiang Li
3d9b1d1635 scripts:genproto.sh: update grpc-gateway 2016-07-25 21:31:33 -07:00
Xiang Li
4218193dd7 etcdserverpb: add missing deleterange annotation 2016-07-25 21:31:30 -07:00
Dongsu Park
6499d01c9b etcdmain: correctly check return values from SdNotify()
SdNotify() now returns 2 values, sent and err. So startEtcdOrProxyV2()
needs to check the 2 return values correctly. As the 2 values are
independent of each other, error checking needs to be slightly updated
too.

SdNotifyNoSocket, which was previously provided by go-systemd, does not
exist any more. In that case (false, nil) will be returned instead.
2016-07-21 11:00:37 -07:00
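A sketch of the updated check, assuming go-systemd's present two-argument SdNotify signature; sent and err must be inspected independently:

```go
package main

import (
	"log"

	"github.com/coreos/go-systemd/daemon"
)

func main() {
	// (false, nil) replaces the old SdNotifyNoSocket sentinel: it simply
	// means no notification socket was available.
	sent, err := daemon.SdNotify(false, "READY=1")
	if err != nil {
		log.Printf("failed to notify systemd for readiness: %v", err)
	}
	if !sent && err == nil {
		log.Printf("no systemd notification socket; skipping")
	}
}
```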
Dongsu Park
83b39b4f6b vendor: update go-systemd
Godeps.json and vendor need to be updated according to the newest
go-systemd, as SdNotify() in go-systemd has changed its API.
2016-07-21 11:00:34 -07:00
Anthony Romano
21092ca715 integration: change timeouts for TestWatchWithProgressNotify
a) 2 * progress interval was passing with dropped notifies
b) waitResponse was waiting so long that it expected a dropped notify
2016-07-21 10:59:54 -07:00
Anthony Romano
a4e79d7ebf v3rpc: don't elide next progress notification on progress notification
Fixes #5878
2016-07-21 10:59:51 -07:00
Anthony Romano
846883a979 rpctypes, clientv3: retry RPC on EtcdStopped
Fixes #5983
2016-07-21 10:59:27 -07:00
Anthony Romano
c7a3edb90f fileutil: rework purge tests so they don't poll
Fixes #5966
2016-07-21 10:57:06 -07:00
Gyu-Ho Lee
f308a27e91 e2e: test auth enabled with CN name cert 2016-07-21 10:55:56 -07:00
Gyu-Ho Lee
1d37154793 v2http: test with 'ClientCertAuthEnabled' 2016-07-21 10:55:54 -07:00
Gyu-Ho Lee
092d069d3e v2http: set 'ClientCertAuthEnabled' in client.go 2016-07-21 10:55:51 -07:00
Gyu-Ho Lee
ab5c4e23bd v2http: add 'ClientCertAuthEnabled' in handlers 2016-07-21 10:55:44 -07:00
Gyu-Ho Lee
59bf6693c7 embed: set 'ClientCertAuthEnabled' 2016-07-21 10:55:30 -07:00
Gyu-Ho Lee
affcbfbf06 etcdserver: add 'ClientCertAuthEnabled' option 2016-07-21 10:52:14 -07:00
Gyu-Ho Lee
e81df2648c v2http: move 'testdata' from 'etcdhttp' 2016-07-21 10:52:09 -07:00
rob boll
27a450235a v2http: client cert cn authentication
Introduce client certificate authentication using the certificate CN.
2016-07-21 10:52:06 -07:00
rob boll
42454f9ed8 v2http: refactor http basic auth
refactor http basic auth code to combine basic auth extraction and validation
2016-07-21 10:52:04 -07:00
Anthony Romano
7ea8860670 e2e: use a single member cluster in TestCtlV3Migrate
Occasionally migrate would fail because a minority node would be missing
v2 keys. Instead, just use a single member cluster.

Fixes #5992
2016-07-21 10:50:49 -07:00
jesse.millan
2fb72029ef etcdctl: Add support for formatting output of ls command in json
The ls command will check for and honor json or extended output formats.

Fixes #5993
2016-07-21 10:50:47 -07:00
Xiang Li
77af59796d clientv3/integration: fix race in TestWatchCompactRevision 2016-07-21 10:50:46 -07:00
Anthony Romano
b732f96e07 integration: drain keepalives in TestLeaseKeepAliveCloseAfterDisconnectRevoke
Fixes #5900
2016-07-21 10:50:44 -07:00
Gyu-Ho Lee
602198105d *: regenerate proto 2016-07-18 11:08:51 -07:00
Gyu-Ho Lee
e513cbd562 vendor: update 'gogo/protobuf' 2016-07-18 11:06:58 -07:00
Gyu-Ho Lee
4198369dd0 scripts: update gogo/protobuf, use 'gofast' plugin
- Fix https://github.com/coreos/etcd/issues/5942
- Partial fix for https://github.com/coreos/etcd/issues/5865
2016-07-18 11:06:55 -07:00
Gyu-Ho Lee
debecc1868 vendor: change to 'grpc-ecosystem' from 'gengo' 2016-07-18 11:06:33 -07:00
Gyu-Ho Lee
140fc04c62 *: regenerate proto files 2016-07-18 11:06:17 -07:00
Gyu-Ho Lee
7e34665774 scripts: update genproto with grpc-ecosystem 2016-07-18 11:03:54 -07:00
Gyu-Ho Lee
be541f3641 Documentation: change to grpc-ecosystem 2016-07-18 11:03:52 -07:00
Gyu-Ho Lee
e582416994 embed: change import path to 'grpc-ecosystem' 2016-07-18 11:03:50 -07:00
Xiang Li
842145ecb3 *: fix issue found in fast lease renew 2016-07-18 11:03:20 -07:00
Gyu-Ho Lee
d68936c4da version: bump to v3.0.3+git 2016-07-15 11:51:50 -07:00
Gyu-Ho Lee
24a90baff8 version: bump to v3.0.3 2016-07-15 11:26:14 -07:00
Anthony Romano
6b7891d5f1 integration: add FailFast(false) to failing tests 2016-07-14 19:01:17 -07:00
Anthony Romano
129b271ff8 clientv3: use grpc.FailFast(false) for all calls 2016-07-14 19:00:46 -07:00
Anthony Romano
a11ee983c4 vendor: update grpc
Fixes #5871
2016-07-14 18:47:02 -07:00
Anthony Romano
bec58d5f58 integration: test grpc error equivalence with Error() 2016-07-14 18:47:00 -07:00
Anthony Romano
4b6f9b79e6 rpctypes: test error equivalence with Error()
grpc.Errorf() now returns *rpcError, which makes comparisons shallow.
2016-07-14 18:46:58 -07:00
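A toy illustration of why comparing by Error() string still works when == does not; the errors here are stand-ins, not the real rpctypes values:

```go
package main

import (
	"errors"
	"fmt"
)

// sentinel plays the role of a client-side error value such as
// rpctypes.ErrEmptyKey.
var sentinel = errors.New("etcdserver: key is not provided")

func main() {
	// grpc.Errorf now wraps messages in a fresh *rpcError value, so the
	// received error is a distinct allocation with the same text.
	received := errors.New("etcdserver: key is not provided")

	fmt.Println(received == sentinel)                 // false: shallow comparison
	fmt.Println(received.Error() == sentinel.Error()) // true: compare messages
}
```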
Xiang Li
f7ec7f025b embed: only get initial cluster setting if the member is not init 2016-07-14 13:01:29 -07:00
Gyu-Ho Lee
34c76a47c1 Revert "Dockerfile: use 'ENTRYPOINT' instead of 'CMD'" 2016-07-14 12:24:06 -07:00
Xiang Li
525653ff51 raft: do not change RecentActive when resetState for progress 2016-07-12 09:59:42 -07:00
Xiang Li
a647b79038 etcdserver: fix TestSnap 2016-07-11 13:59:12 -07:00
Xiang Li
9bc1d08753 etcdctl: only takes 127.0.0.1:2379 as default endpoint 2016-07-11 13:41:53 -07:00
Gyu-Ho Lee
6a79bda691 e2e: add basic upgrade tests 2016-07-11 13:41:50 -07:00
Gyu-Ho Lee
1edfcd6859 test: add upgrade test flag 2016-07-11 13:41:47 -07:00
Gyu-Ho Lee
f51fdbccec version: bump to v3.0.2+git 2016-07-08 12:09:09 -07:00
Gyu-Ho Lee
faeeb2fc75 version: bump to v3.0.2 2016-07-08 11:45:18 -07:00
Xiang Li
d50c487132 v3rpc: lock progress and prevKV map correctly 2016-07-08 10:16:10 -07:00
Anthony Romano
b837feffe4 client/integration: test v2 client one shot operations 2016-07-07 17:30:09 -07:00
Anthony Romano
4d89640195 client: make set/delete one shot operations
Old behavior would retry set and delete even if there's an error. This
can lead to the client returning an error for deleting twice, instead
of returning an error for an indeterminate state.

Fixes #5832
2016-07-07 17:30:04 -07:00
westhood
1292d453c3 clientv3: fix sync base
It is not correct to use WithPrefix. Range end will change in every
internal batch.
2016-07-07 14:21:43 -07:00
westhood
ec20b381ed clientv3: add public function to get prefix range end 2016-07-07 14:21:41 -07:00
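The new helper computes the exclusive end of the range covering every key with a given prefix, which is what the sync path should use instead of WithPrefix; a minimal sketch:

```go
package main

import (
	"fmt"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// The range [ "foo", GetPrefixRangeEnd("foo") ) covers all keys that
	// begin with "foo"; the end is the prefix with its last byte incremented.
	fmt.Println(clientv3.GetPrefixRangeEnd("foo")) // "fop"
}
```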
Secret
37cc3f5262 Dockerfile: use 'ENTRYPOINT' instead of 'CMD'
Use ENTRYPOINT so people can pass flags to etcd
without specifying the binary.

Signed-off-by: Secret <haichuang221@163.com>
2016-07-05 11:40:47 -07:00
Xiang Li
7f1940e5ed etcdserver: commit before sending snapshot 2016-07-05 11:06:54 -07:00
Xiang Li
caccf8e5e6 v3rpc: do not panic on user error for watch 2016-07-05 11:06:35 -07:00
Anthony Romano
ef65dfe2eb wal: release wal locks before renaming directory on init
Fixes #5852
2016-07-05 11:05:51 -07:00
Gyu-Ho Lee
ff6c6916f2 etcdserver/api: print only major.minor version API
Before

2016-07-01 14:57:50.927170 I | api: enabled capabilities for version 3.0.0

After

2016-07-01 14:57:50.927170 I | api: enabled capabilities for version 3.0
2016-07-01 15:19:53 -07:00
Gyu-Ho Lee
3dfe8765d3 version: bump to v3.0.1+git 2016-07-01 14:53:20 -07:00
Gyu-Ho Lee
a4a52cb15d version: bump to v3.0.1 2016-07-01 13:58:37 -07:00
Gyu-Ho Lee
014970930a *: test, docs with go1.6+
etcd v3 uses http/2, which doesn't work well with go1.5
2016-07-01 11:59:37 -07:00
Geert-Johan Riemer
4628be982c Documentation: fix typo in api_grpc_gateway.md 2016-07-01 11:59:35 -07:00
Anthony Romano
ff55e5a188 etcdserver: exit on missing backend only if semver is >= 3.0.0 2016-07-01 11:59:32 -07:00
Gyu-Ho Lee
bf0898266c release: fix Dockerfile etcd binary paths
release script uses binary files in 'release/image-docker',
not the ones in "bin/". Tested with v3.0.0 release.
2016-06-30 12:27:34 -07:00
Gyu-Ho Lee
b9d69f7698 version: bump to v3.0.0+git 2016-06-30 11:37:05 -07:00
Gyu-Ho Lee
6f48bda7ac version: bump to v3.0.0 2016-06-30 10:04:59 -07:00
Gyu-Ho Lee
316534e09e *: remove beta from docs 2016-06-30 10:04:34 -07:00
Jeff Zellner
3cecbdb464 hack: install goreman in tls-setup example 2016-06-30 09:33:19 -07:00
Jeff Zellner
62f11e43ee hack: add tls-setup example generated certs to gitignore 2016-06-30 09:33:12 -07:00
Anthony Romano
064c1585ee Merge pull request #5822 from raoofm/patch-9
Doc: fix typo in dev-guide.md
2016-06-30 09:06:32 -07:00
Raoof Mohammed
15300a1eb8 Doc: fix typo in dev-guide.md 2016-06-30 10:36:50 -04:00
Gyu-Ho Lee
58dd047ee4 ctlv3: make flags, commands formats consistent
1. Capitalize first letter
2. Remove period at the end

(followed the pattern in linux coreutil man page)
2016-06-29 16:16:56 -07:00
Anthony Romano
4b42ea6cd7 clientv3: only use closeErr on watch when donec is closed
Fixes #5800
2016-06-28 17:48:44 -07:00
Gyu-Ho Lee
53c27ae621 benchmark: fix Compact request 2016-06-28 14:15:32 -07:00
Xiang Li
269de67bde mvcc: do not hash consistent index 2016-06-28 12:29:36 -07:00
Anthony Romano
8bbccf1047 clientv3, ctl3, clientv3/integration: add compact response to compact 2016-06-28 12:29:32 -07:00
627 changed files with 25538 additions and 33114 deletions

.gitignore

@@ -1,6 +1,5 @@
/coverage
/gopath
/gopath.proto
/go-bindata
/machine*
/bin


@@ -4,8 +4,7 @@ go_import_path: github.com/coreos/etcd
sudo: false
go:
- 1.7.3
- tip
- 1.6.4
env:
global:
@@ -14,7 +13,6 @@ env:
- TARGET=amd64
- TARGET=arm64
- TARGET=arm
- TARGET=386
- TARGET=ppc64le
matrix:
@@ -22,12 +20,12 @@ matrix:
allow_failures:
- go: tip
exclude:
- go: 1.6
env: TARGET=arm64
- go: tip
env: TARGET=arm
- go: tip
env: TARGET=arm64
- go: tip
env: TARGET=386
- go: tip
env: TARGET=ppc64le
@@ -45,19 +43,12 @@ before_install:
# disable godep restore override
install:
- pushd cmd/etcd && go get -t -v ./... && popd
- pushd cmd/ && go get -t -v ./... && popd
script:
- >
case "${TARGET}" in
amd64)
GOARCH=amd64 ./test
;;
386)
GOARCH=386 PASSES="build unit" ./test
;;
*)
# test building out of gopath
GO_BUILD_FLAGS="-a -v" GOPATH="" GOARCH="${TARGET}" ./build
;;
esac
if [ "${TARGET}" == "amd64" ]; then
GOARCH="${TARGET}" ./test;
else
GOARCH="${TARGET}" ./build;
fi


@@ -14,7 +14,7 @@ GCE n1-highcpu-2 machine type
## Testing
Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
## Performance
@@ -48,5 +48,5 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req
| 256 | 64 | all servers | 1033 | 121.5 |
| 256 | 256 | all servers | 3061 | 119.3 |
[hey]: https://github.com/rakyll/hey
[boom]: https://github.com/rakyll/boom
[hack-benchmark]: /hack/benchmark/


@@ -24,7 +24,7 @@ Go OS/Arch: linux/amd64
## Testing
Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool](https://github.com/rakyll/boom) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
The performance is calculated through results of 100 benchmark rounds.
@@ -66,4 +66,4 @@ The performance is calculated through results of 100 benchmark rounds.
- Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
- Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.


@@ -24,7 +24,7 @@ Also, we use 3 etcd 2.1.0 alpha-stage members to form cluster to get base perfor
## Testing
Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
Bootstrap another machine and use the [boom HTTP benchmark tool][boom] to send requests to each etcd member. Check the [benchmark hacking guide][hack-benchmark] for detailed instructions.
## Performance
@@ -66,7 +66,7 @@ Bootstrap another machine and use the [hey HTTP benchmark tool][hey] to send req
- write QPS to all servers is increased by 30~80% because follower could receive latest commit index earlier and commit proposals faster.
[hey]: https://github.com/rakyll/hey
[boom]: https://github.com/rakyll/boom
[c7146bd5]: https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144
[etcd-2.1-benchmark]: etcd-2-1-0-alpha-benchmarks.md
[hack-benchmark]: /hack/benchmark/


@@ -59,7 +59,6 @@ for grpc-gateway
| LeaseGrant | LeaseGrantRequest | LeaseGrantResponse | LeaseGrant creates a lease which expires if the server does not receive a keepAlive within a given time to live period. All keys attached to the lease will be expired and deleted if the lease expires. Each expired key generates a delete event in the event history. |
| LeaseRevoke | LeaseRevokeRequest | LeaseRevokeResponse | LeaseRevoke revokes a lease. All keys attached to the lease will expire and be deleted. |
| LeaseKeepAlive | LeaseKeepAliveRequest | LeaseKeepAliveResponse | LeaseKeepAlive keeps the lease alive by streaming keep alive requests from the client to the server and streaming keep alive responses from the server to the client. |
| LeaseTimeToLive | LeaseTimeToLiveRequest | LeaseTimeToLiveResponse | LeaseTimeToLive retrieves lease information. |
@@ -427,7 +426,7 @@ Empty field.
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | key is the first key to delete in the range. | bytes |
| range_end | range_end is the key following the last key to delete for the range [key, range_end). If range_end is not given, the range is defined to contain only the key argument. If range_end is one bit larger than the given key, then the range is all the all keys with the prefix (the given key). If range_end is '\0', the range is all keys greater than or equal to the key argument. | bytes |
| range_end | range_end is the key following the last key to delete for the range [key, range_end). If range_end is not given, the range is defined to contain only the key argument. If range_end is '\0', the range is all keys greater than or equal to the key argument. | bytes |
| prev_kv | If prev_kv is set, etcd gets the previous key-value pairs before deleting it. The previous key-value pairs will be returned in the delete response. | bool |
@@ -511,27 +510,6 @@ Empty field.
##### message `LeaseTimeToLiveRequest` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| ID | ID is the lease ID for the lease. | int64 |
| keys | keys is true to query all the keys attached to this lease. | bool |
##### message `LeaseTimeToLiveResponse` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| header | | ResponseHeader |
| ID | ID is the lease ID from the keep alive request. | int64 |
| TTL | TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds. | int64 |
| grantedTTL | GrantedTTL is the initial granted time in seconds upon lease creation/renewal. | int64 |
| keys | Keys is the list of keys attached to this lease. | (slice of) bytes |
##### message `Member` (etcdserver/etcdserverpb/rpc.proto)
| Field | Description | Type |
@@ -641,10 +619,6 @@ Empty field.
| serializable | serializable sets the range request to use serializable member-local reads. Range requests are linearizable by default; linearizable requests have higher latency and lower throughput than serializable requests but reflect the current consensus of the cluster. For better performance, in exchange for possible stale reads, a serializable range request is served locally without needing to reach consensus with other nodes in the cluster. | bool |
| keys_only | keys_only when set returns only the keys and not the values. | bool |
| count_only | count_only when set returns only the count of the keys in the range. | bool |
| min_mod_revision | min_mod_revision is the lower bound for returned key mod revisions; all keys with lesser mod revisions will be filtered away. | int64 |
| max_mod_revision | max_mod_revision is the upper bound for returned key mod revisions; all keys with greater mod revisions will be filtered away. | int64 |
| min_create_revision | min_create_revision is the lower bound for returned key create revisions; all keys with lesser create revisions will be filtered away. | int64 |
| max_create_revision | max_create_revision is the upper bound for returned key create revisions; all keys with greater create revisions will be filtered away. | int64 |
@@ -762,10 +736,9 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
| Field | Description | Type |
| ----- | ----------- | ---- |
| key | key is the key to register for watching. | bytes |
| range_end | range_end is the end of the range [key, range_end) to watch. If range_end is not given, only the key argument is watched. If range_end is equal to '\0', all keys greater than or equal to the key argument are watched. If the range_end is one bit larger than the given key, then all keys with the prefix (the given key) will be watched. | bytes |
| range_end | range_end is the end of the range [key, range_end) to watch. If range_end is not given, only the key argument is watched. If range_end is equal to '\0', all keys greater than or equal to the key argument are watched. | bytes |
| start_revision | start_revision is an optional revision to watch from (inclusive). No start_revision is "now". | int64 |
| progress_notify | progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events. It is useful when clients wish to recover a disconnected watcher starting from a recent known revision. The etcd server may decide how often it will send notifications based on current load. | bool |
| filters | filter out put event. filter out delete event. filters filter the events at server side before it sends back to the watcher. | (slice of) FilterType |
| prev_kv | If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned. | bool |
@@ -825,22 +798,6 @@ From google paxosdb paper: Our implementation hinges around a powerful primitive
##### message `LeaseInternalRequest` (lease/leasepb/lease.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| LeaseTimeToLiveRequest | | etcdserverpb.LeaseTimeToLiveRequest |
##### message `LeaseInternalResponse` (lease/leasepb/lease.proto)
| Field | Description | Type |
| ----- | ----------- | ---- |
| LeaseTimeToLiveResponse | | etcdserverpb.LeaseTimeToLiveResponse |
##### message `Permission` (auth/authpb/auth.proto)
Permission is a single entity


@@ -636,33 +636,6 @@
]
}
},
"/v3alpha/kv/lease/timetolive": {
"post": {
"summary": "LeaseTimeToLive retrieves lease information.",
"operationId": "LeaseTimeToLive",
"responses": {
"200": {
"description": "",
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseTimeToLiveResponse"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/etcdserverpbLeaseTimeToLiveRequest"
}
}
],
"tags": [
"Lease"
]
}
},
"/v3alpha/kv/put": {
"post": {
"summary": "Put puts the given key into the key-value store.\nA put request increments the revision of the key-value store\nand generates one event in the event history.",
@@ -978,8 +951,7 @@
"enum": [
"EQUAL",
"GREATER",
"LESS",
"NOT_EQUAL"
"LESS"
],
"default": "EQUAL"
},
@@ -1021,15 +993,6 @@
],
"default": "KEY"
},
"WatchCreateRequestFilterType": {
"type": "string",
"enum": [
"NOPUT",
"NODELETE"
],
"default": "NOPUT",
"description": "- NOPUT: filter out put event.\n - NODELETE: filter out delete event."
},
"authpbPermission": {
"type": "object",
"properties": {
@@ -1519,7 +1482,7 @@
"range_end": {
"type": "string",
"format": "byte",
"description": "range_end is the key following the last key to delete for the range [key, range_end).\nIf range_end is not given, the range is defined to contain only the key argument.\nIf range_end is one bit larger than the given key, then the range is all\nthe all keys with the prefix (the given key).\nIf range_end is '\\0', the range is all keys greater than or equal to the key argument."
"description": "range_end is the key following the last key to delete for the range [key, range_end).\nIf range_end is not given, the range is defined to contain only the key argument.\nIf range_end is '\\0', the range is all keys greater than or equal to the key argument."
}
}
},
@@ -1642,52 +1605,6 @@
}
}
},
"etcdserverpbLeaseTimeToLiveRequest": {
"type": "object",
"properties": {
"ID": {
"type": "string",
"format": "int64",
"description": "ID is the lease ID for the lease."
},
"keys": {
"type": "boolean",
"format": "boolean",
"description": "keys is true to query all the keys attached to this lease."
}
}
},
"etcdserverpbLeaseTimeToLiveResponse": {
"type": "object",
"properties": {
"ID": {
"type": "string",
"format": "int64",
"description": "ID is the lease ID from the keep alive request."
},
"TTL": {
"type": "string",
"format": "int64",
"description": "TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds."
},
"grantedTTL": {
"type": "string",
"format": "int64",
"description": "GrantedTTL is the initial granted time in seconds upon lease creation/renewal."
},
"header": {
"$ref": "#/definitions/etcdserverpbResponseHeader"
},
"keys": {
"type": "array",
"items": {
"type": "string",
"format": "byte"
},
"description": "Keys is the list of keys attached to this lease."
}
}
},
"etcdserverpbMember": {
"type": "object",
"properties": {
@@ -1866,26 +1783,6 @@
"format": "int64",
"description": "limit is a limit on the number of keys returned for the request."
},
"max_create_revision": {
"type": "string",
"format": "int64",
"description": "max_create_revision is the upper bound for returned key create revisions; all keys with\ngreater create revisions will be filtered away."
},
"max_mod_revision": {
"type": "string",
"format": "int64",
"description": "max_mod_revision is the upper bound for returned key mod revisions; all keys with\ngreater mod revisions will be filtered away."
},
"min_create_revision": {
"type": "string",
"format": "int64",
"description": "min_create_revision is the lower bound for returned key create revisions; all keys with\nlesser create trevisions will be filtered away."
},
"min_mod_revision": {
"type": "string",
"format": "int64",
"description": "min_mod_revision is the lower bound for returned key mod revisions; all keys with\nlesser mod revisions will be filtered away."
},
"range_end": {
"type": "string",
"format": "byte",
@@ -2107,13 +2004,6 @@
"etcdserverpbWatchCreateRequest": {
"type": "object",
"properties": {
"filters": {
"type": "array",
"items": {
"$ref": "#/definitions/WatchCreateRequestFilterType"
},
"description": "filters filter the events at server side before it sends back to the watcher."
},
"key": {
"type": "string",
"format": "byte",
@@ -2132,7 +2022,7 @@
"range_end": {
"type": "string",
"format": "byte",
"description": "range_end is the end of the range [key, range_end) to watch. If range_end is not given,\nonly the key argument is watched. If range_end is equal to '\\0', all keys greater than\nor equal to the key argument are watched.\nIf the range_end is one bit larger than the given key,\nthen all keys with the prefix (the given key) will be watched."
"description": "range_end is the end of the range [key, range_end) to watch. If range_end is not given,\nonly the key argument is watched. If range_end is equal to '\\0', all keys greater than\nor equal to the key argument are watched."
},
"start_revision": {
"type": "string",


@@ -1,65 +0,0 @@
# gRPC naming and discovery
etcd provides a gRPC resolver to support an alternative name system that fetches endpoints from etcd for discovering gRPC services. The underlying mechanism is based on watching updates to keys prefixed with the service name.
## Using etcd discovery with go-grpc
The etcd client provides a gRPC resolver for resolving gRPC endpoints with an etcd backend. The resolver is initialized with an etcd client and given a target for resolution:
```go
import (
"github.com/coreos/etcd/clientv3"
etcdnaming "github.com/coreos/etcd/clientv3/naming"
"google.golang.org/grpc"
)
...
cli, cerr := clientv3.NewFromURL("http://localhost:2379")
r := &etcdnaming.GRPCResolver{Client: cli}
b := grpc.RoundRobin(r)
conn, gerr := grpc.Dial("my-service", grpc.WithBalancer(b))
```
## Managing service endpoints
The etcd resolver treats all keys under the prefix of the resolution target following a "/" (e.g., "my-service/") with JSON-encoded go-grpc `naming.Update` values as potential service endpoints. Endpoints are added to the service by creating new keys and removed from the service by deleting keys.
### Adding an endpoint
New endpoints can be added to the service through `etcdctl`:
```sh
ETCDCTL_API=3 etcdctl put my-service/1.2.3.4 '{"Addr":"1.2.3.4","Metadata":"..."}'
```
The etcd client's `GRPCResolver.Update` method can also register new endpoints with a key matching the `Addr`:
```go
r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Add, Addr: "1.2.3.4", Metadata: "..."})
```
### Deleting an endpoint
Hosts can be deleted from the service through `etcdctl`:
```sh
ETCDCTL_API=3 etcdctl del my-service/1.2.3.4
```
The etcd client's `GRPCResolver.Update` method also supports deleting endpoints:
```go
r.Update(context.TODO(), "my-service", naming.Update{Op: naming.Delete, Addr: "1.2.3.4"})
```
### Registering an endpoint with a lease
Registering an endpoint with a lease ensures that if the host can't maintain a keepalive heartbeat (e.g., its machine fails), it will be removed from the service:
```sh
lease=`ETCDCTL_API=3 etcdctl lease grant 5 | cut -f2 -d' '`
ETCDCTL_API=3 etcdctl put --lease=$lease my-service/1.2.3.4 '{"Addr":"1.2.3.4","Metadata":"..."}'
ETCDCTL_API=3 etcdctl lease keep-alive $lease
```


@@ -4,51 +4,28 @@ Users mostly interact with etcd by putting or getting the value of a key. This s
By default, etcdctl talks to the etcd server with the v2 API for backward compatibility. For etcdctl to speak to etcd using the v3 API, the API version must be set to version 3 via the `ETCDCTL_API` environment variable.
```bash
``` bash
export ETCDCTL_API=3
```
## Find versions
etcdctl version and Server API version can be useful in finding the appropriate commands to be used for performing various operations on etcd.
Here is the command to find the versions:
```bash
$ etcdctl version
etcdctl version: 3.1.0-alpha.0+git
API version: 3.1
```
## Write a key
Applications store keys into the etcd cluster by writing to keys. Every stored key is replicated to all etcd cluster members through the Raft protocol to achieve consistency and reliability.
Here is the command to set the value of key `foo` to `bar`:
```bash
``` bash
$ etcdctl put foo bar
OK
```
Also a key can be set for a specified interval of time by attaching a lease to it.
Here is the command to set the value of key `foo1` to `bar1` for 10s.
```bash
$ etcdctl put foo1 bar1 --lease=1234abcd
OK
```
Note: The lease id `1234abcd` in the above command refers to the id returned on creating the lease of 10s. This id can then be attached to the key.
## Read keys
Applications can read values of keys from an etcd cluster. Queries may read a single key, or a range of keys.
Suppose the etcd cluster has stored the following keys:
```bash
```
foo = bar
foo1 = bar1
foo3 = bar3
@@ -62,21 +39,6 @@ foo
bar
```
Here is the command to read the value of key `foo` in hex format:
```bash
$ etcdctl get foo --hex
\x66\x6f\x6f # Key
\x62\x61\x72 # Value
```
Here is the command to read only the value of key `foo`:
```bash
$ etcdctl get foo --print-value-only
bar
```
Here is the command to range over the keys from `foo` to `foo9`:
```bash
@@ -89,16 +51,6 @@ foo3
bar3
```
Here is the command to range over the keys from `foo` to `foo9` limiting the number of results to 2:
```bash
$ etcdctl get foo foo9 --limit 2
foo
bar
foo1
bar1
```
## Read past version of keys
Applications may want to read superseded versions of a key. For example, an application may wish to roll back to an old configuration by accessing an earlier version of a key. Alternatively, an application may want a consistent view over multiple keys through multiple requests by accessing key history.
@@ -106,11 +58,11 @@ Since every modification to the etcd cluster key-value store increments the glob
Suppose an etcd cluster already has the following keys:
```bash
foo = bar # revision = 2
foo1 = bar1 # revision = 3
foo = bar_new # revision = 4
foo1 = bar1_new # revision = 5
``` bash
$ etcdctl put foo bar # revision = 2
$ etcdctl put foo1 bar1 # revision = 3
$ etcdctl put foo bar_new # revision = 4
$ etcdctl put foo1 bar1_new # revision = 5
```
Here is an example of accessing past versions of keys:
@@ -141,46 +93,10 @@ bar
$ etcdctl get --rev=1 foo foo9 # access the versions of keys at revision 1
```
## Read keys which are greater than or equal to the byte value of the specified key
Applications may want to read keys which are greater than or equal to the byte value of the specified key.
Suppose an etcd cluster already has the following keys:
```bash
a = 123
b = 456
z = 789
```
Here is the command to read keys which are greater than or equal to the byte value of key `b` :
```bash
$ etcdctl get --from-key b
b
456
z
789
```
## Delete keys
Applications can delete a key or a range of keys from an etcd cluster.
Suppose an etcd cluster already has the following keys:
```bash
foo = bar
foo1 = bar1
foo3 = bar3
zoo = val
zoo1 = val1
zoo2 = val2
a = 123
b = 456
z = 789
```
Here is the command to delete key `foo`:
```bash
@@ -195,29 +111,6 @@ $ etcdctl del foo foo9
2 # two keys are deleted
```
Here is the command to delete key `zoo` with the deleted key value pair returned:
```bash
$ etcdctl del --prev-kv zoo
1 # one key is deleted
zoo # deleted key
val # the value of the deleted key
```
Here is the command to delete keys having prefix as `zoo`:
```bash
$ etcdctl del --prefix zoo
2 # two keys are deleted
```
Here is the command to delete keys which are greater than or equal to the byte value of key `b` :
```bash
$ etcdctl del --from-key b
2 # two keys are deleted
```
## Watch key changes
Applications can watch on a key or a range of keys to monitor for any updates.
@@ -225,86 +118,38 @@ Applications can watch on a key or a range of keys to monitor for any updates.
Here is the command to watch on key `foo`:
```bash
$ etcdctl watch foo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
```
Here is the command to watch on key `foo` in hex format:
```bash
$ etcdctl watch foo --hex
# in another terminal: etcdctl put foo bar
PUT
\x66\x6f\x6f # Key
\x62\x61\x72 # Value
```
Here is the command to watch on a range key from `foo` to `foo9`:
```bash
$ etcdctl watch foo foo9
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put foo1 bar1
PUT
foo1
bar1
```
Here is the command to watch on keys having prefix `foo`:
```bash
$ etcdctl watch --prefix foo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put fooz1 barz1
PUT
fooz1
barz1
```
Here is the command to watch on multiple keys `foo` and `zoo`:
```bash
$ etcdctl watch -i
$ watch foo
$ watch zoo
# in another terminal: etcdctl put foo bar
PUT
foo
bar
# in another terminal: etcdctl put zoo val
PUT
zoo
val
```
## Watch historical changes of keys
Applications may want to watch for historical changes of keys in etcd. For example, an application may wish to receive all the modifications of a key; if the application stays connected to etcd, then `watch` is good enough. However, if the application or etcd fails, a change may happen during the failure, and the application will not receive the update in real time. To guarantee the update is delivered, the application must be able to watch for historical changes to keys. To do this, an application can specify a historical revision on a watch, just like reading past version of keys.
Suppose we finished the following sequence of operations:
```bash
$ etcdctl put foo bar # revision = 2
OK
$ etcdctl put foo1 bar1 # revision = 3
OK
$ etcdctl put foo bar_new # revision = 4
OK
$ etcdctl put foo1 bar1_new # revision = 5
OK
``` bash
etcdctl put foo bar # revision = 2
etcdctl put foo1 bar1 # revision = 3
etcdctl put foo bar_new # revision = 4
etcdctl put foo1 bar1_new # revision = 5
```
Here is an example to watch the historical changes:
```bash
# watch for changes on key `foo` since revision 2
$ etcdctl watch --rev=2 foo
@@ -314,9 +159,7 @@ bar
PUT
foo
bar_new
```
```bash
# watch for changes on key `foo` since revision 3
$ etcdctl watch --rev=3 foo
PUT
@@ -324,19 +167,6 @@ foo
bar_new
```
Here is an example to watch only from the last historical change:
```bash
# watch for changes on key `foo` and return last revision value along with modified value
$ etcdctl watch --prev-kv foo
# in another terminal: etcdctl put foo bar_latest
PUT
foo # key
bar_new # last value of foo key before modification
foo # key
bar_latest # value of foo key after modification
```
## Compacted revisions
As we mentioned, etcd keeps revisions so that applications can read past versions of keys. However, to avoid accumulating an unbounded amount of history, it is important to compact past revisions. After compacting, etcd removes historical revisions, releasing resources for future use. All superseded data with revisions before the compacted revision will be unavailable.
@@ -352,20 +182,13 @@ $ etcdctl get --rev=4 foo
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted
```
Note: The current revision of the etcd server can be found using the get command on any key (existent or non-existent) in json format. An example is shown below for mykey, which does not exist in the etcd server:
```bash
$ etcdctl get mykey -w=json
{"header":{"cluster_id":14841639068965178418,"member_id":10276657743932975437,"revision":15,"raft_term":4}}
```
## Grant leases
Applications can grant leases for keys from an etcd cluster. When a key is attached to a lease, its lifetime is bound to the lease's lifetime which in turn is governed by a time-to-live (TTL). Each lease has a minimum time-to-live (TTL) value specified by the application at grant time. The lease's actual TTL value is at least the minimum TTL and is chosen by the etcd cluster. Once a lease's TTL elapses, the lease expires and all attached keys are deleted.
Here is the command to grant a lease:
```bash
```
# grant a lease with 10 second TTL
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
@@ -381,7 +204,7 @@ Applications revoke leases by lease ID. Revoking a lease deletes all of its atta
Suppose we finished the following sequence of operations:
```bash
```
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
$ etcdctl put --lease=32695410dcc0ca06 foo bar
@@ -390,7 +213,7 @@ OK
Here is the command to revoke the same lease:
```bash
```
$ etcdctl lease revoke 32695410dcc0ca06
lease 32695410dcc0ca06 revoked
@@ -404,54 +227,17 @@ Applications can keep a lease alive by refreshing its TTL so it does not expire.
Suppose we finished the following sequence of operations:
```bash
```
$ etcdctl lease grant 10
lease 32695410dcc0ca06 granted with TTL(10s)
```
Here is the command to keep the same lease alive:
```bash
$ etcdctl lease keep-alive 32695410dcc0ca06
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
lease 32695410dcc0ca06 keepalived with TTL(100)
```
$ etcdctl lease keep-alive 32695410dcc0ca0
lease 32695410dcc0ca0 keepalived with TTL(100)
lease 32695410dcc0ca0 keepalived with TTL(100)
lease 32695410dcc0ca0 keepalived with TTL(100)
...
```
## Get lease information
Applications may want to know lease information so that they can renew the lease, or check whether the lease still exists or has expired. Applications may also want to know the keys to which a particular lease is attached.
Suppose we finished the following sequence of operations:
```bash
# grant a lease with 500 second TTL
$ etcdctl lease grant 500
lease 694d5765fc71500b granted with TTL(500s)
# attach key zoo1 to lease 694d5765fc71500b
$ etcdctl put zoo1 val1 --lease=694d5765fc71500b
OK
# attach key zoo2 to lease 694d5765fc71500b
$ etcdctl put zoo2 val2 --lease=694d5765fc71500b
OK
```
Here is the command to get information about the lease:
```bash
$ etcdctl lease timetolive 694d5765fc71500b
lease 694d5765fc71500b granted with TTL(500s), remaining(258s)
```
Here is the command to get information about the lease along with the keys attached with the lease:
```bash
$ etcdctl lease timetolive --keys 694d5765fc71500b
lease 694d5765fc71500b granted with TTL(500s), remaining(132s), attached keys([zoo2 zoo1])
# if the lease has expired or does not exist, the command gives the following response:
Error: etcdserver: requested lease not found
```

View File

@@ -28,7 +28,7 @@ bar
## Local multi-member cluster
A `Procfile` at the base of this git repo is provided to easily set up a local multi-member cluster. To start a multi-member cluster go to the root of an etcd source tree and run:
A Procfile is provided to easily set up a local multi-member cluster. Start a multi-member cluster with a few commands:
```
# install goreman program to control Procfile-based applications.
@@ -37,7 +37,7 @@ $ goreman -f Procfile start
...
```
The started members listen on `localhost:2379`, `localhost:22379`, and `localhost:32379` for client requests respectively.
The started members listen on `localhost:12379`, `localhost:22379`, and `localhost:32379` for client requests respectively.
To interact with the started cluster using etcdctl:
@@ -49,12 +49,12 @@ $ etcdctl --write-out=table --endpoints=localhost:12379 member list
+------------------+---------+--------+------------------------+------------------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380 | http://127.0.0.1:2379 |
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:12380 | http://127.0.0.1:12379 |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
$ etcdctl put foo bar
$ etcdctl --endpoints=localhost:12379 put foo bar
OK
```
@@ -64,10 +64,10 @@ To exercise etcd's fault tolerance, kill a member:
# kill etcd2
$ goreman run stop etcd2
$ etcdctl put key hello
$ etcdctl --endpoints=localhost:12379 put key hello
OK
$ etcdctl get key
$ etcdctl --endpoints=localhost:12379 get key
hello
# try to get key from the killed member

View File

@@ -31,8 +31,8 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Write release note
- Write introduction for the new release. For example, what major bug we fix, what new features we introduce or what performance improvement we make.
- Write changelog for the last release. ChangeLog should be straightforward and easy to understand for the end-user.
- Put `[GH XXXX]` at the head of change line to reference Pull Request that introduces the change. Moreover, add a link on it to jump to the Pull Request.
- Find PRs with `release-note` label and explain them in `NEWS` file, as a straightforward summary of changes for end-users.
## Tag version
@@ -47,7 +47,7 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Build release binaries and images
- Ensure `acbuild` is available.
- Ensure `actool` is available, or install it through `go get github.com/appc/spec/actool`.
- Ensure `docker` is available.
Run release script in root directory:

View File

@@ -12,8 +12,6 @@ The easiest way to get etcd is to use one of the pre-built release binaries whic
For those wanting to try the very latest version, build etcd from the `master` branch.
[Go](https://golang.org/) version 1.6+ (with HTTP2 support) is required to build the latest version of etcd.
etcd vendors its dependencies for official release binaries, while making vendoring optional to avoid import conflicts.
The [`build` script][build-script] will automatically include the vendored dependencies from the [`cmd`][cmd-directory] directory.
Here are the commands to build an etcd binary from the `master` branch:
@@ -28,7 +26,7 @@ $ echo $GOPATH
$ mkdir -p $GOPATH/src/github.com/coreos
$ cd $GOPATH/src/github.com/coreos
$ git clone https://github.com/coreos/etcd.git
$ git clone github.com:coreos/etcd.git
$ cd etcd
$ ./build
$ ./bin/etcd
@@ -56,6 +54,3 @@ If OK is printed, then etcd is working!
[github-release]: https://github.com/coreos/etcd/releases/
[go]: https://golang.org/doc/install
[build-script]: ../build
[cmd-directory]: ../cmd

View File

@@ -14,21 +14,17 @@ The easiest way to get started using etcd as a distributed key-value store is to
- [Interacting with etcd][interacting]
- [API references][api_ref]
- [gRPC gateway][api_grpc_gateway]
- [gRPC naming and discovery][grpc_naming]
- [Embedding etcd][embed_etcd]
- [Experimental features and APIs][experimental]
## Operating etcd clusters
Administrators who need to create reliable and scalable key-value stores for the developers they support should begin with a [cluster on multiple machines][clustering].
- [Setting up etcd clusters][clustering]
- [Setting up etcd gateways][gateway]
- [Setting up etcd gRPC proxy (pre-alpha)][grpc_proxy]
- [Setting up clusters][clustering]
- [Run etcd clusters inside containers][container]
- [Configuration][conf]
- [Security][security]
- [Monitoring][monitoring]
- Monitoring
- [Maintenance][maintenance]
- [Understand failures][failures]
- [Disaster recovery][recovery]
@@ -60,19 +56,14 @@ To learn more about the concepts and internals behind etcd, read the following p
[data_model]: learning/data_model.md
[demo]: demo.md
[download_build]: dl_build.md
[embed_etcd]: https://godoc.org/github.com/coreos/etcd/embed
[grpc_naming]: dev-guide/grpc_naming.md
[failures]: op-guide/failures.md
[gateway]: op-guide/gateway.md
[glossary]: learning/glossary.md
[grpc_proxy]: op-guide/grpc_proxy.md
[interacting]: dev-guide/interacting_v3.md
[local_cluster]: dev-guide/local_cluster.md
[performance]: op-guide/performance.md
[recovery]: op-guide/recovery.md
[maintenance]: op-guide/maintenance.md
[security]: op-guide/security.md
[monitoring]: op-guide/monitoring.md
[v2_migration]: op-guide/v2-migration.md
[container]: op-guide/container.md
[understand_apis]: learning/api.md

View File

@@ -2,17 +2,15 @@
This document defines the various terms used in etcd documentation, command line and source code.
## Alarm
## Node
The etcd server raises an alarm whenever the cluster needs operator intervention to remain reliable.
Node is an instance of raft state machine.
## Authentication
It has a unique identification, and records other nodes' progress internally when it is the leader.
Authentication manages user access permissions for etcd resources.
## Member
## Client
A client connects to the etcd cluster to issue service requests such as fetching key-value pairs, writing data, or watching for updates.
Member is an instance of etcd. It hosts a node, and provides service to clients.
## Cluster
@@ -20,42 +18,6 @@ Cluster consists of several members.
The node in each member follows the raft consensus protocol to replicate logs. The cluster receives proposals from members, commits them, and applies them to the local store.
## Compaction
Compaction discards all etcd event history and superseded keys prior to a given revision. It is used to reclaim storage space in the etcd backend database.
## Election
The etcd cluster holds elections among its members to choose a leader as part of the raft consensus protocol.
## Endpoint
A URL pointing to an etcd service or resource.
## Key
A user-defined identifier for storing and retrieving user-defined values in etcd.
## Key range
A set of keys containing either an individual key, a lexical interval for all x such that a < x <= b, or all keys greater than a given key.
## Keyspace
The set of all keys in an etcd cluster.
## Lease
A short-lived renewable contract that deletes keys associated with it on its expiry.
## Member
A logical etcd server that participates in serving an etcd cluster.
## Modification Revision
The first revision to hold the last write to a given key.
## Peer
Peer is another member of the same cluster.
@@ -64,34 +26,10 @@ Peer is another member of the same cluster.
A proposal is a request (for example a write request, a configuration change request) that needs to go through raft protocol.
## Quorum
## Client
The number of active members needed for consensus to modify the cluster state. etcd requires a member majority to reach quorum.
Client is a caller of the cluster's HTTP API.
## Revision
## Machine (deprecated)
A 64-bit cluster-wide counter that is incremented each time the keyspace is modified.
## Role
A unit of permissions over a set of key ranges which may be granted to a set of users for access control.
## Snapshot
A point-in-time backup of the etcd cluster state.
## Store
The physical storage backing the cluster keyspace.
## Transaction
An atomically executed set of operations. All modified keys in a transaction share the same modification revision.
## Key Version
The number of writes to a key since it was created, starting at 1. The version of a nonexistent or deleted key is 0.
## Watcher
A client opens a watcher to observe updates on a given key range.
An alternative name for Member, used in etcd before 2.0.

View File

@@ -23,7 +23,6 @@
**Java libraries**
- [coreos/jetcd](https://github.com/coreos/jetcd) - Supports v3
- [boonproject/etcd](https://github.com/boonproject/boon/blob/master/etcd/README.md) - Supports v2, Async/Sync and waits
- [justinsb/jetcd](https://github.com/justinsb/jetcd)
- [diwakergupta/jetcd](https://github.com/diwakergupta/jetcd) - Supports v2
@@ -37,7 +36,6 @@
**Python libraries**
- [kragniz/python-etcd3](https://github.com/kragniz/python-etcd3) - Work in progress client for v3
- [jplana/python-etcd](https://github.com/jplana/python-etcd) - Supports v2
- [russellhaering/txetcd](https://github.com/russellhaering/txetcd) - a Twisted Python library
- [cholcombe973/autodock](https://github.com/cholcombe973/autodock) - A docker deployment automation tool
@@ -63,8 +61,6 @@
**C++ libraries**
- [edwardcapriolo/etcdcpp](https://github.com/edwardcapriolo/etcdcpp) - Supports v2
- [suryanathan/etcdcpp](https://github.com/suryanathan/etcdcpp) - Supports v2 (with waits)
- [nokia/etcd-cpp-api](https://github.com/nokia/etcd-cpp-api) - Supports v2
- [nokia/etcd-cpp-apiv3](https://github.com/nokia/etcd-cpp-apiv3) - Supports v3
**Clojure libraries**
@@ -84,7 +80,6 @@
**PHP Libraries**
- [linkorb/etcd-php](https://github.com/linkorb/etcd-php)
- [activecollab/etcd](https://github.com/activecollab/etcd)
**Haskell libraries**
@@ -94,10 +89,6 @@
- [ropensci/etseed](https://github.com/ropensci/etseed)
**Nim libraries**
- [etcd_client](https://github.com/FedericoCeratto/nim-etcd-client)
**Tcl libraries**
- [efrecon/etcd-tcl](https://github.com/efrecon/etcd-tcl) - Supports v2, except wait.

View File

@@ -70,8 +70,6 @@ All these metrics are prefixed with `etcd_network_`
|---------------------------|--------------------------------------------------------------------|---------------|
| peer_sent_bytes_total | The total number of bytes sent to the peer with ID `To`. | Counter(To) |
| peer_received_bytes_total | The total number of bytes received from the peer with ID `From`. | Counter(From) |
| peer_sent_failures_total | The total number of send failures from the peer with ID `To`. | Counter(To) |
| peer_received_failures_total | The total number of receive failures from the peer with ID `From`. | Counter(From) |
| peer_round_trip_time_seconds | Round-Trip-Time histogram between peers. | Histogram(To) |
| client_grpc_sent_bytes_total | The total number of bytes sent to grpc clients. | Counter |
| client_grpc_received_bytes_total | The total number of bytes received from grpc clients. | Counter |
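As with the gRPC metrics below, these counters pair with standard Prometheus functions; for example, a sketch of the per-second byte rate sent to gRPC clients across all members, over a `1m` window (query is illustrative):
* `sum(rate(etcd_network_client_grpc_sent_bytes_total{job="etcd"}[1m]))`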
@@ -82,7 +80,30 @@ All these metrics are prefixed with `etcd_network_`
### gRPC requests
These metrics are exposed via [go-grpc-prometheus][go-grpc-prometheus].
These metrics describe the requests served by a specific etcd member: total received requests, total failed requests, and processing latency. They are useful for tracking user-generated traffic hitting the etcd cluster.
All these metrics are prefixed with `etcd_grpc_`
| Name | Description | Type |
|--------------------------------|-------------------------------------------------------------------------------------|------------------------|
| requests_total | Total number of received requests | Counter(method) |
| requests_failed_total | Total number of failed requests. | Counter(method,error) |
| unary_requests_duration_seconds | Bucketed handling duration of the requests. | Histogram(method) |
Example Prometheus queries that may be useful from these metrics (across all etcd members):
* `sum(rate(etcd_grpc_requests_failed_total{job="etcd"}[1m])) by (grpc_method) / sum(rate(etcd_grpc_requests_total{job="etcd"}[1m])) by (grpc_method)`
Shows the fraction of events that failed by gRPC method across all members, across a time window of `1m`.
* `sum(rate(etcd_grpc_requests_total{job="etcd",grpc_method="PUT"}[1m])) by (grpc_method)`
Shows the rate of PUT requests across all members, across a time window of `1m`.
* `histogram_quantile(0.9, sum(rate(etcd_grpc_unary_requests_duration_seconds_bucket{job="etcd",grpc_method="PUT"}[5m])) by (le))`
Shows the 90th percentile latency (in seconds) of PUT request handling across all members, over a `5m` window.
## etcd_debugging namespace metrics
@@ -113,4 +134,3 @@ Heavy file descriptor (`process_open_fds`) usage (i.e., near the process's file
[prometheus-getting-started]: http://prometheus.io/docs/introduction/getting_started/
[prometheus-naming]: http://prometheus.io/docs/practices/naming/
[v2-http-metrics]: v2/metrics.md#http-requests
[go-grpc-prometheus]: https://github.com/grpc-ecosystem/go-grpc-prometheus

View File

@@ -126,7 +126,7 @@ $ etcd --name infra2 --initial-advertise-peer-urls https://10.0.1.12:2380 \
If the cluster needs encrypted communication but does not require authenticated connections, etcd can be configured to automatically generate its keys. On initialization, each member creates its own set of keys based on its advertised IP addresses and hosts.
On each machine, etcd would be started with these flags:
On each machine, etcd would be started with these flag:
```
$ etcd --name infra0 --initial-advertise-peer-urls https://10.0.1.10:2380 \
@@ -205,7 +205,7 @@ exit 1
## Discovery
In a number of cases, the IPs of the cluster peers may not be known ahead of time. This is common when utilizing cloud providers or when the network uses DHCP. In these cases, rather than specifying a static configuration, use an existing etcd cluster to bootstrap a new one. This process is called "discovery".
In a number of cases, the IPs of the cluster peers may not be known ahead of time. This is common when utilizing cloud providers or when the network uses DHCP. In these cases, rather than specifying a static configuration, use an existing etcd cluster to bootstrap a new one. We call this process "discovery".
There are two methods that can be used for discovery:
@@ -214,17 +214,17 @@ There two methods that can be used for discovery:
### etcd discovery
To better understand the design of the discovery service protocol, we suggest reading the discovery service protocol [documentation][discovery-proto].
To better understand the design about discovery service protocol, we suggest reading the discovery service protocol [documentation][discovery-proto].
#### Lifetime of a discovery URL
A discovery URL identifies a unique etcd cluster. Instead of reusing an existing discovery URL, each etcd instance shares a new discovery URL to bootstrap the new cluster.
A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery URL, always create discovery URLs for new clusters.
Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime-conf] guide.
#### Custom etcd discovery service
Discovery uses an existing cluster to bootstrap itself. If using a private etcd cluster, create a URL like so:
Discovery uses an existing cluster to bootstrap itself. If using a private etcd cluster, can create a URL like so:
```
$ curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
@@ -271,7 +271,7 @@ $ curl https://discovery.etcd.io/new?size=3
https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
This will create the cluster with an initial size of 3 members. If no size is specified, a default of 3 is used.
This will create the cluster with an initial expected size of 3 members. If no size is specified, a default of 3 is used.
```
ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
@@ -281,7 +281,7 @@ ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573d
--discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
**Each member must have a different name flag specified or else discovery will fail due to duplicated names. `Hostname` or `machine-id` can be a good choice. **
**Each member must have a different name flag specified. `Hostname` or `machine-id` can be a good choice. Or discovery will fail due to duplicated name.**
Now we start etcd with those relevant flags for each member:
@@ -456,10 +456,6 @@ $ etcd --name infra2 \
--listen-peer-urls http://10.0.1.12:2380
```
### Gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. Please read the [gateway guide][gateway] for more information.
### Proxy
When the `--proxy` flag is set, etcd runs in [proxy mode][proxy]. This proxy mode only supports the etcd v2 API; there are no plans to support the v3 API. Instead, for v3 API support, there will be a new proxy with enhanced features following the etcd 3.0 release.
@@ -476,4 +472,3 @@ To setup an etcd cluster with proxies of v2 API, please read the [clustering
[clustering_etcd2]: https://github.com/coreos/etcd/blob/release-2.3/Documentation/clustering.md
[security-guide]: security.md
[tls-setup]: /hack/tls-setup
[gateway]: gateway.md

View File

@@ -276,7 +276,7 @@ Follow the instructions when using these flags.
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof"
+ default: false
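As a sketch, once `--enable-pprof` is set, the profile index can be fetched with `curl` at the path given above:
```bash
$ curl http://localhost:2379/debug/pprof/
```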
[build-cluster]: clustering.md#static

View File

@@ -2,68 +2,6 @@
The following guide shows how to run etcd with rkt and Docker using the [static bootstrap process](clustering.md#static).
## rkt
### Running a single node etcd
The following rkt run command will expose the etcd client API on port 2379 and expose the peer API on port 2380.
Use the host IP address when configuring etcd.
```
export NODE1=192.168.1.21
```
Trust the CoreOS [App Signing Key](https://coreos.com/security/app-signing-key/).
```
sudo rkt trust --prefix coreos.com/etcd
# gpg key fingerprint is: 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E
```
Run the `v3.0.6` version of etcd or specify another release version.
```
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.0.6 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380
```
List the cluster member.
```
etcdctl --endpoints=http://192.168.1.21:2379 member list
```
### Running a 3 node etcd cluster
Set up a 3 node cluster with rkt locally, using the `-initial-cluster` flag.
```sh
export NODE1=172.16.28.21
export NODE2=172.16.28.22
export NODE3=172.16.28.23
```
```
# node 1
sudo rkt run --net=default:IP=${NODE1} coreos.com/etcd:v3.0.6 -- -name=node1 -advertise-client-urls=http://${NODE1}:2379 -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE1}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 2
sudo rkt run --net=default:IP=${NODE2} coreos.com/etcd:v3.0.6 -- -name=node2 -advertise-client-urls=http://${NODE2}:2379 -initial-advertise-peer-urls=http://${NODE2}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE2}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
# node 3
sudo rkt run --net=default:IP=${NODE3} coreos.com/etcd:v3.0.6 -- -name=node3 -advertise-client-urls=http://${NODE3}:2379 -initial-advertise-peer-urls=http://${NODE3}:2380 -listen-client-urls=http://0.0.0.0:2379 -listen-peer-urls=http://${NODE3}:2380 -initial-cluster=node1=http://${NODE1}:2380,node2=http://${NODE2}:2380,node3=http://${NODE3}:2380
```
Verify the cluster is healthy and can be reached.
```
ETCDCTL_API=3 etcdctl --endpoints=http://172.16.28.21:2379,http://172.16.28.22:2379,http://172.16.28.23:2379 endpoint health
```
### DNS
Production clusters which refer to peers by DNS name known to the local resolver must mount the [host's DNS configuration](https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html#customizing-rkt-options).
## Docker
In order to expose the etcd API to clients outside of Docker host, use the host IP address of the container. Please see [`docker inspect`](https://docs.docker.com/engine/reference/commandline/inspect) for more detail on how to get the IP address. Alternatively, specify `--net=host` flag to `docker run` command to skip placing the container inside of a separate network stack.
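As a minimal single-node sketch, mirroring the rkt flags above and assuming the `quay.io/coreos/etcd` image with its binary at `/usr/local/bin/etcd` (IP address is illustrative):
```
export NODE1=192.168.1.21
docker run -d --net=host --name etcd quay.io/coreos/etcd:v3.0.6 \
  /usr/local/bin/etcd -name=node1 \
  -advertise-client-urls=http://${NODE1}:2379 -listen-client-urls=http://0.0.0.0:2379 \
  -initial-advertise-peer-urls=http://${NODE1}:2380 -listen-peer-urls=http://${NODE1}:2380 \
  -initial-cluster=node1=http://${NODE1}:2380
```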
@@ -121,7 +59,3 @@ To run `etcdctl` using API version 3:
docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar"
```
## Bare Metal
To provision a 3 node etcd cluster on bare-metal, you might find the examples in the [baremetal repo](https://github.com/coreos/coreos-baremetal/tree/master/examples) useful.

Binary file not shown.


View File

@@ -1,66 +0,0 @@
# etcd gateway
## What is etcd gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses.
The gateway supports multiple etcd server endpoints. When the gateway starts, it randomly picks one etcd server endpoint and forwards all requests to that endpoint. This endpoint serves all requests until the gateway detects a network failure. If the gateway detects an endpoint failure, it will switch to a different endpoint, if available, to hide failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
## When to use etcd gateway
Every application that accesses etcd must first have the address of an etcd cluster client endpoint. If multiple applications on the same server access the same etcd cluster, every application still needs to know the advertised client endpoints of the etcd cluster. If the etcd cluster is reconfigured to have different endpoints, every application may also need to update its endpoint list. This wide-scale reconfiguration is both tedious and error prone.
etcd gateway solves this problem by serving as a stable local endpoint. A typical etcd gateway configuration has
each machine running a gateway listening on a local address and every etcd application connecting to its local gateway. The upshot is only the gateway needs to update its endpoints instead of updating each and every application.
In summary, to automatically propagate cluster endpoint changes, the etcd gateway runs on every machine serving multiple applications accessing the same etcd cluster.
## When not to use etcd gateway
- Improving performance
The gateway is not designed for improving etcd cluster performance. It does not provide caching, watch coalescing or batching. The etcd team is developing a caching proxy designed for improving cluster scalability.
- Running on a cluster management system
Advanced cluster management systems like Kubernetes natively support service discovery. Applications can access an etcd cluster with a DNS name or a virtual IP address managed by the system. For example, kube-proxy is equivalent to etcd gateway.
## Start etcd gateway
Consider an etcd cluster with the following static endpoints:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
Start the etcd gateway to use these static endpoints with the command:
```bash
$ etcd gateway start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```
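Clients then point at the local gateway instead of the cluster endpoints; a sketch, assuming the gateway's default listen address of `127.0.0.1:23790`:
```bash
$ ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:23790 put foo bar
OK
```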
Alternatively, if using DNS for service discovery, consider the DNS SRV entries:
```bash
$ dig +noall +answer SRV _etcd-client._tcp.example.com
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra0.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra1.example.com.
_etcd-client._tcp.example.com. 300 IN SRV 0 0 2379 infra2.example.com.
```
```bash
$ dig +noall +answer infra0.example.com infra1.example.com infra2.example.com
infra0.example.com. 300 IN A 10.0.1.10
infra1.example.com. 300 IN A 10.0.1.11
infra2.example.com. 300 IN A 10.0.1.12
```
Start the etcd gateway to fetch the endpoints from the DNS SRV entries with the command:
```bash
$ etcd gateway start --discovery-srv=example.com
2016-08-16 11:21:18.867350 I | tcpproxy: ready to proxy client requests to [...]
```

File diff suppressed because it is too large

View File

@@ -1,77 +0,0 @@
# gRPC proxy
*This is a pre-alpha feature; we are looking for early feedback.*
The gRPC proxy is a stateless etcd reverse proxy operating at the gRPC layer (L7). The proxy is designed to reduce the total processing load on the core etcd cluster. For horizontal scalability, it coalesces watch and lease API requests. To protect the cluster against abusive clients, it caches key range requests.
The gRPC proxy supports multiple etcd server endpoints. When the proxy starts, it randomly picks one etcd server endpoint to use. This endpoint serves all requests until the proxy detects an endpoint failure. If the gRPC proxy detects an endpoint failure, it switches to a different endpoint, if available, to hide failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
## Scalable watch API
The gRPC proxy coalesces multiple client watchers (`c-watchers`) on the same key or range into a single watcher (`s-watcher`) connected to an etcd server. The proxy broadcasts all events from the `s-watcher` to its `c-watchers`.
Assuming N clients watch the same key, one gRPC proxy can reduce the watch load on the etcd server from N to 1. Users can deploy multiple gRPC proxies to further distribute server load.
In the following example, three clients watch on key A. The gRPC proxy coalesces the three watchers, creating a single watcher attached to the etcd server.
```
+-------------+
| etcd server |
+------+------+
^ watch key A (s-watcher)
|
+-------+-----+
| gRPC proxy | <-------+
| | |
++-----+------+ |watch key A (c-watcher)
watch key A ^ ^ watch key A |
(c-watcher) | | (c-watcher) |
+-------+-+ ++--------+ +----+----+
| client | | client | | client |
| | | | | |
+---------+ +---------+ +---------+
```
### Limitations
To effectively coalesce multiple client watchers into a single watcher, the gRPC proxy coalesces new `c-watchers` into an existing `s-watcher` when possible. This coalesced `s-watcher` may be out of sync with the etcd server due to network delays or buffered undelivered events. When the watch revision is unspecified, the gRPC proxy will not guarantee the `c-watcher` will start watching from the most recent store revision. For example, if a client watches from an etcd server with revision 1000, that watcher will begin at revision 1000. If a client watches from the gRPC proxy, it may begin watching from revision 990.
Similar limitations apply to cancellation. When the watcher is cancelled, the etcd server's revision may be greater than the cancellation response revision.
These two limitations should not cause problems for most use cases. In the future, there may be additional options to force the watcher to bypass the gRPC proxy for more accurate revision responses.
## Scalable lease API
TODO
## Abusive clients protection
The gRPC proxy caches responses for requests when it does not break consistency requirements. This can protect the etcd server from abusive clients in tight for loops.
## Start etcd gRPC proxy
Consider an etcd cluster with the following static endpoints:
|Name|Address|Hostname|
|------|---------|------------------|
|infra0|10.0.1.10|infra0.example.com|
|infra1|10.0.1.11|infra1.example.com|
|infra2|10.0.1.12|infra2.example.com|
Start the etcd gRPC proxy to use these static endpoints with the command:
```bash
$ etcd grpc-proxy start --endpoints=infra0.example.com,infra1.example.com,infra2.example.com --listen-addr=127.0.0.1:2379
```
The etcd gRPC proxy starts and listens on 127.0.0.1:2379. It forwards client requests to one of the three endpoints provided above.
Sending requests through the proxy:
```bash
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 put foo bar
OK
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 get foo
foo
bar
```
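Watches opened through the proxy are coalesced as described above; for example, a sketch:
```bash
# concurrent watchers on the same key share a single s-watcher upstream
$ ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 watch foo
```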

View File

@@ -49,50 +49,51 @@ Finished defragmenting etcd member[127.0.0.1:2379]
## Space quota
The space quota in `etcd` ensures the cluster operates in a reliable fashion. Without a space quota, `etcd` may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the keyspace's backend database for any member exceeds the space quota, `etcd` raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. Only after freeing enough space in the keyspace and defragmenting the backend database, along with clearing the space quota alarm can the cluster resume normal operation.
The space quota in `etcd` ensures the cluster operates in a reliable fashion. Without a space quota, `etcd` may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the keyspace's backend database for any member exceeds the space quota, `etcd` raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. After freeing enough space in the keyspace, the alarm can be disarmed and the cluster will resume normal operation.
By default, `etcd` sets a conservative space quota suitable for most applications, but it may be configured on the command line, in bytes:
```sh
# set a very small 16MB quota
$ etcd --quota-backend-bytes=$((16*1024*1024))
$ etcd --quota-backend-bytes=16777216
```
The space quota can be triggered with a loop:
```sh
# fill keyspace
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | ETCDCTL_API=3 etcdctl put key || break; done
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024 | etcdctl put key || break; done
...
Error: rpc error: code = 8 desc = etcdserver: mvcc: database space exceeded
# confirm quota space is exceeded
$ ETCDCTL_API=3 etcdctl --write-out=table endpoint status
$ etcdctl --write-out=table endpoint status
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | bf9071f4639c75cc | 2.3.0+git | 18 MB | true | 2 | 3332 |
+----------------+------------------+-----------+---------+-----------+-----------+------------+
# confirm alarm is raised
$ ETCDCTL_API=3 etcdctl alarm list
$ etcdctl alarm list
memberID:13803658152347727308 alarm:NOSPACE
```
Removing excessive keyspace data and defragmenting the backend database will put the cluster back within the quota limits:
Removing excessive keyspace data will put the cluster back within the quota limits so the alarm can be disarmed:
```sh
# get current revision
$ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
$ etcdctl --endpoints=:2379 endpoint status
[{"Endpoint":"127.0.0.1:2379","Status":{"header":{"cluster_id":8925027824743593106,"member_id":13803658152347727308,"revision":1516,"raft_term":2},"version":"2.3.0+git","dbSize":17973248,"leader":13803658152347727308,"raftIndex":6359,"raftTerm":2}}]
# compact away all old revisions
$ ETCDCTL_API=3 etcdctl compact $rev
$ etcdctl compact 1516
compacted revision 1516
# defragment away excessive space
$ ETCDCTL_API=3 etcdctl defrag
$ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
# disarm alarm
$ ETCDCTL_API=3 etcdctl alarm disarm
$ etcdctl alarm disarm
memberID:13803658152347727308 alarm:NOSPACE
# test puts are allowed again
$ ETCDCTL_API=3 etcdctl put newkey 123
$ etcdctl put newkey 123
OK
```

View File

@@ -1,80 +0,0 @@
# Monitoring etcd
Each etcd server exports metrics under the `/metrics` path on its client port.
The metrics can be fetched with `curl`:
```sh
$ curl -L http://localhost:2379/metrics
# HELP etcd_debugging_mvcc_keys_total Total number of keys.
# TYPE etcd_debugging_mvcc_keys_total gauge
etcd_debugging_mvcc_keys_total 0
# HELP etcd_debugging_mvcc_pending_events_total Total number of pending events to be sent.
# TYPE etcd_debugging_mvcc_pending_events_total gauge
etcd_debugging_mvcc_pending_events_total 0
...
```
## Prometheus
Running a [Prometheus][prometheus] monitoring service is the easiest way to ingest and record etcd's metrics.
First, install Prometheus:
```sh
PROMETHEUS_VERSION="1.3.1"
wget https://github.com/prometheus/prometheus/releases/download/v$PROMETHEUS_VERSION/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz -O /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz
tar -xvzf /tmp/prometheus-$PROMETHEUS_VERSION.linux-amd64.tar.gz --directory /tmp/ --strip-components=1
/tmp/prometheus -version
```
Set Prometheus's scraper to target the etcd cluster endpoints:
```sh
cat > /tmp/test-etcd.yaml <<EOF
global:
scrape_interval: 10s
scrape_configs:
- job_name: test-etcd
static_configs:
- targets: ['10.240.0.32:2379','10.240.0.33:2379','10.240.0.34:2379']
EOF
cat /tmp/test-etcd.yaml
```
Set up the Prometheus handler:
```sh
nohup /tmp/prometheus \
-config.file /tmp/test-etcd.yaml \
-web.listen-address ":9090" \
-storage.local.path "test-etcd.data" >> /tmp/test-etcd.log 2>&1 &
```
Now Prometheus will scrape etcd metrics every 10 seconds.
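To confirm the targets are being scraped, the standard `up` metric can be queried through Prometheus's HTTP API; a sketch (`-g` disables curl's URL globbing so the braces pass through):
```sh
$ curl -sg 'http://localhost:9090/api/v1/query?query=up{job="test-etcd"}'
```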
## Grafana
[Grafana][grafana] has built-in Prometheus support; just add a Prometheus data source:
```
Name: test-etcd
Type: Prometheus
Url: http://localhost:9090
Access: proxy
```
Then import the default [etcd dashboard template][template] and customize; see the [demo][demo].
Sample dashboard:
![](./etcd-sample-grafana.png)
[prometheus]: https://prometheus.io/
[grafana]: http://grafana.org/
[template]: ./grafana.json
[demo]: http://dash.etcd.io/dashboard/db/test-etcd

View File

@@ -169,7 +169,7 @@ As described in the above, the best practice of adding new members is to configu
To avoid this problem, etcd provides an option `-strict-reconfig-check`. If this option is passed to etcd, etcd rejects reconfiguration requests if the number of started members will be less than a quorum of the reconfigured cluster.
It is enabled by default.
It is recommended to enable this option. However, it is disabled by default to keep compatibility.
[add member]: #add-a-new-member
[cluster-reconf]: #cluster-reconfiguration-operations

View File

@@ -1,39 +1,14 @@
## Supported platforms
### Current support
The following table lists etcd support status for common architectures and operating systems.
| Architecture | Operating System | Status | Maintainers |
| ------------ | ---------------- | ------------ | ---------------- |
| amd64 | Darwin | Experimental | etcd maintainers |
| amd64 | Linux | Stable | etcd maintainers |
| amd64 | Windows | Experimental | |
| arm64 | Linux | Experimental | @glevand |
| arm | Linux | Unstable | |
| 386 | Linux | Unstable | |
* etcd-maintainers are listed in https://github.com/coreos/etcd/blob/master/MAINTAINERS.
Experimental platforms appear to work in practice and have some platform specific code in etcd, but do not fully conform to the stable support policy. Unstable platforms have been lightly tested, but less than experimental. Unlisted architecture and operating system pairs are currently unsupported; caveat emptor.
### Supporting a new platform
For etcd to officially support a new platform as stable, a few requirements are necessary to ensure acceptable quality:
1. An "official" maintainer for the platform with clear motivation; someone must be responsible for taking care of the platform.
2. Set up CI for build; etcd must compile.
3. Set up CI for running unit tests; etcd must pass simple tests.
4. Set up CI (TravisCI, SemaphoreCI or Jenkins) for running integration tests; etcd must pass intensive tests.
5. (Optional) Set up a functional testing cluster; an etcd cluster should survive stress testing.
## Supported platform
### 32-bit and other unsupported systems
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See the [Go issue][go-issue] and [atomic package][go-atomic] for more information.
etcd has known issues on 32-bit systems due to a bug in the Go runtime. See [#358][358] for more information.
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unstable or unsupported architectures will print a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to the target architecture.
To avoid inadvertently running a possibly unstable etcd server, `etcd` on unsupported architectures will print
a warning message and immediately exit if the environment variable `ETCD_UNSUPPORTED_ARCH` is not set to
the target architecture.
Currently only the amd64 architecture is officially supported by `etcd`.
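For example, a sketch of explicitly opting in on a 32-bit ARM machine:
```
ETCD_UNSUPPORTED_ARCH=arm ./etcd
```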
[go-issue]: https://github.com/golang/go/issues/599
[go-atomic]: https://golang.org/pkg/sync/atomic/#pkg-note-BUG
[358]: https://github.com/coreos/etcd/issues/358

View File

@@ -71,23 +71,4 @@ $ etcd --snapshot-count=5000
$ ETCD_SNAPSHOT_COUNT=5000 etcd
```
## Network
If the etcd leader serves a large number of concurrent client requests, it may delay processing follower peer requests due to network congestion. This manifests as send buffer error messages on the follower nodes:
```
dropped MsgProp to 247ae21ff9436b2d since streamMsg's sending buffer is full
dropped MsgAppResp to 247ae21ff9436b2d since streamMsg's sending buffer is full
```
These errors may be resolved by prioritizing etcd's peer traffic over its client traffic. On Linux, peer traffic can be prioritized by using the traffic control mechanism:
```
tc qdisc add dev eth0 root handle 1: prio bands 3
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 2380 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip sport 2379 0xffff flowid 1:1
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 match ip dport 2379 0xffff flowid 1:1
```
[ping]: https://en.wikipedia.org/wiki/Ping_(networking_utility)

View File

@@ -216,7 +216,7 @@ To recover from such scenarios, etcd provides functionality to backup and restor
#### Backing up the datastore
**Note:** Windows users must stop etcd before running the backup command.
**NB:** Windows users must stop etcd before running the backup command.
The first step of the recovery is to back up the data directory and wal directory, if stored separately, on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data (and wal) directory used by etcd. For example:
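A hedged sketch of such an invocation (paths are illustrative; add `--wal-dir`/`--backup-wal-dir` only if the wal directory is stored separately):
```
$ etcdctl backup --data-dir /var/lib/etcd --backup-dir /tmp/etcd_backup
```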
@@ -262,9 +262,7 @@ Once you have verified that etcd has started successfully, shut it down and move
Now that the node is running successfully, [change its advertised peer URLs][update-a-member], as the `--force-new-cluster` option has set the peer URL to the default listening on localhost.
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details.
**Note:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details. **NB:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
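To change the advertised peer URLs as described above, a sketch (member ID and URL are illustrative):
```
$ etcdctl member update 8211f1d0f64f3269 http://10.0.1.10:2380
```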
### Client Request Timeout

View File

@@ -559,25 +559,6 @@ Let's create a key-value pair first: `foo=one`.
curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=one
```
```json
{
"action":"set",
"node":{
"key":"/foo",
"value":"one",
"modifiedIndex":4,
"createdIndex":4
}
}
```
Specifying the `noValueOnSuccess` option skips returning the node as value.
```sh
curl http://127.0.0.1:2379/v2/keys/foo?noValueOnSuccess=true -XPUT -d value=one
# {"action":"set"}
```
Now let's try some invalid `CompareAndSwap` commands.
Trying to set this existing key with `prevExist=false` fails as expected:

View File

@@ -266,7 +266,7 @@ Follow the instructions when using these flags.
## Profiling flags
### --enable-pprof
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof/"
+ Enable runtime profiling data via HTTP server. Address is at client URL + "/debug/pprof"
+ default: false
[build-cluster]: clustering.md#static

View File

@@ -48,7 +48,7 @@ All releases version numbers follow the format of [semantic versioning 2.0.0](ht
## Build Release Binaries and Images
- Ensure `acbuild` is available.
- Ensure `actool` is available, or install it through `go get github.com/appc/spec/actool`.
- Ensure `docker` is available.
Run release script in root directory:

View File

@@ -105,7 +105,7 @@ ETCD_INITIAL_CLUSTER_STATE=existing
### Stop the proxy process
Stop the existing proxy so we can wipe its state on disk and reload it with the new configuration:
Stop the existing proxy so we can wipe it's state on disk and reload it with the new configuration:
``` bash
ps aux | grep etcd
@@ -149,5 +149,5 @@ If an error occurs, check the [add member troubleshooting doc][runtime-configura
[discovery-service]: clustering.md#discovery
[goreman]: https://github.com/mattn/goreman
[procfile]: https://github.com/coreos/etcd/blob/master/Procfile
[procfile]: /Procfile
[runtime-configuration]: runtime-configuration.md#error-cases-when-adding-members

NEWS
View File

@@ -1,46 +0,0 @@
etcd v3.0.15 (2016-11-11)
- fix cancel watch request with wrong range end
etcd v3.0.14 (2016-11-04)
- v3 etcdctl migrate command now supports --no-ttl flag to discard keys on transform
etcd v3.0.13 (2016-10-24)
etcd v3.0.12 (2016-10-07)
etcd v3.0.11 (2016-10-07)
- server returns previous key-value (optional)
- clientv3 WithPrevKV option
- v3 etcdctl prev-kv flag
etcd v3.0.10 (2016-09-23)
etcd v3.0.9 (2016-09-15)
- warn on domain names on listen URLs (v3.2 will reject domain names)
etcd v3.0.8 (2016-09-09)
- allow only IP addresses in listen URLs (domain names are rejected)
etcd v3.0.7 (2016-08-31)
- SRV records only allow A records (RFC 2052)
etcd v3.0.6 (2016-08-19)
etcd v3.0.5 (2016-08-19)
- SRV records (e.g., infra1.example.com) must match the discovery domain
(i.e., example.com) when using the default certificate authority.
etcd v3.0.4 (2016-07-27)
- v2 auth can now use common name from TLS certificate when --client-cert-auth is enabled
- v2 etcdctl ls command now supports --output=json
- Add /var/lib/etcd directory to etcd official Docker image
etcd v3.0.3 (2016-07-15)
- Revert Dockerfile to use CMD, instead of ENTRYPOINT, to support etcdctl run
- Docker commands for v3.0.2 won't work without specifying executable binary paths
- v3 etcdctl default endpoints are now 127.0.0.1:2379
etcd v3.0.2 (2016-07-08)
- Dockerfile uses ENTRYPOINT, instead of CMD, to run etcd without binary path specified
etcd v3.0.1 (2016-07-01)

View File

@@ -39,14 +39,13 @@ See [etcdctl][etcdctl] for a simple command line client.
The easiest way to get etcd is to use one of the pre-built release binaries which are available for OSX, Linux, Windows, AppC (ACI), and Docker. Instructions for using these binaries are on the [GitHub releases page][github-release].
For those wanting to try the very latest version, you can [build the latest version of etcd][dl-build] from the `master` branch.
For those wanting to try the very latest version, you can build the latest version of etcd from the `master` branch.
You will first need [*Go*](https://golang.org/) installed on your machine (version 1.6+ is required).
All development occurs on `master`, including new features and bug fixes.
Bug fixes are first targeted at `master` and subsequently ported to release branches, as described in the [branch management][branch-management] guide.
[github-release]: https://github.com/coreos/etcd/releases/
[branch-management]: ./Documentation/branch_management.md
[dl-build]: ./Documentation/dl_build.md#build-the-latest-version
### Running etcd
@@ -93,10 +92,6 @@ This will bring up 3 etcd members `infra1`, `infra2` and `infra3` and etcd proxy
Every cluster member and proxy accepts key value reads and key value writes.
### Running etcd on Kubernetes
If you want to run etcd cluster on Kubernetes, try [etcd operator](https://github.com/coreos/etcd-operator).
### Next steps
Now it's time to dig into the full etcd API and other guides.
@@ -136,4 +131,3 @@ See [reporting bugs](Documentation/reporting_bugs.md) for details about reportin
etcd is under the Apache 2.0 license. See the [LICENSE](LICENSE) file for details.

View File

@@ -6,19 +6,26 @@ This document defines a high level roadmap for etcd development.
The dates below should not be considered authoritative, but rather indicative of the projected timeline of the project. The [milestones defined in GitHub](https://github.com/coreos/etcd/milestones) represent the most up-to-date and issue-for-issue plans.
etcd 3.0 is our current stable branch. The roadmap below outlines new features that will be added to etcd, and while subject to change, define what future stable will look like.
etcd 2.3 is our current stable branch. The roadmap below outlines new features that will be added to etcd, and while subject to change, define what future stable will look like.
### etcd 3.1 (2016-Oct)
- Stable L4 gateway
- Experimental support for scalable proxy
- Automatic leadership transfer for the rolling upgrade
- V3 API improvements
- Get previous key-value pair
- Get only keys (ignore values)
- Get only key count
### etcd 3.0 (April)
- v3 API ([see also the issue tag](https://github.com/coreos/etcd/issues?utf8=%E2%9C%93&q=label%3Aarea/v3api))
- Leases
- Binary protocol
- Support a large number of watchers
- Failure guarantees documented
- Simple v3 client (golang)
- v3 API
- Locking
- Better disk backend
- Improved write throughput
- Support larger datasets and histories
- Simpler disaster recovery UX
- Integrated with Kubernetes
- Mirroring
### etcd 3.2 (2017-Feb)
- Stable scalable proxy
- JWT token based auth
- Improved watch performance
- ...
### etcd 3.1 (July)
- API bindings for other languages
### etcd 3.+ (future)
- Horizontally scalable proxy layer

View File

@@ -32,9 +32,7 @@ var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
const _ = proto.ProtoPackageIsVersion1
type Permission_Type int32
@@ -101,113 +99,113 @@ func init() {
proto.RegisterType((*Role)(nil), "authpb.Role")
proto.RegisterEnum("authpb.Permission_Type", Permission_Type_name, Permission_Type_value)
}
func (m *User) Marshal() (dAtA []byte, err error) {
func (m *User) Marshal() (data []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return dAtA[:n], nil
return data[:n], nil
}
func (m *User) MarshalTo(dAtA []byte) (int, error) {
func (m *User) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Name) > 0 {
dAtA[i] = 0xa
data[i] = 0xa
i++
i = encodeVarintAuth(dAtA, i, uint64(len(m.Name)))
i += copy(dAtA[i:], m.Name)
i = encodeVarintAuth(data, i, uint64(len(m.Name)))
i += copy(data[i:], m.Name)
}
if len(m.Password) > 0 {
dAtA[i] = 0x12
data[i] = 0x12
i++
i = encodeVarintAuth(dAtA, i, uint64(len(m.Password)))
i += copy(dAtA[i:], m.Password)
i = encodeVarintAuth(data, i, uint64(len(m.Password)))
i += copy(data[i:], m.Password)
}
if len(m.Roles) > 0 {
for _, s := range m.Roles {
dAtA[i] = 0x1a
data[i] = 0x1a
i++
l = len(s)
for l >= 1<<7 {
dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
data[i] = uint8(uint64(l)&0x7f | 0x80)
l >>= 7
i++
}
dAtA[i] = uint8(l)
data[i] = uint8(l)
i++
i += copy(dAtA[i:], s)
i += copy(data[i:], s)
}
}
return i, nil
}
func (m *Permission) Marshal() (dAtA []byte, err error) {
func (m *Permission) Marshal() (data []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return dAtA[:n], nil
return data[:n], nil
}
func (m *Permission) MarshalTo(dAtA []byte) (int, error) {
func (m *Permission) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if m.PermType != 0 {
dAtA[i] = 0x8
data[i] = 0x8
i++
i = encodeVarintAuth(dAtA, i, uint64(m.PermType))
i = encodeVarintAuth(data, i, uint64(m.PermType))
}
if len(m.Key) > 0 {
dAtA[i] = 0x12
data[i] = 0x12
i++
i = encodeVarintAuth(dAtA, i, uint64(len(m.Key)))
i += copy(dAtA[i:], m.Key)
i = encodeVarintAuth(data, i, uint64(len(m.Key)))
i += copy(data[i:], m.Key)
}
if len(m.RangeEnd) > 0 {
dAtA[i] = 0x1a
data[i] = 0x1a
i++
i = encodeVarintAuth(dAtA, i, uint64(len(m.RangeEnd)))
i += copy(dAtA[i:], m.RangeEnd)
i = encodeVarintAuth(data, i, uint64(len(m.RangeEnd)))
i += copy(data[i:], m.RangeEnd)
}
return i, nil
}
func (m *Role) Marshal() (dAtA []byte, err error) {
func (m *Role) Marshal() (data []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return dAtA[:n], nil
return data[:n], nil
}
func (m *Role) MarshalTo(dAtA []byte) (int, error) {
func (m *Role) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Name) > 0 {
dAtA[i] = 0xa
data[i] = 0xa
i++
i = encodeVarintAuth(dAtA, i, uint64(len(m.Name)))
i += copy(dAtA[i:], m.Name)
i = encodeVarintAuth(data, i, uint64(len(m.Name)))
i += copy(data[i:], m.Name)
}
if len(m.KeyPermission) > 0 {
for _, msg := range m.KeyPermission {
dAtA[i] = 0x12
data[i] = 0x12
i++
i = encodeVarintAuth(dAtA, i, uint64(msg.Size()))
n, err := msg.MarshalTo(dAtA[i:])
i = encodeVarintAuth(data, i, uint64(msg.Size()))
n, err := msg.MarshalTo(data[i:])
if err != nil {
return 0, err
}
@@ -217,31 +215,31 @@ func (m *Role) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func encodeFixed64Auth(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
func encodeFixed64Auth(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Auth(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
func encodeFixed32Auth(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintAuth(dAtA []byte, offset int, v uint64) int {
func encodeVarintAuth(data []byte, offset int, v uint64) int {
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
data[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
data[offset] = uint8(v)
return offset + 1
}
func (m *User) Size() (n int) {
@@ -310,8 +308,8 @@ func sovAuth(x uint64) (n int) {
func sozAuth(x uint64) (n int) {
return sovAuth(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *User) Unmarshal(dAtA []byte) error {
l := len(dAtA)
func (m *User) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -323,7 +321,7 @@ func (m *User) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -351,7 +349,7 @@ func (m *User) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -365,7 +363,7 @@ func (m *User) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = append(m.Name[:0], dAtA[iNdEx:postIndex]...)
m.Name = append(m.Name[:0], data[iNdEx:postIndex]...)
if m.Name == nil {
m.Name = []byte{}
}
@@ -382,7 +380,7 @@ func (m *User) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -396,7 +394,7 @@ func (m *User) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Password = append(m.Password[:0], dAtA[iNdEx:postIndex]...)
m.Password = append(m.Password[:0], data[iNdEx:postIndex]...)
if m.Password == nil {
m.Password = []byte{}
}
@@ -413,7 +411,7 @@ func (m *User) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -428,11 +426,11 @@ func (m *User) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Roles = append(m.Roles, string(dAtA[iNdEx:postIndex]))
m.Roles = append(m.Roles, string(data[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipAuth(dAtA[iNdEx:])
skippy, err := skipAuth(data[iNdEx:])
if err != nil {
return err
}
@@ -451,8 +449,8 @@ func (m *User) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *Permission) Unmarshal(dAtA []byte) error {
l := len(dAtA)
func (m *Permission) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -464,7 +462,7 @@ func (m *Permission) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -492,7 +490,7 @@ func (m *Permission) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
m.PermType |= (Permission_Type(b) & 0x7F) << shift
if b < 0x80 {
@@ -511,7 +509,7 @@ func (m *Permission) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -525,7 +523,7 @@ func (m *Permission) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Key = append(m.Key[:0], dAtA[iNdEx:postIndex]...)
m.Key = append(m.Key[:0], data[iNdEx:postIndex]...)
if m.Key == nil {
m.Key = []byte{}
}
@@ -542,7 +540,7 @@ func (m *Permission) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -556,14 +554,14 @@ func (m *Permission) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.RangeEnd = append(m.RangeEnd[:0], dAtA[iNdEx:postIndex]...)
m.RangeEnd = append(m.RangeEnd[:0], data[iNdEx:postIndex]...)
if m.RangeEnd == nil {
m.RangeEnd = []byte{}
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipAuth(dAtA[iNdEx:])
skippy, err := skipAuth(data[iNdEx:])
if err != nil {
return err
}
@@ -582,8 +580,8 @@ func (m *Permission) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *Role) Unmarshal(dAtA []byte) error {
l := len(dAtA)
func (m *Role) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -595,7 +593,7 @@ func (m *Role) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -623,7 +621,7 @@ func (m *Role) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -637,7 +635,7 @@ func (m *Role) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = append(m.Name[:0], dAtA[iNdEx:postIndex]...)
m.Name = append(m.Name[:0], data[iNdEx:postIndex]...)
if m.Name == nil {
m.Name = []byte{}
}
@@ -654,7 +652,7 @@ func (m *Role) Unmarshal(dAtA []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -669,13 +667,13 @@ func (m *Role) Unmarshal(dAtA []byte) error {
return io.ErrUnexpectedEOF
}
m.KeyPermission = append(m.KeyPermission, &Permission{})
if err := m.KeyPermission[len(m.KeyPermission)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
if err := m.KeyPermission[len(m.KeyPermission)-1].Unmarshal(data[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipAuth(dAtA[iNdEx:])
skippy, err := skipAuth(data[iNdEx:])
if err != nil {
return err
}
@@ -694,8 +692,8 @@ func (m *Role) Unmarshal(dAtA []byte) error {
}
return nil
}
func skipAuth(dAtA []byte) (n int, err error) {
l := len(dAtA)
func skipAuth(data []byte) (n int, err error) {
l := len(data)
iNdEx := 0
for iNdEx < l {
var wire uint64
@@ -706,7 +704,7 @@ func skipAuth(dAtA []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -724,7 +722,7 @@ func skipAuth(dAtA []byte) (n int, err error) {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
if data[iNdEx-1] < 0x80 {
break
}
}
@@ -741,7 +739,7 @@ func skipAuth(dAtA []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -764,7 +762,7 @@ func skipAuth(dAtA []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
b := data[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -775,7 +773,7 @@ func skipAuth(dAtA []byte) (n int, err error) {
if innerWireType == 4 {
break
}
next, err := skipAuth(dAtA[start:])
next, err := skipAuth(data[start:])
if err != nil {
return 0, err
}
@@ -799,8 +797,6 @@ var (
ErrIntOverflowAuth = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("auth.proto", fileDescriptorAuth) }
var fileDescriptorAuth = []byte{
// 288 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x6c, 0x90, 0xc1, 0x4a, 0xc3, 0x30,
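The hunks above rename the generated `dAtA` parameter back to `data` in gogo/protobuf's hand-unrolled unmarshaling; the logic is unchanged. Every `b & 0x7F ... << shift` loop in these hunks is the same base-128 varint decode. A minimal standalone sketch of that loop, with illustrative names rather than the generated API:

```go
package main

import (
	"errors"
	"fmt"
)

// decodeVarint reads one base-128 varint from data, returning the decoded
// value and the number of bytes consumed. Each byte contributes its low
// seven bits; a set high bit means another byte follows.
func decodeVarint(data []byte) (uint64, int, error) {
	var v uint64
	for i, shift := 0, uint(0); i < len(data); i, shift = i+1, shift+7 {
		if shift >= 64 {
			return 0, 0, errors.New("proto: integer overflow")
		}
		b := data[i]
		v |= uint64(b&0x7F) << shift
		if b < 0x80 { // high bit clear: final byte
			return v, i + 1, nil
		}
	}
	return 0, 0, errors.New("unexpected EOF")
}

func main() {
	v, n, err := decodeVarint([]byte{0xAC, 0x02}) // 300, encoded in two bytes
	fmt.Println(v, n, err)                        // 300 2 <nil>
}
```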


@@ -51,7 +51,7 @@ func isRangeEqual(a, b *rangePerm) bool {
// If there are equal ranges, removeSubsetRangePerms only keeps one of them.
func removeSubsetRangePerms(perms []*rangePerm) []*rangePerm {
// TODO(mitake): currently it is O(n^2), we need a better algorithm
var newp []*rangePerm
newp := make([]*rangePerm, 0)
for i := range perms {
skip := false
@@ -86,7 +86,7 @@ func removeSubsetRangePerms(perms []*rangePerm) []*rangePerm {
// mergeRangePerms merges adjacent rangePerms.
func mergeRangePerms(perms []*rangePerm) []*rangePerm {
var merged []*rangePerm
merged := make([]*rangePerm, 0)
perms = removeSubsetRangePerms(perms)
sort.Sort(RangePermSliceByBegin(perms))
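The two hunks above trade `make([]*rangePerm, 0)` for a plain `var` declaration, i.e. an empty non-nil slice for a nil one. Under `append` and `len` the two are interchangeable, so the change is behavior-preserving here; they only diverge where nil-ness is observable. A quick sketch of the distinction:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

func main() {
	var a []int         // nil slice
	b := make([]int, 0) // empty, non-nil slice

	// Interchangeable under append and len.
	a, b = append(a, 1), append(b, 1)
	fmt.Println(len(a), len(b)) // 1 1

	// Distinguishable where nil-ness leaks out.
	var c []int
	d := make([]int, 0)
	fmt.Println(c == nil, d == nil)      // true false
	fmt.Println(reflect.DeepEqual(c, d)) // false
	jc, _ := json.Marshal(c)
	jd, _ := json.Marshal(d)
	fmt.Println(string(jc), string(jd)) // null []
}
```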


@@ -20,7 +20,6 @@ package auth
import (
"crypto/rand"
"math/big"
"strings"
)
const (
@@ -54,14 +53,3 @@ func (as *authStore) assignSimpleTokenToUser(username, token string) {
as.simpleTokens[token] = username
as.simpleTokensMu.Unlock()
}
func (as *authStore) invalidateUser(username string) {
as.simpleTokensMu.Lock()
defer as.simpleTokensMu.Unlock()
for token, name := range as.simpleTokens {
if strings.Compare(name, username) == 0 {
delete(as.simpleTokens, token)
}
}
}


@@ -16,7 +16,6 @@ package auth
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"sort"
@@ -36,8 +35,6 @@ var (
authEnabled = []byte{1}
authDisabled = []byte{0}
revisionKey = []byte("authRevision")
authBucketName = []byte("auth")
authUsersBucketName = []byte("authUsers")
authRolesBucketName = []byte("authRoles")
@@ -47,7 +44,6 @@ var (
ErrRootUserNotExist = errors.New("auth: root user does not exist")
ErrRootRoleNotExist = errors.New("auth: root user does not have root role")
ErrUserAlreadyExist = errors.New("auth: user already exists")
ErrUserEmpty = errors.New("auth: user name is empty")
ErrUserNotFound = errors.New("auth: user not found")
ErrRoleAlreadyExist = errors.New("auth: role already exists")
ErrRoleNotFound = errors.New("auth: role not found")
@@ -55,25 +51,13 @@ var (
ErrPermissionDenied = errors.New("auth: permission denied")
ErrRoleNotGranted = errors.New("auth: role is not granted to the user")
ErrPermissionNotGranted = errors.New("auth: permission is not granted to the role")
ErrAuthNotEnabled = errors.New("auth: authentication is not enabled")
ErrAuthOldRevision = errors.New("auth: revision in header is old")
// BcryptCost is the algorithm cost / strength for hashing auth passwords
BcryptCost = bcrypt.DefaultCost
)
const (
rootUser = "root"
rootRole = "root"
revBytesLen = 8
)
type AuthInfo struct {
Username string
Revision uint64
}
type AuthStore interface {
// AuthEnable turns on the authentication feature
AuthEnable() error
@@ -126,30 +110,23 @@ type AuthStore interface {
// RoleList gets a list of all roles
RoleList(r *pb.AuthRoleListRequest) (*pb.AuthRoleListResponse, error)
// AuthInfoFromToken gets a username from the given Token and current revision number
// (The revision number is used for preventing the TOCTOU problem)
AuthInfoFromToken(token string) (*AuthInfo, bool)
// UsernameFromToken gets a username from the given Token
UsernameFromToken(token string) (string, bool)
// IsPutPermitted checks put permission of the user
IsPutPermitted(authInfo *AuthInfo, key []byte) error
IsPutPermitted(username string, key []byte) bool
// IsRangePermitted checks range permission of the user
IsRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error
IsRangePermitted(username string, key, rangeEnd []byte) bool
// IsDeleteRangePermitted checks delete-range permission of the user
IsDeleteRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error
IsDeleteRangePermitted(username string, key, rangeEnd []byte) bool
// IsAdminPermitted checks admin permission of the user
IsAdminPermitted(authInfo *AuthInfo) error
IsAdminPermitted(username string) bool
// GenSimpleToken produces a simple random string
GenSimpleToken() (string, error)
// Revision gets current revision of authStore
Revision() uint64
// CheckPassword checks a given pair of username and password is correct
CheckPassword(username, password string) (uint64, error)
}
type authStore struct {
@@ -161,8 +138,6 @@ type authStore struct {
simpleTokensMu sync.RWMutex
simpleTokens map[string]string // token -> username
revision uint64
}
func (as *authStore) AuthEnable() error {
@@ -191,8 +166,6 @@ func (as *authStore) AuthEnable() error {
as.rangePermCache = make(map[string]*unifiedRangePermissions)
as.revision = getRevision(tx)
plog.Noticef("Authentication enabled")
return nil
@@ -203,7 +176,6 @@ func (as *authStore) AuthDisable() {
tx := b.BatchTx()
tx.Lock()
tx.UnsafePut(authBucketName, enableFlagKey, authDisabled)
as.commitRevision(tx)
tx.Unlock()
b.ForceCommit()
@@ -211,18 +183,10 @@ func (as *authStore) AuthDisable() {
as.enabled = false
as.enabledMu.Unlock()
as.simpleTokensMu.Lock()
as.simpleTokens = make(map[string]string) // invalidate all tokens
as.simpleTokensMu.Unlock()
plog.Noticef("Authentication disabled")
}
func (as *authStore) Authenticate(ctx context.Context, username, password string) (*pb.AuthenticateResponse, error) {
if !as.isAuthEnabled() {
return nil, ErrAuthNotEnabled
}
// TODO(mitake): after adding jwt support, branching based on values of ctx is required
index := ctx.Value("index").(uint64)
simpleToken := ctx.Value("simpleToken").(string)
@@ -236,6 +200,11 @@ func (as *authStore) Authenticate(ctx context.Context, username, password string
return nil, ErrAuthFailed
}
if bcrypt.CompareHashAndPassword(user.Password, []byte(password)) != nil {
plog.Noticef("authentication failed, invalid password for user %s", username)
return &pb.AuthenticateResponse{}, ErrAuthFailed
}
token := fmt.Sprintf("%s.%d", simpleToken, index)
as.assignSimpleTokenToUser(username, token)
@@ -243,24 +212,6 @@ func (as *authStore) Authenticate(ctx context.Context, username, password string
return &pb.AuthenticateResponse{Token: token}, nil
}
func (as *authStore) CheckPassword(username, password string) (uint64, error) {
tx := as.be.BatchTx()
tx.Lock()
defer tx.Unlock()
user := getUser(tx, username)
if user == nil {
return 0, ErrAuthFailed
}
if bcrypt.CompareHashAndPassword(user.Password, []byte(password)) != nil {
plog.Noticef("authentication failed, invalid password for user %s", username)
return 0, ErrAuthFailed
}
return getRevision(tx), nil
}
func (as *authStore) Recover(be backend.Backend) {
enabled := false
as.be = be
@@ -272,9 +223,6 @@ func (as *authStore) Recover(be backend.Backend) {
enabled = true
}
}
as.revision = getRevision(tx)
tx.Unlock()
as.enabledMu.Lock()
@@ -283,11 +231,7 @@ func (as *authStore) Recover(be backend.Backend) {
}
func (as *authStore) UserAdd(r *pb.AuthUserAddRequest) (*pb.AuthUserAddResponse, error) {
if len(r.Name) == 0 {
return nil, ErrUserEmpty
}
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), BcryptCost)
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), bcrypt.DefaultCost)
if err != nil {
plog.Errorf("failed to hash password: %s", err)
return nil, err
@@ -309,8 +253,6 @@ func (as *authStore) UserAdd(r *pb.AuthUserAddRequest) (*pb.AuthUserAddResponse,
putUser(tx, newUser)
as.commitRevision(tx)
plog.Noticef("added a new user: %s", r.Name)
return &pb.AuthUserAddResponse{}, nil
@@ -328,11 +270,6 @@ func (as *authStore) UserDelete(r *pb.AuthUserDeleteRequest) (*pb.AuthUserDelete
delUser(tx, r.Name)
as.commitRevision(tx)
as.invalidateCachedPerm(r.Name)
as.invalidateUser(r.Name)
plog.Noticef("deleted a user: %s", r.Name)
return &pb.AuthUserDeleteResponse{}, nil
@@ -341,7 +278,7 @@ func (as *authStore) UserDelete(r *pb.AuthUserDeleteRequest) (*pb.AuthUserDelete
func (as *authStore) UserChangePassword(r *pb.AuthUserChangePasswordRequest) (*pb.AuthUserChangePasswordResponse, error) {
// TODO(mitake): measure the cost of bcrypt.GenerateFromPassword()
// If the cost is too high, we should move the encryption to outside of the raft
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), BcryptCost)
hashed, err := bcrypt.GenerateFromPassword([]byte(r.Password), bcrypt.DefaultCost)
if err != nil {
plog.Errorf("failed to hash password: %s", err)
return nil, err
@@ -364,11 +301,6 @@ func (as *authStore) UserChangePassword(r *pb.AuthUserChangePasswordRequest) (*p
putUser(tx, updatedUser)
as.commitRevision(tx)
as.invalidateCachedPerm(r.Name)
as.invalidateUser(r.Name)
plog.Noticef("changed a password of a user: %s", r.Name)
return &pb.AuthUserChangePasswordResponse{}, nil
@@ -404,8 +336,6 @@ func (as *authStore) UserGrantRole(r *pb.AuthUserGrantRoleRequest) (*pb.AuthUser
as.invalidateCachedPerm(r.User)
as.commitRevision(tx)
plog.Noticef("granted role %s to user %s", r.Role, r.User)
return &pb.AuthUserGrantRoleResponse{}, nil
}
@@ -474,8 +404,6 @@ func (as *authStore) UserRevokeRole(r *pb.AuthUserRevokeRoleRequest) (*pb.AuthUs
as.invalidateCachedPerm(r.Name)
as.commitRevision(tx)
plog.Noticef("revoked role %s from user %s", r.Role, r.Name)
return &pb.AuthUserRevokeRoleResponse{}, nil
}
@@ -545,8 +473,6 @@ func (as *authStore) RoleRevokePermission(r *pb.AuthRoleRevokePermissionRequest)
// It should be optimized.
as.clearCachedPerm()
as.commitRevision(tx)
plog.Noticef("revoked key %s from role %s", r.Key, r.Role)
return &pb.AuthRoleRevokePermissionResponse{}, nil
}
@@ -575,8 +501,6 @@ func (as *authStore) RoleDelete(r *pb.AuthRoleDeleteRequest) (*pb.AuthRoleDelete
delRole(tx, r.Role)
as.commitRevision(tx)
plog.Noticef("deleted role %s", r.Role)
return &pb.AuthRoleDeleteResponse{}, nil
}
@@ -597,18 +521,16 @@ func (as *authStore) RoleAdd(r *pb.AuthRoleAddRequest) (*pb.AuthRoleAddResponse,
putRole(tx, newRole)
as.commitRevision(tx)
plog.Noticef("Role %s is created", r.Name)
return &pb.AuthRoleAddResponse{}, nil
}
func (as *authStore) AuthInfoFromToken(token string) (*AuthInfo, bool) {
func (as *authStore) UsernameFromToken(token string) (string, bool) {
as.simpleTokensMu.RLock()
defer as.simpleTokensMu.RUnlock()
t, ok := as.simpleTokens[token]
return &AuthInfo{Username: t, Revision: as.revision}, ok
return t, ok
}
type permSlice []*authpb.Permission
@@ -660,21 +582,15 @@ func (as *authStore) RoleGrantPermission(r *pb.AuthRoleGrantPermissionRequest) (
// It should be optimized.
as.clearCachedPerm()
as.commitRevision(tx)
plog.Noticef("role %s's permission of key %s is updated as %s", r.Name, r.Perm.Key, authpb.Permission_Type_name[int32(r.Perm.PermType)])
return &pb.AuthRoleGrantPermissionResponse{}, nil
}
func (as *authStore) isOpPermitted(userName string, revision uint64, key, rangeEnd []byte, permTyp authpb.Permission_Type) error {
func (as *authStore) isOpPermitted(userName string, key, rangeEnd []byte, permTyp authpb.Permission_Type) bool {
// TODO(mitake): this function would be costly so we need a caching mechanism
if !as.isAuthEnabled() {
return nil
}
if revision < as.revision {
return ErrAuthOldRevision
return true
}
tx := as.be.BatchTx()
@@ -684,52 +600,48 @@ func (as *authStore) isOpPermitted(userName string, revision uint64, key, rangeE
user := getUser(tx, userName)
if user == nil {
plog.Errorf("invalid user name %s for permission checking", userName)
return ErrPermissionDenied
return false
}
// root role should have permission on all ranges
if hasRootRole(user) {
return nil
return true
}
if as.isRangeOpPermitted(tx, userName, key, rangeEnd, permTyp) {
return nil
return true
}
return ErrPermissionDenied
return false
}
func (as *authStore) IsPutPermitted(authInfo *AuthInfo, key []byte) error {
return as.isOpPermitted(authInfo.Username, authInfo.Revision, key, nil, authpb.WRITE)
func (as *authStore) IsPutPermitted(username string, key []byte) bool {
return as.isOpPermitted(username, key, nil, authpb.WRITE)
}
func (as *authStore) IsRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error {
return as.isOpPermitted(authInfo.Username, authInfo.Revision, key, rangeEnd, authpb.READ)
func (as *authStore) IsRangePermitted(username string, key, rangeEnd []byte) bool {
return as.isOpPermitted(username, key, rangeEnd, authpb.READ)
}
func (as *authStore) IsDeleteRangePermitted(authInfo *AuthInfo, key, rangeEnd []byte) error {
return as.isOpPermitted(authInfo.Username, authInfo.Revision, key, rangeEnd, authpb.WRITE)
func (as *authStore) IsDeleteRangePermitted(username string, key, rangeEnd []byte) bool {
return as.isOpPermitted(username, key, rangeEnd, authpb.WRITE)
}
func (as *authStore) IsAdminPermitted(authInfo *AuthInfo) error {
func (as *authStore) IsAdminPermitted(username string) bool {
if !as.isAuthEnabled() {
return nil
return true
}
tx := as.be.BatchTx()
tx.Lock()
defer tx.Unlock()
u := getUser(tx, authInfo.Username)
u := getUser(tx, username)
if u == nil {
return ErrUserNotFound
return false
}
if !hasRootRole(u) {
return ErrPermissionDenied
}
return nil
return hasRootRole(u)
}
func getUser(tx backend.BatchTx, username string) *authpb.User {
@@ -841,18 +753,13 @@ func NewAuthStore(be backend.Backend) *authStore {
tx.UnsafeCreateBucket(authUsersBucketName)
tx.UnsafeCreateBucket(authRolesBucketName)
as := &authStore{
be: be,
simpleTokens: make(map[string]string),
revision: 0,
}
as.commitRevision(tx)
tx.Unlock()
be.ForceCommit()
return as
return &authStore{
be: be,
simpleTokens: make(map[string]string),
}
}
func hasRootRole(u *authpb.User) bool {
@@ -863,23 +770,3 @@ func hasRootRole(u *authpb.User) bool {
}
return false
}
func (as *authStore) commitRevision(tx backend.BatchTx) {
as.revision++
revBytes := make([]byte, revBytesLen)
binary.BigEndian.PutUint64(revBytes, as.revision)
tx.UnsafePut(authBucketName, revisionKey, revBytes)
}
func getRevision(tx backend.BatchTx) uint64 {
_, vs := tx.UnsafeRange(authBucketName, []byte(revisionKey), nil, 0)
if len(vs) != 1 {
plog.Panicf("failed to get the key of auth store revision")
}
return binary.BigEndian.Uint64(vs[0])
}
func (as *authStore) Revision() uint64 {
return as.revision
}
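The store diff above adds revision tracking: every mutation calls `commitRevision`, which persists an 8-byte big-endian counter under the `authRevision` key, and permission checks compare a request's header revision against it so stale requests fail with `ErrAuthOldRevision`. A minimal sketch of that bookkeeping, using a plain map as an illustrative stand-in for the backend bucket:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// store mimics the revision bookkeeping only; kv stands in for the
// "auth" bucket of etcd's mvcc backend.
type store struct {
	revision uint64
	kv       map[string][]byte
}

func (s *store) commitRevision() {
	s.revision++
	buf := make([]byte, 8) // revBytesLen
	binary.BigEndian.PutUint64(buf, s.revision)
	s.kv["authRevision"] = buf
}

func (s *store) getRevision() uint64 {
	v, ok := s.kv["authRevision"]
	if !ok {
		return 0
	}
	return binary.BigEndian.Uint64(v)
}

func main() {
	s := &store{kv: make(map[string][]byte)}
	s.commitRevision()
	s.commitRevision()
	fmt.Println(s.getRevision()) // 2
}
```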


@@ -20,12 +20,9 @@ import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"github.com/coreos/etcd/mvcc/backend"
"golang.org/x/crypto/bcrypt"
"golang.org/x/net/context"
)
func init() { BcryptCost = bcrypt.MinCost }
func TestUserAdd(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer func() {
@@ -46,34 +43,9 @@ func TestUserAdd(t *testing.T) {
if err != ErrUserAlreadyExist {
t.Fatalf("expected %v, got %v", ErrUserAlreadyExist, err)
}
ua = &pb.AuthUserAddRequest{Name: ""}
_, err = as.UserAdd(ua) // add a user with empty name
if err != ErrUserEmpty {
t.Fatal(err)
}
}
func enableAuthAndCreateRoot(as *authStore) error {
_, err := as.UserAdd(&pb.AuthUserAddRequest{Name: "root", Password: "root"})
if err != nil {
return err
}
_, err = as.RoleAdd(&pb.AuthRoleAddRequest{Name: "root"})
if err != nil {
return err
}
_, err = as.UserGrantRole(&pb.AuthUserGrantRoleRequest{User: "root", Role: "root"})
if err != nil {
return err
}
return as.AuthEnable()
}
func TestCheckPassword(t *testing.T) {
func TestAuthenticate(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer func() {
b.Close()
@@ -81,19 +53,16 @@ func TestCheckPassword(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
ua := &pb.AuthUserAddRequest{Name: "foo", Password: "bar"}
_, err = as.UserAdd(ua)
_, err := as.UserAdd(ua)
if err != nil {
t.Fatal(err)
}
// auth a non-existing user
_, err = as.CheckPassword("foo-test", "bar")
ctx1 := context.WithValue(context.WithValue(context.TODO(), "index", uint64(1)), "simpleToken", "dummy")
_, err = as.Authenticate(ctx1, "foo-test", "bar")
if err == nil {
t.Fatalf("expected %v, got %v", ErrAuthFailed, err)
}
@@ -102,13 +71,15 @@ func TestCheckPassword(t *testing.T) {
}
// auth an existing user with correct password
_, err = as.CheckPassword("foo", "bar")
ctx2 := context.WithValue(context.WithValue(context.TODO(), "index", uint64(2)), "simpleToken", "dummy")
_, err = as.Authenticate(ctx2, "foo", "bar")
if err != nil {
t.Fatal(err)
}
// auth an existing user but with wrong password
_, err = as.CheckPassword("foo", "")
ctx3 := context.WithValue(context.WithValue(context.TODO(), "index", uint64(3)), "simpleToken", "dummy")
_, err = as.Authenticate(ctx3, "foo", "")
if err == nil {
t.Fatalf("expected %v, got %v", ErrAuthFailed, err)
}
@@ -125,13 +96,9 @@ func TestUserDelete(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
ua := &pb.AuthUserAddRequest{Name: "foo"}
_, err = as.UserAdd(ua)
_, err := as.UserAdd(ua)
if err != nil {
t.Fatal(err)
}
@@ -161,12 +128,8 @@ func TestUserChangePassword(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
_, err = as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
_, err := as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
if err != nil {
t.Fatal(err)
}
@@ -206,13 +169,9 @@ func TestRoleAdd(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
// adds a new role
_, err = as.RoleAdd(&pb.AuthRoleAddRequest{Name: "role-test"})
_, err := as.RoleAdd(&pb.AuthRoleAddRequest{Name: "role-test"})
if err != nil {
t.Fatal(err)
}
@@ -226,12 +185,8 @@ func TestUserGrant(t *testing.T) {
}()
as := NewAuthStore(b)
err := enableAuthAndCreateRoot(as)
if err != nil {
t.Fatal(err)
}
_, err = as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
_, err := as.UserAdd(&pb.AuthUserAddRequest{Name: "foo"})
if err != nil {
t.Fatal(err)
}
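The test diff above also shows why `BcryptCost` became a variable: the test init sets it to `bcrypt.MinCost` so password hashing doesn't dominate test time, while production stays on `bcrypt.DefaultCost`. A small sketch of the hash/verify pair the store relies on:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	cost := bcrypt.MinCost // tests; production would use bcrypt.DefaultCost
	hashed, err := bcrypt.GenerateFromPassword([]byte("bar"), cost)
	if err != nil {
		panic(err)
	}
	// CompareHashAndPassword returns nil only on a match.
	fmt.Println(bcrypt.CompareHashAndPassword(hashed, []byte("bar")) == nil) // true
	fmt.Println(bcrypt.CompareHashAndPassword(hashed, []byte("baz")) == nil) // false
}
```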

build

@@ -4,19 +4,15 @@
ORG_PATH="github.com/coreos"
REPO_PATH="${ORG_PATH}/etcd"
export GO15VENDOREXPERIMENT="1"
eval $(go env)
GIT_SHA=`git rev-parse --short HEAD || echo "GitNotFound"`
if [ ! -z "$FAILPOINTS" ]; then
GIT_SHA="$GIT_SHA"-FAILPOINTS
fi
# Set GO_LDFLAGS="-s" for building without symbols for debugging.
GO_LDFLAGS="$GO_LDFLAGS -X ${REPO_PATH}/cmd/vendor/${REPO_PATH}/version.GitSHA=${GIT_SHA}"
# enable/disable failpoints
toggle_failpoints() {
FAILPKGS="etcdserver/ mvcc/backend/"
FAILPKGS="etcdserver/"
mode="disable"
if [ ! -z "$FAILPOINTS" ]; then mode="enable"; fi
@@ -31,33 +27,18 @@ toggle_failpoints() {
}
etcd_build() {
out="bin"
if [ -n "${BINDIR}" ]; then out="${BINDIR}"; fi
if [ -z "${GOARCH}" ] || [ "${GOARCH}" = "$(go env GOHOSTARCH)" ]; then
out="bin"
else
out="bin/${GOARCH}"
fi
toggle_failpoints
# Static compilation is useful when etcd is run in a container
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcd ${REPO_PATH}/cmd/etcd || return
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "$GO_LDFLAGS" -o ${out}/etcdctl ${REPO_PATH}/cmd/etcdctl || return
}
etcd_setup_gopath() {
CDIR=$(cd `dirname "$0"` && pwd)
cd "$CDIR"
etcdGOPATH=${CDIR}/gopath
# preserve old gopath to support building with unvendored tooling deps (e.g., gofail)
if [ -n "$GOPATH" ]; then
GOPATH=":$GOPATH"
fi
export GOPATH=${etcdGOPATH}$GOPATH
rm -f ${etcdGOPATH}/src
mkdir -p ${etcdGOPATH}
ln -s ${CDIR}/cmd/vendor ${etcdGOPATH}/src
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "-s -X ${REPO_PATH}/cmd/vendor/${REPO_PATH}/version.GitSHA=${GIT_SHA}" -o ${out}/etcd ${REPO_PATH}/cmd
CGO_ENABLED=0 go build $GO_BUILD_FLAGS -installsuffix cgo -ldflags "-s" -o ${out}/etcdctl ${REPO_PATH}/cmd/etcdctl
}
toggle_failpoints
# only build when called directly, not sourced
if echo "$0" | grep "build$" >/dev/null; then
# force new gopath so builds outside of gopath work
etcd_setup_gopath
etcd_build
fi
# don't build when sourced
(echo "$0" | grep "/build$" > /dev/null) && etcd_build || true


@@ -1,18 +1,9 @@
$ORG_PATH="github.com/coreos"
$REPO_PATH="$ORG_PATH/etcd"
$PWD = $((Get-Item -Path ".\" -Verbose).FullName)
$FSROOT = $((Get-Location).Drive.Name+":")
$FSYS = $((Get-WMIObject win32_logicaldisk -filter "DeviceID = '$FSROOT'").filesystem)
if ($FSYS.StartsWith("FAT","CurrentCultureIgnoreCase")) {
echo "Error: Cannot build etcd using the $FSYS filesystem (use NTFS instead)"
exit 1
}
# Set $Env:GO_LDFLAGS="-s" for building without symbols.
$GO_LDFLAGS="$Env:GO_LDFLAGS -X $REPO_PATH/cmd/vendor/$REPO_PATH/version.GitSHA=$GIT_SHA"
# rebuild symlinks
echo "Rebuilding symlinks"
git ls-files -s cmd | select-string -pattern 120000 | ForEach {
$l = $_.ToString()
$lnkname = $l.Split(' ')[1]
@@ -22,54 +13,27 @@ git ls-files -s cmd | select-string -pattern 120000 | ForEach {
$terms = $lnkname.Split("\")
$dirname = $terms[0..($terms.length-2)] -join "\"
$lnkname = "$PWD\$lnkname"
$targetAbs = "$((Get-Item -Path "$dirname\$target").FullName)"
$targetAbs = $targetAbs.Replace("/", "\")
if (test-path -pathtype container "$targetAbs") {
if (Test-Path "$lnkname") {
if ((Get-Item "$lnkname") -is [System.IO.DirectoryInfo]) {
# rd so deleting junction doesn't take files with it
cmd /c rd "$lnkname"
}
}
if (Test-Path "$lnkname") {
if (!((Get-Item "$lnkname") -is [System.IO.DirectoryInfo])) {
cmd /c del /A /F "$lnkname"
}
}
cmd /c mklink /J "$lnkname" "$targetAbs" ">NUL"
# rd so deleting junction doesn't take files with it
cmd /c rd "$lnkname"
cmd /c del /A /F "$lnkname"
cmd /c mklink /J "$lnkname" "$targetAbs"
} else {
# Remove file with symlink data (first run)
if (Test-Path "$lnkname") {
cmd /c del /A /F "$lnkname"
}
cmd /c mklink /H "$lnkname" "$targetAbs" ">NUL"
cmd /c del /A /F "$lnkname"
cmd /c mklink /H "$lnkname" "$targetAbs"
}
}
if (-not $env:GOPATH) {
$orgpath="$PWD\gopath\src\" + $ORG_PATH.Replace("/", "\")
if (Test-Path "$orgpath\etcd") {
if ((Get-Item "$orgpath\etcd") -is [System.IO.DirectoryInfo]) {
# rd so deleting junction doesn't take files with it
cmd /c rd "$orgpath\etcd"
}
}
if (Test-Path "$orgpath") {
if ((Get-Item "$orgpath") -is [System.IO.DirectoryInfo]) {
# rd so deleting junction doesn't take files with it
cmd /c rd "$orgpath"
}
}
if (Test-Path "$orgpath") {
if (!((Get-Item "$orgpath") -is [System.IO.DirectoryInfo])) {
# Remove file with symlink data (first run)
cmd /c del /A /F "$orgpath"
}
}
cmd /c mkdir "$orgpath"
cmd /c mklink /J "$orgpath\etcd" "$PWD" ">NUL"
cmd /c rd "$orgpath\etcd"
cmd /c del "$orgpath"
cmd /c mkdir "$orgpath"
cmd /c mklink /J "$orgpath\etcd" "$PWD"
$env:GOPATH = "$PWD\gopath"
}
@@ -77,5 +41,5 @@ if (-not $env:GOPATH) {
$env:CGO_ENABLED = 0
$env:GO15VENDOREXPERIMENT = 1
$GIT_SHA="$(git rev-parse --short HEAD)"
go build -a -installsuffix cgo -ldflags $GO_LDFLAGS -o bin\etcd.exe "$REPO_PATH\cmd\etcd"
go build -a -installsuffix cgo -ldflags $GO_LDFLAGS -o bin\etcdctl.exe "$REPO_PATH\cmd\etcdctl"
go build -a -installsuffix cgo -ldflags "-s -X $REPO_PATH/cmd/vendor/$REPO_PATH/version.GitSHA=$GIT_SHA" -o bin\etcd.exe "$REPO_PATH\cmd"
go build -a -installsuffix cgo -ldflags "-s" -o bin\etcdctl.exe "$REPO_PATH\cmd\etcdctl"


@@ -22,6 +22,7 @@ import (
"net"
"net/http"
"net/url"
"reflect"
"sort"
"strconv"
"sync"
@@ -260,67 +261,53 @@ type httpClusterClient struct {
selectionMode EndpointSelectionMode
}
func (c *httpClusterClient) getLeaderEndpoint(ctx context.Context, eps []url.URL) (string, error) {
ceps := make([]url.URL, len(eps))
copy(ceps, eps)
// To perform a lookup on the new endpoint list without using the current
// client, we'll copy it
clientCopy := &httpClusterClient{
clientFactory: c.clientFactory,
credentials: c.credentials,
rand: c.rand,
pinned: 0,
endpoints: ceps,
}
mAPI := NewMembersAPI(clientCopy)
leader, err := mAPI.Leader(ctx)
func (c *httpClusterClient) getLeaderEndpoint() (string, error) {
mAPI := NewMembersAPI(c)
leader, err := mAPI.Leader(context.Background())
if err != nil {
return "", err
}
if len(leader.ClientURLs) == 0 {
return "", ErrNoLeaderEndpoint
}
return leader.ClientURLs[0], nil // TODO: how to handle multiple client URLs?
}
func (c *httpClusterClient) parseEndpoints(eps []string) ([]url.URL, error) {
func (c *httpClusterClient) SetEndpoints(eps []string) error {
if len(eps) == 0 {
return []url.URL{}, ErrNoEndpoints
return ErrNoEndpoints
}
neps := make([]url.URL, len(eps))
for i, ep := range eps {
u, err := url.Parse(ep)
if err != nil {
return []url.URL{}, err
return err
}
neps[i] = *u
}
return neps, nil
}
func (c *httpClusterClient) SetEndpoints(eps []string) error {
neps, err := c.parseEndpoints(eps)
if err != nil {
return err
switch c.selectionMode {
case EndpointSelectionRandom:
c.endpoints = shuffleEndpoints(c.rand, neps)
c.pinned = 0
case EndpointSelectionPrioritizeLeader:
c.endpoints = neps
lep, err := c.getLeaderEndpoint()
if err != nil {
return ErrNoLeaderEndpoint
}
for i := range c.endpoints {
if c.endpoints[i].String() == lep {
c.pinned = i
break
}
}
// If the endpoint list doesn't include the leader, just keep c.pinned = 0.
// Forwarding between follower and leader would be required, but it works.
default:
return fmt.Errorf("invalid endpoint selection mode: %d", c.selectionMode)
}
c.Lock()
defer c.Unlock()
c.endpoints = shuffleEndpoints(c.rand, neps)
// We're not doing anything for PrioritizeLeader here because, without a
// context, we can't call getLeaderEndpoint. However, callers using
// PrioritizeLeader are already expected to call Sync regularly, where we
// do have a ctx and can determine the leader. PrioritizeLeader is also
// quite a loose guarantee, so this is acceptable.
c.pinned = 0
return nil
}
@@ -414,51 +401,27 @@ func (c *httpClusterClient) Sync(ctx context.Context) error {
return err
}
var eps []string
c.Lock()
defer c.Unlock()
eps := make([]string, 0)
for _, m := range ms {
eps = append(eps, m.ClientURLs...)
}
sort.Sort(sort.StringSlice(eps))
neps, err := c.parseEndpoints(eps)
if err != nil {
return err
ceps := make([]string, len(c.endpoints))
for i, cep := range c.endpoints {
ceps[i] = cep.String()
}
sort.Sort(sort.StringSlice(ceps))
// fast path if nothing has changed; this helps the client keep its
// pinned endpoint when the cluster membership is unchanged
if reflect.DeepEqual(eps, ceps) {
return nil
}
npin := 0
switch c.selectionMode {
case EndpointSelectionRandom:
c.RLock()
eq := endpointsEqual(c.endpoints, neps)
c.RUnlock()
if eq {
return nil
}
// When items in the endpoint list change, we choose a new pin
neps = shuffleEndpoints(c.rand, neps)
case EndpointSelectionPrioritizeLeader:
nle, err := c.getLeaderEndpoint(ctx, neps)
if err != nil {
return ErrNoLeaderEndpoint
}
for i, n := range neps {
if n.String() == nle {
npin = i
break
}
}
default:
return fmt.Errorf("invalid endpoint selection mode: %d", c.selectionMode)
}
c.Lock()
defer c.Unlock()
c.endpoints = neps
c.pinned = npin
return nil
return c.SetEndpoints(eps)
}
func (c *httpClusterClient) AutoSync(ctx context.Context, interval time.Duration) error {
@@ -644,27 +607,3 @@ func shuffleEndpoints(r *rand.Rand, eps []url.URL) []url.URL {
}
return neps
}
func endpointsEqual(left, right []url.URL) bool {
if len(left) != len(right) {
return false
}
sLeft := make([]string, len(left))
sRight := make([]string, len(right))
for i, l := range left {
sLeft[i] = l.String()
}
for i, r := range right {
sRight[i] = r.String()
}
sort.Strings(sLeft)
sort.Strings(sRight)
for i := range sLeft {
if sLeft[i] != sRight[i] {
return false
}
}
return true
}
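The Sync changes above compare the old and new endpoint lists order-insensitively so the client keeps its pinned endpoint when membership is unchanged, and, under `EndpointSelectionPrioritizeLeader`, re-pins to the leader's client URL with a fallback to index 0. A small standalone sketch of that pin selection (function name illustrative):

```go
package main

import "fmt"

// pickPin returns the index of the leader's endpoint, or 0 when the
// leader isn't in the list; in that case requests are simply forwarded
// between follower and leader.
func pickPin(endpoints []string, leader string) int {
	for i, ep := range endpoints {
		if ep == leader {
			return i
		}
	}
	return 0
}

func main() {
	eps := []string{"http://127.0.0.1:2379", "http://127.0.0.1:4001", "http://127.0.0.1:4002"}
	fmt.Println(pickPin(eps, "http://127.0.0.1:4001")) // 1
	fmt.Println(pickPin(eps, "http://10.0.0.9:2379"))  // 0 (fallback)
}
```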


@@ -855,7 +855,7 @@ func TestHTTPClusterClientAutoSyncFail(t *testing.T) {
}
err = hc.AutoSync(context.Background(), time.Hour)
if !strings.HasPrefix(err.Error(), ErrClusterUnavailable.Error()) {
if err.Error() != ErrClusterUnavailable.Error() {
t.Fatalf("incorrect error value: want=%v got=%v", ErrClusterUnavailable, err)
}
}
@@ -900,90 +900,6 @@ func TestHTTPClusterClientSyncPinEndpoint(t *testing.T) {
}
}
// TestHTTPClusterClientSyncUnpinEndpoint tests that Sync() unpins the endpoint when
// it gets a different member list than before.
func TestHTTPClusterClientSyncUnpinEndpoint(t *testing.T) {
cf := newStaticHTTPClientFactory([]staticHTTPResponse{
{
resp: http.Response{StatusCode: http.StatusOK, Header: http.Header{"Content-Type": []string{"application/json"}}},
body: []byte(`{"members":[{"id":"2745e2525fce8fe","peerURLs":["http://127.0.0.1:7003"],"name":"node3","clientURLs":["http://127.0.0.1:4003"]},{"id":"42134f434382925","peerURLs":["http://127.0.0.1:2380","http://127.0.0.1:7001"],"name":"node1","clientURLs":["http://127.0.0.1:2379","http://127.0.0.1:4001"]},{"id":"94088180e21eb87b","peerURLs":["http://127.0.0.1:7002"],"name":"node2","clientURLs":["http://127.0.0.1:4002"]}]}`),
},
{
resp: http.Response{StatusCode: http.StatusOK, Header: http.Header{"Content-Type": []string{"application/json"}}},
body: []byte(`{"members":[{"id":"42134f434382925","peerURLs":["http://127.0.0.1:2380","http://127.0.0.1:7001"],"name":"node1","clientURLs":["http://127.0.0.1:2379","http://127.0.0.1:4001"]},{"id":"94088180e21eb87b","peerURLs":["http://127.0.0.1:7002"],"name":"node2","clientURLs":["http://127.0.0.1:4002"]}]}`),
},
{
resp: http.Response{StatusCode: http.StatusOK, Header: http.Header{"Content-Type": []string{"application/json"}}},
body: []byte(`{"members":[{"id":"2745e2525fce8fe","peerURLs":["http://127.0.0.1:7003"],"name":"node3","clientURLs":["http://127.0.0.1:4003"]},{"id":"42134f434382925","peerURLs":["http://127.0.0.1:2380","http://127.0.0.1:7001"],"name":"node1","clientURLs":["http://127.0.0.1:2379","http://127.0.0.1:4001"]},{"id":"94088180e21eb87b","peerURLs":["http://127.0.0.1:7002"],"name":"node2","clientURLs":["http://127.0.0.1:4002"]}]}`),
},
})
hc := &httpClusterClient{
clientFactory: cf,
rand: rand.New(rand.NewSource(0)),
}
err := hc.SetEndpoints([]string{"http://127.0.0.1:4003", "http://127.0.0.1:2379", "http://127.0.0.1:4001", "http://127.0.0.1:4002"})
if err != nil {
t.Fatalf("unexpected error during setup: %#v", err)
}
wants := []string{"http://127.0.0.1:2379", "http://127.0.0.1:4001", "http://127.0.0.1:4002"}
for i := 0; i < 3; i++ {
err = hc.Sync(context.Background())
if err != nil {
t.Fatalf("#%d: unexpected error during Sync: %#v", i, err)
}
if g := hc.endpoints[hc.pinned]; g.String() != wants[i] {
t.Errorf("#%d: pinned endpoint = %v, want %v", i, g, wants[i])
}
}
}
// TestHTTPClusterClientSyncPinLeaderEndpoint tests that Sync() pins the leader
// when the selection mode is EndpointSelectionPrioritizeLeader
func TestHTTPClusterClientSyncPinLeaderEndpoint(t *testing.T) {
cf := newStaticHTTPClientFactory([]staticHTTPResponse{
{
resp: http.Response{StatusCode: http.StatusOK, Header: http.Header{"Content-Type": []string{"application/json"}}},
body: []byte(`{"members":[{"id":"2745e2525fce8fe","peerURLs":["http://127.0.0.1:7003"],"name":"node3","clientURLs":["http://127.0.0.1:4003"]},{"id":"42134f434382925","peerURLs":["http://127.0.0.1:2380","http://127.0.0.1:7001"],"name":"node1","clientURLs":["http://127.0.0.1:2379","http://127.0.0.1:4001"]},{"id":"94088180e21eb87b","peerURLs":["http://127.0.0.1:7002"],"name":"node2","clientURLs":["http://127.0.0.1:4002"]}]}`),
},
{
resp: http.Response{StatusCode: http.StatusOK, Header: http.Header{"Content-Type": []string{"application/json"}}},
body: []byte(`{"id":"2745e2525fce8fe","peerURLs":["http://127.0.0.1:7003"],"name":"node3","clientURLs":["http://127.0.0.1:4003"]}`),
},
{
resp: http.Response{StatusCode: http.StatusOK, Header: http.Header{"Content-Type": []string{"application/json"}}},
body: []byte(`{"members":[{"id":"2745e2525fce8fe","peerURLs":["http://127.0.0.1:7003"],"name":"node3","clientURLs":["http://127.0.0.1:4003"]},{"id":"42134f434382925","peerURLs":["http://127.0.0.1:2380","http://127.0.0.1:7001"],"name":"node1","clientURLs":["http://127.0.0.1:2379","http://127.0.0.1:4001"]},{"id":"94088180e21eb87b","peerURLs":["http://127.0.0.1:7002"],"name":"node2","clientURLs":["http://127.0.0.1:4002"]}]}`),
},
{
resp: http.Response{StatusCode: http.StatusOK, Header: http.Header{"Content-Type": []string{"application/json"}}},
body: []byte(`{"id":"94088180e21eb87b","peerURLs":["http://127.0.0.1:7002"],"name":"node2","clientURLs":["http://127.0.0.1:4002"]}`),
},
})
hc := &httpClusterClient{
clientFactory: cf,
rand: rand.New(rand.NewSource(0)),
selectionMode: EndpointSelectionPrioritizeLeader,
endpoints: []url.URL{{}}, // Need somewhere to pretend to send to initially
}
wants := []string{"http://127.0.0.1:4003", "http://127.0.0.1:4002"}
for i, want := range wants {
err := hc.Sync(context.Background())
if err != nil {
t.Fatalf("#%d: unexpected error during Sync: %#v", i, err)
}
pinned := hc.endpoints[hc.pinned].String()
if pinned != want {
t.Errorf("#%d: pinned endpoint = %v, want %v", i, pinned, want)
}
}
}
func TestHTTPClusterClientResetFail(t *testing.T) {
tests := [][]string{
// need at least one endpoint


@@ -21,11 +21,7 @@ type ClusterError struct {
}
func (ce *ClusterError) Error() string {
s := ErrClusterUnavailable.Error()
for i, e := range ce.Errors {
s += fmt.Sprintf("; error #%d: %s\n", i, e)
}
return s
return ErrClusterUnavailable.Error()
}
func (ce *ClusterError) Detail() string {
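The hunk above changes `ClusterError.Error()` from the bare `ErrClusterUnavailable` message to a string that also enumerates each underlying error. A standalone sketch of the aggregated form, with the type and variable renamed for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

var errClusterUnavailable = errors.New("client: etcd cluster is unavailable or misconfigured")

type clusterError struct{ Errors []error }

// Error reports the umbrella message followed by every collected cause.
func (ce *clusterError) Error() string {
	s := errClusterUnavailable.Error()
	for i, e := range ce.Errors {
		s += fmt.Sprintf("; error #%d: %s\n", i, e)
	}
	return s
}

func main() {
	ce := &clusterError{Errors: []error{errors.New("dial tcp 127.0.0.1:2379: connection refused")}}
	fmt.Print(ce.Error())
}
```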


@@ -1,17 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package integration implements tests built upon embedded etcd, focusing on
// the correctness of the etcd v2 client.
package integration


@@ -8,11 +8,10 @@ package client
import (
"errors"
"fmt"
codec1978 "github.com/ugorji/go/codec"
"reflect"
"runtime"
time "time"
codec1978 "github.com/ugorji/go/codec"
)
const (


@@ -191,10 +191,6 @@ type SetOptions struct {
// Dir specifies whether or not this Node should be created as a directory.
Dir bool
// NoValueOnSuccess specifies whether the response contains the current value of the Node.
// If set, the response will only contain the current value when the request fails.
NoValueOnSuccess bool
}
type GetOptions struct {
@@ -272,10 +268,6 @@ type Response struct {
// Index holds the cluster-level index at the time the Response was generated.
// This index is not tied to the Node(s) contained in this Response.
Index uint64 `json:"-"`
// ClusterID holds the cluster-level ID reported by the server. This
// should be different for different etcd clusters.
ClusterID string `json:"-"`
}
type Node struct {
@@ -343,7 +335,6 @@ func (k *httpKeysAPI) Set(ctx context.Context, key, val string, opts *SetOptions
act.TTL = opts.TTL
act.Refresh = opts.Refresh
act.Dir = opts.Dir
act.NoValueOnSuccess = opts.NoValueOnSuccess
}
doCtx := ctx
@@ -532,16 +523,15 @@ func (w *waitAction) HTTPRequest(ep url.URL) *http.Request {
}
type setAction struct {
Prefix string
Key string
Value string
PrevValue string
PrevIndex uint64
PrevExist PrevExistType
TTL time.Duration
Refresh bool
Dir bool
NoValueOnSuccess bool
Prefix string
Key string
Value string
PrevValue string
PrevIndex uint64
PrevExist PrevExistType
TTL time.Duration
Refresh bool
Dir bool
}
func (a *setAction) HTTPRequest(ep url.URL) *http.Request {
@@ -575,9 +565,6 @@ func (a *setAction) HTTPRequest(ep url.URL) *http.Request {
if a.Refresh {
form.Add("refresh", "true")
}
if a.NoValueOnSuccess {
params.Set("noValueOnSuccess", strconv.FormatBool(a.NoValueOnSuccess))
}
u.RawQuery = params.Encode()
body := strings.NewReader(form.Encode())
@@ -669,7 +656,6 @@ func unmarshalSuccessfulKeysResponse(header http.Header, body []byte) (*Response
return nil, err
}
}
res.ClusterID = header.Get("X-Etcd-Cluster-ID")
return &res, nil
}
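The keys.go hunks above thread a `NoValueOnSuccess` option through `setAction`, where it surfaces as a `noValueOnSuccess` query parameter on the v2 set request. A minimal sketch of that URL construction (the helper is illustrative, not the client's API):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// buildSetURL appends the key to the endpoint path and, when requested,
// adds the noValueOnSuccess query parameter exactly as setAction does.
func buildSetURL(ep url.URL, key string, noValueOnSuccess bool) string {
	u := ep
	u.Path = u.Path + key
	params := u.Query()
	if noValueOnSuccess {
		params.Set("noValueOnSuccess", strconv.FormatBool(noValueOnSuccess))
	}
	u.RawQuery = params.Encode()
	return u.String()
}

func main() {
	ep, _ := url.Parse("http://example.com")
	fmt.Println(buildSetURL(*ep, "/foo", true)) // http://example.com/foo?noValueOnSuccess=true
}
```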


@@ -407,15 +407,6 @@ func TestSetAction(t *testing.T) {
wantURL: "http://example.com/foo?dir=true",
wantBody: "",
},
// NoValueOnSuccess is set
{
act: setAction{
Key: "foo",
NoValueOnSuccess: true,
},
wantURL: "http://example.com/foo?noValueOnSuccess=true",
wantBody: "value=",
},
}
for i, tt := range tests {
@@ -673,24 +664,23 @@ func TestUnmarshalSuccessfulResponse(t *testing.T) {
expiration.UnmarshalText([]byte("2015-04-07T04:40:23.044979686Z"))
tests := []struct {
indexHdr string
clusterIDHdr string
body string
wantRes *Response
wantErr bool
hdr string
body string
wantRes *Response
wantErr bool
}{
// Neither PrevNode or Node
{
indexHdr: "1",
body: `{"action":"delete"}`,
wantRes: &Response{Action: "delete", Index: 1},
wantErr: false,
hdr: "1",
body: `{"action":"delete"}`,
wantRes: &Response{Action: "delete", Index: 1},
wantErr: false,
},
// PrevNode
{
indexHdr: "15",
body: `{"action":"delete", "prevNode": {"key": "/foo", "value": "bar", "modifiedIndex": 12, "createdIndex": 10}}`,
hdr: "15",
body: `{"action":"delete", "prevNode": {"key": "/foo", "value": "bar", "modifiedIndex": 12, "createdIndex": 10}}`,
wantRes: &Response{
Action: "delete",
Index: 15,
@@ -707,8 +697,8 @@ func TestUnmarshalSuccessfulResponse(t *testing.T) {
// Node
{
indexHdr: "15",
body: `{"action":"get", "node": {"key": "/foo", "value": "bar", "modifiedIndex": 12, "createdIndex": 10, "ttl": 10, "expiration": "2015-04-07T04:40:23.044979686Z"}}`,
hdr: "15",
body: `{"action":"get", "node": {"key": "/foo", "value": "bar", "modifiedIndex": 12, "createdIndex": 10, "ttl": 10, "expiration": "2015-04-07T04:40:23.044979686Z"}}`,
wantRes: &Response{
Action: "get",
Index: 15,
@@ -727,9 +717,8 @@ func TestUnmarshalSuccessfulResponse(t *testing.T) {
// Node Dir
{
indexHdr: "15",
clusterIDHdr: "abcdef",
body: `{"action":"get", "node": {"key": "/foo", "dir": true, "modifiedIndex": 12, "createdIndex": 10}}`,
hdr: "15",
body: `{"action":"get", "node": {"key": "/foo", "dir": true, "modifiedIndex": 12, "createdIndex": 10}}`,
wantRes: &Response{
Action: "get",
Index: 15,
@@ -739,16 +728,15 @@ func TestUnmarshalSuccessfulResponse(t *testing.T) {
ModifiedIndex: 12,
CreatedIndex: 10,
},
PrevNode: nil,
ClusterID: "abcdef",
PrevNode: nil,
},
wantErr: false,
},
// PrevNode and Node
{
indexHdr: "15",
body: `{"action":"update", "prevNode": {"key": "/foo", "value": "baz", "modifiedIndex": 10, "createdIndex": 10}, "node": {"key": "/foo", "value": "bar", "modifiedIndex": 12, "createdIndex": 10}}`,
hdr: "15",
body: `{"action":"update", "prevNode": {"key": "/foo", "value": "baz", "modifiedIndex": 10, "createdIndex": 10}, "node": {"key": "/foo", "value": "bar", "modifiedIndex": 12, "createdIndex": 10}}`,
wantRes: &Response{
Action: "update",
Index: 15,
@@ -770,24 +758,24 @@ func TestUnmarshalSuccessfulResponse(t *testing.T) {
// Garbage in body
{
indexHdr: "",
body: `garbage`,
wantRes: nil,
wantErr: true,
hdr: "",
body: `garbage`,
wantRes: nil,
wantErr: true,
},
// non-integer index
{
indexHdr: "poo",
body: `{}`,
wantRes: nil,
wantErr: true,
hdr: "poo",
body: `{}`,
wantRes: nil,
wantErr: true,
},
}
for i, tt := range tests {
h := make(http.Header)
h.Add("X-Etcd-Index", tt.indexHdr)
h.Add("X-Etcd-Index", tt.hdr)
res, err := unmarshalSuccessfulKeysResponse(h, []byte(tt.body))
if tt.wantErr != (err != nil) {
t.Errorf("#%d: wantErr=%t, err=%v", i, tt.wantErr, err)


@@ -14,20 +14,6 @@
package client
import (
"regexp"
)
var (
roleNotFoundRegExp *regexp.Regexp
userNotFoundRegExp *regexp.Regexp
)
func init() {
roleNotFoundRegExp = regexp.MustCompile("auth: Role .* does not exist.")
userNotFoundRegExp = regexp.MustCompile("auth: User .* does not exist.")
}
// IsKeyNotFound returns true if the error code is ErrorCodeKeyNotFound.
func IsKeyNotFound(err error) bool {
if cErr, ok := err.(Error); ok {
@@ -35,19 +21,3 @@ func IsKeyNotFound(err error) bool {
}
return false
}
// IsRoleNotFound returns true if the error means role not found of v2 API.
func IsRoleNotFound(err error) bool {
if ae, ok := err.(authError); ok {
return roleNotFoundRegExp.MatchString(ae.Message)
}
return false
}
// IsUserNotFound returns true if the error means user not found of v2 API.
func IsUserNotFound(err error) bool {
if ae, ok := err.(authError); ok {
return userNotFoundRegExp.MatchString(ae.Message)
}
return false
}
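The helpers above classify v2 auth failures by matching the server's message format with regular expressions, since the v2 API reports these conditions as plain strings. A standalone sketch of the matching:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	roleNotFoundRegExp = regexp.MustCompile("auth: Role .* does not exist.")
	userNotFoundRegExp = regexp.MustCompile("auth: User .* does not exist.")
)

func main() {
	msg := "auth: Role guest does not exist."
	fmt.Println(roleNotFoundRegExp.MatchString(msg)) // true
	fmt.Println(userNotFoundRegExp.MatchString(msg)) // false
}
```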


@@ -72,10 +72,6 @@ if err != nil {
}
```
## Metrics
The etcd client optionally exposes RPC metrics through [go-grpc-prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus). See the [examples](https://github.com/coreos/etcd/blob/master/clientv3/example_metrics_test.go).
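A hedged sketch of wiring those interceptors up and exposing the collected metrics for scraping; the dial target and listen address are illustrative:

```go
package main

import (
	"net/http"

	prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"google.golang.org/grpc"
)

func main() {
	// Instrument every unary and streaming RPC made over this connection.
	conn, err := grpc.Dial("localhost:2379",
		grpc.WithInsecure(),
		grpc.WithUnaryInterceptor(prometheus.UnaryClientInterceptor),
		grpc.WithStreamInterceptor(prometheus.StreamClientInterceptor),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Serve the gathered RPC metrics on /metrics.
	http.Handle("/metrics", promhttp.Handler())
	if err := http.ListenAndServe(":9090", nil); err != nil {
		panic(err)
	}
}
```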
## Examples
More code examples can be found at [GoDoc](https://godoc.org/github.com/coreos/etcd/clientv3).


@@ -43,7 +43,6 @@ type (
AuthRoleListResponse pb.AuthRoleListResponse
PermissionType authpb.Permission_Type
Permission authpb.Permission
)
const (
@@ -116,12 +115,12 @@ func NewAuth(c *Client) Auth {
}
func (auth *auth) AuthEnable(ctx context.Context) (*AuthEnableResponse, error) {
resp, err := auth.remote.AuthEnable(ctx, &pb.AuthEnableRequest{}, grpc.FailFast(false))
resp, err := auth.remote.AuthEnable(ctx, &pb.AuthEnableRequest{})
return (*AuthEnableResponse)(resp), toErr(ctx, err)
}
func (auth *auth) AuthDisable(ctx context.Context) (*AuthDisableResponse, error) {
resp, err := auth.remote.AuthDisable(ctx, &pb.AuthDisableRequest{}, grpc.FailFast(false))
resp, err := auth.remote.AuthDisable(ctx, &pb.AuthDisableRequest{})
return (*AuthDisableResponse)(resp), toErr(ctx, err)
}


@@ -21,14 +21,8 @@ import (
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
)
// ErrNoAddrAvilable is returned by Get() when the balancer does not have
// any active connection to endpoints at the time.
// This error is returned only when opts.BlockingWait is false.
var ErrNoAddrAvilable = grpc.Errorf(codes.Unavailable, "there is no address available")
// simpleBalancer does the bare minimum to expose multiple eps
// to the grpc reconnection code path
type simpleBalancer struct {
@@ -48,11 +42,6 @@ type simpleBalancer struct {
// upc closes when upEps transitions from empty to non-zero or the balancer closes.
upc chan struct{}
// grpc issues TLS cert checks using the string passed into dial so
// that string must be the host. To recover the full scheme://host URL,
// have a map from hosts to the original endpoint.
host2ep map[string]string
// pinAddr is the currently pinned address; set to the empty string on
// initialization and shutdown.
pinAddr string
@@ -73,12 +62,11 @@ func newSimpleBalancer(eps []string) *simpleBalancer {
readyc: make(chan struct{}),
upEps: make(map[string]struct{}),
upc: make(chan struct{}),
host2ep: getHost2ep(eps),
}
return sb
}
func (b *simpleBalancer) Start(target string, config grpc.BalancerConfig) error { return nil }
func (b *simpleBalancer) Start(target string) error { return nil }
func (b *simpleBalancer) ConnectNotify() <-chan struct{} {
b.mu.Lock()
@@ -86,49 +74,6 @@ func (b *simpleBalancer) ConnectNotify() <-chan struct{} {
return b.upc
}
func (b *simpleBalancer) getEndpoint(host string) string {
b.mu.Lock()
defer b.mu.Unlock()
return b.host2ep[host]
}
func getHost2ep(eps []string) map[string]string {
hm := make(map[string]string, len(eps))
for i := range eps {
_, host, _ := parseEndpoint(eps[i])
hm[host] = eps[i]
}
return hm
}
func (b *simpleBalancer) updateAddrs(eps []string) {
np := getHost2ep(eps)
b.mu.Lock()
defer b.mu.Unlock()
match := len(np) == len(b.host2ep)
for k, v := range np {
if b.host2ep[k] != v {
match = false
break
}
}
if match {
// same endpoints, so no need to update address
return
}
b.host2ep = np
addrs := make([]grpc.Address, 0, len(eps))
for i := range eps {
addrs = append(addrs, grpc.Address{Addr: getHost(eps[i])})
}
b.addrs = addrs
b.notifyCh <- addrs
}
func (b *simpleBalancer) Up(addr grpc.Address) func(error) {
b.mu.Lock()
defer b.mu.Unlock()
@@ -168,25 +113,6 @@ func (b *simpleBalancer) Up(addr grpc.Address) func(error) {
func (b *simpleBalancer) Get(ctx context.Context, opts grpc.BalancerGetOptions) (grpc.Address, func(), error) {
var addr string
// If opts.BlockingWait is false (for fail-fast RPCs), it should return
// an address it has notified via Notify immediately instead of blocking.
if !opts.BlockingWait {
b.mu.RLock()
closed := b.closed
addr = b.pinAddr
upEps := len(b.upEps)
b.mu.RUnlock()
if closed {
return grpc.Address{Addr: ""}, nil, grpc.ErrClientConnClosing
}
if upEps == 0 {
return grpc.Address{Addr: ""}, nil, ErrNoAddrAvilable
}
return grpc.Address{Addr: addr}, func() {}, nil
}
for {
b.mu.RLock()
ch := b.upc
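The removed block above is the fail-fast path: with `BlockingWait` false, `Get()` immediately reports `ErrNoAddrAvilable` when no endpoint is up instead of parking on `upc`. A simplified standalone sketch of the two modes, trimmed to the essentials rather than etcd's actual balancer:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"time"
)

var errNoAddrAvailable = errors.New("there is no address available")

type balancer struct {
	mu     sync.RWMutex
	pinned string
	upc    chan struct{} // closed once an address comes up
}

func newBalancer() *balancer { return &balancer{upc: make(chan struct{})} }

func (b *balancer) Up(addr string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.pinned == "" {
		b.pinned = addr
		close(b.upc) // release any blocking Get
	}
}

func (b *balancer) Get(ctx context.Context, blocking bool) (string, error) {
	if !blocking { // fail-fast: report the current state immediately
		b.mu.RLock()
		defer b.mu.RUnlock()
		if b.pinned == "" {
			return "", errNoAddrAvailable
		}
		return b.pinned, nil
	}
	select { // blocking: wait for Up or cancellation
	case <-b.upc:
		b.mu.RLock()
		defer b.mu.RUnlock()
		return b.pinned, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	b := newBalancer()
	if _, err := b.Get(context.Background(), false); err != nil {
		fmt.Println("fail-fast:", err)
	}
	go func() { time.Sleep(10 * time.Millisecond); b.Up("localhost:2379") }()
	addr, _ := b.Get(context.Background(), true)
	fmt.Println("blocking got:", addr)
}
```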


@@ -1,106 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clientv3
import (
"errors"
"testing"
"time"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
var (
endpoints = []string{"localhost:2379", "localhost:22379", "localhost:32379"}
)
func TestBalancerGetUnblocking(t *testing.T) {
sb := newSimpleBalancer(endpoints)
unblockingOpts := grpc.BalancerGetOptions{BlockingWait: false}
_, _, err := sb.Get(context.Background(), unblockingOpts)
if err != ErrNoAddrAvilable {
t.Errorf("Get() with no up endpoints should return ErrNoAddrAvailable, got: %v", err)
}
down1 := sb.Up(grpc.Address{Addr: endpoints[1]})
down2 := sb.Up(grpc.Address{Addr: endpoints[2]})
addrFirst, putFun, err := sb.Get(context.Background(), unblockingOpts)
if err != nil {
t.Errorf("Get() with up endpoints should success, got %v", err)
}
if addrFirst.Addr != endpoints[1] && addrFirst.Addr != endpoints[2] {
t.Errorf("Get() didn't return expected address, got %v", addrFirst)
}
if putFun == nil {
t.Errorf("Get() returned unexpected nil put function")
}
addrSecond, _, _ := sb.Get(context.Background(), unblockingOpts)
if addrFirst.Addr != addrSecond.Addr {
t.Errorf("Get() didn't return the same address as previous call, got %v and %v", addrFirst, addrSecond)
}
down1(errors.New("error"))
down2(errors.New("error"))
_, _, err = sb.Get(context.Background(), unblockingOpts)
if err != ErrNoAddrAvilable {
t.Errorf("Get() with no up endpoints should return ErrNoAddrAvailable, got: %v", err)
}
}
func TestBalancerGetBlocking(t *testing.T) {
sb := newSimpleBalancer(endpoints)
blockingOpts := grpc.BalancerGetOptions{BlockingWait: true}
ctx, _ := context.WithTimeout(context.Background(), time.Millisecond*100)
_, _, err := sb.Get(ctx, blockingOpts)
if err != context.DeadlineExceeded {
t.Errorf("Get() with no up endpoints should timeout, got %v", err)
}
downC := make(chan func(error), 1)
go func() {
// ensure sb.Up() will be called after sb.Get() to see if Up() releases blocking Get()
time.Sleep(time.Millisecond * 100)
downC <- sb.Up(grpc.Address{Addr: endpoints[1]})
}()
addrFirst, putFun, err := sb.Get(context.Background(), blockingOpts)
if err != nil {
t.Errorf("Get() with up endpoints should success, got %v", err)
}
if addrFirst.Addr != endpoints[1] {
t.Errorf("Get() didn't return expected address, got %v", addrFirst)
}
if putFun == nil {
t.Errorf("Get() returned unexpected nil put function")
}
down1 := <-downC
down2 := sb.Up(grpc.Address{Addr: endpoints[2]})
addrSecond, _, _ := sb.Get(context.Background(), blockingOpts)
if addrFirst.Addr != addrSecond.Addr {
t.Errorf("Get() didn't return the same address as previous call, got %v and %v", addrFirst, addrSecond)
}
down1(errors.New("error"))
down2(errors.New("error"))
ctx, _ = context.WithTimeout(context.Background(), time.Millisecond*100)
_, _, err = sb.Get(ctx, blockingOpts)
if err != context.DeadlineExceeded {
t.Errorf("Get() with no up endpoints should timeout, got %v", err)
}
}


@@ -18,17 +18,17 @@ import (
"crypto/tls"
"errors"
"fmt"
"io/ioutil"
"log"
"net"
"net/url"
"strings"
"time"
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/metadata"
)
@@ -87,7 +87,6 @@ func NewFromConfigFile(path string) (*Client, error) {
// Close shuts down the client's etcd connections.
func (c *Client) Close() error {
c.cancel()
c.Watcher.Close()
return toErr(c.ctx, c.conn.Close())
}
@@ -99,44 +98,6 @@ func (c *Client) Ctx() context.Context { return c.ctx }
// Endpoints lists the registered endpoints for the client.
func (c *Client) Endpoints() []string { return c.cfg.Endpoints }
// SetEndpoints updates client's endpoints.
func (c *Client) SetEndpoints(eps ...string) {
c.cfg.Endpoints = eps
c.balancer.updateAddrs(eps)
}
// Sync synchronizes client's endpoints with the known endpoints from the etcd membership.
func (c *Client) Sync(ctx context.Context) error {
mresp, err := c.MemberList(ctx)
if err != nil {
return err
}
var eps []string
for _, m := range mresp.Members {
eps = append(eps, m.ClientURLs...)
}
c.SetEndpoints(eps...)
return nil
}
func (c *Client) autoSync() {
if c.cfg.AutoSyncInterval == time.Duration(0) {
return
}
for {
select {
case <-c.ctx.Done():
return
case <-time.After(c.cfg.AutoSyncInterval):
ctx, _ := context.WithTimeout(c.ctx, 5*time.Second)
if err := c.Sync(ctx); err != nil && err != c.ctx.Err() {
logger.Println("Auto sync endpoints failed:", err)
}
}
}
}
type authTokenCredential struct {
token string
}
@@ -151,31 +112,19 @@ func (cred authTokenCredential) GetRequestMetadata(ctx context.Context, s ...str
}, nil
}
func parseEndpoint(endpoint string) (proto string, host string, scheme string) {
func (c *Client) dialTarget(endpoint string) (proto string, host string, creds *credentials.TransportCredentials) {
proto = "tcp"
host = endpoint
creds = c.creds
url, uerr := url.Parse(endpoint)
if uerr != nil || !strings.Contains(endpoint, "://") {
return
}
scheme = url.Scheme
// strip scheme:// prefix since grpc dials by host
host = url.Host
switch url.Scheme {
case "http", "https":
case "unix":
proto = "unix"
default:
proto, host = "", ""
}
return
}
func (c *Client) processCreds(scheme string) (creds *credentials.TransportCredentials) {
creds = c.creds
switch scheme {
case "unix":
case "http":
creds = nil
case "https":
@@ -186,7 +135,7 @@ func (c *Client) processCreds(scheme string) (creds *credentials.TransportCreden
emptyCreds := credentials.NewTLS(tlsconfig)
creds = &emptyCreds
default:
creds = nil
return "", "", nil
}
return
}
@@ -198,8 +147,17 @@ func (c *Client) dialSetupOpts(endpoint string, dopts ...grpc.DialOption) (opts
}
opts = append(opts, dopts...)
// grpc issues TLS cert checks using the string passed into dial so
// that string must be the host. To recover the full scheme://host URL,
// have a map from hosts to the original endpoint.
host2ep := make(map[string]string)
for i := range c.cfg.Endpoints {
_, host, _ := c.dialTarget(c.cfg.Endpoints[i])
host2ep[host] = c.cfg.Endpoints[i]
}
f := func(host string, t time.Duration) (net.Conn, error) {
proto, host, _ := parseEndpoint(c.balancer.getEndpoint(host))
proto, host, _ := c.dialTarget(host2ep[host])
if proto == "" {
return nil, fmt.Errorf("unknown scheme for %q", host)
}
@@ -212,10 +170,7 @@ func (c *Client) dialSetupOpts(endpoint string, dopts ...grpc.DialOption) (opts
}
opts = append(opts, grpc.WithDialer(f))
creds := c.creds
if _, _, scheme := parseEndpoint(endpoint); len(scheme) != 0 {
creds = c.processCreds(scheme)
}
_, _, creds := c.dialTarget(endpoint)
if creds != nil {
opts = append(opts, grpc.WithTransportCredentials(*creds))
} else {
@@ -248,10 +203,6 @@ func (c *Client) dial(endpoint string, dopts ...grpc.DialOption) (*grpc.ClientCo
opts = append(opts, grpc.WithPerRPCCredentials(authTokenCredential{token: resp.Token}))
}
// add metrics options
opts = append(opts, grpc.WithUnaryInterceptor(prometheus.UnaryClientInterceptor))
opts = append(opts, grpc.WithStreamInterceptor(prometheus.StreamClientInterceptor))
conn, err := grpc.Dial(host, opts...)
if err != nil {
return nil, err
@@ -321,8 +272,13 @@ func newClient(cfg *Config) (*Client, error) {
client.Watcher = NewWatcher(client)
client.Auth = NewAuth(client)
client.Maintenance = NewMaintenance(client)
if cfg.Logger != nil {
logger.Set(cfg.Logger)
} else {
// disable client-side grpc logging by default
logger.Set(log.New(ioutil.Discard, "", 0))
}
go client.autoSync()
return client, nil
}
@@ -338,14 +294,17 @@ func isHaltErr(ctx context.Context, err error) bool {
if err == nil {
return false
}
code := grpc.Code(err)
// Unavailable codes mean the system will be right back.
// (e.g., can't connect, lost leader)
// Treat Internal codes as if something failed, leaving the
// system in an inconsistent state, but retrying could make progress.
// (e.g., failed in middle of send, corrupted frame)
// TODO: are permanent Internal errors possible from grpc?
return code != codes.Unavailable && code != codes.Internal
eErr := rpctypes.Error(err)
if _, ok := eErr.(rpctypes.EtcdError); ok {
return eErr != rpctypes.ErrStopped && eErr != rpctypes.ErrNoLeader
}
// treat etcdserver errors not recognized by the client as halting
return isConnClosing(err) || strings.Contains(err.Error(), "etcdserver:")
}
// isConnClosing returns true if the error matches a grpc client closing error
func isConnClosing(err error) bool {
return strings.Contains(err.Error(), grpc.ErrClientConnClosing.Error())
}
func toErr(ctx context.Context, err error) error {
@@ -353,20 +312,12 @@ func toErr(ctx context.Context, err error) error {
return nil
}
err = rpctypes.Error(err)
if _, ok := err.(rpctypes.EtcdError); ok {
return err
}
code := grpc.Code(err)
switch code {
case codes.DeadlineExceeded:
fallthrough
case codes.Canceled:
if ctx.Err() != nil {
err = ctx.Err()
}
case codes.Unavailable:
switch {
case ctx.Err() != nil && strings.Contains(err.Error(), "context"):
err = ctx.Err()
case strings.Contains(err.Error(), ErrNoAvailableEndpoints.Error()):
err = ErrNoAvailableEndpoints
case codes.FailedPrecondition:
case strings.Contains(err.Error(), grpc.ErrClientConnClosing.Error()):
err = grpc.ErrClientConnClosing
}
return err
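Taken together, isHaltErr and toErr encode the client's retry policy: retry while the failure looks transient, stop and normalize the error otherwise. A hedged in-package sketch of that policy (doRPC is a hypothetical stand-in for any unary call; a real caller would add backoff):

package clientv3

import "golang.org/x/net/context"

func withRetry(ctx context.Context, doRPC func(context.Context) error) error {
	for {
		err := doRPC(ctx)
		if err == nil {
			return nil
		}
		if isHaltErr(ctx, err) {
			// closing client or an unrecognized etcdserver error:
			// give up and return the normalized error
			return toErr(ctx, err)
		}
		// transient (no leader, server stopping, lost connection): retry
	}
}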

View File

@@ -19,7 +19,7 @@ import (
"testing"
"time"
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
"github.com/coreos/etcd/etcdserver"
"github.com/coreos/etcd/pkg/testutil"
"golang.org/x/net/context"
"google.golang.org/grpc"
@@ -72,11 +72,11 @@ func TestIsHaltErr(t *testing.T) {
if !isHaltErr(nil, fmt.Errorf("etcdserver: some etcdserver error")) {
t.Errorf(`error prefixed with "etcdserver: " should be Halted by default`)
}
if isHaltErr(nil, rpctypes.ErrGRPCStopped) {
t.Errorf("error %v should not halt", rpctypes.ErrGRPCStopped)
if isHaltErr(nil, etcdserver.ErrStopped) {
t.Errorf("error %v should not halt", etcdserver.ErrStopped)
}
if isHaltErr(nil, rpctypes.ErrGRPCNoLeader) {
t.Errorf("error %v should not halt", rpctypes.ErrGRPCNoLeader)
if isHaltErr(nil, etcdserver.ErrNoLeader) {
t.Errorf("error %v should not halt", etcdserver.ErrNoLeader)
}
ctx, cancel := context.WithCancel(context.TODO())
if isHaltErr(ctx, nil) {

View File

@@ -78,7 +78,7 @@ func (c *cluster) MemberUpdate(ctx context.Context, id uint64, peerAddrs []strin
// it is safe to retry on update.
for {
r := &pb.MemberUpdateRequest{ID: id, PeerURLs: peerAddrs}
resp, err := c.remote.MemberUpdate(ctx, r, grpc.FailFast(false))
resp, err := c.remote.MemberUpdate(ctx, r)
if err == nil {
return (*MemberUpdateResponse)(resp), nil
}

View File

@@ -36,8 +36,6 @@ func Compare(cmp Cmp, result string, v interface{}) Cmp {
switch result {
case "=":
r = pb.Compare_EQUAL
case "!=":
r = pb.Compare_NOT_EQUAL
case ">":
r = pb.Compare_GREATER
case "<":

View File

@@ -29,7 +29,7 @@ var (
)
type Election struct {
session *Session
client *v3.Client
keyPrefix string
@@ -39,18 +39,20 @@ type Election struct {
}
// NewElection returns a new election on a given key prefix.
func NewElection(s *Session, pfx string) *Election {
return &Election{session: s, keyPrefix: pfx + "/"}
func NewElection(client *v3.Client, pfx string) *Election {
return &Election{client: client, keyPrefix: pfx + "/"}
}
// Campaign puts a value as eligible for the election. It blocks until
// it is elected, an error occurs, or the context is cancelled.
func (e *Election) Campaign(ctx context.Context, val string) error {
s := e.session
client := e.session.Client()
s, serr := NewSession(e.client)
if serr != nil {
return serr
}
k := fmt.Sprintf("%s%x", e.keyPrefix, s.Lease())
txn := client.Txn(ctx).If(v3.Compare(v3.CreateRevision(k), "=", 0))
k := fmt.Sprintf("%s/%x", e.keyPrefix, s.Lease())
txn := e.client.Txn(ctx).If(v3.Compare(v3.CreateRevision(k), "=", 0))
txn = txn.Then(v3.OpPut(k, val, v3.WithLease(s.Lease())))
txn = txn.Else(v3.OpGet(k))
resp, err := txn.Commit()
@@ -69,12 +71,12 @@ func (e *Election) Campaign(ctx context.Context, val string) error {
}
}
err = waitDeletes(ctx, client, e.keyPrefix, e.leaderRev-1)
err = waitDeletes(ctx, e.client, e.keyPrefix, v3.WithPrefix(), v3.WithRev(e.leaderRev-1))
if err != nil {
// clean up in case of context cancel
select {
case <-ctx.Done():
e.Resign(client.Ctx())
e.Resign(e.client.Ctx())
default:
e.leaderSession = nil
}
@@ -89,9 +91,8 @@ func (e *Election) Proclaim(ctx context.Context, val string) error {
if e.leaderSession == nil {
return ErrElectionNotLeader
}
client := e.session.Client()
cmp := v3.Compare(v3.CreateRevision(e.leaderKey), "=", e.leaderRev)
txn := client.Txn(ctx).If(cmp)
txn := e.client.Txn(ctx).If(cmp)
txn = txn.Then(v3.OpPut(e.leaderKey, val, v3.WithLease(e.leaderSession.Lease())))
tresp, terr := txn.Commit()
if terr != nil {
@@ -109,8 +110,7 @@ func (e *Election) Resign(ctx context.Context) (err error) {
if e.leaderSession == nil {
return nil
}
client := e.session.Client()
_, err = client.Delete(ctx, e.leaderKey)
_, err = e.client.Delete(ctx, e.leaderKey)
e.leaderKey = ""
e.leaderSession = nil
return err
@@ -118,8 +118,7 @@ func (e *Election) Resign(ctx context.Context) (err error) {
// Leader returns the leader value for the current election.
func (e *Election) Leader(ctx context.Context) (string, error) {
client := e.session.Client()
resp, err := client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
resp, err := e.client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
if err != nil {
return "", err
} else if len(resp.Kvs) == 0 {
@@ -139,11 +138,9 @@ func (e *Election) Observe(ctx context.Context) <-chan v3.GetResponse {
}
func (e *Election) observe(ctx context.Context, ch chan<- v3.GetResponse) {
client := e.session.Client()
defer close(ch)
for {
resp, err := client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
resp, err := e.client.Get(ctx, e.keyPrefix, v3.WithFirstCreate()...)
if err != nil {
return
}
@@ -154,7 +151,7 @@ func (e *Election) observe(ctx context.Context, ch chan<- v3.GetResponse) {
if len(resp.Kvs) == 0 {
// wait for first key put on prefix
opts := []v3.OpOption{v3.WithRev(resp.Header.Revision), v3.WithPrefix()}
wch := client.Watch(cctx, e.keyPrefix, opts...)
wch := e.client.Watch(cctx, e.keyPrefix, opts...)
for kv == nil {
wr, ok := <-wch
@@ -174,7 +171,7 @@ func (e *Election) observe(ctx context.Context, ch chan<- v3.GetResponse) {
kv = resp.Kvs[0]
}
wch := client.Watch(cctx, string(kv.Key), v3.WithRev(kv.ModRevision))
wch := e.client.Watch(cctx, string(kv.Key), v3.WithRev(kv.ModRevision))
keyDeleted := false
for !keyDeleted {
wr, ok := <-wch
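A minimal usage sketch of the client-based election API shown above (endpoints and names are illustrative):

package main

import (
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
	"golang.org/x/net/context"
)

func main() {
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	e := concurrency.NewElection(cli, "my-election")
	// Campaign blocks until this candidate is elected or the context ends
	if err := e.Campaign(context.TODO(), "candidate-1"); err != nil {
		log.Fatal(err)
	}
	log.Println("became leader")
	// step down explicitly instead of waiting for the session lease to expire
	if err := e.Resign(context.TODO()); err != nil {
		log.Fatal(err)
	}
}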

View File

@@ -16,6 +16,7 @@ package concurrency
import (
"fmt"
"math"
v3 "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/mvcc/mvccpb"
@@ -25,40 +26,46 @@ import (
func waitDelete(ctx context.Context, client *v3.Client, key string, rev int64) error {
cctx, cancel := context.WithCancel(ctx)
defer cancel()
var wr v3.WatchResponse
wch := client.Watch(cctx, key, v3.WithRev(rev))
for wr = range wch {
for wr := range wch {
for _, ev := range wr.Events {
if ev.Type == mvccpb.DELETE {
return nil
}
}
}
if err := wr.Err(); err != nil {
return err
}
if err := ctx.Err(); err != nil {
return err
}
return fmt.Errorf("lost watcher waiting for delete")
}
// waitDeletes efficiently waits until all keys matching the prefix and with a
// create revision no greater than maxCreateRev are deleted.
func waitDeletes(ctx context.Context, client *v3.Client, pfx string, maxCreateRev int64) error {
getOpts := append(v3.WithLastCreate(), v3.WithMaxCreateRev(maxCreateRev))
for {
resp, err := client.Get(ctx, pfx, getOpts...)
if err != nil {
return err
// waitDeletes efficiently waits until all keys matched by Get(key, opts...) are deleted
func waitDeletes(ctx context.Context, client *v3.Client, key string, opts ...v3.OpOption) error {
getOpts := []v3.OpOption{v3.WithSort(v3.SortByCreateRevision, v3.SortAscend)}
getOpts = append(getOpts, opts...)
resp, err := client.Get(ctx, key, getOpts...)
maxRev := int64(math.MaxInt64)
getOpts = append(getOpts, v3.WithRev(0))
for err == nil {
for len(resp.Kvs) > 0 {
i := len(resp.Kvs) - 1
if resp.Kvs[i].CreateRevision <= maxRev {
break
}
resp.Kvs = resp.Kvs[:i]
}
if len(resp.Kvs) == 0 {
return nil
break
}
lastKey := string(resp.Kvs[0].Key)
if err = waitDelete(ctx, client, lastKey, resp.Header.Revision); err != nil {
return err
lastKV := resp.Kvs[len(resp.Kvs)-1]
maxRev = lastKV.CreateRevision
err = waitDelete(ctx, client, string(lastKV.Key), maxRev)
if err != nil || len(resp.Kvs) == 1 {
break
}
getOpts = append(getOpts, v3.WithLimit(int64(len(resp.Kvs)-1)))
resp, err = client.Get(ctx, key, getOpts...)
}
return err
}
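The loop above makes the wait fair and cheap: each waiter blocks only on the key with the largest create revision below its own instead of polling the whole prefix. A hedged, self-contained restatement of that predecessor-wait pattern (simplified from the code above; not the exported API):

package example

import (
	v3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/mvcc/mvccpb"
	"golang.org/x/net/context"
)

// waitForPredecessors blocks until every queue key created before myRev
// is deleted, mirroring what waitDeletes does for Mutex and Election.
func waitForPredecessors(ctx context.Context, c *v3.Client, pfx string, myRev int64) error {
	for {
		resp, err := c.Get(ctx, pfx, v3.WithPrefix(),
			v3.WithSort(v3.SortByCreateRevision, v3.SortAscend))
		if err != nil {
			return err
		}
		kvs := resp.Kvs
		// drop our own key and anything created after it
		for len(kvs) > 0 && kvs[len(kvs)-1].CreateRevision >= myRev {
			kvs = kvs[:len(kvs)-1]
		}
		if len(kvs) == 0 {
			return nil // no predecessors left; our turn
		}
		// block on the youngest predecessor, then re-check the queue
		last := kvs[len(kvs)-1]
		wch := c.Watch(ctx, string(last.Key), v3.WithRev(resp.Header.Revision))
		deleted := false
		for !deleted {
			wr, ok := <-wch
			if !ok {
				return ctx.Err()
			}
			if err := wr.Err(); err != nil {
				return err
			}
			for _, ev := range wr.Events {
				if ev.Type == mvccpb.DELETE {
					deleted = true
				}
			}
		}
	}
}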

View File

@@ -24,22 +24,24 @@ import (
// Mutex implements the sync Locker interface with etcd
type Mutex struct {
s *Session
client *v3.Client
pfx string
myKey string
myRev int64
}
func NewMutex(s *Session, pfx string) *Mutex {
return &Mutex{s, pfx + "/", "", -1}
func NewMutex(client *v3.Client, pfx string) *Mutex {
return &Mutex{client, pfx + "/", "", -1}
}
// Lock locks the mutex with a cancellable context. If the context is cancelled
// while trying to acquire the lock, the mutex tries to clean its stale lock entry.
func (m *Mutex) Lock(ctx context.Context) error {
s := m.s
client := m.s.Client()
s, serr := NewSession(m.client)
if serr != nil {
return serr
}
m.myKey = fmt.Sprintf("%s%x", m.pfx, s.Lease())
cmp := v3.Compare(v3.CreateRevision(m.myKey), "=", 0)
@@ -47,7 +49,7 @@ func (m *Mutex) Lock(ctx context.Context) error {
put := v3.OpPut(m.myKey, "", v3.WithLease(s.Lease()))
// reuse key in case this session already holds the lock
get := v3.OpGet(m.myKey)
resp, err := client.Txn(ctx).If(cmp).Then(put).Else(get).Commit()
resp, err := m.client.Txn(ctx).If(cmp).Then(put).Else(get).Commit()
if err != nil {
return err
}
@@ -57,19 +59,18 @@ func (m *Mutex) Lock(ctx context.Context) error {
}
// wait for deletion revisions prior to myKey
err = waitDeletes(ctx, client, m.pfx, m.myRev-1)
err = waitDeletes(ctx, m.client, m.pfx, v3.WithPrefix(), v3.WithRev(m.myRev-1))
// release lock key if cancelled
select {
case <-ctx.Done():
m.Unlock(client.Ctx())
m.Unlock(m.client.Ctx())
default:
}
return err
}
func (m *Mutex) Unlock(ctx context.Context) error {
client := m.s.Client()
if _, err := client.Delete(ctx, m.myKey); err != nil {
if _, err := m.client.Delete(ctx, m.myKey); err != nil {
return err
}
m.myKey = "\x00"
@@ -86,19 +87,17 @@ func (m *Mutex) Key() string { return m.myKey }
type lockerMutex struct{ *Mutex }
func (lm *lockerMutex) Lock() {
client := lm.s.Client()
if err := lm.Mutex.Lock(client.Ctx()); err != nil {
if err := lm.Mutex.Lock(lm.client.Ctx()); err != nil {
panic(err)
}
}
func (lm *lockerMutex) Unlock() {
client := lm.s.Client()
if err := lm.Mutex.Unlock(client.Ctx()); err != nil {
if err := lm.Mutex.Unlock(lm.client.Ctx()); err != nil {
panic(err)
}
}
// NewLocker creates a sync.Locker backed by an etcd mutex.
func NewLocker(s *Session, pfx string) sync.Locker {
return &lockerMutex{NewMutex(s, pfx)}
func NewLocker(client *v3.Client, pfx string) sync.Locker {
return &lockerMutex{NewMutex(client, pfx)}
}
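A minimal usage sketch of the client-based mutex above (the lock prefix is illustrative):

package main

import (
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
	"golang.org/x/net/context"
)

func main() {
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	m := concurrency.NewMutex(cli, "locks/job")
	if err := m.Lock(context.TODO()); err != nil {
		log.Fatal(err)
	}
	defer m.Unlock(context.TODO())
	// critical section: at most one client holds "locks/job" at a time
	log.Println("holding", m.Key())
}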

View File

@@ -15,19 +15,26 @@
package concurrency
import (
"time"
"sync"
v3 "github.com/coreos/etcd/clientv3"
"golang.org/x/net/context"
)
const defaultSessionTTL = 60
// only keep one ephemeral lease per client
var clientSessions clientSessionMgr = clientSessionMgr{sessions: make(map[*v3.Client]*Session)}
const sessionTTL = 60
type clientSessionMgr struct {
sessions map[*v3.Client]*Session
mu sync.Mutex
}
// Session represents a lease kept alive for the lifetime of a client.
// Fault-tolerant applications may use sessions to reason about liveness.
type Session struct {
client *v3.Client
opts *sessionOptions
id v3.LeaseID
cancel context.CancelFunc
@@ -35,30 +42,37 @@ type Session struct {
}
// NewSession gets the leased session for a client.
func NewSession(client *v3.Client, opts ...SessionOption) (*Session, error) {
ops := &sessionOptions{ttl: defaultSessionTTL, ctx: client.Ctx()}
for _, opt := range opts {
opt(ops)
func NewSession(client *v3.Client) (*Session, error) {
clientSessions.mu.Lock()
defer clientSessions.mu.Unlock()
if s, ok := clientSessions.sessions[client]; ok {
return s, nil
}
resp, err := client.Grant(ops.ctx, int64(ops.ttl))
resp, err := client.Grant(client.Ctx(), sessionTTL)
if err != nil {
return nil, err
}
id := v3.LeaseID(resp.ID)
ctx, cancel := context.WithCancel(ops.ctx)
ctx, cancel := context.WithCancel(client.Ctx())
keepAlive, err := client.KeepAlive(ctx, id)
if err != nil || keepAlive == nil {
return nil, err
}
donec := make(chan struct{})
s := &Session{client: client, opts: ops, id: id, cancel: cancel, donec: donec}
s := &Session{client: client, id: id, cancel: cancel, donec: donec}
clientSessions.sessions[client] = s
// keep the lease alive until client error or cancelled context
go func() {
defer close(donec)
defer func() {
clientSessions.mu.Lock()
delete(clientSessions.sessions, client)
clientSessions.mu.Unlock()
close(donec)
}()
for range keepAlive {
// eat messages until keep alive channel closes
}
@@ -67,11 +81,6 @@ func NewSession(client *v3.Client, opts ...SessionOption) (*Session, error) {
return s, nil
}
// Client is the etcd client that is attached to the session.
func (s *Session) Client() *v3.Client {
return s.client
}
// Lease is the lease ID for keys bound to the session.
func (s *Session) Lease() v3.LeaseID { return s.id }
@@ -90,38 +99,6 @@ func (s *Session) Orphan() {
// Close orphans the session and revokes the session lease.
func (s *Session) Close() error {
s.Orphan()
// if revoke takes longer than the ttl, lease is expired anyway
ctx, cancel := context.WithTimeout(s.opts.ctx, time.Duration(s.opts.ttl)*time.Second)
_, err := s.client.Revoke(ctx, s.id)
cancel()
_, err := s.client.Revoke(s.client.Ctx(), s.id)
return err
}
type sessionOptions struct {
ttl int
ctx context.Context
}
// SessionOption configures Session.
type SessionOption func(*sessionOptions)
// WithTTL configures the session's TTL in seconds.
// If TTL is <= 0, the default 60 seconds TTL will be used.
func WithTTL(ttl int) SessionOption {
return func(so *sessionOptions) {
if ttl > 0 {
so.ttl = ttl
}
}
}
// WithContext assigns a context to the session instead of defaulting to
// using the client context. This is useful for canceling NewSession and
// Close operations immediately without having to close the client. If the
// context is canceled before Close() completes, the session's lease will be
// abandoned and left to expire instead of being revoked.
func WithContext(ctx context.Context) SessionOption {
return func(so *sessionOptions) {
so.ctx = ctx
}
}
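Both sides of this diff support the same liveness pattern: bind keys to the session's lease so they disappear if the client dies and its keepalives stop. A minimal sketch (key names are illustrative):

package main

import (
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
	"golang.org/x/net/context"
)

func main() {
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	s, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close() // revokes the lease, deleting any attached keys

	// the key lives only as long as the session's lease is kept alive
	if _, err := cli.Put(context.TODO(), "workers/w1", "alive",
		v3.WithLease(s.Lease())); err != nil {
		log.Fatal(err)
	}
}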

View File

@@ -28,16 +28,15 @@ type Config struct {
// Endpoints is a list of URLs
Endpoints []string
// AutoSyncInterval is the interval at which the client updates its endpoints with the cluster's latest members.
// 0 disables auto-sync. By default auto-sync is disabled.
AutoSyncInterval time.Duration
// DialTimeout is the timeout for failing to establish a connection.
DialTimeout time.Duration
// TLS holds the client secure credentials, if any.
TLS *tls.Config
// Logger is the logger used by the client library.
Logger Logger
// Username is a username for authentication
Username string
@@ -47,7 +46,6 @@ type Config struct {
type yamlConfig struct {
Endpoints []string `json:"endpoints"`
AutoSyncInterval time.Duration `json:"auto-sync-interval"`
DialTimeout time.Duration `json:"dial-timeout"`
InsecureTransport bool `json:"insecure-transport"`
InsecureSkipTLSVerify bool `json:"insecure-skip-tls-verify"`
@@ -70,9 +68,8 @@ func configFromFile(fpath string) (*Config, error) {
}
cfg := &Config{
Endpoints: yc.Endpoints,
AutoSyncInterval: yc.AutoSyncInterval,
DialTimeout: yc.DialTimeout,
Endpoints: yc.Endpoints,
DialTimeout: yc.DialTimeout,
}
if yc.InsecureTransport {

View File

@@ -44,7 +44,7 @@
// etcd client returns 2 types of errors:
//
// 1. context error: canceled or deadline exceeded.
// 2. gRPC error: see https://github.com/coreos/etcd/blob/master/etcdserver/api/v3rpc/rpctypes/error.go
// 2. gRPC error: see https://github.com/coreos/etcd/blob/master/etcdserver/api/v3rpc/error.go.
//
// Here is the example code to handle client errors:
//
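The example the doc comment points to is elided by the diff context; a hedged sketch of the pattern it describes, separating local context errors from server-side rpctypes errors (not the verbatim example):

package example

import (
	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
	"golang.org/x/net/context"
)

func classify(ctx context.Context, cli *clientv3.Client) string {
	_, err := cli.Put(ctx, "", "bar") // an empty key is rejected by the server
	switch {
	case err == nil:
		return "ok"
	case err == context.Canceled, err == context.DeadlineExceeded:
		return "local context error: canceled or timed out"
	case err == rpctypes.ErrEmptyKey:
		return "etcdserver error recognized by the client"
	default:
		return "other gRPC/transport error"
	}
}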

View File

@@ -1,46 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clientv3_test
import (
"fmt"
"io/ioutil"
"log"
"net/http"
"github.com/prometheus/client_golang/prometheus"
)
func ExampleMetrics_All() {
// listen for all prometheus metrics
go func() {
http.Handle("/metrics", prometheus.Handler())
log.Fatal(http.ListenAndServe(":47989", nil))
}()
url := "http://localhost:47989/metrics"
// make an http request to fetch all prometheus metrics
resp, err := http.Get(url)
if err != nil {
log.Fatalf("fetch error: %v", err)
}
b, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
log.Fatalf("fetch error: reading %s: %v", url, err)
}
fmt.Printf("%s", b)
}

View File

@@ -19,8 +19,6 @@ import (
"time"
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/pkg/transport"
"github.com/coreos/pkg/capnslog"
"golang.org/x/net/context"
)
@@ -31,9 +29,6 @@ var (
)
func Example() {
var plog = capnslog.NewPackageLogger("github.com/coreos/etcd", "clientv3")
clientv3.SetLogger(plog)
cli, err := clientv3.New(clientv3.Config{
Endpoints: endpoints,
DialTimeout: dialTimeout,
@@ -48,29 +43,3 @@ func Example() {
log.Fatal(err)
}
}
func ExampleConfig_withTLS() {
tlsInfo := transport.TLSInfo{
CertFile: "/tmp/test-certs/test-name-1.pem",
KeyFile: "/tmp/test-certs/test-name-1-key.pem",
TrustedCAFile: "/tmp/test-certs/trusted-ca.pem",
}
tlsConfig, err := tlsInfo.ClientConfig()
if err != nil {
log.Fatal(err)
}
cli, err := clientv3.New(clientv3.Config{
Endpoints: endpoints,
DialTimeout: dialTimeout,
TLS: tlsConfig,
})
if err != nil {
log.Fatal(err)
}
defer cli.Close() // make sure to close the client
_, err = cli.Put(context.TODO(), "foo", "bar")
if err != nil {
log.Fatal(err)
}
}

View File

@@ -1,60 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package integration
import (
"math/rand"
"testing"
"time"
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
"golang.org/x/net/context"
)
// TestDialSetEndpoints ensures SetEndpoints can replace unavailable endpoints with available ones.
func TestDialSetEndpoints(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
// get endpoint list
eps := make([]string, 3)
for i := range eps {
eps[i] = clus.Members[i].GRPCAddr()
}
toKill := rand.Intn(len(eps))
cfg := clientv3.Config{Endpoints: []string{eps[toKill]}, DialTimeout: 1 * time.Second}
cli, err := clientv3.New(cfg)
if err != nil {
t.Fatal(err)
}
defer cli.Close()
// make a dead node
clus.Members[toKill].Stop(t)
clus.WaitLeader(t)
// update client with available endpoints
cli.SetEndpoints(eps[(toKill+1)%3])
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
if _, err = cli.Get(ctx, "foo", clientv3.WithSerializable()); err != nil {
t.Fatal(err)
}
cancel()
}

View File

@@ -17,7 +17,6 @@ package integration
import (
"bytes"
"math/rand"
"os"
"reflect"
"strings"
"testing"
@@ -36,8 +35,8 @@ func TestKVPutError(t *testing.T) {
defer testutil.AfterTest(t)
var (
maxReqBytes = 1.5 * 1024 * 1024 // hard coded max in v3_server.go
quota = int64(int(maxReqBytes) + 8*os.Getpagesize())
maxReqBytes = 1.5 * 1024 * 1024
quota = int64(maxReqBytes * 1.2)
)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1, QuotaBackendBytes: quota})
defer clus.Terminate(t)
@@ -50,7 +49,7 @@ func TestKVPutError(t *testing.T) {
t.Fatalf("expected %v, got %v", rpctypes.ErrEmptyKey, err)
}
_, err = kv.Put(ctx, "key", strings.Repeat("a", int(maxReqBytes+100)))
_, err = kv.Put(ctx, "key", strings.Repeat("a", int(maxReqBytes+100))) // 1.5MB
if err != rpctypes.ErrRequestTooLarge {
t.Fatalf("expected %v, got %v", rpctypes.ErrRequestTooLarge, err)
}
@@ -60,7 +59,7 @@ func TestKVPutError(t *testing.T) {
t.Fatal(err)
}
time.Sleep(1 * time.Second) // give enough time for commit
time.Sleep(500 * time.Millisecond) // give enough time for commit
_, err = kv.Put(ctx, "foo2", strings.Repeat("a", int(maxReqBytes-50)))
if err != rpctypes.ErrNoSpace { // over quota
@@ -226,21 +225,6 @@ func TestKVRange(t *testing.T) {
{Key: []byte("fop"), Value: nil, CreateRevision: 9, ModRevision: 9, Version: 1},
},
},
// range all with SortByKey, missing sorting order (ASCEND by default)
{
"a", "x",
0,
[]clientv3.OpOption{clientv3.WithSort(clientv3.SortByKey, clientv3.SortNone)},
[]*mvccpb.KeyValue{
{Key: []byte("a"), Value: nil, CreateRevision: 2, ModRevision: 2, Version: 1},
{Key: []byte("b"), Value: nil, CreateRevision: 3, ModRevision: 3, Version: 1},
{Key: []byte("c"), Value: nil, CreateRevision: 4, ModRevision: 6, Version: 3},
{Key: []byte("foo"), Value: nil, CreateRevision: 7, ModRevision: 7, Version: 1},
{Key: []byte("foo/abc"), Value: nil, CreateRevision: 8, ModRevision: 8, Version: 1},
{Key: []byte("fop"), Value: nil, CreateRevision: 9, ModRevision: 9, Version: 1},
},
},
// range all with SortByCreateRevision, SortDescend
{
"a", "x",
@@ -256,21 +240,6 @@ func TestKVRange(t *testing.T) {
{Key: []byte("a"), Value: nil, CreateRevision: 2, ModRevision: 2, Version: 1},
},
},
// range all with SortByCreateRevision, missing sorting order (ASCEND by default)
{
"a", "x",
0,
[]clientv3.OpOption{clientv3.WithSort(clientv3.SortByCreateRevision, clientv3.SortNone)},
[]*mvccpb.KeyValue{
{Key: []byte("a"), Value: nil, CreateRevision: 2, ModRevision: 2, Version: 1},
{Key: []byte("b"), Value: nil, CreateRevision: 3, ModRevision: 3, Version: 1},
{Key: []byte("c"), Value: nil, CreateRevision: 4, ModRevision: 6, Version: 3},
{Key: []byte("foo"), Value: nil, CreateRevision: 7, ModRevision: 7, Version: 1},
{Key: []byte("foo/abc"), Value: nil, CreateRevision: 8, ModRevision: 8, Version: 1},
{Key: []byte("fop"), Value: nil, CreateRevision: 9, ModRevision: 9, Version: 1},
},
},
// range all with SortByModRevision, SortDescend
{
"a", "x",
@@ -679,47 +648,16 @@ func TestKVGetCancel(t *testing.T) {
}
}
// TestKVGetStoppedServerAndClose ensures closing after a failed Get works.
func TestKVGetStoppedServerAndClose(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
cli := clus.Client(0)
clus.Members[0].Stop(t)
ctx, cancel := context.WithTimeout(context.TODO(), time.Second)
// this Get fails and triggers an asynchronous connection retry
_, err := cli.Get(ctx, "abc")
cancel()
if !strings.Contains(err.Error(), "context deadline") {
t.Fatal(err)
}
}
// TestKVPutStoppedServerAndClose ensures closing after a failed Put works.
func TestKVPutStoppedServerAndClose(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
cli := clus.Client(0)
clus.Members[0].Stop(t)
ctx, cancel := context.WithTimeout(context.TODO(), time.Second)
// get retries on all errors.
// so here we use it to eat the potential broken pipe error for the next put.
// grpc client might see a broken pipe error when we issue the get request before
// grpc finds out the original connection is down due to the member shutdown.
_, err := cli.Get(ctx, "abc")
cancel()
if !strings.Contains(err.Error(), "context deadline") {
t.Fatal(err)
}
// this Put fails and triggers an asynchronous connection retry
_, err = cli.Put(ctx, "abc", "123")
_, err := cli.Put(ctx, "abc", "123")
cancel()
if !strings.Contains(err.Error(), "context deadline") {
t.Fatal(err)

View File

@@ -15,8 +15,6 @@
package integration
import (
"reflect"
"sort"
"testing"
"time"
@@ -458,55 +456,45 @@ func TestLeaseKeepAliveTTLTimeout(t *testing.T) {
clus.Members[0].Restart(t)
}
func TestLeaseTimeToLive(t *testing.T) {
// TestLeaseRenewLostQuorum ensures keepalives work after losing quorum
// for a while.
func TestLeaseRenewLostQuorum(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
lapi := clientv3.NewLease(clus.RandClient())
defer lapi.Close()
resp, err := lapi.Grant(context.Background(), 10)
cli := clus.Client(0)
r, err := cli.Grant(context.TODO(), 4)
if err != nil {
t.Errorf("failed to create lease %v", err)
t.Fatal(err)
}
kv := clientv3.NewKV(clus.RandClient())
keys := []string{"foo1", "foo2"}
for i := range keys {
if _, err = kv.Put(context.TODO(), keys[i], "bar", clientv3.WithLease(resp.ID)); err != nil {
t.Fatal(err)
kctx, kcancel := context.WithCancel(context.Background())
defer kcancel()
ka, err := cli.KeepAlive(kctx, r.ID)
if err != nil {
t.Fatal(err)
}
// consume first keepalive so next message sends when cluster is down
<-ka
// force keepalive stream message to timeout
clus.Members[1].Stop(t)
clus.Members[2].Stop(t)
// Use TTL-1 since the client closes the keepalive channel if no
// keepalive arrives before the lease deadline.
// The cluster has 1 second to recover and reply to the keepalive.
time.Sleep(time.Duration(r.TTL-1) * time.Second)
clus.Members[1].Restart(t)
clus.Members[2].Restart(t)
select {
case _, ok := <-ka:
if !ok {
t.Fatalf("keepalive closed")
}
}
lresp, lerr := lapi.TimeToLive(context.Background(), resp.ID, clientv3.WithAttachedKeys())
if lerr != nil {
t.Fatal(lerr)
}
if lresp.ID != resp.ID {
t.Fatalf("leaseID expected %d, got %d", resp.ID, lresp.ID)
}
if lresp.GrantedTTL != int64(10) {
t.Fatalf("GrantedTTL expected %d, got %d", 10, lresp.GrantedTTL)
}
if lresp.TTL == 0 || lresp.TTL > lresp.GrantedTTL {
t.Fatalf("unexpected TTL %d (granted %d)", lresp.TTL, lresp.GrantedTTL)
}
ks := make([]string, len(lresp.Keys))
for i := range lresp.Keys {
ks[i] = string(lresp.Keys[i])
}
sort.Strings(ks)
if !reflect.DeepEqual(ks, keys) {
t.Fatalf("keys expected %v, got %v", keys, ks)
}
lresp, lerr = lapi.TimeToLive(context.Background(), resp.ID)
if lerr != nil {
t.Fatal(lerr)
}
if len(lresp.Keys) != 0 {
t.Fatalf("unexpected keys %+v", lresp.Keys)
case <-time.After(time.Duration(r.TTL) * time.Second):
t.Fatalf("timed out waiting for keepalive")
}
}

View File

@@ -1,21 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package integration
import "github.com/coreos/pkg/capnslog"
func init() {
capnslog.SetGlobalLogLevel(capnslog.INFO)
}

View File

@@ -1,169 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package integration
import (
"bufio"
"io"
"net"
"net/http"
"strconv"
"strings"
"testing"
"time"
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
"github.com/coreos/etcd/pkg/transport"
"github.com/prometheus/client_golang/prometheus"
"golang.org/x/net/context"
)
func TestV3ClientMetrics(t *testing.T) {
defer testutil.AfterTest(t)
var (
addr string = "localhost:27989"
ln net.Listener
err error
)
// listen for all prometheus metrics
donec := make(chan struct{})
go func() {
defer close(donec)
srv := &http.Server{Handler: prometheus.Handler()}
srv.SetKeepAlivesEnabled(false)
ln, err = transport.NewUnixListener(addr)
if err != nil {
t.Fatalf("Error: %v occurred while listening on addr: %v", err, addr)
}
err = srv.Serve(ln)
if err != nil && !strings.Contains(err.Error(), "use of closed network connection") {
t.Fatalf("Err serving http requests: %v", err)
}
}()
url := "unix://" + addr + "/metrics"
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
client := clus.Client(0)
w := clientv3.NewWatcher(client)
defer w.Close()
kv := clientv3.NewKV(client)
wc := w.Watch(context.Background(), "foo")
wBefore := sumCountersForMetricAndLabels(t, url, "grpc_client_msg_received_total", "Watch", "bidi_stream")
pBefore := sumCountersForMetricAndLabels(t, url, "grpc_client_started_total", "Put", "unary")
_, err = kv.Put(context.Background(), "foo", "bar")
if err != nil {
t.Errorf("Error putting value in key store")
}
pAfter := sumCountersForMetricAndLabels(t, url, "grpc_client_started_total", "Put", "unary")
if pBefore+1 != pAfter {
t.Errorf("grpc_client_started_total expected %d, got %d", 1, pAfter-pBefore)
}
// consume watch response
select {
case <-wc:
case <-time.After(10 * time.Second):
t.Error("Timeout occurred for getting watch response")
}
wAfter := sumCountersForMetricAndLabels(t, url, "grpc_client_msg_received_total", "Watch", "bidi_stream")
if wBefore+1 != wAfter {
t.Errorf("grpc_client_msg_received_total expected %d, got %d", 1, wAfter-wBefore)
}
ln.Close()
<-donec
}
func sumCountersForMetricAndLabels(t *testing.T, url string, metricName string, matchingLabelValues ...string) int {
count := 0
for _, line := range getHTTPBodyAsLines(t, url) {
ok := true
if !strings.HasPrefix(line, metricName) {
continue
}
for _, labelValue := range matchingLabelValues {
if !strings.Contains(line, `"`+labelValue+`"`) {
ok = false
break
}
}
if !ok {
continue
}
valueString := line[strings.LastIndex(line, " ")+1 : len(line)-1]
valueFloat, err := strconv.ParseFloat(valueString, 32)
if err != nil {
t.Fatalf("failed parsing value for line: %v and matchingLabelValues: %v", line, matchingLabelValues)
}
count += int(valueFloat)
}
return count
}
func getHTTPBodyAsLines(t *testing.T, url string) []string {
cfgtls := transport.TLSInfo{}
tr, err := transport.NewTransport(cfgtls, time.Second)
if err != nil {
t.Fatalf("Error getting transport: %v", err)
}
tr.MaxIdleConns = -1
tr.DisableKeepAlives = true
cli := &http.Client{Transport: tr}
resp, err := cli.Get(url)
if err != nil {
t.Fatalf("Error fetching: %v", err)
}
reader := bufio.NewReader(resp.Body)
lines := []string{}
for {
line, err := reader.ReadString('\n')
if err != nil {
if err == io.EOF {
break
} else {
t.Fatalf("error reading: %v", err)
}
}
lines = append(lines, line)
}
resp.Body.Close()
return lines
}

View File

@@ -73,7 +73,6 @@ func TestTxnWriteFail(t *testing.T) {
}()
go func() {
defer close(getc)
select {
case <-time.After(5 * time.Second):
t.Fatalf("timed out waiting for txn fail")
@@ -87,10 +86,11 @@ func TestTxnWriteFail(t *testing.T) {
if len(gresp.Kvs) != 0 {
t.Fatalf("expected no keys, got %v", gresp.Kvs)
}
close(getc)
}()
select {
case <-time.After(2 * clus.Members[1].ServerConfig.ReqTimeout()):
case <-time.After(5 * time.Second):
t.Fatalf("timed out waiting for get")
case <-getc:
}
@@ -125,7 +125,7 @@ func TestTxnReadRetry(t *testing.T) {
clus.Members[0].Restart(t)
select {
case <-donec:
case <-time.After(2 * clus.Members[1].ServerConfig.ReqTimeout()):
case <-time.After(5 * time.Second):
t.Fatalf("waited too long")
}
}

View File

@@ -211,14 +211,6 @@ func testWatchReconnRequest(t *testing.T, wctx *watchctx) {
stopc <- struct{}{}
<-donec
// spinning on dropping connections may trigger a leader election
// due to resource starvation; l-read to ensure the cluster is stable
ctx, cancel := context.WithTimeout(context.TODO(), 30*time.Second)
if _, err := wctx.kv.Get(ctx, "_"); err != nil {
t.Fatal(err)
}
cancel()
// ensure watcher works
putAndWatch(t, wctx, "a", "a")
}
@@ -349,9 +341,6 @@ func putAndWatch(t *testing.T, wctx *watchctx, key, val string) {
// TestWatchResumeCompacted checks that the watcher gracefully closes when it
// tries to resume to a revision that's been compacted out of the store.
// Since the watcher's server restarts with stale data, the watcher will receive
// either a compaction error or all keys by staying in sync before the compaction
// is finally applied.
func TestWatchResumeCompacted(t *testing.T) {
defer testutil.AfterTest(t)
@@ -380,9 +369,8 @@ func TestWatchResumeCompacted(t *testing.T) {
}
// put some data and compact away
numPuts := 5
kv := clientv3.NewKV(clus.Client(1))
for i := 0; i < numPuts; i++ {
for i := 0; i < 5; i++ {
if _, err := kv.Put(context.TODO(), "foo", "bar"); err != nil {
t.Fatal(err)
}
@@ -393,48 +381,17 @@ func TestWatchResumeCompacted(t *testing.T) {
clus.Members[0].Restart(t)
// since watch's server isn't guaranteed to be synced with the cluster when
// the watch resumes, there is a window where the watch can stay synced and
// read off all events; if the watcher misses the window, it will go out of
// sync and get a compaction error.
wRev := int64(2)
for int(wRev) <= numPuts+1 {
var wresp clientv3.WatchResponse
var ok bool
select {
case wresp, ok = <-wch:
if !ok {
t.Fatalf("expected wresp, but got closed channel")
}
case <-time.After(5 * time.Second):
t.Fatalf("compacted watch timed out")
}
for _, ev := range wresp.Events {
if ev.Kv.ModRevision != wRev {
t.Fatalf("expected modRev %v, got %+v", wRev, ev)
}
wRev++
}
if wresp.Err() == nil {
continue
}
if wresp.Err() != rpctypes.ErrCompacted {
t.Fatalf("wresp.Err() expected %v, but got %v %+v", rpctypes.ErrCompacted, wresp.Err())
}
break
// get compacted error message
wresp, ok := <-wch
if !ok {
t.Fatalf("expected wresp, but got closed channel")
}
if int(wRev) > numPuts+1 {
// got data faster than the compaction
return
if wresp.Err() != rpctypes.ErrCompacted {
t.Fatalf("wresp.Err() expected %v, but got %v", rpctypes.ErrCompacted, wresp.Err())
}
// received compaction error; ensure the channel closes
select {
case wresp, ok := <-wch:
if ok {
t.Fatalf("expected closed channel, but got %v", wresp)
}
case <-time.After(5 * time.Second):
t.Fatalf("timed out waiting for channel close")
// ensure the channel is closed
if wresp, ok = <-wch; ok {
t.Fatalf("expected closed channel, but got %v", wresp)
}
}
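Outside tests, a watcher that resumes from an old revision can recover from ErrCompacted by restarting at the compact revision the server reports. A hedged sketch (key name illustrative; assumes WatchResponse.CompactRevision is populated on compaction errors):

package example

import (
	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
	"golang.org/x/net/context"
)

// watchFrom follows key starting at rev, resuming past compactions.
func watchFrom(ctx context.Context, cli *clientv3.Client, key string, rev int64) error {
	for ctx.Err() == nil {
		wch := cli.Watch(ctx, key, clientv3.WithRev(rev))
		for wr := range wch {
			if wr.Err() == rpctypes.ErrCompacted {
				rev = wr.CompactRevision // oldest revision still available
				break                    // channel closes; re-watch from rev
			}
			if err := wr.Err(); err != nil {
				return err
			}
			for _, ev := range wr.Events {
				rev = ev.Kv.ModRevision + 1 // resume after the last seen event
			}
		}
	}
	return ctx.Err()
}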
@@ -614,19 +571,18 @@ func TestWatchErrConnClosed(t *testing.T) {
defer clus.Terminate(t)
cli := clus.Client(0)
defer cli.Close()
wc := clientv3.NewWatcher(cli)
donec := make(chan struct{})
go func() {
defer close(donec)
ch := wc.Watch(context.TODO(), "foo")
if wr := <-ch; grpc.ErrorDesc(wr.Err()) != grpc.ErrClientConnClosing.Error() {
t.Fatalf("expected %v, got %v", grpc.ErrClientConnClosing, grpc.ErrorDesc(wr.Err()))
wc.Watch(context.TODO(), "foo")
if err := wc.Close(); err != nil && err != grpc.ErrClientConnClosing {
t.Fatalf("expected %v, got %v", grpc.ErrClientConnClosing, err)
}
}()
if err := cli.ActiveConnection().Close(); err != nil {
if err := cli.Close(); err != nil {
t.Fatal(err)
}
clus.TakeClient(0)
@@ -673,12 +629,8 @@ func TestWatchWithRequireLeader(t *testing.T) {
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
defer clus.Terminate(t)
// Put a key for the non-require leader watch to read as an event.
// The watchers will be on member[0]; put key through member[0] to
// ensure that it receives the update so watching after killing quorum
// is guaranteed to have the key.
liveClient := clus.Client(0)
if _, err := liveClient.Put(context.TODO(), "foo", "bar"); err != nil {
// something for the non-require leader watch to read as an event
if _, err := clus.Client(1).Put(context.TODO(), "foo", "bar"); err != nil {
t.Fatal(err)
}
@@ -693,8 +645,8 @@ func TestWatchWithRequireLeader(t *testing.T) {
tickDuration := 10 * time.Millisecond
time.Sleep(time.Duration(3*clus.Members[0].ElectionTicks) * tickDuration)
chLeader := liveClient.Watch(clientv3.WithRequireLeader(context.TODO()), "foo", clientv3.WithRev(1))
chNoLeader := liveClient.Watch(context.TODO(), "foo", clientv3.WithRev(1))
chLeader := clus.Client(0).Watch(clientv3.WithRequireLeader(context.TODO()), "foo", clientv3.WithRev(1))
chNoLeader := clus.Client(0).Watch(context.TODO(), "foo", clientv3.WithRev(1))
select {
case resp, ok := <-chLeader:
@@ -722,114 +674,6 @@ func TestWatchWithRequireLeader(t *testing.T) {
}
}
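For contrast with the test above, a minimal non-test sketch of a require-leader watch: when the serving member loses its leader, the watch surfaces an error quickly instead of waiting silently (key name illustrative):

package example

import (
	"log"

	"github.com/coreos/etcd/clientv3"
	"golang.org/x/net/context"
)

func watchRequireLeader(cli *clientv3.Client) {
	ctx := clientv3.WithRequireLeader(context.Background())
	for wr := range cli.Watch(ctx, "foo") {
		if err := wr.Err(); err != nil {
			// e.g. ErrNoLeader once the member is partitioned from the leader
			log.Println("watch ended:", err)
			return
		}
		for _, ev := range wr.Events {
			log.Printf("%s %q -> %q", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}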
// TestWatchWithFilter checks that watch filtering works.
func TestWatchWithFilter(t *testing.T) {
cluster := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(t)
client := cluster.RandClient()
ctx := context.Background()
wcNoPut := client.Watch(ctx, "a", clientv3.WithFilterPut())
wcNoDel := client.Watch(ctx, "a", clientv3.WithFilterDelete())
if _, err := client.Put(ctx, "a", "abc"); err != nil {
t.Fatal(err)
}
if _, err := client.Delete(ctx, "a"); err != nil {
t.Fatal(err)
}
npResp := <-wcNoPut
if len(npResp.Events) != 1 || npResp.Events[0].Type != clientv3.EventTypeDelete {
t.Fatalf("expected delete event, got %+v", npResp.Events)
}
ndResp := <-wcNoDel
if len(ndResp.Events) != 1 || ndResp.Events[0].Type != clientv3.EventTypePut {
t.Fatalf("expected put event, got %+v", ndResp.Events)
}
select {
case resp := <-wcNoPut:
t.Fatalf("unexpected event on filtered put (%+v)", resp)
case resp := <-wcNoDel:
t.Fatalf("unexpected event on filtered delete (%+v)", resp)
case <-time.After(100 * time.Millisecond):
}
}
// TestWatchWithCreatedNotification checks that createdNotification works.
func TestWatchWithCreatedNotification(t *testing.T) {
cluster := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(t)
client := cluster.RandClient()
ctx := context.Background()
createC := client.Watch(ctx, "a", clientv3.WithCreatedNotify())
resp := <-createC
if !resp.Created {
t.Fatalf("expected created event, got %v", resp)
}
}
// TestWatchWithCreatedNotificationDropConn ensures that
// a watcher with created notify does not post duplicate
// created events from disconnect.
func TestWatchWithCreatedNotificationDropConn(t *testing.T) {
cluster := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(t)
client := cluster.RandClient()
wch := client.Watch(context.Background(), "a", clientv3.WithCreatedNotify())
resp := <-wch
if !resp.Created {
t.Fatalf("expected created event, got %v", resp)
}
cluster.Members[0].DropConnections()
// try to receive from watch channel again
// ensure it doesn't post another createNotify
select {
case wresp := <-wch:
t.Fatalf("got unexpected watch response: %+v\n", wresp)
case <-time.After(time.Second):
// watcher may not reconnect by the time it hits the select,
// so it wouldn't have a chance to filter out the second create event
}
}
// TestWatchCancelOnServer ensures client watcher cancels propagate back to the server.
func TestWatchCancelOnServer(t *testing.T) {
cluster := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer cluster.Terminate(t)
client := cluster.RandClient()
for i := 0; i < 10; i++ {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
client.Watch(ctx, "a", clientv3.WithCreatedNotify())
cancel()
}
// wait for cancels to propagate
time.Sleep(time.Second)
watchers, err := cluster.Members[0].Metric("etcd_debugging_mvcc_watcher_total")
if err != nil {
t.Fatal(err)
}
if watchers != "0" {
t.Fatalf("expected 0 watchers, got %q", watchers)
}
}
// TestWatchOverlapContextCancel stresses the watcher stream teardown path by
// creating/canceling watchers to ensure that new watchers are not taken down
// by a torn down watch stream. The sort of race that's being detected:
@@ -939,29 +783,6 @@ func TestWatchCancelAndCloseClient(t *testing.T) {
clus.TakeClient(0)
}
// TestWatchStressResumeClose establishes a bunch of watchers, disconnects
// to put them in resuming mode, cancels them so that some in-flight resumes
// fail, then closes the watcher interface to ensure correct cleanup.
func TestWatchStressResumeClose(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
cli := clus.Client(0)
ctx, cancel := context.WithCancel(context.Background())
// add more watches than can be resumed before the cancel
wchs := make([]clientv3.WatchChan, 2000)
for i := range wchs {
wchs[i] = cli.Watch(ctx, "abc")
}
clus.Members[0].DropConnections()
cancel()
if err := cli.Close(); err != nil {
t.Fatal(err)
}
clus.TakeClient(0)
}
// TestWatchCancelDisconnected ensures canceling a watcher works when
// its grpc stream is disconnected / reconnecting.
func TestWatchCancelDisconnected(t *testing.T) {

View File

@@ -85,10 +85,6 @@ func NewKV(c *Client) KV {
return &kv{remote: RetryKVClient(c)}
}
func NewKVFromKVClient(remote pb.KVClient) KV {
return &kv{remote: remote}
}
func (kv *kv) Put(ctx context.Context, key, val string, opts ...OpOption) (*PutResponse, error) {
r, err := kv.Do(ctx, OpPut(key, val, opts...))
return r.put, toErr(ctx, err)
@@ -105,7 +101,7 @@ func (kv *kv) Delete(ctx context.Context, key string, opts ...OpOption) (*Delete
}
func (kv *kv) Compact(ctx context.Context, rev int64, opts ...CompactOption) (*CompactResponse, error) {
resp, err := kv.remote.Compact(ctx, OpCompact(rev, opts...).toRequest())
resp, err := kv.remote.Compact(ctx, OpCompact(rev, opts...).toRequest(), grpc.FailFast(false))
if err != nil {
return nil, toErr(ctx, err)
}
@@ -125,7 +121,6 @@ func (kv *kv) Do(ctx context.Context, op Op) (OpResponse, error) {
if err == nil {
return resp, nil
}
if isHaltErr(ctx, err) {
return resp, toErr(ctx, err)
}
@@ -142,7 +137,21 @@ func (kv *kv) do(ctx context.Context, op Op) (OpResponse, error) {
// TODO: handle other ops
case tRange:
var resp *pb.RangeResponse
resp, err = kv.remote.Range(ctx, op.toRangeRequest(), grpc.FailFast(false))
r := &pb.RangeRequest{
Key: op.key,
RangeEnd: op.end,
Limit: op.limit,
Revision: op.rev,
Serializable: op.serializable,
KeysOnly: op.keysOnly,
CountOnly: op.countOnly,
}
if op.sort != nil {
r.SortOrder = pb.RangeRequest_SortOrder(op.sort.Order)
r.SortTarget = pb.RangeRequest_SortTarget(op.sort.Target)
}
resp, err = kv.remote.Range(ctx, r, grpc.FailFast(false))
if err == nil {
return OpResponse{get: (*GetResponse)(resp)}, nil
}

View File

@@ -44,21 +44,6 @@ type LeaseKeepAliveResponse struct {
TTL int64
}
// LeaseTimeToLiveResponse is used to convert the protobuf lease timetolive response.
type LeaseTimeToLiveResponse struct {
*pb.ResponseHeader
ID LeaseID `json:"id"`
// TTL is the remaining TTL in seconds for the lease; the lease will expire in under TTL+1 seconds.
TTL int64 `json:"ttl"`
// GrantedTTL is the initial granted time in seconds upon lease creation/renewal.
GrantedTTL int64 `json:"granted-ttl"`
// Keys is the list of keys attached to this lease.
Keys [][]byte `json:"keys"`
}
const (
// defaultTTL is the assumed lease TTL used for the first keepalive
// deadline before the actual TTL is known to the client.
@@ -76,9 +61,6 @@ type Lease interface {
// Revoke revokes the given lease.
Revoke(ctx context.Context, id LeaseID) (*LeaseRevokeResponse, error)
// TimeToLive retrieves the lease information of the given lease ID.
TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption) (*LeaseTimeToLiveResponse, error)
// KeepAlive keeps the given lease alive forever.
KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error)
@@ -159,10 +141,7 @@ func (l *lessor) Grant(ctx context.Context, ttl int64) (*LeaseGrantResponse, err
return gresp, nil
}
if isHaltErr(cctx, err) {
return nil, toErr(cctx, err)
}
if nerr := l.newStream(); nerr != nil {
return nil, nerr
return nil, toErr(ctx, err)
}
}
}
@@ -182,33 +161,6 @@ func (l *lessor) Revoke(ctx context.Context, id LeaseID) (*LeaseRevokeResponse,
if isHaltErr(ctx, err) {
return nil, toErr(ctx, err)
}
if nerr := l.newStream(); nerr != nil {
return nil, nerr
}
}
}
func (l *lessor) TimeToLive(ctx context.Context, id LeaseID, opts ...LeaseOption) (*LeaseTimeToLiveResponse, error) {
cctx, cancel := context.WithCancel(ctx)
done := cancelWhenStop(cancel, l.stopCtx.Done())
defer close(done)
for {
r := toLeaseTimeToLiveRequest(id, opts...)
resp, err := l.remote.LeaseTimeToLive(cctx, r, grpc.FailFast(false))
if err == nil {
gresp := &LeaseTimeToLiveResponse{
ResponseHeader: resp.GetHeader(),
ID: LeaseID(resp.ID),
TTL: resp.TTL,
GrantedTTL: resp.GrantedTTL,
Keys: resp.Keys,
}
return gresp, nil
}
if isHaltErr(cctx, err) {
return nil, toErr(cctx, err)
}
}
}
@@ -255,10 +207,6 @@ func (l *lessor) KeepAliveOnce(ctx context.Context, id LeaseID) (*LeaseKeepAlive
if isHaltErr(ctx, err) {
return nil, toErr(ctx, err)
}
if nerr := l.newStream(); nerr != nil {
return nil, nerr
}
}
}
@@ -354,10 +302,23 @@ func (l *lessor) recvKeepAliveLoop() {
// resetRecv opens a new lease stream and starts sending LeaseKeepAliveRequests
func (l *lessor) resetRecv() (pb.Lease_LeaseKeepAliveClient, error) {
if err := l.newStream(); err != nil {
sctx, cancel := context.WithCancel(l.stopCtx)
stream, err := l.remote.LeaseKeepAlive(sctx, grpc.FailFast(false))
if err = toErr(sctx, err); err != nil {
cancel()
return nil, err
}
stream := l.getKeepAliveStream()
l.mu.Lock()
defer l.mu.Unlock()
if l.stream != nil && l.streamCancel != nil {
l.stream.CloseSend()
l.streamCancel()
}
l.streamCancel = cancel
l.stream = stream
go l.sendKeepAliveLoop(stream)
return stream, nil
}
@@ -432,7 +393,7 @@ func (l *lessor) sendKeepAliveLoop(stream pb.Lease_LeaseKeepAliveClient) {
return
}
var tosend []LeaseID
tosend := make([]LeaseID, 0)
now := time.Now()
l.mu.Lock()
@@ -453,32 +414,6 @@ func (l *lessor) sendKeepAliveLoop(stream pb.Lease_LeaseKeepAliveClient) {
}
}
func (l *lessor) getKeepAliveStream() pb.Lease_LeaseKeepAliveClient {
l.mu.Lock()
defer l.mu.Unlock()
return l.stream
}
func (l *lessor) newStream() error {
sctx, cancel := context.WithCancel(l.stopCtx)
stream, err := l.remote.LeaseKeepAlive(sctx, grpc.FailFast(false))
if err != nil {
cancel()
return toErr(sctx, err)
}
l.mu.Lock()
defer l.mu.Unlock()
if l.stream != nil && l.streamCancel != nil {
l.stream.CloseSend()
l.streamCancel()
}
l.streamCancel = cancel
l.stream = stream
return nil
}
func (ka *keepAlive) Close() {
close(ka.donec)
for _, ch := range ka.chs {

View File

@@ -15,15 +15,13 @@
package clientv3
import (
"io/ioutil"
"log"
"os"
"sync"
"google.golang.org/grpc/grpclog"
)
// Logger is the logger used by the client library.
// It implements the grpclog.Logger interface.
type Logger grpclog.Logger
var (
@@ -36,36 +34,20 @@ type settableLogger struct {
}
func init() {
// disable client side logs by default
// use Go's standard logger by default, as grpc does
logger.mu.Lock()
logger.l = log.New(ioutil.Discard, "", 0)
// logger has to override the grpclog at initialization so that
// any changes to the grpclog go through logger with locking
// instead of through SetLogger
//
// now updates only happen through settableLogger.set
logger.l = log.New(os.Stderr, "", log.LstdFlags)
grpclog.SetLogger(&logger)
logger.mu.Unlock()
}
// SetLogger sets client-side Logger. By default, logs are disabled.
func SetLogger(l Logger) {
logger.set(l)
}
// GetLogger returns the current logger.
func GetLogger() Logger {
return logger.get()
}
func (s *settableLogger) set(l Logger) {
func (s *settableLogger) Set(l Logger) {
s.mu.Lock()
logger.l = l
s.mu.Unlock()
}
func (s *settableLogger) get() Logger {
func (s *settableLogger) Get() Logger {
s.mu.RLock()
l := logger.l
s.mu.RUnlock()
@@ -74,9 +56,9 @@ func (s *settableLogger) get() Logger {
// implement the grpclog.Logger interface
func (s *settableLogger) Fatal(args ...interface{}) { s.get().Fatal(args...) }
func (s *settableLogger) Fatalf(format string, args ...interface{}) { s.get().Fatalf(format, args...) }
func (s *settableLogger) Fatalln(args ...interface{}) { s.get().Fatalln(args...) }
func (s *settableLogger) Print(args ...interface{}) { s.get().Print(args...) }
func (s *settableLogger) Printf(format string, args ...interface{}) { s.get().Printf(format, args...) }
func (s *settableLogger) Println(args ...interface{}) { s.get().Println(args...) }
func (s *settableLogger) Fatal(args ...interface{}) { s.Get().Fatal(args...) }
func (s *settableLogger) Fatalf(format string, args ...interface{}) { s.Get().Fatalf(format, args...) }
func (s *settableLogger) Fatalln(args ...interface{}) { s.Get().Fatalln(args...) }
func (s *settableLogger) Print(args ...interface{}) { s.Get().Print(args...) }
func (s *settableLogger) Printf(format string, args ...interface{}) { s.Get().Printf(format, args...) }
func (s *settableLogger) Println(args ...interface{}) { s.Get().Println(args...) }
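A minimal sketch of driving this from application code through the exported SetLogger on the newer side of the diff; a *log.Logger satisfies the grpclog-style Logger interface:

package example

import (
	"io/ioutil"
	"log"
	"os"

	"github.com/coreos/etcd/clientv3"
)

func configureClientLogs(verbose bool) {
	if verbose {
		// route client and grpc logs to stderr (the older default above)
		clientv3.SetLogger(log.New(os.Stderr, "clientv3: ", log.LstdFlags))
		return
	}
	// silence them entirely (the newer default)
	clientv3.SetLogger(log.New(ioutil.Discard, "", 0))
}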

View File

@@ -20,14 +20,10 @@ import (
"strings"
"testing"
"github.com/coreos/etcd/auth"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
"golang.org/x/crypto/bcrypt"
)
func init() { auth.BcryptCost = bcrypt.MinCost }
// TestMain sets up an etcd cluster if running the examples.
func TestMain(m *testing.M) {
useCluster := true // default to running all tests

View File

@@ -1,128 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package naming
import (
"encoding/json"
etcd "github.com/coreos/etcd/clientv3"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/naming"
)
// GRPCResolver creates a grpc.Watcher for a target to track its resolution changes.
type GRPCResolver struct {
// Client is an initialized etcd client.
Client *etcd.Client
}
func (gr *GRPCResolver) Update(ctx context.Context, target string, nm naming.Update, opts ...etcd.OpOption) (err error) {
switch nm.Op {
case naming.Add:
var v []byte
if v, err = json.Marshal(nm); err != nil {
return grpc.Errorf(codes.InvalidArgument, err.Error())
}
_, err = gr.Client.KV.Put(ctx, target+"/"+nm.Addr, string(v), opts...)
case naming.Delete:
_, err = gr.Client.Delete(ctx, target+"/"+nm.Addr, opts...)
default:
return grpc.Errorf(codes.InvalidArgument, "naming: bad naming op")
}
return err
}
func (gr *GRPCResolver) Resolve(target string) (naming.Watcher, error) {
ctx, cancel := context.WithCancel(context.Background())
w := &gRPCWatcher{c: gr.Client, target: target + "/", ctx: ctx, cancel: cancel}
return w, nil
}
type gRPCWatcher struct {
c *etcd.Client
target string
ctx context.Context
cancel context.CancelFunc
wch etcd.WatchChan
err error
}
// Next gets the next set of updates from the etcd resolver.
// Calls to Next should be serialized; concurrent calls are not safe since
// there is no way to reconcile the update ordering.
func (gw *gRPCWatcher) Next() ([]*naming.Update, error) {
if gw.wch == nil {
// first Next() returns all addresses
return gw.firstNext()
}
if gw.err != nil {
return nil, gw.err
}
// process new events on target/*
wr, ok := <-gw.wch
if !ok {
gw.err = grpc.Errorf(codes.Unavailable, "naming: watch closed")
return nil, gw.err
}
if gw.err = wr.Err(); gw.err != nil {
return nil, gw.err
}
updates := make([]*naming.Update, 0, len(wr.Events))
for _, e := range wr.Events {
var jupdate naming.Update
var err error
switch e.Type {
case etcd.EventTypePut:
err = json.Unmarshal(e.Kv.Value, &jupdate)
jupdate.Op = naming.Add
case etcd.EventTypeDelete:
err = json.Unmarshal(e.PrevKv.Value, &jupdate)
jupdate.Op = naming.Delete
}
if err == nil {
updates = append(updates, &jupdate)
}
}
return updates, nil
}
func (gw *gRPCWatcher) firstNext() ([]*naming.Update, error) {
// Use serialized request so resolution still works if the target etcd
// server is partitioned away from the quorum.
resp, err := gw.c.Get(gw.ctx, gw.target, etcd.WithPrefix(), etcd.WithSerializable())
if gw.err = err; err != nil {
return nil, err
}
updates := make([]*naming.Update, 0, len(resp.Kvs))
for _, kv := range resp.Kvs {
var jupdate naming.Update
if err := json.Unmarshal(kv.Value, &jupdate); err != nil {
continue
}
updates = append(updates, &jupdate)
}
opts := []etcd.OpOption{etcd.WithRev(resp.Header.Revision + 1), etcd.WithPrefix(), etcd.WithPrevKV()}
gw.wch = gw.c.Watch(gw.ctx, gw.target, opts...)
return updates, nil
}
func (gw *gRPCWatcher) Close() { gw.cancel() }
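The resolver being removed here plugs into gRPC's era-appropriate balancer API. A hedged sketch of registering a backend and dialing through it (service name and address illustrative):

package example

import (
	etcd "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/naming"
	"golang.org/x/net/context"
	"google.golang.org/grpc"
	gnaming "google.golang.org/grpc/naming"
)

func dialService(cli *etcd.Client) (*grpc.ClientConn, error) {
	r := &naming.GRPCResolver{Client: cli}
	// advertise one backend address under the "my-service" target
	if err := r.Update(context.TODO(), "my-service",
		gnaming.Update{Op: gnaming.Add, Addr: "127.0.0.1:8080"}); err != nil {
		return nil, err
	}
	// round-robin across whatever addresses the etcd watcher reports
	return grpc.Dial("my-service",
		grpc.WithInsecure(), // illustrative; use transport credentials in production
		grpc.WithBalancer(grpc.RoundRobin(r)))
}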

View File

@@ -1,135 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package naming
import (
"encoding/json"
"reflect"
"testing"
"golang.org/x/net/context"
"google.golang.org/grpc/naming"
etcd "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/integration"
"github.com/coreos/etcd/pkg/testutil"
)
func TestGRPCResolver(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
r := GRPCResolver{
Client: clus.RandClient(),
}
w, err := r.Resolve("foo")
if err != nil {
t.Fatal("failed to resolve foo", err)
}
defer w.Close()
addOp := naming.Update{Op: naming.Add, Addr: "127.0.0.1", Metadata: "metadata"}
err = r.Update(context.TODO(), "foo", addOp)
if err != nil {
t.Fatal("failed to add foo", err)
}
us, err := w.Next()
if err != nil {
t.Fatal("failed to get udpate", err)
}
wu := &naming.Update{
Op: naming.Add,
Addr: "127.0.0.1",
Metadata: "metadata",
}
if !reflect.DeepEqual(us[0], wu) {
t.Fatalf("up = %#v, want %#v", us[0], wu)
}
delOp := naming.Update{Op: naming.Delete, Addr: "127.0.0.1"}
err = r.Update(context.TODO(), "foo", delOp)
us, err = w.Next()
if err != nil {
t.Fatal("failed to get udpate", err)
}
wu = &naming.Update{
Op: naming.Delete,
Addr: "127.0.0.1",
Metadata: "metadata",
}
if !reflect.DeepEqual(us[0], wu) {
t.Fatalf("up = %#v, want %#v", us[0], wu)
}
}
// TestGRPCResolverMulti ensures the resolver initializes correctly with
// multiple hosts and receives multiple updates in a single revision.
func TestGRPCResolverMulti(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
c := clus.RandClient()
v, verr := json.Marshal(naming.Update{Addr: "127.0.0.1", Metadata: "md"})
if verr != nil {
t.Fatal(verr)
}
if _, err := c.Put(context.TODO(), "foo/host", string(v)); err != nil {
t.Fatal(err)
}
if _, err := c.Put(context.TODO(), "foo/host2", string(v)); err != nil {
t.Fatal(err)
}
r := GRPCResolver{c}
w, err := r.Resolve("foo")
if err != nil {
t.Fatal("failed to resolve foo", err)
}
defer w.Close()
updates, nerr := w.Next()
if nerr != nil {
t.Fatal(nerr)
}
if len(updates) != 2 {
t.Fatalf("expected two updates, got %+v", updates)
}
_, err = c.Txn(context.TODO()).Then(etcd.OpDelete("foo/host"), etcd.OpDelete("foo/host2")).Commit()
if err != nil {
t.Fatal(err)
}
updates, nerr = w.Next()
if nerr != nil {
t.Fatal(nerr)
}
if len(updates) != 2 || (updates[0].Op != naming.Delete && updates[1].Op != naming.Delete) {
t.Fatalf("expected two updates, got %+v", updates)
}
}

View File

@@ -14,7 +14,9 @@
package clientv3
import pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)
type opType int
@@ -41,10 +43,6 @@ type Op struct {
serializable bool
keysOnly bool
countOnly bool
minModRev int64
maxModRev int64
minCreateRev int64
maxCreateRev int64
// for range, watch
rev int64
@@ -54,45 +52,29 @@ type Op struct {
// progressNotify is for progress updates.
progressNotify bool
// createdNotify is for created event
createdNotify bool
// filters for watchers
filterPut bool
filterDelete bool
// for put
val []byte
leaseID LeaseID
}
func (op Op) toRangeRequest() *pb.RangeRequest {
if op.t != tRange {
panic("op.t != tRange")
}
r := &pb.RangeRequest{
Key: op.key,
RangeEnd: op.end,
Limit: op.limit,
Revision: op.rev,
Serializable: op.serializable,
KeysOnly: op.keysOnly,
CountOnly: op.countOnly,
MinModRevision: op.minModRev,
MaxModRevision: op.maxModRev,
MinCreateRevision: op.minCreateRev,
MaxCreateRevision: op.maxCreateRev,
}
if op.sort != nil {
r.SortOrder = pb.RangeRequest_SortOrder(op.sort.Order)
r.SortTarget = pb.RangeRequest_SortTarget(op.sort.Target)
}
return r
}
func (op Op) toRequestOp() *pb.RequestOp {
switch op.t {
case tRange:
return &pb.RequestOp{Request: &pb.RequestOp_RequestRange{RequestRange: op.toRangeRequest()}}
r := &pb.RangeRequest{
Key: op.key,
RangeEnd: op.end,
Limit: op.limit,
Revision: op.rev,
Serializable: op.serializable,
KeysOnly: op.keysOnly,
CountOnly: op.countOnly,
}
if op.sort != nil {
r.SortOrder = pb.RangeRequest_SortOrder(op.sort.Order)
r.SortTarget = pb.RangeRequest_SortTarget(op.sort.Target)
}
return &pb.RequestOp{Request: &pb.RequestOp_RequestRange{RequestRange: r}}
case tPut:
r := &pb.PutRequest{Key: op.key, Value: op.val, Lease: int64(op.leaseID), PrevKv: op.prevKV}
return &pb.RequestOp{Request: &pb.RequestOp_RequestPut{RequestPut: r}}
@@ -130,14 +112,6 @@ func OpDelete(key string, opts ...OpOption) Op {
panic("unexpected serializable in delete")
case ret.countOnly:
panic("unexpected countOnly in delete")
case ret.minModRev != 0, ret.maxModRev != 0:
panic("unexpected mod revision filter in delete")
case ret.minCreateRev != 0, ret.maxCreateRev != 0:
panic("unexpected create revision filter in delete")
case ret.filterDelete, ret.filterPut:
panic("unexpected filter in delete")
case ret.createdNotify:
panic("unexpected createdNotify in delete")
}
return ret
}
@@ -157,15 +131,7 @@ func OpPut(key, val string, opts ...OpOption) Op {
case ret.serializable:
panic("unexpected serializable in put")
case ret.countOnly:
panic("unexpected countOnly in put")
case ret.minModRev != 0, ret.maxModRev != 0:
panic("unexpected mod revision filter in put")
case ret.minCreateRev != 0, ret.maxCreateRev != 0:
panic("unexpected create revision filter in put")
case ret.filterDelete, ret.filterPut:
panic("unexpected filter in put")
case ret.createdNotify:
panic("unexpected createdNotify in put")
panic("unexpected countOnly in delete")
}
return ret
}
@@ -183,11 +149,7 @@ func opWatch(key string, opts ...OpOption) Op {
case ret.serializable:
panic("unexpected serializable in watch")
case ret.countOnly:
panic("unexpected countOnly in watch")
case ret.minModRev != 0, ret.maxModRev != 0:
panic("unexpected mod revision filter in watch")
case ret.minCreateRev != 0, ret.maxCreateRev != 0:
panic("unexpected create revision filter in watch")
panic("unexpected countOnly in delete")
}
return ret
}
@@ -219,14 +181,6 @@ func WithRev(rev int64) OpOption { return func(op *Op) { op.rev = rev } }
// 'order' can be either 'SortNone', 'SortAscend', 'SortDescend'.
func WithSort(target SortTarget, order SortOrder) OpOption {
return func(op *Op) {
if target == SortByKey && order == SortAscend {
// If order != SortNone, server fetches the entire key-space,
// and then applies the sort and limit, if provided.
// Since current mvcc.Range implementation returns results
// sorted by keys in lexicographically ascending order,
// client should ignore SortOrder if the target is SortByKey.
order = SortNone
}
op.sort = &SortOption{target, order}
}
}
@@ -291,18 +245,6 @@ func WithCountOnly() OpOption {
return func(op *Op) { op.countOnly = true }
}
// WithMinModRev filters out keys for Get with modification revisions less than the given revision.
func WithMinModRev(rev int64) OpOption { return func(op *Op) { op.minModRev = rev } }
// WithMaxModRev filters out keys for Get with modification revisions greater than the given revision.
func WithMaxModRev(rev int64) OpOption { return func(op *Op) { op.maxModRev = rev } }
// WithMinCreateRev filters out keys for Get with creation revisions less than the given revision.
func WithMinCreateRev(rev int64) OpOption { return func(op *Op) { op.minCreateRev = rev } }
// WithMaxCreateRev filters out keys for Get with creation revisions greater than the given revision.
func WithMaxCreateRev(rev int64) OpOption { return func(op *Op) { op.maxCreateRev = rev } }
// WithFirstCreate gets the key with the oldest creation revision in the request range.
func WithFirstCreate() []OpOption { return withTop(SortByCreateRevision, SortAscend) }
@@ -326,8 +268,7 @@ func withTop(target SortTarget, order SortOrder) []OpOption {
return []OpOption{WithPrefix(), WithSort(target, order), WithLimit(1)}
}
// WithProgressNotify makes watch server send periodic progress updates
// every 10 minutes when there are no incoming events.
// WithProgressNotify makes watch server send periodic progress updates.
// Progress updates have zero events in WatchResponse.
func WithProgressNotify() OpOption {
return func(op *Op) {
@@ -335,23 +276,6 @@ func WithProgressNotify() OpOption {
}
}
// WithCreatedNotify makes watch server send the created event.
func WithCreatedNotify() OpOption {
return func(op *Op) {
op.createdNotify = true
}
}
// WithFilterPut discards PUT events from the watcher.
func WithFilterPut() OpOption {
return func(op *Op) { op.filterPut = true }
}
// WithFilterDelete discards DELETE events from the watcher.
func WithFilterDelete() OpOption {
return func(op *Op) { op.filterDelete = true }
}
// WithPrevKV gets the previous key-value pair before the event happens. If the previous KV is already compacted,
// nothing will be returned.
func WithPrevKV() OpOption {
@@ -359,32 +283,3 @@ func WithPrevKV() OpOption {
op.prevKV = true
}
}
// LeaseOp represents an Operation that lease can execute.
type LeaseOp struct {
id LeaseID
// for TimeToLive
attachedKeys bool
}
// LeaseOption configures lease operations.
type LeaseOption func(*LeaseOp)
func (op *LeaseOp) applyOpts(opts []LeaseOption) {
for _, opt := range opts {
opt(op)
}
}
// WithAttachedKeys requests lease timetolive API to return
// attached keys of given lease ID.
func WithAttachedKeys() LeaseOption {
return func(op *LeaseOp) { op.attachedKeys = true }
}
func toLeaseTimeToLiveRequest(id LeaseID, opts ...LeaseOption) *pb.LeaseTimeToLiveRequest {
ret := &LeaseOp{id: id}
ret.applyOpts(opts)
return &pb.LeaseTimeToLiveRequest{ID: int64(id), Keys: ret.attachedKeys}
}

View File

@@ -1,38 +0,0 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package clientv3
import (
"reflect"
"testing"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)
// TestOpWithSort tests that if WithSort(SortByKey, SortAscend) and WithLimit are
// specified, RangeRequest ignores the SortOption to avoid unnecessarily fetching
// the entire key-space.
func TestOpWithSort(t *testing.T) {
opReq := OpGet("foo", WithSort(SortByKey, SortAscend), WithLimit(10)).toRequestOp().Request
q, ok := opReq.(*pb.RequestOp_RequestRange)
if !ok {
t.Fatalf("expected range request, got %v", reflect.TypeOf(opReq))
}
req := q.RequestRange
wreq := &pb.RangeRequest{Key: []byte("foo"), SortOrder: pb.RangeRequest_NONE, Limit: 10}
if !reflect.DeepEqual(req, wreq) {
t.Fatalf("expected %+v, got %+v", wreq, req)
}
}

View File

@@ -15,40 +15,27 @@
package clientv3
import (
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
)
type rpcFunc func(ctx context.Context) error
type retryRpcFunc func(context.Context, rpcFunc) error
type retryRpcFunc func(context.Context, rpcFunc)
func (c *Client) newRetryWrapper() retryRpcFunc {
return func(rpcCtx context.Context, f rpcFunc) error {
return func(rpcCtx context.Context, f rpcFunc) {
for {
err := f(rpcCtx)
if err == nil {
return nil
// ignore grpc conn closing on fail-fast calls; they are transient errors
if err == nil || !isConnClosing(err) {
return
}
// only retry if unavailable
if grpc.Code(err) != codes.Unavailable {
return err
}
// always stop retry on etcd errors
eErr := rpctypes.Error(err)
if _, ok := eErr.(rpctypes.EtcdError); ok {
return err
}
select {
case <-c.balancer.ConnectNotify():
case <-rpcCtx.Done():
return rpcCtx.Err()
case <-c.ctx.Done():
return c.ctx.Err()
return
}
}
}
@@ -65,7 +52,7 @@ func RetryKVClient(c *Client) pb.KVClient {
}
func (rkv *retryKVClient) Put(ctx context.Context, in *pb.PutRequest, opts ...grpc.CallOption) (resp *pb.PutResponse, err error) {
err = rkv.retryf(ctx, func(rctx context.Context) error {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.Put(rctx, in, opts...)
return err
})
@@ -73,7 +60,7 @@ func (rkv *retryKVClient) Put(ctx context.Context, in *pb.PutRequest, opts ...gr
}
func (rkv *retryKVClient) DeleteRange(ctx context.Context, in *pb.DeleteRangeRequest, opts ...grpc.CallOption) (resp *pb.DeleteRangeResponse, err error) {
err = rkv.retryf(ctx, func(rctx context.Context) error {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.DeleteRange(rctx, in, opts...)
return err
})
@@ -81,7 +68,7 @@ func (rkv *retryKVClient) DeleteRange(ctx context.Context, in *pb.DeleteRangeReq
}
func (rkv *retryKVClient) Txn(ctx context.Context, in *pb.TxnRequest, opts ...grpc.CallOption) (resp *pb.TxnResponse, err error) {
err = rkv.retryf(ctx, func(rctx context.Context) error {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.Txn(rctx, in, opts...)
return err
})
@@ -89,7 +76,7 @@ func (rkv *retryKVClient) Txn(ctx context.Context, in *pb.TxnRequest, opts ...gr
}
func (rkv *retryKVClient) Compact(ctx context.Context, in *pb.CompactionRequest, opts ...grpc.CallOption) (resp *pb.CompactionResponse, err error) {
err = rkv.retryf(ctx, func(rctx context.Context) error {
rkv.retryf(ctx, func(rctx context.Context) error {
resp, err = rkv.KVClient.Compact(rctx, in, opts...)
return err
})
@@ -107,7 +94,7 @@ func RetryLeaseClient(c *Client) pb.LeaseClient {
}
func (rlc *retryLeaseClient) LeaseGrant(ctx context.Context, in *pb.LeaseGrantRequest, opts ...grpc.CallOption) (resp *pb.LeaseGrantResponse, err error) {
err = rlc.retryf(ctx, func(rctx context.Context) error {
rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.LeaseClient.LeaseGrant(rctx, in, opts...)
return err
})
@@ -116,7 +103,7 @@ func (rlc *retryLeaseClient) LeaseGrant(ctx context.Context, in *pb.LeaseGrantRe
}
func (rlc *retryLeaseClient) LeaseRevoke(ctx context.Context, in *pb.LeaseRevokeRequest, opts ...grpc.CallOption) (resp *pb.LeaseRevokeResponse, err error) {
err = rlc.retryf(ctx, func(rctx context.Context) error {
rlc.retryf(ctx, func(rctx context.Context) error {
resp, err = rlc.LeaseClient.LeaseRevoke(rctx, in, opts...)
return err
})
@@ -134,7 +121,7 @@ func RetryClusterClient(c *Client) pb.ClusterClient {
}
func (rcc *retryClusterClient) MemberAdd(ctx context.Context, in *pb.MemberAddRequest, opts ...grpc.CallOption) (resp *pb.MemberAddResponse, err error) {
err = rcc.retryf(ctx, func(rctx context.Context) error {
rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.ClusterClient.MemberAdd(rctx, in, opts...)
return err
})
@@ -142,7 +129,7 @@ func (rcc *retryClusterClient) MemberAdd(ctx context.Context, in *pb.MemberAddRe
}
func (rcc *retryClusterClient) MemberRemove(ctx context.Context, in *pb.MemberRemoveRequest, opts ...grpc.CallOption) (resp *pb.MemberRemoveResponse, err error) {
err = rcc.retryf(ctx, func(rctx context.Context) error {
rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.ClusterClient.MemberRemove(rctx, in, opts...)
return err
})
@@ -150,7 +137,7 @@ func (rcc *retryClusterClient) MemberRemove(ctx context.Context, in *pb.MemberRe
}
func (rcc *retryClusterClient) MemberUpdate(ctx context.Context, in *pb.MemberUpdateRequest, opts ...grpc.CallOption) (resp *pb.MemberUpdateResponse, err error) {
err = rcc.retryf(ctx, func(rctx context.Context) error {
rcc.retryf(ctx, func(rctx context.Context) error {
resp, err = rcc.ClusterClient.MemberUpdate(rctx, in, opts...)
return err
})
@@ -168,7 +155,7 @@ func RetryAuthClient(c *Client) pb.AuthClient {
}
func (rac *retryAuthClient) AuthEnable(ctx context.Context, in *pb.AuthEnableRequest, opts ...grpc.CallOption) (resp *pb.AuthEnableResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.AuthEnable(rctx, in, opts...)
return err
})
@@ -176,7 +163,7 @@ func (rac *retryAuthClient) AuthEnable(ctx context.Context, in *pb.AuthEnableReq
}
func (rac *retryAuthClient) AuthDisable(ctx context.Context, in *pb.AuthDisableRequest, opts ...grpc.CallOption) (resp *pb.AuthDisableResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.AuthDisable(rctx, in, opts...)
return err
})
@@ -184,7 +171,7 @@ func (rac *retryAuthClient) AuthDisable(ctx context.Context, in *pb.AuthDisableR
}
func (rac *retryAuthClient) UserAdd(ctx context.Context, in *pb.AuthUserAddRequest, opts ...grpc.CallOption) (resp *pb.AuthUserAddResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserAdd(rctx, in, opts...)
return err
})
@@ -192,7 +179,7 @@ func (rac *retryAuthClient) UserAdd(ctx context.Context, in *pb.AuthUserAddReque
}
func (rac *retryAuthClient) UserDelete(ctx context.Context, in *pb.AuthUserDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthUserDeleteResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserDelete(rctx, in, opts...)
return err
})
@@ -200,7 +187,7 @@ func (rac *retryAuthClient) UserDelete(ctx context.Context, in *pb.AuthUserDelet
}
func (rac *retryAuthClient) UserChangePassword(ctx context.Context, in *pb.AuthUserChangePasswordRequest, opts ...grpc.CallOption) (resp *pb.AuthUserChangePasswordResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserChangePassword(rctx, in, opts...)
return err
})
@@ -208,7 +195,7 @@ func (rac *retryAuthClient) UserChangePassword(ctx context.Context, in *pb.AuthU
}
func (rac *retryAuthClient) UserGrantRole(ctx context.Context, in *pb.AuthUserGrantRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserGrantRoleResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserGrantRole(rctx, in, opts...)
return err
})
@@ -216,7 +203,7 @@ func (rac *retryAuthClient) UserGrantRole(ctx context.Context, in *pb.AuthUserGr
}
func (rac *retryAuthClient) UserRevokeRole(ctx context.Context, in *pb.AuthUserRevokeRoleRequest, opts ...grpc.CallOption) (resp *pb.AuthUserRevokeRoleResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.UserRevokeRole(rctx, in, opts...)
return err
})
@@ -224,7 +211,7 @@ func (rac *retryAuthClient) UserRevokeRole(ctx context.Context, in *pb.AuthUserR
}
func (rac *retryAuthClient) RoleAdd(ctx context.Context, in *pb.AuthRoleAddRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleAddResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleAdd(rctx, in, opts...)
return err
})
@@ -232,7 +219,7 @@ func (rac *retryAuthClient) RoleAdd(ctx context.Context, in *pb.AuthRoleAddReque
}
func (rac *retryAuthClient) RoleDelete(ctx context.Context, in *pb.AuthRoleDeleteRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleDeleteResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleDelete(rctx, in, opts...)
return err
})
@@ -240,7 +227,7 @@ func (rac *retryAuthClient) RoleDelete(ctx context.Context, in *pb.AuthRoleDelet
}
func (rac *retryAuthClient) RoleGrantPermission(ctx context.Context, in *pb.AuthRoleGrantPermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleGrantPermissionResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleGrantPermission(rctx, in, opts...)
return err
})
@@ -248,7 +235,7 @@ func (rac *retryAuthClient) RoleGrantPermission(ctx context.Context, in *pb.Auth
}
func (rac *retryAuthClient) RoleRevokePermission(ctx context.Context, in *pb.AuthRoleRevokePermissionRequest, opts ...grpc.CallOption) (resp *pb.AuthRoleRevokePermissionResponse, err error) {
err = rac.retryf(ctx, func(rctx context.Context) error {
rac.retryf(ctx, func(rctx context.Context) error {
resp, err = rac.AuthClient.RoleRevokePermission(rctx, in, opts...)
return err
})

View File

@@ -19,7 +19,6 @@ import (
pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
"golang.org/x/net/context"
"google.golang.org/grpc"
)
// Txn is the interface that wraps mini-transactions.
@@ -153,12 +152,7 @@ func (txn *txn) Commit() (*TxnResponse, error) {
func (txn *txn) commit() (*TxnResponse, error) {
r := &pb.TxnRequest{Compare: txn.cmps, Success: txn.sus, Failure: txn.fas}
var opts []grpc.CallOption
if !txn.isWrite {
opts = []grpc.CallOption{grpc.FailFast(false)}
}
resp, err := txn.kv.remote.Txn(txn.ctx, r, opts...)
resp, err := txn.kv.remote.Txn(txn.ctx, r)
if err != nil {
return nil, err
}

View File

@@ -17,13 +17,9 @@ package clientv3
import (
"testing"
"time"
"github.com/coreos/etcd/pkg/testutil"
)
func TestTxnPanics(t *testing.T) {
defer testutil.AfterTest(t)
kv := &kv{}
errc := make(chan string)

View File

@@ -61,8 +61,8 @@ type WatchResponse struct {
// the channel sends a final response that has Canceled set to true with a non-nil Err().
Canceled bool
// Created is used to indicate the creation of the watcher.
Created bool
// created is used to indicate the creation of the watcher.
created bool
closeErr error
}
@@ -92,7 +92,7 @@ func (wr *WatchResponse) Err() error {
// IsProgressNotify returns true if the WatchResponse is progress notification.
func (wr *WatchResponse) IsProgressNotify() bool {
return len(wr.Events) == 0 && !wr.Canceled && !wr.Created && wr.CompactRevision == 0 && wr.Header.Revision != 0
return len(wr.Events) == 0 && !wr.Canceled && !wr.created && wr.CompactRevision == 0 && wr.Header.Revision != 0
}
// watcher implements the Watcher interface
@@ -101,7 +101,6 @@ type watcher struct {
// mu protects the grpc streams map
mu sync.RWMutex
// streams holds all the active grpc streams keyed by ctx value.
streams map[string]*watchGrpcStream
}
@@ -145,12 +144,8 @@ type watchRequest struct {
key string
end string
rev int64
// send created notification event if this field is true
createdNotify bool
// progressNotify is for progress updates
// progressNotify is for progress updates.
progressNotify bool
// filters is the list of events to filter out
filters []pb.WatchCreateRequest_FilterType
// get the previous key-value pair before the event happens
prevKV bool
// retc receives a chan WatchResponse once the watcher is established
@@ -178,12 +173,8 @@ type watcherStream struct {
}
func NewWatcher(c *Client) Watcher {
return NewWatchFromWatchClient(pb.NewWatchClient(c.conn))
}
func NewWatchFromWatchClient(wc pb.WatchClient) Watcher {
return &watcher{
remote: wc,
remote: pb.NewWatchClient(c.conn),
streams: make(map[string]*watchGrpcStream),
}
}
@@ -224,22 +215,12 @@ func (w *watcher) newWatcherGrpcStream(inctx context.Context) *watchGrpcStream {
func (w *watcher) Watch(ctx context.Context, key string, opts ...OpOption) WatchChan {
ow := opWatch(key, opts...)
var filters []pb.WatchCreateRequest_FilterType
if ow.filterPut {
filters = append(filters, pb.WatchCreateRequest_NOPUT)
}
if ow.filterDelete {
filters = append(filters, pb.WatchCreateRequest_NODELETE)
}
wr := &watchRequest{
ctx: ctx,
createdNotify: ow.createdNotify,
key: string(ow.key),
end: string(ow.end),
rev: ow.rev,
progressNotify: ow.progressNotify,
filters: filters,
prevKV: ow.prevKV,
retc: make(chan chan WatchResponse, 1),
}
@@ -393,17 +374,15 @@ func (w *watchGrpcStream) run() {
for _, ws := range w.substreams {
if _, ok := closing[ws]; !ok {
close(ws.recvc)
closing[ws] = struct{}{}
}
}
for _, ws := range w.resuming {
if _, ok := closing[ws]; ws != nil && !ok {
close(ws.recvc)
closing[ws] = struct{}{}
}
}
w.joinSubstreams()
for range closing {
for toClose := len(w.substreams) + len(w.resuming); toClose > 0; toClose-- {
w.closeSubstream(<-w.closingc)
}
@@ -479,7 +458,7 @@ func (w *watchGrpcStream) run() {
}
// watch client failed to recv; spawn another if possible
case err := <-w.errc:
if isHaltErr(w.ctx, err) || toErr(w.ctx, err) == v3rpc.ErrNoLeader {
if toErr(w.ctx, err) == v3rpc.ErrNoLeader {
closeErr = err
return
}
@@ -529,7 +508,7 @@ func (w *watchGrpcStream) dispatchEvent(pbresp *pb.WatchResponse) bool {
Header: *pbresp.Header,
Events: events,
CompactRevision: pbresp.CompactRevision,
Created: pbresp.Created,
created: pbresp.Created,
Canceled: pbresp.Canceled,
}
select {
@@ -583,6 +562,14 @@ func (w *watchGrpcStream) serveSubstream(ws *watcherStream, resumec chan struct{
curWr := emptyWr
outc := ws.outc
if len(ws.buf) > 0 && ws.buf[0].created {
select {
case ws.initReq.retc <- ws.outc:
default:
}
ws.buf = ws.buf[1:]
}
if len(ws.buf) > 0 {
curWr = ws.buf[0]
} else {
@@ -600,35 +587,13 @@ func (w *watchGrpcStream) serveSubstream(ws *watcherStream, resumec chan struct{
// shutdown from closeSubstream
return
}
if wr.Created {
if ws.initReq.retc != nil {
ws.initReq.retc <- ws.outc
// to prevent next write from taking the slot in buffered channel
// and posting duplicate create events
ws.initReq.retc = nil
// send first creation event only if requested
if ws.initReq.createdNotify {
ws.outc <- *wr
}
}
}
// TODO pause channel if buffer gets too large
ws.buf = append(ws.buf, wr)
nextRev = wr.Header.Revision
if len(wr.Events) > 0 {
nextRev = wr.Events[len(wr.Events)-1].Kv.ModRevision + 1
}
ws.initReq.rev = nextRev
// created event is already sent above,
// watcher should not post duplicate events
if wr.Created {
continue
}
// TODO pause channel if buffer gets too large
ws.buf = append(ws.buf, wr)
case <-w.ctx.Done():
return
case <-ws.initReq.ctx.Done():
@@ -754,7 +719,6 @@ func (wr *watchRequest) toPB() *pb.WatchRequest {
Key: []byte(wr.key),
RangeEnd: []byte(wr.end),
ProgressNotify: wr.progressNotify,
Filters: wr.filters,
PrevKv: wr.prevKV,
}
cr := &pb.WatchRequest_CreateRequest{CreateRequest: req}

289
cmd/Godeps/Godeps.json generated Normal file
View File

@@ -0,0 +1,289 @@
{
"ImportPath": "github.com/coreos/etcd",
"GoVersion": "go1.7",
"GodepVersion": "v74",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "bitbucket.org/ww/goautoneg",
"Comment": "null-5",
"Rev": "'75cd24fc2f2c2a2088577d12123ddee5f54e0675'"
},
{
"ImportPath": "github.com/beorn7/perks/quantile",
"Rev": "b965b613227fddccbfffe13eae360ed3fa822f8d"
},
{
"ImportPath": "github.com/bgentry/speakeasy",
"Rev": "36e9cfdd690967f4f690c6edcc9ffacd006014a0"
},
{
"ImportPath": "github.com/boltdb/bolt",
"Comment": "v1.3.0",
"Rev": "583e8937c61f1af6513608ccc75c97b6abdf4ff9"
},
{
"ImportPath": "github.com/cockroachdb/cmux",
"Rev": "112f0506e7743d64a6eb8fedbcff13d9979bbf92"
},
{
"ImportPath": "github.com/coreos/go-semver/semver",
"Rev": "568e959cd89871e61434c1143528d9162da89ef2"
},
{
"ImportPath": "github.com/coreos/go-systemd/daemon",
"Comment": "v10-13-gd6c05a1d",
"Rev": "d6c05a1dcbb5ac02b7653da4d99e5db340c20778"
},
{
"ImportPath": "github.com/coreos/go-systemd/journal",
"Comment": "v10-13-gd6c05a1d",
"Rev": "d6c05a1dcbb5ac02b7653da4d99e5db340c20778"
},
{
"ImportPath": "github.com/coreos/go-systemd/util",
"Comment": "v10-13-gd6c05a1d",
"Rev": "d6c05a1dcbb5ac02b7653da4d99e5db340c20778"
},
{
"ImportPath": "github.com/coreos/pkg/capnslog",
"Comment": "v2-8-gfa29b1d",
"Rev": "fa29b1d70f0beaddd4c7021607cc3c3be8ce94b8"
},
{
"ImportPath": "github.com/cpuguy83/go-md2man/md2man",
"Comment": "v1.0.4",
"Rev": "71acacd42f85e5e82f70a55327789582a5200a90"
},
{
"ImportPath": "github.com/dustin/go-humanize",
"Rev": "8929fe90cee4b2cb9deb468b51fb34eba64d1bf0"
},
{
"ImportPath": "github.com/ghodss/yaml",
"Rev": "73d445a93680fa1a78ae23a5839bad48f32ba1ee"
},
{
"ImportPath": "github.com/gogo/protobuf/proto",
"Comment": "v0.2-33-ge18d7aa",
"Rev": "e18d7aa8f8c624c915db340349aad4c49b10d173"
},
{
"ImportPath": "github.com/golang/glog",
"Rev": "44145f04b68cf362d9c4df2182967c2275eaefed"
},
{
"ImportPath": "github.com/golang/groupcache/lru",
"Rev": "02826c3e79038b59d737d3b1c0a1d937f71a4433"
},
{
"ImportPath": "github.com/golang/protobuf/jsonpb",
"Rev": "8616e8ee5e20a1704615e6c8d7afcdac06087a67"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "8616e8ee5e20a1704615e6c8d7afcdac06087a67"
},
{
"ImportPath": "github.com/google/btree",
"Rev": "7d79101e329e5a3adf994758c578dab82b90c017"
},
{
"ImportPath": "github.com/grpc-ecosystem/grpc-gateway/runtime",
"Comment": "v1.0.0-8-gf52d055",
"Rev": "f52d055dc48aec25854ed7d31862f78913cf17d1"
},
{
"ImportPath": "github.com/grpc-ecosystem/grpc-gateway/runtime/internal",
"Comment": "v1.0.0-8-gf52d055",
"Rev": "f52d055dc48aec25854ed7d31862f78913cf17d1"
},
{
"ImportPath": "github.com/grpc-ecosystem/grpc-gateway/utilities",
"Comment": "v1.0.0-8-gf52d055",
"Rev": "f52d055dc48aec25854ed7d31862f78913cf17d1"
},
{
"ImportPath": "github.com/inconshreveable/mousetrap",
"Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
},
{
"ImportPath": "github.com/jonboulle/clockwork",
"Rev": "72f9bd7c4e0c2a40055ab3d0f09654f730cce982"
},
{
"ImportPath": "github.com/kballard/go-shellquote",
"Rev": "d8ec1a69a250a17bb0e419c386eac1f3711dc142"
},
{
"ImportPath": "github.com/kr/pty",
"Comment": "release.r56-29-gf7ee69f",
"Rev": "f7ee69f31298ecbe5d2b349c711e2547a617d398"
},
{
"ImportPath": "github.com/mattn/go-runewidth",
"Comment": "v0.0.1",
"Rev": "d6bea18f789704b5f83375793155289da36a3c7f"
},
{
"ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"Rev": "fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a"
},
{
"ImportPath": "github.com/olekukonko/tablewriter",
"Rev": "cca8bbc0798408af109aaaa239cbd2634846b340"
},
{
"ImportPath": "github.com/prometheus/client_golang/prometheus",
"Comment": "0.7.0-52-ge51041b",
"Rev": "e51041b3fa41cece0dca035740ba6411905be473"
},
{
"ImportPath": "github.com/prometheus/client_model/go",
"Comment": "model-0.0.2-12-gfa8ad6f",
"Rev": "fa8ad6fec33561be4280a8f0514318c79d7f6cb6"
},
{
"ImportPath": "github.com/prometheus/common/expfmt",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/common/model",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/procfs",
"Rev": "454a56f35412459b5e684fd5ec0f9211b94f002a"
},
{
"ImportPath": "github.com/russross/blackfriday",
"Comment": "v1.4-2-g300106c",
"Rev": "300106c228d52c8941d4b3de6054a6062a86dda3"
},
{
"ImportPath": "github.com/shurcooL/sanitized_anchor_name",
"Rev": "10ef21a441db47d8b13ebcc5fd2310f636973c77"
},
{
"ImportPath": "github.com/spacejam/loghisto",
"Rev": "323309774dec8b7430187e46cd0793974ccca04a"
},
{
"ImportPath": "github.com/spf13/cobra",
"Rev": "1c44ec8d3f1552cac48999f9306da23c4d8a288b"
},
{
"ImportPath": "github.com/spf13/pflag",
"Rev": "08b1a584251b5b62f458943640fc8ebd4d50aaa5"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Rev": "9cc77fa25329013ce07362c7742952ff887361f2"
},
{
"ImportPath": "github.com/ugorji/go/codec",
"Rev": "f1f1a805ed361a0e078bb537e4ea78cd37dcf065"
},
{
"ImportPath": "github.com/urfave/cli",
"Comment": "v1.17.0-79-g6011f16",
"Rev": "6011f165dc288c72abd8acd7722f837c5c64198d"
},
{
"ImportPath": "github.com/xiang90/probing",
"Rev": "6a0cc1ae81b4cc11db5e491e030e4b98fba79c19"
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/net/context",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/http2",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/http2/hpack",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/internal/timeseries",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/net/trace",
"Rev": "6acef71eb69611914f7a30939ea9f6e194c78172"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "9c60d1c508f5134d1ca726b4641db998f2523357"
},
{
"ImportPath": "golang.org/x/time/rate",
"Rev": "a4bde12657593d5e90d0533a3e4fd95e635124cb"
},
{
"ImportPath": "google.golang.org/grpc",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/codes",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/credentials",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/grpclog",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/internal",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/metadata",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/naming",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/peer",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "google.golang.org/grpc/transport",
"Comment": "v1.0.0-183-g231b4cf",
"Rev": "231b4cfea0e79843053a33f5fe90bd4d84b23cd3"
},
{
"ImportPath": "gopkg.in/cheggaaa/pb.v1",
"Comment": "v1.0.1",
"Rev": "29ad9b62f9e0274422d738242b94a5b89440bfa6"
},
{
"ImportPath": "gopkg.in/yaml.v2",
"Rev": "53feefa2559fb8dfa8d81baad31be332c97d6c77"
}
]
}

5
cmd/Godeps/Readme generated Normal file
View File

@@ -0,0 +1,5 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

View File

@@ -1 +0,0 @@
../

1
cmd/etcdmain Symbolic link
View File

@@ -0,0 +1 @@
../etcdmain

1
cmd/main.go Symbolic link
View File

@@ -0,0 +1 @@
../main.go

13
cmd/vendor/bitbucket.org/ww/goautoneg/Makefile generated vendored Normal file
View File

@@ -0,0 +1,13 @@
include $(GOROOT)/src/Make.inc
TARG=bitbucket.org/ww/goautoneg
GOFILES=autoneg.go
include $(GOROOT)/src/Make.pkg
format:
gofmt -w *.go
docs:
gomake clean
godoc ${TARG} > README.txt

67
cmd/vendor/bitbucket.org/ww/goautoneg/README.txt generated vendored Normal file
View File

@@ -0,0 +1,67 @@
PACKAGE
package goautoneg
import "bitbucket.org/ww/goautoneg"
HTTP Content-Type Autonegotiation.
The functions in this package implement the behaviour specified in
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Copyright (c) 2011, Open Knowledge Foundation Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
Neither the name of the Open Knowledge Foundation Ltd. nor the
names of its contributors may be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
FUNCTIONS
func Negotiate(header string, alternatives []string) (content_type string)
Negotiate the most appropriate content_type given the accept header
and a list of alternatives.
func ParseAccept(header string) (accept []Accept)
Parse an Accept Header string returning a sorted list
of clauses
TYPES
type Accept struct {
Type, SubType string
Q float32
Params map[string]string
}
Structure to represent a clause in an HTTP Accept Header
SUBDIRECTORIES
.hg

File diff suppressed because it is too large

2
cmd/vendor/github.com/bgentry/speakeasy/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,2 @@
example/example
example/example.exe

30
cmd/vendor/github.com/bgentry/speakeasy/Readme.md generated vendored Normal file
View File

@@ -0,0 +1,30 @@
# Speakeasy
This package provides cross-platform Go (#golang) helpers for taking user input
from the terminal while not echoing the input back (similar to `getpasswd`). The
package uses syscalls to avoid any dependence on cgo, and is therefore
compatible with cross-compiling.
[![GoDoc](https://godoc.org/github.com/bgentry/speakeasy?status.png)][godoc]
## Unicode
Multi-byte unicode characters work successfully on Mac OS X. On Windows,
however, this may be problematic (as is UTF in general on Windows). Other
platforms have not been tested.
## License
The code herein was not written by me, but was compiled from two separate open
source packages. Unix portions were imported from [gopass][gopass], while
Windows portions were imported from the [CloudFoundry Go CLI][cf-cli]'s
[Windows terminal helpers][cf-ui-windows].
The [license for the windows portion](./LICENSE_WINDOWS) has been copied exactly
from the source (though I attempted to fill in the correct owner in the
boilerplate copyright notice).
[cf-cli]: https://github.com/cloudfoundry/cli "CloudFoundry Go CLI"
[cf-ui-windows]: https://github.com/cloudfoundry/cli/blob/master/src/cf/terminal/ui_windows.go "CloudFoundry Go CLI Windows input helpers"
[godoc]: https://godoc.org/github.com/bgentry/speakeasy "speakeasy on Godoc.org"
[gopass]: https://code.google.com/p/gopass "gopass"

4
cmd/vendor/github.com/boltdb/bolt/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,4 @@
*.prof
*.test
*.swp
/bin/

18
cmd/vendor/github.com/boltdb/bolt/Makefile generated vendored Normal file
View File

@@ -0,0 +1,18 @@
BRANCH=`git rev-parse --abbrev-ref HEAD`
COMMIT=`git rev-parse --short HEAD`
GOLDFLAGS="-X main.branch $(BRANCH) -X main.commit $(COMMIT)"
default: build
race:
@go test -v -race -test.run="TestSimulate_(100op|1000op)"
# go get github.com/kisielk/errcheck
errcheck:
@errcheck -ignorepkg=bytes -ignore=os:Remove github.com/boltdb/bolt
test:
@go test -v -cover .
@go test -v ./cmd/bolt
.PHONY: fmt test

852
cmd/vendor/github.com/boltdb/bolt/README.md generated vendored Normal file
View File

@@ -0,0 +1,852 @@
Bolt [![Coverage Status](https://coveralls.io/repos/boltdb/bolt/badge.svg?branch=master)](https://coveralls.io/r/boltdb/bolt?branch=master) [![GoDoc](https://godoc.org/github.com/boltdb/bolt?status.svg)](https://godoc.org/github.com/boltdb/bolt) ![Version](https://img.shields.io/badge/version-1.2.1-green.svg)
====
Bolt is a pure Go key/value store inspired by [Howard Chu's][hyc_symas]
[LMDB project][lmdb]. The goal of the project is to provide a simple,
fast, and reliable database for projects that don't require a full database
server such as Postgres or MySQL.
Since Bolt is meant to be used as such a low-level piece of functionality,
simplicity is key. The API will be small and only focus on getting values
and setting values. That's it.
[hyc_symas]: https://twitter.com/hyc_symas
[lmdb]: http://symas.com/mdb/
## Project Status
Bolt is stable and the API is fixed. Full unit test coverage and randomized
black box testing are used to ensure database consistency and thread safety.
Bolt is currently in high-load production environments serving databases as
large as 1TB. Many companies such as Shopify and Heroku use Bolt-backed
services every day.
## Table of Contents
- [Getting Started](#getting-started)
- [Installing](#installing)
- [Opening a database](#opening-a-database)
- [Transactions](#transactions)
- [Read-write transactions](#read-write-transactions)
- [Read-only transactions](#read-only-transactions)
- [Batch read-write transactions](#batch-read-write-transactions)
- [Managing transactions manually](#managing-transactions-manually)
- [Using buckets](#using-buckets)
- [Using key/value pairs](#using-keyvalue-pairs)
- [Autoincrementing integer for the bucket](#autoincrementing-integer-for-the-bucket)
- [Iterating over keys](#iterating-over-keys)
- [Prefix scans](#prefix-scans)
- [Range scans](#range-scans)
- [ForEach()](#foreach)
- [Nested buckets](#nested-buckets)
- [Database backups](#database-backups)
- [Statistics](#statistics)
- [Read-Only Mode](#read-only-mode)
- [Mobile Use (iOS/Android)](#mobile-use-iosandroid)
- [Resources](#resources)
- [Comparison with other databases](#comparison-with-other-databases)
- [Postgres, MySQL, & other relational databases](#postgres-mysql--other-relational-databases)
- [LevelDB, RocksDB](#leveldb-rocksdb)
- [LMDB](#lmdb)
- [Caveats & Limitations](#caveats--limitations)
- [Reading the Source](#reading-the-source)
- [Other Projects Using Bolt](#other-projects-using-bolt)
## Getting Started
### Installing
To start using Bolt, install Go and run `go get`:
```sh
$ go get github.com/boltdb/bolt/...
```
This will retrieve the library and install the `bolt` command line utility into
your `$GOBIN` path.
### Opening a database
The top-level object in Bolt is a `DB`. It is represented as a single file on
your disk and represents a consistent snapshot of your data.
To open your database, simply use the `bolt.Open()` function:
```go
package main
import (
"log"
"github.com/boltdb/bolt"
)
func main() {
// Open the my.db data file in your current directory.
// It will be created if it doesn't exist.
db, err := bolt.Open("my.db", 0600, nil)
if err != nil {
log.Fatal(err)
}
defer db.Close()
...
}
```
Please note that Bolt obtains a file lock on the data file so multiple processes
cannot open the same database at the same time. Opening an already open Bolt
database will cause it to hang until the other process closes it. To prevent
an indefinite wait you can pass a timeout option to the `Open()` function:
```go
db, err := bolt.Open("my.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
```
### Transactions
Bolt allows only one read-write transaction at a time but allows as many
read-only transactions as you want at a time. Each transaction has a consistent
view of the data as it existed when the transaction started.
Individual transactions and all objects created from them (e.g. buckets, keys)
are not thread safe. To work with data in multiple goroutines you must start
a transaction for each one or use locking to ensure only one goroutine accesses
a transaction at a time. Creating a transaction from the `DB` is thread safe.
Read-only transactions and read-write transactions should not depend on one
another and generally shouldn't be opened simultaneously in the same goroutine.
This can cause a deadlock as the read-write transaction needs to periodically
re-map the data file but it cannot do so while a read-only transaction is open.
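As a cautionary sketch (not from the Bolt docs; the bucket name is illustrative), this is the shape of code that can deadlock, since the nested read-write transaction may need to re-map the data file while the outer read-only transaction holds it open:
```go
// ANTI-PATTERN: may deadlock. Do not nest a read-write transaction
// inside a read-only transaction in the same goroutine.
err := db.View(func(tx *bolt.Tx) error {
	return db.Update(func(tx2 *bolt.Tx) error {
		_, berr := tx2.CreateBucketIfNotExists([]byte("MyBucket"))
		return berr
	})
})
if err != nil {
	log.Fatal(err)
}
```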
#### Read-write transactions
To start a read-write transaction, you can use the `DB.Update()` function:
```go
err := db.Update(func(tx *bolt.Tx) error {
...
return nil
})
```
Inside the closure, you have a consistent view of the database. You commit the
transaction by returning `nil` at the end. You can also rollback the transaction
at any point by returning an error. All database operations are allowed inside
a read-write transaction.
Always check the return error as it will report any disk failures that can cause
your transaction to not complete. If you return an error within your closure
it will be passed through.
#### Read-only transactions
To start a read-only transaction, you can use the `DB.View()` function:
```go
err := db.View(func(tx *bolt.Tx) error {
...
return nil
})
```
You also get a consistent view of the database within this closure, however,
no mutating operations are allowed within a read-only transaction. You can only
retrieve buckets, retrieve values, and copy the database within a read-only
transaction.
#### Batch read-write transactions
Each `DB.Update()` waits for disk to commit the writes. This overhead
can be minimized by combining multiple updates with the `DB.Batch()`
function:
```go
err := db.Batch(func(tx *bolt.Tx) error {
...
return nil
})
```
Concurrent Batch calls are opportunistically combined into larger
transactions. Batch is only useful when there are multiple goroutines
calling it.
The trade-off is that `Batch` can call the given
function multiple times, if parts of the transaction fail. The
function must be idempotent and side effects must take effect only
after a successful return from `DB.Batch()`.
For example: don't display messages from inside the function, instead
set variables in the enclosing scope:
```go
var id uint64
err := db.Batch(func(tx *bolt.Tx) error {
// Find last key in bucket, decode as bigendian uint64, increment
// by one, encode back to []byte, and add new key.
...
id = newValue
return nil
})
if err != nil {
return ...
}
fmt.Println("Allocated ID %d", id)
```
#### Managing transactions manually
The `DB.View()` and `DB.Update()` functions are wrappers around the `DB.Begin()`
function. These helper functions will start the transaction, execute a function,
and then safely close your transaction if an error is returned. This is the
recommended way to use Bolt transactions.
However, sometimes you may want to manually start and end your transactions.
You can use the `Tx.Begin()` function directly but **please** be sure to close
the transaction.
```go
// Start a writable transaction.
tx, err := db.Begin(true)
if err != nil {
return err
}
defer tx.Rollback()
// Use the transaction...
_, err = tx.CreateBucket([]byte("MyBucket"))
if err != nil {
return err
}
// Commit the transaction and check for error.
if err := tx.Commit(); err != nil {
return err
}
```
The first argument to `DB.Begin()` is a boolean stating if the transaction
should be writable.
### Using buckets
Buckets are collections of key/value pairs within the database. All keys in a
bucket must be unique. You can create a bucket using the `DB.CreateBucket()`
function:
```go
db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucket([]byte("MyBucket"))
if err != nil {
return fmt.Errorf("create bucket: %s", err)
}
return nil
})
```
You can also create a bucket only if it doesn't exist by using the
`Tx.CreateBucketIfNotExists()` function. It's a common pattern to call this
function for all your top-level buckets after you open your database so you can
guarantee that they exist for future transactions.
To delete a bucket, simply call the `Tx.DeleteBucket()` function.
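For example, a minimal sketch of this startup pattern (the bucket name is illustrative):
```go
db.Update(func(tx *bolt.Tx) error {
	// Returns the existing bucket if one is already present.
	_, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
	if err != nil {
		return fmt.Errorf("create bucket: %s", err)
	}
	// A bucket and all of its keys can later be removed with:
	//   tx.DeleteBucket([]byte("MyBucket"))
	return nil
})
```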
### Using key/value pairs
To save a key/value pair to a bucket, use the `Bucket.Put()` function:
```go
db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte("MyBucket"))
err := b.Put([]byte("answer"), []byte("42"))
return err
})
```
This will set the value of the `"answer"` key to `"42"` in the `MyBucket`
bucket. To retrieve this value, we can use the `Bucket.Get()` function:
```go
db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte("MyBucket"))
v := b.Get([]byte("answer"))
fmt.Printf("The answer is: %s\n", v)
return nil
})
```
The `Get()` function does not return an error because its operation is
guaranteed to work (unless there is some kind of system failure). If the key
exists then it will return its byte slice value. If it doesn't exist then it
will return `nil`. It's important to note that you can have a zero-length value
set to a key which is different than the key not existing.
Use the `Bucket.Delete()` function to delete a key from the bucket.
Please note that values returned from `Get()` are only valid while the
transaction is open. If you need to use a value outside of the transaction
then you must use `copy()` to copy it to another byte slice.
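A short sketch of both points, as two independent snippets assuming the `MyBucket` and `"answer"` examples above:
```go
// Copy a value out so it remains valid after the transaction closes.
var answer []byte
db.View(func(tx *bolt.Tx) error {
	v := tx.Bucket([]byte("MyBucket")).Get([]byte("answer"))
	answer = make([]byte, len(v))
	copy(answer, v)
	return nil
})

// Delete a key inside a read-write transaction.
db.Update(func(tx *bolt.Tx) error {
	return tx.Bucket([]byte("MyBucket")).Delete([]byte("answer"))
})
```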
### Autoincrementing integer for the bucket
By using the `NextSequence()` function, you can let Bolt determine a sequence
which can be used as the unique identifier for your key/value pairs. See the
example below.
```go
// CreateUser saves u to the store. The new user ID is set on u once the data is persisted.
func (s *Store) CreateUser(u *User) error {
return s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the users bucket.
// This should be created when the DB is first opened.
b := tx.Bucket([]byte("users"))
// Generate ID for the user.
// This returns an error only if the Tx is closed or not writeable.
// That can't happen in an Update() call so I ignore the error check.
id, _ := b.NextSequence()
u.ID = int(id)
// Marshal user data into bytes.
buf, err := json.Marshal(u)
if err != nil {
return err
}
// Persist bytes to users bucket.
return b.Put(itob(u.ID), buf)
})
}
// itob returns an 8-byte big endian representation of v.
func itob(v int) []byte {
b := make([]byte, 8)
binary.BigEndian.PutUint64(b, uint64(v))
return b
}
type User struct {
ID int
...
}
```
### Iterating over keys
Bolt stores its keys in byte-sorted order within a bucket. This makes sequential
iteration over these keys extremely fast. To iterate over keys we'll use a
`Cursor`:
```go
db.View(func(tx *bolt.Tx) error {
// Assume bucket exists and has keys
b := tx.Bucket([]byte("MyBucket"))
c := b.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
fmt.Printf("key=%s, value=%s\n", k, v)
}
return nil
})
```
The cursor allows you to move to a specific point in the list of keys and move
forward or backward through the keys one at a time.
The following functions are available on the cursor:
```
First() Move to the first key.
Last() Move to the last key.
Seek() Move to a specific key.
Next() Move to the next key.
Prev() Move to the previous key.
```
Each of those functions has a return signature of `(key []byte, value []byte)`.
When you have iterated to the end of the cursor then `Next()` will return a
`nil` key. You must seek to a position using `First()`, `Last()`, or `Seek()`
before calling `Next()` or `Prev()`. If you do not seek to a position then
these functions will return a `nil` key.
During iteration, if the key is non-`nil` but the value is `nil`, that means
the key refers to a bucket rather than a value. Use `Bucket.Bucket()` to
access the sub-bucket.
#### Prefix scans
To iterate over a key prefix, you can combine `Seek()` and `bytes.HasPrefix()`:
```go
db.View(func(tx *bolt.Tx) error {
// Assume bucket exists and has keys
c := tx.Bucket([]byte("MyBucket")).Cursor()
prefix := []byte("1234")
for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
fmt.Printf("key=%s, value=%s\n", k, v)
}
return nil
})
```
#### Range scans
Another common use case is scanning over a range such as a time range. If you
use a sortable time encoding such as RFC3339 then you can query a specific
date range like this:
```go
db.View(func(tx *bolt.Tx) error {
// Assume our events bucket exists and has RFC3339 encoded time keys.
c := tx.Bucket([]byte("Events")).Cursor()
// Our time range spans the 90's decade.
min := []byte("1990-01-01T00:00:00Z")
max := []byte("2000-01-01T00:00:00Z")
// Iterate over the 90's.
for k, v := c.Seek(min); k != nil && bytes.Compare(k, max) <= 0; k, v = c.Next() {
fmt.Printf("%s: %s\n", k, v)
}
return nil
})
```
Note that, while RFC3339 is sortable, the Golang implementation of RFC3339Nano does not use a fixed number of digits after the decimal point and is therefore not sortable.
#### ForEach()
You can also use the function `ForEach()` if you know you'll be iterating over
all the keys in a bucket:
```go
db.View(func(tx *bolt.Tx) error {
// Assume bucket exists and has keys
b := tx.Bucket([]byte("MyBucket"))
b.ForEach(func(k, v []byte) error {
fmt.Printf("key=%s, value=%s\n", k, v)
return nil
})
return nil
})
```
### Nested buckets
You can also store a bucket in a key to create nested buckets. The API is the
same as the bucket management API on the `DB` object:
```go
func (*Bucket) CreateBucket(key []byte) (*Bucket, error)
func (*Bucket) CreateBucketIfNotExists(key []byte) (*Bucket, error)
func (*Bucket) DeleteBucket(key []byte) error
```
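For example, a sketch that creates a bucket nested under another (the bucket names are illustrative):
```go
db.Update(func(tx *bolt.Tx) error {
	users, err := tx.CreateBucketIfNotExists([]byte("users"))
	if err != nil {
		return err
	}
	// Nested buckets use the same API, called on the parent bucket.
	_, err = users.CreateBucketIfNotExists([]byte("alice"))
	return err
})
```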
### Database backups
Bolt is a single file so it's easy to backup. You can use the `Tx.WriteTo()`
function to write a consistent view of the database to a writer. If you call
this from a read-only transaction, it will perform a hot backup and not block
your other database reads and writes.
By default, it will use a regular file handle which will utilize the operating
system's page cache. See the [`Tx`](https://godoc.org/github.com/boltdb/bolt#Tx)
documentation for information about optimizing for larger-than-RAM datasets.
One common use case is to backup over HTTP so you can use tools like `cURL` to
do database backups:
```go
func BackupHandleFunc(w http.ResponseWriter, req *http.Request) {
err := db.View(func(tx *bolt.Tx) error {
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", `attachment; filename="my.db"`)
w.Header().Set("Content-Length", strconv.Itoa(int(tx.Size())))
_, err := tx.WriteTo(w)
return err
})
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
```
Then you can backup using this command:
```sh
$ curl http://localhost/backup > my.db
```
Or you can open your browser to `http://localhost/backup` and it will download
automatically.
If you want to backup to another file you can use the `Tx.CopyFile()` helper
function.
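For example, a sketch using `Tx.CopyFile()` (the destination path is illustrative):
```go
err := db.View(func(tx *bolt.Tx) error {
	// Writes a consistent snapshot of the database to the given path.
	return tx.CopyFile("/tmp/my-backup.db", 0600)
})
if err != nil {
	log.Fatal(err)
}
```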
### Statistics
The database keeps a running count of many of the internal operations it
performs so you can better understand what's going on. By grabbing a snapshot
of these stats at two points in time we can see what operations were performed
in that time range.
For example, we could start a goroutine to log stats every 10 seconds:
```go
go func() {
// Grab the initial stats.
prev := db.Stats()
for {
// Wait for 10s.
time.Sleep(10 * time.Second)
// Grab the current stats and diff them.
stats := db.Stats()
diff := stats.Sub(&prev)
// Encode stats to JSON and print to STDERR.
json.NewEncoder(os.Stderr).Encode(diff)
// Save stats for the next loop.
prev = stats
}
}()
```
It's also useful to pipe these stats to a service such as statsd for monitoring
or to provide an HTTP endpoint that will perform a fixed-length sample.
### Read-Only Mode
Sometimes it is useful to create a shared, read-only Bolt database. To do this,
set the `Options.ReadOnly` flag when opening your database. Read-only mode
uses a shared lock to allow multiple processes to read from the database but
it will block any processes from opening the database in read-write mode.
```go
db, err := bolt.Open("my.db", 0666, &bolt.Options{ReadOnly: true})
if err != nil {
log.Fatal(err)
}
```
### Mobile Use (iOS/Android)
Bolt is able to run on mobile devices by leveraging the binding feature of the
[gomobile](https://github.com/golang/mobile) tool. Create a struct that will
contain your database logic and a reference to a `*bolt.DB` with an initializing
constructor that takes in a filepath where the database file will be stored.
Neither Android nor iOS requires extra permissions or cleanup from using this method.
```go
func NewBoltDB(filepath string) *BoltDB {
db, err := bolt.Open(filepath+"/demo.db", 0600, nil)
if err != nil {
log.Fatal(err)
}
return &BoltDB{db}
}
type BoltDB struct {
db *bolt.DB
...
}
func (b *BoltDB) Path() string {
return b.db.Path()
}
func (b *BoltDB) Close() {
b.db.Close()
}
```
Database logic should be defined as methods on this wrapper struct.
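For example, a hypothetical `Put` method on the wrapper (not part of the platform snippets below):
```go
// Put stores key/value in the named bucket, creating the bucket if needed.
func (b *BoltDB) Put(bucket, key, value string) error {
	return b.db.Update(func(tx *bolt.Tx) error {
		bkt, err := tx.CreateBucketIfNotExists([]byte(bucket))
		if err != nil {
			return err
		}
		return bkt.Put([]byte(key), []byte(value))
	})
}
```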
To initialize this struct from the native language, use the snippets below. (Both
platforms now sync their local storage to the cloud; these snippets disable that
functionality for the database file.)
#### Android
```java
String path;
if (android.os.Build.VERSION.SDK_INT >=android.os.Build.VERSION_CODES.LOLLIPOP){
path = getNoBackupFilesDir().getAbsolutePath();
} else{
path = getFilesDir().getAbsolutePath();
}
Boltmobiledemo.BoltDB boltDB = Boltmobiledemo.NewBoltDB(path)
```
#### iOS
```objc
- (void)demo {
NSString* path = [NSSearchPathForDirectoriesInDomains(NSLibraryDirectory,
NSUserDomainMask,
YES) objectAtIndex:0];
GoBoltmobiledemoBoltDB * demo = GoBoltmobiledemoNewBoltDB(path);
[self addSkipBackupAttributeToItemAtPath:demo.path];
//Some DB Logic would go here
[demo close];
}
- (BOOL)addSkipBackupAttributeToItemAtPath:(NSString *) filePathString
{
NSURL* URL= [NSURL fileURLWithPath: filePathString];
assert([[NSFileManager defaultManager] fileExistsAtPath: [URL path]]);
NSError *error = nil;
BOOL success = [URL setResourceValue: [NSNumber numberWithBool: YES]
forKey: NSURLIsExcludedFromBackupKey error: &error];
if(!success){
NSLog(@"Error excluding %@ from backup %@", [URL lastPathComponent], error);
}
return success;
}
```
## Resources
For more information on getting started with Bolt, check out the following articles:
* [Intro to BoltDB: Painless Performant Persistence](http://npf.io/2014/07/intro-to-boltdb-painless-performant-persistence/) by [Nate Finch](https://github.com/natefinch).
* [Bolt -- an embedded key/value database for Go](https://www.progville.com/go/bolt-embedded-db-golang/) by Progville
## Comparison with other databases
### Postgres, MySQL, & other relational databases
Relational databases structure data into rows and are only accessible through
the use of SQL. This approach provides flexibility in how you store and query
your data but also incurs overhead in parsing and planning SQL statements. Bolt
accesses all data by a byte slice key. This makes Bolt fast to read and write
data by key but provides no built-in support for joining values together.
Most relational databases (with the exception of SQLite) are standalone servers
that run separately from your application. This gives your systems
flexibility to connect multiple application servers to a single database
server but also adds overhead in serializing and transporting data over the
network. Bolt runs as a library included in your application so all data access
has to go through your application's process. This brings data closer to your
application but limits multi-process access to the data.
### LevelDB, RocksDB
LevelDB and its derivatives (RocksDB, HyperLevelDB) are similar to Bolt in that
they are libraries bundled into the application, however, their underlying
structure is a log-structured merge-tree (LSM tree). An LSM tree optimizes
random writes by using a write ahead log and multi-tiered, sorted files called
SSTables. Bolt uses a B+tree internally and only a single file. Both approaches
have trade-offs.
If you require a high random write throughput (>10,000 w/sec) or you need to use
spinning disks then LevelDB could be a good choice. If your application is
read-heavy or does a lot of range scans then Bolt could be a good choice.
One other important consideration is that LevelDB does not have transactions.
It supports batch writing of key/values pairs and it supports read snapshots
but it will not give you the ability to do a compare-and-swap operation safely.
Bolt supports fully serializable ACID transactions.
### LMDB
Bolt was originally a port of LMDB so it is architecturally similar. Both use
a B+tree, have ACID semantics with fully serializable transactions, and support
lock-free MVCC using a single writer and multiple readers.
The two projects have somewhat diverged. LMDB heavily focuses on raw performance
while Bolt has focused on simplicity and ease of use. For example, LMDB allows
several unsafe actions such as direct writes for the sake of performance. Bolt
opts to disallow actions which can leave the database in a corrupted state. The
only exception to this in Bolt is `DB.NoSync`.
There are also a few differences in API. LMDB requires a maximum mmap size when
opening an `mdb_env` whereas Bolt will handle incremental mmap resizing
automatically. LMDB overloads the getter and setter functions with multiple
flags whereas Bolt splits these specialized cases into their own functions.
## Caveats & Limitations
It's important to pick the right tool for the job and Bolt is no exception.
Here are a few things to note when evaluating and using Bolt:
* Bolt is good for read-intensive workloads. Sequential write performance is
also fast but random writes can be slow. You can use `DB.Batch()` or add a
write-ahead log to help mitigate this issue (see the sketch after this list).
* Bolt uses a B+tree internally so there can be a lot of random page access.
SSDs provide a significant performance boost over spinning disks.
* Try to avoid long running read transactions. Bolt uses copy-on-write so
old pages cannot be reclaimed while an old transaction is using them.
* Byte slices returned from Bolt are only valid during a transaction. Once the
transaction has been committed or rolled back, the memory they point to can be
reused by a new page or unmapped from virtual memory, and you'll see an
`unexpected fault address` panic when accessing them. Copy any data you need
to keep (see the sketch after this list).
* Be careful when using `Bucket.FillPercent`. Setting a high fill percent for
buckets that have random inserts will cause your database to have very poor
page utilization.
* Use larger buckets in general. Smaller buckets cause poor page utilization
once they become larger than the page size (typically 4KB).
* Bulk loading a lot of random writes into a new bucket can be slow as the
page will not split until the transaction is committed. Randomly inserting
more than 100,000 key/value pairs into a single new bucket in a single
transaction is not advised.
* Bolt uses a memory-mapped file so the underlying operating system handles the
caching of the data. Typically, the OS will cache as much of the file as it
can in memory and will release memory as needed to other processes. This means
that Bolt can show very high memory usage when working with large databases.
However, this is expected and the OS will release memory as needed. Bolt can
handle databases much larger than the available physical RAM, provided its
memory-map fits in the process virtual address space. This may be problematic
on 32-bit systems.
* The data structures in the Bolt database are memory mapped so the data file
will be endian specific. This means that you cannot copy a Bolt file from a
little endian machine to a big endian machine and have it work. For most
users this is not a concern since most modern CPUs are little endian.
* Because of the way pages are laid out on disk, Bolt cannot truncate data files
and return free pages back to the disk. Instead, Bolt maintains a free list
of unused pages within its data file. These free pages can be reused by later
transactions. This works well for many use cases as databases generally tend
to grow. However, it's important to note that deleting large chunks of data
will not allow you to reclaim that space on disk.
For more information on page allocation, [see this comment][page-allocation].
[page-allocation]: https://github.com/boltdb/bolt/issues/308#issuecomment-74811638
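Two of the caveats above lend themselves to a short sketch: `DB.Batch()` coalesces writes from concurrent goroutines into fewer commits (and thus fewer `fsync()` calls), and any value returned by `Get()` must be copied before the transaction closes if you intend to keep it. File, bucket, and key names below are illustrative:
```go
package main

import (
	"fmt"
	"log"
	"sync"

	"github.com/boltdb/bolt"
)

func main() {
	db, err := bolt.Open("app.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("counters"))
		return err
	}); err != nil {
		log.Fatal(err)
	}

	// DB.Batch coalesces these concurrent writers into fewer
	// commits, amortizing the per-transaction fsync cost.
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			key := []byte(fmt.Sprintf("k%d", i))
			if err := db.Batch(func(tx *bolt.Tx) error {
				return tx.Bucket([]byte("counters")).Put(key, []byte("v"))
			}); err != nil {
				log.Println(err)
			}
		}(i)
	}
	wg.Wait()

	// Copy the value out before the transaction closes; the slice
	// returned by Get is only valid while the transaction is open.
	var v []byte
	if err := db.View(func(tx *bolt.Tx) error {
		v = append(v, tx.Bucket([]byte("counters")).Get([]byte("k0"))...)
		return nil
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("k0 = %s\n", v)
}
```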
## Reading the Source
Bolt is a relatively small code base (<3KLOC) for an embedded, serializable,
transactional key/value database so it can be a good starting point for people
interested in how databases work.
The best places to start are the main entry points into Bolt:
- `Open()` - Initializes the reference to the database. It's responsible for
creating the database if it doesn't exist, obtaining an exclusive lock on the
file, reading the meta pages, & memory-mapping the file.
- `DB.Begin()` - Starts a read-only or read-write transaction depending on the
value of the `writable` argument. This requires briefly obtaining the "meta"
lock to keep track of open transactions. Only one read-write transaction can
exist at a time so the "rwlock" is acquired during the life of a read-write
transaction.
- `Bucket.Put()` - Writes a key/value pair into a bucket. After validating the
arguments, a cursor is used to traverse the B+tree to the page and position
where the key & value will be written. Once the position is found, the bucket
materializes the underlying page and the page's parent pages into memory as
"nodes". These nodes are where mutations occur during read-write transactions.
These changes get flushed to disk during commit.
- `Bucket.Get()` - Retrieves a key/value pair from a bucket. This uses a cursor
to move to the page & position of a key/value pair. During a read-only
transaction, the key and value data is returned as a direct reference to the
underlying mmap file so there's no allocation overhead. For read-write
transactions, this data may reference the mmap file or one of the in-memory
node values.
- `Cursor` - This object is simply for traversing the B+tree of on-disk pages
or in-memory nodes. It can seek to a specific key, move to the first or last
value, or it can move forward or backward. The cursor handles the movement up
and down the B+tree transparently to the end user.
- `Tx.Commit()` - Converts the in-memory dirty nodes and the list of free pages
into pages to be written to disk. Writing to disk then occurs in two phases.
First, the dirty pages are written to disk and an `fsync()` occurs. Second, a
new meta page with an incremented transaction ID is written and another
`fsync()` occurs. This two-phase write ensures that partially written data
pages are ignored in the event of a crash since the meta page pointing to them
is never written. Partially written meta pages are invalidated because they
are written with a checksum.
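Tying these entry points together, here is a hedged sketch that exercises them directly: `Open()`, an explicit `DB.Begin()`/`Tx.Commit()` pair, `Bucket.Put()`, and a `Cursor` scan. File, bucket, and key names are illustrative:
```go
package main

import (
	"fmt"
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	// Open() creates the file if needed, obtains the exclusive lock,
	// reads the meta pages, and memory-maps the file.
	db, err := bolt.Open("tour.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// DB.Begin(true) starts the single read-write transaction,
	// holding the "rwlock" until Commit or Rollback.
	tx, err := db.Begin(true)
	if err != nil {
		log.Fatal(err)
	}
	defer tx.Rollback() // harmless after a successful Commit

	b, err := tx.CreateBucketIfNotExists([]byte("tour"))
	if err != nil {
		log.Fatal(err)
	}

	// Bucket.Put materializes the affected pages as in-memory nodes.
	if err := b.Put([]byte("a"), []byte("1")); err != nil {
		log.Fatal(err)
	}
	if err := b.Put([]byte("b"), []byte("2")); err != nil {
		log.Fatal(err)
	}

	// Cursor walks the B+tree in key order.
	c := b.Cursor()
	for k, v := c.First(); k != nil; k, v = c.Next() {
		fmt.Printf("%s = %s\n", k, v)
	}

	// Tx.Commit writes dirty pages and fsyncs, then writes the new
	// meta page and fsyncs again (the two-phase write described above).
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```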
If you have additional notes that could be helpful for others, please submit
them via pull request.
## Other Projects Using Bolt
Below is a list of public, open source projects that use Bolt:
* [BoltDbWeb](https://github.com/evnix/boltdbweb) - A web based GUI for BoltDB files.
* [Operation Go: A Routine Mission](http://gocode.io) - An online programming game for Golang using Bolt for user accounts and a leaderboard.
* [Bazil](https://bazil.org/) - A file system that lets your data reside where it is most convenient for it to reside.
* [DVID](https://github.com/janelia-flyem/dvid) - Added Bolt as an optional storage engine and is testing it against Basho-tuned leveldb.
* [Skybox Analytics](https://github.com/skybox/skybox) - A standalone funnel analysis tool for web analytics.
* [Scuttlebutt](https://github.com/benbjohnson/scuttlebutt) - Uses Bolt to store and process all Twitter mentions of GitHub projects.
* [Wiki](https://github.com/peterhellberg/wiki) - A tiny wiki using Goji, BoltDB and Blackfriday.
* [ChainStore](https://github.com/pressly/chainstore) - Simple key-value interface to a variety of storage engines organized as a chain of operations.
* [MetricBase](https://github.com/msiebuhr/MetricBase) - Single-binary version of Graphite.
* [Gitchain](https://github.com/gitchain/gitchain) - Decentralized, peer-to-peer Git repositories aka "Git meets Bitcoin".
* [event-shuttle](https://github.com/sclasen/event-shuttle) - A Unix system service to collect and reliably deliver messages to Kafka.
* [ipxed](https://github.com/kelseyhightower/ipxed) - Web interface and api for ipxed.
* [BoltStore](https://github.com/yosssi/boltstore) - Session store using Bolt.
* [photosite/session](https://godoc.org/bitbucket.org/kardianos/photosite/session) - Sessions for a photo viewing site.
* [LedisDB](https://github.com/siddontang/ledisdb) - A high performance NoSQL, using Bolt as optional storage.
* [ipLocator](https://github.com/AndreasBriese/ipLocator) - A fast ip-geo-location-server using bolt with bloom filters.
* [cayley](https://github.com/google/cayley) - Cayley is an open-source graph database using Bolt as optional backend.
* [bleve](http://www.blevesearch.com/) - A pure Go search engine similar to ElasticSearch that uses Bolt as the default storage backend.
* [tentacool](https://github.com/optiflows/tentacool) - REST api server to manage system stuff (IP, DNS, Gateway...) on a linux server.
* [Seaweed File System](https://github.com/chrislusf/seaweedfs) - Highly scalable distributed key~file system with O(1) disk read.
* [InfluxDB](https://influxdata.com) - Scalable datastore for metrics, events, and real-time analytics.
* [Freehold](http://tshannon.bitbucket.org/freehold/) - An open, secure, and lightweight platform for your files and data.
* [Prometheus Annotation Server](https://github.com/oliver006/prom_annotation_server) - Annotation server for PromDash & Prometheus service monitoring system.
* [Consul](https://github.com/hashicorp/consul) - Consul is service discovery and configuration made easy. Distributed, highly available, and datacenter-aware.
* [Kala](https://github.com/ajvb/kala) - Kala is a modern job scheduler optimized to run on a single node. It is persistent and offers a JSON-over-HTTP API, ISO 8601 duration notation, and dependent jobs.
* [drive](https://github.com/odeke-em/drive) - drive is an unofficial Google Drive command line client for \*NIX operating systems.
* [stow](https://github.com/djherbis/stow) - a persistence manager for objects
backed by boltdb.
* [buckets](https://github.com/joyrexus/buckets) - a bolt wrapper streamlining
simple tx and key scans.
* [mbuckets](https://github.com/abhigupta912/mbuckets) - A Bolt wrapper that allows easy operations on multi level (nested) buckets.
* [Request Baskets](https://github.com/darklynx/request-baskets) - A web service to collect arbitrary HTTP requests and inspect them via REST API or simple web UI, similar to [RequestBin](http://requestb.in/) service
* [Go Report Card](https://goreportcard.com/) - Go code quality report cards as a (free and open source) service.
* [Boltdb Boilerplate](https://github.com/bobintornado/boltdb-boilerplate) - Boilerplate wrapper around bolt aiming to make simple calls one-liners.
* [lru](https://github.com/crowdriff/lru) - Easy to use Bolt-backed Least-Recently-Used (LRU) read-through cache with chainable remote stores.
* [Storm](https://github.com/asdine/storm) - Simple and powerful ORM for BoltDB.
* [GoWebApp](https://github.com/josephspurrier/gowebapp) - A basic MVC web application in Go using BoltDB.
* [SimpleBolt](https://github.com/xyproto/simplebolt) - A simple way to use BoltDB. Deals mainly with strings.
* [Algernon](https://github.com/xyproto/algernon) - An HTTP/2 web server with built-in support for Lua. Uses BoltDB as the default database backend.
* [MuLiFS](https://github.com/dankomiocevic/mulifs) - Music Library Filesystem creates a filesystem to organise your music files.
* [GoShort](https://github.com/pankajkhairnar/goShort) - GoShort is a URL shortener written in Go that uses BoltDB for persistent key/value storage and the high-performance HTTPRouter for routing.
If you are using Bolt in a project please send a pull request to add it to the list.

cmd/vendor/github.com/boltdb/bolt/appveyor.yml (generated, vendored)

@@ -0,0 +1,18 @@
version: "{build}"

os: Windows Server 2012 R2

clone_folder: c:\gopath\src\github.com\boltdb\bolt

environment:
  GOPATH: c:\gopath

install:
  - echo %PATH%
  - echo %GOPATH%
  - go version
  - go env
  - go get -v -t ./...

build_script:
  - go test -v ./...

Some files were not shown because too many files have changed in this diff.