Compare commits

...

235 Commits

Author SHA1 Message Date
Benjamin Wang
85b640cee7 Bump version to 3.4.21
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-09-15 08:46:22 +08:00
Marek Siarkowicz
1a05326fae Merge pull request #14442 from ahrtr/fix_TestV3AuthRestartMember
[release-3.4] Fix the flaky test TestV3AuthRestartMember
2022-09-09 09:57:24 +02:00
Benjamin Wang
b8bea91f22 fix the flaky test TestV3AuthRestartMember
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-09-09 09:37:25 +08:00
Benjamin Wang
6730ed8477 Merge pull request #14410 from vivekpatani/release-3.4
[release-3.4] server,test: refresh cache on each NewAuthStore
2022-09-09 09:34:32 +08:00
Benjamin Wang
a55a9f5e07 Merge pull request #14441 from tjungblu/bz_1918413_3.4_upstream
[release-3.4] etcdctl: fix move-leader for multiple endpoints
2022-09-09 09:26:40 +08:00
Thomas Jungblut
86bc0a25c4 etcdctl: fix move-leader for multiple endpoints
Due to a duplicate call of clientConfigFromCmd, the move-leader command
would fail with "conflicting environment variable is shadowed by corresponding command-line flag",
even in scenarios where no command-line flag was supplied.

Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>
2022-09-08 15:51:19 +02:00
Benjamin Wang
dd743eea81 Merge pull request #14439 from vsvastey/usr/vsvastey/open-with-max-index-test-fix-3.4
[release-3.4] testing: fix TestOpenWithMaxIndex cleanup
2022-09-08 17:00:20 +08:00
Vladimir Sokolov
1ed5dfc20e testing: fix TestOpenWithMaxIndex cleanup
A WAL object was closed by defer; however, the WAL was rewritten afterwards,
so the defer closed the already-closed WAL but not the new one. This caused a data
race between writing the file and cleaning up a temporary test directory,
which led to a non-deterministic bug.

Fixes #14332

Signed-off-by: Vladimir Sokolov <vsvastey@gmail.com>
2022-09-08 10:49:47 +03:00
Benjamin Wang
b2b7b9d535 Merge pull request #14423 from serathius/one_member_data_loss_raft_3_4
[release-3.4] fix the potential data loss for clusters with only one member
2022-09-06 03:29:45 +08:00
Benjamin Wang
119e4dda19 fix the potential data loss for clusters with only one member
For a cluster with only one member, the raft always sends identical
unstable entries and committed entries to etcdserver, and etcd
responds to the client once it finishes (actually partially) the
applying workflow.

When the client receives the response, it doesn't mean etcd has already
successfully saved the data, including BoltDB and WAL, because:
   1. etcd commits the boltDB transaction periodically instead of on each request;
   2. etcd saves WAL entries in parallel with applying the committed entries.
Accordingly, it may run into a situation of data loss when etcd crashes
immediately after responding to the client and before boltDB and the WAL
have successfully saved the data to disk.
Note that this issue can only happen for clusters with only one member.

For clusters with multiple members, it isn't an issue, because etcd will
not commit & apply the data before it has been replicated to a majority of the members.
When the client receives the response, it means the data must have been applied.
It further means the data must have been committed.
Note: for clusters with multiple members, the raft will never send identical
unstable entries and committed entries to etcdserver.

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-09-05 14:15:47 +02:00
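The ordering hazard described in the commit above can be illustrated with a small, self-contained Go sketch. This is a hedged toy model, not etcd's actual code: the client is answered as soon as the entry is applied, while the WAL write races ahead in a separate goroutine.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	walDone := make(chan struct{})

	// The WAL save runs in parallel with the apply workflow.
	go func() {
		time.Sleep(10 * time.Millisecond) // simulated fsync latency
		close(walDone)
	}()

	// The entry is applied and the client is answered immediately.
	fmt.Println("applied entry, responding to client")

	// If the process crashes in this window, the acknowledged write
	// exists neither in boltDB (committed periodically) nor in the WAL.
	select {
	case <-walDone:
		fmt.Println("WAL persisted; the ack was safe")
	case <-time.After(time.Millisecond):
		fmt.Println("crash here => data loss despite the client ack")
	}
}
```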
Benjamin Wang
9d5ae56764 Merge pull request #14420 from vsvastey/usr/vsvastey/nil-logger
etcdserver: nil-logger issue fix for version 3.4
2022-09-05 14:53:08 +08:00
Vladimir Sokolov
38342e88da etcdserver: nil-logger issue fix for version 3.4
In v3.5 it is assumed that the logger should not be nil; however, it can
still be nil in v3.4. The PR targeted at v3.5 was backported to 3.4, and
that's why it's possible to get a panic on a nil logger in 3.4. This commit
fixes the issue.

Fixes #14402

Signed-off-by: Vladimir Sokolov <vsvastey@gmail.com>
2022-09-03 04:34:03 +03:00
vivekpatani
c0ef7d52e0 server,test: refresh cache on each NewAuthStore
- permissions were incorrectly loaded on restarts.
- #14355
- Backport of https://github.com/etcd-io/etcd/pull/14358

Signed-off-by: vivekpatani <9080894+vivekpatani@users.noreply.github.com>
2022-08-31 13:08:11 -07:00
Benjamin Wang
1e2682301c Bump version to 3.4.20
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-08-06 05:27:01 +08:00
Sahdev Zala
ee366151c6 Merge pull request #14290 from ahrtr/3.4_no_prevkv_for_create
[3.4] Do not get previous K/V for create event
2022-08-01 08:39:19 -04:00
Benjamin Wang
095bbfc4ed lock down the version of shadow to v0.1.11
The latest version v0.1.12 was just released on Jul 27, 2022,
and it is causing the issue below in the govet check,

```
govet_shadow' started at Sun Jul 31 23:23:27 PDT 2022
go get: upgraded golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 => v0.0.0-20220722155237-a158d28d115b
go get: upgraded golang.org/x/sys v0.0.0-20211019181941-9d821ace8654 => v0.0.0-20220722155257-8c9f86f7a55f
go get: upgraded golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135 => v0.1.12
/root/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-prometheus@v1.2.0/client_metrics.go:7:2: missing go.sum entry for module providing package golang.org/x/net/context (imported by go.etcd.io/etcd/etcdserver/etcdserverpb); to add:
	go get go.etcd.io/etcd/etcdserver/etcdserverpb
/root/go/pkg/mod/google.golang.org/grpc@v1.26.0/internal/transport/controlbuf.go:28:2: missing go.sum entry for module providing package golang.org/x/net/http2 (imported by go.etcd.io/etcd/embed); to add:
	go get go.etcd.io/etcd/embed
/root/go/pkg/mod/google.golang.org/grpc@v1.26.0/internal/transport/controlbuf.go:29:2: missing go.sum entry for module providing package golang.org/x/net/http2/hpack (imported by github.com/soheilhy/cmux); to add:
	go get github.com/soheilhy/cmux@v0.1.4
/root/go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:36:2: missing go.sum entry for module providing package golang.org/x/net/trace (imported by go.etcd.io/etcd/embed); to add:
	go get go.etcd.io/etcd/embed
```

It isn't good to always use the latest version. Instead, we should
lock down the version, and v0.1.11 was confirmed to be working.

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-08-01 15:11:49 +08:00
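For reference, pinning a Go analysis tool to a known-good version follows the `go get` pattern below. This is a hedged example: the module path shown is the upstream home of the shadow analyzer, and the exact invocation in etcd's test script may differ.

```
go get golang.org/x/tools/go/analysis/passes/shadow/cmd/shadow@v0.1.11
```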
Benjamin Wang
cc1b0e6a44 do not get previous K/V for create event
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-08-01 13:11:46 +08:00
Benjamin Wang
314dcbf6f5 Merge pull request #14274 from lavacat/release-3.4-fix-TestRoundRobinBalancedResolvableFailoverFromServerFail
[3.4] clientv3/balancer: fixed flaky TestRoundRobinBalancedResolvableFailoverFromServerFail
2022-07-27 04:59:38 +08:00
Bogdan Kanivets
6f483a649e clientv3/balancer: fixed flaky TestRoundRobinBalancedResolvableFailoverFromServerFail
- ignore "transport is closing" error during connections warmup after stopping one peer.

Signed-off-by: Bogdan Kanivets <bkanivets@apple.com>
2022-07-26 08:06:59 -07:00
Benjamin Wang
ce539a960c Merge pull request #14279 from SimFG/mvcc-race
[3.4] clientv3/mvcc: fixed DATA RACE
2022-07-26 23:01:34 +08:00
SimFG
04e5e5516e [3.4] clientv3/mvcc: fixed DATA RACE between mvcc.(*store).setupMetricsReporter and mvcc.(*store).restore
Signed-off-by: SimFG <1142838399@qq.com>
2022-07-26 21:38:23 +08:00
Benjamin Wang
2c778eebf7 Merge pull request #14269 from ahrtr/3.4_resend_readindex
[3.4] etcdserver: resend ReadIndex request on empty apply request
2022-07-25 16:53:06 +08:00
Benjamin Wang
f53db9b246 etcdserver: resend ReadIndex request on empty apply request
Backport https://github.com/etcd-io/etcd/pull/12795 to 3.4

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-25 09:21:31 +08:00
Benjamin Wang
e2b36f8879 Merge pull request #14253 from serathius/checkpoints-fix-3.4
[3.4] Checkpoints fix 3.4
2022-07-22 16:56:17 +08:00
Benjamin Wang
de2e8ccc78 Merge pull request #14258 from ahrtr/3.4_postphone_read_index
[3.4] raft: postpone MsgReadIndex until first commit in the term
2022-07-22 16:46:32 +08:00
Marek Siarkowicz
783e99cbfe Fix lease checkpointing tests by forcing a snapshot
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
2022-07-22 10:28:44 +02:00
Marek Siarkowicz
8f4735dfd4 server: Require either cluster version v3.6 or --experimental-enable-lease-checkpoint-persist to persist lease remainingTTL
To avoid inconsistent behavior during cluster upgrades we are feature
gating persistence behind cluster version. This should ensure that
all cluster members are upgraded to v3.6 before changing behavior.

To allow backporting this fix to v3.5 we are also introducing flag
--experimental-enable-lease-checkpoint-persist that will allow for
smooth upgrade in v3.5 clusters with this feature enabled.

Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
2022-07-22 10:28:29 +02:00
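Based on the flag names given in this commit message, enabling persisted lease checkpoints would look roughly like the following. This is a hedged example; `--experimental-enable-lease-checkpoint` is the pre-existing companion flag that turns checkpointing itself on.

```
etcd --experimental-enable-lease-checkpoint --experimental-enable-lease-checkpoint-persist
```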
Benjamin Wang
9c9148c4cd raft: postpone MsgReadIndex until first commit in the term
Backport https://github.com/etcd-io/etcd/pull/12762 to 3.4

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-22 13:56:27 +08:00
Benjamin Wang
f18d074866 Merge pull request #14254 from ramses/backport-13435
[3.4] Backport: non mutating requests pass through quotaKVServer when NOSPACE
2022-07-22 09:27:00 +08:00
Benjamin Wang
aca5cd1717 Merge pull request #14246 from vivekpatani/release-3.4
[3.4] etcdserver,pkg: remove temp files in snap dir when etcdserver starting
2022-07-22 09:14:23 +08:00
vivekpatani
e4deb09c9e etcdserver,pkg: remove temp files in snap dir when etcdserver starting
- Backporting: https://github.com/etcd-io/etcd/pull/12846
- Reference: https://github.com/etcd-io/etcd/issues/14232

Signed-off-by: vivekpatani <9080894+vivekpatani@users.noreply.github.com>
2022-07-21 15:50:27 -07:00
Chao Chen
96f69dee47 Backport: non mutating requests pass through quotaKVServer when NOSPACE
This is a backport of https://github.com/etcd-io/etcd/pull/13435 and is
part of the work for 3.4.20
https://github.com/etcd-io/etcd/issues/14232.

The original change had a second commit that modifies a changelog file.
The 3.4 branch does not include any changelog file, so that part was not
cherry-picked.

Local Testing:

- `make build`
- `make test`

Both succeed.

Signed-off-by: Ramsés Morales <ramses@gmail.com>
2022-07-21 15:06:09 -07:00
Michał Jasionowski
8d83691d53 etcdserver,integration: Store remaining TTL on checkpoint
Extend the lease checkpointing mechanism to cases where the whole etcd
cluster is restarted.

Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
2022-07-21 17:35:21 +02:00
Michał Jasionowski
a30aba8fc2 lease,integration: add checkpoint scheduling after leader change
The current checkpointing mechanism is buggy: new checkpoints for any lease
are scheduled only until the first leader change. Added a fix for that
and a test that checks it.

Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
2022-07-21 17:35:15 +02:00
Benjamin Wang
7ee7029c08 Merge pull request #14251 from ahrtr/3.4_maxstream
[3.4] Support configuring MaxConcurrentStreams for http2
2022-07-21 17:43:15 +08:00
Benjamin Wang
6071b1c523 Support configuring MaxConcurrentStreams for http2
Backport https://github.com/etcd-io/etcd/pull/14219 to 3.4

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-21 14:25:29 +08:00
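For illustration, capping concurrent HTTP/2 streams on a Go gRPC server uses the standard server option shown below. This is a minimal sketch; how etcd wires the configured value into its listeners is not shown here.

```go
package server

import "google.golang.org/grpc"

// newGRPCServer caps the number of concurrent HTTP/2 streams each
// client connection may open, protecting the server from a single
// client exhausting its resources.
func newGRPCServer(maxStreams uint32) *grpc.Server {
	return grpc.NewServer(grpc.MaxConcurrentStreams(maxStreams))
}
```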
Benjamin Wang
40ccb8b454 Merge pull request #14240 from chaochn47/cherry-pick-12335
[3.4] etcdserver: add more detailed traces on linearized reading
2022-07-21 03:32:02 +08:00
Chao Chen
864006b72d print out applied index as uint64
Signed-off-by: Chao Chen <chaochn@amazon.com>
2022-07-20 12:07:51 -07:00
Pierre Zemb
3f9fba9112 etcdserver: add more detailed traces on linearized reading
To improve debuggability of `agreement among raft nodes before
linearized reading`, we added some tracing inside
`linearizableReadLoop`.

This will allow us to know the timing of `s.r.ReadIndex` vs
`s.applyWait.Wait(rs.Index)`.

Signed-off-by: Chao Chen <chaochn@amazon.com>
2022-07-20 12:07:51 -07:00
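The two waits being traced can be sketched generically as below. This is a hedged illustration: `readIndex` and `applyWait` stand in for `s.r.ReadIndex` and `s.applyWait.Wait(rs.Index)`, and the real code records trace steps rather than a single log line.

```go
package server

import (
	"log"
	"time"
)

// traceLinearizableRead times the two phases of a linearizable read:
// obtaining the read index from the raft leader, then waiting for the
// local applied index to catch up to it.
func traceLinearizableRead(readIndex func() uint64, applyWait func(uint64)) {
	start := time.Now()
	idx := readIndex()
	agreementDone := time.Since(start)

	applyWait(idx)
	total := time.Since(start)

	log.Printf("readIndex took %v, applyWait took %v", agreementDone, total-agreementDone)
}
```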
Benjamin Wang
fc76e90cf2 Merge pull request #14230 from mitake/perm-cache-lock-3.4
server/auth: protect rangePermCache with a RW lock
2022-07-20 18:51:54 +08:00
Benjamin Wang
3ea12d352e Merge pull request #14241 from vivekpatani/release-3.4
clientv3: fix isOptsWithFromKey/isOptsWithPrefix
2022-07-20 18:51:24 +08:00
Benjamin Wang
6313502fb4 Merge pull request #14239 from chaochn47/backport-13676
backport 3.5: #13676 load all leases from backend
2022-07-20 18:50:46 +08:00
Benjamin Wang
b0e1aaef69 Merge pull request #14236 from chrisayoub/release-3.4
[release-3.4] clientv3: filter learners members during autosync
2022-07-20 12:49:31 +08:00
Chris Ayoub
36a76e8531 clientv3: filter learners members during autosync
This change is to ensure that all members returned during the client's
AutoSync are started and are not learners, which are not valid
etcd members to make requests to.

Signed-off-by: Chris Ayoub <cayoub@hubspot.com>
2022-07-20 00:04:03 -04:00
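The selection rule can be sketched as below. This is a hedged sketch: the local struct mirrors the etcd member fields `IsLearner` and `ClientURLs`, but it is not the clientv3 code itself.

```go
package client

// member mirrors the fields of an etcd cluster member relevant here.
type member struct {
	IsLearner  bool
	ClientURLs []string // empty until the member has started
}

// syncableEndpoints keeps only members that are started (have client
// URLs) and are not learners, since learners are not valid targets
// for client requests.
func syncableEndpoints(members []member) []string {
	var eps []string
	for _, m := range members {
		if m.IsLearner || len(m.ClientURLs) == 0 {
			continue
		}
		eps = append(eps, m.ClientURLs...)
	}
	return eps
}
```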
vivekpatani
4fef7fcb90 clientv3: fix isOptsWithFromKey/isOptsWithPrefix
- Addressing: https://github.com/etcd-io/etcd/issues/13332
- Backporting: https://github.com/etcd-io/etcd/pull/13334

Signed-off-by: vivekpatani <9080894+vivekpatani@users.noreply.github.com>
2022-07-19 17:20:56 -07:00
Chao Chen
fd51434b54 backport 3.5: #13676 load all leases from backend
Signed-off-by: Chao Chen <chaochn@amazon.com>
2022-07-19 16:08:01 -07:00
Benjamin Wang
d58a0c0434 Merge pull request #14177 from ahrtr/3.4_lease_renew_linearizable
[3.4] Support linearizable renew lease for 3.4
2022-07-19 16:39:00 +08:00
Hitoshi Mitake
ecd91da40d server/auth: protect rangePermCache with a RW lock
Signed-off-by: Hitoshi Mitake <h.mitake@gmail.com>
2022-07-19 15:51:48 +09:00
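The locking pattern is the standard read/write-mutex guard sketched below. Field and method names are illustrative rather than etcd's exact ones.

```go
package auth

import "sync"

type authStore struct {
	mu             sync.RWMutex
	rangePermCache map[string]bool // user -> cached range permission
}

// isRangePermitted takes the read lock, so many readers can consult
// the cache concurrently without racing against a rebuild.
func (as *authStore) isRangePermitted(user string) bool {
	as.mu.RLock()
	defer as.mu.RUnlock()
	return as.rangePermCache[user]
}

// refreshRangePermCache takes the write lock, excluding all readers
// while the cache is replaced.
func (as *authStore) refreshRangePermCache(fresh map[string]bool) {
	as.mu.Lock()
	defer as.mu.Unlock()
	as.rangePermCache = fresh
}
```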
Benjamin Wang
07d2b1d626 support linearizable renew lease for 3.4
Cherry pick https://github.com/etcd-io/etcd/pull/13932 to 3.4.

When etcdserver receives a LeaseRenew request, it may still be in the
middle of processing the LeaseGrant request on exactly the same
leaseID. Accordingly, it may return TTL=0 to the client due to the
"lease not found" error. So the leader should wait for the applied index
to be available before processing client requests.

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-19 13:34:55 +08:00
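The waiting discipline amounts to the hedged toy sketch below; real code waits on a notifier keyed by index rather than a single channel, and the names here are illustrative.

```go
package server

// waitApplied blocks until the applied index reaches target, waking on
// each notification that the applied index has advanced. The leader
// calls this before serving a LeaseRenew so that a just-granted lease
// on the same leaseID is guaranteed to be visible.
func waitApplied(applied func() uint64, target uint64, advanced <-chan struct{}) {
	for applied() < target {
		<-advanced
	}
}
```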
Benjamin Wang
4636a5fab4 Bump version to 3.4.19
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-12 16:18:45 +08:00
Benjamin Wang
06561ae4bf Merge pull request #14210 from ahrtr/fix_release_script
[3.4] Fix pipeline failure for release test
2022-07-12 16:06:33 +08:00
Benjamin Wang
be0ce4f15b fix pipeline failure for release test
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-12 08:31:59 +08:00
Benjamin Wang
d3dfc9b796 Merge pull request #14204 from lavacat/release-3.4-balancer-tests
clientv3/balance: fixed flaky balancer tests
2022-07-12 06:14:35 +08:00
Bogdan Kanivets
185f203528 clientv3/balance: fixed flaky balancer tests
- added a verification step to indirectly verify that all peers are in the balancer subconn list

Signed-off-by: Bogdan Kanivets <bkanivets@apple.com>
2022-07-11 14:43:58 -07:00
Benjamin Wang
7de53273dd Merge pull request #14205 from ahrtr/3.4_release_script
[3.4] Update release scripts for release-3.4
2022-07-11 20:06:06 +08:00
Benjamin Wang
6cc9416ae5 backport release test to 3.4
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-11 19:47:08 +08:00
Benjamin Wang
e6b3d97712 Update release scripts for release-3.4
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-11 16:06:32 +08:00
Marek Siarkowicz
852ac37bc0 Merge pull request #14200 from ahrtr/3.4_pipeline_race
set RACE as true for linux-amd64-unit and linux-amd64-grpcproxy
2022-07-08 10:23:21 +02:00
Benjamin Wang
8c1c5fefdb set RACE as true for linux-amd64-unit and linux-amd64-grpcproxy
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-08 08:37:31 +08:00
Marek Siarkowicz
0c6063fa82 Merge pull request #14192 from ahrtr/3.4_bump_yaml
[3.4] Bump gopkg.in/yaml.v2 v2.2.2 -> v2.4.0 due to: CVE-2019-11254
2022-07-05 14:32:09 +02:00
Benjamin Wang
860dc149b2 Bump gopkg.in/yaml.v2 v2.2.8 -> v2.4.0 due to: CVE-2019-11254
Cherry pick https://github.com/etcd-io/etcd/pull/13616 to 3.4.

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-07-05 06:26:06 +08:00
Marek Siarkowicz
f0256eeec9 Merge pull request #14179 from lavacat/release-3.4-crypto
[backport 3.4] Update golang.org/x/crypto to latest
2022-07-04 11:57:58 +02:00
Bogdan Kanivets
576a798bf9 [backport 3.4] Update golang.org/x/crypto to latest
Update crypto to address CVE-2022-27191.

The CVE fix is added in 0.0.0-20220315160706-3147a52a75dd but this
change updates to latest.

Backport of https://github.com/etcd-io/etcd/pull/13996

Signed-off-by: Bogdan Kanivets <bkanivets@apple.com>
2022-06-30 23:08:13 -07:00
Benjamin Wang
bae61786fc Merge pull request #14183 from ahrtr/3.4_pipeline_issues_20220630
[3.4] Fix pipeline failures in 3.4
2022-07-01 05:36:29 +08:00
Benjamin Wang
8160e9ebe2 disable test cases on certificate-based authentication which isn't supported by gRPC proxy.
Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-06-30 14:11:54 +08:00
Benjamin Wang
5b3f269159 replace all 3.4 certificates and keys with the files from 3.5
Fix the following error in the integration pipeline:
```
=== RUN   TestTLSReloadCopy
    v3_grpc_test.go:1754: tls: failed to find any PEM data in key input
    v3_grpc_test.go:1754: tls: private key does not match public key
    v3_grpc_test.go:1754: tls: private key does not match public key
    v3_grpc_test.go:1754: tls: private key does not match public key
```

Refer to https://github.com/etcd-io/etcd/runs/7123775361?check_suite_focus=true

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-06-30 13:21:48 +08:00
Benjamin Wang
bb9113097a fix test failure in TestCtlV3WatchClientTLS
Also refer to the following commit in 3.5,
093282f5ea

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-06-30 10:19:03 +08:00
Benjamin Wang
f169e5dcba Merge pull request #14151 from ahrtr/3.4_skip_TestWatchRequestProgress_proxy
[3.4] Skip WatchRequestProgress test in grpc-proxy mode.
2022-06-29 05:40:05 +08:00
Benjamin Wang
6958ee8ff2 Skip WatchRequestProgress test in grpc-proxy mode.
We shouldn't fail the grpc-server (completely) on a not-implemented RPC.
Failing the whole server via a remote request is an anti-pattern and a security
risk.

Refer to https://github.com/etcd-io/etcd/runs/7034342964?check_suite_focus=true#step:5:2284

```
=== RUN   TestWatchRequestProgress/1-watcher
panic: not implemented
goroutine 83024 [running]:
go.etcd.io/etcd/proxy/grpcproxy.(*watchProxyStream).recvLoop(0xc009232f00, 0x4a73e1, 0xc00e2406e0)
	/home/runner/work/etcd/etcd/proxy/grpcproxy/watch.go:265 +0xbf2
go.etcd.io/etcd/proxy/grpcproxy.(*watchProxy).Watch.func1(0xc0038a3bc0, 0xc009232f00)
	/home/runner/work/etcd/etcd/proxy/grpcproxy/watch.go:125 +0x70
created by go.etcd.io/etcd/proxy/grpcproxy.(*watchProxy).Watch
	/home/runner/work/etcd/etcd/proxy/grpcproxy/watch.go:123 +0x73b
FAIL	go.etcd.io/etcd/clientv3/integration	222.813s
FAIL
```

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-06-29 05:12:43 +08:00
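The safer pattern for an RPC a proxy cannot serve is to return a gRPC `Unimplemented` status instead of panicking, roughly as sketched below (hedged; this is not the actual grpcproxy code).

```go
package grpcproxy

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// handleProgressRequest rejects the unsupported request gracefully, so
// a single remote client can no longer bring the whole proxy down.
func handleProgressRequest() error {
	return status.Error(codes.Unimplemented, "watch progress request is not supported by grpc proxy")
}
```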
Marek Siarkowicz
f1c59dcfac Merge pull request #14170 from ahrtr/3.4_proxy_fix_20220628
Fix deadlock in 'go test -tags cluster_proxy -v ./integration/... ./client'
2022-06-28 17:56:44 +02:00
Benjamin Wang
1c9fa07cd7 Fix deadlock in 'go test -tags cluster_proxy -v ./integration/... ./clientv3/...'
Cherry pick https://github.com/etcd-io/etcd/pull/12319 to 3.4.

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-06-28 13:44:47 +08:00
Benjamin Wang
4e88cce06c Merge pull request #14168 from lavacat/release-3.4-TestGetToken
[backport 3.4] clientv3/integration: Reduce flakiness of TestGetTokenWithoutAuth
2022-06-28 04:35:17 +08:00
Bogdan Kanivets
2d99b341ad [backport 3.4] clientv3/integration: Reduce flakiness of TestGetTokenWithoutAuth
backport from branch-3.5:
https://github.com/etcd-io/etcd/pull/12200/

Signed-off-by: Bogdan Kanivets <bkanivets@apple.com>
2022-06-27 11:31:16 -07:00
Marek Siarkowicz
17fc680454 Merge pull request #14150 from ahrtr/lease_revoke_race_3.4
[3.4] Backport two lease related bug fixes to 3.4
2022-06-24 11:27:09 +02:00
Benjamin Wang
f036529b5d Backport two lease related bug fixes to 3.4
The first bug fix resolves the race condition between a goroutine
and a channel on the same leases to be revoked. It's a classic mistake
when using Golang channels + goroutines. Please refer to
https://go.dev/doc/effective_go#channels

The second bug fix resolves the issue that the etcd lessor may
continue to schedule checkpoints after stepping down from the leader role.

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-06-24 09:09:40 +08:00
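The first class of bug looks like the hedged sketch below, which is an illustration rather than etcd's exact code: a goroutine receives a slice over a channel while the sender keeps mutating the same backing array.

```go
package lease

// Lease is a stand-in for etcd's lease type in this illustration.
type Lease struct{ ID int64 }

// revokeRacy is the buggy shape: the receiver reading from ch and the
// caller overwriting expired share one backing array, a data race.
func revokeRacy(ch chan []*Lease, expired []*Lease) {
	ch <- expired
	expired = append(expired[:0], &Lease{ID: 1}) // races with the receiver
}

// revokeSafe hands the receiver its own copy.
func revokeSafe(ch chan []*Lease, expired []*Lease) {
	cp := make([]*Lease, len(expired))
	copy(cp, expired)
	ch <- cp
	expired = append(expired[:0], &Lease{ID: 1}) // safe now
}
```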
Benjamin Wang
953376e666 Merge pull request #14136 from ahrtr/3.4_pipeline_issues
[3.4] Fix all the pipeline failures for release 3.4
2022-06-23 04:54:42 +08:00
Benjamin Wang
1abf085cfb fix all the pipeline failues for release 3.4
Items resolved:
1. fix the vet error: possible misuse of reflect.SliceHeader;
2. fix the vet error: call to (*T).Fatal from a non-test goroutine;
3. bump package golang.org/x/crypto, net and sys;
4. bump boltdb from 1.3.3 to 1.3.6;
5. remove the vendor directory;
6. remove go 1.12.17 and 1.15.15, add go 1.16.15 into pipeline;
7. bump go version to 1.16 in go.mod;
8. fix the issue: compile: version go1.16.15 does not match go tool version go1.17.11,
   refer to https://github.com/actions/setup-go/issues/107;
9. fix data race on compactMainRev and watcherGauge;
10. fix test failure for TestLeasingTxnOwnerGet in cluster_proxy mode.

Signed-off-by: Benjamin Wang <wachao@vmware.com>
2022-06-22 05:28:45 +08:00
Benjamin Wang
c2c9e7de01 Merge pull request #14075 from lavacat/release-3.4-go1.15.15-tests
tests: fixing dependencies that break tests in go 1.15.15
2022-05-31 05:52:21 +08:00
Bogdan Kanivets
ceed023f7c tests: fixing dependencies that break tests in go 1.15.15
- retry_interceptor_test causes:
clientv3/naming/grpc.go:25:2: module google.golang.org/grpc@latest found (v1.46.0),
but does not contain package google.golang.org/grpc/naming
https://github.com/etcd-io/etcd/issues/12124
2022-05-30 12:08:47 -07:00
Benjamin Wang
5505d7a95b Merge pull request #13206 from cfz/cherry-pick-#13172-r34
[backport 3.4]: server/auth: enable tokenProvider if recovered store enables auth
2022-05-07 06:59:33 +08:00
Piotr Tabor
76147c9c79 Merge pull request #13999 from mitake/backport-13308-to-3.4
Backport PR 13308 to release 3.4
2022-05-06 13:03:05 +02:00
cfz
23e79dbf19 [backport 3.4]: server/auth: enable tokenProvider if recovered store enables auth
this is a manual backport of #13172
2022-05-06 12:26:55 +08:00
Hitoshi Mitake
757a8e8f5b *: implement retry logic for auth old revision in the client 2022-04-29 23:46:24 +09:00
Ashish Ranjan
9bbdeb4a64 client/v3: refresh the token when ErrUserEmpty is received while retrying
To fix a bug in the retry logic that occurs when the auth token is cleared after receiving `ErrInvalidAuthToken` from the server and the subsequent call to `getToken` also fails for some reason (e.g. context deadline exceeded).
This leaves the client without a token, and the retry will continue to fail with `ErrUserEmpty` unless the token is refreshed.
2022-04-29 23:43:36 +09:00
Marek Siarkowicz
c50b7260cc Merge pull request #13713 from lavacat/defrag-bopts-fix-3.4
mvcc/backend: restore original bolt db options after defrag
2022-02-18 10:54:21 +01:00
Bogdan Kanivets
d30a4fbf0c mvcc/backend: restore original bolt db options after defrag
Problem: Defrag was implemented before custom bolt options were added.
Currently defrag doesn't restore backend options.
For example BackendFreelistType will be unset after defrag.

Solution: save bolt db options and use them in defrag.
2022-02-17 15:33:05 -08:00
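The fix boils down to carrying the original open options through the defrag path, roughly as sketched below (hedged; `go.etcd.io/bbolt` is the bolt fork etcd uses, and the surrounding defrag machinery is omitted).

```go
package backend

import bolt "go.etcd.io/bbolt"

// reopenAfterDefrag reopens the defragmented database with the same
// options (e.g. FreelistType) it was originally opened with, instead
// of silently falling back to the defaults.
func reopenAfterDefrag(path string, opts *bolt.Options) (*bolt.DB, error) {
	return bolt.Open(path, 0600, opts)
}
```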
richkun
a905430d27 embed: only log stream error with debug level (#13656)
Co-authored-by: tangcong <tangcong506@gmail.com>
2022-01-30 12:24:22 -08:00
Sam Batschelet
161bf7e7be Merge pull request #13475 from chaochn47/backport-release-3.4
backport 3.4 from #13467 exclude the same alarm type activated by multiple peers
2021-11-13 22:10:38 -05:00
Chao Chen
04d47a93f9 backport from #13467 exclude the same alarm type activated by multiple peers 2021-11-12 14:17:14 -08:00
Sam Batschelet
72d3e382e7 version: 3.4.18
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-10-15 09:47:08 -04:00
Piotr Tabor
eb9cee9ee3 Merge pull request #13397 from geetasg/release-3.4
storage/backend: Add a gauge to indicate if defrag is active (backport)
2021-10-07 19:08:31 +02:00
Geeta Gharpure
85abf6e46d storage/backend: Add a gauge to indicate if defrag is active (backport from 3.6) 2021-10-06 11:04:47 -07:00
Piotr Tabor
1eac258f58 Merge pull request #13385 from hexfusion/cp-13376-release-3.4
[release-3.4] Dockerfile: bump debian bullseye-20210927
2021-10-04 08:40:32 +02:00
Sam Batschelet
91da298560 Dockerfile: bump debian bullseye-20210927
fixes: CVE-2021-3711, CVE-2021-35942, CVE-2019-9893

Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-10-04 00:32:23 -04:00
Sam Batschelet
19e2e70e4f version: 3.4.17
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-10-03 22:30:27 -04:00
Sam Batschelet
8ea187e2cf Merge pull request #13378 from ysksuzuki/replace-jwt-go
Replace github.com/dgrijalva/jwt-go with github.com/golang-jwt/jwt
2021-10-03 21:48:32 -04:00
Yusuke Suzuki
e63d058247 test: update go to 1.15.15
Update go to 1.15.15, which is the latest 1.15 release, because linux-amd64-fmt fails with go 1.15.13.

Signed-off-by: Yusuke Suzuki <yusuke-suzuki@cybozu.co.jp>
2021-10-02 10:04:22 +09:00
Yusuke Suzuki
1558ede7f8 go.mod,go.sum: Replace github.com/dgrijalva/jwt-go with github.com/golang-jwt/jwt
github.com/dgrijalva/jwt-go has the CVE https://github.com/advisories/GHSA-w73w-5m7g-f7qc
and is already archived. etcd v3.4 should use the community-maintained fork
github.com/golang-jwt/jwt, which provides a version with the CVE fixed.

Signed-off-by: Yusuke Suzuki <yusuke-suzuki@cybozu.co.jp>
2021-10-02 10:01:52 +09:00
Sam Batschelet
41061e56ad Merge pull request #13139 from hexfusion/bp-12727
[release-3.4]: ClientV3: Ordering: Fix TestEndpointSwitchResolvesViolation test
2021-06-24 10:38:10 -04:00
Sam Batschelet
501d8f01ea [release-3.4]: ClientV3: Ordering: Fix TestEndpointSwitchResolvesViolation test
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-06-23 21:26:55 -04:00
Sam Batschelet
38669a0709 Merge pull request #13137 from hexfusion/track-modules
vendor: track vendor/modules.txt
2021-06-23 14:52:10 -04:00
Sam Batschelet
7489911d51 Merge pull request #13135 from serathius/actions-3.4
Migrate PR testing from travis to GitHub actions
2021-06-23 14:03:37 -04:00
Sam Batschelet
15b7954d03 vendor: track vendor/modules.txt
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-06-23 13:56:39 -04:00
Marek Siarkowicz
6cc1345a0b Migrate PR testing from travis to GitHub actions 2021-06-23 18:25:29 +02:00
Sam Batschelet
589a6993b8 Merge pull request #13101 from mrueg/backport-12864
[backport 3.4]  fix check datascale command for https endpoints
2021-06-14 08:19:22 -04:00
Saeid Bostandoust
4bacd21e20 fix check datascale command for https endpoints 2021-06-11 11:58:20 +02:00
Sam Batschelet
0ecc337028 Merge pull request #13100 from tangcong/automated-cherry-pick-of-#13077-origin-release-3.4
[backport 3.4] embed: unlimit the recv msg size of grpc-gateway
2021-06-10 21:29:34 -04:00
spacewander
628fa1818e embed: unlimit the recv msg size of grpc-gateway
Ensure the client which accesses etcd via grpc-gateway won't
be limited by MaxCallRecvMsgSize. Here we choose the same
default value as the etcd client for grpc-gateway's MaxCallRecvMsgSize.

Fix https://github.com/etcd-io/etcd/issues/12576
2021-06-11 08:07:28 +08:00
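Lifting the limit on the gateway's internal client amounts to the standard gRPC dial option sketched below. This is a hedged sketch: the real embed code also carries TLS credentials and endpoint wiring.

```go
package embed

import (
	"math"

	"google.golang.org/grpc"
)

// dialUnlimited opens a client connection whose receive size is
// effectively unlimited, so grpc-gateway responses aren't capped at
// the 4 MiB gRPC default.
func dialUnlimited(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(), // illustration only; real code uses TLS creds
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(math.MaxInt32)),
	)
}
```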
Gyuho Lee
d19fbe541b version: 3.4.16
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2021-05-12 01:52:43 +00:00
Piotr Tabor
6bbc85827b Merge pull request #12917 from chaochn47/2021-05-03-backport-#12880
Backport-3.4 exclude alarms from health check conditionally
2021-05-06 10:21:09 +02:00
Chao Chen
dbde4f2d5e Backport-3.4 exclude alarms from health check conditionally 2021-05-04 10:37:12 -07:00
Gyuho Lee
15715dcf1a Merge pull request #12902 from MakDon/release-3.4
[Backport-3.4] etcdserver/mvcc: update trace.Step condition
2021-04-28 11:05:35 -07:00
makdon
963d3b9369 etcdserver/mvcc: update trace.Step condition
backport PR #12894 to release-3.4
2021-04-28 11:35:49 +08:00
Piotr Tabor
ba829044f5 Merge pull request #12888 from chaochn47/2021-04-22-cherry-pick-12871
Backport-3.4 etcdserver/util.go: reduce memory when logging range requests
2021-04-23 00:45:20 +02:00
Chao Chen
c4eb81af99 Backport-3.4 etcdserver/util.go: reduce memory when logging range requests 2021-04-22 15:07:44 -07:00
Piotr Tabor
ceafa1b33e Merge pull request #12882 from lilic/bump-go-12
.travis,Makefile,functional: Bump go 1.12 version to v1.12.17
2021-04-20 23:33:23 +02:00
Lili Cosic
5890bc8bd6 .travis,Makefile,functional: Bump go 1.12 version to v1.12.17
This version was already used to build the release v3.4.15.
2021-04-20 14:00:44 +02:00
Piotr Tabor
c274aa5ea4 Merge pull request #12849 from lilic/test-go-1-15
[release-3.4]: .travis.yml: Test with go v1.15.11
2021-04-19 18:17:05 +02:00
Piotr Tabor
276ee962ec integration: Fix 'go test --tags cluster_proxy --timeout=30m -v ./integration/...'
grpc proxy opens 2 additional watching channels. The metric is shared
between etcd-server & grpc_proxy, so all assertions on the number of open
watch channels need to take into consideration the additional "2"
channels.
2021-04-19 16:41:28 +02:00
Lili Cosic
8d1b8335e3 pkg/tlsutil: Adjust cipher suites for go 1.12
Cherry-pick of 60e44286fa from the master branch does not work due to the
missing `tls.CipherSuites()` function. We work around it by using go build
tags for both building and tests.
2021-04-19 11:49:13 +02:00
Piotr Tabor
c3f447a698 Fix pkg/tlsutil (test) to not fail on 386.
In fact this commit rewrites the functionality to use the upstream list of
ciphers instead of checking whether the lists are in sync using AST
analysis.
2021-04-19 11:49:13 +02:00
Lili Cosic
85e037d9c6 bill-of-materials.json: Update golang.org/x/sys 2021-04-19 11:49:13 +02:00
Lili Cosic
a1691be1bd .travis,test: Turn race off in Travis for go version 1.15
Currently with race it fails, we can enable this at a later point.
2021-04-19 11:49:13 +02:00
Vimal K
df35086b6a integration : fix TestTLSClientCipherSuitesMismatch in go1.13
In go1.13, TLS 1.3 is enabled by default, and as per the go1.13 release notes:
TLS 1.3 cipher suites are not configurable. All supported cipher suites are safe,
and if PreferServerCipherSuites is set in Config the preference order is based
on the available hardware.

Fixing the test case for go1.13 by limiting the TLS version to TLS12
2021-04-19 11:18:14 +02:00
Lili Cosic
eeefd614c8 vendor: Run go mod vendor 2021-04-19 11:18:14 +02:00
Lili Cosic
4276c33026 go.mod,go.sum: Bump github.com/creack/pty that includes patch
This patch is needed due to go 1.15 erroring on:

"Setctty set but Ctty not valid in child".
2021-04-19 11:18:13 +02:00
Lili Cosic
cfc08e5f06 go.mod,go.sum: Comply with go v1.15 2021-04-19 11:18:13 +02:00
Lili Cosic
0b7e4184e8 etcdserver,wal: Convert int to string using rune() 2021-04-19 11:18:13 +02:00
Lili Cosic
35bd924596 integration,raft,tests: Comply with go v1.15 gofmt 2021-04-19 11:18:13 +02:00
Lili Cosic
62596faeed .travis.yml: Test with go v1.15.11
Currently in CI the tests are only run with go v1.12; this also adds go
v1.15.11.

Excludes certain variants for v1.15.
2021-04-19 11:18:13 +02:00
Piotr Tabor
b7e5f5bc12 Merge pull request #12839 from lilic/fix-go-version
[release-3.4]: Pin go version in go.mod to 1.12
2021-04-07 17:52:05 +02:00
Lili Cosic
91bed2e01f pkg/testutil/leak.go: Allowlist created by testing.runTests.func1 2021-04-07 17:20:52 +02:00
Lili Cosic
b19eb0f339 vendor: Run go mod vendor 2021-04-07 15:25:32 +02:00
Lili Cosic
8557cb29ba go.sum, go.mod: Run go mod tidy with go 1.12 2021-04-07 15:25:08 +02:00
Lili Cosic
ef415e3fe1 go.mod: Pin go to 1.12 version
As go 1.12.2 is what is tested in CI and is the recommended version to build
with, we should also pin to this in the go directive version.
2021-04-07 15:21:42 +02:00
Sam Batschelet
82eae9227c Merge pull request #12803 from cwedgwood/metrics-3.4
etcdserver: fix incorrect metrics generated when clients cancel watches
2021-04-01 08:17:37 -04:00
Chris Wedgwood
656dc63eab etcdserver: fix incorrect metrics generated when clients cancel watches
Manual cherry-pick of 9571325fe8 for
release-3.4.
2021-03-31 22:59:29 -07:00
Piotr Tabor
30799c97be Merge pull request #12815 from dbavatar/release-3.4-peervalidation
etcdserver: Fix PeerURL validation
2021-03-30 12:54:32 +02:00
Piotr Tabor
16fe9a89ff Merge pull request #12816 from cwedgwood/3.4-relax-gate-timeout
integration: relax leader timeout from 3s to 4s
2021-03-30 12:53:27 +02:00
Chris Wedgwood
c499d9b047 integration: relax leader timeout from 3s to 4s
The integration jobs fail with timeouts slightly over 3s; increase
this marginally so false failures are less prevalent.
2021-03-29 10:17:44 -07:00
Piotr Tabor
2702f9e5f2 Merge pull request #12751 from cwedgwood/nofsyncdowrite
When using --unsafe-no-fsync still write out the data
2021-03-07 11:52:33 +01:00
Chris Wedgwood
94634fc258 etcdserver: when using --unsafe-no-fsync write data
There are situations where we don't wish to fsync but we do want to
write the data.

Typically this occurs in clusters where fsync latency (often the
result of firmware) transiently spikes. For Kubernetes clusters this
causes (many) elections which have knock-on effects such that the API
server will transiently fail, causing other components to fail in turn.

By writing the data (buffered and asynchronously flushed, so in most
situations the write is fast) and avoiding the fsync we no longer
trigger this situation and opportunistically write out the data.

Anecdotally:
  Because the fsync is missing there is the argument that certain
  types of failure events will cause data corruption or loss, in
  testing this wasn't seen.  If this was to occur the expectation is
  the member can be readded to a cluster or worst-case restored from a
  robust persisted snapshot.

  The etcd members are deployed across isolated racks with different
  power feeds.  An instantaneous failure of all of them simultaneously
  is unlikely.

  Testing was usually of the form:
   * create (Kubernetes) etcd write-churn by creating replicasets of
     some 1000s of pods
   * break/fail the leader

  Failure testing included:
   * hard node power-off events
   * disk removal
   * orderly reboots/shutdown

  In all cases when the node recovered it was able to rejoin the
  cluster and synchronize.
2021-03-05 10:09:52 -08:00
Sam Batschelet
afd6d8a40d Merge pull request #12740 from hexfusion/cp-12448--release-3.4
Manual cherry pick of #12448 on release 3.4
2021-03-03 13:37:20 -05:00
Sam Batschelet
9aeabe447d server: Added config parameter experimental-warning-apply-duration
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-03-03 12:14:30 -05:00
Gyuho Lee
aa7126864d version: 3.4.15
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2021-02-26 22:08:24 +00:00
Gyuho Lee
3be9460ddc Merge pull request #12679 from chaochn47/backport_3.4_#12677
[Backport-3.4] etcdserver/api/etcdhttp: log successful etcd server side health check in debug level
2021-02-09 15:01:19 -08:00
Chao Chen
f27ef4d343 [Backport-3.4] etcdserver/api/etcdhttp: log successful etcd server side health check in debug level
ref. #12677
ref. 0b9cfa8677
2021-02-08 21:44:44 -08:00
Piotr Tabor
a1c5f59b59 Merge pull request #12402 from vitalif/release-3.4
etcdserver: Fix 64 KB websocket notification message limit
2021-02-03 09:19:21 +01:00
a40f14d92c etcdserver: Fix 64 KB websocket notification message limit
This fixes etcd being unable to send any message longer than 64 KB as
a notification over the websocket. This was because an older version
of grpc-websocket-proxy was used and the WithMaxRespBodyBufferSize option
wasn't set.
2021-01-30 00:37:02 +03:00
Sam Batschelet
d51c6c689b Merge pull request #12645 from hexfusion/bump-dep
vendor: bump gorilla/websocket
2021-01-23 13:49:45 -05:00
Sam Batschelet
becc228c5a vendor: bump gorilla/websocket
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2021-01-23 11:20:53 -05:00
Piotr Tabor
0880605772 Merge pull request #12551 from kolyshkin/3.4-fix-lock
[3.4 backport] pkg/fileutil: fix F_OFD_ constants
2021-01-15 23:16:49 +01:00
Kir Kolyshkin
bea35fd2c6 pkg/fileutil: fix F_OFD_ constants
Use golang.org/x/sys/unix for F_OFD_* constants.

This fixes the issue that F_OFD_GETLK was defined incorrectly,
resulting in bugs such as https://github.com/moby/moby/issues/31182

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2020-12-14 10:42:13 -08:00
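Taking an open-file-description lock through `golang.org/x/sys/unix` looks like the sketch below (hedged; etcd's fileutil wraps this with its own retry and error handling, and this variant is Linux-specific).

```go
package fileutil

import (
	"os"

	"golang.org/x/sys/unix"
)

// lockFile places an exclusive OFD write lock on the whole file. Using
// unix.F_OFD_SETLK from x/sys/unix avoids the incorrectly defined local
// F_OFD_* constants that caused the original bug.
func lockFile(f *os.File) error {
	lk := unix.Flock_t{
		Type:   unix.F_WRLCK,
		Whence: 0, // relative to the start of the file
		Start:  0,
		Len:    0, // 0 means "lock to EOF"
	}
	return unix.FcntlFlock(f.Fd(), unix.F_OFD_SETLK, &lk)
}
```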
Gyuho Lee
8a03d2e961 version: 3.4.14
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-11-25 11:31:52 -08:00
Gyuho Lee
a4b43b388d pkg/netutil: remove unused "iptables" wrapper
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-11-25 11:31:17 -08:00
Gyuho Lee
e3b29b66a4 tools/etcd-dump-metrics: validate exec cmd args
To prevent arbitrary command invocations.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-11-25 11:30:31 -08:00
Gyuho Lee
eb0fb0e799 Merge pull request #12356 from cfc4n/automated-cherry-pick-of-#12264-upstream-release-3.4
Automated cherry pick of #12264
2020-10-12 14:26:50 -07:00
CFC4N
40b71074e8 clientv3: get AuthToken automatically when clientConn is ready.
fixes: #11954
2020-09-30 17:14:22 +08:00
Jingyi Hu
7e2d426ec0 Merge pull request #12299 from galal-hussein/fix_panic_34
[Backport 3.4] etcdserver: add ConfChangeAddLearnerNode to the list of config changes
2020-09-15 09:04:18 -07:00
galal-hussein
3019246742 etcdserver: add ConfChangeAddLearnerNode to the list of config changes
To fix a panic that happens when trying to get the IDs of etcd members in
force-new-cluster mode; the issue happens if the cluster previously had
etcd learner nodes added to the cluster.

Fixes #12285
2020-09-14 17:50:57 +02:00
Joe Betz
dd1b699fc4 Merge pull request #12280 from jingyih/automated-cherry-pick-of-#12271-upstream-release-3.4
Automated cherry pick of #12271 on release 3.4
2020-09-10 11:07:54 -07:00
jingyih
f44aaf8248 integration: add flag WatchProgressNotifyInterval in integration test 2020-09-09 12:39:42 -07:00
Gyuho Lee
ae9734ed27 version: 3.4.13
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-24 12:11:28 -07:00
Gyuho Lee
781bde75e2 Merge pull request #12250 from spzala/automated-cherry-pick-of-#12242-upstream-release-3.4
Automated cherry pick of #12242
2020-08-24 12:05:03 -07:00
Sahdev P. Zala
d5ebbbceb8 pkg: file stat warning
Provide a warning and documentation instead of enforcing file permissions.
2020-08-24 11:21:29 -04:00
Sam Batschelet
7cd5872656 Merge pull request #12244 from hexfusion/automated-cherry-pick-of-#12243-upstream-release-3.4
Automated cherry pick of #12243 on release 3.4
2020-08-21 11:24:21 -04:00
Sam Batschelet
46a0a44f95 Automated cherry pick of #12243 on release 3.4
Signed-off-by: Sam Batschelet <sbatsche@redhat.com>
2020-08-21 10:14:07 -04:00
Gyuho Lee
17cef6e3e9 version: 3.4.12
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-19 09:56:24 -07:00
Gyuho Lee
c07cba001b Merge pull request #12239 from liggitt/slow-v2-panic-3.4
[3.4] etcdserver: Avoid panics logging slow v2 requests in integration tests
2020-08-19 09:55:08 -07:00
Jordan Liggitt
b8878eac45 etcdserver: Avoid panics logging slow v2 requests in integration tests 2020-08-19 11:30:39 -04:00
Gyuho Lee
e71e0c5c88 Merge pull request #12226 from jingyih/fix_backport_PR12216
*: add plog logging to the backport of PR12216
2020-08-18 08:48:09 -07:00
Gyuho Lee
bc44e367c3 version: 3.4.11
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-18 08:46:13 -07:00
Gyuho Lee
299e0f17aa Revert "etcdserver/api/v3rpc: "MemberList" never return non-empty ClientURLs"
This reverts commit 0372cfc7ab.
2020-08-18 08:45:38 -07:00
jingyih
75d5e78d1f *: fix backport of PR12216
Fix bugs introduced in commit c60dabf
2020-08-16 15:01:18 +08:00
jingyih
c60dabf2f3 *: add experimental flag for watch notify interval
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-15 10:24:25 -07:00
Gyuho Lee
8a4afdbcc2 Merge pull request #12189 from jingyih/automated-cherry-pick-of-#11452-#12187-upstream-release-3.4
Automated cherry pick of #11452 #12187 on release 3.4
2020-08-13 21:38:05 -07:00
jingyih
6fcab5af9f clientv3: remove excessive watch cancel logging 2020-08-13 13:51:21 +08:00
Gyuho Lee
008074187c etcdserver: add OS level FD metrics
Similar counts are exposed via Prometheus.
This adds the ones that are perceived by the etcd server.

e.g.

os_fd_limit 120000
os_fd_used 14
process_cpu_seconds_total 0.31
process_max_fds 120000
process_open_fds 17

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-12 18:38:35 -07:00
Gyuho Lee
cf558ee8b7 pkg/runtime: optimize FDUsage by removing sort
No need to sort when we just want the counts.

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-08-12 18:38:17 -07:00
Ted Yu
e800c62eca clientv3: log warning in case of error sending request 2020-07-30 22:54:35 +08:00
Gyuho Lee
0372cfc7ab etcdserver/api/v3rpc: "MemberList" never return non-empty ClientURLs
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>

cr https://code.amazon.com/reviews/CR-29712724
2020-07-16 16:29:51 -07:00
Gyuho Lee
18dfb9cca3 version: 3.4.10
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-07-16 15:16:20 -07:00
Jingyi Hu
7b8270416d Merge pull request #12106 from bart0sh/PR001-cherry-pick-change-protobuf-field-type-from-int-to-int64
etcdserver: change protobuf field type from int to int64 (#12000)
2020-07-16 00:45:16 +08:00
Sahdev Zala
a2c37485dd Merge pull request #12127 from spzala/automated-cherry-pick-of-#12012-upstream-release-3.4
Automated cherry pick of #12012
2020-07-13 10:53:52 -04:00
Hitoshi Mitake
67bfc310f0 Documentation: note on data encryption 2020-07-13 09:50:30 -04:00
Yuchen Zhou
ed28c768a3 etcdserver: change protobuf field type from int to int64 (#12000) 2020-07-08 10:21:10 +03:00
Gyuho Lee
d3a702a09d Merge pull request #12112 from spzala/automated-cherry-pick-of-#12018-upstream-release-3.4
Automated cherry pick of #12018
2020-07-07 10:32:18 -07:00
Sahdev P. Zala
319331192e pkg: consider umask when use MkdirAll
os.MkdirAll creates directory before umask so make sure that a desired
permission is set after creating a directory with MkdirAll. Use the
existing TouchDirAll function which checks for permission if dir is already
exist and when create a new dir.
2020-07-07 11:46:31 -04:00
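The umask pitfall and its remedy in plain Go, as a hedged sketch of the idea behind TouchDirAll:

```go
package fileutil

import "os"

// mkdirWithPerm creates dir (and parents) and then enforces the exact
// permission: the mode passed to os.MkdirAll is reduced by the process
// umask, so chmod afterwards to guarantee the requested bits.
func mkdirWithPerm(dir string, perm os.FileMode) error {
	if err := os.MkdirAll(dir, perm); err != nil {
		return err
	}
	return os.Chmod(dir, perm)
}
```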
Gyuho Lee
2acdf88406 Merge pull request #12076 from cfc4n/automated-cherry-pick-of-#11987-upstream-release-3.4
Automated cherry pick of #11987
2020-07-06 13:01:24 -07:00
Gyuho Lee
a8454e453f Merge pull request #12089 from tangcong/automated-cherry-pick-of-#11997-origin-release-3.4
Automated cherry pick of #11997
2020-07-06 13:00:56 -07:00
Gyuho Lee
32583af167 Merge pull request #12101 from tangcong/automated-cherry-pick-of-#12100-origin-release-3.4
Automated cherry pick of #12100
2020-07-06 11:47:24 -07:00
Sahdev Zala
85cc4deae6 Merge pull request #12103 from spzala/automated-cherry-pick-of-#12092-upstream-release-3.4
Automated cherry pick of #12092
2020-07-05 11:45:30 -04:00
Hitoshi Mitake
7dec4c412c etcdmain: let grpc proxy warn about insecure-skip-tls-verify 2020-07-01 18:25:29 -04:00
tangcong
a4667f596a etcdmain: fix shadow error 2020-07-01 13:36:48 +08:00
tangcong
0207d1df66 pkg/fileutil: print desired file permission in error log 2020-06-29 09:59:19 +08:00
Gyuho Lee
99e893d285 Merge pull request #12074 from cfc4n/automated-cherry-pick-of-#12005-upstream-release-3.4
Automated cherry pick of #12005
2020-06-26 11:30:07 -07:00
Gyuho Lee
d5dec731db Merge pull request #12077 from cfc4n/automated-cherry-pick-of-#11980-upstream-release-3.4
Automated cherry pick of #11980
2020-06-26 11:29:16 -07:00
Gyuho Lee
81a2edc365 Merge pull request #12081 from spzala/automated-cherry-pick-of-#11945-upstream-release-3.4
Automated cherry pick of #11945
2020-06-26 11:28:34 -07:00
Changxin Miao
e5424fc474 pkg: Fix dir permission check on Windows 2020-06-25 20:20:55 -04:00
cfc4n
4488595e05 auth: Customize simpleTokenTTL settings.
see https://github.com/etcd-io/etcd/issues/11978 for more detail.
2020-06-25 19:58:26 +08:00
cfc4n
7b99863e02 mvcc: chanLen 1024 is to biger,and it used more memory. 128 seems to be enough. Sometimes the consumption speed is more than the production speed.
See https://github.com/etcd-io/etcd/issues/11906 for more detail.
2020-06-25 19:53:13 +08:00
cfc4n
490c6139ac auth: return incorrect result 'ErrUserNotFound' when client request without username or username was empty.
Fiexs https://github.com/etcd-io/etcd/issues/12004 .
2020-06-25 19:48:36 +08:00
Gyuho Lee
31e49a4df3 Merge pull request #12048 from spzala/automated-cherry-pick-of-#11793-upstream-release-3.4
Automated cherry pick of #11793
2020-06-24 20:42:26 -07:00
Gyuho Lee
83fc96df0c Merge pull request #12055 from tangcong/automated-cherry-pick-of-#11850-origin-release-3.4
Automated cherry pick of #11850
2020-06-24 20:41:44 -07:00
Gyuho Lee
45192cf62b Merge pull request #12064 from cfc4n/automated-cherry-pick-of-#11986-upstream-release-3.4
Automated cherry pick of #11986
2020-06-24 20:40:37 -07:00
Gyuho Lee
1a1281005c Merge pull request #12070 from spzala/automated-cherry-pick-of-#12060-upstream-release-3.4
Automated cherry pick of #12060
2020-06-24 20:39:33 -07:00
Gyuho Lee
a4f42948e8 Merge pull request #12072 from tangcong/automated-cherry-pick-of-#12066-origin-release-3.4
Automated cherry pick of #12066
2020-06-24 20:39:15 -07:00
Gyuho Lee
2212a84adb Merge pull request #12034 from spzala/automated-cherry-pick-of-#11798-upstream-release-3.4
Automated cherry pick of #11798
2020-06-24 20:38:46 -07:00
tangcong
e42d7b5248 etcdmain: fix shadow error 2020-06-25 06:40:33 +08:00
Xiang Li
b86bb615ff doc: add TLS related warnings 2020-06-24 16:39:35 -04:00
cfc4n
ee963470f4 etcdserver:FDUsage set ticker to 10 minute from 5 seconds. This ticker will check File Descriptor Requirements ,and count all fds in used. And recorded some logs when in used >= limit/5*4. Just recorded message. If fds was more than 10K,It's low performance due to FDUsage() works. So need to increase it.
see https://github.com/etcd-io/etcd/issues/11969 for more detail.
2020-06-24 13:28:40 +08:00
Jack Kleeman
36452a1c1d clientv3: cancel watches proactively on client context cancellation
Currently, watch cancel requests are only sent to the server after a
message comes through on a watch where the client has cancelled. This
means that cancelled watches that don't receive any new messages are
never cancelled; they persist for the lifetime of the client stream.
This has negative connotations for locking applications where a watch
may observe a key which might never change again after cancellation,
leading to many accumulating watches on the server.

By cancelling proactively, in most cases we simply move the cancel
request to happen earlier, and additionally we solve the case where the
cancel request would never be sent.

Fixes #9416
Heavy inspiration drawn from the solutions proposed there.
2020-06-23 19:50:21 +08:00
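From the client's perspective, the change matters for code like the hedged snippet below: cancelling the watch context now sends the cancel request to the server right away, instead of waiting for the next event on the stream.

```go
package main

import (
	"context"
	"log"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithCancel(context.Background())
	wch := cli.Watch(ctx, "lock-key")

	// Before this fix, cancelling a watch on a key that never changes
	// again left the watcher alive on the server for the lifetime of
	// the stream; now the cancel request is sent proactively.
	cancel()
	for range wch {
	} // drains until the server confirms the cancellation
}
```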
Gyuho Lee
4571e528f4 wal: check out of range slice in "ReadAll", "decoder"
wal: add slice bound checks in decoder

CHANGELOG-3.5: add wal slice bound check
CHANGELOG-3.5: add "decodeRecord"

Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
2020-06-22 11:57:43 -04:00
Gyuho Lee
37ac22205b Merge pull request #12035 from spzala/automated-cherry-pick-of-#11787-upstream-release-3.4
Automated cherry pick of #11787
2020-06-21 19:22:54 -07:00
Gyuho Lee
493f15c156 Merge pull request #12037 from spzala/automated-cherry-pick-of-#11807-upstream-release-3.4
Automated cherry pick of #11807
2020-06-21 19:20:42 -07:00
Gyuho Lee
c8b3c6f54c Merge pull request #12041 from spzala/automated-cherry-pick-of-#11795-upstream-release-3.4
Automated cherry pick of #11795
2020-06-21 19:20:26 -07:00
Gyuho Lee
368ff75a10 Merge pull request #12039 from spzala/automated-cherry-pick-of-#11845-upstream-release-3.4
Automated cherry pick of #11845
2020-06-21 19:20:04 -07:00
Gyuho Lee
7adbfa1144 Merge pull request #12038 from spzala/automated-cherry-pick-of-#11608-upstream-release-3.4
Automated cherry pick of #11608
2020-06-21 19:19:50 -07:00
Gyuho Lee
e151faf3cc Merge pull request #12040 from spzala/automated-cherry-pick-of-#11796-upstream-release-3.4
Automated cherry pick of #11796
2020-06-21 19:19:31 -07:00
Gyuho Lee
8292fd5051 Merge pull request #12042 from spzala/automated-cherry-pick-of-#11818-upstream-release-3.4
Automated cherry pick of #11818
2020-06-21 19:19:17 -07:00
Gyuho Lee
c37245ed4b Merge pull request #12043 from spzala/automated-cherry-pick-of-#11830-upstream-release-3.4
Automated cherry pick of #11830
2020-06-21 19:18:51 -07:00
Gyuho Lee
6dab8aff66 Merge pull request #12044 from spzala/automated-cherry-pick-of-#11841-upstream-release-3.4
Automated cherry pick of #11841
2020-06-21 19:18:35 -07:00
Hitoshi Mitake
c69efda350 etcdctl, etcdmain: warn about --insecure-skip-tls-verify options 2020-06-21 19:23:06 -04:00
Hitoshi Mitake
3d8e9a323d Documentation: note on the policy of insecure by default 2020-06-21 19:21:05 -04:00
Hitoshi Mitake
963b242846 etcdserver: don't let InternalAuthenticateRequest have password 2020-06-21 19:18:18 -04:00
Hitoshi Mitake
6f011ce524 auth: a new error code for the case of password auth against no password user 2020-06-21 19:12:55 -04:00
Hitoshi Mitake
36f8dee003 Documentation: note on password strength 2020-06-21 19:08:39 -04:00
Xiang Li
47001f28bd etcdmain: best effort detection of self pointing in tcp proxy 2020-06-21 18:12:24 -04:00
Sahdev P. Zala
9a24f73f7b Discovery: do not allow passing negative cluster size
When an etcd instance attempts to perform service discovery, if a
cluster size with negative value  is provided, the etcd instance
will panic without recovery because of
2020-06-21 18:00:35 -04:00
Sahdev P. Zala
7d1cf64049 wal: fix panic when decoder not set
Handle the related panic and clarify doc.
2020-06-21 16:21:34 -04:00
Sahdev P. Zala
05c441f92f embed: fix compaction runtime err
Handle negative value input which currently gives a runtime error.
2020-06-20 20:58:18 -04:00
Sahdev P. Zala
434f7e83f0 pkg: check file stats
modify file util.
2020-06-20 16:29:47 -04:00
Gyuho Lee
91b1a9182a Merge pull request #11977 from jpbetz/automated-cherry-pick-of-#11946-release-3.4
Automated cherry pick of #11946
2020-06-05 12:46:33 -07:00
David Crawshaw
78f67988aa etcdserver, et al: add --unsafe-no-fsync flag
This makes it possible to run an etcd node for testing and development
without placing lots of load on the file system.

Fixes #11930.

Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
2020-06-04 20:19:28 -07:00
Debabrata Banerjee
3b8f812955 etcdserver: Fix PeerURL validation
In the case of URLs that are synonyms, the current lexicographic sorting
and comparison of the URLs fails with frustrating errors. Make sure to do
a full comparison between every set of PeerURLs before failing.

Fixes #11013
2019-09-16 11:49:58 -04:00
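The full-comparison idea amounts to matching URL sets pairwise instead of comparing sorted string forms, as in this hedged sketch (the equivalence predicate is left abstract; etcd resolves and normalizes URLs before comparing).

```go
package membership

import "net/url"

// urlsMatch reports whether every URL in a has an equivalent in b and
// vice versa, instead of relying on lexicographic sort-and-compare,
// which rejects synonymous URLs (e.g. differing only in host spelling).
func urlsMatch(a, b []url.URL, equivalent func(x, y url.URL) bool) bool {
	if len(a) != len(b) {
		return false
	}
	used := make([]bool, len(b))
	for _, x := range a {
		found := false
		for j, y := range b {
			if !used[j] && equivalent(x, y) {
				used[j] = true
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}
```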
1228 changed files with 4477 additions and 448158 deletions

.github/workflows/release.yaml (new file)

@@ -0,0 +1,24 @@
+name: Release
+on: [push, pull_request]
+jobs:
+  release:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/checkout@v2
+    - uses: actions/setup-go@v2
+      with:
+        go-version: "1.16.15"
+    - run: |
+        git config --global user.email "github-action@etcd.io"
+        git config --global user.name "Github Action"
+        gpg --batch --gen-key <<EOF
+        %no-protection
+        Key-Type: 1
+        Key-Length: 2048
+        Subkey-Type: 1
+        Subkey-Length: 2048
+        Name-Real: Github Action
+        Name-Email: github-action@etcd.io
+        Expire-Date: 0
+        EOF
+        DRY_RUN=true ./scripts/release.sh --no-upload --no-docker-push 3.4.99

.github/workflows/tests.yaml (new file)

@@ -0,0 +1,77 @@
+name: Tests
+on: [push, pull_request]
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        target:
+        - linux-amd64-fmt
+        - linux-amd64-integration-1-cpu
+        - linux-amd64-integration-2-cpu
+        - linux-amd64-integration-4-cpu
+        - linux-amd64-functional
+        - linux-amd64-unit-4-cpu-race
+        - all-build
+        - linux-amd64-grpcproxy
+        - linux-386-unit
+    steps:
+    - uses: actions/checkout@v2
+    - uses: actions/setup-go@v2
+      with:
+        go-version: "1.16.15"
+    - run: date
+    - env:
+        TARGET: ${{ matrix.target }}
+      run: |
+        go version
+        echo ${GOROOT}
+        # The version of ${GOROOT} is 1.16.15, while the `/usr/bin/go` links to go1.17.11.
+        # We need to make sure they are linking to the same go version; otherwise, we will
+        # run into issue: "compile: version go1.16.15 does not match go tool version go1.17.11".
+        # Refer to https://github.com/actions/setup-go/issues/107 as well.
+        sudo unlink /usr/bin/go; sudo ln -s ${GOROOT}/bin/go /usr/bin/go; ls -lrt /usr/bin/go
+        echo "${TARGET}"
+        case "${TARGET}" in
+          linux-amd64-fmt)
+            GOARCH=amd64 PASSES='fmt bom dep' ./test
+            ;;
+          linux-amd64-integration-1-cpu)
+            GOARCH=amd64 CPU=1 PASSES='integration' RACE='false' ./test
+            ;;
+          linux-amd64-integration-2-cpu)
+            GOARCH=amd64 CPU=2 PASSES='integration' RACE='false' ./test
+            ;;
+          linux-amd64-integration-4-cpu)
+            GOARCH=amd64 CPU=4 PASSES='integration' RACE='false' ./test
+            ;;
+          linux-amd64-functional)
+            ./build && GOARCH=amd64 PASSES='functional' ./test
+            ;;
+          linux-amd64-unit-4-cpu-race)
+            GOARCH=amd64 PASSES='unit' RACE='true' CPU='4' ./test -p=2
+            ;;
+          all-build)
+            GOARCH=amd64 PASSES='build' ./test
+            GOARCH=386 PASSES='build' ./test
+            GO_BUILD_FLAGS='-v' GOOS=darwin GOARCH=amd64 ./build
+            GO_BUILD_FLAGS='-v' GOOS=windows GOARCH=amd64 ./build
+            GO_BUILD_FLAGS='-v' GOARCH=arm ./build
+            GO_BUILD_FLAGS='-v' GOARCH=arm64 ./build
+            GO_BUILD_FLAGS='-v' GOARCH=ppc64le ./build
+            GO_BUILD_FLAGS='-v' GOARCH=s390x ./build
+            ;;
+          linux-amd64-grpcproxy)
+            PASSES='build grpcproxy' CPU='4' RACE='true' ./test 2>&1 | tee test.log
+            ! egrep "(--- FAIL:|DATA RACE|panic: test timed out|appears to have leaked)" -B50 -A10 test.log
+            ;;
+          linux-386-unit)
+            GOARCH=386 PASSES='unit' ./test
+            ;;
+          *)
+            echo "Failed to find target"
+            exit 1
+            ;;
+        esac

.gitignore

@@ -31,6 +31,7 @@ vendor/**/*
 !vendor/**/License*
 !vendor/**/LICENCE*
 !vendor/**/LICENSE*
+!vendor/modules.txt
 vendor/**/*_test.go
 *.bak

.travis.yml (deleted)

@@ -1,94 +0,0 @@
-language: go
-go_import_path: go.etcd.io/etcd
-sudo: required
-services: docker
-go:
-- 1.12.12
-notifications:
-  on_success: never
-  on_failure: never
-env:
-  matrix:
-  - TARGET=linux-amd64-fmt
-  - TARGET=linux-amd64-integration-1-cpu
-  - TARGET=linux-amd64-integration-2-cpu
-  - TARGET=linux-amd64-integration-4-cpu
-  - TARGET=linux-amd64-functional
-  - TARGET=linux-amd64-unit
-  - TARGET=all-build
-  - TARGET=linux-amd64-grpcproxy
-  - TARGET=linux-386-unit
-matrix:
-  fast_finish: true
-  allow_failures:
-  - go: 1.12.12
-    env: TARGET=linux-amd64-grpcproxy
-  - go: 1.12.12
-    env: TARGET=linux-386-unit
-before_install:
-- if [[ $TRAVIS_GO_VERSION == 1.* ]]; then docker pull gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION}; fi
-install:
-- go get -t -v -d ./...
-script:
-- echo "TRAVIS_GO_VERSION=${TRAVIS_GO_VERSION}"
-- >
-  case "${TARGET}" in
-  linux-amd64-fmt)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "GOARCH=amd64 PASSES='fmt bom dep' ./test"
-    ;;
-  linux-amd64-integration-1-cpu)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "GOARCH=amd64 CPU=1 PASSES='integration' ./test"
-    ;;
-  linux-amd64-integration-2-cpu)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "GOARCH=amd64 CPU=2 PASSES='integration' ./test"
-    ;;
-  linux-amd64-integration-4-cpu)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "GOARCH=amd64 CPU=4 PASSES='integration' ./test"
-    ;;
-  linux-amd64-functional)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "./build && GOARCH=amd64 PASSES='functional' ./test"
-    ;;
-  linux-amd64-unit)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "GOARCH=amd64 PASSES='unit' ./test"
-    ;;
-  all-build)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "GOARCH=amd64 PASSES='build' ./test \
-        && GOARCH=386 PASSES='build' ./test \
-        && GO_BUILD_FLAGS='-v' GOOS=darwin GOARCH=amd64 ./build \
-        && GO_BUILD_FLAGS='-v' GOOS=windows GOARCH=amd64 ./build \
-        && GO_BUILD_FLAGS='-v' GOARCH=arm ./build \
-        && GO_BUILD_FLAGS='-v' GOARCH=arm64 ./build \
-        && GO_BUILD_FLAGS='-v' GOARCH=ppc64le ./build"
-    ;;
-  linux-amd64-grpcproxy)
-    sudo HOST_TMP_DIR=/tmp TEST_OPTS="PASSES='build grpcproxy'" make docker-test
-    ;;
-  linux-386-unit)
-    docker run --rm \
-      --volume=`pwd`:/go/src/go.etcd.io/etcd gcr.io/etcd-development/etcd-test:go${TRAVIS_GO_VERSION} \
-      /bin/bash -c "GOARCH=386 PASSES='unit' ./test"
-    ;;
-  esac

Dockerfile-release

@@ -1,4 +1,5 @@
-FROM k8s.gcr.io/debian-base:v1.0.0
+# TODO: move to k8s.gcr.io/build-image/debian-base:bullseye-v1.y.z when patched
+FROM debian:bullseye-20210927
 ADD etcd /usr/local/bin/
 ADD etcdctl /usr/local/bin/

View File

@@ -1,4 +1,5 @@
FROM k8s.gcr.io/debian-base-arm64:v1.0.0
# TODO: move to k8s.gcr.io/build-image/debian-base-arm64:bullseye-1.y.z when patched
FROM arm64v8/debian:bullseye-20210927
ADD etcd /usr/local/bin/
ADD etcdctl /usr/local/bin/

View File

@@ -1,4 +1,5 @@
FROM k8s.gcr.io/debian-base-ppc64le:v1.0.0
# TODO: move to k8s.gcr.io/build-image/debian-base-ppc64le:bullseye-1.y.z when patched
FROM ppc64le/debian:bullseye-20210927
ADD etcd /usr/local/bin/
ADD etcdctl /usr/local/bin/

View File

@@ -128,7 +128,7 @@ for TARGET_ARCH in "amd64" "arm64" "ppc64le"; do
TAG=quay.io/coreos/etcd GOARCH=${TARGET_ARCH} \
BINARYDIR=release/etcd-${VERSION}-linux-${TARGET_ARCH} \
BUILDDIR=release \
./scripts/build-docker ${VERSION}
./scripts/build-docker.sh ${VERSION}
done
```

View File

@@ -174,3 +174,5 @@ As of version v3.2 if an etcd server is launched with the option `--client-cert-
As of version v3.3 if an etcd server is launched with the option `--peer-cert-allowed-cn` or `--peer-cert-allowed-hostname`, filtering of inter-peer connections is enabled. Nodes can only join the etcd cluster if their TLS certificate identity matches the allowed one.
See [etcd security page](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/security.md) for more details.
## Notes on password strength
`etcdctl` command line interface and etcd API don't check the strength (length, mix of letters and numbers, etc.) of a password when creating a new user or updating the password of an existing user. Administrators need to enforce password-strength requirements themselves, for example with a client-side check as sketched below.
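Since neither `etcdctl` nor the API enforces anything here, one option is a client-side check before calling the user-add API. A minimal sketch, assuming the Go client (`go.etcd.io/etcd/clientv3`); the 12-character letters-plus-digits policy and the `validatePassword`/`addUser` helpers are illustrative, not part of etcd:

```go
package example

import (
	"context"
	"errors"
	"unicode"

	"go.etcd.io/etcd/clientv3"
)

// validatePassword applies an illustrative policy: at least 12 characters,
// containing both letters and digits. etcd itself performs no such check.
func validatePassword(pw string) error {
	var hasLetter, hasDigit bool
	for _, r := range pw {
		switch {
		case unicode.IsLetter(r):
			hasLetter = true
		case unicode.IsDigit(r):
			hasDigit = true
		}
	}
	if len(pw) < 12 || !hasLetter || !hasDigit {
		return errors.New("password must be at least 12 characters and mix letters and digits")
	}
	return nil
}

// addUser enforces the policy client-side before calling the etcd API.
func addUser(cli *clientv3.Client, name, pw string) error {
	if err := validatePassword(pw); err != nil {
		return err
	}
	_, err := cli.UserAdd(context.TODO(), name, pw)
	return err
}
```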

View File

@@ -4,7 +4,7 @@ title: etcd gateway
## What is etcd gateway
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses.
etcd gateway is a simple TCP proxy that forwards network data to the etcd cluster. The gateway is stateless and transparent; it neither inspects client requests nor interferes with cluster responses. It does not terminate TLS connections, perform TLS handshakes on behalf of its clients, or verify whether the connection is secure.
The gateway supports multiple etcd server endpoints and works on a simple round-robin policy. It only routes to available endpoints and hides failures from its clients. Other retry policies, such as weighted round-robin, may be supported in the future.
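To make the forwarding model concrete, here is a minimal sketch of a stateless round-robin TCP forwarder; it is not etcd's actual implementation (that lives in the `tcpproxy` package), and the endpoints and listen address are illustrative:

```go
package main

import (
	"io"
	"net"
	"sync/atomic"
)

func main() {
	endpoints := []string{"127.0.0.1:2379", "127.0.0.1:22379"} // illustrative backends
	var next uint64
	ln, err := net.Listen("tcp", "127.0.0.1:23790")
	if err != nil {
		panic(err)
	}
	for {
		in, err := ln.Accept()
		if err != nil {
			continue
		}
		// round-robin pick of the next backend endpoint
		ep := endpoints[atomic.AddUint64(&next, 1)%uint64(len(endpoints))]
		go func(in net.Conn) {
			out, err := net.Dial("tcp", ep)
			if err != nil {
				in.Close() // hide the backend failure by dropping the connection
				return
			}
			// forward bytes in both directions without inspecting them
			go func() { io.Copy(out, in); out.Close() }()
			io.Copy(in, out)
			in.Close()
		}(in)
	}
}
```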
@@ -74,7 +74,7 @@ $ etcd gateway start --discovery-srv=example.com
* Comma-separated list of etcd server targets for forwarding client connections.
* Default: `127.0.0.1:2379`
* Invalid example: `https://127.0.0.1:2379` (gateway does not terminate TLS)
* Invalid example: `https://127.0.0.1:2379` (gateway does not terminate TLS). Note that the gateway does not verify the HTTP scheme or inspect requests; it only forwards them to the given endpoints.
#### --discovery-srv
@@ -103,5 +103,5 @@ $ etcd gateway start --discovery-srv=example.com
#### --trusted-ca-file
* Path to the client TLS CA file for the etcd cluster. Used to authenticate endpoints.
* Path to the client TLS CA file for the etcd cluster, used to verify the endpoints returned from SRV discovery. Note that it is ONLY used for authenticating the discovered endpoints rather than for creating connections for data transfer. The gateway never terminates TLS connections or creates TLS connections on behalf of its clients.
* Default: (not set)
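For reference, the endpoint discovery that this flag authenticates boils down to a DNS SRV lookup. A minimal sketch, assuming the conventional `_etcd-client._tcp` records under an illustrative `example.com` domain:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// _etcd-client._tcp.example.com SRV records name candidate endpoints
	_, srvs, err := net.LookupSRV("etcd-client", "tcp", "example.com")
	if err != nil {
		panic(err)
	}
	for _, s := range srvs {
		fmt.Printf("%s:%d\n", s.Target, s.Port) // candidate endpoint for forwarding
	}
}
```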

View File

@@ -2,7 +2,7 @@
title: Transport security model
---
etcd supports automatic TLS as well as authentication through client certificates for both clients to server as well as peer (server to server / cluster) communication.
etcd supports automatic TLS as well as authentication through client certificates for both clients to server as well as peer (server to server / cluster) communication. **Note that etcd doesn't enable [RBAC based authentication][auth] or the authentication feature in the transport layer by default, to reduce friction for users getting started with the database. Further, changing this default would be a breaking change for a project that has been established since 2013. An etcd cluster which doesn't enable security features can expose its data to any client.**
To get up and running, first have a CA certificate and a signed key pair for one member. It is recommended to create and sign a new key pair for every member in a cluster.
@@ -426,8 +426,17 @@ Make sure to sign the certificates with a Subject Name the member's public IP ad
The certificate needs to be signed for the member's FQDN in its Subject Name; use Subject Alternative Names (short IP SANs) to add the IP address. The `etcd-ca` tool provides a `--domain=` option for its `new-cert` command, and openssl can make [it][alt-name] too.
### Does etcd encrypt data stored on disk drives?
No. etcd doesn't encrypt key/value data stored on disk drives. If a user needs to encrypt data stored in etcd, there are some options:
* Let client applications encrypt and decrypt the data (see the sketch below)
* Use a feature of the underlying storage system for encrypting stored data, like [dm-crypt]
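A minimal sketch of the first option, assuming AES-GCM with an application-managed key; decryption is the symmetric inverse via `gcm.Open`, and none of the helpers below (`encrypt`, `putEncrypted`) are etcd functionality:

```go
package example

import (
	"context"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"

	"go.etcd.io/etcd/clientv3"
)

// encrypt seals plaintext with AES-GCM and prepends the nonce so that
// the decrypting side can recover it.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// putEncrypted stores only ciphertext in etcd; etcd never sees the plaintext.
func putEncrypted(cli *clientv3.Client, key []byte, k, v string) error {
	ct, err := encrypt(key, []byte(v))
	if err != nil {
		return err
	}
	_, err = cli.Put(context.TODO(), k, string(ct))
	return err
}
```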
### I'm seeing a log warning that "directory X exist without recommended permission -rwx------"
When etcd creates certain new directories, it sets the file permissions to 700 to prevent unprivileged access as far as possible. However, if a user has already created the directory with their own preferences, etcd uses the existing directory and logs a warning message if its permissions differ from 700.
[cfssl]: https://github.com/cloudflare/cfssl
[tls-setup]: ../../hack/tls-setup
[tls-guide]: https://github.com/coreos/docs/blob/master/os/generate-self-signed-certificates.md
[alt-name]: http://wiki.cacert.org/FAQ/subjectAltName
[auth]: authentication.md
[dm-crypt]: https://en.wikipedia.org/wiki/Dm-crypt

View File

@@ -51,7 +51,7 @@ docker-remove:
GO_VERSION ?= 1.12.12
GO_VERSION ?= 1.12.17
ETCD_VERSION ?= $(shell git rev-parse --short HEAD || echo "GitNotFound")
TEST_SUFFIX = $(shell date +%s | base64 | head -c 15)
@@ -65,11 +65,11 @@ endif
# Example:
# GO_VERSION=1.12.12 make build-docker-test
# GO_VERSION=1.12.17 make build-docker-test
# make build-docker-test
#
# gcloud docker -- login -u _json_key -p "$(cat /etc/gcp-key-etcd-development.json)" https://gcr.io
# GO_VERSION=1.12.12 make push-docker-test
# GO_VERSION=1.12.17 make push-docker-test
# make push-docker-test
#
# gsutil -m acl ch -u allUsers:R -r gs://artifacts.etcd-development.appspot.com

View File

@@ -21,7 +21,7 @@ import (
"errors"
"time"
jwt "github.com/dgrijalva/jwt-go"
"github.com/golang-jwt/jwt"
"go.uber.org/zap"
)

View File

@@ -21,7 +21,7 @@ import (
"io/ioutil"
"time"
jwt "github.com/dgrijalva/jwt-go"
"github.com/golang-jwt/jwt"
)
const (

View File

@@ -113,38 +113,48 @@ func checkKeyPoint(lg *zap.Logger, cachedPerms *unifiedRangePermissions, key []b
return false
}
func (as *authStore) isRangeOpPermitted(tx backend.BatchTx, userName string, key, rangeEnd []byte, permtyp authpb.Permission_Type) bool {
// assumption: tx is Lock()ed
_, ok := as.rangePermCache[userName]
func (as *authStore) isRangeOpPermitted(userName string, key, rangeEnd []byte, permtyp authpb.Permission_Type) bool {
as.rangePermCacheMu.RLock()
defer as.rangePermCacheMu.RUnlock()
rangePerm, ok := as.rangePermCache[userName]
if !ok {
perms := getMergedPerms(as.lg, tx, userName)
if perms == nil {
if as.lg != nil {
as.lg.Warn(
"failed to create a merged permission",
zap.String("user-name", userName),
)
} else {
plog.Errorf("failed to create a unified permission of user %s", userName)
}
return false
}
as.rangePermCache[userName] = perms
as.lg.Error(
"user doesn't exist",
zap.String("user-name", userName),
)
return false
}
if len(rangeEnd) == 0 {
return checkKeyPoint(as.lg, as.rangePermCache[userName], key, permtyp)
return checkKeyPoint(as.lg, rangePerm, key, permtyp)
}
return checkKeyInterval(as.lg, as.rangePermCache[userName], key, rangeEnd, permtyp)
return checkKeyInterval(as.lg, rangePerm, key, rangeEnd, permtyp)
}
func (as *authStore) clearCachedPerm() {
func (as *authStore) refreshRangePermCache(tx backend.BatchTx) {
// Note that every authentication configuration update calls this method and it invalidates the entire
// rangePermCache and reconstructs it based on the users and roles stored in the backend.
// This can be a costly operation.
as.rangePermCacheMu.Lock()
defer as.rangePermCacheMu.Unlock()
as.rangePermCache = make(map[string]*unifiedRangePermissions)
}
func (as *authStore) invalidateCachedPerm(userName string) {
delete(as.rangePermCache, userName)
users := getAllUsers(as.lg, tx)
for _, user := range users {
userName := string(user.Name)
perms := getMergedPerms(as.lg, tx, userName)
if perms == nil {
as.lg.Error(
"failed to create a merged permission",
zap.String("user-name", userName),
)
continue
}
as.rangePermCache[userName] = perms
}
}
type unifiedRangePermissions struct {

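Distilled from the change above: reads take only the read lock on the hot path, and the whole cache is rebuilt under the write lock on configuration changes. A self-contained sketch with illustrative types (`permCache` stands in for the real `rangePermCache`):

```go
package example

import "sync"

// permCache is an illustrative stand-in for rangePermCache.
type permCache struct {
	mu    sync.RWMutex
	perms map[string]string // username -> merged permissions
}

// lookup is the hot path: concurrent readers do not block each other.
func (c *permCache) lookup(user string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	p, ok := c.perms[user]
	return p, ok
}

// refresh is the cold path: invalidate and rebuild the whole cache
// under the write lock whenever the auth configuration changes.
func (c *permCache) refresh(load func() map[string]string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.perms = load()
}
```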
View File

@@ -37,7 +37,7 @@ const (
// var for testing purposes
var (
simpleTokenTTL = 5 * time.Minute
simpleTokenTTLDefault = 300 * time.Second
simpleTokenTTLResolution = 1 * time.Second
)
@@ -47,6 +47,7 @@ type simpleTokenTTLKeeper struct {
stopc chan struct{}
deleteTokenFunc func(string)
mu *sync.Mutex
simpleTokenTTL time.Duration
}
func (tm *simpleTokenTTLKeeper) stop() {
@@ -58,12 +59,12 @@ func (tm *simpleTokenTTLKeeper) stop() {
}
func (tm *simpleTokenTTLKeeper) addSimpleToken(token string) {
tm.tokens[token] = time.Now().Add(simpleTokenTTL)
tm.tokens[token] = time.Now().Add(tm.simpleTokenTTL)
}
func (tm *simpleTokenTTLKeeper) resetSimpleToken(token string) {
if _, ok := tm.tokens[token]; ok {
tm.tokens[token] = time.Now().Add(simpleTokenTTL)
tm.tokens[token] = time.Now().Add(tm.simpleTokenTTL)
}
}
@@ -101,6 +102,7 @@ type tokenSimple struct {
simpleTokenKeeper *simpleTokenTTLKeeper
simpleTokensMu sync.Mutex
simpleTokens map[string]string // token -> username
simpleTokenTTL time.Duration
}
func (t *tokenSimple) genTokenPrefix() (string, error) {
@@ -157,6 +159,15 @@ func (t *tokenSimple) invalidateUser(username string) {
}
func (t *tokenSimple) enable() {
t.simpleTokensMu.Lock()
defer t.simpleTokensMu.Unlock()
if t.simpleTokenKeeper != nil { // already enabled
return
}
if t.simpleTokenTTL <= 0 {
t.simpleTokenTTL = simpleTokenTTLDefault
}
delf := func(tk string) {
if username, ok := t.simpleTokens[tk]; ok {
if t.lg != nil {
@@ -177,6 +188,7 @@ func (t *tokenSimple) enable() {
stopc: make(chan struct{}),
deleteTokenFunc: delf,
mu: &t.simpleTokensMu,
simpleTokenTTL: t.simpleTokenTTL,
}
go t.simpleTokenKeeper.run()
}
@@ -234,10 +246,14 @@ func (t *tokenSimple) isValidSimpleToken(ctx context.Context, token string) bool
return false
}
func newTokenProviderSimple(lg *zap.Logger, indexWaiter func(uint64) <-chan struct{}) *tokenSimple {
func newTokenProviderSimple(lg *zap.Logger, indexWaiter func(uint64) <-chan struct{}, TokenTTL time.Duration) *tokenSimple {
if lg == nil {
lg = zap.NewNop()
}
return &tokenSimple{
lg: lg,
simpleTokens: make(map[string]string),
indexWaiter: indexWaiter,
lg: lg,
simpleTokens: make(map[string]string),
indexWaiter: indexWaiter,
simpleTokenTTL: TokenTTL,
}
}

View File

@@ -24,9 +24,9 @@ import (
// TestSimpleTokenDisabled ensures that TokenProviderSimple behaves correctly when
// disabled.
func TestSimpleTokenDisabled(t *testing.T) {
initialState := newTokenProviderSimple(zap.NewExample(), dummyIndexWaiter)
initialState := newTokenProviderSimple(zap.NewExample(), dummyIndexWaiter, simpleTokenTTLDefault)
explicitlyDisabled := newTokenProviderSimple(zap.NewExample(), dummyIndexWaiter)
explicitlyDisabled := newTokenProviderSimple(zap.NewExample(), dummyIndexWaiter, simpleTokenTTLDefault)
explicitlyDisabled.enable()
explicitlyDisabled.disable()
@@ -48,7 +48,7 @@ func TestSimpleTokenDisabled(t *testing.T) {
// TestSimpleTokenAssign ensures that TokenProviderSimple can correctly assign a
// token, look it up with info, and invalidate it by user.
func TestSimpleTokenAssign(t *testing.T) {
tp := newTokenProviderSimple(zap.NewExample(), dummyIndexWaiter)
tp := newTokenProviderSimple(zap.NewExample(), dummyIndexWaiter, simpleTokenTTLDefault)
tp.enable()
ctx := context.WithValue(context.WithValue(context.TODO(), AuthenticateParamIndex{}, uint64(1)), AuthenticateParamSimpleTokenPrefix{}, "dummy")
token, err := tp.assign(ctx, "user1", 0)

View File

@@ -23,6 +23,7 @@ import (
"strings"
"sync"
"sync/atomic"
"time"
"go.etcd.io/etcd/auth/authpb"
"go.etcd.io/etcd/etcdserver/api/v3rpc/rpctypes"
@@ -59,6 +60,7 @@ var (
ErrRoleNotFound = errors.New("auth: role not found")
ErrRoleEmpty = errors.New("auth: role name is empty")
ErrAuthFailed = errors.New("auth: authentication failed, invalid user ID or password")
ErrNoPasswordUser = errors.New("auth: authentication failed, password was given for no password user")
ErrPermissionDenied = errors.New("auth: permission denied")
ErrRoleNotGranted = errors.New("auth: role is not granted to the user")
ErrPermissionNotGranted = errors.New("auth: permission is not granted to the role")
@@ -213,7 +215,14 @@ type authStore struct {
enabled bool
enabledMu sync.RWMutex
rangePermCache map[string]*unifiedRangePermissions // username -> unifiedRangePermissions
// rangePermCache needs to be protected by rangePermCacheMu
// rangePermCacheMu needs to be write locked only in the initialization phase or on configuration changes
// Hot paths like Range() need to acquire only the read lock, for better performance
//
// Note that BatchTx and ReadTx cannot be a mutex for rangePermCache because they are independent resources
// see also: https://github.com/etcd-io/etcd/pull/13920#discussion_r849114855
rangePermCache map[string]*unifiedRangePermissions // username -> unifiedRangePermissions
rangePermCacheMu sync.RWMutex
tokenProvider TokenProvider
syncConsistentIndex saveConsistentIndexFunc
@@ -256,7 +265,7 @@ func (as *authStore) AuthEnable() error {
as.enabled = true
as.tokenProvider.enable()
as.rangePermCache = make(map[string]*unifiedRangePermissions)
as.refreshRangePermCache(tx)
as.setRevision(getRevision(tx))
@@ -360,7 +369,7 @@ func (as *authStore) CheckPassword(username, password string) (uint64, error) {
}
if user.Options != nil && user.Options.NoPassword {
return 0, ErrAuthFailed
return 0, ErrNoPasswordUser
}
return getRevision(tx), nil
@@ -398,6 +407,9 @@ func (as *authStore) Recover(be backend.Backend) {
as.enabledMu.Lock()
as.enabled = enabled
if enabled {
as.tokenProvider.enable()
}
as.enabledMu.Unlock()
}
@@ -452,6 +464,7 @@ func (as *authStore) UserAdd(r *pb.AuthUserAddRequest) (*pb.AuthUserAddResponse,
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
if as.lg != nil {
as.lg.Info("added a user", zap.String("user-name", r.Name))
@@ -484,8 +497,8 @@ func (as *authStore) UserDelete(r *pb.AuthUserDeleteRequest) (*pb.AuthUserDelete
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
as.invalidateCachedPerm(r.Name)
as.tokenProvider.invalidateUser(r.Name)
if as.lg != nil {
@@ -537,8 +550,8 @@ func (as *authStore) UserChangePassword(r *pb.AuthUserChangePasswordRequest) (*p
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
as.invalidateCachedPerm(r.Name)
as.tokenProvider.invalidateUser(r.Name)
if as.lg != nil {
@@ -590,10 +603,9 @@ func (as *authStore) UserGrantRole(r *pb.AuthUserGrantRoleRequest) (*pb.AuthUser
putUser(as.lg, tx, user)
as.invalidateCachedPerm(r.User)
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
if as.lg != nil {
as.lg.Info(
@@ -677,10 +689,9 @@ func (as *authStore) UserRevokeRole(r *pb.AuthUserRevokeRoleRequest) (*pb.AuthUs
putUser(as.lg, tx, updatedUser)
as.invalidateCachedPerm(r.Name)
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
if as.lg != nil {
as.lg.Info(
@@ -750,12 +761,9 @@ func (as *authStore) RoleRevokePermission(r *pb.AuthRoleRevokePermissionRequest)
putRole(as.lg, tx, updatedRole)
// TODO(mitake): currently single role update invalidates every cache
// It should be optimized.
as.clearCachedPerm()
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
if as.lg != nil {
as.lg.Info(
@@ -811,11 +819,11 @@ func (as *authStore) RoleDelete(r *pb.AuthRoleDeleteRequest) (*pb.AuthRoleDelete
putUser(as.lg, tx, updatedUser)
as.invalidateCachedPerm(string(user.Name))
}
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
if as.lg != nil {
as.lg.Info("deleted a role", zap.String("role-name", r.Role))
@@ -905,12 +913,9 @@ func (as *authStore) RoleGrantPermission(r *pb.AuthRoleGrantPermissionRequest) (
putRole(as.lg, tx, role)
// TODO(mitake): currently single role update invalidates every cache
// It should be optimized.
as.clearCachedPerm()
as.commitRevision(tx)
as.saveConsistentIndex(tx)
as.refreshRangePermCache(tx)
if as.lg != nil {
as.lg.Info(
@@ -971,7 +976,7 @@ func (as *authStore) isOpPermitted(userName string, revision uint64, key, rangeE
return nil
}
if as.isRangeOpPermitted(tx, userName, key, rangeEnd, permTyp) {
if as.isRangeOpPermitted(userName, key, rangeEnd, permTyp) {
return nil
}
@@ -994,7 +999,7 @@ func (as *authStore) IsAdminPermitted(authInfo *AuthInfo) error {
if !as.IsAuthEnabled() {
return nil
}
if authInfo == nil {
if authInfo == nil || authInfo.Username == "" {
return ErrUserEmpty
}
@@ -1037,7 +1042,15 @@ func getUser(lg *zap.Logger, tx backend.BatchTx, username string) *authpb.User {
}
func getAllUsers(lg *zap.Logger, tx backend.BatchTx) []*authpb.User {
_, vs := tx.UnsafeRange(authUsersBucketName, []byte{0}, []byte{0xff}, -1)
var vs [][]byte
err := tx.UnsafeForEach(authUsersBucketName, func(k []byte, v []byte) error {
vs = append(vs, v)
return nil
})
if err != nil {
lg.Panic("failed to get users",
zap.Error(err))
}
if len(vs) == 0 {
return nil
}
@@ -1190,6 +1203,8 @@ func NewAuthStore(lg *zap.Logger, be backend.Backend, tp TokenProvider, bcryptCo
as.setupMetricsReporter()
as.refreshRangePermCache(tx)
tx.Unlock()
be.ForceCommit()
@@ -1351,7 +1366,8 @@ func decomposeOpts(lg *zap.Logger, optstr string) (string, map[string]string, er
func NewTokenProvider(
lg *zap.Logger,
tokenOpts string,
indexWaiter func(uint64) <-chan struct{}) (TokenProvider, error) {
indexWaiter func(uint64) <-chan struct{},
TokenTTL time.Duration) (TokenProvider, error) {
tokenType, typeSpecificOpts, err := decomposeOpts(lg, tokenOpts)
if err != nil {
return nil, ErrInvalidAuthOpts
@@ -1364,7 +1380,7 @@ func NewTokenProvider(
} else {
plog.Warningf("simple token is not cryptographically signed")
}
return newTokenProviderSimple(lg, indexWaiter), nil
return newTokenProviderSimple(lg, indexWaiter, TokenTTL), nil
case tokenTypeJWT:
return newTokenProviderJWT(lg, typeSpecificOpts)

View File

@@ -28,6 +28,7 @@ import (
"go.etcd.io/etcd/etcdserver/api/v3rpc/rpctypes"
pb "go.etcd.io/etcd/etcdserver/etcdserverpb"
"go.etcd.io/etcd/mvcc/backend"
"go.etcd.io/etcd/pkg/adt"
"go.uber.org/zap"
"golang.org/x/crypto/bcrypt"
@@ -48,7 +49,7 @@ func TestNewAuthStoreRevision(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter, simpleTokenTTLDefault)
if err != nil {
t.Fatal(err)
}
@@ -78,7 +79,7 @@ func TestNewAuthStoreBcryptCost(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter, simpleTokenTTLDefault)
if err != nil {
t.Fatal(err)
}
@@ -98,7 +99,7 @@ func TestNewAuthStoreBcryptCost(t *testing.T) {
func setupAuthStore(t *testing.T) (store *authStore, teardownfunc func(t *testing.T)) {
b, tPath := backend.NewDefaultTmpBackend()
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter, simpleTokenTTLDefault)
if err != nil {
t.Fatal(err)
}
@@ -151,7 +152,8 @@ func TestUserAdd(t *testing.T) {
as, tearDown := setupAuthStore(t)
defer tearDown(t)
ua := &pb.AuthUserAddRequest{Name: "foo", Options: &authpb.UserAddOptions{NoPassword: false}}
const userName = "foo"
ua := &pb.AuthUserAddRequest{Name: userName, Options: &authpb.UserAddOptions{NoPassword: false}}
_, err := as.UserAdd(ua) // add an existing user
if err == nil {
t.Fatalf("expected %v, got %v", ErrUserAlreadyExist, err)
@@ -165,6 +167,11 @@ func TestUserAdd(t *testing.T) {
if err != ErrUserEmpty {
t.Fatal(err)
}
if _, ok := as.rangePermCache[userName]; !ok {
t.Fatalf("user %s should be added but it doesn't exist in rangePermCache", userName)
}
}
func TestRecover(t *testing.T) {
@@ -213,7 +220,8 @@ func TestUserDelete(t *testing.T) {
defer tearDown(t)
// delete an existing user
ud := &pb.AuthUserDeleteRequest{Name: "foo"}
const userName = "foo"
ud := &pb.AuthUserDeleteRequest{Name: userName}
_, err := as.UserDelete(ud)
if err != nil {
t.Fatal(err)
@@ -227,6 +235,47 @@ func TestUserDelete(t *testing.T) {
if err != ErrUserNotFound {
t.Fatalf("expected %v, got %v", ErrUserNotFound, err)
}
if _, ok := as.rangePermCache[userName]; ok {
t.Fatalf("user %s should be deleted but it exists in rangePermCache", userName)
}
}
func TestUserDeleteAndPermCache(t *testing.T) {
as, tearDown := setupAuthStore(t)
defer tearDown(t)
// delete an existing user
const deletedUserName = "foo"
ud := &pb.AuthUserDeleteRequest{Name: deletedUserName}
_, err := as.UserDelete(ud)
if err != nil {
t.Fatal(err)
}
// delete a non-existing user
_, err = as.UserDelete(ud)
if err != ErrUserNotFound {
t.Fatalf("expected %v, got %v", ErrUserNotFound, err)
}
if _, ok := as.rangePermCache[deletedUserName]; ok {
t.Fatalf("user %s should be deleted but it exists in rangePermCache", deletedUserName)
}
// add a new user
const newUser = "bar"
ua := &pb.AuthUserAddRequest{Name: newUser, Options: &authpb.UserAddOptions{NoPassword: false}}
_, err = as.UserAdd(ua)
if err != nil {
t.Fatal(err)
}
if _, ok := as.rangePermCache[newUser]; !ok {
t.Fatalf("user %s should exist but it doesn't exist in rangePermCache", newUser)
}
}
func TestUserChangePassword(t *testing.T) {
@@ -503,17 +552,44 @@ func TestUserRevokePermission(t *testing.T) {
t.Fatal(err)
}
_, err = as.UserGrantRole(&pb.AuthUserGrantRoleRequest{User: "foo", Role: "role-test"})
const userName = "foo"
_, err = as.UserGrantRole(&pb.AuthUserGrantRoleRequest{User: userName, Role: "role-test"})
if err != nil {
t.Fatal(err)
}
_, err = as.UserGrantRole(&pb.AuthUserGrantRoleRequest{User: "foo", Role: "role-test-1"})
_, err = as.UserGrantRole(&pb.AuthUserGrantRoleRequest{User: userName, Role: "role-test-1"})
if err != nil {
t.Fatal(err)
}
u, err := as.UserGet(&pb.AuthUserGetRequest{Name: "foo"})
perm := &authpb.Permission{
PermType: authpb.WRITE,
Key: []byte("WriteKeyBegin"),
RangeEnd: []byte("WriteKeyEnd"),
}
_, err = as.RoleGrantPermission(&pb.AuthRoleGrantPermissionRequest{
Name: "role-test-1",
Perm: perm,
})
if err != nil {
t.Fatal(err)
}
if _, ok := as.rangePermCache[userName]; !ok {
t.Fatalf("User %s should have its entry in rangePermCache", userName)
}
unifiedPerm := as.rangePermCache[userName]
pt1 := adt.NewBytesAffinePoint([]byte("WriteKeyBegin"))
if !unifiedPerm.writePerms.Contains(pt1) {
t.Fatal("rangePermCache should contain WriteKeyBegin")
}
pt2 := adt.NewBytesAffinePoint([]byte("OutOfRange"))
if unifiedPerm.writePerms.Contains(pt2) {
t.Fatal("rangePermCache should not contain OutOfRange")
}
u, err := as.UserGet(&pb.AuthUserGetRequest{Name: userName})
if err != nil {
t.Fatal(err)
}
@@ -523,12 +599,12 @@ func TestUserRevokePermission(t *testing.T) {
t.Fatalf("expected %v, got %v", expected, u.Roles)
}
_, err = as.UserRevokeRole(&pb.AuthUserRevokeRoleRequest{Name: "foo", Role: "role-test-1"})
_, err = as.UserRevokeRole(&pb.AuthUserRevokeRoleRequest{Name: userName, Role: "role-test-1"})
if err != nil {
t.Fatal(err)
}
u, err = as.UserGet(&pb.AuthUserGetRequest{Name: "foo"})
u, err = as.UserGet(&pb.AuthUserGetRequest{Name: userName})
if err != nil {
t.Fatal(err)
}
@@ -626,7 +702,7 @@ func TestAuthInfoFromCtxRace(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter, simpleTokenTTLDefault)
if err != nil {
t.Fatal(err)
}
@@ -658,6 +734,12 @@ func TestIsAdminPermitted(t *testing.T) {
t.Errorf("expected %v, got %v", ErrUserNotFound, err)
}
// empty user
err = as.IsAdminPermitted(&AuthInfo{Username: "", Revision: 1})
if err != ErrUserEmpty {
t.Errorf("expected %v, got %v", ErrUserEmpty, err)
}
// non-admin user
err = as.IsAdminPermitted(&AuthInfo{Username: "foo", Revision: 1})
if err != ErrPermissionDenied {
@@ -692,7 +774,7 @@ func TestRecoverFromSnapshot(t *testing.T) {
as.Close()
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter, simpleTokenTTLDefault)
if err != nil {
t.Fatal(err)
}
@@ -725,13 +807,13 @@ func contains(array []string, str string) bool {
func TestHammerSimpleAuthenticate(t *testing.T) {
// set TTL values low to try to trigger races
oldTTL, oldTTLRes := simpleTokenTTL, simpleTokenTTLResolution
oldTTL, oldTTLRes := simpleTokenTTLDefault, simpleTokenTTLResolution
defer func() {
simpleTokenTTL = oldTTL
simpleTokenTTLDefault = oldTTL
simpleTokenTTLResolution = oldTTLRes
}()
simpleTokenTTL = 10 * time.Millisecond
simpleTokenTTLResolution = simpleTokenTTL
simpleTokenTTLDefault = 10 * time.Millisecond
simpleTokenTTLResolution = simpleTokenTTLDefault
users := make(map[string]struct{})
as, tearDown := setupAuthStore(t)
@@ -774,7 +856,7 @@ func TestRolesOrder(t *testing.T) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter)
tp, err := NewTokenProvider(zap.NewExample(), tokenTypeSimple, dummyIndexWaiter, simpleTokenTTLDefault)
if err != nil {
t.Fatal(err)
}
@@ -829,7 +911,7 @@ func testAuthInfoFromCtxWithRoot(t *testing.T, opts string) {
b, tPath := backend.NewDefaultTmpBackend()
defer os.Remove(tPath)
tp, err := NewTokenProvider(zap.NewExample(), opts, dummyIndexWaiter)
tp, err := NewTokenProvider(zap.NewExample(), opts, dummyIndexWaiter, simpleTokenTTLDefault)
if err != nil {
t.Fatal(err)
}

View File

@@ -44,15 +44,6 @@
}
]
},
{
"project": "github.com/dgrijalva/jwt-go",
"licenses": [
{
"type": "MIT License",
"confidence": 0.9891304347826086
}
]
},
{
"project": "github.com/dustin/go-humanize",
"licenses": [
@@ -71,6 +62,15 @@
}
]
},
{
"project": "github.com/golang-jwt/jwt",
"licenses": [
{
"type": "MIT License",
"confidence": 0.9891304347826086
}
]
},
{
"project": "github.com/golang/groupcache/lru",
"licenses": [
@@ -378,7 +378,7 @@
]
},
{
"project": "golang.org/x/sys/unix",
"project": "golang.org/x/sys",
"licenses": [
{
"type": "BSD 3-clause \"New\" or \"Revised\" License",

View File

@@ -19,7 +19,6 @@ import (
"fmt"
"strings"
"testing"
"time"
"go.etcd.io/etcd/clientv3/balancer/picker"
"go.etcd.io/etcd/clientv3/balancer/resolver/endpoint"
@@ -92,24 +91,25 @@ func TestRoundRobinBalancedResolvableNoFailover(t *testing.T) {
return picked, err
}
prev, switches := "", 0
_, picked, err := warmupConnections(reqFunc, tc.serverCount, "")
if err != nil {
t.Fatalf("Unexpected failure %v", err)
}
// verify that we round robin
prev, switches := picked, 0
for i := 0; i < tc.reqN; i++ {
picked, err := reqFunc(context.Background())
picked, err = reqFunc(context.Background())
if err != nil {
t.Fatalf("#%d: unexpected failure %v", i, err)
}
if prev == "" {
prev = picked
continue
}
if prev != picked {
switches++
}
prev = picked
}
if tc.serverCount > 1 && switches < tc.reqN-3 { // -3 for initial resolutions
// TODO: FIX ME
t.Skipf("expected balanced loads for %d requests, got switches %d", tc.reqN, switches)
if tc.serverCount > 1 && switches != tc.reqN {
t.Fatalf("expected balanced loads for %d requests, got switches %d", tc.reqN, switches)
}
})
}
@@ -160,26 +160,21 @@ func TestRoundRobinBalancedResolvableFailoverFromServerFail(t *testing.T) {
}
// stop first server, loads should be redistributed
// stopped server should never be picked
ms.StopAt(0)
available := make(map[string]struct{})
for i := 1; i < serverCount; i++ {
available[eps[i]] = struct{}{}
// the stopped server will be transitioned into the TRANSIENT_FAILURE state,
// but this doesn't happen instantaneously and it can still be picked for a short period of time,
// so we ignore "transport is closing" in that case
available, picked, err := warmupConnections(reqFunc, serverCount-1, "transport is closing")
if err != nil {
t.Fatalf("Unexpected failure %v", err)
}
reqN := 10
prev, switches := "", 0
prev, switches := picked, 0
for i := 0; i < reqN; i++ {
picked, err := reqFunc(context.Background())
if err != nil && strings.Contains(err.Error(), "transport is closing") {
continue
}
if prev == "" { // first failover
if eps[0] == picked {
t.Fatalf("expected failover from %q, picked %q", eps[0], picked)
}
prev = picked
continue
picked, err = reqFunc(context.Background())
if err != nil {
t.Fatalf("#%d: unexpected failure %v", i, err)
}
if _, ok := available[picked]; !ok {
t.Fatalf("picked unavailable address %q (available %v)", picked, available)
@@ -189,18 +184,18 @@ func TestRoundRobinBalancedResolvableFailoverFromServerFail(t *testing.T) {
}
prev = picked
}
if switches < reqN-3 { // -3 for initial resolutions + failover
// TODO: FIX ME!
t.Skipf("expected balanced loads for %d requests, got switches %d", reqN, switches)
if switches != reqN {
t.Fatalf("expected balanced loads for %d requests, got switches %d", reqN, switches)
}
// now failed server comes back
ms.StartAt(0)
available, picked, err = warmupConnections(reqFunc, serverCount, "")
if err != nil {
t.Fatalf("Unexpected failure %v", err)
}
// enough time for reconnecting to recovered server
time.Sleep(time.Second)
prev, switches = "", 0
prev, switches = picked, 0
recoveredAddr, recovered := eps[0], 0
available[recoveredAddr] = struct{}{}
@@ -209,10 +204,6 @@ func TestRoundRobinBalancedResolvableFailoverFromServerFail(t *testing.T) {
if err != nil {
t.Fatalf("#%d: unexpected failure %v", i, err)
}
if prev == "" {
prev = picked
continue
}
if _, ok := available[picked]; !ok {
t.Fatalf("#%d: picked unavailable address %q (available %v)", i, picked, available)
}
@@ -224,10 +215,10 @@ func TestRoundRobinBalancedResolvableFailoverFromServerFail(t *testing.T) {
}
prev = picked
}
if switches < reqN-3 { // -3 for initial resolutions
if switches != 2*reqN {
t.Fatalf("expected balanced loads for %d requests, got switches %d", reqN, switches)
}
if recovered < reqN/serverCount {
if recovered != 2*reqN/serverCount {
t.Fatalf("recovered server %q got only %d requests", recoveredAddr, recovered)
}
}
@@ -242,11 +233,10 @@ func TestRoundRobinBalancedResolvableFailoverFromRequestFail(t *testing.T) {
}
defer ms.Stop()
var eps []string
available := make(map[string]struct{})
for _, svr := range ms.Servers {
eps = append(eps, svr.ResolverAddress().Addr)
available[svr.Address] = struct{}{}
}
rsv, err := endpoint.NewResolverGroup("requestfail")
if err != nil {
t.Fatal(err)
@@ -277,6 +267,11 @@ func TestRoundRobinBalancedResolvableFailoverFromRequestFail(t *testing.T) {
return picked, err
}
available, picked, err := warmupConnections(reqFunc, serverCount, "")
if err != nil {
t.Fatalf("Unexpected failure %v", err)
}
reqN := 20
prev, switches := "", 0
for i := 0; i < reqN; i++ {
@@ -285,17 +280,13 @@ func TestRoundRobinBalancedResolvableFailoverFromRequestFail(t *testing.T) {
if i%2 == 0 {
cancel()
}
picked, err := reqFunc(ctx)
picked, err = reqFunc(ctx)
if i%2 == 0 {
if s, ok := status.FromError(err); ok && s.Code() != codes.Canceled || picked != "" {
if s, ok := status.FromError(err); ok && s.Code() != codes.Canceled {
t.Fatalf("#%d: expected %v, got %v", i, context.Canceled, err)
}
continue
}
if prev == "" && picked != "" {
prev = picked
continue
}
if _, ok := available[picked]; !ok {
t.Fatalf("#%d: picked unavailable address %q (available %v)", i, picked, available)
}
@@ -304,7 +295,29 @@ func TestRoundRobinBalancedResolvableFailoverFromRequestFail(t *testing.T) {
}
prev = picked
}
if switches < reqN/2-3 { // -3 for initial resolutions + failover
if switches != reqN/2 {
t.Fatalf("expected balanced loads for %d requests, got switches %d", reqN, switches)
}
}
type reqFuncT = func(ctx context.Context) (picked string, err error)
func warmupConnections(reqFunc reqFuncT, serverCount int, ignoreErr string) (map[string]struct{}, string, error) {
var picked string
var err error
available := make(map[string]struct{})
// cycle through all peers to indirectly verify that balancer subconn list is fully loaded
// otherwise we can't reliably count switches between 'picked' peers in the test assert phase
for len(available) < serverCount {
picked, err = reqFunc(context.Background())
if err != nil {
if ignoreErr != "" && strings.Contains(err.Error(), ignoreErr) {
// skip ignored errors
continue
}
return available, picked, err
}
available[picked] = struct{}{}
}
return available, picked, err
}

View File

@@ -174,7 +174,9 @@ func (c *Client) Sync(ctx context.Context) error {
}
var eps []string
for _, m := range mresp.Members {
eps = append(eps, m.ClientURLs...)
if len(m.Name) != 0 && !m.IsLearner {
eps = append(eps, m.ClientURLs...)
}
}
c.SetEndpoints(eps...)
return nil

View File

@@ -22,6 +22,7 @@ import (
"time"
"go.etcd.io/etcd/etcdserver/api/v3rpc/rpctypes"
"go.etcd.io/etcd/etcdserver/etcdserverpb"
"go.etcd.io/etcd/pkg/testutil"
"google.golang.org/grpc"
@@ -166,3 +167,51 @@ func TestCloseCtxClient(t *testing.T) {
t.Errorf("failed to Close the client. %v", err)
}
}
func TestSyncFiltersMembers(t *testing.T) {
defer testutil.AfterTest(t)
c, _ := New(Config{Endpoints: []string{"http://254.0.0.1:12345"}})
c.Cluster = &mockCluster{
[]*etcdserverpb.Member{
{ID: 0, Name: "", ClientURLs: []string{"http://254.0.0.1:12345"}, IsLearner: false},
{ID: 1, Name: "isStarted", ClientURLs: []string{"http://254.0.0.2:12345"}, IsLearner: true},
{ID: 2, Name: "isStartedAndNotLearner", ClientURLs: []string{"http://254.0.0.3:12345"}, IsLearner: false},
},
}
c.Sync(context.Background())
endpoints := c.Endpoints()
if len(endpoints) != 1 || endpoints[0] != "http://254.0.0.3:12345" {
t.Error("Client.Sync uses learner and/or non-started member client URLs")
}
c.Close()
}
type mockCluster struct {
members []*etcdserverpb.Member
}
func (mc *mockCluster) MemberList(ctx context.Context) (*MemberListResponse, error) {
return &MemberListResponse{Members: mc.members}, nil
}
func (mc *mockCluster) MemberAdd(ctx context.Context, peerAddrs []string) (*MemberAddResponse, error) {
return nil, nil
}
func (mc *mockCluster) MemberAddAsLearner(ctx context.Context, peerAddrs []string) (*MemberAddResponse, error) {
return nil, nil
}
func (mc *mockCluster) MemberRemove(ctx context.Context, id uint64) (*MemberRemoveResponse, error) {
return nil, nil
}
func (mc *mockCluster) MemberUpdate(ctx context.Context, id uint64, peerAddrs []string) (*MemberUpdateResponse, error) {
return nil, nil
}
func (mc *mockCluster) MemberPromote(ctx context.Context, id uint64) (*MemberPromoteResponse, error) {
return nil, nil
}

View File

@@ -65,22 +65,18 @@ func TestResumeElection(t *testing.T) {
respChan := make(chan *clientv3.GetResponse)
go func() {
defer close(respChan)
o := e.Observe(ctx)
respChan <- nil
for {
select {
case resp, ok := <-o:
if !ok {
t.Fatal("Observe() channel closed prematurely")
}
// Ignore any observations that candidate1 was elected
if string(resp.Kvs[0].Value) == "candidate1" {
continue
}
respChan <- &resp
return
for resp := range o {
// Ignore any observations that candidate1 was elected
if string(resp.Kvs[0].Value) == "candidate1" {
continue
}
respChan <- &resp
return
}
t.Error("Observe() channel closed prematurely")
}()
// wait until observe goroutine is running

View File

@@ -619,16 +619,28 @@ func TestLeasingTxnOwnerGet(t *testing.T) {
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
client := clus.Client(0)
lkv, closeLKV, err := leasing.NewKV(clus.Client(0), "pfx/")
testutil.AssertNil(t, err)
defer closeLKV()
defer func() {
// In '--tags cluster_proxy' mode the client needs to be closed before
// closeLKV(). This interrupts all outstanding watches. Closing by closeLKV()
// is not sufficient as (unfortunately) closing the context does not interrupt watches.
// See ./clientv3/watch.go:
// >> Currently, client contexts are overwritten with "valCtx" that never closes. <<
clus.TakeClient(0) // avoid double Close() of the client.
client.Close()
closeLKV()
}()
keyCount := rand.Intn(10) + 1
var ops []clientv3.Op
presps := make([]*clientv3.PutResponse, keyCount)
for i := range presps {
k := fmt.Sprintf("k-%d", i)
presp, err := clus.Client(0).Put(context.TODO(), k, k+k)
presp, err := client.Put(context.TODO(), k, k+k)
if err != nil {
t.Fatal(err)
}

View File

@@ -65,8 +65,8 @@ func TestUserErrorAuth(t *testing.T) {
authSetupRoot(t, authapi.Auth)
// unauthenticated client
if _, err := authapi.UserAdd(context.TODO(), "foo", "bar"); err != rpctypes.ErrUserNotFound {
t.Fatalf("expected %v, got %v", rpctypes.ErrUserNotFound, err)
if _, err := authapi.UserAdd(context.TODO(), "foo", "bar"); err != rpctypes.ErrUserEmpty {
t.Fatalf("expected %v, got %v", rpctypes.ErrUserEmpty, err)
}
// wrong id or password
@@ -114,7 +114,7 @@ func authSetupRoot(t *testing.T, auth clientv3.Auth) {
func TestGetTokenWithoutAuth(t *testing.T) {
defer testutil.AfterTest(t)
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 10})
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 2})
defer clus.Terminate(t)
authapi := clus.RandClient()
@@ -130,7 +130,7 @@ func TestGetTokenWithoutAuth(t *testing.T) {
// "Username" and "Password" must be used
cfg := clientv3.Config{
Endpoints: authapi.Endpoints(),
DialTimeout: 1 * time.Second, // make sure all connection time of connect all endpoint must be more DialTimeout
DialTimeout: 5 * time.Second,
Username: "root",
Password: "123",
}
@@ -142,7 +142,7 @@ func TestGetTokenWithoutAuth(t *testing.T) {
switch err {
case nil:
t.Log("passes as expected, but may be connection time less than DialTimeout")
t.Log("passes as expected")
case context.DeadlineExceeded:
t.Errorf("not expected result:%v with endpoint:%s", err, authapi.Endpoints())
case rpctypes.ErrAuthNotEnabled:
@@ -150,5 +150,4 @@ func TestGetTokenWithoutAuth(t *testing.T) {
default:
t.Errorf("other errors:%v", err)
}
}

View File

@@ -582,7 +582,34 @@ func testWatchWithProgressNotify(t *testing.T, watchOnPut bool) {
}
}
func TestConfigurableWatchProgressNotifyInterval(t *testing.T) {
progressInterval := 200 * time.Millisecond
clus := integration.NewClusterV3(t,
&integration.ClusterConfig{
Size: 3,
WatchProgressNotifyInterval: progressInterval,
})
defer clus.Terminate(t)
opts := []clientv3.OpOption{clientv3.WithProgressNotify()}
rch := clus.RandClient().Watch(context.Background(), "foo", opts...)
timeout := 1 * time.Second // we expect to receive a watch progress notify within 2 * progressInterval,
// but in CPU-starved situations it may take longer, so we use 1 second as the timeout.
select {
case resp := <-rch: // waiting for a watch progress notify response
if !resp.IsProgressNotify() {
t.Fatalf("expected resp.IsProgressNotify() == true")
}
case <-time.After(timeout):
t.Fatalf("timed out waiting for watch progress notify response in %v", timeout)
}
}
func TestWatchRequestProgress(t *testing.T) {
if integration.ThroughProxy {
t.Skip("grpc-proxy does not support WatchProgress yet")
}
testCases := []struct {
name string
watchers []string

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package naming
package naming_test
import (
"context"
@@ -21,6 +21,7 @@ import (
"testing"
etcd "go.etcd.io/etcd/clientv3"
namingv3 "go.etcd.io/etcd/clientv3/naming"
"go.etcd.io/etcd/integration"
"go.etcd.io/etcd/pkg/testutil"
@@ -33,7 +34,7 @@ func TestGRPCResolver(t *testing.T) {
clus := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 1})
defer clus.Terminate(t)
r := GRPCResolver{
r := namingv3.GRPCResolver{
Client: clus.RandClient(),
}
@@ -107,7 +108,7 @@ func TestGRPCResolverMulti(t *testing.T) {
t.Fatal(err)
}
r := GRPCResolver{c}
r := namingv3.GRPCResolver{c}
w, err := r.Resolve("foo")
if err != nil {

View File

@@ -77,6 +77,9 @@ type Op struct {
cmps []Cmp
thenOps []Op
elseOps []Op
isOptsWithFromKey bool
isOptsWithPrefix bool
}
// accessors / mutators
@@ -216,6 +219,10 @@ func (op Op) isWrite() bool {
return op.t != tRange
}
func NewOp() *Op {
return &Op{key: []byte("")}
}
// OpGet returns "get" operation based on given key and operation options.
func OpGet(key string, opts ...OpOption) Op {
// WithPrefix and WithFromKey are not supported together
@@ -387,6 +394,7 @@ func WithPrefix() OpOption {
return
}
op.end = getPrefix(op.key)
op.isOptsWithPrefix = true
}
}
@@ -406,6 +414,7 @@ func WithFromKey() OpOption {
op.key = []byte{0}
}
op.end = []byte("\x00")
op.isOptsWithFromKey = true
}
}
@@ -554,7 +563,21 @@ func toLeaseTimeToLiveRequest(id LeaseID, opts ...LeaseOption) *pb.LeaseTimeToLi
}
// isWithPrefix returns true if WithPrefix is being called in the op
func isWithPrefix(opts []OpOption) bool { return isOpFuncCalled("WithPrefix", opts) }
func isWithPrefix(opts []OpOption) bool {
ret := NewOp()
for _, opt := range opts {
opt(ret)
}
return ret.isOptsWithPrefix
}
// isWithFromKey returns true if WithFromKey is being called in the op
func isWithFromKey(opts []OpOption) bool { return isOpFuncCalled("WithFromKey", opts) }
func isWithFromKey(opts []OpOption) bool {
ret := NewOp()
for _, opt := range opts {
opt(ret)
}
return ret.isOptsWithFromKey
}
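The design choice behind this change: the old `isOpFuncCalled` helper matched option function names via reflection, which is fragile under wrapping and inlining, whereas recording an explicit flag when the option runs is robust. A distilled, self-contained sketch of the pattern (types here are illustrative, not the clientv3 ones):

```go
package main

import "fmt"

type op struct{ withPrefix bool }

type opOption func(*op)

// withPrefix records an explicit flag instead of relying on the
// option's function name being discoverable via reflection.
func withPrefix() opOption { return func(o *op) { o.withPrefix = true } }

func isWithPrefix(opts []opOption) bool {
	var o op
	for _, opt := range opts {
		opt(&o) // apply the options to a scratch op and read the flag
	}
	return o.withPrefix
}

func main() {
	fmt.Println(isWithPrefix([]opOption{withPrefix()})) // true
	fmt.Println(isWithPrefix(nil))                      // false
}
```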

View File

@@ -16,8 +16,7 @@ package ordering
import (
"errors"
"sync"
"time"
"sync/atomic"
"go.etcd.io/etcd/clientv3"
)
@@ -26,26 +25,18 @@ type OrderViolationFunc func(op clientv3.Op, resp clientv3.OpResponse, prevRev i
var ErrNoGreaterRev = errors.New("etcdclient: no cluster members have a revision higher than the previously received revision")
func NewOrderViolationSwitchEndpointClosure(c clientv3.Client) OrderViolationFunc {
var mu sync.Mutex
violationCount := 0
return func(op clientv3.Op, resp clientv3.OpResponse, prevRev int64) error {
if violationCount > len(c.Endpoints()) {
func NewOrderViolationSwitchEndpointClosure(c *clientv3.Client) OrderViolationFunc {
violationCount := int32(0)
return func(_ clientv3.Op, _ clientv3.OpResponse, _ int64) error {
// Each request is assigned by the round-robin load-balancer's picker to a different
// endpoint. If we cycled them 5 times (even with some level of concurrency),
// with high probability no endpoint points to a member with fresh data.
// TODO: Ideally we should track members (resp.opp.Header) that returned
// stale result and explicitly temporarily disable them in 'picker'.
if atomic.LoadInt32(&violationCount) > int32(5*len(c.Endpoints())) {
return ErrNoGreaterRev
}
mu.Lock()
defer mu.Unlock()
eps := c.Endpoints()
// force client to connect to given endpoint by limiting to a single endpoint
c.SetEndpoints(eps[violationCount%len(eps)])
// give enough time for operation
time.Sleep(1 * time.Second)
// set available endpoints back to all endpoints in to ensure
// the client has access to all the endpoints.
c.SetEndpoints(eps...)
// give enough time for operation
time.Sleep(1 * time.Second)
violationCount++
atomic.AddInt32(&violationCount, 1)
return nil
}
}

View File

@@ -64,19 +64,19 @@ func TestEndpointSwitchResolvesViolation(t *testing.T) {
// NewOrderViolationSwitchEndpointClosure will be able to
// access the full list of endpoints.
cli.SetEndpoints(eps...)
OrderingKv := NewKV(cli.KV, NewOrderViolationSwitchEndpointClosure(*cli))
orderingKv := NewKV(cli.KV, NewOrderViolationSwitchEndpointClosure(cli))
// set prevRev to the second member's revision of "foo" such that
// the revision is higher than the third member's revision of "foo"
_, err = OrderingKv.Get(ctx, "foo")
_, err = orderingKv.Get(ctx, "foo")
if err != nil {
t.Fatal(err)
}
t.Logf("Reconfigure client to speak only to the 'partitioned' member")
cli.SetEndpoints(clus.Members[2].GRPCAddr())
time.Sleep(1 * time.Second) // give enough time for operation
_, err = OrderingKv.Get(ctx, "foo", clientv3.WithSerializable())
if err != nil {
t.Fatalf("failed to resolve order violation %v", err)
_, err = orderingKv.Get(ctx, "foo", clientv3.WithSerializable())
if err != ErrNoGreaterRev {
t.Fatal("While speaking to partitioned leader, we should get ErrNoGreaterRev error")
}
}
@@ -123,7 +123,7 @@ func TestUnresolvableOrderViolation(t *testing.T) {
// access the full list of endpoints.
cli.SetEndpoints(eps...)
time.Sleep(1 * time.Second) // give enough time for operation
OrderingKv := NewKV(cli.KV, NewOrderViolationSwitchEndpointClosure(*cli))
OrderingKv := NewKV(cli.KV, NewOrderViolationSwitchEndpointClosure(cli))
// set prevRev to the first member's revision of "foo" such that
// the revision is higher than the fourth and fifth members' revision of "foo"
_, err = OrderingKv.Get(ctx, "foo")

View File

@@ -73,7 +73,13 @@ func (c *Client) unaryClientInterceptor(logger *zap.Logger, optFuncs ...retryOpt
// its the callCtx deadline or cancellation, in which case try again.
continue
}
if callOpts.retryAuth && rpctypes.Error(lastErr) == rpctypes.ErrInvalidAuthToken {
if c.shouldRefreshToken(lastErr, callOpts) {
// clear auth token before refreshing it.
// calling c.Auth.Authenticate with an invalid token will always fail the auth check on the server side
// if the server has not applied the patch from pr #12165 (https://github.com/etcd-io/etcd/pull/12165),
// and rpctypes.ErrInvalidAuthToken will recursively trigger c.getToken until the system runs out of resources.
c.authTokenBundle.UpdateAuthToken("")
gterr := c.getToken(ctx)
if gterr != nil {
logger.Warn(
@@ -105,6 +111,16 @@ func (c *Client) streamClientInterceptor(logger *zap.Logger, optFuncs ...retryOp
intOpts := reuseOrNewWithCallOptions(defaultOptions, optFuncs)
return func(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, streamer grpc.Streamer, opts ...grpc.CallOption) (grpc.ClientStream, error) {
ctx = withVersion(ctx)
// getToken automatically
// TODO(cfc4n): keep this code block; remove the getToken code in client.go after pr #12165 is merged.
if c.authTokenBundle != nil {
// equal to c.Username != "" && c.Password != ""
err := c.getToken(ctx)
if err != nil && rpctypes.Error(err) != rpctypes.ErrAuthNotEnabled {
logger.Error("clientv3/retry_interceptor: getToken failed", zap.Error(err))
return nil, err
}
}
grpcOpts, retryOpts := filterCallOptions(opts)
callOpts := reuseOrNewWithCallOptions(intOpts, retryOpts)
// short circuit for simplicity, and avoiding allocations.
@@ -132,6 +148,19 @@ func (c *Client) streamClientInterceptor(logger *zap.Logger, optFuncs ...retryOp
}
}
// shouldRefreshToken checks whether there's a need to refresh the token based on the error and callOptions,
// and returns a boolean value.
func (c *Client) shouldRefreshToken(err error, callOpts *options) bool {
if rpctypes.Error(err) == rpctypes.ErrUserEmpty {
// refresh the token when username and password are present but the server returns ErrUserEmpty,
// which is possible when the client's token has been cleared somehow
return c.authTokenBundle != nil // equal to c.Username != "" && c.Password != ""
}
return callOpts.retryAuth &&
(rpctypes.Error(err) == rpctypes.ErrInvalidAuthToken || rpctypes.Error(err) == rpctypes.ErrAuthOldRevision)
}
// type serverStreamingRetryingStream is the implementation of grpc.ClientStream that acts as a
// proxy to the underlying call. If any of the RecvMsg() calls fail, it will try to reestablish
// a new ClientStream according to the retry policy.
@@ -229,7 +258,10 @@ func (s *serverStreamingRetryingStream) receiveMsgAndIndicateRetry(m interface{}
// its the callCtx deadline or cancellation, in which case try again.
return true, err
}
if s.callOpts.retryAuth && rpctypes.Error(err) == rpctypes.ErrInvalidAuthToken {
if s.client.shouldRefreshToken(err, s.callOpts) {
// clear auth token to avoid failure when call getToken
s.client.authTokenBundle.UpdateAuthToken("")
gterr := s.client.getToken(s.ctx)
if gterr != nil {
s.client.lg.Warn("retry failed to fetch new auth token", zap.Error(gterr))

View File

@@ -0,0 +1,141 @@
// Copyright 2022 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Based on github.com/grpc-ecosystem/go-grpc-middleware/retry, but modified to support the more
// fine grained error checking required by write-at-most-once retry semantics of etcd.
package clientv3
import (
"go.etcd.io/etcd/clientv3/credentials"
"go.etcd.io/etcd/etcdserver/api/v3rpc/rpctypes"
grpccredentials "google.golang.org/grpc/credentials"
"testing"
)
type dummyAuthTokenBundle struct{}
func (d dummyAuthTokenBundle) TransportCredentials() grpccredentials.TransportCredentials {
return nil
}
func (d dummyAuthTokenBundle) PerRPCCredentials() grpccredentials.PerRPCCredentials {
return nil
}
func (d dummyAuthTokenBundle) NewWithMode(mode string) (grpccredentials.Bundle, error) {
return nil, nil
}
func (d dummyAuthTokenBundle) UpdateAuthToken(token string) {
}
func TestClientShouldRefreshToken(t *testing.T) {
type fields struct {
authTokenBundle credentials.Bundle
}
type args struct {
err error
callOpts *options
}
optsWithTrue := &options{
retryAuth: true,
}
optsWithFalse := &options{
retryAuth: false,
}
tests := []struct {
name string
fields fields
args args
want bool
}{
{
name: "ErrUserEmpty and non nil authTokenBundle",
fields: fields{
authTokenBundle: &dummyAuthTokenBundle{},
},
args: args{rpctypes.ErrGRPCUserEmpty, optsWithTrue},
want: true,
},
{
name: "ErrUserEmpty and nil authTokenBundle",
fields: fields{
authTokenBundle: nil,
},
args: args{rpctypes.ErrGRPCUserEmpty, optsWithTrue},
want: false,
},
{
name: "ErrGRPCInvalidAuthToken and retryAuth",
fields: fields{
authTokenBundle: nil,
},
args: args{rpctypes.ErrGRPCInvalidAuthToken, optsWithTrue},
want: true,
},
{
name: "ErrGRPCInvalidAuthToken and !retryAuth",
fields: fields{
authTokenBundle: nil,
},
args: args{rpctypes.ErrGRPCInvalidAuthToken, optsWithFalse},
want: false,
},
{
name: "ErrGRPCAuthOldRevision and retryAuth",
fields: fields{
authTokenBundle: nil,
},
args: args{rpctypes.ErrGRPCAuthOldRevision, optsWithTrue},
want: true,
},
{
name: "ErrGRPCAuthOldRevision and !retryAuth",
fields: fields{
authTokenBundle: nil,
},
args: args{rpctypes.ErrGRPCAuthOldRevision, optsWithFalse},
want: false,
},
{
name: "Other error and retryAuth",
fields: fields{
authTokenBundle: nil,
},
args: args{rpctypes.ErrGRPCAuthFailed, optsWithTrue},
want: false,
},
{
name: "Other error and !retryAuth",
fields: fields{
authTokenBundle: nil,
},
args: args{rpctypes.ErrGRPCAuthFailed, optsWithFalse},
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
c := &Client{
authTokenBundle: tt.fields.authTokenBundle,
}
if got := c.shouldRefreshToken(tt.args.err, tt.args.callOpts); got != tt.want {
t.Errorf("shouldRefreshToken() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -391,7 +391,7 @@ func (s *v3Manager) saveDB() error {
be := backend.NewDefaultBackend(dbpath)
// a lessor never times out leases
lessor := lease.NewLessor(s.lg, be, lease.LessorConfig{MinLeaseTTL: math.MaxInt64})
lessor := lease.NewLessor(s.lg, be, nil, lease.LessorConfig{MinLeaseTTL: math.MaxInt64})
mvs := mvcc.NewStore(s.lg, be, lessor, (*initIndex)(&commit), mvcc.StoreConfig{CompactionBatchLimit: math.MaxInt32})
txn := mvs.Write(traceutil.TODO())

View File

@@ -16,9 +16,6 @@ package clientv3
import (
"math/rand"
"reflect"
"runtime"
"strings"
"time"
)
@@ -32,18 +29,3 @@ func jitterUp(duration time.Duration, jitter float64) time.Duration {
multiplier := jitter * (rand.Float64()*2 - 1)
return time.Duration(float64(duration) * (1 + multiplier))
}
// Check if the provided function is being called in the op options.
func isOpFuncCalled(op string, opts []OpOption) bool {
for _, opt := range opts {
v := reflect.ValueOf(opt)
if v.Kind() == reflect.Func {
if opFunc := runtime.FuncForPC(v.Pointer()); opFunc != nil {
if strings.Contains(opFunc.Name(), op) {
return true
}
}
}
}
return false
}

View File

@@ -25,6 +25,7 @@ import (
pb "go.etcd.io/etcd/etcdserver/etcdserverpb"
mvccpb "go.etcd.io/etcd/mvcc/mvccpb"
"go.uber.org/zap"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
@@ -140,6 +141,7 @@ type watcher struct {
// streams holds all the active grpc streams keyed by ctx value.
streams map[string]*watchGrpcStream
lg *zap.Logger
}
// watchGrpcStream tracks all watch resources attached to a single grpc stream.
@@ -176,6 +178,8 @@ type watchGrpcStream struct {
resumec chan struct{}
// closeErr is the error that closed the watch stream
closeErr error
lg *zap.Logger
}
// watchStreamRequest is a union of the supported watch request operation types
@@ -242,6 +246,7 @@ func NewWatchFromWatchClient(wc pb.WatchClient, c *Client) Watcher {
}
if c != nil {
w.callOpts = c.callOpts
w.lg = c.lg
}
return w
}
@@ -273,6 +278,7 @@ func (w *watcher) newWatcherGrpcStream(inctx context.Context) *watchGrpcStream {
errc: make(chan error, 1),
closingc: make(chan *watcherStream),
resumec: make(chan struct{}),
lg: w.lg,
}
go wgs.run()
return wgs
@@ -544,10 +550,18 @@ func (w *watchGrpcStream) run() {
w.resuming = append(w.resuming, ws)
if len(w.resuming) == 1 {
// head of resume queue, can register a new watcher
wc.Send(ws.initReq.toPB())
if err := wc.Send(ws.initReq.toPB()); err != nil {
if w.lg != nil {
w.lg.Debug("error when sending request", zap.Error(err))
}
}
}
case *progressRequest:
wc.Send(wreq.toPB())
if err := wc.Send(wreq.toPB()); err != nil {
if w.lg != nil {
w.lg.Debug("error when sending request", zap.Error(err))
}
}
}
// new events from the watch client
@@ -571,7 +585,11 @@ func (w *watchGrpcStream) run() {
}
if ws := w.nextResume(); ws != nil {
wc.Send(ws.initReq.toPB())
if err := wc.Send(ws.initReq.toPB()); err != nil {
if w.lg != nil {
w.lg.Debug("error when sending request", zap.Error(err))
}
}
}
// reset for next iteration
@@ -616,7 +634,14 @@ func (w *watchGrpcStream) run() {
},
}
req := &pb.WatchRequest{RequestUnion: cr}
wc.Send(req)
if w.lg != nil {
w.lg.Debug("sending watch cancel request for failed dispatch", zap.Int64("watch-id", pbresp.WatchId))
}
if err := wc.Send(req); err != nil {
if w.lg != nil {
w.lg.Debug("failed to send watch cancel request", zap.Int64("watch-id", pbresp.WatchId), zap.Error(err))
}
}
}
// watch client failed on Recv; spawn another if possible
@@ -629,7 +654,11 @@ func (w *watchGrpcStream) run() {
return
}
if ws := w.nextResume(); ws != nil {
wc.Send(ws.initReq.toPB())
if err := wc.Send(ws.initReq.toPB()); err != nil {
if w.lg != nil {
w.lg.Debug("error when sending request", zap.Error(err))
}
}
}
cancelSet = make(map[int64]struct{})
@@ -637,6 +666,25 @@ func (w *watchGrpcStream) run() {
return
case ws := <-w.closingc:
if ws.id != -1 {
// client is closing an established watch; close it on the server proactively instead of waiting
// to close when the next message arrives
cancelSet[ws.id] = struct{}{}
cr := &pb.WatchRequest_CancelRequest{
CancelRequest: &pb.WatchCancelRequest{
WatchId: ws.id,
},
}
req := &pb.WatchRequest{RequestUnion: cr}
if w.lg != nil {
w.lg.Debug("sending watch cancel request for closed watcher", zap.Int64("watch-id", ws.id))
}
if err := wc.Send(req); err != nil {
if w.lg != nil {
w.lg.Debug("failed to send watch cancel request", zap.Int64("watch-id", ws.id), zap.Error(err))
}
}
}
w.closeSubstream(ws)
delete(closing, ws)
// no more watchers on this stream, shutdown

View File

@@ -18,6 +18,7 @@ import (
"crypto/tls"
"fmt"
"io/ioutil"
"math"
"net"
"net/http"
"net/url"
@@ -53,7 +54,9 @@ const (
DefaultMaxSnapshots = 5
DefaultMaxWALs = 5
DefaultMaxTxnOps = uint(128)
DefaultWarningApplyDuration = 100 * time.Millisecond
DefaultMaxRequestBytes = 1.5 * 1024 * 1024
DefaultMaxConcurrentStreams = math.MaxUint32
DefaultGRPCKeepAliveMinTime = 5 * time.Second
DefaultGRPCKeepAliveInterval = 2 * time.Hour
DefaultGRPCKeepAliveTimeout = 20 * time.Second
@@ -176,6 +179,10 @@ type Config struct {
MaxTxnOps uint `json:"max-txn-ops"`
MaxRequestBytes uint `json:"max-request-bytes"`
// MaxConcurrentStreams specifies the maximum number of concurrent
// streams that each client can open at a time.
MaxConcurrentStreams uint32 `json:"max-concurrent-streams"`
LPUrls, LCUrls []url.URL
APUrls, ACUrls []url.URL
ClientTLSInfo transport.TLSInfo
@@ -273,14 +280,26 @@ type Config struct {
AuthToken string `json:"auth-token"`
BcryptCost uint `json:"bcrypt-cost"`
// AuthTokenTTL specifies the TTL in seconds of the simple token
AuthTokenTTL uint `json:"auth-token-ttl"`
ExperimentalInitialCorruptCheck bool `json:"experimental-initial-corrupt-check"`
ExperimentalCorruptCheckTime time.Duration `json:"experimental-corrupt-check-time"`
ExperimentalEnableV2V3 string `json:"experimental-enable-v2v3"`
// ExperimentalBackendFreelistType specifies the type of freelist that boltdb backend uses (array and map are supported types).
ExperimentalBackendFreelistType string `json:"experimental-backend-bbolt-freelist-type"`
// ExperimentalEnableLeaseCheckpoint enables primary lessor to persist lease remainingTTL to prevent indefinite auto-renewal of long lived leases.
// ExperimentalEnableLeaseCheckpoint enables leader to send regular checkpoints to other members to prevent reset of remaining TTL on leader change.
ExperimentalEnableLeaseCheckpoint bool `json:"experimental-enable-lease-checkpoint"`
ExperimentalCompactionBatchLimit int `json:"experimental-compaction-batch-limit"`
// ExperimentalEnableLeaseCheckpointPersist enables persisting remainingTTL to prevent indefinite auto-renewal of long lived leases. Always enabled in v3.6. Should be used to ensure smooth upgrade from v3.5 clusters with this feature enabled.
// Requires experimental-enable-lease-checkpoint to be enabled.
// Deprecated in v3.6.
// TODO: Delete in v3.7
ExperimentalEnableLeaseCheckpointPersist bool `json:"experimental-enable-lease-checkpoint-persist"`
ExperimentalCompactionBatchLimit int `json:"experimental-compaction-batch-limit"`
ExperimentalWatchProgressNotifyInterval time.Duration `json:"experimental-watch-progress-notify-interval"`
// ExperimentalWarningApplyDuration is the time duration after which a warning is generated if applying a request
// takes more time than this value.
ExperimentalWarningApplyDuration time.Duration `json:"experimental-warning-apply-duration"`
// ForceNewCluster starts a new cluster even if previously started; unsafe.
ForceNewCluster bool `json:"force-new-cluster"`
@@ -335,6 +354,10 @@ type Config struct {
// Only valid if "logger" option is "capnslog".
// WARN: DO NOT USE THIS!
LogPkgLevels string `json:"log-package-levels"`
// UnsafeNoFsync disables all uses of fsync.
// Setting this is unsafe and will cause data loss.
UnsafeNoFsync bool `json:"unsafe-no-fsync"`
}
// configYAML holds the config suitable for yaml parsing
@@ -380,8 +403,10 @@ func NewConfig() *Config {
SnapshotCount: etcdserver.DefaultSnapshotCount,
SnapshotCatchUpEntries: etcdserver.DefaultSnapshotCatchUpEntries,
MaxTxnOps: DefaultMaxTxnOps,
MaxRequestBytes: DefaultMaxRequestBytes,
MaxTxnOps: DefaultMaxTxnOps,
MaxRequestBytes: DefaultMaxRequestBytes,
MaxConcurrentStreams: DefaultMaxConcurrentStreams,
ExperimentalWarningApplyDuration: DefaultWarningApplyDuration,
GRPCKeepAliveMinTime: DefaultGRPCKeepAliveMinTime,
GRPCKeepAliveInterval: DefaultGRPCKeepAliveInterval,
@@ -406,8 +431,9 @@ func NewConfig() *Config {
CORS: map[string]struct{}{"*": {}},
HostWhitelist: map[string]struct{}{"*": {}},
AuthToken: "simple",
BcryptCost: uint(bcrypt.DefaultCost),
AuthToken: "simple",
BcryptCost: uint(bcrypt.DefaultCost),
AuthTokenTTL: 300,
PreVote: false, // TODO: enable by default in v3.5
@@ -616,6 +642,14 @@ func (cfg *Config) Validate() error {
return fmt.Errorf("unknown auto-compaction-mode %q", cfg.AutoCompactionMode)
}
if !cfg.ExperimentalEnableLeaseCheckpointPersist && cfg.ExperimentalEnableLeaseCheckpoint {
cfg.logger.Warn("Detected that checkpointing is enabled without persistence. Consider enabling experimental-enable-lease-checkpoint-persist")
}
if cfg.ExperimentalEnableLeaseCheckpointPersist && !cfg.ExperimentalEnableLeaseCheckpoint {
return fmt.Errorf("setting experimental-enable-lease-checkpoint-persist requires experimental-enable-lease-checkpoint")
}
return nil
}
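The new Validate() rules encode the dependency between the two lease-checkpoint flags: persist without checkpoint is rejected, checkpoint without persist only warns. A hedged sketch of driving these fields through the embed API (error handling is illustrative):

package main

import (
	"log"

	"go.etcd.io/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.MaxConcurrentStreams = 256 // cap streams per client connection
	cfg.ExperimentalEnableLeaseCheckpoint = true
	// Persist requires the checkpoint flag above; setting it alone
	// would make Validate() return an error.
	cfg.ExperimentalEnableLeaseCheckpointPersist = true
	if err := cfg.Validate(); err != nil {
		log.Fatal(err)
	}
}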

View File

@@ -196,10 +196,14 @@ func (cfg *Config) setupLogging() error {
grpcLogOnce.Do(func() {
// debug true, enable info, warning, error
// debug false, only discard info
var gl grpclog.LoggerV2
gl, err = logutil.NewGRPCLoggerV2(copied)
if err == nil {
grpclog.SetLoggerV2(gl)
if cfg.LogLevel == "debug" {
var gl grpclog.LoggerV2
gl, err = logutil.NewGRPCLoggerV2(copied)
if err == nil {
grpclog.SetLoggerV2(gl)
}
} else {
grpclog.SetLoggerV2(grpclog.NewLoggerV2(ioutil.Discard, os.Stderr, os.Stderr))
}
})
return nil
@@ -245,7 +249,11 @@ func (cfg *Config) setupLogging() error {
c.loggerWriteSyncer = syncer
grpcLogOnce.Do(func() {
grpclog.SetLoggerV2(logutil.NewGRPCLoggerV2FromZapCore(cr, syncer))
if cfg.LogLevel == "debug" {
grpclog.SetLoggerV2(logutil.NewGRPCLoggerV2FromZapCore(cr, syncer))
} else {
grpclog.SetLoggerV2(grpclog.NewLoggerV2(ioutil.Discard, os.Stderr, os.Stderr))
}
})
return nil
}

View File

@@ -178,9 +178,11 @@ func TestAutoCompactionModeParse(t *testing.T) {
{"revision", "1", false, 1},
{"revision", "1h", false, time.Hour},
{"revision", "a", true, 0},
{"revision", "-1", true, 0},
// periodic
{"periodic", "1", false, time.Hour},
{"periodic", "a", true, 0},
{"revision", "-1", true, 0},
// err mode
{"errmode", "1", false, 0},
{"errmode", "1h", false, time.Hour},

View File

@@ -162,50 +162,56 @@ func StartEtcd(inCfg *Config) (e *Etcd, err error) {
backendFreelistType := parseBackendFreelistType(cfg.ExperimentalBackendFreelistType)
srvcfg := etcdserver.ServerConfig{
Name: cfg.Name,
ClientURLs: cfg.ACUrls,
PeerURLs: cfg.APUrls,
DataDir: cfg.Dir,
DedicatedWALDir: cfg.WalDir,
SnapshotCount: cfg.SnapshotCount,
SnapshotCatchUpEntries: cfg.SnapshotCatchUpEntries,
MaxSnapFiles: cfg.MaxSnapFiles,
MaxWALFiles: cfg.MaxWalFiles,
InitialPeerURLsMap: urlsmap,
InitialClusterToken: token,
DiscoveryURL: cfg.Durl,
DiscoveryProxy: cfg.Dproxy,
NewCluster: cfg.IsNewCluster(),
PeerTLSInfo: cfg.PeerTLSInfo,
TickMs: cfg.TickMs,
ElectionTicks: cfg.ElectionTicks(),
InitialElectionTickAdvance: cfg.InitialElectionTickAdvance,
AutoCompactionRetention: autoCompactionRetention,
AutoCompactionMode: cfg.AutoCompactionMode,
QuotaBackendBytes: cfg.QuotaBackendBytes,
BackendBatchLimit: cfg.BackendBatchLimit,
BackendFreelistType: backendFreelistType,
BackendBatchInterval: cfg.BackendBatchInterval,
MaxTxnOps: cfg.MaxTxnOps,
MaxRequestBytes: cfg.MaxRequestBytes,
StrictReconfigCheck: cfg.StrictReconfigCheck,
ClientCertAuthEnabled: cfg.ClientTLSInfo.ClientCertAuth,
AuthToken: cfg.AuthToken,
BcryptCost: cfg.BcryptCost,
CORS: cfg.CORS,
HostWhitelist: cfg.HostWhitelist,
InitialCorruptCheck: cfg.ExperimentalInitialCorruptCheck,
CorruptCheckTime: cfg.ExperimentalCorruptCheckTime,
PreVote: cfg.PreVote,
Logger: cfg.logger,
LoggerConfig: cfg.loggerConfig,
LoggerCore: cfg.loggerCore,
LoggerWriteSyncer: cfg.loggerWriteSyncer,
Debug: cfg.Debug,
ForceNewCluster: cfg.ForceNewCluster,
EnableGRPCGateway: cfg.EnableGRPCGateway,
EnableLeaseCheckpoint: cfg.ExperimentalEnableLeaseCheckpoint,
CompactionBatchLimit: cfg.ExperimentalCompactionBatchLimit,
Name: cfg.Name,
ClientURLs: cfg.ACUrls,
PeerURLs: cfg.APUrls,
DataDir: cfg.Dir,
DedicatedWALDir: cfg.WalDir,
SnapshotCount: cfg.SnapshotCount,
SnapshotCatchUpEntries: cfg.SnapshotCatchUpEntries,
MaxSnapFiles: cfg.MaxSnapFiles,
MaxWALFiles: cfg.MaxWalFiles,
InitialPeerURLsMap: urlsmap,
InitialClusterToken: token,
DiscoveryURL: cfg.Durl,
DiscoveryProxy: cfg.Dproxy,
NewCluster: cfg.IsNewCluster(),
PeerTLSInfo: cfg.PeerTLSInfo,
TickMs: cfg.TickMs,
ElectionTicks: cfg.ElectionTicks(),
InitialElectionTickAdvance: cfg.InitialElectionTickAdvance,
AutoCompactionRetention: autoCompactionRetention,
AutoCompactionMode: cfg.AutoCompactionMode,
QuotaBackendBytes: cfg.QuotaBackendBytes,
BackendBatchLimit: cfg.BackendBatchLimit,
BackendFreelistType: backendFreelistType,
BackendBatchInterval: cfg.BackendBatchInterval,
MaxTxnOps: cfg.MaxTxnOps,
MaxRequestBytes: cfg.MaxRequestBytes,
MaxConcurrentStreams: cfg.MaxConcurrentStreams,
StrictReconfigCheck: cfg.StrictReconfigCheck,
ClientCertAuthEnabled: cfg.ClientTLSInfo.ClientCertAuth,
AuthToken: cfg.AuthToken,
BcryptCost: cfg.BcryptCost,
TokenTTL: cfg.AuthTokenTTL,
CORS: cfg.CORS,
HostWhitelist: cfg.HostWhitelist,
InitialCorruptCheck: cfg.ExperimentalInitialCorruptCheck,
CorruptCheckTime: cfg.ExperimentalCorruptCheckTime,
PreVote: cfg.PreVote,
Logger: cfg.logger,
LoggerConfig: cfg.loggerConfig,
LoggerCore: cfg.loggerCore,
LoggerWriteSyncer: cfg.loggerWriteSyncer,
Debug: cfg.Debug,
ForceNewCluster: cfg.ForceNewCluster,
EnableGRPCGateway: cfg.EnableGRPCGateway,
UnsafeNoFsync: cfg.UnsafeNoFsync,
EnableLeaseCheckpoint: cfg.ExperimentalEnableLeaseCheckpoint,
LeaseCheckpointPersist: cfg.ExperimentalEnableLeaseCheckpointPersist,
CompactionBatchLimit: cfg.ExperimentalCompactionBatchLimit,
WatchProgressNotifyInterval: cfg.ExperimentalWatchProgressNotifyInterval,
WarningApplyDuration: cfg.ExperimentalWarningApplyDuration,
}
print(e.cfg.logger, *cfg, srvcfg, memberInitialized)
if e.Server, err = etcdserver.NewServer(srvcfg); err != nil {
@@ -327,7 +333,10 @@ func print(lg *zap.Logger, ec Config, sc etcdserver.ServerConfig, memberInitiali
zap.String("initial-cluster", sc.InitialPeerURLsMap.String()),
zap.String("initial-cluster-state", ec.ClusterState),
zap.String("initial-cluster-token", sc.InitialClusterToken),
zap.Int64("quota-size-bytes", quota),
zap.Int64("quota-backend-bytes", quota),
zap.Uint("max-request-bytes", sc.MaxRequestBytes),
zap.Uint32("max-concurrent-streams", sc.MaxConcurrentStreams),
zap.Bool("pre-vote", sc.PreVote),
zap.Bool("initial-corrupt-check", sc.InitialCorruptCheck),
zap.String("corrupt-check-time-interval", sc.CorruptCheckTime.String()),
@@ -811,7 +820,7 @@ func (e *Etcd) GetLogger() *zap.Logger {
func parseCompactionRetention(mode, retention string) (ret time.Duration, err error) {
h, err := strconv.Atoi(retention)
if err == nil {
if err == nil && h >= 0 {
switch mode {
case CompactorModeRevision:
ret = time.Duration(int64(h))

View File

@@ -17,8 +17,10 @@ package embed
import (
"context"
"fmt"
"golang.org/x/net/http2"
"io/ioutil"
defaultLog "log"
"math"
"net"
"net/http"
"strings"
@@ -131,6 +133,10 @@ func (sctx *serveCtx) serve(
Handler: createAccessController(sctx.lg, s, httpmux),
ErrorLog: logger, // do not log user error
}
if err = configureHttpServer(srvhttp, s.Cfg); err != nil {
sctx.lg.Error("Configure http server failed", zap.Error(err))
return err
}
httpl := m.Match(cmux.HTTP1())
go func() { errHandler(srvhttp.Serve(httpl)) }()
@@ -184,6 +190,10 @@ func (sctx *serveCtx) serve(
TLSConfig: tlscfg,
ErrorLog: logger, // do not log user error
}
if err = configureHttpServer(srv, s.Cfg); err != nil {
sctx.lg.Error("Configure https server failed", zap.Error(err))
return err
}
go func() { errHandler(srv.Serve(tlsl)) }()
sctx.serversC <- &servers{secure: true, grpc: gs, http: srv}
@@ -201,6 +211,13 @@ func (sctx *serveCtx) serve(
return m.Serve()
}
func configureHttpServer(srv *http.Server, cfg etcdserver.ServerConfig) error {
// TODO(ahrtr): should we support configuring other parameters in the future as well?
return http2.ConfigureServer(srv, &http2.Server{
MaxConcurrentStreams: cfg.MaxConcurrentStreams,
})
}
// grpcHandlerFunc returns an http.Handler that delegates to grpcServer on incoming gRPC
// connections or otherHandler otherwise. Given in gRPC docs.
func grpcHandlerFunc(grpcServer *grpc.Server, otherHandler http.Handler) http.Handler {
@@ -229,6 +246,10 @@ func (sctx *serveCtx) registerGateway(opts []grpc.DialOption) (*gw.ServeMux, err
addr = fmt.Sprintf("%s://%s", network, addr)
}
opts = append(opts, grpc.WithDefaultCallOptions([]grpc.CallOption{
grpc.MaxCallRecvMsgSize(math.MaxInt32),
}...))
conn, err := grpc.DialContext(ctx, addr, opts...)
if err != nil {
return nil, err
@@ -286,6 +307,7 @@ func (sctx *serveCtx) createMux(gwmux *gw.ServeMux, handler http.Handler) *http.
return outgoing
},
),
wsproxy.WithMaxRespBodyBufferSize(0x7fffffff),
),
)
}
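configureHttpServer funnels the configured stream limit into golang.org/x/net/http2, and the same call works against any plain net/http server. A sketch (address and certificate paths are illustrative):

package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443"}
	// Cap the number of concurrent HTTP/2 streams each client
	// connection may open, mirroring configureHttpServer above.
	if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 256}); err != nil {
		log.Fatal(err)
	}
	// HTTP/2 requires TLS here; the key pair below is a placeholder.
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}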

View File

@@ -311,6 +311,8 @@ func newCheckDatascaleCommand(cmd *cobra.Command, args []string) {
ExitWithError(ExitError, errEndpoints)
}
sec := secureCfgFromCmd(cmd)
ctx, cancel := context.WithCancel(context.Background())
resp, err := clients[0].Get(ctx, checkDatascalePrefix, v3.WithPrefix(), v3.WithLimit(1))
cancel()
@@ -329,7 +331,7 @@ func newCheckDatascaleCommand(cmd *cobra.Command, args []string) {
wg.Add(len(clients))
// get the process_resident_memory_bytes and process_virtual_memory_bytes before the put operations
bytesBefore := endpointMemoryMetrics(eps[0])
bytesBefore := endpointMemoryMetrics(eps[0], sec)
if bytesBefore == 0 {
fmt.Println("FAIL: Could not read process_resident_memory_bytes before the put operations.")
os.Exit(ExitError)
@@ -367,7 +369,7 @@ func newCheckDatascaleCommand(cmd *cobra.Command, args []string) {
s := <-sc
// get the process_resident_memory_bytes after the put operations
bytesAfter := endpointMemoryMetrics(eps[0])
bytesAfter := endpointMemoryMetrics(eps[0], sec)
if bytesAfter == 0 {
fmt.Println("FAIL: Could not read process_resident_memory_bytes after the put operations.")
os.Exit(ExitError)

View File

@@ -42,7 +42,8 @@ func transferLeadershipCommandFunc(cmd *cobra.Command, args []string) {
ExitWithError(ExitBadArgs, err)
}
c := mustClientFromCmd(cmd)
cfg := clientConfigFromCmd(cmd)
c := cfg.mustClient()
eps := c.Endpoints()
c.Close()
@@ -52,7 +53,6 @@ func transferLeadershipCommandFunc(cmd *cobra.Command, args []string) {
var leaderCli *clientv3.Client
var leaderID uint64
for _, ep := range eps {
cfg := clientConfigFromCmd(cmd)
cfg.endpoints = []string{ep}
cli := cfg.mustClient()
resp, serr := cli.Status(ctx, ep)

View File

@@ -16,6 +16,7 @@ package command
import (
"context"
"crypto/tls"
"encoding/hex"
"fmt"
"io/ioutil"
@@ -90,14 +91,26 @@ func isCommandTimeoutFlagSet(cmd *cobra.Command) bool {
return commandTimeoutFlag.Changed
}
// get the process_resident_memory_bytes from <server:2379>/metrics
func endpointMemoryMetrics(host string) float64 {
// get the process_resident_memory_bytes from <server>/metrics
func endpointMemoryMetrics(host string, scfg *secureCfg) float64 {
residentMemoryKey := "process_resident_memory_bytes"
var residentMemoryValue string
if !strings.HasPrefix(host, `http://`) {
if !strings.HasPrefix(host, "http://") && !strings.HasPrefix(host, "https://") {
host = "http://" + host
}
url := host + "/metrics"
if strings.HasPrefix(host, "https://") {
// load client certificate
cert, err := tls.LoadX509KeyPair(scfg.cert, scfg.key)
if err != nil {
fmt.Println(fmt.Sprintf("client certificate error: %v", err))
return 0.0
}
http.DefaultTransport.(*http.Transport).TLSClientConfig = &tls.Config{
Certificates: []tls.Certificate{cert},
InsecureSkipVerify: scfg.insecureSkipVerify,
}
}
resp, err := http.Get(url)
if err != nil {
fmt.Println(fmt.Sprintf("fetch error: %v", err))

View File

@@ -60,7 +60,7 @@ func init() {
// TODO: secure by default when etcd enables secure gRPC by default.
rootCmd.PersistentFlags().BoolVar(&globalFlags.Insecure, "insecure-transport", true, "disable transport security for client connections")
rootCmd.PersistentFlags().BoolVar(&globalFlags.InsecureDiscovery, "insecure-discovery", true, "accept insecure SRV records describing cluster endpoints")
rootCmd.PersistentFlags().BoolVar(&globalFlags.InsecureSkipVerify, "insecure-skip-tls-verify", false, "skip server certificate verification")
rootCmd.PersistentFlags().BoolVar(&globalFlags.InsecureSkipVerify, "insecure-skip-tls-verify", false, "skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)")
rootCmd.PersistentFlags().StringVar(&globalFlags.TLS.CertFile, "cert", "", "identify secure client using this TLS certificate file")
rootCmd.PersistentFlags().StringVar(&globalFlags.TLS.KeyFile, "key", "", "identify secure client using this TLS key file")
rootCmd.PersistentFlags().StringVar(&globalFlags.TLS.TrustedCAFile, "cacert", "", "verify certificates of TLS-enabled secure servers using this CA bundle")

View File

@@ -163,6 +163,8 @@ func newConfig() *config {
fs.DurationVar(&cfg.ec.GRPCKeepAliveInterval, "grpc-keepalive-interval", cfg.ec.GRPCKeepAliveInterval, "Frequency duration of server-to-client ping to check if a connection is alive (0 to disable).")
fs.DurationVar(&cfg.ec.GRPCKeepAliveTimeout, "grpc-keepalive-timeout", cfg.ec.GRPCKeepAliveTimeout, "Additional duration of wait before closing a non-responsive connection (0 to disable).")
fs.Var(flags.NewUint32Value(cfg.ec.MaxConcurrentStreams), "max-concurrent-streams", "Maximum concurrent streams that each client can open at a time.")
// clustering
fs.Var(
flags.NewUniqueURLsWithExceptions(embed.DefaultInitialAdvertisePeerURLs, ""),
@@ -245,6 +247,7 @@ func newConfig() *config {
// auth
fs.StringVar(&cfg.ec.AuthToken, "auth-token", cfg.ec.AuthToken, "Specify auth token specific options.")
fs.UintVar(&cfg.ec.BcryptCost, "bcrypt-cost", cfg.ec.BcryptCost, "Specify bcrypt algorithm cost factor for auth password hashing.")
fs.UintVar(&cfg.ec.AuthTokenTTL, "auth-token-ttl", cfg.ec.AuthTokenTTL, "The lifetime in seconds of the auth token.")
// gateway
fs.BoolVar(&cfg.ec.EnableGRPCGateway, "enable-grpc-gateway", true, "Enable GRPC gateway.")
@@ -254,10 +257,15 @@ func newConfig() *config {
fs.DurationVar(&cfg.ec.ExperimentalCorruptCheckTime, "experimental-corrupt-check-time", cfg.ec.ExperimentalCorruptCheckTime, "Duration of time between cluster corruption check passes.")
fs.StringVar(&cfg.ec.ExperimentalEnableV2V3, "experimental-enable-v2v3", cfg.ec.ExperimentalEnableV2V3, "v3 prefix for serving emulated v2 state.")
fs.StringVar(&cfg.ec.ExperimentalBackendFreelistType, "experimental-backend-bbolt-freelist-type", cfg.ec.ExperimentalBackendFreelistType, "ExperimentalBackendFreelistType specifies the type of freelist that boltdb backend uses(array and map are supported types)")
fs.BoolVar(&cfg.ec.ExperimentalEnableLeaseCheckpoint, "experimental-enable-lease-checkpoint", false, "Enable to persist lease remaining TTL to prevent indefinite auto-renewal of long lived leases.")
fs.BoolVar(&cfg.ec.ExperimentalEnableLeaseCheckpoint, "experimental-enable-lease-checkpoint", false, "Enable leader to send regular checkpoints to other members to prevent reset of remaining TTL on leader change.")
// TODO: delete in v3.7
fs.BoolVar(&cfg.ec.ExperimentalEnableLeaseCheckpointPersist, "experimental-enable-lease-checkpoint-persist", false, "Enable persisting remainingTTL to prevent indefinite auto-renewal of long lived leases. Always enabled in v3.6. Should be used to ensure smooth upgrade from v3.5 clusters with this feature enabled. Requires experimental-enable-lease-checkpoint to be enabled.")
fs.IntVar(&cfg.ec.ExperimentalCompactionBatchLimit, "experimental-compaction-batch-limit", cfg.ec.ExperimentalCompactionBatchLimit, "Sets the maximum revisions deleted in each compaction batch.")
fs.DurationVar(&cfg.ec.ExperimentalWatchProgressNotifyInterval, "experimental-watch-progress-notify-interval", cfg.ec.ExperimentalWatchProgressNotifyInterval, "Duration of periodic watch progress notifications.")
fs.DurationVar(&cfg.ec.ExperimentalWarningApplyDuration, "experimental-warning-apply-duration", cfg.ec.ExperimentalWarningApplyDuration, "Time duration after which a warning is generated if request takes more time.")
// unsafe
fs.BoolVar(&cfg.ec.UnsafeNoFsync, "unsafe-no-fsync", false, "Disables fsync, unsafe, will cause data loss.")
fs.BoolVar(&cfg.ec.ForceNewCluster, "force-new-cluster", false, "Force to create a new one member cluster.")
// ignored
@@ -332,6 +340,8 @@ func (cfg *config) configFromCmdLine() error {
cfg.ec.CipherSuites = flags.StringsFromFlag(cfg.cf.flagSet, "cipher-suites")
cfg.ec.MaxConcurrentStreams = flags.Uint32FromFlag(cfg.cf.flagSet, "max-concurrent-streams")
// TODO: remove this in v3.5
cfg.ec.DeprecatedLogOutput = flags.UniqueStringsFromFlag(cfg.cf.flagSet, "log-output")
cfg.ec.LogOutputs = flags.UniqueStringsFromFlag(cfg.cf.flagSet, "log-outputs")

View File

@@ -358,7 +358,7 @@ func startProxy(cfg *config) error {
}
cfg.ec.Dir = filepath.Join(cfg.ec.Dir, "proxy")
err = os.MkdirAll(cfg.ec.Dir, fileutil.PrivateDirMode)
err = fileutil.TouchDirAll(cfg.ec.Dir)
if err != nil {
return err
}

View File

@@ -71,7 +71,7 @@ func newGatewayStartCommand() *cobra.Command {
cmd.Flags().StringVar(&gatewayDNSCluster, "discovery-srv", "", "DNS domain used to bootstrap initial cluster")
cmd.Flags().StringVar(&gatewayDNSClusterServiceName, "discovery-srv-name", "", "service name to query when using DNS discovery")
cmd.Flags().BoolVar(&gatewayInsecureDiscovery, "insecure-discovery", false, "accept insecure SRV records")
cmd.Flags().StringVar(&gatewayCA, "trusted-ca-file", "", "path to the client server TLS CA file.")
cmd.Flags().StringVar(&gatewayCA, "trusted-ca-file", "", "path to the client server TLS CA file for verifying the discovered endpoints when discovery-srv is provided.")
cmd.Flags().StringSliceVar(&gatewayEndpoints, "endpoints", []string{"127.0.0.1:2379"}, "comma separated etcd cluster endpoints")
@@ -119,6 +119,41 @@ func startGateway(cmd *cobra.Command, args []string) {
}
}
lhost, lport, err := net.SplitHostPort(gatewayListenAddr)
if err != nil {
fmt.Println("failed to validate listen address:", gatewayListenAddr)
os.Exit(1)
}
laddrs, err := net.LookupHost(lhost)
if err != nil {
fmt.Println("failed to resolve listen host:", lhost)
os.Exit(1)
}
laddrsMap := make(map[string]bool)
for _, addr := range laddrs {
laddrsMap[addr] = true
}
for _, srv := range srvs.SRVs {
var eaddrs []string
eaddrs, err = net.LookupHost(srv.Target)
if err != nil {
fmt.Println("failed to resolve endpoint host:", srv.Target)
os.Exit(1)
}
if fmt.Sprintf("%d", srv.Port) != lport {
continue
}
for _, ea := range eaddrs {
if laddrsMap[ea] {
fmt.Printf("SRV or endpoint (%s:%d->%s:%d) should not resolve to the gateway listen addr (%s)\n", srv.Target, srv.Port, ea, srv.Port, gatewayListenAddr)
os.Exit(1)
}
}
}
if len(srvs.Endpoints) == 0 {
fmt.Println("no endpoints found")
os.Exit(1)

View File

@@ -45,6 +45,7 @@ import (
"github.com/soheilhy/cmux"
"github.com/spf13/cobra"
"go.uber.org/zap"
"golang.org/x/net/http2"
"google.golang.org/grpc"
"google.golang.org/grpc/grpclog"
)
@@ -86,6 +87,8 @@ var (
grpcProxyEnableOrdering bool
grpcProxyDebug bool
maxConcurrentStreams uint32
)
const defaultGRPCMaxCallSendMsgSize = 1.5 * 1024 * 1024
@@ -131,7 +134,7 @@ func newGRPCProxyStartCommand() *cobra.Command {
cmd.Flags().StringVar(&grpcProxyCert, "cert", "", "identify secure connections with etcd servers using this TLS certificate file")
cmd.Flags().StringVar(&grpcProxyKey, "key", "", "identify secure connections with etcd servers using this TLS key file")
cmd.Flags().StringVar(&grpcProxyCA, "cacert", "", "verify certificates of TLS-enabled secure etcd servers using this CA bundle")
cmd.Flags().BoolVar(&grpcProxyInsecureSkipTLSVerify, "insecure-skip-tls-verify", false, "skip authentication of etcd server TLS certificates")
cmd.Flags().BoolVar(&grpcProxyInsecureSkipTLSVerify, "insecure-skip-tls-verify", false, "skip authentication of etcd server TLS certificates (CAUTION: this option should be enabled only for testing purposes)")
// client TLS for connecting to proxy
cmd.Flags().StringVar(&grpcProxyListenCert, "cert-file", "", "identify secure connections to the proxy using this TLS certificate file")
@@ -146,6 +149,8 @@ func newGRPCProxyStartCommand() *cobra.Command {
cmd.Flags().BoolVar(&grpcProxyDebug, "debug", false, "Enable debug-level logging for grpc-proxy.")
cmd.Flags().Uint32Var(&maxConcurrentStreams, "max-concurrent-streams", math.MaxUint32, "Maximum concurrent streams that each client can open at a time.")
return &cmd
}
@@ -195,6 +200,13 @@ func startGRPCProxy(cmd *cobra.Command, args []string) {
httpClient := mustNewHTTPClient(lg)
srvhttp, httpl := mustHTTPListener(lg, m, tlsinfo, client)
if err := http2.ConfigureServer(srvhttp, &http2.Server{
MaxConcurrentStreams: maxConcurrentStreams,
}); err != nil {
lg.Fatal("Failed to configure the http server", zap.Error(err))
}
errc := make(chan error)
go func() { errc <- newGRPCProxyServer(lg, client).Serve(grpcl) }()
go func() { errc <- srvhttp.Serve(httpl) }()
@@ -286,6 +298,9 @@ func newClientCfg(lg *zap.Logger, eps []string) (*clientv3.Config, error) {
return nil, err
}
clientTLS.InsecureSkipVerify = grpcProxyInsecureSkipTLSVerify
if clientTLS.InsecureSkipVerify {
lg.Warn("--insecure-skip-tls-verify was given, this grpc proxy process skips authentication of etcd server TLS certificates. This option should be enabled only for testing purposes.")
}
cfg.TLS = clientTLS
lg.Info("gRPC proxy client TLS", zap.String("tls-info", fmt.Sprintf("%+v", tls)))
}
@@ -323,7 +338,7 @@ func mustListenCMux(lg *zap.Logger, tlsinfo *transport.TLSInfo) cmux.CMux {
func newGRPCProxyServer(lg *zap.Logger, client *clientv3.Client) *grpc.Server {
if grpcProxyEnableOrdering {
vf := ordering.NewOrderViolationSwitchEndpointClosure(*client)
vf := ordering.NewOrderViolationSwitchEndpointClosure(client)
client.KV = ordering.NewKV(client.KV, vf)
lg.Info("waiting for linearized read from cluster to recover ordering")
for {
@@ -347,12 +362,12 @@ func newGRPCProxyServer(lg *zap.Logger, client *clientv3.Client) *grpc.Server {
}
kvp, _ := grpcproxy.NewKvProxy(client)
watchp, _ := grpcproxy.NewWatchProxy(client)
watchp, _ := grpcproxy.NewWatchProxy(client.Ctx(), client)
if grpcProxyResolverPrefix != "" {
grpcproxy.Register(client, grpcProxyResolverPrefix, grpcProxyAdvertiseClientURL, grpcProxyResolverTTL)
}
clusterp, _ := grpcproxy.NewClusterProxy(client, grpcProxyAdvertiseClientURL, grpcProxyResolverPrefix)
leasep, _ := grpcproxy.NewLeaseProxy(client)
leasep, _ := grpcproxy.NewLeaseProxy(client.Ctx(), client)
mainp := grpcproxy.NewMaintenanceProxy(client)
authp := grpcproxy.NewAuthProxy(client)
electionp := grpcproxy.NewElectionProxy(client)

View File

@@ -77,6 +77,8 @@ Member:
Maximum number of operations permitted in a transaction.
--max-request-bytes '1572864'
Maximum client request size in bytes the server will accept.
--max-concurrent-streams 'math.MaxUint32'
Maximum concurrent streams that each client can open at a time.
--grpc-keepalive-min-time '5s'
Minimum duration interval that a client should wait before pinging server.
--grpc-keepalive-interval '2h'
@@ -162,6 +164,8 @@ Auth:
Specify a v3 authentication token type and its options ('simple' or 'jwt').
--bcrypt-cost ` + fmt.Sprintf("%d", bcrypt.DefaultCost) + `
Specify the cost / strength of the bcrypt algorithm for hashing auth passwords. Valid values are between ` + fmt.Sprintf("%d", bcrypt.MinCost) + ` and ` + fmt.Sprintf("%d", bcrypt.MaxCost) + `.
--auth-token-ttl 300
The lifetime (in seconds) of the auth token.
Profiling and Monitoring:
--enable-pprof 'false'
@@ -208,6 +212,10 @@ Experimental feature:
ExperimentalCompactionBatchLimit sets the maximum revisions deleted in each compaction batch.
--experimental-peer-skip-client-san-verification 'false'
Skip verification of SAN field in client certificate for peer connections.
--experimental-watch-progress-notify-interval '10m'
Duration of periodic watch progress notifications.
--experimental-warning-apply-duration '100ms'
A warning is generated if applying a request takes more than this duration.
Unsafe feature:
--force-new-cluster 'false'

View File

@@ -36,7 +36,7 @@ const (
// HandleMetricsHealth registers metrics and health handlers.
func HandleMetricsHealth(mux *http.ServeMux, srv etcdserver.ServerV2) {
mux.Handle(PathMetrics, promhttp.Handler())
mux.Handle(PathHealth, NewHealthHandler(func() Health { return checkHealth(srv) }))
mux.Handle(PathHealth, NewHealthHandler(func(excludedAlarms AlarmSet) Health { return checkHealth(srv, excludedAlarms) }))
}
// HandlePrometheus registers prometheus handler on '/metrics'.
@@ -45,7 +45,7 @@ func HandlePrometheus(mux *http.ServeMux) {
}
// NewHealthHandler handles '/health' requests.
func NewHealthHandler(hfunc func() Health) http.HandlerFunc {
func NewHealthHandler(hfunc func(excludedAlarms AlarmSet) Health) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
w.Header().Set("Allow", http.MethodGet)
@@ -53,7 +53,8 @@ func NewHealthHandler(hfunc func() Health) http.HandlerFunc {
plog.Warningf("/health error (status code %d)", http.StatusMethodNotAllowed)
return
}
h := hfunc()
excludedAlarms := getExcludedAlarms(r)
h := hfunc(excludedAlarms)
d, _ := json.Marshal(h)
if h.Health != "true" {
http.Error(w, string(d), http.StatusServiceUnavailable)
@@ -90,16 +91,38 @@ type Health struct {
Health string `json:"health"`
}
type AlarmSet map[string]struct{}
func getExcludedAlarms(r *http.Request) (alarms AlarmSet) {
alarms = make(map[string]struct{}, 2)
alms, found := r.URL.Query()["exclude"]
if found {
for _, alm := range alms {
if len(alm) == 0 {
continue
}
alarms[alm] = struct{}{}
}
}
return alarms
}
// TODO: server NOSPACE, etcdserver.ErrNoLeader in health API
func checkHealth(srv etcdserver.ServerV2) Health {
func checkHealth(srv etcdserver.ServerV2, excludedAlarms AlarmSet) Health {
h := Health{Health: "true"}
as := srv.Alarms()
if len(as) > 0 {
h.Health = "false"
for _, v := range as {
plog.Warningf("/health error due to an alarm %s", v.String())
alarmName := v.Alarm.String()
if _, found := excludedAlarms[alarmName]; found {
plog.Debugf("/health excluded alarm %s", v.String())
continue
}
h.Health = "false"
plog.Warningf("/health error due to %s", v.String())
return h
}
}
@@ -122,7 +145,7 @@ func checkHealth(srv etcdserver.ServerV2) Health {
if h.Health == "true" {
healthSuccess.Inc()
plog.Infof("/health OK (status code %d)", http.StatusOK)
plog.Debugf("/health OK (status code %d)", http.StatusOK)
} else {
healthFailed.Inc()
}
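With the exclude parameter in place, a liveness probe can ignore a chosen alarm type, as the tests in the next file exercise end to end. A minimal client-side check (the endpoint address is illustrative):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Ask /health to report healthy even while a NOSPACE alarm is raised.
	resp, err := http.Get("http://127.0.0.1:2379/health?exclude=NOSPACE")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}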

View File

@@ -0,0 +1,157 @@
// Copyright 2021 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package etcdhttp
import (
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
"testing"
"go.etcd.io/etcd/etcdserver"
stats "go.etcd.io/etcd/etcdserver/api/v2stats"
pb "go.etcd.io/etcd/etcdserver/etcdserverpb"
"go.etcd.io/etcd/pkg/testutil"
"go.etcd.io/etcd/pkg/types"
"go.etcd.io/etcd/raft"
)
type fakeStats struct{}
func (s *fakeStats) SelfStats() []byte { return nil }
func (s *fakeStats) LeaderStats() []byte { return nil }
func (s *fakeStats) StoreStats() []byte { return nil }
type fakeServerV2 struct {
fakeServer
stats.Stats
health string
}
func (s *fakeServerV2) Leader() types.ID {
if s.health == "true" {
return 1
}
return types.ID(raft.None)
}
func (s *fakeServerV2) Do(ctx context.Context, r pb.Request) (etcdserver.Response, error) {
if s.health == "true" {
return etcdserver.Response{}, nil
}
return etcdserver.Response{}, fmt.Errorf("fail health check")
}
func (s *fakeServerV2) ClientCertAuthEnabled() bool { return false }
func TestHealthHandler(t *testing.T) {
// define the input and expected output
// input: alarms, and healthCheckURL
tests := []struct {
alarms []*pb.AlarmMember
healthCheckURL string
statusCode int
health string
}{
{
[]*pb.AlarmMember{},
"/health",
http.StatusOK,
"true",
},
{
[]*pb.AlarmMember{{MemberID: uint64(0), Alarm: pb.AlarmType_NOSPACE}},
"/health",
http.StatusServiceUnavailable,
"false",
},
{
[]*pb.AlarmMember{{MemberID: uint64(0), Alarm: pb.AlarmType_NOSPACE}},
"/health?exclude=NOSPACE",
http.StatusOK,
"true",
},
{
[]*pb.AlarmMember{},
"/health?exclude=NOSPACE",
http.StatusOK,
"true",
},
{
[]*pb.AlarmMember{{MemberID: uint64(1), Alarm: pb.AlarmType_NOSPACE}, {MemberID: uint64(2), Alarm: pb.AlarmType_NOSPACE}, {MemberID: uint64(3), Alarm: pb.AlarmType_NOSPACE}},
"/health?exclude=NOSPACE",
http.StatusOK,
"true",
},
{
[]*pb.AlarmMember{{MemberID: uint64(0), Alarm: pb.AlarmType_NOSPACE}, {MemberID: uint64(1), Alarm: pb.AlarmType_CORRUPT}},
"/health?exclude=NOSPACE",
http.StatusServiceUnavailable,
"false",
},
{
[]*pb.AlarmMember{{MemberID: uint64(0), Alarm: pb.AlarmType_NOSPACE}, {MemberID: uint64(1), Alarm: pb.AlarmType_CORRUPT}},
"/health?exclude=NOSPACE&exclude=CORRUPT",
http.StatusOK,
"true",
},
}
for i, tt := range tests {
func() {
mux := http.NewServeMux()
HandleMetricsHealth(mux, &fakeServerV2{
fakeServer: fakeServer{alarms: tt.alarms},
Stats: &fakeStats{},
health: tt.health,
})
ts := httptest.NewServer(mux)
defer ts.Close()
res, err := ts.Client().Do(&http.Request{Method: http.MethodGet, URL: testutil.MustNewURL(t, ts.URL+tt.healthCheckURL)})
if err != nil {
t.Errorf("fail serve http request %s %v in test case #%d", tt.healthCheckURL, err, i+1)
}
if res == nil {
t.Errorf("got nil http response with http request %s in test case #%d", tt.healthCheckURL, i+1)
return
}
if res.StatusCode != tt.statusCode {
t.Errorf("want statusCode %d but got %d in test case #%d", tt.statusCode, res.StatusCode, i+1)
}
health, err := parseHealthOutput(res.Body)
if err != nil {
t.Errorf("fail parse health check output %v", err)
}
if health.Health != tt.health {
t.Errorf("want health %s but got %s", tt.health, health.Health)
}
}()
}
}
func parseHealthOutput(body io.Reader) (Health, error) {
obj := Health{}
d, derr := ioutil.ReadAll(body)
if derr != nil {
return obj, derr
}
if err := json.Unmarshal(d, &obj); err != nil {
return obj, err
}
return obj, nil
}

View File

@@ -58,6 +58,7 @@ func (c *fakeCluster) Version() *semver.Version { return nil }
type fakeServer struct {
cluster api.Cluster
alarms []*pb.AlarmMember
}
func (s *fakeServer) AddMember(ctx context.Context, memb membership.Member) ([]*membership.Member, error) {
@@ -74,7 +75,7 @@ func (s *fakeServer) PromoteMember(ctx context.Context, id uint64) ([]*membershi
}
func (s *fakeServer) ClusterVersion() *semver.Version { return nil }
func (s *fakeServer) Cluster() api.Cluster { return s.cluster }
func (s *fakeServer) Alarms() []*pb.AlarmMember { return nil }
func (s *fakeServer) Alarms() []*pb.AlarmMember { return s.alarms }
var fakeRaftHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("test data"))

View File

@@ -763,16 +763,21 @@ func ValidateClusterAndAssignIDs(lg *zap.Logger, local *RaftCluster, existing *R
if len(ems) != len(lms) {
return fmt.Errorf("member count is unequal")
}
sort.Sort(MembersByPeerURLs(ems))
sort.Sort(MembersByPeerURLs(lms))
ctx, cancel := context.WithTimeout(context.TODO(), 30*time.Second)
defer cancel()
for i := range ems {
if ok, err := netutil.URLStringsEqual(ctx, lg, ems[i].PeerURLs, lms[i].PeerURLs); !ok {
return fmt.Errorf("unmatched member while checking PeerURLs (%v)", err)
var err error
ok := false
for j := range lms {
if ok, err = netutil.URLStringsEqual(ctx, lg, ems[i].PeerURLs, lms[j].PeerURLs); ok {
lms[j].ID = ems[i].ID
break
}
}
if !ok {
return fmt.Errorf("PeerURLs: no match found for existing member (%v, %v), last resolver error (%v)", ems[i].ID, ems[i].PeerURLs, err)
}
lms[i].ID = ems[i].ID
}
local.members = make(map[types.ID]*Member)
for _, m := range lms {

View File

@@ -218,7 +218,7 @@ func (d *discovery) createSelf(contents string) error {
return err
}
func (d *discovery) checkCluster() ([]*client.Node, int, uint64, error) {
func (d *discovery) checkCluster() ([]*client.Node, uint64, uint64, error) {
configKey := path.Join("/", d.cluster, "_config")
ctx, cancel := context.WithTimeout(context.Background(), client.DefaultRequestTimeout)
// find cluster size
@@ -247,7 +247,7 @@ func (d *discovery) checkCluster() ([]*client.Node, int, uint64, error) {
}
return nil, 0, 0, err
}
size, err := strconv.Atoi(resp.Node.Value)
size, err := strconv.ParseUint(resp.Node.Value, 10, 0)
if err != nil {
return nil, 0, 0, ErrBadSizeKey
}
@@ -288,7 +288,7 @@ func (d *discovery) checkCluster() ([]*client.Node, int, uint64, error) {
if path.Base(nodes[i].Key) == path.Base(d.selfKey()) {
break
}
if i >= size-1 {
if uint64(i) >= size-1 {
return nodes[:size], size, resp.Index, ErrFullCluster
}
}
@@ -316,7 +316,7 @@ func (d *discovery) logAndBackoffForRetry(step string) {
d.clock.Sleep(retryTimeInSecond)
}
func (d *discovery) checkClusterRetry() ([]*client.Node, int, uint64, error) {
func (d *discovery) checkClusterRetry() ([]*client.Node, uint64, uint64, error) {
if d.retries < nRetries {
d.logAndBackoffForRetry("cluster status check")
return d.checkCluster()
@@ -336,8 +336,8 @@ func (d *discovery) waitNodesRetry() ([]*client.Node, error) {
return nil, ErrTooManyRetries
}
func (d *discovery) waitNodes(nodes []*client.Node, size int, index uint64) ([]*client.Node, error) {
if len(nodes) > size {
func (d *discovery) waitNodes(nodes []*client.Node, size uint64, index uint64) ([]*client.Node, error) {
if uint64(len(nodes)) > size {
nodes = nodes[:size]
}
// watch from the next index
@@ -369,16 +369,16 @@ func (d *discovery) waitNodes(nodes []*client.Node, size int, index uint64) ([]*
}
// wait for others
for len(all) < size {
for uint64(len(all)) < size {
if d.lg != nil {
d.lg.Info(
"found peers from discovery server; waiting for more",
zap.String("discovery-url", d.url.String()),
zap.Int("found-peers", len(all)),
zap.Int("needed-peers", size-len(all)),
zap.Int("needed-peers", int(size-uint64(len(all)))),
)
} else {
plog.Noticef("found %d peer(s), waiting for %d more", len(all), size-len(all))
plog.Noticef("found %d peer(s), waiting for %d more", len(all), size-uint64(len(all)))
}
resp, err := w.Next(context.Background())
if err != nil {
@@ -415,7 +415,7 @@ func (d *discovery) selfKey() string {
return path.Join("/", d.cluster, d.id.String())
}
func nodesToCluster(ns []*client.Node, size int) (string, error) {
func nodesToCluster(ns []*client.Node, size uint64) (string, error) {
s := make([]string, len(ns))
for i, n := range ns {
s[i] = n.Value
@@ -425,7 +425,7 @@ func nodesToCluster(ns []*client.Node, size int) (string, error) {
if err != nil {
return us, ErrInvalidURL
}
if m.Len() != size {
if uint64(m.Len()) != size {
return us, ErrDuplicateName
}
return us, nil
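The int-to-uint64 switch also changes how a malformed size is rejected: strconv.ParseUint fails outright on a negative value, where Atoi parsed it successfully and let the bad size flow further. A quick illustration:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	n, err := strconv.Atoi("-1")
	fmt.Println(n, err) // -1 <nil>: the old path accepted a negative size
	u, err := strconv.ParseUint("-1", 10, 0)
	fmt.Println(u, err) // 0 and a syntax error: checkCluster can return ErrBadSizeKey
}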

View File

@@ -217,7 +217,7 @@ func TestCheckCluster(t *testing.T) {
if reflect.DeepEqual(ns, tt.nodes) {
t.Errorf("#%d: nodes = %v, want %v", i, ns, tt.nodes)
}
if size != tt.wsize {
if size != uint64(tt.wsize) {
t.Errorf("#%d: size = %v, want %d", i, size, tt.wsize)
}
if index != tt.index {
@@ -301,7 +301,7 @@ func TestWaitNodes(t *testing.T) {
fc.Advance(time.Second * (0x1 << i))
}
}()
g, err := d.waitNodes(tt.nodes, 3, 0) // we do not care about index in this test
g, err := d.waitNodes(tt.nodes, uint64(3), 0) // we do not care about index in this test
if err != nil {
t.Errorf("#%d: err = %v, want %v", i, err, nil)
}
@@ -348,7 +348,7 @@ func TestCreateSelf(t *testing.T) {
func TestNodesToCluster(t *testing.T) {
tests := []struct {
nodes []*client.Node
size int
size uint64
wcluster string
werr error
}{

View File

@@ -104,5 +104,5 @@ func TestNodeExternClone(t *testing.T) {
func sameSlice(a, b []*NodeExtern) bool {
ah := (*reflect.SliceHeader)(unsafe.Pointer(&a))
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
return *ah == *bh
return ah.Data == bh.Data && ah.Len == bh.Len && ah.Cap == bh.Cap
}

View File

@@ -844,7 +844,7 @@ func TestStoreWatchSlowConsumer(t *testing.T) {
s.Watch("/foo", true, true, 0) // stream must be true
// Fill watch channel with 100 events
for i := 1; i <= 100; i++ {
s.Set("/foo", false, string(i), v2store.TTLOptionSet{ExpireTime: v2store.Permanent}) // ok
s.Set("/foo", false, string(rune(i)), v2store.TTLOptionSet{ExpireTime: v2store.Permanent}) // ok
}
// testutil.AssertEqual(t, s.WatcherHub.count, int64(1))
s.Set("/foo", false, "101", v2store.TTLOptionSet{ExpireTime: v2store.Permanent}) // ok

View File

@@ -31,7 +31,6 @@ import (
const (
grpcOverheadBytes = 512 * 1024
maxStreams = math.MaxUint32
maxSendBytes = math.MaxInt32
)
@@ -53,7 +52,7 @@ func Server(s *etcdserver.EtcdServer, tls *tls.Config, gopts ...grpc.ServerOptio
)))
opts = append(opts, grpc.MaxRecvMsgSize(int(s.Cfg.MaxRequestBytes+grpcOverheadBytes)))
opts = append(opts, grpc.MaxSendMsgSize(maxSendBytes))
opts = append(opts, grpc.MaxConcurrentStreams(maxStreams))
opts = append(opts, grpc.MaxConcurrentStreams(s.Cfg.MaxConcurrentStreams))
grpcServer := grpc.NewServer(append(opts, gopts...)...)
pb.RegisterKVServer(grpcServer, NewQuotaKVServer(s))

View File

@@ -217,8 +217,8 @@ func newStreamInterceptor(s *etcdserver.EtcdServer) grpc.StreamServerInterceptor
return rpctypes.ErrGRPCNoLeader
}
cctx, cancel := context.WithCancel(ss.Context())
ss = serverStreamWithCtx{ctx: cctx, cancel: &cancel, ServerStream: ss}
ctx := newCancellableContext(ss.Context())
ss = serverStreamWithCtx{ctx: ctx, ServerStream: ss}
smap.mu.Lock()
smap.streams[ss] = struct{}{}
@@ -228,7 +228,8 @@ func newStreamInterceptor(s *etcdserver.EtcdServer) grpc.StreamServerInterceptor
smap.mu.Lock()
delete(smap.streams, ss)
smap.mu.Unlock()
cancel()
// TODO: investigate whether the reason for cancellation here is useful to know
ctx.Cancel(nil)
}()
}
}
@@ -237,10 +238,52 @@ func newStreamInterceptor(s *etcdserver.EtcdServer) grpc.StreamServerInterceptor
}
}
// cancellableContext wraps a context with new cancellable context that allows a
// specific cancellation error to be preserved and later retrieved using the
// Context.Err() function. This is so downstream context users can disambiguate
// the reason for the cancellation which could be from the client (for example)
// or from this interceptor code.
type cancellableContext struct {
context.Context
lock sync.RWMutex
cancel context.CancelFunc
cancelReason error
}
func newCancellableContext(parent context.Context) *cancellableContext {
ctx, cancel := context.WithCancel(parent)
return &cancellableContext{
Context: ctx,
cancel: cancel,
}
}
// Cancel stores the cancellation reason and then delegates to context.WithCancel
// against the parent context.
func (c *cancellableContext) Cancel(reason error) {
c.lock.Lock()
c.cancelReason = reason
c.lock.Unlock()
c.cancel()
}
// Err will return the preserved cancel reason error if present, and will
// otherwise return the underlying error from the parent context.
func (c *cancellableContext) Err() error {
c.lock.RLock()
defer c.lock.RUnlock()
if c.cancelReason != nil {
return c.cancelReason
}
return c.Context.Err()
}
type serverStreamWithCtx struct {
grpc.ServerStream
ctx context.Context
cancel *context.CancelFunc
// ctx is used so that we can preserve a reason for cancellation.
ctx *cancellableContext
}
func (ssc serverStreamWithCtx) Context() context.Context { return ssc.ctx }
@@ -272,7 +315,7 @@ func monitorLeader(s *etcdserver.EtcdServer) *streamsMap {
smap.mu.Lock()
for ss := range smap.streams {
if ssWithCtx, ok := ss.(serverStreamWithCtx); ok {
(*ssWithCtx.cancel)()
ssWithCtx.ctx.Cancel(rpctypes.ErrGRPCNoLeader)
<-ss.Context().Done()
}
}
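cancellableContext preserves why a stream context was cancelled so that downstream code can distinguish a server-side "no leader" cancellation from an ordinary client cancel. A standalone reduction of the idea (locking omitted for brevity; names are illustrative):

package main

import (
	"context"
	"errors"
	"fmt"
)

// reasonCtx is a stripped-down version of cancellableContext above:
// it records the cancellation reason and surfaces it through Err().
type reasonCtx struct {
	context.Context
	cancel context.CancelFunc
	reason error // guarded by a mutex in the real code
}

func (c *reasonCtx) Cancel(reason error) { c.reason = reason; c.cancel() }

func (c *reasonCtx) Err() error {
	if c.reason != nil {
		return c.reason
	}
	return c.Context.Err()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	c := &reasonCtx{Context: ctx, cancel: cancel}
	c.Cancel(errors.New("etcdserver: no leader"))
	fmt.Println(c.Err()) // prints the preserved reason, not context.Canceled
}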

View File

@@ -35,6 +35,8 @@ var (
ErrGRPCLeaseExist = status.New(codes.FailedPrecondition, "etcdserver: lease already exists").Err()
ErrGRPCLeaseTTLTooLarge = status.New(codes.OutOfRange, "etcdserver: too large lease TTL").Err()
ErrGRPCWatchCanceled = status.New(codes.Canceled, "etcdserver: watch canceled").Err()
ErrGRPCMemberExist = status.New(codes.FailedPrecondition, "etcdserver: member ID already exist").Err()
ErrGRPCPeerURLExist = status.New(codes.FailedPrecondition, "etcdserver: Peer URLs already exists").Err()
ErrGRPCMemberNotEnoughStarted = status.New(codes.FailedPrecondition, "etcdserver: re-configuration failed due to not enough started members").Err()
@@ -62,6 +64,7 @@ var (
ErrGRPCAuthNotEnabled = status.New(codes.FailedPrecondition, "etcdserver: authentication is not enabled").Err()
ErrGRPCInvalidAuthToken = status.New(codes.Unauthenticated, "etcdserver: invalid auth token").Err()
ErrGRPCInvalidAuthMgmt = status.New(codes.InvalidArgument, "etcdserver: invalid auth management").Err()
ErrGRPCAuthOldRevision = status.New(codes.InvalidArgument, "etcdserver: revision of auth store is old").Err()
ErrGRPCNoLeader = status.New(codes.Unavailable, "etcdserver: no leader").Err()
ErrGRPCNotLeader = status.New(codes.FailedPrecondition, "etcdserver: not leader").Err()
@@ -71,6 +74,7 @@ var (
ErrGRPCTimeout = status.New(codes.Unavailable, "etcdserver: request timed out").Err()
ErrGRPCTimeoutDueToLeaderFail = status.New(codes.Unavailable, "etcdserver: request timed out, possibly due to previous leader failure").Err()
ErrGRPCTimeoutDueToConnectionLost = status.New(codes.Unavailable, "etcdserver: request timed out, possibly due to connection lost").Err()
ErrGRPCTimeoutWaitAppliedIndex = status.New(codes.Unavailable, "etcdserver: request timed out, waiting for the applied index took too long").Err()
ErrGRPCUnhealthy = status.New(codes.Unavailable, "etcdserver: unhealthy cluster").Err()
ErrGRPCCorrupt = status.New(codes.DataLoss, "etcdserver: corrupt cluster").Err()
ErrGPRCNotSupportedForLearner = status.New(codes.Unavailable, "etcdserver: rpc not supported for learner").Err()
@@ -119,6 +123,7 @@ var (
ErrorDesc(ErrGRPCAuthNotEnabled): ErrGRPCAuthNotEnabled,
ErrorDesc(ErrGRPCInvalidAuthToken): ErrGRPCInvalidAuthToken,
ErrorDesc(ErrGRPCInvalidAuthMgmt): ErrGRPCInvalidAuthMgmt,
ErrorDesc(ErrGRPCAuthOldRevision): ErrGRPCAuthOldRevision,
ErrorDesc(ErrGRPCNoLeader): ErrGRPCNoLeader,
ErrorDesc(ErrGRPCNotLeader): ErrGRPCNotLeader,
@@ -128,6 +133,7 @@ var (
ErrorDesc(ErrGRPCTimeout): ErrGRPCTimeout,
ErrorDesc(ErrGRPCTimeoutDueToLeaderFail): ErrGRPCTimeoutDueToLeaderFail,
ErrorDesc(ErrGRPCTimeoutDueToConnectionLost): ErrGRPCTimeoutDueToConnectionLost,
ErrorDesc(ErrGRPCTimeoutWaitAppliedIndex): ErrGRPCTimeoutWaitAppliedIndex,
ErrorDesc(ErrGRPCUnhealthy): ErrGRPCUnhealthy,
ErrorDesc(ErrGRPCCorrupt): ErrGRPCCorrupt,
ErrorDesc(ErrGPRCNotSupportedForLearner): ErrGPRCNotSupportedForLearner,
@@ -177,6 +183,7 @@ var (
ErrPermissionNotGranted = Error(ErrGRPCPermissionNotGranted)
ErrAuthNotEnabled = Error(ErrGRPCAuthNotEnabled)
ErrInvalidAuthToken = Error(ErrGRPCInvalidAuthToken)
ErrAuthOldRevision = Error(ErrGRPCAuthOldRevision)
ErrInvalidAuthMgmt = Error(ErrGRPCInvalidAuthMgmt)
ErrNoLeader = Error(ErrGRPCNoLeader)
@@ -187,6 +194,7 @@ var (
ErrTimeout = Error(ErrGRPCTimeout)
ErrTimeoutDueToLeaderFail = Error(ErrGRPCTimeoutDueToLeaderFail)
ErrTimeoutDueToConnectionLost = Error(ErrGRPCTimeoutDueToConnectionLost)
ErrTimeoutWaitAppliedIndex = Error(ErrGRPCTimeoutWaitAppliedIndex)
ErrUnhealthy = Error(ErrGRPCUnhealthy)
ErrCorrupt = Error(ErrGRPCCorrupt)
ErrBadLeaderTransferee = Error(ErrGRPCBadLeaderTransferee)

View File

@@ -53,6 +53,7 @@ var toGRPCErrorMap = map[error]error{
etcdserver.ErrTimeout: rpctypes.ErrGRPCTimeout,
etcdserver.ErrTimeoutDueToLeaderFail: rpctypes.ErrGRPCTimeoutDueToLeaderFail,
etcdserver.ErrTimeoutDueToConnectionLost: rpctypes.ErrGRPCTimeoutDueToConnectionLost,
etcdserver.ErrTimeoutWaitAppliedIndex: rpctypes.ErrGRPCTimeoutWaitAppliedIndex,
etcdserver.ErrUnhealthy: rpctypes.ErrGRPCUnhealthy,
etcdserver.ErrKeyNotFound: rpctypes.ErrGRPCKeyNotFound,
etcdserver.ErrCorrupt: rpctypes.ErrGRPCCorrupt,
@@ -77,6 +78,7 @@ var toGRPCErrorMap = map[error]error{
auth.ErrAuthNotEnabled: rpctypes.ErrGRPCAuthNotEnabled,
auth.ErrInvalidAuthToken: rpctypes.ErrGRPCInvalidAuthToken,
auth.ErrInvalidAuthMgmt: rpctypes.ErrGRPCInvalidAuthMgmt,
auth.ErrAuthOldRevision: rpctypes.ErrGRPCAuthOldRevision,
}
func togRPCError(err error) error {

View File

@@ -31,6 +31,8 @@ import (
"go.uber.org/zap"
)
const minWatchProgressInterval = 100 * time.Millisecond
type watchServer struct {
lg *zap.Logger
@@ -46,7 +48,7 @@ type watchServer struct {
// NewWatchServer returns a new watch server.
func NewWatchServer(s *etcdserver.EtcdServer) pb.WatchServer {
return &watchServer{
srv := &watchServer{
lg: s.Cfg.Logger,
clusterID: int64(s.Cluster().ID()),
@@ -58,6 +60,21 @@ func NewWatchServer(s *etcdserver.EtcdServer) pb.WatchServer {
watchable: s.Watchable(),
ag: s,
}
if s.Cfg.WatchProgressNotifyInterval > 0 {
if s.Cfg.WatchProgressNotifyInterval < minWatchProgressInterval {
if srv.lg != nil {
srv.lg.Warn(
"adjusting watch progress notify interval to minimum period",
zap.Duration("min-watch-progress-notify-interval", minWatchProgressInterval),
)
} else {
plog.Warningf("adjusting watch progress notify interval to minimum period %v", minWatchProgressInterval)
}
s.Cfg.WatchProgressNotifyInterval = minWatchProgressInterval
}
SetProgressReportInterval(s.Cfg.WatchProgressNotifyInterval)
}
return srv
}
var (
@@ -189,15 +206,25 @@ func (ws *watchServer) Watch(stream pb.Watch_WatchServer) (err error) {
}
}()
// TODO: There's a race here. When a stream is closed (e.g. due to a cancellation),
// the underlying error (e.g. a gRPC stream error) may be returned and handled
// through errc if the recv goroutine finishes before the send goroutine.
// When the recv goroutine wins, the stream error is retained. When recv loses
// the race, the underlying error is lost (unless the root error is propagated
// through Context.Err() which is not always the case (as callers have to decide
// to implement a custom context to do so). The stdlib context package builtins
// may be insufficient to carry semantically useful errors around and should be
// revisited.
select {
case err = <-errc:
if err == context.Canceled {
err = rpctypes.ErrGRPCWatchCanceled
}
close(sws.ctrlStream)
case <-stream.Context().Done():
err = stream.Context().Err()
// the only server-side cancellation is noleader for now.
if err == context.Canceled {
err = rpctypes.ErrGRPCNoLeader
err = rpctypes.ErrGRPCWatchCanceled
}
}
@@ -373,7 +400,7 @@ func (sws *serverWatchStream) sendLoop() {
sws.mu.RUnlock()
for i := range evs {
events[i] = &evs[i]
if needPrevKV {
if needPrevKV && !isCreateEvent(evs[i]) {
opt := mvcc.RangeOptions{Rev: evs[i].Kv.ModRevision - 1}
r, err := sws.watchable.Range(evs[i].Kv.Key, nil, opt)
if err == nil && len(r.KVs) != 0 {
@@ -507,6 +534,10 @@ func (sws *serverWatchStream) sendLoop() {
}
}
func isCreateEvent(e mvccpb.Event) bool {
return e.Type == mvccpb.PUT && e.Kv.CreateRevision == e.Kv.ModRevision
}
func sendFragments(
wr *pb.WatchResponse,
maxRequestBytes int,

View File

@@ -33,10 +33,6 @@ import (
"go.uber.org/zap"
)
const (
warnApplyDuration = 100 * time.Millisecond
)
type applyResult struct {
resp proto.Message
err error
@@ -115,7 +111,7 @@ func (s *EtcdServer) newApplierV3() applierV3 {
func (a *applierV3backend) Apply(r *pb.InternalRaftRequest) *applyResult {
ar := &applyResult{}
defer func(start time.Time) {
warnOfExpensiveRequest(a.s.getLogger(), start, &pb.InternalRaftStringer{Request: r}, ar.resp, ar.err)
warnOfExpensiveRequest(a.s.getLogger(), a.s.Cfg.WarningApplyDuration, start, &pb.InternalRaftStringer{Request: r}, ar.resp, ar.err)
if ar.err != nil {
warnOfFailedRequest(a.s.getLogger(), start, &pb.InternalRaftStringer{Request: r}, ar.resp, ar.err)
}
@@ -185,7 +181,7 @@ func (a *applierV3backend) Put(txn mvcc.TxnWrite, p *pb.PutRequest) (resp *pb.Pu
trace = traceutil.New("put",
a.s.getLogger(),
traceutil.Field{Key: "key", Value: string(p.Key)},
traceutil.Field{Key: "req_size", Value: proto.Size(p)},
traceutil.Field{Key: "req_size", Value: p.Size()},
)
val, leaseID := p.Value, lease.LeaseID(p.Lease)
if txn == nil {

View File

@@ -16,6 +16,7 @@ package etcdserver
import (
"encoding/json"
"fmt"
"path"
"time"
@@ -114,7 +115,11 @@ func (a *applierV2store) Sync(r *RequestV2) Response {
// applyV2Request interprets r as a call to v2store.X
// and returns a Response interpreted from v2store.Event
func (s *EtcdServer) applyV2Request(r *RequestV2) Response {
defer warnOfExpensiveRequest(s.getLogger(), time.Now(), r, nil, nil)
stringer := panicAlternativeStringer{
stringer: r,
alternative: func() string { return fmt.Sprintf("id:%d,method:%s,path:%s", r.ID, r.Method, r.Path) },
}
defer warnOfExpensiveRequest(s.getLogger(), s.Cfg.WarningApplyDuration, time.Now(), stringer, nil, nil)
switch r.Method {
case "POST":

View File

@@ -31,6 +31,7 @@ import (
func newBackend(cfg ServerConfig) backend.Backend {
bcfg := backend.DefaultBackendConfig()
bcfg.Path = cfg.backendPath()
bcfg.UnsafeNoFsync = cfg.UnsafeNoFsync
if cfg.BackendBatchLimit != 0 {
bcfg.BatchLimit = cfg.BackendBatchLimit
if cfg.Logger != nil {

View File

@@ -119,6 +119,12 @@ type ServerConfig struct {
// MaxRequestBytes is the maximum request size to send over raft.
MaxRequestBytes uint
// MaxConcurrentStreams specifies the maximum number of concurrent
// streams that each client can open at a time.
MaxConcurrentStreams uint32
WarningApplyDuration time.Duration
StrictReconfigCheck bool
// ClientCertAuthEnabled is true when cert has been signed by the client CA.
@@ -126,6 +132,7 @@ type ServerConfig struct {
AuthToken string
BcryptCost uint
TokenTTL uint
// InitialCorruptCheck is true to check data corruption on boot
// before serving any peer/client traffic.
@@ -151,12 +158,20 @@ type ServerConfig struct {
ForceNewCluster bool
// EnableLeaseCheckpoint enables primary lessor to persist lease remainingTTL to prevent indefinite auto-renewal of long lived leases.
// EnableLeaseCheckpoint enables leader to send regular checkpoints to other members to prevent reset of remaining TTL on leader change.
EnableLeaseCheckpoint bool
// LeaseCheckpointInterval time.Duration is the wait duration between lease checkpoints.
LeaseCheckpointInterval time.Duration
// LeaseCheckpointPersist enables persisting remainingTTL to prevent indefinite auto-renewal of long lived leases. Always enabled in v3.6. Should be used to ensure smooth upgrade from v3.5 clusters with this feature enabled.
LeaseCheckpointPersist bool
EnableGRPCGateway bool
WatchProgressNotifyInterval time.Duration
// UnsafeNoFsync disables all uses of fsync.
// Setting this is unsafe and will cause data loss.
UnsafeNoFsync bool `json:"unsafe-no-fsync"`
}
// VerifyBootstrap sanity-checks the initial config for bootstrap case

View File

@@ -26,6 +26,7 @@ var (
ErrTimeout = errors.New("etcdserver: request timed out")
ErrTimeoutDueToLeaderFail = errors.New("etcdserver: request timed out, possibly due to previous leader failure")
ErrTimeoutDueToConnectionLost = errors.New("etcdserver: request timed out, possibly due to connection lost")
ErrTimeoutWaitAppliedIndex = errors.New("etcdserver: request timed out, waiting for the applied index took too long")
ErrTimeoutLeaderTransfer = errors.New("etcdserver: request timed out, leader transfer took too long")
ErrLeaderChanged = errors.New("etcdserver: leader changed")
ErrNotEnoughStartedMembers = errors.New("etcdserver: re-configuration failed due to not enough started members")

View File

@@ -137,7 +137,7 @@ type loggableValueCompare struct {
Result Compare_CompareResult `protobuf:"varint,1,opt,name=result,proto3,enum=etcdserverpb.Compare_CompareResult"`
Target Compare_CompareTarget `protobuf:"varint,2,opt,name=target,proto3,enum=etcdserverpb.Compare_CompareTarget"`
Key []byte `protobuf:"bytes,3,opt,name=key,proto3"`
ValueSize int `protobuf:"bytes,7,opt,name=value_size,proto3"`
ValueSize int64 `protobuf:"varint,7,opt,name=value_size,proto3"`
RangeEnd []byte `protobuf:"bytes,64,opt,name=range_end,proto3"`
}
@@ -146,7 +146,7 @@ func newLoggableValueCompare(c *Compare, cv *Compare_Value) *loggableValueCompar
c.Result,
c.Target,
c.Key,
len(cv.Value),
int64(len(cv.Value)),
c.RangeEnd,
}
}
@@ -160,7 +160,7 @@ func (*loggableValueCompare) ProtoMessage() {}
// To preserve proto encoding of the key bytes, a faked out proto type is used here.
type loggablePutRequest struct {
Key []byte `protobuf:"bytes,1,opt,name=key,proto3"`
ValueSize int `protobuf:"varint,2,opt,name=value_size,proto3"`
ValueSize int64 `protobuf:"varint,2,opt,name=value_size,proto3"`
Lease int64 `protobuf:"varint,3,opt,name=lease,proto3"`
PrevKv bool `protobuf:"varint,4,opt,name=prev_kv,proto3"`
IgnoreValue bool `protobuf:"varint,5,opt,name=ignore_value,proto3"`
@@ -170,7 +170,7 @@ type loggablePutRequest struct {
func NewLoggablePutRequest(request *PutRequest) *loggablePutRequest {
return &loggablePutRequest{
request.Key,
len(request.Value),
int64(len(request.Value)),
request.Lease,
request.PrevKv,
request.IgnoreValue,

View File

@@ -151,6 +151,19 @@ var (
Help: "Server or member ID in hexadecimal format. 1 for 'server_id' label with current ID.",
},
[]string{"server_id"})
fdUsed = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "os",
Subsystem: "fd",
Name: "used",
Help: "The number of used file descriptors.",
})
fdLimit = prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "os",
Subsystem: "fd",
Name: "limit",
Help: "The file descriptor limit.",
})
)
func init() {
@@ -174,6 +187,8 @@ func init() {
prometheus.MustRegister(isLearner)
prometheus.MustRegister(learnerPromoteSucceed)
prometheus.MustRegister(learnerPromoteFailed)
prometheus.MustRegister(fdUsed)
prometheus.MustRegister(fdLimit)
currentVersion.With(prometheus.Labels{
"server_version": version.Version,
@@ -184,7 +199,12 @@ func init() {
}
func monitorFileDescriptor(lg *zap.Logger, done <-chan struct{}) {
ticker := time.NewTicker(5 * time.Second)
// This ticker checks file descriptor usage: it counts all fds in use and
// logs a warning once usage reaches limit/5*4 (80% of the limit); the
// message is only recorded, nothing is enforced.
// FDUsage() becomes expensive once more than 10K fds are open, so the
// interval was increased from 5 seconds to 10 minutes to keep overhead low.
// See https://github.com/etcd-io/etcd/issues/11969 for more detail.
ticker := time.NewTicker(10 * time.Minute)
defer ticker.Stop()
for {
used, err := runtime.FDUsage()
@@ -196,6 +216,7 @@ func monitorFileDescriptor(lg *zap.Logger, done <-chan struct{}) {
}
return
}
fdUsed.Set(float64(used))
limit, err := runtime.FDLimit()
if err != nil {
if lg != nil {
@@ -205,6 +226,7 @@ func monitorFileDescriptor(lg *zap.Logger, done <-chan struct{}) {
}
return
}
fdLimit.Set(float64(limit))
if used >= limit/5*4 {
if lg != nil {
lg.Warn("80% of file descriptors are used", zap.Uint64("used", used), zap.Uint64("limit", limit))

View File

@@ -135,8 +135,14 @@ func NewBackendQuota(s *EtcdServer, name string) Quota {
}
func (b *backendQuota) Available(v interface{}) bool {
cost := b.Cost(v)
// if there are no mutating requests, it's safe to pass through
if cost == 0 {
return true
}
// TODO: maybe optimize backend.Size()
return b.s.Backend().Size()+int64(b.Cost(v)) < b.maxBackendBytes
return b.s.Backend().Size()+int64(cost) < b.maxBackendBytes
}
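The quota check is simple arithmetic: zero-cost (non-mutating) requests always pass, and a mutating request is admitted only while current backend size plus its cost stays strictly below the byte limit. A standalone sketch of the same predicate (a re-implementation for illustration, not the etcd method itself):

package main

import "fmt"

// available mirrors backendQuota.Available: zero-cost requests pass through;
// otherwise size+cost must stay strictly below the limit.
func available(size, cost, limit int64) bool {
    if cost == 0 {
        return true
    }
    return size+cost < limit
}

func main() {
    fmt.Println(available(90, 0, 100))  // true: read-only passes through
    fmt.Println(available(90, 5, 100))  // true: 95 < 100
    fmt.Println(available(90, 10, 100)) // false: 100 is not < 100
}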
func (b *backendQuota) Cost(v interface{}) int {

View File

@@ -215,6 +215,18 @@ func (r *raftNode) start(rh *raftReadyHandler) {
notifyc: notifyc,
}
waitWALSync := shouldWaitWALSync(rd)
if waitWALSync {
// gofail: var raftBeforeSaveWaitWalSync struct{}
if err := r.storage.Save(rd.HardState, rd.Entries); err != nil {
if r.lg != nil {
r.lg.Fatal("failed to save Raft hard state and entries", zap.Error(err))
} else {
plog.Fatalf("failed to save state and entries error: %v", err)
}
}
}
updateCommittedIndex(&ap, rh)
select {
@@ -245,12 +257,14 @@ func (r *raftNode) start(rh *raftReadyHandler) {
// gofail: var raftAfterSaveSnap struct{}
}
// gofail: var raftBeforeSave struct{}
if err := r.storage.Save(rd.HardState, rd.Entries); err != nil {
if r.lg != nil {
r.lg.Fatal("failed to save Raft hard state and entries", zap.Error(err))
} else {
plog.Fatalf("failed to save state and entries error: %v", err)
if !waitWALSync {
// gofail: var raftBeforeSave struct{}
if err := r.storage.Save(rd.HardState, rd.Entries); err != nil {
if r.lg != nil {
r.lg.Fatal("failed to save Raft hard state and entries", zap.Error(err))
} else {
plog.Fatalf("failed to save state and entries error: %v", err)
}
}
}
if !raft.IsEmptyHardState(rd.HardState) {
@@ -342,6 +356,42 @@ func (r *raftNode) start(rh *raftReadyHandler) {
}()
}
// For a cluster with only one member, the raft may send both the
// unstable entries and committed entries to etcdserver, and there
// may be overlapping log entries between them.
//
// etcd responds to the client once it finishes (actually partially)
// the applying workflow. But when the client receives the response,
// it doesn't mean etcd has already successfully saved the data,
// including BoltDB and WAL, because:
// 1. etcd commits the boltDB transaction periodically instead of on each request;
// 2. etcd saves WAL entries in parallel with applying the committed entries.
// Accordingly, it might run into a situation of data loss when etcd crashes
// immediately after responding to the client and before BoltDB and the WAL
// have successfully saved the data to disk.
// Note that this issue can only happen for clusters with only one member.
//
// For clusters with multiple members, it isn't an issue, because etcd will
// not commit & apply the data before it has been replicated to a majority
// of members. When the client receives the response, it means the data must
// have been applied. It further means the data must have been committed.
// Note: for clusters with multiple members, the raft will never send
// identical unstable entries and committed entries to etcdserver.
//
// Refer to https://github.com/etcd-io/etcd/issues/14370.
func shouldWaitWALSync(rd raft.Ready) bool {
if len(rd.CommittedEntries) == 0 || len(rd.Entries) == 0 {
return false
}
// Check if there is overlap between unstable and committed entries
// assuming that their index and term are only incrementing.
lastCommittedEntry := rd.CommittedEntries[len(rd.CommittedEntries)-1]
firstUnstableEntry := rd.Entries[0]
return lastCommittedEntry.Term > firstUnstableEntry.Term ||
(lastCommittedEntry.Term == firstUnstableEntry.Term && lastCommittedEntry.Index >= firstUnstableEntry.Index)
}
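A standalone sketch of the same overlap predicate (a re-implementation for illustration, not the etcd function itself), showing that identical unstable and committed entries, the single-member case, trigger the synchronous WAL save:

package main

import (
    "fmt"

    "go.etcd.io/etcd/raft/raftpb"
)

// overlaps mirrors the check above: the committed entries reach into the
// unstable entries when the last committed (term, index) is at or beyond
// the first unstable (term, index).
func overlaps(unstable, committed []raftpb.Entry) bool {
    if len(committed) == 0 || len(unstable) == 0 {
        return false
    }
    lastCommitted := committed[len(committed)-1]
    firstUnstable := unstable[0]
    return lastCommitted.Term > firstUnstable.Term ||
        (lastCommitted.Term == firstUnstable.Term && lastCommitted.Index >= firstUnstable.Index)
}

func main() {
    e := []raftpb.Entry{{Term: 4, Index: 10}}
    fmt.Println(overlaps(e, e)) // true: single-member clusters hit this case
}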
func updateCommittedIndex(ap *apply, rh *raftReadyHandler) {
var ci uint64
if len(ap.entries) != 0 {
@@ -465,6 +515,9 @@ func startNode(cfg ServerConfig, cl *membership.RaftCluster, ids []types.ID) (id
plog.Panicf("create wal error: %v", err)
}
}
if cfg.UnsafeNoFsync {
w.SetUnsafeNoFsync()
}
peers := make([]raft.Peer, len(ids))
for i, id := range ids {
var ctx []byte
@@ -527,7 +580,7 @@ func restartNode(cfg ServerConfig, snapshot *raftpb.Snapshot) (types.ID, *member
if snapshot != nil {
walsnap.Index, walsnap.Term = snapshot.Metadata.Index, snapshot.Metadata.Term
}
w, id, cid, st, ents := readWAL(cfg.Logger, cfg.WALDir(), walsnap)
w, id, cid, st, ents := readWAL(cfg.Logger, cfg.WALDir(), walsnap, cfg.UnsafeNoFsync)
if cfg.Logger != nil {
cfg.Logger.Info(
@@ -582,7 +635,7 @@ func restartAsStandaloneNode(cfg ServerConfig, snapshot *raftpb.Snapshot) (types
if snapshot != nil {
walsnap.Index, walsnap.Term = snapshot.Metadata.Index, snapshot.Metadata.Term
}
w, id, cid, st, ents := readWAL(cfg.Logger, cfg.WALDir(), walsnap)
w, id, cid, st, ents := readWAL(cfg.Logger, cfg.WALDir(), walsnap, cfg.UnsafeNoFsync)
// discard the previously uncommitted entries
for i, ent := range ents {
@@ -672,10 +725,11 @@ func restartAsStandaloneNode(cfg ServerConfig, snapshot *raftpb.Snapshot) (types
}
// getIDs returns an ordered set of IDs included in the given snapshot and
// the entries. The given snapshot/entries can contain two kinds of
// the entries. The given snapshot/entries can contain three kinds of
// ID-related entry:
// - ConfChangeAddNode, in which case the contained ID will be added into the set.
// - ConfChangeRemoveNode, in which case the contained ID will be removed from the set.
// - ConfChangeAddLearnerNode, in which the contained ID will be added into the set.
func getIDs(lg *zap.Logger, snap *raftpb.Snapshot, ents []raftpb.Entry) []uint64 {
ids := make(map[uint64]bool)
if snap != nil {
@@ -690,6 +744,8 @@ func getIDs(lg *zap.Logger, snap *raftpb.Snapshot, ents []raftpb.Entry) []uint64
var cc raftpb.ConfChange
pbutil.MustUnmarshal(&cc, e.Data)
switch cc.Type {
case raftpb.ConfChangeAddLearnerNode:
ids[cc.NodeID] = true
case raftpb.ConfChangeAddNode:
ids[cc.NodeID] = true
case raftpb.ConfChangeRemoveNode:

View File

@@ -21,6 +21,7 @@ import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"go.etcd.io/etcd/etcdserver/api/membership"
"go.etcd.io/etcd/pkg/mock/mockstorage"
"go.etcd.io/etcd/pkg/pbutil"
@@ -267,3 +268,79 @@ func TestProcessDuplicatedAppRespMessage(t *testing.T) {
t.Errorf("count = %d, want %d", got, want)
}
}
func TestShouldWaitWALSync(t *testing.T) {
testcases := []struct {
name string
unstableEntries []raftpb.Entry
commitedEntries []raftpb.Entry
expectedResult bool
}{
{
name: "both entries are nil",
unstableEntries: nil,
commitedEntries: nil,
expectedResult: false,
},
{
name: "both entries are empty slices",
unstableEntries: []raftpb.Entry{},
commitedEntries: []raftpb.Entry{},
expectedResult: false,
},
{
name: "one nil and the other empty",
unstableEntries: nil,
commitedEntries: []raftpb.Entry{},
expectedResult: false,
},
{
name: "one nil and the other has data",
unstableEntries: nil,
commitedEntries: []raftpb.Entry{{Term: 4, Index: 10, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}}},
expectedResult: false,
},
{
name: "one empty and the other has data",
unstableEntries: []raftpb.Entry{},
commitedEntries: []raftpb.Entry{{Term: 4, Index: 10, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}}},
expectedResult: false,
},
{
name: "has different term and index",
unstableEntries: []raftpb.Entry{{Term: 5, Index: 11, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}}},
commitedEntries: []raftpb.Entry{{Term: 4, Index: 10, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}}},
expectedResult: false,
},
{
name: "has identical data",
unstableEntries: []raftpb.Entry{{Term: 4, Index: 10, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}}},
commitedEntries: []raftpb.Entry{{Term: 4, Index: 10, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}}},
expectedResult: true,
},
{
name: "has overlapped entry",
unstableEntries: []raftpb.Entry{
{Term: 4, Index: 10, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}},
{Term: 4, Index: 11, Type: raftpb.EntryNormal, Data: []byte{0x44, 0x55, 0x66}},
{Term: 4, Index: 12, Type: raftpb.EntryNormal, Data: []byte{0x77, 0x88, 0x99}},
},
commitedEntries: []raftpb.Entry{
{Term: 4, Index: 8, Type: raftpb.EntryNormal, Data: []byte{0x07, 0x08, 0x09}},
{Term: 4, Index: 9, Type: raftpb.EntryNormal, Data: []byte{0x10, 0x11, 0x12}},
{Term: 4, Index: 10, Type: raftpb.EntryNormal, Data: []byte{0x11, 0x22, 0x33}},
},
expectedResult: true,
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
shouldWALSync := shouldWaitWALSync(raft.Ready{
Entries: tc.unstableEntries,
CommittedEntries: tc.commitedEntries,
})
assert.Equal(t, tc.expectedResult, shouldWALSync)
})
}
}

View File

@@ -25,6 +25,7 @@ import (
"os"
"path"
"regexp"
"strings"
"sync"
"sync/atomic"
"time"
@@ -258,10 +259,6 @@ type EtcdServer struct {
peerRt http.RoundTripper
reqIDGen *idutil.Generator
// forceVersionC is used to force the version monitor loop
// to detect the cluster version immediately.
forceVersionC chan struct{}
// wgMu blocks concurrent waitgroup mutation while server stopping
wgMu sync.RWMutex
// wg is used to wait for the go routines that depends on the server state
@@ -276,6 +273,9 @@ type EtcdServer struct {
leadTimeMu sync.RWMutex
leadElectedTime time.Time
firstCommitInTermMu sync.RWMutex
firstCommitInTermC chan struct{}
*AccessController
}
@@ -323,6 +323,17 @@ func NewServer(cfg ServerConfig) (srv *EtcdServer, err error) {
plog.Fatalf("create snapshot directory error: %v", err)
}
}
if err = fileutil.RemoveMatchFile(cfg.Logger, cfg.SnapDir(), func(fileName string) bool {
return strings.HasPrefix(fileName, "tmp")
}); err != nil {
cfg.Logger.Error(
"failed to remove temp file(s) in snapshot directory",
zap.String("path", cfg.SnapDir()),
zap.Error(err),
)
}
ss := snap.New(cfg.Logger, cfg.SnapDir())
bepath := cfg.backendPath()
@@ -520,16 +531,16 @@ func NewServer(cfg ServerConfig) (srv *EtcdServer, err error) {
storage: NewStorage(w, ss),
},
),
id: id,
attributes: membership.Attributes{Name: cfg.Name, ClientURLs: cfg.ClientURLs.StringSlice()},
cluster: cl,
stats: sstats,
lstats: lstats,
SyncTicker: time.NewTicker(500 * time.Millisecond),
peerRt: prt,
reqIDGen: idutil.NewGenerator(uint16(id), time.Now()),
forceVersionC: make(chan struct{}),
AccessController: &AccessController{CORS: cfg.CORS, HostWhitelist: cfg.HostWhitelist},
id: id,
attributes: membership.Attributes{Name: cfg.Name, ClientURLs: cfg.ClientURLs.StringSlice()},
cluster: cl,
stats: sstats,
lstats: lstats,
SyncTicker: time.NewTicker(500 * time.Millisecond),
peerRt: prt,
reqIDGen: idutil.NewGenerator(uint16(id), time.Now()),
AccessController: &AccessController{CORS: cfg.CORS, HostWhitelist: cfg.HostWhitelist},
firstCommitInTermC: make(chan struct{}),
}
serverID.With(prometheus.Labels{"server_id": id.String()}).Set(1)
@@ -543,9 +554,11 @@ func NewServer(cfg ServerConfig) (srv *EtcdServer, err error) {
srv.lessor = lease.NewLessor(
srv.getLogger(),
srv.be,
srv.cluster,
lease.LessorConfig{
MinLeaseTTL: int64(math.Ceil(minTTL.Seconds())),
CheckpointInterval: cfg.LeaseCheckpointInterval,
CheckpointPersist: cfg.LeaseCheckpointPersist,
ExpiredLeasesRetryInterval: srv.Cfg.ReqTimeout(),
})
@@ -553,6 +566,7 @@ func NewServer(cfg ServerConfig) (srv *EtcdServer, err error) {
func(index uint64) <-chan struct{} {
return srv.applyWait.Wait(index)
},
time.Duration(cfg.TokenTTL)*time.Second,
)
if err != nil {
if cfg.Logger != nil {
@@ -1058,37 +1072,7 @@ func (s *EtcdServer) run() {
f := func(context.Context) { s.applyAll(&ep, &ap) }
sched.Schedule(f)
case leases := <-expiredLeaseC:
s.goAttach(func() {
// Increases throughput of the expired-lease deletion process through parallelization
c := make(chan struct{}, maxPendingRevokes)
for _, lease := range leases {
select {
case c <- struct{}{}:
case <-s.stopping:
return
}
lid := lease.ID
s.goAttach(func() {
ctx := s.authStore.WithRoot(s.ctx)
_, lerr := s.LeaseRevoke(ctx, &pb.LeaseRevokeRequest{ID: int64(lid)})
if lerr == nil {
leaseExpired.Inc()
} else {
if lg != nil {
lg.Warn(
"failed to revoke lease",
zap.String("lease-id", fmt.Sprintf("%016x", lid)),
zap.Error(lerr),
)
} else {
plog.Warningf("failed to revoke %016x (%q)", lid, lerr.Error())
}
}
<-c
})
}
})
s.revokeExpiredLeases(leases)
case err := <-s.errorc:
if lg != nil {
lg.Warn("server error", zap.Error(err))
@@ -1108,6 +1092,45 @@ func (s *EtcdServer) run() {
}
}
func (s *EtcdServer) revokeExpiredLeases(leases []*lease.Lease) {
s.goAttach(func() {
lg := s.Logger()
// Increases throughput of the expired-lease deletion process through parallelization
c := make(chan struct{}, maxPendingRevokes)
for _, curLease := range leases {
select {
case c <- struct{}{}:
case <-s.stopping:
return
}
f := func(lid int64) {
s.goAttach(func() {
ctx := s.authStore.WithRoot(s.ctx)
_, lerr := s.LeaseRevoke(ctx, &pb.LeaseRevokeRequest{ID: lid})
if lerr == nil {
leaseExpired.Inc()
} else {
if lg != nil {
lg.Warn(
"failed to revoke lease",
zap.String("lease-id", fmt.Sprintf("%016x", lid)),
zap.Error(lerr),
)
} else {
plog.Warningf("failed to revoke %016x (%q)", lid, lerr.Error())
}
}
<-c
})
}
f(int64(curLease.ID))
}
})
}
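The buffered channel c acts as a counting semaphore that caps in-flight revocations at maxPendingRevokes. A minimal standalone sketch of the same bounded-concurrency pattern, with a generic worker payload and a hypothetical cap:

package main

import (
    "fmt"
    "sync"
)

func main() {
    const maxPending = 4 // hypothetical cap, playing the role of maxPendingRevokes
    sem := make(chan struct{}, maxPending)
    var wg sync.WaitGroup
    for i := 0; i < 16; i++ {
        sem <- struct{}{} // blocks once maxPending workers are in flight
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            fmt.Println("revoking", id) // stands in for the LeaseRevoke call
            <-sem                       // release the slot
        }(i)
    }
    wg.Wait()
}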
func (s *EtcdServer) applyAll(ep *etcdProgress, apply *apply) {
s.applySnapshot(ep, apply)
s.applyEntries(ep, apply)
@@ -1911,6 +1934,16 @@ func (s *EtcdServer) leaderChangedNotify() <-chan struct{} {
return s.leaderChanged
}
// FirstCommitInTermNotify returns a channel that will be closed on the first
// entry committed in a new term, which is necessary for a new leader to answer
// read-only requests (the leader is not able to respond to any read-only requests
// as long as linearizable semantics are required)
func (s *EtcdServer) FirstCommitInTermNotify() <-chan struct{} {
s.firstCommitInTermMu.RLock()
defer s.firstCommitInTermMu.RUnlock()
return s.firstCommitInTermC
}
// RaftStatusGetter represents etcd server and Raft progress.
type RaftStatusGetter interface {
ID() types.ID
@@ -2178,10 +2211,8 @@ func (s *EtcdServer) applyEntryNormal(e *raftpb.Entry) {
// raft state machine may generate noop entry when leader confirmation.
// skip it in advance to avoid some potential bug in the future
if len(e.Data) == 0 {
select {
case s.forceVersionC <- struct{}{}:
default:
}
s.notifyAboutFirstCommitInTerm()
// promote lessor when the local member is leader and finished
// applying all entries from the last term.
if s.isLeader() {
@@ -2254,6 +2285,15 @@ func (s *EtcdServer) applyEntryNormal(e *raftpb.Entry) {
})
}
func (s *EtcdServer) notifyAboutFirstCommitInTerm() {
newNotifier := make(chan struct{})
s.firstCommitInTermMu.Lock()
notifierToClose := s.firstCommitInTermC
s.firstCommitInTermC = newNotifier
s.firstCommitInTermMu.Unlock()
close(notifierToClose)
}
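notifyAboutFirstCommitInTerm swaps in a fresh channel under the lock and closes the old one, so every pending FirstCommitInTermNotify waiter wakes exactly once per event. A standalone sketch of this rotate-and-close broadcast pattern (hypothetical names, not the etcd types):

package main

import (
    "fmt"
    "sync"
)

type broadcaster struct {
    mu sync.RWMutex
    ch chan struct{}
}

func newBroadcaster() *broadcaster { return &broadcaster{ch: make(chan struct{})} }

// Notify returns the current generation's channel; it is closed on the next event.
func (b *broadcaster) Notify() <-chan struct{} {
    b.mu.RLock()
    defer b.mu.RUnlock()
    return b.ch
}

// Fire wakes all current waiters and arms a fresh channel for future ones.
func (b *broadcaster) Fire() {
    next := make(chan struct{})
    b.mu.Lock()
    old := b.ch
    b.ch = next
    b.mu.Unlock()
    close(old)
}

func main() {
    b := newBroadcaster()
    done := b.Notify()
    go b.Fire()
    <-done
    fmt.Println("woken by first commit in term")
}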
// applyConfChange applies a ConfChange to the server. It is only
// invoked with a ConfChange that has already passed through Raft
func (s *EtcdServer) applyConfChange(cc raftpb.ConfChange, confState *raftpb.ConfState) (bool, error) {
@@ -2480,7 +2520,7 @@ func (s *EtcdServer) ClusterVersion() *semver.Version {
func (s *EtcdServer) monitorVersions() {
for {
select {
case <-s.forceVersionC:
case <-s.FirstCommitInTermNotify():
case <-time.After(monitorVersionInterval):
case <-s.stopping:
return

View File

@@ -994,7 +994,8 @@ func TestSnapshot(t *testing.T) {
defer func() { ch <- struct{}{} }()
if len(gaction) != 2 {
t.Fatalf("len(action) = %d, want 2", len(gaction))
t.Errorf("len(action) = %d, want 2", len(gaction))
return
}
if !reflect.DeepEqual(gaction[0], testutil.Action{Name: "SaveSnap"}) {
t.Errorf("action = %s, want SaveSnap", gaction[0])
@@ -1156,7 +1157,8 @@ func TestTriggerSnap(t *testing.T) {
// (SnapshotCount+1) * Puts + SaveSnap = (SnapshotCount+1) * Save + SaveSnap + Release
if len(gaction) != wcnt {
t.Logf("gaction: %v", gaction)
t.Fatalf("len(action) = %d, want %d", len(gaction), wcnt)
t.Errorf("len(action) = %d, want %d", len(gaction), wcnt)
return
}
if !reflect.DeepEqual(gaction[wcnt-2], testutil.Action{Name: "SaveSnap"}) {
@@ -1848,3 +1850,59 @@ func (s *sendMsgAppRespTransporter) Send(m []raftpb.Message) {
}
s.sendC <- send
}
func TestWaitAppliedIndex(t *testing.T) {
cases := []struct {
name string
appliedIndex uint64
committedIndex uint64
action func(s *EtcdServer)
ExpectedError error
}{
{
name: "The applied Id is already equal to the commitId",
appliedIndex: 10,
committedIndex: 10,
action: func(s *EtcdServer) {
s.applyWait.Trigger(10)
},
ExpectedError: nil,
},
{
name: "The etcd server has already stopped",
appliedIndex: 10,
committedIndex: 12,
action: func(s *EtcdServer) {
s.stopping <- struct{}{}
},
ExpectedError: ErrStopped,
},
{
name: "Timed out waiting for the applied index",
appliedIndex: 10,
committedIndex: 12,
action: nil,
ExpectedError: ErrTimeoutWaitAppliedIndex,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
s := &EtcdServer{
appliedIndex: tc.appliedIndex,
committedIndex: tc.committedIndex,
stopping: make(chan struct{}, 1),
applyWait: wait.NewTimeList(),
}
if tc.action != nil {
go tc.action(s)
}
err := s.waitAppliedIndex()
if err != tc.ExpectedError {
t.Errorf("Unexpected error, want (%v), got (%v)", tc.ExpectedError, err)
}
})
}
}

View File

@@ -82,7 +82,7 @@ func (st *storage) Release(snap raftpb.Snapshot) error {
// readWAL reads the WAL at the given snap and returns the wal, its latest HardState and cluster ID, and all entries that appear
// after the position of the given snap in the WAL.
// The snap must have been previously saved to the WAL, or this call will panic.
func readWAL(lg *zap.Logger, waldir string, snap walpb.Snapshot) (w *wal.WAL, id, cid types.ID, st raftpb.HardState, ents []raftpb.Entry) {
func readWAL(lg *zap.Logger, waldir string, snap walpb.Snapshot, unsafeNoFsync bool) (w *wal.WAL, id, cid types.ID, st raftpb.HardState, ents []raftpb.Entry) {
var (
err error
wmetadata []byte
@@ -97,6 +97,9 @@ func readWAL(lg *zap.Logger, waldir string, snap walpb.Snapshot) (w *wal.WAL, id
plog.Fatalf("open wal error: %v", err)
}
}
if unsafeNoFsync {
w.SetUnsafeNoFsync()
}
if wmetadata, st, ents, err = w.ReadAll(); err != nil {
w.Close()
// we can only repair ErrUnexpectedEOF and we never repair twice.

View File

@@ -103,12 +103,12 @@ func (nc *notifier) notify(err error) {
close(nc.c)
}
func warnOfExpensiveRequest(lg *zap.Logger, now time.Time, reqStringer fmt.Stringer, respMsg proto.Message, err error) {
func warnOfExpensiveRequest(lg *zap.Logger, warningApplyDuration time.Duration, now time.Time, reqStringer fmt.Stringer, respMsg proto.Message, err error) {
var resp string
if !isNil(respMsg) {
resp = fmt.Sprintf("size:%d", proto.Size(respMsg))
}
warnOfExpensiveGenericRequest(lg, now, reqStringer, "", resp, err)
warnOfExpensiveGenericRequest(lg, warningApplyDuration, now, reqStringer, "", resp, err)
}
func warnOfFailedRequest(lg *zap.Logger, now time.Time, reqStringer fmt.Stringer, respMsg proto.Message, err error) {
@@ -130,7 +130,7 @@ func warnOfFailedRequest(lg *zap.Logger, now time.Time, reqStringer fmt.Stringer
}
}
func warnOfExpensiveReadOnlyTxnRequest(lg *zap.Logger, now time.Time, r *pb.TxnRequest, txnResponse *pb.TxnResponse, err error) {
func warnOfExpensiveReadOnlyTxnRequest(lg *zap.Logger, warningApplyDuration time.Duration, now time.Time, r *pb.TxnRequest, txnResponse *pb.TxnResponse, err error) {
reqStringer := pb.NewLoggableTxnRequest(r)
var resp string
if !isNil(txnResponse) {
@@ -143,27 +143,27 @@ func warnOfExpensiveReadOnlyTxnRequest(lg *zap.Logger, now time.Time, r *pb.TxnR
// only range responses should be in a read only txn request
}
}
resp = fmt.Sprintf("responses:<%s> size:%d", strings.Join(resps, " "), proto.Size(txnResponse))
resp = fmt.Sprintf("responses:<%s> size:%d", strings.Join(resps, " "), txnResponse.Size())
}
warnOfExpensiveGenericRequest(lg, now, reqStringer, "read-only range ", resp, err)
warnOfExpensiveGenericRequest(lg, warningApplyDuration, now, reqStringer, "read-only range ", resp, err)
}
func warnOfExpensiveReadOnlyRangeRequest(lg *zap.Logger, now time.Time, reqStringer fmt.Stringer, rangeResponse *pb.RangeResponse, err error) {
func warnOfExpensiveReadOnlyRangeRequest(lg *zap.Logger, warningApplyDuration time.Duration, now time.Time, reqStringer fmt.Stringer, rangeResponse *pb.RangeResponse, err error) {
var resp string
if !isNil(rangeResponse) {
resp = fmt.Sprintf("range_response_count:%d size:%d", len(rangeResponse.Kvs), proto.Size(rangeResponse))
resp = fmt.Sprintf("range_response_count:%d size:%d", len(rangeResponse.Kvs), rangeResponse.Size())
}
warnOfExpensiveGenericRequest(lg, now, reqStringer, "read-only range ", resp, err)
warnOfExpensiveGenericRequest(lg, warningApplyDuration, now, reqStringer, "read-only range ", resp, err)
}
func warnOfExpensiveGenericRequest(lg *zap.Logger, now time.Time, reqStringer fmt.Stringer, prefix string, resp string, err error) {
func warnOfExpensiveGenericRequest(lg *zap.Logger, warningApplyDuration time.Duration, now time.Time, reqStringer fmt.Stringer, prefix string, resp string, err error) {
d := time.Since(now)
if d > warnApplyDuration {
if d > warningApplyDuration {
if lg != nil {
lg.Warn(
"apply request took too long",
zap.Duration("took", d),
zap.Duration("expected-duration", warnApplyDuration),
zap.Duration("expected-duration", warningApplyDuration),
zap.String("prefix", prefix),
zap.String("request", reqStringer.String()),
zap.String("response", resp),
@@ -185,3 +185,21 @@ func warnOfExpensiveGenericRequest(lg *zap.Logger, now time.Time, reqStringer fm
func isNil(msg proto.Message) bool {
return msg == nil || reflect.ValueOf(msg).IsNil()
}
// panicAlternativeStringer wraps a fmt.Stringer, and if calling String() panics, calls the alternative instead.
// This is needed to ensure logging slow v2 requests does not panic, which occurs when running integration tests
// with the embedded server with github.com/golang/protobuf v1.4.0+. See https://github.com/etcd-io/etcd/issues/12197.
type panicAlternativeStringer struct {
stringer fmt.Stringer
alternative func() string
}
func (n panicAlternativeStringer) String() (s string) {
defer func() {
if err := recover(); err != nil {
s = n.alternative()
}
}()
s = n.stringer.String()
return s
}

View File

@@ -90,3 +90,23 @@ func (s *nopTransporterWithActiveTime) Stop() {}
func (s *nopTransporterWithActiveTime) Pause() {}
func (s *nopTransporterWithActiveTime) Resume() {}
func (s *nopTransporterWithActiveTime) reset(am map[types.ID]time.Time) { s.activeMap = am }
func TestPanicAlternativeStringer(t *testing.T) {
p := panicAlternativeStringer{alternative: func() string { return "alternative" }}
p.stringer = testStringerFunc(func() string { panic("here") })
if s := p.String(); s != "alternative" {
t.Fatalf("expected 'alternative', got %q", s)
}
p.stringer = testStringerFunc(func() string { return "test" })
if s := p.String(); s != "test" {
t.Fatalf("expected 'test', got %q", s)
}
}
type testStringerFunc func() string
func (s testStringerFunc) String() string {
return s()
}

View File

@@ -18,6 +18,7 @@ import (
"bytes"
"context"
"encoding/binary"
"strconv"
"time"
"go.etcd.io/etcd/auth"
@@ -40,6 +41,11 @@ const (
// We should stop accepting new proposals if the gap grows beyond a certain point.
maxGapBetweenApplyAndCommitIndex = 5000
traceThreshold = 100 * time.Millisecond
readIndexRetryTime = 500 * time.Millisecond
// applyTimeout is the timeout for the node to catch up on its applied index;
// it is used in lease-related operations such as LeaseRenew and LeaseTimeToLive.
applyTimeout = time.Second
)
type RaftKV interface {
@@ -97,7 +103,7 @@ func (s *EtcdServer) Range(ctx context.Context, r *pb.RangeRequest) (*pb.RangeRe
var resp *pb.RangeResponse
var err error
defer func(start time.Time) {
warnOfExpensiveReadOnlyRangeRequest(s.getLogger(), start, r, resp, err)
warnOfExpensiveReadOnlyRangeRequest(s.getLogger(), s.Cfg.WarningApplyDuration, start, r, resp, err)
if resp != nil {
trace.AddField(
traceutil.Field{Key: "response_count", Value: len(resp.Kvs)},
@@ -158,7 +164,7 @@ func (s *EtcdServer) Txn(ctx context.Context, r *pb.TxnRequest) (*pb.TxnResponse
}
defer func(start time.Time) {
warnOfExpensiveReadOnlyTxnRequest(s.getLogger(), start, r, resp, err)
warnOfExpensiveReadOnlyTxnRequest(s.getLogger(), s.Cfg.WarningApplyDuration, start, r, resp, err)
}(time.Now())
get := func() { resp, err = s.applyV3Base.Txn(r) }
@@ -257,6 +263,18 @@ func (s *EtcdServer) LeaseGrant(ctx context.Context, r *pb.LeaseGrantRequest) (*
return resp.(*pb.LeaseGrantResponse), nil
}
func (s *EtcdServer) waitAppliedIndex() error {
select {
case <-s.ApplyWait():
case <-s.stopping:
return ErrStopped
case <-time.After(applyTimeout):
return ErrTimeoutWaitAppliedIndex
}
return nil
}
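waitAppliedIndex races three outcomes: the applied index catching up (ApplyWait), server shutdown, and the one-second applyTimeout. A standalone sketch of the same three-way select, with stand-in error values rather than the etcd sentinels:

package main

import (
    "errors"
    "fmt"
    "time"
)

var (
    errStopped = errors.New("stopped")              // stands in for ErrStopped
    errTimeout = errors.New("wait applied timeout") // stands in for ErrTimeoutWaitAppliedIndex
)

func waitApplied(applied, stopping <-chan struct{}, timeout time.Duration) error {
    select {
    case <-applied: // the wait list fired: applied index reached the target
        return nil
    case <-stopping:
        return errStopped
    case <-time.After(timeout):
        return errTimeout
    }
}

func main() {
    applied := make(chan struct{})
    close(applied) // pretend the applied index is already caught up
    fmt.Println(waitApplied(applied, nil, time.Second)) // <nil>
}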
func (s *EtcdServer) LeaseRevoke(ctx context.Context, r *pb.LeaseRevokeRequest) (*pb.LeaseRevokeResponse, error) {
resp, err := s.raftRequestOnce(ctx, pb.InternalRaftRequest{LeaseRevoke: r})
if err != nil {
@@ -266,26 +284,32 @@ func (s *EtcdServer) LeaseRevoke(ctx context.Context, r *pb.LeaseRevokeRequest)
}
func (s *EtcdServer) LeaseRenew(ctx context.Context, id lease.LeaseID) (int64, error) {
ttl, err := s.lessor.Renew(id)
if err == nil { // already requested to primary lessor(leader)
return ttl, nil
}
if err != lease.ErrNotPrimary {
return -1, err
if s.isLeader() {
if err := s.waitAppliedIndex(); err != nil {
return 0, err
}
ttl, err := s.lessor.Renew(id)
if err == nil { // already requested to primary lessor(leader)
return ttl, nil
}
if err != lease.ErrNotPrimary {
return -1, err
}
}
cctx, cancel := context.WithTimeout(ctx, s.Cfg.ReqTimeout())
defer cancel()
// renewals don't go through raft; forward to leader manually
for cctx.Err() == nil && err != nil {
for cctx.Err() == nil {
leader, lerr := s.waitLeader(cctx)
if lerr != nil {
return -1, lerr
}
for _, url := range leader.PeerURLs {
lurl := url + leasehttp.LeasePrefix
ttl, err = leasehttp.RenewHTTP(cctx, id, lurl, s.peerRt)
ttl, err := leasehttp.RenewHTTP(cctx, id, lurl, s.peerRt)
if err == nil || err == lease.ErrLeaseNotFound {
return ttl, err
}
@@ -299,7 +323,10 @@ func (s *EtcdServer) LeaseRenew(ctx context.Context, id lease.LeaseID) (int64, e
}
func (s *EtcdServer) LeaseTimeToLive(ctx context.Context, r *pb.LeaseTimeToLiveRequest) (*pb.LeaseTimeToLiveResponse, error) {
if s.Leader() == s.ID() {
if s.isLeader() {
if err := s.waitAppliedIndex(); err != nil {
return nil, err
}
// primary; timetolive directly from leader
le := s.lessor.Lookup(lease.LeaseID(r.ID))
if le == nil {
@@ -428,9 +455,10 @@ func (s *EtcdServer) Authenticate(ctx context.Context, r *pb.AuthenticateRequest
return nil, err
}
// internalReq doesn't need to include Password because the above s.AuthStore().CheckPassword() already verified it.
// In addition, this keeps the WAL entry from recording the password in plain text.
internalReq := &pb.InternalAuthenticateRequest{
Name: r.Name,
Password: r.Password,
SimpleToken: st,
}
@@ -669,12 +697,8 @@ func (s *EtcdServer) processInternalRaftRequestOnce(ctx context.Context, r pb.In
func (s *EtcdServer) Watchable() mvcc.WatchableKV { return s.KV() }
func (s *EtcdServer) linearizableReadLoop() {
var rs raft.ReadState
for {
ctxToSend := make([]byte, 8)
id1 := s.reqIDGen.Next()
binary.BigEndian.PutUint64(ctxToSend, id1)
requestId := s.reqIDGen.Next()
leaderChangedNotifier := s.leaderChangedNotify()
select {
case <-leaderChangedNotifier:
@@ -684,91 +708,164 @@ func (s *EtcdServer) linearizableReadLoop() {
return
}
nextnr := newNotifier()
// as a single loop can unblock multiple reads, it is not very useful
// to propagate the trace from Txn or Range.
trace := traceutil.New("linearizableReadLoop", s.Logger())
nextnr := newNotifier()
s.readMu.Lock()
nr := s.readNotifier
s.readNotifier = nextnr
s.readMu.Unlock()
lg := s.getLogger()
cctx, cancel := context.WithTimeout(context.Background(), s.Cfg.ReqTimeout())
if err := s.r.ReadIndex(cctx, ctxToSend); err != nil {
cancel()
if err == raft.ErrStopped {
return
}
if lg != nil {
lg.Warn("failed to get read index from Raft", zap.Error(err))
} else {
plog.Errorf("failed to get read index from raft: %v", err)
}
readIndexFailed.Inc()
confirmedIndex, err := s.requestCurrentIndex(leaderChangedNotifier, requestId)
if isStopped(err) {
return
}
if err != nil {
nr.notify(err)
continue
}
cancel()
var (
timeout bool
done bool
)
for !timeout && !done {
trace.Step("read index received")
trace.AddField(traceutil.Field{Key: "readStateIndex", Value: confirmedIndex})
appliedIndex := s.getAppliedIndex()
trace.AddField(traceutil.Field{Key: "appliedIndex", Value: strconv.FormatUint(appliedIndex, 10)})
if appliedIndex < confirmedIndex {
select {
case rs = <-s.r.readStateC:
done = bytes.Equal(rs.RequestCtx, ctxToSend)
if !done {
// a previous request might have timed out. now we should ignore its response
// and continue waiting for the response to the current request.
id2 := uint64(0)
if len(rs.RequestCtx) == 8 {
id2 = binary.BigEndian.Uint64(rs.RequestCtx)
}
if lg != nil {
lg.Warn(
"ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader",
zap.Uint64("sent-request-id", id1),
zap.Uint64("received-request-id", id2),
)
} else {
plog.Warningf("ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader (request ID want %d, got %d)", id1, id2)
}
slowReadIndex.Inc()
}
case <-leaderChangedNotifier:
timeout = true
readIndexFailed.Inc()
// return a retryable error.
nr.notify(ErrLeaderChanged)
case <-time.After(s.Cfg.ReqTimeout()):
if lg != nil {
lg.Warn("timed out waiting for read index response (local node might have slow network)", zap.Duration("timeout", s.Cfg.ReqTimeout()))
} else {
plog.Warningf("timed out waiting for read index response (local node might have slow network)")
}
nr.notify(ErrTimeout)
timeout = true
slowReadIndex.Inc()
case <-s.applyWait.Wait(confirmedIndex):
case <-s.stopping:
return
}
}
if !done {
continue
}
if ai := s.getAppliedIndex(); ai < rs.Index {
select {
case <-s.applyWait.Wait(rs.Index):
case <-s.stopping:
return
}
}
// unblock all l-reads requested at indices before rs.Index
// unblock all l-reads requested at indices before confirmedIndex
nr.notify(nil)
trace.Step("applied index is now lower than readState.Index")
trace.LogAllStepsIfLong(traceThreshold)
}
}
func isStopped(err error) bool {
return err == raft.ErrStopped || err == ErrStopped
}
func (s *EtcdServer) requestCurrentIndex(leaderChangedNotifier <-chan struct{}, requestId uint64) (uint64, error) {
err := s.sendReadIndex(requestId)
if err != nil {
return 0, err
}
lg := s.Logger()
errorTimer := time.NewTimer(s.Cfg.ReqTimeout())
defer errorTimer.Stop()
retryTimer := time.NewTimer(readIndexRetryTime)
defer retryTimer.Stop()
firstCommitInTermNotifier := s.FirstCommitInTermNotify()
for {
select {
case rs := <-s.r.readStateC:
requestIdBytes := uint64ToBigEndianBytes(requestId)
gotOwnResponse := bytes.Equal(rs.RequestCtx, requestIdBytes)
if !gotOwnResponse {
// a previous request might have timed out. now we should ignore its response
// and continue waiting for the response to the current request.
responseId := uint64(0)
if len(rs.RequestCtx) == 8 {
responseId = binary.BigEndian.Uint64(rs.RequestCtx)
}
if lg != nil {
lg.Warn(
"ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader",
zap.Uint64("sent-request-id", requestId),
zap.Uint64("received-request-id", responseId),
)
} else {
plog.Warningf("ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader (request ID want %d, got %d)", requestId, responseId)
}
slowReadIndex.Inc()
continue
}
return rs.Index, nil
case <-leaderChangedNotifier:
readIndexFailed.Inc()
// return a retryable error.
return 0, ErrLeaderChanged
case <-firstCommitInTermNotifier:
firstCommitInTermNotifier = s.FirstCommitInTermNotify()
if lg != nil {
lg.Info("first commit in current term: resending ReadIndex request")
} else {
plog.Info("first commit in current term: resending ReadIndex request")
}
err := s.sendReadIndex(requestId)
if err != nil {
return 0, err
}
retryTimer.Reset(readIndexRetryTime)
continue
case <-retryTimer.C:
if lg != nil {
lg.Warn(
"waiting for ReadIndex response took too long, retrying",
zap.Uint64("sent-request-id", requestId),
zap.Duration("retry-timeout", readIndexRetryTime),
)
} else {
plog.Warningf("waiting for ReadIndex response took too long, retrying (sent-request-id: %d, retry-timeout: %s)", requestId, readIndexRetryTime)
}
err := s.sendReadIndex(requestId)
if err != nil {
return 0, err
}
retryTimer.Reset(readIndexRetryTime)
continue
case <-errorTimer.C:
if lg != nil {
lg.Warn(
"timed out waiting for read index response (local node might have slow network)",
zap.Duration("timeout", s.Cfg.ReqTimeout()),
)
} else {
plog.Warningf("timed out waiting for read index response (local node might have slow network) timeout: %s", s.Cfg.ReqTimeout())
}
slowReadIndex.Inc()
return 0, ErrTimeout
case <-s.stopping:
return 0, ErrStopped
}
}
}
func uint64ToBigEndianBytes(number uint64) []byte {
byteResult := make([]byte, 8)
binary.BigEndian.PutUint64(byteResult, number)
return byteResult
}
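The request id travels inside rs.RequestCtx as an 8-byte big-endian payload; requestCurrentIndex decodes it the same way to match responses to its own outstanding request. A quick round-trip sketch of that encoding:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    id := uint64(42)
    ctx := make([]byte, 8)
    binary.BigEndian.PutUint64(ctx, id)             // what sendReadIndex puts on the wire
    fmt.Println(binary.BigEndian.Uint64(ctx) == id) // true: how responses are matched
}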
func (s *EtcdServer) sendReadIndex(requestIndex uint64) error {
ctxToSend := uint64ToBigEndianBytes(requestIndex)
cctx, cancel := context.WithTimeout(context.Background(), s.Cfg.ReqTimeout())
err := s.r.ReadIndex(cctx, ctxToSend)
cancel()
if err == raft.ErrStopped {
return err
}
if err != nil {
lg := s.Logger()
lg.Warn("failed to get read index from Raft", zap.Error(err))
readIndexFailed.Inc()
return err
}
return nil
}
func (s *EtcdServer) linearizableReadNotify(ctx context.Context) error {
s.readMu.RLock()
nc := s.readNotifier

View File

@@ -13,7 +13,7 @@ if ! [[ "${0}" =~ "scripts/docker-local-agent.sh" ]]; then
fi
if [[ -z "${GO_VERSION}" ]]; then
GO_VERSION=1.12.12
GO_VERSION=1.12.17
fi
echo "Running with GO_VERSION:" ${GO_VERSION}

View File

@@ -6,7 +6,7 @@ if ! [[ "${0}" =~ "scripts/docker-local-tester.sh" ]]; then
fi
if [[ -z "${GO_VERSION}" ]]; then
GO_VERSION=1.12.12
GO_VERSION=1.12.17
fi
echo "Running with GO_VERSION:" ${GO_VERSION}

go.mod
View File

@@ -1,6 +1,6 @@
module go.etcd.io/etcd
go 1.14
go 1.16
require (
github.com/bgentry/speakeasy v0.1.0
@@ -8,16 +8,16 @@ require (
github.com/coreos/go-semver v0.2.0
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf
github.com/creack/pty v1.1.7
github.com/dgrijalva/jwt-go v3.2.0+incompatible
github.com/creack/pty v1.1.11
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4
github.com/fatih/color v1.7.0 // indirect
github.com/gogo/protobuf v1.2.1
github.com/golang-jwt/jwt v3.2.1+incompatible
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903
github.com/golang/protobuf v1.3.2
github.com/google/btree v1.0.0
github.com/google/uuid v1.0.0
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c // indirect
github.com/gorilla/websocket v1.4.2 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/grpc-ecosystem/grpc-gateway v1.9.5
@@ -35,22 +35,21 @@ require (
github.com/soheilhy/cmux v0.1.4
github.com/spf13/cobra v0.0.3
github.com/spf13/pflag v1.0.1
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8
github.com/stretchr/testify v1.3.0
github.com/tmc/grpc-websocket-proxy v0.0.0-20200427203606-3cfed13b9966
github.com/urfave/cli v1.20.0
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2
go.etcd.io/bbolt v1.3.3
go.etcd.io/bbolt v1.3.6
go.uber.org/atomic v1.3.2 // indirect
go.uber.org/multierr v1.1.0 // indirect
go.uber.org/zap v1.10.0
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3 // indirect
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456 // indirect
golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135 // indirect
google.golang.org/grpc v1.26.0
gopkg.in/cheggaaa/pb.v1 v1.0.25
gopkg.in/yaml.v2 v2.2.2
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc // indirect
gopkg.in/yaml.v2 v2.4.0
sigs.k8s.io/yaml v1.1.0
)

go.sum
View File

@@ -2,7 +2,6 @@ cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMT
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -18,13 +17,11 @@ github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7 h1:u9SHYsPQNyt5t
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf h1:CAKfRE2YtTUIjjh1bkBtyYFaUT/WmOqsJjgtihT0vMI=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/creack/pty v1.1.7 h1:6pwm8kMQKCmgUg0ZHTm5+/YvRK0s3THD/28+T6/kk4A=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.11 h1:07n33Z8lZxZ2qwegKbObQohDhXDQxiMMz1NOUGYlesw=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4 h1:qk/FSDDxo05wdJH28W+p5yivv7LuLYLRXPPD8KQCtZs=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@@ -35,29 +32,29 @@ github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeME
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1 h1:72R+M5VuhED/KujmZVcIquuo8mBgX4oVda//DQb3PXo=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=
github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903 h1:LbsanbbD6LieFkXbj9YNNBupiGHJgFeLpO0j0Fza1h8=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.0.0 h1:b4Gk+7WdP/d3HZH8EJsZpvV7EtDOgaZLtnaNGIu1adA=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c h1:Lh2aW+HnU2Nbe1gqD9SOJLJxW1jBMmQOktN2acDyJk8=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4 h1:z53tR0945TRRQO/fLEVPI6SMv7ZflF0TEaTAoU7tOzg=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92BcuyuQ/YW4NSIpoGtfXNho=
@@ -68,7 +65,6 @@ github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NH
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0 h1:VKV+ZcuP6l3yW9doeqz6ziZGgcynBVQO+obU0+0hcPo=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/json-iterator/go v1.1.6 h1:MrUvLMLTMxbqFJ9kzlvat/rYZqZnW3u4wkLzWTaFwKs=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7 h1:KfgG9LzI+pYjr4xvmz/5H4FXjokeP+rlHLhv3iH62Fo=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -108,7 +104,6 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP
github.com/prometheus/client_golang v1.0.0 h1:vrDKnkGzuGvhNAL56c7DBz29ZL+KxnoR0x7enabFceM=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -118,7 +113,6 @@ github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R
github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/sirupsen/logrus v1.2.0 h1:juTguoYk5qI21pwyTXY3B3Y5cOTH3ZUyZCg1v/mihuo=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
@@ -130,44 +124,39 @@ github.com/spf13/pflag v1.0.1 h1:aCvUg6QPl3ibpQUxyLkrEkCHtPqYJL4x9AuhqVqFis4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8 h1:ndzgwNDnKIqyCvHTXaCqh9KlOWKvBry6nuXMJmonVsE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20200427203606-3cfed13b9966 h1:j6JEOq5QWFker+d7mFQYOhjTZonQ7YkLTHm56dbn+yM=
github.com/tmc/grpc-websocket-proxy v0.0.0-20200427203606-3cfed13b9966/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/urfave/cli v1.20.0 h1:fDqGv3UG/4jbVl/QkFwEdddtEDjh/5Ov6X+0B/3bPaw=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
go.etcd.io/bbolt v1.3.3 h1:MUGmc65QhB3pIlaQ5bB4LwqSj6GIonVJXpZiaKNyaKk=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.uber.org/atomic v1.3.2 h1:2Oa65PReHzfn29GpvgsYwloV9AVFHPDk8tYxt2c2tr4=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0 h1:HoEmRHQPVSqub6w2z2d2EOVs2fjyFRGyofhKuyDq0QI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0 h1:ORx85nbTijNz8ljznvCMR1ZBIPKFn3jQrag10X2AsuM=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793 h1:u+LnwYTOOW7Ukr/fppxEb1Nwz0AtPflrblfvUudpo+I=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc h1:c0o/qxkaO2LF5t6fQrT4b5hzyggAkLLlCUjqfRxd8Q4=
golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4 h1:kUhD7nTDoI3fVd9G4ORWrbV5NY0liEs/Jg2pv5f+bBA=
golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a h1:gOpx8G595UYyvj8UK4+OFyY4rx037g3fmfhe5SasG3U=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a h1:oWX7TPOiFAMXLq8o0ikBYfCJVlRHBcsciT5bXOrH628=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7 h1:fHDIZ2oxGnUZRN6WgWFCbYBjH9uqVPRCUVUDhs0wnbA=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -176,36 +165,38 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5 h1:mzjBh+S5frKOsOBobWIMAbXavqjmgO17k/2puhcFR94=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456 h1:ng0gs1AKnRRuEMZoTLLlbOd+C17zUDepwGQBb/n+JVg=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654 h1:id054HUawV2/6IGm2IV8KZQjqtwAOo2CYlOToYqa0d0=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2 h1:+DCIGbF/swA92ohVg0//6X2IVY3KZs6p9mix0ziNYJM=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.24.0 h1:vb/1TCsVn3DcJlQ0Gs1yB1pKI6Do2/QNwxdKqmc/b0s=
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.26.0 h1:2dTRdpdFEEhJYQD8EMLB61nnrzSCTbG38PhqdhvOltg=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -214,8 +205,8 @@ gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qS
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=

View File

@@ -63,7 +63,7 @@ import (
const (
// RequestWaitTimeout is the time duration to wait for a request to go through or detect leader loss.
RequestWaitTimeout = 3 * time.Second
RequestWaitTimeout = 4 * time.Second
tickDuration = 10 * time.Millisecond
requestTimeout = 20 * time.Second
@@ -152,6 +152,10 @@ type ClusterConfig struct {
EnableLeaseCheckpoint bool
LeaseCheckpointInterval time.Duration
LeaseCheckpointPersist bool
WatchProgressNotifyInterval time.Duration
CorruptCheckTime time.Duration
}
type cluster struct {
@@ -279,23 +283,26 @@ func (c *cluster) HTTPMembers() []client.Member {
func (c *cluster) mustNewMember(t testing.TB) *member {
m := mustNewMember(t,
memberConfig{
name: c.name(rand.Int()),
authToken: c.cfg.AuthToken,
peerTLS: c.cfg.PeerTLS,
clientTLS: c.cfg.ClientTLS,
quotaBackendBytes: c.cfg.QuotaBackendBytes,
maxTxnOps: c.cfg.MaxTxnOps,
maxRequestBytes: c.cfg.MaxRequestBytes,
snapshotCount: c.cfg.SnapshotCount,
snapshotCatchUpEntries: c.cfg.SnapshotCatchUpEntries,
grpcKeepAliveMinTime: c.cfg.GRPCKeepAliveMinTime,
grpcKeepAliveInterval: c.cfg.GRPCKeepAliveInterval,
grpcKeepAliveTimeout: c.cfg.GRPCKeepAliveTimeout,
clientMaxCallSendMsgSize: c.cfg.ClientMaxCallSendMsgSize,
clientMaxCallRecvMsgSize: c.cfg.ClientMaxCallRecvMsgSize,
useIP: c.cfg.UseIP,
enableLeaseCheckpoint: c.cfg.EnableLeaseCheckpoint,
leaseCheckpointInterval: c.cfg.LeaseCheckpointInterval,
name: c.name(rand.Int()),
authToken: c.cfg.AuthToken,
peerTLS: c.cfg.PeerTLS,
clientTLS: c.cfg.ClientTLS,
quotaBackendBytes: c.cfg.QuotaBackendBytes,
maxTxnOps: c.cfg.MaxTxnOps,
maxRequestBytes: c.cfg.MaxRequestBytes,
snapshotCount: c.cfg.SnapshotCount,
snapshotCatchUpEntries: c.cfg.SnapshotCatchUpEntries,
grpcKeepAliveMinTime: c.cfg.GRPCKeepAliveMinTime,
grpcKeepAliveInterval: c.cfg.GRPCKeepAliveInterval,
grpcKeepAliveTimeout: c.cfg.GRPCKeepAliveTimeout,
clientMaxCallSendMsgSize: c.cfg.ClientMaxCallSendMsgSize,
clientMaxCallRecvMsgSize: c.cfg.ClientMaxCallRecvMsgSize,
useIP: c.cfg.UseIP,
enableLeaseCheckpoint: c.cfg.EnableLeaseCheckpoint,
leaseCheckpointPersist: c.cfg.LeaseCheckpointPersist,
leaseCheckpointInterval: c.cfg.LeaseCheckpointInterval,
WatchProgressNotifyInterval: c.cfg.WatchProgressNotifyInterval,
CorruptCheckTime: c.cfg.CorruptCheckTime,
})
m.DiscoveryURL = c.cfg.DiscoveryURL
if c.cfg.UseGRPC {
@@ -568,23 +575,26 @@ type member struct {
func (m *member) GRPCAddr() string { return m.grpcAddr }
type memberConfig struct {
name string
peerTLS *transport.TLSInfo
clientTLS *transport.TLSInfo
authToken string
quotaBackendBytes int64
maxTxnOps uint
maxRequestBytes uint
snapshotCount uint64
snapshotCatchUpEntries uint64
grpcKeepAliveMinTime time.Duration
grpcKeepAliveInterval time.Duration
grpcKeepAliveTimeout time.Duration
clientMaxCallSendMsgSize int
clientMaxCallRecvMsgSize int
useIP bool
enableLeaseCheckpoint bool
leaseCheckpointInterval time.Duration
name string
peerTLS *transport.TLSInfo
clientTLS *transport.TLSInfo
authToken string
quotaBackendBytes int64
maxTxnOps uint
maxRequestBytes uint
snapshotCount uint64
snapshotCatchUpEntries uint64
grpcKeepAliveMinTime time.Duration
grpcKeepAliveInterval time.Duration
grpcKeepAliveTimeout time.Duration
clientMaxCallSendMsgSize int
clientMaxCallRecvMsgSize int
useIP bool
enableLeaseCheckpoint bool
leaseCheckpointInterval time.Duration
leaseCheckpointPersist bool
WatchProgressNotifyInterval time.Duration
CorruptCheckTime time.Duration
}
// mustNewMember returns an initialized member with the given name. If peerTLS is
@@ -677,9 +687,14 @@ func mustNewMember(t testing.TB, mcfg memberConfig) *member {
m.useIP = mcfg.useIP
m.EnableLeaseCheckpoint = mcfg.enableLeaseCheckpoint
m.LeaseCheckpointInterval = mcfg.leaseCheckpointInterval
m.LeaseCheckpointPersist = mcfg.leaseCheckpointPersist
m.WatchProgressNotifyInterval = mcfg.WatchProgressNotifyInterval
m.InitialCorruptCheck = true
if mcfg.CorruptCheckTime > time.Duration(0) {
m.CorruptCheckTime = mcfg.CorruptCheckTime
}
lcfg := logutil.DefaultZapLoggerConfig
m.LoggerConfig = &lcfg
m.LoggerConfig.OutputPaths = []string{"/dev/null"}
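For context, a minimal sketch of how the two new knobs wired through above might be exercised from an integration test. NewClusterV3, Size, and Terminate are assumed from etcd's integration package; the values are illustrative only:

// Hypothetical usage of the ClusterConfig fields added in this diff
// (imports assumed: testing, time, go.etcd.io/etcd/integration).
clus := integration.NewClusterV3(t, &integration.ClusterConfig{
	Size:                        1,
	WatchProgressNotifyInterval: 200 * time.Millisecond, // server emits watch progress notifications at this interval
	CorruptCheckTime:            time.Minute,            // enables periodic corruption checks when > 0
})
defer clus.Terminate(t)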


@@ -23,6 +23,8 @@ import (
pb "go.etcd.io/etcd/etcdserver/etcdserverpb"
)
const ThroughProxy = false
func toGRPC(c *clientv3.Client) grpcAPI {
return grpcAPI{
pb.NewClusterClient(c.ActiveConnection()),


@@ -17,6 +17,7 @@
package integration
import (
"context"
"sync"
"go.etcd.io/etcd/clientv3"
@@ -25,6 +26,8 @@ import (
"go.etcd.io/etcd/proxy/grpcproxy/adapter"
)
const ThroughProxy = true
var (
pmu sync.Mutex
proxies map[*clientv3.Client]grpcClientProxy = make(map[*clientv3.Client]grpcClientProxy)
@@ -33,16 +36,23 @@ var (
const proxyNamespace = "proxy-namespace"
type grpcClientProxy struct {
grpc grpcAPI
wdonec <-chan struct{}
kvdonec <-chan struct{}
lpdonec <-chan struct{}
ctx context.Context
ctxCancel func()
grpc grpcAPI
wdonec <-chan struct{}
kvdonec <-chan struct{}
lpdonec <-chan struct{}
}
func toGRPC(c *clientv3.Client) grpcAPI {
pmu.Lock()
defer pmu.Unlock()
// dedicated context bound to the 'grpc-proxy' lifetime
// (so in practice the lifetime of the client connection to the proxy).
// TODO: Refactor to a separate clientv3.Client instance instead of the context alone.
ctx, ctxCancel := context.WithCancel(context.WithValue(context.TODO(), "_name", "grpcProxyContext"))
if v, ok := proxies[c]; ok {
return v.grpc
}
@@ -53,8 +63,8 @@ func toGRPC(c *clientv3.Client) grpcAPI {
c.Lease = namespace.NewLease(c.Lease, proxyNamespace)
// test coalescing/caching proxy
kvp, kvpch := grpcproxy.NewKvProxy(c)
wp, wpch := grpcproxy.NewWatchProxy(c)
lp, lpch := grpcproxy.NewLeaseProxy(c)
wp, wpch := grpcproxy.NewWatchProxy(ctx, c)
lp, lpch := grpcproxy.NewLeaseProxy(ctx, c)
mp := grpcproxy.NewMaintenanceProxy(c)
clp, _ := grpcproxy.NewClusterProxy(c, "", "") // without registering proxy URLs
authp := grpcproxy.NewAuthProxy(c)
@@ -71,20 +81,21 @@ func toGRPC(c *clientv3.Client) grpcAPI {
adapter.LockServerToLockClient(lockp),
adapter.ElectionServerToElectionClient(electp),
}
proxies[c] = grpcClientProxy{grpc: grpc, wdonec: wpch, kvdonec: kvpch, lpdonec: lpch}
proxies[c] = grpcClientProxy{ctx: ctx, ctxCancel: ctxCancel, grpc: grpc, wdonec: wpch, kvdonec: kvpch, lpdonec: lpch}
return grpc
}
type proxyCloser struct {
clientv3.Watcher
wdonec <-chan struct{}
kvdonec <-chan struct{}
lclose func()
lpdonec <-chan struct{}
proxyCtxCancel func()
wdonec <-chan struct{}
kvdonec <-chan struct{}
lclose func()
lpdonec <-chan struct{}
}
func (pc *proxyCloser) Close() error {
// client ctx is canceled before calling close, so kv and lp will close out
pc.proxyCtxCancel()
<-pc.kvdonec
err := pc.Watcher.Close()
<-pc.wdonec
@@ -104,11 +115,12 @@ func newClientV3(cfg clientv3.Config) (*clientv3.Client, error) {
lc := c.Lease
c.Lease = clientv3.NewLeaseFromLeaseClient(rpc.Lease, c, cfg.DialTimeout)
c.Watcher = &proxyCloser{
Watcher: clientv3.NewWatchFromWatchClient(rpc.Watch, c),
wdonec: proxies[c].wdonec,
kvdonec: proxies[c].kvdonec,
lclose: func() { lc.Close() },
lpdonec: proxies[c].lpdonec,
Watcher: clientv3.NewWatchFromWatchClient(rpc.Watch, c),
wdonec: proxies[c].wdonec,
kvdonec: proxies[c].kvdonec,
lclose: func() { lc.Close() },
lpdonec: proxies[c].lpdonec,
proxyCtxCancel: proxies[c].ctxCancel,
}
pmu.Unlock()
return c, nil
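The core of this change is a cancel-on-close lifecycle: every proxy goroutine is bound to one context created in toGRPC, and Close() cancels that context before draining the done channels. A stripped-down sketch of the pattern, using the names from the diff (error handling omitted):

// One context governs the proxies' lifetime.
ctx, cancel := context.WithCancel(context.Background())
wp, wpch := grpcproxy.NewWatchProxy(ctx, c) // exits when ctx is canceled
lp, lpch := grpcproxy.NewLeaseProxy(ctx, c)

// Later, from proxyCloser.Close():
cancel() // unblock the proxy goroutines...
<-wpch   // ...then wait for each of them to finish draining
<-lpch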


@@ -1,22 +1,22 @@
-----BEGIN CERTIFICATE-----
MIIDrjCCApagAwIBAgIUOl7DCgSvqQKhiihYrZDiBKNpQX4wDQYJKoZIhvcNAQEL
MIIDrjCCApagAwIBAgIUNkN+TZ3hgHno+H9j56nWkmb4dBEwDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0xOTEwMDgyMTE5MDBaFw0yOTEwMDUyMTE5
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0yMTAyMjgxMDQ4MDBaFw0zMTAyMjYxMDQ4
MDBaMG8xDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTELMAkGA1UEAxMCY2EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQDBNhwKD8oqOwNSDMZR+K6l6ocyXZzZPAIbv7co34xtjt25c8PPKz8FiBSU
M4YeZpzsSp7n7WSSSzVWqFTRBZzvjIrBzLu4CfxMKuUrQX1/BPYgbSxQO+5YKPzO
yaBMhIAEtW+WYsaa6PpWyL65L4giKpVoLS/UFTEBsf+lO6pwFpX2EJnIylLbpwEd
pAXIgVFsodHlP9Zc2tR1TqYetmJ6/A/p5sSZpgLy1y2+Mg4VTMKvs2kNAoh/+lEu
WPe204eMpkBXhukulOiJkVKNdhnCkLslt8ZaMWWqBvD9d94lXycMQ9wnGakPNc4W
5VX3rbLOGOX7xK37BCsh5HGodIrZAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRlB76vjaZyFLrEUGm6DQfyjmN6PjAN
BgkqhkiG9w0BAQsFAAOCAQEAD0cRNBQqOPNAUmKCH9xCr4TZFoE+P5aNePU39Jyp
qpJ1HjKI93zBk9aN5udDGPFhm2/iaKx6DuABbxCz0LwNhLiKP6UbHV8F2fTJJ5bo
crXvD0CEpor+Quh995lbq9bv29+zcDVw+Hw0QainBdHWkdw6RAgmbFnJxETDDz8z
VQ0DET3T736oxpEZ4DKQlbzK5LSgZH2lyPEEvzci4QjTZf5X/nitdx7fAdMFFPQ0
lI4l7nIuge5LTR0isEfWHx7Orx6l8dzkofG3fz5BjHCI4JInVlWq3MNNSybDI4pI
GFxeuE/U8K6kIixT8qCAh6Naq9/xuxFkffLmMKfZXoYLCg==
AoIBAQDZwQPFZB+Kt6RIzYvTgbNlRIX/cLVknIy4ZqhLYDQNOdosJn04jjkCfS3k
F5JZuabkUs6d6JcLTbLWV5hCrwZVlCFf3PDn6DvK12GZpybhuqMPZ2T8P2U17AFP
mUj/Rm+25t8Er5r+8ijZmqVi1X1Ef041CFGESr3KjaMjec2kYf38cfEOp2Yq1JWO
0wpVfLElnyDQY9XILdnBepCRZYPq1eW1OSkRk+dZQnJP6BO95IoyREDuBUeTrteR
7dHHTF9AAgR5tnyZ+eLuVUZ2kskcWLxH3y9RyjvVJ+1uCzbdydVPf0H1pBoqWcuA
PYjYkLKMOKBWfYJhSzykhf+QMC7xAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQpJiv07dkY9WB0zgB6wOb/HMi8oDAN
BgkqhkiG9w0BAQsFAAOCAQEA0TQ8rRmLt4wYjz0BKh+jElMIg6LBPsCPpfmGDLmK
fdj4Jp7QFlLmXlQSmm8zKz3ftKoOFPYGQYHUkIirIrQB/tdHoXJLgxCzI0SrdCiM
m/DPVjfOTa9Mm5rPcUR79rGDLj2BgzDB+NTETVDXo8mAL5MjFdUyh6jOGBctkCG/
TWdUaN33ZLwUl488NLaw98fIZ/F4d/dsyCJvHEaoo++dgjduoQxmH9Scr2Frmd8G
zYxOoZHG3ARBDp2mpr+I3UCR1/KTITF/NXL6gDcNY3wyZzoaGua7Bd/ysMSi1w3j
CyvClSvRPJRLQemGUP7B/Y8FUkbJ2i/7tz6ozn8sLi3V2Q==
-----END CERTIFICATE-----


@@ -0,0 +1,24 @@
-----BEGIN CERTIFICATE-----
MIID/DCCAuSgAwIBAgIUCzIuVb3586z5C2rQ65jeo4wfbfowDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0yMTAyMjgxMDQ4MDBaFw0zMTAyMjYxMDQ4
MDBaMGIxDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKmOrIfZ9mH9
O3wLgGinUXDAG+XAP6P6NG9VkWaCUfOkY8x8RKSeuOri31EgYGmFYmQXCtS/WlHD
GCLrUhTnIrC1/WqvuPJIoMMTw7JLh59IuIWdlxds7FWjyuLmi4oUHvCG6aXiT/Z3
ylp4r/HBL+R6KKqQpRjFfwhb1bIWpxZe5ghUtx4AuAW7ayQgpC7FJ3aVW/SS5p0m
IxyKqGvl45IsZuZY59Sa/X2AWSRpr+qe0tM4n1R+1bDhjcV6EuhyfubdSkZHfUJp
PaoUdynHT/VuI5xMF4OXbiwXP36XvHiHd9LIrPOyubrRYvn8dKweBJkvNCnlQo09
zVH5zb9p0DsCAwEAAaOBnDCBmTAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYI
KwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFG5evtY/
UIPMBcah3B/1BWDI14nUMB8GA1UdIwQYMBaAFCkmK/Tt2Rj1YHTOAHrA5v8cyLyg
MBoGA1UdEQQTMBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOCAQEA
VBjy5UtSe/f66d7dKgZVVfKDiOeSb1knATSy7/JyubxVgq64yTN6fqIYRQg4gyVW
IPf8W4BbhEXeA7VumVuTTKjILoufGecjrjA1Skb4lWGfV21A51Fs9TcMLPiQYZ1b
e2J2Trtd0CsteQj4BDrbgiSxahJBaj+4PfXM1tef51DJs+gEg16DGxdzFBtlY+ih
SwOX6YcUyxYzYX2szafPpVRuQqU0B63FkvBbsNMX1KamtAsLtvf/JxYpPY9eg5t/
b5L6pXQkp6bK3q8Gv1WApjD8tcwqBkcJrbjgJ6gfW9h3zEbLmxkAv46sJodVLInL
SYrHgrQ7TRd29DybB6cPAQ==
-----END CERTIFICATE-----


@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAqY6sh9n2Yf07fAuAaKdRcMAb5cA/o/o0b1WRZoJR86RjzHxE
pJ646uLfUSBgaYViZBcK1L9aUcMYIutSFOcisLX9aq+48kigwxPDskuHn0i4hZ2X
F2zsVaPK4uaLihQe8IbppeJP9nfKWniv8cEv5HooqpClGMV/CFvVshanFl7mCFS3
HgC4BbtrJCCkLsUndpVb9JLmnSYjHIqoa+Xjkixm5ljn1Jr9fYBZJGmv6p7S0zif
VH7VsOGNxXoS6HJ+5t1KRkd9Qmk9qhR3KcdP9W4jnEwXg5duLBc/fpe8eId30sis
87K5utFi+fx0rB4EmS80KeVCjT3NUfnNv2nQOwIDAQABAoIBAECPnM4VhiUFgTLY
RkqS+wWNgJHYw+KyEGkcEcMQeBfnTkC8SH7OGOcG/7UqOMu1CCPISk17lu5u9K/H
HnfrEmBqy1VmF2vZj6z3x5oJ/FgAHpJx0OgQh2SMe2IuGo+23ZkEJc8N/xh/wEL2
lTfeMVgz02wuq05lVNtf7FxlF7YCSaxxxDtQQTDR3BSq6l12tB81TQvAD+yh35Gs
1jGhPeKHWc1jny309vczpJq4eIK2xhE+MT8YZAiuHCLGOHUlBBpleo5knyMueVE/
/Ezbz6eFiIFYpoHA3d3pv3Dy+5WVnhD0YDQPe+jCQrzxyFGDiN488JQ2tVeRM85b
q0naaZECgYEA1T8XWPqRkhjMy0vJxTVux+wdD3u9DIvgBfHxjBUS2xlZdOiLLmBD
CDVLKe+Twn0KiTb0eU+zNn4g1qnxLXmAH7xYWPLtqoI4mM0O89SWxr06ExplamHp
w5k5O3eJr0veKyCUqVbZRZsOQLi1zqEbaOqpA7TrsQOOT5io+0vVoV0CgYEAy407
JRaGBTBNOPayBVFY+7PRsSRPtcjzbOHriCe4rDn8aIPPmzHyWEIL0pXk5I1eW978
veC/2oZMsxO2vaKta1bSSOrNA8UJQ+t5Ipp6Fj6yAI5dMDcgOIctE8ctxDUfccQM
kS5DDw0W3zYMI7ixyOe6ydX4OAlcpZgqFpNIJncCgYEAuB1pAyIUXZeb+krNQsAH
jgWGcb/cUeDS408pxlDLnvAcFJxSzw+90HBzHRoE8X8UgbQ5ECSIDxyHLdA8s46b
2Mq9XM8h9H3Kb+NcbZm3NJBce/Hmbhtrwb2hdH6ZGgjfIU1YDX02yqo9fBP+pRDk
oYk5tEGY3ZS8YmzkOVQYduECgYACgnNAOc7dMYNCOIhpWF9oewcS0AfLjfayWPa2
bwbv2KcsArQEjdEXFXlf10lDKBsJtu4WyTaUUyOO8adHH0JUGHXvQDXW3g8HL1gG
/TCUJaG8MAUmGwfiqof7vnDqAl2o4WnmQFPDU738coYjypsmhvTemCy/RB5ITF/4
d0hkcQKBgAWpzCnPAh4tPWw1OGE2QSsbRR15hR+67BltiZ+nxJnDcXXS2i08QBkA
3VR0ywWsos+Sox6jm8LpH8RiKqZ5laUjHHUUuX1Tgfxn4EmHo6bBffw7k9vkY7xr
w5Nw/gMRevkRrDQ4Z66z2HspyCHfmdPzWX9zsaSc4nzNs7fw2/uf
-----END RSA PRIVATE KEY-----



@@ -8,6 +8,24 @@
"client auth"
],
"expiry": "87600h"
},
"profiles": {
"client-only": {
"usages": [
"signing",
"key encipherment",
"client auth"
],
"expiry": "87600h"
},
"server-only": {
"usages": [
"signing",
"key encipherment",
"server auth"
],
"expiry": "87600h"
}
}
}
}
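One way to confirm what these profiles produce is to parse a generated certificate and inspect its extended key usages. A hedged Go sketch, assuming the server-serverusage.crt file name used by gencerts.sh below:

// Imports assumed: crypto/x509, encoding/pem, fmt, log, os.
pemBytes, err := os.ReadFile("server-serverusage.crt")
if err != nil {
	log.Fatal(err)
}
block, _ := pem.Decode(pemBytes)
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
	log.Fatal(err)
}
// For the server-only profile this should contain only x509.ExtKeyUsageServerAuth.
fmt.Println(cert.ExtKeyUsage)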


@@ -1,5 +1,7 @@
#!/bin/bash
set -e
if ! [[ "$0" =~ "./gencerts.sh" ]]; then
echo "must be run from 'fixtures'"
exit 255
@@ -7,68 +9,57 @@ fi
if ! which cfssl; then
echo "cfssl is not installed"
echo "use: go install -mod mod github.com/cloudflare/cfssl/cmd/cfssl github.com/cloudflare/cfssl/cmd/cfssljson"
exit 255
fi
cfssl gencert --initca=true ./ca-csr.json | cfssljson --bare ./ca
mv ca.pem ca.crt
if which openssl >/dev/null; then
openssl x509 -in ca.crt -noout -text
fi
# generate DNS: localhost, IP: 127.0.0.1, CN: example.com certificates
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr.json | cfssljson --bare ./server
mv server.pem server.crt
mv server-key.pem server.key.insecure
# gencert [config_file.json] [cert-name]
function gencert {
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
$1 | cfssljson --bare ./$2
mv $2.pem $2.crt
mv $2-key.pem $2.key.insecure
}
# generate DNS: localhost, IP: 127.0.0.1, CN: example.com certificates, with dual usage
gencert ./server-ca-csr.json server
# generates a certificate that only has the 'server auth' usage
gencert "--profile=server-only ./server-ca-csr.json" server-serverusage
# generates a certificate that only has the 'client auth' usage
gencert "--profile=client-only ./server-ca-csr.json" client-clientusage
# generates a certificate that does not contain a CN, to be used for proxy -> server connections.
gencert ./client-ca-csr-nocn.json client-nocn
# generate DNS: localhost, IP: 127.0.0.1, CN: example.com certificates (ECDSA)
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr-ecdsa.json | cfssljson --bare ./server-ecdsa
mv server-ecdsa.pem server-ecdsa.crt
mv server-ecdsa-key.pem server-ecdsa.key.insecure
gencert ./server-ca-csr-ecdsa.json server-ecdsa
# generate IP: 127.0.0.1, CN: example.com certificates
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr-ip.json | cfssljson --bare ./server-ip
mv server-ip.pem server-ip.crt
mv server-ip-key.pem server-ip.key.insecure
gencert ./server-ca-csr-ip.json server-ip
# generate IPv6: [::1], CN: example.com certificates
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr-ipv6.json | cfssljson --bare ./server-ip
mv server-ip.pem server-ipv6.crt
mv server-ip-key.pem server-ipv6.key.insecure
gencert ./server-ca-csr-ipv6.json server-ipv6
# generate DNS: localhost, IP: 127.0.0.1, CN: example2.com certificates
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr2.json | cfssljson --bare ./server2
mv server2.pem server2.crt
mv server2-key.pem server2.key.insecure
gencert ./server-ca-csr2.json server2
# generate DNS: localhost, IP: 127.0.0.1, CN: "" certificates
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr3.json | cfssljson --bare ./server3
mv server3.pem server3.crt
mv server3-key.pem server3.key.insecure
gencert ./server-ca-csr3.json server3
# generate wildcard certificates DNS: *.etcd.local
gencert ./server-ca-csr-wildcard.json server-wildcard
# generate revoked certificates and crl
cfssl gencert --ca ./ca.crt \
@@ -80,14 +71,4 @@ mv server-revoked-key.pem server-revoked.key.insecure
grep serial revoked.stderr | awk ' { print $9 } ' >revoke.txt
cfssl gencrl revoke.txt ca.crt ca-key.pem | base64 --decode >revoke.crl
# generate wildcard certificates DNS: *.etcd.local
cfssl gencert \
--ca ./ca.crt \
--ca-key ./ca-key.pem \
--config ./gencert.json \
./server-ca-csr-wildcard.json | cfssljson --bare ./server-wildcard
mv server-wildcard.pem server-wildcard.crt
mv server-wildcard-key.pem server-wildcard.key.insecure
rm -f *.csr *.pem *.stderr *.txt



@@ -1,20 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDRzCCAi+gAwIBAgIUKgQJ/CMaFxc4JcwwGyiT/7KpedIwDQYJKoZIhvcNAQEL
MIIDRzCCAi+gAwIBAgIUVE0fLzH6W4M2gJVJhmQdkQdKpO0wDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0xOTEwMDgyMTE5MDBaFw0yOTEwMDUyMTE5
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0yMTAyMjgxMDQ4MDBaFw0zMTAyMjYxMDQ4
MDBaMHgxDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTEUMBIGA1UEAxMLZXhhbXBsZS5jb20wWTATBgcqhkjOPQIBBggqhkjO
PQMBBwNCAARXbc8naiFZ3Y2LujrnDCScVNRks/TR+aXPmnuPGjDxbuHxSSbC8Q2z
iTvCkgsIcsifmUIEQcI4v3Kbkj3qMF1so4GcMIGZMA4GA1UdDwEB/wQEAwIFoDAd
PQMBBwNCAAREhCklwbvzFozNPkr3Y5PGrQr1ygfL5Q+XhvPOTTEjEN/zwjw9L0Qa
jfhE8Md89qED0j8xHAKeQRrulgv/FWXXo4GcMIGZMA4GA1UdDwEB/wQEAwIFoDAd
BgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNV
HQ4EFgQU3z1DifT82BfoU5DfMe08meeYmSUwHwYDVR0jBBgwFoAUZQe+r42mchS6
xFBpug0H8o5jej4wGgYDVR0RBBMwEYIJbG9jYWxob3N0hwR/AAABMA0GCSqGSIb3
DQEBCwUAA4IBAQAE3bhZcJuGrnMGMgebCFMuAXvoF9twYIHXpxNOg6u0HTIWOsMB
njEJW/rfZFE/RAJ6JdOMNE2bq2LbJ8dUA25PX3uz6V4omm9B3EvEG9Hh3J+C77XQ
P+ofiUd+j06SdewoxrmmQmjZZdotpFUQG3EEncs+v94jsamwGNLdq4yWDjFdmyuC
hqzSkD48aGqP2Q93wfv8uIiCEmJS1vITTm2LxssCLfiYGortpCx32/DWme8nUlni
1U/pRTx8Brx00dMeruTGjCCpwb8k453oNV6u0D1LsQ9y5DuyEwmZtBEHBN1kVPro
yYW3/b1jcmZk8W9GXqcXy16LbWmpvJmTHPsj
HQ4EFgQUXJTZKpg0EYo+wtiYmacwd1OSKfgwHwYDVR0jBBgwFoAUKSYr9O3ZGPVg
dM4AesDm/xzIvKAwGgYDVR0RBBMwEYIJbG9jYWxob3N0hwR/AAABMA0GCSqGSIb3
DQEBCwUAA4IBAQBd3RuqNsDxYS/RRc9Df8gLaXn/QQhATx6s3+pKHYplIH9sGPCh
ybI4MpwLnuoqxew8dxy7oi/BBXPWSUuVznRV/vLKAIuULoKg2Eb06d17OmqOaakl
asGnJ7z9e6mxHPVDzjkORNlJShY4YOG0tUg4hC5/9Qxh6EGNUKtRC3x4Tm8Jl6me
uGLUjsQV7YhQNRDFECUQmKwolEbwXbAi2SN3I37CBFDFwDT/0BxtfGSn0ZiXRHze
k1dmg9V3r9UPcucb3Djoad/N5YClfFtX/ANC8bufkkdfQLQwIBCUwcPlGxrBAVoD
BoqpmQdpQ/yINKesAD/r5dF2SmUEhZhn6GSK
-----END CERTIFICATE-----


@@ -1,5 +1,5 @@
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIK3K2gimOw2P0pZ4soFAopriuORuqpRptllFXNRhCRV0oAoGCCqGSM49
AwEHoUQDQgAEV23PJ2ohWd2Ni7o65wwknFTUZLP00fmlz5p7jxow8W7h8UkmwvEN
s4k7wpILCHLIn5lCBEHCOL9ym5I96jBdbA==
MHcCAQEEIEmvbcwNyqDHWXBG2IHZffLme5Ti8oHYzaapBvwkRSWWoAoGCCqGSM49
AwEHoUQDQgAERIQpJcG78xaMzT5K92OTxq0K9coHy+UPl4bzzk0xIxDf88I8PS9E
Go34RPDHfPahA9I/MRwCnkEa7pYL/xVl1w==
-----END EC PRIVATE KEY-----


@@ -1,24 +1,24 @@
-----BEGIN CERTIFICATE-----
MIIEBzCCAu+gAwIBAgIUSvxuG1lgImYpnaK4sPaCiMAd0lgwDQYJKoZIhvcNAQEL
MIIEBzCCAu+gAwIBAgIUIqRH3sc1siaGkZVpWKSoDAIEMucwDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0xOTEwMDgyMTE5MDBaFw0yOTEwMDUyMTE5
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0yMTAyMjgxMDQ4MDBaFw0zMTAyMjYxMDQ4
MDBaMHgxDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTEUMBIGA1UEAxMLZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQC7mJOiyqWfmNM5ptQZ22plotVfgoBf9fHTzMw/ap2Vl0/0
4V3GEyYCdPt6V87GWzjBSO9GAmlISBQQybMieZTaTm8KKW2066iJDKseBCv9m4nS
mHv0oDqp3SHsZQ2xHis4lbi7ws2thdqpmjw4Dv96SUiCJUjhcBX4kBMRcOGgk1RF
ENIOInTSKlAiwNF1NSnhj8wMNw7mjw90jpAGAuPuuiQ7+AYHJBJqtT9mRikR8ppw
isjEE6kslCCg2RC45AiF4LXNp7A7Xwm6P34XJ6T9PJUh/r3pa0xHRuI2zQLaW8Z/
b6NYkUGMbHR7AY/+2JzOfnnnQcSB8EYC9bHadvHnAgMBAAGjgZEwgY4wDgYDVR0P
A4IBDwAwggEKAoIBAQDUTSITIyfyfyE+fdQGM8sali6fk4pFCpuODbljr/ywVI81
/MrLGhX/zySgRHKKG5f23CWbImgtIsPbScKxNNQcvDRXmbsLtlH/o7Eoun/e0aGp
rzr0p0QNGGeKFfUBTnaB+Z7+V92oxjNAuyeMZstqJxjOWDGpCS+yFVvP/ElRsL2H
JVZHWOykwKdLznRUjlw//PcvJrNsO9DzYluZ6tDqlN5nyB6aW0h9ZkcCskGDo+1V
94tjh5rGTscREVIWxVxHgLMFlvaEJlz64pVgc8VWD6famaiqP5nC0WOx5BJtSrmF
3WH97DkfVcXWpqUbEAey6G4sLU0a/08iKoWbJmbvAgMBAAGjgZEwgY4wDgYDVR0P
AQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMB
Af8EAjAAMB0GA1UdDgQWBBSPaFA2Jh7s/IJN/Yw/QFFR4pO3nDAfBgNVHSMEGDAW
gBRlB76vjaZyFLrEUGm6DQfyjmN6PjAPBgNVHREECDAGhwR/AAABMA0GCSqGSIb3
DQEBCwUAA4IBAQAO2EnUXDlZAzOJLmkzQQF/d88PjvzspFtBfj/jCGzK6bpjeZwq
oM1fQOkjuFeNvVLA3WHVT0XEpZEM8lwAr/YwnBWMFlNd3Vb2Cho5VaQq0nVfhYoB
tpzoWcf0Qx4cALesQZ3y2EnXePpzky1R4MfHqulYrmZKSBQsERob/7YgSBk+ucV9
OHLzYxm4OvYvDoR54REq+vgZ3ohoDmBrNNv9OmUHLIrUi+nBpBgnww85Dc7cKB27
EEKxqIfCNTeHSemvzfK/1M6manQX6eyGe48nOwQMV/ocfY6SeA7RABT0l/UsbeMp
g/b2RU+liZ3e8FziW4/1VTt1pmFAN/2hnb0v
Af8EAjAAMB0GA1UdDgQWBBQiwt5ZVBlE2nrYnp5z4R6ADUnCwTAfBgNVHSMEGDAW
gBQpJiv07dkY9WB0zgB6wOb/HMi8oDAPBgNVHREECDAGhwR/AAABMA0GCSqGSIb3
DQEBCwUAA4IBAQAh5Jxw4TbDnQJMzj53KxEgNxbd++p7LMhZkN5X8jtuDe81rzeQ
CyvJlrLEVPKbXiQF6cFV3TOvrZY/PM8UoAcXv0noEtrlRrrjbk9e4My3Zu4O1IHB
MvfXOuF8JN5L3kCrcUcjhMrx8XTLyathSTQxG6TCh7X4+/vXufbZkkzg8nhtSSWB
t7hWYo3KA25TgjP4E68BpnddNe9ad2sMIpIM1ovhhW6v8Uzux+eKkBkeb2Cdmrhd
CEzK33WrDPCLsxa8hOByCWFj76qQ+Q5kgJvK3F7kBLWijhRKLhJBQzCJDCvx4yE1
w/e+aG74m1tAID1XjTuCZIcTZTEHZi31ogVM
-----END CERTIFICATE-----


@@ -1,27 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAu5iTosqln5jTOabUGdtqZaLVX4KAX/Xx08zMP2qdlZdP9OFd
xhMmAnT7elfOxls4wUjvRgJpSEgUEMmzInmU2k5vCilttOuoiQyrHgQr/ZuJ0ph7
9KA6qd0h7GUNsR4rOJW4u8LNrYXaqZo8OA7/eklIgiVI4XAV+JATEXDhoJNURRDS
DiJ00ipQIsDRdTUp4Y/MDDcO5o8PdI6QBgLj7rokO/gGByQSarU/ZkYpEfKacIrI
xBOpLJQgoNkQuOQIheC1zaewO18Juj9+Fyek/TyVIf696WtMR0biNs0C2lvGf2+j
WJFBjGx0ewGP/ticzn5550HEgfBGAvWx2nbx5wIDAQABAoIBAB0jBpM7TFwsfWov
6jOV68Gbd+6cs1m0NnpCDdsvsQgh904+jrUMFlQ9XS3UY45Vbsw+isNh7n5Gi69L
1KHfJmp90itO4fY+v++BYzaHSVnbhZ2LB32oQVROv00bKPRAjk/8mTO4fv+bkanU
BdRjJ/UTWsq0BczV/uObZQrJcJHi6+sAMYw4b/kxzTALd+UuvmOP7Z/NoWW6x8Mm
ahHgqaMwA0O1f4DsdKYnSUVMF9DNGsxKCUYSYR6RH93Bq/Eo0q1U2egmLIMcTVW9
7QSWsJoZuXlzkq7Hb7mxGdppa6kSzA/VM26qPNE9Cjg4tCMu1RJSfgkcnv27Y8vZ
fZSq3zkCgYEA68VjIqG6sj43SZSvD+Z+Dfuzc+lO4YBSI0Yru8B4ZZq0vfTVQdM/
uf0Bpk/nMbqec/kfcPMHP8zznLe8rcmfZXNQFIaajOb6rzWhCRSgbX98MeGnUe/y
9sG+zFSRrAPDaVRJZwSYILs6o6Hz4o6DBCvr8iKFfm26SLB7hIjwx8UCgYEAy7EL
dIMdsGDzfmxAYqad3oy/N1KVp96zfdnHEiIC0oiXz3YfI7YLFj54yXxx5rHR2/AK
wOo7b90Rc8R0PgtKedKrz5p/E0Bz723ToTxHjsqgVRZqYaEKUOp8wR2t2DJOF9b9
0C/qp6iUy0IOTBYyu3BCMV0aB5kRW62jXJIsQbsCgYB6uO7mOurUFsBug38wNpjM
rIR3RCz0Afg/NipTe1bwBDwqWEOdFNmp9QEj0ZmU7//EfBsajtXqJsNzgswqZbWb
eA9p77qItz4rby3YbS0oceByknOmmdCNEsI+15JPyFGyBNaEUgbhmrNmM0mgVu/p
fvc8vS1hZro9VeelUCaMxQKBgFDgnXHH1fQAqu4ZwX7qNWj2bb5jtjSPgqmH3Tlf
88rwnYasmjStxb0xVPh7xyYYmQFBUKPE3ZDPMGzNJnK0PQAeHEY0TByyzNXWv98X
djpGTl86pUbakKQMVzi+thZP8x4YKXOOcxfbIimKsu6XKdGvAzlihEFcD75dNa4+
BACdAoGBAJevnrC7M/KyDDGW3ci4sFcn7MxRGqLBulwGoCuM+zecbG7NBvDynoaH
NRGpASiboRJyCEoIQivvkZf+K7L/oB4bL/ThF2ZpJUe471tq0444xnXdHRDLG0Dw
OnBl27e3iAiUctqR51ufXKOUaNEf4gcsS9duELMPBxM70GE2Q/2r
MIIEowIBAAKCAQEA1E0iEyMn8n8hPn3UBjPLGpYun5OKRQqbjg25Y6/8sFSPNfzK
yxoV/88koERyihuX9twlmyJoLSLD20nCsTTUHLw0V5m7C7ZR/6OxKLp/3tGhqa86
9KdEDRhnihX1AU52gfme/lfdqMYzQLsnjGbLaicYzlgxqQkvshVbz/xJUbC9hyVW
R1jspMCnS850VI5cP/z3LyazbDvQ82JbmerQ6pTeZ8gemltIfWZHArJBg6PtVfeL
Y4eaxk7HERFSFsVcR4CzBZb2hCZc+uKVYHPFVg+n2pmoqj+ZwtFjseQSbUq5hd1h
/ew5H1XF1qalGxAHsuhuLC1NGv9PIiqFmyZm7wIDAQABAoIBAQDP8BSV5fM0guxO
xvOqd4RRMBPOXLYrVW5yvmJ8j1zSYKA8YrNGJvCxM3ROPXxqZQh807dJsXOT8d8f
o6k74+B1nKkvu/UGTbcWyn+0wqaH2Y+cIXN/OW1f3i1bhJIKi41rVNEzkWAb9LUy
i5z62ZwXBuA3Cw7o34SFyoG4vwQZK1efygUXGIKSHdwAW7mwPD3MLIuIJgDrj2P4
2aLb1jcRyUKCY06QD7tsOH6pn59qkjVnoYMJl7B8DiCRFXjtt1yRZ5RU9sMT8PxV
Px29qIouHvKcLt5cNX/Z1VjoopTHCOvhKMmQzP0ubPp7/Ytu2tPQcc/8DoZ/3+aw
Zr+27SSRAoGBANva1d0wSR17QTjfcKLDup9+ERztyW2fxi7hGgKIqzPCY4E+vGTX
KC5eToNpyo89COa/Z3rHKzLlSYpDaQiB/kWqEm3HPEW2Yq1YHaI7nZsuBPUkMtH+
xOBFyZUYG2aaqQiBuvvSJRxP5puAXGlIWAQp9qOLwtbQJ0gGRMhq4yutAoGBAPc0
Y0xzPNpTkjcRGDN7srcZw12tqy5bpi/TW+Ynlfxg2OnO5e4W9TEgFxcsNeqAB1IB
QBd1QhVnpHFANIThX4XNQ59FJ1jOYuRwKWpjBNOol3YWhLlBwPrRLxJNxYXbZha4
zafhrvv3VMatU9Tc+a4gnZ+ooSM1m/rTAQqfunCLAoGAIAHC4tm1u0IHY8U7u6Zt
E+0hhqmjin8ZNhf1VmsZKYbiP52nhbLBGccG/SC4qZPEKPuyj/BQ/K7evu9Dakaq
gu/YkPzRbIC56uyKG+U786yGcj3b3DCP7uqaB0ekLZLUivWACEs2teF3/Cl6yqUK
k0icrICbU/Sn01d+SgMtoV0CgYBzUx9YFRK4j/BQfEscCYMwZHZ+B30qnVsESMhA
sQsJuGy5dupRjqhIiL3884UbpyrDGQ47Y1q2/aj7pIZbz4BuvXnknbBjf7Um+SR5
G0SvMaGnV44HlyNeX6RkF6AkeFxCEWjv/xtRNOt53HaVgZmBoHmoeFTkRihEdZew
yx+BTQKBgHMs/wLEaC0zLeYZV90s8cR+LqrRl1IkjEtpHg09ESDLX5IyobivSmbB
wOkkFI0N9KzcDjwj3qmPeyXiFIAOX5ToXAWM1tljUOfpniwxv69bzUkSZqwopI/0
OK6gMIYt2GakcSIQrqHBuvGEKEM8I7Aa4QdbO55J4T6qYMqO4xyV
-----END RSA PRIVATE KEY-----


@@ -1,24 +1,24 @@
-----BEGIN CERTIFICATE-----
MIIEEzCCAvugAwIBAgIUYTkp3oUkde9wFRkJA1LlvwFrZ3MwDQYJKoZIhvcNAQEL
MIIEEzCCAvugAwIBAgIUPVE8WTSDgzGco/kjCilmiddHMvwwDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0xOTEwMDgyMTE5MDBaFw0yOTEwMDUyMTE5
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0yMTAyMjgxMDQ4MDBaFw0zMTAyMjYxMDQ4
MDBaMHgxDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTEUMBIGA1UEAxMLZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDcRWZxskwCNXhprj8XCtkxj9GP4z9hVgUxgquSBync1hic
or6qNgrUztv6nlALdQdf+TbPKyGEwCgAlKU/hnJK6lAG3+riyShnyM74/ulV1wYS
F3Rkeh0nNCo95TPNq4GLB+sMfzwoSsT0srPX7KzCqpGy+G7sB0JBNwkTZLkCuMZf
dkkmcZJ3zqIiOzJPlcQa4iBa0L1nV3Uuv49kLZLMCLMslg//IZxC09fnmjn+XLcV
4+RpOKIn7AMN1kqPqmaB6gk2aCbYTZZ8aS9+cOJmTERbynyX4y4sRV18ED3dRNvs
HCedgPOp53nqDneSOqOhhg+Mb95tnMQq1on0+TRDAgMBAAGjgZ0wgZowDgYDVR0P
A4IBDwAwggEKAoIBAQDmszDivhUchvYxLdlHvRonIlsQ9VRVO5C4iQwrTzVTjb9y
q0oXGx1uQGNZyjMMStsZ4Vgi7UqBaEe9z3DIQxe4rzEn8B9NqWS6VgGBI3N9mjiI
BBNzUdJOoaelFqFbIyzL6cxCndovL6hURTxJTLI80dM5pdhfddPckfXmjD6qrZ4k
9bhIyQX+TZViHs6il5HWMJi9XdEW+VCBZ+Zaqjb0vMbBh5mEZIpYCdz7WeoowRUl
kcP7AbFg/PzP6Tg5xe5sNmrSWZSB4QuGfTV9EMTFVA5WqI2Z+T388IOuh5DEw6NL
swHME4eMsCwbZYees7xZh0tERDZUeOJFAifNrd0tAgMBAAGjgZ0wgZowDgYDVR0P
AQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMB
Af8EAjAAMB0GA1UdDgQWBBTFoXLQVq+Yg2AlRIirXj5ho0PMrjAfBgNVHSMEGDAW
gBRlB76vjaZyFLrEUGm6DQfyjmN6PjAbBgNVHREEFDAShxAAAAAAAAAAAAAAAAAA
AAABMA0GCSqGSIb3DQEBCwUAA4IBAQB4bl4f8TI7k+nlHe4MhJuHP1BKHB5O5SeG
wrgI2+qV38UrKvTag2Z3OVKw12ANGN1vcOUrDS7cCtIZ8Aar7JpBgWrYvVlhAtc5
3syj74Iapg1Prc0PFRmMQTZ4mahRHEqUTm3rdzkwMjNDekBs9yyBsKa08Qrm9+Cz
Z84k/cQTBc3Bg6Xw3vUiL4EmeRQudBQAvh/vdxj6X+fwKmvLbPpgogXuQS/lHhFQ
/rZ+s22RHLlqzAMuordjxS4Nw91dqYFwdYVvEmsK89ZnSWqwLvFCJ4uNnAe8siS7
53YTpGbpLdNkQKAQJdMQSyvcDbQoQ7FI19a1EtSwpg5qSMOTpQ/C
Af8EAjAAMB0GA1UdDgQWBBRhyokLlyprk+9tLhouzyinfDZuHDAfBgNVHSMEGDAW
gBQpJiv07dkY9WB0zgB6wOb/HMi8oDAbBgNVHREEFDAShxAAAAAAAAAAAAAAAAAA
AAABMA0GCSqGSIb3DQEBCwUAA4IBAQDBHcGP2z7UVPCh9Tj0M79mLPB76E9BtTNB
5cadSemk/itnGol4K5x+BILqRvQpKbUm7Yif4XVKtBiPnZothEg5mxcTGO5n3EVi
Y7KmeVZxUkPESQQVnM36ymG+jzSf00KeGhras71ddbAKZBVm+nsL3j1pLz+MGksV
m8xzIW/ilM/zL8ivlSy5XBu4JqJET9O5vs4RBYmzNNC9D2WxxNMm2bAxCd+1Kg82
6TLAFGGla0e8fG39TMfLeQYqHf8FdmGqkhhVStjvgRPnVvEK6Uv6ZESZZBgOb97O
m4BnW5gp9abS3mTYdDMo+TzgcKSlTKbcRV3SHA/9Cjih8IkOdIKW
-----END CERTIFICATE-----


@@ -1,27 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA3EVmcbJMAjV4aa4/FwrZMY/Rj+M/YVYFMYKrkgcp3NYYnKK+
qjYK1M7b+p5QC3UHX/k2zyshhMAoAJSlP4ZySupQBt/q4skoZ8jO+P7pVdcGEhd0
ZHodJzQqPeUzzauBiwfrDH88KErE9LKz1+yswqqRsvhu7AdCQTcJE2S5ArjGX3ZJ
JnGSd86iIjsyT5XEGuIgWtC9Z1d1Lr+PZC2SzAizLJYP/yGcQtPX55o5/ly3FePk
aTiiJ+wDDdZKj6pmgeoJNmgm2E2WfGkvfnDiZkxEW8p8l+MuLEVdfBA93UTb7Bwn
nYDzqed56g53kjqjoYYPjG/ebZzEKtaJ9Pk0QwIDAQABAoIBABBdY5gM3BLJ8DFB
zdQjbTF+ct5SztGnd2lPQPnvaE/M5DU27h1tOG7JE5TSEDZZsnuR412O4cWgFRi9
8mz+yxz/vYRVPHku4r6bL61WGvXSrNPJRE92txXDjWPd1HRySoSOyQq7pTeFHo7j
e/MN1WP9EigOxwboHycDNLxpHkmyV1DIlAgNkCZV56//liU/b+4vAVIJrgWfwfGH
NkFd9nkm93oCFOroJ2f30E1wLPlC+ZhIn4ysau+zlWDLYeils0xHwS2GD7gjp/if
i/ibVPgMVW/WPb67olm3nMUsan6CLmKWTiG+yklJT2djoam/iCZWE8/SAZj3qsxy
6W9rafkCgYEA+D8tPM8h0oHlKriFDQZx37EH1dfGJRqxr+SgQiJ03d9pGYEsT+jC
yr/l5ntzTwEEJjp/biIRwCwSWPYQtN4dNqn+11ICQzjhQbfWTfeT6vhSoBNxkeTT
R8tUM0fmoUNrXhPbGZ9XdIxDFgD1pJL96KtyaQGjIRAhyG+khIT7oIUCgYEA4yaM
Mw65KDonnKSVfMiOuG0QNYf70UcIiLSH8USnhbQhzT/c2LG7tNmru9UtQhZtmrpc
vezuOYTkfcAIUjwqm12Ra8Px8WMzwHwKx3C2SrFCLFgjNFyoQ+VIGjtAL1lNKvEx
MObSX7kVIf5+gaO9+KRBEdu55R16yQpW/UVAwCcCgYEA8XdqRkLoED2/Ln3LFW9W
ZpJpH61BlCfR/FhzNcEUUhiUv3UxKA0tJE/ijP05nPhNE+5Es1i6UWXM9vFqMLP4
UIqsUr73anGyUd1CvBX8sEqY/BHNn26nwKbboQHoKKZOknTX4qVmSPyB6K5IQaul
BKN3pwIrreZmJfPKYAiGRY0CgYAYgEbtFvB321X8enA5ZnSmhfUSoRlTaIMOI9Lp
/krHjDd9KR9MLFef2T7B4uufzkWCRAnO3qiPgbsXqUf8fsrluUD/S8JkFBw37elH
u+udwOLvX45kjn4D3M5bLfrtYIeHUz7IFI2qj48s/INuvle2Yxk1sOqrQPPGjZv2
c6rZTwKBgQCHSa+ToxicPJBZ5E7ezgue0LyRGWIMsr2OR16PBL2lPPiCWPH8Ez+l
mTClHll4KVZyqc0VOZDbjMjZBnTiAq/1lb8ZvwsXLi0ue1obkkEYfXLWcxYD3Yne
iBCGhjkqaUA4rESb22j7yqB8WGT83qV0kB9JwElzE9SxnyR9iw2FmA==
MIIEpAIBAAKCAQEA5rMw4r4VHIb2MS3ZR70aJyJbEPVUVTuQuIkMK081U42/cqtK
FxsdbkBjWcozDErbGeFYIu1KgWhHvc9wyEMXuK8xJ/AfTalkulYBgSNzfZo4iAQT
c1HSTqGnpRahWyMsy+nMQp3aLy+oVEU8SUyyPNHTOaXYX3XT3JH15ow+qq2eJPW4
SMkF/k2VYh7OopeR1jCYvV3RFvlQgWfmWqo29LzGwYeZhGSKWAnc+1nqKMEVJZHD
+wGxYPz8z+k4OcXubDZq0lmUgeELhn01fRDExVQOVqiNmfk9/PCDroeQxMOjS7MB
zBOHjLAsG2WHnrO8WYdLREQ2VHjiRQInza3dLQIDAQABAoIBABAXMXKvJVPPCf7W
HtCFHPzbxZRCODaVp/tm+6VNqf+A5Hh///PqnTviW8uYccUKt4tvjzEoccji2BYi
ENC29UGZXolVkylchj0E4Kf8LAL3rbe26RBjBZMcbU/zax+rLWWvkeKXle8ymL//
8DuAkPHzBJOBwLyvwC4jNA53e6t1vKp/j5bpR0VZHFvsGSvGGx3P0l34DwHYjdL7
+Q6jwaoIYDA8rcZHXzT/UCHa2dsk/m1OzFKUdyJVkZ0JIqRBhBmG7Z16T9xfoEZP
Ycv8TudQeH4sbiedwCYSmkfU+CtPzGWhZ4jjBegujJwrQJwF5TbRsqwHv3JOPE6K
hhJTs0ECgYEA8v2JMqFTwKhmh6EASI0YfLvrP1LNR6K6Hhdoe3/RYP2Qg12Cm7rv
7Eq1kpptu9eSnFRPdS16bTyRzTa1/eEPPjvTxCalnEOVdPSbkw6zr6QFwQh6HMrh
fbLQJy1jY/Gj5JkVLMK9l+iLLY9vJ5ZZPfJYdmlZ1USwbcsbF0u37hECgYEA8w0z
ZE34FsKWXdcMNG97OWYMqXWhOlLyxKtsWGUUEoTdK+PjDTDVQVe7KjjhOFehazD1
mfDS1VGwBPHQgjxBDqreBc0HvA4o1B+Js0SiQVbhxkvyDHHnxI5MeUROPCeGljzn
lPM9BnrX1KurSNS+j9YwtUiM4TLcMMdBXdv5UV0CgYEAgudHRDlZD08pfSOlLXCl
onzyLPkEkfT+YzulE/M17xRrB/oWZKL+ocNVshbzyuBFoWZiL/RCIhshSPaScKUQ
Oyyr1t4jFd3q5Ejqjvy6nIK2ftl8P4qkk70DGjf/dVY2Pu6hU63NycqDQBYngaIj
jZXDRndW5+fLTDrA63nlKqECgYBxTj4fDJoTQjOHG7F84Fu5rnFIrqWy4uh59tBT
hQuOdpIE3AAFLja8d4GxdULJWVDO/8v/L92ZxLMiGvjxPdW2WMGYQrTQXml6Ohmf
kOdzPmWSY+U7F/7MCuprvgQa1vJPJ6VuMtbIJoxngIAhO8x6kYeze1bxxRwRQVKf
xuS7oQKBgQDrsuXe9dKZU4lLTO+mPCHZPgSwY+3rFvpAFjoM9YT2/H0EwBrBubtn
NYyN3ih9SyaJdO6RKSEn822sz1Vc8A0xqpjyhUWDxyRCtc+GyVKr2WIC0DraGUjn
flBpRAox7c3DiPQLQXV9WxFVscM1WaNOVEuuHwmUmaM6UXJRXT7jOQ==
-----END RSA PRIVATE KEY-----


@@ -1,24 +1,24 @@
-----BEGIN CERTIFICATE-----
MIIEEjCCAvqgAwIBAgIUBwoN2+J27JtT6IaqV9sWhsHii2IwDQYJKoZIhvcNAQEL
MIIEEjCCAvqgAwIBAgIUSjz5+dzCGgG0oO+u91Rqruj70ZQwDQYJKoZIhvcNAQEL
BQAwbzEMMAoGA1UEBhMDVVNBMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQH
Ew1TYW4gRnJhbmNpc2NvMQ0wCwYDVQQKEwRldGNkMRYwFAYDVQQLEw1ldGNkIFNl
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0xOTEwMDgyMTE5MDBaFw0yOTEwMDUyMTE5
Y3VyaXR5MQswCQYDVQQDEwJjYTAeFw0yMTAyMjgxMDQ4MDBaFw0zMTAyMjYxMDQ4
MDBaMHgxDDAKBgNVBAYTA1VTQTETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UE
BxMNU2FuIEZyYW5jaXNjbzENMAsGA1UEChMEZXRjZDEWMBQGA1UECxMNZXRjZCBT
ZWN1cml0eTEUMBIGA1UEAxMLZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDHS/zscOjq013InbTlwsBVwasv8e5+ukZNGDQx5RNaXYxI
NVUM/5Bai4R3CS+DTbr+jBDylKi55gPQ/UIDKlU/NQH+x6UJB050G+aLDWAuRmxi
w8dq7kRw2QJvuMxI+quiZhWk2HYjtvZRZLCUGl//QTL/VCT1smXwXRBU19S2uOfy
g9KgZL/DCkJ9VBUh3+bFVKXBDnIphY4N/0+B/sW71cvRj8zvW3iD0R5T1J+QVEFz
sFRT99/OhV2kUEwMaAYOFv/mMIEO6qc7vf6pB91qdUfEP8AbsOlmiSuOOLuR6X/2
FHUjc8JrFfMuOVHnedRR5quxXbP8o83ilat0tXeVAgMBAAGjgZwwgZkwDgYDVR0P
A4IBDwAwggEKAoIBAQDs90tJyp+StNGHa6spxT4Eeuok40ghqxDwukfOqCe7rIJj
4pOYD2Tk6kFIlCJr+ug1fihWtTVVo0ZKUb4pwhXVDX82HiCyLd7OV7cfEKtnec3q
dUoJ/SR2iyjnVvRubL5szw9hx+XpRC1jo7HweBEFtCBPHPYdM67k9L6yfOoSkqCF
0/aqwbmJop0QSZa4RPEhkvyDLPaNtPnqBcoEF6SRxMSlJl1lVORIsGR5Q4/eZ62k
2ZMy8IwSqFeDtAWSSl9Gqi6cD7WWZsSqYWV06g/8mRj2zRubJuuaZx2a6QMv+Cpz
e32EpX2uZjpT2emCpRFDsy2RBWwLDJrk2+W7Piw5AgMBAAGjgZwwgZkwDgYDVR0P
AQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMB
Af8EAjAAMB0GA1UdDgQWBBS7gBJSFrjAHryiQpe38OMTzCKH1TAfBgNVHSMEGDAW
gBRlB76vjaZyFLrEUGm6DQfyjmN6PjAaBgNVHREEEzARgglsb2NhbGhvc3SHBH8A
AAEwDQYJKoZIhvcNAQELBQADggEBAE2tsLHtgF1T3d/anKf543q9uh61tr46HHf0
RrGZF+RJuJY5XIAiCN514Z/I7i2wG/x1TDTKwZUebajbk4GvaI4nEnCXs05jwm/n
wpdyRE1EUy5PkVFfXKCNQd096mpZu6EYXBGnQ2fQjg5zFvZSDnYaIf0vBF1WxE4W
0a4a9na3N77OSamPEljM1RJ1Sk+Zg5yI+nwyKcWWk3OlD0j668Vp6/m5VZKyQEkx
crfSj7kgRJWZRhMeh6li3xa9vDmzdF6ojGkgRN3Qljrs36JnmsTono2ETF8GIc+g
eNByAQNppLJjMn+zsaG9J5pr0gDLubFA7oa8aAJgYgJMM/GecAg=
Af8EAjAAMB0GA1UdDgQWBBTEZsOjdDCzMXxW7aVXJsWd8Wii/DAfBgNVHSMEGDAW
gBQpJiv07dkY9WB0zgB6wOb/HMi8oDAaBgNVHREEEzARgglsb2NhbGhvc3SHBH8A
AAEwDQYJKoZIhvcNAQELBQADggEBAGKuQ05hpgwhR2Rdd82mHFc9ItnlW/G88V/p
49TuPpDWJwSidOTvEkzUdwVPtIWoZp7GNB8/ZkHIf6t69UxEiFqltkDCIo/VJimk
2Zk5cgPKgdBaZaufwAzSZWfpnX9k7IIi+gVjGBhYqw8a159AdP75kNjj4oYjcYAE
8FS0K6srsZVf2ER625psFsJG8ZVOjJOqs7fk32aAlCXSwCnOovf9qVlJA40nWi1u
pPIA3vW8JZ1UaYB3GKFkzdDaIGm9R7STEMx4I1gba3UewzN+a/h2Y7gQAWHs9VuR
/zTcENPQuLafN/+DNDVO/DmCgNzO9H1cdvlyUoh5VaKhmpRrw2M=
-----END CERTIFICATE-----

Some files were not shown because too many files have changed in this diff.