Fix https://github.com/coreos/etcd/issues/7512.
If a server starts and aborts due to a config error,
it is possible to get stuck waiting on ReadyNotify.
This adds a select case so the wait is also notified on the stop channel.
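A minimal sketch of the resulting wait, assuming a server s that
exposes the existing ReadyNotify() and StopNotify() channels:
```
select {
case <-s.ReadyNotify():
	// server finished startup and is ready to serve requests
case <-s.StopNotify():
	// server aborted during startup (e.g. config error),
	// so stop waiting instead of blocking forever
}
```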
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
This commit adds jwt token support to the v3 auth API.
Remaining major ToDos:
- Currently the token type isn't hidden from etcdserver. In the near
future the information should be completely invisible to the
etcdserver package.
- Configurable token expiration. Currently tokens remain valid until
the keys are changed.
How to use:
1. generate keys for signing and verifying jwt tokens:
$ openssl genrsa -out app.rsa 1024
$ openssl rsa -in app.rsa -pubout > app.rsa.pub
2. add command line options to etcd like below:
--auth-token-type jwt \
--auth-jwt-pub-key app.rsa.pub --auth-jwt-priv-key app.rsa \
--auth-jwt-sign-method RS512
3. launch etcd cluster
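For illustration, a minimal sketch of signing and verifying a token
with these keys using the jwt-go library (the claim name below is made
up for the example; it is not etcd's actual token payload):
```
package main

import (
	"fmt"
	"io/ioutil"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	privPEM, _ := ioutil.ReadFile("app.rsa")
	priv, err := jwt.ParseRSAPrivateKeyFromPEM(privPEM)
	if err != nil {
		panic(err)
	}
	// sign a token with RS512, as configured above
	tk := jwt.NewWithClaims(jwt.GetSigningMethod("RS512"),
		jwt.MapClaims{"username": "root"})
	signed, err := tk.SignedString(priv)
	if err != nil {
		panic(err)
	}

	// verify using only the public key
	pubPEM, _ := ioutil.ReadFile("app.rsa.pub")
	pub, err := jwt.ParseRSAPublicKeyFromPEM(pubPEM)
	if err != nil {
		panic(err)
	}
	parsed, err := jwt.Parse(signed, func(*jwt.Token) (interface{}, error) {
		return pub, nil
	})
	fmt.Println(parsed.Valid, err)
}
```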
Below is a performance comparison of serializable reads with and
without jwt tokens. All three etcd nodes run on a single machine. The
signing method is RS512 and the key length is 1024 bits. As the
results show, jwt-based tokens introduce a performance overhead, but
it should be acceptable for use cases that require authentication.
w/o jwt token auth (no auth):
Summary:
Total: 1.6172 secs.
Slowest: 0.0125 secs.
Fastest: 0.0001 secs.
Average: 0.0002 secs.
Stddev: 0.0004 secs.
Requests/sec: 6183.5877
Response time histogram:
0.000 [1] |
0.001 [9982] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
0.003 [1] |
0.004 [1] |
0.005 [0] |
0.006 [0] |
0.008 [6] |
0.009 [0] |
0.010 [1] |
0.011 [5] |
0.013 [3] |
Latency distribution:
10% in 0.0001 secs.
25% in 0.0001 secs.
50% in 0.0001 secs.
75% in 0.0001 secs.
90% in 0.0002 secs.
95% in 0.0002 secs.
99% in 0.0003 secs.
w/ jwt token auth:
Summary:
Total: 2.5364 secs.
Slowest: 0.0182 secs.
Fastest: 0.0002 secs.
Average: 0.0003 secs.
Stddev: 0.0005 secs.
Requests/sec: 3942.5185
Response time histogram:
0.000 [1] |
0.002 [9975] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
0.004 [0] |
0.006 [1] |
0.007 [11] |
0.009 [2] |
0.011 [4] |
0.013 [5] |
0.015 [0] |
0.016 [0] |
0.018 [1] |
Latency distribution:
10% in 0.0002 secs.
25% in 0.0002 secs.
50% in 0.0002 secs.
75% in 0.0002 secs.
90% in 0.0003 secs.
95% in 0.0003 secs.
99% in 0.0004 secs.
Keep more WAL entries in memory for fast follower recovery.
10,000 was too small a number and triggered quite a few snapshots.
ZooKeeper's experience shows that 100,000 is a reasonable number even
for older, less powerful machines.
Eventually we should provide limits on both count and max memory (for
large entries).
Bump go-systemd to v14 (48702e0d, 2016-11-14).
Also adjust the caller of daemon.SdNotify() to avoid a build error,
which can be seen especially when running "go get github.com/coreos/etcd".
When using the embed functionality, you can't call Server.Stop()
until StartEtcd returns, yet in error situations StartEtcd can block
until Server.Stop() is called. Since we have a catch-22, ReadyNotify()
can instead be called manually by the user if they wish to wait for
the server startup, or in parallel with a timeout if they wish to
cancel it after some time.
Chzz pointed out that this is also more consistent with the
etcdserver.Start() behaviour.
purpleidea pointed out that this is actually more correct too, because
we can now register the stop interrupt handler before we block on
startup.
SdNotify() now returns two values, sent and err, so startEtcdOrProxyV2()
needs to check both return values correctly. As the two values are
independent of each other, the error checking needs a slight update
too.
SdNotifyNoSocket, which was previously provided by go-systemd, no
longer exists. In that case (false, nil) is returned instead.
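A sketch of what the updated call site looks like under the signature
described above (later go-systemd versions add a leading
unsetEnvironment argument; the logging calls here are illustrative):
```
import (
	"log"

	"github.com/coreos/go-systemd/daemon"
)

func notifySystemd() {
	sent, err := daemon.SdNotify("READY=1")
	if err != nil {
		// the notification itself failed
		log.Printf("failed to notify systemd for readiness: %v", err)
	}
	if !sent && err == nil {
		// (false, nil): no NOTIFY_SOCKET, i.e. not running under
		// systemd; go-systemd used to signal this with the
		// SdNotifyNoSocket error
		log.Printf("forgot to set Type=notify in systemd service file?")
	}
}
```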
Support for these legacy flags remains only because we do not want
CoreOS updates to break people.
Now people will be aware that they have switched to etcd3. We do not
need to support 0.x flags any more.
Added a new command line option to the etcd server to read in a
YAML-based configuration file. I've also added an example
configuration file with comments and a set of test cases.
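A minimal sketch of what such a file can look like (key names mirror
the sample configuration file; values are illustrative):
```
# etcd.conf.yml (illustrative)
name: 'infra1'
data-dir: '/var/lib/etcd'
listen-client-urls: 'http://localhost:2379'
advertise-client-urls: 'http://localhost:2379'
```
It would be passed via the new option when launching etcd, e.g.
$ etcd --config-file etcd.conf.yml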
This commit adds a new option for activating pprof-based profiling in
the etcd process.
- -enable-pprof: a boolean option that activates profiling
For example, if a client URL is http://localhost:12379, users and
developers can access the heap profiler at this URL:
http://localhost:12379/debug/pprof/heap
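For instance, the collected profile can be inspected directly with the
standard Go tooling:
$ go tool pprof http://localhost:12379/debug/pprof/heap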
Basic support for lease operations like create and revoke.
We still need to:
1. attach keys to leases in the KV implementation if the lease field
is set
2. have the leader periodically remove expired leases
3. have the leader serve keepAlive requests while followers forward
keepAlive requests to the leader.
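For illustration, this is roughly how create/revoke surface through
the (later) clientv3 API; treat the snippet as a sketch, not this
commit's interface (imports: "context",
"github.com/coreos/etcd/clientv3"):
```
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
if err != nil {
	panic(err)
}
defer cli.Close()

// create (grant) a lease with a 10-second TTL
lease, err := cli.Grant(context.TODO(), 10)
if err != nil {
	panic(err)
}
// attach a key to the lease (ToDo 1 above)
if _, err = cli.Put(context.TODO(), "foo", "bar",
	clientv3.WithLease(lease.ID)); err != nil {
	panic(err)
}
// revoke the lease; keys attached to it are deleted
if _, err = cli.Revoke(context.TODO(), lease.ID); err != nil {
	panic(err)
}
```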
The Go HTTP library uses a type assertion to determine whether a
connection is a TLS connection. If we wrap the TLS Listener with any
customized Listener that creates customized Conns, HTTPS will be
broken.
This commit fixes the issue.
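A minimal illustration of the failure mode (the wrapper type is
hypothetical):
```
package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

// wrappedConn is the kind of customized Conn a wrapping Listener returns.
type wrappedConn struct{ net.Conn }

func main() {
	var inner net.Conn = &tls.Conn{} // stands in for an accepted TLS conn
	var c net.Conn = wrappedConn{inner}
	// net/http decides "is this TLS?" with a type assertion like this,
	// so the wrapper hides the TLS connection and breaks HTTPS handling:
	_, ok := c.(*tls.Conn)
	fmt.Println(ok) // false
}
```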
etcd might generate an incomplete proxy config file after a power
failure. This is because we use ioutil.WriteFile, and ioutil.WriteFile
does not call Sync before closing the file.
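A sketch of the durable-write pattern the fix needs (requires the "os"
import): write, fsync, then close, unlike ioutil.WriteFile, which
skips the Sync:
```
func writeAndSyncFile(name string, data []byte, perm os.FileMode) error {
	f, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
	if err != nil {
		return err
	}
	_, err = f.Write(data)
	if err == nil {
		err = f.Sync() // flush contents to stable storage before close
	}
	if cerr := f.Close(); err == nil {
		err = cerr
	}
	return err
}
```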
Based on the configuration doc, it seems these two flags are missing
from the help. So add them; the descriptions come from config.go in
the same directory.
Signed-off-by: Yiqiao Pu <ypu@redhat.com>
We should wrap the blocking function in a closure and have the newly
created goroutine execute that closure. Otherwise the inner function
blocks in the current goroutine before the new goroutine is even
created.
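A sketch of the pitfall (blockingFn and handle are hypothetical names):
```
// wrong: blockingFn() is evaluated in the current goroutine, because
// arguments are evaluated before the new goroutine starts, so it can
// block right here
go handle(blockingFn())

// right: wrap the call in a closure so blockingFn() runs (and blocks)
// inside the new goroutine
go func() { handle(blockingFn()) }()
```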
This moves the code that creates the listener and roundTripper for raft
communication into one place, and uses explicit functions to build
them. This prevents possible development errors in the future.
rafthttp has different requirements for the connections created by the
transport for different usages, and this is hard to achieve when given
a single http.RoundTripper. Pass the data needed to build transports
into the package instead, and let rafthttp build its own transports.
Before this PR, the log was
```
2015/09/1 13:18:31 etcdmain: client: etcd cluster is unavailable or
misconfigured
```
It is quite hard for people to understand what happened.
Now we print out the exact reason for the failure and explain how to
handle it.
The current membership-change functionality of etcd has a problem
that can cause deadlock.
How to produce:
1. construct an N-node cluster
2. add N new nodes with etcdctl member add, without starting the new
members
What happens:
After adding the N nodes, the total size of the cluster becomes 2 *
N and its quorum size becomes N + 1. Membership changes therefore
require at least N + 1 nodes, because Raft treats membership
information in its log like other ordinary log append requests.
Assume the peer URLs of the added nodes are wrong because of operator
error or bugs in a wrapper program that launches etcd. In such a
case, both adding and removing members is impossible because the
quorum cannot be reached. Of course, ordinary requests cannot be
served either. The cluster would appear deadlocked.
Of course, the best practice for adding new nodes is to add them one
at a time, letting each node start before adding the next. However,
the effect of this problem is serious enough that I think forcibly
preventing it is valuable.
Solution:
This patch makes etcd forbid adding a new node if the operation would
change the quorum and the new quorum is larger than the number of
running nodes. The checking logic is activated when etcd is launched
with the newly added option -strict-reconfig-check; if the option
isn't passed, the default reconfig behavior is kept.
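A hypothetical sketch of the check (the names here are made up):
```
// canAddMember reports whether adding one member keeps the cluster
// operable: the members actually started must still reach the new
// quorum. E.g. with 5 members of which 3 are started, adding a 6th
// would need quorum 4, so the add is rejected.
func canAddMember(started, clusterSize int) bool {
	newQuorum := (clusterSize+1)/2 + 1 // quorum after the add
	return started >= newQuorum       // the new member is not started yet
}
```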
Fixes https://github.com/coreos/etcd/issues/3477
We always want GOMAXPROCS to behave the way Go itself sets it. In Go
1.4 we wanted to expose the GOMAXPROCS value, so we set it explicitly
the way Go 1.4 does and printed it out.
But this became a problem when Go 1.5 changed the way GOMAXPROCS is
set.
Fix the problem by no longer setting GOMAXPROCS and reading its value
directly instead. Due to this change, the default GOMAXPROCS becomes
the number of CPUs available when compiled with Go 1.5, which matches
how Go 1.5 works:
https://docs.google.com/document/d/1At2Ls5_fhJQ59kDK2DFVhFu3g5mATSXqqV5QrxinasI/edit
This is a behavior change in etcd 2.2.
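A sketch of reading the value without setting it: a 0 argument queries
GOMAXPROCS and leaves it unchanged:
```
import (
	"log"
	"runtime"
)

func logCPUs() {
	// runtime.GOMAXPROCS(0) returns the current setting without
	// modifying it
	log.Printf("GOMAXPROCS=%d, available CPUs=%d",
		runtime.GOMAXPROCS(0), runtime.NumCPU())
}
```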
It uses the heartbeat interval and election timeout to estimate the
expected request timeout.
This PR helps etcd survive in high round-trip-time environments,
e.g., a globally deployed cluster.
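Illustrative only (the constants below are assumptions, not the PR's
formula): the idea is to budget several heartbeats plus a couple of
election timeouts before declaring a request timed out:
```
import "time"

// requestTimeout is a hypothetical estimate derived from the two
// tuning knobs, so it scales with the cluster's round-trip time.
func requestTimeout(heartbeat, electionTimeout time.Duration) time.Duration {
	return 5*heartbeat + 2*electionTimeout
}
```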
This solves the problem that etcd may fail fatally because its
critical path cannot get file descriptors when the number of clients
is too big. The PR makes the client listener close client connections
immediately after they are accepted once file descriptor usage in the
process reaches a pre-set limit, so it ensures that the internal
critical path can always get file descriptors when it needs them.
When there are tons of clients connecting to the server, the original
behavior is like this:
```
2015/08/4 16:42:08 etcdserver: cannot monitor file descriptor usage
(open /proc/self/fd: too many open files)
2015/08/4 16:42:33 etcdserver: failed to purge snap file open
default2.etcd/member/snap: too many open files
[halted]
```
Current behavior is like this:
```
2015/08/6 19:05:25 transport: accept error: closing connection,
exceed file descriptor usage limitation (fd limit=874)
2015/08/6 19:05:25 transport: accept error: closing connection,
exceed file descriptor usage limitation (fd limit=874)
2015/08/6 19:05:26 transport: accept error: closing connection,
exceed file descriptor usage limitation (fd limit=874)
2015/08/6 19:05:27 transport: accept error: closing connection,
exceed file descriptor usage limitation (fd limit=874)
2015/08/6 19:05:28 transport: accept error: closing connection,
exceed file descriptor usage limitation (fd limit=874)
2015/08/6 19:05:28 etcdserver: 80% of the file descriptor limit is
used [used = 873, limit = 1024]
```
This is available only on Linux systems today because pkg/runtime
only has Linux support.
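A hypothetical sketch of the accept-and-shed behavior (the type and
helper names are made up for illustration):
```
import (
	"log"
	"net"
)

// fdLimitListener sheds new client connections once process-wide file
// descriptor usage reaches the pre-set limit, keeping descriptors
// available for the internal critical path.
type fdLimitListener struct {
	net.Listener
	limit   uint64
	fdUsage func() (uint64, error) // e.g. count entries in /proc/self/fd
}

func (l *fdLimitListener) Accept() (net.Conn, error) {
	for {
		c, err := l.Listener.Accept()
		if err != nil {
			return nil, err
		}
		if used, uerr := l.fdUsage(); uerr == nil && used >= l.limit {
			log.Printf("transport: accept error: closing connection, "+
				"exceed file descriptor usage limitation (fd limit=%d)",
				l.limit)
			c.Close() // shed load instead of starving the critical path
			continue
		}
		return c, nil
	}
}
```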