raftexample

raftexample is an example usage of etcd's raft library. It provides a simple REST API for a key-value store cluster backed by the Raft consensus algorithm.

Getting Started

Building raftexample

Clone etcd to <directory>/src/go.etcd.io/etcd

export GOPATH=<directory>
cd <directory>/src/go.etcd.io/etcd/contrib/raftexample
go build -o raftexample

Running single node raftexample

First start a single-member cluster of raftexample:

raftexample --id 1 --cluster http://127.0.0.1:12379 --port 12380

Each raftexample process maintains a single raft instance and a key-value server. The process's comma-separated peer list (--cluster), its raft ID, which is its index into the peer list (--id), and its HTTP key-value server port (--port) are passed on the command line.

Next, store a value ("hello") to a key ("my-key"):

curl -L http://127.0.0.1:12380/my-key -XPUT -d hello

Finally, retrieve the stored key:

curl -L http://127.0.0.1:12380/my-key

Running a local cluster

First install goreman, which manages Procfile-based applications.

The Procfile script will set up a local example cluster. Start it with:

goreman start

This will bring up three raftexample instances.

Now it's possible to write a key-value pair to any member of the cluster and likewise retrieve it from any member.
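
For example, write a value through the first member and read it back through the second, using the key-value ports from the Procfile:

curl -L http://127.0.0.1:12380/my-key -XPUT -d hello
curl -L http://127.0.0.1:22380/my-key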

Fault Tolerance

To test cluster recovery, first start a cluster and write a value "foo":

goreman start
curl -L http://127.0.0.1:12380/my-key -XPUT -d foo

Next, remove a node and replace the value with "bar" to check cluster availability:

goreman run stop raftexample2
curl -L http://127.0.0.1:12380/my-key -XPUT -d bar
curl -L http://127.0.0.1:32380/my-key

Finally, bring the node back up and verify it recovers with the updated value "bar":

goreman run start raftexample2
curl -L http://127.0.0.1:22380/my-key

Dynamic cluster reconfiguration

Nodes can be added to or removed from a running cluster using requests to the REST API.

For example, suppose we have a 3-node cluster that was started with the commands:

raftexample --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftexample --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftexample --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380

A fourth node with ID 4 can be added by issuing a POST:

curl -L http://127.0.0.1:12380/4 -XPOST -d http://127.0.0.1:42379

Then the new node can be started as the others were, using the --join option:

raftexample --id 4 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379,http://127.0.0.1:42379 --port 42380 --join

The new node should join the cluster and be able to service key/value requests.

We can remove a node using a DELETE request:

curl -L http://127.0.0.1:12380/3 -XDELETE

Node 3 should shut itself down once the cluster has processed this request.
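
Under the hood, both requests become raft configuration-change proposals and take effect only once the cluster commits them. A minimal sketch of that translation, assuming the handler sends raftpb.ConfChange messages to the raft server over a channel; the helper names are illustrative, not the actual httpapi.go functions:

package main

import "go.etcd.io/etcd/raft/raftpb"

// A POST of a peer URL to /4 becomes an add-node proposal; the new
// peer's URL travels with the proposal as opaque context.
func proposeAddNode(confChangeC chan<- raftpb.ConfChange, nodeID uint64, peerURL []byte) {
	confChangeC <- raftpb.ConfChange{
		Type:    raftpb.ConfChangeAddNode,
		NodeID:  nodeID,
		Context: peerURL,
	}
}

// A DELETE of /3 becomes a remove-node proposal; node 3 shuts itself
// down after the cluster commits the change.
func proposeRemoveNode(confChangeC chan<- raftpb.ConfChange, nodeID uint64) {
	confChangeC <- raftpb.ConfChange{
		Type:   raftpb.ConfChangeRemoveNode,
		NodeID: nodeID,
	}
}

func main() {
	confChangeC := make(chan raftpb.ConfChange, 2)
	proposeAddNode(confChangeC, 4, []byte("http://127.0.0.1:42379"))
	proposeRemoveNode(confChangeC, 3)
}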

Design

raftexample consists of three components: a raft-backed key-value store, a REST API server, and a raft consensus server based on etcd's raft implementation.

The raft-backed key-value store is a key-value map that holds all committed key-values. The store bridges communication between the raft server and the REST server. Key-value updates are issued through the store to the raft server. The store updates its map once raft reports the updates are committed.
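
A condensed sketch of this flow, modeled on the shape of kvstore.go; snapshot handling and error plumbing are omitted, and main loops a proposal straight back to stand in for the raft server:

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
	"sync"
)

type kv struct{ Key, Val string }

type kvstore struct {
	proposeC chan<- string     // proposals flow out to the raft server
	mu       sync.RWMutex
	kvStore  map[string]string // holds committed key-value pairs only
}

// Lookup returns a committed value; in-flight proposals are invisible.
func (s *kvstore) Lookup(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.kvStore[key]
	return v, ok
}

// Propose encodes an update and hands it to raft; it never touches the map.
func (s *kvstore) Propose(k, v string) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(kv{k, v}); err != nil {
		log.Fatal(err)
	}
	s.proposeC <- buf.String()
}

// readCommits applies an entry only after raft reports it committed.
func (s *kvstore) readCommits(commitC <-chan *string) {
	for data := range commitC {
		if data == nil {
			continue // the real code uses nil to signal snapshot replay
		}
		var d kv
		if err := gob.NewDecoder(bytes.NewBufferString(*data)).Decode(&d); err != nil {
			log.Fatalf("could not decode message (%v)", err)
		}
		s.mu.Lock()
		s.kvStore[d.Key] = d.Val
		s.mu.Unlock()
	}
}

func main() {
	proposeC := make(chan string, 1)
	commitC := make(chan *string)
	done := make(chan struct{})

	s := &kvstore{proposeC: proposeC, kvStore: make(map[string]string)}
	go func() { s.readCommits(commitC); close(done) }()

	s.Propose("my-key", "hello")
	data := <-proposeC // the raft server normally sits between these channels
	commitC <- &data   // loop the proposal back as if raft had committed it
	close(commitC)
	<-done

	v, _ := s.Lookup("my-key")
	fmt.Println("my-key =", v)
}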

The REST server exposes the current raft consensus by accessing the raft-backed key-value store. A GET command looks up a key in the store and returns the value, if any. A key-value PUT command issues an update proposal to the store.
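
A minimal sketch of that handler, assuming a store with the Propose/Lookup surface sketched above; fakeStore applies proposals immediately and exists only so the example runs on its own:

package main

import (
	"io/ioutil"
	"log"
	"net/http"
	"sync"
)

// store matches the Propose/Lookup surface of the key-value store above.
type store interface {
	Propose(k, v string)
	Lookup(key string) (string, bool)
}

type httpKVAPI struct{ store store }

func (h *httpKVAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	key := r.RequestURI
	switch r.Method {
	case http.MethodPut:
		v, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "failed to read PUT body", http.StatusBadRequest)
			return
		}
		// fire-and-forget: the value becomes readable once raft commits it
		h.store.Propose(key, string(v))
		w.WriteHeader(http.StatusNoContent)
	case http.MethodGet:
		if v, ok := h.store.Lookup(key); ok {
			w.Write([]byte(v))
			return
		}
		http.Error(w, "key not found", http.StatusNotFound)
	default:
		w.Header().Set("Allow", "PUT, GET")
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

// fakeStore applies proposals immediately; it stands in for the raft-backed
// store only so this sketch runs on its own.
type fakeStore struct {
	mu sync.Mutex
	m  map[string]string
}

func (f *fakeStore) Propose(k, v string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.m[k] = v
}

func (f *fakeStore) Lookup(k string) (string, bool) {
	f.mu.Lock()
	defer f.mu.Unlock()
	v, ok := f.m[k]
	return v, ok
}

func main() {
	log.Fatal(http.ListenAndServe(":12380", &httpKVAPI{store: &fakeStore{m: map[string]string{}}}))
}

Note that a PUT returns 204 as soon as the update is proposed, before it is committed, so an immediate GET may briefly return the old value.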

The raft server participates in consensus with its cluster peers. When the REST server submits a proposal, the raft server transmits the proposal to its peers. When raft reaches a consensus, the server publishes all committed updates over a commit channel. For raftexample, this commit channel is consumed by the key-value store.
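
The proposal half of that loop, sketched against the raft.Node interface from go.etcd.io/etcd/raft; the storage, snapshot, Ready-processing, and peer-transport duties of the real raft.go are omitted:

package raftsketch

import (
	"context"

	"go.etcd.io/etcd/raft"
)

// raftNode is trimmed to what this sketch needs; the real raft.go also owns
// the WAL, snapshots, and the HTTP transport between peers.
type raftNode struct {
	node raft.Node
}

// serveProposals forwards REST-originated updates into raft. Propose does not
// mean committed: committed entries surface later through the node's Ready
// channel, from which raft.go republishes them on the commit channel that the
// key-value store consumes.
func (rc *raftNode) serveProposals(ctx context.Context, proposeC <-chan string) {
	for prop := range proposeC {
		if err := rc.node.Propose(ctx, []byte(prop)); err != nil {
			return // e.g. the context was canceled during shutdown
		}
	}
}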