Compare commits


36 Commits

Author SHA1 Message Date
Yicheng Qin
86e616c6e9 *: bump to v2.0.8 2015-03-31 14:29:13 -07:00
Barak Michener
5ae55a2c0d etcdctl: fix import typos 2015-03-31 13:48:18 -07:00
Xiang Li
62ce6eef7b etcdctl: main routine of import command should wait for goroutine exiting 2015-03-31 13:26:15 -07:00
Xiang Li
7df4f5c804 build: do not build internal debugging tool
We are still playing around with the dump-log tool.
Stop building it publicly until we are happy with its
ux and functionality.
2015-03-31 13:26:05 -07:00
Xiang Li
461c24e899 etcdctl: adopt new client port by default
etcdserver uses both 4001 and 2379 for serving client requests by
default. etcdctl supports both ports by default.
2015-03-31 13:25:56 -07:00
Xiang Li
6d90d03bf0 etcdctl: add migratesnap command 2015-03-31 13:25:39 -07:00
Yicheng Qin
9995e80a2c Revert "etcdhttp: add internalVersion"
This reverts commit a77bf97c14.

Conflicts:
	version/version.go

2015-03-31 13:25:22 -07:00
Xiang Li
229405f113 *: remove upgrading related stuff 2015-03-31 13:24:28 -07:00
Mateus Braga
b3f2a998d4 docs: add clarity about the 1000 events history
When talking about missing events on a particular key, the 1000 event history
limit can be understood as being per key, instead of etcd-wide events. Make it
clear that it is across all etcd keys.
2015-03-31 13:24:19 -07:00
Xiang Li
8436e901e9 etcdserver: loosen member validation for joining existing cluster 2015-03-31 13:24:07 -07:00
Yicheng Qin
c03f5cb941 *: bump to v2.0.7+git 2015-03-24 23:14:38 -07:00
Yicheng Qin
0cb90e4bea *: bump to v2.0.7 2015-03-24 23:07:57 -07:00
Yicheng Qin
df83b1b34e wal: fix missing import 2015-03-24 23:00:04 -07:00
Xiang Li
f2bef04009 wal: releaseTo should work with large release index 2015-03-24 22:51:02 -07:00
Yicheng Qin
02198336f6 version: do not return err NotExist in Detect 2015-03-24 22:50:44 -07:00
Yicheng Qin
0c9a226e0e etcdserver: print out extra files in data dir instead of erroring 2015-03-24 22:50:33 -07:00
Yicheng Qin
5bd1d420bb etcdserver: add join-existing check 2015-03-24 22:49:41 -07:00
Yicheng Qin
a1cb5cb768 etcdmain: print error when non-flag args remain 2015-03-24 22:49:31 -07:00
Yicheng Qin
acba49fe81 *: bump to v2.0.6+git 2015-03-23 14:05:08 -07:00
Yicheng Qin
e3c902228b *: bump to v2.0.6 2015-03-23 13:52:00 -07:00
Yicheng Qin
52a2d143d2 migrate: remove starter code
It has been moved to github.com/coreos/etcd-starter.
2015-03-21 11:15:26 -07:00
招牌疯子
f53d550a79 store: fixed clone error for store stats. 2015-03-21 11:14:06 -07:00
Brandon Philips
63b799b891 migrate: detect version 2.0.1
Without this code a second start will crash:

```
$ ./bin/etcd -name foobar --data-dir=foobar
2015/03/18 18:06:28 starter: detect etcd version 2.0.1 in foobar
2015/03/18 18:06:28 starter: unhandled etcd version in foobar
panic: starter: unhandled etcd version in foobar

goroutine 1 [running]:
log.Panicf(0x594770, 0x25, 0x208927c70, 0x1, 0x1)
	/usr/local/go/src/log/log.go:314 +0xd0
github.com/coreos/etcd/migrate/starter.checkInternalVersion(0x20889a480, 0x0, 0x0)
	/Users/philips/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/migrate/starter/starter.go:160 +0xf2f
github.com/coreos/etcd/migrate/starter.StartDesiredVersion(0x20884a010, 0x3, 0x3)
	/Users/philips/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/migrate/starter/starter.go:77 +0x2a9
main.main()
	/Users/philips/src/github.com/coreos/etcd/gopath/src/github.com/coreos/etcd/main.go:46 +0x25e

goroutine 9 [syscall]:
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
	/usr/local/go/src/os/signal/signal_unix.go:27 +0x35
```
2015-03-21 11:13:55 -07:00
Brandon Philips
697883fb8c etcdmain: let user provide a name w/o initial-cluster update
Currently this doesn't work if a user wants to try out a single machine
cluster but change the name for whatever reason. This is because the
name is always "default", so the following fails:

```
./bin/etcd -name 'baz'
```

This solves our problem on CoreOS where the default is `ETCD_NAME=%m`.
2015-03-21 11:13:42 -07:00
Brandon Philips
f794f87f26 Documentation: fixup grammar around the unsafe flags 2015-03-21 11:13:28 -07:00
Xiang Li
0847986d4a etcdmain: identify data dir type 2015-03-21 11:12:18 -07:00
funkygao
9ea80c6ac1 raft: fix godoc about starting a node 2015-03-21 11:11:21 -07:00
Xiang Li
02fb648abf etcdmain: verify heartbeat and election flag 2015-03-21 11:11:09 -07:00
kmeaw
4c9e1686b1 pkg/flags: Add support for IPv6 addresses
Support IPv6 address for ETCD_ADDR and ETCD_PEER_ADDR

pkg/flags: Support IPv6 address for ETCD_ADDR and ETCD_PEER_ADDR

pkg/flags: tests for IPv6 addr and bind-addr flags

pkg/flags: IPAddressPort.Host: do not enclose IPv6 address in square brackets

pkg/flags: set default bind address to [::] instead of 0.0.0.0

pkg/flags: we don't need fmt any more

also, one minor fix: net.JoinHostPort takes string as a port value

pkg/flags: fix ipv6 tests

pkg/flags: test both IPv4 and IPv6 addresses in TestIPAddressPortString

etcdmain: test: use [::] instead of 0.0.0.0
2015-03-21 11:05:20 -07:00
Yicheng Qin
0fb9362c5c *: bump to v2.0.5+git 2015-03-11 17:00:51 -07:00
Yicheng Qin
9481945228 *: bump to v2.0.5 2015-03-11 11:33:43 -07:00
Xiang Li
e13b09e4d9 wal: fix ReleaseLockTo
ReleaseLockTo should not release the lock on the WAL
segment that is right before the given index. When
restarting etcd, etcd needs to read from the WAL segment
that has a smaller index than the snapshot index.

The correct behavior is that ReleaseLockTo releases
the locks w is holding so that w only holds one lock
that has an index smaller than the given index.
2015-03-10 09:45:46 -07:00
Xiang Li
78e0149f41 raft: do not reset vote if term is not changed
raft MUST keep the voting information for the same term. reset
should not reset vote if term is not changed.
2015-03-10 09:42:45 -07:00
Xiang Li
4c86ab4868 pkg/transport: fix downgrade https to http bug in transport
If the TLS config is empty, etcd downgrades https to http without a warning.
This commit avoids the downgrade and stops etcd from bootstrapping if it cannot
listen on TLS.
2015-03-10 09:39:01 -07:00
Xiang Li
59327bab47 pkg/transport: set the maxIdleConnsPerHost to -1
For transports that use timeout connections, we set the
maxIdleConnsPerHost to -1. The default transport does not clear
the timeout for the connections it marks idle, so connections
with timeouts cannot be reused.
2015-03-10 09:38:39 -07:00
Mikael Kjaer
62ed1ebf03 Documentation: fix "Missing infra1="
Documentation: fix "Missing infra1="
2015-03-10 09:38:27 -07:00
1320 changed files with 14899 additions and 324794 deletions

View File

@@ -2,9 +2,10 @@ language: go
sudo: false
go:
- 1.4
- 1.5
install:
- go get golang.org/x/tools/cmd/cover
- go get golang.org/x/tools/cmd/vet
- go get github.com/barakmich/go-nyet
script:

View File

@@ -1,6 +1,6 @@
# How to contribute
etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests. This document outlines some of the conventions on commit message formatting, contact points for developers and other resources to make getting your contribution into etcd easier.
etcd is Apache 2.0 licensed and accepts contributions via Github pull requests. This document outlines some of the conventions on commit message formatting, contact points for developers and other resources to make getting your contribution into etcd easier.
# Email and chat
@@ -12,14 +12,6 @@ etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
- Fork the repository on GitHub
- Read the README.md for build instructions
## Reporting Bugs and Creating Issues
Reporting bugs is one of the best ways to contribute. However, a good bug report
has some very specific qualities, so please read over our short document on
[reporting bugs](https://github.com/coreos/etcd/blob/master/Documentation/reporting_bugs.md)
before you submit your bug report. This document might contain links to known
issues, another good reason to take a look there before reporting your bug.
## Contribution flow
This is a rough outline of what a contributor's workflow looks like:

View File

@@ -1,31 +0,0 @@
## Snapshot Migration
You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.
To get started get the newest data snapshot from the 0.4.9+ cluster:
```
curl http://cluster.example.com:4001/v2/migration/snapshot > backup.snap
```
Now, import the snapshot into your new cluster:
```
etcdctl --endpoint new_cluster.example.com import --snap backup.snap
```
If you have a large amount of data, you can specify more concurrent workers to copy data in parallel by using the `-c` flag.
If you have hidden keys to copy, you can use the `--hidden` flag to include them.
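For instance, a migration that uses both options might look like this (worker count illustrative):
```
etcdctl --endpoint new_cluster.example.com import --snap backup.snap -c 16 --hidden
```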
And the data will quickly copy into the new cluster:
```
entering dir: /
entering dir: /foo
entering dir: /foo/bar
copying key: /foo/bar/1 1
entering dir: /
entering dir: /foo2
entering dir: /foo2/bar2
copying key: /foo2/bar2/2 2
```

View File

@@ -8,17 +8,14 @@ When first started, etcd stores its configuration into a data directory specifie
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
The write ahead log and snapshot files are used during member operation and to recover after a restart.
Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster.
It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
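As a sketch, such a deployment might be started with flags like these (mount point illustrative):
```
$ etcd -name infra0 \
  -data-dir /var/lib/etcd \
  --wal-dir /mnt/wal-disk/etcd-wal
```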
If a member's data directory is ever lost or corrupted then the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
If a member's data directory is ever lost or corrupted then the user should remove the etcd member from the cluster via the [members API][members-api].
A user should avoid restarting an etcd member with a data directory from an out-of-date backup.
Using an out-of-date data directory can lead to inconsistency, as the member had agreed to store information via raft and then re-joins claiming it needs that information again.
For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
Once removed the member can be re-added with an empty data directory.
[remove-a-member]: runtime-configuration.md#remove-a-member
[members-api]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#members-api
#### Contents
@@ -27,8 +24,6 @@ The data directory has two sub-directories in it:
1. wal: write ahead log files are stored here. For details see the [wal package documentation][wal-pkg]
2. snap: log snapshots are stored here. For details see the [snap package documentation][snap-pkg]
If `--wal-dir` flag is set, etcd will write the write ahead log files to the specified directory instead of data directory.
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
@@ -39,74 +34,6 @@ If `--wal-dir` flag is set, etcd will write the write ahead log files to the spe
If you are spinning up multiple clusters for testing it is recommended that you specify a unique initial-cluster-token for the different clusters.
This can protect you from cluster corruption in case of misconfiguration, because members started with different cluster tokens will refuse to join each other.
#### Monitoring
It is important to monitor your production etcd cluster for healthy information and runtime metrics.
##### Health Monitoring
At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health": "true"}`, then the cluster is healthy. Please note the `/health` endpoint is still an experimental one as of etcd 2.2.
```
$ curl -L http://127.0.0.1:2379/health
{"health": "true"}
```
You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.
```
$./etcdctl cluster-health
member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379
cluster is healthy
```
##### Runtime Metrics
etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server. You can read more through the runtime metrics [doc](metrics.md).
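The raw metrics can also be inspected directly with a plain GET (default client address assumed):
```
$ curl -L http://127.0.0.1:2379/metrics
```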
#### Debugging
Debugging a distributed system can be difficult. etcd provides several ways to make debugging
easier.
##### Enabling Debug Logging
When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
```
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
$ # debug logging enabled
$
$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```
##### Debugging Variables
Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain
`cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
`cmdline` is the command line arguments passed into etcd.
`file_descriptor_limit` is the max number of file descriptors etcd can utilize.
`memstats` is well explained [here](http://golang.org/pkg/runtime/#MemStats).
`raft.status` is useful when you want to debug low level raft issues if you are familiar with raft internals. In most cases, you do not need to check `raft.status`.
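They can be fetched with a plain GET on the client port (default address assumed):
```
$ curl -L http://127.0.0.1:2379/debug/vars
```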
```json
{
"cmdline": ["./etcd"],
"file_descriptor_limit": 0,
"memstats": {"Alloc":4105744,"TotalAlloc":42337320,"Sys":12560632,"...":"..."},
"raft.status": {"id":"ce2a822cea30bfca","term":5,"vote":"ce2a822cea30bfca","commit":23509,"lead":"ce2a822cea30bfca","raftState":"StateLeader","progress":{"ce2a822cea30bfca":{"match":23509,"next":23510,"state":"ProgressStateProbe"}}}
}
```
#### Optimal Cluster Size
The recommended etcd cluster size is 3, 5 or 7, which is decided by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance decreases since data needs to be replicated to more machines.
@@ -130,17 +57,17 @@ As you can see, adding another member to bring the size of cluster up to an odd
#### Changing Cluster Size
After your cluster is up and running, adding or removing members is done via [runtime reconfiguration](runtime-configuration.md#cluster-reconfiguration-operations), which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
After your cluster is up and running, adding or removing members is done via [runtime reconfiguration](runtime-configuration.md), which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
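For example, adding a member might look like this (name and peer URL illustrative):
```
$ etcdctl member add infra3 http://10.0.1.13:2380
```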
### Member Migration
When there is a scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing the data and changing the member ID.
The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
* Stop the member process
* Copy the data directory of the now-idle member to the new machine
* Update the peer URLs for that member to reflect the new machine according to the [runtime configuration] [change peer url]
* Update the peer URLs for that member to reflect the new machine according to the [member api] [change peer url]
* Start etcd on the new machine, using the same configuration and the copy of the data directory
This example will walk you through the process of migrating the infra1 member to a new machine:
@@ -151,11 +78,11 @@ This example will walk you through the process of migrating the infra1 member to
|infra1|10.0.1.11:2380|
|infra2|10.0.1.12:2380|
```sh
```
$ export ETCDCTL_PEERS=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
```
```sh
```
$ etcdctl member list
84194f7c5edd8b37: name=infra0 peerURLs=http://10.0.1.10:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.10:2379
b4db3bf5e495e255: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.11:2379
@@ -164,59 +91,53 @@ bc1083c870280d44: name=infra2 peerURLs=http://10.0.1.12:2380 clientURLs=http://1
#### Stop the member etcd process
```sh
$ ssh 10.0.1.11
```
$ ssh core@10.0.1.11
```
```sh
$ kill `pgrep etcd`
```
$ sudo systemctl stop etcd
```
#### Copy the data directory of the now-idle member to the new machine
```
$ tar -cvzf infra1.etcd.tar.gz %data_dir%
$ tar -cvzf node1.etcd.tar.gz /var/lib/etcd/node1.etcd
```
```sh
$ scp infra1.etcd.tar.gz 10.0.1.13:~/
```
$ scp node1.etcd.tar.gz core@10.0.1.13:~/
```
#### Update the peer URLs for that member to reflect the new machine
```sh
```
$ curl http://10.0.1.10:2379/v2/members/b4db3bf5e495e255 -XPUT \
-H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.1.13:2380"]}'
```
Or use `etcdctl member update` command
```sh
$ etcdctl member update b4db3bf5e495e255 http://10.0.1.13:2380
```
#### Start etcd on the new machine, using the same configuration and the copy of the data directory
```sh
$ ssh 10.0.1.13
```
```sh
$ tar -xzvf infra1.etcd.tar.gz -C %data_dir%
$ ssh core@10.0.1.13
```
```
etcd -name infra1 \
$ tar -xzvf node1.etcd.tar.gz -C /var/lib/etcd
```
```
etcd -name node1 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
```
[change peer url]: runtime-configuration.md#update-a-member
[change peer url]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#change-the-peer-urls-of-a-member
### Disaster Recovery
etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N-1)/2_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would be normally impossible for the cluster to restore quorum and continue functioning.
etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N/2)-1_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would be normally impossible for the cluster to restore quorum and continue functioning.
To recover from such scenarios, etcd provides functionality to backup and restore the datastore and recreate the cluster without data loss.
@@ -228,8 +149,8 @@ The first step of the recovery is to backup the data directory on a functioning
```sh
etcdctl backup \
--data-dir %data_dir% \
--backup-dir %backup_data_dir%
--data-dir /var/lib/etcd \
--backup-dir /tmp/etcd_backup
```
This command will rewrite some of the metadata contained in the backup (specifically, the node ID and cluster ID), which means that the node will lose its former identity. In order to recreate a cluster from the backup, you will need to start a new, single-node cluster. The metadata is rewritten to prevent the new node from inadvertently being joined onto an existing cluster.
@@ -240,7 +161,7 @@ To restore a backup using the procedure created above, start etcd with the `-for
```sh
etcd \
-data-dir=%backup_data_dir% \
-data-dir=/tmp/etcd_backup \
-force-new-cluster \
...
```
@@ -251,22 +172,20 @@ Once you have verified that etcd has started successfully, shut it down and move
```sh
pkill etcd
rm -fr %data_dir%
mv %backup_data_dir% %data_dir%
rm -fr /var/lib/etcd
mv /tmp/etcd_backup /var/lib/etcd
etcd \
-data-dir=%data_dir% \
-data-dir=/var/lib/etcd \
...
```
#### Restoring the cluster
Now that the node is running successfully, you should [change its advertised peer URLs](runtime-configuration.md#update-a-member), as the `--force-new-cluster` has set the peer URL to the default (listening on localhost).
You can then add more nodes to the cluster and restore resiliency. See the [add a new member](runtime-configuration.md#add-a-new-member) guide for more details. **NB:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
Now that the node is running successfully, you can add more nodes to the cluster and restore resiliency. See the [runtime configuration](runtime-configuration.md) guide for more details.
### Client Request Timeout
etcd sets different timeouts for various types of client requests. The timeout value is not tunable now, which will be improved soon (https://github.com/coreos/etcd/issues/2038).
etcd sets different timeouts for various types of client requests. The timeout value is not tunable now, which will be improved soon(https://github.com/coreos/etcd/issues/2038).
#### Get requests
@@ -288,11 +207,3 @@ If the request times out, it indicates two possibilities:
2. the majority of the cluster is not functioning.
If timeout happens several times continuously, administrators should check status of cluster and resolve it as soon as possible.
### Best Practices
#### Maximum OS threads
By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [may change in Go 1.5](https://docs.google.com/document/d/1At2Ls5_fhJQ59kDK2DFVhFu3g5mATSXqqV5QrxinasI/edit)).
When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable `GOMAXPROCS` to the desired number when starting etcd. For more information on this variable, see the Go [runtime](https://golang.org/pkg/runtime) documentation.
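For example (thread count illustrative; pick a value suited to the machine):
```
$ GOMAXPROCS=4 ./etcd
```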

View File

@@ -78,11 +78,11 @@ X-Raft-Index: 5398
X-Raft-Term: 1
```
- `X-Etcd-Index` is the current etcd index as explained above. When request is a watch on key space, `X-Etcd-Index` is the current etcd index when the watch starts, which means that the watched event may happen after `X-Etcd-Index`.
- `X-Etcd-Index` is the current etcd index as explained above.
- `X-Raft-Index` is similar to the etcd index but is for the underlying raft protocol
- `X-Raft-Term` is an integer that will increase whenever an etcd master election happens in the cluster. If this number is increasing rapidly, you may need to tune the election timeout. See the [tuning][tuning] section for details.
[tuning]: tuning.md
[tuning]: #tuning
### Get the value of a key
@@ -277,7 +277,7 @@ The first terminal should get the notification and return with the same response
However, the watch command can do more than this.
Using the index, we can watch for commands that have happened in the past.
This is useful for ensuring you don't miss events between watch commands.
Typically, we watch again from the `modifiedIndex` + 1 of the node we got.
Typically, we watch again from the (modifiedIndex + 1) of the node we got.
Let's try to watch for the set command of index 7 again:
@@ -287,81 +287,48 @@ curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'
The watch command returns immediately with the same response as previously.
If we were to restart the watch from index 8 with:
```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'
```
Then even if etcd is on index 9 or 800, the first event to occur to the `/foo`
key between 8 and the current index will be returned.
**Note**: etcd only keeps the responses of the most recent 1000 events across all etcd keys.
It is recommended to send the response to another thread to process immediately
instead of blocking the watch while processing the result.
#### Watch from cleared event index
If we miss all the 1000 events, we need to recover the current state of the
watching key space through a get and then start to watch from the
`X-Etcd-Index` + 1.
watching key space. First, we do a get and then start to watch from the (etcdIndex + 1).
For example, we set `/other="bar"` for 2000 times and try to wait from index 8.
For example, we set `/foo="bar"` for 2000 times and try to wait from index 7.
```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'
```
We get an "index is outdated" response, since we missed the 1000 events kept in etcd.
```
{"errorCode":401,"message":"The event in requested index is outdated and cleared","cause":"the requested history has been cleared [1008/8]","index":2007}
{"errorCode":401,"message":"The event in requested index is outdated and cleared","cause":"the requested history has been cleared [1003/7]","index":2002}
```
To start the watch, first we need to fetch the current state of key `/foo`:
To start the watch, first we need to fetch the current state of key `/foo` and the etcdIndex.
```sh
curl 'http://127.0.0.1:2379/v2/keys/foo' -vv
```
```
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Etcd-Cluster-Id: 7e27652122e8b2ae
< X-Etcd-Index: 2007
< X-Etcd-Index: 2002
< X-Raft-Index: 2615
< X-Raft-Term: 2
< Date: Mon, 05 Jan 2015 18:54:43 GMT
< Transfer-Encoding: chunked
<
{"action":"get","node":{"key":"/foo","value":"bar","modifiedIndex":7,"createdIndex":7}}
{"action":"get","node":{"key":"/foo","value":"","modifiedIndex":2002,"createdIndex":2002}}
```
Unlike watches, we use the `X-Etcd-Index` + 1 of the response as a `waitIndex`
instead of the node's `modifiedIndex` + 1 for two reasons:
1. The `X-Etcd-Index` is always greater than or equal to the `modifiedIndex` when
getting a key because `X-Etcd-Index` is the current etcd index, and the `modifiedIndex`
is the index of an event already stored in etcd.
2. None of the events represented by indexes between `modifiedIndex` and
`X-Etcd-Index` will be related to the key being fetched.
Using the `modifiedIndex` + 1 is functionally equivalent for subsequent
watches, but since it is smaller than the `X-Etcd-Index` + 1, we may receive a
`401 EventIndexCleared` error immediately.
So the first watch after the get should be:
The `X-Etcd-Index` is important. It is the index when we got the value of `/foo`.
So we can watch again from the (`X-Etcd-Index` + 1) without missing an event after the last get.
```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2008'
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2003'
```
#### Connection being closed prematurely
The server may close a long polling connection before emitting any events.
This can happen due to a timeout or the server being shut down.
Since the HTTP header is sent immediately upon accepting the connection, the response will be seen as empty: `200 OK` and empty body.
Clients should be prepared to deal with this scenario and retry the watch.
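A minimal shell sketch of such a retry loop (waitIndex illustrative; a real client should recompute the waitIndex from each response):
```
until resp=$(curl -s 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2008') && [ -n "$resp" ]; do
  sleep 1   # empty 200 response: the long poll was closed prematurely, retry the watch
done
echo "$resp"
```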
### Atomically Creating In-Order Keys
@@ -380,7 +347,7 @@ curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value=Job1
"action": "create",
"node": {
"createdIndex": 6,
"key": "/queue/00000000000000000006",
"key": "/queue/6",
"modifiedIndex": 6,
"value": "Job1"
}
@@ -399,7 +366,7 @@ curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value=Job2
"action": "create",
"node": {
"createdIndex": 29,
"key": "/queue/00000000000000000029",
"key": "/queue/29",
"modifiedIndex": 29,
"value": "Job2"
}
@@ -423,13 +390,13 @@ curl -s 'http://127.0.0.1:2379/v2/keys/queue?recursive=true&sorted=true'
"nodes": [
{
"createdIndex": 2,
"key": "/queue/00000000000000000002",
"key": "/queue/2",
"modifiedIndex": 2,
"value": "Job1"
},
{
"createdIndex": 3,
"key": "/queue/00000000000000000003",
"key": "/queue/3",
"modifiedIndex": 3,
"value": "Job2"
}
@@ -472,7 +439,7 @@ curl http://127.0.0.1:2379/v2/keys/dir -XPUT -d ttl=30 -d dir=true -d prevExist=
Keys that are under this directory work as usual, but when the directory expires, a watcher on a key under the directory will get an expire event:
```sh
curl 'http://127.0.0.1:2379/v2/keys/dir?wait=true'
curl 'http://127.0.0.1:2379/v2/keys/dir/asdf?wait=true'
```
```json
@@ -903,7 +870,7 @@ Here we see the `/message` key but our hidden `/_message` key is not returned.
### Setting a key from a file
You can also use etcd to store small configuration files, JSON documents, XML documents, etc directly.
You can also use etcd to store small configuration files, json documents, XML documents, etc directly.
For example you can use curl to upload a simple text file and encode it:
```
@@ -1079,4 +1046,4 @@ curl http://127.0.0.1:2379/v2/stats/store
See the [other etcd APIs][other-apis] for details on the cluster management.
[other-apis]: other_apis.md
[other-apis]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md

View File

@@ -1,434 +0,0 @@
# v2 Auth and Security
## etcd Resources
There are three types of resources in etcd
1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
### Permission Resources
#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.
#### Roles
Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.
The special static ROOT (named `root`) role has full permissions on all key-value resources, as well as the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
There is also a special GUEST role, named 'guest'. These are the permissions given to unauthenticated requests to etcd. This role will be created automatically, and by default allows access to the full keyspace due to backward compatibility (etcd did not previously authenticate any actions). This role can be modified by a ROOT role holder at any time to reduce the capabilities of unauthenticated users.
#### Permissions
There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.
A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.
### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` covers that exact key or directory only, not its children. `/foo*` is a prefix permission that matches `/foo` recursively: all keys under it and any key with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is permission on the full keyspace.
### Settings Resources
Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).
## v2 Auth
### Basic Auth
We only support [Basic Auth](http://en.wikipedia.org/wiki/Basic_access_authentication) for the first version. Clients need to attach the basic auth credentials to the HTTP Authorization header.
### Authorization field for operations
Added to requests to /v2/keys, /v2/auth
Add code 401 Unauthorized to the set of responses from the v2 API
Authorization: Basic {encoded string}
### Future Work
Other types of auth can be considered for the future (e.g. signed certs, public keys), but the `Authorization:` header allows for other such types.
### Things out of Scope for etcd Permissions
* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (eg: users modifying keys outside work hours)
## API endpoints
An Error JSON corresponds to:
{
"name": "ErrErrorName",
"description" : "The longer helpful description of the error."
}
#### Enable and Disable Authentication
**Get auth status**
GET /v2/auth/enable
Sent Headers:
Possible Status Codes:
200 OK
200 Body:
{
"enabled": true
}
**Enable auth**
PUT /v2/auth/enable
Sent Headers:
Put Body: (empty)
Possible Status Codes:
200 OK
400 Bad Request (if root user has not been created)
409 Conflict (already enabled)
200 Body: (empty)
**Disable auth**
DELETE /v2/auth/enable
Sent Headers:
Authorization: Basic <RootAuthString>
Possible Status Codes:
200 OK
401 Unauthorized (if not a root user)
409 Conflict (already disabled)
200 Body: (empty)
#### Users
The User JSON object is formed as follows:
```
{
"user": "userName",
"password": "password",
"roles": [
"role1",
"role2"
],
"grant": [],
"revoke": []
}
```
Password is only passed when necessary.
**Get a list of users**
GET/HEAD /v2/auth/users
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"users": ["alice", "bob", "eve"]
}
**Get User Details**
GET/HEAD /v2/auth/users/alice
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"user" : "alice",
"roles" : ["fleet", "etcd"]
}
**Create Or Update A User**
A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are required.
PUT /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
JSON struct, above, matching the appropriate name
* Starting password and roles when creating.
* Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent users)
409 Conflict (when granting duplicated roles or revoking non-existent roles)
200 Headers:
Content-type: application/json
200 Body:
JSON state of the user
**Remove A User**
DELETE /v2/auth/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root user when auth is enabled)
404 Not Found
200 Headers:
200 Body: (empty)
#### Roles
A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read" : [ "/fleet/" ],
"write": [ "/fleet/" ]
}
},
"grant" : {"kv": {...}},
"revoke": {"kv": {...}}
}
```
**Get a list of Roles**
GET/HEAD /v2/auth/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
200 Headers:
Content-type: application/json
200 Body:
{
"roles": ["fleet", "etcd", "quay"]
}
**Get Role Details**
GET/HEAD /v2/auth/roles/fleet
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
404 Not Found
200 Headers:
Content-type: application/json
200 Body:
{
"role" : "fleet",
"permissions" : {
"kv" : {
"read": [ "/fleet/" ],
"write": [ "/fleet/" ]
}
}
}
**Create Or Update A Role**
PUT /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
Initial desired JSON state, including the role name for verification and:
* Starting permission set if creating
* Granted/Revoked permission set if updating
Possible Status Codes:
200 OK
201 Created
400 Bad Request
401 Unauthorized
404 Not Found (update non-existent roles)
409 Conflict (when granting duplicated permission or revoking non-existent permission)
200 Body:
JSON state of the role
**Remove A Role**
DELETE /v2/auth/roles/rkt
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
401 Unauthorized
403 Forbidden (remove root)
404 Not Found
200 Headers:
200 Body: (empty)
## Example Workflow
Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
### Create root role
```
PUT /v2/auth/users/root
Put Body:
{"user" : "root", "password": "betterRootPW!"}
```
### Enable auth
```
PUT /v2/auth/enable
```
### Modify guest role (revoke write permission)
```
PUT /v2/auth/roles/guest
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "guest",
"revoke" : {
"kv" : {
"write": [
"*"
]
}
}
}
```
### Create Roles for the Applications
Create the rkt role fully specified:
```
PUT /v2/auth/roles/rkt
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "rkt",
"permissions" : {
"kv": {
"read": [
"/rkt/*"
],
"write": [
"/rkt/*"
]
}
}
}
```
But let's make fleet just a basic role for now:
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "fleet"
}
```
### Optional: Grant some permissions to the roles
Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rkt case. So this step is optional.)
```
PUT /v2/auth/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "fleet",
"grant" : {
"kv" : {
"read": [
"/rkt/fleet",
"/fleet/*"
]
}
}
}
```
### Create Users
Same as before, let's use rocket all at once and fleet separately
```
PUT /v2/auth/users/rktuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
```
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "fleetuser", "password" : "fleetpw"}
```
### Optional: Grant Roles to Users
Likewise, let's explicitly grant fleetuser access.
```
PUT /v2/auth/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user": "fleetuser", "grant": ["fleet"]}
```
#### Start to use fleetuser and rktuser
For example:
```
PUT /v2/keys/rkt/RktData
Headers:
Authorization: Basic <rktuser:rktpw>
Body:
value=launch
```
Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.

View File

@@ -1,179 +0,0 @@
# Authentication Guide
**NOTE: The authentication feature is considered experimental. We may change workflow without warning in future releases.**
## Overview
Authentication -- having users and roles in etcd -- was added in etcd 2.1. This guide will help you set up basic authentication in etcd.
etcd before 2.1 was a completely open system; anyone with access to the API could change keys. In order to preserve backward compatibility and upgradability, this feature is off by default.
For a full discussion of the RESTful API, see [the authentication API documentation](auth_api.md)
## Special Users and Roles
There is one special user, `root`, and there are two special roles, `root` and `guest`.
### User `root`
User `root` must be created before security can be activated. It has the `root` role and allows for the changing of anything inside etcd. The idea behind the `root` user is for recovery purposes -- a password is generated and stored somewhere -- and the root role is granted to the administrator accounts on the system. In the future, for troubleshooting and recovery, we will need to assume some access to the system, and future documentation will assume this root user (though anyone with the role will suffice).
### Role `root`
Role `root` cannot be modified, but it may be granted to any user. Having access via the root role not only allows global read-write access (as was the case before 2.1) but allows modification of the authentication policy and all administrative things, like modifying the cluster membership.
### Role `guest`
The `guest` role defines the permissions granted to any request that does not provide authentication. This will be created on security activation (if it doesn't already exist) to have full access to all keys, as was true in etcd 2.0. It may be modified at any time, and cannot be removed.
## Working with users
The `user` subcommand for `etcdctl` handles all things having to do with user accounts.
A listing of users can be found with
```
$ etcdctl user list
```
Creating a user is as easy as
```
$ etcdctl user add myusername
```
And there will be a prompt for a new password.
Roles can be granted and revoked for a user with
```
$ etcdctl user grant myusername -roles foo,bar,baz
$ etcdctl user revoke myusername -roles bar,baz
```
We can look at this user with
```
$ etcdctl user get myusername
```
And the password for a user can be changed with
```
$ etcdctl user passwd myusername
```
Which will prompt again for a new password.
To delete an account, there's always
```
$ etcdctl user remove myusername
```
## Working with roles
The `role` subcommand for `etcdctl` handles all things having to do with access controls for particular roles, as were granted to individual users.
A listing of roles can be found with
```
$ etcdctl role list
```
A new role can be created with
```
$ etcdctl role add myrolename
```
A role has no password; we are merely defining a new set of access rights.
Roles are granted access to various parts of the keyspace, a single path at a time.
Reading a path is simple; if the path ends in `*`, that key **and all keys prefixed with it** are granted to holders of this role. If it does not end in `*`, only that key and that key alone is granted.
Access can be granted as either read, write, or both, as in the following examples:
```
# Give read access to keys under the /foo directory
$ etcdctl role grant myrolename -path '/foo/*' -read
# Give write-only access to the key at /foo/bar
$ etcdctl role grant myrolename -path '/foo/bar' -write
# Give full access to keys under /pub
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```
Beware that
```
# Give full access to keys under /pub??
$ etcdctl role grant myrolename -path '/pub*' -readwrite
```
Without the slash, this may include keys under `/publishing`, for example. To do both, grant `/pub` and `/pub/*`.
To see what's granted, we can look at the role at any time:
```
$ etcdctl role get myrolename
```
Revocation of permissions is done the same logical way:
```
$ etcdctl role revoke myrolename -path '/foo/bar' -write
```
As is removing a role entirely
```
$ etcdctl role remove myrolename
```
## Enabling authentication
The minimal steps to enabling auth follow. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
Make sure the root user is created:
```
$ etcdctl user add root
New password:
```
And enable authentication
```
$ etcdctl auth enable
```
After this, etcd is running with authentication enabled. To disable it for any reason, use the reciprocal command:
```
$ etcdctl -u root:rootpw auth disable
```
It would also be good to check what guests (unauthenticated users) are allowed to do:
```
$ etcdctl -u root:rootpw role get guest
```
And modify this role appropriately, depending on your policies.
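For example, a common hardening step is to revoke guests' write access to the whole keyspace (path pattern illustrative; check the `role get guest` output first):
```
$ etcdctl -u root:rootpw role revoke guest -path '/*' -write
```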
## Using `etcdctl` to authenticate
`etcdctl` supports a flag similar to `curl`'s for authentication.
```
$ etcdctl -u user:password get foo
```
or if you prefer to be prompted:
```
$ etcdctl -u user get foo
```
Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.

View File

@@ -1,10 +1,10 @@
# Backward Compatibility
### Backward Compatibility
The main goal of the etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provided a new set of APIs.
The other main focus of this release was a more reliable Raft implementation, but as this change is internal it should not have any notable effects for users.
## Command Line Flags Changes
#### Command Line Flags Changes
The major flag changes are mostly related to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports.
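For example, during a migration a member could listen on both the legacy and IANA-assigned client ports (addresses illustrative):
```
$ etcd -name infra0 \
  -listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001' \
  -advertise-client-urls 'http://10.0.1.10:2379,http://10.0.1.10:4001'
```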
@@ -20,13 +20,16 @@ The major flag changes are to mostly related to bootstrapping. The `initial-*` f
The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/configuration.md.
## Data Directory Naming
#### Data Dir
- Default data dir location has changed from {$hostname}.etcd to {name}.etcd.
The default data dir location has changed from {$hostname}.etcd to {name}.etcd.
- The disk format within the data dir has changed. etcd 2.0 should be able to auto upgrade the old data format. Instructions on doing so manually are in the [migration tool doc][migrationtooldoc].
## Key-Value API
[migrationtooldoc]: https://github.com/coreos/etcd/blob/master/Documentation/0_4_migration_tool.md
### Read consistency flag
#### Key-Value API
##### Read consistency flag
The consistent flag for read operations is removed in etcd 2.0.0. The normal read operations provide the same consistency guarantees as 0.4.6 read operations with the consistent flag set.
@@ -36,14 +39,14 @@ The consistent read guarantees the sequential consistency within one client that
Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
Reads do not provide linearizability. If you want linearizable read, you need to set quorum option to true.
Reads do not provide linearizability. If you want linearizabilable read, you need to set quorum option to true.
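For example, a quorum (linearizable) read can be requested per GET (key name illustrative):
```
curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```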
**Previous behavior**
We added an option for a consistent read in the old version of etcd since etcd 0.x redirects the write request to the leader. When the user gets back the result from the leader, the member it originally sent the request to might not have applied the write request yet. With the consistent flag set to true, the client will always send read requests to the leader, so one client should be able to see its last write when consistent=true is enabled. There are no ordering guarantees among different clients.
## Standby
#### Standby
etcd 0.4's standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of the problems standby was solving.
@@ -51,21 +54,21 @@ Standby mode was intended for large clusters that had a subset of the members ac
Proxy mode in 2.0 will provide similar functionality, and with improved control over which machines act as proxies due to the operator specifically configuring them. Proxies also support read only or read/write modes for increased security and durability.
[proxymode]: proxy.md
[proxymode]: https://github.com/coreos/etcd/blob/master/Documentation/proxy.md
## Discovery Service
#### Discovery Service
A size key needs to be provided inside a [discovery token][discoverytoken].
[discoverytoken]: clustering.md#custom-etcd-discovery-service
[discoverytoken]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#custom-etcd-discovery-service
## HTTP Admin API
#### HTTP Admin API
`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/member API][memberapi] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.
[memberapi]: other_apis.md
[memberapi]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md
## HTTP Key Value API
- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
#### HTTP Key Value API
- The follower can now transparently proxy write equests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
- Expiration time is in UTC instead of local time.

View File

@@ -1,13 +0,0 @@
# Benchmarks
etcd benchmarks will be published regularly and tracked for each release below:
- [etcd v2.1.0-alpha](./etcd-2-1-0-alpha-benchmarks.md)
- [etcd v2.2.0-rc](./etcd-2-2-0-rc-benchmarks.md)
- [etcd v3 demo](./etcd-3-demo-benchmarks.md)
# Memory Usage Benchmarks
It records expected memory usage in different scenarios.
- [etcd v2.2.0-rc](./etcd-2-2-0-rc-memory-benchmarks.md)

View File

@@ -1,49 +0,0 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.1.0 alpha
## etcd Cluster
3 etcd members, each runs on a single machine
## Testing
Bootstrap another machine and use benchmark tool [boom](https://github.com/rakyll/boom) to send requests to each etcd member.
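An illustrative invocation (request count, concurrency, and member address are placeholders):
```
$ boom -n 100000 -c 64 http://10.0.1.10:2379/v2/keys/foo
```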
## Performance
### reading one single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 1534 | 0.7 |
| 64 | 64 | leader only | 10125 | 9.1 |
| 64 | 256 | leader only | 13892 | 27.1 |
| 256 | 1 | leader only | 1530 | 0.8 |
| 256 | 64 | leader only | 10106 | 10.1 |
| 256 | 256 | leader only | 14667 | 27.0 |
| 64 | 64 | all servers | 24200 | 3.9 |
| 64 | 256 | all servers | 33300 | 11.8 |
| 256 | 64 | all servers | 24800 | 3.9 |
| 256 | 256 | all servers | 33000 | 11.5 |
### writing one single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 60 | 21.4 |
| 64 | 64 | leader only | 1742 | 46.8 |
| 64 | 256 | leader only | 3982 | 90.5 |
| 256 | 1 | leader only | 58 | 20.3 |
| 256 | 64 | leader only | 1770 | 47.8 |
| 256 | 256 | leader only | 4157 | 105.3 |
| 64 | 64 | all servers | 1028 | 123.4 |
| 64 | 256 | all servers | 3260 | 123.8 |
| 256 | 64 | all servers | 1033 | 121.5 |
| 256 | 256 | all servers | 3061 | 119.3 |

View File

@@ -1,67 +0,0 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
## etcd Cluster
3 etcd 2.2.0-rc members, each runs on a single machine.
Detailed versions:
```
etcd Version: 2.2.0-alpha.1+git
Git SHA: 59a5a7e
Go Version: go1.4.2
Go OS/Arch: linux/amd64
```
Also, we use 3 etcd 2.1.0 alpha-stage members to form a cluster to get baseline performance. etcd's commit head is at [c7146bd5](https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144), which is the same as the one that we use in [etcd 2.1 benchmark](./etcd-2-1-0-benchmarks.md).
## Testing
Bootstrap another machine and use benchmark tool [boom](https://github.com/rakyll/boom) to send requests to each etcd member. Check [here](../../hack/benchmark/) for instructions.
## Performance
### reading one single key
| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|---------------|
| 64 | 1 | leader only | 2804 (-5%) | 0.4 (+0%) |
| 64 | 64 | leader only | 17816 (+0%) | 5.7 (-6%) |
| 64 | 256 | leader only | 18667 (-6%) | 20.4 (+2%) |
| 256 | 1 | leader only | 2181 (-15%) | 0.5 (+25%) |
| 256 | 64 | leader only | 17435 (-7%) | 6.0 (+9%) |
| 256 | 256 | leader only | 18180 (-8%) | 21.3 (+3%) |
| 64 | 64 | all servers | 46965 (-4%) | 2.1 (+0%) |
| 64 | 256 | all servers | 55286 (-6%) | 7.4 (+6%) |
| 256 | 64 | all servers | 46603 (-6%) | 2.1 (+5%) |
| 256 | 256 | all servers | 55291 (-6%) | 7.3 (+4%) |
### writing one single key
| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|---------------|
| 64 | 1 | leader only | 76 (+22%) | 19.4 (-15%) |
| 64 | 64 | leader only | 2461 (+45%) | 31.8 (-32%) |
| 64 | 256 | leader only | 4275 (+1%) | 69.6 (-10%) |
| 256 | 1 | leader only | 64 (+20%) | 16.7 (-30%) |
| 256 | 64 | leader only | 2385 (+30%) | 31.5 (-19%) |
| 256 | 256 | leader only | 4353 (-3%) | 74.0 (+9%) |
| 64 | 64 | all servers | 2005 (+81%) | 49.8 (-55%) |
| 64 | 256 | all servers | 4868 (+35%) | 81.5 (-40%) |
| 256 | 64 | all servers | 1925 (+72%) | 47.7 (-59%) |
| 256 | 256 | all servers | 4975 (+36%) | 70.3 (-36%) |
### performance changes explanation
- read QPS in most scenarios is decreased by 5~8%. The reason is that etcd records store metrics for each store operation. The metrics are important for monitoring and debugging, so this is acceptable.
- write QPS to leader is increased by 20~30%. This is because we decouple the raft main loop and the entry apply loop, which avoids them blocking each other.
- write QPS to all servers is increased by 30~80% because followers can receive the latest commit index earlier and commit proposals faster.

View File

@@ -1,47 +0,0 @@
## Physical machine
GCE n1-standard-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 7.5 GB memory
- 2x CPUs
## etcd
```
etcd Version: 2.2.0-rc.0+git
Git SHA: 103cb5c
Go Version: go1.5
Go OS/Arch: linux/amd64
```
## Testing
Start a 3-member etcd cluster, each member of which uses 2 cores.
The length of the key name is always 64 bytes, which is a reasonable average key length.
## Memory Maximal Usage
- etcd may use maximal memory if one follower is dead and the leader keeps sending snapshots.
- `max RSS` is the maximal memory usage recorded in 3 runs.
| value bytes | key number | data size(MB) | max RSS(MB) | max RSS/data rate on leader |
|-------------|-------------|---------------|-------------|-----------------------------|
| 128 | 50000 | 6 | 433 | 72x |
| 128 | 100000 | 12 | 659 | 54x |
| 128 | 200000 | 24 | 1466 | 61x |
| 1024 | 50000 | 48 | 1253 | 26x |
| 1024 | 100000 | 96 | 2344 | 24x |
| 1024 | 200000 | 192 | 4361 | 22x |
## Data Size Threshold
- When etcd reaches the data size threshold, it may easily trigger leader elections and drop some proposals.
- In most cases, the etcd cluster should work smoothly if it doesn't hit the threshold. If it doesn't work well due to insufficient resources, you need to decrease its data size.
| value bytes | key number limitation | suggested data size threshold(MB) | consumed RSS(MB) |
|-------------|-----------------------|-----------------------------------|------------------|
| 128 | 400K | 48 | 2400 |
| 1024 | 300K | 292 | 6500 |

View File

@@ -1,40 +0,0 @@
## Physical machines
GCE n1-highcpu-2 machine type
- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.2.0
## etcd Cluster
1 etcd member running in v3 demo mode
## Testing
Use [etcd v3 benchmark tool](../../hack/v3benchmark/).
## Performance
### reading one single key
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2716 | 0.4 |
| 256 | 64 | 16623 | 6.1 |
| 256 | 256 | 16622 | 21.7 |
The performance is nearly the same as with an empty server handler.
### reading one single key after putting
| key size in bytes | number of clients | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|----------|---------------|
| 256 | 1 | 2269 | 0.5 |
| 256 | 64 | 13582 | 8.6 |
| 256 | 256 | 13262 | 47.5 |
The performance with an empty server handler is not affected by one put, so the
performance degradation is most likely caused by the storage package.

View File

@@ -1,24 +0,0 @@
## Branch Management
### Guide
- New development occurs on the [master branch](https://github.com/coreos/etcd/tree/master)
- Master branch should always have a green build!
- Backwards-compatible bug fixes should target the master branch and subsequently be ported to stable branches
- Once the master branch is ready for release, it will be tagged and become the new stable branch.
The etcd team has adopted a _rolling release model_ and supports one stable version of etcd.
### Master branch
The `master` branch is our development branch. All new features land here first.
If you want to try new features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.
Before the release of the next stable version, feature PRs will be frozen. We will focus on testing, bug fixes, and documentation for one to two weeks.
### Stable branches
All branches with prefix `release-` are considered _stable_ branches.
After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing backwards-compatible bugs for the latest stable release, but not for previous releases. A _patch_ release incorporating any bug fixes will be cut roughly every two weeks, given pending patches.
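For example, to build from a stable branch rather than `master` (the branch name below is illustrative):
```
git clone https://github.com/coreos/etcd.git
cd etcd
git checkout release-2.0   # any branch with the release- prefix is a stable branch
```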

View File

@@ -4,7 +4,7 @@
Starting an etcd cluster statically requires that each member knows another in the cluster. In a number of cases, you might not know the IPs of your cluster members ahead of time. In these cases, you can bootstrap an etcd cluster with the help of a discovery service.
Once an etcd cluster is up and running, adding or removing members is done via [runtime reconfiguration](runtime-configuration.md). To better understand the design behind runtime reconfiguration, we suggest you read [this](runtime-reconf-design.md).
Once an etcd cluster is up and running, adding or removing members is done via [runtime reconfiguration](runtime-configuration.md).
This guide will cover the following mechanisms for bootstrapping an etcd cluster:
@@ -38,15 +38,11 @@ Note that the URLs specified in `initial-cluster` are the _advertised peer URLs_
If you are spinning up multiple clusters (or creating and destroying a single cluster) with the same configuration for testing purposes, it is highly recommended that you specify a unique `initial-cluster-token` for the different clusters. By doing this, etcd can generate unique cluster IDs and member IDs for the clusters even if they otherwise have the exact same configuration. This protects you from cross-cluster interaction, which might corrupt your clusters.
etcd listens on [`listen-client-urls`](configuration.md#-listen-client-urls) to accept client traffic. Each etcd member advertises the URLs specified in [`advertise-client-urls`](configuration.md#-advertise-client-urls) to other members, proxies, and clients. Please make sure the `advertise-client-urls` are reachable from the intended clients. A common mistake is setting `advertise-client-urls` to localhost, or leaving it as the default, when you want remote clients to reach etcd.
On each machine you would start etcd with these flags:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
@@ -54,8 +50,6 @@ $ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.11:2379 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
@@ -63,8 +57,6 @@ $ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.12:2379 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
@@ -79,8 +71,6 @@ In the following example, we have not included our new host in the list of enume
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls https://10.0.1.11:2380 \
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.11:2379 \
-initial-cluster infra0=http://10.0.1.10:2380 \
-initial-cluster-state new
etcd: infra1 not listed in the initial cluster config
@@ -92,8 +82,6 @@ In this example, we are attempting to map a node (infra0) on a different address
```
$ etcd -name infra0 -initial-advertise-peer-urls http://127.0.0.1:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state=new
etcd: error setting up initial cluster: infra0 has different advertised URLs in the cluster and advertised peer URLs list
@@ -105,8 +93,6 @@ If you configure a peer with a different set of configuration and attempt to joi
```
$ etcd -name infra3 -initial-advertise-peer-urls http://10.0.1.13:2380 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra3=http://10.0.1.13:2380 \
-initial-cluster-state=new
etcd: conflicting cluster ID to the target cluster (c6ab534d07e8fcc4 != bc25ea2a74fb18b0). Exiting.
@@ -124,15 +110,13 @@ There are two methods that can be used for discovery:
### etcd Discovery
To better understand the design of the discovery service protocol, we suggest you read [this](./discovery_protocol.md).
#### Lifetime of a Discovery URL
A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery URL, you should always create discovery URLs for new clusters.
Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime] guide.
[runtime]: runtime-configuration.md
[runtime]: https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md
#### Custom etcd Discovery Service
@@ -148,29 +132,21 @@ If you bootstrap an etcd cluster using discovery service with more than the expe
The URL you will use in this case will be `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` and the etcd members will use the `https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83` directory for registration as they start.
Each member must have a different name flag specified; otherwise discovery will fail due to duplicated names.
Now we start etcd with those relevant flags for each member:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.11:2379 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.12:2379 \
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
```
@@ -200,29 +176,21 @@ ETCD_DISCOVERY=https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573d
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
Each member must have a different name flag specified; otherwise discovery will fail due to duplicated names.
Now we start etcd with those relevant flags for each member:
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.11:2379 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.12:2379 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
@@ -238,8 +206,6 @@ You can use the environment variable `ETCD_DISCOVERY_PROXY` to cause etcd to use
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd: error: the cluster doesn't have a size configuration value in https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de/_config
exit 1
@@ -252,8 +218,6 @@ This error will occur if the discovery cluster already has the configured number
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de \
-discovery-fallback exit
etcd: discovery: cluster is full
@@ -268,8 +232,6 @@ ignored on this machine.
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcdserver: discovery token ignored since a cluster has already been initialized. Valid log found at /var/lib/etcd
```
@@ -302,9 +264,7 @@ infra2.example.com. 300 IN A 10.0.1.12
```
#### Bootstrap the etcd cluster using DNS
etcd cluster members can listen on domain names or IP addresses; the bootstrap process will resolve DNS A records.
The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets. The etcd member reads the resolved address to find out if it belongs to the cluster defined in the SRV records.
etcd cluster members can listen on domain names or IP address, the bootstrap process will resolve DNS A records.
```
$ etcd -name infra0 \
@@ -382,10 +342,6 @@ DNS SRV records can also be used to configure the list of peers for an etcd serv
$ etcd --proxy on -discovery-srv example.com
```
#### Error Cases
You might see an error like `cannot find local etcd $name from SRV records`. That means the etcd member failed to find itself in the cluster defined in the SRV records. The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
# 0.4 to 2.0+ Migration Guide
In etcd 2.0 we introduced the ability to listen on more than one address and to advertise multiple addresses. This makes using etcd easier when you have complex networking, such as private and public networks on various cloud providers.

View File

@@ -13,64 +13,44 @@ To start etcd automatically using custom settings at startup in Linux, using a [
##### -name
+ Human-readable name for this member.
+ default: "default"
+ env variable: ETCD_NAME
+ This value is referenced as this node's own entries listed in the `-initial-cluster` flag (e.g. `default=http://localhost:2380` or `default=http://localhost:2380,default=http://localhost:7001`). This needs to match the key used in the flag if you're using [static bootstrapping](clustering.md#static).
##### -data-dir
+ Path to the data directory.
+ default: "${name}.etcd"
+ env variable: ETCD_DATA_DIR
##### -wal-dir
+ Path to the dedicated wal directory. If this flag is set, etcd will write the WAL files to the walDir rather than the dataDir. This allows a dedicated disk to be used, and helps avoid I/O contention between logging and other I/O operations.
+ default: ""
+ env variable: ETCD_WAL_DIR
##### -snapshot-count
+ Number of committed transactions to trigger a snapshot to disk.
+ default: "10000"
+ env variable: ETCD_SNAPSHOT_COUNT
##### -heartbeat-interval
+ Time (in milliseconds) of a heartbeat interval.
+ default: "100"
+ env variable: ETCD_HEARTBEAT_INTERVAL
##### -election-timeout
+ Time (in milliseconds) for an election to timeout. See [Documentation/tuning.md](tuning.md#time-parameters) for details.
+ Time (in milliseconds) for an election to timeout.
+ default: "1000"
+ env variable: ETCD_ELECTION_TIMEOUT
##### -listen-peer-urls
+ List of URLs to listen on for peer traffic. This flag tells etcd to accept incoming requests from its peers on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ List of URLs to listen on for peer traffic.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_LISTEN_PEER_URLS
+ example: "http://10.0.0.1:2380"
+ invalid example: "http://example.com:2380" (domain name is invalid for binding)
##### -listen-client-urls
+ List of URLs to listen on for client traffic. This flag tells etcd to accept incoming requests from clients on the specified scheme://IP:port combinations. The scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd listens on the given port on all interfaces. If an IP address is given as well as a port, etcd will listen on the given port and interface. Multiple URLs may be used to specify a number of addresses and ports to listen on. etcd will respond to requests from any of the listed addresses and ports.
+ List of URLs to listen on for client traffic.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_LISTEN_CLIENT_URLS
+ example: "http://10.0.0.1:2379"
+ invalid example: "http://example.com:2379" (domain name is invalid for binding)
##### -max-snapshots
+ Maximum number of snapshot files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_SNAPSHOTS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
##### -max-wals
+ Maximum number of wal files to retain (0 is unlimited)
+ default: 5
+ env variable: ETCD_MAX_WALS
+ The default for users on Windows is unlimited, and manual purging down to 5 (or your preference for safety) is recommended.
##### -cors
+ Comma-separated whitelist of origins for CORS (cross-origin resource sharing).
+ default: none
+ env variable: ETCD_CORS
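As a quick illustration of the flag/environment-variable duality documented above, the two invocations below should be equivalent (the values are arbitrary examples):
```
# Configure the member with command line flags:
etcd -name infra0 -data-dir /var/lib/etcd -snapshot-count 10000

# Or configure the same member with the corresponding environment variables:
ETCD_NAME=infra0 ETCD_DATA_DIR=/var/lib/etcd ETCD_SNAPSHOT_COUNT=10000 etcd
```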
### Clustering Flags
@@ -80,55 +60,42 @@ To start etcd automatically using custom settings at startup in Linux, using a [
##### -initial-advertise-peer-urls
+ List of this member's peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members. These URLs can contain domain names.
+ List of this member's peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members.
+ default: "http://localhost:2380,http://localhost:7001"
+ env variable: ETCD_INITIAL_ADVERTISE_PEER_URLS
+ example: "http://example.com:2380, http://10.0.0.1:2380"
##### -initial-cluster
+ Initial cluster configuration for bootstrapping.
+ default: "default=http://localhost:2380,default=http://localhost:7001"
+ env variable: ETCD_INITIAL_CLUSTER
+ The key is the value of the `-name` flag for each node provided. The default uses `default` for the key because this is the default for the `-name` flag.
##### -initial-cluster-state
+ Initial cluster state ("new" or "existing"). Set to `new` for all members present during initial static or DNS bootstrapping. If this option is set to `existing`, etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely.
+ default: "new"
+ env variable: ETCD_INITIAL_CLUSTER_STATE
[static bootstrap]: clustering.md#static
##### -initial-cluster-token
+ Initial cluster token for the etcd cluster during bootstrap.
+ default: "etcd-cluster"
+ env variable: ETCD_INITIAL_CLUSTER_TOKEN
##### -advertise-client-urls
+ List of this member's client URLs to advertise to the rest of the cluster. These URLs can contain domain names.
+ List of this member's client URLs to advertise to the rest of the cluster.
+ default: "http://localhost:2379,http://localhost:4001"
+ env variable: ETCD_ADVERTISE_CLIENT_URLS
+ example: "http://example.com:2379, http://10.0.0.1:2379"
+ Be careful if you are advertising URLs such as http://localhost:2379 from a cluster member and are using the proxy feature of etcd. This will cause loops, because the proxy will be forwarding requests to itself until its resources (memory, file descriptors) are eventually depleted.
##### -discovery
+ Discovery URL used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY
##### -discovery-srv
+ DNS srv domain used to bootstrap the cluster.
+ default: none
+ env variable: ETCD_DISCOVERY_SRV
##### -discovery-fallback
+ Expected behavior ("exit" or "proxy") when the discovery service fails.
+ default: "proxy"
+ env variable: ETCD_DISCOVERY_FALLBACK
##### -discovery-proxy
+ HTTP proxy to use for traffic to discovery service.
+ default: none
+ env variable: ETCD_DISCOVERY_PROXY
### Proxy Flags
@@ -137,100 +104,34 @@ To start etcd automatically using custom settings at startup in Linux, using a [
##### -proxy
+ Proxy mode setting ("off", "readonly" or "on").
+ default: "off"
+ env variable: ETCD_PROXY
##### -proxy-failure-wait
+ Time (in milliseconds) an endpoint will be held in a failed state before being reconsidered for proxied requests.
+ default: 5000
+ env variable: ETCD_PROXY_FAILURE_WAIT
##### -proxy-refresh-interval
+ Time (in milliseconds) of the endpoints refresh interval.
+ default: 30000
+ env variable: ETCD_PROXY_REFRESH_INTERVAL
##### -proxy-dial-timeout
+ Time (in milliseconds) for a dial to time out, or 0 to disable the timeout.
+ default: 1000
+ env variable: ETCD_PROXY_DIAL_TIMEOUT
##### -proxy-write-timeout
+ Time (in milliseconds) for a write to time out, or 0 to disable the timeout.
+ default: 5000
+ env variable: ETCD_PROXY_WRITE_TIMEOUT
##### -proxy-read-timeout
+ Time (in milliseconds) for a read to time out, or 0 to disable the timeout.
+ Don't change this value if you use watches, because they rely on long-polling requests.
+ default: 0
+ env variable: ETCD_PROXY_READ_TIMEOUT
### Security Flags
The security flags help to [build a secure etcd cluster][security].
##### -ca-file [DEPRECATED]
+ Path to the client server TLS CA file. `-ca-file ca.crt` could be replaced by `-trusted-ca-file ca.crt -client-cert-auth` and etcd will behave the same.
##### -ca-file
+ Path to the client server TLS CA file.
+ default: none
+ env variable: ETCD_CA_FILE
##### -cert-file
+ Path to the client server TLS cert file.
+ default: none
+ env variable: ETCD_CERT_FILE
##### -key-file
+ Path to the client server TLS key file.
+ default: none
+ env variable: ETCD_KEY_FILE
##### -client-cert-auth
+ Enable client cert authentication.
+ default: false
+ env variable: ETCD_CLIENT_CERT_AUTH
##### -trusted-ca-file
+ Path to the client server TLS trusted CA cert file.
##### -peer-ca-file
+ Path to the peer server TLS CA file.
+ default: none
+ env variable: ETCD_TRUSTED_CA_FILE
##### -peer-ca-file [DEPRECATED]
+ Path to the peer server TLS CA file. `-peer-ca-file ca.crt` could be replaced by `-peer-trusted-ca-file ca.crt -peer-client-cert-auth` and etcd will behave the same.
+ default: none
+ env variable: ETCD_PEER_CA_FILE
##### -peer-cert-file
+ Path to the peer server TLS cert file.
+ default: none
+ env variable: ETCD_PEER_CERT_FILE
##### -peer-key-file
+ Path to the peer server TLS key file.
+ default: none
+ env variable: ETCD_PEER_KEY_FILE
##### -peer-client-cert-auth
+ Enable peer client cert authentication.
+ default: false
+ env variable: ETCD_PEER_CLIENT_CERT_AUTH
##### -peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: none
+ env variable: ETCD_PEER_TRUSTED_CA_FILE
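Putting the security flags together, a member with both client and peer TLS enabled might be started as follows (a sketch; the certificate paths are illustrative):
```
etcd -name infra0 \
  -cert-file /path/to/server.crt -key-file /path/to/server.key \
  -client-cert-auth -trusted-ca-file /path/to/ca.crt \
  -peer-cert-file /path/to/peer.crt -peer-key-file /path/to/peer.key \
  -peer-client-cert-auth -peer-trusted-ca-file /path/to/ca.crt
```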
### Logging Flags
##### -debug
+ Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages)
+ env variable: ETCD_DEBUG
##### -log-package-levels
+ Set individual etcd subpackages to specific log levels. For example: `etcdserver=WARNING,security=DEBUG`.
+ default: none (INFO for all packages)
+ env variable: ETCD_LOG_PACKAGE_LEVELS
### Unsafe Flags
@@ -241,14 +142,6 @@ Follow the instructions when using these flags.
##### -force-new-cluster
+ Force the creation of a new one-member cluster. It forcibly commits configuration changes that remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
+ default: false
+ env variable: ETCD_FORCE_NEW_CLUSTER
### Experimental Flags
##### -experimental-v3demo
+ Enable experimental [v3 demo API](rfc/v3api.proto).
+ default: false
+ env variable: ETCD_EXPERIMENTAL_V3DEMO
### Miscellaneous Flags
@@ -256,9 +149,9 @@ Follow the instructions when using these flags.
+ Print the version and exit.
+ default: false
[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[proxy]: proxy.md
[security]: security.md
[restore]: admin_guide.md#restoring-a-backup
[build-cluster]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#static
[reconfig]: https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md
[discovery]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#discovery
[proxy]: https://github.com/coreos/etcd/blob/master/Documentation/proxy.md
[security]: https://github.com/coreos/etcd/blob/master/Documentation/security.md
[restore]: https://github.com/coreos/etcd/blob/master/Documentation/admin_guide.md#restoring-a-backup

View File

@@ -1,109 +0,0 @@
# etcd release guide
This guide describes how to release a new version of etcd.
The procedure includes some manual steps for sanity checking, but it can probably be further scripted. Please keep this document up-to-date if you make changes to the release process.
## Prepare Release
Set the desired version as an environment variable for the following steps. Here is an example for releasing 2.1.3:
```
export VERSION=v2.1.3
export PREV_VERSION=v2.1.2
```
All release version numbers follow the format of [semantic versioning 2.0.0](http://semver.org/).
### Major, Minor Version Release, or its Pre-release
- Ensure the relevant milestone on GitHub is complete. All referenced issues should be closed, or moved elsewhere.
- Remove this release from [roadmap](https://github.com/coreos/etcd/blob/master/ROADMAP.md), if necessary.
- Ensure the latest upgrade documentation is available.
- Bump [hardcoded MinClusterVersion in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L29), if necessary.
- Add feature capability maps for the new version, if necessary.
### Patch Version Release
- Discuss which commits should be backported to the patch release. The commits should not include merge commits.
- Cherry-pick these commits, starting from the oldest, into the stable branch.
## Write Release Note
- Write an introduction for the new release: for example, what major bugs we fixed, what new features we introduced, or what performance improvements we made.
- Write a changelog since the last release. The changelog should be straightforward and easy for the end user to understand.
- Put `[GH XXXX]` at the head of each change line to reference the pull request that introduces the change, and add a link on it to jump to the pull request.
## Tag Version
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the latest version `${VERSION}`.
- Ensure all tests on the CI system pass.
- Manually check that etcd builds on Linux, Darwin, and Windows.
- Manually check that upgrading an etcd cluster from the previous minor version works well.
- Manually check that new features work well.
- Add a signed tag through `git tag -s ${VERSION}`.
- Sanity check tag correctness through `git show tags/$VERSION`.
- Push the tag to GitHub through `git push origin tags/$VERSION`. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
## Build Release Binaries and Images
- Ensure `actool` is available, or install it through `go get github.com/appc/spec/actool`.
- Ensure `docker` is available.
Run the release script in the root directory:
```
./scripts/release.sh ${VERSION}
```
It generates all release binaries and images under the directory ./release.
## Sign Binaries and Images
Choose the appropriate private key to sign the generated binaries and images.
The following commands are used for signing a public release:
```
cd release
# personal GPG is okay for now
for i in etcd-*{.zip,.tar.gz}; do gpg --sign ${i}; done
# use `CoreOS ACI Builder <release@coreos.com>` secret key
gpg -u 88182190 -a --output etcd-${VERSION}-linux-amd64.aci.asc --detach-sig etcd-${VERSION}-linux-amd64.aci
```
## Publish Release Page in GitHub
- Set release title as the version name.
- Follow the format of previous release pages.
- Attach the generated binaries, aci image and signatures.
- Select whether it is a pre-release.
- Publish the release!
## Publish Docker Image in Quay.io
- Push docker image:
```
docker login quay.io
docker push quay.io/coreos/etcd:${VERSION}
```
- Add `latest` tag to the new image on [quay.io](https://quay.io/repository/coreos/etcd?tag=latest&tab=tags) if this is a stable release.
## Announce to etcd-dev Googlegroup
- Follow the format of [previous release emails](https://groups.google.com/forum/#!forum/etcd-dev).
- Make sure to include a list of authors that contributed since the previous release - something like the following might be handy:
```
git log ...${PREV_VERSION} --pretty=format:"%an" | sort | uniq | tr '\n' ',' | sed -e 's#,#, #g' -e 's#, $##'
```
- Send email to etcd-dev@googlegroups.com
## Post Release
- Create new stable branch through `git push origin ${VERSION_MAJOR}.${VERSION_MINOR}` if this is a major stable release. This assumes `origin` corresponds to "https://github.com/coreos/etcd".
- Bump [hardcoded Version in the repository](https://github.com/coreos/etcd/blob/master/version/version.go#L30) to the version `${VERSION}+git`.

View File

@@ -1,109 +0,0 @@
# Discovery Service Protocol
The discovery service protocol helps new etcd members discover all the other members during the cluster bootstrap phase using a shared discovery URL.
The discovery service protocol is _only_ used during the cluster bootstrap phase, and cannot be used for runtime reconfiguration or cluster monitoring.
The protocol uses a new discovery token to bootstrap one _unique_ etcd cluster. Remember that one discovery token can represent only one etcd cluster. Once the discovery protocol has started on a token, even if it fails halfway, the token must not be used to bootstrap another etcd cluster.
The rest of this article will walk through the discovery process with examples that correspond to a self-hosted discovery cluster. The public discovery service, discovery.etcd.io, functions the same way, but with a layer of polish to abstract away ugly URLs, generate UUIDs automatically, and provide some protections against excessive requests. At its core, the public discovery service still uses an etcd cluster as the data store as described in this document.
## The Protocol Workflow
The idea of the discovery protocol is to use an internal etcd cluster to coordinate the bootstrap of a new cluster. First, all new members interact with the discovery service and help to generate the expected member list. Then each new member bootstraps its server using this list, which performs the same functionality as the -initial-cluster flag.
In the following example workflow, we will list each step of the protocol as a curl command for ease of understanding.
By convention the etcd discovery protocol uses the key prefix `_etcd/registry`. If `http://example.com` hosts an etcd cluster for the discovery service, the full URL to the discovery keyspace will be `http://example.com/v2/keys/_etcd/registry`. We will use this as the URL prefix in the example.
### Creating a New Discovery Token
Generate a unique token that will identify the new cluster. This will be used as a unique prefix in discovery keyspace in the following steps. An easy way to do this is to use `uuidgen`:
```
UUID=$(uuidgen)
```
### Specifying the Expected Cluster Size
You need to specify the expected cluster size for this discovery token. The size is used by the discovery service to know when it has found all members that will initially form the cluster.
```
curl -X PUT http://example.com/v2/keys/_etcd/registry/${UUID}/_config/size -d value=${cluster_size}
```
Usually the cluster size is 3, 5 or 7. Check [optimal cluster size](admin_guide.md#optimal-cluster-size) for more details.
### Bringing up etcd Processes
Now that you have your discovery URL, you can use it as the `-discovery` flag and bring up etcd processes. Every etcd process will follow the next few steps internally if given a `-discovery` flag.
### Registering itself
The first thing each etcd process does is register itself with the discovery URL as a member. This is done by creating its member ID as a key under the discovery URL.
```
curl -X PUT http://example.com/v2/keys/_etcd/registry/${UUID}/${member_id}?prevExist=false -d value="${member_name}=${member_peer_url_1}&${member_name}=${member_peer_url_2}"
```
### Checking the Status
It checks the expected cluster size and the registration status in the discovery URL, and decides what the next action is.
```
curl -X GET http://example.com/v2/keys/_etcd/registry/${UUID}/_config/size
curl -X GET http://example.com/v2/keys/_etcd/registry/${UUID}
```
If there are not yet enough registered members, it will wait for the remaining members to appear.
If the number of registered members is bigger than the expected size N, it treats the first N registered members as the member list for the cluster. If the member itself is in the member list, the discovery procedure succeeds and it fetches all peers through the member list. If it is not in the member list, the discovery procedure fails because the cluster is already full.
In the etcd implementation, a member may check the cluster status even before registering itself, so it can fail quickly if the cluster is already full.
### Waiting for All Members
The wait process is described in detail [here](https://github.com/coreos/etcd/blob/master/Documentation/api.md#waiting-for-a-change).
```
curl -X GET http://example.com/v2/keys/_etcd/registry/${UUID}?wait=true&waitIndex=${current_etcd_index}
```
It keeps waiting until all members are found.
## Public Discovery Service
CoreOS Inc. hosts a public discovery service at https://discovery.etcd.io/, which provides some nice features for ease of use.
### Mask Key Prefix
The public discovery service redirects `https://discovery.etcd.io/${UUID}` to the key at `/v2/keys/_etcd/registry` in the etcd cluster behind it. It masks the registry key prefix to keep discovery URLs short and readable.
### Get new token
```
GET /new
Sent query:
size=${cluster_size}
Possible status codes:
200 OK
400 Bad Request
200 Body:
generated discovery url
```
The generation process in the service follows the steps from [Creating a New Discovery Token](#creating-a-new-discovery-token) to [Specifying the Expected Cluster Size](#specifying-the-expected-cluster-size).
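For example, a new discovery URL for a three-member cluster can be requested like this:
```
curl https://discovery.etcd.io/new?size=3
```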
### Check Discovery Status
```
GET /${UUID}
```
You can check the status for this discovery token, including the machines that have been registered, by requesting the value of the UUID.
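For example:
```
curl https://discovery.etcd.io/${UUID}
```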
### Open-source repository
The repository is located at https://github.com/coreos/discovery.etcd.io. You could use it to build your own public discovery service.

View File

@@ -13,8 +13,7 @@ export HostIP="192.168.12.50"
The following `docker run` command will expose the etcd client API over ports 4001 and 2379, and expose the peer port over 2380.
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
-name etcd0 \
-advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -43,8 +42,7 @@ The main difference being the value used for the `-initial-cluster` flag, which
### etcd0
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
-name etcd0 \
-advertise-client-urls http://192.168.12.50:2379,http://192.168.12.50:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -58,8 +56,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380
### etcd1
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
-name etcd1 \
-advertise-client-urls http://192.168.12.51:2379,http://192.168.12.51:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -73,8 +70,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380
### etcd2
```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
--name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
-name etcd2 \
-advertise-client-urls http://192.168.12.52:2379,http://192.168.12.52:4001 \
-listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \

View File

@@ -1,80 +0,0 @@
# FAQ
## 1) How come I can read an old version of the data when a majority of the members are down?
In situations where a client connects to a minority, etcd by default favors
availability over consistency. This means that even though data might be “out
of date”, it is still better to return something than nothing.
In order to confirm that a read is up to date with a majority of the cluster,
the client can use the `quorum=true` parameter on reads of keys. This means
that a majority of the cluster is checked on reads before returning the data;
otherwise the read will time out and fail.
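For example, a quorum read of a single key might look like this (the member address is illustrative):
```
curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```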
## 2) With quorum=false, doesn't this mean that if my client switched the member it was connected to, that it could experience a logical ordering where the cluster goes backwards in time?
Yes, but this could be handled in the etcd client implementation by
remembering the last seen index. The “index” is the cluster's single
irrevocable sequence of the entire modification history. The client can
remember the last seen index, and determine, by comparing the index returned
on the GET, whether the state of the key-value pair is before or after its
last seen state.
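As a sketch of what a client can observe (assuming a local member on 127.0.0.1:2379), every v2 read carries the cluster index in its response:
```
curl -i http://127.0.0.1:2379/v2/keys/foo
# The response headers include the cluster-wide index, e.g.:
#   X-Etcd-Index: 35
# and the JSON body includes the key's modifiedIndex; a client can remember
# these values and compare them against later reads to detect going backwards.
```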
## 3) What happens if a watch is registered on a minority member?
The watch will stay untriggered, even as modifications are occurring in the
majority quorum. This is an open issue, and is being addressed in v3. There are
multiple ways to work around the watch trigger not firing.
1) build a signaling mechanism independent of etcd. This could be as simple as
a “pulse” to the client to reissue a GET with quorum=true for the most recent
version of the data.
2) poll the `/v2/keys` endpoint and check that the raft index is increasing on
each poll interval, as sketched below.
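A minimal sketch of the second workaround, assuming a local member on 127.0.0.1:2379:
```
# Periodically poll the keyspace and print the raft index header;
# if it stops increasing, this member may be cut off from the majority.
while true; do
  curl -s -i http://127.0.0.1:2379/v2/keys/ | grep -i '^X-Raft-Index'
  sleep 5
done
```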
## 4) What is a proxy used for?
A proxy is a redirection server to the etcd cluster. The proxy handles the
redirection of a client to the current configuration of the etcd cluster. A
typical use case is to start a proxy on a machine, and on first boot up of the
proxy specify both the `--proxy` flag and the `--initial-cluster` flag.
From there, any etcdctl client that starts up automatically speaks to the local
proxy and the proxy redirects operations to the current configuration of the
cluster it was originally paired with.
In the v2 spec of etcd, proxies cannot be promoted to members of the cluster.
They also cannot be promoted to followers or at any point become part of the
replication of the etcd cluster itself.
## 5) How is cluster membership and health handled in etcd v2?
The design goal of etcd is that reconfiguration is simply an API, and health
monitoring and addition/removal of members is up to the individual application
and their integration with the reconfiguration API.
Thus, a member that is down, even infinitely, will never be automatically
removed from the etcd cluster member list.
This makes sense because it's usually an application-level / administrative
action to determine whether a reconfiguration should happen based on health.
For more information, refer to [Documentation/runtime-reconfiguration.md].
## 6) How does --peers work with etcdctl?
The `--peers` flag can specify any number of etcd cluster members in a
comma-separated list. This list might be a subset of, equal to, or larger than
the actual etcd cluster member list itself.
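For example (the endpoints are illustrative):
```
etcdctl --peers http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379 get /foo
```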
If only one peer is specified via the `--peers` flag, etcdctl discovers the
rest of the cluster via the member list of that one peer, and then it randomly
chooses a member to use. Again, the client can use the `quorum=true` flag on
reads, which will always fail when using a member in the minority.
If peers from multiple clusters are specified via the `--peers` flag, etcdctl
will randomly choose a peer, and the request will simply get routed to one of
the clusters. This is probably not what you want.

View File

@@ -22,10 +22,6 @@ The node in each member follows raft consensus protocol to replicate logs. Clust
A peer is another member of the same cluster.
### Proposal
A proposal is a request (for example a write request, a configuration change request) that needs to go through raft protocol.
### Client
A client is a caller of the cluster's HTTP API.

View File

@@ -1,65 +0,0 @@
# FAQ
## Initial Bootstrapping UX
etcd initial bootstrapping is done via command line flags such as
`--initial-cluster` or `--discovery`. These flags can safely be left on the
command line after your cluster is running but they will be ignored if you have
a non-empty data dir. So, why did we decide to have this sort of odd UX?
One of the design goals of etcd is easy bringup of clusters using a one-shot
static configuration like AWS Cloud Formation, PXE booting, etc. Essentially we
want to describe several virtual machines and bring them all up at once into an
etcd cluster.
To achieve this sort of hands-free cluster bootstrap we had two other options:
**API to bootstrap**
This is problematic because it cannot be coordinated from a single service file
and we didn't want to have the etcd socket listening but unresponsive to
clients for an unbounded period of time.
It would look something like this:
```
ExecStart=/usr/bin/etcd
ExecStartPost=/usr/bin/etcd init localhost:2379 --cluster=
```
**etcd init subcommand**
```
etcd init --cluster='default=http://localhost:2380,default=http://localhost:7001'...
etcd init --discovery https://discovery-example.etcd.io/193e4
```
Then after running an init step you would execute `etcd`. This however
introduced problems: we now have to define a hand-off protocol between the etcd
init process and the etcd binary itself. This is hard to coordinate in a single
service file such as:
```
ExecStartPre=/usr/bin/etcd init --cluster=....
ExecStart=/usr/bin/etcd
```
There are several error cases:
0) Init has already run and the data directory is already configured
1) Discovery fails because of network timeout, etc
2) Discovery fails because the cluster is already full and etcd needs to fall back to proxy
3) Static cluster configuration fails because of conflict, misconfiguration or timeout
In hindsight we could have made this work by doing:
```
rc status
0 Init already ran
1 Discovery fails on network timeout, etc
0 Discovery fails for cluster full, coordinate via proxy state file
1 Static cluster configuration failed
```
Perhaps we can add the init command in a future version and deprecate if the UX
continues to confuse people.

View File

@@ -7,9 +7,8 @@
- [etcd-dump](https://npmjs.org/package/etcd-dump) - Command line utility for dumping/restoring etcd.
- [etcd-fs](https://github.com/xetorthio/etcd-fs) - FUSE filesystem for etcd
- [etcd-browser](https://github.com/henszey/etcd-browser) - A web-based key/value editor for etcd using AngularJS
- [etcd-lock](https://github.com/datawisesystems/etcd-lock) - Master election & distributed r/w lock implementation using etcd - Supports v2
- [etcd-lock](https://github.com/datawisesystems/etcd-lock) - A lock implementation for etcd
- [etcd-console](https://github.com/matishsiao/etcd-console) - A web-based key/value editor for etcd using PHP
- [etcd-viewer](https://github.com/nikfoundas/etcd-viewer) - An etcd key-value store editor/viewer written in Java
**Go libraries**
@@ -34,7 +33,6 @@
- [stianeikeland/node-etcd](https://github.com/stianeikeland/node-etcd) - Supports v2 (w Coffeescript)
- [lavagetto/nodejs-etcd](https://github.com/lavagetto/nodejs-etcd) - Supports v2
- [deedubs/node-etcd-config](https://github.com/deedubs/node-etcd-config) - Supports v2
**Ruby libraries**
@@ -45,7 +43,6 @@
**C libraries**
- [jdarcy/etcd-api](https://github.com/jdarcy/etcd-api) - Supports v2
- [shafreeck/cetcd](https://github.com/shafreeck/cetcd) - Supports v2
**C++ libraries**
- [edwardcapriolo/etcdcpp](https://github.com/edwardcapriolo/etcdcpp) - Supports v2
@@ -71,11 +68,7 @@
**Haskell libraries**
- [wereHamster/etcd-hs](https://github.com/wereHamster/etcd-hs)
**R libraries**
- [ropensci/etseed](https://github.com/ropensci/etseed)
**Tcl libraries**
- [efrecon/etcd-tcl](https://github.com/efrecon/etcd-tcl) - Supports v2, except wait.
@@ -117,5 +110,3 @@ A detailed recap of client functionalities can be found in the [clients compatib
- [skynetservices/skydns](https://github.com/skynetservices/skydns) - RFC compliant DNS server
- [xordataexchange/crypt](https://github.com/xordataexchange/crypt) - Securely store values in etcd using GPG encryption
- [spf13/viper](https://github.com/spf13/viper) - Go configuration library, reads values from ENV, pflags, files, and etcd with optional encryption
- [lytics/metafora](https://github.com/lytics/metafora) - Go distributed task library
- [ryandoyle/nss-etcd](https://github.com/ryandoyle/nss-etcd) - A GNU libc NSS module for resolving names from etcd.

View File

@@ -1,137 +0,0 @@
## Metrics
**NOTE: The metrics feature is considered experimental. We might add/change/remove metrics without warning in future releases.**
etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server. The metrics can be used for real-time monitoring and debugging.
The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics` of etcd. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).
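For example (assuming a member listening on the default client port):
```
curl http://127.0.0.1:2379/metrics
```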
You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Prometheus server and monitor etcd metrics.
The naming of metrics follows the suggested [best practices of Prometheus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
etcd now exposes the following metrics:
### etcdserver
| Name | Description | Type |
|-----------------------------------------|--------------------------------------------------|---------|
| file_descriptors_used_total | The total number of file descriptors used | Gauge |
| proposal_durations_milliseconds | The latency distributions of committing proposal | Summary |
| pending_proposal_total | The total number of pending proposals | Gauge |
| proposal_failed_total | The total number of failed proposals | Counter |
High file descriptor usage (`file_descriptors_used_total` near the file descriptor limit of the process) indicates a potential file descriptor exhaustion issue, which might cause etcd to fail to create new WAL files and panic.
[Proposal](glossary.md#proposal) durations (`proposal_durations_milliseconds`) give you a summary of the proposal commit latency. Latency can be introduced into this process by network and disk IO.
Pending proposals (`pending_proposal_total`) give you an idea of how many proposals are queued and waiting for commit. An increasing pending count indicates a high client load or an unstable cluster.
Failed proposals (`proposal_failed_total`) are normally related to two issues: temporary failures related to a leader election or longer duration downtime caused by a loss of quorum in the cluster.
### store
These metrics describe the accesses into the data store of etcd members that exist in the cluster. They
are useful to count what kinds of actions are taken by users. It is also useful to see whether all etcd members
"see" the same set of data mutations, and whether reads and watches (which are local) are equally distributed.
All these metrics are prefixed with `etcd_store_`.
| Name | Description | Type |
|---------------------------|------------------------------------------------------------------------------------------|--------------------|
| reads_total | Total number of reads from store, should differ among etcd members (local reads). | Counter(action) |
| writes_total | Total number of writes to store, should be same among all etcd members. | Counter(action) |
| reads_failed_total | Number of failed reads from store (e.g. key missing) on local reads. | Counter(action) |
| writes_failed_total | Number of failed writes to store (e.g. failed compare and swap). | Counter(action) |
| expires_total | Total number of expired keys (due to TTL).   | Counter |
| watch_requests_total | Total number of incoming watch requests to this etcd member (local watches). | Counter |
| watchers | Current count of active watchers on this etcd member. | Gauge |
Both `reads_total` and `writes_total` count both successful and failed requests. `reads_failed_total` and
`writes_failed_total` count failed requests. A lot of failed writes indicate possible contention on keys (e.g. when
doing `compareAndSet`), and read failures indicate that some clients try to access keys that don't exist.
Example Prometheus queries that may be useful from these metrics (across all etcd members):
* `sum(rate(etcd_store_reads_total{job="etcd"}[1m])) by (action)`
`max(rate(etcd_store_writes_total{job="etcd"}[1m])) by (action)`
Rate of reads and writes by action, across all servers, across a time window of `1m`. The reason `max` is used
for writes, as opposed to `sum` for reads, is that all etcd nodes in the cluster apply all writes to their stores.
Shows the rate of successful read-only/write queries across all servers, across a time window of `1m`.
* `sum(rate(etcd_store_watch_requests_total{job="etcd"}[1m]))`
Shows the rate of new watch requests per second, likely driven by how often watched keys change.
* `sum(etcd_store_watchers{job="etcd"})`
Number of active watchers across all etcd servers.
### wal
| Name | Description | Type |
|------------------------------------|--------------------------------------------------|---------|
| fsync_durations_microseconds | The latency distributions of fsync called by wal | Summary |
| last_index_saved | The index of the last entry saved by wal | Gauge |
Abnormally high fsync duration (`fsync_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.
### snapshot
| Name | Description | Type |
|--------------------------------------------|------------------------------------------------------------|---------|
| snapshot_save_total_durations_microseconds | The total latency distributions of save called by snapshot | Summary |
Abnormally high snapshot duration (`snapshot_save_total_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.
### rafthttp
| Name | Description | Type | Labels |
|-----------------------------------|--------------------------------------------|---------|--------------------------------|
| message_sent_latency_microseconds | The latency distributions of messages sent | Summary | sendingType, msgType, remoteID |
| message_sent_failed_total | The total number of failed messages sent | Summary | sendingType, msgType, remoteID |
Abnormally high message duration (`message_sent_latency_microseconds`) indicates network issues and might cause the cluster to be unstable.
An increase in message failures (`message_sent_failed_total`) indicates more severe network issues and might cause the cluster to be unstable.
Label `sendingType` is the connection type used to send messages. `message`, `msgapp` and `msgappv2` use HTTP streaming, while `pipeline` issues an HTTP request for each message.
Label `msgType` is the type of raft message. `MsgApp` is the log replication message; `MsgSnap` is the snapshot install message; `MsgProp` is the proposal forward message; the others are used to maintain internal raft status. If you have a large snapshot, you would expect a long `MsgSnap` sending latency. For other types of messages, you would expect low latency, comparable to your ping latency if you have enough network bandwidth.
Label `remoteID` is the member ID of the message destination.
### proxy
etcd members operating in proxy mode do not perform store operations. They forward all requests
to cluster instances.
Tracking the rate of requests coming from a proxy allows one to pin down which machine is performing most reads/writes.
All these metrics are prefixed with `etcd_proxy_`
| Name | Description | Type |
|---------------------------|-----------------------------------------------------------------------------------------|--------------------|
| requests_total | Total number of requests by this proxy instance. | Counter(method) |
| handled_total | Total number of fully handled requests, with responses from etcd members. | Counter(method) |
| dropped_total | Total number of dropped requests due to forwarding errors to etcd members.  | Counter(method,error) |
| handling_duration_seconds | Bucketed handling times by HTTP method, including round trip to member instances. | Histogram(method) |
Example Prometheus queries that may be useful from these metrics (across all etcd servers):
* `sum(rate(etcd_proxy_handled_total{job="etcd"}[1m])) by (method)`
Rate of requests (by HTTP method) handled by all proxies, across a window of `1m`.
* `histogram_quantile(0.9, sum(increase(etcd_proxy_handling_duration_seconds_bucket{job="etcd",method="GET"}[5m])) by (le))`
`histogram_quantile(0.9, sum(increase(etcd_proxy_handling_duration_seconds_bucket{job="etcd",method!="GET"}[5m])) by (le))`
Shows the 0.90-tile latency (in seconds) of handling user requests across all proxy machines, with a window of `5m`.
* `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`
Number of failed requests on the proxy. This should be 0; spikes here indicate connectivity issues to the etcd cluster.

View File

@@ -1,13 +1,9 @@
## Proxy
etcd can now run as a transparent proxy. Running etcd as a proxy allows for easily discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participate in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.
etcd can now run as a transparent proxy. Running etcd as a proxy allows for easily discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participant in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.
etcd currently supports two proxy modes: `readwrite` and `readonly`. The default mode is `readwrite`, which forwards both read and write requests to the etcd cluster. A `readonly` etcd proxy only forwards read requests to the etcd cluster, and returns `HTTP 501` to all write requests.
The proxy will shuffle the list of cluster members periodically to avoid sending all connections to a single member.
The member list used by the proxy consists of all client URLs advertised within the cluster, as specified in each member's `-advertise-client-urls` flag. If this flag is set incorrectly, requests sent to the proxy are forwarded to the wrong addresses and then fail. Including URLs in the `-advertise-client-urls` flag that point to the proxy itself, e.g. http://localhost:2379, is even more problematic, as it will cause loops: the proxy keeps trying to forward requests to itself until its resources (memory, file descriptors) are eventually depleted. The fix for this problem is to restart the etcd member with a correct `-advertise-client-urls` flag. After the proxy's client URL list is recalculated, which happens every 30 seconds, requests will be forwarded correctly.
### Using an etcd proxy
To start etcd in proxy mode, you need to provide three flags: `proxy`, `listen-client-urls`, and `initial-cluster` (or `discovery`).
@@ -17,9 +13,8 @@ The proxy will be listening on `listen-client-urls` and forward requests to the
#### Start an etcd proxy with a static configuration
To start a proxy that will connect to a statically defined etcd cluster, specify the `initial-cluster` flag:
```
etcd -proxy on -listen-client-urls http://127.0.0.1:8080 -initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
etcd -proxy on -listen-client-urls 127.0.0.1:8080 -initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```
#### Start an etcd proxy with the discovery service
@@ -28,10 +23,10 @@ If you bootstrap an etcd cluster using the [discovery service][discovery-service
To start a proxy using the discovery service, specify the `discovery` flag. The proxy will wait until the etcd cluster defined at the `discovery` url finishes bootstrapping, and then start to forward the requests.
```
etcd -proxy on -listen-client-urls http://127.0.0.1:8080 -discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd -proxy on -listen-client-urls 127.0.0.1:8080 -discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```
#### Fallback to proxy mode with discovery service
If you bootstrap an etcd cluster using the [discovery service][discovery-service] with more than the expected number of etcd members, the extra etcd processes will fall back to being `readwrite` proxies by default. They will forward requests to the cluster as described above. For example, if you create a discovery url with `size=5`, and start ten etcd processes using that same discovery url, the result will be a cluster with five etcd members and five proxies. Note that this behaviour can be disabled with the `proxy-fallback` flag.
[discovery-service]: clustering.md#discovery
[discovery-service]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#discovery

View File

@@ -1,43 +0,0 @@
## Reporting Bugs
If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue](https://github.com/coreos/etcd/issues/new). We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that one does not already exist.
To make your bug report accurate and easy to understand, please try to create bug reports that are:
- Specific. Include as much detail as possible: which version, what environment, what configuration, etc. You can also attach the etcd log (the starting log with the etcd configuration is especially important).
- Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce; please include the steps that might lead to the problem. You can also attach the affected etcd data dir and stack trace to the bug report.
- Isolated. Please try to isolate and reproduce the bug with minimum dependencies. A bug report that involves many dependencies significantly slows down the fix. Debugging external systems that rely on etcd is out of scope, but we are happy to point you in the right direction or help you interact with etcd in the correct manner.
- Unique. Do not duplicate an existing bug report.
- Scoped. One bug per report. Do not follow up with another bug inside one report.
You might also want to read [Elika Etemad's article on filing good bug reports](http://fantasai.inkedblade.net/style/talks/filing-good-bugs/) before creating a bug report.
We might ask you for further information to locate a bug. A duplicated bug report will be closed.
## Frequently Asked Questions
### How to get stack trace
``` bash
$ kill -QUIT $PID
```
### How to get etcd version
``` bash
$ etcd --version
```
### How to get etcd configuration and log when it runs as systemd service etcd2.service
``` bash
$ sudo systemctl cat etcd2
$ sudo journalctl -u etcd2
```
Due to an upstream systemd bug, journald may miss the last few log lines when its process exits. If journalctl tells you that etcd stopped without a fatal or panic message, try `sudo journalctl -f -t etcd2` to get the full log.

View File

@@ -0,0 +1,470 @@
# v2 Auth and Security
## etcd Resources
There are three types of resources in etcd:
1. user resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
### User Resources
#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability on the resource if one of the roles has that capability.
The special static `root` user has a ROOT role. (Caps for visual aid throughout)
#### Role
Each role has exactly one associated Permission List. A permission list exists for each permission (read or write) on key-value resources. A role with the `manage` permission on a key-value resource can grant/revoke capabilities on that key-value resource to other roles.
The special static ROOT role has full permissions on all key-value resources, plus the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources.
#### Permissions
There are two types of permissions, `read` and `write`. All management stems from the ROOT user.
A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported (incidentally, this is what Amazon S3 does). DENY becomes more complicated and is TBD.
### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
The glob match rules are as follows:
* `*` and `\` are special characters, representing "greedy match" and "escape" respectively.
* As a corollary, `\*` and `\\` are the corresponding literal matches.
* All other bytes match exactly their bytes, starting always from the *first byte*. (For regex fans, `re.match` in Python)
* Examples:
* `/foo` matches only the single key/directory of `/foo`
* `/foo*` matches the prefix `/foo`, and all subdirectories/keys
* `/foo/*/bar` matches the key `bar` in any (recursive) subdirectory of `/foo`.
### Settings Resources
Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling security, replacing certificates, and any other dynamic configuration by the administrator.
## v2 Auth
### Basic Auth
We only support [Basic Auth](http://en.wikipedia.org/wiki/Basic_access_authentication) for the first version. Clients need to attach the basic auth credentials to the HTTP `Authorization` header.
### Authorization field for operations
Added to requests to /v2/keys, /v2/security
Add code 403 Forbidden to the set of responses from the v2 API
Authorization: Basic {encoded string}
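As a sketch of what a client sends (the user, password, and key are illustrative; `curl -u` builds the header shown above):
```
# Basic auth against the keys API; curl base64-encodes "alice:alicePassword"
curl -u alice:alicePassword http://127.0.0.1:2379/v2/keys/foo

# The same request with the header spelled out
# ("YWxpY2U6YWxpY2VQYXNzd29yZA==" is base64 of "alice:alicePassword")
curl -H "Authorization: Basic YWxpY2U6YWxpY2VQYXNzd29yZA==" http://127.0.0.1:2379/v2/keys/foo
```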
### Future Work
Other types of auth can be considered for the future (eg, signed certs, public keys) but the `Authorization:` header allows for other such types
### Things out of Scope for etcd Permissions
* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (eg: users modifying keys outside work hours)
## API endpoints
An Error JSON corresponds to:
  {
    "name": "ErrErrorName",
    "description" : "The longer helpful description of the error."
  }
#### Users
The User JSON object is formed as follows:
```
{
  "user": "userName",
  "password": "password",
  "roles": [
    "role1",
    "role2"
  ],
  "grant": [],
  "revoke": [],
  "lastModified": "2006-01-02Z04:05:07"
}
```
Password is only passed when necessary. Last Modified is set by the server and ignored in all client posts.
**Get a list of users**
GET/HEAD /v2/security/user
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
403 Forbidden
200 Headers:
ETag: "<hash of list of users>"
Content-type: application/json
200 Body:
{
"users": ["alice", "bob", "eve"]
}
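For instance, a hedged `curl` sketch of this call (credentials illustrative):
```
# List users as root; expect a JSON body like the one above
curl -u root:rootPassword http://127.0.0.1:2379/v2/security/user
```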
**Get User Details**
GET/HEAD /v2/security/users/alice
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
200 Headers:
ETag: "users/alice:<lastModified>"
Content-type: application/json
200 Body:
{
"user" : "alice"
"roles" : ["fleet", "etcd"]
"lastModified": "2015-02-05Z18:00:00"
}
**Create A User**
A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields must be supplied.
PUT /v2/security/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
JSON struct, above, matching the appropriate name and with starting roles.
Possible Status Codes:
200 OK
403 Forbidden
409 Conflict (if exists)
200 Headers:
ETag: "users/charlie:<tzNow>"
200 Body: (empty)
**Remove A User**
DELETE /v2/security/users/charlie
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
200 Headers:
200 Body: (empty)
**Grant a Role(s) to a User**
PUT /v2/security/users/charlie/grant
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
{ "grantRoles" : ["fleet", "etcd"], (extra JSON data for checking OK) }
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
409 Conflict
200 Headers:
ETag: "users/charlie:<tzNow>"
200 Body:
JSON user struct, updated. "roles" now contains the grants, and "grantRoles" is empty. If there is an error in the set of roles to be added, for example, a non-existent role, then 409 is returned, with an error JSON stating why.
**Revoke a Role(s) from a User**
PUT /v2/security/users/charlie/revoke
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
{ "revokeRoles" : ["fleet"], (extra JSON data for checking OK) }
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
409 Conflict
200 Headers:
ETag: "users/charlie:<tzNow>"
200 Body:
JSON user struct, updated. "roles" now doesn't contain the roles, and "revokeRoles" is empty. If there is an error in the set of roles to be removed, for example, a non-existent role, then 409 is returned, with an error JSON stating why.
**Change password**
PUT /v2/security/users/charlie/password
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
{"user": "charlie", "password": "newCharliePassword"}
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
200 Headers:
ETag: "users/charlie:<tzNow>"
200 Body:
JSON user struct, updated
#### Roles
A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
  "role" : "fleet",
  "permissions" : {
    "kv" : {
      "read" : [ "/fleet/" ],
      "write": [ "/fleet/" ]
    }
  },
  "grant" : {"kv": {...}},
  "revoke": {"kv": {...}},
  "members" : ["alice", "bob"],
  "lastModified": "2015-02-05Z18:00:00"
}
```
**Get a list of Roles**
GET/HEAD /v2/security/roles
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
403 Forbidden
200 Headers:
ETag: "<hash of list of roles>"
Content-type: application/json
200 Body:
{
"roles": ["fleet", "etcd", "quay"]
}
**Get Role Details**
GET/HEAD /v2/security/roles/fleet
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
200 Headers:
ETag: "roles/fleet:<lastModified>"
Content-type: application/json
200 Body:
{
"role" : "fleet",
"read": {
"prefixesAllowed": ["/fleet/"],
},
"write": {
"prefixesAllowed": ["/fleet/"],
},
"members" : ["alice", "bob"] // Reverse map optional?
"lastModified": "2015-02-05Z18:00:00"
}
**Create A Role**
PUT /v2/security/roles/rocket
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
Initial desired JSON state, complete with prefixes and permissions.
Possible Status Codes:
201 Created
403 Forbidden
404 Not Found
409 Conflict (if exists)
200 Headers:
ETag: "roles/rocket:<tzNow>"
200 Body:
JSON state of the role
**Remove A Role**
DELETE /v2/security/roles/rocket
Sent Headers:
Authorization: Basic <BasicAuthString>
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
200 Headers:
200 Body: (empty)
**Update a Role's Permission List for {read,write}ing**
PUT /v2/security/roles/rocket/update
Sent Headers:
Authorization: Basic <BasicAuthString>
Put Body:
{
"role" : "rocket",
"grant": {
"kv": {
"read" : [ "/rocket/"]
}
},
"revoke": {
"kv": {
"read" : [ "/fleet/"]
}
}
}
Possible Status Codes:
200 OK
403 Forbidden
404 Not Found
200 Headers:
ETag: "roles/rocket:<tzNow>"
200 Body:
JSON state of the role, with the grant and revoke lists now empty and the deltas applied appropriately.
#### TBD Management modification
## Example Workflow
Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
### Enable security
//TODO(barakmich): Maybe this is dynamic? I don't like the idea of rebooting when we don't have to.
#### Default ROOT
etcd always has a `root` user when started with security enabled. The default username is `root`, and the password is `root`.
// TODO(barakmich): if the enabling is dynamic, perhaps that'd be a good time to set a password? Thus obviating the next section.
### Change root's password
```
PUT /v2/security/users/root/password
Headers:
Authorization: Basic <root:root>
Put Body:
{"user" : "root", "password": "betterRootPW!"}
```
//TODO(barakmich): How do you recover the root password? *This* may require a flag and a restart. `--disable-permissions`
### Create Roles for the Applications
Create the rocket role fully specified:
```
PUT /v2/security/roles/rocket
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
"role" : "rocket",
"permissions" : {
"kv": {
"read": [
"/rocket/"
],
"write": [
"/rocket/"
]
}
}
}
```
But let's make fleet just a basic role for now:
```
PUT /v2/security/roles/fleet
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{
  "role" : "fleet"
}
```
### Optional: Add some permissions to the roles
Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rocket case. So this step is optional.)
```
PUT /v2/security/roles/fleet/update
Headers:
Authorization: Basic <root:betterRootPW!>
Put Body:
{
"role" : "fleet",
"grant" : {
"kv" : {
"read": [
"/fleet/"
]
}
}
}
```
### Create Users
Same as before, let's set up rocketuser all at once and fleetuser separately:
```
PUT /v2/security/users/rocketuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "rocketuser", "password" : "rocketpw", "roles" : ["rocket"]}
```
```
PUT /v2/security/users/fleetuser
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user" : "fleetuser", "password" : "fleetpw"}
```
### Optional: Grant Roles to Users
Likewise, let's explicitly grant fleetuser access.
```
PUT /v2/security/users/fleetuser/grant
Headers:
Authorization: Basic <root:betterRootPW!>
Body:
{"user": "fleetuser", "grant": ["fleet"]}
```
#### Start to use fleetuser and rocketuser
For example:
```
PUT /v2/keys/rocket/RocketData
Headers:
Authorization: Basic <rocketuser:rocketpw>
```
Reads and writes outside the prefixes granted will fail with a 403 Forbidden.
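A short `curl` sketch of this behavior, with illustrative addresses and credentials (status codes per the spec above):
```
# Allowed: rocketuser writes under its granted /rocket/ prefix
curl -u rocketuser:rocketpw -X PUT -d value=RocketData http://127.0.0.1:2379/v2/keys/rocket/RocketData

# Denied: the same user writing outside its prefix gets 403 Forbidden
curl -u rocketuser:rocketpw -X PUT -d value=oops http://127.0.0.1:2379/v2/keys/fleet/oops
```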

View File

@@ -1,191 +0,0 @@
## Design
1. Flatten binary key-value space
2. Keep the event history until compaction
- access to old versions of keys
- user controlled history compaction
3. Support range query
- Pagination support with limit argument
- Support consistency guarantee across multiple range queries
4. Replace TTL key with Lease
- more efficient / low-cost keep-alive
- a logical group of TTL keys
5. Replace CAS/CAD with multi-object Txn
- MUCH MORE powerful and flexible
6. Support efficient watching with multiple ranges
7. RPC API supports the complete set of APIs.
- more efficient than JSON/HTTP
- additional txn/lease support
8. HTTP API supports a subset of APIs.
- easy for people to try out etcd
- easy for people to write simple etcd applications
## Protobuf Defined API
[protobuf](./v3api.proto)
### Examples
#### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
PutResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
}
```
#### Get a key (assume we have foo=bar)
```
Get ( RangeRequest { key = foo } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 1,
raft_term = 0x1,
kvs = {
{
key = foo,
value = bar,
create_revision = 1,
mod_revision = 1,
version = 1;
},
},
}
```
#### Range over a key space (assume we have foo0=bar0… foo100=bar100)
```
Range ( RangeRequest { key = foo, end_key = foo80, limit = 30 } )
RangeResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 100,
raft_term = 0x1,
kvs = {
{
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
...,
{
key = foo30,
value = bar30,
create_revision = 30,
mod_revision = 30,
version = 1;
},
},
}
```
#### Finish a txn (assume we have foo0=bar0, foo1=bar1)
```
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
compare = {
{compareType = equal, key = foo0, mod_revision = 1},
{compareType = greater, key = foo1, mod_revision = 1}}
},
// if the comparison succeeds, put foo2 = bar2
success = {PutRequest { key = foo2, value = success }},
// if the comparison fails, put foo2=fail
failure = {PutRequest { key = foo2, value = failure }},
)
TxnResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
succeeded = true,
responses = {
// response of PUT foo2=success
{
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
}
}
}
```
#### Watch on a key/range
```
Watch( WatchRequest{
key = foo,
end_key = fop, // prefix foo
start_revision = 20,
end_revision = 10000,
// server decided notification frequency
progress_notification = true,
}
… // this can be a watch request stream
)
// put (foo0=bar0) event at 3
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar0,
create_revision = 1,
mod_revision = 1,
version = 1;
},
}
// a notification at 2000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 2000,
raft_term = 0x1,
// nil event as notification
}
// put (foo0=bar3000) event at 3000
WatchResponse {
cluster_id = 0x1000,
member_id = 0x1,
revision = 3000,
raft_term = 0x1,
event_type = put,
kv = {
key = foo0,
value = bar3000,
create_revision = 1,
mod_revision = 3000,
version = 2;
},
}
```

View File

@@ -1,285 +0,0 @@
syntax = "proto3";
// Interface exported by the server.
service etcd {
// Range gets the keys in the range from the store.
rpc Range(RangeRequest) returns (RangeResponse) {}
// Put puts the given key into the store.
// A put request increases the revision of the store,
// and generates one event in the event history.
rpc Put(PutRequest) returns (PutResponse) {}
// Delete deletes the given range from the store.
// A delete request increases the revision of the store,
// and generates one event in the event history.
rpc DeleteRange(DeleteRangeRequest) returns (DeleteRangeResponse) {}
// Txn processes all the requests in one transaction.
// A txn request increases the revision of the store,
// and generates events with the same revision in the event history.
rpc Txn(TxnRequest) returns (TxnResponse) {}
// Watch watches events that are happening or have happened in etcd. Both input and output
// are streams. One watch rpc can watch multiple ranges and get a stream of
// events. The whole event history can be watched unless compacted.
rpc WatchRange(stream WatchRangeRequest) returns (stream WatchRangeResponse) {}
// Compact compacts the event history in etcd. Users should compact the
// event history periodically, or it will grow infinitely.
rpc Compact(CompactionRequest) returns (CompactionResponse) {}
// LeaseCreate creates a lease. A lease has a TTL. The lease will expire if the
// server does not receive a keepAlive within TTL from the lease holder.
// All keys attached to the lease will be expired and deleted if the lease expires.
// The key expiration generates an event in event history.
rpc LeaseCreate(LeaseCreateRequest) returns (LeaseCreateResponse) {}
// LeaseRevoke revokes a lease. All the keys attached to the lease will be expired and deleted.
rpc LeaseRevoke(LeaseRevokeRequest) returns (LeaseRevokeResponse) {}
// LeaseAttach attaches keys with a lease.
rpc LeaseAttach(LeaseAttachRequest) returns (LeaseAttachResponse) {}
// LeaseTxn is like Txn. It has two additional success and failure LeaseAttachRequest lists.
// If the Txn is successful, the success list will be executed; otherwise the failure list
// will be executed.
rpc LeaseTxn(LeaseTxnRequest) returns (LeaseTxnResponse) {}
// KeepAlive keeps the lease alive.
rpc LeaseKeepAlive(stream LeaseKeepAliveRequest) returns (stream LeaseKeepAliveResponse) {}
}
message ResponseHeader {
// an error type message?
string error = 1;
uint64 cluster_id = 2;
uint64 member_id = 3;
// revision of the store when the request was applied.
int64 revision = 4;
// term of raft when the request was applied.
uint64 raft_term = 5;
}
message RangeRequest {
// if the range_end is not given, the request returns the key.
bytes key = 1;
// if the range_end is given, it gets the keys in range [key, range_end).
bytes range_end = 2;
// limit the number of keys returned.
int64 limit = 3;
// range over the store at the given revision.
// if revision is less than or equal to zero, range over the newest store.
// if the revision has been compacted, ErrCompaction will be returned in
// response.
int64 revision = 4;
}
message RangeResponse {
ResponseHeader header = 1;
repeated storagepb.KeyValue kvs = 2;
// more indicates if there are more keys to return in the requested range.
bool more = 3;
}
message PutRequest {
bytes key = 1;
bytes value = 2;
}
message PutResponse {
ResponseHeader header = 1;
}
message DeleteRangeRequest {
// if the range_end is not given, the request deletes the key.
bytes key = 1;
// if the range_end is given, it deletes the keys in range [key, range_end).
bytes range_end = 2;
}
message DeleteRangeResponse {
ResponseHeader header = 1;
}
message RequestUnion {
oneof request {
RangeRequest request_range = 1;
PutRequest request_put = 2;
DeleteRangeRequest request_delete_range = 3;
}
}
message ResponseUnion {
oneof response {
RangeResponse response_range = 1;
PutResponse response_put = 2;
DeleteRangeResponse response_delete_range = 3;
}
}
message Compare {
enum CompareResult {
EQUAL = 0;
GREATER = 1;
LESS = 2;
}
enum CompareTarget {
VERSION = 0;
CREATE = 1;
MOD = 2;
VALUE= 3;
}
CompareResult result = 1;
CompareTarget target = 2;
// key path
bytes key = 3;
oneof target_union {
// version of the given key
int64 version = 4;
// create revision of the given key
int64 create_revision = 5;
// last modified revision of the given key
int64 mod_revision = 6;
// value of the given key
bytes value = 7;
}
}
// If the comparisons succeed, then the success requests will be processed in order,
// and the response will contain their respective responses in order.
// If the comparisons fail, then the failure requests will be processed in order,
// and the response will contain their respective responses in order.
// From google paxosdb paper:
// Our implementation hinges around a powerful primitive which we call MultiOp. All other database
// operations except for iteration are implemented as a single call to MultiOp. A MultiOp is applied atomically
// and consists of three components:
// 1. A list of tests called guard. Each test in guard checks a single entry in the database. It may check
// for the absence or presence of a value, or compare with a given value. Two different tests in the guard
// may apply to the same or different entries in the database. All tests in the guard are applied and
// MultiOp returns the results. If all tests are true, MultiOp executes t op (see item 2 below), otherwise
// it executes f op (see item 3 below).
// 2. A list of database operations called t op. Each operation in the list is either an insert, delete, or
// lookup operation, and applies to a single database entry. Two different operations in the list may apply
// to the same or different entries in the database. These operations are executed
// if guard evaluates to
// true.
// 3. A list of database operations called f op. Like t op, but executed if guard evaluates to false.
message TxnRequest {
repeated Compare compare = 1;
repeated RequestUnion success = 2;
repeated RequestUnion failure = 3;
}
message TxnResponse {
ResponseHeader header = 1;
bool succeeded = 2;
repeated ResponseUnion responses = 3;
}
message KeyValue {
bytes key = 1;
int64 create_revision = 2;
// mod_revision is the last modified revision of the key.
int64 mod_revision = 3;
// version is the version of the key. A deletion resets
// the version to zero and any modification of the key
// increases its version.
int64 version = 4;
bytes value = 5;
}
message WatchRangeRequest {
// if the range_end is not given, the request returns the key.
bytes key = 1;
// if the range_end is given, it gets the keys in range [key, range_end).
bytes range_end = 2;
// start_revision is an optional revision (inclusive) to watch from. No start_revision means "now".
int64 start_revision = 3;
// end_revision is an optional revision (exclusive) at which to end the watch. No end_revision means "forever".
int64 end_revision = 4;
bool progress_notification = 5;
}
message WatchRangeResponse {
ResponseHeader header = 1;
repeated Event events = 2;
}
message Event {
enum EventType {
PUT = 0;
DELETE = 1;
EXPIRE = 2;
}
EventType event_type = 1;
// a put event contains the current key-value
// a delete/expire event contains the previous
// key-value
KeyValue kv = 2;
}
// Compaction compacts the kv store up to the given revision (inclusive).
// It removes the old versions of a key. It keeps the newest version of
// the key even if its latest modification revision is smaller than the given
// revision.
message CompactionRequest {
int64 revision = 1;
}
message CompactionResponse {
ResponseHeader header = 1;
}
message LeaseCreateRequest {
// advisory ttl in seconds
int64 ttl = 1;
}
message LeaseCreateResponse {
ResponseHeader header = 1;
int64 lease_id = 2;
// server decided ttl in second
int64 ttl = 3;
string error = 4;
}
message LeaseRevokeRequest {
int64 lease_id = 1;
}
message LeaseRevokeResponse {
ResponseHeader header = 1;
}
message LeaseTxnRequest {
TxnRequest request = 1;
repeated LeaseAttachRequest success = 2;
repeated LeaseAttachRequest failure = 3;
}
message LeaseTxnResponse {
ResponseHeader header = 1;
TxnResponse response = 2;
repeated LeaseAttachResponse attach_responses = 3;
}
message LeaseAttachRequest {
int64 lease_id = 1;
bytes key = 2;
}
message LeaseAttachResponse {
ResponseHeader header = 1;
}
message LeaseKeepAliveRequest {
int64 lease_id = 1;
}
message LeaseKeepAliveResponse {
ResponseHeader header = 1;
int64 lease_id = 2;
int64 ttl = 3;
}

View File

@@ -4,8 +4,6 @@ etcd comes with support for incremental runtime reconfiguration, which allows us
Reconfiguration requests can only be processed when the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two-member cluster, because the majority of a two-member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and will need to [restart from majority failure][majority failure].
To better understand the design behind runtime reconfiguration, we suggest you read [this](runtime-reconf-design.md).
[majority failure]: #restart-cluster-from-majority-failure
## Reconfiguration Use Cases
@@ -39,7 +37,7 @@ To replace the machine, follow the instructions for [removing the member][remove
### Restart Cluster from Majority Failure
If the majority of your cluster is lost or all of your nodes have changed IP addresses, then you need to take manual action in order to recover safely.
If the majority of your cluster is lost, then you need to take manual action in order to recover safely.
The basic steps in the recovery process include [creating a new cluster using the old data][disaster recovery], forcing a single member to act as the leader, and finally using runtime configuration to [add new members][add member] to this new cluster one at a time.
[add member]: #add-a-new-member
@@ -54,38 +52,28 @@ This is essentially the same requirement as for any other write to etcd.
All changes to the cluster are done one at a time:
* To update a single member's peerURLs you will make an update operation
* To replace a single member you will make an add then a remove operation
* To increase from 3 to 5 members you will make two add operations
* To decrease from 5 to 3 you will make two remove operations
To replace a single member you will make an add then a remove operation
To increase from 3 to 5 members you will make two add operations
To decrease from 5 to 3 you will make two remove operations
All of these examples will use the `etcdctl` command line tool that ships with etcd.
If you want to use the member API directly you can find the documentation [here](other_apis.md).
### Update a Member
If you would like to update a member's IP address (peerURLs), first find the target member's ID. You can list all members with `etcdctl`:
```sh
$ etcdctl member list
6e3bd23ae5f1eae0: name=node2 peerURLs=http://localhost:23802 clientURLs=http://127.0.0.1:23792
924e2e83e93f2560: name=node3 peerURLs=http://localhost:23803 clientURLs=http://127.0.0.1:23793
a8266ecf031671f3: name=node1 peerURLs=http://localhost:23801 clientURLs=http://127.0.0.1:23791
```
In this example, let's `update` the member with ID a8266ecf031671f3 and change its peerURLs value to http://10.0.1.10:2380:
```sh
$ etcdctl member update a8266ecf031671f3 http://10.0.1.10:2380
Updated member with ID a8266ecf031671f3 in cluster
```
If you want to use the member API directly you can find the documentation [here](https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md).
### Remove a Member
First, we need to find the target member's ID. You can list all members with `etcdctl`:
```
$ etcdctl member list
6e3bd23ae5f1eae0: name=node2 peerURLs=http://localhost:7002 clientURLs=http://127.0.0.1:4002
924e2e83e93f2560: name=node3 peerURLs=http://localhost:7003 clientURLs=http://127.0.0.1:4003
a8266ecf031671f3: name=node1 peerURLs=http://localhost:7001 clientURLs=http://127.0.0.1:4001
```
Let us say the member ID we want to remove is a8266ecf031671f3.
We then use the `remove` command to perform the removal:
```sh
```
$ etcdctl member remove a8266ecf031671f3
Removed member a8266ecf031671f3 from cluster
```
@@ -102,12 +90,12 @@ It is safe to remove the leader, however the cluster will be inactive while a ne
Adding a member is a two step process:
* Add the new member to the cluster via the [members API](other_apis.md#post-v2members) or the `etcdctl member add` command.
* Add the new member to the cluster via the [members API](https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#post-v2members) or the `etcdctl member add` command.
* Start the new member with the new cluster configuration, including a list of the updated members (existing members + the new member).
Using `etcdctl` let's add the new member to the cluster by specifying its [name](configuration.md#-name) and [advertised peer URLs](configuration.md#-initial-advertise-peer-urls):
```sh
```
$ etcdctl member add infra3 http://10.0.1.13:2380
added member 9bf1b35fc7761a23 to cluster
@@ -119,11 +107,11 @@ ETCD_INITIAL_CLUSTER_STATE=existing
`etcdctl` has informed the cluster about the new member and printed out the environment variables needed to successfully start it.
Now start the new etcd process with the relevant flags for the new member:
```sh
```
$ export ETCD_NAME="infra3"
$ export ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380"
$ export ETCD_INITIAL_CLUSTER_STATE=existing
$ etcd -listen-client-urls http://10.0.1.13:2379 -advertise-client-urls http://10.0.1.13:2379 -listen-peer-urls http://10.0.1.13:2380 -initial-advertise-peer-urls http://10.0.1.13:2380 -data-dir %data_dir%
$ etcd -listen-client-urls http://10.0.1.13:2379 -advertise-client-urls http://10.0.1.13:2379 -listen-peer-urls http://10.0.1.13:2380 -initial-advertise-peer-urls http://10.0.1.13:2380
```
The new member will run as a part of the cluster and immediately begin catching up with the rest of the cluster.
@@ -136,7 +124,7 @@ If you add a new member to a 1-node cluster, the cluster cannot make progress be
In the following case we have not included our new host in the list of enumerated nodes.
If this is a new cluster, the node must be added to the list of initial cluster members.
```sh
```
$ etcd -name infra3 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state existing
@@ -146,7 +134,7 @@ exit 1
In this case we give a different address (10.0.1.14:2380) from the one that we used to join the cluster (10.0.1.13:2380).
```sh
```
$ etcd -name infra4 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra4=http://10.0.1.14:2380 \
-initial-cluster-state existing
@@ -156,7 +144,7 @@ exit 1
When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any alive member in the cluster:
```sh
```
$ etcd
etcd: this member has been permanently removed from the cluster. Exiting.
exit 1

View File

@@ -1,47 +0,0 @@
### Design of Runtime Reconfiguration
Runtime reconfiguration is one of the hardest and most error prone features in a distributed system, especially in a consensus based system like etcd.
Read on to learn about the design of etcd's runtime reconfiguration commands and how we tackled these problems.
### Two Phase Config Changes Keep you Safe
In etcd, every runtime reconfiguration has to go through [two phases](Documentation/runtime-configuration.md#add-a-new-member) for safety reasons. For example, to add a member you need to first inform the cluster of the new configuration and then start the new member.
Phase 1 - Inform cluster of new configuration
To add a member into an etcd cluster, you need to make an API call requesting that the new member be added to the cluster. This is the only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
Phase 2 - Start new member
To join the new etcd member into the existing cluster, you need to specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify that the current cluster configuration matches the one specified in `initial-cluster`. When the new member successfully starts, you know your cluster reached the expected configuration.
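As a concrete sketch of the two phases, reusing the commands from the runtime configuration guide (names and URLs illustrative):
```
# Phase 1: inform the cluster of the new configuration
etcdctl member add infra3 http://10.0.1.13:2380

# Phase 2: start the new member with the agreed configuration
etcd -name infra3 \
  -initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380 \
  -initial-cluster-state existing
```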
By splitting the process into two discrete phases, users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided against adding new members by mistake: if a new etcd member attempts to join the cluster before the cluster has accepted the configuration change, it will not be accepted by the cluster.
Without the explicit workflow around cluster membership, etcd would be vulnerable to unexpected cluster membership changes. For example, if etcd is running under an init system such as systemd, etcd would be restarted after being removed via the membership API, and would attempt to rejoin the cluster on startup. This cycle would repeat every time a member is removed via the API while systemd is set to restart etcd after it exits, which is not what you want.
We think runtime reconfiguration should be an infrequent operation. We made the decision to keep it explicit and user-driven to ensure configuration safety and to keep your cluster always running smoothly under your control.
### Permanent Loss of Quorum Requires New Cluster
If a cluster permanently loses a majority of its members, a new cluster will need to be started from an old data directory to recover the previous state.
It is entirely possible to force-remove the failed members from the existing cluster to recover. However, we decided not to support this method since it bypasses the normal consensus committing phase, which is unsafe. If the member to remove is not actually dead, or you force-remove different members through different members in the same cluster, you will end up with diverged clusters sharing the same clusterID. This is very dangerous and hard to debug/fix afterwards.
If you have a correct deployment, the possibility of permanent majority loss is very low. But it is a severe enough problem to be worth special care. We strongly suggest you read the [disaster recovery documentation](admin_guide.md#disaster-recovery) and prepare for permanent majority loss before you put etcd into production.
### Do Not Use Public Discovery Service For Runtime Reconfiguration
The public discovery service should only be used for bootstrapping a cluster. To join a member into an existing cluster, use the runtime reconfiguration API.
The discovery service is designed for bootstrapping an etcd cluster in a cloud environment, when you do not know the IP addresses of all the members beforehand. After you successfully bootstrap a cluster, the IP addresses of all the members are known. Technically, you should not need the discovery service any more.
Using the public discovery service may seem like a convenient way to do runtime reconfiguration, since the discovery service already has all the cluster configuration information. However, relying on the public discovery service brings trouble:
1. it introduces an external dependency for the entire life-cycle of your cluster, not just bootstrap time. If there is a network issue between your cluster and the public discovery service, your cluster will suffer from it.
2. the public discovery service must reflect the correct runtime configuration of your cluster during its life-cycle. It has to provide security mechanisms to prevent bad actions, and that is hard.
3. the public discovery service has to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.
If you want a discovery service that supports runtime reconfiguration, the best choice is to build your own private one.

View File

@@ -4,7 +4,7 @@ etcd supports SSL/TLS as well as authentication through client certificates, bot
To get up and running you first need to have a CA certificate and a signed key pair for one member. It is recommended to create and sign a new key pair for every member in a cluster.
For convenience the [cfssl](https://github.com/cloudflare/cfssl) tool provides an easy interface to certificate generation, and we provide a full example using the tool [here](../hack/tls-setup). Alternatively this site provides a good reference on how to generate self-signed key pairs:
For convenience the [etcd-ca](https://github.com/coreos/etcd-ca) tool provides an easy interface to certificate generation, alternatively this site provides a good reference on how to generate self-signed key pairs:
http://www.g-loaded.eu/2005/11/10/be-your-own-ca/
@@ -18,9 +18,7 @@ etcd takes several certificate related configuration options, either through com
`--key-file=<path>`: Key for the certificate. Must be unencrypted.
`--client-cert-auth`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail.
`--trusted-ca-file=<path>`: Trusted certificate authority.
`--ca-file=<path>`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the supplied CA; requests that don't supply a valid client certificate will fail.
**Peer (server-to-server / cluster) communication:**
@@ -30,9 +28,7 @@ The peer options work the same way as the client-to-server options:
`--peer-key-file=<path>`: Key for the certificate. Must be unencrypted.
`--peer-client-cert-auth`: When set, etcd will check all incoming peer requests from the cluster for valid client certificates signed by the supplied CA.
`--peer-trusted-ca-file=<path>`: Trusted certificate authority.
`--peer-ca-file=<path>`: When set, etcd will check all incoming peer requests from the cluster for valid client certificates signed by the supplied CA.
If either a client-to-server or peer certificate is supplied the key must also be set. All of these configuration options are also available through the environment variables, `ETCD_CA_FILE`, `ETCD_PEER_CA_FILE` and so on.
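For example, a sketch using environment variables in place of flags; `ETCD_CA_FILE` is named above, while `ETCD_CERT_FILE` and `ETCD_KEY_FILE` are assumed from the same naming pattern:
```
# Assumed flag-to-variable mapping: --ca-file -> ETCD_CA_FILE, and so on
export ETCD_CA_FILE=/path/to/ca.crt
export ETCD_CERT_FILE=/path/to/server.crt
export ETCD_KEY_FILE=/path/to/server.key
etcd -name infra0 -data-dir infra0 \
  -advertise-client-urls https://127.0.0.1:2379 -listen-client-urls https://127.0.0.1:2379
```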
@@ -72,10 +68,12 @@ You need the same files mentioned in the first example for this, as well as a ke
```sh
$ etcd -name infra0 -data-dir infra0 \
-client-cert-auth -trusted-ca-file=/path/to/ca.crt -cert-file=/path/to/server.crt -key-file=/path/to/server.key \
-ca-file=/path/to/ca.crt -cert-file=/path/to/server.crt -key-file=/path/to/server.key \
-advertise-client-urls https://127.0.0.1:2379 -listen-client-urls https://127.0.0.1:2379
```
Notice that the addition of the `-ca-file` option automatically enables client certificate checking.
Now try the same request as above to this server:
```sh
@@ -132,13 +130,13 @@ DISCOVERY_URL=... # from https://discovery.etcd.io/new
# member1
$ etcd -name infra1 -data-dir infra1 \
-peer-client-cert-auth -peer-trusted-ca-file=/path/to/ca.crt -peer-cert-file=/path/to/member1.crt -peer-key-file=/path/to/member1.key \
-ca-file=/path/to/ca.crt -cert-file=/path/to/member1.crt -key-file=/path/to/member1.key \
-initial-advertise-peer-urls=https://10.0.1.10:2380 -listen-peer-urls=https://10.0.1.10:2380 \
-discovery ${DISCOVERY_URL}
# member2
$ etcd -name infra2 -data-dir infra2 \
-peer-client-cert-auth -peer-trusted-ca-file=/path/to/ca.crt -peer-cert-file=/path/to/member2.crt -peer-key-file=/path/to/member2.key \
-ca-file=/path/to/ca.crt -cert-file=/path/to/member2.crt -key-file=/path/to/member2.key \
-initial-advertise-peer-urls=https://10.0.1.11:2380 -listen-peer-urls=https://10.0.1.11:2380 \
-discovery ${DISCOVERY_URL}
```
@@ -147,13 +145,6 @@ The etcd members will form a cluster and all communication between members in th
## Frequently Asked Questions
### My cluster is not working with peer tls configuration?
The internal protocol of etcd v2.0.x uses a lot of short-lived HTTP connections.
So, when enabling TLS you may need to increase the heartbeat interval and election timeouts to reduce internal cluster connection churn.
A reasonable place to start is these values: `--heartbeat-interval 500 --election-timeout 2500`.
This issue is resolved in the etcd v2.1.x series of releases, which uses fewer connections.
### I'm seeing an SSLv3 alert handshake failure when using SSL client authentication?
The `crypto/tls` package of `golang` checks the key usage of the certificate public key before using it.

View File

@@ -10,7 +10,7 @@ The network isn't the only source of latency. Each request and response may be i
The underlying distributed consensus protocol relies on two separate time parameters to ensure that nodes can hand off leadership if one stalls or goes offline.
The first parameter is called the *Heartbeat Interval*.
This is the frequency with which the leader will notify followers that it is still the leader.
As a best practice, the parameter should be set around the round-trip time between members.
etcd batches commands together for higher throughput so this heartbeat interval is also a delay for how long it takes for commands to be committed.
By default, etcd uses a `100ms` heartbeat interval.
The second parameter is the *Election Timeout*.
@@ -18,21 +18,15 @@ This timeout is how long a follower node will go without hearing a heartbeat bef
By default, etcd uses a `1000ms` election timeout.
Adjusting these values is a trade off.
The recommended value for the heartbeat interval is around the maximum average round-trip time (RTT) between members, normally around 0.5-1.5x the RTT.
If the heartbeat interval is too low, etcd will send unnecessary messages that increase the usage of CPU and network resources.
On the other hand, a too-high heartbeat interval leads to a high election timeout, and a higher election timeout takes longer to detect a leader failure.
The easiest way to measure round-trip time (RTT) is to use the [PING utility](https://en.wikipedia.org/wiki/Ping_(networking_utility)).
Lowering the heartbeat interval will cause individual commands to be committed faster but it will lower the overall throughput of etcd.
If your etcd instances have low utilization then lowering the heartbeat interval can improve your command response time.
The election timeout should be set based on the heartbeat interval and average round-trip time between members.
Election timeouts must be at least 10 times the round-trip time so it can account for variance in your network.
For example, if the round-trip time between your members is 10ms then you should have at least a 100ms election timeout.
The election timeout should be set based on the heartbeat interval and your network ping time between nodes.
Election timeouts should be at least 10 times your ping time so it can account for variance in your network.
For example, if the ping time between your nodes is 10ms then you should have at least a 100ms election timeout.
The upper limit of election timeout is 50000ms, which should only be used when deploying a globally-distributed etcd cluster. First, 5s is the upper limit of the average global round-trip time: a reasonable round-trip time within the continental United States is 130ms, and the time between the US and Japan is around 350-400ms. Since packets can be delayed and network conditions may be poor, 5s is a safe worst-case value. Then, because the election timeout should be an order of magnitude bigger than the broadcast time, 50s becomes its maximum.
You should also set your election timeout to at least 5 to 10 times your heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms you should set your election timeout to at least 250ms - 500ms.
The heartbeat interval and election timeout value should be the same for all members in one cluster. Setting different values for etcd members may disrupt cluster stability.
You should also set your election timeout to at least 4 to 5 times your heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms you should set your election timeout to at least 200ms - 250ms.
You can override the default values on the command line:
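For example, a sketch using the flags shown earlier in this guide (values illustrative, in milliseconds; the `ETCD_*` variable names follow the same pattern as the snapshot settings below):
```sh
# Command line arguments:
$ etcd -heartbeat-interval=100 -election-timeout=500

# Environment variables:
$ ETCD_HEARTBEAT_INTERVAL=100 ETCD_ELECTION_TIMEOUT=500 etcd
```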
@@ -68,3 +62,13 @@ $ etcd -snapshot-count=5000
# Environment variables:
$ ETCD_SNAPSHOT_COUNT=5000 etcd
```
You can also disable snapshotting by adding the following to your command line:
```sh
# Command line arguments:
$ etcd -snapshot false
# Environment variables:
$ ETCD_SNAPSHOT=false etcd
```

View File

@@ -1,112 +0,0 @@
## Upgrade etcd to 2.1
In the general case, upgrading from etcd 2.0 to 2.1 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.0 processes and replace them with etcd v2.1 processes
- after you are running all v2.1 processes, new features in v2.1 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade Checklists
#### Upgrade Requirement
To upgrade an existing etcd deployment to 2.1, you must be running 2.0. If you're running a version of etcd before 2.0, you must upgrade to [2.0](https://github.com/coreos/etcd/releases/tag/v2.0.13) before upgrading to 2.1.
Also, to ensure a smooth rolling upgrade, your running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
#### Preparedness
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
You might also want to [backup your data directory](admin_guide.md#backing-up-the-datastore) for a potential [downgrade](#downgrade).
etcd 2.1 introduces a new [authentication](auth_api.md) feature, which is disabled by default. If your deployment will depend on it, you may want to test the auth features before enabling them in production.
#### Mixed Versions
While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all its members are upgraded to 2.1.
Internally, etcd members negotiate with each other to determine the overall etcd cluster version, which controls the reported cluster version and the supported features. For example, if you are mid-upgrade, any 2.1 features (such as the authentication feature mentioned above) won't be available.
#### Limitations
If you encounter any issues during the upgrade, you can attempt to restart the troubled etcd process using the newer v2.1 binary to solve the problem. One known issue is that etcd v2.0.0 and v2.0.2 may panic during rolling upgrades due to an existing bug, which has been fixed since etcd v2.0.3.
It might take up to 2 minutes for the newly upgraded member to catch up with the existing cluster when the total data size is larger than 50MB (you can check the size of the existing snapshot to estimate the rough data size; see the sketch below). In other words, it is safest to wait 2 minutes before upgrading the next member.
If you have even more data, this might take more time. If you have a data size larger than 100MB you should contact us before upgrading, so we can make sure the upgrades work smoothly.
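A rough way to estimate the data size is to check the on-disk size of the data directory, which contains the snapshots (path illustrative):
```
$ du -sh /var/lib/etcd
```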
#### Downgrade
If all members have been upgraded to v2.1, the cluster will be upgraded to v2.1, and downgrade is **not possible**. If any member is still v2.0, the cluster will remain in v2.0, and you can go back to using the v2.0 binary.
Please [backup the data directory](admin_guide.md#backing-up-the-datastore) of all etcd members if you may want to downgrade the cluster, even after it has been upgraded.
### Upgrade Procedure
#### 1. Check upgrade requirements.
```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy
$ curl http://127.0.0.1:4001/version
etcd 2.0.x
```
#### 2. Stop the existing etcd process
You will see similar error logging from other etcd processes in your cluster. This is normal, since you just shut down a member.
```
2015/06/23 15:45:09 sender: error posting to 6e3bd23ae5f1eae0: dial tcp 127.0.0.1:7002: connection refused
2015/06/23 15:45:09 sender: the connection with 6e3bd23ae5f1eae0 became inactive
2015/06/23 15:45:11 rafthttp: encountered error writing to server log stream: write tcp 127.0.0.1:53783: broken pipe
2015/06/23 15:45:11 rafthttp: server streaming to 6e3bd23ae5f1eae0 at term 2 has been stopped
2015/06/23 15:45:11 stream: error sending message: stopped
2015/06/23 15:45:11 stream: stopping the stream server...
```
You can [back up your data directory](https://github.com/coreos/etcd/blob/7f7e2cc79d9c5c342a6eb1e48c386b0223cf934e/Documentation/admin_guide.md#backing-up-the-datastore) for data safety.
```
$ etcdctl backup \
--data-dir /var/lib/etcd \
--backup-dir /tmp/etcd_backup
```
#### 3. Drop-in etcd v2.1 binary and start the new etcd process
You will see etcd publish its information to the cluster.
```
2015/06/23 15:45:39 etcdserver: published {Name:infra2 ClientURLs:[http://localhost:4002]} to cluster e9c7614f68f35fb2
```
You can verify that the cluster becomes healthy.
```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy
```
#### 4. Repeat step 2 to step 3 for all other members
#### 5. Finish
When all members are upgraded, you will see that the cluster was upgraded to 2.1 successfully:
```
2015/06/23 15:46:35 etcdserver: updated the cluster version from 2.0.0 to 2.1.0
```
```
$ curl http://127.0.0.1:4001/version
{"etcdserver":"2.1.x","etcdcluster":"2.1.0"}
```

View File

@@ -1,128 +0,0 @@
## Upgrade etcd from 2.1 to 2.2
In the general case, upgrading from etcd 2.1 to 2.2 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.1 processes and replace them with etcd v2.2 processes
- after you are running all v2.2 processes, new features in v2.2 are available to the cluster
Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
### Upgrade Checklists
#### Upgrade Requirement
To upgrade an existing etcd deployment to 2.2, you must be running 2.1. If you're running a version of etcd before 2.1, you must upgrade to [2.1](https://github.com/coreos/etcd/releases/tag/v2.1.2) before upgrading to 2.2.
Also, to ensure a smooth rolling upgrade, your running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
#### Preparedness
Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
You might also want to [backup your data directory](admin_guide.md#backing-up-the-datastore) for a potential [downgrade](#downgrade).
#### Mixed Versions
While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all its members are upgraded to 2.2.
Internally, etcd members negotiate with each other to determine the overall etcd cluster version, which controls the reported cluster version and the supported features.
#### Limitations
If you have a data size larger than 100MB you should contact us before upgrading, so we can make sure the upgrades work smoothly.
Every etcd 2.2 member will do health checking across the cluster periodically. etcd 2.1 members do not support health checking. During the upgrade, etcd 2.2 members will log warnings about the unhealthy state of etcd 2.1 members. You can ignore these warnings.
#### Downgrade
If all members have been upgraded to v2.2, the cluster will be upgraded to v2.2, and downgrade is **not possible**. If any member is still v2.1, the cluster will remain in v2.1, and you can go back to using the v2.1 binary.
Please [backup the data directory](admin_guide.md#backing-up-the-datastore) of all etcd members if you may want to downgrade the cluster, even after it has been upgraded.
### Upgrade Procedure
In this example, we upgrade a three-member v2.1 cluster running on a local machine.
#### 1. Check upgrade requirements.
```
$ etcdctl cluster-health
member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
member a8266ecf031671f3 is healthy: got healthy result from http://localhost:12379
cluster is healthy
$ curl http://localhost:4001/version
{"etcdserver":"2.1.x","etcdcluster":"2.1.0"}
```
#### 2. Stop the existing etcd process
You will see similar error logging from other etcd processes in your cluster. This is normal, since you just shut down a member and the connection is broken.
```
2015/09/2 09:48:35 etcdserver: failed to reach the peerURL(http://localhost:12380) of member a8266ecf031671f3 (Get http://localhost:12380/version: dial tcp [::1]:12380: getsockopt: connection refused)
2015/09/2 09:48:35 etcdserver: cannot get the version of member a8266ecf031671f3 (Get http://localhost:12380/version: dial tcp [::1]:12380: getsockopt: connection refused)
2015/09/2 09:48:35 rafthttp: failed to write a8266ecf031671f3 on stream Message (write tcp 127.0.0.1:32380->127.0.0.1:64394: write: broken pipe)
2015/09/2 09:48:35 rafthttp: failed to write a8266ecf031671f3 on pipeline (dial tcp [::1]:12380: getsockopt: connection refused)
2015/09/2 09:48:40 etcdserver: failed to reach the peerURL(http://localhost:7001) of member a8266ecf031671f3 (Get http://localhost:7001/version: dial tcp [::1]:12380: getsockopt: connection refused)
2015/09/2 09:48:40 etcdserver: cannot get the version of member a8266ecf031671f3 (Get http://localhost:12380/version: dial tcp [::1]:12380: getsockopt: connection refused)
2015/09/2 09:48:40 rafthttp: failed to heartbeat a8266ecf031671f3 on stream MsgApp v2 (write tcp 127.0.0.1:32380->127.0.0.1:64393: write: broken pipe)
```
You will see logging output like this from the not-yet-upgraded members, due to the mixed-version cluster. You can ignore this while upgrading.
```
2015/09/2 09:48:45 etcdserver: the etcd version 2.1.2+git is not up-to-date
2015/09/2 09:48:45 etcdserver: member a8266ecf031671f3 has a higher version &{2.2.0-rc.0+git 2.1.0}
```
You will also see logging output like this from the newly upgraded member, since etcd 2.1 members do not support health checking. You can ignore this while upgrading.
```
2015-09-02 09:55:42.691384 W | rafthttp: the connection to peer 6e3bd23ae5f1eae0 is unhealthy
2015-09-02 09:55:42.705626 W | rafthttp: the connection to peer 924e2e83e93f2560 is unhealthy
```
You can [back up your data directory](https://github.com/coreos/etcd/blob/7f7e2cc79d9c5c342a6eb1e48c386b0223cf934e/Documentation/admin_guide.md#backing-up-the-datastore) for data safety.
```
$ etcdctl backup \
--data-dir /var/lib/etcd \
--backup-dir /tmp/etcd_backup
```
#### 3. Drop-in etcd v2.2 binary and start the new etcd process
Now, you can start the etcd v2.2 binary with the previous configuration.
You will see etcd start and publish its information to the cluster.
```
2015-09-02 09:56:46.117609 I | etcdserver: published {Name:infra2 ClientURLs:[http://localhost:22380]} to cluster e9c7614f68f35fb2
```
You can verify that the cluster becomes healthy.
```
$ etcdctl cluster-health
member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
member a8266ecf031671f3 is healthy: got healthy result from http://localhost:12379
cluster is healthy
```
#### 4. Repeat steps 2 and 3 for all other members
#### 5. Finish
When all members have been upgraded, you will see that the cluster is successfully upgraded to 2.2:
```
2015-09-02 09:56:54.896848 N | etcdserver: updated the cluster version from 2.1 to 2.2
```
```
$ curl http://127.0.0.1:4001/version
{"etcdserver":"2.2.x","etcdcluster":"2.2.0"}
```

Godeps/Godeps.json (generated)

@@ -1,201 +1,36 @@
{
"ImportPath": "github.com/coreos/etcd",
"GoVersion": "go1.5.1",
"GoVersion": "go1.4.1",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "bitbucket.org/ww/goautoneg",
"Comment": "null-5",
"Rev": "75cd24fc2f2c2a2088577d12123ddee5f54e0675"
},
{
"ImportPath": "github.com/akrennmair/gopcap",
"Rev": "00e11033259acb75598ba416495bb708d864a010"
},
{
"ImportPath": "github.com/beorn7/perks/quantile",
"Rev": "b965b613227fddccbfffe13eae360ed3fa822f8d"
},
{
"ImportPath": "github.com/bgentry/speakeasy",
"Rev": "36e9cfdd690967f4f690c6edcc9ffacd006014a0"
},
{
"ImportPath": "github.com/boltdb/bolt",
"Comment": "v1.1.0-19-g0b00eff",
"Rev": "0b00effdd7a8270ebd91c24297e51643e370dd52"
},
{
"ImportPath": "github.com/cheggaaa/pb",
"Rev": "da1f27ad1d9509b16f65f52fd9d8138b0f2dc7b2"
"ImportPath": "code.google.com/p/gogoprotobuf/proto",
"Rev": "7fd1620f09261338b6b1ca1289ace83aee0ec946"
},
{
"ImportPath": "github.com/codegangsta/cli",
"Comment": "1.2.0-183-gb5232bb",
"Rev": "b5232bb2934f606f9f27a1305f1eea224e8e8b88"
"Comment": "1.2.0-26-gf7ebb76",
"Rev": "f7ebb761e83e21225d1d8954fde853bf8edd46c4"
},
{
"ImportPath": "github.com/coreos/gexpect",
"Rev": "5173270e159f5aa8fbc999dc7e3dcb50f4098a69"
},
{
"ImportPath": "github.com/coreos/go-semver/semver",
"Rev": "568e959cd89871e61434c1143528d9162da89ef2"
},
{
"ImportPath": "github.com/coreos/go-systemd/daemon",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/go-systemd/journal",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/go-systemd/util",
"Comment": "v3-6-gcea488b",
"Rev": "cea488b4e6855fee89b6c22a811e3c5baca861b6"
},
{
"ImportPath": "github.com/coreos/pkg/capnslog",
"Rev": "2c77715c4df99b5420ffcae14ead08f52104065d"
},
{
"ImportPath": "github.com/cpuguy83/go-md2man/md2man",
"Comment": "v1.0.4",
"Rev": "71acacd42f85e5e82f70a55327789582a5200a90"
},
{
"ImportPath": "github.com/gogo/protobuf/proto",
"Comment": "v0.1-118-ge8904f5",
"Rev": "e8904f58e872a473a5b91bc9bf3377d223555263"
},
{
"ImportPath": "github.com/golang/glog",
"Rev": "44145f04b68cf362d9c4df2182967c2275eaefed"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3"
},
{
"ImportPath": "github.com/google/btree",
"Rev": "cc6329d4279e3f025a53a83c397d2339b5705c45"
},
{
"ImportPath": "github.com/inconshreveable/mousetrap",
"Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
"ImportPath": "github.com/coreos/go-etcd/etcd",
"Comment": "v0.2.0-rc1-130-g6aa2da5",
"Rev": "6aa2da5a7a905609c93036b9307185a04a5a84a5"
},
{
"ImportPath": "github.com/jonboulle/clockwork",
"Rev": "72f9bd7c4e0c2a40055ab3d0f09654f730cce982"
},
{
"ImportPath": "github.com/kballard/go-shellquote",
"Rev": "d8ec1a69a250a17bb0e419c386eac1f3711dc142"
},
{
"ImportPath": "github.com/kr/pty",
"Comment": "release.r56-29-gf7ee69f",
"Rev": "f7ee69f31298ecbe5d2b349c711e2547a617d398"
},
{
"ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
"Rev": "fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a"
},
{
"ImportPath": "github.com/olekukonko/ts",
"Rev": "ecf753e7c962639ab5a1fb46f7da627d4c0a04b8"
},
{
"ImportPath": "github.com/prometheus/client_golang/prometheus",
"Comment": "0.7.0-52-ge51041b",
"Rev": "e51041b3fa41cece0dca035740ba6411905be473"
},
{
"ImportPath": "github.com/prometheus/client_model/go",
"Comment": "model-0.0.2-12-gfa8ad6f",
"Rev": "fa8ad6fec33561be4280a8f0514318c79d7f6cb6"
},
{
"ImportPath": "github.com/prometheus/common/expfmt",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/common/model",
"Rev": "ffe929a3f4c4faeaa10f2b9535c2b1be3ad15650"
},
{
"ImportPath": "github.com/prometheus/procfs",
"Rev": "454a56f35412459b5e684fd5ec0f9211b94f002a"
},
{
"ImportPath": "github.com/russross/blackfriday",
"Comment": "v1.4-2-g300106c",
"Rev": "300106c228d52c8941d4b3de6054a6062a86dda3"
},
{
"ImportPath": "github.com/shurcooL/sanitized_anchor_name",
"Rev": "10ef21a441db47d8b13ebcc5fd2310f636973c77"
},
{
"ImportPath": "github.com/spacejam/loghisto",
"Rev": "323309774dec8b7430187e46cd0793974ccca04a"
},
{
"ImportPath": "github.com/spf13/cobra",
"Rev": "1c44ec8d3f1552cac48999f9306da23c4d8a288b"
},
{
"ImportPath": "github.com/spf13/pflag",
"Rev": "08b1a584251b5b62f458943640fc8ebd4d50aaa5"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Rev": "9cc77fa25329013ce07362c7742952ff887361f2"
},
{
"ImportPath": "github.com/ugorji/go/codec",
"Rev": "f1f1a805ed361a0e078bb537e4ea78cd37dcf065"
},
{
"ImportPath": "github.com/xiang90/probing",
"Rev": "6a0cc1ae81b4cc11db5e491e030e4b98fba79c19"
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
},
{
"ImportPath": "golang.org/x/net/context",
"Rev": "04b9de9b512f58addf28c9853d50ebef61c3953e"
},
{
"ImportPath": "golang.org/x/net/http2",
"Rev": "04b9de9b512f58addf28c9853d50ebef61c3953e"
},
{
"ImportPath": "golang.org/x/net/internal/timeseries",
"Rev": "04b9de9b512f58addf28c9853d50ebef61c3953e"
},
{
"ImportPath": "golang.org/x/net/trace",
"Rev": "04b9de9b512f58addf28c9853d50ebef61c3953e"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "9c60d1c508f5134d1ca726b4641db998f2523357"
},
{
"ImportPath": "google.golang.org/grpc",
"Rev": "e29d659177655e589850ba7d3d83f7ce12ef23dd"
"Comment": "null-220",
"Rev": "c5a46024776ec35eb562fa9226968b9d543bb13a"
}
]
}


@@ -1,13 +0,0 @@
include $(GOROOT)/src/Make.inc
TARG=bitbucket.org/ww/goautoneg
GOFILES=autoneg.go
include $(GOROOT)/src/Make.pkg
format:
gofmt -w *.go
docs:
gomake clean
godoc ${TARG} > README.txt


@@ -1,67 +0,0 @@
PACKAGE
package goautoneg
import "bitbucket.org/ww/goautoneg"
HTTP Content-Type Autonegotiation.
The functions in this package implement the behaviour specified in
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Copyright (c) 2011, Open Knowledge Foundation Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
Neither the name of the Open Knowledge Foundation Ltd. nor the
names of its contributors may be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
FUNCTIONS
func Negotiate(header string, alternatives []string) (content_type string)
Negotiate the most appropriate content_type given the accept header
and a list of alternatives.
func ParseAccept(header string) (accept []Accept)
Parse an Accept Header string returning a sorted list
of clauses
TYPES
type Accept struct {
Type, SubType string
Q float32
Params map[string]string
}
Structure to represent a clause in an HTTP Accept Header
SUBDIRECTORIES
.hg


@@ -1,162 +0,0 @@
/*
HTTP Content-Type Autonegotiation.
The functions in this package implement the behaviour specified in
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Copyright (c) 2011, Open Knowledge Foundation Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
Neither the name of the Open Knowledge Foundation Ltd. nor the
names of its contributors may be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package goautoneg
import (
"sort"
"strconv"
"strings"
)
// Structure to represent a clause in an HTTP Accept Header
type Accept struct {
Type, SubType string
Q float64
Params map[string]string
}
// For internal use, so that we can use the sort interface
type accept_slice []Accept
func (accept accept_slice) Len() int {
slice := []Accept(accept)
return len(slice)
}
func (accept accept_slice) Less(i, j int) bool {
slice := []Accept(accept)
ai, aj := slice[i], slice[j]
if ai.Q > aj.Q {
return true
}
if ai.Type != "*" && aj.Type == "*" {
return true
}
if ai.SubType != "*" && aj.SubType == "*" {
return true
}
return false
}
func (accept accept_slice) Swap(i, j int) {
slice := []Accept(accept)
slice[i], slice[j] = slice[j], slice[i]
}
// Parse an Accept Header string returning a sorted list
// of clauses
func ParseAccept(header string) (accept []Accept) {
parts := strings.Split(header, ",")
accept = make([]Accept, 0, len(parts))
for _, part := range parts {
part := strings.Trim(part, " ")
a := Accept{}
a.Params = make(map[string]string)
a.Q = 1.0
mrp := strings.Split(part, ";")
media_range := mrp[0]
sp := strings.Split(media_range, "/")
a.Type = strings.Trim(sp[0], " ")
switch {
case len(sp) == 1 && a.Type == "*":
a.SubType = "*"
case len(sp) == 2:
a.SubType = strings.Trim(sp[1], " ")
default:
continue
}
if len(mrp) == 1 {
accept = append(accept, a)
continue
}
for _, param := range mrp[1:] {
sp := strings.SplitN(param, "=", 2)
if len(sp) != 2 {
continue
}
token := strings.Trim(sp[0], " ")
if token == "q" {
a.Q, _ = strconv.ParseFloat(sp[1], 32)
} else {
a.Params[token] = strings.Trim(sp[1], " ")
}
}
accept = append(accept, a)
}
slice := accept_slice(accept)
sort.Sort(slice)
return
}
// Negotiate the most appropriate content_type given the accept header
// and a list of alternatives.
func Negotiate(header string, alternatives []string) (content_type string) {
asp := make([][]string, 0, len(alternatives))
for _, ctype := range alternatives {
asp = append(asp, strings.SplitN(ctype, "/", 2))
}
for _, clause := range ParseAccept(header) {
for i, ctsp := range asp {
if clause.Type == ctsp[0] && clause.SubType == ctsp[1] {
content_type = alternatives[i]
return
}
if clause.Type == ctsp[0] && clause.SubType == "*" {
content_type = alternatives[i]
return
}
if clause.Type == "*" && clause.SubType == "*" {
content_type = alternatives[i]
return
}
}
}
return
}


@@ -1,33 +0,0 @@
package goautoneg
import (
"testing"
)
var chrome = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"
func TestParseAccept(t *testing.T) {
alternatives := []string{"text/html", "image/png"}
content_type := Negotiate(chrome, alternatives)
if content_type != "image/png" {
t.Errorf("got %s expected image/png", content_type)
}
alternatives = []string{"text/html", "text/plain", "text/n3"}
content_type = Negotiate(chrome, alternatives)
if content_type != "text/html" {
t.Errorf("got %s expected text/html", content_type)
}
alternatives = []string{"text/n3", "text/plain"}
content_type = Negotiate(chrome, alternatives)
if content_type != "text/plain" {
t.Errorf("got %s expected text/plain", content_type)
}
alternatives = []string{"text/n3", "application/rdf+xml"}
content_type = Negotiate(chrome, alternatives)
if content_type != "text/n3" {
t.Errorf("got %s expected text/n3", content_type)
}
}


@@ -1,7 +1,7 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
# http://code.google.com/p/goprotobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
@@ -37,7 +37,4 @@ test: install generate-test-pbs
generate-test-pbs:
make install
make -C testdata
protoc-min-version --version="3.0.0" --proto_path=.:../../../../ --gogo_out=. proto3_proto/proto3.proto
make
make install && cd testdata && make


@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -34,7 +34,6 @@ package proto_test
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"math"
"math/rand"
@@ -44,8 +43,8 @@ import (
"testing"
"time"
. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto/testdata"
. "./testdata"
. "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
)
var globalO *Buffer
@@ -395,84 +394,6 @@ func TestNumericPrimitives(t *testing.T) {
}
}
// fakeMarshaler is a simple struct implementing Marshaler and Message interfaces.
type fakeMarshaler struct {
b []byte
err error
}
func (f *fakeMarshaler) Marshal() ([]byte, error) { return f.b, f.err }
func (f *fakeMarshaler) String() string { return fmt.Sprintf("Bytes: %v Error: %v", f.b, f.err) }
func (f *fakeMarshaler) ProtoMessage() {}
func (f *fakeMarshaler) Reset() {}
type msgWithFakeMarshaler struct {
M *fakeMarshaler `protobuf:"bytes,1,opt,name=fake"`
}
func (m *msgWithFakeMarshaler) String() string { return CompactTextString(m) }
func (m *msgWithFakeMarshaler) ProtoMessage() {}
func (m *msgWithFakeMarshaler) Reset() {}
// Simple tests for proto messages that implement the Marshaler interface.
func TestMarshalerEncoding(t *testing.T) {
tests := []struct {
name string
m Message
want []byte
wantErr error
}{
{
name: "Marshaler that fails",
m: &fakeMarshaler{
err: errors.New("some marshal err"),
b: []byte{5, 6, 7},
},
// Since there's an error, nothing should be written to buffer.
want: nil,
wantErr: errors.New("some marshal err"),
},
{
name: "Marshaler that fails with RequiredNotSetError",
m: &msgWithFakeMarshaler{
M: &fakeMarshaler{
err: &RequiredNotSetError{},
b: []byte{5, 6, 7},
},
},
// Since there's an error that can be continued after,
// the buffer should be written.
want: []byte{
10, 3, // for &msgWithFakeMarshaler
5, 6, 7, // for &fakeMarshaler
},
wantErr: &RequiredNotSetError{},
},
{
name: "Marshaler that succeeds",
m: &fakeMarshaler{
b: []byte{0, 1, 2, 3, 4, 127, 255},
},
want: []byte{0, 1, 2, 3, 4, 127, 255},
wantErr: nil,
},
}
for _, test := range tests {
b := NewBuffer(nil)
err := b.Marshal(test.m)
if _, ok := err.(*RequiredNotSetError); ok {
// We're not in package proto, so we can only assert the type in this case.
err = &RequiredNotSetError{}
}
if !reflect.DeepEqual(test.wantErr, err) {
t.Errorf("%s: got err %v wanted %v", test.name, err, test.wantErr)
}
if !reflect.DeepEqual(test.want, b.Bytes()) {
t.Errorf("%s: got bytes %v wanted %v", test.name, b.Bytes(), test.want)
}
}
}
// Simple tests for bytes
func TestBytesPrimitives(t *testing.T) {
o := old()
@@ -1068,35 +989,6 @@ func TestSubmessageUnrecognizedFields(t *testing.T) {
}
}
// Check that an int32 field can be upgraded to an int64 field.
func TestNegativeInt32(t *testing.T) {
om := &OldMessage{
Num: Int32(-1),
}
b, err := Marshal(om)
if err != nil {
t.Fatalf("Marshal of OldMessage: %v", err)
}
// Check the size. It should be 11 bytes;
// 1 for the field/wire type, and 10 for the negative number.
if len(b) != 11 {
t.Errorf("%v marshaled as %q, wanted 11 bytes", om, b)
}
// Unmarshal into a NewMessage.
nm := new(NewMessage)
if err := Unmarshal(b, nm); err != nil {
t.Fatalf("Unmarshal to NewMessage: %v", err)
}
want := &NewMessage{
Num: Int64(-1),
}
if !Equal(nm, want) {
t.Errorf("nm = %v, want %v", nm, want)
}
}
// Check that we can grow an array (repeated field) to have many elements.
// This test doesn't depend only on our encoding; for variety, it makes sure
// we create, encode, and decode the correct contents explicitly. It's therefore
@@ -1224,10 +1116,13 @@ func TestTypeMismatch(t *testing.T) {
// Now Unmarshal it to the wrong type.
pb2 := initGoTestField()
err := o.Unmarshal(pb2)
if err == nil {
t.Error("expected error, got no error")
} else if !strings.Contains(err.Error(), "bad wiretype") {
t.Error("expected bad wiretype error, got", err)
switch err {
case ErrWrongType:
// fine
case nil:
t.Error("expected wrong type error, got no error")
default:
t.Error("expected wrong type error, got", err)
}
}
@@ -1273,8 +1168,7 @@ func TestProto1RepeatedGroup(t *testing.T) {
}
o := old()
err := o.Marshal(pb)
if err == nil || !strings.Contains(err.Error(), "repeated field Message has nil") {
if err := o.Marshal(pb); err != ErrRepeatedHasNil {
t.Fatalf("unexpected or no error when marshaling: %v", err)
}
}
@@ -1409,11 +1303,10 @@ func TestAllSetDefaults(t *testing.T) {
F_Pinf: Float32(float32(math.Inf(1))),
F_Ninf: Float32(float32(math.Inf(-1))),
F_Nan: Float32(1.7),
StrZero: String(""),
}
SetDefaults(m)
if !Equal(m, expected) {
t.Errorf("SetDefaults failed\n got %v\nwant %v", m, expected)
t.Errorf(" got %v\nwant %v", m, expected)
}
}
@@ -1463,17 +1356,6 @@ func TestSetDefaultsWithRepeatedSubMessage(t *testing.T) {
}
}
func TestSetDefaultWithRepeatedNonMessage(t *testing.T) {
m := &MyMessage{
Pet: []string{"turtle", "wombat"},
}
expected := Clone(m)
SetDefaults(m)
if !Equal(m, expected) {
t.Errorf("\n got %v\nwant %v", m, expected)
}
}
func TestMaximumTagNumber(t *testing.T) {
m := &MaxTag{
LastField: String("natural goat essence"),
@@ -1773,8 +1655,7 @@ func TestEncodingSizes(t *testing.T) {
n int
}{
{&Defaults{F_Int32: Int32(math.MaxInt32)}, 6},
{&Defaults{F_Int32: Int32(math.MinInt32)}, 11},
{&Defaults{F_Uint32: Uint32(uint32(math.MaxInt32) + 1)}, 6},
{&Defaults{F_Int32: Int32(math.MinInt32)}, 6},
{&Defaults{F_Uint32: Uint32(math.MaxUint32)}, 6},
}
for _, test := range tests {
@@ -1866,163 +1747,6 @@ func fuzzUnmarshal(t *testing.T, data []byte) {
Unmarshal(data, pb)
}
func TestMapFieldMarshal(t *testing.T) {
m := &MessageWithMap{
NameMapping: map[int32]string{
1: "Rob",
4: "Ian",
8: "Dave",
},
}
b, err := Marshal(m)
if err != nil {
t.Fatalf("Marshal: %v", err)
}
// b should be the concatenation of these three byte sequences in some order.
parts := []string{
"\n\a\b\x01\x12\x03Rob",
"\n\a\b\x04\x12\x03Ian",
"\n\b\b\x08\x12\x04Dave",
}
ok := false
for i := range parts {
for j := range parts {
if j == i {
continue
}
for k := range parts {
if k == i || k == j {
continue
}
try := parts[i] + parts[j] + parts[k]
if bytes.Equal(b, []byte(try)) {
ok = true
break
}
}
}
}
if !ok {
t.Fatalf("Incorrect Marshal output.\n got %q\nwant %q (or a permutation of that)", b, parts[0]+parts[1]+parts[2])
}
t.Logf("FYI b: %q", b)
(new(Buffer)).DebugPrint("Dump of b", b)
}
func TestMapFieldRoundTrips(t *testing.T) {
m := &MessageWithMap{
NameMapping: map[int32]string{
1: "Rob",
4: "Ian",
8: "Dave",
},
MsgMapping: map[int64]*FloatingPoint{
0x7001: &FloatingPoint{F: Float64(2.0)},
},
ByteMapping: map[bool][]byte{
false: []byte("that's not right!"),
true: []byte("aye, 'tis true!"),
},
}
b, err := Marshal(m)
if err != nil {
t.Fatalf("Marshal: %v", err)
}
t.Logf("FYI b: %q", b)
m2 := new(MessageWithMap)
if err := Unmarshal(b, m2); err != nil {
t.Fatalf("Unmarshal: %v", err)
}
for _, pair := range [][2]interface{}{
{m.NameMapping, m2.NameMapping},
{m.MsgMapping, m2.MsgMapping},
{m.ByteMapping, m2.ByteMapping},
} {
if !reflect.DeepEqual(pair[0], pair[1]) {
t.Errorf("Map did not survive a round trip.\ninitial: %v\n final: %v", pair[0], pair[1])
}
}
}
func TestMapFieldWithNil(t *testing.T) {
m := &MessageWithMap{
MsgMapping: map[int64]*FloatingPoint{
1: nil,
},
}
b, err := Marshal(m)
if err == nil {
t.Fatalf("Marshal of bad map should have failed, got these bytes: %v", b)
}
}
func TestOneof(t *testing.T) {
m := &Communique{}
b, err := Marshal(m)
if err != nil {
t.Fatalf("Marshal of empty message with oneof: %v", err)
}
if len(b) != 0 {
t.Errorf("Marshal of empty message yielded too many bytes: %v", b)
}
m = &Communique{
Union: &Communique_Name{"Barry"},
}
// Round-trip.
b, err = Marshal(m)
if err != nil {
t.Fatalf("Marshal of message with oneof: %v", err)
}
if len(b) != 7 { // name tag/wire (1) + name len (1) + name (5)
t.Errorf("Incorrect marshal of message with oneof: %v", b)
}
m.Reset()
if err := Unmarshal(b, m); err != nil {
t.Fatalf("Unmarshal of message with oneof: %v", err)
}
if x, ok := m.Union.(*Communique_Name); !ok || x.Name != "Barry" {
t.Errorf("After round trip, Union = %+v", m.Union)
}
if name := m.GetName(); name != "Barry" {
t.Errorf("After round trip, GetName = %q, want %q", name, "Barry")
}
// Let's try with a message in the oneof.
m.Union = &Communique_Msg{&Strings{StringField: String("deep deep string")}}
b, err = Marshal(m)
if err != nil {
t.Fatalf("Marshal of message with oneof set to message: %v", err)
}
if len(b) != 20 { // msg tag/wire (1) + msg len (1) + msg (1 + 1 + 16)
t.Errorf("Incorrect marshal of message with oneof set to message: %v", b)
}
m.Reset()
if err := Unmarshal(b, m); err != nil {
t.Fatalf("Unmarshal of message with oneof set to message: %v", err)
}
ss, ok := m.Union.(*Communique_Msg)
if !ok || ss.Msg.GetStringField() != "deep deep string" {
t.Errorf("After round trip with oneof set to message, Union = %+v", m.Union)
}
}
func TestInefficientPackedBool(t *testing.T) {
// https://github.com/golang/protobuf/issues/76
inp := []byte{
0x12, 0x02, // 0x12 = 2<<3|2; 2 bytes
// Usually a bool should take a single byte,
// but it is permitted to be any varint.
0xb9, 0x30,
}
if err := Unmarshal(inp, new(MoreRepeated)); err != nil {
t.Error(err)
}
}
// Benchmarks
func testMsg() *GoTest {


@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -29,8 +29,8 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Protocol buffer deep copy and merge.
// TODO: RawMessage.
// Protocol buffer deep copy.
// TODO: MessageSet and RawMessage.
package proto
@@ -75,13 +75,12 @@ func Merge(dst, src Message) {
}
func mergeStruct(out, in reflect.Value) {
sprop := GetProperties(in.Type())
for i := 0; i < in.NumField(); i++ {
f := in.Type().Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
mergeAny(out.Field(i), in.Field(i), false, sprop.Prop[i])
mergeAny(out.Field(i), in.Field(i))
}
if emIn, ok := in.Addr().Interface().(extensionsMap); ok {
@@ -104,10 +103,7 @@ func mergeStruct(out, in reflect.Value) {
}
}
// mergeAny performs a merge between two values of the same type.
// viaPtr indicates whether the values were indirected through a pointer (implying proto2).
// prop is set if this is a struct field (it may be nil).
func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) {
func mergeAny(out, in reflect.Value) {
if in.Type() == protoMessageType {
if !in.IsNil() {
if out.IsNil() {
@@ -121,44 +117,7 @@ func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) {
switch in.Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
if !viaPtr && isProto3Zero(in) {
return
}
out.Set(in)
case reflect.Interface:
// Probably a oneof field; copy non-nil values.
if in.IsNil() {
return
}
// Allocate destination if it is not set, or set to a different type.
// Otherwise we will merge as normal.
if out.IsNil() || out.Elem().Type() != in.Elem().Type() {
out.Set(reflect.New(in.Elem().Elem().Type())) // interface -> *T -> T -> new(T)
}
mergeAny(out.Elem(), in.Elem(), false, nil)
case reflect.Map:
if in.Len() == 0 {
return
}
if out.IsNil() {
out.Set(reflect.MakeMap(in.Type()))
}
// For maps with value types of *T or []byte we need to deep copy each value.
elemKind := in.Type().Elem().Kind()
for _, key := range in.MapKeys() {
var val reflect.Value
switch elemKind {
case reflect.Ptr:
val = reflect.New(in.Type().Elem().Elem())
mergeAny(val, in.MapIndex(key), false, nil)
case reflect.Slice:
val = in.MapIndex(key)
val = reflect.ValueOf(append([]byte{}, val.Bytes()...))
default:
val = in.MapIndex(key)
}
out.SetMapIndex(key, val)
}
case reflect.Ptr:
if in.IsNil() {
return
@@ -166,39 +125,23 @@ func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) {
if out.IsNil() {
out.Set(reflect.New(in.Elem().Type()))
}
mergeAny(out.Elem(), in.Elem(), true, nil)
mergeAny(out.Elem(), in.Elem())
case reflect.Slice:
if in.IsNil() {
return
}
if in.Type().Elem().Kind() == reflect.Uint8 {
// []byte is a scalar bytes field, not a repeated field.
// Edge case: if this is in a proto3 message, a zero length
// bytes field is considered the zero value, and should not
// be merged.
if prop != nil && prop.proto3 && in.Len() == 0 {
return
}
// Make a deep copy.
// Append to []byte{} instead of []byte(nil) so that we never end up
// with a nil result.
out.SetBytes(append([]byte{}, in.Bytes()...))
return
}
n := in.Len()
if out.IsNil() {
out.Set(reflect.MakeSlice(in.Type(), 0, n))
}
switch in.Type().Elem().Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
reflect.String, reflect.Uint32, reflect.Uint64, reflect.Uint8:
out.Set(reflect.AppendSlice(out, in))
default:
for i := 0; i < n; i++ {
x := reflect.Indirect(reflect.New(in.Type().Elem()))
mergeAny(x, in.Index(i), false, nil)
mergeAny(x, in.Index(i))
out.Set(reflect.Append(out, x))
}
}
@@ -215,7 +158,7 @@ func mergeExtension(out, in map[int32]Extension) {
eOut := Extension{desc: eIn.desc}
if eIn.value != nil {
v := reflect.New(reflect.TypeOf(eIn.value)).Elem()
mergeAny(v, reflect.ValueOf(eIn.value), false, nil)
mergeAny(v, reflect.ValueOf(eIn.value))
eOut.value = v.Interface()
}
if eIn.enc != nil {


@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -34,10 +34,9 @@ package proto_test
import (
"testing"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
"github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
proto3pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/proto3_proto"
pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
pb "./testdata"
)
var cloneTestMessage = &pb.MyMessage{
@@ -80,22 +79,6 @@ func TestClone(t *testing.T) {
if proto.Equal(m, cloneTestMessage) {
t.Error("Mutating clone changed the original")
}
// Byte fields and repeated fields should be copied.
if &m.Pet[0] == &cloneTestMessage.Pet[0] {
t.Error("Pet: repeated field not copied")
}
if &m.Others[0] == &cloneTestMessage.Others[0] {
t.Error("Others: repeated field not copied")
}
if &m.Others[0].Value[0] == &cloneTestMessage.Others[0].Value[0] {
t.Error("Others[0].Value: bytes field not copied")
}
if &m.RepBytes[0] == &cloneTestMessage.RepBytes[0] {
t.Error("RepBytes: repeated field not copied")
}
if &m.RepBytes[0][0] == &cloneTestMessage.RepBytes[0][0] {
t.Error("RepBytes[0]: bytes field not copied")
}
}
func TestCloneNil(t *testing.T) {
@@ -184,76 +167,6 @@ var mergeTests = []struct {
RepBytes: [][]byte{[]byte("sham"), []byte("wow")},
},
},
// Check that a scalar bytes field replaces rather than appends.
{
src: &pb.OtherMessage{Value: []byte("foo")},
dst: &pb.OtherMessage{Value: []byte("bar")},
want: &pb.OtherMessage{Value: []byte("foo")},
},
{
src: &pb.MessageWithMap{
NameMapping: map[int32]string{6: "Nigel"},
MsgMapping: map[int64]*pb.FloatingPoint{
0x4001: {F: proto.Float64(2.0)},
},
ByteMapping: map[bool][]byte{true: []byte("wowsa")},
},
dst: &pb.MessageWithMap{
NameMapping: map[int32]string{
6: "Bruce", // should be overwritten
7: "Andrew",
},
},
want: &pb.MessageWithMap{
NameMapping: map[int32]string{
6: "Nigel",
7: "Andrew",
},
MsgMapping: map[int64]*pb.FloatingPoint{
0x4001: {F: proto.Float64(2.0)},
},
ByteMapping: map[bool][]byte{true: []byte("wowsa")},
},
},
// proto3 shouldn't merge zero values,
// in the same way that proto2 shouldn't merge nils.
{
src: &proto3pb.Message{
Name: "Aaron",
Data: []byte(""), // zero value, but not nil
},
dst: &proto3pb.Message{
HeightInCm: 176,
Data: []byte("texas!"),
},
want: &proto3pb.Message{
Name: "Aaron",
HeightInCm: 176,
Data: []byte("texas!"),
},
},
// Oneof fields should merge by assignment.
{
src: &pb.Communique{
Union: &pb.Communique_Number{Number: 41},
},
dst: &pb.Communique{
Union: &pb.Communique_Name{Name: "Bobby Tables"},
},
want: &pb.Communique{
Union: &pb.Communique_Number{Number: 41},
},
},
// Oneof nil is the same as not set.
{
src: &pb.Communique{},
dst: &pb.Communique{
Union: &pb.Communique_Name{Name: "Bobby Tables"},
},
want: &pb.Communique{
Union: &pb.Communique_Name{Name: "Bobby Tables"},
},
},
}
func TestMerge(t *testing.T) {


@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -43,13 +43,14 @@ import (
"reflect"
)
// ErrWrongType occurs when the wire encoding for the field disagrees with
// that specified in the type being decoded. This is usually caused by attempting
// to convert an encoded protocol buffer into a struct of the wrong type.
var ErrWrongType = errors.New("proto: field/encoding mismatch: wrong type for field")
// errOverflow is returned when an integer is too large to be represented.
var errOverflow = errors.New("proto: integer overflow")
// ErrInternalBadWireType is returned by generated code when an incorrect
// wire type is encountered. It does not get returned to user code.
var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")
// The fundamental decoders that interpret bytes on the wire.
// Those that take integer types all return uint64 and are
// therefore of type valueDecoder.
@@ -182,7 +183,7 @@ func (p *Buffer) DecodeZigzag32() (x uint64, err error) {
func (p *Buffer) DecodeRawBytes(alloc bool) (buf []byte, err error) {
n, err := p.DecodeVarint()
if err != nil {
return nil, err
return
}
nb := int(n)
@@ -318,24 +319,6 @@ func UnmarshalMerge(buf []byte, pb Message) error {
return NewBuffer(buf).Unmarshal(pb)
}
// DecodeMessage reads a count-delimited message from the Buffer.
func (p *Buffer) DecodeMessage(pb Message) error {
enc, err := p.DecodeRawBytes(false)
if err != nil {
return err
}
return NewBuffer(enc).Unmarshal(pb)
}
// DecodeGroup reads a tag-delimited group from the Buffer.
func (p *Buffer) DecodeGroup(pb Message) error {
typ, base, err := getbase(pb)
if err != nil {
return err
}
return p.unmarshalType(typ.Elem(), GetProperties(typ.Elem()), true, base)
}
// Unmarshal parses the protocol buffer representation in the
// Buffer and places the decoded result in pb. If the struct
// underlying pb does not match the data in the buffer, the results can be
@@ -380,11 +363,11 @@ func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group
if is_group {
return nil // input is satisfied
}
return fmt.Errorf("proto: %s: wiretype end group for non-group", st)
return ErrWrongType
}
tag := int(u >> 3)
if tag <= 0 {
return fmt.Errorf("proto: %s: illegal tag %d (wire type %d)", st, tag, wire)
return fmt.Errorf("proto: illegal tag %d", tag)
}
fieldnum, ok := prop.decoderTags.get(tag)
if !ok {
@@ -392,11 +375,11 @@ func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group
if prop.extendable {
if e := structPointer_Interface(base, st).(extendableProto); isExtensionField(e, int32(tag)) {
if err = o.skip(st, tag, wire); err == nil {
if ee, eok := e.(extensionsMap); eok {
if ee, ok := e.(extensionsMap); ok {
ext := ee.ExtensionMap()[int32(tag)] // may be missing
ext.enc = append(ext.enc, o.buf[oi:o.index]...)
ee.ExtensionMap()[int32(tag)] = ext
} else if ee, eok := e.(extensionsBytes); eok {
} else if ee, ok := e.(extensionsBytes); ok {
ext := ee.GetExtensions()
*ext = append(*ext, o.buf[oi:o.index]...)
}
@@ -404,20 +387,6 @@ func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group
continue
}
}
// Maybe it's a oneof?
if prop.oneofUnmarshaler != nil {
m := structPointer_Interface(base, st).(Message)
// First return value indicates whether tag is a oneof field.
ok, err = prop.oneofUnmarshaler(m, tag, wire, o)
if err == ErrInternalBadWireType {
// Map the error to something more descriptive.
// Do the formatting here to save generated code space.
err = fmt.Errorf("bad wiretype for oneof field in %T", m)
}
if ok {
continue
}
}
err = o.skipAndSave(st, tag, wire, base, prop.unrecField)
continue
}
@@ -433,7 +402,7 @@ func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group
// a packable field
dec = p.packedDec
} else {
err = fmt.Errorf("proto: bad wiretype for field %s.%s: got wiretype %d, want %d", st, st.Field(fieldnum).Name, wire, p.WireType)
err = ErrWrongType
continue
}
}
@@ -506,15 +475,6 @@ func (o *Buffer) dec_bool(p *Properties, base structPointer) error {
return nil
}
func (o *Buffer) dec_proto3_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
*structPointer_BoolVal(base, p.field) = u != 0
return nil
}
// Decode an int32.
func (o *Buffer) dec_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
@@ -525,15 +485,6 @@ func (o *Buffer) dec_int32(p *Properties, base structPointer) error {
return nil
}
func (o *Buffer) dec_proto3_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word32Val_Set(structPointer_Word32Val(base, p.field), uint32(u))
return nil
}
// Decode an int64.
func (o *Buffer) dec_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
@@ -544,31 +495,15 @@ func (o *Buffer) dec_int64(p *Properties, base structPointer) error {
return nil
}
func (o *Buffer) dec_proto3_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word64Val_Set(structPointer_Word64Val(base, p.field), o, u)
return nil
}
// Decode a string.
func (o *Buffer) dec_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_String(base, p.field) = &s
return nil
}
func (o *Buffer) dec_proto3_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_StringVal(base, p.field) = s
sp := new(string)
*sp = s
*structPointer_String(base, p.field) = sp
return nil
}
@@ -602,13 +537,9 @@ func (o *Buffer) dec_slice_packed_bool(p *Properties, base structPointer) error
return err
}
nb := int(nn) // number of bytes of encoded bools
fin := o.index + nb
if fin < o.index {
return errOverflow
}
y := *v
for o.index < fin {
for i := 0; i < nb; i++ {
u, err := p.valDec(o)
if err != nil {
return err
@@ -711,78 +642,6 @@ func (o *Buffer) dec_slice_slice_byte(p *Properties, base structPointer) error {
return nil
}
// Decode a map field.
func (o *Buffer) dec_new_map(p *Properties, base structPointer) error {
raw, err := o.DecodeRawBytes(false)
if err != nil {
return err
}
oi := o.index // index at the end of this map entry
o.index -= len(raw) // move buffer back to start of map entry
mptr := structPointer_NewAt(base, p.field, p.mtype) // *map[K]V
if mptr.Elem().IsNil() {
mptr.Elem().Set(reflect.MakeMap(mptr.Type().Elem()))
}
v := mptr.Elem() // map[K]V
// Prepare addressable doubly-indirect placeholders for the key and value types.
// See enc_new_map for why.
keyptr := reflect.New(reflect.PtrTo(p.mtype.Key())).Elem() // addressable *K
keybase := toStructPointer(keyptr.Addr()) // **K
var valbase structPointer
var valptr reflect.Value
switch p.mtype.Elem().Kind() {
case reflect.Slice:
// []byte
var dummy []byte
valptr = reflect.ValueOf(&dummy) // *[]byte
valbase = toStructPointer(valptr) // *[]byte
case reflect.Ptr:
// message; valptr is **Msg; need to allocate the intermediate pointer
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valptr.Set(reflect.New(valptr.Type().Elem()))
valbase = toStructPointer(valptr)
default:
// everything else
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valbase = toStructPointer(valptr.Addr()) // **V
}
// Decode.
// This parses a restricted wire format, namely the encoding of a message
// with two fields. See enc_new_map for the format.
for o.index < oi {
// tagcode for key and value properties are always a single byte
// because they have tags 1 and 2.
tagcode := o.buf[o.index]
o.index++
switch tagcode {
case p.mkeyprop.tagcode[0]:
if err := p.mkeyprop.dec(o, p.mkeyprop, keybase); err != nil {
return err
}
case p.mvalprop.tagcode[0]:
if err := p.mvalprop.dec(o, p.mvalprop, valbase); err != nil {
return err
}
default:
// TODO: Should we silently skip this instead?
return fmt.Errorf("proto: bad map data tag %d", raw[0])
}
}
keyelem, valelem := keyptr.Elem(), valptr.Elem()
if !keyelem.IsValid() || !valelem.IsValid() {
// We did not decode the key or the value in the map entry.
// Either way, it's an invalid map entry.
return fmt.Errorf("proto: bad map data: missing key/val")
}
v.SetMapIndex(keyelem, valelem)
return nil
}
// Decode a group.
func (o *Buffer) dec_struct_group(p *Properties, base structPointer) error {
bas := structPointer_GetStructPointer(base, p.field)

View File

@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -30,6 +30,51 @@ import (
"reflect"
)
// Decode a reference to a bool pointer.
func (o *Buffer) dec_ref_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
if len(o.bools) == 0 {
o.bools = make([]bool, boolPoolSize)
}
o.bools[0] = u != 0
*structPointer_RefBool(base, p.field) = o.bools[0]
o.bools = o.bools[1:]
return nil
}
// Decode a reference to an int32 pointer.
func (o *Buffer) dec_ref_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
refWord32_Set(structPointer_RefWord32(base, p.field), o, uint32(u))
return nil
}
// Decode a reference to an int64 pointer.
func (o *Buffer) dec_ref_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
refWord64_Set(structPointer_RefWord64(base, p.field), o, u)
return nil
}
// Decode a reference to a string pointer.
func (o *Buffer) dec_ref_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_RefString(base, p.field) = s
return nil
}
// Decode a reference to a struct pointer.
func (o *Buffer) dec_ref_struct_message(p *Properties, base structPointer) (err error) {
raw, e := o.DecodeRawBytes(false)

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -60,9 +60,9 @@ func (e *RequiredNotSetError) Error() string {
}
var (
// errRepeatedHasNil is the error returned if Marshal is called with
// ErrRepeatedHasNil is the error returned if Marshal is called with
// a struct with a repeated field containing a nil element.
errRepeatedHasNil = errors.New("proto: repeated field has nil element")
ErrRepeatedHasNil = errors.New("proto: repeated field has nil element")
// ErrNil is the error returned if Marshal is called with nil.
ErrNil = errors.New("proto: Marshal called with nil")
@@ -105,11 +105,6 @@ func (p *Buffer) EncodeVarint(x uint64) error {
return nil
}
// SizeVarint returns the varint encoding size of an integer.
func SizeVarint(x uint64) int {
return sizeVarint(x)
}
func sizeVarint(x uint64) (n int) {
for {
n++
@@ -233,20 +228,6 @@ func Marshal(pb Message) ([]byte, error) {
return p.buf, err
}
// EncodeMessage writes the protocol buffer to the Buffer,
// prefixed by a varint-encoded length.
func (p *Buffer) EncodeMessage(pb Message) error {
t, base, err := getbase(pb)
if structPointer_IsNil(base) {
return ErrNil
}
if err == nil {
var state errorState
err = p.enc_len_struct(GetProperties(t.Elem()), base, &state)
}
return err
}
// Marshal takes the protocol buffer
// and encodes it into the wire format, writing the result to the
// Buffer.
@@ -266,7 +247,7 @@ func (p *Buffer) Marshal(pb Message) error {
return ErrNil
}
if err == nil {
err = p.enc_struct(GetProperties(t.Elem()), base)
err = p.enc_struct(t.Elem(), GetProperties(t.Elem()), base)
}
if collectStats {
@@ -290,7 +271,7 @@ func Size(pb Message) (n int) {
return 0
}
if err == nil {
n = size_struct(GetProperties(t.Elem()), base)
n = size_struct(t.Elem(), GetProperties(t.Elem()), base)
}
if collectStats {
@@ -317,16 +298,6 @@ func (o *Buffer) enc_bool(p *Properties, base structPointer) error {
return nil
}
func (o *Buffer) enc_proto3_bool(p *Properties, base structPointer) error {
v := *structPointer_BoolVal(base, p.field)
if !v {
return ErrNil
}
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, 1)
return nil
}
func size_bool(p *Properties, base structPointer) int {
v := *structPointer_Bool(base, p.field)
if v == nil {
@@ -335,32 +306,13 @@ func size_bool(p *Properties, base structPointer) int {
return len(p.tagcode) + 1 // each bool takes exactly one byte
}
func size_proto3_bool(p *Properties, base structPointer) int {
v := *structPointer_BoolVal(base, p.field)
if !v && !p.oneof {
return 0
}
return len(p.tagcode) + 1 // each bool takes exactly one byte
}
// Encode an int32.
func (o *Buffer) enc_int32(p *Properties, base structPointer) error {
v := structPointer_Word32(base, p.field)
if word32_IsNil(v) {
return ErrNil
}
x := int32(word32_Get(v)) // permit sign extension to use full 64-bit range
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, uint64(x))
return nil
}
func (o *Buffer) enc_proto3_int32(p *Properties, base structPointer) error {
v := structPointer_Word32Val(base, p.field)
x := int32(word32Val_Get(v)) // permit sign extension to use full 64-bit range
if x == 0 {
return ErrNil
}
x := word32_Get(v)
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, uint64(x))
return nil
@@ -371,64 +323,7 @@ func size_int32(p *Properties, base structPointer) (n int) {
if word32_IsNil(v) {
return 0
}
x := int32(word32_Get(v)) // permit sign extension to use full 64-bit range
n += len(p.tagcode)
n += p.valSize(uint64(x))
return
}
func size_proto3_int32(p *Properties, base structPointer) (n int) {
v := structPointer_Word32Val(base, p.field)
x := int32(word32Val_Get(v)) // permit sign extension to use full 64-bit range
if x == 0 && !p.oneof {
return 0
}
n += len(p.tagcode)
n += p.valSize(uint64(x))
return
}
// Encode a uint32.
// Exactly the same as int32, except for no sign extension.
func (o *Buffer) enc_uint32(p *Properties, base structPointer) error {
v := structPointer_Word32(base, p.field)
if word32_IsNil(v) {
return ErrNil
}
x := word32_Get(v)
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, uint64(x))
return nil
}
func (o *Buffer) enc_proto3_uint32(p *Properties, base structPointer) error {
v := structPointer_Word32Val(base, p.field)
x := word32Val_Get(v)
if x == 0 {
return ErrNil
}
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, uint64(x))
return nil
}
func size_uint32(p *Properties, base structPointer) (n int) {
v := structPointer_Word32(base, p.field)
if word32_IsNil(v) {
return 0
}
x := word32_Get(v)
n += len(p.tagcode)
n += p.valSize(uint64(x))
return
}
func size_proto3_uint32(p *Properties, base structPointer) (n int) {
v := structPointer_Word32Val(base, p.field)
x := word32Val_Get(v)
if x == 0 && !p.oneof {
return 0
}
n += len(p.tagcode)
n += p.valSize(uint64(x))
return
@@ -446,17 +341,6 @@ func (o *Buffer) enc_int64(p *Properties, base structPointer) error {
return nil
}
func (o *Buffer) enc_proto3_int64(p *Properties, base structPointer) error {
v := structPointer_Word64Val(base, p.field)
x := word64Val_Get(v)
if x == 0 {
return ErrNil
}
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, x)
return nil
}
func size_int64(p *Properties, base structPointer) (n int) {
v := structPointer_Word64(base, p.field)
if word64_IsNil(v) {
@@ -468,17 +352,6 @@ func size_int64(p *Properties, base structPointer) (n int) {
return
}
func size_proto3_int64(p *Properties, base structPointer) (n int) {
v := structPointer_Word64Val(base, p.field)
x := word64Val_Get(v)
if x == 0 && !p.oneof {
return 0
}
n += len(p.tagcode)
n += p.valSize(x)
return
}
// Encode a string.
func (o *Buffer) enc_string(p *Properties, base structPointer) error {
v := *structPointer_String(base, p.field)
@@ -491,16 +364,6 @@ func (o *Buffer) enc_string(p *Properties, base structPointer) error {
return nil
}
func (o *Buffer) enc_proto3_string(p *Properties, base structPointer) error {
v := *structPointer_StringVal(base, p.field)
if v == "" {
return ErrNil
}
o.buf = append(o.buf, p.tagcode...)
o.EncodeStringBytes(v)
return nil
}
func size_string(p *Properties, base structPointer) (n int) {
v := *structPointer_String(base, p.field)
if v == nil {
@@ -512,16 +375,6 @@ func size_string(p *Properties, base structPointer) (n int) {
return
}
func size_proto3_string(p *Properties, base structPointer) (n int) {
v := *structPointer_StringVal(base, p.field)
if v == "" && !p.oneof {
return 0
}
n += len(p.tagcode)
n += sizeStringBytes(v)
return
}
// All protocol buffer fields are nillable, but be careful.
func isNil(v reflect.Value) bool {
switch v.Kind() {
@@ -548,11 +401,11 @@ func (o *Buffer) enc_struct_message(p *Properties, base structPointer) error {
}
o.buf = append(o.buf, p.tagcode...)
o.EncodeRawBytes(data)
return state.err
return nil
}
o.buf = append(o.buf, p.tagcode...)
return o.enc_len_struct(p.sprop, structp, &state)
return o.enc_len_struct(p.stype, p.sprop, structp, &state)
}
func size_struct_message(p *Properties, base structPointer) int {
@@ -571,7 +424,7 @@ func size_struct_message(p *Properties, base structPointer) int {
}
n0 := len(p.tagcode)
n1 := size_struct(p.sprop, structp)
n1 := size_struct(p.stype, p.sprop, structp)
n2 := sizeVarint(uint64(n1)) // size of encoded length
return n0 + n1 + n2
}
@@ -585,7 +438,7 @@ func (o *Buffer) enc_struct_group(p *Properties, base structPointer) error {
}
o.EncodeVarint(uint64((p.Tag << 3) | WireStartGroup))
err := o.enc_struct(p.sprop, b)
err := o.enc_struct(p.stype, p.sprop, b)
if err != nil && !state.shouldContinue(err, nil) {
return err
}
@@ -600,7 +453,7 @@ func size_struct_group(p *Properties, base structPointer) (n int) {
}
n += sizeVarint(uint64((p.Tag << 3) | WireStartGroup))
n += size_struct(p.sprop, b)
n += size_struct(p.stype, p.sprop, b)
n += sizeVarint(uint64((p.Tag << 3) | WireEndGroup))
return
}
@@ -674,29 +527,9 @@ func (o *Buffer) enc_slice_byte(p *Properties, base structPointer) error {
return nil
}
func (o *Buffer) enc_proto3_slice_byte(p *Properties, base structPointer) error {
s := *structPointer_Bytes(base, p.field)
if len(s) == 0 {
return ErrNil
}
o.buf = append(o.buf, p.tagcode...)
o.EncodeRawBytes(s)
return nil
}
func size_slice_byte(p *Properties, base structPointer) (n int) {
s := *structPointer_Bytes(base, p.field)
if s == nil && !p.oneof {
return 0
}
n += len(p.tagcode)
n += sizeRawBytes(s)
return
}
func size_proto3_slice_byte(p *Properties, base structPointer) (n int) {
s := *structPointer_Bytes(base, p.field)
if len(s) == 0 && !p.oneof {
if s == nil {
return 0
}
n += len(p.tagcode)
@@ -713,7 +546,7 @@ func (o *Buffer) enc_slice_int32(p *Properties, base structPointer) error {
}
for i := 0; i < l; i++ {
o.buf = append(o.buf, p.tagcode...)
x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
x := s.Index(i)
p.valEnc(o, uint64(x))
}
return nil
@@ -727,7 +560,7 @@ func size_slice_int32(p *Properties, base structPointer) (n int) {
}
for i := 0; i < l; i++ {
n += len(p.tagcode)
x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
x := s.Index(i)
n += p.valSize(uint64(x))
}
return
@@ -735,75 +568,6 @@ func size_slice_int32(p *Properties, base structPointer) (n int) {
// Encode a slice of int32s ([]int32) in packed format.
func (o *Buffer) enc_slice_packed_int32(p *Properties, base structPointer) error {
s := structPointer_Word32Slice(base, p.field)
l := s.Len()
if l == 0 {
return ErrNil
}
// TODO: Reuse a Buffer.
buf := NewBuffer(nil)
for i := 0; i < l; i++ {
x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
p.valEnc(buf, uint64(x))
}
o.buf = append(o.buf, p.tagcode...)
o.EncodeVarint(uint64(len(buf.buf)))
o.buf = append(o.buf, buf.buf...)
return nil
}
func size_slice_packed_int32(p *Properties, base structPointer) (n int) {
s := structPointer_Word32Slice(base, p.field)
l := s.Len()
if l == 0 {
return 0
}
var bufSize int
for i := 0; i < l; i++ {
x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
bufSize += p.valSize(uint64(x))
}
n += len(p.tagcode)
n += sizeVarint(uint64(bufSize))
n += bufSize
return
}
// Encode a slice of uint32s ([]uint32).
// Exactly the same as int32, except for no sign extension.
func (o *Buffer) enc_slice_uint32(p *Properties, base structPointer) error {
s := structPointer_Word32Slice(base, p.field)
l := s.Len()
if l == 0 {
return ErrNil
}
for i := 0; i < l; i++ {
o.buf = append(o.buf, p.tagcode...)
x := s.Index(i)
p.valEnc(o, uint64(x))
}
return nil
}
func size_slice_uint32(p *Properties, base structPointer) (n int) {
s := structPointer_Word32Slice(base, p.field)
l := s.Len()
if l == 0 {
return 0
}
for i := 0; i < l; i++ {
n += len(p.tagcode)
x := s.Index(i)
n += p.valSize(uint64(x))
}
return
}
// Encode a slice of uint32s ([]uint32) in packed format.
// Exactly the same as int32, except for no sign extension.
func (o *Buffer) enc_slice_packed_uint32(p *Properties, base structPointer) error {
s := structPointer_Word32Slice(base, p.field)
l := s.Len()
if l == 0 {
@@ -821,7 +585,7 @@ func (o *Buffer) enc_slice_packed_uint32(p *Properties, base structPointer) erro
return nil
}
func size_slice_packed_uint32(p *Properties, base structPointer) (n int) {
func size_slice_packed_int32(p *Properties, base structPointer) (n int) {
s := structPointer_Word32Slice(base, p.field)
l := s.Len()
if l == 0 {
@@ -958,7 +722,7 @@ func (o *Buffer) enc_slice_struct_message(p *Properties, base structPointer) err
for i := 0; i < l; i++ {
structp := s.Index(i)
if structPointer_IsNil(structp) {
return errRepeatedHasNil
return ErrRepeatedHasNil
}
// Can the object marshal itself?
@@ -974,10 +738,10 @@ func (o *Buffer) enc_slice_struct_message(p *Properties, base structPointer) err
}
o.buf = append(o.buf, p.tagcode...)
err := o.enc_len_struct(p.sprop, structp, &state)
err := o.enc_len_struct(p.stype, p.sprop, structp, &state)
if err != nil && !state.shouldContinue(err, nil) {
if err == ErrNil {
return errRepeatedHasNil
return ErrRepeatedHasNil
}
return err
}
@@ -1004,7 +768,7 @@ func size_slice_struct_message(p *Properties, base structPointer) (n int) {
continue
}
n0 := size_struct(p.sprop, structp)
n0 := size_struct(p.stype, p.sprop, structp)
n1 := sizeVarint(uint64(n0)) // size of encoded length
n += n0 + n1
}
@@ -1020,16 +784,16 @@ func (o *Buffer) enc_slice_struct_group(p *Properties, base structPointer) error
for i := 0; i < l; i++ {
b := s.Index(i)
if structPointer_IsNil(b) {
return errRepeatedHasNil
return ErrRepeatedHasNil
}
o.EncodeVarint(uint64((p.Tag << 3) | WireStartGroup))
err := o.enc_struct(p.sprop, b)
err := o.enc_struct(p.stype, p.sprop, b)
if err != nil && !state.shouldContinue(err, nil) {
if err == ErrNil {
return errRepeatedHasNil
return ErrRepeatedHasNil
}
return err
}
@@ -1051,7 +815,7 @@ func size_slice_struct_group(p *Properties, base structPointer) (n int) {
return // return size up to this point
}
n += size_struct(p.sprop, b)
n += size_struct(p.stype, p.sprop, b)
}
return
}
@@ -1088,118 +852,12 @@ func size_map(p *Properties, base structPointer) int {
return sizeExtensionMap(v)
}
// Encode a map field.
func (o *Buffer) enc_new_map(p *Properties, base structPointer) error {
var state errorState // XXX: or do we need to plumb this through?
/*
A map defined as
map<key_type, value_type> map_field = N;
is encoded in the same way as
message MapFieldEntry {
key_type key = 1;
value_type value = 2;
}
repeated MapFieldEntry map_field = N;
*/
v := structPointer_NewAt(base, p.field, p.mtype).Elem() // map[K]V
if v.Len() == 0 {
return nil
}
keycopy, valcopy, keybase, valbase := mapEncodeScratch(p.mtype)
enc := func() error {
if err := p.mkeyprop.enc(o, p.mkeyprop, keybase); err != nil {
return err
}
if err := p.mvalprop.enc(o, p.mvalprop, valbase); err != nil {
return err
}
return nil
}
// Don't sort map keys. It is not required by the spec, and C++ doesn't do it.
for _, key := range v.MapKeys() {
val := v.MapIndex(key)
// The only illegal map entry values are nil message pointers.
if val.Kind() == reflect.Ptr && val.IsNil() {
return errors.New("proto: map has nil element")
}
keycopy.Set(key)
valcopy.Set(val)
o.buf = append(o.buf, p.tagcode...)
if err := o.enc_len_thing(enc, &state); err != nil {
return err
}
}
return nil
}
func size_new_map(p *Properties, base structPointer) int {
v := structPointer_NewAt(base, p.field, p.mtype).Elem() // map[K]V
keycopy, valcopy, keybase, valbase := mapEncodeScratch(p.mtype)
n := 0
for _, key := range v.MapKeys() {
val := v.MapIndex(key)
keycopy.Set(key)
valcopy.Set(val)
// Tag codes for key and val are the responsibility of the sub-sizer.
keysize := p.mkeyprop.size(p.mkeyprop, keybase)
valsize := p.mvalprop.size(p.mvalprop, valbase)
entry := keysize + valsize
// Add on tag code and length of map entry itself.
n += len(p.tagcode) + sizeVarint(uint64(entry)) + entry
}
return n
}
// mapEncodeScratch returns a new reflect.Value matching the map's value type,
// and a structPointer suitable for passing to an encoder or sizer.
func mapEncodeScratch(mapType reflect.Type) (keycopy, valcopy reflect.Value, keybase, valbase structPointer) {
// Prepare addressable doubly-indirect placeholders for the key and value types.
// This is needed because the element-type encoders expect **T, but the map iteration produces T.
keycopy = reflect.New(mapType.Key()).Elem() // addressable K
keyptr := reflect.New(reflect.PtrTo(keycopy.Type())).Elem() // addressable *K
keyptr.Set(keycopy.Addr()) //
keybase = toStructPointer(keyptr.Addr()) // **K
// Value types are more varied and require special handling.
switch mapType.Elem().Kind() {
case reflect.Slice:
// []byte
var dummy []byte
valcopy = reflect.ValueOf(&dummy).Elem() // addressable []byte
valbase = toStructPointer(valcopy.Addr())
case reflect.Ptr:
// message; the generated field type is map[K]*Msg (so V is *Msg),
// so we only need one level of indirection.
valcopy = reflect.New(mapType.Elem()).Elem() // addressable V
valbase = toStructPointer(valcopy.Addr())
default:
// everything else
valcopy = reflect.New(mapType.Elem()).Elem() // addressable V
valptr := reflect.New(reflect.PtrTo(valcopy.Type())).Elem() // addressable *V
valptr.Set(valcopy.Addr()) //
valbase = toStructPointer(valptr.Addr()) // **V
}
return
}
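
For illustration only, the double-indirection scratch trick from mapEncodeScratch in isolation: the element encoders expect **T, while map iteration yields an unaddressable T, so one addressable K and one addressable *K are prepared once and reused for every entry.

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	m := map[int32]string{1: "Ken"}
	mt := reflect.TypeOf(m)

	// Addressable scratch K, plus an addressable *K pointing at it,
	// so code expecting **K can be handed keyptr.Addr().
	keycopy := reflect.New(mt.Key()).Elem()                     // addressable K
	keyptr := reflect.New(reflect.PtrTo(keycopy.Type())).Elem() // addressable *K
	keyptr.Set(keycopy.Addr())

	for _, k := range reflect.ValueOf(m).MapKeys() {
		keycopy.Set(k)                         // copy the unaddressable key
		fmt.Println(keyptr.Elem().Interface()) // read it back through *K
	}
}
```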
// Encode a struct.
func (o *Buffer) enc_struct(prop *StructProperties, base structPointer) error {
func (o *Buffer) enc_struct(t reflect.Type, prop *StructProperties, base structPointer) error {
var state errorState
// Encode fields in tag order so that decoders may use optimizations
// that depend on the ordering.
// https://developers.google.com/protocol-buffers/docs/encoding#order
// http://code.google.com/apis/protocolbuffers/docs/encoding.html#order
for _, i := range prop.order {
p := prop.Prop[i]
if p.enc != nil {
@@ -1209,9 +867,6 @@ func (o *Buffer) enc_struct(prop *StructProperties, base structPointer) error {
if p.Required && state.err == nil {
state.err = &RequiredNotSetError{p.Name}
}
} else if err == errRepeatedHasNil {
// Give more context to nil values in repeated fields.
return errors.New("repeated field " + p.OrigName + " has nil element")
} else if !state.shouldContinue(err, p) {
return err
}
@@ -1219,14 +874,6 @@ func (o *Buffer) enc_struct(prop *StructProperties, base structPointer) error {
}
}
// Do oneof fields.
if prop.oneofMarshaler != nil {
m := structPointer_Interface(base, prop.stype).(Message)
if err := prop.oneofMarshaler(m, o); err != nil {
return err
}
}
// Add unrecognized fields at the end.
if prop.unrecField.IsValid() {
v := *structPointer_Bytes(base, prop.unrecField)
@@ -1238,7 +885,7 @@ func (o *Buffer) enc_struct(prop *StructProperties, base structPointer) error {
return state.err
}
func size_struct(prop *StructProperties, base structPointer) (n int) {
func size_struct(t reflect.Type, prop *StructProperties, base structPointer) (n int) {
for _, i := range prop.order {
p := prop.Prop[i]
if p.size != nil {
@@ -1252,28 +899,17 @@ func size_struct(prop *StructProperties, base structPointer) (n int) {
n += len(v)
}
// Factor in any oneof fields.
if prop.oneofSizer != nil {
m := structPointer_Interface(base, prop.stype).(Message)
n += prop.oneofSizer(m)
}
return
}
var zeroes [20]byte // longer than any conceivable sizeVarint
// Encode a struct, preceded by its encoded length (as a varint).
func (o *Buffer) enc_len_struct(prop *StructProperties, base structPointer, state *errorState) error {
return o.enc_len_thing(func() error { return o.enc_struct(prop, base) }, state)
}
// Encode something, preceded by its encoded length (as a varint).
func (o *Buffer) enc_len_thing(enc func() error, state *errorState) error {
func (o *Buffer) enc_len_struct(t reflect.Type, prop *StructProperties, base structPointer, state *errorState) error {
iLen := len(o.buf)
o.buf = append(o.buf, 0, 0, 0, 0) // reserve four bytes for length
iMsg := len(o.buf)
err := enc()
err := o.enc_struct(t, prop, base)
if err != nil && !state.shouldContinue(err, nil) {
return err
}
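
A sketch of the reserve-then-splice pattern visible above (the helpers are assumptions, not this file's code): enc_len_struct reserves four length bytes, encodes the body, then replaces the reservation with the actual varint length.

```go
package main

import "fmt"

func appendVarint(buf []byte, x uint64) []byte {
	for x >= 0x80 {
		buf = append(buf, byte(x)|0x80)
		x >>= 7
	}
	return append(buf, byte(x))
}

// encodeLenPrefixed reserves space for the length, encodes the body,
// then splices the real varint length in front of it.
func encodeLenPrefixed(buf []byte, body func([]byte) []byte) []byte {
	iLen := len(buf)
	buf = append(buf, 0, 0, 0, 0) // reserve four bytes for the length
	iMsg := len(buf)
	buf = body(buf)
	var tmp [10]byte
	lb := appendVarint(tmp[:0], uint64(len(buf)-iMsg))
	// lb does not alias buf, so this append both shifts the body and
	// overwrites the reservation in one step.
	return append(buf[:iLen], append(lb, buf[iMsg:]...)...)
}

func main() {
	out := encodeLenPrefixed(nil, func(b []byte) []byte {
		return append(b, "hello"...)
	})
	fmt.Printf("%% x\n", out) // 05 68 65 6c 6c 6f
}
```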

View File

@@ -1,12 +1,12 @@
// Extensions for Protocol Buffers to create more go like structures.
//
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// http://github.com/golang/protobuf/
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -40,10 +40,6 @@ import (
"reflect"
)
func NewRequiredNotSetError(field string) *RequiredNotSetError {
return &RequiredNotSetError{field}
}
type Sizer interface {
Size() int
}
@@ -68,9 +64,12 @@ func size_ext_slice_byte(p *Properties, base structPointer) (n int) {
// Encode a reference to a bool pointer.
func (o *Buffer) enc_ref_bool(p *Properties, base structPointer) error {
v := *structPointer_BoolVal(base, p.field)
v := structPointer_RefBool(base, p.field)
if v == nil {
return ErrNil
}
x := 0
if v {
if *v {
x = 1
}
o.buf = append(o.buf, p.tagcode...)
@@ -79,37 +78,31 @@ func (o *Buffer) enc_ref_bool(p *Properties, base structPointer) error {
}
func size_ref_bool(p *Properties, base structPointer) int {
v := structPointer_RefBool(base, p.field)
if v == nil {
return 0
}
return len(p.tagcode) + 1 // each bool takes exactly one byte
}
// Encode a reference to an int32 pointer.
func (o *Buffer) enc_ref_int32(p *Properties, base structPointer) error {
v := structPointer_Word32Val(base, p.field)
x := int32(word32Val_Get(v))
v := structPointer_RefWord32(base, p.field)
if refWord32_IsNil(v) {
return ErrNil
}
x := refWord32_Get(v)
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, uint64(x))
return nil
}
func size_ref_int32(p *Properties, base structPointer) (n int) {
v := structPointer_Word32Val(base, p.field)
x := int32(word32Val_Get(v))
n += len(p.tagcode)
n += p.valSize(uint64(x))
return
}
func (o *Buffer) enc_ref_uint32(p *Properties, base structPointer) error {
v := structPointer_Word32Val(base, p.field)
x := word32Val_Get(v)
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, uint64(x))
return nil
}
func size_ref_uint32(p *Properties, base structPointer) (n int) {
v := structPointer_Word32Val(base, p.field)
x := word32Val_Get(v)
v := structPointer_RefWord32(base, p.field)
if refWord32_IsNil(v) {
return 0
}
x := refWord32_Get(v)
n += len(p.tagcode)
n += p.valSize(uint64(x))
return
@@ -117,16 +110,22 @@ func size_ref_uint32(p *Properties, base structPointer) (n int) {
// Encode a reference to an int64 pointer.
func (o *Buffer) enc_ref_int64(p *Properties, base structPointer) error {
v := structPointer_Word64Val(base, p.field)
x := word64Val_Get(v)
v := structPointer_RefWord64(base, p.field)
if refWord64_IsNil(v) {
return ErrNil
}
x := refWord64_Get(v)
o.buf = append(o.buf, p.tagcode...)
p.valEnc(o, x)
return nil
}
func size_ref_int64(p *Properties, base structPointer) (n int) {
v := structPointer_Word64Val(base, p.field)
x := word64Val_Get(v)
v := structPointer_RefWord64(base, p.field)
if refWord64_IsNil(v) {
return 0
}
x := refWord64_Get(v)
n += len(p.tagcode)
n += p.valSize(x)
return
@@ -134,16 +133,24 @@ func size_ref_int64(p *Properties, base structPointer) (n int) {
// Encode a reference to a string pointer.
func (o *Buffer) enc_ref_string(p *Properties, base structPointer) error {
v := *structPointer_StringVal(base, p.field)
v := structPointer_RefString(base, p.field)
if v == nil {
return ErrNil
}
x := *v
o.buf = append(o.buf, p.tagcode...)
o.EncodeStringBytes(v)
o.EncodeStringBytes(x)
return nil
}
func size_ref_string(p *Properties, base structPointer) (n int) {
v := *structPointer_StringVal(base, p.field)
v := structPointer_RefString(base, p.field)
if v == nil {
return 0
}
x := *v
n += len(p.tagcode)
n += sizeStringBytes(v)
n += sizeStringBytes(x)
return
}
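
To make the nil handling concrete, here is a hypothetical stand-alone encoder following the same pattern as enc_ref_string above: a nil pointer means "unset", so the field is simply skipped. `appendVarint` and `encOptionalString` are assumed helpers for illustration.

```go
package main

import "fmt"

func appendVarint(buf []byte, x uint64) []byte {
	for x >= 0x80 {
		buf = append(buf, byte(x)|0x80)
		x >>= 7
	}
	return append(buf, byte(x))
}

// encOptionalString mirrors the nil check in enc_ref_string:
// nil pointer == unset field == encode nothing.
func encOptionalString(buf []byte, fieldNum int, v *string) []byte {
	if v == nil {
		return buf
	}
	buf = appendVarint(buf, uint64(fieldNum)<<3|2) // wire type 2: length-delimited
	buf = appendVarint(buf, uint64(len(*v)))
	return append(buf, *v...)
}

func main() {
	s := "hi"
	fmt.Printf("%% x\n", encOptionalString(nil, 1, &s)) // 0a 02 68 69
	fmt.Println(len(encOptionalString(nil, 1, nil)))    // 0
}
```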
@@ -168,7 +175,7 @@ func (o *Buffer) enc_ref_struct_message(p *Properties, base structPointer) error
}
o.buf = append(o.buf, p.tagcode...)
return o.enc_len_struct(p.sprop, structp, &state)
return o.enc_len_struct(p.stype, p.sprop, structp, &state)
}
//TODO this is only copied, please fix this
@@ -188,7 +195,7 @@ func size_ref_struct_message(p *Properties, base structPointer) int {
}
n0 := len(p.tagcode)
n1 := size_struct(p.sprop, structp)
n1 := size_struct(p.stype, p.sprop, structp)
n2 := sizeVarint(uint64(n1)) // size of encoded length
return n0 + n1 + n2
}
@@ -203,7 +210,7 @@ func (o *Buffer) enc_slice_ref_struct_message(p *Properties, base structPointer)
for i := 0; i < l; i++ {
structp := structPointer_Add(ss1, field(uintptr(i)*size))
if structPointer_IsNil(structp) {
return errRepeatedHasNil
return ErrRepeatedHasNil
}
// Can the object marshal itself?
@@ -219,10 +226,10 @@ func (o *Buffer) enc_slice_ref_struct_message(p *Properties, base structPointer)
}
o.buf = append(o.buf, p.tagcode...)
err := o.enc_len_struct(p.sprop, structp, &state)
err := o.enc_len_struct(p.stype, p.sprop, structp, &state)
if err != nil && !state.shouldContinue(err, nil) {
if err == ErrNil {
return errRepeatedHasNil
return ErrRepeatedHasNil
}
return err
}
@@ -253,7 +260,7 @@ func size_slice_ref_struct_message(p *Properties, base structPointer) (n int) {
continue
}
n0 := size_struct(p.sprop, structp)
n0 := size_struct(p.stype, p.sprop, structp)
n1 := sizeVarint(uint64(n0)) // size of encoded length
n += n0 + n1
}

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -30,6 +30,7 @@
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Protocol buffer comparison.
// TODO: MessageSet.
package proto
@@ -56,7 +57,7 @@ Equality is defined in this way:
although represented by []byte, is not a repeated field)
- Two unset fields are equal.
- Two unknown field sets are equal if their current
encoded state is equal.
encoded state is equal. (TODO)
- Two extension sets are equal iff they have corresponding
elements that are pairwise equal.
- Every other combination of things is not equal.
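
A toy sketch of the recursive comparison these rules describe (this is not proto.Equal, just the shape of equalAny below: nil pointers are "unset", and two unset fields are equal):

```go
package main

import (
	"fmt"
	"reflect"
)

// equalish compares two values the way the rules above suggest.
func equalish(v1, v2 reflect.Value) bool {
	switch v1.Kind() {
	case reflect.Ptr:
		if v1.IsNil() || v2.IsNil() {
			return v1.IsNil() == v2.IsNil() // both unset, or set vs unset
		}
		return equalish(v1.Elem(), v2.Elem())
	case reflect.String:
		return v1.String() == v2.String()
	case reflect.Int32, reflect.Int64:
		return v1.Int() == v2.Int()
	}
	return reflect.DeepEqual(v1.Interface(), v2.Interface())
}

func main() {
	a, b := "x", "x"
	fmt.Println(equalish(reflect.ValueOf(&a), reflect.ValueOf(&b)))             // true
	fmt.Println(equalish(reflect.ValueOf((*string)(nil)), reflect.ValueOf(&b))) // false: set vs unset
}
```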
@@ -153,32 +154,6 @@ func equalAny(v1, v2 reflect.Value) bool {
return v1.Float() == v2.Float()
case reflect.Int32, reflect.Int64:
return v1.Int() == v2.Int()
case reflect.Interface:
// Probably a oneof field; compare the inner values.
n1, n2 := v1.IsNil(), v2.IsNil()
if n1 || n2 {
return n1 == n2
}
e1, e2 := v1.Elem(), v2.Elem()
if e1.Type() != e2.Type() {
return false
}
return equalAny(e1, e2)
case reflect.Map:
if v1.Len() != v2.Len() {
return false
}
for _, key := range v1.MapKeys() {
val2 := v2.MapIndex(key)
if !val2.IsValid() {
// This key was not found in the second map.
return false
}
if !equalAny(v1.MapIndex(key), val2) {
return false
}
}
return true
case reflect.Ptr:
return equalAny(v1.Elem(), v2.Elem())
case reflect.Slice:

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -34,8 +34,8 @@ package proto_test
import (
"testing"
. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
pb "./testdata"
. "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
)
// Four identical base messages.
@@ -155,49 +155,6 @@ var EqualTests = []struct {
},
true,
},
{
"map same",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
true,
},
{
"map different entry",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{2: "Rob"}},
false,
},
{
"map different key only",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{2: "Ken"}},
false,
},
{
"map different value only",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Rob"}},
false,
},
{
"oneof same",
&pb.Communique{Union: &pb.Communique_Number{Number: 41}},
&pb.Communique{Union: &pb.Communique_Number{Number: 41}},
true,
},
{
"oneof one nil",
&pb.Communique{Union: &pb.Communique_Number{Number: 41}},
&pb.Communique{},
false,
},
{
"oneof different",
&pb.Communique{Union: &pb.Communique_Number{Number: 41}},
&pb.Communique{Union: &pb.Communique_Name{Name: "Bobby Tables"}},
false,
},
}
func TestEqual(t *testing.T) {

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -37,7 +37,6 @@ package proto
import (
"errors"
"fmt"
"reflect"
"strconv"
"sync"
@@ -175,39 +174,32 @@ func extensionProperties(ed *ExtensionDesc) *Properties {
// encodeExtensionMap encodes any unmarshaled (unencoded) extensions in m.
func encodeExtensionMap(m map[int32]Extension) error {
for k, e := range m {
err := encodeExtension(&e)
if err != nil {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
continue
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
p := NewBuffer(nil)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
if err := props.enc(p, props, toStructPointer(x)); err != nil {
return err
}
e.enc = p.buf
m[k] = e
}
return nil
}
func encodeExtension(e *Extension) error {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
return nil
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
p := NewBuffer(nil)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
if err := props.enc(p, props, toStructPointer(x)); err != nil {
return err
}
e.enc = p.buf
return nil
}
func sizeExtensionMap(m map[int32]Extension) (n int) {
for _, e := range m {
if e.value == nil || e.desc == nil {
@@ -308,12 +300,9 @@ func GetExtension(pb extendableProto, extension *ExtensionDesc) (interface{}, er
}
if epb, doki := pb.(extensionsMap); doki {
emap := epb.ExtensionMap()
e, ok := emap[extension.Field]
e, ok := epb.ExtensionMap()[extension.Field]
if !ok {
// defaultExtensionValue returns the default value or
// ErrMissingExtension if there is no default.
return defaultExtensionValue(extension)
return nil, ErrMissingExtension
}
if e.value != nil {
// Already decoded. Check the descriptor, though.
@@ -336,7 +325,6 @@ func GetExtension(pb extendableProto, extension *ExtensionDesc) (interface{}, er
e.value = v
e.desc = extension
e.enc = nil
emap[extension.Field] = e
return e.value, nil
} else if epb, doki := pb.(extensionsBytes); doki {
ext := epb.GetExtensions()
@@ -358,46 +346,10 @@ func GetExtension(pb extendableProto, extension *ExtensionDesc) (interface{}, er
}
o += n + l
}
return defaultExtensionValue(extension)
}
panic("unreachable")
}
// defaultExtensionValue returns the default value for extension.
// If no default for an extension is defined ErrMissingExtension is returned.
func defaultExtensionValue(extension *ExtensionDesc) (interface{}, error) {
t := reflect.TypeOf(extension.ExtensionType)
props := extensionProperties(extension)
sf, _, err := fieldDefault(t, props)
if err != nil {
return nil, err
}
if sf == nil || sf.value == nil {
// There is no default value.
return nil, ErrMissingExtension
}
if t.Kind() != reflect.Ptr {
// We do not need to return a Ptr, we can directly return sf.value.
return sf.value, nil
}
// We need to return an interface{} that is a pointer to sf.value.
value := reflect.New(t).Elem()
value.Set(reflect.New(value.Type().Elem()))
if sf.kind == reflect.Int32 {
// We may have an int32 or an enum, but the underlying data is int32.
// Since we can't set an int32 into a non-int32 reflect.Value directly,
// set it as an int32.
value.Elem().SetInt(int64(sf.value.(int32)))
} else {
value.Elem().Set(reflect.ValueOf(sf.value))
}
return value.Interface(), nil
}
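
The pointer construction at the end of defaultExtensionValue, shown in isolation. A sketch only; the default 77 echoes the Default_Test_Type example that appears elsewhere in this diff.

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	// Build an interface{} holding a *int32 set to a declared default,
	// the way defaultExtensionValue does for pointer-typed extensions.
	t := reflect.TypeOf((*int32)(nil)) // extension's Go type: *int32
	value := reflect.New(t).Elem()     // addressable *int32, currently nil
	value.Set(reflect.New(t.Elem()))   // point it at a fresh int32
	value.Elem().SetInt(77)            // write the declared default
	out := value.Interface().(*int32)
	fmt.Println(*out) // 77
}
```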
// decodeExtension decodes an extension encoded in b.
func decodeExtension(b []byte, extension *ExtensionDesc) (interface{}, error) {
o := NewBuffer(b)
@@ -443,9 +395,6 @@ func GetExtensions(pb Message, es []*ExtensionDesc) (extensions []interface{}, e
extensions = make([]interface{}, len(es))
for i, e := range es {
extensions[i], err = GetExtension(epb, e)
if err == ErrMissingExtension {
err = nil
}
if err != nil {
return
}
@@ -462,18 +411,7 @@ func SetExtension(pb extendableProto, extension *ExtensionDesc, value interface{
if typ != reflect.TypeOf(value) {
return errors.New("proto: bad extension value type")
}
// nil extension values need to be caught early, because the
// encoder can't distinguish an ErrNil due to a nil extension
// from an ErrNil due to a missing field. Extensions are
// always optional, so the encoder would just swallow the error
// and drop all the extensions from the encoded message.
if reflect.ValueOf(value).IsNil() {
return fmt.Errorf("proto: SetExtension called with nil value of type %T", value)
}
return setExtension(pb, extension, value)
}
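
The comment above is about typed nils: a nil *T stored in an interface{} does not compare equal to nil, so the reflect check is the only way to catch it early. A small demonstration:

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	// A nil *int32 arrives as a non-nil interface{}, so a plain
	// `value == nil` check misses it.
	var p *int32
	var value interface{} = p
	fmt.Println(value == nil)                   // false
	fmt.Println(reflect.ValueOf(value).IsNil()) // true
}
```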
func setExtension(pb extendableProto, extension *ExtensionDesc, value interface{}) error {
if epb, doki := pb.(extensionsMap); doki {
epb.ExtensionMap()[extension.Field] = Extension{desc: extension, value: value}
} else if epb, doki := pb.(extensionsBytes); doki {

View File

@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -28,7 +28,6 @@ package proto
import (
"bytes"
"errors"
"fmt"
"reflect"
"sort"
@@ -186,36 +185,5 @@ func NewExtension(e []byte) Extension {
}
func (this Extension) GoString() string {
if this.enc == nil {
if err := encodeExtension(&this); err != nil {
panic(err)
}
}
return fmt.Sprintf("proto.NewExtension(%#v)", this.enc)
}
func SetUnsafeExtension(pb extendableProto, fieldNum int32, value interface{}) error {
typ := reflect.TypeOf(pb).Elem()
ext, ok := extensionMaps[typ]
if !ok {
return fmt.Errorf("proto: bad extended type; %s is not extendable", typ.String())
}
desc, ok := ext[fieldNum]
if !ok {
return errors.New("proto: bad extension number; not in declared ranges")
}
return setExtension(pb, desc, value)
}
func GetUnsafeExtension(pb extendableProto, fieldNum int32) (interface{}, error) {
typ := reflect.TypeOf(pb).Elem()
ext, ok := extensionMaps[typ]
if !ok {
return nil, fmt.Errorf("proto: bad extended type; %s is not extendable", typ.String())
}
desc, ok := ext[fieldNum]
if !ok {
return nil, fmt.Errorf("unregistered field number %d", fieldNum)
}
return GetExtension(pb, desc)
}

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -30,230 +30,171 @@
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package proto converts data structures to and from the wire format of
protocol buffers. It works in concert with the Go source code generated
for .proto files by the protocol compiler.
Package proto converts data structures to and from the wire format of
protocol buffers. It works in concert with the Go source code generated
for .proto files by the protocol compiler.
A summary of the properties of the protocol buffer interface
for a protocol buffer variable v:
A summary of the properties of the protocol buffer interface
for a protocol buffer variable v:
- Names are turned from camel_case to CamelCase for export.
- There are no methods on v to set fields; just treat
them as structure fields.
- There are getters that return a field's value if set,
and return the field's default value if unset.
The getters work even if the receiver is a nil message.
- The zero value for a struct is its correct initialization state.
All desired fields must be set before marshaling.
- A Reset() method will restore a protobuf struct to its zero state.
- Non-repeated fields are pointers to the values; nil means unset.
That is, optional or required field int32 f becomes F *int32.
- Repeated fields are slices.
- Helper functions are available to aid the setting of fields.
msg.Foo = proto.String("hello") // set field
- Constants are defined to hold the default values of all fields that
have them. They have the form Default_StructName_FieldName.
Because the getter methods handle defaulted values,
direct use of these constants should be rare.
- Enums are given type names and maps from names to values.
Enum values are prefixed by the enclosing message's name, or by the
enum's type name if it is a top-level enum. Enum types have a String
method, and an Enum method to assist in message construction.
- Nested messages, groups and enums have type names prefixed with the name of
the surrounding message type.
- Extensions are given descriptor names that start with E_,
followed by an underscore-delimited list of the nested messages
that contain it (if any) followed by the CamelCased name of the
extension field itself. HasExtension, ClearExtension, GetExtension
and SetExtension are functions for manipulating extensions.
- Oneof field sets are given a single field in their message,
with distinguished wrapper types for each possible field value.
- Marshal and Unmarshal are functions to encode and decode the wire format.
- Names are turned from camel_case to CamelCase for export.
- There are no methods on v to set fields; just treat
them as structure fields.
- There are getters that return a field's value if set,
and return the field's default value if unset.
The getters work even if the receiver is a nil message.
- The zero value for a struct is its correct initialization state.
All desired fields must be set before marshaling.
- A Reset() method will restore a protobuf struct to its zero state.
- Non-repeated fields are pointers to the values; nil means unset.
That is, optional or required field int32 f becomes F *int32.
- Repeated fields are slices.
- Helper functions are available to aid the setting of fields.
Helpers for getting values are superseded by the
GetFoo methods and their use is deprecated.
msg.Foo = proto.String("hello") // set field
- Constants are defined to hold the default values of all fields that
have them. They have the form Default_StructName_FieldName.
Because the getter methods handle defaulted values,
direct use of these constants should be rare.
- Enums are given type names and maps from names to values.
Enum values are prefixed with the enum's type name. Enum types have
a String method, and an Enum method to assist in message construction.
- Nested groups and enums have type names prefixed with the name of
the surrounding message type.
- Extensions are given descriptor names that start with E_,
followed by an underscore-delimited list of the nested messages
that contain it (if any) followed by the CamelCased name of the
extension field itself. HasExtension, ClearExtension, GetExtension
and SetExtension are functions for manipulating extensions.
- Marshal and Unmarshal are functions to encode and decode the wire format.
The simplest way to describe this is to see an example.
Given file test.proto, containing
The simplest way to describe this is to see an example.
Given file test.proto, containing
package example;
package example;
enum FOO { X = 17; }
enum FOO { X = 17; };
message Test {
required string label = 1;
optional int32 type = 2 [default=77];
repeated int64 reps = 3;
optional group OptionalGroup = 4 {
required string RequiredField = 5;
}
oneof union {
int32 number = 6;
string name = 7;
}
}
The resulting file, test.pb.go, is:
package example
import proto "github.com/gogo/protobuf/proto"
import math "math"
type FOO int32
const (
FOO_X FOO = 17
)
var FOO_name = map[int32]string{
17: "X",
}
var FOO_value = map[string]int32{
"X": 17,
}
func (x FOO) Enum() *FOO {
p := new(FOO)
*p = x
return p
}
func (x FOO) String() string {
return proto.EnumName(FOO_name, int32(x))
}
func (x *FOO) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(FOO_value, data)
if err != nil {
return err
message Test {
required string label = 1;
optional int32 type = 2 [default=77];
repeated int64 reps = 3;
optional group OptionalGroup = 4 {
required string RequiredField = 5;
}
}
*x = FOO(value)
return nil
}
type Test struct {
Label *string `protobuf:"bytes,1,req,name=label" json:"label,omitempty"`
Type *int32 `protobuf:"varint,2,opt,name=type,def=77" json:"type,omitempty"`
Reps []int64 `protobuf:"varint,3,rep,name=reps" json:"reps,omitempty"`
Optionalgroup *Test_OptionalGroup `protobuf:"group,4,opt,name=OptionalGroup" json:"optionalgroup,omitempty"`
// Types that are valid to be assigned to Union:
// *Test_Number
// *Test_Name
Union isTest_Union `protobuf_oneof:"union"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Test) Reset() { *m = Test{} }
func (m *Test) String() string { return proto.CompactTextString(m) }
func (*Test) ProtoMessage() {}
The resulting file, test.pb.go, is:
type isTest_Union interface {
isTest_Union()
}
package example
type Test_Number struct {
Number int32 `protobuf:"varint,6,opt,name=number"`
}
type Test_Name struct {
Name string `protobuf:"bytes,7,opt,name=name"`
}
import "code.google.com/p/gogoprotobuf/proto"
func (*Test_Number) isTest_Union() {}
func (*Test_Name) isTest_Union() {}
func (m *Test) GetUnion() isTest_Union {
if m != nil {
return m.Union
type FOO int32
const (
FOO_X FOO = 17
)
var FOO_name = map[int32]string{
17: "X",
}
return nil
}
const Default_Test_Type int32 = 77
func (m *Test) GetLabel() string {
if m != nil && m.Label != nil {
return *m.Label
var FOO_value = map[string]int32{
"X": 17,
}
return ""
}
func (m *Test) GetType() int32 {
if m != nil && m.Type != nil {
return *m.Type
func (x FOO) Enum() *FOO {
p := new(FOO)
*p = x
return p
}
return Default_Test_Type
}
func (m *Test) GetOptionalgroup() *Test_OptionalGroup {
if m != nil {
return m.Optionalgroup
func (x FOO) String() string {
return proto.EnumName(FOO_name, int32(x))
}
return nil
}
type Test_OptionalGroup struct {
RequiredField *string `protobuf:"bytes,5,req" json:"RequiredField,omitempty"`
}
func (m *Test_OptionalGroup) Reset() { *m = Test_OptionalGroup{} }
func (m *Test_OptionalGroup) String() string { return proto.CompactTextString(m) }
func (m *Test_OptionalGroup) GetRequiredField() string {
if m != nil && m.RequiredField != nil {
return *m.RequiredField
type Test struct {
Label *string `protobuf:"bytes,1,req,name=label" json:"label,omitempty"`
Type *int32 `protobuf:"varint,2,opt,name=type,def=77" json:"type,omitempty"`
Reps []int64 `protobuf:"varint,3,rep,name=reps" json:"reps,omitempty"`
Optionalgroup *Test_OptionalGroup `protobuf:"group,4,opt,name=OptionalGroup" json:"optionalgroup,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
return ""
}
func (this *Test) Reset() { *this = Test{} }
func (this *Test) String() string { return proto.CompactTextString(this) }
const Default_Test_Type int32 = 77
func (m *Test) GetNumber() int32 {
if x, ok := m.GetUnion().(*Test_Number); ok {
return x.Number
func (this *Test) GetLabel() string {
if this != nil && this.Label != nil {
return *this.Label
}
return ""
}
return 0
}
func (m *Test) GetName() string {
if x, ok := m.GetUnion().(*Test_Name); ok {
return x.Name
func (this *Test) GetType() int32 {
if this != nil && this.Type != nil {
return *this.Type
}
return Default_Test_Type
}
return ""
}
func init() {
proto.RegisterEnum("example.FOO", FOO_name, FOO_value)
}
To create and play with a Test object:
package main
import (
"log"
"github.com/gogo/protobuf/proto"
pb "./example.pb"
)
func main() {
test := &pb.Test{
Label: proto.String("hello"),
Type: proto.Int32(17),
Optionalgroup: &pb.Test_OptionalGroup{
RequiredField: proto.String("good bye"),
},
Union: &pb.Test_Name{"fred"},
func (this *Test) GetOptionalgroup() *Test_OptionalGroup {
if this != nil {
return this.Optionalgroup
}
return nil
}
data, err := proto.Marshal(test)
if err != nil {
log.Fatal("marshaling error: ", err)
type Test_OptionalGroup struct {
RequiredField *string `protobuf:"bytes,5,req" json:"RequiredField,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
newTest := &pb.Test{}
err = proto.Unmarshal(data, newTest)
if err != nil {
log.Fatal("unmarshaling error: ", err)
func (this *Test_OptionalGroup) Reset() { *this = Test_OptionalGroup{} }
func (this *Test_OptionalGroup) String() string { return proto.CompactTextString(this) }
func (this *Test_OptionalGroup) GetRequiredField() string {
if this != nil && this.RequiredField != nil {
return *this.RequiredField
}
return ""
}
// Now test and newTest contain the same data.
if test.GetLabel() != newTest.GetLabel() {
log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel())
func init() {
proto.RegisterEnum("example.FOO", FOO_name, FOO_value)
}
// Use a type switch to determine which oneof was set.
switch u := test.Union.(type) {
case *pb.Test_Number: // u.Number contains the number.
case *pb.Test_Name: // u.Name contains the string.
To create and play with a Test object:
package main
import (
"log"
"code.google.com/p/gogoprotobuf/proto"
"./example.pb"
)
func main() {
test := &example.Test{
Label: proto.String("hello"),
Type: proto.Int32(17),
Optionalgroup: &example.Test_OptionalGroup{
RequiredField: proto.String("good bye"),
},
}
data, err := proto.Marshal(test)
if err != nil {
log.Fatal("marshaling error: ", err)
}
newTest := new(example.Test)
err = proto.Unmarshal(data, newTest)
if err != nil {
log.Fatal("unmarshaling error: ", err)
}
// Now test and newTest contain the same data.
if test.GetLabel() != newTest.GetLabel() {
log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel())
}
// etc.
}
// etc.
}
*/
package proto
@@ -262,7 +203,6 @@ import (
"fmt"
"log"
"reflect"
"sort"
"strconv"
"sync"
)
@@ -383,7 +323,9 @@ func Float64(v float64) *float64 {
// Uint32 is a helper routine that allocates a new uint32 value
// to store v and returns a pointer to it.
func Uint32(v uint32) *uint32 {
return &v
p := new(uint32)
*p = v
return p
}
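
Why these helpers exist at all (a sketch, not this package's code): optional scalars are pointers, so nil distinguishes "unset" from an explicit zero, and a helper is the convenient way to take the address of a literal value.

```go
package main

import "fmt"

// Int32 mirrors the proto.Int32/proto.String helper pattern.
func Int32(v int32) *int32 { return &v }

func main() {
	type Msg struct{ Count *int32 }
	unset := Msg{}
	zero := Msg{Count: Int32(0)} // explicitly set to 0; distinct from unset
	fmt.Println(unset.Count == nil, *zero.Count) // true 0
}
```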
// Uint64 is a helper routine that allocates a new uint64 value
@@ -437,13 +379,13 @@ func UnmarshalJSONEnum(m map[string]int32, data []byte, enumName string) (int32,
// DebugPrint dumps the encoded data in b in a debugging format with a header
// including the string s. Used in testing but made available for general debugging.
func (p *Buffer) DebugPrint(s string, b []byte) {
func (o *Buffer) DebugPrint(s string, b []byte) {
var u uint64
obuf := p.buf
index := p.index
p.buf = b
p.index = 0
obuf := o.buf
index := o.index
o.buf = b
o.index = 0
depth := 0
fmt.Printf("\n--- %s ---\n", s)
@@ -454,12 +396,12 @@ out:
fmt.Print(" ")
}
index := p.index
if index == len(p.buf) {
index := o.index
if index == len(o.buf) {
break
}
op, err := p.DecodeVarint()
op, err := o.DecodeVarint()
if err != nil {
fmt.Printf("%3d: fetching op err %v\n", index, err)
break out
@@ -476,7 +418,7 @@ out:
case WireBytes:
var r []byte
r, err = p.DecodeRawBytes(false)
r, err = o.DecodeRawBytes(false)
if err != nil {
break out
}
@@ -497,7 +439,7 @@ out:
fmt.Printf("\n")
case WireFixed32:
u, err = p.DecodeFixed32()
u, err = o.DecodeFixed32()
if err != nil {
fmt.Printf("%3d: t=%3d fix32 err %v\n", index, tag, err)
break out
@@ -505,15 +447,16 @@ out:
fmt.Printf("%3d: t=%3d fix32 %d\n", index, tag, u)
case WireFixed64:
u, err = p.DecodeFixed64()
u, err = o.DecodeFixed64()
if err != nil {
fmt.Printf("%3d: t=%3d fix64 err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d fix64 %d\n", index, tag, u)
break
case WireVarint:
u, err = p.DecodeVarint()
u, err = o.DecodeVarint()
if err != nil {
fmt.Printf("%3d: t=%3d varint err %v\n", index, tag, err)
break out
@@ -521,22 +464,30 @@ out:
fmt.Printf("%3d: t=%3d varint %d\n", index, tag, u)
case WireStartGroup:
if err != nil {
fmt.Printf("%3d: t=%3d start err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d start\n", index, tag)
depth++
case WireEndGroup:
depth--
if err != nil {
fmt.Printf("%3d: t=%3d end err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d end\n", index, tag)
}
}
if depth != 0 {
fmt.Printf("%3d: start-end not balanced %d\n", p.index, depth)
fmt.Printf("%3d: start-end not balanced %d\n", o.index, depth)
}
fmt.Printf("\n")
p.buf = obuf
p.index = index
o.buf = obuf
o.index = index
}
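
A stripped-down analogue of the DebugPrint loop above (the input bytes are assumed; encoding/binary's Uvarint reads the same base-128 varints): each key varint splits into a field number and a wire type, and the wire type selects how the following bytes are consumed.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// field 1 = varint 150, field 2 = bytes "hi"
	b := []byte{0x08, 0x96, 0x01, 0x12, 0x02, 0x68, 0x69}
	for i := 0; i < len(b); {
		key, n := binary.Uvarint(b[i:])
		i += n
		tag, wire := key>>3, key&7
		fmt.Printf("field %d, wire type %d\n", tag, wire)
		switch wire {
		case 0: // varint
			val, vn := binary.Uvarint(b[i:])
			i += vn
			fmt.Println("  varint:", val)
		case 2: // length-delimited
			l, ln := binary.Uvarint(b[i:])
			i += ln
			fmt.Printf("  bytes: %q\n", b[i:i+int(l)])
			i += int(l)
		}
	}
}
```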
// SetDefaults sets unset protocol buffer fields to their default values.
@@ -650,15 +601,13 @@ func setDefaults(v reflect.Value, recur, zeros bool) {
for _, ni := range dm.nested {
f := v.Field(ni)
// f is *T or []*T or map[T]*T
switch f.Kind() {
case reflect.Ptr:
if f.IsNil() {
continue
}
if f.IsNil() {
continue
}
// f is *T or []*T
if f.Kind() == reflect.Ptr {
setDefaults(f, recur, zeros)
case reflect.Slice:
} else {
for i := 0; i < f.Len(); i++ {
e := f.Index(i)
if e.IsNil() {
@@ -666,15 +615,6 @@ func setDefaults(v reflect.Value, recur, zeros bool) {
}
setDefaults(e, recur, zeros)
}
case reflect.Map:
for _, k := range f.MapKeys() {
e := f.MapIndex(k)
if e.IsNil() {
continue
}
setDefaults(e, recur, zeros)
}
}
}
}
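
A sketch of the traversal shape setDefaults has after this change: recurse into *T and []*T message fields only (the reflect.Map branch is removed in this version). fillNested and the sample types are illustrative, not the package's API.

```go
package main

import (
	"fmt"
	"reflect"
)

// fillNested walks a struct value and recurses into *T and []*T
// message fields, mirroring the traversal above.
func fillNested(v reflect.Value, visit func(reflect.Value)) {
	for i := 0; i < v.NumField(); i++ {
		f := v.Field(i)
		switch f.Kind() {
		case reflect.Ptr:
			if !f.IsNil() && f.Elem().Kind() == reflect.Struct {
				visit(f)
				fillNested(f.Elem(), visit)
			}
		case reflect.Slice:
			for j := 0; j < f.Len(); j++ {
				e := f.Index(j)
				if e.Kind() == reflect.Ptr && !e.IsNil() {
					visit(e)
					fillNested(e.Elem(), visit)
				}
			}
		}
	}
}

func main() {
	type Inner struct{ X *int32 }
	type Outer struct {
		A *Inner
		B []*Inner
	}
	o := &Outer{A: &Inner{}, B: []*Inner{{}, {}}}
	count := 0
	fillNested(reflect.ValueOf(o).Elem(), func(reflect.Value) { count++ })
	fmt.Println(count) // 3 nested messages visited
}
```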
@@ -700,6 +640,10 @@ type scalarField struct {
value interface{} // the proto-declared default value, or nil
}
func ptrToStruct(t reflect.Type) bool {
return t.Kind() == reflect.Ptr && t.Elem().Kind() == reflect.Struct
}
// t is a struct type.
func buildDefaultMessage(t reflect.Type) (dm defaultMessage) {
sprop := GetProperties(t)
@@ -711,173 +655,86 @@ func buildDefaultMessage(t reflect.Type) (dm defaultMessage) {
}
ft := t.Field(fi).Type
sf, nested, err := fieldDefault(ft, prop)
switch {
case err != nil:
log.Print(err)
case nested:
// nested messages
if ptrToStruct(ft) || (ft.Kind() == reflect.Slice && ptrToStruct(ft.Elem())) {
dm.nested = append(dm.nested, fi)
case sf != nil:
sf.index = fi
dm.scalars = append(dm.scalars, *sf)
continue
}
sf := scalarField{
index: fi,
kind: ft.Elem().Kind(),
}
// scalar fields without defaults
if prop.Default == "" {
dm.scalars = append(dm.scalars, sf)
continue
}
// a scalar field: either *T or []byte
switch ft.Elem().Kind() {
case reflect.Bool:
x, err := strconv.ParseBool(prop.Default)
if err != nil {
log.Printf("proto: bad default bool %q: %v", prop.Default, err)
continue
}
sf.value = x
case reflect.Float32:
x, err := strconv.ParseFloat(prop.Default, 32)
if err != nil {
log.Printf("proto: bad default float32 %q: %v", prop.Default, err)
continue
}
sf.value = float32(x)
case reflect.Float64:
x, err := strconv.ParseFloat(prop.Default, 64)
if err != nil {
log.Printf("proto: bad default float64 %q: %v", prop.Default, err)
continue
}
sf.value = x
case reflect.Int32:
x, err := strconv.ParseInt(prop.Default, 10, 32)
if err != nil {
log.Printf("proto: bad default int32 %q: %v", prop.Default, err)
continue
}
sf.value = int32(x)
case reflect.Int64:
x, err := strconv.ParseInt(prop.Default, 10, 64)
if err != nil {
log.Printf("proto: bad default int64 %q: %v", prop.Default, err)
continue
}
sf.value = x
case reflect.String:
sf.value = prop.Default
case reflect.Uint8:
// []byte (not *uint8)
sf.value = []byte(prop.Default)
case reflect.Uint32:
x, err := strconv.ParseUint(prop.Default, 10, 32)
if err != nil {
log.Printf("proto: bad default uint32 %q: %v", prop.Default, err)
continue
}
sf.value = uint32(x)
case reflect.Uint64:
x, err := strconv.ParseUint(prop.Default, 10, 64)
if err != nil {
log.Printf("proto: bad default uint64 %q: %v", prop.Default, err)
continue
}
sf.value = x
default:
log.Printf("proto: unhandled def kind %v", ft.Elem().Kind())
continue
}
dm.scalars = append(dm.scalars, sf)
}
return dm
}
// fieldDefault returns the scalarField for field type ft.
// sf will be nil if the field cannot have a default.
// nestedMessage will be true if this is a nested message.
// Note that sf.index is not set on return.
func fieldDefault(ft reflect.Type, prop *Properties) (sf *scalarField, nestedMessage bool, err error) {
var canHaveDefault bool
switch ft.Kind() {
case reflect.Ptr:
if ft.Elem().Kind() == reflect.Struct {
nestedMessage = true
} else {
canHaveDefault = true // proto2 scalar field
}
case reflect.Slice:
switch ft.Elem().Kind() {
case reflect.Ptr:
nestedMessage = true // repeated message
case reflect.Uint8:
canHaveDefault = true // bytes field
}
case reflect.Map:
if ft.Elem().Kind() == reflect.Ptr {
nestedMessage = true // map with message values
}
}
if !canHaveDefault {
if nestedMessage {
return nil, true, nil
}
return nil, false, nil
}
// We now know that ft is a pointer or slice.
sf = &scalarField{kind: ft.Elem().Kind()}
// scalar fields without defaults
if !prop.HasDefault {
return sf, false, nil
}
// a scalar field: either *T or []byte
switch ft.Elem().Kind() {
case reflect.Bool:
x, err := strconv.ParseBool(prop.Default)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default bool %q: %v", prop.Default, err)
}
sf.value = x
case reflect.Float32:
x, err := strconv.ParseFloat(prop.Default, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default float32 %q: %v", prop.Default, err)
}
sf.value = float32(x)
case reflect.Float64:
x, err := strconv.ParseFloat(prop.Default, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default float64 %q: %v", prop.Default, err)
}
sf.value = x
case reflect.Int32:
x, err := strconv.ParseInt(prop.Default, 10, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default int32 %q: %v", prop.Default, err)
}
sf.value = int32(x)
case reflect.Int64:
x, err := strconv.ParseInt(prop.Default, 10, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default int64 %q: %v", prop.Default, err)
}
sf.value = x
case reflect.String:
sf.value = prop.Default
case reflect.Uint8:
// []byte (not *uint8)
sf.value = []byte(prop.Default)
case reflect.Uint32:
x, err := strconv.ParseUint(prop.Default, 10, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default uint32 %q: %v", prop.Default, err)
}
sf.value = uint32(x)
case reflect.Uint64:
x, err := strconv.ParseUint(prop.Default, 10, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default uint64 %q: %v", prop.Default, err)
}
sf.value = x
default:
return nil, false, fmt.Errorf("proto: unhandled def kind %v", ft.Elem().Kind())
}
return sf, false, nil
}
// Map fields may have key types of non-float scalars, strings and enums.
// The easiest way to sort them in some deterministic order is to use fmt.
// If this turns out to be inefficient we can always consider other options,
// such as doing a Schwartzian transform.
func mapKeys(vs []reflect.Value) sort.Interface {
s := mapKeySorter{
vs: vs,
// default Less function: textual comparison
less: func(a, b reflect.Value) bool {
return fmt.Sprint(a.Interface()) < fmt.Sprint(b.Interface())
},
}
// Type specialization per https://developers.google.com/protocol-buffers/docs/proto#maps;
// numeric keys are sorted numerically.
if len(vs) == 0 {
return s
}
switch vs[0].Kind() {
case reflect.Int32, reflect.Int64:
s.less = func(a, b reflect.Value) bool { return a.Int() < b.Int() }
case reflect.Uint32, reflect.Uint64:
s.less = func(a, b reflect.Value) bool { return a.Uint() < b.Uint() }
}
return s
}
type mapKeySorter struct {
vs []reflect.Value
less func(a, b reflect.Value) bool
}
func (s mapKeySorter) Len() int { return len(s.vs) }
func (s mapKeySorter) Swap(i, j int) { s.vs[i], s.vs[j] = s.vs[j], s.vs[i] }
func (s mapKeySorter) Less(i, j int) bool {
return s.less(s.vs[i], s.vs[j])
}
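
The same deterministic-ordering idea in a self-contained form (a sketch using sort.Slice rather than the mapKeySorter type): collect the reflect keys, sort numerically, then iterate in that order.

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

func main() {
	// Deterministic iteration order for text/wire output, as mapKeys
	// arranges for integer-keyed maps.
	m := map[int32]string{3: "c", 1: "a", 2: "b"}
	keys := reflect.ValueOf(m).MapKeys()
	sort.Slice(keys, func(i, j int) bool { return keys[i].Int() < keys[j].Int() })
	for _, k := range keys {
		fmt.Println(k.Int(), m[int32(k.Int())]) // 1 a, 2 b, 3 c
	}
}
```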
// isProto3Zero reports whether v is a zero proto3 value.
func isProto3Zero(v reflect.Value) bool {
switch v.Kind() {
case reflect.Bool:
return !v.Bool()
case reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Uint32, reflect.Uint64:
return v.Uint() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.String:
return v.String() == ""
}
return false
}

View File

@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -36,19 +36,16 @@ package proto
*/
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"reflect"
"sort"
)
// errNoMessageTypeID occurs when a protocol buffer does not have a message type ID.
// ErrNoMessageTypeId occurs when a protocol buffer does not have a message type ID.
// A message type ID is required for storing a protocol buffer in a message set.
var errNoMessageTypeID = errors.New("proto does not have a message type ID")
var ErrNoMessageTypeId = errors.New("proto does not have a message type ID")
// The first two types (_MessageSet_Item and messageSet)
// The first two types (_MessageSet_Item and MessageSet)
// model what the protocol compiler produces for the following protocol message:
// message MessageSet {
// repeated group Item = 1 {
@@ -58,20 +55,27 @@ var errNoMessageTypeID = errors.New("proto does not have a message type ID")
// }
// That is the MessageSet wire format. We can't use a proto to generate these
// because that would introduce a circular dependency between it and this package.
//
// When a proto1 proto has a field that looks like:
// optional message<MessageSet> info = 3;
// the protocol compiler produces a field in the generated struct that looks like:
// Info *_proto_.MessageSet `protobuf:"bytes,3,opt,name=info"`
// The package is automatically inserted so there is no need for that proto file to
// import this package.
type _MessageSet_Item struct {
TypeId *int32 `protobuf:"varint,2,req,name=type_id"`
Message []byte `protobuf:"bytes,3,req,name=message"`
}
type messageSet struct {
type MessageSet struct {
Item []*_MessageSet_Item `protobuf:"group,1,rep"`
XXX_unrecognized []byte
// TODO: caching?
}
// Make sure messageSet is a Message.
var _ Message = (*messageSet)(nil)
// Make sure MessageSet is a Message.
var _ Message = (*MessageSet)(nil)
// messageTypeIder is an interface satisfied by a protocol buffer type
// that may be stored in a MessageSet.
@@ -79,7 +83,7 @@ type messageTypeIder interface {
MessageTypeId() int32
}
func (ms *messageSet) find(pb Message) *_MessageSet_Item {
func (ms *MessageSet) find(pb Message) *_MessageSet_Item {
mti, ok := pb.(messageTypeIder)
if !ok {
return nil
@@ -93,24 +97,24 @@ func (ms *messageSet) find(pb Message) *_MessageSet_Item {
return nil
}
func (ms *messageSet) Has(pb Message) bool {
func (ms *MessageSet) Has(pb Message) bool {
if ms.find(pb) != nil {
return true
}
return false
}
func (ms *messageSet) Unmarshal(pb Message) error {
func (ms *MessageSet) Unmarshal(pb Message) error {
if item := ms.find(pb); item != nil {
return Unmarshal(item.Message, pb)
}
if _, ok := pb.(messageTypeIder); !ok {
return errNoMessageTypeID
return ErrNoMessageTypeId
}
return nil // TODO: return error instead?
}
func (ms *messageSet) Marshal(pb Message) error {
func (ms *MessageSet) Marshal(pb Message) error {
msg, err := Marshal(pb)
if err != nil {
return err
@@ -123,7 +127,7 @@ func (ms *messageSet) Marshal(pb Message) error {
mti, ok := pb.(messageTypeIder)
if !ok {
return errNoMessageTypeID
return ErrWrongType // TODO: custom error?
}
mtid := mti.MessageTypeId()
@@ -134,9 +138,9 @@ func (ms *messageSet) Marshal(pb Message) error {
return nil
}
func (ms *messageSet) Reset() { *ms = messageSet{} }
func (ms *messageSet) String() string { return CompactTextString(ms) }
func (*messageSet) ProtoMessage() {}
func (ms *MessageSet) Reset() { *ms = MessageSet{} }
func (ms *MessageSet) String() string { return CompactTextString(ms) }
func (*MessageSet) ProtoMessage() {}
// Support for the message_set_wire_format message option.
@@ -162,7 +166,7 @@ func MarshalMessageSet(m map[int32]Extension) ([]byte, error) {
}
sort.Ints(ids)
ms := &messageSet{Item: make([]*_MessageSet_Item, 0, len(m))}
ms := &MessageSet{Item: make([]*_MessageSet_Item, 0, len(m))}
for _, id := range ids {
e := m[int32(id)]
// Remove the wire type and field number varint, as well as the length varint.
@@ -179,89 +183,21 @@ func MarshalMessageSet(m map[int32]Extension) ([]byte, error) {
// UnmarshalMessageSet decodes the extension map encoded in buf in the message set wire format.
// It is called by generated Unmarshal methods on protocol buffer messages with the message_set_wire_format option.
func UnmarshalMessageSet(buf []byte, m map[int32]Extension) error {
ms := new(messageSet)
ms := new(MessageSet)
if err := Unmarshal(buf, ms); err != nil {
return err
}
for _, item := range ms.Item {
id := *item.TypeId
msg := item.Message
// restore wire type and field number varint, plus length varint.
b := EncodeVarint(uint64(*item.TypeId)<<3 | WireBytes)
b = append(b, EncodeVarint(uint64(len(item.Message)))...)
b = append(b, item.Message...)
// Restore wire type and field number varint, plus length varint.
// Be careful to preserve duplicate items.
b := EncodeVarint(uint64(id)<<3 | WireBytes)
if ext, ok := m[id]; ok {
// Existing data; rip off the tag and length varint
// so we join the new data correctly.
// We can assume that ext.enc is set because we are unmarshaling.
o := ext.enc[len(b):] // skip wire type and field number
_, n := DecodeVarint(o) // calculate length of length varint
o = o[n:] // skip length varint
msg = append(o, msg...) // join old data and new data
}
b = append(b, EncodeVarint(uint64(len(msg)))...)
b = append(b, msg...)
m[id] = Extension{enc: b}
m[*item.TypeId] = Extension{enc: b}
}
return nil
}
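
Hand-rolled for illustration: one MessageSet item in the group wire format described above (start group, type_id as field 2, message bytes as field 3, end group). `appendVarint`, the type ID, and the payload are assumptions.

```go
package main

import "fmt"

func appendVarint(buf []byte, x uint64) []byte {
	for x >= 0x80 {
		buf = append(buf, byte(x)|0x80)
		x >>= 7
	}
	return append(buf, byte(x))
}

func main() {
	// One MessageSet item: group 1 { type_id = 2; message = 3 }.
	typeID, payload := uint64(12345), []byte{0x0a, 0x01, 0x41} // hypothetical inner message
	buf := appendVarint(nil, 1<<3|3) // field 1, wire type 3: start group
	buf = appendVarint(buf, 2<<3|0)  // type_id, varint
	buf = appendVarint(buf, typeID)
	buf = appendVarint(buf, 3<<3|2) // message, length-delimited
	buf = appendVarint(buf, uint64(len(payload)))
	buf = append(buf, payload...)
	buf = appendVarint(buf, 1<<3|4) // field 1, wire type 4: end group
	fmt.Printf("%% x\n", buf)       // 0b 10 b9 60 1a 03 0a 01 41 0c
}
```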
// MarshalMessageSetJSON encodes the extension map represented by m in JSON format.
// It is called by generated MarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
func MarshalMessageSetJSON(m map[int32]Extension) ([]byte, error) {
var b bytes.Buffer
b.WriteByte('{')
// Process the map in key order for deterministic output.
ids := make([]int32, 0, len(m))
for id := range m {
ids = append(ids, id)
}
sort.Sort(int32Slice(ids)) // int32Slice defined in text.go
for i, id := range ids {
ext := m[id]
if i > 0 {
b.WriteByte(',')
}
msd, ok := messageSetMap[id]
if !ok {
// Unknown type; we can't render it, so skip it.
continue
}
fmt.Fprintf(&b, `"[%s]":`, msd.name)
x := ext.value
if x == nil {
x = reflect.New(msd.t.Elem()).Interface()
if err := Unmarshal(ext.enc, x.(Message)); err != nil {
return nil, err
}
}
d, err := json.Marshal(x)
if err != nil {
return nil, err
}
b.Write(d)
}
b.WriteByte('}')
return b.Bytes(), nil
}
// UnmarshalMessageSetJSON decodes the extension map encoded in buf in JSON format.
// It is called by generated UnmarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
func UnmarshalMessageSetJSON(buf []byte, m map[int32]Extension) error {
// Common-case fast path.
if len(buf) == 0 || bytes.Equal(buf, []byte("{}")) {
return nil
}
// This is fairly tricky, and it's not clear that it is needed.
return errors.New("TODO: UnmarshalMessageSetJSON not yet implemented")
}
// A global registry of types that can be used in a MessageSet.
var messageSetMap = make(map[int32]messageSetDesc)
@@ -272,9 +208,9 @@ type messageSetDesc struct {
}
// RegisterMessageSetType is called from the generated code.
func RegisterMessageSetType(m Message, fieldNum int32, name string) {
messageSetMap[fieldNum] = messageSetDesc{
t: reflect.TypeOf(m),
func RegisterMessageSetType(i messageTypeIder, name string) {
messageSetMap[i.MessageTypeId()] = messageSetDesc{
t: reflect.TypeOf(i),
name: name,
}
}

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -114,11 +114,6 @@ func structPointer_Bool(p structPointer, f field) **bool {
return structPointer_ifield(p, f).(**bool)
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return structPointer_ifield(p, f).(*bool)
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return structPointer_ifield(p, f).(*[]bool)
@@ -129,11 +124,6 @@ func structPointer_String(p structPointer, f field) **string {
return structPointer_ifield(p, f).(**string)
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return structPointer_ifield(p, f).(*string)
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return structPointer_ifield(p, f).(*[]string)
@@ -144,11 +134,6 @@ func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return structPointer_ifield(p, f).(*map[int32]Extension)
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return structPointer_field(p, f).Addr()
}
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
structPointer_field(p, f).Set(q.v)
@@ -250,49 +235,6 @@ func structPointer_Word32(p structPointer, f field) word32 {
return word32{structPointer_field(p, f)}
}
// A word32Val represents a field of type int32, uint32, float32, or enum.
// That is, v.Type() is int32, uint32, float32, or enum and v is assignable.
type word32Val struct {
v reflect.Value
}
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
switch p.v.Type() {
case int32Type:
p.v.SetInt(int64(x))
return
case uint32Type:
p.v.SetUint(uint64(x))
return
case float32Type:
p.v.SetFloat(float64(math.Float32frombits(x)))
return
}
// must be enum
p.v.SetInt(int64(int32(x)))
}
// Get gets the bits pointed at by p, as a uint32.
func word32Val_Get(p word32Val) uint32 {
elem := p.v
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32Val returns a reference to a int32, uint32, float32, or enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val{structPointer_field(p, f)}
}
// A word32Slice is a slice of 32-bit values.
// That is, v.Type() is []int32, []uint32, []float32, or []enum.
type word32Slice struct {
@@ -397,43 +339,6 @@ func structPointer_Word64(p structPointer, f field) word64 {
return word64{structPointer_field(p, f)}
}
// word64Val is like word32Val but for 64-bit values.
type word64Val struct {
v reflect.Value
}
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
switch p.v.Type() {
case int64Type:
p.v.SetInt(int64(x))
return
case uint64Type:
p.v.SetUint(x)
return
case float64Type:
p.v.SetFloat(math.Float64frombits(x))
return
}
panic("unreachable")
}
func word64Val_Get(p word64Val) uint64 {
elem := p.v
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return elem.Uint()
case reflect.Float64:
return math.Float64bits(elem.Float())
}
panic("unreachable")
}
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val{structPointer_field(p, f)}
}
type word64Slice struct {
v reflect.Value
}

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -100,11 +100,6 @@ func structPointer_Bool(p structPointer, f field) **bool {
return (**bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return (*bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return (*[]bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
@@ -115,11 +110,6 @@ func structPointer_String(p structPointer, f field) **string {
return (**string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return (*string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return (*[]string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
@@ -130,11 +120,6 @@ func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return (*map[int32]Extension)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return reflect.NewAt(typ, unsafe.Pointer(uintptr(p)+uintptr(f)))
}
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
*(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f))) = q
@@ -185,24 +170,6 @@ func structPointer_Word32(p structPointer, f field) word32 {
return word32((**uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// A word32Val is the address of a 32-bit value field.
type word32Val *uint32
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
*p = x
}
// Get gets the value pointed at by p.
func word32Val_Get(p word32Val) uint32 {
return *p
}
// Word32Val returns the address of a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val((*uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// A word32Slice is a slice of 32-bit values.
type word32Slice []uint32
@@ -239,21 +206,6 @@ func structPointer_Word64(p structPointer, f field) word64 {
return word64((**uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// word64Val is like word32Val but for 64-bit values.
type word64Val *uint64
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
*p = x
}
func word64Val_Get(p word64Val) uint64 {
return *p
}
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val((*uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// word64Slice is like word32Slice but for 64-bit values.
type word64Slice []uint64
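
The unsafe accessors in this file all reduce to base-address-plus-field-offset arithmetic. A self-contained sketch of that computation (msg is a stand-in type, not part of the package):

```go
package main

import (
	"fmt"
	"unsafe"
)

type msg struct {
	a int32
	b *uint64
}

func main() {
	// A "field" is just the uintptr offset of a member within the struct;
	// the accessor rebuilds the typed pointer from base + offset.
	m := &msg{}
	off := unsafe.Offsetof(m.b)
	p := (**uint64)(unsafe.Pointer(uintptr(unsafe.Pointer(m)) + off))
	v := uint64(42)
	*p = &v
	fmt.Println(*m.b) // 42
}
```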

View File

@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -87,6 +87,16 @@ func appendStructPointer(base structPointer, f field, typ reflect.Type) structPo
return structPointer(unsafe.Pointer(uintptr(unsafe.Pointer(bas)) + uintptr(uintptr(newLen-1)*size)))
}
// RefBool returns a *bool field in the struct.
func structPointer_RefBool(p structPointer, f field) *bool {
return (*bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// RefString returns the address of a string field in the struct.
func structPointer_RefString(p structPointer, f field) *string {
return (*string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
func structPointer_FieldPointer(p structPointer, f field) structPointer {
return structPointer(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
@@ -106,3 +116,51 @@ func structPointer_Add(p structPointer, size field) structPointer {
func structPointer_Len(p structPointer, f field) int {
return len(*(*[]interface{})(unsafe.Pointer(structPointer_GetRefStructPointer(p, f))))
}
// refWord32 is the address of a 32-bit value field.
type refWord32 *uint32
func refWord32_IsNil(p refWord32) bool {
return p == nil
}
func refWord32_Set(p refWord32, o *Buffer, x uint32) {
if len(o.uint32s) == 0 {
o.uint32s = make([]uint32, uint32PoolSize)
}
o.uint32s[0] = x
*p = o.uint32s[0]
o.uint32s = o.uint32s[1:]
}
func refWord32_Get(p refWord32) uint32 {
return *p
}
func structPointer_RefWord32(p structPointer, f field) refWord32 {
return refWord32((*uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// refWord64 is like refWord32 but for 64-bit values.
type refWord64 *uint64
func refWord64_Set(p refWord64, o *Buffer, x uint64) {
if len(o.uint64s) == 0 {
o.uint64s = make([]uint64, uint64PoolSize)
}
o.uint64s[0] = x
*p = o.uint64s[0]
o.uint64s = o.uint64s[1:]
}
func refWord64_IsNil(p refWord64) bool {
return p == nil
}
func refWord64_Get(p refWord64) uint64 {
return *p
}
func structPointer_RefWord64(p structPointer, f field) refWord64 {
return refWord64((*uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
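
The o.uint32s / o.uint64s slices are allocation pools: word-setters need a fresh word per call, so the Buffer carves words out of a preallocated chunk instead of allocating one at a time. A hedged sketch of the pattern, with assumed names (buffer, setWord32):

```
package main

import "fmt"

const uint32PoolSize = 8 // illustrative; the real size lives in the proto package

type buffer struct {
	uint32s []uint32
}

// setWord32 stores x behind *pp, carving the backing uint32 out of a
// pooled slice so each call does not trigger its own allocation.
func (o *buffer) setWord32(pp **uint32, x uint32) {
	if len(o.uint32s) == 0 {
		o.uint32s = make([]uint32, uint32PoolSize)
	}
	o.uint32s[0] = x
	*pp = &o.uint32s[0]
	o.uint32s = o.uint32s[1:]
}

func main() {
	var o buffer
	var field *uint32
	o.setWord32(&field, 42)
	fmt.Println(*field) // 42
}
```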

View File

@@ -1,7 +1,12 @@
// Extensions for Protocol Buffers to create more go like structures.
//
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -37,7 +42,6 @@ package proto
import (
"fmt"
"log"
"os"
"reflect"
"sort"
@@ -85,15 +89,6 @@ type decoder func(p *Buffer, prop *Properties, base structPointer) error
// A valueDecoder decodes a single integer in a particular encoding.
type valueDecoder func(o *Buffer) (x uint64, err error)
// A oneofMarshaler does the marshaling for all oneof fields in a message.
type oneofMarshaler func(Message, *Buffer) error
// A oneofUnmarshaler does the unmarshaling for a oneof field in a message.
type oneofUnmarshaler func(Message, int, int, *Buffer) (bool, error)
// A oneofSizer does the sizing for all oneof fields in a message.
type oneofSizer func(Message) int
// tagMap is an optimization over map[int]int for typical protocol buffer
// use-cases. Encoded protocol buffers are often in tag order with small tag
// numbers.
@@ -142,22 +137,6 @@ type StructProperties struct {
order []int // list of struct field numbers in tag order
unrecField field // field id of the XXX_unrecognized []byte field
extendable bool // is this an extendable proto
oneofMarshaler oneofMarshaler
oneofUnmarshaler oneofUnmarshaler
oneofSizer oneofSizer
stype reflect.Type
// OneofTypes contains information about the oneof fields in this message.
// It is keyed by the original name of a field.
OneofTypes map[string]*OneofProperties
}
// OneofProperties represents information about a specific field in a oneof.
type OneofProperties struct {
Type reflect.Type // pointer to generated struct type for this oneof field
Field int // struct field number of the containing oneof in the message
Prop *Properties
}
// Implement the sorting interface so we can sort the fields in tag order, as recommended by the spec.
@@ -171,21 +150,18 @@ func (sp *StructProperties) Swap(i, j int) { sp.order[i], sp.order[j] = sp.order
// Properties represents the protocol-specific behavior of a single struct field.
type Properties struct {
Name string // name of the field, for error messages
OrigName string // original name before protocol compiler (always set)
Wire string
WireType int
Tag int
Required bool
Optional bool
Repeated bool
Packed bool // relevant for repeated primitives only
Enum string // set for enum types only
proto3 bool // whether this is known to be a proto3 field; set for []byte only
oneof bool // whether this is a oneof field
Name string // name of the field, for error messages
OrigName string // original name before protocol compiler (always set)
Wire string
WireType int
Tag int
Required bool
Optional bool
Repeated bool
Packed bool // relevant for repeated primitives only
Enum string // set for enum types only
Default string // default value
HasDefault bool // whether an explicit default was provided
CustomType string
def_uint64 uint64
enc encoder
@@ -194,14 +170,12 @@ type Properties struct {
tagcode []byte // encoding of EncodeVarint((Tag<<3)|WireType)
tagbuf [8]byte
stype reflect.Type // set for struct types only
sstype reflect.Type // set for slices of structs types only
ctype reflect.Type // set for custom types only
sprop *StructProperties // set for struct types only
isMarshaler bool
isUnmarshaler bool
mtype reflect.Type // set for map types only
mkeyprop *Properties // set for map types only
mvalprop *Properties // set for map types only
size sizer
valSize valueSizer // set for bool and numeric types only
@@ -232,16 +206,10 @@ func (p *Properties) String() string {
if p.OrigName != p.Name {
s += ",name=" + p.OrigName
}
if p.proto3 {
s += ",proto3"
}
if p.oneof {
s += ",oneof"
}
if len(p.Enum) > 0 {
s += ",enum=" + p.Enum
}
if p.HasDefault {
if len(p.Default) > 0 {
s += ",def=" + p.Default
}
return s
@@ -312,18 +280,17 @@ func (p *Properties) Parse(s string) {
p.OrigName = f[5:]
case strings.HasPrefix(f, "enum="):
p.Enum = f[5:]
case f == "proto3":
p.proto3 = true
case f == "oneof":
p.oneof = true
case strings.HasPrefix(f, "def="):
p.HasDefault = true
p.Default = f[4:] // rest of string
if i+1 < len(fields) {
// Commas aren't escaped, and def is always last.
p.Default += "," + strings.Join(fields[i+1:], ",")
break
}
case strings.HasPrefix(f, "embedded="):
p.OrigName = strings.Split(f, "=")[1]
case strings.HasPrefix(f, "customtype="):
p.CustomType = strings.Split(f, "=")[1]
}
}
}
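
Parse walks the comma-separated protobuf struct tag: wire encoding, field number, req/opt/rep, then options such as name=, enum=, embedded=, customtype=, and def= — which, as the comment notes, swallows the remainder of the string because commas in defaults are not escaped. A usage sketch, assuming the gogo/protobuf import path vendored in this tree:

```
package main

import (
	"fmt"

	proto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
)

func main() {
	p := new(proto.Properties)
	// A tag like the generated ones below: encoding, field number,
	// optionality, then options; def= is always last.
	p.Parse("varint,2,opt,name=F_Int32,def=32")
	fmt.Println(p.Wire, p.Tag, p.Optional)           // varint 2 true
	fmt.Println(p.OrigName, p.HasDefault, p.Default) // F_Int32 true 32
}
```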
@@ -335,71 +302,41 @@ func logNoSliceEnc(t1, t2 reflect.Type) {
var protoMessageType = reflect.TypeOf((*Message)(nil)).Elem()
// Initialize the fields for encoding and decoding.
func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lockGetProp bool) {
func (p *Properties) setEncAndDec(typ reflect.Type, lockGetProp bool) {
p.enc = nil
p.dec = nil
p.size = nil
if len(p.CustomType) > 0 {
p.setCustomEncAndDec(typ)
p.setTag(lockGetProp)
return
}
switch t1 := typ; t1.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no coders for %v\n", t1)
// proto3 scalar types
case reflect.Bool:
p.enc = (*Buffer).enc_proto3_bool
p.dec = (*Buffer).dec_proto3_bool
p.size = size_proto3_bool
case reflect.Int32:
p.enc = (*Buffer).enc_proto3_int32
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_proto3_uint32
p.dec = (*Buffer).dec_proto3_int32 // can reuse
p.size = size_proto3_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_proto3_int64
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.Float32:
p.enc = (*Buffer).enc_proto3_uint32 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_uint32
case reflect.Float64:
p.enc = (*Buffer).enc_proto3_int64 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.String:
p.enc = (*Buffer).enc_proto3_string
p.dec = (*Buffer).dec_proto3_string
p.size = size_proto3_string
if !p.setNonNullableEncAndDec(t1) {
fmt.Fprintf(os.Stderr, "proto: no coders for %T\n", t1)
}
case reflect.Ptr:
switch t2 := t1.Elem(); t2.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no encoder function for %v -> %v\n", t1, t2)
fmt.Fprintf(os.Stderr, "proto: no encoder function for %T -> %T\n", t1, t2)
break
case reflect.Bool:
p.enc = (*Buffer).enc_bool
p.dec = (*Buffer).dec_bool
p.size = size_bool
case reflect.Int32:
case reflect.Int32, reflect.Uint32:
p.enc = (*Buffer).enc_int32
p.dec = (*Buffer).dec_int32
p.size = size_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_uint32
p.dec = (*Buffer).dec_int32 // can reuse
p.size = size_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_int64
p.dec = (*Buffer).dec_int64
p.size = size_int64
case reflect.Float32:
p.enc = (*Buffer).enc_uint32 // can just treat them as bits
p.enc = (*Buffer).enc_int32 // can just treat them as bits
p.dec = (*Buffer).dec_int32
p.size = size_uint32
p.size = size_int32
case reflect.Float64:
p.enc = (*Buffer).enc_int64 // can just treat them as bits
p.dec = (*Buffer).dec_int64
@@ -438,59 +375,48 @@ func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lock
}
p.dec = (*Buffer).dec_slice_bool
p.packedDec = (*Buffer).dec_slice_packed_bool
case reflect.Int32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int32
p.size = size_slice_packed_int32
} else {
p.enc = (*Buffer).enc_slice_int32
p.size = size_slice_int32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Uint32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Int64, reflect.Uint64:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
case reflect.Uint8:
p.enc = (*Buffer).enc_slice_byte
p.dec = (*Buffer).dec_slice_byte
p.size = size_slice_byte
// This is a []byte, which is either a bytes field,
// or the value of a map field. In the latter case,
// we always encode an empty []byte, so we should not
// use the proto3 enc/size funcs.
// f == nil iff this is the key/value of a map field.
if p.proto3 && f != nil {
p.enc = (*Buffer).enc_proto3_slice_byte
p.size = size_proto3_slice_byte
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
switch t2.Bits() {
case 32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int32
p.size = size_slice_packed_int32
} else {
p.enc = (*Buffer).enc_slice_int32
p.size = size_slice_int32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case 64:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
case 8:
if t2.Kind() == reflect.Uint8 {
p.enc = (*Buffer).enc_slice_byte
p.dec = (*Buffer).dec_slice_byte
p.size = size_slice_byte
}
default:
logNoSliceEnc(t1, t2)
break
}
case reflect.Float32, reflect.Float64:
switch t2.Bits() {
case 32:
// can just treat them as bits
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
p.enc = (*Buffer).enc_slice_packed_int32
p.size = size_slice_packed_int32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
p.enc = (*Buffer).enc_slice_int32
p.size = size_slice_int32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
@@ -542,26 +468,14 @@ func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lock
p.dec = (*Buffer).dec_slice_slice_byte
p.size = size_slice_slice_byte
}
case reflect.Struct:
p.setSliceOfNonPointerStructs(t1)
}
case reflect.Map:
p.enc = (*Buffer).enc_new_map
p.dec = (*Buffer).dec_new_map
p.size = size_new_map
p.mtype = t1
p.mkeyprop = &Properties{}
p.mkeyprop.init(reflect.PtrTo(p.mtype.Key()), "Key", f.Tag.Get("protobuf_key"), nil, lockGetProp)
p.mvalprop = &Properties{}
vtype := p.mtype.Elem()
if vtype.Kind() != reflect.Ptr && vtype.Kind() != reflect.Slice {
// The value type is not a message (*T) or bytes ([]byte),
// so we need encoders for the pointer to this type.
vtype = reflect.PtrTo(vtype)
}
p.mvalprop.init(vtype, "Value", f.Tag.Get("protobuf_val"), nil, lockGetProp)
}
p.setTag(lockGetProp)
}
func (p *Properties) setTag(lockGetProp bool) {
// precalculate tag code
wire := p.WireType
if p.Packed {
@@ -592,23 +506,11 @@ var (
// isMarshaler reports whether type t implements Marshaler.
func isMarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isMarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isMarshaler")
}
return t.Implements(marshalerType)
}
// isUnmarshaler reports whether type t implements Unmarshaler.
func isUnmarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isUnmarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isUnmarshaler")
}
return t.Implements(unmarshalerType)
}
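
Both checks rely on the generated Marshal/Unmarshal methods having pointer receivers, which is why a non-pointer argument is a programming error worth a panic. A self-contained illustration (Marshaler mirrors this package's interface; myTime is hypothetical):

```
package main

import (
	"fmt"
	"reflect"
)

// Marshaler mirrors the interface isMarshaler tests for.
type Marshaler interface {
	Marshal() ([]byte, error)
}

type myTime struct{ sec int64 }

// Pointer receiver: only *myTime implements Marshaler, which is why
// isMarshaler insists on being handed a pointer type.
func (t *myTime) Marshal() ([]byte, error) { return []byte(fmt.Sprint(t.sec)), nil }

var marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem()

func main() {
	fmt.Println(reflect.TypeOf(&myTime{}).Implements(marshalerType)) // true
	fmt.Println(reflect.TypeOf(myTime{}).Implements(marshalerType))  // false
}
```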
@@ -628,40 +530,23 @@ func (p *Properties) init(typ reflect.Type, name, tag string, f *reflect.StructF
return
}
p.Parse(tag)
p.setEncAndDec(typ, f, lockGetProp)
p.setEncAndDec(typ, lockGetProp)
}
var (
propertiesMu sync.RWMutex
mutex sync.Mutex
propertiesMap = make(map[reflect.Type]*StructProperties)
)
// GetProperties returns the list of properties for the type represented by t.
// t must represent a generated struct type of a protocol message.
func GetProperties(t reflect.Type) *StructProperties {
if t.Kind() != reflect.Struct {
panic("proto: type must have kind struct")
}
// Most calls to GetProperties in a long-running program will be
// retrieving details for types we have seen before.
propertiesMu.RLock()
sprop, ok := propertiesMap[t]
propertiesMu.RUnlock()
if ok {
if collectStats {
stats.Chit++
}
return sprop
}
propertiesMu.Lock()
sprop = getPropertiesLocked(t)
propertiesMu.Unlock()
mutex.Lock()
sprop := getPropertiesLocked(t)
mutex.Unlock()
return sprop
}
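
The replacement drops the RWMutex fast path and the cache-hit counters in favor of a single mutex around the map lookup — simpler, at the cost of serializing concurrent readers. The memoization shape in miniature (describe stands in for the real property scan):

```
package main

import (
	"fmt"
	"reflect"
	"sync"
)

var (
	mu    sync.Mutex
	cache = make(map[reflect.Type]string)
)

// describe memoizes a per-type computation under one mutex, the same
// shape as GetProperties/getPropertiesLocked after this change.
func describe(t reflect.Type) string {
	mu.Lock()
	defer mu.Unlock()
	if s, ok := cache[t]; ok {
		return s // repeat lookups for a known type skip the scan
	}
	s := t.Kind().String() + " " + t.Name()
	cache[t] = s
	return s
}

func main() {
	t := reflect.TypeOf(struct{ A int }{})
	fmt.Println(describe(t))
	fmt.Println(describe(t)) // second call hits the cache
}
```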
// getPropertiesLocked requires that propertiesMu is held.
// getPropertiesLocked requires that mutex is held.
func getPropertiesLocked(t reflect.Type) *StructProperties {
if prop, ok := propertiesMap[t]; ok {
if collectStats {
@@ -690,14 +575,19 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
p.init(f.Type, name, f.Tag.Get("protobuf"), &f, false)
if f.Name == "XXX_extensions" { // special case
p.enc = (*Buffer).enc_map
p.dec = nil // not needed
p.size = size_map
if len(f.Tag.Get("protobuf")) > 0 {
p.enc = (*Buffer).enc_ext_slice_byte
p.dec = nil // not needed
p.size = size_ext_slice_byte
} else {
p.enc = (*Buffer).enc_map
p.dec = nil // not needed
p.size = size_map
}
}
if f.Name == "XXX_unrecognized" { // special case
prop.unrecField = toField(&f)
}
oneof := f.Tag.Get("protobuf_oneof") != "" // special case
prop.Prop[i] = p
prop.order[i] = i
if debug {
@@ -707,7 +597,7 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
}
print("\n")
}
if p.enc == nil && !strings.HasPrefix(f.Name, "XXX_") && !oneof {
if p.enc == nil && !strings.HasPrefix(f.Name, "XXX_") {
fmt.Fprintln(os.Stderr, "proto: no encoder for", f.Name, f.Type.String(), "[GetProperties]")
}
}
@@ -715,41 +605,6 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
// Re-order prop.order.
sort.Sort(prop)
type oneofMessage interface {
XXX_OneofFuncs() (func(Message, *Buffer) error, func(Message, int, int, *Buffer) (bool, error), func(Message) int, []interface{})
}
if om, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(oneofMessage); ok {
var oots []interface{}
prop.oneofMarshaler, prop.oneofUnmarshaler, prop.oneofSizer, oots = om.XXX_OneofFuncs()
prop.stype = t
// Interpret oneof metadata.
prop.OneofTypes = make(map[string]*OneofProperties)
for _, oot := range oots {
oop := &OneofProperties{
Type: reflect.ValueOf(oot).Type(), // *T
Prop: new(Properties),
}
sft := oop.Type.Elem().Field(0)
oop.Prop.Name = sft.Name
oop.Prop.Parse(sft.Tag.Get("protobuf"))
// There will be exactly one interface field that
// this new value is assignable to.
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
if f.Type.Kind() != reflect.Interface {
continue
}
if !oop.Type.AssignableTo(f.Type) {
continue
}
oop.Field = i
break
}
prop.OneofTypes[oop.Prop.OrigName] = oop
}
}
// build required counts
// build tags
reqCount := 0
@@ -799,6 +654,7 @@ func getbase(pb Message) (t reflect.Type, b structPointer, err error) {
// The generated code will register the generated maps by calling RegisterEnum.
var enumValueMaps = make(map[string]map[string]int32)
var enumStringMaps = make(map[string]map[int32]string)
// RegisterEnum is called from the generated code to install the enum descriptor
// maps into the global table to aid parsing text format protocol buffers.
@@ -807,36 +663,8 @@ func RegisterEnum(typeName string, unusedNameMap map[int32]string, valueMap map[
panic("proto: duplicate enum registered: " + typeName)
}
enumValueMaps[typeName] = valueMap
}
// EnumValueMap returns the mapping from names to integers of the
// enum type enumType, or a nil if not found.
func EnumValueMap(enumType string) map[string]int32 {
return enumValueMaps[enumType]
}
// A registry of all linked message types.
// The string is a fully-qualified proto name ("pkg.Message").
var (
protoTypes = make(map[string]reflect.Type)
revProtoTypes = make(map[reflect.Type]string)
)
// RegisterType is called from generated code and maps from the fully qualified
// proto name to the type (pointer to struct) of the protocol buffer.
func RegisterType(x Message, name string) {
if _, ok := protoTypes[name]; ok {
// TODO: Some day, make this a panic.
log.Printf("proto: duplicate proto type registered: %s", name)
return
if _, ok := enumStringMaps[typeName]; ok {
panic("proto: duplicate enum registered: " + typeName)
}
t := reflect.TypeOf(x)
protoTypes[name] = t
revProtoTypes[t] = name
enumStringMaps[typeName] = unusedNameMap
}
// MessageName returns the fully-qualified proto name for the given message type.
func MessageName(x Message) string { return revProtoTypes[reflect.TypeOf(x)] }
// MessageType returns the message type (pointer to struct) for a named message.
func MessageType(name string) reflect.Type { return protoTypes[name] }

View File

@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -49,6 +49,49 @@ func (p *Properties) setCustomEncAndDec(typ reflect.Type) {
}
}
func (p *Properties) setNonNullableEncAndDec(typ reflect.Type) bool {
switch typ.Kind() {
case reflect.Bool:
p.enc = (*Buffer).enc_ref_bool
p.dec = (*Buffer).dec_ref_bool
p.size = size_ref_bool
case reflect.Int32, reflect.Uint32:
p.enc = (*Buffer).enc_ref_int32
p.dec = (*Buffer).dec_ref_int32
p.size = size_ref_int32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_ref_int64
p.dec = (*Buffer).dec_ref_int64
p.size = size_ref_int64
case reflect.Float32:
p.enc = (*Buffer).enc_ref_int32 // can just treat them as bits
p.dec = (*Buffer).dec_ref_int32
p.size = size_ref_int32
case reflect.Float64:
p.enc = (*Buffer).enc_ref_int64 // can just treat them as bits
p.dec = (*Buffer).dec_ref_int64
p.size = size_ref_int64
case reflect.String:
p.dec = (*Buffer).dec_ref_string
p.enc = (*Buffer).enc_ref_string
p.size = size_ref_string
case reflect.Struct:
p.stype = typ
p.isMarshaler = isMarshaler(typ)
p.isUnmarshaler = isUnmarshaler(typ)
if p.Wire == "bytes" {
p.enc = (*Buffer).enc_ref_struct_message
p.dec = (*Buffer).dec_ref_struct_message
p.size = size_ref_struct_message
} else {
fmt.Fprintf(os.Stderr, "proto: no coders for struct %T\n", typ)
}
default:
return false
}
return true
}
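
setNonNullableEncAndDec is the gogoprotobuf half of this switch: fields generated with (gogoproto.nullable) = false are plain values rather than pointers, so they take the enc_ref_*/dec_ref_* coders above. Illustrative field shapes, not generated code:

```
package main

import "fmt"

// With stock goprotobuf, optional scalars are pointers so "unset" is
// representable; gogoprotobuf's nullable=false option generates plain
// value fields, which enc_ref_string above serves.
type nullable struct {
	Name *string `protobuf:"bytes,1,opt,name=name"`
}

type nonNullable struct {
	Name string `protobuf:"bytes,1,opt,name=name"`
}

func main() {
	var n nullable
	fmt.Println(n.Name == nil) // unset is observable
	var r nonNullable
	fmt.Println(r.Name == "") // zero value stands in for unset
}
```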
func (p *Properties) setSliceOfNonPointerStructs(typ reflect.Type) {
t2 := typ.Elem()
p.sstype = typ

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -33,12 +33,10 @@ package proto_test
import (
"log"
"strings"
"testing"
. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
proto3pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/proto3_proto"
pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
pb "./testdata"
. "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
)
var messageWithExtension1 = &pb.MyMessage{Count: Int32(7)}
@@ -67,10 +65,8 @@ var SizeTests = []struct {
// Basic types.
{"bool", &pb.Defaults{F_Bool: Bool(true)}},
{"int32", &pb.Defaults{F_Int32: Int32(12)}},
{"negative int32", &pb.Defaults{F_Int32: Int32(-1)}},
{"small int64", &pb.Defaults{F_Int64: Int64(1)}},
{"big int64", &pb.Defaults{F_Int64: Int64(1 << 20)}},
{"negative int64", &pb.Defaults{F_Int64: Int64(-1)}},
{"fixed32", &pb.Defaults{F_Fixed32: Uint32(71)}},
{"fixed64", &pb.Defaults{F_Fixed64: Uint64(72)}},
{"uint32", &pb.Defaults{F_Uint32: Uint32(123)}},
@@ -87,7 +83,7 @@ var SizeTests = []struct {
{"empty repeated bool", &pb.MoreRepeated{Bools: []bool{}}},
{"repeated bool", &pb.MoreRepeated{Bools: []bool{false, true, true, false}}},
{"packed repeated bool", &pb.MoreRepeated{BoolsPacked: []bool{false, true, true, false, true, true, true}}},
{"repeated int32", &pb.MoreRepeated{Ints: []int32{1, 12203, 1729, -1}}},
{"repeated int32", &pb.MoreRepeated{Ints: []int32{1, 12203, 1729}}},
{"repeated int32 packed", &pb.MoreRepeated{IntsPacked: []int32{1, 12203, 1729}}},
{"repeated int64 packed", &pb.MoreRepeated{Int64SPacked: []int64{
// Need enough large numbers to verify that the header is counting the number of bytes
@@ -104,31 +100,6 @@ var SizeTests = []struct {
{"unrecognized", &pb.MoreRepeated{XXX_unrecognized: []byte{13<<3 | 0, 4}}},
{"extension (unencoded)", messageWithExtension1},
{"extension (encoded)", messageWithExtension3},
// proto3 message
{"proto3 empty", &proto3pb.Message{}},
{"proto3 bool", &proto3pb.Message{TrueScotsman: true}},
{"proto3 int64", &proto3pb.Message{ResultCount: 1}},
{"proto3 uint32", &proto3pb.Message{HeightInCm: 123}},
{"proto3 float", &proto3pb.Message{Score: 12.6}},
{"proto3 string", &proto3pb.Message{Name: "Snezana"}},
{"proto3 bytes", &proto3pb.Message{Data: []byte("wowsa")}},
{"proto3 bytes, empty", &proto3pb.Message{Data: []byte{}}},
{"proto3 enum", &proto3pb.Message{Hilarity: proto3pb.Message_PUNS}},
{"proto3 map field with empty bytes", &proto3pb.MessageWithMap{ByteMapping: map[bool][]byte{false: {}}}},
{"map field", &pb.MessageWithMap{NameMapping: map[int32]string{1: "Rob", 7: "Andrew"}}},
{"map field with message", &pb.MessageWithMap{MsgMapping: map[int64]*pb.FloatingPoint{0x7001: {F: Float64(2.0)}}}},
{"map field with bytes", &pb.MessageWithMap{ByteMapping: map[bool][]byte{true: []byte("this time for sure")}}},
{"map field with empty bytes", &pb.MessageWithMap{ByteMapping: map[bool][]byte{true: {}}}},
{"map field with big entry", &pb.MessageWithMap{NameMapping: map[int32]string{8: strings.Repeat("x", 125)}}},
{"map field with big key and val", &pb.MessageWithMap{StrToStr: map[string]string{strings.Repeat("x", 70): strings.Repeat("y", 70)}}},
{"map field with big numeric key", &pb.MessageWithMap{NameMapping: map[int32]string{0xf00d: "om nom nom"}}},
{"oneof not set", &pb.Communique{}},
{"oneof zero int32", &pb.Communique{Union: &pb.Communique_Number{Number: 0}}},
{"oneof int32", &pb.Communique{Union: &pb.Communique_Number{Number: 3}}},
{"oneof string", &pb.Communique{Union: &pb.Communique_Name{Name: "Rhythmic Fman"}}},
}
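
TestSize presumably verifies, for every table entry, that Size agrees with the length of the Marshal output. The core of that check as a sketch (import paths as vendored in this tree):

```
package main

import (
	"fmt"

	proto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
	pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
)

func main() {
	m := &pb.Defaults{F_Int32: proto.Int32(12)}
	b, err := proto.Marshal(m)
	if err != nil {
		panic(err)
	}
	fmt.Println(proto.Size(m) == len(b)) // true: Size must agree with Marshal
}
```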
func TestSize(t *testing.T) {

View File

@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -27,7 +27,6 @@
package proto
import (
"fmt"
"io"
)
@@ -80,7 +79,7 @@ func Skip(data []byte) (n int, err error) {
return index, nil
case 3:
for {
var innerWire uint64
var wire uint64
var start int = index
for shift := uint(0); ; shift += 7 {
if index >= l {
@@ -88,13 +87,13 @@ func Skip(data []byte) (n int, err error) {
}
b := data[index]
index++
innerWire |= (uint64(b) & 0x7F) << shift
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
innerWireType := int(innerWire & 0x7)
if innerWireType == 4 {
wireType := int(wire & 0x7)
if wireType == 4 {
break
}
next, err := Skip(data[start:])
@@ -110,7 +109,7 @@ func Skip(data []byte) (n int, err error) {
index += 4
return index, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
return 0, ErrWrongType
}
}
panic("unreachable")

View File

@@ -1,7 +1,7 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
# http://code.google.com/p/goprotobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
@@ -29,19 +29,16 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
include ../../Make.protobuf
all: regenerate
regenerate:
rm -f test.pb.go
make test.pb.go
protoc --gogo_out=. test.proto
# The following rules are just aids to development. Not needed for typical testing.
diff: regenerate
git diff test.pb.go
hg diff test.pb.go
restore:
cp test.pb.go.golden test.pb.go

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are

View File

@@ -22,7 +22,6 @@ It has these top-level messages:
OtherMessage
MyMessage
Ext
DefaultsMessage
MyMessageSet
Empty
MessageList
@@ -34,18 +33,16 @@ It has these top-level messages:
GroupOld
GroupNew
FloatingPoint
MessageWithMap
Communique
*/
package testdata
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import proto "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
import json "encoding/json"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
// Reference proto, json, and math imports to suppress error if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = &json.SyntaxError{}
var _ = math.Inf
type FOO int32
@@ -185,42 +182,6 @@ func (x *MyMessage_Color) UnmarshalJSON(data []byte) error {
return nil
}
type DefaultsMessage_DefaultsEnum int32
const (
DefaultsMessage_ZERO DefaultsMessage_DefaultsEnum = 0
DefaultsMessage_ONE DefaultsMessage_DefaultsEnum = 1
DefaultsMessage_TWO DefaultsMessage_DefaultsEnum = 2
)
var DefaultsMessage_DefaultsEnum_name = map[int32]string{
0: "ZERO",
1: "ONE",
2: "TWO",
}
var DefaultsMessage_DefaultsEnum_value = map[string]int32{
"ZERO": 0,
"ONE": 1,
"TWO": 2,
}
func (x DefaultsMessage_DefaultsEnum) Enum() *DefaultsMessage_DefaultsEnum {
p := new(DefaultsMessage_DefaultsEnum)
*p = x
return p
}
func (x DefaultsMessage_DefaultsEnum) String() string {
return proto.EnumName(DefaultsMessage_DefaultsEnum_name, int32(x))
}
func (x *DefaultsMessage_DefaultsEnum) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(DefaultsMessage_DefaultsEnum_value, data, "DefaultsMessage_DefaultsEnum")
if err != nil {
return err
}
*x = DefaultsMessage_DefaultsEnum(value)
return nil
}
type Defaults_Color int32
const (
@@ -304,8 +265,8 @@ func (m *GoEnum) GetFoo() FOO {
}
type GoTestField struct {
Label *string `protobuf:"bytes,1,req,name=Label" json:"Label,omitempty"`
Type *string `protobuf:"bytes,2,req,name=Type" json:"Type,omitempty"`
Label *string `protobuf:"bytes,1,req" json:"Label,omitempty"`
Type *string `protobuf:"bytes,2,req" json:"Type,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -329,13 +290,13 @@ func (m *GoTestField) GetType() string {
type GoTest struct {
// Some typical parameters
Kind *GoTest_KIND `protobuf:"varint,1,req,name=Kind,enum=testdata.GoTest_KIND" json:"Kind,omitempty"`
Table *string `protobuf:"bytes,2,opt,name=Table" json:"Table,omitempty"`
Param *int32 `protobuf:"varint,3,opt,name=Param" json:"Param,omitempty"`
Kind *GoTest_KIND `protobuf:"varint,1,req,enum=testdata.GoTest_KIND" json:"Kind,omitempty"`
Table *string `protobuf:"bytes,2,opt" json:"Table,omitempty"`
Param *int32 `protobuf:"varint,3,opt" json:"Param,omitempty"`
// Required, repeated and optional foreign fields.
RequiredField *GoTestField `protobuf:"bytes,4,req,name=RequiredField" json:"RequiredField,omitempty"`
RepeatedField []*GoTestField `protobuf:"bytes,5,rep,name=RepeatedField" json:"RepeatedField,omitempty"`
OptionalField *GoTestField `protobuf:"bytes,6,opt,name=OptionalField" json:"OptionalField,omitempty"`
RequiredField *GoTestField `protobuf:"bytes,4,req" json:"RequiredField,omitempty"`
RepeatedField []*GoTestField `protobuf:"bytes,5,rep" json:"RepeatedField,omitempty"`
OptionalField *GoTestField `protobuf:"bytes,6,opt" json:"OptionalField,omitempty"`
// Required fields of all basic types
F_BoolRequired *bool `protobuf:"varint,10,req,name=F_Bool_required" json:"F_Bool_required,omitempty"`
F_Int32Required *int32 `protobuf:"varint,11,req,name=F_Int32_required" json:"F_Int32_required,omitempty"`
@@ -936,7 +897,7 @@ func (m *GoTest) GetOptionalgroup() *GoTest_OptionalGroup {
// Required, repeated, and optional groups.
type GoTest_RequiredGroup struct {
RequiredField *string `protobuf:"bytes,71,req,name=RequiredField" json:"RequiredField,omitempty"`
RequiredField *string `protobuf:"bytes,71,req" json:"RequiredField,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -952,7 +913,7 @@ func (m *GoTest_RequiredGroup) GetRequiredField() string {
}
type GoTest_RepeatedGroup struct {
RequiredField *string `protobuf:"bytes,81,req,name=RequiredField" json:"RequiredField,omitempty"`
RequiredField *string `protobuf:"bytes,81,req" json:"RequiredField,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -968,7 +929,7 @@ func (m *GoTest_RepeatedGroup) GetRequiredField() string {
}
type GoTest_OptionalGroup struct {
RequiredField *string `protobuf:"bytes,91,req,name=RequiredField" json:"RequiredField,omitempty"`
RequiredField *string `protobuf:"bytes,91,req" json:"RequiredField,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -1111,7 +1072,6 @@ func (m *MaxTag) GetLastField() string {
type OldMessage struct {
Nested *OldMessage_Nested `protobuf:"bytes,1,opt,name=nested" json:"nested,omitempty"`
Num *int32 `protobuf:"varint,2,opt,name=num" json:"num,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -1126,13 +1086,6 @@ func (m *OldMessage) GetNested() *OldMessage_Nested {
return nil
}
func (m *OldMessage) GetNum() int32 {
if m != nil && m.Num != nil {
return *m.Num
}
return 0
}
type OldMessage_Nested struct {
Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
XXX_unrecognized []byte `json:"-"`
@@ -1152,10 +1105,8 @@ func (m *OldMessage_Nested) GetName() string {
// NewMessage is wire compatible with OldMessage;
// imagine it as a future version.
type NewMessage struct {
Nested *NewMessage_Nested `protobuf:"bytes,1,opt,name=nested" json:"nested,omitempty"`
// This is an int32 in OldMessage.
Num *int64 `protobuf:"varint,2,opt,name=num" json:"num,omitempty"`
XXX_unrecognized []byte `json:"-"`
Nested *NewMessage_Nested `protobuf:"bytes,1,opt,name=nested" json:"nested,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *NewMessage) Reset() { *m = NewMessage{} }
@@ -1169,13 +1120,6 @@ func (m *NewMessage) GetNested() *NewMessage_Nested {
return nil
}
func (m *NewMessage) GetNum() int64 {
if m != nil && m.Num != nil {
return *m.Num
}
return 0
}
type NewMessage_Nested struct {
Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
FoodGroup *string `protobuf:"bytes,2,opt,name=food_group" json:"food_group,omitempty"`
@@ -1442,29 +1386,6 @@ var E_Ext_Number = &proto.ExtensionDesc{
Tag: "varint,105,opt,name=number",
}
type DefaultsMessage struct {
XXX_extensions map[int32]proto.Extension `json:"-"`
XXX_unrecognized []byte `json:"-"`
}
func (m *DefaultsMessage) Reset() { *m = DefaultsMessage{} }
func (m *DefaultsMessage) String() string { return proto.CompactTextString(m) }
func (*DefaultsMessage) ProtoMessage() {}
var extRange_DefaultsMessage = []proto.ExtensionRange{
{100, 536870911},
}
func (*DefaultsMessage) ExtensionRangeArray() []proto.ExtensionRange {
return extRange_DefaultsMessage
}
func (m *DefaultsMessage) ExtensionMap() map[int32]proto.Extension {
if m.XXX_extensions == nil {
m.XXX_extensions = make(map[int32]proto.Extension)
}
return m.XXX_extensions
}
type MyMessageSet struct {
XXX_extensions map[int32]proto.Extension `json:"-"`
XXX_unrecognized []byte `json:"-"`
@@ -1480,12 +1401,6 @@ func (m *MyMessageSet) Marshal() ([]byte, error) {
func (m *MyMessageSet) Unmarshal(buf []byte) error {
return proto.UnmarshalMessageSet(buf, m.ExtensionMap())
}
func (m *MyMessageSet) MarshalJSON() ([]byte, error) {
return proto.MarshalMessageSetJSON(m.XXX_extensions)
}
func (m *MyMessageSet) UnmarshalJSON(buf []byte) error {
return proto.UnmarshalMessageSetJSON(buf, m.XXX_extensions)
}
// ensure MyMessageSet satisfies proto.Marshaler and proto.Unmarshaler
var _ proto.Marshaler = (*MyMessageSet)(nil)
@@ -1514,7 +1429,7 @@ func (m *Empty) String() string { return proto.CompactTextString(m) }
func (*Empty) ProtoMessage() {}
type MessageList struct {
Message []*MessageList_Message `protobuf:"group,1,rep,name=Message" json:"message,omitempty"`
Message []*MessageList_Message `protobuf:"group,1,rep" json:"message,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -1580,29 +1495,27 @@ func (m *Strings) GetBytesField() []byte {
type Defaults struct {
// Default-valued fields of all basic types.
// Same as GoTest, but copied here to make testing easier.
F_Bool *bool `protobuf:"varint,1,opt,name=F_Bool,def=1" json:"F_Bool,omitempty"`
F_Int32 *int32 `protobuf:"varint,2,opt,name=F_Int32,def=32" json:"F_Int32,omitempty"`
F_Int64 *int64 `protobuf:"varint,3,opt,name=F_Int64,def=64" json:"F_Int64,omitempty"`
F_Fixed32 *uint32 `protobuf:"fixed32,4,opt,name=F_Fixed32,def=320" json:"F_Fixed32,omitempty"`
F_Fixed64 *uint64 `protobuf:"fixed64,5,opt,name=F_Fixed64,def=640" json:"F_Fixed64,omitempty"`
F_Uint32 *uint32 `protobuf:"varint,6,opt,name=F_Uint32,def=3200" json:"F_Uint32,omitempty"`
F_Uint64 *uint64 `protobuf:"varint,7,opt,name=F_Uint64,def=6400" json:"F_Uint64,omitempty"`
F_Float *float32 `protobuf:"fixed32,8,opt,name=F_Float,def=314159" json:"F_Float,omitempty"`
F_Double *float64 `protobuf:"fixed64,9,opt,name=F_Double,def=271828" json:"F_Double,omitempty"`
F_String *string `protobuf:"bytes,10,opt,name=F_String,def=hello, \"world!\"\n" json:"F_String,omitempty"`
F_Bytes []byte `protobuf:"bytes,11,opt,name=F_Bytes,def=Bignose" json:"F_Bytes,omitempty"`
F_Sint32 *int32 `protobuf:"zigzag32,12,opt,name=F_Sint32,def=-32" json:"F_Sint32,omitempty"`
F_Sint64 *int64 `protobuf:"zigzag64,13,opt,name=F_Sint64,def=-64" json:"F_Sint64,omitempty"`
F_Enum *Defaults_Color `protobuf:"varint,14,opt,name=F_Enum,enum=testdata.Defaults_Color,def=1" json:"F_Enum,omitempty"`
F_Bool *bool `protobuf:"varint,1,opt,def=1" json:"F_Bool,omitempty"`
F_Int32 *int32 `protobuf:"varint,2,opt,def=32" json:"F_Int32,omitempty"`
F_Int64 *int64 `protobuf:"varint,3,opt,def=64" json:"F_Int64,omitempty"`
F_Fixed32 *uint32 `protobuf:"fixed32,4,opt,def=320" json:"F_Fixed32,omitempty"`
F_Fixed64 *uint64 `protobuf:"fixed64,5,opt,def=640" json:"F_Fixed64,omitempty"`
F_Uint32 *uint32 `protobuf:"varint,6,opt,def=3200" json:"F_Uint32,omitempty"`
F_Uint64 *uint64 `protobuf:"varint,7,opt,def=6400" json:"F_Uint64,omitempty"`
F_Float *float32 `protobuf:"fixed32,8,opt,def=314159" json:"F_Float,omitempty"`
F_Double *float64 `protobuf:"fixed64,9,opt,def=271828" json:"F_Double,omitempty"`
F_String *string `protobuf:"bytes,10,opt,def=hello, \"world!\"\n" json:"F_String,omitempty"`
F_Bytes []byte `protobuf:"bytes,11,opt,def=Bignose" json:"F_Bytes,omitempty"`
F_Sint32 *int32 `protobuf:"zigzag32,12,opt,def=-32" json:"F_Sint32,omitempty"`
F_Sint64 *int64 `protobuf:"zigzag64,13,opt,def=-64" json:"F_Sint64,omitempty"`
F_Enum *Defaults_Color `protobuf:"varint,14,opt,enum=testdata.Defaults_Color,def=1" json:"F_Enum,omitempty"`
// More fields with crazy defaults.
F_Pinf *float32 `protobuf:"fixed32,15,opt,name=F_Pinf,def=inf" json:"F_Pinf,omitempty"`
F_Ninf *float32 `protobuf:"fixed32,16,opt,name=F_Ninf,def=-inf" json:"F_Ninf,omitempty"`
F_Nan *float32 `protobuf:"fixed32,17,opt,name=F_Nan,def=nan" json:"F_Nan,omitempty"`
F_Pinf *float32 `protobuf:"fixed32,15,opt,def=inf" json:"F_Pinf,omitempty"`
F_Ninf *float32 `protobuf:"fixed32,16,opt,def=-inf" json:"F_Ninf,omitempty"`
F_Nan *float32 `protobuf:"fixed32,17,opt,def=nan" json:"F_Nan,omitempty"`
// Sub-message.
Sub *SubDefaults `protobuf:"bytes,18,opt,name=sub" json:"sub,omitempty"`
// Redundant but explicit defaults.
StrZero *string `protobuf:"bytes,19,opt,name=str_zero,def=" json:"str_zero,omitempty"`
XXX_unrecognized []byte `json:"-"`
Sub *SubDefaults `protobuf:"bytes,18,opt,name=sub" json:"sub,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Defaults) Reset() { *m = Defaults{} }
@@ -1756,13 +1669,6 @@ func (m *Defaults) GetSub() *SubDefaults {
return nil
}
func (m *Defaults) GetStrZero() string {
if m != nil && m.StrZero != nil {
return *m.StrZero
}
return ""
}
type SubDefaults struct {
N *int64 `protobuf:"varint,1,opt,name=n,def=7" json:"n,omitempty"`
XXX_unrecognized []byte `json:"-"`
@@ -1862,7 +1768,7 @@ func (m *MoreRepeated) GetFixeds() []uint32 {
}
type GroupOld struct {
G *GroupOld_G `protobuf:"group,101,opt,name=G" json:"g,omitempty"`
G *GroupOld_G `protobuf:"group,101,opt" json:"g,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -1894,7 +1800,7 @@ func (m *GroupOld_G) GetX() int32 {
}
type GroupNew struct {
G *GroupNew_G `protobuf:"group,101,opt,name=G" json:"g,omitempty"`
G *GroupNew_G `protobuf:"group,101,opt" json:"g,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
@@ -1949,245 +1855,6 @@ func (m *FloatingPoint) GetF() float64 {
return 0
}
type MessageWithMap struct {
NameMapping map[int32]string `protobuf:"bytes,1,rep,name=name_mapping" json:"name_mapping,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
MsgMapping map[int64]*FloatingPoint `protobuf:"bytes,2,rep,name=msg_mapping" json:"msg_mapping,omitempty" protobuf_key:"zigzag64,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
ByteMapping map[bool][]byte `protobuf:"bytes,3,rep,name=byte_mapping" json:"byte_mapping,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
StrToStr map[string]string `protobuf:"bytes,4,rep,name=str_to_str" json:"str_to_str,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
XXX_unrecognized []byte `json:"-"`
}
func (m *MessageWithMap) Reset() { *m = MessageWithMap{} }
func (m *MessageWithMap) String() string { return proto.CompactTextString(m) }
func (*MessageWithMap) ProtoMessage() {}
func (m *MessageWithMap) GetNameMapping() map[int32]string {
if m != nil {
return m.NameMapping
}
return nil
}
func (m *MessageWithMap) GetMsgMapping() map[int64]*FloatingPoint {
if m != nil {
return m.MsgMapping
}
return nil
}
func (m *MessageWithMap) GetByteMapping() map[bool][]byte {
if m != nil {
return m.ByteMapping
}
return nil
}
func (m *MessageWithMap) GetStrToStr() map[string]string {
if m != nil {
return m.StrToStr
}
return nil
}
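
Map fields carry protobuf_key/protobuf_val tags describing the key and value encodings, which Properties.init consumes (see mkeyprop/mvalprop above); the enc_new_map/dec_new_map coders then treat each entry as a tiny two-field message. A pre-revert round-trip sketch — assuming the golang/protobuf-derived vendored path, since this diff removes map support from the reverted copy:

```
package main

import (
	"fmt"

	proto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
	pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
)

func main() {
	m := &pb.MessageWithMap{NameMapping: map[int32]string{1: "Rob", 7: "Andrew"}}
	b, err := proto.Marshal(m)
	if err != nil {
		panic(err)
	}
	var got pb.MessageWithMap
	if err := proto.Unmarshal(b, &got); err != nil {
		panic(err)
	}
	fmt.Println(got.GetNameMapping()[7]) // Andrew
}
```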
type Communique struct {
MakeMeCry *bool `protobuf:"varint,1,opt,name=make_me_cry" json:"make_me_cry,omitempty"`
// This is a oneof, called "union".
//
// Types that are valid to be assigned to Union:
// *Communique_Number
// *Communique_Name
// *Communique_Data
// *Communique_TempC
// *Communique_Col
// *Communique_Msg
Union isCommunique_Union `protobuf_oneof:"union"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Communique) Reset() { *m = Communique{} }
func (m *Communique) String() string { return proto.CompactTextString(m) }
func (*Communique) ProtoMessage() {}
type isCommunique_Union interface {
isCommunique_Union()
}
type Communique_Number struct {
Number int32 `protobuf:"varint,5,opt,name=number,oneof"`
}
type Communique_Name struct {
Name string `protobuf:"bytes,6,opt,name=name,oneof"`
}
type Communique_Data struct {
Data []byte `protobuf:"bytes,7,opt,name=data,oneof"`
}
type Communique_TempC struct {
TempC float64 `protobuf:"fixed64,8,opt,name=temp_c,oneof"`
}
type Communique_Col struct {
Col MyMessage_Color `protobuf:"varint,9,opt,name=col,enum=testdata.MyMessage_Color,oneof"`
}
type Communique_Msg struct {
Msg *Strings `protobuf:"bytes,10,opt,name=msg,oneof"`
}
func (*Communique_Number) isCommunique_Union() {}
func (*Communique_Name) isCommunique_Union() {}
func (*Communique_Data) isCommunique_Union() {}
func (*Communique_TempC) isCommunique_Union() {}
func (*Communique_Col) isCommunique_Union() {}
func (*Communique_Msg) isCommunique_Union() {}
func (m *Communique) GetUnion() isCommunique_Union {
if m != nil {
return m.Union
}
return nil
}
func (m *Communique) GetMakeMeCry() bool {
if m != nil && m.MakeMeCry != nil {
return *m.MakeMeCry
}
return false
}
func (m *Communique) GetNumber() int32 {
if x, ok := m.GetUnion().(*Communique_Number); ok {
return x.Number
}
return 0
}
func (m *Communique) GetName() string {
if x, ok := m.GetUnion().(*Communique_Name); ok {
return x.Name
}
return ""
}
func (m *Communique) GetData() []byte {
if x, ok := m.GetUnion().(*Communique_Data); ok {
return x.Data
}
return nil
}
func (m *Communique) GetTempC() float64 {
if x, ok := m.GetUnion().(*Communique_TempC); ok {
return x.TempC
}
return 0
}
func (m *Communique) GetCol() MyMessage_Color {
if x, ok := m.GetUnion().(*Communique_Col); ok {
return x.Col
}
return MyMessage_RED
}
func (m *Communique) GetMsg() *Strings {
if x, ok := m.GetUnion().(*Communique_Msg); ok {
return x.Msg
}
return nil
}
// XXX_OneofFuncs is for the internal use of the proto package.
func (*Communique) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), []interface{}) {
return _Communique_OneofMarshaler, _Communique_OneofUnmarshaler, []interface{}{
(*Communique_Number)(nil),
(*Communique_Name)(nil),
(*Communique_Data)(nil),
(*Communique_TempC)(nil),
(*Communique_Col)(nil),
(*Communique_Msg)(nil),
}
}
func _Communique_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
m := msg.(*Communique)
// union
switch x := m.Union.(type) {
case *Communique_Number:
_ = b.EncodeVarint(5<<3 | proto.WireVarint)
_ = b.EncodeVarint(uint64(x.Number))
case *Communique_Name:
_ = b.EncodeVarint(6<<3 | proto.WireBytes)
_ = b.EncodeStringBytes(x.Name)
case *Communique_Data:
_ = b.EncodeVarint(7<<3 | proto.WireBytes)
_ = b.EncodeRawBytes(x.Data)
case *Communique_TempC:
_ = b.EncodeVarint(8<<3 | proto.WireFixed64)
_ = b.EncodeFixed64(math.Float64bits(x.TempC))
case *Communique_Col:
_ = b.EncodeVarint(9<<3 | proto.WireVarint)
_ = b.EncodeVarint(uint64(x.Col))
case *Communique_Msg:
_ = b.EncodeVarint(10<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.Msg); err != nil {
return err
}
case nil:
default:
return fmt.Errorf("Communique.Union has unexpected type %T", x)
}
return nil
}
func _Communique_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
m := msg.(*Communique)
switch tag {
case 5: // union.number
if wire != proto.WireVarint {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeVarint()
m.Union = &Communique_Number{int32(x)}
return true, err
case 6: // union.name
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Union = &Communique_Name{x}
return true, err
case 7: // union.data
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeRawBytes(true)
m.Union = &Communique_Data{x}
return true, err
case 8: // union.temp_c
if wire != proto.WireFixed64 {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeFixed64()
m.Union = &Communique_TempC{math.Float64frombits(x)}
return true, err
case 9: // union.col
if wire != proto.WireVarint {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeVarint()
m.Union = &Communique_Col{MyMessage_Color(x)}
return true, err
case 10: // union.msg
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(Strings)
err := b.DecodeMessage(msg)
m.Union = &Communique_Msg{msg}
return true, err
default:
return false, nil
}
}
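
The oneof machinery being removed here consists of a marker interface, one single-field wrapper struct per case, getters that type-assert on the union, and marshal/unmarshal funcs that switch on the concrete type or tag. The pattern in miniature, with hypothetical names:

```
package main

import "fmt"

// A marker interface plus one wrapper struct per case, as generated
// for Communique above.
type isUnion interface{ isUnion() }

type caseNumber struct{ Number int32 }
type caseName struct{ Name string }

func (*caseNumber) isUnion() {}
func (*caseName) isUnion()   {}

type communique struct{ Union isUnion }

// Getters type-assert on the union and fall back to the zero value.
func (m *communique) GetNumber() int32 {
	if x, ok := m.Union.(*caseNumber); ok {
		return x.Number
	}
	return 0
}

func (m *communique) GetName() string {
	if x, ok := m.Union.(*caseName); ok {
		return x.Name
	}
	return ""
}

func main() {
	c := &communique{Union: &caseNumber{Number: 3}}
	fmt.Println(c.GetNumber(), c.GetName()) // 3 ""
}
```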
var E_Greeting = &proto.ExtensionDesc{
ExtendedType: (*MyMessage)(nil),
ExtensionType: ([]string)(nil),
@@ -2196,262 +1863,6 @@ var E_Greeting = &proto.ExtensionDesc{
Tag: "bytes,106,rep,name=greeting",
}
var E_NoDefaultDouble = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*float64)(nil),
Field: 101,
Name: "testdata.no_default_double",
Tag: "fixed64,101,opt,name=no_default_double",
}
var E_NoDefaultFloat = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*float32)(nil),
Field: 102,
Name: "testdata.no_default_float",
Tag: "fixed32,102,opt,name=no_default_float",
}
var E_NoDefaultInt32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int32)(nil),
Field: 103,
Name: "testdata.no_default_int32",
Tag: "varint,103,opt,name=no_default_int32",
}
var E_NoDefaultInt64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int64)(nil),
Field: 104,
Name: "testdata.no_default_int64",
Tag: "varint,104,opt,name=no_default_int64",
}
var E_NoDefaultUint32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint32)(nil),
Field: 105,
Name: "testdata.no_default_uint32",
Tag: "varint,105,opt,name=no_default_uint32",
}
var E_NoDefaultUint64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint64)(nil),
Field: 106,
Name: "testdata.no_default_uint64",
Tag: "varint,106,opt,name=no_default_uint64",
}
var E_NoDefaultSint32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int32)(nil),
Field: 107,
Name: "testdata.no_default_sint32",
Tag: "zigzag32,107,opt,name=no_default_sint32",
}
var E_NoDefaultSint64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int64)(nil),
Field: 108,
Name: "testdata.no_default_sint64",
Tag: "zigzag64,108,opt,name=no_default_sint64",
}
var E_NoDefaultFixed32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint32)(nil),
Field: 109,
Name: "testdata.no_default_fixed32",
Tag: "fixed32,109,opt,name=no_default_fixed32",
}
var E_NoDefaultFixed64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint64)(nil),
Field: 110,
Name: "testdata.no_default_fixed64",
Tag: "fixed64,110,opt,name=no_default_fixed64",
}
var E_NoDefaultSfixed32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int32)(nil),
Field: 111,
Name: "testdata.no_default_sfixed32",
Tag: "fixed32,111,opt,name=no_default_sfixed32",
}
var E_NoDefaultSfixed64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int64)(nil),
Field: 112,
Name: "testdata.no_default_sfixed64",
Tag: "fixed64,112,opt,name=no_default_sfixed64",
}
var E_NoDefaultBool = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*bool)(nil),
Field: 113,
Name: "testdata.no_default_bool",
Tag: "varint,113,opt,name=no_default_bool",
}
var E_NoDefaultString = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*string)(nil),
Field: 114,
Name: "testdata.no_default_string",
Tag: "bytes,114,opt,name=no_default_string",
}
var E_NoDefaultBytes = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: ([]byte)(nil),
Field: 115,
Name: "testdata.no_default_bytes",
Tag: "bytes,115,opt,name=no_default_bytes",
}
var E_NoDefaultEnum = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*DefaultsMessage_DefaultsEnum)(nil),
Field: 116,
Name: "testdata.no_default_enum",
Tag: "varint,116,opt,name=no_default_enum,enum=testdata.DefaultsMessage_DefaultsEnum",
}
var E_DefaultDouble = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*float64)(nil),
Field: 201,
Name: "testdata.default_double",
Tag: "fixed64,201,opt,name=default_double,def=3.1415",
}
var E_DefaultFloat = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*float32)(nil),
Field: 202,
Name: "testdata.default_float",
Tag: "fixed32,202,opt,name=default_float,def=3.14",
}
var E_DefaultInt32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int32)(nil),
Field: 203,
Name: "testdata.default_int32",
Tag: "varint,203,opt,name=default_int32,def=42",
}
var E_DefaultInt64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int64)(nil),
Field: 204,
Name: "testdata.default_int64",
Tag: "varint,204,opt,name=default_int64,def=43",
}
var E_DefaultUint32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint32)(nil),
Field: 205,
Name: "testdata.default_uint32",
Tag: "varint,205,opt,name=default_uint32,def=44",
}
var E_DefaultUint64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint64)(nil),
Field: 206,
Name: "testdata.default_uint64",
Tag: "varint,206,opt,name=default_uint64,def=45",
}
var E_DefaultSint32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int32)(nil),
Field: 207,
Name: "testdata.default_sint32",
Tag: "zigzag32,207,opt,name=default_sint32,def=46",
}
var E_DefaultSint64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int64)(nil),
Field: 208,
Name: "testdata.default_sint64",
Tag: "zigzag64,208,opt,name=default_sint64,def=47",
}
var E_DefaultFixed32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint32)(nil),
Field: 209,
Name: "testdata.default_fixed32",
Tag: "fixed32,209,opt,name=default_fixed32,def=48",
}
var E_DefaultFixed64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*uint64)(nil),
Field: 210,
Name: "testdata.default_fixed64",
Tag: "fixed64,210,opt,name=default_fixed64,def=49",
}
var E_DefaultSfixed32 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int32)(nil),
Field: 211,
Name: "testdata.default_sfixed32",
Tag: "fixed32,211,opt,name=default_sfixed32,def=50",
}
var E_DefaultSfixed64 = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*int64)(nil),
Field: 212,
Name: "testdata.default_sfixed64",
Tag: "fixed64,212,opt,name=default_sfixed64,def=51",
}
var E_DefaultBool = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*bool)(nil),
Field: 213,
Name: "testdata.default_bool",
Tag: "varint,213,opt,name=default_bool,def=1",
}
var E_DefaultString = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*string)(nil),
Field: 214,
Name: "testdata.default_string",
Tag: "bytes,214,opt,name=default_string,def=Hello, string",
}
var E_DefaultBytes = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: ([]byte)(nil),
Field: 215,
Name: "testdata.default_bytes",
Tag: "bytes,215,opt,name=default_bytes,def=Hello, bytes",
}
var E_DefaultEnum = &proto.ExtensionDesc{
ExtendedType: (*DefaultsMessage)(nil),
ExtensionType: (*DefaultsMessage_DefaultsEnum)(nil),
Field: 216,
Name: "testdata.default_enum",
Tag: "varint,216,opt,name=default_enum,enum=testdata.DefaultsMessage_DefaultsEnum,def=1",
}
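
Each ExtensionDesc binds an extension's field number, wire tag, and Go type to the message it extends; SetExtension/GetExtension consult it at runtime. A hedged usage sketch against E_Greeting above (import paths as vendored; API signatures assumed from this vintage of the package):

```
package main

import (
	"fmt"

	proto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
	pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
)

func main() {
	m := &pb.MyMessage{Count: proto.Int32(1)}
	// E_Greeting is repeated ([]string), so the value is the whole slice.
	if err := proto.SetExtension(m, pb.E_Greeting, []string{"hello"}); err != nil {
		panic(err)
	}
	v, err := proto.GetExtension(m, pb.E_Greeting)
	if err != nil {
		panic(err)
	}
	fmt.Println(v.([]string)) // [hello]
}
```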
var E_X201 = &proto.ExtensionDesc{
ExtendedType: (*MyMessageSet)(nil),
ExtensionType: (*Empty)(nil),
@@ -2853,85 +2264,15 @@ var E_X250 = &proto.ExtensionDesc{
}
func init() {
proto.RegisterType((*GoEnum)(nil), "testdata.GoEnum")
proto.RegisterType((*GoTestField)(nil), "testdata.GoTestField")
proto.RegisterType((*GoTest)(nil), "testdata.GoTest")
proto.RegisterType((*GoTest_RequiredGroup)(nil), "testdata.GoTest.RequiredGroup")
proto.RegisterType((*GoTest_RepeatedGroup)(nil), "testdata.GoTest.RepeatedGroup")
proto.RegisterType((*GoTest_OptionalGroup)(nil), "testdata.GoTest.OptionalGroup")
proto.RegisterType((*GoSkipTest)(nil), "testdata.GoSkipTest")
proto.RegisterType((*GoSkipTest_SkipGroup)(nil), "testdata.GoSkipTest.SkipGroup")
proto.RegisterType((*NonPackedTest)(nil), "testdata.NonPackedTest")
proto.RegisterType((*PackedTest)(nil), "testdata.PackedTest")
proto.RegisterType((*MaxTag)(nil), "testdata.MaxTag")
proto.RegisterType((*OldMessage)(nil), "testdata.OldMessage")
proto.RegisterType((*OldMessage_Nested)(nil), "testdata.OldMessage.Nested")
proto.RegisterType((*NewMessage)(nil), "testdata.NewMessage")
proto.RegisterType((*NewMessage_Nested)(nil), "testdata.NewMessage.Nested")
proto.RegisterType((*InnerMessage)(nil), "testdata.InnerMessage")
proto.RegisterType((*OtherMessage)(nil), "testdata.OtherMessage")
proto.RegisterType((*MyMessage)(nil), "testdata.MyMessage")
proto.RegisterType((*MyMessage_SomeGroup)(nil), "testdata.MyMessage.SomeGroup")
proto.RegisterType((*Ext)(nil), "testdata.Ext")
proto.RegisterType((*DefaultsMessage)(nil), "testdata.DefaultsMessage")
proto.RegisterType((*MyMessageSet)(nil), "testdata.MyMessageSet")
proto.RegisterType((*Empty)(nil), "testdata.Empty")
proto.RegisterType((*MessageList)(nil), "testdata.MessageList")
proto.RegisterType((*MessageList_Message)(nil), "testdata.MessageList.Message")
proto.RegisterType((*Strings)(nil), "testdata.Strings")
proto.RegisterType((*Defaults)(nil), "testdata.Defaults")
proto.RegisterType((*SubDefaults)(nil), "testdata.SubDefaults")
proto.RegisterType((*RepeatedEnum)(nil), "testdata.RepeatedEnum")
proto.RegisterType((*MoreRepeated)(nil), "testdata.MoreRepeated")
proto.RegisterType((*GroupOld)(nil), "testdata.GroupOld")
proto.RegisterType((*GroupOld_G)(nil), "testdata.GroupOld.G")
proto.RegisterType((*GroupNew)(nil), "testdata.GroupNew")
proto.RegisterType((*GroupNew_G)(nil), "testdata.GroupNew.G")
proto.RegisterType((*FloatingPoint)(nil), "testdata.FloatingPoint")
proto.RegisterType((*MessageWithMap)(nil), "testdata.MessageWithMap")
proto.RegisterType((*Communique)(nil), "testdata.Communique")
proto.RegisterEnum("testdata.FOO", FOO_name, FOO_value)
proto.RegisterEnum("testdata.GoTest_KIND", GoTest_KIND_name, GoTest_KIND_value)
proto.RegisterEnum("testdata.MyMessage_Color", MyMessage_Color_name, MyMessage_Color_value)
proto.RegisterEnum("testdata.DefaultsMessage_DefaultsEnum", DefaultsMessage_DefaultsEnum_name, DefaultsMessage_DefaultsEnum_value)
proto.RegisterEnum("testdata.Defaults_Color", Defaults_Color_name, Defaults_Color_value)
proto.RegisterEnum("testdata.RepeatedEnum_Color", RepeatedEnum_Color_name, RepeatedEnum_Color_value)
proto.RegisterExtension(E_Ext_More)
proto.RegisterExtension(E_Ext_Text)
proto.RegisterExtension(E_Ext_Number)
proto.RegisterExtension(E_Greeting)
proto.RegisterExtension(E_NoDefaultDouble)
proto.RegisterExtension(E_NoDefaultFloat)
proto.RegisterExtension(E_NoDefaultInt32)
proto.RegisterExtension(E_NoDefaultInt64)
proto.RegisterExtension(E_NoDefaultUint32)
proto.RegisterExtension(E_NoDefaultUint64)
proto.RegisterExtension(E_NoDefaultSint32)
proto.RegisterExtension(E_NoDefaultSint64)
proto.RegisterExtension(E_NoDefaultFixed32)
proto.RegisterExtension(E_NoDefaultFixed64)
proto.RegisterExtension(E_NoDefaultSfixed32)
proto.RegisterExtension(E_NoDefaultSfixed64)
proto.RegisterExtension(E_NoDefaultBool)
proto.RegisterExtension(E_NoDefaultString)
proto.RegisterExtension(E_NoDefaultBytes)
proto.RegisterExtension(E_NoDefaultEnum)
proto.RegisterExtension(E_DefaultDouble)
proto.RegisterExtension(E_DefaultFloat)
proto.RegisterExtension(E_DefaultInt32)
proto.RegisterExtension(E_DefaultInt64)
proto.RegisterExtension(E_DefaultUint32)
proto.RegisterExtension(E_DefaultUint64)
proto.RegisterExtension(E_DefaultSint32)
proto.RegisterExtension(E_DefaultSint64)
proto.RegisterExtension(E_DefaultFixed32)
proto.RegisterExtension(E_DefaultFixed64)
proto.RegisterExtension(E_DefaultSfixed32)
proto.RegisterExtension(E_DefaultSfixed64)
proto.RegisterExtension(E_DefaultBool)
proto.RegisterExtension(E_DefaultString)
proto.RegisterExtension(E_DefaultBytes)
proto.RegisterExtension(E_DefaultEnum)
proto.RegisterExtension(E_X201)
proto.RegisterExtension(E_X202)
proto.RegisterExtension(E_X203)
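The calls above wire every generated type, enum, and extension into the proto package's global registries, keyed by fully qualified name or field number. A rough sketch of the kind of lookup this enables, assuming this vintage of the library exposes the usual proto.MessageType accessor (illustrative only, not confirmed vendored API):

import (
	"reflect"

	"github.com/gogo/protobuf/proto"
)

// newByName instantiates a registered message from its registry name.
func newByName(name string) proto.Message {
	t := proto.MessageType(name) // e.g. "testdata.MyMessage"; nil if unregistered
	if t == nil {
		return nil
	}
	return reflect.New(t.Elem()).Interface().(proto.Message)
}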

View File

@@ -4,7 +4,7 @@
package testdata
import proto "github.com/gogo/protobuf/proto"
import proto "code.google.com/p/gogoprotobuf/proto"
import json "encoding/json"
import math "math"

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -203,8 +203,6 @@ message OldMessage {
optional string name = 1;
}
optional Nested nested = 1;
optional int32 num = 2;
}
// NewMessage is wire compatible with OldMessage;
@@ -215,9 +213,6 @@ message NewMessage {
optional string food_group = 2;
}
optional Nested nested = 1;
// This is an int32 in OldMessage.
optional int64 num = 2;
}
// Smaller tests for ASCII formatting.
@@ -277,51 +272,6 @@ extend MyMessage {
repeated string greeting = 106;
}
message DefaultsMessage {
enum DefaultsEnum {
ZERO = 0;
ONE = 1;
TWO = 2;
};
extensions 100 to max;
}
extend DefaultsMessage {
optional double no_default_double = 101;
optional float no_default_float = 102;
optional int32 no_default_int32 = 103;
optional int64 no_default_int64 = 104;
optional uint32 no_default_uint32 = 105;
optional uint64 no_default_uint64 = 106;
optional sint32 no_default_sint32 = 107;
optional sint64 no_default_sint64 = 108;
optional fixed32 no_default_fixed32 = 109;
optional fixed64 no_default_fixed64 = 110;
optional sfixed32 no_default_sfixed32 = 111;
optional sfixed64 no_default_sfixed64 = 112;
optional bool no_default_bool = 113;
optional string no_default_string = 114;
optional bytes no_default_bytes = 115;
optional DefaultsMessage.DefaultsEnum no_default_enum = 116;
optional double default_double = 201 [default = 3.1415];
optional float default_float = 202 [default = 3.14];
optional int32 default_int32 = 203 [default = 42];
optional int64 default_int64 = 204 [default = 43];
optional uint32 default_uint32 = 205 [default = 44];
optional uint64 default_uint64 = 206 [default = 45];
optional sint32 default_sint32 = 207 [default = 46];
optional sint64 default_sint64 = 208 [default = 47];
optional fixed32 default_fixed32 = 209 [default = 48];
optional fixed64 default_fixed64 = 210 [default = 49];
optional sfixed32 default_sfixed32 = 211 [default = 50];
optional sfixed64 default_sfixed64 = 212 [default = 51];
optional bool default_bool = 213 [default = true];
optional string default_string = 214 [default = "Hello, string"];
optional bytes default_bytes = 215 [default = "Hello, bytes"];
optional DefaultsMessage.DefaultsEnum default_enum = 216 [default = ONE];
}
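// Reading one of the default_* extensions above from a message that never set
// it yields the declared default rather than an error. A rough Go sketch,
// assuming the generated E_DefaultInt32 descriptor for field 203:
//
//	m := new(DefaultsMessage)
//	v, err := proto.GetExtension(m, E_DefaultInt32)
//	// err == nil; *v.(*int32) == 42, per the [default = 42] option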
message MyMessageSet {
option message_set_wire_format = true;
extensions 100 to max;
@@ -426,9 +376,6 @@ message Defaults {
// Sub-message.
optional SubDefaults sub = 18;
// Redundant but explicit defaults.
optional string str_zero = 19 [default=""];
}
message SubDefaults {
@@ -471,24 +418,3 @@ message GroupNew {
message FloatingPoint {
required double f = 1;
}
message MessageWithMap {
map<int32, string> name_mapping = 1;
map<sint64, FloatingPoint> msg_mapping = 2;
map<bool, bytes> byte_mapping = 3;
map<string, string> str_to_str = 4;
}
message Communique {
optional bool make_me_cry = 1;
// This is a oneof, called "union".
oneof union {
int32 number = 5;
string name = 6;
bytes data = 7;
double temp_c = 8;
MyMessage.Color col = 9;
Strings msg = 10;
}
}
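In the generated Go code each oneof case becomes a small wrapper struct stored in a single Union interface field, so assigning one case displaces the others. A short sketch using the wrapper names that the text tests further down this diff exercise:

c := &Communique{Union: &Communique_Number{Number: 4}}
c.Union = &Communique_Name{Name: "Shrek"} // replaces the number case
switch u := c.Union.(type) {
case *Communique_Name:
	fmt.Println(u.Name) // "Shrek"
case *Communique_Number:
	fmt.Println(u.Number)
}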

View File

@@ -1,7 +1,12 @@
// Extensions for Protocol Buffers to create more go like structures.
//
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -36,12 +41,11 @@ package proto
import (
"bufio"
"bytes"
"encoding"
"errors"
"fmt"
"io"
"log"
"math"
"os"
"reflect"
"sort"
"strings"
@@ -75,6 +79,13 @@ type textWriter struct {
w writer
}
// textMarshaler is implemented by Messages that can marshal themselves.
// It is identical to encoding.TextMarshaler, introduced in go 1.2,
// which will eventually replace it.
type textMarshaler interface {
MarshalText() (text []byte, err error)
}
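// A message can opt out of reflection-based text output by satisfying this
// interface; marshalText checks for it before walking the struct (see the
// pb.(textMarshaler) assertion near the bottom of this file). A hypothetical
// sketch of a self-marshaling message:
//
//	func (m *MyMessage) MarshalText() ([]byte, error) {
//		return []byte("custom rendering"), nil
//	}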
func (w *textWriter) WriteString(s string) (n int, err error) {
if !strings.Contains(s, "\n") {
if !w.compact && w.complete {
@@ -170,12 +181,20 @@ func writeName(w *textWriter, props *Properties) error {
return nil
}
var (
messageSetType = reflect.TypeOf((*MessageSet)(nil)).Elem()
)
// raw is the interface satisfied by RawMessage.
type raw interface {
Bytes() []byte
}
func writeStruct(w *textWriter, sv reflect.Value) error {
if sv.Type() == messageSetType {
return writeMessageSet(w, sv.Addr().Interface().(*MessageSet))
}
st := sv.Type()
sprops := GetProperties(st)
for i := 0; i < sv.NumField(); i++ {
@@ -218,16 +237,11 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
return err
}
}
v := fv.Index(j)
if v.Kind() == reflect.Ptr && v.IsNil() {
// A nil message in a repeated field is not valid,
// but we can handle that more gracefully than panicking.
if _, err := w.Write([]byte("<nil>\n")); err != nil {
if len(props.Enum) > 0 {
if err := writeEnum(w, fv.Index(j), props); err != nil {
return err
}
continue
}
if err := writeAny(w, v, props); err != nil {
} else if err := writeAny(w, fv.Index(j), props); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
@@ -236,111 +250,6 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
}
continue
}
if fv.Kind() == reflect.Map {
// Map fields are rendered as a repeated struct with key/value fields.
keys := fv.MapKeys()
sort.Sort(mapKeys(keys))
for _, key := range keys {
val := fv.MapIndex(key)
if err := writeName(w, props); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
// open struct
if err := w.WriteByte('<'); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
// key
if _, err := w.WriteString("key:"); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := writeAny(w, key, props.mkeyprop); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
// nil values aren't legal, but we can avoid panicking because of them.
if val.Kind() != reflect.Ptr || !val.IsNil() {
// value
if _, err := w.WriteString("value:"); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := writeAny(w, val, props.mvalprop); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
// close struct
w.unindent()
if err := w.WriteByte('>'); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
continue
}
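// The block above (removed in this hunk) rendered each map entry as a nested
// key/value message, emitted in sorted key order so output is deterministic.
// For example, matching the expectations in the text tests later in this diff:
//
//	m := &MessageWithMap{NameMapping: map[int32]string{1234: "Feist", 1: "Beatles"}}
//	CompactTextString(m)
//	// name_mapping:<key:1 value:"Beatles" > name_mapping:<key:1234 value:"Feist" >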
if props.proto3 && fv.Kind() == reflect.Slice && fv.Len() == 0 {
// empty bytes field
continue
}
if fv.Kind() != reflect.Ptr && fv.Kind() != reflect.Slice {
// proto3 non-repeated scalar field; skip if zero value
if isProto3Zero(fv) {
continue
}
}
if fv.Kind() == reflect.Interface {
// Check if it is a oneof.
if st.Field(i).Tag.Get("protobuf_oneof") != "" {
// fv is nil, or holds a pointer to generated struct.
// That generated struct has exactly one field,
// which has a protobuf struct tag.
if fv.IsNil() {
continue
}
inner := fv.Elem().Elem() // interface -> *T -> T
tag := inner.Type().Field(0).Tag.Get("protobuf")
props = new(Properties) // Overwrite the outer props var, but not its pointee.
props.Parse(tag)
// Write the value in the oneof, not the oneof itself.
fv = inner.Field(0)
// Special case to cope with malformed messages gracefully:
// If the value in the oneof is a nil pointer, don't panic
// in writeAny.
if fv.Kind() == reflect.Ptr && fv.IsNil() {
// Use errors.New so writeAny won't render quotes.
msg := errors.New("/* nil */")
fv = reflect.ValueOf(&msg).Elem()
}
}
}
if err := writeName(w, props); err != nil {
return err
@@ -357,8 +266,11 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
continue
}
// Enums have a String method, so writeAny will work fine.
if err := writeAny(w, fv, props); err != nil {
if len(props.Enum) > 0 {
if err := writeEnum(w, fv, props); err != nil {
return err
}
} else if err := writeAny(w, fv, props); err != nil {
return err
}
@@ -403,6 +315,18 @@ func writeRaw(w *textWriter, b []byte) error {
func writeAny(w *textWriter, v reflect.Value, props *Properties) error {
v = reflect.Indirect(v)
if props != nil && len(props.CustomType) > 0 {
var custom Marshaler = v.Interface().(Marshaler)
data, err := custom.Marshal()
if err != nil {
return err
}
if err := writeString(w, string(data)); err != nil {
return err
}
return nil
}
// Floats have special cases.
if v.Kind() == reflect.Float32 || v.Kind() == reflect.Float64 {
x := v.Float()
@@ -449,7 +373,7 @@ func writeAny(w *textWriter, v reflect.Value, props *Properties) error {
}
}
w.indent()
if tm, ok := v.Interface().(encoding.TextMarshaler); ok {
if tm, ok := v.Interface().(textMarshaler); ok {
text, err := tm.MarshalText()
if err != nil {
return err
@@ -517,6 +441,44 @@ func writeString(w *textWriter, s string) error {
return w.WriteByte('"')
}
func writeMessageSet(w *textWriter, ms *MessageSet) error {
for _, item := range ms.Item {
id := *item.TypeId
if msd, ok := messageSetMap[id]; ok {
// Known message set type.
if _, err := fmt.Fprintf(w, "[%s]: <\n", msd.name); err != nil {
return err
}
w.indent()
pb := reflect.New(msd.t.Elem())
if err := Unmarshal(item.Message, pb.Interface().(Message)); err != nil {
if _, err := fmt.Fprintf(w, "/* bad message: %v */\n", err); err != nil {
return err
}
} else {
if err := writeStruct(w, pb.Elem()); err != nil {
return err
}
}
} else {
// Unknown type.
if _, err := fmt.Fprintf(w, "[%d]: <\n", id); err != nil {
return err
}
w.indent()
if err := writeUnknownStruct(w, item.Message); err != nil {
return err
}
}
w.unindent()
if _, err := w.Write(gtNewline); err != nil {
return err
}
}
return nil
}
func writeUnknownStruct(w *textWriter, data []byte) (err error) {
if !w.compact {
if _, err := fmt.Fprintf(w, "/* %d unknown bytes */\n", len(data)); err != nil {
@@ -608,7 +570,18 @@ func writeExtensions(w *textWriter, pv reflect.Value) error {
// Order the extensions by ID.
// This isn't strictly necessary, but it will give us
// canonical output, which will also make testing easier.
m := ep.ExtensionMap()
var m map[int32]Extension
if em, ok := ep.(extensionsMap); ok {
m = em.ExtensionMap()
} else if em, ok := ep.(extensionsBytes); ok {
eb := em.GetExtensions()
var err error
m, err = BytesToExtensionsMap(*eb)
if err != nil {
return err
}
}
ids := make([]int32, 0, len(m))
for id := range m {
ids = append(ids, id)
@@ -631,7 +604,10 @@ func writeExtensions(w *textWriter, pv reflect.Value) error {
pb, err := GetExtension(ep, desc)
if err != nil {
return fmt.Errorf("failed getting extension: %v", err)
if _, err := fmt.Fprintln(os.Stderr, "proto: failed getting extension: ", err); err != nil {
return err
}
continue
}
// Repeated extensions will appear as a slice.
@@ -703,7 +679,7 @@ func marshalText(w io.Writer, pb Message, compact bool) error {
compact: compact,
}
if tm, ok := pb.(encoding.TextMarshaler); ok {
if tm, ok := pb.(textMarshaler); ok {
text, err := tm.MarshalText()
if err != nil {
return err

View File

@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are

View File

@@ -1,12 +1,12 @@
// Extensions for Protocol Buffers to create more go like structures.
//
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -40,7 +40,6 @@ package proto
// TODO: message sets.
import (
"encoding"
"errors"
"fmt"
"reflect"
@@ -49,6 +48,13 @@ import (
"unicode/utf8"
)
// textUnmarshaler is implemented by Messages that can unmarshal themselves.
// It is identical to encoding.TextUnmarshaler, introduced in go 1.2,
// which will eventually replace it.
type textUnmarshaler interface {
UnmarshalText(text []byte) error
}
type ParseError struct {
Message string
Line int // 1-based line number
@@ -179,7 +185,7 @@ func (p *textParser) advance() {
}
unq, err := unquoteC(p.s[1:i], rune(p.s[0]))
if err != nil {
p.errorf("invalid quoted string %s: %v", p.s[0:i+1], err)
p.errorf("invalid quoted string %v", p.s[0:i+1])
return
}
p.cur.value, p.s = p.s[0:i+1], p.s[i+1:len(p.s)]
@@ -360,20 +366,8 @@ func (p *textParser) next() *token {
return &p.cur
}
func (p *textParser) consumeToken(s string) error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != s {
p.back()
return p.errorf("expected %q, found %q", s, tok.value)
}
return nil
}
// Return a RequiredNotSetError indicating which required field was not set.
func (p *textParser) missingRequiredFieldError(sv reflect.Value) *RequiredNotSetError {
// Return an error indicating which required field was not set.
func (p *textParser) missingRequiredFieldError(sv reflect.Value) *ParseError {
st := sv.Type()
sprops := GetProperties(st)
for i := 0; i < st.NumField(); i++ {
@@ -383,14 +377,15 @@ func (p *textParser) missingRequiredFieldError(sv reflect.Value) *RequiredNotSet
props := sprops.Prop[i]
if props.Required {
return &RequiredNotSetError{fmt.Sprintf("%v.%v", st, props.OrigName)}
return p.errorf("message %v missing required field %q", st, props.OrigName)
}
}
return &RequiredNotSetError{fmt.Sprintf("%v.<unknown field name>", st)} // should not happen
return p.errorf("message %v missing required field", st) // should not happen
}
// Returns the index in the struct for the named field, as well as the parsed tag properties.
func structFieldByName(sprops *StructProperties, name string) (int, *Properties, bool) {
func structFieldByName(st reflect.Type, name string) (int, *Properties, bool) {
sprops := GetProperties(st)
i, ok := sprops.decoderOrigNames[name]
if ok {
return i, sprops.Prop[i], true
@@ -425,10 +420,6 @@ func (p *textParser) checkForColon(props *Properties, typ reflect.Type) *ParseEr
if typ.Elem().Kind() != reflect.Ptr {
break
}
} else if typ.Kind() == reflect.String {
// The proto3 exception is for a string field,
// which requires a colon.
break
}
needColon = false
}
@@ -440,12 +431,9 @@ func (p *textParser) checkForColon(props *Properties, typ reflect.Type) *ParseEr
return nil
}
func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
func (p *textParser) readStruct(sv reflect.Value, terminator string) *ParseError {
st := sv.Type()
sprops := GetProperties(st)
reqCount := sprops.reqCount
var reqFieldErr error
fieldSet := make(map[string]bool)
reqCount := GetProperties(st).reqCount
// A struct is a sequence of "name: value", terminated by one of
// '>' or '}', or the end of the input. A name may also be
// "[extension]".
@@ -506,10 +494,7 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
ext = reflect.New(typ.Elem()).Elem()
}
if err := p.readAny(ext, props); err != nil {
if _, ok := err.(*RequiredNotSetError); !ok {
return err
}
reqFieldErr = err
return err
}
ep := sv.Addr().Interface().(extendableProto)
if !rep {
@@ -525,135 +510,52 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
sl = reflect.Append(sl, ext)
SetExtension(ep, desc, sl.Interface())
}
if err := p.consumeOptionalSeparator(); err != nil {
} else {
// This is a normal, non-extension field.
fi, props, ok := structFieldByName(st, tok.value)
if !ok {
return p.errorf("unknown field name %q in %v", tok.value, st)
}
dst := sv.Field(fi)
isDstNil := isNil(dst)
// Check that it's not already set if it's not a repeated field.
if !props.Repeated && !isDstNil && dst.Kind() == reflect.Ptr {
return p.errorf("non-repeated field %q was repeated", tok.value)
}
if err := p.checkForColon(props, st.Field(fi).Type); err != nil {
return err
}
continue
// Parse into the field.
if err := p.readAny(dst, props); err != nil {
return err
}
if props.Required {
reqCount--
}
}
// This is a normal, non-extension field.
name := tok.value
var dst reflect.Value
fi, props, ok := structFieldByName(sprops, name)
if ok {
dst = sv.Field(fi)
} else if oop, ok := sprops.OneofTypes[name]; ok {
// It is a oneof.
props = oop.Prop
nv := reflect.New(oop.Type.Elem())
dst = nv.Elem().Field(0)
sv.Field(oop.Field).Set(nv)
// For backward compatibility, permit a semicolon or comma after a field.
tok = p.next()
if tok.err != nil {
return tok.err
}
if !dst.IsValid() {
return p.errorf("unknown field name %q in %v", name, st)
if tok.value != ";" && tok.value != "," {
p.back()
}
if dst.Kind() == reflect.Map {
// Consume any colon.
if err := p.checkForColon(props, dst.Type()); err != nil {
return err
}
// Construct the map if it doesn't already exist.
if dst.IsNil() {
dst.Set(reflect.MakeMap(dst.Type()))
}
key := reflect.New(dst.Type().Key()).Elem()
val := reflect.New(dst.Type().Elem()).Elem()
// The map entry should be this sequence of tokens:
// < key : KEY value : VALUE >
// Technically the "key" and "value" could come in any order,
// but in practice they won't.
tok := p.next()
var terminator string
switch tok.value {
case "<":
terminator = ">"
case "{":
terminator = "}"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
if err := p.consumeToken("key"); err != nil {
return err
}
if err := p.consumeToken(":"); err != nil {
return err
}
if err := p.readAny(key, props.mkeyprop); err != nil {
return err
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
if err := p.consumeToken("value"); err != nil {
return err
}
if err := p.checkForColon(props.mvalprop, dst.Type().Elem()); err != nil {
return err
}
if err := p.readAny(val, props.mvalprop); err != nil {
return err
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
if err := p.consumeToken(terminator); err != nil {
return err
}
dst.SetMapIndex(key, val)
continue
}
// Check that it's not already set if it's not a repeated field.
if !props.Repeated && fieldSet[name] {
return p.errorf("non-repeated field %q was repeated", name)
}
if err := p.checkForColon(props, dst.Type()); err != nil {
return err
}
// Parse into the field.
fieldSet[name] = true
if err := p.readAny(dst, props); err != nil {
if _, ok := err.(*RequiredNotSetError); !ok {
return err
}
reqFieldErr = err
} else if props.Required {
reqCount--
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
}
if reqCount > 0 {
return p.missingRequiredFieldError(sv)
}
return reqFieldErr
}
// consumeOptionalSeparator consumes an optional semicolon or comma.
// It is used in readStruct to provide backward compatibility.
func (p *textParser) consumeOptionalSeparator() error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != ";" && tok.value != "," {
p.back()
}
return nil
}
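// Both the inlined check in the hunk above and this helper implement the same
// grammar rule: a ';' or ',' after a field is consumed, anything else is
// pushed back. So these inputs all decode to the same message:
//
//	UnmarshalText(`count:42 name:"x"`, m)
//	UnmarshalText(`count:42; name:"x"`, m)
//	UnmarshalText(`count:42, name:"x",`, m)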
func (p *textParser) readAny(v reflect.Value, props *Properties) error {
func (p *textParser) readAny(v reflect.Value, props *Properties) *ParseError {
tok := p.next()
if tok.err != nil {
return tok.err
@@ -715,32 +617,18 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
fv.Set(reflect.ValueOf(bytes))
return nil
}
// Repeated field.
if tok.value == "[" {
// Repeated field with list notation, like [1,2,3].
for {
fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem()))
err := p.readAny(fv.Index(fv.Len()-1), props)
if err != nil {
return err
}
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == "]" {
break
}
if tok.value != "," {
return p.errorf("Expected ']' or ',' found %q", tok.value)
}
}
return nil
// Repeated field. May already exist.
flen := fv.Len()
if flen == fv.Cap() {
nav := reflect.MakeSlice(at, flen, 2*flen+1)
reflect.Copy(nav, fv)
fv.Set(nav)
}
// One value of the repeated field.
fv.SetLen(flen + 1)
// Read one.
p.back()
fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem()))
return p.readAny(fv.Index(fv.Len()-1), props)
return p.readAny(fv.Index(flen), props)
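// Note the behavioral difference in this hunk: the removed branch accepted
// bracketed list notation like pet:["horsey", "bunny"], while the surviving
// code reads one element per occurrence of the field name. A sketch of the
// form both versions accept:
//
//	m := new(MyMessage)
//	_ = UnmarshalText(`count:42 pet:"horsey" pet:"bunny"`, m)
//	// m.Pet == []string{"horsey", "bunny"}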
case reflect.Bool:
// Either "true", "false", 1 or 0.
switch tok.value {
@@ -767,7 +655,6 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
fv.SetInt(x)
return nil
}
if len(props.Enum) == 0 {
break
}
@@ -786,7 +673,6 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
fv.SetInt(x)
return nil
}
case reflect.Ptr:
// A basic field (indirected through pointer), or a repeated message/group
p.back()
@@ -807,7 +693,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
// TODO: Handle nested messages which implement encoding.TextUnmarshaler.
// TODO: Handle nested messages which implement textUnmarshaler.
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
@@ -825,10 +711,8 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
// UnmarshalText reads a protocol buffer in Text format. UnmarshalText resets pb
// before starting to unmarshal, so any existing data in pb is always removed.
// If a required field is not set and no other error occurs,
// UnmarshalText returns *RequiredNotSetError.
func UnmarshalText(s string, pb Message) error {
if um, ok := pb.(encoding.TextUnmarshaler); ok {
if um, ok := pb.(textUnmarshaler); ok {
err := um.UnmarshalText([]byte(s))
return err
}

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -36,9 +36,8 @@ import (
"reflect"
"testing"
. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
proto3pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/proto3_proto"
. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
. "./testdata"
. "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
)
type UnmarshalTextTest struct {
@@ -152,13 +151,13 @@ var unMarshalTextTests = []UnmarshalTextTest{
// Bad quoted string
{
in: `inner: < host: "\0" >` + "\n",
err: `line 1.15: invalid quoted string "\0": \0 requires 2 following digits`,
err: `line 1.15: invalid quoted string "\0"`,
},
// Number too large for int64
{
in: "count: 1 others { key: 123456789012345678901 }",
err: "line 1.23: invalid int64: 123456789012345678901",
in: "count: 123456789012345678901",
err: "line 1.7: invalid int32: 123456789012345678901",
},
// Number too large for int32
@@ -256,15 +255,6 @@ var unMarshalTextTests = []UnmarshalTextTest{
},
},
// Repeated field with list notation
{
in: `count:42 pet: ["horsey", "bunny"]`,
out: &MyMessage{
Count: Int32(42),
Pet: []string{"horsey", "bunny"},
},
},
// Repeated message with/without colon and <>/{}
{
in: `count:42 others:{} others{} others:<> others:{}`,
@@ -304,11 +294,8 @@ var unMarshalTextTests = []UnmarshalTextTest{
// Missing required field
{
in: `name: "Pawel"`,
err: `proto: required field "testdata.MyMessage.count" not set`,
out: &MyMessage{
Name: String("Pawel"),
},
in: ``,
err: `line 1.0: message testdata.MyMessage missing required field "count"`,
},
// Repeated non-repeated field
@@ -421,9 +408,6 @@ func TestUnmarshalText(t *testing.T) {
} else if err.Error() != test.err {
t.Errorf("Test %d: Incorrect error.\nHave: %v\nWant: %v",
i, err.Error(), test.err)
} else if _, ok := err.(*RequiredNotSetError); ok && test.out != nil && !reflect.DeepEqual(pb, test.out) {
t.Errorf("Test %d: Incorrect populated \nHave: %v\nWant: %v",
i, pb, test.out)
}
}
}
@@ -453,60 +437,6 @@ func TestRepeatedEnum(t *testing.T) {
}
}
func TestProto3TextParsing(t *testing.T) {
m := new(proto3pb.Message)
const in = `name: "Wallace" true_scotsman: true`
want := &proto3pb.Message{
Name: "Wallace",
TrueScotsman: true,
}
if err := UnmarshalText(in, m); err != nil {
t.Fatal(err)
}
if !Equal(m, want) {
t.Errorf("\n got %v\nwant %v", m, want)
}
}
func TestMapParsing(t *testing.T) {
m := new(MessageWithMap)
const in = `name_mapping:<key:1234 value:"Feist"> name_mapping:<key:1 value:"Beatles">` +
`msg_mapping:<key:-4, value:<f: 2.0>,>` + // separating commas are okay
`msg_mapping<key:-2 value<f: 4.0>>` + // no colon after "value"
`byte_mapping:<key:true value:"so be it">`
want := &MessageWithMap{
NameMapping: map[int32]string{
1: "Beatles",
1234: "Feist",
},
MsgMapping: map[int64]*FloatingPoint{
-4: {F: Float64(2.0)},
-2: {F: Float64(4.0)},
},
ByteMapping: map[bool][]byte{
true: []byte("so be it"),
},
}
if err := UnmarshalText(in, m); err != nil {
t.Fatal(err)
}
if !Equal(m, want) {
t.Errorf("\n got %v\nwant %v", m, want)
}
}
func TestOneofParsing(t *testing.T) {
const in = `name:"Shrek"`
m := new(Communique)
want := &Communique{Union: &Communique_Name{"Shrek"}}
if err := UnmarshalText(in, m); err != nil {
t.Fatal(err)
}
if !Equal(m, want) {
t.Errorf("\n got %v\nwant %v", m, want)
}
}
var benchInput string
func init() {

View File

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -39,10 +39,9 @@ import (
"strings"
"testing"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto"
"github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
proto3pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/proto3_proto"
pb "github.com/coreos/etcd/Godeps/_workspace/src/github.com/gogo/protobuf/proto/testdata"
pb "./testdata"
)
// textMessage implements the methods that allow it to marshal and unmarshal
@@ -208,30 +207,6 @@ func TestMarshalTextUnknownEnum(t *testing.T) {
}
}
func TestTextOneof(t *testing.T) {
tests := []struct {
m proto.Message
want string
}{
// zero message
{&pb.Communique{}, ``},
// scalar field
{&pb.Communique{Union: &pb.Communique_Number{Number: 4}}, `number:4`},
// message field
{&pb.Communique{Union: &pb.Communique_Msg{
Msg: &pb.Strings{StringField: proto.String("why hello!")},
}}, `msg:<string_field:"why hello!" >`},
// bad oneof (should not panic)
{&pb.Communique{Union: &pb.Communique_Msg{Msg: nil}}, `msg:/* nil */`},
}
for _, test := range tests {
got := strings.TrimSpace(test.m.String())
if got != test.want {
t.Errorf("\n got %s\nwant %s", got, test.want)
}
}
}
func BenchmarkMarshalTextBuffered(b *testing.B) {
buf := new(bytes.Buffer)
m := newTestMessage()
@@ -410,65 +385,3 @@ func TestFloats(t *testing.T) {
}
}
}
func TestRepeatedNilText(t *testing.T) {
m := &pb.MessageList{
Message: []*pb.MessageList_Message{
nil,
{
Name: proto.String("Horse"),
},
nil,
},
}
want := `Message <nil>
Message {
name: "Horse"
}
Message <nil>
`
if s := proto.MarshalTextString(m); s != want {
t.Errorf(" got: %s\nwant: %s", s, want)
}
}
func TestProto3Text(t *testing.T) {
tests := []struct {
m proto.Message
want string
}{
// zero message
{&proto3pb.Message{}, ``},
// zero message except for an empty byte slice
{&proto3pb.Message{Data: []byte{}}, ``},
// trivial case
{&proto3pb.Message{Name: "Rob", HeightInCm: 175}, `name:"Rob" height_in_cm:175`},
// empty map
{&pb.MessageWithMap{}, ``},
// non-empty map; map format is the same as a repeated struct,
// and they are sorted by key (numerically for numeric keys).
{
&pb.MessageWithMap{NameMapping: map[int32]string{
-1: "Negatory",
7: "Lucky",
1234: "Feist",
6345789: "Otis",
}},
`name_mapping:<key:-1 value:"Negatory" > ` +
`name_mapping:<key:7 value:"Lucky" > ` +
`name_mapping:<key:1234 value:"Feist" > ` +
`name_mapping:<key:6345789 value:"Otis" >`,
},
// map with nil value; not well-defined, but we shouldn't crash
{
&pb.MessageWithMap{MsgMapping: map[int64]*pb.FloatingPoint{7: nil}},
`msg_mapping:<key:7 >`,
},
}
for _, test := range tests {
got := strings.TrimSpace(test.m.String())
if got != test.want {
t.Errorf("\n got %s\nwant %s", got, test.want)
}
}
}

View File

@@ -1,5 +0,0 @@
#*
*~
/tools/pass/pass
/tools/pcaptest/pcaptest
/tools/tcpdump/tcpdump

View File

@@ -1,27 +0,0 @@
Copyright (c) 2009-2011 Andreas Krennmair. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Andreas Krennmair nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,11 +0,0 @@
# PCAP
This is a simple wrapper around libpcap for Go. Originally written by Andreas
Krennmair <ak@synflood.at> and only lightly touched up by Mark Smith <mark@qq.is>.
Please see the included pcaptest.go and tcpdump.go programs for instructions on
how to use this library.
Miek Gieben <miek@miek.nl> has created a more Go-like package and replaced functionality
with standard functions from the standard library. The package has also been renamed to
pcap.

View File

@@ -1,527 +0,0 @@
package pcap
import (
"encoding/binary"
"fmt"
"net"
"reflect"
"strings"
)
const (
TYPE_IP = 0x0800
TYPE_ARP = 0x0806
TYPE_IP6 = 0x86DD
TYPE_VLAN = 0x8100
IP_ICMP = 1
IP_INIP = 4
IP_TCP = 6
IP_UDP = 17
)
const (
ERRBUF_SIZE = 256
// According to pcap-linktype(7).
LINKTYPE_NULL = 0
LINKTYPE_ETHERNET = 1
LINKTYPE_TOKEN_RING = 6
LINKTYPE_ARCNET = 7
LINKTYPE_SLIP = 8
LINKTYPE_PPP = 9
LINKTYPE_FDDI = 10
LINKTYPE_ATM_RFC1483 = 100
LINKTYPE_RAW = 101
LINKTYPE_PPP_HDLC = 50
LINKTYPE_PPP_ETHER = 51
LINKTYPE_C_HDLC = 104
LINKTYPE_IEEE802_11 = 105
LINKTYPE_FRELAY = 107
LINKTYPE_LOOP = 108
LINKTYPE_LINUX_SLL = 113
LINKTYPE_LTALK = 114
LINKTYPE_PFLOG = 117
LINKTYPE_PRISM_HEADER = 119
LINKTYPE_IP_OVER_FC = 122
LINKTYPE_SUNATM = 123
LINKTYPE_IEEE802_11_RADIO = 127
LINKTYPE_ARCNET_LINUX = 129
LINKTYPE_LINUX_IRDA = 144
LINKTYPE_LINUX_LAPD = 177
)
type addrHdr interface {
SrcAddr() string
DestAddr() string
Len() int
}
type addrStringer interface {
String(addr addrHdr) string
}
func decodemac(pkt []byte) uint64 {
mac := uint64(0)
for i := uint(0); i < 6; i++ {
mac = (mac << 8) + uint64(pkt[i])
}
return mac
}
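// decodemac packs the six big-endian MAC bytes into one uint64, so addresses
// compare and print as plain integers (the tests below match literals such as
// 0x00000c9ff020):
//
//	mac := decodemac([]byte{0x00, 0x00, 0x0c, 0x9f, 0xf0, 0x20})
//	fmt.Printf("%012x", mac) // 00000c9ff020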
// Decode decodes the headers of a Packet.
func (p *Packet) Decode() {
if len(p.Data) <= 14 {
return
}
p.Type = int(binary.BigEndian.Uint16(p.Data[12:14]))
p.DestMac = decodemac(p.Data[0:6])
p.SrcMac = decodemac(p.Data[6:12])
if len(p.Data) >= 15 {
p.Payload = p.Data[14:]
}
switch p.Type {
case TYPE_IP:
p.decodeIp()
case TYPE_IP6:
p.decodeIp6()
case TYPE_ARP:
p.decodeArp()
case TYPE_VLAN:
p.decodeVlan()
}
}
func (p *Packet) headerString(headers []interface{}) string {
// If there's just one header, return that.
if len(headers) == 1 {
if hdr, ok := headers[0].(fmt.Stringer); ok {
return hdr.String()
}
}
// If there are two headers (IPv4/IPv6 -> TCP/UDP/IP..)
if len(headers) == 2 {
// Commonly the first header is an address.
if addr, ok := p.Headers[0].(addrHdr); ok {
if hdr, ok := p.Headers[1].(addrStringer); ok {
return fmt.Sprintf("%s %s", p.Time, hdr.String(addr))
}
}
}
// For IP in IP, we do a recursive call.
if len(headers) >= 2 {
if addr, ok := headers[0].(addrHdr); ok {
if _, ok := headers[1].(addrHdr); ok {
return fmt.Sprintf("%s > %s IP in IP: ",
addr.SrcAddr(), addr.DestAddr(), p.headerString(headers[1:]))
}
}
}
var typeNames []string
for _, hdr := range headers {
typeNames = append(typeNames, reflect.TypeOf(hdr).String())
}
return fmt.Sprintf("unknown [%s]", strings.Join(typeNames, ","))
}
// String prints a one-line representation of the packet header.
// The output is suitable for use in a tcpdump program.
func (p *Packet) String() string {
// If there are no headers, print "unsupported protocol".
if len(p.Headers) == 0 {
return fmt.Sprintf("%s unsupported protocol %d", p.Time, int(p.Type))
}
return fmt.Sprintf("%s %s", p.Time, p.headerString(p.Headers))
}
// Arphdr is a ARP packet header.
type Arphdr struct {
Addrtype uint16
Protocol uint16
HwAddressSize uint8
ProtAddressSize uint8
Operation uint16
SourceHwAddress []byte
SourceProtAddress []byte
DestHwAddress []byte
DestProtAddress []byte
}
func (arp *Arphdr) String() (s string) {
switch arp.Operation {
case 1:
s = "ARP request"
case 2:
s = "ARP Reply"
}
if arp.Addrtype == LINKTYPE_ETHERNET && arp.Protocol == TYPE_IP {
s = fmt.Sprintf("%012x (%s) > %012x (%s)",
decodemac(arp.SourceHwAddress), arp.SourceProtAddress,
decodemac(arp.DestHwAddress), arp.DestProtAddress)
} else {
s = fmt.Sprintf("addrtype = %d protocol = %d", arp.Addrtype, arp.Protocol)
}
return
}
func (p *Packet) decodeArp() {
if len(p.Payload) < 8 {
return
}
pkt := p.Payload
arp := new(Arphdr)
arp.Addrtype = binary.BigEndian.Uint16(pkt[0:2])
arp.Protocol = binary.BigEndian.Uint16(pkt[2:4])
arp.HwAddressSize = pkt[4]
arp.ProtAddressSize = pkt[5]
arp.Operation = binary.BigEndian.Uint16(pkt[6:8])
if len(pkt) < int(8+2*arp.HwAddressSize+2*arp.ProtAddressSize) {
return
}
arp.SourceHwAddress = pkt[8 : 8+arp.HwAddressSize]
arp.SourceProtAddress = pkt[8+arp.HwAddressSize : 8+arp.HwAddressSize+arp.ProtAddressSize]
arp.DestHwAddress = pkt[8+arp.HwAddressSize+arp.ProtAddressSize : 8+2*arp.HwAddressSize+arp.ProtAddressSize]
arp.DestProtAddress = pkt[8+2*arp.HwAddressSize+arp.ProtAddressSize : 8+2*arp.HwAddressSize+2*arp.ProtAddressSize]
p.Headers = append(p.Headers, arp)
if len(pkt) >= int(8+2*arp.HwAddressSize+2*arp.ProtAddressSize) {
p.Payload = p.Payload[8+2*arp.HwAddressSize+2*arp.ProtAddressSize:]
}
}
// Iphdr is the header of an IP packet.
type Iphdr struct {
Version uint8
Ihl uint8
Tos uint8
Length uint16
Id uint16
Flags uint8
FragOffset uint16
Ttl uint8
Protocol uint8
Checksum uint16
SrcIp []byte
DestIp []byte
}
func (p *Packet) decodeIp() {
if len(p.Payload) < 20 {
return
}
pkt := p.Payload
ip := new(Iphdr)
ip.Version = uint8(pkt[0]) >> 4
ip.Ihl = uint8(pkt[0]) & 0x0F
ip.Tos = pkt[1]
ip.Length = binary.BigEndian.Uint16(pkt[2:4])
ip.Id = binary.BigEndian.Uint16(pkt[4:6])
flagsfrags := binary.BigEndian.Uint16(pkt[6:8])
ip.Flags = uint8(flagsfrags >> 13)
ip.FragOffset = flagsfrags & 0x1FFF
ip.Ttl = pkt[8]
ip.Protocol = pkt[9]
ip.Checksum = binary.BigEndian.Uint16(pkt[10:12])
ip.SrcIp = pkt[12:16]
ip.DestIp = pkt[16:20]
pEnd := int(ip.Length)
if pEnd > len(pkt) {
pEnd = len(pkt)
}
if len(pkt) >= pEnd && int(ip.Ihl*4) < pEnd {
p.Payload = pkt[ip.Ihl*4 : pEnd]
} else {
p.Payload = []byte{}
}
p.Headers = append(p.Headers, ip)
p.IP = ip
switch ip.Protocol {
case IP_TCP:
p.decodeTcp()
case IP_UDP:
p.decodeUdp()
case IP_ICMP:
p.decodeIcmp()
case IP_INIP:
p.decodeIp()
}
}
func (ip *Iphdr) SrcAddr() string { return net.IP(ip.SrcIp).String() }
func (ip *Iphdr) DestAddr() string { return net.IP(ip.DestIp).String() }
func (ip *Iphdr) Len() int { return int(ip.Length) }
type Vlanhdr struct {
Priority byte
DropEligible bool
VlanIdentifier int
Type int // Not actually part of the vlan header, but the type of the actual packet
}
func (v *Vlanhdr) String() string {
return fmt.Sprintf("VLAN Priority:%d Drop:%v Tag:%d", v.Priority, v.DropEligible, v.VlanIdentifier)
}
func (p *Packet) decodeVlan() {
pkt := p.Payload
vlan := new(Vlanhdr)
if len(pkt) < 4 {
return
}
// PCP is the top 3 bits of the first TCI byte; DEI is bit 4.
vlan.Priority = (pkt[0] & 0xE0) >> 5
vlan.DropEligible = pkt[0]&0x10 != 0
vlan.VlanIdentifier = int(binary.BigEndian.Uint16(pkt[:2])) & 0x0FFF
vlan.Type = int(binary.BigEndian.Uint16(p.Payload[2:4]))
p.Headers = append(p.Headers, vlan)
if len(pkt) >= 5 {
p.Payload = p.Payload[4:]
}
switch vlan.Type {
case TYPE_IP:
p.decodeIp()
case TYPE_IP6:
p.decodeIp6()
case TYPE_ARP:
p.decodeArp()
}
}
type Tcphdr struct {
SrcPort uint16
DestPort uint16
Seq uint32
Ack uint32
DataOffset uint8
Flags uint16
Window uint16
Checksum uint16
Urgent uint16
Data []byte
}
const (
TCP_FIN = 1 << iota
TCP_SYN
TCP_RST
TCP_PSH
TCP_ACK
TCP_URG
TCP_ECE
TCP_CWR
TCP_NS
)
func (p *Packet) decodeTcp() {
if len(p.Payload) < 20 {
return
}
pkt := p.Payload
tcp := new(Tcphdr)
tcp.SrcPort = binary.BigEndian.Uint16(pkt[0:2])
tcp.DestPort = binary.BigEndian.Uint16(pkt[2:4])
tcp.Seq = binary.BigEndian.Uint32(pkt[4:8])
tcp.Ack = binary.BigEndian.Uint32(pkt[8:12])
tcp.DataOffset = (pkt[12] & 0xF0) >> 4
tcp.Flags = binary.BigEndian.Uint16(pkt[12:14]) & 0x1FF
tcp.Window = binary.BigEndian.Uint16(pkt[14:16])
tcp.Checksum = binary.BigEndian.Uint16(pkt[16:18])
tcp.Urgent = binary.BigEndian.Uint16(pkt[18:20])
if len(pkt) >= int(tcp.DataOffset*4) {
p.Payload = pkt[tcp.DataOffset*4:]
}
p.Headers = append(p.Headers, tcp)
p.TCP = tcp
}
func (tcp *Tcphdr) String(hdr addrHdr) string {
return fmt.Sprintf("TCP %s:%d > %s:%d %s SEQ=%d ACK=%d LEN=%d",
hdr.SrcAddr(), int(tcp.SrcPort), hdr.DestAddr(), int(tcp.DestPort),
tcp.FlagsString(), int64(tcp.Seq), int64(tcp.Ack), hdr.Len())
}
func (tcp *Tcphdr) FlagsString() string {
var sflags []string
if 0 != (tcp.Flags & TCP_SYN) {
sflags = append(sflags, "syn")
}
if 0 != (tcp.Flags & TCP_FIN) {
sflags = append(sflags, "fin")
}
if 0 != (tcp.Flags & TCP_ACK) {
sflags = append(sflags, "ack")
}
if 0 != (tcp.Flags & TCP_PSH) {
sflags = append(sflags, "psh")
}
if 0 != (tcp.Flags & TCP_RST) {
sflags = append(sflags, "rst")
}
if 0 != (tcp.Flags & TCP_URG) {
sflags = append(sflags, "urg")
}
if 0 != (tcp.Flags & TCP_NS) {
sflags = append(sflags, "ns")
}
if 0 != (tcp.Flags & TCP_CWR) {
sflags = append(sflags, "cwr")
}
if 0 != (tcp.Flags & TCP_ECE) {
sflags = append(sflags, "ece")
}
return fmt.Sprintf("[%s]", strings.Join(sflags, " "))
}
type Udphdr struct {
SrcPort uint16
DestPort uint16
Length uint16
Checksum uint16
}
func (p *Packet) decodeUdp() {
if len(p.Payload) < 8 {
return
}
pkt := p.Payload
udp := new(Udphdr)
udp.SrcPort = binary.BigEndian.Uint16(pkt[0:2])
udp.DestPort = binary.BigEndian.Uint16(pkt[2:4])
udp.Length = binary.BigEndian.Uint16(pkt[4:6])
udp.Checksum = binary.BigEndian.Uint16(pkt[6:8])
p.Headers = append(p.Headers, udp)
p.UDP = udp
if len(p.Payload) >= 8 {
p.Payload = pkt[8:]
}
}
func (udp *Udphdr) String(hdr addrHdr) string {
return fmt.Sprintf("UDP %s:%d > %s:%d LEN=%d CHKSUM=%d",
hdr.SrcAddr(), int(udp.SrcPort), hdr.DestAddr(), int(udp.DestPort),
int(udp.Length), int(udp.Checksum))
}
type Icmphdr struct {
Type uint8
Code uint8
Checksum uint16
Id uint16
Seq uint16
Data []byte
}
func (p *Packet) decodeIcmp() *Icmphdr {
if len(p.Payload) < 8 {
return nil
}
pkt := p.Payload
icmp := new(Icmphdr)
icmp.Type = pkt[0]
icmp.Code = pkt[1]
icmp.Checksum = binary.BigEndian.Uint16(pkt[2:4])
icmp.Id = binary.BigEndian.Uint16(pkt[4:6])
icmp.Seq = binary.BigEndian.Uint16(pkt[6:8])
p.Payload = pkt[8:]
p.Headers = append(p.Headers, icmp)
return icmp
}
func (icmp *Icmphdr) String(hdr addrHdr) string {
return fmt.Sprintf("ICMP %s > %s Type = %d Code = %d ",
hdr.SrcAddr(), hdr.DestAddr(), icmp.Type, icmp.Code)
}
func (icmp *Icmphdr) TypeString() (result string) {
switch icmp.Type {
case 0:
result = fmt.Sprintf("Echo reply seq=%d", icmp.Seq)
case 3:
switch icmp.Code {
case 0:
result = "Network unreachable"
case 1:
result = "Host unreachable"
case 2:
result = "Protocol unreachable"
case 3:
result = "Port unreachable"
default:
result = "Destination unreachable"
}
case 8:
result = fmt.Sprintf("Echo request seq=%d", icmp.Seq)
case 30:
result = "Traceroute"
}
return
}
type Ip6hdr struct {
// http://www.networksorcery.com/enp/protocol/ipv6.htm
Version uint8 // 4 bits
TrafficClass uint8 // 8 bits
FlowLabel uint32 // 20 bits
Length uint16 // 16 bits
NextHeader uint8 // 8 bits, same as Protocol in Iphdr
HopLimit uint8 // 8 bits
SrcIp []byte // 16 bytes
DestIp []byte // 16 bytes
}
func (p *Packet) decodeIp6() {
if len(p.Payload) < 40 {
return
}
pkt := p.Payload
ip6 := new(Ip6hdr)
ip6.Version = uint8(pkt[0]) >> 4
ip6.TrafficClass = uint8((binary.BigEndian.Uint16(pkt[0:2]) >> 4) & 0x00FF)
ip6.FlowLabel = binary.BigEndian.Uint32(pkt[0:4]) & 0x000FFFFF
ip6.Length = binary.BigEndian.Uint16(pkt[4:6])
ip6.NextHeader = pkt[6]
ip6.HopLimit = pkt[7]
ip6.SrcIp = pkt[8:24]
ip6.DestIp = pkt[24:40]
if len(p.Payload) >= 40 {
p.Payload = pkt[40:]
}
p.Headers = append(p.Headers, ip6)
switch ip6.NextHeader {
case IP_TCP:
p.decodeTcp()
case IP_UDP:
p.decodeUdp()
case IP_ICMP:
p.decodeIcmp()
case IP_INIP:
p.decodeIp()
}
}
func (ip6 *Ip6hdr) SrcAddr() string { return net.IP(ip6.SrcIp).String() }
func (ip6 *Ip6hdr) DestAddr() string { return net.IP(ip6.DestIp).String() }
func (ip6 *Ip6hdr) Len() int { return int(ip6.Length) }

View File

@@ -1,247 +0,0 @@
package pcap
import (
"bytes"
"testing"
"time"
)
var testSimpleTcpPacket *Packet = &Packet{
Data: []byte{
0x00, 0x00, 0x0c, 0x9f, 0xf0, 0x20, 0xbc, 0x30, 0x5b, 0xe8, 0xd3, 0x49,
0x08, 0x00, 0x45, 0x00, 0x01, 0xa4, 0x39, 0xdf, 0x40, 0x00, 0x40, 0x06,
0x55, 0x5a, 0xac, 0x11, 0x51, 0x49, 0xad, 0xde, 0xfe, 0xe1, 0xc5, 0xf7,
0x00, 0x50, 0xc5, 0x7e, 0x0e, 0x48, 0x49, 0x07, 0x42, 0x32, 0x80, 0x18,
0x00, 0x73, 0xab, 0xb1, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x03, 0x77,
0x37, 0x9c, 0x42, 0x77, 0x5e, 0x3a, 0x47, 0x45, 0x54, 0x20, 0x2f, 0x20,
0x48, 0x54, 0x54, 0x50, 0x2f, 0x31, 0x2e, 0x31, 0x0d, 0x0a, 0x48, 0x6f,
0x73, 0x74, 0x3a, 0x20, 0x77, 0x77, 0x77, 0x2e, 0x66, 0x69, 0x73, 0x68,
0x2e, 0x63, 0x6f, 0x6d, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x6e, 0x65, 0x63,
0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x6b, 0x65, 0x65, 0x70, 0x2d, 0x61,
0x6c, 0x69, 0x76, 0x65, 0x0d, 0x0a, 0x55, 0x73, 0x65, 0x72, 0x2d, 0x41,
0x67, 0x65, 0x6e, 0x74, 0x3a, 0x20, 0x4d, 0x6f, 0x7a, 0x69, 0x6c, 0x6c,
0x61, 0x2f, 0x35, 0x2e, 0x30, 0x20, 0x28, 0x58, 0x31, 0x31, 0x3b, 0x20,
0x4c, 0x69, 0x6e, 0x75, 0x78, 0x20, 0x78, 0x38, 0x36, 0x5f, 0x36, 0x34,
0x29, 0x20, 0x41, 0x70, 0x70, 0x6c, 0x65, 0x57, 0x65, 0x62, 0x4b, 0x69,
0x74, 0x2f, 0x35, 0x33, 0x35, 0x2e, 0x32, 0x20, 0x28, 0x4b, 0x48, 0x54,
0x4d, 0x4c, 0x2c, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x20, 0x47, 0x65, 0x63,
0x6b, 0x6f, 0x29, 0x20, 0x43, 0x68, 0x72, 0x6f, 0x6d, 0x65, 0x2f, 0x31,
0x35, 0x2e, 0x30, 0x2e, 0x38, 0x37, 0x34, 0x2e, 0x31, 0x32, 0x31, 0x20,
0x53, 0x61, 0x66, 0x61, 0x72, 0x69, 0x2f, 0x35, 0x33, 0x35, 0x2e, 0x32,
0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x3a, 0x20, 0x74, 0x65,
0x78, 0x74, 0x2f, 0x68, 0x74, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c,
0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x68, 0x74, 0x6d,
0x6c, 0x2b, 0x78, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x6d, 0x6c, 0x3b, 0x71, 0x3d,
0x30, 0x2e, 0x39, 0x2c, 0x2a, 0x2f, 0x2a, 0x3b, 0x71, 0x3d, 0x30, 0x2e,
0x38, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x45, 0x6e,
0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x67, 0x7a, 0x69, 0x70,
0x2c, 0x64, 0x65, 0x66, 0x6c, 0x61, 0x74, 0x65, 0x2c, 0x73, 0x64, 0x63,
0x68, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x4c, 0x61,
0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x3a, 0x20, 0x65, 0x6e, 0x2d, 0x55,
0x53, 0x2c, 0x65, 0x6e, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x38, 0x0d, 0x0a,
0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x43, 0x68, 0x61, 0x72, 0x73,
0x65, 0x74, 0x3a, 0x20, 0x49, 0x53, 0x4f, 0x2d, 0x38, 0x38, 0x35, 0x39,
0x2d, 0x31, 0x2c, 0x75, 0x74, 0x66, 0x2d, 0x38, 0x3b, 0x71, 0x3d, 0x30,
0x2e, 0x37, 0x2c, 0x2a, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x33, 0x0d, 0x0a,
0x0d, 0x0a,
}}
func BenchmarkDecodeSimpleTcpPacket(b *testing.B) {
for i := 0; i < b.N; i++ {
testSimpleTcpPacket.Decode()
}
}
func TestDecodeSimpleTcpPacket(t *testing.T) {
p := testSimpleTcpPacket
p.Decode()
if p.DestMac != 0x00000c9ff020 {
t.Error("Dest mac", p.DestMac)
}
if p.SrcMac != 0xbc305be8d349 {
t.Error("Src mac", p.SrcMac)
}
if len(p.Headers) != 2 {
t.Error("Incorrect number of headers", len(p.Headers))
return
}
if ip, ipOk := p.Headers[0].(*Iphdr); ipOk {
if ip.Version != 4 {
t.Error("ip Version", ip.Version)
}
if ip.Ihl != 5 {
t.Error("ip header length", ip.Ihl)
}
if ip.Tos != 0 {
t.Error("ip TOS", ip.Tos)
}
if ip.Length != 420 {
t.Error("ip Length", ip.Length)
}
if ip.Id != 14815 {
t.Error("ip ID", ip.Id)
}
if ip.Flags != 0x02 {
t.Error("ip Flags", ip.Flags)
}
if ip.FragOffset != 0 {
t.Error("ip Fragoffset", ip.FragOffset)
}
if ip.Ttl != 64 {
t.Error("ip TTL", ip.Ttl)
}
if ip.Protocol != 6 {
t.Error("ip Protocol", ip.Protocol)
}
if ip.Checksum != 0x555A {
t.Error("ip Checksum", ip.Checksum)
}
if !bytes.Equal(ip.SrcIp, []byte{172, 17, 81, 73}) {
t.Error("ip Src", ip.SrcIp)
}
if !bytes.Equal(ip.DestIp, []byte{173, 222, 254, 225}) {
t.Error("ip Dest", ip.DestIp)
}
if tcp, tcpOk := p.Headers[1].(*Tcphdr); tcpOk {
if tcp.SrcPort != 50679 {
t.Error("tcp srcport", tcp.SrcPort)
}
if tcp.DestPort != 80 {
t.Error("tcp destport", tcp.DestPort)
}
if tcp.Seq != 0xc57e0e48 {
t.Error("tcp seq", tcp.Seq)
}
if tcp.Ack != 0x49074232 {
t.Error("tcp ack", tcp.Ack)
}
if tcp.DataOffset != 8 {
t.Error("tcp dataoffset", tcp.DataOffset)
}
if tcp.Flags != 0x18 {
t.Error("tcp flags", tcp.Flags)
}
if tcp.Window != 0x73 {
t.Error("tcp window", tcp.Window)
}
if tcp.Checksum != 0xabb1 {
t.Error("tcp checksum", tcp.Checksum)
}
if tcp.Urgent != 0 {
t.Error("tcp urgent", tcp.Urgent)
}
} else {
t.Error("Second header is not TCP header")
}
} else {
t.Error("First header is not IP header")
}
if string(p.Payload) != "GET / HTTP/1.1\r\nHost: www.fish.com\r\nConnection: keep-alive\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Encoding: gzip,deflate,sdch\r\nAccept-Language: en-US,en;q=0.8\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3\r\n\r\n" {
t.Error("--- PAYLOAD STRING ---\n", string(p.Payload), "\n--- PAYLOAD BYTES ---\n", p.Payload)
}
}
// Makes sure packet payload doesn't display the 6 trailing nulls of this packet
// as part of the payload. They're actually the ethernet trailer.
func TestDecodeSmallTcpPacketHasEmptyPayload(t *testing.T) {
p := &Packet{
// This packet is only 54 bytes (an empty TCP RST), thus 6 trailing null
// bytes are added by the ethernet layer to make it the minimum packet size.
Data: []byte{
0xbc, 0x30, 0x5b, 0xe8, 0xd3, 0x49, 0xb8, 0xac, 0x6f, 0x92, 0xd5, 0xbf,
0x08, 0x00, 0x45, 0x00, 0x00, 0x28, 0x00, 0x00, 0x40, 0x00, 0x40, 0x06,
0x3f, 0x9f, 0xac, 0x11, 0x51, 0xc5, 0xac, 0x11, 0x51, 0x49, 0x00, 0x63,
0x9a, 0xef, 0x00, 0x00, 0x00, 0x00, 0x2e, 0xc1, 0x27, 0x83, 0x50, 0x14,
0x00, 0x00, 0xc3, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}}
p.Decode()
if p.Payload == nil {
t.Error("Nil payload")
}
if len(p.Payload) != 0 {
t.Error("Non-empty payload:", p.Payload)
}
}
func TestDecodeVlanPacket(t *testing.T) {
p := &Packet{
Data: []byte{
0x00, 0x10, 0xdb, 0xff, 0x10, 0x00, 0x00, 0x15, 0x2c, 0x9d, 0xcc, 0x00, 0x81, 0x00, 0x01, 0xf7,
0x08, 0x00, 0x45, 0x00, 0x00, 0x28, 0x29, 0x8d, 0x40, 0x00, 0x7d, 0x06, 0x83, 0xa0, 0xac, 0x1b,
0xca, 0x8e, 0x45, 0x16, 0x94, 0xe2, 0xd4, 0x0a, 0x00, 0x50, 0xdf, 0xab, 0x9c, 0xc6, 0xcd, 0x1e,
0xe5, 0xd1, 0x50, 0x10, 0x01, 0x00, 0x5a, 0x74, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}}
p.Decode()
if p.Type != TYPE_VLAN {
t.Error("Didn't detect vlan")
}
if len(p.Headers) != 3 {
t.Error("Incorrect number of headers:", len(p.Headers))
for i, h := range p.Headers {
t.Errorf("Header %d: %#v", i, h)
}
t.FailNow()
}
if _, ok := p.Headers[0].(*Vlanhdr); !ok {
t.Errorf("First header isn't vlan: %q", p.Headers[0])
}
if _, ok := p.Headers[1].(*Iphdr); !ok {
t.Errorf("Second header isn't IP: %q", p.Headers[1])
}
if _, ok := p.Headers[2].(*Tcphdr); !ok {
t.Errorf("Third header isn't TCP: %q", p.Headers[2])
}
}
func TestDecodeFuzzFallout(t *testing.T) {
testData := []struct {
Data []byte
}{
{[]byte("000000000000\x81\x000")},
{[]byte("000000000000\x81\x00000")},
{[]byte("000000000000\x86\xdd0")},
{[]byte("000000000000\b\x000")},
{[]byte("000000000000\b\x060")},
{[]byte{}},
{[]byte("000000000000\b\x0600000000")},
{[]byte("000000000000\x86\xdd000000\x01000000000000000000000000000000000")},
{[]byte("000000000000\x81\x0000\b\x0600000000")},
{[]byte("000000000000\b\x00n0000000000000000000")},
{[]byte("000000000000\x86\xdd000000\x0100000000000000000000000000000000000")},
{[]byte("000000000000\x81\x0000\b\x00g0000000000000000000")},
//{[]byte()},
{[]byte("000000000000\b\x00400000000\x110000000000")},
{[]byte("0nMء\xfe\x13\x13\x81\x00gr\b\x00&x\xc9\xe5b'\x1e0\x00\x04\x00\x0020596224")},
{[]byte("000000000000\x81\x0000\b\x00400000000\x110000000000")},
{[]byte("000000000000\b\x00000000000\x0600\xff0000000")},
{[]byte("000000000000\x86\xdd000000\x06000000000000000000000000000000000")},
{[]byte("000000000000\x81\x0000\b\x00000000000\x0600b0000000")},
{[]byte("000000000000\x81\x0000\b\x00400000000\x060000000000")},
{[]byte("000000000000\x86\xdd000000\x11000000000000000000000000000000000")},
{[]byte("000000000000\x86\xdd000000\x0600000000000000000000000000000000000000000000M")},
{[]byte("000000000000\b\x00500000000\x0600000000000")},
{[]byte("0nM\xd80\xfe\x13\x13\x81\x00gr\b\x00&x\xc9\xe5b'\x1e0\x00\x04\x00\x0020596224")},
}
for _, entry := range testData {
pkt := &Packet{
Time: time.Now(),
Caplen: uint32(len(entry.Data)),
Len: uint32(len(entry.Data)),
Data: entry.Data,
}
pkt.Decode()
/*
func() {
defer func() {
if err := recover(); err != nil {
t.Fatalf("%d. %q failed: %v", idx, string(entry.Data), err)
}
}()
pkt.Decode()
}()
*/
}
}

View File

@@ -1,206 +0,0 @@
package pcap
import (
"encoding/binary"
"fmt"
"io"
"time"
)
// FileHeader is the parsed header of a pcap file.
// http://wiki.wireshark.org/Development/LibpcapFileFormat
type FileHeader struct {
MagicNumber uint32
VersionMajor uint16
VersionMinor uint16
TimeZone int32
SigFigs uint32
SnapLen uint32
Network uint32
}
type PacketTime struct {
Sec int32
Usec int32
}
// Time converts the PacketTime to a Go time.Time.
func (p *PacketTime) Time() time.Time {
return time.Unix(int64(p.Sec), int64(p.Usec)*1000)
}
// Packet is a single packet parsed from a pcap file.
//
// Convenient access to IP, TCP, and UDP headers is provided after Decode()
// is called if the packet is of the appropriate type.
type Packet struct {
Time time.Time // packet send/receive time
Caplen uint32 // bytes stored in the file (caplen <= len)
Len uint32 // bytes sent/received
Data []byte // packet data
Type int // protocol type, see LINKTYPE_*
DestMac uint64
SrcMac uint64
Headers []interface{} // decoded headers, in order
Payload []byte // remaining non-header bytes
IP *Iphdr // IP header (for IP packets, after decoding)
TCP *Tcphdr // TCP header (for TCP packets, after decoding)
UDP *Udphdr // UDP header (for UDP packets after decoding)
}
// Reader parses pcap files.
type Reader struct {
flip bool
buf io.Reader
err error
fourBytes []byte
twoBytes []byte
sixteenBytes []byte
Header FileHeader
}
// NewReader reads pcap data from an io.Reader.
func NewReader(reader io.Reader) (*Reader, error) {
r := &Reader{
buf: reader,
fourBytes: make([]byte, 4),
twoBytes: make([]byte, 2),
sixteenBytes: make([]byte, 16),
}
switch magic := r.readUint32(); magic {
case 0xa1b2c3d4:
r.flip = false
case 0xd4c3b2a1:
r.flip = true
default:
return nil, fmt.Errorf("pcap: bad magic number: %0x", magic)
}
r.Header = FileHeader{
MagicNumber: 0xa1b2c3d4,
VersionMajor: r.readUint16(),
VersionMinor: r.readUint16(),
TimeZone: r.readInt32(),
SigFigs: r.readUint32(),
SnapLen: r.readUint32(),
Network: r.readUint32(),
}
return r, nil
}
// Next returns the next packet or nil if no more packets can be read.
func (r *Reader) Next() *Packet {
d := r.sixteenBytes
r.err = r.read(d)
if r.err != nil {
return nil
}
timeSec := asUint32(d[0:4], r.flip)
timeUsec := asUint32(d[4:8], r.flip)
capLen := asUint32(d[8:12], r.flip)
origLen := asUint32(d[12:16], r.flip)
data := make([]byte, capLen)
if r.err = r.read(data); r.err != nil {
return nil
}
return &Packet{
Time: time.Unix(int64(timeSec), int64(timeUsec)*1000), // usec -> nsec, matching PacketTime.Time
Caplen: capLen,
Len: origLen,
Data: data,
}
}
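// Offline parsing is a loop over Next, with Decode filling in the typed
// headers. A minimal sketch (file name hypothetical):
//
//	f, _ := os.Open("capture.pcap")
//	defer f.Close()
//	r, err := NewReader(bufio.NewReader(f))
//	if err != nil {
//		log.Fatal(err)
//	}
//	for pkt := r.Next(); pkt != nil; pkt = r.Next() {
//		pkt.Decode()
//		fmt.Println(pkt.String())
//	}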
func (r *Reader) read(data []byte) error {
var err error
n, err := r.buf.Read(data)
for err == nil && n != len(data) {
var chunk int
chunk, err = r.buf.Read(data[n:])
n += chunk
}
if len(data) == n {
return nil
}
return err
}
func (r *Reader) readUint32() uint32 {
data := r.fourBytes
if r.err = r.read(data); r.err != nil {
return 0
}
return asUint32(data, r.flip)
}
func (r *Reader) readInt32() int32 {
data := r.fourBytes
if r.err = r.read(data); r.err != nil {
return 0
}
return int32(asUint32(data, r.flip))
}
func (r *Reader) readUint16() uint16 {
data := r.twoBytes
if r.err = r.read(data); r.err != nil {
return 0
}
return asUint16(data, r.flip)
}
// Writer writes a pcap file.
type Writer struct {
writer io.Writer
buf []byte
}
// NewWriter creates a Writer that stores output in an io.Writer.
// The FileHeader is written immediately.
func NewWriter(writer io.Writer, header *FileHeader) (*Writer, error) {
w := &Writer{
writer: writer,
buf: make([]byte, 24),
}
binary.LittleEndian.PutUint32(w.buf, header.MagicNumber)
binary.LittleEndian.PutUint16(w.buf[4:], header.VersionMajor)
binary.LittleEndian.PutUint16(w.buf[6:], header.VersionMinor)
binary.LittleEndian.PutUint32(w.buf[8:], uint32(header.TimeZone))
binary.LittleEndian.PutUint32(w.buf[12:], header.SigFigs)
binary.LittleEndian.PutUint32(w.buf[16:], header.SnapLen)
binary.LittleEndian.PutUint32(w.buf[20:], header.Network)
if _, err := writer.Write(w.buf); err != nil {
return nil, err
}
return w, nil
}
// Write writes a packet to the underlying writer.
func (w *Writer) Write(pkt *Packet) error {
binary.LittleEndian.PutUint32(w.buf, uint32(pkt.Time.Unix()))
binary.LittleEndian.PutUint32(w.buf[4:], uint32(pkt.Time.Nanosecond()/1000)) // ts_usec: pcap stores microseconds
binary.LittleEndian.PutUint32(w.buf[8:], pkt.Caplen) // incl_len; orig_len follows
binary.LittleEndian.PutUint32(w.buf[12:], pkt.Len)
if _, err := w.writer.Write(w.buf[:16]); err != nil {
return err
}
_, err := w.writer.Write(pkt.Data)
return err
}
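// NewWriter emits the 24-byte global header once; each Write then prepends a
// 16-byte record header to the raw bytes. A sketch copying a capture into a
// new file, reusing r from the Reader example (out is any io.Writer):
//
//	w, err := NewWriter(out, &r.Header)
//	if err != nil {
//		log.Fatal(err)
//	}
//	for pkt := r.Next(); pkt != nil; pkt = r.Next() {
//		if err := w.Write(pkt); err != nil {
//			log.Fatal(err)
//		}
//	}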
func asUint32(data []byte, flip bool) uint32 {
if flip {
return binary.BigEndian.Uint32(data)
}
return binary.LittleEndian.Uint32(data)
}
func asUint16(data []byte, flip bool) uint16 {
if flip {
return binary.BigEndian.Uint16(data)
}
return binary.LittleEndian.Uint16(data)
}


@@ -1,266 +0,0 @@
// Package pcap provides an interface to both live and offline pcap parsing.
package pcap
/*
#cgo linux LDFLAGS: -lpcap
#cgo freebsd LDFLAGS: -lpcap
#cgo darwin LDFLAGS: -lpcap
#cgo windows CFLAGS: -I C:/WpdPack/Include
#cgo windows,386 LDFLAGS: -L C:/WpdPack/Lib -lwpcap
#cgo windows,amd64 LDFLAGS: -L C:/WpdPack/Lib/x64 -lwpcap
#include <stdlib.h>
#include <pcap.h>
// Workaround for not knowing how to cast to const u_char**
int hack_pcap_next_ex(pcap_t *p, struct pcap_pkthdr **pkt_header,
u_char **pkt_data) {
return pcap_next_ex(p, pkt_header, (const u_char **)pkt_data);
}
*/
import "C"
import (
"errors"
"net"
"syscall"
"time"
"unsafe"
)
type Pcap struct {
cptr *C.pcap_t
}
type Stat struct {
PacketsReceived uint32
PacketsDropped uint32
PacketsIfDropped uint32
}
type Interface struct {
Name string
Description string
Addresses []IFAddress
// TODO: add more elements
}
type IFAddress struct {
IP net.IP
Netmask net.IPMask
// TODO: add broadcast + PtP dst ?
}
func (p *Pcap) Next() (pkt *Packet) {
rv, _ := p.NextEx()
return rv
}
// Openlive opens a device and returns a *Pcap handle
func Openlive(device string, snaplen int32, promisc bool, timeout_ms int32) (handle *Pcap, err error) {
var buf *C.char
buf = (*C.char)(C.calloc(ERRBUF_SIZE, 1))
h := new(Pcap)
var pro int32
if promisc {
pro = 1
}
dev := C.CString(device)
defer C.free(unsafe.Pointer(dev))
h.cptr = C.pcap_open_live(dev, C.int(snaplen), C.int(pro), C.int(timeout_ms), buf)
if nil == h.cptr {
handle = nil
err = errors.New(C.GoString(buf))
} else {
handle = h
}
C.free(unsafe.Pointer(buf))
return
}
func Openoffline(file string) (handle *Pcap, err error) {
var buf *C.char
buf = (*C.char)(C.calloc(ERRBUF_SIZE, 1))
h := new(Pcap)
cf := C.CString(file)
defer C.free(unsafe.Pointer(cf))
h.cptr = C.pcap_open_offline(cf, buf)
if nil == h.cptr {
handle = nil
err = errors.New(C.GoString(buf))
} else {
handle = h
}
C.free(unsafe.Pointer(buf))
return
}
func (p *Pcap) NextEx() (pkt *Packet, result int32) {
var pkthdr *C.struct_pcap_pkthdr
var buf_ptr *C.u_char
var buf unsafe.Pointer
result = int32(C.hack_pcap_next_ex(p.cptr, &pkthdr, &buf_ptr))
buf = unsafe.Pointer(buf_ptr)
if nil == buf {
return
}
pkt = new(Packet)
pkt.Time = time.Unix(int64(pkthdr.ts.tv_sec), int64(pkthdr.ts.tv_usec)*1000)
pkt.Caplen = uint32(pkthdr.caplen)
pkt.Len = uint32(pkthdr.len)
pkt.Data = C.GoBytes(buf, C.int(pkthdr.caplen))
return
}
func (p *Pcap) Close() {
C.pcap_close(p.cptr)
}
func (p *Pcap) Geterror() error {
return errors.New(C.GoString(C.pcap_geterr(p.cptr)))
}
func (p *Pcap) Getstats() (stat *Stat, err error) {
var cstats _Ctype_struct_pcap_stat
if -1 == C.pcap_stats(p.cptr, &cstats) {
return nil, p.Geterror()
}
stats := new(Stat)
stats.PacketsReceived = uint32(cstats.ps_recv)
stats.PacketsDropped = uint32(cstats.ps_drop)
stats.PacketsIfDropped = uint32(cstats.ps_ifdrop)
return stats, nil
}
func (p *Pcap) Setfilter(expr string) (err error) {
var bpf _Ctype_struct_bpf_program
cexpr := C.CString(expr)
defer C.free(unsafe.Pointer(cexpr))
if -1 == C.pcap_compile(p.cptr, &bpf, cexpr, 1, 0) {
return p.Geterror()
}
if -1 == C.pcap_setfilter(p.cptr, &bpf) {
C.pcap_freecode(&bpf)
return p.Geterror()
}
C.pcap_freecode(&bpf)
return nil
}
func Version() string {
return C.GoString(C.pcap_lib_version())
}
func (p *Pcap) Datalink() int {
return int(C.pcap_datalink(p.cptr))
}
func (p *Pcap) Setdatalink(dlt int) error {
if -1 == C.pcap_set_datalink(p.cptr, C.int(dlt)) {
return p.Geterror()
}
return nil
}
func DatalinkValueToName(dlt int) string {
if name := C.pcap_datalink_val_to_name(C.int(dlt)); name != nil {
return C.GoString(name)
}
return ""
}
func DatalinkValueToDescription(dlt int) string {
if desc := C.pcap_datalink_val_to_description(C.int(dlt)); desc != nil {
return C.GoString(desc)
}
return ""
}
func Findalldevs() (ifs []Interface, err error) {
var buf *C.char
buf = (*C.char)(C.calloc(ERRBUF_SIZE, 1))
defer C.free(unsafe.Pointer(buf))
var alldevsp *C.pcap_if_t
if -1 == C.pcap_findalldevs((**C.pcap_if_t)(&alldevsp), buf) {
return nil, errors.New(C.GoString(buf))
}
defer C.pcap_freealldevs((*C.pcap_if_t)(alldevsp))
dev := alldevsp
var i uint32
for i = 0; dev != nil; dev = (*C.pcap_if_t)(dev.next) {
i++
}
ifs = make([]Interface, i)
dev = alldevsp
for j := uint32(0); dev != nil; dev = (*C.pcap_if_t)(dev.next) {
var iface Interface
iface.Name = C.GoString(dev.name)
iface.Description = C.GoString(dev.description)
iface.Addresses = findalladdresses(dev.addresses)
// TODO: add more elements
ifs[j] = iface
j++
}
return
}
func findalladdresses(addresses *_Ctype_struct_pcap_addr) (retval []IFAddress) {
// TODO - make it support more than IPv4 and IPv6?
retval = make([]IFAddress, 0, 1)
for curaddr := addresses; curaddr != nil; curaddr = (*_Ctype_struct_pcap_addr)(curaddr.next) {
var a IFAddress
var err error
if a.IP, err = sockaddr_to_IP((*syscall.RawSockaddr)(unsafe.Pointer(curaddr.addr))); err != nil {
continue
}
if a.Netmask, err = sockaddr_to_IP((*syscall.RawSockaddr)(unsafe.Pointer(curaddr.netmask))); err != nil { // read netmask, not addr, here
continue
}
retval = append(retval, a)
}
return
}
func sockaddr_to_IP(rsa *syscall.RawSockaddr) (IP []byte, err error) {
switch rsa.Family {
case syscall.AF_INET:
pp := (*syscall.RawSockaddrInet4)(unsafe.Pointer(rsa))
IP = make([]byte, 4)
for i := 0; i < len(IP); i++ {
IP[i] = pp.Addr[i]
}
return
case syscall.AF_INET6:
pp := (*syscall.RawSockaddrInet6)(unsafe.Pointer(rsa))
IP = make([]byte, 16)
for i := 0; i < len(IP); i++ {
IP[i] = pp.Addr[i]
}
return
}
err = errors.New("Unsupported address type")
return
}
func (p *Pcap) Inject(data []byte) (err error) {
buf := (*C.char)(C.malloc((C.size_t)(len(data))))
for i := 0; i < len(data); i++ {
*(*byte)(unsafe.Pointer(uintptr(unsafe.Pointer(buf)) + uintptr(i))) = data[i]
}
if -1 == C.pcap_sendpacket(p.cptr, (*C.u_char)(unsafe.Pointer(buf)), (C.int)(len(data))) {
err = p.Geterror()
}
C.free(unsafe.Pointer(buf))
return
}


@@ -1,49 +0,0 @@
package main
import (
"flag"
"fmt"
"os"
"runtime/pprof"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
func main() {
var filename *string = flag.String("file", "", "filename")
var decode *bool = flag.Bool("d", false, "If true, decode each packet")
var cpuprofile *string = flag.String("cpuprofile", "", "filename")
flag.Parse()
h, err := pcap.Openoffline(*filename)
if err != nil {
fmt.Printf("Couldn't create pcap reader: %v\n", err)
os.Exit(1) // h is nil on error; continuing would panic below
}
if *cpuprofile != "" {
if out, err := os.Create(*cpuprofile); err == nil {
pprof.StartCPUProfile(out)
defer func() {
pprof.StopCPUProfile()
out.Close()
}()
} else {
panic(err)
}
}
i, nilPackets := 0, 0
start := time.Now()
for pkt, code := h.NextEx(); code != -2; pkt, code = h.NextEx() {
if pkt == nil {
nilPackets++
} else if *decode {
pkt.Decode()
}
i++
}
duration := time.Since(start)
fmt.Printf("Took %v to process %v packets, %v per packet, %d nil packets\n", duration, i, duration/time.Duration(i), nilPackets)
}


@@ -1,96 +0,0 @@
package main
// Parses a pcap file, writes it back to disk, then verifies the files
// are the same.
import (
"bufio"
"flag"
"fmt"
"io"
"os"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
var input *string = flag.String("input", "", "input file")
var output *string = flag.String("output", "", "output file")
var decode *bool = flag.Bool("decode", false, "print decoded packets")
func copyPcap(dest, src string) {
f, err := os.Open(src)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", src, err)
return
}
defer f.Close()
reader, err := pcap.NewReader(bufio.NewReader(f))
if err != nil {
fmt.Printf("couldn't create reader: %v\n", err)
return
}
w, err := os.Create(dest)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", dest, err)
return
}
defer w.Close()
buf := bufio.NewWriter(w)
writer, err := pcap.NewWriter(buf, &reader.Header)
if err != nil {
fmt.Printf("couldn't create writer: %v\n", err)
return
}
for {
pkt := reader.Next()
if pkt == nil {
break
}
if *decode {
pkt.Decode()
fmt.Println(pkt.String())
}
writer.Write(pkt)
}
buf.Flush()
}
func check(dest, src string) {
f, err := os.Open(src)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", src, err)
return
}
defer f.Close()
freader := bufio.NewReader(f)
g, err := os.Open(dest)
if err != nil {
fmt.Printf("couldn't open %q: %v\n", src, err)
return
}
defer g.Close()
greader := bufio.NewReader(g)
for {
fb, ferr := freader.ReadByte()
gb, gerr := greader.ReadByte()
if ferr == io.EOF && gerr == io.EOF {
break
}
if ferr != gerr {
// One file ended before the other: the sizes differ.
fmt.Println("FAIL")
return
}
if fb == gb {
continue
}
fmt.Println("FAIL")
return
}
fmt.Println("PASS")
}
func main() {
flag.Parse()
copyPcap(*output, *input)
check(*output, *input)
}


@@ -1,82 +0,0 @@
package main
import (
"flag"
"fmt"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
func min(x uint32, y uint32) uint32 {
if x < y {
return x
}
return y
}
func main() {
var device *string = flag.String("d", "", "device")
var file *string = flag.String("r", "", "file")
var expr *string = flag.String("e", "", "filter expression")
flag.Parse()
var h *pcap.Pcap
var err error
ifs, err := pcap.Findalldevs()
if len(ifs) == 0 {
fmt.Printf("Warning: no devices found : %s\n", err)
} else {
for i := 0; i < len(ifs); i++ {
fmt.Printf("dev %d: %s (%s)\n", i+1, ifs[i].Name, ifs[i].Description)
}
}
if *device != "" {
h, err = pcap.Openlive(*device, 65535, true, 0)
if h == nil {
fmt.Printf("Openlive(%s) failed: %s\n", *device, err)
return
}
} else if *file != "" {
h, err = pcap.Openoffline(*file)
if h == nil {
fmt.Printf("Openoffline(%s) failed: %s\n", *file, err)
return
}
} else {
fmt.Printf("usage: pcaptest [-d <device> | -r <file>]\n")
return
}
defer h.Close()
fmt.Printf("pcap version: %s\n", pcap.Version())
if *expr != "" {
fmt.Printf("Setting filter: %s\n", *expr)
err := h.Setfilter(*expr)
if err != nil {
fmt.Printf("Warning: setting filter failed: %s\n", err)
}
}
for pkt := h.Next(); pkt != nil; pkt = h.Next() {
fmt.Printf("time: %d.%06d (%s) caplen: %d len: %d\nData:",
int64(pkt.Time.Second()), int64(pkt.Time.Nanosecond()),
time.Unix(int64(pkt.Time.Second()), 0).String(), int64(pkt.Caplen), int64(pkt.Len))
for i := uint32(0); i < pkt.Caplen; i++ {
if i%32 == 0 {
fmt.Printf("\n")
}
if 32 <= pkt.Data[i] && pkt.Data[i] <= 126 {
fmt.Printf("%c", pkt.Data[i])
} else {
fmt.Printf(".")
}
}
fmt.Printf("\n\n")
}
}


@@ -1,121 +0,0 @@
package main
import (
"bufio"
"flag"
"fmt"
"os"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/akrennmair/gopcap"
)
const (
TYPE_IP = 0x0800
TYPE_ARP = 0x0806
TYPE_IP6 = 0x86DD
IP_ICMP = 1
IP_INIP = 4
IP_TCP = 6
IP_UDP = 17
)
var out *bufio.Writer
var errout *bufio.Writer
func main() {
var device *string = flag.String("i", "", "interface")
var snaplen *int = flag.Int("s", 65535, "snaplen")
var hexdump *bool = flag.Bool("X", false, "hexdump")
expr := ""
out = bufio.NewWriter(os.Stdout)
errout = bufio.NewWriter(os.Stderr)
flag.Usage = func() {
fmt.Fprintf(errout, "usage: %s [ -i interface ] [ -s snaplen ] [ -X ] [ expression ]\n", os.Args[0])
errout.Flush()
os.Exit(1)
}
flag.Parse()
if len(flag.Args()) > 0 {
expr = flag.Arg(0)
}
if *device == "" {
devs, err := pcap.Findalldevs()
if err != nil {
fmt.Fprintf(errout, "tcpdump: couldn't find any devices: %s\n", err)
}
if 0 == len(devs) {
flag.Usage()
}
*device = devs[0].Name
}
h, err := pcap.Openlive(*device, int32(*snaplen), true, 0)
if h == nil {
fmt.Fprintf(errout, "tcpdump: %s\n", err)
errout.Flush()
return
}
defer h.Close()
if expr != "" {
ferr := h.Setfilter(expr)
if ferr != nil {
fmt.Fprintf(out, "tcpdump: %s\n", ferr)
out.Flush()
}
}
for pkt := h.Next(); pkt != nil; pkt = h.Next() {
pkt.Decode()
fmt.Fprintf(out, "%s\n", pkt.String())
if *hexdump {
Hexdump(pkt)
}
out.Flush()
}
}
func min(a, b int) int {
if a < b {
return a
}
return b
}
func Hexdump(pkt *pcap.Packet) {
for i := 0; i < len(pkt.Data); i += 16 {
Dumpline(uint32(i), pkt.Data[i:min(i+16, len(pkt.Data))])
}
}
func Dumpline(addr uint32, line []byte) {
fmt.Fprintf(out, "\t0x%04x: ", int32(addr))
var i uint16
for i = 0; i < 16 && i < uint16(len(line)); i++ {
if i%2 == 0 {
out.WriteString(" ")
}
fmt.Fprintf(out, "%02x", line[i])
}
for j := i; j <= 16; j++ {
if j%2 == 0 {
out.WriteString(" ")
}
out.WriteString(" ")
}
out.WriteString(" ")
for i = 0; i < 16 && i < uint16(len(line)); i++ {
if line[i] >= 32 && line[i] <= 126 {
fmt.Fprintf(out, "%c", line[i])
} else {
out.WriteString(".")
}
}
out.WriteString("\n")
}


@@ -1,63 +0,0 @@
package quantile
import (
"testing"
)
func BenchmarkInsertTargeted(b *testing.B) {
b.ReportAllocs()
s := NewTargeted(Targets)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkInsertTargetedSmallEpsilon(b *testing.B) {
s := NewTargeted(TargetsSmallEpsilon)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkInsertBiased(b *testing.B) {
s := NewLowBiased(0.01)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkInsertBiasedSmallEpsilon(b *testing.B) {
s := NewLowBiased(0.0001)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkQuery(b *testing.B) {
s := NewTargeted(Targets)
for i := float64(0); i < 1e6; i++ {
s.Insert(i)
}
b.ResetTimer()
n := float64(b.N)
for i := float64(0); i < n; i++ {
s.Query(i / n)
}
}
func BenchmarkQuerySmallEpsilon(b *testing.B) {
s := NewTargeted(TargetsSmallEpsilon)
for i := float64(0); i < 1e6; i++ {
s.Insert(i)
}
b.ResetTimer()
n := float64(b.N)
for i := float64(0); i < n; i++ {
s.Query(i / n)
}
}


@@ -1,121 +0,0 @@
// +build go1.1
package quantile_test
import (
"bufio"
"fmt"
"log"
"os"
"strconv"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/beorn7/perks/quantile"
)
func Example_simple() {
ch := make(chan float64)
go sendFloats(ch)
// Compute the 50th, 90th, and 99th percentile.
q := quantile.NewTargeted(map[float64]float64{
0.50: 0.005,
0.90: 0.001,
0.99: 0.0001,
})
for v := range ch {
q.Insert(v)
}
fmt.Println("perc50:", q.Query(0.50))
fmt.Println("perc90:", q.Query(0.90))
fmt.Println("perc99:", q.Query(0.99))
fmt.Println("count:", q.Count())
// Output:
// perc50: 5
// perc90: 16
// perc99: 223
// count: 2388
}
func Example_mergeMultipleStreams() {
// Scenario:
// We have multiple database shards. On each shard, there is a process
// collecting query response times from the database logs and inserting
// them into a Stream (created via NewTargeted(map[float64]float64{0.90: 0.001})), much like the
// Simple example. These processes expose a network interface for us to
// ask them to serialize and send us the results of their
// Stream.Samples so we may Merge and Query them.
//
// NOTES:
// * These sample sets are small, allowing us to get them
// across the network much faster than sending the entire list of data
// points.
//
// * For this to work correctly, we must supply the same quantiles
// a priori that the processes collecting the samples supplied to NewTargeted,
// even if we do not plan to query them all here.
ch := make(chan quantile.Samples)
getDBQuerySamples(ch)
q := quantile.NewTargeted(map[float64]float64{0.90: 0.001})
for samples := range ch {
q.Merge(samples)
}
fmt.Println("perc90:", q.Query(0.90))
}
func Example_window() {
// Scenario: We want the 90th, 95th, and 99th percentiles for each
// minute.
ch := make(chan float64)
go sendStreamValues(ch)
tick := time.NewTicker(1 * time.Minute)
q := quantile.NewTargeted(map[float64]float64{
0.90: 0.001,
0.95: 0.0005,
0.99: 0.0001,
})
for {
select {
case t := <-tick.C:
flushToDB(t, q.Samples())
q.Reset()
case v := <-ch:
q.Insert(v)
}
}
}
func sendStreamValues(ch chan float64) {
// Use your imagination
}
func flushToDB(t time.Time, samples quantile.Samples) {
// Use your imagination
}
// This is a stub for the above example. In reality this would hit the remote
// servers via http or something like it.
func getDBQuerySamples(ch chan quantile.Samples) {}
func sendFloats(ch chan<- float64) {
f, err := os.Open("exampledata.txt")
if err != nil {
log.Fatal(err)
}
sc := bufio.NewScanner(f)
for sc.Scan() {
b := sc.Bytes()
v, err := strconv.ParseFloat(string(b), 64)
if err != nil {
log.Fatal(err)
}
ch <- v
}
if sc.Err() != nil {
log.Fatal(sc.Err())
}
close(ch)
}

File diff suppressed because it is too large


@@ -1,292 +0,0 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targets map[float64]float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for quantile, epsilon := range targets {
if quantile*s.n <= r {
f = (2 * epsilon * r) / quantile
} else {
f = (2 * epsilon * (s.n - r)) / (1 - quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed value for the qth percentile. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(float64(l) * q)
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying stream's samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}


@@ -1,188 +0,0 @@
package quantile
import (
"math"
"math/rand"
"sort"
"testing"
)
var (
Targets = map[float64]float64{
0.01: 0.001,
0.10: 0.01,
0.50: 0.05,
0.90: 0.01,
0.99: 0.001,
}
TargetsSmallEpsilon = map[float64]float64{
0.01: 0.0001,
0.10: 0.001,
0.50: 0.005,
0.90: 0.001,
0.99: 0.0001,
}
LowQuantiles = []float64{0.01, 0.1, 0.5}
HighQuantiles = []float64{0.99, 0.9, 0.5}
)
const RelativeEpsilon = 0.01
func verifyPercsWithAbsoluteEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for quantile, epsilon := range Targets {
n := float64(len(a))
k := int(quantile * n)
lower := int((quantile - epsilon) * n)
if lower < 1 {
lower = 1
}
upper := int(math.Ceil((quantile + epsilon) * n))
if upper > len(a) {
upper = len(a)
}
w, min, max := a[k-1], a[lower-1], a[upper-1]
if g := s.Query(quantile); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", quantile, w, min, max, g)
}
}
}
func verifyLowPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for _, qu := range LowQuantiles {
n := float64(len(a))
k := int(qu * n)
lowerRank := int((1 - RelativeEpsilon) * qu * n)
upperRank := int(math.Ceil((1 + RelativeEpsilon) * qu * n))
w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
if g := s.Query(qu); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
}
}
}
func verifyHighPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for _, qu := range HighQuantiles {
n := float64(len(a))
k := int(qu * n)
lowerRank := int((1 - (1+RelativeEpsilon)*(1-qu)) * n)
upperRank := int(math.Ceil((1 - (1-RelativeEpsilon)*(1-qu)) * n))
w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
if g := s.Query(qu); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
}
}
}
func populateStream(s *Stream) []float64 {
a := make([]float64, 0, 1e5+100)
for i := 0; i < cap(a); i++ {
v := rand.NormFloat64()
// Add 5% asymmetric outliers.
if i%20 == 0 {
v = v*v + 1
}
s.Insert(v)
a = append(a, v)
}
return a
}
func TestTargetedQuery(t *testing.T) {
rand.Seed(42)
s := NewTargeted(Targets)
a := populateStream(s)
verifyPercsWithAbsoluteEpsilon(t, a, s)
}
func TestLowBiasedQuery(t *testing.T) {
rand.Seed(42)
s := NewLowBiased(RelativeEpsilon)
a := populateStream(s)
verifyLowPercsWithRelativeEpsilon(t, a, s)
}
func TestHighBiasedQuery(t *testing.T) {
rand.Seed(42)
s := NewHighBiased(RelativeEpsilon)
a := populateStream(s)
verifyHighPercsWithRelativeEpsilon(t, a, s)
}
// BrokenTestTargetedMerge is broken, see Merge doc comment.
func BrokenTestTargetedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewTargeted(Targets)
s2 := NewTargeted(Targets)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyPercsWithAbsoluteEpsilon(t, a, s1)
}
// BrokenTestLowBiasedMerge is broken, see Merge doc comment.
func BrokenTestLowBiasedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewLowBiased(RelativeEpsilon)
s2 := NewLowBiased(RelativeEpsilon)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyLowPercsWithRelativeEpsilon(t, a, s1) // verify the stream that received the merge
}
// BrokenTestHighBiasedMerge is broken, see Merge doc comment.
func BrokenTestHighBiasedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewHighBiased(RelativeEpsilon)
s2 := NewHighBiased(RelativeEpsilon)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyHighPercsWithRelativeEpsilon(t, a, s1) // verify the stream that received the merge
}
func TestUncompressed(t *testing.T) {
q := NewTargeted(Targets)
for i := 100; i > 0; i-- {
q.Insert(float64(i))
}
if g := q.Count(); g != 100 {
t.Errorf("want count 100, got %d", g)
}
// Before compression, Query should have 100% accuracy.
for quantile := range Targets {
w := quantile * 100
if g := q.Query(quantile); g != w {
t.Errorf("want %f, got %f", w, g)
}
}
}
func TestUncompressedSamples(t *testing.T) {
q := NewTargeted(map[float64]float64{0.99: 0.001})
for i := 1; i <= 100; i++ {
q.Insert(float64(i))
}
if g := q.Samples().Len(); g != 100 {
t.Errorf("want count 100, got %d", g)
}
}
func TestUncompressedOne(t *testing.T) {
q := NewTargeted(map[float64]float64{0.99: 0.01})
q.Insert(3.14)
if g := q.Query(0.90); g != 3.14 {
t.Error("want PI, got", g)
}
}
func TestDefaults(t *testing.T) {
if g := NewTargeted(map[float64]float64{0.99: 0.001}).Query(0.99); g != 0 {
t.Errorf("want 0, got %f", g)
}
}


@@ -1,2 +0,0 @@
example/example
example/example.exe


@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [2013] [the CloudFoundry Authors]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,30 +0,0 @@
# Speakeasy
This package provides cross-platform Go (#golang) helpers for taking user input
from the terminal while not echoing the input back (similar to `getpasswd`). The
package uses syscalls to avoid any dependence on cgo, and is therefore
compatible with cross-compiling.
[![GoDoc](https://godoc.org/github.com/bgentry/speakeasy?status.png)][godoc]
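A minimal usage sketch, mirroring the vendored example program further below:
```go
package main

import (
	"fmt"
	"log"

	"github.com/bgentry/speakeasy"
)

func main() {
	// Prompt for a password without echoing the input back.
	password, err := speakeasy.Ask("Please enter a password: ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Password result: %q\n", password)
}
```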
## Unicode
Multi-byte unicode characters work successfully on Mac OS X. On Windows,
however, this may be problematic (as is UTF in general on Windows). Other
platforms have not been tested.
## License
The code herein was not written by me, but was compiled from two separate open
source packages. Unix portions were imported from [gopass][gopass], while
Windows portions were imported from the [CloudFoundry Go CLI][cf-cli]'s
[Windows terminal helpers][cf-ui-windows].
The [license for the windows portion](./LICENSE_WINDOWS) has been copied exactly
from the source (though I attempted to fill in the correct owner in the
boilerplate copyright notice).
[cf-cli]: https://github.com/cloudfoundry/cli "CloudFoundry Go CLI"
[cf-ui-windows]: https://github.com/cloudfoundry/cli/blob/master/src/cf/terminal/ui_windows.go "CloudFoundry Go CLI Windows input helpers"
[godoc]: https://godoc.org/github.com/bgentry/speakeasy "speakeasy on Godoc.org"
[gopass]: https://code.google.com/p/gopass "gopass"


@@ -1,18 +0,0 @@
package main
import (
"fmt"
"os"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/bgentry/speakeasy"
)
func main() {
password, err := speakeasy.Ask("Please enter a password: ")
if err != nil {
fmt.Println(err)
os.Exit(1)
}
fmt.Printf("Password result: %q\n", password)
fmt.Printf("Password len: %d\n", len(password))
}


@@ -1,47 +0,0 @@
package speakeasy
import (
"fmt"
"io"
"os"
"strings"
)
// Ask asks the user to enter a password with input hidden. prompt is a string
// to display before the user's input. It returns the provided password, or an
// error if the command failed.
func Ask(prompt string) (password string, err error) {
return FAsk(os.Stdout, prompt)
}
// FAsk is the same as Ask, except it is possible to specify the file to
// write the prompt to.
func FAsk(file *os.File, prompt string) (password string, err error) {
if prompt != "" {
fmt.Fprint(file, prompt) // Display the prompt.
}
password, err = getPassword()
// Carriage return after the user input.
fmt.Fprintln(file, "")
return
}
func readline() (value string, err error) {
var valb []byte
var n int
b := make([]byte, 1)
for {
// read one byte at a time so we don't accidentally read extra bytes
n, err = os.Stdin.Read(b)
if err != nil && err != io.EOF {
return "", err
}
if n == 0 || b[0] == '\n' {
break
}
valb = append(valb, b[0])
}
return strings.TrimSuffix(string(valb), "\r"), nil
}


@@ -1,93 +0,0 @@
// based on https://code.google.com/p/gopass
// Author: johnsiilver@gmail.com (John Doak)
//
// Original code is based on code by RogerV in the golang-nuts thread:
// https://groups.google.com/group/golang-nuts/browse_thread/thread/40cc41e9d9fc9247
// +build darwin freebsd linux netbsd openbsd solaris
package speakeasy
import (
"fmt"
"os"
"os/signal"
"strings"
"syscall"
)
const sttyArg0 = "/bin/stty"
var (
sttyArgvEOff = []string{"stty", "-echo"}
sttyArgvEOn = []string{"stty", "echo"}
)
// getPassword gets input hidden from the terminal from a user. This is
// accomplished by turning off terminal echo, reading input from the user and
// finally turning on terminal echo.
func getPassword() (password string, err error) {
sig := make(chan os.Signal, 10)
brk := make(chan bool)
// File descriptors for stdin, stdout, and stderr.
fd := []uintptr{os.Stdin.Fd(), os.Stdout.Fd(), os.Stderr.Fd()}
// Setup notifications of termination signals to channel sig, create a process to
// watch for these signals so we can turn back on echo if need be.
signal.Notify(sig, syscall.SIGHUP, syscall.SIGINT, syscall.SIGKILL, syscall.SIGQUIT,
syscall.SIGTERM)
go catchSignal(fd, sig, brk)
// Turn off the terminal echo.
pid, err := echoOff(fd)
if err != nil {
return "", err
}
// Turn on the terminal echo and stop listening for signals.
defer signal.Stop(sig)
defer close(brk)
defer echoOn(fd)
syscall.Wait4(pid, nil, 0, nil)
line, err := readline()
if err == nil {
password = strings.TrimSpace(line)
} else {
err = fmt.Errorf("failed during password entry: %s", err)
}
return password, err
}
// echoOff turns off the terminal echo.
func echoOff(fd []uintptr) (int, error) {
pid, err := syscall.ForkExec(sttyArg0, sttyArgvEOff, &syscall.ProcAttr{Dir: "", Files: fd})
if err != nil {
return 0, fmt.Errorf("failed turning off console echo for password entry:\n\t%s", err)
}
return pid, nil
}
// echoOn turns back on the terminal echo.
func echoOn(fd []uintptr) {
// Turn on the terminal echo.
pid, e := syscall.ForkExec(sttyArg0, sttyArgvEOn, &syscall.ProcAttr{Dir: "", Files: fd})
if e == nil {
syscall.Wait4(pid, nil, 0, nil)
}
}
// catchSignal tries to catch SIGKILL, SIGQUIT and SIGINT so that we can turn
// terminal echo back on before the program ends. Otherwise the user is left
// with echo off on their terminal.
func catchSignal(fd []uintptr, sig chan os.Signal, brk chan bool) {
select {
case <-sig:
echoOn(fd)
os.Exit(-1)
case <-brk:
}
}


@@ -1,43 +0,0 @@
// +build windows
package speakeasy
import (
"os"
"syscall"
)
// SetConsoleMode function can be used to change value of ENABLE_ECHO_INPUT:
// http://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx
const ENABLE_ECHO_INPUT = 0x0004
func getPassword() (password string, err error) {
hStdin := syscall.Handle(os.Stdin.Fd())
var oldMode uint32
err = syscall.GetConsoleMode(hStdin, &oldMode)
if err != nil {
return
}
var newMode uint32 = (oldMode &^ ENABLE_ECHO_INPUT)
err = setConsoleMode(hStdin, newMode)
defer setConsoleMode(hStdin, oldMode)
if err != nil {
return
}
return readline()
}
func setConsoleMode(console syscall.Handle, mode uint32) (err error) {
dll := syscall.MustLoadDLL("kernel32")
proc := dll.MustFindProc("SetConsoleMode")
r, _, err := proc.Call(uintptr(console), uintptr(mode))
if r == 0 {
return err
}
return nil
}


@@ -1,4 +0,0 @@
*.prof
*.test
*.swp
/bin/


@@ -1,20 +0,0 @@
The MIT License (MIT)
Copyright (c) 2013 Ben Johnson
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,54 +0,0 @@
TEST=.
BENCH=.
COVERPROFILE=/tmp/c.out
BRANCH=`git rev-parse --abbrev-ref HEAD`
COMMIT=`git rev-parse --short HEAD`
GOLDFLAGS="-X main.branch $(BRANCH) -X main.commit $(COMMIT)"
default: build
bench:
go test -v -test.run=NOTHINCONTAINSTHIS -test.bench=$(BENCH)
# http://cloc.sourceforge.net/
cloc:
@cloc --not-match-f='Makefile|_test.go' .
cover: fmt
go test -coverprofile=$(COVERPROFILE) -test.run=$(TEST) $(COVERFLAG) .
go tool cover -html=$(COVERPROFILE)
rm $(COVERPROFILE)
cpuprofile: fmt
@go test -c
@./bolt.test -test.v -test.run=$(TEST) -test.cpuprofile cpu.prof
# go get github.com/kisielk/errcheck
errcheck:
@echo "=== errcheck ==="
@errcheck github.com/boltdb/bolt
fmt:
@go fmt ./...
get:
@go get -d ./...
build: get
@mkdir -p bin
@go build -ldflags=$(GOLDFLAGS) -a -o bin/bolt ./cmd/bolt
test: fmt
@go get github.com/stretchr/testify/assert
@echo "=== TESTS ==="
@go test -v -cover -test.run=$(TEST)
@echo ""
@echo ""
@echo "=== CLI ==="
@go test -v -test.run=$(TEST) ./cmd/bolt
@echo ""
@echo ""
@echo "=== RACE DETECTOR ==="
@go test -v -race -test.run="TestSimulate_(100op|1000op)"
.PHONY: bench cloc cover cpuprofile fmt memprofile test


@@ -1,724 +0,0 @@
Bolt [![Build Status](https://drone.io/github.com/boltdb/bolt/status.png)](https://drone.io/github.com/boltdb/bolt/latest) [![Coverage Status](https://coveralls.io/repos/boltdb/bolt/badge.png?branch=master)](https://coveralls.io/r/boltdb/bolt?branch=master) [![GoDoc](https://godoc.org/github.com/boltdb/bolt?status.png)](https://godoc.org/github.com/boltdb/bolt) ![Version](http://img.shields.io/badge/version-1.0-green.png)
====
Bolt is a pure Go key/value store inspired by [Howard Chu's][hyc_symas]
[LMDB project][lmdb]. The goal of the project is to provide a simple,
fast, and reliable database for projects that don't require a full database
server such as Postgres or MySQL.
Since Bolt is meant to be used as such a low-level piece of functionality,
simplicity is key. The API will be small and only focus on getting values
and setting values. That's it.
[hyc_symas]: https://twitter.com/hyc_symas
[lmdb]: http://symas.com/mdb/
## Project Status
Bolt is stable and the API is fixed. Full unit test coverage and randomized
black box testing are used to ensure database consistency and thread safety.
Bolt is currently in high-load production environments serving databases as
large as 1TB. Many companies such as Shopify and Heroku use Bolt-backed
services every day.
## Getting Started
### Installing
To start using Bolt, install Go and run `go get`:
```sh
$ go get github.com/boltdb/bolt/...
```
This will retrieve the library and install the `bolt` command line utility into
your `$GOBIN` path.
### Opening a database
The top-level object in Bolt is a `DB`. It is represented as a single file on
your disk and represents a consistent snapshot of your data.
To open your database, simply use the `bolt.Open()` function:
```go
package main
import (
"log"
"github.com/boltdb/bolt"
)
func main() {
// Open the my.db data file in your current directory.
// It will be created if it doesn't exist.
db, err := bolt.Open("my.db", 0600, nil)
if err != nil {
log.Fatal(err)
}
defer db.Close()
...
}
```
Please note that Bolt obtains a file lock on the data file so multiple processes
cannot open the same database at the same time. Opening an already open Bolt
database will cause it to hang until the other process closes it. To prevent
an indefinite wait you can pass a timeout option to the `Open()` function:
```go
db, err := bolt.Open("my.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
```
### Transactions
Bolt allows only one read-write transaction at a time but allows as many
read-only transactions as you want at a time. Each transaction has a consistent
view of the data as it existed when the transaction started.
Individual transactions and all objects created from them (e.g. buckets, keys)
are not thread safe. To work with data in multiple goroutines you must start
a transaction for each one or use locking to ensure only one goroutine accesses
a transaction at a time. Creating a transaction from the `DB` is thread safe.
Read-only transactions and read-write transactions should not depend on one
another and generally shouldn't be opened simultaneously in the same goroutine.
This can cause a deadlock as the read-write transaction needs to periodically
re-map the data file but it cannot do so while a read-only transaction is open.
#### Read-write transactions
To start a read-write transaction, you can use the `DB.Update()` function:
```go
err := db.Update(func(tx *bolt.Tx) error {
...
return nil
})
```
Inside the closure, you have a consistent view of the database. You commit the
transaction by returning `nil` at the end. You can also rollback the transaction
at any point by returning an error. All database operations are allowed inside
a read-write transaction.
Always check the return error as it will report any disk failures that can cause
your transaction to not complete. If you return an error within your closure
it will be passed through.
#### Read-only transactions
To start a read-only transaction, you can use the `DB.View()` function:
```go
err := db.View(func(tx *bolt.Tx) error {
...
return nil
})
```
You also get a consistent view of the database within this closure, however,
no mutating operations are allowed within a read-only transaction. You can only
retrieve buckets, retrieve values, and copy the database within a read-only
transaction.
#### Batch read-write transactions
Each `DB.Update()` waits for disk to commit the writes. This overhead
can be minimized by combining multiple updates with the `DB.Batch()`
function:
```go
err := db.Batch(func(tx *bolt.Tx) error {
...
return nil
})
```
Concurrent Batch calls are opportunistically combined into larger
transactions. Batch is only useful when there are multiple goroutines
calling it.
The trade-off is that `Batch` can call the given
function multiple times, if parts of the transaction fail. The
function must be idempotent and side effects must take effect only
after a successful return from `DB.Batch()`.
For example: don't display messages from inside the function, instead
set variables in the enclosing scope:
```go
var id uint64
err := db.Batch(func(tx *bolt.Tx) error {
// Find last key in bucket, decode as bigendian uint64, increment
// by one, encode back to []byte, and add new key.
...
id = newValue
return nil
})
if err != nil {
return ...
}
fmt.Println("Allocated ID %d", id)
```
#### Managing transactions manually
The `DB.View()` and `DB.Update()` functions are wrappers around the `DB.Begin()`
function. These helper functions will start the transaction, execute a function,
and then safely close your transaction if an error is returned. This is the
recommended way to use Bolt transactions.
However, sometimes you may want to manually start and end your transactions.
You can use the `DB.Begin()` function directly but **please** be sure to close
the transaction.
```go
// Start a writable transaction.
tx, err := db.Begin(true)
if err != nil {
return err
}
defer tx.Rollback()
// Use the transaction...
_, err = tx.CreateBucket([]byte("MyBucket"))
if err != nil {
return err
}
// Commit the transaction and check for error.
if err := tx.Commit(); err != nil {
return err
}
```
The first argument to `DB.Begin()` is a boolean stating if the transaction
should be writable.
### Using buckets
Buckets are collections of key/value pairs within the database. All keys in a
bucket must be unique. You can create a bucket using the `DB.CreateBucket()`
function:
```go
db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucket([]byte("MyBucket"))
if err != nil {
return fmt.Errorf("create bucket: %s", err)
}
return nil
})
```
You can also create a bucket only if it doesn't exist by using the
`Tx.CreateBucketIfNotExists()` function. It's a common pattern to call this
function for all your top-level buckets after you open your database so you can
guarantee that they exist for future transactions.
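For example, a sketch of that pattern (the bucket name is illustrative), run
once right after opening the database:
```go
err := db.Update(func(tx *bolt.Tx) error {
	// Guarantee the top-level bucket exists before other transactions rely on it.
	_, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
	return err
})
if err != nil {
	log.Fatal(err)
}
```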
To delete a bucket, simply call the `Tx.DeleteBucket()` function.
### Using key/value pairs
To save a key/value pair to a bucket, use the `Bucket.Put()` function:
```go
db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte("MyBucket"))
err := b.Put([]byte("answer"), []byte("42"))
return err
})
```
This will set the value of the `"answer"` key to `"42"` in the `MyBucket`
bucket. To retrieve this value, we can use the `Bucket.Get()` function:
```go
db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte("MyBucket"))
v := b.Get([]byte("answer"))
fmt.Printf("The answer is: %s\n", v)
return nil
})
```
The `Get()` function does not return an error because its operation is
guaranteed to work (unless there is some kind of system failure). If the key
exists then it will return its byte slice value. If it doesn't exist then it
will return `nil`. It's important to note that you can have a zero-length value
set for a key, which is different from the key not existing.
Use the `Bucket.Delete()` function to delete a key from the bucket.
Please note that values returned from `Get()` are only valid while the
transaction is open. If you need to use a value outside of the transaction
then you must use `copy()` to copy it to another byte slice.
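For instance, a sketch of copying a value so it remains valid after the
transaction closes (bucket and key follow the examples above):
```go
var answer []byte
err := db.View(func(tx *bolt.Tx) error {
	v := tx.Bucket([]byte("MyBucket")).Get([]byte("answer"))
	if v != nil {
		// v points into Bolt's memory-mapped data; copy it before the
		// transaction ends so answer stays valid afterward.
		answer = make([]byte, len(v))
		copy(answer, v)
	}
	return nil
})
```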
### Autoincrementing integer for the bucket
By using the NextSequence() function, you can let Bolt determine a sequence
which can be used as the unique identifier for your key/value pairs. See the
example below.
```go
// CreateUser saves u to the store. The new user ID is set on u once the data is persisted.
func (s *Store) CreateUser(u *User) error {
return s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the users bucket.
// This should be created when the DB is first opened.
b := tx.Bucket([]byte("users"))
// Generate ID for the user.
// This returns an error only if the Tx is closed or not writeable.
// That can't happen in an Update() call so I ignore the error check.
id, _ := b.NextSequence()
u.ID = int(id)
// Marshal user data into bytes.
buf, err := json.Marshal(u)
if err != nil {
return err
}
// Persist bytes to users bucket.
return b.Put(itob(u.ID), buf)
})
}
// itob returns an 8-byte big endian representation of v.
func itob(v int) []byte {
b := make([]byte, 8)
binary.BigEndian.PutUint64(b, uint64(v))
return b
}
type User struct {
ID int
...
}
```
### Iterating over keys
Bolt stores its keys in byte-sorted order within a bucket. This makes sequential
iteration over these keys extremely fast. To iterate over keys we'll use a
`Cursor`:
```go
db.View(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte("MyBucket"))
c := b.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
fmt.Printf("key=%s, value=%s\n", k, v)
}
return nil
})
```
The cursor allows you to move to a specific point in the list of keys and move
forward or backward through the keys one at a time.
The following functions are available on the cursor:
```
First() Move to the first key.
Last() Move to the last key.
Seek() Move to a specific key.
Next() Move to the next key.
Prev() Move to the previous key.
```
When you have iterated to the end of the cursor, `Next()` will return a `nil` key.
You must seek to a position using `First()`, `Last()`, or `Seek()` before
calling `Next()` or `Prev()`. If you do not seek to a position then these
functions will return `nil`.
#### Prefix scans
To iterate over a key prefix, you can combine `Seek()` and `bytes.HasPrefix()`:
```go
db.View(func(tx *bolt.Tx) error {
	c := tx.Bucket([]byte("MyBucket")).Cursor()

	prefix := []byte("1234")
	for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
		fmt.Printf("key=%s, value=%s\n", k, v)
	}

	return nil
})
```
#### Range scans
Another common use case is scanning over a range such as a time range. If you
use a sortable time encoding such as RFC3339 then you can query a specific
date range like this:
```go
db.View(func(tx *bolt.Tx) error {
	// Assume our events bucket has RFC3339 encoded time keys.
	c := tx.Bucket([]byte("Events")).Cursor()

	// Our time range spans the 90's decade.
	min := []byte("1990-01-01T00:00:00Z")
	max := []byte("2000-01-01T00:00:00Z")

	// Iterate over the 90's.
	for k, v := c.Seek(min); k != nil && bytes.Compare(k, max) <= 0; k, v = c.Next() {
		fmt.Printf("%s: %s\n", k, v)
	}

	return nil
})
```
#### ForEach()
You can also use the function `ForEach()` if you know you'll be iterating over
all the keys in a bucket:
```go
db.View(func(tx *bolt.Tx) error {
	b := tx.Bucket([]byte("MyBucket"))
	// Return ForEach's error so it propagates out of the transaction.
	return b.ForEach(func(k, v []byte) error {
		fmt.Printf("key=%s, value=%s\n", k, v)
		return nil
	})
})
```
### Nested buckets
You can also store a bucket in a key to create nested buckets. The API is the
same as the bucket management API on the `DB` object:
```go
func (*Bucket) CreateBucket(key []byte) (*Bucket, error)
func (*Bucket) CreateBucketIfNotExists(key []byte) (*Bucket, error)
func (*Bucket) DeleteBucket(key []byte) error
```
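As a sketch, creating and writing into a nested bucket (the bucket names are illustrative, and `db` is assumed to be an open `*bolt.DB`):
```go
db.Update(func(tx *bolt.Tx) error {
	users, err := tx.CreateBucketIfNotExists([]byte("users"))
	if err != nil {
		return err
	}
	// Nested buckets are created on a Bucket rather than on the Tx.
	settings, err := users.CreateBucketIfNotExists([]byte("settings"))
	if err != nil {
		return err
	}
	return settings.Put([]byte("theme"), []byte("dark"))
})
```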
### Database backups
Bolt is a single file so it's easy to back up. You can use the `Tx.WriteTo()`
function to write a consistent view of the database to a writer. If you call
this from a read-only transaction, it will perform a hot backup and not block
your other database reads and writes.
By default, it will use a regular file handle which will utilize the operating
system's page cache. See the [`Tx`](https://godoc.org/github.com/boltdb/bolt#Tx)
documentation for information about optimizing for larger-than-RAM datasets.
One common use case is to back up over HTTP so you can use tools like `cURL` to
do database backups:
```go
func BackupHandleFunc(w http.ResponseWriter, req *http.Request) {
	err := db.View(func(tx *bolt.Tx) error {
		w.Header().Set("Content-Type", "application/octet-stream")
		w.Header().Set("Content-Disposition", `attachment; filename="my.db"`)
		w.Header().Set("Content-Length", strconv.Itoa(int(tx.Size())))
		_, err := tx.WriteTo(w)
		return err
	})
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}
```
Then you can back up using this command:
```sh
$ curl http://localhost/backup > my.db
```
Or you can open your browser to `http://localhost/backup` and it will download
automatically.
If you want to back up to another file you can use the `Tx.CopyFile()` helper
function.
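For example, a sketch of a file backup inside a read-only transaction (the destination path is illustrative):
```go
db.View(func(tx *bolt.Tx) error {
	// Writes a consistent copy of the database to the given path.
	return tx.CopyFile("/tmp/my.db.backup", 0600)
})
```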
### Statistics
The database keeps a running count of many of the internal operations it
performs so you can better understand what's going on. By grabbing a snapshot
of these stats at two points in time we can see what operations were performed
in that time range.
For example, we could start a goroutine to log stats every 10 seconds:
```go
go func() {
	// Grab the initial stats.
	prev := db.Stats()

	for {
		// Wait for 10s.
		time.Sleep(10 * time.Second)

		// Grab the current stats and diff them.
		stats := db.Stats()
		diff := stats.Sub(&prev)

		// Encode stats to JSON and print to STDERR.
		json.NewEncoder(os.Stderr).Encode(diff)

		// Save stats for the next loop.
		prev = stats
	}
}()
```
It's also useful to pipe these stats to a service such as statsd for monitoring
or to provide an HTTP endpoint that will perform a fixed-length sample.
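A sketch of such an endpoint, serving a point-in-time snapshot of the stats as JSON (the route is an assumption; a fixed-length sample would diff two snapshots as in the loop above):
```go
http.HandleFunc("/stats", func(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	// Encode a snapshot of the current database stats.
	json.NewEncoder(w).Encode(db.Stats())
})
```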
### Read-Only Mode
Sometimes it is useful to create a shared, read-only Bolt database. To do this,
set the `Options.ReadOnly` flag when opening your database. Read-only mode
uses a shared lock to allow multiple processes to read from the database, but
it will block any process from opening the database in read-write mode.
```go
db, err := bolt.Open("my.db", 0666, &bolt.Options{ReadOnly: true})
if err != nil {
	log.Fatal(err)
}
```
## Resources
For more information on getting started with Bolt, check out the following articles:
* [Intro to BoltDB: Painless Performant Persistence](http://npf.io/2014/07/intro-to-boltdb-painless-performant-persistence/) by [Nate Finch](https://github.com/natefinch).
* [Bolt -- an embedded key/value database for Go](https://www.progville.com/go/bolt-embedded-db-golang/) by Progville
## Comparison with other databases
### Postgres, MySQL, & other relational databases
Relational databases structure data into rows and are only accessible through
the use of SQL. This approach provides flexibility in how you store and query
your data but also incurs overhead in parsing and planning SQL statements. Bolt
accesses all data by a byte slice key. This makes Bolt fast to read and write
data by key but provides no built-in support for joining values together.
Most relational databases (with the exception of SQLite) are standalone servers
that run separately from your application. This gives your systems
flexibility to connect multiple application servers to a single database
server but also adds overhead in serializing and transporting data over the
network. Bolt runs as a library included in your application so all data access
has to go through your application's process. This brings data closer to your
application but limits multi-process access to the data.
### LevelDB, RocksDB
LevelDB and its derivatives (RocksDB, HyperLevelDB) are similar to Bolt in that
they are libraries bundled into the application; however, their underlying
structure is a log-structured merge-tree (LSM tree). An LSM tree optimizes
random writes by using a write ahead log and multi-tiered, sorted files called
SSTables. Bolt uses a B+tree internally and only a single file. Both approaches
have trade-offs.
If you require a high random write throughput (>10,000 w/sec) or you need to use
spinning disks then LevelDB could be a good choice. If your application is
read-heavy or does a lot of range scans then Bolt could be a good choice.
One other important consideration is that LevelDB does not have transactions.
It supports batch writing of key/value pairs and it supports read snapshots
but it will not give you the ability to do a compare-and-swap operation safely.
Bolt supports fully serializable ACID transactions.
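For instance, a sketch of a compare-and-swap on a single key inside a Bolt read-write transaction (the key and values are illustrative):
```go
err := db.Update(func(tx *bolt.Tx) error {
	b := tx.Bucket([]byte("MyBucket"))
	// The read and the conditional write are serialized with all other
	// writers, so no other transaction can interleave between them.
	if v := b.Get([]byte("answer")); !bytes.Equal(v, []byte("42")) {
		return fmt.Errorf("compare failed: got %q", v)
	}
	return b.Put([]byte("answer"), []byte("43"))
})
if err != nil {
	// The swap did not happen: the compare failed or the write errored.
	log.Fatal(err)
}
```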
### LMDB
Bolt was originally a port of LMDB so it is architecturally similar. Both use
a B+tree, have ACID semantics with fully serializable transactions, and support
lock-free MVCC using a single writer and multiple readers.
The two projects have somewhat diverged. LMDB heavily focuses on raw performance
while Bolt has focused on simplicity and ease of use. For example, LMDB allows
several unsafe actions such as direct writes for the sake of performance. Bolt
opts to disallow actions which can leave the database in a corrupted state. The
only exception to this in Bolt is `DB.NoSync`.
There are also a few differences in API. LMDB requires a maximum mmap size when
opening an `mdb_env` whereas Bolt will handle incremental mmap resizing
automatically. LMDB overloads the getter and setter functions with multiple
flags whereas Bolt splits these specialized cases into their own functions.
## Caveats & Limitations
It's important to pick the right tool for the job and Bolt is no exception.
Here are a few things to note when evaluating and using Bolt:
* Bolt is good for read intensive workloads. Sequential write performance is
also fast but random writes can be slow. You can add a write-ahead log or
[transaction coalescer](https://github.com/boltdb/coalescer) in front of Bolt
to mitigate this issue.
* Bolt uses a B+tree internally so there can be a lot of random page access.
SSDs provide a significant performance boost over spinning disks.
* Try to avoid long-running read transactions. Bolt uses copy-on-write so
old pages cannot be reclaimed while an old transaction is using them.
* Byte slices returned from Bolt are only valid during a transaction. Once the
transaction has been committed or rolled back then the memory they point to
can be reused by a new page or can be unmapped from virtual memory and you'll
see an `unexpected fault address` panic when accessing it.
* Be careful when using `Bucket.FillPercent`. Setting a high fill percent for
buckets that have random inserts will cause your database to have very poor
page utilization.
* Use larger buckets in general. Smaller buckets cause poor page utilization
once they become larger than the page size (typically 4KB).
* Bulk loading a lot of random writes into a new bucket can be slow as the
page will not split until the transaction is committed. Randomly inserting
more than 100,000 key/value pairs into a single new bucket in a single
transaction is not advised.
* Bolt uses a memory-mapped file so the underlying operating system handles the
caching of the data. Typically, the OS will cache as much of the file as it
can in memory and will release memory as needed to other processes. This means
that Bolt can show very high memory usage when working with large databases.
However, this is expected and the OS will release memory as needed. Bolt can
handle databases much larger than the available physical RAM, provided its
memory-map fits in the process virtual address space. This may be problematic
on 32-bit systems.
* The data structures in the Bolt database are memory mapped so the data file
will be endian specific. This means that you cannot copy a Bolt file from a
little endian machine to a big endian machine and have it work. For most
users this is not a concern since most modern CPUs are little endian.
* Because of the way pages are laid out on disk, Bolt cannot truncate data files
and return free pages back to the disk. Instead, Bolt maintains a free list
of unused pages within its data file. These free pages can be reused by later
transactions. This works well for many use cases as databases generally tend
to grow. However, it's important to note that deleting large chunks of data
will not allow you to reclaim that space on disk.
For more information on page allocation, [see this comment][page-allocation].
[page-allocation]: https://github.com/boltdb/bolt/issues/308#issuecomment-74811638
## Reading the Source
Bolt is a relatively small code base (<3KLOC) for an embedded, serializable,
transactional key/value database so it can be a good starting point for people
interested in how databases work.
The best places to start are the main entry points into Bolt:
- `Open()` - Initializes the reference to the database. It's responsible for
creating the database if it doesn't exist, obtaining an exclusive lock on the
file, reading the meta pages, & memory-mapping the file.
- `DB.Begin()` - Starts a read-only or read-write transaction depending on the
value of the `writable` argument. This requires briefly obtaining the "meta"
lock to keep track of open transactions. Only one read-write transaction can
exist at a time so the "rwlock" is acquired during the life of a read-write
transaction.
- `Bucket.Put()` - Writes a key/value pair into a bucket. After validating the
arguments, a cursor is used to traverse the B+tree to the page and position
where the key & value will be written. Once the position is found, the bucket
materializes the underlying page and the page's parent pages into memory as
"nodes". These nodes are where mutations occur during read-write transactions.
These changes get flushed to disk during commit.
- `Bucket.Get()` - Retrieves a key/value pair from a bucket. This uses a cursor
to move to the page & position of a key/value pair. During a read-only
transaction, the key and value data is returned as a direct reference to the
underlying mmap file so there's no allocation overhead. For read-write
transactions, this data may reference the mmap file or one of the in-memory
node values.
- `Cursor` - This object is simply for traversing the B+tree of on-disk pages
or in-memory nodes. It can seek to a specific key, move to the first or last
value, or it can move forward or backward. The cursor handles the movement up
and down the B+tree transparently to the end user.
- `Tx.Commit()` - Converts the in-memory dirty nodes and the list of free pages
into pages to be written to disk. Writing to disk then occurs in two phases.
First, the dirty pages are written to disk and an `fsync()` occurs. Second, a
new meta page with an incremented transaction ID is written and another
`fsync()` occurs. This two-phase write ensures that partially written data
pages are ignored in the event of a crash since the meta page pointing to them
is never written. Partially written meta pages are invalidated because they
are written with a checksum.
If you have additional notes that could be helpful for others, please submit
them via pull request.
## Other Projects Using Bolt
Below is a list of public, open source projects that use Bolt:
* [Operation Go: A Routine Mission](http://gocode.io) - An online programming game for Golang using Bolt for user accounts and a leaderboard.
* [Bazil](https://bazil.org/) - A file system that lets your data reside where it is most convenient for it to reside.
* [DVID](https://github.com/janelia-flyem/dvid) - Added Bolt as optional storage engine and testing it against Basho-tuned leveldb.
* [Skybox Analytics](https://github.com/skybox/skybox) - A standalone funnel analysis tool for web analytics.
* [Scuttlebutt](https://github.com/benbjohnson/scuttlebutt) - Uses Bolt to store and process all Twitter mentions of GitHub projects.
* [Wiki](https://github.com/peterhellberg/wiki) - A tiny wiki using Goji, BoltDB and Blackfriday.
* [ChainStore](https://github.com/nulayer/chainstore) - Simple key-value interface to a variety of storage engines organized as a chain of operations.
* [MetricBase](https://github.com/msiebuhr/MetricBase) - Single-binary version of Graphite.
* [Gitchain](https://github.com/gitchain/gitchain) - Decentralized, peer-to-peer Git repositories aka "Git meets Bitcoin".
* [event-shuttle](https://github.com/sclasen/event-shuttle) - A Unix system service to collect and reliably deliver messages to Kafka.
* [ipxed](https://github.com/kelseyhightower/ipxed) - Web interface and api for ipxed.
* [BoltStore](https://github.com/yosssi/boltstore) - Session store using Bolt.
* [photosite/session](http://godoc.org/bitbucket.org/kardianos/photosite/session) - Sessions for a photo viewing site.
* [LedisDB](https://github.com/siddontang/ledisdb) - A high performance NoSQL, using Bolt as optional storage.
* [ipLocator](https://github.com/AndreasBriese/ipLocator) - A fast ip-geo-location-server using bolt with bloom filters.
* [cayley](https://github.com/google/cayley) - Cayley is an open-source graph database using Bolt as optional backend.
* [bleve](http://www.blevesearch.com/) - A pure Go search engine similar to ElasticSearch that uses Bolt as the default storage backend.
* [tentacool](https://github.com/optiflows/tentacool) - REST api server to manage system stuff (IP, DNS, Gateway...) on a linux server.
* [SkyDB](https://github.com/skydb/sky) - Behavioral analytics database.
* [Seaweed File System](https://github.com/chrislusf/weed-fs) - Highly scalable distributed key~file system with O(1) disk read.
* [InfluxDB](http://influxdb.com) - Scalable datastore for metrics, events, and real-time analytics.
* [Freehold](http://tshannon.bitbucket.org/freehold/) - An open, secure, and lightweight platform for your files and data.
* [Prometheus Annotation Server](https://github.com/oliver006/prom_annotation_server) - Annotation server for PromDash & Prometheus service monitoring system.
* [Consul](https://github.com/hashicorp/consul) - Consul is service discovery and configuration made easy. Distributed, highly available, and datacenter-aware.
* [Kala](https://github.com/ajvb/kala) - Kala is a modern job scheduler optimized to run on a single node. It is persistent, JSON over HTTP API, ISO 8601 duration notation, and dependent jobs.
* [drive](https://github.com/odeke-em/drive) - drive is an unofficial Google Drive command line client for \*NIX operating systems.
* [stow](https://github.com/djherbis/stow) - a persistence manager for objects
backed by boltdb.
* [buckets](https://github.com/joyrexus/buckets) - a bolt wrapper streamlining
simple tx and key scans.
If you are using Bolt in a project please send a pull request to add it to the list.
