Compare commits

17 Commits (release-2. ... v2.0.6)

| SHA1 |
|---|
| e3c902228b |
| 52a2d143d2 |
| f53d550a79 |
| 63b799b891 |
| 697883fb8c |
| f794f87f26 |
| 0847986d4a |
| 9ea80c6ac1 |
| 02fb648abf |
| 4c9e1686b1 |
| 0fb9362c5c |
| 9481945228 |
| e13b09e4d9 |
| 78e0149f41 |
| 4c86ab4868 |
| 59327bab47 |
| 62ed1ebf03 |
@@ -4,6 +4,8 @@ go:
- 1.4

install:
- go get golang.org/x/tools/cmd/cover
- go get golang.org/x/tools/cmd/vet
- go get github.com/barakmich/go-nyet

script:
@@ -1,6 +1,6 @@

# How to contribute

etcd is Apache 2.0 licensed and accepts contributions via GitHub pull requests. This document outlines some of the conventions on commit message formatting, contact points for developers and other resources to make getting your contribution into etcd easier.
etcd is Apache 2.0 licensed and accepts contributions via Github pull requests. This document outlines some of the conventions on commit message formatting, contact points for developers and other resources to make getting your contribution into etcd easier.

# Email and chat
@@ -15,8 +15,7 @@ etcd will detect 0.4.x data dir and update the data automatically (while leaving

The tool can be run via:
```sh
./go build
./etcd-migrate --data-dir=<PATH TO YOUR DATA>
./bin/etcd-migrate --data-dir=<PATH TO YOUR DATA>
```

It should autodetect everything and convert the data-dir to be 2.0 compatible. It does not remove the 0.4.x data, and is safe to convert multiple times; the 2.0 data will be overwritten. Recovering the disk space once everything is settled is covered later in the document.
@@ -45,4 +44,4 @@ If the conversion has completed, the entire cluster is running on something 2.0-

rm -ri snapshot conf log
```

It will ask before every deletion, but these are the 0.4.x files and will not affect the working 2.0 data.
@@ -15,7 +15,7 @@ Using an out-of-date data directory can lead to inconsistency as the member had

For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
Once removed the member can be re-added with an empty data directory.
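For illustration, removal and re-addition through the members API might look like this sketch (the member ID and URLs are placeholders):

```
$ curl -XDELETE http://10.0.1.10:2379/v2/members/272e204152
$ curl -XPOST http://10.0.1.10:2379/v2/members \
  -H "Content-Type: application/json" \
  -d '{"peerURLs":["http://10.0.1.13:2380"]}'
```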

[members-api]: other_apis.md#members-api
[members-api]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#members-api

#### Contents
@@ -61,7 +61,7 @@ After your cluster is up and running, adding or removing members is done via [ru

### Member Migration

When there is a scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing the data and changing the member ID.

The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
@@ -102,7 +102,7 @@ $ sudo systemctl stop etcd

#### Copy the data directory of the now-idle member to the new machine

```
$ tar -cvzf node1.etcd.tar.gz /var/lib/etcd/node1.etcd
```

```
@@ -133,7 +133,7 @@ etcd -name node1 \
-advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
```

[change peer url]: other_apis.md#change-the-peer-urls-of-a-member
[change peer url]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#change-the-peer-urls-of-a-member

### Disaster Recovery
@@ -181,13 +181,11 @@ Once you have verified that etcd has started successfully, shut it down and move

#### Restoring the cluster

Now that the node is running successfully, you should [change its advertised peer URLs](other_apis.md#change-the-peer-urls-of-a-member), as the `--force-new-cluster` has set the peer URL to the default (listening on localhost).

You can then add more nodes to the cluster and restore resiliency. See the [runtime configuration](runtime-configuration.md) guide for more details.
Now that the node is running successfully, you can add more nodes to the cluster and restore resiliency. See the [runtime configuration](runtime-configuration.md) guide for more details.

### Client Request Timeout

etcd sets different timeouts for various types of client requests. The timeout value is not tunable now, which will be improved soon (https://github.com/coreos/etcd/issues/2038).
etcd sets different timeouts for various types of client requests. The timeout value is not tunable now, which will be improved soon(https://github.com/coreos/etcd/issues/2038).

#### Get requests
@@ -209,11 +207,3 @@ If the request times out, it indicates two possibilities:

2. the majority of the cluster is not functioning.

If the timeout happens several times in a row, administrators should check the status of the cluster and resolve the problem as soon as possible.

### Best Practices

#### Maximum OS threads

By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [may change in Go 1.5](https://docs.google.com/document/d/1At2Ls5_fhJQ59kDK2DFVhFu3g5mATSXqqV5QrxinasI/edit).)

When using etcd in heavy-load scenarios on machines with multiple cores, it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable `GOMAXPROCS` to the desired number when starting etcd. For more information on this variable, see the Go [runtime](https://golang.org/pkg/runtime) documentation.
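For example, on a machine with four cores, etcd might be started with four OS threads (the value and flags here are illustrative; match them to your hardware and setup):

```
$ GOMAXPROCS=4 etcd -name infra0 -data-dir /var/lib/etcd
```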
Documentation/allow_legacy_mode.md (new file, 120 lines)

@@ -0,0 +1,120 @@
## Allow-legacy mode

Allow-legacy is a special mode in etcd that contains logic to enable a running etcd cluster to smoothly transition between major versions of etcd. For example, the internal API versions between etcd 0.4 (internal v1) and etcd 2.0 (internal v2) aren't compatible and the cluster needs to be updated all at once to make the switch. To minimize downtime, allow-legacy coordinates with all of the members of the cluster to shut down, migrate data and restart onto the new version.

Allow-legacy helps users upgrade v0.4 etcd clusters easily, and allows your etcd cluster to have a minimal amount of downtime -- less than 1 minute for clusters storing less than 50 MB.

It currently supports upgrading from internal v1 to internal v2.

### Setup

This mode is enabled if `ETCD_ALLOW_LEGACY_MODE` is set to true, or etcd is running on a CoreOS system.

It treats `ETCD_BINARY_DIR` as the directory for etcd binaries, which is organized in this way:

```
ETCD_BINARY_DIR
|
-- 1
|
-- 2
```

`1` is etcd with internal v1 protocol. You should use etcd v0.4.7 here. `2` is etcd with internal v2 protocol, which is etcd v2.x.

The default value for `ETCD_BINARY_DIR` is `/usr/libexec/etcd/internal_versions/`.
### Upgrading a Cluster

When starting etcd with a v1 data directory and v1 flags, etcd executes the v0.4.7 binary and runs exactly the same as before. To start the migration, follow the steps below:

![]()

#### 1. Check the Cluster Health

Before upgrading, you should check the health of the cluster to double-check that everything is working perfectly. Check the health by running:

```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy
```

If the cluster and all members are healthy, you can start the upgrading process. If not, check the unhealthy machines and repair them using the [admin guide](./admin_guide.md).
#### 2. Trigger the Upgrade

When you're ready, use the `etcdctl upgrade` command to start upgrading the etcd cluster to 2.0:

```
# Defaults work on a CoreOS machine running etcd
$ etcdctl upgrade
```

```
# Advanced example specifying a peer url
$ etcdctl upgrade --old-version=1 --new-version=2 --peer-url=$PEER_URL
```

`PEER_URL` can be any accessible peer url of the cluster.
Once triggered, all peer-mode members will print out:

```
detected next internal version 2, exit after 10 seconds.
```

#### Parallel Coordinated Upgrade

As part of the upgrade, etcd does internal coordination within the cluster for a brief period and then exits. Clusters storing 50 MB should be unavailable for less than 1 minute.

#### Restart etcd Processes

After the etcd processes exit, they need to be restarted. You can do this manually or configure your unit system to do this automatically. On CoreOS, etcd is already configured to start automatically with systemd.

When restarted, the data directory of each member is upgraded, and afterwards etcd v2.0 will be running and servicing requests. The upgrade is now complete!

Standby-mode members are a special case: they will be upgraded into proxy mode (a new feature in etcd 2.0) upon restarting. When the upgrade is triggered, any standbys will exit with the message:

```
Detect the cluster has been upgraded to internal API v2. Exit now.
```

Once restarted, standbys run in v2.0 proxy mode, which proxies user requests to the etcd cluster.

#### 3. Check the Cluster Health

After the upgrade process, you can run the health check again to verify the upgrade. If the cluster is unhealthy or there is an unhealthy member, please refer to [failure recovery](#failure-recovery).
### Downgrade

If the upgrade fails due to disk or network issues, you can still restart the upgrade process manually. However, once you upgrade etcd to the internal v2 protocol, you CANNOT downgrade it back to the internal v1 protocol. If you want to downgrade etcd in the future, please back up your v1 data dir beforehand.

### Upgrade Process on CoreOS

When running on a CoreOS system, allow-legacy mode is enabled by default and an automatic update will set up everything needed to execute the upgrade. The `etcd.service` on CoreOS is already configured to restart automatically. All you need to do is run `etcdctl upgrade` when you're ready, as described above.
### Internal Details

At the bootstrap stage, etcd v0.4.7 registers the versions of the etcd binaries available on its local machine into the key space. When the upgrade command is executed, etcdctl checks whether each member has an internal-version-v2 etcd binary available. If so, each member is asked to record the fact that it needs to be upgraded the next time it restarts, and exits after 10 seconds.

Once restarted, etcd v2.0 sees the upgrade flag recorded. It upgrades the data directory, and executes etcd v2.0.

### Failure Recovery

If `etcdctl cluster-health` says that the cluster is unhealthy, the upgrade process has failed. This may happen if the network is broken, or if a disk stops working.

The way to recover is to manually upgrade the whole cluster to v2.0:

- Log into machines that ran v0.4 peer-mode etcd
- Stop all etcd services
- Remove the `member` directory under the etcd data-dir
- Start etcd service using [2.0 flags](configuration.md). An example for this is:
```
$ etcd --data-dir=$DATA_DIR --listen-peer-urls http://$LISTEN_PEER_ADDR \
--advertise-client-urls http://$ADVERTISE_CLIENT_ADDR \
--listen-client-urls http://$LISTEN_CLIENT_ADDR
```
- When this is done, the v2.0 etcd cluster should be up and working.
@@ -78,7 +78,7 @@ X-Raft-Index: 5398

X-Raft-Term: 1
```

- `X-Etcd-Index` is the current etcd index as explained above. When request is a watch on key space, `X-Etcd-Index` is the current etcd index when the watch starts, which means that the watched event may happen after `X-Etcd-Index`.
- `X-Etcd-Index` is the current etcd index as explained above.
- `X-Raft-Index` is similar to the etcd index but is for the underlying raft protocol.
- `X-Raft-Term` is an integer that will increase whenever an etcd master election happens in the cluster. If this number is increasing rapidly, you may need to tune the election timeout. See the [tuning][tuning] section for details.
@@ -277,7 +277,7 @@ The first terminal should get the notification and return with the same response

However, the watch command can do more than this.
Using the index, we can watch for commands that have happened in the past.
This is useful for ensuring you don't miss events between watch commands.
Typically, we watch again from the `modifiedIndex` + 1 of the node we got.
Typically, we watch again from the (modifiedIndex + 1) of the node we got.

Let's try to watch for the set command of index 7 again:

@@ -287,75 +287,49 @@ curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'

The watch command returns immediately with the same response as previously.

If we were to restart the watch from index 8 with:

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'
```

Then even if etcd is on index 9 or 800, the first event to occur to the `/foo`
key between 8 and the current index will be returned.

**Note**: etcd only keeps the responses of the most recent 1000 events across all etcd keys.
**Note**: etcd only keeps the responses of the most recent 1000 events.
It is recommended to send the response to another thread to process immediately
instead of blocking the watch while processing the result.
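As a sketch of that pattern, a client loop can hand each response off to a separate process and immediately re-watch from `modifiedIndex` + 1 (this assumes `jq` is installed; `my_handler` is a hypothetical processing command):

```sh
index=8
while true; do
  resp=$(curl -s "http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=${index}")
  echo "${resp}" | my_handler &   # process the event without blocking the watch
  # re-watch from the next index after the event we just received
  index=$(( $(echo "${resp}" | jq '.node.modifiedIndex') + 1 ))
done
```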
#### Watch from cleared event index

If we miss all the 1000 events, we need to recover the current state of the
watching key space through a get and then start to watch from the
`X-Etcd-Index` + 1.
watching key space. First, We do a get and then start to watch from the (etcdIndex + 1).

For example, we set `/other="bar"` for 2000 times and try to wait from index 8.
For example, we set `/foo="bar"` for 2000 times and tries to wait from index 7.

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'
```

We get an "index outdated" response, since we missed the 1000 events kept in etcd.

```
{"errorCode":401,"message":"The event in requested index is outdated and cleared","cause":"the requested history has been cleared [1008/8]","index":2007}
{"errorCode":401,"message":"The event in requested index is outdated and cleared","cause":"the requested history has been cleared [1003/7]","index":2002}
```

To start watch, first we need to fetch the current state of key `/foo`:
To start watch, first we need to fetch the current state of key `/foo` and the etcdIndex.

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo' -vv
```

```
< HTTP/1.1 200 OK
< Content-Type: application/json
< X-Etcd-Cluster-Id: 7e27652122e8b2ae
< X-Etcd-Index: 2007
< X-Etcd-Index: 2002
< X-Raft-Index: 2615
< X-Raft-Term: 2
< Date: Mon, 05 Jan 2015 18:54:43 GMT
< Transfer-Encoding: chunked
<
{"action":"get","node":{"key":"/foo","value":"bar","modifiedIndex":7,"createdIndex":7}}
{"action":"get","node":{"key":"/foo","value":"","modifiedIndex":2002,"createdIndex":2002}}
```

Unlike watches, we use the `X-Etcd-Index` + 1 of the response as a `waitIndex`
instead of the node's `modifiedIndex` + 1 for two reasons:

1. The `X-Etcd-Index` is always greater than or equal to the `modifiedIndex` when
   getting a key because `X-Etcd-Index` is the current etcd index, and the `modifiedIndex`
   is the index of an event already stored in etcd.
2. None of the events represented by indexes between `modifiedIndex` and
   `X-Etcd-Index` will be related to the key being fetched.

Using the `modifiedIndex` + 1 is functionally equivalent for subsequent
watches, but since it is smaller than the `X-Etcd-Index` + 1, we may receive a
`401 EventIndexCleared` error immediately.

So the first watch after the get should be:
The `X-Etcd-Index` is important. It is the index when we got the value of `/foo`.
So we can watch again from the (`X-Etcd-Index` + 1) without missing an event after the last get.

```sh
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2008'
curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2003'
```
### Atomically Creating In-Order Keys

Using `POST` on a directory, you can create keys with key names that are created in-order.

@@ -896,7 +870,7 @@ Here we see the `/message` key but our hidden `/_message` key is not returned.

### Setting a key from a file

You can also use etcd to store small configuration files, JSON documents, XML documents, etc directly.
You can also use etcd to store small configuration files, json documents, XML documents, etc directly.
For example you can use curl to upload a simple text file and encode it:

```

@@ -1072,4 +1046,4 @@ curl http://127.0.0.1:2379/v2/stats/store

See the [other etcd APIs][other-apis] for details on the cluster management.

[other-apis]: other_apis.md
[other-apis]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md
@@ -1,434 +0,0 @@

# v2 Auth and Security

## etcd Resources

There are three types of resources in etcd:

1. permission resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)

### Permission Resources

#### Users

A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.

A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.

#### Roles

Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.

The special static ROOT role (named `root`) has full permissions on all key-value resources and the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.

There is also a special GUEST role, named 'guest'. These are the permissions given to unauthenticated requests to etcd. This role will be created automatically, and by default allows access to the full keyspace due to backward compatibility (etcd did not previously authenticate any actions). This role can be modified by a ROOT role holder at any time, to reduce the capabilities of unauthenticated users.

#### Permissions

There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.

A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.

### Key-Value Resources

A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.

Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
A permission on `/foo` is for that exact key or directory, not its children or recursively. `/foo*` is a prefix that matches `/foo` itself, all keys under it, and all keys with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is permission on the full keyspace.
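To illustrate (the role name and paths here are hypothetical), a permission list granting read on both the exact key `/foo` and everything under `/foo/` could look like:

```
{
  "role" : "example",
  "permissions" : {
    "kv" : {
      "read" : [ "/foo", "/foo/*" ]
    }
  }
}
```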

### Settings Resources

Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).

## v2 Auth

### Basic Auth

We only support [Basic Auth](http://en.wikipedia.org/wiki/Basic_access_authentication) for the first version. Clients need to attach the basic auth credentials to the HTTP Authorization header.

### Authorization field for operations

Added to requests to /v2/keys, /v2/auth
Add code 401 Unauthorized to the set of responses from the v2 API
Authorization: Basic {encoded string}
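The encoded string is standard HTTP Basic authentication: the base64 encoding of `user:password`. For example:

```
$ echo -n 'root:betterRootPW!' | base64
cm9vdDpiZXR0ZXJSb290UFch
```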

### Future Work

Other types of auth can be considered for the future (e.g. signed certs, public keys), but the `Authorization:` header allows for other such types.

### Things out of Scope for etcd Permissions

* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (e.g. users modifying keys outside work hours)

## API endpoints

An Error JSON corresponds to:

    {
      "name": "ErrErrorName",
      "description" : "The longer helpful description of the error."
    }
#### Enable and Disable Authentication

**Get auth status**

GET /v2/auth/enable

    Sent Headers:
    Possible Status Codes:
        200 OK
    200 Body:
        {
          "enabled": true
        }

**Enable auth**

PUT /v2/auth/enable

    Sent Headers:
    Put Body: (empty)
    Possible Status Codes:
        200 OK
        400 Bad Request (if root user has not been created)
        409 Conflict (already enabled)
    200 Body: (empty)

**Disable auth**

DELETE /v2/auth/enable

    Sent Headers:
        Authorization: Basic <RootAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized (if not a root user)
        409 Conflict (already disabled)
    200 Body: (empty)

#### Users

The User JSON object is formed as follows:

```
{
  "user": "userName",
  "password": "password",
  "roles": [
    "role1",
    "role2"
  ],
  "grant": [],
  "revoke": []
}
```

Password is only passed when necessary.

**Get a list of users**

GET/HEAD /v2/auth/users

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "users": ["alice", "bob", "eve"]
        }

**Get User Details**

GET/HEAD /v2/auth/users/alice

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        404 Not Found
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "user" : "alice",
          "roles" : ["fleet", "etcd"]
        }

**Create Or Update A User**

A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields.

PUT /v2/auth/users/charlie

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        JSON struct, above, matching the appropriate name
        * Starting password and roles when creating.
        * Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
    Possible Status Codes:
        200 OK
        201 Created
        400 Bad Request
        401 Unauthorized
        404 Not Found (update non-existent users)
        409 Conflict (when granting duplicated roles or revoking non-existent roles)
    200 Headers:
        Content-type: application/json
    200 Body:
        JSON state of the user

**Remove A User**

DELETE /v2/auth/users/charlie

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        403 Forbidden (remove root user when auth is enabled)
        404 Not Found
    200 Headers:
    200 Body: (empty)
#### Roles

A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.

```
{
  "role" : "fleet",
  "permissions" : {
    "kv" : {
      "read" : [ "/fleet/" ],
      "write": [ "/fleet/" ]
    }
  },
  "grant" : {"kv": {...}},
  "revoke": {"kv": {...}}
}
```

**Get a list of Roles**

GET/HEAD /v2/auth/roles

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "roles": ["fleet", "etcd", "quay"]
        }

**Get Role Details**

GET/HEAD /v2/auth/roles/fleet

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        404 Not Found
    200 Headers:
        Content-type: application/json
    200 Body:
        {
          "role" : "fleet",
          "permissions" : {
            "kv" : {
              "read": [ "/fleet/" ],
              "write": [ "/fleet/" ]
            }
          }
        }

**Create Or Update A Role**

PUT /v2/auth/roles/rkt

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        Initial desired JSON state, including the role name for verification and:
        * Starting permission set if creating
        * Granted/Revoked permission set if updating
    Possible Status Codes:
        200 OK
        201 Created
        400 Bad Request
        401 Unauthorized
        404 Not Found (update non-existent roles)
        409 Conflict (when granting duplicated permission or revoking non-existent permission)
    200 Body:
        JSON state of the role

**Remove A Role**

DELETE /v2/auth/roles/rkt

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        401 Unauthorized
        403 Forbidden (remove root)
        404 Not Found
    200 Headers:
    200 Body: (empty)
## Example Workflow

Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.

### Create root role

```
PUT /v2/auth/users/root
Put Body:
    {"user" : "root", "password": "betterRootPW!"}
```

### Enable auth

```
PUT /v2/auth/enable
```

### Modify guest role (revoke write permission)

```
PUT /v2/auth/roles/guest
Headers:
    Authorization: Basic <root:betterRootPW!>
Put Body:
    {
      "role" : "guest",
      "revoke" : {
        "kv" : {
          "write": [
            "*"
          ]
        }
      }
    }
```

### Create Roles for the Applications

Create the rkt role fully specified:

```
PUT /v2/auth/roles/rkt
Headers:
    Authorization: Basic <root:betterRootPW!>
Body:
    {
      "role" : "rkt",
      "permissions" : {
        "kv": {
          "read": [
            "/rkt/*"
          ],
          "write": [
            "/rkt/*"
          ]
        }
      }
    }
```

But let's make fleet just a basic role for now:

```
PUT /v2/auth/roles/fleet
Headers:
    Authorization: Basic <root:betterRootPW!>
Body:
    {
      "role" : "fleet"
    }
```

### Optional: Grant some permissions to the roles

Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rkt case. So this step is optional.)

```
PUT /v2/auth/roles/fleet
Headers:
    Authorization: Basic <root:betterRootPW!>
Put Body:
    {
      "role" : "fleet",
      "grant" : {
        "kv" : {
          "read": [
            "/rkt/fleet",
            "/fleet/*"
          ]
        }
      }
    }
```

### Create Users

Same as before, let's use rocket all at once and fleet separately.

```
PUT /v2/auth/users/rktuser
Headers:
    Authorization: Basic <root:betterRootPW!>
Body:
    {"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
```

```
PUT /v2/auth/users/fleetuser
Headers:
    Authorization: Basic <root:betterRootPW!>
Body:
    {"user" : "fleetuser", "password" : "fleetpw"}
```

### Optional: Grant Roles to Users

Likewise, let's explicitly grant fleetuser access.

```
PUT /v2/auth/users/fleetuser
Headers:
    Authorization: Basic <root:betterRootPW!>
Body:
    {"user": "fleetuser", "grant": ["fleet"]}
```

#### Start to use fleetuser and rktuser

For example:

```
PUT /v2/keys/rkt/RktData
Headers:
    Authorization: Basic <rktuser:rktpw>
Body:
    value=launch
```

Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.
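For instance (continuing the same sketch style), a write by `rktuser` outside the `/rkt/*` prefix would be rejected:

```
PUT /v2/keys/fleet/FleetData
Headers:
    Authorization: Basic <rktuser:rktpw>
Body:
    value=launch
```

Since `rktuser` holds only the `rkt` role, this request fails with 401 Unauthorized.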

@@ -1,179 +0,0 @@

# Authentication Guide

**NOTE: The authentication feature is considered experimental. We may change the workflow without warning in future releases.**

## Overview

Authentication -- having users and roles in etcd -- was added in etcd 2.1. This guide will help you set up basic authentication in etcd.

etcd before 2.1 was a completely open system; anyone with access to the API could change keys. In order to preserve backward compatibility and upgradability, this feature is off by default.

For a full discussion of the RESTful API, see [the authentication API documentation](auth_api.md).

## Special Users and Roles

There is one special user, `root`, and there are two special roles, `root` and `guest`.

### User `root`

User `root` must be created before security can be activated. It has the `root` role and allows for the changing of anything inside etcd. The idea behind the `root` user is for recovery purposes -- a password is generated and stored somewhere -- and the root role is granted to the administrator accounts on the system. In the future, for troubleshooting and recovery, we will need to assume some access to the system, and future documentation will assume this root user (though anyone with the role will suffice).

### Role `root`

Role `root` cannot be modified, but it may be granted to any user. Having access via the root role not only allows global read-write access (as was the case before 2.1) but allows modification of the authentication policy and all administrative things, like modifying the cluster membership.

### Role `guest`

The `guest` role defines the permissions granted to any request that does not provide an authentication. This will be created on security activation (if it doesn't already exist) to have full access to all keys, as was true in etcd 2.0. It may be modified at any time, and cannot be removed.

## Working with users

The `user` subcommand for `etcdctl` handles all things having to do with user accounts.

A listing of users can be found with

```
$ etcdctl user list
```

Creating a user is as easy as

```
$ etcdctl user add myusername
```

And there will be a prompt for a new password.

Roles can be granted and revoked for a user with

```
$ etcdctl user grant myusername -roles foo,bar,baz
$ etcdctl user revoke myusername -roles bar,baz
```

We can look at this user with

```
$ etcdctl user get myusername
```

And the password for a user can be changed with

```
$ etcdctl user passwd myusername
```

Which will prompt again for a new password.

To delete an account, there's always

```
$ etcdctl user remove myusername
```

## Working with roles

The `role` subcommand for `etcdctl` handles all things having to do with access controls for particular roles, as were granted to individual users.

A listing of roles can be found with

```
$ etcdctl role list
```

A new role can be created with

```
$ etcdctl role add myrolename
```

A role has no password; we are merely defining a new set of access rights.

Roles are granted access to various parts of the keyspace, a single path at a time.

Reading a path is simple; if the path ends in `*`, that key **and all keys prefixed with it** are granted to holders of this role. If it does not end in `*`, only that key and that key alone is granted.

Access can be granted as either read, write, or both, as in the following examples:

```
# Give read access to keys under the /foo directory
$ etcdctl role grant myrolename -path '/foo/*' -read

# Give write-only access to the key at /foo/bar
$ etcdctl role grant myrolename -path '/foo/bar' -write

# Give full access to keys under /pub
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```

Beware that

```
# Give full access to keys under /pub??
$ etcdctl role grant myrolename -path '/pub*' -readwrite
```

without the slash may also include keys under `/publishing`, for example. To grant both the exact key and everything under it, grant both `/pub` and `/pub/*`.
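For example, to cover the exact key `/pub` as well as everything under `/pub/`:

```
$ etcdctl role grant myrolename -path '/pub' -readwrite
$ etcdctl role grant myrolename -path '/pub/*' -readwrite
```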

To see what's granted, we can look at the role at any time:

```
$ etcdctl role get myrolename
```

Revocation of permissions is done the same logical way:

```
$ etcdctl role revoke myrolename -path '/foo/bar' -write
```

As is removing a role entirely:

```
$ etcdctl role remove myrolename
```

## Enabling authentication

The minimal steps to enabling auth follow. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.

Make sure the root user is created:

```
$ etcdctl user add root
New password:
```

And enable authentication:

```
$ etcdctl auth enable
```

After this, etcd is running with authentication enabled. To disable it for any reason, use the reciprocal command:

```
$ etcdctl -u root:rootpw auth disable
```

It would also be good to check what guests (unauthenticated users) are allowed to do:

```
$ etcdctl -u root:rootpw role get guest
```

And modify this role appropriately, depending on your policies.
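One hedged possibility, using the same grant/revoke syntax shown above, is to revoke unauthenticated writes to the whole keyspace (the path pattern here is illustrative):

```
$ etcdctl -u root:rootpw role revoke guest -path '/*' -write
```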

## Using `etcdctl` to authenticate

`etcdctl` supports a flag similar to `curl`'s for authentication.

```
$ etcdctl -u user:password get foo
```

or if you prefer to be prompted:

```
$ etcdctl -u user get foo
```

Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
@@ -1,10 +1,10 @@

# Backward Compatibility
### Backward Compatibility

The main goal of the etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provide a new set of APIs.

The other main focus of this release was a more reliable Raft implementation, but as this change is internal it should not have any notable effects on users.

## Command Line Flags Changes
#### Command Line Flags Changes

The major flag changes are mostly related to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports.
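For example, during a transition a member can advertise both the new IANA ports and the legacy ports (addresses illustrative):

```
$ etcd -name infra0 \
-advertise-client-urls http://10.0.1.10:2379,http://10.0.1.10:4001 \
-initial-advertise-peer-urls http://10.0.1.10:2380,http://10.0.1.10:7001
```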

@@ -20,56 +20,16 @@ The major flag changes are to mostly related to bootstrapping. The `initial-*` f

The documentation of new command line flags can be found at
https://github.com/coreos/etcd/blob/master/Documentation/configuration.md.

## Data Directory Naming
#### Data Dir
- Default data dir location has changed from {$hostname}.etcd to {name}.etcd.

The default data dir location has changed from {$hostname}.etcd to {name}.etcd.
- The disk format within the data dir has changed. etcd 2.0 should be able to auto upgrade the old data format. Instructions on doing so manually are in the [migration tool doc][migrationtooldoc].

## Data Directory Migration
[migrationtooldoc]: https://github.com/coreos/etcd/blob/master/Documentation/0_4_migration_tool.md

The disk format within the data directory changed with etcd 2.0.
If you run etcd 2.0 on an etcd 0.4 data directory it will automatically migrate the data and start.
You will want to coordinate this upgrade by walking through each of your machines in the cluster, stopping etcd 0.4 and then starting etcd 2.0.
If you would rather manually do the migration, to test it out first in another environment, you can use the [migration tool doc][migrationtooldoc].
#### Key-Value API

[migrationtooldoc]: https://github.com/coreos/etcd/blob/master/tools/etcd-migrate/README.md

## Snapshot Migration

If you are only interested in the data in etcd you can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.0 cluster using a snapshot migration.
The advantage of this method is that you are directly dumping only the etcd data so you can run your old and new cluster side-by-side, snapshot the data, import it and then point your applications at this cluster.
The disadvantage is that the etcd indexes of your data will change, which may confuse applications that use etcd.

To get started, get the newest data snapshot from the 0.4.9+ cluster:

```
curl http://cluster.example.com:4001/v2/migration/snapshot > backup.snap
```

Now, import the snapshot into your new cluster:

```
etcdctl -C new_cluster.example.com import --snap backup.snap
```

If you have a large amount of data, you can specify more concurrent workers to copy data in parallel by using the `-c` flag.
If you have hidden keys to copy, you can use the `--hidden` flag to specify them.
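Putting those flags together, an import command might look like this (the worker count is illustrative):

```
etcdctl -C new_cluster.example.com import --snap backup.snap -c 16 --hidden
```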

And the data will quickly copy into the new cluster:

```
entering dir: /
entering dir: /foo
entering dir: /foo/bar
copying key: /foo/bar/1 1
entering dir: /
entering dir: /foo2
entering dir: /foo2/bar2
copying key: /foo2/bar2/2 2
```

## Key-Value API

### Read consistency flag
##### Read consistency flag

The consistent flag for read operations is removed in etcd 2.0.0. The normal read operations provide the same consistency guarantees as the 0.4.6 read operations with the consistent flag set.

@@ -79,14 +39,14 @@ The consistent read guarantees the sequential consistency within one client that

Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.

Reads do not provide linearizability. If you want linearizable read, you need to set quorum option to true.
Reads do not provide linearizability. If you want linearizabilable read, you need to set quorum option to true.
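For example, a quorum (linearizable) read can be requested with the `quorum` query parameter:

```
curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
```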

**Previous behavior**

We added an option for a consistent read in the old version of etcd since etcd 0.x redirects the write request to the leader. When the user gets back the result from the leader, the member it sent the request to originally might not have applied the write request yet. With the consistent flag set to true, the client will always send read requests to the leader. So one client should be able to see its last write when consistent=true is enabled. There are no ordering guarantees among different clients.

## Standby
#### Standby

etcd 0.4’s standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of problems standby was solving.

@@ -94,21 +54,21 @@ Standby mode was intended for large clusters that had a subset of the members ac

Proxy mode in 2.0 will provide similar functionality, and with improved control over which machines act as proxies due to the operator specifically configuring them. Proxies also support read only or read/write modes for increased security and durability.

[proxymode]: proxy.md
[proxymode]: https://github.com/coreos/etcd/blob/master/Documentation/proxy.md

## Discovery Service
#### Discovery Service

A size key needs to be provided inside a [discovery token][discoverytoken].

[discoverytoken]: clustering.md#custom-etcd-discovery-service
[discoverytoken]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#custom-etcd-discovery-service
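With a custom discovery service, the size key is set on the token's `_config` path before bootstrapping; for example, for a three-member cluster:

```
curl -X PUT https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83/_config/size -d value=3
```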

## HTTP Admin API
#### HTTP Admin API

`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/member API][memberapi] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.

[memberapi]: other_apis.md
[memberapi]: https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md

## HTTP Key Value API
- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
#### HTTP Key Value API
- The follower can now transparently proxy write equests to the leader. Clients will no longer see 307 redirections to the leader from etcd.

- Expiration time is in UTC instead of local time.

@@ -1,5 +0,0 @@

# Benchmarks

etcd benchmarks will be published regularly and tracked for each release below:

- [etcd v2.1.0](etcd-2-1-0-benchmarks.md)
@@ -1,49 +0,0 @@

## Physical machines

GCE n1-highcpu-2 machine type

- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
- etcd version 2.1.0

## etcd Cluster

3 etcd members, each runs on a single machine

## Testing

Bootstrap another machine and use the benchmark tool [boom](https://github.com/rakyll/boom) to send requests to each etcd member.
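An invocation might look like the following (request count and concurrency level are illustrative):

```
$ boom -n 10000 -c 64 http://10.0.1.10:2379/v2/keys/foo
```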

## Performance

### reading one single key

| key size in bytes | number of clients | target etcd server | read QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|----------|------------------------------|
| 64 | 1 | leader only | 1534 | 0.7 |
| 64 | 64 | leader only | 10125 | 9.1 |
| 64 | 256 | leader only | 13892 | 27.1 |
| 256 | 1 | leader only | 1530 | 0.8 |
| 256 | 64 | leader only | 10106 | 10.1 |
| 256 | 256 | leader only | 14667 | 27.0 |
| 64 | 64 | all servers | 24200 | 3.9 |
| 64 | 256 | all servers | 33300 | 11.8 |
| 256 | 64 | all servers | 24800 | 3.9 |
| 256 | 256 | all servers | 33000 | 11.5 |

### writing one single key

| key size in bytes | number of clients | target etcd server | write QPS | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-----------|------------------------------|
| 64 | 1 | leader only | 60 | 21.4 |
| 64 | 64 | leader only | 1742 | 46.8 |
| 64 | 256 | leader only | 3982 | 90.5 |
| 256 | 1 | leader only | 58 | 20.3 |
| 256 | 64 | leader only | 1770 | 47.8 |
| 256 | 256 | leader only | 4157 | 105.3 |
| 64 | 64 | all servers | 1028 | 123.4 |
| 64 | 256 | all servers | 3260 | 123.8 |
| 256 | 64 | all servers | 1033 | 121.5 |
| 256 | 256 | all servers | 3061 | 119.3 |
@@ -1,24 +0,0 @@

## Branch Management

### Guide

- New development occurs on the [master branch](https://github.com/coreos/etcd/tree/master)
- Master branch should always have a green build!
- Backwards-compatible bug fixes should target the master branch and subsequently be ported to stable branches
- Once the master branch is ready for release, it will be tagged and become the new stable branch.

The etcd team has adopted a _rolling release model_ and supports one stable version of etcd.

### Master branch

The `master` branch is our development branch. All new features land here first.

If you want to try new features, pull `master` and play with it. Note that `master` may not be stable because new features may introduce bugs.

Before the release of the next stable version, feature PRs will be frozen. We will focus on testing, bug fixes and documentation for one to two weeks.

### Stable branches

All branches with the prefix `release-` are considered _stable_ branches.

After every minor release (http://semver.org/), we will have a new stable branch for that release. We will keep fixing backwards-compatible bugs for the latest stable release, but not previous releases. A _patch_ release, incorporating any bug fixes, will be cut once every two weeks, given any patches.
@@ -43,8 +43,6 @@ On each machine you would start etcd with these flags:

```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```

@@ -52,8 +50,6 @@ $ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \

```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls http://10.0.1.11:2380 \
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.11:2379 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```

@@ -61,8 +57,6 @@ $ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \

```
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
-listen-peer-urls http://10.0.1.12:2380 \
-listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.12:2379 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state new
```

@@ -77,8 +71,6 @@ In the following example, we have not included our new host in the list of enume

```
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
-listen-peer-urls https://10.0.1.11:2380 \
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.11:2379 \
-initial-cluster infra0=http://10.0.1.10:2380 \
-initial-cluster-state new
etcd: infra1 not listed in the initial cluster config
```

@@ -90,8 +82,6 @@ In this example, we are attempting to map a node (infra0) on a different address

```
$ etcd -name infra0 -initial-advertise-peer-urls http://127.0.0.1:2380 \
-listen-peer-urls http://10.0.1.10:2380 \
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.10:2379 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380 \
-initial-cluster-state=new
etcd: error setting up initial cluster: infra0 has different advertised URLs in the cluster and advertised peer URLs list
```

@@ -103,8 +93,6 @@ If you configure a peer with a different set of configuration and attempt to joi

```
$ etcd -name infra3 -initial-advertise-peer-urls http://10.0.1.13:2380 \
-listen-peer-urls http://10.0.1.13:2380 \
-listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
-advertise-client-urls http://10.0.1.13:2379 \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra3=http://10.0.1.13:2380 \
-initial-cluster-state=new
etcd: conflicting cluster ID to the target cluster (c6ab534d07e8fcc4 != bc25ea2a74fb18b0). Exiting.
```

@@ -128,7 +116,7 @@ A discovery URL identifies a unique etcd cluster. Instead of reusing a discovery

Moreover, discovery URLs should ONLY be used for the initial bootstrapping of a cluster. To change cluster membership after the cluster is already running, see the [runtime reconfiguration][runtime] guide.

[runtime]: runtime-configuration.md
[runtime]: https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md

#### Custom etcd Discovery Service
@@ -149,22 +137,16 @@ Now we start etcd with those relevant flags for each member:
|
||||
```
|
||||
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.10:2379 \
|
||||
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
|
||||
```
|
||||
```
|
||||
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
|
||||
-listen-peer-urls http://10.0.1.11:2380 \
|
||||
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.11:2379 \
|
||||
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
|
||||
```
|
||||
```
|
||||
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
|
||||
-listen-peer-urls http://10.0.1.12:2380 \
|
||||
-listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.12:2379 \
|
||||
-discovery https://myetcd.local/v2/keys/discovery/6c007a14875d53d9bf0ef5a6fc0257c817f0fb83
|
||||
```
|
||||
|
||||
@@ -199,22 +181,16 @@ Now we start etcd with those relevant flags for each member:
|
||||
```
|
||||
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.10:2379 \
|
||||
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
|
||||
```
|
||||
```
|
||||
$ etcd -name infra1 -initial-advertise-peer-urls http://10.0.1.11:2380 \
|
||||
-listen-peer-urls http://10.0.1.11:2380 \
|
||||
-listen-client-urls http://10.0.1.11:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.11:2379 \
|
||||
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
|
||||
```
|
||||
```
|
||||
$ etcd -name infra2 -initial-advertise-peer-urls http://10.0.1.12:2380 \
|
||||
-listen-peer-urls http://10.0.1.12:2380 \
|
||||
-listen-client-urls http://10.0.1.12:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.12:2379 \
|
||||
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
|
||||
```
|
||||
|
||||
@@ -230,8 +206,6 @@ You can use the environment variable `ETCD_DISCOVERY_PROXY` to cause etcd to use
|
||||
```
|
||||
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.10:2379 \
|
||||
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
|
||||
etcd: error: the cluster doesn’t have a size configuration value in https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de/_config
|
||||
exit 1
|
||||
@@ -244,8 +218,6 @@ This error will occur if the discovery cluster already has the configured number
|
||||
```
|
||||
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-peer-urls http://10.0.1.10:2380 \
|
||||
-listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
|
||||
-advertise-client-urls http://10.0.1.10:2379 \
|
||||
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de \
|
||||
-discovery-fallback exit
|
||||
etcd: discovery: cluster is full
|
||||
@@ -260,8 +232,6 @@ ignored on this machine.
```
$ etcd -name infra0 -initial-advertise-peer-urls http://10.0.1.10:2380 \
  -listen-peer-urls http://10.0.1.10:2380 \
  -listen-client-urls http://10.0.1.10:2379,http://127.0.0.1:2379 \
  -advertise-client-urls http://10.0.1.10:2379 \
  -discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcdserver: discovery token ignored since a cluster has already been initialized. Valid log found at /var/lib/etcd
```

@@ -294,7 +264,7 @@ infra2.example.com. 300 IN A 10.0.1.12
```

#### Bootstrap the etcd cluster using DNS

etcd cluster members can listen on domain names or IP address, the bootstrap process will resolve DNS A records.
etcd cluster members can listen on domain names or IP addresses; the bootstrap process will resolve DNS A records.

```
$ etcd -name infra0 \
@@ -13,7 +13,6 @@ To start etcd automatically using custom settings at startup in Linux, using a [
##### -name
+ Human-readable name for this member.
+ default: "default"
+ This value is referenced as this node's own entries listed in the `-initial-cluster` flag (Ex: `default=http://localhost:2380` or `default=http://localhost:2380,default=http://localhost:7001`). This needs to match the key used in the flag if you're using [static bootstrapping](clustering.md#static).

##### -data-dir
+ Path to the data directory.
@@ -28,7 +27,7 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ default: "100"

##### -election-timeout
+ Time (in milliseconds) for an election to timeout. See [Documentation/tuning.md](tuning.md#time-parameters) for details.
+ Time (in milliseconds) for an election to timeout.
+ default: "1000"

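As an illustration (a hypothetical invocation; the values shown are the defaults, and the two time parameters should be tuned together as discussed in tuning.md):

```
$ etcd -name infra0 -heartbeat-interval=100 -election-timeout=1000
```
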
##### -listen-peer-urls
@@ -67,7 +66,6 @@ To start etcd automatically using custom settings at startup in Linux, using a [
##### -initial-cluster
+ Initial cluster configuration for bootstrapping.
+ default: "default=http://localhost:2380,default=http://localhost:7001"
+ The key is the value of the `-name` flag for each node provided. The default uses `default` for the key because this is the default for the `-name` flag.

##### -initial-cluster-state
+ Initial cluster state ("new" or "existing"). Set to `new` for all members present during initial static or DNS bootstrapping. If this option is set to `existing`, etcd will attempt to join the existing cluster. If the wrong value is set, etcd will attempt to start but fail safely.

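For instance (an illustrative two-member static bootstrap; the addresses are made up), the keys in `-initial-cluster` must match each member's `-name`:

```
$ etcd -name infra0 \
  -initial-advertise-peer-urls http://10.0.1.10:2380 \
  -initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380 \
  -initial-cluster-state new
```
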
@@ -107,32 +105,11 @@ To start etcd automatically using custom settings at startup in Linux, using a [
+ Proxy mode setting ("off", "readonly" or "on").
+ default: "off"

##### -proxy-failure-wait
+ Time (in milliseconds) an endpoint will be held in a failed state before being reconsidered for proxied requests.
+ default: 5000

##### -proxy-refresh-interval
+ Time (in milliseconds) of the endpoints refresh interval.
+ default: 30000

##### -proxy-dial-timeout
+ Time (in milliseconds) for a dial to timeout, or 0 to disable the timeout.
+ default: 1000

##### -proxy-write-timeout
+ Time (in milliseconds) for a write to timeout, or 0 to disable the timeout.
+ default: 5000

##### -proxy-read-timeout
+ Time (in milliseconds) for a read to timeout, or 0 to disable the timeout.
+ Don't change this value if you use watches, because they use long polling requests.
+ default: 0

### Security Flags

The security flags help to [build a secure etcd cluster][security].

##### -ca-file [DEPRECATED]
##### -ca-file
+ Path to the client server TLS CA file.
+ default: none

@@ -144,15 +121,7 @@ The security flags help to [build a secure etcd cluster][security].
+ Path to the client server TLS key file.
+ default: none

##### -client-cert-auth
+ Enable client cert authentication.
+ default: false

##### -trusted-ca-file
+ Path to the client server TLS trusted CA key file.
+ default: none

##### -peer-ca-file [DEPRECATED]
##### -peer-ca-file
+ Path to the peer server TLS CA file.
+ default: none

@@ -164,25 +133,6 @@ The security flags help to [build a secure etcd cluster][security].
+ Path to the peer server TLS key file.
+ default: none

##### -peer-client-cert-auth
+ Enable peer client cert authentication.
+ default: false

##### -peer-trusted-ca-file
+ Path to the peer server TLS trusted CA file.
+ default: none

### Logging Flags

##### -debug
+ Drop the default log level to DEBUG for all subpackages.
+ default: false (INFO for all packages)

##### -log-package-levels
+ Set individual etcd subpackages to specific log levels. An example being `etcdserver=WARNING,security=DEBUG`.
+ default: none (INFO for all packages)


### Unsafe Flags

Please be CAUTIOUS when using unsafe flags because they will break the guarantees given by the consensus protocol.
@@ -199,9 +149,9 @@ Follow the instructions when using these flags.
+ Print the version and exit.
+ default: false

[build-cluster]: clustering.md#static
[reconfig]: runtime-configuration.md
[discovery]: clustering.md#discovery
[proxy]: proxy.md
[security]: security.md
[restore]: admin_guide.md#restoring-a-backup
[build-cluster]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#static
[reconfig]: https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md
[discovery]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#discovery
[proxy]: https://github.com/coreos/etcd/blob/master/Documentation/proxy.md
[security]: https://github.com/coreos/etcd/blob/master/Documentation/security.md
[restore]: https://github.com/coreos/etcd/blob/master/Documentation/admin_guide.md#restoring-a-backup

@@ -13,8 +13,7 @@ export HostIP="192.168.12.50"
The following `docker run` command will expose the etcd client API over ports 4001 and 2379, and expose the peer port over 2380.

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
 -name etcd0 \
 -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -43,8 +42,7 @@ The main difference being the value used for the `-initial-cluster` flag, which
### etcd0

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
 -name etcd0 \
 -advertise-client-urls http://192.168.12.50:2379,http://192.168.12.50:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -58,8 +56,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380
### etcd1

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
 -name etcd1 \
 -advertise-client-urls http://192.168.12.51:2379,http://192.168.12.51:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
@@ -73,8 +70,7 @@ docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380
### etcd2

```
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd:v2.0.8 \
docker run -d -p 4001:4001 -p 2380:2380 -p 2379:2379 --name etcd quay.io/coreos/etcd:v2.0.3 \
 -name etcd2 \
 -advertise-client-urls http://192.168.12.52:2379,http://192.168.12.52:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \

BIN  Documentation/etcd-migration-steps.png  (new binary file, 7.9 KiB; not shown)
@@ -22,10 +22,6 @@ The node in each member follows raft consensus protocol to replicate logs. Clust
Peer is another member of the same cluster.

### Proposal

A proposal is a request (for example a write request or a configuration change request) that needs to go through the raft protocol.

### Client

Client is a caller of the cluster's HTTP API.

@@ -1,65 +0,0 @@
# FAQ

## Initial Bootstrapping UX

etcd initial bootstrapping is done via command line flags such as
`--initial-cluster` or `--discovery`. These flags can safely be left on the
command line after your cluster is running but they will be ignored if you have
a non-empty data dir. So, why did we decide to have this sort of odd UX?

One of the design goals of etcd is easy bringup of clusters using a one-shot
static configuration like AWS Cloud Formation, PXE booting, etc. Essentially we
want to describe several virtual machines and bring them all up at once into an
etcd cluster.

To achieve this sort of hands-free cluster bootstrap we had two other options:

**API to bootstrap**

This is problematic because it cannot be coordinated from a single service file
and we didn't want to have the etcd socket listening but unresponsive to
clients for an unbound period of time.

It would look something like this:

```
ExecStart=/usr/bin/etcd
ExecStartPost=/usr/bin/etcd init localhost:2379 --cluster=
```

**etcd init subcommand**

```
etcd init --cluster='default=http://localhost:2380,default=http://localhost:7001'...
etcd init --discovery https://discovery-example.etcd.io/193e4
```

Then after running an init step you would execute `etcd`. This however
introduced problems: we now have to define a hand-off protocol between the etcd
init process and the etcd binary itself. This is hard to coordinate in a single
service file such as:

```
ExecStartPre=/usr/bin/etcd init --cluster=....
ExecStart=/usr/bin/etcd
```

There are several error cases:

0) Init has already run and the data directory is already configured
1) Discovery fails because of network timeout, etc
2) Discovery fails because the cluster is already full and etcd needs to fall back to proxy
3) Static cluster configuration fails because of conflict, misconfiguration or timeout

In hindsight we could have made this work by doing:

```
rc  status
0   Init already ran
1   Discovery fails on network timeout, etc
0   Discovery fails for cluster full, coordinate via proxy state file
1   Static cluster configuration failed
```

Perhaps we can add the init command in a future version and deprecate it if the UX
continues to confuse people.
@@ -7,9 +7,8 @@
- [etcd-dump](https://npmjs.org/package/etcd-dump) - Command line utility for dumping/restoring etcd.
- [etcd-fs](https://github.com/xetorthio/etcd-fs) - FUSE filesystem for etcd
- [etcd-browser](https://github.com/henszey/etcd-browser) - A web-based key/value editor for etcd using AngularJS
- [etcd-lock](https://github.com/datawisesystems/etcd-lock) - Master election & distributed r/w lock implementation using etcd - Supports v2
- [etcd-lock](https://github.com/datawisesystems/etcd-lock) - A lock implementation for etcd
- [etcd-console](https://github.com/matishsiao/etcd-console) - A web-based key/value editor for etcd using PHP
- [etcd-viewer](https://github.com/nikfoundas/etcd-viewer) - An etcd key-value store editor/viewer written in Java

**Go libraries**

@@ -34,7 +33,6 @@
- [stianeikeland/node-etcd](https://github.com/stianeikeland/node-etcd) - Supports v2 (with CoffeeScript)
- [lavagetto/nodejs-etcd](https://github.com/lavagetto/nodejs-etcd) - Supports v2
- [deedubs/node-etcd-config](https://github.com/deedubs/node-etcd-config) - Supports v2

**Ruby libraries**

@@ -70,11 +68,7 @@

**Haskell libraries**

- [wereHamster/etcd-hs](https://github.com/wereHamster/etcd-hs)

**R libraries**

- [ropensci/etseed](https://github.com/ropensci/etseed)

**Tcl libraries**

- [efrecon/etcd-tcl](https://github.com/efrecon/etcd-tcl) - Supports v2, except wait.

@@ -116,5 +110,3 @@ A detailed recap of client functionalities can be found in the [clients compatib

- [skynetservices/skydns](https://github.com/skynetservices/skydns) - RFC compliant DNS server
- [xordataexchange/crypt](https://github.com/xordataexchange/crypt) - Securely store values in etcd using GPG encryption
- [spf13/viper](https://github.com/spf13/viper) - Go configuration library, reads values from ENV, pflags, files, and etcd with optional encryption
- [lytics/metafora](https://github.com/lytics/metafora) - Go distributed task library
- [ryandoyle/nss-etcd](https://github.com/ryandoyle/nss-etcd) - A GNU libc NSS module for resolving names from etcd.

@@ -1,137 +0,0 @@
## Metrics

**NOTE: The metrics feature is considered experimental. We might add/change/remove metrics without warning in future releases.**

etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server. The metrics can be used for real-time monitoring and debugging.

The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics` of etcd. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).

You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Prometheus server and monitor etcd metrics.
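
For example, a quick check from a shell (this assumes a member serving client traffic on `127.0.0.1:2379`; adjust the address for your deployment):

```sh
# Fetch the raw Prometheus-format metrics from a local member
# and keep only the etcd-namespaced series.
curl -L http://127.0.0.1:2379/metrics | grep "^etcd_"
```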
The naming of metrics follows the suggested [best practice of Prometheus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).

etcd now exposes the following metrics:

### etcdserver

| Name                            | Description                                      | Type    |
|---------------------------------|--------------------------------------------------|---------|
| file_descriptors_used_total     | The total number of file descriptors used        | Gauge   |
| proposal_durations_milliseconds | The latency distributions of committing proposal | Summary |
| pending_proposal_total          | The total number of pending proposals            | Gauge   |
| proposal_failed_total           | The total number of failed proposals             | Counter |

High file descriptor usage (`file_descriptors_used_total` near the file descriptor limit of the process) indicates a potential file descriptor exhaustion issue, which might cause etcd to fail to create new WAL files and panic.

[Proposal](glossary.md#proposal) durations (`proposal_durations_milliseconds`) give you a summary of the proposal commit latency. Latency can be introduced into this process by network and disk IO.

Pending proposals (`pending_proposal_total`) give you an idea of how many proposals are in the queue waiting for commit. An increasing pending count indicates a high client load or an unstable cluster.

Failed proposals (`proposal_failed_total`) are normally related to two issues: temporary failures related to a leader election, or longer duration downtime caused by a loss of quorum in the cluster.

### store

These metrics describe the accesses into the data store of etcd members that exist in the cluster. They
are useful to count what kind of actions are taken by users. It is also useful to see whether all etcd members
"see" the same set of data mutations, and whether reads and watches (which are local) are equally distributed.

All these metrics are prefixed with `etcd_store_`.

| Name                 | Description                                                                        | Type            |
|----------------------|------------------------------------------------------------------------------------|-----------------|
| reads_total          | Total number of reads from store, should differ among etcd members (local reads). | Counter(action) |
| writes_total         | Total number of writes to store, should be same among all etcd members.           | Counter(action) |
| reads_failed_total   | Number of failed reads from store (e.g. key missing) on local reads.              | Counter(action) |
| writes_failed_total  | Number of failed writes to store (e.g. failed compare and swap).                  | Counter(action) |
| expires_total        | Total number of expired keys (due to TTL).                                         | Counter         |
| watch_requests_total | Total number of incoming watch requests to this etcd member (local watches).      | Counter         |
| watchers             | Current count of active watchers on this etcd member.                              | Gauge           |

Both `reads_total` and `writes_total` count both successful and failed requests. `reads_failed_total` and
`writes_failed_total` count failed requests. A lot of failed writes indicate possible contentions on keys (e.g. when
doing `compareAndSet`), and read failures indicate that some clients try to access keys that don't exist.

Example Prometheus queries that may be useful from these metrics (across all etcd members):

* `sum(rate(etcd_store_reads_total{job="etcd"}[1m])) by (action)`
  `max(rate(etcd_store_writes_total{job="etcd"}[1m])) by (action)`

  Shows the rate of successful read-only/write queries by action, across all servers, across a time window of `1m`. The reason `max` is used
  for writes, as opposed to `sum` for reads, is that all etcd nodes in the cluster apply all writes to their stores.

* `sum(rate(etcd_store_watch_requests_total{job="etcd"}[1m]))`

  Shows the rate of new watch requests per second. Likely driven by how often watched keys change.

* `sum(etcd_store_watchers{job="etcd"})`

  Number of active watchers across all etcd servers.

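Following the same pattern, a query over the failure counters defined above (an illustrative query, not from the original document) can surface key contention:

* `sum(rate(etcd_store_writes_failed_total{job="etcd"}[1m])) by (action)`

  Rate of failed writes by action, across all servers. A sustained non-zero rate usually points at contended keys (e.g. heavy `compareAndSet` traffic).
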
### wal

| Name                         | Description                                      | Type    |
|------------------------------|--------------------------------------------------|---------|
| fsync_durations_microseconds | The latency distributions of fsync called by wal | Summary |
| last_index_saved             | The index of the last entry saved by wal         | Gauge   |

Abnormally high fsync duration (`fsync_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.

### snapshot

| Name                                        | Description                                                | Type    |
|---------------------------------------------|------------------------------------------------------------|---------|
| snapshot_save_total_durations_microseconds  | The total latency distributions of save called by snapshot | Summary |

Abnormally high snapshot duration (`snapshot_save_total_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.

### rafthttp

| Name                               | Description                                 | Type    | Labels                         |
|------------------------------------|---------------------------------------------|---------|--------------------------------|
| message_sent_latency_microseconds  | The latency distributions of messages sent  | Summary | sendingType, msgType, remoteID |
| message_sent_failed_total          | The total number of failed messages sent    | Counter | sendingType, msgType, remoteID |

Abnormally high message duration (`message_sent_latency_microseconds`) indicates network issues and might cause the cluster to be unstable.

An increase in message failures (`message_sent_failed_total`) indicates more severe network issues and might cause the cluster to be unstable.

Label `sendingType` is the connection type used to send messages. `message`, `msgapp` and `msgappv2` use HTTP streaming, while `pipeline` makes an HTTP request for each message.

Label `msgType` is the type of raft message. `MsgApp` is the log replication message; `MsgSnap` is the snapshot install message; `MsgProp` is the proposal forward message; the others are used to maintain raft internal status. If you have a large snapshot, you would expect a long `MsgSnap` sending latency. For other types of messages, you would expect low latency, which is comparable to your ping latency if you have enough network bandwidth.

Label `remoteID` is the member ID of the message destination.

### proxy

etcd members operating in proxy mode do not do store operations. They forward all requests
to cluster instances.

Tracking the rate of requests coming from a proxy allows one to pin down which machine is performing most reads/writes.

All these metrics are prefixed with `etcd_proxy_`.

| Name                       | Description                                                                        | Type                  |
|----------------------------|------------------------------------------------------------------------------------|-----------------------|
| requests_total             | Total number of requests by this proxy instance.                                   | Counter(method)       |
| handled_total              | Total number of fully handled requests, with responses from etcd members.          | Counter(method)       |
| dropped_total              | Total number of dropped requests due to forwarding errors to etcd members.         | Counter(method,error) |
| handling_duration_seconds  | Bucketed handling times by HTTP method, including round trip to member instances.  | Histogram(method)     |

Example Prometheus queries that may be useful from these metrics (across all etcd servers):

* `sum(rate(etcd_proxy_handled_total{job="etcd"}[1m])) by (method)`

  Rate of requests (by HTTP method) handled by all proxies, across a window of `1m`.

* `histogram_quantile(0.9, sum(increase(etcd_proxy_events_handling_time_seconds_bucket{job="etcd",method="GET"}[5m])) by (le))`
  `histogram_quantile(0.9, sum(increase(etcd_proxy_events_handling_time_seconds_bucket{job="etcd",method!="GET"}[5m])) by (le))`

  Shows the 90th percentile latency (in seconds) of handling of user requests across all proxy machines, with a window of `5m`.

* `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`

  Number of failed requests on the proxy. This should be 0; spikes here indicate connectivity issues to the etcd cluster.

@@ -4,10 +4,6 @@ etcd can now run as a transparent proxy. Running etcd as a proxy allows for easi

etcd currently supports two proxy modes: `readwrite` and `readonly`. The default mode is `readwrite`, which forwards both read and write requests to the etcd cluster. A `readonly` etcd proxy only forwards read requests to the etcd cluster, and returns `HTTP 501` to all write requests.

The proxy will shuffle the list of cluster members periodically to avoid sending all connections to a single member.

The member list used by the proxy consists of all client URLs advertised within the cluster, as specified in each member's `-advertise-client-urls` flag. If this flag is set incorrectly, requests sent to the proxy are forwarded to wrong addresses and then fail. The fix for this problem is to restart the etcd member with the correct `-advertise-client-urls` flag. After the proxy's client URL list is recalculated, which happens every 30 seconds, requests will be forwarded correctly.

### Using an etcd proxy
To start etcd in proxy mode, you need to provide three flags: `proxy`, `listen-client-urls`, and `initial-cluster` (or `discovery`).

@@ -18,7 +14,7 @@ The proxy will be listening on `listen-client-urls` and forward requests to the
#### Start an etcd proxy with a static configuration
To start a proxy that will connect to a statically defined etcd cluster, specify the `initial-cluster` flag:
```
etcd -proxy on -listen-client-urls http://127.0.0.1:8080 -initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
etcd -proxy on -listen-client-urls 127.0.0.1:8080 -initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```

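Once the proxy is up, clients talk to it exactly as they would to a regular member. For instance (an illustrative check against the proxy started above; the key and value are made up):

```sh
# Write through the proxy, then read the value back.
curl -L http://127.0.0.1:8080/v2/keys/foo -XPUT -d value=bar
curl -L http://127.0.0.1:8080/v2/keys/foo
```
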
#### Start an etcd proxy with the discovery service
@@ -27,10 +23,10 @@ If you bootstrap an etcd cluster using the [discovery service][discovery-service
To start a proxy using the discovery service, specify the `discovery` flag. The proxy will wait until the etcd cluster defined at the `discovery` url finishes bootstrapping, and then start to forward the requests.

```
etcd -proxy on -listen-client-urls http://127.0.0.1:8080 -discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
etcd -proxy on -listen-client-urls 127.0.0.1:8080 -discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```

#### Fallback to proxy mode with discovery service
If you bootstrap an etcd cluster using the [discovery service][discovery-service] with more than the expected number of etcd members, the extra etcd processes will fall back to being `readwrite` proxies by default. They will forward the requests to the cluster as described above. For example, if you create a discovery url with `size=5`, and start ten etcd processes using that same discovery url, the result will be a cluster with five etcd members and five proxies. Note that this behaviour can be disabled with the `proxy-fallback` flag.

[discovery-service]: clustering.md#discovery
[discovery-service]: https://github.com/coreos/etcd/blob/master/Documentation/clustering.md#discovery

470  Documentation/rfc/api_security.md  (new file)
@@ -0,0 +1,470 @@
# v2 Auth and Security

## etcd Resources
There are three types of resources in etcd:

1. user resources: users and roles in the user store
2. key-value resources: key-value pairs in the key-value store
3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)

### User Resources

#### Users
A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability on the resource if one of the roles has that capability.

The special static `root` user has a ROOT role. (Caps for visual aid throughout)

#### Role
Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources. A role with the `manage` permission on a key-value resource can grant/revoke capabilities on that key-value resource to other roles.

The special static ROOT role has full permissions on all key-value resources, plus the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources.

#### Permissions

There are two types of permissions, `read` and `write`. All management stems from the ROOT user.

A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes (incidentally, this is what Amazon S3 does). DENY becomes more complicated and is TBD.

### Key-Value Resources
A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.

The glob match rules are as follows:

* `*` and `\` are special characters, representing "greedy match" and "escape" respectively.
* As a corollary, `\*` and `\\` are the corresponding literal matches.
* All other bytes match exactly their bytes, starting always from the *first byte*. (For regex fans, `re.match` in Python)
* Examples:
  * `/foo` matches only the single key/directory of `/foo`
  * `/foo*` matches the prefix `/foo`, and all subdirectories/keys
  * `/foo/*/bar` matches the keys bar in any (recursive) subdirectory of `/foo`.

### Settings Resources

Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling security, replacing certificates, and any other dynamic configuration by the administrator.

## v2 Auth

### Basic Auth
We only support [Basic Auth](http://en.wikipedia.org/wiki/Basic_access_authentication) for the first version. Clients need to attach the basic auth credentials to the HTTP Authorization header.

### Authorization field for operations
Added to requests to /v2/keys, /v2/security
Add code 403 Forbidden to the set of responses from the v2 API
Authorization: Basic {encoded string}
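
As a concrete illustration (a hypothetical request; the user, password and key are made up), a client sends the credentials on every call:

```sh
# curl's -u flag base64-encodes user:password into the
# "Authorization: Basic ..." header described above.
curl -L -u alice:alicePassword http://127.0.0.1:2379/v2/keys/foo
```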

### Future Work
Other types of auth can be considered for the future (e.g. signed certs, public keys), but the `Authorization:` header allows for other such types.

### Things out of Scope for etcd Permissions

* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
* Very fine-grained access controls (e.g.: users modifying keys outside work hours)


## API endpoints

An Error JSON corresponds to:

    {
      "name": "ErrErrorName",
      "description" : "The longer helpful description of the error."
    }

#### Users

The User JSON object is formed as follows:

```
{
  "user": "userName",
  "password": "password",
  "roles": [
    "role1",
    "role2"
  ],
  "grant": [],
  "revoke": [],
  "lastModified": "2006-01-02Z04:05:07"
}
```

Password is only passed when necessary. Last Modified is set by the server and ignored in all client posts.

**Get a list of users**

GET/HEAD  /v2/security/user

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        403 Forbidden
    200 Headers:
        ETag: "<hash of list of users>"
        Content-type: application/json
    200 Body:
        {
          "users": ["alice", "bob", "eve"]
        }

**Get User Details**

GET/HEAD  /v2/security/users/alice

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
    200 Headers:
        ETag: "users/alice:<lastModified>"
        Content-type: application/json
    200 Body:
        {
          "user" : "alice",
          "roles" : ["fleet", "etcd"],
          "lastModified": "2015-02-05Z18:00:00"
        }

**Create A User**

A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are.

PUT  /v2/security/users/charlie

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        JSON struct, above, matching the appropriate name and with starting roles.
    Possible Status Codes:
        200 OK
        403 Forbidden
        409 Conflict (if exists)
    200 Headers:
        ETag: "users/charlie:<tzNow>"
    200 Body: (empty)

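In curl form (an illustrative request; the password is made up and the `root` credentials are assumed to have management rights), creating such a user might look like:

```sh
curl -L -u root:rootPassword http://127.0.0.1:2379/v2/security/users/charlie \
  -XPUT -d '{"user": "charlie", "password": "charliePassword", "roles": []}'
```
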
**Remove A User**

DELETE  /v2/security/users/charlie

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
    200 Headers:
    200 Body: (empty)

**Grant a Role(s) to a User**

PUT  /v2/security/users/charlie/grant

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        { "grantRoles" : ["fleet", "etcd"], (extra JSON data for checking OK) }
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
        409 Conflict
    200 Headers:
        ETag: "users/charlie:<tzNow>"
    200 Body:
        JSON user struct, updated. "roles" now contains the grants, and "grantRoles" is empty. If there is an error in the set of roles to be added, for example, a non-existent role, then 409 is returned, with an error JSON stating why.

**Revoke a Role(s) from a User**

PUT  /v2/security/users/charlie/revoke

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        { "revokeRoles" : ["fleet"], (extra JSON data for checking OK) }
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
        409 Conflict
    200 Headers:
        ETag: "users/charlie:<tzNow>"
    200 Body:
        JSON user struct, updated. "roles" now doesn't contain the roles, and "revokeRoles" is empty. If there is an error in the set of roles to be removed, for example, a non-existent role, then 409 is returned, with an error JSON stating why.

**Change password**

PUT  /v2/security/users/charlie/password

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        {"user": "charlie", "password": "newCharliePassword"}
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
    200 Headers:
        ETag: "users/charlie:<tzNow>"
    200 Body:
        JSON user struct, updated

#### Roles

A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
```
{
  "role" : "fleet",
  "permissions" : {
    "kv" : {
      "read" : [ "/fleet/" ],
      "write": [ "/fleet/" ]
    }
  },
  "grant" : {"kv": {...}},
  "revoke": {"kv": {...}},
  "members" : ["alice", "bob"],
  "lastModified": "2015-02-05Z18:00:00"
}
```

**Get a list of Roles**

GET/HEAD  /v2/security/roles

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        403 Forbidden
    200 Headers:
        ETag: "<hash of list of roles>"
        Content-type: application/json
    200 Body:
        {
          "roles": ["fleet", "etcd", "quay"]
        }

**Get Role Details**

GET/HEAD  /v2/security/roles/fleet

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
    200 Headers:
        ETag: "roles/fleet:<lastModified>"
        Content-type: application/json
    200 Body:
        {
          "role" : "fleet",
          "read": {
            "prefixesAllowed": ["/fleet/"]
          },
          "write": {
            "prefixesAllowed": ["/fleet/"]
          },
          "members" : ["alice", "bob"], // Reverse map optional?
          "lastModified": "2015-02-05Z18:00:00"
        }

**Create A Role**

PUT  /v2/security/roles/rocket

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        Initial desired JSON state, complete with prefixes and
    Possible Status Codes:
        201 Created
        403 Forbidden
        404 Not Found
        409 Conflict (if exists)
    200 Headers:
        ETag: "roles/rocket:<tzNow>"
    200 Body:
        JSON state of the role

**Remove A Role**

DELETE  /v2/security/roles/rocket

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
    200 Headers:
    200 Body: (empty)

**Update a Role’s Permission List for {read,write}ing**

PUT  /v2/security/roles/rocket/update

    Sent Headers:
        Authorization: Basic <BasicAuthString>
    Put Body:
        {
          "role" : "rocket",
          "grant": {
            "kv": {
              "read" : [ "/rocket/"]
            }
          },
          "revoke": {
            "kv": {
              "read" : [ "/fleet/"]
            }
          }
        }
    Possible Status Codes:
        200 OK
        403 Forbidden
        404 Not Found
    200 Headers:
        ETag: "roles/rocket:<tzNow>"
    200 Body:
        JSON state of the role, with change containing empty lists and the deltas applied appropriately.


#### TBD Management modification

## Example Workflow

Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.

### Enable security

//TODO(barakmich): Maybe this is dynamic? I don't like the idea of rebooting when we don't have to.

#### Default ROOT

etcd always has a ROOT when started with security enabled. The default username is `root`, and the password is `root`.

// TODO(barakmich): if the enabling is dynamic, perhaps that'd be a good time to set a password? Thus obviating the next section.

### Change root's password

```
PUT /v2/security/users/root/password
    Headers:
        Authorization: Basic <root:root>
    Put Body:
        {"user" : "root", "password": "betterRootPW!"}
```

//TODO(barakmich): How do you recover the root password? *This* may require a flag and a restart. `--disable-permissions`

### Create Roles for the Applications

Create the rocket role fully specified:

```
PUT /v2/security/roles/rocket
    Headers:
        Authorization: Basic <root:betterRootPW!>
    Body:
        {
          "role" : "rocket",
          "permissions" : {
            "kv": {
              "read": [
                "/rocket/"
              ],
              "write": [
                "/rocket/"
              ]
            }
          }
        }
```

But let's make fleet just a basic role for now:

```
PUT /v2/security/roles/fleet
    Headers:
        Authorization: Basic <root:betterRootPW!>
    Body:
        {
          "role" : "fleet"
        }
```

### Optional: Add some permissions to the roles

Well, we finally figured out where we want fleet to live. Let's fix it.
(Note that we avoided this in the rocket case. So this step is optional.)

```
PUT /v2/security/roles/fleet/update
    Headers:
        Authorization: Basic <root:betterRootPW!>
    Put Body:
        {
          "role" : "fleet",
          "grant" : {
            "kv" : {
              "read": [
                "/fleet/"
              ]
            }
          }
        }
```

### Create Users

Same as before, let's use rocket all at once and fleet separately

```
PUT /v2/security/users/rocketuser
    Headers:
        Authorization: Basic <root:betterRootPW!>
    Body:
        {"user" : "rocketuser", "password" : "rocketpw", "roles" : ["rocket"]}
```

```
PUT /v2/security/users/fleetuser
    Headers:
        Authorization: Basic <root:betterRootPW!>
    Body:
        {"user" : "fleetuser", "password" : "fleetpw"}
```

### Optional: Grant Roles to Users

Likewise, let's explicitly grant fleetuser access.

```
PUT /v2/security/users/fleetuser/grant
    Headers:
        Authorization: Basic <root:betterRootPW!>
    Body:
        {"user": "fleetuser", "grant": ["fleet"]}
```

#### Start to use fleetuser and rocketuser

For example:

```
PUT /v2/keys/rocket/RocketData
    Headers:
        Authorization: Basic <rocketuser:rocketpw>
```

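In curl terms (an illustrative equivalent of the request above; the host and value are made up):

```sh
curl -L -u rocketuser:rocketpw http://127.0.0.1:2379/v2/keys/rocket/RocketData \
  -XPUT -d value="some rocket data"
```
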
Reads and writes outside the prefixes granted will fail with a 403 Forbidden.
@@ -1,191 +0,0 @@
## Design

1. Flatten binary key-value space

2. Keep the event history until compaction
  - access to old versions of keys
  - user controlled history compaction

3. Support range query
  - Pagination support with limit argument
  - Support consistency guarantee across multiple range queries

4. Replace TTL key with Lease
  - more efficient, low cost keep alive
  - a logical group of TTL keys

5. Replace CAS/CAD with multi-object Tnx
  - MUCH MORE powerful and flexible

6. Support efficient watching with multiple ranges

7. RPC API supports the complete set of APIs.
  - more efficient than JSON/HTTP
  - additional tnx/lease support

8. HTTP API supports a subset of APIs.
  - easy for people to try out etcd
  - easy for people to write simple etcd applications

## Protobuf Defined API

[protobuf](./v3api.proto)

### Examples

#### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )

PutResponse {
  cluster_id = 0x1000,
  member_id = 0x1,
  index = 1,
  raft_term = 0x1,
}
```

#### Get a key (assume we have foo=bar)
```
Get ( RangeRequest { key = foo } )

RangeResponse {
  cluster_id = 0x1000,
  member_id = 0x1,
  index = 1,
  raft_term = 0x1,
  kvs = {
    {
      key = foo,
      value = bar,
      create_index = 1,
      mod_index = 1,
      version = 1;
    },
  },
}
```

#### Range over a key space (assume we have foo0=bar0… foo100=bar100)
```
Range ( RangeRequest { key = foo, end_key = foo80, limit = 30 } )

RangeResponse {
  cluster_id = 0x1000,
  member_id = 0x1,
  index = 100,
  raft_term = 0x1,
  kvs = {
    {
      key = foo0,
      value = bar0,
      create_index = 1,
      mod_index = 1,
      version = 1;
    },
    ...,
    {
      key = foo30,
      value = bar30,
      create_index = 30,
      mod_index = 30,
      version = 1;
    },
  },
}
```

#### Finish a tnx (assume we have foo0=bar0, foo1=bar1)
```
Tnx(TnxRequest {
  // mod_index of foo0 is equal to 1, mod_index of foo1 is greater than 1
  compare = {
    {compareType = equal, key = foo0, mod_index = 1},
    {compareType = greater, key = foo1, mod_index = 1}}
  },
  // if the comparison succeeds, put foo2 = bar2
  success = {PutRequest { key = foo2, value = success }},
  // if the comparison fails, put foo2=fail
  failure = {PutRequest { key = foo2, value = failure }},
)

TnxResponse {
  cluster_id = 0x1000,
  member_id = 0x1,
  index = 3,
  raft_term = 0x1,
  succeeded = true,
  responses = {
    // response of PUT foo2=success
    {
      cluster_id = 0x1000,
      member_id = 0x1,
      index = 3,
      raft_term = 0x1,
    }
  }
}
```

#### Watch on a key/range

```
Watch( WatchRequest{
    key = foo,
    end_key = fop, // prefix foo
    start_index = 20,
    end_index = 10000,
    // server decided notification frequency
    progress_notification = true,
  }
  … // this can be a watch request stream
)

// put (foo0=bar0) event at 3
WatchResponse {
  cluster_id = 0x1000,
  member_id = 0x1,
  index = 3,
  raft_term = 0x1,
  event_type = put,
  kv = {
    key = foo0,
    value = bar0,
    create_index = 1,
    mod_index = 1,
    version = 1;
  },
}
…

// a notification at 2000
WatchResponse {
  cluster_id = 0x1000,
  member_id = 0x1,
  index = 2000,
  raft_term = 0x1,
  // nil event as notification
}

…

// put (foo0=bar3000) event at 3000
WatchResponse {
  cluster_id = 0x1000,
  member_id = 0x1,
  index = 3000,
  raft_term = 0x1,
  event_type = put,
  kv = {
    key = foo0,
    value = bar3000,
    create_index = 1,
    mod_index = 3000,
    version = 2;
  },
}
…

```
@@ -1,272 +0,0 @@
syntax = "proto3";

// Interface exported by the server.
service etcd {
  // Range gets the keys in the range from the store.
  rpc Range(RangeRequest) returns (RangeResponse) {}

  // Put puts the given key into the store.
  // A put request increases the index of the store,
  // and generates one event in the event history.
  rpc Put(PutRequest) returns (PutResponse) {}

  // Delete deletes the given range from the store.
  // A delete request increases the index of the store,
  // and generates one event in the event history.
  rpc DeleteRange(DeleteRangeRequest) returns (DeleteRangeResponse) {}

  // Tnx processes all the requests in one transaction.
  // A tnx request increases the index of the store,
  // and generates events with the same index in the event history.
  rpc Tnx(TnxRequest) returns (TnxResponse) {}

  // Watch watches the events happening or that happened in etcd. Both input and output
  // are streams. One watch rpc can watch for multiple ranges and get a stream of
  // events. The whole event history can be watched unless compacted.
  rpc WatchRange(stream WatchRangeRequest) returns (stream WatchRangeResponse) {}

  // Compact compacts the event history in etcd. Users should compact the
  // event history periodically, or it will grow infinitely.
  rpc Compact(CompactionRequest) returns (CompactionResponse) {}

  // LeaseCreate creates a lease. A lease has a TTL. The lease will expire if the
  // server does not receive a keepAlive within TTL from the lease holder.
  // All keys attached to the lease will be expired and deleted if the lease expires.
  // The key expiration generates an event in the event history.
  rpc LeaseCreate(LeaseCreateRequest) returns (LeaseCreateResponse) {}

  // LeaseRevoke revokes a lease. All the keys attached to the lease will be expired and deleted.
  rpc LeaseRevoke(LeaseRevokeRequest) returns (LeaseRevokeResponse) {}

  // LeaseAttach attaches keys to a lease.
  rpc LeaseAttach(LeaseAttachRequest) returns (LeaseAttachResponse) {}

  // LeaseTnx is like Tnx. It has two additional LeaseAttachRequest lists, success and failure.
  // If the Tnx is successful, then the success list will be executed. Otherwise the failure list
  // will be executed.
  rpc LeaseTnx(LeaseTnxRequest) returns (LeaseTnxResponse) {}

  // KeepAlive keeps the lease alive.
  rpc LeaseKeepAlive(stream LeaseKeepAliveRequest) returns (stream LeaseKeepAliveResponse) {}
}

message ResponseHeader {
  // an error type message?
  optional string error = 1;
  optional uint64 cluster_id = 2;
  optional uint64 member_id = 3;
  // index of the store when the request was applied.
  optional int64 index = 4;
  // term of raft when the request was applied.
  optional uint64 raft_term = 5;
}

message RangeRequest {
  // if the range_end is not given, the request returns the key.
  optional bytes key = 1;
  // if the range_end is given, it gets the keys in range [key, range_end).
  optional bytes range_end = 2;
  // limit the number of keys returned.
  optional int64 limit = 3;
  // the response will be consistent with the previous request with the same token if the token is
  // given and is valid.
  optional bytes consistent_token = 4;
}

message RangeResponse {
  optional ResponseHeader header = 1;
  repeated KeyValue kvs = 2;
  optional bytes consistent_token = 3;
}

message PutRequest {
  optional bytes key = 1;
  optional bytes value = 2;
}

message PutResponse {
  optional ResponseHeader header = 1;
}

message DeleteRangeRequest {
  // if the range_end is not given, the request deletes the key.
  optional bytes key = 1;
  // if the range_end is given, it deletes the keys in range [key, range_end).
  optional bytes range_end = 2;
}

message DeleteRangeResponse {
  optional ResponseHeader header = 1;
}

message RequestUnion {
  oneof request {
    RangeRequest request_range = 1;
    PutRequest request_put = 2;
    DeleteRangeRequest request_delete_range = 3;
  }
}

message ResponseUnion {
  oneof response {
    RangeResponse response_range = 1;
    PutResponse response_put = 2;
    DeleteRangeResponse response_delete_range = 3;
  }
}

message Compare {
  enum CompareType {
    EQUAL = 0;
    GREATER = 1;
    LESS = 2;
  }
  optional CompareType type = 1;
  // key path
  optional bytes key = 2;
  oneof target {
    // version of the given key
    int64 version = 3;
    // create index of the given key
    int64 create_index = 4;
    // last modified index of the given key
    int64 mod_index = 5;
    // value of the given key
    bytes value = 6;
  }
}

// First all the compare requests are processed.
// If all the compares succeed, all the success
// requests will be processed.
// Otherwise all the failure requests will be processed and
// all the errors in the comparison will be returned.

// From the google paxosdb paper:
// Our implementation hinges around a powerful primitive which we call MultiOp. All other database
// operations except for iteration are implemented as a single call to MultiOp. A MultiOp is applied atomically
// and consists of three components:
// 1. A list of tests called guard. Each test in guard checks a single entry in the database. It may check
// for the absence or presence of a value, or compare with a given value. Two different tests in the guard
// may apply to the same or different entries in the database. All tests in the guard are applied and
// MultiOp returns the results. If all tests are true, MultiOp executes t op (see item 2 below), otherwise
// it executes f op (see item 3 below).
// 2. A list of database operations called t op. Each operation in the list is either an insert, delete, or
// lookup operation, and applies to a single database entry. Two different operations in the list may apply
// to the same or different entries in the database. These operations are executed if guard evaluates to
// true.
// 3. A list of database operations called f op. Like t op, but executed if guard evaluates to false.
message TnxRequest {
  repeated Compare compare = 1;
  repeated RequestUnion success = 2;
  repeated RequestUnion failure = 3;
}

message TnxResponse {
  optional ResponseHeader header = 1;
  optional bool succeeded = 2;
  repeated ResponseUnion responses = 3;
}

message KeyValue {
  optional bytes key = 1;
  // mod_index is the last modified index of the key.
  optional int64 create_index = 2;
  optional int64 mod_index = 3;
  // version is the version of the key. A deletion resets
  // the version to zero and any modification of the key
  // increases its version.
  optional int64 version = 4;
  optional bytes value = 5;
}

message WatchRangeRequest {
  // if the range_end is not given, the request returns the key.
  optional bytes key = 1;
  // if the range_end is given, it gets the keys in range [key, range_end).
  optional bytes range_end = 2;
  // start_index is an optional index (including) to watch from. No start_index is "now".
  optional int64 start_index = 3;
  // end_index is an optional index (excluding) to end watch. No end_index is "forever".
  optional int64 end_index = 4;
  optional bool progress_notification = 5;
}

message WatchRangeResponse {
  optional ResponseHeader header = 1;
  repeated Event events = 2;
}

message Event {
  enum EventType {
    PUT = 0;
    DELETE = 1;
    EXPIRE = 2;
  }
  optional EventType event_type = 1;
  // a put event contains the current key-value
  // a delete/expire event contains the previous
  // key-value
  optional KeyValue kv = 2;
}

message CompactionRequest {
  optional int64 index = 1;
}

message CompactionResponse {
  optional ResponseHeader header = 1;
}

message LeaseCreateRequest {
  // advisory ttl in seconds
  optional int64 ttl = 1;
}

message LeaseCreateResponse {
  optional ResponseHeader header = 1;
  optional int64 lease_id = 2;
  // server decided ttl in seconds
  optional int64 ttl = 3;
  optional string error = 4;
}

message LeaseRevokeRequest {
  optional int64 lease_id = 1;
}

message LeaseRevokeResponse {
  optional ResponseHeader header = 1;
}

message LeaseTnxRequest {
  optional TnxRequest request = 1;
  repeated LeaseAttachRequest success = 2;
  repeated LeaseAttachRequest failure = 3;
}

message LeaseTnxResponse {
  optional ResponseHeader header = 1;
  optional TnxResponse response = 2;
  repeated LeaseAttachResponse attach_responses = 3;
}

message LeaseAttachRequest {
  optional int64 lease_id = 1;
  optional bytes key = 2;
}

message LeaseAttachResponse {
  optional ResponseHeader header = 1;
}

message LeaseKeepAliveRequest {
  optional int64 lease_id = 1;
}

message LeaseKeepAliveResponse {
  optional ResponseHeader header = 1;
  optional int64 lease_id = 2;
  optional int64 ttl = 3;
}
@@ -57,7 +57,7 @@ To increase from 3 to 5 members you will make two add operations
To decrease from 5 to 3 you will make two remove operations

All of these examples will use the `etcdctl` command line tool that ships with etcd.
If you want to use the member API directly you can find the documentation [here](other_apis.md).
If you want to use the member API directly you can find the documentation [here](https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md).

### Remove a Member

@@ -90,10 +90,10 @@ It is safe to remove the leader, however the cluster will be inactive while a ne

Adding a member is a two step process:

* Add the new member to the cluster via the [members API](other_apis.md#post-v2members) or the `etcdctl member add` command.
* Add the new member to the cluster via the [members API](https://github.com/coreos/etcd/blob/master/Documentation/other_apis.md#post-v2members) or the `etcdctl member add` command.
* Start the new member with the new cluster configuration, including a list of the updated members (existing members + the new member).

Using `etcdctl` let's add the new member to the cluster by specifying its [name](configuration.md#-name) and [advertised peer URLs](configuration.md#-initial-advertise-peer-urls):
Using `etcdctl` let's add the new member to the cluster by specifing its [name](configuration.md#-name) and [advertised peer URLs](configuration.md#-initial-advertise-peer-urls):

```
$ etcdctl member add infra3 http://10.0.1.13:2380
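For readers who want the members API instead of `etcdctl`, a minimal Go sketch of the same add operation via the documented `POST /v2/members` endpoint; the client URL `http://10.0.1.10:2379` is an assumption, so substitute any existing member's client address:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Equivalent of `etcdctl member add infra3 http://10.0.1.13:2380`.
	body := bytes.NewBufferString(`{"peerURLs": ["http://10.0.1.13:2380"]}`)
	resp, err := http.Post("http://10.0.1.10:2379/v2/members",
		"application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect "201 Created" on success
}
```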
@@ -18,9 +18,7 @@ etcd takes several certificate related configuration options, either through com

`--key-file=<path>`: Key for the certificate. Must be unencrypted.

`--client-cert-auth`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the trusted CA; requests that don't supply a valid client certificate will fail.

`--trusted-ca-file=<path>`: Trusted certificate authority.
`--ca-file=<path>`: When this is set etcd will check all incoming HTTPS requests for a client certificate signed by the supplied CA; requests that don't supply a valid client certificate will fail.

**Peer (server-to-server / cluster) communication:**

@@ -30,9 +28,7 @@ The peer options work the same way as the client-to-server options:

`--peer-key-file=<path>`: Key for the certificate. Must be unencrypted.

`--peer-client-cert-auth`: When set, etcd will check all incoming peer requests from the cluster for valid client certificates signed by the supplied CA.

`--peer-trusted-ca-file=<path>`: Trusted certificate authority.
`--peer-ca-file=<path>`: When set, etcd will check all incoming peer requests from the cluster for valid client certificates signed by the supplied CA.

If either a client-to-server or peer certificate is supplied the key must also be set. All of these configuration options are also available through the environment variables, `ETCD_CA_FILE`, `ETCD_PEER_CA_FILE` and so on.

@@ -72,10 +68,12 @@ You need the same files mentioned in the first example for this, as well as a ke

```sh
$ etcd -name infra0 -data-dir infra0 \
 -client-cert-auth -trusted-ca-file=/path/to/ca.crt -cert-file=/path/to/server.crt -key-file=/path/to/server.key \
 -ca-file=/path/to/ca.crt -cert-file=/path/to/server.crt -key-file=/path/to/server.key \
 -advertise-client-urls https://127.0.0.1:2379 -listen-client-urls https://127.0.0.1:2379
```

Notice that the addition of the `-ca-file` option automatically enables client certificate checking.

Now try the same request as above to this server:

```sh
@@ -132,13 +130,13 @@ DISCOVERY_URL=... # from https://discovery.etcd.io/new

# member1
$ etcd -name infra1 -data-dir infra1 \
 -peer-client-cert-auth -peer-trusted-ca-file=/path/to/ca.crt -peer-cert-file=/path/to/member1.crt -peer-key-file=/path/to/member1.key \
 -ca-file=/path/to/ca.crt -cert-file=/path/to/member1.crt -key-file=/path/to/member1.key \
 -initial-advertise-peer-urls=https://10.0.1.10:2380 -listen-peer-urls=https://10.0.1.10:2380 \
 -discovery ${DISCOVERY_URL}

# member2
$ etcd -name infra2 -data-dir infra2 \
 -peer-client-cert-auth -peer-trusted-ca-file=/path/to/ca.crt -peer-cert-file=/path/to/member2.crt -peer-key-file=/path/to/member2.key \
 -ca-file=/path/to/ca.crt -cert-file=/path/to/member2.crt -key-file=/path/to/member2.key \
 -initial-advertise-peer-urls=https://10.0.1.11:2380 -listen-peer-urls=https://10.0.1.11:2380 \
 -discovery ${DISCOVERY_URL}
```
@@ -147,13 +145,6 @@ The etcd members will form a cluster and all communication between members in th

## Frequently Asked Questions

### My cluster is not working with peer TLS configuration?

The internal protocol of etcd v2.0.x uses a lot of short-lived HTTP connections.
So, when enabling TLS you may need to increase the heartbeat interval and election timeouts to reduce internal cluster connection churn.
A reasonable starting point is `--heartbeat-interval 500 --election-timeout 2500`.
This issue is resolved in the etcd v2.1.x series of releases, which uses fewer connections.

### I'm seeing an SSLv3 alert handshake failure when using SSL client authentication?

The `crypto/tls` package of `golang` checks the key usage of the certificate public key before using it.
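A minimal Go sketch of the fix on the certificate-generation side, assuming you produce client certificates yourself with `crypto/x509`: the template must carry the client-auth extended key usage, or the handshake above fails.

```go
package main

import (
	"crypto/x509"
	"fmt"
)

func main() {
	// Template fragment only; serial number, subject, validity and the
	// actual x509.CreateCertificate call are omitted for brevity.
	tmpl := x509.Certificate{
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	fmt.Println(len(tmpl.ExtKeyUsage)) // 1: client auth is present
}
```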
@@ -25,10 +25,8 @@ The election timeout should be set based on the heartbeat interval and your netw
Election timeouts should be at least 10 times your ping time so they can account for variance in your network.
For example, if the ping time between your nodes is 10ms then you should have at least a 100ms election timeout.

The upper limit of the election timeout is 50000ms, which should only be used when deploying a globally-distributed etcd cluster. First, 5s is a safe upper bound on the average global round-trip time: a reasonable round trip within the continental United States takes about 130ms, and a round trip between the US and Japan takes around 350-400ms. Since packets can be delayed and networks can be congested, 5s leaves a comfortable margin. Then, because the election timeout should be an order of magnitude larger than the broadcast (round-trip) time, 50s becomes its maximum.

You should also set your election timeout to at least 5 to 10 times your heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms you should set your election timeout to at least 250ms - 500ms.
You should also set your election timeout to at least 4 to 5 times your heartbeat interval to account for variance in leader replication.
For a heartbeat interval of 50ms you should set your election timeout to at least 200ms - 250ms.

You can override the default values on the command line:

@@ -64,3 +62,13 @@ $ etcd -snapshot-count=5000

# Environment variables:
$ ETCD_SNAPSHOT_COUNT=5000 etcd
```

You can also disable snapshotting by adding the following to your command line:

```sh
# Command line arguments:
$ etcd -snapshot=false

# Environment variables:
$ ETCD_SNAPSHOT=false etcd
```
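The two rules above combine into a simple maximum. A small Go sketch of the arithmetic; the 10x RTT and 4-5x heartbeat multipliers come from the guidance above, while the measured RTT is an assumption:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	rtt := 10 * time.Millisecond       // measured peer round-trip time (assumed)
	heartbeat := 50 * time.Millisecond // heartbeat interval in use

	byRTT := 10 * rtt       // at least 10x ping time
	byHeartbeat := 5 * heartbeat // at least 4-5x heartbeat interval
	election := byRTT
	if byHeartbeat > election {
		election = byHeartbeat
	}
	fmt.Printf("suggested: -heartbeat-interval=%d -election-timeout=%d (ms)\n",
		heartbeat/time.Millisecond, election/time.Millisecond)
}
```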
@@ -1,112 +0,0 @@
## Upgrade etcd to 2.1

In the general case, upgrading from etcd 2.0 to 2.1 can be a zero-downtime, rolling upgrade:
- one by one, stop the etcd v2.0 processes and replace them with etcd v2.1 processes
- after you are running all v2.1 processes, new features in v2.1 are available to the cluster

Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.

### Upgrade Checklists

#### Upgrade Requirement

To upgrade an existing etcd deployment to 2.1, you must be running 2.0. If you're running a version of etcd before 2.0, you must upgrade to [2.0](https://github.com/coreos/etcd/releases/tag/v2.0.13) before upgrading to 2.1.

Also, to ensure a smooth rolling upgrade, your running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.

#### Preparedness

Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.

You might also want to [back up your data directory](admin_guide.md#backing-up-the-datastore) for a potential [downgrade](#downgrade).

etcd 2.1 introduces a new [authentication](auth_api.md) feature, which is disabled by default. If your deployment depends on it, you may want to test the auth features before enabling them in production.

#### Mixed Versions

While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all of its members are upgraded to 2.1.

Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported cluster version and the supported features. For example, if you are mid-upgrade, any 2.1 features (such as the authentication feature mentioned above) won't be available.

#### Limitations

If you encounter any issues during the upgrade, you can attempt to restart the etcd process in trouble using a newer v2.1 binary to solve the problem. One known issue is that etcd v2.0.0 and v2.0.2 may panic during rolling upgrades due to an existing bug, which has been fixed since etcd v2.0.3.

It might take up to 2 minutes for the newly upgraded member to catch up with the existing cluster when the total data size is larger than 50MB (check the size of the existing snapshot for a rough estimate of the data size). In other words, it is safest to wait 2 minutes before upgrading the next member.

If you have even more data, this might take more time. If your data size is larger than 100MB you should contact us before upgrading, so we can make sure the upgrade works smoothly.

#### Downgrade

If all members have been upgraded to v2.1, the cluster will be upgraded to v2.1, and downgrading is **not possible**. If any member is still v2.0, the cluster will remain on v2.0, and you can go back to using the v2.0 binary.

Please [back up the data directory](admin_guide.md#backing-up-the-datastore) of all etcd members if you want to be able to downgrade the cluster, even after it is upgraded.

### Upgrade Procedure

#### 1. Check upgrade requirements.

```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy

$ curl http://127.0.0.1:4001/version
etcd 2.0.x
```

#### 2. Stop the existing etcd process

You will see error logging from the other etcd processes in your cluster. This is normal, since you just shut down a member.

```
2015/06/23 15:45:09 sender: error posting to 6e3bd23ae5f1eae0: dial tcp 127.0.0.1:7002: connection refused
2015/06/23 15:45:09 sender: the connection with 6e3bd23ae5f1eae0 became inactive
2015/06/23 15:45:11 rafthttp: encountered error writing to server log stream: write tcp 127.0.0.1:53783: broken pipe
2015/06/23 15:45:11 rafthttp: server streaming to 6e3bd23ae5f1eae0 at term 2 has been stopped
2015/06/23 15:45:11 stream: error sending message: stopped
2015/06/23 15:45:11 stream: stopping the stream server...
```

You can [back up your data directory](https://github.com/coreos/etcd/blob/7f7e2cc79d9c5c342a6eb1e48c386b0223cf934e/Documentation/admin_guide.md#backing-up-the-datastore) for data safety.

```
$ etcdctl backup \
      --data-dir /var/lib/etcd \
      --backup-dir /tmp/etcd_backup
```

#### 3. Drop in the etcd v2.1 binary and start the new etcd process

You will see etcd publish its information to the cluster.

```
2015/06/23 15:45:39 etcdserver: published {Name:infra2 ClientURLs:[http://localhost:4002]} to cluster e9c7614f68f35fb2
```

You can verify that the cluster has become healthy.

```
$ etcdctl cluster-health
cluster is healthy
member 6e3bd23ae5f1eae0 is healthy
member 924e2e83e93f2560 is healthy
member a8266ecf031671f3 is healthy
```

#### 4. Repeat steps 2 and 3 for all other members

#### 5. Finish

When all members are upgraded, you will see the cluster upgraded to 2.1 successfully:

```
2015/06/23 15:46:35 etcdserver: updated the cluster version from 2.0.0 to 2.1.0
```

```
$ curl http://127.0.0.1:4001/version
{"etcdserver":"2.1.x","etcdcluster":"2.1.0"}
```
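A small Go sketch of the version check in steps 1 and 5, assuming a member listens on the client port used in the examples above. Note that the JSON body only appears once the member runs 2.1; a 2.0 member answers with the plain string `etcd 2.0.x`, so the decode below would fail until the upgrade completes:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:4001/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var v struct {
		Server  string `json:"etcdserver"`
		Cluster string `json:"etcdcluster"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err) // likely still a 2.0 member: /version is plain text there
	}
	fmt.Printf("server %s, cluster %s\n", v.Server, v.Cluster)
}
```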
109
Godeps/Godeps.json
generated
@@ -6,26 +6,8 @@
	],
	"Deps": [
		{
			"ImportPath": "bitbucket.org/ww/goautoneg",
			"Comment": "null-5",
			"Rev": "75cd24fc2f2c2a2088577d12123ddee5f54e0675"
		},
		{
			"ImportPath": "github.com/beorn7/perks/quantile",
			"Rev": "b965b613227fddccbfffe13eae360ed3fa822f8d"
		},
		{
			"ImportPath": "github.com/bgentry/speakeasy",
			"Rev": "5dfe43257d1f86b96484e760f2f0c4e2559089c7"
		},
		{
			"ImportPath": "github.com/boltdb/bolt",
			"Comment": "v1.0-71-g71f28ea",
			"Rev": "71f28eaecbebd00604d87bb1de0dae8fcfa54bbd"
		},
		{
			"ImportPath": "github.com/bradfitz/http2",
			"Rev": "3e36af6d3af0e56fa3da71099f864933dea3d9fb"
			"ImportPath": "code.google.com/p/gogoprotobuf/proto",
			"Rev": "7fd1620f09261338b6b1ca1289ace83aee0ec946"
		},
		{
			"ImportPath": "github.com/codegangsta/cli",
@@ -34,100 +16,21 @@
		},
		{
			"ImportPath": "github.com/coreos/go-etcd/etcd",
			"Comment": "v2.0.0-13-g4cceaf7",
			"Rev": "4cceaf7283b76f27c4a732b20730dcdb61053bf5"
		},
		{
			"ImportPath": "github.com/coreos/go-semver/semver",
			"Rev": "568e959cd89871e61434c1143528d9162da89ef2"
		},
		{
			"ImportPath": "github.com/coreos/pkg/capnslog",
			"Rev": "99f6e6b8f8ea30b0f82769c1411691c44a66d015"
		},
		{
			"ImportPath": "github.com/gogo/protobuf/proto",
			"Rev": "64f27bf06efee53589314a6e5a4af34cdd85adf6"
		},
		{
			"ImportPath": "github.com/golang/glog",
			"Rev": "44145f04b68cf362d9c4df2182967c2275eaefed"
		},
		{
			"ImportPath": "github.com/golang/protobuf/proto",
			"Rev": "5677a0e3d5e89854c9974e1256839ee23f8233ca"
		},
		{
			"ImportPath": "github.com/google/btree",
			"Rev": "cc6329d4279e3f025a53a83c397d2339b5705c45"
			"Comment": "v0.2.0-rc1-130-g6aa2da5",
			"Rev": "6aa2da5a7a905609c93036b9307185a04a5a84a5"
		},
		{
			"ImportPath": "github.com/jonboulle/clockwork",
			"Rev": "72f9bd7c4e0c2a40055ab3d0f09654f730cce982"
		},
		{
			"ImportPath": "github.com/matttproud/golang_protobuf_extensions/pbutil",
			"Rev": "fc2b8d3a73c4867e51861bbdd5ae3c1f0869dd6a"
		},
		{
			"ImportPath": "github.com/prometheus/client_golang/model",
			"Comment": "0.5.0-10-ga842dc1",
			"Rev": "a842dc11e0621c34a71cab634d1d0190a59802a8"
		},
		{
			"ImportPath": "github.com/prometheus/client_golang/prometheus",
			"Comment": "0.5.0-10-ga842dc1",
			"Rev": "a842dc11e0621c34a71cab634d1d0190a59802a8"
		},
		{
			"ImportPath": "github.com/prometheus/client_golang/text",
			"Comment": "0.5.0-10-ga842dc1",
			"Rev": "a842dc11e0621c34a71cab634d1d0190a59802a8"
		},
		{
			"ImportPath": "github.com/prometheus/client_model/go",
			"Comment": "model-0.0.2-12-gfa8ad6f",
			"Rev": "fa8ad6fec33561be4280a8f0514318c79d7f6cb6"
		},
		{
			"ImportPath": "github.com/prometheus/procfs",
			"Rev": "ee2372b58cee877abe07cde670d04d3b3bac5ee6"
		},
		{
			"ImportPath": "github.com/stretchr/testify/assert",
			"Rev": "9cc77fa25329013ce07362c7742952ff887361f2"
		},
		{
			"ImportPath": "github.com/ugorji/go/codec",
			"Rev": "821cda7e48749cacf7cad2c6ed01e96457ca7e9d"
		},
		{
			"ImportPath": "golang.org/x/crypto/bcrypt",
			"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
		},
		{
			"ImportPath": "golang.org/x/crypto/blowfish",
			"Rev": "1351f936d976c60a0a48d728281922cf63eafb8d"
		},
		{
			"ImportPath": "golang.org/x/net/context",
			"Rev": "7dbad50ab5b31073856416cdcfeb2796d682f844"
		},
		{
			"ImportPath": "golang.org/x/oauth2",
			"Rev": "3046bc76d6dfd7d3707f6640f85e42d9c4050f50"
		},
		{
			"ImportPath": "google.golang.org/cloud/compute/metadata",
			"Rev": "f20d6dcccb44ed49de45ae3703312cb46e627db1"
		},
		{
			"ImportPath": "google.golang.org/cloud/internal",
			"Rev": "f20d6dcccb44ed49de45ae3703312cb46e627db1"
		},
		{
			"ImportPath": "google.golang.org/grpc",
			"Rev": "f5ebd86be717593ab029545492c93ddf8914832b"
			"Comment": "null-220",
			"Rev": "c5a46024776ec35eb562fa9226968b9d543bb13a"
		}
	]
}
13
Godeps/_workspace/src/bitbucket.org/ww/goautoneg/Makefile
generated
vendored
@@ -1,13 +0,0 @@
include $(GOROOT)/src/Make.inc

TARG=bitbucket.org/ww/goautoneg
GOFILES=autoneg.go

include $(GOROOT)/src/Make.pkg

format:
	gofmt -w *.go

docs:
	gomake clean
	godoc ${TARG} > README.txt
67
Godeps/_workspace/src/bitbucket.org/ww/goautoneg/README.txt
generated
vendored
@@ -1,67 +0,0 @@
PACKAGE

package goautoneg
import "bitbucket.org/ww/goautoneg"

HTTP Content-Type Autonegotiation.

The functions in this package implement the behaviour specified in
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

Copyright (c) 2011, Open Knowledge Foundation Ltd.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.

    Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in
    the documentation and/or other materials provided with the
    distribution.

    Neither the name of the Open Knowledge Foundation Ltd. nor the
    names of its contributors may be used to endorse or promote
    products derived from this software without specific prior written
    permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


FUNCTIONS

func Negotiate(header string, alternatives []string) (content_type string)
    Negotiate the most appropriate content_type given the accept header
    and a list of alternatives.

func ParseAccept(header string) (accept []Accept)
    Parse an Accept Header string returning a sorted list
    of clauses

TYPES

type Accept struct {
    Type, SubType string
    Q             float32
    Params        map[string]string
}
    Structure to represent a clause in an HTTP Accept Header


SUBDIRECTORIES

    .hg
162
Godeps/_workspace/src/bitbucket.org/ww/goautoneg/autoneg.go
generated
vendored
@@ -1,162 +0,0 @@
/*
HTTP Content-Type Autonegotiation.

The functions in this package implement the behaviour specified in
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

Copyright (c) 2011, Open Knowledge Foundation Ltd.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.

    Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in
    the documentation and/or other materials provided with the
    distribution.

    Neither the name of the Open Knowledge Foundation Ltd. nor the
    names of its contributors may be used to endorse or promote
    products derived from this software without specific prior written
    permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


*/
package goautoneg

import (
	"sort"
	"strconv"
	"strings"
)

// Structure to represent a clause in an HTTP Accept Header
type Accept struct {
	Type, SubType string
	Q             float64
	Params        map[string]string
}

// For internal use, so that we can use the sort interface
type accept_slice []Accept

func (accept accept_slice) Len() int {
	slice := []Accept(accept)
	return len(slice)
}

func (accept accept_slice) Less(i, j int) bool {
	slice := []Accept(accept)
	ai, aj := slice[i], slice[j]
	if ai.Q > aj.Q {
		return true
	}
	if ai.Type != "*" && aj.Type == "*" {
		return true
	}
	if ai.SubType != "*" && aj.SubType == "*" {
		return true
	}
	return false
}

func (accept accept_slice) Swap(i, j int) {
	slice := []Accept(accept)
	slice[i], slice[j] = slice[j], slice[i]
}

// Parse an Accept Header string returning a sorted list
// of clauses
func ParseAccept(header string) (accept []Accept) {
	parts := strings.Split(header, ",")
	accept = make([]Accept, 0, len(parts))
	for _, part := range parts {
		part := strings.Trim(part, " ")

		a := Accept{}
		a.Params = make(map[string]string)
		a.Q = 1.0

		mrp := strings.Split(part, ";")

		media_range := mrp[0]
		sp := strings.Split(media_range, "/")
		a.Type = strings.Trim(sp[0], " ")

		switch {
		case len(sp) == 1 && a.Type == "*":
			a.SubType = "*"
		case len(sp) == 2:
			a.SubType = strings.Trim(sp[1], " ")
		default:
			continue
		}

		if len(mrp) == 1 {
			accept = append(accept, a)
			continue
		}

		for _, param := range mrp[1:] {
			sp := strings.SplitN(param, "=", 2)
			if len(sp) != 2 {
				continue
			}
			token := strings.Trim(sp[0], " ")
			if token == "q" {
				a.Q, _ = strconv.ParseFloat(sp[1], 32)
			} else {
				a.Params[token] = strings.Trim(sp[1], " ")
			}
		}

		accept = append(accept, a)
	}

	slice := accept_slice(accept)
	sort.Sort(slice)

	return
}

// Negotiate the most appropriate content_type given the accept header
// and a list of alternatives.
func Negotiate(header string, alternatives []string) (content_type string) {
	asp := make([][]string, 0, len(alternatives))
	for _, ctype := range alternatives {
		asp = append(asp, strings.SplitN(ctype, "/", 2))
	}
	for _, clause := range ParseAccept(header) {
		for i, ctsp := range asp {
			if clause.Type == ctsp[0] && clause.SubType == ctsp[1] {
				content_type = alternatives[i]
				return
			}
			if clause.Type == ctsp[0] && clause.SubType == "*" {
				content_type = alternatives[i]
				return
			}
			if clause.Type == "*" && clause.SubType == "*" {
				content_type = alternatives[i]
				return
			}
		}
	}
	return
}
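For context on what is being removed here: the package's whole public surface is the two functions above. A minimal usage sketch, assuming the package were still vendored at its old import path:

```go
package main

import (
	"fmt"

	"bitbucket.org/ww/goautoneg" // the package deleted above
)

func main() {
	// q defaults to 1.0, so application/json outranks text/html;q=0.9.
	accept := "text/html;q=0.9, application/json"
	ct := goautoneg.Negotiate(accept, []string{"application/json", "text/html"})
	fmt.Println(ct) // application/json
}
```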
33
Godeps/_workspace/src/bitbucket.org/ww/goautoneg/autoneg_test.go
generated
vendored
@@ -1,33 +0,0 @@
package goautoneg

import (
	"testing"
)

var chrome = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"

func TestParseAccept(t *testing.T) {
	alternatives := []string{"text/html", "image/png"}
	content_type := Negotiate(chrome, alternatives)
	if content_type != "image/png" {
		t.Errorf("got %s expected image/png", content_type)
	}

	alternatives = []string{"text/html", "text/plain", "text/n3"}
	content_type = Negotiate(chrome, alternatives)
	if content_type != "text/html" {
		t.Errorf("got %s expected text/html", content_type)
	}

	alternatives = []string{"text/n3", "text/plain"}
	content_type = Negotiate(chrome, alternatives)
	if content_type != "text/plain" {
		t.Errorf("got %s expected text/plain", content_type)
	}

	alternatives = []string{"text/n3", "application/rdf+xml"}
	content_type = Negotiate(chrome, alternatives)
	if content_type != "text/n3" {
		t.Errorf("got %s expected text/n3", content_type)
	}
}
@@ -1,7 +1,7 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
# http://code.google.com/p/goprotobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
@@ -37,7 +37,4 @@ test: install generate-test-pbs


generate-test-pbs:
	make install
	make -C testdata
	make -C proto3_proto
	make
	make install && cd testdata && make
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -34,7 +34,6 @@ package proto_test
import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"math"
	"math/rand"
@@ -45,7 +44,7 @@ import (
	"time"

	. "./testdata"
	. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
	. "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
)

var globalO *Buffer
@@ -395,63 +394,6 @@ func TestNumericPrimitives(t *testing.T) {
	}
}

// fakeMarshaler is a simple struct implementing Marshaler and Message interfaces.
type fakeMarshaler struct {
	b   []byte
	err error
}

func (f fakeMarshaler) Marshal() ([]byte, error) {
	return f.b, f.err
}

func (f fakeMarshaler) String() string {
	return fmt.Sprintf("Bytes: %v Error: %v", f.b, f.err)
}

func (f fakeMarshaler) ProtoMessage() {}

func (f fakeMarshaler) Reset() {}

// Simple tests for proto messages that implement the Marshaler interface.
func TestMarshalerEncoding(t *testing.T) {
	tests := []struct {
		name    string
		m       Message
		want    []byte
		wantErr error
	}{
		{
			name: "Marshaler that fails",
			m: fakeMarshaler{
				err: errors.New("some marshal err"),
				b:   []byte{5, 6, 7},
			},
			// Since there's an error, nothing should be written to buffer.
			want:    nil,
			wantErr: errors.New("some marshal err"),
		},
		{
			name: "Marshaler that succeeds",
			m: fakeMarshaler{
				b: []byte{0, 1, 2, 3, 4, 127, 255},
			},
			want:    []byte{0, 1, 2, 3, 4, 127, 255},
			wantErr: nil,
		},
	}
	for _, test := range tests {
		b := NewBuffer(nil)
		err := b.Marshal(test.m)
		if !reflect.DeepEqual(test.wantErr, err) {
			t.Errorf("%s: got err %v wanted %v", test.name, err, test.wantErr)
		}
		if !reflect.DeepEqual(test.want, b.Bytes()) {
			t.Errorf("%s: got bytes %v wanted %v", test.name, b.Bytes(), test.want)
		}
	}
}

// Simple tests for bytes
func TestBytesPrimitives(t *testing.T) {
	o := old()
@@ -1047,35 +989,6 @@ func TestSubmessageUnrecognizedFields(t *testing.T) {
	}
}

// Check that an int32 field can be upgraded to an int64 field.
func TestNegativeInt32(t *testing.T) {
	om := &OldMessage{
		Num: Int32(-1),
	}
	b, err := Marshal(om)
	if err != nil {
		t.Fatalf("Marshal of OldMessage: %v", err)
	}

	// Check the size. It should be 11 bytes;
	// 1 for the field/wire type, and 10 for the negative number.
	if len(b) != 11 {
		t.Errorf("%v marshaled as %q, wanted 11 bytes", om, b)
	}

	// Unmarshal into a NewMessage.
	nm := new(NewMessage)
	if err := Unmarshal(b, nm); err != nil {
		t.Fatalf("Unmarshal to NewMessage: %v", err)
	}
	want := &NewMessage{
		Num: Int64(-1),
	}
	if !Equal(nm, want) {
		t.Errorf("nm = %v, want %v", nm, want)
	}
}

// Check that we can grow an array (repeated field) to have many elements.
// This test doesn't depend only on our encoding; for variety, it makes sure
// we create, encode, and decode the correct contents explicitly. It's therefore
@@ -1203,10 +1116,13 @@ func TestTypeMismatch(t *testing.T) {
	// Now Unmarshal it to the wrong type.
	pb2 := initGoTestField()
	err := o.Unmarshal(pb2)
	if err == nil {
		t.Error("expected error, got no error")
	} else if !strings.Contains(err.Error(), "bad wiretype") {
		t.Error("expected bad wiretype error, got", err)
	switch err {
	case ErrWrongType:
		// fine
	case nil:
		t.Error("expected wrong type error, got no error")
	default:
		t.Error("expected wrong type error, got", err)
	}
}

@@ -1387,11 +1303,10 @@ func TestAllSetDefaults(t *testing.T) {
		F_Pinf:  Float32(float32(math.Inf(1))),
		F_Ninf:  Float32(float32(math.Inf(-1))),
		F_Nan:   Float32(1.7),
		StrZero: String(""),
	}
	SetDefaults(m)
	if !Equal(m, expected) {
		t.Errorf("SetDefaults failed\n got %v\nwant %v", m, expected)
		t.Errorf(" got %v\nwant %v", m, expected)
	}
}

@@ -1740,8 +1655,7 @@ func TestEncodingSizes(t *testing.T) {
		n int
	}{
		{&Defaults{F_Int32: Int32(math.MaxInt32)}, 6},
		{&Defaults{F_Int32: Int32(math.MinInt32)}, 11},
		{&Defaults{F_Uint32: Uint32(uint32(math.MaxInt32) + 1)}, 6},
		{&Defaults{F_Int32: Int32(math.MinInt32)}, 6},
		{&Defaults{F_Uint32: Uint32(math.MaxUint32)}, 6},
	}
	for _, test := range tests {
@@ -1833,86 +1747,6 @@ func fuzzUnmarshal(t *testing.T, data []byte) {
	Unmarshal(data, pb)
}

func TestMapFieldMarshal(t *testing.T) {
	m := &MessageWithMap{
		NameMapping: map[int32]string{
			1: "Rob",
			4: "Ian",
			8: "Dave",
		},
	}
	b, err := Marshal(m)
	if err != nil {
		t.Fatalf("Marshal: %v", err)
	}

	// b should be the concatenation of these three byte sequences in some order.
	parts := []string{
		"\n\a\b\x01\x12\x03Rob",
		"\n\a\b\x04\x12\x03Ian",
		"\n\b\b\x08\x12\x04Dave",
	}
	ok := false
	for i := range parts {
		for j := range parts {
			if j == i {
				continue
			}
			for k := range parts {
				if k == i || k == j {
					continue
				}
				try := parts[i] + parts[j] + parts[k]
				if bytes.Equal(b, []byte(try)) {
					ok = true
					break
				}
			}
		}
	}
	if !ok {
		t.Fatalf("Incorrect Marshal output.\n got %q\nwant %q (or a permutation of that)", b, parts[0]+parts[1]+parts[2])
	}
	t.Logf("FYI b: %q", b)

	(new(Buffer)).DebugPrint("Dump of b", b)
}

func TestMapFieldRoundTrips(t *testing.T) {
	m := &MessageWithMap{
		NameMapping: map[int32]string{
			1: "Rob",
			4: "Ian",
			8: "Dave",
		},
		MsgMapping: map[int64]*FloatingPoint{
			0x7001: &FloatingPoint{F: Float64(2.0)},
		},
		ByteMapping: map[bool][]byte{
			false: []byte("that's not right!"),
			true:  []byte("aye, 'tis true!"),
		},
	}
	b, err := Marshal(m)
	if err != nil {
		t.Fatalf("Marshal: %v", err)
	}
	t.Logf("FYI b: %q", b)
	m2 := new(MessageWithMap)
	if err := Unmarshal(b, m2); err != nil {
		t.Fatalf("Unmarshal: %v", err)
	}
	for _, pair := range [][2]interface{}{
		{m.NameMapping, m2.NameMapping},
		{m.MsgMapping, m2.MsgMapping},
		{m.ByteMapping, m2.ByteMapping},
	} {
		if !reflect.DeepEqual(pair[0], pair[1]) {
			t.Errorf("Map did not survive a round trip.\ninitial: %v\n final: %v", pair[0], pair[1])
		}
	}
}

// Benchmarks

func testMsg() *GoTest {
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -29,7 +29,7 @@
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// Protocol buffer deep copy and merge.
// Protocol buffer deep copy.
// TODO: MessageSet and RawMessage.

package proto
@@ -118,29 +118,6 @@ func mergeAny(out, in reflect.Value) {
	case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
		reflect.String, reflect.Uint32, reflect.Uint64:
		out.Set(in)
	case reflect.Map:
		if in.Len() == 0 {
			return
		}
		if out.IsNil() {
			out.Set(reflect.MakeMap(in.Type()))
		}
		// For maps with value types of *T or []byte we need to deep copy each value.
		elemKind := in.Type().Elem().Kind()
		for _, key := range in.MapKeys() {
			var val reflect.Value
			switch elemKind {
			case reflect.Ptr:
				val = reflect.New(in.Type().Elem().Elem())
				mergeAny(val, in.MapIndex(key))
			case reflect.Slice:
				val = in.MapIndex(key)
				val = reflect.ValueOf(append([]byte{}, val.Bytes()...))
			default:
				val = in.MapIndex(key)
			}
			out.SetMapIndex(key, val)
		}
	case reflect.Ptr:
		if in.IsNil() {
			return
@@ -153,21 +130,13 @@ func mergeAny(out, in reflect.Value) {
		if in.IsNil() {
			return
		}
		if in.Type().Elem().Kind() == reflect.Uint8 {
			// []byte is a scalar bytes field, not a repeated field.
			// Make a deep copy.
			// Append to []byte{} instead of []byte(nil) so that we never end up
			// with a nil result.
			out.SetBytes(append([]byte{}, in.Bytes()...))
			return
		}
		n := in.Len()
		if out.IsNil() {
			out.Set(reflect.MakeSlice(in.Type(), 0, n))
		}
		switch in.Type().Elem().Kind() {
		case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
			reflect.String, reflect.Uint32, reflect.Uint64:
			reflect.String, reflect.Uint32, reflect.Uint64, reflect.Uint8:
			out.Set(reflect.AppendSlice(out, in))
		default:
			for i := 0; i < n; i++ {
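The map case deleted above implemented the merge rule that the test data below (Nigel/Bruce/Andrew) exercises: keys in `src` overwrite matching keys in `dst`, and other keys survive. A plain-Go sketch of just that rule, with ordinary maps standing in for generated messages:

```go
package main

import "fmt"

func main() {
	dst := map[int32]string{6: "Bruce", 7: "Andrew"}
	src := map[int32]string{6: "Nigel"}
	// mergeAny for maps: every key present in src replaces the dst entry.
	for k, v := range src {
		dst[k] = v
	}
	fmt.Println(dst) // map[6:Nigel 7:Andrew]
}
```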
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -34,7 +34,7 @@ package proto_test
import (
	"testing"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
	"github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"

	pb "./testdata"
)
@@ -79,22 +79,6 @@ func TestClone(t *testing.T) {
	if proto.Equal(m, cloneTestMessage) {
		t.Error("Mutating clone changed the original")
	}
	// Byte fields and repeated fields should be copied.
	if &m.Pet[0] == &cloneTestMessage.Pet[0] {
		t.Error("Pet: repeated field not copied")
	}
	if &m.Others[0] == &cloneTestMessage.Others[0] {
		t.Error("Others: repeated field not copied")
	}
	if &m.Others[0].Value[0] == &cloneTestMessage.Others[0].Value[0] {
		t.Error("Others[0].Value: bytes field not copied")
	}
	if &m.RepBytes[0] == &cloneTestMessage.RepBytes[0] {
		t.Error("RepBytes: repeated field not copied")
	}
	if &m.RepBytes[0][0] == &cloneTestMessage.RepBytes[0][0] {
		t.Error("RepBytes[0]: bytes field not copied")
	}
}

func TestCloneNil(t *testing.T) {
@@ -183,37 +167,6 @@ var mergeTests = []struct {
			RepBytes: [][]byte{[]byte("sham"), []byte("wow")},
		},
	},
	// Check that a scalar bytes field replaces rather than appends.
	{
		src:  &pb.OtherMessage{Value: []byte("foo")},
		dst:  &pb.OtherMessage{Value: []byte("bar")},
		want: &pb.OtherMessage{Value: []byte("foo")},
	},
	{
		src: &pb.MessageWithMap{
			NameMapping: map[int32]string{6: "Nigel"},
			MsgMapping: map[int64]*pb.FloatingPoint{
				0x4001: &pb.FloatingPoint{F: proto.Float64(2.0)},
			},
			ByteMapping: map[bool][]byte{true: []byte("wowsa")},
		},
		dst: &pb.MessageWithMap{
			NameMapping: map[int32]string{
				6: "Bruce", // should be overwritten
				7: "Andrew",
			},
		},
		want: &pb.MessageWithMap{
			NameMapping: map[int32]string{
				6: "Nigel",
				7: "Andrew",
			},
			MsgMapping: map[int64]*pb.FloatingPoint{
				0x4001: &pb.FloatingPoint{F: proto.Float64(2.0)},
			},
			ByteMapping: map[bool][]byte{true: []byte("wowsa")},
		},
	},
}

func TestMerge(t *testing.T) {
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -43,6 +43,11 @@ import (
	"reflect"
)

// ErrWrongType occurs when the wire encoding for the field disagrees with
// that specified in the type being decoded. This is usually caused by attempting
// to convert an encoded protocol buffer into a struct of the wrong type.
var ErrWrongType = errors.New("proto: field/encoding mismatch: wrong type for field")

// errOverflow is returned when an integer is too large to be represented.
var errOverflow = errors.New("proto: integer overflow")

@@ -178,7 +183,7 @@ func (p *Buffer) DecodeZigzag32() (x uint64, err error) {
func (p *Buffer) DecodeRawBytes(alloc bool) (buf []byte, err error) {
	n, err := p.DecodeVarint()
	if err != nil {
		return nil, err
		return
	}

	nb := int(n)
@@ -358,11 +363,11 @@ func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group
			if is_group {
				return nil // input is satisfied
			}
			return fmt.Errorf("proto: %s: wiretype end group for non-group", st)
			return ErrWrongType
		}
		tag := int(u >> 3)
		if tag <= 0 {
			return fmt.Errorf("proto: %s: illegal tag %d (wire type %d)", st, tag, wire)
			return fmt.Errorf("proto: illegal tag %d", tag)
		}
		fieldnum, ok := prop.decoderTags.get(tag)
		if !ok {
@@ -397,7 +402,7 @@ func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group
				// a packable field
				dec = p.packedDec
			} else {
				err = fmt.Errorf("proto: bad wiretype for field %s.%s: got wiretype %d, want %d", st, st.Field(fieldnum).Name, wire, p.WireType)
				err = ErrWrongType
				continue
			}
		}
@@ -470,15 +475,6 @@ func (o *Buffer) dec_bool(p *Properties, base structPointer) error {
	return nil
}

func (o *Buffer) dec_proto3_bool(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
	if err != nil {
		return err
	}
	*structPointer_BoolVal(base, p.field) = u != 0
	return nil
}

// Decode an int32.
func (o *Buffer) dec_int32(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
@@ -489,15 +485,6 @@ func (o *Buffer) dec_int32(p *Properties, base structPointer) error {
	return nil
}

func (o *Buffer) dec_proto3_int32(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
	if err != nil {
		return err
	}
	word32Val_Set(structPointer_Word32Val(base, p.field), uint32(u))
	return nil
}

// Decode an int64.
func (o *Buffer) dec_int64(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
@@ -508,31 +495,15 @@ func (o *Buffer) dec_int64(p *Properties, base structPointer) error {
	return nil
}

func (o *Buffer) dec_proto3_int64(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
	if err != nil {
		return err
	}
	word64Val_Set(structPointer_Word64Val(base, p.field), o, u)
	return nil
}

// Decode a string.
func (o *Buffer) dec_string(p *Properties, base structPointer) error {
	s, err := o.DecodeStringBytes()
	if err != nil {
		return err
	}
	*structPointer_String(base, p.field) = &s
	return nil
}

func (o *Buffer) dec_proto3_string(p *Properties, base structPointer) error {
	s, err := o.DecodeStringBytes()
	if err != nil {
		return err
	}
	*structPointer_StringVal(base, p.field) = s
	sp := new(string)
	*sp = s
	*structPointer_String(base, p.field) = sp
	return nil
}

@@ -671,72 +642,6 @@ func (o *Buffer) dec_slice_slice_byte(p *Properties, base structPointer) error {
	return nil
}

// Decode a map field.
func (o *Buffer) dec_new_map(p *Properties, base structPointer) error {
	raw, err := o.DecodeRawBytes(false)
	if err != nil {
		return err
	}
	oi := o.index       // index at the end of this map entry
	o.index -= len(raw) // move buffer back to start of map entry

	mptr := structPointer_Map(base, p.field, p.mtype) // *map[K]V
	if mptr.Elem().IsNil() {
		mptr.Elem().Set(reflect.MakeMap(mptr.Type().Elem()))
	}
	v := mptr.Elem() // map[K]V

	// Prepare addressable doubly-indirect placeholders for the key and value types.
	// See enc_new_map for why.
	keyptr := reflect.New(reflect.PtrTo(p.mtype.Key())).Elem() // addressable *K
	keybase := toStructPointer(keyptr.Addr())                  // **K

	var valbase structPointer
	var valptr reflect.Value
	switch p.mtype.Elem().Kind() {
	case reflect.Slice:
		// []byte
		var dummy []byte
		valptr = reflect.ValueOf(&dummy)  // *[]byte
		valbase = toStructPointer(valptr) // *[]byte
	case reflect.Ptr:
		// message; valptr is **Msg; need to allocate the intermediate pointer
		valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
		valptr.Set(reflect.New(valptr.Type().Elem()))
		valbase = toStructPointer(valptr)
	default:
		// everything else
		valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
		valbase = toStructPointer(valptr.Addr())                   // **V
	}

	// Decode.
	// This parses a restricted wire format, namely the encoding of a message
	// with two fields. See enc_new_map for the format.
	for o.index < oi {
		// tagcode for key and value properties are always a single byte
		// because they have tags 1 and 2.
		tagcode := o.buf[o.index]
		o.index++
		switch tagcode {
		case p.mkeyprop.tagcode[0]:
			if err := p.mkeyprop.dec(o, p.mkeyprop, keybase); err != nil {
				return err
			}
		case p.mvalprop.tagcode[0]:
			if err := p.mvalprop.dec(o, p.mvalprop, valbase); err != nil {
				return err
			}
		default:
			// TODO: Should we silently skip this instead?
			return fmt.Errorf("proto: bad map data tag %d", raw[0])
		}
	}

	v.SetMapIndex(keyptr.Elem(), valptr.Elem())
	return nil
}

// Decode a group.
func (o *Buffer) dec_struct_group(p *Properties, base structPointer) error {
	bas := structPointer_GetStructPointer(base, p.field)
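The `ErrWrongType` hunks above replace formatted errors with a single sentinel value, which is what the updated `TestTypeMismatch` earlier in this diff relies on. A minimal sketch of the pattern; the `decode` function here is a stand-in, not part of the package:

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel-error pattern: the decoder returns one shared error value, so
// callers can test identity instead of parsing error text.
var ErrWrongType = errors.New("proto: field/encoding mismatch: wrong type for field")

func decode() error { return ErrWrongType } // stand-in for Unmarshal

func main() {
	if err := decode(); err == ErrWrongType {
		fmt.Println("wire type disagreed with the target struct")
	}
}
```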
@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -30,6 +30,51 @@ import (
	"reflect"
)

// Decode a reference to a bool pointer.
func (o *Buffer) dec_ref_bool(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
	if err != nil {
		return err
	}
	if len(o.bools) == 0 {
		o.bools = make([]bool, boolPoolSize)
	}
	o.bools[0] = u != 0
	*structPointer_RefBool(base, p.field) = o.bools[0]
	o.bools = o.bools[1:]
	return nil
}

// Decode a reference to an int32 pointer.
func (o *Buffer) dec_ref_int32(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
	if err != nil {
		return err
	}
	refWord32_Set(structPointer_RefWord32(base, p.field), o, uint32(u))
	return nil
}

// Decode a reference to an int64 pointer.
func (o *Buffer) dec_ref_int64(p *Properties, base structPointer) error {
	u, err := p.valDec(o)
	if err != nil {
		return err
	}
	refWord64_Set(structPointer_RefWord64(base, p.field), o, u)
	return nil
}

// Decode a reference to a string pointer.
func (o *Buffer) dec_ref_string(p *Properties, base structPointer) error {
	s, err := o.DecodeStringBytes()
	if err != nil {
		return err
	}
	*structPointer_RefString(base, p.field) = s
	return nil
}

// Decode a reference to a struct pointer.
func (o *Buffer) dec_ref_struct_message(p *Properties, base structPointer) (err error) {
	raw, e := o.DecodeRawBytes(false)
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2010 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -247,7 +247,7 @@ func (p *Buffer) Marshal(pb Message) error {
|
||||
return ErrNil
|
||||
}
|
||||
if err == nil {
|
||||
err = p.enc_struct(GetProperties(t.Elem()), base)
|
||||
err = p.enc_struct(t.Elem(), GetProperties(t.Elem()), base)
|
||||
}
|
||||
|
||||
if collectStats {
|
||||
@@ -271,7 +271,7 @@ func Size(pb Message) (n int) {
|
||||
return 0
|
||||
}
|
||||
if err == nil {
|
||||
n = size_struct(GetProperties(t.Elem()), base)
|
||||
n = size_struct(t.Elem(), GetProperties(t.Elem()), base)
|
||||
}
|
||||
|
||||
if collectStats {
|
||||
@@ -298,16 +298,6 @@ func (o *Buffer) enc_bool(p *Properties, base structPointer) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_proto3_bool(p *Properties, base structPointer) error {
|
||||
v := *structPointer_BoolVal(base, p.field)
|
||||
if !v {
|
||||
return ErrNil
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, 1)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_bool(p *Properties, base structPointer) int {
|
||||
v := *structPointer_Bool(base, p.field)
|
||||
if v == nil {
|
||||
@@ -316,32 +306,13 @@ func size_bool(p *Properties, base structPointer) int {
|
||||
return len(p.tagcode) + 1 // each bool takes exactly one byte
|
||||
}
|
||||
|
||||
func size_proto3_bool(p *Properties, base structPointer) int {
|
||||
v := *structPointer_BoolVal(base, p.field)
|
||||
if !v {
|
||||
return 0
|
||||
}
|
||||
return len(p.tagcode) + 1 // each bool takes exactly one byte
|
||||
}
|
||||
|
||||
// Encode an int32.
|
||||
func (o *Buffer) enc_int32(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word32(base, p.field)
|
||||
if word32_IsNil(v) {
|
||||
return ErrNil
|
||||
}
|
||||
x := int32(word32_Get(v)) // permit sign extension to use full 64-bit range
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, uint64(x))
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_proto3_int32(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := int32(word32Val_Get(v)) // permit sign extension to use full 64-bit range
|
||||
if x == 0 {
|
||||
return ErrNil
|
||||
}
|
||||
x := word32_Get(v)
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, uint64(x))
|
||||
return nil
|
||||
@@ -352,64 +323,7 @@ func size_int32(p *Properties, base structPointer) (n int) {
|
||||
if word32_IsNil(v) {
|
||||
return 0
|
||||
}
|
||||
x := int32(word32_Get(v)) // permit sign extension to use full 64-bit range
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(uint64(x))
|
||||
return
|
||||
}
|
||||
|
||||
func size_proto3_int32(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := int32(word32Val_Get(v)) // permit sign extension to use full 64-bit range
|
||||
if x == 0 {
|
||||
return 0
|
||||
}
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(uint64(x))
|
||||
return
|
||||
}
|
||||
|
||||
// Encode a uint32.
|
||||
// Exactly the same as int32, except for no sign extension.
|
||||
func (o *Buffer) enc_uint32(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word32(base, p.field)
|
||||
if word32_IsNil(v) {
|
||||
return ErrNil
|
||||
}
|
||||
x := word32_Get(v)
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, uint64(x))
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_proto3_uint32(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := word32Val_Get(v)
|
||||
if x == 0 {
|
||||
return ErrNil
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, uint64(x))
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_uint32(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word32(base, p.field)
|
||||
if word32_IsNil(v) {
|
||||
return 0
|
||||
}
|
||||
x := word32_Get(v)
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(uint64(x))
|
||||
return
|
||||
}
|
||||
|
||||
func size_proto3_uint32(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := word32Val_Get(v)
|
||||
if x == 0 {
|
||||
return 0
|
||||
}
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(uint64(x))
|
||||
return
|
||||
@@ -427,17 +341,6 @@ func (o *Buffer) enc_int64(p *Properties, base structPointer) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_proto3_int64(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word64Val(base, p.field)
|
||||
x := word64Val_Get(v)
|
||||
if x == 0 {
|
||||
return ErrNil
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, x)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_int64(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word64(base, p.field)
|
||||
if word64_IsNil(v) {
|
||||
@@ -449,17 +352,6 @@ func size_int64(p *Properties, base structPointer) (n int) {
|
||||
return
|
||||
}
|
||||
|
||||
func size_proto3_int64(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word64Val(base, p.field)
|
||||
x := word64Val_Get(v)
|
||||
if x == 0 {
|
||||
return 0
|
||||
}
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(x)
|
||||
return
|
||||
}
|
||||
|
||||
// Encode a string.
|
||||
func (o *Buffer) enc_string(p *Properties, base structPointer) error {
|
||||
v := *structPointer_String(base, p.field)
|
||||
@@ -472,16 +364,6 @@ func (o *Buffer) enc_string(p *Properties, base structPointer) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_proto3_string(p *Properties, base structPointer) error {
|
||||
v := *structPointer_StringVal(base, p.field)
|
||||
if v == "" {
|
||||
return ErrNil
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
o.EncodeStringBytes(v)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_string(p *Properties, base structPointer) (n int) {
|
||||
v := *structPointer_String(base, p.field)
|
||||
if v == nil {
|
||||
@@ -493,16 +375,6 @@ func size_string(p *Properties, base structPointer) (n int) {
|
||||
return
|
||||
}
|
||||
|
||||
func size_proto3_string(p *Properties, base structPointer) (n int) {
|
||||
v := *structPointer_StringVal(base, p.field)
|
||||
if v == "" {
|
||||
return 0
|
||||
}
|
||||
n += len(p.tagcode)
|
||||
n += sizeStringBytes(v)
|
||||
return
|
||||
}
|
||||
|
||||
// All protocol buffer fields are nillable, but be careful.
|
||||
func isNil(v reflect.Value) bool {
|
||||
switch v.Kind() {
|
||||
@@ -533,7 +405,7 @@ func (o *Buffer) enc_struct_message(p *Properties, base structPointer) error {
|
||||
}
|
||||
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
return o.enc_len_struct(p.sprop, structp, &state)
|
||||
return o.enc_len_struct(p.stype, p.sprop, structp, &state)
|
||||
}
|
||||
|
||||
func size_struct_message(p *Properties, base structPointer) int {
|
||||
@@ -552,7 +424,7 @@ func size_struct_message(p *Properties, base structPointer) int {
|
||||
}
|
||||
|
||||
n0 := len(p.tagcode)
|
||||
n1 := size_struct(p.sprop, structp)
|
||||
n1 := size_struct(p.stype, p.sprop, structp)
|
||||
n2 := sizeVarint(uint64(n1)) // size of encoded length
|
||||
return n0 + n1 + n2
|
||||
}
|
||||
@@ -566,7 +438,7 @@ func (o *Buffer) enc_struct_group(p *Properties, base structPointer) error {
|
||||
}
|
||||
|
||||
o.EncodeVarint(uint64((p.Tag << 3) | WireStartGroup))
|
||||
err := o.enc_struct(p.sprop, b)
|
||||
err := o.enc_struct(p.stype, p.sprop, b)
|
||||
if err != nil && !state.shouldContinue(err, nil) {
|
||||
return err
|
||||
}
|
||||
@@ -581,7 +453,7 @@ func size_struct_group(p *Properties, base structPointer) (n int) {
|
||||
}
|
||||
|
||||
n += sizeVarint(uint64((p.Tag << 3) | WireStartGroup))
|
||||
n += size_struct(p.sprop, b)
|
||||
n += size_struct(p.stype, p.sprop, b)
|
||||
n += sizeVarint(uint64((p.Tag << 3) | WireEndGroup))
|
||||
return
|
||||
}
|
||||
@@ -655,16 +527,6 @@ func (o *Buffer) enc_slice_byte(p *Properties, base structPointer) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_proto3_slice_byte(p *Properties, base structPointer) error {
|
||||
s := *structPointer_Bytes(base, p.field)
|
||||
if len(s) == 0 {
|
||||
return ErrNil
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
o.EncodeRawBytes(s)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_slice_byte(p *Properties, base structPointer) (n int) {
|
||||
s := *structPointer_Bytes(base, p.field)
|
||||
if s == nil {
|
||||
@@ -675,16 +537,6 @@ func size_slice_byte(p *Properties, base structPointer) (n int) {
|
||||
return
|
||||
}
|
||||
|
||||
func size_proto3_slice_byte(p *Properties, base structPointer) (n int) {
|
||||
s := *structPointer_Bytes(base, p.field)
|
||||
if len(s) == 0 {
|
||||
return 0
|
||||
}
|
||||
n += len(p.tagcode)
|
||||
n += sizeRawBytes(s)
|
||||
return
|
||||
}
|
||||
|
||||
// Encode a slice of int32s ([]int32).
|
||||
func (o *Buffer) enc_slice_int32(p *Properties, base structPointer) error {
|
||||
s := structPointer_Word32Slice(base, p.field)
|
||||
@@ -694,7 +546,7 @@ func (o *Buffer) enc_slice_int32(p *Properties, base structPointer) error {
|
||||
}
|
||||
for i := 0; i < l; i++ {
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
|
||||
x := s.Index(i)
|
||||
p.valEnc(o, uint64(x))
|
||||
}
|
||||
return nil
|
||||
@@ -708,7 +560,7 @@ func size_slice_int32(p *Properties, base structPointer) (n int) {
|
||||
}
|
||||
for i := 0; i < l; i++ {
|
||||
n += len(p.tagcode)
|
||||
x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
|
||||
x := s.Index(i)
|
||||
n += p.valSize(uint64(x))
|
||||
}
|
||||
return
|
||||
@@ -716,75 +568,6 @@ func size_slice_int32(p *Properties, base structPointer) (n int) {
|
||||
|
||||
// Encode a slice of int32s ([]int32) in packed format.
func (o *Buffer) enc_slice_packed_int32(p *Properties, base structPointer) error {
    s := structPointer_Word32Slice(base, p.field)
    l := s.Len()
    if l == 0 {
        return ErrNil
    }
    // TODO: Reuse a Buffer.
    buf := NewBuffer(nil)
    for i := 0; i < l; i++ {
        x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
        p.valEnc(buf, uint64(x))
    }

    o.buf = append(o.buf, p.tagcode...)
    o.EncodeVarint(uint64(len(buf.buf)))
    o.buf = append(o.buf, buf.buf...)
    return nil
}

func size_slice_packed_int32(p *Properties, base structPointer) (n int) {
    s := structPointer_Word32Slice(base, p.field)
    l := s.Len()
    if l == 0 {
        return 0
    }
    var bufSize int
    for i := 0; i < l; i++ {
        x := int32(s.Index(i)) // permit sign extension to use full 64-bit range
        bufSize += p.valSize(uint64(x))
    }

    n += len(p.tagcode)
    n += sizeVarint(uint64(bufSize))
    n += bufSize
    return
}
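A hedged, standalone sketch of the packed layout the encoder above produces: one key (`field<<3 | wire type`), one byte-length varint, then the element varints back to back, instead of a key per element (field number 4 is invented for the example):

```go
package main

import "fmt"

func main() {
    // Varint-encode the elements into one contiguous payload.
    var payload []byte
    for _, x := range []uint64{3, 270} { // 270 encodes as 8e 02
        for x >= 0x80 {
            payload = append(payload, byte(x)|0x80)
            x >>= 7
        }
        payload = append(payload, byte(x))
    }
    // One key + one length for the whole slice.
    out := []byte{(4 << 3) | 2, byte(len(payload))} // field 4, wire type 2
    out = append(out, payload...)
    fmt.Printf("% x\n", out) // 22 03 03 8e 02
}
```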
|
||||
|
||||
// Encode a slice of uint32s ([]uint32).
|
||||
// Exactly the same as int32, except for no sign extension.
|
||||
func (o *Buffer) enc_slice_uint32(p *Properties, base structPointer) error {
|
||||
s := structPointer_Word32Slice(base, p.field)
|
||||
l := s.Len()
|
||||
if l == 0 {
|
||||
return ErrNil
|
||||
}
|
||||
for i := 0; i < l; i++ {
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
x := s.Index(i)
|
||||
p.valEnc(o, uint64(x))
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_slice_uint32(p *Properties, base structPointer) (n int) {
|
||||
s := structPointer_Word32Slice(base, p.field)
|
||||
l := s.Len()
|
||||
if l == 0 {
|
||||
return 0
|
||||
}
|
||||
for i := 0; i < l; i++ {
|
||||
n += len(p.tagcode)
|
||||
x := s.Index(i)
|
||||
n += p.valSize(uint64(x))
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Encode a slice of uint32s ([]uint32) in packed format.
|
||||
// Exactly the same as int32, except for no sign extension.
|
||||
func (o *Buffer) enc_slice_packed_uint32(p *Properties, base structPointer) error {
|
||||
s := structPointer_Word32Slice(base, p.field)
|
||||
l := s.Len()
|
||||
if l == 0 {
|
||||
@@ -802,7 +585,7 @@ func (o *Buffer) enc_slice_packed_uint32(p *Properties, base structPointer) erro
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_slice_packed_uint32(p *Properties, base structPointer) (n int) {
|
||||
func size_slice_packed_int32(p *Properties, base structPointer) (n int) {
|
||||
s := structPointer_Word32Slice(base, p.field)
|
||||
l := s.Len()
|
||||
if l == 0 {
|
||||
@@ -955,7 +738,7 @@ func (o *Buffer) enc_slice_struct_message(p *Properties, base structPointer) err
|
||||
}
|
||||
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
err := o.enc_len_struct(p.sprop, structp, &state)
|
||||
err := o.enc_len_struct(p.stype, p.sprop, structp, &state)
|
||||
if err != nil && !state.shouldContinue(err, nil) {
|
||||
if err == ErrNil {
|
||||
return ErrRepeatedHasNil
|
||||
@@ -985,7 +768,7 @@ func size_slice_struct_message(p *Properties, base structPointer) (n int) {
|
||||
continue
|
||||
}
|
||||
|
||||
n0 := size_struct(p.sprop, structp)
|
||||
n0 := size_struct(p.stype, p.sprop, structp)
|
||||
n1 := sizeVarint(uint64(n0)) // size of encoded length
|
||||
n += n0 + n1
|
||||
}
|
||||
@@ -1006,7 +789,7 @@ func (o *Buffer) enc_slice_struct_group(p *Properties, base structPointer) error
|
||||
|
||||
o.EncodeVarint(uint64((p.Tag << 3) | WireStartGroup))
|
||||
|
||||
err := o.enc_struct(p.sprop, b)
|
||||
err := o.enc_struct(p.stype, p.sprop, b)
|
||||
|
||||
if err != nil && !state.shouldContinue(err, nil) {
|
||||
if err == ErrNil {
|
||||
@@ -1032,7 +815,7 @@ func size_slice_struct_group(p *Properties, base structPointer) (n int) {
|
||||
return // return size up to this point
|
||||
}
|
||||
|
||||
n += size_struct(p.sprop, b)
|
||||
n += size_struct(p.stype, p.sprop, b)
|
||||
}
|
||||
return
|
||||
}
|
||||
@@ -1069,112 +852,12 @@ func size_map(p *Properties, base structPointer) int {
|
||||
return sizeExtensionMap(v)
|
||||
}
|
||||
|
||||
// Encode a map field.
func (o *Buffer) enc_new_map(p *Properties, base structPointer) error {
    var state errorState // XXX: or do we need to plumb this through?

    /*
        A map defined as
            map<key_type, value_type> map_field = N;
        is encoded in the same way as
            message MapFieldEntry {
                key_type key = 1;
                value_type value = 2;
            }
            repeated MapFieldEntry map_field = N;
    */

    v := structPointer_Map(base, p.field, p.mtype).Elem() // map[K]V
    if v.Len() == 0 {
        return nil
    }

    keycopy, valcopy, keybase, valbase := mapEncodeScratch(p.mtype)

    enc := func() error {
        if err := p.mkeyprop.enc(o, p.mkeyprop, keybase); err != nil {
            return err
        }
        if err := p.mvalprop.enc(o, p.mvalprop, valbase); err != nil {
            return err
        }
        return nil
    }

    keys := v.MapKeys()
    sort.Sort(mapKeys(keys))
    for _, key := range keys {
        val := v.MapIndex(key)

        keycopy.Set(key)
        valcopy.Set(val)

        o.buf = append(o.buf, p.tagcode...)
        if err := o.enc_len_thing(enc, &state); err != nil {
            return err
        }
    }
    return nil
}

func size_new_map(p *Properties, base structPointer) int {
    v := structPointer_Map(base, p.field, p.mtype).Elem() // map[K]V

    keycopy, valcopy, keybase, valbase := mapEncodeScratch(p.mtype)

    n := 0
    for _, key := range v.MapKeys() {
        val := v.MapIndex(key)
        keycopy.Set(key)
        valcopy.Set(val)

        // Tag codes are two bytes per map entry.
        n += 2
        n += p.mkeyprop.size(p.mkeyprop, keybase)
        n += p.mvalprop.size(p.mvalprop, valbase)
    }
    return n
}

// mapEncodeScratch returns a new reflect.Value matching the map's value type,
// and a structPointer suitable for passing to an encoder or sizer.
func mapEncodeScratch(mapType reflect.Type) (keycopy, valcopy reflect.Value, keybase, valbase structPointer) {
    // Prepare addressable doubly-indirect placeholders for the key and value types.
    // This is needed because the element-type encoders expect **T, but the map iteration produces T.

    keycopy = reflect.New(mapType.Key()).Elem()                 // addressable K
    keyptr := reflect.New(reflect.PtrTo(keycopy.Type())).Elem() // addressable *K
    keyptr.Set(keycopy.Addr())                                  //
    keybase = toStructPointer(keyptr.Addr())                    // **K

    // Value types are more varied and require special handling.
    switch mapType.Elem().Kind() {
    case reflect.Slice:
        // []byte
        var dummy []byte
        valcopy = reflect.ValueOf(&dummy).Elem() // addressable []byte
        valbase = toStructPointer(valcopy.Addr())
    case reflect.Ptr:
        // message; the generated field type is map[K]*Msg (so V is *Msg),
        // so we only need one level of indirection.
        valcopy = reflect.New(mapType.Elem()).Elem() // addressable V
        valbase = toStructPointer(valcopy.Addr())
    default:
        // everything else
        valcopy = reflect.New(mapType.Elem()).Elem()                // addressable V
        valptr := reflect.New(reflect.PtrTo(valcopy.Type())).Elem() // addressable *V
        valptr.Set(valcopy.Addr())                                  //
        valbase = toStructPointer(valptr.Addr())                    // **V
    }
    return
}
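The `MapFieldEntry` comment above fixes the wire format: each map entry is an embedded message with the key in field 1 and the value in field 2. A standalone byte-level illustration for one `map<int32, string>` entry `{1: "Ken"}` (field number 14 is invented for the example):

```go
package main

import "fmt"

func main() {
    key := []byte{0x08, 0x01}                   // field 1, varint, value 1
    val := append([]byte{0x12, 0x03}, "Ken"...) // field 2, bytes, length 3
    entry := append(key, val...)

    // The entry is emitted like any embedded message: tag, length, body.
    out := []byte{(14 << 3) | 2, byte(len(entry))} // field 14, wire type 2
    out = append(out, entry...)
    fmt.Printf("% x\n", out) // 72 07 08 01 12 03 4b 65 6e
}
```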
|
||||
|
||||
// Encode a struct.
|
||||
func (o *Buffer) enc_struct(prop *StructProperties, base structPointer) error {
|
||||
func (o *Buffer) enc_struct(t reflect.Type, prop *StructProperties, base structPointer) error {
|
||||
var state errorState
|
||||
// Encode fields in tag order so that decoders may use optimizations
|
||||
// that depend on the ordering.
|
||||
// https://developers.google.com/protocol-buffers/docs/encoding#order
|
||||
// http://code.google.com/apis/protocolbuffers/docs/encoding.html#order
|
||||
for _, i := range prop.order {
|
||||
p := prop.Prop[i]
|
||||
if p.enc != nil {
|
||||
@@ -1202,7 +885,7 @@ func (o *Buffer) enc_struct(prop *StructProperties, base structPointer) error {
|
||||
return state.err
|
||||
}
|
||||
|
||||
func size_struct(prop *StructProperties, base structPointer) (n int) {
|
||||
func size_struct(t reflect.Type, prop *StructProperties, base structPointer) (n int) {
|
||||
for _, i := range prop.order {
|
||||
p := prop.Prop[i]
|
||||
if p.size != nil {
|
||||
@@ -1222,16 +905,11 @@ func size_struct(prop *StructProperties, base structPointer) (n int) {
|
||||
var zeroes [20]byte // longer than any conceivable sizeVarint
|
||||
|
||||
// Encode a struct, preceded by its encoded length (as a varint).
|
||||
func (o *Buffer) enc_len_struct(prop *StructProperties, base structPointer, state *errorState) error {
|
||||
return o.enc_len_thing(func() error { return o.enc_struct(prop, base) }, state)
|
||||
}
|
||||
|
||||
// Encode something, preceded by its encoded length (as a varint).
|
||||
func (o *Buffer) enc_len_thing(enc func() error, state *errorState) error {
|
||||
func (o *Buffer) enc_len_struct(t reflect.Type, prop *StructProperties, base structPointer, state *errorState) error {
|
||||
iLen := len(o.buf)
|
||||
o.buf = append(o.buf, 0, 0, 0, 0) // reserve four bytes for length
|
||||
iMsg := len(o.buf)
|
||||
err := enc()
|
||||
err := o.enc_struct(t, prop, base)
|
||||
if err != nil && !state.shouldContinue(err, nil) {
|
||||
return err
|
||||
}
|
@@ -1,12 +1,12 @@
|
||||
// Extensions for Protocol Buffers to create more go like structures.
|
||||
//
|
||||
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
|
||||
// http://github.com/gogo/protobuf/gogoproto
|
||||
// http://code.google.com/p/gogoprotobuf/gogoproto
|
||||
//
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2010 The Go Authors. All rights reserved.
|
||||
// http://github.com/golang/protobuf/
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -40,10 +40,6 @@ import (
|
||||
"reflect"
|
||||
)
|
||||
|
||||
func NewRequiredNotSetError(field string) *RequiredNotSetError {
|
||||
return &RequiredNotSetError{field}
|
||||
}
|
||||
|
||||
type Sizer interface {
|
||||
Size() int
|
||||
}
|
||||
@@ -68,9 +64,12 @@ func size_ext_slice_byte(p *Properties, base structPointer) (n int) {
|
||||
|
||||
// Encode a reference to bool pointer.
|
||||
func (o *Buffer) enc_ref_bool(p *Properties, base structPointer) error {
|
||||
v := *structPointer_BoolVal(base, p.field)
|
||||
v := structPointer_RefBool(base, p.field)
|
||||
if v == nil {
|
||||
return ErrNil
|
||||
}
|
||||
x := 0
|
||||
if v {
|
||||
if *v {
|
||||
x = 1
|
||||
}
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
@@ -79,37 +78,31 @@ func (o *Buffer) enc_ref_bool(p *Properties, base structPointer) error {
|
||||
}
|
||||
|
||||
func size_ref_bool(p *Properties, base structPointer) int {
|
||||
v := structPointer_RefBool(base, p.field)
|
||||
if v == nil {
|
||||
return 0
|
||||
}
|
||||
return len(p.tagcode) + 1 // each bool takes exactly one byte
|
||||
}
|
||||
|
||||
// Encode a reference to int32 pointer.
|
||||
func (o *Buffer) enc_ref_int32(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := int32(word32Val_Get(v))
|
||||
v := structPointer_RefWord32(base, p.field)
|
||||
if refWord32_IsNil(v) {
|
||||
return ErrNil
|
||||
}
|
||||
x := refWord32_Get(v)
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, uint64(x))
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_ref_int32(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := int32(word32Val_Get(v))
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(uint64(x))
|
||||
return
|
||||
}
|
||||
|
||||
func (o *Buffer) enc_ref_uint32(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := word32Val_Get(v)
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, uint64(x))
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_ref_uint32(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word32Val(base, p.field)
|
||||
x := word32Val_Get(v)
|
||||
v := structPointer_RefWord32(base, p.field)
|
||||
if refWord32_IsNil(v) {
|
||||
return 0
|
||||
}
|
||||
x := refWord32_Get(v)
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(uint64(x))
|
||||
return
|
||||
@@ -117,16 +110,22 @@ func size_ref_uint32(p *Properties, base structPointer) (n int) {
|
||||
|
||||
// Encode a reference to an int64 pointer.
|
||||
func (o *Buffer) enc_ref_int64(p *Properties, base structPointer) error {
|
||||
v := structPointer_Word64Val(base, p.field)
|
||||
x := word64Val_Get(v)
|
||||
v := structPointer_RefWord64(base, p.field)
|
||||
if refWord64_IsNil(v) {
|
||||
return ErrNil
|
||||
}
|
||||
x := refWord64_Get(v)
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
p.valEnc(o, x)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_ref_int64(p *Properties, base structPointer) (n int) {
|
||||
v := structPointer_Word64Val(base, p.field)
|
||||
x := word64Val_Get(v)
|
||||
v := structPointer_RefWord64(base, p.field)
|
||||
if refWord64_IsNil(v) {
|
||||
return 0
|
||||
}
|
||||
x := refWord64_Get(v)
|
||||
n += len(p.tagcode)
|
||||
n += p.valSize(x)
|
||||
return
|
||||
@@ -134,16 +133,24 @@ func size_ref_int64(p *Properties, base structPointer) (n int) {
|
||||
|
||||
// Encode a reference to a string pointer.
|
||||
func (o *Buffer) enc_ref_string(p *Properties, base structPointer) error {
|
||||
v := *structPointer_StringVal(base, p.field)
|
||||
v := structPointer_RefString(base, p.field)
|
||||
if v == nil {
|
||||
return ErrNil
|
||||
}
|
||||
x := *v
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
o.EncodeStringBytes(v)
|
||||
o.EncodeStringBytes(x)
|
||||
return nil
|
||||
}
|
||||
|
||||
func size_ref_string(p *Properties, base structPointer) (n int) {
|
||||
v := *structPointer_StringVal(base, p.field)
|
||||
v := structPointer_RefString(base, p.field)
|
||||
if v == nil {
|
||||
return 0
|
||||
}
|
||||
x := *v
|
||||
n += len(p.tagcode)
|
||||
n += sizeStringBytes(v)
|
||||
n += sizeStringBytes(x)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -168,7 +175,7 @@ func (o *Buffer) enc_ref_struct_message(p *Properties, base structPointer) error
|
||||
}
|
||||
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
return o.enc_len_struct(p.sprop, structp, &state)
|
||||
return o.enc_len_struct(p.stype, p.sprop, structp, &state)
|
||||
}
|
||||
|
||||
//TODO this is only copied, please fix this
|
||||
@@ -188,7 +195,7 @@ func size_ref_struct_message(p *Properties, base structPointer) int {
|
||||
}
|
||||
|
||||
n0 := len(p.tagcode)
|
||||
n1 := size_struct(p.sprop, structp)
|
||||
n1 := size_struct(p.stype, p.sprop, structp)
|
||||
n2 := sizeVarint(uint64(n1)) // size of encoded length
|
||||
return n0 + n1 + n2
|
||||
}
|
||||
@@ -203,7 +210,7 @@ func (o *Buffer) enc_slice_ref_struct_message(p *Properties, base structPointer)
|
||||
for i := 0; i < l; i++ {
|
||||
structp := structPointer_Add(ss1, field(uintptr(i)*size))
|
||||
if structPointer_IsNil(structp) {
|
||||
return errRepeatedHasNil
|
||||
return ErrRepeatedHasNil
|
||||
}
|
||||
|
||||
// Can the object marshal itself?
|
||||
@@ -219,10 +226,10 @@ func (o *Buffer) enc_slice_ref_struct_message(p *Properties, base structPointer)
|
||||
}
|
||||
|
||||
o.buf = append(o.buf, p.tagcode...)
|
||||
err := o.enc_len_struct(p.sprop, structp, &state)
|
||||
err := o.enc_len_struct(p.stype, p.sprop, structp, &state)
|
||||
if err != nil && !state.shouldContinue(err, nil) {
|
||||
if err == ErrNil {
|
||||
return errRepeatedHasNil
|
||||
return ErrRepeatedHasNil
|
||||
}
|
||||
return err
|
||||
}
|
||||
@@ -253,7 +260,7 @@ func size_slice_ref_struct_message(p *Properties, base structPointer) (n int) {
|
||||
continue
|
||||
}
|
||||
|
||||
n0 := size_struct(p.sprop, structp)
|
||||
n0 := size_struct(p.stype, p.sprop, structp)
|
||||
n1 := sizeVarint(uint64(n0)) // size of encoded length
|
||||
n += n0 + n1
|
||||
}
|
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -57,7 +57,7 @@ Equality is defined in this way:
|
||||
  although represented by []byte, is not a repeated field)
- Two unset fields are equal.
- Two unknown field sets are equal if their current
  encoded state is equal.
  encoded state is equal. (TODO)
- Two extension sets are equal iff they have corresponding
  elements that are pairwise equal.
- Every other combination of things is not equal.
|
||||
@@ -154,21 +154,6 @@ func equalAny(v1, v2 reflect.Value) bool {
|
||||
return v1.Float() == v2.Float()
|
||||
case reflect.Int32, reflect.Int64:
|
||||
return v1.Int() == v2.Int()
|
||||
case reflect.Map:
|
||||
if v1.Len() != v2.Len() {
|
||||
return false
|
||||
}
|
||||
for _, key := range v1.MapKeys() {
|
||||
val2 := v2.MapIndex(key)
|
||||
if !val2.IsValid() {
|
||||
// This key was not found in the second map.
|
||||
return false
|
||||
}
|
||||
if !equalAny(v1.MapIndex(key), val2) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
case reflect.Ptr:
|
||||
return equalAny(v1.Elem(), v2.Elem())
|
||||
case reflect.Slice:
|
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -35,7 +35,7 @@ import (
|
||||
"testing"
|
||||
|
||||
pb "./testdata"
|
||||
. "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
|
||||
. "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
|
||||
)
|
||||
|
||||
// Four identical base messages.
|
||||
@@ -155,31 +155,6 @@ var EqualTests = []struct {
|
||||
},
|
||||
true,
|
||||
},
|
||||
|
||||
{
|
||||
"map same",
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
|
||||
true,
|
||||
},
|
||||
{
|
||||
"map different entry",
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{2: "Rob"}},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"map different key only",
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{2: "Ken"}},
|
||||
false,
|
||||
},
|
||||
{
|
||||
"map different value only",
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
|
||||
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Rob"}},
|
||||
false,
|
||||
},
|
||||
}
|
||||
|
||||
func TestEqual(t *testing.T) {
|
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2010 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -37,7 +37,6 @@ package proto
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"sync"
|
||||
@@ -175,39 +174,32 @@ func extensionProperties(ed *ExtensionDesc) *Properties {
|
||||
// encodeExtensionMap encodes any unmarshaled (unencoded) extensions in m.
func encodeExtensionMap(m map[int32]Extension) error {
for k, e := range m {
err := encodeExtension(&e)
if err != nil {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
continue
}

// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.

et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)

p := NewBuffer(nil)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
if err := props.enc(p, props, toStructPointer(x)); err != nil {
return err
}
e.enc = p.buf
m[k] = e
}
return nil
}

func encodeExtension(e *Extension) error {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
return nil
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.

et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)

p := NewBuffer(nil)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
if err := props.enc(p, props, toStructPointer(x)); err != nil {
return err
}
e.enc = p.buf
return nil
}
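The `reflect.New(et)` dance above is what lets a bare extension value satisfy an encoder that expects a pointer-shaped field. A minimal sketch of that trick in isolation (none of this is the library's API):

```go
package main

import (
    "fmt"
    "reflect"
)

func main() {
    var value interface{} = int32(7) // plays the role of e.value
    et := reflect.TypeOf(value)      // T

    x := reflect.New(et)                 // allocate a zeroed *T
    x.Elem().Set(reflect.ValueOf(value)) // *x = value
    fmt.Println(x.Elem().Interface())    // 7, now reachable through a pointer
}
```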
|
||||
|
||||
func sizeExtensionMap(m map[int32]Extension) (n int) {
|
||||
for _, e := range m {
|
||||
if e.value == nil || e.desc == nil {
|
||||
@@ -308,8 +300,7 @@ func GetExtension(pb extendableProto, extension *ExtensionDesc) (interface{}, er
|
||||
}
|
||||
|
||||
if epb, doki := pb.(extensionsMap); doki {
|
||||
emap := epb.ExtensionMap()
|
||||
e, ok := emap[extension.Field]
|
||||
e, ok := epb.ExtensionMap()[extension.Field]
|
||||
if !ok {
|
||||
return nil, ErrMissingExtension
|
||||
}
|
||||
@@ -334,7 +325,6 @@ func GetExtension(pb extendableProto, extension *ExtensionDesc) (interface{}, er
|
||||
e.value = v
|
||||
e.desc = extension
|
||||
e.enc = nil
|
||||
emap[extension.Field] = e
|
||||
return e.value, nil
|
||||
} else if epb, doki := pb.(extensionsBytes); doki {
|
||||
ext := epb.GetExtensions()
|
||||
@@ -405,9 +395,6 @@ func GetExtensions(pb Message, es []*ExtensionDesc) (extensions []interface{}, e
|
||||
extensions = make([]interface{}, len(es))
|
||||
for i, e := range es {
|
||||
extensions[i], err = GetExtension(epb, e)
|
||||
if err == ErrMissingExtension {
|
||||
err = nil
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
@@ -424,18 +411,7 @@ func SetExtension(pb extendableProto, extension *ExtensionDesc, value interface{
|
||||
if typ != reflect.TypeOf(value) {
|
||||
return errors.New("proto: bad extension value type")
|
||||
}
|
||||
// nil extension values need to be caught early, because the
|
||||
// encoder can't distinguish an ErrNil due to a nil extension
|
||||
// from an ErrNil due to a missing field. Extensions are
|
||||
// always optional, so the encoder would just swallow the error
|
||||
// and drop all the extensions from the encoded message.
|
||||
if reflect.ValueOf(value).IsNil() {
|
||||
return fmt.Errorf("proto: SetExtension called with nil value of type %T", value)
|
||||
}
|
||||
return setExtension(pb, extension, value)
|
||||
}
|
||||
|
||||
func setExtension(pb extendableProto, extension *ExtensionDesc, value interface{}) error {
|
||||
if epb, doki := pb.(extensionsMap); doki {
|
||||
epb.ExtensionMap()[extension.Field] = Extension{desc: extension, value: value}
|
||||
} else if epb, doki := pb.(extensionsBytes); doki {
|
@@ -1,5 +1,5 @@
|
||||
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
|
||||
// http://github.com/gogo/protobuf/gogoproto
|
||||
// http://code.google.com/p/gogoprotobuf/gogoproto
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -28,7 +28,6 @@ package proto
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"sort"
|
||||
@@ -186,36 +185,5 @@ func NewExtension(e []byte) Extension {
|
||||
}
|
||||
|
||||
func (this Extension) GoString() string {
|
||||
if this.enc == nil {
|
||||
if err := encodeExtension(&this); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
return fmt.Sprintf("proto.NewExtension(%#v)", this.enc)
|
||||
}
|
||||
|
||||
func SetUnsafeExtension(pb extendableProto, fieldNum int32, value interface{}) error {
|
||||
typ := reflect.TypeOf(pb).Elem()
|
||||
ext, ok := extensionMaps[typ]
|
||||
if !ok {
|
||||
return fmt.Errorf("proto: bad extended type; %s is not extendable", typ.String())
|
||||
}
|
||||
desc, ok := ext[fieldNum]
|
||||
if !ok {
|
||||
return errors.New("proto: bad extension number; not in declared ranges")
|
||||
}
|
||||
return setExtension(pb, desc, value)
|
||||
}
|
||||
|
||||
func GetUnsafeExtension(pb extendableProto, fieldNum int32) (interface{}, error) {
|
||||
typ := reflect.TypeOf(pb).Elem()
|
||||
ext, ok := extensionMaps[typ]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("proto: bad extended type; %s is not extendable", typ.String())
|
||||
}
|
||||
desc, ok := ext[fieldNum]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unregistered field number %d", fieldNum)
|
||||
}
|
||||
return GetExtension(pb, desc)
|
||||
}
|
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2010 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -39,7 +39,7 @@
|
||||
|
||||
- Names are turned from camel_case to CamelCase for export.
|
||||
- There are no methods on v to set fields; just treat
|
||||
them as structure fields.
|
||||
them as structure fields.
|
||||
- There are getters that return a field's value if set,
|
||||
and return the field's default value if unset.
|
||||
The getters work even if the receiver is a nil message.
|
||||
@@ -50,16 +50,17 @@
|
||||
That is, optional or required field int32 f becomes F *int32.
|
||||
- Repeated fields are slices.
|
||||
- Helper functions are available to aid the setting of fields.
|
||||
msg.Foo = proto.String("hello") // set field
|
||||
Helpers for getting values are superseded by the
|
||||
GetFoo methods and their use is deprecated.
|
||||
msg.Foo = proto.String("hello") // set field
|
||||
- Constants are defined to hold the default values of all fields that
|
||||
have them. They have the form Default_StructName_FieldName.
|
||||
Because the getter methods handle defaulted values,
|
||||
direct use of these constants should be rare.
|
||||
- Enums are given type names and maps from names to values.
|
||||
Enum values are prefixed by the enclosing message's name, or by the
|
||||
enum's type name if it is a top-level enum. Enum types have a String
|
||||
method, and a Enum method to assist in message construction.
|
||||
- Nested messages, groups and enums have type names prefixed with the name of
|
||||
Enum values are prefixed with the enum's type name. Enum types have
|
||||
a String method, and a Enum method to assist in message construction.
|
||||
- Nested groups and enums have type names prefixed with the name of
|
||||
the surrounding message type.
|
||||
- Extensions are given descriptor names that start with E_,
|
||||
followed by an underscore-delimited list of the nested messages
|
||||
@@ -73,7 +74,7 @@
|
||||
|
||||
package example;
|
||||
|
||||
enum FOO { X = 17; }
|
||||
enum FOO { X = 17; };
|
||||
|
||||
message Test {
|
||||
required string label = 1;
|
||||
@@ -88,8 +89,7 @@
|
||||
|
||||
package example
|
||||
|
||||
import proto "github.com/golang/protobuf/proto"
|
||||
import math "math"
|
||||
import "code.google.com/p/gogoprotobuf/proto"
|
||||
|
||||
type FOO int32
|
||||
const (
|
||||
@@ -110,14 +110,6 @@
|
||||
func (x FOO) String() string {
|
||||
return proto.EnumName(FOO_name, int32(x))
|
||||
}
|
||||
func (x *FOO) UnmarshalJSON(data []byte) error {
|
||||
value, err := proto.UnmarshalJSONEnum(FOO_value, data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
*x = FOO(value)
|
||||
return nil
|
||||
}
|
||||
|
||||
type Test struct {
|
||||
Label *string `protobuf:"bytes,1,req,name=label" json:"label,omitempty"`
|
||||
@@ -126,41 +118,41 @@
|
||||
Optionalgroup *Test_OptionalGroup `protobuf:"group,4,opt,name=OptionalGroup" json:"optionalgroup,omitempty"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
}
|
||||
func (m *Test) Reset() { *m = Test{} }
|
||||
func (m *Test) String() string { return proto.CompactTextString(m) }
|
||||
func (*Test) ProtoMessage() {}
|
||||
func (this *Test) Reset() { *this = Test{} }
|
||||
func (this *Test) String() string { return proto.CompactTextString(this) }
|
||||
const Default_Test_Type int32 = 77
|
||||
|
||||
func (m *Test) GetLabel() string {
|
||||
if m != nil && m.Label != nil {
|
||||
return *m.Label
|
||||
func (this *Test) GetLabel() string {
|
||||
if this != nil && this.Label != nil {
|
||||
return *this.Label
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *Test) GetType() int32 {
|
||||
if m != nil && m.Type != nil {
|
||||
return *m.Type
|
||||
func (this *Test) GetType() int32 {
|
||||
if this != nil && this.Type != nil {
|
||||
return *this.Type
|
||||
}
|
||||
return Default_Test_Type
|
||||
}
|
||||
|
||||
func (m *Test) GetOptionalgroup() *Test_OptionalGroup {
|
||||
if m != nil {
|
||||
return m.Optionalgroup
|
||||
func (this *Test) GetOptionalgroup() *Test_OptionalGroup {
|
||||
if this != nil {
|
||||
return this.Optionalgroup
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type Test_OptionalGroup struct {
|
||||
RequiredField *string `protobuf:"bytes,5,req" json:"RequiredField,omitempty"`
|
||||
RequiredField *string `protobuf:"bytes,5,req" json:"RequiredField,omitempty"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
}
|
||||
func (m *Test_OptionalGroup) Reset() { *m = Test_OptionalGroup{} }
|
||||
func (m *Test_OptionalGroup) String() string { return proto.CompactTextString(m) }
|
||||
func (this *Test_OptionalGroup) Reset() { *this = Test_OptionalGroup{} }
|
||||
func (this *Test_OptionalGroup) String() string { return proto.CompactTextString(this) }
|
||||
|
||||
func (m *Test_OptionalGroup) GetRequiredField() string {
|
||||
if m != nil && m.RequiredField != nil {
|
||||
return *m.RequiredField
|
||||
func (this *Test_OptionalGroup) GetRequiredField() string {
|
||||
if this != nil && this.RequiredField != nil {
|
||||
return *this.RequiredField
|
||||
}
|
||||
return ""
|
||||
}
|
||||
@@ -176,15 +168,15 @@
|
||||
import (
|
||||
"log"
|
||||
|
||||
"github.com/golang/protobuf/proto"
|
||||
pb "./example.pb"
|
||||
"code.google.com/p/gogoprotobuf/proto"
|
||||
"./example.pb"
|
||||
)
|
||||
|
||||
func main() {
|
||||
test := &pb.Test{
|
||||
test := &example.Test{
|
||||
Label: proto.String("hello"),
|
||||
Type: proto.Int32(17),
|
||||
Optionalgroup: &pb.Test_OptionalGroup{
|
||||
Optionalgroup: &example.Test_OptionalGroup{
|
||||
RequiredField: proto.String("good bye"),
|
||||
},
|
||||
}
|
||||
@@ -192,7 +184,7 @@
|
||||
if err != nil {
|
||||
log.Fatal("marshaling error: ", err)
|
||||
}
|
||||
newTest := &pb.Test{}
|
||||
newTest := new(example.Test)
|
||||
err = proto.Unmarshal(data, newTest)
|
||||
if err != nil {
|
||||
log.Fatal("unmarshaling error: ", err)
|
||||
@@ -331,7 +323,9 @@ func Float64(v float64) *float64 {
// Uint32 is a helper routine that allocates a new uint32 value
// to store v and returns a pointer to it.
func Uint32(v uint32) *uint32 {
return &v
p := new(uint32)
*p = v
return p
}
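These helpers exist because optional scalar fields are generated as pointers and Go has no address-of-literal syntax. A small usage sketch (the `Count` field and message are invented for the example):

```go
package main

import "fmt"

// Uint32 mirrors the helper above: it gives a literal an address.
func Uint32(v uint32) *uint32 { return &v }

type Test struct {
    Count *uint32 // generated for: optional uint32 count = 1;
}

func main() {
    t := Test{Count: Uint32(17)} // &uint32(17) is not legal Go; the helper is
    fmt.Println(*t.Count)        // 17
}
```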
|
||||
|
||||
// Uint64 is a helper routine that allocates a new uint64 value
|
||||
@@ -673,7 +667,7 @@ func buildDefaultMessage(t reflect.Type) (dm defaultMessage) {
|
||||
}
|
||||
|
||||
// scalar fields without defaults
|
||||
if !prop.HasDefault {
|
||||
if prop.Default == "" {
|
||||
dm.scalars = append(dm.scalars, sf)
|
||||
continue
|
||||
}
|
||||
@@ -744,16 +738,3 @@ func buildDefaultMessage(t reflect.Type) (dm defaultMessage) {
|
||||
|
||||
return dm
|
||||
}
|
||||
|
||||
// Map fields may have key types of non-float scalars, strings and enums.
// The easiest way to sort them in some deterministic order is to use fmt.
// If this turns out to be inefficient we can always consider other options,
// such as doing a Schwartzian transform.

type mapKeys []reflect.Value

func (s mapKeys) Len() int      { return len(s) }
func (s mapKeys) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s mapKeys) Less(i, j int) bool {
    return fmt.Sprint(s[i].Interface()) < fmt.Sprint(s[j].Interface())
}
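A standalone sketch of the ordering above: `reflect` returns map keys in random order, so sorting them by their `fmt`-printed form keeps the encoded output deterministic across runs:

```go
package main

import (
    "fmt"
    "reflect"
    "sort"
)

type byPrint []reflect.Value

func (s byPrint) Len() int      { return len(s) }
func (s byPrint) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s byPrint) Less(i, j int) bool {
    return fmt.Sprint(s[i].Interface()) < fmt.Sprint(s[j].Interface())
}

func main() {
    m := map[int32]string{3: "c", 1: "a", 2: "b"}
    keys := reflect.ValueOf(m).MapKeys()
    sort.Sort(byPrint(keys))
    for _, k := range keys {
        fmt.Println(k.Interface()) // 1, then 2, then 3
    }
}
```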
|
@@ -1,5 +1,5 @@
|
||||
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
|
||||
// http://github.com/gogo/protobuf/gogoproto
|
||||
// http://code.google.com/p/gogoprotobuf/gogoproto
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2010 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -36,10 +36,7 @@ package proto
|
||||
*/
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"sort"
|
||||
)
|
||||
@@ -130,7 +127,7 @@ func (ms *MessageSet) Marshal(pb Message) error {
|
||||
|
||||
mti, ok := pb.(messageTypeIder)
|
||||
if !ok {
|
||||
return ErrNoMessageTypeId
|
||||
return ErrWrongType // TODO: custom error?
|
||||
}
|
||||
|
||||
mtid := mti.MessageTypeId()
|
||||
@@ -191,84 +188,16 @@ func UnmarshalMessageSet(buf []byte, m map[int32]Extension) error {
|
||||
return err
|
||||
}
|
||||
for _, item := range ms.Item {
id := *item.TypeId
msg := item.Message
// restore wire type and field number varint, plus length varint.
b := EncodeVarint(uint64(*item.TypeId)<<3 | WireBytes)
b = append(b, EncodeVarint(uint64(len(item.Message)))...)
b = append(b, item.Message...)

// Restore wire type and field number varint, plus length varint.
// Be careful to preserve duplicate items.
b := EncodeVarint(uint64(id)<<3 | WireBytes)
if ext, ok := m[id]; ok {
// Existing data; rip off the tag and length varint
// so we join the new data correctly.
// We can assume that ext.enc is set because we are unmarshaling.
o := ext.enc[len(b):] // skip wire type and field number
_, n := DecodeVarint(o) // calculate length of length varint
o = o[n:] // skip length varint
msg = append(o, msg...) // join old data and new data
}
b = append(b, EncodeVarint(uint64(len(msg)))...)
b = append(b, msg...)

m[id] = Extension{enc: b}
m[*item.TypeId] = Extension{enc: b}
}
return nil
}
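The tag math above, `uint64(id)<<3 | WireBytes`, rebuilds an ordinary embedded-message key from a message-set type_id. A self-contained check of those bytes (local `EncodeVarint` for illustration; `WireBytes` is 2 in the protobuf wire format):

```go
package main

import "fmt"

const WireBytes = 2 // length-delimited wire type

// EncodeVarint is a local re-implementation for illustration.
func EncodeVarint(x uint64) []byte {
    var b []byte
    for x >= 0x80 {
        b = append(b, byte(x)|0x80)
        x >>= 7
    }
    return append(b, byte(x))
}

func main() {
    id := uint64(1000) // a message-set type_id doubles as the field number
    fmt.Printf("% x\n", EncodeVarint(id<<3|WireBytes)) // c2 3e
}
```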
|
||||
|
||||
// MarshalMessageSetJSON encodes the extension map represented by m in JSON format.
|
||||
// It is called by generated MarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
|
||||
func MarshalMessageSetJSON(m map[int32]Extension) ([]byte, error) {
|
||||
var b bytes.Buffer
|
||||
b.WriteByte('{')
|
||||
|
||||
// Process the map in key order for deterministic output.
|
||||
ids := make([]int32, 0, len(m))
|
||||
for id := range m {
|
||||
ids = append(ids, id)
|
||||
}
|
||||
sort.Sort(int32Slice(ids)) // int32Slice defined in text.go
|
||||
|
||||
for i, id := range ids {
|
||||
ext := m[id]
|
||||
if i > 0 {
|
||||
b.WriteByte(',')
|
||||
}
|
||||
|
||||
msd, ok := messageSetMap[id]
|
||||
if !ok {
|
||||
// Unknown type; we can't render it, so skip it.
|
||||
continue
|
||||
}
|
||||
fmt.Fprintf(&b, `"[%s]":`, msd.name)
|
||||
|
||||
x := ext.value
|
||||
if x == nil {
|
||||
x = reflect.New(msd.t.Elem()).Interface()
|
||||
if err := Unmarshal(ext.enc, x.(Message)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
d, err := json.Marshal(x)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
b.Write(d)
|
||||
}
|
||||
b.WriteByte('}')
|
||||
return b.Bytes(), nil
|
||||
}
|
||||
|
||||
// UnmarshalMessageSetJSON decodes the extension map encoded in buf in JSON format.
|
||||
// It is called by generated UnmarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
|
||||
func UnmarshalMessageSetJSON(buf []byte, m map[int32]Extension) error {
|
||||
// Common-case fast path.
|
||||
if len(buf) == 0 || bytes.Equal(buf, []byte("{}")) {
|
||||
return nil
|
||||
}
|
||||
|
||||
// This is fairly tricky, and it's not clear that it is needed.
|
||||
return errors.New("TODO: UnmarshalMessageSetJSON not yet implemented")
|
||||
}
|
||||
|
||||
// A global registry of types that can be used in a MessageSet.
|
||||
|
||||
var messageSetMap = make(map[int32]messageSetDesc)
|
||||
@@ -279,9 +208,9 @@ type messageSetDesc struct {
|
||||
}
|
||||
|
||||
// RegisterMessageSetType is called from the generated code.
|
||||
func RegisterMessageSetType(m Message, fieldNum int32, name string) {
|
||||
messageSetMap[fieldNum] = messageSetDesc{
|
||||
t: reflect.TypeOf(m),
|
||||
func RegisterMessageSetType(i messageTypeIder, name string) {
|
||||
messageSetMap[i.MessageTypeId()] = messageSetDesc{
|
||||
t: reflect.TypeOf(i),
|
||||
name: name,
|
||||
}
|
||||
}
|
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2012 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -114,11 +114,6 @@ func structPointer_Bool(p structPointer, f field) **bool {
|
||||
return structPointer_ifield(p, f).(**bool)
|
||||
}
|
||||
|
||||
// BoolVal returns the address of a bool field in the struct.
|
||||
func structPointer_BoolVal(p structPointer, f field) *bool {
|
||||
return structPointer_ifield(p, f).(*bool)
|
||||
}
|
||||
|
||||
// BoolSlice returns the address of a []bool field in the struct.
|
||||
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
|
||||
return structPointer_ifield(p, f).(*[]bool)
|
||||
@@ -129,11 +124,6 @@ func structPointer_String(p structPointer, f field) **string {
|
||||
return structPointer_ifield(p, f).(**string)
|
||||
}
|
||||
|
||||
// StringVal returns the address of a string field in the struct.
|
||||
func structPointer_StringVal(p structPointer, f field) *string {
|
||||
return structPointer_ifield(p, f).(*string)
|
||||
}
|
||||
|
||||
// StringSlice returns the address of a []string field in the struct.
|
||||
func structPointer_StringSlice(p structPointer, f field) *[]string {
|
||||
return structPointer_ifield(p, f).(*[]string)
|
||||
@@ -144,11 +134,6 @@ func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
|
||||
return structPointer_ifield(p, f).(*map[int32]Extension)
|
||||
}
|
||||
|
||||
// Map returns the reflect.Value for the address of a map field in the struct.
|
||||
func structPointer_Map(p structPointer, f field, typ reflect.Type) reflect.Value {
|
||||
return structPointer_field(p, f).Addr()
|
||||
}
|
||||
|
||||
// SetStructPointer writes a *struct field in the struct.
|
||||
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
|
||||
structPointer_field(p, f).Set(q.v)
|
||||
@@ -250,49 +235,6 @@ func structPointer_Word32(p structPointer, f field) word32 {
|
||||
return word32{structPointer_field(p, f)}
|
||||
}
|
||||
|
||||
// A word32Val represents a field of type int32, uint32, float32, or enum.
|
||||
// That is, v.Type() is int32, uint32, float32, or enum and v is assignable.
|
||||
type word32Val struct {
|
||||
v reflect.Value
|
||||
}
|
||||
|
||||
// Set sets *p to x.
|
||||
func word32Val_Set(p word32Val, x uint32) {
|
||||
switch p.v.Type() {
|
||||
case int32Type:
|
||||
p.v.SetInt(int64(x))
|
||||
return
|
||||
case uint32Type:
|
||||
p.v.SetUint(uint64(x))
|
||||
return
|
||||
case float32Type:
|
||||
p.v.SetFloat(float64(math.Float32frombits(x)))
|
||||
return
|
||||
}
|
||||
|
||||
// must be enum
|
||||
p.v.SetInt(int64(int32(x)))
|
||||
}
|
||||
|
||||
// Get gets the bits pointed at by p, as a uint32.
|
||||
func word32Val_Get(p word32Val) uint32 {
|
||||
elem := p.v
|
||||
switch elem.Kind() {
|
||||
case reflect.Int32:
|
||||
return uint32(elem.Int())
|
||||
case reflect.Uint32:
|
||||
return uint32(elem.Uint())
|
||||
case reflect.Float32:
|
||||
return math.Float32bits(float32(elem.Float()))
|
||||
}
|
||||
panic("unreachable")
|
||||
}
|
||||
|
||||
// Word32Val returns a reference to a int32, uint32, float32, or enum field in the struct.
|
||||
func structPointer_Word32Val(p structPointer, f field) word32Val {
|
||||
return word32Val{structPointer_field(p, f)}
|
||||
}
|
||||
|
||||
// A word32Slice is a slice of 32-bit values.
|
||||
// That is, v.Type() is []int32, []uint32, []float32, or []enum.
|
||||
type word32Slice struct {
|
||||
@@ -397,43 +339,6 @@ func structPointer_Word64(p structPointer, f field) word64 {
|
||||
return word64{structPointer_field(p, f)}
|
||||
}
|
||||
|
||||
// word64Val is like word32Val but for 64-bit values.
|
||||
type word64Val struct {
|
||||
v reflect.Value
|
||||
}
|
||||
|
||||
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
|
||||
switch p.v.Type() {
|
||||
case int64Type:
|
||||
p.v.SetInt(int64(x))
|
||||
return
|
||||
case uint64Type:
|
||||
p.v.SetUint(x)
|
||||
return
|
||||
case float64Type:
|
||||
p.v.SetFloat(math.Float64frombits(x))
|
||||
return
|
||||
}
|
||||
panic("unreachable")
|
||||
}
|
||||
|
||||
func word64Val_Get(p word64Val) uint64 {
|
||||
elem := p.v
|
||||
switch elem.Kind() {
|
||||
case reflect.Int64:
|
||||
return uint64(elem.Int())
|
||||
case reflect.Uint64:
|
||||
return elem.Uint()
|
||||
case reflect.Float64:
|
||||
return math.Float64bits(elem.Float())
|
||||
}
|
||||
panic("unreachable")
|
||||
}
|
||||
|
||||
func structPointer_Word64Val(p structPointer, f field) word64Val {
|
||||
return word64Val{structPointer_field(p, f)}
|
||||
}
|
||||
|
||||
type word64Slice struct {
|
||||
v reflect.Value
|
||||
}
|
@@ -1,7 +1,7 @@
|
||||
// Go support for Protocol Buffers - Google's data interchange format
|
||||
//
|
||||
// Copyright 2012 The Go Authors. All rights reserved.
|
||||
// https://github.com/golang/protobuf
|
||||
// http://code.google.com/p/goprotobuf/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -100,11 +100,6 @@ func structPointer_Bool(p structPointer, f field) **bool {
|
||||
return (**bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
|
||||
}
|
||||
|
||||
// BoolVal returns the address of a bool field in the struct.
|
||||
func structPointer_BoolVal(p structPointer, f field) *bool {
|
||||
return (*bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
|
||||
}
|
||||
|
||||
// BoolSlice returns the address of a []bool field in the struct.
|
||||
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
|
||||
return (*[]bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
|
||||
@@ -115,11 +110,6 @@ func structPointer_String(p structPointer, f field) **string {
|
||||
return (**string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
|
||||
}
|
||||
|
||||
// StringVal returns the address of a string field in the struct.
|
||||
func structPointer_StringVal(p structPointer, f field) *string {
|
||||
return (*string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
|
||||
}
|
||||
|
||||
// StringSlice returns the address of a []string field in the struct.
|
||||
func structPointer_StringSlice(p structPointer, f field) *[]string {
|
||||
return (*[]string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
|
||||
@@ -130,11 +120,6 @@ func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
|
||||
return (*map[int32]Extension)(unsafe.Pointer(uintptr(p) + uintptr(f)))
|
||||
}
|
||||
|
||||
// Map returns the reflect.Value for the address of a map field in the struct.
|
||||
func structPointer_Map(p structPointer, f field, typ reflect.Type) reflect.Value {
|
||||
return reflect.NewAt(typ, unsafe.Pointer(uintptr(p)+uintptr(f)))
|
||||
}
|
||||
|
||||
// SetStructPointer writes a *struct field in the struct.
|
||||
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
|
||||
*(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f))) = q
|
||||
@@ -185,24 +170,6 @@ func structPointer_Word32(p structPointer, f field) word32 {
|
||||
return word32((**uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
|
||||
}
|
||||
|
||||
// A word32Val is the address of a 32-bit value field.
|
||||
type word32Val *uint32
|
||||
|
||||
// Set sets *p to x.
|
||||
func word32Val_Set(p word32Val, x uint32) {
|
||||
*p = x
|
||||
}
|
||||
|
||||
// Get gets the value pointed at by p.
|
||||
func word32Val_Get(p word32Val) uint32 {
|
||||
return *p
|
||||
}
|
||||
|
||||
// Word32Val returns the address of a *int32, *uint32, *float32, or *enum field in the struct.
|
||||
func structPointer_Word32Val(p structPointer, f field) word32Val {
|
||||
return word32Val((*uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
|
||||
}
|
||||
|
||||
// A word32Slice is a slice of 32-bit values.
|
||||
type word32Slice []uint32
|
||||
|
||||
@@ -239,21 +206,6 @@ func structPointer_Word64(p structPointer, f field) word64 {
|
||||
return word64((**uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
|
||||
}
|
||||
|
||||
// word64Val is like word32Val but for 64-bit values.
|
||||
type word64Val *uint64
|
||||
|
||||
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
|
||||
*p = x
|
||||
}
|
||||
|
||||
func word64Val_Get(p word64Val) uint64 {
|
||||
return *p
|
||||
}
|
||||
|
||||
func structPointer_Word64Val(p structPointer, f field) word64Val {
|
||||
return word64Val((*uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
|
||||
}
|
||||
|
||||
// word64Slice is like word32Slice but for 64-bit values.
|
||||
type word64Slice []uint64
|
||||
|
@@ -1,5 +1,5 @@
|
||||
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
|
||||
// http://github.com/gogo/protobuf/gogoproto
|
||||
// http://code.google.com/p/gogoprotobuf/gogoproto
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
@@ -87,6 +87,16 @@ func appendStructPointer(base structPointer, f field, typ reflect.Type) structPo
|
||||
return structPointer(unsafe.Pointer(uintptr(unsafe.Pointer(bas)) + uintptr(uintptr(newLen-1)*size)))
|
||||
}
|
||||
|
||||
// RefBool returns a *bool field in the struct.
func structPointer_RefBool(p structPointer, f field) *bool {
    return (*bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}

// RefString returns the address of a string field in the struct.
func structPointer_RefString(p structPointer, f field) *string {
    return (*string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}

func structPointer_FieldPointer(p structPointer, f field) structPointer {
    return structPointer(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
@@ -106,3 +116,51 @@ func structPointer_Add(p structPointer, size field) structPointer {
func structPointer_Len(p structPointer, f field) int {
    return len(*(*[]interface{})(unsafe.Pointer(structPointer_GetRefStructPointer(p, f))))
}

// refWord32 is the address of a 32-bit value field.
type refWord32 *uint32

func refWord32_IsNil(p refWord32) bool {
    return p == nil
}

func refWord32_Set(p refWord32, o *Buffer, x uint32) {
    if len(o.uint32s) == 0 {
        o.uint32s = make([]uint32, uint32PoolSize)
    }
    o.uint32s[0] = x
    *p = o.uint32s[0]
    o.uint32s = o.uint32s[1:]
}

func refWord32_Get(p refWord32) uint32 {
    return *p
}

func structPointer_RefWord32(p structPointer, f field) refWord32 {
    return refWord32((*uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}

// refWord64 is like refWord32 but for 64-bit values.
type refWord64 *uint64

func refWord64_Set(p refWord64, o *Buffer, x uint64) {
    if len(o.uint64s) == 0 {
        o.uint64s = make([]uint64, uint64PoolSize)
    }
    o.uint64s[0] = x
    *p = o.uint64s[0]
    o.uint64s = o.uint64s[1:]
}

func refWord64_IsNil(p refWord64) bool {
    return p == nil
}

func refWord64_Get(p refWord64) uint64 {
    return *p
}

func structPointer_RefWord64(p structPointer, f field) refWord64 {
    return refWord64((*uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
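All of the `structPointer_Ref*` accessors above are the same move: base address plus field offset, reinterpreted at the field's type. A minimal standalone sketch of that move (illustrative only, with an invented struct):

```go
package main

import (
    "fmt"
    "unsafe"
)

type msg struct {
    A int32
    B uint64
}

func main() {
    m := msg{A: 7, B: 42}
    base := unsafe.Pointer(&m)
    off := unsafe.Offsetof(m.B) // what the `field` values above carry

    p := (*uint64)(unsafe.Pointer(uintptr(base) + off))
    fmt.Println(*p) // 42
}
```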
@@ -1,7 +1,12 @@
// Extensions for Protocol Buffers to create more go like structures.
//
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -145,20 +150,18 @@ func (sp *StructProperties) Swap(i, j int) { sp.order[i], sp.order[j] = sp.order

// Properties represents the protocol-specific behavior of a single struct field.
type Properties struct {
    Name     string // name of the field, for error messages
    OrigName string // original name before protocol compiler (always set)
    Wire     string
    WireType int
    Tag      int
    Required bool
    Optional bool
    Repeated bool
    Packed   bool   // relevant for repeated primitives only
    Enum     string // set for enum types only
    proto3   bool   // whether this is known to be a proto3 field; set for []byte only

    Name       string // name of the field, for error messages
    OrigName   string // original name before protocol compiler (always set)
    Wire       string
    WireType   int
    Tag        int
    Required   bool
    Optional   bool
    Repeated   bool
    Packed     bool   // relevant for repeated primitives only
    Enum       string // set for enum types only
    Default    string // default value
    HasDefault bool   // whether an explicit default was provided
    CustomType string
    def_uint64 uint64

    enc encoder
@@ -167,14 +170,12 @@ type Properties struct {
    tagcode []byte // encoding of EncodeVarint((Tag<<3)|WireType)
    tagbuf  [8]byte
    stype   reflect.Type      // set for struct types only
    sstype  reflect.Type      // set for slices of structs types only
    ctype   reflect.Type      // set for custom types only
    sprop   *StructProperties // set for struct types only
    isMarshaler   bool
    isUnmarshaler bool

    mtype    reflect.Type // set for map types only
    mkeyprop *Properties  // set for map types only
    mvalprop *Properties  // set for map types only

    size    sizer
    valSize valueSizer // set for bool and numeric types only

@@ -205,13 +206,10 @@ func (p *Properties) String() string {
    if p.OrigName != p.Name {
        s += ",name=" + p.OrigName
    }
    if p.proto3 {
        s += ",proto3"
    }
    if len(p.Enum) > 0 {
        s += ",enum=" + p.Enum
    }
    if p.HasDefault {
        if len(p.Default) > 0 {
            s += ",def=" + p.Default
        }
    return s
@@ -282,16 +280,17 @@ func (p *Properties) Parse(s string) {
        p.OrigName = f[5:]
    case strings.HasPrefix(f, "enum="):
        p.Enum = f[5:]
    case f == "proto3":
        p.proto3 = true
    case strings.HasPrefix(f, "def="):
        p.HasDefault = true
        p.Default = f[4:] // rest of string
        if i+1 < len(fields) {
            // Commas aren't escaped, and def is always last.
            p.Default += "," + strings.Join(fields[i+1:], ",")
            break
        }
    case strings.HasPrefix(f, "embedded="):
        p.OrigName = strings.Split(f, "=")[1]
    case strings.HasPrefix(f, "customtype="):
        p.CustomType = strings.Split(f, "=")[1]
    }
    }
}
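For reference, the tag strings Parse walks come from the `protobuf:"..."` tags on generated struct fields: wire encoding first, then the tag number, then comma-separated options such as `name=`, `enum=`, and a trailing `def=`. A minimal sketch of that layout (the tag value here is invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Wire type, tag number, then options, in the order Parse expects.
	tag := "varint,2,opt,name=num,def=7"
	fields := strings.Split(tag, ",")
	fmt.Println(fields[0])  // varint
	fmt.Println(fields[1])  // 2
	fmt.Println(fields[2:]) // [opt name=num def=7]
}
```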
@@ -303,71 +302,41 @@ func logNoSliceEnc(t1, t2 reflect.Type) {
var protoMessageType = reflect.TypeOf((*Message)(nil)).Elem()

// Initialize the fields for encoding and decoding.
func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lockGetProp bool) {
func (p *Properties) setEncAndDec(typ reflect.Type, lockGetProp bool) {
    p.enc = nil
    p.dec = nil
    p.size = nil

    if len(p.CustomType) > 0 {
        p.setCustomEncAndDec(typ)
        p.setTag(lockGetProp)
        return
    }
    switch t1 := typ; t1.Kind() {
    default:
        fmt.Fprintf(os.Stderr, "proto: no coders for %v\n", t1)

    // proto3 scalar types

    case reflect.Bool:
        p.enc = (*Buffer).enc_proto3_bool
        p.dec = (*Buffer).dec_proto3_bool
        p.size = size_proto3_bool
    case reflect.Int32:
        p.enc = (*Buffer).enc_proto3_int32
        p.dec = (*Buffer).dec_proto3_int32
        p.size = size_proto3_int32
    case reflect.Uint32:
        p.enc = (*Buffer).enc_proto3_uint32
        p.dec = (*Buffer).dec_proto3_int32 // can reuse
        p.size = size_proto3_uint32
    case reflect.Int64, reflect.Uint64:
        p.enc = (*Buffer).enc_proto3_int64
        p.dec = (*Buffer).dec_proto3_int64
        p.size = size_proto3_int64
    case reflect.Float32:
        p.enc = (*Buffer).enc_proto3_uint32 // can just treat them as bits
        p.dec = (*Buffer).dec_proto3_int32
        p.size = size_proto3_uint32
    case reflect.Float64:
        p.enc = (*Buffer).enc_proto3_int64 // can just treat them as bits
        p.dec = (*Buffer).dec_proto3_int64
        p.size = size_proto3_int64
    case reflect.String:
        p.enc = (*Buffer).enc_proto3_string
        p.dec = (*Buffer).dec_proto3_string
        p.size = size_proto3_string

        if !p.setNonNullableEncAndDec(t1) {
            fmt.Fprintf(os.Stderr, "proto: no coders for %T\n", t1)
        }
    case reflect.Ptr:
        switch t2 := t1.Elem(); t2.Kind() {
        default:
            fmt.Fprintf(os.Stderr, "proto: no encoder function for %v -> %v\n", t1, t2)
            fmt.Fprintf(os.Stderr, "proto: no encoder function for %T -> %T\n", t1, t2)
            break
        case reflect.Bool:
            p.enc = (*Buffer).enc_bool
            p.dec = (*Buffer).dec_bool
            p.size = size_bool
        case reflect.Int32:
        case reflect.Int32, reflect.Uint32:
            p.enc = (*Buffer).enc_int32
            p.dec = (*Buffer).dec_int32
            p.size = size_int32
        case reflect.Uint32:
            p.enc = (*Buffer).enc_uint32
            p.dec = (*Buffer).dec_int32 // can reuse
            p.size = size_uint32
        case reflect.Int64, reflect.Uint64:
            p.enc = (*Buffer).enc_int64
            p.dec = (*Buffer).dec_int64
            p.size = size_int64
        case reflect.Float32:
            p.enc = (*Buffer).enc_uint32 // can just treat them as bits
            p.enc = (*Buffer).enc_int32 // can just treat them as bits
            p.dec = (*Buffer).dec_int32
            p.size = size_uint32
            p.size = size_int32
        case reflect.Float64:
            p.enc = (*Buffer).enc_int64 // can just treat them as bits
            p.dec = (*Buffer).dec_int64
@@ -406,54 +375,48 @@ func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lock
        }
        p.dec = (*Buffer).dec_slice_bool
        p.packedDec = (*Buffer).dec_slice_packed_bool
    case reflect.Int32:
        if p.Packed {
            p.enc = (*Buffer).enc_slice_packed_int32
            p.size = size_slice_packed_int32
        } else {
            p.enc = (*Buffer).enc_slice_int32
            p.size = size_slice_int32
        }
        p.dec = (*Buffer).dec_slice_int32
        p.packedDec = (*Buffer).dec_slice_packed_int32
    case reflect.Uint32:
        if p.Packed {
            p.enc = (*Buffer).enc_slice_packed_uint32
            p.size = size_slice_packed_uint32
        } else {
            p.enc = (*Buffer).enc_slice_uint32
            p.size = size_slice_uint32
        }
        p.dec = (*Buffer).dec_slice_int32
        p.packedDec = (*Buffer).dec_slice_packed_int32
    case reflect.Int64, reflect.Uint64:
        if p.Packed {
            p.enc = (*Buffer).enc_slice_packed_int64
            p.size = size_slice_packed_int64
        } else {
            p.enc = (*Buffer).enc_slice_int64
            p.size = size_slice_int64
        }
        p.dec = (*Buffer).dec_slice_int64
        p.packedDec = (*Buffer).dec_slice_packed_int64
    case reflect.Uint8:
        p.enc = (*Buffer).enc_slice_byte
        p.dec = (*Buffer).dec_slice_byte
        p.size = size_slice_byte
        if p.proto3 {
            p.enc = (*Buffer).enc_proto3_slice_byte
            p.size = size_proto3_slice_byte
    case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
        switch t2.Bits() {
        case 32:
            if p.Packed {
                p.enc = (*Buffer).enc_slice_packed_int32
                p.size = size_slice_packed_int32
            } else {
                p.enc = (*Buffer).enc_slice_int32
                p.size = size_slice_int32
            }
            p.dec = (*Buffer).dec_slice_int32
            p.packedDec = (*Buffer).dec_slice_packed_int32
        case 64:
            if p.Packed {
                p.enc = (*Buffer).enc_slice_packed_int64
                p.size = size_slice_packed_int64
            } else {
                p.enc = (*Buffer).enc_slice_int64
                p.size = size_slice_int64
            }
            p.dec = (*Buffer).dec_slice_int64
            p.packedDec = (*Buffer).dec_slice_packed_int64
        case 8:
            if t2.Kind() == reflect.Uint8 {
                p.enc = (*Buffer).enc_slice_byte
                p.dec = (*Buffer).dec_slice_byte
                p.size = size_slice_byte
            }
        default:
            logNoSliceEnc(t1, t2)
            break
        }
    case reflect.Float32, reflect.Float64:
        switch t2.Bits() {
        case 32:
            // can just treat them as bits
            if p.Packed {
                p.enc = (*Buffer).enc_slice_packed_uint32
                p.size = size_slice_packed_uint32
                p.enc = (*Buffer).enc_slice_packed_int32
                p.size = size_slice_packed_int32
            } else {
                p.enc = (*Buffer).enc_slice_uint32
                p.size = size_slice_uint32
                p.enc = (*Buffer).enc_slice_int32
                p.size = size_slice_int32
            }
            p.dec = (*Buffer).dec_slice_int32
            p.packedDec = (*Buffer).dec_slice_packed_int32
@@ -505,26 +468,14 @@ func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lock
        p.dec = (*Buffer).dec_slice_slice_byte
        p.size = size_slice_slice_byte
        }
    case reflect.Struct:
        p.setSliceOfNonPointerStructs(t1)
    }

    case reflect.Map:
        p.enc = (*Buffer).enc_new_map
        p.dec = (*Buffer).dec_new_map
        p.size = size_new_map

        p.mtype = t1
        p.mkeyprop = &Properties{}
        p.mkeyprop.init(reflect.PtrTo(p.mtype.Key()), "Key", f.Tag.Get("protobuf_key"), nil, lockGetProp)
        p.mvalprop = &Properties{}
        vtype := p.mtype.Elem()
        if vtype.Kind() != reflect.Ptr && vtype.Kind() != reflect.Slice {
            // The value type is not a message (*T) or bytes ([]byte),
            // so we need encoders for the pointer to this type.
            vtype = reflect.PtrTo(vtype)
        }
        p.mvalprop.init(vtype, "Value", f.Tag.Get("protobuf_val"), nil, lockGetProp)
    }
    p.setTag(lockGetProp)
}

func (p *Properties) setTag(lockGetProp bool) {
    // precalculate tag code
    wire := p.WireType
    if p.Packed {
@@ -555,23 +506,11 @@ var (

// isMarshaler reports whether type t implements Marshaler.
func isMarshaler(t reflect.Type) bool {
    // We're checking for (likely) pointer-receiver methods
    // so if t is not a pointer, something is very wrong.
    // The calls above only invoke isMarshaler on pointer types.
    if t.Kind() != reflect.Ptr {
        panic("proto: misuse of isMarshaler")
    }
    return t.Implements(marshalerType)
}

// isUnmarshaler reports whether type t implements Unmarshaler.
func isUnmarshaler(t reflect.Type) bool {
    // We're checking for (likely) pointer-receiver methods
    // so if t is not a pointer, something is very wrong.
    // The calls above only invoke isUnmarshaler on pointer types.
    if t.Kind() != reflect.Ptr {
        panic("proto: misuse of isUnmarshaler")
    }
    return t.Implements(unmarshalerType)
}

@@ -591,7 +530,7 @@ func (p *Properties) init(typ reflect.Type, name, tag string, f *reflect.StructF
        return
    }
    p.Parse(tag)
    p.setEncAndDec(typ, f, lockGetProp)
    p.setEncAndDec(typ, lockGetProp)
}

var (
@@ -600,11 +539,7 @@ var (
)

// GetProperties returns the list of properties for the type represented by t.
// t must represent a generated struct type of a protocol message.
func GetProperties(t reflect.Type) *StructProperties {
    if t.Kind() != reflect.Struct {
        panic("proto: type must have kind struct")
    }
    mutex.Lock()
    sprop := getPropertiesLocked(t)
    mutex.Unlock()
@@ -640,9 +575,15 @@ func getPropertiesLocked(t reflect.Type) *StructProperties {
        p.init(f.Type, name, f.Tag.Get("protobuf"), &f, false)

        if f.Name == "XXX_extensions" { // special case
            p.enc = (*Buffer).enc_map
            p.dec = nil // not needed
            p.size = size_map
            if len(f.Tag.Get("protobuf")) > 0 {
                p.enc = (*Buffer).enc_ext_slice_byte
                p.dec = nil // not needed
                p.size = size_ext_slice_byte
            } else {
                p.enc = (*Buffer).enc_map
                p.dec = nil // not needed
                p.size = size_map
            }
        }
        if f.Name == "XXX_unrecognized" { // special case
            prop.unrecField = toField(&f)
@@ -713,6 +654,7 @@ func getbase(pb Message) (t reflect.Type, b structPointer, err error) {
// The generated code will register the generated maps by calling RegisterEnum.

var enumValueMaps = make(map[string]map[string]int32)
var enumStringMaps = make(map[string]map[int32]string)

// RegisterEnum is called from the generated code to install the enum descriptor
// maps into the global table to aid parsing text format protocol buffers.
@@ -721,4 +663,8 @@ func RegisterEnum(typeName string, unusedNameMap map[int32]string, valueMap map[
        panic("proto: duplicate enum registered: " + typeName)
    }
    enumValueMaps[typeName] = valueMap
    if _, ok := enumStringMaps[typeName]; ok {
        panic("proto: duplicate enum registered: " + typeName)
    }
    enumStringMaps[typeName] = unusedNameMap
}
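Generated code calls RegisterEnum once per enum type, passing both direction maps; the added enumStringMaps mirror makes the name map queryable too. A hedged sketch of what such a registration call looks like (the enum name and values here are hypothetical, modeled on the testdata package used elsewhere in this diff):

```go
package main

import (
	proto "code.google.com/p/gogoprotobuf/proto"
)

func init() {
	// Mirrors a generated registration; registering the same type name
	// twice panics, as the function body above shows.
	proto.RegisterEnum("testdata.FOO",
		map[int32]string{1: "FOO1"},
		map[string]int32{"FOO1": 1})
}

func main() {}
```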
@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -49,6 +49,49 @@ func (p *Properties) setCustomEncAndDec(typ reflect.Type) {
    }
}

func (p *Properties) setNonNullableEncAndDec(typ reflect.Type) bool {
    switch typ.Kind() {
    case reflect.Bool:
        p.enc = (*Buffer).enc_ref_bool
        p.dec = (*Buffer).dec_ref_bool
        p.size = size_ref_bool
    case reflect.Int32, reflect.Uint32:
        p.enc = (*Buffer).enc_ref_int32
        p.dec = (*Buffer).dec_ref_int32
        p.size = size_ref_int32
    case reflect.Int64, reflect.Uint64:
        p.enc = (*Buffer).enc_ref_int64
        p.dec = (*Buffer).dec_ref_int64
        p.size = size_ref_int64
    case reflect.Float32:
        p.enc = (*Buffer).enc_ref_int32 // can just treat them as bits
        p.dec = (*Buffer).dec_ref_int32
        p.size = size_ref_int32
    case reflect.Float64:
        p.enc = (*Buffer).enc_ref_int64 // can just treat them as bits
        p.dec = (*Buffer).dec_ref_int64
        p.size = size_ref_int64
    case reflect.String:
        p.dec = (*Buffer).dec_ref_string
        p.enc = (*Buffer).enc_ref_string
        p.size = size_ref_string
    case reflect.Struct:
        p.stype = typ
        p.isMarshaler = isMarshaler(typ)
        p.isUnmarshaler = isUnmarshaler(typ)
        if p.Wire == "bytes" {
            p.enc = (*Buffer).enc_ref_struct_message
            p.dec = (*Buffer).dec_ref_struct_message
            p.size = size_ref_struct_message
        } else {
            fmt.Fprintf(os.Stderr, "proto: no coders for struct %T\n", typ)
        }
    default:
        return false
    }
    return true
}

func (p *Properties) setSliceOfNonPointerStructs(typ reflect.Type) {
    t2 := typ.Elem()
    p.sstype = typ
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -35,9 +35,8 @@ import (
    "log"
    "testing"

    proto3pb "./proto3_proto"
    pb "./testdata"
    . "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
    . "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
)

var messageWithExtension1 = &pb.MyMessage{Count: Int32(7)}
@@ -66,10 +65,8 @@ var SizeTests = []struct {
    // Basic types.
    {"bool", &pb.Defaults{F_Bool: Bool(true)}},
    {"int32", &pb.Defaults{F_Int32: Int32(12)}},
    {"negative int32", &pb.Defaults{F_Int32: Int32(-1)}},
    {"small int64", &pb.Defaults{F_Int64: Int64(1)}},
    {"big int64", &pb.Defaults{F_Int64: Int64(1 << 20)}},
    {"negative int64", &pb.Defaults{F_Int64: Int64(-1)}},
    {"fixed32", &pb.Defaults{F_Fixed32: Uint32(71)}},
    {"fixed64", &pb.Defaults{F_Fixed64: Uint64(72)}},
    {"uint32", &pb.Defaults{F_Uint32: Uint32(123)}},
@@ -86,7 +83,7 @@ var SizeTests = []struct {
    {"empty repeated bool", &pb.MoreRepeated{Bools: []bool{}}},
    {"repeated bool", &pb.MoreRepeated{Bools: []bool{false, true, true, false}}},
    {"packed repeated bool", &pb.MoreRepeated{BoolsPacked: []bool{false, true, true, false, true, true, true}}},
    {"repeated int32", &pb.MoreRepeated{Ints: []int32{1, 12203, 1729, -1}}},
    {"repeated int32", &pb.MoreRepeated{Ints: []int32{1, 12203, 1729}}},
    {"repeated int32 packed", &pb.MoreRepeated{IntsPacked: []int32{1, 12203, 1729}}},
    {"repeated int64 packed", &pb.MoreRepeated{Int64SPacked: []int64{
        // Need enough large numbers to verify that the header is counting the number of bytes
@@ -103,20 +100,6 @@ var SizeTests = []struct {
    {"unrecognized", &pb.MoreRepeated{XXX_unrecognized: []byte{13<<3 | 0, 4}}},
    {"extension (unencoded)", messageWithExtension1},
    {"extension (encoded)", messageWithExtension3},
    // proto3 message
    {"proto3 empty", &proto3pb.Message{}},
    {"proto3 bool", &proto3pb.Message{TrueScotsman: true}},
    {"proto3 int64", &proto3pb.Message{ResultCount: 1}},
    {"proto3 uint32", &proto3pb.Message{HeightInCm: 123}},
    {"proto3 float", &proto3pb.Message{Score: 12.6}},
    {"proto3 string", &proto3pb.Message{Name: "Snezana"}},
    {"proto3 bytes", &proto3pb.Message{Data: []byte("wowsa")}},
    {"proto3 bytes, empty", &proto3pb.Message{Data: []byte{}}},
    {"proto3 enum", &proto3pb.Message{Hilarity: proto3pb.Message_PUNS}},

    {"map field", &pb.MessageWithMap{NameMapping: map[int32]string{1: "Rob", 7: "Andrew"}}},
    {"map field with message", &pb.MessageWithMap{MsgMapping: map[int64]*pb.FloatingPoint{0x7001: &pb.FloatingPoint{F: Float64(2.0)}}}},
    {"map field with bytes", &pb.MessageWithMap{ByteMapping: map[bool][]byte{true: []byte("this time for sure")}}},
}

func TestSize(t *testing.T) {
@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -27,7 +27,6 @@
package proto

import (
    "fmt"
    "io"
)

@@ -80,7 +79,7 @@ func Skip(data []byte) (n int, err error) {
        return index, nil
    case 3:
        for {
            var innerWire uint64
            var wire uint64
            var start int = index
            for shift := uint(0); ; shift += 7 {
                if index >= l {
@@ -88,13 +87,13 @@ func Skip(data []byte) (n int, err error) {
                }
                b := data[index]
                index++
                innerWire |= (uint64(b) & 0x7F) << shift
                wire |= (uint64(b) & 0x7F) << shift
                if b < 0x80 {
                    break
                }
            }
            innerWireType := int(innerWire & 0x7)
            if innerWireType == 4 {
            wireType := int(wire & 0x7)
            if wireType == 4 {
                break
            }
            next, err := Skip(data[start:])
@@ -110,7 +109,7 @@ func Skip(data []byte) (n int, err error) {
            index += 4
            return index, nil
        default:
            return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
            return 0, ErrWrongType
        }
    }
    panic("unreachable")
@@ -1,7 +1,7 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors.  All rights reserved.
# https://github.com/golang/protobuf
# http://code.google.com/p/goprotobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
@@ -29,19 +29,16 @@
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


include ../../Make.protobuf

all: regenerate

regenerate:
	rm -f test.pb.go
	make test.pb.go

	protoc --gogo_out=. test.proto

# The following rules are just aids to development. Not needed for typical testing.

diff: regenerate
	git diff test.pb.go
	hg diff test.pb.go

restore:
	cp test.pb.go.golden test.pb.go
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -1,4 +1,4 @@
// Code generated by protoc-gen-go.
// Code generated by protoc-gen-gogo.
// source: test.proto
// DO NOT EDIT!

@@ -33,15 +33,16 @@ It has these top-level messages:
    GroupOld
    GroupNew
    FloatingPoint
    MessageWithMap
*/
package testdata

import proto "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
import proto "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
import json "encoding/json"
import math "math"

// Reference imports to suppress errors if they are not otherwise used.
// Reference proto, json, and math imports to suppress error if they are not otherwise used.
var _ = proto.Marshal
var _ = &json.SyntaxError{}
var _ = math.Inf

type FOO int32
@@ -1071,7 +1072,6 @@ func (m *MaxTag) GetLastField() string {

type OldMessage struct {
    Nested           *OldMessage_Nested `protobuf:"bytes,1,opt,name=nested" json:"nested,omitempty"`
    Num              *int32             `protobuf:"varint,2,opt,name=num" json:"num,omitempty"`
    XXX_unrecognized []byte             `json:"-"`
}

@@ -1086,13 +1086,6 @@ func (m *OldMessage) GetNested() *OldMessage_Nested {
    return nil
}

func (m *OldMessage) GetNum() int32 {
    if m != nil && m.Num != nil {
        return *m.Num
    }
    return 0
}

type OldMessage_Nested struct {
    Name             *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
    XXX_unrecognized []byte  `json:"-"`
@@ -1112,10 +1105,8 @@ func (m *OldMessage_Nested) GetName() string {
// NewMessage is wire compatible with OldMessage;
// imagine it as a future version.
type NewMessage struct {
    Nested *NewMessage_Nested `protobuf:"bytes,1,opt,name=nested" json:"nested,omitempty"`
    // This is an int32 in OldMessage.
    Num              *int64 `protobuf:"varint,2,opt,name=num" json:"num,omitempty"`
    XXX_unrecognized []byte `json:"-"`
    Nested           *NewMessage_Nested `protobuf:"bytes,1,opt,name=nested" json:"nested,omitempty"`
    XXX_unrecognized []byte             `json:"-"`
}

func (m *NewMessage) Reset() { *m = NewMessage{} }
@@ -1129,13 +1120,6 @@ func (m *NewMessage) GetNested() *NewMessage_Nested {
    return nil
}

func (m *NewMessage) GetNum() int64 {
    if m != nil && m.Num != nil {
        return *m.Num
    }
    return 0
}

type NewMessage_Nested struct {
    Name      *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
    FoodGroup *string `protobuf:"bytes,2,opt,name=food_group" json:"food_group,omitempty"`
@@ -1417,12 +1401,6 @@ func (m *MyMessageSet) Marshal() ([]byte, error) {
func (m *MyMessageSet) Unmarshal(buf []byte) error {
    return proto.UnmarshalMessageSet(buf, m.ExtensionMap())
}
func (m *MyMessageSet) MarshalJSON() ([]byte, error) {
    return proto.MarshalMessageSetJSON(m.XXX_extensions)
}
func (m *MyMessageSet) UnmarshalJSON(buf []byte) error {
    return proto.UnmarshalMessageSetJSON(buf, m.XXX_extensions)
}

// ensure MyMessageSet satisfies proto.Marshaler and proto.Unmarshaler
var _ proto.Marshaler = (*MyMessageSet)(nil)
@@ -1536,10 +1514,8 @@ type Defaults struct {
    F_Ninf *float32 `protobuf:"fixed32,16,opt,def=-inf" json:"F_Ninf,omitempty"`
    F_Nan  *float32 `protobuf:"fixed32,17,opt,def=nan" json:"F_Nan,omitempty"`
    // Sub-message.
    Sub *SubDefaults `protobuf:"bytes,18,opt,name=sub" json:"sub,omitempty"`
    // Redundant but explicit defaults.
    StrZero          *string `protobuf:"bytes,19,opt,name=str_zero,def=" json:"str_zero,omitempty"`
    XXX_unrecognized []byte  `json:"-"`
    Sub              *SubDefaults `protobuf:"bytes,18,opt,name=sub" json:"sub,omitempty"`
    XXX_unrecognized []byte       `json:"-"`
}

func (m *Defaults) Reset() { *m = Defaults{} }
@@ -1693,13 +1669,6 @@ func (m *Defaults) GetSub() *SubDefaults {
    return nil
}

func (m *Defaults) GetStrZero() string {
    if m != nil && m.StrZero != nil {
        return *m.StrZero
    }
    return ""
}

type SubDefaults struct {
    N                *int64 `protobuf:"varint,1,opt,name=n,def=7" json:"n,omitempty"`
    XXX_unrecognized []byte `json:"-"`
@@ -1886,38 +1855,6 @@ func (m *FloatingPoint) GetF() float64 {
    return 0
}

type MessageWithMap struct {
    NameMapping      map[int32]string         `protobuf:"bytes,1,rep,name=name_mapping" json:"name_mapping,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    MsgMapping       map[int64]*FloatingPoint `protobuf:"bytes,2,rep,name=msg_mapping" json:"msg_mapping,omitempty" protobuf_key:"zigzag64,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    ByteMapping      map[bool][]byte          `protobuf:"bytes,3,rep,name=byte_mapping" json:"byte_mapping,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
    XXX_unrecognized []byte                   `json:"-"`
}

func (m *MessageWithMap) Reset()         { *m = MessageWithMap{} }
func (m *MessageWithMap) String() string { return proto.CompactTextString(m) }
func (*MessageWithMap) ProtoMessage()    {}

func (m *MessageWithMap) GetNameMapping() map[int32]string {
    if m != nil {
        return m.NameMapping
    }
    return nil
}

func (m *MessageWithMap) GetMsgMapping() map[int64]*FloatingPoint {
    if m != nil {
        return m.MsgMapping
    }
    return nil
}

func (m *MessageWithMap) GetByteMapping() map[bool][]byte {
    if m != nil {
        return m.ByteMapping
    }
    return nil
}

var E_Greeting = &proto.ExtensionDesc{
    ExtendedType:  (*MyMessage)(nil),
    ExtensionType: ([]string)(nil),
@@ -4,7 +4,7 @@

package testdata

import proto "github.com/gogo/protobuf/proto"
import proto "code.google.com/p/gogoprotobuf/proto"
import json "encoding/json"
import math "math"

@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -203,8 +203,6 @@ message OldMessage {
    optional string name = 1;
  }
  optional Nested nested = 1;

  optional int32 num = 2;
}

// NewMessage is wire compatible with OldMessage;
@@ -215,9 +213,6 @@ message NewMessage {
    optional string food_group = 2;
  }
  optional Nested nested = 1;

  // This is an int32 in OldMessage.
  optional int64 num = 2;
}

// Smaller tests for ASCII formatting.
@@ -381,9 +376,6 @@ message Defaults {

  // Sub-message.
  optional SubDefaults sub = 18;

  // Redundant but explicit defaults.
  optional string str_zero = 19 [default=""];
}

message SubDefaults {
@@ -426,9 +418,3 @@ message GroupNew {
message FloatingPoint {
  required double f = 1;
}

message MessageWithMap {
  map<int32, string> name_mapping = 1;
  map<sint64, FloatingPoint> msg_mapping = 2;
  map<bool, bytes> byte_mapping = 3;
}
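On the wire, each map field above is encoded as a repeated entry message whose key is field 1 and whose value is field 2; the protobuf_key/protobuf_val halves of the generated struct tags describe exactly that. A hedged sketch of the generated shape for one such field (names invented for illustration):

```go
package example

// Hypothetical hand-written equivalent of one generated map field:
// map<int32, string> name_mapping = 1 becomes a Go map whose tag carries
// separate wire descriptions for the entry's key (field 1) and value (field 2).
type MessageWithMapSketch struct {
	NameMapping map[int32]string `protobuf:"bytes,1,rep,name=name_mapping" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
}
```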
@@ -1,12 +1,12 @@
// Extensions for Protocol Buffers to create more go like structures.
//
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -41,7 +41,6 @@ package proto
import (
    "bufio"
    "bytes"
    "encoding"
    "fmt"
    "io"
    "log"
@@ -80,6 +79,13 @@ type textWriter struct {
    w writer
}

// textMarshaler is implemented by Messages that can marshal themselves.
// It is identical to encoding.TextMarshaler, introduced in Go 1.2,
// which will eventually replace it.
type textMarshaler interface {
    MarshalText() (text []byte, err error)
}
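Any type with a pointer- or value-receiver MarshalText method satisfies this shim, and on Go 1.2+ the same method also satisfies encoding.TextMarshaler. A minimal sketch of an implementing type (invented for illustration):

```go
package example

import "fmt"

// Temperature is a hypothetical type satisfying the textMarshaler shim.
type Temperature struct {
	Celsius float64
}

// MarshalText renders the value as text, which marshalText will emit
// verbatim instead of walking the struct fields.
func (t Temperature) MarshalText() (text []byte, err error) {
	return []byte(fmt.Sprintf("%.1fC", t.Celsius)), nil
}
```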

func (w *textWriter) WriteString(s string) (n int, err error) {
    if !strings.Contains(s, "\n") {
        if !w.compact && w.complete {
@@ -231,20 +237,11 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
                return err
            }
        }
        v := fv.Index(j)
        if v.Kind() == reflect.Ptr && v.IsNil() {
            // A nil message in a repeated field is not valid,
            // but we can handle that more gracefully than panicking.
            if _, err := w.Write([]byte("<nil>\n")); err != nil {
                return err
            }
            continue
        }
        if len(props.Enum) > 0 {
            if err := writeEnum(w, v, props); err != nil {
            if err := writeEnum(w, fv.Index(j), props); err != nil {
                return err
            }
        } else if err := writeAny(w, v, props); err != nil {
        } else if err := writeAny(w, fv.Index(j), props); err != nil {
            return err
        }
        if err := w.WriteByte('\n'); err != nil {
@@ -253,100 +250,6 @@ func writeStruct(w *textWriter, sv reflect.Value) error {
        }
        continue
    }
    if fv.Kind() == reflect.Map {
        // Map fields are rendered as a repeated struct with key/value fields.
        keys := fv.MapKeys() // TODO: should we sort these for deterministic output?
        sort.Sort(mapKeys(keys))
        for _, key := range keys {
            val := fv.MapIndex(key)
            if err := writeName(w, props); err != nil {
                return err
            }
            if !w.compact {
                if err := w.WriteByte(' '); err != nil {
                    return err
                }
            }
            // open struct
            if err := w.WriteByte('<'); err != nil {
                return err
            }
            if !w.compact {
                if err := w.WriteByte('\n'); err != nil {
                    return err
                }
            }
            w.indent()
            // key
            if _, err := w.WriteString("key:"); err != nil {
                return err
            }
            if !w.compact {
                if err := w.WriteByte(' '); err != nil {
                    return err
                }
            }
            if err := writeAny(w, key, props.mkeyprop); err != nil {
                return err
            }
            if err := w.WriteByte('\n'); err != nil {
                return err
            }
            // value
            if _, err := w.WriteString("value:"); err != nil {
                return err
            }
            if !w.compact {
                if err := w.WriteByte(' '); err != nil {
                    return err
                }
            }
            if err := writeAny(w, val, props.mvalprop); err != nil {
                return err
            }
            if err := w.WriteByte('\n'); err != nil {
                return err
            }
            // close struct
            w.unindent()
            if err := w.WriteByte('>'); err != nil {
                return err
            }
            if err := w.WriteByte('\n'); err != nil {
                return err
            }
        }
        continue
    }
    if props.proto3 && fv.Kind() == reflect.Slice && fv.Len() == 0 {
        // empty bytes field
        continue
    }
    if props.proto3 && fv.Kind() != reflect.Ptr && fv.Kind() != reflect.Slice {
        // proto3 non-repeated scalar field; skip if zero value
        switch fv.Kind() {
        case reflect.Bool:
            if !fv.Bool() {
                continue
            }
        case reflect.Int32, reflect.Int64:
            if fv.Int() == 0 {
                continue
            }
        case reflect.Uint32, reflect.Uint64:
            if fv.Uint() == 0 {
                continue
            }
        case reflect.Float32, reflect.Float64:
            if fv.Float() == 0 {
                continue
            }
        case reflect.String:
            if fv.String() == "" {
                continue
            }
        }
    }

    if err := writeName(w, props); err != nil {
        return err
@@ -448,7 +351,7 @@ func writeAny(w *textWriter, v reflect.Value, props *Properties) error {
    switch v.Kind() {
    case reflect.Slice:
        // Should only be a []byte; repeated fields are handled in writeStruct.
        if err := writeString(w, string(v.Bytes())); err != nil {
        if err := writeString(w, string(v.Interface().([]byte))); err != nil {
            return err
        }
    case reflect.String:
@@ -470,7 +373,7 @@ func writeAny(w *textWriter, v reflect.Value, props *Properties) error {
            }
        }
        w.indent()
        if tm, ok := v.Interface().(encoding.TextMarshaler); ok {
        if tm, ok := v.Interface().(textMarshaler); ok {
            text, err := tm.MarshalText()
            if err != nil {
                return err
@@ -776,7 +679,7 @@ func marshalText(w io.Writer, pb Message, compact bool) error {
        compact: compact,
    }

    if tm, ok := pb.(encoding.TextMarshaler); ok {
    if tm, ok := pb.(textMarshaler); ok {
        text, err := tm.MarshalText()
        if err != nil {
            return err
@@ -1,5 +1,5 @@
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -1,12 +1,12 @@
// Extensions for Protocol Buffers to create more go like structures.
//
// Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
// http://github.com/gogo/protobuf/gogoproto
// http://code.google.com/p/gogoprotobuf/gogoproto
//
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -40,7 +40,6 @@ package proto
// TODO: message sets.

import (
    "encoding"
    "errors"
    "fmt"
    "reflect"
@@ -49,6 +48,13 @@ import (
    "unicode/utf8"
)

// textUnmarshaler is implemented by Messages that can unmarshal themselves.
// It is identical to encoding.TextUnmarshaler, introduced in Go 1.2,
// which will eventually replace it.
type textUnmarshaler interface {
    UnmarshalText(text []byte) error
}

type ParseError struct {
    Message string
    Line    int // 1-based line number
@@ -360,20 +366,8 @@ func (p *textParser) next() *token {
    return &p.cur
}

func (p *textParser) consumeToken(s string) error {
    tok := p.next()
    if tok.err != nil {
        return tok.err
    }
    if tok.value != s {
        p.back()
        return p.errorf("expected %q, found %q", s, tok.value)
    }
    return nil
}

// Return a RequiredNotSetError indicating which required field was not set.
func (p *textParser) missingRequiredFieldError(sv reflect.Value) *RequiredNotSetError {
// Return an error indicating which required field was not set.
func (p *textParser) missingRequiredFieldError(sv reflect.Value) *ParseError {
    st := sv.Type()
    sprops := GetProperties(st)
    for i := 0; i < st.NumField(); i++ {
@@ -383,10 +377,10 @@ func (p *textParser) missingRequiredFieldError(sv reflect.Value) *RequiredNotSet

        props := sprops.Prop[i]
        if props.Required {
            return &RequiredNotSetError{fmt.Sprintf("%v.%v", st, props.OrigName)}
            return p.errorf("message %v missing required field %q", st, props.OrigName)
        }
    }
    return &RequiredNotSetError{fmt.Sprintf("%v.<unknown field name>", st)} // should not happen
    return p.errorf("message %v missing required field", st) // should not happen
}

// Returns the index in the struct for the named field, as well as the parsed tag properties.
@@ -426,10 +420,6 @@ func (p *textParser) checkForColon(props *Properties, typ reflect.Type) *ParseEr
        if typ.Elem().Kind() != reflect.Ptr {
            break
        }
    } else if typ.Kind() == reflect.String {
        // The proto3 exception is for a string field,
        // which requires a colon.
        break
    }
    needColon = false
}
@@ -441,11 +431,9 @@
    return nil
}

func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
func (p *textParser) readStruct(sv reflect.Value, terminator string) *ParseError {
    st := sv.Type()
    reqCount := GetProperties(st).reqCount
    var reqFieldErr error
    fieldSet := make(map[string]bool)
    // A struct is a sequence of "name: value", terminated by one of
    // '>' or '}', or the end of the input.  A name may also be
    // "[extension]".
@@ -506,10 +494,7 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
        ext = reflect.New(typ.Elem()).Elem()
    }
    if err := p.readAny(ext, props); err != nil {
        if _, ok := err.(*RequiredNotSetError); !ok {
            return err
        }
        reqFieldErr = err
        return err
    }
    ep := sv.Addr().Interface().(extendableProto)
    if !rep {
@@ -527,71 +512,17 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
    }
} else {
    // This is a normal, non-extension field.
    name := tok.value
    fi, props, ok := structFieldByName(st, name)
    fi, props, ok := structFieldByName(st, tok.value)
    if !ok {
        return p.errorf("unknown field name %q in %v", name, st)
        return p.errorf("unknown field name %q in %v", tok.value, st)
    }

    dst := sv.Field(fi)

    if dst.Kind() == reflect.Map {
        // Consume any colon.
        if err := p.checkForColon(props, dst.Type()); err != nil {
            return err
        }

        // Construct the map if it doesn't already exist.
        if dst.IsNil() {
            dst.Set(reflect.MakeMap(dst.Type()))
        }
        key := reflect.New(dst.Type().Key()).Elem()
        val := reflect.New(dst.Type().Elem()).Elem()

        // The map entry should be this sequence of tokens:
        //	< key : KEY value : VALUE >
        // Technically the "key" and "value" could come in any order,
        // but in practice they won't.

        tok := p.next()
        var terminator string
        switch tok.value {
        case "<":
            terminator = ">"
        case "{":
            terminator = "}"
        default:
            return p.errorf("expected '{' or '<', found %q", tok.value)
        }
        if err := p.consumeToken("key"); err != nil {
            return err
        }
        if err := p.consumeToken(":"); err != nil {
            return err
        }
        if err := p.readAny(key, props.mkeyprop); err != nil {
            return err
        }
        if err := p.consumeToken("value"); err != nil {
            return err
        }
        if err := p.checkForColon(props.mvalprop, dst.Type().Elem()); err != nil {
            return err
        }
        if err := p.readAny(val, props.mvalprop); err != nil {
            return err
        }
        if err := p.consumeToken(terminator); err != nil {
            return err
        }

        dst.SetMapIndex(key, val)
        continue
    }
    isDstNil := isNil(dst)

    // Check that it's not already set if it's not a repeated field.
    if !props.Repeated && fieldSet[name] {
        return p.errorf("non-repeated field %q was repeated", name)
    if !props.Repeated && !isDstNil && dst.Kind() == reflect.Ptr {
        return p.errorf("non-repeated field %q was repeated", tok.value)
    }

    if err := p.checkForColon(props, st.Field(fi).Type); err != nil {
@@ -599,13 +530,11 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
    }

    // Parse into the field.
    fieldSet[name] = true
    if err := p.readAny(dst, props); err != nil {
        if _, ok := err.(*RequiredNotSetError); !ok {
            return err
        }
        reqFieldErr = err
    } else if props.Required {
        return err
    }

    if props.Required {
        reqCount--
    }
}
@@ -623,10 +552,10 @@ func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
    if reqCount > 0 {
        return p.missingRequiredFieldError(sv)
    }
    return reqFieldErr
    return nil
}

func (p *textParser) readAny(v reflect.Value, props *Properties) error {
func (p *textParser) readAny(v reflect.Value, props *Properties) *ParseError {
    tok := p.next()
    if tok.err != nil {
        return tok.err
@@ -726,7 +655,6 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
        fv.SetInt(x)
        return nil
    }

    if len(props.Enum) == 0 {
        break
    }
@@ -745,7 +673,6 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
        fv.SetInt(x)
        return nil
    }

case reflect.Ptr:
    // A basic field (indirected through pointer), or a repeated message/group
    p.back()
@@ -766,7 +693,7 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {
    default:
        return p.errorf("expected '{' or '<', found %q", tok.value)
    }
    // TODO: Handle nested messages which implement encoding.TextUnmarshaler.
    // TODO: Handle nested messages which implement textUnmarshaler.
    return p.readStruct(fv, terminator)
case reflect.Uint32:
    if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
@@ -784,10 +711,8 @@ func (p *textParser) readAny(v reflect.Value, props *Properties) error {

// UnmarshalText reads a protocol buffer in Text format. UnmarshalText resets pb
// before starting to unmarshal, so any existing data in pb is always removed.
// If a required field is not set and no other error occurs,
// UnmarshalText returns *RequiredNotSetError.
func UnmarshalText(s string, pb Message) error {
    if um, ok := pb.(encoding.TextUnmarshaler); ok {
    if um, ok := pb.(textUnmarshaler); ok {
        err := um.UnmarshalText([]byte(s))
        return err
    }
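A rough usage sketch of UnmarshalText against the testdata message these tests use (the relative import mirrors the test files above; getter names are assumed to follow the usual generated pattern):

```go
package main

import (
	"fmt"

	pb "./testdata"
	proto "code.google.com/p/gogoprotobuf/proto"
)

func main() {
	msg := new(pb.MyMessage)
	// Text format is "name: value" pairs; count is a required field here.
	if err := proto.UnmarshalText(`count: 4 name: "Dave"`, msg); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(msg.GetCount(), msg.GetName()) // 4 Dave
}
```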
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -36,9 +36,8 @@ import (
    "reflect"
    "testing"

    proto3pb "./proto3_proto"
    . "./testdata"
    . "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
    . "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"
)

type UnmarshalTextTest struct {
@@ -157,8 +156,8 @@ var unMarshalTextTests = []UnmarshalTextTest{

    // Number too large for int64
    {
        in:  "count: 1 others { key: 123456789012345678901 }",
        err: "line 1.23: invalid int64: 123456789012345678901",
        in:  "count: 123456789012345678901",
        err: "line 1.7: invalid int32: 123456789012345678901",
    },

    // Number too large for int32
@@ -295,11 +294,8 @@ var unMarshalTextTests = []UnmarshalTextTest{

    // Missing required field
    {
        in:  `name: "Pawel"`,
        err: `proto: required field "testdata.MyMessage.count" not set`,
        out: &MyMessage{
            Name: String("Pawel"),
        },
        in:  ``,
        err: `line 1.0: message testdata.MyMessage missing required field "count"`,
    },

    // Repeated non-repeated field
@@ -412,9 +408,6 @@ func TestUnmarshalText(t *testing.T) {
        } else if err.Error() != test.err {
            t.Errorf("Test %d: Incorrect error.\nHave: %v\nWant: %v",
                i, err.Error(), test.err)
        } else if _, ok := err.(*RequiredNotSetError); ok && test.out != nil && !reflect.DeepEqual(pb, test.out) {
            t.Errorf("Test %d: Incorrect populated \nHave: %v\nWant: %v",
                i, pb, test.out)
        }
    }
}
@@ -444,48 +437,6 @@ func TestRepeatedEnum(t *testing.T) {
    }
}

func TestProto3TextParsing(t *testing.T) {
    m := new(proto3pb.Message)
    const in = `name: "Wallace" true_scotsman: true`
    want := &proto3pb.Message{
        Name:         "Wallace",
        TrueScotsman: true,
    }
    if err := UnmarshalText(in, m); err != nil {
        t.Fatal(err)
    }
    if !Equal(m, want) {
        t.Errorf("\n got %v\nwant %v", m, want)
    }
}

func TestMapParsing(t *testing.T) {
    m := new(MessageWithMap)
    const in = `name_mapping:<key:1234 value:"Feist"> name_mapping:<key:1 value:"Beatles">` +
        `msg_mapping:<key:-4 value:<f: 2.0>>` +
        `msg_mapping<key:-2 value<f: 4.0>>` + // no colon after "value"
        `byte_mapping:<key:true value:"so be it">`
    want := &MessageWithMap{
        NameMapping: map[int32]string{
            1:    "Beatles",
            1234: "Feist",
        },
        MsgMapping: map[int64]*FloatingPoint{
            -4: {F: Float64(2.0)},
            -2: {F: Float64(4.0)},
        },
        ByteMapping: map[bool][]byte{
            true: []byte("so be it"),
        },
    }
    if err := UnmarshalText(in, m); err != nil {
        t.Fatal(err)
    }
    if !Equal(m, want) {
        t.Errorf("\n got %v\nwant %v", m, want)
    }
}

var benchInput string

func init() {
@@ -1,7 +1,7 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors.  All rights reserved.
// https://github.com/golang/protobuf
// http://code.google.com/p/goprotobuf/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
@@ -39,9 +39,8 @@ import (
    "strings"
    "testing"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/golang/protobuf/proto"
    "github.com/coreos/etcd/Godeps/_workspace/src/code.google.com/p/gogoprotobuf/proto"

    proto3pb "./proto3_proto"
    pb "./testdata"
)

@@ -386,51 +385,3 @@ func TestFloats(t *testing.T) {
        }
    }
}

func TestRepeatedNilText(t *testing.T) {
    m := &pb.MessageList{
        Message: []*pb.MessageList_Message{
            nil,
            &pb.MessageList_Message{
                Name: proto.String("Horse"),
            },
            nil,
        },
    }
    want := `Message <nil>
Message {
  name: "Horse"
}
Message <nil>
`
    if s := proto.MarshalTextString(m); s != want {
        t.Errorf(" got: %s\nwant: %s", s, want)
    }
}

func TestProto3Text(t *testing.T) {
    tests := []struct {
        m    proto.Message
        want string
    }{
        // zero message
        {&proto3pb.Message{}, ``},
        // zero message except for an empty byte slice
        {&proto3pb.Message{Data: []byte{}}, ``},
        // trivial case
        {&proto3pb.Message{Name: "Rob", HeightInCm: 175}, `name:"Rob" height_in_cm:175`},
        // empty map
        {&pb.MessageWithMap{}, ``},
        // non-empty map; current map format is the same as a repeated struct
        {
            &pb.MessageWithMap{NameMapping: map[int32]string{1234: "Feist"}},
            `name_mapping:<key:1234 value:"Feist" >`,
        },
    }
    for _, test := range tests {
        got := strings.TrimSpace(test.m.String())
        if got != test.want {
            t.Errorf("\n got %s\nwant %s", got, test.want)
        }
    }
}
Godeps/_workspace/src/github.com/beorn7/perks/quantile/bench_test.go (63 lines, generated, vendored)
@@ -1,63 +0,0 @@
package quantile

import (
    "testing"
)

func BenchmarkInsertTargeted(b *testing.B) {
    b.ReportAllocs()

    s := NewTargeted(Targets)
    b.ResetTimer()
    for i := float64(0); i < float64(b.N); i++ {
        s.Insert(i)
    }
}

func BenchmarkInsertTargetedSmallEpsilon(b *testing.B) {
    s := NewTargeted(TargetsSmallEpsilon)
    b.ResetTimer()
    for i := float64(0); i < float64(b.N); i++ {
        s.Insert(i)
    }
}

func BenchmarkInsertBiased(b *testing.B) {
    s := NewLowBiased(0.01)
    b.ResetTimer()
    for i := float64(0); i < float64(b.N); i++ {
        s.Insert(i)
    }
}

func BenchmarkInsertBiasedSmallEpsilon(b *testing.B) {
    s := NewLowBiased(0.0001)
    b.ResetTimer()
    for i := float64(0); i < float64(b.N); i++ {
        s.Insert(i)
    }
}

func BenchmarkQuery(b *testing.B) {
    s := NewTargeted(Targets)
    for i := float64(0); i < 1e6; i++ {
        s.Insert(i)
    }
    b.ResetTimer()
    n := float64(b.N)
    for i := float64(0); i < n; i++ {
        s.Query(i / n)
    }
}

func BenchmarkQuerySmallEpsilon(b *testing.B) {
    s := NewTargeted(TargetsSmallEpsilon)
    for i := float64(0); i < 1e6; i++ {
        s.Insert(i)
    }
    b.ResetTimer()
    n := float64(b.N)
    for i := float64(0); i < n; i++ {
        s.Query(i / n)
    }
}
Godeps/_workspace/src/github.com/beorn7/perks/quantile/example_test.go (121 lines, generated, vendored)
@@ -1,121 +0,0 @@
// +build go1.1

package quantile_test

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "strconv"
    "time"

    "github.com/coreos/etcd/Godeps/_workspace/src/github.com/beorn7/perks/quantile"
)

func Example_simple() {
    ch := make(chan float64)
    go sendFloats(ch)

    // Compute the 50th, 90th, and 99th percentile.
    q := quantile.NewTargeted(map[float64]float64{
        0.50: 0.005,
        0.90: 0.001,
        0.99: 0.0001,
    })
    for v := range ch {
        q.Insert(v)
    }

    fmt.Println("perc50:", q.Query(0.50))
    fmt.Println("perc90:", q.Query(0.90))
    fmt.Println("perc99:", q.Query(0.99))
    fmt.Println("count:", q.Count())
    // Output:
    // perc50: 5
    // perc90: 16
    // perc99: 223
    // count: 2388
}

func Example_mergeMultipleStreams() {
    // Scenario:
    // We have multiple database shards. On each shard, there is a process
    // collecting query response times from the database logs and inserting
    // them into a Stream (created via NewTargeted(0.90)), much like the
    // Simple example. These processes expose a network interface for us to
    // ask them to serialize and send us the results of their
    // Stream.Samples so we may Merge and Query them.
    //
    // NOTES:
    // * These sample sets are small, allowing us to get them
    // across the network much faster than sending the entire list of data
    // points.
    //
    // * For this to work correctly, we must supply the same quantiles
    // a priori the process collecting the samples supplied to NewTargeted,
    // even if we do not plan to query them all here.
    ch := make(chan quantile.Samples)
    getDBQuerySamples(ch)
    q := quantile.NewTargeted(map[float64]float64{0.90: 0.001})
    for samples := range ch {
        q.Merge(samples)
    }
    fmt.Println("perc90:", q.Query(0.90))
}

func Example_window() {
    // Scenario: We want the 90th, 95th, and 99th percentiles for each
    // minute.

    ch := make(chan float64)
    go sendStreamValues(ch)

    tick := time.NewTicker(1 * time.Minute)
    q := quantile.NewTargeted(map[float64]float64{
        0.90: 0.001,
        0.95: 0.0005,
        0.99: 0.0001,
    })
    for {
        select {
        case t := <-tick.C:
            flushToDB(t, q.Samples())
            q.Reset()
        case v := <-ch:
            q.Insert(v)
        }
    }
}

func sendStreamValues(ch chan float64) {
    // Use your imagination
}

func flushToDB(t time.Time, samples quantile.Samples) {
    // Use your imagination
}

// This is a stub for the above example. In reality this would hit the remote
// servers via http or something like it.
func getDBQuerySamples(ch chan quantile.Samples) {}

func sendFloats(ch chan<- float64) {
    f, err := os.Open("exampledata.txt")
    if err != nil {
        log.Fatal(err)
    }
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        b := sc.Bytes()
        v, err := strconv.ParseFloat(string(b), 64)
        if err != nil {
            log.Fatal(err)
        }
        ch <- v
    }
    if sc.Err() != nil {
        log.Fatal(sc.Err())
    }
    close(ch)
}
Godeps/_workspace/src/github.com/beorn7/perks/quantile/exampledata.txt (2388 lines, generated, vendored)
File diff suppressed because it is too large
Godeps/_workspace/src/github.com/beorn7/perks/quantile/stream.go (292 lines, generated, vendored)
@@ -1,292 +0,0 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile

import (
    "math"
    "sort"
)

// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
    Value float64 `json:",string"`
    Width float64 `json:",string"`
    Delta float64 `json:",string"`
}

// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample

func (a Samples) Len() int           { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }

type invariant func(s *stream, r float64) float64

// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        return 2 * epsilon * r
    }
    return newStream(ƒ)
}

// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        return 2 * epsilon * (s.n - r)
    }
    return newStream(ƒ)
}

// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targets map[float64]float64) *Stream {
    ƒ := func(s *stream, r float64) float64 {
        var m = math.MaxFloat64
        var f float64
        for quantile, epsilon := range targets {
            if quantile*s.n <= r {
                f = (2 * epsilon * r) / quantile
            } else {
                f = (2 * epsilon * (s.n - r)) / (1 - quantile)
            }
            if f < m {
                m = f
            }
        }
        return m
    }
    return newStream(ƒ)
}
|
||||
|
||||
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
|
||||
// design. Take care when using across multiple goroutines.
|
||||
type Stream struct {
|
||||
*stream
|
||||
b Samples
|
||||
sorted bool
|
||||
}
|
||||
|
||||
func newStream(ƒ invariant) *Stream {
|
||||
x := &stream{ƒ: ƒ}
|
||||
return &Stream{x, make(Samples, 0, 500), true}
|
||||
}
|
||||
|
||||
// Insert inserts v into the stream.
|
||||
func (s *Stream) Insert(v float64) {
|
||||
s.insert(Sample{Value: v, Width: 1})
|
||||
}
|
||||
|
||||
func (s *Stream) insert(sample Sample) {
|
||||
s.b = append(s.b, sample)
|
||||
s.sorted = false
|
||||
if len(s.b) == cap(s.b) {
|
||||
s.flush()
|
||||
}
|
||||
}
|
||||
|
||||
// Query returns the computed qth percentiles value. If s was created with
|
||||
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
|
||||
// will return an unspecified result.
|
||||
func (s *Stream) Query(q float64) float64 {
|
||||
if !s.flushed() {
|
||||
// Fast path when there hasn't been enough data for a flush;
|
||||
// this also yields better accuracy for small sets of data.
|
||||
l := len(s.b)
|
||||
if l == 0 {
|
||||
return 0
|
||||
}
|
||||
i := int(float64(l) * q)
|
||||
if i > 0 {
|
||||
i -= 1
|
||||
}
|
||||
s.maybeSort()
|
||||
return s.b[i].Value
|
||||
}
|
||||
s.flush()
|
||||
return s.stream.query(q)
|
||||
}
|
||||
|
||||
// Merge merges samples into the underlying streams samples. This is handy when
|
||||
// merging multiple streams from separate threads, database shards, etc.
|
||||
//
|
||||
// ATTENTION: This method is broken and does not yield correct results. The
|
||||
// underlying algorithm is not capable of merging streams correctly.
|
||||
func (s *Stream) Merge(samples Samples) {
|
||||
sort.Sort(samples)
|
||||
s.stream.merge(samples)
|
||||
}
|
||||
|
||||
// Reset reinitializes and clears the list reusing the samples buffer memory.
|
||||
func (s *Stream) Reset() {
|
||||
s.stream.reset()
|
||||
s.b = s.b[:0]
|
||||
}
|
||||
|
||||
// Samples returns stream samples held by s.
|
||||
func (s *Stream) Samples() Samples {
|
||||
if !s.flushed() {
|
||||
return s.b
|
||||
}
|
||||
s.flush()
|
||||
return s.stream.samples()
|
||||
}
|
||||
|
||||
// Count returns the total number of samples observed in the stream
|
||||
// since initialization.
|
||||
func (s *Stream) Count() int {
|
||||
return len(s.b) + s.stream.count()
|
||||
}
|
||||
|
||||
func (s *Stream) flush() {
|
||||
s.maybeSort()
|
||||
s.stream.merge(s.b)
|
||||
s.b = s.b[:0]
|
||||
}
|
||||
|
||||
func (s *Stream) maybeSort() {
|
||||
if !s.sorted {
|
||||
s.sorted = true
|
||||
sort.Sort(s.b)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Stream) flushed() bool {
|
||||
return len(s.stream.l) > 0
|
||||
}
|
||||
|
||||
type stream struct {
|
||||
n float64
|
||||
l []Sample
|
||||
ƒ invariant
|
||||
}
|
||||
|
||||
func (s *stream) reset() {
|
||||
s.l = s.l[:0]
|
||||
s.n = 0
|
||||
}
|
||||
|
||||
func (s *stream) insert(v float64) {
|
||||
s.merge(Samples{{v, 1, 0}})
|
||||
}
|
||||
|
||||
func (s *stream) merge(samples Samples) {
|
||||
// TODO(beorn7): This tries to merge not only individual samples, but
|
||||
// whole summaries. The paper doesn't mention merging summaries at
|
||||
// all. Unittests show that the merging is inaccurate. Find out how to
|
||||
// do merges properly.
|
||||
var r float64
|
||||
i := 0
|
||||
for _, sample := range samples {
|
||||
for ; i < len(s.l); i++ {
|
||||
c := s.l[i]
|
||||
if c.Value > sample.Value {
|
||||
// Insert at position i.
|
||||
s.l = append(s.l, Sample{})
|
||||
copy(s.l[i+1:], s.l[i:])
|
||||
s.l[i] = Sample{
|
||||
sample.Value,
|
||||
sample.Width,
|
||||
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
|
||||
// TODO(beorn7): How to calculate delta correctly?
|
||||
}
|
||||
i++
|
||||
goto inserted
|
||||
}
|
||||
r += c.Width
|
||||
}
|
||||
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
|
||||
i++
|
||||
inserted:
|
||||
s.n += sample.Width
|
||||
r += sample.Width
|
||||
}
|
||||
s.compress()
|
||||
}
|
||||
|
||||
func (s *stream) count() int {
|
||||
return int(s.n)
|
||||
}
|
||||
|
||||
func (s *stream) query(q float64) float64 {
|
||||
t := math.Ceil(q * s.n)
|
||||
t += math.Ceil(s.ƒ(s, t) / 2)
|
||||
p := s.l[0]
|
||||
var r float64
|
||||
for _, c := range s.l[1:] {
|
||||
r += p.Width
|
||||
if r+c.Width+c.Delta > t {
|
||||
return p.Value
|
||||
}
|
||||
p = c
|
||||
}
|
||||
return p.Value
|
||||
}
|
||||
|
||||
func (s *stream) compress() {
|
||||
if len(s.l) < 2 {
|
||||
return
|
||||
}
|
||||
x := s.l[len(s.l)-1]
|
||||
xi := len(s.l) - 1
|
||||
r := s.n - 1 - x.Width
|
||||
|
||||
for i := len(s.l) - 2; i >= 0; i-- {
|
||||
c := s.l[i]
|
||||
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
|
||||
x.Width += c.Width
|
||||
s.l[xi] = x
|
||||
// Remove element at i.
|
||||
copy(s.l[i:], s.l[i+1:])
|
||||
s.l = s.l[:len(s.l)-1]
|
||||
xi -= 1
|
||||
} else {
|
||||
x = c
|
||||
xi = i
|
||||
}
|
||||
r -= c.Width
|
||||
}
|
||||
}
|
||||
|
||||
func (s *stream) samples() Samples {
|
||||
samples := make(Samples, len(s.l))
|
||||
copy(samples, s.l)
|
||||
return samples
|
||||
}
|
188 Godeps/_workspace/src/github.com/beorn7/perks/quantile/stream_test.go (generated, vendored)
@@ -1,188 +0,0 @@
package quantile

import (
	"math"
	"math/rand"
	"sort"
	"testing"
)

var (
	Targets = map[float64]float64{
		0.01: 0.001,
		0.10: 0.01,
		0.50: 0.05,
		0.90: 0.01,
		0.99: 0.001,
	}
	TargetsSmallEpsilon = map[float64]float64{
		0.01: 0.0001,
		0.10: 0.001,
		0.50: 0.005,
		0.90: 0.001,
		0.99: 0.0001,
	}
	LowQuantiles  = []float64{0.01, 0.1, 0.5}
	HighQuantiles = []float64{0.99, 0.9, 0.5}
)

const RelativeEpsilon = 0.01

func verifyPercsWithAbsoluteEpsilon(t *testing.T, a []float64, s *Stream) {
	sort.Float64s(a)
	for quantile, epsilon := range Targets {
		n := float64(len(a))
		k := int(quantile * n)
		lower := int((quantile - epsilon) * n)
		if lower < 1 {
			lower = 1
		}
		upper := int(math.Ceil((quantile + epsilon) * n))
		if upper > len(a) {
			upper = len(a)
		}
		w, min, max := a[k-1], a[lower-1], a[upper-1]
		if g := s.Query(quantile); g < min || g > max {
			t.Errorf("q=%f: want %v [%f,%f], got %v", quantile, w, min, max, g)
		}
	}
}

func verifyLowPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
	sort.Float64s(a)
	for _, qu := range LowQuantiles {
		n := float64(len(a))
		k := int(qu * n)

		lowerRank := int((1 - RelativeEpsilon) * qu * n)
		upperRank := int(math.Ceil((1 + RelativeEpsilon) * qu * n))
		w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
		if g := s.Query(qu); g < min || g > max {
			t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
		}
	}
}

func verifyHighPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
	sort.Float64s(a)
	for _, qu := range HighQuantiles {
		n := float64(len(a))
		k := int(qu * n)

		lowerRank := int((1 - (1+RelativeEpsilon)*(1-qu)) * n)
		upperRank := int(math.Ceil((1 - (1-RelativeEpsilon)*(1-qu)) * n))
		w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
		if g := s.Query(qu); g < min || g > max {
			t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
		}
	}
}

func populateStream(s *Stream) []float64 {
	a := make([]float64, 0, 1e5+100)
	for i := 0; i < cap(a); i++ {
		v := rand.NormFloat64()
		// Add 5% asymmetric outliers.
		if i%20 == 0 {
			v = v*v + 1
		}
		s.Insert(v)
		a = append(a, v)
	}
	return a
}

func TestTargetedQuery(t *testing.T) {
	rand.Seed(42)
	s := NewTargeted(Targets)
	a := populateStream(s)
	verifyPercsWithAbsoluteEpsilon(t, a, s)
}

func TestLowBiasedQuery(t *testing.T) {
	rand.Seed(42)
	s := NewLowBiased(RelativeEpsilon)
	a := populateStream(s)
	verifyLowPercsWithRelativeEpsilon(t, a, s)
}

func TestHighBiasedQuery(t *testing.T) {
	rand.Seed(42)
	s := NewHighBiased(RelativeEpsilon)
	a := populateStream(s)
	verifyHighPercsWithRelativeEpsilon(t, a, s)
}

// BrokenTestTargetedMerge is broken, see Merge doc comment.
func BrokenTestTargetedMerge(t *testing.T) {
	rand.Seed(42)
	s1 := NewTargeted(Targets)
	s2 := NewTargeted(Targets)
	a := populateStream(s1)
	a = append(a, populateStream(s2)...)
	s1.Merge(s2.Samples())
	verifyPercsWithAbsoluteEpsilon(t, a, s1)
}

// BrokenTestLowBiasedMerge is broken, see Merge doc comment.
func BrokenTestLowBiasedMerge(t *testing.T) {
	rand.Seed(42)
	s1 := NewLowBiased(RelativeEpsilon)
	s2 := NewLowBiased(RelativeEpsilon)
	a := populateStream(s1)
	a = append(a, populateStream(s2)...)
	s1.Merge(s2.Samples())
	verifyLowPercsWithRelativeEpsilon(t, a, s2)
}

// BrokenTestHighBiasedMerge is broken, see Merge doc comment.
func BrokenTestHighBiasedMerge(t *testing.T) {
	rand.Seed(42)
	s1 := NewHighBiased(RelativeEpsilon)
	s2 := NewHighBiased(RelativeEpsilon)
	a := populateStream(s1)
	a = append(a, populateStream(s2)...)
	s1.Merge(s2.Samples())
	verifyHighPercsWithRelativeEpsilon(t, a, s2)
}

func TestUncompressed(t *testing.T) {
	q := NewTargeted(Targets)
	for i := 100; i > 0; i-- {
		q.Insert(float64(i))
	}
	if g := q.Count(); g != 100 {
		t.Errorf("want count 100, got %d", g)
	}
	// Before compression, Query should have 100% accuracy.
	for quantile := range Targets {
		w := quantile * 100
		if g := q.Query(quantile); g != w {
			t.Errorf("want %f, got %f", w, g)
		}
	}
}

func TestUncompressedSamples(t *testing.T) {
	q := NewTargeted(map[float64]float64{0.99: 0.001})
	for i := 1; i <= 100; i++ {
		q.Insert(float64(i))
	}
	if g := q.Samples().Len(); g != 100 {
		t.Errorf("want count 100, got %d", g)
	}
}

func TestUncompressedOne(t *testing.T) {
	q := NewTargeted(map[float64]float64{0.99: 0.01})
	q.Insert(3.14)
	if g := q.Query(0.90); g != 3.14 {
		t.Error("want PI, got", g)
	}
}

func TestDefaults(t *testing.T) {
	if g := NewTargeted(map[float64]float64{0.99: 0.001}).Query(0.99); g != 0 {
		t.Errorf("want 0, got %f", g)
	}
}
2 Godeps/_workspace/src/github.com/bgentry/speakeasy/.gitignore (generated, vendored)
@@ -1,2 +0,0 @@
example/example
example/example.exe
201 Godeps/_workspace/src/github.com/bgentry/speakeasy/LICENSE_WINDOWS (generated, vendored)
@@ -1,201 +0,0 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [2013] [the CloudFoundry Authors]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
30 Godeps/_workspace/src/github.com/bgentry/speakeasy/Readme.md (generated, vendored)
@@ -1,30 +0,0 @@
# Speakeasy

This package provides cross-platform Go (#golang) helpers for taking user input
from the terminal while not echoing the input back (similar to `getpasswd`). The
package uses syscalls to avoid any dependence on cgo, and is therefore
compatible with cross-compiling.

[GoDoc][godoc]

## Unicode

Multi-byte unicode characters work successfully on Mac OS X. On Windows,
however, this may be problematic (as is UTF in general on Windows). Other
platforms have not been tested.

## License

The code herein was not written by me, but was compiled from two separate open
source packages. Unix portions were imported from [gopass][gopass], while
Windows portions were imported from the [CloudFoundry Go CLI][cf-cli]'s
[Windows terminal helpers][cf-ui-windows].

The [license for the windows portion](./LICENSE_WINDOWS) has been copied exactly
from the source (though I attempted to fill in the correct owner in the
boilerplate copyright notice).

[cf-cli]: https://github.com/cloudfoundry/cli "CloudFoundry Go CLI"
[cf-ui-windows]: https://github.com/cloudfoundry/cli/blob/master/src/cf/terminal/ui_windows.go "CloudFoundry Go CLI Windows input helpers"
[godoc]: https://godoc.org/github.com/bgentry/speakeasy "speakeasy on Godoc.org"
[gopass]: https://code.google.com/p/gopass "gopass"
18 Godeps/_workspace/src/github.com/bgentry/speakeasy/example/main.go (generated, vendored)
@@ -1,18 +0,0 @@
package main

import (
	"fmt"
	"os"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/bgentry/speakeasy"
)

func main() {
	password, err := speakeasy.Ask("Please enter a password: ")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Printf("Password result: %q\n", password)
	fmt.Printf("Password len: %d\n", len(password))
}
47 Godeps/_workspace/src/github.com/bgentry/speakeasy/speakeasy.go (generated, vendored)
@@ -1,47 +0,0 @@
package speakeasy

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// Ask the user to enter a password with input hidden. prompt is a string to
// display before the user's input. Returns the provided password, or an error
// if the command failed.
func Ask(prompt string) (password string, err error) {
	return FAsk(os.Stdout, prompt)
}

// Same as the Ask function, except it is possible to specify the file to write
// the prompt to.
func FAsk(file *os.File, prompt string) (password string, err error) {
	if prompt != "" {
		fmt.Fprint(file, prompt) // Display the prompt.
	}
	password, err = getPassword()

	// Carriage return after the user input.
	fmt.Fprintln(file, "")
	return
}

func readline() (value string, err error) {
	var valb []byte
	var n int
	b := make([]byte, 1)
	for {
		// read one byte at a time so we don't accidentally read extra bytes
		n, err = os.Stdin.Read(b)
		if err != nil && err != io.EOF {
			return "", err
		}
		if n == 0 || b[0] == '\n' {
			break
		}
		valb = append(valb, b[0])
	}

	return strings.TrimSuffix(string(valb), "\r"), nil
}
93 Godeps/_workspace/src/github.com/bgentry/speakeasy/speakeasy_unix.go (generated, vendored)
@@ -1,93 +0,0 @@
// based on https://code.google.com/p/gopass
// Author: johnsiilver@gmail.com (John Doak)
//
// Original code is based on code by RogerV in the golang-nuts thread:
// https://groups.google.com/group/golang-nuts/browse_thread/thread/40cc41e9d9fc9247

// +build darwin freebsd linux netbsd openbsd

package speakeasy

import (
	"fmt"
	"os"
	"os/signal"
	"strings"
	"syscall"
)

const sttyArg0 = "/bin/stty"

var (
	sttyArgvEOff []string           = []string{"stty", "-echo"}
	sttyArgvEOn  []string           = []string{"stty", "echo"}
	ws           syscall.WaitStatus = 0
)

// getPassword gets input hidden from the terminal from a user. This is
// accomplished by turning off terminal echo, reading input from the user and
// finally turning on terminal echo.
func getPassword() (password string, err error) {
	sig := make(chan os.Signal, 10)
	brk := make(chan bool)

	// File descriptors for stdin, stdout, and stderr.
	fd := []uintptr{os.Stdin.Fd(), os.Stdout.Fd(), os.Stderr.Fd()}

	// Setup notifications of termination signals to channel sig, create a process to
	// watch for these signals so we can turn back on echo if need be.
	signal.Notify(sig, syscall.SIGHUP, syscall.SIGINT, syscall.SIGKILL, syscall.SIGQUIT,
		syscall.SIGTERM)
	go catchSignal(fd, sig, brk)

	// Turn off the terminal echo.
	pid, err := echoOff(fd)
	if err != nil {
		return "", err
	}

	// Turn on the terminal echo and stop listening for signals.
	defer close(brk)
	defer echoOn(fd)

	syscall.Wait4(pid, &ws, 0, nil)

	line, err := readline()
	if err == nil {
		password = strings.TrimSpace(line)
	} else {
		err = fmt.Errorf("failed during password entry: %s", err)
	}

	return password, err
}

// echoOff turns off the terminal echo.
func echoOff(fd []uintptr) (int, error) {
	pid, err := syscall.ForkExec(sttyArg0, sttyArgvEOff, &syscall.ProcAttr{Dir: "", Files: fd})
	if err != nil {
		return 0, fmt.Errorf("failed turning off console echo for password entry:\n\t%s", err)
	}
	return pid, nil
}

// echoOn turns back on the terminal echo.
func echoOn(fd []uintptr) {
	// Turn on the terminal echo.
	pid, e := syscall.ForkExec(sttyArg0, sttyArgvEOn, &syscall.ProcAttr{Dir: "", Files: fd})
	if e == nil {
		syscall.Wait4(pid, &ws, 0, nil)
	}
}

// catchSignal tries to catch SIGKILL, SIGQUIT and SIGINT so that we can turn
// terminal echo back on before the program ends. Otherwise the user is left
// with echo off on their terminal.
func catchSignal(fd []uintptr, sig chan os.Signal, brk chan bool) {
	select {
	case <-sig:
		echoOn(fd)
		os.Exit(-1)
	case <-brk:
	}
}
43 Godeps/_workspace/src/github.com/bgentry/speakeasy/speakeasy_windows.go (generated, vendored)
@@ -1,43 +0,0 @@
// +build windows

package speakeasy

import (
	"os"
	"syscall"
)

// SetConsoleMode function can be used to change value of ENABLE_ECHO_INPUT:
// http://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx
const ENABLE_ECHO_INPUT = 0x0004

func getPassword() (password string, err error) {
	hStdin := syscall.Handle(os.Stdin.Fd())
	var oldMode uint32

	err = syscall.GetConsoleMode(hStdin, &oldMode)
	if err != nil {
		return
	}

	var newMode uint32 = (oldMode &^ ENABLE_ECHO_INPUT)

	err = setConsoleMode(hStdin, newMode)
	defer setConsoleMode(hStdin, oldMode)
	if err != nil {
		return
	}

	return readline()
}

func setConsoleMode(console syscall.Handle, mode uint32) (err error) {
	dll := syscall.MustLoadDLL("kernel32")
	proc := dll.MustFindProc("SetConsoleMode")
	r, _, err := proc.Call(uintptr(console), uintptr(mode))

	if r == 0 {
		return err
	}
	return nil
}
3 Godeps/_workspace/src/github.com/boltdb/bolt/.gitignore (generated, vendored)
@@ -1,3 +0,0 @@
*.prof
*.test
/bin/
20 Godeps/_workspace/src/github.com/boltdb/bolt/LICENSE (generated, vendored)
@@ -1,20 +0,0 @@
The MIT License (MIT)

Copyright (c) 2013 Ben Johnson

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
54 Godeps/_workspace/src/github.com/boltdb/bolt/Makefile (generated, vendored)
@@ -1,54 +0,0 @@
TEST=.
BENCH=.
COVERPROFILE=/tmp/c.out
BRANCH=`git rev-parse --abbrev-ref HEAD`
COMMIT=`git rev-parse --short HEAD`
GOLDFLAGS="-X main.branch $(BRANCH) -X main.commit $(COMMIT)"

default: build

bench:
	go test -v -test.run=NOTHINCONTAINSTHIS -test.bench=$(BENCH)

# http://cloc.sourceforge.net/
cloc:
	@cloc --not-match-f='Makefile|_test.go' .

cover: fmt
	go test -coverprofile=$(COVERPROFILE) -test.run=$(TEST) $(COVERFLAG) .
	go tool cover -html=$(COVERPROFILE)
	rm $(COVERPROFILE)

cpuprofile: fmt
	@go test -c
	@./bolt.test -test.v -test.run=$(TEST) -test.cpuprofile cpu.prof

# go get github.com/kisielk/errcheck
errcheck:
	@echo "=== errcheck ==="
	@errcheck github.com/boltdb/bolt

fmt:
	@go fmt ./...

get:
	@go get -d ./...

build: get
	@mkdir -p bin
	@go build -ldflags=$(GOLDFLAGS) -a -o bin/bolt ./cmd/bolt

test: fmt
	@go get github.com/stretchr/testify/assert
	@echo "=== TESTS ==="
	@go test -v -cover -test.run=$(TEST)
	@echo ""
	@echo ""
	@echo "=== CLI ==="
	@go test -v -test.run=$(TEST) ./cmd/bolt
	@echo ""
	@echo ""
	@echo "=== RACE DETECTOR ==="
	@go test -v -race -test.run="TestSimulate_(100op|1000op)"

.PHONY: bench cloc cover cpuprofile fmt memprofile test
591 Godeps/_workspace/src/github.com/boltdb/bolt/README.md (generated, vendored)
@@ -1,591 +0,0 @@
Bolt [Build Status](https://drone.io/github.com/boltdb/bolt/latest) [Coverage](https://coveralls.io/r/boltdb/bolt?branch=master) [GoDoc](https://godoc.org/github.com/boltdb/bolt)
====

Bolt is a pure Go key/value store inspired by [Howard Chu's][hyc_symas]
[LMDB project][lmdb]. The goal of the project is to provide a simple,
fast, and reliable database for projects that don't require a full database
server such as Postgres or MySQL.

Since Bolt is meant to be used as such a low-level piece of functionality,
simplicity is key. The API will be small and only focus on getting values
and setting values. That's it.

[hyc_symas]: https://twitter.com/hyc_symas
[lmdb]: http://symas.com/mdb/


## Project Status

Bolt is stable and the API is fixed. Full unit test coverage and randomized
black box testing are used to ensure database consistency and thread safety.
Bolt is currently in high-load production environments serving databases as
large as 1TB. Many companies such as Shopify and Heroku use Bolt-backed
services every day.


## Getting Started

### Installing

To start using Bolt, install Go and run `go get`:

```sh
$ go get github.com/boltdb/bolt/...
```

This will retrieve the library and install the `bolt` command line utility into
your `$GOBIN` path.


### Opening a database

The top-level object in Bolt is a `DB`. It is represented as a single file on
your disk and represents a consistent snapshot of your data.

To open your database, simply use the `bolt.Open()` function:

```go
package main

import (
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	// Open the my.db data file in your current directory.
	// It will be created if it doesn't exist.
	db, err := bolt.Open("my.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	...
}
```

Please note that Bolt obtains a file lock on the data file so multiple processes
cannot open the same database at the same time. Opening an already open Bolt
database will cause it to hang until the other process closes it. To prevent
an indefinite wait you can pass a timeout option to the `Open()` function:

```go
db, err := bolt.Open("my.db", 0600, &bolt.Options{Timeout: 1 * time.Second})
```


### Transactions

Bolt allows only one read-write transaction at a time but allows as many
read-only transactions as you want at a time. Each transaction has a consistent
view of the data as it existed when the transaction started.

Individual transactions and all objects created from them (e.g. buckets, keys)
are not thread safe. To work with data in multiple goroutines you must start
a transaction for each one or use locking to ensure only one goroutine accesses
a transaction at a time. Creating a transaction from the `DB` is thread safe.


#### Read-write transactions

To start a read-write transaction, you can use the `DB.Update()` function:

```go
err := db.Update(func(tx *bolt.Tx) error {
	...
	return nil
})
```

Inside the closure, you have a consistent view of the database. You commit the
transaction by returning `nil` at the end. You can also rollback the transaction
at any point by returning an error. All database operations are allowed inside
a read-write transaction.

Always check the return error as it will report any disk failures that can cause
your transaction to not complete. If you return an error within your closure
it will be passed through.


#### Read-only transactions

To start a read-only transaction, you can use the `DB.View()` function:

```go
err := db.View(func(tx *bolt.Tx) error {
	...
	return nil
})
```

You also get a consistent view of the database within this closure, however,
no mutating operations are allowed within a read-only transaction. You can only
retrieve buckets, retrieve values, and copy the database within a read-only
transaction.


#### Batch read-write transactions

Each `DB.Update()` waits for disk to commit the writes. This overhead
can be minimized by combining multiple updates with the `DB.Batch()`
function:

```go
err := db.Batch(func(tx *bolt.Tx) error {
	...
	return nil
})
```

Concurrent Batch calls are opportunistically combined into larger
transactions. Batch is only useful when there are multiple goroutines
calling it.

The trade-off is that `Batch` can call the given
function multiple times, if parts of the transaction fail. The
function must be idempotent and side effects must take effect only
after a successful return from `DB.Batch()`.

For example: don't display messages from inside the function, instead
set variables in the enclosing scope:

```go
var id uint64
err := db.Batch(func(tx *bolt.Tx) error {
	// Find last key in bucket, decode as bigendian uint64, increment
	// by one, encode back to []byte, and add new key.
	...
	id = newValue
	return nil
})
if err != nil {
	return ...
}
fmt.Printf("Allocated ID %d\n", id)
```


#### Managing transactions manually

The `DB.View()` and `DB.Update()` functions are wrappers around the `DB.Begin()`
function. These helper functions will start the transaction, execute a function,
and then safely close your transaction if an error is returned. This is the
recommended way to use Bolt transactions.

However, sometimes you may want to manually start and end your transactions.
You can use the `Tx.Begin()` function directly but _please_ be sure to close the
transaction.

```go
// Start a writable transaction.
tx, err := db.Begin(true)
if err != nil {
	return err
}
defer tx.Rollback()

// Use the transaction...
_, err = tx.CreateBucket([]byte("MyBucket"))
if err != nil {
	return err
}

// Commit the transaction and check for error.
if err := tx.Commit(); err != nil {
	return err
}
```

The first argument to `DB.Begin()` is a boolean stating if the transaction
should be writable.


### Using buckets

Buckets are collections of key/value pairs within the database. All keys in a
bucket must be unique. You can create a bucket using the `DB.CreateBucket()`
function:

```go
db.Update(func(tx *bolt.Tx) error {
	_, err := tx.CreateBucket([]byte("MyBucket"))
	if err != nil {
		return fmt.Errorf("create bucket: %s", err)
	}
	return nil
})
```

You can also create a bucket only if it doesn't exist by using the
`Tx.CreateBucketIfNotExists()` function, as in the sketch below. It's a common
pattern to call this function for all your top-level buckets after you open
your database so you can guarantee that they exist for future transactions.

To delete a bucket, simply call the `Tx.DeleteBucket()` function.
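As a rough sketch of that pattern (the `setupBuckets` helper and the `"MyBucket"` name are illustrative, not part of the Bolt API):

```go
// setupBuckets is a hypothetical helper, run once after bolt.Open(), that
// guarantees a top-level bucket exists before any other transaction runs.
func setupBuckets(db *bolt.DB) error {
	return db.Update(func(tx *bolt.Tx) error {
		_, err := tx.CreateBucketIfNotExists([]byte("MyBucket"))
		return err
	})
}
```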
### Using key/value pairs

To save a key/value pair to a bucket, use the `Bucket.Put()` function:

```go
db.Update(func(tx *bolt.Tx) error {
	b := tx.Bucket([]byte("MyBucket"))
	err := b.Put([]byte("answer"), []byte("42"))
	return err
})
```

This will set the value of the `"answer"` key to `"42"` in the `MyBucket`
bucket. To retrieve this value, we can use the `Bucket.Get()` function:

```go
db.View(func(tx *bolt.Tx) error {
	b := tx.Bucket([]byte("MyBucket"))
	v := b.Get([]byte("answer"))
	fmt.Printf("The answer is: %s\n", v)
	return nil
})
```

The `Get()` function does not return an error because its operation is
guaranteed to work (unless there is some kind of system failure). If the key
exists then it will return its byte slice value. If it doesn't exist then it
will return `nil`. It's important to note that you can have a zero-length value
set to a key, which is different than the key not existing.

Use the `Bucket.Delete()` function to delete a key from the bucket.

Please note that values returned from `Get()` are only valid while the
transaction is open. If you need to use a value outside of the transaction
then you must use `copy()` to copy it to another byte slice.
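A minimal sketch of that copy, reusing the `"MyBucket"`/`"answer"` pair from the examples above:

```go
var answer []byte
db.View(func(tx *bolt.Tx) error {
	v := tx.Bucket([]byte("MyBucket")).Get([]byte("answer"))
	// Copy v before the transaction closes; the bytes behind v are
	// only valid until this closure returns.
	answer = make([]byte, len(v))
	copy(answer, v)
	return nil
})
// answer is now safe to use outside the transaction.
```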
### Iterating over keys

Bolt stores its keys in byte-sorted order within a bucket. This makes sequential
iteration over these keys extremely fast. To iterate over keys we'll use a
`Cursor`:

```go
db.View(func(tx *bolt.Tx) error {
	b := tx.Bucket([]byte("MyBucket"))
	c := b.Cursor()

	for k, v := c.First(); k != nil; k, v = c.Next() {
		fmt.Printf("key=%s, value=%s\n", k, v)
	}

	return nil
})
```

The cursor allows you to move to a specific point in the list of keys and move
forward or backward through the keys one at a time.

The following functions are available on the cursor:

```
First()  Move to the first key.
Last()   Move to the last key.
Seek()   Move to a specific key.
Next()   Move to the next key.
Prev()   Move to the previous key.
```

When you have iterated to the end of the cursor then `Next()` will return `nil`.
You must seek to a position using `First()`, `Last()`, or `Seek()` before
calling `Next()` or `Prev()`. If you do not seek to a position then these
functions will return `nil`.
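For example, a sketch of backward iteration that seeds the cursor with `Last()` before calling `Prev()`:

```go
db.View(func(tx *bolt.Tx) error {
	c := tx.Bucket([]byte("MyBucket")).Cursor()

	// Walk the bucket from the last key back to the first.
	for k, v := c.Last(); k != nil; k, v = c.Prev() {
		fmt.Printf("key=%s, value=%s\n", k, v)
	}

	return nil
})
```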
#### Prefix scans

To iterate over a key prefix, you can combine `Seek()` and `bytes.HasPrefix()`:

```go
db.View(func(tx *bolt.Tx) error {
	c := tx.Bucket([]byte("MyBucket")).Cursor()

	prefix := []byte("1234")
	for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
		fmt.Printf("key=%s, value=%s\n", k, v)
	}

	return nil
})
```

#### Range scans

Another common use case is scanning over a range such as a time range. If you
use a sortable time encoding such as RFC3339 then you can query a specific
date range like this:

```go
db.View(func(tx *bolt.Tx) error {
	// Assume our events bucket has RFC3339 encoded time keys.
	c := tx.Bucket([]byte("Events")).Cursor()

	// Our time range spans the 90's decade.
	min := []byte("1990-01-01T00:00:00Z")
	max := []byte("2000-01-01T00:00:00Z")

	// Iterate over the 90's.
	for k, v := c.Seek(min); k != nil && bytes.Compare(k, max) <= 0; k, v = c.Next() {
		fmt.Printf("%s: %s\n", k, v)
	}

	return nil
})
```


#### ForEach()

You can also use the function `ForEach()` if you know you'll be iterating over
all the keys in a bucket:

```go
db.View(func(tx *bolt.Tx) error {
	b := tx.Bucket([]byte("MyBucket"))
	b.ForEach(func(k, v []byte) error {
		fmt.Printf("key=%s, value=%s\n", k, v)
		return nil
	})
	return nil
})
```


### Nested buckets

You can also store a bucket in a key to create nested buckets. The API is the
same as the bucket management API on the `DB` object:

```go
func (*Bucket) CreateBucket(key []byte) (*Bucket, error)
func (*Bucket) CreateBucketIfNotExists(key []byte) (*Bucket, error)
func (*Bucket) DeleteBucket(key []byte) error
```
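A short sketch of how those nested calls compose (the `"Accounts"` and `"user-1"` bucket names are illustrative only):

```go
db.Update(func(tx *bolt.Tx) error {
	// Create (or reuse) a parent bucket, then a child bucket inside it.
	parent, err := tx.CreateBucketIfNotExists([]byte("Accounts"))
	if err != nil {
		return err
	}
	child, err := parent.CreateBucketIfNotExists([]byte("user-1"))
	if err != nil {
		return err
	}
	return child.Put([]byte("email"), []byte("user@example.com"))
})
```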
### Database backups

Bolt is a single file so it's easy to backup. You can use the `Tx.WriteTo()`
function to write a consistent view of the database to a writer. If you call
this from a read-only transaction, it will perform a hot backup and not block
your other database reads and writes. It will also use `O_DIRECT` when available
to prevent page cache thrashing.

One common use case is to backup over HTTP so you can use tools like `cURL` to
do database backups:

```go
func BackupHandleFunc(w http.ResponseWriter, req *http.Request) {
	err := db.View(func(tx *bolt.Tx) error {
		w.Header().Set("Content-Type", "application/octet-stream")
		w.Header().Set("Content-Disposition", `attachment; filename="my.db"`)
		w.Header().Set("Content-Length", strconv.Itoa(int(tx.Size())))
		_, err := tx.WriteTo(w)
		return err
	})
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}
```

Then you can backup using this command:

```sh
$ curl http://localhost/backup > my.db
```

Or you can open your browser to `http://localhost/backup` and it will download
automatically.

If you want to backup to another file you can use the `Tx.CopyFile()` helper
function.
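A one-transaction sketch of that helper (the `backup.db` path is just an example):

```go
err := db.View(func(tx *bolt.Tx) error {
	// Write a consistent snapshot of the database to backup.db.
	return tx.CopyFile("backup.db", 0600)
})
```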
### Statistics
|
||||
|
||||
The database keeps a running count of many of the internal operations it
|
||||
performs so you can better understand what's going on. By grabbing a snapshot
|
||||
of these stats at two points in time we can see what operations were performed
|
||||
in that time range.
|
||||
|
||||
For example, we could start a goroutine to log stats every 10 seconds:
|
||||
|
||||
```go
|
||||
go func() {
|
||||
// Grab the initial stats.
|
||||
prev := db.Stats()
|
||||
|
||||
for {
|
||||
// Wait for 10s.
|
||||
time.Sleep(10 * time.Second)
|
||||
|
||||
// Grab the current stats and diff them.
|
||||
stats := db.Stats()
|
||||
diff := stats.Sub(&prev)
|
||||
|
||||
// Encode stats to JSON and print to STDERR.
|
||||
json.NewEncoder(os.Stderr).Encode(diff)
|
||||
|
||||
// Save stats for the next loop.
|
||||
prev = stats
|
||||
}
|
||||
}()
|
||||
```
|
||||
|
||||
It's also useful to pipe these stats to a service such as statsd for monitoring
|
||||
or to provide an HTTP endpoint that will perform a fixed-length sample.
|
||||
|
||||
|
||||
## Resources
|
||||
|
||||
For more information on getting started with Bolt, check out the following articles:
|
||||
|
||||
* [Intro to BoltDB: Painless Performant Persistence](http://npf.io/2014/07/intro-to-boltdb-painless-performant-persistence/) by [Nate Finch](https://github.com/natefinch).
|
||||
* [Bolt -- an embedded key/value database for Go](https://www.progville.com/go/bolt-embedded-db-golang/) by Progville
|
||||
|
||||
|
||||
## Comparison with other databases
|
||||
|
||||
### Postgres, MySQL, & other relational databases
|
||||
|
||||
Relational databases structure data into rows and are only accessible through
|
||||
the use of SQL. This approach provides flexibility in how you store and query
|
||||
your data but also incurs overhead in parsing and planning SQL statements. Bolt
|
||||
accesses all data by a byte slice key. This makes Bolt fast to read and write
|
||||
data by key but provides no built-in support for joining values together.
|
||||
|
||||
Most relational databases (with the exception of SQLite) are standalone servers
|
||||
that run separately from your application. This gives your systems
|
||||
flexibility to connect multiple application servers to a single database
|
||||
server but also adds overhead in serializing and transporting data over the
|
||||
network. Bolt runs as a library included in your application so all data access
|
||||
has to go through your application's process. This brings data closer to your
|
||||
application but limits multi-process access to the data.
|
||||
|
||||
|
||||
### LevelDB, RocksDB
|
||||
|
||||
LevelDB and its derivatives (RocksDB, HyperLevelDB) are similar to Bolt in that
|
||||
they are libraries bundled into the application, however, their underlying
|
||||
structure is a log-structured merge-tree (LSM tree). An LSM tree optimizes
|
||||
random writes by using a write ahead log and multi-tiered, sorted files called
|
||||
SSTables. Bolt uses a B+tree internally and only a single file. Both approaches
|
||||
have trade offs.
|
||||
|
||||
If you require a high random write throughput (>10,000 w/sec) or you need to use
|
||||
spinning disks then LevelDB could be a good choice. If your application is
|
||||
read-heavy or does a lot of range scans then Bolt could be a good choice.
|
||||
|
||||
One other important consideration is that LevelDB does not have transactions.
|
||||
It supports batch writing of key/values pairs and it supports read snapshots
|
||||
but it will not give you the ability to do a compare-and-swap operation safely.
|
||||
Bolt supports fully serializable ACID transactions.
|
||||
|
||||
|
||||
### LMDB
|
||||
|
||||
Bolt was originally a port of LMDB so it is architecturally similar. Both use
|
||||
a B+tree, have ACID semantics with fully serializable transactions, and support
|
||||
lock-free MVCC using a single writer and multiple readers.
|
||||
|
||||
The two projects have somewhat diverged. LMDB heavily focuses on raw performance
|
||||
while Bolt has focused on simplicity and ease of use. For example, LMDB allows
|
||||
several unsafe actions such as direct writes for the sake of performance. Bolt
|
||||
opts to disallow actions which can leave the database in a corrupted state. The
|
||||
only exception to this in Bolt is `DB.NoSync`.
|
||||
|
||||
There are also a few differences in API. LMDB requires a maximum mmap size when
|
||||
opening an `mdb_env` whereas Bolt will handle incremental mmap resizing
|
||||
automatically. LMDB overloads the getter and setter functions with multiple
|
||||
flags whereas Bolt splits these specialized cases into their own functions.
|
||||
|
||||
|
||||

## Caveats & Limitations

It's important to pick the right tool for the job and Bolt is no exception.
Here are a few things to note when evaluating and using Bolt:

* Bolt is good for read-intensive workloads. Sequential write performance is
  also fast but random writes can be slow. You can add a write-ahead log or
  [transaction coalescer](https://github.com/boltdb/coalescer) in front of Bolt
  to mitigate this issue.

* Bolt uses a B+tree internally so there can be a lot of random page access.
  SSDs provide a significant performance boost over spinning disks.

* Try to avoid long-running read transactions. Bolt uses copy-on-write so
  old pages cannot be reclaimed while an old transaction is using them.

* Byte slices returned from Bolt are only valid during a transaction. Once the
  transaction has been committed or rolled back then the memory they point to
  can be reused by a new page or can be unmapped from virtual memory and you'll
  see an `unexpected fault address` panic when accessing it. (A copying sketch
  follows this list.)

* Be careful when using `Bucket.FillPercent`. Setting a high fill percent for
  buckets that have random inserts will cause your database to have very poor
  page utilization.

* Use larger buckets in general. Smaller buckets cause poor page utilization
  once they become larger than the page size (typically 4KB).

* Bulk loading a lot of random writes into a new bucket can be slow as the
  page will not split until the transaction is committed. Randomly inserting
  more than 100,000 key/value pairs into a single new bucket in a single
  transaction is not advised. (See the chunked-load sketch at the end of this
  section.)

* Bolt uses a memory-mapped file so the underlying operating system handles the
  caching of the data. Typically, the OS will cache as much of the file as it
  can in memory and will release memory as needed to other processes. This means
  that Bolt can show very high memory usage when working with large databases.
  However, this is expected and the OS will release memory as needed. Bolt can
  handle databases much larger than the available physical RAM.

* Because of the way pages are laid out on disk, Bolt cannot truncate data files
  and return free pages back to the disk. Instead, Bolt maintains a free list
  of unused pages within its data file. These free pages can be reused by later
  transactions. This works well for many use cases as databases generally tend
  to grow. However, it's important to note that deleting large chunks of data
  will not allow you to reclaim that space on disk.
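
As promised above, a minimal sketch of copying a value out of a transaction so it survives commit or rollback (the `get` helper is invented for illustration):

```go
package main

import "github.com/boltdb/bolt"

// get returns a copy of the value so it stays valid after the
// transaction closes; the slice from Bucket.Get would not.
func get(db *bolt.DB, bucket, key []byte) ([]byte, error) {
	var out []byte
	err := db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket(bucket)
		if b == nil {
			return nil // treat a missing bucket as a missing key
		}
		if v := b.Get(key); v != nil {
			out = append([]byte(nil), v...) // copy before the page can be reused or unmapped
		}
		return nil
	})
	return out, err
}
```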

For more information on page allocation, [see this comment][page-allocation].

[page-allocation]: https://github.com/boltdb/bolt/issues/308#issuecomment-74811638
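
And the chunked-load sketch referenced in the bulk-loading caveat. The `loadKeys` helper and `bulk` bucket are invented, and the chunk size is something you would tune:

```go
package main

import "github.com/boltdb/bolt"

// loadKeys inserts pairs in chunks, one transaction per chunk, so pages
// can split and flush as the load progresses instead of accumulating in
// one huge transaction.
func loadKeys(db *bolt.DB, pairs [][2][]byte, chunk int) error {
	for start := 0; start < len(pairs); start += chunk {
		end := start + chunk
		if end > len(pairs) {
			end = len(pairs)
		}
		if err := db.Update(func(tx *bolt.Tx) error {
			b, err := tx.CreateBucketIfNotExists([]byte("bulk")) // bucket name illustrative
			if err != nil {
				return err
			}
			for _, kv := range pairs[start:end] {
				if err := b.Put(kv[0], kv[1]); err != nil {
					return err
				}
			}
			return nil
		}); err != nil {
			return err
		}
	}
	return nil
}
```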

## Other Projects Using Bolt

Below is a list of public, open source projects that use Bolt:

* [Operation Go: A Routine Mission](http://gocode.io) - An online programming game for Golang using Bolt for user accounts and a leaderboard.
* [Bazil](https://github.com/bazillion/bazil) - A file system that lets your data reside where it is most convenient for it to reside.
* [DVID](https://github.com/janelia-flyem/dvid) - Added Bolt as an optional storage engine and is testing it against Basho-tuned LevelDB.
* [Skybox Analytics](https://github.com/skybox/skybox) - A standalone funnel analysis tool for web analytics.
* [Scuttlebutt](https://github.com/benbjohnson/scuttlebutt) - Uses Bolt to store and process all Twitter mentions of GitHub projects.
* [Wiki](https://github.com/peterhellberg/wiki) - A tiny wiki using Goji, BoltDB and Blackfriday.
* [ChainStore](https://github.com/nulayer/chainstore) - Simple key-value interface to a variety of storage engines organized as a chain of operations.
* [MetricBase](https://github.com/msiebuhr/MetricBase) - Single-binary version of Graphite.
* [Gitchain](https://github.com/gitchain/gitchain) - Decentralized, peer-to-peer Git repositories aka "Git meets Bitcoin".
* [event-shuttle](https://github.com/sclasen/event-shuttle) - A Unix system service to collect and reliably deliver messages to Kafka.
* [ipxed](https://github.com/kelseyhightower/ipxed) - Web interface and API for ipxed.
* [BoltStore](https://github.com/yosssi/boltstore) - Session store using Bolt.
* [photosite/session](http://godoc.org/bitbucket.org/kardianos/photosite/session) - Sessions for a photo viewing site.
* [LedisDB](https://github.com/siddontang/ledisdb) - A high performance NoSQL database, using Bolt as optional storage.
* [ipLocator](https://github.com/AndreasBriese/ipLocator) - A fast IP geolocation server using Bolt with bloom filters.
* [cayley](https://github.com/google/cayley) - Cayley is an open-source graph database using Bolt as an optional backend.
* [bleve](http://www.blevesearch.com/) - A pure Go search engine similar to ElasticSearch that uses Bolt as the default storage backend.
* [tentacool](https://github.com/optiflows/tentacool) - REST API server to manage system stuff (IP, DNS, gateway...) on a Linux server.
* [SkyDB](https://github.com/skydb/sky) - Behavioral analytics database.
* [Seaweed File System](https://github.com/chrislusf/weed-fs) - Highly scalable distributed key~file system with O(1) disk read.
* [InfluxDB](http://influxdb.com) - Scalable datastore for metrics, events, and real-time analytics.

If you are using Bolt in a project please send a pull request to add it to the list.
135 Godeps/_workspace/src/github.com/boltdb/bolt/batch.go (generated, vendored)
@@ -1,135 +0,0 @@
package bolt

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Batch calls fn as part of a batch. It behaves similarly to Update,
// except:
//
// 1. concurrent Batch calls can be combined into a single Bolt
// transaction.
//
// 2. the function passed to Batch may be called multiple times,
// regardless of whether it returns error or not.
//
// This means that Batch function side effects must be idempotent and
// take permanent effect only after a successful return is seen in the
// caller.
//
// Batch is only useful when there are multiple goroutines calling it.
func (db *DB) Batch(fn func(*Tx) error) error {
	errCh := make(chan error, 1)

	db.batchMu.Lock()
	if (db.batch == nil) || (db.batch != nil && len(db.batch.calls) >= db.MaxBatchSize) {
		// There is no existing batch, or the existing batch is full; start a new one.
		db.batch = &batch{
			db: db,
		}
		db.batch.timer = time.AfterFunc(db.MaxBatchDelay, db.batch.trigger)
	}
	db.batch.calls = append(db.batch.calls, call{fn: fn, err: errCh})
	if len(db.batch.calls) >= db.MaxBatchSize {
		// wake up batch, it's ready to run
		go db.batch.trigger()
	}
	db.batchMu.Unlock()

	err := <-errCh
	if err == trySolo {
		err = db.Update(fn)
	}
	return err
}

type call struct {
	fn  func(*Tx) error
	err chan<- error
}

type batch struct {
	db    *DB
	timer *time.Timer
	start sync.Once
	calls []call
}

// trigger runs the batch if it hasn't already been run.
func (b *batch) trigger() {
	b.start.Do(b.run)
}

// run performs the transactions in the batch and communicates results
// back to DB.Batch.
func (b *batch) run() {
	b.db.batchMu.Lock()
	b.timer.Stop()
	// Make sure no new work is added to this batch, but don't break
	// other batches.
	if b.db.batch == b {
		b.db.batch = nil
	}
	b.db.batchMu.Unlock()

retry:
	for len(b.calls) > 0 {
		var failIdx = -1
		err := b.db.Update(func(tx *Tx) error {
			for i, c := range b.calls {
				if err := safelyCall(c.fn, tx); err != nil {
					failIdx = i
					return err
				}
			}
			return nil
		})

		if failIdx >= 0 {
			// take the failing transaction out of the batch. it's
			// safe to shorten b.calls here because db.batch no longer
			// points to us, and we hold the mutex anyway.
			c := b.calls[failIdx]
			b.calls[failIdx], b.calls = b.calls[len(b.calls)-1], b.calls[:len(b.calls)-1]
			// tell the submitter to re-run it solo, continue with the rest of the batch
			c.err <- trySolo
			continue retry
		}

		// pass success, or bolt internal errors, to all callers
		for _, c := range b.calls {
			if c.err != nil {
				c.err <- err
			}
		}
		break retry
	}
}

// trySolo is a special sentinel error value used for signaling that a
// transaction function should be re-run. It should never be seen by
// callers.
var trySolo = errors.New("batch function returned an error and should be re-run solo")

type panicked struct {
	reason interface{}
}

func (p panicked) Error() string {
	if err, ok := p.reason.(error); ok {
		return err.Error()
	}
	return fmt.Sprintf("panic: %v", p.reason)
}

func safelyCall(fn func(*Tx) error, tx *Tx) (err error) {
	defer func() {
		if p := recover(); p != nil {
			err = panicked{p}
		}
	}()
	return fn(tx)
}
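The `Batch` doc comment above stresses that the supplied function may run more than once. As a usage sketch, separate from the vendored source (the `recordEvent` helper and `events` bucket are illustrative), an idempotent callback is safe under that contract:

```go
package main

import "github.com/coreos/etcd/Godeps/_workspace/src/github.com/boltdb/bolt"

// recordEvent writes one key via Batch. Concurrent callers may be
// coalesced into a single transaction; the callback must be idempotent
// because it can run again if another function in the same batch fails
// and the batch is retried.
func recordEvent(db *bolt.DB, key, value []byte) error {
	return db.Batch(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("events"))
		if err != nil {
			return err
		}
		return b.Put(key, value) // a plain put is naturally idempotent
	})
}
```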
170 Godeps/_workspace/src/github.com/boltdb/bolt/batch_benchmark_test.go (generated, vendored)
@@ -1,170 +0,0 @@
package bolt_test

import (
	"bytes"
	"encoding/binary"
	"errors"
	"hash/fnv"
	"sync"
	"testing"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/boltdb/bolt"
)

func validateBatchBench(b *testing.B, db *TestDB) {
	var rollback = errors.New("sentinel error to cause rollback")
	validate := func(tx *bolt.Tx) error {
		bucket := tx.Bucket([]byte("bench"))
		h := fnv.New32a()
		buf := make([]byte, 4)
		for id := uint32(0); id < 1000; id++ {
			binary.LittleEndian.PutUint32(buf, id)
			h.Reset()
			h.Write(buf[:])
			k := h.Sum(nil)
			v := bucket.Get(k)
			if v == nil {
				b.Errorf("not found id=%d key=%x", id, k)
				continue
			}
			if g, e := v, []byte("filler"); !bytes.Equal(g, e) {
				b.Errorf("bad value for id=%d key=%x: %s != %q", id, k, g, e)
			}
			if err := bucket.Delete(k); err != nil {
				return err
			}
		}
		// should be empty now
		c := bucket.Cursor()
		for k, v := c.First(); k != nil; k, v = c.Next() {
			b.Errorf("unexpected key: %x = %q", k, v)
		}
		return rollback
	}
	if err := db.Update(validate); err != nil && err != rollback {
		b.Error(err)
	}
}

func BenchmarkDBBatchAutomatic(b *testing.B) {
	db := NewTestDB()
	defer db.Close()
	db.MustCreateBucket([]byte("bench"))

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		start := make(chan struct{})
		var wg sync.WaitGroup

		for round := 0; round < 1000; round++ {
			wg.Add(1)

			go func(id uint32) {
				defer wg.Done()
				<-start

				h := fnv.New32a()
				buf := make([]byte, 4)
				binary.LittleEndian.PutUint32(buf, id)
				h.Write(buf[:])
				k := h.Sum(nil)
				insert := func(tx *bolt.Tx) error {
					b := tx.Bucket([]byte("bench"))
					return b.Put(k, []byte("filler"))
				}
				if err := db.Batch(insert); err != nil {
					b.Error(err)
					return
				}
			}(uint32(round))
		}
		close(start)
		wg.Wait()
	}

	b.StopTimer()
	validateBatchBench(b, db)
}

func BenchmarkDBBatchSingle(b *testing.B) {
	db := NewTestDB()
	defer db.Close()
	db.MustCreateBucket([]byte("bench"))

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		start := make(chan struct{})
		var wg sync.WaitGroup

		for round := 0; round < 1000; round++ {
			wg.Add(1)
			go func(id uint32) {
				defer wg.Done()
				<-start

				h := fnv.New32a()
				buf := make([]byte, 4)
				binary.LittleEndian.PutUint32(buf, id)
				h.Write(buf[:])
				k := h.Sum(nil)
				insert := func(tx *bolt.Tx) error {
					b := tx.Bucket([]byte("bench"))
					return b.Put(k, []byte("filler"))
				}
				if err := db.Update(insert); err != nil {
					b.Error(err)
					return
				}
			}(uint32(round))
		}
		close(start)
		wg.Wait()
	}

	b.StopTimer()
	validateBatchBench(b, db)
}

func BenchmarkDBBatchManual10x100(b *testing.B) {
	db := NewTestDB()
	defer db.Close()
	db.MustCreateBucket([]byte("bench"))

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		start := make(chan struct{})
		var wg sync.WaitGroup

		for major := 0; major < 10; major++ {
			wg.Add(1)
			go func(id uint32) {
				defer wg.Done()
				<-start

				insert100 := func(tx *bolt.Tx) error {
					h := fnv.New32a()
					buf := make([]byte, 4)
					for minor := uint32(0); minor < 100; minor++ {
						binary.LittleEndian.PutUint32(buf, uint32(id*100+minor))
						h.Reset()
						h.Write(buf[:])
						k := h.Sum(nil)
						b := tx.Bucket([]byte("bench"))
						if err := b.Put(k, []byte("filler")); err != nil {
							return err
						}
					}
					return nil
				}
				if err := db.Update(insert100); err != nil {
					b.Fatal(err)
				}
			}(uint32(major))
		}
		close(start)
		wg.Wait()
	}

	b.StopTimer()
	validateBatchBench(b, db)
}
148 Godeps/_workspace/src/github.com/boltdb/bolt/batch_example_test.go (generated, vendored)
@@ -1,148 +0,0 @@
package bolt_test

import (
	"encoding/binary"
	"fmt"
	"io/ioutil"
	"log"
	"math/rand"
	"net/http"
	"net/http/httptest"
	"os"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/boltdb/bolt"
)

// Set this to see how the counts are actually updated.
const verbose = false

// Counter updates a counter in Bolt for every URL path requested.
type counter struct {
	db *bolt.DB
}

func (c counter) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
	// Communicates the new count from a successful database
	// transaction.
	var result uint64

	increment := func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("hits"))
		if err != nil {
			return err
		}
		key := []byte(req.URL.String())
		// Decode handles key not found for us.
		count := decode(b.Get(key)) + 1
		b.Put(key, encode(count))
		// All good, communicate new count.
		result = count
		return nil
	}
	if err := c.db.Batch(increment); err != nil {
		http.Error(rw, err.Error(), 500)
		return
	}

	if verbose {
		log.Printf("server: %s: %d", req.URL.String(), result)
	}

	rw.Header().Set("Content-Type", "application/octet-stream")
	fmt.Fprintf(rw, "%d\n", result)
}

func client(id int, base string, paths []string) error {
	// Process paths in random order.
	rng := rand.New(rand.NewSource(int64(id)))
	permutation := rng.Perm(len(paths))

	for i := range paths {
		path := paths[permutation[i]]
		resp, err := http.Get(base + path)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		buf, err := ioutil.ReadAll(resp.Body)
		if err != nil {
			return err
		}
		if verbose {
			log.Printf("client: %s: %s", path, buf)
		}
	}
	return nil
}

func ExampleDB_Batch() {
	// Open the database.
	db, _ := bolt.Open(tempfile(), 0666, nil)
	defer os.Remove(db.Path())
	defer db.Close()

	// Start our web server
	count := counter{db}
	srv := httptest.NewServer(count)
	defer srv.Close()

	// Decrease the batch size to make things more interesting.
	db.MaxBatchSize = 3

	// Get every path multiple times concurrently.
	const clients = 10
	paths := []string{
		"/foo",
		"/bar",
		"/baz",
		"/quux",
		"/thud",
		"/xyzzy",
	}
	errors := make(chan error, clients)
	for i := 0; i < clients; i++ {
		go func(id int) {
			errors <- client(id, srv.URL, paths)
		}(i)
	}
	// Check all responses to make sure there's no error.
	for i := 0; i < clients; i++ {
		if err := <-errors; err != nil {
			fmt.Printf("client error: %v", err)
			return
		}
	}

	// Check the final result
	db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("hits"))
		c := b.Cursor()
		for k, v := c.First(); k != nil; k, v = c.Next() {
			fmt.Printf("hits to %s: %d\n", k, decode(v))
		}
		return nil
	})

	// Output:
	// hits to /bar: 10
	// hits to /baz: 10
	// hits to /foo: 10
	// hits to /quux: 10
	// hits to /thud: 10
	// hits to /xyzzy: 10
}

// encode marshals a counter.
func encode(n uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, n)
	return buf
}

// decode unmarshals a counter. Nil buffers are decoded as 0.
func decode(buf []byte) uint64 {
	if buf == nil {
		return 0
	}
	return binary.BigEndian.Uint64(buf)
}
167 Godeps/_workspace/src/github.com/boltdb/bolt/batch_test.go (generated, vendored)
@@ -1,167 +0,0 @@
package bolt_test

import (
	"testing"
	"time"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/boltdb/bolt"
)

// Ensure two functions can perform updates in a single batch.
func TestDB_Batch(t *testing.T) {
	db := NewTestDB()
	defer db.Close()
	db.MustCreateBucket([]byte("widgets"))

	// Iterate over multiple updates in separate goroutines.
	n := 2
	ch := make(chan error)
	for i := 0; i < n; i++ {
		go func(i int) {
			ch <- db.Batch(func(tx *bolt.Tx) error {
				return tx.Bucket([]byte("widgets")).Put(u64tob(uint64(i)), []byte{})
			})
		}(i)
	}

	// Check all responses to make sure there's no error.
	for i := 0; i < n; i++ {
		if err := <-ch; err != nil {
			t.Fatal(err)
		}
	}

	// Ensure data is correct.
	db.MustView(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("widgets"))
		for i := 0; i < n; i++ {
			if v := b.Get(u64tob(uint64(i))); v == nil {
				t.Errorf("key not found: %d", i)
			}
		}
		return nil
	})
}

func TestDB_Batch_Panic(t *testing.T) {
	db := NewTestDB()
	defer db.Close()

	var sentinel int
	var bork = &sentinel
	var problem interface{}
	var err error

	// Execute a function inside a batch that panics.
	func() {
		defer func() {
			if p := recover(); p != nil {
				problem = p
			}
		}()
		err = db.Batch(func(tx *bolt.Tx) error {
			panic(bork)
		})
	}()

	// Verify there is no error.
	if g, e := err, error(nil); g != e {
		t.Fatalf("wrong error: %v != %v", g, e)
	}
	// Verify the panic was captured.
	if g, e := problem, bork; g != e {
		t.Fatalf("wrong error: %v != %v", g, e)
	}
}

func TestDB_BatchFull(t *testing.T) {
	db := NewTestDB()
	defer db.Close()
	db.MustCreateBucket([]byte("widgets"))

	const size = 3
	// buffered so we never leak goroutines
	ch := make(chan error, size)
	put := func(i int) {
		ch <- db.Batch(func(tx *bolt.Tx) error {
			return tx.Bucket([]byte("widgets")).Put(u64tob(uint64(i)), []byte{})
		})
	}

	db.MaxBatchSize = size
	// high enough to never trigger here
	db.MaxBatchDelay = 1 * time.Hour

	go put(1)
	go put(2)

	// Give the batch a chance to exhibit bugs.
	time.Sleep(10 * time.Millisecond)

	// not triggered yet
	select {
	case <-ch:
		t.Fatalf("batch triggered too early")
	default:
	}

	go put(3)

	// Check all responses to make sure there's no error.
	for i := 0; i < size; i++ {
		if err := <-ch; err != nil {
			t.Fatal(err)
		}
	}

	// Ensure data is correct.
	db.MustView(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("widgets"))
		for i := 1; i <= size; i++ {
			if v := b.Get(u64tob(uint64(i))); v == nil {
				t.Errorf("key not found: %d", i)
			}
		}
		return nil
	})
}

func TestDB_BatchTime(t *testing.T) {
	db := NewTestDB()
	defer db.Close()
	db.MustCreateBucket([]byte("widgets"))

	const size = 1
	// buffered so we never leak goroutines
	ch := make(chan error, size)
	put := func(i int) {
		ch <- db.Batch(func(tx *bolt.Tx) error {
			return tx.Bucket([]byte("widgets")).Put(u64tob(uint64(i)), []byte{})
		})
	}

	db.MaxBatchSize = 1000
	db.MaxBatchDelay = 0

	go put(1)

	// Batch must trigger by time alone.

	// Check all responses to make sure there's no error.
	for i := 0; i < size; i++ {
		if err := <-ch; err != nil {
			t.Fatal(err)
		}
	}

	// Ensure data is correct.
	db.MustView(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("widgets"))
		for i := 1; i <= size; i++ {
			if v := b.Get(u64tob(uint64(i))); v == nil {
				t.Errorf("key not found: %d", i)
			}
		}
		return nil
	})
}
7 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_386.go (generated, vendored)
@@ -1,7 +0,0 @@
package bolt

// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0x7FFFFFFF // 2GB

// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0xFFFFFFF
7 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_amd64.go (generated, vendored)
@@ -1,7 +0,0 @@
package bolt

// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0xFFFFFFFFFFFF // 256TB

// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
7 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_arm.go (generated, vendored)
@@ -1,7 +0,0 @@
package bolt

// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0x7FFFFFFF // 2GB

// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0xFFFFFFF
12 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_linux.go (generated, vendored)
@@ -1,12 +0,0 @@
package bolt

import (
	"syscall"
)

var odirect = syscall.O_DIRECT

// fdatasync flushes written data to a file descriptor.
func fdatasync(db *DB) error {
	return syscall.Fdatasync(int(db.file.Fd()))
}
29 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_openbsd.go (generated, vendored)
@@ -1,29 +0,0 @@
package bolt

import (
	"syscall"
	"unsafe"
)

const (
	msAsync = 1 << iota // perform asynchronous writes
	msSync              // perform synchronous writes
	msInvalidate        // invalidate cached data
)

var odirect int

func msync(db *DB) error {
	_, _, errno := syscall.Syscall(syscall.SYS_MSYNC, uintptr(unsafe.Pointer(db.data)), uintptr(db.datasz), msInvalidate)
	if errno != 0 {
		return errno
	}
	return nil
}

func fdatasync(db *DB) error {
	if db.data != nil {
		return msync(db)
	}
	return db.file.Sync()
}
36 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_test.go (generated, vendored)
@@ -1,36 +0,0 @@
package bolt_test

import (
	"fmt"
	"path/filepath"
	"reflect"
	"runtime"
	"testing"
)

// assert fails the test if the condition is false.
func assert(tb testing.TB, condition bool, msg string, v ...interface{}) {
	if !condition {
		_, file, line, _ := runtime.Caller(1)
		fmt.Printf("\033[31m%s:%d: "+msg+"\033[39m\n\n", append([]interface{}{filepath.Base(file), line}, v...)...)
		tb.FailNow()
	}
}

// ok fails the test if an err is not nil.
func ok(tb testing.TB, err error) {
	if err != nil {
		_, file, line, _ := runtime.Caller(1)
		fmt.Printf("\033[31m%s:%d: unexpected error: %s\033[39m\n\n", filepath.Base(file), line, err.Error())
		tb.FailNow()
	}
}

// equals fails the test if exp is not equal to act.
func equals(tb testing.TB, exp, act interface{}) {
	if !reflect.DeepEqual(exp, act) {
		_, file, line, _ := runtime.Caller(1)
		fmt.Printf("\033[31m%s:%d:\n\n\texp: %#v\n\n\tgot: %#v\033[39m\n\n", filepath.Base(file), line, exp, act)
		tb.FailNow()
	}
}
80 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_unix.go (generated, vendored)
@@ -1,80 +0,0 @@
// +build !windows,!plan9

package bolt

import (
	"fmt"
	"os"
	"syscall"
	"time"
	"unsafe"
)

// flock acquires an advisory lock on a file descriptor.
func flock(f *os.File, timeout time.Duration) error {
	var t time.Time
	for {
		// If we're beyond our timeout then return an error.
		// This can only occur after we've attempted a flock once.
		if t.IsZero() {
			t = time.Now()
		} else if timeout > 0 && time.Since(t) > timeout {
			return ErrTimeout
		}

		// Otherwise attempt to obtain an exclusive lock.
		err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
		if err == nil {
			return nil
		} else if err != syscall.EWOULDBLOCK {
			return err
		}

		// Wait for a bit and try again.
		time.Sleep(50 * time.Millisecond)
	}
}

// funlock releases an advisory lock on a file descriptor.
func funlock(f *os.File) error {
	return syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
}

// mmap memory maps a DB's data file.
func mmap(db *DB, sz int) error {
	// Truncate and fsync to ensure file size metadata is flushed.
	// https://github.com/boltdb/bolt/issues/284
	if err := db.file.Truncate(int64(sz)); err != nil {
		return fmt.Errorf("file resize error: %s", err)
	}
	if err := db.file.Sync(); err != nil {
		return fmt.Errorf("file sync error: %s", err)
	}

	// Map the data file to memory.
	b, err := syscall.Mmap(int(db.file.Fd()), 0, sz, syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		return err
	}

	// Save the original byte slice and convert to a byte array pointer.
	db.dataref = b
	db.data = (*[maxMapSize]byte)(unsafe.Pointer(&b[0]))
	db.datasz = sz
	return nil
}

// munmap unmaps a DB's data file from memory.
func munmap(db *DB) error {
	// Ignore the unmap if we have no mapped data.
	if db.dataref == nil {
		return nil
	}

	// Unmap using the original byte slice.
	err := syscall.Munmap(db.dataref)
	db.dataref = nil
	db.data = nil
	db.datasz = 0
	return err
}
74 Godeps/_workspace/src/github.com/boltdb/bolt/bolt_windows.go (generated, vendored)
@@ -1,74 +0,0 @@
package bolt

import (
	"fmt"
	"os"
	"syscall"
	"time"
	"unsafe"
)

var odirect int

// fdatasync flushes written data to a file descriptor.
func fdatasync(db *DB) error {
	return db.file.Sync()
}

// flock acquires an advisory lock on a file descriptor.
func flock(f *os.File, _ time.Duration) error {
	return nil
}

// funlock releases an advisory lock on a file descriptor.
func funlock(f *os.File) error {
	return nil
}

// mmap memory maps a DB's data file.
// Based on: https://github.com/edsrzf/mmap-go
func mmap(db *DB, sz int) error {
	// Truncate the database to the size of the mmap.
	if err := db.file.Truncate(int64(sz)); err != nil {
		return fmt.Errorf("truncate: %s", err)
	}

	// Open a file mapping handle.
	sizelo := uint32(sz >> 32)
	sizehi := uint32(sz) & 0xffffffff
	h, errno := syscall.CreateFileMapping(syscall.Handle(db.file.Fd()), nil, syscall.PAGE_READONLY, sizelo, sizehi, nil)
	if h == 0 {
		return os.NewSyscallError("CreateFileMapping", errno)
	}

	// Create the memory map.
	addr, errno := syscall.MapViewOfFile(h, syscall.FILE_MAP_READ, 0, 0, uintptr(sz))
	if addr == 0 {
		return os.NewSyscallError("MapViewOfFile", errno)
	}

	// Close mapping handle.
	if err := syscall.CloseHandle(syscall.Handle(h)); err != nil {
		return os.NewSyscallError("CloseHandle", err)
	}

	// Convert to a byte array.
	db.data = ((*[maxMapSize]byte)(unsafe.Pointer(addr)))
	db.datasz = sz

	return nil
}

// munmap unmaps a pointer from a file.
// Based on: https://github.com/edsrzf/mmap-go
func munmap(db *DB) error {
	if db.data == nil {
		return nil
	}

	addr := (uintptr)(unsafe.Pointer(&db.data[0]))
	if err := syscall.UnmapViewOfFile(addr); err != nil {
		return os.NewSyscallError("UnmapViewOfFile", err)
	}
	return nil
}
10 Godeps/_workspace/src/github.com/boltdb/bolt/boltsync_unix.go (generated, vendored)
@@ -1,10 +0,0 @@
// +build !windows,!plan9,!linux,!openbsd

package bolt

var odirect int

// fdatasync flushes written data to a file descriptor.
func fdatasync(db *DB) error {
	return db.file.Sync()
}
743 Godeps/_workspace/src/github.com/boltdb/bolt/bucket.go (generated, vendored)
@@ -1,743 +0,0 @@
package bolt

import (
	"bytes"
	"fmt"
	"unsafe"
)

const (
	// MaxKeySize is the maximum length of a key, in bytes.
	MaxKeySize = 32768

	// MaxValueSize is the maximum length of a value, in bytes.
	MaxValueSize = 4294967295
)

const (
	maxUint = ^uint(0)
	minUint = 0
	maxInt  = int(^uint(0) >> 1)
	minInt  = -maxInt - 1
)

const bucketHeaderSize = int(unsafe.Sizeof(bucket{}))

const (
	minFillPercent = 0.1
	maxFillPercent = 1.0
)

// DefaultFillPercent is the percentage that split pages are filled.
// This value can be changed by setting Bucket.FillPercent.
const DefaultFillPercent = 0.5

// Bucket represents a collection of key/value pairs inside the database.
type Bucket struct {
	*bucket
	tx       *Tx                // the associated transaction
	buckets  map[string]*Bucket // subbucket cache
	page     *page              // inline page reference
	rootNode *node              // materialized node for the root page.
	nodes    map[pgid]*node     // node cache

	// Sets the threshold for filling nodes when they split. By default,
	// the bucket will fill to 50% but it can be useful to increase this
	// amount if you know that your write workloads are mostly append-only.
	//
	// This is non-persisted across transactions so it must be set in every Tx.
	FillPercent float64
}

// bucket represents the on-file representation of a bucket.
// This is stored as the "value" of a bucket key. If the bucket is small enough,
// then its root page can be stored inline in the "value", after the bucket
// header. In the case of inline buckets, the "root" will be 0.
type bucket struct {
	root     pgid   // page id of the bucket's root-level page
	sequence uint64 // monotonically incrementing, used by NextSequence()
}

// newBucket returns a new bucket associated with a transaction.
func newBucket(tx *Tx) Bucket {
	var b = Bucket{tx: tx, FillPercent: DefaultFillPercent}
	if tx.writable {
		b.buckets = make(map[string]*Bucket)
		b.nodes = make(map[pgid]*node)
	}
	return b
}

// Tx returns the tx of the bucket.
func (b *Bucket) Tx() *Tx {
	return b.tx
}

// Root returns the root of the bucket.
func (b *Bucket) Root() pgid {
	return b.root
}

// Writable returns whether the bucket is writable.
func (b *Bucket) Writable() bool {
	return b.tx.writable
}

// Cursor creates a cursor associated with the bucket.
// The cursor is only valid as long as the transaction is open.
// Do not use a cursor after the transaction is closed.
func (b *Bucket) Cursor() *Cursor {
	// Update transaction statistics.
	b.tx.stats.CursorCount++

	// Allocate and return a cursor.
	return &Cursor{
		bucket: b,
		stack:  make([]elemRef, 0),
	}
}

// Bucket retrieves a nested bucket by name.
// Returns nil if the bucket does not exist.
func (b *Bucket) Bucket(name []byte) *Bucket {
	if b.buckets != nil {
		if child := b.buckets[string(name)]; child != nil {
			return child
		}
	}

	// Move cursor to key.
	c := b.Cursor()
	k, v, flags := c.seek(name)

	// Return nil if the key doesn't exist or it is not a bucket.
	if !bytes.Equal(name, k) || (flags&bucketLeafFlag) == 0 {
		return nil
	}

	// Otherwise create a bucket and cache it.
	var child = b.openBucket(v)
	if b.buckets != nil {
		b.buckets[string(name)] = child
	}

	return child
}

// Helper method that re-interprets a sub-bucket value
// from a parent into a Bucket
func (b *Bucket) openBucket(value []byte) *Bucket {
	var child = newBucket(b.tx)

	// If this is a writable transaction then we need to copy the bucket entry.
	// Read-only transactions can point directly at the mmap entry.
	if b.tx.writable {
		child.bucket = &bucket{}
		*child.bucket = *(*bucket)(unsafe.Pointer(&value[0]))
	} else {
		child.bucket = (*bucket)(unsafe.Pointer(&value[0]))
	}

	// Save a reference to the inline page if the bucket is inline.
	if child.root == 0 {
		child.page = (*page)(unsafe.Pointer(&value[bucketHeaderSize]))
	}

	return &child
}

// CreateBucket creates a new bucket at the given key and returns the new bucket.
// Returns an error if the key already exists, if the bucket name is blank, or if the bucket name is too long.
func (b *Bucket) CreateBucket(key []byte) (*Bucket, error) {
	if b.tx.db == nil {
		return nil, ErrTxClosed
	} else if !b.tx.writable {
		return nil, ErrTxNotWritable
	} else if len(key) == 0 {
		return nil, ErrBucketNameRequired
	}

	// Move cursor to correct position.
	c := b.Cursor()
	k, _, flags := c.seek(key)

	// Return an error if there is an existing key.
	if bytes.Equal(key, k) {
		if (flags & bucketLeafFlag) != 0 {
			return nil, ErrBucketExists
		} else {
			return nil, ErrIncompatibleValue
		}
	}

	// Create empty, inline bucket.
	var bucket = Bucket{
		bucket:      &bucket{},
		rootNode:    &node{isLeaf: true},
		FillPercent: DefaultFillPercent,
	}
	var value = bucket.write()

	// Insert into node.
	key = cloneBytes(key)
	c.node().put(key, key, value, 0, bucketLeafFlag)

	// Since subbuckets are not allowed on inline buckets, we need to
	// dereference the inline page, if it exists. This will cause the bucket
	// to be treated as a regular, non-inline bucket for the rest of the tx.
	b.page = nil

	return b.Bucket(key), nil
}

// CreateBucketIfNotExists creates a new bucket if it doesn't already exist and returns a reference to it.
// Returns an error if the bucket name is blank, or if the bucket name is too long.
func (b *Bucket) CreateBucketIfNotExists(key []byte) (*Bucket, error) {
	child, err := b.CreateBucket(key)
	if err == ErrBucketExists {
		return b.Bucket(key), nil
	} else if err != nil {
		return nil, err
	}
	return child, nil
}

// DeleteBucket deletes a bucket at the given key.
// Returns an error if the bucket does not exist, or if the key represents a non-bucket value.
func (b *Bucket) DeleteBucket(key []byte) error {
	if b.tx.db == nil {
		return ErrTxClosed
	} else if !b.Writable() {
		return ErrTxNotWritable
	}

	// Move cursor to correct position.
	c := b.Cursor()
	k, _, flags := c.seek(key)

	// Return an error if bucket doesn't exist or is not a bucket.
	if !bytes.Equal(key, k) {
		return ErrBucketNotFound
	} else if (flags & bucketLeafFlag) == 0 {
		return ErrIncompatibleValue
	}

	// Recursively delete all child buckets.
	child := b.Bucket(key)
	err := child.ForEach(func(k, v []byte) error {
		if v == nil {
			if err := child.DeleteBucket(k); err != nil {
				return fmt.Errorf("delete bucket: %s", err)
			}
		}
		return nil
	})
	if err != nil {
		return err
	}

	// Remove cached copy.
	delete(b.buckets, string(key))

	// Release all bucket pages to freelist.
	child.nodes = nil
	child.rootNode = nil
	child.free()

	// Delete the node if we have a matching key.
	c.node().del(key)

	return nil
}

// Get retrieves the value for a key in the bucket.
// Returns a nil value if the key does not exist or if the key is a nested bucket.
// The returned value is only valid for the life of the transaction.
func (b *Bucket) Get(key []byte) []byte {
	k, v, flags := b.Cursor().seek(key)

	// Return nil if this is a bucket.
	if (flags & bucketLeafFlag) != 0 {
		return nil
	}

	// If our target node isn't the same key as what's passed in then return nil.
	if !bytes.Equal(key, k) {
		return nil
	}
	return v
}

// Put sets the value for a key in the bucket.
// If the key exists then its previous value will be overwritten.
// Returns an error if the bucket was created from a read-only transaction, if the key is blank, if the key is too large, or if the value is too large.
func (b *Bucket) Put(key []byte, value []byte) error {
	if b.tx.db == nil {
		return ErrTxClosed
	} else if !b.Writable() {
		return ErrTxNotWritable
	} else if len(key) == 0 {
		return ErrKeyRequired
	} else if len(key) > MaxKeySize {
		return ErrKeyTooLarge
	} else if int64(len(value)) > MaxValueSize {
		return ErrValueTooLarge
	}

	// Move cursor to correct position.
	c := b.Cursor()
	k, _, flags := c.seek(key)

	// Return an error if there is an existing key with a bucket value.
	if bytes.Equal(key, k) && (flags&bucketLeafFlag) != 0 {
		return ErrIncompatibleValue
	}

	// Insert into node.
	key = cloneBytes(key)
	c.node().put(key, key, value, 0, 0)

	return nil
}

// Delete removes a key from the bucket.
// If the key does not exist then nothing is done and a nil error is returned.
// Returns an error if the bucket was created from a read-only transaction.
func (b *Bucket) Delete(key []byte) error {
	if b.tx.db == nil {
		return ErrTxClosed
	} else if !b.Writable() {
		return ErrTxNotWritable
	}

	// Move cursor to correct position.
	c := b.Cursor()
	_, _, flags := c.seek(key)

	// Return an error if there is already existing bucket value.
	if (flags & bucketLeafFlag) != 0 {
		return ErrIncompatibleValue
	}

	// Delete the node if we have a matching key.
	c.node().del(key)

	return nil
}

// NextSequence returns an autoincrementing integer for the bucket.
func (b *Bucket) NextSequence() (uint64, error) {
	if b.tx.db == nil {
		return 0, ErrTxClosed
	} else if !b.Writable() {
		return 0, ErrTxNotWritable
	}

	// Materialize the root node if it hasn't been already so that the
	// bucket will be saved during commit.
	if b.rootNode == nil {
		_ = b.node(b.root, nil)
	}

	// Increment and return the sequence.
	b.bucket.sequence++
	return b.bucket.sequence, nil
}

// ForEach executes a function for each key/value pair in a bucket.
// If the provided function returns an error then the iteration is stopped and
// the error is returned to the caller.
func (b *Bucket) ForEach(fn func(k, v []byte) error) error {
	if b.tx.db == nil {
		return ErrTxClosed
	}
	c := b.Cursor()
	for k, v := c.First(); k != nil; k, v = c.Next() {
		if err := fn(k, v); err != nil {
			return err
		}
	}
	return nil
}

// Stats returns stats on a bucket.
func (b *Bucket) Stats() BucketStats {
	var s, subStats BucketStats
	pageSize := b.tx.db.pageSize
	s.BucketN += 1
	if b.root == 0 {
		s.InlineBucketN += 1
	}
	b.forEachPage(func(p *page, depth int) {
		if (p.flags & leafPageFlag) != 0 {
			s.KeyN += int(p.count)

			// used totals the used bytes for the page
			used := pageHeaderSize

			if p.count != 0 {
				// If page has any elements, add all element headers.
				used += leafPageElementSize * int(p.count-1)

				// Add all element key, value sizes.
				// The computation takes advantage of the fact that the position
				// of the last element's key/value equals to the total of the sizes
				// of all previous elements' keys and values.
				// It also includes the last element's header.
				lastElement := p.leafPageElement(p.count - 1)
				used += int(lastElement.pos + lastElement.ksize + lastElement.vsize)
			}

			if b.root == 0 {
				// For inlined bucket just update the inline stats
				s.InlineBucketInuse += used
			} else {
				// For non-inlined bucket update all the leaf stats
				s.LeafPageN++
				s.LeafInuse += used
				s.LeafOverflowN += int(p.overflow)

				// Collect stats from sub-buckets.
				// Do that by iterating over all element headers
				// looking for the ones with the bucketLeafFlag.
				for i := uint16(0); i < p.count; i++ {
					e := p.leafPageElement(i)
					if (e.flags & bucketLeafFlag) != 0 {
						// For any bucket element, open the element value
						// and recursively call Stats on the contained bucket.
						subStats.Add(b.openBucket(e.value()).Stats())
					}
				}
			}
		} else if (p.flags & branchPageFlag) != 0 {
			s.BranchPageN++
			lastElement := p.branchPageElement(p.count - 1)

			// used totals the used bytes for the page
			// Add header and all element headers.
			used := pageHeaderSize + (branchPageElementSize * int(p.count-1))

			// Add size of all keys and values.
			// Again, use the fact that last element's position equals to
			// the total of key, value sizes of all previous elements.
			used += int(lastElement.pos + lastElement.ksize)
			s.BranchInuse += used
			s.BranchOverflowN += int(p.overflow)
		}

		// Keep track of maximum page depth.
		if depth+1 > s.Depth {
			s.Depth = (depth + 1)
		}
	})

	// Alloc stats can be computed from page counts and pageSize.
	s.BranchAlloc = (s.BranchPageN + s.BranchOverflowN) * pageSize
	s.LeafAlloc = (s.LeafPageN + s.LeafOverflowN) * pageSize

	// Add the max depth of sub-buckets to get total nested depth.
	s.Depth += subStats.Depth
	// Add the stats for all sub-buckets
	s.Add(subStats)
	return s
}

// forEachPage iterates over every page in a bucket, including inline pages.
func (b *Bucket) forEachPage(fn func(*page, int)) {
	// If we have an inline page then just use that.
	if b.page != nil {
		fn(b.page, 0)
		return
	}

	// Otherwise traverse the page hierarchy.
	b.tx.forEachPage(b.root, 0, fn)
}

// forEachPageNode iterates over every page (or node) in a bucket.
// This also includes inline pages.
func (b *Bucket) forEachPageNode(fn func(*page, *node, int)) {
	// If we have an inline page or root node then just use that.
	if b.page != nil {
		fn(b.page, nil, 0)
		return
	}
	b._forEachPageNode(b.root, 0, fn)
}

func (b *Bucket) _forEachPageNode(pgid pgid, depth int, fn func(*page, *node, int)) {
	var p, n = b.pageNode(pgid)

	// Execute function.
	fn(p, n, depth)

	// Recursively loop over children.
	if p != nil {
		if (p.flags & branchPageFlag) != 0 {
			for i := 0; i < int(p.count); i++ {
				elem := p.branchPageElement(uint16(i))
				b._forEachPageNode(elem.pgid, depth+1, fn)
			}
		}
	} else {
		if !n.isLeaf {
			for _, inode := range n.inodes {
				b._forEachPageNode(inode.pgid, depth+1, fn)
			}
		}
	}
}

// spill writes all the nodes for this bucket to dirty pages.
func (b *Bucket) spill() error {
	// Spill all child buckets first.
	for name, child := range b.buckets {
		// If the child bucket is small enough and it has no child buckets then
		// write it inline into the parent bucket's page. Otherwise spill it
		// like a normal bucket and make the parent value a pointer to the page.
		var value []byte
		if child.inlineable() {
			child.free()
			value = child.write()
		} else {
			if err := child.spill(); err != nil {
				return err
			}

			// Update the child bucket header in this bucket.
			value = make([]byte, unsafe.Sizeof(bucket{}))
			var bucket = (*bucket)(unsafe.Pointer(&value[0]))
			*bucket = *child.bucket
		}

		// Skip writing the bucket if there are no materialized nodes.
		if child.rootNode == nil {
			continue
		}

		// Update parent node.
		var c = b.Cursor()
		k, _, flags := c.seek([]byte(name))
		if !bytes.Equal([]byte(name), k) {
			panic(fmt.Sprintf("misplaced bucket header: %x -> %x", []byte(name), k))
		}
		if flags&bucketLeafFlag == 0 {
			panic(fmt.Sprintf("unexpected bucket header flag: %x", flags))
		}
		c.node().put([]byte(name), []byte(name), value, 0, bucketLeafFlag)
	}

	// Ignore if there's not a materialized root node.
	if b.rootNode == nil {
		return nil
	}

	// Spill nodes.
	if err := b.rootNode.spill(); err != nil {
		return err
	}
	b.rootNode = b.rootNode.root()

	// Update the root node for this bucket.
	if b.rootNode.pgid >= b.tx.meta.pgid {
		panic(fmt.Sprintf("pgid (%d) above high water mark (%d)", b.rootNode.pgid, b.tx.meta.pgid))
	}
	b.root = b.rootNode.pgid

	return nil
}

// inlineable returns true if a bucket is small enough to be written inline
// and if it contains no subbuckets. Otherwise returns false.
func (b *Bucket) inlineable() bool {
	var n = b.rootNode

	// Bucket must only contain a single leaf node.
	if n == nil || !n.isLeaf {
		return false
	}

	// Bucket is not inlineable if it contains subbuckets or if it goes beyond
	// our threshold for inline bucket size.
	var size = pageHeaderSize
	for _, inode := range n.inodes {
		size += leafPageElementSize + len(inode.key) + len(inode.value)

		if inode.flags&bucketLeafFlag != 0 {
			return false
		} else if size > b.maxInlineBucketSize() {
			return false
		}
	}

	return true
}

// Returns the maximum total size of a bucket to make it a candidate for inlining.
func (b *Bucket) maxInlineBucketSize() int {
	return b.tx.db.pageSize / 4
}

// write allocates and writes a bucket to a byte slice.
func (b *Bucket) write() []byte {
	// Allocate the appropriate size.
	var n = b.rootNode
	var value = make([]byte, bucketHeaderSize+n.size())

	// Write a bucket header.
	var bucket = (*bucket)(unsafe.Pointer(&value[0]))
	*bucket = *b.bucket

	// Convert byte slice to a fake page and write the root node.
	var p = (*page)(unsafe.Pointer(&value[bucketHeaderSize]))
	n.write(p)

	return value
}

// rebalance attempts to balance all nodes.
func (b *Bucket) rebalance() {
	for _, n := range b.nodes {
		n.rebalance()
	}
	for _, child := range b.buckets {
		child.rebalance()
	}
}

// node creates a node from a page and associates it with a given parent.
func (b *Bucket) node(pgid pgid, parent *node) *node {
	_assert(b.nodes != nil, "nodes map expected")

	// Retrieve node if it's already been created.
	if n := b.nodes[pgid]; n != nil {
		return n
	}

	// Otherwise create a node and cache it.
	n := &node{bucket: b, parent: parent}
	if parent == nil {
		b.rootNode = n
	} else {
		parent.children = append(parent.children, n)
	}

	// Use the inline page if this is an inline bucket.
	var p = b.page
	if p == nil {
		p = b.tx.page(pgid)
	}

	// Read the page into the node and cache it.
	n.read(p)
	b.nodes[pgid] = n

	// Update statistics.
	b.tx.stats.NodeCount++

	return n
}

// free recursively frees all pages in the bucket.
func (b *Bucket) free() {
	if b.root == 0 {
		return
	}

	var tx = b.tx
	b.forEachPageNode(func(p *page, n *node, _ int) {
		if p != nil {
			tx.db.freelist.free(tx.meta.txid, p)
		} else {
			n.free()
		}
	})
	b.root = 0
}

// dereference removes all references to the old mmap.
func (b *Bucket) dereference() {
	if b.rootNode != nil {
		b.rootNode.root().dereference()
	}

	for _, child := range b.buckets {
		child.dereference()
	}
}

// pageNode returns the in-memory node, if it exists.
// Otherwise returns the underlying page.
func (b *Bucket) pageNode(id pgid) (*page, *node) {
	// Inline buckets have a fake page embedded in their value so treat them
	// differently. We'll return the rootNode (if available) or the fake page.
	if b.root == 0 {
		if id != 0 {
			panic(fmt.Sprintf("inline bucket non-zero page access(2): %d != 0", id))
		}
		if b.rootNode != nil {
			return nil, b.rootNode
		}
		return b.page, nil
	}

	// Check the node cache for non-inline buckets.
	if b.nodes != nil {
		if n := b.nodes[id]; n != nil {
			return nil, n
		}
	}

	// Finally lookup the page from the transaction if no node is materialized.
	return b.tx.page(id), nil
}

// BucketStats records statistics about resources used by a bucket.
type BucketStats struct {
	// Page count statistics.
	BranchPageN     int // number of logical branch pages
	BranchOverflowN int // number of physical branch overflow pages
	LeafPageN       int // number of logical leaf pages
	LeafOverflowN   int // number of physical leaf overflow pages

	// Tree statistics.
	KeyN  int // number of key/value pairs
	Depth int // number of levels in B+tree

	// Page size utilization.
	BranchAlloc int // bytes allocated for physical branch pages
	BranchInuse int // bytes actually used for branch data
	LeafAlloc   int // bytes allocated for physical leaf pages
	LeafInuse   int // bytes actually used for leaf data

	// Bucket statistics
	BucketN           int // total number of buckets including the top bucket
	InlineBucketN     int // total number of inlined buckets
	InlineBucketInuse int // bytes used for inlined buckets (also accounted for in LeafInuse)
}

func (s *BucketStats) Add(other BucketStats) {
	s.BranchPageN += other.BranchPageN
	s.BranchOverflowN += other.BranchOverflowN
	s.LeafPageN += other.LeafPageN
	s.LeafOverflowN += other.LeafOverflowN
	s.KeyN += other.KeyN
	if s.Depth < other.Depth {
		s.Depth = other.Depth
	}
	s.BranchAlloc += other.BranchAlloc
	s.BranchInuse += other.BranchInuse
	s.LeafAlloc += other.LeafAlloc
	s.LeafInuse += other.LeafInuse

	s.BucketN += other.BucketN
	s.InlineBucketN += other.InlineBucketN
	s.InlineBucketInuse += other.InlineBucketInuse
}

// cloneBytes returns a copy of a given slice.
func cloneBytes(v []byte) []byte {
	var clone = make([]byte, len(v))
	copy(clone, v)
	return clone
}
1153 Godeps/_workspace/src/github.com/boltdb/bolt/bucket_test.go (generated, vendored)
File diff suppressed because it is too large.
1529 Godeps/_workspace/src/github.com/boltdb/bolt/cmd/bolt/main.go (generated, vendored)
File diff suppressed because it is too large.
Godeps/_workspace/src/github.com/boltdb/bolt/cmd/bolt/main_test.go (generated, vendored): 145 lines changed.
@@ -1,145 +0,0 @@
package main_test

import (
	"bytes"
	"io/ioutil"
	"os"
	"strconv"
	"testing"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/boltdb/bolt"
	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/boltdb/bolt/cmd/bolt"
)

// Ensure the "info" command can print information about a database.
func TestInfoCommand_Run(t *testing.T) {
	db := MustOpen(0666, nil)
	db.DB.Close()
	defer db.Close()

	// Run the info command.
	m := NewMain()
	if err := m.Run("info", db.Path); err != nil {
		t.Fatal(err)
	}
}

// Ensure the "stats" command can execute correctly.
func TestStatsCommand_Run(t *testing.T) {
	// Ignore systems whose page size is not 4KB; the expected output below
	// assumes 4096-byte pages.
	if os.Getpagesize() != 4096 {
		t.Skip("system does not use 4KB page size")
	}

	db := MustOpen(0666, nil)
	defer db.Close()

	if err := db.Update(func(tx *bolt.Tx) error {
		// Create "foo" bucket.
		b, err := tx.CreateBucket([]byte("foo"))
		if err != nil {
			return err
		}
		for i := 0; i < 10; i++ {
			if err := b.Put([]byte(strconv.Itoa(i)), []byte(strconv.Itoa(i))); err != nil {
				return err
			}
		}

		// Create "bar" bucket.
		b, err = tx.CreateBucket([]byte("bar"))
		if err != nil {
			return err
		}
		for i := 0; i < 100; i++ {
			if err := b.Put([]byte(strconv.Itoa(i)), []byte(strconv.Itoa(i))); err != nil {
				return err
			}
		}

		// Create "baz" bucket.
		b, err = tx.CreateBucket([]byte("baz"))
		if err != nil {
			return err
		}
		if err := b.Put([]byte("key"), []byte("value")); err != nil {
			return err
		}

		return nil
	}); err != nil {
		t.Fatal(err)
	}
	db.DB.Close()

	// Generate expected result.
	exp := "Aggregate statistics for 3 buckets\n\n" +
		"Page count statistics\n" +
		"\tNumber of logical branch pages: 0\n" +
		"\tNumber of physical branch overflow pages: 0\n" +
		"\tNumber of logical leaf pages: 1\n" +
		"\tNumber of physical leaf overflow pages: 0\n" +
		"Tree statistics\n" +
		"\tNumber of keys/value pairs: 111\n" +
		"\tNumber of levels in B+tree: 1\n" +
		"Page size utilization\n" +
		"\tBytes allocated for physical branch pages: 0\n" +
		"\tBytes actually used for branch data: 0 (0%)\n" +
		"\tBytes allocated for physical leaf pages: 4096\n" +
		"\tBytes actually used for leaf data: 1996 (48%)\n" +
		"Bucket statistics\n" +
		"\tTotal number of buckets: 3\n" +
		"\tTotal number on inlined buckets: 2 (66%)\n" +
		"\tBytes used for inlined buckets: 236 (11%)\n"

	// Run the command.
	m := NewMain()
	if err := m.Run("stats", db.Path); err != nil {
		t.Fatal(err)
	} else if m.Stdout.String() != exp {
		t.Fatalf("unexpected stdout:\n\n%s", m.Stdout.String())
	}
}
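The percentages in the expected output follow from the byte and bucket counts in the same string. A quick check of that arithmetic, assuming the stats command truncates with integer division, which matches the figures above:

```go
package main

import "fmt"

func main() {
	fmt.Println(1996 * 100 / 4096) // 48: leaf bytes in use vs. 4096 bytes allocated
	fmt.Println(2 * 100 / 3)       // 66: 2 of the 3 buckets are stored inline
	fmt.Println(236 * 100 / 1996)  // 11: inline bucket bytes vs. leaf bytes in use
}
```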
// Main represents a test wrapper for main.Main that records output.
type Main struct {
	*main.Main
	Stdin  bytes.Buffer
	Stdout bytes.Buffer
	Stderr bytes.Buffer
}

// NewMain returns a new instance of Main.
func NewMain() *Main {
	m := &Main{Main: main.NewMain()}
	m.Main.Stdin = &m.Stdin
	m.Main.Stdout = &m.Stdout
	m.Main.Stderr = &m.Stderr
	return m
}

// MustOpen creates a Bolt database in a temporary location.
func MustOpen(mode os.FileMode, options *bolt.Options) *DB {
	// Create temporary path.
	f, _ := ioutil.TempFile("", "bolt-")
	f.Close()
	os.Remove(f.Name())

	db, err := bolt.Open(f.Name(), mode, options)
	if err != nil {
		panic(err.Error())
	}
	return &DB{DB: db, Path: f.Name()}
}

// DB is a test wrapper for bolt.DB.
type DB struct {
	*bolt.DB
	Path string
}

// Close closes and removes the database.
func (db *DB) Close() error {
	defer os.Remove(db.Path)
	return db.DB.Close()
}
Godeps/_workspace/src/github.com/boltdb/bolt/cursor.go (generated, vendored): 384 lines changed.
@@ -1,384 +0,0 @@
package bolt

import (
	"bytes"
	"fmt"
	"sort"
)

// Cursor represents an iterator that can traverse over all key/value pairs in a bucket in sorted order.
// Cursors see nested buckets with value == nil.
// Cursors can be obtained from a transaction and are valid as long as the transaction is open.
//
// Keys and values returned from the cursor are only valid for the life of the transaction.
//
// Changing data while traversing with a cursor may cause it to be invalidated
// and return unexpected keys and/or values. You must reposition your cursor
// after mutating data.
type Cursor struct {
	bucket *Bucket
	stack  []elemRef
}
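The canonical way to drive this type, per the doc comment above, is a First/Next loop inside an open transaction; nested buckets show up with a nil value. A minimal sketch against the public API (the upstream import path, package name, and bucket name are illustrative):

```go
package boltscan

import (
	"fmt"

	"github.com/boltdb/bolt"
)

// dumpBucket prints every key/value pair in the named bucket in sorted order.
// The cursor is only valid while the surrounding transaction is open.
func dumpBucket(db *bolt.DB, name []byte) error {
	return db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket(name)
		if b == nil {
			return nil // bucket does not exist
		}
		c := b.Cursor()
		for k, v := c.First(); k != nil; k, v = c.Next() {
			// v is nil when the key refers to a nested bucket.
			fmt.Printf("key=%s value=%s\n", k, v)
		}
		return nil
	})
}
```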
// Bucket returns the bucket that this cursor was created from.
func (c *Cursor) Bucket() *Bucket {
	return c.bucket
}

// First moves the cursor to the first item in the bucket and returns its key and value.
// If the bucket is empty then a nil key and value are returned.
// The returned key and value are only valid for the life of the transaction.
func (c *Cursor) First() (key []byte, value []byte) {
	_assert(c.bucket.tx.db != nil, "tx closed")
	c.stack = c.stack[:0]
	p, n := c.bucket.pageNode(c.bucket.root)
	c.stack = append(c.stack, elemRef{page: p, node: n, index: 0})
	c.first()
	k, v, flags := c.keyValue()
	if (flags & uint32(bucketLeafFlag)) != 0 {
		return k, nil
	}
	return k, v
}

// Last moves the cursor to the last item in the bucket and returns its key and value.
// If the bucket is empty then a nil key and value are returned.
// The returned key and value are only valid for the life of the transaction.
func (c *Cursor) Last() (key []byte, value []byte) {
	_assert(c.bucket.tx.db != nil, "tx closed")
	c.stack = c.stack[:0]
	p, n := c.bucket.pageNode(c.bucket.root)
	ref := elemRef{page: p, node: n}
	ref.index = ref.count() - 1
	c.stack = append(c.stack, ref)
	c.last()
	k, v, flags := c.keyValue()
	if (flags & uint32(bucketLeafFlag)) != 0 {
		return k, nil
	}
	return k, v
}

// Next moves the cursor to the next item in the bucket and returns its key and value.
// If the cursor is at the end of the bucket then a nil key and value are returned.
// The returned key and value are only valid for the life of the transaction.
func (c *Cursor) Next() (key []byte, value []byte) {
	_assert(c.bucket.tx.db != nil, "tx closed")
	k, v, flags := c.next()
	if (flags & uint32(bucketLeafFlag)) != 0 {
		return k, nil
	}
	return k, v
}

// Prev moves the cursor to the previous item in the bucket and returns its key and value.
// If the cursor is at the beginning of the bucket then a nil key and value are returned.
// The returned key and value are only valid for the life of the transaction.
func (c *Cursor) Prev() (key []byte, value []byte) {
	_assert(c.bucket.tx.db != nil, "tx closed")

	// Attempt to move back one element until we're successful.
	// Move up the stack as we hit the beginning of each page in our stack.
	for i := len(c.stack) - 1; i >= 0; i-- {
		elem := &c.stack[i]
		if elem.index > 0 {
			elem.index--
			break
		}
		c.stack = c.stack[:i]
	}

	// If we've hit the end then return nil.
	if len(c.stack) == 0 {
		return nil, nil
	}

	// Move down the stack to find the last element of the last leaf under this branch.
	c.last()
	k, v, flags := c.keyValue()
	if (flags & uint32(bucketLeafFlag)) != 0 {
		return k, nil
	}
	return k, v
}

// Seek moves the cursor to a given key and returns it.
// If the key does not exist then the next key is used. If no keys
// follow, a nil key is returned.
// The returned key and value are only valid for the life of the transaction.
func (c *Cursor) Seek(seek []byte) (key []byte, value []byte) {
	k, v, flags := c.seek(seek)

	// If we ended up after the last element of a page then move to the next one.
	if ref := &c.stack[len(c.stack)-1]; ref.index >= ref.count() {
		k, v, flags = c.next()
	}

	if k == nil {
		return nil, nil
	} else if (flags & uint32(bucketLeafFlag)) != 0 {
		return k, nil
	}
	return k, v
}
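Because Seek lands on the next key when the exact key is absent, it combines naturally with a prefix check to scan a key range. A sketch under the same assumptions as the previous example (bucket name and prefix are illustrative):

```go
package boltscan

import (
	"bytes"
	"fmt"

	"github.com/boltdb/bolt"
)

// scanPrefix visits every key in the bucket that starts with prefix.
// Seek jumps to the first key >= prefix; Next then walks forward until
// the prefix no longer matches.
func scanPrefix(db *bolt.DB, bucket, prefix []byte) error {
	return db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket(bucket)
		if b == nil {
			return nil
		}
		c := b.Cursor()
		for k, v := c.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, v = c.Next() {
			fmt.Printf("%s: %s\n", k, v)
		}
		return nil
	})
}
```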
// Delete removes the current key/value under the cursor from the bucket.
// Delete fails if the current key/value is a bucket or if the transaction is not writable.
func (c *Cursor) Delete() error {
	if c.bucket.tx.db == nil {
		return ErrTxClosed
	} else if !c.bucket.Writable() {
		return ErrTxNotWritable
	}

	key, _, flags := c.keyValue()
	// Return an error if current value is a bucket.
	if (flags & bucketLeafFlag) != 0 {
		return ErrIncompatibleValue
	}
	c.node().del(key)

	return nil
}
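Delete only works from a writable transaction and refuses nested buckets, so a typical pattern is Seek followed by Delete inside db.Update. A sketch (function and bucket names are illustrative):

```go
package boltscan

import (
	"bytes"

	"github.com/boltdb/bolt"
)

// deleteKey removes key from the bucket if it is present and is a plain
// value; Cursor.Delete returns ErrIncompatibleValue for nested buckets.
func deleteKey(db *bolt.DB, bucket, key []byte) error {
	return db.Update(func(tx *bolt.Tx) error {
		b := tx.Bucket(bucket)
		if b == nil {
			return nil
		}
		c := b.Cursor()
		if k, _ := c.Seek(key); k != nil && bytes.Equal(k, key) {
			return c.Delete()
		}
		return nil // key not present
	})
}
```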
// seek moves the cursor to a given key and returns it.
// If the key does not exist then the next key is used.
func (c *Cursor) seek(seek []byte) (key []byte, value []byte, flags uint32) {
	_assert(c.bucket.tx.db != nil, "tx closed")

	// Start from root page/node and traverse to correct page.
	c.stack = c.stack[:0]
	c.search(seek, c.bucket.root)
	ref := &c.stack[len(c.stack)-1]

	// If the cursor is pointing to the end of page/node then return nil.
	if ref.index >= ref.count() {
		return nil, nil, 0
	}

	// If this is a bucket then return a nil value.
	return c.keyValue()
}

// first moves the cursor to the first leaf element under the last page in the stack.
func (c *Cursor) first() {
	for {
		// Exit when we hit a leaf page.
		var ref = &c.stack[len(c.stack)-1]
		if ref.isLeaf() {
			break
		}

		// Keep adding pages pointing to the first element to the stack.
		var pgid pgid
		if ref.node != nil {
			pgid = ref.node.inodes[ref.index].pgid
		} else {
			pgid = ref.page.branchPageElement(uint16(ref.index)).pgid
		}
		p, n := c.bucket.pageNode(pgid)
		c.stack = append(c.stack, elemRef{page: p, node: n, index: 0})
	}
}

// last moves the cursor to the last leaf element under the last page in the stack.
func (c *Cursor) last() {
	for {
		// Exit when we hit a leaf page.
		ref := &c.stack[len(c.stack)-1]
		if ref.isLeaf() {
			break
		}

		// Keep adding pages pointing to the last element in the stack.
		var pgid pgid
		if ref.node != nil {
			pgid = ref.node.inodes[ref.index].pgid
		} else {
			pgid = ref.page.branchPageElement(uint16(ref.index)).pgid
		}
		p, n := c.bucket.pageNode(pgid)

		var nextRef = elemRef{page: p, node: n}
		nextRef.index = nextRef.count() - 1
		c.stack = append(c.stack, nextRef)
	}
}

// next moves to the next leaf element and returns the key and value.
// If the cursor is at the last leaf element then it stays there and returns nil.
func (c *Cursor) next() (key []byte, value []byte, flags uint32) {
	// Attempt to move over one element until we're successful.
	// Move up the stack as we hit the end of each page in our stack.
	var i int
	for i = len(c.stack) - 1; i >= 0; i-- {
		elem := &c.stack[i]
		if elem.index < elem.count()-1 {
			elem.index++
			break
		}
	}

	// If we've hit the root page then stop and return. This will leave the
	// cursor on the last element of the last page.
	if i == -1 {
		return nil, nil, 0
	}

	// Otherwise start from where we left off in the stack and find the
	// first element of the first leaf page.
	c.stack = c.stack[:i+1]
	c.first()
	return c.keyValue()
}

// search recursively performs a binary search against a given page/node until it finds a given key.
func (c *Cursor) search(key []byte, pgid pgid) {
	p, n := c.bucket.pageNode(pgid)
	if p != nil && (p.flags&(branchPageFlag|leafPageFlag)) == 0 {
		panic(fmt.Sprintf("invalid page type: %d: %x", p.id, p.flags))
	}
	e := elemRef{page: p, node: n}
	c.stack = append(c.stack, e)

	// If we're on a leaf page/node then find the specific node.
	if e.isLeaf() {
		c.nsearch(key)
		return
	}

	if n != nil {
		c.searchNode(key, n)
		return
	}
	c.searchPage(key, p)
}

func (c *Cursor) searchNode(key []byte, n *node) {
	var exact bool
	index := sort.Search(len(n.inodes), func(i int) bool {
		// TODO(benbjohnson): Optimize this range search. It's a bit hacky right now.
		// sort.Search() finds the lowest index where f() != -1 but we need the highest index.
		ret := bytes.Compare(n.inodes[i].key, key)
		if ret == 0 {
			exact = true
		}
		return ret != -1
	})
	if !exact && index > 0 {
		index--
	}
	c.stack[len(c.stack)-1].index = index

	// Recursively search to the next page.
	c.search(key, n.inodes[index].pgid)
}

func (c *Cursor) searchPage(key []byte, p *page) {
	// Binary search for the correct range.
	inodes := p.branchPageElements()

	var exact bool
	index := sort.Search(int(p.count), func(i int) bool {
		// TODO(benbjohnson): Optimize this range search. It's a bit hacky right now.
		// sort.Search() finds the lowest index where f() != -1 but we need the highest index.
		ret := bytes.Compare(inodes[i].key(), key)
		if ret == 0 {
			exact = true
		}
		return ret != -1
	})
	if !exact && index > 0 {
		index--
	}
	c.stack[len(c.stack)-1].index = index

	// Recursively search to the next page.
	c.search(key, inodes[index].pgid)
}
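The sort.Search trick used by searchNode and searchPage is easier to see on a plain slice: the standard library returns the lowest index whose predicate holds (the first key >= target), while a branch page needs the highest separator key <= target, hence the index decrement when the match is not exact. A standalone illustration:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	keys := []string{"a", "c", "e", "g"} // separator keys of a branch page

	target := "d"
	// sort.Search finds the smallest index with keys[i] >= target.
	i := sort.Search(len(keys), func(i int) bool { return keys[i] >= target })
	fmt.Println(i) // 2 ("e" is the first key >= "d")

	// Not an exact hit, so step back to the child that can contain "d".
	if i > 0 && (i == len(keys) || keys[i] != target) {
		i--
	}
	fmt.Println(i) // 1 ("c" <= "d" < "e")
}
```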
// nsearch searches the leaf node on the top of the stack for a key.
func (c *Cursor) nsearch(key []byte) {
	e := &c.stack[len(c.stack)-1]
	p, n := e.page, e.node

	// If we have a node then search its inodes.
	if n != nil {
		index := sort.Search(len(n.inodes), func(i int) bool {
			return bytes.Compare(n.inodes[i].key, key) != -1
		})
		e.index = index
		return
	}

	// If we have a page then search its leaf elements.
	inodes := p.leafPageElements()
	index := sort.Search(int(p.count), func(i int) bool {
		return bytes.Compare(inodes[i].key(), key) != -1
	})
	e.index = index
}

// keyValue returns the key and value of the current leaf element.
func (c *Cursor) keyValue() ([]byte, []byte, uint32) {
	ref := &c.stack[len(c.stack)-1]
	if ref.count() == 0 || ref.index >= ref.count() {
		return nil, nil, 0
	}

	// Retrieve value from node.
	if ref.node != nil {
		inode := &ref.node.inodes[ref.index]
		return inode.key, inode.value, inode.flags
	}

	// Or retrieve value from page.
	elem := ref.page.leafPageElement(uint16(ref.index))
	return elem.key(), elem.value(), elem.flags
}

// node returns the node that the cursor is currently positioned on.
func (c *Cursor) node() *node {
	_assert(len(c.stack) > 0, "accessing a node with a zero-length cursor stack")

	// If the top of the stack is a leaf node then just return it.
	if ref := &c.stack[len(c.stack)-1]; ref.node != nil && ref.isLeaf() {
		return ref.node
	}

	// Start from root and traverse down the hierarchy.
	var n = c.stack[0].node
	if n == nil {
		n = c.bucket.node(c.stack[0].page.id, nil)
	}
	for _, ref := range c.stack[:len(c.stack)-1] {
		_assert(!n.isLeaf, "expected branch node")
		n = n.childAt(int(ref.index))
	}
	_assert(n.isLeaf, "expected leaf node")
	return n
}

// elemRef represents a reference to an element on a given page/node.
type elemRef struct {
	page  *page
	node  *node
	index int
}

// isLeaf returns whether the ref is pointing at a leaf page/node.
func (r *elemRef) isLeaf() bool {
	if r.node != nil {
		return r.node.isLeaf
	}
	return (r.page.flags & leafPageFlag) != 0
}

// count returns the number of inodes or page elements.
func (r *elemRef) count() int {
	if r.node != nil {
		return len(r.node.inodes)
	}
	return int(r.page.count)
}
Some files were not shown because too many files have changed in this diff.