*: introduce grpc dependency

release-2.1
Xiang Li 2015-06-29 18:59:00 -07:00
parent 718cb18ca2
commit 436bacd77a
166 changed files with 37659 additions and 1 deletion

24
Godeps/Godeps.json generated

@ -23,6 +23,10 @@
"Comment": "v1.0-71-g71f28ea",
"Rev": "71f28eaecbebd00604d87bb1de0dae8fcfa54bbd"
},
{
"ImportPath": "github.com/bradfitz/http2",
"Rev": "3e36af6d3af0e56fa3da71099f864933dea3d9fb"
},
{
"ImportPath": "github.com/codegangsta/cli",
"Comment": "1.2.0-26-gf7ebb76",
@ -45,6 +49,10 @@
"ImportPath": "github.com/gogo/protobuf/proto",
"Rev": "64f27bf06efee53589314a6e5a4af34cdd85adf6"
},
{
"ImportPath": "github.com/golang/glog",
"Rev": "44145f04b68cf362d9c4df2182967c2275eaefed"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "5677a0e3d5e89854c9974e1256839ee23f8233ca"
@ -104,6 +112,22 @@
{
"ImportPath": "golang.org/x/net/context",
"Rev": "7dbad50ab5b31073856416cdcfeb2796d682f844"
},
{
"ImportPath": "golang.org/x/oauth2",
"Rev": "3046bc76d6dfd7d3707f6640f85e42d9c4050f50"
},
{
"ImportPath": "google.golang.org/cloud/compute/metadata",
"Rev": "f20d6dcccb44ed49de45ae3703312cb46e627db1"
},
{
"ImportPath": "google.golang.org/cloud/internal",
"Rev": "f20d6dcccb44ed49de45ae3703312cb46e627db1"
},
{
"ImportPath": "google.golang.org/grpc",
"Rev": "f5ebd86be717593ab029545492c93ddf8914832b"
}
]
}
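
For reference, this Godeps change is what lets etcd code import the newly vendored gRPC package through the rewritten workspace path, the same convention the h2demo file later in this diff uses for bradfitz/http2. Below is a minimal, hypothetical sketch assuming only that convention and the grpc.NewServer/Serve API; no etcd service registration is shown:

package main

import (
	"log"
	"net"

	"github.com/coreos/etcd/Godeps/_workspace/src/google.golang.org/grpc"
)

func main() {
	// Listen on an ephemeral local port and run a bare gRPC server
	// backed by the vendored package introduced in this commit.
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	log.Fatal(srv.Serve(lis))
}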


@ -0,0 +1 @@
*~


@ -0,0 +1,19 @@
# This file is like Go's AUTHORS file: it lists Copyright holders.
# The list of humans who have contributed is in the CONTRIBUTORS file.
#
# To contribute to this project, because it will eventually be folded
# back into Go itself, you need to submit a CLA:
#
# http://golang.org/doc/contribute.html#copyright
#
# Then you get added to CONTRIBUTORS and you or your company get added
# to the AUTHORS file.
Blake Mizerany <blake.mizerany@gmail.com> github=bmizerany
Daniel Morsing <daniel.morsing@gmail.com> github=DanielMorsing
Gabriel Aszalos <gabriel.aszalos@gmail.com> github=gbbr
Google, Inc.
Keith Rarick <kr@xph.us> github=kr
Matthew Keenan <tank.en.mate@gmail.com> <github@mattkeenan.net> github=mattkeenan
Matt Layher <mdlayher@gmail.com> github=mdlayher
Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> github=tatsuhiro-t


@ -0,0 +1,19 @@
# This file is like Go's CONTRIBUTORS file: it lists humans.
# The list of copyright holders (which may be companies) is in the AUTHORS file.
#
# To contribute to this project, because it will eventually be folded
# back into Go itself, you need to submit a CLA:
#
# http://golang.org/doc/contribute.html#copyright
#
# Then you get added to CONTRIBUTORS and you or your company get added
# to the AUTHORS file.
Blake Mizerany <blake.mizerany@gmail.com> github=bmizerany
Brad Fitzpatrick <bradfitz@golang.org> github=bradfitz
Daniel Morsing <daniel.morsing@gmail.com> github=DanielMorsing
Gabriel Aszalos <gabriel.aszalos@gmail.com> github=gbbr
Keith Rarick <kr@xph.us> github=kr
Matthew Keenan <tank.en.mate@gmail.com> <github@mattkeenan.net> github=mattkeenan
Matt Layher <mdlayher@gmail.com> github=mdlayher
Tatsuhiro Tsujikawa <tatsuhiro.t@gmail.com> github=tatsuhiro-t


@ -0,0 +1,44 @@
#
# This Dockerfile builds a recent curl with HTTP/2 client support, using
# a recent nghttp2 build.
#
# See the Makefile for how to tag it. If Docker and that image are found, the
# Go tests use this curl binary for integration tests.
#
FROM ubuntu:trusty
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git-core build-essential wget
RUN apt-get install -y --no-install-recommends \
autotools-dev libtool pkg-config zlib1g-dev \
libcunit1-dev libssl-dev libxml2-dev libevent-dev \
automake autoconf
# Note: setting NGHTTP2_VER before the git clone, so an old git clone isn't cached:
ENV NGHTTP2_VER af24f8394e43f4
RUN cd /root && git clone https://github.com/tatsuhiro-t/nghttp2.git
WORKDIR /root/nghttp2
RUN git reset --hard $NGHTTP2_VER
RUN autoreconf -i
RUN automake
RUN autoconf
RUN ./configure
RUN make
RUN make install
WORKDIR /root
RUN wget http://curl.haxx.se/download/curl-7.40.0.tar.gz
RUN tar -zxvf curl-7.40.0.tar.gz
WORKDIR /root/curl-7.40.0
RUN ./configure --with-ssl --with-nghttp2=/usr/local
RUN make
RUN make install
RUN ldconfig
CMD ["-h"]
ENTRYPOINT ["/usr/local/bin/curl"]
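
The comment at the top of this Dockerfile says the Go tests use this curl binary for integration tests when the image is available. Below is a hedged sketch of how a test helper might shell out to the gohttp2/curl image; the docker flags and target URL are illustrative assumptions, not taken from this commit:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Run the gohttp2/curl image against a locally running HTTP/2 server.
	// --http2 asks curl to negotiate HTTP/2; -k skips certificate verification.
	out, err := exec.Command("docker", "run", "--rm", "--net=host",
		"gohttp2/curl", "--http2", "-k", "https://127.0.0.1:4430/").CombinedOutput()
	if err != nil {
		log.Fatalf("curl via docker failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}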


@ -0,0 +1,5 @@
We only accept contributions from users who have gone through Go's
contribution process (signed a CLA).
Please acknowledge whether you have (and use the same email) if
sending a pull request.


@ -0,0 +1,7 @@
Copyright 2014 Google & the Go AUTHORS
Go AUTHORS are:
See https://code.google.com/p/go/source/browse/AUTHORS
Licensed under the terms of Go itself:
https://code.google.com/p/go/source/browse/LICENSE


@ -0,0 +1,3 @@
curlimage:
docker build -t gohttp2/curl .

17
Godeps/_workspace/src/github.com/bradfitz/http2/README generated vendored Normal file

@ -0,0 +1,17 @@
This is a work-in-progress HTTP/2 implementation for Go.
It will eventually live in the Go standard library and won't require
any changes to your code to use. It will just be automatic.
Status:
* The server support is pretty good. A few things are missing
but are being worked on.
* The client work has just started but, since it shares a lot of code
with the server, is coming along much quicker.
Docs are at https://godoc.org/github.com/bradfitz/http2
Demo test server at https://http2.golang.org/
Help & bug reports welcome.
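
To make the README's claim concrete: the h2demo server later in this diff enables the package on a stock net/http server via http2.ConfigureServer. A minimal sketch along the same lines, using the upstream import path from this README; the certificate file names are placeholders:

package main

import (
	"log"
	"net/http"

	"github.com/bradfitz/http2"
)

func main() {
	srv := &http.Server{Addr: ":4430", Handler: http.DefaultServeMux}
	// Registers the HTTP/2 protocol on srv's TLS configuration.
	http2.ConfigureServer(srv, &http2.Server{})
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}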


@ -0,0 +1,75 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"errors"
)
// buffer is an io.ReadWriteCloser backed by a fixed size buffer.
// It never allocates, but moves old data as new data is written.
type buffer struct {
buf []byte
r, w int
closed bool
err error // err to return to reader
}
var (
errReadEmpty = errors.New("read from empty buffer")
errWriteFull = errors.New("write on full buffer")
)
// Read copies bytes from the buffer into p.
// It is an error to read when no data is available.
func (b *buffer) Read(p []byte) (n int, err error) {
n = copy(p, b.buf[b.r:b.w])
b.r += n
if b.closed && b.r == b.w {
err = b.err
} else if b.r == b.w && n == 0 {
err = errReadEmpty
}
return n, err
}
// Len returns the number of bytes of the unread portion of the buffer.
func (b *buffer) Len() int {
return b.w - b.r
}
// Write copies bytes from p into the buffer.
// It is an error to write more data than the buffer can hold.
func (b *buffer) Write(p []byte) (n int, err error) {
if b.closed {
return 0, errors.New("closed")
}
// Slide existing data to beginning.
if b.r > 0 && len(p) > len(b.buf)-b.w {
copy(b.buf, b.buf[b.r:b.w])
b.w -= b.r
b.r = 0
}
// Write new data.
n = copy(b.buf[b.w:], p)
b.w += n
if n < len(p) {
err = errWriteFull
}
return n, err
}
// Close marks the buffer as closed. Future calls to Write will
// return an error. Future calls to Read, once the buffer is
// empty, will return err.
func (b *buffer) Close(err error) {
if !b.closed {
b.closed = true
b.err = err
}
}


@ -0,0 +1,73 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"io"
"reflect"
"testing"
)
var bufferReadTests = []struct {
buf buffer
read, wn int
werr error
wp []byte
wbuf buffer
}{
{
buffer{[]byte{'a', 0}, 0, 1, false, nil},
5, 1, nil, []byte{'a'},
buffer{[]byte{'a', 0}, 1, 1, false, nil},
},
{
buffer{[]byte{'a', 0}, 0, 1, true, io.EOF},
5, 1, io.EOF, []byte{'a'},
buffer{[]byte{'a', 0}, 1, 1, true, io.EOF},
},
{
buffer{[]byte{0, 'a'}, 1, 2, false, nil},
5, 1, nil, []byte{'a'},
buffer{[]byte{0, 'a'}, 2, 2, false, nil},
},
{
buffer{[]byte{0, 'a'}, 1, 2, true, io.EOF},
5, 1, io.EOF, []byte{'a'},
buffer{[]byte{0, 'a'}, 2, 2, true, io.EOF},
},
{
buffer{[]byte{}, 0, 0, false, nil},
5, 0, errReadEmpty, []byte{},
buffer{[]byte{}, 0, 0, false, nil},
},
{
buffer{[]byte{}, 0, 0, true, io.EOF},
5, 0, io.EOF, []byte{},
buffer{[]byte{}, 0, 0, true, io.EOF},
},
}
func TestBufferRead(t *testing.T) {
for i, tt := range bufferReadTests {
read := make([]byte, tt.read)
n, err := tt.buf.Read(read)
if n != tt.wn {
t.Errorf("#%d: wn = %d want %d", i, n, tt.wn)
continue
}
if err != tt.werr {
t.Errorf("#%d: werr = %v want %v", i, err, tt.werr)
continue
}
read = read[:n]
if !reflect.DeepEqual(read, tt.wp) {
t.Errorf("#%d: read = %+v want %+v", i, read, tt.wp)
}
if !reflect.DeepEqual(tt.buf, tt.wbuf) {
t.Errorf("#%d: buf = %+v want %+v", i, tt.buf, tt.wbuf)
}
}
}


@ -0,0 +1,78 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import "fmt"
// An ErrCode is an unsigned 32-bit error code as defined in the HTTP/2 spec.
type ErrCode uint32
const (
ErrCodeNo ErrCode = 0x0
ErrCodeProtocol ErrCode = 0x1
ErrCodeInternal ErrCode = 0x2
ErrCodeFlowControl ErrCode = 0x3
ErrCodeSettingsTimeout ErrCode = 0x4
ErrCodeStreamClosed ErrCode = 0x5
ErrCodeFrameSize ErrCode = 0x6
ErrCodeRefusedStream ErrCode = 0x7
ErrCodeCancel ErrCode = 0x8
ErrCodeCompression ErrCode = 0x9
ErrCodeConnect ErrCode = 0xa
ErrCodeEnhanceYourCalm ErrCode = 0xb
ErrCodeInadequateSecurity ErrCode = 0xc
ErrCodeHTTP11Required ErrCode = 0xd
)
var errCodeName = map[ErrCode]string{
ErrCodeNo: "NO_ERROR",
ErrCodeProtocol: "PROTOCOL_ERROR",
ErrCodeInternal: "INTERNAL_ERROR",
ErrCodeFlowControl: "FLOW_CONTROL_ERROR",
ErrCodeSettingsTimeout: "SETTINGS_TIMEOUT",
ErrCodeStreamClosed: "STREAM_CLOSED",
ErrCodeFrameSize: "FRAME_SIZE_ERROR",
ErrCodeRefusedStream: "REFUSED_STREAM",
ErrCodeCancel: "CANCEL",
ErrCodeCompression: "COMPRESSION_ERROR",
ErrCodeConnect: "CONNECT_ERROR",
ErrCodeEnhanceYourCalm: "ENHANCE_YOUR_CALM",
ErrCodeInadequateSecurity: "INADEQUATE_SECURITY",
ErrCodeHTTP11Required: "HTTP_1_1_REQUIRED",
}
func (e ErrCode) String() string {
if s, ok := errCodeName[e]; ok {
return s
}
return fmt.Sprintf("unknown error code 0x%x", uint32(e))
}
// ConnectionError is an error that results in the termination of the
// entire connection.
type ConnectionError ErrCode
func (e ConnectionError) Error() string { return fmt.Sprintf("connection error: %s", ErrCode(e)) }
// StreamError is an error that only affects one stream within an
// HTTP/2 connection.
type StreamError struct {
StreamID uint32
Code ErrCode
}
func (e StreamError) Error() string {
return fmt.Sprintf("stream error: stream ID %d; %v", e.StreamID, e.Code)
}
// 6.9.1 The Flow Control Window
// "If a sender receives a WINDOW_UPDATE that causes a flow control
// window to exceed this maximum it MUST terminate either the stream
// or the connection, as appropriate. For streams, [...]; for the
// connection, a GOAWAY frame with a FLOW_CONTROL_ERROR code."
type goAwayFlowError struct{}
func (goAwayFlowError) Error() string { return "connection exceeded flow control window size" }


@ -0,0 +1,27 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import "testing"
func TestErrCodeString(t *testing.T) {
tests := []struct {
err ErrCode
want string
}{
{ErrCodeProtocol, "PROTOCOL_ERROR"},
{0xd, "HTTP_1_1_REQUIRED"},
{0xf, "unknown error code 0xf"},
}
for i, tt := range tests {
got := tt.err.String()
if got != tt.want {
t.Errorf("%d. Error = %q; want %q", i, got, tt.want)
}
}
}


@ -0,0 +1,51 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
// Flow control
package http2
// flow is the flow control window's size.
type flow struct {
// n is the number of DATA bytes we're allowed to send.
// A flow is kept both on a conn and a per-stream.
n int32
// conn points to the shared connection-level flow that is
// shared by all streams on that conn. It is nil for the flow
// that's on the conn directly.
conn *flow
}
func (f *flow) setConnFlow(cf *flow) { f.conn = cf }
func (f *flow) available() int32 {
n := f.n
if f.conn != nil && f.conn.n < n {
n = f.conn.n
}
return n
}
func (f *flow) take(n int32) {
if n > f.available() {
panic("internal error: took too much")
}
f.n -= n
if f.conn != nil {
f.conn.n -= n
}
}
// add adds n bytes (positive or negative) to the flow control window.
// It returns false if the sum would exceed 2^31-1.
func (f *flow) add(n int32) bool {
remain := (1<<31 - 1) - f.n
if n > remain {
return false
}
f.n += n
return true
}


@ -0,0 +1,54 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import "testing"
func TestFlow(t *testing.T) {
var st flow
var conn flow
st.add(3)
conn.add(2)
if got, want := st.available(), int32(3); got != want {
t.Errorf("available = %d; want %d", got, want)
}
st.setConnFlow(&conn)
if got, want := st.available(), int32(2); got != want {
t.Errorf("after parent setup, available = %d; want %d", got, want)
}
st.take(2)
if got, want := conn.available(), int32(0); got != want {
t.Errorf("after taking 2, conn = %d; want %d", got, want)
}
if got, want := st.available(), int32(0); got != want {
t.Errorf("after taking 2, stream = %d; want %d", got, want)
}
}
func TestFlowAdd(t *testing.T) {
var f flow
if !f.add(1) {
t.Fatal("failed to add 1")
}
if !f.add(-1) {
t.Fatal("failed to add -1")
}
if got, want := f.available(), int32(0); got != want {
t.Fatalf("size = %d; want %d", got, want)
}
if !f.add(1<<31 - 1) {
t.Fatal("failed to add 2^31-1")
}
if got, want := f.available(), int32(1<<31-1); got != want {
t.Fatalf("size = %d; want %d", got, want)
}
if f.add(1) {
t.Fatal("adding 1 to max shouldn't be allowed")
}
}

1113
Godeps/_workspace/src/github.com/bradfitz/http2/frame.go generated vendored Normal file

File diff suppressed because it is too large.


@ -0,0 +1,578 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package http2
import (
"bytes"
"reflect"
"strings"
"testing"
"unsafe"
)
func testFramer() (*Framer, *bytes.Buffer) {
buf := new(bytes.Buffer)
return NewFramer(buf, buf), buf
}
func TestFrameSizes(t *testing.T) {
// Catch people rearranging the FrameHeader fields.
if got, want := int(unsafe.Sizeof(FrameHeader{})), 12; got != want {
t.Errorf("FrameHeader size = %d; want %d", got, want)
}
}
func TestWriteRST(t *testing.T) {
fr, buf := testFramer()
var streamID uint32 = 1<<24 + 2<<16 + 3<<8 + 4
var errCode uint32 = 7<<24 + 6<<16 + 5<<8 + 4
fr.WriteRSTStream(streamID, ErrCode(errCode))
const wantEnc = "\x00\x00\x04\x03\x00\x01\x02\x03\x04\x07\x06\x05\x04"
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Fatal(err)
}
want := &RSTStreamFrame{
FrameHeader: FrameHeader{
valid: true,
Type: 0x3,
Flags: 0x0,
Length: 0x4,
StreamID: 0x1020304,
},
ErrCode: 0x7060504,
}
if !reflect.DeepEqual(f, want) {
t.Errorf("parsed back %#v; want %#v", f, want)
}
}
func TestWriteData(t *testing.T) {
fr, buf := testFramer()
var streamID uint32 = 1<<24 + 2<<16 + 3<<8 + 4
data := []byte("ABC")
fr.WriteData(streamID, true, data)
const wantEnc = "\x00\x00\x03\x00\x01\x01\x02\x03\x04ABC"
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Fatal(err)
}
df, ok := f.(*DataFrame)
if !ok {
t.Fatalf("got %T; want *DataFrame", f)
}
if !bytes.Equal(df.Data(), data) {
t.Errorf("got %q; want %q", df.Data(), data)
}
if f.Header().Flags&1 == 0 {
t.Errorf("didn't see END_STREAM flag")
}
}
func TestWriteHeaders(t *testing.T) {
tests := []struct {
name string
p HeadersFrameParam
wantEnc string
wantFrame *HeadersFrame
}{
{
"basic",
HeadersFrameParam{
StreamID: 42,
BlockFragment: []byte("abc"),
Priority: PriorityParam{},
},
"\x00\x00\x03\x01\x00\x00\x00\x00*abc",
&HeadersFrame{
FrameHeader: FrameHeader{
valid: true,
StreamID: 42,
Type: FrameHeaders,
Length: uint32(len("abc")),
},
Priority: PriorityParam{},
headerFragBuf: []byte("abc"),
},
},
{
"basic + end flags",
HeadersFrameParam{
StreamID: 42,
BlockFragment: []byte("abc"),
EndStream: true,
EndHeaders: true,
Priority: PriorityParam{},
},
"\x00\x00\x03\x01\x05\x00\x00\x00*abc",
&HeadersFrame{
FrameHeader: FrameHeader{
valid: true,
StreamID: 42,
Type: FrameHeaders,
Flags: FlagHeadersEndStream | FlagHeadersEndHeaders,
Length: uint32(len("abc")),
},
Priority: PriorityParam{},
headerFragBuf: []byte("abc"),
},
},
{
"with padding",
HeadersFrameParam{
StreamID: 42,
BlockFragment: []byte("abc"),
EndStream: true,
EndHeaders: true,
PadLength: 5,
Priority: PriorityParam{},
},
"\x00\x00\t\x01\r\x00\x00\x00*\x05abc\x00\x00\x00\x00\x00",
&HeadersFrame{
FrameHeader: FrameHeader{
valid: true,
StreamID: 42,
Type: FrameHeaders,
Flags: FlagHeadersEndStream | FlagHeadersEndHeaders | FlagHeadersPadded,
Length: uint32(1 + len("abc") + 5), // pad length + contents + padding
},
Priority: PriorityParam{},
headerFragBuf: []byte("abc"),
},
},
{
"with priority",
HeadersFrameParam{
StreamID: 42,
BlockFragment: []byte("abc"),
EndStream: true,
EndHeaders: true,
PadLength: 2,
Priority: PriorityParam{
StreamDep: 15,
Exclusive: true,
Weight: 127,
},
},
"\x00\x00\v\x01-\x00\x00\x00*\x02\x80\x00\x00\x0f\u007fabc\x00\x00",
&HeadersFrame{
FrameHeader: FrameHeader{
valid: true,
StreamID: 42,
Type: FrameHeaders,
Flags: FlagHeadersEndStream | FlagHeadersEndHeaders | FlagHeadersPadded | FlagHeadersPriority,
Length: uint32(1 + 5 + len("abc") + 2), // pad length + priority + contents + padding
},
Priority: PriorityParam{
StreamDep: 15,
Exclusive: true,
Weight: 127,
},
headerFragBuf: []byte("abc"),
},
},
}
for _, tt := range tests {
fr, buf := testFramer()
if err := fr.WriteHeaders(tt.p); err != nil {
t.Errorf("test %q: %v", tt.name, err)
continue
}
if buf.String() != tt.wantEnc {
t.Errorf("test %q: encoded %q; want %q", tt.name, buf.Bytes(), tt.wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Errorf("test %q: failed to read the frame back: %v", tt.name, err)
continue
}
if !reflect.DeepEqual(f, tt.wantFrame) {
t.Errorf("test %q: mismatch.\n got: %#v\nwant: %#v\n", tt.name, f, tt.wantFrame)
}
}
}
func TestWriteContinuation(t *testing.T) {
const streamID = 42
tests := []struct {
name string
end bool
frag []byte
wantFrame *ContinuationFrame
}{
{
"not end",
false,
[]byte("abc"),
&ContinuationFrame{
FrameHeader: FrameHeader{
valid: true,
StreamID: streamID,
Type: FrameContinuation,
Length: uint32(len("abc")),
},
headerFragBuf: []byte("abc"),
},
},
{
"end",
true,
[]byte("def"),
&ContinuationFrame{
FrameHeader: FrameHeader{
valid: true,
StreamID: streamID,
Type: FrameContinuation,
Flags: FlagContinuationEndHeaders,
Length: uint32(len("def")),
},
headerFragBuf: []byte("def"),
},
},
}
for _, tt := range tests {
fr, _ := testFramer()
if err := fr.WriteContinuation(streamID, tt.end, tt.frag); err != nil {
t.Errorf("test %q: %v", tt.name, err)
continue
}
f, err := fr.ReadFrame()
if err != nil {
t.Errorf("test %q: failed to read the frame back: %v", tt.name, err)
continue
}
if !reflect.DeepEqual(f, tt.wantFrame) {
t.Errorf("test %q: mismatch.\n got: %#v\nwant: %#v\n", tt.name, f, tt.wantFrame)
}
}
}
func TestWritePriority(t *testing.T) {
const streamID = 42
tests := []struct {
name string
priority PriorityParam
wantFrame *PriorityFrame
}{
{
"not exclusive",
PriorityParam{
StreamDep: 2,
Exclusive: false,
Weight: 127,
},
&PriorityFrame{
FrameHeader{
valid: true,
StreamID: streamID,
Type: FramePriority,
Length: 5,
},
PriorityParam{
StreamDep: 2,
Exclusive: false,
Weight: 127,
},
},
},
{
"exclusive",
PriorityParam{
StreamDep: 3,
Exclusive: true,
Weight: 77,
},
&PriorityFrame{
FrameHeader{
valid: true,
StreamID: streamID,
Type: FramePriority,
Length: 5,
},
PriorityParam{
StreamDep: 3,
Exclusive: true,
Weight: 77,
},
},
},
}
for _, tt := range tests {
fr, _ := testFramer()
if err := fr.WritePriority(streamID, tt.priority); err != nil {
t.Errorf("test %q: %v", tt.name, err)
continue
}
f, err := fr.ReadFrame()
if err != nil {
t.Errorf("test %q: failed to read the frame back: %v", tt.name, err)
continue
}
if !reflect.DeepEqual(f, tt.wantFrame) {
t.Errorf("test %q: mismatch.\n got: %#v\nwant: %#v\n", tt.name, f, tt.wantFrame)
}
}
}
func TestWriteSettings(t *testing.T) {
fr, buf := testFramer()
settings := []Setting{{1, 2}, {3, 4}}
fr.WriteSettings(settings...)
const wantEnc = "\x00\x00\f\x04\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x03\x00\x00\x00\x04"
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Fatal(err)
}
sf, ok := f.(*SettingsFrame)
if !ok {
t.Fatalf("Got a %T; want a SettingsFrame", f)
}
var got []Setting
sf.ForeachSetting(func(s Setting) error {
got = append(got, s)
valBack, ok := sf.Value(s.ID)
if !ok || valBack != s.Val {
t.Errorf("Value(%d) = %v, %v; want %v, true", s.ID, valBack, ok, s.Val)
}
return nil
})
if !reflect.DeepEqual(settings, got) {
t.Errorf("Read settings %+v != written settings %+v", got, settings)
}
}
func TestWriteSettingsAck(t *testing.T) {
fr, buf := testFramer()
fr.WriteSettingsAck()
const wantEnc = "\x00\x00\x00\x04\x01\x00\x00\x00\x00"
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
}
func TestWriteWindowUpdate(t *testing.T) {
fr, buf := testFramer()
const streamID = 1<<24 + 2<<16 + 3<<8 + 4
const incr = 7<<24 + 6<<16 + 5<<8 + 4
if err := fr.WriteWindowUpdate(streamID, incr); err != nil {
t.Fatal(err)
}
const wantEnc = "\x00\x00\x04\x08\x00\x01\x02\x03\x04\x07\x06\x05\x04"
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Fatal(err)
}
want := &WindowUpdateFrame{
FrameHeader: FrameHeader{
valid: true,
Type: 0x8,
Flags: 0x0,
Length: 0x4,
StreamID: 0x1020304,
},
Increment: 0x7060504,
}
if !reflect.DeepEqual(f, want) {
t.Errorf("parsed back %#v; want %#v", f, want)
}
}
func TestWritePing(t *testing.T) { testWritePing(t, false) }
func TestWritePingAck(t *testing.T) { testWritePing(t, true) }
func testWritePing(t *testing.T, ack bool) {
fr, buf := testFramer()
if err := fr.WritePing(ack, [8]byte{1, 2, 3, 4, 5, 6, 7, 8}); err != nil {
t.Fatal(err)
}
var wantFlags Flags
if ack {
wantFlags = FlagPingAck
}
var wantEnc = "\x00\x00\x08\x06" + string(wantFlags) + "\x00\x00\x00\x00" + "\x01\x02\x03\x04\x05\x06\x07\x08"
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Fatal(err)
}
want := &PingFrame{
FrameHeader: FrameHeader{
valid: true,
Type: 0x6,
Flags: wantFlags,
Length: 0x8,
StreamID: 0,
},
Data: [8]byte{1, 2, 3, 4, 5, 6, 7, 8},
}
if !reflect.DeepEqual(f, want) {
t.Errorf("parsed back %#v; want %#v", f, want)
}
}
func TestReadFrameHeader(t *testing.T) {
tests := []struct {
in string
want FrameHeader
}{
{in: "\x00\x00\x00" + "\x00" + "\x00" + "\x00\x00\x00\x00", want: FrameHeader{}},
{in: "\x01\x02\x03" + "\x04" + "\x05" + "\x06\x07\x08\x09", want: FrameHeader{
Length: 66051, Type: 4, Flags: 5, StreamID: 101124105,
}},
// Ignore high bit:
{in: "\xff\xff\xff" + "\xff" + "\xff" + "\xff\xff\xff\xff", want: FrameHeader{
Length: 16777215, Type: 255, Flags: 255, StreamID: 2147483647}},
{in: "\xff\xff\xff" + "\xff" + "\xff" + "\x7f\xff\xff\xff", want: FrameHeader{
Length: 16777215, Type: 255, Flags: 255, StreamID: 2147483647}},
}
for i, tt := range tests {
got, err := readFrameHeader(make([]byte, 9), strings.NewReader(tt.in))
if err != nil {
t.Errorf("%d. readFrameHeader(%q) = %v", i, tt.in, err)
continue
}
tt.want.valid = true
if got != tt.want {
t.Errorf("%d. readFrameHeader(%q) = %+v; want %+v", i, tt.in, got, tt.want)
}
}
}
func TestReadWriteFrameHeader(t *testing.T) {
tests := []struct {
len uint32
typ FrameType
flags Flags
streamID uint32
}{
{len: 0, typ: 255, flags: 1, streamID: 0},
{len: 0, typ: 255, flags: 1, streamID: 1},
{len: 0, typ: 255, flags: 1, streamID: 255},
{len: 0, typ: 255, flags: 1, streamID: 256},
{len: 0, typ: 255, flags: 1, streamID: 65535},
{len: 0, typ: 255, flags: 1, streamID: 65536},
{len: 0, typ: 1, flags: 255, streamID: 1},
{len: 255, typ: 1, flags: 255, streamID: 1},
{len: 256, typ: 1, flags: 255, streamID: 1},
{len: 65535, typ: 1, flags: 255, streamID: 1},
{len: 65536, typ: 1, flags: 255, streamID: 1},
{len: 16777215, typ: 1, flags: 255, streamID: 1},
}
for _, tt := range tests {
fr, buf := testFramer()
fr.startWrite(tt.typ, tt.flags, tt.streamID)
fr.writeBytes(make([]byte, tt.len))
fr.endWrite()
fh, err := ReadFrameHeader(buf)
if err != nil {
t.Errorf("ReadFrameHeader(%+v) = %v", tt, err)
continue
}
if fh.Type != tt.typ || fh.Flags != tt.flags || fh.Length != tt.len || fh.StreamID != tt.streamID {
t.Errorf("ReadFrameHeader(%+v) = %+v; mismatch", tt, fh)
}
}
}
func TestWriteTooLargeFrame(t *testing.T) {
fr, _ := testFramer()
fr.startWrite(0, 1, 1)
fr.writeBytes(make([]byte, 1<<24))
err := fr.endWrite()
if err != ErrFrameTooLarge {
t.Errorf("endWrite = %v; want errFrameTooLarge", err)
}
}
func TestWriteGoAway(t *testing.T) {
const debug = "foo"
fr, buf := testFramer()
if err := fr.WriteGoAway(0x01020304, 0x05060708, []byte(debug)); err != nil {
t.Fatal(err)
}
const wantEnc = "\x00\x00\v\a\x00\x00\x00\x00\x00\x01\x02\x03\x04\x05\x06\x07\x08" + debug
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Fatal(err)
}
want := &GoAwayFrame{
FrameHeader: FrameHeader{
valid: true,
Type: 0x7,
Flags: 0,
Length: uint32(4 + 4 + len(debug)),
StreamID: 0,
},
LastStreamID: 0x01020304,
ErrCode: 0x05060708,
debugData: []byte(debug),
}
if !reflect.DeepEqual(f, want) {
t.Fatalf("parsed back:\n%#v\nwant:\n%#v", f, want)
}
if got := string(f.(*GoAwayFrame).DebugData()); got != debug {
t.Errorf("debug data = %q; want %q", got, debug)
}
}
func TestWritePushPromise(t *testing.T) {
pp := PushPromiseParam{
StreamID: 42,
PromiseID: 42,
BlockFragment: []byte("abc"),
}
fr, buf := testFramer()
if err := fr.WritePushPromise(pp); err != nil {
t.Fatal(err)
}
const wantEnc = "\x00\x00\x07\x05\x00\x00\x00\x00*\x00\x00\x00*abc"
if buf.String() != wantEnc {
t.Errorf("encoded as %q; want %q", buf.Bytes(), wantEnc)
}
f, err := fr.ReadFrame()
if err != nil {
t.Fatal(err)
}
_, ok := f.(*PushPromiseFrame)
if !ok {
t.Fatalf("got %T; want *PushPromiseFrame", f)
}
want := &PushPromiseFrame{
FrameHeader: FrameHeader{
valid: true,
Type: 0x5,
Flags: 0x0,
Length: 0x7,
StreamID: 42,
},
PromiseID: 42,
headerFragBuf: []byte("abc"),
}
if !reflect.DeepEqual(f, want) {
t.Fatalf("parsed back:\n%#v\nwant:\n%#v", f, want)
}
}


@ -0,0 +1,169 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
// Defensive debug-only utility to track that functions run on the
// goroutine that they're supposed to.
package http2
import (
"bytes"
"errors"
"fmt"
"runtime"
"strconv"
"sync"
)
var DebugGoroutines = false
type goroutineLock uint64
func newGoroutineLock() goroutineLock {
return goroutineLock(curGoroutineID())
}
func (g goroutineLock) check() {
if !DebugGoroutines {
return
}
if curGoroutineID() != uint64(g) {
panic("running on the wrong goroutine")
}
}
func (g goroutineLock) checkNotOn() {
if !DebugGoroutines {
return
}
if curGoroutineID() == uint64(g) {
panic("running on the wrong goroutine")
}
}
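// Illustrative usage (editor's sketch, not part of the original file): code
// that must stay on its owning goroutine records a lock when that goroutine
// starts and asserts on later calls.
//
//	g := newGoroutineLock() // on the owning goroutine
//	g.check()               // panics (when DebugGoroutines) if run on another goroutine
//	g.checkNotOn()          // the inverse assertion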
var goroutineSpace = []byte("goroutine ")
func curGoroutineID() uint64 {
bp := littleBuf.Get().(*[]byte)
defer littleBuf.Put(bp)
b := *bp
b = b[:runtime.Stack(b, false)]
// Parse the 4707 out of "goroutine 4707 ["
b = bytes.TrimPrefix(b, goroutineSpace)
i := bytes.IndexByte(b, ' ')
if i < 0 {
panic(fmt.Sprintf("No space found in %q", b))
}
b = b[:i]
n, err := parseUintBytes(b, 10, 64)
if err != nil {
panic(fmt.Sprintf("Failed to parse goroutine ID out of %q: %v", b, err))
}
return n
}
var littleBuf = sync.Pool{
New: func() interface{} {
buf := make([]byte, 64)
return &buf
},
}
// parseUintBytes is like strconv.ParseUint, but using a []byte.
func parseUintBytes(s []byte, base int, bitSize int) (n uint64, err error) {
var cutoff, maxVal uint64
if bitSize == 0 {
bitSize = int(strconv.IntSize)
}
s0 := s
switch {
case len(s) < 1:
err = strconv.ErrSyntax
goto Error
case 2 <= base && base <= 36:
// valid base; nothing to do
case base == 0:
// Look for octal, hex prefix.
switch {
case s[0] == '0' && len(s) > 1 && (s[1] == 'x' || s[1] == 'X'):
base = 16
s = s[2:]
if len(s) < 1 {
err = strconv.ErrSyntax
goto Error
}
case s[0] == '0':
base = 8
default:
base = 10
}
default:
err = errors.New("invalid base " + strconv.Itoa(base))
goto Error
}
n = 0
cutoff = cutoff64(base)
maxVal = 1<<uint(bitSize) - 1
for i := 0; i < len(s); i++ {
var v byte
d := s[i]
switch {
case '0' <= d && d <= '9':
v = d - '0'
case 'a' <= d && d <= 'z':
v = d - 'a' + 10
case 'A' <= d && d <= 'Z':
v = d - 'A' + 10
default:
n = 0
err = strconv.ErrSyntax
goto Error
}
if int(v) >= base {
n = 0
err = strconv.ErrSyntax
goto Error
}
if n >= cutoff {
// n*base overflows
n = 1<<64 - 1
err = strconv.ErrRange
goto Error
}
n *= uint64(base)
n1 := n + uint64(v)
if n1 < n || n1 > maxVal {
// n+v overflows
n = 1<<64 - 1
err = strconv.ErrRange
goto Error
}
n = n1
}
return n, nil
Error:
return n, &strconv.NumError{Func: "ParseUint", Num: string(s0), Err: err}
}
// Return the first number n such that n*base >= 1<<64.
func cutoff64(base int) uint64 {
if base < 2 {
return 0
}
return (1<<64-1)/uint64(base) + 1
}


@ -0,0 +1,33 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"fmt"
"strings"
"testing"
)
func TestGoroutineLock(t *testing.T) {
DebugGoroutines = true
g := newGoroutineLock()
g.check()
sawPanic := make(chan interface{})
go func() {
defer func() { sawPanic <- recover() }()
g.check() // should panic
}()
e := <-sawPanic
if e == nil {
t.Fatal("did not see panic from check in other goroutine")
}
if !strings.Contains(fmt.Sprint(e), "wrong goroutine") {
t.Errorf("expected on see panic about running on the wrong goroutine; got %v", e)
}
}


@ -0,0 +1,5 @@
h2demo
h2demo.linux
client-id.dat
client-secret.dat
token.dat


@ -0,0 +1,5 @@
h2demo.linux: h2demo.go
GOOS=linux go build --tags=h2demo -o h2demo.linux .
upload: h2demo.linux
cat h2demo.linux | go run launch.go --write_object=http2-demo-server-tls/h2demo --write_object_is_public


@ -0,0 +1,16 @@
Client:
-- Firefox nightly with about:config network.http.spdy.enabled.http2draft set true
-- Chrome: go to chrome://flags/#enable-spdy4, save and restart (button at bottom)
Make CA:
$ openssl genrsa -out rootCA.key 2048
$ openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem
... install that to Firefox
Make cert:
$ openssl genrsa -out server.key 2048
$ openssl req -new -key server.key -out server.csr
$ openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out server.crt -days 500
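
Besides installing rootCA.pem into Firefox, a Go client can trust the same root when talking to the demo server. A hedged sketch using only the standard library (it will speak HTTP/1 to the server, since the package's HTTP/2 client is still in progress); the file name matches this README and the address matches the demo's default, the rest is illustrative:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Load the locally generated root CA and add it to a cert pool.
	pem, err := ioutil.ReadFile("rootCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		log.Fatal("failed to parse rootCA.pem")
	}
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	res, err := client.Get("https://localhost:4430/")
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
	log.Println(res.Proto, res.Status)
}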


@ -0,0 +1,426 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
// +build h2demo
package main
import (
"bytes"
"crypto/tls"
"flag"
"fmt"
"hash/crc32"
"image"
"image/jpeg"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"os/exec"
"path"
"regexp"
"runtime"
"strconv"
"strings"
"sync"
"time"
"camlistore.org/pkg/googlestorage"
"camlistore.org/pkg/singleflight"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/bradfitz/http2"
)
var (
openFirefox = flag.Bool("openff", false, "Open Firefox")
addr = flag.String("addr", "localhost:4430", "TLS address to listen on")
httpAddr = flag.String("httpaddr", "", "If non-empty, address to listen for regular HTTP on")
prod = flag.Bool("prod", false, "Whether to configure itself to be the production http2.golang.org server.")
)
func homeOldHTTP(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, `<html>
<body>
<h1>Go + HTTP/2</h1>
<p>Welcome to <a href="https://golang.org/">the Go language</a>'s <a href="https://http2.github.io/">HTTP/2</a> demo & interop server.</p>
<p>Unfortunately, you're <b>not</b> using HTTP/2 right now. To do so:</p>
<ul>
<li>Use Firefox Nightly or go to <b>about:config</b> and enable "network.http.spdy.enabled.http2draft"</li>
<li>Use Google Chrome Canary and/or go to <b>chrome://flags/#enable-spdy4</b> to <i>Enable SPDY/4</i> (Chrome's name for HTTP/2)</li>
</ul>
<p>See code & instructions for connecting at <a href="https://github.com/bradfitz/http2">https://github.com/bradfitz/http2</a>.</p>
</body></html>`)
}
func home(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/" {
http.NotFound(w, r)
return
}
io.WriteString(w, `<html>
<body>
<h1>Go + HTTP/2</h1>
<p>Welcome to <a href="https://golang.org/">the Go language</a>'s <a
href="https://http2.github.io/">HTTP/2</a> demo & interop server.</p>
<p>Congratulations, <b>you're using HTTP/2 right now</b>.</p>
<p>This server exists for others in the HTTP/2 community to test their HTTP/2 client implementations and point out flaws in our server.</p>
<p> The code is currently at <a
href="https://github.com/bradfitz/http2">github.com/bradfitz/http2</a>
but will move to the Go standard library at some point in the future
(enabled by default, without users needing to change their code).</p>
<p>Contact info: <i>bradfitz@golang.org</i>, or <a
href="https://github.com/bradfitz/http2/issues">file a bug</a>.</p>
<h2>Handlers for testing</h2>
<ul>
<li>GET <a href="/reqinfo">/reqinfo</a> to dump the request + headers received</li>
<li>GET <a href="/clockstream">/clockstream</a> streams the current time every second</li>
<li>GET <a href="/gophertiles">/gophertiles</a> to see a page with a bunch of images</li>
<li>GET <a href="/file/gopher.png">/file/gopher.png</a> for a small file (does If-Modified-Since, Content-Range, etc)</li>
<li>GET <a href="/file/go.src.tar.gz">/file/go.src.tar.gz</a> for a larger file (~10 MB)</li>
<li>GET <a href="/redirect">/redirect</a> to redirect back to / (this page)</li>
<li>GET <a href="/goroutines">/goroutines</a> to see all active goroutines in this server</li>
<li>PUT something to <a href="/crc32">/crc32</a> to get a count of number of bytes and its CRC-32</li>
</ul>
</body></html>`)
}
func reqInfoHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
fmt.Fprintf(w, "Method: %s\n", r.Method)
fmt.Fprintf(w, "Protocol: %s\n", r.Proto)
fmt.Fprintf(w, "Host: %s\n", r.Host)
fmt.Fprintf(w, "RemoteAddr: %s\n", r.RemoteAddr)
fmt.Fprintf(w, "RequestURI: %q\n", r.RequestURI)
fmt.Fprintf(w, "URL: %#v\n", r.URL)
fmt.Fprintf(w, "Body.ContentLength: %d (-1 means unknown)\n", r.ContentLength)
fmt.Fprintf(w, "Close: %v (relevant for HTTP/1 only)\n", r.Close)
fmt.Fprintf(w, "TLS: %#v\n", r.TLS)
fmt.Fprintf(w, "\nHeaders:\n")
r.Header.Write(w)
}
func crcHandler(w http.ResponseWriter, r *http.Request) {
if r.Method != "PUT" {
http.Error(w, "PUT required.", 400)
return
}
crc := crc32.NewIEEE()
n, err := io.Copy(crc, r.Body)
if err == nil {
w.Header().Set("Content-Type", "text/plain")
fmt.Fprintf(w, "bytes=%d, CRC32=%x", n, crc.Sum(nil))
}
}
var (
fsGrp singleflight.Group
fsMu sync.Mutex // guards fsCache
fsCache = map[string]http.Handler{}
)
// fileServer returns a file-serving handler that proxies URL.
// It lazily fetches URL on the first access and caches its contents forever.
func fileServer(url string) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
hi, err := fsGrp.Do(url, func() (interface{}, error) {
fsMu.Lock()
if h, ok := fsCache[url]; ok {
fsMu.Unlock()
return h, nil
}
fsMu.Unlock()
res, err := http.Get(url)
if err != nil {
return nil, err
}
defer res.Body.Close()
slurp, err := ioutil.ReadAll(res.Body)
if err != nil {
return nil, err
}
modTime := time.Now()
var h http.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.ServeContent(w, r, path.Base(url), modTime, bytes.NewReader(slurp))
})
fsMu.Lock()
fsCache[url] = h
fsMu.Unlock()
return h, nil
})
if err != nil {
http.Error(w, err.Error(), 500)
return
}
hi.(http.Handler).ServeHTTP(w, r)
})
}
func clockStreamHandler(w http.ResponseWriter, r *http.Request) {
clientGone := w.(http.CloseNotifier).CloseNotify()
w.Header().Set("Content-Type", "text/plain")
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
fmt.Fprintf(w, "# ~1KB of junk to force browsers to start rendering immediately: \n")
io.WriteString(w, strings.Repeat("# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n", 13))
for {
fmt.Fprintf(w, "%v\n", time.Now())
w.(http.Flusher).Flush()
select {
case <-ticker.C:
case <-clientGone:
log.Printf("Client %v disconnected from the clock", r.RemoteAddr)
return
}
}
}
func registerHandlers() {
tiles := newGopherTilesHandler()
mux2 := http.NewServeMux()
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
if r.TLS == nil {
if r.URL.Path == "/gophertiles" {
tiles.ServeHTTP(w, r)
return
}
http.Redirect(w, r, "https://http2.golang.org/", http.StatusFound)
return
}
if r.ProtoMajor == 1 {
if r.URL.Path == "/reqinfo" {
reqInfoHandler(w, r)
return
}
homeOldHTTP(w, r)
return
}
mux2.ServeHTTP(w, r)
})
mux2.HandleFunc("/", home)
mux2.Handle("/file/gopher.png", fileServer("https://golang.org/doc/gopher/frontpage.png"))
mux2.Handle("/file/go.src.tar.gz", fileServer("https://storage.googleapis.com/golang/go1.4.1.src.tar.gz"))
mux2.HandleFunc("/reqinfo", reqInfoHandler)
mux2.HandleFunc("/crc32", crcHandler)
mux2.HandleFunc("/clockstream", clockStreamHandler)
mux2.Handle("/gophertiles", tiles)
mux2.HandleFunc("/redirect", func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "/", http.StatusFound)
})
stripHomedir := regexp.MustCompile(`/(Users|home)/\w+`)
mux2.HandleFunc("/goroutines", func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
buf := make([]byte, 2<<20)
w.Write(stripHomedir.ReplaceAll(buf[:runtime.Stack(buf, true)], nil))
})
}
func newGopherTilesHandler() http.Handler {
const gopherURL = "https://blog.golang.org/go-programming-language-turns-two_gophers.jpg"
res, err := http.Get(gopherURL)
if err != nil {
log.Fatal(err)
}
if res.StatusCode != 200 {
log.Fatalf("Error fetching %s: %v", gopherURL, res.Status)
}
slurp, err := ioutil.ReadAll(res.Body)
res.Body.Close()
if err != nil {
log.Fatal(err)
}
im, err := jpeg.Decode(bytes.NewReader(slurp))
if err != nil {
if len(slurp) > 1024 {
slurp = slurp[:1024]
}
log.Fatalf("Failed to decode gopher image: %v (got %q)", err, slurp)
}
type subImager interface {
SubImage(image.Rectangle) image.Image
}
const tileSize = 32
xt := im.Bounds().Max.X / tileSize
yt := im.Bounds().Max.Y / tileSize
var tile [][][]byte // y -> x -> jpeg bytes
for yi := 0; yi < yt; yi++ {
var row [][]byte
for xi := 0; xi < xt; xi++ {
si := im.(subImager).SubImage(image.Rectangle{
Min: image.Point{xi * tileSize, yi * tileSize},
Max: image.Point{(xi + 1) * tileSize, (yi + 1) * tileSize},
})
buf := new(bytes.Buffer)
if err := jpeg.Encode(buf, si, &jpeg.Options{Quality: 90}); err != nil {
log.Fatal(err)
}
row = append(row, buf.Bytes())
}
tile = append(tile, row)
}
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ms, _ := strconv.Atoi(r.FormValue("latency"))
const nanosPerMilli = 1e6
if r.FormValue("x") != "" {
x, _ := strconv.Atoi(r.FormValue("x"))
y, _ := strconv.Atoi(r.FormValue("y"))
if ms <= 1000 {
time.Sleep(time.Duration(ms) * nanosPerMilli)
}
if x >= 0 && x < xt && y >= 0 && y < yt {
http.ServeContent(w, r, "", time.Time{}, bytes.NewReader(tile[y][x]))
return
}
}
io.WriteString(w, "<html><body>")
fmt.Fprintf(w, "A grid of %d tiled images is below. Compare:<p>", xt*yt)
for _, ms := range []int{0, 30, 200, 1000} {
d := time.Duration(ms) * nanosPerMilli
fmt.Fprintf(w, "[<a href='https://%s/gophertiles?latency=%d'>HTTP/2, %v latency</a>] [<a href='http://%s/gophertiles?latency=%d'>HTTP/1, %v latency</a>]<br>\n",
httpsHost(), ms, d,
httpHost(), ms, d,
)
}
io.WriteString(w, "<p>\n")
cacheBust := time.Now().UnixNano()
for y := 0; y < yt; y++ {
for x := 0; x < xt; x++ {
fmt.Fprintf(w, "<img width=%d height=%d src='/gophertiles?x=%d&y=%d&cachebust=%d&latency=%d'>",
tileSize, tileSize, x, y, cacheBust, ms)
}
io.WriteString(w, "<br/>\n")
}
io.WriteString(w, "<hr><a href='/'>&lt;&lt Back to Go HTTP/2 demo server</a></body></html>")
})
}
func httpsHost() string {
if *prod {
return "http2.golang.org"
}
if v := *addr; strings.HasPrefix(v, ":") {
return "localhost" + v
} else {
return v
}
}
func httpHost() string {
if *prod {
return "http2.golang.org"
}
if v := *httpAddr; strings.HasPrefix(v, ":") {
return "localhost" + v
} else {
return v
}
}
func serveProdTLS() error {
c, err := googlestorage.NewServiceClient()
if err != nil {
return err
}
slurp := func(key string) ([]byte, error) {
const bucket = "http2-demo-server-tls"
rc, _, err := c.GetObject(&googlestorage.Object{
Bucket: bucket,
Key: key,
})
if err != nil {
return nil, fmt.Errorf("Error fetching GCS object %q in bucket %q: %v", key, bucket, err)
}
defer rc.Close()
return ioutil.ReadAll(rc)
}
certPem, err := slurp("http2.golang.org.chained.pem")
if err != nil {
return err
}
keyPem, err := slurp("http2.golang.org.key")
if err != nil {
return err
}
cert, err := tls.X509KeyPair(certPem, keyPem)
if err != nil {
return err
}
srv := &http.Server{
TLSConfig: &tls.Config{
Certificates: []tls.Certificate{cert},
},
}
http2.ConfigureServer(srv, &http2.Server{})
ln, err := net.Listen("tcp", ":443")
if err != nil {
return err
}
return srv.Serve(tls.NewListener(tcpKeepAliveListener{ln.(*net.TCPListener)}, srv.TLSConfig))
}
type tcpKeepAliveListener struct {
*net.TCPListener
}
func (ln tcpKeepAliveListener) Accept() (c net.Conn, err error) {
tc, err := ln.AcceptTCP()
if err != nil {
return
}
tc.SetKeepAlive(true)
tc.SetKeepAlivePeriod(3 * time.Minute)
return tc, nil
}
func serveProd() error {
errc := make(chan error, 2)
go func() { errc <- http.ListenAndServe(":80", nil) }()
go func() { errc <- serveProdTLS() }()
return <-errc
}
func main() {
var srv http.Server
flag.BoolVar(&http2.VerboseLogs, "verbose", false, "Verbose HTTP/2 debugging.")
flag.Parse()
srv.Addr = *addr
registerHandlers()
if *prod {
*httpAddr = "http2.golang.org"
log.Fatal(serveProd())
}
url := "https://" + *addr + "/"
log.Printf("Listening on " + url)
http2.ConfigureServer(&srv, &http2.Server{})
if *httpAddr != "" {
go func() { log.Fatal(http.ListenAndServe(*httpAddr, nil)) }()
}
go func() {
log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}()
if *openFirefox && runtime.GOOS == "darwin" {
time.Sleep(250 * time.Millisecond)
exec.Command("open", "-b", "org.mozilla.nightly", "https://localhost:4430/").Run()
}
select {}
}


@ -0,0 +1,279 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bufio"
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"os"
"strings"
"time"
"code.google.com/p/goauth2/oauth"
compute "code.google.com/p/google-api-go-client/compute/v1"
)
var (
proj = flag.String("project", "symbolic-datum-552", "name of Project")
zone = flag.String("zone", "us-central1-a", "GCE zone")
mach = flag.String("machinetype", "n1-standard-1", "Machine type")
instName = flag.String("instance_name", "http2-demo", "Name of VM instance.")
sshPub = flag.String("ssh_public_key", "", "ssh public key file to authorize. Can modify later in Google's web UI anyway.")
staticIP = flag.String("static_ip", "130.211.116.44", "Static IP to use. If empty, automatic.")
writeObject = flag.String("write_object", "", "If non-empty, a VM isn't created and the flag value is the Google Cloud Storage bucket/object to write. The contents are read from stdin.")
publicObject = flag.Bool("write_object_is_public", false, "Whether the object created by --write_object should be public.")
)
func readFile(v string) string {
slurp, err := ioutil.ReadFile(v)
if err != nil {
log.Fatalf("Error reading %s: %v", v, err)
}
return strings.TrimSpace(string(slurp))
}
var config = &oauth.Config{
// The client-id and secret should be for an "Installed Application" when using
// the CLI. Later we'll use a web application with a callback.
ClientId: readFile("client-id.dat"),
ClientSecret: readFile("client-secret.dat"),
Scope: strings.Join([]string{
compute.DevstorageFull_controlScope,
compute.ComputeScope,
"https://www.googleapis.com/auth/sqlservice",
"https://www.googleapis.com/auth/sqlservice.admin",
}, " "),
AuthURL: "https://accounts.google.com/o/oauth2/auth",
TokenURL: "https://accounts.google.com/o/oauth2/token",
RedirectURL: "urn:ietf:wg:oauth:2.0:oob",
}
const baseConfig = `#cloud-config
coreos:
units:
- name: h2demo.service
command: start
content: |
[Unit]
Description=HTTP2 Demo
[Service]
ExecStartPre=/bin/bash -c 'mkdir -p /opt/bin && curl -s -o /opt/bin/h2demo http://storage.googleapis.com/http2-demo-server-tls/h2demo && chmod +x /opt/bin/h2demo'
ExecStart=/opt/bin/h2demo --prod
RestartSec=5s
Restart=always
Type=simple
[Install]
WantedBy=multi-user.target
`
func main() {
flag.Parse()
if *proj == "" {
log.Fatalf("Missing --project flag")
}
prefix := "https://www.googleapis.com/compute/v1/projects/" + *proj
machType := prefix + "/zones/" + *zone + "/machineTypes/" + *mach
tr := &oauth.Transport{
Config: config,
}
tokenCache := oauth.CacheFile("token.dat")
token, err := tokenCache.Token()
if err != nil {
if *writeObject != "" {
log.Fatalf("Can't use --write_object without a valid token.dat file already cached.")
}
log.Printf("Error getting token from %s: %v", string(tokenCache), err)
log.Printf("Get auth code from %v", config.AuthCodeURL("my-state"))
fmt.Print("\nEnter auth code: ")
sc := bufio.NewScanner(os.Stdin)
sc.Scan()
authCode := strings.TrimSpace(sc.Text())
token, err = tr.Exchange(authCode)
if err != nil {
log.Fatalf("Error exchanging auth code for a token: %v", err)
}
tokenCache.PutToken(token)
}
tr.Token = token
oauthClient := &http.Client{Transport: tr}
if *writeObject != "" {
writeCloudStorageObject(oauthClient)
return
}
computeService, _ := compute.New(oauthClient)
natIP := *staticIP
if natIP == "" {
// Try to find it by name.
aggAddrList, err := computeService.Addresses.AggregatedList(*proj).Do()
if err != nil {
log.Fatal(err)
}
// http://godoc.org/code.google.com/p/google-api-go-client/compute/v1#AddressAggregatedList
IPLoop:
for _, asl := range aggAddrList.Items {
for _, addr := range asl.Addresses {
if addr.Name == *instName+"-ip" && addr.Status == "RESERVED" {
natIP = addr.Address
break IPLoop
}
}
}
}
cloudConfig := baseConfig
if *sshPub != "" {
key := strings.TrimSpace(readFile(*sshPub))
cloudConfig += fmt.Sprintf("\nssh_authorized_keys:\n - %s\n", key)
}
if os.Getenv("USER") == "bradfitz" {
cloudConfig += fmt.Sprintf("\nssh_authorized_keys:\n - %s\n", "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAwks9dwWKlRC+73gRbvYtVg0vdCwDSuIlyt4z6xa/YU/jTDynM4R4W10hm2tPjy8iR1k8XhDv4/qdxe6m07NjG/By1tkmGpm1mGwho4Pr5kbAAy/Qg+NLCSdAYnnE00FQEcFOC15GFVMOW2AzDGKisReohwH9eIzHPzdYQNPRWXE= bradfitz@papag.bradfitz.com")
}
const maxCloudConfig = 32 << 10 // per compute API docs
if len(cloudConfig) > maxCloudConfig {
log.Fatalf("cloud config length of %d bytes is over %d byte limit", len(cloudConfig), maxCloudConfig)
}
instance := &compute.Instance{
Name: *instName,
Description: "Go Builder",
MachineType: machType,
Disks: []*compute.AttachedDisk{instanceDisk(computeService)},
Tags: &compute.Tags{
Items: []string{"http-server", "https-server"},
},
Metadata: &compute.Metadata{
Items: []*compute.MetadataItems{
{
Key: "user-data",
Value: cloudConfig,
},
},
},
NetworkInterfaces: []*compute.NetworkInterface{
&compute.NetworkInterface{
AccessConfigs: []*compute.AccessConfig{
&compute.AccessConfig{
Type: "ONE_TO_ONE_NAT",
Name: "External NAT",
NatIP: natIP,
},
},
Network: prefix + "/global/networks/default",
},
},
ServiceAccounts: []*compute.ServiceAccount{
{
Email: "default",
Scopes: []string{
compute.DevstorageFull_controlScope,
compute.ComputeScope,
},
},
},
}
log.Printf("Creating instance...")
op, err := computeService.Instances.Insert(*proj, *zone, instance).Do()
if err != nil {
log.Fatalf("Failed to create instance: %v", err)
}
opName := op.Name
log.Printf("Created. Waiting on operation %v", opName)
OpLoop:
for {
time.Sleep(2 * time.Second)
op, err := computeService.ZoneOperations.Get(*proj, *zone, opName).Do()
if err != nil {
log.Fatalf("Failed to get op %s: %v", opName, err)
}
switch op.Status {
case "PENDING", "RUNNING":
log.Printf("Waiting on operation %v", opName)
continue
case "DONE":
if op.Error != nil {
for _, operr := range op.Error.Errors {
log.Printf("Error: %+v", operr)
}
log.Fatalf("Failed to start.")
}
log.Printf("Success. %+v", op)
break OpLoop
default:
log.Fatalf("Unknown status %q: %+v", op.Status, op)
}
}
inst, err := computeService.Instances.Get(*proj, *zone, *instName).Do()
if err != nil {
log.Fatalf("Error getting instance after creation: %v", err)
}
ij, _ := json.MarshalIndent(inst, "", " ")
log.Printf("Instance: %s", ij)
}
func instanceDisk(svc *compute.Service) *compute.AttachedDisk {
const imageURL = "https://www.googleapis.com/compute/v1/projects/coreos-cloud/global/images/coreos-stable-444-5-0-v20141016"
diskName := *instName + "-disk"
return &compute.AttachedDisk{
AutoDelete: true,
Boot: true,
Type: "PERSISTENT",
InitializeParams: &compute.AttachedDiskInitializeParams{
DiskName: diskName,
SourceImage: imageURL,
DiskSizeGb: 50,
},
}
}
func writeCloudStorageObject(httpClient *http.Client) {
content := os.Stdin
const maxSlurp = 1 << 20
var buf bytes.Buffer
n, err := io.CopyN(&buf, content, maxSlurp)
if err != nil && err != io.EOF {
log.Fatalf("Error reading from stdin: %v, %v", n, err)
}
contentType := http.DetectContentType(buf.Bytes())
req, err := http.NewRequest("PUT", "https://storage.googleapis.com/"+*writeObject, io.MultiReader(&buf, content))
if err != nil {
log.Fatal(err)
}
req.Header.Set("x-goog-api-version", "2")
if *publicObject {
req.Header.Set("x-goog-acl", "public-read")
}
req.Header.Set("Content-Type", contentType)
res, err := httpClient.Do(req)
if err != nil {
log.Fatal(err)
}
if res.StatusCode != 200 {
res.Write(os.Stderr)
log.Fatalf("Failed.")
}
log.Printf("Success.")
os.Exit(0)
}


@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAt5fAjp4fTcekWUTfzsp0kyih1OYbsGL0KX1eRbSSR8Od0+9Q
62Hyny+GFwMTb4A/KU8mssoHvcceSAAbwfbxFK/+s51TobqUnORZrOoTZjkUygby
XDSK99YBbcR1Pip8vwMTm4XKuLtCigeBBdjjAQdgUO28LENGlsMnmeYkJfODVGnV
mr5Ltb9ANA8IKyTfsnHJ4iOCS/PlPbUj2q7YnoVLposUBMlgUb/CykX3mOoLb4yJ
JQyA/iST6ZxiIEj36D4yWZ5lg7YJl+UiiBQHGCnPdGyipqV06ex0heYWcaiW8LWZ
SUQ93jQ+WVCH8hT7DQO1dmsvUmXlq/JeAlwQ/QIDAQABAoIBAFFHV7JMAqPWnMYA
nezY6J81v9+XN+7xABNWM2Q8uv4WdksbigGLTXR3/680Z2hXqJ7LMeC5XJACFT/e
/Gr0vmpgOCygnCPfjGehGKpavtfksXV3edikUlnCXsOP1C//c1bFL+sMYmFCVgTx
qYdDK8yKzXNGrKYT6q5YG7IglyRNV1rsQa8lM/5taFYiD1Ck/3tQi3YIq8Lcuser
hrxsMABcQ6mi+EIvG6Xr4mfJug0dGJMHG4RG1UGFQn6RXrQq2+q53fC8ZbVUSi0j
NQ918aKFzktwv+DouKU0ME4I9toks03gM860bAL7zCbKGmwR3hfgX/TqzVCWpG9E
LDVfvekCgYEA8fk9N53jbBRmULUGEf4qWypcLGiZnNU0OeXWpbPV9aa3H0VDytA7
8fCN2dPAVDPqlthMDdVe983NCNwp2Yo8ZimDgowyIAKhdC25s1kejuaiH9OAPj3c
0f8KbriYX4n8zNHxFwK6Ae3pQ6EqOLJVCUsziUaZX9nyKY5aZlyX6xcCgYEAwjws
K62PjC64U5wYddNLp+kNdJ4edx+a7qBb3mEgPvSFT2RO3/xafJyG8kQB30Mfstjd
bRxyUV6N0vtX1zA7VQtRUAvfGCecpMo+VQZzcHXKzoRTnQ7eZg4Lmj5fQ9tOAKAo
QCVBoSW/DI4PZL26CAMDcAba4Pa22ooLapoRIQsCgYA6pIfkkbxLNkpxpt2YwLtt
Kr/590O7UaR9n6k8sW/aQBRDXNsILR1KDl2ifAIxpf9lnXgZJiwE7HiTfCAcW7c1
nzwDCI0hWuHcMTS/NYsFYPnLsstyyjVZI3FY0h4DkYKV9Q9z3zJLQ2hz/nwoD3gy
b2pHC7giFcTts1VPV4Nt8wKBgHeFn4ihHJweg76vZz3Z78w7VNRWGFklUalVdDK7
gaQ7w2y/ROn/146mo0OhJaXFIFRlrpvdzVrU3GDf2YXJYDlM5ZRkObwbZADjksev
WInzcgDy3KDg7WnPasRXbTfMU4t/AkW2p1QKbi3DnSVYuokDkbH2Beo45vxDxhKr
C69RAoGBAIyo3+OJenoZmoNzNJl2WPW5MeBUzSh8T/bgyjFTdqFHF5WiYRD/lfHj
x9Glyw2nutuT4hlOqHvKhgTYdDMsF2oQ72fe3v8Q5FU7FuKndNPEAyvKNXZaShVA
hnlhv5DjXKb0wFWnt5PCCiQLtzG0yyHaITrrEme7FikkIcTxaX/Y
-----END RSA PRIVATE KEY-----


@ -0,0 +1,26 @@
-----BEGIN CERTIFICATE-----
MIIEWjCCA0KgAwIBAgIJALfRlWsI8YQHMA0GCSqGSIb3DQEBBQUAMHsxCzAJBgNV
BAYTAlVTMQswCQYDVQQIEwJDQTEWMBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEUMBIG
A1UEChMLQnJhZGZpdHppbmMxEjAQBgNVBAMTCWxvY2FsaG9zdDEdMBsGCSqGSIb3
DQEJARYOYnJhZEBkYW5nYS5jb20wHhcNMTQwNzE1MjA0NjA1WhcNMTcwNTA0MjA0
NjA1WjB7MQswCQYDVQQGEwJVUzELMAkGA1UECBMCQ0ExFjAUBgNVBAcTDVNhbiBG
cmFuY2lzY28xFDASBgNVBAoTC0JyYWRmaXR6aW5jMRIwEAYDVQQDEwlsb2NhbGhv
c3QxHTAbBgkqhkiG9w0BCQEWDmJyYWRAZGFuZ2EuY29tMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAt5fAjp4fTcekWUTfzsp0kyih1OYbsGL0KX1eRbSS
R8Od0+9Q62Hyny+GFwMTb4A/KU8mssoHvcceSAAbwfbxFK/+s51TobqUnORZrOoT
ZjkUygbyXDSK99YBbcR1Pip8vwMTm4XKuLtCigeBBdjjAQdgUO28LENGlsMnmeYk
JfODVGnVmr5Ltb9ANA8IKyTfsnHJ4iOCS/PlPbUj2q7YnoVLposUBMlgUb/CykX3
mOoLb4yJJQyA/iST6ZxiIEj36D4yWZ5lg7YJl+UiiBQHGCnPdGyipqV06ex0heYW
caiW8LWZSUQ93jQ+WVCH8hT7DQO1dmsvUmXlq/JeAlwQ/QIDAQABo4HgMIHdMB0G
A1UdDgQWBBRcAROthS4P4U7vTfjByC569R7E6DCBrQYDVR0jBIGlMIGigBRcAROt
hS4P4U7vTfjByC569R7E6KF/pH0wezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAkNB
MRYwFAYDVQQHEw1TYW4gRnJhbmNpc2NvMRQwEgYDVQQKEwtCcmFkZml0emluYzES
MBAGA1UEAxMJbG9jYWxob3N0MR0wGwYJKoZIhvcNAQkBFg5icmFkQGRhbmdhLmNv
bYIJALfRlWsI8YQHMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEBAG6h
U9f9sNH0/6oBbGGy2EVU0UgITUQIrFWo9rFkrW5k/XkDjQm+3lzjT0iGR4IxE/Ao
eU6sQhua7wrWeFEn47GL98lnCsJdD7oZNhFmQ95Tb/LnDUjs5Yj9brP0NWzXfYU4
UK2ZnINJRcJpB8iRCaCxE8DdcUF0XqIEq6pA272snoLmiXLMvNl3kYEdm+je6voD
58SNVEUsztzQyXmJEhCpwVI0A6QCjzXj+qvpmw3ZZHi8JwXei8ZZBLTSFBki8Z7n
sH9BBH38/SzUmAN4QHSPy1gjqm00OAE8NaYDkh/bzE4d7mLGGMWp/WE3KPSu82HF
kPe6XoSbiLm/kxk32T0=
-----END CERTIFICATE-----


@ -0,0 +1 @@
E2CE26BF3285059C


@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDPjCCAiYCCQDizia/MoUFnDANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJV
UzELMAkGA1UECBMCQ0ExFjAUBgNVBAcTDVNhbiBGcmFuY2lzY28xFDASBgNVBAoT
C0JyYWRmaXR6aW5jMRIwEAYDVQQDEwlsb2NhbGhvc3QxHTAbBgkqhkiG9w0BCQEW
DmJyYWRAZGFuZ2EuY29tMB4XDTE0MDcxNTIwNTAyN1oXDTE1MTEyNzIwNTAyN1ow
RzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAkNBMQswCQYDVQQHEwJTRjEeMBwGA1UE
ChMVYnJhZGZpdHogaHR0cDIgc2VydmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAs1Y9CyLFrdL8VQWN1WaifDqaZFnoqjHhCMlc1TfG2zA+InDifx2l
gZD3o8FeNnAcfM2sPlk3+ZleOYw9P/CklFVDlvqmpCv9ss/BEp/dDaWvy1LmJ4c2
dbQJfmTxn7CV1H3TsVJvKdwFmdoABb41NoBp6+NNO7OtDyhbIMiCI0pL3Nefb3HL
A7hIMo3DYbORTtJLTIH9W8YKrEWL0lwHLrYFx/UdutZnv+HjdmO6vCN4na55mjws
/vjKQUmc7xeY7Xe20xDEG2oDKVkL2eD7FfyrYMS3rO1ExP2KSqlXYG/1S9I/fz88
F0GK7HX55b5WjZCl2J3ERVdnv/0MQv+sYQIDAQABMA0GCSqGSIb3DQEBBQUAA4IB
AQC0zL+n/YpRZOdulSu9tS8FxrstXqGWoxfe+vIUgqfMZ5+0MkjJ/vW0FqlLDl2R
rn4XaR3e7FmWkwdDVbq/UB6lPmoAaFkCgh9/5oapMaclNVNnfF3fjCJfRr+qj/iD
EmJStTIN0ZuUjAlpiACmfnpEU55PafT5Zx+i1yE4FGjw8bJpFoyD4Hnm54nGjX19
KeCuvcYFUPnBm3lcL0FalF2AjqV02WTHYNQk7YF/oeO7NKBoEgvGvKG3x+xaOeBI
dwvdq175ZsGul30h+QjrRlXhH/twcuaT3GSdoysDl9cCYE8f1Mk8PD6gan3uBCJU
90p6/CbU71bGbfpM2PHot2fm
-----END CERTIFICATE-----


@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAs1Y9CyLFrdL8VQWN1WaifDqaZFnoqjHhCMlc1TfG2zA+InDi
fx2lgZD3o8FeNnAcfM2sPlk3+ZleOYw9P/CklFVDlvqmpCv9ss/BEp/dDaWvy1Lm
J4c2dbQJfmTxn7CV1H3TsVJvKdwFmdoABb41NoBp6+NNO7OtDyhbIMiCI0pL3Nef
b3HLA7hIMo3DYbORTtJLTIH9W8YKrEWL0lwHLrYFx/UdutZnv+HjdmO6vCN4na55
mjws/vjKQUmc7xeY7Xe20xDEG2oDKVkL2eD7FfyrYMS3rO1ExP2KSqlXYG/1S9I/
fz88F0GK7HX55b5WjZCl2J3ERVdnv/0MQv+sYQIDAQABAoIBADQ2spUwbY+bcz4p
3M66ECrNQTBggP40gYl2XyHxGGOu2xhZ94f9ELf1hjRWU2DUKWco1rJcdZClV6q3
qwmXvcM2Q/SMS8JW0ImkNVl/0/NqPxGatEnj8zY30d/L8hGFb0orzFu/XYA5gCP4
NbN2WrXgk3ZLeqwcNxHHtSiJWGJ/fPyeDWAu/apy75u9Xf2GlzBZmV6HYD9EfK80
LTlI60f5FO487CrJnboL7ovPJrIHn+k05xRQqwma4orpz932rTXnTjs9Lg6KtbQN
a7PrqfAntIISgr11a66Mng3IYH1lYqJsWJJwX/xHT4WLEy0EH4/0+PfYemJekz2+
Co62drECgYEA6O9zVJZXrLSDsIi54cfxA7nEZWm5CAtkYWeAHa4EJ+IlZ7gIf9sL
W8oFcEfFGpvwVqWZ+AsQ70dsjXAv3zXaG0tmg9FtqWp7pzRSMPidifZcQwWkKeTO
gJnFmnVyed8h6GfjTEu4gxo1/S5U0V+mYSha01z5NTnN6ltKx1Or3b0CgYEAxRgm
S30nZxnyg/V7ys61AZhst1DG2tkZXEMcA7dYhabMoXPJAP/EfhlWwpWYYUs/u0gS
Wwmf5IivX5TlYScgmkvb/NYz0u4ZmOXkLTnLPtdKKFXhjXJcHjUP67jYmOxNlJLp
V4vLRnFxTpffAV+OszzRxsXX6fvruwZBANYJeXUCgYBVouLFsFgfWGYp2rpr9XP4
KK25kvrBqF6JKOIDB1zjxNJ3pUMKrl8oqccCFoCyXa4oTM2kUX0yWxHfleUjrMq4
yimwQKiOZmV7fVLSSjSw6e/VfBd0h3gb82ygcplZkN0IclkwTY5SNKqwn/3y07V5
drqdhkrgdJXtmQ6O5YYECQKBgATERcDToQ1USlI4sKrB/wyv1AlG8dg/IebiVJ4e
ZAyvcQmClFzq0qS+FiQUnB/WQw9TeeYrwGs1hxBHuJh16srwhLyDrbMvQP06qh8R
48F8UXXSRec22dV9MQphaROhu2qZdv1AC0WD3tqov6L33aqmEOi+xi8JgbT/PLk5
c/c1AoGBAI1A/02ryksW6/wc7/6SP2M2rTy4m1sD/GnrTc67EHnRcVBdKO6qH2RY
nqC8YcveC2ZghgPTDsA3VGuzuBXpwY6wTyV99q6jxQJ6/xcrD9/NUG6Uwv/xfCxl
IJLeBYEqQundSSny3VtaAUK8Ul1nxpTvVRNwtcyWTo8RHAAyNPWd
-----END RSA PRIVATE KEY-----

View File

@ -0,0 +1,80 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"net/http"
"strings"
)
var (
commonLowerHeader = map[string]string{} // Go-Canonical-Case -> lower-case
commonCanonHeader = map[string]string{} // lower-case -> Go-Canonical-Case
)
func init() {
for _, v := range []string{
"accept",
"accept-charset",
"accept-encoding",
"accept-language",
"accept-ranges",
"age",
"access-control-allow-origin",
"allow",
"authorization",
"cache-control",
"content-disposition",
"content-encoding",
"content-language",
"content-length",
"content-location",
"content-range",
"content-type",
"cookie",
"date",
"etag",
"expect",
"expires",
"from",
"host",
"if-match",
"if-modified-since",
"if-none-match",
"if-unmodified-since",
"last-modified",
"link",
"location",
"max-forwards",
"proxy-authenticate",
"proxy-authorization",
"range",
"referer",
"refresh",
"retry-after",
"server",
"set-cookie",
"strict-transport-security",
"transfer-encoding",
"user-agent",
"vary",
"via",
"www-authenticate",
} {
chk := http.CanonicalHeaderKey(v)
commonLowerHeader[chk] = v
commonCanonHeader[v] = chk
}
}
func lowerHeader(v string) string {
if s, ok := commonLowerHeader[v]; ok {
return s
}
return strings.ToLower(v)
}
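// lowerHeaderExample is an illustrative sketch (the helper name is made
// up): it shows how a canonical net/http header key maps to the
// lower-case form that HTTP/2 requires on the wire.
func lowerHeaderExample() string {
	k := http.CanonicalHeaderKey("content-type") // "Content-Type"
	return lowerHeader(k)                        // "content-type"
}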

View File

@ -0,0 +1,252 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package hpack
import (
"io"
)
const (
uint32Max = ^uint32(0)
initialHeaderTableSize = 4096
)
type Encoder struct {
dynTab dynamicTable
// minSize is the minimum table size set by
// SetMaxDynamicTableSize after the previous Header Table Size
// Update.
minSize uint32
// maxSizeLimit is the maximum table size this encoder
// supports. This protects the encoder from an excessively
// large table size.
maxSizeLimit uint32
// tableSizeUpdate indicates whether "Header Table Size
// Update" is required.
tableSizeUpdate bool
w io.Writer
buf []byte
}
// NewEncoder returns a new Encoder which performs HPACK encoding.
// Encoded data is written to w.
func NewEncoder(w io.Writer) *Encoder {
e := &Encoder{
minSize: uint32Max,
maxSizeLimit: initialHeaderTableSize,
tableSizeUpdate: false,
w: w,
}
e.dynTab.setMaxSize(initialHeaderTableSize)
return e
}
// WriteField encodes f into a single Write to e's underlying Writer.
// This function may also produce bytes for "Header Table Size Update"
// if necessary. If produced, it is done before encoding f.
func (e *Encoder) WriteField(f HeaderField) error {
e.buf = e.buf[:0]
if e.tableSizeUpdate {
e.tableSizeUpdate = false
if e.minSize < e.dynTab.maxSize {
e.buf = appendTableSize(e.buf, e.minSize)
}
e.minSize = uint32Max
e.buf = appendTableSize(e.buf, e.dynTab.maxSize)
}
idx, nameValueMatch := e.searchTable(f)
if nameValueMatch {
e.buf = appendIndexed(e.buf, idx)
} else {
indexing := e.shouldIndex(f)
if indexing {
e.dynTab.add(f)
}
if idx == 0 {
e.buf = appendNewName(e.buf, f, indexing)
} else {
e.buf = appendIndexedName(e.buf, f, idx, indexing)
}
}
n, err := e.w.Write(e.buf)
if err == nil && n != len(e.buf) {
err = io.ErrShortWrite
}
return err
}
// searchTable searches f in both the static and dynamic header tables.
// The static header table is searched first. The dynamic header table
// is searched only when the static table has no exact match for both
// name and value. If there is no match, i is 0. If both name and value
// match, i is the matched index and nameValueMatch becomes true. If
// only the name matches, i points to that index and nameValueMatch
// becomes false.
func (e *Encoder) searchTable(f HeaderField) (i uint64, nameValueMatch bool) {
for idx, hf := range staticTable {
if !constantTimeStringCompare(hf.Name, f.Name) {
continue
}
if i == 0 {
i = uint64(idx + 1)
}
if f.Sensitive {
continue
}
if !constantTimeStringCompare(hf.Value, f.Value) {
continue
}
i = uint64(idx + 1)
nameValueMatch = true
return
}
j, nameValueMatch := e.dynTab.search(f)
if nameValueMatch || (i == 0 && j != 0) {
i = j + uint64(len(staticTable))
}
return
}
// SetMaxDynamicTableSize changes the dynamic header table size to v.
// The actual size is bounded by the value passed to
// SetMaxDynamicTableSizeLimit.
func (e *Encoder) SetMaxDynamicTableSize(v uint32) {
if v > e.maxSizeLimit {
v = e.maxSizeLimit
}
if v < e.minSize {
e.minSize = v
}
e.tableSizeUpdate = true
e.dynTab.setMaxSize(v)
}
// SetMaxDynamicTableSizeLimit changes the maximum value that can be
// specified in SetMaxDynamicTableSize to v. By default, it is set to
// 4096, which is the same as the default dynamic header table
// size described in the HPACK specification. If the current maximum
// dynamic header table size is strictly greater than v, "Header Table
// Size Update" will be done in the next WriteField call and the
// maximum dynamic header table size is truncated to v.
func (e *Encoder) SetMaxDynamicTableSizeLimit(v uint32) {
e.maxSizeLimit = v
if e.dynTab.maxSize > v {
e.tableSizeUpdate = true
e.dynTab.setMaxSize(v)
}
}
// shouldIndex reports whether f should be indexed.
func (e *Encoder) shouldIndex(f HeaderField) bool {
return !f.Sensitive && f.size() <= e.dynTab.maxSize
}
// appendIndexed appends index i, as encoded in "Indexed Header Field"
// representation, to dst and returns the extended buffer.
func appendIndexed(dst []byte, i uint64) []byte {
first := len(dst)
dst = appendVarInt(dst, 7, i)
dst[first] |= 0x80
return dst
}
// appendNewName appends f, as encoded in one of "Literal Header field
// - New Name" representation variants, to dst and returns the
// extended buffer.
//
// If f.Sensitive is true, "Never Indexed" representation is used. If
// f.Sensitive is false and indexing is true, "Inremental Indexing"
// representation is used.
func appendNewName(dst []byte, f HeaderField, indexing bool) []byte {
dst = append(dst, encodeTypeByte(indexing, f.Sensitive))
dst = appendHpackString(dst, f.Name)
return appendHpackString(dst, f.Value)
}
// appendIndexedName appends f and index i referring to the indexed name
// entry, as encoded in one of "Literal Header field - Indexed Name"
// representation variants, to dst and returns the extended buffer.
//
// If f.Sensitive is true, "Never Indexed" representation is used. If
// f.Sensitive is false and indexing is true, "Incremental Indexing"
// representation is used.
func appendIndexedName(dst []byte, f HeaderField, i uint64, indexing bool) []byte {
first := len(dst)
var n byte
if indexing {
n = 6
} else {
n = 4
}
dst = appendVarInt(dst, n, i)
dst[first] |= encodeTypeByte(indexing, f.Sensitive)
return appendHpackString(dst, f.Value)
}
// appendTableSize appends v, as encoded in "Header Table Size Update"
// representation, to dst and returns the extended buffer.
func appendTableSize(dst []byte, v uint32) []byte {
first := len(dst)
dst = appendVarInt(dst, 5, uint64(v))
dst[first] |= 0x20
return dst
}
// appendVarInt appends i, as encoded in variable integer form using n
// bit prefix, to dst and returns the extended buffer.
//
// See
// http://http2.github.io/http2-spec/compression.html#integer.representation
func appendVarInt(dst []byte, n byte, i uint64) []byte {
k := uint64((1 << n) - 1)
if i < k {
return append(dst, byte(i))
}
dst = append(dst, byte(k))
i -= k
for ; i >= 128; i >>= 7 {
dst = append(dst, byte(0x80|(i&0x7f)))
}
return append(dst, byte(i))
}
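// appendVarIntExample is an illustrative sketch (the helper name is made
// up) of the prefix-integer encoding above: with a 5-bit prefix the
// values 0..30 fit directly, so 1337 is emitted as the saturated prefix
// 31, then the remainder 1306 in 7-bit groups, least significant first:
// 0x9a (continuation bit set), then 0x0a.
func appendVarIntExample() []byte {
	return appendVarInt(nil, 5, 1337) // []byte{31, 0x9a, 0x0a}
}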
// appendHpackString appends s, as encoded in "String Literal"
// representation, to dst and returns the extended buffer.
//
// s will be encoded in Huffman codes only when it produces a strictly
// shorter byte string.
func appendHpackString(dst []byte, s string) []byte {
huffmanLength := HuffmanEncodeLength(s)
if huffmanLength < uint64(len(s)) {
first := len(dst)
dst = appendVarInt(dst, 7, huffmanLength)
dst = AppendHuffmanString(dst, s)
dst[first] |= 0x80
} else {
dst = appendVarInt(dst, 7, uint64(len(s)))
dst = append(dst, s...)
}
return dst
}
// encodeTypeByte returns type byte. If sensitive is true, type byte
// for "Never Indexed" representation is returned. If sensitive is
// false and indexing is true, type byte for "Incremental Indexing"
// representation is returned. Otherwise, type byte for "Without
// Indexing" is returned.
func encodeTypeByte(indexing, sensitive bool) byte {
if sensitive {
return 0x10
}
if indexing {
return 0x40
}
return 0
}
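// encodeExample is an illustrative sketch (the helper name and header
// values are made up) of the Encoder API above: it writes a small HPACK
// header block to w. Any io.Writer works, e.g. a bytes.Buffer.
func encodeExample(w io.Writer) error {
	e := NewEncoder(w)
	for _, f := range []HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/"},
		{Name: "authorization", Value: "secret", Sensitive: true}, // never indexed
	} {
		if err := e.WriteField(f); err != nil {
			return err
		}
	}
	return nil
}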

View File

@ -0,0 +1,331 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package hpack
import (
"bytes"
"encoding/hex"
"reflect"
"strings"
"testing"
)
func TestEncoderTableSizeUpdate(t *testing.T) {
tests := []struct {
size1, size2 uint32
wantHex string
}{
// Should emit 2 table size updates (2048 and 4096)
{2048, 4096, "3fe10f 3fe11f 82"},
// Should emit 1 table size update (2048)
{16384, 2048, "3fe10f 82"},
}
for _, tt := range tests {
var buf bytes.Buffer
e := NewEncoder(&buf)
e.SetMaxDynamicTableSize(tt.size1)
e.SetMaxDynamicTableSize(tt.size2)
if err := e.WriteField(pair(":method", "GET")); err != nil {
t.Fatal(err)
}
want := removeSpace(tt.wantHex)
if got := hex.EncodeToString(buf.Bytes()); got != want {
t.Errorf("e.SetDynamicTableSize %v, %v = %q; want %q", tt.size1, tt.size2, got, want)
}
}
}
func TestEncoderWriteField(t *testing.T) {
var buf bytes.Buffer
e := NewEncoder(&buf)
var got []HeaderField
d := NewDecoder(4<<10, func(f HeaderField) {
got = append(got, f)
})
tests := []struct {
hdrs []HeaderField
}{
{[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "http"),
pair(":path", "/"),
pair(":authority", "www.example.com"),
}},
{[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "http"),
pair(":path", "/"),
pair(":authority", "www.example.com"),
pair("cache-control", "no-cache"),
}},
{[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "https"),
pair(":path", "/index.html"),
pair(":authority", "www.example.com"),
pair("custom-key", "custom-value"),
}},
}
for i, tt := range tests {
buf.Reset()
got = got[:0]
for _, hf := range tt.hdrs {
if err := e.WriteField(hf); err != nil {
t.Fatal(err)
}
}
_, err := d.Write(buf.Bytes())
if err != nil {
t.Errorf("%d. Decoder Write = %v", i, err)
}
if !reflect.DeepEqual(got, tt.hdrs) {
t.Errorf("%d. Decoded %+v; want %+v", i, got, tt.hdrs)
}
}
}
func TestEncoderSearchTable(t *testing.T) {
e := NewEncoder(nil)
e.dynTab.add(pair("foo", "bar"))
e.dynTab.add(pair("blake", "miz"))
e.dynTab.add(pair(":method", "GET"))
tests := []struct {
hf HeaderField
wantI uint64
wantMatch bool
}{
// Name and Value match
{pair("foo", "bar"), uint64(len(staticTable) + 3), true},
{pair("blake", "miz"), uint64(len(staticTable) + 2), true},
{pair(":method", "GET"), 2, true},
// Only the name matches because Sensitive == true
{HeaderField{":method", "GET", true}, 2, false},
// Only Name matches
{pair("foo", "..."), uint64(len(staticTable) + 3), false},
{pair("blake", "..."), uint64(len(staticTable) + 2), false},
{pair(":method", "..."), 2, false},
// None match
{pair("foo-", "bar"), 0, false},
}
for _, tt := range tests {
if gotI, gotMatch := e.searchTable(tt.hf); gotI != tt.wantI || gotMatch != tt.wantMatch {
t.Errorf("d.search(%+v) = %v, %v; want %v, %v", tt.hf, gotI, gotMatch, tt.wantI, tt.wantMatch)
}
}
}
func TestAppendVarInt(t *testing.T) {
tests := []struct {
n byte
i uint64
want []byte
}{
// Fits in a byte:
{1, 0, []byte{0}},
{2, 2, []byte{2}},
{3, 6, []byte{6}},
{4, 14, []byte{14}},
{5, 30, []byte{30}},
{6, 62, []byte{62}},
{7, 126, []byte{126}},
{8, 254, []byte{254}},
// Multiple bytes:
{5, 1337, []byte{31, 154, 10}},
}
for _, tt := range tests {
got := appendVarInt(nil, tt.n, tt.i)
if !bytes.Equal(got, tt.want) {
t.Errorf("appendVarInt(nil, %v, %v) = %v; want %v", tt.n, tt.i, got, tt.want)
}
}
}
func TestAppendHpackString(t *testing.T) {
tests := []struct {
s, wantHex string
}{
// Huffman encoded
{"www.example.com", "8c f1e3 c2e5 f23a 6ba0 ab90 f4ff"},
// Not Huffman encoded
{"a", "01 61"},
// zero length
{"", "00"},
}
for _, tt := range tests {
want := removeSpace(tt.wantHex)
buf := appendHpackString(nil, tt.s)
if got := hex.EncodeToString(buf); want != got {
t.Errorf("appendHpackString(nil, %q) = %q; want %q", tt.s, got, want)
}
}
}
func TestAppendIndexed(t *testing.T) {
tests := []struct {
i uint64
wantHex string
}{
// 1 byte
{1, "81"},
{126, "fe"},
// 2 bytes
{127, "ff00"},
{128, "ff01"},
}
for _, tt := range tests {
want := removeSpace(tt.wantHex)
buf := appendIndexed(nil, tt.i)
if got := hex.EncodeToString(buf); want != got {
t.Errorf("appendIndex(nil, %v) = %q; want %q", tt.i, got, want)
}
}
}
func TestAppendNewName(t *testing.T) {
tests := []struct {
f HeaderField
indexing bool
wantHex string
}{
// Incremental indexing
{HeaderField{"custom-key", "custom-value", false}, true, "40 88 25a8 49e9 5ba9 7d7f 89 25a8 49e9 5bb8 e8b4 bf"},
// Without indexing
{HeaderField{"custom-key", "custom-value", false}, false, "00 88 25a8 49e9 5ba9 7d7f 89 25a8 49e9 5bb8 e8b4 bf"},
// Never indexed
{HeaderField{"custom-key", "custom-value", true}, true, "10 88 25a8 49e9 5ba9 7d7f 89 25a8 49e9 5bb8 e8b4 bf"},
{HeaderField{"custom-key", "custom-value", true}, false, "10 88 25a8 49e9 5ba9 7d7f 89 25a8 49e9 5bb8 e8b4 bf"},
}
for _, tt := range tests {
want := removeSpace(tt.wantHex)
buf := appendNewName(nil, tt.f, tt.indexing)
if got := hex.EncodeToString(buf); want != got {
t.Errorf("appendNewName(nil, %+v, %v) = %q; want %q", tt.f, tt.indexing, got, want)
}
}
}
func TestAppendIndexedName(t *testing.T) {
tests := []struct {
f HeaderField
i uint64
indexing bool
wantHex string
}{
// Incremental indexing
{HeaderField{":status", "302", false}, 8, true, "48 82 6402"},
// Without indexing
{HeaderField{":status", "302", false}, 8, false, "08 82 6402"},
// Never indexed
{HeaderField{":status", "302", true}, 8, true, "18 82 6402"},
{HeaderField{":status", "302", true}, 8, false, "18 82 6402"},
}
for _, tt := range tests {
want := removeSpace(tt.wantHex)
buf := appendIndexedName(nil, tt.f, tt.i, tt.indexing)
if got := hex.EncodeToString(buf); want != got {
t.Errorf("appendIndexedName(nil, %+v, %v) = %q; want %q", tt.f, tt.indexing, got, want)
}
}
}
func TestAppendTableSize(t *testing.T) {
tests := []struct {
i uint32
wantHex string
}{
// Fits into 1 byte
{30, "3e"},
// Extra byte
{31, "3f00"},
{32, "3f01"},
}
for _, tt := range tests {
want := removeSpace(tt.wantHex)
buf := appendTableSize(nil, tt.i)
if got := hex.EncodeToString(buf); want != got {
t.Errorf("appendTableSize(nil, %v) = %q; want %q", tt.i, got, want)
}
}
}
func TestEncoderSetMaxDynamicTableSize(t *testing.T) {
var buf bytes.Buffer
e := NewEncoder(&buf)
tests := []struct {
v uint32
wantUpdate bool
wantMinSize uint32
wantMaxSize uint32
}{
// Set new table size to 2048
{2048, true, 2048, 2048},
// Set new table size to 16384, but still limited to
// 4096
{16384, true, 2048, 4096},
}
for _, tt := range tests {
e.SetMaxDynamicTableSize(tt.v)
if got := e.tableSizeUpdate; tt.wantUpdate != got {
t.Errorf("e.tableSizeUpdate = %v; want %v", got, tt.wantUpdate)
}
if got := e.minSize; tt.wantMinSize != got {
t.Errorf("e.minSize = %v; want %v", got, tt.wantMinSize)
}
if got := e.dynTab.maxSize; tt.wantMaxSize != got {
t.Errorf("e.maxSize = %v; want %v", got, tt.wantMaxSize)
}
}
}
func TestEncoderSetMaxDynamicTableSizeLimit(t *testing.T) {
e := NewEncoder(nil)
// 4095 < initialHeaderTableSize means maxSize is truncated to
// 4095.
e.SetMaxDynamicTableSizeLimit(4095)
if got, want := e.dynTab.maxSize, uint32(4095); got != want {
t.Errorf("e.dynTab.maxSize = %v; want %v", got, want)
}
if got, want := e.maxSizeLimit, uint32(4095); got != want {
t.Errorf("e.maxSizeLimit = %v; want %v", got, want)
}
if got, want := e.tableSizeUpdate, true; got != want {
t.Errorf("e.tableSizeUpdate = %v; want %v", got, want)
}
// maxSize will be truncated to maxSizeLimit
e.SetMaxDynamicTableSize(16384)
if got, want := e.dynTab.maxSize, uint32(4095); got != want {
t.Errorf("e.dynTab.maxSize = %v; want %v", got, want)
}
// 8192 > current maxSizeLimit, so maxSize does not change.
e.SetMaxDynamicTableSizeLimit(8192)
if got, want := e.dynTab.maxSize, uint32(4095); got != want {
t.Errorf("e.dynTab.maxSize = %v; want %v", got, want)
}
if got, want := e.maxSizeLimit, uint32(8192); got != want {
t.Errorf("e.maxSizeLimit = %v; want %v", got, want)
}
}
func removeSpace(s string) string {
return strings.Replace(s, " ", "", -1)
}

View File

@ -0,0 +1,445 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
// Package hpack implements HPACK, a compression format for
// efficiently representing HTTP header fields in the context of HTTP/2.
//
// See http://tools.ietf.org/html/draft-ietf-httpbis-header-compression-09
package hpack
import (
"bytes"
"errors"
"fmt"
)
// A DecodingError is something the spec defines as a decoding error.
type DecodingError struct {
Err error
}
func (de DecodingError) Error() string {
return fmt.Sprintf("decoding error: %v", de.Err)
}
// An InvalidIndexError is returned when an encoder references a table
// entry before the static table or after the end of the dynamic table.
type InvalidIndexError int
func (e InvalidIndexError) Error() string {
return fmt.Sprintf("invalid indexed representation index %d", int(e))
}
// A HeaderField is a name-value pair. Both the name and value are
// treated as opaque sequences of octets.
type HeaderField struct {
Name, Value string
// Sensitive means that this header field should never be
// indexed.
Sensitive bool
}
func (hf *HeaderField) size() uint32 {
// http://http2.github.io/http2-spec/compression.html#rfc.section.4.1
// "The size of the dynamic table is the sum of the size of
// its entries. The size of an entry is the sum of its name's
// length in octets (as defined in Section 5.2), its value's
// length in octets (see Section 5.2), plus 32. The size of
// an entry is calculated using the length of the name and
// value without any Huffman encoding applied."
// This can overflow if somebody makes a large HeaderField
// Name and/or Value by hand, but we don't care, because that
// won't happen on the wire because the encoding doesn't allow
// it.
return uint32(len(hf.Name) + len(hf.Value) + 32)
}
// A Decoder is the decoding context for incremental processing of
// header blocks.
type Decoder struct {
dynTab dynamicTable
emit func(f HeaderField)
// buf is the unparsed buffer. It's only written to
// saveBuf if it was truncated in the middle of a header
// block. Because it's usually not owned, we can only
// process it under Write.
buf []byte // usually not owned
saveBuf bytes.Buffer
}
func NewDecoder(maxSize uint32, emitFunc func(f HeaderField)) *Decoder {
d := &Decoder{
emit: emitFunc,
}
d.dynTab.allowedMaxSize = maxSize
d.dynTab.setMaxSize(maxSize)
return d
}
// TODO: add method *Decoder.Reset(maxSize, emitFunc) to let callers re-use Decoders and their
// underlying buffers for garbage reasons.
func (d *Decoder) SetMaxDynamicTableSize(v uint32) {
d.dynTab.setMaxSize(v)
}
// SetAllowedMaxDynamicTableSize sets the upper bound that the encoded
// stream (via dynamic table size updates) may set the maximum size
// to.
func (d *Decoder) SetAllowedMaxDynamicTableSize(v uint32) {
d.dynTab.allowedMaxSize = v
}
type dynamicTable struct {
// ents is the FIFO described at
// http://http2.github.io/http2-spec/compression.html#rfc.section.2.3.2
// The newest (low index) is appended at the end, and items are
// evicted from the front.
ents []HeaderField
size uint32
maxSize uint32 // current maxSize
allowedMaxSize uint32 // maxSize may go up to this, inclusive
}
func (dt *dynamicTable) setMaxSize(v uint32) {
dt.maxSize = v
dt.evict()
}
// TODO: change dynamicTable to be a struct with a slice and a size int field,
// per http://http2.github.io/http2-spec/compression.html#rfc.section.4.1:
//
// Then make add increment the size. Maybe the max size should move from Decoder to
// dynamicTable and add should return an ok bool if there was enough space.
//
// Later we'll need a remove operation on dynamicTable.
func (dt *dynamicTable) add(f HeaderField) {
dt.ents = append(dt.ents, f)
dt.size += f.size()
dt.evict()
}
// If we're too big, evict old stuff (front of the slice)
func (dt *dynamicTable) evict() {
base := dt.ents // keep base pointer of slice
for dt.size > dt.maxSize {
dt.size -= dt.ents[0].size()
dt.ents = dt.ents[1:]
}
// Shift slice contents down if we evicted things.
if len(dt.ents) != len(base) {
copy(base, dt.ents)
dt.ents = base[:len(dt.ents)]
}
}
// constantTimeStringCompare compares string a and b in a constant
// time manner.
func constantTimeStringCompare(a, b string) bool {
if len(a) != len(b) {
return false
}
c := byte(0)
for i := 0; i < len(a); i++ {
c |= a[i] ^ b[i]
}
return c == 0
}
// search searches f in the table. The return value i is 0 if there is
// no name match. If there is a name match or a name/value match, i is
// the index of that entry (1-based). If both name and value match,
// nameValueMatch becomes true.
func (dt *dynamicTable) search(f HeaderField) (i uint64, nameValueMatch bool) {
l := len(dt.ents)
for j := l - 1; j >= 0; j-- {
ent := dt.ents[j]
if !constantTimeStringCompare(ent.Name, f.Name) {
continue
}
if i == 0 {
i = uint64(l - j)
}
if f.Sensitive {
continue
}
if !constantTimeStringCompare(ent.Value, f.Value) {
continue
}
i = uint64(l - j)
nameValueMatch = true
return
}
return
}
func (d *Decoder) maxTableIndex() int {
return len(d.dynTab.ents) + len(staticTable)
}
func (d *Decoder) at(i uint64) (hf HeaderField, ok bool) {
if i < 1 {
return
}
if i > uint64(d.maxTableIndex()) {
return
}
if i <= uint64(len(staticTable)) {
return staticTable[i-1], true
}
dents := d.dynTab.ents
return dents[len(dents)-(int(i)-len(staticTable))], true
}
// DecodeFull decodes an entire block.
//
// TODO: remove this method and make it incremental later? This is
// easier for debugging now.
func (d *Decoder) DecodeFull(p []byte) ([]HeaderField, error) {
var hf []HeaderField
saveFunc := d.emit
defer func() { d.emit = saveFunc }()
d.emit = func(f HeaderField) { hf = append(hf, f) }
if _, err := d.Write(p); err != nil {
return nil, err
}
if err := d.Close(); err != nil {
return nil, err
}
return hf, nil
}
func (d *Decoder) Close() error {
if d.saveBuf.Len() > 0 {
d.saveBuf.Reset()
return DecodingError{errors.New("truncated headers")}
}
return nil
}
func (d *Decoder) Write(p []byte) (n int, err error) {
if len(p) == 0 {
// Prevent state machine CPU attacks (making us redo
// work up to the point of finding out we don't have
// enough data)
return
}
// Only copy the data if we have to. Optimistically assume
// that p will contain a complete header block.
if d.saveBuf.Len() == 0 {
d.buf = p
} else {
d.saveBuf.Write(p)
d.buf = d.saveBuf.Bytes()
d.saveBuf.Reset()
}
for len(d.buf) > 0 {
err = d.parseHeaderFieldRepr()
if err != nil {
if err == errNeedMore {
err = nil
d.saveBuf.Write(d.buf)
}
break
}
}
return len(p), err
}
// errNeedMore is an internal sentinel error value that means the
// buffer is truncated and we need to read more data before we can
// continue parsing.
var errNeedMore = errors.New("need more data")
type indexType int
const (
indexedTrue indexType = iota
indexedFalse
indexedNever
)
func (v indexType) indexed() bool { return v == indexedTrue }
func (v indexType) sensitive() bool { return v == indexedNever }
// returns errNeedMore if there isn't enough data available.
// any other error is fatal.
// consumes d.buf iff it returns nil.
// precondition: must be called with len(d.buf) > 0
func (d *Decoder) parseHeaderFieldRepr() error {
b := d.buf[0]
switch {
case b&128 != 0:
// Indexed representation.
// High bit set?
// http://http2.github.io/http2-spec/compression.html#rfc.section.6.1
return d.parseFieldIndexed()
case b&192 == 64:
// 6.2.1 Literal Header Field with Incremental Indexing
// 0b01xxxxxx: top two bits are 01
// http://http2.github.io/http2-spec/compression.html#rfc.section.6.2.1
return d.parseFieldLiteral(6, indexedTrue)
case b&240 == 0:
// 6.2.2 Literal Header Field without Indexing
// 0b0000xxxx: top four bits are 0000
// http://http2.github.io/http2-spec/compression.html#rfc.section.6.2.2
return d.parseFieldLiteral(4, indexedFalse)
case b&240 == 16:
// 6.2.3 Literal Header Field never Indexed
// 0b0001xxxx: top four bits are 0001
// http://http2.github.io/http2-spec/compression.html#rfc.section.6.2.3
return d.parseFieldLiteral(4, indexedNever)
case b&224 == 32:
// 6.3 Dynamic Table Size Update
// Top three bits are '001'.
// http://http2.github.io/http2-spec/compression.html#rfc.section.6.3
return d.parseDynamicTableSizeUpdate()
}
return DecodingError{errors.New("invalid encoding")}
}
// (same invariants and behavior as parseHeaderFieldRepr)
func (d *Decoder) parseFieldIndexed() error {
buf := d.buf
idx, buf, err := readVarInt(7, buf)
if err != nil {
return err
}
hf, ok := d.at(idx)
if !ok {
return DecodingError{InvalidIndexError(idx)}
}
d.emit(HeaderField{Name: hf.Name, Value: hf.Value})
d.buf = buf
return nil
}
// (same invariants and behavior as parseHeaderFieldRepr)
func (d *Decoder) parseFieldLiteral(n uint8, it indexType) error {
buf := d.buf
nameIdx, buf, err := readVarInt(n, buf)
if err != nil {
return err
}
var hf HeaderField
if nameIdx > 0 {
ihf, ok := d.at(nameIdx)
if !ok {
return DecodingError{InvalidIndexError(nameIdx)}
}
hf.Name = ihf.Name
} else {
hf.Name, buf, err = readString(buf)
if err != nil {
return err
}
}
hf.Value, buf, err = readString(buf)
if err != nil {
return err
}
d.buf = buf
if it.indexed() {
d.dynTab.add(hf)
}
hf.Sensitive = it.sensitive()
d.emit(hf)
return nil
}
// (same invariants and behavior as parseHeaderFieldRepr)
func (d *Decoder) parseDynamicTableSizeUpdate() error {
buf := d.buf
size, buf, err := readVarInt(5, buf)
if err != nil {
return err
}
if size > uint64(d.dynTab.allowedMaxSize) {
return DecodingError{errors.New("dynamic table size update too large")}
}
d.dynTab.setMaxSize(uint32(size))
d.buf = buf
return nil
}
var errVarintOverflow = DecodingError{errors.New("varint integer overflow")}
// readVarInt reads an unsigned variable length integer off the
// beginning of p. n is the parameter as described in
// http://http2.github.io/http2-spec/compression.html#rfc.section.5.1.
//
// n must always be between 1 and 8.
//
// The returned remain buffer is either a smaller suffix of p, or err != nil.
// The error is errNeedMore if p doesn't contain a complete integer.
func readVarInt(n byte, p []byte) (i uint64, remain []byte, err error) {
if n < 1 || n > 8 {
panic("bad n")
}
if len(p) == 0 {
return 0, p, errNeedMore
}
i = uint64(p[0])
if n < 8 {
i &= (1 << uint64(n)) - 1
}
if i < (1<<uint64(n))-1 {
return i, p[1:], nil
}
origP := p
p = p[1:]
var m uint64
for len(p) > 0 {
b := p[0]
p = p[1:]
i += uint64(b&127) << m
if b&128 == 0 {
return i, p, nil
}
m += 7
if m >= 63 { // TODO: proper overflow check. making this up.
return 0, origP, errVarintOverflow
}
}
return 0, origP, errNeedMore
}
func readString(p []byte) (s string, remain []byte, err error) {
if len(p) == 0 {
return "", p, errNeedMore
}
isHuff := p[0]&128 != 0
strLen, p, err := readVarInt(7, p)
if err != nil {
return "", p, err
}
if uint64(len(p)) < strLen {
return "", p, errNeedMore
}
if !isHuff {
return string(p[:strLen]), p[strLen:], nil
}
// TODO: optimize this garbage:
var buf bytes.Buffer
if _, err := HuffmanDecode(&buf, p[:strLen]); err != nil {
return "", nil, err
}
return buf.String(), p[strLen:], nil
}
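// decodeExample is an illustrative sketch (the helper name is made up)
// of the Decoder API above: it feeds an encoded header block to a
// Decoder whose emit callback collects the decoded fields. The 4096
// table size matches the spec's initial value and is used here only as
// an example.
func decodeExample(p []byte) ([]HeaderField, error) {
	var got []HeaderField
	d := NewDecoder(4096, func(f HeaderField) { got = append(got, f) })
	if _, err := d.Write(p); err != nil {
		return nil, err
	}
	if err := d.Close(); err != nil {
		return nil, err
	}
	return got, nil
}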

View File

@ -0,0 +1,648 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package hpack
import (
"bufio"
"bytes"
"encoding/hex"
"fmt"
"reflect"
"regexp"
"strconv"
"strings"
"testing"
)
func TestStaticTable(t *testing.T) {
fromSpec := `
+-------+-----------------------------+---------------+
| 1 | :authority | |
| 2 | :method | GET |
| 3 | :method | POST |
| 4 | :path | / |
| 5 | :path | /index.html |
| 6 | :scheme | http |
| 7 | :scheme | https |
| 8 | :status | 200 |
| 9 | :status | 204 |
| 10 | :status | 206 |
| 11 | :status | 304 |
| 12 | :status | 400 |
| 13 | :status | 404 |
| 14 | :status | 500 |
| 15 | accept-charset | |
| 16 | accept-encoding | gzip, deflate |
| 17 | accept-language | |
| 18 | accept-ranges | |
| 19 | accept | |
| 20 | access-control-allow-origin | |
| 21 | age | |
| 22 | allow | |
| 23 | authorization | |
| 24 | cache-control | |
| 25 | content-disposition | |
| 26 | content-encoding | |
| 27 | content-language | |
| 28 | content-length | |
| 29 | content-location | |
| 30 | content-range | |
| 31 | content-type | |
| 32 | cookie | |
| 33 | date | |
| 34 | etag | |
| 35 | expect | |
| 36 | expires | |
| 37 | from | |
| 38 | host | |
| 39 | if-match | |
| 40 | if-modified-since | |
| 41 | if-none-match | |
| 42 | if-range | |
| 43 | if-unmodified-since | |
| 44 | last-modified | |
| 45 | link | |
| 46 | location | |
| 47 | max-forwards | |
| 48 | proxy-authenticate | |
| 49 | proxy-authorization | |
| 50 | range | |
| 51 | referer | |
| 52 | refresh | |
| 53 | retry-after | |
| 54 | server | |
| 55 | set-cookie | |
| 56 | strict-transport-security | |
| 57 | transfer-encoding | |
| 58 | user-agent | |
| 59 | vary | |
| 60 | via | |
| 61 | www-authenticate | |
+-------+-----------------------------+---------------+
`
bs := bufio.NewScanner(strings.NewReader(fromSpec))
re := regexp.MustCompile(`\| (\d+)\s+\| (\S+)\s*\| (\S(.*\S)?)?\s+\|`)
for bs.Scan() {
l := bs.Text()
if !strings.Contains(l, "|") {
continue
}
m := re.FindStringSubmatch(l)
if m == nil {
continue
}
i, err := strconv.Atoi(m[1])
if err != nil {
t.Errorf("Bogus integer on line %q", l)
continue
}
if i < 1 || i > len(staticTable) {
t.Errorf("Bogus index %d on line %q", i, l)
continue
}
if got, want := staticTable[i-1].Name, m[2]; got != want {
t.Errorf("header index %d name = %q; want %q", i, got, want)
}
if got, want := staticTable[i-1].Value, m[3]; got != want {
t.Errorf("header index %d value = %q; want %q", i, got, want)
}
}
if err := bs.Err(); err != nil {
t.Error(err)
}
}
func (d *Decoder) mustAt(idx int) HeaderField {
if hf, ok := d.at(uint64(idx)); !ok {
panic(fmt.Sprintf("bogus index %d", idx))
} else {
return hf
}
}
func TestDynamicTableAt(t *testing.T) {
d := NewDecoder(4096, nil)
at := d.mustAt
if got, want := at(2), (pair(":method", "GET")); got != want {
t.Errorf("at(2) = %v; want %v", got, want)
}
d.dynTab.add(pair("foo", "bar"))
d.dynTab.add(pair("blake", "miz"))
if got, want := at(len(staticTable)+1), (pair("blake", "miz")); got != want {
t.Errorf("at(dyn 1) = %v; want %v", got, want)
}
if got, want := at(len(staticTable)+2), (pair("foo", "bar")); got != want {
t.Errorf("at(dyn 2) = %v; want %v", got, want)
}
if got, want := at(3), (pair(":method", "POST")); got != want {
t.Errorf("at(3) = %v; want %v", got, want)
}
}
func TestDynamicTableSearch(t *testing.T) {
dt := dynamicTable{}
dt.setMaxSize(4096)
dt.add(pair("foo", "bar"))
dt.add(pair("blake", "miz"))
dt.add(pair(":method", "GET"))
tests := []struct {
hf HeaderField
wantI uint64
wantMatch bool
}{
// Name and Value match
{pair("foo", "bar"), 3, true},
{pair(":method", "GET"), 1, true},
// Only the name matches because Sensitive == true
{HeaderField{"blake", "miz", true}, 2, false},
// Only Name matches
{pair("foo", "..."), 3, false},
{pair("blake", "..."), 2, false},
{pair(":method", "..."), 1, false},
// None match
{pair("foo-", "bar"), 0, false},
}
for _, tt := range tests {
if gotI, gotMatch := dt.search(tt.hf); gotI != tt.wantI || gotMatch != tt.wantMatch {
t.Errorf("d.search(%+v) = %v, %v; want %v, %v", tt.hf, gotI, gotMatch, tt.wantI, tt.wantMatch)
}
}
}
func TestDynamicTableSizeEvict(t *testing.T) {
d := NewDecoder(4096, nil)
if want := uint32(0); d.dynTab.size != want {
t.Fatalf("size = %d; want %d", d.dynTab.size, want)
}
add := d.dynTab.add
add(pair("blake", "eats pizza"))
if want := uint32(15 + 32); d.dynTab.size != want {
t.Fatalf("after pizza, size = %d; want %d", d.dynTab.size, want)
}
add(pair("foo", "bar"))
if want := uint32(15 + 32 + 6 + 32); d.dynTab.size != want {
t.Fatalf("after foo bar, size = %d; want %d", d.dynTab.size, want)
}
d.dynTab.setMaxSize(15 + 32 + 1 /* slop */)
if want := uint32(6 + 32); d.dynTab.size != want {
t.Fatalf("after setMaxSize, size = %d; want %d", d.dynTab.size, want)
}
if got, want := d.mustAt(len(staticTable)+1), (pair("foo", "bar")); got != want {
t.Errorf("at(dyn 1) = %v; want %v", got, want)
}
add(pair("long", strings.Repeat("x", 500)))
if want := uint32(0); d.dynTab.size != want {
t.Fatalf("after big one, size = %d; want %d", d.dynTab.size, want)
}
}
func TestDecoderDecode(t *testing.T) {
tests := []struct {
name string
in []byte
want []HeaderField
wantDynTab []HeaderField // newest entry first
}{
// C.2.1 Literal Header Field with Indexing
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.2.1
{"C.2.1", dehex("400a 6375 7374 6f6d 2d6b 6579 0d63 7573 746f 6d2d 6865 6164 6572"),
[]HeaderField{pair("custom-key", "custom-header")},
[]HeaderField{pair("custom-key", "custom-header")},
},
// C.2.2 Literal Header Field without Indexing
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.2.2
{"C.2.2", dehex("040c 2f73 616d 706c 652f 7061 7468"),
[]HeaderField{pair(":path", "/sample/path")},
[]HeaderField{}},
// C.2.3 Literal Header Field never Indexed
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.2.3
{"C.2.3", dehex("1008 7061 7373 776f 7264 0673 6563 7265 74"),
[]HeaderField{{"password", "secret", true}},
[]HeaderField{}},
// C.2.4 Indexed Header Field
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.2.4
{"C.2.4", []byte("\x82"),
[]HeaderField{pair(":method", "GET")},
[]HeaderField{}},
}
for _, tt := range tests {
d := NewDecoder(4096, nil)
hf, err := d.DecodeFull(tt.in)
if err != nil {
t.Errorf("%s: %v", tt.name, err)
continue
}
if !reflect.DeepEqual(hf, tt.want) {
t.Errorf("%s: Got %v; want %v", tt.name, hf, tt.want)
}
gotDynTab := d.dynTab.reverseCopy()
if !reflect.DeepEqual(gotDynTab, tt.wantDynTab) {
t.Errorf("%s: dynamic table after = %v; want %v", tt.name, gotDynTab, tt.wantDynTab)
}
}
}
func (dt *dynamicTable) reverseCopy() (hf []HeaderField) {
hf = make([]HeaderField, len(dt.ents))
for i := range hf {
hf[i] = dt.ents[len(dt.ents)-1-i]
}
return
}
type encAndWant struct {
enc []byte
want []HeaderField
wantDynTab []HeaderField
wantDynSize uint32
}
// C.3 Request Examples without Huffman Coding
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.3
func TestDecodeC3_NoHuffman(t *testing.T) {
testDecodeSeries(t, 4096, []encAndWant{
{dehex("8286 8441 0f77 7777 2e65 7861 6d70 6c65 2e63 6f6d"),
[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "http"),
pair(":path", "/"),
pair(":authority", "www.example.com"),
},
[]HeaderField{
pair(":authority", "www.example.com"),
},
57,
},
{dehex("8286 84be 5808 6e6f 2d63 6163 6865"),
[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "http"),
pair(":path", "/"),
pair(":authority", "www.example.com"),
pair("cache-control", "no-cache"),
},
[]HeaderField{
pair("cache-control", "no-cache"),
pair(":authority", "www.example.com"),
},
110,
},
{dehex("8287 85bf 400a 6375 7374 6f6d 2d6b 6579 0c63 7573 746f 6d2d 7661 6c75 65"),
[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "https"),
pair(":path", "/index.html"),
pair(":authority", "www.example.com"),
pair("custom-key", "custom-value"),
},
[]HeaderField{
pair("custom-key", "custom-value"),
pair("cache-control", "no-cache"),
pair(":authority", "www.example.com"),
},
164,
},
})
}
// C.4 Request Examples with Huffman Coding
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.4
func TestDecodeC4_Huffman(t *testing.T) {
testDecodeSeries(t, 4096, []encAndWant{
{dehex("8286 8441 8cf1 e3c2 e5f2 3a6b a0ab 90f4 ff"),
[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "http"),
pair(":path", "/"),
pair(":authority", "www.example.com"),
},
[]HeaderField{
pair(":authority", "www.example.com"),
},
57,
},
{dehex("8286 84be 5886 a8eb 1064 9cbf"),
[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "http"),
pair(":path", "/"),
pair(":authority", "www.example.com"),
pair("cache-control", "no-cache"),
},
[]HeaderField{
pair("cache-control", "no-cache"),
pair(":authority", "www.example.com"),
},
110,
},
{dehex("8287 85bf 4088 25a8 49e9 5ba9 7d7f 8925 a849 e95b b8e8 b4bf"),
[]HeaderField{
pair(":method", "GET"),
pair(":scheme", "https"),
pair(":path", "/index.html"),
pair(":authority", "www.example.com"),
pair("custom-key", "custom-value"),
},
[]HeaderField{
pair("custom-key", "custom-value"),
pair("cache-control", "no-cache"),
pair(":authority", "www.example.com"),
},
164,
},
})
}
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.5
// "This section shows several consecutive header lists, corresponding
// to HTTP responses, on the same connection. The HTTP/2 setting
// parameter SETTINGS_HEADER_TABLE_SIZE is set to the value of 256
// octets, causing some evictions to occur."
func TestDecodeC5_ResponsesNoHuff(t *testing.T) {
testDecodeSeries(t, 256, []encAndWant{
{dehex(`
4803 3330 3258 0770 7269 7661 7465 611d
4d6f 6e2c 2032 3120 4f63 7420 3230 3133
2032 303a 3133 3a32 3120 474d 546e 1768
7474 7073 3a2f 2f77 7777 2e65 7861 6d70
6c65 2e63 6f6d
`),
[]HeaderField{
pair(":status", "302"),
pair("cache-control", "private"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("location", "https://www.example.com"),
},
[]HeaderField{
pair("location", "https://www.example.com"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("cache-control", "private"),
pair(":status", "302"),
},
222,
},
{dehex("4803 3330 37c1 c0bf"),
[]HeaderField{
pair(":status", "307"),
pair("cache-control", "private"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("location", "https://www.example.com"),
},
[]HeaderField{
pair(":status", "307"),
pair("location", "https://www.example.com"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("cache-control", "private"),
},
222,
},
{dehex(`
88c1 611d 4d6f 6e2c 2032 3120 4f63 7420
3230 3133 2032 303a 3133 3a32 3220 474d
54c0 5a04 677a 6970 7738 666f 6f3d 4153
444a 4b48 514b 425a 584f 5157 454f 5049
5541 5851 5745 4f49 553b 206d 6178 2d61
6765 3d33 3630 303b 2076 6572 7369 6f6e
3d31
`),
[]HeaderField{
pair(":status", "200"),
pair("cache-control", "private"),
pair("date", "Mon, 21 Oct 2013 20:13:22 GMT"),
pair("location", "https://www.example.com"),
pair("content-encoding", "gzip"),
pair("set-cookie", "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1"),
},
[]HeaderField{
pair("set-cookie", "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1"),
pair("content-encoding", "gzip"),
pair("date", "Mon, 21 Oct 2013 20:13:22 GMT"),
},
215,
},
})
}
// http://http2.github.io/http2-spec/compression.html#rfc.section.C.6
// "This section shows the same examples as the previous section, but
// using Huffman encoding for the literal values. The HTTP/2 setting
// parameter SETTINGS_HEADER_TABLE_SIZE is set to the value of 256
// octets, causing some evictions to occur. The eviction mechanism
// uses the length of the decoded literal values, so the same
// evictions occurs as in the previous section."
func TestDecodeC6_ResponsesHuffman(t *testing.T) {
testDecodeSeries(t, 256, []encAndWant{
{dehex(`
4882 6402 5885 aec3 771a 4b61 96d0 7abe
9410 54d4 44a8 2005 9504 0b81 66e0 82a6
2d1b ff6e 919d 29ad 1718 63c7 8f0b 97c8
e9ae 82ae 43d3
`),
[]HeaderField{
pair(":status", "302"),
pair("cache-control", "private"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("location", "https://www.example.com"),
},
[]HeaderField{
pair("location", "https://www.example.com"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("cache-control", "private"),
pair(":status", "302"),
},
222,
},
{dehex("4883 640e ffc1 c0bf"),
[]HeaderField{
pair(":status", "307"),
pair("cache-control", "private"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("location", "https://www.example.com"),
},
[]HeaderField{
pair(":status", "307"),
pair("location", "https://www.example.com"),
pair("date", "Mon, 21 Oct 2013 20:13:21 GMT"),
pair("cache-control", "private"),
},
222,
},
{dehex(`
88c1 6196 d07a be94 1054 d444 a820 0595
040b 8166 e084 a62d 1bff c05a 839b d9ab
77ad 94e7 821d d7f2 e6c7 b335 dfdf cd5b
3960 d5af 2708 7f36 72c1 ab27 0fb5 291f
9587 3160 65c0 03ed 4ee5 b106 3d50 07
`),
[]HeaderField{
pair(":status", "200"),
pair("cache-control", "private"),
pair("date", "Mon, 21 Oct 2013 20:13:22 GMT"),
pair("location", "https://www.example.com"),
pair("content-encoding", "gzip"),
pair("set-cookie", "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1"),
},
[]HeaderField{
pair("set-cookie", "foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1"),
pair("content-encoding", "gzip"),
pair("date", "Mon, 21 Oct 2013 20:13:22 GMT"),
},
215,
},
})
}
func testDecodeSeries(t *testing.T, size uint32, steps []encAndWant) {
d := NewDecoder(size, nil)
for i, step := range steps {
hf, err := d.DecodeFull(step.enc)
if err != nil {
t.Fatalf("Error at step index %d: %v", i, err)
}
if !reflect.DeepEqual(hf, step.want) {
t.Fatalf("At step index %d: Got headers %v; want %v", i, hf, step.want)
}
gotDynTab := d.dynTab.reverseCopy()
if !reflect.DeepEqual(gotDynTab, step.wantDynTab) {
t.Errorf("After step index %d, dynamic table = %v; want %v", i, gotDynTab, step.wantDynTab)
}
if d.dynTab.size != step.wantDynSize {
t.Errorf("After step index %d, dynamic table size = %v; want %v", i, d.dynTab.size, step.wantDynSize)
}
}
}
func TestHuffmanDecode(t *testing.T) {
tests := []struct {
inHex, want string
}{
{"f1e3 c2e5 f23a 6ba0 ab90 f4ff", "www.example.com"},
{"a8eb 1064 9cbf", "no-cache"},
{"25a8 49e9 5ba9 7d7f", "custom-key"},
{"25a8 49e9 5bb8 e8b4 bf", "custom-value"},
{"6402", "302"},
{"aec3 771a 4b", "private"},
{"d07a be94 1054 d444 a820 0595 040b 8166 e082 a62d 1bff", "Mon, 21 Oct 2013 20:13:21 GMT"},
{"9d29 ad17 1863 c78f 0b97 c8e9 ae82 ae43 d3", "https://www.example.com"},
{"9bd9 ab", "gzip"},
{"94e7 821d d7f2 e6c7 b335 dfdf cd5b 3960 d5af 2708 7f36 72c1 ab27 0fb5 291f 9587 3160 65c0 03ed 4ee5 b106 3d50 07",
"foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1"},
}
for i, tt := range tests {
var buf bytes.Buffer
in, err := hex.DecodeString(strings.Replace(tt.inHex, " ", "", -1))
if err != nil {
t.Errorf("%d. hex input error: %v", i, err)
continue
}
if _, err := HuffmanDecode(&buf, in); err != nil {
t.Errorf("%d. decode error: %v", i, err)
continue
}
if got := buf.String(); tt.want != got {
t.Errorf("%d. decode = %q; want %q", i, got, tt.want)
}
}
}
func TestAppendHuffmanString(t *testing.T) {
tests := []struct {
in, want string
}{
{"www.example.com", "f1e3 c2e5 f23a 6ba0 ab90 f4ff"},
{"no-cache", "a8eb 1064 9cbf"},
{"custom-key", "25a8 49e9 5ba9 7d7f"},
{"custom-value", "25a8 49e9 5bb8 e8b4 bf"},
{"302", "6402"},
{"private", "aec3 771a 4b"},
{"Mon, 21 Oct 2013 20:13:21 GMT", "d07a be94 1054 d444 a820 0595 040b 8166 e082 a62d 1bff"},
{"https://www.example.com", "9d29 ad17 1863 c78f 0b97 c8e9 ae82 ae43 d3"},
{"gzip", "9bd9 ab"},
{"foo=ASDJKHQKBZXOQWEOPIUAXQWEOIU; max-age=3600; version=1",
"94e7 821d d7f2 e6c7 b335 dfdf cd5b 3960 d5af 2708 7f36 72c1 ab27 0fb5 291f 9587 3160 65c0 03ed 4ee5 b106 3d50 07"},
}
for i, tt := range tests {
buf := []byte{}
want := strings.Replace(tt.want, " ", "", -1)
buf = AppendHuffmanString(buf, tt.in)
if got := hex.EncodeToString(buf); want != got {
t.Errorf("%d. encode = %q; want %q", i, got, want)
}
}
}
func TestReadVarInt(t *testing.T) {
type res struct {
i uint64
consumed int
err error
}
tests := []struct {
n byte
p []byte
want res
}{
// Fits in a byte:
{1, []byte{0}, res{0, 1, nil}},
{2, []byte{2}, res{2, 1, nil}},
{3, []byte{6}, res{6, 1, nil}},
{4, []byte{14}, res{14, 1, nil}},
{5, []byte{30}, res{30, 1, nil}},
{6, []byte{62}, res{62, 1, nil}},
{7, []byte{126}, res{126, 1, nil}},
{8, []byte{254}, res{254, 1, nil}},
// Doesn't fit in a byte:
{1, []byte{1}, res{0, 0, errNeedMore}},
{2, []byte{3}, res{0, 0, errNeedMore}},
{3, []byte{7}, res{0, 0, errNeedMore}},
{4, []byte{15}, res{0, 0, errNeedMore}},
{5, []byte{31}, res{0, 0, errNeedMore}},
{6, []byte{63}, res{0, 0, errNeedMore}},
{7, []byte{127}, res{0, 0, errNeedMore}},
{8, []byte{255}, res{0, 0, errNeedMore}},
// Ignoring top bits:
{5, []byte{255, 154, 10}, res{1337, 3, nil}}, // high dummy three bits: 111
{5, []byte{159, 154, 10}, res{1337, 3, nil}}, // high dummy three bits: 100
{5, []byte{191, 154, 10}, res{1337, 3, nil}}, // high dummy three bits: 101
// Extra byte:
{5, []byte{191, 154, 10, 2}, res{1337, 3, nil}}, // extra byte
// Short a byte:
{5, []byte{191, 154}, res{0, 0, errNeedMore}},
// integer overflow:
{1, []byte{255, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128}, res{0, 0, errVarintOverflow}},
}
for _, tt := range tests {
i, remain, err := readVarInt(tt.n, tt.p)
consumed := len(tt.p) - len(remain)
got := res{i, consumed, err}
if got != tt.want {
t.Errorf("readVarInt(%d, %v ~ %x) = %+v; want %+v", tt.n, tt.p, tt.p, got, tt.want)
}
}
}
func dehex(s string) []byte {
s = strings.Replace(s, " ", "", -1)
s = strings.Replace(s, "\n", "", -1)
b, err := hex.DecodeString(s)
if err != nil {
panic(err)
}
return b
}

View File

@ -0,0 +1,159 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package hpack
import (
"bytes"
"io"
"sync"
)
var bufPool = sync.Pool{
New: func() interface{} { return new(bytes.Buffer) },
}
// HuffmanDecode decodes the string in v and writes the expanded
// result to w, returning the number of bytes written to w and the
// Write call's return value. At most one Write call is made.
func HuffmanDecode(w io.Writer, v []byte) (int, error) {
buf := bufPool.Get().(*bytes.Buffer)
buf.Reset()
defer bufPool.Put(buf)
n := rootHuffmanNode
cur, nbits := uint(0), uint8(0)
for _, b := range v {
cur = cur<<8 | uint(b)
nbits += 8
for nbits >= 8 {
n = n.children[byte(cur>>(nbits-8))]
if n.children == nil {
buf.WriteByte(n.sym)
nbits -= n.codeLen
n = rootHuffmanNode
} else {
nbits -= 8
}
}
}
for nbits > 0 {
n = n.children[byte(cur<<(8-nbits))]
if n.children != nil || n.codeLen > nbits {
break
}
buf.WriteByte(n.sym)
nbits -= n.codeLen
n = rootHuffmanNode
}
return w.Write(buf.Bytes())
}
type node struct {
// children is non-nil for internal nodes
children []*node
// The following are only valid if children is nil:
codeLen uint8 // number of bits that led to the output of sym
sym byte // output symbol
}
func newInternalNode() *node {
return &node{children: make([]*node, 256)}
}
var rootHuffmanNode = newInternalNode()
func init() {
for i, code := range huffmanCodes {
if i > 255 {
panic("too many huffman codes")
}
addDecoderNode(byte(i), code, huffmanCodeLen[i])
}
}
func addDecoderNode(sym byte, code uint32, codeLen uint8) {
cur := rootHuffmanNode
for codeLen > 8 {
codeLen -= 8
i := uint8(code >> codeLen)
if cur.children[i] == nil {
cur.children[i] = newInternalNode()
}
cur = cur.children[i]
}
shift := 8 - codeLen
start, end := int(uint8(code<<shift)), int(1<<shift)
for i := start; i < start+end; i++ {
cur.children[i] = &node{sym: sym, codeLen: codeLen}
}
}
// AppendHuffmanString appends s, as encoded in Huffman codes, to dst
// and returns the extended buffer.
func AppendHuffmanString(dst []byte, s string) []byte {
rembits := uint8(8)
for i := 0; i < len(s); i++ {
if rembits == 8 {
dst = append(dst, 0)
}
dst, rembits = appendByteToHuffmanCode(dst, rembits, s[i])
}
if rembits < 8 {
// special EOS symbol
code := uint32(0x3fffffff)
nbits := uint8(30)
t := uint8(code >> (nbits - rembits))
dst[len(dst)-1] |= t
}
return dst
}
// HuffmanEncodeLength returns the number of bytes required to encode
// s in Huffman codes. The result is rounded up to a byte boundary.
func HuffmanEncodeLength(s string) uint64 {
n := uint64(0)
for i := 0; i < len(s); i++ {
n += uint64(huffmanCodeLen[s[i]])
}
return (n + 7) / 8
}
// appendByteToHuffmanCode appends the Huffman code for c to dst and
// returns the extended buffer and the remaining bits in the last
// element. The appending is not byte aligned and the remaining bits
// in the last element of dst are given in rembits.
func appendByteToHuffmanCode(dst []byte, rembits uint8, c byte) ([]byte, uint8) {
code := huffmanCodes[c]
nbits := huffmanCodeLen[c]
for {
if rembits > nbits {
t := uint8(code << (rembits - nbits))
dst[len(dst)-1] |= t
rembits -= nbits
break
}
t := uint8(code >> (nbits - rembits))
dst[len(dst)-1] |= t
nbits -= rembits
rembits = 8
if nbits == 0 {
break
}
dst = append(dst, 0)
}
return dst, rembits
}
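// huffmanRoundTripExample is an illustrative sketch (the helper name is
// made up) of the Huffman helpers above: it encodes s with
// AppendHuffmanString and decodes it back with HuffmanDecode.
func huffmanRoundTripExample(s string) (string, error) {
	enc := AppendHuffmanString(nil, s)
	_ = HuffmanEncodeLength(s) // equals uint64(len(enc)); shown for completeness
	var out bytes.Buffer
	if _, err := HuffmanDecode(&out, enc); err != nil {
		return "", err
	}
	return out.String(), nil
}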

View File

@ -0,0 +1,353 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package hpack
func pair(name, value string) HeaderField {
return HeaderField{Name: name, Value: value}
}
// http://tools.ietf.org/html/draft-ietf-httpbis-header-compression-07#appendix-B
var staticTable = []HeaderField{
pair(":authority", ""), // index 1 (1-based)
pair(":method", "GET"),
pair(":method", "POST"),
pair(":path", "/"),
pair(":path", "/index.html"),
pair(":scheme", "http"),
pair(":scheme", "https"),
pair(":status", "200"),
pair(":status", "204"),
pair(":status", "206"),
pair(":status", "304"),
pair(":status", "400"),
pair(":status", "404"),
pair(":status", "500"),
pair("accept-charset", ""),
pair("accept-encoding", "gzip, deflate"),
pair("accept-language", ""),
pair("accept-ranges", ""),
pair("accept", ""),
pair("access-control-allow-origin", ""),
pair("age", ""),
pair("allow", ""),
pair("authorization", ""),
pair("cache-control", ""),
pair("content-disposition", ""),
pair("content-encoding", ""),
pair("content-language", ""),
pair("content-length", ""),
pair("content-location", ""),
pair("content-range", ""),
pair("content-type", ""),
pair("cookie", ""),
pair("date", ""),
pair("etag", ""),
pair("expect", ""),
pair("expires", ""),
pair("from", ""),
pair("host", ""),
pair("if-match", ""),
pair("if-modified-since", ""),
pair("if-none-match", ""),
pair("if-range", ""),
pair("if-unmodified-since", ""),
pair("last-modified", ""),
pair("link", ""),
pair("location", ""),
pair("max-forwards", ""),
pair("proxy-authenticate", ""),
pair("proxy-authorization", ""),
pair("range", ""),
pair("referer", ""),
pair("refresh", ""),
pair("retry-after", ""),
pair("server", ""),
pair("set-cookie", ""),
pair("strict-transport-security", ""),
pair("transfer-encoding", ""),
pair("user-agent", ""),
pair("vary", ""),
pair("via", ""),
pair("www-authenticate", ""),
}
var huffmanCodes = []uint32{
0x1ff8,
0x7fffd8,
0xfffffe2,
0xfffffe3,
0xfffffe4,
0xfffffe5,
0xfffffe6,
0xfffffe7,
0xfffffe8,
0xffffea,
0x3ffffffc,
0xfffffe9,
0xfffffea,
0x3ffffffd,
0xfffffeb,
0xfffffec,
0xfffffed,
0xfffffee,
0xfffffef,
0xffffff0,
0xffffff1,
0xffffff2,
0x3ffffffe,
0xffffff3,
0xffffff4,
0xffffff5,
0xffffff6,
0xffffff7,
0xffffff8,
0xffffff9,
0xffffffa,
0xffffffb,
0x14,
0x3f8,
0x3f9,
0xffa,
0x1ff9,
0x15,
0xf8,
0x7fa,
0x3fa,
0x3fb,
0xf9,
0x7fb,
0xfa,
0x16,
0x17,
0x18,
0x0,
0x1,
0x2,
0x19,
0x1a,
0x1b,
0x1c,
0x1d,
0x1e,
0x1f,
0x5c,
0xfb,
0x7ffc,
0x20,
0xffb,
0x3fc,
0x1ffa,
0x21,
0x5d,
0x5e,
0x5f,
0x60,
0x61,
0x62,
0x63,
0x64,
0x65,
0x66,
0x67,
0x68,
0x69,
0x6a,
0x6b,
0x6c,
0x6d,
0x6e,
0x6f,
0x70,
0x71,
0x72,
0xfc,
0x73,
0xfd,
0x1ffb,
0x7fff0,
0x1ffc,
0x3ffc,
0x22,
0x7ffd,
0x3,
0x23,
0x4,
0x24,
0x5,
0x25,
0x26,
0x27,
0x6,
0x74,
0x75,
0x28,
0x29,
0x2a,
0x7,
0x2b,
0x76,
0x2c,
0x8,
0x9,
0x2d,
0x77,
0x78,
0x79,
0x7a,
0x7b,
0x7ffe,
0x7fc,
0x3ffd,
0x1ffd,
0xffffffc,
0xfffe6,
0x3fffd2,
0xfffe7,
0xfffe8,
0x3fffd3,
0x3fffd4,
0x3fffd5,
0x7fffd9,
0x3fffd6,
0x7fffda,
0x7fffdb,
0x7fffdc,
0x7fffdd,
0x7fffde,
0xffffeb,
0x7fffdf,
0xffffec,
0xffffed,
0x3fffd7,
0x7fffe0,
0xffffee,
0x7fffe1,
0x7fffe2,
0x7fffe3,
0x7fffe4,
0x1fffdc,
0x3fffd8,
0x7fffe5,
0x3fffd9,
0x7fffe6,
0x7fffe7,
0xffffef,
0x3fffda,
0x1fffdd,
0xfffe9,
0x3fffdb,
0x3fffdc,
0x7fffe8,
0x7fffe9,
0x1fffde,
0x7fffea,
0x3fffdd,
0x3fffde,
0xfffff0,
0x1fffdf,
0x3fffdf,
0x7fffeb,
0x7fffec,
0x1fffe0,
0x1fffe1,
0x3fffe0,
0x1fffe2,
0x7fffed,
0x3fffe1,
0x7fffee,
0x7fffef,
0xfffea,
0x3fffe2,
0x3fffe3,
0x3fffe4,
0x7ffff0,
0x3fffe5,
0x3fffe6,
0x7ffff1,
0x3ffffe0,
0x3ffffe1,
0xfffeb,
0x7fff1,
0x3fffe7,
0x7ffff2,
0x3fffe8,
0x1ffffec,
0x3ffffe2,
0x3ffffe3,
0x3ffffe4,
0x7ffffde,
0x7ffffdf,
0x3ffffe5,
0xfffff1,
0x1ffffed,
0x7fff2,
0x1fffe3,
0x3ffffe6,
0x7ffffe0,
0x7ffffe1,
0x3ffffe7,
0x7ffffe2,
0xfffff2,
0x1fffe4,
0x1fffe5,
0x3ffffe8,
0x3ffffe9,
0xffffffd,
0x7ffffe3,
0x7ffffe4,
0x7ffffe5,
0xfffec,
0xfffff3,
0xfffed,
0x1fffe6,
0x3fffe9,
0x1fffe7,
0x1fffe8,
0x7ffff3,
0x3fffea,
0x3fffeb,
0x1ffffee,
0x1ffffef,
0xfffff4,
0xfffff5,
0x3ffffea,
0x7ffff4,
0x3ffffeb,
0x7ffffe6,
0x3ffffec,
0x3ffffed,
0x7ffffe7,
0x7ffffe8,
0x7ffffe9,
0x7ffffea,
0x7ffffeb,
0xffffffe,
0x7ffffec,
0x7ffffed,
0x7ffffee,
0x7ffffef,
0x7fffff0,
0x3ffffee,
}
var huffmanCodeLen = []uint8{
13, 23, 28, 28, 28, 28, 28, 28, 28, 24, 30, 28, 28, 30, 28, 28,
28, 28, 28, 28, 28, 28, 30, 28, 28, 28, 28, 28, 28, 28, 28, 28,
6, 10, 10, 12, 13, 6, 8, 11, 10, 10, 8, 11, 8, 6, 6, 6,
5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 7, 8, 15, 6, 12, 10,
13, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
7, 7, 7, 7, 7, 7, 7, 7, 8, 7, 8, 13, 19, 13, 14, 6,
15, 5, 6, 5, 6, 5, 6, 6, 6, 5, 7, 7, 6, 6, 6, 5,
6, 7, 6, 5, 5, 6, 7, 7, 7, 7, 7, 15, 11, 14, 13, 28,
20, 22, 20, 20, 22, 22, 22, 23, 22, 23, 23, 23, 23, 23, 24, 23,
24, 24, 22, 23, 24, 23, 23, 23, 23, 21, 22, 23, 22, 23, 23, 24,
22, 21, 20, 22, 22, 23, 23, 21, 23, 22, 22, 24, 21, 22, 23, 23,
21, 21, 22, 21, 23, 22, 23, 23, 20, 22, 22, 22, 23, 22, 22, 23,
26, 26, 20, 19, 22, 23, 22, 25, 26, 26, 26, 27, 27, 26, 24, 25,
19, 21, 26, 27, 27, 26, 27, 24, 21, 21, 26, 26, 28, 27, 27, 27,
20, 24, 20, 21, 22, 21, 21, 23, 22, 22, 25, 25, 24, 24, 26, 23,
26, 27, 26, 26, 27, 27, 27, 27, 27, 28, 27, 27, 27, 27, 27, 26,
}

View File

@ -0,0 +1,249 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
// Package http2 implements the HTTP/2 protocol.
//
// This is a work in progress. This package is low-level and intended
// to be used directly by very few people. Most users will use it
// indirectly through integration with the net/http package. See
// ConfigureServer. That ConfigureServer call will likely be automatic
// or available via an empty import in the future.
//
// This package currently targets draft-14. See http://http2.github.io/
package http2
import (
"bufio"
"fmt"
"io"
"net/http"
"strconv"
"sync"
)
var VerboseLogs = false
const (
// ClientPreface is the string that must be sent by new
// connections from clients.
ClientPreface = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"
// SETTINGS_MAX_FRAME_SIZE default
// http://http2.github.io/http2-spec/#rfc.section.6.5.2
initialMaxFrameSize = 16384
// NextProtoTLS is the NPN/ALPN protocol negotiated during
// HTTP/2's TLS setup.
NextProtoTLS = "h2"
// http://http2.github.io/http2-spec/#SettingValues
initialHeaderTableSize = 4096
initialWindowSize = 65535 // 6.9.2 Initial Flow Control Window Size
defaultMaxReadFrameSize = 1 << 20
)
var (
clientPreface = []byte(ClientPreface)
)
type streamState int
const (
stateIdle streamState = iota
stateOpen
stateHalfClosedLocal
stateHalfClosedRemote
stateResvLocal
stateResvRemote
stateClosed
)
var stateName = [...]string{
stateIdle: "Idle",
stateOpen: "Open",
stateHalfClosedLocal: "HalfClosedLocal",
stateHalfClosedRemote: "HalfClosedRemote",
stateResvLocal: "ResvLocal",
stateResvRemote: "ResvRemote",
stateClosed: "Closed",
}
func (st streamState) String() string {
return stateName[st]
}
// Setting is a setting parameter: which setting it is, and its value.
type Setting struct {
// ID is which setting is being set.
// See http://http2.github.io/http2-spec/#SettingValues
ID SettingID
// Val is the value.
Val uint32
}
func (s Setting) String() string {
return fmt.Sprintf("[%v = %d]", s.ID, s.Val)
}
// Valid reports whether the setting is valid.
func (s Setting) Valid() error {
// Limits and error codes from 6.5.2 Defined SETTINGS Parameters
switch s.ID {
case SettingEnablePush:
if s.Val != 1 && s.Val != 0 {
return ConnectionError(ErrCodeProtocol)
}
case SettingInitialWindowSize:
if s.Val > 1<<31-1 {
return ConnectionError(ErrCodeFlowControl)
}
case SettingMaxFrameSize:
if s.Val < 16384 || s.Val > 1<<24-1 {
return ConnectionError(ErrCodeProtocol)
}
}
return nil
}
// A SettingID is an HTTP/2 setting as defined in
// http://http2.github.io/http2-spec/#iana-settings
type SettingID uint16
const (
SettingHeaderTableSize SettingID = 0x1
SettingEnablePush SettingID = 0x2
SettingMaxConcurrentStreams SettingID = 0x3
SettingInitialWindowSize SettingID = 0x4
SettingMaxFrameSize SettingID = 0x5
SettingMaxHeaderListSize SettingID = 0x6
)
var settingName = map[SettingID]string{
SettingHeaderTableSize: "HEADER_TABLE_SIZE",
SettingEnablePush: "ENABLE_PUSH",
SettingMaxConcurrentStreams: "MAX_CONCURRENT_STREAMS",
SettingInitialWindowSize: "INITIAL_WINDOW_SIZE",
SettingMaxFrameSize: "MAX_FRAME_SIZE",
SettingMaxHeaderListSize: "MAX_HEADER_LIST_SIZE",
}
func (s SettingID) String() string {
if v, ok := settingName[s]; ok {
return v
}
return fmt.Sprintf("UNKNOWN_SETTING_%d", uint16(s))
}
func validHeader(v string) bool {
if len(v) == 0 {
return false
}
for _, r := range v {
// "Just as in HTTP/1.x, header field names are
// strings of ASCII characters that are compared in a
// case-insensitive fashion. However, header field
// names MUST be converted to lowercase prior to their
// encoding in HTTP/2. "
if r >= 127 || ('A' <= r && r <= 'Z') {
return false
}
}
return true
}
var httpCodeStringCommon = map[int]string{} // n -> strconv.Itoa(n)
func init() {
for i := 100; i <= 999; i++ {
if v := http.StatusText(i); v != "" {
httpCodeStringCommon[i] = strconv.Itoa(i)
}
}
}
func httpCodeString(code int) string {
if s, ok := httpCodeStringCommon[code]; ok {
return s
}
return strconv.Itoa(code)
}
// from pkg io
type stringWriter interface {
WriteString(s string) (n int, err error)
}
// A gate lets two goroutines coordinate their activities.
type gate chan struct{}
func (g gate) Done() { g <- struct{}{} }
func (g gate) Wait() { <-g }
// A closeWaiter is like a sync.WaitGroup but only goes 1 to 0 (open to closed).
type closeWaiter chan struct{}
// Init makes a closeWaiter usable.
// It exists so that a closeWaiter value can be placed inside a
// larger struct and initialized in place, without a separate
// allocation.
func (cw *closeWaiter) Init() {
*cw = make(chan struct{})
}
// Close marks the closeWaiter as closed and unblocks any waiters.
func (cw closeWaiter) Close() {
close(cw)
}
// Wait waits for the closeWaiter to become closed.
func (cw closeWaiter) Wait() {
<-cw
}
// bufferedWriter is a buffered writer that writes to w.
// Its buffered writer is lazily allocated as needed, to minimize
// idle memory usage with many connections.
type bufferedWriter struct {
w io.Writer // immutable
bw *bufio.Writer // non-nil when data is buffered
}
func newBufferedWriter(w io.Writer) *bufferedWriter {
return &bufferedWriter{w: w}
}
var bufWriterPool = sync.Pool{
New: func() interface{} {
// TODO: pick something better? this is a bit under
// (3 x typical 1500 byte MTU) at least.
return bufio.NewWriterSize(nil, 4<<10)
},
}
func (w *bufferedWriter) Write(p []byte) (n int, err error) {
if w.bw == nil {
bw := bufWriterPool.Get().(*bufio.Writer)
bw.Reset(w.w)
w.bw = bw
}
return w.bw.Write(p)
}
func (w *bufferedWriter) Flush() error {
bw := w.bw
if bw == nil {
return nil
}
err := bw.Flush()
bw.Reset(nil)
bufWriterPool.Put(bw)
w.bw = nil
return err
}
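As a quick illustration of the Setting helpers defined above, the following hypothetical in-package snippet (not part of the vendored file) shows Valid applying the limits from 6.5.2 and String handling both known and unknown IDs:

package http2

import "fmt"

// settingSketch is a hypothetical illustration of Setting.String,
// SettingID.String and Setting.Valid as defined above.
func settingSketch() {
	ok := Setting{ID: SettingMaxFrameSize, Val: 1 << 20}
	fmt.Println(ok, ok.Valid()) // [MAX_FRAME_SIZE = 1048576] <nil>

	push := Setting{ID: SettingEnablePush, Val: 2}
	fmt.Println(push.Valid() != nil) // true: ENABLE_PUSH only allows 0 or 1

	unknown := Setting{ID: SettingID(0x99), Val: 7}
	fmt.Println(unknown) // [UNKNOWN_SETTING_153 = 7]
}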

View File

@@ -0,0 +1,152 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"bytes"
"errors"
"flag"
"fmt"
"net/http"
"os/exec"
"strconv"
"strings"
"testing"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/bradfitz/http2/hpack"
)
var knownFailing = flag.Bool("known_failing", false, "Run known-failing tests.")
func condSkipFailingTest(t *testing.T) {
if !*knownFailing {
t.Skip("Skipping known-failing test without --known_failing")
}
}
func init() {
DebugGoroutines = true
flag.BoolVar(&VerboseLogs, "verboseh2", false, "Verbose HTTP/2 debug logging")
}
func TestSettingString(t *testing.T) {
tests := []struct {
s Setting
want string
}{
{Setting{SettingMaxFrameSize, 123}, "[MAX_FRAME_SIZE = 123]"},
{Setting{1<<16 - 1, 123}, "[UNKNOWN_SETTING_65535 = 123]"},
}
for i, tt := range tests {
got := fmt.Sprint(tt.s)
if got != tt.want {
t.Errorf("%d. for %#v, string = %q; want %q", i, tt.s, got, tt.want)
}
}
}
type twriter struct {
t testing.TB
st *serverTester // optional
}
func (w twriter) Write(p []byte) (n int, err error) {
if w.st != nil {
ps := string(p)
for _, phrase := range w.st.logFilter {
if strings.Contains(ps, phrase) {
return len(p), nil // no logging
}
}
}
w.t.Logf("%s", p)
return len(p), nil
}
// like encodeHeader, but doesn't add implicit pseudo headers.
func encodeHeaderNoImplicit(t *testing.T, headers ...string) []byte {
var buf bytes.Buffer
enc := hpack.NewEncoder(&buf)
for len(headers) > 0 {
k, v := headers[0], headers[1]
headers = headers[2:]
if err := enc.WriteField(hpack.HeaderField{Name: k, Value: v}); err != nil {
t.Fatalf("HPACK encoding error for %q/%q: %v", k, v, err)
}
}
return buf.Bytes()
}
// Verify that curl has http2.
func requireCurl(t *testing.T) {
out, err := dockerLogs(curl(t, "--version"))
if err != nil {
t.Skipf("failed to determine curl features; skipping test")
}
if !strings.Contains(string(out), "HTTP2") {
t.Skip("curl doesn't support HTTP2; skipping test")
}
}
func curl(t *testing.T, args ...string) (container string) {
out, err := exec.Command("docker", append([]string{"run", "-d", "--net=host", "gohttp2/curl"}, args...)...).CombinedOutput()
if err != nil {
t.Skipf("Failed to run curl in docker: %v, %s", err, out)
}
return strings.TrimSpace(string(out))
}
type puppetCommand struct {
fn func(w http.ResponseWriter, r *http.Request)
done chan<- bool
}
type handlerPuppet struct {
ch chan puppetCommand
}
func newHandlerPuppet() *handlerPuppet {
return &handlerPuppet{
ch: make(chan puppetCommand),
}
}
func (p *handlerPuppet) act(w http.ResponseWriter, r *http.Request) {
for cmd := range p.ch {
cmd.fn(w, r)
cmd.done <- true
}
}
func (p *handlerPuppet) done() { close(p.ch) }
func (p *handlerPuppet) do(fn func(http.ResponseWriter, *http.Request)) {
done := make(chan bool)
p.ch <- puppetCommand{fn, done}
<-done
}
func dockerLogs(container string) ([]byte, error) {
out, err := exec.Command("docker", "wait", container).CombinedOutput()
if err != nil {
return out, err
}
exitStatus, err := strconv.Atoi(strings.TrimSpace(string(out)))
if err != nil {
return out, errors.New("unexpected exit status from docker wait")
}
out, err = exec.Command("docker", "logs", container).CombinedOutput()
exec.Command("docker", "rm", container).Run()
if err == nil && exitStatus != 0 {
err = fmt.Errorf("exit status %d: %s", exitStatus, out)
}
return out, err
}
func kill(container string) {
exec.Command("docker", "kill", container).Run()
exec.Command("docker", "rm", container).Run()
}
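The docker-based curl helpers above are meant to be used together. A hypothetical sketch of the intended flow inside an integration test (the URL and flags are placeholders):

package http2

import "testing"

// curlSketch is a hypothetical sketch of how requireCurl, curl, dockerLogs
// and kill are combined; it is not part of the vendored tests.
func curlSketch(t *testing.T) {
	requireCurl(t) // skips if the gohttp2/curl image or HTTP2 support is missing

	container := curl(t, "-k", "https://127.0.0.1:8443/") // placeholder URL
	defer kill(container)

	out, err := dockerLogs(container) // waits for the container, then collects its output
	if err != nil {
		t.Fatalf("curl failed: %v, %s", err, out)
	}
	t.Logf("curl output: %s", out)
}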

View File

@@ -0,0 +1,43 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"sync"
)
type pipe struct {
b buffer
c sync.Cond
m sync.Mutex
}
// Read waits until data is available and copies bytes
// from the buffer into p.
func (r *pipe) Read(p []byte) (n int, err error) {
r.c.L.Lock()
defer r.c.L.Unlock()
for r.b.Len() == 0 && !r.b.closed {
r.c.Wait()
}
return r.b.Read(p)
}
// Write copies bytes from p into the buffer and wakes a reader.
// It is an error to write more data than the buffer can hold.
func (w *pipe) Write(p []byte) (n int, err error) {
w.c.L.Lock()
defer w.c.L.Unlock()
defer w.c.Signal()
return w.b.Write(p)
}
func (c *pipe) Close(err error) {
c.c.L.Lock()
defer c.c.L.Unlock()
defer c.c.Signal()
c.b.Close(err)
}

View File

@@ -0,0 +1,24 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"errors"
"testing"
)
func TestPipeClose(t *testing.T) {
var p pipe
p.c.L = &p.m
a := errors.New("a")
b := errors.New("b")
p.Close(a)
p.Close(b)
_, err := p.Read(make([]byte, 1))
if err != a {
t.Errorf("err = %v want %v", err, a)
}
}

View File

@@ -0,0 +1,121 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"testing"
)
func TestPriority(t *testing.T) {
// A -> B
// move A's parent to B
streams := make(map[uint32]*stream)
a := &stream{
parent: nil,
weight: 16,
}
streams[1] = a
b := &stream{
parent: a,
weight: 16,
}
streams[2] = b
adjustStreamPriority(streams, 1, PriorityParam{
Weight: 20,
StreamDep: 2,
})
if a.parent != b {
t.Errorf("Expected A's parent to be B")
}
if a.weight != 20 {
t.Errorf("Expected A's weight to be 20; got %d", a.weight)
}
if b.parent != nil {
t.Errorf("Expected B to have no parent")
}
if b.weight != 16 {
t.Errorf("Expected B's weight to be 16; got %d", b.weight)
}
}
func TestPriorityExclusiveZero(t *testing.T) {
// A, B, and C are all children of the 0 stream.
// Exclusive reprioritization to any of the streams
// should bring the rest of the streams under the
// reprioritized stream.
streams := make(map[uint32]*stream)
a := &stream{
parent: nil,
weight: 16,
}
streams[1] = a
b := &stream{
parent: nil,
weight: 16,
}
streams[2] = b
c := &stream{
parent: nil,
weight: 16,
}
streams[3] = c
adjustStreamPriority(streams, 3, PriorityParam{
Weight: 20,
StreamDep: 0,
Exclusive: true,
})
if a.parent != c {
t.Errorf("Expected A's parent to be C")
}
if a.weight != 16 {
t.Errorf("Expected A's weight to be 16; got %d", a.weight)
}
if b.parent != c {
t.Errorf("Expected B's parent to be C")
}
if b.weight != 16 {
t.Errorf("Expected B's weight to be 16; got %d", b.weight)
}
if c.parent != nil {
t.Errorf("Expected C to have no parent")
}
if c.weight != 20 {
t.Errorf("Expected C's weight to be 20; got %d", b.weight)
}
}
func TestPriorityOwnParent(t *testing.T) {
streams := make(map[uint32]*stream)
a := &stream{
parent: nil,
weight: 16,
}
streams[1] = a
b := &stream{
parent: a,
weight: 16,
}
streams[2] = b
adjustStreamPriority(streams, 1, PriorityParam{
Weight: 20,
StreamDep: 1,
})
if a.parent != nil {
t.Errorf("Expected A's parent to be nil")
}
if a.weight != 20 {
t.Errorf("Expected A's weight to be 20; got %d", a.weight)
}
if b.parent != a {
t.Errorf("Expected B's parent to be A")
}
if b.weight != 16 {
t.Errorf("Expected B's weight to be 16; got %d", b.weight)
}
}

1777
Godeps/_workspace/src/github.com/bradfitz/http2/server.go generated vendored Normal file

File diff suppressed because it is too large Load Diff


View File

@@ -0,0 +1,553 @@
// Copyright 2015 The Go Authors.
// See https://go.googlesource.com/go/+/master/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://go.googlesource.com/go/+/master/LICENSE
package http2
import (
"bufio"
"bytes"
"crypto/tls"
"errors"
"fmt"
"io"
"log"
"net"
"net/http"
"strconv"
"strings"
"sync"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/bradfitz/http2/hpack"
)
type Transport struct {
Fallback http.RoundTripper
// TODO: remove this and make more general with a TLS dial hook, like http
InsecureTLSDial bool
connMu sync.Mutex
conns map[string][]*clientConn // key is host:port
}
type clientConn struct {
t *Transport
tconn *tls.Conn
tlsState *tls.ConnectionState
connKey []string // key(s) this connection is cached in, in t.conns
readerDone chan struct{} // closed on error
readerErr error // set before readerDone is closed
hdec *hpack.Decoder
nextRes *http.Response
mu sync.Mutex
closed bool
goAway *GoAwayFrame // if non-nil, the GoAwayFrame we received
streams map[uint32]*clientStream
nextStreamID uint32
bw *bufio.Writer
werr error // first write error that has occurred
br *bufio.Reader
fr *Framer
// Settings from peer:
maxFrameSize uint32
maxConcurrentStreams uint32
initialWindowSize uint32
hbuf bytes.Buffer // HPACK encoder writes into this
henc *hpack.Encoder
}
type clientStream struct {
ID uint32
resc chan resAndError
pw *io.PipeWriter
pr *io.PipeReader
}
type stickyErrWriter struct {
w io.Writer
err *error
}
func (sew stickyErrWriter) Write(p []byte) (n int, err error) {
if *sew.err != nil {
return 0, *sew.err
}
n, err = sew.w.Write(p)
*sew.err = err
return
}
func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
if req.URL.Scheme != "https" {
if t.Fallback == nil {
return nil, errors.New("http2: unsupported scheme and no Fallback")
}
return t.Fallback.RoundTrip(req)
}
host, port, err := net.SplitHostPort(req.URL.Host)
if err != nil {
host = req.URL.Host
port = "443"
}
for {
cc, err := t.getClientConn(host, port)
if err != nil {
return nil, err
}
res, err := cc.roundTrip(req)
if shouldRetryRequest(err) { // TODO: or clientconn is overloaded (too many outstanding requests)?
continue
}
if err != nil {
return nil, err
}
return res, nil
}
}
// CloseIdleConnections closes any connections which were previously
// connected from previous requests but are now sitting idle.
// It does not interrupt any connections currently in use.
func (t *Transport) CloseIdleConnections() {
t.connMu.Lock()
defer t.connMu.Unlock()
for _, vv := range t.conns {
for _, cc := range vv {
cc.closeIfIdle()
}
}
}
var errClientConnClosed = errors.New("http2: client conn is closed")
func shouldRetryRequest(err error) bool {
// TODO: or GOAWAY graceful shutdown stuff
return err == errClientConnClosed
}
func (t *Transport) removeClientConn(cc *clientConn) {
t.connMu.Lock()
defer t.connMu.Unlock()
for _, key := range cc.connKey {
vv, ok := t.conns[key]
if !ok {
continue
}
newList := filterOutClientConn(vv, cc)
if len(newList) > 0 {
t.conns[key] = newList
} else {
delete(t.conns, key)
}
}
}
func filterOutClientConn(in []*clientConn, exclude *clientConn) []*clientConn {
out := in[:0]
for _, v := range in {
if v != exclude {
out = append(out, v)
}
}
return out
}
func (t *Transport) getClientConn(host, port string) (*clientConn, error) {
t.connMu.Lock()
defer t.connMu.Unlock()
key := net.JoinHostPort(host, port)
for _, cc := range t.conns[key] {
if cc.canTakeNewRequest() {
return cc, nil
}
}
if t.conns == nil {
t.conns = make(map[string][]*clientConn)
}
cc, err := t.newClientConn(host, port, key)
if err != nil {
return nil, err
}
t.conns[key] = append(t.conns[key], cc)
return cc, nil
}
func (t *Transport) newClientConn(host, port, key string) (*clientConn, error) {
cfg := &tls.Config{
ServerName: host,
NextProtos: []string{NextProtoTLS},
InsecureSkipVerify: t.InsecureTLSDial,
}
tconn, err := tls.Dial("tcp", host+":"+port, cfg)
if err != nil {
return nil, err
}
if err := tconn.Handshake(); err != nil {
return nil, err
}
if !t.InsecureTLSDial {
if err := tconn.VerifyHostname(cfg.ServerName); err != nil {
return nil, err
}
}
state := tconn.ConnectionState()
if p := state.NegotiatedProtocol; p != NextProtoTLS {
// TODO(bradfitz): fall back to Fallback
return nil, fmt.Errorf("bad protocol: %v", p)
}
if !state.NegotiatedProtocolIsMutual {
return nil, errors.New("could not negotiate protocol mutually")
}
if _, err := tconn.Write(clientPreface); err != nil {
return nil, err
}
cc := &clientConn{
t: t,
tconn: tconn,
connKey: []string{key}, // TODO: cert's validated hostnames too
tlsState: &state,
readerDone: make(chan struct{}),
nextStreamID: 1,
maxFrameSize: 16 << 10, // spec default
initialWindowSize: 65535, // spec default
maxConcurrentStreams: 1000, // "infinite", per spec. 1000 seems good enough.
streams: make(map[uint32]*clientStream),
}
cc.bw = bufio.NewWriter(stickyErrWriter{tconn, &cc.werr})
cc.br = bufio.NewReader(tconn)
cc.fr = NewFramer(cc.bw, cc.br)
cc.henc = hpack.NewEncoder(&cc.hbuf)
cc.fr.WriteSettings()
// TODO: re-send more conn-level flow control tokens when server uses all these.
cc.fr.WriteWindowUpdate(0, 1<<30) // um, 0x7fffffff doesn't work to Google? it hangs?
cc.bw.Flush()
if cc.werr != nil {
return nil, cc.werr
}
// Read the obligatory SETTINGS frame
f, err := cc.fr.ReadFrame()
if err != nil {
return nil, err
}
sf, ok := f.(*SettingsFrame)
if !ok {
return nil, fmt.Errorf("expected settings frame, got: %T", f)
}
cc.fr.WriteSettingsAck()
cc.bw.Flush()
sf.ForeachSetting(func(s Setting) error {
switch s.ID {
case SettingMaxFrameSize:
cc.maxFrameSize = s.Val
case SettingMaxConcurrentStreams:
cc.maxConcurrentStreams = s.Val
case SettingInitialWindowSize:
cc.initialWindowSize = s.Val
default:
// TODO(bradfitz): handle more
log.Printf("Unhandled Setting: %v", s)
}
return nil
})
// TODO: figure out henc size
cc.hdec = hpack.NewDecoder(initialHeaderTableSize, cc.onNewHeaderField)
go cc.readLoop()
return cc, nil
}
func (cc *clientConn) setGoAway(f *GoAwayFrame) {
cc.mu.Lock()
defer cc.mu.Unlock()
cc.goAway = f
}
func (cc *clientConn) canTakeNewRequest() bool {
cc.mu.Lock()
defer cc.mu.Unlock()
return cc.goAway == nil &&
int64(len(cc.streams)+1) < int64(cc.maxConcurrentStreams) &&
cc.nextStreamID < 2147483647
}
func (cc *clientConn) closeIfIdle() {
cc.mu.Lock()
if len(cc.streams) > 0 {
cc.mu.Unlock()
return
}
cc.closed = true
// TODO: do clients send GOAWAY too? maybe? Just Close:
cc.mu.Unlock()
cc.tconn.Close()
}
func (cc *clientConn) roundTrip(req *http.Request) (*http.Response, error) {
cc.mu.Lock()
if cc.closed {
cc.mu.Unlock()
return nil, errClientConnClosed
}
cs := cc.newStream()
hasBody := false // TODO
// we send: HEADERS[+CONTINUATION] + (DATA?)
hdrs := cc.encodeHeaders(req)
first := true
for len(hdrs) > 0 {
chunk := hdrs
if len(chunk) > int(cc.maxFrameSize) {
chunk = chunk[:cc.maxFrameSize]
}
hdrs = hdrs[len(chunk):]
endHeaders := len(hdrs) == 0
if first {
cc.fr.WriteHeaders(HeadersFrameParam{
StreamID: cs.ID,
BlockFragment: chunk,
EndStream: !hasBody,
EndHeaders: endHeaders,
})
first = false
} else {
cc.fr.WriteContinuation(cs.ID, endHeaders, chunk)
}
}
cc.bw.Flush()
werr := cc.werr
cc.mu.Unlock()
if hasBody {
// TODO: write data. and it should probably be interleaved:
// go ... io.Copy(dataFrameWriter{cc, cs, ...}, req.Body) ... etc
}
if werr != nil {
return nil, werr
}
re := <-cs.resc
if re.err != nil {
return nil, re.err
}
res := re.res
res.Request = req
res.TLS = cc.tlsState
return res, nil
}
// requires cc.mu be held.
func (cc *clientConn) encodeHeaders(req *http.Request) []byte {
cc.hbuf.Reset()
// TODO(bradfitz): figure out :authority-vs-Host stuff between http2 and Go
host := req.Host
if host == "" {
host = req.URL.Host
}
path := req.URL.Path
if path == "" {
path = "/"
}
cc.writeHeader(":authority", host) // probably not right for all sites
cc.writeHeader(":method", req.Method)
cc.writeHeader(":path", path)
cc.writeHeader(":scheme", "https")
for k, vv := range req.Header {
lowKey := strings.ToLower(k)
if lowKey == "host" {
continue
}
for _, v := range vv {
cc.writeHeader(lowKey, v)
}
}
return cc.hbuf.Bytes()
}
func (cc *clientConn) writeHeader(name, value string) {
log.Printf("sending %q = %q", name, value)
cc.henc.WriteField(hpack.HeaderField{Name: name, Value: value})
}
type resAndError struct {
res *http.Response
err error
}
// requires cc.mu be held.
func (cc *clientConn) newStream() *clientStream {
cs := &clientStream{
ID: cc.nextStreamID,
resc: make(chan resAndError, 1),
}
cc.nextStreamID += 2
cc.streams[cs.ID] = cs
return cs
}
func (cc *clientConn) streamByID(id uint32, andRemove bool) *clientStream {
cc.mu.Lock()
defer cc.mu.Unlock()
cs := cc.streams[id]
if andRemove {
delete(cc.streams, id)
}
return cs
}
// runs in its own goroutine.
func (cc *clientConn) readLoop() {
defer cc.t.removeClientConn(cc)
defer close(cc.readerDone)
activeRes := map[uint32]*clientStream{} // keyed by streamID
// Close any response bodies if the server closes prematurely.
// TODO: also do this if we've written the headers but not
// gotten a response yet.
defer func() {
err := cc.readerErr
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
for _, cs := range activeRes {
cs.pw.CloseWithError(err)
}
}()
// continueStreamID is the stream ID we're waiting for
// continuation frames for.
var continueStreamID uint32
for {
f, err := cc.fr.ReadFrame()
if err != nil {
cc.readerErr = err
return
}
log.Printf("Transport received %v: %#v", f.Header(), f)
streamID := f.Header().StreamID
_, isContinue := f.(*ContinuationFrame)
if isContinue {
if streamID != continueStreamID {
log.Printf("Protocol violation: got CONTINUATION with id %d; want %d", streamID, continueStreamID)
cc.readerErr = ConnectionError(ErrCodeProtocol)
return
}
} else if continueStreamID != 0 {
// Continue frames need to be adjacent in the stream
// and we were in the middle of headers.
log.Printf("Protocol violation: got %T for stream %d, want CONTINUATION for %d", f, streamID, continueStreamID)
cc.readerErr = ConnectionError(ErrCodeProtocol)
return
}
if streamID%2 == 0 {
// Ignore streams pushed from the server for now.
// These always have an even stream id.
continue
}
streamEnded := false
if ff, ok := f.(streamEnder); ok {
streamEnded = ff.StreamEnded()
}
cs := cc.streamByID(streamID, streamEnded)
if cs == nil {
log.Printf("Received frame for untracked stream ID %d", streamID)
continue
}
switch f := f.(type) {
case *HeadersFrame:
cc.nextRes = &http.Response{
Proto: "HTTP/2.0",
ProtoMajor: 2,
Header: make(http.Header),
}
cs.pr, cs.pw = io.Pipe()
cc.hdec.Write(f.HeaderBlockFragment())
case *ContinuationFrame:
cc.hdec.Write(f.HeaderBlockFragment())
case *DataFrame:
log.Printf("DATA: %q", f.Data())
cs.pw.Write(f.Data())
case *GoAwayFrame:
cc.t.removeClientConn(cc)
if f.ErrCode != 0 {
// TODO: deal with GOAWAY more. particularly the error code
log.Printf("transport got GOAWAY with error code = %v", f.ErrCode)
}
cc.setGoAway(f)
default:
log.Printf("Transport: unhandled response frame type %T", f)
}
headersEnded := false
if he, ok := f.(headersEnder); ok {
headersEnded = he.HeadersEnded()
if headersEnded {
continueStreamID = 0
} else {
continueStreamID = streamID
}
}
if streamEnded {
cs.pw.Close()
delete(activeRes, streamID)
}
if headersEnded {
if cs == nil {
panic("couldn't find stream") // TODO be graceful
}
// TODO: set the Body to one which notes the
// Close and also sends the server a
// RST_STREAM
cc.nextRes.Body = cs.pr
res := cc.nextRes
activeRes[streamID] = cs
cs.resc <- resAndError{res: res}
}
}
}
func (cc *clientConn) onNewHeaderField(f hpack.HeaderField) {
// TODO: verify pseudo headers come before non-pseudo headers
// TODO: verify the status is set
log.Printf("Header field: %+v", f)
if f.Name == ":status" {
code, err := strconv.Atoi(f.Value)
if err != nil {
panic("TODO: be graceful")
}
cc.nextRes.Status = f.Value + " " + http.StatusText(code)
cc.nextRes.StatusCode = code
return
}
if strings.HasPrefix(f.Name, ":") {
// "Endpoints MUST NOT generate pseudo-header fields other than those defined in this document."
// TODO: treat as invalid?
return
}
cc.nextRes.Header.Add(http.CanonicalHeaderKey(f.Name), f.Value)
}
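A minimal client sketch driving the Transport above directly (hypothetical, and essentially what TestTransportExternal in the next file does); it assumes an HTTP/2-capable https endpoint and uses only the exported API shown here:

package http2

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// transportSketch is a hypothetical illustration of Transport.RoundTrip.
// Non-https URLs would need Fallback set, since RoundTrip here only speaks
// HTTP/2 over TLS.
func transportSketch() error {
	tr := &Transport{}
	defer tr.CloseIdleConnections()

	req, err := http.NewRequest("GET", "https://http2.golang.org/", nil)
	if err != nil {
		return err
	}
	res, err := tr.RoundTrip(req)
	if err != nil {
		return err
	}
	defer res.Body.Close()

	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		return err
	}
	fmt.Println(res.Status, len(body), "bytes")
	return nil
}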

View File

@@ -0,0 +1,168 @@
// Copyright 2015 The Go Authors.
// See https://go.googlesource.com/go/+/master/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://go.googlesource.com/go/+/master/LICENSE
package http2
import (
"flag"
"io"
"io/ioutil"
"net/http"
"os"
"reflect"
"strings"
"testing"
"time"
)
var (
extNet = flag.Bool("extnet", false, "do external network tests")
transportHost = flag.String("transporthost", "http2.golang.org", "hostname to use for TestTransport")
insecure = flag.Bool("insecure", false, "insecure TLS dials")
)
func TestTransportExternal(t *testing.T) {
if !*extNet {
t.Skip("skipping external network test")
}
req, _ := http.NewRequest("GET", "https://"+*transportHost+"/", nil)
rt := &Transport{
InsecureTLSDial: *insecure,
}
res, err := rt.RoundTrip(req)
if err != nil {
t.Fatalf("%v", err)
}
res.Write(os.Stdout)
}
func TestTransport(t *testing.T) {
const body = "sup"
st := newServerTester(t, func(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, body)
})
defer st.Close()
tr := &Transport{InsecureTLSDial: true}
defer tr.CloseIdleConnections()
req, err := http.NewRequest("GET", st.ts.URL, nil)
if err != nil {
t.Fatal(err)
}
res, err := tr.RoundTrip(req)
if err != nil {
t.Fatal(err)
}
defer res.Body.Close()
t.Logf("Got res: %+v", res)
if g, w := res.StatusCode, 200; g != w {
t.Errorf("StatusCode = %v; want %v", g, w)
}
if g, w := res.Status, "200 OK"; g != w {
t.Errorf("Status = %q; want %q", g, w)
}
wantHeader := http.Header{
"Content-Length": []string{"3"},
"Content-Type": []string{"text/plain; charset=utf-8"},
}
if !reflect.DeepEqual(res.Header, wantHeader) {
t.Errorf("res Header = %v; want %v", res.Header, wantHeader)
}
if res.Request != req {
t.Errorf("Response.Request = %p; want %p", res.Request, req)
}
if res.TLS == nil {
t.Errorf("Response.TLS = nil; want non-nil", res.TLS)
}
slurp, err := ioutil.ReadAll(res.Body)
if err != nil {
t.Error("Body read: %v", err)
} else if string(slurp) != body {
t.Errorf("Body = %q; want %q", slurp, body)
}
}
func TestTransportReusesConns(t *testing.T) {
st := newServerTester(t, func(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, r.RemoteAddr)
}, optOnlyServer)
defer st.Close()
tr := &Transport{InsecureTLSDial: true}
defer tr.CloseIdleConnections()
get := func() string {
req, err := http.NewRequest("GET", st.ts.URL, nil)
if err != nil {
t.Fatal(err)
}
res, err := tr.RoundTrip(req)
if err != nil {
t.Fatal(err)
}
defer res.Body.Close()
slurp, err := ioutil.ReadAll(res.Body)
if err != nil {
t.Fatalf("Body read: %v", err)
}
addr := strings.TrimSpace(string(slurp))
if addr == "" {
t.Fatalf("didn't get an addr in response")
}
return addr
}
first := get()
second := get()
if first != second {
t.Errorf("first and second responses were on different connections: %q vs %q", first, second)
}
}
func TestTransportAbortClosesPipes(t *testing.T) {
shutdown := make(chan struct{})
st := newServerTester(t,
func(w http.ResponseWriter, r *http.Request) {
w.(http.Flusher).Flush()
<-shutdown
},
optOnlyServer,
)
defer st.Close()
defer close(shutdown) // we must shutdown before st.Close() to avoid hanging
done := make(chan struct{})
requestMade := make(chan struct{})
go func() {
defer close(done)
tr := &Transport{
InsecureTLSDial: true,
}
req, err := http.NewRequest("GET", st.ts.URL, nil)
if err != nil {
t.Fatal(err)
}
res, err := tr.RoundTrip(req)
if err != nil {
t.Fatal(err)
}
defer res.Body.Close()
close(requestMade)
_, err = ioutil.ReadAll(res.Body)
if err == nil {
t.Error("expected error from res.Body.Read")
}
}()
<-requestMade
// Now force the serve loop to end, via closing the connection.
st.closeConn()
// deadlock? that's a bug.
select {
case <-done:
case <-time.After(3 * time.Second):
t.Fatal("timeout")
}
}

View File

@@ -0,0 +1,204 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"bytes"
"fmt"
"net/http"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/github.com/bradfitz/http2/hpack"
)
// writeFramer is implemented by any type that is used to write frames.
type writeFramer interface {
writeFrame(writeContext) error
}
// writeContext is the interface needed by the various frame writer
// types below. All the writeFrame methods below are scheduled via the
// frame writing scheduler (see writeScheduler in writesched.go).
//
// This interface is implemented by *serverConn.
// TODO: use it from the client code too, once it exists.
type writeContext interface {
Framer() *Framer
Flush() error
CloseConn() error
// HeaderEncoder returns an HPACK encoder that writes to the
// returned buffer.
HeaderEncoder() (*hpack.Encoder, *bytes.Buffer)
}
// endsStream reports whether the given frame writer w will locally
// close the stream.
func endsStream(w writeFramer) bool {
switch v := w.(type) {
case *writeData:
return v.endStream
case *writeResHeaders:
return v.endStream
}
return false
}
type flushFrameWriter struct{}
func (flushFrameWriter) writeFrame(ctx writeContext) error {
return ctx.Flush()
}
type writeSettings []Setting
func (s writeSettings) writeFrame(ctx writeContext) error {
return ctx.Framer().WriteSettings([]Setting(s)...)
}
type writeGoAway struct {
maxStreamID uint32
code ErrCode
}
func (p *writeGoAway) writeFrame(ctx writeContext) error {
err := ctx.Framer().WriteGoAway(p.maxStreamID, p.code, nil)
if p.code != 0 {
ctx.Flush() // ignore error: we're hanging up on them anyway
time.Sleep(50 * time.Millisecond)
ctx.CloseConn()
}
return err
}
type writeData struct {
streamID uint32
p []byte
endStream bool
}
func (w *writeData) String() string {
return fmt.Sprintf("writeData(stream=%d, p=%d, endStream=%v)", w.streamID, len(w.p), w.endStream)
}
func (w *writeData) writeFrame(ctx writeContext) error {
return ctx.Framer().WriteData(w.streamID, w.endStream, w.p)
}
func (se StreamError) writeFrame(ctx writeContext) error {
return ctx.Framer().WriteRSTStream(se.StreamID, se.Code)
}
type writePingAck struct{ pf *PingFrame }
func (w writePingAck) writeFrame(ctx writeContext) error {
return ctx.Framer().WritePing(true, w.pf.Data)
}
type writeSettingsAck struct{}
func (writeSettingsAck) writeFrame(ctx writeContext) error {
return ctx.Framer().WriteSettingsAck()
}
// writeResHeaders is a request to write a HEADERS and 0+ CONTINUATION frames
// for HTTP response headers from a server handler.
type writeResHeaders struct {
streamID uint32
httpResCode int
h http.Header // may be nil
endStream bool
contentType string
contentLength string
}
func (w *writeResHeaders) writeFrame(ctx writeContext) error {
enc, buf := ctx.HeaderEncoder()
buf.Reset()
enc.WriteField(hpack.HeaderField{Name: ":status", Value: httpCodeString(w.httpResCode)})
for k, vv := range w.h {
k = lowerHeader(k)
for _, v := range vv {
// TODO: more of "8.1.2.2 Connection-Specific Header Fields"
if k == "transfer-encoding" && v != "trailers" {
continue
}
enc.WriteField(hpack.HeaderField{Name: k, Value: v})
}
}
if w.contentType != "" {
enc.WriteField(hpack.HeaderField{Name: "content-type", Value: w.contentType})
}
if w.contentLength != "" {
enc.WriteField(hpack.HeaderField{Name: "content-length", Value: w.contentLength})
}
headerBlock := buf.Bytes()
if len(headerBlock) == 0 {
panic("unexpected empty hpack")
}
// For now we're lazy and just pick the minimum MAX_FRAME_SIZE
// that all peers must support (16KB). Later we could care
// more and send larger frames if the peer advertised it, but
// there's little point. Most headers are small anyway (so we
// generally won't have CONTINUATION frames), and extra frames
// only waste 9 bytes anyway.
const maxFrameSize = 16384
first := true
for len(headerBlock) > 0 {
frag := headerBlock
if len(frag) > maxFrameSize {
frag = frag[:maxFrameSize]
}
headerBlock = headerBlock[len(frag):]
endHeaders := len(headerBlock) == 0
var err error
if first {
first = false
err = ctx.Framer().WriteHeaders(HeadersFrameParam{
StreamID: w.streamID,
BlockFragment: frag,
EndStream: w.endStream,
EndHeaders: endHeaders,
})
} else {
err = ctx.Framer().WriteContinuation(w.streamID, endHeaders, frag)
}
if err != nil {
return err
}
}
return nil
}
type write100ContinueHeadersFrame struct {
streamID uint32
}
func (w write100ContinueHeadersFrame) writeFrame(ctx writeContext) error {
enc, buf := ctx.HeaderEncoder()
buf.Reset()
enc.WriteField(hpack.HeaderField{Name: ":status", Value: "100"})
return ctx.Framer().WriteHeaders(HeadersFrameParam{
StreamID: w.streamID,
BlockFragment: buf.Bytes(),
EndStream: false,
EndHeaders: true,
})
}
type writeWindowUpdate struct {
streamID uint32 // or 0 for conn-level
n uint32
}
func (wu writeWindowUpdate) writeFrame(ctx writeContext) error {
return ctx.Framer().WriteWindowUpdate(wu.streamID, wu.n)
}
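To make the writeFramer/writeContext split concrete, here is a hypothetical, minimal writeContext backed by a write-only Framer. serverConn is the real implementation; this sketch only illustrates the shape of the interface and is not part of the vendored file:

package http2

import (
	"bytes"
	"io"

	"github.com/coreos/etcd/Godeps/_workspace/src/github.com/bradfitz/http2/hpack"
)

// sketchWriteContext is a hypothetical writeContext; Flush and CloseConn are
// no-ops because it only wraps a plain io.Writer.
type sketchWriteContext struct {
	fr   *Framer
	henc *hpack.Encoder
	hbuf bytes.Buffer
}

func newSketchWriteContext(w io.Writer) *sketchWriteContext {
	ctx := &sketchWriteContext{fr: NewFramer(w, nil)} // write-only Framer
	ctx.henc = hpack.NewEncoder(&ctx.hbuf)
	return ctx
}

func (c *sketchWriteContext) Framer() *Framer  { return c.fr }
func (c *sketchWriteContext) Flush() error     { return nil }
func (c *sketchWriteContext) CloseConn() error { return nil }
func (c *sketchWriteContext) HeaderEncoder() (*hpack.Encoder, *bytes.Buffer) {
	return c.henc, &c.hbuf
}

// With that in place any writeFramer can be driven by hand, for example:
//
//	ctx := newSketchWriteContext(new(bytes.Buffer))
//	(&writeData{streamID: 1, p: []byte("hi"), endStream: true}).writeFrame(ctx)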

View File

@@ -0,0 +1,286 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import "fmt"
// frameWriteMsg is a request to write a frame.
type frameWriteMsg struct {
// write is the interface value that does the writing, once the
// writeScheduler (below) has decided to select this frame
// to write. The write functions are all defined in write.go.
write writeFramer
stream *stream // used for prioritization. nil for non-stream frames.
// done, if non-nil, must be a buffered channel with space for
// 1 message and is sent the return value from write (or an
// earlier error) when the frame has been written.
done chan error
}
// for debugging only:
func (wm frameWriteMsg) String() string {
var streamID uint32
if wm.stream != nil {
streamID = wm.stream.id
}
var des string
if s, ok := wm.write.(fmt.Stringer); ok {
des = s.String()
} else {
des = fmt.Sprintf("%T", wm.write)
}
return fmt.Sprintf("[frameWriteMsg stream=%d, ch=%v, type: %v]", streamID, wm.done != nil, des)
}
// writeScheduler tracks pending frames to write, priorities, and decides
// the next one to use. It is not thread-safe.
type writeScheduler struct {
// zero are frames not associated with a specific stream.
// They're sent before any stream-specific frames.
zero writeQueue
// maxFrameSize is the maximum size of a DATA frame
// we'll write. Must be non-zero and between 16K-16M.
maxFrameSize uint32
// sq contains the stream-specific queues, keyed by stream ID.
// when a stream is idle, it's deleted from the map.
sq map[uint32]*writeQueue
// canSend is a slice of memory that's reused between frame
// scheduling decisions to hold the list of writeQueues (from sq)
// which have enough flow control data to send. After canSend is
// built, the best is selected.
canSend []*writeQueue
// pool of empty queues for reuse.
queuePool []*writeQueue
}
func (ws *writeScheduler) putEmptyQueue(q *writeQueue) {
if len(q.s) != 0 {
panic("queue must be empty")
}
ws.queuePool = append(ws.queuePool, q)
}
func (ws *writeScheduler) getEmptyQueue() *writeQueue {
ln := len(ws.queuePool)
if ln == 0 {
return new(writeQueue)
}
q := ws.queuePool[ln-1]
ws.queuePool = ws.queuePool[:ln-1]
return q
}
func (ws *writeScheduler) empty() bool { return ws.zero.empty() && len(ws.sq) == 0 }
func (ws *writeScheduler) add(wm frameWriteMsg) {
st := wm.stream
if st == nil {
ws.zero.push(wm)
} else {
ws.streamQueue(st.id).push(wm)
}
}
func (ws *writeScheduler) streamQueue(streamID uint32) *writeQueue {
if q, ok := ws.sq[streamID]; ok {
return q
}
if ws.sq == nil {
ws.sq = make(map[uint32]*writeQueue)
}
q := ws.getEmptyQueue()
ws.sq[streamID] = q
return q
}
// take returns the most important frame to write and removes it from the scheduler.
// It is illegal to call this if the scheduler is empty or if there are no connection-level
// flow control bytes available.
func (ws *writeScheduler) take() (wm frameWriteMsg, ok bool) {
if ws.maxFrameSize == 0 {
panic("internal error: ws.maxFrameSize not initialized or invalid")
}
// If there are any frames not associated with streams, prefer those first.
// These are usually SETTINGS, etc.
if !ws.zero.empty() {
return ws.zero.shift(), true
}
if len(ws.sq) == 0 {
return
}
// Next, prioritize frames on streams that aren't DATA frames (no cost).
for id, q := range ws.sq {
if q.firstIsNoCost() {
return ws.takeFrom(id, q)
}
}
// Now, all that remains are DATA frames with non-zero bytes to
// send. So pick the best one.
if len(ws.canSend) != 0 {
panic("should be empty")
}
for _, q := range ws.sq {
if n := ws.streamWritableBytes(q); n > 0 {
ws.canSend = append(ws.canSend, q)
}
}
if len(ws.canSend) == 0 {
return
}
defer ws.zeroCanSend()
// TODO: find the best queue
q := ws.canSend[0]
return ws.takeFrom(q.streamID(), q)
}
// zeroCanSend is deferred from take.
func (ws *writeScheduler) zeroCanSend() {
for i := range ws.canSend {
ws.canSend[i] = nil
}
ws.canSend = ws.canSend[:0]
}
// streamWritableBytes returns the number of DATA bytes we could write
// from the given queue's stream, if this stream/queue were
// selected. It is an error to call this if q's head isn't a
// *writeData.
func (ws *writeScheduler) streamWritableBytes(q *writeQueue) int32 {
wm := q.head()
ret := wm.stream.flow.available() // max we can write
if ret == 0 {
return 0
}
if int32(ws.maxFrameSize) < ret {
ret = int32(ws.maxFrameSize)
}
if ret == 0 {
panic("internal error: ws.maxFrameSize not initialized or invalid")
}
wd := wm.write.(*writeData)
if len(wd.p) < int(ret) {
ret = int32(len(wd.p))
}
return ret
}
func (ws *writeScheduler) takeFrom(id uint32, q *writeQueue) (wm frameWriteMsg, ok bool) {
wm = q.head()
// If the first item in this queue costs flow control tokens
// and we don't have enough, write as much as we can.
if wd, ok := wm.write.(*writeData); ok && len(wd.p) > 0 {
allowed := wm.stream.flow.available() // max we can write
if allowed == 0 {
// No quota available. Caller can try the next stream.
return frameWriteMsg{}, false
}
if int32(ws.maxFrameSize) < allowed {
allowed = int32(ws.maxFrameSize)
}
// TODO: further restrict the allowed size, because even if
// the peer says it's okay to write 16MB data frames, we might
// want to write smaller ones to properly weight competing
// streams' priorities.
if len(wd.p) > int(allowed) {
wm.stream.flow.take(allowed)
chunk := wd.p[:allowed]
wd.p = wd.p[allowed:]
// Make up a new write message of a valid size, rather
// than shifting one off the queue.
return frameWriteMsg{
stream: wm.stream,
write: &writeData{
streamID: wd.streamID,
p: chunk,
// even if the original had endStream set, there
// are bytes remaining because len(wd.p) > allowed,
// so we know endStream is false:
endStream: false,
},
// our caller is blocking on the final DATA frame, not
// these intermediates, so no need to wait:
done: nil,
}, true
}
wm.stream.flow.take(int32(len(wd.p)))
}
q.shift()
if q.empty() {
ws.putEmptyQueue(q)
delete(ws.sq, id)
}
return wm, true
}
func (ws *writeScheduler) forgetStream(id uint32) {
q, ok := ws.sq[id]
if !ok {
return
}
delete(ws.sq, id)
// But keep it for others later.
for i := range q.s {
q.s[i] = frameWriteMsg{}
}
q.s = q.s[:0]
ws.putEmptyQueue(q)
}
type writeQueue struct {
s []frameWriteMsg
}
// streamID returns the stream ID for a non-empty stream-specific queue.
func (q *writeQueue) streamID() uint32 { return q.s[0].stream.id }
func (q *writeQueue) empty() bool { return len(q.s) == 0 }
func (q *writeQueue) push(wm frameWriteMsg) {
q.s = append(q.s, wm)
}
// head returns the next item that would be removed by shift.
func (q *writeQueue) head() frameWriteMsg {
if len(q.s) == 0 {
panic("invalid use of queue")
}
return q.s[0]
}
func (q *writeQueue) shift() frameWriteMsg {
if len(q.s) == 0 {
panic("invalid use of queue")
}
wm := q.s[0]
// TODO: less copy-happy queue.
copy(q.s, q.s[1:])
q.s[len(q.s)-1] = frameWriteMsg{}
q.s = q.s[:len(q.s)-1]
return wm
}
func (q *writeQueue) firstIsNoCost() bool {
if df, ok := q.s[0].write.(*writeData); ok {
return len(df.p) == 0
}
return true
}
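A small hypothetical in-package sketch of the writeQueue behavior the scheduler relies on (not part of the vendored file): control frames are treated as no-cost and are preferred over DATA frames:

package http2

import "fmt"

// writeQueueSketch is a hypothetical illustration of the FIFO above.
func writeQueueSketch() {
	var q writeQueue
	q.push(frameWriteMsg{write: writeSettingsAck{}})
	q.push(frameWriteMsg{write: &writeData{streamID: 3, p: []byte("payload")}})

	fmt.Println(q.firstIsNoCost()) // true: the head is not a DATA frame carrying bytes

	first := q.shift() // the SETTINGS ack comes off first
	fmt.Printf("%T\n", first.write)

	fmt.Println(q.firstIsNoCost()) // false: the remaining head is DATA with bytes to send
}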

View File

@@ -0,0 +1,357 @@
// Copyright 2014 The Go Authors.
// See https://code.google.com/p/go/source/browse/CONTRIBUTORS
// Licensed under the same terms as Go itself:
// https://code.google.com/p/go/source/browse/LICENSE
package http2
import (
"bytes"
"encoding/xml"
"flag"
"fmt"
"io"
"os"
"reflect"
"regexp"
"sort"
"strconv"
"strings"
"sync"
"testing"
)
var coverSpec = flag.Bool("coverspec", false, "Run spec coverage tests")
// The global map of sentence coverage for the http2 spec.
var defaultSpecCoverage specCoverage
var loadSpecOnce sync.Once
func loadSpec() {
if f, err := os.Open("testdata/draft-ietf-httpbis-http2.xml"); err != nil {
panic(err)
} else {
defaultSpecCoverage = readSpecCov(f)
f.Close()
}
}
// covers marks all sentences for section sec in defaultSpecCoverage. Sentences not
// "covered" will be included in report outputed by TestSpecCoverage.
func covers(sec, sentences string) {
loadSpecOnce.Do(loadSpec)
defaultSpecCoverage.cover(sec, sentences)
}
type specPart struct {
section string
sentence string
}
func (ss specPart) Less(oo specPart) bool {
atoi := func(s string) int {
n, err := strconv.Atoi(s)
if err != nil {
panic(err)
}
return n
}
a := strings.Split(ss.section, ".")
b := strings.Split(oo.section, ".")
for len(a) > 0 {
if len(b) == 0 {
return false
}
x, y := atoi(a[0]), atoi(b[0])
if x == y {
a, b = a[1:], b[1:]
continue
}
return x < y
}
if len(b) > 0 {
return true
}
return false
}
type bySpecSection []specPart
func (a bySpecSection) Len() int { return len(a) }
func (a bySpecSection) Less(i, j int) bool { return a[i].Less(a[j]) }
func (a bySpecSection) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type specCoverage struct {
coverage map[specPart]bool
d *xml.Decoder
}
func joinSection(sec []int) string {
s := fmt.Sprintf("%d", sec[0])
for _, n := range sec[1:] {
s = fmt.Sprintf("%s.%d", s, n)
}
return s
}
func (sc specCoverage) readSection(sec []int) {
var (
buf = new(bytes.Buffer)
sub = 0
)
for {
tk, err := sc.d.Token()
if err != nil {
if err == io.EOF {
return
}
panic(err)
}
switch v := tk.(type) {
case xml.StartElement:
if skipElement(v) {
if err := sc.d.Skip(); err != nil {
panic(err)
}
if v.Name.Local == "section" {
sub++
}
break
}
switch v.Name.Local {
case "section":
sub++
sc.readSection(append(sec, sub))
case "xref":
buf.Write(sc.readXRef(v))
}
case xml.CharData:
if len(sec) == 0 {
break
}
buf.Write(v)
case xml.EndElement:
if v.Name.Local == "section" {
sc.addSentences(joinSection(sec), buf.String())
return
}
}
}
}
func (sc specCoverage) readXRef(se xml.StartElement) []byte {
var b []byte
for {
tk, err := sc.d.Token()
if err != nil {
panic(err)
}
switch v := tk.(type) {
case xml.CharData:
if b != nil {
panic("unexpected CharData")
}
b = []byte(string(v))
case xml.EndElement:
if v.Name.Local != "xref" {
panic("expected </xref>")
}
if b != nil {
return b
}
sig := attrSig(se)
switch sig {
case "target":
return []byte(fmt.Sprintf("[%s]", attrValue(se, "target")))
case "fmt-of,rel,target", "fmt-,,rel,target":
return []byte(fmt.Sprintf("[%s, %s]", attrValue(se, "target"), attrValue(se, "rel")))
case "fmt-of,sec,target", "fmt-,,sec,target":
return []byte(fmt.Sprintf("[section %s of %s]", attrValue(se, "sec"), attrValue(se, "target")))
case "fmt-of,rel,sec,target":
return []byte(fmt.Sprintf("[section %s of %s, %s]", attrValue(se, "sec"), attrValue(se, "target"), attrValue(se, "rel")))
default:
panic(fmt.Sprintf("unknown attribute signature %q in %#v", sig, fmt.Sprintf("%#v", se)))
}
default:
panic(fmt.Sprintf("unexpected tag %q", v))
}
}
}
var skipAnchor = map[string]bool{
"intro": true,
"Overview": true,
}
var skipTitle = map[string]bool{
"Acknowledgements": true,
"Change Log": true,
"Document Organization": true,
"Conventions and Terminology": true,
}
func skipElement(s xml.StartElement) bool {
switch s.Name.Local {
case "artwork":
return true
case "section":
for _, attr := range s.Attr {
switch attr.Name.Local {
case "anchor":
if skipAnchor[attr.Value] || strings.HasPrefix(attr.Value, "changes.since.") {
return true
}
case "title":
if skipTitle[attr.Value] {
return true
}
}
}
}
return false
}
func readSpecCov(r io.Reader) specCoverage {
sc := specCoverage{
coverage: map[specPart]bool{},
d: xml.NewDecoder(r)}
sc.readSection(nil)
return sc
}
func (sc specCoverage) addSentences(sec string, sentence string) {
for _, s := range parseSentences(sentence) {
sc.coverage[specPart{sec, s}] = false
}
}
func (sc specCoverage) cover(sec string, sentence string) {
for _, s := range parseSentences(sentence) {
p := specPart{sec, s}
if _, ok := sc.coverage[p]; !ok {
panic(fmt.Sprintf("Not found in spec: %q, %q", sec, s))
}
sc.coverage[specPart{sec, s}] = true
}
}
var whitespaceRx = regexp.MustCompile(`\s+`)
func parseSentences(sens string) []string {
sens = strings.TrimSpace(sens)
if sens == "" {
return nil
}
ss := strings.Split(whitespaceRx.ReplaceAllString(sens, " "), ". ")
for i, s := range ss {
s = strings.TrimSpace(s)
if !strings.HasSuffix(s, ".") {
s += "."
}
ss[i] = s
}
return ss
}
func TestSpecParseSentences(t *testing.T) {
tests := []struct {
ss string
want []string
}{
{"Sentence 1. Sentence 2.",
[]string{
"Sentence 1.",
"Sentence 2.",
}},
{"Sentence 1. \nSentence 2.\tSentence 3.",
[]string{
"Sentence 1.",
"Sentence 2.",
"Sentence 3.",
}},
}
for i, tt := range tests {
got := parseSentences(tt.ss)
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("%d: got = %q, want %q", i, got, tt.want)
}
}
}
func TestSpecCoverage(t *testing.T) {
if !*coverSpec {
t.Skip()
}
loadSpecOnce.Do(loadSpec)
var (
list []specPart
cv = defaultSpecCoverage.coverage
total = len(cv)
complete = 0
)
for sp, touched := range defaultSpecCoverage.coverage {
if touched {
complete++
} else {
list = append(list, sp)
}
}
sort.Stable(bySpecSection(list))
if testing.Short() && len(list) > 5 {
list = list[:5]
}
for _, p := range list {
t.Errorf("\tSECTION %s: %s", p.section, p.sentence)
}
t.Logf("%d/%d (%d%%) sentances covered", complete, total, (complete/total)*100)
}
func attrSig(se xml.StartElement) string {
var names []string
for _, attr := range se.Attr {
if attr.Name.Local == "fmt" {
names = append(names, "fmt-"+attr.Value)
} else {
names = append(names, attr.Name.Local)
}
}
sort.Strings(names)
return strings.Join(names, ",")
}
func attrValue(se xml.StartElement, attr string) string {
for _, a := range se.Attr {
if a.Name.Local == attr {
return a.Value
}
}
panic("unknown attribute " + attr)
}
func TestSpecPartLess(t *testing.T) {
tests := []struct {
sec1, sec2 string
want bool
}{
{"6.2.1", "6.2", false},
{"6.2", "6.2.1", true},
{"6.10", "6.10.1", true},
{"6.10", "6.1.1", false}, // 10, not 1
{"6.1", "6.1", false}, // equal, so not less
}
for _, tt := range tests {
got := (specPart{tt.sec1, "foo"}).Less(specPart{tt.sec2, "foo"})
if got != tt.want {
t.Errorf("Less(%q, %q) = %v; want %v", tt.sec1, tt.sec2, got, tt.want)
}
}
}
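The specPart ordering above is numeric per dot-separated section component, not lexical. A hypothetical sketch (it would live in a test file of the same package, since these types are test-only):

package http2

import (
	"fmt"
	"sort"
)

// specSortSketch is a hypothetical illustration of bySpecSection ordering.
func specSortSketch() {
	parts := []specPart{
		{section: "6.10", sentence: "x."},
		{section: "6.2", sentence: "x."},
		{section: "6.2.1", sentence: "x."},
	}
	sort.Stable(bySpecSection(parts))
	for _, p := range parts {
		fmt.Println(p.section) // prints 6.2, 6.2.1, 6.10: numeric, not lexical
	}
}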

191
Godeps/_workspace/src/github.com/golang/glog/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification within
third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

44
Godeps/_workspace/src/github.com/golang/glog/README generated vendored Normal file
View File

@@ -0,0 +1,44 @@
glog
====
Leveled execution logs for Go.
This is an efficient pure Go implementation of leveled logs in the
manner of the open source C++ package
http://code.google.com/p/google-glog
By binding methods to booleans it is possible to use the log package
without paying the expense of evaluating the arguments to the log.
Through the -vmodule flag, the package also provides fine-grained
control over logging at the file level.
The comment from glog.go introduces the ideas:
Package glog implements logging analogous to the Google-internal
C++ INFO/ERROR/V setup. It provides functions Info, Warning,
Error, Fatal, plus formatting variants such as Infof. It
also provides V-style logging controlled by the -v and
-vmodule=file=2 flags.
Basic examples:
glog.Info("Prepare to repel boarders")
glog.Fatalf("Initialization failed: %s", err)
See the documentation for the V function for an explanation
of these examples:
if glog.V(2) {
glog.Info("Starting transaction...")
}
glog.V(2).Infoln("Processed", nItems, "elements")
The repository contains an open source version of the log package
used inside Google. The master copy of the source lives inside
Google, not here. The code in this repo is for export only and is not itself
under development. Feature requests will be ignored.
Send bug reports to golang-nuts@googlegroups.com.
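Not part of the vendored README: a minimal sketch of wiring glog into a program, using the package's real flags; the program name and messages are illustrative.
	package main

	import (
		"flag"

		"github.com/golang/glog"
	)

	func main() {
		// glog registers its flags (-logtostderr, -v, -vmodule, -log_dir, ...)
		// on the standard flag set in its init function.
		flag.Parse()
		defer glog.Flush() // flush buffered log I/O before exiting

		glog.Info("Prepare to repel boarders")
		if glog.V(2) { // enabled by -v=2 or -vmodule=main=2
			glog.Info("Starting transaction...")
		}
	}
Run it with, for example, "go run main.go -logtostderr -v=2".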

1177
Godeps/_workspace/src/github.com/golang/glog/glog.go generated vendored Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,124 @@
// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/
//
// Copyright 2013 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// File I/O for logs.
package glog
import (
"errors"
"flag"
"fmt"
"os"
"os/user"
"path/filepath"
"strings"
"sync"
"time"
)
// MaxSize is the maximum size of a log file in bytes.
var MaxSize uint64 = 1024 * 1024 * 1800
// logDirs lists the candidate directories for new log files.
var logDirs []string
// If non-empty, overrides the choice of directory in which to write logs.
// See createLogDirs for the full list of possible destinations.
var logDir = flag.String("log_dir", "", "If non-empty, write log files in this directory")
func createLogDirs() {
if *logDir != "" {
logDirs = append(logDirs, *logDir)
}
logDirs = append(logDirs, os.TempDir())
}
var (
pid = os.Getpid()
program = filepath.Base(os.Args[0])
host = "unknownhost"
userName = "unknownuser"
)
func init() {
h, err := os.Hostname()
if err == nil {
host = shortHostname(h)
}
current, err := user.Current()
if err == nil {
userName = current.Username
}
// Sanitize userName since it may contain filepath separators on Windows.
userName = strings.Replace(userName, `\`, "_", -1)
}
// shortHostname returns its argument, truncating at the first period.
// For instance, given "www.google.com" it returns "www".
func shortHostname(hostname string) string {
if i := strings.Index(hostname, "."); i >= 0 {
return hostname[:i]
}
return hostname
}
// logName returns a new log file name containing tag, with start time t, and
// the name for the symlink for tag.
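// For illustration (an assumption, not from the vendored source): for program
// "etcd" running as user "core" on host "www" with pid 1234, the call
// logName("INFO", time.Date(2015, 6, 29, 18, 59, 0, 0, time.UTC)) yields the
// name "etcd.www.core.log.INFO.20150629-185900.1234" and the link "etcd.INFO".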
func logName(tag string, t time.Time) (name, link string) {
name = fmt.Sprintf("%s.%s.%s.log.%s.%04d%02d%02d-%02d%02d%02d.%d",
program,
host,
userName,
tag,
t.Year(),
t.Month(),
t.Day(),
t.Hour(),
t.Minute(),
t.Second(),
pid)
return name, program + "." + tag
}
var onceLogDirs sync.Once
// create creates a new log file and returns the file and its filename, which
// contains tag ("INFO", "FATAL", etc.) and t. If the file is created
// successfully, create also attempts to update the symlink for that tag, ignoring
// errors.
func create(tag string, t time.Time) (f *os.File, filename string, err error) {
onceLogDirs.Do(createLogDirs)
if len(logDirs) == 0 {
return nil, "", errors.New("log: no log dirs")
}
name, link := logName(tag, t)
var lastErr error
for _, dir := range logDirs {
fname := filepath.Join(dir, name)
f, err := os.Create(fname)
if err == nil {
symlink := filepath.Join(dir, link)
os.Remove(symlink) // ignore err
os.Symlink(name, symlink) // ignore err
return f, fname, nil
}
lastErr = err
}
return nil, "", fmt.Errorf("log: cannot create log: %v", lastErr)
}

View File

@@ -0,0 +1,415 @@
// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/
//
// Copyright 2013 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package glog
import (
"bytes"
"fmt"
stdLog "log"
"path/filepath"
"runtime"
"strconv"
"strings"
"testing"
"time"
)
// Test that shortHostname works as advertised.
func TestShortHostname(t *testing.T) {
for hostname, expect := range map[string]string{
"": "",
"host": "host",
"host.google.com": "host",
} {
if got := shortHostname(hostname); expect != got {
t.Errorf("shortHostname(%q): expected %q, got %q", hostname, expect, got)
}
}
}
// flushBuffer wraps a bytes.Buffer to satisfy flushSyncWriter.
type flushBuffer struct {
bytes.Buffer
}
func (f *flushBuffer) Flush() error {
return nil
}
func (f *flushBuffer) Sync() error {
return nil
}
// swap sets the log writers and returns the old array.
func (l *loggingT) swap(writers [numSeverity]flushSyncWriter) (old [numSeverity]flushSyncWriter) {
l.mu.Lock()
defer l.mu.Unlock()
old = l.file
for i, w := range writers {
logging.file[i] = w
}
return
}
// newBuffers sets the log writers to all new byte buffers and returns the old array.
func (l *loggingT) newBuffers() [numSeverity]flushSyncWriter {
return l.swap([numSeverity]flushSyncWriter{new(flushBuffer), new(flushBuffer), new(flushBuffer), new(flushBuffer)})
}
// contents returns the specified log value as a string.
func contents(s severity) string {
return logging.file[s].(*flushBuffer).String()
}
// contains reports whether the string is contained in the log.
func contains(s severity, str string, t *testing.T) bool {
return strings.Contains(contents(s), str)
}
// setFlags configures the logging flags how the test expects them.
func setFlags() {
logging.toStderr = false
}
// Test that Info works as advertised.
func TestInfo(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
Info("test")
if !contains(infoLog, "I", t) {
t.Errorf("Info has wrong character: %q", contents(infoLog))
}
if !contains(infoLog, "test", t) {
t.Error("Info failed")
}
}
func TestInfoDepth(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
f := func() { InfoDepth(1, "depth-test1") }
// The next three lines must stay together
_, _, wantLine, _ := runtime.Caller(0)
InfoDepth(0, "depth-test0")
f()
msgs := strings.Split(strings.TrimSuffix(contents(infoLog), "\n"), "\n")
if len(msgs) != 2 {
t.Fatalf("Got %d lines, expected 2", len(msgs))
}
for i, m := range msgs {
if !strings.HasPrefix(m, "I") {
t.Errorf("InfoDepth[%d] has wrong character: %q", i, m)
}
w := fmt.Sprintf("depth-test%d", i)
if !strings.Contains(m, w) {
t.Errorf("InfoDepth[%d] missing %q: %q", i, w, m)
}
// pull out the line number (between : and ])
msg := m[strings.LastIndex(m, ":")+1:]
x := strings.Index(msg, "]")
if x < 0 {
t.Errorf("InfoDepth[%d]: missing ']': %q", i, m)
continue
}
line, err := strconv.Atoi(msg[:x])
if err != nil {
t.Errorf("InfoDepth[%d]: bad line number: %q", i, m)
continue
}
wantLine++
if wantLine != line {
t.Errorf("InfoDepth[%d]: got line %d, want %d", i, line, wantLine)
}
}
}
func init() {
CopyStandardLogTo("INFO")
}
// Test that CopyStandardLogTo panics on bad input.
func TestCopyStandardLogToPanic(t *testing.T) {
defer func() {
if s, ok := recover().(string); !ok || !strings.Contains(s, "LOG") {
t.Errorf(`CopyStandardLogTo("LOG") should have panicked: %v`, s)
}
}()
CopyStandardLogTo("LOG")
}
// Test that using the standard log package logs to INFO.
func TestStandardLog(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
stdLog.Print("test")
if !contains(infoLog, "I", t) {
t.Errorf("Info has wrong character: %q", contents(infoLog))
}
if !contains(infoLog, "test", t) {
t.Error("Info failed")
}
}
// Test that the header has the correct format.
func TestHeader(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
defer func(previous func() time.Time) { timeNow = previous }(timeNow)
timeNow = func() time.Time {
return time.Date(2006, 1, 2, 15, 4, 5, .067890e9, time.Local)
}
pid = 1234
Info("test")
var line int
format := "I0102 15:04:05.067890 1234 glog_test.go:%d] test\n"
n, err := fmt.Sscanf(contents(infoLog), format, &line)
if n != 1 || err != nil {
t.Errorf("log format error: %d elements, error %s:\n%s", n, err, contents(infoLog))
}
// Scanf treats multiple spaces as equivalent to a single space,
// so check for correct space-padding also.
want := fmt.Sprintf(format, line)
if contents(infoLog) != want {
t.Errorf("log format error: got:\n\t%q\nwant:\t%q", contents(infoLog), want)
}
}
// Test that an Error log goes to Warning and Info.
// Even in the Info log, the source character will be E, so the data should
// all be identical.
func TestError(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
Error("test")
if !contains(errorLog, "E", t) {
t.Errorf("Error has wrong character: %q", contents(errorLog))
}
if !contains(errorLog, "test", t) {
t.Error("Error failed")
}
str := contents(errorLog)
if !contains(warningLog, str, t) {
t.Error("Warning failed")
}
if !contains(infoLog, str, t) {
t.Error("Info failed")
}
}
// Test that a Warning log goes to Info.
// Even in the Info log, the source character will be W, so the data should
// all be identical.
func TestWarning(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
Warning("test")
if !contains(warningLog, "W", t) {
t.Errorf("Warning has wrong character: %q", contents(warningLog))
}
if !contains(warningLog, "test", t) {
t.Error("Warning failed")
}
str := contents(warningLog)
if !contains(infoLog, str, t) {
t.Error("Info failed")
}
}
// Test that a V log goes to Info.
func TestV(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
logging.verbosity.Set("2")
defer logging.verbosity.Set("0")
V(2).Info("test")
if !contains(infoLog, "I", t) {
t.Errorf("Info has wrong character: %q", contents(infoLog))
}
if !contains(infoLog, "test", t) {
t.Error("Info failed")
}
}
// Test that a vmodule enables a log in this file.
func TestVmoduleOn(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
logging.vmodule.Set("glog_test=2")
defer logging.vmodule.Set("")
if !V(1) {
t.Error("V not enabled for 1")
}
if !V(2) {
t.Error("V not enabled for 2")
}
if V(3) {
t.Error("V enabled for 3")
}
V(2).Info("test")
if !contains(infoLog, "I", t) {
t.Errorf("Info has wrong character: %q", contents(infoLog))
}
if !contains(infoLog, "test", t) {
t.Error("Info failed")
}
}
// Test that a vmodule of another file does not enable a log in this file.
func TestVmoduleOff(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
logging.vmodule.Set("notthisfile=2")
defer logging.vmodule.Set("")
for i := 1; i <= 3; i++ {
if V(Level(i)) {
t.Errorf("V enabled for %d", i)
}
}
V(2).Info("test")
if contents(infoLog) != "" {
t.Error("V logged incorrectly")
}
}
// vGlobs are patterns that match/don't match this file at V=2.
var vGlobs = map[string]bool{
// Easy to test the numeric match here.
"glog_test=1": false, // If -vmodule sets V to 1, V(2) will fail.
"glog_test=2": true,
"glog_test=3": true, // If -vmodule sets V to 1, V(3) will succeed.
// These all use 2 and check the patterns. All are true.
"*=2": true,
"?l*=2": true,
"????_*=2": true,
"??[mno]?_*t=2": true,
// These all use 2 and check the patterns. All are false.
"*x=2": false,
"m*=2": false,
"??_*=2": false,
"?[abc]?_*t=2": false,
}
// Test that vmodule globbing works as advertised.
func testVmoduleGlob(pat string, match bool, t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
defer logging.vmodule.Set("")
logging.vmodule.Set(pat)
if V(2) != Verbose(match) {
t.Errorf("incorrect match for %q: got %t expected %t", pat, V(2), match)
}
}
// Test that a vmodule globbing works as advertised.
func TestVmoduleGlob(t *testing.T) {
for glob, match := range vGlobs {
testVmoduleGlob(glob, match, t)
}
}
func TestRollover(t *testing.T) {
setFlags()
var err error
defer func(previous func(error)) { logExitFunc = previous }(logExitFunc)
logExitFunc = func(e error) {
err = e
}
defer func(previous uint64) { MaxSize = previous }(MaxSize)
MaxSize = 512
Info("x") // Be sure we have a file.
info, ok := logging.file[infoLog].(*syncBuffer)
if !ok {
t.Fatal("info wasn't created")
}
if err != nil {
t.Fatalf("info has initial error: %v", err)
}
fname0 := info.file.Name()
Info(strings.Repeat("x", int(MaxSize))) // force a rollover
if err != nil {
t.Fatalf("info has error after big write: %v", err)
}
// Make sure the next log file gets a file name with a different
// time stamp.
//
// TODO: determine whether we need to support subsecond log
// rotation. C++ does not appear to handle this case (nor does it
// handle Daylight Savings Time properly).
time.Sleep(1 * time.Second)
Info("x") // create a new file
if err != nil {
t.Fatalf("error after rotation: %v", err)
}
fname1 := info.file.Name()
if fname0 == fname1 {
t.Errorf("info.f.Name did not change: %v", fname0)
}
if info.nbytes >= MaxSize {
t.Errorf("file size was not reset: %d", info.nbytes)
}
}
func TestLogBacktraceAt(t *testing.T) {
setFlags()
defer logging.swap(logging.newBuffers())
// The peculiar style of this code simplifies line counting and maintenance of the
// tracing block below.
var infoLine string
setTraceLocation := func(file string, line int, ok bool, delta int) {
if !ok {
t.Fatal("could not get file:line")
}
_, file = filepath.Split(file)
infoLine = fmt.Sprintf("%s:%d", file, line+delta)
err := logging.traceLocation.Set(infoLine)
if err != nil {
t.Fatal("error setting log_backtrace_at: ", err)
}
}
{
// Start of tracing block. These lines know about each other's relative position.
_, file, line, ok := runtime.Caller(0)
setTraceLocation(file, line, ok, +2) // Two lines between Caller and Info calls.
Info("we want a stack trace here")
}
numAppearances := strings.Count(contents(infoLog), infoLine)
if numAppearances < 2 {
// Need 2 appearances, one in the log header and one in the trace:
// log_test.go:281: I0511 16:36:06.952398 02238 log_test.go:280] we want a stack trace here
// ...
// github.com/glog/glog_test.go:280 (0x41ba91)
// ...
// We could be more precise but that would require knowing the details
// of the traceback format, which may not be dependable.
t.Fatal("got no trace back; log is ", contents(infoLog))
}
}
func BenchmarkHeader(b *testing.B) {
for i := 0; i < b.N; i++ {
buf, _, _ := logging.header(infoLog, 0)
logging.putBuffer(buf)
}
}

14
Godeps/_workspace/src/golang.org/x/oauth2/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,14 @@
language: go
go:
- 1.3
- 1.4
install:
- export GOPATH="$HOME/gopath"
- mkdir -p "$GOPATH/src/golang.org/x"
- mv "$TRAVIS_BUILD_DIR" "$GOPATH/src/golang.org/x/oauth2"
- go get -v -t -d golang.org/x/oauth2/...
script:
- go test -v golang.org/x/oauth2/...

3
Godeps/_workspace/src/golang.org/x/oauth2/AUTHORS generated vendored Normal file
View File

@@ -0,0 +1,3 @@
# This source code refers to The Go Authors for copyright purposes.
# The master list of authors is in the main Go distribution,
# visible at http://tip.golang.org/AUTHORS.

View File

@@ -0,0 +1,31 @@
# Contributing to Go
Go is an open source project.
It is the work of hundreds of contributors. We appreciate your help!
## Filing issues
When [filing an issue](https://github.com/golang/oauth2/issues), make sure to answer these five questions:
1. What version of Go are you using (`go version`)?
2. What operating system and processor architecture are you using?
3. What did you do?
4. What did you expect to see?
5. What did you see instead?
General questions should go to the [golang-nuts mailing list](https://groups.google.com/group/golang-nuts) instead of the issue tracker.
The gophers there will answer or ask you to file an issue if you've tripped over a bug.
## Contributing code
Please read the [Contribution Guidelines](https://golang.org/doc/contribute.html)
before sending patches.
**We do not accept GitHub pull requests**
(we use [Gerrit](https://code.google.com/p/gerrit/) instead for code review).
Unless otherwise noted, the Go source files are distributed under
the BSD-style license found in the LICENSE file.

View File

@@ -0,0 +1,3 @@
# This source code was written by the Go contributors.
# The master list of contributors is in the main Go distribution,
# visible at http://tip.golang.org/CONTRIBUTORS.

27
Godeps/_workspace/src/golang.org/x/oauth2/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,27 @@
Copyright (c) 2009 The oauth2 Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

64
Godeps/_workspace/src/golang.org/x/oauth2/README.md generated vendored Normal file
View File

@@ -0,0 +1,64 @@
# OAuth2 for Go
[![Build Status](https://travis-ci.org/golang/oauth2.svg?branch=master)](https://travis-ci.org/golang/oauth2)
The oauth2 package contains a client implementation for the OAuth 2.0 spec.
## Installation
~~~~
go get golang.org/x/oauth2
~~~~
See godoc for further documentation and examples.
* [godoc.org/golang.org/x/oauth2](http://godoc.org/golang.org/x/oauth2)
* [godoc.org/golang.org/x/oauth2/google](http://godoc.org/golang.org/x/oauth2/google)
## App Engine
In change 96e89be (March 2015) we removed the `oauth2.Context2` type in favor
of the [`context.Context`](https://golang.org/x/net/context#Context) type from
the `golang.org/x/net/context` package.
This means it's no longer possible to use the "Classic App Engine"
`appengine.Context` type with the `oauth2` package. (You're using
Classic App Engine if you import the package `"appengine"`.)
To work around this, you may use the new `"google.golang.org/appengine"`
package. This package has almost the same API as the `"appengine"` package,
but it can be fetched with `go get` and used on "Managed VMs" as well as
Classic App Engine.
See the [new `appengine` package's readme](https://github.com/golang/appengine#updating-a-go-app-engine-app)
for information on updating your app.
If you don't want to update your entire app to use the new App Engine packages,
you may use both sets of packages in parallel, using only the new packages
with the `oauth2` package.
import (
"golang.org/x/net/context"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
newappengine "google.golang.org/appengine"
newurlfetch "google.golang.org/appengine/urlfetch"
"appengine"
)
func handler(w http.ResponseWriter, r *http.Request) {
var c appengine.Context = appengine.NewContext(r)
c.Infof("Logging a message with the old package")
var ctx context.Context = newappengine.NewContext(r)
client := &http.Client{
Transport: &oauth2.Transport{
Source: google.AppEngineTokenSource(ctx, "scope"),
Base: &newurlfetch.Transport{Context: ctx},
},
}
client.Get("...")
}

View File

@@ -0,0 +1,24 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build appengine appenginevm
// App Engine hooks.
package oauth2
import (
"net/http"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
"google.golang.org/appengine/urlfetch"
)
func init() {
registerContextClientFunc(contextClientAppEngine)
}
func contextClientAppEngine(ctx context.Context) (*http.Client, error) {
return urlfetch.Client(ctx), nil
}

View File

@@ -0,0 +1,45 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package oauth2_test
import (
"fmt"
"log"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
func ExampleConfig() {
conf := &oauth2.Config{
ClientID: "YOUR_CLIENT_ID",
ClientSecret: "YOUR_CLIENT_SECRET",
Scopes: []string{"SCOPE1", "SCOPE2"},
Endpoint: oauth2.Endpoint{
AuthURL: "https://provider.com/o/oauth2/auth",
TokenURL: "https://provider.com/o/oauth2/token",
},
}
// Redirect user to consent page to ask for permission
// for the scopes specified above.
url := conf.AuthCodeURL("state", oauth2.AccessTypeOffline)
fmt.Printf("Visit the URL for the auth dialog: %v", url)
// Use the authorization code that is pushed to the redirect URL.
// Exchange will do the handshake to retrieve the initial access token;
// the client returned by conf.Client is then authorized and
// authenticated by (and refreshes) that token.
var code string
if _, err := fmt.Scan(&code); err != nil {
log.Fatal(err)
}
tok, err := conf.Exchange(oauth2.NoContext, code)
if err != nil {
log.Fatal(err)
}
client := conf.Client(oauth2.NoContext, tok)
client.Get("...")
}

View File

@@ -0,0 +1,16 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package facebook provides constants for using OAuth2 to access Facebook.
package facebook
import (
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
// Endpoint is Facebook's OAuth 2.0 endpoint.
var Endpoint = oauth2.Endpoint{
AuthURL: "https://www.facebook.com/dialog/oauth",
TokenURL: "https://graph.facebook.com/oauth/access_token",
}

View File

@@ -0,0 +1,16 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package github provides constants for using OAuth2 to access Github.
package github
import (
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
// Endpoint is Github's OAuth 2.0 endpoint.
var Endpoint = oauth2.Endpoint{
AuthURL: "https://github.com/login/oauth/authorize",
TokenURL: "https://github.com/login/oauth/access_token",
}

View File

@@ -0,0 +1,83 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package google
import (
"sort"
"strings"
"sync"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
// Set at init time by appengine_hook.go. If nil, we're not on App Engine.
var appengineTokenFunc func(c context.Context, scopes ...string) (token string, expiry time.Time, err error)
// AppEngineTokenSource returns a token source that fetches tokens
// issued to the current App Engine application's service account.
// If you are implementing a 3-legged OAuth 2.0 flow on App Engine
// that involves user accounts, see oauth2.Config instead.
//
// The provided context must have come from appengine.NewContext.
func AppEngineTokenSource(ctx context.Context, scope ...string) oauth2.TokenSource {
if appengineTokenFunc == nil {
panic("google: AppEngineTokenSource can only be used on App Engine.")
}
scopes := append([]string{}, scope...)
sort.Strings(scopes)
return &appEngineTokenSource{
ctx: ctx,
scopes: scopes,
key: strings.Join(scopes, " "),
}
}
// aeTokens caches fetched tokens so they can be reused until they expire.
var (
aeTokensMu sync.Mutex
aeTokens = make(map[string]*tokenLock) // key is space-separated scopes
)
type tokenLock struct {
mu sync.Mutex // guards t; held while fetching or updating t
t *oauth2.Token
}
type appEngineTokenSource struct {
ctx context.Context
scopes []string
key string // to aeTokens map; space-separated scopes
}
func (ts *appEngineTokenSource) Token() (*oauth2.Token, error) {
if appengineTokenFunc == nil {
panic("google: AppEngineTokenSource can only be used on App Engine.")
}
aeTokensMu.Lock()
tok, ok := aeTokens[ts.key]
if !ok {
tok = &tokenLock{}
aeTokens[ts.key] = tok
}
aeTokensMu.Unlock()
tok.mu.Lock()
defer tok.mu.Unlock()
if tok.t.Valid() {
return tok.t, nil
}
access, exp, err := appengineTokenFunc(ts.ctx, ts.scopes...)
if err != nil {
return nil, err
}
tok.t = &oauth2.Token{
AccessToken: access,
Expiry: exp,
}
return tok.t, nil
}

View File

@@ -0,0 +1,13 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build appengine appenginevm
package google
import "google.golang.org/appengine"
func init() {
appengineTokenFunc = appengine.AccessToken
}

View File

@@ -0,0 +1,154 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package google
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"runtime"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/jwt"
"github.com/coreos/etcd/Godeps/_workspace/src/google.golang.org/cloud/compute/metadata"
)
// DefaultClient returns an HTTP Client that uses the
// DefaultTokenSource to obtain authentication credentials.
//
// This client should be used when developing services
// that run on Google App Engine or Google Compute Engine
// and use "Application Default Credentials."
//
// For more details, see:
// https://developers.google.com/accounts/application-default-credentials
//
func DefaultClient(ctx context.Context, scope ...string) (*http.Client, error) {
ts, err := DefaultTokenSource(ctx, scope...)
if err != nil {
return nil, err
}
return oauth2.NewClient(ctx, ts), nil
}
// DefaultTokenSource is a token source that uses
// "Application Default Credentials".
//
// It looks for credentials in the following places,
// preferring the first location found:
//
// 1. A JSON file whose path is specified by the
// GOOGLE_APPLICATION_CREDENTIALS environment variable.
// 2. A JSON file in a location known to the gcloud command-line tool.
// On Windows, this is %APPDATA%/gcloud/application_default_credentials.json.
// On other systems, $HOME/.config/gcloud/application_default_credentials.json.
// 3. On Google App Engine it uses the appengine.AccessToken function.
// 4. On Google Compute Engine, it fetches credentials from the metadata server.
// (In this final case any provided scopes are ignored.)
//
// For more details, see:
// https://developers.google.com/accounts/application-default-credentials
//
func DefaultTokenSource(ctx context.Context, scope ...string) (oauth2.TokenSource, error) {
// First, try the environment variable.
const envVar = "GOOGLE_APPLICATION_CREDENTIALS"
if filename := os.Getenv(envVar); filename != "" {
ts, err := tokenSourceFromFile(ctx, filename, scope)
if err != nil {
return nil, fmt.Errorf("google: error getting credentials using %v environment variable: %v", envVar, err)
}
return ts, nil
}
// Second, try a well-known file.
filename := wellKnownFile()
_, err := os.Stat(filename)
if err == nil {
ts, err2 := tokenSourceFromFile(ctx, filename, scope)
if err2 == nil {
return ts, nil
}
err = err2
} else if os.IsNotExist(err) {
err = nil // ignore this error
}
if err != nil {
return nil, fmt.Errorf("google: error getting credentials using well-known file (%v): %v", filename, err)
}
// Third, if we're on Google App Engine use those credentials.
if appengineTokenFunc != nil {
return AppEngineTokenSource(ctx, scope...), nil
}
// Fourth, if we're on Google Compute Engine use the metadata server.
if metadata.OnGCE() {
return ComputeTokenSource(""), nil
}
// None are found; return helpful error.
const url = "https://developers.google.com/accounts/application-default-credentials"
return nil, fmt.Errorf("google: could not find default credentials. See %v for more information.", url)
}
func wellKnownFile() string {
const f = "application_default_credentials.json"
if runtime.GOOS == "windows" {
return filepath.Join(os.Getenv("APPDATA"), "gcloud", f)
}
return filepath.Join(guessUnixHomeDir(), ".config", "gcloud", f)
}
func tokenSourceFromFile(ctx context.Context, filename string, scopes []string) (oauth2.TokenSource, error) {
b, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
var d struct {
// Common fields
Type string
ClientID string `json:"client_id"`
// User Credential fields
ClientSecret string `json:"client_secret"`
RefreshToken string `json:"refresh_token"`
// Service Account fields
ClientEmail string `json:"client_email"`
PrivateKeyID string `json:"private_key_id"`
PrivateKey string `json:"private_key"`
}
if err := json.Unmarshal(b, &d); err != nil {
return nil, err
}
switch d.Type {
case "authorized_user":
cfg := &oauth2.Config{
ClientID: d.ClientID,
ClientSecret: d.ClientSecret,
Scopes: append([]string{}, scopes...), // copy
Endpoint: Endpoint,
}
tok := &oauth2.Token{RefreshToken: d.RefreshToken}
return cfg.TokenSource(ctx, tok), nil
case "service_account":
cfg := &jwt.Config{
Email: d.ClientEmail,
PrivateKey: []byte(d.PrivateKey),
Scopes: append([]string{}, scopes...), // copy
TokenURL: JWTTokenURL,
}
return cfg.TokenSource(ctx), nil
case "":
return nil, errors.New("missing 'type' field in credentials")
default:
return nil, fmt.Errorf("unknown credential type: %q", d.Type)
}
}

View File

@@ -0,0 +1,150 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build appenginevm !appengine
package google_test
import (
"fmt"
"io/ioutil"
"log"
"net/http"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/google"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/jwt"
"google.golang.org/appengine"
"google.golang.org/appengine/urlfetch"
)
func ExampleDefaultClient() {
client, err := google.DefaultClient(oauth2.NoContext,
"https://www.googleapis.com/auth/devstorage.full_control")
if err != nil {
log.Fatal(err)
}
client.Get("...")
}
func Example_webServer() {
// Your credentials should be obtained from the Google
// Developer Console (https://console.developers.google.com).
conf := &oauth2.Config{
ClientID: "YOUR_CLIENT_ID",
ClientSecret: "YOUR_CLIENT_SECRET",
RedirectURL: "YOUR_REDIRECT_URL",
Scopes: []string{
"https://www.googleapis.com/auth/bigquery",
"https://www.googleapis.com/auth/blogger",
},
Endpoint: google.Endpoint,
}
// Redirect user to Google's consent page to ask for permission
// for the scopes specified above.
url := conf.AuthCodeURL("state")
fmt.Printf("Visit the URL for the auth dialog: %v", url)
// Handle the exchange code to initiate a transport.
tok, err := conf.Exchange(oauth2.NoContext, "authorization-code")
if err != nil {
log.Fatal(err)
}
client := conf.Client(oauth2.NoContext, tok)
client.Get("...")
}
func ExampleJWTConfigFromJSON() {
// Your credentials should be obtained from the Google
// Developer Console (https://console.developers.google.com).
// Navigate to your project, then see the "Credentials" page
// under "APIs & Auth".
// To create a service account client, click "Create new Client ID",
// select "Service Account", and click "Create Client ID". A JSON
// key file will then be downloaded to your computer.
data, err := ioutil.ReadFile("/path/to/your-project-key.json")
if err != nil {
log.Fatal(err)
}
conf, err := google.JWTConfigFromJSON(data, "https://www.googleapis.com/auth/bigquery")
if err != nil {
log.Fatal(err)
}
// Initiate an http.Client. The following GET request will be
// authorized and authenticated on the behalf of
// your service account.
client := conf.Client(oauth2.NoContext)
client.Get("...")
}
func ExampleSDKConfig() {
// The credentials will be obtained from the first account that
// has been authorized with `gcloud auth login`.
conf, err := google.NewSDKConfig("")
if err != nil {
log.Fatal(err)
}
// Initiate an http.Client. The following GET request will be
// authorized and authenticated on the behalf of the SDK user.
client := conf.Client(oauth2.NoContext)
client.Get("...")
}
func Example_serviceAccount() {
// Your credentials should be obtained from the Google
// Developer Console (https://console.developers.google.com).
conf := &jwt.Config{
Email: "xxx@developer.gserviceaccount.com",
// The contents of your RSA private key or your PEM file
// that contains a private key.
// If you have a p12 file instead, you
// can use `openssl` to export the private key into a pem file.
//
// $ openssl pkcs12 -in key.p12 -passin pass:notasecret -out key.pem -nodes
//
// The field only supports PEM containers with no passphrase.
// The openssl command will convert p12 keys to passphrase-less PEM containers.
PrivateKey: []byte("-----BEGIN RSA PRIVATE KEY-----..."),
Scopes: []string{
"https://www.googleapis.com/auth/bigquery",
"https://www.googleapis.com/auth/blogger",
},
TokenURL: google.JWTTokenURL,
// If you would like to impersonate a user, you can
// create a transport with a subject. The following GET
// request will be made on the behalf of user@example.com.
// Optional.
Subject: "user@example.com",
}
// Initiate an http.Client, the following GET request will be
// authorized and authenticated on the behalf of user@example.com.
client := conf.Client(oauth2.NoContext)
client.Get("...")
}
func ExampleAppEngineTokenSource() {
var req *http.Request // from the ServeHTTP handler
ctx := appengine.NewContext(req)
client := &http.Client{
Transport: &oauth2.Transport{
Source: google.AppEngineTokenSource(ctx, "https://www.googleapis.com/auth/bigquery"),
Base: &urlfetch.Transport{
Context: ctx,
},
},
}
client.Get("...")
}
func ExampleComputeTokenSource() {
client := &http.Client{
Transport: &oauth2.Transport{
// Fetch from Google Compute Engine's metadata server to retrieve
// an access token for the provided account.
// If no account is specified, "default" is used.
Source: google.ComputeTokenSource(""),
},
}
client.Get("...")
}

View File

@@ -0,0 +1,145 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package google provides support for making OAuth2 authorized and
// authenticated HTTP requests to Google APIs.
// It supports the Web server flow, client-side credentials, service accounts,
// Google Compute Engine service accounts, and Google App Engine service
// accounts.
//
// For more information, please read
// https://developers.google.com/accounts/docs/OAuth2
// and
// https://developers.google.com/accounts/application-default-credentials.
package google
import (
"encoding/json"
"errors"
"fmt"
"strings"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/jwt"
"github.com/coreos/etcd/Godeps/_workspace/src/google.golang.org/cloud/compute/metadata"
)
// Endpoint is Google's OAuth 2.0 endpoint.
var Endpoint = oauth2.Endpoint{
AuthURL: "https://accounts.google.com/o/oauth2/auth",
TokenURL: "https://accounts.google.com/o/oauth2/token",
}
// JWTTokenURL is Google's OAuth 2.0 token URL to use with the JWT flow.
const JWTTokenURL = "https://accounts.google.com/o/oauth2/token"
// ConfigFromJSON uses a Google Developers Console client_credentials.json
// file to construct a config.
// client_credentials.json can be downloaded from https://console.developers.google.com,
// under "APIs & Auth" > "Credentials". Download the Web application credentials in the
// JSON format and provide the contents of the file as jsonKey.
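// A hypothetical usage sketch (the file name and scope are illustrative; it
// mirrors the pattern used in example_test.go):
//
//	data, err := ioutil.ReadFile("client_credentials.json")
//	if err != nil {
//		log.Fatal(err)
//	}
//	conf, err := google.ConfigFromJSON(data, "https://www.googleapis.com/auth/bigquery")
//	if err != nil {
//		log.Fatal(err)
//	}
//	// Redirect the user to conf.AuthCodeURL("state") and exchange the
//	// returned code for a token with conf.Exchange, as with any oauth2.Config.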
func ConfigFromJSON(jsonKey []byte, scope ...string) (*oauth2.Config, error) {
type cred struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
RedirectURIs []string `json:"redirect_uris"`
AuthURI string `json:"auth_uri"`
TokenURI string `json:"token_uri"`
}
var j struct {
Web *cred `json:"web"`
Installed *cred `json:"installed"`
}
if err := json.Unmarshal(jsonKey, &j); err != nil {
return nil, err
}
var c *cred
switch {
case j.Web != nil:
c = j.Web
case j.Installed != nil:
c = j.Installed
default:
return nil, fmt.Errorf("oauth2/google: no credentials found")
}
if len(c.RedirectURIs) < 1 {
return nil, errors.New("oauth2/google: missing redirect URL in the client_credentials.json")
}
return &oauth2.Config{
ClientID: c.ClientID,
ClientSecret: c.ClientSecret,
RedirectURL: c.RedirectURIs[0],
Scopes: scope,
Endpoint: oauth2.Endpoint{
AuthURL: c.AuthURI,
TokenURL: c.TokenURI,
},
}, nil
}
// JWTConfigFromJSON uses a Google Developers service account JSON key file to read
// the credentials that authorize and authenticate the requests.
// Create a service account on "Credentials" page under "APIs & Auth" for your
// project at https://console.developers.google.com to download a JSON key file.
func JWTConfigFromJSON(jsonKey []byte, scope ...string) (*jwt.Config, error) {
var key struct {
Email string `json:"client_email"`
PrivateKey string `json:"private_key"`
}
if err := json.Unmarshal(jsonKey, &key); err != nil {
return nil, err
}
return &jwt.Config{
Email: key.Email,
PrivateKey: []byte(key.PrivateKey),
Scopes: scope,
TokenURL: JWTTokenURL,
}, nil
}
// ComputeTokenSource returns a token source that fetches access tokens
// from Google Compute Engine (GCE)'s metadata server. It's only valid to use
// this token source if your program is running on a GCE instance.
// If no account is specified, "default" is used.
// Further information about retrieving access tokens from the GCE metadata
// server can be found at https://cloud.google.com/compute/docs/authentication.
func ComputeTokenSource(account string) oauth2.TokenSource {
return oauth2.ReuseTokenSource(nil, computeSource{account: account})
}
type computeSource struct {
account string
}
func (cs computeSource) Token() (*oauth2.Token, error) {
if !metadata.OnGCE() {
return nil, errors.New("oauth2/google: can't get a token from the metadata service; not running on GCE")
}
acct := cs.account
if acct == "" {
acct = "default"
}
tokenJSON, err := metadata.Get("instance/service-accounts/" + acct + "/token")
if err != nil {
return nil, err
}
var res struct {
AccessToken string `json:"access_token"`
ExpiresInSec int `json:"expires_in"`
TokenType string `json:"token_type"`
}
err = json.NewDecoder(strings.NewReader(tokenJSON)).Decode(&res)
if err != nil {
return nil, fmt.Errorf("oauth2/google: invalid token JSON from metadata: %v", err)
}
if res.ExpiresInSec == 0 || res.AccessToken == "" {
return nil, fmt.Errorf("oauth2/google: incomplete token received from metadata")
}
return &oauth2.Token{
AccessToken: res.AccessToken,
TokenType: res.TokenType,
Expiry: time.Now().Add(time.Duration(res.ExpiresInSec) * time.Second),
}, nil
}

View File

@@ -0,0 +1,67 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package google
import (
"strings"
"testing"
)
var webJSONKey = []byte(`
{
"web": {
"auth_uri": "https://google.com/o/oauth2/auth",
"client_secret": "3Oknc4jS_wA2r9i",
"token_uri": "https://google.com/o/oauth2/token",
"client_email": "222-nprqovg5k43uum874cs9osjt2koe97g8@developer.gserviceaccount.com",
"redirect_uris": ["https://www.example.com/oauth2callback"],
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/222-nprqovg5k43uum874cs9osjt2koe97g8@developer.gserviceaccount.com",
"client_id": "222-nprqovg5k43uum874cs9osjt2koe97g8.apps.googleusercontent.com",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"javascript_origins": ["https://www.example.com"]
}
}`)
var installedJSONKey = []byte(`{
"installed": {
"client_id": "222-installed.apps.googleusercontent.com",
"redirect_uris": ["https://www.example.com/oauth2callback"]
}
}`)
func TestConfigFromJSON(t *testing.T) {
conf, err := ConfigFromJSON(webJSONKey, "scope1", "scope2")
if err != nil {
t.Error(err)
}
if got, want := conf.ClientID, "222-nprqovg5k43uum874cs9osjt2koe97g8.apps.googleusercontent.com"; got != want {
t.Errorf("ClientID = %q; want %q", got, want)
}
if got, want := conf.ClientSecret, "3Oknc4jS_wA2r9i"; got != want {
t.Errorf("ClientSecret = %q; want %q", got, want)
}
if got, want := conf.RedirectURL, "https://www.example.com/oauth2callback"; got != want {
t.Errorf("RedictURL = %q; want %q", got, want)
}
if got, want := strings.Join(conf.Scopes, ","), "scope1,scope2"; got != want {
t.Errorf("Scopes = %q; want %q", got, want)
}
if got, want := conf.Endpoint.AuthURL, "https://google.com/o/oauth2/auth"; got != want {
t.Errorf("AuthURL = %q; want %q", got, want)
}
if got, want := conf.Endpoint.TokenURL, "https://google.com/o/oauth2/token"; got != want {
t.Errorf("TokenURL = %q; want %q", got, want)
}
}
func TestConfigFromJSON_Installed(t *testing.T) {
conf, err := ConfigFromJSON(installedJSONKey)
if err != nil {
t.Error(err)
}
if got, want := conf.ClientID, "222-installed.apps.googleusercontent.com"; got != want {
t.Errorf("ClientID = %q; want %q", got, want)
}
}

168
Godeps/_workspace/src/golang.org/x/oauth2/google/sdk.go generated vendored Normal file
View File

@@ -0,0 +1,168 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package google
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"os"
"os/user"
"path/filepath"
"runtime"
"strings"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/internal"
)
type sdkCredentials struct {
Data []struct {
Credential struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
TokenExpiry *time.Time `json:"token_expiry"`
} `json:"credential"`
Key struct {
Account string `json:"account"`
Scope string `json:"scope"`
} `json:"key"`
}
}
// An SDKConfig provides access to tokens from an account already
// authorized via the Google Cloud SDK.
type SDKConfig struct {
conf oauth2.Config
initialToken *oauth2.Token
}
// NewSDKConfig creates an SDKConfig for the given Google Cloud SDK
// account. If account is empty, the account currently active in
// Google Cloud SDK properties is used.
// Google Cloud SDK credentials must be created by running `gcloud auth`
// before using this function.
// The Google Cloud SDK is available at https://cloud.google.com/sdk/.
func NewSDKConfig(account string) (*SDKConfig, error) {
configPath, err := sdkConfigPath()
if err != nil {
return nil, fmt.Errorf("oauth2/google: error getting SDK config path: %v", err)
}
credentialsPath := filepath.Join(configPath, "credentials")
f, err := os.Open(credentialsPath)
if err != nil {
return nil, fmt.Errorf("oauth2/google: failed to load SDK credentials: %v", err)
}
defer f.Close()
var c sdkCredentials
if err := json.NewDecoder(f).Decode(&c); err != nil {
return nil, fmt.Errorf("oauth2/google: failed to decode SDK credentials from %q: %v", credentialsPath, err)
}
if len(c.Data) == 0 {
return nil, fmt.Errorf("oauth2/google: no credentials found in %q, run `gcloud auth login` to create one", credentialsPath)
}
if account == "" {
propertiesPath := filepath.Join(configPath, "properties")
f, err := os.Open(propertiesPath)
if err != nil {
return nil, fmt.Errorf("oauth2/google: failed to load SDK properties: %v", err)
}
defer f.Close()
ini, err := internal.ParseINI(f)
if err != nil {
return nil, fmt.Errorf("oauth2/google: failed to parse SDK properties %q: %v", propertiesPath, err)
}
core, ok := ini["core"]
if !ok {
return nil, fmt.Errorf("oauth2/google: failed to find [core] section in %v", ini)
}
active, ok := core["account"]
if !ok {
return nil, fmt.Errorf("oauth2/google: failed to find %q attribute in %v", "account", core)
}
account = active
}
for _, d := range c.Data {
if account == "" || d.Key.Account == account {
if d.Credential.AccessToken == "" && d.Credential.RefreshToken == "" {
return nil, fmt.Errorf("oauth2/google: no token available for account %q", account)
}
var expiry time.Time
if d.Credential.TokenExpiry != nil {
expiry = *d.Credential.TokenExpiry
}
return &SDKConfig{
conf: oauth2.Config{
ClientID: d.Credential.ClientID,
ClientSecret: d.Credential.ClientSecret,
Scopes: strings.Split(d.Key.Scope, " "),
Endpoint: Endpoint,
RedirectURL: "oob",
},
initialToken: &oauth2.Token{
AccessToken: d.Credential.AccessToken,
RefreshToken: d.Credential.RefreshToken,
Expiry: expiry,
},
}, nil
}
}
return nil, fmt.Errorf("oauth2/google: no such credentials for account %q", account)
}
// Client returns an HTTP client using Google Cloud SDK credentials to
// authorize requests. The token will auto-refresh as necessary. The
// underlying http.RoundTripper will be obtained using the provided
// context. The returned client and its Transport should not be
// modified.
func (c *SDKConfig) Client(ctx context.Context) *http.Client {
return &http.Client{
Transport: &oauth2.Transport{
Source: c.TokenSource(ctx),
},
}
}
// TokenSource returns an oauth2.TokenSource that retrieves tokens from
// Google Cloud SDK credentials using the provided context.
// It returns the current access token stored in the credentials,
// and refreshes it when it expires, but it won't update the credentials
// with the new access token.
func (c *SDKConfig) TokenSource(ctx context.Context) oauth2.TokenSource {
return c.conf.TokenSource(ctx, c.initialToken)
}
// Scopes are the OAuth 2.0 scopes the current account is authorized for.
func (c *SDKConfig) Scopes() []string {
return c.conf.Scopes
}
// sdkConfigPath tries to guess where the gcloud config is located.
// It can be overridden during tests.
var sdkConfigPath = func() (string, error) {
if runtime.GOOS == "windows" {
return filepath.Join(os.Getenv("APPDATA"), "gcloud"), nil
}
homeDir := guessUnixHomeDir()
if homeDir == "" {
return "", errors.New("unable to get current user home directory: os/user lookup failed; $HOME is empty")
}
return filepath.Join(homeDir, ".config", "gcloud"), nil
}
func guessUnixHomeDir() string {
usr, err := user.Current()
if err == nil {
return usr.HomeDir
}
return os.Getenv("HOME")
}

View File

@@ -0,0 +1,46 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package google
import "testing"
func TestSDKConfig(t *testing.T) {
sdkConfigPath = func() (string, error) {
return "testdata/gcloud", nil
}
tests := []struct {
account string
accessToken string
err bool
}{
{"", "bar_access_token", false},
{"foo@example.com", "foo_access_token", false},
{"bar@example.com", "bar_access_token", false},
{"baz@serviceaccount.example.com", "", true},
}
for _, tt := range tests {
c, err := NewSDKConfig(tt.account)
if got, want := err != nil, tt.err; got != want {
if !tt.err {
t.Errorf("expected no error, got error: %v", tt.err, err)
} else {
t.Errorf("expected error, got none")
}
continue
}
if err != nil {
continue
}
tok := c.initialToken
if tok == nil {
t.Errorf("expected token %q, got: nil", tt.accessToken)
continue
}
if tok.AccessToken != tt.accessToken {
t.Errorf("expected token %q, got: %q", tt.accessToken, tok.AccessToken)
}
}
}

View File

@@ -0,0 +1,122 @@
{
"data": [
{
"credential": {
"_class": "OAuth2Credentials",
"_module": "oauth2client.client",
"access_token": "foo_access_token",
"client_id": "foo_client_id",
"client_secret": "foo_client_secret",
"id_token": {
"at_hash": "foo_at_hash",
"aud": "foo_aud",
"azp": "foo_azp",
"cid": "foo_cid",
"email": "foo@example.com",
"email_verified": true,
"exp": 1420573614,
"iat": 1420569714,
"id": "1337",
"iss": "accounts.google.com",
"sub": "1337",
"token_hash": "foo_token_hash",
"verified_email": true
},
"invalid": false,
"refresh_token": "foo_refresh_token",
"revoke_uri": "https://accounts.google.com/o/oauth2/revoke",
"token_expiry": "2015-01-09T00:51:51Z",
"token_response": {
"access_token": "foo_access_token",
"expires_in": 3600,
"id_token": "foo_id_token",
"token_type": "Bearer"
},
"token_uri": "https://accounts.google.com/o/oauth2/token",
"user_agent": "Cloud SDK Command Line Tool"
},
"key": {
"account": "foo@example.com",
"clientId": "foo_client_id",
"scope": "https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/ndev.cloudman https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/sqlservice.admin https://www.googleapis.com/auth/prediction https://www.googleapis.com/auth/projecthosting",
"type": "google-cloud-sdk"
}
},
{
"credential": {
"_class": "OAuth2Credentials",
"_module": "oauth2client.client",
"access_token": "bar_access_token",
"client_id": "bar_client_id",
"client_secret": "bar_client_secret",
"id_token": {
"at_hash": "bar_at_hash",
"aud": "bar_aud",
"azp": "bar_azp",
"cid": "bar_cid",
"email": "bar@example.com",
"email_verified": true,
"exp": 1420573614,
"iat": 1420569714,
"id": "1337",
"iss": "accounts.google.com",
"sub": "1337",
"token_hash": "bar_token_hash",
"verified_email": true
},
"invalid": false,
"refresh_token": "bar_refresh_token",
"revoke_uri": "https://accounts.google.com/o/oauth2/revoke",
"token_expiry": "2015-01-09T00:51:51Z",
"token_response": {
"access_token": "bar_access_token",
"expires_in": 3600,
"id_token": "bar_id_token",
"token_type": "Bearer"
},
"token_uri": "https://accounts.google.com/o/oauth2/token",
"user_agent": "Cloud SDK Command Line Tool"
},
"key": {
"account": "bar@example.com",
"clientId": "bar_client_id",
"scope": "https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/ndev.cloudman https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/sqlservice.admin https://www.googleapis.com/auth/prediction https://www.googleapis.com/auth/projecthosting",
"type": "google-cloud-sdk"
}
},
{
"credential": {
"_class": "ServiceAccountCredentials",
"_kwargs": {},
"_module": "oauth2client.client",
"_private_key_id": "00000000000000000000000000000000",
"_private_key_pkcs8_text": "-----BEGIN RSA PRIVATE KEY-----\nMIICWwIBAAKBgQCt3fpiynPSaUhWSIKMGV331zudwJ6GkGmvQtwsoK2S2LbvnSwU\nNxgj4fp08kIDR5p26wF4+t/HrKydMwzftXBfZ9UmLVJgRdSswmS5SmChCrfDS5OE\nvFFcN5+6w1w8/Nu657PF/dse8T0bV95YrqyoR0Osy8WHrUOMSIIbC3hRuwIDAQAB\nAoGAJrGE/KFjn0sQ7yrZ6sXmdLawrM3mObo/2uI9T60+k7SpGbBX0/Pi6nFrJMWZ\nTVONG7P3Mu5aCPzzuVRYJB0j8aldSfzABTY3HKoWCczqw1OztJiEseXGiYz4QOyr\nYU3qDyEpdhS6q6wcoLKGH+hqRmz6pcSEsc8XzOOu7s4xW8kCQQDkc75HjhbarCnd\nJJGMe3U76+6UGmdK67ltZj6k6xoB5WbTNChY9TAyI2JC+ppYV89zv3ssj4L+02u3\nHIHFGxsHAkEAwtU1qYb1tScpchPobnYUFiVKJ7KA8EZaHVaJJODW/cghTCV7BxcJ\nbgVvlmk4lFKn3lPKAgWw7PdQsBTVBUcCrQJATPwoIirizrv3u5soJUQxZIkENAqV\nxmybZx9uetrzP7JTrVbFRf0SScMcyN90hdLJiQL8+i4+gaszgFht7sNMnwJAAbfj\nq0UXcauQwALQ7/h2oONfTg5S+MuGC/AxcXPSMZbMRGGoPh3D5YaCv27aIuS/ukQ+\n6dmm/9AGlCb64fsIWQJAPaokbjIifo+LwC5gyK73Mc4t8nAOSZDenzd/2f6TCq76\nS1dcnKiPxaED7W/y6LJiuBT2rbZiQ2L93NJpFZD/UA==\n-----END RSA PRIVATE KEY-----\n",
"_revoke_uri": "https://accounts.google.com/o/oauth2/revoke",
"_scopes": "https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/ndev.cloudman https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/sqlservice.admin https://www.googleapis.com/auth/prediction https://www.googleapis.com/auth/projecthosting",
"_service_account_email": "baz@serviceaccount.example.com",
"_service_account_id": "baz.serviceaccount.example.com",
"_token_uri": "https://accounts.google.com/o/oauth2/token",
"_user_agent": "Cloud SDK Command Line Tool",
"access_token": null,
"assertion_type": null,
"client_id": null,
"client_secret": null,
"id_token": null,
"invalid": false,
"refresh_token": null,
"revoke_uri": "https://accounts.google.com/o/oauth2/revoke",
"service_account_name": "baz@serviceaccount.example.com",
"token_expiry": null,
"token_response": null,
"user_agent": "Cloud SDK Command Line Tool"
},
"key": {
"account": "baz@serviceaccount.example.com",
"clientId": "baz_client_id",
"scope": "https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/ndev.cloudman https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/sqlservice.admin https://www.googleapis.com/auth/prediction https://www.googleapis.com/auth/projecthosting",
"type": "google-cloud-sdk"
}
}
],
"file_version": 1
}

View File

@@ -0,0 +1,2 @@
[core]
account = bar@example.com

View File

@@ -0,0 +1,69 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package internal contains support packages for oauth2 package.
package internal
import (
"bufio"
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"errors"
"fmt"
"io"
"strings"
)
// ParseKey converts the binary contents of a private key file
// to an *rsa.PrivateKey. It detects whether the private key is in a
// PEM container or not. If so, it extracts the private key
// from PEM container before conversion. It only supports PEM
// containers with no passphrase.
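// For illustration (an assumption, not from the vendored source): a PKCS1 key
// produced by `openssl genrsa 2048` can be passed in either as the raw PEM
// bytes or as the decoded DER bytes; any PEM armor is stripped before the
// PKCS8 parse is attempted, with PKCS1 as the fallback.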
func ParseKey(key []byte) (*rsa.PrivateKey, error) {
block, _ := pem.Decode(key)
if block != nil {
key = block.Bytes
}
parsedKey, err := x509.ParsePKCS8PrivateKey(key)
if err != nil {
parsedKey, err = x509.ParsePKCS1PrivateKey(key)
if err != nil {
return nil, fmt.Errorf("private key should be a PEM or plain PKSC1 or PKCS8; parse error: %v", err)
}
}
parsed, ok := parsedKey.(*rsa.PrivateKey)
if !ok {
return nil, errors.New("private key is invalid")
}
return parsed, nil
}
func ParseINI(ini io.Reader) (map[string]map[string]string, error) {
result := map[string]map[string]string{
"": map[string]string{}, // root section
}
scanner := bufio.NewScanner(ini)
currentSection := ""
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if strings.HasPrefix(line, ";") {
// comment.
continue
}
if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
currentSection = strings.TrimSpace(line[1 : len(line)-1])
result[currentSection] = map[string]string{}
continue
}
parts := strings.SplitN(line, "=", 2)
if len(parts) == 2 && parts[0] != "" {
result[currentSection][strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
}
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("error scanning ini: %v", err)
}
return result, nil
}
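
ParseKey and ParseINI above are consumed by the credential plumbing elsewhere in this commit; ParseINI in particular reads gcloud's properties files such as the [core] fixture a few files up. A minimal sketch of calling it directly (illustrative only: on newer Go toolchains this vendored internal path is importable only from inside the oauth2 tree):

package main

import (
    "fmt"
    "log"
    "strings"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/internal"
)

func main() {
    // Same shape as the gcloud "properties" fixture earlier in this commit.
    src := "[core]\naccount = bar@example.com\n"
    sections, err := internal.ParseINI(strings.NewReader(src))
    if err != nil {
        log.Fatal(err)
    }
    // Keys that appear before any [section] header land in the "" root section.
    fmt.Println(sections["core"]["account"]) // prints: bar@example.com
}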

View File

@ -0,0 +1,62 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package internal contains support packages for the oauth2 package.
package internal
import (
"reflect"
"strings"
"testing"
)
func TestParseINI(t *testing.T) {
tests := []struct {
ini string
want map[string]map[string]string
}{
{
`root = toor
[foo]
bar = hop
ini = nin
`,
map[string]map[string]string{
"": map[string]string{"root": "toor"},
"foo": map[string]string{"bar": "hop", "ini": "nin"},
},
},
{
`[empty]
[section]
empty=
`,
map[string]map[string]string{
"": map[string]string{},
"empty": map[string]string{},
"section": map[string]string{"empty": ""},
},
},
{
`ignore
[invalid
=stuff
;comment=true
`,
map[string]map[string]string{
"": map[string]string{},
},
},
}
for _, tt := range tests {
result, err := ParseINI(strings.NewReader(tt.ini))
if err != nil {
t.Errorf("ParseINI(%q) error %v, want: no error", tt.ini, err)
continue
}
if !reflect.DeepEqual(result, tt.want) {
t.Errorf("ParseINI(%q) = %#v, want: %#v", tt.ini, result, tt.want)
}
}
}

160
Godeps/_workspace/src/golang.org/x/oauth2/jws/jws.go generated vendored Normal file
View File

@ -0,0 +1,160 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package jws provides encoding and decoding utilities for
// signed JWS messages.
package jws
import (
"bytes"
"crypto"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"strings"
"time"
)
// ClaimSet contains information about the JWT signature including the
// permissions being requested (scopes), the target of the token, the issuer,
// the time the token was issued, and the lifetime of the token.
type ClaimSet struct {
Iss string `json:"iss"` // email address of the client_id of the application making the access token request
Scope string `json:"scope,omitempty"` // space-delimited list of the permissions the application requests
Aud string `json:"aud"` // descriptor of the intended target of the assertion (Optional).
Exp int64 `json:"exp"` // the expiration time of the assertion
Iat int64 `json:"iat"` // the time the assertion was issued.
Typ string `json:"typ,omitempty"` // token type (Optional).
// Email for which the application is requesting delegated access (Optional).
Sub string `json:"sub,omitempty"`
// The old name of Sub. Client keeps setting Prn to be
// compliant with legacy OAuth 2.0 providers. (Optional)
Prn string `json:"prn,omitempty"`
// See http://tools.ietf.org/html/draft-jones-json-web-token-10#section-4.3
// This array is marshalled using custom code (see (c *ClaimSet) encode()).
PrivateClaims map[string]interface{} `json:"-"`
exp time.Time
iat time.Time
}
func (c *ClaimSet) encode() (string, error) {
if c.exp.IsZero() || c.iat.IsZero() {
// Backdate the issued-at time for machines whose clocks are not perfectly in sync.
// If client machine's time is in the future according
// to Google servers, an access token will not be issued.
now := time.Now().Add(-10 * time.Second)
c.iat = now
c.exp = now.Add(time.Hour)
}
c.Exp = c.exp.Unix()
c.Iat = c.iat.Unix()
b, err := json.Marshal(c)
if err != nil {
return "", err
}
if len(c.PrivateClaims) == 0 {
return base64Encode(b), nil
}
// Marshal private claim set and then append it to b.
prv, err := json.Marshal(c.PrivateClaims)
if err != nil {
return "", fmt.Errorf("jws: invalid map of private claims %v", c.PrivateClaims)
}
// Concatenate public and private claim JSON objects.
if !bytes.HasSuffix(b, []byte{'}'}) {
return "", fmt.Errorf("jws: invalid JSON %s", b)
}
if !bytes.HasPrefix(prv, []byte{'{'}) {
return "", fmt.Errorf("jws: invalid JSON %s", prv)
}
b[len(b)-1] = ',' // Replace closing curly brace with a comma.
b = append(b, prv[1:]...) // Append private claims.
return base64Encode(b), nil
}
// Header represents the header for the signed JWS payloads.
type Header struct {
// The algorithm used for signature.
Algorithm string `json:"alg"`
// Represents the token type.
Typ string `json:"typ"`
}
func (h *Header) encode() (string, error) {
b, err := json.Marshal(h)
if err != nil {
return "", err
}
return base64Encode(b), nil
}
// Decode decodes a claim set from a JWS payload.
func Decode(payload string) (*ClaimSet, error) {
// decode returned id token to get expiry
s := strings.Split(payload, ".")
if len(s) < 2 {
// TODO(jbd): Provide more context about the error.
return nil, errors.New("jws: invalid token received")
}
decoded, err := base64Decode(s[1])
if err != nil {
return nil, err
}
c := &ClaimSet{}
err = json.NewDecoder(bytes.NewBuffer(decoded)).Decode(c)
return c, err
}
// Encode encodes a signed JWS with provided header and claim set.
func Encode(header *Header, c *ClaimSet, signature *rsa.PrivateKey) (string, error) {
head, err := header.encode()
if err != nil {
return "", err
}
cs, err := c.encode()
if err != nil {
return "", err
}
ss := fmt.Sprintf("%s.%s", head, cs)
h := sha256.New()
h.Write([]byte(ss))
b, err := rsa.SignPKCS1v15(rand.Reader, signature, crypto.SHA256, h.Sum(nil))
if err != nil {
return "", err
}
sig := base64Encode(b)
return fmt.Sprintf("%s.%s", ss, sig), nil
}
// base64Encode returns a Base64url encoded version of the input with any
// trailing "=" stripped.
func base64Encode(b []byte) string {
return strings.TrimRight(base64.URLEncoding.EncodeToString(b), "=")
}
// base64Decode decodes the Base64url encoded string
func base64Decode(s string) ([]byte, error) {
// add back missing padding
switch len(s) % 4 {
case 2:
s += "=="
case 3:
s += "="
}
return base64.URLEncoding.DecodeString(s)
}
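
A minimal sketch of signing a claim set and then re-parsing it with this package; the claim values are placeholders and the RSA key is generated on the fly purely for illustration:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "fmt"
    "log"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/jws"
)

func main() {
    // Throwaway key; a real caller would load one with internal.ParseKey.
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        log.Fatal(err)
    }
    claims := &jws.ClaimSet{
        Iss:   "service-account@example.com", // placeholder issuer
        Scope: "https://www.googleapis.com/auth/devstorage.read_only",
        Aud:   "https://accounts.google.com/o/oauth2/token",
    }
    header := &jws.Header{Algorithm: "RS256", Typ: "JWT"}

    // Encode marshals header.claims and signs it with RS256.
    token, err := jws.Encode(header, claims, key)
    if err != nil {
        log.Fatal(err)
    }

    // Decode parses the (unverified) claim set back out of the payload segment.
    got, err := jws.Decode(token)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(got.Iss, got.Aud)
}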

View File

@ -0,0 +1,31 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package jwt_test
import (
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/jwt"
)
func ExampleJWTConfig() {
conf := &jwt.Config{
Email: "xxx@developer.com",
// The contents of your RSA private key or your PEM file
// that contains a private key.
// If you have a p12 file instead, you
// can use `openssl` to export the private key into a pem file.
//
// $ openssl pkcs12 -in key.p12 -out key.pem -nodes
//
// It only supports PEM containers with no passphrase.
PrivateKey: []byte("-----BEGIN RSA PRIVATE KEY-----..."),
Subject: "user@example.com",
TokenURL: "https://provider.com/o/oauth2/token",
}
// Initiate an http.Client; the following GET request will be
// authorized and authenticated on behalf of user@example.com.
client := conf.Client(oauth2.NoContext)
client.Get("...")
}

147
Godeps/_workspace/src/golang.org/x/oauth2/jwt/jwt.go generated vendored Normal file
View File

@ -0,0 +1,147 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package jwt implements the OAuth 2.0 JSON Web Token flow, commonly
// known as "two-legged OAuth 2.0".
//
// See: https://tools.ietf.org/html/draft-ietf-oauth-jwt-bearer-12
package jwt
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/internal"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/jws"
)
var (
defaultGrantType = "urn:ietf:params:oauth:grant-type:jwt-bearer"
defaultHeader = &jws.Header{Algorithm: "RS256", Typ: "JWT"}
)
// Config is the configuration for using JWT to fetch tokens,
// commonly known as "two-legged OAuth 2.0".
type Config struct {
// Email is the OAuth client identifier used when communicating with
// the configured OAuth provider.
Email string
// PrivateKey contains the contents of an RSA private key or the
// contents of a PEM file that contains a private key. The provided
// private key is used to sign JWT payloads.
// PEM containers with a passphrase are not supported.
// Use the following command to convert a PKCS 12 file into a PEM.
//
// $ openssl pkcs12 -in key.p12 -out key.pem -nodes
//
PrivateKey []byte
// Subject is the optional user to impersonate.
Subject string
// Scopes optionally specifies a list of requested permission scopes.
Scopes []string
// TokenURL is the endpoint required to complete the 2-legged JWT flow.
TokenURL string
}
// TokenSource returns a JWT TokenSource using the configuration
// in c and the HTTP client from the provided context.
func (c *Config) TokenSource(ctx context.Context) oauth2.TokenSource {
return oauth2.ReuseTokenSource(nil, jwtSource{ctx, c})
}
// Client returns an HTTP client wrapping the context's
// HTTP transport and adding Authorization headers with tokens
// obtained from c.
//
// The returned client and its Transport should not be modified.
func (c *Config) Client(ctx context.Context) *http.Client {
return oauth2.NewClient(ctx, c.TokenSource(ctx))
}
// jwtSource is a source that always does a signed JWT request for a token.
// It should typically be wrapped with a reuseTokenSource.
type jwtSource struct {
ctx context.Context
conf *Config
}
func (js jwtSource) Token() (*oauth2.Token, error) {
pk, err := internal.ParseKey(js.conf.PrivateKey)
if err != nil {
return nil, err
}
hc := oauth2.NewClient(js.ctx, nil)
claimSet := &jws.ClaimSet{
Iss: js.conf.Email,
Scope: strings.Join(js.conf.Scopes, " "),
Aud: js.conf.TokenURL,
}
if subject := js.conf.Subject; subject != "" {
claimSet.Sub = subject
// prn is the old name of sub. Keep setting it
// to be compatible with legacy OAuth 2.0 providers.
claimSet.Prn = subject
}
payload, err := jws.Encode(defaultHeader, claimSet, pk)
if err != nil {
return nil, err
}
v := url.Values{}
v.Set("grant_type", defaultGrantType)
v.Set("assertion", payload)
resp, err := hc.PostForm(js.conf.TokenURL, v)
if err != nil {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v", err)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(io.LimitReader(resp.Body, 1<<20))
if err != nil {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v", err)
}
if c := resp.StatusCode; c < 200 || c > 299 {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v\nResponse: %s", resp.Status, body)
}
// tokenRes is the JSON response body.
var tokenRes struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
IDToken string `json:"id_token"`
ExpiresIn int64 `json:"expires_in"` // relative seconds from now
}
if err := json.Unmarshal(body, &tokenRes); err != nil {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v", err)
}
token := &oauth2.Token{
AccessToken: tokenRes.AccessToken,
TokenType: tokenRes.TokenType,
}
raw := make(map[string]interface{})
json.Unmarshal(body, &raw) // no error checks for optional fields
token = token.WithExtra(raw)
if secs := tokenRes.ExpiresIn; secs > 0 {
token.Expiry = time.Now().Add(time.Duration(secs) * time.Second)
}
if v := tokenRes.IDToken; v != "" {
// decode returned id token to get expiry
claimSet, err := jws.Decode(v)
if err != nil {
return nil, fmt.Errorf("oauth2: error decoding JWT token: %v", err)
}
token.Expiry = time.Unix(claimSet.Exp, 0)
}
return token, nil
}
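
Besides the Client example a few files up, a token can be fetched directly from the TokenSource. A minimal sketch with placeholder credentials; the call fails at runtime until a real PEM key and service-account email are supplied:

package main

import (
    "fmt"
    "log"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/jwt"
)

func main() {
    conf := &jwt.Config{
        Email:      "service-account@example.com",                // placeholder
        PrivateKey: []byte("-----BEGIN RSA PRIVATE KEY-----..."), // placeholder PEM
        Scopes:     []string{"https://www.googleapis.com/auth/devstorage.read_only"},
        TokenURL:   "https://accounts.google.com/o/oauth2/token",
    }
    // TokenSource wraps the 2-legged JWT flow in a caching ReuseTokenSource.
    ts := conf.TokenSource(oauth2.NoContext)
    tok, err := ts.Token()
    if err != nil {
        log.Fatal(err) // expected with the placeholder key above
    }
    fmt.Println(tok.AccessToken, tok.Expiry)
}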

View File

@ -0,0 +1,134 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package jwt
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
var dummyPrivateKey = []byte(`-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAx4fm7dngEmOULNmAs1IGZ9Apfzh+BkaQ1dzkmbUgpcoghucE
DZRnAGd2aPyB6skGMXUytWQvNYav0WTR00wFtX1ohWTfv68HGXJ8QXCpyoSKSSFY
fuP9X36wBSkSX9J5DVgiuzD5VBdzUISSmapjKm+DcbRALjz6OUIPEWi1Tjl6p5RK
1w41qdbmt7E5/kGhKLDuT7+M83g4VWhgIvaAXtnhklDAggilPPa8ZJ1IFe31lNlr
k4DRk38nc6sEutdf3RL7QoH7FBusI7uXV03DC6dwN1kP4GE7bjJhcRb/7jYt7CQ9
/E9Exz3c0yAp0yrTg0Fwh+qxfH9dKwN52S7SBwIDAQABAoIBAQCaCs26K07WY5Jt
3a2Cw3y2gPrIgTCqX6hJs7O5ByEhXZ8nBwsWANBUe4vrGaajQHdLj5OKfsIDrOvn
2NI1MqflqeAbu/kR32q3tq8/Rl+PPiwUsW3E6Pcf1orGMSNCXxeducF2iySySzh3
nSIhCG5uwJDWI7a4+9KiieFgK1pt/Iv30q1SQS8IEntTfXYwANQrfKUVMmVF9aIK
6/WZE2yd5+q3wVVIJ6jsmTzoDCX6QQkkJICIYwCkglmVy5AeTckOVwcXL0jqw5Kf
5/soZJQwLEyBoQq7Kbpa26QHq+CJONetPP8Ssy8MJJXBT+u/bSseMb3Zsr5cr43e
DJOhwsThAoGBAPY6rPKl2NT/K7XfRCGm1sbWjUQyDShscwuWJ5+kD0yudnT/ZEJ1
M3+KS/iOOAoHDdEDi9crRvMl0UfNa8MAcDKHflzxg2jg/QI+fTBjPP5GOX0lkZ9g
z6VePoVoQw2gpPFVNPPTxKfk27tEzbaffvOLGBEih0Kb7HTINkW8rIlzAoGBAM9y
1yr+jvfS1cGFtNU+Gotoihw2eMKtIqR03Yn3n0PK1nVCDKqwdUqCypz4+ml6cxRK
J8+Pfdh7D+ZJd4LEG6Y4QRDLuv5OA700tUoSHxMSNn3q9As4+T3MUyYxWKvTeu3U
f2NWP9ePU0lV8ttk7YlpVRaPQmc1qwooBA/z/8AdAoGAW9x0HWqmRICWTBnpjyxx
QGlW9rQ9mHEtUotIaRSJ6K/F3cxSGUEkX1a3FRnp6kPLcckC6NlqdNgNBd6rb2rA
cPl/uSkZP42Als+9YMoFPU/xrrDPbUhu72EDrj3Bllnyb168jKLa4VBOccUvggxr
Dm08I1hgYgdN5huzs7y6GeUCgYEAj+AZJSOJ6o1aXS6rfV3mMRve9bQ9yt8jcKXw
5HhOCEmMtaSKfnOF1Ziih34Sxsb7O2428DiX0mV/YHtBnPsAJidL0SdLWIapBzeg
KHArByIRkwE6IvJvwpGMdaex1PIGhx5i/3VZL9qiq/ElT05PhIb+UXgoWMabCp84
OgxDK20CgYAeaFo8BdQ7FmVX2+EEejF+8xSge6WVLtkaon8bqcn6P0O8lLypoOhd
mJAYH8WU+UAy9pecUnDZj14LAGNVmYcse8HFX71MoshnvCTFEPVo4rZxIAGwMpeJ
5jgQ3slYLpqrGlcbLgUXBUgzEO684Wk/UV9DFPlHALVqCfXQ9dpJPg==
-----END RSA PRIVATE KEY-----`)
func TestJWTFetch_JSONResponse(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{
"access_token": "90d64460d14870c08c81352a05dedd3465940a7c",
"scope": "user",
"token_type": "bearer",
"expires_in": 3600
}`))
}))
defer ts.Close()
conf := &Config{
Email: "aaa@xxx.com",
PrivateKey: dummyPrivateKey,
TokenURL: ts.URL,
}
tok, err := conf.TokenSource(oauth2.NoContext).Token()
if err != nil {
t.Fatal(err)
}
if !tok.Valid() {
t.Errorf("Token invalid")
}
if tok.AccessToken != "90d64460d14870c08c81352a05dedd3465940a7c" {
t.Errorf("Unexpected access token, %#v", tok.AccessToken)
}
if tok.TokenType != "bearer" {
t.Errorf("Unexpected token type, %#v", tok.TokenType)
}
if tok.Expiry.IsZero() {
t.Errorf("Unexpected token expiry, %#v", tok.Expiry)
}
scope := tok.Extra("scope")
if scope != "user" {
t.Errorf("Unexpected value for scope: %v", scope)
}
}
func TestJWTFetch_BadResponse(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"scope": "user", "token_type": "bearer"}`))
}))
defer ts.Close()
conf := &Config{
Email: "aaa@xxx.com",
PrivateKey: dummyPrivateKey,
TokenURL: ts.URL,
}
tok, err := conf.TokenSource(oauth2.NoContext).Token()
if err != nil {
t.Fatal(err)
}
if tok == nil {
t.Fatalf("token is nil")
}
if tok.Valid() {
t.Errorf("token is valid. want invalid.")
}
if tok.AccessToken != "" {
t.Errorf("Unexpected non-empty access token %q.", tok.AccessToken)
}
if want := "bearer"; tok.TokenType != want {
t.Errorf("TokenType = %q; want %q", tok.TokenType, want)
}
scope := tok.Extra("scope")
if want := "user"; scope != want {
t.Errorf("token scope = %q; want %q", scope, want)
}
}
func TestJWTFetch_BadResponseType(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"access_token":123, "scope": "user", "token_type": "bearer"}`))
}))
defer ts.Close()
conf := &Config{
Email: "aaa@xxx.com",
PrivateKey: dummyPrivateKey,
TokenURL: ts.URL,
}
tok, err := conf.TokenSource(oauth2.NoContext).Token()
if err == nil {
t.Error("got a token; expected error")
if tok.AccessToken != "" {
t.Errorf("Unexpected access token, %#v.", tok.AccessToken)
}
}
}

View File

@ -0,0 +1,16 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package linkedin provides constants for using OAuth2 to access LinkedIn.
package linkedin
import (
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
// Endpoint is LinkedIn's OAuth 2.0 endpoint.
var Endpoint = oauth2.Endpoint{
AuthURL: "https://www.linkedin.com/uas/oauth2/authorization",
TokenURL: "https://www.linkedin.com/uas/oauth2/accessToken",
}
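
Provider packages like this one exist only to be dropped into an oauth2.Config. A minimal sketch with a hypothetical client ID, secret, and redirect URL:

package main

import (
    "fmt"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/linkedin"
)

func main() {
    conf := &oauth2.Config{
        ClientID:     "YOUR_CLIENT_ID",     // hypothetical
        ClientSecret: "YOUR_CLIENT_SECRET", // hypothetical
        RedirectURL:  "https://example.com/callback",
        Endpoint:     linkedin.Endpoint,
    }
    // Send the user here to obtain an authorization code for Exchange.
    fmt.Println(conf.AuthCodeURL("state-token"))
}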

523
Godeps/_workspace/src/golang.org/x/oauth2/oauth2.go generated vendored Normal file
View File

@ -0,0 +1,523 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package oauth2 provides support for making
// OAuth2 authorized and authenticated HTTP requests.
// It can additionally grant authorization with Bearer JWT.
package oauth2
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"mime"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
)
// NoContext is the default context you should supply if not using
// your own context.Context (see https://golang.org/x/net/context).
var NoContext = context.TODO()
// Config describes a typical 3-legged OAuth2 flow, with both the
// client application information and the server's endpoint URLs.
type Config struct {
// ClientID is the application's ID.
ClientID string
// ClientSecret is the application's secret.
ClientSecret string
// Endpoint contains the resource server's token endpoint
// URLs. These are constants specific to each server and are
// often available via site-specific packages, such as
// google.Endpoint or github.Endpoint.
Endpoint Endpoint
// RedirectURL is the URL to redirect users going through
// the OAuth flow back to, after they have authorized the application.
RedirectURL string
// Scopes optionally specifies a list of requested permission scopes.
Scopes []string
}
// A TokenSource is anything that can return a token.
type TokenSource interface {
// Token returns a token or an error.
// Token must be safe for concurrent use by multiple goroutines.
// The returned Token must not be modified.
Token() (*Token, error)
}
// Endpoint contains the OAuth 2.0 provider's authorization and token
// endpoint URLs.
type Endpoint struct {
AuthURL string
TokenURL string
}
var (
// AccessTypeOnline and AccessTypeOffline are options passed
// to the Options.AuthCodeURL method. They modify the
// "access_type" field that gets sent in the URL returned by
// AuthCodeURL.
//
// Online is the default if neither is specified. If your
// application needs to refresh access tokens when the user
// is not present at the browser, then use offline. This will
// result in your application obtaining a refresh token the
// first time your application exchanges an authorization
// code for a user.
AccessTypeOnline AuthCodeOption = SetParam("access_type", "online")
AccessTypeOffline AuthCodeOption = SetParam("access_type", "offline")
// ApprovalForce forces the users to view the consent dialog
// and confirm the permissions request at the URL returned
// from AuthCodeURL, even if they've already done so.
ApprovalForce AuthCodeOption = SetParam("approval_prompt", "force")
)
// An AuthCodeOption is passed to Config.AuthCodeURL.
type AuthCodeOption interface {
setValue(url.Values)
}
type setParam struct{ k, v string }
func (p setParam) setValue(m url.Values) { m.Set(p.k, p.v) }
// SetParam builds an AuthCodeOption which passes key/value parameters
// to a provider's authorization endpoint.
func SetParam(key, value string) AuthCodeOption {
return setParam{key, value}
}
// AuthCodeURL returns a URL to OAuth 2.0 provider's consent page
// that asks for permissions for the required scopes explicitly.
//
// State is a token to protect the user from CSRF attacks. You must
// always provide a non-zero string and validate that it matches the
// state query parameter on your redirect callback.
// See http://tools.ietf.org/html/rfc6749#section-10.12 for more info.
//
// Opts may include AccessTypeOnline or AccessTypeOffline, as well
// as ApprovalForce.
func (c *Config) AuthCodeURL(state string, opts ...AuthCodeOption) string {
var buf bytes.Buffer
buf.WriteString(c.Endpoint.AuthURL)
v := url.Values{
"response_type": {"code"},
"client_id": {c.ClientID},
"redirect_uri": condVal(c.RedirectURL),
"scope": condVal(strings.Join(c.Scopes, " ")),
"state": condVal(state),
}
for _, opt := range opts {
opt.setValue(v)
}
if strings.Contains(c.Endpoint.AuthURL, "?") {
buf.WriteByte('&')
} else {
buf.WriteByte('?')
}
buf.WriteString(v.Encode())
return buf.String()
}
// PasswordCredentialsToken converts a resource owner username and password
// pair into a token.
//
// Per the RFC, this grant type should only be used "when there is a high
// degree of trust between the resource owner and the client (e.g., the client
// is part of the device operating system or a highly privileged application),
// and when other authorization grant types are not available."
// See https://tools.ietf.org/html/rfc6749#section-4.3 for more info.
//
// The HTTP client to use is derived from the context.
// If nil, http.DefaultClient is used.
func (c *Config) PasswordCredentialsToken(ctx context.Context, username, password string) (*Token, error) {
return retrieveToken(ctx, c, url.Values{
"grant_type": {"password"},
"username": {username},
"password": {password},
"scope": condVal(strings.Join(c.Scopes, " ")),
})
}
// Exchange converts an authorization code into a token.
//
// It is used after a resource provider redirects the user back
// to the Redirect URI (the URL obtained from AuthCodeURL).
//
// The HTTP client to use is derived from the context.
// If a client is not provided via the context, http.DefaultClient is used.
//
// The code will be in the *http.Request.FormValue("code"). Before
// calling Exchange, be sure to validate FormValue("state").
func (c *Config) Exchange(ctx context.Context, code string) (*Token, error) {
return retrieveToken(ctx, c, url.Values{
"grant_type": {"authorization_code"},
"code": {code},
"redirect_uri": condVal(c.RedirectURL),
"scope": condVal(strings.Join(c.Scopes, " ")),
})
}
// contextClientFunc is a func which tries to return an *http.Client
// given a Context value. If it returns an error, the search stops
// with that error. If it returns (nil, nil), the search continues
// down the list of registered funcs.
type contextClientFunc func(context.Context) (*http.Client, error)
var contextClientFuncs []contextClientFunc
func registerContextClientFunc(fn contextClientFunc) {
contextClientFuncs = append(contextClientFuncs, fn)
}
func contextClient(ctx context.Context) (*http.Client, error) {
for _, fn := range contextClientFuncs {
c, err := fn(ctx)
if err != nil {
return nil, err
}
if c != nil {
return c, nil
}
}
if hc, ok := ctx.Value(HTTPClient).(*http.Client); ok {
return hc, nil
}
return http.DefaultClient, nil
}
func contextTransport(ctx context.Context) http.RoundTripper {
hc, err := contextClient(ctx)
if err != nil {
// This is a rare error case (somebody using nil on App Engine),
// so I'd rather not everybody do an error check on this Client
// method. They can get the error that they're doing it wrong
// later, at client.Get/PostForm time.
return errorTransport{err}
}
return hc.Transport
}
// Client returns an HTTP client using the provided token.
// The token will auto-refresh as necessary. The underlying
// HTTP transport will be obtained using the provided context.
// The returned client and its Transport should not be modified.
func (c *Config) Client(ctx context.Context, t *Token) *http.Client {
return NewClient(ctx, c.TokenSource(ctx, t))
}
// TokenSource returns a TokenSource that returns t until t expires,
// automatically refreshing it as necessary using the provided context.
//
// Most users will use Config.Client instead.
func (c *Config) TokenSource(ctx context.Context, t *Token) TokenSource {
tkr := &tokenRefresher{
ctx: ctx,
conf: c,
}
if t != nil {
tkr.refreshToken = t.RefreshToken
}
return &reuseTokenSource{
t: t,
new: tkr,
}
}
// tokenRefresher is a TokenSource that makes "grant_type"=="refresh_token"
// HTTP requests to renew a token using a RefreshToken.
type tokenRefresher struct {
ctx context.Context // used to get HTTP requests
conf *Config
refreshToken string
}
// WARNING: Token is not safe for concurrent access, as it
// updates the tokenRefresher's refreshToken field.
// Within this package, it is used by reuseTokenSource which
// synchronizes calls to this method with its own mutex.
func (tf *tokenRefresher) Token() (*Token, error) {
if tf.refreshToken == "" {
return nil, errors.New("oauth2: token expired and refresh token is not set")
}
tk, err := retrieveToken(tf.ctx, tf.conf, url.Values{
"grant_type": {"refresh_token"},
"refresh_token": {tf.refreshToken},
})
if err != nil {
return nil, err
}
if tf.refreshToken != tk.RefreshToken {
tf.refreshToken = tk.RefreshToken
}
return tk, err
}
// reuseTokenSource is a TokenSource that holds a single token in memory
// and validates its expiry before each call to retrieve it with
// Token. If it's expired, it will be auto-refreshed using the
// new TokenSource.
type reuseTokenSource struct {
new TokenSource // called when t is expired.
mu sync.Mutex // guards t
t *Token
}
// Token returns the current token if it's still valid, else will
// refresh the current token (using r.Context for HTTP client
// information) and return the new one.
func (s *reuseTokenSource) Token() (*Token, error) {
s.mu.Lock()
defer s.mu.Unlock()
if s.t.Valid() {
return s.t, nil
}
t, err := s.new.Token()
if err != nil {
return nil, err
}
s.t = t
return t, nil
}
func retrieveToken(ctx context.Context, c *Config, v url.Values) (*Token, error) {
hc, err := contextClient(ctx)
if err != nil {
return nil, err
}
v.Set("client_id", c.ClientID)
bustedAuth := !providerAuthHeaderWorks(c.Endpoint.TokenURL)
if bustedAuth && c.ClientSecret != "" {
v.Set("client_secret", c.ClientSecret)
}
req, err := http.NewRequest("POST", c.Endpoint.TokenURL, strings.NewReader(v.Encode()))
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
if !bustedAuth {
req.SetBasicAuth(c.ClientID, c.ClientSecret)
}
r, err := hc.Do(req)
if err != nil {
return nil, err
}
defer r.Body.Close()
body, err := ioutil.ReadAll(io.LimitReader(r.Body, 1<<20))
if err != nil {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v", err)
}
if code := r.StatusCode; code < 200 || code > 299 {
return nil, fmt.Errorf("oauth2: cannot fetch token: %v\nResponse: %s", r.Status, body)
}
var token *Token
content, _, _ := mime.ParseMediaType(r.Header.Get("Content-Type"))
switch content {
case "application/x-www-form-urlencoded", "text/plain":
vals, err := url.ParseQuery(string(body))
if err != nil {
return nil, err
}
token = &Token{
AccessToken: vals.Get("access_token"),
TokenType: vals.Get("token_type"),
RefreshToken: vals.Get("refresh_token"),
raw: vals,
}
e := vals.Get("expires_in")
if e == "" {
// TODO(jbd): Facebook's OAuth2 implementation is broken and
// returns expires_in field in expires. Remove the fallback to expires,
// when Facebook fixes their implementation.
e = vals.Get("expires")
}
expires, _ := strconv.Atoi(e)
if expires != 0 {
token.Expiry = time.Now().Add(time.Duration(expires) * time.Second)
}
default:
var tj tokenJSON
if err = json.Unmarshal(body, &tj); err != nil {
return nil, err
}
token = &Token{
AccessToken: tj.AccessToken,
TokenType: tj.TokenType,
RefreshToken: tj.RefreshToken,
Expiry: tj.expiry(),
raw: make(map[string]interface{}),
}
json.Unmarshal(body, &token.raw) // no error checks for optional fields
}
// Don't overwrite `RefreshToken` with an empty value
// if this was a token refreshing request.
if token.RefreshToken == "" {
token.RefreshToken = v.Get("refresh_token")
}
return token, nil
}
// tokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type tokenJSON struct {
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn expirationTime `json:"expires_in"` // at least PayPal returns string, while most return number
Expires expirationTime `json:"expires"` // broken Facebook spelling of expires_in
}
func (e *tokenJSON) expiry() (t time.Time) {
if v := e.ExpiresIn; v != 0 {
return time.Now().Add(time.Duration(v) * time.Second)
}
if v := e.Expires; v != 0 {
return time.Now().Add(time.Duration(v) * time.Second)
}
return
}
type expirationTime int32
func (e *expirationTime) UnmarshalJSON(b []byte) error {
var n json.Number
err := json.Unmarshal(b, &n)
if err != nil {
return err
}
i, err := n.Int64()
if err != nil {
return err
}
*e = expirationTime(i)
return nil
}
func condVal(v string) []string {
if v == "" {
return nil
}
return []string{v}
}
var brokenAuthHeaderProviders = []string{
"https://accounts.google.com/",
"https://www.googleapis.com/",
"https://github.com/",
"https://api.instagram.com/",
"https://www.douban.com/",
"https://api.dropbox.com/",
"https://api.soundcloud.com/",
"https://www.linkedin.com/",
"https://api.twitch.tv/",
"https://oauth.vk.com/",
"https://api.odnoklassniki.ru/",
"https://connect.stripe.com/",
"https://api.pushbullet.com/",
"https://oauth.sandbox.trainingpeaks.com/",
"https://oauth.trainingpeaks.com/",
"https://www.strava.com/oauth/",
}
// providerAuthHeaderWorks reports whether the OAuth2 server identified by the tokenURL
// implements the OAuth2 spec correctly.
// See https://code.google.com/p/goauth2/issues/detail?id=31 for background.
// In summary:
// - Reddit only accepts client secret in the Authorization header
// - Dropbox accepts either it in URL param or Auth header, but not both.
// - Google only accepts URL param (not spec compliant?), not Auth header
// - Stripe only accepts client secret in Auth header with Bearer method, not Basic
func providerAuthHeaderWorks(tokenURL string) bool {
for _, s := range brokenAuthHeaderProviders {
if strings.HasPrefix(tokenURL, s) {
// Some sites fail to implement the OAuth2 spec fully.
return false
}
}
// Assume the provider implements the spec properly
// otherwise. We can add more exceptions as they're
// discovered. We will _not_ be adding configurable hooks
// to this package to let users select server bugs.
return true
}
// HTTPClient is the context key to use with golang.org/x/net/context's
// WithValue function to associate an *http.Client value with a context.
var HTTPClient contextKey
// contextKey is just an empty struct. It exists so HTTPClient can be
// an immutable public variable with a unique type. It's immutable
// because nobody else can create a contextKey, being unexported.
type contextKey struct{}
// NewClient creates an *http.Client from a Context and TokenSource.
// The returned client is not valid beyond the lifetime of the context.
//
// As a special case, if src is nil, a non-OAuth2 client is returned
// using the provided context. This exists to support related OAuth2
// packages.
func NewClient(ctx context.Context, src TokenSource) *http.Client {
if src == nil {
c, err := contextClient(ctx)
if err != nil {
return &http.Client{Transport: errorTransport{err}}
}
return c
}
return &http.Client{
Transport: &Transport{
Base: contextTransport(ctx),
Source: ReuseTokenSource(nil, src),
},
}
}
// ReuseTokenSource returns a TokenSource which repeatedly returns the
// same token as long as it's valid, starting with t.
// When its cached token is invalid, a new token is obtained from src.
//
// ReuseTokenSource is typically used to reuse tokens from a cache
// (such as a file on disk) between runs of a program, rather than
// obtaining new tokens unnecessarily.
//
// The initial token t may be nil, in which case the TokenSource is
// wrapped in a caching version if it isn't one already. This also
// means it's always safe to wrap ReuseTokenSource around any other
// TokenSource without adverse effects.
func ReuseTokenSource(t *Token, src TokenSource) TokenSource {
// Don't wrap a reuseTokenSource in itself. That would work,
// but cause an unnecessary number of mutex operations.
// Just build the equivalent one.
if rt, ok := src.(*reuseTokenSource); ok {
if t == nil {
// Just use it directly.
return rt
}
src = rt.new
}
return &reuseTokenSource{
t: t,
new: src,
}
}
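
Tying this file together, the usual 3-legged flow is AuthCodeURL, then Exchange on the redirect callback, then Client. A minimal sketch against a placeholder provider; the endpoint URLs, client credentials, and authorization code are all illustrative:

package main

import (
    "fmt"
    "log"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)

func main() {
    conf := &oauth2.Config{
        ClientID:     "CLIENT_ID",     // placeholder
        ClientSecret: "CLIENT_SECRET", // placeholder
        RedirectURL:  "https://example.com/callback",
        Scopes:       []string{"scope1", "scope2"},
        Endpoint: oauth2.Endpoint{
            AuthURL:  "https://provider.example.com/o/oauth2/auth",
            TokenURL: "https://provider.example.com/o/oauth2/token",
        },
    }

    // 1. Redirect the user to the consent page; "state" guards against CSRF.
    fmt.Println("Visit:", conf.AuthCodeURL("state", oauth2.AccessTypeOffline))

    // 2. On the callback, after checking state, trade the code for a token.
    tok, err := conf.Exchange(oauth2.NoContext, "code-from-callback")
    if err != nil {
        log.Fatal(err) // expected against the placeholder endpoint above
    }

    // 3. The returned client refreshes the token automatically as it expires.
    client := conf.Client(oauth2.NoContext, tok)
    client.Get("https://provider.example.com/api/resource")
}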

View File

@ -0,0 +1,435 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package oauth2
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"reflect"
"strconv"
"testing"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
)
type mockTransport struct {
rt func(req *http.Request) (resp *http.Response, err error)
}
func (t *mockTransport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
return t.rt(req)
}
type mockCache struct {
token *Token
readErr error
}
func (c *mockCache) ReadToken() (*Token, error) {
return c.token, c.readErr
}
func (c *mockCache) WriteToken(*Token) {
// do nothing
}
func newConf(url string) *Config {
return &Config{
ClientID: "CLIENT_ID",
ClientSecret: "CLIENT_SECRET",
RedirectURL: "REDIRECT_URL",
Scopes: []string{"scope1", "scope2"},
Endpoint: Endpoint{
AuthURL: url + "/auth",
TokenURL: url + "/token",
},
}
}
func TestAuthCodeURL(t *testing.T) {
conf := newConf("server")
url := conf.AuthCodeURL("foo", AccessTypeOffline, ApprovalForce)
if url != "server/auth?access_type=offline&approval_prompt=force&client_id=CLIENT_ID&redirect_uri=REDIRECT_URL&response_type=code&scope=scope1+scope2&state=foo" {
t.Errorf("Auth code URL doesn't match the expected, found: %v", url)
}
}
func TestAuthCodeURL_CustomParam(t *testing.T) {
conf := newConf("server")
param := SetParam("foo", "bar")
url := conf.AuthCodeURL("baz", param)
if url != "server/auth?client_id=CLIENT_ID&foo=bar&redirect_uri=REDIRECT_URL&response_type=code&scope=scope1+scope2&state=baz" {
t.Errorf("Auth code URL doesn't match the expected, found: %v", url)
}
}
func TestAuthCodeURL_Optional(t *testing.T) {
conf := &Config{
ClientID: "CLIENT_ID",
Endpoint: Endpoint{
AuthURL: "/auth-url",
TokenURL: "/token-url",
},
}
url := conf.AuthCodeURL("")
if url != "/auth-url?client_id=CLIENT_ID&response_type=code" {
t.Fatalf("Auth code URL doesn't match the expected, found: %v", url)
}
}
func TestExchangeRequest(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.String() != "/token" {
t.Errorf("Unexpected exchange request URL, %v is found.", r.URL)
}
headerAuth := r.Header.Get("Authorization")
if headerAuth != "Basic Q0xJRU5UX0lEOkNMSUVOVF9TRUNSRVQ=" {
t.Errorf("Unexpected authorization header, %v is found.", headerAuth)
}
headerContentType := r.Header.Get("Content-Type")
if headerContentType != "application/x-www-form-urlencoded" {
t.Errorf("Unexpected Content-Type header, %v is found.", headerContentType)
}
body, err := ioutil.ReadAll(r.Body)
if err != nil {
t.Errorf("Failed reading request body: %s.", err)
}
if string(body) != "client_id=CLIENT_ID&code=exchange-code&grant_type=authorization_code&redirect_uri=REDIRECT_URL&scope=scope1+scope2" {
t.Errorf("Unexpected exchange payload, %v is found.", string(body))
}
w.Header().Set("Content-Type", "application/x-www-form-urlencoded")
w.Write([]byte("access_token=90d64460d14870c08c81352a05dedd3465940a7c&scope=user&token_type=bearer"))
}))
defer ts.Close()
conf := newConf(ts.URL)
tok, err := conf.Exchange(NoContext, "exchange-code")
if err != nil {
t.Error(err)
}
if !tok.Valid() {
t.Fatalf("Token invalid. Got: %#v", tok)
}
if tok.AccessToken != "90d64460d14870c08c81352a05dedd3465940a7c" {
t.Errorf("Unexpected access token, %#v.", tok.AccessToken)
}
if tok.TokenType != "bearer" {
t.Errorf("Unexpected token type, %#v.", tok.TokenType)
}
scope := tok.Extra("scope")
if scope != "user" {
t.Errorf("Unexpected value for scope: %v", scope)
}
}
func TestExchangeRequest_JSONResponse(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.String() != "/token" {
t.Errorf("Unexpected exchange request URL, %v is found.", r.URL)
}
headerAuth := r.Header.Get("Authorization")
if headerAuth != "Basic Q0xJRU5UX0lEOkNMSUVOVF9TRUNSRVQ=" {
t.Errorf("Unexpected authorization header, %v is found.", headerAuth)
}
headerContentType := r.Header.Get("Content-Type")
if headerContentType != "application/x-www-form-urlencoded" {
t.Errorf("Unexpected Content-Type header, %v is found.", headerContentType)
}
body, err := ioutil.ReadAll(r.Body)
if err != nil {
t.Errorf("Failed reading request body: %s.", err)
}
if string(body) != "client_id=CLIENT_ID&code=exchange-code&grant_type=authorization_code&redirect_uri=REDIRECT_URL&scope=scope1+scope2" {
t.Errorf("Unexpected exchange payload, %v is found.", string(body))
}
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"access_token": "90d64460d14870c08c81352a05dedd3465940a7c", "scope": "user", "token_type": "bearer", "expires_in": 86400}`))
}))
defer ts.Close()
conf := newConf(ts.URL)
tok, err := conf.Exchange(NoContext, "exchange-code")
if err != nil {
t.Error(err)
}
if !tok.Valid() {
t.Fatalf("Token invalid. Got: %#v", tok)
}
if tok.AccessToken != "90d64460d14870c08c81352a05dedd3465940a7c" {
t.Errorf("Unexpected access token, %#v.", tok.AccessToken)
}
if tok.TokenType != "bearer" {
t.Errorf("Unexpected token type, %#v.", tok.TokenType)
}
scope := tok.Extra("scope")
if scope != "user" {
t.Errorf("Unexpected value for scope: %v", scope)
}
}
const day = 24 * time.Hour
func TestExchangeRequest_JSONResponse_Expiry(t *testing.T) {
seconds := int32(day.Seconds())
jsonNumberType := reflect.TypeOf(json.Number("0"))
for _, c := range []struct {
expires string
expect error
}{
{fmt.Sprintf(`"expires_in": %d`, seconds), nil},
{fmt.Sprintf(`"expires_in": "%d"`, seconds), nil}, // PayPal case
{fmt.Sprintf(`"expires": %d`, seconds), nil}, // Facebook case
{`"expires": false`, &json.UnmarshalTypeError{"bool", jsonNumberType}}, // wrong type
{`"expires": {}`, &json.UnmarshalTypeError{"object", jsonNumberType}}, // wrong type
{`"expires": "zzz"`, &strconv.NumError{"ParseInt", "zzz", strconv.ErrSyntax}}, // wrong value
} {
testExchangeRequest_JSONResponse_expiry(t, c.expires, c.expect)
}
}
func testExchangeRequest_JSONResponse_expiry(t *testing.T, exp string, expect error) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(fmt.Sprintf(`{"access_token": "90d", "scope": "user", "token_type": "bearer", %s}`, exp)))
}))
defer ts.Close()
conf := newConf(ts.URL)
t1 := time.Now().Add(day)
tok, err := conf.Exchange(NoContext, "exchange-code")
t2 := time.Now().Add(day)
if err == nil && expect != nil {
t.Errorf("Incorrect state, conf.Exchange() should return an error: %v", expect)
} else if err != nil {
if reflect.DeepEqual(err, expect) {
t.Logf("Expected error: %v", err)
return
} else {
t.Error(err)
}
}
if !tok.Valid() {
t.Fatalf("Token invalid. Got: %#v", tok)
}
expiry := tok.Expiry
if expiry.Before(t1) || expiry.After(t2) {
t.Errorf("Unexpected value for Expiry: %v (shold be between %v and %v)", expiry, t1, t2)
}
}
func TestExchangeRequest_BadResponse(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"scope": "user", "token_type": "bearer"}`))
}))
defer ts.Close()
conf := newConf(ts.URL)
tok, err := conf.Exchange(NoContext, "code")
if err != nil {
t.Fatal(err)
}
if tok.AccessToken != "" {
t.Errorf("Unexpected access token, %#v.", tok.AccessToken)
}
}
func TestExchangeRequest_BadResponseType(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"access_token":123, "scope": "user", "token_type": "bearer"}`))
}))
defer ts.Close()
conf := newConf(ts.URL)
_, err := conf.Exchange(NoContext, "exchange-code")
if err == nil {
t.Error("expected error from invalid access_token type")
}
}
func TestExchangeRequest_NonBasicAuth(t *testing.T) {
tr := &mockTransport{
rt: func(r *http.Request) (w *http.Response, err error) {
headerAuth := r.Header.Get("Authorization")
if headerAuth != "" {
t.Errorf("Unexpected authorization header, %v is found.", headerAuth)
}
return nil, errors.New("no response")
},
}
c := &http.Client{Transport: tr}
conf := &Config{
ClientID: "CLIENT_ID",
Endpoint: Endpoint{
AuthURL: "https://accounts.google.com/auth",
TokenURL: "https://accounts.google.com/token",
},
}
ctx := context.WithValue(context.Background(), HTTPClient, c)
conf.Exchange(ctx, "code")
}
func TestPasswordCredentialsTokenRequest(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
expected := "/token"
if r.URL.String() != expected {
t.Errorf("URL = %q; want %q", r.URL, expected)
}
headerAuth := r.Header.Get("Authorization")
expected = "Basic Q0xJRU5UX0lEOkNMSUVOVF9TRUNSRVQ="
if headerAuth != expected {
t.Errorf("Authorization header = %q; want %q", headerAuth, expected)
}
headerContentType := r.Header.Get("Content-Type")
expected = "application/x-www-form-urlencoded"
if headerContentType != expected {
t.Errorf("Content-Type header = %q; want %q", headerContentType, expected)
}
body, err := ioutil.ReadAll(r.Body)
if err != nil {
t.Errorf("Failed reading request body: %s.", err)
}
expected = "client_id=CLIENT_ID&grant_type=password&password=password1&scope=scope1+scope2&username=user1"
if string(body) != expected {
t.Errorf("res.Body = %q; want %q", string(body), expected)
}
w.Header().Set("Content-Type", "application/x-www-form-urlencoded")
w.Write([]byte("access_token=90d64460d14870c08c81352a05dedd3465940a7c&scope=user&token_type=bearer"))
}))
defer ts.Close()
conf := newConf(ts.URL)
tok, err := conf.PasswordCredentialsToken(NoContext, "user1", "password1")
if err != nil {
t.Error(err)
}
if !tok.Valid() {
t.Fatalf("Token invalid. Got: %#v", tok)
}
expected := "90d64460d14870c08c81352a05dedd3465940a7c"
if tok.AccessToken != expected {
t.Errorf("AccessToken = %q; want %q", tok.AccessToken, expected)
}
expected = "bearer"
if tok.TokenType != expected {
t.Errorf("TokenType = %q; want %q", tok.TokenType, expected)
}
}
func TestTokenRefreshRequest(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.String() == "/somethingelse" {
return
}
if r.URL.String() != "/token" {
t.Errorf("Unexpected token refresh request URL, %v is found.", r.URL)
}
headerContentType := r.Header.Get("Content-Type")
if headerContentType != "application/x-www-form-urlencoded" {
t.Errorf("Unexpected Content-Type header, %v is found.", headerContentType)
}
body, _ := ioutil.ReadAll(r.Body)
if string(body) != "client_id=CLIENT_ID&grant_type=refresh_token&refresh_token=REFRESH_TOKEN" {
t.Errorf("Unexpected refresh token payload, %v is found.", string(body))
}
}))
defer ts.Close()
conf := newConf(ts.URL)
c := conf.Client(NoContext, &Token{RefreshToken: "REFRESH_TOKEN"})
c.Get(ts.URL + "/somethingelse")
}
func TestFetchWithNoRefreshToken(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.String() == "/somethingelse" {
return
}
if r.URL.String() != "/token" {
t.Errorf("Unexpected token refresh request URL, %v is found.", r.URL)
}
headerContentType := r.Header.Get("Content-Type")
if headerContentType != "application/x-www-form-urlencoded" {
t.Errorf("Unexpected Content-Type header, %v is found.", headerContentType)
}
body, _ := ioutil.ReadAll(r.Body)
if string(body) != "client_id=CLIENT_ID&grant_type=refresh_token&refresh_token=REFRESH_TOKEN" {
t.Errorf("Unexpected refresh token payload, %v is found.", string(body))
}
}))
defer ts.Close()
conf := newConf(ts.URL)
c := conf.Client(NoContext, nil)
_, err := c.Get(ts.URL + "/somethingelse")
if err == nil {
t.Errorf("Fetch should return an error if no refresh token is set")
}
}
func TestRefreshToken_RefreshTokenReplacement(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.Write([]byte(`{"access_token":"ACCESS TOKEN", "scope": "user", "token_type": "bearer", "refresh_token": "NEW REFRESH TOKEN"}`))
return
}))
defer ts.Close()
conf := newConf(ts.URL)
tkr := tokenRefresher{
conf: conf,
ctx: NoContext,
refreshToken: "OLD REFRESH TOKEN",
}
tk, err := tkr.Token()
if err != nil {
t.Errorf("Unexpected refreshToken error returned: %v", err)
return
}
if tk.RefreshToken != tkr.refreshToken {
t.Errorf("tokenRefresher.refresh_token = %s; want %s", tkr.refreshToken, tk.RefreshToken)
}
}
func TestConfigClientWithToken(t *testing.T) {
tok := &Token{
AccessToken: "abc123",
}
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if got, want := r.Header.Get("Authorization"), fmt.Sprintf("Bearer %s", tok.AccessToken); got != want {
t.Errorf("Authorization header = %q; want %q", got, want)
}
return
}))
defer ts.Close()
conf := newConf(ts.URL)
c := conf.Client(NoContext, tok)
req, err := http.NewRequest("GET", ts.URL, nil)
if err != nil {
t.Error(err)
}
_, err = c.Do(req)
if err != nil {
t.Error(err)
}
}
func Test_providerAuthHeaderWorks(t *testing.T) {
for _, p := range brokenAuthHeaderProviders {
if providerAuthHeaderWorks(p) {
t.Errorf("URL: %s not found in list", p)
}
p := fmt.Sprintf("%ssomesuffix", p)
if providerAuthHeaderWorks(p) {
t.Errorf("URL: %s not found in list", p)
}
}
p := "https://api.not-in-the-list-example.com/"
if !providerAuthHeaderWorks(p) {
t.Errorf("URL: %s found in list", p)
}
}

View File

@ -0,0 +1,16 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package odnoklassniki provides constants for using OAuth2 to access Odnoklassniki.
package odnoklassniki
import (
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
// Endpoint is Odnoklassniki's OAuth 2.0 endpoint.
var Endpoint = oauth2.Endpoint{
AuthURL: "https://www.odnoklassniki.ru/oauth/authorize",
TokenURL: "https://api.odnoklassniki.ru/oauth/token.do",
}

View File

@ -0,0 +1,22 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package paypal provides constants for using OAuth2 to access PayPal.
package paypal
import (
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
// Endpoint is PayPal's OAuth 2.0 endpoint in live (production) environment.
var Endpoint = oauth2.Endpoint{
AuthURL: "https://www.paypal.com/webapps/auth/protocol/openidconnect/v1/authorize",
TokenURL: "https://api.paypal.com/v1/identity/openidconnect/tokenservice",
}
// SandboxEndpoint is PayPal's OAuth 2.0 endpoint in sandbox (testing) environment.
var SandboxEndpoint = oauth2.Endpoint{
AuthURL: "https://www.sandbox.paypal.com/webapps/auth/protocol/openidconnect/v1/authorize",
TokenURL: "https://api.sandbox.paypal.com/v1/identity/openidconnect/tokenservice",
}

104
Godeps/_workspace/src/golang.org/x/oauth2/token.go generated vendored Normal file
View File

@ -0,0 +1,104 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package oauth2
import (
"net/http"
"net/url"
"time"
)
// expiryDelta determines how much earlier a token should be considered
// expired than its actual expiration time. It is used to avoid late
// expirations due to client-server time mismatches.
const expiryDelta = 10 * time.Second
// Token represents the credentials used to authorize
// the requests to access protected resources on the OAuth 2.0
// provider's backend.
//
// Most users of this package should not access fields of Token
// directly. They're exported mostly for use by related packages
// implementing derivative OAuth2 flows.
type Token struct {
// AccessToken is the token that authorizes and authenticates
// the requests.
AccessToken string `json:"access_token"`
// TokenType is the type of token.
// The Type method returns either this or "Bearer", the default.
TokenType string `json:"token_type,omitempty"`
// RefreshToken is a token that's used by the application
// (as opposed to the user) to refresh the access token
// if it expires.
RefreshToken string `json:"refresh_token,omitempty"`
// Expiry is the optional expiration time of the access token.
//
// If zero, TokenSource implementations will reuse the same
// token forever and RefreshToken or equivalent
// mechanisms for that TokenSource will not be used.
Expiry time.Time `json:"expiry,omitempty"`
// raw optionally contains extra metadata from the server
// when updating a token.
raw interface{}
}
// Type returns t.TokenType if non-empty, else "Bearer".
func (t *Token) Type() string {
if t.TokenType != "" {
return t.TokenType
}
return "Bearer"
}
// SetAuthHeader sets the Authorization header to r using the access
// token in t.
//
// This method is unnecessary when using Transport or an HTTP Client
// returned by this package.
func (t *Token) SetAuthHeader(r *http.Request) {
r.Header.Set("Authorization", t.Type()+" "+t.AccessToken)
}
// WithExtra returns a new Token that's a clone of t, but using the
// provided raw extra map. This is only intended for use by packages
// implementing derivative OAuth2 flows.
func (t *Token) WithExtra(extra interface{}) *Token {
t2 := new(Token)
*t2 = *t
t2.raw = extra
return t2
}
// Extra returns an extra field.
// Extra fields are key-value pairs returned by the server as a
// part of the token retrieval response.
func (t *Token) Extra(key string) interface{} {
if vals, ok := t.raw.(url.Values); ok {
// TODO(jbd): Cast numeric values to int64 or float64.
return vals.Get(key)
}
if raw, ok := t.raw.(map[string]interface{}); ok {
return raw[key]
}
return nil
}
// expired reports whether the token is expired.
// t must be non-nil.
func (t *Token) expired() bool {
if t.Expiry.IsZero() {
return false
}
return t.Expiry.Add(-expiryDelta).Before(time.Now())
}
// Valid reports whether t is non-nil, has an AccessToken, and is not expired.
func (t *Token) Valid() bool {
return t != nil && t.AccessToken != "" && !t.expired()
}
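
A minimal sketch of using a Token directly with a plain *http.Request and reading an extra field; the token values and URL are placeholders and no request is actually sent:

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)

func main() {
    tok := &oauth2.Token{
        AccessToken: "abc123", // placeholder
        TokenType:   "bearer",
    }
    req, err := http.NewRequest("GET", "https://api.example.com/resource", nil)
    if err != nil {
        log.Fatal(err)
    }

    // SetAuthHeader is what Transport does for every outgoing request.
    tok.SetAuthHeader(req)
    fmt.Println(req.Header.Get("Authorization")) // prints: bearer abc123

    // Extra fields only exist when the token came from a provider response;
    // a hand-built token like this one returns nil.
    fmt.Println(tok.Extra("scope"))
}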

View File

@ -0,0 +1,50 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package oauth2
import (
"testing"
"time"
)
func TestTokenExtra(t *testing.T) {
type testCase struct {
key string
val interface{}
want interface{}
}
const key = "extra-key"
cases := []testCase{
{key: key, val: "abc", want: "abc"},
{key: key, val: 123, want: 123},
{key: key, val: "", want: ""},
{key: "other-key", val: "def", want: nil},
}
for _, tc := range cases {
extra := make(map[string]interface{})
extra[tc.key] = tc.val
tok := &Token{raw: extra}
if got, want := tok.Extra(key), tc.want; got != want {
t.Errorf("Extra(%q) = %q; want %q", key, got, want)
}
}
}
func TestTokenExpiry(t *testing.T) {
now := time.Now()
cases := []struct {
name string
tok *Token
want bool
}{
{name: "12 seconds", tok: &Token{Expiry: now.Add(12 * time.Second)}, want: false},
{name: "10 seconds", tok: &Token{Expiry: now.Add(expiryDelta)}, want: true},
}
for _, tc := range cases {
if got, want := tc.tok.expired(), tc.want; got != want {
t.Errorf("expired (%q) = %v; want %v", tc.name, got, want)
}
}
}

138
Godeps/_workspace/src/golang.org/x/oauth2/transport.go generated vendored Normal file
View File

@ -0,0 +1,138 @@
// Copyright 2014 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package oauth2
import (
"errors"
"io"
"net/http"
"sync"
)
// Transport is an http.RoundTripper that makes OAuth 2.0 HTTP requests,
// wrapping a base RoundTripper and adding an Authorization header
// with a token from the supplied Source.
//
// Transport is a low-level mechanism. Most code will use the
// higher-level Config.Client method instead.
type Transport struct {
// Source supplies the token to add to outgoing requests'
// Authorization headers.
Source TokenSource
// Base is the base RoundTripper used to make HTTP requests.
// If nil, http.DefaultTransport is used.
Base http.RoundTripper
mu sync.Mutex // guards modReq
modReq map[*http.Request]*http.Request // original -> modified
}
// RoundTrip authorizes and authenticates the request with an
// access token. If no token exists or the token is expired,
// it tries to refresh/fetch a new token.
func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
if t.Source == nil {
return nil, errors.New("oauth2: Transport's Source is nil")
}
token, err := t.Source.Token()
if err != nil {
return nil, err
}
req2 := cloneRequest(req) // per RoundTripper contract
token.SetAuthHeader(req2)
t.setModReq(req, req2)
res, err := t.base().RoundTrip(req2)
if err != nil {
t.setModReq(req, nil)
return nil, err
}
res.Body = &onEOFReader{
rc: res.Body,
fn: func() { t.setModReq(req, nil) },
}
return res, nil
}
// CancelRequest cancels an in-flight request by closing its connection.
func (t *Transport) CancelRequest(req *http.Request) {
type canceler interface {
CancelRequest(*http.Request)
}
if cr, ok := t.base().(canceler); ok {
t.mu.Lock()
modReq := t.modReq[req]
delete(t.modReq, req)
t.mu.Unlock()
cr.CancelRequest(modReq)
}
}
func (t *Transport) base() http.RoundTripper {
if t.Base != nil {
return t.Base
}
return http.DefaultTransport
}
func (t *Transport) setModReq(orig, mod *http.Request) {
t.mu.Lock()
defer t.mu.Unlock()
if t.modReq == nil {
t.modReq = make(map[*http.Request]*http.Request)
}
if mod == nil {
delete(t.modReq, orig)
} else {
t.modReq[orig] = mod
}
}
// cloneRequest returns a clone of the provided *http.Request.
// The clone is a shallow copy of the struct and its Header map.
func cloneRequest(r *http.Request) *http.Request {
// shallow copy of the struct
r2 := new(http.Request)
*r2 = *r
// deep copy of the Header
r2.Header = make(http.Header, len(r.Header))
for k, s := range r.Header {
r2.Header[k] = append([]string(nil), s...)
}
return r2
}
type onEOFReader struct {
rc io.ReadCloser
fn func()
}
func (r *onEOFReader) Read(p []byte) (n int, err error) {
n, err = r.rc.Read(p)
if err == io.EOF {
r.runFunc()
}
return
}
func (r *onEOFReader) Close() error {
err := r.rc.Close()
r.runFunc()
return err
}
func (r *onEOFReader) runFunc() {
if fn := r.fn; fn != nil {
fn()
r.fn = nil
}
}
type errorTransport struct{ err error }
func (t errorTransport) RoundTrip(*http.Request) (*http.Response, error) {
return nil, t.err
}
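
A minimal sketch of wiring Transport to a fixed TokenSource; the staticSource type is defined only for this example (much like the tokenSource helper in the test file below), and the token and URL are placeholders:

package main

import (
    "net/http"

    "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)

// staticSource always returns the same token; it exists only for this sketch.
type staticSource struct{ tok *oauth2.Token }

func (s staticSource) Token() (*oauth2.Token, error) { return s.tok, nil }

func main() {
    src := staticSource{tok: &oauth2.Token{AccessToken: "abc123"}} // placeholder
    client := &http.Client{
        Transport: &oauth2.Transport{
            Source: src,
            // Base is nil, so http.DefaultTransport is used underneath.
        },
    }
    // Every request now carries "Authorization: Bearer abc123".
    resp, err := client.Get("https://api.example.com/resource") // placeholder URL
    if err == nil {
        resp.Body.Close()
    }
}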

View File

@ -0,0 +1,53 @@
package oauth2
import (
"net/http"
"net/http/httptest"
"testing"
"time"
)
type tokenSource struct{ token *Token }
func (t *tokenSource) Token() (*Token, error) {
return t.token, nil
}
func TestTransportTokenSource(t *testing.T) {
ts := &tokenSource{
token: &Token{
AccessToken: "abc",
},
}
tr := &Transport{
Source: ts,
}
server := newMockServer(func(w http.ResponseWriter, r *http.Request) {
if r.Header.Get("Authorization") != "Bearer abc" {
t.Errorf("Transport doesn't set the Authorization header from the fetched token")
}
})
defer server.Close()
client := http.Client{Transport: tr}
client.Get(server.URL)
}
func TestTokenValidNoAccessToken(t *testing.T) {
token := &Token{}
if token.Valid() {
t.Errorf("Token should not be valid with no access token")
}
}
func TestExpiredWithExpiry(t *testing.T) {
token := &Token{
Expiry: time.Now().Add(-5 * time.Hour),
}
if token.Valid() {
t.Errorf("Token should not be valid if it expired in the past")
}
}
func newMockServer(handler func(w http.ResponseWriter, r *http.Request)) *httptest.Server {
return httptest.NewServer(http.HandlerFunc(handler))
}

16
Godeps/_workspace/src/golang.org/x/oauth2/vk/vk.go generated vendored Normal file

@ -0,0 +1,16 @@
// Copyright 2015 The oauth2 Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package vk provides constants for using OAuth2 to access VK.com.
package vk
import (
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
)
// Endpoint is VK's OAuth 2.0 endpoint.
var Endpoint = oauth2.Endpoint{
AuthURL: "https://oauth.vk.com/authorize",
TokenURL: "https://oauth.vk.com/access_token",
}

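As an illustration of how the Endpoint above is typically consumed (not part of the diff): a sketch assuming the vendored oauth2.Config type and its AuthCodeURL helper; the client ID, secret, redirect URL, scope, and state value are all placeholders.

```go
package main

import (
	"fmt"

	"golang.org/x/oauth2"
	"golang.org/x/oauth2/vk"
)

func main() {
	// All credential values here are placeholders.
	conf := &oauth2.Config{
		ClientID:     "CLIENT_ID",
		ClientSecret: "CLIENT_SECRET",
		RedirectURL:  "https://example.com/oauth2/callback",
		Scopes:       []string{"email"},
		Endpoint:     vk.Endpoint, // AuthURL/TokenURL defined above
	}
	// Send the user to this URL to start the authorization-code flow.
	fmt.Println(conf.AuthCodeURL("state-token"))
}
```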

@ -0,0 +1,37 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build go1.3
package metadata
import (
"net"
"time"
)
// This is a workaround for https://github.com/golang/oauth2/issues/70, where
// net.Dialer.KeepAlive is unavailable on Go 1.2 (which App Engine as of
// Jan 2015 still runs).
//
// TODO(bradfitz,jbd,adg): remove this once App Engine supports Go
// 1.3+.
func init() {
go13Dialer = func() *net.Dialer {
return &net.Dialer{
Timeout: 750 * time.Millisecond,
KeepAlive: 30 * time.Second,
}
}
}


@ -0,0 +1,267 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package metadata provides access to Google Compute Engine (GCE)
// metadata and API service accounts.
//
// This package is a wrapper around the GCE metadata service,
// as documented at https://developers.google.com/compute/docs/metadata.
package metadata
import (
"encoding/json"
"fmt"
"io/ioutil"
"net"
"net/http"
"strings"
"sync"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/google.golang.org/cloud/internal"
)
type cachedValue struct {
k string
trim bool
mu sync.Mutex
v string
}
var (
projID = &cachedValue{k: "project/project-id", trim: true}
projNum = &cachedValue{k: "project/numeric-project-id", trim: true}
instID = &cachedValue{k: "instance/id", trim: true}
)
var metaClient = &http.Client{
Transport: &internal.Transport{
Base: &http.Transport{
Dial: dialer().Dial,
ResponseHeaderTimeout: 750 * time.Millisecond,
},
},
}
// go13Dialer is nil until we're using Go 1.3+.
// This is a workaround for https://github.com/golang/oauth2/issues/70, where
// net.Dialer.KeepAlive is unavailable on Go 1.2 (which App Engine as of
// Jan 2015 still runs).
//
// TODO(bradfitz,jbd,adg,dsymonds): remove this once App Engine supports Go
// 1.3+ and go-app-builder also supports 1.3+, or when Go 1.2 is no longer an
// option on App Engine.
var go13Dialer func() *net.Dialer
func dialer() *net.Dialer {
if fn := go13Dialer; fn != nil {
return fn()
}
return &net.Dialer{
Timeout: 750 * time.Millisecond,
}
}
// NotDefinedError is returned when requested metadata is not defined.
//
// The underlying string is the suffix after "/computeMetadata/v1/".
//
// This error is not returned if the value is defined to be the empty
// string.
type NotDefinedError string
func (suffix NotDefinedError) Error() string {
return fmt.Sprintf("metadata: GCE metadata %q not defined", string(suffix))
}
// Get returns a value from the metadata service.
// The suffix is appended to "http://metadata/computeMetadata/v1/".
//
// If the requested metadata is not defined, the returned error will
// be of type NotDefinedError.
func Get(suffix string) (string, error) {
// Using 169.254.169.254 instead of "metadata" here because Go
// binaries built with the "netgo" tag and without cgo won't
// know the search suffix for "metadata" is
// ".google.internal", and this IP address is documented as
// being stable anyway.
url := "http://169.254.169.254/computeMetadata/v1/" + suffix
req, _ := http.NewRequest("GET", url, nil)
req.Header.Set("Metadata-Flavor", "Google")
res, err := metaClient.Do(req)
if err != nil {
return "", err
}
defer res.Body.Close()
if res.StatusCode == http.StatusNotFound {
return "", NotDefinedError(suffix)
}
if res.StatusCode != 200 {
return "", fmt.Errorf("status code %d trying to fetch %s", res.StatusCode, url)
}
all, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", err
}
return string(all), nil
}
func getTrimmed(suffix string) (s string, err error) {
s, err = Get(suffix)
s = strings.TrimSpace(s)
return
}
func (c *cachedValue) get() (v string, err error) {
defer c.mu.Unlock()
c.mu.Lock()
if c.v != "" {
return c.v, nil
}
if c.trim {
v, err = getTrimmed(c.k)
} else {
v, err = Get(c.k)
}
if err == nil {
c.v = v
}
return
}
var onGCE struct {
sync.Mutex
set bool
v bool
}
// OnGCE reports whether this process is running on Google Compute Engine.
func OnGCE() bool {
defer onGCE.Unlock()
onGCE.Lock()
if onGCE.set {
return onGCE.v
}
onGCE.set = true
// We use the DNS name of the metadata service here instead of the IP address
// because we expect that to fail faster in the not-on-GCE case.
res, err := metaClient.Get("http://metadata.google.internal")
if err != nil {
return false
}
onGCE.v = res.Header.Get("Metadata-Flavor") == "Google"
return onGCE.v
}
// ProjectID returns the current instance's project ID string.
func ProjectID() (string, error) { return projID.get() }
// NumericProjectID returns the current instance's numeric project ID.
func NumericProjectID() (string, error) { return projNum.get() }
// InternalIP returns the instance's primary internal IP address.
func InternalIP() (string, error) {
return getTrimmed("instance/network-interfaces/0/ip")
}
// ExternalIP returns the instance's primary external (public) IP address.
func ExternalIP() (string, error) {
return getTrimmed("instance/network-interfaces/0/access-configs/0/external-ip")
}
// Hostname returns the instance's hostname. This will probably be of
// the form "INSTANCENAME.c.PROJECT.internal" but that isn't
// guaranteed.
//
// TODO: what is this defined to be? Docs say "The host name of the
// instance."
func Hostname() (string, error) {
return getTrimmed("network-interfaces/0/ip")
}
// InstanceTags returns the list of user-defined instance tags,
// assigned when initially creating a GCE instance.
func InstanceTags() ([]string, error) {
var s []string
j, err := Get("instance/tags")
if err != nil {
return nil, err
}
if err := json.NewDecoder(strings.NewReader(j)).Decode(&s); err != nil {
return nil, err
}
return s, nil
}
// InstanceID returns the current VM's numeric instance ID.
func InstanceID() (string, error) {
return instID.get()
}
// InstanceAttributes returns the list of user-defined attributes,
// assigned when initially creating a GCE VM instance. The value of an
// attribute can be obtained with InstanceAttributeValue.
func InstanceAttributes() ([]string, error) { return lines("instance/attributes/") }
// ProjectAttributes returns the list of user-defined attributes
// applying to the project as a whole, not just this VM. The value of
// an attribute can be obtained with ProjectAttributeValue.
func ProjectAttributes() ([]string, error) { return lines("project/attributes/") }
func lines(suffix string) ([]string, error) {
j, err := Get(suffix)
if err != nil {
return nil, err
}
s := strings.Split(strings.TrimSpace(j), "\n")
for i := range s {
s[i] = strings.TrimSpace(s[i])
}
return s, nil
}
// InstanceAttributeValue returns the value of the provided VM
// instance attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// InstanceAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func InstanceAttributeValue(attr string) (string, error) {
return Get("instance/attributes/" + attr)
}
// ProjectAttributeValue returns the value of the provided
// project attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// ProjectAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func ProjectAttributeValue(attr string) (string, error) {
return Get("project/attributes/" + attr)
}
// Scopes returns the service account scopes for the given account.
// The account may be empty or the string "default" to use the instance's
// main account.
func Scopes(serviceAccount string) ([]string, error) {
if serviceAccount == "" {
serviceAccount = "default"
}
return lines("instance/service-accounts/" + serviceAccount + "/scopes")
}

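As a usage illustration for the metadata package above (not part of the diff): a sketch assuming the upstream import path google.golang.org/cloud/compute/metadata listed in Godeps.json and a hypothetical attribute name.

```go
package main

import (
	"fmt"
	"log"

	"google.golang.org/cloud/compute/metadata"
)

func main() {
	if !metadata.OnGCE() {
		log.Println("not on GCE; the metadata service is unreachable")
		return
	}
	proj, err := metadata.ProjectID()
	if err != nil {
		log.Fatalf("ProjectID: %v", err)
	}
	ip, err := metadata.InternalIP()
	if err != nil {
		log.Fatalf("InternalIP: %v", err)
	}
	fmt.Printf("project %s, internal IP %s\n", proj, ip)

	// Undefined metadata comes back as NotDefinedError, which callers can
	// distinguish from transport failures with a type assertion.
	if _, err := metadata.InstanceAttributeValue("missing-attr"); err != nil { // hypothetical attribute
		if _, ok := err.(metadata.NotDefinedError); ok {
			fmt.Println("attribute not defined")
		} else {
			log.Fatal(err)
		}
	}
}
```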

@ -0,0 +1,128 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package internal provides support for the cloud packages.
//
// Users should not import this package directly.
package internal
import (
"fmt"
"net/http"
"sync"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
)
type contextKey struct{}
func WithContext(parent context.Context, projID string, c *http.Client) context.Context {
if c == nil {
panic("nil *http.Client passed to WithContext")
}
if projID == "" {
panic("empty project ID passed to WithContext")
}
return context.WithValue(parent, contextKey{}, &cloudContext{
ProjectID: projID,
HTTPClient: c,
})
}
const userAgent = "gcloud-golang/0.1"
type cloudContext struct {
ProjectID string
HTTPClient *http.Client
mu sync.Mutex // guards svc
svc map[string]interface{} // e.g. "storage" => *rawStorage.Service
}
// Service returns the result of the fill function if it's never been
// called before for the given name (which is assumed to be an API
// service name, like "datastore"). If it has already been cached, the fill
// func is not run.
// It's safe for concurrent use by multiple goroutines.
func Service(ctx context.Context, name string, fill func(*http.Client) interface{}) interface{} {
return cc(ctx).service(name, fill)
}
func (c *cloudContext) service(name string, fill func(*http.Client) interface{}) interface{} {
c.mu.Lock()
defer c.mu.Unlock()
if c.svc == nil {
c.svc = make(map[string]interface{})
} else if v, ok := c.svc[name]; ok {
return v
}
v := fill(c.HTTPClient)
c.svc[name] = v
return v
}
// Transport is an http.RoundTripper that appends
// Google Cloud client's user-agent to the original
// request's user-agent header.
type Transport struct {
// Base represents the actual http.RoundTripper
// the requests will be delegated to.
Base http.RoundTripper
}
// RoundTrip appends a user-agent to the existing user-agent
// header and delegates the request to the base http.RoundTripper.
func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
req = cloneRequest(req)
ua := req.Header.Get("User-Agent")
if ua == "" {
ua = userAgent
} else {
ua = fmt.Sprintf("%s %s", ua, userAgent)
}
req.Header.Set("User-Agent", ua)
return t.Base.RoundTrip(req)
}
// cloneRequest returns a clone of the provided *http.Request.
// The clone is a shallow copy of the struct and its Header map.
func cloneRequest(r *http.Request) *http.Request {
// shallow copy of the struct
r2 := new(http.Request)
*r2 = *r
// deep copy of the Header
	r2.Header = make(http.Header, len(r.Header))
	for k, s := range r.Header {
		r2.Header[k] = append([]string(nil), s...)
}
return r2
}
func ProjID(ctx context.Context) string {
return cc(ctx).ProjectID
}
func HTTPClient(ctx context.Context) *http.Client {
return cc(ctx).HTTPClient
}
// cc returns the internal *cloudContext (cc) state for a context.Context.
// It panics if the user did it wrong.
func cc(ctx context.Context) *cloudContext {
if c, ok := ctx.Value(contextKey{}).(*cloudContext); ok {
return c
}
panic("invalid context.Context type; it should be created with cloud.NewContext")
}

File diff suppressed because it is too large


@ -0,0 +1,594 @@
// Copyright 2013 Google Inc. All Rights Reserved.
//
// The datastore v1 service proto definitions
syntax = "proto2";
package datastore;
option java_package = "com.google.api.services.datastore";
// An identifier for a particular subset of entities.
//
// Entities are partitioned into various subsets, each used by different
// datasets and different namespaces within a dataset and so forth.
//
// All input partition IDs are normalized before use.
// A partition ID is normalized as follows:
// If the partition ID is unset or is set to an empty partition ID, replace it
// with the context partition ID.
// Otherwise, if the partition ID has no dataset ID, assign it the context
// partition ID's dataset ID.
// Unless otherwise documented, the context partition ID has the dataset ID set
// to the context dataset ID and no other partition dimension set.
//
// A partition ID is empty if all of its fields are unset.
//
// Partition dimension:
// A dimension may be unset.
// A dimension's value must never be "".
// A dimension's value must match [A-Za-z\d\.\-_]{1,100}
// If the value of any dimension matches regex "__.*__",
// the partition is reserved/read-only.
// A reserved/read-only partition ID is forbidden in certain documented contexts.
//
// Dataset ID:
// A dataset id's value must never be "".
// A dataset id's value must match
// ([a-z\d\-]{1,100}~)?([a-z\d][a-z\d\-\.]{0,99}:)?([a-z\d][a-z\d\-]{0,99}
message PartitionId {
// The dataset ID.
optional string dataset_id = 3;
// The namespace.
optional string namespace = 4;
}
// A unique identifier for an entity.
// If a key's partition id or any of its path kinds or names are
// reserved/read-only, the key is reserved/read-only.
// A reserved/read-only key is forbidden in certain documented contexts.
message Key {
// Entities are partitioned into subsets, currently identified by a dataset
// (usually implicitly specified by the project) and namespace ID.
// Queries are scoped to a single partition.
optional PartitionId partition_id = 1;
// A (kind, ID/name) pair used to construct a key path.
//
// At most one of name or ID may be set.
// If either is set, the element is complete.
// If neither is set, the element is incomplete.
message PathElement {
// The kind of the entity.
// A kind matching regex "__.*__" is reserved/read-only.
// A kind must not contain more than 500 characters.
// Cannot be "".
required string kind = 1;
// The ID of the entity.
// Never equal to zero. Values less than zero are discouraged and will not
// be supported in the future.
optional int64 id = 2;
// The name of the entity.
// A name matching regex "__.*__" is reserved/read-only.
// A name must not be more than 500 characters.
// Cannot be "".
optional string name = 3;
}
// The entity path.
// An entity path consists of one or more elements composed of a kind and a
// string or numerical identifier, which identify entities. The first
// element identifies a <em>root entity</em>, the second element identifies
// a <em>child</em> of the root entity, the third element a child of the
// second entity, and so forth. The entities identified by all prefixes of
// the path are called the element's <em>ancestors</em>.
// An entity path is always fully complete: ALL of the entity's ancestors
// are required to be in the path along with the entity identifier itself.
// The only exception is that in some documented cases, the identifier in the
// last path element (for the entity) itself may be omitted. A path can never
// be empty.
repeated PathElement path_element = 2;
}
// A message that can hold any of the supported value types and associated
// metadata.
//
// At most one of the <type>Value fields may be set.
// If none are set the value is "null".
//
message Value {
// A boolean value.
optional bool boolean_value = 1;
// An integer value.
optional int64 integer_value = 2;
// A double value.
optional double double_value = 3;
// A timestamp value.
optional int64 timestamp_microseconds_value = 4;
// A key value.
optional Key key_value = 5;
// A blob key value.
optional string blob_key_value = 16;
// A UTF-8 encoded string value.
optional string string_value = 17;
// A blob value.
optional bytes blob_value = 18;
// An entity value.
// May have no key.
// May have a key with an incomplete key path.
// May have a reserved/read-only key.
optional Entity entity_value = 6;
// A list value.
// Cannot contain another list value.
// Cannot also have a meaning and indexing set.
repeated Value list_value = 7;
// The <code>meaning</code> field is reserved and should not be used.
optional int32 meaning = 14;
// If the value should be indexed.
//
// The <code>indexed</code> property may be set for a
// <code>null</code> value.
// When <code>indexed</code> is <code>true</code>, <code>stringValue</code>
// is limited to 500 characters and the blob value is limited to 500 bytes.
// Exception: If meaning is set to 2, string_value is limited to 2038
// characters regardless of indexed.
// When indexed is true, meaning 15 and 22 are not allowed, and meaning 16
// will be ignored on input (and will never be set on output).
// Input values by default have <code>indexed</code> set to
// <code>true</code>; however, you can explicitly set <code>indexed</code> to
// <code>true</code> if you want. (An output value never has
// <code>indexed</code> explicitly set to <code>true</code>.) If a value is
// itself an entity, it cannot have <code>indexed</code> set to
// <code>true</code>.
// Exception: An entity value with meaning 9, 20 or 21 may be indexed.
optional bool indexed = 15 [default = true];
}
// An entity property.
message Property {
// The name of the property.
// A property name matching regex "__.*__" is reserved.
// A reserved property name is forbidden in certain documented contexts.
// The name must not contain more than 500 characters.
// Cannot be "".
required string name = 1;
// The value(s) of the property.
// Each value can have only one value property populated. For example,
// you cannot have a values list of <code>{ value: { integerValue: 22,
// stringValue: "a" } }</code>, but you can have <code>{ value: { listValue:
// [ { integerValue: 22 }, { stringValue: "a" } ] }</code>.
required Value value = 4;
}
// An entity.
//
// An entity is limited to 1 megabyte when stored. That <em>roughly</em>
// corresponds to a limit of 1 megabyte for the serialized form of this
// message.
message Entity {
// The entity's key.
//
// An entity must have a key, unless otherwise documented (for example,
// an entity in <code>Value.entityValue</code> may have no key).
// An entity's kind is its key's path's last element's kind,
// or null if it has no key.
optional Key key = 1;
// The entity's properties.
// Each property's name must be unique for its entity.
repeated Property property = 2;
}
// The result of fetching an entity from the datastore.
message EntityResult {
// Specifies what data the 'entity' field contains.
// A ResultType is either implied (for example, in LookupResponse.found it
// is always FULL) or specified by context (for example, in message
// QueryResultBatch, field 'entity_result_type' specifies a ResultType
// for all the values in field 'entity_result').
enum ResultType {
FULL = 1; // The entire entity.
PROJECTION = 2; // A projected subset of properties.
// The entity may have no key.
// A property value may have meaning 18.
KEY_ONLY = 3; // Only the key.
}
// The resulting entity.
required Entity entity = 1;
}
// A query.
message Query {
// The projection to return. If not set the entire entity is returned.
repeated PropertyExpression projection = 2;
// The kinds to query (if empty, returns entities from all kinds).
repeated KindExpression kind = 3;
// The filter to apply (optional).
optional Filter filter = 4;
// The order to apply to the query results (if empty, order is unspecified).
repeated PropertyOrder order = 5;
// The properties to group by (if empty, no grouping is applied to the
// result set).
repeated PropertyReference group_by = 6;
// A starting point for the query results. Optional. Query cursors are
// returned in query result batches.
optional bytes /* serialized QueryCursor */ start_cursor = 7;
// An ending point for the query results. Optional. Query cursors are
// returned in query result batches.
optional bytes /* serialized QueryCursor */ end_cursor = 8;
// The number of results to skip. Applies before limit, but after all other
// constraints (optional, defaults to 0).
optional int32 offset = 10 [default=0];
// The maximum number of results to return. Applies after all other
// constraints. Optional.
optional int32 limit = 11;
}
// A representation of a kind.
message KindExpression {
// The name of the kind.
required string name = 1;
}
// A reference to a property relative to the kind expressions.
message PropertyReference {
// The name of the property.
required string name = 2;
}
// A representation of a property in a projection.
message PropertyExpression {
enum AggregationFunction {
FIRST = 1;
}
// The property to project.
required PropertyReference property = 1;
// The aggregation function to apply to the property. Optional.
// Can only be used when grouping by at least one property. Must
// then be set on all properties in the projection that are not
// being grouped by.
optional AggregationFunction aggregation_function = 2;
}
// The desired order for a specific property.
message PropertyOrder {
enum Direction {
ASCENDING = 1;
DESCENDING = 2;
}
// The property to order by.
required PropertyReference property = 1;
// The direction to order by.
optional Direction direction = 2 [default=ASCENDING];
}
// A holder for any type of filter. Exactly one field should be specified.
message Filter {
// A composite filter.
optional CompositeFilter composite_filter = 1;
// A filter on a property.
optional PropertyFilter property_filter = 2;
}
// A filter that merges the multiple other filters using the given operation.
message CompositeFilter {
enum Operator {
AND = 1;
}
// The operator for combining multiple filters.
required Operator operator = 1;
// The list of filters to combine.
// Must contain at least one filter.
repeated Filter filter = 2;
}
// A filter on a specific property.
message PropertyFilter {
enum Operator {
LESS_THAN = 1;
LESS_THAN_OR_EQUAL = 2;
GREATER_THAN = 3;
GREATER_THAN_OR_EQUAL = 4;
EQUAL = 5;
HAS_ANCESTOR = 11;
}
// The property to filter by.
required PropertyReference property = 1;
// The operator to filter by.
required Operator operator = 2;
// The value to compare the property to.
required Value value = 3;
}
// A GQL query.
message GqlQuery {
required string query_string = 1;
// When false, the query string must not contain a literal.
optional bool allow_literal = 2 [default = false];
// A named argument must set field GqlQueryArg.name.
// No two named arguments may have the same name.
// For each non-reserved named binding site in the query string,
// there must be a named argument with that name,
// but not necessarily the inverse.
repeated GqlQueryArg name_arg = 3;
// Numbered binding site @1 references the first numbered argument,
// effectively using 1-based indexing, rather than the usual 0.
// A numbered argument must NOT set field GqlQueryArg.name.
// For each binding site numbered i in query_string,
// there must be an ith numbered argument.
// The inverse must also be true.
repeated GqlQueryArg number_arg = 4;
}
// A binding argument for a GQL query.
// Exactly one of fields value and cursor must be set.
message GqlQueryArg {
// Must match regex "[A-Za-z_$][A-Za-z_$0-9]*".
// Must not match regex "__.*__".
// Must not be "".
optional string name = 1;
optional Value value = 2;
optional bytes cursor = 3;
}
// A batch of results produced by a query.
message QueryResultBatch {
// The possible values for the 'more_results' field.
enum MoreResultsType {
NOT_FINISHED = 1; // There are additional batches to fetch from this query.
MORE_RESULTS_AFTER_LIMIT = 2; // The query is finished, but there are more
// results after the limit.
NO_MORE_RESULTS = 3; // The query has been exhausted.
}
// The result type for every entity in entityResults.
required EntityResult.ResultType entity_result_type = 1;
// The results for this batch.
repeated EntityResult entity_result = 2;
// A cursor that points to the position after the last result in the batch.
// May be absent.
optional bytes /* serialized QueryCursor */ end_cursor = 4;
// The state of the query after the current batch.
required MoreResultsType more_results = 5;
// The number of results skipped because of <code>Query.offset</code>.
optional int32 skipped_results = 6;
}
// A set of changes to apply.
//
// No entity in this message may have a reserved property name,
// not even a property in an entity in a value.
// No value in this message may have meaning 18,
// not even a value in an entity in another value.
//
// If entities with duplicate keys are present, an arbitrary choice will
// be made as to which is written.
message Mutation {
// Entities to upsert.
// Each upserted entity's key must have a complete path and
// must not be reserved/read-only.
repeated Entity upsert = 1;
// Entities to update.
// Each updated entity's key must have a complete path and
// must not be reserved/read-only.
repeated Entity update = 2;
// Entities to insert.
// Each inserted entity's key must have a complete path and
// must not be reserved/read-only.
repeated Entity insert = 3;
// Insert entities with a newly allocated ID.
// Each inserted entity's key must omit the final identifier in its path and
// must not be reserved/read-only.
repeated Entity insert_auto_id = 4;
// Keys of entities to delete.
// Each key must have a complete key path and must not be reserved/read-only.
repeated Key delete = 5;
// Ignore a user specified read-only period. Optional.
optional bool force = 6;
}
// The result of applying a mutation.
message MutationResult {
// Number of index writes.
required int32 index_updates = 1;
// Keys for <code>insertAutoId</code> entities. One per entity from the
// request, in the same order.
repeated Key insert_auto_id_key = 2;
}
// Options shared by read requests.
message ReadOptions {
enum ReadConsistency {
DEFAULT = 0;
STRONG = 1;
EVENTUAL = 2;
}
// The read consistency to use.
// Cannot be set when transaction is set.
// Lookup and ancestor queries default to STRONG, global queries default to
// EVENTUAL and cannot be set to STRONG.
optional ReadConsistency read_consistency = 1 [default=DEFAULT];
// The transaction to use. Optional.
optional bytes /* serialized Transaction */ transaction = 2;
}
// The request for Lookup.
message LookupRequest {
// Options for this lookup request. Optional.
optional ReadOptions read_options = 1;
// Keys of entities to look up from the datastore.
repeated Key key = 3;
}
// The response for Lookup.
message LookupResponse {
// The order of results in these fields is undefined and has no relation to
// the order of the keys in the input.
// Entities found as ResultType.FULL entities.
repeated EntityResult found = 1;
// Entities not found as ResultType.KEY_ONLY entities.
repeated EntityResult missing = 2;
// A list of keys that were not looked up due to resource constraints.
repeated Key deferred = 3;
}
// The request for RunQuery.
message RunQueryRequest {
// The options for this query.
optional ReadOptions read_options = 1;
// Entities are partitioned into subsets, identified by a dataset (usually
// implicitly specified by the project) and namespace ID. Queries are scoped
// to a single partition.
// This partition ID is normalized with the standard default context
// partition ID, but all other partition IDs in RunQueryRequest are
// normalized with this partition ID as the context partition ID.
optional PartitionId partition_id = 2;
// The query to run.
// Either this field or field gql_query must be set, but not both.
optional Query query = 3;
// The GQL query to run.
// Either this field or field query must be set, but not both.
optional GqlQuery gql_query = 7;
}
// The response for RunQuery.
message RunQueryResponse {
// A batch of query results (always present).
optional QueryResultBatch batch = 1;
}
// The request for BeginTransaction.
message BeginTransactionRequest {
enum IsolationLevel {
SNAPSHOT = 0; // Read from a consistent snapshot. Concurrent transactions
// conflict if their mutations conflict. For example:
// Read(A),Write(B) may not conflict with Read(B),Write(A),
// but Read(B),Write(B) does conflict with Read(B),Write(B).
SERIALIZABLE = 1; // Read from a consistent snapshot. Concurrent
// transactions conflict if they cannot be serialized.
// For example Read(A),Write(B) does conflict with
// Read(B),Write(A) but Read(A) may not conflict with
// Write(A).
}
// The transaction isolation level.
optional IsolationLevel isolation_level = 1 [default=SNAPSHOT];
}
// The response for BeginTransaction.
message BeginTransactionResponse {
// The transaction identifier (always present).
optional bytes /* serialized Transaction */ transaction = 1;
}
// The request for Rollback.
message RollbackRequest {
// The transaction identifier, returned by a call to
// <code>beginTransaction</code>.
required bytes /* serialized Transaction */ transaction = 1;
}
// The response for Rollback.
message RollbackResponse {
// Empty
}
// The request for Commit.
message CommitRequest {
enum Mode {
TRANSACTIONAL = 1;
NON_TRANSACTIONAL = 2;
}
// The transaction identifier, returned by a call to
// <code>beginTransaction</code>. Must be set when mode is TRANSACTIONAL.
optional bytes /* serialized Transaction */ transaction = 1;
// The mutation to perform. Optional.
optional Mutation mutation = 2;
// The type of commit to perform. Either TRANSACTIONAL or NON_TRANSACTIONAL.
optional Mode mode = 5 [default=TRANSACTIONAL];
}
// The response for Commit.
message CommitResponse {
// The result of performing the mutation (if any).
optional MutationResult mutation_result = 1;
}
// The request for AllocateIds.
message AllocateIdsRequest {
// A list of keys with incomplete key paths to allocate IDs for.
// No key may be reserved/read-only.
repeated Key key = 1;
}
// The response for AllocateIds.
message AllocateIdsResponse {
// The keys specified in the request (in the same order), each with
// its key path completed with a newly allocated ID.
repeated Key key = 1;
}
// Each rpc normalizes the partition IDs of the keys in its input entities,
// and always returns entities with keys with normalized partition IDs.
// (Note that applies to all entities, including entities in values.)
service DatastoreService {
// Look up some entities by key.
rpc Lookup(LookupRequest) returns (LookupResponse) {
};
// Query for entities.
rpc RunQuery(RunQueryRequest) returns (RunQueryResponse) {
};
// Begin a new transaction.
rpc BeginTransaction(BeginTransactionRequest) returns (BeginTransactionResponse) {
};
// Commit a transaction, optionally creating, deleting or modifying some
// entities.
rpc Commit(CommitRequest) returns (CommitResponse) {
};
// Roll back a transaction.
rpc Rollback(RollbackRequest) returns (RollbackResponse) {
};
// Allocate IDs for incomplete keys (useful for referencing an entity before
// it is inserted).
rpc AllocateIds(AllocateIdsRequest) returns (AllocateIdsResponse) {
};
}

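As an illustration of the Key/PathElement layout and the LookupRequest described above (not part of the diff): a sketch assuming standard protoc-gen-go output for the suppressed generated file; the generated package's import path, the dataset ID, and the kind/name values are assumptions for illustration only.

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	pb "google.golang.org/cloud/internal/datastore" // assumed location of the generated code (diff suppressed above)
)

func main() {
	// A complete key path: a root entity of kind "Task" named "sample-task",
	// in an explicitly named dataset (both values are placeholders).
	key := &pb.Key{
		PartitionId: &pb.PartitionId{
			DatasetId: proto.String("my-dataset"),
		},
		PathElement: []*pb.Key_PathElement{
			{Kind: proto.String("Task"), Name: proto.String("sample-task")},
		},
	}
	// LookupRequest fetches the entities stored under the given keys.
	req := &pb.LookupRequest{Key: []*pb.Key{key}}
	fmt.Println(proto.MarshalTextString(req))
}
```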

@ -0,0 +1,57 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package testutil contains helper functions for writing tests.
package testutil
import (
"io/ioutil"
"log"
"net/http"
"os"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/oauth2/google"
"google.golang.org/cloud"
)
const (
envProjID = "GCLOUD_TESTS_GOLANG_PROJECT_ID"
envPrivateKey = "GCLOUD_TESTS_GOLANG_KEY"
)
func Context(scopes ...string) context.Context {
key, projID := os.Getenv(envPrivateKey), os.Getenv(envProjID)
if key == "" || projID == "" {
log.Fatal("GCLOUD_TESTS_GOLANG_KEY and GCLOUD_TESTS_GOLANG_PROJECT_ID must be set. See CONTRIBUTING.md for details.")
}
jsonKey, err := ioutil.ReadFile(key)
if err != nil {
log.Fatalf("Cannot read the JSON key file, err: %v", err)
}
conf, err := google.JWTConfigFromJSON(jsonKey, scopes...)
if err != nil {
log.Fatal(err)
}
return cloud.NewContext(projID, conf.Client(oauth2.NoContext))
}
func NoAuthContext() context.Context {
projID := os.Getenv(envProjID)
if projID == "" {
log.Fatal("GCLOUD_TESTS_GOLANG_PROJECT_ID must be set. See CONTRIBUTING.md for details.")
}
return cloud.NewContext(projID, &http.Client{Transport: http.DefaultTransport})
}


@ -0,0 +1,10 @@
sudo: false
language: go
install:
- go get -v -t -d google.golang.org/grpc/...
script:
- go test -v -cpu 1,4 google.golang.org/grpc/...
- go test -v -race -cpu 1,4 google.golang.org/grpc/...


@ -0,0 +1,27 @@
# How to contribute
We definitely welcome patches and contributions to grpc! Here are some guidelines
and information about how to do so.
## Getting started
### Legal requirements
In order to protect both you and ourselves, you will need to sign the
[Contributor License Agreement](https://cla.developers.google.com/clas).
### Filing Issues
When filing an issue, make sure to answer these five questions:
1. What version of Go are you using (`go version`)?
2. What operating system and processor architecture are you using?
3. What did you do?
4. What did you expect to see?
5. What did you see instead?
### Contributing code
Please read the Contribution Guidelines before sending patches.
We will not accept GitHub pull requests once Gerrit is set up (we will use Gerrit instead for code review).
Unless otherwise noted, the Go source files are distributed under the BSD-style license found in the LICENSE file.

28
Godeps/_workspace/src/google.golang.org/grpc/LICENSE generated vendored Normal file

@ -0,0 +1,28 @@
Copyright 2014, Google Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

22
Godeps/_workspace/src/google.golang.org/grpc/PATENTS generated vendored Normal file

@ -0,0 +1,22 @@
Additional IP Rights Grant (Patents)
"This implementation" means the copyrightable works distributed by
Google as part of the GRPC project.
Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of GRPC, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of GRPC. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of GRPC or any code incorporated within this
implementation of GRPC constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of GRPC
shall terminate as of the date such litigation is filed.

Some files were not shown because too many files have changed in this diff