* Allow partial reads from a GorillaConn; add SetDeadline (from net.Conn)
The current implementation of GorillaConn drops data if a websocket
frame isn't read in full. For example, if a frame is 3 bytes and Read()
is called with a []byte of len=1, the other 2 bytes in the frame are
lost forever.
This is currently masked by the fact that GorillaConn is used primarily
by io.Copy to another socket (in ingress.Stream): as long as the read
buffer used by io.Copy is big enough (it is 32*1024, so in theory we
could hit this today), the whole frame is copied over to the other
socket. From there the client can do partial reads just fine, as the
kernel takes care of buffering.
I hit this while creating my own tunnel that avoids ingress.Stream, but
I think it could be a real bug today if a websocket frame bigger than
32*1024 bytes were received. It is also possible that we are lucky and
the upstream (whose frame size I haven't checked) always uses a smaller
buffer than that.
The test I added hangs before my change and succeeds after.
Also add SetDeadline so that GorillaConn fully implements net.Conn
(a sketch of both changes follows this entry).
* Comment formatting; fast path
* Avoid intermediate buffer for first len(p) bytes; import order
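For illustration, here is a minimal sketch of a buffered Read plus SetDeadline, assuming gorilla/websocket; the package, type, and field names (`wsconn`, `readPending`) are illustrative, not cloudflared's actual implementation:

```go
package wsconn

import (
	"time"

	"github.com/gorilla/websocket"
)

// GorillaConn wraps a gorilla/websocket connection. readPending holds
// frame bytes not yet consumed by Read. Names are illustrative.
type GorillaConn struct {
	conn        *websocket.Conn
	readPending []byte
}

// Read drains the current frame before fetching the next one, so bytes
// beyond len(p) are kept for later Read calls instead of being dropped.
func (c *GorillaConn) Read(p []byte) (int, error) {
	// Fast path: serve leftover bytes from the previous frame.
	if len(c.readPending) > 0 {
		n := copy(p, c.readPending)
		c.readPending = c.readPending[n:]
		return n, nil
	}
	_, message, err := c.conn.ReadMessage()
	if err != nil {
		return 0, err
	}
	n := copy(p, message)
	// Stash whatever didn't fit in p for the next call.
	c.readPending = message[n:]
	return n, nil
}

// SetDeadline is needed for GorillaConn to fully implement net.Conn;
// it delegates to the underlying read and write deadlines.
func (c *GorillaConn) SetDeadline(t time.Time) error {
	if err := c.conn.SetReadDeadline(t); err != nil {
		return err
	}
	return c.conn.SetWriteDeadline(t)
}
```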
- Move packages that provide generic functionality (such as config) from the `cmd` subtree to top level.
- Remove all dependencies on `cmd` subtree from top level packages.
- Consolidate all code dealing with token generation and transfer to a single cohesive package.
* Issue-285: Detect TARGET_ARCH correctly for FreeBSD amd64 (uname -m returns amd64 not x86_64)
See: https://github.com/cloudflare/cloudflared/issues/285
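The mapping at issue, sketched as a hedged Go helper (the function is hypothetical, purely to illustrate the fix; the real change is in the build script):

```go
package build

// normalizeArch maps `uname -m` output to the TARGET_ARCH naming used
// by the build. Hypothetical helper: FreeBSD reports "amd64" where
// Linux reports "x86_64", so both must map to the same target.
func normalizeArch(unameM string) string {
	switch unameM {
	case "x86_64", "amd64": // Linux vs FreeBSD naming for the same arch
		return "amd64"
	default:
		return unameM
	}
}
```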
* Add a note not to `go get github.com/BurntSushi/go-sumtype` in the build directory, as this will cause vendor issues
Co-authored-by: PaulC <paulc@>
added ingress.DefaultStreamHandler and a basic test for tcp stream proxy
moved websocket.Stream to ingress
cloudflared no longer picks tcpstream host from header
- extracted ResponseWriter from proxyConnection
- added bastion tests over websocket
- removed HTTPResp()
- added some docstrings
- Renamed some ingress clients to proxies
- renamed instances of client to proxy in connection and origin
- Stream no longer takes a context and logger.Service (see the sketch after this list)
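As background for the Stream refactor, a minimal sketch of the shape of such a bidirectional stream proxy; the signature is illustrative, not cloudflared's actual API:

```go
package ingress

import "io"

// Stream copies data in both directions between the client connection
// and the origin, returning once either side finishes. Illustrative
// only; per the notes above, the real Stream no longer takes a context
// or logger.Service.
func Stream(conn, origin io.ReadWriter) {
	done := make(chan struct{}, 2)
	pipe := func(dst io.Writer, src io.Reader) {
		// io.Copy uses a 32*1024-byte buffer internally, which is the
		// buffer size mentioned in the GorillaConn entry above.
		io.Copy(dst, src)
		done <- struct{}{}
	}
	go pipe(conn, origin)
	go pipe(origin, conn)
	<-done // unblock as soon as one direction completes
}
```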
* Add max upstream connections dns-proxy option
Allows defining a limit to the number of connections that can be
established with the upstream DNS host.
If left unset, connections may fail to establish in some situations,
causing the Transport to open a flood of new connections; upstream then
throttles our requests, triggering a runaway effect that results in
high CPU usage. See https://github.com/cloudflare/cloudflared/issues/91
(a sketch of such a cap is below).
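A minimal sketch of how such a cap can look, assuming the upstream is reached over HTTP (as with DoH) through net/http; the helper name and values are illustrative:

```go
package main

import (
	"net/http"
	"time"
)

// newUpstreamClient returns an HTTP client whose Transport caps the
// number of connections to the upstream host. With MaxConnsPerHost set,
// requests queue for a free connection instead of dialing new ones,
// avoiding the runaway connection growth described above.
func newUpstreamClient(maxConns int) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			MaxConnsPerHost:     maxConns, // 0 means unlimited
			MaxIdleConnsPerHost: maxConns,
			IdleConnTimeout:     90 * time.Second,
		},
	}
}
```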
* Code review with proposed changes
* Add max upstream connections flag to tunnel flags
* Reduce DNS proxy max upstream connections default value
Reduce the default value of maximum upstream connections on the DNS
proxy to guarantee it works on single-core and other low-end hardware.
Further testing could allow for a safe increase of this value.
* Update dns-proxy flag name
Also remove the `MaxUpstreamConnsFlag` const, since it's no longer referenced in more than one place; this also makes it more consistent with how the other flags are referenced (see the flag sketch after this entry).
Co-authored-by: Adam Chalmers <achalmers@cloudflare.com>
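For reference, a hedged sketch of what defining such a flag looks like, assuming urfave/cli; the flag name and default value are illustrative, not necessarily the ones chosen above:

```go
package main

import "github.com/urfave/cli/v2"

// Illustrative flag definition; the real name and default may differ.
var maxUpstreamConnsFlag = &cli.IntFlag{
	Name:  "max-upstream-conns",
	Usage: "Maximum number of connections to the upstream DNS host; 0 means unlimited",
	Value: 5, // illustrative low default, safe for low-end hardware
}
```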
Jitter is important to avoid every cloudflared in the world trying to
reconnect at t=1, 2, 4, etc. That could overwhelm the backend. But
if each cloudflared randomly waits for up to 2, then up to 4, then up
to 8, etc., then the retries get spread out evenly across time.
On average, wait times should be the same (e.g. instead of waiting for
exactly 1 second, cloudflared will wait between 0 and 2 seconds).
This is the "Full Jitter" algorithm from https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/