All header transformation code from h2mux has been consolidated in the connection package since it's used by both h2mux and http2 logic.
Exported headers used for proxying between the edge and cloudflared so they can be shared by the tunnel service on the edge.
Moved access-related headers to the packages that contain the code that sets/uses them.
Removed tunnel hostname tracking from h2mux since it wasn't used by anything. We will continue to set the tunnel hostname header from the edge for backward compatibility, but it's no longer used by cloudflared.
Moved bastion-related logic into the carrier package and untangled dependencies between the carrier, origin, and websocket packages.
This change has two parts:
1. Update to a newer version of the urfave/cli fork that correctly sets flag values along the context hierarchy while respecting the config file override behavior of the most specific instance of the flag.
2. Redefine the --credentials-file flag so that the create and delete subcommands don't use the value from the config file, as sketched below.
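A minimal sketch of the idea, using upstream urfave/cli v2 and its altsrc config-file adapter (the fork's API may differ; names here are illustrative):

```go
package main

import (
	"github.com/urfave/cli/v2"
	"github.com/urfave/cli/v2/altsrc"
)

// Wrapping a flag with altsrc lets a command read its value from the
// config file; a plain declaration does not. Giving create and delete
// the unwrapped variant means they only see --credentials-file when it
// is passed explicitly on the command line.
var (
	credentialsFileFromConfig = altsrc.NewStringFlag(&cli.StringFlag{
		Name:  "credentials-file",
		Usage: "Path of the tunnel credentials file",
	})
	credentialsFileCLIOnly = &cli.StringFlag{
		Name:  "credentials-file",
		Usage: "Path of the tunnel credentials file",
	}
)
```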
- Move packages that provide generic functionality (such as config) from the `cmd` subtree to the top level.
- Remove all dependencies on `cmd` subtree from top level packages.
- Consolidate all code dealing with token generation and transfer to a single cohesive package.
- Extracted ResponseWriter from proxyConnection
- Added bastion tests over websocket
- Removed HTTPResp()
- Added some docstrings
- Renamed some ingress clients as proxies
- Renamed instances of client to proxy in connection and origin
- Stream no longer takes a context and logger.Service
* Add max upstream connections dns-proxy option
Allows defining a limit on the number of connections that can be
established with the upstream DNS host.
If left unset, there may be situations where connections fail to
establish, which causes the Transport to create an influx of new
connections; the upstream then throttles our requests, triggering a
runaway effect that results in high CPU usage (see the sketch after
this change list). See https://github.com/cloudflare/cloudflared/issues/91
* Code review with proposed changes
* Add max upstream connections flag to tunnel flags
* Reduce DNS proxy max upstream connections default value
Reduce the default value of maximum upstream connections on the DNS
proxy to guarantee it works on single-core and other low-end hardware.
Further testing could allow for a safe increase of this value.
* Update dns-proxy flag name
Also remove the `MaxUpstreamConnsFlag` const, since it's no longer referenced in more than one place, and to stay consistent with how the other flags are referenced.
Co-authored-by: Adam Chalmers <achalmers@cloudflare.com>
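For illustration, a hedged sketch of how such a cap could look with Go's standard net/http Transport, assuming the proxy reaches its upstream over DNS-over-HTTPS; the function and parameter names are hypothetical, not cloudflared's actual code:

```go
package main

import (
	"net/http"
	"time"
)

// newUpstreamTransport caps concurrent connections to the upstream
// resolver. Without the cap (MaxConnsPerHost == 0, i.e. unlimited), a
// slow upstream lets the Transport open an unbounded number of new
// connections, which is the runaway scenario described above.
func newUpstreamTransport(maxUpstreamConns int) *http.Transport {
	return &http.Transport{
		MaxConnsPerHost:     maxUpstreamConns,
		MaxIdleConns:        maxUpstreamConns,
		IdleConnTimeout:     90 * time.Second,
		TLSHandshakeTimeout: 10 * time.Second,
	}
}
```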
Jitter is important to avoid every cloudflared in the world trying to
reconnect at t=1, 2, 4, etc. That could overwhelm the backend. But if
each cloudflared randomly waits for up to 2, then up to 4, then up to
8, etc., the retries get spread out evenly across time.
On average, wait times stay the same (e.g. instead of waiting for
exactly 1 second, cloudflared waits between 0 and 2 seconds).
This is the "Full Jitter" algorithm from https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
Unless I'm mistaken, when there is no existing token for an app, the `login` command needs to be run to obtain a token (not the `token` command, which itself doesn't generate a token).
The classic tunnel flow was triggering a RegisteringTunnel event for
every connection that was about to be established, and then a Connected
event for every connection established.
However, the RegisteringTunnel event had no connection ID, which always
caused it to unset/disconnect the 0th connection, making the /ready
endpoint report incorrect numbers for classic tunnels.
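To see why the 0th connection was the one affected, consider a hypothetical event shape (not cloudflared's actual type): with no explicit connection ID, Go's zero value makes the event indistinguishable from one for connection 0.

```go
package tunnelstate

// Event is an illustrative stand-in for the connection event type.
type Event struct {
	Index     uint8     // connection ID; zero value 0 when never set
	EventType EventType // e.g. RegisteringTunnel, Connected
}

type EventType int

const (
	RegisteringTunnel EventType = iota
	Connected
	Disconnected
)
```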
The following UInt flags:
* Uint64 - heartbeat-count, compression-quality
* Uint - retries, port, proxy-port
were not being correctly picked up from the configuration YAML
since the multi-origin refactor.
This is due to a limitation of the urfave library, which we
overcome for now by handling those as Int flags, as sketched below.
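A sketch of the workaround, using upstream urfave/cli v2 and altsrc (the fork's API may differ); `retries` mirrors one of the flags above, while the helper name is illustrative:

```go
package main

import (
	"github.com/urfave/cli/v2"
	"github.com/urfave/cli/v2/altsrc"
)

// Declared as an Int flag so the YAML config source picks it up, then
// converted to uint at the point of use.
var retriesFlag = altsrc.NewIntFlag(&cli.IntFlag{
	Name:  "retries",
	Value: 5,
	Usage: "Maximum retries for connection/protocol errors",
})

func retriesFromContext(c *cli.Context) uint {
	return uint(c.Int("retries"))
}
```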
Not consuming UI events was causing cloudflared to become stuck after
some time. This would happen because the Observer pattern was sending
events to the UI channel (which has 16 slots), but no one was consuming
them when the UI is not enabled (the default case).
Hence, events (such as connection disconnects/reconnects) would fill
that buffer and cause cloudflared to become apparently stuck, in the
sense that the connections would not be reconnected.
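One general way to keep a producer from wedging on a full event buffer is a non-blocking send; whether the actual fix drops events or drains the channel when the UI is disabled, the principle is the same. Names here are illustrative:

```go
package main

// Event stands in for the observer's UI event type.
type Event struct{ Message string }

// notifyUI never blocks the observer: if the buffered channel is full
// (e.g. the UI is disabled and nothing drains it), the event is
// dropped instead of stalling reconnection logic.
func notifyUI(events chan Event, e Event) {
	select {
	case events <- e:
	default:
	}
}
```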
Tunnel delete is successful even if we don't find the credentials
file in the user's filesystem. We no longer error to indicate this
is a problem. This fix also enables chaining of the delete command
by removing a premature return if the credentials file is not found.
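A hedged sketch of the tolerant delete (the function name and logging are illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// deleteCredentialsFile treats a missing file as success: it warns and
// returns nil so a chained delete can continue with the next tunnel.
func deleteCredentialsFile(path string) error {
	if err := os.Remove(path); err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("Credentials file %s not found, skipping\n", path)
			return nil
		}
		return err
	}
	return nil
}
```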