* TUN-9863: Introduce Code Signing for Windows Builds
This commit adds a signing step to the build script for Windows binaries.
Since we package the MSI on Linux, this commit adds another CI step that depends on package-windows and signs all of the Windows packages.
To do so, we use AzureSignTool, which relies on a certificate stored in Azure Key Vault.
Closes TUN-9863
* chore: Update cloudflared signing key name in index.html
We want to preserve the old key name so that we don't have to update the dev docs.
We will have the same key under this name and the v2 name to account for everyone who has already updated.
* Fix systemd service installation hanging
---
This fixes a hang that occurs when there is a network issue (port blocking or no Internet): the installation cannot complete and no error is sent to the output.
Before (killed manually since it hangs forever): (screenshot)
After: (screenshot)
---
* TUN-9919: Make RPM postinstall scriptlet idempotent
Before this commit the postinstall scriptlet isn't idempotent, meaning the users see this error in their upgrade logs:
`ln: failed to create symbolic link '/usr/local/bin/cloudflared': File exists
warning: %post(cloudflared-2025.10.0-1.x86_64) scriptlet failed, exit status 1`
This doesn't break the upgrade (which is why we haven't touched this in 5 years), but adding the -f (force) flag to the symlink command prevents this issue from happening.
Closes TUN-9919
* chore: Fix upload of RPM repo file during double signing
This commit fixes a variable that was supposed to hold the path of the repo file but was instead being overwritten with the repo file handle.
* chore: Fix import of GPG keys when two keys are provided
We were only retrieving the first result of gpg.list_keys because previously we ran import_gpg_keys only once. Now that we run it twice, we need to ensure that the key we select from the list matches the one we've imported.
Adds new metrics for:
- Dropped UDP datagrams on the read and write paths
- Dropped ICMP packets on the write path
- Failures that preemptively close UDP flows
Closes TUN-9882
Add a deadline for origin writes as a preventative measure in the case that the kernel blocks any writes for too long.
In the case that the socket exceeds the write deadline, the datagram will be dropped.
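A minimal sketch of that behaviour, assuming a hypothetical writeToOrigin helper around a *net.UDPConn (not the actual cloudflared code):
```go
package example

import (
	"fmt"
	"net"
	"time"
)

// writeToOrigin applies a write deadline before each origin write. If the
// kernel blocks the write past the deadline, the write returns a timeout
// error and the datagram is dropped instead of blocking the caller.
func writeToOrigin(conn *net.UDPConn, payload []byte, deadline time.Duration) error {
	if err := conn.SetWriteDeadline(time.Now().Add(deadline)); err != nil {
		return err
	}
	if _, err := conn.Write(payload); err != nil {
		return fmt.Errorf("dropping datagram: %w", err)
	}
	return nil
}
```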
Closes TUN-9882
Instead of creating a goroutine to process each incoming datagram from the tunnel, a single consumer (the demuxer) will
process each of the datagrams serially.
Registration datagrams will still be spun out into separate goroutines since they are responsible for managing the
lifetime of the session once started via the `Serve` method.
UDP payload datagrams will be handled in separate channels to allow for parallel writing inside of the scope of a
session via a new write loop. This channel will have a small buffer to help unblock the demuxer from dequeueing other
datagrams.
ICMP datagrams will be funneled into a single channel across all possible origins with a single consumer to write to
their respective destinations.
Each of these changes is to prevent datagram reordering from occurring when dequeuing from the tunnel connection. By
establishing a single demuxer that serializes the writes per session, each session will be able to write sequentially,
but in parallel to their respective origins.
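A rough sketch of that shape; the names and types here are illustrative, not the real datagram v3 code:
```go
package example

import "context"

type datagram struct {
	sessionID string
	payload   []byte
}

// demux is the single consumer dequeuing from the tunnel connection. It never
// writes to an origin itself; it hands each payload to the session's small
// buffered channel so per-session writes stay ordered while different
// sessions write to their origins in parallel.
func demux(ctx context.Context, fromTunnel <-chan datagram, sessions map[string]chan []byte) {
	for {
		select {
		case <-ctx.Done():
			return
		case d := <-fromTunnel:
			ch, ok := sessions[d.sessionID]
			if !ok {
				continue // unknown session: drop the payload
			}
			select {
			case ch <- d.payload: // small buffer keeps the demuxer unblocked
			case <-ctx.Done():
				return
			}
		}
	}
}
```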
Closes TUN-9882
* TUN-9776: Support signing Debian packages with two keys for rollover
Debian Trixie doesn't support the SHA-1 algo for GPG keys.
This commit leverages the ability to provide two keys in the reprepro configuration in order to have two signatures in the InRelease and Release.gpg files.
This allows users that have the old key to continue fetching the binaries with the old key while allowing us to provide a new key that can be used in Trixie.
Unfortunately, current versions of RPM (since 2002) don't support double signing, so we can't apply the same logic to RPM packages.
Closes TUN-9776
## Summary
This commit migrates the cloudflared CI pipelines that build, test, and component-test the Linux binaries to GitLab CI.
The only pipelines remaining to move from TeamCity to GitLab are the release pipelines that run on master.
Relates to TUN-9800
Corrects the pattern of using errgroups and context cancellation to simplify the logic for canceling extra routines for the QUIC connection. The extra context cancellation was redundant, since the errgroup already cancels its own provided context when a routine returns (error or not).
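A compressed sketch of the corrected pattern, with stand-in routines in place of the real connection services:
```go
package example

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// serveConnection relies on errgroup.WithContext: as soon as any routine
// returns, the group cancels the derived ctx, so no extra manual
// cancellation is needed.
func serveConnection(ctx context.Context) error {
	group, ctx := errgroup.WithContext(ctx)
	group.Go(func() error { return serveControlStream(ctx) })
	group.Go(func() error { return serveDatagramHandler(ctx) })
	return group.Wait() // first error wins; the rest unwind via ctx
}

// Stand-ins for the services running on top of the QUIC connection.
func serveControlStream(ctx context.Context) error   { <-ctx.Done(); return ctx.Err() }
func serveDatagramHandler(ctx context.Context) error { <-ctx.Done(); return ctx.Err() }
```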
For the datagram handler specifically, since it can respond faster to a context cancellation from the QUIC connection, we wrap the error before surfacing it outside of the QUIC connection scope to the supervisor. Additionally, the supervisor will look for this error type to check if it should retry the QUIC connection. These two operations are required because the supervisor does not look for a context-canceled error when deciding to retry a connection. If a context canceled from the datagram handler were returned up to the supervisor on the initial connection, the cloudflared application would exit. We want to ensure that cloudflared keeps attempting connections even if any of the services on top of a QUIC connection fail (the datagram handler in this case).
Additional logging is also introduced along these paths to help with understanding the error conditions from the specific handlers on top of a QUIC connection.
Related CUSTESC-53681
Closes TUN-9610
This commit adds support for FedRAMP environments. Cloudflared will
now dynamically configure the management hostname and API URL, switching
to FedRAMP-specific values like `management.fed.argotunnel.com` and `https://api.fed.cloudflare.com/client/v4`
when a FedRAMP endpoint is detected.
Key to this is an enhanced `ParseToken` function; the parsed token now exposes an `IsFed()`
method to determine if a management token's issuer is `fed-tunnelstore`. This allows
cloudflared to correctly identify and operate within a FedRAMP context, ensuring
proper connectivity.
Closes TUN-9583
## Summary
This commit removes configurations and references for Debian-based releases that are no longer supported in the build and packaging processes.
For most of the listed Ubuntu versions, only Pro users still have support, so we might decide to remove some of them as well. Information is available at:
- Debian Releases: https://wiki.debian.org/LTS (we no longer support bullseye at Cloudflare)
- Ubuntu Releases: https://ubuntu.com/about/release-cycle
Closes TUN-9542
## Summary
This commit changes the USER instruction in our Dockerfiles from using
the string "nonroot" to its numeric ID "65532".
This change is necessary because Kubernetes does not support string-based
user IDs in security contexts, requiring numeric IDs instead. The nonroot
user maps to 65532 in distroless images.
Remove P256Kyber768Draft00PQKex curve from nonFips curve preferences and add tests to verify that the advertised curves are the same as the curve preferences we set.
Closes TUN-9161
To help support users with environments that don't work well with the
DNS local resolver's automatic resolution process for local resolver
addresses, we introduce a flag to provide them statically to the
runtime. When providing the resolver addresses, cloudflared will no
longer lookup the DNS resolver addresses and use the user input
directly.
When provided with a list of more than one DNS resolver, the resolver
service will select one at random for each incoming request.
Closes TUN-9473
Adds an OriginDialerService that takes over the roles of both DialUDP and DialTCP
towards the origin. This provides the possibility to leverage dialer "middleware"
to inject virtual origins, such as the DNS resolver service.
DNS Resolver service also gains access to the DialTCP operation to service TCP
DNS requests.
Minor refactoring removes the need for the warp-routing configuration: since this configuration
cannot be disabled by cloudflared, many of the references to it have been adjusted or removed.
Closes TUN-9470
Introduces a new `UDPOriginProxy` interface and `UDPOriginService`
to standardize how UDP connections are dialed to origins. Allows for
future overrides of the ingress service for specific dial destinations.
Simplifies dependency injection for UDP dialing throughout both datagram
v2 and v3 by using the same ingress service. Previous invocations called
into a DialUDP function in the ingress package that was a light
wrapper over `net.DialUDP`. Now a reference is passed into both datagram
controllers that allows more control over the DialUDP method.
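A hypothetical shape of such an interface and its default dialer; the actual cloudflared definitions may differ:
```go
package example

import (
	"context"
	"net"
	"net/netip"
)

// UDPOriginProxy abstracts how UDP connections towards origins are dialed so
// that both datagram v2 and v3 can share one ingress service, and future
// overrides can substitute virtual origins for specific destinations.
type UDPOriginProxy interface {
	DialUDP(ctx context.Context, dest netip.AddrPort) (*net.UDPConn, error)
}

// defaultUDPDialer is roughly what the previous light wrapper over
// net.DialUDP did.
type defaultUDPDialer struct{}

func (defaultUDPDialer) DialUDP(_ context.Context, dest netip.AddrPort) (*net.UDPConn, error) {
	return net.DialUDP("udp", nil, net.UDPAddrFromAddrPort(dest))
}
```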
Closes TUN-9469
## Summary
When bumping cloudflared to use Go 1.24, we no longer need cloudflare-go,
since most of the PQ and FIPS-compliant curves are already available in Go 1.24.
Therefore, we can remove everything related to installing our own Go toolchain.
## Summary
Update several moving parts of cloudflared build system:
* use goboring 1.24.2 in cfsetup
* update linter and fix lint issues
* update packages namely **quic-go and net**
* install script for macos
* update docker files to use go 1.24.1
* remove usage of cloudflare-go
* pin golang linter
Closes TUN-9016
## Summary
The `is_default` field in the request body of the POST /virtual_networks endpoint has been
deprecated and should no longer be used. Clients should use the `is_default_network` field
instead for setting the default virtual network.
Closes TUN-9171
Make sure to take a snapshot of the features and client information for each connection,
so that the feature information can change in the background without affecting existing connections.
New features are only applied to a connection if it completely disconnects and attempts a reconnect.
Updates the feature refresh interval to 1 hour; previous cloudflared versions
refreshed every 6 hours.
Closes TUN-9319
During a refresh of the supported features via the DNS TXT record,
cloudflared would update the internal feature list, but would not
propagate this information to the edge during a new connection.
This meant that a situation could occur in which cloudflared would
think that the client's connection could support datagram V3, and
would set up that muxer locally, but would not propagate that information
to the edge during a register connection in the `ClientInfo` of the
`ConnectionOptions`. This meant that the edge still thought that the
client was set up to support datagram V2 and since the protocols are
not backwards compatible, the local muxer for datagram V3 would reject
the incoming RPC calls.
To address this, the feature list will be fetched only once during
client bootstrapping and will persist as-is until the client is restarted.
This helps reduce the complexity involved with different connections
having possibly different sets of features when connecting to the edge.
The features will now be tied to the client and never diverge across
connections.
Also, retires the use of `support_datagram_v3` in favor of
`support_datagram_v3_1` to reduce the risk of reusing the feature key.
The `dv3` TXT feature key is also deprecated.
Closes TUN-9291
Adds a new GitLab CI pipeline that releases cloudflared Mac builds and replaces the TeamCity ad hoc job.
This will build, sign, and create a new GitHub release, or add the artifacts to an existing release if the other jobs finish first.
## Summary
We have adapted our edge services to better know when they should flush on write. This is an important
feature to ensure response types like Server-Sent Events are not buffered, and instead are propagated to the eyeball
as soon as possible. This commit implements similar logic for the http2 tunnel protocol that we use in our edge
services, by checking for the events-stream header for newline-delimited JSON (`application/x-ndjson`) and using the
Content-Length and Transfer-Encoding headers as well, following these RFCs:
- https://datatracker.ietf.org/doc/html/rfc7230#section-4.1
- https://datatracker.ietf.org/doc/html/rfc9112#section-6.1
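An illustrative version of that decision (a sketch, not the exact header checks cloudflared performs):
```go
package example

import (
	"net/http"
	"strings"
)

// shouldFlushOnWrite returns true for responses that must reach the eyeball
// promptly: event-stream-like content types (including ndjson) and responses
// whose length is unknown (chunked transfer encoding or no Content-Length).
func shouldFlushOnWrite(h http.Header) bool {
	ct := strings.ToLower(h.Get("Content-Type"))
	if strings.HasPrefix(ct, "text/event-stream") || strings.HasPrefix(ct, "application/x-ndjson") {
		return true
	}
	if strings.EqualFold(h.Get("Transfer-Encoding"), "chunked") {
		return true
	}
	return h.Get("Content-Length") == ""
}
```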
Closes TUN-9255
Using path package methods can cause errors on Windows machines.
path methods are intended for URL operations and Unix-specific paths, while
filepath methods are for file system paths and are cross-platform.
Remove strings.HasSuffix and use filepath.Ext and path.Ext for file and
URL extensions respectively.
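A small illustration of the split (the values are made up):
```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path works on slash-separated URL paths; filepath is OS-aware.
	fmt.Println(path.Ext("/download/cloudflared.msi")) // ".msi" (URL path)
	fmt.Println(filepath.Ext("cloudflared.exe"))       // ".exe" (file system path)
	fmt.Println(filepath.Join("logs", "diag.zip"))     // uses the OS separator, e.g. "logs\diag.zip" on Windows
}
```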
## Issue
The [documentation for creating a tunnel's configuration
file](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/#4-create-a-configuration-file)
does not specify that the `credentials-file` field in `config.yml` needs
to be an absolute path.
A user (e.g. me 🤦) might add a path like `~/.cloudflared/<uuid>.json`
and wonder why the `cloudflared tunnel run` command is throwing a
credentials file not found error. Although one might consider it
intuitive, it's not a fair assumption as a lot of CLI tools allow file
paths with `~` for specifying files.
P.S. The tunnel ID in the following snippet is not a real tunnel ID, I
just generated it.
```
url: http://localhost:8000
tunnel: 958a1ef6-ff8c-4455-825a-5aed91242135
credentials-file: ~/.cloudflared/958a1ef6-ff8c-4455-825a-5aed91242135.json
```
Furthermore, the error has a confusing message for the user as the file
at the logged path actually exists, it is just that `os.Stat` failed
because it could not expand the `~`.
## Solution
This commit fixes the above issue by running a `homedir.Expand` on the
`credentials-file` path in the `credentialFinder` function.
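Simplified, the fix looks roughly like this (the surrounding credentialFinder logic is omitted and the helper name is made up):
```go
package example

import (
	"os"

	homedir "github.com/mitchellh/go-homedir"
)

// findCredentialsFile expands a leading "~" before checking that the
// credentials file actually exists, so paths like
// ~/.cloudflared/<uuid>.json no longer fail in os.Stat.
func findCredentialsFile(path string) (string, error) {
	expanded, err := homedir.Expand(path)
	if err != nil {
		return "", err
	}
	if _, err := os.Stat(expanded); err != nil {
		return "", err
	}
	return expanded, nil
}
```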
Per the contribution guidelines, this seemed to me like a small enough
change to not warrant an issue before creating this pull request. Let me
know if you'd like me to create one anyway.
## Background
While working with `cloudflared` on FreeBSD recently, I noticed that
there's an inconsistency with the available CLI commands on that OS
versus others — namely that the `service` command doesn't exist at all
for operating systems other than Linux, macOS, and Windows.
Contrast `cloudflared --help` output on macOS versus FreeBSD (truncated
to focus on the `COMMANDS` section):
- Current help output on macOS:
```text
COMMANDS:
update Update the agent if a new version exists
version Print the version
proxy-dns Run a DNS over HTTPS proxy server.
tail Stream logs from a remote cloudflared
service Manages the cloudflared launch agent
help, h Shows a list of commands or help for one command
Access:
access, forward access <subcommand>
Tunnel:
tunnel Use Cloudflare Tunnel to expose private services to the Internet
or to Cloudflare connected private users.
```
- Current help output on FreeBSD:
```text
COMMANDS:
update Update the agent if a new version exists
version Print the version
proxy-dns Run a DNS over HTTPS proxy server.
tail Stream logs from a remote cloudflared
help, h Shows a list of commands or help for one command
Access:
access, forward access <subcommand>
Tunnel:
tunnel Use Cloudflare Tunnel to expose private services to the Internet
or to Cloudflare connected private users.
```
This omission has caused confusion for users (including me), especially
since the provided command in the Cloudflare Zero Trust dashboard
returns a seemingly-unrelated error message:
```console
$ sudo cloudflared service install ...
You did not specify any valid additional argument to the cloudflared tunnel command.
If you are trying to run a Quick Tunnel then you need to explicitly pass the --url flag.
Eg. cloudflared tunnel --url localhost:8080/.
Please note that Quick Tunnels are meant to be ephemeral and should only be used for testing purposes.
For production usage, we recommend creating Named Tunnels. (https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide/)
```
## Contribution
This pull request adds a "stub" `service` command (including the usual
subcommands available on other OSes) to explicitly declare it as
unsupported on the operating system.
New help output on FreeBSD (and other operating systems where service
management is unsupported):
```text
COMMANDS:
update Update the agent if a new version exists
version Print the version
proxy-dns Run a DNS over HTTPS proxy server.
tail Stream logs from a remote cloudflared
service Manages the cloudflared system service (not supported on this operating system)
help, h Shows a list of commands or help for one command
Access:
access, forward access <subcommand>
Tunnel:
tunnel Use Cloudflare Tunnel to expose private services to the Internet or to Cloudflare connected private users.
```
New outputs when running the service management subcommands:
```console
$ sudo cloudflared service install ...
service installation is not supported on this operating system
```
```console
$ sudo cloudflared service uninstall ...
service uninstallation is not supported on this operating system
```
This keeps the available commands consistent until proper service
management support can be added for these otherwise-supported operating
systems.
## Summary
This change ensures that errors resulting from the `cloudflared access ssh` call are no longer ignored. By returning the error from `carrier.StartClient` to the upstream, we ensure that these errors are properly logged on stdout, providing better visibility and debugging capabilities.
Relates to TUN-9101
## Summary
As part of the FedRAMP work, it is necessary to change the HA SD lookup to use `fed-v2-origintunneld` as the SRV record.
This work assumes that the tunnel token has an optional endpoint field which will be used to modify the behaviour of the HA SD lookup.
Finally, the presence of the endpoint will override the region to `fed` and fail if any value is passed for the region flag.
Closes TUN-9007
## Summary
Within the scope of the FedRAMP High RM, it is necessary to detect whether a user should connect to a FedRAMP colo.
At first, we considered adding --fedramp as a global flag; however, this could be a footgun for the user or even a hindrance. Thus, the proposal is to record in the token (during login) whether the user authenticated using the FedRAMP dashboard. This makes it easier for the user, as they will only be required to pass the flag at login and nothing else.
* Introduces the new field, endpoint, in OriginCert
* Refactors login to remove the private key and certificate which are no longer used
* Login will only store the Argo Tunnel Token
* Remove namedTunnelToken as it was only used for serialization
Closes TUN-8960
## Summary
This commit refactors some of the flags of cloudflared into their own module, so that they can be used across the code without relying on literal strings, which are much more error-prone.
Closes TUN-8914
## Summary
This commit introduces a new command line flag, `--max-active-flows`, which allows overriding the remote configuration for the maximum number of active flows.
The flag can be used with the `run` command, like `cloudflared tunnel --no-autoupdate run --token <TUNNEL_TOKEN> --max-active-flows 50000`, or set via an environment variable `TUNNEL_MAX_ACTIVE_FLOWS`.
Note that locally-set values always take precedence over remote settings, even if the tunnel is remotely managed.
Closes TUN-8914
## Summary
When FIPS compliance was first achieved with the HTTP/2 transport, the technology at the time wasn't available or certified to be used in tandem with post-quantum encryption. Nowadays that is possible, so we can also remove this restriction from Cloudflared.
Closes TUN-8857
## Summary
Nowadays, Cloudflared only supports X25519Kyber768Draft00 (0x6399,25497) but older versions may use different preferences.
For FIPS compliance we are required to use P256Kyber768Draft00 (0xfe32,65074) which is supported in our internal fork of [Go-Boring-1.22.10](https://bitbucket.cfdata.org/projects/PLAT/repos/goboring/browse?at=refs/heads/go-boring/1.22.10 "Follow link").
In the near future, Go will support X25519MLKEM768 (0x11ec,4588) by default; given this, we may drop the usage of our public fork of Go.
To summarise:
* Cloudflared FIPS: QUIC_CURVE_PREFERENCES=65074
* Cloudflared non-FIPS: QUIC_CURVE_PREFERENCES=4588
Closes TUN-8855
## Summary
This commit renames the public variable that identifies the metadata key and value for the ConnectResponse structure when the flow was rate limited.
Closes TUN-8904
## Summary
cloudflared access login and cloudflared access curl fail when the Access application has warp_as_auth enabled.
This bug originates from a 4 year old inconsistency where tokens signed by the nginx-fl-access module include 'aud' as a string, while tokens signed by the access authentication worker include 'aud' as an array of strings.
When the new(ish) feature warp_as_auth is enabled for the app, the fl module signs the token, as opposed to the worker as it usually does.
I'm going to bring this up to the Access team, and try to figure out a way to consolidate this discrepancy without breaking behaviour.
Meanwhile we have this [CUSTESC-47987](https://jira.cfdata.org/browse/CUSTESC-47987), so I'm making cloudflared more lenient by accepting both []string and string in the token 'aud' field.
Tested this by compiling and running cloudflared access curl against my domains.
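A sketch of the lenient parsing (illustrative; the real claim struct in cloudflared may look different):
```go
package example

import "encoding/json"

// audience accepts the 'aud' claim either as a single string or as an array
// of strings, covering both token-signing code paths.
type audience []string

func (a *audience) UnmarshalJSON(data []byte) error {
	var single string
	if err := json.Unmarshal(data, &single); err == nil {
		*a = audience{single}
		return nil
	}
	var many []string
	if err := json.Unmarshal(data, &many); err != nil {
		return err
	}
	*a = audience(many)
	return nil
}
```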
Closes AUTH-6633
## Summary
Session is the concept used for UDP flows. Therefore, to make
the limiter protocol-agnostic across both TCP and UDP, this commit
renames it to flow limiter.
Closes TUN-8861
## Summary
In order to make cloudflared behavior more predictable and
prevent an exhaustion of resources, we have decided to add
session limits that can be configured by the user. This commit
adds the session limiter to the HTTP/TCP handling path.
For now the limiter is set to run only in unlimited mode.
## Summary
In order to make cloudflared behavior more predictable and
prevent an exhaustion of resources, we have decided to add
session limits that can be configured by the user. This first
commit introduces the session limiter and adds it to the UDP
handling path. For now the limiter is set to run only in
unlimited mode.
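Conceptually the limiter behaves like a counting semaphore; a minimal sketch (not the actual implementation), where a zero limit means unlimited:
```go
package example

import "errors"

var ErrTooManyActiveFlows = errors.New("too many active flows")

type limiter struct {
	slots chan struct{} // nil means unlimited mode
}

func newLimiter(maxActiveFlows int) *limiter {
	if maxActiveFlows <= 0 {
		return &limiter{}
	}
	return &limiter{slots: make(chan struct{}, maxActiveFlows)}
}

// Acquire reserves a slot for a new flow, or rejects it when the limit is hit.
func (l *limiter) Acquire() error {
	if l.slots == nil {
		return nil
	}
	select {
	case l.slots <- struct{}{}:
		return nil
	default:
		return ErrTooManyActiveFlows
	}
}

// Release frees the slot when the flow ends.
func (l *limiter) Release() {
	if l.slots != nil {
		<-l.slots
	}
}
```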
## Summary
During the renewal of the certificates used to sign the macOS binaries and package,
we faced an issue with the new certificates requiring a specific certificate authority
that wasn't available in the keychain of the Mac agents. Therefore, this commit adds
an import step that will ensure that the Certificate Authority, usually fetched from
https://www.apple.com/certificateauthority/ is imported into the keychain to validate
the Developer Certificates.
Closes TUN-8900
## Summary
To improve our code, this commit adds a linter that will start
checking for issues from this commit onwards, also forcing
issues to be fixed in any file changed and not only in the changes
themselves. This should help improve our code quality over time.
Closes TUN-8866
Support rolling out the `support_datagram_v3` feature via remote feature rollout (DNS TXT record) with `dv3` key.
Consolidated some of the feature evaluation code into the features module to simplify the lookup of available features at runtime.
Reduced complexity for management logs feature lookup since it's a default feature.
Closes TUN-8807
## Summary
Ubuntu has released a new LTS version, and people are starting to use it. This makes
our installation recommendation, which automatically detects the release flavor, fail for
Noble users. Therefore, this commit adds this new version to our release packages.
It also adds an `any` package so that we can update our documentation to use it. Since
we are using the same binaries across all Debian flavors, there is no reason to keep
adding more release flavors when we can just take advantage of the `any` release flavor
like other repositories do.
When closing a session, there are two possible signals that will occur,
one from the outside, indicating that the session is idle and needs to
be closed, and the internal error condition that will be unblocked
with a net.ErrClosed when the connection underneath is closed. Both of
these routines write to the session's closeChan.
Once the reader for the closeChan reads one value, it will immediately
return. This means that the channel is a one-shot and one of the two
writers will get stuck unless the size of the channel is increased to
accommodate the second write to the channel.
With the channel size increased to two, the second writer (whichever
loses the race to write) will now be unblocked to end its goroutine
and return.
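In sketch form (names are illustrative):
```go
package example

// closeChan has capacity 2 so both writers (the external idle-close signal
// and the internal net.ErrClosed path) can record their reason without
// blocking, even though the reader only ever consumes one value.
type session struct {
	closeChan chan error
}

func newSession() *session {
	return &session{closeChan: make(chan error, 2)}
}

func (s *session) signalClose(reason error) {
	s.closeChan <- reason // never blocks: at most two writers race here
}
```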
Closes TUN-8817
## Summary
Adds a new CLI subcommand under the tunnel command: `diag`. This command automatically collects different data points, such as logs, metrics, network information, system information, tunnel state, and runtime information, and writes them to a single zip file.
Closes TUN-8724
## Summary
Change the system information collector and the respective HTTP handler so that they always return JSON.
Closes [TUN-8792](https://jira.cfdata.org/browse/TUN-8792)
## Summary
The initial implementation produced correct JSON, however it was not formatted, which would make the file harder for a user to read.
Closes TUN-8784
## Summary
* The host log collector now verifies whether the OS is Linux and has systemd; if so, it will use journalctl to get the logs
* On Linux systems Docker will write the output of the command logs to stderr, therefore the function that handles the execution of the process will copy the contents of both stdout and stderr; this also affects the k8s collector
Closes TUN-8783
## Summary
The default-flavour of cfsetup changed from bullseye to bookworm and in the latter the createrepo package was renamed to createrepo_c.
Closes TUN-8795
## Summary
Previous changes to Python's distribution broke the installation of Python packages in CI.
Python packages in cfsetup are now installed via a virtual environment. The dependency python3-venv was added as a builddep to allow the creation of the venv, and the Python package installation was moved to post-cache, resulting in the removal of
* anchor build_release_pre_cache
* anchor component_test_pre_cache
Closes TUN-8789
The previous capture of the sync.OnceValue was re-initialized for each
call to `Close`. This needed to be initialized during the creation of
the session to ensure that the sync.OnceValue reference was held for
the session's lifetime.
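A simplified sketch of the corrected shape (assumed names; sync.OnceValue requires Go 1.21+):
```go
package example

import (
	"net"
	"sync"
)

type session struct {
	conn *net.UDPConn
	// closeOnce is captured once at construction, so every call to Close
	// shares the same sync.OnceValue for the session's lifetime.
	closeOnce func() error
}

func newSession(conn *net.UDPConn) *session {
	s := &session{conn: conn}
	s.closeOnce = sync.OnceValue(func() error {
		return s.conn.Close()
	})
	return s
}

// Close is safe to call multiple times; the socket is closed exactly once.
func (s *session) Close() error {
	return s.closeOnce()
}
```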
Closes TUN-8775
## Summary
Add a new job that writes to a file the results of all of the other tasks along with possible errors. This file is also added to the root of the diagnostic zip file.
Closes TUN-8768
## Summary
Adds two new jobs which will export the cli configuration and tunnel configuration in separate files. These files will also be added to the zipfile's root.
Closes TUN-8770
## Summary
The raw export format of traceroute is widely known and useful for debugging. This raw output is written to the zip file's root at the end of the diagnostic.
Closes TUN-8767
## Summary
The Windows code path has three bugs:
* the -4 and -6 options cannot be passed in the last position
* since some lines in the command output are not parsable, the collection fails to parse any output at all
* the timeout hop is not correctly parsed
This PR also guards the parsing code against empty domains.
Closes TUN-8762
## Summary
The diagnostic procedure needs to extract information available in the metrics server via HTTP calls.
These changes add to the diagnostic client the remaining endpoints.
Closes TUN-8727
A new ICMPResponder interface is introduced to provide different
implementations of how the ICMP flows should return to the QUIC
connection muxer.
Improves usages of netip.AddrPort to leverage the embedded zone
field for IPv6 addresses.
Closes TUN-8640
Implements the endpoint that retrieves the configuration of a running instance.
The configuration consists of a map of CLI flag to the provided value, along with the UID of the user that started the process.
## Summary
The new endpoint returns the current information to be used when calling the diagnostic procedure.
This also adds:
- add indexed connection info and method to extract active connections from connTracker
- add edge address to Event struct and conn tracker
- remove unnecessary event send
- add tunnel configuration handler
- adjust cmd and metrics to create diagnostic server
Closes TUN-8728
## Summary
This PR will add a new endpoint, "diag/system" to the metrics server that collects system information from different operating systems.
Closes TUN-8731
## Summary
Update how the metrics server binds to a listener by using a known set of ports whenever the default address is used, with a fallback to a random port in case all addresses are already in use. The default address changes at compile time in order to bind to a different default address when the final deliverable is a docker image.
Refactor ReadyServer tests.
Closes TUN-8737
Previously, during local flow migration the current connection context
was not part of the migration and would cause the flow to still be listening
on the connection context of the old connection (before the migration).
This meant that if a flow was migrated from connection 0 to
connection 1, and connection 0 goes away, the flow would be early
terminated incorrectly with the context lifetime of connection 0.
The new connection context is provided during migration of a flow
and will trigger the observe loop for the flow lifetime to be rebound
to this provided context.
Closes TUN-8748
To help reduce the volume of logs during the happy path of flow registration, there will only be one log message reported when a flow is completed.
There are additional fields added to all flow log messages:
1. `src`: local address
2. `dst`: origin address
3. `durationMS`: capturing the total duration of the flow in milliseconds
Additional logs were added to capture when a flow was migrated or when cloudflared sent off a registration response retry.
Closes TUN-8701
When a registration response from cloudflared gets lost on its way back to the edge, the edge service will retry and send another registration request. Since cloudflared has already bound the local UDP socket for the provided request id, we want to re-send the registration response.
There are three types of retries that the edge will send:
1. A retry from the same QUIC connection index; cloudflared will just respond back with a registration response and reset the idle timer for the session.
2. A retry from a different QUIC connection index; cloudflared will need to migrate the current session connection to this new QUIC connection and reset the idle timer for the session.
3. A retry to a different cloudflared connector; cloudflared will eventually time the session out since no further packets will arrive to the session at the original connector.
Closes TUN-8709
## Summary
The initial purpose of this PR was to bump the base image from buster to bookworm; however, these tests are no longer exercised, hence the removal.
Closes VULN-66059
The datagram muxer wraps a QUIC connection's datagram read-writer operations to unmarshal datagrams from the connection to the origin with the session manager. Incoming datagram session registration operations will create new UDP sockets for sessions to proxy UDP packets between the edge and the origin. The muxer is also responsible for marshalling UDP packets and operations into datagrams for communication over the QUIC connection towards the edge.
Closes TUN-8700
New session manager leverages similar functionality that was previously
provided with datagram v2, with the distinct difference that the sessions
are registered via QUIC Datagrams and unregistered via timeouts only; the
sessions will no longer attempt to unregister sessions remotely with the
edge service.
The Session Manager is shared across all QUIC connections that cloudflared
uses to connect to the edge (typically 4). This will help cloudflared be
able to monitor all sessions across the connections and help correlate
in the future if sessions migrate across connections.
The UDP payload size is still limited to 1280 bytes across all OSes. Any
UDP packet that provides a payload size of greater than 1280 will cause
cloudflared to report (as it currently does) a log error and drop the packet.
Closes TUN-8667
The current supervisor serves the QUIC connection by performing all of the following in one method:
1. Dial QUIC edge connection
2. Initialize datagram muxer for UDP sessions and ICMP
3. Wrap all together in a single struct to serve the process loops
In an effort to better support modularity, each of these steps was broken out into its own separate method that the supervisor composes together to create the TunnelConnection and run its `Serve` method.
This also provides us with the capability to better interchange the functionality supported by the datagram session manager in the future with a new mechanism.
Closes TUN-8661
For macOS, we want to set the DF bit for the UDP packets used by the QUIC
connection; to achieve this, you need to explicitly set the network
to either "udp4" or "udp6". When determining which network type to pick
we need to use the chosen edge IP address so that it aligns with the local
IP family of the interface we will use. This means we want cloudflared to bind
to local interfaces for a random port, so we provide a zero IP and 0 port
number (ex. 0.0.0.0:0). However, instead of providing the zero IP, we
can leave the value as nil and let the kernel decide which interface and
pick a random port as defined by the target edge IP family.
This was previously broken for IPv6-only edge connectivity on macOS and
all other operating systems should be unaffected because the network type
was left as default "udp" which will rely on the provided local or remote
IP for selection.
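The gist of the selection, as a sketch (not the exact cloudflared dial path):
```go
package example

import (
	"net"
	"net/netip"
)

// listenForEdge picks "udp4" or "udp6" based on the chosen edge address and
// leaves the local address nil, letting the kernel pick the interface and a
// random port of the matching IP family (which allows setting the DF bit on
// macOS).
func listenForEdge(edge netip.AddrPort) (*net.UDPConn, error) {
	network := "udp4"
	if edge.Addr().Is6() {
		network = "udp6"
	}
	return net.ListenUDP(network, nil)
}
```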
Closes TUN-8688
Some more legacy h2mux code to be cleaned up and moved out of the way.
The h2mux.Header used in the serialization of http2 proxied headers is moved to the connection module. Additionally, the booleanfuse structure is also moved to supervisor as it is also needed there. Both of these structures could be evaluated later for removal or updates; however, the intent of the proposed changes here is to remove the dependencies on the h2mux code ahead of its removal.
Approved-by: Chung-Ting Huang <chungting@cloudflare.com>
Approved-by: Luis Neto <lneto@cloudflare.com>
Approved-by: Gonçalo Garcia <ggarcia@cloudflare.com>
MR: https://gitlab.cfdata.org/cloudflare/tun/cloudflared/-/merge_requests/1576
Whenever cloudflared receives a SIGTERM or SIGINT it goes into graceful shutdown mode, which unregisters the connection and closes the control stream. Unregistering makes it so we no longer receive any new requests and makes the edge close the connection, allowing in-flight requests to finish (within a 3 minute period).
This was working fine for http2 connections, but the quic proxy was cancelling the context as soon as the control stream ended, forcing the process to stop immediately.
This commit changes the behavior so that we wait the full grace period before cancelling the request
Updating Semgrep.yml file - Semgrep is a tool that will be used to scan Cloudflare's public repos for Supply chain, code and secrets. This work is part of Application & Product Security team's initiative to onboard Semgrep onto all of Cloudflare's public repos.
In case of any questions, please reach out to "Hrushikesh Deshpande" on cf internal chat.
Delaying the auto-update check timer to start after one full round of
the provided frequency reduces the chance of upgrading immediately
after starting.
In the rare case that the updater downloads the same binary (validated via checksum)
we want to make sure that the updater does not attempt to upgrade and restart the cloudflared
process. The binaries are equivalent and this would provide no value.
However, we are covering this case because there was an errant deployment of cloudflared
that reported itself as an older version and was then stuck in an infinite loop
attempting to upgrade to the latest version, which didn't exist. Checking that the
binary is different ensures that the upgrade is only attempted, and cloudflared only
restarted, when there is actually a new version to run.
This change only affects cloudflared tunnels running with default settings or
`--no-autoupdate=false` which allows cloudflared to auto-update itself in-place. Most
distributions that handle package management at the operating system level are
not affected by this change.
Revert "TUN-8621: Fix cloudflared version in change notes."
Revert "PPIP-2310: Update quick tunnel disclaimer"
Revert "TUN-8621: Prevent QUIC connection from closing before grace period after unregistering"
Revert "TUN-8484: Print response when QuickTunnel can't be unmarshalled"
Revert "TUN-8592: Use metadata from the edge to determine if request body is empty for QUIC transport"
Whenever cloudflared receives a SIGTERM or SIGINT it goes into graceful shutdown mode, which unregisters the connection and closes the control stream. Unregistering makes it so we no longer receive any new requests and makes the edge close the connection, allowing in-flight requests to finish (within a 3 minute period).
This was working fine for http2 connections, but the quic proxy was cancelling the context as soon as the control stream ended, forcing the process to stop immediately.
This commit changes the behavior so that we wait the full grace period before cancelling the request
The rework consists of building and packaging the cloudflared binary based on the OS & ARCH of the system.
read TARGET_ARCH from export and exit if TARGET_ARCH is not set
- remove unused targets in Makefile
- order deps in cfsetup.yaml
- only build cloudflared not all linux targets
- rename stages to be more explicit
- adjust build deps of build-linux-release
- adjust build deps of build-linux-fips-release
- rename github_release_pkgs_pre_cache to build_release_pre_cache
- only build release artifacts within build-linux-release
- only build release artifacts within build-linux-fips-release
- remove github-release-macos
- remove github-release-windows
- adjust builddeps of test and test-fips
- create builddeps anchor for component-test and use it in component-test-fips
- remove wixl from build-linux-*
- rename release-pkgs-linux to r2-linux-release
- add github-release: artifacts upload and set release message
- clean build directory before build
- add step to package windows binaries
- refactor windows script
One of the TeamCity changes moves the artifacts to built_artifacts, hence there is no need to cp files from artifacts to built_artifacts
- create anchor for release builds
- create anchor for tests stages
- remove reprepro and createrepo as they are only called by release_pkgs.py
- refactor build script for macos to include arm64 build
- refactor Makefile to upload all the artifacts instead of issuing one by one
- update cfsetup due to 2.
- place build files in specific folders
- cleanup build directory before/after creating build artifacts
Recently python.org started blocking our requests. We've asked the Devtools team to upgrade the default python installation to 3.10 so that we can use it in our tests
cloudflared_udp_total_sessions was incorrectly a gauge even though it
represents the total since the cloudflared process started and will
only ever increase.
Additionally adds new ICMP metrics for requests and replies.
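For contrast, a counter versus a gauge with the Prometheus Go client (the gauge shown here is purely hypothetical):
```go
package example

import "github.com/prometheus/client_golang/prometheus"

var (
	// A counter fits cloudflared_udp_total_sessions: it only ever increases
	// for the lifetime of the process.
	totalUDPSessions = prometheus.NewCounter(prometheus.CounterOpts{
		Namespace: "cloudflared",
		Name:      "udp_total_sessions",
		Help:      "Total number of UDP sessions registered since the process started.",
	})
	// A gauge is reserved for values that can also go down, e.g. a
	// hypothetical count of currently active sessions.
	activeUDPSessions = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "cloudflared",
		Name:      "udp_active_sessions",
		Help:      "Number of currently active UDP sessions.",
	})
)

func init() {
	prometheus.MustRegister(totalUDPSessions, activeUDPSessions)
}
```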
Adds new suite of metrics to capture the following for capnp rpcs operations:
- Method calls
- Method call failures
- Method call latencies
Each of the operations is labeled by the handler that serves the method and
the method of operation invoked. Additionally, each of these are split
between if the operation was called by a client or served.
Since legacy tunnels have been removed for a while now, we can remove
many of the capnp rpc interfaces that are no longer leveraged by the
legacy tunnel registration and authentication mechanisms.
A clock structure was used to help support time travel in unit tests,
but it is a globally shared object and is likely unsafe to share
across tests. Reordering of the tests seemed to have intermittent
failures for TestWaitForBackoffFallback specifically on Windows
builds.
Adjusting this to be a shim inside the BackoffHandler struct should
resolve shared object overrides in unit testing.
Additionally, added the reset retries functionality to be inline with
the ResetNow function of the BackoffHandler to align better with
expected functionality of the method.
Removes unused reconnectCredentialManager.
To help support temporary errors that can occur in the capnp rpc
calls, a wrapper is introduced to inspect the error conditions and
allow for retrying within a short window.
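In spirit, the wrapper looks something like this (a sketch with assumed parameters, not the real implementation):
```go
package example

import (
	"context"
	"time"
)

// withRetries retries an RPC call a few times within a short window when the
// error looks temporary, and fails fast otherwise.
func withRetries(ctx context.Context, attempts int, wait time.Duration,
	call func(context.Context) error, isTemporary func(error) bool) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = call(ctx); err == nil || !isTemporary(err) {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(wait):
		}
	}
	return err
}
```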
Combines the tunnelrpc and quic/schema capnp files into the same module.
To help reduce future issues with capnp id generation, capnpids are
provided in the capnp files from the existing capnp struct ids generated
in the go files.
Reduces the overall interface of the Capnp methods to the rest of
the code by providing an interface that will handle the quic protocol
selection.
Introduces a new `rpc-timeout` config that will allow all of the
SessionManager and ConfigurationManager RPC requests to have a timeout.
The timeout for these values is set to 5 seconds, as none of these operations
for the managers should take a long time to complete.
Removed the RPC-specific logger as it never provided good debugging value
as the RPC method names were not visible in the logs.
If cloudflared was unable to register the UDP session with the
edge, the socket would be left open to be eventually closed by the
OS, or garbage collected by the runtime. Considering that either of
these closes happened only after a significant delay, it was causing
cloudflared to hold open file descriptors longer than usual if continuously
unable to register sessions.
## Summary
We discovered that we were being impacted by a bug in quic-go,
that could create deadlocks and not close connections.
This commit bumps quic-go to the version that contains the fix
to prevent that from happening.
Before this commit the commands that listed tunnels and tunnel routes would be limited to 1000 results by the server.
Now, the commands will call the endpoints until the result set is exhausted. This can take a long time if there are
thousands of pages available, since each request is executed synchronously.
From a user's perspective, nothing changes.
## Summary
In order to properly monitor what is happening with the new write timeouts that we introduced
in TUN-8244, we need proper logging. Previously we were logging write timeouts when the safe
stream was being closed, which didn't make sense because it was misleading, so this commit
prevents that by adding a flag that lets us know whether we are closing the stream or not.
## Summary
To avoid overly verbose logs we need to only log when an
actual issue occurred. Therefore, we will be skipping any error
logging if the write timeout is caused by no network activity
which just means that nothing is being sent through the stream.
This commit makes the remote diagnostics enabled by default, which is
a useful feature when debugging cloudflared issues without manual intervention from users.
Users can still opt-out by disabling the feature flag.
Propagates the logger context into further locations to help provide more context for certain errors. For instance, upstream and downstream copying errors will properly have the assigned flow id attached and destination address.
## Summary
To prevent bad eyeballs and servers from being able to exhaust the QUIC
control flows, we are adding the possibility of having a timeout
for a write operation to be acknowledged. This will prevent hanging
connections from exhausting the quic control flows, creating a DDoS.
During the recent changes to the build pipeline, the implicit GOARM env variable changed from
6 to 7.
This means we need to explicitly define the GOARM to v6.
## Summary
We have decided to no longer push cloudflared to cloudflare homebrew, and use
the automation from homebrew-core to update cloudflared on their repository.
Therefore, the scripts for homebrew and makefile targets are no longer necessary.
Also update golang.org/x/net and google.golang.org/grpc to fix vulnerabilities,
although cloudflared is using them in a way that is not exposed to those risks
When embedding the tunnel command inside another CLI, it
became difficult to test shutdown behavior due to this leaking
tunnel. By using the command context, we're able to shutdown
gracefully.
This change guarantees that the command that reports rule matches when
testing local config uses 0-based indexing for the rule number.
This is to be consistent with the 0-based indexing on the log lines when
proxying requests.
## Summary
To determine which services were installed, cloudflared was using the command
`systemctl status`. This command gives an error if the service is installed
but isn't running, which makes the `uninstall services` command wrongly report
the services as not installed. Therefore, this commit adapts it to use the
`systemctl list-units` command combined with a grep to find which services are
installed and need to be removed.
## Summary
Previously the force flag in the tunnel delete command was only explicitly deleting the
connections of a tunnel. Therefore, we are changing it to use the cascade query parameter
supported by the API. That parameter will delegate to the server the deletion of the tunnel
dependencies implicitly instead of the client doing it explicitly. This means that not only
the connections will get deleted, but also the tunnel routes, ensuring that no dependencies
are left without a non-deleted tunnel.
This commit makes sure that cloudflared starts using the new API
endpoints for managing routes.
Additionally, the delete route operation still allows deleting by CIDR
and VNet but it is being marked as deprecated in favor of specifying the
route ID.
The goal of this change is to make it simpler for the user to delete
routes without specifying Vnet.
cloudflared access will no longer provide the full hostname with path from the
provided `--hostname` flag to the Host header field.
This addresses certain issues caught from a security fix in go
1.19.11 and 1.20.6 in the net/http URL parsing.
## Summary
This commit adds a new flag "no-update-service" to the `cloudflared service install` command.
Previously, when installing cloudflared as a linux service it would always get auto-updates, now with this new flag it is possible to disable the auto updates of the service.
This flag allows defining whether we want the cloudflared service to **perform auto updates or not**.
For **systemd this is done by removing the installation of the update service and timer**, for **sysv** this is done by **setting the cloudflared autoupdate flag**.
h2mux is already deprecated and will be eventually removed, in the meantime,
the compression tests cause flaky failures. Removing them and the brotli
code slims down our binaries and dependencies on CGO.
This commit implements the option to disable PMTU discovery for QUIC
connections.
QUIC finds the PMTU during startup by increasing Ping packet frames
until Ping responses are not received anymore, and it seems to stick
with that PMTU forever.
This is no problem if the PMTU doesn't change over time, but if it does
it may cause packet drops.
We add this hidden flag for debugging purposes in such situations as a
quick way to validate if problems that are being seen can be solved by
reducing the packet size to the edge.
Note however, that this option may impact UDP proxying since we expect
being able to send UDP packets of 1280 bytes over QUIC.
So, this option should not be used when tunnel is being used for UDP
proxying.
With the new flag --management-diagnostics (an opt-in flag)
cloudflared will be able to report additional diagnostic information
over the management.argotunnel.com request path.
Additions include the /metrics prometheus endpoint; which is already
bound to a local port via --metrics.
/debug/pprof/(goroutine|heap) are also provided to allow for remotely
retrieving heap information from a running cloudflared connector.
In the stream-based origin proxy flow (for example, SSH over Access), there is
a chance that we do not flush on http.ResponseWriter writes. This PR
guarantees that the response writer passed to proxy stream has a flusher
embedded after writes. This means we write much more often back to the
ResponseWriter and are not waiting. Note, this is only something we do
when proxyHTTP-ing to a StreamBasedOriginProxy because we do not want to
have situations where we are not sending information that is needed by
the other side (eyeball).
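A minimal sketch of a write-through flusher (not the exact wrapper used in cloudflared):
```go
package example

import "net/http"

// flushingWriter flushes after every write so stream-based origins (for
// example, SSH over Access) see bytes promptly instead of waiting on
// internal buffering.
type flushingWriter struct {
	http.ResponseWriter
}

func (w flushingWriter) Write(p []byte) (int, error) {
	n, err := w.ResponseWriter.Write(p)
	if f, ok := w.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
	return n, err
}
```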
Allows for debugging the payloads that are sent in client mode to
the ssh server. Required to be run with --log-directory to capture
logging output. Additionally has maximum limit that is provided with
the flag that will only capture the first N number of reads plus
writes through the WebSocket stream. These reads/writes are not directly
captured at the packet boundary so some reconstruction from the
log messages will be required.
Added User-Agent for all out-going cloudflared access
tcp requests in client mode.
Added check to not run terminal logging in cloudflared access tcp
client mode to not obstruct the stdin and stdout.
I deliberately kept this as an unregistertimeout because that was the
intent. In the future we could change this to a UDPConnConfig if we want
to pass multiple values here.
The idea of this PR is simply to add a configurable unregister UDP
timeout.
The lucas-clemente/quic-go package moved namespaces and our branch
went stale, this new fork provides support for the new quic-go repo
and applies the max datagram frame size change.
Until the max datagram frame size support gets upstreamed into quic-go,
this can be used to unblock go 1.20 support as the old
lucas-clemente/quic-go will not get go 1.20 support.
We need to set the default configuration to -1 to accommodate local
to remote configuration migrations that will set the configuration
version to 0. This makes sure to override the local configuration
with the new remote configuration when sent as it does a check against
the local current configuration version.
This PR adds ApplicationError as one of the "try_again" error types for
startfirstTunnel. This ensures that these kind of errors (which we've
seen occur when a tunnel gets rate-limited) are retried.
In addition to supporting sampling support for streaming logs,
cloudflared tail also supports this via `--sample 0.5` to sample 50%
of your log events.
To help accommodate web browser interactions with websockets, when a
streaming logs session is requested for the same actor while already
serving a session for that user in a separate request, the original
request will be closed and the new request start streaming logs
instead. This should help with rogue sessions holding on for too long
with no client on the other side (before idle timeout or connection
close).
It might make sense for users to sometimes name their cloudflared
connectors to make identification easier than relying on hostnames that
TUN-7360 provides. This PR provides a new --label option to cloudflared
tunnel that a user could provide to give custom names to their
connectors.
Before this change, the only sure fire way to make sure you had a valid
Access token was to run `cloudflared access login <your domain>`. That
was because that command would actually make a preflight request to ensure
that the edge considered that token valid. The most common reasons a token
was no longer valid was expiration and revocation. Expiration is easy to
check client side, but revocation can only be checked at the edge.
This change adds the same flow that cfd access login did to the curl command.
It will preflight the request with the token and ensure that the edge thinks
it's valid before making the real request.
With the management tunnels work, we allow calls to our edge service
using an access JWT provided by Tunnelstore. Given a connector ID,
this request is then proxied to the appropriate Cloudflare Tunnel.
This PR takes advantage of this flow and adds a new host_details
endpoint. Calls to this endpoint will result in cloudflared gathering
some details about the host: hostname (os.hostname()) and ip address
(localAddr in a dial).
Note that the mini spec lists 4 alternatives and this picks alternative
3 because:
1. Ease of implementation: This is quick and non-intrusive to any of our
code path. We expect to change how connection tracking works and
regardless of the direction we take, it may be easy to keep, morph
or throw this away.
2. The cloudflared part of this round trip takes some time with a
hostname call and a dial. But note that this is off the critical path
and not an API that will be exercised often.
cloudflared tail will now fetch the management token by making
a request to the Cloudflare API using the cert.pem (acquired from
cloudflared login).
Refactored some of the credentials code into its own package to
allow for easier use between subcommands outside of
`cloudflared tunnel`.
This PR fixes some long standing bugs in the windows update
paths. We previously did not surface the errors at all leading to
this function failing silently.
This PR:
1. Now returns the ExitError if the bat run for update fails.
2. Fixes the errors surfaced by that return:
a. The batch file doesn't play well with spaces. This is fixed by
using PROGRA~1/2 which are aliases windows uses.
b. The existing script also seemed to be irregular about where batch
files were put and looked for. This is also fixed in this script.
Sends a ping every 15 seconds to keep the session alive even if no
protocol messages are being propagated. Additionally, sets a hard
timeout of 5 minutes when not actively streaming logs to drop the
connection.
By default, we want streaming logs to be able to stream debug logs
from cloudflared without needing to update the remote cloudflared's
configuration. This disconnects the provided local log level sent
to console, file, etc. from the level that management tunnel will
utilize via requested filters.
The previous logic of var == x86 never fired for 386 arch windows
systems causing us to set ProgramFiles64Folder for the older windows
versions causing downloads to default to a different location. This
change fixes that.
Going forward, the only protocols supported will be QUIC and HTTP2,
defaulting to QUIC for "auto". Selecting h2mux protocol will be forcibly
upgraded to http2 internally.
Named Tunnels can exist without Ingress rules (They would default to
8080). Moreover, having this check also prevents warp tunnels from
starting since they do not need ingress rules.
This changes fixes a bug where cloudflared was not propagating errors
when proxying the body of an HTTP request.
In a situation where we already sent an HTTP status code, the eyeball would
see the request as successful when in fact it wasn't.
To solve this, we need to guarantee that we produce HTTP RST_STREAM
frames.
This change was applied to both http2 and quic transports.
This PR starts a separate server for proxy-dns if the configuration is
available. This fixes a problem on cloudflared not starting in proxy-dns
mode if the url flag (which isn't necessary for proxy-dns) is not
provided. Note: This is still being supported for legacy reasons and
since proxy-dns is not a tunnel and should not be part of the
cloudflared tunnel group of commands.
This PR does two things:
It changes how we fallback to a lower protocol: The current state
is to try connecting with a protocol. If it fails, fall back to a
lower protocol. And try connecting with that and so on. With this PR,
if we fail to connect with a protocol, we will try to connect to other
edge addresses first. Only if we fail to connect to those will we
fall back to a lower protocol.
It fixes a behaviour where if we fail to connect to an edge addr,
we keep re-trying the same address over and over again.
This PR now switches between edge addresses on subsequent connection attempts.
Note that through these switches, it still respects the backoff time.
(We are connecting to a different edge, but this helps to not bombard an edge
address with connect requests if a particular edge addresses stops working).
Before this change, when running the cloudflared tunnel command without any
subcommand and without any additional flag, we would spin up a
QuickTunnel.
This is a strange behaviour because we can easily create unwanted
tunnels, which results in a bad user experience.
This also has the side effect of putting more burden on our services
from tunnels that are probably just mistakes.
This commit fixes that by requiring the user to specify the url command
flag.
Running cloudflared tunnel alone will result in an error message
instead.
cloudflared shows possible directories for config files to be present if
it doesn't see one when starting up. For remotely configured files, it
may not be necessary to have a config file present. This PR looks to see
if a token flag was provided, and if yes, does not log this message.
This PR temporarily disables the xcrun notarize-app feature since this
is something we've historically had broken. What changed now is that
we set -e for the macOS scripts, so we need to disable this step to unblock
macOS builds.
We could spend time as part of https://jira.cfdata.org/browse/TUN-5789
to look into this.
We previously always preferred region2 as the first region to connect
to if both the regions cloudflared connects to have the same number of
available addresses. This change randomises that choice. The first
connection, conn index 0, can now connect to either region 1 or region 2.
More importantly, connections 0 and 2 (or 1 and 3) need not belong to the same
region.
This PR lets the script skip if the `security import`
command exits with a 1. This is okay because the script manually checks
this exit code to validate whether it's a duplicate error and, if it's not,
returns.
This PR changes protocol initialization of the other N connections to be
the same as the one we know the initial tunnel connected with. This way
we homogenize connections and avoid a mix of some connections being
QUIC-able and others not.
There's also an improvement to the connection registered log so we know
what protocol every individual connection connected with from the
cloudflared side.
This commit makes cloudflared use the API token provided during login
instead of the service key.
In addition, it eliminates some of the old formats, since those are
legacy and we only support cloudflared versions released in the last 6 months.
Origintunneld has been observed to continue sending reply packets to the first incoming connection it received, even if a newer connection is observed to be sending the requests.
OTD uses the funnel library from cloudflared, which is why the changes are here.
In theory, cloudflared has the same type of bug, where a ping session switching between quic connections will continue sending replies to the first connection. This bug has not been tested or confirmed though, but this PR will fix it if it exists.
This PR is made using the suggestion from #574. The benefit of this config is that it works on both Windows and Linux (tested), as well as in VSCode, which normally can't be done with the currently generated ssh config (refers to #734).
cloudflared's Makefile uses `shell go env GOOS` to determine the
LOCAL_OS regardless of it being provided. We therefore need pinned_go as
a dependency to run docker-generate-versions.
Remove send and return methods from the Funnel interface. Users of Funnel can provide their own send and return methods without a wrapper to comply with the interface.
Move packet router to ingress package to avoid circular dependency
Printing `seconds` is superfluous since time.Duration already adds the `s` suffix.
An invalid log message would be
```
Retrying connection in up to 1s seconds
```
Co-authored-by: João Oliveirinha <joliveirinha@cloudflare.com>
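For illustration, dropping the literal suffix is enough, since `time.Duration` formats its own unit:
```
package main

import (
	"fmt"
	"time"
)

func main() {
	d := time.Second
	// time.Duration already renders its unit, so the literal "seconds" duplicates it.
	fmt.Printf("Retrying connection in up to %s seconds\n", d) // "... up to 1s seconds"
	fmt.Printf("Retrying connection in up to %s\n", d)         // "... up to 1s"
}
```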
By running the github release message step after all of the binaries are built, the KV will be populated with all of the binary checksums to inject into the release message.
Once we introduced multi-arch docker images, pinning cloudflared
versions required suffixing -(arm64/amd64) to the cloudflared:version
image tag. This change should fix that by adding specific versions to
the cloudflare docker build cycle.
ProxyHTTP now processes middleware Handler before executing the request.
A chain of handlers is now executed and appropriate response status
codes are sent.
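A rough sketch of that shape (the names here are illustrative, not the exact types in the repo): a handler that returns a non-nil response short-circuits proxying, and its status code is what gets sent back.
```
import (
	"context"
	"net/http"
)

// Handler is an illustrative middleware interface; the real one in the repo may differ.
type Handler interface {
	Handle(ctx context.Context, r *http.Request) (*http.Response, error)
}

// runMiddleware walks the chain before the request is proxied to the origin.
func runMiddleware(ctx context.Context, r *http.Request, chain []Handler) (*http.Response, error) {
	for _, h := range chain {
		resp, err := h.Handle(ctx, r)
		if err != nil {
			return nil, err
		}
		if resp != nil {
			return resp, nil // e.g. a 403 produced by a failed check
		}
	}
	return nil, nil // nothing intervened; continue to ProxyHTTP's normal path
}
```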
This adds a new verifier interface that can be attached to ingress.Rule.
This would act as a middleware layer that gets executed at the start of
proxy.ProxyHTTP.
A jwt validator implementation for this verifier is also provided. The
validator downloads the public key from the access teams endpoint and
uses it to verify the JWT sent to cloudflared with the audtag (clientID)
information provided in the config.
We take advantage of the JWTValidator middleware and attach it to an
ingress rule based on Access configurations. We attach the Validator
directly to the ingress rules because we want to take advantage of
caching and token revert/handling that comes with go-oidc.
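A hedged sketch of how such a validator can be wired up with go-oidc; the certs path, type names, and function names below are illustrative assumptions, not the repo's actual code:
```
import (
	"context"

	"github.com/coreos/go-oidc/v3/oidc"
)

type jwtValidator struct {
	verifier *oidc.IDTokenVerifier
}

// newJWTValidator wires go-oidc against the Access team's certs endpoint so key
// fetching and caching are handled for us; teamDomain and audTag come from config.
func newJWTValidator(ctx context.Context, teamDomain, audTag string) *jwtValidator {
	keySet := oidc.NewRemoteKeySet(ctx, teamDomain+"/cdn-cgi/access/certs") // assumed certs path
	verifier := oidc.NewVerifier(teamDomain, keySet, &oidc.Config{ClientID: audTag})
	return &jwtValidator{verifier: verifier}
}

// Verify checks the JWT that Access attached to the request.
func (v *jwtValidator) Verify(ctx context.Context, rawJWT string) error {
	_, err := v.verifier.Verify(ctx, rawJWT)
	return err
}
```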
A funnel is an abstraction for 1 source to many destinations.
As part of this refactoring, shared logic between Darwin and Linux is moved into icmp_posix
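As a rough illustration of the funnel idea only (the real interface lives in the repo and differs), one source fans out to whichever registered destination a payload belongs to:
```
import (
	"fmt"
	"io"
	"sync"
)

// funnel is an illustrative sketch: one source, many destinations.
type funnel struct {
	mu           sync.Mutex
	destinations map[string]io.Writer
}

func (f *funnel) register(id string, dst io.Writer) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.destinations == nil {
		f.destinations = make(map[string]io.Writer)
	}
	f.destinations[id] = dst
}

// sendTo forwards a payload read from the single source to one destination.
func (f *funnel) sendTo(id string, payload []byte) error {
	f.mu.Lock()
	dst, ok := f.destinations[id]
	f.mu.Unlock()
	if !ok {
		return fmt.Errorf("no destination registered for %q", id)
	}
	_, err := dst.Write(payload)
	return err
}
```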
I can only reproduce the flakiness (the hello world server still
responding when it should already be shut down) on Windows, both in
TeamCity and in my local VM. Locally, it only happens when the
machine is under high load.
Anyway, it's valid that the proxies take some time to shut down, since
they handle that via channels asynchronously with regard to the event
that updates the configuration.
Hence, nothing is wrong, as long as they eventually shut down, which the
test still verifies.
This test was failing on Windows. We did not catch it before because our
TeamCity Windows builds were ignoring failed unit tests: TUN-6727
- the fix is implementing WriteString for mockSSERespWriter
- the reason is that cfio.Copy was calling WriteString, and not the Write method,
thus not triggering the usage of the channel the test relies on to continue
- mockSSERespWriter was inheriting a valid implementation of WriteString
from ResponseRecorder, via the embedded mockHTTPRespWriter
- it is not clear why this only happened on Windows
- changed it to be a top-level test since it did not share any code
with other sub-tests in the same top-level test
Previously, allowing the reconnect signal to forcibly close the connection
caused a race condition on which error was returned by the errgroup
in the tunnel connection. Allowing the signal to return and provide
a context cancel to the connection provides a safer shutdown of the
tunnel for this test-only scenario.
In a previous commit, we fixed a bug where the client roundtrip code
could close the request body, which in fact would be the quic.Stream,
thus closing the write side.
The way that was fixed prevented the client roundtrip code from closing
the read side (the body) as well.
This fixes that by allowing close to only close the read side, which
guarantees that any subsequent read will fail with an error, or with EOF
if it occurred before the close.
This change seeks to push an arm64 built image to dockerhub for arm users to run. This should spin cloudflared on arm machines without the warning
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
cloudflared falls back aggressively to HTTP/2 protocol if a connection
attempt with QUIC failed. This was done to ensure that machines with UDP
egress disabled did not stop clients from connecting to the cloudflare
edge. This PR improves on that experience by having cloudflared remember
if a QUIC connection was successful which implies UDP egress works. In
this case, cloudflared does not fallback to HTTP/2 and keeps trying to
connect to the edge with QUIC.
This reverts commit d4d9a43dd7.
We revert this change because the value this configuration addition
brings is small (it only stops an explicit cyclic configuration versus
not accounting for local hosts and ip based cycles amongst other things)
whilst the potential inconvenience it may cause is high (for example,
someone had a cyclic configuration as an ingress rule that they weren't
even using).
This commit guarantees that the stream is only closed once we are finished
handling it. Without it, we were seeing closes being triggered
by the code that proxies to the origin, which resulted in failures
to actually send the status code of the proxy request downstream to the
eyeball.
This was then subsequently triggering unexpected retries to cloudflared
in situations such as cloudflared being unable to reach the origin.
It is currently possible to set cloudflared to proxy to the hostname
that traffic is ingressing from as an origin service. This change checks
for this configuration error and prompts a change.
This PR removes automatic assignees on github issues because it sends a
slightly wrong message about triaging. We will continue to triage issues
and find a more focussed method to nominate assignees.
For Google's managed prometheus, it seems they reserved certain
labels for their internal service regions/locations. This causes
customers to run into issues using our metrics since our
metric: `cloudflared_tunnel_server_locations` has a `location`
label. Renaming this to `edge_location` should unblock the
conflict and usage.
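Conceptually, the change is just the label name on the gauge; the other label names below are illustrative, not necessarily the exact ones in the repo:
```
import "github.com/prometheus/client_golang/prometheus"

var serverLocations = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "cloudflared_tunnel_server_locations",
		Help: "Edge location each tunnel connection is connected to",
	},
	// "location" collided with a label reserved by Google's managed Prometheus,
	// so the label is now "edge_location".
	[]string{"connection_id", "edge_location"},
)
```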
We now will have `armhf` based debs on our github pages
This will also sync to our R2 Release process allowing legacy rpi users to
eventually be able to apt-get install cloudflared.
This PR removes go-sumtype from cloudflared's build processes.
The value we see from analysing exhaustive match patterns in go is minimal (we can do this in code reviews) compared to using a tool that is unmaintained and now broken in Go 1.18.
We'd already been using the patched version here: https://github.com/sudarshan-reddy/go-sumtype because the original is broken for go tools > 1.16
For WARP routing the defaults for these new settings are 5 seconds for connect timeout and 30 seconds for keep-alive timeout. These values can be configured either remotely or locally. Local config lives under "warp-routing" section in config.yaml.
For websocket-based proxy, the defaults come from originConfig settings (either global or per-service) and use the same defaults as HTTP proxying.
The idle period is set to 5 seconds.
We now also ping every second since last activity.
This makes the quic.Connection less prone to being closed with
no network activity, since we send multiple pings per idle
period, and thus a single packet loss cannot cause the problem.
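A hedged sketch of how this maps onto quic-go's Config in a recent version of that library (field names are from quic-go; this is not necessarily the repo's exact code):
```
import (
	"time"

	"github.com/quic-go/quic-go"
)

func newQUICConfig() *quic.Config {
	return &quic.Config{
		MaxIdleTimeout:  5 * time.Second, // idle period described above
		KeepAlivePeriod: time.Second,     // ping every second since last activity
	}
}
```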
This PR provides a cloudflared.repo template that can simply then be
added to yum repos by running
```
sudo dnf config-manager --add-repo
```
removing the requirement for yum installers to handcraft this or run
echo commands.
This addresses https://security.snyk.io/vuln/SNYK-GOLANG-GOPKGINYAMLV3-2841557
by updating yaml v3 to latest version.
It also stops using yaml v2 directly (we were using both v2 and v3 mixed).
We still rely on yaml v2 indirectly, via urfave cli, though.
Note that the security vulnerability does not affect v2.
This PR mostly raises exceptions so we are aware if release deb or
release pkgs fail. It also makes release_version optional if backup pkgs
are not needed.
We now keep the gpg key inputs configurable. This PR imports base64
encoded gpg details into the build environment and uses this information
to sign the linux builds.
This PR extends release_pkgs.py to now also support uploading rpm based
assets to R2. The packages are not signed yet and will be done in a
subsequent PR.
This PR
- Packs the .rpm assets into relevant directories
- Calls createrepo on them to make them yum repo ready
- Uploads them to R2
The publish to brew core prints a URL with a PR that does the change
in github to brew core formula for cloudflared. It then tries to open
the browser, which obviously fails in CI.
So this adds a flag for it to skip opening the browser.
It's not clear how the PR will be opened; it seems like it must be
done by a human.
But at least this won't fail the build.
The way apt works is:
1. It looks at the release file based on the `deb` added to sources.list.
2. It uses this release file to find the relative location of Packages or Packages.gz
3. It uses the pool information from Packages to find the relative location of the .deb file and then downloads and installs it.
This PR seeks to take advantage of this information by simply arranging
the files in a way apt expects thereby eliminating the need for an
orchestrating endpoint.
This PR does the following:
1. Creates packages.gz, signed InRelease files for debs in
built_artifacts for configured debian releases.
2. Uploads them to Cloudflare R2.
3. Adds a Workers KV entry that talks about where these assets are
uploaded.
This commit adds the tunnel details to the RPC register connection response
so we can have access to some of the details associated with the tunnel
that only the edge knows.
Currently this is limited to knowing if the tunnel is remotely managed
or not. In the future we could extend this with more information.
The buffer size was big to support a compression feature that we don't
use anymore.
As such, we can now reduce this and be more efficient with memory usage.
Errors that are non-recoverable can lead to one of two things happening:
1. That connection lying dead and cloudflared not retrying to make that
connection.
2. cloudflared resolving to a different edge addr to retry connection.
We should subject these errors to a backoff as well. This introduces
a backoff for 1., where we are going to let the connection
become stale anyway, and for 2., where we are about to try a different edge
addr.
Setting the body to nil was causing cloudflared to crash with
a SIGSEGV in the odd case where the hostname accessed maps to a
TCP origin (e.g. SSH/RDP/...) but the eyeball sends a plain HTTP
request that does not go through cloudflared access (and is thus not wrapped
in websocket as it should be).
Instead, QUIC transport now sets http.noBody in that condition, which
deals with the situation gracefully.
Ingress validate currently validates config from a file. This PR adds a
new --json/-j flag to provide the ingress/config data as a plaintext
command line argument.
Right now the proxying of cloudflared -> unix socket is a bit of
a no man's land, where we do not have the ability to specify the
actual protocol since the user just configures "unix:/path/"
In practice, we proxy using an HTTP client.
But it could be that the origin expects HTTP or HTTPS. However,
we have no way of knowing.
So how are we proxying to it? We are configuring the http.Request
in ways that depend on the transport and edge implementation, and
it so happens that for h2mux and http2 we are using a http.Request
whose Scheme is HTTP, whereas for quic we are generating a http.Request
whose scheme is HTTPS.
Since it does not make sense to have different behaviours depending
on the transport, we are making a (hopefully temporary) change so
that proxied requests to Unix sockets are systematically HTTP.
In practice we should do https://github.com/cloudflare/cloudflared/issues/502
to make this configurable.
Until this PR, we were naively closing the quic.Stream whenever
the callstack for handling the request (HTTP or TCP) finished.
However, our proxy handler may still be reading or writing from
the quic.Stream at that point, because the call stack returns when
either side finishes, but not necessarily both.
This is a problem for the quic-go library because quic.Stream#Close
cannot be called concurrently with quic.Stream#Write.
Furthermore, we also noticed that quic.Stream#Close does nothing to
the receiving stream (since, underneath, quic.Stream has 2 streams,
1 for each direction), thus leaking memory, as explained in:
https://github.com/lucas-clemente/quic-go/issues/3322
This PR addresses both problems by wrapping the quic.Stream that
is passed down to the proxying logic and handling all these concerns there.
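Roughly, the wrapper serializes Close against Write and also cancels the receive side. This is a sketch of the idea, using the quic-go import path of that era; the type name and details are illustrative:
```
import (
	"sync"

	"github.com/lucas-clemente/quic-go"
)

type safeStream struct {
	quic.Stream
	mu sync.Mutex
}

func (s *safeStream) Write(p []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.Stream.Write(p)
}

func (s *safeStream) Close() error {
	// Close only ends the send direction, so explicitly release the receive side too.
	s.Stream.CancelRead(0)
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.Stream.Close()
}
```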
We have made 2 changes in the past that caused an unexpected edge case:
1. when faced with QUIC "no network activity", give up re-attempts and fall back
2. when a protocol is chosen explicitly, rather than using auto (the default), do not fallback
The reasoning for 1. was to fallback quickly in situations where the user may not
have chosen QUIC, and simply got it because we auto-chose it (with the TXT DNS record),
but the users' environment does not allow egress via UDP.
The reasoning for 2. was to avoid falling back if the user explicitly chooses a
protocol. E.g., if the user chooses QUIC, she may want to do UDP proxying, so if
we fallback to HTTP2 protocol that will be unexpected since it does not support
UDP (and same applies for HTTP2 falling back to h2mux and TCP proxying).
This PR fixes the edge case that happens when both those changes 1. and 2. are
put together: when faced with a QUIC "no network activity", we should only try
to fallback if there is a possible fallback. Otherwise, we should exhaust the
retries as normal.
This parameterizes relevant component tests by transport protocol
where applicable.
The motivation is to have coverage for (graceful or not) shutdown
that was broken in QUIC. That logic (as well as reconnect) is
different depending on the transport, so we should have it
parameterized. In fact, the test is failing for QUIC (and passing
for others) right now, which is expected until we roll out some
edge fixes for QUIC. So we could have caught this earlier on.
This adds various bug fixes when investigating why QUIC transports were
not being unregistered when they should (and only when the graceful shutdown
started).
Most of these bug fixes are making the QUIC transport implementation closer
to its HTTP2 counterpart:
- ServeControlStream is now a blocking function (it's up to the transport to handle that)
- QUIC transport then handles the control plane as part of its Serve, thus waiting for it on shutdown
- QUIC transport now returns "non recoverable" for connections with similar semantics to HTTP2 and H2mux
- QUIC transport no longer has a loop around its Serve logic that retries connections on its own (that logic is upstream)
This does a few fixes to make sure that the QUICConnection returns from
Serve when the context is cancelled.
QUIC transport now behaves like other transports: closes as soon as there
is no traffic, or at most by grace-period. Note that we do not wait for
UDP traffic since that's connectionless by design.
This way we will force the adoption of FIPS compliant cloudflared without having
to handle the transition for systems that already have it installed (since we
were previously using new artifacts with fips suffix) nor without having to
segregate the resulting binary name (since we were always generating a binary
just called cloudflared from the unpacked debian archive to avoid having to change
any automation that assumes the binary to be called just that).
This changes existing Makefile targets to make it obvious that they are
used to publish debian packages for internal Cloudflare usage. Those are
now FIPS compliant, with no alternative provided. This only affects amd64
builds (and we only publish internally for Linux).
This new Makefile target is used by all internal builds (including nightly
that is used for e2e tests).
Note that this Makefile target renames the artifact to be just `cloudflared`
so that this is used "as is" internally, without expecting people to opt-in
to the new `cloudflared-fips` package (as we are giving them no alternative).
This is a cherry-pick of 157f5d1412
followed by build/CI changes so that amd64/linux FIPS compliance is
provided by new/separate binaries/artifacts/packages.
The reasoning being that FIPS compliance places excessive requirements
in the encryption algorithms used for regular users that do not care
about that. This can cause cloudflared to reject HTTPS origins that
would otherwise be accepted without FIPS checks.
This way, by having separate binaries, existing ones remain as they
were, and only FIPS-needy users will opt-in to the new FIPS binaries.
This reverts commit 157f5d1412.
FIPS compliant binaries (for linux/amd64) are causing HTTPS origins to not
be reachable by cloudflared in certain cases (e.g. with Let's Encrypt certificates).
Origins that are not HTTPS for cloudflared are not affected.
Creates an abstraction over UDP Conn for origin "connection" which can
be useful for future support of complex protocols that may require
changing ports during protocol negotiation (e.g. SIP, TFTP).
In addition, it removes a dependency from ingress on connection package.
When building the docker image, this `-dev` suffix is being added to the
cloudflared binary version.
The reason must be that there's some file, which is tracked by git, and
that is modified during that build.
It's not clear which file it is. But, at the same time, it's not clear what
advantage this `-dev` suffix brings. So we're simply removing it so that
we generate proper versions (so that our tracking/observability can correctly
aggregate these values).
- Refactors some h2mux specific logic from connection/header.go to connection/h2mux_header.go
- Do the same for the unit tests
- Add a non-h2mux "is control response header" function (we don't need one for the request flow)
- In that new function, do not consider "content-length" as a control header
- Use that function in the non-h2mux flow for response (and it will be used also in origintunneld)
When forwarding a UA-less request to the origin server, cloudflared inserts the default golang http User-Agent; this is unexpected and can lead to issues.
This simple fix forces the UA to the empty string when it isn't originally provided.
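In Go, the transport's default is only suppressed when the header is explicitly present and empty, so the fix amounts to something like this sketch (the helper name is illustrative):
```
import "net/http"

func stripDefaultUserAgent(req *http.Request) {
	if _, present := req.Header["User-Agent"]; !present {
		// An explicitly empty value stops Go's transport from sending its default
		// "Go-http-client/1.1" User-Agent to the origin.
		req.Header.Set("User-Agent", "")
	}
}
```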
Connections from cloudflared to Cloudflare edge are long lived and may
break over time. That is expected for many reasons (ranging from network
conditions to operations within Cloudflare edge). Hence, logging that as
Error feels too strong and leads to users being concerned that something
is failing when it is actually expected.
With this change, we wrap logging about connection issues to be aware
of the tunnel state:
- if the tunnel has no connections active, we log as error
- otherwise we log as warning
* `max-fetch-size` can now be set up in the config YAML
* we no longer pass that to filter commands that filter by name
* flag changed to signed int since altsrc does not support UInt flags
* we now look up each non UUID (to convert it to a UUID) when needed, separately
This can be useful/important for accounts with many tunnels that exceed
the 1000 default page size.
There are various tunnel subcommands that use listing underneath, so we make
that flag a tunnel one, rather than adding it to each subcommand.
The default max streams value of 100 is rather small under high load, where
QUIC streams are opened faster than the connection can create new ones. The
higher value allows for more throughput.
Go's client defaults to chunked encoding after a 200ms delay if the following cases are true:
* the request body blocks
* the content length is not set (or set to -1)
* the method doesn't usually have a body (GET, HEAD, DELETE, ...)
* there is no transfer-encoding=chunked already set.
So for non-websocket requests, if transfer-encoding isn't chunked and the content length is 0, we don't set a request body.
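A sketch of the resulting condition (the function name is illustrative):
```
import "net/http"

func normalizeRequestBody(req *http.Request, isWebsocket bool) {
	if !isWebsocket && req.ContentLength == 0 && req.Header.Get("Transfer-Encoding") != "chunked" {
		// With no bytes to send, an opaque non-nil body would trip Go's 200ms
		// chunked-encoding heuristic; drop it instead.
		req.Body = http.NoBody
	}
}
```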
ServeControlStream accidentally became blocking in the last quic
change, causing the stream to not be returned until a SIGTERM was received.
This change makes ServeControlStream non-blocking for QUIC streams.
This maximum grace period will be honored by Cloudflare edge such that
either side will close the connection after unregistration at most
by this time (3min as of this commit):
- If the connection is unused, it is already closed as soon as possible.
- If the connection is still used, it is closed on the cloudflared configured grace-period.
Even if cloudflared does not close the connection by the grace-period time,
the edge will do so.
- cfsetup now has a build command `github-release-pkgs` to release linux
and msi packages to github.
- github_message.py now has an option to upload all assets in a provided
directory.
- Vendored the capnproto library to cloudflared.
- Added capnproto schema defining application protocol.
- Added Pogs and application level read write of the protocol.
This change extracts the need for EstablishConnection to know about a
request's entire context. It also removes the concern of populating the
http.Response from EstablishConnection's responsibilities.
Reuses HTTPProxy's Roundtrip method to directly proxy websocket traffic from
eyeball clients (determined by the websocket type and the ingress not being
connection oriented, i.e. not ssh or smb, for example).
time.Tick() does not get garbage collected because the channel
underneath never gets deleted and the underlying Ticker can never be
recovered by the garbage collector. We replace this with NewTicker() to
avoid this.
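In short, the pattern moves from the leak-prone `time.Tick` to a ticker we can stop (a generic sketch, not the repo's exact code):
```
import "time"

func poll(done <-chan struct{}, work func()) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop() // unlike time.Tick, this releases the underlying ticker
	for {
		select {
		case <-ticker.C:
			work()
		case <-done:
			return
		}
	}
}
```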
To help understand why the logged ingress rule is being selected.
In addition, combine the "Request Headers..." and "Serving with ingress..." logs
into this updated log.
Co-authored-by: Silver <sssilver@users.noreply.github.com>
All header transformation code from h2mux has been consolidated in the connection package since it's used by both h2mux and http2 logic.
Exported headers used by proxying between edge and cloudflared so they can be shared by the tunnel service on the edge.
Moved access-related headers to corresponding packages that have the code that sets/uses these headers.
Removed tunnel hostname tracking from h2mux since it wasn't used by anything. We will continue to set the tunnel hostname header from the edge for backward compatibility, but it's no longer used by cloudflared.
Move bastion-related logic into carrier package, untangled dependencies between carrier, origin, and websocket packages.
This change has two parts:
1. Update to a newer version of the urfave/cli fork that correctly sets flag values along the context hierarchy while respecting the config file override behavior of the most specific instance of the flag.
2. Redefine --credentials-file flag so that create and delete subcommand don't use value from the config file.
To use cloudflared as a socks proxy, add an ingress on the server
side with your desired rules. Rules are matched in the order they
are added. If there are no rules, it is an implicit allow. If
there are rules, but no rule matches, the connection is denied.
ingress:
- hostname: socks.example.com
service: socks-proxy
originRequest:
ipRules:
- prefix: 1.1.1.1/24
ports: [80, 443]
allow: true
- prefix: 0.0.0.0/0
allow: false
On the client, run using tcp mode:
cloudflared access tcp --hostname socks.example.com --url 127.0.0.1:8080
Set your socks proxy as 127.0.0.1:8080 and you will now be proxying
all connections to the remote machine.
* Allow partial reads from a GorillaConn; add SetDeadline (from net.Conn)
The current implementation of GorillaConn will drop data if the
websocket frame isn't read 100%. For example, if a websocket frame is
size=3, and Read() is called with a []byte of len=1, the 2 other bytes
in the frame are lost forever.
This is currently masked by the fact that this is used primarily in
io.Copy to another socket (in ingress.Stream) - as long as the read buffer
used by io.Copy is big enough (it is 32*1024, so in theory we could see
this today?) then data is copied over to the other socket.
The client then can do partial reads just fine as the kernel will take
care of the buffer from here on out.
I hit this by trying to create my own tunnel and avoiding
ingress.Stream, but I think this could be a real bug today if a
websocket frame bigger than 32*1024 was received, although it is also
possible that we are lucky and the upstream (whose buffer size I haven't
checked) always uses a smaller buffer than that.
The test I added hangs before my change, succeeds after.
Also add SetDeadline so that GorillaConn fully implements net.Conn
* Comment formatting; fast path
* Avoid intermediate buffer for first len(p) bytes; import order
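The gist of the partial-read fix, sketched below (the real GorillaConn has more to it, and the type name here is illustrative): keep whatever the caller didn't consume and hand it back on the next Read instead of dropping it.
```
import "github.com/gorilla/websocket"

type gorillaReader struct {
	conn     *websocket.Conn
	leftover []byte // bytes of the current frame not yet handed to the caller
}

func (g *gorillaReader) Read(p []byte) (int, error) {
	if len(g.leftover) == 0 {
		_, msg, err := g.conn.ReadMessage()
		if err != nil {
			return 0, err
		}
		g.leftover = msg
	}
	// Hand back at most len(p) bytes and keep the rest for the next Read call,
	// instead of silently losing the tail of the frame.
	n := copy(p, g.leftover)
	g.leftover = g.leftover[n:]
	return n, nil
}
```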
- Move packages the provide generic functionality (such as config) from `cmd` subtree to top level.
- Remove all dependencies on `cmd` subtree from top level packages.
- Consolidate all code dealing with token generation and transfer to a single cohesive package.
* Issue-285: Detect TARGET_ARCH correctly for FreeBSD amd64 (uname -m returns amd64 not x86_64)
See: https://github.com/cloudflare/cloudflared/issues/285
* Add note not to `go get github.com/BurntSushi/go-sumtype` in build directory as this will cause vendor issues
Co-authored-by: PaulC <paulc@>
added ingress.DefaultStreamHandler and a basic test for tcp stream proxy
moved websocket.Stream to ingress
cloudflared no longer picks tcpstream host from header
- extracted ResponseWriter from proxyConnection
- added bastion tests over websocket
- removed HTTPResp()
- added some docstrings
- Renamed some ingress clients as proxies
- renamed instances of client to proxy in connection and origin
- Stream no longer takes a context and logger.Service
* Add max upstream connections dns-proxy option
Allows defining a limit to the number of connections that can be
established with the upstream DNS host.
If left unset, there may be situations where connections fail to
establish, which causes the Transport to create an influx of connections
causing upstream to throttle our requests and triggering a runaway
effect resulting in high CPU usage. See https://github.com/cloudflare/cloudflared/issues/91
* Code review with proposed changes
* Add max upstream connections flag to tunnel flags
* Reduce DNS proxy max upstream connections default value
Reduce the default value of maximum upstream connections on the DNS
proxy to guarantee it works on single-core and other low-end hardware.
Further testing could allow for a safe increase of this value.
* Update dns-proxy flag name
Also remove `MaxUpstreamConnsFlag` const as it's no longer referenced in more than one place and to make things more consistent with how the other flags are referenced.
Co-authored-by: Adam Chalmers <achalmers@cloudflare.com>
Jitter is important to avoid every cloudflared in the world trying to
reconnect at t=1, 2, 4, etc. That could overwhelm the backend. But
if each cloudflared randomly waits for up to 2, then up to 4, then up
to 8 etc, then the retries get spread out evenly across time.
On average, wait times should be the same (e.g. instead of waiting for
exactly 1 second, cloudflared will wait between 0 and 2 seconds).
This is the "Full Jitter" algorithm from https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
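A sketch of that Full Jitter computation (illustrative only, not the repo's exact retry code):
```
import (
	"math/rand"
	"time"
)

// fullJitter returns a random wait in [0, min(maxWait, base*2^attempt)).
func fullJitter(attempt uint, base, maxWait time.Duration) time.Duration {
	backoff := base << attempt
	if backoff <= 0 || backoff > maxWait {
		backoff = maxWait
	}
	if backoff <= 0 {
		return 0
	}
	return time.Duration(rand.Int63n(int64(backoff)))
}
```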
Also changed the socks test code so that it binds to localhost, so that
we don't get popups saying "would you like to allow socks.test to use
the network"
Unless I'm mistaken, when there is no existing token for an app, the `login` command needs to be run to obtain a token (not the `token` command, which itself doesn't generate a token).
- Don't rely on edge to close connection on graceful shutdown in h2mux, start muxer shutdown from cloudflared.
- Don't retry failed connections after graceful shutdown has started.
- After graceful shutdown channel is closed we stop waiting for retry timer and don't try to restart tunnel loop.
- Use readonly channel for graceful shutdown in functions that only consume the signal
Classic tunnels flow was triggering an event for RegisteringTunnel for
every connection that was about to be established, and then a Connected
event for every connection established.
However, the RegisteringTunnel event had no connection ID, always
causing it to unset/disconnect the 0th connection, making the /ready
endpoint report incorrect numbers for classic tunnels.
This change is focused on fixing rotating loggers in Windows
where it was failing due to Windows file semantics disallowing
the rotation while that file was still open (because we
had multiple lumberjacks pointing to the same file).
This is fixed by ensuring the initialization happens only once.
This addresses a bug where logging would not be output when
cloudflared was run as a Windows Service.
That was happening because Windows Services have no stderr/out,
and the ConsoleWriter log was failing inside zerolog, which would
then not proceed to the next logger (the file logger).
We now overcome that by using our own multi writer that is resilient
to errors.
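The idea, sketched: a multi writer that does not abort on the first failing destination (unlike io.MultiWriter); the type name mirrors the description above but the exact implementation in the repo may differ.
```
import "io"

type resilientMultiWriter struct {
	writers []io.Writer
}

func (m resilientMultiWriter) Write(p []byte) (int, error) {
	for _, w := range m.writers {
		// A destination that errors (e.g. missing stderr under a Windows Service)
		// must not prevent the remaining writers, such as the file logger, from logging.
		_, _ = w.Write(p)
	}
	return len(p), nil
}
```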
The following UInt flags:
* Uint64 - heartbeat-count, compression-quality
* Uint - retries, port, proxy-port
were not being correctly picked from the configuration YAML
since the multi origin refactor
This is due to a limitation of the urfave library, which we
overcome for now by handling those as Int flags.
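In practice the workaround reads the flag as an Int and converts at the point of use, roughly like this sketch (using the urfave/cli v2 context; the actual code uses a fork of that library):
```
import "github.com/urfave/cli/v2"

func retriesFrom(c *cli.Context) uint {
	// Defined as an Int flag so altsrc can populate it from the YAML,
	// then converted where a uint is needed.
	return uint(c.Int("retries"))
}
```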
Not doing so was causing cloudflared to become stuck after
some time. This would happen because the Observer pattern
was sending events to the UI channel (that has 16 slots) but
no one was consuming those when the UI is not enabled (which
is the default case).
Hence, events (such as connection disconnect / reconnect) would
cause that buffer to be full and cause cloudflared to become
apparently stuck, in the sense that the connections would not be
reconnected.
Tunnel delete is successful even if we don't find the credentials
file in the user's filesystem. We no longer "error" indicating this
is a problem. This fix also enables chaining of the delete command
by removing a premature return when the credentials file is not found.
dpkg does not support bzip2 compression, so fails to unpack and install
the built package. By omitting the option, fpm defaults to gzip which is
the default supported option by dpkg.
Signed-off-by: Joe Groocock <jgroocock@cloudflare.com>
This removes the redundant chgrp command from the publish step when
pushing packages to our public repositories. The directory being pushed
to has the setgid bit set on it, which means that we don't need to force
the group using this command. Further, attempting to do so resulted in
an error as the cfsync user does not have the appropriate permissions to
use the chgrp command.
This updates the public repository upload process to change the group on
the uploaded files to `cf` and adds the write permission for members of
the group. This should allow the `cf` user to properly overwrite the
file when signing it.
- `cloudflared` will no longer officially support Debian and Ubuntu distros that reached end-of-life: `buster`, `bullseye`, `impish`, `trusty`.
## 2025.1.1
### New Features
- This release introduces the use of new Post Quantum curves and the ability to use Post Quantum curves when running tunnels with the QUIC protocol; this applies to both non-FIPS and FIPS builds.
## 2024.12.2
### New Features
- This release introduces the ability to collect troubleshooting information from one instance of cloudflared running on the local machine. The command can be executed as `cloudflared tunnel diag`.
## 2024.12.1
### Notices
- The use of the `--metrics` flag is still honoured, meaning that if this flag is set the metrics server will try to bind to it. However, this version includes a change that makes the metrics server bind to a port with a semi-deterministic approach. If the metrics flag is not present, the server will bind to the first available port in the range 20241 to 20245. In case all ports are unavailable, the fallback is to bind to a random port.
## 2024.10.0
### Bug Fixes
- We fixed a bug related to `--grace-period`. Tunnels that use QUIC as transport weren't abiding by this waiting period before forcefully closing the connections to the edge. From now on, both QUIC and HTTP2 tunnels will wait for either the grace period to end (defaults to 30 seconds) or until the last in-flight request is handled. Users that wish to maintain the previous behavior should set `--grace-period` to 0 if `--protocol` is set to `quic`. This will force `cloudflared` to shutdown as soon as either SIGTERM or SIGINT is received.
## 2024.2.1
### Notices
- Starting from this version, tunnel diagnostics will be enabled by default. This will allow the engineering team to remotely get diagnostics from cloudflared during debug activities. Users still have the capability to opt-out of this feature by defining `--management-diagnostics=false` (or env `TUNNEL_MANAGEMENT_DIAGNOSTICS`).
## 2023.9.0
### Notices
- The `warp-routing` `enabled: boolean` flag is no longer supported in the configuration file. Warp Routing traffic (e.g. TCP, UDP, ICMP) is proxied to cloudflared if routes to the target tunnel are configured. This change does not affect remotely managed tunnels, but for locally managed tunnels, users that might be relying on this feature flag to block traffic should instead guarantee that the tunnel has no Private Routes configured.
## 2023.7.0
### New Features
- You can now enable additional diagnostics over the management.argotunnel.com service for your active cloudflared connectors via a new runtime flag `--management-diagnostics` (or env `TUNNEL_MANAGEMENT_DIAGNOSTICS`). This feature is provided as opt-in and requires the flag to be enabled. Endpoints such as /metrics provide another mechanism for reaching your prometheus metrics. Additionally, /debug/pprof/(goroutine|heap) endpoints are also introduced to allow for remotely retrieving active pprof information from a running cloudflared connector.
## 2023.4.1
### New Features
- You can now stream your logs from your remote cloudflared to your local terminal with `cloudflared tail <TUNNEL-ID>`. This new feature requires the remote cloudflared to be version 2023.4.1 or higher.
## 2023.3.2
### Notices
- Due to the nature of QuickTunnels (https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/do-more-with-tunnels/trycloudflare/) and its intended usage for testing and experiment of Cloudflare Tunnels, starting from 2023.3.2, QuickTunnels only make a single connection to the edge. If users want to use Tunnels in a production environment, they should move to Named Tunnels instead. (https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide/remote/#set-up-a-tunnel-remotely-dashboard-setup)
## 2023.3.1
### Breaking Change
- Running a tunnel without ingress rules defined in configuration file nor from the CLI flags will no longer provide a default ingress rule to localhost:8080 and instead will return HTTP response code 503 for all incoming HTTP requests.
### Security Fixes
- Windows 32 bit machines MSI now defaults to Program Files to install cloudflared. (See CVE-2023-1314). The cloudflared client itself is unaffected. This just changes how the installer works on 32 bit windows machines.
### Bug Fixes
- Fixed a bug that would cause running tunnel on Bastion mode and without ingress rules to crash.
## 2023.2.2
### Notices
- Legacy tunnels were officially deprecated on December 1, 2022. Starting with this version, cloudflared no longer supports connecting legacy tunnels.
- h2mux tunnel connection protocol is no longer supported. Any tunnels still configured to use this protocol will alert and use http2 tunnel protocol instead. We recommend using quic protocol for all tunnels going forward.
## 2023.2.1
### Bug fixes
- Fixed a bug in TCP connection proxy that could result in the connection being closed before all data was written.
- cloudflared now correctly aborts body write if connection to origin service fails after response headers were sent already.
- Fixed a bug introduced in the previous release where debug endpoints were removed.
## 2022.12.0
### Improvements
- cloudflared now attempts to try other edge addresses before falling back to a lower protocol.
- cloudflared tunnel no longer spins up a quick tunnel. The call has to be explicit and provide a --url flag.
- cloudflared will now randomly pick the first or second region to connect to instead of always connecting to region2 first.
## 2022.9.0
### New Features
- cloudflared now rejects ingress rules with invalid http status codes for http_status.
## 2022.8.1
### New Features
- cloudflared now remembers if it connected to a certain protocol successfully. If it did, it does not fall back to a lower
protocol on connection failures.
## 2022.7.1
### New Features
- It is now possible to connect cloudflared tunnel to Cloudflare Global Network with IPv6. See `cloudflared tunnel --help` and look for `edge-ip-version` for more information. For now, the default behavior is to still connect with IPv4 only.
### Bug Fixes
- Several bug fixes related with QUIC transport (used between cloudflared tunnel and Cloudflare Global Network). Updating to this version is highly recommended.
## 2022.4.0
### Bug Fixes
- `cloudflared tunnel run` no longer logs the Tunnel token or JSON credentials in clear text as those are the secrets
that allow running the Tunnel.
## 2022.3.4
### New Features
- It is now possible to retrieve the credentials that allow you to run a Tunnel in case you forgot/lost them. This is
achievable with: `cloudflared tunnel token --cred-file /path/to/file.json TUNNEL`. This new feature only works for
Tunnels created with cloudflared version 2022.3.0 or more recent.
### Bug Fixes
- `cloudflared service install` now starts the underlying agent service on Linux operating system (similarly to the
behaviour in Windows and MacOS).
## 2022.3.3
### Bug Fixes
- `cloudflared service install` now starts the underlying agent service on Windows operating system (similarly to the
behaviour in MacOS).
## 2022.3.1
### Bug Fixes
- Various fixes to the reliability of `quic` protocol, including an edge case that could lead to cloudflared crashing.
## 2022.3.0
### New Features
- It is now possible to configure Ingress Rules to point to an origin served by unix socket with either HTTP or HTTPS.
If the origin starts with `unix:/` then we assume HTTP (existing behavior). Otherwise, the origin can start with
`unix+tls:/` for HTTPS.
## 2022.2.1
### New Features
- This project now has a new LICENSE that is more compliant with open source purposes.
### Bug Fixes
- Various fixes to the reliability of `quic` protocol.
## 2022.1.3
### New Features
- New `cloudflared tunnel vnet` commands to allow for private routing to be virtualized. This means that the same CIDR
can now be used to point to two different Tunnels with `cloudflared tunnel route ip` command. More information will be
made available on blog.cloudflare.com and developers.cloudflare.com/cloudflare-one once the feature is globally available.
### Bug Fixes
- Correctly handle proxying UDP datagrams with no payload.
- Bug fix for origins that use Server-Sent Events (SSE).
## 2022.1.0
### Improvements
- If a specific `protocol` property is defined (e.g. for `quic`), cloudflared no longer falls back to an older protocol
(such as `http2`) in face of connectivity errors. This is important because some features are only supported in a specific
protocol (e.g. UDP proxying only works for `quic`). Hence, if a user chooses a protocol, cloudflared now adheres to it
no matter what.
### Bug Fixes
- Stopping cloudflared running with `quic` protocol now respects graceful shutdown.
## 2021.12.2
### Bug Fixes
- Fix logging when `quic` transport is used and UDP traffic is proxied.
- FIPS compliant cloudflared binaries will now be released as separate artifacts. Recall that these are only for linux
and amd64.
## 2021.12.1
### Bug Fixes
- Fixes Github issue #530 where cloudflared 2021.12.0 could not reach origins that were HTTPS and using certain encryption
methods forbidden by FIPS compliance (such as Let's Encrypt certificates). To address this fix we have temporarily reverted
FIPS compliance from amd64 linux binaries that was recently introduced (or fixed actually as it was never working before).
## 2021.12.0
### New Features
- Cloudflared binary released for amd64 linux is now FIPS compliant.
### Improvements
- Logging about connectivity to Cloudflare edge now only yields `ERR` level logging if there are no connections to
Cloudflare edge that are active. Otherwise it logs `WARN` level.
### Bug Fixes
- Fixes Github issue #501.
## 2021.11.0
### Improvements
- Fallback from `protocol:quic` to `protocol:http2` immediately if UDP connectivity isn't available. This could be because of a firewall or
egress rule.
## 2021.10.4
### Improvements
- Collect quic transport metrics on RTT, packets and bytes transferred.
### Bug Fixes
- Fix race condition that was writing to the connection after the http2 handler returns.
## 2021.9.2
### New features
- `cloudflared` can now run with `quic` as the underlying tunnel transport protocol. To try it, change or add "protocol: quic" to your config.yml file or
run cloudflared with the `--protocol quic` flag. e.g:
`cloudflared tunnel --protocol quic run <tunnel-name>`
### Bug Fixes
- Fixed some generic transport bugs in `quic` mode. It's advised to upgrade to at least this version (2021.9.2) when running `cloudflared`
with `quic` protocol.
- `cloudflared` docker images will now show version.
## 2021.8.4
### Improvements
- Temporary tunnels (those hosted on trycloudflare.com that do not require a Cloudflare login) now run as Named Tunnels
underneath. We recall that these tunnels should not be relied upon for production usage as they come with no guarantee
of uptime. Previous cloudflared versions will soon be unable to run legacy temporary tunnels and will require an update
(to this version or more recent).
## 2021.8.2
### Improvements
- Because Equinox is shutting down, all cloudflared releases are now present [here](https://github.com/cloudflare/cloudflared/releases).
[Equinox](https://dl.equinox.io/cloudflare/cloudflared/stable) will no longer receive updates.
## 2021.8.0
### Bug fixes
- Prevents tunnel from accidentally running when only proxy-dns should run.
### Improvements
- If auto protocol transport lookup fails, we now default to a transport instead of not connecting.
## 2021.6.0
### Bug Fixes
- Fixes a http2 transport (the new default for Named Tunnels) to work with unix socket origins.
## 2021.5.10
### Bug Fixes
- Fixes a memory leak in h2mux transport that connects cloudflared to Cloudflare edge.
## 2021.5.9
### New Features
- Uses new Worker based login helper service to facilitate token exchange in cloudflared flows.
### Bug Fixes
- Fixes Centos-7 builds.
## 2021.5.8
### New Features
- When creating a DNS record to point a hostname at a tunnel, you can now use --overwrite-dns to overwrite any existing
DNS records with that hostname. This works both when using the CLI to provision DNS, as well as when starting an adhoc
named tunnel, e.g.:
- `cloudflared tunnel route dns --overwrite-dns foo-tunnel foo.example.com`
- Named Tunnels will automatically select the protocol to connect to Cloudflare's edge network.
## 2021.5.0
### New Features
- It is now possible to run the same tunnel using more than one `cloudflared` instance. This is a server-side change and
is compatible with any client version that uses Named Tunnels.
To get started, visit our [developer documentation](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/run-tunnel/deploy-cloudflared-replicas).
- `cloudflared tunnel ingress validate` will now warn about unused keys in your config file. This is helpful for
detecting typos in your config.
- If `cloudflared` detects it is running inside a Linux container, it will limit itself to use only the number of CPUs
the pod has been granted, instead of trying to use every CPU available.
## 2021.4.0
### Bug Fixes
- Fixed proxying of websocket requests to avoid possibility of losing initial frames that were sent in the same TCP
packet as response headers [#345](https://github.com/cloudflare/cloudflared/issues/345).
- `proxy-dns` option now works in conjunction with running a named tunnel [#346](https://github.com/cloudflare/cloudflared/issues/346).
## 2021.3.6
### Bug Fixes
- Reverted 2021.3.5 improvement to use HTTP/2 in a best-effort manner between cloudflared and origin services because
it was found to break in some cases.
## 2021.3.5
### Improvements
- HTTP/2 transport is now always chosen if origin server supports it and the service url scheme is HTTPS.
This was previously done in a best attempt manner.
### Bug Fixes
- The MacOS binaries were not successfully released in 2021.3.3 and 2021.3.4. This release is aimed at addressing that.
## 2021.3.3
### Improvements
- The tunnel create command, as well as running ad-hoc tunnels using `cloudflared tunnel -name NAME`, will not overwrite
existing files when writing tunnel credentials.
### Bug Fixes
- Tunnel create and delete commands no longer use path to credentials from the configuration file.
If you need to place the tunnel credentials file at a specific location, you must use the `--credentials-file` flag.
- Access ssh-gen creates properly named keys for SSH short lived certs.
## 2021.3.2
### New Features
- It is now possible to obtain more detailed information about the cloudflared connectors to Cloudflare Edge via
`cloudflared tunnel info <name/uuid>`. It is possible to sort the output as well as output in different formats,
such as: `cloudflared tunnel info --sort-by version --invert-sort --output json <name/uuid>`.
You can obtain more information via `cloudflared tunnel info --help`.
### Bug Fixes
- Don't look for configuration file in default paths when `--config FILE` flag is present after `tunnel` subcommand.
- cloudflared access token command now functions correctly with the new token-per-app change from 2021.3.0.
## 2021.3.0
### New Features
- [Cloudflare One Routing](https://developers.cloudflare.com/cloudflare-one/tutorials/warp-to-tunnel) specific commands
now show up in the `cloudflared tunnel route --help` output.
- There is a new ingress type that allows cloudflared to proxy SOCKS5 as a bastion. You can use it with an ingress
rule by adding `service: socks-proxy`. Traffic is routed to any destination specified by the SOCKS5 packet but only
if allowed by a rule. In the following example we allow proxying to a certain CIDR but explicitly forbid one address
within it:
```
ingress:
- hostname: socks.example.com
service: socks-proxy
originRequest:
ipRules:
- prefix: 192.168.1.8/32
allow: false
- prefix: 192.168.1.0/24
ports: [80, 443]
allow: true
```
### Improvements
- Nested commands, such as `cloudflared tunnel run`, now consider CLI arguments even if they appear earlier on the
command. For instance, `cloudflared --config config.yaml tunnel run` will now behave the same as
`cloudflared tunnel --config config.yaml run`
- Warnings are now shown in the output logs whenever cloudflared is running without the most recent version and
`no-autoupdate` is `true`.
- Access tokens are now stored per Access App instead of per request path. This decreases the number of times that the
user is required to authenticate with an Access policy redundantly.
### Bug Fixes
- GitHub [PR #317](https://github.com/cloudflare/cloudflared/issues/317) was broken in 2021.2.5 and is now fixed again.
## 2021.2.5
### New Features
- We introduce [Cloudflare One Routing](https://developers.cloudflare.com/cloudflare-one/tutorials/warp-to-tunnel) in
beta mode. Cloudflare customers can now connect users and private networks with RFC 1918 IP addresses via the
Cloudflare edge network. Users running Cloudflare WARP client in the same organization can connect to the services
made available by Argo Tunnel IP routes. Please share your feedback in the GitHub issue tracker.
## 2021.2.4
### Bug Fixes
- Reverts the Improvement released in 2021.2.3 for CLI arguments as it introduced a regression where cloudflared failed
to read URLs in configuration files.
- cloudflared now logs the reason for failed connections if the error is recoverable.
## 2021.2.3
### Backward Incompatible Changes
- Removes db-connect. The Cloudflare Workers product will continue to support db-connect implementations with versions
of cloudflared that predate this release and include support for db-connect.
### New Features
- Introduces support for proxy configurations with websockets in arbitrary TCP connections (#318).
### Improvements
- (reverted) Nested command line argument handling.
### Bug Fixes
- The maximum number of upstream connections is now limited by default which should fix reported issues of cloudflared
exhausting CPU usage when faced with connectivity issues.
Contains the command-line client for Cloudflare Tunnel, a tunneling daemon that proxies traffic from the Cloudflare network to your origins.
This daemon sits between the Cloudflare network and your origin (e.g. a webserver). Cloudflare attracts client requests and sends them to you
via this daemon, without requiring you to poke holes in your firewall --- your origin can remain as closed as possible.
Extensive documentation can be found in the [Cloudflare Tunnel section](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel) of the Cloudflare Docs.
All usages related to proxying to your origins are available under `cloudflared tunnel help`.
You can also use `cloudflared` to access Tunnel origins (that are protected with `cloudflared tunnel`) for TCP traffic
at Layer 4 (i.e., not HTTP/websocket), which is relevant for use cases such as SSH, RDP, etc.
Such usages are available under `cloudflared access help`.
You can instead use [WARP client](https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/warp/)
to access private origins behind Tunnels for Layer 4 traffic without requiring `cloudflared access` commands on the client side.
Contains the command-line client for Argo Tunnel, a tunneling daemon that proxies any local webserver through the Cloudflare network. Extensive documentation can be found in the [Argo Tunnel section](https://developers.cloudflare.com/argo-tunnel/) of the Cloudflare Docs.
## Before you get started
Before you use Argo Tunnel, you'll need to complete a few steps in the Cloudflare dashboard. The website you add to Cloudflare will be used to route traffic to your Tunnel.
Before you use Cloudflare Tunnel, you'll need to complete a few steps in the Cloudflare dashboard: you need to add a
website to your Cloudflare account. Note that today it is possible to use Tunnel without a website (e.g. for private
routing), but for legacy reasons this requirement is still necessary:
1. [Add a website to Cloudflare](https://developers.cloudflare.com/fundamentals/manage-domains/add-site/)
2. [Change your domain nameservers to Cloudflare](https://developers.cloudflare.com/dns/zone-setups/full-setup/setup/)
1. [Add a website to Cloudflare](https://support.cloudflare.com/hc/en-us/articles/201720164-Creating-a-Cloudflare-account-and-adding-a-website)
2. [Change your domain nameservers to Cloudflare](https://support.cloudflare.com/hc/en-us/articles/205195708)
## Installing `cloudflared`
Downloads are available as standalone binaries, a Docker image, and Debian, RPM, and Homebrew packages. You can also find releases here on the `cloudflared` GitHub repository.
Downloads are available as standalone binaries, a Docker image, and Debian, RPM, and Homebrew packages. You can also find releases [here](https://github.com/cloudflare/cloudflared/releases) on the `cloudflared` GitHub repository.
* You can [install on macOS](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/downloads/#macos) via Homebrew or by downloading the [latest Darwin amd64 release](https://github.com/cloudflare/cloudflared/releases)
* Binaries, Debian, and RPM packages for Linux [can be found here](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/downloads/#linux)
* A Docker image of `cloudflared` is [available on DockerHub](https://hub.docker.com/r/cloudflare/cloudflared)
* You can install on Windows machines with the [steps here](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/downloads/#windows)
* To build from source, install the required version of Go, mentioned in the [Development](#development) section below. Then you can run `make cloudflared`, as sketched below.
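As a rough sketch, building from source looks like this (assuming `git`, `make`, and a suitable Go toolchain are already installed):

```sh
# Clone the repository and build the cloudflared binary with the provided Makefile target.
git clone https://github.com/cloudflare/cloudflared.git
cd cloudflared
make cloudflared
```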
User documentation for Cloudflare Tunnel can be found at https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/
## Creating Tunnels and routing traffic
Once installed, you can authenticate `cloudflared` into your Cloudflare account and begin creating Tunnels to serve traffic to your origins.
* Create a Tunnel with [these instructions](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/) (a minimal command sketch follows this list)
* Route traffic to that Tunnel:
* Via public [DNS records in Cloudflare](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/dns/)
* Or via a public hostname guided by a [Cloudflare Load Balancer](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/routing-to-tunnel/public-load-balancers/)
* Or from [WARP client private traffic](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/private-net/)
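As a minimal sketch of the named-tunnel flow (the tunnel name `my-tunnel` and hostname `app.example.com` are placeholders):

```sh
cloudflared tunnel login                                # authenticate cloudflared with your Cloudflare account
cloudflared tunnel create my-tunnel                     # create a named Tunnel and its credentials file
cloudflared tunnel route dns my-tunnel app.example.com  # create a DNS record that routes the hostname to the Tunnel
cloudflared tunnel run my-tunnel                        # start the connector and serve traffic for the Tunnel
```

Exact steps and configuration options are covered in the instructions linked above.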
## TryCloudflare
Want to test Cloudflare Tunnel before adding a website to Cloudflare? You can do so with TryCloudflare using the documentation [available here](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/do-more-with-tunnels/trycloudflare/).
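For example, a minimal sketch of a temporary TryCloudflare tunnel (the local URL is a placeholder for whatever you are serving):

```sh
# Expose a local web server on a randomly generated trycloudflare.com hostname,
# without adding a website to Cloudflare first.
cloudflared tunnel --url http://localhost:8080
```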
## Deprecated versions
Cloudflare currently supports versions of cloudflared that are **within one year** of the most recent release. Breaking changes unrelated to feature availability may be introduced that will impact versions released more than one year ago. You can read more about upgrading cloudflared in our [developer documentation](https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/downloads/update-cloudflared/).
For example, as of January 2023, Cloudflare supports cloudflared versions from 2022.1.1 through 2023.1.1.
`{"errors": [{"code": 1003, "message":"An A, AAAA or CNAME record already exists with that host"}], "result": {"cname": "new"}}`,
`{"errors": [{"code": 1003, "message":"An A, AAAA or CNAME record already exists with that host"}, {"code": 1004, "message":"Cannot use tunnel as origin for non-proxied load balancer"}], "result": {"cname": "new"}}`,
`{"errors": [{"code": 1003, "message":"An A, AAAA or CNAME record already exists with that host"}], "result": {"pool": "unchanged", "load_balancer": "updated"}}`,
`{"errors": [{"code": 1003, "message":"An A, AAAA or CNAME record already exists with that host"}, {"code": 1004, "message":"Cannot use tunnel as origin for non-proxied load balancer"}], "result": {"pool": "unchanged", "load_balancer": "updated"}}`,
// Expected summary message for each (load balancer change, pool change) combination.
{ChangeNew, ChangeNew, "Created load balancer lb.example.com and added a new pool POOL with this tunnel as an origin"},
{ChangeNew, ChangeUpdated, "Created load balancer lb.example.com with an existing pool POOL which was updated to use this tunnel as an origin"},
{ChangeNew, ChangeUnchanged, "Created load balancer lb.example.com with an existing pool POOL which already has this tunnel as an origin"},
{ChangeUpdated, ChangeNew, "Added new pool POOL with this tunnel as an origin to load balancer lb.example.com"},
{ChangeUpdated, ChangeUpdated, "Updated pool POOL to use this tunnel as an origin and added it to load balancer lb.example.com"},
{ChangeUpdated, ChangeUnchanged, "Added pool POOL, which already has this tunnel as an origin, to load balancer lb.example.com"},
{ChangeUnchanged, ChangeNew, "Something went wrong: failed to modify load balancer lb.example.com with pool POOL; please check traffic manager configuration in the dashboard"},
{ChangeUnchanged, ChangeUpdated, "Added this tunnel as an origin in pool POOL which is already used by load balancer lb.example.com"},
{ChangeUnchanged, ChangeUnchanged, "Load balancer lb.example.com already uses pool POOL which has this tunnel as an origin"},
{"", "", "Something went wrong: failed to modify load balancer lb.example.com with pool POOL; please check traffic manager configuration in the dashboard"},
{"a", "b", "Something went wrong: failed to modify load balancer lb.example.com with pool POOL; please check traffic manager configuration in the dashboard"},
`{"errors": [{"code": 1003, "message":"An A, AAAA or CNAME record already exists with that host"}], "result": {"id": "00000000-0000-0000-0000-000000000000","name":"test","created_at":"0001-01-01T00:00:00Z","connections":[]}}}`,
Usage:"List virtual networks with the given `ID`",
}
filterVnetByName=cli.StringFlag{
Name:"name",
Usage:"List virtual networks with the given `NAME`",
}
filterDefaultVnet=cli.BoolFlag{
Name:"is-default",
Usage:"If true, lists the virtual network that is the default one. If false, lists all non-default virtual networks for the account. If absent, all are included in the results regardless of their default status.",
}
filterDeletedVnet=cli.BoolFlag{
Name:"show-deleted",
Usage:"If false (default), only show non-deleted virtual networks. If true, only show deleted virtual networks.",