Compare commits


22 Commits

Author SHA1 Message Date
Devin Carr 02705c44b2 TUN-9322: Add metric for unsupported RPC commands for datagram v3
Additionally adds support for the connection index as a label on the
datagram v3-specific tunnel metrics.

Closes TUN-9322
2025-05-13 16:11:09 +00:00
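
A minimal sketch of what such a labeled metric could look like, assuming the Prometheus Go client that cloudflared uses for its tunnel metrics; the metric name, label names, and registration here are illustrative, not the exact ones from the commit:

```go
package main

import (
	"fmt"
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
)

// unsupportedRemoteCommands counts RPC commands the datagram v3 muxer
// rejects, labeled by connection index and command name.
var unsupportedRemoteCommands = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "cloudflared",
		Subsystem: "datagram_v3",
		Name:      "unsupported_remote_command_total",
		Help:      "Unsupported RPC commands received, by connection index and command.",
	},
	[]string{"conn_index", "command"},
)

// unsupportedRemoteCommand records one rejected RPC on a given connection.
func unsupportedRemoteCommand(connIndex uint8, command string) {
	unsupportedRemoteCommands.WithLabelValues(strconv.Itoa(int(connIndex)), command).Inc()
}

func main() {
	prometheus.MustRegister(unsupportedRemoteCommands)
	unsupportedRemoteCommand(0, "register_udp_session")
	fmt.Println("recorded one unsupported command on connection 0")
}
```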
Devin Carr ce27840573 TUN-9291: Remove dynamic reloading of features for datagram v3
During a refresh of the supported features via the DNS TXT record,
cloudflared would update the internal feature list, but would not
propagate this information to the edge during a new connection.

This meant that a situation could occur in which cloudflared would
think that the client's connection could support datagram V3 and
would set up that muxer locally, but would not propagate that
information to the edge in the `ClientInfo` of the `ConnectionOptions`
when registering a new connection. The edge therefore still thought
that the client was set up to support datagram V2, and since the
protocols are not backwards compatible, the local muxer for datagram
V3 would reject the incoming RPC calls.

To address this, the feature list will be fetched only once during
client bootstrapping and will persist as-is until the client is restarted.
This helps reduce the complexity involved with different connections
having possibly different sets of features when connecting to the edge.
The features will now be tied to the client and never diverge across
connections.

Also retires the use of `support_datagram_v3` in favor of
`support_datagram_v3_1` to reduce the risk of reusing the feature key.
The `dv3` TXT feature key is also deprecated.

Closes TUN-9291
2025-05-07 23:21:08 +00:00
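
A minimal sketch of the bootstrap-once behavior described above; the type and function names are illustrative, not cloudflared's actual feature-selector API:

```go
package main

import (
	"fmt"
	"sync"
)

// staticFeatures resolves the supported feature list exactly once, at
// client startup, and returns the same snapshot for every subsequent
// connection, so the locally configured muxer always matches what is
// reported to the edge in ClientInfo.
type staticFeatures struct {
	once     sync.Once
	features []string
	fetch    func() []string // e.g. a DNS TXT record lookup
}

func (s *staticFeatures) Snapshot() []string {
	s.once.Do(func() { s.features = s.fetch() })
	return s.features
}

func main() {
	calls := 0
	f := &staticFeatures{fetch: func() []string {
		calls++
		return []string{"support_datagram_v3_1"}
	}}
	// Both calls see the same set; the fetch ran only once.
	fmt.Println(f.Snapshot(), f.Snapshot(), "fetches:", calls)
}
```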
GoncaloGarcia 40dc601e9d Release 2025.4.2 2025-04-30 14:15:20 +01:00
João "Pisco" Fernandes e5578cb74e Release 2025.4.1 2025-04-30 13:10:45 +01:00
João "Pisco" Fernandes bb765e741d chore: Do not use gitlab merge request pipelines
## Summary
If we define pipelines to trigger on merge requests,
they take precedence over branch pipelines, which is
currently the way our old pipelines are still triggered.
This means that a merge request can show green pipelines
while the external pipelines actually failed.
Therefore, we need to rely only on branch pipelines,
to ensure that we don't ignore the results from
external pipelines.

More information here:
- https://forum.gitlab.com/t/merge-request-considering-merge-request-pipelines-instead-of-branch-pipelines/111248/2
- https://docs.gitlab.com/17.6/ci/jobs/job_rules/#run-jobs-only-in-specific-pipeline-types
2025-04-30 12:01:43 +00:00
João "Pisco" Fernandes 10081602a4 Release 2025.4.1 2025-04-30 11:09:14 +01:00
Gonçalo Garcia 236fcf56d6 DEVTOOLS-16383: Create GitlabCI pipeline to release Mac builds
Adds a new Gitlab CI pipeline that releases cloudflared Mac builds and replaces the Teamcity adhoc job.
This will build, sign and create a new Github release or add the artifacts to an existing release if the other jobs finish first.
2025-04-30 09:57:52 +00:00
João "Pisco" Fernandes 73a9980f38 TUN-9255: Improve flush on write conditions in http2 tunnel type to match what is done on the edge
## Summary
We have adapted our edge services to better know when they should flush on write. This is an important
feature to ensure that response types like Server-Sent Events are not buffered, and instead are propagated to the eyeball
as soon as possible. This commit implements similar logic for the http2 tunnel protocol that we use in our edge
services, by adding the new event-stream content type for JSON (`application/x-ndjson`) and by also using the content-length
and transfer-encoding headers, following these RFCs:
- https://datatracker.ietf.org/doc/html/rfc7230#section-4.1
- https://datatracker.ietf.org/doc/html/rfc9112#section-6.1

Closes TUN-9255
2025-04-24 11:49:19 +00:00
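
A condensed, self-contained restatement of that decision logic; it mirrors the `shouldFlush` change that appears in the connection package diff further down, with the header-name constants inlined:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

var flushableContentTypes = []string{"text/event-stream", "application/grpc", "application/x-ndjson"}

// shouldFlush reports whether every write of this response should be
// flushed immediately to the eyeball: responses without Content-Length,
// with Transfer-Encoding: chunked, or with a known streaming
// Content-Type are treated as streams.
func shouldFlush(headers http.Header) bool {
	if headers.Get("content-length") == "" {
		return true
	}
	if te := strings.ToLower(headers.Get("transfer-encoding")); strings.Contains(te, "chunked") {
		return true
	}
	ct := strings.ToLower(headers.Get("content-type"))
	for _, c := range flushableContentTypes {
		if strings.HasPrefix(ct, c) {
			return true
		}
	}
	return false
}

func main() {
	h := http.Header{}
	h.Set("Content-Type", "application/x-ndjson")
	h.Set("Content-Length", "42")
	fmt.Println(shouldFlush(h)) // true: NDJSON event streams are flushable
}
```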
Tom Lianza 86e8585563 SDLC-3727 - Adding FIPS status to backstage
## Summary
This is a documentation change to help make sure we have an accurate FIPS inventory: https://wiki.cfdata.org/display/ENG/RFC%3A+Scalable+approach+for+managing+FIPS+compliance

Closes SDLC-3727
2025-04-10 16:58:04 +00:00
João "Pisco" Fernandes d8a066628b Release 2025.4.0 2025-04-01 20:23:54 +01:00
João "Pisco" Fernandes 553e77e061 chore: fix linter rules 2025-04-01 18:57:55 +01:00
Cyb3r Jak3 8f94f54ec7
feat: Adds a new command line for tunnel run for token file
Adds a new command-line flag for `tunnel run` that allows the token to
be read from a file. I've left the token command-line argument with
priority.
2025-04-01 18:23:22 +01:00
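
A minimal sketch of the resulting precedence, assuming the flag values have already been parsed; the helper name is illustrative, while the behavior (whitespace trimming, `--token` winning over `--token-file`) follows the diff further down:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// resolveTunnelToken returns the tunnel token, preferring the value
// passed directly on the command line over the token file.
func resolveTunnelToken(tokenFlag, tokenFilePath string) (string, error) {
	if tokenFlag != "" {
		return tokenFlag, nil
	}
	if tokenFilePath != "" {
		data, err := os.ReadFile(tokenFilePath)
		if err != nil {
			return "", fmt.Errorf("failed to read token file: %w", err)
		}
		return strings.TrimSpace(string(data)), nil
	}
	return "", errors.New("no tunnel token provided")
}

func main() {
	// The direct flag wins even when a file path is also given.
	tok, err := resolveTunnelToken("eyJh...", "/etc/cloudflared/token")
	fmt.Println(tok, err)
}
```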
gofastasf 2827b2fe8f
fix: Use path and filepath operation appropriately
Using path package methods can cause errors on Windows machines.

The path package is meant for URL operations and Unix-specific paths;
the filepath package handles file-system paths and is cross-platform.

Remove strings.HasSuffix and use filepath.Ext and path.Ext for file and
URL extensions respectively.
2025-04-01 17:59:43 +01:00
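
A small illustration of the distinction this commit relies on: `filepath` is OS-aware (backslash separators on Windows) and should be used for file-system paths, while `path` always uses forward slashes and suits URL paths:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"path/filepath"
)

func main() {
	// File-system paths: use filepath, which is cross-platform.
	fmt.Println(filepath.Ext("cloudflared.tgz"))  // .tgz
	fmt.Println(filepath.Join("dir", "cert.pem")) // dir/cert.pem (dir\cert.pem on Windows)

	// URL paths: use path, which is always slash-separated.
	u, _ := url.Parse("https://example.com/releases/cloudflared.tgz")
	fmt.Println(path.Ext(u.Path)) // .tgz
}
```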
Rohan Mukherjee 6dc8ed710e
fix: expand home directory for credentials file
## Issue

The [documentation for creating a tunnel's configuration
file](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/#4-create-a-configuration-file)
does not specify that the `credentials-file` field in `config.yml` needs
to be an absolute path.

A user (e.g., me 🤦) might add a path like `~/.cloudflared/<uuid>.json`
and wonder why the `cloudflared tunnel run` command is throwing a
credentials file not found error. Although one might consider it
intuitive, it's not a fair assumption as a lot of CLI tools allow file
paths with `~` for specifying files.

P.S. The tunnel ID in the following snippet is not a real tunnel ID, I
just generated it.
```
url: http://localhost:8000
tunnel: 958a1ef6-ff8c-4455-825a-5aed91242135
credentials-file: ~/.cloudflared/958a1ef6-ff8c-4455-825a-5aed91242135.json
```

Furthermore, the error message is confusing for the user, because the
file at the logged path actually exists; `os.Stat` failed only because
it could not expand the `~`.

## Solution

This commit fixes the above issue by running a `homedir.Expand` on the
`credentials-file` path in the `credentialFinder` function.
2025-04-01 17:54:57 +01:00
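
A minimal sketch of the fix, assuming the `github.com/mitchellh/go-homedir` package already used by cloudflared: expand a leading `~` before handing the path to `os.Stat`, falling back to the raw path if expansion fails (as the diff below does). The helper name is illustrative, and the example path reuses the generated tunnel ID from the snippet above:

```go
package main

import (
	"fmt"
	"os"

	homedir "github.com/mitchellh/go-homedir"
)

// statCredentialsFile expands a leading `~` in the configured
// credentials-file path before checking that the file exists.
func statCredentialsFile(p string) error {
	expanded, err := homedir.Expand(p)
	if err != nil {
		expanded = p // fall back to the raw path
	}
	_, err = os.Stat(expanded)
	return err
}

func main() {
	err := statCredentialsFile("~/.cloudflared/958a1ef6-ff8c-4455-825a-5aed91242135.json")
	fmt.Println(err)
}
```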
Shereef Marzouk e0b1ac0d05
chore: Update tunnel configuration link in the readme 2025-04-01 17:53:29 +01:00
Bernhard M. Wiedemann e7c5eb54af
Use RELEASE_NOTES date instead of build date
Use `RELEASE_NOTES` date instead of build date
to make builds reproducible.
See https://reproducible-builds.org/ for why this is good
and https://reproducible-builds.org/specs/source-date-epoch/
for the definition of this variable.
This date call only works with GNU date and BSD date.

Alternatively,
https://reproducible-builds.org/docs/source-date-epoch/#makefile could
be implemented.

This patch was done while working on reproducible builds for openSUSE,
sponsored by the NLnet NGI0 fund.
2025-04-01 17:52:50 +01:00
teslaedison cfec602fa7
chore: remove repetitive words 2025-04-01 17:51:57 +01:00
Micah Yeager 6fceb94998
feat: emit explicit errors for the `service` command on unsupported OSes
Per the contribution guidelines, this seemed to me like a small enough
change to not warrant an issue before creating this pull request. Let me
know if you'd like me to create one anyway.

## Background

While working with `cloudflared` on FreeBSD recently, I noticed that
there's an inconsistency with the available CLI commands on that OS
versus others — namely that the `service` command doesn't exist at all
for operating systems other than Linux, macOS, and Windows.

Contrast `cloudflared --help` output on macOS versus FreeBSD (truncated
to focus on the `COMMANDS` section):

- Current help output on macOS:

  ```text
  COMMANDS:
     update     Update the agent if a new version exists
     version    Print the version
     proxy-dns  Run a DNS over HTTPS proxy server.
     tail       Stream logs from a remote cloudflared
     service    Manages the cloudflared launch agent
     help, h    Shows a list of commands or help for one command
     Access:
       access, forward  access <subcommand>
     Tunnel:
       tunnel  Use Cloudflare Tunnel to expose private services to the Internet
               or to Cloudflare connected private users.
  ```
- Current help output on FreeBSD:
  ```text
  COMMANDS:
     update     Update the agent if a new version exists
     version    Print the version
     proxy-dns  Run a DNS over HTTPS proxy server.
     tail       Stream logs from a remote cloudflared
     help, h    Shows a list of commands or help for one command
     Access:
       access, forward  access <subcommand>
     Tunnel:
       tunnel  Use Cloudflare Tunnel to expose private services to the Internet
               or to Cloudflare connected private users.
  ```

This omission has caused confusion for users (including me), especially
since the provided command in the Cloudflare Zero Trust dashboard
returns a seemingly-unrelated error message:

```console
$ sudo cloudflared service install ...
You did not specify any valid additional argument to the cloudflared tunnel command.

If you are trying to run a Quick Tunnel then you need to explicitly pass the --url flag.
Eg. cloudflared tunnel --url localhost:8080/.

Please note that Quick Tunnels are meant to be ephemeral and should only be used for testing purposes.
For production usage, we recommend creating Named Tunnels. (https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide/)
```

## Contribution

This pull request adds a "stub" `service` command (including the usual
subcommands available on other OSes) to explicitly declare it as
unsupported on the operating system.

New help output on FreeBSD (and other operating systems where service
management is unsupported):

```text
COMMANDS:
   update     Update the agent if a new version exists
   version    Print the version
   proxy-dns  Run a DNS over HTTPS proxy server.
   tail       Stream logs from a remote cloudflared
   service    Manages the cloudflared system service (not supported on this operating system)
   help, h    Shows a list of commands or help for one command
   Access:
     access, forward  access <subcommand>
   Tunnel:
      tunnel  Use Cloudflare Tunnel to expose private services to the Internet or to Cloudflare connected private users.
```

New outputs when running the service management subcommands:

```console
$ sudo cloudflared service install ...
service installation is not supported on this operating system
```

```console
$ sudo cloudflared service uninstall ...
service uninstallation is not supported on this operating system
```

This keeps the available commands consistent until proper service
management support can be added for these otherwise-supported operating
systems.
2025-04-01 17:48:20 +01:00
Roman cf817f7036
Fix messages to point to one.dash.cloudflare.com 2025-04-01 17:47:23 +01:00
VFLC c8724a290a
Fix broken links in `cmd/cloudflared/*.go` related to running tunnel as a service
This PR updates 3 broken links to document [run tunnel as a
service](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/).
2025-04-01 17:45:59 +01:00
João "Pisco" Fernandes e7586153be TUN-9101: Don't ignore errors on `cloudflared access ssh`
## Summary

This change ensures that errors resulting from the `cloudflared access ssh` call are no longer ignored. By returning the error from `carrier.StartClient` to the upstream, we ensure that these errors are properly logged on stdout, providing better visibility and debugging capabilities.

Relates to TUN-9101
2025-03-17 18:42:19 +00:00
Chung-Ting Huang 11777db304 TUN-9089: Pin go import to v0.30.0, v0.31.0 requires go 1.23
Closes TUN-9089
2025-03-06 12:05:24 +00:00
44 changed files with 595 additions and 413 deletions

.gitlab-ci.yml Normal file (131 additions)
View File

@@ -0,0 +1,131 @@
stages: [build, release]

default:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.cfdata.org

# This before_script is injected into every job that runs on master meaning that if there is no tag the step
# will succeed but only write "No tag present - Skipping" to the console.
.check_tag:
  before_script:
    - |
      # Check if there is a Git tag pointing to HEAD
      echo "Tag found: $(git tag --points-at HEAD | grep .)"
      if git tag --points-at HEAD | grep .; then
        echo "Tag found: $(git tag --points-at HEAD | grep .)"
        export "VERSION=$(git tag --points-at HEAD | grep .)"
      else
        echo "No tag present — skipping."
        exit 0
      fi

## A set of predefined rules to use on the different jobs
.default_rules:
  # Rules to run the job only on the master branch
  run_on_master:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: always
    - when: never
  # Rules to run the job only on branches that are not master. This is needed because for now
  # we need to keep a similar behavior due to the integration with teamcity, which requires us
  # to not trigger pipelines on tags and/or merge requests.
  run_on_branch:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_PIPELINE_SOURCE != "merge_request_event" && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
      when: always
    - when: never

# -----------------------------------------------
# Stage 1: Build on every PR
# -----------------------------------------------
build_cloudflared_macos: &build
  stage: build
  rules:
    - !reference [.default_rules, run_on_branch]
  tags:
    - "macstadium-${RUNNER_ARCH}"
  parallel:
    matrix:
      - RUNNER_ARCH: [arm, intel]
  artifacts:
    paths:
      - artifacts/*
  script:
    - '[ "${RUNNER_ARCH}" = "arm" ] && export TARGET_ARCH=arm64'
    - '[ "${RUNNER_ARCH}" = "intel" ] && export TARGET_ARCH=amd64'
    - ARCH=$(uname -m)
    - echo ARCH=$ARCH - TARGET_ARCH=$TARGET_ARCH
    - ./.teamcity/mac/install-cloudflare-go.sh
    - export PATH="/tmp/go/bin:$PATH"
    - BUILD_SCRIPT=.teamcity/mac/build.sh
    - if [[ ! -x ${BUILD_SCRIPT} ]] ; then exit ; fi
    - set -euo pipefail
    - echo "Executing ${BUILD_SCRIPT}"
    - exec ${BUILD_SCRIPT}

# -----------------------------------------------
# Stage 1: Build and sign only on releases
# -----------------------------------------------
build_and_sign_cloudflared_macos:
  <<: *build
  rules:
    - !reference [.default_rules, run_on_master]
  secrets:
    APPLE_DEV_CA_CERT:
      vault: gitlab/cloudflare/tun/cloudflared/_branch/master/apple_dev_ca_cert_v2/data@kv
      file: false
    CFD_CODE_SIGN_CERT:
      vault: gitlab/cloudflare/tun/cloudflared/_branch/master/cfd_code_sign_cert_v2/data@kv
      file: false
    CFD_CODE_SIGN_KEY:
      vault: gitlab/cloudflare/tun/cloudflared/_branch/master/cfd_code_sign_key_v2/data@kv
      file: false
    CFD_CODE_SIGN_PASS:
      vault: gitlab/cloudflare/tun/cloudflared/_branch/master/cfd_code_sign_pass_v2/data@kv
      file: false
    CFD_INSTALLER_CERT:
      vault: gitlab/cloudflare/tun/cloudflared/_branch/master/cfd_installer_cert_v2/data@kv
      file: false
    CFD_INSTALLER_KEY:
      vault: gitlab/cloudflare/tun/cloudflared/_branch/master/cfd_installer_key_v2/data@kv
      file: false
    CFD_INSTALLER_PASS:
      vault: gitlab/cloudflare/tun/cloudflared/_branch/master/cfd_installer_pass_v2/data@kv
      file: false

# -----------------------------------------------
# Stage 2: Release to Github after building and signing
# -----------------------------------------------
release_cloudflared_macos_to_github:
  stage: release
  image: docker-registry.cfdata.org/stash/tun/docker-images/cloudflared-ci/main:6-8616fe631b76-amd64@sha256:96f4fd05e66cec03e0864c1bcf09324c130d4728eef45ee994716da499183614
  extends: .check_tag
  dependencies:
    - build_and_sign_cloudflared_macos
  rules:
    - !reference [.default_rules, run_on_master]
  cache:
    paths:
      - .cache/pip
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
    KV_NAMESPACE: 380e19aa04314648949b6ad841417ebe
    KV_ACCOUNT: 5ab4e9dfbd435d24068829fda0077963
  secrets:
    KV_API_TOKEN:
      vault: gitlab/cloudflare/tun/cloudflared/_dev/cfd_kv_api_token/data@kv
      file: false
    API_KEY:
      vault: gitlab/cloudflare/tun/cloudflared/_dev/cfd_github_api_key/data@kv
      file: false
  script:
    - python3 --version ; pip --version  # For debugging
    - python3 -m venv venv
    - source venv/bin/activate
    - pip install pynacl==1.4.0 pygithub==1.55
    - echo $VERSION
    - echo $TAG_EXISTS
    - echo "Running release because tag exists."
    - make macos-release

View File

@@ -49,7 +49,7 @@ import_certificate() {
     echo -n -e ${CERTIFICATE_ENV_VAR} | base64 -D > ${CERTIFICATE_FILE_NAME}
     # we set || true here and for every `security import invoke` because the "duplicate SecKeychainItemImport" error
     # will cause set -e to exit 1. It is okay we do this because we deliberately handle this error in the lines below.
-    local out=$(security import ${CERTIFICATE_FILE_NAME} -A 2>&1) || true
+    local out=$(security import ${CERTIFICATE_FILE_NAME} -T /usr/bin/pkgbuild -A 2>&1) || true
     local exitcode=$?
     # delete the certificate from disk
     rm -rf ${CERTIFICATE_FILE_NAME}
@@ -68,6 +68,28 @@ import_certificate() {
   fi
 }
+
+create_cloudflared_build_keychain() {
+  # Reusing the private key password as the keychain key
+  local PRIVATE_KEY_PASS=$1
+
+  # Create keychain only if it doesn't already exist
+  if [ ! -f "$HOME/Library/Keychains/cloudflared_build_keychain.keychain-db" ]; then
+    security create-keychain -p "$PRIVATE_KEY_PASS" cloudflared_build_keychain
+  else
+    echo "Keychain already exists: cloudflared_build_keychain"
+  fi
+
+  # Append temp keychain to the user domain
+  security list-keychains -d user -s cloudflared_build_keychain $(security list-keychains -d user | sed s/\"//g)
+
+  # Remove relock timeout
+  security set-keychain-settings cloudflared_build_keychain
+
+  # Unlock keychain so it doesn't require password
+  security unlock-keychain -p "$PRIVATE_KEY_PASS" cloudflared_build_keychain
+}
+
 # Imports private keys to the Apple KeyChain
 import_private_keys() {
   local PRIVATE_KEY_NAME=$1
@@ -83,7 +105,7 @@ import_private_keys() {
     echo -n -e ${PRIVATE_KEY_ENV_VAR} | base64 -D > ${PRIVATE_KEY_FILE_NAME}
     # we set || true here and for every `security import invoke` because the "duplicate SecKeychainItemImport" error
     # will cause set -e to exit 1. It is okay we do this because we deliberately handle this error in the lines below.
-    local out=$(security import ${PRIVATE_KEY_FILE_NAME} -A -P "${PRIVATE_KEY_PASS}" 2>&1) || true
+    local out=$(security import ${PRIVATE_KEY_FILE_NAME} -k cloudflared_build_keychain -P "$PRIVATE_KEY_PASS" -T /usr/bin/pkgbuild -A -P "${PRIVATE_KEY_PASS}" 2>&1) || true
     local exitcode=$?
     rm -rf ${PRIVATE_KEY_FILE_NAME}
     if [ -n "$out" ]; then
@@ -100,6 +122,9 @@ import_private_keys() {
   fi
 }
+# Create temp keychain only for this build
+create_cloudflared_build_keychain "${CFD_CODE_SIGN_PASS}"
+
 # Add Apple Root Developer certificate to the key chain
 import_certificate "Apple Developer CA" "${APPLE_DEV_CA_CERT}" "${APPLE_CA_CERT}"
@@ -119,8 +144,8 @@ import_certificate "Developer ID Installer" "${CFD_INSTALLER_CERT}" "${INSTALLER
 if [[ ! -z "$CFD_CODE_SIGN_NAME" ]]; then
   CODE_SIGN_NAME="${CFD_CODE_SIGN_NAME}"
 else
-  if [[ -n "$(security find-certificate -c "Developer ID Application" | cut -d'"' -f 4 -s | grep "Developer ID Application:" | head -1)" ]]; then
-    CODE_SIGN_NAME=$(security find-certificate -c "Developer ID Application" | cut -d'"' -f 4 -s | grep "Developer ID Application:" | head -1)
+  if [[ -n "$(security find-certificate -c "Developer ID Application" cloudflared_build_keychain | cut -d'"' -f 4 -s | grep "Developer ID Application:" | head -1)" ]]; then
+    CODE_SIGN_NAME=$(security find-certificate -c "Developer ID Application" cloudflared_build_keychain | cut -d'"' -f 4 -s | grep "Developer ID Application:" | head -1)
   else
     CODE_SIGN_NAME=""
   fi
@@ -130,8 +155,8 @@ fi
 if [[ ! -z "$CFD_INSTALLER_NAME" ]]; then
   PKG_SIGN_NAME="${CFD_INSTALLER_NAME}"
 else
-  if [[ -n "$(security find-certificate -c "Developer ID Installer" | cut -d'"' -f 4 -s | grep "Developer ID Installer:" | head -1)" ]]; then
-    PKG_SIGN_NAME=$(security find-certificate -c "Developer ID Installer" | cut -d'"' -f 4 -s | grep "Developer ID Installer:" | head -1)
+  if [[ -n "$(security find-certificate -c "Developer ID Installer" cloudflared_build_keychain | cut -d'"' -f 4 -s | grep "Developer ID Installer:" | head -1)" ]]; then
+    PKG_SIGN_NAME=$(security find-certificate -c "Developer ID Installer" cloudflared_build_keychain | cut -d'"' -f 4 -s | grep "Developer ID Installer:" | head -1)
   else
     PKG_SIGN_NAME=""
   fi
@@ -142,9 +167,16 @@ rm -rf "${TARGET_DIRECTORY}"
 export TARGET_OS="darwin"
 GOCACHE="$PWD/../../../../" GOPATH="$PWD/../../../../" CGO_ENABLED=1 make cloudflared
+# This allows apple tools to use the certificates in the keychain without requiring password input.
+# This command always needs to run after the certificates have been loaded into the keychain
+if [[ ! -z "$CFD_CODE_SIGN_PASS" ]]; then
+  security set-key-partition-list -S apple-tool:,apple: -s -k "${CFD_CODE_SIGN_PASS}" cloudflared_build_keychain
+fi
+
 # sign the cloudflared binary
 if [[ ! -z "$CODE_SIGN_NAME" ]]; then
-  codesign -s "${CODE_SIGN_NAME}" -f -v --timestamp --options runtime ${BINARY_NAME}
+  codesign --keychain $HOME/Library/Keychains/cloudflared_build_keychain.keychain-db -s "${CODE_SIGN_NAME}" -fv --options runtime --timestamp ${BINARY_NAME}
   # notarize the binary
   # TODO: TUN-5789
@@ -165,11 +197,13 @@ tar czf "$FILENAME" "${BINARY_NAME}"
 # build the installer package
 if [[ ! -z "$PKG_SIGN_NAME" ]]; then
   pkgbuild --identifier com.cloudflare.${PRODUCT} \
       --version ${VERSION} \
      --scripts ${ARCH_TARGET_DIRECTORY}/scripts \
      --root ${ARCH_TARGET_DIRECTORY}/contents \
      --install-location /usr/local/bin \
+      --keychain cloudflared_build_keychain \
      --sign "${PKG_SIGN_NAME}" \
      ${PKGNAME}
@@ -187,3 +221,8 @@ fi
 # cleanup build directory because this script is not ran within containers,
 # which might lead to future issues in subsequent runs.
 rm -rf "${TARGET_DIRECTORY}"
+
+# cleanup the keychain
+security default-keychain -d user -s login.keychain-db
+security list-keychains -d user -s login.keychain-db
+security delete-keychain cloudflared_build_keychain

View File

@@ -24,7 +24,7 @@ else
     DEB_PACKAGE_NAME := $(BINARY_NAME)
 endif

-DATE := $(shell date -u '+%Y-%m-%d-%H%M UTC')
+DATE := $(shell date -u -r RELEASE_NOTES '+%Y-%m-%d-%H%M UTC')
 VERSION_FLAGS := -X "main.Version=$(VERSION)" -X "main.BuildTime=$(DATE)"
 ifdef PACKAGE_MANAGER
     VERSION_FLAGS := $(VERSION_FLAGS) -X "github.com/cloudflare/cloudflared/cmd/cloudflared/updater.BuiltForPackageManager=$(PACKAGE_MANAGER)"
@@ -237,6 +237,10 @@ github-release:
 	python3 github_release.py --path $(PWD)/built_artifacts --release-version $(VERSION)
 	python3 github_message.py --release-version $(VERSION)

+.PHONY: macos-release
+macos-release:
+	python3 github_release.py --path $(PWD)/artifacts/ --release-version $(VERSION)
+
 .PHONY: r2-linux-release
 r2-linux-release:
 	python3 ./release_pkgs.py

View File

@@ -40,7 +40,7 @@ User documentation for Cloudflare Tunnel can be found at https://developers.clou
 Once installed, you can authenticate `cloudflared` into your Cloudflare account and begin creating Tunnels to serve traffic to your origins.

-* Create a Tunnel with [these instructions](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/create-tunnel)
+* Create a Tunnel with [these instructions](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/)
 * Route traffic to that Tunnel:
   * Via public [DNS records in Cloudflare](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/routing-to-tunnel/dns)
   * Or via a public hostname guided by a [Cloudflare Load Balancer](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/routing-to-tunnel/lb)

View File

@@ -1,3 +1,23 @@
+2025.4.2
+- 2025-04-30 chore: Do not use gitlab merge request pipelines
+- 2025-04-30 DEVTOOLS-16383: Create GitlabCI pipeline to release Mac builds
+- 2025-04-24 TUN-9255: Improve flush on write conditions in http2 tunnel type to match what is done on the edge
+- 2025-04-10 SDLC-3727 - Adding FIPS status to backstage
+
+2025.4.0
+- 2025-04-02 Fix broken links in `cmd/cloudflared/*.go` related to running tunnel as a service
+- 2025-04-02 chore: remove repetitive words
+- 2025-04-01 Fix messages to point to one.dash.cloudflare.com
+- 2025-04-01 feat: emit explicit errors for the `service` command on unsupported OSes
+- 2025-04-01 Use RELEASE_NOTES date instead of build date
+- 2025-04-01 chore: Update tunnel configuration link in the readme
+- 2025-04-01 fix: expand home directory for credentials file
+- 2025-04-01 fix: Use path and filepath operation appropriately
+- 2025-04-01 feat: Adds a new command line for tunnel run for token file
+- 2025-04-01 chore: fix linter rules
+- 2025-03-17 TUN-9101: Don't ignore errors on `cloudflared access ssh`
+- 2025-03-06 TUN-9089: Pin go import to v0.30.0, v0.31.0 requires go 1.23
+
 2025.2.1
 - 2025-02-26 TUN-9016: update base-debian to v12
 - 2025-02-25 TUN-8960: Connect to FED API GW based on the OriginCert's endpoint

View File

@@ -13,3 +13,5 @@ spec:
   type: "service"
   lifecycle: "Active"
   owner: "teams/tunnel-teams-routing"
+  cf:
+    FIPS: "required"

View File

@@ -16,7 +16,7 @@ bullseye: &bullseye
     - golangci-lint
   pre-cache: &build_pre_cache
     - export GOCACHE=/cfsetup_build/.cache/go-build
-    - go install golang.org/x/tools/cmd/goimports@latest
+    - go install golang.org/x/tools/cmd/goimports@v0.30.0
   post-cache:
     # Linting
     - make lint

View File

@@ -104,7 +104,7 @@ func ssh(c *cli.Context) error {
 	case 3:
 		options.OriginURL = fmt.Sprintf("https://%s:%s", parts[2], parts[1])
 		options.TLSClientConfig = &tls.Config{
-			InsecureSkipVerify: true,
+			InsecureSkipVerify: true, // #nosec G402
 			ServerName:         parts[0],
 		}
 		log.Warn().Msgf("Using insecure SSL connection because SNI overridden to %s", parts[0])
@@ -141,6 +141,5 @@ func ssh(c *cli.Context) error {
 		logger := log.With().Str("host", url.Host).Logger()
 		s = stream.NewDebugStream(s, &logger, maxMessages)
 	}
-	carrier.StartClient(wsConn, s, options)
-	return nil
+	return carrier.StartClient(wsConn, s, options)
 }

View File

@@ -3,11 +3,38 @@
 package main

 import (
+	"fmt"
 	"os"

 	cli "github.com/urfave/cli/v2"
+
+	"github.com/cloudflare/cloudflared/cmd/cloudflared/cliutil"
 )

 func runApp(app *cli.App, graceShutdownC chan struct{}) {
+	app.Commands = append(app.Commands, &cli.Command{
+		Name:  "service",
+		Usage: "Manages the cloudflared system service (not supported on this operating system)",
+		Subcommands: []*cli.Command{
+			{
+				Name:   "install",
+				Usage:  "Install cloudflared as a system service (not supported on this operating system)",
+				Action: cliutil.ConfiguredAction(installGenericService),
+			},
+			{
+				Name:   "uninstall",
+				Usage:  "Uninstall the cloudflared service (not supported on this operating system)",
+				Action: cliutil.ConfiguredAction(uninstallGenericService),
+			},
+		},
+	})
 	app.Run(os.Args)
 }
+
+func installGenericService(c *cli.Context) error {
+	return fmt.Errorf("service installation is not supported on this operating system")
+}
+
+func uninstallGenericService(c *cli.Context) error {
+	return fmt.Errorf("service uninstallation is not supported on this operating system")
+}

View File

@@ -120,7 +120,7 @@ func installLaunchd(c *cli.Context) error {
 		log.Info().Msg("Installing cloudflared client as an user launch agent. " +
 			"Note that cloudflared client will only run when the user is logged in. " +
 			"If you want to run cloudflared client at boot, install with root permission. " +
-			"For more information, visit https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/run-tunnel/run-as-service")
+			"For more information, visit https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/macos/")
 	}
 	etPath, err := os.Executable()
 	if err != nil {

View File

@@ -1,13 +1,13 @@
 package main

 import (
-	"bufio"
 	"bytes"
+	"errors"
 	"fmt"
 	"io"
 	"os"
 	"os/exec"
-	"path"
+	"path/filepath"
 	"text/template"

 	homedir "github.com/mitchellh/go-homedir"
@@ -44,7 +44,7 @@ func (st *ServiceTemplate) Generate(args *ServiceTemplateArgs) error {
 		return err
 	}
 	if _, err = os.Stat(resolvedPath); err == nil {
-		return fmt.Errorf(serviceAlreadyExistsWarn(resolvedPath))
+		return errors.New(serviceAlreadyExistsWarn(resolvedPath))
 	}

 	var buffer bytes.Buffer
@@ -57,7 +57,7 @@ func (st *ServiceTemplate) Generate(args *ServiceTemplateArgs) error {
 		fileMode = st.FileMode
 	}

-	plistFolder := path.Dir(resolvedPath)
+	plistFolder := filepath.Dir(resolvedPath)
 	err = os.MkdirAll(plistFolder, 0o755)
 	if err != nil {
 		return fmt.Errorf("error creating %s: %v", plistFolder, err)
@@ -118,49 +118,6 @@ func ensureConfigDirExists(configDir string) error {
 	return err
 }

-// openFile opens the file at path. If create is set and the file exists, returns nil, true, nil
-func openFile(path string, create bool) (file *os.File, exists bool, err error) {
-	expandedPath, err := homedir.Expand(path)
-	if err != nil {
-		return nil, false, err
-	}
-	if create {
-		fileInfo, err := os.Stat(expandedPath)
-		if err == nil && fileInfo.Size() > 0 {
-			return nil, true, nil
-		}
-		file, err = os.OpenFile(expandedPath, os.O_RDWR|os.O_CREATE, 0600)
-	} else {
-		file, err = os.Open(expandedPath)
-	}
-	return file, false, err
-}
-
-func copyCredential(srcCredentialPath, destCredentialPath string) error {
-	destFile, exists, err := openFile(destCredentialPath, true)
-	if err != nil {
-		return err
-	} else if exists {
-		// credentials already exist, do nothing
-		return nil
-	}
-	defer destFile.Close()
-
-	srcFile, _, err := openFile(srcCredentialPath, false)
-	if err != nil {
-		return err
-	}
-	defer srcFile.Close()
-
-	// Copy certificate
-	_, err = io.Copy(destFile, srcFile)
-	if err != nil {
-		return fmt.Errorf("unable to copy %s to %s: %v", srcCredentialPath, destCredentialPath, err)
-	}
-
-	return nil
-}
-
 func copyFile(src, dest string) error {
 	srcFile, err := os.Open(src)
 	if err != nil {
@@ -187,36 +144,3 @@ func copyFile(src, dest string) error {
 	ok = true
 	return nil
 }
-
-func copyConfig(srcConfigPath, destConfigPath string) error {
-	// Copy or create config
-	destFile, exists, err := openFile(destConfigPath, true)
-	if err != nil {
-		return fmt.Errorf("cannot open %s with error: %s", destConfigPath, err)
-	} else if exists {
-		// config already exists, do nothing
-		return nil
-	}
-	defer destFile.Close()
-
-	srcFile, _, err := openFile(srcConfigPath, false)
-	if err != nil {
-		fmt.Println("Your service needs a config file that at least specifies the hostname option.")
-		fmt.Println("Type in a hostname now, or leave it blank and create the config file later.")
-		fmt.Print("Hostname: ")
-		reader := bufio.NewReader(os.Stdin)
-		input, _ := reader.ReadString('\n')
-		if input == "" {
-			return err
-		}
-		fmt.Fprintf(destFile, "hostname: %s\n", input)
-	} else {
-		defer srcFile.Close()
-		_, err = io.Copy(destFile, srcFile)
-		if err != nil {
-			return fmt.Errorf("unable to copy %s to %s: %v", srcConfigPath, destConfigPath, err)
-		}
-	}
-
-	return nil
-}

View File

@@ -208,7 +208,7 @@ then protect with Cloudflare Access).
 B) Locally reachable TCP/UDP-based private services to Cloudflare connected private users in the same account, e.g.,
    those enrolled to a Zero Trust WARP Client.

-You can manage your Tunnels via dash.teams.cloudflare.com. This approach will only require you to run a single command
+You can manage your Tunnels via one.dash.cloudflare.com. This approach will only require you to run a single command
 later in each machine where you wish to run a Tunnel.

 Alternatively, you can manage your Tunnels via the command line. Begin by obtaining a certificate to be able to do so:

View File

@@ -1,15 +0,0 @@
-package tunnel
-
-import (
-	"testing"
-
-	"github.com/stretchr/testify/require"
-
-	"github.com/cloudflare/cloudflared/features"
-)
-
-func TestDedup(t *testing.T) {
-	expected := []string{"a", "b"}
-	actual := features.Dedup([]string{"a", "b", "a"})
-	require.ElementsMatch(t, expected, actual)
-}

View File

@@ -140,7 +140,7 @@ func prepareTunnelConfig(
 	transportProtocol := c.String(flags.Protocol)
 	isPostQuantumEnforced := c.Bool(flags.PostQuantum)

-	featureSelector, err := features.NewFeatureSelector(ctx, namedTunnel.Credentials.AccountTag, c.StringSlice("features"), c.Bool("post-quantum"), log)
+	featureSelector, err := features.NewFeatureSelector(ctx, namedTunnel.Credentials.AccountTag, c.StringSlice(flags.Features), c.Bool(flags.PostQuantum), log)
 	if err != nil {
 		return nil, nil, errors.Wrap(err, "Failed to create feature selector")
 	}

View File

@@ -9,6 +9,7 @@ import (
 	"strings"

 	"github.com/google/uuid"
+	"github.com/mitchellh/go-homedir"
 	"github.com/pkg/errors"
 	"github.com/rs/zerolog"
 	"github.com/urfave/cli/v2"
@@ -54,7 +55,12 @@ func newSubcommandContext(c *cli.Context) (*subcommandContext, error) {
 // Returns something that can find the given tunnel's credentials file.
 func (sc *subcommandContext) credentialFinder(tunnelID uuid.UUID) CredFinder {
 	if path := sc.c.String(CredFileFlag); path != "" {
-		return newStaticPath(path, sc.fs)
+		// Expand path if CredFileFlag contains `~`
+		absPath, err := homedir.Expand(path)
+		if err != nil {
+			return newStaticPath(path, sc.fs)
+		}
+		return newStaticPath(absPath, sc.fs)
 	}
 	return newSearchByID(tunnelID, sc.c, sc.log, sc.fs)
 }
@@ -106,7 +112,7 @@ func (sc *subcommandContext) readTunnelCredentials(credFinder CredFinder) (conne
 	var credentials connection.Credentials
 	if err = json.Unmarshal(body, &credentials); err != nil {
-		if strings.HasSuffix(filePath, ".pem") {
+		if filepath.Ext(filePath) == ".pem" {
 			return connection.Credentials{}, fmt.Errorf("The tunnel credentials file should be .json but you gave a .pem. " +
 				"The tunnel credentials file was originally created by `cloudflared tunnel create`. " +
 				"You may have accidentally used the filepath to cert.pem, which is generated by `cloudflared tunnel " +

View File

@@ -41,6 +41,7 @@ const (
 	CredFileFlag         = "credentials-file"
 	CredContentsFlag     = "credentials-contents"
 	TunnelTokenFlag      = "token"
+	TunnelTokenFileFlag  = "token-file"
 	overwriteDNSFlagName = "overwrite-dns"

 	noDiagLogsFlagName    = "no-diag-logs"
 	noDiagMetricsFlagName = "no-diag-metrics"
@@ -126,9 +127,14 @@ var (
 	})
 	tunnelTokenFlag = altsrc.NewStringFlag(&cli.StringFlag{
 		Name:    TunnelTokenFlag,
-		Usage:   "The Tunnel token. When provided along with credentials, this will take precedence.",
+		Usage:   "The Tunnel token. When provided along with credentials, this will take precedence. Also takes precedence over token-file",
 		EnvVars: []string{"TUNNEL_TOKEN"},
 	})
+	tunnelTokenFileFlag = altsrc.NewStringFlag(&cli.StringFlag{
+		Name:    TunnelTokenFileFlag,
+		Usage:   "Filepath at which to read the tunnel token. When provided along with credentials, this will take precedence.",
+		EnvVars: []string{"TUNNEL_TOKEN_FILE"},
+	})
 	forceDeleteFlag = &cli.BoolFlag{
 		Name:    flags.Force,
 		Aliases: []string{"f"},
@@ -708,6 +714,7 @@ func buildRunCommand() *cli.Command {
 		selectProtocolFlag,
 		featuresFlag,
 		tunnelTokenFlag,
+		tunnelTokenFileFlag,
 		icmpv4SrcFlag,
 		icmpv6SrcFlag,
 		maxActiveFlowsFlag,
@@ -748,12 +755,22 @@ func runCommand(c *cli.Context) error {
 			"your origin will not be reachable. You should remove the `hostname` property to avoid this warning.")
 	}

+	tokenStr := c.String(TunnelTokenFlag)
+	// Check if tokenStr is blank before checking for tokenFile
+	if tokenStr == "" {
+		if tokenFile := c.String(TunnelTokenFileFlag); tokenFile != "" {
+			data, err := os.ReadFile(tokenFile)
+			if err != nil {
+				return cliutil.UsageError("Failed to read token file: " + err.Error())
+			}
+			tokenStr = strings.TrimSpace(string(data))
+		}
+	}
+
 	// Check if token is provided and if not use default tunnelID flag method
-	if tokenStr := c.String(TunnelTokenFlag); tokenStr != "" {
+	if tokenStr != "" {
 		if token, err := ParseToken(tokenStr); err == nil {
 			return sc.runWithCredentials(token.Credentials())
 		}
 		return cliutil.UsageError("Provided Tunnel token is not valid.")
 	} else {
 		tunnelRef := c.Args().First()

View File

@@ -22,7 +22,7 @@ var (
 		Usage: "The ID or name of the virtual network to which the route is associated to.",
 	}

-	routeAddError = errors.New("You must supply exactly one argument, the ID or CIDR of the route you want to delete")
+	errAddRoute = errors.New("You must supply exactly one argument, the ID or CIDR of the route you want to delete")
 )

@@ -32,7 +32,7 @@ func buildRouteIPSubcommand() *cli.Command {
 		UsageText: "cloudflared tunnel [--config FILEPATH] route COMMAND [arguments...]",
 		Description: `cloudflared can provision routes for any IP space in your corporate network. Users enrolled in
 your Cloudflare for Teams organization can reach those IPs through the Cloudflare WARP
-client. You can then configure L7/L4 filtering on https://dash.teams.cloudflare.com to
+client. You can then configure L7/L4 filtering on https://one.dash.cloudflare.com to
 determine who can reach certain routes.
 By default IP routes all exist within a single virtual network. If you use the same IP
 space(s) in different physical private networks, all meant to be reachable via IP routes,
@@ -187,7 +187,7 @@ func deleteRouteCommand(c *cli.Context) error {
 	}
 	if c.NArg() != 1 {
-		return routeAddError
+		return errAddRoute
 	}

 	var routeId uuid.UUID
@@ -195,7 +195,7 @@ func deleteRouteCommand(c *cli.Context) error {
 	if err != nil {
 		_, network, err := net.ParseCIDR(c.Args().First())
 		if err != nil || network == nil {
-			return routeAddError
+			return errAddRoute
 		}
 		var vnetId *uuid.UUID

View File

@@ -22,7 +22,7 @@ import (
 const (
 	DefaultCheckUpdateFreq        = time.Hour * 24
-	noUpdateInShellMessage        = "cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/run-tunnel/as-a-service/"
+	noUpdateInShellMessage        = "cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configure-tunnels/local-management/as-a-service/"
 	noUpdateOnWindowsMessage      = "cloudflared will not automatically update on Windows systems."
 	noUpdateManagedPackageMessage = "cloudflared will not automatically update if installed by a package manager."
 	isManagedInstallFile          = ".installedFromPackageManager"

View File

@@ -10,9 +10,9 @@ import (
 	"net/url"
 	"os"
 	"os/exec"
+	"path"
 	"path/filepath"
 	"runtime"
-	"strings"
 	"text/template"
 	"time"

@@ -134,7 +134,7 @@ func (v *WorkersVersion) Apply() error {
 	if err := os.Rename(newFilePath, v.targetPath); err != nil {
 		//attempt rollback
-		os.Rename(oldFilePath, v.targetPath)
+		_ = os.Rename(oldFilePath, v.targetPath)
 		return err
 	}
 	os.Remove(oldFilePath)
@@ -181,7 +181,7 @@ func download(url, filepath string, isCompressed bool) error {
 		tr := tar.NewReader(gr)

 		// advance the reader pass the header, which will be the single binary file
-		tr.Next()
+		_, _ = tr.Next()
 		r = tr
 	}

@@ -198,7 +198,7 @@ func download(url, filepath string, isCompressed bool) error {
 // isCompressedFile is a really simple file extension check to see if this is a macos tar and gzipped
 func isCompressedFile(urlstring string) bool {
-	if strings.HasSuffix(urlstring, ".tgz") {
+	if path.Ext(urlstring) == ".tgz" {
 		return true
 	}

@@ -206,7 +206,7 @@ func isCompressedFile(urlstring string) bool {
 	if err != nil {
 		return false
 	}
-	return strings.HasSuffix(u.Path, ".tgz")
+	return path.Ext(u.Path) == ".tgz"
 }

 // writeBatchFile writes a batch file out to disk
@@ -249,7 +249,6 @@ func runWindowsBatch(batchFile string) error {
 		if exitError, ok := err.(*exec.ExitError); ok {
 			return fmt.Errorf("Error during update : %s;", string(exitError.Stderr))
 		}
-	}
 	return err
 }

View File

@@ -26,7 +26,7 @@ import (
 const (
 	windowsServiceName        = "Cloudflared"
 	windowsServiceDescription = "Cloudflared agent"
-	windowsServiceUrl         = "https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/run-tunnel/as-a-service/windows/"
+	windowsServiceUrl         = "https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configure-tunnels/local-management/as-a-service/windows/"

 	recoverActionDelay      = time.Second * 20
 	failureCountResetPeriod = time.Hour * 24

View File

@@ -26,14 +26,20 @@ const (
 	MaxGracePeriod       = time.Minute * 3
 	MaxConcurrentStreams = math.MaxUint32

-	contentTypeHeader = "content-type"
-	sseContentType    = "text/event-stream"
-	grpcContentType   = "application/grpc"
+	contentTypeHeader      = "content-type"
+	contentLengthHeader    = "content-length"
+	transferEncodingHeader = "transfer-encoding"
+	sseContentType         = "text/event-stream"
+	grpcContentType        = "application/grpc"
+	sseJsonContentType     = "application/x-ndjson"
+	chunkTransferEncoding  = "chunked"
 )

 var (
 	switchingProtocolText = fmt.Sprintf("%d %s", http.StatusSwitchingProtocols, http.StatusText(http.StatusSwitchingProtocols))
-	flushableContentTypes = []string{sseContentType, grpcContentType}
+	flushableContentTypes = []string{sseContentType, grpcContentType, sseJsonContentType}
 )

@@ -274,6 +280,22 @@ type ConnectedFuse interface {
 // Helper method to let the caller know what content-types should require a flush on every
 // write to a ResponseWriter.
 func shouldFlush(headers http.Header) bool {
+	// When doing Server Side Events (SSE), some frameworks don't respect the `Content-Type` header.
+	// Therefore, we need to rely on other ways to know whether we should flush on write or not. A good
+	// approach is to assume that responses without `Content-Length` or with `Transfer-Encoding: chunked`
+	// are streams, and therefore, should be flushed right away to the eyeball.
+	// References:
+	// - https://datatracker.ietf.org/doc/html/rfc7230#section-4.1
+	// - https://datatracker.ietf.org/doc/html/rfc9112#section-6.1
+	if contentLength := headers.Get(contentLengthHeader); contentLength == "" {
+		return true
+	}
+	if transferEncoding := headers.Get(transferEncodingHeader); transferEncoding != "" {
+		transferEncoding = strings.ToLower(transferEncoding)
+		if strings.Contains(transferEncoding, chunkTransferEncoding) {
+			return true
+		}
+	}
 	if contentType := headers.Get(contentTypeHeader); contentType != "" {
 		contentType = strings.ToLower(contentType)
 		for _, c := range flushableContentTypes {
@@ -282,7 +304,6 @@ func shouldFlush(headers http.Header) bool {
 			}
 		}
 	}
-
 	return false
 }

View File

@@ -7,10 +7,12 @@ import (
 	"io"
 	"math/big"
 	"net/http"
+	"testing"
 	"time"

 	pkgerrors "github.com/pkg/errors"
 	"github.com/rs/zerolog"
+	"github.com/stretchr/testify/require"

 	cfdflow "github.com/cloudflare/cloudflared/flow"
@@ -209,3 +211,48 @@ func (mcf mockConnectedFuse) Connected() {}
 func (mcf mockConnectedFuse) IsConnected() bool {
 	return true
 }
+
+func TestShouldFlushHeaders(t *testing.T) {
+	tests := []struct {
+		headers     map[string]string
+		shouldFlush bool
+	}{
+		{
+			headers:     map[string]string{contentTypeHeader: "application/json", contentLengthHeader: "1"},
+			shouldFlush: false,
+		},
+		{
+			headers:     map[string]string{contentTypeHeader: "text/html", contentLengthHeader: "1"},
+			shouldFlush: false,
+		},
+		{
+			headers:     map[string]string{contentTypeHeader: "text/event-stream", contentLengthHeader: "1"},
+			shouldFlush: true,
+		},
+		{
+			headers:     map[string]string{contentTypeHeader: "application/grpc", contentLengthHeader: "1"},
+			shouldFlush: true,
+		},
+		{
+			headers:     map[string]string{contentTypeHeader: "application/x-ndjson", contentLengthHeader: "1"},
+			shouldFlush: true,
+		},
+		{
+			headers:     map[string]string{contentTypeHeader: "application/json"},
+			shouldFlush: true,
+		},
+		{
+			headers:     map[string]string{contentTypeHeader: "application/json", contentLengthHeader: "-1", transferEncodingHeader: "chunked"},
+			shouldFlush: true,
+		},
+	}
+
+	for _, test := range tests {
+		headers := http.Header{}
+		for k, v := range test.headers {
+			headers.Add(k, v)
+		}
+		require.Equal(t, test.shouldFlush, shouldFlush(headers))
+	}
+}

View File

@@ -10,7 +10,7 @@ import (
 	"github.com/cloudflare/cloudflared/management"
 	"github.com/cloudflare/cloudflared/tunnelrpc"
-	tunnelpogs "github.com/cloudflare/cloudflared/tunnelrpc/pogs"
+	"github.com/cloudflare/cloudflared/tunnelrpc/pogs"
 )

 // registerClient derives a named tunnel rpc client that can then be used to register and unregister connections.
@@ -36,7 +36,7 @@ type controlStream struct {
 // ControlStreamHandler registers connections with origintunneld and initiates graceful shutdown.
 type ControlStreamHandler interface {
 	// ServeControlStream handles the control plane of the transport in the current goroutine calling this
-	ServeControlStream(ctx context.Context, rw io.ReadWriteCloser, connOptions *tunnelpogs.ConnectionOptions, tunnelConfigGetter TunnelConfigJSONGetter) error
+	ServeControlStream(ctx context.Context, rw io.ReadWriteCloser, connOptions *pogs.ConnectionOptions, tunnelConfigGetter TunnelConfigJSONGetter) error
 	// IsStopped tells whether the method above has finished
 	IsStopped() bool
 }
@@ -78,7 +78,7 @@ func NewControlStream(
 func (c *controlStream) ServeControlStream(
 	ctx context.Context,
 	rw io.ReadWriteCloser,
-	connOptions *tunnelpogs.ConnectionOptions,
+	connOptions *pogs.ConnectionOptions,
 	tunnelConfigGetter TunnelConfigJSONGetter,
 ) error {
 	registrationClient := c.registerClientFunc(ctx, rw, c.registerTimeout)

View File

@@ -19,7 +19,7 @@ import (
 	cfdflow "github.com/cloudflare/cloudflared/flow"

 	"github.com/cloudflare/cloudflared/tracing"
-	tunnelpogs "github.com/cloudflare/cloudflared/tunnelrpc/pogs"
+	"github.com/cloudflare/cloudflared/tunnelrpc/pogs"
 )

 // note: these constants are exported so we can reuse them in the edge-side code
@@ -39,7 +39,7 @@ type HTTP2Connection struct {
 	conn         net.Conn
 	server       *http2.Server
 	orchestrator Orchestrator
-	connOptions  *tunnelpogs.ConnectionOptions
+	connOptions  *pogs.ConnectionOptions
 	observer     *Observer
 	connIndex    uint8

@@ -54,7 +54,7 @@ type HTTP2Connection struct {
 func NewHTTP2Connection(
 	conn net.Conn,
 	orchestrator Orchestrator,
-	connOptions *tunnelpogs.ConnectionOptions,
+	connOptions *pogs.ConnectionOptions,
 	observer *Observer,
 	connIndex uint8,
 	controlStreamHandler ControlStreamHandler,

View File

@@ -22,7 +22,6 @@ import (
 	cfdquic "github.com/cloudflare/cloudflared/quic"
 	"github.com/cloudflare/cloudflared/tracing"
 	"github.com/cloudflare/cloudflared/tunnelrpc/pogs"
-	tunnelpogs "github.com/cloudflare/cloudflared/tunnelrpc/pogs"
 	rpcquic "github.com/cloudflare/cloudflared/tunnelrpc/quic"
 )

@@ -44,7 +43,7 @@ type quicConnection struct {
 	orchestrator         Orchestrator
 	datagramHandler      DatagramSessionHandler
 	controlStreamHandler ControlStreamHandler
-	connOptions          *tunnelpogs.ConnectionOptions
+	connOptions          *pogs.ConnectionOptions
 	connIndex            uint8

 	rpcTimeout time.Duration
@@ -235,7 +234,7 @@ func (q *quicConnection) dispatchRequest(ctx context.Context, stream *rpcquic.Re
 }

 // UpdateConfiguration is the RPC method invoked by edge when there is a new configuration
-func (q *quicConnection) UpdateConfiguration(ctx context.Context, version int32, config []byte) *tunnelpogs.UpdateConfigurationResponse {
+func (q *quicConnection) UpdateConfiguration(ctx context.Context, version int32, config []byte) *pogs.UpdateConfigurationResponse {
 	return q.orchestrator.UpdateConfig(version, config)
 }

@@ -2,11 +2,11 @@ package connection
 import (
 "context"
-"fmt"
 "net"
 "time"
 "github.com/google/uuid"
+"github.com/pkg/errors"
 "github.com/quic-go/quic-go"
 "github.com/rs/zerolog"
@@ -16,10 +16,17 @@ import (
 "github.com/cloudflare/cloudflared/tunnelrpc/pogs"
 )
+var (
+ErrUnsupportedRPCUDPRegistration = errors.New("datagram v3 does not support RegisterUdpSession RPC")
+ErrUnsupportedRPCUDPUnregistration = errors.New("datagram v3 does not support UnregisterUdpSession RPC")
+)
 type datagramV3Connection struct {
 conn quic.Connection
+index uint8
 // datagramMuxer mux/demux datagrams from quic connection
 datagramMuxer cfdquic.DatagramConn
+metrics cfdquic.Metrics
 logger *zerolog.Logger
 }
@@ -40,7 +47,9 @@ func NewDatagramV3Connection(ctx context.Context,
 return &datagramV3Connection{
 conn,
+index,
 datagramMuxer,
+metrics,
 logger,
 }
 }
@@ -50,9 +59,11 @@ func (d *datagramV3Connection) Serve(ctx context.Context) error {
 }
 func (d *datagramV3Connection) RegisterUdpSession(ctx context.Context, sessionID uuid.UUID, dstIP net.IP, dstPort uint16, closeAfterIdleHint time.Duration, traceContext string) (*pogs.RegisterUdpSessionResponse, error) {
-return nil, fmt.Errorf("datagram v3 does not support RegisterUdpSession RPC")
+d.metrics.UnsupportedRemoteCommand(d.index, "register_udp_session")
+return nil, ErrUnsupportedRPCUDPRegistration
 }
 func (d *datagramV3Connection) UnregisterUdpSession(ctx context.Context, sessionID uuid.UUID, message string) error {
-return fmt.Errorf("datagram v3 does not support UnregisterUdpSession RPC")
+d.metrics.UnsupportedRemoteCommand(d.index, "unregister_udp_session")
+return ErrUnsupportedRPCUDPUnregistration
 }
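Switching from ad-hoc `fmt.Errorf` values to package-level sentinels makes the rejection matchable with `errors.Is`. A minimal sketch of how a caller could branch on the sentinel; the `registerUDP` stand-in below is hypothetical, not the real cloudflared wiring:

```go
package main

import (
	"errors"
	"fmt"
)

// Mirrors the sentinel introduced above; the value here is illustrative.
var ErrUnsupportedRPCUDPRegistration = errors.New("datagram v3 does not support RegisterUdpSession RPC")

// registerUDP is a hypothetical stand-in for datagramV3Connection.RegisterUdpSession.
func registerUDP() error {
	return fmt.Errorf("rpc dispatch: %w", ErrUnsupportedRPCUDPRegistration)
}

func main() {
	// errors.Is matches the sentinel even through %w wrapping, which the
	// old fmt.Errorf-per-call version could not guarantee.
	if errors.Is(registerUDP(), ErrUnsupportedRPCUDPRegistration) {
		fmt.Println("edge sent a datagram v2 RPC to a v3 muxer")
	}
}
```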


@@ -3,7 +3,7 @@ package credentials
 import (
 "io/fs"
 "os"
-"path"
+"path/filepath"
 "testing"
 "github.com/stretchr/testify/require"
@@ -13,8 +13,8 @@ func TestCredentialsRead(t *testing.T) {
 file, err := os.ReadFile("test-cloudflare-tunnel-cert-json.pem")
 require.NoError(t, err)
 dir := t.TempDir()
-certPath := path.Join(dir, originCertFile)
-os.WriteFile(certPath, file, fs.ModePerm)
+certPath := filepath.Join(dir, originCertFile)
+_ = os.WriteFile(certPath, file, fs.ModePerm)
 user, err := Read(certPath, &nopLog)
 require.NoError(t, err)
 require.Equal(t, certPath, user.CertPath())
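For context on the `path` to `path/filepath` swap in these tests: `path.Join` always joins with forward slashes and is meant for URL-style paths, while `filepath.Join` uses the operating system's separator. A small illustration:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path.Join targets slash-separated paths (e.g. URLs): always '/'.
	fmt.Println(path.Join("dir", "cert.pem")) // dir/cert.pem on every OS

	// filepath.Join uses the platform separator: dir\cert.pem on Windows,
	// dir/cert.pem elsewhere, which is what file-handling tests want.
	fmt.Println(filepath.Join("dir", "cert.pem"))
}
```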


@@ -4,7 +4,7 @@ import (
 "fmt"
 "io/fs"
 "os"
-"path"
+"path/filepath"
 "testing"
 "github.com/rs/zerolog"
@@ -63,7 +63,7 @@ func TestFindOriginCert_Valid(t *testing.T) {
 file, err := os.ReadFile("test-cloudflare-tunnel-cert-json.pem")
 require.NoError(t, err)
 dir := t.TempDir()
-certPath := path.Join(dir, originCertFile)
+certPath := filepath.Join(dir, originCertFile)
 _ = os.WriteFile(certPath, file, fs.ModePerm)
 path, err := FindOriginCert(certPath, &nopLog)
 require.NoError(t, err)
@@ -72,7 +72,7 @@ func TestFindOriginCert_Valid(t *testing.T) {
 func TestFindOriginCert_Missing(t *testing.T) {
 dir := t.TempDir()
-certPath := path.Join(dir, originCertFile)
+certPath := filepath.Join(dir, originCertFile)
 _, err := FindOriginCert(certPath, &nopLog)
 require.Error(t, err)
 }


@@ -84,7 +84,7 @@ func (s *Session) waitForCloseCondition(ctx context.Context, closeAfterIdle time
 // Closing dstConn cancels read so dstToTransport routine in Serve() can return
 defer s.dstConn.Close()
 if closeAfterIdle == 0 {
-// provide deafult is caller doesn't specify one
+// provide default is caller doesn't specify one
 closeAfterIdle = defaultCloseIdleAfter
 }


@@ -12,6 +12,7 @@ import (
 "github.com/google/uuid"
 "github.com/rs/zerolog"
+"github.com/stretchr/testify/assert"
 "github.com/stretchr/testify/require"
 "golang.org/x/sync/errgroup"
@@ -54,22 +55,22 @@ func testSessionReturns(t *testing.T, closeBy closeMethod, closeAfterIdle time.D
 closedByRemote, err := session.Serve(ctx, closeAfterIdle)
 switch closeBy {
 case closeByContext:
-require.Equal(t, context.Canceled, err)
-require.False(t, closedByRemote)
+assert.Equal(t, context.Canceled, err)
+assert.False(t, closedByRemote)
 case closeByCallingClose:
-require.Equal(t, localCloseReason, err)
-require.Equal(t, localCloseReason.byRemote, closedByRemote)
+assert.Equal(t, localCloseReason, err)
+assert.Equal(t, localCloseReason.byRemote, closedByRemote)
 case closeByTimeout:
-require.Equal(t, SessionIdleErr(closeAfterIdle), err)
-require.False(t, closedByRemote)
+assert.Equal(t, SessionIdleErr(closeAfterIdle), err)
+assert.False(t, closedByRemote)
 }
 close(sessionDone)
 }()
 go func() {
 n, err := session.transportToDst(payload)
-require.NoError(t, err)
-require.Equal(t, len(payload), n)
+assert.NoError(t, err)
+assert.Equal(t, len(payload), n)
 }()
 readBuffer := make([]byte, len(payload)+1)
@@ -84,6 +85,8 @@ func testSessionReturns(t *testing.T, closeBy closeMethod, closeAfterIdle time.D
 cancel()
 case closeByCallingClose:
 session.close(localCloseReason)
+default:
+// ignore
 }
 <-sessionDone
@@ -128,7 +131,7 @@ func testActiveSessionNotClosed(t *testing.T, readFromDst bool, writeToDst bool)
 ctx, cancel := context.WithCancel(context.Background())
 errGroup, ctx := errgroup.WithContext(ctx)
 errGroup.Go(func() error {
-session.Serve(ctx, closeAfterIdle)
+_, _ = session.Serve(ctx, closeAfterIdle)
 if time.Now().Before(startTime.Add(activeTime)) {
 return fmt.Errorf("session closed while it's still active")
 }
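The `require` to `assert` swap inside the spawned goroutines is not cosmetic: `require` failures call `t.FailNow`, which the `testing` package only permits from the goroutine running the test, whereas `assert` records the failure and lets the goroutine return. A minimal sketch of the safe pattern; `doWork` is a hypothetical helper:

```go
package demo

import (
	"sync"
	"testing"

	"github.com/stretchr/testify/assert"
)

func doWork() (int, error) { return 42, nil }

func TestWorkerGoroutine(t *testing.T) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		n, err := doWork()
		// assert marks the test failed without calling t.FailNow, so it
		// is safe off the test goroutine; require would not be.
		assert.NoError(t, err)
		assert.Equal(t, 42, n)
	}()
	wg.Wait()
}
```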


@@ -1,5 +1,7 @@
 package features
+import "slices"
 const (
 FeatureSerializedHeaders = "serialized_headers"
 FeatureQuickReconnects = "quick_reconnects"
@@ -8,7 +10,9 @@ const (
 FeaturePostQuantum = "postquantum"
 FeatureQUICSupportEOF = "support_quic_eof"
 FeatureManagementLogs = "management_logs"
-FeatureDatagramV3 = "support_datagram_v3"
+FeatureDatagramV3_1 = "support_datagram_v3_1"
+DeprecatedFeatureDatagramV3 = "support_datagram_v3" // Deprecated: TUN-9291
 )
 var defaultFeatures = []string{
@@ -19,6 +23,11 @@ var defaultFeatures = []string{
 FeatureManagementLogs,
 }
+// List of features that are no longer in-use.
+var deprecatedFeatures = []string{
+DeprecatedFeatureDatagramV3,
+}
 // Features set by user provided flags
 type staticFeatures struct {
 PostQuantumMode *PostQuantumMode
@@ -40,15 +49,19 @@ const (
 // DatagramV2 is the currently supported datagram protocol for UDP and ICMP packets
 DatagramV2 DatagramVersion = FeatureDatagramV2
 // DatagramV3 is a new datagram protocol for UDP and ICMP packets. It is not backwards compatible with datagram v2.
-DatagramV3 DatagramVersion = FeatureDatagramV3
+DatagramV3 DatagramVersion = FeatureDatagramV3_1
 )
-// Remove any duplicates from the slice
-func Dedup(slice []string) []string {
+// Remove any duplicate features from the list and remove deprecated features
+func dedupAndRemoveFeatures(features []string) []string {
 // Convert the slice into a set
-set := make(map[string]bool, 0)
-for _, str := range slice {
-set[str] = true
+set := map[string]bool{}
+for _, feature := range features {
+// Remove deprecated features from the provided list
+if slices.Contains(deprecatedFeatures, feature) {
+continue
+}
+set[feature] = true
 }
 // Convert the set back into a slice
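The renamed helper now filters as it deduplicates. A standalone sketch of the same logic (not the package's actual code); note that ranging over a map makes the output order unspecified, which is why the tests compare with `require.ElementsMatch`:

```go
package main

import (
	"fmt"
	"slices"
)

var deprecated = []string{"support_datagram_v3"}

func dedupAndRemove(features []string) []string {
	set := map[string]bool{}
	for _, f := range features {
		// Deprecated names are dropped before entering the set.
		if slices.Contains(deprecated, f) {
			continue
		}
		set[f] = true
	}
	out := make([]string, 0, len(set))
	for f := range set {
		out = append(out, f)
	}
	return out
}

func main() {
	in := []string{"support_datagram_v3", "support_datagram_v3_1", "support_datagram_v3_1"}
	fmt.Println(dedupAndRemove(in)) // [support_datagram_v3_1]
}
```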


@@ -7,7 +7,6 @@ import (
 "hash/fnv"
 "net"
 "slices"
-"sync"
 "time"
 "github.com/rs/zerolog"
@@ -15,7 +14,6 @@ import (
 const (
 featureSelectorHostname = "cfd-features.argotunnel.com"
-defaultRefreshFreq = time.Hour * 6
 lookupTimeout = time.Second * 10
 )
@@ -23,32 +21,27 @@ const (
 // If the TXT record is missing a key, the field will unmarshal to the default Go value
 type featuresRecord struct {
-// support_datagram_v3
-DatagramV3Percentage int32 `json:"dv3"`
+// DatagramV3Percentage int32 `json:"dv3"` // Removed in TUN-9291
 // PostQuantumPercentage int32 `json:"pq"` // Removed in TUN-7970
 }
 func NewFeatureSelector(ctx context.Context, accountTag string, cliFeatures []string, pq bool, logger *zerolog.Logger) (*FeatureSelector, error) {
-return newFeatureSelector(ctx, accountTag, logger, newDNSResolver(), cliFeatures, pq, defaultRefreshFreq)
+return newFeatureSelector(ctx, accountTag, logger, newDNSResolver(), cliFeatures, pq)
 }
-// FeatureSelector determines if this account will try new features. It periodically queries a DNS TXT record
-// to see which features are turned on.
+// FeatureSelector determines if this account will try new features; loaded once during startup.
 type FeatureSelector struct {
-accountHash int32
+accountHash uint32
 logger *zerolog.Logger
 resolver resolver
 staticFeatures staticFeatures
 cliFeatures []string
-// lock protects concurrent access to dynamic features
-lock sync.RWMutex
 features featuresRecord
 }
-func newFeatureSelector(ctx context.Context, accountTag string, logger *zerolog.Logger, resolver resolver, cliFeatures []string, pq bool, refreshFreq time.Duration) (*FeatureSelector, error) {
+func newFeatureSelector(ctx context.Context, accountTag string, logger *zerolog.Logger, resolver resolver, cliFeatures []string, pq bool) (*FeatureSelector, error) {
 // Combine default features and user-provided features
 var pqMode *PostQuantumMode
 if pq {
@@ -64,22 +57,16 @@ func newFeatureSelector(ctx context.Context, accountTag string, logger *zerolog.
 logger: logger,
 resolver: resolver,
 staticFeatures: staticFeatures,
-cliFeatures: Dedup(cliFeatures),
+cliFeatures: dedupAndRemoveFeatures(cliFeatures),
 }
-if err := selector.refresh(ctx); err != nil {
+if err := selector.init(ctx); err != nil {
 logger.Err(err).Msg("Failed to fetch features, default to disable")
 }
-go selector.refreshLoop(ctx, refreshFreq)
 return selector, nil
 }
-func (fs *FeatureSelector) accountEnabled(percentage int32) bool {
-return percentage > fs.accountHash
-}
 func (fs *FeatureSelector) PostQuantumMode() PostQuantumMode {
 if fs.staticFeatures.PostQuantumMode != nil {
 return *fs.staticFeatures.PostQuantumMode
@@ -89,11 +76,8 @@ func (fs *FeatureSelector) PostQuantumMode() PostQuantumMode {
 }
 func (fs *FeatureSelector) DatagramVersion() DatagramVersion {
-fs.lock.RLock()
-defer fs.lock.RUnlock()
 // If user provides the feature via the cli, we take it as priority over remote feature evaluation
-if slices.Contains(fs.cliFeatures, FeatureDatagramV3) {
+if slices.Contains(fs.cliFeatures, FeatureDatagramV3_1) {
 return DatagramV3
 }
 // If the user specifies DatagramV2, we also take that over remote
@@ -101,36 +85,16 @@ func (fs *FeatureSelector) DatagramVersion() DatagramVersion {
 return DatagramV2
 }
-if fs.accountEnabled(fs.features.DatagramV3Percentage) {
-return DatagramV3
-}
 return DatagramV2
 }
 // ClientFeatures will return the list of currently available features that cloudflared should provide to the edge.
-//
-// This list is dynamic and can change in-between returns.
 func (fs *FeatureSelector) ClientFeatures() []string {
 // Evaluate any remote features along with static feature list to construct the list of features
-return Dedup(slices.Concat(defaultFeatures, fs.cliFeatures, []string{string(fs.DatagramVersion())}))
+return dedupAndRemoveFeatures(slices.Concat(defaultFeatures, fs.cliFeatures, []string{string(fs.DatagramVersion())}))
 }
-func (fs *FeatureSelector) refreshLoop(ctx context.Context, refreshFreq time.Duration) {
-ticker := time.NewTicker(refreshFreq)
-for {
-select {
-case <-ctx.Done():
-return
-case <-ticker.C:
-err := fs.refresh(ctx)
-if err != nil {
-fs.logger.Err(err).Msg("Failed to refresh feature selector")
-}
-}
-}
-}
-func (fs *FeatureSelector) refresh(ctx context.Context) error {
+func (fs *FeatureSelector) init(ctx context.Context) error {
 record, err := fs.resolver.lookupRecord(ctx)
 if err != nil {
 return err
@@ -141,9 +105,6 @@ func (fs *FeatureSelector) refresh(ctx context.Context) error {
 return err
 }
-fs.lock.Lock()
-defer fs.lock.Unlock()
 fs.features = features
 return nil
@@ -180,8 +141,8 @@ func (dr *dnsResolver) lookupRecord(ctx context.Context) ([]byte, error) {
 return []byte(records[0]), nil
 }
-func switchThreshold(accountTag string) int32 {
+func switchThreshold(accountTag string) uint32 {
 h := fnv.New32a()
 _, _ = h.Write([]byte(accountTag))
-return int32(h.Sum32() % 100)
+return h.Sum32() % 100
 }
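`switchThreshold` buckets an account tag into 0-99 with FNV-1a; with the percentage rollout gone, the plain `uint32` return drops the now-pointless `int32` cast. The hashing itself is unchanged and easy to verify in isolation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// threshold mirrors switchThreshold: a stable bucket in [0, 100) per tag.
func threshold(accountTag string) uint32 {
	h := fnv.New32a()
	_, _ = h.Write([]byte(accountTag))
	return h.Sum32() % 100
}

func main() {
	// The same tag always hashes to the same bucket, which is what made
	// the old percentage-based rollout deterministic per account.
	fmt.Println(threshold("some-account-tag"))
}
```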


@@ -3,9 +3,7 @@ package features
 import (
 "context"
 "encoding/json"
-"fmt"
 "testing"
-"time"
 "github.com/rs/zerolog"
 "github.com/stretchr/testify/require"
@@ -14,33 +12,23 @@ import (
 func TestUnmarshalFeaturesRecord(t *testing.T) {
 tests := []struct {
 record []byte
-expectedPercentage int32
+expectedPercentage uint32
 }{
-{
-record: []byte(`{"dv3":0}`),
-expectedPercentage: 0,
-},
-{
-record: []byte(`{"dv3":39}`),
-expectedPercentage: 39,
-},
-{
-record: []byte(`{"dv3":100}`),
-expectedPercentage: 100,
-},
 {
 record: []byte(`{}`), // Unmarshal to default struct if key is not present
 },
 {
 record: []byte(`{"kyber":768}`), // Unmarshal to default struct if key is not present
 },
+{
+record: []byte(`{"pq": 101,"dv3":100}`), // Expired keys don't unmarshal to anything
+},
 }
 for _, test := range tests {
 var features featuresRecord
 err := json.Unmarshal(test.record, &features)
 require.NoError(t, err)
-require.Equal(t, test.expectedPercentage, features.DatagramV3Percentage, test)
 }
 }
@@ -61,7 +49,7 @@ func TestFeaturePrecedenceEvaluationPostQuantum(t *testing.T) {
 {
 name: "user_specified",
 cli: true,
-expectedFeatures: Dedup(append(defaultFeatures, FeaturePostQuantum)),
+expectedFeatures: dedupAndRemoveFeatures(append(defaultFeatures, FeaturePostQuantum)),
 expectedVersion: PostQuantumStrict,
 },
 }
@@ -69,7 +57,7 @@ func TestFeaturePrecedenceEvaluationPostQuantum(t *testing.T) {
 for _, test := range tests {
 t.Run(test.name, func(t *testing.T) {
 resolver := &staticResolver{record: featuresRecord{}}
-selector, err := newFeatureSelector(context.Background(), test.name, &logger, resolver, []string{}, test.cli, time.Second)
+selector, err := newFeatureSelector(context.Background(), test.name, &logger, resolver, []string{}, test.cli)
 require.NoError(t, err)
 require.ElementsMatch(t, test.expectedFeatures, selector.ClientFeatures())
 require.Equal(t, test.expectedVersion, selector.PostQuantumMode())
@@ -102,44 +90,17 @@ func TestFeaturePrecedenceEvaluationDatagramVersion(t *testing.T) {
 },
 {
 name: "user_specified_v3",
-cli: []string{FeatureDatagramV3},
+cli: []string{FeatureDatagramV3_1},
 remote: featuresRecord{},
-expectedFeatures: Dedup(append(defaultFeatures, FeatureDatagramV3)),
-expectedVersion: FeatureDatagramV3,
-},
-{
-name: "remote_specified_v3",
-cli: []string{},
-remote: featuresRecord{
-DatagramV3Percentage: 100,
-},
-expectedFeatures: Dedup(append(defaultFeatures, FeatureDatagramV3)),
-expectedVersion: FeatureDatagramV3,
-},
-{
-name: "remote_and_user_specified_v3",
-cli: []string{FeatureDatagramV3},
-remote: featuresRecord{
-DatagramV3Percentage: 100,
-},
-expectedFeatures: Dedup(append(defaultFeatures, FeatureDatagramV3)),
-expectedVersion: FeatureDatagramV3,
-},
-{
-name: "remote_v3_and_user_specified_v2",
-cli: []string{FeatureDatagramV2},
-remote: featuresRecord{
-DatagramV3Percentage: 100,
-},
-expectedFeatures: defaultFeatures,
-expectedVersion: DatagramV2,
+expectedFeatures: dedupAndRemoveFeatures(append(defaultFeatures, FeatureDatagramV3_1)),
+expectedVersion: FeatureDatagramV3_1,
 },
 }
 for _, test := range tests {
 t.Run(test.name, func(t *testing.T) {
 resolver := &staticResolver{record: test.remote}
-selector, err := newFeatureSelector(context.Background(), test.name, &logger, resolver, test.cli, false, time.Second)
+selector, err := newFeatureSelector(context.Background(), test.name, &logger, resolver, test.cli, false)
 require.NoError(t, err)
 require.ElementsMatch(t, test.expectedFeatures, selector.ClientFeatures())
 require.Equal(t, test.expectedVersion, selector.DatagramVersion())
@@ -147,75 +108,59 @@ func TestFeaturePrecedenceEvaluationDatagramVersion(t *testing.T) {
 }
 }
-func TestRefreshFeaturesRecord(t *testing.T) {
-// The hash of the accountTag is 82
-accountTag := t.Name()
-threshold := switchThreshold(accountTag)
-percentages := []int32{0, 10, 81, 82, 83, 100, 101, 1000}
-refreshFreq := time.Millisecond * 10
-selector := newTestSelector(t, percentages, false, refreshFreq)
-// Starting out should default to DatagramV2
-require.Equal(t, DatagramV2, selector.DatagramVersion())
-for _, percentage := range percentages {
-if percentage > threshold {
-require.Equal(t, DatagramV3, selector.DatagramVersion())
-} else {
-require.Equal(t, DatagramV2, selector.DatagramVersion())
-}
-time.Sleep(refreshFreq + time.Millisecond)
-}
-// Make sure error doesn't override the last fetched features
-require.Equal(t, DatagramV3, selector.DatagramVersion())
+func TestDeprecatedFeaturesRemoved(t *testing.T) {
+logger := zerolog.Nop()
+tests := []struct {
+name string
+cli []string
+remote featuresRecord
+expectedFeatures []string
+}{
+{
+name: "no_removals",
+cli: []string{},
+remote: featuresRecord{},
+expectedFeatures: defaultFeatures,
+},
+{
+name: "support_datagram_v3",
+cli: []string{DeprecatedFeatureDatagramV3},
+remote: featuresRecord{},
+expectedFeatures: defaultFeatures,
+},
+}
+for _, test := range tests {
+t.Run(test.name, func(t *testing.T) {
+resolver := &staticResolver{record: test.remote}
+selector, err := newFeatureSelector(context.Background(), test.name, &logger, resolver, test.cli, false)
+require.NoError(t, err)
+require.ElementsMatch(t, test.expectedFeatures, selector.ClientFeatures())
+})
+}
 }
 func TestStaticFeatures(t *testing.T) {
-percentages := []int32{0}
+percentages := []uint32{0}
 // PostQuantum Enabled from user flag
-selector := newTestSelector(t, percentages, true, time.Millisecond*10)
+selector := newTestSelector(t, percentages, true)
 require.Equal(t, PostQuantumStrict, selector.PostQuantumMode())
 // PostQuantum Disabled (or not set)
-selector = newTestSelector(t, percentages, false, time.Millisecond*10)
+selector = newTestSelector(t, percentages, false)
 require.Equal(t, PostQuantumPrefer, selector.PostQuantumMode())
 }
-func newTestSelector(t *testing.T, percentages []int32, pq bool, refreshFreq time.Duration) *FeatureSelector {
+func newTestSelector(t *testing.T, percentages []uint32, pq bool) *FeatureSelector {
 accountTag := t.Name()
 logger := zerolog.Nop()
-resolver := &mockResolver{
-percentages: percentages,
-}
-selector, err := newFeatureSelector(context.Background(), accountTag, &logger, resolver, []string{}, pq, refreshFreq)
+selector, err := newFeatureSelector(context.Background(), accountTag, &logger, &staticResolver{}, []string{}, pq)
 require.NoError(t, err)
 return selector
 }
-type mockResolver struct {
-nextIndex int
-percentages []int32
-}
-func (mr *mockResolver) lookupRecord(ctx context.Context) ([]byte, error) {
-if mr.nextIndex >= len(mr.percentages) {
-return nil, fmt.Errorf("no more record to lookup")
-}
-record, err := json.Marshal(featuresRecord{
-DatagramV3Percentage: mr.percentages[mr.nextIndex],
-})
-mr.nextIndex++
-return record, err
-}
 type staticResolver struct {
 record featuresRecord
 }


@@ -61,7 +61,7 @@ def assert_tag_exists(repo, version):
 raise Exception("Tag {} not found".format(version))
-def get_or_create_release(repo, version, dry_run=False):
+def get_or_create_release(repo, version, dry_run=False, is_draft=False):
 """
 Get a Github Release matching the version tag or create a new one.
 If a conflict occurs on creation, attempt to fetch the Release on last time
@@ -81,8 +81,11 @@ def get_or_create_release(repo, version, dry_run=False):
 return
 try:
-logging.info("Creating release %s", version)
-return repo.create_git_release(version, version, "")
+if is_draft:
+logging.info("Drafting release %s", version)
+else:
+logging.info("Creating release %s", version)
+return repo.create_git_release(version, version, "", is_draft)
 except GithubException as e:
 errors = e.data.get("errors", [])
 if e.status == 422 and any(
@@ -129,6 +132,10 @@ def parse_args():
 "--dry-run", action="store_true", help="Do not create release or upload asset"
 )
+parser.add_argument(
+"--draft", action="store_true", help="Create a draft release"
+)
 args = parser.parse_args()
 is_valid = True
 if not args.release_version:
@@ -292,7 +299,7 @@ def main():
 for filename in onlyfiles:
 binary_path = os.path.join(args.path, filename)
 assert_asset_version(binary_path, args.release_version)
-release = get_or_create_release(repo, args.release_version, args.dry_run)
+release = get_or_create_release(repo, args.release_version, args.dry_run, args.draft)
 for filename in onlyfiles:
 binary_path = os.path.join(args.path, filename)
 upload_asset(release, binary_path, filename, args.release_version, args.kv_account_id, args.namespace_id,


@@ -4,7 +4,6 @@ import (
 "fmt"
 "io"
 "os"
-"path"
 "path/filepath"
 "sync"
 "time"
@@ -249,7 +248,7 @@ func createRollingLogger(config RollingConfig) (io.Writer, error) {
 }
 rotatingFileInit.writer = &lumberjack.Logger{
-Filename: path.Join(config.Dirname, config.Filename),
+Filename: filepath.Join(config.Dirname, config.Filename),
 MaxBackups: config.maxBackups,
 MaxSize: config.maxSize,
 MaxAge: config.maxAge,


@@ -74,7 +74,7 @@ type EventLog struct {
 type LogEventType int8
 const (
-// Cloudflared events are signficant to cloudflared operations like connection state changes.
+// Cloudflared events are significant to cloudflared operations like connection state changes.
 // Cloudflared is also the default event type for any events that haven't been separated into a proper event type.
 Cloudflared LogEventType = iota
 HTTP
@@ -129,7 +129,7 @@ func (e *LogEventType) UnmarshalJSON(data []byte) error {
 // LogLevel corresponds to the zerolog logging levels
 // "panic", "fatal", and "trace" are exempt from this list as they are rarely used and, at least
-// the the first two are limited to failure conditions that lead to cloudflared shutting down.
+// the first two are limited to failure conditions that lead to cloudflared shutting down.
 type LogLevel int8
 const (


@@ -11,12 +11,15 @@ import (
 )
 const (
 namespace = "quic"
+ConnectionIndexMetricLabel = "conn_index"
+frameTypeMetricLabel = "frame_type"
+packetTypeMetricLabel = "packet_type"
+reasonMetricLabel = "reason"
 )
 var (
-clientConnLabels = []string{"conn_index"}
 clientMetrics = struct {
 totalConnections prometheus.Counter
 closedConnections prometheus.Counter
 maxUDPPayloadSize *prometheus.GaugeVec
@@ -35,7 +38,7 @@ var (
 congestionState *prometheus.GaugeVec
 }{
 totalConnections: prometheus.NewCounter(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "total_connections",
@@ -43,7 +46,7 @@ var (
 },
 ),
 closedConnections: prometheus.NewCounter(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "closed_connections",
@@ -57,70 +60,70 @@ var (
 Name: "max_udp_payload",
 Help: "Maximum UDP payload size in bytes for a QUIC packet",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 sentFrames: prometheus.NewCounterVec(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "sent_frames",
 Help: "Number of frames that have been sent through a connection",
 },
-append(clientConnLabels, "frame_type"),
+[]string{ConnectionIndexMetricLabel, frameTypeMetricLabel},
 ),
 sentBytes: prometheus.NewCounterVec(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "sent_bytes",
 Help: "Number of bytes that have been sent through a connection",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 receivedFrames: prometheus.NewCounterVec(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "received_frames",
 Help: "Number of frames that have been received through a connection",
 },
-append(clientConnLabels, "frame_type"),
+[]string{ConnectionIndexMetricLabel, frameTypeMetricLabel},
 ),
 receivedBytes: prometheus.NewCounterVec(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "receive_bytes",
 Help: "Number of bytes that have been received through a connection",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 bufferedPackets: prometheus.NewCounterVec(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "buffered_packets",
 Help: "Number of bytes that have been buffered on a connection",
 },
-append(clientConnLabels, "packet_type"),
+[]string{ConnectionIndexMetricLabel, packetTypeMetricLabel},
 ),
 droppedPackets: prometheus.NewCounterVec(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "dropped_packets",
 Help: "Number of bytes that have been dropped on a connection",
 },
-append(clientConnLabels, "packet_type", "reason"),
+[]string{ConnectionIndexMetricLabel, packetTypeMetricLabel, reasonMetricLabel},
 ),
 lostPackets: prometheus.NewCounterVec(
-prometheus.CounterOpts{
+prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "lost_packets",
 Help: "Number of packets that have been lost from a connection",
 },
-append(clientConnLabels, "reason"),
+[]string{ConnectionIndexMetricLabel, reasonMetricLabel},
 ),
 minRTT: prometheus.NewGaugeVec(
 prometheus.GaugeOpts{
@@ -129,7 +132,7 @@ var (
 Name: "min_rtt",
 Help: "Lowest RTT measured on a connection in millisec",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 latestRTT: prometheus.NewGaugeVec(
 prometheus.GaugeOpts{
@@ -138,7 +141,7 @@ var (
 Name: "latest_rtt",
 Help: "Latest RTT measured on a connection",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 smoothedRTT: prometheus.NewGaugeVec(
 prometheus.GaugeOpts{
@@ -147,7 +150,7 @@ var (
 Name: "smoothed_rtt",
 Help: "Calculated smoothed RTT measured on a connection in millisec",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 mtu: prometheus.NewGaugeVec(
 prometheus.GaugeOpts{
@@ -156,7 +159,7 @@ var (
 Name: "mtu",
 Help: "Current maximum transmission unit (MTU) of a connection",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 congestionWindow: prometheus.NewGaugeVec(
 prometheus.GaugeOpts{
@@ -165,7 +168,7 @@ var (
 Name: "congestion_window",
 Help: "Current congestion window size",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 congestionState: prometheus.NewGaugeVec(
 prometheus.GaugeOpts{
@@ -174,13 +177,13 @@ var (
 Name: "congestion_state",
 Help: "Current congestion control state. See https://pkg.go.dev/github.com/quic-go/quic-go@v0.45.0/logging#CongestionState for what each value maps to",
 },
-clientConnLabels,
+[]string{ConnectionIndexMetricLabel},
 ),
 }
 registerClient = sync.Once{}
-packetTooBigDropped = prometheus.NewCounter(prometheus.CounterOpts{
+packetTooBigDropped = prometheus.NewCounter(prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: "client",
 Name: "packet_too_big_dropped",


@@ -2,82 +2,98 @@ package v3
 import (
 "github.com/prometheus/client_golang/prometheus"
+"github.com/cloudflare/cloudflared/quic"
 )
 const (
 namespace = "cloudflared"
 subsystem = "udp"
+commandMetricLabel = "command"
 )
 type Metrics interface {
-IncrementFlows()
-DecrementFlows()
-PayloadTooLarge()
-RetryFlowResponse()
-MigrateFlow()
+IncrementFlows(connIndex uint8)
+DecrementFlows(connIndex uint8)
+PayloadTooLarge(connIndex uint8)
+RetryFlowResponse(connIndex uint8)
+MigrateFlow(connIndex uint8)
+UnsupportedRemoteCommand(connIndex uint8, command string)
 }
 type metrics struct {
-activeUDPFlows prometheus.Gauge
-totalUDPFlows prometheus.Counter
-payloadTooLarge prometheus.Counter
-retryFlowResponses prometheus.Counter
-migratedFlows prometheus.Counter
+activeUDPFlows *prometheus.GaugeVec
+totalUDPFlows *prometheus.CounterVec
+payloadTooLarge *prometheus.CounterVec
+retryFlowResponses *prometheus.CounterVec
+migratedFlows *prometheus.CounterVec
+unsupportedRemoteCommands *prometheus.CounterVec
 }
-func (m *metrics) IncrementFlows() {
-m.totalUDPFlows.Inc()
-m.activeUDPFlows.Inc()
+func (m *metrics) IncrementFlows(connIndex uint8) {
+m.totalUDPFlows.WithLabelValues(string(connIndex)).Inc()
+m.activeUDPFlows.WithLabelValues(string(connIndex)).Inc()
 }
-func (m *metrics) DecrementFlows() {
-m.activeUDPFlows.Dec()
+func (m *metrics) DecrementFlows(connIndex uint8) {
+m.activeUDPFlows.WithLabelValues(string(connIndex)).Dec()
 }
-func (m *metrics) PayloadTooLarge() {
-m.payloadTooLarge.Inc()
+func (m *metrics) PayloadTooLarge(connIndex uint8) {
+m.payloadTooLarge.WithLabelValues(string(connIndex)).Inc()
 }
-func (m *metrics) RetryFlowResponse() {
-m.retryFlowResponses.Inc()
+func (m *metrics) RetryFlowResponse(connIndex uint8) {
+m.retryFlowResponses.WithLabelValues(string(connIndex)).Inc()
 }
-func (m *metrics) MigrateFlow() {
-m.migratedFlows.Inc()
+func (m *metrics) MigrateFlow(connIndex uint8) {
+m.migratedFlows.WithLabelValues(string(connIndex)).Inc()
 }
+func (m *metrics) UnsupportedRemoteCommand(connIndex uint8, command string) {
+m.unsupportedRemoteCommands.WithLabelValues(string(connIndex), command).Inc()
+}
 func NewMetrics(registerer prometheus.Registerer) Metrics {
 m := &metrics{
-activeUDPFlows: prometheus.NewGauge(prometheus.GaugeOpts{
+activeUDPFlows: prometheus.NewGaugeVec(prometheus.GaugeOpts{
 Namespace: namespace,
 Subsystem: subsystem,
 Name: "active_flows",
 Help: "Concurrent count of UDP flows that are being proxied to any origin",
-}),
+}, []string{quic.ConnectionIndexMetricLabel}),
-totalUDPFlows: prometheus.NewCounter(prometheus.CounterOpts{
+totalUDPFlows: prometheus.NewCounterVec(prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: subsystem,
 Name: "total_flows",
 Help: "Total count of UDP flows that have been proxied to any origin",
-}),
+}, []string{quic.ConnectionIndexMetricLabel}),
-payloadTooLarge: prometheus.NewCounter(prometheus.CounterOpts{
+payloadTooLarge: prometheus.NewCounterVec(prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: subsystem,
 Name: "payload_too_large",
 Help: "Total count of UDP flows that have had origin payloads that are too large to proxy",
-}),
+}, []string{quic.ConnectionIndexMetricLabel}),
-retryFlowResponses: prometheus.NewCounter(prometheus.CounterOpts{
+retryFlowResponses: prometheus.NewCounterVec(prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: subsystem,
 Name: "retry_flow_responses",
 Help: "Total count of UDP flows that have had to send their registration response more than once",
-}),
+}, []string{quic.ConnectionIndexMetricLabel}),
-migratedFlows: prometheus.NewCounter(prometheus.CounterOpts{
+migratedFlows: prometheus.NewCounterVec(prometheus.CounterOpts{ //nolint:promlinter
 Namespace: namespace,
 Subsystem: subsystem,
 Name: "migrated_flows",
 Help: "Total count of UDP flows have been migrated across local connections",
-}),
+}, []string{quic.ConnectionIndexMetricLabel}),
+unsupportedRemoteCommands: prometheus.NewCounterVec(prometheus.CounterOpts{
+Namespace: namespace,
+Subsystem: subsystem,
+Name: "unsupported_remote_command_total",
+Help: "Total count of unsupported remote RPC commands for the ",
+}, []string{quic.ConnectionIndexMetricLabel, commandMetricLabel}),
 }
 registerer.MustRegister(
 m.activeUDPFlows,
@@ -85,6 +101,7 @@ func NewMetrics(registerer prometheus.Registerer) Metrics {
 m.payloadTooLarge,
 m.retryFlowResponses,
 m.migratedFlows,
+m.unsupportedRemoteCommands,
 )
 return m
 }
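One caveat worth flagging on the new `WithLabelValues(string(connIndex))` calls: converting an integer to `string` in Go yields the UTF-8 encoding of that code point, not its decimal text, so connection index 1 becomes the control character "\x01" rather than "1". If human-readable decimal labels are the intent, `strconv` is the usual spelling; the snippet below only demonstrates the language behavior, not what cloudflared ultimately ships:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	var connIndex uint8 = 1

	// Integer-to-string conversion produces the code point's encoding:
	// this prints "\x01", not "1".
	fmt.Printf("%q\n", string(rune(connIndex)))

	// strconv renders the decimal text that label consumers expect.
	fmt.Printf("%q\n", strconv.Itoa(int(connIndex))) // "1"
}
```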


@@ -2,8 +2,9 @@ package v3_test
 type noopMetrics struct{}
-func (noopMetrics) IncrementFlows() {}
-func (noopMetrics) DecrementFlows() {}
-func (noopMetrics) PayloadTooLarge() {}
-func (noopMetrics) RetryFlowResponse() {}
-func (noopMetrics) MigrateFlow() {}
+func (noopMetrics) IncrementFlows(connIndex uint8) {}
+func (noopMetrics) DecrementFlows(connIndex uint8) {}
+func (noopMetrics) PayloadTooLarge(connIndex uint8) {}
+func (noopMetrics) RetryFlowResponse(connIndex uint8) {}
+func (noopMetrics) MigrateFlow(connIndex uint8) {}
+func (noopMetrics) UnsupportedRemoteCommand(connIndex uint8, command string) {}


@@ -264,10 +264,10 @@ func (c *datagramConn) handleSessionRegistrationDatagram(ctx context.Context, da
 return
 }
 log = log.With().Str(logSrcKey, session.LocalAddr().String()).Logger()
-c.metrics.IncrementFlows()
+c.metrics.IncrementFlows(c.index)
 // Make sure to eventually remove the session from the session manager when the session is closed
 defer c.sessionManager.UnregisterSession(session.ID())
-defer c.metrics.DecrementFlows()
+defer c.metrics.DecrementFlows(c.index)
 // Respond that we are able to process the new session
 err = c.SendUDPSessionResponse(datagram.RequestID, ResponseOk)
@@ -315,7 +315,7 @@ func (c *datagramConn) handleSessionAlreadyRegistered(requestID RequestID, logge
 // The session is already running in another routine so we want to restart the idle timeout since no proxied
 // packets have come down yet.
 session.ResetIdleTimer()
-c.metrics.RetryFlowResponse()
+c.metrics.RetryFlowResponse(c.index)
 logger.Debug().Msgf("flow registration response retry")
 }


@@ -781,12 +781,12 @@ func newICMPDatagram(pk *packet.ICMP) []byte {
 // Cancel the provided context and make sure it closes with the expected cancellation error
 func assertContextClosed(t *testing.T, ctx context.Context, done <-chan error, cancel context.CancelCauseFunc) {
-cancel(expectedContextCanceled)
+cancel(errExpectedContextCanceled)
 err := <-done
 if !errors.Is(err, context.Canceled) {
 t.Fatal(err)
 }
-if !errors.Is(context.Cause(ctx), expectedContextCanceled) {
+if !errors.Is(context.Cause(ctx), errExpectedContextCanceled) {
 t.Fatal(err)
 }
 }


@@ -27,11 +27,11 @@ const (
 )
 // SessionCloseErr indicates that the session's Close method was called.
-var SessionCloseErr error = errors.New("flow was closed directly")
+var SessionCloseErr error = errors.New("flow was closed directly") //nolint:errname
 // SessionIdleErr is returned when the session was closed because there was no communication
 // in either direction over the session for the timeout period.
-type SessionIdleErr struct {
+type SessionIdleErr struct { //nolint:errname
 timeout time.Duration
 }
@@ -149,7 +149,8 @@ func (s *session) Migrate(eyeball DatagramConn, ctx context.Context, logger *zer
 }
 // The session is already running so we want to restart the idle timeout since no proxied packets have come down yet.
 s.markActive()
-s.metrics.MigrateFlow()
+connectionIndex := eyeball.ID()
+s.metrics.MigrateFlow(connectionIndex)
 }
 func (s *session) Serve(ctx context.Context) error {
@@ -160,7 +161,7 @@ func (s *session) Serve(ctx context.Context) error {
 // To perform a zero copy write when passing the datagram to the connection, we prepare the buffer with
 // the required datagram header information. We can reuse this buffer for this session since the header is the
 // same for the each read.
-MarshalPayloadHeaderTo(s.id, readBuffer[:DatagramPayloadHeaderLen])
+_ = MarshalPayloadHeaderTo(s.id, readBuffer[:DatagramPayloadHeaderLen])
 for {
 // Read from the origin UDP socket
 n, err := s.origin.Read(readBuffer[DatagramPayloadHeaderLen:])
@@ -177,7 +178,7 @@ func (s *session) Serve(ctx context.Context) error {
 continue
 }
 if n > maxDatagramPayloadLen {
-s.metrics.PayloadTooLarge()
+connectionIndex := s.ConnectionID()
+s.metrics.PayloadTooLarge(connectionIndex)
 s.log.Error().Int(logPacketSizeKey, n).Msg("flow (origin) packet read was too large and was dropped")
 continue
 }
@@ -241,7 +243,7 @@ func (s *session) waitForCloseCondition(ctx context.Context, closeAfterIdle time
 // Closing the session at the end cancels read so Serve() can return
 defer s.Close()
 if closeAfterIdle == 0 {
-// provide deafult is caller doesn't specify one
+// Provided that the default caller doesn't specify one
 closeAfterIdle = defaultCloseIdleAfter
 }
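The `_ = MarshalPayloadHeaderTo(...)` line above is part of a zero-copy pattern: the session reserves the first header-length bytes of one reusable buffer, writes the constant header once, and then reads each origin packet directly behind it, so the whole slice can be handed to the QUIC connection without a per-packet copy. A toy illustration; the 17-byte header length is borrowed from the test code below, not a protocol constant to rely on:

```go
package main

import "fmt"

const headerLen = 17 // illustrative; mirrors DatagramPayloadHeaderLen's role

func main() {
	buf := make([]byte, headerLen+1280)

	// Write the per-session header once; it is identical for every read.
	copy(buf[:headerLen], "session-header-17")

	// Each origin read lands right after the header, so the datagram is
	// sent as buf[:headerLen+n] with no extra copy per packet.
	n := copy(buf[headerLen:], "payload bytes")
	fmt.Printf("would send %d bytes\n", headerLen+n)
}
```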


@@ -17,7 +17,7 @@ import (
 )
 var (
-expectedContextCanceled = errors.New("expected context canceled")
+errExpectedContextCanceled = errors.New("expected context canceled")
 testOriginAddr = net.UDPAddrFromAddrPort(netip.MustParseAddrPort("127.0.0.1:0"))
 testLocalAddr = net.UDPAddrFromAddrPort(netip.MustParseAddrPort("127.0.0.1:0"))
@@ -40,7 +40,7 @@ func testSessionWrite(t *testing.T, payload []byte) {
 serverRead := make(chan []byte, 1)
 go func() {
 read := make([]byte, 1500)
-server.Read(read[:])
+_, _ = server.Read(read[:])
 serverRead <- read
 }()
 // Create session and write to origin
@@ -110,12 +110,12 @@ func testSessionServe_Origin(t *testing.T, payload []byte) {
 case data := <-eyeball.recvData:
 // check received data matches provided from origin
 expectedData := makePayload(1500)
-v3.MarshalPayloadHeaderTo(testRequestID, expectedData[:])
+_ = v3.MarshalPayloadHeaderTo(testRequestID, expectedData[:])
 copy(expectedData[17:], payload)
 if !slices.Equal(expectedData[:v3.DatagramPayloadHeaderLen+len(payload)], data) {
 t.Fatal("expected datagram did not equal expected")
 }
-cancel(expectedContextCanceled)
+cancel(errExpectedContextCanceled)
 case err := <-ctx.Done():
 // we expect the payload to return before the context to cancel on the session
 t.Fatal(err)
@@ -125,7 +125,7 @@ func testSessionServe_Origin(t *testing.T, payload []byte) {
 if !errors.Is(err, context.Canceled) {
 t.Fatal(err)
 }
-if !errors.Is(context.Cause(ctx), expectedContextCanceled) {
+if !errors.Is(context.Cause(ctx), errExpectedContextCanceled) {
 t.Fatal(err)
 }
 }
@@ -198,7 +198,7 @@ func TestSessionServe_Migrate(t *testing.T) {
 // Origin sends data
 payload2 := []byte{0xde}
-pipe1.Write(payload2)
+_, _ = pipe1.Write(payload2)
 // Expect write to eyeball2
 data := <-eyeball2.recvData
@@ -249,13 +249,13 @@ func TestSessionServe_Migrate_CloseContext2(t *testing.T) {
 t.Fatalf("expected session to still be running")
 default:
 }
-if context.Cause(eyeball1Ctx) != contextCancelErr {
+if !errors.Is(context.Cause(eyeball1Ctx), contextCancelErr) {
 t.Fatalf("first eyeball context should be cancelled manually: %+v", context.Cause(eyeball1Ctx))
 }
 // Origin sends data
 payload2 := []byte{0xde}
-pipe1.Write(payload2)
+_, _ = pipe1.Write(payload2)
 // Expect write to eyeball2
 data := <-eyeball2.recvData
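The `!=` to `errors.Is` change in `TestSessionServe_Migrate_CloseContext2` matters once errors are wrapped: direct comparison fails for a `fmt.Errorf("...: %w", err)` value, while `errors.Is` unwraps the chain. In brief:

```go
package main

import (
	"errors"
	"fmt"
)

var errCancel = errors.New("expected context canceled")

func main() {
	wrapped := fmt.Errorf("shutting down: %w", errCancel)

	fmt.Println(wrapped == errCancel)          // false: distinct values
	fmt.Println(errors.Is(wrapped, errCancel)) // true: %w chain is walked
}
```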


@@ -79,8 +79,8 @@ func (b *BackoffHandler) BackoffTimer() <-chan time.Time {
 } else {
 b.retries++
 }
-maxTimeToWait := time.Duration(b.GetBaseTime() * 1 << (b.retries))
+maxTimeToWait := b.GetBaseTime() * (1 << b.retries)
-timeToWait := time.Duration(rand.Int63n(maxTimeToWait.Nanoseconds()))
+timeToWait := time.Duration(rand.Int63n(maxTimeToWait.Nanoseconds())) // #nosec G404
 return b.Clock.After(timeToWait)
 }
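The rewritten cap keeps the arithmetic in `time.Duration` and makes the doubling explicit: the cap is base times 2^retries, and the actual wait is drawn uniformly from [0, cap) for jitter (the `// #nosec G404` marker just tells gosec that non-cryptographic randomness is fine here). A runnable sketch of the same schedule:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	base := 2 * time.Second
	for retries := uint(0); retries < 4; retries++ {
		// Cap doubles each retry: 2s, 4s, 8s, 16s, ...
		maxWait := base * (1 << retries)
		// Jitter: sample uniformly below the cap so many clients
		// reconnecting at once do not synchronize their retries.
		wait := time.Duration(rand.Int63n(maxWait.Nanoseconds()))
		fmt.Printf("retry %d: cap=%v wait=%v\n", retries, maxWait, wait)
	}
}
```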
@@ -99,11 +99,11 @@ func (b *BackoffHandler) Backoff(ctx context.Context) bool {
 }
 }
-// Sets a grace period within which the the backoff timer is maintained. After the grace
+// Sets a grace period within which the backoff timer is maintained. After the grace
 // period expires, the number of retries & backoff duration is reset.
 func (b *BackoffHandler) SetGracePeriod() time.Duration {
 maxTimeToWait := b.GetBaseTime() * 2 << (b.retries + 1)
-timeToWait := time.Duration(rand.Int63n(maxTimeToWait.Nanoseconds()))
+timeToWait := time.Duration(rand.Int63n(maxTimeToWait.Nanoseconds())) // #nosec G404
 b.resetDeadline = b.Clock.Now().Add(timeToWait)
 return timeToWait
@@ -118,7 +118,7 @@ func (b BackoffHandler) GetBaseTime() time.Duration {
 // Retries returns the number of retries consumed so far.
 func (b *BackoffHandler) Retries() int {
-return int(b.retries)
+return int(b.retries) // #nosec G115
 }
 func (b *BackoffHandler) ReachedMaxRetries() bool {