We take advantage of the JWTValidator middleware and attach it to an
ingress rule based on Access configurations. We attach the Validator
directly to the ingress rules because we want to take advantage of the
caching and token revocation handling that come with go-oidc.
This adds a new verifier interface that can be attached to ingress.Rule.
It acts as a middleware layer that gets executed at the start of
proxy.ProxyHTTP.
A JWT validator implementation for this verifier is also provided. The
validator downloads the public key from the Access team's endpoint and
uses it to verify the JWT sent to cloudflared against the audtag
(clientID) information provided in the config.
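To make the shape concrete, here is a minimal sketch of such a verifier built on go-oidc. The type and function names (Verifier, accessValidator, NewAccessValidator) are illustrative, not cloudflared's actual ones; the certs path and request header are the ones Cloudflare Access documents:

```go
package ingress

import (
	"context"
	"fmt"
	"net/http"

	"github.com/coreos/go-oidc/v3/oidc"
)

// Verifier is the interface a Rule could carry; ProxyHTTP would call
// Verify before forwarding the request upstream.
type Verifier interface {
	Verify(ctx context.Context, r *http.Request) error
}

type accessValidator struct {
	verifier *oidc.IDTokenVerifier
}

// NewAccessValidator builds a validator for one Access team domain and
// audtag. go-oidc's RemoteKeySet handles downloading and caching the
// public keys for us.
func NewAccessValidator(ctx context.Context, teamDomain, audTag string) Verifier {
	certsURL := fmt.Sprintf("%s/cdn-cgi/access/certs", teamDomain)
	keySet := oidc.NewRemoteKeySet(ctx, certsURL)
	config := &oidc.Config{ClientID: audTag}
	return &accessValidator{verifier: oidc.NewVerifier(teamDomain, keySet, config)}
}

func (a *accessValidator) Verify(ctx context.Context, r *http.Request) error {
	// Access puts the JWT in this header on proxied requests.
	token := r.Header.Get("Cf-Access-Jwt-Assertion")
	if token == "" {
		return fmt.Errorf("no Access JWT in request")
	}
	_, err := a.verifier.Verify(ctx, token)
	return err
}
```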
A funnel is an abstraction for one source to many destinations.
As part of this refactoring, the logic shared between Darwin and Linux is
moved into icmp_posix.
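In code, the funnel idea reduces to fan-out. This is an illustrative sketch with hypothetical names, not the actual type (the real one lives in the ICMP code and is more involved):

```go
package main

// Funnel forwards everything read from a single source to many
// destinations; an illustrative sketch of the abstraction.
type Funnel[T any] struct {
	Source <-chan T
	Dests  []chan<- T
}

// Run fans each item from Source out to every destination, closing the
// destinations once the source is drained.
func (f *Funnel[T]) Run() {
	for item := range f.Source {
		for _, d := range f.Dests {
			d <- item
		}
	}
	for _, d := range f.Dests {
		close(d)
	}
}
```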
I can only reproduce the flakiness (the hello world server still
responding when it should already have shut down) on Windows, both in
TeamCity and in my local VM. Locally, it only happens when the machine
is under high load.
In any case, it is valid for the proxies to take some time to shut down,
since they handle shutdown via channels, asynchronously with regard to
the event that updates the configuration.
Hence, nothing is wrong as long as they eventually shut down, which the
test still verifies.
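The "eventually" check amounts to polling with a deadline. A self-contained sketch of that pattern using testify's require.Eventually; fakeProxy and the delayed goroutine are stand-ins for the real proxy and the config-update event:

```go
package proxy

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// fakeProxy is a stand-in for the real proxy under test.
type fakeProxy struct{ running atomic.Bool }

func (p *fakeProxy) IsRunning() bool { return p.running.Load() }

func TestProxyEventuallyShutsDown(t *testing.T) {
	proxy := &fakeProxy{}
	proxy.running.Store(true)

	// In the real test this is the config-update event; shutdown then
	// proceeds asynchronously over channels.
	go func() {
		time.Sleep(500 * time.Millisecond)
		proxy.running.Store(false)
	}()

	// Nothing is wrong as long as the proxy eventually shuts down, so
	// poll with a deadline instead of asserting immediately.
	require.Eventually(t, func() bool {
		return !proxy.IsRunning()
	}, 10*time.Second, 100*time.Millisecond, "proxy should eventually shut down")
}
```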
This test was failing on Windows. We did not catch it before because our
TeamCity Windows builds were ignoring failed unit tests (TUN-6727).
- the fix is implementing WriteString for mockSSERespWriter
- the reason is that cfio.Copy was calling WriteString, not the Write
method, and thus never triggered the channel usage that lets the test
continue
- mockSSERespWriter already provided a valid WriteString implementation
via ResponseRecorder, which it inherits through the embedded
mockHTTPRespWriter, so string writes bypassed the mock's own Write (see
the sketch after this list)
- it is not clear why this only happened on Windows
- the test was also changed to be a top-level test, since it did not
share any code with the other sub-tests in the same top-level test
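A sketch of the pitfall and the fix; the struct layouts are assumed for illustration (the mock names come from the test, the exact embedding may differ):

```go
package main

import "net/http/httptest"

// mockHTTPRespWriter embeds ResponseRecorder, which already implements
// WriteString.
type mockHTTPRespWriter struct {
	*httptest.ResponseRecorder
}

type mockSSERespWriter struct {
	*mockHTTPRespWriter
	writeNotification chan []byte
}

// Write feeds the channel the test blocks on.
func (w *mockSSERespWriter) Write(data []byte) (int, error) {
	w.writeNotification <- data
	return len(data), nil
}

// The fix: without this override, string writes are promoted to the
// embedded ResponseRecorder's WriteString, which calls its own Write
// internally (Go embedding has no virtual dispatch), so the channel was
// never fed and the test hung.
func (w *mockSSERespWriter) WriteString(str string) (int, error) {
	return w.Write([]byte(str))
}
```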
Previously, allowing the reconnect signal to forcibly close the
connection caused a race condition over which error was returned by the
errgroup in the tunnel connection. Allowing the signal to return and
cancel a context for the connection instead provides a safer shutdown of
the tunnel for this test-only scenario.
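A sketch of the safer shape, with hypothetical names (Connection, Serve, serveWithReconnect) since the real tunnel code is more involved: the signal path cancels a context that the connection observes, rather than closing the connection out from under the errgroup:

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// Connection is a stand-in for the tunnel connection type.
type Connection struct{}

// Serve runs until its context is cancelled.
func (c *Connection) Serve(ctx context.Context) error {
	<-ctx.Done()
	return ctx.Err()
}

func serveWithReconnect(ctx context.Context, conn *Connection, reconnect <-chan struct{}) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	errGroup, ctx := errgroup.WithContext(ctx)
	errGroup.Go(func() error {
		return conn.Serve(ctx) // winds down via ctx, not a forced close
	})
	errGroup.Go(func() error {
		select {
		case <-reconnect:
			// Cancelling (instead of closing the connection directly)
			// means the errgroup sees one deterministic error.
			cancel()
		case <-ctx.Done():
		}
		return nil
	})
	return errGroup.Wait()
}
```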
In a previous commit, we fixed a bug where the client roundtrip code
could close the request body, which is in fact the quic.Stream, thus
closing the write side.
The way that was fixed also prevented the client roundtrip code from
closing the read side (the body).
This fixes that by allowing Close to close only the read side, which
guarantees that any subsequent read will fail with an error, or with EOF
if it occurred before the close.
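A sketch of the idea, assuming (as the commit implies) that the body wraps a quic-go stream; the wrapper type here is illustrative:

```go
package main

import "github.com/quic-go/quic-go"

// readOnlyCloser closes only the read side of the stream. The write
// side, which quic.Stream's Close would shut down, is left untouched.
type readOnlyCloser struct {
	stream quic.Stream
}

func (r *readOnlyCloser) Read(p []byte) (int, error) {
	return r.stream.Read(p)
}

func (r *readOnlyCloser) Close() error {
	// CancelRead aborts only the receive side: subsequent reads fail
	// with an error, or return EOF if it arrived before the close.
	r.stream.CancelRead(0)
	return nil
}
```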