Previously, the sync.OnceValue was re-initialized on each call to
`Close`. It is now initialized during the creation of the session so
that the same sync.OnceValue reference is held for the session's
lifetime.
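A minimal sketch of the pattern, assuming Go 1.21+ `sync.OnceValue` and an illustrative `session` type rather than the actual cloudflared code:

```go
package main

import (
	"fmt"
	"sync"
)

type session struct {
	// closeOnce is captured once and reused for the session's lifetime.
	closeOnce func() error
}

func newSession() *session {
	s := &session{}
	// Initialize during creation; every later Close call shares this reference.
	s.closeOnce = sync.OnceValue(func() error {
		fmt.Println("releasing session resources")
		return nil
	})
	return s
}

func (s *session) Close() error {
	// Re-creating sync.OnceValue inside Close (the old behavior) would run
	// the cleanup on every call; reusing the stored reference runs it once.
	return s.closeOnce()
}

func main() {
	s := newSession()
	_ = s.Close()
	_ = s.Close() // no-op: the cleanup already ran
}
```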
Closes TUN-8775
Previously, during local flow migration, the current connection context
was not part of the migration, so the flow would keep listening
on the connection context of the old connection (the one in place before the migration).
This meant that if a flow was migrated from connection 0 to
connection 1 and connection 0 then went away, the flow would be
terminated early and incorrectly by the context lifetime of connection 0.
The new connection context is now provided during flow migration
and triggers the flow's observe loop to rebind the flow's lifetime
to the provided context.
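An illustrative sketch only (the real flow type and method signatures in cloudflared differ): migration hands over the new connection's context, and the observe loop rebinds the flow's lifetime to it.

```go
package main

import "context"

type flow struct {
	rebound chan context.Context
}

// Migrate provides the context of the new connection that now serves the flow.
func (f *flow) Migrate(newConnCtx context.Context) {
	f.rebound <- newConnCtx
}

// observe watches the lifetime of whichever connection currently owns the flow.
func (f *flow) observe(connCtx context.Context) {
	for {
		select {
		case connCtx = <-f.rebound:
			// Rebind to the new connection's context after a migration, so the
			// old connection going away no longer terminates the flow.
		case <-connCtx.Done():
			return
		}
	}
}

func main() {
	f := &flow{rebound: make(chan context.Context)}
	go f.observe(context.Background())
	f.Migrate(context.TODO())
}
```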
Closes TUN-8748
To help reduce the volume of logs on the happy path of flow registration, only one log message is now reported when a flow completes.
The following fields were added to all flow log messages (illustrated in the sketch below):
1. `src`: local address
2. `dst`: origin address
3. `durationMS`: capturing the total duration of the flow in milliseconds
Additional logs were added to capture when a flow was migrated or when cloudflared re-sent a registration response after an edge retry.
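A hedged sketch of the completion log with the new fields, assuming zerolog (which cloudflared uses) and illustrative addresses:

```go
package main

import (
	"net/netip"
	"os"
	"time"

	"github.com/rs/zerolog"
)

func main() {
	log := zerolog.New(os.Stdout)
	start := time.Now()
	src := netip.MustParseAddrPort("192.0.2.1:5000")  // local address
	dst := netip.MustParseAddrPort("198.51.100.2:53") // origin address

	// One log line on the happy path, carrying the new fields.
	log.Info().
		Str("src", src.String()).
		Str("dst", dst.String()).
		Int64("durationMS", time.Since(start).Milliseconds()).
		Msg("flow completed")
}
```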
Closes TUN-8701
When a registration response from cloudflared gets lost on its way back to the edge, the edge service will retry and send another registration request. Since cloudflared has already bound the local UDP socket for the provided request ID, we want to re-send the registration response.
There are three types of retries that the edge will send (a sketch of how the first two are handled follows the list):
1. A retry from the same QUIC connection index; cloudflared will simply respond with a registration response and reset the idle timer for the session.
2. A retry from a different QUIC connection index; cloudflared will need to migrate the current session connection to this new QUIC connection and reset the idle timer for the session.
3. A retry to a different cloudflared connector; cloudflared will eventually time the session out since no further packets will arrive for the session at the original connector.
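A hypothetical sketch of the first two cases for an already-bound request ID; the types and method names are illustrative, not cloudflared's actual code:

```go
package main

// Hypothetical types and methods for illustration only.
type session struct{ connIndex uint8 }

func (s *session) sendRegistrationResponse() {}
func (s *session) resetIdleTimer()           {}
func (s *session) migrateTo(connIndex uint8) { s.connIndex = connIndex }

// handleRegistrationRetry answers a retried registration request for a request
// ID whose local UDP socket is already bound, following the cases above.
func handleRegistrationRetry(s *session, connIndex uint8) {
	if connIndex != s.connIndex {
		// Case 2: the retry arrived on a different QUIC connection index, so
		// migrate the session to the new connection before acknowledging.
		s.migrateTo(connIndex)
	}
	// Cases 1 and 2: re-send the registration response and reset the idle timer.
	s.sendRegistrationResponse()
	s.resetIdleTimer()
	// Case 3 (a retry sent to a different connector) never reaches this
	// process; the session here simply times out from inactivity.
}

func main() {
	handleRegistrationRetry(&session{connIndex: 0}, 1)
}
```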
Closes TUN-8709
The datagram muxer wraps a QUIC connection's datagram read/write operations, unmarshalling datagrams received from the connection and dispatching them toward the origin via the session manager. Incoming datagram session registration operations create new UDP sockets for sessions to proxy UDP packets between the edge and the origin. The muxer is also responsible for marshalling UDP packets and operations into datagrams for communication over the QUIC connection towards the edge.
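An illustrative shape of the muxer (not the actual cloudflared API); the datagram method signatures mirror those of recent quic-go releases, and the session manager names are assumptions:

```go
package main

import "context"

// datagramConn is the slice of the QUIC connection that the muxer needs.
type datagramConn interface {
	SendDatagram(payload []byte) error
	ReceiveDatagram(ctx context.Context) ([]byte, error)
}

// sessionManager owns the per-session UDP sockets toward the origin.
type sessionManager interface {
	RegisterSession(requestID string, origin string) error
	WriteToSession(requestID string, payload []byte) error
}

// muxer unmarshals incoming datagrams into session operations and marshals
// origin UDP traffic back into datagrams bound for the edge.
type muxer struct {
	conn     datagramConn
	sessions sessionManager
}

func main() { _ = muxer{} }
```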
Closes TUN-8700
The new session manager leverages functionality similar to what was previously
provided by datagram v2, with the distinct difference that sessions
are registered via QUIC datagrams and unregistered via timeouts only;
cloudflared will no longer attempt to unregister sessions remotely with the
edge service.
The Session Manager is shared across all QUIC connections that cloudflared
uses to connect to the edge (typically 4). This lets cloudflared monitor
all sessions across those connections and, in the future, correlate
sessions if they migrate across connections.
The UDP payload size is still limited to 1280 bytes on all operating systems. Any
UDP packet with a payload larger than 1280 bytes will cause
cloudflared to log an error (as it currently does) and drop the packet.
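A minimal sketch of the size check described above; the identifiers are illustrative rather than cloudflared's actual names:

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
)

// maxDatagramPayloadLen mirrors the 1280-byte limit described above.
const maxDatagramPayloadLen = 1280

func validatePayload(payload []byte, logger *zerolog.Logger) bool {
	if len(payload) > maxDatagramPayloadLen {
		// Oversized payloads are logged and dropped rather than fragmented.
		logger.Error().Int("payloadSize", len(payload)).Msg("dropped UDP packet: payload exceeds limit")
		return false
	}
	return true
}

func main() {
	logger := zerolog.New(os.Stderr)
	_ = validatePayload(make([]byte, 1500), &logger)
}
```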
Closes TUN-8667
Combines the tunnelrpc and quic/schema capnp files into the same module.
To help reduce future issues with capnp ID generation, the capnp IDs are
now declared explicitly in the capnp files, taken from the existing capnp
struct IDs generated in the Go files.
Reduces the surface of the capnp methods exposed to the rest of
the code by providing an interface that handles the QUIC protocol
selection.
Introduces a new `rpc-timeout` config that gives all of the
SessionManager and ConfigurationManager RPC requests a timeout.
The timeout is set to 5 seconds, as none of these manager operations
should take long to complete.
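A minimal sketch of how such an RPC timeout can be applied with `context.WithTimeout`; the helper and RPC names are illustrative, not cloudflared's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// rpcTimeout stands in for the new `rpc-timeout` config value.
const rpcTimeout = 5 * time.Second

// callWithTimeout bounds a manager RPC by the configured timeout so a stalled
// edge connection cannot hang the request indefinitely.
func callWithTimeout(ctx context.Context, doRPC func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, rpcTimeout)
	defer cancel()
	return doRPC(ctx)
}

func main() {
	err := callWithTimeout(context.Background(), func(ctx context.Context) error {
		select {
		case <-time.After(10 * time.Second): // simulated RPC that never completes in time
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	})
	fmt.Println(err) // context deadline exceeded
}
```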
Removed the RPC-specific logger, as it never provided good debugging value
because the RPC method names were not visible in the logs.
## Summary:
In order to properly monitor what is happening with the new write timeouts that we introduced
in TUN-8244, we need proper logging. Previously, we logged write timeouts while the safe
stream was being closed, which was misleading; this commit
prevents that by adding a flag that tells us whether we are closing the stream or not.
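An illustrative sketch of the closing flag (the type and field names are not cloudflared's actual code), assuming zerolog for the log call:

```go
package main

import (
	"os"
	"sync"

	"github.com/rs/zerolog"
)

type safeStream struct {
	mu      sync.Mutex
	closing bool
	log     *zerolog.Logger
}

func (s *safeStream) Close() error {
	s.mu.Lock()
	s.closing = true
	s.mu.Unlock()
	// ... close the underlying quic.Stream here ...
	return nil
}

// logWriteTimeout only reports the timeout when the stream is not already
// being torn down, since a timeout during Close is expected and misleading.
func (s *safeStream) logWriteTimeout(err error) {
	s.mu.Lock()
	closing := s.closing
	s.mu.Unlock()
	if closing {
		return
	}
	s.log.Error().Err(err).Msg("write timed out on stream")
}

func main() {
	logger := zerolog.New(os.Stderr)
	s := &safeStream{log: &logger}
	_ = s.Close()
}
```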
## Summary
To avoid overly verbose logs, we need to only log when an
actual issue occurred. Therefore, we now skip any error
logging if the write timeout is caused by a lack of network activity,
which just means that nothing is being sent through the stream.
## Summary
To prevent bad eyeballs and servers from being able to exhaust the QUIC
flow control, we are adding the possibility of having a timeout
for a write operation to be acknowledged. This will prevent hanging
connections from exhausting the QUIC flow control, creating a DDoS.
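A hedged sketch of a write bounded by such a timeout, using the write deadline supported by quic-go streams; the module path shown is the current quic-go fork location (cloudflared used a lucas-clemente fork at the time), and `writeTimeout` is an assumed name for the configured value:

```go
package main

import (
	"time"

	"github.com/quic-go/quic-go"
)

// writeTimeout stands in for the configured acknowledgement timeout.
const writeTimeout = 5 * time.Second

// writeWithTimeout gives the write a deadline so that, if the peer never
// frees flow-control credit, the call fails instead of hanging forever.
func writeWithTimeout(stream quic.SendStream, payload []byte) error {
	if err := stream.SetWriteDeadline(time.Now().Add(writeTimeout)); err != nil {
		return err
	}
	_, err := stream.Write(payload)
	return err
}

func main() {}
```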
I deliberately kept this as an unregistertimeout because that was the
intent. In the future we could change this to a UDPConnConfig if we want
to pass multiple values here.
The idea of this PR is simply to add a configurable unregister UDP
timeout.
The lucas-clemente/quic-go package moved namespaces and our branch
went stale; this new fork provides support for the new quic-go repo
and applies the max datagram frame size change.
Until the max datagram frame size support gets upstreamed into quic-go,
this fork can be used to unblock Go 1.20 support, as the old
lucas-clemente/quic-go will not get Go 1.20 support.
The idle period is set to 5 seconds.
We now also ping every second since the last activity.
This makes the quic.Connection less prone to being closed with
no network activity, since we send multiple pings per idle
period, and thus a single packet loss cannot cause the problem.
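A rough sketch of this tuning, assuming the config field names of recent quic-go releases (cloudflared used a fork at the time, and the exact plumbing differs):

```go
package main

import (
	"time"

	"github.com/quic-go/quic-go"
)

func quicConfig() *quic.Config {
	return &quic.Config{
		// The connection is considered idle (and closed) after 5s of silence.
		MaxIdleTimeout: 5 * time.Second,
		// Keep-alive pings go out every second, so several pings fit in one
		// idle period and a single lost packet cannot idle-close the connection.
		KeepAlivePeriod: time.Second,
	}
}

func main() { _ = quicConfig() }
```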
Until this PR, we were naively closing the quic.Stream whenever
the callstack for handling the request (HTTP or TCP) finished.
However, our proxy handler may still be reading or writing from
the quic.Stream at that point, because we return from the callstack when
either side finishes, but not necessarily both.
This is a problem for the quic-go library because quic.Stream#Close
cannot be called concurrently with quic.Stream#Write.
Furthermore, we also noticed that quic.Stream#Close does nothing
to the receiving stream (since, underneath, quic.Stream consists of 2 streams,
one for each direction), thus leaking memory, as explained in:
https://github.com/lucas-clemente/quic-go/issues/3322
This PR addresses both problems by wrapping the quic.Stream that
is passed down to the proxying logic and handling all these concerns there.
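An illustrative sketch of such a wrapper (not cloudflared's actual type), using the current quic-go module path for brevity: a mutex keeps Close from racing Write, and Close also cancels the receive direction so it is not leaked.

```go
package main

import (
	"sync"

	"github.com/quic-go/quic-go"
)

// safeStream wraps the quic.Stream handed to the proxying logic.
type safeStream struct {
	quic.Stream
	mu sync.Mutex
}

func (s *safeStream) Write(p []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.Stream.Write(p)
}

func (s *safeStream) Close() error {
	s.mu.Lock()
	defer s.mu.Unlock()
	// Close only shuts the send direction; cancel the receive direction too
	// so its buffers are released.
	s.Stream.CancelRead(0)
	return s.Stream.Close()
}

func main() {}
```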