Propagates the logger context into more locations to provide additional context for certain errors. For instance, upstream and downstream copying errors will now have the assigned flow ID and destination address attached.
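As a rough sketch of the idea (cloudflared uses zerolog; the field names `flowID` and `dst` here are illustrative, not necessarily the ones used in the code), a child logger can carry the flow context into the copy loops:

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
)

// newFlowLogger derives a child logger that carries the flow ID and
// destination address, so errors logged deeper in the upstream/downstream
// copy loops keep that context. Field names are illustrative.
func newFlowLogger(base zerolog.Logger, flowID, dstAddr string) zerolog.Logger {
	return base.With().
		Str("flowID", flowID).
		Str("dst", dstAddr).
		Logger()
}

func main() {
	base := zerolog.New(os.Stdout).With().Timestamp().Logger()
	flowLog := newFlowLogger(base, "abc123", "198.51.100.7:443")
	// A copy error logged through flowLog now includes both fields.
	flowLog.Error().Msg("failed to copy data from upstream")
}
```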
## Summary
To prevent misbehaving eyeballs and servers from exhausting the QUIC
flow-control windows, we are adding the possibility of a timeout for a
write operation to be acknowledged. This prevents hanging connections
from exhausting the QUIC flow-control windows and being used as a DDoS
vector.
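A minimal sketch of the technique, assuming the stream exposes a `net.Conn`-style write deadline (the 5-second value is illustrative, not the setting cloudflared uses):

```go
package proxy

import (
	"fmt"
	"net"
	"time"
)

// writeWithTimeout bounds how long a single write may block waiting for the
// peer to make progress. If the peer never reads, the deadline fires and the
// connection can be torn down instead of holding flow-control credit forever.
func writeWithTimeout(conn net.Conn, p []byte, timeout time.Duration) (int, error) {
	if err := conn.SetWriteDeadline(time.Now().Add(timeout)); err != nil {
		return 0, fmt.Errorf("setting write deadline: %w", err)
	}
	// Clear the deadline once the write has completed (or failed).
	defer func() { _ = conn.SetWriteDeadline(time.Time{}) }()

	n, err := conn.Write(p)
	if err != nil {
		return n, fmt.Errorf("write not completed within %s: %w", timeout, err)
	}
	return n, nil
}
```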
In the stream-based origin proxy flow (e.g. SSH over Access), there are
cases where we do not flush after writing to the http.ResponseWriter.
This PR guarantees that the response writer passed to the proxy stream
has a flusher embedded that flushes after writes. This means we write
back to the ResponseWriter much more often instead of waiting. Note that
we only do this when proxying to a StreamBasedOriginProxy, because we do
not want situations where information needed by the other side (the
eyeball) is held back.
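A sketch of the wrapper pattern this describes (the type name is illustrative, not cloudflared's exact implementation):

```go
package proxy

import "net/http"

// flushingResponseWriter wraps an http.ResponseWriter and flushes after
// every Write, so a stream-based origin's bytes reach the eyeball as soon
// as they are written rather than sitting in internal buffers.
type flushingResponseWriter struct {
	http.ResponseWriter
}

func (w *flushingResponseWriter) Write(p []byte) (int, error) {
	n, err := w.ResponseWriter.Write(p)
	if f, ok := w.ResponseWriter.(http.Flusher); ok {
		f.Flush()
	}
	return n, err
}
```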
This change fixes a bug where cloudflared was not propagating errors
when proxying the body of an HTTP request.
In a situation where we had already sent the HTTP status code, the
eyeball would see the request as successful when in fact it wasn't.
To solve this, we need to guarantee that we produce HTTP RST_STREAM
frames.
This change was applied to both the http2 and quic transports.
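For context, Go's standard HTTP server exposes one way to do this: panicking with http.ErrAbortHandler aborts the in-flight response, which on HTTP/2 results in an RST_STREAM. The sketch below shows that general technique; it is not necessarily how cloudflared's transports implement it.

```go
package proxy

import (
	"io"
	"net/http"
)

// copyBody streams the origin response body to the eyeball. If copying
// fails after the headers have already been written, returning normally
// would let the client treat the truncated response as a success.
// Panicking with http.ErrAbortHandler makes the server abort the response
// (RST_STREAM on HTTP/2) so the error is surfaced to the eyeball.
func copyBody(w http.ResponseWriter, body io.Reader) {
	if _, err := io.Copy(w, body); err != nil {
		panic(http.ErrAbortHandler)
	}
}
```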
ProxyHTTP now processes the middleware Handler chain before executing
the request. The chain of handlers is executed in order and the
appropriate response status codes are sent.
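A hypothetical sketch of such a chain (the `Handler` interface and function names here are illustrative, not cloudflared's actual API):

```go
package proxy

import (
	"context"
	"net/http"
)

// Handler is a hypothetical middleware interface: each handler inspects the
// request and either lets it continue or short-circuits with a status code.
type Handler interface {
	Handle(ctx context.Context, r *http.Request) (stop bool, status int, err error)
}

// applyMiddleware runs the chain before the request is proxied; the first
// handler that stops the chain determines the response status. It returns
// true only if every handler allowed the request through.
func applyMiddleware(ctx context.Context, w http.ResponseWriter, r *http.Request, handlers []Handler) bool {
	for _, h := range handlers {
		stop, status, err := h.Handle(ctx, r)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return false
		}
		if stop {
			w.WriteHeader(status)
			return false
		}
	}
	return true // all handlers passed; proceed with proxying the request
}
```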
For WARP routing, the defaults for these new settings are 5 seconds for the connect timeout and 30 seconds for the keep-alive timeout. These values can be configured either remotely or locally; local config lives under the "warp-routing" section in config.yaml.
For the websocket-based proxy, the defaults come from the originConfig settings (either global or per-service) and are the same defaults used for HTTP proxying.
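One plausible way these settings map onto Go's dialer, shown purely as an assumption about the wiring; the values match the documented defaults above:

```go
package proxy

import (
	"net"
	"time"
)

// newWarpDialer builds a dialer with the defaults described above: 5s to
// establish the connection and 30s keep-alive probes. Whether cloudflared
// plumbs the settings through net.Dialer exactly like this is an assumption.
func newWarpDialer(connectTimeout, keepAlive time.Duration) *net.Dialer {
	return &net.Dialer{
		Timeout:   connectTimeout, // default: 5 * time.Second
		KeepAlive: keepAlive,      // default: 30 * time.Second
	}
}
```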
The buffer size was large in order to support a compression feature that
we no longer use.
As such, we can now reduce it and be more efficient with memory usage.