docs: clarify download failover
parent 975e1531bc
commit 5a18483c6f
@@ -167,7 +167,7 @@ python build.py
 
 ## Download failover
 
-For `get*filter` search commands, the script will attempt to download from the following domains in sequence (check out the `DOWNLOAD_URLS` constant in each script):
+For `get*filter` search commands, the script will attempt to download from the following domains **in sequence** (check out the `DOWNLOAD_URLS` constant in each script):
 
 - malware-filter.gitlab.io
 - curbengh.github.io
@@ -176,7 +176,7 @@ For `get*filter` search commands, the script will attempt to download from the f
 - malware-filter.pages.dev
 - \*-filter.pages.dev
 
-If your corporate proxy admin balks at having to allow >1 domains, allowing any of them will do. Since the script wouldn't know the proxy ruleset, it will still attempt those domains in sequence until it found a reachable one.
+It is not necessary to allow outbound connections to all of the above domains; it just depends on how much redundancy you prefer.
 
 ## Disclaimer
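
The failover behaviour described in the changed paragraph is straightforward to picture in code. The sketch below is a minimal illustration only, not the actual implementation in the `get*filter` scripts: the `download_with_failover()` helper and the file paths in the URLs are hypothetical, and the real `DOWNLOAD_URLS` values live in each script. It simply tries each mirror in order and returns the first successful response.

```python
#!/usr/bin/env python3
"""Minimal sketch of sequential download failover (illustrative only)."""

from urllib.error import URLError
from urllib.request import urlopen

# Hypothetical mirror list, modelled on the domains documented above and
# tried strictly in order. The fourth entry stands in for the
# *-filter.pages.dev wildcard; the file paths are made up for this example.
DOWNLOAD_URLS = (
    "https://malware-filter.gitlab.io/malware-filter/urlhaus-filter.txt",
    "https://curbengh.github.io/malware-filter/urlhaus-filter.txt",
    "https://malware-filter.pages.dev/urlhaus-filter.txt",
    "https://urlhaus-filter.pages.dev/urlhaus-filter.txt",
)


def download_with_failover(urls=DOWNLOAD_URLS, timeout=30):
    """Return the body of the first URL that responds, trying mirrors in order."""
    last_error = None
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as response:
                return response.read()
        except (URLError, OSError) as err:
            # Unreachable mirror (e.g. blocked by a proxy): fall through to the next one.
            last_error = err
    raise RuntimeError(f"all mirrors failed, last error: {last_error}")


if __name__ == "__main__":
    data = download_with_failover()
    print(f"downloaded {len(data)} bytes")
```

Because an unreachable mirror simply falls through to the next one, allowing outbound access to any single domain on the list is enough for a download to eventually succeed; permitting more of them only adds redundancy.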