
Merge branch 'arch3rPro-1Panel-Appstore/only-apps' into patch/localApps

pooneyy 2025-07-01 13:04:57 +08:00
parent 1d3f9e1835
commit 0b478b5196
252 changed files with 50131 additions and 0 deletions

@@ -0,0 +1,22 @@
# AdGuardHome-Sync
AdGuardHome-Sync 是一个用于在多个 AdGuardHome 实例之间同步配置的工具。
## 功能特性
- 支持多个 AdGuardHome 实例之间的配置同步
- 提供 Web API 接口进行管理
- 支持定时同步任务
- 基于 LinuxServer.io 的 Docker 镜像
## 使用方法
1. 部署后访问 Web API 端口(默认 8080)
2. 配置 AdGuardHome 实例的连接信息
3. 设置同步规则和时间间隔
4. 启动同步任务
## 相关链接
- [GitHub 项目](https://github.com/bakito/adguardhome-sync)
- [LinuxServer.io 文档](https://docs.linuxserver.io/images/docker-adguardhome-sync/)

@@ -0,0 +1,22 @@
# AdGuardHome-Sync
AdGuardHome-Sync is a tool for synchronizing configurations between multiple AdGuardHome instances.
## Features
- Synchronize configurations between multiple AdGuardHome instances
- Web API interface for management
- Scheduled synchronization tasks
- Based on the LinuxServer.io Docker image
## Usage
1. Access the Web API port (default 8080) after deployment
2. Configure connection information for AdGuardHome instances
3. Set synchronization rules and intervals
4. Start synchronization tasks
## Links
- [GitHub Project](https://github.com/bakito/adguardhome-sync)
- [LinuxServer.io Documentation](https://docs.linuxserver.io/images/docker-adguardhome-sync/)

@@ -0,0 +1,32 @@
name: AdGuardHome-Sync
tags:
- 安全
title: AdGuardHome 配置同步工具
description: AdGuardHome 配置同步工具,支持多个 AdGuardHome 实例之间的配置同步
additionalProperties:
key: adguardhome-sync
name: AdGuardHome-Sync
tags:
- Security
shortDescZh: AdGuardHome 配置同步工具
shortDescEn: AdGuardHome configuration synchronization tool
description:
en: AdGuardHome configuration synchronization tool for syncing configurations between multiple AdGuardHome instances
ja: 複数のAdGuardHomeインスタンス間で設定を同期するためのAdGuardHome設定同期ツール
ms: Alat penyegerakan konfigurasi AdGuardHome untuk menyegerakkan konfigurasi antara pelbagai instans AdGuardHome
pt-br: Ferramenta de sincronização de configuração do AdGuardHome para sincronizar configurações entre múltiplas instâncias do AdGuardHome
ru: Инструмент синхронизации конфигурации AdGuardHome для синхронизации конфигураций между несколькими экземплярами AdGuardHome
ko: 여러 AdGuardHome 인스턴스 간의 구성을 동기화하기 위한 AdGuardHome 구성 동기화 도구
zh-Hant: AdGuardHome 配置同步工具,支援多個 AdGuardHome 實例之間的配置同步
zh: AdGuardHome 配置同步工具,支持多个 AdGuardHome 实例之间的配置同步
type: website
crossVersionUpdate: true
limit: 0
recommend: 0
website: https://github.com/bakito/adguardhome-sync
github: https://github.com/bakito/adguardhome-sync
document: https://docs.linuxserver.io/images/docker-adguardhome-sync/
architectures:
- amd64
- arm64
- arm/v7

@@ -0,0 +1,19 @@
additionalProperties:
formFields:
- default: 8080
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Web API Port
labelZh: Web API 端口
required: true
rule: paramPort
type: number
label:
en: Web API Port
ja: Web API ポート
ms: Port API Web
pt-br: Porta da API Web
ru: Порт Web API
ko: Web API 포트
zh-Hant: Web API 埠
zh: Web API 端口

@@ -0,0 +1,20 @@
services:
adguardhome-sync:
container_name: ${CONTAINER_NAME}
restart: always
networks:
- 1panel-network
ports:
- "${PANEL_APP_PORT_HTTP}:8080"
volumes:
- ./config:/config
environment:
- PUID=1000
- PGID=1000
- TZ=Asia/Shanghai
image: lscr.io/linuxserver/adguardhome-sync:latest
labels:
createdBy: "Apps"
networks:
1panel-network:
external: true
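The compose file above starts the sync service but does not yet point it at any AdGuardHome instances. With the LinuxServer.io image, origin and replica connections are typically supplied through environment variables; the sketch below extends the service's `environment` block. The URLs and credentials are placeholders, and the exact variable names should be verified against the linked documentation:

```yaml
services:
  adguardhome-sync:
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Shanghai
      # Instance to read the configuration from (placeholder values)
      - ORIGIN_URL=http://192.168.1.2:3000
      - ORIGIN_USERNAME=admin
      - ORIGIN_PASSWORD=changeme
      # Instance to push the configuration to
      - REPLICA1_URL=http://192.168.1.3:3000
      - REPLICA1_USERNAME=admin
      - REPLICA1_PASSWORD=changeme
      # Run the sync every two hours
      - CRON=0 */2 * * *
```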

(binary logo, 2.7 KiB, not shown)

@@ -0,0 +1,36 @@
additionalProperties:
formFields:
- default: 5244
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: WebUI Port
labelZh: 网页端口
required: true
rule: paramPort
type: number
label:
en: WebUI Port
ja: WebUI ポート
ms: Port WebUI
pt-br: Porta WebUI
ru: Порт WebUI
ko: WebUI 포트
zh-Hant: WebUI 埠
zh: WebUI 端口
- default: 5426
edit: true
envKey: PANEL_APP_PORT_S3
labelEn: S3 Port
labelZh: S3 端口
required: true
rule: paramPort
type: number
label:
en: S3 Port
ja: S3 ポート
ms: Port S3
pt-br: Porta S3
ru: Порт S3
ko: S3 포트
zh-Hant: S3 埠
zh: S3 端口

@@ -0,0 +1,23 @@
services:
alist:
container_name: ${CONTAINER_NAME}
restart: always
networks:
- 1panel-network
ports:
- "${PANEL_APP_PORT_HTTP}:5244"
- "${PANEL_APP_PORT_S3}:5426"
volumes:
- ./data/data:/opt/alist/data
- ./data/mnt:/mnt/data
environment:
- PUID=0
- PGID=0
- UMASK=022
image: xhofe/alist:v3.45.0
labels:
createdBy: "Apps"
networks:
1panel-network:
external: true

@@ -0,0 +1,12 @@
#!/bin/bash
if [[ -f ./.env ]]; then
if grep -q 'PANEL_APP_PORT_S3' ./.env; then
echo "PANEL_APP_PORT_S3 参数已存在"
else
echo 'PANEL_APP_PORT_S3=5426' >> ./.env
echo "已添加 PANEL_APP_PORT_S3=5426"
fi
else
echo ".env 文件不存在"
fi
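The append-if-missing guard above can be exercised against a throwaway `.env`; the paths and values below are illustrative only:

```shell
# Create a scratch .env and apply the same append-if-missing logic twice
tmpdir=$(mktemp -d)
envfile="$tmpdir/.env"
echo 'PANEL_APP_PORT_HTTP=5244' > "$envfile"

add_port() {
    if grep -q 'PANEL_APP_PORT_S3' "$envfile"; then
        echo "PANEL_APP_PORT_S3 already present"
    else
        echo 'PANEL_APP_PORT_S3=5426' >> "$envfile"
    fi
}

add_port   # appends the line
add_port   # no-op on the second run
count=$(grep -c 'PANEL_APP_PORT_S3' "$envfile")
```

Running the guard twice leaves exactly one `PANEL_APP_PORT_S3` line, which is why the upgrade script is safe to re-run.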

apps/alist/README_en.md Normal file

@@ -0,0 +1,64 @@
# AList
A file list program that supports multiple storage providers, with web browsing and WebDAV access, powered by Gin and SolidJS.
## Supported Storage
- Local storage
- [Crypt](/guide/drivers/Crypt.md)
- [Aliyundrive Open](../guide/drivers/aliyundrive_open.md)
- [aliyundrive](https://www.alipan.com/)
- [OneDrive](./drivers/onedrive.md) / [APP](./drivers/onedrive_app.md) / Sharepoint ([global](https://www.office.com/), [cn](https://portal.partner.microsoftonline.cn), de, us)
- [GoogleDrive](https://drive.google.com/)
- [123pan/Share/Link](https://www.123pan.com/)
- [Alist](https://github.com/Xhofe/alist)
- FTP
- SFTP
- [PikPak / share](https://www.mypikpak.com/)
- [S3](../guide/drivers/s3.md)
- [Doge](../guide/drivers/s3.md#add-object-storage-examples-and-official-documents)
- [UPYUN Storage Service](https://www.upyun.com/products/file-storage)
- WebDAV
- Teambition ([China](https://www.teambition.com/), [International](https://us.teambition.com/))
- [mediatrack](https://www.mediatrack.cn/)
- [189cloud](https://cloud.189.cn) (Personal, Family)
- [139yun](https://yun.139.com/) (Personal, Family)
- [Wopan](https://pan.wo.cn)
- [MoPan](https://mopan.sc.189.cn/mopan/#/downloadPc)
- [YandexDisk](https://disk.yandex.com/)
- [BaiduNetdisk](https://pan.baidu.com/) / [share](./drivers/baidu_share.md)
- [Quark/TV](https://pan.quark.cn/)
- [Thunder / X Browser](../guide/drivers/thunder.md)
- [Lanzou](https://www.lanzou.com/), [NewLanzou](https://www.ilanzou.com)
- [Feiji Cloud](https://feijipan.com/)
- [Aliyundrive share](https://www.alipan.com/)
- [Google photo](https://photos.google.com/)
- [Mega.nz](https://mega.nz)
- [Baidu photo](https://photo.baidu.com/)
- [TeraBox](https://www.terabox.com/)
- [AList v2/v3](../guide/drivers/Alist%20V2%20V3.md)
- SMB
- [alias](../guide/advanced/alias.md)
- [115](https://115.com/)
- [Seafile](https://www.seafile.com/)
- Cloudreve
- [Trainbit](https://trainbit.com/)
- [UrlTree](../guide/drivers/UrlTree.md)
- IPFS
- [UC Clouddrive/TV](https://drive.uc.cn/)
- [Dropbox](https://www.dropbox.com)
- [Tencent weiyun](https://www.weiyun.com/)
- [vtencent](https://app.v.tencent.com/)
- [ChaoxingGroupCloud](../guide/drivers/chaoxing.md)
- [Quqi Cloud](https://quqi.com)
- [163 Music Drive](../guide/drivers/163music.md)
- [halalcloud](../guide/drivers/halalcloud.md)
- [LenovoNasShare](https://pc.lenovo.com.cn)
## Account Password
Click the `Terminal` button in the container list to enter the container and execute commands to set the password.
- **Use a random password**: `./alist admin random`
- **Or set password manually**: `./alist admin set NEW_PASSWORD`

apps/arcane/README.md Normal file

@@ -0,0 +1,40 @@
# Arcane
Arcane 是一款现代化、开源的 Docker 管理 Web 面板,支持容器、镜像、网络等一站式管理。
## 功能特性
- 现代化Web界面操作简洁直观
- 支持容器、镜像、网络、卷等Docker资源的可视化管理
- 支持多平台和多架构
- 支持堆栈Stack定义与管理
- 数据和设置持久化存储于 `./data` 目录
- 挂载Docker套接字支持主机级管理
## 使用方法
1. 部署后访问 `http://服务器IP:3000` 进入Web管理界面
2. 首次使用请根据界面提示初始化设置
3. 数据目录:`./data`
4. 挂载宿主机 `/var/run/docker.sock`,实现容器管理
5. 环境变量 `PUBLIC_SESSION_SECRET` 用于会话加密,建议使用 32 位随机字符串,可在应用表单中自定义,默认值为 `arcane-session-4e2b8c7f9d1a6e3b2c5d7f8a1b0c9e6d`。如需更高安全性,可用 `openssl rand -base64 32` 生成。
### 账户密码
- 首次运行时,如果不存在用户,Arcane 会创建默认管理员用户。
- **用户名:** `arcane`
- **密码:** `arcane-admin`
- 首次登录必须更改此密码。
- 要添加用户:转到**设置 → 用户管理**,然后单击**创建用户**。填写用户名、显示名称、电子邮件和密码。
## 安全提醒
- 挂载 Docker 套接字(`/var/run/docker.sock`)会赋予容器主机级管理权限,请确保安全使用
- Arcane 目前为预发布软件,功能和界面可能会有较大变动。
## 相关链接
- [官方网站](https://arcane.ofkm.dev/)
- [GitHub 项目](https://github.com/ofkm/arcane)
- [官方文档](https://arcane.ofkm.dev/docs/)
- [Docker Hub](https://ghcr.io/ofkm/arcane)

apps/arcane/README_en.md Normal file

@@ -0,0 +1,40 @@
# Arcane
Arcane is a modern, open-source Docker management web panel for containers, images, networks and more.
## Features
- Modern web UI, clean and intuitive
- Visual management for containers, images, networks, volumes, etc.
- Multi-platform and multi-architecture support
- Stack (compose) definition and management
- Data and settings persist in `./data` directory
- Mount Docker socket for host-level management
## Usage
1. After deployment, access `http://your-server-ip:3000` for the web UI
2. Follow the initial setup instructions on first use
3. Data directory: `./data`
4. Mount host `/var/run/docker.sock` for container management
5. Environment variable `PUBLIC_SESSION_SECRET`: Used for session encryption. It is recommended to use a 32-character random string. You can customize it in the app form. Default: `arcane-session-4e2b8c7f9d1a6e3b2c5d7f8a1b0c9e6d`. For higher security, generate with `openssl rand -base64 32`.
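The suggested `openssl` command can be checked to produce a value of the expected strength (base64 of 32 random bytes is always 44 characters):

```shell
# Generate a 32-byte random session secret, base64-encoded
SECRET=$(openssl rand -base64 32)
echo "PUBLIC_SESSION_SECRET=$SECRET"
LEN=${#SECRET}
```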
## Local User Management
- On first run, Arcane creates a default admin user if no users exist.
- **Username:** `arcane`
- **Password:** `arcane-admin`
- You must change this password during onboarding.
- To add users: Go to **Settings → User Management** and click **Create User**. Fill in username, display name, email, and password.
## Security Notice
- Mounting the Docker socket (`/var/run/docker.sock`) gives the container root-level access to the Docker host. Use with caution!
- Arcane is pre-release software. Features and UI may change frequently.
## Links
- [Official Website](https://arcane.ofkm.dev/)
- [GitHub Project](https://github.com/ofkm/arcane)
- [Official Documentation](https://arcane.ofkm.dev/docs/)
- [Docker Hub](https://ghcr.io/ofkm/arcane)

apps/arcane/data.yml Normal file

@@ -0,0 +1,31 @@
name: Arcane
tags:
- 实用工具
title: 现代化Docker管理面板
description: Arcane 是一款现代化、开源的 Docker 管理 Web 面板,支持容器、镜像、网络等一站式管理
additionalProperties:
key: arcane
name: Arcane
tags:
- Tool
shortDescZh: 现代化Docker管理面板
shortDescEn: Modern Docker management panel
description:
en: Arcane is a modern, open-source Docker management web panel for containers, images, networks and more
ja: Arcaneはコンテナ、イメージ、ネットワークなどを一元管理できるモダンなオープンソースDocker管理Webパネルです
ms: Arcane ialah panel web pengurusan Docker moden sumber terbuka untuk kontena, imej, rangkaian dan banyak lagi
pt-br: Arcane é um painel web moderno e de código aberto para gerenciamento de Docker, incluindo containers, imagens, redes e mais
ru: Arcane — это современная, открытая веб-панель управления Docker для контейнеров, образов, сетей и др.
ko: Arcane는 컨테이너, 이미지, 네트워크 등을 위한 현대적이고 오픈 소스인 Docker 관리 웹 패널입니다
zh-Hant: Arcane 是一款現代化、開源的 Docker 管理 Web 面板,支援容器、映像、網路等一站式管理
zh: Arcane 是一款现代化、开源的 Docker 管理 Web 面板,支持容器、镜像、网络等一站式管理
type: website
crossVersionUpdate: true
limit: 0
recommend: 0
website: https://arcane.ofkm.dev/
github: https://github.com/ofkm/arcane
document: https://arcane.ofkm.dev/docs/
architectures:
- amd64
- arm64

@@ -0,0 +1,35 @@
additionalProperties:
formFields:
- default: 3000
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Web UI Port
labelZh: Web界面端口
required: true
rule: paramPort
type: number
label:
en: Web UI Port
ja: Web UI ポート
ms: Port UI Web
pt-br: Porta da interface web
ru: Веб-порт интерфейса
ko: 웹 UI 포트
zh-Hant: Web UI 埠
zh: Web界面端口
- default: "arcane-session-4e2b8c7f9d1a6e3b2c5d7f8a1b0c9e6d"
edit: true
envKey: PUBLIC_SESSION_SECRET
labelEn: Session Secret
labelZh: 会话密钥
required: true
type: text
label:
en: Session Secret
ja: セッションシークレット
ms: Rahsia Sesi
pt-br: Segredo da sessão
ru: Секрет сессии
ko: 세션 시크릿
zh-Hant: 會話密鑰
zh: 会话密钥

@@ -0,0 +1,18 @@
services:
arcane:
image: ghcr.io/ofkm/arcane:latest
container_name: ${CONTAINER_NAME}
ports:
- "${PANEL_APP_PORT_HTTP}:3000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./data:/app/data
environment:
- APP_ENV=production
- PUID=2000
- PGID=2000
- PUBLIC_SESSION_SECRET=${PUBLIC_SESSION_SECRET}
restart: always
networks:
1panel-network:
external: true

apps/arcane/logo.png Normal file (binary, 67 KiB, not shown)

@@ -0,0 +1,11 @@
CONTAINER_NAME="blinko"
NEXTAUTH_SECRET="my_ultra_secure_nextauth_secret"
NEXTAUTH_URL="http://1.2.3.4:1111"
NEXT_PUBLIC_BASE_URL="http://1.2.3.4:1111"
PANEL_APP_PORT_HTTP=1111
PANEL_DB_HOST="postgresql"
PANEL_DB_HOST_NAME="postgresql"
PANEL_DB_NAME="blinko"
PANEL_DB_PORT=5432
PANEL_DB_USER="blinko"
PANEL_DB_USER_PASSWORD="blinko"

@@ -0,0 +1,70 @@
additionalProperties:
formFields:
- default: "1111"
envKey: PANEL_APP_PORT_HTTP
labelEn: HTTP Port
labelZh: HTTP 端口
required: true
rule: paramPort
type: number
- default: "http://1.2.3.4:1111"
envKey: NEXTAUTH_URL
labelEn: NextAuth URL
labelZh: 基本 URL
required: true
rule: paramExtUrl
type: text
- default: "http://1.2.3.4:1111"
envKey: NEXT_PUBLIC_BASE_URL
labelEn: Next Public Base URL
labelZh: 公共基本 URL
required: true
rule: paramExtUrl
type: text
- default: "my_ultra_secure_nextauth_secret"
envKey: NEXTAUTH_SECRET
labelEn: NextAuth Secret
labelZh: NextAuth 密钥
random: true
required: true
rule: paramComplexity
type: password
- default: ""
envKey: PANEL_DB_HOST
key: postgresql
labelEn: PostgreSQL Database Service
labelZh: PostgreSQL 数据库服务
required: true
type: service
- default: "5432"
edit: true
envKey: PANEL_DB_PORT
labelEn: Database Port Number
labelZh: 数据库端口号
required: true
rule: paramPort
type: number
- default: "blinko"
envKey: PANEL_DB_NAME
labelEn: Database
labelZh: 数据库名
random: true
required: true
rule: paramCommon
type: text
- default: "blinko"
envKey: PANEL_DB_USER
labelEn: User
labelZh: 数据库用户
random: true
required: true
rule: paramCommon
type: text
- default: "blinko"
envKey: PANEL_DB_USER_PASSWORD
labelEn: Password
labelZh: 数据库用户密码
random: true
required: true
rule: paramComplexity
type: password

@@ -0,0 +1,35 @@
services:
blinko:
image: "blinkospace/blinko:1.0.3"
container_name: ${CONTAINER_NAME}
environment:
NODE_ENV: production
NEXTAUTH_URL: ${NEXTAUTH_URL}
NEXT_PUBLIC_BASE_URL: ${NEXT_PUBLIC_BASE_URL}
NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
DATABASE_URL: postgresql://${PANEL_DB_USER}:${PANEL_DB_USER_PASSWORD}@${PANEL_DB_HOST}:${PANEL_DB_PORT}/${PANEL_DB_NAME}
volumes:
- "./data:/app/.blinko"
restart: always
logging:
options:
max-size: "10m"
max-file: "3"
ports:
- "${PANEL_APP_PORT_HTTP}:1111"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:1111/"]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
networks:
- 1panel-network
labels:
createdBy: "Apps"
networks:
1panel-network:
external: true

apps/cup/latest/data.yml Normal file

@@ -0,0 +1,10 @@
additionalProperties:
formFields:
- default: "51230"
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Service Port
labelZh: 服务端口
required: true
rule: paramPort
type: number

@@ -0,0 +1,17 @@
services:
cup:
image: ghcr.io/sergi0g/cup:latest
container_name: ${CONTAINER_NAME}
restart: unless-stopped
ports:
- ${PANEL_APP_PORT_HTTP}:8000
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- 1panel-network
labels:
createdBy: Apps
command: serve
networks:
1panel-network:
external: true

@@ -0,0 +1,10 @@
additionalProperties:
formFields:
- default: "8001"
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Service Port
labelZh: 服务端口
required: true
rule: paramPort
type: number

@@ -0,0 +1,16 @@
services:
deepseek-free-api:
image: vinlic/deepseek-free-api:latest
container_name: ${CONTAINER_NAME}
ports:
- ${PANEL_APP_PORT_HTTP}:8000
networks:
- 1panel-network
environment:
- TZ=Asia/Shanghai
labels:
createdBy: Apps
restart: always
networks:
1panel-network:
external: true

apps/demo/data.yml Normal file

@@ -0,0 +1,19 @@
name: 1Panel Apps
tags:
- 建站
title: 适配 1Panel 应用商店的通用应用模板
description: 适配 1Panel 应用商店的通用应用模板
additionalProperties:
key: 1panel-apps
name: 1Panel Apps
tags:
- Website
shortDescZh: 适配 1Panel 应用商店的通用应用模板
shortDescEn: Universal app template for the 1Panel App Store
type: website
crossVersionUpdate: true
limit: 0
recommend: 0
website: https://github.com/okxlin/appstore
github: https://github.com/okxlin/appstore
document: https://github.com/okxlin/appstore

@@ -0,0 +1,76 @@
# Launching new servers with SSL certificates
## Short description
Docker Compose certbot configuration with backward compatibility (it still works without the certbot container).
Use `docker compose --profile certbot up` to enable this feature.
## The simplest way to launch new servers with SSL certificates
1. Get Let's Encrypt certificates
Set the `.env` values:
```properties
NGINX_SSL_CERT_FILENAME=fullchain.pem
NGINX_SSL_CERT_KEY_FILENAME=privkey.pem
NGINX_ENABLE_CERTBOT_CHALLENGE=true
CERTBOT_DOMAIN=your_domain.com
CERTBOT_EMAIL=example@your_domain.com
```
execute command:
```shell
docker network prune
docker compose --profile certbot up --force-recreate -d
```
then, after the containers have launched:
```shell
docker compose exec -it certbot /bin/sh /update-cert.sh
```
2. Edit the `.env` file and run `docker compose --profile certbot up` again.
Additionally, set the `.env` value:
```properties
NGINX_HTTPS_ENABLED=true
```
execute command:
```shell
docker compose --profile certbot up -d --no-deps --force-recreate nginx
```
Then you can access your server over HTTPS:
[https://your_domain.com](https://your_domain.com)
## SSL certificates renewal
For SSL certificates renewal, execute commands below:
```shell
docker compose exec -it certbot /bin/sh /update-cert.sh
docker compose exec nginx nginx -s reload
```
## Options for certbot
The `CERTBOT_OPTIONS` key might be helpful for testing, e.g.,
```properties
CERTBOT_OPTIONS=--dry-run
```
To apply changes to `CERTBOT_OPTIONS`, regenerate the certbot container before updating the certificates.
```shell
docker compose --profile certbot up -d --no-deps --force-recreate certbot
docker compose exec -it certbot /bin/sh /update-cert.sh
```
Then, reload the nginx container if necessary.
```shell
docker compose exec nginx nginx -s reload
```
## For legacy servers
To use the cert file directory `nginx/ssl` as before, simply launch the containers WITHOUT the `--profile certbot` option.
```shell
docker compose up -d
```

@@ -0,0 +1,30 @@
#!/bin/sh
set -e
printf '%s\n' "Docker entrypoint script is running"
printf '\n%s\n' "Checking specific environment variables:"
printf '%s\n' "CERTBOT_EMAIL: ${CERTBOT_EMAIL:-Not set}"
printf '%s\n' "CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-Not set}"
printf '%s\n' "CERTBOT_OPTIONS: ${CERTBOT_OPTIONS:-Not set}"
printf '\n%s\n' "Checking mounted directories:"
for dir in "/etc/letsencrypt" "/var/www/html" "/var/log/letsencrypt"; do
if [ -d "$dir" ]; then
printf '%s\n' "$dir exists. Contents:"
ls -la "$dir"
else
printf '%s\n' "$dir does not exist."
fi
done
printf '\n%s\n' "Generating update-cert.sh from template"
sed -e "s|\${CERTBOT_EMAIL}|$CERTBOT_EMAIL|g" \
-e "s|\${CERTBOT_DOMAIN}|$CERTBOT_DOMAIN|g" \
-e "s|\${CERTBOT_OPTIONS}|$CERTBOT_OPTIONS|g" \
/update-cert.template.txt > /update-cert.sh
chmod +x /update-cert.sh
printf '\n%s\n' "Executing command: $*"
exec "$@"
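The `sed`-based template rendering above can be reproduced in isolation; the template line and variable values here are made up for the demonstration:

```shell
# Render a ${VAR}-style template the same way the entrypoint does
tmpl=$(mktemp)
printf '%s\n' 'certbot certonly -d ${CERTBOT_DOMAIN} -m ${CERTBOT_EMAIL}' > "$tmpl"
CERTBOT_DOMAIN=example.com
CERTBOT_EMAIL=admin@example.com
rendered=$(sed -e "s|\${CERTBOT_DOMAIN}|$CERTBOT_DOMAIN|g" \
               -e "s|\${CERTBOT_EMAIL}|$CERTBOT_EMAIL|g" "$tmpl")
```

Using `|` as the `sed` delimiter, as the entrypoint does, avoids clashes with `/` characters in substituted values such as URLs or paths.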

@@ -0,0 +1,19 @@
#!/bin/bash
set -e
DOMAIN="${CERTBOT_DOMAIN}"
EMAIL="${CERTBOT_EMAIL}"
OPTIONS="${CERTBOT_OPTIONS}"
CERT_NAME="${DOMAIN}" # use the same name for the certificate as the domain
# Check if the certificate already exists
if [ -f "/etc/letsencrypt/renewal/${CERT_NAME}.conf" ]; then
echo "Certificate exists. Attempting to renew..."
certbot renew --non-interactive --cert-name "${CERT_NAME}" --webroot --webroot-path=/var/www/html --email "${EMAIL}" --agree-tos --no-eff-email ${OPTIONS}
else
echo "Certificate does not exist. Obtaining a new certificate..."
certbot certonly --non-interactive --webroot --webroot-path=/var/www/html --email "${EMAIL}" --agree-tos --no-eff-email -d "${DOMAIN}" ${OPTIONS}
fi
echo "Certificate operation successful"
# Note: Nginx reload should be handled outside this container
echo "Please ensure to reload Nginx to apply any certificate changes."

@@ -0,0 +1,4 @@
FROM couchbase/server:latest AS stage_base
# FROM couchbase:latest AS stage_base
COPY init-cbserver.sh /opt/couchbase/init/
RUN chmod +x /opt/couchbase/init/init-cbserver.sh

@@ -0,0 +1,44 @@
#!/bin/bash
# Used to start Couchbase Server. We can't get around this: docker compose only allows one start command, so we start Couchbase the way the standard couchbase Dockerfile would.
# https://github.com/couchbase/docker/blob/master/enterprise/couchbase-server/7.2.0/Dockerfile#L88
/entrypoint.sh couchbase-server &
# track if setup is complete so we don't try to setup again
FILE=/opt/couchbase/init/setupComplete.txt
if ! [ -f "$FILE" ]; then
# used to automatically create the cluster based on environment variables
# https://docs.couchbase.com/server/current/cli/cbcli/couchbase-cli-cluster-init.html
echo $COUCHBASE_ADMINISTRATOR_USERNAME ":" $COUCHBASE_ADMINISTRATOR_PASSWORD
sleep 20s
/opt/couchbase/bin/couchbase-cli cluster-init -c 127.0.0.1 \
--cluster-username $COUCHBASE_ADMINISTRATOR_USERNAME \
--cluster-password $COUCHBASE_ADMINISTRATOR_PASSWORD \
--services data,index,query,fts \
--cluster-ramsize $COUCHBASE_RAM_SIZE \
--cluster-index-ramsize $COUCHBASE_INDEX_RAM_SIZE \
--cluster-eventing-ramsize $COUCHBASE_EVENTING_RAM_SIZE \
--cluster-fts-ramsize $COUCHBASE_FTS_RAM_SIZE \
--index-storage-setting default
sleep 2s
# used to auto create the bucket based on environment variables
# https://docs.couchbase.com/server/current/cli/cbcli/couchbase-cli-bucket-create.html
/opt/couchbase/bin/couchbase-cli bucket-create -c localhost:8091 \
--username $COUCHBASE_ADMINISTRATOR_USERNAME \
--password $COUCHBASE_ADMINISTRATOR_PASSWORD \
--bucket $COUCHBASE_BUCKET \
--bucket-ramsize $COUCHBASE_BUCKET_RAMSIZE \
--bucket-type couchbase
# create file so we know that the cluster is setup and don't run the setup again
touch $FILE
fi
# docker compose will stop the container from running unless we do this
# known issue and workaround
tail -f /dev/null
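The fixed `sleep 20s` above assumes the server is ready within that window. A polling loop is a common alternative; this is a generic sketch (the `wait_for` helper and the marker-file probe are illustrative stand-ins, not part of the script):

```shell
# Poll a readiness command until it succeeds or a retry budget is exhausted
wait_for() {
    retries=$1; shift
    until "$@"; do
        retries=$((retries - 1))
        [ "$retries" -le 0 ] && return 1
        sleep 1
    done
    return 0
}

# Example: wait until a marker file appears (stand-in for a real health probe,
# e.g. an HTTP check against the server's status endpoint)
marker=$(mktemp -u)
( sleep 2; touch "$marker" ) &
wait_for 10 test -f "$marker" && ready=yes || ready=no
```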

@@ -0,0 +1,25 @@
#!/bin/bash
set -e
if [ "${VECTOR_STORE}" = "elasticsearch-ja" ]; then
# Check if the ICU tokenizer plugin is installed
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin list | grep -q analysis-icu; then
printf '%s\n' "Installing the ICU tokenizer plugin"
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-icu; then
printf '%s\n' "Failed to install the ICU tokenizer plugin"
exit 1
fi
fi
# Check if the Japanese language analyzer plugin is installed
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin list | grep -q analysis-kuromoji; then
printf '%s\n' "Installing the Japanese language analyzer plugin"
if ! /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-kuromoji; then
printf '%s\n' "Failed to install the Japanese language analyzer plugin"
exit 1
fi
fi
fi
# Run the original entrypoint script
exec /bin/tini -- /usr/local/bin/docker-entrypoint.sh

@@ -0,0 +1,48 @@
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
server {
listen ${NGINX_PORT};
server_name ${NGINX_SERVER_NAME};
location /console/api {
proxy_pass http://api:5001;
include proxy.conf;
}
location /api {
proxy_pass http://api:5001;
include proxy.conf;
}
location /v1 {
proxy_pass http://api:5001;
include proxy.conf;
}
location /files {
proxy_pass http://api:5001;
include proxy.conf;
}
location /explore {
proxy_pass http://web:3000;
include proxy.conf;
}
location /e/ {
proxy_pass http://plugin_daemon:5002;
proxy_set_header Dify-Hook-Url $scheme://$host$request_uri;
include proxy.conf;
}
location / {
proxy_pass http://web:3000;
include proxy.conf;
}
# placeholder for acme challenge location
${ACME_CHALLENGE_LOCATION}
# placeholder for https config defined in https.conf.template
${HTTPS_CONFIG}
}

@@ -0,0 +1,39 @@
#!/bin/bash
if [ "${NGINX_HTTPS_ENABLED}" = "true" ]; then
# Check if the certificate and key files for the specified domain exist
if [ -n "${CERTBOT_DOMAIN}" ] && \
[ -f "/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_FILENAME}" ] && \
[ -f "/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_KEY_FILENAME}" ]; then
SSL_CERTIFICATE_PATH="/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_FILENAME}"
SSL_CERTIFICATE_KEY_PATH="/etc/letsencrypt/live/${CERTBOT_DOMAIN}/${NGINX_SSL_CERT_KEY_FILENAME}"
else
SSL_CERTIFICATE_PATH="/etc/ssl/${NGINX_SSL_CERT_FILENAME}"
SSL_CERTIFICATE_KEY_PATH="/etc/ssl/${NGINX_SSL_CERT_KEY_FILENAME}"
fi
export SSL_CERTIFICATE_PATH
export SSL_CERTIFICATE_KEY_PATH
# set the HTTPS_CONFIG environment variable to the content of the https.conf.template
HTTPS_CONFIG=$(envsubst < /etc/nginx/https.conf.template)
export HTTPS_CONFIG
# Substitute the HTTPS_CONFIG in the default.conf.template with content from https.conf.template
envsubst '${HTTPS_CONFIG}' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
fi
if [ "${NGINX_ENABLE_CERTBOT_CHALLENGE}" = "true" ]; then
ACME_CHALLENGE_LOCATION='location /.well-known/acme-challenge/ { root /var/www/html; }'
else
ACME_CHALLENGE_LOCATION=''
fi
export ACME_CHALLENGE_LOCATION
env_vars=$(printenv | cut -d= -f1 | sed 's/^/$/g' | paste -sd, -)
envsubst "$env_vars" < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
envsubst "$env_vars" < /etc/nginx/proxy.conf.template > /etc/nginx/proxy.conf
envsubst < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
# Start Nginx using the default entrypoint
exec nginx -g 'daemon off;'
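The `env_vars` pipeline above builds a comma-separated `$NAME` list so that `envsubst` only substitutes variables that are actually exported, instead of blanking every `$`-token in the template. The list construction itself can be checked with the same standard tools (`NGINX_DEMO_VAR` is a throwaway variable for the demonstration):

```shell
# Build the restricted substitution list the same way the entrypoint does
export NGINX_DEMO_VAR=hello
env_vars=$(printenv | cut -d= -f1 | sed 's/^/$/g' | paste -sd, -)

# The list should contain our exported variable as "$NGINX_DEMO_VAR"
case "$env_vars" in
    *'$NGINX_DEMO_VAR'*) found=yes ;;
    *) found=no ;;
esac
```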

@@ -0,0 +1,9 @@
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
listen ${NGINX_SSL_PORT} ssl;
ssl_certificate ${SSL_CERTIFICATE_PATH};
ssl_certificate_key ${SSL_CERTIFICATE_KEY_PATH};
ssl_protocols ${NGINX_SSL_PROTOCOLS};
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

@@ -0,0 +1,34 @@
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
user nginx;
worker_processes ${NGINX_WORKER_PROCESSES};
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout ${NGINX_KEEPALIVE_TIMEOUT};
#gzip on;
client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE};
include /etc/nginx/conf.d/*.conf;
}

@@ -0,0 +1,11 @@
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
proxy_read_timeout ${NGINX_PROXY_READ_TIMEOUT};
proxy_send_timeout ${NGINX_PROXY_SEND_TIMEOUT};

@@ -0,0 +1,42 @@
#!/bin/bash
# Modified based on Squid OCI image entrypoint
# This entrypoint aims to forward the squid logs to stdout to assist users of
# common container related tooling (e.g., kubernetes, docker-compose, etc) to
# access the service logs.
# Moreover, it invokes the squid binary, leaving all the desired parameters to
# be provided by the "command" passed to the spawned container. If no command
# is provided by the user, the default behavior (as per the CMD statement in
# the Dockerfile) will be to use Ubuntu's default configuration [1] and run
# squid with the "-NYC" options to mimic the behavior of the Ubuntu provided
# systemd unit.
# [1] The default configuration is changed in the Dockerfile to allow local
# network connections. See the Dockerfile for further information.
echo "[ENTRYPOINT] re-creating the snakeoil self-signed certificate removed in the build process"
if [ ! -f /etc/ssl/private/ssl-cert-snakeoil.key ]; then
/usr/sbin/make-ssl-cert generate-default-snakeoil --force-overwrite > /dev/null 2>&1
fi
tail -F /var/log/squid/access.log 2>/dev/null &
tail -F /var/log/squid/error.log 2>/dev/null &
tail -F /var/log/squid/store.log 2>/dev/null &
tail -F /var/log/squid/cache.log 2>/dev/null &
# Replace environment variables in the template and output to the squid.conf
echo "[ENTRYPOINT] replacing environment variables in the template"
awk '{
while(match($0, /\${[A-Za-z_][A-Za-z_0-9]*}/)) {
var = substr($0, RSTART+2, RLENGTH-3)
val = ENVIRON[var]
$0 = substr($0, 1, RSTART-1) val substr($0, RSTART+RLENGTH)
}
print
}' /etc/squid/squid.conf.template > /etc/squid/squid.conf
# Create any missing cache/swap directories before starting
/usr/sbin/squid -Nz
echo "[ENTRYPOINT] starting squid"
/usr/sbin/squid -f /etc/squid/squid.conf -NYC 1
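The `awk` expansion above is a pure-AWK stand-in for `envsubst`, reading values from the `ENVIRON` array; it can be exercised on a one-line template:

```shell
# Expand ${VAR} placeholders from the environment using awk's ENVIRON array
export HTTP_PORT=3128
out=$(echo 'http_port ${HTTP_PORT}' | awk '{
    while(match($0, /\${[A-Za-z_][A-Za-z_0-9]*}/)) {
        var = substr($0, RSTART+2, RLENGTH-3)   # name between "${" and "}"
        val = ENVIRON[var]
        $0 = substr($0, 1, RSTART-1) val substr($0, RSTART+RLENGTH)
    }
    print
}')
```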

@@ -0,0 +1,51 @@
acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
# acl SSL_ports port 1025-65535 # Enable the configuration to resolve this issue: https://github.com/langgenius/dify/issues/12792
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
include /etc/squid/conf.d/*.conf
http_access deny all
################################## Proxy Server ################################
http_port ${HTTP_PORT}
coredump_dir ${COREDUMP_DIR}
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern . 0 20% 4320
# cache_dir ufs /var/spool/squid 100 16 256
# upstream proxy, set to your own upstream proxy IP to avoid SSRF attacks
# cache_peer 172.1.1.1 parent 3128 0 no-query no-digest no-netdb-exchange default
################################## Reverse Proxy To Sandbox ################################
http_port ${REVERSE_PROXY_PORT} accel vhost
cache_peer ${SANDBOX_HOST} parent ${SANDBOX_PORT} 0 no-query originserver
acl src_all src all
http_access allow src_all

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
DB_INITIALIZED="/opt/oracle/oradata/dbinit"
#[ -f ${DB_INITIALIZED} ] && exit
#touch ${DB_INITIALIZED}
if [ -f ${DB_INITIALIZED} ]; then
echo 'Init marker file exists. Database has already been initialized.'
exit
else
echo 'Init marker file does not exist. Initializing database on first startup.'
"$ORACLE_HOME"/bin/sqlplus -s "/ as sysdba" @"/opt/oracle/scripts/startup/init_user.script";
touch ${DB_INITIALIZED}
fi
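The marker-file guard above can be simulated without Oracle; `run_once` is a hypothetical helper for the demonstration, not part of the script:

```shell
# Run an expensive init step only if its marker file does not exist yet
marker=$(mktemp -d)/dbinit
runs=0

run_once() {
    if [ -f "$marker" ]; then
        return 0            # already initialized, skip
    fi
    runs=$((runs + 1))      # stand-in for the real init work
    touch "$marker"         # record that init has completed
}

run_once
run_once
```

Because the marker is created only after the init work finishes, a crash mid-init leaves no marker and the work is retried on the next start.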

@@ -0,0 +1,10 @@
show pdbs;
ALTER SYSTEM SET PROCESSES=500 SCOPE=SPFILE;
alter session set container= freepdb1;
create user dify identified by dify DEFAULT TABLESPACE users quota unlimited on users;
grant DB_DEVELOPER_ROLE to dify;
BEGIN
CTX_DDL.CREATE_PREFERENCE('my_chinese_vgram_lexer','CHINESE_VGRAM_LEXER');
END;
/


@ -0,0 +1,4 @@
# PD Configuration File reference:
# https://docs.pingcap.com/tidb/stable/pd-configuration-file#pd-configuration-file
[replication]
max-replicas = 1


@ -0,0 +1,13 @@
# TiFlash tiflash-learner.toml Configuration File reference:
# https://docs.pingcap.com/tidb/stable/tiflash-configuration#configure-the-tiflash-learnertoml-file
log-file = "/logs/tiflash_tikv.log"
[server]
engine-addr = "tiflash:4030"
addr = "0.0.0.0:20280"
advertise-addr = "tiflash:20280"
status-addr = "tiflash:20292"
[storage]
data-dir = "/data/flash"


@ -0,0 +1,19 @@
# TiFlash tiflash.toml Configuration File reference:
# https://docs.pingcap.com/tidb/stable/tiflash-configuration#configure-the-tiflashtoml-file
listen_host = "0.0.0.0"
path = "/data"
[flash]
tidb_status_addr = "tidb:10080"
service_addr = "tiflash:4030"
[flash.proxy]
config = "/tiflash-learner.toml"
[logger]
errorlog = "/logs/tiflash_error.log"
log = "/logs/tiflash.log"
[raft]
pd_addr = "pd0:2379"


@ -0,0 +1,62 @@
services:
pd0:
image: pingcap/pd:v8.5.1
# ports:
# - "2379"
volumes:
- ./config/pd.toml:/pd.toml:ro
- ./volumes/data:/data
- ./volumes/logs:/logs
command:
- --name=pd0
- --client-urls=http://0.0.0.0:2379
- --peer-urls=http://0.0.0.0:2380
- --advertise-client-urls=http://pd0:2379
- --advertise-peer-urls=http://pd0:2380
- --initial-cluster=pd0=http://pd0:2380
- --data-dir=/data/pd
- --config=/pd.toml
- --log-file=/logs/pd.log
restart: on-failure
tikv:
image: pingcap/tikv:v8.5.1
volumes:
- ./volumes/data:/data
- ./volumes/logs:/logs
command:
- --addr=0.0.0.0:20160
- --advertise-addr=tikv:20160
- --status-addr=tikv:20180
- --data-dir=/data/tikv
- --pd=pd0:2379
- --log-file=/logs/tikv.log
depends_on:
- "pd0"
restart: on-failure
tidb:
image: pingcap/tidb:v8.5.1
# ports:
# - "4000:4000"
volumes:
- ./volumes/logs:/logs
command:
- --advertise-address=tidb
- --store=tikv
- --path=pd0:2379
- --log-file=/logs/tidb.log
depends_on:
- "tikv"
restart: on-failure
tiflash:
image: pingcap/tiflash:v8.5.1
volumes:
- ./config/tiflash.toml:/tiflash.toml:ro
- ./config/tiflash-learner.toml:/tiflash-learner.toml:ro
- ./volumes/data:/data
- ./volumes/logs:/logs
command:
- --config=/tiflash.toml
depends_on:
- "tikv"
- "tidb"
restart: on-failure


@ -0,0 +1,17 @@
<clickhouse>
<users>
<default>
<password></password>
<networks>
<ip>::1</ip> <!-- change to ::/0 to allow access from all addresses -->
<ip>127.0.0.1</ip>
<ip>10.0.0.0/8</ip>
<ip>172.16.0.0/12</ip>
<ip>192.168.0.0/16</ip>
</networks>
<profile>default</profile>
<quota>default</quota>
<access_management>1</access_management>
</default>
</users>
</clickhouse>
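The `<networks>` element above restricts the default user to loopback plus the RFC 1918 private ranges. A quick way to check whether a client address would pass that allowlist, sketched with Python's stdlib `ipaddress` module:

```python
import ipaddress

# The entries from the <networks> element above.
ALLOWED = ["::1", "127.0.0.1", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

def is_allowed(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    for entry in ALLOWED:
        if "/" in entry:
            net = ipaddress.ip_network(entry)
            # Membership only makes sense within the same IP version.
            if addr.version == net.version and addr in net:
                return True
        elif addr == ipaddress.ip_address(entry):
            return True
    return False

print(is_allowed("192.168.1.5"))  # True: inside 192.168.0.0/16
print(is_allowed("8.8.8.8"))      # False: public address
```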


@ -0,0 +1 @@
ALTER SYSTEM SET ob_vector_memory_limit_percentage = 30;


@ -0,0 +1,222 @@
---
# Copyright OpenSearch Contributors
# SPDX-License-Identifier: Apache-2.0
# Description:
# Default configuration for OpenSearch Dashboards
# OpenSearch Dashboards is served by a back end server. This setting specifies the port to use.
# server.port: 5601
# Specifies the address to which the OpenSearch Dashboards server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
# server.host: "localhost"
# Enables you to specify a path to mount OpenSearch Dashboards at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell OpenSearch Dashboards if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
# server.basePath: ""
# Specifies whether OpenSearch Dashboards should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576
# The OpenSearch Dashboards server's name. This is used for display purposes.
# server.name: "your-hostname"
# The URLs of the OpenSearch instances to use for all your queries.
# opensearch.hosts: ["http://localhost:9200"]
# OpenSearch Dashboards uses an index in OpenSearch to store saved searches, visualizations and
# dashboards. OpenSearch Dashboards creates a new index if the index doesn't already exist.
# opensearchDashboards.index: ".opensearch_dashboards"
# The default application to load.
# opensearchDashboards.defaultAppId: "home"
# Setting for an optimized healthcheck that only uses the local OpenSearch node to do Dashboards healthcheck.
# This setting should be used for large clusters or for clusters with ingest-heavy nodes.
# It allows Dashboards to only healthcheck using the local OpenSearch node rather than fan out requests across all nodes.
#
# It requires the user to create an OpenSearch node attribute with the same name as the value used in the setting
# This node attribute should assign all nodes of the same cluster an integer value that increments with each new cluster that is spun up
# e.g. in opensearch.yml file you would set the value to a setting using node.attr.cluster_id:
# Should only be enabled if there is a corresponding node attribute created in your OpenSearch config that matches the value here
# opensearch.optimizedHealthcheckId: "cluster_id"
# If your OpenSearch is protected with basic authentication, these settings provide
# the username and password that the OpenSearch Dashboards server uses to perform maintenance on the OpenSearch Dashboards
# index at startup. Your OpenSearch Dashboards users still need to authenticate with OpenSearch, which
# is proxied through the OpenSearch Dashboards server.
# opensearch.username: "opensearch_dashboards_system"
# opensearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the OpenSearch Dashboards server to the browser.
# server.ssl.enabled: false
# server.ssl.certificate: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of OpenSearch Dashboards to OpenSearch and are required when
# xpack.security.http.ssl.client_authentication in OpenSearch is set to required.
# opensearch.ssl.certificate: /path/to/your/client.crt
# opensearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your OpenSearch instance.
# opensearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
# opensearch.ssl.verificationMode: full
# Time in milliseconds to wait for OpenSearch to respond to pings. Defaults to the value of
# the opensearch.requestTimeout setting.
# opensearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or OpenSearch. This value
# must be a positive integer.
# opensearch.requestTimeout: 30000
# List of OpenSearch Dashboards client-side headers to send to OpenSearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# opensearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to OpenSearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the opensearch.requestHeadersWhitelist configuration.
# opensearch.customHeaders: {}
# Time in milliseconds for OpenSearch to wait for responses from shards. Set to 0 to disable.
# opensearch.shardTimeout: 30000
# Logs queries sent to OpenSearch. Requires logging.verbose set to true.
# opensearch.logQueries: false
# Specifies the path where OpenSearch Dashboards creates the process ID file.
# pid.file: /var/run/opensearchDashboards.pid
# Enables you to specify a file where OpenSearch Dashboards stores log output.
# logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
# logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
# ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en (default), Chinese - zh-CN.
# i18n.locale: "en"
# Set the allowlist to check input graphite Url. Allowlist is the default check list.
# vis_type_timeline.graphiteAllowedUrls: ['https://www.hostedgraphite.com/UID/ACCESS_KEY/graphite']
# Set the blocklist to check input graphite Url. Blocklist is an IP list.
# Below is an example for reference
# vis_type_timeline.graphiteBlockedIPs: [
# //Loopback
# '127.0.0.0/8',
# '::1/128',
# //Link-local Address for IPv6
# 'fe80::/10',
# //Private IP address for IPv4
# '10.0.0.0/8',
# '172.16.0.0/12',
# '192.168.0.0/16',
# //Unique local address (ULA)
# 'fc00::/7',
# //Reserved IP address
# '0.0.0.0/8',
# '100.64.0.0/10',
# '192.0.0.0/24',
# '192.0.2.0/24',
# '198.18.0.0/15',
# '192.88.99.0/24',
# '198.51.100.0/24',
# '203.0.113.0/24',
# '224.0.0.0/4',
# '240.0.0.0/4',
# '255.255.255.255/32',
# '::/128',
# '2001:db8::/32',
# 'ff00::/8',
# ]
# vis_type_timeline.graphiteBlockedIPs: []
# opensearchDashboards.branding:
# logo:
# defaultUrl: ""
# darkModeUrl: ""
# mark:
# defaultUrl: ""
# darkModeUrl: ""
# loadingLogo:
# defaultUrl: ""
# darkModeUrl: ""
# faviconUrl: ""
# applicationTitle: ""
# Set the value of this setting to true to capture region blocked warnings and errors
# for your map rendering services.
# map.showRegionBlockedWarning: false
# Set the value of this setting to false to suppress search usage telemetry
# for reducing the load of OpenSearch cluster.
# data.search.usageTelemetry.enabled: false
# 2.4 renames 'wizard.enabled: false' to 'vis_builder.enabled: false'
# Set the value of this setting to false to disable VisBuilder
# functionality in Visualization.
# vis_builder.enabled: false
# 2.4 New Experimental Feature
# Set the value of this setting to true to enable the experimental multiple data source
# support feature. Use with caution.
# data_source.enabled: false
# Set the value of these settings to customize the crypto materials used to encrypt saved
# credentials in data sources.
# data_source.encryption.wrappingKeyName: 'changeme'
# data_source.encryption.wrappingKeyNamespace: 'changeme'
# data_source.encryption.wrappingKey: [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
# 2.6 New ML Commons Dashboards Feature
# Set the value of this setting to true to enable the ml commons dashboards
# ml_commons_dashboards.enabled: false
# 2.12 New experimental Assistant Dashboards Feature
# Set the value of this setting to true to enable the assistant dashboards
# assistant.chat.enabled: false
# 2.13 New Query Assistant Feature
# Set the value of this setting to false to disable the query assistant
# observability.query_assist.enabled: false
# 2.14 Enable Ui Metric Collectors in Usage Collector
# Set the value of this setting to true to enable UI Metric collections
# usageCollection.uiMetric.enabled: false
opensearch.hosts: [https://localhost:9200]
opensearch.ssl.verificationMode: none
opensearch.username: admin
opensearch.password: 'Qazwsxedc!@#123'
opensearch.requestHeadersWhitelist: [authorization, securitytenant]
opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: [Private, Global]
opensearch_security.readonly_mode.roles: [kibana_read_only]
# Use this setting if you are running opensearch-dashboards without https
opensearch_security.cookie.secure: false
server.host: '0.0.0.0'
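The `opensearch.username`/`opensearch.password` values above are sent to OpenSearch as HTTP basic auth. A sketch of the `Authorization` header this effectively produces (the credentials here are the defaults from this file; rotate them in production):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # HTTP basic auth: base64 of "user:password", prefixed with "Basic ".
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("admin", "Qazwsxedc!@#123"))
```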


@ -0,0 +1,14 @@
app:
port: 8194
debug: True
key: dify-sandbox
max_workers: 4
max_requests: 50
worker_timeout: 5
python_path: /usr/local/bin/python3
enable_network: True # please make sure there is no network risk in your environment
allowed_syscalls: # please leave it empty if you have no idea how seccomp works
proxy:
socks5: ''
http: ''
https: ''


@ -0,0 +1,35 @@
app:
port: 8194
debug: True
key: dify-sandbox
max_workers: 4
max_requests: 50
worker_timeout: 5
python_path: /usr/local/bin/python3
python_lib_path:
- /usr/local/lib/python3.10
- /usr/lib/python3.10
- /usr/lib/python3
- /usr/lib/x86_64-linux-gnu
- /etc/ssl/certs/ca-certificates.crt
- /etc/nsswitch.conf
- /etc/hosts
- /etc/resolv.conf
- /run/systemd/resolve/stub-resolv.conf
- /run/resolvconf/resolv.conf
- /etc/localtime
- /usr/share/zoneinfo
- /etc/timezone
# add more paths if needed
python_pip_mirror_url: https://pypi.tuna.tsinghua.edu.cn/simple
nodejs_path: /usr/local/bin/node
enable_network: True
allowed_syscalls:
- 1
- 2
- 3
# add all the syscalls which you require
proxy:
socks5: ''
http: ''
https: ''
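The `allowed_syscalls` entries above are raw Linux syscall numbers. On x86-64, 1, 2, and 3 are `write`, `open`, and `close`; a small lookup sketch for sanity-checking a list (the table is truncated to the numbers used above and should be extended for a real allowlist):

```python
# Partial x86-64 syscall table; extend as needed for your own allowlist.
SYSCALLS_X86_64 = {0: "read", 1: "write", 2: "open", 3: "close"}

def syscall_names(numbers):
    return [SYSCALLS_X86_64.get(n, f"unknown({n})") for n in numbers]

print(syscall_names([1, 2, 3]))  # ['write', 'open', 'close']
```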

apps/dify/1.1.1/data.yml Normal file

@ -0,0 +1,82 @@
additionalProperties:
formFields:
- default: "/home/dify"
edit: true
envKey: DIFY_ROOT_PATH
labelZh: 数据持久化路径
labelEn: Data persistence path
required: true
type: text
- default: 8080
edit: true
envKey: PANEL_APP_PORT_HTTP
labelZh: 网站端口
labelEn: WebUI port
required: true
rule: paramPort
type: number
- default: 8443
edit: true
envKey: PANEL_APP_PORT_HTTPS
labelZh: HTTPS 端口
labelEn: HTTPS port
required: true
rule: paramPort
type: number
- default: 5432
edit: true
envKey: EXPOSE_DB_PORT
labelZh: 数据库端口
labelEn: Database port
required: true
rule: paramPort
type: number
- default: 5003
edit: true
envKey: EXPOSE_PLUGIN_DEBUGGING_PORT
labelZh: 插件调试端口
labelEn: Plugin debugging port
required: true
rule: paramPort
type: number
- default: 19530
disabled: true
edit: true
envKey: MILVUS_STANDALONE_API_PORT
labelZh: Milvus 接口端口
labelEn: Milvus API port
required: true
rule: paramPort
type: number
- default: 9091
disabled: true
envKey: MILVUS_STANDALONE_SERVER_PORT
labelZh: Milvus 服务端口
labelEn: Milvus server port
required: true
rule: paramPort
type: number
- default: 8123
edit: true
envKey: MYSCALE_PORT
labelZh: MyScale 端口
labelEn: MyScale port
required: true
rule: paramPort
type: number
- default: 9200
edit: true
envKey: ELASTICSEARCH_PORT
labelZh: Elasticsearch 端口
labelEn: Elasticsearch port
required: true
rule: paramPort
type: number
- default: 5601
edit: true
envKey: KIBANA_PORT
labelZh: Kibana 端口
labelEn: Kibana port
required: true
rule: paramPort
type: number
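Several of the form fields above use `rule: paramPort`. A sketch of what such a port rule plausibly checks; the exact 1Panel validation semantics are an assumption here:

```python
def param_port(value) -> bool:
    # Hypothetical port rule: an integer in the valid TCP/UDP range 1-65535.
    try:
        port = int(value)
    except (TypeError, ValueError):
        return False
    return 1 <= port <= 65535

print(param_port(8080))   # True
print(param_port(70000))  # False
print(param_port("abc"))  # False
```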


@ -0,0 +1,970 @@
# ==================================================================
# WARNING: This file is auto-generated by generate_docker_compose
# Do not modify this file directly. Instead, update the .env.example
# or docker-compose-template.yaml and regenerate this file.
# ==================================================================
x-shared-env: &shared-api-worker-env
CONSOLE_API_URL: ${CONSOLE_API_URL:-}
CONSOLE_WEB_URL: ${CONSOLE_WEB_URL:-}
SERVICE_API_URL: ${SERVICE_API_URL:-}
APP_API_URL: ${APP_API_URL:-}
APP_WEB_URL: ${APP_WEB_URL:-}
FILES_URL: ${FILES_URL:-}
LOG_LEVEL: ${LOG_LEVEL:-INFO}
LOG_FILE: ${LOG_FILE:-/app/logs/server.log}
LOG_FILE_MAX_SIZE: ${LOG_FILE_MAX_SIZE:-20}
LOG_FILE_BACKUP_COUNT: ${LOG_FILE_BACKUP_COUNT:-5}
LOG_DATEFORMAT: ${LOG_DATEFORMAT:-%Y-%m-%d %H:%M:%S}
LOG_TZ: ${LOG_TZ:-UTC}
DEBUG: ${DEBUG:-false}
FLASK_DEBUG: ${FLASK_DEBUG:-false}
SECRET_KEY: ${SECRET_KEY:-sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U}
INIT_PASSWORD: ${INIT_PASSWORD:-}
DEPLOY_ENV: ${DEPLOY_ENV:-PRODUCTION}
CHECK_UPDATE_URL: ${CHECK_UPDATE_URL:-https://updates.dify.ai}
OPENAI_API_BASE: ${OPENAI_API_BASE:-https://api.openai.com/v1}
MIGRATION_ENABLED: ${MIGRATION_ENABLED:-true}
FILES_ACCESS_TIMEOUT: ${FILES_ACCESS_TIMEOUT:-300}
ACCESS_TOKEN_EXPIRE_MINUTES: ${ACCESS_TOKEN_EXPIRE_MINUTES:-60}
REFRESH_TOKEN_EXPIRE_DAYS: ${REFRESH_TOKEN_EXPIRE_DAYS:-30}
APP_MAX_ACTIVE_REQUESTS: ${APP_MAX_ACTIVE_REQUESTS:-0}
APP_MAX_EXECUTION_TIME: ${APP_MAX_EXECUTION_TIME:-1200}
DIFY_BIND_ADDRESS: ${DIFY_BIND_ADDRESS:-0.0.0.0}
DIFY_PORT: ${DIFY_PORT:-5001}
SERVER_WORKER_AMOUNT: ${SERVER_WORKER_AMOUNT:-1}
SERVER_WORKER_CLASS: ${SERVER_WORKER_CLASS:-gevent}
SERVER_WORKER_CONNECTIONS: ${SERVER_WORKER_CONNECTIONS:-10}
CELERY_WORKER_CLASS: ${CELERY_WORKER_CLASS:-}
GUNICORN_TIMEOUT: ${GUNICORN_TIMEOUT:-360}
CELERY_WORKER_AMOUNT: ${CELERY_WORKER_AMOUNT:-}
CELERY_AUTO_SCALE: ${CELERY_AUTO_SCALE:-false}
CELERY_MAX_WORKERS: ${CELERY_MAX_WORKERS:-}
CELERY_MIN_WORKERS: ${CELERY_MIN_WORKERS:-}
API_TOOL_DEFAULT_CONNECT_TIMEOUT: ${API_TOOL_DEFAULT_CONNECT_TIMEOUT:-10}
API_TOOL_DEFAULT_READ_TIMEOUT: ${API_TOOL_DEFAULT_READ_TIMEOUT:-60}
DB_USERNAME: ${DB_USERNAME:-postgres}
DB_PASSWORD: ${DB_PASSWORD:-difyai123456}
DB_HOST: ${DB_HOST:-db}
DB_PORT: ${DB_PORT:-5432}
DB_DATABASE: ${DB_DATABASE:-dify}
SQLALCHEMY_POOL_SIZE: ${SQLALCHEMY_POOL_SIZE:-30}
SQLALCHEMY_POOL_RECYCLE: ${SQLALCHEMY_POOL_RECYCLE:-3600}
SQLALCHEMY_ECHO: ${SQLALCHEMY_ECHO:-false}
POSTGRES_MAX_CONNECTIONS: ${POSTGRES_MAX_CONNECTIONS:-100}
POSTGRES_SHARED_BUFFERS: ${POSTGRES_SHARED_BUFFERS:-128MB}
POSTGRES_WORK_MEM: ${POSTGRES_WORK_MEM:-4MB}
POSTGRES_MAINTENANCE_WORK_MEM: ${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}
POSTGRES_EFFECTIVE_CACHE_SIZE: ${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_USERNAME: ${REDIS_USERNAME:-}
REDIS_PASSWORD: ${REDIS_PASSWORD:-difyai123456}
REDIS_USE_SSL: ${REDIS_USE_SSL:-false}
REDIS_DB: ${REDIS_DB:-0}
REDIS_USE_SENTINEL: ${REDIS_USE_SENTINEL:-false}
REDIS_SENTINELS: ${REDIS_SENTINELS:-}
REDIS_SENTINEL_SERVICE_NAME: ${REDIS_SENTINEL_SERVICE_NAME:-}
REDIS_SENTINEL_USERNAME: ${REDIS_SENTINEL_USERNAME:-}
REDIS_SENTINEL_PASSWORD: ${REDIS_SENTINEL_PASSWORD:-}
REDIS_SENTINEL_SOCKET_TIMEOUT: ${REDIS_SENTINEL_SOCKET_TIMEOUT:-0.1}
REDIS_USE_CLUSTERS: ${REDIS_USE_CLUSTERS:-false}
REDIS_CLUSTERS: ${REDIS_CLUSTERS:-}
REDIS_CLUSTERS_PASSWORD: ${REDIS_CLUSTERS_PASSWORD:-}
CELERY_BROKER_URL: ${CELERY_BROKER_URL:-redis://:difyai123456@redis:6379/1}
BROKER_USE_SSL: ${BROKER_USE_SSL:-false}
CELERY_USE_SENTINEL: ${CELERY_USE_SENTINEL:-false}
CELERY_SENTINEL_MASTER_NAME: ${CELERY_SENTINEL_MASTER_NAME:-}
CELERY_SENTINEL_SOCKET_TIMEOUT: ${CELERY_SENTINEL_SOCKET_TIMEOUT:-0.1}
WEB_API_CORS_ALLOW_ORIGINS: ${WEB_API_CORS_ALLOW_ORIGINS:-*}
CONSOLE_CORS_ALLOW_ORIGINS: ${CONSOLE_CORS_ALLOW_ORIGINS:-*}
STORAGE_TYPE: ${STORAGE_TYPE:-opendal}
OPENDAL_SCHEME: ${OPENDAL_SCHEME:-fs}
OPENDAL_FS_ROOT: ${OPENDAL_FS_ROOT:-storage}
S3_ENDPOINT: ${S3_ENDPOINT:-}
S3_REGION: ${S3_REGION:-us-east-1}
S3_BUCKET_NAME: ${S3_BUCKET_NAME:-difyai}
S3_ACCESS_KEY: ${S3_ACCESS_KEY:-}
S3_SECRET_KEY: ${S3_SECRET_KEY:-}
S3_USE_AWS_MANAGED_IAM: ${S3_USE_AWS_MANAGED_IAM:-false}
AZURE_BLOB_ACCOUNT_NAME: ${AZURE_BLOB_ACCOUNT_NAME:-difyai}
AZURE_BLOB_ACCOUNT_KEY: ${AZURE_BLOB_ACCOUNT_KEY:-difyai}
AZURE_BLOB_CONTAINER_NAME: ${AZURE_BLOB_CONTAINER_NAME:-difyai-container}
AZURE_BLOB_ACCOUNT_URL: ${AZURE_BLOB_ACCOUNT_URL:-https://<your_account_name>.blob.core.windows.net}
GOOGLE_STORAGE_BUCKET_NAME: ${GOOGLE_STORAGE_BUCKET_NAME:-your-bucket-name}
GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: ${GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64:-}
ALIYUN_OSS_BUCKET_NAME: ${ALIYUN_OSS_BUCKET_NAME:-your-bucket-name}
ALIYUN_OSS_ACCESS_KEY: ${ALIYUN_OSS_ACCESS_KEY:-your-access-key}
ALIYUN_OSS_SECRET_KEY: ${ALIYUN_OSS_SECRET_KEY:-your-secret-key}
ALIYUN_OSS_ENDPOINT: ${ALIYUN_OSS_ENDPOINT:-https://oss-ap-southeast-1-internal.aliyuncs.com}
ALIYUN_OSS_REGION: ${ALIYUN_OSS_REGION:-ap-southeast-1}
ALIYUN_OSS_AUTH_VERSION: ${ALIYUN_OSS_AUTH_VERSION:-v4}
ALIYUN_OSS_PATH: ${ALIYUN_OSS_PATH:-your-path}
TENCENT_COS_BUCKET_NAME: ${TENCENT_COS_BUCKET_NAME:-your-bucket-name}
TENCENT_COS_SECRET_KEY: ${TENCENT_COS_SECRET_KEY:-your-secret-key}
TENCENT_COS_SECRET_ID: ${TENCENT_COS_SECRET_ID:-your-secret-id}
TENCENT_COS_REGION: ${TENCENT_COS_REGION:-your-region}
TENCENT_COS_SCHEME: ${TENCENT_COS_SCHEME:-your-scheme}
OCI_ENDPOINT: ${OCI_ENDPOINT:-https://objectstorage.us-ashburn-1.oraclecloud.com}
OCI_BUCKET_NAME: ${OCI_BUCKET_NAME:-your-bucket-name}
OCI_ACCESS_KEY: ${OCI_ACCESS_KEY:-your-access-key}
OCI_SECRET_KEY: ${OCI_SECRET_KEY:-your-secret-key}
OCI_REGION: ${OCI_REGION:-us-ashburn-1}
HUAWEI_OBS_BUCKET_NAME: ${HUAWEI_OBS_BUCKET_NAME:-your-bucket-name}
HUAWEI_OBS_SECRET_KEY: ${HUAWEI_OBS_SECRET_KEY:-your-secret-key}
HUAWEI_OBS_ACCESS_KEY: ${HUAWEI_OBS_ACCESS_KEY:-your-access-key}
HUAWEI_OBS_SERVER: ${HUAWEI_OBS_SERVER:-your-server-url}
VOLCENGINE_TOS_BUCKET_NAME: ${VOLCENGINE_TOS_BUCKET_NAME:-your-bucket-name}
VOLCENGINE_TOS_SECRET_KEY: ${VOLCENGINE_TOS_SECRET_KEY:-your-secret-key}
VOLCENGINE_TOS_ACCESS_KEY: ${VOLCENGINE_TOS_ACCESS_KEY:-your-access-key}
VOLCENGINE_TOS_ENDPOINT: ${VOLCENGINE_TOS_ENDPOINT:-your-server-url}
VOLCENGINE_TOS_REGION: ${VOLCENGINE_TOS_REGION:-your-region}
BAIDU_OBS_BUCKET_NAME: ${BAIDU_OBS_BUCKET_NAME:-your-bucket-name}
BAIDU_OBS_SECRET_KEY: ${BAIDU_OBS_SECRET_KEY:-your-secret-key}
BAIDU_OBS_ACCESS_KEY: ${BAIDU_OBS_ACCESS_KEY:-your-access-key}
BAIDU_OBS_ENDPOINT: ${BAIDU_OBS_ENDPOINT:-your-server-url}
SUPABASE_BUCKET_NAME: ${SUPABASE_BUCKET_NAME:-your-bucket-name}
SUPABASE_API_KEY: ${SUPABASE_API_KEY:-your-access-key}
SUPABASE_URL: ${SUPABASE_URL:-your-server-url}
VECTOR_STORE: ${VECTOR_STORE:-weaviate}
WEAVIATE_ENDPOINT: ${WEAVIATE_ENDPOINT:-http://weaviate:8080}
WEAVIATE_API_KEY: ${WEAVIATE_API_KEY:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
QDRANT_URL: ${QDRANT_URL:-http://qdrant:6333}
QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456}
QDRANT_CLIENT_TIMEOUT: ${QDRANT_CLIENT_TIMEOUT:-20}
QDRANT_GRPC_ENABLED: ${QDRANT_GRPC_ENABLED:-false}
QDRANT_GRPC_PORT: ${QDRANT_GRPC_PORT:-6334}
MILVUS_URI: ${MILVUS_URI:-http://127.0.0.1:19530}
MILVUS_TOKEN: ${MILVUS_TOKEN:-}
MILVUS_USER: ${MILVUS_USER:-root}
MILVUS_PASSWORD: ${MILVUS_PASSWORD:-Milvus}
MILVUS_ENABLE_HYBRID_SEARCH: ${MILVUS_ENABLE_HYBRID_SEARCH:-False}
MYSCALE_HOST: ${MYSCALE_HOST:-myscale}
MYSCALE_PORT: ${MYSCALE_PORT:-8123}
MYSCALE_USER: ${MYSCALE_USER:-default}
MYSCALE_PASSWORD: ${MYSCALE_PASSWORD:-}
MYSCALE_DATABASE: ${MYSCALE_DATABASE:-dify}
MYSCALE_FTS_PARAMS: ${MYSCALE_FTS_PARAMS:-}
COUCHBASE_CONNECTION_STRING: ${COUCHBASE_CONNECTION_STRING:-couchbase://couchbase-server}
COUCHBASE_USER: ${COUCHBASE_USER:-Administrator}
COUCHBASE_PASSWORD: ${COUCHBASE_PASSWORD:-password}
COUCHBASE_BUCKET_NAME: ${COUCHBASE_BUCKET_NAME:-Embeddings}
COUCHBASE_SCOPE_NAME: ${COUCHBASE_SCOPE_NAME:-_default}
PGVECTOR_HOST: ${PGVECTOR_HOST:-pgvector}
PGVECTOR_PORT: ${PGVECTOR_PORT:-5432}
PGVECTOR_USER: ${PGVECTOR_USER:-postgres}
PGVECTOR_PASSWORD: ${PGVECTOR_PASSWORD:-difyai123456}
PGVECTOR_DATABASE: ${PGVECTOR_DATABASE:-dify}
PGVECTOR_MIN_CONNECTION: ${PGVECTOR_MIN_CONNECTION:-1}
PGVECTOR_MAX_CONNECTION: ${PGVECTOR_MAX_CONNECTION:-5}
PGVECTO_RS_HOST: ${PGVECTO_RS_HOST:-pgvecto-rs}
PGVECTO_RS_PORT: ${PGVECTO_RS_PORT:-5432}
PGVECTO_RS_USER: ${PGVECTO_RS_USER:-postgres}
PGVECTO_RS_PASSWORD: ${PGVECTO_RS_PASSWORD:-difyai123456}
PGVECTO_RS_DATABASE: ${PGVECTO_RS_DATABASE:-dify}
ANALYTICDB_KEY_ID: ${ANALYTICDB_KEY_ID:-your-ak}
ANALYTICDB_KEY_SECRET: ${ANALYTICDB_KEY_SECRET:-your-sk}
ANALYTICDB_REGION_ID: ${ANALYTICDB_REGION_ID:-cn-hangzhou}
ANALYTICDB_INSTANCE_ID: ${ANALYTICDB_INSTANCE_ID:-gp-ab123456}
ANALYTICDB_ACCOUNT: ${ANALYTICDB_ACCOUNT:-testaccount}
ANALYTICDB_PASSWORD: ${ANALYTICDB_PASSWORD:-testpassword}
ANALYTICDB_NAMESPACE: ${ANALYTICDB_NAMESPACE:-dify}
ANALYTICDB_NAMESPACE_PASSWORD: ${ANALYTICDB_NAMESPACE_PASSWORD:-difypassword}
ANALYTICDB_HOST: ${ANALYTICDB_HOST:-gp-test.aliyuncs.com}
ANALYTICDB_PORT: ${ANALYTICDB_PORT:-5432}
ANALYTICDB_MIN_CONNECTION: ${ANALYTICDB_MIN_CONNECTION:-1}
ANALYTICDB_MAX_CONNECTION: ${ANALYTICDB_MAX_CONNECTION:-5}
TIDB_VECTOR_HOST: ${TIDB_VECTOR_HOST:-tidb}
TIDB_VECTOR_PORT: ${TIDB_VECTOR_PORT:-4000}
TIDB_VECTOR_USER: ${TIDB_VECTOR_USER:-}
TIDB_VECTOR_PASSWORD: ${TIDB_VECTOR_PASSWORD:-}
TIDB_VECTOR_DATABASE: ${TIDB_VECTOR_DATABASE:-dify}
TIDB_ON_QDRANT_URL: ${TIDB_ON_QDRANT_URL:-http://127.0.0.1}
TIDB_ON_QDRANT_API_KEY: ${TIDB_ON_QDRANT_API_KEY:-dify}
TIDB_ON_QDRANT_CLIENT_TIMEOUT: ${TIDB_ON_QDRANT_CLIENT_TIMEOUT:-20}
TIDB_ON_QDRANT_GRPC_ENABLED: ${TIDB_ON_QDRANT_GRPC_ENABLED:-false}
TIDB_ON_QDRANT_GRPC_PORT: ${TIDB_ON_QDRANT_GRPC_PORT:-6334}
TIDB_PUBLIC_KEY: ${TIDB_PUBLIC_KEY:-dify}
TIDB_PRIVATE_KEY: ${TIDB_PRIVATE_KEY:-dify}
TIDB_API_URL: ${TIDB_API_URL:-http://127.0.0.1}
TIDB_IAM_API_URL: ${TIDB_IAM_API_URL:-http://127.0.0.1}
TIDB_REGION: ${TIDB_REGION:-regions/aws-us-east-1}
TIDB_PROJECT_ID: ${TIDB_PROJECT_ID:-dify}
TIDB_SPEND_LIMIT: ${TIDB_SPEND_LIMIT:-100}
CHROMA_HOST: ${CHROMA_HOST:-127.0.0.1}
CHROMA_PORT: ${CHROMA_PORT:-8000}
CHROMA_TENANT: ${CHROMA_TENANT:-default_tenant}
CHROMA_DATABASE: ${CHROMA_DATABASE:-default_database}
CHROMA_AUTH_PROVIDER: ${CHROMA_AUTH_PROVIDER:-chromadb.auth.token_authn.TokenAuthClientProvider}
CHROMA_AUTH_CREDENTIALS: ${CHROMA_AUTH_CREDENTIALS:-}
ORACLE_HOST: ${ORACLE_HOST:-oracle}
ORACLE_PORT: ${ORACLE_PORT:-1521}
ORACLE_USER: ${ORACLE_USER:-dify}
ORACLE_PASSWORD: ${ORACLE_PASSWORD:-dify}
ORACLE_DATABASE: ${ORACLE_DATABASE:-FREEPDB1}
RELYT_HOST: ${RELYT_HOST:-db}
RELYT_PORT: ${RELYT_PORT:-5432}
RELYT_USER: ${RELYT_USER:-postgres}
RELYT_PASSWORD: ${RELYT_PASSWORD:-difyai123456}
RELYT_DATABASE: ${RELYT_DATABASE:-postgres}
OPENSEARCH_HOST: ${OPENSEARCH_HOST:-opensearch}
OPENSEARCH_PORT: ${OPENSEARCH_PORT:-9200}
OPENSEARCH_USER: ${OPENSEARCH_USER:-admin}
OPENSEARCH_PASSWORD: ${OPENSEARCH_PASSWORD:-admin}
OPENSEARCH_SECURE: ${OPENSEARCH_SECURE:-true}
TENCENT_VECTOR_DB_URL: ${TENCENT_VECTOR_DB_URL:-http://127.0.0.1}
TENCENT_VECTOR_DB_API_KEY: ${TENCENT_VECTOR_DB_API_KEY:-dify}
TENCENT_VECTOR_DB_TIMEOUT: ${TENCENT_VECTOR_DB_TIMEOUT:-30}
TENCENT_VECTOR_DB_USERNAME: ${TENCENT_VECTOR_DB_USERNAME:-dify}
TENCENT_VECTOR_DB_DATABASE: ${TENCENT_VECTOR_DB_DATABASE:-dify}
TENCENT_VECTOR_DB_SHARD: ${TENCENT_VECTOR_DB_SHARD:-1}
TENCENT_VECTOR_DB_REPLICAS: ${TENCENT_VECTOR_DB_REPLICAS:-2}
ELASTICSEARCH_HOST: ${ELASTICSEARCH_HOST:-0.0.0.0}
ELASTICSEARCH_PORT: ${ELASTICSEARCH_PORT:-9200}
ELASTICSEARCH_USERNAME: ${ELASTICSEARCH_USERNAME:-elastic}
ELASTICSEARCH_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic}
KIBANA_PORT: ${KIBANA_PORT:-5601}
BAIDU_VECTOR_DB_ENDPOINT: ${BAIDU_VECTOR_DB_ENDPOINT:-http://127.0.0.1:5287}
BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS: ${BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS:-30000}
BAIDU_VECTOR_DB_ACCOUNT: ${BAIDU_VECTOR_DB_ACCOUNT:-root}
BAIDU_VECTOR_DB_API_KEY: ${BAIDU_VECTOR_DB_API_KEY:-dify}
BAIDU_VECTOR_DB_DATABASE: ${BAIDU_VECTOR_DB_DATABASE:-dify}
BAIDU_VECTOR_DB_SHARD: ${BAIDU_VECTOR_DB_SHARD:-1}
BAIDU_VECTOR_DB_REPLICAS: ${BAIDU_VECTOR_DB_REPLICAS:-3}
VIKINGDB_ACCESS_KEY: ${VIKINGDB_ACCESS_KEY:-your-ak}
VIKINGDB_SECRET_KEY: ${VIKINGDB_SECRET_KEY:-your-sk}
VIKINGDB_REGION: ${VIKINGDB_REGION:-cn-shanghai}
VIKINGDB_HOST: ${VIKINGDB_HOST:-api-vikingdb.xxx.volces.com}
VIKINGDB_SCHEMA: ${VIKINGDB_SCHEMA:-http}
VIKINGDB_CONNECTION_TIMEOUT: ${VIKINGDB_CONNECTION_TIMEOUT:-30}
VIKINGDB_SOCKET_TIMEOUT: ${VIKINGDB_SOCKET_TIMEOUT:-30}
LINDORM_URL: ${LINDORM_URL:-http://lindorm:30070}
LINDORM_USERNAME: ${LINDORM_USERNAME:-lindorm}
LINDORM_PASSWORD: ${LINDORM_PASSWORD:-lindorm}
OCEANBASE_VECTOR_HOST: ${OCEANBASE_VECTOR_HOST:-oceanbase}
OCEANBASE_VECTOR_PORT: ${OCEANBASE_VECTOR_PORT:-2881}
OCEANBASE_VECTOR_USER: ${OCEANBASE_VECTOR_USER:-root@test}
OCEANBASE_VECTOR_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}
OCEANBASE_VECTOR_DATABASE: ${OCEANBASE_VECTOR_DATABASE:-test}
OCEANBASE_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai}
OCEANBASE_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G}
UPSTASH_VECTOR_URL: ${UPSTASH_VECTOR_URL:-https://xxx-vector.upstash.io}
UPSTASH_VECTOR_TOKEN: ${UPSTASH_VECTOR_TOKEN:-dify}
UPLOAD_FILE_SIZE_LIMIT: ${UPLOAD_FILE_SIZE_LIMIT:-15}
UPLOAD_FILE_BATCH_LIMIT: ${UPLOAD_FILE_BATCH_LIMIT:-5}
ETL_TYPE: ${ETL_TYPE:-dify}
UNSTRUCTURED_API_URL: ${UNSTRUCTURED_API_URL:-}
UNSTRUCTURED_API_KEY: ${UNSTRUCTURED_API_KEY:-}
SCARF_NO_ANALYTICS: ${SCARF_NO_ANALYTICS:-true}
PROMPT_GENERATION_MAX_TOKENS: ${PROMPT_GENERATION_MAX_TOKENS:-512}
CODE_GENERATION_MAX_TOKENS: ${CODE_GENERATION_MAX_TOKENS:-1024}
MULTIMODAL_SEND_FORMAT: ${MULTIMODAL_SEND_FORMAT:-base64}
UPLOAD_IMAGE_FILE_SIZE_LIMIT: ${UPLOAD_IMAGE_FILE_SIZE_LIMIT:-10}
UPLOAD_VIDEO_FILE_SIZE_LIMIT: ${UPLOAD_VIDEO_FILE_SIZE_LIMIT:-100}
UPLOAD_AUDIO_FILE_SIZE_LIMIT: ${UPLOAD_AUDIO_FILE_SIZE_LIMIT:-50}
SENTRY_DSN: ${SENTRY_DSN:-}
API_SENTRY_DSN: ${API_SENTRY_DSN:-}
API_SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}
API_SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}
WEB_SENTRY_DSN: ${WEB_SENTRY_DSN:-}
NOTION_INTEGRATION_TYPE: ${NOTION_INTEGRATION_TYPE:-public}
NOTION_CLIENT_SECRET: ${NOTION_CLIENT_SECRET:-}
NOTION_CLIENT_ID: ${NOTION_CLIENT_ID:-}
NOTION_INTERNAL_SECRET: ${NOTION_INTERNAL_SECRET:-}
MAIL_TYPE: ${MAIL_TYPE:-resend}
MAIL_DEFAULT_SEND_FROM: ${MAIL_DEFAULT_SEND_FROM:-}
RESEND_API_URL: ${RESEND_API_URL:-https://api.resend.com}
RESEND_API_KEY: ${RESEND_API_KEY:-your-resend-api-key}
SMTP_SERVER: ${SMTP_SERVER:-}
SMTP_PORT: ${SMTP_PORT:-465}
SMTP_USERNAME: ${SMTP_USERNAME:-}
SMTP_PASSWORD: ${SMTP_PASSWORD:-}
SMTP_USE_TLS: ${SMTP_USE_TLS:-true}
SMTP_OPPORTUNISTIC_TLS: ${SMTP_OPPORTUNISTIC_TLS:-false}
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-4000}
INVITE_EXPIRY_HOURS: ${INVITE_EXPIRY_HOURS:-72}
RESET_PASSWORD_TOKEN_EXPIRY_MINUTES: ${RESET_PASSWORD_TOKEN_EXPIRY_MINUTES:-5}
CODE_EXECUTION_ENDPOINT: ${CODE_EXECUTION_ENDPOINT:-http://sandbox:8194}
CODE_EXECUTION_API_KEY: ${CODE_EXECUTION_API_KEY:-dify-sandbox}
CODE_MAX_NUMBER: ${CODE_MAX_NUMBER:-9223372036854775807}
CODE_MIN_NUMBER: ${CODE_MIN_NUMBER:--9223372036854775808}
CODE_MAX_DEPTH: ${CODE_MAX_DEPTH:-5}
CODE_MAX_PRECISION: ${CODE_MAX_PRECISION:-20}
CODE_MAX_STRING_LENGTH: ${CODE_MAX_STRING_LENGTH:-80000}
CODE_MAX_STRING_ARRAY_LENGTH: ${CODE_MAX_STRING_ARRAY_LENGTH:-30}
CODE_MAX_OBJECT_ARRAY_LENGTH: ${CODE_MAX_OBJECT_ARRAY_LENGTH:-30}
CODE_MAX_NUMBER_ARRAY_LENGTH: ${CODE_MAX_NUMBER_ARRAY_LENGTH:-1000}
CODE_EXECUTION_CONNECT_TIMEOUT: ${CODE_EXECUTION_CONNECT_TIMEOUT:-10}
CODE_EXECUTION_READ_TIMEOUT: ${CODE_EXECUTION_READ_TIMEOUT:-60}
CODE_EXECUTION_WRITE_TIMEOUT: ${CODE_EXECUTION_WRITE_TIMEOUT:-10}
TEMPLATE_TRANSFORM_MAX_LENGTH: ${TEMPLATE_TRANSFORM_MAX_LENGTH:-80000}
WORKFLOW_MAX_EXECUTION_STEPS: ${WORKFLOW_MAX_EXECUTION_STEPS:-500}
WORKFLOW_MAX_EXECUTION_TIME: ${WORKFLOW_MAX_EXECUTION_TIME:-1200}
WORKFLOW_CALL_MAX_DEPTH: ${WORKFLOW_CALL_MAX_DEPTH:-5}
MAX_VARIABLE_SIZE: ${MAX_VARIABLE_SIZE:-204800}
WORKFLOW_PARALLEL_DEPTH_LIMIT: ${WORKFLOW_PARALLEL_DEPTH_LIMIT:-3}
WORKFLOW_FILE_UPLOAD_LIMIT: ${WORKFLOW_FILE_UPLOAD_LIMIT:-10}
HTTP_REQUEST_NODE_MAX_BINARY_SIZE: ${HTTP_REQUEST_NODE_MAX_BINARY_SIZE:-10485760}
HTTP_REQUEST_NODE_MAX_TEXT_SIZE: ${HTTP_REQUEST_NODE_MAX_TEXT_SIZE:-1048576}
SSRF_PROXY_HTTP_URL: ${SSRF_PROXY_HTTP_URL:-http://ssrf_proxy:3128}
SSRF_PROXY_HTTPS_URL: ${SSRF_PROXY_HTTPS_URL:-http://ssrf_proxy:3128}
TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}
PGUSER: ${PGUSER:-${DB_USERNAME}}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-${DB_PASSWORD}}
POSTGRES_DB: ${POSTGRES_DB:-${DB_DATABASE}}
PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}
SANDBOX_API_KEY: ${SANDBOX_API_KEY:-dify-sandbox}
SANDBOX_GIN_MODE: ${SANDBOX_GIN_MODE:-release}
SANDBOX_WORKER_TIMEOUT: ${SANDBOX_WORKER_TIMEOUT:-15}
SANDBOX_ENABLE_NETWORK: ${SANDBOX_ENABLE_NETWORK:-true}
SANDBOX_HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128}
SANDBOX_HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128}
SANDBOX_PORT: ${SANDBOX_PORT:-8194}
WEAVIATE_PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}
WEAVIATE_QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25}
WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-true}
WEAVIATE_DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none}
WEAVIATE_CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1}
WEAVIATE_AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true}
WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
WEAVIATE_AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}
WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true}
WEAVIATE_AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}
CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456}
CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider}
CHROMA_IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE}
ORACLE_PWD: ${ORACLE_PWD:-Dify123456}
ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8}
ETCD_AUTO_COMPACTION_MODE: ${ETCD_AUTO_COMPACTION_MODE:-revision}
ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000}
ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296}
ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000}
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin}
ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379}
MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000}
MILVUS_AUTHORIZATION_ENABLED: ${MILVUS_AUTHORIZATION_ENABLED:-true}
PGVECTOR_PGUSER: ${PGVECTOR_PGUSER:-postgres}
PGVECTOR_POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}
PGVECTOR_POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}
PGVECTOR_PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}
OPENSEARCH_DISCOVERY_TYPE: ${OPENSEARCH_DISCOVERY_TYPE:-single-node}
OPENSEARCH_BOOTSTRAP_MEMORY_LOCK: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true}
OPENSEARCH_JAVA_OPTS_MIN: ${OPENSEARCH_JAVA_OPTS_MIN:-512m}
OPENSEARCH_JAVA_OPTS_MAX: ${OPENSEARCH_JAVA_OPTS_MAX:-1024m}
OPENSEARCH_INITIAL_ADMIN_PASSWORD: ${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123}
OPENSEARCH_MEMLOCK_SOFT: ${OPENSEARCH_MEMLOCK_SOFT:--1}
OPENSEARCH_MEMLOCK_HARD: ${OPENSEARCH_MEMLOCK_HARD:--1}
OPENSEARCH_NOFILE_SOFT: ${OPENSEARCH_NOFILE_SOFT:-65536}
OPENSEARCH_NOFILE_HARD: ${OPENSEARCH_NOFILE_HARD:-65536}
NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_}
NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false}
NGINX_PORT: ${NGINX_PORT:-80}
NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443}
NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt}
NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key}
NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3}
NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto}
NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M}
NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65}
NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s}
NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s}
NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false}
CERTBOT_EMAIL: ${CERTBOT_EMAIL:-your_email@example.com}
CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-your_domain.com}
CERTBOT_OPTIONS: ${CERTBOT_OPTIONS:-}
SSRF_HTTP_PORT: ${SSRF_HTTP_PORT:-3128}
SSRF_COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid}
SSRF_REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194}
SSRF_SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox}
SSRF_DEFAULT_TIME_OUT: ${SSRF_DEFAULT_TIME_OUT:-5}
SSRF_DEFAULT_CONNECT_TIME_OUT: ${SSRF_DEFAULT_CONNECT_TIME_OUT:-5}
SSRF_DEFAULT_READ_TIME_OUT: ${SSRF_DEFAULT_READ_TIME_OUT:-5}
SSRF_DEFAULT_WRITE_TIME_OUT: ${SSRF_DEFAULT_WRITE_TIME_OUT:-5}
EXPOSE_NGINX_PORT: ${PANEL_APP_PORT_HTTP:-8080}
EXPOSE_NGINX_SSL_PORT: ${PANEL_APP_PORT_HTTPS:-8443}
POSITION_TOOL_PINS: ${POSITION_TOOL_PINS:-}
POSITION_TOOL_INCLUDES: ${POSITION_TOOL_INCLUDES:-}
POSITION_TOOL_EXCLUDES: ${POSITION_TOOL_EXCLUDES:-}
POSITION_PROVIDER_PINS: ${POSITION_PROVIDER_PINS:-}
POSITION_PROVIDER_INCLUDES: ${POSITION_PROVIDER_INCLUDES:-}
POSITION_PROVIDER_EXCLUDES: ${POSITION_PROVIDER_EXCLUDES:-}
CSP_WHITELIST: ${CSP_WHITELIST:-}
CREATE_TIDB_SERVICE_JOB_ENABLED: ${CREATE_TIDB_SERVICE_JOB_ENABLED:-false}
MAX_SUBMIT_COUNT: ${MAX_SUBMIT_COUNT:-100}
TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-10}
DB_PLUGIN_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}
EXPOSE_PLUGIN_DAEMON_PORT: ${EXPOSE_PLUGIN_DAEMON_PORT:-5002}
PLUGIN_DAEMON_PORT: ${PLUGIN_DAEMON_PORT:-5002}
PLUGIN_DAEMON_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}
PLUGIN_DAEMON_URL: ${PLUGIN_DAEMON_URL:-http://plugin_daemon:5002}
PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
PLUGIN_PPROF_ENABLED: ${PLUGIN_PPROF_ENABLED:-false}
PLUGIN_DEBUGGING_HOST: ${PLUGIN_DEBUGGING_HOST:-0.0.0.0}
PLUGIN_DEBUGGING_PORT: ${PLUGIN_DEBUGGING_PORT:-5003}
EXPOSE_PLUGIN_DEBUGGING_HOST: ${EXPOSE_PLUGIN_DEBUGGING_HOST:-localhost}
EXPOSE_PLUGIN_DEBUGGING_PORT: ${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}
PLUGIN_DIFY_INNER_API_KEY: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
PLUGIN_DIFY_INNER_API_URL: ${PLUGIN_DIFY_INNER_API_URL:-http://api:5001}
ENDPOINT_URL_TEMPLATE: ${ENDPOINT_URL_TEMPLATE:-http://localhost/e/{hook_id}}
MARKETPLACE_ENABLED: ${MARKETPLACE_ENABLED:-true}
MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai}
FORCE_VERIFYING_SIGNATURE: ${FORCE_VERIFYING_SIGNATURE:-true}
services:
api:
image: langgenius/dify-api:1.1.1
container_name: api-${CONTAINER_NAME}
restart: always
environment:
<<: *shared-api-worker-env
MODE: api
SENTRY_DSN: ${API_SENTRY_DSN:-}
SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}
SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}
PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
depends_on:
- db
- redis
volumes:
- ${DIFY_ROOT_PATH}/volumes/app/storage:/app/api/storage
networks:
- ssrf_proxy_network
- default
worker:
image: langgenius/dify-api:1.1.1
container_name: worker-${CONTAINER_NAME}
restart: always
environment:
<<: *shared-api-worker-env
MODE: worker
SENTRY_DSN: ${API_SENTRY_DSN:-}
SENTRY_TRACES_SAMPLE_RATE: ${API_SENTRY_TRACES_SAMPLE_RATE:-1.0}
SENTRY_PROFILES_SAMPLE_RATE: ${API_SENTRY_PROFILES_SAMPLE_RATE:-1.0}
PLUGIN_MAX_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
INNER_API_KEY_FOR_PLUGIN: ${PLUGIN_DIFY_INNER_API_KEY:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
depends_on:
- db
- redis
volumes:
- ${DIFY_ROOT_PATH}/volumes/app/storage:/app/api/storage
networks:
- ssrf_proxy_network
- default
web:
image: langgenius/dify-web:1.1.1
container_name: ${CONTAINER_NAME}
restart: always
environment:
CONSOLE_API_URL: ${CONSOLE_API_URL:-}
APP_API_URL: ${APP_API_URL:-}
SENTRY_DSN: ${WEB_SENTRY_DSN:-}
NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0}
TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}
CSP_WHITELIST: ${CSP_WHITELIST:-}
MARKETPLACE_API_URL: ${MARKETPLACE_API_URL:-https://marketplace.dify.ai}
MARKETPLACE_URL: ${MARKETPLACE_URL:-https://marketplace.dify.ai}
TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-}
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-}
db:
image: postgres:15-alpine
container_name: db-${CONTAINER_NAME}
restart: always
environment:
PGUSER: ${PGUSER:-postgres}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-difyai123456}
POSTGRES_DB: ${POSTGRES_DB:-dify}
PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}
command: >
postgres -c 'max_connections=${POSTGRES_MAX_CONNECTIONS:-100}'
-c 'shared_buffers=${POSTGRES_SHARED_BUFFERS:-128MB}'
-c 'work_mem=${POSTGRES_WORK_MEM:-4MB}'
-c 'maintenance_work_mem=${POSTGRES_MAINTENANCE_WORK_MEM:-64MB}'
-c 'effective_cache_size=${POSTGRES_EFFECTIVE_CACHE_SIZE:-4096MB}'
volumes:
- ${DIFY_ROOT_PATH}/volumes/db/data:/var/lib/postgresql/data
healthcheck:
test: [ 'CMD', 'pg_isready' ]
interval: 1s
timeout: 3s
retries: 30
ports:
- '${EXPOSE_DB_PORT:-5432}:5432'
redis:
image: redis:6-alpine
container_name: redis-${CONTAINER_NAME}
restart: always
environment:
REDISCLI_AUTH: ${REDIS_PASSWORD:-difyai123456}
volumes:
- ${DIFY_ROOT_PATH}/volumes/redis/data:/data
command: redis-server --requirepass ${REDIS_PASSWORD:-difyai123456}
healthcheck:
test: [ 'CMD', 'redis-cli', 'ping' ]
sandbox:
image: langgenius/dify-sandbox:0.2.10
container_name: sandbox-${CONTAINER_NAME}
restart: always
environment:
API_KEY: ${SANDBOX_API_KEY:-dify-sandbox}
GIN_MODE: ${SANDBOX_GIN_MODE:-release}
WORKER_TIMEOUT: ${SANDBOX_WORKER_TIMEOUT:-15}
ENABLE_NETWORK: ${SANDBOX_ENABLE_NETWORK:-true}
HTTP_PROXY: ${SANDBOX_HTTP_PROXY:-http://ssrf_proxy:3128}
HTTPS_PROXY: ${SANDBOX_HTTPS_PROXY:-http://ssrf_proxy:3128}
SANDBOX_PORT: ${SANDBOX_PORT:-8194}
volumes:
- ${DIFY_ROOT_PATH}/volumes/sandbox/dependencies:/dependencies
- ${DIFY_ROOT_PATH}/volumes/sandbox/conf:/conf
healthcheck:
test: [ 'CMD', 'curl', '-f', 'http://localhost:8194/health' ]
networks:
- ssrf_proxy_network
plugin_daemon:
image: langgenius/dify-plugin-daemon:0.0.3-local
container_name: plugin_daemon-${CONTAINER_NAME}
restart: always
environment:
<<: *shared-api-worker-env
DB_DATABASE: ${DB_PLUGIN_DATABASE:-dify_plugin}
SERVER_PORT: ${PLUGIN_DAEMON_PORT:-5002}
SERVER_KEY: ${PLUGIN_DAEMON_KEY:-lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi}
MAX_PLUGIN_PACKAGE_SIZE: ${PLUGIN_MAX_PACKAGE_SIZE:-52428800}
PPROF_ENABLED: ${PLUGIN_PPROF_ENABLED:-false}
DIFY_INNER_API_URL: ${PLUGIN_DIFY_INNER_API_URL:-http://api:5001}
DIFY_INNER_API_KEY: ${INNER_API_KEY_FOR_PLUGIN:-QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1}
PLUGIN_REMOTE_INSTALLING_HOST: ${PLUGIN_REMOTE_INSTALL_HOST:-0.0.0.0}
PLUGIN_REMOTE_INSTALLING_PORT: ${PLUGIN_REMOTE_INSTALL_PORT:-5003}
PLUGIN_WORKING_PATH: ${PLUGIN_WORKING_PATH:-/app/storage/cwd}
FORCE_VERIFYING_SIGNATURE: ${FORCE_VERIFYING_SIGNATURE:-true}
ports:
- "${EXPOSE_PLUGIN_DEBUGGING_PORT:-5003}:${PLUGIN_DEBUGGING_PORT:-5003}"
volumes:
- ${DIFY_ROOT_PATH}/volumes/plugin_daemon:/app/storage
ssrf_proxy:
image: ubuntu/squid:latest
container_name: ssrf_proxy-${CONTAINER_NAME}
restart: always
volumes:
- ${DIFY_ROOT_PATH}/ssrf_proxy/squid.conf.template:/etc/squid/squid.conf.template
- ${DIFY_ROOT_PATH}/ssrf_proxy/docker-entrypoint.sh:/docker-entrypoint-mount.sh
entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ]
environment:
HTTP_PORT: ${SSRF_HTTP_PORT:-3128}
COREDUMP_DIR: ${SSRF_COREDUMP_DIR:-/var/spool/squid}
REVERSE_PROXY_PORT: ${SSRF_REVERSE_PROXY_PORT:-8194}
SANDBOX_HOST: ${SSRF_SANDBOX_HOST:-sandbox}
SANDBOX_PORT: ${SANDBOX_PORT:-8194}
networks:
- ssrf_proxy_network
- default
certbot:
image: certbot/certbot
container_name: certbot-${CONTAINER_NAME}
profiles:
- certbot
volumes:
- ${DIFY_ROOT_PATH}/volumes/certbot/conf:/etc/letsencrypt
- ${DIFY_ROOT_PATH}/volumes/certbot/www:/var/www/html
- ${DIFY_ROOT_PATH}/volumes/certbot/logs:/var/log/letsencrypt
- ${DIFY_ROOT_PATH}/volumes/certbot/conf/live:/etc/letsencrypt/live
- ${DIFY_ROOT_PATH}/certbot/update-cert.template.txt:/update-cert.template.txt
- ${DIFY_ROOT_PATH}/certbot/docker-entrypoint.sh:/docker-entrypoint.sh
environment:
- CERTBOT_EMAIL=${CERTBOT_EMAIL}
- CERTBOT_DOMAIN=${CERTBOT_DOMAIN}
- CERTBOT_OPTIONS=${CERTBOT_OPTIONS:-}
entrypoint: [ '/docker-entrypoint.sh' ]
command: [ 'tail', '-f', '/dev/null' ]
nginx:
image: nginx:latest
container_name: nginx-${CONTAINER_NAME}
restart: always
volumes:
- ${DIFY_ROOT_PATH}/nginx/nginx.conf.template:/etc/nginx/nginx.conf.template
- ${DIFY_ROOT_PATH}/nginx/proxy.conf.template:/etc/nginx/proxy.conf.template
- ${DIFY_ROOT_PATH}/nginx/https.conf.template:/etc/nginx/https.conf.template
- ${DIFY_ROOT_PATH}/nginx/conf.d:/etc/nginx/conf.d
- ${DIFY_ROOT_PATH}/nginx/docker-entrypoint.sh:/docker-entrypoint-mount.sh
- ${DIFY_ROOT_PATH}/nginx/ssl:/etc/ssl # cert dir (legacy)
- ${DIFY_ROOT_PATH}/volumes/certbot/conf/live:/etc/letsencrypt/live # cert dir (with certbot container)
- ${DIFY_ROOT_PATH}/volumes/certbot/conf:/etc/letsencrypt
- ${DIFY_ROOT_PATH}/volumes/certbot/www:/var/www/html
entrypoint: [ 'sh', '-c', "cp /docker-entrypoint-mount.sh /docker-entrypoint.sh && sed -i 's/\r$$//' /docker-entrypoint.sh && chmod +x /docker-entrypoint.sh && /docker-entrypoint.sh" ]
environment:
NGINX_SERVER_NAME: ${NGINX_SERVER_NAME:-_}
NGINX_HTTPS_ENABLED: ${NGINX_HTTPS_ENABLED:-false}
NGINX_SSL_PORT: ${NGINX_SSL_PORT:-443}
NGINX_PORT: ${NGINX_PORT:-80}
NGINX_SSL_CERT_FILENAME: ${NGINX_SSL_CERT_FILENAME:-dify.crt}
NGINX_SSL_CERT_KEY_FILENAME: ${NGINX_SSL_CERT_KEY_FILENAME:-dify.key}
NGINX_SSL_PROTOCOLS: ${NGINX_SSL_PROTOCOLS:-TLSv1.1 TLSv1.2 TLSv1.3}
NGINX_WORKER_PROCESSES: ${NGINX_WORKER_PROCESSES:-auto}
NGINX_CLIENT_MAX_BODY_SIZE: ${NGINX_CLIENT_MAX_BODY_SIZE:-15M}
NGINX_KEEPALIVE_TIMEOUT: ${NGINX_KEEPALIVE_TIMEOUT:-65}
NGINX_PROXY_READ_TIMEOUT: ${NGINX_PROXY_READ_TIMEOUT:-3600s}
NGINX_PROXY_SEND_TIMEOUT: ${NGINX_PROXY_SEND_TIMEOUT:-3600s}
NGINX_ENABLE_CERTBOT_CHALLENGE: ${NGINX_ENABLE_CERTBOT_CHALLENGE:-false}
CERTBOT_DOMAIN: ${CERTBOT_DOMAIN:-}
depends_on:
- api
- web
ports:
- '${PANEL_APP_PORT_HTTP:-80}:${NGINX_PORT:-80}'
- '${PANEL_APP_PORT_HTTPS:-443}:${NGINX_SSL_PORT:-443}'
weaviate:
image: semitechnologies/weaviate:1.19.0
container_name: weaviate-${CONTAINER_NAME}
profiles:
- ''
- weaviate
restart: always
volumes:
- ${DIFY_ROOT_PATH}/volumes/weaviate:/var/lib/weaviate
environment:
PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}
QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25}
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-false}
DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none}
CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1}
AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true}
AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}
AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true}
AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}
qdrant:
image: langgenius/qdrant:v1.7.3
container_name: qdrant-${CONTAINER_NAME}
profiles:
- qdrant
restart: always
volumes:
- ${DIFY_ROOT_PATH}/volumes/qdrant:/qdrant/storage
environment:
QDRANT_API_KEY: ${QDRANT_API_KEY:-difyai123456}
couchbase-server:
build: ./conf/couchbase-server
profiles:
- couchbase
restart: always
environment:
- CLUSTER_NAME=dify_search
- COUCHBASE_ADMINISTRATOR_USERNAME=${COUCHBASE_USER:-Administrator}
- COUCHBASE_ADMINISTRATOR_PASSWORD=${COUCHBASE_PASSWORD:-password}
- COUCHBASE_BUCKET=${COUCHBASE_BUCKET_NAME:-Embeddings}
- COUCHBASE_BUCKET_RAMSIZE=512
- COUCHBASE_RAM_SIZE=2048
- COUCHBASE_EVENTING_RAM_SIZE=512
- COUCHBASE_INDEX_RAM_SIZE=512
- COUCHBASE_FTS_RAM_SIZE=1024
hostname: couchbase-server
container_name: couchbase-server
working_dir: /opt/couchbase
stdin_open: true
tty: true
entrypoint: [ "" ]
command: sh -c "/opt/couchbase/init/init-cbserver.sh"
volumes:
- ${DIFY_ROOT_PATH}/volumes/couchbase/data:/opt/couchbase/var/lib/couchbase/data
healthcheck:
test: [ "CMD-SHELL", "curl -s -f -u Administrator:password http://localhost:8091/pools/default/buckets | grep -q '\\[{' || exit 1" ]
interval: 10s
retries: 10
start_period: 30s
timeout: 10s
pgvector:
image: pgvector/pgvector:pg16
container_name: pgvector-${CONTAINER_NAME}
profiles:
- pgvector
restart: always
environment:
PGUSER: ${PGVECTOR_PGUSER:-postgres}
POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}
POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}
PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}
volumes:
- ${DIFY_ROOT_PATH}/volumes/pgvector/data:/var/lib/postgresql/data
healthcheck:
test: [ 'CMD', 'pg_isready' ]
interval: 1s
timeout: 3s
retries: 30
pgvecto-rs:
image: tensorchord/pgvecto-rs:pg16-v0.3.0
container_name: pgvecto-rs-${CONTAINER_NAME}
profiles:
- pgvecto-rs
restart: always
environment:
PGUSER: ${PGVECTOR_PGUSER:-postgres}
POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-difyai123456}
POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-dify}
PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}
volumes:
- ${DIFY_ROOT_PATH}/volumes/pgvecto_rs/data:/var/lib/postgresql/data
healthcheck:
test: [ 'CMD', 'pg_isready' ]
interval: 1s
timeout: 3s
retries: 30
chroma:
image: ghcr.io/chroma-core/chroma:0.5.20
container_name: chroma-${CONTAINER_NAME}
profiles:
- chroma
restart: always
volumes:
- ${DIFY_ROOT_PATH}/volumes/chroma:/chroma/chroma
environment:
CHROMA_SERVER_AUTHN_CREDENTIALS: ${CHROMA_SERVER_AUTHN_CREDENTIALS:-difyai123456}
CHROMA_SERVER_AUTHN_PROVIDER: ${CHROMA_SERVER_AUTHN_PROVIDER:-chromadb.auth.token_authn.TokenAuthenticationServerProvider}
IS_PERSISTENT: ${CHROMA_IS_PERSISTENT:-TRUE}
oceanbase:
image: quay.io/oceanbase/oceanbase-ce:4.3.3.0-100000142024101215
container_name: oceanbase-${CONTAINER_NAME}
profiles:
- oceanbase
restart: always
volumes:
- ${DIFY_ROOT_PATH}/volumes/oceanbase/data:/root/ob
- ${DIFY_ROOT_PATH}/volumes/oceanbase/conf:/root/.obd/cluster
- ${DIFY_ROOT_PATH}/volumes/oceanbase/init.d:/root/boot/init.d
environment:
OB_MEMORY_LIMIT: ${OCEANBASE_MEMORY_LIMIT:-6G}
OB_SYS_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}
OB_TENANT_PASSWORD: ${OCEANBASE_VECTOR_PASSWORD:-difyai123456}
OB_CLUSTER_NAME: ${OCEANBASE_CLUSTER_NAME:-difyai}
OB_SERVER_IP: '127.0.0.1'
oracle:
image: container-registry.oracle.com/database/free:latest
container_name: oracle-${CONTAINER_NAME}
profiles:
- oracle
restart: always
volumes:
- source: oradata
type: volume
target: /opt/oracle/oradata
- ${DIFY_ROOT_PATH}/startupscripts:/opt/oracle/scripts/startup
environment:
ORACLE_PWD: ${ORACLE_PWD:-Dify123456}
ORACLE_CHARACTERSET: ${ORACLE_CHARACTERSET:-AL32UTF8}
etcd:
image: quay.io/coreos/etcd:v3.5.5
container_name: milvus-etcd-${CONTAINER_NAME}
profiles:
- milvus
environment:
ETCD_AUTO_COMPACTION_MODE: ${ETCD_AUTO_COMPACTION_MODE:-revision}
ETCD_AUTO_COMPACTION_RETENTION: ${ETCD_AUTO_COMPACTION_RETENTION:-1000}
ETCD_QUOTA_BACKEND_BYTES: ${ETCD_QUOTA_BACKEND_BYTES:-4294967296}
ETCD_SNAPSHOT_COUNT: ${ETCD_SNAPSHOT_COUNT:-50000}
volumes:
- ${DIFY_ROOT_PATH}/volumes/milvus/etcd:/etcd
command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
healthcheck:
test: [ 'CMD', 'etcdctl', 'endpoint', 'health' ]
interval: 30s
timeout: 20s
retries: 3
networks:
- milvus
minio:
image: minio/minio:RELEASE.2023-03-20T20-16-18Z
container_name: milvus-minio-${CONTAINER_NAME}
profiles:
- milvus
environment:
MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY:-minioadmin}
MINIO_SECRET_KEY: ${MINIO_SECRET_KEY:-minioadmin}
volumes:
- ${DIFY_ROOT_PATH}/volumes/milvus/minio:/minio_data
command: minio server /minio_data --console-address ":9001"
healthcheck:
test: [ 'CMD', 'curl', '-f', 'http://localhost:9000/minio/health/live' ]
interval: 30s
timeout: 20s
retries: 3
networks:
- milvus
milvus-standalone:
image: milvusdb/milvus:v2.5.0-beta
container_name: milvus-standalone-${CONTAINER_NAME}
profiles:
- milvus
command: [ 'milvus', 'run', 'standalone' ]
environment:
ETCD_ENDPOINTS: ${ETCD_ENDPOINTS:-etcd:2379}
MINIO_ADDRESS: ${MINIO_ADDRESS:-minio:9000}
common.security.authorizationEnabled: ${MILVUS_AUTHORIZATION_ENABLED:-true}
volumes:
- ${DIFY_ROOT_PATH}/volumes/milvus/milvus:/var/lib/milvus
healthcheck:
test: [ 'CMD', 'curl', '-f', 'http://localhost:9091/healthz' ]
interval: 30s
start_period: 90s
timeout: 20s
retries: 3
depends_on:
- etcd
- minio
ports:
- 19530:19530
- 9091:9091
networks:
- milvus
opensearch:
image: opensearchproject/opensearch:latest
container_name: opensearch-${CONTAINER_NAME}
profiles:
- opensearch
environment:
discovery.type: ${OPENSEARCH_DISCOVERY_TYPE:-single-node}
bootstrap.memory_lock: ${OPENSEARCH_BOOTSTRAP_MEMORY_LOCK:-true}
OPENSEARCH_JAVA_OPTS: -Xms${OPENSEARCH_JAVA_OPTS_MIN:-512m} -Xmx${OPENSEARCH_JAVA_OPTS_MAX:-1024m}
OPENSEARCH_INITIAL_ADMIN_PASSWORD: ${OPENSEARCH_INITIAL_ADMIN_PASSWORD:-Qazwsxedc!@#123}
ulimits:
memlock:
soft: ${OPENSEARCH_MEMLOCK_SOFT:--1}
hard: ${OPENSEARCH_MEMLOCK_HARD:--1}
nofile:
soft: ${OPENSEARCH_NOFILE_SOFT:-65536}
hard: ${OPENSEARCH_NOFILE_HARD:-65536}
volumes:
- ${DIFY_ROOT_PATH}/volumes/opensearch/data:/usr/share/opensearch/data
networks:
- opensearch-net
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:latest
container_name: opensearch-dashboards-${CONTAINER_NAME}
profiles:
- opensearch
environment:
OPENSEARCH_HOSTS: '["https://opensearch:9200"]'
volumes:
- ${DIFY_ROOT_PATH}/volumes/opensearch/opensearch_dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml
networks:
- opensearch-net
depends_on:
- opensearch
myscale:
image: myscale/myscaledb:1.6.4
container_name: myscale-${CONTAINER_NAME}
profiles:
- myscale
restart: always
tty: true
volumes:
- ${DIFY_ROOT_PATH}/volumes/myscale/data:/var/lib/clickhouse
- ${DIFY_ROOT_PATH}/volumes/myscale/log:/var/log/clickhouse-server
- ${DIFY_ROOT_PATH}/volumes/myscale/config/users.d/custom_users_config.xml:/etc/clickhouse-server/users.d/custom_users_config.xml
ports:
- ${MYSCALE_PORT:-8123}:${MYSCALE_PORT:-8123}
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.14.3
container_name: elasticsearch-${CONTAINER_NAME}
profiles:
- elasticsearch
- elasticsearch-ja
restart: always
volumes:
- ${DIFY_ROOT_PATH}/elasticsearch/docker-entrypoint.sh:/docker-entrypoint-mount.sh
- dify_es01_data:/usr/share/elasticsearch/data
environment:
ELASTIC_PASSWORD: ${ELASTICSEARCH_PASSWORD:-elastic}
VECTOR_STORE: ${VECTOR_STORE:-}
cluster.name: dify-es-cluster
node.name: dify-es0
discovery.type: single-node
xpack.license.self_generated.type: basic
xpack.security.enabled: 'true'
xpack.security.enrollment.enabled: 'false'
xpack.security.http.ssl.enabled: 'false'
ports:
- ${ELASTICSEARCH_PORT:-9200}:9200
deploy:
resources:
limits:
memory: 2g
entrypoint: [ 'sh', '-c', "sh /docker-entrypoint-mount.sh" ]
healthcheck:
test: [ 'CMD', 'curl', '-s', 'http://localhost:9200/_cluster/health?pretty' ]
interval: 30s
timeout: 10s
retries: 50
kibana:
image: docker.elastic.co/kibana/kibana:8.14.3
container_name: kibana-${CONTAINER_NAME}
profiles:
- elasticsearch
depends_on:
- elasticsearch
restart: always
environment:
XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: d1a66dfd-c4d3-4a0a-8290-2abcb83ab3aa
NO_PROXY: localhost,127.0.0.1,elasticsearch,kibana
XPACK_SECURITY_ENABLED: 'true'
XPACK_SECURITY_ENROLLMENT_ENABLED: 'false'
XPACK_SECURITY_HTTP_SSL_ENABLED: 'false'
XPACK_FLEET_ISAIRGAPPED: 'true'
I18N_LOCALE: zh-CN
SERVER_PORT: '5601'
ELASTICSEARCH_HOSTS: http://elasticsearch:9200
ports:
- ${KIBANA_PORT:-5601}:5601
healthcheck:
test: [ 'CMD-SHELL', 'curl -s http://localhost:5601 >/dev/null || exit 1' ]
interval: 30s
timeout: 10s
retries: 3
unstructured:
image: downloads.unstructured.io/unstructured-io/unstructured-api:latest
container_name: unstructured-${CONTAINER_NAME}
profiles:
- unstructured
restart: always
volumes:
- ${DIFY_ROOT_PATH}/volumes/unstructured:/app/data
networks:
ssrf_proxy_network:
driver: bridge
internal: true
milvus:
driver: bridge
opensearch-net:
driver: bridge
internal: true
volumes:
oradata:
dify_es01_data:

ENV_FILE=.env

# ------------------------------
# Environment Variables for API service & worker
# ------------------------------
# ------------------------------
# Common Variables
# ------------------------------
# The backend URL of the console API,
# used to concatenate the authorization callback.
# If empty, it is the same domain.
# Example: https://api.console.dify.ai
CONSOLE_API_URL=
# The front-end URL of the console web,
# used to concatenate some front-end addresses and for CORS configuration use.
# If empty, it is the same domain.
# Example: https://console.dify.ai
CONSOLE_WEB_URL=
# Service API URL,
# used to display the Service API base URL to the front-end.
# If empty, it is the same domain.
# Example: https://api.dify.ai
SERVICE_API_URL=
# WebApp API backend URL,
# used to declare the back-end URL for the front-end API.
# If empty, it is the same domain.
# Example: https://api.app.dify.ai
APP_API_URL=
# WebApp URL,
# used to display the WebApp API base URL to the front-end.
# If empty, it is the same domain.
# Example: https://app.dify.ai
APP_WEB_URL=
# File preview or download URL prefix,
# used to display file preview or download URLs to the front-end or as multi-modal inputs;
# URLs are signed and have an expiration time.
FILES_URL=
# ------------------------------
# Server Configuration
# ------------------------------
# The log level for the application.
# Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
LOG_LEVEL=INFO
# Log file path
LOG_FILE=/app/logs/server.log
# Log file max size, the unit is MB
LOG_FILE_MAX_SIZE=20
# Log file max backup count
LOG_FILE_BACKUP_COUNT=5
# Log dateformat
LOG_DATEFORMAT=%Y-%m-%d %H:%M:%S
# Log Timezone
LOG_TZ=UTC
# Debug mode, default is false.
# It is recommended to turn this on for local development
# to prevent some problems caused by monkey patching.
DEBUG=false
# Flask debug mode, it can output trace information at the interface when turned on,
# which is convenient for debugging.
FLASK_DEBUG=false
# A secret key used for securely signing the session cookie
# and encrypting sensitive information in the database.
# You can generate a strong key using `openssl rand -base64 42`.
SECRET_KEY=sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
# Password for admin user initialization.
# If left unset, the admin user will not be prompted for a password
# when creating the initial admin account.
# The password must not exceed 30 characters.
INIT_PASSWORD=
# Deployment environment.
# Supported values are `PRODUCTION`, `TESTING`. Default is `PRODUCTION`.
# Testing environment. There will be a distinct color label on the front-end page,
# indicating that this environment is a testing environment.
DEPLOY_ENV=PRODUCTION
# Whether to enable the version check policy.
# If left empty, https://updates.dify.ai will be called for version checks.
CHECK_UPDATE_URL=https://updates.dify.ai
# Used to change the OpenAI base URL; the default is https://api.openai.com/v1.
# Replace it with a regional mirror when api.openai.com is unreachable,
# or with the endpoint of a local model that exposes an OpenAI-compatible API.
OPENAI_API_BASE=https://api.openai.com/v1
# When enabled, migrations will be executed prior to application startup
# and the application will start after the migrations have completed.
MIGRATION_ENABLED=true
# FILES_ACCESS_TIMEOUT specifies the time interval in seconds during which a file can be accessed.
# The default value is 300 seconds.
FILES_ACCESS_TIMEOUT=300
# Access token expiration time in minutes
ACCESS_TOKEN_EXPIRE_MINUTES=60
# Refresh token expiration time in days
REFRESH_TOKEN_EXPIRE_DAYS=30
# The maximum number of active requests for the application, where 0 means unlimited, should be a non-negative integer.
APP_MAX_ACTIVE_REQUESTS=0
APP_MAX_EXECUTION_TIME=1200
# ------------------------------
# Container Startup Related Configuration
# Only effective when starting with docker image or docker-compose.
# ------------------------------
# API service binding address, default: 0.0.0.0, i.e., all addresses can be accessed.
DIFY_BIND_ADDRESS=0.0.0.0
# API service binding port number, default 5001.
DIFY_PORT=5001
# The number of API server (Gunicorn) workers.
# Formula: number of CPU cores x 2 + 1 for sync workers; 1 for gevent.
# Reference: https://docs.gunicorn.org/en/stable/design.html#how-many-workers
SERVER_WORKER_AMOUNT=1
# Defaults to gevent. On Windows, switch to sync or solo.
SERVER_WORKER_CLASS=gevent
# The number of worker connections; the default is 10.
SERVER_WORKER_CONNECTIONS=10
# Similar to SERVER_WORKER_CLASS.
# On Windows, switch to sync or solo.
CELERY_WORKER_CLASS=
# Request handling timeout in seconds. The default is 200;
# 360 is recommended to support longer SSE connections.
GUNICORN_TIMEOUT=360
# The number of Celery workers. The default is 1, and can be set as needed.
CELERY_WORKER_AMOUNT=
# Flag indicating whether to enable autoscaling of Celery workers.
#
# Autoscaling is useful when tasks are CPU intensive and can be dynamically
# allocated and deallocated based on the workload.
#
# When autoscaling is enabled, the maximum and minimum number of workers can
# be specified. The autoscaling algorithm will dynamically adjust the number
# of workers within the specified range.
#
# Default is false (i.e., autoscaling is disabled).
#
# Example:
# CELERY_AUTO_SCALE=true
CELERY_AUTO_SCALE=false
# The maximum number of Celery workers that can be autoscaled.
# This is optional and only used when autoscaling is enabled.
# Default is not set.
CELERY_MAX_WORKERS=
# The minimum number of Celery workers that can be autoscaled.
# This is optional and only used when autoscaling is enabled.
# Default is not set.
CELERY_MIN_WORKERS=
# API Tool configuration
API_TOOL_DEFAULT_CONNECT_TIMEOUT=10
API_TOOL_DEFAULT_READ_TIMEOUT=60
# ------------------------------
# Database Configuration
# The database uses PostgreSQL. Please use the public schema.
# It is consistent with the configuration in the 'db' service below.
# ------------------------------
DB_USERNAME=postgres
DB_PASSWORD=difyai123456
DB_HOST=db
DB_PORT=5432
DB_DATABASE=dify
# The size of the database connection pool.
# The default is 30 connections, which can be appropriately increased.
SQLALCHEMY_POOL_SIZE=30
# Database connection pool recycling time, the default is 3600 seconds.
SQLALCHEMY_POOL_RECYCLE=3600
# Whether to print SQL, default is false.
SQLALCHEMY_ECHO=false
# Maximum number of connections to the database
# Default is 100
#
# Reference: https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS
POSTGRES_MAX_CONNECTIONS=100
# Sets the amount of shared memory used for postgres's shared buffers.
# Default is 128MB
# Recommended value: 25% of available memory
# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-SHARED-BUFFERS
POSTGRES_SHARED_BUFFERS=128MB
# Sets the amount of memory used by each database worker for working space.
# Default is 4MB
#
# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM
POSTGRES_WORK_MEM=4MB
# Sets the amount of memory reserved for maintenance activities.
# Default is 64MB
#
# Reference: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM
POSTGRES_MAINTENANCE_WORK_MEM=64MB
# Sets the planner's assumption about the effective cache size.
# Default is 4096MB
#
# Reference: https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE
POSTGRES_EFFECTIVE_CACHE_SIZE=4096MB
# ------------------------------
# Redis Configuration
# This Redis configuration is used for caching and for pub/sub during conversation.
# ------------------------------
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_USERNAME=
REDIS_PASSWORD=difyai123456
REDIS_USE_SSL=false
REDIS_DB=0
# Whether to use Redis Sentinel mode.
# If set to true, the application will automatically discover and connect to the master node through Sentinel.
REDIS_USE_SENTINEL=false
# List of Redis Sentinel nodes. If Sentinel mode is enabled, provide at least one Sentinel IP and port.
# Format: `<sentinel1_ip>:<sentinel1_port>,<sentinel2_ip>:<sentinel2_port>,<sentinel3_ip>:<sentinel3_port>`
REDIS_SENTINELS=
REDIS_SENTINEL_SERVICE_NAME=
REDIS_SENTINEL_USERNAME=
REDIS_SENTINEL_PASSWORD=
REDIS_SENTINEL_SOCKET_TIMEOUT=0.1
# List of Redis Cluster nodes. If Cluster mode is enabled, provide at least one Cluster IP and port.
# Format: `<Cluster1_ip>:<Cluster1_port>,<Cluster2_ip>:<Cluster2_port>,<Cluster3_ip>:<Cluster3_port>`
REDIS_USE_CLUSTERS=false
REDIS_CLUSTERS=
REDIS_CLUSTERS_PASSWORD=
# ------------------------------
# Celery Configuration
# ------------------------------
# Use Redis as the broker, and Redis db 1 for the Celery broker.
# Format as follows: `redis://<redis_username>:<redis_password>@<redis_host>:<redis_port>/<redis_database>`
# Example: redis://:difyai123456@redis:6379/1
# If using Redis Sentinel, use the following format: `sentinel://<sentinel_username>:<sentinel_password>@<sentinel_host>:<sentinel_port>/<redis_database>`
# Example: sentinel://localhost:26379/1;sentinel://localhost:26380/1;sentinel://localhost:26381/1
CELERY_BROKER_URL=redis://:difyai123456@redis:6379/1
BROKER_USE_SSL=false
# If you are using Redis Sentinel for high availability, configure the following settings.
CELERY_USE_SENTINEL=false
CELERY_SENTINEL_MASTER_NAME=
CELERY_SENTINEL_SOCKET_TIMEOUT=0.1
# ------------------------------
# CORS Configuration
# Used to set the front-end cross-domain access policy.
# ------------------------------
# Specifies the allowed origins for cross-origin requests to the Web API,
# e.g. https://dify.app or * for all origins.
WEB_API_CORS_ALLOW_ORIGINS=*
# Specifies the allowed origins for cross-origin requests to the console API,
# e.g. https://cloud.dify.ai or * for all origins.
CONSOLE_CORS_ALLOW_ORIGINS=*
# ------------------------------
# File Storage Configuration
# ------------------------------
# The type of storage to use for storing user files.
STORAGE_TYPE=opendal
# Apache OpenDAL Configuration
# OpenDAL configuration entries use the following format: OPENDAL_<SCHEME_NAME>_<CONFIG_NAME>.
# You can find all the service configurations (CONFIG_NAME) in the repository at: https://github.com/apache/opendal/tree/main/core/src/services.
# Dify will scan configurations starting with OPENDAL_<SCHEME_NAME> and automatically apply them.
# The scheme name for the OpenDAL storage.
OPENDAL_SCHEME=fs
# Configurations for OpenDAL Local File System.
OPENDAL_FS_ROOT=storage
# S3 Configuration
#
S3_ENDPOINT=
S3_REGION=us-east-1
S3_BUCKET_NAME=difyai
S3_ACCESS_KEY=
S3_SECRET_KEY=
# Whether to use AWS managed IAM roles for authenticating with the S3 service.
# If set to false, the access key and secret key must be provided.
S3_USE_AWS_MANAGED_IAM=false
# Azure Blob Configuration
#
AZURE_BLOB_ACCOUNT_NAME=difyai
AZURE_BLOB_ACCOUNT_KEY=difyai
AZURE_BLOB_CONTAINER_NAME=difyai-container
AZURE_BLOB_ACCOUNT_URL=https://<your_account_name>.blob.core.windows.net
# Google Storage Configuration
#
GOOGLE_STORAGE_BUCKET_NAME=your-bucket-name
GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64=
# Alibaba Cloud OSS Configuration
#
ALIYUN_OSS_BUCKET_NAME=your-bucket-name
ALIYUN_OSS_ACCESS_KEY=your-access-key
ALIYUN_OSS_SECRET_KEY=your-secret-key
ALIYUN_OSS_ENDPOINT=https://oss-ap-southeast-1-internal.aliyuncs.com
ALIYUN_OSS_REGION=ap-southeast-1
ALIYUN_OSS_AUTH_VERSION=v4
# Don't start with '/'. OSS doesn't support leading slash in object names.
ALIYUN_OSS_PATH=your-path
# Tencent COS Configuration
#
TENCENT_COS_BUCKET_NAME=your-bucket-name
TENCENT_COS_SECRET_KEY=your-secret-key
TENCENT_COS_SECRET_ID=your-secret-id
TENCENT_COS_REGION=your-region
TENCENT_COS_SCHEME=your-scheme
# Oracle Storage Configuration
#
OCI_ENDPOINT=https://objectstorage.us-ashburn-1.oraclecloud.com
OCI_BUCKET_NAME=your-bucket-name
OCI_ACCESS_KEY=your-access-key
OCI_SECRET_KEY=your-secret-key
OCI_REGION=us-ashburn-1
# Huawei OBS Configuration
#
HUAWEI_OBS_BUCKET_NAME=your-bucket-name
HUAWEI_OBS_SECRET_KEY=your-secret-key
HUAWEI_OBS_ACCESS_KEY=your-access-key
HUAWEI_OBS_SERVER=your-server-url
# Volcengine TOS Configuration
#
VOLCENGINE_TOS_BUCKET_NAME=your-bucket-name
VOLCENGINE_TOS_SECRET_KEY=your-secret-key
VOLCENGINE_TOS_ACCESS_KEY=your-access-key
VOLCENGINE_TOS_ENDPOINT=your-server-url
VOLCENGINE_TOS_REGION=your-region
# Baidu OBS Storage Configuration
#
BAIDU_OBS_BUCKET_NAME=your-bucket-name
BAIDU_OBS_SECRET_KEY=your-secret-key
BAIDU_OBS_ACCESS_KEY=your-access-key
BAIDU_OBS_ENDPOINT=your-server-url
# Supabase Storage Configuration
#
SUPABASE_BUCKET_NAME=your-bucket-name
SUPABASE_API_KEY=your-access-key
SUPABASE_URL=your-server-url
# ------------------------------
# Vector Database Configuration
# ------------------------------
# The type of vector store to use.
# Supported values are `weaviate`, `qdrant`, `milvus`, `myscale`, `relyt`, `pgvector`, `pgvecto-rs`, `chroma`, `opensearch`, `tidb_vector`, `tidb_on_qdrant`, `oracle`, `tencent`, `elasticsearch`, `elasticsearch-ja`, `analyticdb`, `couchbase`, `vikingdb`, `oceanbase`, `baidu`, `lindorm`, `upstash`.
VECTOR_STORE=weaviate
# The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
WEAVIATE_ENDPOINT=http://weaviate:8080
WEAVIATE_API_KEY=WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
# The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
QDRANT_URL=http://qdrant:6333
QDRANT_API_KEY=difyai123456
QDRANT_CLIENT_TIMEOUT=20
QDRANT_GRPC_ENABLED=false
QDRANT_GRPC_PORT=6334
# Milvus configuration, only available when VECTOR_STORE is `milvus`.
# The Milvus URI.
MILVUS_URI=http://127.0.0.1:19530
MILVUS_TOKEN=
MILVUS_USER=root
MILVUS_PASSWORD=Milvus
MILVUS_ENABLE_HYBRID_SEARCH=False
# MyScale configuration, only available when VECTOR_STORE is `myscale`
# For multi-language support, please set MYSCALE_FTS_PARAMS, referring to:
# https://myscale.com/docs/en/text-search/#understanding-fts-index-parameters
MYSCALE_HOST=myscale
MYSCALE_PORT=8123
MYSCALE_USER=default
MYSCALE_PASSWORD=
MYSCALE_DATABASE=dify
MYSCALE_FTS_PARAMS=
# Couchbase configurations, only available when VECTOR_STORE is `couchbase`
# The connection string must include the hostname defined in the docker-compose file (`couchbase-server` in this case)
COUCHBASE_CONNECTION_STRING=couchbase://couchbase-server
COUCHBASE_USER=Administrator
COUCHBASE_PASSWORD=password
COUCHBASE_BUCKET_NAME=Embeddings
COUCHBASE_SCOPE_NAME=_default
# pgvector configurations, only available when VECTOR_STORE is `pgvector`
PGVECTOR_HOST=pgvector
PGVECTOR_PORT=5432
PGVECTOR_USER=postgres
PGVECTOR_PASSWORD=difyai123456
PGVECTOR_DATABASE=dify
PGVECTOR_MIN_CONNECTION=1
PGVECTOR_MAX_CONNECTION=5
# pgvecto-rs configurations, only available when VECTOR_STORE is `pgvecto-rs`
PGVECTO_RS_HOST=pgvecto-rs
PGVECTO_RS_PORT=5432
PGVECTO_RS_USER=postgres
PGVECTO_RS_PASSWORD=difyai123456
PGVECTO_RS_DATABASE=dify
# analyticdb configurations, only available when VECTOR_STORE is `analyticdb`
ANALYTICDB_KEY_ID=your-ak
ANALYTICDB_KEY_SECRET=your-sk
ANALYTICDB_REGION_ID=cn-hangzhou
ANALYTICDB_INSTANCE_ID=gp-ab123456
ANALYTICDB_ACCOUNT=testaccount
ANALYTICDB_PASSWORD=testpassword
ANALYTICDB_NAMESPACE=dify
ANALYTICDB_NAMESPACE_PASSWORD=difypassword
ANALYTICDB_HOST=gp-test.aliyuncs.com
ANALYTICDB_PORT=5432
ANALYTICDB_MIN_CONNECTION=1
ANALYTICDB_MAX_CONNECTION=5
# TiDB vector configurations, only available when VECTOR_STORE is `tidb_vector`
TIDB_VECTOR_HOST=tidb
TIDB_VECTOR_PORT=4000
TIDB_VECTOR_USER=
TIDB_VECTOR_PASSWORD=
TIDB_VECTOR_DATABASE=dify
# TiDB on Qdrant configuration, only available when VECTOR_STORE is `tidb_on_qdrant`
TIDB_ON_QDRANT_URL=http://127.0.0.1
TIDB_ON_QDRANT_API_KEY=dify
TIDB_ON_QDRANT_CLIENT_TIMEOUT=20
TIDB_ON_QDRANT_GRPC_ENABLED=false
TIDB_ON_QDRANT_GRPC_PORT=6334
TIDB_PUBLIC_KEY=dify
TIDB_PRIVATE_KEY=dify
TIDB_API_URL=http://127.0.0.1
TIDB_IAM_API_URL=http://127.0.0.1
TIDB_REGION=regions/aws-us-east-1
TIDB_PROJECT_ID=dify
TIDB_SPEND_LIMIT=100
# Chroma configuration, only available when VECTOR_STORE is `chroma`
CHROMA_HOST=127.0.0.1
CHROMA_PORT=8000
CHROMA_TENANT=default_tenant
CHROMA_DATABASE=default_database
CHROMA_AUTH_PROVIDER=chromadb.auth.token_authn.TokenAuthClientProvider
CHROMA_AUTH_CREDENTIALS=
# Oracle configuration, only available when VECTOR_STORE is `oracle`
ORACLE_HOST=oracle
ORACLE_PORT=1521
ORACLE_USER=dify
ORACLE_PASSWORD=dify
ORACLE_DATABASE=FREEPDB1
# relyt configurations, only available when VECTOR_STORE is `relyt`
RELYT_HOST=db
RELYT_PORT=5432
RELYT_USER=postgres
RELYT_PASSWORD=difyai123456
RELYT_DATABASE=postgres
# OpenSearch configuration, only available when VECTOR_STORE is `opensearch`
OPENSEARCH_HOST=opensearch
OPENSEARCH_PORT=9200
OPENSEARCH_USER=admin
OPENSEARCH_PASSWORD=admin
OPENSEARCH_SECURE=true
# tencent vector configurations, only available when VECTOR_STORE is `tencent`
TENCENT_VECTOR_DB_URL=http://127.0.0.1
TENCENT_VECTOR_DB_API_KEY=dify
TENCENT_VECTOR_DB_TIMEOUT=30
TENCENT_VECTOR_DB_USERNAME=dify
TENCENT_VECTOR_DB_DATABASE=dify
TENCENT_VECTOR_DB_SHARD=1
TENCENT_VECTOR_DB_REPLICAS=2
# ElasticSearch configuration, only available when VECTOR_STORE is `elasticsearch`
ELASTICSEARCH_HOST=0.0.0.0
ELASTICSEARCH_PORT=9200
ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=elastic
KIBANA_PORT=5601
# baidu vector configurations, only available when VECTOR_STORE is `baidu`
BAIDU_VECTOR_DB_ENDPOINT=http://127.0.0.1:5287
BAIDU_VECTOR_DB_CONNECTION_TIMEOUT_MS=30000
BAIDU_VECTOR_DB_ACCOUNT=root
BAIDU_VECTOR_DB_API_KEY=dify
BAIDU_VECTOR_DB_DATABASE=dify
BAIDU_VECTOR_DB_SHARD=1
BAIDU_VECTOR_DB_REPLICAS=3
# VikingDB configurations, only available when VECTOR_STORE is `vikingdb`
VIKINGDB_ACCESS_KEY=your-ak
VIKINGDB_SECRET_KEY=your-sk
VIKINGDB_REGION=cn-shanghai
VIKINGDB_HOST=api-vikingdb.xxx.volces.com
VIKINGDB_SCHEMA=http
VIKINGDB_CONNECTION_TIMEOUT=30
VIKINGDB_SOCKET_TIMEOUT=30
# Lindorm configuration, only available when VECTOR_STORE is `lindorm`
LINDORM_URL=http://lindorm:30070
LINDORM_USERNAME=lindorm
LINDORM_PASSWORD=lindorm
# OceanBase Vector configuration, only available when VECTOR_STORE is `oceanbase`
OCEANBASE_VECTOR_HOST=oceanbase
OCEANBASE_VECTOR_PORT=2881
OCEANBASE_VECTOR_USER=root@test
OCEANBASE_VECTOR_PASSWORD=difyai123456
OCEANBASE_VECTOR_DATABASE=test
OCEANBASE_CLUSTER_NAME=difyai
OCEANBASE_MEMORY_LIMIT=6G
# Upstash Vector configuration, only available when VECTOR_STORE is `upstash`
UPSTASH_VECTOR_URL=https://xxx-vector.upstash.io
UPSTASH_VECTOR_TOKEN=dify
# ------------------------------
# Knowledge Configuration
# ------------------------------
# Upload file size limit, default 15M.
UPLOAD_FILE_SIZE_LIMIT=15
# The maximum number of files that can be uploaded at a time, default 5.
UPLOAD_FILE_BATCH_LIMIT=5
# ETL type, supported values: `dify`, `Unstructured`
# `dify`: Dify's proprietary file extraction scheme
# `Unstructured`: Unstructured.io file extraction scheme
ETL_TYPE=dify
# Unstructured API path and API key; must be configured when ETL_TYPE is `Unstructured`,
# or when using Unstructured as the document extractor node for pptx files.
# For example: http://unstructured:8000/general/v0/general
UNSTRUCTURED_API_URL=
UNSTRUCTURED_API_KEY=
SCARF_NO_ANALYTICS=true
# ------------------------------
# Model Configuration
# ------------------------------
# The maximum number of tokens allowed for prompt generation.
# This setting controls the upper limit of tokens that can be used by the LLM
# when generating a prompt in the prompt generation tool.
# Default: 512 tokens.
PROMPT_GENERATION_MAX_TOKENS=512
# The maximum number of tokens allowed for code generation.
# This setting controls the upper limit of tokens that can be used by the LLM
# when generating code in the code generation tool.
# Default: 1024 tokens.
CODE_GENERATION_MAX_TOKENS=1024
# ------------------------------
# Multi-modal Configuration
# ------------------------------
# The format in which images/videos/audio/documents are sent to multi-modal models;
# the default is base64, optionally url.
# url mode has lower call latency than base64 mode.
# The more broadly compatible base64 mode is generally recommended.
# If set to url, FILES_URL must be configured as an externally accessible address so that
# the multi-modal model can access the image/video/audio/document.
MULTIMODAL_SEND_FORMAT=base64
# Upload image file size limit, default 10M.
UPLOAD_IMAGE_FILE_SIZE_LIMIT=10
# Upload video file size limit, default 100M.
UPLOAD_VIDEO_FILE_SIZE_LIMIT=100
# Upload audio file size limit, default 50M.
UPLOAD_AUDIO_FILE_SIZE_LIMIT=50
# ------------------------------
# Sentry Configuration
# Used for application monitoring and error log tracking.
# ------------------------------
SENTRY_DSN=
# Sentry DSN for the API service. Default is empty; when empty,
# no monitoring information is reported to Sentry and error reporting is disabled.
API_SENTRY_DSN=
# The sample rate for Sentry events reported by the API service; 0.01 means 1%.
API_SENTRY_TRACES_SAMPLE_RATE=1.0
# The sample rate for Sentry profiles reported by the API service; 0.01 means 1%.
API_SENTRY_PROFILES_SAMPLE_RATE=1.0
# Sentry DSN for the web service. Default is empty; when empty,
# no monitoring information is reported to Sentry and error reporting is disabled.
WEB_SENTRY_DSN=
# ------------------------------
# Notion Integration Configuration
# Variables can be obtained by applying for Notion integration: https://www.notion.so/my-integrations
# ------------------------------
# Configure as "public" or "internal".
# Since Notion's OAuth redirect URL only supports HTTPS,
# if deploying locally, please use Notion's internal integration.
NOTION_INTEGRATION_TYPE=public
# Notion OAuth client secret (used for public integration type)
NOTION_CLIENT_SECRET=
# Notion OAuth client id (used for public integration type)
NOTION_CLIENT_ID=
# Notion internal integration secret.
# If the value of NOTION_INTEGRATION_TYPE is "internal",
# you need to configure this variable.
NOTION_INTERNAL_SECRET=
# ------------------------------
# Mail related configuration
# ------------------------------
# Mail type, support: resend, smtp
MAIL_TYPE=resend
# Default sender email address, used when no sender is specified
MAIL_DEFAULT_SEND_FROM=
# API key for the Resend email provider, used when MAIL_TYPE is `resend`.
RESEND_API_URL=https://api.resend.com
RESEND_API_KEY=your-resend-api-key
# SMTP server configuration, used when MAIL_TYPE is `smtp`
SMTP_SERVER=
SMTP_PORT=465
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_USE_TLS=true
SMTP_OPPORTUNISTIC_TLS=false
# ------------------------------
# Others Configuration
# ------------------------------
# Maximum length of segmentation tokens for indexing
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH=4000
# Member invitation link valid time (hours),
# Default: 72.
INVITE_EXPIRY_HOURS=72
# Reset password token valid time (minutes).
RESET_PASSWORD_TOKEN_EXPIRY_MINUTES=5
# The sandbox service endpoint.
CODE_EXECUTION_ENDPOINT=http://sandbox:8194
CODE_EXECUTION_API_KEY=dify-sandbox
CODE_MAX_NUMBER=9223372036854775807
CODE_MIN_NUMBER=-9223372036854775808
CODE_MAX_DEPTH=5
CODE_MAX_PRECISION=20
CODE_MAX_STRING_LENGTH=80000
CODE_MAX_STRING_ARRAY_LENGTH=30
CODE_MAX_OBJECT_ARRAY_LENGTH=30
CODE_MAX_NUMBER_ARRAY_LENGTH=1000
CODE_EXECUTION_CONNECT_TIMEOUT=10
CODE_EXECUTION_READ_TIMEOUT=60
CODE_EXECUTION_WRITE_TIMEOUT=10
TEMPLATE_TRANSFORM_MAX_LENGTH=80000
# Workflow runtime configuration
WORKFLOW_MAX_EXECUTION_STEPS=500
WORKFLOW_MAX_EXECUTION_TIME=1200
WORKFLOW_CALL_MAX_DEPTH=5
MAX_VARIABLE_SIZE=204800
WORKFLOW_PARALLEL_DEPTH_LIMIT=3
WORKFLOW_FILE_UPLOAD_LIMIT=10
# HTTP request node in workflow configuration
HTTP_REQUEST_NODE_MAX_BINARY_SIZE=10485760
HTTP_REQUEST_NODE_MAX_TEXT_SIZE=1048576
# SSRF Proxy server HTTP URL
SSRF_PROXY_HTTP_URL=http://ssrf_proxy:3128
# SSRF Proxy server HTTPS URL
SSRF_PROXY_HTTPS_URL=http://ssrf_proxy:3128
# ------------------------------
# Environment Variables for web Service
# ------------------------------
# The timeout for text generation in milliseconds
TEXT_GENERATION_TIMEOUT_MS=60000
# ------------------------------
# Environment Variables for db Service
# ------------------------------
PGUSER=${DB_USERNAME}
# The password for the default postgres user.
POSTGRES_PASSWORD=${DB_PASSWORD}
# The name of the default postgres database.
POSTGRES_DB=${DB_DATABASE}
# postgres data directory
PGDATA=/var/lib/postgresql/data/pgdata
# ------------------------------
# Environment Variables for sandbox Service
# ------------------------------
# The API key for the sandbox service
SANDBOX_API_KEY=dify-sandbox
# The mode in which the Gin framework runs
SANDBOX_GIN_MODE=release
# The timeout for the worker in seconds
SANDBOX_WORKER_TIMEOUT=15
# Enable network for the sandbox service
SANDBOX_ENABLE_NETWORK=true
# HTTP proxy URL for SSRF protection
SANDBOX_HTTP_PROXY=http://ssrf_proxy:3128
# HTTPS proxy URL for SSRF protection
SANDBOX_HTTPS_PROXY=http://ssrf_proxy:3128
# The port on which the sandbox service runs
SANDBOX_PORT=8194
# ------------------------------
# Environment Variables for weaviate Service
# (only used when VECTOR_STORE is weaviate)
# ------------------------------
WEAVIATE_PERSISTENCE_DATA_PATH=/var/lib/weaviate
WEAVIATE_QUERY_DEFAULTS_LIMIT=25
WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
WEAVIATE_DEFAULT_VECTORIZER_MODULE=none
WEAVIATE_CLUSTER_HOSTNAME=node1
WEAVIATE_AUTHENTICATION_APIKEY_ENABLED=true
WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS=WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
WEAVIATE_AUTHENTICATION_APIKEY_USERS=hello@dify.ai
WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED=true
WEAVIATE_AUTHORIZATION_ADMINLIST_USERS=hello@dify.ai
# ------------------------------
# Environment Variables for Chroma
# (only used when VECTOR_STORE is chroma)
# ------------------------------
# Authentication credentials for Chroma server
CHROMA_SERVER_AUTHN_CREDENTIALS=difyai123456
# Authentication provider for Chroma server
CHROMA_SERVER_AUTHN_PROVIDER=chromadb.auth.token_authn.TokenAuthenticationServerProvider
# Persistence setting for Chroma server
CHROMA_IS_PERSISTENT=TRUE
# ------------------------------
# Environment Variables for Oracle Service
# (only used when VECTOR_STORE is Oracle)
# ------------------------------
ORACLE_PWD=Dify123456
ORACLE_CHARACTERSET=AL32UTF8
# ------------------------------
# Environment Variables for milvus Service
# (only used when VECTOR_STORE is milvus)
# ------------------------------
# ETCD configuration for auto compaction mode
ETCD_AUTO_COMPACTION_MODE=revision
# ETCD configuration for auto compaction retention in terms of number of revisions
ETCD_AUTO_COMPACTION_RETENTION=1000
# ETCD configuration for backend quota in bytes
ETCD_QUOTA_BACKEND_BYTES=4294967296
# ETCD configuration for the number of changes before triggering a snapshot
ETCD_SNAPSHOT_COUNT=50000
# MinIO access key for authentication
MINIO_ACCESS_KEY=minioadmin
# MinIO secret key for authentication
MINIO_SECRET_KEY=minioadmin
# ETCD service endpoints
ETCD_ENDPOINTS=etcd:2379
# MinIO service address
MINIO_ADDRESS=minio:9000
# Enable or disable security authorization
MILVUS_AUTHORIZATION_ENABLED=true
# ------------------------------
# Environment Variables for pgvector / pgvecto-rs Service
# (only used when VECTOR_STORE is pgvector / pgvecto-rs)
# ------------------------------
PGVECTOR_PGUSER=postgres
# The password for the default postgres user.
PGVECTOR_POSTGRES_PASSWORD=difyai123456
# The name of the default postgres database.
PGVECTOR_POSTGRES_DB=dify
# postgres data directory
PGVECTOR_PGDATA=/var/lib/postgresql/data/pgdata
# ------------------------------
# Environment Variables for opensearch
# (only used when VECTOR_STORE is opensearch)
# ------------------------------
OPENSEARCH_DISCOVERY_TYPE=single-node
OPENSEARCH_BOOTSTRAP_MEMORY_LOCK=true
OPENSEARCH_JAVA_OPTS_MIN=512m
OPENSEARCH_JAVA_OPTS_MAX=1024m
OPENSEARCH_INITIAL_ADMIN_PASSWORD=Qazwsxedc!@#123
OPENSEARCH_MEMLOCK_SOFT=-1
OPENSEARCH_MEMLOCK_HARD=-1
OPENSEARCH_NOFILE_SOFT=65536
OPENSEARCH_NOFILE_HARD=65536
# ------------------------------
# Environment Variables for Nginx reverse proxy
# ------------------------------
NGINX_SERVER_NAME=_
NGINX_HTTPS_ENABLED=false
# HTTP port
NGINX_PORT=80
# SSL settings are only applied when NGINX_HTTPS_ENABLED is true
NGINX_SSL_PORT=443
# If NGINX_HTTPS_ENABLED is true, you must add your own SSL certificates/keys to the `./nginx/ssl` directory
# and modify the env vars below accordingly.
NGINX_SSL_CERT_FILENAME=dify.crt
NGINX_SSL_CERT_KEY_FILENAME=dify.key
NGINX_SSL_PROTOCOLS=TLSv1.1 TLSv1.2 TLSv1.3
# Nginx performance tuning
NGINX_WORKER_PROCESSES=auto
NGINX_CLIENT_MAX_BODY_SIZE=15M
NGINX_KEEPALIVE_TIMEOUT=65
# Proxy settings
NGINX_PROXY_READ_TIMEOUT=3600s
NGINX_PROXY_SEND_TIMEOUT=3600s
# Set true to accept requests for /.well-known/acme-challenge/
NGINX_ENABLE_CERTBOT_CHALLENGE=false
# ------------------------------
# Certbot Configuration
# ------------------------------
# Email address (required to get certificates from Let's Encrypt)
CERTBOT_EMAIL=your_email@example.com
# Domain name
CERTBOT_DOMAIN=your_domain.com
# certbot command options
# e.g.: --force-renewal --dry-run --test-cert --debug
CERTBOT_OPTIONS=
# ------------------------------
# Environment Variables for SSRF Proxy
# ------------------------------
SSRF_HTTP_PORT=3128
SSRF_COREDUMP_DIR=/var/spool/squid
SSRF_REVERSE_PROXY_PORT=8194
SSRF_SANDBOX_HOST=sandbox
SSRF_DEFAULT_TIME_OUT=5
SSRF_DEFAULT_CONNECT_TIME_OUT=5
SSRF_DEFAULT_READ_TIME_OUT=5
SSRF_DEFAULT_WRITE_TIME_OUT=5
# ------------------------------
# Docker env var for specifying the vector db type at startup
# (based on the vector db type, the corresponding docker
# compose profile will be used).
# To use unstructured, append ',unstructured' to the value.
# ------------------------------
COMPOSE_PROFILES=${VECTOR_STORE:-weaviate}
# ------------------------------
# Docker Compose Service Expose Host Port Configurations
# ------------------------------
EXPOSE_NGINX_PORT=80
EXPOSE_NGINX_SSL_PORT=443
# ----------------------------------------------------------------------------
# ModelProvider & Tool Position Configuration
# Used to specify the model providers and tools that can be used in the app.
# ----------------------------------------------------------------------------
# Pin, include, and exclude tools
# Use comma-separated values with no spaces between items.
# Example: POSITION_TOOL_PINS=bing,google
POSITION_TOOL_PINS=
POSITION_TOOL_INCLUDES=
POSITION_TOOL_EXCLUDES=
# Pin, include, and exclude model providers
# Use comma-separated values with no spaces between items.
# Example: POSITION_PROVIDER_PINS=openai,openllm
POSITION_PROVIDER_PINS=
POSITION_PROVIDER_INCLUDES=
POSITION_PROVIDER_EXCLUDES=
# Content-Security-Policy whitelist, see https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
CSP_WHITELIST=
# Enable or disable the TiDB service creation job
CREATE_TIDB_SERVICE_JOB_ENABLED=false
# Maximum number of tasks submitted to the thread pool for parallel node execution
MAX_SUBMIT_COUNT=100
# The maximum top-k value for RAG retrieval.
TOP_K_MAX_VALUE=10
# ------------------------------
# Plugin Daemon Configuration
# ------------------------------
DB_PLUGIN_DATABASE=dify_plugin
EXPOSE_PLUGIN_DAEMON_PORT=5002
PLUGIN_DAEMON_PORT=5002
PLUGIN_DAEMON_KEY=lYkiYYT6owG+71oLerGzA7GXCgOT++6ovaezWAjpCjf+Sjc3ZtU+qUEi
PLUGIN_DAEMON_URL=http://plugin_daemon:5002
PLUGIN_MAX_PACKAGE_SIZE=52428800
PLUGIN_PPROF_ENABLED=false
PLUGIN_DEBUGGING_HOST=0.0.0.0
PLUGIN_DEBUGGING_PORT=5003
EXPOSE_PLUGIN_DEBUGGING_HOST=localhost
EXPOSE_PLUGIN_DEBUGGING_PORT=5003
PLUGIN_DIFY_INNER_API_KEY=QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1
PLUGIN_DIFY_INNER_API_URL=http://api:5001
ENDPOINT_URL_TEMPLATE=http://localhost/e/{hook_id}
MARKETPLACE_ENABLED=true
MARKETPLACE_API_URL=https://marketplace.dify.ai
FORCE_VERIFYING_SIGNATURE=true

View File

@ -0,0 +1 @@
TZ=Asia/Shanghai

View File

@ -0,0 +1,36 @@
#!/bin/bash
if [ -f .env ]; then
source .env
# setup-1 add default values
CURRENT_DIR=$(pwd)
sed -i '/^ENV_FILE=/d' .env
sed -i '/^GLOBAL_ENV_FILE=/d' .env
# remove any previous APP_ENV_FILE entry so re-runs do not append duplicates
sed -i '/^APP_ENV_FILE=/d' .env
echo "ENV_FILE=${CURRENT_DIR}/.env" >> .env
echo "GLOBAL_ENV_FILE=${CURRENT_DIR}/envs/global.env" >> .env
echo "APP_ENV_FILE=${CURRENT_DIR}/envs/dify.env" >> .env
# setup-2 create the data directory and copy default conf files
mkdir -p "$DIFY_ROOT_PATH"
cp -r conf/. "$DIFY_ROOT_PATH/"
# setup-3 sync environment variables
env_source="envs/dify.env"
if [ -f "$env_source" ]; then
while IFS='=' read -r key value; do
if [[ -z "$key" || "$key" =~ ^# ]]; then
continue
fi
if ! grep -q "^$key=" .env; then
echo "$key=$value" >> .env
fi
done < "$env_source"
fi
echo "Check Finish."
else
echo "Error: .env file not found."
fi

View File

@ -0,0 +1,10 @@
#!/bin/bash
if [ -f .env ]; then
source .env
echo "Check Finish."
else
echo "Error: .env file not found."
fi

View File

@ -0,0 +1,47 @@
#!/bin/bash
if [ -f .env ]; then
source .env
# setup-1 add default values
CURRENT_DIR=$(pwd)
sed -i '/^ENV_FILE=/d' .env
sed -i '/^GLOBAL_ENV_FILE=/d' .env
# remove any previous APP_ENV_FILE entry so re-runs do not append duplicates
sed -i '/^APP_ENV_FILE=/d' .env
echo "ENV_FILE=${CURRENT_DIR}/.env" >> .env
echo "GLOBAL_ENV_FILE=${CURRENT_DIR}/envs/global.env" >> .env
echo "APP_ENV_FILE=${CURRENT_DIR}/envs/dify.env" >> .env
# setup-2 create the data directory and copy default conf files
mkdir -p "$DIFY_ROOT_PATH"
if [ -d "conf" ]; then
find conf -type f | while read -r file; do
dest="$DIFY_ROOT_PATH/${file#conf/}"
if [ ! -e "$dest" ]; then
mkdir -p "$(dirname "$dest")"
cp "$file" "$dest"
fi
done
echo "Conf files copied to $DIFY_ROOT_PATH."
else
echo "Warning: conf directory not found."
fi
# setup-3 sync environment variables
env_source="envs/dify.env"
if [ -f "$env_source" ]; then
while IFS='=' read -r key value; do
if [[ -z "$key" || "$key" =~ ^# ]]; then
continue
fi
if ! grep -q "^$key=" .env; then
echo "$key=$value" >> .env
fi
done < "$env_source"
fi
echo "Check Finish."
else
echo "Error: .env file not found."
fi

82
apps/diun/README.md Normal file
View File

@ -0,0 +1,82 @@
# Diun
Diun 是一个用于监控 Docker 镜像更新并发送通知的命令行工具。
## 功能特性
- 自动监控 Docker 镜像更新
- 支持多种通知方式Discord、Telegram、Slack、邮件等
- 支持多种提供者Docker、Kubernetes、Swarm、Nomad等
- 基于 Cron 表达式的调度
- 支持多种架构amd64、arm64、arm/v6、arm/v7等
- 轻量级设计,资源占用少
## 使用方法
1. 部署后 Diun 会自动开始监控 Docker 镜像
2. 默认每6小时检查一次镜像更新
3. 当检测到镜像更新时,会发送通知
4. 通过 Docker 标签 `diun.enable=true` 控制哪些容器被监控
## 配置说明
- **时区**Asia/Shanghai上海时区
- **数据存储**`./data` 目录包含 bbolt 数据库
- **配置文件**`./diun.yml` 包含所有监控和通知配置
## 默认配置
应用使用以下默认配置:
```yaml
watch:
workers: 20
schedule: "0 */6 * * *"
firstCheckNotif: false
providers:
docker:
watchByDefault: true
```
## 自定义配置
如需自定义监控、通知等配置,请编辑 `diun.yml` 文件。参考官方文档进行配置:
- [配置概述](https://crazymax.dev/diun/configuration/overview/)
- [通知配置](https://crazymax.dev/diun/notifications/)
- [提供者配置](https://crazymax.dev/diun/providers/)
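例如,若希望在检测到镜像更新时通过 Telegram 接收通知,可在 `diun.yml` 中增加类似下面的 `notif` 配置(示意写法,其中 token 与 chatIDs 均为占位值,需替换为实际的 Bot Token 和会话 ID,具体字段请以官方通知配置文档为准):

```yaml
notif:
  telegram:
    # Telegram Bot 访问令牌(占位示例)
    token: "aabbccdd:11223344"
    # 接收通知的会话 ID 列表(占位示例)
    chatIDs:
      - 123456789
```

修改配置后重启容器即可生效。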
## 支持的提供者
- **Docker**:监控 Docker 容器和镜像
- **Kubernetes**:监控 Kubernetes 集群
- **Swarm**:监控 Docker Swarm 服务
- **Nomad**:监控 HashiCorp Nomad 任务
- **Dockerfile**:监控 Dockerfile 中的基础镜像
- **File**:从文件读取镜像列表
## 支持的通知方式
- Discord、Telegram、Slack
- 邮件、Matrix、MQTT
- Pushover、Rocket.Chat
- Webhook、Script、Signal
- Gotify、Ntfy、Teams
## 监控配置
要监控特定的 Docker 容器,在容器标签中添加:
```yaml
labels:
- "diun.enable=true"
- "diun.watch_repo=true" # 可选,监控仓库更新
```
## 相关链接
- [官方网站](https://crazymax.dev/diun/)
- [GitHub 项目](https://github.com/crazy-max/diun)
- [Docker Hub](https://hub.docker.com/r/crazymax/diun/)
- [基础示例](https://crazymax.dev/diun/usage/basic-example/)

82
apps/diun/README_en.md Normal file
View File

@ -0,0 +1,82 @@
# Diun
Diun is a CLI tool to monitor Docker image updates and send notifications.
## Features
- Automatically monitor Docker image updates
- Support multiple notification methods (Discord, Telegram, Slack, Email, etc.)
- Support multiple providers (Docker, Kubernetes, Swarm, Nomad, etc.)
- Cron-based scheduling
- Support multiple architectures (amd64, arm64, arm/v6, arm/v7, etc.)
- Lightweight design with low resource usage
## Usage
1. After deployment, Diun will automatically start monitoring Docker images
2. Default check interval is every 6 hours
3. When image updates are detected, notifications will be sent
4. Control which containers are monitored via Docker label `diun.enable=true`
## Configuration
- **Timezone**: Asia/Shanghai (Shanghai timezone)
- **Data Storage**: `./data` directory contains bbolt database
- **Config File**: `./diun.yml` contains all monitoring and notification configurations
## Default Configuration
The application uses the following default configuration:
```yaml
watch:
workers: 20
schedule: "0 */6 * * *"
firstCheckNotif: false
providers:
docker:
watchByDefault: true
```
## Custom Configuration
To customize monitoring, notifications, and other configurations, please edit the `diun.yml` file. Refer to the official documentation for configuration:
- [Configuration Overview](https://crazymax.dev/diun/configuration/overview/)
- [Notifications](https://crazymax.dev/diun/notifications/)
- [Providers](https://crazymax.dev/diun/providers/)
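For example, to receive a Telegram notification when an image update is detected, a `notif` section like the following can be added to `diun.yml` (a sketch based on the official notification docs; the token and chat ID below are placeholders to replace with your own bot token and chat ID):

```yaml
notif:
  telegram:
    # Telegram bot token (placeholder)
    token: "aabbccdd:11223344"
    # Chat IDs that should receive notifications (placeholder)
    chatIDs:
      - 123456789
```

Restart the container after editing the configuration for the changes to take effect.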
## Supported Providers
- **Docker**: Monitor Docker containers and images
- **Kubernetes**: Monitor Kubernetes clusters
- **Swarm**: Monitor Docker Swarm services
- **Nomad**: Monitor HashiCorp Nomad tasks
- **Dockerfile**: Monitor base images in Dockerfiles
- **File**: Read image list from files
## Supported Notifications
- Discord, Telegram, Slack
- Email, Matrix, MQTT
- Pushover, Rocket.Chat
- Webhook, Script, Signal
- Gotify, Ntfy, Teams
## Monitoring Configuration
To monitor specific Docker containers, add these labels:
```yaml
labels:
- "diun.enable=true"
- "diun.watch_repo=true" # Optional, monitor repository updates
```
## Links
- [Official Website](https://crazymax.dev/diun/)
- [GitHub Project](https://github.com/crazy-max/diun)
- [Docker Hub](https://hub.docker.com/r/crazymax/diun/)
- [Basic Example](https://crazymax.dev/diun/usage/basic-example/)

35
apps/diun/data.yml Normal file
View File

@ -0,0 +1,35 @@
name: Diun
tags:
- 实用工具
title: Docker 镜像更新通知工具
description: Diun 是一个用于监控 Docker 镜像更新并发送通知的命令行工具
additionalProperties:
key: diun
name: Diun
tags:
- Tool
shortDescZh: Docker 镜像更新通知命令行工具
shortDescEn: Docker image update notification CLI tool
description:
en: Diun is a CLI tool to monitor Docker image updates and send notifications
ja: DiunはDockerイメージの更新を監視し、通知を送信するCLIツールです
ms: Diun ialah alat CLI untuk memantau kemas kini imej Docker dan menghantar notifikasi
pt-br: Diun é uma ferramenta CLI para monitorar atualizações de imagens Docker e enviar notificações
ru: Diun — это CLI-инструмент для мониторинга обновлений Docker-образов и отправки уведомлений
ko: Diun은 Docker 이미지 업데이트를 모니터링하고 알림을 보내는 CLI 도구입니다
zh-Hant: Diun 是一個用於監控 Docker 映像更新並發送通知的命令列工具
zh: Diun 是一个用于监控 Docker 镜像更新并发送通知的命令行工具
type: tool
crossVersionUpdate: true
limit: 0
recommend: 0
website: https://crazymax.dev/diun/
github: https://github.com/crazy-max/diun
document: https://crazymax.dev/diun/usage/basic-example/
architectures:
- amd64
- arm/v6
- arm/v7
- arm64
- 386
- ppc64le

View File

@ -0,0 +1,2 @@
additionalProperties:
formFields: []

View File

@ -0,0 +1,8 @@
watch:
workers: 20
schedule: "0 */6 * * *"
firstCheckNotif: false
providers:
docker:
watchByDefault: true

View File

@ -0,0 +1,19 @@
services:
diun:
container_name: ${CONTAINER_NAME}
image: crazymax/diun:latest
command: serve
restart: always
networks:
- 1panel-network
volumes:
- ./data:/data
- ./diun.yml:/diun.yml:ro
- /var/run/docker.sock:/var/run/docker.sock
environment:
- "TZ=Asia/Shanghai"
- "LOG_LEVEL=info"
- "LOG_JSON=false"
networks:
1panel-network:
external: true

BIN
apps/diun/logo.png Normal file

Binary file not shown.


View File

@ -0,0 +1,19 @@
additionalProperties:
formFields:
- default: 30049
edit: true
envKey: PANEL_APP_PORT_HTTP
labelEn: Port
labelZh: 端口
required: true
rule: paramPort
type: number
- default: ""
edit: true
envKey: PANEL_DB_USER_PASSWORD
labelEn: Database Password
labelZh: 数据库用户密码
random: true
required: true
rule: paramComplexity
type: password

View File

@ -0,0 +1,38 @@
services:
  docmost:
    container_name: ${CONTAINER_NAME}
    restart: always
    ports:
      - "${PANEL_APP_PORT_HTTP}:3000"
    volumes:
      - docmost:/app/data/storage
    environment:
      APP_URL: "http://localhost:3000"
      APP_SECRET: "52f235dee223c92a83a934ada13b83075c9855fe966b3cbf9dd86810e2b742ee"
      DATABASE_URL: "postgresql://docmost:${PANEL_DB_USER_PASSWORD}@db:5432/docmost?schema=public"
      REDIS_URL: "redis://redis:6379"
    image: docmost/docmost:0.20.4
    labels:
      createdBy: "Apps"
    depends_on:
      - db
      - redis
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: docmost
      POSTGRES_USER: docmost
      POSTGRES_PASSWORD: ${PANEL_DB_USER_PASSWORD}
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
  redis:
    image: redis:7.2-alpine
    restart: always
    volumes:
      - redis_data:/data
volumes:
  docmost:
  db_data:
  redis_data:


@ -0,0 +1,8 @@
COMMAND="-c /root/config.toml"
CONFIG_FILE_PATH="./data/config.toml"
CONFIG_FILE_PATH_INTERNAL="/root/config.toml"
CONTAINER_NAME="easytier"
DATA_PATH="./data/data"
HOSTNAME="easytier"
PRIVILEGED_MODE="true"
TIME_ZONE="Asia/Shanghai"


@ -0,0 +1,51 @@
additionalProperties:
  formFields:
    - default: "./data/data"
      edit: true
      envKey: DATA_PATH
      labelEn: Data Path
      labelZh: 数据路径
      required: true
      type: text
    - default: "./data/config.toml"
      disabled: true
      envKey: CONFIG_FILE_PATH
      labelEn: Configuration file path
      labelZh: 配置文件路径
      required: true
      type: text
    - default: "/root/config.toml"
      disabled: true
      envKey: CONFIG_FILE_PATH_INTERNAL
      labelEn: Configuration file path (inside container)
      labelZh: 配置文件路径 (容器内部)
      required: true
      type: text
    - default: "Asia/Shanghai"
      edit: true
      envKey: TIME_ZONE
      labelEn: Time Zone
      labelZh: 时区
      required: true
      type: text
    - default: "true"
      disabled: true
      envKey: PRIVILEGED_MODE
      labelEn: Privileged mode switch
      labelZh: 特权模式开关
      required: true
      type: text
    - default: "easytier"
      edit: true
      envKey: HOSTNAME
      labelEn: Hostname
      labelZh: 主机名
      required: true
      type: text
    - default: "-c /root/config.toml"
      disabled: true
      envKey: COMMAND
      labelEn: Command
      labelZh: 命令
      required: true
      type: text


@ -0,0 +1,68 @@
# 实例名称,用于在同一台机器上标识此 VPN 节点
instance_name = ""
# 主机名,用于标识此设备的主机名
hostname = ""
# 实例 ID(一般为 UUID),在同一个 VPN 网络中唯一
instance_id = ""
# 此 VPN 节点的 IPv4 地址,如果为空,则此节点将仅转发数据包,不会创建 TUN 设备
ipv4 = ""
# 由 Easytier 自动确定并设置 IP 地址,默认从 10.0.0.1 开始。警告:在使用 DHCP 时,如果网络中出现 IP 冲突,IP 将自动更改
dhcp = false
# 监听器列表,用于接受连接
listeners = [
"tcp://0.0.0.0:11010",
"udp://0.0.0.0:11010",
"wg://0.0.0.0:11011",
"ws://0.0.0.0:11011/",
"wss://0.0.0.0:11012/",
]
# 退出节点列表
exit_nodes = [
]
# 用于管理的 RPC 门户地址
rpc_portal = "127.0.0.1:15888"
[network_identity]
# 网络名称,用于标识 VPN 网络
network_name = ""
# 网络密钥,用于验证此节点属于 VPN 网络
network_secret = ""
# 这里是对等连接节点配置,可以多段配置
[[peer]]
uri = ""
[[peer]]
uri = ""
# 这里是子网代理节点配置,可以有多段配置
[[proxy_network]]
cidr = "10.0.1.0/24"
[[proxy_network]]
cidr = "10.0.2.0/24"
[flags]
# 连接到对等节点使用的默认协议
default_protocol = "tcp"
# TUN 设备名称,如果为空,则使用默认名称
dev_name = ""
# 是否启用加密
enable_encryption = true
# 是否启用 IPv6 支持
enable_ipv6 = true
# TUN 设备的 MTU
mtu = 1380
# 延迟优先模式,将尝试使用最低延迟路径转发流量,默认使用最短路径
latency_first = false
# 将本节点配置为退出节点
enable_exit_node = false
# 禁用 TUN 设备
no_tun = false
# 为子网代理启用 smoltcp 堆栈
use_smoltcp = false
# 仅转发白名单网络的流量,支持通配符字符串,多个网络名称间可以使用英文空格间隔。如果该参数为空,则禁用转发。默认允许所有网络。例如:'*'(所有网络)、'def*'(以 def 为前缀的网络)、'net1 net2'(只允许 net1 和 net2)
foreign_network_whitelist = "*"


@ -0,0 +1,17 @@
instance_name = "default"
instance_id = "3d3db819-ad54-4d86-bf9a-faac864478ab"
dhcp = false
listeners = [
"tcp://0.0.0.0:11010",
"udp://0.0.0.0:11010",
"wg://0.0.0.0:11011",
"ws://0.0.0.0:11011/",
"wss://0.0.0.0:11012/",
]
exit_nodes = []
peer = []
rpc_portal = "0.0.0.0:15889"
[network_identity]
network_name = "default"
network_secret = ""


@ -0,0 +1,20 @@
services:
  easytier:
    image: "easytier/easytier:v2.3.1"
    container_name: ${CONTAINER_NAME}
    restart: always
    network_mode: host
    privileged: ${PRIVILEGED_MODE}
    hostname: ${HOSTNAME}
    environment:
      - TZ=${TIME_ZONE}
    volumes:
      - ${DATA_PATH}:/root
      - ${CONFIG_FILE_PATH}:${CONFIG_FILE_PATH_INTERNAL}
    command: ${COMMAND}
    labels:
      createdBy: "Apps"
networks:
  1panel-network:
    external: true


@ -0,0 +1,10 @@
additionalProperties:
  formFields:
    - default: "8004"
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Service Port
      labelZh: 服务端口
      required: true
      rule: paramPort
      type: number


@ -0,0 +1,16 @@
services:
  glm-free-api:
    image: vinlic/glm-free-api:latest
    container_name: ${CONTAINER_NAME}
    ports:
      - ${PANEL_APP_PORT_HTTP}:8000
    networks:
      - 1panel-network
    environment:
      - TZ=Asia/Shanghai
    labels:
      createdBy: Apps
    restart: always
networks:
  1panel-network:
    external: true


@ -0,0 +1,5 @@
CONTAINER_NAME="hexo"
PANEL_APP_PORT_HTTP="40064"
DATA_PATH="./data"
GIT_USERNAME="gituser"
GIT_MAIL="user@email.com"

apps/hexo/latest/data.yml Normal file

@ -0,0 +1,24 @@
additionalProperties:
  formFields:
    - default: 40064
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Port
      labelZh: 端口
      required: true
      rule: paramPort
      type: number
    - default: ""
      edit: true
      envKey: GIT_USERNAME
      labelEn: Git username
      labelZh: Git 用户名
      required: true
      type: text
    - default: ""
      edit: true
      envKey: GIT_MAIL
      labelEn: Git Email
      labelZh: Git 邮箱
      required: true
      type: text


@ -0,0 +1,25 @@
services:
  hexo:
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    ports:
      - "${PANEL_APP_PORT_HTTP}:4000"
    volumes:
      - hexo_data:/app
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Shanghai
      - GIT_USER=${GIT_USERNAME}
      - GIT_EMAIL=${GIT_MAIL}
    image: bloodstar/hexo:latest
    labels:
      createdBy: "Apps"
volumes:
  hexo_data:
    name: hexo_data
networks:
  1panel-network:
    external: true


@ -0,0 +1,10 @@
additionalProperties:
  formFields:
    - default: "30080"
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Service Port 3000
      labelZh: 服务端口 3000
      required: true
      rule: paramPort
      type: number


@ -0,0 +1,19 @@
services:
  hubcmd-ui:
    image: dqzboy/hubcmd-ui:latest
    container_name: ${CONTAINER_NAME}
    restart: always
    ports:
      - ${PANEL_APP_PORT_HTTP}:3000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - 1panel-network
    environment:
      - LOG_LEVEL=INFO
      - SIMPLE_LOGS=true
    labels:
      createdBy: Apps
networks:
  1panel-network:
    external: true


@ -0,0 +1,10 @@
additionalProperties:
  formFields:
    - default: "8005"
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Service Port
      labelZh: 服务端口
      required: true
      rule: paramPort
      type: number


@ -0,0 +1,16 @@
services:
  jimeng-free-api:
    image: vinlic/jimeng-free-api:latest
    container_name: ${CONTAINER_NAME}
    ports:
      - ${PANEL_APP_PORT_HTTP}:8000
    networks:
      - 1panel-network
    environment:
      - TZ=Asia/Shanghai
    labels:
      createdBy: Apps
    restart: always
networks:
  1panel-network:
    external: true

apps/kali-linux/README.md Normal file

@ -0,0 +1,53 @@
# Kali Linux
Kali Linux 是一个基于 Debian、专为渗透测试和安全审计设计的 Linux 发行版。
## 功能特性
- 预装数百种渗透测试工具
- 专为安全研究人员和渗透测试人员设计
- 支持多种架构amd64、arm64
- 基于 LinuxServer.io 的 Docker 镜像
- 提供完整的渗透测试环境
- 支持 Web 界面访问
## 使用方法
1. 部署后通过 Web 浏览器访问容器(默认端口 3000 和 3001)
2. 容器内提供完整的 Kali Linux 环境
3. 使用预装的渗透测试工具进行安全测试
4. 支持 Docker 套接字挂载,可在容器内运行 Docker 命令
## 预装工具
Kali Linux 包含数百种渗透测试工具,包括但不限于:
- 信息收集工具(nmap、whois、dnsenum 等)
- 漏洞扫描工具(OpenVAS、Nessus 等)
- 无线网络工具(aircrack-ng、reaver 等)
- Web 应用测试工具(Burp Suite、OWASP ZAP 等)
- 密码破解工具(John the Ripper、Hashcat 等)
- 社会工程学工具(Social Engineer Toolkit 等)
## 配置说明
- **端口 3000**:HTTP Web 访问端口
- **端口 3001**:HTTPS Web 访问端口
- **时区**:Asia/Shanghai(上海时区)
- **共享内存**:1GB,确保工具正常运行
- **安全选项**:seccomp:unconfined,允许特权操作
- **Docker 套接字**:挂载宿主机 Docker 套接字,支持容器内 Docker 操作
## 安全提醒
⚠️ **重要提醒**:Kali Linux 仅用于合法的安全测试和渗透测试。请确保:
- 只在您拥有授权的系统上进行测试
- 遵守当地法律法规
- 不要用于恶意攻击或非法活动
## 相关链接
- [官方网站](https://www.kali.org/)
- [Docker Hub](https://hub.docker.com/r/linuxserver/kali-linux)
- [GitHub 项目](https://github.com/linuxserver/docker-kali-linux)


@ -0,0 +1,53 @@
# Kali Linux
Kali Linux is a Debian-based Linux distribution designed for penetration testing and security auditing.
## Features
- Pre-installed with hundreds of penetration testing tools
- Designed for security researchers and penetration testers
- Supports multiple architectures (amd64, arm64)
- Based on LinuxServer.io Docker image
- Provides a complete penetration testing environment
- Supports web interface access
## Usage
1. Access the container via web browser after deployment (default ports 3000 and 3001)
2. Complete Kali Linux environment available inside the container
3. Use pre-installed penetration testing tools for security testing
4. Docker socket mounting supported, allowing Docker commands inside the container
## Pre-installed Tools
Kali Linux includes hundreds of penetration testing tools, including but not limited to:
- Information gathering tools (nmap, whois, dnsenum, etc.)
- Vulnerability scanning tools (OpenVAS, Nessus, etc.)
- Wireless network tools (aircrack-ng, reaver, etc.)
- Web application testing tools (Burp Suite, OWASP ZAP, etc.)
- Password cracking tools (John the Ripper, Hashcat, etc.)
- Social engineering tools (Social Engineer Toolkit, etc.)
## Configuration
- **Port 3000**: HTTP web access port
- **Port 3001**: HTTPS web access port
- **Timezone**: Asia/Shanghai (Shanghai timezone)
- **Shared Memory**: 1GB, ensuring tools run properly
- **Security Options**: seccomp:unconfined, allowing privileged operations
- **Docker Socket**: Mounts host Docker socket, supporting Docker operations inside container
## Security Notice
⚠️ **Important Notice**: Kali Linux should only be used for legitimate security testing and penetration testing. Please ensure:
- Only test systems you are authorized to test
- Comply with local laws and regulations
- Do not use for malicious attacks or illegal activities
## Links
- [Official Website](https://www.kali.org/)
- [Docker Hub](https://hub.docker.com/r/linuxserver/kali-linux)
- [GitHub Project](https://github.com/linuxserver/docker-kali-linux)

apps/kali-linux/data.yml Normal file

@ -0,0 +1,31 @@
name: Kali Linux
tags:
  - 开发工具
title: Kali Linux 渗透测试环境
description: Kali Linux 是一个专为渗透测试和安全审计设计的 Linux 发行版
additionalProperties:
  key: kali-linux
  name: Kali Linux
  tags:
    - DevTool
  shortDescZh: Kali Linux 渗透测试环境
  shortDescEn: Kali Linux Penetration Testing Environment
  description:
    en: Kali Linux is a Debian-based Linux distribution designed for penetration testing and security auditing
    ja: Kali Linuxは、ペネトレーションテストとセキュリティ監査のために設計されたDebianベースのLinuxディストリビューションです
    ms: Kali Linux adalah pengedaran Linux berasaskan Debian yang direka untuk ujian penembusan dan audit keselamatan
    pt-br: Kali Linux é uma distribuição Linux baseada em Debian projetada para testes de penetração e auditoria de segurança
    ru: Kali Linux - это дистрибутив Linux на основе Debian, разработанный для тестирования на проникновение и аудита безопасности
    ko: Kali Linux는 침투 테스트 및 보안 감사를 위해 설계된 Debian 기반 Linux 배포판입니다
    zh-Hant: Kali Linux 是一個專為滲透測試和安全審計設計的 Debian 基礎 Linux 發行版
    zh: Kali Linux 是一个专为渗透测试和安全审计设计的 Debian 基础 Linux 发行版
  type: website
  crossVersionUpdate: true
  limit: 0
  recommend: 0
  website: https://www.kali.org/
  github: https://github.com/linuxserver/docker-kali-linux
  document: https://hub.docker.com/r/linuxserver/kali-linux
  architectures:
    - amd64
    - arm64


@ -0,0 +1,36 @@
additionalProperties:
  formFields:
    - default: 3000
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Web Port 1
      labelZh: Web 端口 1
      required: true
      rule: paramPort
      type: number
      label:
        en: Web Port 1
        ja: Web ポート 1
        ms: Port Web 1
        pt-br: Porta Web 1
        ru: Веб Порт 1
        ko: 웹 포트 1
        zh-Hant: Web 埠 1
        zh: Web 端口 1
    - default: 3001
      edit: true
      envKey: PANEL_APP_PORT_HTTPS
      labelEn: HTTPS Port
      labelZh: HTTPS 端口
      required: true
      rule: paramPort
      type: number
      label:
        en: HTTPS Port
        ja: HTTPS ポート
        ms: Port HTTPS
        pt-br: Porta HTTPS
        ru: HTTPS Порт
        ko: HTTPS 포트
        zh-Hant: HTTPS 埠
        zh: HTTPS 端口


@ -0,0 +1,27 @@
services:
  kali-linux:
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    security_opt:
      - seccomp:unconfined
    ports:
      - "${PANEL_APP_PORT_HTTP}:3000"
      - "${PANEL_APP_PORT_HTTPS}:3001"
    volumes:
      - ./config:/config
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Asia/Shanghai
      - SUBFOLDER=/
      - TITLE=Kali Linux
    shm_size: "1gb"
    image: lscr.io/linuxserver/kali-linux:latest
    labels:
      createdBy: "Apps"
networks:
  1panel-network:
    external: true

apps/kali-linux/logo.png Normal file (binary, 9.1 KiB, not shown)


@ -0,0 +1,31 @@
additionalProperties:
  formFields:
    - default: 30012
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Port
      labelZh: 端口
      required: true
      rule: paramPort
      type: number
    - default: "ky0QvBQNLBQFxY6OurQV/6Wg2KjHqS0YSktBRPJw23QWqq5m"
      edit: true
      envKey: NEXTAUTH_SECRET
      labelEn: NextAuth Secret
      labelZh: NextAuth密钥
      required: true
      type: text
    - default: "a+BQCRpK74QuCSqZPhJ6hDeryTn/1rFKhwuc5DC19hOrI8VR"
      edit: true
      envKey: MEILI_MASTER_KEY
      labelEn: Meilisearch Master Key
      labelZh: Meilisearch主密钥
      required: true
      type: text
    - default: ""
      edit: true
      envKey: OPENAI_API_KEY
      labelEn: OpenAI API Key
      labelZh: OpenAI API 密钥
      required: false
      type: text


@ -0,0 +1,46 @@
services:
  karakeep:
    image: ghcr.io/karakeep-app/karakeep:0.24.1
    container_name: ${CONTAINER_NAME}
    restart: unless-stopped
    volumes:
      # By default, the data is stored in a docker volume called "data".
      # If you want to mount a custom directory, change the volume mapping to:
      # - /path/to/your/directory:/data
      - data:/data
    ports:
      - "${PANEL_APP_PORT_HTTP}:3000"
    environment:
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
      MEILI_MASTER_KEY: ${MEILI_MASTER_KEY}
      NEXTAUTH_URL: http://localhost:3000
      # You almost never want to change the value of the DATA_DIR variable.
      # If you want to mount a custom directory, change the volume mapping above instead.
      DATA_DIR: /data # DON'T CHANGE THIS
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    container_name: chrome-${CONTAINER_NAME}
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    container_name: meili-${CONTAINER_NAME}
    restart: unless-stopped
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - meilisearch:/meili_data
volumes:
  meilisearch:
  data:

apps/keepassxc/README.md Normal file

@ -0,0 +1,26 @@
# KeePassXC
KeePassXC 是一款开源、跨平台的密码管理器支持Web界面和多种平台。
## 功能特性
- 跨平台密码管理,支持多种操作系统
- 支持 Web 界面管理(默认端口 3000)
- 数据库文件本地存储,安全可靠
- 支持浏览器集成和自动填充
- 支持多用户和多数据库
- 轻量级、资源占用低
## 使用方法
1. 部署后访问 `http://服务器IP:3000` 进入Web管理界面
2. 创建或导入密码数据库(.kdbx 文件)
3. 配置浏览器插件实现自动填充
4. 数据库目录:`./db`
## 相关链接
- [官方网站](https://keepassxc.org/)
- [GitHub 项目](https://github.com/keepassxreboot/keepassxc)
- [官方文档](https://keepassxc.org/docs/)
- [Docker Hub](https://hub.docker.com/r/linuxserver/keepassxc)


@ -0,0 +1,26 @@
# KeePassXC
KeePassXC is an open-source, cross-platform password manager with web interface support.
## Features
- Cross-platform password management for multiple OS
- Web UI management (default port 3000)
- Local database file storage, secure and reliable
- Browser integration and autofill support
- Multi-user and multi-database support
- Lightweight and low resource usage
## Usage
1. After deployment, access `http://your-server-ip:3000` for the web UI
2. Create or import a password database (.kdbx file)
3. Configure browser extension for autofill
4. Database directory: `./db`
## Links
- [Official Website](https://keepassxc.org/)
- [GitHub Project](https://github.com/keepassxreboot/keepassxc)
- [Official Documentation](https://keepassxc.org/docs/)
- [Docker Hub](https://hub.docker.com/r/linuxserver/keepassxc)

apps/keepassxc/data.yml Normal file

@ -0,0 +1,32 @@
name: KeePassXC
tags:
  - 实用工具
title: 跨平台密码管理器
description: KeePassXC 是一款开源、跨平台的密码管理器,支持Web界面和多种平台
additionalProperties:
  key: keepassxc
  name: KeePassXC
  tags:
    - Tool
  shortDescZh: 跨平台密码管理器
  shortDescEn: Cross-platform password manager
  description:
    en: KeePassXC is an open-source, cross-platform password manager with web interface support
    ja: KeePassXCはWebインターフェースをサポートするオープンソースのクロスプラットフォームパスワードマネージャーです
    ms: KeePassXC ialah pengurus kata laluan sumber terbuka merentas platform dengan sokongan antara muka web
    pt-br: KeePassXC é um gerenciador de senhas de código aberto e multiplataforma com suporte a interface web
    ru: KeePassXC — это кроссплатформенный менеджер паролей с открытым исходным кодом и поддержкой веб-интерфейса
    ko: KeePassXC는 웹 인터페이스를 지원하는 오픈 소스 크로스 플랫폼 비밀번호 관리자입니다
    zh-Hant: KeePassXC 是一款開源、跨平台的密碼管理器,支援 Web 管理
    zh: KeePassXC 是一款开源、跨平台的密码管理器,支持Web界面和多种平台
  type: website
  crossVersionUpdate: true
  limit: 0
  recommend: 0
  website: https://keepassxc.org/
  github: https://github.com/keepassxreboot/keepassxc
  document: https://keepassxc.org/docs/
  architectures:
    - amd64
    - arm64
    - arm/v7


@ -0,0 +1,19 @@
additionalProperties:
  formFields:
    - default: 3000
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Web UI Port
      labelZh: Web界面端口
      required: true
      rule: paramPort
      type: number
      label:
        en: Web UI Port
        ja: Web UI ポート
        ms: Port UI Web
        pt-br: Porta da interface web
        ru: Веб-порт интерфейса
        ko: 웹 UI 포트
        zh-Hant: Web UI 埠
        zh: Web界面端口


@ -0,0 +1,21 @@
services:
  keepassxc:
    container_name: ${CONTAINER_NAME}
    image: lscr.io/linuxserver/keepassxc:latest
    restart: always
    networks:
      - 1panel-network
    ports:
      - "${PANEL_APP_PORT_HTTP}:3000"
    environment:
      - TZ=Asia/Shanghai
      - PUID=1000
      - PGID=1000
    volumes:
      - ./config:/config
      - ./db:/db
    labels:
      createdBy: "Apps"
networks:
  1panel-network:
    external: true

apps/keepassxc/logo.png Normal file (binary, 21 KiB, not shown)


@ -0,0 +1,10 @@
additionalProperties:
  formFields:
    - default: "8002"
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Service Port
      labelZh: 服务端口
      required: true
      rule: paramPort
      type: number


@ -0,0 +1,16 @@
services:
  kimi-free-api:
    image: vinlic/kimi-free-api:latest
    container_name: ${CONTAINER_NAME}
    ports:
      - ${PANEL_APP_PORT_HTTP}:8000
    networks:
      - 1panel-network
    environment:
      - TZ=Asia/Shanghai
    labels:
      createdBy: Apps
    restart: always
networks:
  1panel-network:
    external: true

apps/kspeeder/README.md Normal file

@ -0,0 +1,38 @@
# KSpeeder 镜像加速服务
KSpeeder 是一款专为 Docker 镜像加速设计的镜像服务,支持本地私有部署,适用于无法访问 DockerHub 的环境。通过 KSpeeder您可以在局域网内搭建属于自己的镜像加速服务大幅提升 Docker 镜像的拉取速度。
## 主要特性
- **镜像加速**:为 Docker 镜像拉取提供加速服务,解决国内访问 DockerHub 缓慢的问题。
- **私有部署**:支持在本地或局域网内私有部署,数据更安全。
- **多平台支持**:支持 iStoreOS、群晖、unRAID、macOS、Linux、Windows 等多种平台。
- **易于配置**:通过简单的 Docker Compose 即可快速部署。
- **数据与配置分离**:支持数据和配置持久化,方便迁移和备份。
- **多架构支持**:支持 amd64、arm64 等主流架构。
- **高可用性**:支持断点续传、自动重试等机制,提升镜像拉取成功率。
- **管理监控**:内置管理监控端口,便于运维和监控服务状态。
## 典型使用场景
- 局域网内为多台主机提供统一的 Docker 镜像加速服务
- 企业/团队内部搭建私有镜像加速仓库
- 无法访问 DockerHub 或访问速度较慢的环境
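上述部署方式可以用一个最小的 Docker Compose 配置示意。注意:其中的镜像名 `linkease/kspeeder:latest` 与宿主机挂载路径均为示例假设,容器内的 /kspeeder-data、/kspeeder-config 路径与端口取自上文,实际参数请以官方快速开始指南为准:

```yaml
services:
  kspeeder:
    # 镜像名为示例假设,请以官方文档为准
    image: linkease/kspeeder:latest
    container_name: kspeeder
    restart: always
    ports:
      - "5443:5443"   # 主服务端口(镜像加速)
      - "5003:5003"   # 管理监控端口
    volumes:
      # 数据与配置持久化,便于迁移和备份
      - ./kspeeder-data:/kspeeder-data
      - ./kspeeder-config:/kspeeder-config
```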
## 常见问题 FAQ
- **Q: 支持哪些平台?**
  A: 支持 iStoreOS、群晖、unRAID、macOS、Linux、Windows 等。
- **Q: 如何持久化数据?**
  A: 挂载 /kspeeder-data 和 /kspeeder-config 到本地目录。
- **Q: 默认端口是多少?**
  A: 主服务端口 5443,管理监控端口 5003。
- **Q: 镜像源如何配置?**
  A: 参考官方文档,配置 Docker 镜像加速地址。
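以 Linux 客户端为例,配置 Docker 镜像加速地址通常是在 `/etc/docker/daemon.json` 中将 `registry-mirrors` 指向 KSpeeder 的服务地址。下面的 `192.168.1.100` 为示例局域网 IP,协议与端口写法请以官方文档为准:

```json
{
  "registry-mirrors": ["http://192.168.1.100:5443"]
}
```

修改后执行 `systemctl restart docker` 使配置生效。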
## 官方文档与支持
- 官网:[https://kspeeder.istoreos.com/](https://kspeeder.istoreos.com/)
- 快速开始指南:[https://kspeeder.istoreos.com/guide/getting-started.html](https://kspeeder.istoreos.com/guide/getting-started.html)
- GitHub[https://github.com/linkease/kspeeder](https://github.com/linkease/kspeeder)

Some files were not shown because too many files have changed in this diff.