
Docker Deployment (Optimized Version)

五六零网校 · about 3 minutes

Solution Overview

The official deployment runs several separate containers that reach each other through host-mapped ports, even though in practice only the fastgpt container's port needs to be open for external access.

Optimization Logic

  • Connect all containers to the same internal network, so only the fastgpt access port needs to be exposed
  • Deployment only requires a few simple changes to the base configuration
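
The idea can be sketched with a minimal compose fragment (service names here are illustrative, not the real ones used below): both services join one user-defined network, where they reach each other by service name via Docker's built-in DNS, and only the front-facing service publishes a port to the host:

```yaml
# Minimal sketch of the shared-network idea (service names are illustrative).
services:
  app:
    image: example/app
    networks:
      - internal
    ports:
      - 5600:3000      # the only port published to the host
  db:
    image: example/db
    networks:
      - internal       # reachable from app by the name "db", never from the host

networks:
  internal:            # user-defined bridge network with built-in DNS
```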

Deploying the Optimized Version

Prerequisites

Before deploying the optimized version, prepare the following:

  1. Create the corresponding site in the BT Panel (宝塔面板)
  2. Confirm that Docker and docker-compose are installed
  3. In the site directory, create a folder named FastGPT
  4. In the FastGPT folder, create two files: docker-compose.yaml and config.json
  5. Copy the corresponding code into docker-compose.yaml and config.json
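
Steps 3–5 above can be done from a terminal as well (the starting directory is illustrative; run this inside your actual site directory):

```shell
# Run inside your site directory.
mkdir -p FastGPT
cd FastGPT

# Create the two files the deployment needs; paste the code below into them.
touch docker-compose.yaml config.json

ls -1   # should list config.json and docker-compose.yaml
```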

File Configuration

docker-compose.yaml configuration:

version: '3.3'
services:
  pg:
    # image: ankane/pgvector:v0.5.0 # Docker Hub
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.5.0 # Aliyun
    container_name: pg
    restart: always
    networks:
      - fastgpt
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=postgres
    volumes:
      - ./pg/data:/var/lib/postgresql/data

  mongo:
    # image: mongo:5.0.18
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # Aliyun
    container_name: mongo
    restart: always
    networks:
      - fastgpt
    environment:
      - MONGO_INITDB_ROOT_USERNAME=username
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - ./mongo/data:/data/db

  fastgpt:
    container_name: fastgpt
    image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:latest # Aliyun
    ports:
      - 5600:3000
    networks:
      - fastgpt
    depends_on:
      - mongo
      - pg
    restart: always
    environment:
      - DEFAULT_ROOT_PSW=xxxxxx
      - OPENAI_BASE_URL=https://api.xxx.com/v1
      - CHAT_API_KEY=sk-xxxxxx
      - DB_MAX_LINK=50
      - TOKEN_KEY=any
      - ROOT_KEY=xxxxxx
      - FILE_TOKEN_KEY=filetokenkey
      - MONGODB_URI=mongodb://username:password@mongo:27017/fastgpt?authSource=admin
      - PG_URL=postgresql://username:password@pg:5432/postgres
    volumes:
      - ./config.json:/app/data/config.json

networks:
  fastgpt:

Notes on the code:

Replace every xxx in the sample code with your own values.

  • DEFAULT_ROOT_PSW is the administrator password you set; the username is root
  • OPENAI_BASE_URL is the OpenAI API address; if the host is in mainland China, change it to a proxy address, or to the API address of a One API deployment
  • CHAT_API_KEY is the key matching the address above, e.g. an OpenAI key or a One API key
  • ROOT_KEY should be a random password of at least 20 characters; some version upgrades need to initialize the API, and this key is required at that point
  • In 5600:3000, the 5600 on the left is the port used to access fastgpt; change it to whatever port you need
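
Random secrets for TOKEN_KEY, ROOT_KEY and FILE_TOKEN_KEY can be generated with openssl (a common approach, not something FastGPT mandates):

```shell
# 16 random bytes encoded as hex give 32 characters, comfortably over the
# 20-character minimum suggested for ROOT_KEY.
ROOT_KEY=$(openssl rand -hex 16)
TOKEN_KEY=$(openssl rand -hex 16)
FILE_TOKEN_KEY=$(openssl rand -hex 16)

# Paste the generated values into the environment section of docker-compose.yaml:
echo "ROOT_KEY=$ROOT_KEY"
```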

config.json configuration

The official team keeps updating FastGPT, and the config.json format changes along with it. For the latest configuration, check the upgrade notes of the latest release.


The config.json configuration for version V4.6.6:

{
  "systemEnv": {
    "vectorMaxProcess": 15,
    "qaMaxProcess": 15,
    "pgHNSWEfSearch": 100
  },
  "chatModels": [
    {
      "model": "gpt-3.5-turbo-1106",
      "name": "GPT35-1106",
      "inputPrice": 0,
      "outputPrice": 0,
      "maxContext": 16000,
      "maxResponse": 4000,
      "quoteMaxToken": 2000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": false,
      "defaultSystemChatPrompt": ""
    },
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "maxContext": 16000,
      "maxResponse": 16000,
      "inputPrice": 0,
      "outputPrice": 0,
      "quoteMaxToken": 8000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": false,
      "defaultSystemChatPrompt": ""
    },
    {
      "model": "gpt-4",
      "name": "GPT4-8k",
      "maxContext": 8000,
      "maxResponse": 8000,
      "inputPrice": 0,
      "outputPrice": 0,
      "quoteMaxToken": 4000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": false,
      "defaultSystemChatPrompt": ""
    },
    {
      "model": "gpt-4-vision-preview",
      "name": "GPT4-Vision",
      "maxContext": 128000,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0,
      "quoteMaxToken": 100000,
      "maxTemperature": 1.2,
      "censor": false,
      "vision": true,
      "defaultSystemChatPrompt": ""
    }
  ],
  "qaModels": [
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "maxContext": 16000,
      "maxResponse": 16000,
      "inputPrice": 0,
      "outputPrice": 0
    }
  ],
  "cqModels": [
    {
      "model": "gpt-3.5-turbo-1106",
      "name": "GPT35-1106",
      "maxContext": 16000,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0,
      "toolChoice": true,
      "functionPrompt": ""
    },
    {
      "model": "gpt-4",
      "name": "GPT4-8k",
      "maxContext": 8000,
      "maxResponse": 8000,
      "inputPrice": 0,
      "outputPrice": 0,
      "toolChoice": true,
      "functionPrompt": ""
    }
  ],
  "extractModels": [
    {
      "model": "gpt-3.5-turbo-1106",
      "name": "GPT35-1106",
      "maxContext": 16000,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0,
      "toolChoice": true,
      "functionPrompt": ""
    }
  ],
  "qgModels": [
    {
      "model": "gpt-3.5-turbo-1106",
      "name": "GPT35-1106",
      "maxContext": 16000,
      "maxResponse": 4000,
      "inputPrice": 0,
      "outputPrice": 0
    }
  ],
  "vectorModels": [
    {
      "model": "text-embedding-ada-002",
      "name": "Embedding-2",
      "inputPrice": 0,
      "defaultToken": 700,
      "maxToken": 3000
    }
  ],
  "reRankModels": [],
  "audioSpeechModels": [
    {
      "model": "tts-1",
      "name": "OpenAI TTS1",
      "inputPrice": 0,
      "baseUrl": "",
      "key": "",
      "voices": [
        { "label": "Alloy", "value": "alloy", "bufferId": "openai-Alloy" },
        { "label": "Echo", "value": "echo", "bufferId": "openai-Echo" },
        { "label": "Fable", "value": "fable", "bufferId": "openai-Fable" },
        { "label": "Onyx", "value": "onyx", "bufferId": "openai-Onyx" },
        { "label": "Nova", "value": "nova", "bufferId": "openai-Nova" },
        { "label": "Shimmer", "value": "shimmer", "bufferId": "openai-Shimmer" }
      ]
    }
  ],
  "whisperModel": {
    "model": "whisper-1",
    "name": "Whisper1",
    "inputPrice": 0
  }
}
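
Since an invalid config.json will keep FastGPT from starting, it is worth checking that the file parses before bringing the stack up. A sketch using Python's built-in JSON tool (a tiny stand-in file is created here so the sketch runs on its own; point the command at your real config.json instead):

```shell
# Stand-in file so this sketch is self-contained; use your real config.json.
printf '{"systemEnv": {"vectorMaxProcess": 15}}' > config.json

# A stray trailing comma or missing quote will fail this check:
python3 -m json.tool config.json > /dev/null && echo "config.json is valid JSON"

# Then, inside the FastGPT folder, start everything and check the status:
#   docker-compose up -d
#   docker-compose ps
```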