docker run -d -p 3000:8080 -e OPENAI_API_BASE_URLS="https://api.siliconflow.cn/v1/" -e OPENAI_API_KEY=<my_api_key> -v open-webui:/path/to/docker-v-data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

PS: passing --env HTTPS_PROXY="http://192.168.50.107:1080" did not improve the download speed from Docker Hub.
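That is expected: `--env` only sets variables inside the container, while the image pull is performed by the Docker daemon on the host, which needs its own proxy configuration. A minimal sketch of the host-side fix, assuming Docker runs under systemd and reusing the same proxy address as above:

```bash
# Point the Docker daemon (not the container) at the proxy,
# so that `docker pull` traffic goes through it.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.50.107:1080"
Environment="HTTPS_PROXY=http://192.168.50.107:1080"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

# Verify the daemon picked up the proxy settings.
docker info | grep -i proxy
```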
Official GitHub
Flux image generation
See also backends: /rag-agent-frameworks, /chatglm, /fastchat-vicuna, /llamaindex, /ollama, SiliconFlow.cn (硅基流动)
See also frontends: HuggingChat, GPT4ALL, LibreChat, /open-webui, localAI
See also API translators: NewAPI, OneAPI
See also datasets etc.: Chinese NLP Data: the Four Great Classical Novels in modern-Chinese and classical-Chinese editions
See also multi-agent frameworks (below).
Interesting Apps
Some simple examples/demos.
Voice chatbot: 【Open WebUI + Ollama/vLLM + CosyVoice + Whisper】the ultimate personal interactive chatbot, environment deployment and results showcase
Simple multimodality: ollama + open-webui, with knowledge base, multimodal, and text-to-image features explained
Fine-tune vs. RAG vs. Prompt
Fine-tune ≈ learning a course
RAG ≈ an open-book examination
Prompt ≈ ?
Several Tools to Run the LLM Model (itself)
N ways to expose an OpenAI-compatible API from a locally run LLM: vllm, fastchat, llama factory, llama.
...
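As an illustration, here is a minimal sketch of two of these routes, assuming vLLM (a recent version with the `vllm serve` CLI) and Ollama are already installed; the model names are placeholders only:

```bash
# Route 1 (vLLM): serve a model behind an OpenAI-compatible API on port 8000.
vllm serve Qwen/Qwen2.5-0.5B-Instruct --port 8000

# Route 2 (Ollama): the local server already speaks the OpenAI API under /v1.
ollama pull qwen2.5:0.5b
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5:0.5b", "messages": [{"role": "user", "content": "hello"}]}'
```

Either way, the resulting local endpoint can be plugged into Open WebUI via OPENAI_API_BASE_URLS, just like the hosted SiliconFlow endpoint in the docker command above.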