Vicuna

ChatGLM

2023-05-29. Category & Tags: AIGC, GPT, ChatGPT, Vicuna, LLAMA, LLM, ChatGLM

public: 2025-04-19. See also the main item: /LLM. [Doing, not finished]

See also:

- Step-by-step: building a local knowledge-base auto-QA app with LangChain and chatglm-6b (9.5 min)
- PyTorch intro 20 – local knowledge-base LLM dialogue system (the langchain-ChatGLM project), source-code walkthrough (finished) – learning the official PyTorch tutorials with Xiaoyu'er (37 min)
- Automatic question answering over a local knowledge base with LangChain and the Chinese LLM ChatGLM-6B (1.4 min)

GitHub: https://github.com/thomas-yanxin/LangChain-ChatGLM-Webui
ModelScope online demo: https://modelscope.cn/studios/AI-ModelScope/LangChain-ChatLLM/summary
OpenI: https://openi.pcl.ac.cn/Learning-Develop-Union/LangChain-ChatGLM-Webui

Install Env # ref: imClumsyPanda/langchain-ChatGLM (tested on Ubuntu 22.04)

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia. ...
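The local knowledge-base QA flow the linked tutorials walk through (split documents, embed them into a vector store, retrieve relevant chunks, answer with ChatGLM-6B) can be sketched roughly as below. This is a sketch, not the langchain-ChatGLM project's actual code: the LangChain API names match the 0.0.x releases from 2023 and may have moved since, the embedding model and file paths are placeholder assumptions, and running it requires a GPU and downloading chatglm-6b.

```python
# Hedged sketch of a LangChain + chatglm-6b local knowledge-base QA app.
# Assumptions: LangChain 0.0.x (2023-era imports), a GPU, and placeholder
# paths/model names ("knowledge.txt", the text2vec embedding model).
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms.base import LLM
from transformers import AutoModel, AutoTokenizer

# Load chatglm-6b once at module level (avoids fighting LangChain's
# pydantic field validation inside the wrapper class).
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

class ChatGLM(LLM):
    """Thin LangChain wrapper around chatglm-6b's chat() API."""

    @property
    def _llm_type(self) -> str:
        return "chatglm-6b"

    def _call(self, prompt: str, stop=None) -> str:
        response, _history = model.chat(tokenizer, prompt, history=[])
        return response

# 1. Load local documents and split them into overlapping chunks.
docs = TextLoader("knowledge.txt").load()
chunks = CharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and index them in a FAISS vector store.
embeddings = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")
store = FAISS.from_documents(chunks, embeddings)

# 3. Retrieve relevant chunks for a question and let ChatGLM answer.
qa = RetrievalQA.from_chain_type(llm=ChatGLM(), retriever=store.as_retriever())
print(qa.run("What does the knowledge base say about X?"))
```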

FastChat Vicuna

2023-05-29. Category & Tags: AIGC, GPT, ChatGPT, LLAMA, LLM, FastChat, Vicuna

public: 2025-04-19. See also the main item: /LLM. Official GitHub. For the first run, follow this CSDN blog: CSDN, (bak 2023-04-18).

Timing notes (on a Tesla V100 16G):

- convert_llama_weights_to_hf.py for LLAMA-7B takes <10 min.
- python -m fastchat.model.apply_delta for LLAMA-7B takes <10 min.
- GPTQ-for-LLaMA, quantizing LLAMA-13B to a 4-bit .pt, takes ~0.75 hour.

Vicuna GPTQ models (quantized models): comparison & WebUI tutorial, ref: medium. See also FastChat for the WebUI & RESTful API: FastChat GitHub Home. ...
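The three timed steps above form one pipeline: convert the original LLAMA checkpoint to Hugging Face format, apply the Vicuna delta weights, then optionally quantize with GPTQ-for-LLaMA. A sketch of the commands follows; all local paths are placeholders, the exact apply_delta flag names have varied across FastChat versions, and the GPTQ invocation assumes the GPTQ-for-LLaMA repo's llama.py script, so check each project's README before running.

```shell
# Hedged sketch of the LLAMA -> Vicuna -> 4-bit pipeline; paths are placeholders.

# 1. Convert the original LLAMA-7B checkpoint to Hugging Face format (<10 min on a V100 16G).
python convert_llama_weights_to_hf.py \
    --input_dir ./llama-original --model_size 7B --output_dir ./llama-7b-hf

# 2. Apply the Vicuna delta weights to recover Vicuna-7B (<10 min; flag names
#    differ between FastChat versions, e.g. --base/--target/--delta in older ones).
python -m fastchat.model.apply_delta \
    --base-model-path ./llama-7b-hf \
    --delta-path lmsys/vicuna-7b-delta-v1.1 \
    --target-model-path ./vicuna-7b

# 3. Optionally quantize a 13B model to a 4-bit .pt with GPTQ-for-LLaMA (~0.75 h for 13B).
python llama.py ./vicuna-13b c4 --wbits 4 --groupsize 128 --save vicuna-13b-4bit.pt
```

Steps 1 and 2 are CPU/RAM-bound conversions; only step 3 needs the GPU for the quantization passes, which is why it dominates the timing notes above.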