A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
Latest commit: fcb215fed5 by jllllll (2023-07-20 11:11:00 -03:00)
Add check for compute support for GPTQ-for-LLaMa (#104)
Installs from the main CUDA repo if the fork is not supported.
Also removed the cuBLAS llama-cpp-python installation in preparation for 4b19b74e6c.
File                  Last commit message                                       Date
.github/workflows     Create stale.yml                                          2023-06-05 17:15:31 -03:00
.gitignore            Create .gitignore (#43)                                   2023-05-02 23:47:19 -03:00
cmd_linux.sh          Improve environment isolation (#68)                       2023-05-25 11:15:05 -03:00
cmd_macos.sh          Fix cmd_macos.sh (#82)                                    2023-06-17 19:09:42 -03:00
cmd_windows.bat       Improve environment isolation (#68)                       2023-05-25 11:15:05 -03:00
cmd_wsl.bat           Installer for WSL (#78)                                   2023-06-13 00:04:15 -03:00
generate_zips.sh      Installer for WSL (#78)                                   2023-06-13 00:04:15 -03:00
INSTRUCTIONS-WSL.TXT  Installer for WSL (#78)                                   2023-06-13 00:04:15 -03:00
INSTRUCTIONS.TXT      Grammar                                                   2023-06-01 14:05:29 -03:00
LICENSE               Create LICENSE                                            2023-05-31 16:28:36 -03:00
start_linux.sh        Install Pytorch through pip instead of Conda (#84)        2023-06-20 16:39:23 -03:00
start_macos.sh        Install Pytorch through pip instead of Conda (#84)        2023-06-20 16:39:23 -03:00
start_windows.bat     Move special character check to start script (#92)        2023-06-24 10:06:35 -03:00
start_wsl.bat         Installer for WSL (#78)                                   2023-06-13 00:04:15 -03:00
update_linux.sh       Improve environment isolation (#68)                       2023-05-25 11:15:05 -03:00
update_macos.sh       Improve environment isolation (#68)                       2023-05-25 11:15:05 -03:00
update_windows.bat    Improve environment isolation (#68)                       2023-05-25 11:15:05 -03:00
update_wsl.bat        Installer for WSL (#78)                                   2023-06-13 00:04:15 -03:00
webui.py              Add check for compute support for GPTQ-for-LLaMa (#104)   2023-07-20 11:11:00 -03:00
wsl.sh                Install Pytorch through pip instead of Conda (#84)        2023-06-20 16:39:23 -03:00