Commit Graph

2616 Commits

Author | SHA1 | Message | Date
Panchovix | 34dc7306b8 | Fix NTK (alpha) and RoPE scaling for exllamav2 and exllamav2_HF (#3897) | 2023-09-13 02:35:09 -03:00
dependabot[bot] | eb9ebabec7 | Bump exllamav2 from 0.0.0 to 0.0.1 (#3896) | 2023-09-13 02:13:51 -03:00
cal066 | a4e4e887d7 | Bump ctransformers to 0.2.27 (#3893) | 2023-09-13 00:37:31 -03:00
oobabooga | b7adf290fc | Fix ExLlama-v2 path issue | 2023-09-12 17:42:22 -07:00
jllllll | 1a5d68015a | Bump llama-cpp-python to 0.1.85 (#3887) | 2023-09-12 19:41:41 -03:00
oobabooga | 833bc59f1b | Remove ninja from requirements.txt (It's installed with exllamav2 automatically) | 2023-09-12 15:12:56 -07:00
oobabooga | b190676893 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-09-12 15:06:33 -07:00
oobabooga | 2f935547c8 | Minor changes | 2023-09-12 15:05:21 -07:00
oobabooga | 18e6b275f3 | Add alpha_value/compress_pos_emb to ExLlama-v2 | 2023-09-12 15:02:47 -07:00
Gennadij | 460c40d8ab | Read more GGUF metadata (scale_linear and freq_base) (#3877) | 2023-09-12 17:02:42 -03:00
Eve | 90fca6a77d | add pygmalion-2 and mythalion support (#3821) | 2023-09-12 15:57:49 -03:00
Chang Chi, Meng | b61d9aef19 | openai API: add support for chunked transfer encoding in POST requests (#3870) | 2023-09-12 15:54:42 -03:00
dependabot[bot] | 0efbe5ef76 | Bump optimum from 1.12.0 to 1.13.1 (#3872) | 2023-09-12 15:53:21 -03:00
missionfloyd | 43ec9d1619 | silero_tts: Add language option (#3878) | 2023-09-12 15:49:46 -03:00
oobabooga | 04a74b3774 | Update README | 2023-09-12 10:46:27 -07:00
oobabooga | 16e1696071 | Minor qol change | 2023-09-12 10:44:26 -07:00
oobabooga | c2a309f56e | Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) | 2023-09-12 14:33:07 -03:00
oobabooga | a821928877 | Reduce chat width | 2023-09-12 10:26:43 -07:00
oobabooga | df123a20fc | Prevent extra keys from being saved to settings.yaml | 2023-09-11 20:13:10 -07:00
oobabooga | dae428a967 | Revamp cai-chat theme, make it default | 2023-09-11 19:30:40 -07:00
oobabooga | 47d1ca467b | Pin pandas version in superbooga | 2023-09-11 18:34:34 -07:00
oobabooga | 78811dd89a | Fix GGUF metadata reading for falcon | 2023-09-11 15:49:50 -07:00
oobabooga | 9331ab4798 | Read GGUF metadata (#3873) | 2023-09-11 18:49:30 -03:00
oobabooga | 39f4800d94 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-09-11 10:45:13 -07:00
oobabooga | 5c58dfadef | Update requirements_nocuda.txt | 2023-09-11 10:44:19 -07:00
Sam | fa363da7ce | improve docker builds (#3715) | 2023-09-11 12:22:00 -03:00
oobabooga | df52dab67b | Lint | 2023-09-11 07:57:38 -07:00
Eve | 92f3cd624c | Improve instructions for CPUs without AVX2 (#3786) | 2023-09-11 11:54:04 -03:00
oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00
John Smith | cc7b7ba153 | fix lora training with alpaca_lora_4bit (#3853) | 2023-09-11 01:22:20 -03:00
Forkoz | 15e9b8c915 | Exllama new rope settings (#3852) | 2023-09-11 01:14:36 -03:00
jllllll | 859b4fd737 | Bump exllama to 0.0.17 (#3847) | 2023-09-11 01:13:14 -03:00
dependabot[bot] | 1d6b384828 | Update transformers requirement from ==4.32.* to ==4.33.* (#3865) | 2023-09-11 01:12:22 -03:00
jllllll | e8f234ca8f | Bump llama-cpp-python to 0.1.84 (#3854) | 2023-09-11 01:11:33 -03:00
oobabooga | 66d5caba1b | Pin pydantic version (closes #3850) | 2023-09-10 21:09:04 -07:00
oobabooga | 4affa08821 | Do not impose instruct mode while loading models | 2023-09-02 11:31:33 -07:00
oobabooga | 0576691538 | Add optimum to requirements (for GPTQ LoRA training; see https://github.com/oobabooga/text-generation-webui/issues/3655) | 2023-08-31 08:45:38 -07:00
oobabooga | 40ffc3d687 | Update README.md | 2023-08-30 18:19:04 -03:00
oobabooga | 47e490c7b4 | Set use_cache=True by default for all models | 2023-08-30 13:26:27 -07:00
oobabooga | 5190e153ed | Update README.md | 2023-08-30 14:06:29 -03:00
jllllll | 9626f57721 | Bump exllama to 0.0.14 (#3758) | 2023-08-30 13:43:38 -03:00
oobabooga | bc4023230b | Improved instructions for AMD/Metal/Intel Arc/CPUs without AVX2 | 2023-08-30 09:40:00 -07:00
oobabooga | b2f7ca0d18 | Cloudfare fix 2 | 2023-08-29 19:54:43 -07:00
missionfloyd | 787219267c | Allow downloading single file from UI (#3737) | 2023-08-29 23:32:36 -03:00
Alberto Ferrer | f63dd83631 | Update download-model.py (Allow single file download) (#3732) | 2023-08-29 22:57:58 -03:00
jllllll | dac5f4b912 | Bump llama-cpp-python to 0.1.83 (#3745) | 2023-08-29 22:35:59 -03:00
oobabooga | 6c16e4cecf | Cloudfare fix (Credits: https://github.com/oobabooga/text-generation-webui/issues/1524#issuecomment-1698255209) | 2023-08-29 16:35:44 -07:00
oobabooga | 828d97a98c | Minor CSS improvement | 2023-08-29 16:15:12 -07:00
oobabooga | a26c2300cb | Make instruct style more readable (attempt) | 2023-08-29 14:14:01 -07:00
q5sys (JT) | cdb854db9e | Update llama.cpp.md instructions (#3702) | 2023-08-29 17:56:50 -03:00