Author | Commit | Message | Date
oobabooga | 4affa08821 | Do not impose instruct mode while loading models | 2023-09-02 11:31:33 -07:00
oobabooga | 0576691538 | Add optimum to requirements (for GPTQ LoRA training); see https://github.com/oobabooga/text-generation-webui/issues/3655 | 2023-08-31 08:45:38 -07:00
oobabooga | 40ffc3d687 | Update README.md | 2023-08-30 18:19:04 -03:00
oobabooga | 47e490c7b4 | Set use_cache=True by default for all models | 2023-08-30 13:26:27 -07:00
oobabooga | 5190e153ed | Update README.md | 2023-08-30 14:06:29 -03:00
jllllll | 9626f57721 | Bump exllama to 0.0.14 (#3758) | 2023-08-30 13:43:38 -03:00
oobabooga | bc4023230b | Improved instructions for AMD/Metal/Intel Arc/CPUs without AVX2 | 2023-08-30 09:40:00 -07:00
oobabooga | b2f7ca0d18 | Cloudflare fix 2 | 2023-08-29 19:54:43 -07:00
missionfloyd | 787219267c | Allow downloading single file from UI (#3737) | 2023-08-29 23:32:36 -03:00
Alberto Ferrer | f63dd83631 | Update download-model.py (Allow single file download) (#3732) | 2023-08-29 22:57:58 -03:00
jllllll | dac5f4b912 | Bump llama-cpp-python to 0.1.83 (#3745) | 2023-08-29 22:35:59 -03:00
oobabooga | 6c16e4cecf | Cloudflare fix; credits: https://github.com/oobabooga/text-generation-webui/issues/1524#issuecomment-1698255209 | 2023-08-29 16:35:44 -07:00
oobabooga | 828d97a98c | Minor CSS improvement | 2023-08-29 16:15:12 -07:00
oobabooga | a26c2300cb | Make instruct style more readable (attempt) | 2023-08-29 14:14:01 -07:00
q5sys (JT) | cdb854db9e | Update llama.cpp.md instructions (#3702) | 2023-08-29 17:56:50 -03:00
VishwasKukreti | a9a1784420 | Update accelerate to 0.22 in requirements.txt (#3725) | 2023-08-29 17:47:37 -03:00
oobabooga | cec8db52e5 | Add max_tokens_second param (#3533) | 2023-08-29 17:44:31 -03:00
jllllll | fe1f7c6513 | Bump ctransformers to 0.2.25 (#3740) | 2023-08-29 17:24:36 -03:00
oobabooga | 672b610dba | Improve tab switching js | 2023-08-29 13:22:15 -07:00
oobabooga | 2b58a89f6a | Clear instruction template before loading a new one | 2023-08-29 13:11:32 -07:00
oobabooga | 36864cb3e8 | Use Alpaca as the default instruction template | 2023-08-29 13:06:25 -07:00
oobabooga | 9a202f7fb2 | Prevent <ul> lists from flickering during streaming | 2023-08-28 20:45:07 -07:00
oobabooga | 8b56fc993a | Change list style in chat mode | 2023-08-28 20:14:02 -07:00
oobabooga | e8c0c4990d | Unescape HTML in the chat API examples | 2023-08-28 19:42:03 -07:00
oobabooga | 439dd0faab | Fix stopping strings in the chat API | 2023-08-28 19:40:11 -07:00
oobabooga | 86c45b67ca | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-08-28 18:29:38 -07:00
oobabooga | c75f98a6d6 | Autoscroll Notebook/Default textareas during streaming | 2023-08-28 18:22:03 -07:00
jllllll | 22b2a30ec7 | Bump llama-cpp-python to 0.1.82 (#3730) | 2023-08-28 18:02:24 -03:00
oobabooga | 558e918fd6 | Add a typing dots (...) animation to the chat tab | 2023-08-28 13:50:36 -07:00
oobabooga | 57e9ded00c | Make it possible to scroll during streaming (#3721) | 2023-08-28 16:03:20 -03:00
jllllll | 7d3a0b5387 | Bump llama-cpp-python to 0.1.81 (#3716) | 2023-08-27 22:38:41 -03:00
oobabooga | fdef0e4efa | Focus on chat input field after Ctrl+S | 2023-08-27 16:45:37 -07:00
Cebtenzzre | 2f5d769a8d | Accept floating-point alpha value on the command line (#3712) | 2023-08-27 18:54:43 -03:00
oobabooga | 0986868b1b | Fix chat scrolling with Dark Reader extension | 2023-08-27 14:53:42 -07:00
oobabooga | b2296dcda0 | Ctrl+S to show/hide chat controls | 2023-08-27 13:14:33 -07:00
Kelvie Wong | a965a36803 | Add ffmpeg to the Docker image (#3664) | 2023-08-27 12:29:00 -03:00
Ravindra Marella | e4c3e1bdd2 | Fix ctransformers model unload (#3711); add missing comma in model types list; fixes marella/ctransformers#111 | 2023-08-27 10:53:48 -03:00
oobabooga | 0c9e818bb8 | Update truncation length based on max_seq_len/n_ctx | 2023-08-26 23:10:45 -07:00
oobabooga | e6eda5c2da | Merge pull request #3695 from oobabooga/gguf2 (GGUF) | 2023-08-27 02:33:26 -03:00
oobabooga | 3361728da1 | Change some comments | 2023-08-26 22:24:44 -07:00
oobabooga | 8aeae3b3f4 | Fix llamacpp_HF loading | 2023-08-26 22:15:06 -07:00
oobabooga | 7f5370a272 | Minor fixes/cosmetics | 2023-08-26 22:11:07 -07:00
oobabooga | d826bc5d1b | Merge pull request #3697 from jllllll/llamacpp-ggml (use separate llama-cpp-python packages for GGML support) | 2023-08-27 01:51:00 -03:00
jllllll | 4d61a7d9da | Account for deprecated GGML parameters | 2023-08-26 14:07:46 -05:00
jllllll | 4a999e3bcd | Use separate llama-cpp-python packages for GGML support | 2023-08-26 10:40:08 -05:00
oobabooga | 6e6431e73f | Update requirements.txt | 2023-08-26 01:07:28 -07:00
oobabooga | 83640d6f43 | Replace ggml occurrences with gguf | 2023-08-26 01:06:59 -07:00
oobabooga | 1a642c12b5 | Fix silero_tts HTML unescaping | 2023-08-26 00:45:07 -07:00
jllllll | db42b365c9 | Fix ctransformers threads auto-detection (#3688) | 2023-08-25 14:37:02 -03:00
oobabooga | 0bcecaa216 | Set mode: instruct for CodeLlama-instruct | 2023-08-25 07:59:23 -07:00