Commit Graph

491 Commits

oobabooga
113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2023-04-06 16:04:03 -03:00
oobabooga
03cb44fc8c Add new llama.cpp library (2048 context, temperature, etc now work) 2023-04-06 13:12:14 -03:00
EyeDeck
39f3fec913
Broaden GPTQ-for-LLaMA branch support (#820) 2023-04-06 12:16:48 -03:00
Alex "mcmonkey" Goodwin
0c7ef26981
Lora trainer improvements (#763) 2023-04-06 02:04:11 -03:00
oobabooga
e94ab5dac1 Minor fixes 2023-04-06 01:43:10 -03:00
oobabooga
3f3e42e26c
Refactor several function calls and the API 2023-04-06 01:22:15 -03:00
SDS
378d21e80c
Add LLaMA-Precise preset (#767) 2023-04-05 18:52:36 -03:00
Forkoz
8203ce0cac
Stop character pic from being cached when changing chars or clearing. (#798)
Tested on both Firefox and Chromium
2023-04-05 14:25:01 -03:00
oobabooga
7f66421369 Fix loading characters 2023-04-05 14:22:32 -03:00
oobabooga
e722c240af Add Instruct mode 2023-04-05 13:54:50 -03:00
oobabooga
3d6cb5ed63 Minor rewrite 2023-04-05 01:21:40 -03:00
oobabooga
f3a2e0b8a9 Disable pre_layer when the model type is not llama 2023-04-05 01:19:26 -03:00
catalpaaa
4ab679480e
allow quantized model to be loaded from model dir (#760) 2023-04-04 23:19:38 -03:00
oobabooga
ae1fe45bc0 One more cache reset 2023-04-04 23:15:57 -03:00
oobabooga
8ef89730a5 Try to better handle browser image cache 2023-04-04 23:09:28 -03:00
oobabooga
cc6c7a37f3 Add make_thumbnail function 2023-04-04 23:03:58 -03:00
oobabooga
80dfba05f3 Better crop/resize cached images 2023-04-04 22:52:15 -03:00
oobabooga
65d8a24a6d Show profile pictures in the Character tab 2023-04-04 22:28:49 -03:00
OWKenobi
ee4547cd34
Detect "vicuna" as llama model type (#772) 2023-04-04 13:23:27 -03:00
oobabooga
b24147c7ca Document --pre_layer 2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270 Update settings template 2023-04-03 14:59:26 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly (#743)
* Add greetings field

* add greeting field and make it interactive

* Minor changes

* Fix a bug

* Simplify clear_chat_log

* Change a label

* Minor change

* Simplifications

* Simplification

* Simplify loading the default character history

* Fix regression

---------

Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
Alex "mcmonkey" Goodwin
8b1f20aa04
Fix some old JSON characters not loading (#740) 2023-04-03 10:49:28 -03:00
oobabooga
8b442305ac Rename another variable 2023-04-03 01:15:20 -03:00
oobabooga
08448fb637 Rename a variable 2023-04-03 01:02:11 -03:00
oobabooga
2a267011dc Use Path.stem for simplicity 2023-04-03 00:56:14 -03:00
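The `Path.stem` simplification above swaps manual filename string slicing for the standard library; a minimal illustration:

```python
from pathlib import Path

# Path.stem returns the final path component without its suffix,
# replacing manual constructs like path.name.rsplit('.', 1)[0].
character_file = Path("characters/Example.json")
print(character_file.stem)  # Example
```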
Alex "mcmonkey" Goodwin
ea97303509
Apply dialogue format in all character fields not just example dialogue (#650) 2023-04-02 21:54:29 -03:00
TheTerrasque
2157bb4319
New yaml character format (#337 from TheTerrasque/feature/yaml-characters)
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
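The YAML character format from #337 coexists with the older JSON characters. A hypothetical character file in the new format might look like the fragment below; the field names are illustrative only, not taken from the PR:

```yaml
# Illustrative sketch of a YAML character file (field names assumed)
name: "Example"
greeting: "Hello! How can I help you today?"
context: "Example is a friendly assistant who answers concisely."
```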
oobabooga
5f3f3faa96 Better handle CUDA out of memory errors in chat mode 2023-04-02 17:48:00 -03:00
oobabooga
b0890a7925 Add shared.is_chat() function 2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py 2023-04-01 13:56:47 -03:00
oobabooga
fcda3f8776 Add also_return_rows to generate_chat_prompt 2023-04-01 01:12:13 -03:00
oobabooga
2c52310642 Add --threads flag for llama.cpp 2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713 Fix streaming 2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd Add repetition_penalty 2023-03-31 19:01:34 -03:00
oobabooga
2259143fec Fix llama.cpp with --no-stream 2023-03-31 18:43:45 -03:00
oobabooga
3a47a602a3 Detect ggml*.bin files automatically 2023-03-31 17:18:21 -03:00
oobabooga
0aee7341d8 Properly count tokens/s for llama.cpp in chat mode 2023-03-31 17:04:32 -03:00
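Counting tokens/s, as fixed above for llama.cpp in chat mode, reduces to dividing newly generated tokens by elapsed wall time; a minimal sketch, not the repository's actual code:

```python
import time

def tokens_per_second(new_tokens: int, start: float, end: float) -> float:
    # Guard against a zero-length interval before dividing.
    elapsed = max(end - start, 1e-9)
    return new_tokens / elapsed

start = time.monotonic()
# ... generation would happen here ...
rate = tokens_per_second(42, start, start + 2.0)
print(f"{rate:.1f} tokens/s")  # 21.0 tokens/s
```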
oobabooga
ea3ba6fc73 Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp 2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb Add repetition_penalty 2023-03-31 14:45:17 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp 2023-03-31 14:37:04 -03:00
oobabooga
4c27562157 Minor changes 2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a General improvements 2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd Add seed to settings 2023-03-31 12:22:07 -03:00
Maya
b246d17513
Fix type object is not subscriptable
Fix `type object is not subscriptable` on Python 3.8
2023-03-31 14:20:31 +03:00
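The Python 3.8 failure fixed above comes from subscripting builtin types in annotations (e.g. `list[str]`), which only became legal at runtime in Python 3.9; a sketch of the usual portable fix (the helper function is illustrative, not from the repo):

```python
# On Python 3.8, evaluating list[str] raises:
#   TypeError: 'type' object is not subscriptable
# The portable spelling uses typing aliases instead:
from typing import Dict, List

def load_rows(raw: str) -> List[Dict[str, str]]:
    # Parse "key=value" pairs, one per line (illustrative helper).
    return [dict([line.split("=", 1)]) for line in raw.splitlines() if line]

print(load_rows("a=1\nb=2"))  # [{'a': '1'}, {'b': '2'}]
```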
oobabooga
d4a9b5ea97 Remove redundant preset (see the plot in #587) 2023-03-30 17:34:44 -03:00
Thomas Antony
7fa5d96c22 Update to use new llamacpp API 2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e Add support for alpaca 2023-03-30 11:23:04 +01:00
Thomas Antony
a5f5736e74 Add to text_generation.py 2023-03-30 11:22:38 +01:00
Thomas Antony
7745faa7bb Add llamacpp to models.py 2023-03-30 11:22:37 +01:00
Thomas Antony
7a562481fa Initial version of llamacpp_model.py 2023-03-30 11:22:07 +01:00
oobabooga
a21e580782 Move an import 2023-03-29 22:50:58 -03:00
oobabooga
55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00
oobabooga
58349f44a0
Handle training exception for unsupported models 2023-03-29 11:55:34 -03:00
oobabooga
a6d0373063
Fix training dataset loading #636 2023-03-29 11:48:17 -03:00
oobabooga
1edfb96778
Fix loading extensions from within the interface 2023-03-28 23:27:02 -03:00
oobabooga
304f812c63 Gracefully handle CUDA out of memory errors with streaming 2023-03-28 19:20:50 -03:00
oobabooga
010b259dde Update documentation 2023-03-28 17:46:00 -03:00
oobabooga
0bec15ebcd Reorder imports 2023-03-28 17:34:15 -03:00
Maya Eary
41ec682834 Disable kernel threshold for gpt-j 2023-03-28 22:45:38 +03:00
Maya
1ac003d41c
Merge branch 'oobabooga:main' into feature/gpt-j-4bit-v2 2023-03-28 22:30:39 +03:00
Maya Eary
1c075d8d21 Fix typo 2023-03-28 20:43:50 +03:00
Maya Eary
c8207d474f Generalized load_quantized 2023-03-28 20:38:55 +03:00
oobabooga
8579fe51dd Fix new lines in the HTML tab 2023-03-28 12:59:34 -03:00
Alex "mcmonkey" Goodwin
e817fac542 better defaults 2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin
2e08af4edf implement initial Raw Text File Input
also bump the default Rank & Alpha to values that will make sense in testing if you don't know what you're doing and leave the defaults.
2023-03-27 22:15:32 -07:00
Alex "mcmonkey" Goodwin
b749952fe3 change number minimums to 0
gradio calculates 'step' relative to the minimum, so at '1' the step values were all offset awkwardly. 0 isn't valid, but, uh, just don't slam the slider to the left.
2023-03-27 21:22:43 -07:00
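The slider quirk described above (step stops computed relative to the minimum) can be seen with plain arithmetic; a sketch of the behavior, not gradio itself:

```python
def slider_values(minimum, maximum, step):
    # Mimics a slider that generates stops at minimum + k*step.
    vals = []
    v = minimum
    while v <= maximum:
        vals.append(v)
        v += step
    return vals

# With minimum=1 and step=4, stops land on 1, 5, 9, ... (offset by 1),
# while minimum=0 gives the "round" values 0, 4, 8, ...
print(slider_values(1, 13, 4))  # [1, 5, 9, 13]
print(slider_values(0, 12, 4))  # [0, 4, 8, 12]
```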
Alex "mcmonkey" Goodwin
ec6224f556 use new shared.args.lora_dir 2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin
31f04dc615 Merge branch 'main' into add-train-lora-tab 2023-03-27 20:03:30 -07:00
oobabooga
53da672315 Fix FlexGen 2023-03-27 23:44:21 -03:00
oobabooga
ee95e55df6 Fix RWKV tokenizer 2023-03-27 23:42:29 -03:00
oobabooga
036163a751 Change description 2023-03-27 23:39:26 -03:00
oobabooga
005f552ea3 Some simplifications 2023-03-27 23:29:52 -03:00
oobabooga
fde92048af Merge branch 'main' into catalpaaa-lora-and-model-dir 2023-03-27 23:16:44 -03:00
Alex "mcmonkey" Goodwin
8a97f6ba29 corrections per the PR comments 2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin
7fab7ea1b6 couple missed camelCases 2023-03-27 18:19:06 -07:00
Alex "mcmonkey" Goodwin
6368dad7db Fix camelCase to snake_case to match repo format standard 2023-03-27 18:17:42 -07:00
oobabooga
2f0571bfa4 Small style changes 2023-03-27 21:24:39 -03:00
oobabooga
c2cad30772 Merge branch 'main' into mcmonkey4eva-add-train-lora-tab 2023-03-27 21:05:44 -03:00
Alex "mcmonkey" Goodwin
9ced75746d add total time estimate 2023-03-27 10:57:27 -07:00
Alex "mcmonkey" Goodwin
16ea4fc36d interrupt button 2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin
8fc723fc95 initial progress tracker in UI 2023-03-27 10:25:08 -07:00
oobabooga
48a6c9513e
Merge pull request #572 from clusterfudge/issues/571
Potential fix for issues/571
2023-03-27 14:06:38 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850 add some outputs to indicate progress updates (sorta)
Actual progress bar still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900 Change Stop button behavior 2023-03-27 13:23:59 -03:00
Alex "mcmonkey" Goodwin
d911c22af9 use shared rows to make the LoRA Trainer interface a bit more compact / clean 2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin
e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga
3dc61284d5 Handle unloading LoRA from dropdown menu icon 2023-03-27 00:04:43 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Sean Fitzgerald
0bac80d9eb Potential fix for issues/571 2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1 make 'model' variables less ambiguous 2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e document options better 2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0 fix error from prepare call running twice in a row 2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c automatically strip empty data entries (for better alpaca dataset compat) 2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
oobabooga
8c8e8b4450
Fix the early stopping callback #559 2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c
Merge branch 'oobabooga:main' into lora-and-model-dir 2023-03-25 01:28:33 -07:00
oobabooga
25be9698c7
Fix LoRA on mps 2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
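The "u:pw,u:pw,..." credential format described above can be parsed in a few lines; a hedged sketch (the function name and exact tolerance for whitespace are assumptions, not taken from the PR):

```python
def parse_auth(blob: str):
    # Split "user1:pass1,user2:pass2" into (user, password) tuples,
    # tolerating stray whitespace around entries.
    creds = []
    for entry in blob.split(","):
        entry = entry.strip()
        if not entry:
            continue
        user, _, password = entry.partition(":")
        creds.append((user, password))
    return creds

print(parse_auth("alice:secret, bob:hunter2"))
# [('alice', 'secret'), ('bob', 'hunter2')]
```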
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident) 2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident) 2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py 2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py 2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
This allows the bot to send messages by clicking on Generate with empty inputs.
2023-03-24 17:22:22 -03:00
oobabooga
6e1b16c2aa
Update html_generator.py 2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py 2023-03-24 17:17:29 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload 2023-03-24 16:54:41 -03:00
oobabooga
8fad84abc2
Update extensions.py 2023-03-24 16:51:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts 2023-03-24 02:03:30 -03:00
EyeDeck
dcfd866402 Allow loading of .safetensors through GPTQ-for-LLaMa 2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import 2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import 2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py 2023-03-23 22:12:24 -03:00
oobabooga
b0abb327d8
Update LoRA.py 2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode 2023-03-23 21:38:20 -03:00
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt) 2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names (#518)
* Update shared.py

* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode 2023-03-23 01:05:13 -03:00
oobabooga
eac27f4f55
Make LoRAs work in 16-bit mode 2023-03-23 00:55:33 -03:00
oobabooga
bfa81e105e
Fix FlexGen streaming 2023-03-23 00:22:14 -03:00
oobabooga
de6a09dc7f
Properly separate the original prompt from the reply 2023-03-23 00:12:40 -03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
oobabooga
45b7e53565
Only catch proper Exceptions in the text generation function 2023-03-20 20:36:02 -03:00
oobabooga
db4219a340
Update comments 2023-03-20 16:40:08 -03:00
oobabooga
7618f3fe8c
Add -gptq-preload for 4-bit offloading (#460)
This works in a 4GB card now:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy
e96687b1d6 Do not send empty user input as part of the prompt.
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga
9a3bed50c3
Attempt at fixing 4-bit with CPU offload 2023-03-20 15:11:56 -03:00
Vladimir Belitskiy
ca47e016b4
Do not display empty user messages in chat mode.
There doesn't seem to be much value to them - they just take up space while also making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga
75a7a84ef2
Exception handling (#454)
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga
ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
oobabooga
a78b6508fc Make custom LoRAs work by default #385 2023-03-19 12:11:35 -03:00
Maya
acdbd6b708 Check if app should display extensions ui 2023-03-19 13:31:21 +00:00
Maya
81c9d130f2 Fix global 2023-03-19 13:25:49 +00:00
Maya
099d7a844b Add setup method to extensions 2023-03-19 13:22:24 +00:00
oobabooga
c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga
7c945cfe8e Don't include PeftModel every time 2023-03-18 10:55:24 -03:00
oobabooga
e26763a510 Minor changes 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk
7994b580d5 clean up duplicated code 2023-03-18 02:27:26 +01:00
Wojtek Kowaluk
30939e2aee add mps support on apple silicon 2023-03-18 00:56:23 +01:00
oobabooga
9256e937d6 Add some LoRA params 2023-03-17 17:45:28 -03:00
oobabooga
9ed2c4501c Use markdown in the "HTML" tab 2023-03-17 16:06:11 -03:00
oobabooga
f0b26451b4 Add a comment 2023-03-17 13:07:17 -03:00
oobabooga
3bda907727
Merge pull request #366 from oobabooga/lora
Add LoRA support
2023-03-17 11:48:48 -03:00