oobabooga
a2127239de
Fix a bug
2023-04-16 01:41:37 -03:00
oobabooga
9d3c6d2dc3
Fix a bug
2023-04-16 01:40:47 -03:00
Mikel Bober-Irizar
16a3a5b039
Merge pull request from GHSA-hv5m-3rp9-xcpf
...
* Remove eval of API input
* Remove unnecessary eval/exec for security
* Use ast.literal_eval
* Use ast.literal_eval
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 01:36:50 -03:00
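
The advisory merge above swaps eval()/exec() on API input for ast.literal_eval. A minimal sketch of the pattern, assuming a hypothetical parse_body helper (not the webui's actual API code):

    import ast

    def parse_body(raw: str) -> dict:
        # ast.literal_eval only accepts Python literals (strings, numbers,
        # tuples, lists, dicts, sets, booleans, None), so unlike eval() it
        # cannot execute attacker-supplied expressions.
        params = ast.literal_eval(raw)
        if not isinstance(params, dict):
            raise ValueError('expected a dict of generation parameters')
        return params
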
oobabooga
d2ea925fa5
Bump llama-cpp-python to use LlamaCache
2023-04-16 00:53:40 -03:00
oobabooga
ac189011cb
Add "Save current settings for this model" button
2023-04-15 12:54:02 -03:00
oobabooga
abef355ed0
Remove deprecated flag
2023-04-15 01:21:19 -03:00
oobabooga
c3aa79118e
Minor generate_chat_prompt simplification
2023-04-14 23:02:08 -03:00
oobabooga
3a337cfded
Use argparse defaults
2023-04-14 15:35:06 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support ( #1103 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
oobabooga
1901d238e1
Minor change to API code
2023-04-14 12:11:47 -03:00
oobabooga
8e31f2bad4
Automatically set wbits/groupsize/instruct based on model name ( #1167 )
2023-04-14 11:07:28 -03:00
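
Commit 8e31f2bad4 guesses quantization settings from the model's folder name. A hedged sketch of that idea; the regexes and the instruct heuristic here are illustrative, not the commit's exact rules:

    import re

    def infer_settings(model_name: str) -> dict:
        # e.g. 'llama-13b-4bit-128g' -> {'wbits': 4, 'groupsize': 128}
        name = model_name.lower()
        settings = {}
        if m := re.search(r'(\d+)bit', name):
            settings['wbits'] = int(m.group(1))
        if m := re.search(r'(\d+)g\b', name):
            settings['groupsize'] = int(m.group(1))
        if 'vicuna' in name or 'alpaca' in name:
            settings['mode'] = 'instruct'
        return settings
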
v0xie
9d66957207
Add --listen-host launch option ( #1122 )
2023-04-13 21:35:08 -03:00
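
--listen-host presumably lets the server bind to a chosen interface instead of the default. A minimal argparse sketch; only the flag names come from the project, the help text and defaults are assumed:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--listen', action='store_true',
                        help='Accept connections from outside localhost.')
    parser.add_argument('--listen-host', type=str, default=None,
                        help='Hostname or IP to bind to, e.g. 0.0.0.0.')
    args = parser.parse_args()
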
oobabooga
a75e02de4d
Simplify GPTQ_loader.py
2023-04-13 12:13:07 -03:00
oobabooga
ca293bb713
Show a warning if two quantized models are found
2023-04-13 12:04:27 -03:00
oobabooga
8b482b4127
Merge #1073 from sgsdxzy/triton
...
* Multi-GPU support for triton
* Better quantized model filename detection
2023-04-13 11:31:21 -03:00
oobabooga
fde6d06167
Prioritize names with the groupsize in them
2023-04-13 11:27:03 -03:00
oobabooga
f2bf1a2c9e
Add some comments, remove obsolete code
2023-04-13 11:17:32 -03:00
Light
da74cd7c44
Generalized weight search path.
2023-04-13 21:43:32 +08:00
oobabooga
04866dc4fc
Add a warning for when no model is loaded
2023-04-13 10:35:08 -03:00
Light
cf58058c33
Change warmup_autotune to a negative switch.
2023-04-13 20:59:49 +08:00
Light
15d5a043f2
Merge remote-tracking branch 'origin/main' into triton
2023-04-13 19:38:51 +08:00
oobabooga
7dfbe54f42
Add --model-menu option
2023-04-12 21:24:26 -03:00
oobabooga
388038fb8e
Update settings-template.json
2023-04-12 18:30:43 -03:00
oobabooga
10e939c9b4
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-04-12 17:21:59 -03:00
oobabooga
1566d8e344
Add model settings to the Models tab
2023-04-12 17:20:18 -03:00
Light
a405064ceb
Better dispatch.
2023-04-13 01:48:17 +08:00
Light
f3591ccfa1
Keep minimal change.
2023-04-12 23:26:06 +08:00
Lukas
5ad92c940e
LoRA training fixes ( #970 )
...
Fix wrong input format being picked
Fix crash when an entry in the dataset has an attribute of value None
2023-04-12 11:38:01 -03:00
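
The second fix in #970 guards against dataset rows whose fields are None. A hedged illustration of that kind of guard; format_row and the template mechanism are hypothetical:

    def format_row(row: dict, template: str) -> str:
        # Replace None field values with empty strings so a sparse
        # dataset entry cannot crash prompt formatting.
        safe = {k: ('' if v is None else v) for k, v in row.items()}
        return template.format(**safe)
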
oobabooga
80f4eabb2a
Fix send_pictures extension
2023-04-12 10:27:06 -03:00
oobabooga
8265d45db8
Add send dummy message/reply buttons
...
Useful for starting a new reply.
2023-04-11 22:21:41 -03:00
oobabooga
37d52c96bc
Fix Continue in chat mode
2023-04-11 21:46:17 -03:00
oobabooga
cacbcda208
Two new options: truncation length and ban eos token
2023-04-11 18:46:06 -03:00
catalpaaa
78bbc66fc4
allow custom stopping strings in all modes ( #903 )
2023-04-11 12:30:06 -03:00
oobabooga
0f212093a3
Refactor the UI
...
A single dictionary called 'interface_state' is now passed as input to all functions. The values are updated only when necessary.
The goal is to make it easier to add new elements to the UI.
2023-04-11 11:46:30 -03:00
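
The core of the refactor, per the commit body, is a single state dict that flows through every handler. A minimal sketch of the pattern, with illustrative keys:

    # Collect all UI element values into one dict, pass it to every
    # function, and write back only the keys a handler changed.
    interface_state = {
        'max_new_tokens': 200,
        'temperature': 0.7,
        'mode': 'chat',
    }

    def apply_preset(state: dict, preset: dict) -> dict:
        state = dict(state)   # copy; callers keep their old snapshot
        state.update(preset)  # update values only when necessary
        return state

    interface_state = apply_preset(interface_state, {'temperature': 1.0})
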
IggoOnCode
09d8119e3c
Add CPU LoRA training ( #938 )
...
(It's very slow)
2023-04-10 17:29:00 -03:00
Alex "mcmonkey" Goodwin
0caf718a21
add on-page documentation to parameters ( #1008 )
2023-04-10 17:19:12 -03:00
oobabooga
bd04ff27ad
Make the bos token optional
2023-04-10 16:44:22 -03:00
oobabooga
0f1627eff1
Don't treat Instruct mode histories as regular histories
...
* They must now be saved/loaded manually
* Also improved browser caching of profile pictures
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
oobabooga
769aa900ea
Print the used seed
2023-04-10 10:53:31 -03:00
Alex "mcmonkey" Goodwin
30befe492a
fix random seeds to actually randomize
...
Without this fix, manual seeds get locked in.
2023-04-10 06:29:10 -07:00
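
The point of the fix is that a "random" seed must be redrawn on every generation instead of reusing the last manual value. A sketch assuming the webui's -1-means-random convention; the function body is illustrative:

    import random

    import torch

    def set_manual_seed(seed: int) -> int:
        # -1 means random: draw a fresh seed each call so an earlier
        # manual seed does not stay locked in.
        if seed == -1:
            seed = random.randint(1, 2**31)
        torch.manual_seed(seed)
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)
        return seed  # lets callers print the used seed (cf. 769aa900ea)
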
oobabooga
1911504f82
Minor bug fix
2023-04-09 23:45:41 -03:00
oobabooga
dba2000d2b
Do things that I am not proud of
2023-04-09 23:40:49 -03:00
oobabooga
65552d2157
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-04-09 23:19:53 -03:00
oobabooga
8c6155251a
More robust 4-bit model loading
2023-04-09 23:19:28 -03:00
MarkovInequality
992663fa20
Added xformers support to Llama ( #950 )
2023-04-09 23:08:40 -03:00
Brian O'Connor
625d81f495
Update character log logic ( #977 )
...
* When logs are cleared, save the cleared log over the old log files
* Generate a log file when a character is loaded the first time
2023-04-09 22:20:21 -03:00
oobabooga
a3085dba07
Fix LlamaTokenizer eos_token (attempt)
2023-04-09 21:19:39 -03:00
oobabooga
120f5662cf
Better handle spaces for Continue
2023-04-09 20:37:31 -03:00
oobabooga
b27d757fd1
Minor change
2023-04-09 20:06:20 -03:00
oobabooga
d29f4624e9
Add a Continue button to chat mode
2023-04-09 20:04:16 -03:00
oobabooga
cc693a7546
Remove obsolete code
2023-04-09 00:51:07 -03:00
oobabooga
cb169d0834
Minor formatting changes
2023-04-08 17:34:07 -03:00
oobabooga
0b458bf82d
Simplify a function
2023-04-07 21:37:41 -03:00
Φφ
ffd102e5c0
SD Api Pics extension, v.1.1 ( #596 )
2023-04-07 21:36:04 -03:00
oobabooga
1dc464dcb0
Sort imports
2023-04-07 14:42:03 -03:00
oobabooga
42ea6a3fc0
Change the timing for setup() calls
2023-04-07 12:20:57 -03:00
oobabooga
768354239b
Change training file encoding
2023-04-07 11:15:52 -03:00
oobabooga
6762e62a40
Simplifications
2023-04-07 11:14:32 -03:00
oobabooga
a453d4e9c4
Reorganize some chat functions
2023-04-07 11:07:03 -03:00
Maya
8fa182cfa7
Fix regeneration of first message in instruct mode ( #881 )
2023-04-07 10:45:42 -03:00
oobabooga
46c4654226
More PEP8 stuff
2023-04-07 00:52:02 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability ( #862 )
2023-04-07 00:15:45 -03:00
OWKenobi
310bf46a94
Instruction Character Vicuna, Instruction Mode Bugfix ( #838 )
2023-04-06 17:40:44 -03:00
oobabooga
113f94b61e
Bump transformers (16-bit llama must be reconverted/redownloaded)
2023-04-06 16:04:03 -03:00
oobabooga
03cb44fc8c
Add new llama.cpp library (2048-token context, temperature, etc. now work)
2023-04-06 13:12:14 -03:00
EyeDeck
39f3fec913
Broaden GPTQ-for-LLaMA branch support ( #820 )
2023-04-06 12:16:48 -03:00
Alex "mcmonkey" Goodwin
0c7ef26981
Lora trainer improvements ( #763 )
2023-04-06 02:04:11 -03:00
oobabooga
e94ab5dac1
Minor fixes
2023-04-06 01:43:10 -03:00
oobabooga
3f3e42e26c
Refactor several function calls and the API
2023-04-06 01:22:15 -03:00
SDS
378d21e80c
Add LLaMA-Precise preset ( #767 )
2023-04-05 18:52:36 -03:00
Forkoz
8203ce0cac
Stop character pic from being cached when changing characters or clearing. ( #798 )
...
Tested on both Firefox and Chromium
2023-04-05 14:25:01 -03:00
oobabooga
7f66421369
Fix loading characters
2023-04-05 14:22:32 -03:00
oobabooga
e722c240af
Add Instruct mode
2023-04-05 13:54:50 -03:00
oobabooga
3d6cb5ed63
Minor rewrite
2023-04-05 01:21:40 -03:00
oobabooga
f3a2e0b8a9
Disable pre_layer when the model type is not llama
2023-04-05 01:19:26 -03:00
catalpaaa
4ab679480e
allow quantized model to be loaded from model dir ( #760 )
2023-04-04 23:19:38 -03:00
oobabooga
ae1fe45bc0
One more cache reset
2023-04-04 23:15:57 -03:00
oobabooga
8ef89730a5
Try to better handle browser image cache
2023-04-04 23:09:28 -03:00
oobabooga
cc6c7a37f3
Add make_thumbnail function
2023-04-04 23:03:58 -03:00
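
make_thumbnail and the neighboring cache commits aim at giving every cached character image a fixed footprint. A rough Pillow sketch; the 350x470 box and the in-place thumbnail call are assumptions, not the commit's exact code:

    from PIL import Image

    def make_thumbnail(image: Image.Image) -> Image.Image:
        # Shrink in place, preserving aspect ratio, so no cached
        # profile picture exceeds a fixed bounding box.
        if image.width > 350 or image.height > 470:
            image.thumbnail((350, 470))
        return image
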
oobabooga
80dfba05f3
Better crop/resize cached images
2023-04-04 22:52:15 -03:00
oobabooga
65d8a24a6d
Show profile pictures in the Character tab
2023-04-04 22:28:49 -03:00
OWKenobi
ee4547cd34
Detect "vicuna" as llama model type ( #772 )
2023-04-04 13:23:27 -03:00
oobabooga
b24147c7ca
Document --pre_layer
2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270
Update settings template
2023-04-03 14:59:26 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly ( #743 )
...
* Add greetings field
* add greeting field and make it interactive
* Minor changes
* Fix a bug
* Simplify clear_chat_log
* Change a label
* Minor change
* Simplifications
* Simplification
* Simplify loading the default character history
* Fix regression
---------
Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
Alex "mcmonkey" Goodwin
8b1f20aa04
Fix some old JSON characters not loading ( #740 )
2023-04-03 10:49:28 -03:00
oobabooga
8b442305ac
Rename another variable
2023-04-03 01:15:20 -03:00
oobabooga
08448fb637
Rename a variable
2023-04-03 01:02:11 -03:00
oobabooga
2a267011dc
Use Path.stem for simplicity
2023-04-03 00:56:14 -03:00
Alex "mcmonkey" Goodwin
ea97303509
Apply dialogue format in all character fields not just example dialogue ( #650 )
2023-04-02 21:54:29 -03:00
TheTerrasque
2157bb4319
New yaml character format ( #337 from TheTerrasque/feature/yaml-characters )
...
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
oobabooga
5f3f3faa96
Better handle CUDA out of memory errors in chat mode
2023-04-02 17:48:00 -03:00
oobabooga
b0890a7925
Add shared.is_chat() function
2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py
2023-04-01 13:56:47 -03:00
oobabooga
fcda3f8776
Add also_return_rows to generate_chat_prompt
2023-04-01 01:12:13 -03:00
oobabooga
2c52310642
Add --threads flag for llama.cpp
2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713
Fix streaming
2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd
Add repetition_penalty
2023-03-31 19:01:34 -03:00
oobabooga
2259143fec
Fix llama.cpp with --no-stream
2023-03-31 18:43:45 -03:00
oobabooga
3a47a602a3
Detect ggml*.bin files automatically
2023-03-31 17:18:21 -03:00
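
That last commit scans a model folder for llama.cpp weights by filename. A minimal sketch of the detection; the directory layout is assumed:

    from pathlib import Path

    def find_ggml_file(model_dir: Path):
        # llama.cpp weights are conventionally named ggml*.bin; return
        # the first match, or None when the folder has no ggml weights.
        matches = sorted(model_dir.glob('ggml*.bin'))
        return matches[0] if matches else None
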