Commit Graph

1413 Commits

Author SHA1 Message Date
Alex "mcmonkey" Goodwin
6368dad7db Fix camelCase to snake_case to match repo format standard 2023-03-27 18:17:42 -07:00
oobabooga
2f0571bfa4 Small style changes 2023-03-27 21:24:39 -03:00
oobabooga
c2cad30772 Merge branch 'main' into mcmonkey4eva-add-train-lora-tab 2023-03-27 21:05:44 -03:00
Alex "mcmonkey" Goodwin
9ced75746d add total time estimate 2023-03-27 10:57:27 -07:00
Alex "mcmonkey" Goodwin
16ea4fc36d interrupt button 2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin
8fc723fc95 initial progress tracker in UI 2023-03-27 10:25:08 -07:00
oobabooga
48a6c9513e
Merge pull request #572 from clusterfudge/issues/571
Potential fix for issues/571
2023-03-27 14:06:38 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850 add some outputs to indicate progress updates (sorta)
Actual progressbar still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900 Change Stop button behavior 2023-03-27 13:23:59 -03:00
Alex "mcmonkey" Goodwin
d911c22af9 use shared rows to make the LoRA Trainer interface a bit more compact / clean 2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin
e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga
3dc61284d5 Handle unloading LoRA from dropdown menu icon 2023-03-27 00:04:43 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Sean Fitzgerald
0bac80d9eb Potential fix for issues/571 2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1 make 'model' variables less ambiguous 2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e document options better 2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0 fix error from prepare call running twice in a row 2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c automatically strip empty data entries (for better alpaca dataset compat) 2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
oobabooga
8c8e8b4450
Fix the early stopping callback #559 2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c
Merge branch 'oobabooga:main' into lora-and-model-dir 2023-03-25 01:28:33 -07:00
oobabooga
25be9698c7
Fix LoRA on mps 2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
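As a rough illustration of the auth-file format described in the commit above, here is a minimal, hypothetical sketch (the file name and helper are illustrative, not the commit's actual code) that parses "u:pw,u:pw,..." into the (username, password) pairs that gradio's launch() accepts:

```
from pathlib import Path

def read_auth_pairs(auth_path):
    # Parse "u:pw,u:pw,..." into [(user, password), ...] tuples.
    text = Path(auth_path).read_text(encoding="utf-8").strip()
    pairs = []
    for entry in text.split(","):
        entry = entry.strip()
        if not entry:
            continue  # tolerate a trailing comma
        user, _, password = entry.partition(":")
        pairs.append((user, password))
    return pairs

# gradio accepts a list of (username, password) tuples:
# demo.launch(auth=read_auth_pairs("gradio_auth.txt"))
```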
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident) 2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident) 2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py 2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py 2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
This allows the bot to send messages by clicking on Generate with empty inputs.
2023-03-24 17:22:22 -03:00
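A hedged sketch of the behavior described in the commit above, with illustrative names only: empty user turns are skipped when the prompt is assembled, so clicking Generate with an empty textbox simply asks the bot to continue.

```
def build_prompt(history, user_name="You", bot_name="Bot"):
    rows = []
    for user_msg, bot_msg in history:
        # Skip empty user turns: Generate with an empty input then
        # just prompts the bot to keep talking on its own.
        if user_msg and user_msg.strip():
            rows.append(f"{user_name}: {user_msg}")
        if bot_msg:
            rows.append(f"{bot_name}: {bot_msg}")
    rows.append(f"{bot_name}:")
    return "\n".join(rows)

print(build_prompt([["Hi!", "Hello."], ["", "Still here."]]))
```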
oobabooga
6e1b16c2aa
Update html_generator.py 2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py 2023-03-24 17:17:29 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload 2023-03-24 16:54:41 -03:00
oobabooga
8fad84abc2
Update extensions.py 2023-03-24 16:51:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts 2023-03-24 02:03:30 -03:00
EyeDeck
dcfd866402 Allow loading of .safetensors through GPTQ-for-LLaMa 2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import 2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import 2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py 2023-03-23 22:12:24 -03:00
oobabooga
b0abb327d8
Update LoRA.py 2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode 2023-03-23 21:38:20 -03:00
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt) 2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names (#518)
* Update shared.py

* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode 2023-03-23 01:05:13 -03:00
oobabooga
eac27f4f55
Make LoRAs work in 16-bit mode 2023-03-23 00:55:33 -03:00
oobabooga
bfa81e105e
Fix FlexGen streaming 2023-03-23 00:22:14 -03:00
oobabooga
de6a09dc7f
Properly separate the original prompt from the reply 2023-03-23 00:12:40 -03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
oobabooga
45b7e53565
Only catch proper Exceptions in the text generation function 2023-03-20 20:36:02 -03:00
oobabooga
db4219a340
Update comments 2023-03-20 16:40:08 -03:00
oobabooga
7618f3fe8c
Add -gptq-preload for 4-bit offloading (#460)
This now works on a 4GB card:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy
e96687b1d6 Do not send empty user input as part of the prompt.
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga
9a3bed50c3
Attempt at fixing 4-bit with CPU offload 2023-03-20 15:11:56 -03:00
Vladimir Belitskiy
ca47e016b4
Do not display empty user messages in chat mode.
There doesn't seem to be much value to them - they just take up space while also making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga
75a7a84ef2
Exception handling (#454)
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga
ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
oobabooga
a78b6508fc Make custom LoRAs work by default #385 2023-03-19 12:11:35 -03:00
Maya
acdbd6b708 Check if app should display extensions ui 2023-03-19 13:31:21 +00:00
Maya
81c9d130f2 Fix global 2023-03-19 13:25:49 +00:00
Maya
099d7a844b Add setup method to extensions 2023-03-19 13:22:24 +00:00
oobabooga
c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga
7c945cfe8e Don't include PeftModel every time 2023-03-18 10:55:24 -03:00
oobabooga
e26763a510 Minor changes 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk
7994b580d5 clean up duplicated code 2023-03-18 02:27:26 +01:00
Wojtek Kowaluk
30939e2aee add mps support on apple silicon 2023-03-18 00:56:23 +01:00
oobabooga
9256e937d6 Add some LoRA params 2023-03-17 17:45:28 -03:00
oobabooga
9ed2c4501c Use markdown in the "HTML" tab 2023-03-17 16:06:11 -03:00
oobabooga
f0b26451b4 Add a comment 2023-03-17 13:07:17 -03:00
oobabooga
3bda907727
Merge pull request #366 from oobabooga/lora
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga
614dad0075 Remove unused import 2023-03-17 11:43:11 -03:00
oobabooga
a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
askmyteapot
53b6a66beb
Update GPTQ_Loader.py
Correcting decoder layer for renamed class.
2023-03-17 18:34:13 +10:00
oobabooga
0cecfc684c Add files 2023-03-16 21:35:53 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga
ee164d1821 Don't split the layers in 8-bit mode by default 2023-03-16 18:22:16 -03:00
oobabooga
e085cb4333 Small changes 2023-03-16 13:34:23 -03:00
awoo
83cb20aad8 Add support for --gpu-memory with --load-in-8bit 2023-03-16 18:42:53 +03:00
oobabooga
1c378965e1 Remove unused imports 2023-03-16 10:18:34 -03:00
oobabooga
a577fb1077 Keep GALACTICA special tokens (#300) 2023-03-16 00:46:59 -03:00
oobabooga
4d64a57092 Add Interface mode tab 2023-03-15 23:29:56 -03:00
oobabooga
66256ac1dd Make the "no GPU has been detected" message more descriptive 2023-03-15 19:31:27 -03:00
oobabooga
c1959c26ee Show/hide the extensions block using javascript 2023-03-15 16:35:28 -03:00
oobabooga
348596f634 Fix broken extensions 2023-03-15 15:11:16 -03:00
oobabooga
c5f14fb9b8 Optimize the HTML generation speed 2023-03-15 14:19:28 -03:00
oobabooga
bf812c4893 Minor fix 2023-03-15 14:05:35 -03:00
oobabooga
05ee323ce5 Rename a file 2023-03-15 13:26:32 -03:00
oobabooga
d30a14087f Further reorganize the UI 2023-03-15 13:24:54 -03:00
oobabooga
cf2da86352 Prevent *Is typing* from disappearing instantly while streaming 2023-03-15 12:51:13 -03:00
oobabooga
ec972b85d1 Move all css/js into separate files 2023-03-15 12:35:11 -03:00
oobabooga
693b53d957 Merge branch 'main' into HideLord-main 2023-03-15 12:08:56 -03:00
oobabooga
1413931705 Add a header bar and redesign the interface (#293) 2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6 Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
oobabooga
afc5339510
Remove "eval" statements from text generation functions 2023-03-14 16:04:17 -03:00
oobabooga
265ba384b7 Rename a file, add deprecation warning for --load-in-4bit 2023-03-14 07:56:31 -03:00
oobabooga
3da73e409f Merge branch 'main' into Zerogoki00-opt4-bit 2023-03-14 07:50:36 -03:00
oobabooga
3fb8196e16 Implement "*Is recording a voice message...*" for TTS #303 2023-03-13 22:28:00 -03:00
oobabooga
518e5c4244 Some minor fixes to the GPTQ loader 2023-03-13 16:45:08 -03:00
Ayanami Rei
8778b756e6 use updated load_quantized 2023-03-13 22:11:40 +03:00
Ayanami Rei
a6a6522b6a determine model type from model name 2023-03-13 22:11:32 +03:00
Ayanami Rei
b6c5c57f2e remove default value from argument 2023-03-13 22:11:08 +03:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main 2023-03-13 19:50:08 +02:00
Ayanami Rei
e1c952c41c make argument case-insensitive 2023-03-13 20:22:38 +03:00
Ayanami Rei
3c9afd5ca3 rename method 2023-03-13 20:14:40 +03:00
Ayanami Rei
1b99ed61bc add argument --gptq-model-type and remove duplicate arguments 2023-03-13 20:01:34 +03:00
Ayanami Rei
edbc61139f use new quant loader 2023-03-13 20:00:38 +03:00
Ayanami Rei
345b6dee8c refactor quant models loader and add support of OPT 2023-03-13 19:59:57 +03:00
oobabooga
66b6971b61 Update README 2023-03-13 12:44:18 -03:00
oobabooga
ddea518e0f Document --auto-launch 2023-03-13 12:43:33 -03:00
oobabooga
372363bc3d Fix GPTQ load_quant call on Windows 2023-03-13 12:07:02 -03:00
oobabooga
0c224cf4f4 Fix GALACTICA (#285) 2023-03-13 10:32:28 -03:00
oobabooga
2c4699a7e9 Change a comment 2023-03-13 00:20:02 -03:00
oobabooga
0a7acb3bd9 Remove redundant comments 2023-03-13 00:12:21 -03:00
oobabooga
77294b27dd Use str(Path) instead of os.path.abspath(Path) 2023-03-13 00:08:01 -03:00
oobabooga
b9e0712b92 Fix Open Assistant 2023-03-12 23:58:25 -03:00
oobabooga
1ddcd4d0ba Clean up silero_tts
This should only be used with --no-stream.

The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
HideLord
683556f411 Adding markdown support and slight refactoring. 2023-03-12 21:34:09 +02:00
oobabooga
cebe8b390d Remove useless "substring_found" variable 2023-03-12 15:50:38 -03:00
oobabooga
4bcd675ccd Add *Is typing...* to regenerate as well 2023-03-12 15:23:33 -03:00
oobabooga
c7aa51faa6 Use a list of eos_tokens instead of just a number
This might be the cause of LLaMA ramblings that some people have experienced.
2023-03-12 14:54:58 -03:00
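A sketch of the general technique using transformers' StoppingCriteria API; the class name and token ids are illustrative, not the repo's actual implementation:

```
import transformers

class EosListCriteria(transformers.StoppingCriteria):
    # Stop when the last generated token is any one of several EOS ids.
    def __init__(self, eos_token_ids):
        self.eos_token_ids = set(eos_token_ids)

    def __call__(self, input_ids, scores, **kwargs):
        # input_ids has shape (batch, seq_len); check the newest token.
        return input_ids[0, -1].item() in self.eos_token_ids

# Usage sketch:
# stopping = transformers.StoppingCriteriaList([EosListCriteria([0, 2])])
# model.generate(input_ids, stopping_criteria=stopping)
```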
oobabooga
d8bea766d7
Merge pull request #192 from xanthousm/main
Add text generation stream status to shared module, use for better TTS with auto-play
2023-03-12 13:40:16 -03:00
oobabooga
fda376d9c3 Use os.path.abspath() instead of str() 2023-03-12 12:41:04 -03:00
HideLord
8403152257 Fixing compatibility with GPTQ repo commit 2f667f7da051967566a5fb0546f8614bcd3a1ccd. Expects string and breaks on 2023-03-12 17:28:15 +02:00
oobabooga
f3b00dd165
Merge pull request #224 from ItsLogic/llama-bits
Allow users to load 2, 3 and 4 bit llama models
2023-03-12 11:23:50 -03:00
oobabooga
65dda28c9d Rename --llama-bits to --gptq-bits 2023-03-12 11:19:07 -03:00
oobabooga
fed3617f07 Move LLaMA 4-bit into a separate file 2023-03-12 11:12:34 -03:00
oobabooga
0ac562bdba Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 2023-03-12 10:46:16 -03:00
oobabooga
78901d522b Remove unused imports 2023-03-12 08:59:05 -03:00
Xan
b3e10e47c0 Fix merge conflict in text_generation
- Need to update `shared.still_streaming = False` before the final `yield formatted_outputs`; shifted the position of some yields accordingly (see the sketch after this entry).
2023-03-12 18:56:35 +11:00
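A toy sketch of the ordering fix described above: the flag must flip before the final yield, otherwise whoever consumes the last chunk still sees generation as in progress. `shared` here is a stand-in for the repo's shared module:

```
import types

shared = types.SimpleNamespace(still_streaming=False)  # stand-in module

def stream_reply(chunks):
    shared.still_streaming = True
    for i, chunk in enumerate(chunks):
        if i == len(chunks) - 1:
            # Flip the flag *before* the final yield so the consumer of
            # the last chunk already sees generation as finished.
            shared.still_streaming = False
        yield chunk

for text in stream_reply(["Hel", "Hello", "Hello, world."]):
    print(shared.still_streaming, text)
```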
oobabooga
ad14f0e499 Fix regenerate (provisory way) 2023-03-12 03:42:29 -03:00
oobabooga
6e12068ba2
Merge pull request #258 from lxe/lxe/utf8
Load and save character files and chat history in UTF-8
2023-03-12 03:28:49 -03:00
oobabooga
e2da6b9685 Fix You You You appearing in chat mode 2023-03-12 03:25:56 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
Aleksey Smolenchuk
3f7c3d6559
No need to set encoding on binary read 2023-03-11 22:10:57 -08:00
oobabooga
341e135036 Various fixes in chat mode 2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk
3baf5fc700
Load and save chat history in utf-8 2023-03-11 21:40:01 -08:00
oobabooga
b0e8cb8c88 Various fixes in chat mode 2023-03-12 02:31:45 -03:00
unknown
433f6350bc Load and save character files in UTF-8 2023-03-11 21:23:05 -08:00
oobabooga
0bd5430988 Use 'with' statement to better handle streaming memory 2023-03-12 02:04:28 -03:00
oobabooga
37f0166b2d Fix memory leak in new streaming (second attempt) 2023-03-11 23:14:49 -03:00
oobabooga
92fe947721 Merge branch 'main' into new-streaming 2023-03-11 19:59:45 -03:00
oobabooga
2743dd736a Add *Is typing...* to impersonate as well 2023-03-11 10:50:18 -03:00
Xan
96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open the web UI in the default browser when ready (see the sketch after this entry).
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
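For the --auto-launch half of the commit above, a minimal sketch of how such a flag can be wired up: gradio's launch() accepts an inbrowser argument that opens the default browser once the server is ready (the Blocks content is a placeholder, not the webui's actual layout):

```
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("text-generation-webui")  # placeholder UI

auto_launch = True  # stands in for the parsed --auto-launch flag
demo.launch(inbrowser=auto_launch)
```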
Xan
33df4bd91f Merge remote-tracking branch 'upstream/main' 2023-03-11 22:40:47 +11:00
draff
28fd4fc970 Change wording to be consistent with other args 2023-03-10 23:34:13 +00:00
draff
001e638b47 Make it actually work 2023-03-10 23:28:19 +00:00
draff
804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2023-03-10 23:21:01 +00:00
ItsLogic
9ba8156a70
remove unnecessary Path() 2023-03-10 22:33:58 +00:00
draff
e6c631aea4 Replace --load-in-4bit with --llama-bits
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2- and 3-bit models as well. This commit also fixes a loading issue with .pt files that are not in the root of the models folder.
2023-03-10 21:36:45 +00:00
oobabooga
026d60bd34 Remove default preset that didn't do anything 2023-03-10 14:01:02 -03:00
oobabooga
e9dbdafb14
Merge branch 'main' into pt-path-changes 2023-03-10 11:03:42 -03:00
oobabooga
706a03b2cb Minor changes 2023-03-10 11:02:25 -03:00
oobabooga
de7dd8b6aa Add comments 2023-03-10 10:54:08 -03:00
oobabooga
e461c0b7a0 Move the import to the top 2023-03-10 10:51:12 -03:00
deepdiffuser
9fbd60bf22 add no_split_module_classes to prevent tensor split error 2023-03-10 05:30:47 -08:00
deepdiffuser
ab47044459 add multi-gpu support for 4bit gptq LLaMA 2023-03-10 04:52:45 -08:00
rohvani
2ac2913747 fix reference issue 2023-03-09 20:13:23 -08:00
rohvani
826e297b0e add llama-65b-4bit support & multiple pt paths 2023-03-09 18:31:32 -08:00
oobabooga
9849aac0f1 Don't show .pt models in the list 2023-03-09 21:54:50 -03:00
oobabooga
74102d5ee4 Insert to the path instead of appending 2023-03-09 20:51:22 -03:00
oobabooga
2965aa1625 Check if the .pt file exists 2023-03-09 20:48:51 -03:00
oobabooga
828a524f9a Add LLaMA 4-bit support 2023-03-09 15:50:26 -03:00
oobabooga
59b5f7a4b7 Improve usage of stopping_criteria 2023-03-08 12:13:40 -03:00
oobabooga
add9330e5e Bug fixes 2023-03-08 11:26:29 -03:00
Xan
5648a41a27 Merge branch 'main' of https://github.com/xanthousm/text-generation-webui 2023-03-08 22:08:54 +11:00
Xan
ad6b699503 Better TTS with autoplay
- Adds "still_streaming" to the shared module so extensions can know whether generation is complete (see the sketch after this entry)
- Changed the TTS extension with new options:
   - Show text under the audio widget
   - Automatically play the audio once text generation finishes
   - Manage the generated wav files (only keep files for finished generations, with an optional max file limit)
   - [WIP] ability to change voice pitch and speed
- Added 'tensorboard' to requirements, since Python raised "tensorboard not found" errors after a fresh installation.
2023-03-08 22:02:17 +11:00
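Combining this with the still_streaming flag, an extension's output_modifier could gate synthesis and autoplay on generation being finished. This is a sketch only; shared and synthesize_speech are stand-ins, not the extension's actual code:

```
import types

shared = types.SimpleNamespace(still_streaming=False)  # stand-in module

def synthesize_speech(text):
    # Hypothetical stand-in for the real TTS call; returns a wav path.
    return "extensions/silero_tts/outputs/reply.wav"

def output_modifier(string):
    # While tokens are still streaming in, pass the text through untouched.
    if shared.still_streaming:
        return string
    # Generation is done: synthesize once and attach an autoplaying widget.
    wav_path = synthesize_speech(string)
    return f'<audio src="file/{wav_path}" controls autoplay></audio>\n\n{string}'
```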
oobabooga
33fb6aed74 Minor bug fix 2023-03-08 03:08:16 -03:00
oobabooga
ad2970374a Readability improvements 2023-03-08 03:00:06 -03:00
oobabooga
72d539dbff Better separate the FlexGen case 2023-03-08 02:54:47 -03:00
oobabooga
0e16c0bacb Remove redeclaration of a function 2023-03-08 02:50:49 -03:00
oobabooga
ab50f80542 New text streaming method (much faster) 2023-03-08 02:46:35 -03:00
oobabooga
8e89bc596b Fix encode() for RWKV 2023-03-07 23:15:46 -03:00
oobabooga
19a34941ed Add proper streaming to RWKV 2023-03-07 18:17:56 -03:00
oobabooga
8660227e1b Add top_k to RWKV 2023-03-07 17:24:28 -03:00
oobabooga
153dfeb4dd Add --rwkv-cuda-on parameter, bump rwkv version 2023-03-06 20:12:54 -03:00
oobabooga
6904a507c6 Change some parameters 2023-03-06 16:29:43 -03:00
oobabooga
20bd645f6a Fix bug in multigpu setups (attempt 3) 2023-03-06 15:58:18 -03:00
oobabooga
09a7c36e1b Minor improvement while running custom models 2023-03-06 15:36:35 -03:00
oobabooga
24c4c20391 Fix bug in multigpu setups (attempt #2) 2023-03-06 15:23:29 -03:00
oobabooga
d88b7836c6 Fix bug in multigpu setups 2023-03-06 14:58:30 -03:00
oobabooga
5bed607b77 Increase repetition frequency/penalty for RWKV 2023-03-06 14:25:48 -03:00
oobabooga
bf56b6c1fb Load settings.json without the need for --settings settings.json
This is for setting UI defaults.
2023-03-06 10:57:45 -03:00
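A sketch of such an implicit fallback; args and the settings dict are illustrative stand-ins for the server's real ones:

```
import argparse
import json
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("--settings", type=str, default=None)
args = parser.parse_args()

settings = {"dark_theme": False}  # illustrative defaults

settings_file = None
if args.settings is not None and Path(args.settings).exists():
    settings_file = Path(args.settings)
elif Path("settings.json").exists():
    settings_file = Path("settings.json")  # implicit fallback

if settings_file is not None:
    # User-provided values override the in-code defaults.
    settings.update(json.loads(settings_file.read_text(encoding="utf-8")))
```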
oobabooga
e91f4bc25a Add RWKV tokenizer 2023-03-06 08:45:49 -03:00
oobabooga
c855b828fe Better handle <USER> 2023-03-05 17:01:47 -03:00
oobabooga
2af66a4d4c Fix <USER> in pygmalion replies 2023-03-05 16:08:50 -03:00
oobabooga
a54b91af77 Improve readability 2023-03-05 10:21:15 -03:00
oobabooga
8e706df20e Fix a memory leak when text streaming is on 2023-03-05 10:12:43 -03:00
oobabooga
c33715ad5b Move towards HF LLaMA implementation 2023-03-05 01:20:31 -03:00
oobabooga
bd8aac8fa4 Add LLaMA 8-bit support 2023-03-04 13:28:42 -03:00
oobabooga
c93f1fa99b Count the tokens more conservatively 2023-03-04 03:10:21 -03:00
oobabooga
ed8b35efd2 Add --pin-weight parameter for FlexGen 2023-03-04 01:04:02 -03:00
oobabooga
05e703b4a4 Print the performance information more reliably 2023-03-03 21:24:32 -03:00
oobabooga
5a79863df3 Increase the sequence length, decrease batch size
I have no idea what I am doing
2023-03-03 15:54:13 -03:00
oobabooga
a345a2acd2 Add a tokenizer placeholder 2023-03-03 15:16:55 -03:00
oobabooga
5b354817f6 Make chat minimally work with LLaMA 2023-03-03 15:04:41 -03:00
oobabooga
ea5c5eb3da Add LLaMA support 2023-03-03 14:39:14 -03:00
oobabooga
2bff646130 Stop chat from flashing dark when processing 2023-03-03 13:19:13 -03:00
oobabooga
169209805d Model-aware prompts and presets 2023-03-02 11:25:04 -03:00
oobabooga
7bbe32f618 Don't return a value in an iterator function 2023-03-02 00:48:46 -03:00
oobabooga
ff9f649c0c Remove some unused imports 2023-03-02 00:36:20 -03:00
oobabooga
1a05860ca3 Ensure proper no-streaming with generation_attempts > 1 2023-03-02 00:10:10 -03:00
oobabooga
a2a3e8f797 Add --rwkv-strategy parameter 2023-03-01 20:02:48 -03:00
oobabooga
449116a510 Fix RWKV paths on Windows (attempt) 2023-03-01 19:17:16 -03:00
oobabooga
955cf431e8 Minor consistency fix 2023-03-01 19:11:26 -03:00
oobabooga
f3da6dcc8f
Merge pull request #149 from oobabooga/RWKV
Add RWKV support
2023-03-01 16:57:45 -03:00
oobabooga
831ac7ed3f Add top_p 2023-03-01 16:45:48 -03:00
oobabooga
7c4d5ca8cc Improve the text generation call a bit 2023-03-01 16:40:25 -03:00
oobabooga
2f16ce309a Rename a variable 2023-03-01 12:33:09 -03:00
oobabooga
9e9cfc4b31 Parameters 2023-03-01 12:19:37 -03:00
oobabooga
0f6708c471 Sort the imports 2023-03-01 12:18:17 -03:00
oobabooga
e735806c51 Add a generate() function for RWKV 2023-03-01 12:16:11 -03:00
oobabooga
659bb76722 Add RWKVModel class 2023-03-01 12:08:55 -03:00
oobabooga
9c86a1cd4a Add RWKV pip package 2023-03-01 11:42:49 -03:00
oobabooga
6837d4d72a Load the model by name 2023-02-28 02:52:29 -03:00
oobabooga
a1429d1607 Add default extensions to the settings 2023-02-28 02:20:11 -03:00
oobabooga
19ccb2aaf5 Handle <USER> and <BOT> 2023-02-28 01:05:43 -03:00
oobabooga
626da6c731 Handle {{user}} and {{char}} in example dialogue 2023-02-28 00:59:05 -03:00
oobabooga
e861e68e38 Move the chat example dialogue to the prompt 2023-02-28 00:50:46 -03:00
oobabooga
f871971de1 Trying to get the chat to work 2023-02-28 00:25:30 -03:00
oobabooga
67ee7bead7 Add cpu, bf16 options 2023-02-28 00:09:11 -03:00
oobabooga
ebd698905c Add streaming to RWKV 2023-02-28 00:04:04 -03:00
oobabooga
70e522732c Move RWKV loader into a separate file 2023-02-27 23:50:16 -03:00
oobabooga
ebc64a408c RWKV support prototype 2023-02-27 23:03:35 -03:00
oobabooga
021bd55886 Better format the prompt when generation attempts > 1 2023-02-27 21:37:03 -03:00
oobabooga
43b6ab8673 Store thumbnails as files instead of base64 strings
This improves the UI responsiveness for large histories.
2023-02-27 13:41:00 -03:00
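A hedged sketch of the file-based approach: write each thumbnail to a cache directory once, keyed by a content hash, then reference it by path instead of inlining base64 into every message (directory, naming, and size are illustrative):

```
import hashlib
from pathlib import Path

from PIL import Image

CACHE_DIR = Path("cache")

def thumbnail_path(image_path, size=(64, 64)):
    # One small file per unique picture; the chat HTML points at the
    # path instead of carrying the image as a base64 string.
    CACHE_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha1(Path(image_path).read_bytes()).hexdigest()[:16]
    out = CACHE_DIR / f"thumb-{digest}.png"
    if not out.exists():
        img = Image.open(image_path)
        img.thumbnail(size)  # resizes in place, preserving aspect ratio
        img.save(out, "PNG")
    return out
```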
oobabooga
f24b6e78a3 Fix clear history 2023-02-26 23:58:04 -03:00
oobabooga
8e3e8a070f Make FlexGen work with the newest API 2023-02-26 16:53:41 -03:00
oobabooga
3333f94c30 Make the gallery extension work on colab 2023-02-26 12:37:26 -03:00
oobabooga
633a2b6be2 Don't regenerate/remove last message if the chat is empty 2023-02-26 00:43:12 -03:00
oobabooga
6e843a11d6 Fix FlexGen in chat mode 2023-02-26 00:36:04 -03:00
oobabooga
4548227fb5 Downgrade gradio version (file uploads are broken in 3.19.1) 2023-02-25 22:59:02 -03:00
oobabooga
9456c1d6ed Prevent streaming with no_stream + generation attempts > 1 2023-02-25 17:45:03 -03:00
oobabooga
32f40f3b42 Bump gradio version to 3.19.1 2023-02-25 17:20:03 -03:00
oobabooga
fa58fd5559 Proper way to free the cuda cache 2023-02-25 15:50:29 -03:00
oobabooga
b585e382c0 Rename the custom prompt generator function 2023-02-25 15:13:14 -03:00
oobabooga
700311ce40 Empty the cuda cache at model.generate() 2023-02-25 14:39:13 -03:00
oobabooga
1878acd9f3 Minor bug fix in chat 2023-02-25 09:30:59 -03:00
oobabooga
e71ff959f5 Clean up some unused code 2023-02-25 09:23:02 -03:00
oobabooga
91f5852245 Move bot_picture.py inside the extension 2023-02-25 03:00:19 -03:00
oobabooga
5ac24b019e Minor fix in the extensions implementation 2023-02-25 02:53:18 -03:00
oobabooga
85f914b9b9 Disable the hijack after using it 2023-02-25 02:36:01 -03:00
oobabooga
7e9f13e29f Rename a variable 2023-02-25 01:55:32 -03:00
oobabooga
1741c36092 Minor fix 2023-02-25 01:47:25 -03:00
oobabooga
7c2babfe39 Rename greed to "generation attempts" 2023-02-25 01:42:19 -03:00
oobabooga
2dfb999bf1 Add greed parameter 2023-02-25 01:31:01 -03:00
oobabooga
13f2688134 Better way to generate custom prompts 2023-02-25 01:08:17 -03:00
oobabooga
67623a52b7 Allow for permanent hijacking 2023-02-25 00:55:19 -03:00
oobabooga
111b5d42e7 Add prompt hijack option for extensions 2023-02-25 00:49:18 -03:00
oobabooga
7a527a5581 Move "send picture" into an extension
I am not proud of how I did it for now.
2023-02-25 00:23:51 -03:00
oobabooga
e51ece21c0 Add ui() function to extensions 2023-02-24 19:00:11 -03:00
oobabooga
78ad55641b Remove duplicate max_new_tokens parameter 2023-02-24 17:19:42 -03:00
oobabooga
65326b545a Move all gradio elements to shared (so that extensions can use them) 2023-02-24 16:46:50 -03:00
oobabooga
0817fe1beb Move code back into the chatbot wrapper 2023-02-24 14:10:32 -03:00
oobabooga
8a7563ae84 Reorder the imports 2023-02-24 12:42:43 -03:00
oobabooga
ace74a557a Add some comments 2023-02-24 12:41:27 -03:00
oobabooga
fe5057f932 Simplify the extensions implementation 2023-02-24 10:01:21 -03:00
oobabooga
2fb6ae6970 Move chat preprocessing into a separate function 2023-02-24 09:40:48 -03:00
oobabooga
f6f792363b Separate command-line params by spaces instead of commas 2023-02-24 08:55:09 -03:00
oobabooga
e260e84e5a Merge branch 'max_memory' of https://github.com/elwolf6/text-generation-webui into elwolf6-max_memory 2023-02-24 08:47:01 -03:00
oobabooga
146f786c57 Reorganize a bit 2023-02-24 08:44:54 -03:00
oobabooga
c2f4c395b9 Clean up some chat functions 2023-02-24 08:31:30 -03:00
luis
5abdc99a7c gpu-memory arg change 2023-02-23 18:43:55 -05:00
oobabooga
9ae063e42b Fix softprompts when deepspeed is active (#112) 2023-02-23 20:22:47 -03:00
oobabooga
dac6fe0ff4 Reset the history if no default history exists on reload 2023-02-23 19:53:50 -03:00
oobabooga
3b8cecbab7 Reload the default chat on page refresh 2023-02-23 19:50:23 -03:00
oobabooga
f1914115d3 Fix minor issue with chat logs 2023-02-23 16:04:47 -03:00
oobabooga
b78561fba6 Minor bug fix 2023-02-23 15:26:41 -03:00
oobabooga
2e86a1ec04 Move chat history into shared module 2023-02-23 15:11:18 -03:00
oobabooga
c87800341c Move function to extensions module 2023-02-23 14:55:21 -03:00
oobabooga
2048b403a5 Reorder functions 2023-02-23 14:49:02 -03:00
oobabooga
7224343a70 Improve the imports 2023-02-23 14:41:42 -03:00
oobabooga
e46c43afa6 Move some stuff from server.py to modules 2023-02-23 13:42:23 -03:00
oobabooga
1dacd34165 Further refactor 2023-02-23 13:28:30 -03:00
oobabooga
ce7feb3641 Further refactor 2023-02-23 13:03:52 -03:00
oobabooga
98af4bfb0d Refactor the code to make it more modular 2023-02-23 12:05:25 -03:00
oobabooga
bc856eb962 Add some more margin 2023-02-20 20:49:21 -03:00
oobabooga
f867285e3d Make the circle a bit less red 2023-02-20 18:41:38 -03:00
oobabooga
e4440cd984 Make highlighted text gray in cai-chat mode 2023-02-20 16:43:32 -03:00
oobabooga
995bcfcf5e Minor style change 2023-02-18 22:14:57 -03:00
oobabooga
d58544a420 Some minor formatting changes 2023-02-18 11:07:55 -03:00
oobabooga
3e6a8ccdce Fix galactica latex css 2023-02-18 00:18:39 -03:00
oobabooga
14f49bbe9a Fix galactica equations in dark mode 2023-02-17 23:57:09 -03:00
oobabooga
abb4667b44 Improve basic HTML style 2023-02-17 23:08:34 -03:00
oobabooga
00ca17abc9 Minor change 2023-02-17 22:52:03 -03:00
oobabooga
2fd003c044 Fix gpt4chan styles that were broken by gradio 3.18.0 2023-02-17 22:47:41 -03:00
oobabooga
0dd41e4830 Reorganize the sliders some more 2023-02-17 16:33:27 -03:00
oobabooga
6b9ac2f88e Reorganize the generation parameters 2023-02-17 16:18:01 -03:00
oobabooga
3923ac967f Create a cache for profile pictures (in RAM)
This is a performance optimization.
2023-02-17 14:30:39 -03:00
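One simple way to build such an in-RAM cache is functools.lru_cache; this hypothetical sketch memoizes the picture bytes per path so repeated chat redraws skip the disk entirely:

```
import functools
from pathlib import Path

@functools.lru_cache(maxsize=None)
def cached_picture_bytes(image_path):
    # The first call per path reads from disk; every later call is
    # served straight from RAM.
    return Path(image_path).read_bytes()
```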
oobabooga
a6ddbbfc77 Add more fonts options 2023-02-17 11:30:04 -03:00
oobabooga
5eeb3f4e54 Make thumbnails for the profile pictures (for performance) 2023-02-17 10:58:54 -03:00
oobabooga
71c2764516 Fix the API docs in chat mode 2023-02-17 01:56:51 -03:00
oobabooga
33ad21c4f2 Make the profile pictures a bit larger 2023-02-17 00:35:17 -03:00
oobabooga
c4e87c109e Include the bot's image as base64
This is needed for Colab.
2023-02-17 00:24:27 -03:00
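A minimal sketch of the base64 embedding (the helper name is illustrative): the picture is inlined as a data URI, so nothing has to be served from the local filesystem, which a Colab tunnel cannot reach:

```
import base64
from pathlib import Path

def image_to_img_tag(path):
    # Inline the image bytes directly into the HTML as a data URI.
    encoded = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return f'<img src="data:image/png;base64,{encoded}" alt="bot profile picture">'
```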
oobabooga
aeddf902ec Make the refresh button prettier 2023-02-16 21:55:20 -03:00
oobabooga
3746d72853 More style fixes 2023-02-15 21:13:12 -03:00
oobabooga
a55e8836f6 Bump gradio version
It looks uglier, but the old one was buggy and unstable.
2023-02-15 20:20:56 -03:00
oobabooga
1622059179 Move BLIP to the CPU
It's just as fast.
2023-02-15 00:03:19 -03:00
oobabooga
8c3ef58e00 Use BLIP directly + some simplifications 2023-02-14 23:55:46 -03:00
SillyLossy
a7d98f494a Use BLIP to send a picture to model 2023-02-15 01:38:21 +02:00
oobabooga
56bbc996a4 Minor CSS change for readability 2023-02-13 23:01:14 -03:00
oobabooga
61aed97439 Slightly increase a margin 2023-02-12 17:38:54 -03:00
oobabooga
76d3d7ddb3 Reorder the imports here too 2023-02-10 15:57:55 -03:00
oobabooga
d038963193 Rename a variable (for #59) 2023-02-07 23:26:02 -03:00
oobabooga
f38c9bf428 Fix deepspeed (oops) 2023-02-02 10:39:37 -03:00
oobabooga
90f1067598 Move deepspeed parameters to another file 2023-02-02 10:25:09 -03:00
oobabooga
dfbca86533 Add **bold** support in chat mode 2023-01-30 08:36:58 -03:00
oobabooga
fc73188ec7 Allow specifying your own profile picture in chat mode 2023-01-25 19:37:44 -03:00
oobabooga
0b2a6b2819 Add file 2023-01-25 10:19:50 -03:00
oobabooga
434d4b128c Add refresh buttons for the model/preset/character menus 2023-01-22 00:02:46 -03:00