Commit Graph

2800 Commits

Author SHA1 Message Date
oobabooga
8265d45db8 Add send dummy message/reply buttons
Useful for starting a new reply.
2023-04-11 22:21:41 -03:00
oobabooga
37d52c96bc Fix Continue in chat mode 2023-04-11 21:46:17 -03:00
oobabooga
f2ec880e81 Auto-scroll to the bottom when streaming is over in notebook/default modes 2023-04-11 20:58:10 -03:00
oobabooga
f34f2daa3d More reasonable default preset 2023-04-11 18:57:46 -03:00
oobabooga
cacbcda208 Two new options: truncation length and ban eos token 2023-04-11 18:46:06 -03:00
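The two options in the commit above can be sketched roughly as follows. This is a hypothetical illustration of the general technique, not the project's actual code: truncation keeps only the most recent prompt tokens, and banning the EOS token is modeled here as masking its score so sampling can never select it.

```python
def truncate_prompt(input_ids, truncation_length):
    # Keep only the last `truncation_length` tokens so the prompt
    # fits inside the model's context window.
    return input_ids[-truncation_length:]

def ban_eos(logits, eos_token_id):
    # One common way to "ban" a token: set its logit to -inf so the
    # sampler can never pick it, forcing generation to run until the
    # length limit instead of stopping early.
    logits = list(logits)
    logits[eos_token_id] = float("-inf")
    return logits
```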
oobabooga
749c08a4ff Update README.md 2023-04-11 14:42:10 -03:00
DavG25
e9e93189ff Fix text overflow in chat and instruct mode (#1044) 2023-04-11 14:41:29 -03:00
oobabooga
dc3c9d00a0 Update the API extension 2023-04-11 13:07:45 -03:00
oobabooga
457d3c58eb Update the API example 2023-04-11 12:57:36 -03:00
catalpaaa
78bbc66fc4 allow custom stopping strings in all modes (#903) 2023-04-11 12:30:06 -03:00
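Custom stopping strings (#903 above) boil down to cutting the generated text at the first occurrence of any user-supplied string. A minimal sketch of that idea, with hypothetical function names:

```python
def find_earliest_stop(text, stopping_strings):
    # Return the index of the earliest stopping string found in `text`,
    # or None if none of them occur.
    hits = [text.find(s) for s in stopping_strings if s in text]
    return min(hits) if hits else None

def truncate_at_stop(text, stopping_strings):
    # Cut the generated text at the first stopping string, if any.
    idx = find_earliest_stop(text, stopping_strings)
    return text if idx is None else text[:idx]
```

In practice a streaming implementation also has to check partial matches at the end of the buffer, which this sketch omits.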
oobabooga
0f212093a3 Refactor the UI
A single dictionary called 'interface_state' is now passed as input to all functions. The values are updated only when necessary.

The goal is to make it easier to add new elements to the UI.
2023-04-11 11:46:30 -03:00
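The pattern the refactor above describes — one `interface_state` dictionary passed to every function instead of a long positional list of UI values — can be sketched like this (keys and function names are hypothetical, chosen only to illustrate the shape):

```python
def gather_interface_values(*values, keys):
    # Collect the current values of all UI elements into a single
    # dictionary keyed by element name.
    return dict(zip(keys, values))

def generate_reply(text, interface_state):
    # Every handler receives the same state dict; adding a new UI
    # element means adding one key, not changing every signature.
    max_tokens = interface_state["max_new_tokens"]
    temperature = interface_state["temperature"]
    return f"(would generate up to {max_tokens} tokens at T={temperature})"
```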
oobabooga
64f5c90ee7 Fix the API extension 2023-04-10 20:14:38 -03:00
oobabooga
58b34c0841 Fix chat_prompt_size 2023-04-10 20:06:42 -03:00
oobabooga
5234071c04 Improve Instruct mode text readability 2023-04-10 17:41:07 -03:00
IggoOnCode
09d8119e3c Add CPU LoRA training (#938)
(It's very slow)
2023-04-10 17:29:00 -03:00
Alex "mcmonkey" Goodwin
0caf718a21 add on-page documentation to parameters (#1008) 2023-04-10 17:19:12 -03:00
oobabooga
85a7954823 Update settings-template.json 2023-04-10 16:53:07 -03:00
oobabooga
d37b4f76b1 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-10 16:45:09 -03:00
oobabooga
bd04ff27ad Make the bos token optional 2023-04-10 16:44:22 -03:00
oobabooga
f035b01823 Update README.md 2023-04-10 16:20:23 -03:00
Jeff Lefebvre
b7ca89ba3f Mention that build-essential is required (#1013) 2023-04-10 16:19:10 -03:00
loeken
52339e9b20 add make/g++ to docker (#1015) 2023-04-10 16:18:07 -03:00
oobabooga
4961f43702 Improve header bar colors 2023-04-10 16:15:16 -03:00
oobabooga
617530296e Instruct mode color/style improvements 2023-04-10 16:04:21 -03:00
oobabooga
0f1627eff1 Don't treat Instruct mode histories as regular histories
* They must now be saved/loaded manually
* Also improved browser caching of pfps
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
oobabooga
d679c4be13 Change a label 2023-04-10 11:44:37 -03:00
oobabooga
45244ed125 More descriptive download info 2023-04-10 11:42:12 -03:00
oobabooga
7e70741a4e Download models from Model tab (#954 from UsamaKenway/main) 2023-04-10 11:38:30 -03:00
oobabooga
11b23db8d4 Remove unused imports 2023-04-10 11:37:42 -03:00
oobabooga
2c14df81a8 Use download-model.py to download the model 2023-04-10 11:36:39 -03:00
oobabooga
c6e9ba20a4 Merge branch 'main' into UsamaKenway-main 2023-04-10 11:14:03 -03:00
oobabooga
843f672227 fix random seeds to actually randomize (#1004 from mcmonkey4eva/seed-fix) 2023-04-10 10:56:12 -03:00
oobabooga
769aa900ea Print the used seed 2023-04-10 10:53:31 -03:00
jllllll
254609daca Update llama-cpp-python link to official wheel (#19) 2023-04-10 10:48:56 -03:00
oobabooga
32d078487e Add llama-cpp-python to requirements.txt 2023-04-10 10:45:51 -03:00
Alex "mcmonkey" Goodwin
30befe492a fix random seeds to actually randomize
Without this fix, manual seeds get locked in.
2023-04-10 06:29:10 -07:00
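The seed bug described above ("manual seeds get locked in") comes from treating `-1` (meaning "random") as a one-time value instead of drawing a fresh seed per generation. A minimal sketch of the corrected behavior, with a hypothetical function name:

```python
import random

def resolve_seed(requested_seed):
    # -1 means "random": a fresh seed must be drawn on EVERY call.
    # Drawing it once and reusing the result would lock the seed in,
    # which is the bug this commit fixes.
    if requested_seed == -1:
        return random.randint(1, 2**31 - 1)
    return requested_seed
```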
jllllll
c3e1a58cb3 Correct llama-cpp-python wheel link (#17) 2023-04-09 23:46:54 -03:00
oobabooga
1911504f82 Minor bug fix 2023-04-09 23:45:41 -03:00
BlueprintCoding
8178fde2cb Added dropdown to character bias. (#986) 2023-04-09 23:44:31 -03:00
oobabooga
dba2000d2b Do things that I am not proud of 2023-04-09 23:40:49 -03:00
oobabooga
97840c92f9 Add working llama-cpp-python install from wheel. (#13 from Loufe/oobabooga-windows) 2023-04-09 23:23:27 -03:00
oobabooga
65552d2157 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-09 23:19:53 -03:00
oobabooga
8c6155251a More robust 4-bit model loading 2023-04-09 23:19:28 -03:00
MarkovInequality
992663fa20 Added xformers support to Llama (#950) 2023-04-09 23:08:40 -03:00
Brian O'Connor
625d81f495 Update character log logic (#977)
* When logs are cleared, save the cleared log over the old log files
* Generate a log file when a character is loaded the first time
2023-04-09 22:20:21 -03:00
oobabooga
57f768eaad Better preset in api-example.py 2023-04-09 22:18:40 -03:00
oobabooga
a3085dba07 Fix LlamaTokenizer eos_token (attempt) 2023-04-09 21:19:39 -03:00
oobabooga
120f5662cf Better handle spaces for Continue 2023-04-09 20:37:31 -03:00
oobabooga
b27d757fd1 Minor change 2023-04-09 20:06:20 -03:00
oobabooga
d29f4624e9 Add a Continue button to chat mode 2023-04-09 20:04:16 -03:00