Commit Graph

3654 Commits

Author SHA1 Message Date
oobabooga
5f4f38ca5d Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-06 14:38:29 -03:00
oobabooga
ef0f748618 Prevent CPU version of Torch from being installed (#10 from jllllll/oobabooga-windows) 2023-04-06 13:54:14 -03:00
oobabooga
d9e7aba714 Update README.md 2023-04-06 13:42:24 -03:00
oobabooga
59058576b5 Remove unused requirement 2023-04-06 13:28:21 -03:00
oobabooga
eec3665845 Add instructions for updating requirements 2023-04-06 13:24:01 -03:00
oobabooga
03cb44fc8c Add new llama.cpp library (2048 context, temperature, etc now work) 2023-04-06 13:12:14 -03:00
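The "new llama.cpp library" here is presumably the llama-cpp-python bindings; as a rough sketch of the options the commit message says now work (a 2048-token context window and temperature), with a placeholder model path and prompt:

```python
# Sketch only: llama-cpp-python usage showing the context size and
# temperature options referenced in the commit message above.
# The model path and prompt are placeholders, not files from this repo.
from llama_cpp import Llama

llm = Llama(model_path="models/ggml-model-q4_0.bin", n_ctx=2048)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```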
EyeDeck
39f3fec913 Broaden GPTQ-for-LLaMA branch support (#820) 2023-04-06 12:16:48 -03:00
oobabooga
8cd899515e Change instruct html a bit 2023-04-06 12:00:20 -03:00
oobabooga
4a28f39823 Update README.md 2023-04-06 02:47:27 -03:00
oobabooga
158ec51ae3 Increase instruct mode padding 2023-04-06 02:20:52 -03:00
Alex "mcmonkey" Goodwin
0c7ef26981 Lora trainer improvements (#763) 2023-04-06 02:04:11 -03:00
oobabooga
5b301d9a02 Create a Model tab 2023-04-06 01:54:05 -03:00
oobabooga
4a400320dd Clean up 2023-04-06 01:47:00 -03:00
oobabooga
e94ab5dac1 Minor fixes 2023-04-06 01:43:10 -03:00
Randell Miller
641646a801 Fix crash if missing instructions directory (#812) 2023-04-06 01:24:22 -03:00
oobabooga
3f3e42e26c Refactor several function calls and the API 2023-04-06 01:22:15 -03:00
SDS
378d21e80c Add LLaMA-Precise preset (#767) 2023-04-05 18:52:36 -03:00
jllllll
1e656bef25 Specifically target cuda 11.7 ver. of torch 2.0.0
Move conda-forge channel to global list of channels
Hopefully prevents missing or incorrect packages
2023-04-05 16:52:05 -05:00
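A quick way to check that the CUDA 11.7 build of torch 2.0.0 was installed rather than a CPU-only wheel (this verification snippet is mine, not part of the installer commit above):

```python
# A CPU-only torch wheel reports torch.version.cuda as None and
# torch.cuda.is_available() as False.
import torch

print(torch.__version__)         # expected to contain "2.0.0"
print(torch.version.cuda)        # expected "11.7" for the targeted build
print(torch.cuda.is_available())
```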
eiery
19b516b11b fix link to streaming api example (#803) 2023-04-05 14:50:23 -03:00
oobabooga
7617ed5bfd Add AMD instructions 2023-04-05 14:42:58 -03:00
oobabooga
770ef5744f Update README 2023-04-05 14:38:11 -03:00
Forkoz
8203ce0cac Stop character pic from being cached when changing chars or clearing. (#798)
Tested on both FF and chromium
2023-04-05 14:25:01 -03:00
oobabooga
7f66421369 Fix loading characters 2023-04-05 14:22:32 -03:00
oobabooga
90141bc1a8 Fix saving prompts on Windows 2023-04-05 14:08:54 -03:00
oobabooga
cf2c4e740b Disable gradio analytics globally 2023-04-05 14:05:50 -03:00
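One common way to disable Gradio's analytics process-wide (an assumption about how this commit does it, not a quote from it) is to set the relevant environment variable before Gradio is imported:

```python
import os

# Gradio reads this environment variable, so setting it before the import
# keeps usage analytics disabled for the whole process.
os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"

import gradio as gr  # noqa: E402
```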
oobabooga
e722c240af Add Instruct mode 2023-04-05 13:54:50 -03:00
oobabooga
3d6cb5ed63 Minor rewrite 2023-04-05 01:21:40 -03:00
oobabooga
f3a2e0b8a9 Disable pre_layer when the model type is not llama 2023-04-05 01:19:26 -03:00
oobabooga
ca8bb38949 Simplify gallery 2023-04-05 00:34:17 -03:00
catalpaaa
4ab679480e allow quantized model to be loaded from model dir (#760) 2023-04-04 23:19:38 -03:00
oobabooga
ae1fe45bc0 One more cache reset 2023-04-04 23:15:57 -03:00
oobabooga
8ef89730a5 Try to better handle browser image cache 2023-04-04 23:09:28 -03:00
oobabooga
cc6c7a37f3 Add make_thumbnail function 2023-04-04 23:03:58 -03:00
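For illustration only, one way such a thumbnail helper could look with Pillow's center-crop-and-resize (a hypothetical sketch, not the repository's actual make_thumbnail):

```python
# Hypothetical sketch: fixed-size thumbnail via center crop + resize.
from PIL import Image, ImageOps

def make_thumbnail(path: str, size=(200, 200)) -> Image.Image:
    with Image.open(path) as img:
        return ImageOps.fit(img, size, Image.LANCZOS)
```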
oobabooga
80dfba05f3 Better crop/resize cached images 2023-04-04 22:52:15 -03:00
oobabooga
65d8a24a6d Show profile pictures in the Character tab 2023-04-04 22:28:49 -03:00
oobabooga
f70a2e3ad4 Second attempt at fixing empty space 2023-04-04 18:30:34 -03:00
oobabooga
9c86acda67 Fix huge empty space in the Character tab 2023-04-04 18:07:34 -03:00
oobabooga
38afc2470c Change indentation 2023-04-04 16:32:27 -03:00
oobabooga
b2ce7282a1 Use past transformers version #773 2023-04-04 16:11:42 -03:00
jllllll
5aaf771c7d Add additional sanity check
Add environment creation error
Improve error visibility
2023-04-04 12:31:26 -05:00
OWKenobi
ee4547cd34 Detect "vicuna" as llama model type (#772) 2023-04-04 13:23:27 -03:00
oobabooga
881dbc3d44 Add back the name 2023-04-04 13:11:34 -03:00
oobabooga
af0cb283e4 improve the example character yaml format (#770 from mcmonkey4eva) 2023-04-04 12:52:21 -03:00
Alex "mcmonkey" Goodwin
165d757444 improve the example character yaml format - use multiline blocks
multiline blocks make the input much cleaner and simpler, particularly for the example_dialogue. For the greeting block it can kinda go either way but I think it still ends up nicer. Also double quotes in context fixes the need to escape the singlequote inside.
2023-04-04 08:25:11 -07:00
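The difference that commit describes is easier to see side by side; a minimal sketch (the field names greeting and example_dialogue come from the commit message, the loader code around them is only illustrative):

```python
import yaml  # PyYAML

# Single-line quoted strings need escape sequences for newlines.
escaped = yaml.safe_load(
    'greeting: "Hello! It\'s nice to meet you."\n'
    'example_dialogue: "You: Hi\\nBot: Hello!"\n'
)

# Literal block scalars (|-) keep newlines and need no quote escaping,
# which is what makes the multiline format cleaner for long fields.
block = yaml.safe_load(
    "greeting: |-\n"
    "  Hello! It's nice to meet you.\n"
    "example_dialogue: |-\n"
    "  You: Hi\n"
    "  Bot: Hello!\n"
)

assert escaped == block  # same data, cleaner source format
```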
oobabooga
8de22ac82a Merge character upload tabs 2023-04-03 18:01:45 -03:00
oobabooga
b24147c7ca Document --pre_layer 2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270 Update settings template 2023-04-03 14:59:26 -03:00
dependabot[bot]
ad37f396fc Bump rwkv from 0.7.1 to 0.7.2 (#747) 2023-04-03 14:29:57 -03:00
dependabot[bot]
18f756ada6 Bump gradio from 3.24.0 to 3.24.1 (#746) 2023-04-03 14:29:37 -03:00
Niels Mündler
7aab88bcc6 Give API extension access to all generate_reply parameters (#744)
* Make every parameter of the generate_reply function parameterizable

* Add stopping strings as parameterizable
2023-04-03 13:31:12 -03:00
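A generic sketch of the pattern that commit describes: every generation parameter, including stopping strings, becomes overridable per API request. The parameter names and the build_state helper are assumptions for illustration, not the project's actual code:

```python
# Hypothetical illustration: merge per-request overrides into default
# generation settings before handing them to the text generation call.
DEFAULT_STATE = {
    "max_new_tokens": 200,
    "temperature": 0.7,
    "top_p": 0.9,
    "stopping_strings": [],
}

def build_state(request_body: dict) -> dict:
    state = dict(DEFAULT_STATE)
    # Accept only keys that are known generation parameters.
    state.update({k: v for k, v in request_body.items() if k in DEFAULT_STATE})
    return state

print(build_state({"temperature": 0.2, "stopping_strings": ["\nYou:"]}))
```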