Commit Graph

1527 Commits

Author SHA1 Message Date
oobabooga
9c86acda67
Fix huge empty space in the Character tab 2023-04-04 18:07:34 -03:00
oobabooga
38afc2470c
Change indentation 2023-04-04 16:32:27 -03:00
oobabooga
b2ce7282a1
Use past transformers version #773 2023-04-04 16:11:42 -03:00
OWKenobi
ee4547cd34
Detect "vicuna" as llama model type (#772) 2023-04-04 13:23:27 -03:00
oobabooga
881dbc3d44
Add back the name 2023-04-04 13:11:34 -03:00
oobabooga
af0cb283e4
improve the example character yaml format (#770 from mcmonkey4eva) 2023-04-04 12:52:21 -03:00
Alex "mcmonkey" Goodwin
165d757444 improve the example character yaml format - use multiline blocks
Multiline blocks make the input much cleaner and simpler, particularly for example_dialogue. For the greeting block it could go either way, but it still ends up nicer. Using double quotes for the context field also removes the need to escape the single quote inside.
2023-04-04 08:25:11 -07:00
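As a rough illustration of the format change this commit describes, here is a hedged sketch of a character file using YAML multiline blocks. The field names (name, context, greeting, example_dialogue) come from the commit message itself; the values and the loading code are illustrative only, not the repository's actual example file.

```python
# Illustrative only: a character definition using YAML multiline blocks,
# parsed with PyYAML. Field names follow the commit message; the values
# are made up for demonstration.
import yaml

character_yaml = """
name: "Example"
context: "Example is a friendly assistant who doesn't mind long chats."
greeting: |-
  *waves* Hey! It's nice to meet you.
example_dialogue: |-
  You: What do you like to talk about?
  Example: Anything, really. Lately I've been into local language models.
"""

character = yaml.safe_load(character_yaml)
print(character["greeting"])
```

Note how the `|-` block scalars keep the multi-line greeting and example_dialogue readable, and how double-quoting the context field lets the apostrophe appear unescaped.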
oobabooga
8de22ac82a Merge character upload tabs 2023-04-03 18:01:45 -03:00
oobabooga
b24147c7ca Document --pre_layer 2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270 Update settings template 2023-04-03 14:59:26 -03:00
dependabot[bot]
ad37f396fc
Bump rwkv from 0.7.1 to 0.7.2 (#747) 2023-04-03 14:29:57 -03:00
dependabot[bot]
18f756ada6
Bump gradio from 3.24.0 to 3.24.1 (#746) 2023-04-03 14:29:37 -03:00
Niels Mündler
7aab88bcc6
Give API extension access to all generate_reply parameters (#744)
* Make every parameter of the generate_reply function parameterizable

* Add stopping strings as parameterizable
2023-04-03 13:31:12 -03:00
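For context, the API extension exposes a simple HTTP endpoint for text generation, and this change lets callers pass any generate_reply parameter (including stopping strings) in the request body. The sketch below only conveys the general idea; the endpoint path, port, and parameter names are assumptions for illustration, not a documented contract — check extensions/api/script.py for the actual request format.

```python
# Hedged sketch of calling the API extension with explicit generation
# parameters. Endpoint, port, and field names are assumptions here;
# see extensions/api/script.py for the real request format.
import requests

payload = {
    "prompt": "Write a haiku about spring.",
    "max_new_tokens": 60,
    "temperature": 0.7,
    "top_p": 0.9,
    "repetition_penalty": 1.1,
    "stopping_strings": ["\nUser:"],
}

response = requests.post("http://localhost:5000/api/v1/generate", json=payload)
response.raise_for_status()
print(response.json())
```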
oobabooga
9318e16ed5 Expand .gitignore 2023-04-03 12:51:30 -03:00
oobabooga
3012bdb5e0 Fix a label 2023-04-03 12:20:53 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly (#743)
* Add greetings field

* add greeting field and make it interactive

* Minor changes

* Fix a bug

* Simplify clear_chat_log

* Change a label

* Minor change

* Simplifications

* Simplification

* Simplify loading the default character history

* Fix regression

---------

Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
Alex "mcmonkey" Goodwin
8b1f20aa04
Fix some old JSON characters not loading (#740) 2023-04-03 10:49:28 -03:00
oobabooga
8b442305ac Rename another variable 2023-04-03 01:15:20 -03:00
oobabooga
08448fb637 Rename a variable 2023-04-03 01:02:11 -03:00
oobabooga
2a267011dc Use Path.stem for simplicity 2023-04-03 00:56:14 -03:00
Alex "mcmonkey" Goodwin
ea97303509
Apply dialogue format in all character fields not just example dialogue (#650) 2023-04-02 21:54:29 -03:00
oobabooga
525f729b8e
Update README.md 2023-04-02 21:12:41 -03:00
oobabooga
53084241b4
Update README.md 2023-04-02 20:50:06 -03:00
TheTerrasque
2157bb4319
New yaml character format (#337 from TheTerrasque/feature/yaml-characters)
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
oobabooga
7ce608d101
Merge pull request #732 from StefanDanielSchwarz/fix-verbose-(beam-search)-preset
Fix "Verbose (Beam Search)" preset
2023-04-02 19:38:11 -03:00
SDS
34c3b4af6e
Fix "Verbose (Beam Search)" preset
Just a quick fix that removes an erroneous space between "length_penalty" and "=". The space doesn't affect Python, but removing it makes it possible to source the file from Bash, e.g. to use the variables with API calls.
2023-04-03 00:31:58 +02:00
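In other words, a preset is a file of plain key=value lines, and the whitespace only matters when the file is read by something stricter than Python. A minimal sketch of the kind of change described (the value shown is a placeholder, not necessarily the preset's actual number):

```python
# Illustrative only. The erroneous space is harmless for Python-style
# loading but breaks `source`-ing the preset from Bash. The value is a
# placeholder, not necessarily the preset's real number.
before = "length_penalty = 1.4"   # Python-ok, Bash `source` rejects it
after = "length_penalty=1.4"      # valid for both Python and Bash

params = {}
exec(after, {}, params)           # the Python side is unaffected either way
print(params["length_penalty"])   # 1.4
```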
oobabooga
1a823aaeb5
Clear text input for chat (#715 from bmoconno/clear-chat-input) 2023-04-02 18:08:25 -03:00
oobabooga
0dc6fa038b Use gr.State() to store the user input 2023-04-02 18:05:21 -03:00
oobabooga
5f3f3faa96 Better handle CUDA out of memory errors in chat mode 2023-04-02 17:48:00 -03:00
Brian O'Connor
d0f9625f0b Clear text input for chat
Add logic to clear the textbox for chat input when the user submits or hits the generate button.
2023-04-01 21:48:24 -04:00
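The pattern behind these two commits — keep the submitted text in gr.State() so the visible textbox can be blanked on submit — looks roughly like the hedged sketch below. Component names and the callback are illustrative, not the project's actual code.

```python
# Hedged sketch of the gr.State() + clear-on-submit pattern, not the
# project's implementation. Requires gradio.
import gradio as gr

def on_submit(text, _stored):
    # Keep the submitted text in State, clear the visible textbox,
    # and produce some output from the stored value.
    return text, "", f"Generating a reply to: {text!r}"

with gr.Blocks() as demo:
    user_input = gr.Textbox(label="Chat input")
    stored_input = gr.State("")          # survives the textbox being cleared
    reply = gr.Textbox(label="Reply")

    user_input.submit(on_submit, [user_input, stored_input],
                      [stored_input, user_input, reply])

demo.launch()
```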
oobabooga
b0890a7925 Add shared.is_chat() function 2023-04-01 20:15:00 -03:00
oobabooga
b38ba230f4
Update download-model.py 2023-04-01 15:03:24 -03:00
oobabooga
b6f817be45
Update README.md 2023-04-01 14:54:10 -03:00
oobabooga
88fa38ac01
Update README.md 2023-04-01 14:49:03 -03:00
oobabooga
526d5725db
Update download-model.py 2023-04-01 14:47:47 -03:00
oobabooga
4b57bd0d99
Update README.md 2023-04-01 14:38:04 -03:00
oobabooga
b53bec5a1f
Update README.md 2023-04-01 14:37:35 -03:00
oobabooga
9160586c04
Update README.md 2023-04-01 14:31:10 -03:00
oobabooga
7ec11ae000
Update README.md 2023-04-01 14:15:19 -03:00
oobabooga
b857f4655b
Update shared.py 2023-04-01 13:56:47 -03:00
oobabooga
012f4f83b8
Update README.md 2023-04-01 13:55:15 -03:00
oobabooga
fcda3f8776 Add also_return_rows to generate_chat_prompt 2023-04-01 01:12:13 -03:00
oobabooga
8c51b405e4 Progress towards generalizing Interface mode tab 2023-03-31 23:41:10 -03:00
oobabooga
23116b88ef
Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads) 2023-03-31 22:55:55 -03:00
oobabooga
74462ac713 Don't override the metadata when checking the sha256sum 2023-03-31 22:52:52 -03:00
oobabooga
2c52310642 Add --threads flag for llama.cpp 2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713 Fix streaming 2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd Add repetition_penalty 2023-03-31 19:01:34 -03:00
oobabooga
2259143fec Fix llama.cpp with --no-stream 2023-03-31 18:43:45 -03:00
oobabooga
875de5d983 Update ggml template 2023-03-31 17:57:31 -03:00