oobabooga
9c9bd1074f
Add option to replace the bot's last reply
2023-01-29 12:02:44 -03:00
oobabooga
e5ff4ddfc8
Add bot prefix modifier option in extensions
2023-01-29 10:11:59 -03:00
oobabooga
b6d01bb704
Enable extensions in all modes, not just chat
2023-01-29 09:48:18 -03:00
oobabooga
1a139664f5
Grammar
2023-01-29 02:54:36 -03:00
oobabooga
2d134031ca
Apply extensions to character greeting
2023-01-29 00:04:11 -03:00
oobabooga
e349b52256
Read extensions parameters from settings file
2023-01-28 23:21:40 -03:00
oobabooga
2239be2351
Support for number/bool extension parameters
2023-01-28 23:08:28 -03:00
oobabooga
6da94e358c
Add support for extensions parameters
...
Still experimental
2023-01-28 23:00:51 -03:00
oobabooga
e779fd795f
Save TavernAI characters with TavernAI- prefix
2023-01-28 21:01:56 -03:00
oobabooga
833a1138fa
Explain the dialogue tokenization output
2023-01-28 20:41:02 -03:00
oobabooga
545b7395b2
Prevent huge --help outputs
2023-01-28 20:36:51 -03:00
oobabooga
f4c455ce29
Merge pull request #30 from 10sa/patch-1
...
Add listening port options for listening mode.
2023-01-28 20:35:20 -03:00
oobabooga
7b283a4a3d
Update server.py
2023-01-28 20:35:05 -03:00
oobabooga
f4674d34a9
Reorganize chat UI elements
2023-01-28 20:28:08 -03:00
oobabooga
3687962e6c
Add support for TavernAI character cards (closes #31)
2023-01-28 20:18:23 -03:00
oobabooga
f71531186b
Upload profile pictures from the web UI
2023-01-28 19:16:37 -03:00
Tensa
3742d3b18a
Add listening port options for listening mode.
2023-01-28 03:38:34 +09:00
oobabooga
69ffef4391
History loading minor bug fix
2023-01-27 12:01:11 -03:00
oobabooga
8b8236c6ff
Fix Regenerate button bug
2023-01-27 11:14:19 -03:00
oobabooga
1d1f931757
Load extensions at startup
2023-01-27 10:53:05 -03:00
oobabooga
70e034589f
Update the export/load chat history functions
2023-01-27 02:16:05 -03:00
oobabooga
6b5dcd46c5
Add support for extensions
...
This is experimental.
2023-01-27 00:40:39 -03:00
oobabooga
e69990e37b
Change order of upload and download tabs in chat mode
2023-01-26 16:57:12 -03:00
oobabooga
ac6065d5ed
Fix character loading bug
2023-01-26 13:45:19 -03:00
oobabooga
61611197e0
Add --verbose option (oops)
2023-01-26 02:18:06 -03:00
oobabooga
abc920752f
Stop at eos_token while streaming text (for #26)
2023-01-25 22:27:04 -03:00
oobabooga
b77933d327
File names must be img_me.jpg and img_bot.jpg
2023-01-25 19:40:30 -03:00
oobabooga
fc73188ec7
Allow specifying your own profile picture in chat mode
2023-01-25 19:37:44 -03:00
oobabooga
3fa14befc5
Bump the gradio version, add back the queue
2023-01-25 16:10:35 -03:00
oobabooga
7a3717b824
Allow uploading characters
2023-01-25 15:45:25 -03:00
oobabooga
6388c7fbc0
Set queue size to 1 to prevent gradio undefined behavior
2023-01-25 14:37:41 -03:00
oobabooga
ec69c190ba
Keep the character's greeting/example dialogue when "clear history" is clicked
2023-01-25 10:52:35 -03:00
oobabooga
ebed1dea56
Generate 8 tokens at a time in streaming mode instead of just 1
...
This is a performance optimization.
2023-01-25 10:38:26 -03:00
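The optimization above amounts to batching the streaming loop: request several tokens from the model per call instead of one, so far fewer round trips are needed for the same reply. A minimal sketch of the idea, not the repository's actual code (`stream_reply` and the `generate` callable are hypothetical stand-ins):

```python
def stream_reply(generate, prompt, max_new_tokens=200, chunk=8):
    """Yield progressively longer partial replies, requesting `chunk`
    tokens per model call instead of one.

    `generate(text, n)` is a stand-in for the model call; it must
    return `text` extended with up to `n` new tokens."""
    text = prompt
    produced = 0
    while produced < max_new_tokens:
        n = min(chunk, max_new_tokens - produced)
        new_text = generate(text, n)
        if new_text == text:  # model stopped early (e.g. hit EOS)
            break
        text = new_text
        produced += n
        yield text[len(prompt):]  # only the newly generated part
```

With `chunk=8`, a 200-token reply needs 25 generation calls rather than 200, at the cost of the UI updating in slightly coarser steps.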
oobabooga
3b8f0021cc
Stop generating at \nYou: in chat mode
2023-01-25 10:17:55 -03:00
oobabooga
54e77acac4
Rename to "Generation parameters preset" for clarity
2023-01-23 20:49:44 -03:00
oobabooga
ce4756fb88
Allow uploading chat history in official pygmalion web ui format
2023-01-23 15:29:01 -03:00
oobabooga
8325e23923
Fix bug in loading chat history as text file
2023-01-23 14:28:02 -03:00
oobabooga
059d47edb5
Submit with enter instead of shift+enter in chat mode
2023-01-23 14:04:01 -03:00
oobabooga
4820379139
Add debug preset (deterministic, should always give the same responses)
2023-01-23 13:36:01 -03:00
oobabooga
947b50e8ea
Allow uploading chat history as simple text files
2023-01-23 09:45:10 -03:00
oobabooga
ebf720585b
Mention time and it/s in terminal with streaming off
2023-01-22 20:07:19 -03:00
oobabooga
d87310ad61
Send last input to the input box when "Remove last" is clicked
2023-01-22 19:40:22 -03:00
oobabooga
d0ea6d5f86
Make the maximum history size in prompt unlimited by default
2023-01-22 17:17:35 -03:00
oobabooga
00f3b0996b
Warn the user that chat mode becomes a lot slower with text streaming
2023-01-22 16:19:11 -03:00
oobabooga
c5cc3a3075
Fix bug in "remove last" button
2023-01-22 13:10:36 -03:00
oobabooga
a410cf1345
Mention that "Chat history size" means "Chat history size in prompt"
2023-01-22 03:15:35 -03:00
oobabooga
b3e1a874bc
Fix bug in loading history
2023-01-22 02:32:54 -03:00
oobabooga
62b533f344
Add "regenerate" button to the chat
2023-01-22 02:19:58 -03:00
oobabooga
94ecbc6dff
Export history as nicely formatted json
2023-01-22 01:24:16 -03:00
oobabooga
deacb96c34
Change the pygmalion default context
2023-01-22 00:49:59 -03:00
oobabooga
23f94f559a
Improve the chat prompt design
2023-01-22 00:35:42 -03:00
oobabooga
139e2f0ab4
Redesign the upload/download chat history buttons
2023-01-22 00:22:50 -03:00
oobabooga
434d4b128c
Add refresh buttons for the model/preset/character menus
2023-01-22 00:02:46 -03:00
oobabooga
1e5e56fa2e
Better recognize the 4chan model (for #19)
2023-01-21 22:13:01 -03:00
oobabooga
aadf4e899a
Improve example dialogue handling
2023-01-21 15:04:13 -03:00
oobabooga
f9dbe7e08e
Update README
2023-01-21 03:05:55 -03:00
oobabooga
27e2d932b0
Don't include the example dialogue in the export json
2023-01-21 02:55:13 -03:00
oobabooga
990ee54ddd
Move the example dialogue to the chat history, and keep it hidden.
...
This greatly improves the performance of text generation, as
histories can be quite long. It also makes more sense to implement
it this way.
2023-01-21 02:48:06 -03:00
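The design described above keeps the example dialogue in the same history list as real messages, flagged so it reaches the model but never the screen. A hypothetical sketch of that separation (the tuple layout and function names are illustrative assumptions, not the repository's actual data structures):

```python
# Each history entry: (user_msg, bot_msg, hidden). Example dialogue is
# stored with hidden=True so it is part of the prompt but not displayed.
def visible_history(history):
    """Entries to render in the chat UI (hidden ones filtered out)."""
    return [(u, b) for u, b, hidden in history if not hidden]

def history_for_prompt(history):
    """All entries, hidden or not, for building the model prompt."""
    return [(u, b) for u, b, _ in history]
```

Because the example dialogue now lives in the history, it is subject to the same truncation as everything else once conversations grow long.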
oobabooga
d7299df01f
Rename parameters
2023-01-21 00:33:41 -03:00
oobabooga
5df03bf0fd
Merge branch 'main' into main
2023-01-21 00:25:34 -03:00
oobabooga
faaafe7c0e
Better parameter naming
2023-01-20 23:45:16 -03:00
Silver267
f4634e4c32
Update.
2023-01-20 17:05:43 -05:00
oobabooga
c0f2367b54
Minor fix
2023-01-20 17:09:25 -03:00
oobabooga
185587a33e
Add a history size parameter to the chat
...
If too many messages are used in the prompt, the model
gets really slow. It is useful to have the ability to
limit this.
2023-01-20 17:03:09 -03:00
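The rationale above is that prompt length drives generation time, so capping how many past exchanges enter the prompt keeps the model fast. A minimal sketch of such a cap (the function name and the "0 means unlimited" convention are assumptions for illustration):

```python
def limit_history(history, history_size):
    """Return only the most recent `history_size` exchanges.

    A falsy size (0 or None) means unlimited, matching the later
    change that made the maximum history size unlimited by default."""
    if not history_size:
        return history
    return history[-history_size:]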
oobabooga
78d5a999e6
Improve prompt formatting
2023-01-20 01:54:38 -03:00
oobabooga
70ff685736
Encode the input string correctly
2023-01-20 00:45:02 -03:00
oobabooga
b66d18d5a0
Allow presets/characters with '.' in their names
2023-01-19 21:56:33 -03:00
oobabooga
11c3214981
Fix some regexes
2023-01-19 19:59:34 -03:00
oobabooga
e61138bdad
Minor fixes
2023-01-19 19:04:54 -03:00
oobabooga
2181fca709
Better defaults for chat
2023-01-19 18:58:45 -03:00
oobabooga
83808171d3
Add --share option for Colab
2023-01-19 17:31:29 -03:00
oobabooga
8d788874d7
Add support for characters
2023-01-19 16:46:46 -03:00
oobabooga
3121f4788e
Fix uploading chat log in --chat mode
2023-01-19 15:05:42 -03:00
oobabooga
849e4c7f90
Better way of finding the generated reply in the output string
2023-01-19 14:57:01 -03:00
oobabooga
d03b0ad7a8
Implement saving/loading chat logs (#9)
2023-01-19 14:03:47 -03:00
oobabooga
39bfea5a22
Add a progress bar
2023-01-19 12:20:57 -03:00
oobabooga
5390fc87c8
Add auto-devices when disk is used
2023-01-19 12:11:44 -03:00
oobabooga
759da435e3
Release 8-bit models memory
2023-01-19 12:03:16 -03:00
oobabooga
7ace04864a
Implement sending layers to disk with --disk (#10)
2023-01-19 11:09:24 -03:00
oobabooga
93fa9bbe01
Clean up the streaming implementation
2023-01-19 10:43:05 -03:00
oobabooga
c90310e40e
Small simplification
2023-01-19 00:41:57 -03:00
oobabooga
99536ef5bf
Add no-stream option
2023-01-18 23:56:42 -03:00
oobabooga
116299b3ad
Manual eos_token implementation
2023-01-18 22:57:39 -03:00
oobabooga
3cb30bed0a
Add a "stop" button
2023-01-18 22:44:47 -03:00
oobabooga
8f27d33034
Fix another bug
2023-01-18 22:08:23 -03:00
oobabooga
6c7f187586
Minor change
2023-01-18 21:59:23 -03:00
oobabooga
b3cba0b330
Bug fix
2023-01-18 21:54:44 -03:00
oobabooga
df2e910421
Stop generating in chat mode when \nYou: is generated
2023-01-18 21:51:18 -03:00
oobabooga
022960a087
This is the correct way of sampling 1 token at a time
2023-01-18 21:37:21 -03:00
oobabooga
0f01a3b1fa
Implement text streaming (#10)
...
Still experimental. There might be bugs.
2023-01-18 19:06:50 -03:00
oobabooga
ca13acdfa0
Ensure that the chat prompt will always contain < 2048 tokens
...
This way, we can keep the context string at the top of the prompt
even if you keep talking to the bot for hours.
Before this commit, the prompt would be simply truncated and the
context string would eventually be lost.
2023-01-17 20:16:23 -03:00
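The commit body above describes the key trick: pin the context string at the top of the prompt and drop the oldest chat messages, rather than blindly truncating the front of the prompt and eventually losing the context. A minimal sketch of that idea under stated assumptions (`build_prompt` and the `count_tokens` callable are hypothetical, not the repository's actual code):

```python
def build_prompt(context, history, count_tokens, max_tokens=2048):
    """Keep `context` at the top of the prompt; walk the history from
    newest to oldest and keep only the messages that fit in the
    remaining token budget."""
    budget = max_tokens - count_tokens(context)
    kept, total = [], 0
    for msg in reversed(history):
        n = count_tokens(msg)
        if total + n > budget:
            break  # everything older than this is dropped
        kept.append(msg)
        total += n
    return context + "".join(reversed(kept))
```

Simple front-truncation would instead cut the context string first, since it is the oldest text in the prompt; iterating newest-to-oldest over only the history guarantees the context survives arbitrarily long conversations.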
oobabooga
6456777b09
Clean things up
2023-01-16 16:35:45 -03:00
oobabooga
3a99b2b030
Change a truncation parameter
2023-01-16 13:53:30 -03:00
oobabooga
54bf55372b
Truncate prompts to 2048 characters
2023-01-16 13:43:23 -03:00
oobabooga
c7a2818665
Grammar
2023-01-16 10:10:09 -03:00
oobabooga
d973897021
Typo
2023-01-16 01:52:28 -03:00
oobabooga
47a20638de
Don't need this
2023-01-15 23:15:30 -03:00
oobabooga
b55486fa00
Reorganize things
2023-01-15 23:01:51 -03:00
oobabooga
ebf4d5f506
Add --max-gpu-memory parameter for #7
2023-01-15 22:33:35 -03:00
oobabooga
bb1a172da0
Fix a bug in cai mode chat
2023-01-15 19:41:25 -03:00