Commit Graph

665 Commits

Author SHA1 Message Date
oobabooga
7be372829d Set chat prompt size in tokens 2023-02-15 10:18:50 -03:00
oobabooga
8c3ef58e00 Use BLIP directly + some simplifications 2023-02-14 23:55:46 -03:00
SillyLossy
a7d98f494a Use BLIP to send a picture to model 2023-02-15 01:38:21 +02:00
oobabooga
d910d435cd Consider the softprompt in the maximum prompt length calculation 2023-02-14 12:06:47 -03:00
oobabooga
8b3bb512ef Minor bug fix (soft prompt was being loaded twice) 2023-02-13 23:34:04 -03:00
oobabooga
7739a29524 Some simplifications 2023-02-13 18:48:32 -03:00
oobabooga
3277b751f5 Add softprompt support (for real this time)
Is this too much voodoo for our purposes?
2023-02-13 15:25:16 -03:00
oobabooga
aa1177ff15 Send last internal reply to input rather than visible 2023-02-13 03:29:23 -03:00
oobabooga
2c3abcf57a Add support for rosey/chip/joi instruct models 2023-02-12 09:46:34 -03:00
oobabooga
7ef7bba6e6 Add progress bar for model loading 2023-02-12 09:36:27 -03:00
oobabooga
5d3f15b915 Use the CPU if no GPU is detected 2023-02-11 23:17:06 -03:00
oobabooga
b3c4657c47 Remove commas from preset files 2023-02-11 14:54:29 -03:00
oobabooga
0dd1409f24 Add penalty_alpha parameter (contrastive search) 2023-02-11 14:48:12 -03:00
oobabooga
2ed0386d87 Fix replace last reply in --chat mode (for #69) 2023-02-11 07:59:54 -03:00
oobabooga
316e07f06a auto-assign gpu memory with --auto-devices alone 2023-02-10 16:36:06 -03:00
oobabooga
219366342b Sort imports according to PEP8 (based on #67) 2023-02-10 15:40:03 -03:00
81300
20dbef9623 Extend bfloat16 support 2023-02-09 20:00:03 +02:00
oobabooga
cadd100405 min_length has to be 0 when streaming is on 2023-02-08 00:23:35 -03:00
oobabooga
6be571cff7 Better variable names 2023-02-08 00:19:20 -03:00
oobabooga
58b07cca81 length_penalty can be negative (apparently) 2023-02-07 23:33:02 -03:00
oobabooga
7e4c25691d Repetition penalty has to be < 5 2023-02-07 23:23:39 -03:00
oobabooga
1c30e1b49a Add even more sliders 2023-02-07 23:11:04 -03:00
oobabooga
24dc705eca Add lots of sliders 2023-02-07 22:08:21 -03:00
Martin J
06a4664805 Fix a regex issue in tokenize_dialogue.
The existing regex would fail if using character names that start with
numbers, for example: 9S or 2B.
2023-02-05 07:42:57 +01:00
oobabooga
2fe235738e Reorganize chat buttons 2023-02-04 22:53:42 -03:00
oobabooga
2207d44986 Windows doesn't like : in filenames 2023-02-04 20:07:39 -03:00
oobabooga
65266f3349 Fix loading official colab chat logs 2023-02-03 22:43:02 -03:00
oobabooga
44e8c671f9 Fix API documentation formatting in chat mode 2023-02-03 10:00:05 -03:00
oobabooga
a28f0d8bd7 Show it/s in the same units with or without streaming
Closes #49
2023-02-03 09:11:11 -03:00
oobabooga
4e4cd67223 Save chat history with name/date in filename
closes #50
2023-02-03 09:02:35 -03:00
oobabooga
3af3ffeb90 Make --help output more readable 2023-02-02 23:36:28 -03:00
oobabooga
638495b633 Simplify generate() function 2023-02-02 13:47:08 -03:00
oobabooga
3f05cf5ddd Simplify encode() function 2023-02-02 13:31:32 -03:00
oobabooga
2583bc5840 Simplify deepspeed implementation (#40) 2023-02-02 12:15:44 -03:00
oobabooga
f38c9bf428 Fix deepspeed (oops) 2023-02-02 10:39:37 -03:00
oobabooga
90f1067598 Move deepspeed parameters to another file 2023-02-02 10:25:09 -03:00
81300
248ec4fa21 Merge branch 'oobabooga:main' into ds 2023-02-01 20:50:51 +02:00
81300
a6f4760772 Add arg for bfloat16 2023-02-01 20:22:07 +02:00
81300
c515282f5c no_split_module_classes not needed 2023-02-01 19:47:26 +02:00
81300
0a0d289537 Fix issue with generating on multiple GPUs 2023-02-01 19:02:07 +02:00
81300
a97afa6965 Add DeepSpeed ZeRO-3 integration 2023-02-01 18:48:13 +02:00
oobabooga
6b13816c47 Change default --disk behavior 2023-02-01 10:43:28 -03:00
oobabooga
119be56390 Add back low_cpu_mem_usage=True
Removing it didn't help with anything, so I am adding it back on a purely
superstitious basis.
2023-02-01 10:01:44 -03:00
oobabooga
d4a0b377ab Allow standalone --cpu-memory
I think that what I am doing probably makes sense, but I could be wrong.
2023-01-31 21:23:16 -03:00
oobabooga
8ef89df746 Try to leave at least 1GiB free to prevent oom errors 2023-01-31 20:47:05 -03:00
oobabooga
bb77f20a6c Don't use low_cpu_mem_usage and device_map together 2023-01-31 13:24:05 -03:00
oobabooga
001ecf95b2 Update server.py 2023-01-31 08:14:16 -03:00
Silver267
a85bb5e9a2 Fix an error
Fixes "UnboundLocalError: local variable 'substring_found' referenced before assignment" when loading non-pygmalion models in cai chat mode.
2023-01-31 01:34:10 -05:00
oobabooga
5b0bbfa6e8 Clean up 2023-01-30 14:17:12 -03:00
oobabooga
2dadf42cb5 Print the tokenized example dialogue in a prettier way 2023-01-30 08:29:49 -03:00
oobabooga
161cae001b I needed this 2023-01-29 23:20:22 -03:00
oobabooga
3ebca480f6 Minor fix 2023-01-29 23:05:17 -03:00
oobabooga
00707a0b3b Add "Impersonate" button 2023-01-29 22:56:23 -03:00
oobabooga
de72e83508 Reorganize things 2023-01-29 14:27:22 -03:00
oobabooga
6fbfee9e6d Remove some bloat 2023-01-29 12:05:18 -03:00
oobabooga
9c9bd1074f Add option to replace the bot's last reply 2023-01-29 12:02:44 -03:00
oobabooga
e5ff4ddfc8 Add bot prefix modifier option in extensions 2023-01-29 10:11:59 -03:00
oobabooga
b6d01bb704 Enable extensions in all modes, not just chat 2023-01-29 09:48:18 -03:00
oobabooga
1a139664f5 Grammar 2023-01-29 02:54:36 -03:00
oobabooga
2d134031ca Apply extensions to character greeting 2023-01-29 00:04:11 -03:00
oobabooga
e349b52256 Read extensions parameters from settings file 2023-01-28 23:21:40 -03:00
oobabooga
2239be2351 Support for number/bool extension parameters 2023-01-28 23:08:28 -03:00
oobabooga
6da94e358c Add support for extensions parameters
Still experimental
2023-01-28 23:00:51 -03:00
oobabooga
e779fd795f Save TavernAI characters with TavernAI- prefix 2023-01-28 21:01:56 -03:00
oobabooga
833a1138fa Explain the dialogue tokenization output 2023-01-28 20:41:02 -03:00
oobabooga
545b7395b2 Prevent huge --help outputs 2023-01-28 20:36:51 -03:00
oobabooga
f4c455ce29 Merge pull request #30 from 10sa/patch-1
Add listening port options for listening mode.
2023-01-28 20:35:20 -03:00
oobabooga
7b283a4a3d Update server.py 2023-01-28 20:35:05 -03:00
oobabooga
f4674d34a9 Reorganize chat UI elements 2023-01-28 20:28:08 -03:00
oobabooga
3687962e6c Add support for TavernAI character cards (closes #31) 2023-01-28 20:18:23 -03:00
oobabooga
f71531186b Upload profile pictures from the web UI 2023-01-28 19:16:37 -03:00
Tensa
3742d3b18a Add listening port options for listening mode. 2023-01-28 03:38:34 +09:00
oobabooga
69ffef4391 History loading minor bug fix 2023-01-27 12:01:11 -03:00
oobabooga
8b8236c6ff Fix Regenerate button bug 2023-01-27 11:14:19 -03:00
oobabooga
1d1f931757 Load extensions at startup 2023-01-27 10:53:05 -03:00
oobabooga
70e034589f Update the export/load chat history functions 2023-01-27 02:16:05 -03:00
oobabooga
6b5dcd46c5 Add support for extensions
This is experimental.
2023-01-27 00:40:39 -03:00
oobabooga
e69990e37b Change order of upload and download tabs in chat mode 2023-01-26 16:57:12 -03:00
oobabooga
ac6065d5ed Fix character loading bug 2023-01-26 13:45:19 -03:00
oobabooga
61611197e0 Add --verbose option (oops) 2023-01-26 02:18:06 -03:00
oobabooga
abc920752f Stop at eos_token while streaming text (for #26) 2023-01-25 22:27:04 -03:00
oobabooga
b77933d327 File names must be img_me.jpg and img_bot.jpg 2023-01-25 19:40:30 -03:00
oobabooga
fc73188ec7 Allow specifying your own profile picture in chat mode 2023-01-25 19:37:44 -03:00
oobabooga
3fa14befc5 Bump the gradio version, add back the queue 2023-01-25 16:10:35 -03:00
oobabooga
7a3717b824 Allow uploading characters 2023-01-25 15:45:25 -03:00
oobabooga
6388c7fbc0 Set queue size to 1 to prevent gradio undefined behavior 2023-01-25 14:37:41 -03:00
oobabooga
ec69c190ba Keep the character's greeting/example dialogue when "clear history" is clicked 2023-01-25 10:52:35 -03:00
oobabooga
ebed1dea56 Generate 8 tokens at a time in streaming mode instead of just 1
This is a performance optimization.
2023-01-25 10:38:26 -03:00
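The chunked-streaming optimization above (yield to the UI once per batch of 8 tokens instead of once per token) can be sketched in pure Python. This is an illustrative sketch only: `stream_reply`, `generate_step`, and the parameter names are hypothetical stand-ins, not the project's actual functions.

```python
# Hypothetical sketch of chunked streaming: instead of redrawing the UI after
# every single token, generate up to `chunk` (here 8) tokens per step.
# generate_step is a stand-in for the model call; it returns new token ids.
def stream_reply(generate_step, prompt_ids, max_new_tokens=200, chunk=8):
    ids = list(prompt_ids)
    produced = 0
    while produced < max_new_tokens:
        n = min(chunk, max_new_tokens - produced)
        new = generate_step(ids, n)   # up to n freshly sampled token ids
        if not new:
            break                     # model stopped early (e.g. eos)
        ids.extend(new)
        produced += len(new)
        yield list(ids)               # one UI update per chunk, not per token
```

With `chunk=8`, a 200-token reply triggers 25 UI updates instead of 200, which is where the performance win comes from.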
oobabooga
3b8f0021cc Stop generating at \nYou: in chat mode 2023-01-25 10:17:55 -03:00
oobabooga
54e77acac4 Rename to "Generation parameters preset" for clarity 2023-01-23 20:49:44 -03:00
oobabooga
ce4756fb88 Allow uploading chat history in official pygmalion web ui format 2023-01-23 15:29:01 -03:00
oobabooga
8325e23923 Fix bug in loading chat history as text file 2023-01-23 14:28:02 -03:00
oobabooga
059d47edb5 Submit with enter instead of shift+enter in chat mode 2023-01-23 14:04:01 -03:00
oobabooga
4820379139 Add debug preset (deterministic, should always give the same responses) 2023-01-23 13:36:01 -03:00
oobabooga
947b50e8ea Allow uploading chat history as simple text files 2023-01-23 09:45:10 -03:00
oobabooga
ebf720585b Mention time and it/s in terminal with streaming off 2023-01-22 20:07:19 -03:00
oobabooga
d87310ad61 Send last input to the input box when "Remove last" is clicked 2023-01-22 19:40:22 -03:00
oobabooga
d0ea6d5f86 Make the maximum history size in prompt unlimited by default 2023-01-22 17:17:35 -03:00
oobabooga
00f3b0996b Warn the user that chat mode becomes a lot slower with text streaming 2023-01-22 16:19:11 -03:00
oobabooga
c5cc3a3075 Fix bug in "remove last" button 2023-01-22 13:10:36 -03:00
oobabooga
a410cf1345 Mention that "Chat history size" means "Chat history size in prompt" 2023-01-22 03:15:35 -03:00
oobabooga
b3e1a874bc Fix bug in loading history 2023-01-22 02:32:54 -03:00
oobabooga
62b533f344 Add "regenerate" button to the chat 2023-01-22 02:19:58 -03:00
oobabooga
94ecbc6dff Export history as nicely formatted json 2023-01-22 01:24:16 -03:00
oobabooga
deacb96c34 Change the pygmalion default context 2023-01-22 00:49:59 -03:00
oobabooga
23f94f559a Improve the chat prompt design 2023-01-22 00:35:42 -03:00
oobabooga
139e2f0ab4 Redesign the upload/download chat history buttons 2023-01-22 00:22:50 -03:00
oobabooga
434d4b128c Add refresh buttons for the model/preset/character menus 2023-01-22 00:02:46 -03:00
oobabooga
1e5e56fa2e Better recognize the 4chan model (for #19) 2023-01-21 22:13:01 -03:00
oobabooga
aadf4e899a Improve example dialogue handling 2023-01-21 15:04:13 -03:00
oobabooga
f9dbe7e08e Update README 2023-01-21 03:05:55 -03:00
oobabooga
27e2d932b0 Don't include the example dialogue in the export json 2023-01-21 02:55:13 -03:00
oobabooga
990ee54ddd Move the example dialogue to the chat history, and keep it hidden.
This greatly improves the performance of text generation, as
histories can be quite long. It also makes more sense to implement
it this way.
2023-01-21 02:48:06 -03:00
oobabooga
d7299df01f Rename parameters 2023-01-21 00:33:41 -03:00
oobabooga
5df03bf0fd Merge branch 'main' into main 2023-01-21 00:25:34 -03:00
oobabooga
faaafe7c0e Better parameter naming 2023-01-20 23:45:16 -03:00
Silver267
f4634e4c32 Update. 2023-01-20 17:05:43 -05:00
oobabooga
c0f2367b54 Minor fix 2023-01-20 17:09:25 -03:00
oobabooga
185587a33e Add a history size parameter to the chat
If too many messages are used in the prompt, the model
gets really slow. It is useful to have the ability to
limit this.
2023-01-20 17:03:09 -03:00
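A minimal sketch of the history-size limit this commit describes, assuming the history is a list of messages ordered oldest to newest. `limit_history` and its parameters are illustrative names rather than the project's actual API; treating a non-positive size as unlimited mirrors the later "unlimited by default" change (d0ea6d5f86).

```python
# Illustrative sketch (not the project's actual code): cap how many past
# messages are included in the prompt, since long prompts slow generation.
def limit_history(history, history_size):
    if history_size <= 0:
        return history            # non-positive size means unlimited
    return history[-history_size:]  # keep only the most recent messages
```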
oobabooga
78d5a999e6 Improve prompt formatation 2023-01-20 01:54:38 -03:00
oobabooga
70ff685736 Encode the input string correctly 2023-01-20 00:45:02 -03:00
oobabooga
b66d18d5a0 Allow presets/characters with '.' in their names 2023-01-19 21:56:33 -03:00
oobabooga
11c3214981 Fix some regexes 2023-01-19 19:59:34 -03:00
oobabooga
e61138bdad Minor fixes 2023-01-19 19:04:54 -03:00
oobabooga
2181fca709 Better defaults for chat 2023-01-19 18:58:45 -03:00
oobabooga
83808171d3 Add --share option for Colab 2023-01-19 17:31:29 -03:00
oobabooga
8d788874d7 Add support for characters 2023-01-19 16:46:46 -03:00
oobabooga
3121f4788e Fix uploading chat log in --chat mode 2023-01-19 15:05:42 -03:00
oobabooga
849e4c7f90 Better way of finding the generated reply in the output string 2023-01-19 14:57:01 -03:00
oobabooga
d03b0ad7a8 Implement saving/loading chat logs (#9) 2023-01-19 14:03:47 -03:00
oobabooga
39bfea5a22 Add a progress bar 2023-01-19 12:20:57 -03:00
oobabooga
5390fc87c8 add auto-devices when disk is used 2023-01-19 12:11:44 -03:00
oobabooga
759da435e3 Release 8-bit models memory 2023-01-19 12:03:16 -03:00
oobabooga
7ace04864a Implement sending layers to disk with --disk (#10) 2023-01-19 11:09:24 -03:00
oobabooga
93fa9bbe01 Clean up the streaming implementation 2023-01-19 10:43:05 -03:00
oobabooga
c90310e40e Small simplification 2023-01-19 00:41:57 -03:00
oobabooga
99536ef5bf Add no-stream option 2023-01-18 23:56:42 -03:00
oobabooga
116299b3ad Manual eos_token implementation 2023-01-18 22:57:39 -03:00
oobabooga
3cb30bed0a Add a "stop" button 2023-01-18 22:44:47 -03:00
oobabooga
8f27d33034 Fix another bug 2023-01-18 22:08:23 -03:00
oobabooga
6c7f187586 Minor change 2023-01-18 21:59:23 -03:00
oobabooga
b3cba0b330 Bug 2023-01-18 21:54:44 -03:00
oobabooga
df2e910421 Stop generating in chat mode when \nYou: is generated 2023-01-18 21:51:18 -03:00
oobabooga
022960a087 This is the correct way of sampling 1 token at a time 2023-01-18 21:37:21 -03:00
oobabooga
0f01a3b1fa Implement text streaming (#10)
Still experimental. There might be bugs.
2023-01-18 19:06:50 -03:00
oobabooga
ca13acdfa0 Ensure that the chat prompt will always contain < 2048 tokens
This way, we can keep the context string at the top of the prompt
even if you keep talking to the bot for hours.

Before this commit, the prompt would be simply truncated and the
context string would eventually be lost.
2023-01-17 20:16:23 -03:00
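The truncation strategy this commit describes (drop the oldest chat messages while always keeping the context string pinned at the top) can be sketched as follows. `build_prompt` and `count_tokens` are hypothetical names, with `count_tokens` standing in for the real tokenizer and 2048 for the model's context limit.

```python
# Illustrative sketch: assemble the prompt newest-message-first under a token
# budget, reserving room for the context string so it is never truncated.
def build_prompt(context, history, count_tokens, limit=2048):
    budget = limit - count_tokens(context)  # context is always included
    kept = []
    for msg in reversed(history):           # walk from newest to oldest
        cost = count_tokens(msg)
        if cost > budget:
            break                           # oldest messages fall off instead
        kept.append(msg)
        budget -= cost
    return context + "".join(reversed(kept))
```

Because messages are admitted newest-first, running out of budget discards the oldest turns, whereas naive front-truncation would eventually cut off the context string itself.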
oobabooga
6456777b09 Clean things up 2023-01-16 16:35:45 -03:00
oobabooga
3a99b2b030 Change a truncation parameter 2023-01-16 13:53:30 -03:00
oobabooga
54bf55372b Truncate prompts to 2048 characters 2023-01-16 13:43:23 -03:00
oobabooga
c7a2818665 Grammar 2023-01-16 10:10:09 -03:00
oobabooga
d973897021 Typo 2023-01-16 01:52:28 -03:00
oobabooga
47a20638de Don't need this 2023-01-15 23:15:30 -03:00
oobabooga
b55486fa00 Reorganize things 2023-01-15 23:01:51 -03:00
oobabooga
ebf4d5f506 Add --max-gpu-memory parameter for #7 2023-01-15 22:33:35 -03:00
oobabooga
bb1a172da0 Fix a bug in cai mode chat 2023-01-15 19:41:25 -03:00
oobabooga
e6691bd920 Make chat mode more like cai 2023-01-15 18:16:46 -03:00
oobabooga
e04ecd4bce Minor improvements 2023-01-15 16:43:31 -03:00
oobabooga
027c3dd27d Allow jpg profile images 2023-01-15 15:45:25 -03:00
oobabooga
afe9f77f96 Reorder parameters 2023-01-15 15:30:39 -03:00
oobabooga
88d67427e1 Implement default settings customization using a json file 2023-01-15 15:23:41 -03:00
oobabooga
6136da419c Add --cai-chat option that mimics Character.AI's interface 2023-01-15 12:20:04 -03:00
oobabooga
13b04c1b94 Add "remove last message" button to chat 2023-01-15 03:19:09 -03:00
oobabooga
fd220f827f Remove annoying warnings 2023-01-15 00:39:51 -03:00
oobabooga
d962e69496 Improve chat preprocessing 2023-01-14 23:50:34 -03:00
oobabooga
9a7f187b5a Improve pygmalion line breaks 2023-01-14 23:26:14 -03:00
oobabooga
ecb2cc2194 Pygmalion: add checkbox for choosing whether to stop at newline or not 2023-01-13 15:02:17 -03:00
oobabooga
3a00cb1bbd Reorganize GUI elements 2023-01-13 14:28:53 -03:00
oobabooga
3f1e70d2c8 Remove the temperature slider
It was not being used by most presets.
2023-01-13 14:00:43 -03:00
oobabooga
7f93012a89 Add default names/context for pygmalion 2023-01-13 10:12:47 -03:00
oobabooga
9410486bd8 Enable the API
Let's goooooooooooooo
2023-01-11 16:43:13 -03:00
oobabooga
66f73c1b32 Remove default text from output box 2023-01-11 01:36:11 -03:00
oobabooga
01ac065d7e Implement Continue button 2023-01-11 01:33:57 -03:00
oobabooga
4b09e7e355 Sort models alphabetically 2023-01-11 01:17:20 -03:00
oobabooga
d5e01c80e3 Add nice HTML output for all models 2023-01-11 01:10:11 -03:00
oobabooga
b2a2ddcb15 Remove T5 support (it sucks) 2023-01-10 23:39:50 -03:00
oobabooga
a236b24d24 Add --auto-devices and --load-in-8bit options for #4 2023-01-10 23:16:33 -03:00
oobabooga
3aefcfd963 Grammar 2023-01-09 19:07:47 -03:00
oobabooga
6c178b1c91 Add --listen parameter 2023-01-09 19:05:36 -03:00
oobabooga
13836a37c8 Remove unused parameter 2023-01-09 17:23:43 -03:00
oobabooga
f0013ac8e9 Don't need that 2023-01-09 16:30:14 -03:00
oobabooga
00a12889e9 Refactor model loading function 2023-01-09 16:28:04 -03:00
oobabooga
980f8112a7 Small bug fix 2023-01-09 12:56:54 -03:00
oobabooga
a751d7e693 Don't require GPT-J to be installed to load gpt4chan 2023-01-09 11:39:13 -03:00
oobabooga
6cbfe19c23 Submit with Shift+Enter 2023-01-09 11:22:12 -03:00
oobabooga
0e67ccf607 Implement CPU mode 2023-01-09 10:58:46 -03:00
oobabooga
f2a548c098 Stop generating at \n in chat mode
Makes it a lot more efficient.
2023-01-08 23:00:38 -03:00
oobabooga
a9280dde52 Increase chat height, reorganize things 2023-01-08 20:10:31 -03:00
oobabooga
b871f76aac Better default for chat output length
Ideally, generation should stop at '\n', but this feature is brand new
on transformers (https://github.com/huggingface/transformers/pull/20727)
2023-01-08 15:00:02 -03:00
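Since transformers could not yet stop generation at '\n' natively, commits such as df2e910421 and 3b8f0021cc stop the reply at chat stop sequences instead. One simple way to approximate that behavior, sketched here with the illustrative name `trim_at_stop` (an assumption, not the project's actual implementation), is to cut the decoded text at the earliest stop sequence.

```python
# Illustrative sketch (not the project's actual code): trim a generated reply
# at the first occurrence of any chat stop sequence, e.g. "\nYou:" or "\n".
def trim_at_stop(text, stop_strings=("\nYou:", "\n")):
    cut = len(text)
    for stop in stop_strings:
        i = text.find(stop)
        if i != -1:
            cut = min(cut, i)  # earliest stop sequence wins
    return text[:cut]
```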
oobabooga
b801e0d50d Minor changes 2023-01-08 14:37:43 -03:00
oobabooga
730c5562cc Disable gradio analytics 2023-01-08 01:42:38 -03:00
oobabooga
493051d5d5 Chat improvements 2023-01-08 01:33:45 -03:00
oobabooga
4058b33fc9 Improve the chat experience 2023-01-08 01:10:02 -03:00
oobabooga
ef4e610d37 Re-enable the progress bar in notebook mode 2023-01-07 23:01:39 -03:00
oobabooga
c3a0d00715 Name the input box 2023-01-07 22:55:54 -03:00
oobabooga
f76bdadbed Add chat mode 2023-01-07 22:52:46 -03:00
oobabooga
300a500c0b Improve spacings 2023-01-07 19:11:21 -03:00
oobabooga
5345685ead Make paths cross-platform (should work on Windows now) 2023-01-07 16:33:43 -03:00
oobabooga
342e756878 Better recognize the model sizes 2023-01-07 12:21:04 -03:00
oobabooga
62c4d9880b Fix galactica equations (more) 2023-01-07 12:13:09 -03:00
oobabooga
eeb63b1b8a Fix galactica equations 2023-01-07 01:56:21 -03:00
oobabooga
3aaf5fb4aa Make NovelAI-Sphinx Moth the default preset 2023-01-07 00:49:47 -03:00
oobabooga
c7b29668a2 Add HTML support for gpt4chan 2023-01-06 23:14:08 -03:00
oobabooga
3d6a3aac73 Reorganize the layout 2023-01-06 22:05:37 -03:00
oobabooga
4c89d4ab29 Name the inputs 2023-01-06 20:26:47 -03:00
oobabooga
e5f547fc87 Implement notebook mode 2023-01-06 20:22:26 -03:00
oobabooga
f54a13929f Load default model with --model flag 2023-01-06 19:56:44 -03:00
oobabooga
1da8d4a787 Remove a space 2023-01-06 02:58:09 -03:00
oobabooga
ee650343bc Better defaults while loading models 2023-01-06 02:54:33 -03:00
oobabooga
9498dca748 Make model autodetect all gpt-neo and opt models 2023-01-06 02:31:54 -03:00
oobabooga
deefa2e86a Add comments 2023-01-06 02:26:33 -03:00
oobabooga
c06d7d28cb Autodetect available models 2023-01-06 02:06:59 -03:00
oobabooga
285032da36 Make model loading more transparent 2023-01-06 01:41:52 -03:00
oobabooga
c65bad40dc Add support for presets 2023-01-06 01:33:21 -03:00
oobabooga
838f768437 Add files 2022-12-21 13:27:31 -03:00
oobabooga
dde76a962f Initial commit 2022-12-21 13:17:06 -03:00