Commit Graph

269 Commits

Author SHA1 Message Date
oobabooga
2eea0f4edb Minor change 2023-02-15 12:58:11 -03:00
oobabooga
3c31fa7079 Simplifications 2023-02-15 12:46:11 -03:00
oobabooga
80fbc584f7 Readability 2023-02-15 11:38:44 -03:00
oobabooga
b397bea387 Make chat history persistent 2023-02-15 11:30:38 -03:00
oobabooga
7be372829d Set chat prompt size in tokens 2023-02-15 10:18:50 -03:00
oobabooga
8c3ef58e00 Use BLIP directly + some simplifications 2023-02-14 23:55:46 -03:00
SillyLossy
a7d98f494a Use BLIP to send a picture to model 2023-02-15 01:38:21 +02:00
oobabooga
d910d435cd Consider the softprompt in the maximum prompt length calculation 2023-02-14 12:06:47 -03:00
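The commit above folds the soft prompt into the prompt-length budget. A minimal sketch of that bookkeeping, with all names illustrative (the log does not show the actual function):

```python
# Hypothetical sketch of the budget adjustment described above: tokens
# occupied by soft-prompt embeddings must be subtracted from the context
# window, alongside the generation budget, before truncating the prompt.

def max_prompt_length(context_size, max_new_tokens, softprompt_tokens=0):
    """Tokens left for the user's prompt after reserving space for the
    new tokens to be generated and any soft-prompt embeddings."""
    return context_size - max_new_tokens - softprompt_tokens

# Without a soft prompt: 2048 - 200 = 1848 tokens available for the prompt.
# A 100-token soft prompt shrinks the budget to 1748.
```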
oobabooga
8b3bb512ef Minor bug fix (soft prompt was being loaded twice) 2023-02-13 23:34:04 -03:00
oobabooga
7739a29524 Some simplifications 2023-02-13 18:48:32 -03:00
oobabooga
3277b751f5 Add softprompt support (for real this time)
Is this too much voodoo for our purposes?
2023-02-13 15:25:16 -03:00
oobabooga
aa1177ff15 Send last internal reply to input rather than visible 2023-02-13 03:29:23 -03:00
oobabooga
2c3abcf57a Add support for rosey/chip/joi instruct models 2023-02-12 09:46:34 -03:00
oobabooga
7ef7bba6e6 Add progress bar for model loading 2023-02-12 09:36:27 -03:00
oobabooga
5d3f15b915 Use the CPU if no GPU is detected 2023-02-11 23:17:06 -03:00
oobabooga
b3c4657c47 Remove commas from preset files 2023-02-11 14:54:29 -03:00
oobabooga
0dd1409f24 Add penalty_alpha parameter (contrastive search) 2023-02-11 14:48:12 -03:00
oobabooga
2ed0386d87 Fix replace last reply in --chat mode (for #69) 2023-02-11 07:59:54 -03:00
oobabooga
316e07f06a Auto-assign GPU memory with --auto-devices alone 2023-02-10 16:36:06 -03:00
oobabooga
219366342b Sort imports according to PEP8 (based on #67) 2023-02-10 15:40:03 -03:00
81300
20dbef9623 Extend bfloat16 support 2023-02-09 20:00:03 +02:00
oobabooga
cadd100405 min_length has to be 0 when streaming is on 2023-02-08 00:23:35 -03:00
oobabooga
6be571cff7 Better variable names 2023-02-08 00:19:20 -03:00
oobabooga
58b07cca81 length_penalty can be negative (apparently) 2023-02-07 23:33:02 -03:00
oobabooga
7e4c25691d Repetition penalty has to be < 5 2023-02-07 23:23:39 -03:00
oobabooga
1c30e1b49a Add even more sliders 2023-02-07 23:11:04 -03:00
oobabooga
24dc705eca Add lots of sliders 2023-02-07 22:08:21 -03:00
Martin J
06a4664805 Fix a regex issue in tokenize_dialogue.
The existing regex would fail if using character names that start with
numbers, for example: 9S or 2B.
2023-02-05 07:42:57 +01:00
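The failure mode described in that commit can be reproduced with a toy pattern. The actual regex is not shown in the log; the two patterns below are illustrative only:

```python
import re

# Illustrative only: a speaker-name pattern anchored on a leading letter
# misses names that start with a digit, such as "9S" or "2B".
strict = re.compile(r'^[A-Za-z]\w*:')   # fails on digit-leading names
relaxed = re.compile(r'^\w+:')          # accepts "9S:" and "2B:" as well

print(strict.match("9S: Hello"))    # None — this turn would not be detected
print(relaxed.match("9S: Hello"))   # matches "9S:"
```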
oobabooga
2fe235738e Reorganize chat buttons 2023-02-04 22:53:42 -03:00
oobabooga
2207d44986 Windows doesn't like : in filenames 2023-02-04 20:07:39 -03:00
oobabooga
65266f3349 Fix loading official colab chat logs 2023-02-03 22:43:02 -03:00
oobabooga
44e8c671f9 Fix API documentation formatting in chat mode 2023-02-03 10:00:05 -03:00
oobabooga
a28f0d8bd7 Show it/s in the same units with or without streaming
Closes #49
2023-02-03 09:11:11 -03:00
oobabooga
4e4cd67223 Save chat history with name/date in filename
Closes #50
2023-02-03 09:02:35 -03:00
oobabooga
3af3ffeb90 Make --help output more readable 2023-02-02 23:36:28 -03:00
oobabooga
638495b633 Simplify generate() function 2023-02-02 13:47:08 -03:00
oobabooga
3f05cf5ddd Simplify encode() function 2023-02-02 13:31:32 -03:00
oobabooga
2583bc5840 Simplify deepspeed implementation (#40) 2023-02-02 12:15:44 -03:00
oobabooga
f38c9bf428 Fix deepspeed (oops) 2023-02-02 10:39:37 -03:00
oobabooga
90f1067598 Move deepspeed parameters to another file 2023-02-02 10:25:09 -03:00
81300
248ec4fa21 Merge branch 'oobabooga:main' into ds 2023-02-01 20:50:51 +02:00
81300
a6f4760772 Add arg for bfloat16 2023-02-01 20:22:07 +02:00
81300
c515282f5c no_split_module_classes not needed 2023-02-01 19:47:26 +02:00
81300
0a0d289537 Fix issue with generating on multiple GPUs 2023-02-01 19:02:07 +02:00
81300
a97afa6965 Add DeepSpeed ZeRO-3 integration 2023-02-01 18:48:13 +02:00
oobabooga
6b13816c47 Change default --disk behavior 2023-02-01 10:43:28 -03:00
oobabooga
119be56390 Add back low_cpu_mem_usage=True
Removing it didn't help with anything, so I am adding it back on a purely
superstitious basis.
2023-02-01 10:01:44 -03:00
oobabooga
d4a0b377ab Allow standalone --cpu-memory
I think that what I am doing probably makes sense, but I could be wrong.
2023-01-31 21:23:16 -03:00
oobabooga
8ef89df746 Try to leave at least 1GiB free to prevent oom errors 2023-01-31 20:47:05 -03:00

oobabooga
bb77f20a6c Don't use low_cpu_mem_usage and device_map together 2023-01-31 13:24:05 -03:00
oobabooga
001ecf95b2 Update server.py 2023-01-31 08:14:16 -03:00
Silver267
a85bb5e9a2 Fix an error
Fixes "UnboundLocalError: local variable 'substring_found' referenced before assignment" when loading non-pygmalion models in cai chat mode.
2023-01-31 01:34:10 -05:00
oobabooga
5b0bbfa6e8 Clean up 2023-01-30 14:17:12 -03:00
oobabooga
2dadf42cb5 Print the tokenized example dialogue in a prettier way 2023-01-30 08:29:49 -03:00
oobabooga
161cae001b I needed this 2023-01-29 23:20:22 -03:00
oobabooga
3ebca480f6 Minor fix 2023-01-29 23:05:17 -03:00
oobabooga
00707a0b3b Add "Impersonate" button 2023-01-29 22:56:23 -03:00
oobabooga
de72e83508 Reorganize things 2023-01-29 14:27:22 -03:00
oobabooga
6fbfee9e6d Remove some bloat 2023-01-29 12:05:18 -03:00
oobabooga
9c9bd1074f Add option to replace the bot's last reply 2023-01-29 12:02:44 -03:00
oobabooga
e5ff4ddfc8 Add bot prefix modifier option in extensions 2023-01-29 10:11:59 -03:00
oobabooga
b6d01bb704 Enable extensions in all modes, not just chat 2023-01-29 09:48:18 -03:00
oobabooga
1a139664f5 Grammar 2023-01-29 02:54:36 -03:00
oobabooga
2d134031ca Apply extensions to character greeting 2023-01-29 00:04:11 -03:00
oobabooga
e349b52256 Read extensions parameters from settings file 2023-01-28 23:21:40 -03:00
oobabooga
2239be2351 Support for number/bool extension parameters 2023-01-28 23:08:28 -03:00
oobabooga
6da94e358c Add support for extensions parameters
Still experimental
2023-01-28 23:00:51 -03:00
oobabooga
e779fd795f Save TavernAI characters with TavernAI- prefix 2023-01-28 21:01:56 -03:00
oobabooga
833a1138fa Explain the dialogue tokenization output 2023-01-28 20:41:02 -03:00
oobabooga
545b7395b2 Prevent huge --help outputs 2023-01-28 20:36:51 -03:00
oobabooga
f4c455ce29 Merge pull request #30 from 10sa/patch-1
Add listening port options for listening mode.
2023-01-28 20:35:20 -03:00
oobabooga
7b283a4a3d Update server.py 2023-01-28 20:35:05 -03:00
oobabooga
f4674d34a9 Reorganize chat UI elements 2023-01-28 20:28:08 -03:00
oobabooga
3687962e6c Add support for TavernAI character cards (closes #31) 2023-01-28 20:18:23 -03:00
oobabooga
f71531186b Upload profile pictures from the web UI 2023-01-28 19:16:37 -03:00
Tensa
3742d3b18a Add listening port options for listening mode. 2023-01-28 03:38:34 +09:00
oobabooga
69ffef4391 History loading minor bug fix 2023-01-27 12:01:11 -03:00
oobabooga
8b8236c6ff Fix Regenerate button bug 2023-01-27 11:14:19 -03:00
oobabooga
1d1f931757 Load extensions at startup 2023-01-27 10:53:05 -03:00
oobabooga
70e034589f Update the export/load chat history functions 2023-01-27 02:16:05 -03:00
oobabooga
6b5dcd46c5 Add support for extensions
This is experimental.
2023-01-27 00:40:39 -03:00
oobabooga
e69990e37b Change order of upload and download tabs in chat mode 2023-01-26 16:57:12 -03:00
oobabooga
ac6065d5ed Fix character loading bug 2023-01-26 13:45:19 -03:00
oobabooga
61611197e0 Add --verbose option (oops) 2023-01-26 02:18:06 -03:00
oobabooga
abc920752f Stop at eos_token while streaming text (for #26) 2023-01-25 22:27:04 -03:00
oobabooga
b77933d327 File names must be img_me.jpg and img_bot.jpg 2023-01-25 19:40:30 -03:00
oobabooga
fc73188ec7 Allow specifying your own profile picture in chat mode 2023-01-25 19:37:44 -03:00
oobabooga
3fa14befc5 Bump the gradio version, add back the queue 2023-01-25 16:10:35 -03:00
oobabooga
7a3717b824 Allow uploading characters 2023-01-25 15:45:25 -03:00
oobabooga
6388c7fbc0 Set queue size to 1 to prevent gradio undefined behavior 2023-01-25 14:37:41 -03:00
oobabooga
ec69c190ba Keep the character's greeting/example dialogue when "clear history" is clicked 2023-01-25 10:52:35 -03:00
oobabooga
ebed1dea56 Generate 8 tokens at a time in streaming mode instead of just 1
This is a performance optimization.
2023-01-25 10:38:26 -03:00
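The optimization in that commit amortizes per-call overhead across several tokens. A minimal sketch of the chunked loop, with `generate` standing in for the model call (names are illustrative, not the project's API):

```python
# Hypothetical sketch of chunked streaming: instead of one model call per
# token, each call emits up to `chunk` tokens, cutting per-call overhead
# roughly eightfold at chunk=8. `generate` stands in for the model.

def stream(generate, prompt_len, max_new_tokens, chunk=8):
    produced, calls = 0, 0
    while produced < max_new_tokens:
        n = min(chunk, max_new_tokens - produced)
        generate(prompt_len + produced, n)  # model call emitting n tokens
        produced += n
        calls += 1
        # (real code would also stop early when an eos token appears)
    return calls

calls = stream(lambda pos, n: None, prompt_len=100, max_new_tokens=200)
print(calls)  # 25 calls instead of the 200 needed with chunk=1
```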
oobabooga
3b8f0021cc Stop generating at \nYou: in chat mode 2023-01-25 10:17:55 -03:00
oobabooga
54e77acac4 Rename to "Generation parameters preset" for clarity 2023-01-23 20:49:44 -03:00
oobabooga
ce4756fb88 Allow uploading chat history in official pygmalion web ui format 2023-01-23 15:29:01 -03:00
oobabooga
8325e23923 Fix bug in loading chat history as text file 2023-01-23 14:28:02 -03:00
oobabooga
059d47edb5 Submit with enter instead of shift+enter in chat mode 2023-01-23 14:04:01 -03:00
oobabooga
4820379139 Add debug preset (deterministic, should always give the same responses) 2023-01-23 13:36:01 -03:00
oobabooga
947b50e8ea Allow uploading chat history as simple text files 2023-01-23 09:45:10 -03:00
oobabooga
ebf720585b Mention time and it/s in terminal with streaming off 2023-01-22 20:07:19 -03:00