oobabooga
a410cf1345
Mention that "Chat history size" means "Chat history size in prompt"
2023-01-22 03:15:35 -03:00
oobabooga
b3e1a874bc
Fix bug in loading history
2023-01-22 02:32:54 -03:00
oobabooga
62b533f344
Add "regenerate" button to the chat
2023-01-22 02:19:58 -03:00
oobabooga
94ecbc6dff
Export history as nicely formatted json
2023-01-22 01:24:16 -03:00
oobabooga
deacb96c34
Change the pygmalion default context
2023-01-22 00:49:59 -03:00
oobabooga
23f94f559a
Improve the chat prompt design
2023-01-22 00:35:42 -03:00
oobabooga
139e2f0ab4
Redesign the upload/download chat history buttons
2023-01-22 00:22:50 -03:00
oobabooga
434d4b128c
Add refresh buttons for the model/preset/character menus
2023-01-22 00:02:46 -03:00
oobabooga
1e5e56fa2e
Better recognize the 4chan model (for #19)
2023-01-21 22:13:01 -03:00
oobabooga
aadf4e899a
Improve example dialogue handling
2023-01-21 15:04:13 -03:00
oobabooga
f9dbe7e08e
Update README
2023-01-21 03:05:55 -03:00
oobabooga
27e2d932b0
Don't include the example dialogue in the export json
2023-01-21 02:55:13 -03:00
oobabooga
990ee54ddd
Move the example dialogue to the chat history, and keep it hidden.
...
This greatly improves the performance of text generation, as
histories can be quite long. It also makes more sense to implement
it this way.
2023-01-21 02:48:06 -03:00
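A minimal sketch of this approach, with hypothetical names (not the repository's actual code): the hidden example turns are prepended to the turns used for prompt building, but only the visible turns are rendered in the UI.

    def build_prompt(context, example_dialogue, visible_history):
        # example_dialogue: hidden (user, bot) turns shipped with the character
        # visible_history: (user, bot) turns actually typed during the chat
        turns = example_dialogue + visible_history
        lines = [context.strip(), ""]
        for user_msg, bot_msg in turns:
            lines.append(f"You: {user_msg}")
            lines.append(f"Bot: {bot_msg}")
        lines.append("Bot:")
        return "\n".join(lines)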
oobabooga
d7299df01f
Rename parameters
2023-01-21 00:33:41 -03:00
oobabooga
5df03bf0fd
Merge branch 'main' into main
2023-01-21 00:25:34 -03:00
oobabooga
faaafe7c0e
Better parameter naming
2023-01-20 23:45:16 -03:00
Silver267
f4634e4c32
Update.
2023-01-20 17:05:43 -05:00
oobabooga
c0f2367b54
Minor fix
2023-01-20 17:09:25 -03:00
oobabooga
185587a33e
Add a history size parameter to the chat
...
If too many messages are used in the prompt, the model
gets really slow. It is useful to have the ability to
limit this.
2023-01-20 17:03:09 -03:00
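A sketch of what such a history size limit can look like (hypothetical helper, assuming the history is a list of turns): only the most recent N turns are placed in the prompt, which keeps generation fast once the conversation grows long.

    def truncate_history(history, max_turns):
        # Keep only the most recent max_turns exchanges; 0 means "no limit".
        if max_turns <= 0:
            return history
        return history[-max_turns:]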
oobabooga
78d5a999e6
Improve prompt formatting
2023-01-20 01:54:38 -03:00
oobabooga
70ff685736
Encode the input string correctly
2023-01-20 00:45:02 -03:00
oobabooga
b66d18d5a0
Allow presets/characters with '.' in their names
2023-01-19 21:56:33 -03:00
oobabooga
11c3214981
Fix some regexes
2023-01-19 19:59:34 -03:00
oobabooga
e61138bdad
Minor fixes
2023-01-19 19:04:54 -03:00
oobabooga
2181fca709
Better defaults for chat
2023-01-19 18:58:45 -03:00
oobabooga
83808171d3
Add --share option for Colab
2023-01-19 17:31:29 -03:00
oobabooga
8d788874d7
Add support for characters
2023-01-19 16:46:46 -03:00
oobabooga
3121f4788e
Fix uploading chat log in --chat mode
2023-01-19 15:05:42 -03:00
oobabooga
849e4c7f90
Better way of finding the generated reply in the output string
2023-01-19 14:57:01 -03:00
oobabooga
d03b0ad7a8
Implement saving/loading chat logs (#9)
2023-01-19 14:03:47 -03:00
oobabooga
39bfea5a22
Add a progress bar
2023-01-19 12:20:57 -03:00
oobabooga
5390fc87c8
Add auto-devices when disk is used
2023-01-19 12:11:44 -03:00
oobabooga
759da435e3
Release 8-bit models memory
2023-01-19 12:03:16 -03:00
oobabooga
7ace04864a
Implement sending layers to disk with --disk (#10)
2023-01-19 11:09:24 -03:00
oobabooga
93fa9bbe01
Clean up the streaming implementation
2023-01-19 10:43:05 -03:00
oobabooga
c90310e40e
Small simplification
2023-01-19 00:41:57 -03:00
oobabooga
99536ef5bf
Add no-stream option
2023-01-18 23:56:42 -03:00
oobabooga
116299b3ad
Manual eos_token implementation
2023-01-18 22:57:39 -03:00
oobabooga
3cb30bed0a
Add a "stop" button
2023-01-18 22:44:47 -03:00
oobabooga
8f27d33034
Fix another bug
2023-01-18 22:08:23 -03:00
oobabooga
6c7f187586
Minor change
2023-01-18 21:59:23 -03:00
oobabooga
b3cba0b330
Fix a bug
2023-01-18 21:54:44 -03:00
oobabooga
df2e910421
Stop generating in chat mode when \nYou: is generated
2023-01-18 21:51:18 -03:00
oobabooga
022960a087
This is the correct way of sampling 1 token at a time
2023-01-18 21:37:21 -03:00
oobabooga
0f01a3b1fa
Implement text streaming (#10)
...
Still experimental. There might be bugs.
2023-01-18 19:06:50 -03:00
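A simplified sketch of streaming by sampling one token at a time and yielding the partial reply after each step (illustrative only; the project's implementation may differ):

    def stream_generate(model, tokenizer, prompt, max_new_tokens=200):
        input_ids = tokenizer(prompt, return_tensors="pt").input_ids
        prompt_length = input_ids.shape[1]
        for _ in range(max_new_tokens):
            # Ask for exactly one new token, then feed the extended sequence back in.
            input_ids = model.generate(input_ids, max_new_tokens=1, do_sample=True)
            yield tokenizer.decode(input_ids[0, prompt_length:])
            if input_ids[0, -1].item() == tokenizer.eos_token_id:
                break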
oobabooga
ca13acdfa0
Ensure that the chat prompt will always contain < 2048 tokens
...
This way, we can keep the context string at the top of the prompt
even if you keep talking to the bot for hours.
Before this commit, the prompt would be simply truncated and the
context string would eventually be lost.
2023-01-17 20:16:23 -03:00
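A sketch of this truncation strategy (names and exact budget handling are illustrative): the context string stays at the top, and chat turns are added from newest to oldest until the token budget would be exceeded.

    def build_bounded_prompt(tokenizer, context, turns, max_tokens=2048, reserve=200):
        # Reserve some tokens for the reply, then spend the rest on history.
        budget = max_tokens - reserve - len(tokenizer.encode(context))
        kept = []
        for turn in reversed(turns):        # newest turns first
            cost = len(tokenizer.encode(turn))
            if cost > budget:
                break
            budget -= cost
            kept.insert(0, turn)            # restore chronological order
        return "\n".join([context] + kept)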
oobabooga
6456777b09
Clean things up
2023-01-16 16:35:45 -03:00
oobabooga
3a99b2b030
Change a truncation parameter
2023-01-16 13:53:30 -03:00
oobabooga
54bf55372b
Truncate prompts to 2048 characters
2023-01-16 13:43:23 -03:00
oobabooga
c7a2818665
Grammar
2023-01-16 10:10:09 -03:00
oobabooga
d973897021
Typo
2023-01-16 01:52:28 -03:00
oobabooga
47a20638de
Don't need this
2023-01-15 23:15:30 -03:00
oobabooga
b55486fa00
Reorganize things
2023-01-15 23:01:51 -03:00
oobabooga
ebf4d5f506
Add --max-gpu-memory parameter for #7
2023-01-15 22:33:35 -03:00
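A hedged guess at what a --max-gpu-memory flag maps to in transformers/accelerate (the flag-to-kwarg mapping is an assumption, not the project's exact code): cap GPU 0 and let the remaining layers spill to CPU RAM.

    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "some-model",                        # placeholder model name
        device_map="auto",
        max_memory={0: "10GiB", "cpu": "30GiB"},
    )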
oobabooga
bb1a172da0
Fix a bug in cai mode chat
2023-01-15 19:41:25 -03:00
oobabooga
e6691bd920
Make chat mode more like cai
2023-01-15 18:16:46 -03:00
oobabooga
e04ecd4bce
Minor improvements
2023-01-15 16:43:31 -03:00
oobabooga
027c3dd27d
Allow jpg profile images
2023-01-15 15:45:25 -03:00
oobabooga
afe9f77f96
Reorder parameters
2023-01-15 15:30:39 -03:00
oobabooga
88d67427e1
Implement default settings customization using a json file
2023-01-15 15:23:41 -03:00
oobabooga
6136da419c
Add --cai-chat option that mimics Character.AI's interface
2023-01-15 12:20:04 -03:00
oobabooga
13b04c1b94
Add "remove last message" button to chat
2023-01-15 03:19:09 -03:00
oobabooga
fd220f827f
Remove annoying warnings
2023-01-15 00:39:51 -03:00
oobabooga
d962e69496
Improve chat preprocessing
2023-01-14 23:50:34 -03:00
oobabooga
9a7f187b5a
Improve pygmalion line breaks
2023-01-14 23:26:14 -03:00
oobabooga
ecb2cc2194
Pygmalion: add checkbox for choosing whether to stop at newline or not
2023-01-13 15:02:17 -03:00
oobabooga
3a00cb1bbd
Reorganize GUI elements
2023-01-13 14:28:53 -03:00
oobabooga
3f1e70d2c8
Remove the temperature slider
...
It was not being used by most presets.
2023-01-13 14:00:43 -03:00
oobabooga
7f93012a89
Add default names/context for pygmalion
2023-01-13 10:12:47 -03:00
oobabooga
9410486bd8
Enable the API
...
Let's goooooooooooooo
2023-01-11 16:43:13 -03:00
oobabooga
66f73c1b32
Remove default text from output box
2023-01-11 01:36:11 -03:00
oobabooga
01ac065d7e
Implement Continue button
2023-01-11 01:33:57 -03:00
oobabooga
4b09e7e355
Sort models alphabetically
2023-01-11 01:17:20 -03:00
oobabooga
d5e01c80e3
Add nice HTML output for all models
2023-01-11 01:10:11 -03:00
oobabooga
b2a2ddcb15
Remove T5 support (it sucks)
2023-01-10 23:39:50 -03:00
oobabooga
a236b24d24
Add --auto-devices and --load-in-8bit options for #4
2023-01-10 23:16:33 -03:00
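These two options plausibly correspond to the following from_pretrained arguments (device placement via accelerate, 8-bit weights via bitsandbytes); the mapping is an assumption for illustration.

    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "some-model",                        # placeholder model name
        device_map="auto",                   # --auto-devices
        load_in_8bit=True,                   # --load-in-8bit
    )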
oobabooga
3aefcfd963
Grammar
2023-01-09 19:07:47 -03:00
oobabooga
6c178b1c91
Add --listen parameter
2023-01-09 19:05:36 -03:00
oobabooga
13836a37c8
Remove unused parameter
2023-01-09 17:23:43 -03:00
oobabooga
f0013ac8e9
Don't need that
2023-01-09 16:30:14 -03:00
oobabooga
00a12889e9
Refactor model loading function
2023-01-09 16:28:04 -03:00
oobabooga
980f8112a7
Small bug fix
2023-01-09 12:56:54 -03:00
oobabooga
a751d7e693
Don't require GPT-J to be installed to load gpt4chan
2023-01-09 11:39:13 -03:00
oobabooga
6cbfe19c23
Submit with Shift+Enter
2023-01-09 11:22:12 -03:00
oobabooga
0e67ccf607
Implement CPU mode
2023-01-09 10:58:46 -03:00
oobabooga
f2a548c098
Stop generating at \n in chat mode
...
Makes it a lot more efficient.
2023-01-08 23:00:38 -03:00
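One way to implement this with transformers' stopping criteria (an illustrative sketch, not necessarily the project's code): stop as soon as the decoded continuation contains a newline.

    from transformers import StoppingCriteria, StoppingCriteriaList

    class StopOnNewline(StoppingCriteria):
        def __init__(self, tokenizer, prompt_length):
            self.tokenizer = tokenizer
            self.prompt_length = prompt_length

        def __call__(self, input_ids, scores, **kwargs):
            # Decode only the generated part and stop once it contains '\n'.
            new_text = self.tokenizer.decode(input_ids[0][self.prompt_length:])
            return "\n" in new_text

    # usage: criteria = StoppingCriteriaList([StopOnNewline(tokenizer, input_ids.shape[1])])
    #        model.generate(input_ids, max_new_tokens=200, stopping_criteria=criteria)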
oobabooga
a9280dde52
Increase chat height, reorganize things
2023-01-08 20:10:31 -03:00
oobabooga
b871f76aac
Better default for chat output length
...
Ideally, generation should stop at '\n', but this feature is brand new
in transformers (https://github.com/huggingface/transformers/pull/20727)
2023-01-08 15:00:02 -03:00
oobabooga
b801e0d50d
Minor changes
2023-01-08 14:37:43 -03:00
oobabooga
730c5562cc
Disable gradio analytics
2023-01-08 01:42:38 -03:00
oobabooga
493051d5d5
Chat improvements
2023-01-08 01:33:45 -03:00
oobabooga
4058b33fc9
Improve the chat experience
2023-01-08 01:10:02 -03:00
oobabooga
ef4e610d37
Re-enable the progress bar in notebook mode
2023-01-07 23:01:39 -03:00
oobabooga
c3a0d00715
Name the input box
2023-01-07 22:55:54 -03:00
oobabooga
f76bdadbed
Add chat mode
2023-01-07 22:52:46 -03:00
oobabooga
300a500c0b
Improve spacings
2023-01-07 19:11:21 -03:00
oobabooga
5345685ead
Make paths cross-platform (should work on Windows now)
2023-01-07 16:33:43 -03:00
oobabooga
342e756878
Better recognize the model sizes
2023-01-07 12:21:04 -03:00
oobabooga
62c4d9880b
Fix galactica equations (more)
2023-01-07 12:13:09 -03:00
oobabooga
eeb63b1b8a
Fix galactica equations
2023-01-07 01:56:21 -03:00
oobabooga
3aaf5fb4aa
Make NovelAI-Sphinx Moth the default preset
2023-01-07 00:49:47 -03:00
oobabooga
c7b29668a2
Add HTML support for gpt4chan
2023-01-06 23:14:08 -03:00
oobabooga
3d6a3aac73
Reorganize the layout
2023-01-06 22:05:37 -03:00
oobabooga
4c89d4ab29
Name the inputs
2023-01-06 20:26:47 -03:00
oobabooga
e5f547fc87
Implement notebook mode
2023-01-06 20:22:26 -03:00
oobabooga
f54a13929f
Load default model with --model flag
2023-01-06 19:56:44 -03:00
oobabooga
1da8d4a787
Remove a space
2023-01-06 02:58:09 -03:00
oobabooga
ee650343bc
Better defaults while loading models
2023-01-06 02:54:33 -03:00
oobabooga
9498dca748
Make the model autodetection cover all gpt-neo and opt models
2023-01-06 02:31:54 -03:00
oobabooga
deefa2e86a
Add comments
2023-01-06 02:26:33 -03:00
oobabooga
c06d7d28cb
Autodetect available models
2023-01-06 02:06:59 -03:00
oobabooga
285032da36
Make model loading more transparent
2023-01-06 01:41:52 -03:00
oobabooga
c65bad40dc
Add support for presets
2023-01-06 01:33:21 -03:00
oobabooga
838f768437
Add files
2022-12-21 13:27:31 -03:00
oobabooga
dde76a962f
Initial commit
2022-12-21 13:17:06 -03:00