oobabooga | bb1a172da0 | Fix a bug in cai mode chat | 2023-01-15 19:41:25 -03:00
oobabooga | e6691bd920 | Make chat mode more like cai | 2023-01-15 18:16:46 -03:00
oobabooga | e04ecd4bce | Minor improvements | 2023-01-15 16:43:31 -03:00
oobabooga | 027c3dd27d | Allow jpg profile images | 2023-01-15 15:45:25 -03:00
oobabooga | afe9f77f96 | Reorder parameters | 2023-01-15 15:30:39 -03:00
oobabooga | 88d67427e1 | Implement default settings customization using a json file | 2023-01-15 15:23:41 -03:00
oobabooga | 6136da419c | Add --cai-chat option that mimics Character.AI's interface | 2023-01-15 12:20:04 -03:00
oobabooga | 13b04c1b94 | Add "remove last message" button to chat | 2023-01-15 03:19:09 -03:00
oobabooga | fd220f827f | Remove annoying warnings | 2023-01-15 00:39:51 -03:00
oobabooga | d962e69496 | Improve chat preprocessing | 2023-01-14 23:50:34 -03:00
oobabooga | 9a7f187b5a | Improve pygmalion line breaks | 2023-01-14 23:26:14 -03:00
oobabooga | ecb2cc2194 | Pygmalion: add checkbox for choosing whether to stop at newline or not | 2023-01-13 15:02:17 -03:00
oobabooga | 3a00cb1bbd | Reorganize GUI elements | 2023-01-13 14:28:53 -03:00
oobabooga | 3f1e70d2c8 | Remove the temperature slider | 2023-01-13 14:00:43 -03:00
    It was not being used by most presets.
oobabooga | 7f93012a89 | Add default names/context for pygmalion | 2023-01-13 10:12:47 -03:00
oobabooga | 9410486bd8 | Enable the API | 2023-01-11 16:43:13 -03:00
    Let's goooooooooooooo
oobabooga | 66f73c1b32 | Remove default text from output box | 2023-01-11 01:36:11 -03:00
oobabooga | 01ac065d7e | Implement Continue button | 2023-01-11 01:33:57 -03:00
oobabooga | 4b09e7e355 | Sort models alphabetically | 2023-01-11 01:17:20 -03:00
oobabooga | d5e01c80e3 | Add nice HTML output for all models | 2023-01-11 01:10:11 -03:00
oobabooga | b2a2ddcb15 | Remove T5 support (it sucks) | 2023-01-10 23:39:50 -03:00
oobabooga | a236b24d24 | Add --auto-devices and --load-in-8bit options for #4 | 2023-01-10 23:16:33 -03:00
oobabooga | 3aefcfd963 | Grammar | 2023-01-09 19:07:47 -03:00
oobabooga | 6c178b1c91 | Add --listen parameter | 2023-01-09 19:05:36 -03:00
oobabooga | 13836a37c8 | Remove unused parameter | 2023-01-09 17:23:43 -03:00
oobabooga | f0013ac8e9 | Don't need that | 2023-01-09 16:30:14 -03:00
oobabooga | 00a12889e9 | Refactor model loading function | 2023-01-09 16:28:04 -03:00
oobabooga | 980f8112a7 | Small bug fix | 2023-01-09 12:56:54 -03:00
oobabooga | a751d7e693 | Don't require GPT-J to be installed to load gpt4chan | 2023-01-09 11:39:13 -03:00
oobabooga | 6cbfe19c23 | Submit with Shift+Enter | 2023-01-09 11:22:12 -03:00
oobabooga | 0e67ccf607 | Implement CPU mode | 2023-01-09 10:58:46 -03:00
oobabooga | f2a548c098 | Stop generating at \n in chat mode | 2023-01-08 23:00:38 -03:00
    Makes it a lot more efficient.
oobabooga | a9280dde52 | Increase chat height, reorganize things | 2023-01-08 20:10:31 -03:00
oobabooga | b871f76aac | Better default for chat output length | 2023-01-08 15:00:02 -03:00
    Ideally, generation should stop at '\n', but this feature is brand new on transformers (https://github.com/huggingface/transformers/pull/20727)
oobabooga | b801e0d50d | Minor changes | 2023-01-08 14:37:43 -03:00
oobabooga | 730c5562cc | Disable gradio analytics | 2023-01-08 01:42:38 -03:00
oobabooga | 493051d5d5 | Chat improvements | 2023-01-08 01:33:45 -03:00
oobabooga | 4058b33fc9 | Improve the chat experience | 2023-01-08 01:10:02 -03:00
oobabooga | ef4e610d37 | Re-enable the progress bar in notebook mode | 2023-01-07 23:01:39 -03:00
oobabooga | c3a0d00715 | Name the input box | 2023-01-07 22:55:54 -03:00
oobabooga | f76bdadbed | Add chat mode | 2023-01-07 22:52:46 -03:00
oobabooga | 300a500c0b | Improve spacings | 2023-01-07 19:11:21 -03:00
oobabooga | 5345685ead | Make paths cross-platform (should work on Windows now) | 2023-01-07 16:33:43 -03:00
oobabooga | 342e756878 | Better recognize the model sizes | 2023-01-07 12:21:04 -03:00
oobabooga | 62c4d9880b | Fix galactica equations (more) | 2023-01-07 12:13:09 -03:00
oobabooga | eeb63b1b8a | Fix galactica equations | 2023-01-07 01:56:21 -03:00
oobabooga | 3aaf5fb4aa | Make NovelAI-Sphinx Moth the default preset | 2023-01-07 00:49:47 -03:00
oobabooga | c7b29668a2 | Add HTML support for gpt4chan | 2023-01-06 23:14:08 -03:00
oobabooga | 3d6a3aac73 | Reorganize the layout | 2023-01-06 22:05:37 -03:00
oobabooga | 4c89d4ab29 | Name the inputs | 2023-01-06 20:26:47 -03:00
oobabooga | e5f547fc87 | Implement notebook mode | 2023-01-06 20:22:26 -03:00
oobabooga | f54a13929f | Load default model with --model flag | 2023-01-06 19:56:44 -03:00
oobabooga | 1da8d4a787 | Remove a space | 2023-01-06 02:58:09 -03:00
oobabooga | ee650343bc | Better defaults while loading models | 2023-01-06 02:54:33 -03:00
oobabooga | 9498dca748 | Make model autodetect all gpt-neo and opt models | 2023-01-06 02:31:54 -03:00
oobabooga | deefa2e86a | Add comments | 2023-01-06 02:26:33 -03:00
oobabooga | c06d7d28cb | Autodetect available models | 2023-01-06 02:06:59 -03:00
oobabooga | 285032da36 | Make model loading more transparent | 2023-01-06 01:41:52 -03:00
oobabooga | c65bad40dc | Add support for presets | 2023-01-06 01:33:21 -03:00
oobabooga | 838f768437 | Add files | 2022-12-21 13:27:31 -03:00
oobabooga | dde76a962f | Initial commit | 2022-12-21 13:17:06 -03:00