Author | Commit | Message | Date
oobabooga | a236b24d24 | Add --auto-devices and --load-in-8bit options for #4 | 2023-01-10 23:16:33 -03:00
oobabooga | 3aefcfd963 | Grammar | 2023-01-09 19:07:47 -03:00
oobabooga | 6c178b1c91 | Add --listen parameter | 2023-01-09 19:05:36 -03:00
oobabooga | 13836a37c8 | Remove unused parameter | 2023-01-09 17:23:43 -03:00
oobabooga | f0013ac8e9 | Don't need that | 2023-01-09 16:30:14 -03:00
oobabooga | 00a12889e9 | Refactor model loading function | 2023-01-09 16:28:04 -03:00
oobabooga | 980f8112a7 | Small bug fix | 2023-01-09 12:56:54 -03:00
oobabooga | a751d7e693 | Don't require GPT-J to be installed to load gpt4chan | 2023-01-09 11:39:13 -03:00
oobabooga | 6cbfe19c23 | Submit with Shift+Enter | 2023-01-09 11:22:12 -03:00
oobabooga | 0e67ccf607 | Implement CPU mode | 2023-01-09 10:58:46 -03:00
oobabooga | f2a548c098 | Stop generating at \n in chat mode. Makes it a lot more efficient. | 2023-01-08 23:00:38 -03:00
oobabooga | a9280dde52 | Increase chat height, reorganize things | 2023-01-08 20:10:31 -03:00
oobabooga | b871f76aac | Better default for chat output length. Ideally, generation should stop at '\n', but this feature is brand new on transformers (https://github.com/huggingface/transformers/pull/20727) | 2023-01-08 15:00:02 -03:00
oobabooga | b801e0d50d | Minor changes | 2023-01-08 14:37:43 -03:00
oobabooga | 730c5562cc | Disable gradio analytics | 2023-01-08 01:42:38 -03:00
oobabooga | 493051d5d5 | Chat improvements | 2023-01-08 01:33:45 -03:00
oobabooga | 4058b33fc9 | Improve the chat experience | 2023-01-08 01:10:02 -03:00
oobabooga | ef4e610d37 | Re-enable the progress bar in notebook mode | 2023-01-07 23:01:39 -03:00
oobabooga | c3a0d00715 | Name the input box | 2023-01-07 22:55:54 -03:00
oobabooga | f76bdadbed | Add chat mode | 2023-01-07 22:52:46 -03:00
oobabooga | 300a500c0b | Improve spacings | 2023-01-07 19:11:21 -03:00
oobabooga | 5345685ead | Make paths cross-platform (should work on Windows now) | 2023-01-07 16:33:43 -03:00
oobabooga | 342e756878 | Better recognize the model sizes | 2023-01-07 12:21:04 -03:00
oobabooga | 62c4d9880b | Fix galactica equations (more) | 2023-01-07 12:13:09 -03:00
oobabooga | eeb63b1b8a | Fix galactica equations | 2023-01-07 01:56:21 -03:00
oobabooga | 3aaf5fb4aa | Make NovelAI-Sphinx Moth the default preset | 2023-01-07 00:49:47 -03:00
oobabooga | c7b29668a2 | Add HTML support for gpt4chan | 2023-01-06 23:14:08 -03:00
oobabooga | 3d6a3aac73 | Reorganize the layout | 2023-01-06 22:05:37 -03:00
oobabooga | 4c89d4ab29 | Name the inputs | 2023-01-06 20:26:47 -03:00
oobabooga | e5f547fc87 | Implement notebook mode | 2023-01-06 20:22:26 -03:00
oobabooga | f54a13929f | Load default model with --model flag | 2023-01-06 19:56:44 -03:00
oobabooga | 1da8d4a787 | Remove a space | 2023-01-06 02:58:09 -03:00
oobabooga | ee650343bc | Better defaults while loading models | 2023-01-06 02:54:33 -03:00
oobabooga | 9498dca748 | Make model autodetect all gpt-neo and opt models | 2023-01-06 02:31:54 -03:00
oobabooga | deefa2e86a | Add comments | 2023-01-06 02:26:33 -03:00
oobabooga | c06d7d28cb | Autodetect available models | 2023-01-06 02:06:59 -03:00
oobabooga | 285032da36 | Make model loading more transparent | 2023-01-06 01:41:52 -03:00
oobabooga | c65bad40dc | Add support for presets | 2023-01-06 01:33:21 -03:00
oobabooga | 838f768437 | Add files | 2022-12-21 13:27:31 -03:00
oobabooga | dde76a962f | Initial commit | 2022-12-21 13:17:06 -03:00
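
The b871f76aac entry above notes that chat generation should ideally stop at '\n', pointing to the then-new stopping support in transformers (PR 20727). As a rough illustration only, not the code from this repository, the sketch below shows one common way to cut generation off at a newline with transformers' StoppingCriteria API; the class name StopOnNewline and the use of gpt2 as a placeholder model are assumptions for the example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

class StopOnNewline(StoppingCriteria):
    """Stop generation once the most recently generated token decodes to text containing '\\n'.

    Hypothetical helper for illustration; decoding only the last token is an
    approximation, since some tokenizers merge the newline with other characters.
    """

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Decode just the latest token and check whether it introduced a newline.
        last_token_text = self.tokenizer.decode(input_ids[0, -1:])
        return "\n" in last_token_text

# Placeholder model/tokenizer for the sketch (the actual app loads user-selected models).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Bot reply:", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    stopping_criteria=StoppingCriteriaList([StopOnNewline(tokenizer)]),
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Stopping at the first newline keeps chat replies to a single turn and avoids generating tokens that would be discarded anyway, which is the efficiency gain the f2a548c098 entry refers to.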