Commit Graph

3013 Commits

Author     SHA1        Message  Date
oobabooga  3750e301a1  Update README  2023-01-13 11:33:38 -03:00
oobabooga  996ceff082  Add PygmalionAI preset  2023-01-13 11:32:12 -03:00
                       Borrowed from https://github.com/PygmalionAI/gradio-ui/blob/master/src/gradio_ui.py
oobabooga  7f93012a89  Add default names/context for pygmalion  2023-01-13 10:12:47 -03:00
oobabooga  7299159360  Mention pygmalion  2023-01-13 09:33:38 -03:00
oobabooga  acf3d6d27e  I have tested the CPU conda requirements and this works  2023-01-13 09:19:39 -03:00
oobabooga  fcda5d7107  Fix the download script on windows (#6)  2023-01-13 09:05:21 -03:00
oobabooga  d3bd6a3093  Installation instructions  2023-01-13 01:39:09 -03:00
oobabooga  44b4274ec2  Installation instructions  2023-01-13 01:37:12 -03:00
oobabooga  323aaa074f  Installation instructions  2023-01-13 01:29:36 -03:00
oobabooga  886c12dd77  Add more detailed installation instructions  2023-01-13 01:27:29 -03:00
oobabooga  aeff0d4cc1  Update README  2023-01-11 23:04:06 -03:00
oobabooga  e0ec20c1b7  Update readme  2023-01-11 23:01:50 -03:00
oobabooga  5ba60f83a4  Add detailed gpt4chan installation instructions  2023-01-11 23:00:31 -03:00
oobabooga  21415f6988  Mention API  2023-01-11 16:51:32 -03:00
oobabooga  9410486bd8  Enable the API  2023-01-11 16:43:13 -03:00
                       Let's goooooooooooooo
oobabooga  66f73c1b32  Remove default text from output box  2023-01-11 01:36:11 -03:00
oobabooga  01ac065d7e  Implement Continue button  2023-01-11 01:33:57 -03:00
oobabooga  4b09e7e355  Sort models alphabetically  2023-01-11 01:17:20 -03:00
oobabooga  d5e01c80e3  Add nice HTML output for all models  2023-01-11 01:10:11 -03:00
oobabooga  18ae08ef91  Remove T5 support  2023-01-10 23:41:35 -03:00
oobabooga  b2a2ddcb15  Remove T5 support (it sucks)  2023-01-10 23:39:50 -03:00
oobabooga  89fd0180b7  Better description of features  2023-01-10 23:30:41 -03:00
oobabooga  a236b24d24  Add --auto-devices and --load-in-8bit options for #4  2023-01-10 23:16:33 -03:00
oobabooga  3aefcfd963  Grammar  2023-01-09 19:07:47 -03:00
oobabooga  7028116bf2  Fix  2023-01-09 19:07:10 -03:00
oobabooga  222ae23fe0  Fix  2023-01-09 19:06:29 -03:00
oobabooga  6c178b1c91  Add --listen parameter  2023-01-09 19:05:36 -03:00
oobabooga  86092d1879  Update README  2023-01-09 18:58:24 -03:00
oobabooga  9aa51ca227  Update README  2023-01-09 18:18:35 -03:00
oobabooga  fbbad7cf8a  Mention units  2023-01-09 18:14:09 -03:00
oobabooga  306be22b8e  Add system requirements  2023-01-09 18:12:41 -03:00
oobabooga  13836a37c8  Remove unused parameter  2023-01-09 17:23:43 -03:00
oobabooga  f0013ac8e9  Don't need that  2023-01-09 16:30:14 -03:00
oobabooga  00a12889e9  Refactor model loading function  2023-01-09 16:28:04 -03:00
oobabooga  96a75b616b  Update README  2023-01-09 16:19:57 -03:00
oobabooga  f737029ee5  Update README  2023-01-09 16:15:54 -03:00
oobabooga  9704cfaa45  Update README  2023-01-09 16:09:27 -03:00
oobabooga  980f8112a7  Small bug fix  2023-01-09 12:56:54 -03:00
oobabooga  a751d7e693  Don't require GPT-J to be installed to load gpt4chan  2023-01-09 11:39:13 -03:00
oobabooga  6cbfe19c23  Submit with Shift+Enter  2023-01-09 11:22:12 -03:00
oobabooga  e2a917b3bf  Update README  2023-01-09 11:11:05 -03:00
oobabooga  0e67ccf607  Implement CPU mode  2023-01-09 10:58:46 -03:00
oobabooga  f2a548c098  Stop generating at \n in chat mode  2023-01-08 23:00:38 -03:00
                       Makes it a lot more efficient.
oobabooga  a9280dde52  Increase chat height, reorganize things  2023-01-08 20:10:31 -03:00
oobabooga  b871f76aac  Better default for chat output length  2023-01-08 15:00:02 -03:00
                       Ideally, generation should stop at '\n', but this feature is brand new on transformers (https://github.com/huggingface/transformers/pull/20727)
oobabooga  b801e0d50d  Minor changes  2023-01-08 14:37:43 -03:00
oobabooga  730c5562cc  Disable gradio analytics  2023-01-08 01:42:38 -03:00
oobabooga  493051d5d5  Chat improvements  2023-01-08 01:33:45 -03:00
oobabooga  4058b33fc9  Improve the chat experience  2023-01-08 01:10:02 -03:00
oobabooga  a0b1b1beb2  Mention gpt4chan's config.json  2023-01-07 23:13:43 -03:00