Commit Graph

3754 Commits

Author SHA1 Message Date
oobabooga
2930e5a895 Update README.md 2023-05-11 10:04:38 -03:00
oobabooga
0ff38c994e Update README.md 2023-05-11 09:58:58 -03:00
oobabooga
e6959a5d9a Update README.md 2023-05-11 09:54:22 -03:00
oobabooga
dcfd09b61e Update README.md 2023-05-11 09:49:57 -03:00
oobabooga
ac9a86a16c Update llama.cpp-models.md 2023-05-11 09:47:36 -03:00
oobabooga
7a49ceab29 Update README.md 2023-05-11 09:42:39 -03:00
oobabooga
943b5e5f80 Minor bug fix 2023-05-10 23:54:25 -03:00
oobabooga
9695bfe117 Add an option for cleaning up html in superbooga 2023-05-10 23:51:52 -03:00
oobabooga
c7ba2d4f3f Change a message in download-model.py 2023-05-10 19:00:14 -03:00
oobabooga
1309cdd257 Add a space 2023-05-10 18:03:12 -03:00
oobabooga
3e19733d35 Remove obsolete comment 2023-05-10 18:01:04 -03:00
oobabooga
4ab5deeea0 Update INSTRUCTIONS.TXT 2023-05-10 18:00:37 -03:00
oobabooga
d7d3f7f31c Add a "CMD_FLAGS" variable 2023-05-10 17:54:12 -03:00
oobabooga
b7a589afc8 Improve the Metharme prompt 2023-05-10 16:09:32 -03:00
oobabooga
e5b1547849 Fix reload model button 2023-05-10 14:44:25 -03:00
oobabooga
b01c4884cb Better stopping strings for instruct mode 2023-05-10 14:22:38 -03:00
oobabooga
6a4783afc7 Add markdown table rendering 2023-05-10 13:41:23 -03:00
oobabooga
57dc44a995 Update README.md 2023-05-10 12:48:25 -03:00
oobabooga
f5592781e5 Update README.md 2023-05-10 12:19:56 -03:00
oobabooga
f1d10edcb7 Update README.md 2023-05-10 12:13:14 -03:00
oobabooga
181b102521 Update README.md 2023-05-10 12:09:47 -03:00
oobabooga
3316e33d14 Remove unused code 2023-05-10 11:59:59 -03:00
Alexander Dibrov
ec14d9b725 Fix custom_generate_chat_prompt (#1965) 2023-05-10 11:29:59 -03:00
oobabooga
32481ec4d6 Fix prompt order in the dropdown 2023-05-10 02:24:09 -03:00
oobabooga
dfd9ba3e90 Remove duplicate code 2023-05-10 02:07:22 -03:00
oobabooga
cd36b8f739 Remove space 2023-05-10 01:41:33 -03:00
oobabooga
c35860ff2f Add a link to silero samples 2023-05-10 01:39:35 -03:00
oobabooga
bdf1274b5d Remove duplicate code 2023-05-10 01:34:04 -03:00
oobabooga
ba445cf59f Fix some galactica templates 2023-05-09 22:58:59 -03:00
oobabooga
3b1de7e8bc Remove redundant presets 2023-05-09 22:56:19 -03:00
oobabooga
3913155c1f Style improvements (#1957) 2023-05-09 22:49:39 -03:00
minipasila
334486f527 Added instruct-following template for Metharme (#1679) 2023-05-09 22:29:22 -03:00
Carl Kenner
1aaa47070a Expand Open Assistant support (#1735) 2023-05-09 20:40:29 -03:00
Carl Kenner
814f754451 Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) 2023-05-09 20:37:31 -03:00
Matthew McAllister
06c7db017d Add config for pygmalion-7b and metharme-7b (#1887) 2023-05-09 20:31:27 -03:00
missionfloyd
fe4dfc647d SileroTTS preview (#1934) 2023-05-09 20:28:59 -03:00
oobabooga
8fa5f651d6 Style changes 2023-05-09 20:20:35 -03:00
Wojtab
e9e75a9ec7 Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) 2023-05-09 20:18:02 -03:00
Wesley Pyburn
a2b25322f0 Fix trust_remote_code in wrong location (#1953) 2023-05-09 19:22:10 -03:00
oobabooga
13e7ebfc77 Change a comment 2023-05-09 15:56:32 -03:00
LaaZa
218bd64bd1 Add the option to not automatically load the selected model (#1762) 2023-05-09 15:52:35 -03:00
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga
b8cfc20e58 Don't install superbooga by default 2023-05-09 14:17:08 -03:00
Maks
cf6caf1830 Make the RWKV model cache the RNN state between messages (#1354) 2023-05-09 11:12:53 -03:00
Kamil Szurant
641500dcb9 Use current input for Impersonate (continue impersonate feature) (#1147) 2023-05-09 02:37:42 -03:00
dependabot[bot]
a5bb278631 Bump accelerate from 0.18.0 to 0.19.0 (#1925) 2023-05-09 02:17:27 -03:00
jllllll
29727c6502 Fix Windows PATH fix (#57) 2023-05-09 01:49:27 -03:00
IJumpAround
020fe7b50b Remove mutable defaults from function signature. (#1663) 2023-05-08 22:55:41 -03:00
shadownetdev1
32ad47c898 added note about build essentials to WSL docs (#1859) 2023-05-08 22:32:41 -03:00
Jeffrey Lin
791a38bad1 [extensions/openai] Support undocumented base64 'encoding_format' param for compatibility with official OpenAI client (#1876) 2023-05-08 22:31:34 -03:00
Matthew McAllister
d78b04f0b4 Add error message when GPTQ-for-LLaMa import fails (#1871) 2023-05-08 22:29:09 -03:00
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>