oobabooga | df37ba5256 | Update impersonate_wrapper | 2023-05-12 12:59:48 -03:00
oobabooga | e283ddc559 | Change how spaces are handled in continue/generation attempts | 2023-05-12 12:50:29 -03:00
oobabooga | 2eeb27659d | Fix bug in --cpu-memory | 2023-05-12 06:17:07 -03:00
oobabooga | 5eaa914e1b | Fix settings.json being ignored because of config.yaml | 2023-05-12 06:09:45 -03:00
oobabooga | 71693161eb | Better handle spaces in LlamaTokenizer | 2023-05-11 17:55:50 -03:00
oobabooga | 7221d1389a | Fix a bug | 2023-05-11 17:11:10 -03:00
oobabooga | 0d36c18f5d | Always return only the new tokens in generation functions | 2023-05-11 17:07:20 -03:00
oobabooga | 394bb253db | Syntax improvement | 2023-05-11 16:27:50 -03:00
oobabooga | f7dbddfff5 | Add a variable for tts extensions to use | 2023-05-11 16:12:46 -03:00
oobabooga | 638c6a65a2 | Refactor chat functions (#2003) | 2023-05-11 15:37:04 -03:00
oobabooga | b7a589afc8 | Improve the Metharme prompt | 2023-05-10 16:09:32 -03:00
oobabooga | b01c4884cb | Better stopping strings for instruct mode | 2023-05-10 14:22:38 -03:00
oobabooga | 6a4783afc7 | Add markdown table rendering | 2023-05-10 13:41:23 -03:00
oobabooga | 3316e33d14 | Remove unused code | 2023-05-10 11:59:59 -03:00
Alexander Dibrov | ec14d9b725 | Fix custom_generate_chat_prompt (#1965) | 2023-05-10 11:29:59 -03:00
oobabooga | 32481ec4d6 | Fix prompt order in the dropdown | 2023-05-10 02:24:09 -03:00
oobabooga | dfd9ba3e90 | Remove duplicate code | 2023-05-10 02:07:22 -03:00
oobabooga | bdf1274b5d | Remove duplicate code | 2023-05-10 01:34:04 -03:00
oobabooga | 3913155c1f | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00
minipasila | 334486f527 | Added instruct-following template for Metharme (#1679) | 2023-05-09 22:29:22 -03:00
Carl Kenner | 814f754451 | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) | 2023-05-09 20:37:31 -03:00
Wojtab | e9e75a9ec7 | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) | 2023-05-09 20:18:02 -03:00
Wesley Pyburn | a2b25322f0 | Fix trust_remote_code in wrong location (#1953) | 2023-05-09 19:22:10 -03:00
LaaZa | 218bd64bd1 | Add the option to not automatically load the selected model (#1762) | 2023-05-09 15:52:35 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Maks | cf6caf1830 | Make the RWKV model cache the RNN state between messages (#1354) | 2023-05-09 11:12:53 -03:00
Kamil Szurant | 641500dcb9 | Use current input for Impersonate (continue impersonate feature) (#1147) | 2023-05-09 02:37:42 -03:00
IJumpAround | 020fe7b50b | Remove mutable defaults from function signature. (#1663) | 2023-05-08 22:55:41 -03:00
Matthew McAllister | d78b04f0b4 | Add error message when GPTQ-for-LLaMa import fails (#1871) | 2023-05-08 22:29:09 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 68dcbc7ebd | Fix chat history handling in instruct mode | 2023-05-08 16:41:21 -03:00
Clay Shoaf | 79ac94cc2f | fixed LoRA loading issue (#1865) | 2023-05-08 16:21:55 -03:00
oobabooga | b5260b24f1 | Add support for custom chat styles (#1917) | 2023-05-08 12:35:03 -03:00
EgrorBs | d3ea70f453 | More trust_remote_code=trust_remote_code (#1899) | 2023-05-07 23:48:20 -03:00
oobabooga | 56a5969658 | Improve the separation between instruct/chat modes (#1896) | 2023-05-07 23:47:02 -03:00
oobabooga | 9754d6a811 | Fix an error message | 2023-05-07 17:44:05 -03:00
camenduru | ba65a48ec8 | trust_remote_code=shared.args.trust_remote_code (#1891) | 2023-05-07 17:42:44 -03:00
oobabooga | 6b67cb6611 | Generalize superbooga to chat mode | 2023-05-07 15:05:26 -03:00
oobabooga | 56f6b7052a | Sort dropdowns numerically | 2023-05-05 23:14:56 -03:00
oobabooga | 8aafb1f796 | Refactor text_generation.py, add support for custom generation functions (#1817) | 2023-05-05 18:53:03 -03:00
oobabooga | c728f2b5f0 | Better handle new line characters in code blocks | 2023-05-05 11:22:36 -03:00
oobabooga | 00e333d790 | Add MOSS support | 2023-05-04 23:20:34 -03:00
oobabooga | f673f4a4ca | Change --verbose behavior | 2023-05-04 15:56:06 -03:00
oobabooga | 97a6a50d98 | Use oasst tokenizer instead of universal tokenizer | 2023-05-04 15:55:39 -03:00
oobabooga | b6ff138084 | Add --checkpoint argument for GPTQ | 2023-05-04 15:17:20 -03:00
Mylo | bd531c2dc2 | Make --trust-remote-code work for all models (#1772) | 2023-05-04 02:01:28 -03:00
oobabooga | 0e6d17304a | Clearer syntax for instruction-following characters | 2023-05-03 22:50:39 -03:00
oobabooga | 9c77ab4fc2 | Improve some warnings | 2023-05-03 22:06:46 -03:00
oobabooga | 057b1b2978 | Add credits | 2023-05-03 21:49:55 -03:00
oobabooga | 95d04d6a8d | Better warning messages | 2023-05-03 21:43:17 -03:00
oobabooga | f54256e348 | Rename no_mmap to no-mmap | 2023-05-03 09:50:31 -03:00
practicaldreamer | e3968f7dd0 | Fix Training Pad Token (#1678) | 2023-05-02 23:16:08 -03:00
    Currently padding with 0 the character vs 0 the token id (<unk> in the case of llama)