oobabooga
5f3f3faa96
Better handle CUDA out of memory errors in chat mode
2023-04-02 17:48:00 -03:00
oobabooga
b0890a7925
Add shared.is_chat() function
2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py
2023-04-01 13:56:47 -03:00
oobabooga
fcda3f8776
Add also_return_rows to generate_chat_prompt
2023-04-01 01:12:13 -03:00
oobabooga
2c52310642
Add --threads flag for llama.cpp
2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713
Fix streaming
2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd
Add repetition_penalty
2023-03-31 19:01:34 -03:00
oobabooga
2259143fec
Fix llama.cpp with --no-stream
2023-03-31 18:43:45 -03:00
oobabooga
3a47a602a3
Detect ggml*.bin files automatically
2023-03-31 17:18:21 -03:00
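A minimal sketch of what detecting `ggml*.bin` files automatically might look like; the helper name `find_ggml_file` and its behavior are illustrative assumptions, not the repository's actual code:

```python
import tempfile
from pathlib import Path
from typing import Optional

def find_ggml_file(model_dir: str) -> Optional[Path]:
    """Return the first ggml*.bin file found in model_dir, if any (hypothetical helper)."""
    matches = sorted(Path(model_dir).glob('ggml*.bin'))
    return matches[0] if matches else None

# Demo with a throwaway directory:
demo_dir = tempfile.mkdtemp()
(Path(demo_dir) / 'ggml-model-q4_0.bin').touch()
print(find_ggml_file(demo_dir))
```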
oobabooga
0aee7341d8
Properly count tokens/s for llama.cpp in chat mode
2023-03-31 17:04:32 -03:00
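Tokens/s is just generated tokens divided by wall-clock time over the generation window; a hedged sketch of the arithmetic (the function name and zero-division guard are assumptions, not taken from the repo):

```python
def tokens_per_second(num_tokens: int, start: float, end: float) -> float:
    """Throughput over a generation window (illustrative, not the repo's code)."""
    elapsed = max(end - start, 1e-9)  # guard against a zero-length window
    return num_tokens / elapsed

print(tokens_per_second(50, 0.0, 2.0))  # 25.0
```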
oobabooga
ea3ba6fc73
Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp
2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb
Add repetition_penalty
2023-03-31 14:45:17 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp
2023-03-31 14:37:04 -03:00
oobabooga
4c27562157
Minor changes
2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a
General improvements
2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd
Add seed to settings
2023-03-31 12:22:07 -03:00
Maya
b246d17513
Fix `type object is not subscriptable` on python 3.8
2023-03-31 14:20:31 +03:00
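The `type object is not subscriptable` error above comes from subscripting built-in types in annotations (e.g. `list[int]`), which Python only supports from 3.9 onward (PEP 585). A minimal sketch of the 3.8-compatible pattern, assuming the `typing` aliases are an acceptable substitute:

```python
from typing import List

# On Python 3.8, `def total(xs: list[int])` raises
# "TypeError: 'type' object is not subscriptable" when the def is evaluated.
# The typing module's generic aliases work on 3.8 and later:
def total(xs: List[int]) -> int:
    return sum(xs)

print(total([1, 2, 3]))  # 6
```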
oobabooga
d4a9b5ea97
Remove redundant preset (see the plot in #587)
2023-03-30 17:34:44 -03:00
Thomas Antony
7fa5d96c22
Update to use new llamacpp API
2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e
Add support for alpaca
2023-03-30 11:23:04 +01:00
Thomas Antony
a5f5736e74
Add to text_generation.py
2023-03-30 11:22:38 +01:00
Thomas Antony
7745faa7bb
Add llamacpp to models.py
2023-03-30 11:22:37 +01:00
Thomas Antony
7a562481fa
Initial version of llamacpp_model.py
2023-03-30 11:22:07 +01:00
oobabooga
a21e580782
Move an import
2023-03-29 22:50:58 -03:00
oobabooga
55755e27b9
Don't hardcode prompts in the settings dict/json
2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160
Adapt to the new model names
2023-03-29 21:47:36 -03:00
oobabooga
58349f44a0
Handle training exception for unsupported models
2023-03-29 11:55:34 -03:00
oobabooga
a6d0373063
Fix training dataset loading #636
2023-03-29 11:48:17 -03:00
oobabooga
1edfb96778
Fix loading extensions from within the interface
2023-03-28 23:27:02 -03:00
oobabooga
304f812c63
Gracefully handle CUDA out of memory errors with streaming
2023-03-28 19:20:50 -03:00
oobabooga
010b259dde
Update documentation
2023-03-28 17:46:00 -03:00
oobabooga
0bec15ebcd
Reorder imports
2023-03-28 17:34:15 -03:00
Maya Eary
41ec682834
Disable kernel threshold for gpt-j
2023-03-28 22:45:38 +03:00
Maya
1ac003d41c
Merge branch 'oobabooga:main' into feature/gpt-j-4bit-v2
2023-03-28 22:30:39 +03:00
Maya Eary
1c075d8d21
Fix typo
2023-03-28 20:43:50 +03:00
Maya Eary
c8207d474f
Generalized load_quantized
2023-03-28 20:38:55 +03:00
oobabooga
8579fe51dd
Fix new lines in the HTML tab
2023-03-28 12:59:34 -03:00
Alex "mcmonkey" Goodwin
e817fac542
better defaults
2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin
2e08af4edf
implement initial Raw Text File Input
Also bump the default Rank & Alpha to values that will make sense in testing if you don't know what you're doing and leave the defaults.
2023-03-27 22:15:32 -07:00
Alex "mcmonkey" Goodwin
b749952fe3
change number minimums to 0
Gradio calculates 'step' relative to the minimum, so with a minimum of 1 the step values were all offset awkwardly. 0 isn't a valid value, but just don't slam the slider all the way to the left.
2023-03-27 21:22:43 -07:00
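The commit body above describes Gradio placing slider stops at minimum + k*step. This standalone sketch (not Gradio code; the function is a hypothetical reproduction of that arithmetic) shows why a minimum of 1 offsets every stop:

```python
def slider_stops(minimum, maximum, step):
    """Enumerate slider stops placed at fixed offsets from the minimum."""
    stops = []
    v = minimum
    while v <= maximum:
        stops.append(v)
        v += step
    return stops

# With minimum=1 and step=256 the stops land at 1, 257, 513, ...
# (awkwardly offset); with minimum=0 they land on round numbers.
print(slider_stops(1, 1025, 256))  # [1, 257, 513, 769, 1025]
print(slider_stops(0, 1024, 256))  # [0, 256, 512, 768, 1024]
```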
Alex "mcmonkey" Goodwin
ec6224f556
use new shared.args.lora_dir
2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin
31f04dc615
Merge branch 'main' into add-train-lora-tab
2023-03-27 20:03:30 -07:00
oobabooga
53da672315
Fix FlexGen
2023-03-27 23:44:21 -03:00
oobabooga
ee95e55df6
Fix RWKV tokenizer
2023-03-27 23:42:29 -03:00
oobabooga
036163a751
Change description
2023-03-27 23:39:26 -03:00
oobabooga
005f552ea3
Some simplifications
2023-03-27 23:29:52 -03:00
oobabooga
fde92048af
Merge branch 'main' into catalpaaa-lora-and-model-dir
2023-03-27 23:16:44 -03:00
Alex "mcmonkey" Goodwin
8a97f6ba29
corrections per the PR comments
2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin
7fab7ea1b6
couple missed camelCases
2023-03-27 18:19:06 -07:00