oobabooga | 2c52310642 | Add --threads flag for llama.cpp | 2023-03-31 21:18:05 -03:00
oobabooga | eeafd60713 | Fix streaming | 2023-03-31 19:05:38 -03:00
oobabooga | 52065ae4cd | Add repetition_penalty | 2023-03-31 19:01:34 -03:00
oobabooga | 2259143fec | Fix llama.cpp with --no-stream | 2023-03-31 18:43:45 -03:00
oobabooga | 875de5d983 | Update ggml template | 2023-03-31 17:57:31 -03:00
oobabooga | cbfe0b944a | Update README.md | 2023-03-31 17:49:11 -03:00
oobabooga | 6a44f4aec6 | Add support for downloading ggml files | 2023-03-31 17:33:42 -03:00
oobabooga | 3a47a602a3 | Detect ggml*.bin files automatically | 2023-03-31 17:18:21 -03:00
oobabooga | 0aee7341d8 | Properly count tokens/s for llama.cpp in chat mode | 2023-03-31 17:04:32 -03:00
oobabooga | 5c4e44b452 | llama.cpp documentation | 2023-03-31 15:20:39 -03:00
oobabooga | 6fd70d0032 | Add llama.cpp support (#447 from thomasantony/feature/llamacpp). Documentation: https://github.com/oobabooga/text-generation-webui/wiki/llama.cpp-models | 2023-03-31 15:17:32 -03:00
oobabooga | a5c9b7d977 | Bump llamacpp version | 2023-03-31 15:08:01 -03:00
oobabooga | ea3ba6fc73 | Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp | 2023-03-31 14:45:53 -03:00
oobabooga | 09b0a3aafb | Add repetition_penalty | 2023-03-31 14:45:17 -03:00
oobabooga | 4d98623041 | Merge branch 'main' into feature/llamacpp | 2023-03-31 14:37:04 -03:00
oobabooga | 4c27562157 | Minor changes | 2023-03-31 14:33:46 -03:00
oobabooga | 9d1dcf880a | General improvements | 2023-03-31 14:27:01 -03:00
oobabooga | 770ff0efa9 | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2023-03-31 12:22:22 -03:00
oobabooga | 1d1d9e40cd | Add seed to settings | 2023-03-31 12:22:07 -03:00
oobabooga | daeab6bac7 | Merge pull request #678 from mayaeary/fix/python3.8 (Fix `type object is not subscriptable`) | 2023-03-31 12:19:06 -03:00
oobabooga | 5a6f939f05 | Change the preset here too | 2023-03-31 10:43:05 -03:00
Maya | b246d17513 | Fix `type object is not subscriptable` on python 3.8 | 2023-03-31 14:20:31 +03:00
oobabooga | 3e1267af79 | Merge pull request #673 from ye7iaserag/patch-1 (Implement character gallery using Dataset) | 2023-03-31 02:04:52 -03:00
oobabooga | 3b90d604d7 | Sort the imports | 2023-03-31 02:01:48 -03:00
oobabooga | d28a5c9569 | Remove unnecessary css | 2023-03-31 02:01:13 -03:00
ye7iaserag | ec093a5af7 | Fix div alignment for long strings | 2023-03-31 06:54:24 +02:00
oobabooga | 3737eafeaa | Remove a border and allow more characters per pagination page | 2023-03-31 00:48:50 -03:00
oobabooga | fd72afd8e7 | Increase the textbox sizes | 2023-03-31 00:43:00 -03:00
oobabooga | f27a66b014 | Bump gradio version (make sure to update). This fixes the textbox shrinking vertically once it reaches a certain number of lines. | 2023-03-31 00:42:26 -03:00
ye7iaserag | f9940b79dc | Implement character gallery using Dataset | 2023-03-31 04:56:49 +02:00
oobabooga | bb69e054a7 | Add dummy file | 2023-03-30 21:08:50 -03:00
oobabooga | d4a9b5ea97 | Remove redundant preset (see the plot in #587) | 2023-03-30 17:34:44 -03:00
Thomas Antony | 7fa5d96c22 | Update to use new llamacpp API | 2023-03-30 11:23:05 +01:00
Thomas Antony | 79fa2b6d7e | Add support for alpaca | 2023-03-30 11:23:04 +01:00
Thomas Antony | 8953a262cb | Add llamacpp to requirements.txt | 2023-03-30 11:22:38 +01:00
Thomas Antony | a5f5736e74 | Add to text_generation.py | 2023-03-30 11:22:38 +01:00
Thomas Antony | 7745faa7bb | Add llamacpp to models.py | 2023-03-30 11:22:37 +01:00
Thomas Antony | 7a562481fa | Initial version of llamacpp_model.py | 2023-03-30 11:22:07 +01:00
Thomas Antony | 53ab1e285d | Update .gitignore | 2023-03-30 11:22:07 +01:00
oobabooga | f0fdab08d3 | Increase --chat height | 2023-03-30 01:02:11 -03:00
oobabooga | bd65940a48 | Increase --chat box height | 2023-03-30 00:43:49 -03:00
oobabooga | 131753fcf5 | Save the sha256sum of downloaded models | 2023-03-29 23:28:16 -03:00
oobabooga | a21e580782 | Move an import | 2023-03-29 22:50:58 -03:00
oobabooga | 55755e27b9 | Don't hardcode prompts in the settings dict/json | 2023-03-29 22:47:01 -03:00
oobabooga | 1cb9246160 | Adapt to the new model names | 2023-03-29 21:47:36 -03:00
oobabooga | 0345e04249 | Fix "Unknown argument(s): {'verbose': False}" | 2023-03-29 21:17:48 -03:00
oobabooga | 9104164297 | Merge pull request #618 from nikita-skakun/optimize-download-model (Improve download-model.py progress bar with multiple threads) | 2023-03-29 20:54:19 -03:00
oobabooga | 37754164eb | Move argparse | 2023-03-29 20:47:36 -03:00
oobabooga | 6403e72062 | Merge branch 'main' into nikita-skakun-optimize-download-model | 2023-03-29 20:45:33 -03:00
oobabooga | 1445ea86f7 | Add --output and better metadata for downloading models | 2023-03-29 20:26:44 -03:00