Commit Graph

1334 Commits

Author SHA1 Message Date
oobabooga
23116b88ef Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads) 2023-03-31 22:55:55 -03:00
oobabooga
74462ac713 Don't override the metadata when checking the sha256sum 2023-03-31 22:52:52 -03:00
oobabooga
2c52310642 Add --threads flag for llama.cpp 2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713 Fix streaming 2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd Add repetition_penalty 2023-03-31 19:01:34 -03:00
oobabooga
2259143fec Fix llama.cpp with --no-stream 2023-03-31 18:43:45 -03:00
oobabooga
875de5d983 Update ggml template 2023-03-31 17:57:31 -03:00
oobabooga
cbfe0b944a Update README.md 2023-03-31 17:49:11 -03:00
oobabooga
6a44f4aec6 Add support for downloading ggml files 2023-03-31 17:33:42 -03:00
oobabooga
3a47a602a3 Detect ggml*.bin files automatically 2023-03-31 17:18:21 -03:00
oobabooga
0aee7341d8 Properly count tokens/s for llama.cpp in chat mode 2023-03-31 17:04:32 -03:00
oobabooga
5c4e44b452 llama.cpp documentation 2023-03-31 15:20:39 -03:00
oobabooga
6fd70d0032 Add llama.cpp support (#447 from thomasantony/feature/llamacpp)
Documentation: https://github.com/oobabooga/text-generation-webui/wiki/llama.cpp-models
2023-03-31 15:17:32 -03:00
oobabooga
a5c9b7d977 Bump llamacpp version 2023-03-31 15:08:01 -03:00
oobabooga
ea3ba6fc73 Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp 2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb Add repetition_penalty 2023-03-31 14:45:17 -03:00
oobabooga
4d98623041 Merge branch 'main' into feature/llamacpp 2023-03-31 14:37:04 -03:00
oobabooga
4c27562157 Minor changes 2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a General improvements 2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd Add seed to settings 2023-03-31 12:22:07 -03:00
oobabooga
daeab6bac7 Merge pull request #678 from mayaeary/fix/python3.8
Fix `type object is not subscriptable`
2023-03-31 12:19:06 -03:00
oobabooga
5a6f939f05 Change the preset here too 2023-03-31 10:43:05 -03:00
Maya
b246d17513 Fix type object is not subscriptable
Fix `type object is not subscriptable` on Python 3.8
2023-03-31 14:20:31 +03:00
Nikita Skakun
b99bea3c69 Fixed reported header affecting resuming download 2023-03-30 23:11:59 -07:00
oobabooga
3e1267af79 Merge pull request #673 from ye7iaserag/patch-1
Implement character gallery using Dataset
2023-03-31 02:04:52 -03:00
oobabooga
3b90d604d7 Sort the imports 2023-03-31 02:01:48 -03:00
oobabooga
d28a5c9569 Remove unnecessary css 2023-03-31 02:01:13 -03:00
ye7iaserag
ec093a5af7 Fix div alignment for long strings 2023-03-31 06:54:24 +02:00
oobabooga
92c7068daf Don't download if --check is specified 2023-03-31 01:31:47 -03:00
oobabooga
3737eafeaa Remove a border and allow more characters per pagination page 2023-03-31 00:48:50 -03:00
oobabooga
fd72afd8e7 Increase the textbox sizes 2023-03-31 00:43:00 -03:00
oobabooga
f27a66b014 Bump gradio version (make sure to update)
This fixes the textbox shrinking vertically once it reaches a certain number of lines.
2023-03-31 00:42:26 -03:00
Nikita Skakun
0cc89e7755 Checksum code now activated by --check flag. 2023-03-30 20:06:12 -07:00
ye7iaserag
f9940b79dc Implement character gallery using Dataset 2023-03-31 04:56:49 +02:00
oobabooga
bb69e054a7 Add dummy file 2023-03-30 21:08:50 -03:00
oobabooga
d4a9b5ea97 Remove redundant preset (see the plot in #587) 2023-03-30 17:34:44 -03:00
Nikita Skakun
d550c12a3e Fixed the bug with additional bytes.
The issue seems to be that Hugging Face does not report the entire size of the model file.
Added an error message with instructions if the checksums don't match.
2023-03-30 12:52:16 -07:00
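
A minimal sketch of the check these download commits describe: hash the file on disk with sha256 and compare against the expected digest, printing recovery instructions on mismatch. The function name and message wording are illustrative, not the repository's actual code; the `--clean` flag it mentions is the one added in commit 8c590c2362 below.

```python
import hashlib
from pathlib import Path

def validate_checksum(path: Path, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Hash a downloaded file in chunks and compare against the expected digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        # Hypothetical error message: a resumed download that appended extra
        # bytes cannot be repaired in place, so suggest a clean re-download.
        print(f"Checksum mismatch for {path}. Delete the file and re-run "
              f"the download with --clean to start from scratch.")
        return False
    return True
```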
Thomas Antony
7fa5d96c22 Update to use new llamacpp API 2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e Add support for alpaca 2023-03-30 11:23:04 +01:00
Thomas Antony
8953a262cb Add llamacpp to requirements.txt 2023-03-30 11:22:38 +01:00
Thomas Antony
a5f5736e74 Add to text_generation.py 2023-03-30 11:22:38 +01:00
Thomas Antony
7745faa7bb Add llamacpp to models.py 2023-03-30 11:22:37 +01:00
Thomas Antony
7a562481fa Initial version of llamacpp_model.py 2023-03-30 11:22:07 +01:00
Thomas Antony
53ab1e285d Update .gitignore 2023-03-30 11:22:07 +01:00
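
The commit series above wires a llama.cpp binding into models.py and text_generation.py behind a small wrapper class (llamacpp_model.py). A rough sketch of that shape, written against the present-day llama-cpp-python API rather than the `llamacpp` package these commits used, with the `--threads` and `repetition_penalty` options from the surrounding commits exposed as parameters; the class and method names are illustrative.

```python
from llama_cpp import Llama

class LlamaCppModel:
    """Hypothetical wrapper in the style of llamacpp_model.py."""

    def __init__(self, model_path: str, threads: int = 8):
        # --threads maps to n_threads; ggml*.bin files are detected by
        # filename elsewhere in models.py (commit 3a47a602a3).
        self.model = Llama(model_path=model_path, n_threads=threads)

    def generate(self, prompt: str, max_tokens: int = 200,
                 temperature: float = 0.7, repetition_penalty: float = 1.1) -> str:
        out = self.model(prompt, max_tokens=max_tokens, temperature=temperature,
                         repeat_penalty=repetition_penalty)
        return out["choices"][0]["text"]
```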
Nikita Skakun
297ac051d9 Added sha256 validation of model files. 2023-03-30 02:34:19 -07:00
Nikita Skakun
8c590c2362 Added a 'clean' flag to not resume download. 2023-03-30 00:42:19 -07:00
Nikita Skakun
e17af59261 Add support for resuming downloads
This commit adds the ability to resume interrupted downloads via a new function in the downloader module. The function uses the HTTP Range header to fetch only the part of a file that hasn't been downloaded yet.
2023-03-30 00:21:34 -07:00
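
The mechanism this commit message describes, as a minimal sketch: measure how many bytes are already on disk, request the remainder with an HTTP Range header, and append to the partial file. The requests usage is standard; the function name and structure are illustrative, not the downloader module's actual code.

```python
import os
import requests

def resume_download(url: str, path: str, chunk_size: int = 1 << 20) -> None:
    """Fetch only the part of the file that isn't on disk yet via HTTP Range."""
    start = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {"Range": f"bytes={start}-"} if start else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as r:
        r.raise_for_status()
        # 206 Partial Content means the server honored the Range request;
        # a plain 200 means it ignored it, so overwrite instead of appending.
        mode = "ab" if r.status_code == 206 else "wb"
        with open(path, mode) as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```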
oobabooga
f0fdab08d3 Increase --chat height 2023-03-30 01:02:11 -03:00
oobabooga
bd65940a48 Increase --chat box height 2023-03-30 00:43:49 -03:00