Commit Graph

2760 Commits

Author SHA1 Message Date
oobabooga
0aee7341d8 Properly count tokens/s for llama.cpp in chat mode 2023-03-31 17:04:32 -03:00
oobabooga
5c4e44b452 llama.cpp documentation 2023-03-31 15:20:39 -03:00
oobabooga
6fd70d0032 Add llama.cpp support (#447 from thomasantony/feature/llamacpp)
Documentation: https://github.com/oobabooga/text-generation-webui/wiki/llama.cpp-models
2023-03-31 15:17:32 -03:00
oobabooga
a5c9b7d977 Bump llamacpp version 2023-03-31 15:08:01 -03:00
oobabooga
ea3ba6fc73 Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp 2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb Add repetition_penalty 2023-03-31 14:45:17 -03:00
oobabooga
4d98623041 Merge branch 'main' into feature/llamacpp 2023-03-31 14:37:04 -03:00
oobabooga
4c27562157 Minor changes 2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a General improvements 2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd Add seed to settings 2023-03-31 12:22:07 -03:00
oobabooga
daeab6bac7 Merge pull request #678 from mayaeary/fix/python3.8
Fix `type object is not subscriptable`
2023-03-31 12:19:06 -03:00
oobabooga
75465fa041 Merge pull request #6 from jllllll/oobabooga-windows
Attempt to Improve Reliability
2023-03-31 11:27:23 -03:00
oobabooga
5a6f939f05 Change the preset here too 2023-03-31 10:43:05 -03:00
Maya
b246d17513 Fix type object is not subscriptable
Fix `type object is not subscriptable` on python 3.8
2023-03-31 14:20:31 +03:00
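The error fixed in the commit above comes from subscripting built-in types at runtime, which Python only allows from 3.9 onward (PEP 585). A minimal illustration of the failure and the portable spelling via `typing` (hypothetical function, not the project's actual code):

```python
from typing import List

# On Python 3.8, evaluating an annotation like `list[str]` raises:
#   TypeError: 'type' object is not subscriptable
# because built-in generics (PEP 585) only arrived in Python 3.9.
# `typing.List[str]` works on both 3.8 and later versions.
def join_words(words: List[str]) -> str:
    return " ".join(words)

print(join_words(["a", "b"]))  # -> a b
```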
Nikita Skakun
b99bea3c69 Fixed reported header affecting resuming download 2023-03-30 23:11:59 -07:00
oobabooga
3e1267af79 Merge pull request #673 from ye7iaserag/patch-1
Implement character gallery using Dataset
2023-03-31 02:04:52 -03:00
oobabooga
3b90d604d7 Sort the imports 2023-03-31 02:01:48 -03:00
oobabooga
d28a5c9569 Remove unnecessary css 2023-03-31 02:01:13 -03:00
ye7iaserag
ec093a5af7 Fix div alignment for long strings 2023-03-31 06:54:24 +02:00
oobabooga
92c7068daf Don't download if --check is specified 2023-03-31 01:31:47 -03:00
oobabooga
3737eafeaa Remove a border and allow more characters per pagination page 2023-03-31 00:48:50 -03:00
oobabooga
fd72afd8e7 Increase the textbox sizes 2023-03-31 00:43:00 -03:00
oobabooga
f27a66b014 Bump gradio version (make sure to update)
This fixes the textbox shrinking vertically once it reaches
a certain number of lines.
2023-03-31 00:42:26 -03:00
Nikita Skakun
0cc89e7755 Checksum code now activated by --check flag. 2023-03-30 20:06:12 -07:00
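A `--check`-style flag that gates checksum verification, as described in the commit above, could be wired up roughly like this (an argparse sketch with hypothetical names, not the project's actual download-script CLI):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="model downloader sketch")
    parser.add_argument("model", help="model to download")
    # store_true makes --check an opt-in boolean flag, False by default
    parser.add_argument("--check", action="store_true",
                        help="validate checksums of existing files "
                             "instead of downloading")
    return parser

args = build_parser().parse_args(["facebook/opt-1.3b", "--check"])
# When --check is given, the script would skip downloading and
# only verify the files already on disk.
print(args.check)  # -> True
```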
ye7iaserag
f9940b79dc Implement character gallery using Dataset 2023-03-31 04:56:49 +02:00
jllllll
e4e3c9095d Add warning for long paths 2023-03-30 20:48:40 -05:00
jllllll
172035d2e1 Minor Correction 2023-03-30 20:44:56 -05:00
jllllll
0b4ee14edc Attempt to Improve Reliability
Have pip directly download and install backup GPTQ wheel instead of first downloading through curl.
Install bitsandbytes from wheel compiled for Windows from modified source.
Add clarification of minor, intermittent issue to instructions.
Add system32 folder to end of PATH rather than beginning.
Add warning when installed under a path containing spaces.
2023-03-30 20:04:16 -05:00
oobabooga
bb69e054a7 Add dummy file 2023-03-30 21:08:50 -03:00
oobabooga
85e4ec6e6b Download the cuda branch directly 2023-03-30 18:22:48 -03:00
oobabooga
78c0da4a18 Use the cuda branch of gptq-for-llama
Did I do this right @jllllll? This is because the current default branch (triton) is not compatible with Windows.
2023-03-30 18:04:05 -03:00
oobabooga
d4a9b5ea97 Remove redundant preset (see the plot in #587) 2023-03-30 17:34:44 -03:00
Nikita Skakun
d550c12a3e Fixed the bug with additional bytes.
The issue seems to be with huggingface not reporting the entire size of the model.
Added an error message with instructions if the checksums don't match.
2023-03-30 12:52:16 -07:00
Thomas Antony
7fa5d96c22 Update to use new llamacpp API 2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e Add support for alpaca 2023-03-30 11:23:04 +01:00
Thomas Antony
8953a262cb Add llamacpp to requirements.txt 2023-03-30 11:22:38 +01:00
Thomas Antony
a5f5736e74 Add to text_generation.py 2023-03-30 11:22:38 +01:00
Thomas Antony
7745faa7bb Add llamacpp to models.py 2023-03-30 11:22:37 +01:00
Thomas Antony
7a562481fa Initial version of llamacpp_model.py 2023-03-30 11:22:07 +01:00
Thomas Antony
53ab1e285d Update .gitignore 2023-03-30 11:22:07 +01:00
Nikita Skakun
297ac051d9 Added sha256 validation of model files. 2023-03-30 02:34:19 -07:00
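The sha256 validation of model files added above can be sketched as follows (hypothetical helper names, streaming in chunks so multi-gigabyte model files never have to fit in memory):

```python
import hashlib

def sha256sum(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading 1 MiB at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def validate(path, expected):
    """Raise if the file on disk does not match the expected digest."""
    actual = sha256sum(path)
    if actual != expected:
        raise ValueError(f"Checksum mismatch for {path}: "
                         f"expected {expected}, got {actual}")
```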
Nikita Skakun
8c590c2362 Added a 'clean' flag to not resume download. 2023-03-30 00:42:19 -07:00
Nikita Skakun
e17af59261 Add support for resuming downloads
This commit adds the ability to resume interrupted downloads by adding a new function to the downloader module. The function uses the HTTP Range header to fetch only the remaining part of a file that wasn't downloaded yet.
2023-03-30 00:21:34 -07:00
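The commit message above describes resuming via the HTTP Range header: ask the server for only the bytes not yet on disk. A minimal sketch of that idea (hypothetical helper names, not the project's actual downloader code):

```python
import os
import urllib.request

def resume_request(url, dest):
    """Build a request that fetches only the part of `dest` still missing."""
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url)
    if start > 0:
        # Ask the server for the remainder of the file only; a server
        # that honors this replies with 206 Partial Content.
        req.add_header("Range", f"bytes={start}-")
    return req, start

def download(url, dest):
    req, start = resume_request(url, dest)
    # Append to the partial file when resuming, otherwise start fresh.
    mode = "ab" if start > 0 else "wb"
    with urllib.request.urlopen(req) as resp, open(dest, mode) as f:
        while chunk := resp.read(1 << 16):
            f.write(chunk)
```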
oobabooga
f0fdab08d3 Increase --chat height 2023-03-30 01:02:11 -03:00
oobabooga
bd65940a48 Increase --chat box height 2023-03-30 00:43:49 -03:00
oobabooga
131753fcf5 Save the sha256sum of downloaded models 2023-03-29 23:28:16 -03:00
oobabooga
a21e580782 Move an import 2023-03-29 22:50:58 -03:00
oobabooga
55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00