oobabooga
b857f4655b
Update shared.py
2023-04-01 13:56:47 -03:00
oobabooga
012f4f83b8
Update README.md
2023-04-01 13:55:15 -03:00
oobabooga
fcda3f8776
Add also_return_rows to generate_chat_prompt
2023-04-01 01:12:13 -03:00
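For context, a minimal sketch of what an `also_return_rows` switch on a prompt builder can look like; the body and the `history` shape below are illustrative assumptions, not the webui's actual code:

```python
def generate_chat_prompt(user_input, history, also_return_rows=False):
    # Build the prompt from the chat history, one row per turn (illustrative).
    rows = [f"{speaker}: {text}\n" for speaker, text in history]
    rows.append(f"You: {user_input}\n")
    prompt = ''.join(rows)
    # The new flag optionally exposes the individual rows to the caller.
    if also_return_rows:
        return prompt, rows
    return prompt
```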
oobabooga
8c51b405e4
Progress towards generalizing Interface mode tab
2023-03-31 23:41:10 -03:00
oobabooga
23116b88ef
Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads)
2023-03-31 22:55:55 -03:00
oobabooga
74462ac713
Don't override the metadata when checking the sha256sum
2023-03-31 22:52:52 -03:00
oobabooga
2c52310642
Add --threads flag for llama.cpp
2023-03-31 21:18:05 -03:00
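Such a flag is typically defined through argparse and forwarded to the llama.cpp bindings; a sketch under that assumption (default value and help text are guesses):

```python
import argparse

parser = argparse.ArgumentParser()
# Assumed definition; the real flag lives in the webui's shared argument parser.
parser.add_argument('--threads', type=int, default=0,
                    help='Number of threads to use in llama.cpp (0 = library default)')
args = parser.parse_args()
# The value would then be forwarded when the model is instantiated, e.g.
# Llama(model_path=..., n_threads=args.threads or None)
```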
oobabooga
eeafd60713
Fix streaming
2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd
Add repetition_penalty
2023-03-31 19:01:34 -03:00
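A minimal sketch of the standard CTRL-style repetition penalty, which divides positive logits of already-seen tokens by the penalty and multiplies negative ones; this is the common formulation, not necessarily the exact code added here:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    # Penalize every token that has already appeared in the output.
    for token_id in set(generated_ids):
        score = logits[token_id]
        # Dividing a positive score (or multiplying a negative one) by
        # penalty > 1 always lowers that token's probability.
        logits[token_id] = score / penalty if score > 0 else score * penalty
    return logits
```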
oobabooga
2259143fec
Fix llama.cpp with --no-stream
2023-03-31 18:43:45 -03:00
oobabooga
875de5d983
Update ggml template
2023-03-31 17:57:31 -03:00
oobabooga
cbfe0b944a
Update README.md
2023-03-31 17:49:11 -03:00
oobabooga
6a44f4aec6
Add support for downloading ggml files
2023-03-31 17:33:42 -03:00
oobabooga
3a47a602a3
Detect ggml*.bin files automatically
2023-03-31 17:18:21 -03:00
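Automatic detection like this usually reduces to a filename glob over the model folder; a sketch under that assumption (the helper name is hypothetical):

```python
from pathlib import Path

def find_ggml_model(model_dir):
    # Hypothetical helper: treat any ggml*.bin file in the folder as a
    # llama.cpp model and return the first match, if any.
    matches = sorted(Path(model_dir).glob('ggml*.bin'))
    return matches[0] if matches else None
```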
oobabooga
0aee7341d8
Properly count tokens/s for llama.cpp in chat mode
2023-03-31 17:04:32 -03:00
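Counting tokens/s correctly means dividing only the newly generated tokens, not the whole prompt, by the elapsed time; a hedged sketch:

```python
import time

def timed_generate(generate_fn, prompt_tokens):
    # generate_fn is assumed to return the full token sequence, prompt included.
    t0 = time.time()
    output_tokens = generate_fn(prompt_tokens)
    elapsed = time.time() - t0
    # Only the tokens produced on top of the prompt count toward the rate.
    new_tokens = len(output_tokens) - len(prompt_tokens)
    print(f'Output generated in {elapsed:.2f} seconds '
          f'({new_tokens / elapsed:.2f} tokens/s)')
    return output_tokens
```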
oobabooga
5c4e44b452
llama.cpp documentation
2023-03-31 15:20:39 -03:00
oobabooga
6fd70d0032
Add llama.cpp support (#447 from thomasantony/feature/llamacpp)
Documentation: https://github.com/oobabooga/text-generation-webui/wiki/llama.cpp-models
2023-03-31 15:17:32 -03:00
oobabooga
a5c9b7d977
Bump llamacpp version
2023-03-31 15:08:01 -03:00
oobabooga
ea3ba6fc73
Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp
2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb
Add repetition_penalty
2023-03-31 14:45:17 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp
2023-03-31 14:37:04 -03:00
oobabooga
4c27562157
Minor changes
2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a
General improvements
2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd
Add seed to settings
2023-03-31 12:22:07 -03:00
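A sketch of the usual pattern for a seed setting, where -1 conventionally means "pick a fresh random seed"; the function name and the -1 convention are assumptions here:

```python
import random

import torch

def set_manual_seed(seed):
    # -1 is assumed to mean "use a fresh random seed".
    if seed == -1:
        seed = random.randint(1, 2**31)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    return seed
```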
oobabooga
daeab6bac7
Merge pull request #678 from mayaeary/fix/python3.8
Fix `type object is not subscriptable`
2023-03-31 12:19:06 -03:00
oobabooga
75465fa041
Merge pull request #6 from jllllll/oobabooga-windows
Attempt to Improve Reliability
2023-03-31 11:27:23 -03:00
oobabooga
5a6f939f05
Change the preset here too
2023-03-31 10:43:05 -03:00
Maya
b246d17513
Fix type object is not subscriptable
Fix `type object is not subscriptable` on Python 3.8
2023-03-31 14:20:31 +03:00
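On Python 3.8, subscripting built-in types in annotations (e.g. `list[str]`) raises `TypeError: 'type' object is not subscriptable`; the standard fix is to use the `typing` aliases. For illustration:

```python
from typing import Dict, List

# Fails at definition time on Python 3.8:
#   def load_rows(path: str) -> list[dict[str, str]]: ...
# Works on 3.8 and later:
def load_rows(path: str) -> List[Dict[str, str]]:
    ...
```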
Nikita Skakun
b99bea3c69
Fixed reported header affecting resuming download
2023-03-30 23:11:59 -07:00
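Resumable downloads generally hinge on the HTTP `Range` header plus appending to the partial file; a minimal sketch with `requests` (helper name and chunk size are assumptions):

```python
import os

import requests

def download_with_resume(url, path, chunk_size=1024 * 1024):
    # Ask the server to start where the partial file left off.
    start = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {'Range': f'bytes={start}-'} if start > 0 else {}
    with requests.get(url, stream=True, headers=headers, timeout=30) as r:
        r.raise_for_status()
        mode = 'ab' if start > 0 else 'wb'
        with open(path, mode) as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```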
oobabooga
3e1267af79
Merge pull request #673 from ye7iaserag/patch-1
...
Implement character gallery using Dataset
2023-03-31 02:04:52 -03:00
oobabooga
3b90d604d7
Sort the imports
2023-03-31 02:01:48 -03:00
oobabooga
d28a5c9569
Remove unnecessary css
2023-03-31 02:01:13 -03:00
ye7iaserag
ec093a5af7
Fix div alignment for long strings
2023-03-31 06:54:24 +02:00
oobabooga
92c7068daf
Don't download if --check is specified
2023-03-31 01:31:47 -03:00
oobabooga
3737eafeaa
Remove a border and allow more characters per pagination page
2023-03-31 00:48:50 -03:00
oobabooga
fd72afd8e7
Increase the textbox sizes
2023-03-31 00:43:00 -03:00
oobabooga
f27a66b014
Bump gradio version (make sure to update)
This fixes the textbox shrinking vertically once it reaches a certain number of lines.
2023-03-31 00:42:26 -03:00
Nikita Skakun
0cc89e7755
Checksum code now activated by --check flag.
2023-03-30 20:06:12 -07:00
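Verification of this kind typically streams each file through hashlib and compares the digest against the expected value; a sketch (the function name is assumed):

```python
import hashlib

def sha256sum(path, expected=None):
    # Hash in 1 MiB chunks so large model files never have to fit in memory.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b''):
            h.update(chunk)
    digest = h.hexdigest()
    return digest if expected is None else digest == expected
```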
ye7iaserag
f9940b79dc
Implement character gallery using Dataset
2023-03-31 04:56:49 +02:00
jllllll
e4e3c9095d
Add warning for long paths
2023-03-30 20:48:40 -05:00
jllllll
172035d2e1
Minor Correction
2023-03-30 20:44:56 -05:00
jllllll
0b4ee14edc
Attempt to Improve Reliability
Have pip download and install the backup GPTQ wheel directly instead of first downloading it through curl.
Install bitsandbytes from wheel compiled for Windows from modified source.
Add clarification of minor, intermittent issue to instructions.
Add system32 folder to end of PATH rather than beginning.
Add warning when installed under a path containing spaces.
2023-03-30 20:04:16 -05:00
oobabooga
bb69e054a7
Add dummy file
2023-03-30 21:08:50 -03:00
oobabooga
85e4ec6e6b
Download the cuda branch directly
2023-03-30 18:22:48 -03:00
oobabooga
78c0da4a18
Use the cuda branch of gptq-for-llama
Did I do this right @jllllll? This is because the current default branch (triton) is not compatible with Windows.
2023-03-30 18:04:05 -03:00
oobabooga
d4a9b5ea97
Remove redundant preset (see the plot in #587)
2023-03-30 17:34:44 -03:00
Nikita Skakun
d550c12a3e
Fixed the bug with additional bytes.
The issue seems to be with Hugging Face not reporting the entire size of the model.
Added an error message with instructions if the checksums don't match.
2023-03-30 12:52:16 -07:00
Thomas Antony
7fa5d96c22
Update to use new llamacpp API
2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e
Add support for alpaca
2023-03-30 11:23:04 +01:00