author | commit | message | date
Lounger | 7c0a17962d | Gallery improvements (#4789) | 2023-12-03 22:45:50 -03:00
oobabooga | 77d6ccf12b | Add a LOADER debug message while loading models | 2023-11-30 12:00:32 -08:00
oobabooga | 092a2c3516 | Fix a bug in llama.cpp get_logits() function | 2023-11-30 11:21:40 -08:00
oobabooga | 2698d7c9fd | Fix llama.cpp model unloading | 2023-11-29 15:19:48 -08:00
oobabooga | 9940ed9c77 | Sort the loaders | 2023-11-29 15:13:03 -08:00
oobabooga | a7670c31ca | Sort | 2023-11-28 18:43:33 -08:00
oobabooga | 6e51bae2e0 | Sort the loaders menu | 2023-11-28 18:41:11 -08:00
oobabooga | 68059d7c23 | llama.cpp: minor log change & lint | 2023-11-27 10:44:55 -08:00
tsukanov-as | 9f7ae6bb2e | fix detection of stopping strings when HTML escaping is used (#4728) | 2023-11-27 15:42:08 -03:00
oobabooga | 0589ff5b12 | Bump llama-cpp-python to 0.2.19 & add min_p and typical_p parameters to llama.cpp loader (#4701) | 2023-11-21 20:59:39 -03:00
oobabooga | 2769a1fa25 | Hide deprecated args from Session tab | 2023-11-21 15:15:16 -08:00
oobabooga | a2e6d00128 | Use convert_ids_to_tokens instead of decode in logits endpoint | 2023-11-19 09:22:08 -08:00
    This preserves the llama tokenizer spaces.
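Note on a2e6d00128: `decode()` collapses token ids back into plain text, while `convert_ids_to_tokens()` returns one string per id, keeping the "▁" leading-space marker that SentencePiece tokenizers such as Llama's use. A minimal sketch with the Hugging Face `transformers` API (the model name is illustrative):

```python
from transformers import AutoTokenizer

# Illustrative model; any SentencePiece-based tokenizer shows the difference.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

ids = tokenizer.encode("Hello world", add_special_tokens=False)

# decode() merges the ids into plain text, losing per-token boundaries:
print(tokenizer.decode(ids))                 # 'Hello world'

# convert_ids_to_tokens() keeps one string per id, with the
# leading-space marker intact -- what a per-token logits view needs:
print(tokenizer.convert_ids_to_tokens(ids))  # ['▁Hello', '▁world']
```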
oobabooga | 9da7bb203d | Minor LoRA bug fix | 2023-11-19 07:59:29 -08:00
oobabooga | a6f1e1bcc5 | Fix PEFT LoRA unloading | 2023-11-19 07:55:25 -08:00
oobabooga | ab94f0d9bf | Minor style change | 2023-11-18 21:11:04 -08:00
oobabooga | 5fcee696ea | New feature: enlarge character pictures on click (#4654) | 2023-11-19 02:05:17 -03:00
oobabooga | ef6feedeb2 | Add --nowebui flag for pure API mode (#4651) | 2023-11-18 23:38:39 -03:00
oobabooga | 0fa1af296c | Add /v1/internal/logits endpoint (#4650) | 2023-11-18 23:19:31 -03:00
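Note on 0fa1af296c: a minimal sketch of querying the new endpoint with `requests`. The default port (5000) and the `use_samplers` field follow the project's API docs; treat both as assumptions rather than something this commit guarantees:

```python
import requests

# Assumes the server was launched with: python server.py --api
# and the OpenAI-compatible API is listening on its default port.
url = "http://127.0.0.1:5000/v1/internal/logits"

payload = {
    "prompt": "Once upon a time",
    # False should return raw model logits; True, logits after the
    # active sampler stack. (Assumed field name.)
    "use_samplers": False,
}

r = requests.post(url, json=payload, timeout=60)
r.raise_for_status()
print(r.json())  # top next-token candidates and their scores
```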
oobabooga | 8f4f4daf8b | Add --admin-key flag for API (#4649) | 2023-11-18 22:33:27 -03:00
Jordan Tucker | baab894759 | fix: use system message in chat-instruct mode (#4648) | 2023-11-18 20:20:13 -03:00
oobabooga | 47d9e2618b | Refresh the Preset menu after saving a preset | 2023-11-18 14:03:42 -08:00
oobabooga | 83b64e7fc1 | New feature: "random preset" button (#4647) | 2023-11-18 18:31:41 -03:00
oobabooga | e0ca49ed9c | Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637) | 2023-11-18 00:31:27 -03:00
    * Update requirements*.txt
    * Add back seed
oobabooga | 9d6f79db74 | Revert "Bump llama-cpp-python to 0.2.18 (#4611)" | 2023-11-17 05:14:25 -08:00
    This reverts commit 923c8e25fb.
oobabooga | 13dc3b61da | Update README | 2023-11-16 19:57:55 -08:00
oobabooga | 8b66d83aa9 | Set use_fast=True by default, create --no_use_fast flag | 2023-11-16 19:55:28 -08:00
    This increases tokens/second for HF loaders.
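Note on 8b66d83aa9: in Hugging Face terms the change is just which tokenizer class gets instantiated. A sketch of the two paths (model name illustrative):

```python
from transformers import AutoTokenizer

model = "meta-llama/Llama-2-7b-hf"  # illustrative

# New default: the Rust-backed "fast" tokenizer, noticeably quicker.
fast = AutoTokenizer.from_pretrained(model, use_fast=True)

# What --no_use_fast falls back to: the pure-Python tokenizer, kept as
# an escape hatch for models whose fast tokenizer misbehaves.
slow = AutoTokenizer.from_pretrained(model, use_fast=False)

print(type(fast).__name__, type(slow).__name__)
# e.g. LlamaTokenizerFast LlamaTokenizer
```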
oobabooga | 6525707a7f | Fix "send instruction template to..." buttons (closes #4625) | 2023-11-16 18:16:42 -08:00
oobabooga | 510a01ef46 | Lint | 2023-11-16 18:03:06 -08:00
oobabooga | 923c8e25fb | Bump llama-cpp-python to 0.2.18 (#4611) | 2023-11-16 22:55:14 -03:00
oobabooga | 58c6001be9 | Add missing exllamav2 samplers | 2023-11-16 07:09:40 -08:00
oobabooga | cd41f8912b | Warn users about n_ctx / max_seq_len | 2023-11-15 18:56:42 -08:00
oobabooga | 9be48e83a9 | Start API when "api" checkbox is checked | 2023-11-15 16:35:47 -08:00
oobabooga | a85ce5f055 | Add more info messages for truncation / instruction template | 2023-11-15 16:20:31 -08:00
oobabooga | 883701bc40 | Alternative solution to 025da386a0 | 2023-11-15 16:04:02 -08:00
    Fixes an error.
oobabooga | 8ac942813c | Revert "Fix CPU memory limit error (issue #3763) (#4597)" | 2023-11-15 16:01:54 -08:00
    This reverts commit 025da386a0.
oobabooga | e6f44d6d19 | Print context length / instruction template to terminal when loading models | 2023-11-15 16:00:51 -08:00
oobabooga | e05d8fd441 | Style changes | 2023-11-15 15:51:37 -08:00
Andy Bao | 025da386a0 | Fix CPU memory limit error (issue #3763) (#4597) | 2023-11-15 20:27:20 -03:00
    get_max_memory_dict() was not properly formatting shared.args.cpu_memory
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 4aabff3728 | Remove old API, launch OpenAI API with --api | 2023-11-10 06:39:08 -08:00
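Note on 4aabff3728: from here on there is a single, OpenAI-compatible API. A minimal sketch of calling it from Python; the port and payload shape follow the OpenAI completions convention the project adopted, so treat the details as assumptions:

```python
import requests

# Assumes: python server.py --api, OpenAI-compatible endpoints on port 5000.
url = "http://127.0.0.1:5000/v1/completions"

payload = {
    "prompt": "The quick brown fox",
    "max_tokens": 32,
    "temperature": 0.7,
}

r = requests.post(url, json=payload, timeout=120)
r.raise_for_status()
print(r.json()["choices"][0]["text"])
```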
oobabooga | 2af7e382b1 | Revert "Bump llama-cpp-python to 0.2.14" | 2023-11-09 10:02:13 -08:00
    This reverts commit 5c3eb22ce6. The new version has issues:
    https://github.com/oobabooga/text-generation-webui/issues/4540
    https://github.com/abetlen/llama-cpp-python/issues/893
oobabooga | 21ed9a260e | Document the new "Custom system message" field | 2023-11-08 17:54:10 -08:00
oobabooga | 2358706453 | Add /v1/internal/model/load endpoint (tentative) | 2023-11-07 20:58:06 -08:00
oobabooga | 43c53a7820 | Refactor the /v1/models endpoint | 2023-11-07 19:59:27 -08:00
oobabooga | 1b69694fe9 | Add types to the encode/decode/token-count endpoints | 2023-11-07 19:32:14 -08:00
oobabooga | 6e2e0317af | Separate context and system message in instruction formats (#4499) | 2023-11-07 20:02:58 -03:00
oobabooga | 5c0559da69 | Training: fix .txt files not showing in dropdowns | 2023-11-07 14:41:11 -08:00
oobabooga | af3d25a503 | Disable logits_all in llamacpp_HF (makes processing 3x faster) | 2023-11-07 14:35:48 -08:00
oobabooga | 5c3eb22ce6 | Bump llama-cpp-python to 0.2.14 | 2023-11-07 14:20:43 -08:00
oobabooga | ec17a5d2b7 | Make OpenAI API the default API (#4430) | 2023-11-06 02:38:29 -03:00
feng lui | 4766a57352 | transformers: add use_flash_attention_2 option (#4373) | 2023-11-04 13:59:33 -03:00
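Note on 4766a57352: the option maps onto a `from_pretrained` keyword that `transformers` accepted at the time (newer releases spell it `attn_implementation="flash_attention_2"`). A sketch, assuming an FA2-capable GPU and an illustrative model name:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # illustrative
    torch_dtype=torch.float16,    # FlashAttention-2 requires fp16/bf16
    use_flash_attention_2=True,
    device_map="auto",
)
```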
wouter van der plas | add359379e | fixed two links in the ui (#4452) | 2023-11-04 13:41:42 -03:00
oobabooga | aa5d671579 | Add temperature_last parameter (#4472) | 2023-11-04 13:09:07 -03:00
oobabooga | 1ab8700d94 | Change frequency/presence penalty ranges | 2023-11-03 17:38:19 -07:00
oobabooga | 45fcb60e7a | Make truncation_length_max apply to max_seq_len/n_ctx | 2023-11-03 11:29:31 -07:00
oobabooga | 7f9c1cbb30 | Change min_p default to 0.0 | 2023-11-03 08:25:22 -07:00
oobabooga | 4537853e2c | Change min_p default to 1.0 | 2023-11-03 08:13:50 -07:00
kalomaze | 367e5e6e43 | Implement Min P as a sampler option in HF loaders (#4449) | 2023-11-02 16:32:51 -03:00
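Note on 367e5e6e43 and the two default changes above it: Min-P keeps only tokens whose probability is at least `min_p` times that of the most likely token, so 0.0 disables the sampler (the short-lived 1.0 default would have been nearly greedy). A minimal NumPy sketch of the filtering step, not the PR's actual code:

```python
import numpy as np

def min_p_filter(logits: np.ndarray, min_p: float) -> np.ndarray:
    """Mask tokens whose probability is below min_p * max(prob)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.where(probs >= min_p * probs.max(), logits, -np.inf)

# With min_p=0.1, a token survives if it is at least 1/10 as likely
# as the top token; min_p=0.0 keeps everything (sampler disabled).
print(min_p_filter(np.array([2.0, 1.0, -1.0, -4.0]), 0.1))
# [  2.   1. -inf -inf]
```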
oobabooga | fcb7017b7a | Remove a checkbox | 2023-11-02 12:24:09 -07:00
Julien Chaumond | fdcaa955e3 | transformers: Add a flag to force load from safetensors (#4450) | 2023-11-02 16:20:54 -03:00
oobabooga | c0655475ae | Add cache_8bit option | 2023-11-02 11:23:04 -07:00
oobabooga | 42f816312d | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2023-11-02 11:09:26 -07:00
oobabooga | 77abd9b69b | Add no_flash_attn option | 2023-11-02 11:08:53 -07:00
Julien Chaumond | a56ef2a942 | make torch.load a bit safer (#4448) | 2023-11-02 14:07:08 -03:00
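Note on a56ef2a942: the usual way to harden `torch.load` is to stop it from unpickling arbitrary objects; whether this commit uses exactly this flag is an assumption:

```python
import torch

# weights_only=True (PyTorch >= 1.13) restricts unpickling to tensors and
# other allow-listed types, so a malicious checkpoint cannot run arbitrary
# code at load time.
state_dict = torch.load("model.bin", map_location="cpu", weights_only=True)
```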
Mehran Ziadloo | aaf726dbfb | Updating the shared settings object when loading a model (#4425) | 2023-11-01 01:29:57 -03:00
oobabooga | 9bd0724d85 | Change frequency/presence penalty ranges | 2023-10-31 20:57:56 -07:00
Meheret | 0707ed7677 | updated wiki link (#4415) | 2023-10-31 19:09:05 -03:00
oobabooga | 262f8ae5bb | Use default gr.Dataframe for evaluation table | 2023-10-27 06:49:14 -07:00
oobabooga | 839a87bac8 | Fix is_ccl_available & is_xpu_available imports | 2023-10-26 20:27:04 -07:00
Abhilash Majumder | 778a010df8 | Intel Gpu support initialization (#4340) | 2023-10-26 23:39:51 -03:00
oobabooga | 92b2f57095 | Minor metadata bug fix (second attempt) | 2023-10-26 18:57:32 -07:00
tdrussell | 72f6fc6923 | Rename additive_repetition_penalty to presence_penalty, add frequency_penalty (#4376) | 2023-10-25 12:10:28 -03:00
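Note on 72f6fc6923: the renamed samplers follow the OpenAI convention: presence_penalty subtracts a flat offset from every token that has appeared at least once, while frequency_penalty grows with the repeat count. A minimal sketch of that logic, not the PR's exact code:

```python
import numpy as np

def apply_penalties(logits, generated_ids, presence_penalty, frequency_penalty):
    """Penalize tokens that already occur in the generated sequence."""
    counts = np.bincount(generated_ids, minlength=len(logits))
    logits = logits - presence_penalty * (counts > 0)  # flat, once per token
    logits = logits - frequency_penalty * counts       # grows with repetition
    return logits

# Token 2 appeared twice and token 5 once; all others are untouched.
print(apply_penalties(np.zeros(8), np.array([2, 5, 2]), 0.5, 0.2))
# [ 0.   0.  -0.9  0.   0.  -0.7  0.   0. ]
```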
oobabooga | ef1489cd4d | Remove unused parameter in AutoAWQ | 2023-10-23 20:45:43 -07:00
oobabooga | 1edf321362 | Lint | 2023-10-23 13:09:03 -07:00
oobabooga | 280ae720d7 | Organize | 2023-10-23 13:07:17 -07:00
oobabooga | 49e5eecce4 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-10-23 12:54:05 -07:00
oobabooga | 306d764ff6 | Minor metadata bug fix | 2023-10-23 12:46:24 -07:00
adrianfiedler | 4bc411332f | Fix broken links (#4367) | 2023-10-23 14:09:57 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 92691ee626 | Disable trust_remote_code by default | 2023-10-23 09:57:44 -07:00
tdrussell | 4440f87722 | Add additive_repetition_penalty sampler setting. (#3627) | 2023-10-23 02:28:07 -03:00
oobabooga | df90d03e0b | Replace --mul_mat_q with --no_mul_mat_q | 2023-10-22 12:23:03 -07:00
Googulator | d0c3b407b3 | transformers loader: multi-LoRAs support (#3120) | 2023-10-22 16:06:22 -03:00
omo | 4405513ca5 | Option to select/target additional linear modules/layers in LORA training (#4178) | 2023-10-22 15:57:19 -03:00
oobabooga | 2d1b3332e4 | Ignore warnings on Colab | 2023-10-21 21:45:25 -07:00
oobabooga | 09f807af83 | Use ExLlama_HF for GPTQ models by default | 2023-10-21 20:45:38 -07:00
oobabooga | 506d05aede | Organize command-line arguments | 2023-10-21 18:52:59 -07:00
oobabooga | fbac6d21ca | Add missing exception | 2023-10-20 23:53:24 -07:00
Brian Dashore | 3345da2ea4 | Add flash-attention 2 for windows (#4235) | 2023-10-21 03:46:23 -03:00
Johan | 1d5a015ce7 | Enable special token support for exllamav2 (#4314) | 2023-10-21 01:54:06 -03:00
turboderp | ae8cd449ae | ExLlamav2_HF: Convert logits to FP32 (#4310) | 2023-10-18 23:16:05 -03:00
oobabooga | f17f7a6913 | Increase the evaluation table height | 2023-10-16 12:55:35 -07:00
oobabooga | 8ea554bc19 | Check for torch.xpu.is_available() | 2023-10-16 12:53:40 -07:00
oobabooga | 188d20e9e5 | Reduce the evaluation table height | 2023-10-16 10:53:42 -07:00
oobabooga | 2d44adbb76 | Clear the torch cache while evaluating | 2023-10-16 10:52:50 -07:00
oobabooga | 71cac7a1b2 | Increase the height of the evaluation table | 2023-10-15 21:56:40 -07:00
oobabooga | e14bde4946 | Minor improvements to evaluation logs | 2023-10-15 20:51:43 -07:00
oobabooga | b88b2b74a6 | Experimental Intel Arc transformers support (untested) | 2023-10-15 20:51:11 -07:00
Forkoz | 8cce1f1126 | Exllamav2 lora support (#4229) | 2023-10-14 16:12:41 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 773c17faec | Fix a warning | 2023-10-10 20:53:38 -07:00
oobabooga | f63361568c | Fix safetensors kwarg usage in AutoAWQ | 2023-10-10 19:03:09 -07:00
oobabooga | 39f16ff83d | Fix default/notebook tabs css | 2023-10-10 18:45:12 -07:00