oobabooga | 10aedc329f | Logging: more readable messages when renaming chat histories | 2024-02-22 07:57:06 -08:00
oobabooga | faf3bf2503 | Perplexity evaluation: make UI events more robust (attempt) | 2024-02-22 07:13:22 -08:00
oobabooga | ac5a7a26ea | Perplexity evaluation: add some informative error messages | 2024-02-21 20:20:52 -08:00
oobabooga | 59032140b5 | Fix CFG with llamacpp_HF (2nd attempt) | 2024-02-19 18:35:42 -08:00
oobabooga | c203c57c18 | Fix CFG with llamacpp_HF | 2024-02-19 18:09:49 -08:00
oobabooga | ae05d9830f | Replace {{char}}, {{user}} in the chat template itself | 2024-02-18 19:57:54 -08:00
oobabooga | 1f27bef71b | Move chat UI elements to the right on desktop (#5538) | 2024-02-18 14:32:05 -03:00
oobabooga | d6bd71db7f | ExLlamaV2: fix loading when autosplit is not set | 2024-02-17 12:54:37 -08:00
oobabooga | af0bbf5b13 | Lint | 2024-02-17 09:01:04 -08:00
oobabooga | a6730f88f7 | Add --autosplit flag for ExLlamaV2 (#5524) | 2024-02-16 15:26:10 -03:00
oobabooga | 4039999be5 | Autodetect llamacpp_HF loader when tokenizer exists | 2024-02-16 09:29:26 -08:00
oobabooga | 76d28eaa9e | Add a menu for customizing the instruction template for the model (#5521) | 2024-02-16 14:21:17 -03:00
oobabooga | 0e1d8d5601 | Instruction template: make "Send to default/notebook" work without a tokenizer | 2024-02-16 08:01:07 -08:00
oobabooga | 44018c2f69 | Add a "llamacpp_HF creator" menu (#5519) | 2024-02-16 12:43:24 -03:00
oobabooga | b2b74c83a6 | Fix Qwen1.5 in llamacpp_HF | 2024-02-15 19:04:19 -08:00
oobabooga | 080f7132c0 | Revert gradio to 3.50.2 (#5513) | 2024-02-15 20:40:23 -03:00
oobabooga | 7123ac3f77 | Remove "Maximum UI updates/second" parameter (#5507) | 2024-02-14 23:34:30 -03:00
DominikKowalczyk | 33c4ce0720 | Bump gradio to 4.19 (#5419) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2024-02-14 23:28:26 -03:00
oobabooga | b16958575f | Minor bug fix | 2024-02-13 19:48:32 -08:00
oobabooga | d47182d9d1 | llamacpp_HF: do not use oobabooga/llama-tokenizer (#5499) | 2024-02-14 00:28:51 -03:00
oobabooga | 069ed7c6ef | Lint | 2024-02-13 16:05:41 -08:00
oobabooga | 86c320ab5a | llama.cpp: add a progress bar for prompt evaluation | 2024-02-07 21:56:10 -08:00
oobabooga | c55b8ce932 | Improved random preset generation | 2024-02-06 08:51:52 -08:00
oobabooga | 4e34ae0587 | Minor logging improvements | 2024-02-06 08:22:08 -08:00
oobabooga | 3add2376cd | Better warpers logging | 2024-02-06 07:09:21 -08:00
oobabooga | 494cc3c5b0 | Handle empty sampler priority field, use default values | 2024-02-06 07:05:32 -08:00
oobabooga | 775902c1f2 | Sampler priority: better logging, always save to presets | 2024-02-06 06:49:22 -08:00
oobabooga | acfbe6b3b3 | Minor doc changes | 2024-02-06 06:35:01 -08:00
oobabooga | 8ee3cea7cb | Improve some log messages | 2024-02-06 06:31:27 -08:00
oobabooga | 8a6d9abb41 | Small fixes | 2024-02-06 06:26:27 -08:00
oobabooga | 2a1063eff5 | Revert "Remove non-HF ExLlamaV2 loader (#5431)" (reverts commit cde000d478) | 2024-02-06 06:21:36 -08:00
oobabooga | 8c35fefb3b | Add custom sampler order support (#5443) | 2024-02-06 11:20:10 -03:00
oobabooga | 7301c7618f | Minor change to Models tab | 2024-02-04 21:49:58 -08:00
oobabooga | f234fbe83f | Improve a log message after previous commit | 2024-02-04 21:44:53 -08:00
oobabooga | 7073665a10 | Truncate long chat completions inputs (#5439) | 2024-02-05 02:31:24 -03:00
oobabooga | 9033fa5eee | Organize the Model tab | 2024-02-04 19:30:22 -08:00
Forkoz | 2a45620c85 | Split by rows instead of layers for llama.cpp multi-gpu (#5435) | 2024-02-04 23:36:40 -03:00
Badis Ghoubali | 3df7e151f7 | Fix the n_batch slider (#5436) | 2024-02-04 18:15:30 -03:00
oobabooga | 4e188eeb80 | Lint | 2024-02-03 20:40:10 -08:00
oobabooga | cde000d478 | Remove non-HF ExLlamaV2 loader (#5431) | 2024-02-04 01:15:51 -03:00
kalomaze | b6077b02e4 | Quadratic sampling (#5403) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2024-02-04 00:20:02 -03:00
Badis Ghoubali | 40c7977f9b | Add roleplay.gbnf grammar (#5368) | 2024-01-28 21:41:28 -03:00
sam-ngu | c0bdcee646 | Add trust_remote_code to the DeepSpeed loader init (#5237) | 2024-01-26 11:10:57 -03:00
oobabooga | 87dc421ee8 | Bump exllamav2 to 0.0.12 (#5352) | 2024-01-22 22:40:12 -03:00
oobabooga | aad73667af | Lint | 2024-01-22 03:25:55 -08:00
lmg-anon | db1da9f98d | Fix logprobs tokens in OpenAI API (#5339) | 2024-01-22 08:07:42 -03:00
Forkoz | 5c5ef4cef7 | UI: change n_gpu_layers maximum to 256 for larger models (#5262) | 2024-01-17 17:13:16 -03:00
ilya sheprut | 4d14eb8b82 | LoRA: Fix error "Attempting to unscale FP16 gradients" when training (#5268) | 2024-01-17 17:11:49 -03:00
oobabooga | e055967974 | Add prompt_lookup_num_tokens parameter (#5296) | 2024-01-17 17:09:36 -03:00
oobabooga | b3fc2cd887 | UI: Do not save unchanged extension settings to settings.yaml | 2024-01-10 03:48:30 -08:00