Author | Commit | Message | Date
oobabooga | cde000d478 | Remove non-HF ExLlamaV2 loader (#5431) | 2024-02-04 01:15:51 -03:00
kalomaze | b6077b02e4 | Quadratic sampling (#5403) (Co-authored-by: oobabooga) | 2024-02-04 00:20:02 -03:00
lmg-anon | db1da9f98d | Fix logprobs tokens in OpenAI API (#5339) | 2024-01-22 08:07:42 -03:00
oobabooga | e055967974 | Add prompt_lookup_num_tokens parameter (#5296) | 2024-01-17 17:09:36 -03:00
oobabooga | 29c2693ea0 | dynatemp_low, dynatemp_high, dynatemp_exponent parameters (#5209) | 2024-01-08 23:28:35 -03:00
oobabooga | 0d07b3a6a1 | Add dynamic_temperature_low parameter (#5198) | 2024-01-07 17:03:47 -03:00
oobabooga | b8a0b3f925 | Don't print torch tensors with --verbose | 2024-01-07 10:35:55 -08:00
oobabooga | cf820c69c5 | Print generation parameters with --verbose (HF only) | 2024-01-07 10:06:23 -08:00
kalomaze | 48327cc5c4 | Dynamic Temperature HF loader support (#5174) (Co-authored-by: oobabooga) | 2024-01-07 10:36:26 -03:00
oobabooga | 2734ce3e4c | Remove RWKV loader (#5130) | 2023-12-31 02:01:40 -03:00
oobabooga | 0e54a09bcb | Remove exllamav1 loaders (#5128) | 2023-12-31 01:57:06 -03:00
oobabooga | 8c60495878 | UI: add "Maximum UI updates/second" parameter | 2023-12-24 09:17:40 -08:00
zhangningboo | 1b8b61b928 | Fix output_ids decoding for Qwen/Qwen-7B-Chat (#5045) | 2023-12-22 23:11:02 -03:00
oobabooga | 83cf1a6b67 | Fix Yi space issue (closes #4996) | 2023-12-19 07:54:19 -08:00
oobabooga | 12690d3ffc | Better HF grammar implementation (#4953) | 2023-12-17 02:01:23 -03:00
oobabooga | 8513028968 | Fix lag in the chat tab during streaming | 2023-12-12 13:01:25 -08:00
oobabooga | 39d2fe1ed9 | Jinja templates for Instruct and Chat (#4874) | 2023-12-12 17:23:14 -03:00
oobabooga | 181743fd97 | Fix missing spaces tokenizer issue (closes #4834) | 2023-12-08 05:16:46 -08:00
Yiximail | 1c74b3ab45 | Fix partial unicode characters issue (#4837) | 2023-12-08 09:50:53 -03:00
oobabooga | 6430acadde | Minor bug fix after https://github.com/oobabooga/text-generation-webui/pull/4814 | 2023-12-05 10:08:11 -08:00
oobabooga | 0f828ea441 | Do not limit API updates/second | 2023-12-04 20:45:43 -08:00
oobabooga | 9edb193def | Optimize HF text generation (#4814) | 2023-12-05 00:00:40 -03:00
tsukanov-as | 9f7ae6bb2e | fix detection of stopping strings when HTML escaping is used (#4728) | 2023-11-27 15:42:08 -03:00
oobabooga | 1b69694fe9 | Add types to the encode/decode/token-count endpoints | 2023-11-07 19:32:14 -08:00
oobabooga | ec17a5d2b7 | Make OpenAI API the default API (#4430) | 2023-11-06 02:38:29 -03:00
oobabooga | aa5d671579 | Add temperature_last parameter (#4472) | 2023-11-04 13:09:07 -03:00
kalomaze | 367e5e6e43 | Implement Min P as a sampler option in HF loaders (#4449) | 2023-11-02 16:32:51 -03:00
Abhilash Majumder | 778a010df8 | Intel Gpu support initialization (#4340) | 2023-10-26 23:39:51 -03:00
tdrussell | 72f6fc6923 | Rename additive_repetition_penalty to presence_penalty, add frequency_penalty (#4376) | 2023-10-25 12:10:28 -03:00
tdrussell | 4440f87722 | Add additive_repetition_penalty sampler setting. (#3627) | 2023-10-23 02:28:07 -03:00
oobabooga | b88b2b74a6 | Experimental Intel Arc transformers support (untested) | 2023-10-15 20:51:11 -07:00
Brian Dashore | 98fa73a974 | Text Generation: stop if EOS token is reached (#4213) | 2023-10-07 19:46:42 -03:00
oobabooga | ae4ba3007f | Add grammar to transformers and _HF loaders (#4091) | 2023-10-05 10:01:36 -03:00
oobabooga | 869f47fff9 | Lint | 2023-09-19 13:51:57 -07:00
BadisG | 893a72a1c5 | Stop generation immediately when using "Maximum tokens/second" (#3952) (Co-authored-by: oobabooga) | 2023-09-18 14:27:06 -03:00
oobabooga | 0ede2965d5 | Remove an error message | 2023-09-17 18:46:08 -07:00
oobabooga | a069f3904c | Undo part of ad8ac545a5 | 2023-09-17 08:12:23 -07:00
oobabooga | ad8ac545a5 | Tokenization improvements | 2023-09-17 07:02:00 -07:00
saltacc | cd08eb0753 | token probs for non HF loaders (#3957) | 2023-09-17 10:42:32 -03:00
oobabooga | ef04138bc0 | Improve the UI tokenizer | 2023-09-15 19:30:44 -07:00
saltacc | f01b9aa71f | Add customizable ban tokens (#3899) | 2023-09-15 18:27:27 -03:00
oobabooga | c2a309f56e | Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) | 2023-09-12 14:33:07 -03:00
oobabooga | 47e490c7b4 | Set use_cache=True by default for all models | 2023-08-30 13:26:27 -07:00
oobabooga | cec8db52e5 | Add max_tokens_second param (#3533) | 2023-08-29 17:44:31 -03:00
oobabooga | 2cb07065ec | Fix an escaping bug | 2023-08-20 21:50:42 -07:00
oobabooga | a74dd9003f | Fix HTML escaping for perplexity_colors extension | 2023-08-20 21:40:22 -07:00
cal066 | 7a4fcee069 | Add ctransformers support (#3313) (Co-authored-by: cal066, oobabooga, randoentity) | 2023-08-11 14:41:33 -03:00
oobabooga | 65aa11890f | Refactor everything (#3481) | 2023-08-06 21:49:27 -03:00
oobabooga | 0af10ab49b | Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) | 2023-08-06 17:22:48 -03:00
Pete | f4005164f4 | Fix llama.cpp truncation (#3400) (Co-authored-by: oobabooga) | 2023-08-03 20:01:15 -03:00