oobabooga
e0ca49ed9c
Bump llama-cpp-python to 0.2.18 (2nd attempt) ( #4637 )
* Update requirements*.txt
* Add back seed
2023-11-18 00:31:27 -03:00
oobabooga
9d6f79db74
Revert "Bump llama-cpp-python to 0.2.18 ( #4611 )"
This reverts commit 923c8e25fb.
2023-11-17 05:14:25 -08:00
oobabooga
8b66d83aa9
Set use_fast=True by default, create --no_use_fast flag
This increases tokens/second for HF loaders.
2023-11-16 19:55:28 -08:00
oobabooga
923c8e25fb
Bump llama-cpp-python to 0.2.18 ( #4611 )
2023-11-16 22:55:14 -03:00
oobabooga
6e2e0317af
Separate context and system message in instruction formats ( #4499 )
2023-11-07 20:02:58 -03:00
oobabooga
af3d25a503
Disable logits_all in llamacpp_HF (makes processing 3x faster)
2023-11-07 14:35:48 -08:00
feng lui
4766a57352
transformers: add use_flash_attention_2 option ( #4373 )
2023-11-04 13:59:33 -03:00
oobabooga
aa5d671579
Add temperature_last parameter ( #4472 )
2023-11-04 13:09:07 -03:00
kalomaze
367e5e6e43
Implement Min P as a sampler option in HF loaders ( #4449 )
2023-11-02 16:32:51 -03:00
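The Min P sampler added above keeps only those tokens whose probability is at least `min_p` times the probability of the single most likely token, then renormalizes. A minimal sketch of the idea (function and parameter names are assumptions, not the repo's implementation):

```python
def min_p_filter(probs, min_p=0.05):
    """Zero out tokens whose probability falls below min_p times the
    top token's probability, then renormalize the survivors.

    probs: list of token probabilities summing to 1.
    """
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]
```

Because the cutoff scales with the top token's probability, the filter is aggressive when the model is confident and permissive when the distribution is flat, unlike a fixed top-p cutoff.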
oobabooga
c0655475ae
Add cache_8bit option
2023-11-02 11:23:04 -07:00
Abhilash Majumder
778a010df8
Intel GPU support initialization ( #4340 )
2023-10-26 23:39:51 -03:00
tdrussell
72f6fc6923
Rename additive_repetition_penalty to presence_penalty, add frequency_penalty ( #4376 )
2023-10-25 12:10:28 -03:00
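The presence/frequency penalty pair introduced above follows the usual OpenAI-style formulation: a flat penalty subtracted once from any token that has already appeared, plus a penalty scaled by how many times it has appeared. A hedged sketch under those assumptions (not the repo's actual code):

```python
from collections import Counter

def apply_penalties(logits, generated_ids,
                    presence_penalty=0.0, frequency_penalty=0.0):
    """Penalize already-generated tokens before sampling.

    presence_penalty is subtracted once per token that has appeared;
    frequency_penalty is subtracted once per occurrence.
    """
    counts = Counter(generated_ids)
    out = list(logits)
    for tok, n in counts.items():
        out[tok] -= presence_penalty + frequency_penalty * n
    return out
```

Subtracting in logit space (rather than dividing, as classic repetition penalty does) keeps the penalty's effect independent of the sign of the original logit, which is why the additive form was split out under these names.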
tdrussell
4440f87722
Add additive_repetition_penalty sampler setting. ( #3627 )
2023-10-23 02:28:07 -03:00
oobabooga
df90d03e0b
Replace --mul_mat_q with --no_mul_mat_q
2023-10-22 12:23:03 -07:00
oobabooga
fae8062d39
Bump to latest gradio (3.47) ( #4258 )
2023-10-10 22:20:49 -03:00
oobabooga
b6fe6acf88
Add threads_batch parameter
2023-10-01 21:28:00 -07:00
jllllll
41a2de96e5
Bump llama-cpp-python to 0.2.11
2023-10-01 18:08:10 -05:00
StoyanStAtanasov
7e6ff8d1f0
Enable NUMA feature for llama_cpp_python ( #4040 )
2023-09-26 22:05:00 -03:00
oobabooga
1ca54faaf0
Improve --multi-user mode
2023-09-26 06:42:33 -07:00
oobabooga
d0d221df49
Add --use_fast option ( closes #3741 )
2023-09-25 12:19:43 -07:00
oobabooga
b973b91d73
Automatically filter by loader ( closes #4072 )
2023-09-25 10:28:35 -07:00
oobabooga
08cf150c0c
Add a grammar editor to the UI ( #4061 )
2023-09-24 18:05:24 -03:00
oobabooga
b227e65d86
Add grammar to llama.cpp loader ( closes #4019 )
2023-09-24 07:10:45 -07:00
saltacc
f01b9aa71f
Add customizable ban tokens ( #3899 )
2023-09-15 18:27:27 -03:00
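Token banning of the kind added above is typically implemented by forcing the banned tokens' logits to negative infinity before sampling, so they can never be selected. A minimal sketch (names are illustrative, not the repo's API):

```python
import math

def ban_tokens(logits, banned_ids):
    """Set banned token logits to -inf so softmax assigns them
    zero probability and they can never be sampled."""
    out = list(logits)
    for tok in banned_ids:
        out[tok] = -math.inf
    return out
```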
oobabooga
1ce3c93600
Allow "Your name" field to be saved
2023-09-14 03:44:35 -07:00
oobabooga
9f199c7a4c
Use Noto Sans font
Copied from 6c8bd06308/public/webfonts/NotoSans
2023-09-13 13:48:05 -07:00
oobabooga
ed86878f02
Remove GGML support
2023-09-11 07:44:00 -07:00
oobabooga
cec8db52e5
Add max_tokens_second param ( #3533 )
2023-08-29 17:44:31 -03:00
oobabooga
52ab2a6b9e
Add rope_freq_base parameter for CodeLlama
2023-08-25 06:55:15 -07:00
oobabooga
d6934bc7bc
Implement CFG for ExLlama_HF ( #3666 )
2023-08-24 16:27:36 -03:00
oobabooga
7cba000421
Bump llama-cpp-python, +tensor_split by @shouyiwang, +mul_mat_q ( #3610 )
2023-08-18 12:03:34 -03:00
oobabooga
73d9befb65
Make "Show controls" customizable through settings.yaml
2023-08-16 07:04:18 -07:00
oobabooga
2a29208224
Add a "Show controls" button to chat UI ( #3590 )
2023-08-16 02:39:58 -03:00
oobabooga
ccfc02a28d
Add the --disable_exllama option for AutoGPTQ ( #3545 from clefever/disable-exllama)
2023-08-14 15:15:55 -03:00
oobabooga
619cb4e78b
Add "save defaults to settings.yaml" button ( #3574 )
2023-08-14 11:46:07 -03:00
oobabooga
4a05aa92cb
Add "send to" buttons for instruction templates
- Remove instruction templates from prompt dropdowns (default/notebook)
- Add 3 buttons to Parameters > Instruction template as a replacement
- Increase the number of lines of 'negative prompt' field to 3, and add a scrollbar
- When uploading a character, switch to the Character tab
- When uploading chat history, switch to the Chat tab
2023-08-13 18:35:45 -07:00
oobabooga
a1a9ec895d
Unify the 3 interface modes ( #3554 )
2023-08-13 01:12:15 -03:00
Chris Lefever
0230fa4e9c
Add the --disable_exllama option for AutoGPTQ
2023-08-12 02:26:58 -04:00
oobabooga
65aa11890f
Refactor everything ( #3481 )
2023-08-06 21:49:27 -03:00
oobabooga
0af10ab49b
Add Classifier Free Guidance (CFG) for Transformers/ExLlama ( #3325 )
2023-08-06 17:22:48 -03:00
missionfloyd
2336b75d92
Remove unnecessary chat.js ( #3445 )
2023-08-04 01:58:37 -03:00
oobabooga
0e8f9354b5
Add direct download for session/chat history JSONs
2023-08-02 19:43:39 -07:00
oobabooga
e931844fe2
Add auto_max_new_tokens parameter ( #3419 )
2023-08-02 14:52:20 -03:00
oobabooga
b17893a58f
Revert "Add tensor split support for llama.cpp ( #3171 )"
This reverts commit 031fe7225e.
2023-07-26 07:06:01 -07:00
oobabooga
c2e0d46616
Add credits
2023-07-25 15:49:04 -07:00
Shouyi
031fe7225e
Add tensor split support for llama.cpp ( #3171 )
2023-07-25 18:59:26 -03:00
oobabooga
a07d070b6c
Add llama-2-70b GGML support ( #3285 )
2023-07-24 16:37:03 -03:00
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp ( #3076 )
2023-07-12 11:05:13 -03:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent ( #3017 )
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
oobabooga
c21b73ff37
Minor change to ui.py
2023-07-07 09:09:14 -07:00
oobabooga
333075e726
Fix #3003
2023-07-04 11:38:35 -03:00
Panchovix
10c8c197bf
Add Support for Static NTK RoPE scaling for exllama/exllama_hf ( #2955 )
2023-07-04 01:13:16 -03:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support ( #2991 )
2023-07-04 00:03:30 -03:00
oobabooga
3443219cbc
Add repetition penalty range parameter to transformers ( #2916 )
2023-06-29 13:40:13 -03:00
oobabooga
c52290de50
ExLlama with long context ( #2875 )
2023-06-25 22:49:26 -03:00
oobabooga
3ae9af01aa
Add --no_use_cuda_fp16 param for AutoGPTQ
2023-06-23 12:22:56 -03:00
oobabooga
5f392122fd
Add gpu_split param to ExLlama
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga
7ef6a50e84
Reorganize model loading UI completely ( #2720 )
2023-06-16 19:00:37 -03:00
Tom Jobbins
646b0c889f
AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP ( #2648 )
2023-06-15 23:59:54 -03:00
oobabooga
ac122832f7
Make dropdown menus more similar to automatic1111
2023-06-11 14:20:16 -03:00
oobabooga
f276d88546
Use AutoGPTQ by default for GPTQ models
2023-06-05 15:41:48 -03:00
oobabooga
2f6631195a
Add desc_act checkbox to the UI
2023-06-02 01:45:46 -03:00
Luis Lopez
9e7204bef4
Add tail-free and top-a sampling ( #2357 )
2023-05-29 21:40:01 -03:00
oobabooga
1394f44e14
Add triton checkbox for AutoGPTQ
2023-05-29 15:32:45 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) ( #2367 )
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
DGdev91
cf088566f8
Make llama.cpp read prompt size and seed from settings ( #2299 )
2023-05-25 10:29:31 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter ( #2320 )
2023-05-25 01:14:13 -03:00
oobabooga
c0fd7f3257
Add mirostat parameters for llama.cpp ( #2287 )
2023-05-22 19:37:24 -03:00
oobabooga
8ac3636966
Add epsilon_cutoff/eta_cutoff parameters ( #2258 )
2023-05-21 15:11:57 -03:00
Matthew McAllister
ab6acddcc5
Add Save/Delete character buttons ( #1870 )
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-20 21:48:45 -03:00
oobabooga
5cd6dd4287
Fix no-mmap bug
2023-05-16 17:35:49 -03:00
Jakub Strnad
0227e738ed
Add settings UI for llama.cpp and fixed reloading of llama.cpp models ( #2087 )
2023-05-15 19:51:23 -03:00
oobabooga
3b886f9c9f
Add chat-instruct mode ( #2049 )
2023-05-14 10:43:55 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles ( #1917 )
2023-05-08 12:35:03 -03:00
oobabooga
56a5969658
Improve the separation between instruct/chat modes ( #1896 )
2023-05-07 23:47:02 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions ( #1817 )
2023-05-05 18:53:03 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode
2023-04-26 03:21:53 -03:00
oobabooga
b6af2e56a2
Add --character flag, add character to settings.json
2023-04-24 13:19:42 -03:00
oobabooga
b1ee674d75
Make interface state (mostly) persistent on page reload
2023-04-24 03:05:47 -03:00
oobabooga
5e023ae64d
Change dropdown menu highlight color
2023-04-21 02:47:18 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models ( #1322 )
2023-04-21 00:20:33 -03:00
oobabooga
649e4017a5
Style improvements
2023-04-19 00:36:28 -03:00
oobabooga
b937c9d8c2
Add skip_special_tokens checkbox for Dolly model ( #1218 )
2023-04-16 14:24:49 -03:00
oobabooga
8e31f2bad4
Automatically set wbits/groupsize/instruct based on model name ( #1167 )
2023-04-14 11:07:28 -03:00
oobabooga
80f4eabb2a
Fix send_pictures extension
2023-04-12 10:27:06 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability ( #862 )
2023-04-07 00:15:45 -03:00
oobabooga
d30a14087f
Further reorganize the UI
2023-03-15 13:24:54 -03:00
oobabooga
ec972b85d1
Move all css/js into separate files
2023-03-15 12:35:11 -03:00
oobabooga
1413931705
Add a header bar and redesign the interface ( #293 )
2023-03-15 12:01:32 -03:00
oobabooga
2bff646130
Stop chat from flashing dark when processing
2023-03-03 13:19:13 -03:00
oobabooga
4548227fb5
Downgrade gradio version (file uploads are broken in 3.19.1)
2023-02-25 22:59:02 -03:00
oobabooga
32f40f3b42
Bump gradio version to 3.19.1
2023-02-25 17:20:03 -03:00
oobabooga
3e6a8ccdce
Fix galactica latex css
2023-02-18 00:18:39 -03:00
oobabooga
14f49bbe9a
Fix galactica equations in dark mode
2023-02-17 23:57:09 -03:00
oobabooga
00ca17abc9
Minor change
2023-02-17 22:52:03 -03:00
oobabooga
2fd003c044
Fix gpt4chan styles that were broken by gradio 3.18.0
2023-02-17 22:47:41 -03:00
oobabooga
0dd41e4830
Reorganize the sliders some more
2023-02-17 16:33:27 -03:00
oobabooga
6b9ac2f88e
Reorganize the generation parameters
2023-02-17 16:18:01 -03:00
oobabooga
71c2764516
Fix the API docs in chat mode
2023-02-17 01:56:51 -03:00