Author | Commit | Message | Date
Shouyi | 031fe7225e | Add tensor split support for llama.cpp (#3171) | 2023-07-25 18:59:26 -03:00
oobabooga | a07d070b6c | Add llama-2-70b GGML support (#3285) | 2023-07-24 16:37:03 -03:00
Gabriel Pena | eedb3bf023 | Add low VRAM mode for llama.cpp (#3076) | 2023-07-12 11:05:13 -03:00
Ricardo Pinto | 3e9da5a27c | Changed FormComponent to IOComponent (#3017) | 2023-07-11 18:52:16 -03:00
  Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
oobabooga | c21b73ff37 | Minor change to ui.py | 2023-07-07 09:09:14 -07:00
oobabooga | 333075e726 | Fix #3003 | 2023-07-04 11:38:35 -03:00
Panchovix | 10c8c197bf | Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955) | 2023-07-04 01:13:16 -03:00
oobabooga | 4b1804a438 | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00
oobabooga | 3443219cbc | Add repetition penalty range parameter to transformers (#2916) | 2023-06-29 13:40:13 -03:00
oobabooga | c52290de50 | ExLlama with long context (#2875) | 2023-06-25 22:49:26 -03:00
oobabooga | 3ae9af01aa | Add --no_use_cuda_fp16 param for AutoGPTQ | 2023-06-23 12:22:56 -03:00
oobabooga | 5f392122fd | Add gpu_split param to ExLlama | 2023-06-16 20:49:36 -03:00
  Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00
Tom Jobbins | 646b0c889f | AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648) | 2023-06-15 23:59:54 -03:00
oobabooga | ac122832f7 | Make dropdown menus more similar to automatic1111 | 2023-06-11 14:20:16 -03:00
oobabooga | f276d88546 | Use AutoGPTQ by default for GPTQ models | 2023-06-05 15:41:48 -03:00
oobabooga | 2f6631195a | Add desc_act checkbox to the UI | 2023-06-02 01:45:46 -03:00
Luis Lopez | 9e7204bef4 | Add tail-free and top-a sampling (#2357) | 2023-05-29 21:40:01 -03:00
oobabooga | 1394f44e14 | Add triton checkbox for AutoGPTQ | 2023-05-29 15:32:45 -03:00
Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367) | 2023-05-29 10:20:18 -03:00
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
DGdev91 | cf088566f8 | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00
oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00
oobabooga | c0fd7f3257 | Add mirostat parameters for llama.cpp (#2287) | 2023-05-22 19:37:24 -03:00
oobabooga | 8ac3636966 | Add epsilon_cutoff/eta_cutoff parameters (#2258) | 2023-05-21 15:11:57 -03:00
Matthew McAllister | ab6acddcc5 | Add Save/Delete character buttons (#1870) | 2023-05-20 21:48:45 -03:00
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 5cd6dd4287 | Fix no-mmap bug | 2023-05-16 17:35:49 -03:00
Jakub Strnad | 0227e738ed | Add settings UI for llama.cpp and fix reloading of llama.cpp models (#2087) | 2023-05-15 19:51:23 -03:00
oobabooga | 3b886f9c9f | Add chat-instruct mode (#2049) | 2023-05-14 10:43:55 -03:00
oobabooga | b5260b24f1 | Add support for custom chat styles (#1917) | 2023-05-08 12:35:03 -03:00
oobabooga | 56a5969658 | Improve the separation between instruct/chat modes (#1896) | 2023-05-07 23:47:02 -03:00
oobabooga | 8aafb1f796 | Refactor text_generation.py, add support for custom generation functions (#1817) | 2023-05-05 18:53:03 -03:00
oobabooga | 95d04d6a8d | Better warning messages | 2023-05-03 21:43:17 -03:00
oobabooga | a777c058af | Precise prompts for instruct mode | 2023-04-26 03:21:53 -03:00
oobabooga | b6af2e56a2 | Add --character flag, add character to settings.json | 2023-04-24 13:19:42 -03:00
oobabooga | b1ee674d75 | Make interface state (mostly) persistent on page reload | 2023-04-24 03:05:47 -03:00
oobabooga | 5e023ae64d | Change dropdown menu highlight color | 2023-04-21 02:47:18 -03:00
oobabooga | c4f4f41389 | Add an "Evaluate" tab to calculate the perplexities of models (#1322) | 2023-04-21 00:20:33 -03:00
oobabooga | 649e4017a5 | Style improvements | 2023-04-19 00:36:28 -03:00
oobabooga | b937c9d8c2 | Add skip_special_tokens checkbox for Dolly model (#1218) | 2023-04-16 14:24:49 -03:00
oobabooga | 8e31f2bad4 | Automatically set wbits/groupsize/instruct based on model name (#1167) | 2023-04-14 11:07:28 -03:00
oobabooga | 80f4eabb2a | Fix send_pictures extension | 2023-04-12 10:27:06 -03:00
oobabooga | ea6e77df72 | Make the code more like PEP8 for readability (#862) | 2023-04-07 00:15:45 -03:00
oobabooga | d30a14087f | Further reorganize the UI | 2023-03-15 13:24:54 -03:00
oobabooga | ec972b85d1 | Move all css/js into separate files | 2023-03-15 12:35:11 -03:00
oobabooga | 1413931705 | Add a header bar and redesign the interface (#293) | 2023-03-15 12:01:32 -03:00
oobabooga | 2bff646130 | Stop chat from flashing dark when processing | 2023-03-03 13:19:13 -03:00
oobabooga | 4548227fb5 | Downgrade gradio version (file uploads are broken in 3.19.1) | 2023-02-25 22:59:02 -03:00
oobabooga | 32f40f3b42 | Bump gradio version to 3.19.1 | 2023-02-25 17:20:03 -03:00
oobabooga | 3e6a8ccdce | Fix Galactica LaTeX CSS | 2023-02-18 00:18:39 -03:00
oobabooga | 14f49bbe9a | Fix Galactica equations in dark mode | 2023-02-17 23:57:09 -03:00
oobabooga | 00ca17abc9 | Minor change | 2023-02-17 22:52:03 -03:00
oobabooga | 2fd003c044 | Fix gpt4chan styles that were broken by gradio 3.18.0 | 2023-02-17 22:47:41 -03:00
oobabooga | 0dd41e4830 | Reorganize the sliders some more | 2023-02-17 16:33:27 -03:00
oobabooga | 6b9ac2f88e | Reorganize the generation parameters | 2023-02-17 16:18:01 -03:00
oobabooga | 71c2764516 | Fix the API docs in chat mode | 2023-02-17 01:56:51 -03:00
oobabooga | aeddf902ec | Make the refresh button prettier | 2023-02-16 21:55:20 -03:00
oobabooga | 434d4b128c | Add refresh buttons for the model/preset/character menus | 2023-01-22 00:02:46 -03:00