oobabooga | 6d354bb50b | Allow the webui to do multiple tasks simultaneously | 2023-08-07 23:57:25 -03:00
oobabooga | bbe4a29a25 | Add back dark theme code | 2023-08-07 23:03:09 -03:00
oobabooga | 65aa11890f | Refactor everything (#3481) | 2023-08-06 21:49:27 -03:00
oobabooga | 0af10ab49b | Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) | 2023-08-06 17:22:48 -03:00
oobabooga | 8df3cdfd51 | Add SSL certificate support (#3453) | 2023-08-04 13:57:31 -03:00
missionfloyd | 2336b75d92 | Remove unnecessary chat.js (#3445) | 2023-08-04 01:58:37 -03:00
oobabooga | 1839dff763 | Use Esc to stop the generation | 2023-08-03 08:13:17 -07:00
oobabooga | 3e70bce576 | Properly format exceptions in the UI | 2023-08-03 06:57:21 -07:00
oobabooga | 3390196a14 | Add some javascript alerts for confirmations | 2023-08-02 22:15:20 -07:00
oobabooga | 6bf9e855f8 | Minor change | 2023-08-02 21:41:38 -07:00
oobabooga | 32c564509e | Fix loading session in chat mode | 2023-08-02 21:13:16 -07:00
oobabooga | 4b6c1d3f08 | CSS change | 2023-08-02 20:20:23 -07:00
oobabooga | 0e8f9354b5 | Add direct download for session/chat history JSONs | 2023-08-02 19:43:39 -07:00
oobabooga | e931844fe2 | Add auto_max_new_tokens parameter (#3419) | 2023-08-02 14:52:20 -03:00
oobabooga | 0d9932815c | Improve TheEncrypted777 on mobile devices | 2023-08-02 09:15:54 -07:00
Pete | 6afc1a193b | Add a scrollbar to notebook/default, improve chat scrollbar style (#3403) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2023-08-02 12:02:36 -03:00
oobabooga | b53ed70a70 | Make llamacpp_HF 6x faster | 2023-08-01 13:18:20 -07:00
oobabooga | 959feba602 | When saving model settings, only save the settings for the current loader | 2023-08-01 06:10:09 -07:00
oobabooga | ebb4f22028 | Change a comment | 2023-07-31 20:06:10 -07:00
oobabooga | 8e2217a029 | Minor changes to the Parameters tab | 2023-07-31 19:55:11 -07:00
oobabooga | b2207f123b | Update docs | 2023-07-31 19:20:48 -07:00
oobabooga | 84297d05c4 | Add a "Filter by loader" menu to the Parameters tab | 2023-07-31 19:09:02 -07:00
oobabooga | e6be25ea11 | Fix a regression | 2023-07-30 18:12:30 -07:00
oobabooga | 5ca37765d3 | Only replace {{user}} and {{char}} at generation time | 2023-07-30 11:42:30 -07:00
oobabooga | 6e16af34fd | Save uploaded characters as yaml; also allow yaml characters to be uploaded directly | 2023-07-30 11:25:38 -07:00
oobabooga | ed80a2e7db | Reorder llama.cpp params | 2023-07-25 20:45:20 -07:00
oobabooga | 0e8782df03 | Set instruction template when switching from default/notebook to chat | 2023-07-25 20:37:01 -07:00
oobabooga | 1b89c304ad | Update README | 2023-07-25 15:46:12 -07:00
oobabooga | 75c2dd38cf | Remove flexgen support | 2023-07-25 15:15:29 -07:00
Shouyi | 031fe7225e | Add tensor split support for llama.cpp (#3171) | 2023-07-25 18:59:26 -03:00
oobabooga | 7bc408b472 | Change rms_norm_eps to 5e-6 for llama-2-70b ggml (based on https://github.com/ggerganov/llama.cpp/pull/2384) | 2023-07-25 14:54:57 -07:00
oobabooga | 08c622df2e | Autodetect rms_norm_eps and n_gqa for llama-2-70b | 2023-07-24 15:27:34 -07:00
oobabooga | a07d070b6c | Add llama-2-70b GGML support (#3285) | 2023-07-24 16:37:03 -03:00
jllllll | d7a14174a2 | Remove auto-loading when only one model is available (#3187) | 2023-07-18 11:39:08 -03:00
oobabooga | f83fdb9270 | Don't reset LoRA menu when loading a model | 2023-07-17 12:50:25 -07:00
oobabooga | 2de0cedce3 | Fix reload screen color | 2023-07-15 22:39:39 -07:00
oobabooga | 27a84b4e04 | Make AutoGPTQ the default again, purely for compatibility with more models (you should still use ExLlama_HF for LLaMA models) | 2023-07-15 22:29:23 -07:00
oobabooga | 5e3f7e00a9 | Create llamacpp_HF loader (#3062) | 2023-07-16 02:21:13 -03:00
Panchovix | 7c4d4fc7d3 | Increase alpha value limit for NTK RoPE scaling for exllama/exllama_HF (#3149) | 2023-07-16 01:56:04 -03:00
oobabooga | b284f2407d | Make ExLlama_HF the new default for GPTQ | 2023-07-14 14:03:56 -07:00
oobabooga | 22341e948d | Merge branch 'main' into dev | 2023-07-12 14:19:49 -07:00
oobabooga | 0e6295886d | Fix lora download folder | 2023-07-12 14:19:33 -07:00
oobabooga | eb823fce96 | Fix typo | 2023-07-12 13:55:19 -07:00
oobabooga | d0a626f32f | Change reload screen color | 2023-07-12 13:54:43 -07:00
oobabooga | c592a9b740 | Fix #3117 | 2023-07-12 13:33:44 -07:00
Gabriel Pena | eedb3bf023 | Add low vram mode on llama cpp (#3076) | 2023-07-12 11:05:13 -03:00
Axiom Wolf | d986c17c52 | Chat history download creates more detailed file names (#3051) | 2023-07-12 00:10:36 -03:00
Salvador E. Tropea | 324e45b848 | [Fixed] wbits and groupsize values from model not shown (#2977) | 2023-07-11 23:27:38 -03:00
oobabooga | bfafd07f44 | Change a message | 2023-07-11 18:29:20 -07:00
micsthepick | 3708de2b1f | Respect model dir for downloads (#3077) (#3079) | 2023-07-11 18:55:46 -03:00