oobabooga | 6e16af34fd | Save uploaded characters as yaml. Also allow yaml characters to be uploaded directly | 2023-07-30 11:25:38 -07:00
oobabooga | c25602eb65 | Merge branch 'dev' | 2023-07-30 08:47:50 -07:00
oobabooga | ca4188aabc | Update the example extension | 2023-07-29 18:57:22 -07:00
jllllll | c4e14a757c | Bump exllama module to 0.0.9 (#3338) | 2023-07-29 22:16:23 -03:00
jllllll | ecd92d6a4e | Remove unused variable from ROCm GPTQ install (#107) | 2023-07-26 22:16:36 -03:00
jllllll | 1e3c950c7d | Add AMD GPU support for Linux (#98) | 2023-07-26 17:33:02 -03:00
GuizzyQC | 4b37a2b397 | sd_api_pictures: Widen sliders for image size minimum and maximum (#3326) | 2023-07-26 13:49:46 -03:00
oobabooga | d6314fd539 | Change a comment | 2023-07-26 09:38:45 -07:00
oobabooga | f24f87cfb0 | Change a comment | 2023-07-26 09:38:13 -07:00
oobabooga | de5de045e0 | Set rms_norm_eps to 5e-6 for every llama-2 ggml model, not just 70b | 2023-07-26 08:26:56 -07:00
oobabooga | 193c6be39c | Add missing \n to llama-v2 template context | 2023-07-26 08:26:56 -07:00
oobabooga | ec68d5211e | Set rms_norm_eps to 5e-6 for every llama-2 ggml model, not just 70b | 2023-07-26 08:23:24 -07:00
oobabooga | a9e10753df | Add missing \n to llama-v2 template context | 2023-07-26 07:59:49 -07:00
oobabooga | b780d520d2 | Add a link to the gradio docs | 2023-07-26 07:49:42 -07:00
oobabooga | b553c33dd0 | Add a link to the gradio docs | 2023-07-26 07:49:22 -07:00
oobabooga | d94ba6e68b | Define visible_text before applying chat_input extensions | 2023-07-26 07:30:25 -07:00
oobabooga | b31321c779 | Define visible_text before applying chat_input extensions | 2023-07-26 07:27:14 -07:00
oobabooga | b17893a58f | Revert "Add tensor split support for llama.cpp (#3171)" (reverts commit 031fe7225e) | 2023-07-26 07:06:01 -07:00
oobabooga | 517d40cffe | Update Extensions.md | 2023-07-26 07:01:35 -07:00
oobabooga | b11f63cb18 | update extensions docs | 2023-07-26 07:00:33 -07:00
jllllll | 52e3b91f5e | Fix broken gxx_linux-64 package. (#106) | 2023-07-26 01:55:08 -03:00
oobabooga | 4a24849715 | Revert changes | 2023-07-25 21:09:32 -07:00
oobabooga | 69f8b35bc9 | Revert changes to README | 2023-07-25 20:51:19 -07:00
oobabooga | ed80a2e7db | Reorder llama.cpp params | 2023-07-25 20:45:20 -07:00
oobabooga | 0e8782df03 | Set instruction template when switching from default/notebook to chat | 2023-07-25 20:37:01 -07:00
oobabooga | 28779cd959 | Use dark theme by default | 2023-07-25 20:11:57 -07:00
oobabooga | c2e0d46616 | Add credits | 2023-07-25 15:49:04 -07:00
oobabooga | 1b89c304ad | Update README | 2023-07-25 15:46:12 -07:00
oobabooga | d3abe7caa8 | Update llama.cpp.md | 2023-07-25 15:33:16 -07:00
oobabooga | 863d2f118f | Update llama.cpp.md | 2023-07-25 15:31:05 -07:00
oobabooga | 77d2e9f060 | Remove flexgen 2 | 2023-07-25 15:18:25 -07:00
oobabooga | 75c2dd38cf | Remove flexgen support | 2023-07-25 15:15:29 -07:00
oobabooga | 5134d5b1c6 | Update README | 2023-07-25 15:13:07 -07:00
Foxtr0t1337 | 85b3a26e25 | Ignore values which are not string in training.py (#3287) | 2023-07-25 19:00:25 -03:00
Shouyi | 031fe7225e | Add tensor split support for llama.cpp (#3171) | 2023-07-25 18:59:26 -03:00
Eve | f653546484 | README updates and improvements (#3198) | 2023-07-25 18:58:13 -03:00
Ikko Eltociear Ashimine | b09e4f10fd | Fix typo in README.md (#3286): tranformers -> transformers | 2023-07-25 18:56:25 -03:00
oobabooga | 7bc408b472 | Change rms_norm_eps to 5e-6 for llama-2-70b ggml (based on https://github.com/ggerganov/llama.cpp/pull/2384) | 2023-07-25 14:54:57 -07:00
oobabooga | ef8637e32d | Add extension example, replace input_hijack with chat_input_modifier (#3307) | 2023-07-25 18:49:56 -03:00
oobabooga | 08c622df2e | Autodetect rms_norm_eps and n_gqa for llama-2-70b | 2023-07-24 15:27:34 -07:00
oobabooga | a07d070b6c | Add llama-2-70b GGML support (#3285) | 2023-07-24 16:37:03 -03:00
oobabooga | 6f4830b4d3 | Bump peft commit | 2023-07-24 09:49:57 -07:00
matatonic | 90a4ab631c | extensions/openai: Fixes for: embeddings, tokens, better errors. +Docs update, +Images, +logit_bias/logprobs, +more. (#3122) | 2023-07-24 11:28:12 -03:00
jllllll | 1141987a0d | Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading (#3225) | 2023-07-24 11:25:36 -03:00
iongpt | 74fc5dd873 | Add user-agent to download-model.py requests (#3243) | 2023-07-24 11:19:13 -03:00
Ikko Eltociear Ashimine | b2d5433409 | Fix typo in deepspeed_parameters.py (#3222): configration -> configuration | 2023-07-24 11:17:28 -03:00
jllllll | eb105b0495 | Bump llama-cpp-python to 0.1.74 (#3257) | 2023-07-24 11:15:42 -03:00
jllllll | 152cf1e8ef | Bump bitsandbytes to 0.41.0 (#3258) (e229fbce66...a06a0f6a08) | 2023-07-24 11:06:18 -03:00
jllllll | 8d31d20c9a | Bump exllama module to 0.0.8 (#3256) (39b3541cdd...3f83ebb378) | 2023-07-24 11:05:54 -03:00
oobabooga | cc2ed46d44 | Make chat the default again | 2023-07-20 18:55:09 -03:00