Author | Commit | Message | Date
GralchemOz | f7b07c4705 | Fix the missing Chinese character bug (#2497) | 2023-06-02 13:45:41 -03:00
oobabooga | 28198bc15c | Change some headers | 2023-06-02 11:28:43 -03:00
oobabooga | 5177cdf634 | Change AutoGPTQ info | 2023-06-02 11:19:44 -03:00
oobabooga | 8e98633efd | Add a description for chat_prompt_size | 2023-06-02 11:13:22 -03:00
oobabooga | 5a8162a46d | Reorganize models tab | 2023-06-02 02:24:15 -03:00
oobabooga | d183c7d29e | Fix streaming japanese/chinese characters (credits to matasonic for the idea) | 2023-06-02 02:09:52 -03:00
jllllll | 5216117a63 | Fix MacOS incompatibility in requirements.txt (#2485) | 2023-06-02 01:46:16 -03:00
oobabooga | 2f6631195a | Add desc_act checkbox to the UI | 2023-06-02 01:45:46 -03:00
LaaZa | 9c066601f5 | Extend AutoGPTQ support for any GPTQ model (#1668) | 2023-06-02 01:33:55 -03:00
oobabooga | b4ad060c1f | Use cuda 11.7 instead of 11.8 | 2023-06-02 01:04:44 -03:00
oobabooga | d0aca83b53 | Add AutoGPTQ wheels to requirements.txt | 2023-06-02 00:47:11 -03:00
oobabooga | f344ccdddb | Add a template for bluemoon | 2023-06-01 14:42:12 -03:00
oobabooga | 522b01d051 | Grammar | 2023-06-01 14:05:29 -03:00
oobabooga | 5540335819 | Better way to detect if a model has been downloaded | 2023-06-01 14:01:19 -03:00
oobabooga | aa83fc21d4 | Update Low-VRAM-guide.md | 2023-06-01 12:14:27 -03:00
oobabooga | ee99a87330 | Update README.md | 2023-06-01 12:08:44 -03:00
oobabooga | a83f9aa65b | Update shared.py | 2023-06-01 12:08:39 -03:00
oobabooga | 146505a16b | Update README.md | 2023-06-01 12:04:58 -03:00
oobabooga | 756e3afbcc | Update llama.cpp-models.md | 2023-06-01 12:04:31 -03:00
oobabooga | 3347395944 | Update README.md | 2023-06-01 12:01:20 -03:00
oobabooga | 74bf2f05b1 | Update llama.cpp-models.md | 2023-06-01 11:58:33 -03:00
oobabooga | 90dc8a91ae | Update llama.cpp-models.md | 2023-06-01 11:57:57 -03:00
oobabooga | aba56de41b | Update README.md | 2023-06-01 11:46:28 -03:00
oobabooga | c9ac45d4cf | Update Using-LoRAs.md | 2023-06-01 11:34:04 -03:00
oobabooga | 9aad6d07de | Update Using-LoRAs.md | 2023-06-01 11:32:41 -03:00
oobabooga | df18ae7d6c | Update README.md | 2023-06-01 11:27:33 -03:00
oobabooga | 248ef32358 | Print a big message for CPU users | 2023-06-01 01:40:24 -03:00
oobabooga | 290a3374e4 | Don't download a model during installation (and some other updates/minor improvements) | 2023-06-01 01:30:21 -03:00
oobabooga | e52b43c934 | Update GPTQ-models-(4-bit-mode).md | 2023-06-01 01:17:13 -03:00
Morgan Schweers | 1aed2b9e52 | Make it possible to download protected HF models from the command line. (#2408) | 2023-06-01 00:11:21 -03:00
oobabooga | 419c34eca4 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:49:00 -03:00
oobabooga | 486ddd62df | Add tfs and top_a to the API examples | 2023-05-31 23:44:38 -03:00
oobabooga | b6c407f51d | Don't stream at more than 24 fps (a performance optimization) | 2023-05-31 23:41:42 -03:00
oobabooga | a160230893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:38:15 -03:00
oobabooga | 2cdf525d3b | Bump llama-cpp-python version | 2023-05-31 23:29:02 -03:00
oobabooga | 2e53caa806 | Create LICENSE | 2023-05-31 16:28:36 -03:00
Sam | dea1bf3d04 | Parse g++ version instead of using string matching (#72) | 2023-05-31 14:44:36 -03:00
gavin660 | 97bc7e3fb6 | Adds functionality for user to set flags via environment variable (#59) | 2023-05-31 14:43:22 -03:00
Sam | 5405635305 | Install pre-compiled wheels for Linux (#74) | 2023-05-31 14:41:54 -03:00
jllllll | be98e74337 | Install older bitsandbytes on older gpus + fix llama-cpp-python issue (#75) | 2023-05-31 14:41:03 -03:00
jllllll | 412e7a6a96 | Update README.md to include missing flags (#2449) | 2023-05-31 11:07:56 -03:00
AlpinDale | 6627f7feb9 | Add notice about downgrading gcc and g++ (#2446) | 2023-05-30 22:28:53 -03:00
Atinoda | bfbd13ae89 | Update docker repo link (#2340) | 2023-05-30 22:14:49 -03:00
matatonic | a6d3f010a5 | extensions/openai: include all available models in Model.list (#2368) (Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>) | 2023-05-30 22:13:37 -03:00
matatonic | e5b756ecfe | Fixes #2331, IndexError: string index out of range (#2383) | 2023-05-30 22:07:40 -03:00
Juan M Uys | b984a44f47 | fix error when downloading a model for the first time (#2404) | 2023-05-30 22:07:12 -03:00
Yiximail | 4715123f55 | Add a /api/v1/stop-stream API that allows the user to interrupt the generation (#2392) | 2023-05-30 22:03:40 -03:00
matatonic | ebcadc0042 | extensions/openai: cross_origin + chunked_response (updated fix) (#2423) | 2023-05-30 21:54:24 -03:00
matatonic | df50f077db | fixup missing tfs top_a params, defaults reorg (#2443) | 2023-05-30 21:52:33 -03:00
Forkoz | 9ab90d8b60 | Fix warning for qlora (#2438) | 2023-05-30 11:09:18 -03:00