Commit Graph

2204 Commits

Author SHA1 Message Date
oobabooga
60bfd0b722 Merge pull request #2535 from oobabooga/dev (Dev branch merge) 2023-06-05 17:07:54 -03:00
oobabooga
eda224c92d Update README 2023-06-05 17:04:09 -03:00
oobabooga
bef94b9ebb Update README 2023-06-05 17:01:13 -03:00
oobabooga
99d701994a Update GPTQ-models-(4-bit-mode).md 2023-06-05 15:55:00 -03:00
oobabooga
f276d88546 Use AutoGPTQ by default for GPTQ models 2023-06-05 15:41:48 -03:00
oobabooga
632571a009 Update README 2023-06-05 15:16:06 -03:00
oobabooga
6a75bda419 Assign some 4096 seq lengths 2023-06-05 12:07:52 -03:00
oobabooga
9b0e95abeb Fix "regenerate" when "Start reply with" is set 2023-06-05 11:56:03 -03:00
oobabooga
e61316ce0b Detect airoboros and Nous-Hermes 2023-06-05 11:52:13 -03:00
oobabooga
19f78684e6 Add "Start reply with" feature to chat mode 2023-06-02 13:58:08 -03:00
GralchemOz
f7b07c4705 Fix the missing Chinese character bug (#2497) 2023-06-02 13:45:41 -03:00
oobabooga
28198bc15c Change some headers 2023-06-02 11:28:43 -03:00
oobabooga
5177cdf634 Change AutoGPTQ info 2023-06-02 11:19:44 -03:00
oobabooga
8e98633efd Add a description for chat_prompt_size 2023-06-02 11:13:22 -03:00
oobabooga
5a8162a46d Reorganize models tab 2023-06-02 02:24:15 -03:00
oobabooga
d183c7d29e Fix streaming japanese/chinese characters (credits to matasonic for the idea) 2023-06-02 02:09:52 -03:00
jllllll
5216117a63 Fix MacOS incompatibility in requirements.txt (#2485) 2023-06-02 01:46:16 -03:00
oobabooga
2f6631195a Add desc_act checkbox to the UI 2023-06-02 01:45:46 -03:00
LaaZa
9c066601f5 Extend AutoGPTQ support for any GPTQ model (#1668) 2023-06-02 01:33:55 -03:00
oobabooga
b4ad060c1f Use cuda 11.7 instead of 11.8 2023-06-02 01:04:44 -03:00
oobabooga
d0aca83b53 Add AutoGPTQ wheels to requirements.txt 2023-06-02 00:47:11 -03:00
oobabooga
f344ccdddb Add a template for bluemoon 2023-06-01 14:42:12 -03:00
oobabooga
aa83fc21d4 Update Low-VRAM-guide.md 2023-06-01 12:14:27 -03:00
oobabooga
ee99a87330 Update README.md 2023-06-01 12:08:44 -03:00
oobabooga
a83f9aa65b Update shared.py 2023-06-01 12:08:39 -03:00
oobabooga
146505a16b Update README.md 2023-06-01 12:04:58 -03:00
oobabooga
756e3afbcc Update llama.cpp-models.md 2023-06-01 12:04:31 -03:00
oobabooga
3347395944 Update README.md 2023-06-01 12:01:20 -03:00
oobabooga
74bf2f05b1 Update llama.cpp-models.md 2023-06-01 11:58:33 -03:00
oobabooga
90dc8a91ae Update llama.cpp-models.md 2023-06-01 11:57:57 -03:00
oobabooga
aba56de41b Update README.md 2023-06-01 11:46:28 -03:00
oobabooga
c9ac45d4cf Update Using-LoRAs.md 2023-06-01 11:34:04 -03:00
oobabooga
9aad6d07de Update Using-LoRAs.md 2023-06-01 11:32:41 -03:00
oobabooga
df18ae7d6c Update README.md 2023-06-01 11:27:33 -03:00
oobabooga
e52b43c934 Update GPTQ-models-(4-bit-mode).md 2023-06-01 01:17:13 -03:00
Morgan Schweers
1aed2b9e52 Make it possible to download protected HF models from the command line. (#2408) 2023-06-01 00:11:21 -03:00
oobabooga
419c34eca4 Update GPTQ-models-(4-bit-mode).md 2023-05-31 23:49:00 -03:00
oobabooga
486ddd62df Add tfs and top_a to the API examples 2023-05-31 23:44:38 -03:00
oobabooga
b6c407f51d Don't stream at more than 24 fps (a performance optimization) 2023-05-31 23:41:42 -03:00
oobabooga
a160230893 Update GPTQ-models-(4-bit-mode).md 2023-05-31 23:38:15 -03:00
oobabooga
2cdf525d3b Bump llama-cpp-python version 2023-05-31 23:29:02 -03:00
jllllll
412e7a6a96 Update README.md to include missing flags (#2449) 2023-05-31 11:07:56 -03:00
AlpinDale
6627f7feb9 Add notice about downgrading gcc and g++ (#2446) 2023-05-30 22:28:53 -03:00
Atinoda
bfbd13ae89 Update docker repo link (#2340) 2023-05-30 22:14:49 -03:00
matatonic
a6d3f010a5 extensions/openai: include all available models in Model.list (#2368) (co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>) 2023-05-30 22:13:37 -03:00
matatonic
e5b756ecfe Fixes #2331, IndexError: string index out of range (#2383) 2023-05-30 22:07:40 -03:00
Juan M Uys
b984a44f47 fix error when downloading a model for the first time (#2404) 2023-05-30 22:07:12 -03:00
Yiximail
4715123f55 Add a /api/v1/stop-stream API that allows the user to interrupt the generation (#2392) 2023-05-30 22:03:40 -03:00
matatonic
ebcadc0042 extensions/openai: cross_origin + chunked_response (updated fix) (#2423) 2023-05-30 21:54:24 -03:00
matatonic
df50f077db fixup missing tfs top_a params, defaults reorg (#2443) 2023-05-30 21:52:33 -03:00
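Commit 4715123f55 above adds a `/api/v1/stop-stream` endpoint, so interrupting a generation is a plain HTTP POST. A minimal sketch using only the standard library; the endpoint path comes from the commit message, while the default host, port, and the helper name are assumptions:

```python
from urllib.request import Request, urlopen

def build_stop_request(host="localhost", port=5000):
    """Build the POST that interrupts an in-progress generation.

    The endpoint path is taken from commit 4715123f55; the host and
    port are assumptions and depend on how the API server was launched.
    """
    url = f"http://{host}:{port}/api/v1/stop-stream"
    return Request(url, data=b"", method="POST")

# To actually send it while a generation is streaming:
#   urlopen(build_stop_request())
```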