Commit Graph

2470 Commits

Author | SHA1 | Message | Date
oobabooga | e52b43c934 | Update GPTQ-models-(4-bit-mode).md | 2023-06-01 01:17:13 -03:00
Morgan Schweers | 1aed2b9e52 | Make it possible to download protected HF models from the command line (#2408) | 2023-06-01 00:11:21 -03:00
oobabooga | 419c34eca4 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:49:00 -03:00
oobabooga | 486ddd62df | Add tfs and top_a to the API examples | 2023-05-31 23:44:38 -03:00
oobabooga | b6c407f51d | Don't stream at more than 24 fps (performance optimization) | 2023-05-31 23:41:42 -03:00
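Commit b6c407f51d above caps streaming updates at 24 fps. As a rough illustration of the idea only (not the project's actual implementation, which may skip renders rather than sleep), a generator can throttle how often it yields updates:

```python
import time

def stream_with_cap(chunks, max_fps=24):
    """Yield items from `chunks` no faster than `max_fps` updates per second.

    Hypothetical helper sketching the throttling idea; the function name and
    approach are illustrative, not taken from the repository.
    """
    min_interval = 1.0 / max_fps
    last = 0.0
    for chunk in chunks:
        wait = min_interval - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)  # delay until the next frame is due
        last = time.monotonic()
        yield chunk
```

All items still arrive in order; only the pacing between them changes.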
oobabooga | a160230893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:38:15 -03:00
oobabooga | 2cdf525d3b | Bump llama-cpp-python version | 2023-05-31 23:29:02 -03:00
jllllll | 412e7a6a96 | Update README.md to include missing flags (#2449) | 2023-05-31 11:07:56 -03:00
AlpinDale | 6627f7feb9 | Add notice about downgrading gcc and g++ (#2446) | 2023-05-30 22:28:53 -03:00
Atinoda | bfbd13ae89 | Update docker repo link (#2340) | 2023-05-30 22:14:49 -03:00
matatonic | a6d3f010a5 | extensions/openai: include all available models in Model.list (#2368) (co-authored by Matthew Ashton) | 2023-05-30 22:13:37 -03:00
matatonic | e5b756ecfe | Fixes #2331, IndexError: string index out of range (#2383) | 2023-05-30 22:07:40 -03:00
Juan M Uys | b984a44f47 | Fix error when downloading a model for the first time (#2404) | 2023-05-30 22:07:12 -03:00
Yiximail | 4715123f55 | Add a /api/v1/stop-stream API that allows the user to interrupt the generation (#2392) | 2023-05-30 22:03:40 -03:00
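Commit 4715123f55 adds a /api/v1/stop-stream endpoint for interrupting an in-progress generation. A minimal client sketch using only the standard library; the endpoint path comes from the commit message, while the default host and port here are assumptions:

```python
from urllib import request

def stop_stream_url(host="localhost", port=5000):
    # Endpoint path taken from commit 4715123f55; the host and port
    # defaults are assumptions, not documented values.
    return f"http://{host}:{port}/api/v1/stop-stream"

def stop_stream(host="localhost", port=5000):
    # POST with an empty body to ask the server to interrupt generation.
    req = request.Request(stop_stream_url(host, port), data=b"", method="POST")
    with request.urlopen(req) as resp:
        return resp.status
```

Calling `stop_stream()` only makes sense while the API extension is running and a generation is streaming.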
matatonic | ebcadc0042 | extensions/openai: cross_origin + chunked_response (updated fix) (#2423) | 2023-05-30 21:54:24 -03:00
matatonic | df50f077db | Fix missing tfs and top_a params; reorganize defaults (#2443) | 2023-05-30 21:52:33 -03:00
Forkoz | 9ab90d8b60 | Fix warning for QLoRA (#2438) | 2023-05-30 11:09:18 -03:00
oobabooga | 0db4e191bd | Improve chat buttons on mobile devices | 2023-05-30 00:30:15 -03:00
oobabooga | 3209440b7c | Rearrange chat buttons | 2023-05-30 00:17:31 -03:00
oobabooga | 3578dd3611 | Change a warning message | 2023-05-29 22:40:54 -03:00
oobabooga | 3a6e194bc7 | Change a warning message | 2023-05-29 22:39:23 -03:00
oobabooga | e763ace593 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:35:49 -03:00
oobabooga | 86ef695d37 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:20:55 -03:00
oobabooga | 8e0a997c60 | Add new parameters to API extension | 2023-05-29 22:03:08 -03:00
Luis Lopez | 9e7204bef4 | Add tail-free and top-a sampling (#2357) | 2023-05-29 21:40:01 -03:00
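Commit 9e7204bef4 introduces tail-free sampling (tfs) and top-a sampling. The commonly cited top-a rule discards tokens whose probability falls below `top_a * p_max**2`, where `p_max` is the highest probability in the distribution. A self-contained sketch of that rule, illustrative rather than the repository's code:

```python
def top_a_filter(probs, top_a=0.2):
    """Zero out tokens with probability below top_a * p_max**2.

    Hypothetical helper: the threshold formula is the commonly cited
    top-a definition, not necessarily this project's exact implementation.
    """
    p_max = max(probs)
    threshold = top_a * p_max ** 2
    return [p if p >= threshold else 0.0 for p in probs]
```

With `probs = [0.5, 0.3, 0.1, 0.1]` and `top_a = 0.5`, the threshold is `0.5 * 0.5**2 = 0.125`, so the two 0.1 tokens are removed while 0.5 and 0.3 survive. Because the threshold scales with the square of the top probability, the filter is aggressive when the model is confident and permissive when the distribution is flat.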
oobabooga | b4662bf4af | Download gptq_model*.py using download-model.py | 2023-05-29 16:12:54 -03:00
oobabooga | 540a161a08 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:45:40 -03:00
oobabooga | b8d2f6d876 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-05-29 15:33:05 -03:00
oobabooga | 1394f44e14 | Add triton checkbox for AutoGPTQ | 2023-05-29 15:32:45 -03:00
oobabooga | 166a0d9893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:07:59 -03:00
oobabooga | 962d05ca7e | Update README.md | 2023-05-29 14:56:55 -03:00
oobabooga | 4a190a98fd | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 14:56:05 -03:00
matatonic | 2b7ba9586f | Fixes #2326, KeyError: 'assistant' (#2382) | 2023-05-29 14:19:57 -03:00
oobabooga | 6de727c524 | Improve Eta Sampling preset | 2023-05-29 13:56:15 -03:00
oobabooga | f34d20922c | Minor fix | 2023-05-29 13:31:17 -03:00
oobabooga | 983eef1e29 | Attempt at evaluating falcon perplexity (failed) | 2023-05-29 13:28:25 -03:00
Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367) (co-authored by oobabooga) | 2023-05-29 10:20:18 -03:00
Forkoz | 60ae80cf28 | Fix hang in tokenizer for AutoGPTQ llama models (#2399) | 2023-05-28 23:10:10 -03:00
oobabooga | 2f811b1bdf | Change a warning message | 2023-05-28 22:48:20 -03:00
oobabooga | 9ee1e37121 | Fix return message when no model is loaded | 2023-05-28 22:46:32 -03:00
oobabooga | f27135bdd3 | Add Eta Sampling preset (also removes some presets that I do not consider relevant) | 2023-05-28 22:44:35 -03:00
oobabooga | 00ebea0b2a | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00
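Commit 00ebea0b2a moves presets and settings to YAML. A hypothetical preset file might look like the following; the key names are common sampling parameters chosen for illustration, not the project's exact schema:

```yaml
# Hypothetical generation preset; keys and values are illustrative.
temperature: 0.7
top_p: 0.9
top_k: 40
repetition_penalty: 1.15
```

The advantage over ad-hoc text formats is that each preset parses directly into a key/value mapping with typed numbers.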
Elias Vincent Simon | 2cf711f35e | Update SpeechRecognition dependency (#2345) | 2023-05-26 00:34:57 -03:00
jllllll | 78dbec4c4e | Add 'scipy' to requirements.txt, an unlisted dependency of bitsandbytes (#2335, #2343) | 2023-05-25 23:26:25 -03:00
Luis Lopez | 0dbc3d9b2c | Fix get_documents_ids_distances return error when n_results = 0 (#2347) | 2023-05-25 23:25:36 -03:00
jllllll | 07a4f0569f | Update README.md to account for BnB Windows wheel (#2341) | 2023-05-25 18:44:26 -03:00
oobabooga | acfd876f29 | Some QoL changes to "Perplexity evaluation" | 2023-05-25 15:06:22 -03:00
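Several commits above touch perplexity evaluation (983eef1e29, acfd876f29). As a reminder of the quantity being computed: perplexity is the exponential of the negative mean token log-likelihood. A minimal sketch of that formula, not the repository's evaluation code:

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(-mean log-likelihood) over the tokens; lower is better.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```

For example, a model that assigns every token probability 0.5 has perplexity exactly 2.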
oobabooga | 8efdc01ffb | Better default for compute_dtype | 2023-05-25 15:05:53 -03:00
oobabooga | fc33216477 | Small fix for n_ctx in llama.cpp | 2023-05-25 13:55:51 -03:00
oobabooga | 35009c32f0 | Beautify all CSS | 2023-05-25 13:12:34 -03:00