Author | Commit | Message | Date
Forkoz | 9ab90d8b60 | Fix warning for qlora (#2438) | 2023-05-30 11:09:18 -03:00
oobabooga | 0db4e191bd | Improve chat buttons on mobile devices | 2023-05-30 00:30:15 -03:00
oobabooga | 3209440b7c | Rearrange chat buttons | 2023-05-30 00:17:31 -03:00
oobabooga | 3578dd3611 | Change a warning message | 2023-05-29 22:40:54 -03:00
oobabooga | 3a6e194bc7 | Change a warning message | 2023-05-29 22:39:23 -03:00
oobabooga | e763ace593 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:35:49 -03:00
oobabooga | 86ef695d37 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:20:55 -03:00
oobabooga | 8e0a997c60 | Add new parameters to API extension | 2023-05-29 22:03:08 -03:00
Luis Lopez | 9e7204bef4 | Add tail-free and top-a sampling (#2357) | 2023-05-29 21:40:01 -03:00
oobabooga | b4662bf4af | Download gptq_model*.py using download-model.py | 2023-05-29 16:12:54 -03:00
oobabooga | 540a161a08 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:45:40 -03:00
oobabooga | b8d2f6d876 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-05-29 15:33:05 -03:00
oobabooga | 1394f44e14 | Add triton checkbox for AutoGPTQ | 2023-05-29 15:32:45 -03:00
oobabooga | 166a0d9893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:07:59 -03:00
oobabooga | 962d05ca7e | Update README.md | 2023-05-29 14:56:55 -03:00
oobabooga | 4a190a98fd | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 14:56:05 -03:00
matatonic | 2b7ba9586f | Fixes #2326, KeyError: 'assistant' (#2382) | 2023-05-29 14:19:57 -03:00
oobabooga | 6de727c524 | Improve Eta Sampling preset | 2023-05-29 13:56:15 -03:00
oobabooga | f34d20922c | Minor fix | 2023-05-29 13:31:17 -03:00
oobabooga | 983eef1e29 | Attempt at evaluating falcon perplexity (failed) | 2023-05-29 13:28:25 -03:00
Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367); Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com> | 2023-05-29 10:20:18 -03:00
Forkoz | 60ae80cf28 | Fix hang in tokenizer for AutoGPTQ llama models. (#2399) | 2023-05-28 23:10:10 -03:00
oobabooga | 2f811b1bdf | Change a warning message | 2023-05-28 22:48:20 -03:00
oobabooga | 9ee1e37121 | Fix return message when no model is loaded | 2023-05-28 22:46:32 -03:00
oobabooga | f27135bdd3 | Add Eta Sampling preset; also remove some presets that I do not consider relevant | 2023-05-28 22:44:35 -03:00
oobabooga | 00ebea0b2a | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00
Elias Vincent Simon | 2cf711f35e | update SpeechRecognition dependency (#2345) | 2023-05-26 00:34:57 -03:00
jllllll | 78dbec4c4e | Add 'scipy' to requirements.txt #2335 (#2343); unlisted dependency of bitsandbytes | 2023-05-25 23:26:25 -03:00
Luis Lopez | 0dbc3d9b2c | Fix get_documents_ids_distances return error when n_results = 0 (#2347) | 2023-05-25 23:25:36 -03:00
jllllll | 07a4f0569f | Update README.md to account for BnB Windows wheel (#2341) | 2023-05-25 18:44:26 -03:00
oobabooga | acfd876f29 | Some qol changes to "Perplexity evaluation" | 2023-05-25 15:06:22 -03:00
oobabooga | 8efdc01ffb | Better default for compute_dtype | 2023-05-25 15:05:53 -03:00
oobabooga | fc33216477 | Small fix for n_ctx in llama.cpp | 2023-05-25 13:55:51 -03:00
oobabooga | 35009c32f0 | Beautify all CSS | 2023-05-25 13:12:34 -03:00
oobabooga | 231305d0f5 | Update README.md | 2023-05-25 12:05:08 -03:00
oobabooga | 37d4ad012b | Add a button for rendering markdown for any model | 2023-05-25 11:59:27 -03:00
oobabooga | 9a43656a50 | Add bitsandbytes note | 2023-05-25 11:21:52 -03:00
jllllll | b1b3bb6923 | Improve environment isolation (#68) | 2023-05-25 11:15:05 -03:00
oobabooga | c8ce2e777b | Add instructions for CPU mode users | 2023-05-25 10:57:52 -03:00
oobabooga | 996c49daa7 | Remove bitsandbytes installation step; following 548f05e106 | 2023-05-25 10:50:20 -03:00
oobabooga | 548f05e106 | Add windows bitsandbytes wheel by jllllll | 2023-05-25 10:48:22 -03:00
DGdev91 | cf088566f8 | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00
Luis Lopez | ee674afa50 | Add superbooga time weighted history retrieval (#2080) | 2023-05-25 10:22:45 -03:00
oobabooga | a04266161d | Update README.md | 2023-05-25 01:23:46 -03:00
oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00
oobabooga | 63ce5f9c28 | Add back a missing bos token | 2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin | 3cd7c5bdd0 | LoRA Trainer: train_only_after option to control which part of your input to train on (#2315) | 2023-05-24 12:43:22 -03:00
eiery | 9967e08b1f | update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264) | 2023-05-24 10:25:28 -03:00
Gabriel Terrien | e50ade438a | FIX silero_tts/elevenlabs_tts activation/deactivation (#2313) | 2023-05-24 10:06:38 -03:00
Gabriel Terrien | fc116711b0 | FIX save_model_settings function to also update shared.model_config (#2282) | 2023-05-24 10:01:07 -03:00