matatonic
b45baeea41
extensions/openai: Major docs update, fix #2852 (critical bug), minor improvements (#2849)
2023-06-24 22:50:04 -03:00
oobabooga
ebfcfa41f2
Update ExLlama.md
2023-06-24 20:25:34 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama (#2770)
2023-06-24 20:24:17 -03:00
oobabooga
a70a2ac3be
Update ExLlama.md
2023-06-24 20:23:01 -03:00
oobabooga
b071eb0d4b
Clean up the presets (#2854)
2023-06-24 18:41:17 -03:00
oobabooga
cec5fb0ef6
Failed attempt at evaluating exllama_hf perplexity
2023-06-24 12:02:25 -03:00
快乐的我531
e356f69b36
Make stop_everything work with non-streamed generation (#2848)
2023-06-24 11:19:16 -03:00
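Making a stop button work for non-streamed generation typically means checking a shared flag from inside the generation loop (or a per-token callback) instead of only between streamed chunks. A minimal sketch of that pattern; the names here are illustrative, not the project's actual API:

```python
import threading

# Shared flag that a "Stop" button handler can set from another thread.
stop_event = threading.Event()

def stop_everything():
    """Request that any in-flight generation abort as soon as possible."""
    stop_event.set()

def generate_tokens(n_tokens: int) -> list[int]:
    """Toy generation loop that honors the stop flag between tokens,
    so it can be interrupted even when output is not streamed."""
    produced = []
    for i in range(n_tokens):
        if stop_event.is_set():
            break
        produced.append(i)
    return produced
```

The same flag check can be wired into a backend's per-token callback when the loop itself is not accessible.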
oobabooga
ec482f3dae
Apply input extensions after yielding *Is typing...*
2023-06-24 11:07:11 -03:00
oobabooga
3e80f2aceb
Apply the output extensions only once
Relevant for google translate, silero
2023-06-24 10:59:07 -03:00
rizerphe
77baf43f6d
Add CORS support to the API (#2718)
2023-06-24 10:16:06 -03:00
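CORS support for an HTTP API boils down to answering preflight OPTIONS requests and attaching the `Access-Control-Allow-*` headers to responses, so browser pages served from another origin may call the endpoints. A sketch of the header set involved; the permissive defaults below are illustrative, not the project's configuration:

```python
def cors_headers(allowed_origin: str = "*") -> dict[str, str]:
    """Headers a CORS-enabled API attaches to its responses.

    `allowed_origin` of "*" permits any origin; a real deployment
    would usually restrict this to known front-end origins.
    """
    return {
        "Access-Control-Allow-Origin": allowed_origin,
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
    }
```

A server would send these on every response and reply to OPTIONS preflights with an empty 200/204 carrying the same headers.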
matatonic
8c36c19218
8k size only for minotaur-15B (#2815)
Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
2023-06-24 10:14:19 -03:00
Roman
38897fbd8a
fix: added model parameter check (#2829)
2023-06-24 10:09:34 -03:00
missionfloyd
51a388fa34
Organize chat history/character import menu (#2845)
* Organize character import menu
* Move Chat history upload/download labels
2023-06-24 09:55:02 -03:00
oobabooga
8bb3bb39b3
Implement stopping string search in string space (#2847)
2023-06-24 09:43:00 -03:00
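Searching for stopping strings "in string space" means decoding the generated tokens and scanning the resulting text, rather than comparing token IDs; this catches stop sequences that span token boundaries or tokenize differently in different contexts. A minimal sketch of the idea, with illustrative names rather than the project's actual functions:

```python
def truncate_at_stop_string(text: str, stop_strings: list[str]) -> tuple[str, bool]:
    """Cut `text` at the earliest occurrence of any stop string.

    Returns the (possibly truncated) text and whether a stop was found.
    Operating on the decoded string avoids missing stop sequences
    that straddle token boundaries.
    """
    earliest = None
    for stop in stop_strings:
        idx = text.find(stop)
        if idx != -1 and (earliest is None or idx < earliest):
            earliest = idx
    if earliest is None:
        return text, False
    return text[:earliest], True
```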
oobabooga
0f9088f730
Update README
2023-06-23 12:24:43 -03:00
oobabooga
3ae9af01aa
Add --no_use_cuda_fp16 param for AutoGPTQ
2023-06-23 12:22:56 -03:00
Panchovix
5646690769
Fix some models not loading on exllama_hf (#2835)
2023-06-23 11:31:02 -03:00
oobabooga
383c50f05b
Replace old presets with the results of Preset Arena (#2830)
2023-06-23 01:48:29 -03:00
missionfloyd
aa1f1ef46a
Fix printing, take two. (#2810)
* Format chat for printing
* Better printing
2023-06-22 16:06:49 -03:00
Panchovix
b4a38c24b7
Fix Multi-GPU not working on exllama_hf (#2803)
2023-06-22 16:05:25 -03:00
matatonic
d94ea31d54
more models. +minotaur 8k (#2806)
2023-06-21 21:05:08 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. (#2777)
2023-06-21 15:31:42 -03:00
jllllll
a06acd6d09
Update bitsandbytes to 0.39.1 (#2799)
2023-06-21 15:04:45 -03:00
Gaurav Bhagchandani
89fb6f9236
Fixed the ZeroDivisionError when downloading a model (#2797)
2023-06-21 12:31:50 -03:00
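A ZeroDivisionError in a downloader usually comes from computing a progress percentage when the server reports no (or a zero) Content-Length. A hedged sketch of the guard, assuming the bug was of that shape; the function name is illustrative:

```python
def progress_fraction(done_bytes: int, total_bytes: int) -> float:
    """Fraction of a download completed.

    Guards against an unknown or zero total size, which would
    otherwise raise ZeroDivisionError in `done / total`, and clamps
    the result so a misreported total never exceeds 100%.
    """
    if total_bytes <= 0:
        return 0.0
    return min(done_bytes / total_bytes, 1.0)
```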
matatonic
90be1d9fe1
More models (match more) & templates (starchat-beta, tulu) (#2790)
2023-06-21 12:30:44 -03:00
missionfloyd
2661c9899a
Format chat for printing (#2793)
2023-06-21 10:39:58 -03:00
oobabooga
5dfe0bec06
Remove old/useless code
2023-06-20 23:36:56 -03:00
oobabooga
faa92eee8d
Add spaces
2023-06-20 23:25:58 -03:00
Peter Sofronas
b22c7199c9
Download optimizations (#2786)
* download_model_files metadata writing improvement
* line swap
* reduce line length
* safer download and greater block size
* Minor changes by pycodestyle
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-06-20 23:14:18 -03:00
Morgan Schweers
447569e31a
Add a download progress bar to the web UI. (#2472)
* Show download progress on the model screen.
* In case of error, mark as done to clear progress bar.
* Increase the iteration block size to reduce overhead.
2023-06-20 22:59:14 -03:00
ramblingcoder
0d0d849478
Update Dockerfile to resolve superbooga requirement error (#2401)
2023-06-20 18:31:28 -03:00
EugeoSynthesisThirtyTwo
7625c6de89
fix usage of self in classmethod (#2781)
2023-06-20 16:18:42 -03:00
MikoAL
c40932eb39
Added Falcon LoRA training support (#2684)
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
oobabooga
c623e142ac
Bump llama-cpp-python
2023-06-20 00:49:38 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json (#2769)
2023-06-20 00:47:36 -03:00
oobabooga
017884132f
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-20 00:46:29 -03:00
oobabooga
e1cd6cc410
Minor style change
2023-06-20 00:46:18 -03:00
Cebtenzzre
59e7ecb198
llama.cpp: implement ban_eos_token via logits_processor (#2765)
2023-06-19 21:31:19 -03:00
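Banning the EOS token via a logits processor means masking its logit before sampling, so the sampler can never pick it and generation runs until the token limit. llama-cpp-python style processors are callables taking `(input_ids, scores)` and returning modified scores; the sketch below follows that shape with a plain Python list standing in for the scores array:

```python
def ban_eos_processor(eos_token_id: int):
    """Return a logits-processor-style callable that masks the EOS token.

    Setting the EOS logit to -inf gives it zero probability after
    softmax, which is how "ban_eos_token" style options are typically
    implemented.
    """
    def processor(input_ids, scores):
        scores = list(scores)          # copy so the caller's array is untouched
        scores[eos_token_id] = float("-inf")
        return scores
    return processor
```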
oobabooga
0d9d70ec7e
Update docs
2023-06-19 12:52:23 -03:00
oobabooga
f6a602861e
Update docs
2023-06-19 12:51:30 -03:00
oobabooga
5d4b4d15a5
Update Using-LoRAs.md
2023-06-19 12:43:57 -03:00
oobabooga
eb30f4441f
Add ExLlama+LoRA support (#2756)
2023-06-19 12:31:24 -03:00
oobabooga
a1cac88c19
Update README.md
2023-06-19 01:28:23 -03:00
oobabooga
5f418f6171
Fix a memory leak (credits for the fix: Ph0rk0z)
2023-06-19 01:19:28 -03:00
ThisIsPIRI
def3b69002
Fix loading condition for universal llama tokenizer (#2753)
2023-06-18 18:14:06 -03:00
oobabooga
490a1795f0
Bump peft commit
2023-06-18 16:42:11 -03:00
oobabooga
09c781b16f
Add modules/block_requests.py
This has become unnecessary, but it could be useful in the future
for other libraries.
2023-06-18 16:31:14 -03:00
oobabooga
687fd2604a
Improve code/ul styles in chat mode
2023-06-18 15:52:59 -03:00
oobabooga
e8588d7077
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-18 15:23:38 -03:00
oobabooga
44f28830d1
Chat CSS: fix ul, li, pre styles + remove redefinitions
2023-06-18 15:20:51 -03:00