oobabooga
383c50f05b
Replace old presets with the results of Preset Arena ( #2830 )
2023-06-23 01:48:29 -03:00
missionfloyd
aa1f1ef46a
Fix printing, take two. ( #2810 )
* Format chat for printing
* Better printing
2023-06-22 16:06:49 -03:00
Panchovix
b4a38c24b7
Fix Multi-GPU not working on exllama_hf ( #2803 )
2023-06-22 16:05:25 -03:00
matatonic
d94ea31d54
more models. +minotaur 8k ( #2806 )
2023-06-21 21:05:08 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. ( #2777 )
2023-06-21 15:31:42 -03:00
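The wrapper pattern behind 580c1ee748: expose a non-HF engine through the transformers model interface so the existing decoding loop (generate, samplers, streaming) works unchanged. A minimal sketch of the idea, assuming a hypothetical `backend` object with a `forward(input_ids) -> logits` method; this is not the repo's actual exllama_hf code.

```python
from transformers import PretrainedConfig, PreTrainedModel
from transformers.modeling_outputs import CausalLMOutputWithPast

class BackendForCausalLM(PreTrainedModel):
    """Routes transformers' decoding loop to an external inference engine."""

    def __init__(self, config: PretrainedConfig, backend):
        super().__init__(config)
        self.backend = backend  # hypothetical engine: forward(ids) -> logits

    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        return {"input_ids": input_ids}

    def forward(self, input_ids=None, **kwargs):
        logits = self.backend.forward(input_ids)  # (batch, seq_len, vocab)
        return CausalLMOutputWithPast(logits=logits)

# With the wrapper in place, HF decoding works as usual:
# output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True)
```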
jllllll
a06acd6d09
Update bitsandbytes to 0.39.1 ( #2799 )
2023-06-21 15:04:45 -03:00
Gaurav Bhagchandani
89fb6f9236
Fixed the ZeroDivisionError when downloading a model ( #2797 )
2023-06-21 12:31:50 -03:00
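A hedged guess at the class of bug fixed in 89fb6f9236: progress math divides by the total download size, and a missing or zero Content-Length makes that divisor zero. The guard below is illustrative, not the repo's code.

```python
def progress_percent(downloaded: int, total: int) -> float:
    """Progress as a percentage; tolerates an unknown (zero) total size."""
    if total <= 0:  # server sent no Content-Length, or the file is empty
        return 0.0
    return 100.0 * downloaded / total
```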
matatonic
90be1d9fe1
More models (match more) & templates (starchat-beta, tulu) ( #2790 )
2023-06-21 12:30:44 -03:00
missionfloyd
2661c9899a
Format chat for printing ( #2793 )
2023-06-21 10:39:58 -03:00
oobabooga
5dfe0bec06
Remove old/useless code
2023-06-20 23:36:56 -03:00
oobabooga
faa92eee8d
Add spaces
2023-06-20 23:25:58 -03:00
Peter Sofronas
b22c7199c9
Download optimizations ( #2786 )
* download_model_files metadata writing improvement
* line swap
* reduce line length
* safer download and greater block size
* Minor changes by pycodestyle
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-06-20 23:14:18 -03:00
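Two of the bullets above ("safer download", "greater block size") describe a common pattern: stream in large chunks to cut per-chunk overhead, write to a partial file, and only move it into place once the transfer finishes. A sketch under those assumptions; the function and path names are illustrative.

```python
import os
import requests

def download_file(url: str, dest: str, block_size: int = 1024 * 1024) -> None:
    tmp = dest + ".partial"
    with requests.get(url, stream=True, timeout=30) as r:
        r.raise_for_status()
        with open(tmp, "wb") as f:
            for chunk in r.iter_content(chunk_size=block_size):  # 1 MiB blocks
                f.write(chunk)
    os.replace(tmp, dest)  # atomic rename: no half-written file at `dest`
```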
Morgan Schweers
447569e31a
Add a download progress bar to the web UI. ( #2472 )
* Show download progress on the model screen.
* In case of error, mark as done to clear progress bar.
* Increase the iteration block size to reduce overhead.
2023-06-20 22:59:14 -03:00
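The bullets in 447569e31a map onto a small wrapper: count bytes against Content-Length, report a fraction to the UI, and on failure report completion so the bar clears instead of sticking. A hedged sketch; `report` stands in for whatever callback the web UI actually uses.

```python
def iter_with_progress(chunks, total_bytes, report):
    """Yield download chunks while reporting progress in [0, 1]."""
    done = 0
    try:
        for chunk in chunks:
            done += len(chunk)
            report(done / total_bytes if total_bytes else 0.0)
            yield chunk
    except Exception:
        report(1.0)  # in case of error, mark as done to clear the progress bar
        raise
```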
ramblingcoder
0d0d849478
Update Dockerfile to resolve superbooga requirement error ( #2401 )
2023-06-20 18:31:28 -03:00
EugeoSynthesisThirtyTwo
7625c6de89
fix usage of self in classmethod ( #2781 )
2023-06-20 16:18:42 -03:00
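The bug class named in 7625c6de89 is general Python: a @classmethod's first parameter is the class itself, conventionally `cls`, and code written as if it received an instance (`self`) breaks. A neutral illustration, unrelated to the repo's actual class:

```python
class Loader:
    base_dir = "models"

    @classmethod
    def from_name(cls, name: str) -> "Loader":
        # correct: `cls` is the class; referring to `self` here is a NameError
        return cls(f"{cls.base_dir}/{name}")

    def __init__(self, path: str):
        self.path = path
```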
MikoAL
c40932eb39
Added Falcon LoRA training support ( #2684 )
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
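What "Falcon LoRA training support" usually comes down to: Falcon's attention uses a single fused projection named query_key_value, so PEFT must target that module instead of llama's q_proj/k_proj/v_proj. A hedged sketch with PEFT; the hyperparameters are illustrative, not the PR's defaults.

```python
from peft import LoraConfig, get_peft_model

falcon_lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # Falcon's fused attention projection
)
# peft_model = get_peft_model(falcon_model, falcon_lora)  # falcon_model: a loaded Falcon
```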
oobabooga
c623e142ac
Bump llama-cpp-python
2023-06-20 00:49:38 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json ( #2769 )
2023-06-20 00:47:36 -03:00
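The feature in ce86f726e9 is simple to picture: accumulate per-step metrics and flush them to training_log.json so a run can be inspected after the fact. A sketch with illustrative field names, not the PR's schema:

```python
import json

train_log = {"loss": [], "learning_rate": []}

def log_step(loss: float, lr: float, path: str = "training_log.json") -> None:
    train_log["loss"].append(loss)
    train_log["learning_rate"].append(lr)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(train_log, f, indent=2)
```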
oobabooga
017884132f
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-20 00:46:29 -03:00
oobabooga
e1cd6cc410
Minor style change
2023-06-20 00:46:18 -03:00
Cebtenzzre
59e7ecb198
llama.cpp: implement ban_eos_token via logits_processor ( #2765 )
2023-06-19 21:31:19 -03:00
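The technique in 59e7ecb198: llama-cpp-python lets callers pass logits processors, so banning EOS is just forcing that one logit to -inf before sampling. A minimal sketch; the model path is illustrative.

```python
import numpy as np
from llama_cpp import Llama, LogitsProcessorList

llm = Llama(model_path="./model.bin")  # illustrative path
eos = llm.token_eos()

def ban_eos_token(input_ids, scores):
    scores[eos] = -np.inf  # EOS can never win the sampling step
    return scores

out = llm("Write a long story:", max_tokens=200,
          logits_processor=LogitsProcessorList([ban_eos_token]))
```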
oobabooga
0d9d70ec7e
Update docs
2023-06-19 12:52:23 -03:00
oobabooga
f6a602861e
Update docs
2023-06-19 12:51:30 -03:00
oobabooga
5d4b4d15a5
Update Using-LoRAs.md
2023-06-19 12:43:57 -03:00
oobabooga
eb30f4441f
Add ExLlama+LoRA support ( #2756 )
2023-06-19 12:31:24 -03:00
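For eb30f4441f, upstream exllama ships a lora helper that attaches an adapter to a loaded model and generator. The sketch below follows exllama's own examples and should be read as an assumption about the integration, not the PR's code; all paths are illustrative.

```python
from model import ExLlama, ExLlamaCache, ExLlamaConfig  # exllama's modules
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator
from lora import ExLlamaLora

config = ExLlamaConfig("./model/config.json")
config.model_path = "./model/model.safetensors"
model = ExLlama(config)
tokenizer = ExLlamaTokenizer("./model/tokenizer.model")
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)

lora = ExLlamaLora(model, "./lora/adapter_config.json", "./lora/adapter_model.bin")
generator.lora = lora  # generation now applies the adapter's weights
```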
oobabooga
a1cac88c19
Update README.md
2023-06-19 01:28:23 -03:00
oobabooga
5f418f6171
Fix a memory leak (credits for the fix: Ph0rk0z)
2023-06-19 01:19:28 -03:00
ThisIsPIRI
def3b69002
Fix loading condition for universal llama tokenizer ( #2753 )
2023-06-18 18:14:06 -03:00
oobabooga
490a1795f0
Bump peft commit
2023-06-18 16:42:11 -03:00
oobabooga
09c781b16f
Add modules/block_requests.py
This has become unnecessary, but it could be useful in the future
for other libraries.
2023-06-18 16:31:14 -03:00
oobabooga
687fd2604a
Improve code/ul styles in chat mode
2023-06-18 15:52:59 -03:00
oobabooga
e8588d7077
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-18 15:23:38 -03:00
oobabooga
44f28830d1
Chat CSS: fix ul, li, pre styles + remove redefinitions
2023-06-18 15:20:51 -03:00
Forkoz
3cae1221d4
Update exllama.py - Respect model dir parameter ( #2744 )
2023-06-18 13:26:30 -03:00
oobabooga
5b4c0155f6
Move a button
2023-06-18 01:56:43 -03:00
oobabooga
0686a2e75f
Improve instruct colors in dark mode
2023-06-18 01:44:52 -03:00
oobabooga
c5641b65d3
Handle leading spaces properly in ExLlama
2023-06-17 19:35:12 -03:00
matatonic
1e97aaac95
extensions/openai: docs update, model loader, minor fixes ( #2557 )
2023-06-17 19:15:24 -03:00
matatonic
2220b78e7a
models/config.yaml: +alpacino, +alpasta, +hippogriff, +gpt4all-snoozy, +lazarus, +based, -airoboros 4k ( #2580 )
2023-06-17 19:14:25 -03:00
oobabooga
05a743d6ad
Make llama.cpp use tfs parameter
2023-06-17 19:08:25 -03:00
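Commit 05a743d6ad wires through tail-free sampling; in llama-cpp-python the knob is tfs_z, where 1.0 disables it and lower values trim more of the distribution's tail. A hedged usage sketch, assuming a llama-cpp-python build that exposes tfs_z on the completion call:

```python
from llama_cpp import Llama

llm = Llama(model_path="./model.bin")  # illustrative path
out = llm("Once upon a time", max_tokens=64, tfs_z=0.95)
print(out["choices"][0]["text"])
```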
oobabooga
e19cbea719
Add a variable to modules/shared.py
2023-06-17 19:02:29 -03:00
oobabooga
cbd63eeeff
Fix repeated tokens with exllama
2023-06-17 19:02:08 -03:00
oobabooga
766c760cd7
Use gen_begin_reuse in exllama
2023-06-17 18:00:10 -03:00
oobabooga
239b11c94b
Minor bug fixes
2023-06-17 17:57:56 -03:00
Bhavika Tekwani
d8d29edf54
Install wheel using pip3 ( #2719 )
2023-06-16 23:46:40 -03:00
Jonathan Yankovich
a1ca1c04a1
Update ExLlama.md ( #2729 )
Add details for configuring exllama
2023-06-16 23:46:25 -03:00
oobabooga
b27f83c0e9
Make exllama stoppable
2023-06-16 22:03:23 -03:00
oobabooga
7f06d551a3
Fix streaming callback
2023-06-16 21:44:56 -03:00
oobabooga
1e400218e9
Fix a typo
2023-06-16 21:01:57 -03:00
oobabooga
5f392122fd
Add gpu_split param to ExLlama
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
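The gpu_split parameter from 5f392122fd maps naturally onto exllama's device map: a comma-separated list of how many GB of weights to place on each GPU. A sketch assuming exllama's ExLlamaConfig.set_auto_map, per its upstream examples; paths and sizes are illustrative.

```python
from model import ExLlama, ExLlamaConfig  # exllama's modules

config = ExLlamaConfig("./model/config.json")
config.model_path = "./model/model.safetensors"
config.set_auto_map("18,24")  # ~18 GB of weights on GPU 0, ~24 GB on GPU 1
model = ExLlama(config)
```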