Morgan Schweers
447569e31a
Add a download progress bar to the web UI. ( #2472 )
...
* Show download progress on the model screen.
* In case of error, mark as done to clear progress bar.
* Increase the iteration block size to reduce overhead.
2023-06-20 22:59:14 -03:00
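The commit above streams the download in chunks and reports progress to the UI; a larger block size means fewer per-chunk updates and less overhead. A minimal, hypothetical sketch of that pattern (not the actual #2472 code; `download_with_progress` and the `progress` callback are made-up names):

```python
# Hypothetical sketch: chunked download with progress reporting.
# A larger block_size means fewer iterations and less progress-update overhead.
import requests

def download_with_progress(url, destination, block_size=1024 * 1024, progress=None):
    response = requests.get(url, stream=True)
    total = int(response.headers.get("content-length", 0))
    done = 0
    try:
        with open(destination, "wb") as f:
            for chunk in response.iter_content(chunk_size=block_size):
                f.write(chunk)
                done += len(chunk)
                if progress is not None and total > 0:
                    progress(done / total)
    finally:
        # Even on error, report completion so the UI's progress bar is cleared.
        if progress is not None:
            progress(1.0)
```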
jllllll
d1da22d7ee
Fix -y from previous commit ( #90 )
2023-06-20 22:48:59 -03:00
oobabooga
80a615c3ae
Add space
2023-06-20 22:48:45 -03:00
oobabooga
a2116e8b2b
use uninstall -y
2023-06-20 21:24:01 -03:00
oobabooga
c0a1baa46e
Minor changes
2023-06-20 20:23:21 -03:00
jllllll
5cbc0b28f2
Workaround for Peft not updating their package version on the git repo ( #88 )
...
* Workaround for Peft not updating their git package version
* Update webui.py
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-06-20 20:21:10 -03:00
ramblingcoder
0d0d849478
Update Dockerfile to resolve superbooga requirement error ( #2401 )
2023-06-20 18:31:28 -03:00
jllllll
9bb2fc8cd7
Install Pytorch through pip instead of Conda ( #84 )
2023-06-20 16:39:23 -03:00
EugeoSynthesisThirtyTwo
7625c6de89
fix usage of self in classmethod ( #2781 )
2023-06-20 16:18:42 -03:00
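For context, the bug class fixed here is generic Python: inside a `@classmethod` the first argument is the class (`cls`), so `self` is undefined. An illustrative example (not the repo's actual code):

```python
class ModelLoader:
    default_dir = "models"

    @classmethod
    def from_name(cls, name):
        # Wrong: NameError, `self` does not exist inside a classmethod
        # return f"{self.default_dir}/{name}"
        # Correct: use the class reference that Python passes as the first argument
        return f"{cls.default_dir}/{name}"
```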
MikoAL
c40932eb39
Added Falcon LoRA training support ( #2684 )
...
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
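Falcon's attention uses a fused projection named query_key_value, so its LoRA target modules differ from llama-style models. A hedged sketch of attaching a LoRA adapter to Falcon with peft (illustrative hyperparameters, not the PR's training code):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```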
oobabooga
c623e142ac
Bump llama-cpp-python
2023-06-20 00:49:38 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json ( #2769 )
2023-06-20 00:47:36 -03:00
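Illustrative only (the field names below are assumptions, not the PR's schema): persisting a dictionary of training statistics to training_log.json takes only the standard library.

```python
import json

# Hypothetical training statistics collected during a LoRA run
training_log = {"current_steps": 128, "loss": 1.83, "learning_rate": 3e-4, "epoch": 1.0}

with open("training_log.json", "w", encoding="utf-8") as f:
    json.dump(training_log, f, indent=2)
```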
oobabooga
017884132f
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-20 00:46:29 -03:00
oobabooga
e1cd6cc410
Minor style change
2023-06-20 00:46:18 -03:00
Cebtenzzre
59e7ecb198
llama.cpp: implement ban_eos_token via logits_processor ( #2765 )
2023-06-19 21:31:19 -03:00
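The general technique: a logits processor runs over the raw logits before sampling and can ban a token by forcing its score to negative infinity. A hedged sketch of such a processor (how it gets registered with llama-cpp-python depends on the library version):

```python
def ban_eos_logits_processor(eos_token_id):
    # Called with the tokens generated so far and the logits for the next token;
    # returns the modified logits with the EOS token effectively unsampleable.
    def processor(input_ids, logits):
        logits[eos_token_id] = -float("inf")
        return logits
    return processor
```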
oobabooga
0d9d70ec7e
Update docs
2023-06-19 12:52:23 -03:00
oobabooga
f6a602861e
Update docs
2023-06-19 12:51:30 -03:00
oobabooga
5d4b4d15a5
Update Using-LoRAs.md
2023-06-19 12:43:57 -03:00
oobabooga
eb30f4441f
Add ExLlama+LoRA support ( #2756 )
2023-06-19 12:31:24 -03:00
oobabooga
a1cac88c19
Update README.md
2023-06-19 01:28:23 -03:00
oobabooga
5f418f6171
Fix a memory leak (credits for the fix: Ph0rk0z)
2023-06-19 01:19:28 -03:00
ThisIsPIRI
def3b69002
Fix loading condition for universal llama tokenizer ( #2753 )
2023-06-18 18:14:06 -03:00
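A hedged illustration of the general pattern behind such a loading condition (not the exact check from #2753): use the Llama-specific tokenizer only when the model looks like a llama variant and its tokenizer file is actually on disk, otherwise fall back to AutoTokenizer.

```python
from pathlib import Path
from transformers import AutoTokenizer, LlamaTokenizer

def load_tokenizer(model_name, model_dir="models"):
    path = Path(model_dir) / model_name
    # Hypothetical condition: llama-style model with its tokenizer files present
    if "llama" in model_name.lower() and (path / "tokenizer.model").exists():
        return LlamaTokenizer.from_pretrained(path)
    return AutoTokenizer.from_pretrained(path)
```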
oobabooga
490a1795f0
Bump peft commit
2023-06-18 16:42:11 -03:00
oobabooga
09c781b16f
Add modules/block_requests.py
...
This has become unnecessary, but it could be useful in the future
for other libraries.
2023-06-18 16:31:14 -03:00
oobabooga
687fd2604a
Improve code/ul styles in chat mode
2023-06-18 15:52:59 -03:00
oobabooga
e8588d7077
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-18 15:23:38 -03:00
oobabooga
44f28830d1
Chat CSS: fix ul, li, pre styles + remove redefinitions
2023-06-18 15:20:51 -03:00
Forkoz
3cae1221d4
Update exllama.py - Respect model dir parameter ( #2744 )
2023-06-18 13:26:30 -03:00
oobabooga
5b4c0155f6
Move a button
2023-06-18 01:56:43 -03:00
oobabooga
0686a2e75f
Improve instruct colors in dark mode
2023-06-18 01:44:52 -03:00
oobabooga
c5641b65d3
Handle leading spaces properly in ExLlama
2023-06-17 19:35:12 -03:00
matatonic
1e97aaac95
extensions/openai: docs update, model loader, minor fixes ( #2557 )
2023-06-17 19:15:24 -03:00
matatonic
2220b78e7a
models/config.yaml: +alpacino, +alpasta, +hippogriff, +gpt4all-snoozy, +lazarus, +based, -airoboros 4k ( #2580 )
2023-06-17 19:14:25 -03:00
jllllll
b1d05cbbf6
Install exllama ( #83 )
...
* Install exllama
* Handle updating exllama
2023-06-17 19:10:36 -03:00
jllllll
657049d7d0
Fix cmd_macos.sh ( #82 )
...
The macOS version of Bash does not support process substitution
2023-06-17 19:09:42 -03:00
jllllll
b2483e28d1
Check for special characters in path on Windows ( #81 )
...
Display warning message if detected
2023-06-17 19:09:22 -03:00
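The one-click installer scripts live outside Python, but the check itself is simple; an illustrative version (the exact character set and warning wording used in #81 may differ):

```python
import os

# Characters that commonly break build tools and scripts on Windows (assumed set)
SPECIAL_CHARACTERS = set("!#$%&()*+,;<=>?@[]^`{|}~ ")

def warn_on_special_characters(path):
    found = sorted(set(path) & SPECIAL_CHARACTERS)
    if found:
        print(f"WARNING: the installation path '{path}' contains special characters "
              f"({' '.join(found)}), which may cause the installation to fail.")

warn_on_special_characters(os.getcwd())
```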
oobabooga
05a743d6ad
Make llama.cpp use tfs parameter
2023-06-17 19:08:25 -03:00
oobabooga
e19cbea719
Add a variable to modules/shared.py
2023-06-17 19:02:29 -03:00
oobabooga
cbd63eeeff
Fix repeated tokens with exllama
2023-06-17 19:02:08 -03:00
oobabooga
766c760cd7
Use gen_begin_reuse in exllama
2023-06-17 18:00:10 -03:00
oobabooga
239b11c94b
Minor bug fixes
2023-06-17 17:57:56 -03:00
Bhavika Tekwani
d8d29edf54
Install wheel using pip3 ( #2719 )
2023-06-16 23:46:40 -03:00
Jonathan Yankovich
a1ca1c04a1
Update ExLlama.md ( #2729 )
...
Add details for configuring exllama
2023-06-16 23:46:25 -03:00
oobabooga
b27f83c0e9
Make exllama stoppable
2023-06-16 22:03:23 -03:00
oobabooga
7f06d551a3
Fix streaming callback
2023-06-16 21:44:56 -03:00
oobabooga
1e400218e9
Fix a typo
2023-06-16 21:01:57 -03:00
oobabooga
5f392122fd
Add gpu_split param to ExLlama
...
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga
cb9be5db1c
Update ExLlama.md
2023-06-16 20:40:12 -03:00
oobabooga
83be8eacf0
Minor fix
2023-06-16 20:38:32 -03:00
oobabooga
9f40032d32
Add ExLlama support ( #2444 )
2023-06-16 20:35:38 -03:00