Commit Graph

3076 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| oobabooga | cea7fc2435 | Update html_instruct_style.css | 2023-10-22 12:28:23 -07:00 |
| oobabooga | df90d03e0b | Replace --mul_mat_q with --no_mul_mat_q | 2023-10-22 12:23:03 -07:00 |
| Googulator | d0c3b407b3 | transformers loader: multi-LoRAs support (#3120) | 2023-10-22 16:06:22 -03:00 |
| omo | 4405513ca5 | Option to select/target additional linear modules/layers in LORA training (#4178) | 2023-10-22 15:57:19 -03:00 |
| oobabooga | 7a3f885ea8 | Update 03 ‐ Parameters Tab.md | 2023-10-22 14:52:23 -03:00 |
| oobabooga | 63688004dc | Add default cmd flags to colab | 2023-10-22 09:56:43 -07:00 |
| oobabooga | 613feca23b | Make colab functional for llama.cpp (Download only Q4_K_M for GGUF repositories by default; use maximum n-gpu-layers by default) | 2023-10-22 09:08:25 -07:00 |
| oobabooga | 994502d41b | Colab fixes | 2023-10-22 08:57:16 -07:00 |
| Jiashu Xu | c544f5cc51 | Support LLaVA v1.5 7B (#4348) | 2023-10-22 12:49:04 -03:00 |
| oobabooga | 05741821a5 | Minor colab changes | 2023-10-22 08:44:35 -07:00 |
| FartyPants (FP HAM) | 6a61158adf | Training PRO a month worth of updates (#4345) | 2023-10-22 12:38:09 -03:00 |
| mongolu | c18504f369 | USE_CUDA118 from ENV remains null one_click.py + cuda-toolkit (#4352) | 2023-10-22 12:37:24 -03:00 |
| oobabooga | cd45635f53 | tqdm improvement for colab | 2023-10-21 22:00:29 -07:00 |
| oobabooga | ae79c510cc | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-10-21 21:46:15 -07:00 |
| oobabooga | 2d1b3332e4 | Ignore warnings on Colab | 2023-10-21 21:45:25 -07:00 |
| oobabooga | caf6db07ad | Update README.md | 2023-10-22 01:22:17 -03:00 |
| oobabooga | 1a34927314 | Make API URLs more visible | 2023-10-21 21:11:07 -07:00 |
| oobabooga | 09f807af83 | Use ExLlama_HF for GPTQ models by default | 2023-10-21 20:45:38 -07:00 |
| oobabooga | 619093483e | Add Colab notebook | 2023-10-21 20:27:52 -07:00 |
| oobabooga | 506d05aede | Organize command-line arguments | 2023-10-21 18:52:59 -07:00 |
| oobabooga | b1f33b55fd | Update 01 ‐ Chat Tab.md | 2023-10-21 20:17:56 -03:00 |
| oobabooga | ac6d5d50b7 | Update README.md | 2023-10-21 20:03:43 -03:00 |
| oobabooga | 6efb990b60 | Add a proper documentation (#3885) | 2023-10-21 19:15:54 -03:00 |
| Adam White | 5a5bc135e9 | Docker: Remove explicit CUDA 11.8 Reference (#4343) | 2023-10-21 15:09:34 -03:00 |
| oobabooga | b98fbe0afc | Add download link | 2023-10-20 23:58:05 -07:00 |
| oobabooga | fbac6d21ca | Add missing exception | 2023-10-20 23:53:24 -07:00 |
| Brian Dashore | 3345da2ea4 | Add flash-attention 2 for windows (#4235) | 2023-10-21 03:46:23 -03:00 |
| oobabooga | 258d046218 | More robust way of initializing empty .git folder | 2023-10-20 23:13:09 -07:00 |
| Johan | 1d5a015ce7 | Enable special token support for exllamav2 (#4314) | 2023-10-21 01:54:06 -03:00 |
| mjbogusz | 8f6405d2fa | Python 3.11, 3.9, 3.8 support (#4233) (Co-authored-by: oobabooga) | 2023-10-20 21:13:33 -03:00 |
| oobabooga | 9be74fb57c | Change 2 margins | 2023-10-20 14:04:14 -07:00 |
| oobabooga | e208128d68 | Lint the CSS files | 2023-10-20 13:02:18 -07:00 |
| oobabooga | dedbdb46c2 | Chat CSS improvements | 2023-10-20 12:49:36 -07:00 |
| Haotian Liu | 32984ea2f0 | Support LLaVA v1.5 (#4305) | 2023-10-20 02:28:14 -03:00 |
| oobabooga | bb71272903 | Detect WizardCoder-Python-34B & Phind-CodeLlama-34B | 2023-10-19 14:35:56 -07:00 |
| oobabooga | eda7126b25 | Organize the .gitignore | 2023-10-19 14:33:44 -07:00 |
| turboderp | ae8cd449ae | ExLlamav2_HF: Convert logits to FP32 (#4310) | 2023-10-18 23:16:05 -03:00 |
| missionfloyd | c0ffb77fd8 | More silero languages (#3950) | 2023-10-16 17:12:32 -03:00 |
| hronoas | db7ecdd274 | openai: fix empty models list on query present in url (#4139) | 2023-10-16 17:02:47 -03:00 |
| oobabooga | f17f7a6913 | Increase the evaluation table height | 2023-10-16 12:55:35 -07:00 |
| oobabooga | 8ea554bc19 | Check for torch.xpu.is_available() | 2023-10-16 12:53:40 -07:00 |
| oobabooga | 188d20e9e5 | Reduce the evaluation table height | 2023-10-16 10:53:42 -07:00 |
| oobabooga | 2d44adbb76 | Clear the torch cache while evaluating | 2023-10-16 10:52:50 -07:00 |
| oobabooga | 388d1864a6 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-10-15 21:58:16 -07:00 |
| oobabooga | 71cac7a1b2 | Increase the height of the evaluation table | 2023-10-15 21:56:40 -07:00 |
| oobabooga | e14bde4946 | Minor improvements to evaluation logs | 2023-10-15 20:51:43 -07:00 |
| oobabooga | b88b2b74a6 | Experimental Intel Arc transformers support (untested) | 2023-10-15 20:51:11 -07:00 |
| Sam | d331501ebc | Fix for using Torch with CUDA 11.8 (#4298) | 2023-10-15 19:27:19 -03:00 |
| oobabooga | 3bb4046fad | Update auto-release.yml | 2023-10-15 17:27:16 -03:00 |
| oobabooga | 45fa803943 | Create auto-release.yml | 2023-10-15 17:25:29 -03:00 |