Commit Graph

2116 Commits

Author SHA1 Message Date
Forkoz
74ea7522a0 Lora fixes for AutoGPTQ (#2818) 2023-07-09 01:03:43 -03:00
Chris Rude
70b088843d fix for issue #2475: Streaming api deadlock (#3048) 2023-07-08 23:21:20 -03:00
oobabooga
5ac4e4da8b Make --model work with argument like models/folder_name 2023-07-08 10:22:54 -07:00
Brandon McClure
acf24ebb49 Whisper_stt params for model, language, and auto_submit (#3031) 2023-07-07 20:54:53 -03:00
oobabooga
79679b3cfd Pin fastapi version (for #3042) 2023-07-07 16:40:57 -07:00
oobabooga
b6643e5039 Add decode functions to llama.cpp/exllama 2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551 Add truncation to exllama 2023-07-07 09:09:23 -07:00
oobabooga
c21b73ff37 Minor change to ui.py 2023-07-07 09:09:14 -07:00
oobabooga
de994331a4 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3 Block a Cloudflare request 2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78 Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits (#3036) 2023-07-07 02:24:07 -03:00
Fernando Tarin Morales
1f540fa4f8 Added the format to be able to finetune Vicuna1.1 models (#3037) 2023-07-07 02:22:39 -03:00
Xiaojian "JJ" Deng
ff45317032 Update models.py (#3020)
Hopefully fixed error with "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00
ofirkris
b67c362735 Bump llama-cpp-python (#3011)
Bump llama-cpp-python to V0.1.68
2023-07-05 11:33:28 -03:00
jeckyhl
88a747b5b9 fix: Error when downloading model from UI (#3014) 2023-07-05 11:27:29 -03:00
oobabooga
e0a50fb77a Merge pull request #2922 from Honkware/main
Load Salesforce Xgen Models
2023-07-04 23:47:21 -03:00
oobabooga
8705eba830 Remove universal llama tokenizer support
Instead replace it with a warning if the tokenizer files look off
2023-07-04 19:43:19 -07:00
oobabooga
84d6c93d0d Merge branch 'main' into Honkware-main 2023-07-04 18:50:07 -07:00
oobabooga
31c297d7e0 Various changes 2023-07-04 18:50:01 -07:00
AN Long
be4582be40 Support specifying retry times in download-model.py (#2908) 2023-07-04 22:26:30 -03:00
oobabooga
70a4d5dbcf Update chat API (fixes #3006) 2023-07-04 17:36:47 -07:00
oobabooga
333075e726 Fix #3003 2023-07-04 11:38:35 -03:00
oobabooga
40c5722499 Fix #2998 2023-07-04 11:35:25 -03:00
oobabooga
463ddfffd0 Fix start_with 2023-07-03 23:32:02 -07:00
oobabooga
55457549cd Add information about presets to the UI 2023-07-03 22:39:01 -07:00
oobabooga
373555c4fb Fix loading some histories (thanks kaiokendev) 2023-07-03 22:19:28 -07:00
Panchovix
10c8c197bf Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955) 2023-07-04 01:13:16 -03:00
jllllll
1610d5ffb2 Bump exllama module to 0.0.5 (#2993) 2023-07-04 00:15:55 -03:00
FartyPants
eb6112d5a2 Update server.py - clear LORA after reload (#2952) 2023-07-04 00:13:38 -03:00
oobabooga
7e8340b14d Make greetings appear in --multi-user mode 2023-07-03 20:08:14 -07:00
oobabooga
4b1804a438 Implement sessions + add basic multi-user support (#2991) 2023-07-04 00:03:30 -03:00
FartyPants
1f8cae14f9 Update training.py - correct use of lora_names (#2988) 2023-07-03 17:41:18 -03:00
FartyPants
c23c88ee4c Update LoRA.py - avoid potential error (#2953) 2023-07-03 17:40:22 -03:00
FartyPants
33f56fd41d Update models.py to clear LORA names after unload (#2951) 2023-07-03 17:39:06 -03:00
FartyPants
48b11f9c5b Training: added trainable parameters info (#2944) 2023-07-03 17:38:36 -03:00
Turamarth14
847f70b694 Update html_generator.py (#2954)
With version 10.0.0 of Pillow, the constant Image.ANTIALIAS has been removed; Image.LANCZOS should be used instead.
2023-07-02 01:43:58 -03:00
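As a side note on the Pillow change above, the migration is a one-token swap: Image.LANCZOS is the equivalent resampling filter that survives the 10.0.0 removal of Image.ANTIALIAS. A minimal sketch (the dummy image stands in for whatever picture html_generator.py actually resizes):

```python
from PIL import Image

# Dummy image; in html_generator.py this would be a loaded profile picture.
img = Image.new("RGB", (128, 128), color="white")

# Pillow 10.0.0 removed Image.ANTIALIAS; Image.LANCZOS is the equivalent filter.
thumbnail = img.resize((64, 64), Image.LANCZOS)
print(thumbnail.size)  # -> (64, 64)
```

Older Pillow versions also accept Image.LANCZOS (it has been an alias of ANTIALIAS since 2.7), so this change is backward compatible.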
ardfork
3c076c3c80 Disable half2 for ExLlama when using HIP (#2912) 2023-06-29 15:03:16 -03:00
missionfloyd
ac0f96e785 Some more character import tweaks (#2921) 2023-06-29 14:56:25 -03:00
oobabooga
5d2a8b31be Improve Parameters tab UI 2023-06-29 14:33:47 -03:00
oobabooga
79db629665 Minor bug fix 2023-06-29 13:53:06 -03:00
oobabooga
3443219cbc Add repetition penalty range parameter to transformers (#2916) 2023-06-29 13:40:13 -03:00
Honkware
b9a3d28177 Merge branch 'main' of https://github.com/Honkware/text-generation-webui 2023-06-29 01:33:00 -05:00
Honkware
3147f0b8f8 xgen config 2023-06-29 01:32:53 -05:00
Honkware
0a6a498383 Load xgen tokenizer 2023-06-29 01:32:44 -05:00
Honkware
1d03387f74 Xgen instruction template 2023-06-29 01:31:33 -05:00
oobabooga
c6cae106e7 Bump llama-cpp-python 2023-06-28 18:14:45 -03:00
oobabooga
20740ab16e Revert "Fix exllama_hf gibberish above 2048 context, and works >5000 context. (#2913)"
This reverts commit 37a16d23a7.
2023-06-28 18:10:34 -03:00
jllllll
7b048dcf67 Bump exllama module version to 0.0.4 (#2915) 2023-06-28 18:09:58 -03:00
Panchovix
37a16d23a7 Fix exllama_hf gibberish above 2048 context, and works >5000 context. (#2913) 2023-06-28 12:36:07 -03:00
oobabooga
63770c0643 Update docs/Extensions.md 2023-06-27 22:25:05 -03:00