Commit Graph

2503 Commits

Author | SHA1 | Message | Date
oobabooga
6447b2eea6
Merge pull request #3116 from oobabooga/dev
v1.1
2023-07-12 15:55:40 -03:00
oobabooga
2463d7c098 Spaces 2023-07-12 11:35:43 -07:00
oobabooga
e202190c4f lint 2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training (#3058) 2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5 Add back .replace('\r', '') 2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fixed the tokenization process of a raw dataset and improved its efficiency (#3035) 2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add TensorBoard/Weights & Biases integration for training (#2624) 2023-07-12 11:53:31 -03:00
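A minimal sketch of how this kind of reporting is usually wired through Hugging Face transformers: metric backends are selected via the report_to field of TrainingArguments. The output paths below are illustrative, not the PR's actual values.

```python
# Illustrative only: route training metrics to TensorBoard and Weights & Biases.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="training/checkpoints",    # illustrative checkpoint path
    logging_dir="training/logs",          # where TensorBoard event files are written
    logging_steps=10,
    report_to=["tensorboard", "wandb"],   # requires tensorboard and wandb to be installed
)
```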
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training (#1997)
* Update utils.py

Return individual txt files and subdirectories from getdatasets to allow training from a directory of text files

* Update training.py

Minor tweak to raw-dataset training: detect whether a directory is selected and, if so, load all the txt files in that directory for training

* Update put-trainer-datasets-here.txt

Document the new option

* Minor change

* Use pathlib, sort by natural keys

* Space

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
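A minimal sketch of the directory-loading idea described in the commit above, assuming a hypothetical load_raw_text helper: gather every .txt file under a chosen folder with pathlib and sort with natural keys so that "file2" comes before "file10".

```python
# Hypothetical sketch: collect .txt files from a dataset directory, sorted by natural keys.
import re
from pathlib import Path


def natural_key(path: Path):
    """Split a file name into text and number chunks so 'file2' sorts before 'file10'."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", path.name)]


def load_raw_text(dataset_dir: str) -> str:
    """Concatenate every .txt file found under dataset_dir (recursively)."""
    files = sorted(Path(dataset_dir).rglob("*.txt"), key=natural_key)
    return "\n\n".join(f.read_text(encoding="utf-8") for f in files)
```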
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection (#1711) 2023-07-12 11:26:45 -03:00
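A hedged sketch of what "logging a sample of the training dataset" could look like; the file path and sample count are assumptions for illustration, not the PR's actual implementation.

```python
# Hypothetical sketch: dump the first few formatted training prompts to JSON for manual inspection.
import json
import os


def log_dataset_sample(samples, path="logs/train_dataset_sample.json", n=10):
    """Write the first n training samples to disk so the prompt format can be checked."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(samples[:n], f, ensure_ascii=False, indent=2)
```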
oobabooga
b6ba68eda9 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334 Disable wandb during training 2023-07-12 07:19:12 -07:00
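One common way to disable Weights & Biases is through its environment variables, set before any run is initialized; a minimal sketch, not necessarily the exact mechanism used in this commit.

```python
# Sketch: disable Weights & Biases logging via environment variables
# before transformers/wandb get a chance to initialize a run.
import os

os.environ["WANDB_MODE"] = "disabled"   # wandb calls become no-ops
os.environ["WANDB_DISABLED"] = "true"   # flag checked by older transformers versions
```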
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp (#3076) 2023-07-12 11:05:13 -03:00
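A hedged sketch of how a low-VRAM switch surfaces in llama-cpp-python of that era (the low_vram flag appeared around 0.1.68); the model path and layer count are illustrative.

```python
# Sketch: pass the low-VRAM flag through llama-cpp-python (assumes a build that supports low_vram).
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-7b.ggmlv3.q4_0.bin",  # illustrative path
    n_gpu_layers=20,
    low_vram=True,   # trade some speed for a smaller VRAM footprint
)
```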
oobabooga
180420d2c9 Fix send_pictures extension 2023-07-11 20:56:01 -07:00
original-subliminal-thought-criminal
ad07839a7b
Fix small bug when arbitrarily loading a character.json that doesn't exist (#2643)
* Fixes #2482

* Corrected erroneous variable

* Use .exists()

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 00:16:36 -03:00
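A minimal sketch of the .exists() guard mentioned above; the function and folder names are assumptions for illustration.

```python
# Hypothetical sketch: guard against loading a character file that doesn't exist.
import json
from pathlib import Path


def load_character(name: str, folder: str = "characters"):
    path = Path(folder) / f"{name}.json"
    if not path.exists():
        print(f"Character file not found: {path}")
        return None
    return json.loads(path.read_text(encoding="utf-8"))
```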
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names (#3051) 2023-07-12 00:10:36 -03:00
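A sketch of what a more detailed history file name might look like, built from the character name and a timestamp; the exact naming scheme here is an assumption, not the PR's.

```python
# Hypothetical sketch: build a descriptive file name for a chat history download.
from datetime import datetime


def history_filename(character: str, mode: str = "chat") -> str:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return f"{character}_{mode}_{stamp}.json"  # e.g. "Assistant_chat_20230712-001036.json"
```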
atriantafy
d9fabdde40
Add context_instruct to API. Load default model instruction template … (#2688) 2023-07-12 00:01:03 -03:00
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown (#2977) 2023-07-11 23:27:38 -03:00
oobabooga
e3810dff40 Style changes 2023-07-11 18:49:06 -07:00
oobabooga
bfafd07f44 Change a message 2023-07-11 18:29:20 -07:00
oobabooga
a12dae51b9 Bump bitsandbytes 2023-07-11 18:29:08 -07:00
Keith Kjer
37bffb2e1a
Add reference to new pipeline in multimodal readme (#2947) 2023-07-11 19:04:15 -03:00
Juliano Henriquez
1fc0b5041e
Substitute superbooga's Beautiful Soup parser (#2996)
* Add lxml to requirements

* Change Beautiful Soup parser

Use the "lxml" parser, which may be more tolerant of certain kinds of parsing errors than "html.parser" and faster at the same time.
2023-07-11 19:02:49 -03:00
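A minimal illustration of the parser swap: BeautifulSoup takes the parser name as its second argument, so moving from the built-in "html.parser" to "lxml" is a one-line change (lxml must be installed, hence the requirements addition).

```python
# Parser swap illustrated: "lxml" instead of the built-in "html.parser".
from bs4 import BeautifulSoup

html = "<p>Example<br>text</p>"
soup = BeautifulSoup(html, "lxml")  # faster and more forgiving of malformed markup
text = soup.get_text()
```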
Salvador E. Tropea
ab044a5a44
Elevenlabs tts fixes (#2959)
* [Fixed] Keep setting option for the voice

- It was always changed to the first available voice
- Also added an error if the selected voice isn't valid

* [Fixed] elevenlabs_tts API key handling

- The one from the settings wasn't applied
- We always got "Enter your API key", even when the settings specified
  an api_key

* [Added] elevenlabs_tts model selection

- Now we can also use the "eleven_multilingual_v1" model,
  used for anything but English.
2023-07-11 19:00:37 -03:00
micsthepick
3708de2b1f
Respect model dir for downloads (#3077) (#3079) 2023-07-11 18:55:46 -03:00
matatonic
3778816b8d
models/config.yaml: +platypus/gplatty, +longchat, +vicuna-33b, +Redmond-Hermes-Coder, +wizardcoder, +more (#2928)
* +platypus/gplatty

* +longchat, +vicuna-33b, +Redmond-Hermes-Coder

* +wizardcoder

* +superplatty

* +Godzilla, +WizardLM-V1.1, +rwkv 8k,
+wizard-mega fix </s>

---------

Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
2023-07-11 18:53:48 -03:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent (#3017)
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
matatonic
3e7feb699c
extensions/openai: Major openai extension updates & fixes (#3049)
* many openai updates

* total reorg & cleanup.

* fixups

* missing import os for images

* +moderations, custom_stopping_strings, more fixes

* fix bugs in completion streaming

* moderation fix (flagged)

* updated moderation categories

---------

Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
2023-07-11 18:50:08 -03:00
Ahmad Fahadh Ilyas
8db7e857b1
Add token authorization for downloading model (#3067) 2023-07-11 18:48:08 -03:00
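A hedged sketch of what token-authorized downloading typically looks like against the Hugging Face Hub: the access token travels in a Bearer header. The URL and token value are placeholders, not the PR's code.

```python
# Sketch: pass an access token as a Bearer header when downloading gated model files.
import requests

url = "https://huggingface.co/some-org/some-model/resolve/main/config.json"  # placeholder URL
token = "hf_..."  # placeholder token

headers = {"Authorization": f"Bearer {token}"} if token else {}
response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()
```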
FartyPants
61102899cd
Google Flan-T5 download fix (#3080) 2023-07-11 18:46:59 -03:00
jllllll
fdd596f98f
Bump bitsandbytes Windows wheel (#3097) 2023-07-11 18:41:24 -03:00
Vadim Peretokin
987d522b55
Fix API example for loading models (#3101) 2023-07-11 18:40:55 -03:00
Josh XT
f4aa11cef6
Add default environment variable values to docker compose file (#3102)
2023-07-11 18:38:26 -03:00
ofirkris
a81cdd1367
Bump llama-cpp-python version (#3081)
Bump llama-cpp-python version to 0.1.70
2023-07-10 19:36:15 -03:00
jllllll
f8dbd7519b
Bump exllama module version (#3087)
d769533b6f...e61d4d31d4
2023-07-10 19:35:59 -03:00
tianchen zhong
c7058afb40
Add new possible bin file name regex (#3070) 2023-07-09 17:22:56 -03:00
ofirkris
161d984e80
Bump llama-cpp-python version (#3072)
Bump llama-cpp-python version to 0.1.69
2023-07-09 17:22:24 -03:00
Salvador E. Tropea
463aac2d65
[Added] google_translate activate param (#2961)
- So you can quickly enable/disable it, otherwise you must select
  English to disable it, and then your language to enable it again.
2023-07-09 01:08:20 -03:00
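A sketch of the enable/disable idea: an "activate" entry in the extension's params dict lets the translation hook return text untouched. The params structure and the translate stub below are assumptions for illustration.

```python
# Hypothetical sketch: an "activate" switch that short-circuits translation when disabled.
params = {
    "activate": True,
    "language string": "ja",
}


def translate(text: str, target: str) -> str:
    """Placeholder for the actual Google Translate call."""
    return text


def output_modifier(text: str) -> str:
    if not params["activate"]:
        return text  # pass the text through unchanged when translation is switched off
    return translate(text, target=params["language string"])
```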
Forkoz
74ea7522a0
Lora fixes for AutoGPTQ (#2818) 2023-07-09 01:03:43 -03:00
Chris Rude
70b088843d
fix for issue #2475: Streaming api deadlock (#3048) 2023-07-08 23:21:20 -03:00
oobabooga
5ac4e4da8b Make --model work with argument like models/folder_name 2023-07-08 10:22:54 -07:00
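A minimal sketch of accepting either a bare model name or a models/-prefixed path for --model; a hypothetical helper, not the repo's exact code.

```python
# Hypothetical sketch: normalize "--model models/folder_name" to just "folder_name".
from pathlib import Path


def normalize_model_name(name: str, models_dir: str = "models") -> str:
    path = Path(name)
    if len(path.parts) > 1 and path.parts[0] == models_dir:
        return str(Path(*path.parts[1:]))
    return name


assert normalize_model_name("models/llama-7b") == "llama-7b"
```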
Brandon McClure
acf24ebb49
Whisper_stt params for model, language, and auto_submit (#3031) 2023-07-07 20:54:53 -03:00
oobabooga
79679b3cfd Pin fastapi version (for #3042) 2023-07-07 16:40:57 -07:00
oobabooga
b6643e5039 Add decode functions to llama.cpp/exllama 2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551 Add truncation to exllama 2023-07-07 09:09:23 -07:00
oobabooga
c21b73ff37 Minor change to ui.py 2023-07-07 09:09:14 -07:00
oobabooga
de994331a4 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3 Block a Cloudflare request 2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78
Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits (#3036) 2023-07-07 02:24:07 -03:00
Fernando Tarin Morales
1f540fa4f8
Added the format needed to finetune Vicuna 1.1 models (#3037) 2023-07-07 02:22:39 -03:00
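For reference, the widely used Vicuna 1.1 conversation template looks roughly like the sketch below; the system preamble wording varies between releases, and this is the general format rather than the exact file added in #3037.

```python
# Sketch of the common Vicuna 1.1 prompt layout (system preamble wording may vary).
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")


def format_vicuna_v11(user_message: str) -> str:
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"
```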
Xiaojian "JJ" Deng
ff45317032
Update models.py (#3020)
Hopefully fixes the error "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00