Commit Graph

  • 91a242d3d4 +Godzilla, +WizardLM-V1.1, +rwkv 8k, +wizard-mega fix </s> Matthew Ashton 2023-07-10 17:05:36 -0400
  • f98db4e7e2 fix bugs in completion streaming Matthew Ashton 2023-07-10 16:29:13 -0400
  • e38f5c63f3 Bump fastapi from 0.95.2 to 0.100.0 dependabot[bot] 2023-07-10 20:26:40 +0000
  • 348009bb3d Bump gradio from 3.33.1 to 3.36.1 dependabot[bot] 2023-07-10 20:26:35 +0000
  • aeb8ec262a Bump gradio-client from 0.2.5 to 0.2.8 dependabot[bot] 2023-07-10 20:26:30 +0000
  • 2bf6437c1f Bump llama-cpp-python from 0.1.69 to 0.1.70 dependabot[bot] 2023-07-10 20:26:21 +0000
  • 7daec28b72 Bump exllama module version jllllll 2023-07-10 14:59:49 -0500
  • 5ff62de8f3 +moderations, custom_stopping_strings, more fixes Matthew Ashton 2023-07-10 13:31:58 -0400
  • 16685d6b5d Bump cpp llama version ofirkris 2023-07-10 18:38:01 +0300
  • b91c3bf3d2 Merge branch 'oobabooga:main' into llamacpp_lowvram Gabriel Pena 2023-07-10 12:14:28 -0300
  • 289c1212d7 respect model dir for downloads (#3077) micsthepick 2023-07-09 20:23:45 -0700
  • 5bf75d2ca3 Merge branch 'oobabooga:main' into api-instruct-fixes atriantafy 2023-07-10 13:03:41 +0100
  • ecfbc40907 Added min_char FPHam 2023-07-10 02:11:44 -0400
  • ea9e72da47 Update download-model.py FartyPants 2023-07-10 02:06:59 -0400
  • 0d83cb4018 Add perplexity_colors extension SeanScripts 2023-07-09 21:54:38 -0500
  • 97f4a4ddb8 Add low vram mode on llama cpp gabriel-pena 2023-07-09 23:05:55 -0300
  • dbfef170e4 Merge branch 'main' of https://github.com/SeanScripts/text-generation-webui SeanScripts 2023-07-09 20:34:41 -0500
  • fbc42be8dd Add perplexity_colors extension SeanScripts 2023-07-09 20:34:26 -0500
  • 40bd01a96a Merge branch 'oobabooga:main' into pull_branch FartyPants 2023-07-09 18:08:54 -0400
  • 1bce294d6a Merge branch 'oobabooga:main' into main FartyPants 2023-07-09 18:08:46 -0400
  • 0efe773847 update rwkv tc 2023-07-09 14:42:08 -0700
  • c7058afb40 Add new possible bin file name regex (#3070) tianchen zhong 2023-07-09 13:22:56 -0700
  • 161d984e80 Bump llama-cpp-python version (#3072) ofirkris 2023-07-09 23:22:24 +0300
  • 1efa4960c0 Bump llama-cpp-python version ofirkris 2023-07-09 23:16:12 +0300
  • fa53a90592 Add new possible bin file name regex tianchen zhong 2023-07-09 11:46:24 -0700
  • 70b5870126 Add token authorization for downloading model fahadh4ilyas 2023-07-09 22:32:50 +0700
  • 463aac2d65 [Added] google_translate activate param (#2961) Salvador E. Tropea 2023-07-09 01:08:20 -0300
  • 74ea7522a0 Lora fixes for AutoGPTQ (#2818) Forkoz 2023-07-09 04:03:43 +0000
  • 88a415ea22 Merge branch 'main' into patch-2 oobabooga 2023-07-09 01:02:35 -0300
  • e3eece6828 Trying to evaluate oobabooga 2023-07-08 20:31:00 -0700
  • 76c765c800 Handle the cache properly oobabooga 2023-07-08 19:37:02 -0700
  • 70b088843d fix for issue #2475: Streaming api deadlock (#3048) Chris Rude 2023-07-08 19:21:20 -0700
  • db72ba29ef Fix a basic bug oobabooga 2023-07-08 19:20:10 -0700
  • 904f31b6a3 Make it functional oobabooga 2023-07-08 19:06:29 -0700
  • 4b4d585c35 Sort imports oobabooga 2023-07-08 18:02:03 -0700
  • 7785dff55f Create a draft oobabooga 2023-07-08 17:50:50 -0700
  • ccdce94aa0 removed testing print FPHam 2023-07-08 15:11:28 -0400
  • 7757755b61 More robust training FPHam 2023-07-08 14:49:53 -0400
  • 4636201998 Merge branch 'oobabooga:main' into pull_branch FartyPants 2023-07-08 14:45:33 -0400
  • e815a22e62 Merge branch 'oobabooga:main' into main FartyPants 2023-07-08 14:45:26 -0400
  • a29ce674b0 EOS change FPHam 2023-07-08 14:44:08 -0400
  • 012b65ced4 EOS in training FPHam 2023-07-08 14:31:26 -0400
  • 5ac4e4da8b Make --model work with argument like models/folder_name oobabooga 2023-07-08 10:22:54 -0700
  • a298842a39 missing import os for images Matthew Ashton 2023-07-08 12:25:04 -0400
  • 4b492fe9b0 Chat history download creates detailed file names Unskilled Wolf 2023-07-08 16:18:13 +0200
  • d7e5c995bd fixups Matthew Ashton 2023-07-08 05:30:56 -0400
  • f5caabedad remove unused import added during earlier changes Chris Rude 2023-07-08 01:50:47 -0700
  • c2b7b331fb total reorg & cleanup. Matthew Ashton 2023-07-08 04:30:48 -0400
  • 0724ad435a Merge branch 'oobabooga:main' into pull_branch FartyPants 2023-07-08 04:22:56 -0400
  • 55d0decbc4 Merge branch 'oobabooga:main' into main FartyPants 2023-07-08 04:22:43 -0400
  • 6da93f73d6 Added EOS FPHam 2023-07-08 04:20:50 -0400
  • ec407da9e8 actually, the blocking API was fine all along Chris Rude 2023-07-08 00:51:26 -0700
  • fe9c540724 fix for static api Chris Rude 2023-07-08 00:42:05 -0700
  • 8175eb8d13 Delete 0001-fix-for-issue-2475-Streaming-api-deadlock.patch Chris Rude 2023-07-08 00:35:45 -0700
  • 1353394704 update to fix same problem for blocking api Chris Rude 2023-07-08 00:33:55 -0700
  • f44f52464c use underscore notation for private methods Chris Rude 2023-07-08 00:25:15 -0700
  • 0e3245cc24 fix for issue #2475: Streaming api deadlock Chris Rude 2023-07-08 00:19:39 -0700
  • ff3997390e update tc 2023-07-07 19:03:25 -0700
  • 9be0491266 add internlm tc 2023-07-07 19:02:23 -0700
  • acf24ebb49 Whisper_stt params for model, language, and auto_submit (#3031) Brandon McClure 2023-07-07 17:54:53 -0600
  • 79679b3cfd Pin fastapi version (for #3042) oobabooga 2023-07-07 16:40:57 -0700
  • bb79037ebd Fix wrong pytorch version on Linux+CPU oobabooga 2023-07-07 20:40:31 -0300
  • 1fb6fa9ec8 also download gptq tc 2023-07-07 15:25:49 -0700
  • 918c401b29 update llama version tc 2023-07-07 15:18:31 -0700
  • b24ec70fe5 many openai updates Matthew Ashton 2023-07-07 14:10:29 -0400
  • 564a8c507f Don't launch chat mode by default oobabooga 2023-07-07 13:32:11 -0300
  • b6643e5039 Add decode functions to llama.cpp/exllama oobabooga 2023-07-07 09:11:30 -0700
  • 1ba2e88551 Add truncation to exllama oobabooga 2023-07-07 09:09:23 -0700
  • c21b73ff37 Minor change to ui.py oobabooga 2023-07-07 09:09:14 -0700
  • a43680c738 Remove the CFG extension. Morgan Schweers 2023-07-07 08:53:45 -0700
  • de994331a4 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-07-06 22:25:43 -0700
  • 9aee1064a3 Block a cloudfare request oobabooga 2023-07-06 22:24:52 -0700
  • d7e14e1f78 Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits (#3036) Fernando Tarin Morales 2023-07-07 14:24:07 +0900
  • 1f540fa4f8 Added the format to be able to finetune Vicuna1.1 models (#3037) Fernando Tarin Morales 2023-07-07 14:22:39 +0900
  • 6c10ac832f Added the format to be able to finetune Vicuna1.1 models Nan-Do 2023-07-07 12:15:12 +0900
  • 2cd8b49692 Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits Nan-Do 2023-07-07 12:07:21 +0900
  • 4d46981581 Fixed the tokenization process of a raw dataset and improved its efficiency Nan-Do 2023-07-07 11:59:12 +0900
  • a09530150d Fixed the tokenization process of a raw dataset and improved its efficiency Nan-Do 2023-07-07 11:31:31 +0900
  • afe62d59af Revert "Fixed the tokenization process of a raw dataset and improved its efficiency" Nan-Do 2023-07-07 11:28:00 +0900
  • 5b51813841 Fixed the tokenization process of a raw dataset and improved its efficiency Nan-Do 2023-07-07 11:18:58 +0900
  • 1660751eef Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits Nan-Do 2023-07-07 10:46:18 +0900
  • 0fb65fd709 Revert "Fixes and Improvements to the LoRA training." Nan-Do 2023-07-07 10:38:04 +0900
  • 05f70ef552 start readme with how to change default settings Brandon McClure 2023-07-06 19:18:15 -0600
  • 8ab3636603 surface the language, model and auto_submit settings as params Move the UI around to collapse the settings by default Brandon McClure 2023-07-06 19:17:07 -0600
  • 48dce8cd4d Add support for logits processors in extensions and negative prompts. Morgan Schweers 2023-07-06 14:34:25 -0700
  • 57a7f047ad update training fix FPHam 2023-07-06 01:50:56 -0400
  • 366c722226 Resolved merge conflict. FPHam 2023-07-06 01:45:33 -0400
  • b018b07982 Merge branch 'oobabooga:main' into pull_branch FartyPants 2023-07-06 01:42:31 -0400
  • 74fecb9d17 +superplatty Matthew Ashton 2023-07-04 00:24:05 -0400
  • 4d8390f568 +wizardcoder Matthew Ashton 2023-07-02 11:36:49 -0400
  • b3fc9adda3 +longchat, +vicuna-33b, +Redmond-Hermes-Coder Matthew Ashton 2023-07-01 20:36:26 -0400
  • 89cc9274ce +platypus/gplatty Matthew Ashton 2023-06-29 10:57:06 -0400
  • eedd9eb6fc Merge branch 'oobabooga:main' into 8k_loras_fixes matatonic 2023-07-06 01:04:30 -0400
  • 0b2c393b6c update tokenizer tcz 2023-07-05 19:24:40 -0700
  • ff45317032 Update models.py (#3020) Xiaojian "JJ" Deng 2023-07-05 20:40:43 -0400
  • 7c37b82362 Modified instructions for clarity in superbooga search input CG 2023-07-05 14:53:03 -0700
  • 0480d70bfd updated superbooga with search integration and semantic filtering CG 2023-07-05 14:48:34 -0700
  • b554d094da Update models.py Xiaojian "JJ" Deng 2023-07-05 17:08:48 -0400
  • ab4ca9a3dd Add new feature: Enable search engine integration in script.py CG 2023-07-05 13:10:58 -0700
  • 9a2c1b5262 Merge branch 'oobabooga:main' into change-gradio-FormComponent-to-IOComponent Ricardo Pinto 2023-07-05 17:50:02 +0100