Commit Graph

  • f052ab9c8f Fix setting pre_layer from within the ui oobabooga 2023-05-17 23:17:44 -0300
  • 5d20e4d976 Introduce docker repo Atinoda 2023-05-17 21:02:14 +0100
  • b667ffa51d Simplify GPTQ_loader.py oobabooga 2023-05-17 16:22:56 -0300
  • ef10ffc6b4 Add various checks to model loading functions oobabooga 2023-05-17 15:52:23 -0300
  • 452abccc1a Mitigate UnboundLocalError Konstantin Gukov 2023-05-17 16:51:32 +0200
  • abd361b3a0 Minor change oobabooga 2023-05-17 11:33:43 -0300
  • 21ecc3701e Avoid a name conflict oobabooga 2023-05-17 11:23:13 -0300
  • fb91c07191 Minor bug fix oobabooga 2023-05-17 11:16:37 -0300
  • 1a8151a2b6 Add AutoGPTQ support (basic) (#2132) oobabooga 2023-05-17 11:12:12 -0300
  • 7afbf41f54 Add AutoGPTQ support (basic) oobabooga 2023-05-17 11:08:33 -0300
  • 10cf7831f7 Update Extensions.md oobabooga 2023-05-17 10:45:29 -0300
  • 1f50dbe352 Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) Alex "mcmonkey" Goodwin 2023-05-17 06:41:09 -0700
  • fd743a0207 Small change oobabooga 2023-05-17 02:34:29 -0300
  • aeb1b7a9c5 feature to save prompts with custom names (#1583) LoopLooter 2023-05-17 08:30:45 +0300
  • 48543eddd1 Small changes oobabooga 2023-05-17 02:29:23 -0300
  • 1cd3083a09 Small change oobabooga 2023-05-17 02:27:56 -0300
  • 59eb02f218 Style changes oobabooga 2023-05-17 02:26:39 -0300
  • c3286d29c0 Merge branch 'main' into LoopLooter-save-prompts-with-custom-names oobabooga 2023-05-17 02:20:33 -0300
  • c9c6aa2b6e Update docs/Extensions.md oobabooga 2023-05-17 02:04:37 -0300
  • 85f74961f9 Update "Interface mode" tab oobabooga 2023-05-17 01:57:51 -0300
  • 9e558cba9b Update docs/Extensions.md oobabooga 2023-05-17 01:43:32 -0300
  • 687f21f965 Update docs/Extensions.md oobabooga 2023-05-17 01:41:01 -0300
  • 8f85d84e08 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-05-17 01:32:42 -0300
  • ce21804ec7 Allow extensions to define a new tab oobabooga 2023-05-17 01:25:01 -0300
  • acf3dbbcc5 Allow extensions to have custom display_name (#1242) ye7iaserag 2023-05-17 07:08:22 +0300
  • c4f7e12676 Merge branch 'main' into main oobabooga 2023-05-17 01:07:55 -0300
  • ad0b71af11 Add missing file oobabooga 2023-05-17 00:37:34 -0300
  • a84f499718 Allow extensions to define custom CSS and JS oobabooga 2023-05-17 00:03:39 -0300
  • 824fa8fc0e Attempt at making interface restart more robust oobabooga 2023-05-16 22:27:43 -0300
  • 259020a0be Bump gradio to 3.31.0 oobabooga 2023-05-16 22:21:05 -0300
  • 458a627ab9 fix: elevenlabs cloned voices do not show up in webui after entering API key (#2107) pixel 2023-05-16 17:21:36 -0600
  • 01869ad122 convert api key update into function pixel 2023-05-16 16:56:02 -0600
  • 7584d46c29 Refactor models.py (#2113) oobabooga 2023-05-16 19:52:22 -0300
  • 97a1221ba4 Minor changes oobabooga 2023-05-16 19:46:34 -0300
  • 582630d8b3 Fix deepspeed instructions oobabooga 2023-05-16 19:41:04 -0300
  • 65f509f6e6 Minor change oobabooga 2023-05-16 19:33:40 -0300
  • 6f00f95deb Minor change oobabooga 2023-05-16 19:27:46 -0300
  • 8459e2c263 Refactor model loading oobabooga 2023-05-16 19:20:51 -0300
  • c3b42556b2 Refactor models.py oobabooga 2023-05-16 19:19:41 -0300
  • d69b86a028 Use nargs="+" oobabooga 2023-05-16 18:39:01 -0300
  • 7804316fda wip oobabooga 2023-05-16 18:30:47 -0300
  • b6e486f7a0 Slight change to the parameters tab oobabooga 2023-05-16 18:22:35 -0300
  • 5cd6dd4287 Fix no-mmap bug oobabooga 2023-05-16 17:35:49 -0300
  • 89e37626ab Reorganize chat settings tab oobabooga 2023-05-16 17:22:59 -0300
  • 9734851fb5 fix: elevenlabs api key not sent after changed in ui pixel 2023-05-16 12:53:49 -0600
  • 76747416ca hard_cut_string Alex "mcmonkey" Goodwin 2023-05-16 11:37:51 -0700
  • a62d07d0db add an error check for bad overlap/cutoff length Alex "mcmonkey" Goodwin 2023-05-16 11:09:22 -0700
  • d205ec9706 Fix Training fails when evaluation dataset is selected (#2099) Forkoz 2023-05-16 16:40:19 +0000
  • 428261eede fix: elevenlabs removed the need for the api key for refreshing voices (#2097) Orbitoid 2023-05-17 02:34:49 +1000
  • d26ed8287c Fix the model search with the .pt change LaaZa 2023-05-16 15:01:03 +0300
  • baec119a2f Experimental jank multiGPU inference that's 2x faster than native somehow Alex "mcmonkey" Goodwin 2023-05-16 03:59:21 -0700
  • 8a577ff807 Fix Training fails when evaluation dataset is selected Forkoz 2023-05-16 10:56:05 +0000
  • 8b1b368261 fix: elevenlabs removed the need for the api key for refreshing voices Orbitoid 2023-05-16 19:26:39 +1000
  • 37e54ead69 Add superbooga Instructor-based embedder toast22a 2023-05-16 16:54:03 +0800
  • 233e0dc329 Add superbooga option to set embedder model in settings.json toast22a 2023-05-16 16:41:30 +0800
  • a3f6ec975c Merge branch 'oobabooga:main' into AutoGPTQ LaaZa 2023-05-16 04:59:10 +0000
  • cd9be4c2ba Update llama.cpp-models.md oobabooga 2023-05-16 00:49:32 -0300
  • 26cf8c2545 add api port options (#1990) atriantafy 2023-05-16 00:44:16 +0100
  • e657dd342d Add in-memory cache support for llama.cpp (#1936) Andrei 2023-05-15 19:19:55 -0400
  • 0227e738ed Add settings UI for llama.cpp and fixed reloading of llama.cpp models (#2087) Jakub Strnad 2023-05-16 00:51:23 +0200
  • e13a95e93b Minor changes oobabooga 2023-05-15 19:50:18 -0300
  • 10869de0f4 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-05-15 19:39:48 -0300
  • c07215cc08 Improve the default Assistant character oobabooga 2023-05-15 19:39:08 -0300
  • 4e66f68115 Create get_max_memory_dict() function oobabooga 2023-05-15 19:38:27 -0300
  • 5cf70a3830 Merge branch 'main' of https://github.com/oobabooga/text-generation-webui into abetlen/add-llama-cache Andrei Betlen 2023-05-15 18:29:06 -0400
  • d49338f9f1 Update cache_capacity parsing Andrei Betlen 2023-05-15 18:25:24 -0400
  • ae54d83455 Bump transformers from 4.28.1 to 4.29.1 (#2089) dependabot[bot] 2023-05-15 19:25:24 -0300
  • b44fd4c5bf Merge remote-tracking branch 'origin/AutoGPTQ' into AutoGPTQ LaaZa 2023-05-16 00:54:04 +0300
  • 74d68a4e9a Support .pt models LaaZa 2023-05-16 00:53:10 +0300
  • 7d11e52e45 Bump gradio from 3.25.0 to 3.30.0 dependabot[bot] 2023-05-15 21:03:51 +0000
  • f77c8addd8 Bump gradio-client from 0.1.4 to 0.2.4 dependabot[bot] 2023-05-15 21:03:38 +0000
  • 6b4509a1a4 Bump transformers from 4.28.1 to 4.29.1 dependabot[bot] 2023-05-15 21:03:04 +0000
  • 4b8c006c06 add UI controls for llama.cpp settings strnad 2023-05-15 22:36:41 +0200
  • 6ab5bd5be4 fix (re/un)loading of llama.cpp models strnad 2023-05-15 22:35:43 +0200
  • 7378332246 Merge branch 'oobabooga:main' into AutoGPTQ LaaZa 2023-05-15 17:41:34 +0000
  • f576992f15 Fix missing return type hint in apply_time_weight_to_distances toast22a 2023-05-16 01:23:06 +0800
  • f353512517 Add superbooga time weighted history retrieval toast22a 2023-05-15 23:16:43 +0800
  • 071f0776ad Add llama.cpp GPU offload option (#2060) AlphaAtlas 2023-05-14 21:58:11 -0400
  • 06e79d03b5 Add docs oobabooga 2023-05-14 22:51:36 -0300
  • eee986348c Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058) feeelX 2023-05-15 03:41:14 +0200
  • 897fa60069 Sort selected superbooga chunks by insertion order oobabooga 2023-05-14 22:19:29 -0300
  • b07f849e41 Add superbooga chunk separator option (#2051) Luis Lopez 2023-05-15 08:44:52 +0800
  • 61b058cb95 Change argument in modules/shared.py AlphaAtlas 2023-05-14 17:45:24 -0400
  • cd6138d104 Change argument in modules/llamacpp_model.py AlphaAtlas 2023-05-14 17:45:10 -0400
  • 31f4ae29f5 Fix typo (double h in https) feeelX 2023-05-14 22:15:38 +0200
  • 6c7dea2760 Add argument to shared.py AlphaAtlas 2023-05-14 14:17:39 -0400
  • 3d56218a19 Add gpu layers to llamacpp_model.py AlphaAtlas 2023-05-14 14:13:05 -0400
  • 96c2a6aaf4 Merge branch 'oobabooga:main' into AutoGPTQ LaaZa 2023-05-14 17:39:16 +0000
  • 82b66ffda2 Update llama-cpp-python from 0.1.45 to 0.1.50 feeelX 2023-05-14 19:01:21 +0200
  • ab08cf6465 [extensions/openai] clip extra leading space (#2042) matatonic 2023-05-14 11:57:52 -0400
  • 4ee76e6bfb Merge remote-tracking branch 'origin/AutoGPTQ' into AutoGPTQ LaaZa 2023-05-14 18:29:01 +0300
  • 78e56de0fa Update for AutoGPTQ LaaZa 2023-05-14 18:27:29 +0300
  • c631d9fe7e Separator-split chunks are split again if longer than chunk length toast22a 2023-05-14 22:44:41 +0800
  • b6fcf1c66b Update README oobabooga 2023-05-14 11:09:33 -0300
  • 3e06b58300 Add --cache-capacity argument oobabooga 2023-05-14 11:05:15 -0300
  • 3b886f9c9f Add chat-instruct mode (#2049) oobabooga 2023-05-14 10:43:55 -0300
  • 24d1d32ea3 Generalize the chat-instruct command oobabooga 2023-05-14 10:41:22 -0300
  • a8d59efd9b Add superbooga chunk separator option toast22a 2023-05-14 17:02:26 +0800
  • ec90ad5168 Change some info messages oobabooga 2023-05-14 02:45:34 -0300
  • 610ec43597 Allow character and instruction-character to be loaded simultaneously oobabooga 2023-05-14 02:17:16 -0300