Commit Graph

  • 49c10c5570 Add support for the latest GPTQ models with group-size (#530) oobabooga 2023-03-26 00:11:33 -0300
  • 071d006457 Merge branch 'main' into gptq-group-size oobabooga 2023-03-26 00:09:52 -0300
  • e93a5f156e Removes extra print wario 2023-03-25 21:22:27 -0400
  • 42d88baa1e Initial Public Release wario 2023-03-25 20:25:40 -0400
  • 0bac80d9eb Potential fix for issues/571 Sean Fitzgerald 2023-03-25 13:08:45 -0700
  • f1ba2196b1 make 'model' variables less ambiguous Alex "mcmonkey" Goodwin 2023-03-25 12:57:36 -0700
  • 8da237223e document options better Alex "mcmonkey" Goodwin 2023-03-25 12:48:35 -0700
  • 8134c4b334 add training/datsets to gitignore for #570 Alex "mcmonkey" Goodwin 2023-03-25 12:41:18 -0700
  • 5c49a0dcd0 fix error from prepare call running twice in a row Alex "mcmonkey" Goodwin 2023-03-25 12:37:32 -0700
  • 7bf601107c automatically strip empty data entries (for better alpaca dataset compat) Alex "mcmonkey" Goodwin 2023-03-25 12:28:46 -0700
  • 566898a79a initial lora training tab Alex "mcmonkey" Goodwin 2023-03-25 12:08:26 -0700
  • 1a1e420e65 Silero_tts streaming fix Φφ 2023-03-25 21:31:13 +0300
  • b60604ac63 Add output to let user know character is being loaded Brian O'Connor 2023-03-25 13:29:51 -0400
  • c72c2c1534 Default character value in dropdown Brian O'Connor 2023-03-25 13:28:25 -0400
  • 9ccf505ccd improve/simplify gitignore Alex "mcmonkey" Goodwin 2023-03-25 10:04:00 -0700
  • 456479c060 should be pip3 loeken 2023-03-25 17:36:03 +0100
  • 8c8e8b4450 Fix the early stopping callback #559 oobabooga 2023-03-25 12:35:52 -0300
  • a1f12d607f Merge pull request #538 from Ph0rk0z/display-input-context oobabooga 2023-03-25 11:56:18 -0300
  • f793ed2357 Update shared.py oobabooga 2023-03-25 11:51:05 -0300
  • 58506a9534 Update README.md oobabooga 2023-03-25 11:50:30 -0300
  • a47c6e736e Update README.md oobabooga 2023-03-25 11:42:55 -0300
  • 297b1f79e5 Prevent context from being overwritten Brian O'Connor 2023-03-25 09:57:48 -0400
  • 577a9e92ef Revert "Merge branch 'main' into workarround-for-port-not-freeing" catalpaaa 2023-03-25 05:47:03 -0700
  • 932cf8152f Merge branch 'main' into workarround-for-port-not-freeing catalpaaa 2023-03-25 05:36:53 -0700
  • 6f1eec45a3 add --wait-before-restart to allow gradio to have time to free port catalpaaa 2023-03-25 05:34:14 -0700
  • d2fb6e5929 Merge pull request #2 from catalpaaa/lora-and-model-dir catalpaaa 2023-03-25 05:31:20 -0700
  • 9bd51b8dea Merge pull request #1 from loeken/test_windows loeken 2023-03-25 12:26:23 +0100
  • f6a03795c8 moved GPTQ_SHA into build arguments in docker-compose/.env file loeken 2023-03-25 12:25:12 +0100
  • b893ee23ce testing commit suggested by deece to allow versions below 6 to continue working loeken 2023-03-25 12:13:30 +0100
  • 73415d4a82 passing GPTQ_SHA dynamic loeken 2023-03-25 12:12:09 +0100
  • d739088abe testing on windows port 7860 seems to be in use already loeken 2023-03-25 11:56:22 +0100
  • 34fe72c176 missing newline loeken 2023-03-25 11:45:02 +0100
  • c895ac95de added my example for 6GB of vram, which runs a bit faster then the variant with gptq-pre-layer 20 loeken 2023-03-25 11:38:35 +0100
  • 4f59726584 didnt save file and add to commit loeken 2023-03-25 11:34:11 +0100
  • 7222bcf12f moved configuration options to .env file so it's easier for the end user to edit their configurables in one place and not be confused by the docker-compose/Dockerfile structure loeken 2023-03-25 11:29:42 +0100
  • f740ee558c Merge branch 'oobabooga:main' into lora-and-model-dir catalpaaa 2023-03-25 01:28:33 -0700
  • ce9a5e3b53 Update install.bat jllllll 2023-03-25 02:22:02 -0500
  • 2e02d42682 Changed things around to allow Micromamba to work with paths containing spaces. jllllll 2023-03-25 01:14:29 -0500
  • 70f9565f37 Update README.md oobabooga 2023-03-25 02:35:30 -0300
  • 44f930f041 Fix loading/saving of character logs Brian O'Connor 2023-03-25 00:19:21 -0400
  • 25be9698c7 Fix LoRA on mps oobabooga 2023-03-25 01:18:32 -0300
  • 26fcc624f9 Better recognize the model type by the model name oobabooga 2023-03-25 01:03:02 -0300
  • b79708ab26 Update GPTQ_loader.py oobabooga 2023-03-25 00:46:20 -0300
  • 98a1d5f3ed Update GPTQ_loader.py oobabooga 2023-03-25 00:37:24 -0300
  • 558e7dbda6 Update GPTQ_loader.py oobabooga 2023-03-25 00:36:57 -0300
  • ee58c5f158 Remove gptq-group-size oobabooga 2023-03-25 00:30:08 -0300
  • 2aac1fb9d2 Merge main oobabooga 2023-03-25 00:28:29 -0300
  • bf1eeb50bb Update GPTQ_loader.py oobabooga 2023-03-25 00:25:24 -0300
  • 8be8e6d381 Update shared.py oobabooga 2023-03-25 00:24:38 -0300
  • cc3a4e7acb Load default character Brian O'Connor 2023-03-24 23:11:38 -0400
  • 3da633a497 Merge pull request #529 from EyeDeck/main oobabooga 2023-03-24 23:51:01 -0300
  • 32cd298c44 Add --load-character parameter to shared.py Brian O'Connor 2023-03-24 22:50:27 -0400
  • 1e260544cd Update install.bat jllllll 2023-03-24 21:25:14 -0500
  • d51cb8292b Update server.py catalpaaa 2023-03-24 17:36:31 -0700
  • 9e2963e0c8 Update server.py catalpaaa 2023-03-24 17:35:45 -0700
  • ec2a1facee Update server.py catalpaaa 2023-03-24 17:34:33 -0700
  • b37c54edcf lora-dir, model-dir and login auth catalpaaa 2023-03-24 17:30:18 -0700
  • 2142f4bdc4 Merge pull request #1 from oobabooga/main catalpaaa 2023-03-24 16:47:25 -0700
  • da4a214434 defaults to <4GB vram required loeken 2023-03-25 00:31:50 +0100
  • d5ad4da6bf keeping old order loeken 2023-03-25 00:30:57 +0100
  • 5dd9208a69 Update GPTQ_loader.py oobabooga 2023-03-24 20:30:23 -0300
  • fa916aa1de Update INSTRUCTIONS.txt jllllll 2023-03-24 18:28:46 -0500
  • 586775ad47 Update download-model.bat jllllll 2023-03-24 18:25:49 -0500
  • bddbc2f898 Update start-webui.bat jllllll 2023-03-24 18:19:23 -0500
  • b6d5db59db testing on newly installed system loeken 2023-03-25 00:17:44 +0100
  • 2604e3f7ac Update download-model.bat jllllll 2023-03-24 18:15:24 -0500
  • 97710177fc Update shared.py oobabooga 2023-03-24 20:12:28 -0300
  • 24870e51ed Update micromamba-cmd.bat jllllll 2023-03-24 18:12:02 -0500
  • f0c82f06c3 Add files via upload jllllll 2023-03-24 18:09:44 -0500
  • de7fca3c4e Dockerfile tested loeken 2023-03-25 00:07:23 +0100
  • 4b9d45b3af Update models.py oobabooga 2023-03-24 19:58:42 -0300
  • b865a41fa2 Update GPTQ_loader.py oobabooga 2023-03-24 19:58:12 -0300
  • 9fa47c0eed Revert GPTQ_loader.py (accident) oobabooga 2023-03-24 19:57:12 -0300
  • a6bf54739c Revert models.py (accident) oobabooga 2023-03-24 19:56:45 -0300
  • fe751037e7 Merge branch 'main' of github.com:loeken/text-generation-webui loeken 2023-03-24 23:55:37 +0100
  • 7a1280f64b poc loeken 2023-03-24 23:55:27 +0100
  • eec773b1f4 Update install.bat jllllll 2023-03-24 17:54:47 -0500
  • 0a16224451 Update GPTQ_loader.py oobabooga 2023-03-24 19:54:36 -0300
  • a80aa65986 Update models.py oobabooga 2023-03-24 19:53:20 -0300
  • 40e0cab2cb Update shared.py oobabooga 2023-03-24 19:53:06 -0300
  • 817e6c681e Update install.bat jllllll 2023-03-24 17:51:13 -0500
  • 04eb089216 Update GPTQ_loader.py oobabooga 2023-03-24 19:34:21 -0300
  • aabf07271b Update shared.py oobabooga 2023-03-24 19:32:24 -0300
  • a80a5465f2 Update install.bat jllllll 2023-03-24 17:27:29 -0500
  • 507db0929d Do not use empty user messages in chat mode oobabooga 2023-03-24 17:22:22 -0300
  • 6e1b16c2aa Update html_generator.py oobabooga 2023-03-24 17:18:27 -0300
  • ffb0187e83 Update chat.py oobabooga 2023-03-24 17:17:29 -0300
  • c14e598f14 Merge pull request #433 from mayaeary/fix/api-reload oobabooga 2023-03-24 16:56:10 -0300
  • bfe960731f Merge branch 'main' into fix/api-reload oobabooga 2023-03-24 16:54:41 -0300
  • 4a724ed22f Reorder imports oobabooga 2023-03-24 16:53:56 -0300
  • 8fad84abc2 Update extensions.py oobabooga 2023-03-24 16:51:27 -0300
  • d8e950d6bd Don't load the model twice when using --lora oobabooga 2023-03-24 16:30:32 -0300
  • fd99995b01 Make the Stop button more consistent in chat mode oobabooga 2023-03-24 15:59:27 -0300
  • 35745d6a04 Merge branch 'main' of github.com:loeken/text-generation-webui loeken 2023-03-24 19:04:55 +0100
  • b740c5b284 Add display of context when input was generated Forkoz 2023-03-24 08:56:07 -0500
  • 4f5c2ce785 Fix chat_generation_attempts oobabooga 2023-03-24 02:03:30 -0300
  • 04417b658b Update README.md oobabooga 2023-03-24 01:40:43 -0300
  • c7598a0549 Add a default prompt for alpaca models oobabooga 2023-03-24 01:17:55 -0300
  • 5fa482a21b Update server.py oobabooga 2023-03-24 01:17:18 -0300
  • f4b35ee0ef Fix offloading oobabooga 2023-03-24 01:12:09 -0300