Commit Graph

  • 4e9937aa99 Bump gradio oobabooga 2023-06-05 17:29:21 -0300
  • a0c17ba434 adjust defaults & add autogptq to example Matthew Ashton 2023-06-05 16:25:20 -0400
  • 53496ffa80 Create stale.yml oobabooga 2023-06-05 17:15:31 -0300
  • 0377e385e0 Update .gitignore (#2504) pandego 2023-06-05 22:11:03 +0200
  • 60bfd0b722 Merge pull request #2535 from oobabooga/dev oobabooga 2023-06-05 17:07:54 -0300
  • eda224c92d Update README oobabooga 2023-06-05 17:04:09 -0300
  • bef94b9ebb Update README oobabooga 2023-06-05 17:01:13 -0300
  • 91c9343251 typo Matthew Ashton 2023-06-05 15:59:44 -0400
  • 3f71c24782 request_param type fixes Matthew Ashton 2023-06-05 15:01:35 -0400
  • 96aff05f53 "type" -> error_type Matthew Ashton 2023-06-05 14:57:55 -0400
  • 99d701994a Update GPTQ-models-(4-bit-mode).md oobabooga 2023-06-05 15:55:00 -0300
  • f276d88546 Use AutoGPTQ by default for GPTQ models oobabooga 2023-06-05 15:41:48 -0300
  • 632571a009 Update README oobabooga 2023-06-05 15:16:06 -0300
  • cceed2e1bf improve error handling Fixes #2505 Matthew Ashton 2023-06-05 13:32:17 -0400
  • f1939465c2 improve readme Thanks @uogbuji Matthew Ashton 2023-06-05 13:10:34 -0400
  • 3e6e8c87c5 openai_fixes Matthew Ashton 2023-06-05 12:38:31 -0400
  • df6d2d738f partially revert chunk length change Matthew Ashton 2023-06-05 11:22:22 -0400
  • 6a75bda419 Assign some 4096 seq lengths oobabooga 2023-06-05 12:07:52 -0300
  • 9b0e95abeb Fix "regenerate" when "Start reply with" is set oobabooga 2023-06-05 11:56:03 -0300
  • e61316ce0b Detect airoboros and Nous-Hermes oobabooga 2023-06-05 11:52:13 -0300
  • d31dfd9b94 Delete txt2format.py Orion 2023-06-05 18:06:53 +0800
  • 49c3bfd12a Mention the launch parameter in the README, for the user's sake Uche Ogbuji 2023-06-04 20:25:21 -0600
  • 6acc7d0037 Should at least help address https://github.com/oobabooga/text-generation-webui/issues/2505 Uche Ogbuji 2023-06-04 20:09:27 -0600
  • 6fd5c00249 Add --select option to download-model.py, and also do some code cleanup in it, including avoiding shadowing builtins (i.e. dict & bytes) Uche Ogbuji 2023-06-03 14:27:57 -0600
  • f6d0ab2e01 Update .gitignore pandego 2023-06-03 15:26:32 +0200
  • d8eeb43fdf no committing history.json orion 2023-06-03 19:56:41 +0800
  • 8f8de2e864 add launch.py to check envs and load history args orion 2023-06-03 19:50:30 +0800
  • 0a0adb328b add command history and packages auto installation orion 2023-06-03 18:29:22 +0800
  • 6416cbe665 Merge branch 'main' of https://github.com/Orion-zhen/text-generation-webui orion 2023-06-03 10:24:05 +0800
  • 19f78684e6 Add "Start reply with" feature to chat mode oobabooga 2023-06-02 13:58:08 -0300
  • f7b07c4705 Fix the missing Chinese character bug (#2497) GralchemOz 2023-06-03 00:45:41 +0800
  • 1d3aa9e1b0 /api/v1/encode add token_texts too yiximail 2023-06-03 00:29:59 +0800
  • d43433071e Unified parameter name yiximail 2023-06-02 23:51:19 +0800
  • 2c87ece191 Add Api /api/v1/encode and /api/v1/decode yiximail 2023-06-02 23:44:37 +0800
  • 11f7856ddc Merge branch 'oobabooga:main' into main GralchemOz 2023-06-02 23:42:25 +0800
  • 28198bc15c Change some headers oobabooga 2023-06-02 11:28:43 -0300
  • 5177cdf634 Change AutoGPTQ info oobabooga 2023-06-02 11:19:44 -0300
  • 8e98633efd Add a description for chat_prompt_size oobabooga 2023-06-02 11:13:22 -0300
  • d1a84a9190 Fix the missing Chinese character bug GralchemOz 2023-06-02 16:05:18 +0800
  • d85f55c3ae Merge branch 'oobabooga:main' into main Orion 2023-06-02 13:51:41 +0800
  • 5a8162a46d Reorganize models tab oobabooga 2023-06-02 02:24:15 -0300
  • d183c7d29e Fix streaming japanese/chinese characters oobabooga 2023-06-02 02:09:52 -0300
  • 5216117a63 Fix MacOS incompatibility in requirements.txt (#2485) jllllll 2023-06-01 23:46:16 -0500
  • 2f6631195a Add desc_act checkbox to the UI oobabooga 2023-06-02 01:45:46 -0300
  • 225c478279 Fix MacOS incompatibility in requirements.txt jllllll 2023-06-01 23:36:42 -0500
  • 9c066601f5 Extend AutoGPTQ support for any GPTQ model (#1668) LaaZa 2023-06-02 07:33:55 +0300
  • f0ef6e5514 Merge branch 'main' into LaaZa-AutoGPTQ oobabooga 2023-06-02 01:28:59 -0300
  • b4ad060c1f Use cuda 11.7 instead of 11.8 oobabooga 2023-06-02 01:04:44 -0300
  • d0aca83b53 Add AutoGPTQ wheels to requirements.txt oobabooga 2023-06-02 00:47:11 -0300
  • 72934a217e Add ngrok shared URL ingress support bobzilladev 2023-05-09 12:44:44 -0400
  • f344ccdddb Add a template for bluemoon oobabooga 2023-06-01 14:42:12 -0300
  • 522b01d051 Grammar oobabooga 2023-06-01 14:05:29 -0300
  • 5540335819 Better way to detect if a model has been downloaded oobabooga 2023-06-01 14:01:19 -0300
  • aa83fc21d4 Update Low-VRAM-guide.md oobabooga 2023-06-01 12:14:27 -0300
  • ee99a87330 Update README.md oobabooga 2023-06-01 12:08:44 -0300
  • a83f9aa65b Update shared.py oobabooga 2023-06-01 12:08:39 -0300
  • 146505a16b Update README.md oobabooga 2023-06-01 12:04:58 -0300
  • 756e3afbcc Update llama.cpp-models.md oobabooga 2023-06-01 12:04:31 -0300
  • 3347395944 Update README.md oobabooga 2023-06-01 12:01:20 -0300
  • 74bf2f05b1 Update llama.cpp-models.md oobabooga 2023-06-01 11:58:33 -0300
  • 90dc8a91ae Update llama.cpp-models.md oobabooga 2023-06-01 11:57:57 -0300
  • aba56de41b Update README.md oobabooga 2023-06-01 11:46:28 -0300
  • c9ac45d4cf Update Using-LoRAs.md oobabooga 2023-06-01 11:34:04 -0300
  • 9aad6d07de Update Using-LoRAs.md oobabooga 2023-06-01 11:32:41 -0300
  • df18ae7d6c Update README.md oobabooga 2023-06-01 11:27:33 -0300
  • ddc293a9a9 In case of error, mark as done to clear progress bar. Morgan Schweers 2023-06-01 02:01:07 -0700
  • 63f068ae4a Add a new button to process last reply with TTS; restore the original output modifier function. Cocktail Boy 2023-06-01 00:23:06 -0700
  • 739f63814c Show download progress on the model screen. Morgan Schweers 2023-05-31 04:11:07 -0700
  • 248ef32358 Print a big message for CPU users oobabooga 2023-06-01 01:38:48 -0300
  • 290a3374e4 Don't download a model during installation oobabooga 2023-06-01 01:20:56 -0300
  • e52b43c934 Update GPTQ-models-(4-bit-mode).md oobabooga 2023-06-01 01:17:13 -0300
  • 1aed2b9e52 Make it possible to download protected HF models from the command line. (#2408) Morgan Schweers 2023-05-31 20:11:21 -0700
  • 99f0c53a39 Update README oobabooga 2023-06-01 00:09:05 -0300
  • 8169d1f2c2 Minor changes oobabooga 2023-06-01 00:07:23 -0300
  • 419c34eca4 Update GPTQ-models-(4-bit-mode).md oobabooga 2023-05-31 23:49:00 -0300
  • 57008a90b6 Bump llama-cpp-python from 0.1.53 to 0.1.56 dependabot[bot] 2023-06-01 02:45:39 +0000
  • 486ddd62df Add tfs and top_a to the API examples oobabooga 2023-05-31 23:44:38 -0300
  • b6c407f51d Don't stream at more than 24 fps oobabooga 2023-05-31 23:41:42 -0300
  • a160230893 Update GPTQ-models-(4-bit-mode).md oobabooga 2023-05-31 23:38:15 -0300
  • 2cdf525d3b Bump llama-cpp-python version oobabooga 2023-05-31 23:29:02 -0300
  • 51c1f92766 Change default voice Cocktail Boy 2023-05-31 17:45:27 -0700
  • 413bc9ea99 Update the Guanaco template as a creative writer. Cocktail Boy 2023-05-31 17:42:14 -0700
  • 2e53caa806 Create LICENSE oobabooga 2023-05-31 16:28:36 -0300
  • dea1bf3d04 Parse g++ version instead of using string matching (#72) Sam 2023-05-31 13:44:36 -0400
  • 97bc7e3fb6 Adds functionality for user to set flags via environment variable (#59) gavin660 2023-05-31 10:43:22 -0700
  • 5405635305 Install pre-compiled wheels for Linux (#74) Sam 2023-05-31 13:41:54 -0400
  • be98e74337 Install older bitsandbytes on older gpus + fix llama-cpp-python issue (#75) jllllll 2023-05-31 12:41:03 -0500
  • 85bd3908f6 Update LLaMA-model.md zaypen 2023-06-01 01:31:36 +0800
  • 2a255d7949 Merge branch 'oobabooga:main' into main Orion 2023-05-31 22:31:23 +0800
  • 412e7a6a96 Update README.md to include missing flags (#2449) jllllll 2023-05-31 09:07:56 -0500
  • 4418262e92 Show download progress on the model screen. Morgan Schweers 2023-05-31 04:11:07 -0700
  • 12121df9a4 Fix the UI version of downloading models. Morgan Schweers 2023-05-31 03:19:04 -0700
  • 7bf4126f45 Make it possible to download protected HF models from the command line. Morgan Schweers 2023-05-28 22:38:27 -0700
  • 3fd83ef679 Update the instruction template for Guanaco model. Cocktail Boy 2023-05-30 22:49:32 -0700
  • d4c23d7257 Modify preprocess function to replace hyphen with semicolon Cocktail Boy 2023-05-30 22:04:04 -0700
  • cc3d9e913a Normalize formatting of api flag additions jllllll 2023-05-30 23:12:43 -0500
  • 301cc1c569 Update README.md to include missing flags jllllll 2023-05-30 23:08:39 -0500
  • b9a5b3f0a4 include fp16 Matthew Ashton 2023-05-30 22:18:17 -0400
  • 4d2e0a4dcf Merge branch 'models_api' of github.com:matatonic/text-generation-webui into models_api Matthew Ashton 2023-05-30 22:16:35 -0400
  • 8ede86a737 Merge branch 'models_api' of github.com:matatonic/text-generation-webui into models_api Matthew Ashton 2023-05-30 22:16:20 -0400