Commit Graph

  • 66b6971b61 Update README oobabooga 2023-03-13 12:44:18 -0300
  • ddea518e0f Document --auto-launch oobabooga 2023-03-13 12:43:33 -0300
  • d97bfb8713 Update README.md oobabooga 2023-03-13 12:39:33 -0300
  • 372363bc3d Fix GPTQ load_quant call on Windows oobabooga 2023-03-13 12:07:02 -0300
  • bdff37f0bb Update README.md oobabooga 2023-03-13 11:05:51 -0300
  • b6098e9ccb Merge pull request #275 from stefanhamburger/patch-1 oobabooga 2023-03-13 11:01:31 -0300
  • 72757088fa Create FUNDING.yml oobabooga 2023-03-13 10:55:00 -0300
  • 0c224cf4f4 Fix GALACTICA (#285) oobabooga 2023-03-13 10:32:28 -0300
  • 58380faa96 small fix Ensheng 2023-03-13 06:30:29 -0700
  • 91c2a8e88d Fix: tuple object does not support item assignment stefanhamburger 2023-03-13 07:42:09 +0100
  • 2c4699a7e9 Change a comment oobabooga 2023-03-13 00:20:02 -0300
  • 0a7acb3bd9 Remove redundant comments oobabooga 2023-03-13 00:12:21 -0300
  • 77294b27dd Use str(Path) instead of os.path.abspath(Path) oobabooga 2023-03-13 00:08:01 -0300
  • b9e0712b92 Fix Open Assistant oobabooga 2023-03-12 23:58:25 -0300
  • 1ddcd4d0ba Clean up silero_tts oobabooga 2023-03-12 23:42:49 -0300
  • a95592fc56 Add back a progress indicator to --no-stream oobabooga 2023-03-12 20:38:40 -0300
  • d168b6e1f7 Update README.md oobabooga 2023-03-12 17:54:07 -0300
  • 48aa52849b use Gradio microphone input instead EliasVincent 2023-03-12 21:03:07 +0100
  • 54e8f0c31f Update README.md oobabooga 2023-03-12 16:58:00 -0300
  • 0a4d8a5cf6 Delete README.md oobabooga 2023-03-12 16:43:06 -0300
  • 88af917e0e Add files via upload oobabooga 2023-03-12 16:42:50 -0300
  • 0b86ac38b1 Initial commit oobabooga 2023-03-12 16:40:10 -0300
  • 683556f411 Adding markdown support and slight refactoring. HideLord 2023-03-12 21:34:09 +0200
  • cebe8b390d Remove useless "substring_found" variable oobabooga 2023-03-12 15:50:38 -0300
  • 4bcd675ccd Add *Is typing...* to regenerate as well oobabooga 2023-03-12 15:23:33 -0300
  • 3b4145966d Merge branch 'oobabooga:main' into stt-extension Elias Vincent Simon 2023-03-12 19:19:43 +0100
  • 3375eaece0 Update README oobabooga 2023-03-12 15:01:32 -0300
  • c7aa51faa6 Use a list of eos_tokens instead of just a number oobabooga 2023-03-12 14:54:58 -0300
  • 17210ff88f Update README.md oobabooga 2023-03-12 14:31:24 -0300
  • 441e993c51 Bump accelerate, RWKV and safetensors oobabooga 2023-03-12 14:25:14 -0300
  • d8bea766d7 Merge pull request #192 from xanthousm/main oobabooga 2023-03-12 13:40:16 -0300
  • 4066ab4c0c Reorder the imports oobabooga 2023-03-12 13:36:18 -0300
  • 4dc1d8c091 Update README.md oobabooga 2023-03-12 12:46:53 -0300
  • 901dcba9b4 Merge pull request #263 from HideLord/main oobabooga 2023-03-12 12:42:08 -0300
  • fda376d9c3 Use os.path.abspath() instead of str() oobabooga 2023-03-12 12:41:04 -0300
  • 8403152257 Fixing compatibility with GPTQ repo commit 2f667f7da051967566a5fb0546f8614bcd3a1ccd. Expects string and breaks on HideLord 2023-03-12 17:28:15 +0200
  • a27f98dbbc Merge branch 'main' of https://github.com/HideLord/text-generation-webui HideLord 2023-03-12 16:51:04 +0200
  • f3b00dd165 Merge pull request #224 from ItsLogic/llama-bits oobabooga 2023-03-12 11:23:50 -0300
  • 89e9493509 Update README oobabooga 2023-03-12 11:23:20 -0300
  • 65dda28c9d Rename --llama-bits to --gptq-bits oobabooga 2023-03-12 11:19:07 -0300
  • fed3617f07 Move LLaMA 4-bit into a separate file oobabooga 2023-03-12 11:12:34 -0300
  • 0ac562bdba Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 oobabooga 2023-03-12 10:46:16 -0300
  • 78901d522b Remove unused imports oobabooga 2023-03-12 08:59:05 -0300
  • 35c14f31b2 Merge pull request #259 from hieultp/patch-1 oobabooga 2023-03-12 08:52:02 -0300
  • 3c25557ef0 Add tqdm to requirements.txt oobabooga 2023-03-12 08:48:16 -0300
  • 781c09235c Fix typo error in script.py Phuoc-Hieu Le 2023-03-12 15:21:50 +0700
  • 9276af3561 clean up Xan 2023-03-12 19:06:24 +1100
  • b3e10e47c0 Fix merge conflict in text_generation Xan 2023-03-12 18:56:35 +1100
  • d4afed4e44 Fixes and polish Xan 2023-03-12 17:56:57 +1100
  • ad14f0e499 Fix regenerate (provisory way) oobabooga 2023-03-12 03:42:29 -0300
  • 6e12068ba2 Merge pull request #258 from lxe/lxe/utf8 oobabooga 2023-03-12 03:28:49 -0300
  • e2da6b9685 Fix You You You appearing in chat mode oobabooga 2023-03-12 03:25:56 -0300
  • bcf0075278 Merge pull request #235 from xanthousm/Quality_of_life-main oobabooga 2023-03-12 03:12:56 -0300
  • 3f7c3d6559 No need to set encoding on binary read Aleksey Smolenchuk 2023-03-11 22:10:57 -0800
  • 3437de686c Merge pull request #189 from oobabooga/new-streaming oobabooga 2023-03-12 03:01:26 -0300
  • 341e135036 Various fixes in chat mode oobabooga 2023-03-12 02:53:08 -0300
  • 3baf5fc700 Load and save chat history in utf-8 Aleksey Smolenchuk 2023-03-11 21:40:01 -0800
  • b0e8cb8c88 Various fixes in chat mode oobabooga 2023-03-12 02:31:45 -0300
  • 433f6350bc Load and save character files in UTF-8 unknown 2023-03-11 21:21:30 -0800
  • 0bd5430988 Use 'with' statement to better handle streaming memory oobabooga 2023-03-12 02:04:28 -0300
  • 37f0166b2d Fix memory leak in new streaming (second attempt) oobabooga 2023-03-11 23:14:49 -0300
  • def97f658c Small patch to fix loading of character jsons. Now it correctly reads non-ascii characters on Windows. HideLord 2023-03-12 02:54:22 +0200
  • 341ba958b4 Use the pt_path value in load_quant call jtang613 2023-03-11 18:12:01 -0500
  • 92fe947721 Merge branch 'main' into new-streaming oobabooga 2023-03-11 19:59:45 -0300
  • 195e99d0b6 Add llama_prompts extension oobabooga 2023-03-11 16:11:15 -0300
  • 501afbc234 Add requests to requirements.txt oobabooga 2023-03-11 14:47:30 -0300
  • 8f8da6707d Minor style changes to silero_tts oobabooga 2023-03-11 11:17:13 -0300
  • 2743dd736a Add *Is typing...* to impersonate as well oobabooga 2023-03-11 10:50:18 -0300
  • 96c51973f9 --auto-launch and "Is typing..." Xan 2023-03-11 22:50:59 +1100
  • 33df4bd91f Merge remote-tracking branch 'upstream/main' Xan 2023-03-11 22:40:47 +1100
  • b8f7d34c1d Undo changes to requirements Xan 2023-03-11 17:05:09 +1100
  • 0dfac4b777 Working html autoplay, clean up, improve wav naming Xan 2023-03-11 16:34:59 +1100
  • 28fd4fc970 Change wording to be consistent with other args draff 2023-03-10 23:34:13 +0000
  • 001e638b47 Make it actually work draff 2023-03-10 23:28:19 +0000
  • 804486214b Re-implement --load-in-4bit and update --llama-bits arg description draff 2023-03-10 23:21:01 +0000
  • 9ba8156a70 remove unnecessary Path() ItsLogic 2023-03-10 22:33:58 +0000
  • e6c631aea4 Replace --load-in-4bit with --llama-bits draff 2023-03-10 21:36:45 +0000
  • 026d60bd34 Remove default preset that didn't do anything oobabooga 2023-03-10 14:01:02 -0300
  • e01da4097c Merge pull request #210 from rohvani/pt-path-changes oobabooga 2023-03-10 11:04:56 -0300
  • e9dbdafb14 Merge branch 'main' into pt-path-changes oobabooga 2023-03-10 11:03:42 -0300
  • 706a03b2cb Minor changes oobabooga 2023-03-10 11:02:25 -0300
  • de7dd8b6aa Add comments oobabooga 2023-03-10 10:54:08 -0300
  • 113b791aa5 Merge pull request #219 from deepdiffuser/4bit-multigpu oobabooga 2023-03-10 10:52:45 -0300
  • e461c0b7a0 Move the import to the top oobabooga 2023-03-10 10:51:12 -0300
  • 9fbd60bf22 add no_split_module_classes to prevent tensor split error deepdiffuser 2023-03-10 05:30:47 -0800
  • ab47044459 add multi-gpu support for 4bit gptq LLaMA deepdiffuser 2023-03-10 04:29:09 -0800
  • 1c0bda33fb added installation instructions EliasVincent 2023-03-10 11:47:16 +0100
  • 2ac2913747 fix reference issue rohvani 2023-03-09 20:13:23 -0800
  • 1d7e893fa1 Merge pull request #211 from zoidbb/add-tokenizer-to-hf-downloads oobabooga 2023-03-10 00:46:21 -0300
  • 875847bf88 Consider tokenizer a type of text oobabooga 2023-03-10 00:45:28 -0300
  • 8ed214001d Merge branch 'main' of github.com:oobabooga/text-generation-webui oobabooga 2023-03-10 00:42:09 -0300
  • 249c268176 Fix the download script for long lists of files on HF oobabooga 2023-03-10 00:41:10 -0300
  • ec3de0495c download tokenizer when present Ber Zoidberg 2023-03-09 19:08:09 -0800
  • 5ee376c580 add LLaMA preset rohvani 2023-03-09 18:31:41 -0800
  • 826e297b0e add llama-65b-4bit support & multiple pt paths rohvani 2023-03-09 18:31:32 -0800
  • 7c3d1b43c1 Merge pull request #204 from MichealC0/patch-1 oobabooga 2023-03-09 23:04:09 -0300
  • 9849aac0f1 Don't show .pt models in the list oobabooga 2023-03-09 21:54:50 -0300
  • 1a3d25f75d Merge pull request #206 from oobabooga/llama-4bit oobabooga 2023-03-09 21:07:32 -0300
  • eb0cb9b6df Update README oobabooga 2023-03-09 20:53:52 -0300
  • 74102d5ee4 Insert to the path instead of appending oobabooga 2023-03-09 20:51:22 -0300