Commit Graph

  • 47e7d67fcd telegrambot - added regenerate and minor upd innightwolfsleep 2023-04-02 21:48:45 +0600
  • f7acef3915 Not sure if I am the only one with this problem. Forkoz 2023-04-02 09:08:16 -0500
  • 4551df7d67 webui version line to not fail if no WEBUI_VERSION provided loeken 2023-04-02 15:08:13 +0200
  • 35fe49bb59 small fix akatsukinoyami 2023-04-02 14:05:22 +0300
  • a6c362eaa5 Merge branch 'add-telegram-bot-extension' of github.com:akatsukinoyami/text-generation-webui into add-telegram-bot-extension akatsukinoyami 2023-04-02 14:04:49 +0300
  • ac28126f6a Merge branch 'main' of github.com:akatsukinoyami/text-generation-webui into add-telegram-bot-extension akatsukinoyami 2023-04-02 14:04:45 +0300
  • 3e7ae33b09 Delete server.py Katsu 2023-04-02 14:03:48 +0300
  • 1a48eb2ba8 Delete text_generation.py Katsu 2023-04-02 14:03:39 +0300
  • 0489c6c214 Delete shared.py Katsu 2023-04-02 14:03:27 +0300
  • adf07d5146 Delete script.py Katsu 2023-04-02 14:03:17 +0300
  • d78ca39400 Delete script.py Katsu 2023-04-02 14:02:56 +0300
  • dd44aea61e Merge branch 'main' of github.com:akatsukinoyami/text-generation-webui into add-telegram-bot-extension akatsukinoyami 2023-04-02 13:55:31 +0300
  • 6b8c9c2652 Add shared.is_chat() function akatsukinoyami 2023-04-02 13:55:31 +0300
  • ba6b197be6 Fix for file mismatch (when second changes) Φφ 2023-04-02 10:08:21 +0300
  • c25f6924ee Changes in interactive mode Φφ 2023-04-02 09:55:32 +0300
  • 07de9ad320 Merge pull request #3 from innightwolfsleep/innightwolfsleep-patch-2 innightwolfsleep 2023-04-02 12:13:49 +0600
  • ef8508b2ff telegram_bot v0.4 - beta! innightwolfsleep 2023-04-02 12:07:44 +0600
  • 1dc75aebc1 Customisation of seed, sampler, CFG_s & steps Φφ 2023-04-02 08:10:29 +0300
  • 6e9290cf26 Reverting the htpps:// filtering+ Φφ 2023-04-02 07:26:24 +0300
  • e3c348e42b Add .git oobabooga 2023-04-02 01:11:05 -0300
  • b704fe7878 Use my fork of GPTQ-for-LLaMa for stability oobabooga 2023-04-02 01:10:22 -0300
  • 48720298a1 Group files by date Φφ 2023-03-29 13:03:12 +0300
  • 55c116edc4 Full-sized images Φφ 2023-03-29 12:17:08 +0300
  • 35eab98e2a VRAM, params, fixes Φφ 2023-03-29 11:39:45 +0300
  • 2dba3e2bf0 Working... Φφ 2023-03-25 07:02:13 +0300
  • 61982f6da5 Working... Φφ 2023-03-22 12:45:39 +0300
  • 16deaf9a2a Unload and reload models on request Φφ 2023-03-21 13:15:42 +0300
  • d0f9625f0b Clear text input for chat Brian O'Connor 2023-04-01 21:48:24 -0400
  • 71e4c3ac8a made client init safer akatsukinoyami 2023-04-02 02:41:52 +0300
  • 0a51d5036d delete service akatsukinoyami 2023-04-02 02:41:32 +0300
  • 1690d108cd changes in i18n akatsukinoyami 2023-04-02 02:41:22 +0300
  • 45f067e104 small fixes akatsukinoyami 2023-04-02 02:40:53 +0300
  • b639e922fd rewrite ai controller akatsukinoyami 2023-04-02 02:40:42 +0300
  • 1399c01254 changes in syste controller akatsukinoyami 2023-04-02 02:40:27 +0300
  • b0890a7925 Add shared.is_chat() function oobabooga 2023-04-01 20:14:43 -0300
  • af3bcbc21b Merge branch 'main' of github.com:akatsukinoyami/text-generation-webui into add-telegram-bot-extension akatsukinoyami 2023-04-02 01:38:18 +0300
  • 1e9b1151d7 optimise bot startup process akatsukinoyami 2023-04-02 01:37:35 +0300
  • cd15e8d4f9 start use manual routs against smart plugins akatsukinoyami 2023-04-02 01:37:19 +0300
  • 5867bd56a5 fix in i18n akatsukinoyami 2023-04-02 01:36:55 +0300
  • 83d46b13b9 start use built in generate_promt and chatbot_wrapper akatsukinoyami 2023-04-02 01:36:44 +0300
  • 836c0b9beb start use built-in methods for model/char chaging akatsukinoyami 2023-04-02 01:36:04 +0300
  • 95927db90c fully reworked controllers akatsukinoyami 2023-04-02 01:35:18 +0300
  • fa9f73f578 Merge pull request #2 from innightwolfsleep/innightwolfsleep-patch-1 innightwolfsleep 2023-04-02 01:56:04 +0600
  • 8e9b688d11 Telegram_bot_v3 innightwolfsleep 2023-04-02 01:55:40 +0600
  • bdb2dfa02e Merge branch 'oobabooga:main' into main SDS 2023-04-01 21:02:24 +0200
  • 8f0e7d1503 telegramm_bot_v2 innightwolfsleep 2023-04-02 00:43:24 +0600
  • 657ce70da7 updated version of gptq, linked in links to models used in testing loeken 2023-04-01 20:36:08 +0200
  • b38ba230f4 Update download-model.py oobabooga 2023-04-01 15:03:24 -0300
  • b6f817be45 Update README.md oobabooga 2023-04-01 14:54:10 -0300
  • 88fa38ac01 Update README.md oobabooga 2023-04-01 14:49:03 -0300
  • 526d5725db Update download-model.py oobabooga 2023-04-01 14:47:47 -0300
  • 766b364778 Merge pull request #1 from innightwolfsleep/innightwolfsleep-telegram_bot innightwolfsleep 2023-04-01 23:40:21 +0600
  • 71143db1e1 Telegram bot API innightwolfsleep 2023-04-01 23:39:35 +0600
  • 4b57bd0d99 Update README.md oobabooga 2023-04-01 14:38:04 -0300
  • b53bec5a1f Update README.md oobabooga 2023-04-01 14:37:35 -0300
  • 9160586c04 Update README.md oobabooga 2023-04-01 14:31:10 -0300
  • 7ec11ae000 Update README.md oobabooga 2023-04-01 14:15:19 -0300
  • b857f4655b Update shared.py oobabooga 2023-04-01 13:56:47 -0300
  • 012f4f83b8 Update README.md oobabooga 2023-04-01 13:55:15 -0300
  • ecd5538663 Merge branch 'main' of github.com:oobabooga/text-generation-webui into dockerize loeken 2023-04-01 14:05:04 +0200
  • 1fc2dca992 changes suggested by deece to allow running version with uncommited changes loeken 2023-04-01 13:42:49 +0200
  • 6f05f2e8b1 didnt save file loeken 2023-04-01 13:38:01 +0200
  • d83a10cf3b unified arguments WEBUI_VERSION and GPTQ_VERSION loeken 2023-04-01 12:50:43 +0200
  • 5d77765cf7 Merge branch 'oobabooga:main' into main SDS 2023-04-01 11:27:39 +0200
  • fcda3f8776 Add also_return_rows to generate_chat_prompt oobabooga 2023-04-01 01:12:13 -0300
  • d83198ac12 Merge pull request #3 from wawawario2/increase_num_memories wawawario2 2023-03-31 23:45:10 -0400
  • 9d6ceae61d Update looks and animations of character cards in gallery ye7iaserag 2023-04-01 05:41:43 +0200
  • d51144a500 Adds ability to increase number of memories loaded in context wario 2023-03-31 23:01:59 -0400
  • 0e7e80db1e Merge branch 'oobabooga:main' into master wawawario2 2023-03-31 23:21:24 -0400
  • 8c51b405e4 Progress towards generalizing Interface mode tab oobabooga 2023-03-31 23:41:10 -0300
  • 23116b88ef Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads) oobabooga 2023-03-31 22:55:55 -0300
  • 74462ac713 Don't override the metadata when checking the sha256sum oobabooga 2023-03-31 22:52:52 -0300
  • b4886a2668 Merge branch 'dockerize' of github.com:loeken/text-generation-webui into dockerize loeken 2023-04-01 03:07:03 +0200
  • 1797fd5b30 docs for ubuntu 22.04/manjaro installation of dependencies loeken 2023-04-01 03:06:51 +0200
  • f25567028c Merge branch 'main' of github.com:oobabooga/text-generation-webui wario 2023-03-31 20:27:42 -0400
  • 2c52310642 Add --threads flag for llama.cpp oobabooga 2023-03-31 21:18:05 -0300
  • cf8196b090 GPTQ switch to cuda branch, minor update to nvidia/cuda:11.8.0-devel-ubuntu22.04 to delay deprecation of base image loeken 2023-03-31 22:40:18 +0200
  • 4694d97805 Merge branch 'main' of github.com:akatsukinoyami/text-generation-webui into add-telegram-bot-extension Dan Kild 2023-03-31 22:55:19 +0300
  • 67a81da1c5 Merge branch 'main' of https://github.com/oobabooga/text-generation-webui into dockerize loeken 2023-03-31 21:37:14 +0200
  • 85a5154e04 Merge branch 'main' of github.com:oobabooga/text-generation-webui wario 2023-03-31 15:34:22 -0400
  • 16d79957aa Setting n_threads to os.cpu_count() to maximize utilization InconsolableCellist 2023-03-31 11:09:53 -0600
  • eeafd60713 Fix streaming oobabooga 2023-03-31 19:05:38 -0300
  • 52065ae4cd Add repetition_penalty oobabooga 2023-03-31 19:01:34 -0300
  • 2259143fec Fix llama.cpp with --no-stream oobabooga 2023-03-31 18:43:45 -0300
  • 875de5d983 Update ggml template oobabooga 2023-03-31 17:57:31 -0300
  • cbfe0b944a Update README.md oobabooga 2023-03-31 17:49:11 -0300
  • 6a44f4aec6 Add support for downloading ggml files oobabooga 2023-03-31 17:33:10 -0300
  • 3a47a602a3 Detect ggml*.bin files automatically oobabooga 2023-03-31 17:18:21 -0300
  • 0aee7341d8 Properly count tokens/s for llama.cpp in chat mode oobabooga 2023-03-31 17:00:55 -0300
  • 5c4e44b452 llama.cpp documentation oobabooga 2023-03-31 15:20:39 -0300
  • 6fd70d0032 Add llama.cpp support (#447 from thomasantony/feature/llamacpp) oobabooga 2023-03-31 15:17:32 -0300
  • a5c9b7d977 Bump llamacpp version oobabooga 2023-03-31 15:08:01 -0300
  • ea3ba6fc73 Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp oobabooga 2023-03-31 14:45:53 -0300
  • 09b0a3aafb Add repetition_penalty oobabooga 2023-03-31 14:45:17 -0300
  • 4d98623041 Merge branch 'main' into feature/llamacpp oobabooga 2023-03-31 14:37:04 -0300
  • 4c27562157 Minor changes oobabooga 2023-03-31 14:33:46 -0300
  • 9d1dcf880a General improvements oobabooga 2023-03-31 14:27:01 -0300
  • 770ff0efa9 Merge branch 'main' of github.com:oobabooga/text-generation-webui oobabooga 2023-03-31 12:22:22 -0300
  • 1d1d9e40cd Add seed to settings oobabooga 2023-03-31 12:22:07 -0300
  • daeab6bac7 Merge pull request #678 from mayaeary/fix/python3.8 oobabooga 2023-03-31 12:19:06 -0300