Commit Graph

  • 3cd7af1255 Small style changes oobabooga 2023-06-09 21:25:02 -0300
  • 4c7db22c88 Fix a small bug oobabooga 2023-06-09 21:22:50 -0300
  • 1678e0860c Merge branch 'dev' into brandonj60-main oobabooga 2023-06-09 21:17:38 -0300
  • aff3e04df4 Remove irrelevant docs oobabooga 2023-06-09 21:15:37 -0300
  • 235b374a52 Merge branch 'main' into models_patch matatonic 2023-06-09 15:39:12 -0400
  • 936ecfa403 Create StarChat-Beta.yaml ChobPT 2023-06-09 19:58:26 +0100
  • 1baef1da24 Create StarChat-Beta.yaml ChobPT 2023-06-09 19:25:21 +0100
  • 960b3f88ac Increase the iteration block size to reduce overhead. Morgan Schweers 2023-06-09 05:38:13 -0700
  • ac1c6fdbf7 +lazarus, +based Matthew Ashton 2023-06-08 15:49:51 -0400
  • efc88d57a8 gpt4all snoozy template Matthew Ashton 2023-06-08 15:23:16 -0400
  • 9207a025e4 +alpacino, +alpasta, +hippogriff, -airoboros 4k Matthew Ashton 2023-06-08 12:43:54 -0400
  • 4fad2af3ad fixed userwarning on download_button Hyeongmin Moon 2023-06-09 14:10:14 +0900
  • fdd5305542 Merge branch 'oobabooga:main' into main Hyeongmin Moon 2023-06-09 13:52:54 +0900
  • d7db25dac9 Fix a permission oobabooga 2023-06-09 01:44:17 -0300
  • d033c85cf9 Fix a permission oobabooga 2023-06-09 01:43:22 -0300
  • 741afd74f6 Update requirements-minimal.txt oobabooga 2023-06-09 00:48:41 -0300
  • c333e4c906 Add docs for performance optimizations oobabooga 2023-06-09 00:45:49 -0300
  • 9b753b1133 Merge branch 'main' into main Yiximail 2023-06-09 11:36:10 +0800
  • fd7677d9b8 Bump gradio from 3.33.1 to 3.34.0 dependabot[bot] 2023-06-09 03:31:31 +0000
  • 77aacee30e Bump llama-cpp-python from 0.1.57 to 0.1.59 dependabot[bot] 2023-06-09 03:31:27 +0000
  • 73da594c38 Bump gradio-client from 0.2.5 to 0.2.6 dependabot[bot] 2023-06-09 03:31:21 +0000
  • aaf240a14c Merge pull request #2587 from oobabooga/dev oobabooga 2023-06-09 00:30:59 -0300
  • c6552785af Minor cleanup oobabooga 2023-06-09 00:30:22 -0300
  • 92b45cb3f5 Merge branch 'main' into dev oobabooga 2023-06-09 00:27:11 -0300
  • 8a7a8343be Detect TheBloke_WizardLM-30B-GPTQ oobabooga 2023-06-09 00:26:34 -0300
  • 0f8140e99d Bump transformers/accelerate/peft/autogptq oobabooga 2023-06-09 00:25:13 -0300
  • d5cba0c917 Fixed all elements on Text generation tab Hyeongmin Moon 2023-06-09 12:16:09 +0900
  • 9df3f7941f fix impersonate Hyeongmin Moon 2023-06-09 10:53:57 +0900
  • cb43828273 Update server.py - clarify mirostat_mode=1 only for llama.cpp brandonj60 2023-06-08 20:20:34 -0500
  • 9db1ec88a5 Update sampler_hijack.py - adding back TemperatureLogitWarper brandonj60 2023-06-08 20:11:14 -0500
  • 63111a3df5 API error on batched generation requests Matthew Ashton 2023-06-08 17:13:30 -0400
  • 05cac3c534 +lazarus, +based Matthew Ashton 2023-06-08 15:49:51 -0400
  • ada5810c8b gpt4all snoozy template Matthew Ashton 2023-06-08 15:23:16 -0400
  • d0761d94ce Windows commands to install llama-cpp-python changed Iakovenko-Oleksandr 2023-06-08 20:19:03 +0300
  • bf6d7045cc +alpacino, +alpasta, +hippogriff, -airoboros 4k Matthew Ashton 2023-06-08 12:43:54 -0400
  • ac40c59ac3 Added Guanaco-QLoRA to Instruct character (#2574) FartyPants 2023-06-08 11:24:32 -0400
  • db2cbe7b5a Detect WizardLM-30B-V1.0 instruction format oobabooga 2023-06-08 11:41:06 -0300
  • e0b43102e6 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2023-06-08 11:35:23 -0300
  • 7be6fe126b extensions/api: models api for blocking_api (updated) (#2539) matatonic 2023-06-08 10:34:36 -0400
  • 240752617d Increase download timeout to 20s oobabooga 2023-06-08 11:16:38 -0300
  • 832df9f4ad add multi_cnt; batched inference size limit for multi_user mode Hyeongmin Moon 2023-06-08 22:56:23 +0900
  • 844ca41ca7 allow concurrency for multi_user mode Hyeongmin Moon 2023-06-08 19:50:03 +0900
  • dc554502ed move uuid generator into text generation tab Hyeongmin Moon 2023-06-08 15:20:07 +0900
  • bf3db78678 disable multi_user mode when multi_user==True but chat==False Hyeongmin Moon 2023-06-08 14:39:20 +0900
  • c117c02c6d Added Guanaco-QLoRA FartyPants 2023-06-08 01:35:32 -0400
  • 4607eae678 hide uuid generation objects Hyeongmin Moon 2023-06-08 14:32:15 +0900
  • 79ee7a8b89 partially revert stopping strings change Matthew Ashton 2023-06-08 01:09:53 -0400
  • 4434f9f2cd add multi-user history Hyeongmin Moon 2023-06-08 12:19:07 +0900
  • 6237dc618c Update sampler_hijack.py brandonj60 2023-06-07 19:24:42 -0500
  • 0945c750eb Update sampler_hijack.py brandonj60 2023-06-07 18:07:33 -0500
  • e153bc04a1 Update sampler_hijack.py brandonj60 2023-06-07 18:02:44 -0500
  • 9cd24e221e Update text_generation.py brandonj60 2023-06-07 17:59:20 -0500
  • 3fb22c9faf Update sampler_hijack.py brandonj60 2023-06-07 17:57:03 -0500
  • 084b006cfe Update LLaMA-model.md (#2460) zaypen 2023-06-08 02:34:50 +0800
  • 3f447657cc Merge branch 'models_api_new' of github.com:matatonic/text-generation-webui into models_api_new Matthew Ashton 2023-06-07 13:44:52 -0400
  • 0893b5cc7e . Matthew Ashton 2023-06-07 13:44:50 -0400
  • d655083a92 Merge branch 'oobabooga:main' into models_api_new matatonic 2023-06-07 13:05:08 -0400
  • 2b31fb7f28 improved example with more functionality Matthew Ashton 2023-06-07 12:59:57 -0400
  • b7c4b6bcb8 flags.ini to store flags Pb-207 2023-06-07 22:31:43 +0800
  • ec43a5b40b configs file to store flags for quick launch Pb-207 2023-06-07 22:28:35 +0800
  • 127a423f10 A simple script for quick launch. Pb-207 2023-06-07 22:26:28 +0800
  • 7f8966be98 Merge a325e13857 into c05edfcdfc catalpaaa 2023-06-07 09:50:25 +0200
  • 560d0561af readme++, model load via legacy engines.get Matthew Ashton 2023-06-07 00:19:05 -0400
  • c05edfcdfc fix: reverse-proxied URI should end with 'chat', not 'generate' (#2556) dnobs 2023-06-06 20:08:04 -0700
  • b41e42e6f2 fix: reverse-proxied URI should end with 'chat', not 'generate' dnobs 2023-06-06 19:38:07 -0700
  • e97c5ca56c epsilon_cutoff & eta_cutoff are floats Matthew Ashton 2023-06-06 19:39:13 -0400
  • 878250d609 Merge branch 'main' into dev oobabooga 2023-06-06 19:43:53 -0300
  • f55e85e28a Fix multimodal with model loaded through AutoGPTQ oobabooga 2023-06-06 19:42:40 -0300
  • c8b830629d First working version of extension Paolo Rechia 2023-06-06 23:51:16 +0200
  • bc8b942174 Guidance extension Paolo Rechia 2023-06-06 22:52:38 +0200
  • 0181afcaf0 added completions api and docs in example danikhan632 2023-06-06 14:24:39 -0400
  • eb2601a8c3 Reorganize Parameters tab oobabooga 2023-06-06 14:51:02 -0300
  • 6fdd149864 better error logging. Matthew Ashton 2023-06-05 21:26:50 -0400
  • 4ca94fed82 drop autogptq setting Matthew Ashton 2023-06-05 20:53:08 -0400
  • e33a73f93f gptq-for-llama -> gptq_for_llama Matthew Ashton 2023-06-05 20:32:28 -0400
  • 9ef3fc0463 repackage and update for new defaults Matthew Ashton 2023-06-05 18:35:38 -0400
  • 3cc5ce3c42 Merge pull request #2551 from oobabooga/dev oobabooga 2023-06-06 14:40:52 -0300
  • 6015616338 Style changes oobabooga 2023-06-06 13:06:05 -0300
  • f040073ef1 Handle the case of older autogptq install oobabooga 2023-06-06 13:05:05 -0300
  • 5d515eeb8c Bump llama-cpp-python wheel oobabooga 2023-06-06 13:01:15 -0300
  • bc58dc40bd Fix a minor bug oobabooga 2023-06-06 12:57:13 -0300
  • 6f76b4b478 Clean-up, clarify. Alexander Ljungberg 2023-06-06 15:46:17 +0100
  • dd43c17a49 Fixes Falcon model error 'Expected query, key, and value to have the same dtype'. Alexander Ljungberg 2023-06-06 15:34:51 +0100
  • f06a1387f0 Reorganize Models tab oobabooga 2023-06-06 07:58:07 -0300
  • d49d299b67 Change a message oobabooga 2023-06-06 07:54:56 -0300
  • f9b8bed953 Remove folder oobabooga 2023-06-06 07:49:12 -0300
  • 90fdb8edc6 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2023-06-06 07:46:51 -0300
  • 7ed1e35fbf Reorganize Parameters tab in chat mode oobabooga 2023-06-06 07:46:25 -0300
  • 00b94847da Remove softprompt support oobabooga 2023-06-06 07:42:23 -0300
  • 643c44e975 Add ngrok shared URL ingress support (#1944) bobzilla 2023-06-06 03:34:20 -0700
  • ccb4c9f178 Add some padding to chat box oobabooga 2023-06-06 07:21:16 -0300
  • 0aebc838a0 Don't save the history for 'None' character oobabooga 2023-06-06 07:21:07 -0300
  • 9f215523e2 Remove some unused imports oobabooga 2023-06-06 07:05:32 -0300
  • b9bc9665d9 Remove some extra space oobabooga 2023-06-06 07:01:37 -0300
  • 177ab7912a Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2023-06-06 07:01:00 -0300
  • 0f0108ce34 Never load the history for default character oobabooga 2023-06-06 07:00:11 -0300
  • ae25b21d61 Improve instruct style in dark mode oobabooga 2023-06-06 07:00:00 -0300
  • 4a17a5db67 [extensions/openai] various fixes (#2533) matatonic 2023-06-06 00:43:04 -0400
  • 97f3fa843f Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537) dependabot[bot] 2023-06-05 23:45:58 -0300
  • 11f38b5c2b Add AutoGPTQ LoRA support oobabooga 2023-06-05 23:29:29 -0300