Commit Graph

  • 105b0b2816 Merge branch 'main' into ban-newline oobabooga 2023-07-30 11:46:45 -0700
  • 5ca37765d3 Only replace {{user}} and {{char}} at generation time oobabooga 2023-07-30 11:42:30 -0700
  • afcd7473ed Update requirements.txt PseudoMotivated 2023-07-30 20:41:27 +0200
  • 6e16af34fd Save uploaded characters as yaml oobabooga 2023-07-30 11:25:38 -0700
  • c25602eb65 Merge branch 'dev' oobabooga 2023-07-30 08:47:50 -0700
  • 25a34eef55 Fixed parser argument description Friedemann Lipphardt 2023-07-29 19:21:01 -0700
  • ca4188aabc Update the example extension oobabooga 2023-07-29 18:57:22 -0700
  • c2d72775a1 Remove a comment oobabooga 2023-07-29 18:48:39 -0700
  • c3a530bb43 Turn it into an extension oobabooga 2023-07-29 18:46:59 -0700
  • c4e14a757c Bump exllama module to 0.0.9 (#3338) jllllll 2023-07-29 20:16:23 -0500
  • b332bc7956 Add an option to ban the \n character in chat mode oobabooga 2023-07-29 18:12:31 -0700
  • e8f1ea28fa Fix superbooga when using regenerate Hans Raaf 2023-07-30 02:49:27 +0200
  • 4d01918ee7 added option for named cloudflare tunnels Friedemann Lipphardt 2023-07-30 02:32:40 +0200
  • 8053121fd5 Add 2 fields for History GET method, returning the names of the characters BlankhansDH 2023-07-29 22:41:13 +0200
  • 1583f96493 Supercharging superbooga: 8 HideLord 2023-07-29 21:17:30 +0300
  • 3f59e84363 Merge branch 'main' into cognitage Ricardo Pinto 2023-07-29 15:06:35 +0000
  • a8b67c27a5 Supercharging superbooga: 7 HideLord 2023-07-29 04:51:22 +0300
  • 95050596df checkpoint save FPHam 2023-07-28 17:18:01 -0400
  • 338c952592 Save steps < loss slider FPHam 2023-07-28 13:33:15 -0400
  • 54941351a1 chat indentation FPHam 2023-07-28 12:22:06 -0400
  • b6781203fa updates FPHam 2023-07-28 12:17:26 -0400
  • d3a9f22110 Merge branch 'oobabooga:main' into main FartyPants 2023-07-28 12:14:21 -0400
  • 4fde240bf9 Supercharging superbooga: 6 HideLord 2023-07-28 17:48:05 +0300
  • 0cf820936e Merge branch 'oobabooga:main' into chat-token-count atriantafy 2023-07-28 14:10:06 +0100
  • e937851420 Supercharging superbooga: 5 HideLord 2023-07-28 04:37:36 +0300
  • 5e0a58ed73 Bump exllama module to 0.0.9 jllllll 2023-07-27 15:48:07 -0500
  • c3f38d5362 Add standalone Dockerfile for NVIDIA Jetson toolboc 2023-07-27 13:52:18 -0500
  • 3974794a2f add auth and endpoint GET and POST for character Paulo Henrique Silveira 2023-07-27 12:24:37 -0300
  • d2dda8c2ab add chat instruction config for BaiChuan model CrazyShipOne 2023-07-27 15:59:08 +0800
  • afa6cc1d28 Add CFG for exllama (not exllama_HF) oobabooga 2023-07-26 19:42:18 -0700
  • ecd92d6a4e Remove unused variable from ROCm GPTQ install (#107) jllllll 2023-07-26 20:16:36 -0500
  • 4f2456ce22 Merge branch 'oobabooga:main' into openai_update matatonic 2023-07-26 18:39:31 -0400
  • 31f4dd4666 Syntax simplification oobabooga 2023-07-26 14:11:17 -0700
  • ebc7cc462c Progress on exllama_hf and llamacpp_hf oobabooga 2023-07-26 14:00:58 -0700
  • 1e3c950c7d Add AMD GPU support for Linux (#98) jllllll 2023-07-26 15:33:02 -0500
  • 14b208b20f Move negative_prompt to shared.settings oobabooga 2023-07-26 10:50:27 -0700
  • e6e9450b3b Mention that 1.5 is a good value for guidance_scale oobabooga 2023-07-26 10:38:15 -0700
  • 4b37a2b397 sd_api_pictures: Widen sliders for image size minimum and maximum (#3326) GuizzyQC 2023-07-26 12:49:46 -0400
  • 86bcffa6d7 Revert "Revert "Add tensor split support for llama.cpp (#3171)"" oobabooga 2023-07-26 09:49:23 -0700
  • 5e37bdb929 Merge branch 'dev' into GuizzyQC-patch-1 oobabooga 2023-07-26 09:48:34 -0700
  • d6314fd539 Change a comment oobabooga 2023-07-26 09:37:48 -0700
  • f24f87cfb0 Change a comment oobabooga 2023-07-26 09:37:48 -0700
  • baa8edd0ef Fix negative prompt oobabooga 2023-07-26 09:08:48 -0700
  • 7b871cbc50 Align image size slider minimum and maximum with Auto1111 GuizzyQC 2023-07-26 12:01:08 -0400
  • 73227f4142 guidance_scale should be > 1 oobabooga 2023-07-26 08:57:52 -0700
  • 1c00a567f6 Add guidance_scale and negative_prompt parameters for CFG oobabooga 2023-07-26 08:50:53 -0700
  • de5de045e0 Set rms_norm_eps to 5e-6 for every llama-2 ggml model, not just 70b oobabooga 2023-07-26 08:23:24 -0700
  • 193c6be39c Add missing \n to llama-v2 template context oobabooga 2023-07-26 07:59:40 -0700
  • ec68d5211e Set rms_norm_eps to 5e-6 for every llama-2 ggml model, not just 70b oobabooga 2023-07-26 08:23:24 -0700
  • a9e10753df Add missing \n to llama-v2 template context oobabooga 2023-07-26 07:59:40 -0700
  • b780d520d2 Add a link to the gradio docs oobabooga 2023-07-26 07:49:22 -0700
  • b553c33dd0 Add a link to the gradio docs oobabooga 2023-07-26 07:49:22 -0700
  • d94ba6e68b Define visible_text before applying chat_input extensions oobabooga 2023-07-26 07:26:37 -0700
  • b31321c779 Define visible_text before applying chat_input extensions oobabooga 2023-07-26 07:26:37 -0700
  • b17893a58f Revert "Add tensor split support for llama.cpp (#3171)" v1.5 oobabooga 2023-07-26 07:06:01 -0700
  • 517d40cffe Update Extensions.md oobabooga 2023-07-26 07:01:35 -0700
  • b11f63cb18 update extensions docs oobabooga 2023-07-26 07:00:33 -0700
  • 6df4425a4c Supercharging superbooga: 4 HideLord 2023-07-26 14:45:49 +0300
  • 52e3b91f5e Fix broken gxx_linux-64 package. (#106) jllllll 2023-07-25 23:55:08 -0500
  • 4a24849715 Revert changes oobabooga 2023-07-25 21:09:32 -0700
  • 69f8b35bc9 Revert changes to README oobabooga 2023-07-25 20:49:00 -0700
  • ed80a2e7db Reorder llama.cpp params oobabooga 2023-07-25 20:45:20 -0700
  • 0e8782df03 Set instruction template when switching from default/notebook to chat oobabooga 2023-07-25 20:37:01 -0700
  • 28779cd959 Use dark theme by default oobabooga 2023-07-25 20:11:57 -0700
  • 258ba6c3ed fix some logit_processors Matthew Ashton 2023-07-25 20:01:42 -0400
  • 3ea3e22fc1 Progress oobabooga 2023-07-25 16:08:34 -0700
  • c2e0d46616 Add credits oobabooga 2023-07-25 15:49:04 -0700
  • 1b89c304ad Update README oobabooga 2023-07-25 15:46:12 -0700
  • 888b10c15f Unify the 3 interface modes oobabooga 2023-07-25 15:43:09 -0700
  • d3abe7caa8 Update llama.cpp.md oobabooga 2023-07-25 15:33:16 -0700
  • 863d2f118f Update llama.cpp.md oobabooga 2023-07-25 15:31:05 -0700
  • 77d2e9f060 Remove flexgen 2 oobabooga 2023-07-25 15:18:25 -0700
  • 75c2dd38cf Remove flexgen support oobabooga 2023-07-25 15:15:29 -0700
  • 5134d5b1c6 Update README oobabooga 2023-07-25 15:13:07 -0700
  • 85b3a26e25 Ignore values which are not string in training.py (#3287) Foxtr0t1337 2023-07-26 06:00:25 +0800
  • 031fe7225e Add tensor split support for llama.cpp (#3171) Shouyi 2023-07-26 07:59:26 +1000
  • f653546484 README updates and improvements (#3198) Eve 2023-07-25 17:58:13 -0400
  • b09e4f10fd Fix typo in README.md (#3286) Ikko Eltociear Ashimine 2023-07-26 06:56:25 +0900
  • 7bc408b472 Change rms_norm_eps to 5e-6 for llama-2-70b ggml oobabooga 2023-07-25 14:54:57 -0700
  • ef8637e32d Add extension example, replace input_hijack with chat_input_modifier (#3307) oobabooga 2023-07-25 18:49:56 -0300
  • 9dc83b4d14 Update Extensions.md oobabooga 2023-07-25 14:44:20 -0700
  • 330d43bb89 Rename a function oobabooga 2023-07-25 14:37:17 -0700
  • 235da1bfbf Minor changes oobabooga 2023-07-25 14:33:26 -0700
  • d7509bb3b1 Replace input_hijack with chat_input_modifier oobabooga 2023-07-25 14:30:21 -0700
  • eeecaccbe8 batched (array) input, fixes #3296 Matthew Ashton 2023-07-25 17:29:13 -0400
  • 33d822c49c missing model_name, fixes #3305 Matthew Ashton 2023-07-25 17:24:18 -0400
  • 0bed00cce2 Update Extensions.md oobabooga 2023-07-25 17:11:34 -0300
  • eeb35271cb Update Extensions.md oobabooga 2023-07-25 13:02:23 -0700
  • 3afad67159 Don't break old logit processor extensions oobabooga 2023-07-25 12:49:37 -0700
  • eed18cf262 Add extension example oobabooga 2023-07-25 12:47:05 -0700
  • c1dd141b51 Bump bitsandbytes from 0.40.2 to 0.41.0 (#10) dependabot[bot] 2023-07-25 19:44:33 +0100
  • 985047cc94 Bump gradio from 3.37.0 to 3.38.0 (#9) dependabot[bot] 2023-07-25 19:44:12 +0100
  • 56c21f3512 missing model_name, fixes #3305 Matthew Ashton 2023-07-25 14:38:31 -0400
  • 443e953468 Supercharging superbooga: 3 HideLord 2023-07-25 19:19:09 +0300
  • 7ac492df7c Merge branch 'oobabooga:main' into auto-max-new-tokens atriantafy 2023-07-25 10:51:39 +0100
  • ea4edaab97 Merge branch 'oobabooga:main' into chat-token-count atriantafy 2023-07-25 10:51:26 +0100
  • 443aa0c40b Merge b21d387e20 into 08c622df2e M S 2023-07-25 07:29:39 +0000
  • b21d387e20 just .gitignore M S 2023-07-25 02:29:08 -0500
  • 00b48a56aa change parameter to use_max_seq_len; parameter now also overrides default truncation_length if not set in the request Alexandros Triantafyllidis 2023-07-25 00:40:12 +0100
  • 08c622df2e Autodetect rms_norm_eps and n_gqa for llama-2-70b oobabooga 2023-07-24 15:26:29 -0700