Commit Graph

  • 54e6f7c5c5 Bump gradio from 3.33.1 to 3.38.0 dependabot[bot] 2023-07-24 20:49:40 +0000
  • a07d070b6c Add llama-2-70b GGML support (#3285) v1.4 oobabooga 2023-07-24 16:37:03 -0300
  • 9183d82fa1 Adds a new GET Endpoint for Character History Retrieval and an option, to save the conversation to the character history right away. BlankhansDH 2023-07-24 21:04:19 +0200
  • 76ee525e4f Update README oobabooga 2023-07-24 11:36:30 -0700
  • dc983d7ef2 Add rms_norm_eps parameter oobabooga 2023-07-24 11:33:20 -0700
  • ce678154b8 Fix typo in README.md Ikko Eltociear Ashimine 2023-07-25 02:17:23 +0900
  • ec1496bcf9 Ignore values which are not string Foxtr0t1337 2023-07-25 01:11:32 +0800
  • 6f4830b4d3 Bump peft commit oobabooga 2023-07-24 09:49:57 -0700
  • 5419651c9f Update README oobabooga 2023-07-24 09:19:21 -0700
  • ffd1cafb13 Add llama-70b ggml support oobabooga 2023-07-24 09:03:49 -0700
  • 90a4ab631c extensions/openai: Fixes for: embeddings, tokens, better errors. +Docs update, +Images, +logit_bias/logprobs, +more. (#3122) matatonic 2023-07-24 10:28:12 -0400
  • 1141987a0d Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading (#3225) jllllll 2023-07-24 09:25:36 -0500
  • 24f07b4ac3 Remove --cpu flag usage oobabooga 2023-07-24 07:22:49 -0700
  • 74fc5dd873 Add user-agent to download-model.py requests (#3243) iongpt 2023-07-24 16:19:13 +0200
  • b2d5433409 Fix typo in deepspeed_parameters.py (#3222) Ikko Eltociear Ashimine 2023-07-24 23:17:28 +0900
  • eb105b0495 Bump llama-cpp-python to 0.1.74 (#3257) jllllll 2023-07-24 09:15:42 -0500
  • 97721a1831 Update (Needs testing) da3dsoul 2023-07-24 10:10:15 -0400
  • 152cf1e8ef Bump bitsandbytes to 0.41.0 (#3258) jllllll 2023-07-24 09:06:18 -0500
  • 8d31d20c9a Bump exllama module to 0.0.8 (#3256) jllllll 2023-07-24 09:05:54 -0500
  • f5f345cccd Merge remote-tracking branch 'upstream/main' da3dsoul 2023-07-24 09:42:24 -0400
  • 417d811725 Supercharging superbooga: 2 HideLord 2023-07-24 05:00:55 +0300
  • 363288f6d2 working on h100 Mochi Liu 2023-07-24 01:33:42 +0000
  • 8bd46b7c86 introducing optimize_max_new_tokens parameter for better control Alexandros Triantafyllidis 2023-07-24 02:27:44 +0100
  • 938b68a713 revert restore faces, less horrible. Matthew Ashton 2023-07-23 17:12:20 -0400
  • abd0ce5987 better logit_bias, better missing param error Matthew Ashton 2023-07-23 14:00:55 -0400
  • 7cdc83aaef auto max_new_tokens; fix default truncation_length value Alexandros Triantafyllidis 2023-07-23 18:16:37 +0100
  • 941d2c57c9 indicate llama v2 and cpu only support Eve 2023-07-23 12:15:07 -0400
  • 193bf57797 Merge branch 'oobabooga:dev' into dev Eve 2023-07-23 11:59:58 -0400
  • 00d00c608d fix api example Alexandros Triantafyllidis 2023-07-23 16:19:40 +0100
  • 1e6d1be5b7 remove instruction_template in order to autodetect it Alexandros Triantafyllidis 2023-07-23 16:09:27 +0100
  • 42ad7944de Add endpoint to count chat prompt tokens Alexandros Triantafyllidis 2023-07-23 15:53:28 +0100
  • e9702c8255 update requirement tc 2023-07-22 23:55:33 -0700
  • 4f76f20d96 Bump bitsandbytes to 0.41.0 jllllll 2023-07-22 20:48:06 -0500
  • 3bc5757bcc Bump llama-cpp-python to 0.1.74 jllllll 2023-07-22 20:19:31 -0500
  • b2a5198d08 Supercharging superbooga hidelord 2023-07-21 03:10:03 +0300
  • 6fc8a4b32f Bump exllama module to 0.0.8 jllllll 2023-07-22 17:42:34 -0500
  • aad22c77d3 Gpg no longer works for some stupid reason so not only did I lose two essays because of git's stupidity, this commit isn't signed either. But that's fine. Everything is fine. Just let me die already. LoganDark 2023-07-22 13:06:57 -0700
  • c67f8657be Add check for cpu flag jllllll 2023-07-22 04:08:14 -0500
  • d60a2e77d5 Update download_urls.py iongpt 2023-07-22 09:40:14 +0200
  • f8fbdb0744 Add transformers_stream_generator to requirement.txt Aether 2023-07-22 11:23:00 +0800
  • e65975a046 working with hf Mochi Liu 2023-07-21 22:48:44 -0400
  • 2f0c6a3b70 remove .env Mochi Liu 2023-07-21 22:22:17 -0400
  • c03a119d78 docker built Mochi Liu 2023-07-21 22:21:13 -0400
  • 70d497f362 stop reason fix, revert model class hacks Matthew Ashton 2023-07-21 15:05:32 -0400
  • 6d87415d61 Global model aded clobal tts_model to fix. Michael Sullivan 2023-07-21 02:57:44 -0500
  • a2977466ba images improvements, hacks for presence_penalty Matthew Ashton 2023-07-21 03:44:43 -0400
  • 87fe313d6c Merge branch 'main' of https://github.com/oobabooga/text-generation-webui Michael Sullivan 2023-07-21 02:34:27 -0500
  • 1c68c05b66 model in the TTS extensions clobbered global model Michael Sullivan 2023-07-21 02:33:11 -0500
  • 4ad5831144 udpate script tc 2023-07-20 22:12:22 -0700
  • 0ac392ffc5 update chinese tc 2023-07-20 21:43:07 -0700
  • cc2ed46d44 Make chat the default again oobabooga 2023-07-20 18:55:09 -0300
  • f37e26c5d1 Add checks for ROCm and unsupported architectures jllllll 2023-07-20 16:03:58 -0500
  • a23b2e770d Merge branch 'oobabooga:main' into openai_update matatonic 2023-07-20 16:55:50 -0400
  • aa8e0924ac wrong calc FPHam 2023-07-20 15:58:41 -0400
  • 22e01eb39d Fix typo in deepspeed_parameters.py Ikko Eltociear Ashimine 2023-07-21 02:17:31 +0900
  • c39f4da322 Update README.md repo-reviews 2023-07-20 17:32:11 +0200
  • fcb215fed5 Add check for compute support for GPTQ-for-LLaMa (#104) jllllll 2023-07-20 09:11:00 -0500
  • 63ece46213 Merge branch 'main' into dev oobabooga 2023-07-20 07:06:41 -0700
  • 6456b7d9ca fale/true FPHam 2023-07-20 00:30:39 -0400
  • 5bf55caefd changes FPHam 2023-07-20 00:26:19 -0400
  • 6415cc68a2 Remove obsolete information from README oobabooga 2023-07-19 21:20:40 -0700
  • e6da02db59 debug FPHam 2023-07-20 00:03:44 -0400
  • bd0032a796 new Training FPHam 2023-07-19 23:59:10 -0400
  • fe10215429 Merge branch 'oobabooga:main' into main FartyPants 2023-07-19 23:57:30 -0400
  • 4b19b74e6c Add CUDA wheels for llama-cpp-python by jllllll oobabooga 2023-07-19 19:31:19 -0700
  • dc791e2149 Merge branch 'oobabooga:main' into openai_update matatonic 2023-07-19 22:27:46 -0400
  • 05f4cc63c8 Merge branch 'main' into dev oobabooga 2023-07-19 19:22:34 -0700
  • 4df3f72753 Fix GPTQ fail message not being shown on update (#103) jllllll 2023-07-19 20:25:09 -0500
  • 87926d033d Bump exllama module to 0.0.7 (#3211) jllllll 2023-07-19 20:24:47 -0500
  • 2a54776e2f Bump exllama module to 0.0.7 jllllll 2023-07-19 18:22:43 -0500
  • 12d3d92c1f Update shared.py RoPE to match readme Eve 2023-07-19 17:15:37 -0400
  • b84e459e24 make --alpha_value doc match server.py Eve 2023-07-19 17:11:52 -0400
  • 913e060348 Change the default preset to Divine Intellect oobabooga 2023-07-19 08:24:37 -0700
  • d73d4608a4 Merge branch 'oobabooga:main' into openai_update matatonic 2023-07-19 10:25:49 -0400
  • 0d7f43225f Merge branch 'dev' v1.3.1 oobabooga 2023-07-19 07:20:13 -0700
  • 08c23b62c7 Bump llama-cpp-python and transformers oobabooga 2023-07-19 07:19:12 -0700
  • 3df32b388a Bump fastapi from 0.95.2 to 0.100.0 (#7) dependabot[bot] 2023-07-19 02:19:03 +0100
  • bf2ae1ec31 Bump transformers from 4.30.2 to 4.31.0 (#8) dependabot[bot] 2023-07-19 02:18:53 +0100
  • 88c069cf4c Bump gradio from 3.33.1 to 3.37.0 (#2) dependabot[bot] 2023-07-19 02:16:43 +0100
  • ce40c85823 Bump llama-cpp-python from 0.1.66 to 0.1.73 (#4) dependabot[bot] 2023-07-19 02:15:53 +0100
  • 47264681b7 Bump gradio-client from 0.2.5 to 0.2.10 (#5) dependabot[bot] 2023-07-19 02:15:30 +0100
  • 4f727f5b9f Merge branch 'oobabooga:main' into cognitage Ricardo Pinto 2023-07-19 02:14:00 +0100
  • e946452f4f Added edge_tts extension Marco Tundo 2023-07-18 20:16:17 -0400
  • e8f995978b Merge branch 'main' into ts2 Shouyi 2023-07-19 09:54:56 +1000
  • edb8ebeb07 Merge branch 'oobabooga:main' into openai_update matatonic 2023-07-18 19:16:46 -0400
  • 5447e75191 Merge branch 'dev' oobabooga 2023-07-18 15:36:26 -0700
  • 8ec225f245 Add EOS/BOS tokens to Llama-2 template oobabooga 2023-07-18 15:35:27 -0700
  • 82fac28208 add gpt-4chan Eve 2023-07-18 18:04:29 -0400
  • 5958e14a79 encourage quantization for low ram use cases Eve 2023-07-18 17:54:41 -0400
  • fdb4a22d6c Update README.md Eve 2023-07-18 17:39:10 -0400
  • c7ed22ea8a Merge branch 'oobabooga:main' into openai_update matatonic 2023-07-18 17:35:33 -0400
  • 08aa776064 clean up model download section Eve 2023-07-18 17:30:55 -0400
  • 5558abf7e0 Update README.md Eve 2023-07-18 17:25:54 -0400
  • 848e1e9a93 Rename llama.cpp-models.md to GGML-llama.cpp-models.md Eve 2023-07-18 17:24:57 -0400
  • 6934fd8757 Create GPT-4chan-model.md Eve 2023-07-18 17:15:47 -0400
  • f9d3688995 add rope documentation for #3083 Eve 2023-07-18 16:55:01 -0400
  • cb92fd9240 Bump transformers from 4.30.2 to 4.31.0 dependabot[bot] 2023-07-18 20:34:02 +0000
  • 2d215dd212 Bump llama-cpp-python from 0.1.72 to 0.1.73 dependabot[bot] 2023-07-18 20:33:54 +0000
  • 3ef49397bb Merge pull request #3195 from oobabooga/dev v1.3 oobabooga 2023-07-18 17:33:11 -0300
  • 070a886278 Revert "Prevent lists from flickering in chat mode while streaming" oobabooga 2023-07-18 13:23:29 -0700