Commit Graph

  • ca9e33caa1 Added url_file function that downloads the file provided from API and feeds it into database. WIP nabelekm 2023-05-26 00:43:09 +0200
  • 2feaef1e1b Add 'scipy' to requirements.txt #2335 jllllll 2023-05-25 17:05:42 -0500
  • 07a4f0569f Update README.md to account for BnB Windows wheel (#2341) jllllll 2023-05-25 16:44:26 -0500
  • db6f167d0a Update README.md to account for BnB Windows wheel jllllll 2023-05-25 16:19:27 -0500
  • 971fa04f7b Update README.md Aitrepreneur 2023-05-25 23:46:06 +0300
  • c50424bee0 Update docker repo link Atinoda 2023-05-25 21:45:08 +0100
  • fda7894cc8 Update README.md Aitrepreneur 2023-05-25 23:43:27 +0300
  • 348f8b1221 Update README.md Aitrepreneur 2023-05-25 23:42:33 +0300
  • 627785960e Created using Colaboratory Aitrepreneur 2023-05-25 23:36:50 +0300
  • bd63952634 Moved superbooga to modules to use the functions and share collection with api nabelekm 2023-05-25 22:23:53 +0200
  • acfd876f29 Some qol changes to "Perplexity evaluation" oobabooga 2023-05-25 15:06:22 -0300
  • 8efdc01ffb Better default for compute_dtype oobabooga 2023-05-25 15:05:53 -0300
  • b279e30fec Merge branch 'main' into embedder-set-model toast22a 2023-05-26 01:14:01 +0800
  • 6b078e1926 Add document and query templating toast22a 2023-05-26 00:57:07 +0800
  • fc33216477 Small fix for n_ctx in llama.cpp oobabooga 2023-05-25 13:55:51 -0300
  • 35009c32f0 Beautify all CSS oobabooga 2023-05-25 13:12:34 -0300
  • 231305d0f5 Update README.md oobabooga 2023-05-25 12:05:08 -0300
  • 37d4ad012b Add a button for rendering markdown for any model oobabooga 2023-05-25 11:59:27 -0300
  • 9a43656a50 Add bitsandbytes note oobabooga 2023-05-25 11:21:52 -0300
  • b1b3bb6923 Improve environment isolation (#68) jllllll 2023-05-25 09:15:05 -0500
  • c8ce2e777b Add instructions for CPU mode users oobabooga 2023-05-25 10:57:52 -0300
  • 996c49daa7 Remove bitsandbytes installation step oobabooga 2023-05-25 10:50:20 -0300
  • 548f05e106 Add windows bitsandbytes wheel by jllllll oobabooga 2023-05-25 10:48:22 -0300
  • cf088566f8 Make llama.cpp read prompt size and seed from settings (#2299) DGdev91 2023-05-25 15:29:31 +0200
  • 7cbff3ddc6 Set an upper bound for n_ctx oobabooga 2023-05-25 10:27:58 -0300
  • ee674afa50 Add superbooga time weighted history retrieval (#2080) Luis Lopez 2023-05-25 21:22:45 +0800
  • 9bc3071959 Fix logger call in script.py toast22a 2023-05-25 20:20:03 +0800
  • 909ea81e66 Merge branch 'main' into main DGdev91 2023-05-25 14:00:56 +0200
  • 7c93f3b86d n_initial = -1 retrieves all chunks toast22a 2023-05-25 19:51:44 +0800
  • b82602be3b Fix get_documents_ids_distances() not getting distances from db toast22a 2023-05-25 18:08:58 +0800
  • a04266161d Update README.md oobabooga 2023-05-25 01:23:46 -0300
  • 361451ba60 Add --load-in-4bit parameter (#2320) oobabooga 2023-05-25 01:14:13 -0300
  • b817f686d4 Fix typo oobabooga 2023-05-25 00:54:51 -0300
  • fedc470a0a Update the requirements oobabooga 2023-05-25 00:53:20 -0300
  • d15b4a1eb3 Add more params oobabooga 2023-05-25 00:45:08 -0300
  • c52439cebb merge main SilverJim 2023-05-25 03:24:18 +0800
  • 8bfa772686 Replace openai-whisper implementation with faster_whisper that is four times faster and uses almost a third of memory Daniel Fernández 2023-05-24 20:41:09 +0200
  • 63ce5f9c28 Add back a missing bos token oobabooga 2023-05-24 13:54:36 -0300
  • 3c76f439e1 Add one more documentation resource oobabooga 2023-05-24 13:51:04 -0300
  • 9a918e2178 Add placeholder for more params oobabooga 2023-05-24 13:48:58 -0300
  • a3b103b0f8 Add load-in-4bit checkbox oobabooga 2023-05-24 13:41:55 -0300
  • 49e8e083d0 Fix typo oobabooga 2023-05-24 13:09:03 -0300
  • 202285ee6b Add --load-in-4bit parameter oobabooga 2023-05-24 13:07:45 -0300
  • 3cd7c5bdd0 LoRA Trainer: train_only_after option to control which part of your input to train on (#2315) Alex "mcmonkey" Goodwin 2023-05-24 08:43:22 -0700
  • 8d919ad1d6 strip bos tokens with a more formal guarantee about it Alex "mcmonkey" Goodwin 2023-05-24 08:36:50 -0700
  • 8a52383970 Small fix to improve the logging to load a raw text dataset file when training a LoRA Nan-Do 2023-05-25 00:26:21 +0900
  • 2848844cc2 fix stripping of wrong token in before_tokens Alex "mcmonkey" Goodwin 2023-05-24 08:25:33 -0700
  • f8f67c85d8 intentionally break hf datasets cache Alex "mcmonkey" Goodwin 2023-05-24 07:58:58 -0700
  • 0d367f00a9 don't strip last token Alex "mcmonkey" Goodwin 2023-05-24 07:54:13 -0700
  • bdea2faccf fix training without 'train only after' Alex "mcmonkey" Goodwin 2023-05-24 07:38:00 -0700
  • 5789d19889 minor fix Alex "mcmonkey" Goodwin 2023-05-24 07:34:00 -0700
  • 9fae02c3b0 train_only_after, to disconnect prompt prefix from training Alex "mcmonkey" Goodwin 2023-05-24 07:30:55 -0700
  • 9967e08b1f update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264) eiery 2023-05-24 09:25:28 -0400
  • e50ade438a FIX silero_tts/elevenlabs_tts activation/deactivation (#2313) Gabriel Terrien 2023-05-24 15:06:38 +0200
  • df88066732 Also update elevenlabs_tts oobabooga 2023-05-24 10:04:52 -0300
  • fc116711b0 FIX save_model_settings function to also update shared.model_config (#2282) Gabriel Terrien 2023-05-24 15:01:07 +0200
  • 4c3eab988e Add the new entry to the dict oobabooga 2023-05-24 10:00:01 -0300
  • d37a28730d Beginning of multi-user support (#2262) flurb18 2023-05-24 08:38:20 -0400
  • b945803d24 Small changes oobabooga 2023-05-24 09:36:55 -0300
  • 48c75270e1 Small style changes oobabooga 2023-05-24 09:35:49 -0300
  • 577ee6385e FIX silero_tts deactivation RDeckard 2023-05-24 13:22:06 +0200
  • 798d683a65 FIX save_model_settings function to save also in shared.model_config RDeckard 2023-05-24 12:42:55 +0200
  • 9fab6348b9 Remove maximum from n_ctx in server.py DGdev91 2023-05-24 10:21:12 +0200
  • 5d711bc55e Revert changes in readme for no-mmap and n-gpu-layers DGdev91 2023-05-24 09:47:46 +0200
  • bb3c1afe4d Add new params n_ctx and llama_cpp_seed for llama models DGdev91 2023-05-24 09:44:35 +0200
  • 5e1ce7c5c2 updated to lock based Flurb 2023-05-24 00:44:39 -0400
  • 59e77cd072 Style changes oobabooga 2023-05-23 23:25:22 -0300
  • 260d904571 Remove unnecessary changes oobabooga 2023-05-23 23:23:56 -0300
  • b772bc9729 Merge branch 'main' into GitHub1712-main oobabooga 2023-05-23 23:19:53 -0300
  • 7dc87984a2 Fix spelling mistake in new name var of chat api (#2309) Anthony K 2023-05-23 21:03:03 -0500
  • c4dc29f944 Change a comment oobabooga 2023-05-23 22:59:46 -0300
  • 549a6332d5 Set max time weight instead of min oobabooga 2023-05-23 22:56:25 -0300
  • 7dcf3b6f76 Change a comment oobabooga 2023-05-23 22:44:58 -0300
  • f251bb89b4 Simplify the formula oobabooga 2023-05-23 22:42:07 -0300
  • 2ebc0356eb Merge branch 'main' into toast22a-history-time-weight oobabooga 2023-05-23 22:21:21 -0300
  • c2545813d0 Some simplifications oobabooga 2023-05-23 22:20:04 -0300
  • af97a4b5f8 Fix spelling mistake in new name var of chat api Anthony K 2023-05-23 20:09:55 -0500
  • 1490c0af68 Remove RWKV from requirements.txt oobabooga 2023-05-23 20:48:12 -0300
  • 7aed53559a Support of the --gradio-auth flag (#2283) Gabriel Terrien 2023-05-24 01:39:26 +0200
  • 4155aaa96a Add mention to alternative docker repository (#2145) Atinoda 2023-05-24 00:35:53 +0100
  • 5542730250 Update README.md oobabooga 2023-05-23 20:35:07 -0300
  • 7cc27ada63 Update Docker.md oobabooga 2023-05-23 20:31:12 -0300
  • 9714072692 [extensions/openai] use instruction templates with chat_completions (#2291) matatonic 2023-05-23 18:58:41 -0400
  • 74aae34beb Allow passing your name to the chat API oobabooga 2023-05-23 18:41:58 -0300
  • b7930806d7 Merge branch 'main' into embedder-set-model toast22a 2023-05-24 04:17:55 +0800
  • fb6a00f4e5 Small AutoGPTQ fix oobabooga 2023-05-23 15:20:01 -0300
  • c3af40344b Make llama.cpp read prompt size and seed from settings DGdev91 2023-05-23 12:49:23 +0200
  • c2d2ef7c13 Update Generation-parameters.md oobabooga 2023-05-23 02:11:28 -0300
  • b0845ae4e8 Update RWKV-model.md oobabooga 2023-05-23 02:10:08 -0300
  • cd3618d7fb Add support for RWKV in Hugging Face format oobabooga 2023-05-23 02:07:28 -0300
  • 75adc110d4 Fix "perplexity evaluation" progress messages oobabooga 2023-05-23 01:54:52 -0300
  • 4d94a111d4 memoize load_character to speed up the chat API oobabooga 2023-05-23 00:50:58 -0300
  • 17de95d4a7 use instruction templates with chat_completions Matthew Ashton 2023-05-22 21:47:52 -0400
  • 8b9ba3d7b4 Fix a typo oobabooga 2023-05-22 20:13:03 -0300
  • 0f51b64bb3 Add a "dark_theme" option to settings.json (#2288) Gabriel Terrien 2023-05-23 00:45:11 +0200
  • 7b7e3fb977 Add a default oobabooga 2023-05-22 19:43:50 -0300
  • c5446ae0e2 Fix a link oobabooga 2023-05-22 19:38:34 -0300
  • c0fd7f3257 Add mirostat parameters for llama.cpp (#2287) oobabooga 2023-05-22 19:37:24 -0300
  • b4814ddb73 Remove extra space oobabooga 2023-05-22 19:33:56 -0300
  • 2cca0a112f Reorder oobabooga 2023-05-22 19:31:46 -0300