Commit Graph

  • 4bb79c57ac One-click installer: change an info message oobabooga 2024-03-04 08:11:55 -0800
  • 74564fe8d0 One-click installer: delete the Miniconda installer after completion oobabooga 2024-03-04 08:11:03 -0800
  • dc2dd5b9d8 One-click installer: add an info message before git pull oobabooga 2024-03-04 08:00:39 -0800
  • 527ba98105 Do not install extensions requirements by default (#5621) oobabooga 2024-03-04 04:46:39 -0300
  • 5e5a037171 Update dockerfiles oobabooga 2024-03-03 23:43:27 -0800
  • d5cd216885 Update docs/colab notebook oobabooga 2024-03-03 23:42:14 -0800
  • 0b2e1aec91 Update a message oobabooga 2024-03-03 23:30:25 -0800
  • 73a5c829c3 Handle the WSL case oobabooga 2024-03-03 23:29:32 -0800
  • 30d9def2ae Do not install extensions requirements by default oobabooga 2024-03-03 23:21:49 -0800
  • fa4ce0eee8 One-click installer: minor change to CMD_FLAGS.txt in CPU mode oobabooga 2024-03-03 17:42:59 -0800
  • 8bd4960d05 Update PyTorch to 2.2 (also update flash-attn to 2.5.6) (#5618) oobabooga 2024-03-03 19:40:32 -0300
  • 9b170bd6d5 Bump flash-attention for windows oobabooga 2024-03-03 14:35:20 -0800
  • e6d5ff3c7c Merge branch 'dev' into update-pytorch oobabooga 2024-03-03 13:32:27 -0800
  • 70047a5c57 Bump bitsandbytes to 0.42.0 on Windows oobabooga 2024-03-03 13:19:27 -0800
  • 24e86bb21b Bump llama-cpp-python to 0.2.55 oobabooga 2024-03-03 12:14:48 -0800
  • b8b9d2d089 Update README oobabooga 2024-03-03 11:16:50 -0800
  • 3977ac2672 Lint oobabooga 2024-03-03 10:55:01 -0800
  • 6fbb31d79c Add back git pull oobabooga 2024-03-03 10:54:25 -0800
  • 60f3d87309 Merge pull request #5617 from oobabooga/dev snapshot-2024-03-03 oobabooga 2024-03-03 15:50:40 -0300
  • 314e42fd98 Fix transformers requirement oobabooga 2024-03-03 10:49:28 -0800
  • abee725690 Update ExLlamaV2 and flash attention oobabooga 2024-03-03 09:57:35 -0800
  • 57bf62b3b5 Various fixes oobabooga 2024-03-03 09:57:30 -0800
  • b30645a210 update pytorch oobabooga 2024-03-03 09:07:13 -0800
  • 71b1617c1b Remove bitsandbytes from incompatible requirements.txt files oobabooga 2024-03-03 08:24:54 -0800
  • cfb25c9b3f Cubic sampling w/ curve param (#5551) kalomaze 2024-03-03 10:22:21 -0600
  • be5a9abcf8 Change a comment oobabooga 2024-03-03 08:18:20 -0800
  • bb30014278 Prevent numerical overflow with -inf values oobabooga 2024-03-03 08:08:55 -0800
  • b7ca63c6a9 Minor changes oobabooga 2024-03-03 07:46:08 -0800
  • 3168644152 Training: Update llama2-chat-format.json (#5593) jeffbiocode 2024-03-04 00:42:14 +0900
  • f9b58c7344 Fix typo yhyu13 2024-02-10 11:08:05 +0800
  • 89bfa205cc Update openai api doc yhyu13 2024-01-21 20:53:25 +0800
  • c531794396 Improve debug message; Revert the None check on the assistant message in history, as it would break openai function call compliance (content should be None on a function call), and deliberately fill in the assistant message yhyu13 2024-01-21 20:28:19 +0800
  • f25f65ef86 Fill in function call content instead of letting it be None; Add SQL and parallel function call prompts to increase the function calling success rate; Leave JSON control char exceptions to the user yhyu13 2024-01-21 01:04:54 +0800
  • 643574531b Fix function calls from the assistant not being interpreted correctly in history messages yhyu13 2024-01-20 23:39:49 +0800
  • 85e24ab97b Fix function call escaping not using unicode; Add function calling retries if necessary; Lower the function call temperature yhyu13 2024-01-20 01:03:08 +0800
  • 6a906aa632 Escape control char in argument section of function call to pass json.loads yhyu13 2024-01-18 21:01:08 +0800
  • 95b76749f3 Fix openai api function calls returning str instead of dict; Make the assistant message None for openai function calls for compliance yhyu13 2024-01-18 18:20:10 +0800
  • e212f6466c Support parallel function call for openai api yhyu13 2024-01-18 01:52:43 +0800
  • 42c82f4d39 Add link to model in the function call section for openai api readme yhyu13 2024-01-17 13:14:47 +0800
  • 6765003b32 Use few-shot prompting to enhance the function calling success rate for LLMs not fine-tuned (SFT) on function calling yhyu13 2024-01-16 22:26:48 +0800
  • 67bdebfbb7 Move function calling context to standalone module yhyu13 2024-01-15 22:58:20 +0800
  • 171ff3d32d Improve prompt yhyu13 2024-01-06 15:23:49 +0000
  • 0dc2fc78ec Add function calling context handler for openai extension yhyu13 2024-01-06 14:02:57 +0000
  • 021a410c73 Merge branch 'oobabooga:main' into curve-test kalomaze 2024-03-01 00:13:07 -0600
  • 71dc5b4dee Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2024-02-28 19:59:20 -0800
  • 09b13acfb2 Perplexity evaluation: print to terminal after calculation is finished oobabooga 2024-02-28 19:58:21 -0800
  • ea71b50d4c Merge branch 'dev' into gradio4 oobabooga 2024-02-27 14:08:11 -0800
  • 534d5b78cf Update llama2-chat-format.json jeffbiocode 2024-02-27 22:17:19 +0900
  • dfdf6eb5b4 Bump hqq from 0.1.3 to 0.1.3.post1 (#5582) dependabot[bot] 2024-02-26 20:51:39 -0300
  • 332957ffec Bump llama-cpp-python to 0.2.52 oobabooga 2024-02-26 15:05:53 -0800
  • b7d23af198 Update gradio requirement from ==3.50.* to ==4.19.* dependabot[bot] 2024-02-26 20:21:47 +0000
  • efef927b09 Update transformers requirement from ==4.37.* to ==4.38.* dependabot[bot] 2024-02-26 20:21:16 +0000
  • 28bffa5258 Bump autoawq from 0.1.8 to 0.2.2 dependabot[bot] 2024-02-26 20:20:58 +0000
  • cf07b4d41d Bump hqq from 0.1.3 to 0.1.3.post1 dependabot[bot] 2024-02-26 20:20:29 +0000
  • b64770805b Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2024-02-26 08:51:31 -0800
  • 830168d3d4 Revert "Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into ram before hashing them. (#4383)" oobabooga 2024-02-26 05:54:33 -0800
  • f056081e49 Allow multiple system messages from api flurb18 2024-02-26 03:51:58 -0500
  • 21acf504ce Bump transformers to 4.38 for gemma compatibility (#5575) Bartowski 2024-02-25 18:15:13 -0500
  • f0f658522c Use 4.38.* oobabooga 2024-02-25 15:14:32 -0800
  • 4164e29416 Block the "To create a public link, set share=True" gradio message oobabooga 2024-02-25 15:06:08 -0800
  • ac01b083cc Bump transformers to 4.38.1 for gemma compatibility Colin 2024-02-25 17:56:13 -0500
  • ba852716fd Merge pull request #5574 from oobabooga/dev snapshot-2024-02-25 oobabooga 2024-02-25 14:29:35 -0300
  • d34126255d Fix loading extensions with "-" in the name (closes #5557) oobabooga 2024-02-25 09:24:52 -0800
  • 0f68c6fb5b Big picture fixes (#5565) Lounger 2024-02-25 18:10:16 +0100
  • 45c4cd01c5 Add llama 2 chat format for lora training (#5553) jeffbiocode 2024-02-25 14:36:36 +0900
  • e0fc808980 fix: ngrok logging does not use the shared logger module (#5570) Devin Roark 2024-02-25 00:35:59 -0500
  • 32ee5504ed Remove -k from curl command to download miniconda (#5535) oobabooga 2024-02-25 02:35:23 -0300
  • c07dc56736 Bump llama-cpp-python to 0.2.50 oobabooga 2024-02-24 21:34:11 -0800
  • 98580cad8e Bump exllamav2 to 0.0.14 oobabooga 2024-02-24 18:35:42 -0800
  • 54f684cfcf Take HF_ENDPOINT into consideration zaypen 2024-02-25 04:19:14 +0800
  • c42732fdff ngrok logging does not use the shared logger module Devin Roark 2024-02-23 20:34:25 +0000
  • ef1bb335b2 Update llama2-chat-format.json jeffbiocode 2024-02-23 16:55:58 +0900
  • dbd20c6b49 Merge branch 'dev' into gradio4 oobabooga 2024-02-22 20:45:52 -0800
  • 527f2652af Bump llama-cpp-python to 0.2.47 oobabooga 2024-02-22 19:48:49 -0800
  • 3f42e3292a Revert "Bump autoawq from 0.1.8 to 0.2.2 (#5547)" oobabooga 2024-02-22 19:48:04 -0800
  • 713137a87c Put character pfp on top Lounger 2024-02-23 02:04:16 +0100
  • f678d00424 Hide non-existent big picture Lounger 2024-02-23 01:26:13 +0100
  • 10aedc329f Logging: more readable messages when renaming chat histories oobabooga 2024-02-22 07:57:06 -0800
  • faf3bf2503 Perplexity evaluation: make UI events more robust (attempt) oobabooga 2024-02-21 20:27:25 -0800
  • ac5a7a26ea Perplexity evaluation: add some informative error messages oobabooga 2024-02-21 20:19:47 -0800
  • 33dcc46e53 Merge branch 'dev' into gradio4 oobabooga 2024-02-20 20:12:45 -0800
  • b63544de71 Update llama2-chat-format.json jeffbiocode 2024-02-20 17:55:03 +0900
  • 2ce51b8fad added llama2 chat format Jeff Powell 2024-02-20 15:57:55 +0900
  • 2765ab7454 Remove unused parameter oobabooga 2024-02-19 22:05:15 -0800
  • 6ec1b60743 Do it in relative terms oobabooga 2024-02-19 21:53:58 -0800
  • 59032140b5 Fix CFG with llamacpp_HF (2nd attempt) oobabooga 2024-02-19 18:35:42 -0800
  • c203c57c18 Fix CFG with llamacpp_HF oobabooga 2024-02-19 18:09:40 -0800
  • 76c73f747f Cubic sampling w/ curve param kalomaze 2024-02-19 18:08:37 -0600
  • 0554f4ed18 Merge branch 'dev' into gradio4 oobabooga 2024-02-19 14:16:16 -0800
  • 5f7dbf454a Update optimum requirement from ==1.16.* to ==1.17.* (#5548) dependabot[bot] 2024-02-19 19:15:21 -0300
  • d04fef6a07 Bump autoawq from 0.1.8 to 0.2.2 (#5547) dependabot[bot] 2024-02-19 19:14:55 -0300
  • ed6ff49431 Update accelerate requirement from ==0.25.* to ==0.27.* (#5546) dependabot[bot] 2024-02-19 19:14:04 -0300
  • d6bb6e7390 Merge pull request #5549 from oobabooga/dev oobabooga 2024-02-19 18:53:25 -0300
  • 10df23efb7 Remove message.content from openai streaming API (#5503) Kevin Pham 2024-02-19 13:50:27 -0800
  • c83f0f11f0 Merge branch 'dev' into deoxykev-main oobabooga 2024-02-19 13:47:44 -0800
  • 0b2279d031 Bump llama-cpp-python to 0.2.44 oobabooga 2024-02-19 13:42:31 -0800
  • da745fa30e Update optimum requirement from ==1.16.* to ==1.17.* dependabot[bot] 2024-02-19 20:13:23 +0000
  • 58f8b0e45b Bump autoawq from 0.1.8 to 0.2.2 dependabot[bot] 2024-02-19 20:13:19 +0000
  • be72ca9fcf Update accelerate requirement from ==0.25.* to ==0.27.* dependabot[bot] 2024-02-19 20:13:12 +0000
  • 9d9b6ddb27 Update gradio requirement from ==3.50.* to ==4.19.* dependabot[bot] 2024-02-19 20:13:06 +0000