Commit Graph

  • f62ff370b8 Merge branch 'dev' into tensorrt oobabooga 2024-06-23 19:54:39 -0700
  • 125bb7b03b Revert "Bump llama-cpp-python to 0.2.78" oobabooga 2024-06-23 19:54:28 -0700
  • 3dcd13c7aa Merge branch 'dev' into tensorrt oobabooga 2024-06-23 19:54:06 -0700
  • 5993904acf Fix several typos in the codebase (#6151) CharlesCNorton 2024-06-22 20:40:25 -0400
  • ee096fbd0b Merge remote-tracking branch 'origin/patch-6' into combined-pr PortfolioAI 2024-06-22 08:36:40 -0400
  • 911a3bcf96 Merge remote-tracking branch 'origin/patch-5' into combined-pr PortfolioAI 2024-06-22 08:36:40 -0400
  • a45c7ccca7 Merge remote-tracking branch 'origin/patch-4' into combined-pr PortfolioAI 2024-06-22 08:36:39 -0400
  • c4d36ba306 Merge remote-tracking branch 'origin/patch-3' into combined-pr PortfolioAI 2024-06-22 08:36:39 -0400
  • 8020ff3454 Merge remote-tracking branch 'origin/patch-2' into combined-pr PortfolioAI 2024-06-22 08:36:39 -0400
  • 0545285243 fix: typo in number of tabs "Four" -> "Five" CharlesCNorton 2024-06-21 09:00:26 -0400
  • 0454e8edd3 typo in documentation mirror message CharlesCNorton 2024-06-20 18:32:08 -0400
  • 42396f5bf0 typo in OpenAI API documentation CharlesCNorton 2024-06-20 18:18:03 -0400
  • 5a0af3a778 fix typo in Tokens tab description CharlesCNorton 2024-06-20 17:38:33 -0400
  • 2c5a9eb597 Change limits of RoPE scaling sliders in UI (#6142) GodEmperor785 2024-06-20 02:42:17 +0200
  • 5904142777 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2024-06-19 17:41:09 -0700
  • b10d735176 Minor CSS linting oobabooga 2024-06-19 17:40:33 -0700
  • 1f7fa8af52 Change limits of RoPE scaling sliders in UI GodEmperor785 2024-06-19 15:48:29 +0200
  • 151d5f25d3 Update numba requirement from ==0.59.* to ==0.60.* dependabot[bot] 2024-06-17 20:24:31 +0000
  • 4cd7cb3f73 Bump sse-starlette from 1.6.5 to 2.1.2 dependabot[bot] 2024-06-17 20:24:23 +0000
  • 77742d749e fix: typo in yield statement CharlesCNorton 2024-06-16 06:18:41 -0400
  • d2ece1dcf5 Fix typo: "Atttention" to "Attention" CharlesCNorton 2024-06-16 06:12:24 -0400
  • 229d89ccfb Make logs more readable, no more \u7f16\u7801 (#6127) Guanghua Lu 2024-06-16 10:00:13 +0800
  • fd7c3c5bb0 Don't git pull on installation (to make past releases installable) oobabooga 2024-06-15 06:38:05 -0700
  • 6235d43074 Merge branch 'oobabooga:main' into main Shixian Sheng 2024-06-15 09:11:43 -0400
  • 1f5f1612e3 Make logs more readable, no more \u7f16\u7801 Touch-Night 2024-06-15 15:34:05 +0800
  • b6eaf7923e Bump llama-cpp-python to 0.2.78 oobabooga 2024-06-14 21:22:09 -0700
  • 76c76584f3 Typing Artificiangel 2024-06-14 20:52:02 -0400
  • 9420973b62 Downgrade PyTorch to 2.2.2 (#6124) oobabooga 2024-06-14 16:42:03 -0300
  • c9d17603ff Update some wheels oobabooga 2024-06-14 12:40:32 -0700
  • 1576227f16 Fix GGUFs with no BOS token present, mainly qwen2 models. (#6119) Forkoz 2024-06-14 11:51:01 -0500
  • 0b39b5c73f Set bos_token = "" oobabooga 2024-06-14 09:50:20 -0700
  • 2123a2bda9 Downgrate PyTorch to 2.2.2 oobabooga 2024-06-14 09:49:00 -0700
  • fdd8fab9cf Bump hqq from 0.1.7.post2 to 0.1.7.post3 (#6090) dependabot[bot] 2024-06-14 13:46:35 -0300
  • 10601850d9 Fix after previous commit oobabooga 2024-06-13 19:54:12 -0700
  • 0f3a423de1 Alternative solution to "get next logits" deadlock (#6106) oobabooga 2024-06-13 19:33:15 -0700
  • 9aef01551d Revert "Use reentrant generation lock (#6107)" oobabooga 2024-06-13 17:53:07 -0700
  • 8930bfc5f4 Bump PyTorch, ExLlamaV2, flash-attention (#6122) oobabooga 2024-06-13 20:38:31 -0300
  • fb3eb11cb6 Bump flash-attention (Windows) oobabooga 2024-06-13 16:21:57 -0700
  • 3f1e7a9847 Bump flash-attention (Linux) oobabooga 2024-06-13 09:13:20 -0700
  • 34e45bff73 Update models_settings.py Forkoz 2024-06-13 08:47:06 -0500
  • 87dc23530c Bump ExLlamaV2 to 0.1.5 oobabooga 2024-06-13 06:17:24 -0700
  • bdfd608028 Bump PyTorch to 2.3.1 oobabooga 2024-06-13 05:54:02 -0700
  • 386500aa37 Avoid unnecessary calls UI -> backend, to make it faster oobabooga 2024-06-12 20:52:42 -0700
  • 0afed24269 Bump hqq from 0.1.5 to 0.1.7.post3 dependabot[bot] 2024-06-13 03:39:12 +0000
  • 3db5528aab Update gradio requirement from ==4.26.* to ==4.36.* dependabot[bot] 2024-06-13 03:39:12 +0000
  • 4820ae9aef Merge pull request #6118 from oobabooga/dev oobabooga 2024-06-13 00:38:03 -0300
  • 1d79aa67cf Fix flash-attn UI parameter to actually store true. (#6076) Forkoz 2024-06-13 03:34:54 +0000
  • 08a310a778 Undo unnecessary changes oobabooga 2024-06-12 20:30:08 -0700
  • 3abafee696 DRY sampler improvements (#6053) Belladore 2024-06-13 05:39:11 +0300
  • b556ee2800 lint oobabooga 2024-06-12 19:37:49 -0700
  • ed43c5e494 Merge branch 'dev' into belladoreai-dev-dry-optimization2 oobabooga 2024-06-12 19:36:49 -0700
  • b675151f25 Use reentrant generation lock (#6107) theo77186 2024-06-13 04:25:05 +0200
  • a36fa73071 Lint oobabooga 2024-06-12 19:00:21 -0700
  • 2d196ed2fe Remove obsolete pre_layer parameter oobabooga 2024-06-12 18:56:44 -0700
  • 46174a2d33 Fix error when bos_token_id is None. (#6061) Belladore 2024-06-13 04:52:27 +0300
  • 98443196d1 Merge branch 'oobabooga:dev' into dev Artificiangel 2024-06-12 21:05:35 -0400
  • 0270af4101 Revert "Use custom model/lora download folder in model downloader" Artificiangel 2024-06-12 21:04:48 -0400
  • 603375bfc5 Update models_settings.py: add default alpha_value, add proper compress_pos_emb for newer GGUFs mefich 2024-06-09 23:58:00 +0500
  • f8ab5bcdac Update docker-image.yml xerhab 2024-06-08 15:07:32 -0400
  • 7b7a29648d Merge pull request #8 from xerktech/syncrate xerhab 2024-06-07 15:43:46 -0400
  • 570fc68160 Update syncfork.yaml xerhab 2024-06-07 15:43:35 -0400
  • a4802381f1 Merge pull request #7 from xerktech/workflows4 xerhab 2024-06-07 15:30:20 -0400
  • 9092c40320 Update docker-image.yml xerhab 2024-06-07 15:30:02 -0400
  • 554c0d2ffb Merge pull request #6 from xerktech/workflow3 xerhab 2024-06-07 15:01:56 -0400
  • 24ad2ce8da Update docker-image.yml xerhab 2024-06-07 15:01:45 -0400
  • 63a8d6a789 Merge pull request #5 from xerktech/workflow2 xerhab 2024-06-07 14:59:39 -0400
  • e6b2cfc8d6 Update docker-image.yml xerhab 2024-06-07 14:59:16 -0400
  • 82f58c3172 Merge pull request #4 from xerktech/workflows xerhab 2024-06-07 14:56:19 -0400
  • 1d2cad2483 Update docker-image.yml xerhab 2024-06-07 14:54:55 -0400
  • b4631a7fb8 Delete .github/pull_request_template.md xerhab 2024-06-07 14:54:19 -0400
  • ad55eb867c Merge pull request #3 from xerktech/docker2-2 xerhab 2024-06-07 14:49:12 -0400
  • 88a3421e43 Update docker-image.yml xerhab 2024-06-07 14:48:39 -0400
  • ca96bcb5b1 Merge pull request #2 from xerktech/Docker-Workflow xerhab 2024-06-07 14:18:07 -0400
  • c85f59c95f Create docker-image.yml xerhab 2024-06-07 14:17:53 -0400
  • 77d769ef6e Merge pull request #1 from xerktech/xerhab-patch-1 xerhab 2024-06-07 14:00:31 -0400
  • 88319ff0e9 Update syncfork.yaml xerhab 2024-06-07 13:59:58 -0400
  • fe3a90141f Create syncfork.yaml xerhab 2024-06-07 13:52:06 -0400
  • 9cdc84df4b Use reentrant generation lock theo77186 2024-06-07 16:55:17 +0200
  • 00eed08b5e Update README.md Forkoz 2024-06-04 17:01:16 +0000
  • 8178540ffe Update ui_model_menu.py Guanghua Lu 2024-06-04 11:48:09 +0800
  • 9fee8ccde1 Update ui.py Guanghua Lu 2024-06-04 11:47:18 +0800
  • ae7414ae46 Update loaders.py Guanghua Lu 2024-06-04 11:46:10 +0800
  • 8517ee7342 Update gradio requirement from ==4.26.* to ==4.32.* dependabot[bot] 2024-06-03 20:32:55 +0000
  • 785f53ab07 Update optimum requirement from ==1.17.* to ==1.20.* dependabot[bot] 2024-06-03 20:32:40 +0000
  • 9372fb3cf9 Send your images to llamacpp_hf, wrong and slow Touch-Night 2024-06-03 21:12:44 +0800
  • 9a33e4f457 Accelerate DRYLogitsProcessor by using numpy on cpu (fix #5677) Jonas Tingeborn 2024-06-03 06:49:36 +0200
  • abaca1e0a9 Update extensions/openai/completions.py Koesn 2024-06-02 20:22:36 +0700
  • 605487ab32 Add files via upload Guanghua Lu 2024-06-02 19:20:35 +0800
  • d8caa3ca56 Multimodal support for llamacpp_hf, not working embed_tokens() is not working, need to fix it Touch-Night 2024-06-02 19:13:06 +0800
  • 2ac2ed9d5d improve GGUF metadata handling ddh0 2024-06-01 22:39:04 -0500
  • 98c6065615 Merge pull request #4 from blackmambaza/modules-html-gen-improvements blackmambaza 2024-06-02 00:44:22 +0200
  • 4696b1a362 Improved consistency of HTML generator blackmambaza 2024-06-02 00:43:50 +0200
  • 66697d913c Merge pull request #3 from blackmambaza/modules-ui-chat-performance-improvements blackmambaza 2024-06-02 00:35:30 +0200
  • b770d142da Improved performance and state management on the chat ui blackmambaza 2024-06-02 00:33:54 +0200
  • fb5a9387c6 Merge pull request #2 from blackmambaza/revert-1-modules-chat-improvements blackmambaza 2024-06-01 23:34:20 +0200
  • e55d57a01f Revert "Reduces cognitive complexity of chat module and improves string opera…" blackmambaza 2024-06-01 23:33:40 +0200
  • 71111d2854 Merge pull request #1 from blackmambaza/modules-chat-improvements blackmambaza 2024-06-01 23:32:40 +0200
  • d133ef1443 Reduces cognitive complexity of chat module and improves string operations blackmambaza 2024-06-01 23:30:07 +0200
  • bd2a18e089 Fix ui_model_menu.py Forkoz 2024-06-01 12:28:20 +0000
  • 776f320d28 Fix loaders.py Forkoz 2024-06-01 12:27:30 +0000