Author | Commit | Commit message | Date
oobabooga | 98ed6d3a66 | Don't use flash attention on Google Colab | 2024-07-23 19:50:56 -07:00
oobabooga | 9d5513fda0 | Remove the AutoAWQ requirement | 2024-07-23 19:38:04 -07:00
oobabooga | 8b52b93e85 | Make the Google Colab notebook functional again (attempt) | 2024-07-23 19:35:00 -07:00
oobabooga | e777b73349 | UI: prevent LaTeX from being rendered for inline "$" | 2024-07-23 19:04:19 -07:00
oobabooga | 1815877061 | UI: fix the default character not loading correctly on startup | 2024-07-23 18:48:10 -07:00
oobabooga | e6181e834a | Remove AutoAWQ as a standalone loader (it works better through transformers) | 2024-07-23 15:31:17 -07:00
oobabooga | f66ab63d64 | Bump transformers to 4.43 | 2024-07-23 14:06:34 -07:00
oobabooga | 95b3e98c36 | UI: Fix code syntax highlighting | 2024-07-22 23:08:48 -07:00
oobabooga | 3ee682208c | Revert "Bump hqq from 0.1.7.post3 to 0.1.8 (#6238)" (reverts commit 1c3671699c) | 2024-07-22 19:53:56 -07:00
oobabooga | 5e7f4ee88a | UI: simplify the interface load events | 2024-07-22 19:11:55 -07:00
oobabooga | 5c5e7264ec | Update README | 2024-07-22 18:20:01 -07:00
oobabooga | 7e73058943 | UI: fix h1/h2/h3/h4 color in light mode | 2024-07-22 18:18:02 -07:00
oobabooga | f18c947a86 | Update the tensorcores description | 2024-07-22 18:06:41 -07:00
oobabooga | aa809e420e | Bump llama-cpp-python to 0.2.83, add back tensorcore wheels (also add back the progress bar patch) | 2024-07-22 18:05:11 -07:00
oobabooga | 11bbf71aa5 | Bump back llama-cpp-python (#6257) | 2024-07-22 16:19:41 -03:00
oobabooga | 0f53a736c1 | Revert the llama-cpp-python update | 2024-07-22 12:02:25 -07:00
oobabooga | a687f950ba | Remove the tensorcores llama.cpp wheels (they are not faster than the default wheels anymore and they use a lot of space) | 2024-07-22 11:54:35 -07:00
oobabooga | 017d2332ea | Remove no longer necessary llama-cpp-python patch | 2024-07-22 11:50:36 -07:00
oobabooga | 7d2449f8b0 | Bump llama-cpp-python to 0.2.82.3 (unofficial build) | 2024-07-22 11:49:20 -07:00
oobabooga | f2d802e707 | UI: make Default/Notebook contents persist on page reload | 2024-07-22 11:07:10 -07:00
oobabooga | 8768b69a2d | Lint | 2024-07-21 22:08:14 -07:00
oobabooga | 79e8dbe45f | UI: minor optimization | 2024-07-21 22:06:49 -07:00
oobabooga | e1085180cf | UI: better handle scrolling when the input area grows | 2024-07-21 21:20:22 -07:00
oobabooga | 7ef2414357 | UI: Make the file saving dialogs more robust | 2024-07-21 15:38:20 -07:00
oobabooga | 423372d6e7 | Organize ui_file_saving.py | 2024-07-21 13:23:18 -07:00
oobabooga | af99e0697e | UI: increase the font weight of chat messages | 2024-07-21 10:45:27 -07:00
oobabooga | 17df2d7bdf | UI: don't export the instruction template on "Save UI defaults to settings.yaml" | 2024-07-21 10:45:01 -07:00
oobabooga | d05846eae5 | UI: refresh the pfp cache on handle_your_picture_change | 2024-07-21 10:17:22 -07:00
oobabooga | 58a1581b96 | Add missing dark_theme.js (oops) | 2024-07-21 09:47:55 -07:00
oobabooga | e9d4bff7d0 | Update the --tensor_split description | 2024-07-20 22:04:48 -07:00
oobabooga | 916d1d8283 | UI: improve the style of code blocks in light theme | 2024-07-20 20:32:57 -07:00
Patrick Leiser | 9b205f94a4 | Fix for issue #6024, don't auto-hide the chat contents (#6247) | 2024-07-21 00:05:28 -03:00
oobabooga | 564d8c8c0d | Make alpha_value a float number | 2024-07-20 20:02:54 -07:00
oobabooga | 79c4d3da3d | Optimize the UI (#6251) | 2024-07-21 00:01:42 -03:00
Alberto Cano | a14c510afb | Customize the subpath for gradio, use with reverse proxy (#5106) | 2024-07-20 19:10:39 -03:00
FartyPants (FP HAM) | 6ab477f375 | training: Added ChatML-format.json format example (#5899) | 2024-07-20 19:05:09 -03:00
Vhallo | a9a6d72d8c | Use gr.Number for RoPE scaling parameters (#6233) (co-authored-by: oobabooga) | 2024-07-20 18:57:09 -03:00
dependabot[bot] | 1c3671699c | Bump hqq from 0.1.7.post3 to 0.1.8 (#6238) | 2024-07-20 18:20:26 -03:00
oobabooga | aa7c14a463 | Use chat-instruct mode by default | 2024-07-19 21:43:52 -07:00
oobabooga | b19d239a60 | Bump flash-attention to 2.6.1 | 2024-07-12 20:16:11 -07:00
InvectorGator | 4148a9201f | Fix for MacOS users encountering model load errors (#6227) (co-authored-by: oobabooga, Invectorgator) | 2024-07-13 00:04:19 -03:00
oobabooga | 05676caf70 | Update README | 2024-07-11 16:25:52 -07:00
oobabooga | f5599656b4 | Update README | 2024-07-11 16:22:00 -07:00
oobabooga | d4eac58f2d | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2024-07-11 16:21:16 -07:00
oobabooga | a30ec2e7db | Update README | 2024-07-11 16:20:44 -07:00
dependabot[bot] | 063d2047dd | Update accelerate requirement from ==0.31.* to ==0.32.* (#6217) | 2024-07-11 19:56:42 -03:00
oobabooga | e436d69e2b | Add --no_xformers and --no_sdpa flags for ExllamaV2 | 2024-07-11 15:47:37 -07:00
oobabooga | 512b311137 | Improve the llama-cpp-python exception messages | 2024-07-11 13:00:29 -07:00
oobabooga | 01e4721da7 | Bump ExLlamaV2 to 0.1.7 | 2024-07-11 12:33:46 -07:00
oobabooga | fa075e41f4 | Bump llama-cpp-python to 0.2.82 | 2024-07-10 06:03:24 -07:00