oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
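A minimal sketch of what loading by extension could look like, assuming the standard safetensors API; this is illustrative, not the actual GPTQ-for-LLaMa patch:

```
from pathlib import Path

import torch
from safetensors.torch import load_file

def load_checkpoint(checkpoint_path):
    # Branch on the extension: .safetensors files need the safetensors
    # loader, while .pt/.bin checkpoints go through torch.load.
    path = Path(checkpoint_path)
    if path.suffix == '.safetensors':
        return load_file(str(path))
    return torch.load(str(path), map_location='cpu')
```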
catalpaaa
b37c54edcf
lora-dir, model-dir and login auth
Added lora-dir and model-dir arguments, plus a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
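A hedged sketch of how a "u:pw,u:pw,..." credentials file could be parsed into the (username, password) pairs that Gradio's launch(auth=...) accepts; the file name and helper are hypothetical:

```
from pathlib import Path

def read_auth_file(path):
    # Hypothetical helper: turns "u:pw,u:pw,..." into [(user, password), ...].
    text = Path(path).read_text().strip()
    return [tuple(pair.split(':', 1)) for pair in text.split(',') if pair]

# Usage (illustrative): interface.launch(auth=read_auth_file('logins.txt'))
```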
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident)
2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident)
2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py
2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py
2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
This allows the bot to send a message when you click on Generate with an empty input.
2023-03-24 17:22:22 -03:00
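A minimal sketch of the behavior described above, with hypothetical names; the user turn is only added to the prompt when the input box is non-empty:

```
def build_prompt(context, user_name, bot_name, user_input):
    # Only append a user turn when there is actual input, so clicking
    # Generate with an empty box lets the bot continue on its own.
    rows = [context]
    if user_input.strip():
        rows.append(f"{user_name}: {user_input}\n")
    rows.append(f"{bot_name}:")
    return ''.join(rows)
```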
oobabooga
6e1b16c2aa
Update html_generator.py
2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py
2023-03-24 17:17:29 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload
2023-03-24 16:54:41 -03:00
oobabooga
8fad84abc2
Update extensions.py
2023-03-24 16:51:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts
2023-03-24 02:03:30 -03:00
EyeDeck
dcfd866402
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import
2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import
2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py
2023-03-23 22:12:24 -03:00
oobabooga
b0abb327d8
Update LoRA.py
2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs
2023-03-23 21:56:26 -03:00
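Clearing the cache between LoRA loads usually follows a pattern like this sketch (the actual commit may differ):

```
import gc

import torch

def clear_torch_cache():
    # Drop dangling Python references first, then release cached CUDA
    # blocks so the next LoRA load starts from a clean allocator state.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```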
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode
2023-03-23 21:38:20 -03:00
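One common way to implement this is to truncate the reply where the model begins writing the user's next turn; a hypothetical sketch:

```
def trim_impersonation(reply, user_name):
    # Cut the reply at the point where the model starts speaking as the
    # user, so the bot cannot put words in your mouth.
    stop = f"\n{user_name}:"
    idx = reply.find(stop)
    return reply if idx == -1 else reply[:idx]
```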
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt)
2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names ( #518 )
* Update shared.py
* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode
2023-03-23 01:05:13 -03:00
oobabooga
eac27f4f55
Make LoRAs work in 16-bit mode
2023-03-23 00:55:33 -03:00
oobabooga
bfa81e105e
Fix FlexGen streaming
2023-03-23 00:22:14 -03:00
oobabooga
de6a09dc7f
Properly separate the original prompt from the reply
2023-03-23 00:12:40 -03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab
2023-03-22 15:40:20 -03:00
oobabooga
45b7e53565
Only catch proper Exceptions in the text generation function
2023-03-20 20:36:02 -03:00
oobabooga
db4219a340
Update comments
2023-03-20 16:40:08 -03:00
oobabooga
7618f3fe8c
Add --gptq-pre-layer for 4-bit offloading ( #460 )
This works on a 4GB card now:
```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy
e96687b1d6
Do not send empty user input as part of the prompt.
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga
9a3bed50c3
Attempt at fixing 4-bit with CPU offload
2023-03-20 15:11:56 -03:00
Vladimir Belitskiy
ca47e016b4
Do not display empty user messages in chat mode.
There doesn't seem to be much value to them: they just take up space while making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga
75a7a84ef2
Exception handling ( #454 )
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga
ddb62470e9
--no-cache and --gpu-memory in MiB for fine VRAM control
2023-03-19 19:21:41 -03:00
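A hypothetical invocation, assuming --gpu-memory accepts MiB-suffixed values as the title suggests (model name illustrative):

```
python server.py --model llama-7b-hf --gpu-memory 3500MiB --no-cache
```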
oobabooga
a78b6508fc
Make custom LoRAs work by default #385
2023-03-19 12:11:35 -03:00
Maya
acdbd6b708
Check if app should display extensions ui
2023-03-19 13:31:21 +00:00
Maya
81c9d130f2
Fix global
2023-03-19 13:25:49 +00:00
Maya
099d7a844b
Add setup method to extensions
2023-03-19 13:22:24 +00:00
oobabooga
c753261338
Disable stop_at_newline by default
2023-03-18 10:55:57 -03:00
oobabooga
7c945cfe8e
Don't include PeftModel every time
2023-03-18 10:55:24 -03:00
oobabooga
e26763a510
Minor changes
2023-03-17 22:56:46 -03:00
Wojtek Kowaluk
7994b580d5
clean up duplicated code
2023-03-18 02:27:26 +01:00
Wojtek Kowaluk
30939e2aee
add mps support on apple silicon
2023-03-18 00:56:23 +01:00
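Device selection for Apple silicon typically follows this pattern (a sketch, not necessarily the commit's exact code):

```
import torch

def get_device():
    # Prefer CUDA, then Apple-silicon MPS, then fall back to CPU.
    if torch.cuda.is_available():
        return torch.device('cuda')
    if torch.backends.mps.is_available():
        return torch.device('mps')
    return torch.device('cpu')
```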
oobabooga
9256e937d6
Add some LoRA params
2023-03-17 17:45:28 -03:00
oobabooga
9ed2c4501c
Use markdown in the "HTML" tab
2023-03-17 16:06:11 -03:00
oobabooga
f0b26451b4
Add a comment
2023-03-17 13:07:17 -03:00
oobabooga
3bda907727
Merge pull request #366 from oobabooga/lora
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga
614dad0075
Remove unused import
2023-03-17 11:43:11 -03:00
oobabooga
a717fd709d
Sort the imports
2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74
Remove LoRA tab, move it into the Parameters menu
2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e
Several QoL changes related to LoRA
2023-03-17 11:24:52 -03:00
askmyteapot
53b6a66beb
Update GPTQ_loader.py
Correcting the decoder layer for the renamed class.
2023-03-17 18:34:13 +10:00
oobabooga
0cecfc684c
Add files
2023-03-16 21:35:53 -03:00
oobabooga
104293f411
Add LoRA support
2023-03-16 21:31:39 -03:00
oobabooga
ee164d1821
Don't split the layers in 8-bit mode by default
2023-03-16 18:22:16 -03:00
oobabooga
e085cb4333
Small changes
2023-03-16 13:34:23 -03:00
awoo
83cb20aad8
Add support for --gpu-memory with --load-in-8bit
2023-03-16 18:42:53 +03:00
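Under the hood this likely maps the flag onto the max_memory budget that transformers accepts alongside load_in_8bit; a sketch with illustrative values:

```
from transformers import AutoModelForCausalLM

# Illustrative: cap GPU 0 at ~10 GiB while loading in 8-bit; the model
# path and budgets are placeholders, not values from the commit.
model = AutoModelForCausalLM.from_pretrained(
    'models/llama-7b-hf',
    load_in_8bit=True,
    device_map='auto',
    max_memory={0: '10GiB', 'cpu': '30GiB'},
)
```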
oobabooga
1c378965e1
Remove unused imports
2023-03-16 10:18:34 -03:00
oobabooga
a577fb1077
Keep GALACTICA special tokens ( #300 )
2023-03-16 00:46:59 -03:00
oobabooga
4d64a57092
Add Interface mode tab
2023-03-15 23:29:56 -03:00
oobabooga
66256ac1dd
Make the "no GPU has been detected" message more descriptive
2023-03-15 19:31:27 -03:00
oobabooga
c1959c26ee
Show/hide the extensions block using javascript
2023-03-15 16:35:28 -03:00
oobabooga
348596f634
Fix broken extensions
2023-03-15 15:11:16 -03:00
oobabooga
c5f14fb9b8
Optimize the HTML generation speed
2023-03-15 14:19:28 -03:00
oobabooga
bf812c4893
Minor fix
2023-03-15 14:05:35 -03:00
oobabooga
05ee323ce5
Rename a file
2023-03-15 13:26:32 -03:00
oobabooga
d30a14087f
Further reorganize the UI
2023-03-15 13:24:54 -03:00
oobabooga
cf2da86352
Prevent *Is typing* from disappearing instantly while streaming
2023-03-15 12:51:13 -03:00
oobabooga
ec972b85d1
Move all css/js into separate files
2023-03-15 12:35:11 -03:00
oobabooga
693b53d957
Merge branch 'main' into HideLord-main
2023-03-15 12:08:56 -03:00
oobabooga
1413931705
Add a header bar and redesign the interface ( #293 )
2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6
Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
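The fix on the client side is to include the newly added sampling parameter in the request; a hypothetical sketch (the real name and position are in api-example.py):

```
# Hypothetical request payload: the newly added sampling parameter must
# now be included (shown here as 'typical_p'; check api-example.py for
# the actual name and position).
params = {
    'max_new_tokens': 200,
    'do_sample': True,
    'temperature': 0.7,
    'top_p': 0.9,
    'typical_p': 1.0,  # the new parameter
}
```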
oobabooga
afc5339510
Remove "eval" statements from text generation functions
2023-03-14 16:04:17 -03:00
oobabooga
265ba384b7
Rename a file, add deprecation warning for --load-in-4bit
2023-03-14 07:56:31 -03:00
oobabooga
3da73e409f
Merge branch 'main' into Zerogoki00-opt4-bit
2023-03-14 07:50:36 -03:00
oobabooga
3fb8196e16
Implement "*Is recording a voice message...*" for TTS #303
2023-03-13 22:28:00 -03:00
oobabooga
518e5c4244
Some minor fixes to the GPTQ loader
2023-03-13 16:45:08 -03:00
Ayanami Rei
8778b756e6
use updated load_quantized
2023-03-13 22:11:40 +03:00
Ayanami Rei
a6a6522b6a
determine model type from model name
2023-03-13 22:11:32 +03:00
Ayanami Rei
b6c5c57f2e
remove default value from argument
2023-03-13 22:11:08 +03:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main
2023-03-13 19:50:08 +02:00
Ayanami Rei
e1c952c41c
make argument case-insensitive
2023-03-13 20:22:38 +03:00
Ayanami Rei
3c9afd5ca3
rename method
2023-03-13 20:14:40 +03:00
Ayanami Rei
1b99ed61bc
add argument --gptq-model-type and remove duplicate arguments
2023-03-13 20:01:34 +03:00
Ayanami Rei
edbc61139f
use new quant loader
2023-03-13 20:00:38 +03:00
Ayanami Rei
345b6dee8c
refactor quant models loader and add support of OPT
2023-03-13 19:59:57 +03:00
oobabooga
66b6971b61
Update README
2023-03-13 12:44:18 -03:00
oobabooga
ddea518e0f
Document --auto-launch
2023-03-13 12:43:33 -03:00
oobabooga
372363bc3d
Fix GPTQ load_quant call on Windows
2023-03-13 12:07:02 -03:00
oobabooga
0c224cf4f4
Fix GALACTICA ( #285 )
2023-03-13 10:32:28 -03:00
oobabooga
2c4699a7e9
Change a comment
2023-03-13 00:20:02 -03:00
oobabooga
0a7acb3bd9
Remove redundant comments
2023-03-13 00:12:21 -03:00
oobabooga
77294b27dd
Use str(Path) instead of os.path.abspath(Path)
2023-03-13 00:08:01 -03:00
oobabooga
b9e0712b92
Fix Open Assistant
2023-03-12 23:58:25 -03:00
oobabooga
1ddcd4d0ba
Clean up silero_tts
This should only be used with --no-stream.
The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
HideLord
683556f411
Adding markdown support and slight refactoring.
2023-03-12 21:34:09 +02:00
oobabooga
cebe8b390d
Remove useless "substring_found" variable
2023-03-12 15:50:38 -03:00
oobabooga
4bcd675ccd
Add *Is typing...* to regenerate as well
2023-03-12 15:23:33 -03:00
oobabooga
c7aa51faa6
Use a list of eos_tokens instead of just a number
This might be the cause of LLaMA ramblings that some people have experienced.
2023-03-12 14:54:58 -03:00
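One way to honor a list of end-of-sequence tokens with transformers is a custom stopping criterion; a sketch, not necessarily the commit's approach:

```
import torch
from transformers import StoppingCriteria

class EosListCriteria(StoppingCriteria):
    """Stop when the last generated token is any of several EOS ids."""

    def __init__(self, eos_token_ids):
        self.eos_token_ids = set(eos_token_ids)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # input_ids holds the running sequence; check its newest token.
        return int(input_ids[0, -1]) in self.eos_token_ids
```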