OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly ( #743 )
...
* Add greetings field
* add greeting field and make it interactive
* Minor changes
* Fix a bug
* Simplify clear_chat_log
* Change a label
* Minor change
* Simplifications
* Simplification
* Simplify loading the default character history
* Fix regression
---------
Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
Alex "mcmonkey" Goodwin
8b1f20aa04
Fix some old JSON characters not loading ( #740 )
2023-04-03 10:49:28 -03:00
oobabooga
8b442305ac
Rename another variable
2023-04-03 01:15:20 -03:00
oobabooga
08448fb637
Rename a variable
2023-04-03 01:02:11 -03:00
oobabooga
2a267011dc
Use Path.stem for simplicity
2023-04-03 00:56:14 -03:00
Alex "mcmonkey" Goodwin
ea97303509
Apply dialogue format in all character fields not just example dialogue ( #650 )
2023-04-02 21:54:29 -03:00
TheTerrasque
2157bb4319
New yaml character format ( #337 from TheTerrasque/feature/yaml-characters )
...
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
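Since YAML is close to a superset of JSON, keeping both formats loadable is straightforward. A minimal sketch of such a dual-format loader, assuming the usual file extensions (none of this is taken from the commit itself):

```
import json
from pathlib import Path

import yaml  # PyYAML


def load_character(path: Path) -> dict:
    """Load a character file in either the new YAML or the old JSON format."""
    text = path.read_text(encoding="utf-8")
    if path.suffix in (".yml", ".yaml"):
        return yaml.safe_load(text)
    return json.loads(text)  # legacy JSON characters keep working
```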
oobabooga
5f3f3faa96
Better handle CUDA out of memory errors in chat mode
2023-04-02 17:48:00 -03:00
oobabooga
b0890a7925
Add shared.is_chat() function
2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py
2023-04-01 13:56:47 -03:00
oobabooga
fcda3f8776
Add also_return_rows to generate_chat_prompt
2023-04-01 01:12:13 -03:00
oobabooga
2c52310642
Add --threads flag for llama.cpp
2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713
Fix streaming
2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd
Add repetition_penalty
2023-03-31 19:01:34 -03:00
oobabooga
2259143fec
Fix llama.cpp with --no-stream
2023-03-31 18:43:45 -03:00
oobabooga
3a47a602a3
Detect ggml*.bin files automatically
2023-03-31 17:18:21 -03:00
oobabooga
0aee7341d8
Properly count tokens/s for llama.cpp in chat mode
2023-03-31 17:04:32 -03:00
oobabooga
ea3ba6fc73
Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp
2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb
Add repetition_penalty
2023-03-31 14:45:17 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp
2023-03-31 14:37:04 -03:00
oobabooga
4c27562157
Minor changes
2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a
General improvements
2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd
Add seed to settings
2023-03-31 12:22:07 -03:00
Maya
b246d17513
Fix type object is not subscriptable
...
Fix `type object is not subscriptable` on Python 3.8
2023-03-31 14:20:31 +03:00
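For context: built-in generics such as `list[str]` only became subscriptable in Python 3.9 (PEP 585), so on 3.8 they raise exactly this `TypeError` when the annotation is evaluated. A minimal before/after:

```
from typing import List

# def f(xs: list[str]): ...  # Python 3.8: TypeError: 'type' object
#                            # is not subscriptable

def f(xs: List[str]) -> int:  # works on 3.8 and later
    return len(xs)
```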
oobabooga
d4a9b5ea97
Remove redundant preset (see the plot in #587 )
2023-03-30 17:34:44 -03:00
Thomas Antony
7fa5d96c22
Update to use new llamacpp API
2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e
Add support for alpaca
2023-03-30 11:23:04 +01:00
Thomas Antony
a5f5736e74
Add to text_generation.py
2023-03-30 11:22:38 +01:00
Thomas Antony
7745faa7bb
Add llamacpp to models.py
2023-03-30 11:22:37 +01:00
Thomas Antony
7a562481fa
Initial version of llamacpp_model.py
2023-03-30 11:22:07 +01:00
oobabooga
a21e580782
Move an import
2023-03-29 22:50:58 -03:00
oobabooga
55755e27b9
Don't hardcode prompts in the settings dict/json
2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160
Adapt to the new model names
2023-03-29 21:47:36 -03:00
oobabooga
58349f44a0
Handle training exception for unsupported models
2023-03-29 11:55:34 -03:00
oobabooga
a6d0373063
Fix training dataset loading #636
2023-03-29 11:48:17 -03:00
oobabooga
1edfb96778
Fix loading extensions from within the interface
2023-03-28 23:27:02 -03:00
oobabooga
304f812c63
Gracefully handle CUDA out of memory errors with streaming
2023-03-28 19:20:50 -03:00
oobabooga
010b259dde
Update documentation
2023-03-28 17:46:00 -03:00
oobabooga
0bec15ebcd
Reorder imports
2023-03-28 17:34:15 -03:00
Maya Eary
41ec682834
Disable kernel threshold for gpt-j
2023-03-28 22:45:38 +03:00
Maya
1ac003d41c
Merge branch 'oobabooga:main' into feature/gpt-j-4bit-v2
2023-03-28 22:30:39 +03:00
Maya Eary
1c075d8d21
Fix typo
2023-03-28 20:43:50 +03:00
Maya Eary
c8207d474f
Generalized load_quantized
2023-03-28 20:38:55 +03:00
oobabooga
8579fe51dd
Fix new lines in the HTML tab
2023-03-28 12:59:34 -03:00
Alex "mcmonkey" Goodwin
e817fac542
better defaults
2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin
2e08af4edf
implement initial Raw Text File Input
...
also bump default Rank & Alpha to values that will make sense in testing if you don't know what you're doing and leave the defaults (see the sketch below).
2023-03-27 22:15:32 -07:00
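Rank and Alpha here are the standard LoRA hyperparameters. As a rough illustration of where such defaults land, using the peft API (the numbers and target modules below are assumptions, not the commit's actual values):

```
from peft import LoraConfig

config = LoraConfig(
    r=32,            # rank: capacity of the low-rank update
    lora_alpha=64,   # scaling factor; the effective scale is alpha / r
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```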
Alex "mcmonkey" Goodwin
b749952fe3
change number minimums to 0
...
gradio calculates 'step' relative to the minimum, so at '1' the step values were all offset awkwardly. 0 isn't valid, but, uh, just don't slam the slider to the left.
2023-03-27 21:22:43 -07:00
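The quirk being worked around: gradio offsets slider steps from the minimum, so `minimum=1, step=256` yields 1, 257, 513, ... instead of round values. A small sketch with illustrative numbers:

```
import gradio as gr

with gr.Blocks() as demo:
    # minimum=0 keeps the steps on round values (0, 256, 512, ...);
    # 0 itself isn't a valid setting, so just don't pick it.
    rank = gr.Slider(minimum=0, maximum=1024, step=256, value=256,
                     label="Rank")
```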
Alex "mcmonkey" Goodwin
ec6224f556
use new shared.args.lora_dir
2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin
31f04dc615
Merge branch 'main' into add-train-lora-tab
2023-03-27 20:03:30 -07:00
oobabooga
53da672315
Fix FlexGen
2023-03-27 23:44:21 -03:00
oobabooga
ee95e55df6
Fix RWKV tokenizer
2023-03-27 23:42:29 -03:00
oobabooga
036163a751
Change description
2023-03-27 23:39:26 -03:00
oobabooga
005f552ea3
Some simplifications
2023-03-27 23:29:52 -03:00
oobabooga
fde92048af
Merge branch 'main' into catalpaaa-lora-and-model-dir
2023-03-27 23:16:44 -03:00
Alex "mcmonkey" Goodwin
8a97f6ba29
corrections per the PR comments
2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin
7fab7ea1b6
couple missed camelCases
2023-03-27 18:19:06 -07:00
Alex "mcmonkey" Goodwin
6368dad7db
Fix camelCase to snake_case to match repo format standard
2023-03-27 18:17:42 -07:00
oobabooga
2f0571bfa4
Small style changes
2023-03-27 21:24:39 -03:00
oobabooga
c2cad30772
Merge branch 'main' into mcmonkey4eva-add-train-lora-tab
2023-03-27 21:05:44 -03:00
Alex "mcmonkey" Goodwin
9ced75746d
add total time estimate
2023-03-27 10:57:27 -07:00
Alex "mcmonkey" Goodwin
16ea4fc36d
interrupt button
2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin
8fc723fc95
initial progress tracker in UI
2023-03-27 10:25:08 -07:00
oobabooga
48a6c9513e
Merge pull request #572 from clusterfudge/issues/571
...
Potential fix for issues/571
2023-03-27 14:06:38 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850
add some outputs to indicate progress updates (sorta)
...
Actual progress bar still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900
Change Stop button behavior
2023-03-27 13:23:59 -03:00
Alex "mcmonkey" Goodwin
d911c22af9
use shared rows to make the LoRA Trainer interface a bit more compact / clean
2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin
e439228ed8
Merge branch 'main' into add-train-lora-tab
2023-03-27 08:21:19 -07:00
oobabooga
3dc61284d5
Handle unloading LoRA from dropdown menu icon
2023-03-27 00:04:43 -03:00
oobabooga
1c77fdca4c
Change notebook mode appearance
2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size ( #530 )
...
**Warning: old 4-bit weights will not work anymore!**
See here for how to get up-to-date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Sean Fitzgerald
0bac80d9eb
Potential fix for issues/571
2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1
make 'model' variables less ambiguous
2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e
document options better
2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0
fix error from prepare call running twice in a row
2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c
automatically strip empty data entries (for better alpaca dataset compat)
2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a
initial lora training tab
2023-03-25 12:08:26 -07:00
oobabooga
8c8e8b4450
Fix the early stopping callback #559
2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
...
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c
Merge branch 'oobabooga:main' into lora-and-model-dir
2023-03-25 01:28:33 -07:00
oobabooga
25be9698c7
Fix LoRA on mps
2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
...
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
catalpaaa
b37c54edcf
lora-dir, model-dir and login auth
...
Added lora-dir, model-dir, and a login auth argument pointing to a file that contains usernames and passwords in the format "u:pw,u:pw,..." (see the sketch below).
2023-03-24 17:30:18 -07:00
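A minimal sketch of parsing that credentials format (the function name and the gradio hookup are illustrative, not taken from the commit):

```
from pathlib import Path
from typing import List, Tuple


def load_credentials(path: str) -> List[Tuple[str, str]]:
    """Parse a file containing "u:pw,u:pw,..." into (user, password) pairs."""
    text = Path(path).read_text(encoding="utf-8").strip()
    pairs = [entry.split(":", 1) for entry in text.split(",") if entry]
    return [(user, pw) for user, pw in pairs]


# gradio accepts such pairs directly, e.g. demo.launch(auth=load_credentials(...))
```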
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident)
2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident)
2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py
2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py
2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
...
This allows the bot to send a message on its own when the user clicks Generate with an empty input (see the sketch below).
2023-03-24 17:22:22 -03:00
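A hedged sketch of the idea with hypothetical helper names: when the input is empty, the prompt builder omits the user turn entirely, so the model writes a bot turn on its own:

```
def build_prompt(history, user_input, user="You", bot="Bot"):
    # Hypothetical prompt builder, for illustration only.
    rows = [f"{user}: {u}\n{bot}: {b}" for u, b in history]
    if user_input.strip():
        rows.append(f"{user}: {user_input}\n{bot}:")
    else:
        rows.append(f"{bot}:")  # empty input: the bot just continues
    return "\n".join(rows)
```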
oobabooga
6e1b16c2aa
Update html_generator.py
2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py
2023-03-24 17:17:29 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload
2023-03-24 16:54:41 -03:00
oobabooga
8fad84abc2
Update extensions.py
2023-03-24 16:51:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
...
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts
2023-03-24 02:03:30 -03:00
EyeDeck
dcfd866402
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import
2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import
2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py
2023-03-23 22:12:24 -03:00
oobabooga
b0abb327d8
Update LoRA.py
2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs
2023-03-23 21:56:26 -03:00
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode
2023-03-23 21:38:20 -03:00
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt)
2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names ( #518 )
...
* Update shared.py
* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode
2023-03-23 01:05:13 -03:00
oobabooga
eac27f4f55
Make LoRAs work in 16-bit mode
2023-03-23 00:55:33 -03:00
oobabooga
bfa81e105e
Fix FlexGen streaming
2023-03-23 00:22:14 -03:00
oobabooga
de6a09dc7f
Properly separate the original prompt from the reply
2023-03-23 00:12:40 -03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab
2023-03-22 15:40:20 -03:00
oobabooga
45b7e53565
Only catch proper Exceptions in the text generation function
2023-03-20 20:36:02 -03:00
oobabooga
db4219a340
Update comments
2023-03-20 16:40:08 -03:00
oobabooga
7618f3fe8c
Add --gptq-pre-layer for 4-bit offloading ( #460 )
...
This works in a 4GB card now:
```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
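# assumed reading: --gptq-pre-layer 20 puts the first 20 layers on the GPU, the rest on the CPU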
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy
e96687b1d6
Do not send empty user input as part of the prompt.
...
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga
9a3bed50c3
Attempt at fixing 4-bit with CPU offload
2023-03-20 15:11:56 -03:00
Vladimir Belitskiy
ca47e016b4
Do not display empty user messages in chat mode.
...
There doesn't seem to be much value to them - they just take up space while also making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga
75a7a84ef2
Exception handling ( #454 )
...
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga
ddb62470e9
--no-cache and --gpu-memory in MiB for fine VRAM control
2023-03-19 19:21:41 -03:00
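This maps onto the `max_memory` argument of transformers' `from_pretrained`; a sketch with illustrative numbers (the flag-to-dict plumbing is assumed, not shown in the commit):

```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b-hf",                       # illustrative path
    device_map="auto",
    max_memory={0: "3500MiB", "cpu": "32GiB"},  # MiB allows finer caps than whole GiB
    use_cache=False,                            # what --no-cache trades speed for
)
```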
oobabooga
a78b6508fc
Make custom LoRAs work by default #385
2023-03-19 12:11:35 -03:00
Maya
acdbd6b708
Check if app should display extensions ui
2023-03-19 13:31:21 +00:00
Maya
81c9d130f2
Fix global
2023-03-19 13:25:49 +00:00
Maya
099d7a844b
Add setup method to extensions
2023-03-19 13:22:24 +00:00
oobabooga
c753261338
Disable stop_at_newline by default
2023-03-18 10:55:57 -03:00
oobabooga
7c945cfe8e
Don't include PeftModel every time
2023-03-18 10:55:24 -03:00
oobabooga
e26763a510
Minor changes
2023-03-17 22:56:46 -03:00
Wojtek Kowaluk
7994b580d5
clean up duplicated code
2023-03-18 02:27:26 +01:00
Wojtek Kowaluk
30939e2aee
add mps support on apple silicon
2023-03-18 00:56:23 +01:00
oobabooga
9256e937d6
Add some LoRA params
2023-03-17 17:45:28 -03:00
oobabooga
9ed2c4501c
Use markdown in the "HTML" tab
2023-03-17 16:06:11 -03:00
oobabooga
f0b26451b4
Add a comment
2023-03-17 13:07:17 -03:00
oobabooga
3bda907727
Merge pull request #366 from oobabooga/lora
...
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga
614dad0075
Remove unused import
2023-03-17 11:43:11 -03:00
oobabooga
a717fd709d
Sort the imports
2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74
Remove LoRA tab, move it into the Parameters menu
2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e
Several QoL changes related to LoRA
2023-03-17 11:24:52 -03:00
askmyteapot
53b6a66beb
Update GPTQ_Loader.py
...
Correcting decoder layer for renamed class.
2023-03-17 18:34:13 +10:00
oobabooga
0cecfc684c
Add files
2023-03-16 21:35:53 -03:00
oobabooga
104293f411
Add LoRA support
2023-03-16 21:31:39 -03:00
oobabooga
ee164d1821
Don't split the layers in 8-bit mode by default
2023-03-16 18:22:16 -03:00
oobabooga
e085cb4333
Small changes
2023-03-16 13:34:23 -03:00
awoo
83cb20aad8
Add support for --gpu-memory with --load-in-8bit
2023-03-16 18:42:53 +03:00
oobabooga
1c378965e1
Remove unused imports
2023-03-16 10:18:34 -03:00
oobabooga
a577fb1077
Keep GALACTICA special tokens ( #300 )
2023-03-16 00:46:59 -03:00
oobabooga
4d64a57092
Add Interface mode tab
2023-03-15 23:29:56 -03:00
oobabooga
66256ac1dd
Make the "no GPU has been detected" message more descriptive
2023-03-15 19:31:27 -03:00
oobabooga
c1959c26ee
Show/hide the extensions block using javascript
2023-03-15 16:35:28 -03:00
oobabooga
348596f634
Fix broken extensions
2023-03-15 15:11:16 -03:00
oobabooga
c5f14fb9b8
Optimize the HTML generation speed
2023-03-15 14:19:28 -03:00
oobabooga
bf812c4893
Minor fix
2023-03-15 14:05:35 -03:00
oobabooga
05ee323ce5
Rename a file
2023-03-15 13:26:32 -03:00
oobabooga
d30a14087f
Further reorganize the UI
2023-03-15 13:24:54 -03:00
oobabooga
cf2da86352
Prevent *Is typing* from disappearing instantly while streaming
2023-03-15 12:51:13 -03:00
oobabooga
ec972b85d1
Move all css/js into separate files
2023-03-15 12:35:11 -03:00
oobabooga
693b53d957
Merge branch 'main' into HideLord-main
2023-03-15 12:08:56 -03:00
oobabooga
1413931705
Add a header bar and redesign the interface ( #293 )
2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6
Add 'hallucinations' filter #326
...
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
oobabooga
afc5339510
Remove "eval" statements from text generation functions
2023-03-14 16:04:17 -03:00
oobabooga
265ba384b7
Rename a file, add deprecation warning for --load-in-4bit
2023-03-14 07:56:31 -03:00
oobabooga
3da73e409f
Merge branch 'main' into Zerogoki00-opt4-bit
2023-03-14 07:50:36 -03:00
oobabooga
3fb8196e16
Implement "*Is recording a voice message...*" for TTS #303
2023-03-13 22:28:00 -03:00
oobabooga
518e5c4244
Some minor fixes to the GPTQ loader
2023-03-13 16:45:08 -03:00
Ayanami Rei
8778b756e6
use updated load_quantized
2023-03-13 22:11:40 +03:00
Ayanami Rei
a6a6522b6a
determine model type from model name
2023-03-13 22:11:32 +03:00
Ayanami Rei
b6c5c57f2e
remove default value from argument
2023-03-13 22:11:08 +03:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main
2023-03-13 19:50:08 +02:00
Ayanami Rei
e1c952c41c
make argument case-insensitive
2023-03-13 20:22:38 +03:00
Ayanami Rei
3c9afd5ca3
rename method
2023-03-13 20:14:40 +03:00
Ayanami Rei
1b99ed61bc
add argument --gptq-model-type and remove duplicate arguments
2023-03-13 20:01:34 +03:00
Ayanami Rei
edbc61139f
use new quant loader
2023-03-13 20:00:38 +03:00
Ayanami Rei
345b6dee8c
refactor quant model loader and add support for OPT
2023-03-13 19:59:57 +03:00
oobabooga
66b6971b61
Update README
2023-03-13 12:44:18 -03:00
oobabooga
ddea518e0f
Document --auto-launch
2023-03-13 12:43:33 -03:00
oobabooga
372363bc3d
Fix GPTQ load_quant call on Windows
2023-03-13 12:07:02 -03:00
oobabooga
0c224cf4f4
Fix GALACTICA ( #285 )
2023-03-13 10:32:28 -03:00
oobabooga
2c4699a7e9
Change a comment
2023-03-13 00:20:02 -03:00
oobabooga
0a7acb3bd9
Remove redundant comments
2023-03-13 00:12:21 -03:00
oobabooga
77294b27dd
Use str(Path) instead of os.path.abspath(Path)
2023-03-13 00:08:01 -03:00
oobabooga
b9e0712b92
Fix Open Assistant
2023-03-12 23:58:25 -03:00
oobabooga
1ddcd4d0ba
Clean up silero_tts
...
This should only be used with --no-stream.
The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
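For reference, the hook in question: extensions implement `output_modifier`, which receives the finished reply. A minimal TTS-shaped sketch under that contract, with the synthesis call stubbed out:

```
def synthesize(text: str) -> str:
    """Hypothetical stand-in for the silero call; returns an audio file path."""
    return "extensions/silero_tts/outputs/reply.wav"


def output_modifier(string):
    # Called on the finished reply; with --no-stream this runs once per
    # message, which is exactly the contract the fix relies on.
    return f'<audio src="file/{synthesize(string)}" controls></audio>'
```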
HideLord
683556f411
Adding markdown support and slight refactoring.
2023-03-12 21:34:09 +02:00
oobabooga
cebe8b390d
Remove useless "substring_found" variable
2023-03-12 15:50:38 -03:00
oobabooga
4bcd675ccd
Add *Is typing...* to regenerate as well
2023-03-12 15:23:33 -03:00
oobabooga
c7aa51faa6
Use a list of eos_tokens instead of just a number
...
This might be the cause of LLaMA ramblings that some people have experienced.
2023-03-12 14:54:58 -03:00
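A runnable illustration of the difference, using a small stand-in model and assuming a transformers version whose `generate` accepts a list for `eos_token_id`:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")   # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The quick brown fox", return_tensors="pt").input_ids
out = model.generate(
    ids,
    max_new_tokens=20,
    # A list stops generation on ANY of these ids (EOS or a newline here);
    # a single int only ever stops on that one id.
    eos_token_id=[tok.eos_token_id, tok.encode("\n")[0]],
)
print(tok.decode(out[0]))
```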
oobabooga
d8bea766d7
Merge pull request #192 from xanthousm/main
...
Add text generation stream status to shared module, use for better TTS with auto-play
2023-03-12 13:40:16 -03:00
oobabooga
fda376d9c3
Use os.path.abspath() instead of str()
2023-03-12 12:41:04 -03:00
HideLord
8403152257
Fixing compatibility with GPTQ repo commit 2f667f7da051967566a5fb0546f8614bcd3a1ccd. Expects string and breaks on…
2023-03-12 17:28:15 +02:00
oobabooga
f3b00dd165
Merge pull request #224 from ItsLogic/llama-bits
...
Allow users to load 2, 3 and 4 bit llama models
2023-03-12 11:23:50 -03:00
oobabooga
65dda28c9d
Rename --llama-bits to --gptq-bits
2023-03-12 11:19:07 -03:00
oobabooga
fed3617f07
Move LLaMA 4-bit into a separate file
2023-03-12 11:12:34 -03:00
oobabooga
0ac562bdba
Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253
2023-03-12 10:46:16 -03:00
oobabooga
78901d522b
Remove unused imports
2023-03-12 08:59:05 -03:00
Xan
b3e10e47c0
Fix merge conflict in text_generation
...
- Need to update `shared.still_streaming = False` before the final `yield formatted_outputs`, so the position of some yields was shifted (see the sketch below).
2023-03-12 18:56:35 +11:00
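The ordering in miniature, with hypothetical names standing in for `shared.still_streaming`: the flag has to flip before the final yield so whoever consumes the last chunk already sees streaming as finished:

```
still_streaming = True  # stands in for shared.still_streaming


def generate_stream(chunks):
    global still_streaming
    still_streaming = True
    for i, chunk in enumerate(chunks):
        if i == len(chunks) - 1:
            still_streaming = False  # flip BEFORE the final yield
        yield chunk


for text in generate_stream(["Hel", "Hello", "Hello!"]):
    if not still_streaming:
        print("final:", text)  # safe point to fire TTS auto-play
```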
oobabooga
ad14f0e499
Fix regenerate (provisory way)
2023-03-12 03:42:29 -03:00
oobabooga
6e12068ba2
Merge pull request #258 from lxe/lxe/utf8
...
Load and save character files and chat history in UTF-8
2023-03-12 03:28:49 -03:00
oobabooga
e2da6b9685
Fix You You You appearing in chat mode
2023-03-12 03:25:56 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
...
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
Aleksey Smolenchuk
3f7c3d6559
No need to set encoding on binary read
2023-03-11 22:10:57 -08:00
oobabooga
341e135036
Various fixes in chat mode
2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk
3baf5fc700
Load and save chat history in utf-8
2023-03-11 21:40:01 -08:00
oobabooga
b0e8cb8c88
Various fixes in chat mode
2023-03-12 02:31:45 -03:00
unknown
433f6350bc
Load and save character files in UTF-8
2023-03-11 21:23:05 -08:00
oobabooga
0bd5430988
Use 'with' statement to better handle streaming memory
2023-03-12 02:04:28 -03:00