Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-10 12:30:50 +01:00
llama.cpp / gguf-py / gguf
Latest commit c1386c936e by pmysl: gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761), 2024-04-21 15:49:30 +03:00
__init__.py         gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
constants.py        gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761)                                       2024-04-21 15:49:30 +03:00
gguf_reader.py      gguf : add support for I64 and F64 arrays (#6062)                                     2024-03-15 10:46:51 +02:00
gguf_writer.py      convert : support models with multiple chat templates (#6588)                         2024-04-18 14:49:01 +03:00
gguf.py             gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
py.typed            convert : various script cleanups/fixes + merges and special token handling (#2842)   2023-08-30 11:25:50 +03:00
tensor_mapping.py   llama : add qwen2moe (#6074)                                                          2024-04-16 18:40:48 +03:00
vocab.py            convert : support models with multiple chat templates (#6588)                         2024-04-18 14:49:01 +03:00
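The modules listed above implement reading and writing of GGUF model files (gguf_reader.py and gguf_writer.py in particular). As an illustrative sketch only, not the gguf-py API itself, the fixed-size header at the start of every GGUF file can be parsed with the standard library; the layout assumed here (4-byte magic "GGUF", little-endian uint32 version, uint64 tensor count, uint64 metadata key-value count) follows the GGUF v3 format:

```python
import struct

GGUF_MAGIC = b"GGUF"

def parse_gguf_header(data: bytes) -> dict:
    """Parse the leading 24-byte GGUF header from raw bytes.

    Layout (little-endian): 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata key-value count.
    This is a sketch of the format, not the gguf-py reader.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}

# Build a synthetic header to exercise the parser.
header = struct.pack("<4sIQQ", GGUF_MAGIC, 3, 2, 5)
print(parse_gguf_header(header))  # {'version': 3, 'n_tensors': 2, 'n_kv': 5}
```

In practice one would use gguf_reader.py from this package, which also decodes the metadata key-value pairs and tensor info blocks that follow the header.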