llama.cpp/gguf-py/gguf
Latest commit: c8cdb48d10 by Francis Couture-Harpin (2024-06-30 23:14:01 -04:00)

llama : support all OpenELM models

* llama : add variable GQA and variable FFN sizes

Some metadata keys can now also be arrays to support setting their value per-layer for models like OpenELM.
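To illustrate what this commit enables, here is a minimal sketch of writing array-valued (per-layer) metadata keys with the package's `GGUFWriter`. The file name, architecture choice, and all layer values below are illustrative only, not taken from a real OpenELM checkpoint:

```python
# Sketch: per-layer metadata arrays with gguf-py's GGUFWriter.
# The setters below previously took a single int; after this commit they
# also accept a sequence with one value per layer.
from gguf import GGUFWriter

writer = GGUFWriter("openelm-demo.gguf", "openelm")  # illustrative file name
writer.add_architecture()
writer.add_block_count(4)

# Variable GQA: query and KV head counts can differ per layer.
writer.add_head_count([12, 12, 16, 16])      # query heads, one entry per layer
writer.add_head_count_kv([3, 3, 4, 4])       # KV heads, one entry per layer

# Variable FFN sizes: one feed-forward width per layer.
writer.add_feed_forward_length([768, 1024, 1280, 1536])

# Write a metadata-only file (no tensors added in this sketch).
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.close()
```

When a sequence is passed, the key is stored as a GGUF array instead of a scalar; a plain `int` still produces the scalar form, so existing fixed-size models are unaffected.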
File               | Last commit                                                                          | Date
__init__.py        | convert-hf : support direct Q8_0 conversion (#7234)                                  | 2024-05-13 14:10:51 -04:00
constants.py       | llama : support all OpenELM models                                                   | 2024-06-30 23:14:01 -04:00
gguf_reader.py     | Gguf dump start data offset via --data-offset and some extra refactor (#8054)        | 2024-06-25 22:03:25 +10:00
gguf_writer.py     | llama : support all OpenELM models                                                   | 2024-06-30 23:14:01 -04:00
gguf.py            | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)            | 2023-11-11 08:04:50 +03:00
lazy.py            | convert-hf : support direct Q8_0 conversion (#7234)                                  | 2024-05-13 14:10:51 -04:00
py.typed           | convert : various script cleanups/fixes + merges and special token handling (#2842)  | 2023-08-30 11:25:50 +03:00
quants.py          | gguf-py : fix and simplify quantized shape round-trip (#7483)                        | 2024-05-25 11:11:48 +10:00
tensor_mapping.py  | llama : support all OpenELM models                                                   | 2024-06-30 23:14:01 -04:00
vocab.py           | Move convert.py to examples/convert-legacy-llama.py (#7430)                          | 2024-05-30 21:40:00 +10:00
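For completeness, a hypothetical read-back of such an array-valued key through `gguf_reader.py`'s `GGUFReader`. The key name assumes the `openelm` architecture prefix from the writer sketch above, and the `parts`/`data` indexing reflects the reader's low-level field representation:

```python
# Sketch: reading a per-layer metadata array back with GGUFReader.
# Assumes a file written as in the writer example above.
from gguf import GGUFReader

reader = GGUFReader("openelm-demo.gguf")
field = reader.get_field("openelm.attention.head_count")
if field is not None:
    # For an array-valued field, field.data holds indices into field.parts,
    # one per element; each indexed part is a one-element numpy array.
    head_counts = [int(field.parts[i][0]) for i in field.data]
    print(head_counts)  # e.g. [12, 12, 16, 16]
```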