* Update custom.md
* Removed Model section as it is better placed in README.md
* Updates to README.md model section
* Inserted text (previously removed from the issue template) about obtaining models from FB, plus links to papers describing the various models
* Removed IPFS download links for the Alpaca 7B models, as these appear to be in the old data format and probably shouldn't be directly linked to anyway
* Updated the perplexity section to point at the Perplexity scores #406 discussion
* Deduplicate q4 quantization functions
* Use const; add basic test
* Re-enable quantization test
* Disable AVX2 flags in CI
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Don't force immediate interactive without -i
Sometimes we might want to use a reverse prompt but we want to let the
model generate tokens right after the initial prompt. So we don't force
user input mode if the -i flag wasn't specified and instead let it run
until we encounter the reverse prompt.
This gives us some more flexibility, since it doesn't force the user to
enter a newline if they want to let the model generate text right after
the initial prompt and only be asked for input if the reverse prompt is
encountered.
The `--interactive-first` flag is reintroduced to force the old
behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it
can be specified more than once).
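A rough sketch of the intended behavior follows; the struct and field names are illustrative assumptions, not necessarily the exact ones used in main.cpp:

```cpp
// Minimal sketch of the flag semantics described above (hypothetical names).
#include <string>
#include <vector>

struct gpt_params_sketch {
    bool interactive       = false;       // -i
    bool interactive_start = false;       // --interactive-first
    std::vector<std::string> antiprompt;  // -r (may be given more than once)
};

// Decide whether to wait for user input before generating anything.
bool wait_for_input_first(gpt_params_sketch & params) {
    if (!params.antiprompt.empty()) params.interactive = true;  // -r implies -i
    if (params.interactive_start)   params.interactive = true;  // --interactive-first implies -i
    // Only --interactive-first forces input before the first generated token;
    // with plain -i or -r the model keeps generating until a reverse prompt is hit.
    return params.interactive_start;
}
```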
* Update help output.
---------
Co-authored-by: Johnman <tjohnman@github>
* Major refactoring - introduce C-style API
* Clean up
* Add <cassert>
* Add <iterator>
* Add <algorithm> ....
* Fix timing reporting and accumulation
* Measure eval time only for single-token calls
* Change llama_tokenize return meaning
* Improve performance by changing the token_to_id map from std::map to std::unordered_map, and id_to_token from std::map<id, token> to std::vector<token>
* Fix last commit: in gpt_vocab_init, add vocab.id_to_token.resize(vocab.token_to_id.size());
* Removed include <map>
* Nest struct token score inside gpt_vocab
* renamed token to tok
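A minimal sketch of the vocabulary layout these changes describe (names follow the commit messages above; the actual struct in the code may differ slightly):

```cpp
// Illustrative sketch only, not the exact struct from the codebase.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct gpt_vocab_sketch {
    using id    = int32_t;
    using token = std::string;

    struct token_score {
        token tok;    // "token" renamed to "tok" to avoid shadowing the type alias
        float score;
    };

    std::unordered_map<token, id> token_to_id;  // was std::map<token, id>
    std::vector<token_score>      id_to_token;  // was std::map<id, token>; ids are dense, so a vector works
};

// After filling token_to_id, the reverse table must be sized accordingly
// (this is what the follow-up fix adds):
//   vocab.id_to_token.resize(vocab.token_to_id.size());
```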
* [WIP, broken] Importer for GPTQ quantized LLaMA models
Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa
Current status: Something is busted. The output starts out decent, but
quickly degrades into gibberish. This doesn't happen with either the
original GPTQ-for-LLaMa using the same weights, or llama.cpp when using
weights quantized by its own quantizer. Is there a bug in the
conversion script that somehow only comes into play with a large context
size?
I did notice one potential issue. It's clearly not the main cause of
the gibberish, since it doesn't happen when using q4_1 weights quantized
by llama.cpp itself, but it seems concerning. When doing a matrix
multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when
the multiplication is not done with BLAS, the intermediate results are
stored in the smaller format rather than f32. This seems like an
unnecessary waste of precision, especially in the q4_1 case.
I was originally hoping to validate the results by matching the Python
implementation's output exactly, but precision and non-associativity
issues make this very difficult, including when performing matrix
multiplications and, especially, computing norms.
Anyway, design details:
The models being imported store per-layer weights in essentially q4_1
format, although the addend and scale are shared across an entire row
rather than every group of 32 weights. This script duplicates the
addend and scale to match ggml's expectations, at the cost of wasting
some memory.
However, there are two differences which I accommodated by changing the
output format (and adding corresponding support to main.cpp) rather than
having the script match the existing format:
- The tok_embeddings and output weights (i.e. the weights that aren't
per-layer) are f16 instead of q4_1. They could be converted to q4_1,
and the impact of the loss of precision would probably be low, but
this would rule out exactly matching the Python implementation's
output for validation.
- There is no sharding, since the input doesn't have it, and for a
CPU-only implementation it seems more useful to avoid having to deal
with multiple files.
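As a rough illustration of the addend/scale duplication described above (shown in C++ for brevity even though the actual importer is a Python script; all names here are hypothetical):

```cpp
// Expand a single per-row (addend, scale) pair into one pair per 32-weight
// block, which is what ggml's q4_1 layout expects. Illustrative only.
#include <cstddef>
#include <vector>

struct q4_1_block_params {
    float scale;   // corresponds to the per-block scale in ggml's q4_1 format
    float addend;  // corresponds to the per-block addend (minimum) in q4_1
};

std::vector<q4_1_block_params> duplicate_row_params(float row_scale, float row_addend, size_t row_width) {
    const size_t block_size = 32;  // ggml quantizes q4_1 in groups of 32 weights
    std::vector<q4_1_block_params> out(row_width / block_size);
    for (auto & blk : out) {
        blk.scale  = row_scale;   // the same value is repeated for every block in the row,
        blk.addend = row_addend;  // wasting a little memory but matching ggml's expectations
    }
    return out;
}
```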
The new format is differentiated from existing q4_1 format by changing
the 'f16' header flag to a new value, 4. That said, I think a cleaner
approach would be to change main.cpp to support loading each tensor with
an arbitrary sharding configuration and type rather than hardcoding
specific combinations of types. So far I've wasted too much time
debugging to try implementing this...
* Add missing permutation. Now it works.
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Compute perplexity over prompt
* More accurate perplexity calculation - over all logits in the context window (so 512x more tokens!)
* Output all perplexities
* Add timing/ETA
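For reference, a minimal sketch of the perplexity computation being described, assuming a hypothetical helper that receives one row of logits per position (this is not the actual code in main.cpp):

```cpp
// perplexity = exp( -(1/N) * sum_i log p(token_i | tokens_<i) ),
// where p comes from a softmax over the logits at each position.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

double perplexity_from_logits(const std::vector<std::vector<float>>& logits,  // one row per position
                              const std::vector<int>& tokens) {               // tokens[i+1] is the target for row i
    double nll = 0.0;   // accumulated negative log-likelihood
    size_t count = 0;
    for (size_t i = 0; i + 1 < tokens.size(); ++i) {
        const auto & row = logits[i];
        // numerically stable log-softmax of the target token
        float max_logit = row[0];
        for (float v : row) max_logit = std::max(max_logit, v);
        double sum_exp = 0.0;
        for (float v : row) sum_exp += std::exp(v - max_logit);
        const double log_prob = (row[tokens[i + 1]] - max_logit) - std::log(sum_exp);
        nll -= log_prob;
        ++count;
    }
    return count > 0 ? std::exp(nll / count) : 0.0;
}
```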
* Add chatLLaMa script
* Fix shellcheck errors and do some cleanup
* Move chatLLaMa script to `examples` directory
* Reduce chatLLaMa context size to 2048
Ref d7def1a752
* Set n_predict to 2048 in examples/chatLLaMa
* Enable ANSI colors on Windows 10+
On older versions the function will silently fail without any ill effects
* Do not call SetConsoleMode if the mode is already set
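A minimal sketch of that behavior, assuming the standard Win32 console API (not the exact code in main.cpp):

```cpp
// Enable virtual terminal (ANSI) processing only if it is not already set.
// On Windows versions that lack the flag, SetConsoleMode simply fails and
// colors stay off, with no other ill effects.
#ifdef _WIN32
#include <windows.h>

void enable_ansi_colors() {
    HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    if (h == INVALID_HANDLE_VALUE || !GetConsoleMode(h, &mode)) {
        return;  // not a console, nothing to do
    }
    if (!(mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING)) {
        // only call SetConsoleMode when the flag is actually missing
        SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
    }
}
#endif
```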
* Update main.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Add test-tokenizer-0 to do a few tokenizations - feel free to expand
* Added option to convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
* Added utility to load vocabulary file from previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
* Rename gpt_vocab -> llama_vocab
* All CMake binaries go into ./bin/ now
* Update Makefile to detect AVX512 support and add compiler flags if it's available
* Based on the existing AVX2 implementation, compute the dot product on one 32-value block of 4-bit quantized ints at a time
* Perform 8-bit -> 16-bit sign extension and multiply+add on 32 values at a time instead of 16
* Use built-in AVX512 horizontal reduce add to get sum at the end
* Manual unrolling on inner dot product loop to reduce loop counter overhead
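A simplified illustration of the approach, reduced to a plain int8 dot product; the real ggml kernel additionally unpacks the 4-bit weights and applies the per-block scales:

```cpp
// Sign-extend 32 int8 values to int16, multiply-add into int32 lanes, then use
// the built-in horizontal reduce-add at the end. Requires AVX512F + AVX512BW.
#include <cstdint>
#include <immintrin.h>

int32_t dot_i8_avx512(const int8_t * a, const int8_t * b, int n) {
    __m512i acc = _mm512_setzero_si512();
    for (int i = 0; i + 32 <= n; i += 32) {
        // load 32 signed bytes from each input and sign-extend to 16-bit
        __m512i va = _mm512_cvtepi8_epi16(_mm256_loadu_si256((const __m256i *)(a + i)));
        __m512i vb = _mm512_cvtepi8_epi16(_mm256_loadu_si256((const __m256i *)(b + i)));
        // multiply pairs of int16 and add adjacent products into int32 lanes
        acc = _mm512_add_epi32(acc, _mm512_madd_epi16(va, vb));
    }
    // horizontal reduce-add over the 16 int32 lanes
    return _mm512_reduce_add_epi32(acc);
}
```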
* Functionality additions and refactoring in CMakeLists.txt
Refactoring:
1. Simplify options that were double negatives:
LLAMA_NO_ACCELERATE -> LLAMA_ACCELERATE
2. Make enabling AVX2 in MSVC optional instead of forcing it on.
3. Make CMAKE_CXX_STANDARD match the standard used in the Makefile.
4. Use add_compile_options instead of appending options to CMAKE_C_FLAGS.
5. Make utils use target_link_libraries instead of directly referencing the code.
Added features:
1. New options:
LLAMA_STATIC_LINK, LLAMA_NATIVE, LLAMA_LTO, LLAMA_GPROF, LLAMA_OPENBLAS
* Fix Accelerate link in CMake
* Windows build Fix
* C++11 to C++17
* Set the C and C++ standards individually
* Change the minimum required CMake version to 3.12
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* potential out of bounds read
* fix quantize
* style
* Update convert-pth-to-ggml.py
* mild cleanup
* don't need the space-prefixing here right now since main.cpp already does it
* new file magic + version header field
* readme notice
* missing newlines
Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>