diff --git a/extensions/llava/README.md b/extensions/llava/README.md
index 848c7cb0..287162ef 100644
--- a/extensions/llava/README.md
+++ b/extensions/llava/README.md
@@ -1,14 +1,15 @@
 # LLaVA
 
 ## Description
-Adds [LLaVA](https://github.com/haotian-liu/LLaVA) multimodality support to text-generation-webui.
+Adds [LLaVA 13B](https://github.com/haotian-liu/LLaVA) multimodality support to text-generation-webui.
 
 https://user-images.githubusercontent.com/3718215/233817203-69b57e77-0c55-4fd6-b742-3204bb13b8fc.mp4
 
-## Usage
-To run this extension, download LLaVA weights, for example from [here](https://huggingface.co/wojtab/llava-13b-v0-4bit-128g), and then start server.py with `--extensions llava` argument.
+## LLaVA-7B
+The 7B version currently isn't supported. It will be supported if/when [more generic multimodality support](https://github.com/oobabooga/text-generation-webui/discussions/1687) gets implemented.
 
-When in ui, go to instruct mode, and select LLaVA template, you also should add `"\n###"` to "Custom stopping strings" in parameters tab.
+## Usage
+To run this extension, download the LLaVA weights, for example from [here](https://huggingface.co/wojtab/llava-13b-v0-4bit-128g) (note: it's a 4-bit [GPTQ quantization](https://github.com/oobabooga/text-generation-webui/tree/main/docs/GPTQ-models-(4-bit-mode).md), done on the "old CUDA" branch), and then start server.py with the `--extensions llava` argument.
 
 Do note, that each image takes up 258 tokens, so adjust max_new_tokens to be at most 1700 (recommended value is between 200 to 500), so the images don't get truncated.
 
@@ -22,10 +23,12 @@ This extension uses following parameters (from settings.json):
 |---------|-----------|
 |`llava-clip_bits`|Number of bits to load CLIP feature extractor in (either 32 or 16, default=32)|
 |`llava-clip_device`|Torch device to run the extractor on, for example `cpu` or `cuda:0`, by default `cuda:0` if available|
+|`llava-clip_repo`|Hugging Face repository of the CLIP model, `openai/clip-vit-large-patch14` by default. There should be no need to change it|
 |`llava-projector_bits`|Number of bits to load CLIP->LLaMA feature projector in (either 32 or 16, default=32)|
-|`llava-projector_bits`|Torch device to run the CLIP->LLaMA feature projector on, for example `cpu` or `cuda:0`, by default `cuda:0` if available|
+|`llava-projector_device`|Torch device to run the CLIP->LLaMA feature projector on, for example `cpu` or `cuda:0`, by default `cuda:0` if available|
+|`llava-projector_repo`|Hugging Face repository of the multimodal projector, `liuhaotian/LLaVA-13b-delta-v0` by default. There should be no need to change it|
+|`llava-projector_filename`|The filename of the multimodal projector weights, `mm_projector.bin` by default. There should be no need to change it|
 |`llava-add_all_images_to_prompt`|Default value of "Embed all images, not only the last one" checkbox|
-
 ## Technical description
 
 ### Original LLaVA
@@ -46,4 +49,23 @@ The concatenated prompt then gets fed to fine-tuned LLaMA.
 ### In this extension
 
 Using default transformers, they only load the LLaMA part of LLaVA, ignoring the added projector weights, and not loading CLIP. We then reconstruct the `images -> CLIP -> projector` pipeline ourselves, then concatenate the input embeddings, and feed it to LLaMA loaded by transformers. This allows us to use normal flow from webui to load this model, and just hijack the model input with additional features.
-Splitting it to 3 separate models, allows us to configure each of them, and to move them to different devices(for example we can run CLIP+projector on CPU and LLaMA on GPU). Also, it enables us to use 4-bit GPTQ quantization for LLaVA, massively cutting down the VRAM requirement (it should be possible to fit on 12GB of VRAM with full context size by moving CLIP and projector to CPU).
\ No newline at end of file
+Splitting it into 3 separate models allows us to configure each of them and to move them to different devices (for example, we can run CLIP+projector on CPU and LLaMA on GPU). It also enables us to use 4-bit GPTQ quantization for LLaVA, massively cutting down the VRAM requirement (it should be possible to fit it in 12GB of VRAM with full context size by moving CLIP and the projector to CPU).
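+
+To make this more concrete, here is a rough sketch of the image side of that pipeline (illustrative only, not the extension's exact code: the `image_to_llama_embeds` helper and the choice of hidden layer are assumptions, while the CLIP checkpoint and the 1024 -> 5120 projector shape match the extension's defaults):
+```Python
+import torch
+from PIL import Image
+from transformers import CLIPImageProcessor, CLIPVisionModel
+
+CLIP_REPO = "openai/clip-vit-large-patch14"
+
+image_processor = CLIPImageProcessor.from_pretrained(CLIP_REPO)
+vision_tower = CLIPVisionModel.from_pretrained(CLIP_REPO)
+
+# CLIP -> LLaMA projector: a single linear layer (1024 -> 5120 for the 13B model).
+# The extension loads its weights from mm_projector.bin; here they are left random.
+mm_projector = torch.nn.Linear(1024, 5120)
+
+def image_to_llama_embeds(image: Image.Image) -> torch.Tensor:
+    pixel_values = image_processor(images=image, return_tensors="pt")["pixel_values"]
+    with torch.no_grad():
+        vision_out = vision_tower(pixel_values, output_hidden_states=True)
+        # Take the patch features from a late hidden layer, dropping the CLS token
+        # (which layer exactly is an implementation detail of LLaVA).
+        patch_features = vision_out.hidden_states[-2][:, 1:]
+        return mm_projector(patch_features)  # shape: (1, 256, 5120)
+
+# These projected vectors replace the image placeholder tokens in the prompt
+# embeddings before everything is fed to LLaMA.
+```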
+
+### Usage through API
+
+You can run the multimodal inference through the API by embedding the images in the prompt. Images are embedded like so: `f'<img src="data:image/jpeg;base64,{img_str}">'`, where `img_str` is base-64 JPEG data. Python example:
+```Python
+import base64
+import requests
+
+CONTEXT = "You are LLaVA, a large language and vision assistant trained by UW Madison WAIV Lab. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. Follow the instructions carefully and explain your answers in detail.\n### Human: \nHi!\n### Assistant: \nHi there! How can I help you today?\n"
+
+with open('extreme_ironing.jpg', 'rb') as f:
+    img_str = base64.b64encode(f.read()).decode('utf-8')
+    prompt = CONTEXT + f'### Human: \nWhat is unusual about this image: \n<img src="data:image/jpeg;base64,{img_str}">\n### Assistant: \n'
+    print(requests.post('http://127.0.0.1:5000/api/v1/generate', json={'prompt': prompt, 'stopping_strings': ['\n###']}).json())
+```
+Script output:
+```Python
+{'results': [{'text': "The unusual aspect of this image is that a man is standing on top of a yellow minivan while doing his laundry. He has set up a makeshift clothes line using the car's rooftop as an outdoor drying area. This scene is uncommon because people typically do their laundry indoors, in a dedicated space like a laundromat or a room in their home, rather than on top of a moving vehicle. Additionally, hanging clothes on the car could be potentially hazardous or illegal in some jurisdictions due to the risk of damaging the vehicle or causing accidents on the road.\n##"}]}
+```
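+
+If you send many requests, a small helper for building the image tags keeps the prompts readable. This is only an illustrative sketch (the `image_tag` helper is not part of the extension); keep in mind that every embedded image takes up 258 tokens of context:
+```Python
+import base64
+
+def image_tag(path: str) -> str:
+    # Build an image tag in the format described above
+    with open(path, 'rb') as f:
+        img_str = base64.b64encode(f.read()).decode('utf-8')
+    return f'<img src="data:image/jpeg;base64,{img_str}">'
+
+prompt = f'### Human: \nWhat is shown in this picture: \n{image_tag("picture.jpg")}\n### Assistant: \n'
+# Whether all embedded images or only the last one are used is controlled by the
+# "Embed all images, not only the last one" option (`llava-add_all_images_to_prompt`).
+```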
\ No newline at end of file
diff --git a/extensions/llava/script.py b/extensions/llava/script.py
index 9d44a2b0..0089e01e 100644
--- a/extensions/llava/script.py
+++ b/extensions/llava/script.py
@@ -21,10 +21,16 @@ params = {
     "clip_device": None,
     # bits to load clip in either 32 or 16 (it doesn't support 8-bit)
     "clip_bits": 32,
+    # clip repository
+    "clip_repo": "openai/clip-vit-large-patch14",
     # device to run projector on
     "projector_device": None,
     # projector bits, either 32 or 16
-    "projector_bits": 32
+    "projector_bits": 32,
+    # projector repository
+    "projector_repo": "liuhaotian/LLaVA-13b-delta-v0",
+    # file with the projector weights
+    "projector_file": "mm_projector.bin"
 }
 
 
@@ -49,9 +55,6 @@ class LLaVAEmbedder:
     IM_PATCH = Token("<im_patch>", 32000)
     IM_START = Token("<im_start>", 32001)
     IM_END = Token("<im_end>", 32002)
-    CLIP_VIT_HUB_NAME = 'openai/clip-vit-large-patch14'
-    PROJECTOR_HUB_NAME = 'liuhaotian/LLaVA-13b-pretrain-projector-v0'
-    PROJECTOR_FILE = 'LLaVA-13b-pretrain-projector-v0-CC3M-595K-original_caption.bin'
 
     def __init__(self):
         self.clip_device = self._get_device("clip_device")
@@ -71,12 +74,12 @@ class LLaVAEmbedder:
     def _load_models(self):
         start_ts = time.time()
 
-        print(f"LLaVA - Loading {LLaVAEmbedder.CLIP_VIT_HUB_NAME} as {self.clip_dtype} on {self.clip_device}...")
-        image_processor = CLIPImageProcessor.from_pretrained(LLaVAEmbedder.CLIP_VIT_HUB_NAME, torch_dtype=self.clip_dtype)
-        vision_tower = CLIPVisionModel.from_pretrained(LLaVAEmbedder.CLIP_VIT_HUB_NAME, torch_dtype=self.clip_dtype).to(self.clip_device)
+        print(f"LLaVA - Loading CLIP from {params['clip_repo']} as {self.clip_dtype} on {self.clip_device}...")
+        image_processor = CLIPImageProcessor.from_pretrained(params["clip_repo"], torch_dtype=self.clip_dtype)
+        vision_tower = CLIPVisionModel.from_pretrained(params["clip_repo"], torch_dtype=self.clip_dtype).to(self.clip_device)
 
-        print(f"LLaVA - Loading {LLaVAEmbedder.PROJECTOR_HUB_NAME} as {self.projector_dtype} on {self.projector_device}...")
-        projector_path = hf_hub_download(LLaVAEmbedder.PROJECTOR_HUB_NAME, LLaVAEmbedder.PROJECTOR_FILE)
+        print(f"LLaVA - Loading projector from {params['projector_repo']} as {self.projector_dtype} on {self.projector_device}...")
+        projector_path = hf_hub_download(params["projector_repo"], params["projector_file"])
         mm_projector = torch.nn.Linear(1024, 5120)
         projector_data = torch.load(projector_path)
         mm_projector.weight = torch.nn.Parameter(projector_data['model.mm_projector.weight'].to(dtype=self.projector_dtype), False)
diff --git a/modules/text_generation.py b/modules/text_generation.py
index 054b948a..42a5c394 100644
--- a/modules/text_generation.py
+++ b/modules/text_generation.py
@@ -236,21 +236,6 @@ def generate_reply(question, state, eos_token=None, stopping_strings=[]):
     if eos_token is not None:
         eos_token_ids.append(int(encode(eos_token)[0][-1]))
 
-    # Create the StoppingCriteriaList with the stopping strings
-    stopping_criteria_list = transformers.StoppingCriteriaList()
-    for st in (stopping_strings, ast.literal_eval(f"[{state['custom_stopping_strings']}]")):
-        if type(st) is list and len(st) > 0:
-            sentinel_token_ids = [encode(string, add_special_tokens=False) for string in st]
-            stopping_criteria_list.append(_SentinelTokenStoppingCriteria(sentinel_token_ids=sentinel_token_ids, starting_idx=len(input_ids[0])))
-            break
-
-    # Update generate_params with the eos token and the stopping strings
-    if shared.args.flexgen:
-        generate_params['stop'] = eos_token_ids[-1]
-    else:
-        generate_params['eos_token_id'] = eos_token_ids
-        generate_params['stopping_criteria'] = stopping_criteria_list
-
     # Add the encoded tokens to generate_params
     if shared.soft_prompt:
         inputs_embeds, filler_input_ids = generate_softprompt_input_tensors(input_ids)
@@ -265,6 +250,21 @@ def generate_reply(question, state, eos_token=None, stopping_strings=[]):
         if inputs_embeds is not None:
             generate_params.update({'inputs_embeds': inputs_embeds})
 
+    # Create the StoppingCriteriaList with the stopping strings (needs to be done after tokenizer extensions)
+    stopping_criteria_list = transformers.StoppingCriteriaList()
+    for st in (stopping_strings, ast.literal_eval(f"[{state['custom_stopping_strings']}]")):
+        if type(st) is list and len(st) > 0:
+            sentinel_token_ids = [encode(string, add_special_tokens=False) for string in st]
+            stopping_criteria_list.append(_SentinelTokenStoppingCriteria(sentinel_token_ids=sentinel_token_ids, starting_idx=len(input_ids[0])))
+            break
+
+    # Update generate_params with the eos token and the stopping strings
+    if shared.args.flexgen:
+        generate_params['stop'] = eos_token_ids[-1]
+    else:
+        generate_params['eos_token_id'] = eos_token_ids
+        generate_params['stopping_criteria'] = stopping_criteria_list
+
     try:
         # Generate the entire reply at once.
         if shared.args.no_stream: