Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-22 08:07:56 +01:00
Clarify how to start server.py with multimodal API support (#2025)
parent 437d1c7ead
commit 3f1bfba718
```diff
@@ -57,7 +57,10 @@ This extension uses the following parameters (from `settings.json`):
 
 ## Usage through API
 
-You can run the multimodal inference through API, by inputting the images to prompt. Images are embedded like so: `f'<img src="data:image/jpeg;base64,{img_str}">'`, where `img_str` is base-64 jpeg data. Python example:
+You can run the multimodal inference through API, by inputting the images to prompt. Images are embedded like so: `f'<img src="data:image/jpeg;base64,{img_str}">'`, where `img_str` is base-64 jpeg data. Note that you will need to launch `server.py` with the arguments `--api --extensions multimodal`.
+
+Python example:
+
 ```Python
 import base64
 import requests
```
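For context, a minimal sketch of what such an API call might look like once the server is running with `--api --extensions multimodal`. The endpoint path `/api/v1/generate`, the `max_new_tokens` parameter, the `results[0]["text"]` response shape, and the `image.jpg` file name are assumptions for illustration, not part of this commit:

```Python
import base64

import requests

# Assumed default host/port and endpoint of the blocking API started by `--api`.
API_URL = "http://127.0.0.1:5000/api/v1/generate"

# Encode a local JPEG as base64 (hypothetical file name).
with open("image.jpg", "rb") as f:
    img_str = base64.b64encode(f.read()).decode("utf-8")

# Embed the image in the prompt using the tag format described in the README.
prompt = f'What is unusual about this image?\n<img src="data:image/jpeg;base64,{img_str}">'

# Send the prompt to the server and print the generated continuation
# (response shape is an assumption; adjust to your API version).
response = requests.post(API_URL, json={"prompt": prompt, "max_new_tokens": 200})
print(response.json()["results"][0]["text"])
```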