diff --git a/README.md b/README.md
index f8a691a0..95b9a12b 100644
--- a/README.md
+++ b/README.md
@@ -20,9 +20,8 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
 * [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
 * [Extensions framework](https://github.com/oobabooga/text-generation-webui/wiki/07-%E2%80%90-Extensions)
 * [Custom chat characters](https://github.com/oobabooga/text-generation-webui/wiki/03-%E2%80%90-Parameters-Tab#character)
-* Very efficient text streaming
 * Markdown output with LaTeX rendering, to use for instance with [GALACTICA](https://github.com/paperswithcode/galai)
-* OpenAI-compatible API server
+* OpenAI-compatible API server with Chat and Completions endpoints -- see the [examples](https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API#examples)
 
 ## Documentation
 
diff --git a/docs/12 - OpenAI API.md b/docs/12 - OpenAI API.md
index 17c83a3d..c0261785 100644
--- a/docs/12 - OpenAI API.md
+++ b/docs/12 - OpenAI API.md
@@ -174,11 +174,15 @@ while True:
     stream_response = requests.post(url, headers=headers, json=data, verify=False, stream=True)
     client = sseclient.SSEClient(stream_response)
 
+    assistant_message = ''
     for event in client.events():
         payload = json.loads(event.data)
-        print(payload['choices'][0]['message']['content'], end='')
+        chunk = payload['choices'][0]['message']['content']
+        assistant_message += chunk
+        print(chunk, end='')
 
     print()
+    history.append({"role": "assistant", "content": assistant_message})
 ```
 
 #### Python completions example with streaming
@@ -233,7 +237,7 @@ OPENAI_API_BASE=http://127.0.0.1:5000/v1
 
 With the [official python openai client](https://github.com/openai/openai-python), the address can be set like this:
 
-```shell
+```python
 import openai
 
 openai.api_key = "..."
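
The docs change above makes the streaming chat example accumulate the chunks into `assistant_message` and append the completed reply to `history`, so multi-turn context is preserved. A minimal offline sketch of that accumulation logic, with a hard-coded `chunks` list standing in for the SSE payloads (`payload['choices'][0]['message']['content']`) that the real example receives over the network:

```python
# Stand-ins for the streamed SSE payload contents; in the real example
# these come from sseclient events on the /v1/chat/completions response.
chunks = ["Hel", "lo ", "there!"]

history = [{"role": "user", "content": "Hi"}]

# Accumulate chunks while printing them, mirroring the patched loop.
assistant_message = ''
for chunk in chunks:
    assistant_message += chunk
    print(chunk, end='')
print()

# The full reply is now available to append to the chat history.
history.append({"role": "assistant", "content": assistant_message})
```

Without the accumulation, the chunks were printed and discarded, so the assistant's turns never made it back into `history` on subsequent requests.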