Artificiangel | d9fdb3db71 | Add docs for image generation | 2024-05-23 08:44:15 -04:00
Artificiangel | 432b070bde | Fix transcriptions endpoint | 2024-05-23 08:07:51 -04:00
Artificiangel | d64459e20d | model/unload should also return "OK", consistent with lora/unload | 2024-04-28 08:26:42 -04:00
Artificiangel | dce32fe2d9 | Raise HTTPException to pass correct status code to the client | 2024-04-28 08:26:15 -04:00
Artificiangel | 3ffb09d465 | Run in executor for long blocking functions. | 2024-04-28 08:24:45 -04:00
oobabooga | f27e1ba302 | Add a /v1/internal/chat-prompt endpoint (#5879) | 2024-04-19 00:24:46 -03:00
oobabooga | 6247eafcc5 | API: better handle temperature = 0 | 2024-01-22 04:12:23 -08:00
Stefan Daniel Schwarz | 232c07bf1f | API: set do_sample=false when temperature=0 (#5275) | 2024-01-17 23:58:11 -03:00
Philipp Claßen | 3eca20c015 | Typo fixed in variable names (#5184) | 2024-01-06 03:05:03 -03:00
oobabooga | 23818dc098 | Better logger (credits: vladmandic/automatic) | 2023-12-19 20:38:33 -08:00
oobabooga | b6d16a35b1 | Minor API fix | 2023-11-21 17:56:28 -08:00
oobabooga | 771e62e476 | Add /v1/internal/lora endpoints (#4652) | 2023-11-19 00:35:22 -03:00
oobabooga | ef6feedeb2 | Add --nowebui flag for pure API mode (#4651) | 2023-11-18 23:38:39 -03:00
oobabooga | 0fa1af296c | Add /v1/internal/logits endpoint (#4650) | 2023-11-18 23:19:31 -03:00
oobabooga | 8f4f4daf8b | Add --admin-key flag for API (#4649) | 2023-11-18 22:33:27 -03:00
oobabooga | e0a7cc5e0f | Simplify CORS code | 2023-11-16 20:11:55 -08:00
oobabooga | c0233bb9d3 | Minor message change | 2023-11-16 18:36:57 -08:00
oobabooga | 510a01ef46 | Lint | 2023-11-16 18:03:06 -08:00
oobabooga | a475aa7816 | Improve API documentation | 2023-11-15 18:39:08 -08:00
oobabooga | a85ce5f055 | Add more info messages for truncation / instruction template | 2023-11-15 16:20:31 -08:00
oobabooga | be125e2708 | Add /v1/internal/model/unload endpoint | 2023-11-15 15:48:33 -08:00
oobabooga | c5be3f7acb | Make /v1/embeddings functional, add request/response types | 2023-11-10 07:34:27 -08:00
oobabooga | 4aabff3728 | Remove old API, launch OpenAI API with --api | 2023-11-10 06:39:08 -08:00
oobabooga | d86f1fd2c3 | OpenAI API: stop streaming on client disconnect (closes #4521) | 2023-11-09 06:37:32 -08:00
oobabooga | effb3aef42 | Prevent deadlocks in OpenAI API with simultaneous requests | 2023-11-08 20:55:39 -08:00
oobabooga | 678fd73aef | Document /v1/internal/model/load and fix a bug | 2023-11-08 17:41:12 -08:00
oobabooga | 050ff36bd6 | Revert "Add a comment to /v1/models" (reverts commit 38b07493a0) | 2023-11-07 21:09:47 -08:00
oobabooga | 38b07493a0 | Add a comment to /v1/models | 2023-11-07 21:07:12 -08:00
oobabooga | 2358706453 | Add /v1/internal/model/load endpoint (tentative) | 2023-11-07 20:58:06 -08:00
oobabooga | 43c53a7820 | Refactor the /v1/models endpoint | 2023-11-07 19:59:27 -08:00
oobabooga | 1b69694fe9 | Add types to the encode/decode/token-count endpoints | 2023-11-07 19:32:14 -08:00
oobabooga | f6ca9cfcdc | Add /v1/internal/model-info endpoint | 2023-11-07 18:59:02 -08:00
oobabooga | 79b3f5a546 | Add /v1/internal/stop-generation to OpenAI API (#4498) | 2023-11-07 00:10:42 -03:00
oobabooga | b87c6213ae | Remove obsolete endpoint | 2023-11-06 05:45:45 -08:00
oobabooga | ec17a5d2b7 | Make OpenAI API the default API (#4430) | 2023-11-06 02:38:29 -03:00
hronoas | db7ecdd274 | openai: fix empty models list on query present in url (#4139) | 2023-10-16 17:02:47 -03:00
Jesus Alvarez | ed66ca3cdf | Add HTTPS support to APIs (openai and default) (#4270) | 2023-10-13 01:31:13 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Chenxiao Wang | 347aed4254 | extensions/openai: load extension settings via settings.yaml (#3953) | 2023-09-17 22:39:29 -03:00
wizd | cc7f345c29 | add whisper api to openai plugin (#3958) | 2023-09-16 12:04:04 -03:00
oobabooga | 8f97e87cac | Lint the openai extension | 2023-09-15 20:11:16 -07:00
Chang Chi, Meng | b61d9aef19 | openai API: add support for chunked transfer encoding in POST requests (#3870) | 2023-09-12 15:54:42 -03:00
matatonic | 8f98268252 | extensions/openai: include content-length for json replies (#3416) | 2023-08-03 16:10:49 -03:00
matatonic | 9ae0eab989 | extensions/openai: +Array input (batched), +Fixes (#3309) | 2023-08-01 22:26:00 -03:00
matatonic | 90a4ab631c | extensions/openai: Fixes for: embeddings, tokens, better errors. +Docs update, +Images, +logit_bias/logprobs, +more. (#3122) | 2023-07-24 11:28:12 -03:00
oobabooga | e202190c4f | lint | 2023-07-12 11:33:25 -07:00
matatonic | 3e7feb699c | extensions/openai: Major openai extension updates & fixes (#3049) | 2023-07-11 18:50:08 -03:00
    * many openai updates
    * total reorg & cleanup
    * fixups
    * missing import os for images
    * +moderations, custom_stopping_strings, more fixes
    * fix bugs in completion streaming
    * moderation fix (flagged)
    * updated moderation categories
    Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
oobabooga | 3443219cbc | Add repetition penalty range parameter to transformers (#2916) | 2023-06-29 13:40:13 -03:00
matatonic | b45baeea41 | extensions/openai: Major docs update, fix #2852 (critical bug), minor improvements (#2849) | 2023-06-24 22:50:04 -03:00
matatonic | 1e97aaac95 | extensions/openai: docs update, model loader, minor fixes (#2557) | 2023-06-17 19:15:24 -03:00
matatonic | 4a17a5db67 | [extensions/openai] various fixes (#2533) | 2023-06-06 01:43:04 -03:00