chore: automatic commit 2025-04-30 12:48
@@ -0,0 +1 @@
pip
@@ -0,0 +1,782 @@
Metadata-Version: 2.3
Name: openai
Version: 1.76.0
Summary: The official Python library for the openai API
Project-URL: Homepage, https://github.com/openai/openai-python
Project-URL: Repository, https://github.com/openai/openai-python
Author-email: OpenAI <support@openai.com>
License: Apache-2.0
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: OS Independent
Classifier: Operating System :: POSIX
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.8
Requires-Dist: anyio<5,>=3.5.0
Requires-Dist: distro<2,>=1.7.0
Requires-Dist: httpx<1,>=0.23.0
Requires-Dist: jiter<1,>=0.4.0
Requires-Dist: pydantic<3,>=1.9.0
Requires-Dist: sniffio
Requires-Dist: tqdm>4
Requires-Dist: typing-extensions<5,>=4.11
Provides-Extra: datalib
Requires-Dist: numpy>=1; extra == 'datalib'
Requires-Dist: pandas-stubs>=1.1.0.11; extra == 'datalib'
Requires-Dist: pandas>=1.2.3; extra == 'datalib'
Provides-Extra: realtime
Requires-Dist: websockets<16,>=13; extra == 'realtime'
Provides-Extra: voice-helpers
Requires-Dist: numpy>=2.0.2; extra == 'voice-helpers'
Requires-Dist: sounddevice>=0.5.1; extra == 'voice-helpers'
Description-Content-Type: text/markdown

# OpenAI Python API library

[![PyPI version](https://img.shields.io/pypi/v/openai.svg)](https://pypi.org/project/openai/)

The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.8+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).

It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).

## Documentation

The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs/api-reference). The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).

## Installation

```sh
# install from PyPI
pip install openai
```
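
The metadata above also declares optional extras; these can be installed the same way, e.g. `pip install "openai[datalib]"`, `pip install "openai[realtime]"`, or `pip install "openai[voice-helpers]"` to pull in the additional dependencies listed for each extra.
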
## Usage

The full API of this library can be found in [api.md](https://github.com/openai/openai-python/tree/main/api.md).

The primary API for interacting with OpenAI models is the [Responses API](https://platform.openai.com/docs/api-reference/responses). You can generate text from the model with the code below.

```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

response = client.responses.create(
    model="gpt-4o",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)

print(response.output_text)
```

The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {
            "role": "user",
            "content": "How do I check if a Python object is an instance of a class?",
        },
    ],
)

print(completion.choices[0].message.content)
```

While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `OPENAI_API_KEY="My API Key"` to your `.env` file
so that your API key is not stored in source control.
[Get an API key here](https://platform.openai.com/settings/organization/api-keys).

### Vision

With an image URL:

```python
prompt = "What is in this image?"
img_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"{img_url}"},
            ],
        }
    ],
)
```

With the image as a base64 encoded string:

```python
import base64
from openai import OpenAI

client = OpenAI()

prompt = "What is in this image?"
with open("path/to/image.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"data:image/png;base64,{b64_image}"},
            ],
        }
    ],
)
```

## Async usage

Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:

```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)


async def main() -> None:
    response = await client.responses.create(
        model="gpt-4o", input="Explain disestablishmentarianism to a smart five year old."
    )
    print(response.output_text)


asyncio.run(main())
```

Functionality between the synchronous and asynchronous clients is otherwise identical.

## Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="gpt-4o",
    input="Write a one-sentence bedtime story about a unicorn.",
    stream=True,
)

for event in stream:
    print(event)
```

The async client uses the exact same interface.

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main():
    stream = await client.responses.create(
        model="gpt-4o",
        input="Write a one-sentence bedtime story about a unicorn.",
        stream=True,
    )

    async for event in stream:
        print(event)


asyncio.run(main())
```

## Realtime API beta

The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a WebSocket connection.

Under the hood the SDK uses the [`websockets`](https://websockets.readthedocs.io/en/stable/) library to manage connections.

The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).

Basic text based example:

```py
import asyncio
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI()

    async with client.beta.realtime.connect(model="gpt-4o-realtime-preview") as connection:
        await connection.session.update(session={'modalities': ['text']})

        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        await connection.response.create()

        async for event in connection:
            if event.type == 'response.text.delta':
                print(event.delta, flush=True, end="")

            elif event.type == 'response.text.done':
                print()

            elif event.type == "response.done":
                break

asyncio.run(main())
```

However, the real magic of the Realtime API is handling audio inputs and outputs; see this example [TUI script](https://github.com/openai/openai-python/blob/main/examples/realtime/push_to_talk_app.py) for a fully fledged example.

### Realtime error handling

Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as _no errors are raised directly_ by the SDK when an `error` event comes in.

```py
client = AsyncOpenAI()

async with client.beta.realtime.connect(model="gpt-4o-realtime-preview") as connection:
    ...
    async for event in connection:
        if event.type == 'error':
            print(event.error.type)
            print(event.error.code)
            print(event.error.event_id)
            print(event.error.message)
```

## Using types

Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:

- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.

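As a quick illustration of these helpers, here is a minimal sketch (assuming a configured `client` as in the earlier examples) that serializes a response model:

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

# Serialize the full response model to a JSON string.
print(completion.to_json())

# Or convert it to a plain dict for further processing.
print(completion.to_dict()["choices"][0]["message"]["content"])
```
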
## Pagination

List methods in the OpenAI API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```

Or, asynchronously:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```

Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```

Or just work directly with the returned data:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)

print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```

## Nested params

Nested parameters are dictionaries, typed using `TypedDict`, for example:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    input=[
        {
            "role": "user",
            "content": "How much?",
        }
    ],
    model="gpt-4o",
    text={"format": {"type": "json_object"}},
)
```

## File uploads

Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
```

The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.

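For the tuple form, a hedged sketch (the filename and media type here are illustrative assumptions, reusing the imports above):

```python
client.files.create(
    # (filename, contents, media type)
    file=("input.jsonl", Path("input.jsonl").read_bytes(), "application/jsonl"),
    purpose="fine-tune",
)
```
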
## Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.

When the API returns a non-success status code (that is, a 4xx or 5xx
response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.

All errors inherit from `openai.APIError`.

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-4o",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```

Error codes are as follows:

| Status Code | Error Type                 |
| ----------- | -------------------------- |
| 400         | `BadRequestError`          |
| 401         | `AuthenticationError`      |
| 403         | `PermissionDeniedError`    |
| 404         | `NotFoundError`            |
| 422         | `UnprocessableEntityError` |
| 429         | `RateLimitError`           |
| >=500       | `InternalServerError`      |
| N/A         | `APIConnectionError`       |

## Request IDs

> For more information on debugging requests, see [these docs](https://platform.openai.com/docs/api-reference/debugging-requests)

All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.

```python
response = await client.responses.create(
    model="gpt-4o-mini",
    input="Say 'this is a test'.",
)
print(response._request_id)  # req_123
```

Note that unlike other properties that use an `_` prefix, the `_request_id` property
_is_ public. Unless documented otherwise, _all_ other `_` prefix properties,
methods and modules are _private_.

> [!IMPORTANT]
> If you need to access request IDs for failed requests, you must catch the `APIStatusError` exception:

```python
import openai

try:
    completion = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
    )
except openai.APIStatusError as exc:
    print(exc.request_id)  # req_123
    raise exc
```

## Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the `max_retries` option to configure or disable retry settings:

```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in JavaScript?",
        }
    ],
    model="gpt-4o",
)
```

## Timeouts

By default requests time out after 10 minutes. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:

```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-4o",
)
```

On timeout, an `APITimeoutError` is thrown.

Note that requests that time out are [retried twice by default](https://github.com/openai/openai-python/tree/main/#retries).

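A minimal sketch of handling the timeout error, reusing the `client` from above:

```python
import openai

try:
    client.with_options(timeout=5.0).chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o",
    )
except openai.APITimeoutError:
    # Raised once the configured timeout (and any automatic retries) are exhausted.
    print("The request timed out")
```
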
## Advanced

### Logging

We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.

You can enable logging by setting the environment variable `OPENAI_LOG` to `info`.

```shell
$ export OPENAI_LOG=info
```

Or to `debug` for more verbose logging.

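Because the SDK uses the standard `logging` module, you can also configure it from Python directly; this sketch assumes the library's loggers live under the `openai` namespace:

```python
import logging

# Show DEBUG-level log lines from the SDK on stderr.
logging.basicConfig(level=logging.INFO)
logging.getLogger("openai").setLevel(logging.DEBUG)
```
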
### How to tell whether `None` means `null` or missing

In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:

```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```

### Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

```py
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.with_raw_response.create(
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
    model="gpt-4o",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```

These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception
that `content` and `text` will be methods instead of properties. In the
async client, all methods will be async.

A migration script will be provided and the migration in general should
be smooth.

#### `.with_streaming_response`

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.

As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.

```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```

The context manager is required so that the response will reliably be closed.

### Making custom/undocumented requests

This library is typed for convenient access to the documented API.

If you need to access undocumented endpoints, params, or response properties, the library can still be used.

#### Undocumented endpoints

To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
HTTP verbs. Options on the client (such as retries) will be respected when making these requests.

```py
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```

#### Undocumented request params

If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.

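For example, a minimal sketch; `my_beta_param`, `x-my-header`, and `my_query_param` are made-up names for illustration:

```python
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o",
    # Merged into the JSON request body alongside the documented params.
    extra_body={"my_beta_param": True},
    # Sent as additional HTTP headers and query string values.
    extra_headers={"x-my-header": "value"},
    extra_query={"my_query_param": "value"},
)
```
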
#### Undocumented response properties

To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).

### Configuring the HTTP client

You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:

- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality

```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083/v1",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```

You can also customize the client on a per-request basis by using `with_options()`:

```python
client.with_options(http_client=DefaultHttpxClient(...))
```

### Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

```py
from openai import OpenAI

with OpenAI() as client:
    # make requests here
    ...

# HTTP client is now closed
```

## Microsoft Azure OpenAI

To use this library with [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview), use the `AzureOpenAI`
class instead of the `OpenAI` class.

> [!IMPORTANT]
> The Azure API shape differs from the core API shape, which means that the static types for responses / params
> won't always be correct.

```py
from openai import AzureOpenAI

# gets the API Key from environment variable AZURE_OPENAI_API_KEY
client = AzureOpenAI(
    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
    api_version="2023-07-01-preview",
    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
    azure_endpoint="https://example-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # e.g. gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.to_json())
```

In addition to the options provided in the base `OpenAI` client, the following options are provided:

- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)
- `azure_deployment`
- `api_version` (or the `OPENAI_API_VERSION` environment variable)
- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)
- `azure_ad_token_provider`

An example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).

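As a sketch of the `azure_ad_token_provider` option (assuming the separate `azure-identity` package is installed):

```py
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://example-endpoint.openai.azure.com",
    # Called to fetch a fresh Entra ID token as needed.
    azure_ad_token_provider=token_provider,
)
```
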
## Versioning

This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:

1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an [issue](https://www.github.com/openai/openai-python/issues) with questions, bugs, or suggestions.

### Determining the installed version

If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.

You can determine the version that is being used at runtime with:

```py
import openai
print(openai.__version__)
```

## Requirements

Python 3.8 or higher.

## Contributing

See [the contributing documentation](https://github.com/openai/openai-python/tree/main/CONTRIBUTING.md).

venv/lib/python3.11/site-packages/openai-1.76.0.dist-info/RECORD (new file, 1226 lines; diff suppressed because it is too large)
@@ -0,0 +1,4 @@
Wheel-Version: 1.0
Generator: hatchling 1.26.3
Root-Is-Purelib: true
Tag: py3-none-any
@@ -0,0 +1,2 @@
[console_scripts]
openai = openai.cli:main
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright 2025 OpenAI

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.