Most endpoints support a `request_timeout` param. This param takes a `Union[float, Tuple[float, float]]` and will raise an `openai.error.Timeout` error if the request exceeds that time in seconds (see the [requests timeout documentation](https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts)).
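As a sketch of how the param might be passed (the model and prompt here are arbitrary; a tuple splits the limit into separate connect and read timeouts, following `requests` semantics):

```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

# 5 seconds to establish the connection, 30 seconds to read the response;
# a single float would apply one limit to the whole request instead
completion = openai.Completion.create(
    model="ada",
    prompt="Hello world",
    request_timeout=(5, 30),
)
```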
Examples of how to use this library to accomplish various tasks can be found in the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/). It contains code examples for: classification using fine-tuning, clustering, code search, customizing embeddings, question answering from a corpus of documents, recommendations, visualization of embeddings, and more.
This library additionally provides an `openai` command-line utility which makes it easy to interact with the API from your terminal. Run `openai api -h` for usage.
```sh
# list models
openai api models.list

# create a chat completion (gpt-3.5-turbo, gpt-4, etc.)
openai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"

# create a text completion
openai api completions.create -m ada -p "Hello world"

# generate images via DALL·E API
openai api image.create -p "two dogs playing chess, cartoon" -n 1

# using openai through a proxy
openai --proxy=http://proxy.com api models.list
```
146
-
147
-
Prior to July 2022, this OpenAI Python library hosted code examples in its examples folder, but since then all examples have been migrated to the [OpenAI Cookbook](https://github.com/openai/openai-cookbook/).

### Chat completions
Chat models such as `gpt-3.5-turbo` and `gpt-4` can be called using the [chat completions endpoint](https://platform.openai.com/docs/api-reference/chat/create).
```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
)
print(completion.choices[0].message.content)
```
You can learn more in our [chat completions guide](https://platform.openai.com/docs/guides/gpt/chat-completions-api).
### Completions
Text models such as `babbage-002` or `davinci-002` (and our [legacy completions models](https://platform.openai.com/docs/deprecations/deprecation-history)) can be called using the completions endpoint.
```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

completion = openai.Completion.create(model="davinci-002", prompt="Hello world")
print(completion.choices[0].text)
```
You can learn more in our [completions guide](https://platform.openai.com/docs/guides/gpt/completions-api).
### Embeddings

In the OpenAI Python library, an embedding represents a text string as a fixed-length vector of floating point numbers.
Embeddings are designed to measure the similarity or relevance between text strings. To get an embedding for a text string, you can use the following:
```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

embedding = openai.Embedding.create(
    model="text-embedding-ada-002", input="The quick brown fox jumped over the lazy dog"
)["data"][0]["embedding"]
```
You can learn more in our [embeddings guide](https://platform.openai.com/docs/guides/embeddings/embeddings).
### Fine-tuning
You can learn more in our [fine-tuning guide](https://platform.openai.com/docs/guides/fine-tuning).
### Moderation
OpenAI provides a free Moderation endpoint that can be used to check whether content complies with the OpenAI [content policy](https://platform.openai.com/docs/usage-policies).
```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

moderation_resp = openai.Moderation.create(input="Here is some perfectly innocuous text that follows all OpenAI content policies.")
```
You can learn more in our [moderation guide](https://platform.openai.com/docs/guides/moderation).
### Image generation (DALL·E)
DALL·E is a generative image model that can create new images based on a prompt.
```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

image_resp = openai.Image.create(prompt="two dogs playing chess, oil painting", n=4, size="512x512")
```
You can learn more in our [image generation guide](https://platform.openai.com/docs/guides/images).
### Audio (Whisper)
The speech to text API provides two endpoints, transcriptions and translations, based on our state-of-the-art [open source large-v2 Whisper model](https://github.com/openai/whisper).
```python
import openai
openai.api_key = "sk-..."  # supply your API key however you choose

f = open("path/to/file.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", f)
```
### Microsoft Azure Endpoints
In order to use the library with Microsoft Azure endpoints, you need to set the `api_type`, `api_base` and `api_version` in addition to the `api_key`. The `api_type` must be set to 'azure' and the others correspond to the properties of your endpoint.
In addition, the deployment name must be passed as the `engine` parameter.
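As a sketch of this configuration (the endpoint URL, API version, and deployment name below are placeholders; substitute the values from your own Azure OpenAI resource):

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://example-resource.openai.azure.com/"  # your endpoint
openai.api_version = "2023-05-15"  # the API version supported by your resource
openai.api_key = "..."  # your Azure OpenAI key

# note: the deployment name, not the model name, goes in `engine`
completion = openai.Completion.create(engine="my-deployment", prompt="Hello world")
```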
### Microsoft Azure Active Directory Authentication
In order to use Microsoft Active Directory to authenticate to your Azure endpoint, you need to set the `api_type` to "azure_ad" and pass the acquired credential token to `api_key`. The rest of the parameters need to be set as specified in the previous section.
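A sketch of acquiring and passing such a token with the `azure-identity` package (`DefaultAzureCredential` and the Cognitive Services token scope follow Azure's common pattern; the endpoint URL and API version are placeholders for your own resource's values):

```python
import openai
from azure.identity import DefaultAzureCredential

# acquire an access token for the Azure Cognitive Services scope
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

openai.api_type = "azure_ad"
openai.api_key = token.token  # pass the credential token as the api_key
openai.api_base = "https://example-resource.openai.azure.com/"  # your endpoint
openai.api_version = "2023-05-15"
```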