
Commit ee7329c

fix(broken-link): changed link from beta.openai.com to platform.openai.com (openai#1833)

1 parent: 5ab3df4

21 files changed: +38 −38 lines
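The whole commit is a mechanical host rename. A minimal sketch of how such a sweep might be done (the `rewrite_tree` helper and the file patterns are assumptions for illustration, not the tooling actually used):

```python
from pathlib import Path

OLD = "https://beta.openai.com"
NEW = "https://platform.openai.com"

def rewrite_links(text: str) -> str:
    """Replace the deprecated docs host with the current one."""
    return text.replace(OLD, NEW)

def rewrite_tree(root: str, patterns=("*.md", "*.ipynb")) -> int:
    """Rewrite links in place across a repo tree; returns files touched."""
    touched = 0
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            original = path.read_text(encoding="utf-8")
            updated = rewrite_links(original)
            if updated != original:
                path.write_text(updated, encoding="utf-8")
                touched += 1
    return touched
```

A plain string replace suffices here because the old host only ever appears as a URL prefix.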

articles/text_comparison_examples.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Text comparison examples
 
-The [OpenAI API embeddings endpoint](https://beta.openai.com/docs/guides/embeddings) can be used to measure relatedness or similarity between pieces of text.
+The [OpenAI API embeddings endpoint](https://platform.openai.com/docs/guides/embeddings) can be used to measure relatedness or similarity between pieces of text.
 
 By leveraging GPT-3's understanding of text, these embeddings [achieved state-of-the-art results](https://arxiv.org/abs/2201.10005) on benchmarks in unsupervised learning and transfer learning settings.
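Relatedness between embedding vectors is typically scored with cosine similarity; a minimal pure-Python sketch (the function name is ours, and real vectors would come from the embeddings endpoint linked above):

```python
import math

def cosine_similarity(a, b):
    """Relatedness score in [-1, 1]; higher means more related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

In practice you would use a vectorized implementation (e.g. NumPy) over the endpoint's 1536-dimensional outputs, but the arithmetic is the same.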

examples/Embedding_long_inputs.ipynb

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@
 "\n",
 "OpenAI's embedding models cannot embed text that exceeds a maximum length. The maximum length varies by model, and is measured by _tokens_, not string length. If you are unfamiliar with tokenization, check out [How to count tokens with tiktoken](How_to_count_tokens_with_tiktoken.ipynb).\n",
 "\n",
-"This notebook shows how to handle texts that are longer than a model's maximum context length. We'll demonstrate using embeddings from `text-embedding-3-small`, but the same ideas can be applied to other models and tasks. To learn more about embeddings, check out the [OpenAI Embeddings Guide](https://beta.openai.com/docs/guides/embeddings).\n"
+"This notebook shows how to handle texts that are longer than a model's maximum context length. We'll demonstrate using embeddings from `text-embedding-3-small`, but the same ideas can be applied to other models and tasks. To learn more about embeddings, check out the [OpenAI Embeddings Guide](https://platform.openai.com/docs/guides/embeddings).\n"
 ]
 },
 {
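The linked notebook handles over-long inputs by chunking on token count. A model-agnostic sketch over an already-tokenized input (a simplification; the notebook itself uses `tiktoken` and the real per-model limits):

```python
def chunk_tokens(tokens, max_tokens, overlap=0):
    """Split a token list into chunks of at most max_tokens, with optional
    overlap between consecutive chunks to preserve context at boundaries."""
    if max_tokens <= overlap:
        raise ValueError("max_tokens must exceed overlap")
    step = max_tokens - overlap
    return [tokens[i:i + max_tokens] for i in range(0, len(tokens), step)]
```

Each chunk can then be embedded separately, and the chunk embeddings averaged or kept as a list, as the notebook demonstrates.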

examples/How_to_count_tokens_with_tiktoken.ipynb

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@
 "\n",
 "## How strings are typically tokenized\n",
 "\n",
-"In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://beta.openai.com/tokenizer), or the third-party [Tiktokenizer](https://tiktokenizer.vercel.app/) webapp."
+"In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://platform.openai.com/tokenizer), or the third-party [Tiktokenizer](https://tiktokenizer.vercel.app/) webapp."
 ]
 },
 {
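The space-grouping behaviour described above can be imitated crudely with a regex (illustration only; real models use byte-pair encoding via `tiktoken`, and actual token boundaries are finer-grained than whole words):

```python
import re

def rough_word_pieces(text):
    """Crude illustration: attach each leading space to the word that
    follows it, mimicking how BPE tokenizers group spaces with word starts."""
    return re.findall(r" ?\S+", text)
```

So `" is"` comes out as one piece rather than `"is "` or a lone space, matching the convention the notebook describes.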

examples/How_to_stream_completions.ipynb

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@
 "\n",
 "## Downsides\n",
 "\n",
-"Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate. This may have implications for [approved usage](https://beta.openai.com/docs/usage-guidelines).\n",
+"Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate. This may have implications for [approved usage](https://platform.openai.com/docs/usage-guidelines).\n",
 "\n",
 "## Example code\n",
 "\n",

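One way to mitigate the moderation downside mentioned above is to re-check the accumulating partial completion as chunks arrive. A sketch with a stand-in `flagged` callback (hypothetical; a real application would call a moderation endpoint here):

```python
def stream_with_moderation(chunks, flagged=lambda text: False):
    """Yield streamed chunks while re-evaluating the growing partial
    completion; abort as soon as the moderation check flags it."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        if flagged(buffer):
            raise RuntimeError("moderation flagged partial completion")
        yield chunk
```

Checking the full buffer (rather than each chunk in isolation) matters because token-level fragments can look harmless on their own.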
examples/dalle/Image_generations_edits_and_variations_with_DALL-E.ipynb

Lines changed: 3 additions & 3 deletions
@@ -86,7 +86,7 @@
 "- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n",
 "- `size` (str): The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. Defaults to \"1024x1024\".\n",
 "- `style`(str | null): The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.\n",
-"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)"
+"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)"
 ]
 },
 {
@@ -166,7 +166,7 @@
 "- `n` (int): The number of images to generate. Must be between 1 and 10. Defaults to 1.\n",
 "- `size` (str): The size of the generated images. Must be one of \"256x256\", \"512x512\", or \"1024x1024\". Smaller images are faster. Defaults to \"1024x1024\".\n",
 "- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n",
-"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)\n"
+"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)\n"
 ]
 },
 {
@@ -248,7 +248,7 @@
 "- `n` (int): The number of images to generate. Must be between 1 and 10. Defaults to 1.\n",
 "- `size` (str): The size of the generated images. Must be one of \"256x256\", \"512x512\", or \"1024x1024\". Smaller images are faster. Defaults to \"1024x1024\".\n",
 "- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n",
-"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)\n"
+"- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)\n"
 ]
 },
 {
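The parameter constraints quoted in these hunks can be enforced client-side before any API call; a sketch (the helper name and dict structure are ours, not the notebook's):

```python
VALID_SIZES = {
    "dall-e-2": {"256x256", "512x512", "1024x1024"},
    "dall-e-3": {"1024x1024", "1792x1024", "1024x1792"},
}

def build_image_request(prompt, model="dall-e-2", size="1024x1024", n=1,
                        response_format="url", user=None):
    """Assemble keyword arguments for an image-generation call, checking
    the documented per-model size and count constraints first."""
    if size not in VALID_SIZES[model]:
        raise ValueError(f"{size!r} is not a valid size for {model}")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    request = {"prompt": prompt, "model": model, "size": size, "n": n,
               "response_format": response_format}
    if user is not None:
        request["user"] = user
    return request
```

Validating locally gives a clearer error than a rejected API call, and keeps the optional `user` identifier out of the payload unless it is actually set.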

examples/fine-tuned_qa/olympics-2-create-qa.ipynb

Lines changed: 4 additions & 4 deletions
@@ -12,7 +12,7 @@
 "metadata": {},
 "source": [
 "# 2. Creating a synthetic Q&A dataset\n",
-"We use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n",
+"We use [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n",
 "\n",
 "This is expensive, and will also take a long time, as we call the davinci engine for each section. You can simply download the final dataset instead.\n",
 "\n",
@@ -306,7 +306,7 @@
 "metadata": {},
 "source": [
 "## 2.5 Search file (DEPRECATED)\n",
-"We create a search file ([API reference](https://beta.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n",
+"We create a search file ([API reference](https://platform.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n",
 "\n",
 "<span style=\"color:orange; font-weight:bold\">DEPRECATED: The /search endpoint is deprecated in favour of using embeddings. Embeddings are cheaper, faster and can support a better search experience. See <a href=\"https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb\">Question Answering Guide</a> for a search implementation using the embeddings</span>\n"
 ]
@@ -333,7 +333,7 @@
 "source": [
 "## 2.6 Answer questions based on the context provided\n",
 "\n",
-"We will use a simple implementation of the answers endpoint. This works by simply using the [/search endpoint](https://beta.openai.com/docs/api-reference/searches), which searches over an indexed file to obtain the relevant sections which can be included in the context, following by a question and answering prompt given a specified model."
+"We will use a simple implementation of the answers endpoint. This works by simply using the [/search endpoint](https://platform.openai.com/docs/api-reference/searches), which searches over an indexed file to obtain the relevant sections which can be included in the context, following by a question and answering prompt given a specified model."
 ]
 },
 {
@@ -393,7 +393,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of the relevant context being present or not. (Note the second question is asking about a future event, set in 2024.)"
+"After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of the relevant context being present or not. (Note the second question is asking about a future event, set in 2024.)"
 ]
 },
 {
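The question-then-answer generation this notebook describes boils down to two instruct-style prompts over the same context; a sketch (the templates are illustrative, not the notebook's exact wording):

```python
def question_prompt(context):
    """Instruct-style prompt asking the model to write questions
    grounded in a passage."""
    return (
        "Write questions based on the text below\n\n"
        f"Text: {context}\n\nQuestions:\n1."
    )

def answer_prompt(context, question):
    """Companion prompt answering a generated question from the
    same passage, keeping question and answer grounded together."""
    return (
        "Write an answer based on the text below\n\n"
        f"Text: {context}\n\nQuestion: {question}\nAnswer:"
    )
```

Running the first prompt per section, then the second per generated question, yields the synthetic (context, question, answer) triples used for fine-tuning.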

examples/fine-tuned_qa/olympics-3-train-qa.ipynb

Lines changed: 1 addition & 1 deletion
@@ -593,7 +593,7 @@
 "metadata": {},
 "source": [
 "## 3.4 Answering the question based on a knowledge base\n",
-"Finally we can use a logic similar to the [/answers](https://beta.openai.com/docs/api-reference/answers) endpoint, where we first search for the relevant context, and then ask a Q&A model to answer the question given that context. If you'd like to see the implementation details, check out the [`answers_with_ft.py`](answers_with_ft.py) file."
+"Finally we can use a logic similar to the [/answers](https://platform.openai.com/docs/api-reference/answers) endpoint, where we first search for the relevant context, and then ask a Q&A model to answer the question given that context. If you'd like to see the implementation details, check out the [`answers_with_ft.py`](answers_with_ft.py) file."
 ]
 },
 {
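The search-then-answer flow described here can be sketched with a toy lexical scorer standing in for the deprecated /search endpoint (everything below is illustrative; the notebook's real implementation lives in `answers_with_ft.py`):

```python
def score(query, document):
    """Toy word-overlap relevance score, a stand-in for /search."""
    q = set(query.lower().split())
    d = set(document.lower().split())
    return len(q & d) / (len(q) or 1)

def answer_with_context(question, documents):
    """Pick the best-matching context, then build a Q&A prompt
    for a completion model to answer from that context."""
    best = max(documents, key=lambda doc: score(question, doc))
    return (
        "Answer the question using the context.\n\n"
        f"Context: {best}\n\nQ: {question}\nA:"
    )
```

Swapping the toy scorer for embedding similarity gives the approach the Cookbook now recommends in place of /search.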

examples/vector_databases/PolarDB/Getting_started_with_PolarDB_and_OpenAI.ipynb

Lines changed: 2 additions & 2 deletions
@@ -41,7 +41,7 @@
 "\n",
 "1. PolarDB-PG cloud server instance.\n",
 "2. The 'psycopg2' library to interact with the vector database. Any other postgresql client library is ok.\n",
-"3. An [OpenAI API key](https://beta.openai.com/account/api-keys)."
+"3. An [OpenAI API key](https://platform.openai.com/account/api-keys)."
 ]
 },
 {
@@ -79,7 +79,7 @@
 "Prepare your OpenAI API key\n",
 "The OpenAI API key is used for vectorization of the documents and queries.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from https://beta.openai.com/account/api-keys.\n",
+"If you don't have an OpenAI API key, you can get one from https://platform.openai.com/account/api-keys.\n",
 "\n",
 "Once you get your key, please add it to your environment variables as OPENAI_API_KEY.\n",
 "\n",

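Each of the vector-database notebooks below repeats the same setup step: export `OPENAI_API_KEY` and read it at runtime. A sketch of that pattern (the helper name is ours):

```python
import os

def get_openai_api_key():
    """Read the API key from the environment rather than hard-coding
    it into a notebook; fail early with a clear message if unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "Set OPENAI_API_KEY, e.g. `export OPENAI_API_KEY=sk-...` "
            "in your shell before starting the notebook"
        )
    return key
```

Some notebooks instead prompt interactively with `getpass`, which avoids leaving the key in shell history; either way the key never appears in the notebook source.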
examples/vector_databases/analyticdb/Getting_started_with_AnalyticDB_and_OpenAI.ipynb

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@
 "\n",
 "1. AnalyticDB cloud server instance.\n",
 "2. The 'psycopg2' library to interact with the vector database. Any other postgresql client library is ok.\n",
-"3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
+"3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
 "\n"
 ]
 },
@@ -78,7 +78,7 @@
 "\n",
 "The OpenAI API key is used for vectorization of the documents and queries.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
 ]

examples/vector_databases/chroma/hyde-with-chroma-and-openai.ipynb

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We use OpenAI's API's throughout this notebook. You can get an API key from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys)\n",
+"We use OpenAI's API's throughout this notebook. You can get an API key from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)\n",
 "\n",
 "You can add your API key as an environment variable by executing the command `export OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` in a terminal. Note that you will need to reload the notebook if the environment variable wasn't set yet. Alternatively, you can set it in the notebook, see below. "
 ]

examples/vector_databases/hologres/Getting_started_with_Hologres_and_OpenAI.ipynb

Lines changed: 2 additions & 2 deletions
@@ -38,7 +38,7 @@
 "\n",
 "1. Hologres cloud server instance.\n",
 "2. The 'psycopg2-binary' library to interact with the vector database. Any other postgresql client library is ok.\n",
-"3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
+"3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
 "\n"
 ]
 },
@@ -83,7 +83,7 @@
 "\n",
 "The OpenAI API key is used for vectorization of the documents and queries.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
 ]

examples/vector_databases/myscale/Getting_started_with_MyScale_and_OpenAI.ipynb

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@
 "\n",
 "1. A MyScale cluster deployed by following the [quickstart guide](https://docs.myscale.com/en/quickstart/).\n",
 "2. The 'clickhouse-connect' library to interact with MyScale.\n",
-"3. An [OpenAI API key](https://beta.openai.com/account/api-keys) for vectorization of queries."
+"3. An [OpenAI API key](https://platform.openai.com/account/api-keys) for vectorization of queries."
 ]
 },
 {

examples/vector_databases/qdrant/Getting_started_with_Qdrant_and_OpenAI.ipynb

Lines changed: 1 addition & 1 deletion
@@ -132,7 +132,7 @@
 "\n",
 "The OpenAI API key is used for vectorization of the documents and queries.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by running following command:"
 ]

examples/vector_databases/qdrant/QA_with_Langchain_Qdrant_and_OpenAI.ipynb

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@
 "1. Qdrant server instance. In our case a local Docker container.\n",
 "2. The [qdrant-client](https://github.com/qdrant/qdrant_client) library to interact with the vector database.\n",
 "3. [Langchain](https://github.com/hwchase17/langchain) as a framework.\n",
-"3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
+"3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "### Start Qdrant server\n",
 "\n",
@@ -120,7 +120,7 @@
 "\n",
 "The OpenAI API key is used for vectorization of the documents and queries.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by running following command:"
 ]

examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb

Lines changed: 2 additions & 2 deletions
@@ -43,7 +43,7 @@
 "* start a Redis database with RediSearch (redis-stack)\n",
 "* install libraries\n",
 "    * [Redis-py](https://github.com/redis/redis-py)\n",
-"* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n",
+"* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n",
 "\n",
 "===========================================================\n",
 "\n",
@@ -92,7 +92,7 @@
 "\n",
 "The `OpenAI API key` is used for vectorization of query data.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by using following command:"
 ]

examples/vector_databases/redis/redis-hybrid-query-examples.ipynb

Lines changed: 2 additions & 2 deletions
@@ -24,7 +24,7 @@
 "* start a Redis database with RediSearch (redis-stack)\n",
 "* install libraries\n",
 "    * [Redis-py](https://github.com/redis/redis-py)\n",
-"* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n",
+"* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n",
 "\n",
 "===========================================================\n",
 "\n",
@@ -100,7 +100,7 @@
 "\n",
 "The `OpenAI API key` is used for vectorization of query data.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by using following command:"
 ]

examples/vector_databases/tair/Getting_started_with_Tair_and_OpenAI.ipynb

Lines changed: 2 additions & 2 deletions
@@ -38,7 +38,7 @@
 "\n",
 "1. Tair cloud server instance.\n",
 "2. The 'tair' library to interact with the tair database.\n",
-"3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n",
+"3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n",
 "\n"
 ]
 },
@@ -109,7 +109,7 @@
 "\n",
 "The OpenAI API key is used for vectorization of the documents and queries.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it by getpass."
 ]

examples/vector_databases/weaviate/generative-search-with-weaviate-and-openai.ipynb

Lines changed: 2 additions & 2 deletions
@@ -30,7 +30,7 @@
 "* completed [Getting Started cookbook](./getting-started-with-weaviate-and-openai.ipynb),\n",
 "* crated a `Weaviate` instance,\n",
 "* imported data into your `Weaviate` instance,\n",
-"* you have an [OpenAI API key](https://beta.openai.com/account/api-keys)"
+"* you have an [OpenAI API key](https://platform.openai.com/account/api-keys)"
 ]
 },
 {
@@ -43,7 +43,7 @@
 "\n",
 "The `OpenAI API key` is used for vectorization of your data at import, and for running queries.\n",
 "\n",
-"If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n",
+"If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n",
 "\n",
 "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
 ]
