diff --git a/articles/text_comparison_examples.md b/articles/text_comparison_examples.md index ebce20839b..11412e5c65 100644 --- a/articles/text_comparison_examples.md +++ b/articles/text_comparison_examples.md @@ -1,6 +1,6 @@ # Text comparison examples -The [OpenAI API embeddings endpoint](https://beta.openai.com/docs/guides/embeddings) can be used to measure relatedness or similarity between pieces of text. +The [OpenAI API embeddings endpoint](https://platform.openai.com/docs/guides/embeddings) can be used to measure relatedness or similarity between pieces of text. By leveraging GPT-3's understanding of text, these embeddings [achieved state-of-the-art results](https://arxiv.org/abs/2201.10005) on benchmarks in unsupervised learning and transfer learning settings. diff --git a/examples/Embedding_long_inputs.ipynb b/examples/Embedding_long_inputs.ipynb index 4b697cfb9e..f2faa16a07 100644 --- a/examples/Embedding_long_inputs.ipynb +++ b/examples/Embedding_long_inputs.ipynb @@ -9,7 +9,7 @@ "\n", "OpenAI's embedding models cannot embed text that exceeds a maximum length. The maximum length varies by model, and is measured by _tokens_, not string length. If you are unfamiliar with tokenization, check out [How to count tokens with tiktoken](How_to_count_tokens_with_tiktoken.ipynb).\n", "\n", - "This notebook shows how to handle texts that are longer than a model's maximum context length. We'll demonstrate using embeddings from `text-embedding-3-small`, but the same ideas can be applied to other models and tasks. To learn more about embeddings, check out the [OpenAI Embeddings Guide](https://beta.openai.com/docs/guides/embeddings).\n" + "This notebook shows how to handle texts that are longer than a model's maximum context length. We'll demonstrate using embeddings from `text-embedding-3-small`, but the same ideas can be applied to other models and tasks. 
To learn more about embeddings, check out the [OpenAI Embeddings Guide](https://platform.openai.com/docs/guides/embeddings).\n" ] }, { diff --git a/examples/How_to_count_tokens_with_tiktoken.ipynb b/examples/How_to_count_tokens_with_tiktoken.ipynb index 19ce768921..ed1fd1b0df 100644 --- a/examples/How_to_count_tokens_with_tiktoken.ipynb +++ b/examples/How_to_count_tokens_with_tiktoken.ipynb @@ -57,7 +57,7 @@ "\n", "## How strings are typically tokenized\n", "\n", - "In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://beta.openai.com/tokenizer), or the third-party [Tiktokenizer](https://tiktokenizer.vercel.app/) webapp." + "In English, tokens commonly range in length from one character to one word (e.g., `\"t\"` or `\" great\"`), though in some languages tokens can be shorter than one character or longer than one word. Spaces are usually grouped with the starts of words (e.g., `\" is\"` instead of `\"is \"` or `\" \"`+`\"is\"`). You can quickly check how a string is tokenized at the [OpenAI Tokenizer](https://platform.openai.com/tokenizer), or the third-party [Tiktokenizer](https://tiktokenizer.vercel.app/) webapp." ] }, { diff --git a/examples/How_to_stream_completions.ipynb b/examples/How_to_stream_completions.ipynb index 98ce1e2ceb..e496fb6434 100644 --- a/examples/How_to_stream_completions.ipynb +++ b/examples/How_to_stream_completions.ipynb @@ -17,7 +17,7 @@ "\n", "## Downsides\n", "\n", - "Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate. 
This may have implications for [approved usage](https://beta.openai.com/docs/usage-guidelines).\n", + "Note that using `stream=True` in a production application makes it more difficult to moderate the content of the completions, as partial completions may be more difficult to evaluate. This may have implications for [approved usage](https://platform.openai.com/docs/usage-guidelines).\n", "\n", "## Example code\n", "\n", diff --git a/examples/dalle/Image_generations_edits_and_variations_with_DALL-E.ipynb b/examples/dalle/Image_generations_edits_and_variations_with_DALL-E.ipynb index b3a659372f..4312236891 100644 --- a/examples/dalle/Image_generations_edits_and_variations_with_DALL-E.ipynb +++ b/examples/dalle/Image_generations_edits_and_variations_with_DALL-E.ipynb @@ -86,7 +86,7 @@ "- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n", "- `size` (str): The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models. Defaults to \"1024x1024\".\n", "- `style`(str | null): The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.\n", - "- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)" + "- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)" ] }, { @@ -166,7 +166,7 @@ "- `n` (int): The number of images to generate. Must be between 1 and 10. 
Defaults to 1.\n", "- `size` (str): The size of the generated images. Must be one of \"256x256\", \"512x512\", or \"1024x1024\". Smaller images are faster. Defaults to \"1024x1024\".\n", "- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n", - "- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)\n" + "- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)\n" ] }, { @@ -248,7 +248,7 @@ "- `n` (int): The number of images to generate. Must be between 1 and 10. Defaults to 1.\n", "- `size` (str): The size of the generated images. Must be one of \"256x256\", \"512x512\", or \"1024x1024\". Smaller images are faster. Defaults to \"1024x1024\".\n", "- `response_format` (str): The format in which the generated images are returned. Must be one of \"url\" or \"b64_json\". Defaults to \"url\".\n", - "- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://beta.openai.com/docs/usage-policies/end-user-ids)\n" + "- `user` (str): A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more.](https://platform.openai.com/docs/usage-policies/end-user-ids)\n" ] }, { diff --git a/examples/fine-tuned_qa/olympics-2-create-qa.ipynb b/examples/fine-tuned_qa/olympics-2-create-qa.ipynb index eb7d5e9e69..922a996f76 100644 --- a/examples/fine-tuned_qa/olympics-2-create-qa.ipynb +++ b/examples/fine-tuned_qa/olympics-2-create-qa.ipynb @@ -12,7 +12,7 @@ "metadata": {}, "source": [ "# 2. 
Creating a synthetic Q&A dataset\n", - "We use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n", + "We use [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. \n", "\n", "This is expensive, and will also take a long time, as we call the davinci engine for each section. You can simply download the final dataset instead.\n", "\n", @@ -306,7 +306,7 @@ "metadata": {}, "source": [ "## 2.5 Search file (DEPRECATED)\n", - "We create a search file ([API reference](https://beta.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n", + "We create a search file ([API reference](https://platform.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked.\n", "\n", "DEPRECATED: The /search endpoint is deprecated in favour of using embeddings. Embeddings are cheaper, faster and can support a better search experience. See Question Answering Guide for a search implementation using the embeddings\n" ] @@ -333,7 +333,7 @@ "source": [ "## 2.6 Answer questions based on the context provided\n", "\n", - "We will use a simple implementation of the answers endpoint. 
This works by simply using the [/search endpoint](https://beta.openai.com/docs/api-reference/searches), which searches over an indexed file to obtain the relevant sections which can be included in the context, following by a question and answering prompt given a specified model." + "We will use a simple implementation of the answers endpoint. This works by simply using the [/search endpoint](https://platform.openai.com/docs/api-reference/searches), which searches over an indexed file to obtain the relevant sections which can be included in the context, followed by a question-answering prompt given a specified model." ] }, { @@ -393,7 +393,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v3`](https://beta.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of the relevant context being present or not. (Note the second question is asking about a future event, set in 2024.)" + "After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v3`](https://platform.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of the relevant context being present or not. 
(Note the second question is asking about a future event, set in 2024.)" ] }, { diff --git a/examples/fine-tuned_qa/olympics-3-train-qa.ipynb b/examples/fine-tuned_qa/olympics-3-train-qa.ipynb index cdbf315d10..4777ed4186 100644 --- a/examples/fine-tuned_qa/olympics-3-train-qa.ipynb +++ b/examples/fine-tuned_qa/olympics-3-train-qa.ipynb @@ -593,7 +593,7 @@ "metadata": {}, "source": [ "## 3.4 Answering the question based on a knowledge base\n", - "Finally we can use a logic similar to the [/answers](https://beta.openai.com/docs/api-reference/answers) endpoint, where we first search for the relevant context, and then ask a Q&A model to answer the question given that context. If you'd like to see the implementation details, check out the [`answers_with_ft.py`](answers_with_ft.py) file." + "Finally we can use a logic similar to the [/answers](https://platform.openai.com/docs/api-reference/answers) endpoint, where we first search for the relevant context, and then ask a Q&A model to answer the question given that context. If you'd like to see the implementation details, check out the [`answers_with_ft.py`](answers_with_ft.py) file." ] }, { diff --git a/examples/vector_databases/PolarDB/Getting_started_with_PolarDB_and_OpenAI.ipynb b/examples/vector_databases/PolarDB/Getting_started_with_PolarDB_and_OpenAI.ipynb index e52d543765..1a0145ecd7 100644 --- a/examples/vector_databases/PolarDB/Getting_started_with_PolarDB_and_OpenAI.ipynb +++ b/examples/vector_databases/PolarDB/Getting_started_with_PolarDB_and_OpenAI.ipynb @@ -41,7 +41,7 @@ "\n", "1. PolarDB-PG cloud server instance.\n", "2. The 'psycopg2' library to interact with the vector database. Any other postgresql client library is ok.\n", - "3. An [OpenAI API key](https://beta.openai.com/account/api-keys)." + "3. An [OpenAI API key](https://platform.openai.com/account/api-keys)." 
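The prerequisites above all assume the OpenAI API key has been exported as the `OPENAI_API_KEY` environment variable. A minimal, stdlib-only sketch of the lookup these notebooks rely on (the helper name is ours, not part of any notebook):

```python
import os


def get_openai_api_key() -> str:
    """Return the OpenAI API key from the environment, failing loudly if unset.

    The variable name OPENAI_API_KEY matches the one the notebooks ask you to
    set; this helper itself is an illustration, not notebook code.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before running the notebook."
        )
    return key
```

Failing fast like this is preferable to letting a missing key surface later as an opaque authentication error from the API client.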
] }, { @@ -79,7 +79,7 @@ "Prepare your OpenAI API key\n", "The OpenAI API key is used for vectorization of the documents and queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from https://beta.openai.com/account/api-keys.\n", + "If you don't have an OpenAI API key, you can get one from https://platform.openai.com/account/api-keys.\n", "\n", "Once you get your key, please add it to your environment variables as OPENAI_API_KEY.\n", "\n", diff --git a/examples/vector_databases/analyticdb/Getting_started_with_AnalyticDB_and_OpenAI.ipynb b/examples/vector_databases/analyticdb/Getting_started_with_AnalyticDB_and_OpenAI.ipynb index ed95bc1915..1eb1123dbc 100644 --- a/examples/vector_databases/analyticdb/Getting_started_with_AnalyticDB_and_OpenAI.ipynb +++ b/examples/vector_databases/analyticdb/Getting_started_with_AnalyticDB_and_OpenAI.ipynb @@ -33,7 +33,7 @@ "\n", "1. AnalyticDB cloud server instance.\n", "2. The 'psycopg2' library to interact with the vector database. Any other postgresql client library is ok.\n", - "3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n", + "3. An [OpenAI API key](https://platform.openai.com/account/api-keys).\n", "\n" ] }, @@ -78,7 +78,7 @@ "\n", "The OpenAI API key is used for vectorization of the documents and queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`." 
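The "add it to your environment variables as `OPENAI_API_KEY`" step can be sketched in a POSIX shell as follows (the key value is a placeholder, not a real credential):

```shell
# Placeholder value for illustration only; paste your real key from the
# API keys page in place of the dummy string.
export OPENAI_API_KEY="sk-your-key-here"

# Sanity-check that the variable is set in the current shell before
# launching Jupyter from that same shell.
if [ -z "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is not set" >&2
  exit 1
fi
echo "OPENAI_API_KEY is set"
```

Note that `export` only affects the current shell and its children, which is why the notebook must be started from the shell where the variable was exported.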
] diff --git a/examples/vector_databases/chroma/hyde-with-chroma-and-openai.ipynb b/examples/vector_databases/chroma/hyde-with-chroma-and-openai.ipynb index 1274618bfb..49b69e9e57 100644 --- a/examples/vector_databases/chroma/hyde-with-chroma-and-openai.ipynb +++ b/examples/vector_databases/chroma/hyde-with-chroma-and-openai.ipynb @@ -53,7 +53,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We use OpenAI's API's throughout this notebook. You can get an API key from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys)\n", + "We use OpenAI's API's throughout this notebook. You can get an API key from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)\n", "\n", "You can add your API key as an environment variable by executing the command `export OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` in a terminal. Note that you will need to reload the notebook if the environment variable wasn't set yet. Alternatively, you can set it in the notebook, see below. " ] diff --git a/examples/vector_databases/hologres/Getting_started_with_Hologres_and_OpenAI.ipynb b/examples/vector_databases/hologres/Getting_started_with_Hologres_and_OpenAI.ipynb index 01db551ca7..760d2fcaac 100644 --- a/examples/vector_databases/hologres/Getting_started_with_Hologres_and_OpenAI.ipynb +++ b/examples/vector_databases/hologres/Getting_started_with_Hologres_and_OpenAI.ipynb @@ -38,7 +38,7 @@ "\n", "1. Hologres cloud server instance.\n", "2. The 'psycopg2-binary' library to interact with the vector database. Any other postgresql client library is ok.\n", - "3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n", + "3. 
An [OpenAI API key](https://platform.openai.com/account/api-keys).\n", "\n" ] }, @@ -83,7 +83,7 @@ "\n", "The OpenAI API key is used for vectorization of the documents and queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`." ] diff --git a/examples/vector_databases/myscale/Getting_started_with_MyScale_and_OpenAI.ipynb b/examples/vector_databases/myscale/Getting_started_with_MyScale_and_OpenAI.ipynb index 3609b0ee45..88f055e832 100644 --- a/examples/vector_databases/myscale/Getting_started_with_MyScale_and_OpenAI.ipynb +++ b/examples/vector_databases/myscale/Getting_started_with_MyScale_and_OpenAI.ipynb @@ -33,7 +33,7 @@ "\n", "1. A MyScale cluster deployed by following the [quickstart guide](https://docs.myscale.com/en/quickstart/).\n", "2. The 'clickhouse-connect' library to interact with MyScale.\n", - "3. An [OpenAI API key](https://beta.openai.com/account/api-keys) for vectorization of queries." + "3. An [OpenAI API key](https://platform.openai.com/account/api-keys) for vectorization of queries." 
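These vector-database notebooks all rank stored documents by similarity between the query embedding and the document embeddings. Cosine similarity is the metric typically used for OpenAI embeddings; a stdlib-only illustration of the computation (not any database's actual implementation):

```python
import math
from typing import Sequence


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length.

    Returns 1.0 for parallel vectors, 0.0 for orthogonal ones.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

In practice the database computes this (or an equivalent distance) over an index rather than by brute force, but the ranking it produces is the same.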
] }, { diff --git a/examples/vector_databases/qdrant/Getting_started_with_Qdrant_and_OpenAI.ipynb b/examples/vector_databases/qdrant/Getting_started_with_Qdrant_and_OpenAI.ipynb index 6764d5f677..08b3c5039a 100644 --- a/examples/vector_databases/qdrant/Getting_started_with_Qdrant_and_OpenAI.ipynb +++ b/examples/vector_databases/qdrant/Getting_started_with_Qdrant_and_OpenAI.ipynb @@ -132,7 +132,7 @@ "\n", "The OpenAI API key is used for vectorization of the documents and queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by running following command:" ] diff --git a/examples/vector_databases/qdrant/QA_with_Langchain_Qdrant_and_OpenAI.ipynb b/examples/vector_databases/qdrant/QA_with_Langchain_Qdrant_and_OpenAI.ipynb index 376c83d080..01df667326 100644 --- a/examples/vector_databases/qdrant/QA_with_Langchain_Qdrant_and_OpenAI.ipynb +++ b/examples/vector_databases/qdrant/QA_with_Langchain_Qdrant_and_OpenAI.ipynb @@ -29,7 +29,7 @@ "1. Qdrant server instance. In our case a local Docker container.\n", "2. The [qdrant-client](https://github.com/qdrant/qdrant_client) library to interact with the vector database.\n", "3. [Langchain](https://github.com/hwchase17/langchain) as a framework.\n", - "3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n", + "3. 
An [OpenAI API key](https://platform.openai.com/account/api-keys).\n", "\n", "### Start Qdrant server\n", "\n", @@ -120,7 +120,7 @@ "\n", "The OpenAI API key is used for vectorization of the documents and queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by running following command:" ] diff --git a/examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb b/examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb index b040949548..1dee436f4f 100644 --- a/examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb +++ b/examples/vector_databases/redis/getting-started-with-redis-and-openai.ipynb @@ -43,7 +43,7 @@ "* start a Redis database with RediSearch (redis-stack)\n", "* install libraries\n", " * [Redis-py](https://github.com/redis/redis-py)\n", - "* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n", + "* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n", "\n", "===========================================================\n", "\n", @@ -92,7 +92,7 @@ "\n", "The `OpenAI API key` is used for vectorization of query data.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by using following command:" ] diff --git 
a/examples/vector_databases/redis/redis-hybrid-query-examples.ipynb b/examples/vector_databases/redis/redis-hybrid-query-examples.ipynb index ee1056c706..ca72251447 100644 --- a/examples/vector_databases/redis/redis-hybrid-query-examples.ipynb +++ b/examples/vector_databases/redis/redis-hybrid-query-examples.ipynb @@ -24,7 +24,7 @@ "* start a Redis database with RediSearch (redis-stack)\n", "* install libraries\n", " * [Redis-py](https://github.com/redis/redis-py)\n", - "* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n", + "* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n", "\n", "===========================================================\n", "\n", @@ -100,7 +100,7 @@ "\n", "The `OpenAI API key` is used for vectorization of query data.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY` by using following command:" ] diff --git a/examples/vector_databases/tair/Getting_started_with_Tair_and_OpenAI.ipynb b/examples/vector_databases/tair/Getting_started_with_Tair_and_OpenAI.ipynb index c2c03b777b..1d7c98053b 100644 --- a/examples/vector_databases/tair/Getting_started_with_Tair_and_OpenAI.ipynb +++ b/examples/vector_databases/tair/Getting_started_with_Tair_and_OpenAI.ipynb @@ -38,7 +38,7 @@ "\n", "1. Tair cloud server instance.\n", "2. The 'tair' library to interact with the tair database.\n", - "3. An [OpenAI API key](https://beta.openai.com/account/api-keys).\n", + "3. 
An [OpenAI API key](https://platform.openai.com/account/api-keys).\n", "\n" ] }, @@ -109,7 +109,7 @@ "\n", "The OpenAI API key is used for vectorization of the documents and queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it by getpass." ] }, diff --git a/examples/vector_databases/weaviate/generative-search-with-weaviate-and-openai.ipynb b/examples/vector_databases/weaviate/generative-search-with-weaviate-and-openai.ipynb index 1c7d7fb808..f77b21ee07 100644 --- a/examples/vector_databases/weaviate/generative-search-with-weaviate-and-openai.ipynb +++ b/examples/vector_databases/weaviate/generative-search-with-weaviate-and-openai.ipynb @@ -30,7 +30,7 @@ "* completed [Getting Started cookbook](./getting-started-with-weaviate-and-openai.ipynb),\n", "* crated a `Weaviate` instance,\n", "* imported data into your `Weaviate` instance,\n", - "* you have an [OpenAI API key](https://beta.openai.com/account/api-keys)" + "* you have an [OpenAI API key](https://platform.openai.com/account/api-keys)" ] }, { @@ -43,7 +43,7 @@ "\n", "The `OpenAI API key` is used for vectorization of your data at import, and for running queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`."
] diff --git a/examples/vector_databases/weaviate/getting-started-with-weaviate-and-openai.ipynb b/examples/vector_databases/weaviate/getting-started-with-weaviate-and-openai.ipynb index d58c528c76..740c4b60cc 100644 --- a/examples/vector_databases/weaviate/getting-started-with-weaviate-and-openai.ipynb +++ b/examples/vector_databases/weaviate/getting-started-with-weaviate-and-openai.ipynb @@ -95,7 +95,7 @@ " * `weaviate-client`\n", " * `datasets`\n", " * `apache-beam`\n", - "* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n", + "* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n", "\n", "===========================================================\n", "### Create a Weaviate instance\n", @@ -172,7 +172,7 @@ "\n", "The `OpenAI API key` is used for vectorization of your data at import, and for running queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`." 
] diff --git a/examples/vector_databases/weaviate/hybrid-search-with-weaviate-and-openai.ipynb b/examples/vector_databases/weaviate/hybrid-search-with-weaviate-and-openai.ipynb index 3ab0965d30..a75a311c69 100644 --- a/examples/vector_databases/weaviate/hybrid-search-with-weaviate-and-openai.ipynb +++ b/examples/vector_databases/weaviate/hybrid-search-with-weaviate-and-openai.ipynb @@ -95,7 +95,7 @@ " * `weaviate-client`\n", " * `datasets`\n", " * `apache-beam`\n", - "* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n", + "* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n", "\n", "===========================================================\n", "### Create a Weaviate instance\n", @@ -172,7 +172,7 @@ "\n", "The `OpenAI API key` is used for vectorization of your data at import, and for running queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`." 
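The Weaviate notebooks import data (and generate embeddings via `text2vec-openai`) in batches rather than one object at a time. A generic, library-agnostic batching helper that captures the idea (our own sketch, not the Weaviate client API):

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")


def batched(items: Iterable[T], batch_size: int) -> Iterator[List[T]]:
    """Yield successive fixed-size batches; the final batch may be shorter."""
    if batch_size < 1:
        raise ValueError("batch_size must be at least 1")
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```

Each yielded batch would then be handed to the client's batch-import call, which amortizes network round-trips and embedding requests across many objects.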
] diff --git a/examples/vector_databases/weaviate/question-answering-with-weaviate-and-openai.ipynb b/examples/vector_databases/weaviate/question-answering-with-weaviate-and-openai.ipynb index 1c6f2b0a73..8e619e16b9 100644 --- a/examples/vector_databases/weaviate/question-answering-with-weaviate-and-openai.ipynb +++ b/examples/vector_databases/weaviate/question-answering-with-weaviate-and-openai.ipynb @@ -9,7 +9,7 @@ "\n", "This notebook is prepared for a scenario where:\n", "* Your data is not vectorized\n", - "* You want to run Q&A ([learn more](https://weaviate.io/developers/weaviate/modules/reader-generator-modules/qna-openai)) on your data based on the [OpenAI completions](https://beta.openai.com/docs/api-reference/completions) endpoint.\n", + "* You want to run Q&A ([learn more](https://weaviate.io/developers/weaviate/modules/reader-generator-modules/qna-openai)) on your data based on the [OpenAI completions](https://platform.openai.com/docs/api-reference/completions) endpoint.\n", "* You want to use Weaviate with the OpenAI module ([text2vec-openai](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-openai)), to generate vector embeddings for you.\n", "\n", "This notebook takes you through a simple flow to set up a Weaviate instance, connect to it (with OpenAI API key), configure data schema, import data (which will automatically generate vector embeddings for your data), and run question answering.\n", @@ -94,7 +94,7 @@ " * `weaviate-client`\n", " * `datasets`\n", " * `apache-beam`\n", - "* get your [OpenAI API key](https://beta.openai.com/account/api-keys)\n", + "* get your [OpenAI API key](https://platform.openai.com/account/api-keys)\n", "\n", "===========================================================\n", "### Create a Weaviate instance\n", @@ -171,7 +171,7 @@ "\n", "The `OpenAI API key` is used for vectorization of your data at import, and for queries.\n", "\n", - "If you don't have an OpenAI API key, you can get one 
from [https://beta.openai.com/account/api-keys](https://beta.openai.com/account/api-keys).\n", + "If you don't have an OpenAI API key, you can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).\n", "\n", "Once you get your key, please add it to your environment variables as `OPENAI_API_KEY`." ]