# Add markdown docs #462

Merged 1 commit on Jul 17, 2024.
1 change: 0 additions & 1 deletion .gitignore
@@ -3,7 +3,6 @@
/.idea/
/.pytype/
/build/
/docs/api
*.egg-info
.DS_Store
__pycache__
138 changes: 138 additions & 0 deletions docs/api/google/generativeai.md
@@ -0,0 +1,138 @@
description: Google AI Python SDK

<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="google.generativeai" />
<meta itemprop="path" content="Stable" />
<meta itemprop="property" content="__version__"/>
<meta itemprop="property" content="annotations"/>
</div>

# Module: google.generativeai

<!-- Insert buttons and diff -->

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
<a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/__init__.py">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
</td>
</table>



Google AI Python SDK



## Setup

```posix-terminal
pip install google-generativeai
```

## GenerativeModel

Use `genai.GenerativeModel` to access the API:

```python
import google.generativeai as genai
import os

genai.configure(api_key=os.environ['API_KEY'])

model = genai.GenerativeModel(model_name='gemini-1.5-flash')
response = model.generate_content('Teach me about how an LLM works')

print(response.text)
```

See the [python quickstart](https://ai.google.dev/tutorials/python_quickstart) for more details.

## Modules

[`protos`](../google/generativeai/protos.md) module: This module provides low-level access to the ProtoBuffer "Message" classes used by the API.

[`types`](../google/generativeai/types.md) module: A collection of type definitions used throughout the library.

## Classes

[`class ChatSession`](../google/generativeai/ChatSession.md): Contains an ongoing conversation with the model.

[`class GenerationConfig`](../google/generativeai/types/GenerationConfig.md): A simple dataclass used to configure the generation parameters of <a href="../google/generativeai/GenerativeModel.md#generate_content"><code>GenerativeModel.generate_content</code></a>.

[`class GenerativeModel`](../google/generativeai/GenerativeModel.md): The `genai.GenerativeModel` class wraps default parameters for calls to <a href="../google/generativeai/GenerativeModel.md#generate_content"><code>GenerativeModel.generate_content</code></a>, <a href="../google/generativeai/GenerativeModel.md#count_tokens"><code>GenerativeModel.count_tokens</code></a>, and <a href="../google/generativeai/GenerativeModel.md#start_chat"><code>GenerativeModel.start_chat</code></a>.

## Functions

[`chat(...)`](../google/generativeai/chat.md): Calls the API to initiate a chat with a model using the provided parameters.

[`chat_async(...)`](../google/generativeai/chat_async.md): Calls the API to initiate a chat with a model using the provided parameters.

[`configure(...)`](../google/generativeai/configure.md): Captures default client configuration.

[`count_message_tokens(...)`](../google/generativeai/count_message_tokens.md): Calls the API to calculate the number of tokens used in the prompt.

[`count_text_tokens(...)`](../google/generativeai/count_text_tokens.md): Calls the API to count the number of tokens in the text prompt.

[`create_tuned_model(...)`](../google/generativeai/create_tuned_model.md): Calls the API to initiate a tuning process that optimizes a model for specific data, returning an operation object to track and manage the tuning progress.

[`delete_file(...)`](../google/generativeai/delete_file.md): Calls the API to permanently delete a specified file using a supported file service.

[`delete_tuned_model(...)`](../google/generativeai/delete_tuned_model.md): Calls the API to delete a specified tuned model.

[`embed_content(...)`](../google/generativeai/embed_content.md): Calls the API to create embeddings for content passed in.

[`embed_content_async(...)`](../google/generativeai/embed_content_async.md): Calls the API to create async embeddings for content passed in.

[`generate_embeddings(...)`](../google/generativeai/generate_embeddings.md): Calls the API to create an embedding for the text passed in.

[`generate_text(...)`](../google/generativeai/generate_text.md): Calls the API to generate text based on the provided prompt.

[`get_base_model(...)`](../google/generativeai/get_base_model.md): Calls the API to fetch a base model by name.

[`get_file(...)`](../google/generativeai/get_file.md): Calls the API to retrieve a specified file using a supported file service.

[`get_model(...)`](../google/generativeai/get_model.md): Calls the API to fetch a model by name.

[`get_operation(...)`](../google/generativeai/get_operation.md): Calls the API to get a specific operation.

[`get_tuned_model(...)`](../google/generativeai/get_tuned_model.md): Calls the API to fetch a tuned model by name.

[`list_files(...)`](../google/generativeai/list_files.md): Calls the API to list files using a supported file service.

[`list_models(...)`](../google/generativeai/list_models.md): Calls the API to list all available models.

[`list_operations(...)`](../google/generativeai/list_operations.md): Calls the API to list all operations.

[`list_tuned_models(...)`](../google/generativeai/list_tuned_models.md): Calls the API to list all tuned models.

[`update_tuned_model(...)`](../google/generativeai/update_tuned_model.md): Calls the API to push updates to a specified tuned model where only certain attributes are updatable.

[`upload_file(...)`](../google/generativeai/upload_file.md): Calls the API to upload a file using a supported file service.



<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Other Members</h2></th></tr>

<tr>
<td>
__version__<a id="__version__"></a>
</td>
<td>
`'0.7.2'`
</td>
</tr><tr>
<td>
annotations<a id="annotations"></a>
</td>
<td>
Instance of `__future__._Feature`
</td>
</tr>
</table>

222 changes: 222 additions & 0 deletions docs/api/google/generativeai/ChatSession.md
@@ -0,0 +1,222 @@
description: Contains an ongoing conversation with the model.

<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="google.generativeai.ChatSession" />
<meta itemprop="path" content="Stable" />
<meta itemprop="property" content="__init__"/>
<meta itemprop="property" content="rewind"/>
<meta itemprop="property" content="send_message"/>
<meta itemprop="property" content="send_message_async"/>
</div>

# google.generativeai.ChatSession

<!-- Insert buttons and diff -->

<table class="tfo-notebook-buttons tfo-api nocontent" align="left">
<td>
<a target="_blank" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L481-L875">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub
</a>
</td>
</table>



Contains an ongoing conversation with the model.

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>google.generativeai.ChatSession(
model: GenerativeModel,
history: (Iterable[content_types.StrictContentType] | None) = None,
enable_automatic_function_calling: bool = False
)
</code></pre>



<!-- Placeholder for "Used in" -->

```python
>>> model = genai.GenerativeModel('models/gemini-pro')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
>>> response = chat.send_message("Hello again")
>>> print(response.text)
>>> response = chat.send_message(...
```

This `ChatSession` object collects the messages sent and received in its
<a href="../../google/generativeai/ChatSession.md#history"><code>ChatSession.history</code></a> attribute.

<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Arguments</h2></th></tr>

<tr>
<td>
`model`<a id="model"></a>
</td>
<td>
The model to use in the chat.
</td>
</tr><tr>
<td>
`history`<a id="history"></a>
</td>
<td>
A chat history to initialize the object with.
</td>
</tr>
</table>





<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>

<tr>
<td>
`history`<a id="history"></a>
</td>
<td>
The chat history.
</td>
</tr><tr>
<td>
`last`<a id="last"></a>
</td>
<td>
The last received `genai.GenerateContentResponse`.
</td>
</tr>
</table>



## Methods

<h3 id="rewind"><code>rewind</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L785-L794">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>rewind() -> tuple[protos.Content, protos.Content]
</code></pre>

Removes the last request/response pair from the chat history.
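The bookkeeping can be illustrated with a plain list standing in for a live session. This is a hypothetical sketch only: on a real `ChatSession`, `rewind()` operates on `protos.Content` objects, not strings, and returns the removed pair as a tuple.

```python
# Hypothetical sketch of rewind()'s bookkeeping, using plain strings
# in place of protos.Content objects. History alternates user/model
# turns; rewind() pops the final request/response pair.
history = ["user: Hello", "model: Hi!", "user: Bye", "model: Goodbye!"]

def rewind(history):
    # Remove and return the last request/response pair, mirroring
    # the (request, response) tuple the real method returns.
    response = history.pop()
    request = history.pop()
    return request, response

pair = rewind(history)
print(pair)          # ('user: Bye', 'model: Goodbye!')
print(len(history))  # 2
```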


<h3 id="send_message"><code>send_message</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L512-L604">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>send_message(
content: content_types.ContentType,
*,
generation_config: generation_types.GenerationConfigType = None,
safety_settings: safety_types.SafetySettingOptions = None,
stream: bool = False,
tools: (content_types.FunctionLibraryType | None) = None,
tool_config: (content_types.ToolConfigType | None) = None,
request_options: (helper_types.RequestOptionsType | None) = None
) -> generation_types.GenerateContentResponse
</code></pre>

Sends the conversation history with the added message and returns the model's response.

Appends the request and response to the conversation history.

```python
>>> model = genai.GenerativeModel('models/gemini-pro')
>>> chat = model.start_chat()
>>> response = chat.send_message("Hello")
>>> print(response.text)
"Hello! How can I assist you today?"
>>> len(chat.history)
2
```

Call it with `stream=True` to receive response chunks as they are generated:

```python
>>> chat = model.start_chat()
>>> response = chat.send_message("Explain quantum physics", stream=True)
>>> for chunk in response:
... print(chunk.text, end='')
```

Once iteration over chunks is complete, the `response` and `ChatSession` are in states identical to the
`stream=False` case. Some properties are not available until iteration is complete.
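The aggregation semantics can be sketched without a live API call by substituting a hypothetical generator for the streamed response (each yielded string plays the role of `chunk.text`):

```python
# Hypothetical stand-in for a streaming response. After iterating,
# the full response text is the concatenation of every chunk's text,
# matching what the non-streaming call would have returned at once.
def fake_stream():
    yield "Quantum physics studies "
    yield "matter and energy "
    yield "at the smallest scales."

pieces = []
for chunk_text in fake_stream():
    pieces.append(chunk_text)  # consume chunks as they arrive

full_text = "".join(pieces)
print(full_text)
```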

Like <a href="../../google/generativeai/GenerativeModel.md#generate_content"><code>GenerativeModel.generate_content</code></a>, this method lets you override the model's `generation_config` and
`safety_settings`.

<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Arguments</th></tr>

<tr>
<td>
`content`
</td>
<td>
The message contents.
</td>
</tr><tr>
<td>
`generation_config`
</td>
<td>
Overrides for the model's generation config.
</td>
</tr><tr>
<td>
`safety_settings`
</td>
<td>
Overrides for the model's safety settings.
</td>
</tr><tr>
<td>
`stream`
</td>
<td>
If True, yield response chunks as they are generated.
</td>
</tr>
</table>



<h3 id="send_message_async"><code>send_message_async</code></h3>

<a target="_blank" class="external" href="https://github.com/google/generative-ai-python/blob/master/google/generativeai/generative_models.py#L671-L733">View source</a>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>send_message_async(
content,
*,
generation_config=None,
safety_settings=None,
stream=False,
tools=None,
tool_config=None,
request_options=None
)
</code></pre>

The async version of <a href="../../google/generativeai/ChatSession.md#send_message"><code>ChatSession.send_message</code></a>.
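The awaitable calling pattern can be sketched with a stub coroutine in place of a live model (the stub below is hypothetical and exists only so the example runs without an API key; with a configured model you would `await chat.send_message_async(...)` the same way):

```python
import asyncio

# Hypothetical stub standing in for ChatSession.send_message_async,
# used only to illustrate the awaitable calling pattern.
async def send_message_async(content):
    await asyncio.sleep(0)  # stands in for the network round trip
    return f"echo: {content}"

async def main():
    # Turns on a single chat should be awaited in order, since each
    # call appends to the shared history before the next is sent.
    first = await send_message_async("Hello")
    second = await send_message_async("Hello again")
    return [first, second]

replies = asyncio.run(main())
print(replies)
```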



