feat(ai): Add support for AbortSignal #8890

Open · wants to merge 9 commits into base: main
Changes from 7 commits
9 changes: 0 additions & 9 deletions .changeset/dirty-crews-cross.md

This file was deleted.

6 changes: 6 additions & 0 deletions .changeset/long-keys-watch.md
@@ -0,0 +1,6 @@
---
'firebase': minor
'@firebase/ai': minor
---

Add support for `AbortSignal`, allowing requests to be aborted.
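
For reviewers, a minimal sketch of what this changeset enables (not part of the diff; the Firebase config and model name below are placeholders):

```typescript
import { initializeApp } from 'firebase/app';
import { getAI, getGenerativeModel } from 'firebase/ai';

// Placeholder config; substitute your own Firebase project settings.
const app = initializeApp({ apiKey: '...', projectId: '...' });
const model = getGenerativeModel(getAI(app), { model: 'gemini-2.0-flash' });

async function run(): Promise<void> {
  const controller = new AbortController();
  // Give up on the request after 5 seconds.
  const timer = setTimeout(() => controller.abort(), 5000);
  try {
    const { response } = await model.generateContent(
      'Summarize this changeset in one sentence.',
      { signal: controller.signal }
    );
    console.log(response.text());
  } catch (e) {
    // If abort() wins the race, the call rejects with an AbortError.
    console.warn('Request did not complete:', e);
  } finally {
    clearTimeout(timer);
  }
}

run();
```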
19 changes: 12 additions & 7 deletions common/api-review/ai.api.md
@@ -120,8 +120,8 @@ export class ChatSession {
params?: StartChatParams | undefined;
// (undocumented)
requestOptions?: RequestOptions | undefined;
sendMessage(request: string | Array<string | Part>): Promise<GenerateContentResult>;
sendMessageStream(request: string | Array<string | Part>): Promise<GenerateContentStreamResult>;
sendMessage(request: string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentResult>;
sendMessageStream(request: string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentStreamResult>;
}

// @public
@@ -396,9 +396,9 @@ export interface GenerativeContentBlob {
// @public
export class GenerativeModel extends AIModel {
constructor(ai: AI, modelParams: ModelParams, requestOptions?: RequestOptions);
countTokens(request: CountTokensRequest | string | Array<string | Part>): Promise<CountTokensResponse>;
generateContent(request: GenerateContentRequest | string | Array<string | Part>): Promise<GenerateContentResult>;
generateContentStream(request: GenerateContentRequest | string | Array<string | Part>): Promise<GenerateContentStreamResult>;
countTokens(request: CountTokensRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<CountTokensResponse>;
generateContent(request: GenerateContentRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentResult>;
generateContentStream(request: GenerateContentRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentStreamResult>;
// (undocumented)
generationConfig: GenerationConfig;
// (undocumented)
@@ -597,9 +597,9 @@ export interface ImagenInlineImage {
// @beta
export class ImagenModel extends AIModel {
constructor(ai: AI, modelParams: ImagenModelParams, requestOptions?: RequestOptions | undefined);
generateImages(prompt: string): Promise<ImagenGenerationResponse<ImagenInlineImage>>;
generateImages(prompt: string, singleRequestOptions?: SingleRequestOptions): Promise<ImagenGenerationResponse<ImagenInlineImage>>;
// @internal
generateImagesGCS(prompt: string, gcsURI: string): Promise<ImagenGenerationResponse<ImagenGCSImage>>;
generateImagesGCS(prompt: string, gcsURI: string, singleRequestOptions?: SingleRequestOptions): Promise<ImagenGenerationResponse<ImagenGCSImage>>;
generationConfig?: ImagenGenerationConfig;
// (undocumented)
requestOptions?: RequestOptions | undefined;
@@ -857,6 +857,11 @@ export interface Segment {
startIndex: number;
}

// @public
export interface SingleRequestOptions extends RequestOptions {
signal?: AbortSignal;
}

// @public
export interface StartChatParams extends BaseParams {
// (undocumented)
25 changes: 22 additions & 3 deletions common/api-review/vertexai.api.md
@@ -120,8 +120,8 @@ export class ChatSession {
params?: StartChatParams | undefined;
// (undocumented)
requestOptions?: RequestOptions | undefined;
sendMessage(request: string | Array<string | Part>): Promise<GenerateContentResult>;
sendMessageStream(request: string | Array<string | Part>): Promise<GenerateContentStreamResult>;
sendMessage(request: string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentResult>;
sendMessageStream(request: string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentStreamResult>;
}

// @public
@@ -394,11 +394,19 @@ export interface GenerativeContentBlob {
}

// @public
export class GenerativeModel extends AIModel {
constructor(ai: AI, modelParams: ModelParams, requestOptions?: RequestOptions);
countTokens(request: CountTokensRequest | string | Array<string | Part>): Promise<CountTokensResponse>;
generateContent(request: GenerateContentRequest | string | Array<string | Part>): Promise<GenerateContentResult>;
generateContentStream(request: GenerateContentRequest | string | Array<string | Part>): Promise<GenerateContentStreamResult>;
countTokens(request: CountTokensRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<CountTokensResponse>;
generateContent(request: GenerateContentRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentResult>;
generateContentStream(request: GenerateContentRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentStreamResult>;
// (undocumented)
generationConfig: GenerationConfig;
// (undocumented)
@@ -595,11 +603,17 @@ export interface ImagenInlineImage {
}

// @beta
export class ImagenModel extends AIModel {
constructor(ai: AI, modelParams: ImagenModelParams, requestOptions?: RequestOptions | undefined);
generateImages(prompt: string): Promise<ImagenGenerationResponse<ImagenInlineImage>>;
generateImages(prompt: string, singleRequestOptions?: SingleRequestOptions): Promise<ImagenGenerationResponse<ImagenInlineImage>>;
// @internal
generateImagesGCS(prompt: string, gcsURI: string): Promise<ImagenGenerationResponse<ImagenGCSImage>>;
generateImagesGCS(prompt: string, gcsURI: string, singleRequestOptions?: SingleRequestOptions): Promise<ImagenGenerationResponse<ImagenGCSImage>>;
generationConfig?: ImagenGenerationConfig;
// (undocumented)
requestOptions?: RequestOptions | undefined;
@@ -857,6 +871,11 @@ export interface Segment {
startIndex: number;
}

// @public
export interface SingleRequestOptions extends RequestOptions {
signal?: AbortSignal;
}

// @public
export interface StartChatParams extends BaseParams {
// (undocumented)
2 changes: 2 additions & 0 deletions docs-devsite/_toc.yaml
@@ -132,6 +132,8 @@ toc:
path: /docs/reference/js/ai.schemashared.md
- title: Segment
path: /docs/reference/js/ai.segment.md
- title: SingleRequestOptions
path: /docs/reference/js/ai.singlerequestoptions.md
- title: StartChatParams
path: /docs/reference/js/ai.startchatparams.md
- title: StringSchema
10 changes: 6 additions & 4 deletions docs-devsite/ai.chatsession.md
@@ -37,8 +37,8 @@ export declare class ChatSession
| Method | Modifiers | Description |
| --- | --- | --- |
| [getHistory()](./ai.chatsession.md#chatsessiongethistory) | | Gets the chat history so far. Blocked prompts are not added to history. Neither blocked candidates nor the prompts that generated them are added to history. |
| [sendMessage(request)](./ai.chatsession.md#chatsessionsendmessage) | | Sends a chat message and receives a non-streaming [GenerateContentResult](./ai.generatecontentresult.md#generatecontentresult_interface) |
| [sendMessageStream(request)](./ai.chatsession.md#chatsessionsendmessagestream) | | Sends a chat message and receives the response as a [GenerateContentStreamResult](./ai.generatecontentstreamresult.md#generatecontentstreamresult_interface) containing an iterable stream and a response promise. |
| [sendMessage(request, singleRequestOptions)](./ai.chatsession.md#chatsessionsendmessage) | | Sends a chat message and receives a non-streaming [GenerateContentResult](./ai.generatecontentresult.md#generatecontentresult_interface) |
| [sendMessageStream(request, singleRequestOptions)](./ai.chatsession.md#chatsessionsendmessagestream) | | Sends a chat message and receives the response as a [GenerateContentStreamResult](./ai.generatecontentstreamresult.md#generatecontentstreamresult_interface) containing an iterable stream and a response promise. |

## ChatSession.(constructor)

@@ -103,14 +103,15 @@ Sends a chat message and receives a non-streaming [GenerateContentResult](./ai.g
<b>Signature:</b>

```typescript
sendMessage(request: string | Array<string | Part>): Promise<GenerateContentResult>;
sendMessage(request: string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentResult>;
```

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| request | string \| Array&lt;string \| [Part](./ai.md#part)<!-- -->&gt; | |
| singleRequestOptions | [SingleRequestOptions](./ai.singlerequestoptions.md#singlerequestoptions_interface) | |

<b>Returns:</b>

@@ -123,14 +124,15 @@ Sends a chat message and receives the response as a [GenerateContentStreamResult
<b>Signature:</b>

```typescript
sendMessageStream(request: string | Array<string | Part>): Promise<GenerateContentStreamResult>;
sendMessageStream(request: string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentStreamResult>;
```

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| request | string \| Array&lt;string \| [Part](./ai.md#part)<!-- -->&gt; | |
| singleRequestOptions | [SingleRequestOptions](./ai.singlerequestoptions.md#singlerequestoptions_interface) | |

<b>Returns:</b>

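
As an illustrative, hedged sketch (not generated reference content), the updated `sendMessageStream` signature could be used to stop a streaming chat reply; the `app` instance and model name are assumptions:

```typescript
import { FirebaseApp } from 'firebase/app';
import { getAI, getGenerativeModel } from 'firebase/ai';

declare const app: FirebaseApp; // assumed: initialized elsewhere

const model = getGenerativeModel(getAI(app), { model: 'gemini-2.0-flash' });
const chat = model.startChat();
const controller = new AbortController();

async function streamReply(): Promise<void> {
  try {
    const { stream } = await chat.sendMessageStream('Tell me a long story.', {
      signal: controller.signal
    });
    for await (const chunk of stream) {
      console.log(chunk.text());
    }
  } catch (e) {
    // Once cancellation succeeds, the pending promises reject with an AbortError.
    console.warn('Chat stream stopped:', e);
  }
}

streamReply();
// Cancel when the user clicks "Stop", navigates away, etc.
controller.abort();
```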
15 changes: 9 additions & 6 deletions docs-devsite/ai.generativemodel.md
@@ -40,9 +40,9 @@ export declare class GenerativeModel extends AIModel
| Method | Modifiers | Description |
| --- | --- | --- |
| [countTokens(request)](./ai.generativemodel.md#generativemodelcounttokens) | | Counts the tokens in the provided request. |
| [generateContent(request)](./ai.generativemodel.md#generativemodelgeneratecontent) | | Makes a single non-streaming call to the model and returns an object containing a single [GenerateContentResponse](./ai.generatecontentresponse.md#generatecontentresponse_interface)<!-- -->. |
| [generateContentStream(request)](./ai.generativemodel.md#generativemodelgeneratecontentstream) | | Makes a single streaming call to the model and returns an object containing an iterable stream that iterates over all chunks in the streaming response as well as a promise that returns the final aggregated response. |
| [countTokens(request, singleRequestOptions)](./ai.generativemodel.md#generativemodelcounttokens) | | Counts the tokens in the provided request. |
| [generateContent(request, singleRequestOptions)](./ai.generativemodel.md#generativemodelgeneratecontent) | | Makes a single non-streaming call to the model and returns an object containing a single [GenerateContentResponse](./ai.generatecontentresponse.md#generatecontentresponse_interface)<!-- -->. |
| [generateContentStream(request, singleRequestOptions)](./ai.generativemodel.md#generativemodelgeneratecontentstream) | | Makes a single streaming call to the model and returns an object containing an iterable stream that iterates over all chunks in the streaming response as well as a promise that returns the final aggregated response. |
| [startChat(startChatParams)](./ai.generativemodel.md#generativemodelstartchat) | | Gets a new [ChatSession](./ai.chatsession.md#chatsession_class) instance which can be used for multi-turn chats. |
## GenerativeModel.(constructor)
@@ -118,14 +118,15 @@ Counts the tokens in the provided request.
<b>Signature:</b>
```typescript
countTokens(request: CountTokensRequest | string | Array<string | Part>): Promise<CountTokensResponse>;
countTokens(request: CountTokensRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<CountTokensResponse>;
```
#### Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| request | [CountTokensRequest](./ai.counttokensrequest.md#counttokensrequest_interface) \| string \| Array&lt;string \| [Part](./ai.md#part)<!-- -->&gt; | |
| singleRequestOptions | [SingleRequestOptions](./ai.singlerequestoptions.md#singlerequestoptions_interface) | |
<b>Returns:</b>
@@ -138,14 +139,15 @@ Makes a single non-streaming call to the model and returns an object containing
<b>Signature:</b>
```typescript
generateContent(request: GenerateContentRequest | string | Array<string | Part>): Promise<GenerateContentResult>;
generateContent(request: GenerateContentRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentResult>;
```
#### Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| request | [GenerateContentRequest](./ai.generatecontentrequest.md#generatecontentrequest_interface) \| string \| Array&lt;string \| [Part](./ai.md#part)<!-- -->&gt; | |
| singleRequestOptions | [SingleRequestOptions](./ai.singlerequestoptions.md#singlerequestoptions_interface) | |
<b>Returns:</b>
@@ -158,14 +160,15 @@ Makes a single streaming call to the model and returns an object containing an i
<b>Signature:</b>
```typescript
generateContentStream(request: GenerateContentRequest | string | Array<string | Part>): Promise<GenerateContentStreamResult>;
generateContentStream(request: GenerateContentRequest | string | Array<string | Part>, singleRequestOptions?: SingleRequestOptions): Promise<GenerateContentStreamResult>;
```
#### Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| request | [GenerateContentRequest](./ai.generatecontentrequest.md#generatecontentrequest_interface) \| string \| Array&lt;string \| [Part](./ai.md#part)<!-- -->&gt; | |
| singleRequestOptions | [SingleRequestOptions](./ai.singlerequestoptions.md#singlerequestoptions_interface) | |
<b>Returns:</b>
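
Because `SingleRequestOptions` extends `RequestOptions`, per-call values override the defaults configured on the model. A hedged sketch of that interplay (not generated reference content; the `ai` instance, model name, and timeout values are assumptions):

```typescript
import { AI, getGenerativeModel } from 'firebase/ai';

declare const ai: AI; // assumed: obtained via getAI(app)

// Model-level defaults apply to every call unless overridden per request.
const model = getGenerativeModel(
  ai,
  { model: 'gemini-2.0-flash' },
  { timeout: 60_000 }
);

const controller = new AbortController();

// This call uses a tighter timeout and can also be aborted explicitly.
model
  .countTokens('How many tokens is this sentence?', {
    timeout: 10_000,
    signal: controller.signal
  })
  .then(res => console.log('total tokens:', res.totalTokens))
  .catch(err => console.warn('countTokens failed or was aborted:', err));
```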
5 changes: 3 additions & 2 deletions docs-devsite/ai.imagenmodel.md
@@ -42,7 +42,7 @@ export declare class ImagenModel extends AIModel
| Method | Modifiers | Description |
| --- | --- | --- |
| [generateImages(prompt)](./ai.imagenmodel.md#imagenmodelgenerateimages) | | <b><i>(Public Preview)</i></b> Generates images using the Imagen model and returns them as base64-encoded strings. |
| [generateImages(prompt, singleRequestOptions)](./ai.imagenmodel.md#imagenmodelgenerateimages) | | <b><i>(Public Preview)</i></b> Generates images using the Imagen model and returns them as base64-encoded strings. |
## ImagenModel.(constructor)
@@ -118,14 +118,15 @@ If the prompt was not blocked, but one or more of the generated images were filt
<b>Signature:</b>
```typescript
generateImages(prompt: string): Promise<ImagenGenerationResponse<ImagenInlineImage>>;
generateImages(prompt: string, singleRequestOptions?: SingleRequestOptions): Promise<ImagenGenerationResponse<ImagenInlineImage>>;
```
#### Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| prompt | string | A text prompt describing the image(s) to generate. |
| singleRequestOptions | [SingleRequestOptions](./ai.singlerequestoptions.md#singlerequestoptions_interface) | |
<b>Returns:</b>
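
A hedged sketch of cancelling an Imagen request from a user action (not generated reference content; the `ai` instance and model name are assumptions):

```typescript
import { AI, getImagenModel } from 'firebase/ai';

declare const ai: AI; // assumed: obtained via getAI(app)

const imagenModel = getImagenModel(ai, { model: 'imagen-3.0-generate-002' });
const controller = new AbortController();

async function generate(): Promise<void> {
  try {
    const { images } = await imagenModel.generateImages(
      'A watercolor painting of a lighthouse at dawn.',
      { signal: controller.signal }
    );
    console.log(`Received ${images.length} base64-encoded image(s).`);
  } catch (e) {
    // Rejects with an AbortError if controller.abort() is called first.
    console.warn('Image generation cancelled or failed:', e);
  }
}

generate();
// For example, wired to a "Cancel" button:
// cancelButton.onclick = () => controller.abort();
```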
1 change: 1 addition & 0 deletions docs-devsite/ai.md
@@ -117,6 +117,7 @@ The Firebase AI Web SDK.
| [SchemaRequest](./ai.schemarequest.md#schemarequest_interface) | Final format for [Schema](./ai.schema.md#schema_class) params passed to backend requests. |
| [SchemaShared](./ai.schemashared.md#schemashared_interface) | Basic [Schema](./ai.schema.md#schema_class) properties shared across several Schema-related types. |
| [Segment](./ai.segment.md#segment_interface) | |
| [SingleRequestOptions](./ai.singlerequestoptions.md#singlerequestoptions_interface) | Options that can be provided per-request. Extends the base [RequestOptions](./ai.requestoptions.md#requestoptions_interface) (like <code>timeout</code> and <code>baseUrl</code>) with request-specific controls like cancellation via <code>AbortSignal</code>.<!-- -->Options specified here will override any default [RequestOptions](./ai.requestoptions.md#requestoptions_interface) configured on a model (e.g. [GenerativeModel](./ai.generativemodel.md#generativemodel_class)<!-- -->). |
| [StartChatParams](./ai.startchatparams.md#startchatparams_interface) | Params for [GenerativeModel.startChat()](./ai.generativemodel.md#generativemodelstartchat)<!-- -->. |
| [TextPart](./ai.textpart.md#textpart_interface) | Content part interface if the part represents a text string. |
| [ToolConfig](./ai.toolconfig.md#toolconfig_interface) | Tool config. This config is shared for all tools provided in the request. |
61 changes: 61 additions & 0 deletions docs-devsite/ai.singlerequestoptions.md
@@ -0,0 +1,61 @@
Project: /docs/reference/js/_project.yaml
Book: /docs/reference/_book.yaml
page_type: reference

{% comment %}
DO NOT EDIT THIS FILE!
This is generated by the JS SDK team, and any local changes will be
overwritten. Changes should be made in the source code at
https://github.com/firebase/firebase-js-sdk
{% endcomment %}

# SingleRequestOptions interface
Options that can be provided per-request. Extends the base [RequestOptions](./ai.requestoptions.md#requestoptions_interface) (like `timeout` and `baseUrl`<!-- -->) with request-specific controls like cancellation via `AbortSignal`<!-- -->.

Options specified here will override any default [RequestOptions](./ai.requestoptions.md#requestoptions_interface) configured on a model (e.g. [GenerativeModel](./ai.generativemodel.md#generativemodel_class)<!-- -->).

<b>Signature:</b>

```typescript
export interface SingleRequestOptions extends RequestOptions
```
<b>Extends:</b> [RequestOptions](./ai.requestoptions.md#requestoptions_interface)

## Properties

| Property | Type | Description |
| --- | --- | --- |
| [signal](./ai.singlerequestoptions.md#singlerequestoptionssignal) | AbortSignal | An <code>AbortSignal</code> instance that allows cancelling ongoing requests (like <code>generateContent</code> or <code>generateImages</code>).<!-- -->If provided, calling <code>abort()</code> on the corresponding <code>AbortController</code> will attempt to cancel the underlying HTTP request. An <code>AbortError</code> will be thrown if cancellation is successful.<!-- -->Note that this will not cancel the request in the backend, so billing will still be applied despite cancellation. |

## SingleRequestOptions.signal

An `AbortSignal` instance that allows cancelling ongoing requests (like `generateContent` or `generateImages`<!-- -->).

If provided, calling `abort()` on the corresponding `AbortController` will attempt to cancel the underlying HTTP request. An `AbortError` will be thrown if cancellation is successful.

Note that this will not cancel the request in the backend, so billing will still be applied despite cancellation.

<b>Signature:</b>

```typescript
signal?: AbortSignal;
```

### Example


```javascript
const controller = new AbortController();
const model = getGenerativeModel(ai, {
  // ... model params
});
model.generateContent(
  "Write a story about a magic backpack.",
  { signal: controller.signal }
);

// To cancel the request:
controller.abort();

```
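
As a follow-up sketch (not part of the generated page): on runtimes that provide it, `AbortSignal.timeout()` yields a ready-made signal, and the documented `AbortError` can be handled in the rejection path. The `model` variable is assumed to be the one created in the example above.

```typescript
import { GenerativeModel } from 'firebase/ai';

declare const model: GenerativeModel; // assumed: created as in the example above

async function generateWithDeadline(): Promise<void> {
  // AbortSignal.timeout() (modern browsers, Node.js 17.3+) returns a signal
  // that aborts automatically after the given number of milliseconds.
  try {
    const { response } = await model.generateContent(
      'Write a story about a magic backpack.',
      { signal: AbortSignal.timeout(10_000) }
    );
    console.log(response.text());
  } catch (e) {
    // Per the description above, successful cancellation surfaces as an AbortError.
    console.warn('Request aborted or failed:', e);
  }
}

generateWithDeadline();
```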
