Adding count token impl #8950

Merged — 1 commit merged into vaihi-exp on Apr 18, 2025

Conversation

@gsiddh commented Apr 18, 2025

Hey there! So you want to contribute to a Firebase SDK?
Before you file this pull request, please read these guidelines:

Discussion

  • Read the contribution guidelines (CONTRIBUTING.md).
  • If this has been discussed in an issue, make sure to link to the issue here.
    If not, go file an issue about this before creating a pull request to discuss.

Testing

  • Make sure all existing tests in the repository pass after your change.
  • If you fixed a bug or added a feature, add a new test to cover your code.

API Changes

  • At this time we cannot accept changes that affect the public API. If you'd like to help
    us make Firebase APIs better, please propose your change in an issue so that we
    can discuss it together.

@gsiddh gsiddh requested a review from a team as a code owner April 18, 2025 22:11
changeset-bot bot commented Apr 18, 2025

⚠️ No Changeset found

Latest commit: 897c165

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


Vertex AI Mock Responses Check ⚠️

A newer major version of the mock responses for Vertex AI unit tests is available. update_vertexai_responses.sh should be updated to clone the latest version of the responses: v10.0
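The suggested update amounts to bumping a pinned version string in the script. A minimal sketch of that kind of edit, assuming the script pins the version in a variable (the variable name RESPONSES_VERSION and the file contents here are assumptions, not taken from the repository — the sketch operates on a stand-in temp file):

```shell
# Create a stand-in for update_vertexai_responses.sh with a pinned version.
script="$(mktemp)"
printf 'RESPONSES_VERSION="v9.0"\n' > "$script"

# Bump the pinned mock-responses version to v10.0 in place.
sed -i 's/^RESPONSES_VERSION=.*/RESPONSES_VERSION="v10.0"/' "$script"

# Show the result.
grep 'RESPONSES_VERSION' "$script"
```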

@google-oss-bot (Contributor) commented Apr 18, 2025

Size Report 1

Affected Products

  • @firebase/vertexai

    Type      Base (02c4ba7)   Merge (a309c3e)   Diff
    browser   37.7 kB          38.3 kB           +544 B (+1.4%)
    main      38.7 kB          39.3 kB           +544 B (+1.4%)
    module    37.7 kB          38.3 kB           +544 B (+1.4%)
  • firebase

    Type                   Base (02c4ba7)   Merge (a309c3e)   Diff
    firebase-vertexai.js   30.4 kB          30.8 kB           +349 B (+1.1%)

Test Logs

  1. https://storage.googleapis.com/firebase-sdk-metric-reports/gMOi7iv69X.html

@gsiddh gsiddh force-pushed the vaihi-count-tokens branch from 897c165 to 31aea9b Compare April 18, 2025 22:22
@google-oss-bot (Contributor) commented Apr 18, 2025

Size Analysis Report 1

Affected Products

  • @firebase/vertexai

    • GenerativeModel

      Size

      Type                 Base (02c4ba7)   Merge (a309c3e)   Diff
      size                 19.4 kB          19.5 kB           +132 B (+0.7%)
      size-with-ext-deps   38.3 kB          38.5 kB           +132 B (+0.3%)

      Dependency

      Type        Base (02c4ba7)    Merge (a309c3e)   Diff
      functions   25 dependencies   26 dependencies   + countTokensOnCloud

      Base dependencies: addHelpers, aggregateResponses, assignRoleToPartsAndValidateSendMessageRequest, constructRequest, countTokens, createEnhancedContentResponse, formatBlockErrorMessage, formatGenerateContentInput, formatNewContent, formatSystemInstruction, generateContent, generateContentOnCloud, generateContentStream, generateResponseSequence, getClientHeaders, getFunctionCalls, getHeaders, getResponsePromise, getResponseStream, getText, hadBadFinishReason, makeRequest, processStream, registerVertex, validateChatHistory

      Merge adds: countTokensOnCloud

    • ImagenModel

      Size

      Type                 Base (02c4ba7)   Merge (a309c3e)   Diff
      size                 21.1 kB          21.2 kB           +132 B (+0.6%)
      size-with-ext-deps   40.1 kB          40.2 kB           +132 B (+0.3%)

      Dependency

      Type        Base (02c4ba7)    Merge (a309c3e)   Diff
      functions   27 dependencies   28 dependencies   + countTokensOnCloud

      Base dependencies: addHelpers, aggregateResponses, assignRoleToPartsAndValidateSendMessageRequest, constructRequest, countTokens, createEnhancedContentResponse, createPredictRequestBody, formatBlockErrorMessage, formatGenerateContentInput, formatNewContent, formatSystemInstruction, generateContent, generateContentOnCloud, generateContentStream, generateResponseSequence, getClientHeaders, getFunctionCalls, getHeaders, getResponsePromise, getResponseStream, getText, hadBadFinishReason, handlePredictResponse, makeRequest, processStream, registerVertex, validateChatHistory

      Merge adds: countTokensOnCloud

    • VertexAIModel

      Size

      Type                 Base (02c4ba7)   Merge (a309c3e)   Diff
      size                 19.4 kB          19.5 kB           +132 B (+0.7%)
      size-with-ext-deps   38.3 kB          38.5 kB           +132 B (+0.3%)

      Dependency

      Type        Base (02c4ba7)    Merge (a309c3e)   Diff
      functions   25 dependencies   26 dependencies   + countTokensOnCloud

      Base dependencies: addHelpers, aggregateResponses, assignRoleToPartsAndValidateSendMessageRequest, constructRequest, countTokens, createEnhancedContentResponse, formatBlockErrorMessage, formatGenerateContentInput, formatNewContent, formatSystemInstruction, generateContent, generateContentOnCloud, generateContentStream, generateResponseSequence, getClientHeaders, getFunctionCalls, getHeaders, getResponsePromise, getResponseStream, getText, hadBadFinishReason, makeRequest, processStream, registerVertex, validateChatHistory

      Merge adds: countTokensOnCloud

    • getGenerativeModel

      Size

      Type                 Base (02c4ba7)   Merge (a309c3e)   Diff
      size                 21.2 kB          21.6 kB           +335 B (+1.6%)
      size-with-ext-deps   40.2 kB          40.6 kB           +335 B (+0.8%)

      Dependency

      Type        Base (02c4ba7)    Merge (a309c3e)   Diff
      functions   26 dependencies   27 dependencies   + countTokensOnCloud

      Base dependencies: addHelpers, aggregateResponses, assignRoleToPartsAndValidateSendMessageRequest, constructRequest, countTokens, createEnhancedContentResponse, formatBlockErrorMessage, formatGenerateContentInput, formatNewContent, formatSystemInstruction, generateContent, generateContentOnCloud, generateContentStream, generateResponseSequence, getClientHeaders, getFunctionCalls, getGenerativeModel, getHeaders, getResponsePromise, getResponseStream, getText, hadBadFinishReason, makeRequest, processStream, registerVertex, validateChatHistory

      Merge adds: countTokensOnCloud

    • getImagenModel

      Size

      Type                 Base (02c4ba7)   Merge (a309c3e)   Diff
      size                 21.3 kB          21.4 kB           +132 B (+0.6%)
      size-with-ext-deps   40.3 kB          40.4 kB           +132 B (+0.3%)

      Dependency

      Type        Base (02c4ba7)    Merge (a309c3e)   Diff
      functions   28 dependencies   29 dependencies   + countTokensOnCloud

      Base dependencies: addHelpers, aggregateResponses, assignRoleToPartsAndValidateSendMessageRequest, constructRequest, countTokens, createEnhancedContentResponse, createPredictRequestBody, formatBlockErrorMessage, formatGenerateContentInput, formatNewContent, formatSystemInstruction, generateContent, generateContentOnCloud, generateContentStream, generateResponseSequence, getClientHeaders, getFunctionCalls, getHeaders, getImagenModel, getResponsePromise, getResponseStream, getText, hadBadFinishReason, handlePredictResponse, makeRequest, processStream, registerVertex, validateChatHistory

      Merge adds: countTokensOnCloud

Test Logs

  1. https://storage.googleapis.com/firebase-sdk-metric-reports/ON4QX309UB.html
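The dependency diffs above show each entry point picking up one new helper, countTokensOnCloud, which countTokens delegates to. A minimal sketch of what such a helper might look like — the request/response shapes mirror the public countTokens API, but the endpoint URL, the injected fetcher, and the function signature are assumptions for illustration (the real SDK routes through its constructRequest/makeRequest helpers):

```typescript
// Hedged sketch of a countTokensOnCloud-style helper; not the SDK's
// actual implementation.
interface CountTokensRequest {
  contents: Array<{ role: string; parts: Array<{ text: string }> }>;
}
interface CountTokensResponse {
  totalTokens: number;
  totalBillableCharacters?: number;
}
// Injected network dependency so the sketch is self-contained and testable.
type Fetcher = (url: string, body: string) => Promise<{ json(): Promise<unknown> }>;

async function countTokensOnCloud(
  model: string,
  request: CountTokensRequest,
  fetcher: Fetcher
): Promise<CountTokensResponse> {
  // Placeholder endpoint; the real SDK builds the URL via its request helpers.
  const url = `https://example.invalid/v1/${model}:countTokens`;
  const res = await fetcher(url, JSON.stringify(request));
  return (await res.json()) as CountTokensResponse;
}

// Usage with a stub standing in for the backend response:
const stub: Fetcher = async () => ({
  json: async () => ({ totalTokens: 6, totalBillableCharacters: 16 })
});

countTokensOnCloud(
  "models/gemini-pro",
  { contents: [{ role: "user", parts: [{ text: "hello world" }] }] },
  stub
).then(r => console.log(r.totalTokens));
```

Injecting the fetcher keeps the token-counting logic unit-testable against canned responses, which is the pattern the Vertex AI mock-responses check above exists to support.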

@erikeldridge erikeldridge self-requested a review April 18, 2025 22:57
@gsiddh gsiddh merged commit e069751 into vaihi-exp Apr 18, 2025
34 of 48 checks passed
@gsiddh gsiddh deleted the vaihi-count-tokens branch April 18, 2025 22:59
gsiddh added two commits that referenced this pull request Apr 22, 2025
gsiddh pushed a commit that referenced this pull request Apr 23, 2025
Fix languageCode parameter in action_code_url (#8912)

* Fix languageCode parameter in action_code_url

* Add changeset

Vaihi add langmodel types. (#8927)

* Adding LanguageModel types. These are based off https://github.com/webmachinelearning/prompt-api?tab=readme-ov-file#full-api-surface-in-web-idl

* Adding LanguageModel types.

* Remove bunch of exports

* yarn formatted

* after lint

Define HybridParams (#8935)

Co-authored-by: Erik Eldridge <[email protected]>

Adding smoke test for new hybrid params (#8937)

* Adding smoke test for new hybrid params

* Use the existing name of the model params input

---------

Co-authored-by: Erik Eldridge <[email protected]>

Moving to in-cloud naming (#8938)

Co-authored-by: Erik Eldridge <[email protected]>

Moving to string type for the inference mode (#8941)

Define ChromeAdapter class (#8942)

Co-authored-by: Erik Eldridge <[email protected]>

VinF Hybrid Inference: Implement ChromeAdapter (rebased) (#8943)

Adding count token impl (#8950)

VinF Hybrid Inference #4: ChromeAdapter in stream methods (rebased) (#8949)

Define values for Availability enum (#8951)

VinF Hybrid Inference: narrow Chrome input type (#8953)

Add image inference support (#8954)

* Adding image based input for inference

* adding image as input to create language model object

disable count tokens api for on-device inference (#8962)

VinF Hybrid Inference: throw if only_on_device and model is unavailable (#8965)
gsiddh pushed six further commits that referenced this pull request Apr 23, 2025, each with the same commit message as above.
Labels: none yet
Projects: none yet
Development: no linked issues

3 participants