
Moving to in-cloud naming #8938


Merged

merged 1 commit into vaihi-exp from vaihi-api-3 on Apr 16, 2025

Conversation

gsiddh commented Apr 16, 2025

Moving to in-cloud naming.
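
The diff itself isn't visible on this page, but in the context of the neighboring hybrid-inference PRs (Define HybridParams #8935, Moving to string type for the inference mode #8941), the change plausibly renames the cloud-hosted side of the hybrid API surface to "in-cloud". A minimal sketch of what that naming convention looks like; the type and member names below are assumptions for illustration, not the actual diff:

```ts
// Illustrative sketch only: the real diff is not shown on this page.
// Placeholder aliases keep the sketch self-contained; the SDK defines its own
// ModelParams and on-device option types.
type ModelParams = unknown;
type OnDeviceParams = unknown;

export enum InferenceMode {
  PREFER_ON_DEVICE = 'PREFER_ON_DEVICE',
  ONLY_ON_DEVICE = 'ONLY_ON_DEVICE',
  // The cloud-hosted mode spelled with the "in-cloud" naming this PR moves to.
  ONLY_IN_CLOUD = 'ONLY_IN_CLOUD'
}

export interface HybridParams {
  mode: InferenceMode;
  onDeviceParams?: OnDeviceParams;
  // Options for the cloud-hosted model, under the "in-cloud" naming (assumed member name).
  inCloudParams?: ModelParams;
}
```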

gsiddh requested a review from a team as a code owner on April 16, 2025 at 20:13
changeset-bot commented Apr 16, 2025

⚠️ No Changeset found

Latest commit: c5a317e

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

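For reference, a changeset is a small markdown file under `.changeset/` whose front matter lists the affected packages and their semver bumps, followed by a release note. A minimal sketch of what one could look like here; the package name and bump type are only illustrative, since this PR deliberately ships without a changeset:

```md
---
'@firebase/vertexai': patch
---

Rename the hybrid inference options to use in-cloud naming.
```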


Vertex AI Mock Responses Check ⚠️

A newer major version of the mock responses for Vertex AI unit tests is available. update_vertexai_responses.sh should be updated to clone the latest version of the responses: v10.0
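
The exact contents of update_vertexai_responses.sh aren't shown here; the requested change is simply to bump the pinned mock-responses version the script clones to the v10 line, roughly along these lines (the variable name is an assumption):

```sh
# Assumed sketch: pin the mock responses the script clones to the v10 line.
# The actual variable name in update_vertexai_responses.sh may differ.
RESPONSES_VERSION='v10.*'
```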

gsiddh requested a review from a team as a code owner on April 16, 2025 at 20:21
google-oss-bot (Contributor) commented Apr 16, 2025

Size Report 1

Affected Products

  • @firebase/vertexai

    | Type    | Base (badaa74) | Merge (3713e5d) | Diff          |
    | ------- | -------------- | --------------- | ------------- |
    | browser | 35.1 kB        | 35.2 kB         | +92 B (+0.3%) |
    | main    | 36.1 kB        | 36.2 kB         | +92 B (+0.3%) |
    | module  | 35.1 kB        | 35.2 kB         | +92 B (+0.3%) |
  • firebase

    | Type                 | Base (badaa74) | Merge (3713e5d) | Diff          |
    | -------------------- | -------------- | --------------- | ------------- |
    | firebase-vertexai.js | 28.5 kB        | 28.6 kB         | +92 B (+0.3%) |

Test Logs

  1. https://storage.googleapis.com/firebase-sdk-metric-reports/R2uOIiWIQP.html

google-oss-bot (Contributor) commented Apr 16, 2025

Size Analysis Report 1

Affected Products

  • @firebase/vertexai

    • GenerativeModel

      Size

      | Type               | Base (badaa74) | Merge (3713e5d) | Diff          |
      | ------------------ | -------------- | --------------- | ------------- |
      | size               | 19.1 kB        | 19.1 kB         | +57 B (+0.3%) |
      | size-with-ext-deps | 38.0 kB        | 38.1 kB         | +57 B (+0.1%) |
    • ImagenModel

      Size

      | Type               | Base (badaa74) | Merge (3713e5d) | Diff               |
      | ------------------ | -------------- | --------------- | ------------------ |
      | size               | 10.2 kB        | 20.9 kB         | +10.7 kB (+104.3%) |
      | size-with-ext-deps | 28.2 kB        | 39.9 kB         | +11.6 kB (+41.3%)  |

      Dependency

      functions

      Base (badaa74):
      constructRequest
      createPredictRequestBody
      getClientHeaders
      getHeaders
      handlePredictResponse
      makeRequest
      registerVertex

      Merge (3713e5d): 26 dependencies

      addHelpers
      aggregateResponses
      assignRoleToPartsAndValidateSendMessageRequest
      constructRequest
      countTokens
      createEnhancedContentResponse
      createPredictRequestBody
      formatBlockErrorMessage
      formatGenerateContentInput
      formatNewContent
      formatSystemInstruction
      generateContent
      generateContentStream
      generateResponseSequence
      getClientHeaders
      getFunctionCalls
      getHeaders
      getResponsePromise
      getResponseStream
      getText
      hadBadFinishReason
      handlePredictResponse
      makeRequest
      processStream
      registerVertex
      validateChatHistory

      Diff: 19 dependency diffs

      + addHelpers
      + aggregateResponses
      + assignRoleToPartsAndValidateSendMessageRequest
      + countTokens
      + createEnhancedContentResponse
      + formatBlockErrorMessage
      + formatGenerateContentInput
      + formatNewContent
      + formatSystemInstruction
      + generateContent
      + generateContentStream
      + generateResponseSequence
      + getFunctionCalls
      + getResponsePromise
      + getResponseStream
      + getText
      + hadBadFinishReason
      + processStream
      + validateChatHistory

      classes

      Base (badaa74):
      ImagenModel
      RequestUrl
      VertexAIError
      VertexAIModel
      VertexAIService

      Merge (3713e5d):
      ChatSession
      GenerativeModel
      ImagenModel
      RequestUrl
      VertexAIError
      VertexAIModel
      VertexAIService

      Diff:
      + ChatSession
      + GenerativeModel

      variables

      Base (badaa74): 25 dependencies

      BlockReason
      DEFAULT_API_VERSION
      DEFAULT_BASE_URL
      DEFAULT_FETCH_TIMEOUT_MS
      DEFAULT_LOCATION
      FinishReason
      FunctionCallingMode
      HarmBlockMethod
      HarmBlockThreshold
      HarmCategory
      HarmProbability
      HarmSeverity
      ImagenAspectRatio
      ImagenPersonFilterLevel
      ImagenSafetyFilterLevel
      InferenceMode
      LANGUAGE_TAG
      Modality
      PACKAGE_VERSION
      SchemaType
      Task
      VERTEX_TYPE
      logger
      name
      version

      Merge (3713e5d): 32 dependencies

      BlockReason
      DEFAULT_API_VERSION
      DEFAULT_BASE_URL
      DEFAULT_FETCH_TIMEOUT_MS
      DEFAULT_LOCATION
      FinishReason
      FunctionCallingMode
      HarmBlockMethod
      HarmBlockThreshold
      HarmCategory
      HarmProbability
      HarmSeverity
      ImagenAspectRatio
      ImagenPersonFilterLevel
      ImagenSafetyFilterLevel
      InferenceMode
      LANGUAGE_TAG
      Modality
      PACKAGE_VERSION
      POSSIBLE_ROLES
      SILENT_ERROR
      SchemaType
      Task
      VALID_PARTS_PER_ROLE
      VALID_PART_FIELDS
      VALID_PREVIOUS_CONTENT_ROLES
      VERTEX_TYPE
      badFinishReasons
      logger
      name
      responseLineRE
      version

      Diff:
      + POSSIBLE_ROLES
      + SILENT_ERROR
      + VALID_PARTS_PER_ROLE
      + VALID_PART_FIELDS
      + VALID_PREVIOUS_CONTENT_ROLES
      + badFinishReasons
      + responseLineRE

      External Dependency

      Module: tslib

      Merge (3713e5d):
      __asyncGenerator
      __await

      Diff:
      + __asyncGenerator
      + __await

    • VertexAIModel

      Size

      | Type               | Base (badaa74) | Merge (3713e5d) | Diff               |
      | ------------------ | -------------- | --------------- | ------------------ |
      | size               | 5.37 kB        | 19.1 kB         | +13.7 kB (+255.5%) |
      | size-with-ext-deps | 23.3 kB        | 38.1 kB         | +14.8 kB (+63.3%)  |

      Dependency

      functions

      Base (badaa74):
      registerVertex

      Merge (3713e5d): 24 dependencies

      addHelpers
      aggregateResponses
      assignRoleToPartsAndValidateSendMessageRequest
      constructRequest
      countTokens
      createEnhancedContentResponse
      formatBlockErrorMessage
      formatGenerateContentInput
      formatNewContent
      formatSystemInstruction
      generateContent
      generateContentStream
      generateResponseSequence
      getClientHeaders
      getFunctionCalls
      getHeaders
      getResponsePromise
      getResponseStream
      getText
      hadBadFinishReason
      makeRequest
      processStream
      registerVertex
      validateChatHistory

      Diff: 23 dependency diffs

      + addHelpers
      + aggregateResponses
      + assignRoleToPartsAndValidateSendMessageRequest
      + constructRequest
      + countTokens
      + createEnhancedContentResponse
      + formatBlockErrorMessage
      + formatGenerateContentInput
      + formatNewContent
      + formatSystemInstruction
      + generateContent
      + generateContentStream
      + generateResponseSequence
      + getClientHeaders
      + getFunctionCalls
      + getHeaders
      + getResponsePromise
      + getResponseStream
      + getText
      + hadBadFinishReason
      + makeRequest
      + processStream
      + validateChatHistory

      classes

      Base (badaa74):
      VertexAIError
      VertexAIModel
      VertexAIService

      Merge (3713e5d):
      ChatSession
      GenerativeModel
      RequestUrl
      VertexAIError
      VertexAIModel
      VertexAIService

      Diff:
      + ChatSession
      + GenerativeModel
      + RequestUrl

      variables

      Base (badaa74): 19 dependencies

      BlockReason
      DEFAULT_LOCATION
      FinishReason
      FunctionCallingMode
      HarmBlockMethod
      HarmBlockThreshold
      HarmCategory
      HarmProbability
      HarmSeverity
      ImagenAspectRatio
      ImagenPersonFilterLevel
      ImagenSafetyFilterLevel
      InferenceMode
      Modality
      SchemaType
      Task
      VERTEX_TYPE
      name
      version

      Merge (3713e5d): 32 dependencies

      BlockReason
      DEFAULT_API_VERSION
      DEFAULT_BASE_URL
      DEFAULT_FETCH_TIMEOUT_MS
      DEFAULT_LOCATION
      FinishReason
      FunctionCallingMode
      HarmBlockMethod
      HarmBlockThreshold
      HarmCategory
      HarmProbability
      HarmSeverity
      ImagenAspectRatio
      ImagenPersonFilterLevel
      ImagenSafetyFilterLevel
      InferenceMode
      LANGUAGE_TAG
      Modality
      PACKAGE_VERSION
      POSSIBLE_ROLES
      SILENT_ERROR
      SchemaType
      Task
      VALID_PARTS_PER_ROLE
      VALID_PART_FIELDS
      VALID_PREVIOUS_CONTENT_ROLES
      VERTEX_TYPE
      badFinishReasons
      logger
      name
      responseLineRE
      version

      Diff: 13 dependency diffs

      + DEFAULT_API_VERSION
      + DEFAULT_BASE_URL
      + DEFAULT_FETCH_TIMEOUT_MS
      + LANGUAGE_TAG
      + PACKAGE_VERSION
      + POSSIBLE_ROLES
      + SILENT_ERROR
      + VALID_PARTS_PER_ROLE
      + VALID_PART_FIELDS
      + VALID_PREVIOUS_CONTENT_ROLES
      + badFinishReasons
      + logger
      + responseLineRE

      External Dependency

      Module: tslib

      Merge (3713e5d):
      __asyncGenerator
      __await

      Diff:
      + __asyncGenerator
      + __await

    • getGenerativeModel

      Size

      | Type               | Base (badaa74) | Merge (3713e5d) | Diff          |
      | ------------------ | -------------- | --------------- | ------------- |
      | size               | 19.3 kB        | 19.4 kB         | +66 B (+0.3%) |
      | size-with-ext-deps | 38.3 kB        | 38.4 kB         | +66 B (+0.2%) |
    • getImagenModel

      Size

      | Type               | Base (badaa74) | Merge (3713e5d) | Diff               |
      | ------------------ | -------------- | --------------- | ------------------ |
      | size               | 10.4 kB        | 21.0 kB         | +10.7 kB (+102.7%) |
      | size-with-ext-deps | 28.4 kB        | 40.0 kB         | +11.6 kB (+41.0%)  |

      Dependency

      functions

      Base (badaa74):
      constructRequest
      createPredictRequestBody
      getClientHeaders
      getHeaders
      getImagenModel
      handlePredictResponse
      makeRequest
      registerVertex

      Merge (3713e5d): 27 dependencies

      addHelpers
      aggregateResponses
      assignRoleToPartsAndValidateSendMessageRequest
      constructRequest
      countTokens
      createEnhancedContentResponse
      createPredictRequestBody
      formatBlockErrorMessage
      formatGenerateContentInput
      formatNewContent
      formatSystemInstruction
      generateContent
      generateContentStream
      generateResponseSequence
      getClientHeaders
      getFunctionCalls
      getHeaders
      getImagenModel
      getResponsePromise
      getResponseStream
      getText
      hadBadFinishReason
      handlePredictResponse
      makeRequest
      processStream
      registerVertex
      validateChatHistory

      Diff: 19 dependency diffs

      + addHelpers
      + aggregateResponses
      + assignRoleToPartsAndValidateSendMessageRequest
      + countTokens
      + createEnhancedContentResponse
      + formatBlockErrorMessage
      + formatGenerateContentInput
      + formatNewContent
      + formatSystemInstruction
      + generateContent
      + generateContentStream
      + generateResponseSequence
      + getFunctionCalls
      + getResponsePromise
      + getResponseStream
      + getText
      + hadBadFinishReason
      + processStream
      + validateChatHistory

      classes

      Base (badaa74):
      ImagenModel
      RequestUrl
      VertexAIError
      VertexAIModel
      VertexAIService

      Merge (3713e5d):
      ChatSession
      GenerativeModel
      ImagenModel
      RequestUrl
      VertexAIError
      VertexAIModel
      VertexAIService

      Diff:
      + ChatSession
      + GenerativeModel

      variables

      Base (badaa74): 25 dependencies

      BlockReason
      DEFAULT_API_VERSION
      DEFAULT_BASE_URL
      DEFAULT_FETCH_TIMEOUT_MS
      DEFAULT_LOCATION
      FinishReason
      FunctionCallingMode
      HarmBlockMethod
      HarmBlockThreshold
      HarmCategory
      HarmProbability
      HarmSeverity
      ImagenAspectRatio
      ImagenPersonFilterLevel
      ImagenSafetyFilterLevel
      InferenceMode
      LANGUAGE_TAG
      Modality
      PACKAGE_VERSION
      SchemaType
      Task
      VERTEX_TYPE
      logger
      name
      version

      Merge (3713e5d): 32 dependencies

      BlockReason
      DEFAULT_API_VERSION
      DEFAULT_BASE_URL
      DEFAULT_FETCH_TIMEOUT_MS
      DEFAULT_LOCATION
      FinishReason
      FunctionCallingMode
      HarmBlockMethod
      HarmBlockThreshold
      HarmCategory
      HarmProbability
      HarmSeverity
      ImagenAspectRatio
      ImagenPersonFilterLevel
      ImagenSafetyFilterLevel
      InferenceMode
      LANGUAGE_TAG
      Modality
      PACKAGE_VERSION
      POSSIBLE_ROLES
      SILENT_ERROR
      SchemaType
      Task
      VALID_PARTS_PER_ROLE
      VALID_PART_FIELDS
      VALID_PREVIOUS_CONTENT_ROLES
      VERTEX_TYPE
      badFinishReasons
      logger
      name
      responseLineRE
      version

      Diff:
      + POSSIBLE_ROLES
      + SILENT_ERROR
      + VALID_PARTS_PER_ROLE
      + VALID_PART_FIELDS
      + VALID_PREVIOUS_CONTENT_ROLES
      + badFinishReasons
      + responseLineRE

      External Dependency

      Module: tslib

      Merge (3713e5d):
      __asyncGenerator
      __await

      Diff:
      + __asyncGenerator
      + __await

Test Logs

  1. https://storage.googleapis.com/firebase-sdk-metric-reports/ZfTCr2FjyE.html

gsiddh merged commit e78c39a into vaihi-exp on Apr 16, 2025
31 of 34 checks passed
gsiddh deleted the vaihi-api-3 branch on April 16, 2025 at 21:14
gsiddh added 2 commits that referenced this pull request on Apr 22, 2025
gsiddh pushed a commit that referenced this pull request Apr 23, 2025
Fix languageCode parameter in action_code_url (#8912)

* Fix languageCode parameter in action_code_url

* Add changeset

Vaihi add langmodel types. (#8927)

* Adding LanguageModel types. These are based off https://github.com/webmachinelearning/prompt-api?tab=readme-ov-file#full-api-surface-in-web-idl

* Adding LanguageModel types.

* Remove bunch of exports

* yarn formatted

* after lint

Define HybridParams (#8935)

Co-authored-by: Erik Eldridge <[email protected]>

Adding smoke test for new hybrid params (#8937)

* Adding smoke test for new hybrid params

* Use the existing name of the model params input

---------

Co-authored-by: Erik Eldridge <[email protected]>

Moving to in-cloud naming (#8938)

Co-authored-by: Erik Eldridge <[email protected]>

Moving to string type for the inference mode (#8941)

Define ChromeAdapter class (#8942)

Co-authored-by: Erik Eldridge <[email protected]>

VinF Hybrid Inference: Implement ChromeAdapter (rebased) (#8943)

Adding count token impl (#8950)

VinF Hybrid Inference #4: ChromeAdapter in stream methods (rebased) (#8949)

Define values for Availability enum (#8951)

VinF Hybrid Inference: narrow Chrome input type (#8953)

Add image inference support (#8954)

* Adding image based input for inference

* adding image as input to create language model object

disable count tokens api for on-device inference (#8962)

VinF Hybrid Inference: throw if only_on_device and model is unavailable (#8965)