This repository was archived by the owner on Aug 15, 2019. It is now read-only.

added inference model interface for sharing between layer model and frozen model #1053

Merged
merged 5 commits into master from predict-api
May 23, 2018

Conversation

pyu10055
Collaborator

@pyu10055 pyu10055 commented May 22, 2018


@pyu10055 pyu10055 requested review from dsmilkov and caisq May 22, 2018 22:10
@dsmilkov
Contributor

Needs a bit more polish on the doc. Ping me when you take a pass and I'll take another look.


Review status: 0 of 2 files reviewed at latest revision, all discussions resolved, some commit checks failed.


src/types.ts, line 149 at r1 (raw file):

/**
 * Common interface for rxjs inference model.

why rxjs? Rephrase as: Common interface for a machine learning model that can do inference.


src/types.ts, line 155 at r1 (raw file):

   * Execute the inference for the input tensors.
   * When there is single input for the model, inputs param should be a Tensor
   * and Tensor[] for batch mode. For models with mutliple inputs, inputs params

I don't think this is correct. A model with multiple inputs takes Tensor[]. Please read the params carefully in https://js.tensorflow.org/api/0.11.1/#tf.Model.predict


src/types.ts, line 156 at r1 (raw file):

   * When there is single input for the model, inputs param should be a Tensor
   * and Tensor[] for batch mode. For models with mutliple inputs, inputs params
   * should be in NamedTensorMap format, and NamedTensorMap[] for batch mode.

What is batch mode? How are using enabling/disabling batch mode?


src/types.ts, line 163 at r1 (raw file):

   * The output would be a Tensor if there is single output node or
   * Tensor[] for batch mode. For model with multiple outputs either
   * NamedTensorMap or NamedTensorMap[] will be returned.

Again, check the docs at https://js.tensorflow.org/api/0.11.1/#tf.Model.predict. I think this is incorrect.


Comments from Reviewable

@dsmilkov
Contributor

Review status: 0 of 2 files reviewed at latest revision, 4 unresolved discussions, some commit checks failed.


src/types.ts, line 156 at r1 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

What is batch mode? How are using enabling/disabling batch mode?

typo: using should be users



@caisq
Collaborator

caisq commented May 23, 2018

Review status: 0 of 2 files reviewed at latest revision, 4 unresolved discussions, some commit checks failed.


src/types.ts, line 149 at r1 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

why rxjs? Rephrase as: Common interface for a machine learning model that can do inference.

+1 This doesn't seem to have anything to do with RxJS.


src/types.ts, line 155 at r1 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

I don't think this is correct. A model with multiple inputs takes Tensor[]. Please read the params carefully in https://js.tensorflow.org/api/0.11.1/#tf.Model.predict

+1 to what Daniel said. If the model has a single input SymbolicTensor, then inputs here must be either

  • a single Tensor
    or
  • a single NamedTensorMap with only one key, where the key is the name of the SymbolicTensor.

If the model has > 1 input SymbolicTensors, the inputs can be

  • a single NamedTensorMap with n keys, where each key is the name of an input SymbolicTensor
    or
  • Tensor[], in which the order of the Tensors matches the order of the SymbolicTensors.

So I still don't understand why NamedTensorMap[] is needed here. It seems to me it is not needed.
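The two cases above can be captured as a small type sketch. This is a minimal illustration with simplified stand-in types, not the real tfjs-core definitions:

```typescript
// Simplified stand-ins for the tfjs-core types (assumption: the real ones
// live in @tensorflow/tfjs-core and carry far more than a shape).
interface Tensor { shape: number[]; }
type NamedTensorMap = {[name: string]: Tensor};

// Per the cases above, these three forms cover both single- and
// multi-input models; NamedTensorMap[] is not needed.
type PredictInputs = Tensor | Tensor[] | NamedTensorMap;

// Tiny helper distinguishing the three forms (illustrative only; a
// NamedTensorMap that happens to contain a "shape" key would be
// misclassified here).
function countInputs(inputs: PredictInputs): number {
  if (Array.isArray(inputs)) return inputs.length;  // Tensor[]
  if ('shape' in inputs) return 1;                  // single Tensor
  return Object.keys(inputs).length;                // NamedTensorMap
}
```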



@pyu10055
Collaborator Author

Thanks! PTAL!


Review status: 0 of 2 files reviewed at latest revision, 4 unresolved discussions, some commit checks failed.


src/types.ts, line 149 at r1 (raw file):

Previously, caisq (Shanqing Cai) wrote…

+1 This doesn't seem to have anything to do with RxJS.

lol, it has been in my head too much the last couple of days. Sorry.


src/types.ts, line 155 at r1 (raw file):

Previously, caisq (Shanqing Cai) wrote…

+1 to what Daniel said. If the model has a single input SymbolicTensor, then inputs here must be either

  • a single Tensor
    or
  • a single NamedTensorMap with only one key, where the key is the name of the SymbolicTensor.

If the model has > 1 input SymbolicTensors, the inputs can be

  • a single NamedTensorMap with n keys, where each key is the name of an input SymbolicTensor
    or
  • Tensor[], in which the order of the Tensors matches the order of the SymbolicTensors.

So I still don't understand why NamedTensorMap[] is needed here. It seems to me it is not needed.

updated, I think for a single Tensor input, we should recommend users always use a Tensor.


src/types.ts, line 156 at r1 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

typo: using should be users

updated


src/types.ts, line 163 at r1 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

again check docs in https://js.tensorflow.org/api/0.11.1/#tf.Model.predict. I think this is incorrect.

Done.



@dsmilkov
Contributor

:lgtm_strong: needs a bit more polish (jsdoc tags like @param and @return), but LGTM since nothing major.


Reviewed 1 of 2 files at r1.
Review status: 1 of 2 files reviewed at latest revision, 3 unresolved discussions, some commit checks failed.


src/index.ts, line 47 at r2 (raw file):

export {Scalar, Tensor, Tensor1D, Tensor2D, Tensor3D, Tensor4D, TensorBuffer, variable, Variable} from './tensor';
// tslint:disable-next-line:max-line-length
export {DataType, InferenceModel, ModelPredictConfig, NamedTensorMap, Rank, ShapeMap} from './types';

FYI, NamedTensorMap will conflict with the NamedTensorMap that is exported by the converter: https://github.com/tensorflow/tfjs-converter/blob/master/src/index.ts#L18
The conflict will happen at the union package since union merges everything under tf. So once you commit this, make sure to update converter to use the NamedTensorMap from core (will require another core patch release).
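The clash described here can be sketched with local stand-ins. This is purely illustrative: the types below approximate the real ones, and the fix shown (aliasing rather than redeclaring) is the suggested direction, not the actual converter patch.

```typescript
// Stand-in for the type exported by tfjs-core.
type CoreNamedTensorMap = {[name: string]: {shape: number[]}};

// If the converter declared its own NamedTensorMap, the union package
// (which merges everything under `tf`) would see two competing
// declarations. The suggested fix is for the converter to alias or
// re-export core's type so only one definition exists:
type ConverterNamedTensorMap = CoreNamedTensorMap;

// Both names now refer to the same structure.
const inputs: ConverterNamedTensorMap = {image: {shape: [1, 224, 224, 3]}};
```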


src/types.ts, line 156 at r1 (raw file):

Previously, pyu10055 (Ping Yu) wrote…

ud

thanks for removing "Batch mode"


src/types.ts, line 143 at r2 (raw file):

  /**
   * Output node names. Default to model graph outputs if not set.

Optional. List of output node names to evaluate when running predict(). Defaults to the model's default output.


src/types.ts, line 160 at r2 (raw file):

   *
   * For batch inference execution, the tensors for each input need to be
   * cancatenated together. For example with mobilenet, the required input shape

typo: concatenated


src/types.ts, line 168 at r2 (raw file):

   * node names.
   *
   * The output would be a Tensor if there is single output node, Tensor[] or

I would use the @param and @return jsdoc tags to split up this documentation into sections, one for each param. Like what we do in model.predict in layers: https://github.com/tensorflow/tfjs-layers/blob/v0.6.1/src/engine/training.ts#L1054
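A sketch of what the suggested @param/@return style could look like on predict(), with a trivial identity model to show usage. All names here are simplified stand-ins, and IdentityModel is purely illustrative, not part of the PR:

```typescript
interface Tensor { shape: number[]; }

interface InferenceModel {
  /**
   * Execute inference for the given input tensors.
   *
   * @param inputs A single Tensor for single-input models, or Tensor[] /
   *     a NamedTensorMap for multi-input models.
   * @returns A Tensor for a single output node, or Tensor[] for multiple
   *     output nodes.
   */
  predict(inputs: Tensor|Tensor[]): Tensor|Tensor[];
}

// Trivial implementation: echoes its inputs back. A real model would
// execute its graph here.
class IdentityModel implements InferenceModel {
  predict(inputs: Tensor|Tensor[]): Tensor|Tensor[] {
    return inputs;
  }
}
```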



@dsmilkov
Contributor

Note though the comment about exporting NamedTensorMap and the conflict that will happen in the union package.

@pyu10055
Collaborator Author

thanks!


Review status: 1 of 2 files reviewed at latest revision, 6 unresolved discussions, some commit checks failed.


src/index.ts, line 47 at r2 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

FYI, NamedTensorMap will conflict with the NamedTensorMap that is exported by the converter: https://github.com/tensorflow/tfjs-converter/blob/master/src/index.ts#L18
The conflict will happen at the union package since union merges everything under tf. So once you commit this, make sure to update converter to use the NamedTensorMap from core (will require another core patch release).

yes, thank you for pointing that out.


src/types.ts, line 143 at r2 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

Optional. List of output node names to evaluate when running predict(). Defaults to the model's default output.

Done.


src/types.ts, line 160 at r2 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

typo: concatenated

Done.


src/types.ts, line 168 at r2 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

I would use the @param and @return jsdoc tags to split up this documentation into sections, one for each param. Like what we do in model.predict in layers: https://github.com/tensorflow/tfjs-layers/blob/v0.6.1/src/engine/training.ts#L1054

Done.



@pyu10055 pyu10055 merged commit 6b72b44 into master May 23, 2018
@pyu10055 pyu10055 deleted the predict-api branch May 23, 2018 23:25
3 participants