Expose model server metrics in model playground #438

Closed
@MichaelClifford

Description

Each time an LLM responds, it also outputs some information about its performance:

llama_print_timings:        load time =    4732.44 ms
llama_print_timings:      sample time =      86.82 ms /   485 runs   (    0.18 ms per token,  5586.14 tokens per second)
llama_print_timings: prompt eval time =    1997.60 ms /     2 tokens (  998.80 ms per token,     1.00 tokens per second)
llama_print_timings:        eval time =   20404.39 ms /   484 runs   (   42.16 ms per token,    23.72 tokens per second)
llama_print_timings:       total time =   22575.28 ms /   486 tokens

It would be great if a subset of this information could be exposed to the user on the playground page. Surfacing the prompt eval tokens per second and the eval tokens per second would give the user a good sense of how the model is performing on their machine.
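
As a rough sketch of how those two numbers could be picked out, the snippet below parses the tokens-per-second figures from the `llama_print_timings` lines. The `PlaygroundMetrics` shape and the `parseTimings` helper are hypothetical names (not existing playground APIs), and it assumes the model server's log lines are available to the frontend as plain strings.

```typescript
// Hypothetical sketch: extract tokens-per-second figures from llama.cpp's
// llama_print_timings output. Names and availability of the raw log lines
// are assumptions, not part of any existing playground API.

interface PlaygroundMetrics {
  promptEvalTokensPerSecond?: number;
  evalTokensPerSecond?: number;
}

// Parse a block of server log lines and pull out the two rates of interest.
function parseTimings(logLines: string[]): PlaygroundMetrics {
  const metrics: PlaygroundMetrics = {};
  for (const line of logLines) {
    // Example line:
    // llama_print_timings: prompt eval time = 1997.60 ms / 2 tokens (998.80 ms per token, 1.00 tokens per second)
    const match = line.match(
      /llama_print_timings:\s+(prompt eval|eval) time =.*?([\d.]+) tokens per second/,
    );
    if (!match) continue;
    const rate = parseFloat(match[2]);
    if (match[1] === 'prompt eval') {
      metrics.promptEvalTokensPerSecond = rate;
    } else {
      metrics.evalTokensPerSecond = rate;
    }
  }
  return metrics;
}
```

The two extracted rates could then be shown next to each response in the playground UI.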
