
feat: add langfuse llm node input and output #16924


Merged
merged 3 commits into from
Mar 27, 2025
Merged

Conversation

ZhouhaoJiang
Collaborator

@ZhouhaoJiang ZhouhaoJiang commented Mar 27, 2025

Summary

Fix #14272
Fix #13951

Tip

Close issue syntax: Fixes #<issue number> or Resolves #<issue number>; see the documentation for more details.

Screenshots

Now
CleanShot 2025-03-27 at 15 37 51@2x

Before
CleanShot 2025-03-27 at 15 39 07@2x

Checklist

Important

Please review the checklist below before submitting your pull request.

  • This change requires a documentation update, included: Dify Document
  • I understand that this PR may be closed in case there was no previous discussion or issues. (This doesn't apply to typos!)
  • I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
  • I've updated the documentation accordingly.
  • I ran dev/reformat (backend) and cd web && npx lint-staged (frontend) to appease the lint gods

@crazywoola crazywoola requested a review from Copilot March 27, 2025 07:44
Contributor

@Copilot Copilot AI left a comment


Pull Request Overview

This PR integrates Langfuse tracing for LLM node input and output, adding enhanced logging and token usage tracking.

  • Updates exception logging to include error details in ops_trace_task.py.
  • Retrieves token usage from Message records and assigns input and output tokens in langfuse_trace.py.
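The second bullet describes mapping token counts from a Dify Message record onto the input/output usage fields Langfuse expects. A minimal sketch of that mapping follows; the field names `message_tokens` and `answer_tokens` follow Dify's Message model, while the helper itself is illustrative glue, not the PR's actual code:

```python
# Hypothetical sketch: map a Dify Message record's token counts onto the
# input/output usage structure a Langfuse generation expects.
# Field names follow Dify's Message model; everything else is illustrative.

def build_usage(message: dict) -> dict:
    """Assign prompt tokens to 'input' and completion tokens to 'output'.

    Swapping these two fields is exactly the inversion the review below
    flags as a risk, so the mapping is made explicit here.
    """
    input_tokens = message.get("message_tokens", 0)   # prompt side
    output_tokens = message.get("answer_tokens", 0)   # completion side
    return {
        "input": input_tokens,
        "output": output_tokens,
        "total": input_tokens + output_tokens,
    }
```

Keeping the assignment in one small, named function makes an accidental input/output swap easy to spot in review.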

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File — Description
api/tasks/ops_trace_task.py — Augmented exception handling with detailed logging.
api/core/ops/langfuse_trace/langfuse_trace.py — Added token usage tracking by querying Message data; potential inversion in token assignment.
Comments suppressed due to low confidence (1)

api/tasks/ops_trace_task.py:52

  • Consider using logging.error instead of logging.info when logging errors to more clearly indicate failure events.
logging.info(f"Processing trace tasks failed, app_id: {app_id}, error: {e}")
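The suggestion above can be illustrated with a short, self-contained sketch (not the PR's actual code). `logging.error`, or `logging.exception`, which additionally records the traceback, marks the event at ERROR severity, whereas `logging.info` lets the failure blend into routine output:

```python
import logging

logger = logging.getLogger("ops_trace_task")

def process_trace_tasks(app_id: str) -> None:
    try:
        raise RuntimeError("trace backend unavailable")  # simulate a failure
    except Exception:
        # logger.exception logs at ERROR level and appends the traceback,
        # making failures easy to filter in log aggregators.
        logger.exception("Processing trace tasks failed, app_id: %s", app_id)
```

Using `%s`-style lazy formatting instead of an f-string also defers string interpolation until the record is actually emitted.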

@ZhouhaoJiang ZhouhaoJiang marked this pull request as ready for review March 27, 2025 08:17
@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. 💪 enhancement New feature or request labels Mar 27, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Mar 27, 2025
@crazywoola crazywoola merged commit 82189e1 into main Mar 27, 2025
9 checks passed
@crazywoola crazywoola deleted the fix/langfuse-node-data branch March 27, 2025 08:32
@DavideDelbianco

DavideDelbianco commented Mar 27, 2025

@ZhouhaoJiang
In your "after" screenshot, the total token count does not match input tokens + output tokens.

Only one of these can be correct.

I think the fix you wrote repeats the TOTAL workflow consumption in every single node.

In fact, if you sum the total tokens of the LLM node and the Extraction node, it adds up to 1827.
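The arithmetic being applied here is simple to state: per-node totals should partition the workflow total, so if every node reported the workflow-wide figure, summing across nodes would overcount. A hedged sketch of that sanity check (the 1827 figure comes from the comment; the function itself is illustrative, not Dify code):

```python
def nodes_overcount(node_totals: list[int], workflow_total: int) -> bool:
    """Return True if per-node token totals sum to more than the workflow
    total, i.e. the workflow-wide figure was duplicated into each node."""
    return sum(node_totals) > workflow_total

# If the LLM node and the Extraction node each reported the workflow-wide
# 1827 tokens, their sum (3654) would exceed the true workflow total,
# which is the inconsistency the commenter points out.
```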

@crazywoola
If you approve a PR, make sure it is a fix and not a worse bug. You don't have to understand the code, but the image is clear: the values cannot be the same in every node.

This is a product that subscribers pay for... c'mon...

DON'T release this PR; it is going to break analytical data for thousands of users.

Labels
💪 enhancement New feature or request lgtm This PR has been approved by a maintainer size:S This PR changes 10-29 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Parameter Extractor Node Costs Not Visible in Langfuse
Langfuse v3+ not displaying the cost
3 participants