Description
gptscript version v0.0.0-dev-7ff3fa1f-dirty
Steps to reproduce the problem:
- I am testing with an OpenAI account on the "Tier 1" ($5 credit) billing tier.
- When I hit the `tokens per min` rate limit or the model's `maximum context length`, the error messages presented in the TUI look like unhandled exceptions.
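For illustration, the stack traces in the transcripts below come from uncaught OpenAI API errors inside the tool process. Here is a minimal sketch of how such errors could be intercepted before they escalate, assuming the tool uses the `openai` Node SDK (the `askModel` helper and the message wording are hypothetical, not the tool's actual code):

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical helper: catch rate-limit (429) and context-length (400)
// failures and return a readable message instead of letting the error
// reach triggerUncaughtException.
async function askModel(
  messages: OpenAI.Chat.ChatCompletionMessageParam[],
): Promise<string> {
  try {
    const res = await client.chat.completions.create({
      model: "gpt-4o",
      messages,
    });
    return res.choices[0].message.content ?? "";
  } catch (err) {
    if (err instanceof OpenAI.APIError) {
      if (err.status === 429) {
        return "Rate limit reached (tokens per min). Please wait and retry, or raise the account's limits.";
      }
      if (err.status === 400 && /context length/i.test(err.message)) {
        return "The gathered page content exceeded the model's context window. Try a more specific question.";
      }
    }
    throw err; // unexpected errors should still fail loudly
  }
}
```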
% gptscript chat_internet.gpt
Hello! I'm here to help you with any questions or tasks you have. How can I assist you today?
> who won 2024 superbowl?
┌────────────────────────────────────────────────────────────────────┐
│ Call Arguments: │
│ │
│ answersFromTheInternet {"question":"Who won the 2024 Super Bowl?"} │
└────────────────────────────────────────────────────────────────────┘
Running answers-from-the-internet from github.com/gptscript-ai/answers-from-the-internet
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ ERROR: got (exit status 1) while running tool, OUTPUT:
│ > tool
│ > node --no-warnings --loader ts-node/esm src/server.ts
│
│ slow page: https://www.marca.com/en/nfl/winners.html
│
│ node:internal/process/esm_loader:34
│ internalBinding('errors').triggerUncaughtException(
│ ^
│ error, status code: 429, message: Request too large for gpt-4o in organization org-*** on tokens per min (TPM): Limit 30000, Requeste
│ (Use `node --trace-uncaught ...` to show where the exception was thrown)
│
│ Node.js v21.7.0
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
It seems there was an issue retrieving the information. Let me try again.
┌────────────────────────────────────────────────────────────────────┐
│ Call Arguments: │
│ │
│ answersFromTheInternet {"question":"Who won the 2024 Super Bowl?"} │
└────────────────────────────────────────────────────────────────────┘
Running answers-from-the-internet from github.com/gptscript-ai/answers-from-the-internet
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ ERROR: got (exit status 1) while running tool, OUTPUT:
│ > tool
│ > node --no-warnings --loader ts-node/esm src/server.ts
│
│ slow page: https://www.marca.com/en/nfl/winners.html
│
│ node:internal/process/esm_loader:34
│ internalBinding('errors').triggerUncaughtException(
│ ^
│ error, status code: 429, message: Request too large for gpt-4o in organization org-*** on tokens per min (TPM): Limit 30000, Requeste
│ (Use `node --trace-uncaught ...` to show where the exception was thrown)
│
│ Node.js v21.7.0
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
It looks like I'm having trouble retrieving the information at the moment. You might want to check a reliable sports news website or the official NFL
website for the latest updates on the Super Bowl winner. If you have any other questions or need assistance with something else, feel free to ask!
sangeethahariharan@Sangeethas-MBP scripts % gptscript --default-model gpt-3.5-turbo chat_internet.gpt
12:27:58 WARNING: Changing the default model can have unknown behavior for existing tools. Use the model field per tool instead.
Hello! I am here to assist you. Feel free to ask me any question, and I will do my best to provide you with accurate information.
> who won 2024 superbowl?
┌──────────────────────────────────────────────────────────────┐
│ Call Arguments: │
│ │
│ answersFromTheInternet {"question":"2024 Super Bowl winner"} │
└──────────────────────────────────────────────────────────────┘
Running answers-from-the-internet from github.com/gptscript-ai/answers-from-the-internet
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ ERROR: got (exit status 1) while running tool, OUTPUT:
│ > tool
│ > node --no-warnings --loader ts-node/esm src/server.ts
│
│
│ node:internal/process/esm_loader:34
│ internalBinding('errors').triggerUncaughtException(
│ ^
│ error, status code: 400, message: This model's maximum context length is 16385 tokens. However, your messages resulted in 27294 tokens. Please reduce the
│ (Use `node --trace-uncaught ...` to show where the exception was thrown)
│
│ Node.js v21.7.0
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
I encountered an error while trying to retrieve the information about the 2024 Super Bowl winner. Is there anything else you would like to know or ask
about?
%
Expected Behavior:
Present an improved error message to the user in these cases, rather than a raw Node.js stack trace.
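For example, a last-resort handler in the tool's entry point would reduce the error boxes above to a single readable line. This is a sketch under an assumption: `src/server.ts` is the file named in the traces, but this handler is not the tool's current code, and the message format is illustrative.

```typescript
// Sketch of a last-resort handler for src/server.ts: print one concise
// line and exit non-zero instead of emitting a full Node.js stack trace,
// so the TUI error box stays readable.
process.on("uncaughtException", (err: Error) => {
  console.error(`answers-from-the-internet failed: ${err.message}`);
  process.exit(1);
});
```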