Error handling in LangChain

No one likes to talk about errors!

As a code base grows, we start to encounter issues cropping up from the various functionalities we use, be it in LangChain, OpenAI, Groq, or any other available integration. Error handling becomes quite important. There are a few errors I would like to highlight in this article.

Rate limit reached (Error 429)

This can happen with any chat model that caps the number of free requests you can send.

An example scenario: when using ChatGroq with an open-source model like Llama 3.1, you might encounter a rate-limit error. It looks like this:

INFO:__main__:Error code: 429 - {'error': {'message': 'Rate limit reached for model `llama3-8b-8192` in organization `org_******` on tokens per minute (TPM): Limit 30000, Used 30640, Requested 560. Please try again in 2.401s. Visit https://console.groq.com/docs/rate-limits for more information.', 'type': 'tokens', 'code': 'rate_limit_exceeded'}}

The message shows that the tokens used exceeded the approved limit. Review the rate limits at https://console.groq.com/settings/limits. The simplest fix is to purchase a plan that covers your usage without much hassle.

Ways to implement retry logic

A simple retry logic can be added with try/except. The logic sleeps and retries until it gets an answer. Beware that such a loop will never stop until the rate-limit error clears or the stream ends successfully, so also keep the token limits per day in mind. One can also cap the number of retries before falling back to another mechanism to handle this scenario.
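As a sketch of that idea (the helper name and the model-call placeholder are mine, not from LangChain), with a retry cap so the loop cannot run forever:

```python
import time

def invoke_with_retry(call_model, prompt, max_attempts=5, wait_seconds=3.0):
    """Keep calling the model, sleeping between failed attempts.

    `call_model` stands in for whatever sends the request (e.g.
    `llm.invoke`). For simplicity every exception is treated as
    retryable; in practice you would check for a 429 specifically.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call_model(prompt)
        except Exception:
            if attempt == max_attempts:
                raise  # give up so a fallback mechanism can take over
            time.sleep(wait_seconds)
```

Capping `max_attempts` is what keeps this from looping forever on a quota that resets only once per day.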

The chat models in LangChain also accept a retry count in the constructor; the current default is to retry 2 times, and a timeout can be set as well. For example:

from langchain_groq import ChatGroq

llm = ChatGroq(
    model="mixtral-8x7b-32768",
    temperature=0.0,
    max_retries=2,
    # other params...
)

Review how you can use Runnables along with the `.with_retry()` method provided by the API (source: LangChain documentation).

These options are available in all the chat models derived from BaseChatModel, which implements the Runnable interface.

There are a few more options available depending on the needs of the application, like setting max_tokens to a certain threshold, not streaming when it is not necessary, and retrying with exponential backoff. Review here.
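For the exponential-backoff option, a minimal hand-rolled sketch (the function name and delay values are my own, not from LangChain) looks like this:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, factor=2.0):
    """Retry `fn`, multiplying the sleep after each failure.

    Backing off exponentially (1s, 2s, 4s, ...) gives a rate-limited
    API time to recover instead of hammering it at a fixed interval.
    A little random jitter avoids many clients retrying in lockstep.
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; let a fallback handle it
            time.sleep(delay + random.uniform(0, delay / 10))
            delay *= factor
```

Libraries like tenacity provide the same pattern off the shelf, and `.with_retry()` uses exponential jittered waits by default.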

Handling tool errors

You may have a list of tools bound to an LLM. When a tool fails, there are multiple ways to handle it. The LangChain documentation suggests a simple way using try/except around the tool call:

from langchain_core.runnables import Runnable, RunnableConfig


def try_except_tool(tool_args: dict, config: RunnableConfig) -> Runnable:
    try:
        # `complex_tool` is assumed to be defined elsewhere, as in the docs
        return complex_tool.invoke(tool_args, config=config)
    except Exception as e:
        return f"Calling tool with arguments:\n\n{tool_args}\n\nraised the following error:\n\n{type(e)}: {e}"


chain = llm_with_tools | (lambda msg: msg.tool_calls[0]["args"]) | try_except_tool

There is also an invalid-tool issue that I have seen. Reference here. That link also has notes about fallbacks, retries, and much more.

The documentation on handling parsing errors is here. The AgentExecutor accepts a parameter called handle_parsing_errors, which can be set to True, to a string with a custom message, or to a custom function to be called whenever a parsing error occurs.
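As an illustrative fragment (the `agent` and `tools` objects are assumed to be built elsewhere, and the error handler is my own example), the parameter can be passed in any of its three forms:

```python
from langchain.agents import AgentExecutor

def _handle_error(error) -> str:
    # Send a trimmed version of the parse error back to the model.
    return str(error)[:200]

executor = AgentExecutor(
    agent=agent,    # built elsewhere, e.g. with create_react_agent
    tools=tools,
    handle_parsing_errors=_handle_error,  # or True, or a custom string
)
```

With `True`, the raw error is sent back to the LLM as an observation; the string and function forms let you control exactly what the model sees on a bad parse.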

Error 400 “Failed to call a function. Please adjust your prompt. See ‘failed_generation’ for more details.”

The error might look like this:

INFO:__main__:Error code: 400 - {'error': {'message': "Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.", 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '<tool-use>{"tool_calls":[{"id":"pending","type":"function","function":{"name":"some_function"},"parameters":{"query":"some_argument_value"}}]}</tool-use>'}}

Review the ChatGroq documentation here and consider the recommendations listed by Groq. I usually retry when I get this issue. Some references are 1 & 2.

Invalid API key (401)

The error may look like this:

Error code: 401 - {'error': {'message': 'Invalid API Key', 'type': 'invalid_request_error', 'code': 'invalid_api_key'}}

Review the API key being used. If it is no longer valid, create a new key; otherwise re-check that the key you configured matches the one that was issued.

Conclusion

I have only touched the tip of the iceberg here! I will keep adding more to this.

View my portfolio here