
feat(generator): add hub parity retry/timeout policies to giskard agents library #101

Closed
kevinmessiaen wants to merge 6 commits into main from
feature/eng-1271-add-hub-parity-retrytimeout-policy-to-giskard-agents-library

Conversation

@kevinmessiaen (Member) commented Feb 3, 2026

Retry policy:

generator = LiteLLMGenerator(model="test-model").with_retries(
    max_retries=5,  # Required: number of retries
    base_delay=1.0,  # Optional (default: 1.0): base delay in seconds
    max_delay=10.0,  # Optional (default: sys.maxsize / 2): maximum retry delay in seconds
)

# The above is a shortcut for:
generator = LiteLLMGenerator(
    model="test-model",
    retry_policy=RetryPolicy(
        max_retries=5,  # Required: number of retries
        base_delay=1.0,  # Optional (default: 1.0): base delay in seconds
        max_delay=10.0,  # Optional (default: sys.maxsize / 2): maximum retry delay in seconds
    ),
)
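For reference, here is a minimal sketch of the wait schedule such a policy implies, assuming tenacity-style exponential backoff (wait = base_delay · 2^(attempt − 1), capped at max_delay). `backoff_schedule` is a hypothetical helper for illustration, not part of the library:

```python
def backoff_schedule(max_retries: int, base_delay: float, max_delay: float) -> list[float]:
    """Delay (in seconds) before each retry attempt, capped at max_delay.

    Assumes tenacity-style exponential growth: base_delay * 2**(attempt - 1).
    """
    return [
        min(base_delay * 2 ** (attempt - 1), max_delay)
        for attempt in range(1, max_retries + 1)
    ]

# With max_retries=5, base_delay=1.0, max_delay=10.0 the delays grow
# 1, 2, 4, 8 and are then capped at 10 instead of reaching 16.
print(backoff_schedule(5, 1.0, 10.0))  # [1.0, 2.0, 4.0, 8.0, 10.0]
```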

Max timeout:

Note: max timeout is considered a completion param by LiteLLM, therefore we follow the same pattern:

generator = LiteLLMGenerator(model="test-model").with_params(
    timeout=60,  # Optional
)

# The above is a shortcut for:
generator = LiteLLMGenerator(
    model="test-model",
    params=GenerationParams(
        timeout=60,
    ),
)

# timeout can be overridden per call:
generator = LiteLLMGenerator(model="test-model")
await generator.complete("Hello!", GenerationParams(timeout=60))

Writing a custom Generator with the above patterns:

class HttpxGenerator(WithRetryPolicy, BaseGenerator):
    """A custom generator using httpx for HTTP requests."""
    
    api_url: str = Field(description="The API endpoint URL")
    
    def _should_retry(self, err: Exception) -> bool: 
        """Determine if an error should trigger a retry."""
        import httpx
        if isinstance(err, httpx.HTTPStatusError):
            # Retry on 5xx server errors and rate limits
            return err.response.status_code >= 500 or err.response.status_code == 429
        return isinstance(err, (httpx.NetworkError, httpx.TimeoutException))
    
    async def _complete_once(
        self, messages: list[Message], params: GenerationParams | None = None
    ) -> Response:
        """Complete a single request without retry logic."""
        import httpx
        
        params_ = self.params.model_dump(exclude={"tools"})
        if params is not None:
            params_.update(params.model_dump(exclude={"tools"}, exclude_unset=True))
        
        timeout_value = params_.get("timeout")
        timeout = httpx.Timeout(timeout_value) if timeout_value else None
        
        async with httpx.AsyncClient(timeout=timeout) as client:
            response = await client.post(
                self.api_url,
                json={"messages": [m.model_dump() for m in messages], **params_}
            )
            response.raise_for_status()
            data = response.json()
            
        return Response(
            message=Message(**data["message"]),
            finish_reason=data.get("finish_reason")
        )

# Usage with retry policy and timeout:
generator = HttpxGenerator(api_url="https://api.example.com/chat").with_retries(
    max_retries=5,
    base_delay=1.0,
    max_delay=10.0,
).with_params(
    timeout=60,
)

# Or using explicit policy and params:
generator = HttpxGenerator(
    api_url="https://api.example.com/chat",
    retry_policy=RetryPolicy(max_retries=5, base_delay=1.0, max_delay=10.0),
    params=GenerationParams(timeout=60)
)

Note

Medium Risk
Touches core generator completion/retry behavior and changes backoff timing semantics; risk is mainly behavioral (timeouts/backoff caps) rather than data/security-related.

Overview
Adds a new timeout field to GenerationParams and clarifies/standardizes the contract that per-call complete(..., params=...) values override generator defaults while tools may be merged.

Enhances WithRetryPolicy by introducing RetryPolicy.max_delay (plus a sentinel MAX_WAIT_SECONDS), capping Tenacity exponential backoff, improving with_retries() to patch/retain existing policy values, and adding an overridable _tenacity_before_sleep hook; tests are updated/expanded to assert correct param merging and retry sleep timings (including max-delay capping and exponential growth).

Written by Cursor Bugbot for commit 987f07d.

Add max_delay field to RetryPolicy to cap exponential backoff delays.
When max_delay is set, exponential backoff wait times are capped at
the specified value. Falls back to MAX_WAIT_SECONDS when max_delay
is None to maintain backward compatibility.

- Add max_delay field to RetryPolicy model
- Add MAX_WAIT_SECONDS constant for default max wait time
- Update exponential backoff to respect max_delay
- Add comprehensive tests for max_delay behavior

Add timeout parameter to GenerationParams to support timeout configuration
for completion requests. This enables hub parity for retry timeout policy
in the giskard agents library.

- Add timeout field to GenerationParams with optional float/int type
- Add test case to verify timeout parameter is preserved in generator params
@linear linear bot commented Feb 3, 2026

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @kevinmessiaen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the robustness of generator agents by implementing both timeout and advanced retry policies. The timeout feature allows users to define a maximum execution time for generation requests, preventing indefinite waits. The retry policy has been upgraded to include a maximum delay for exponential backoff, providing finer control over retry behavior and preventing excessively long delays. These additions ensure more reliable and predictable interactions with external services, improving the overall stability of the agents.

Highlights

  • Timeout Policy for Generators: Introduced a timeout parameter within GenerationParams to allow specifying a maximum duration in seconds for completion requests made by generators. This timeout can be set at the generator level or overridden per call.
  • Enhanced Retry Policy with Max Delay: The RetryPolicy now supports a max_delay parameter, which caps the exponential backoff duration between retries. The with_retries method has been updated to allow for more flexible updates to the retry policy, preserving existing settings when new ones are applied.
  • Improved Tenacity Integration: The underlying tenacity retry mechanism has been configured to respect the new max_delay for exponential backoff. A _tenacity_before_sleep hook was added to WithRetryPolicy to allow for custom actions before each retry sleep, which is also used in tests to verify sleep durations.
  • Comprehensive Test Coverage: New and updated tests ensure the correct behavior of the timeout parameter and the enhanced retry policy, specifically verifying exponential backoff with and without the max_delay cap.


Changelog
  • src/giskard/agents/generators/base.py
    • Added timeout field to GenerationParams for specifying request timeouts.
  • src/giskard/agents/generators/mixins.py
    • Imported MAX_WAIT_SECONDS for retry policy.
    • Modified with_retries to accept max_delay and merge existing retry policy parameters.
    • Updated AsyncRetrying to use max_delay for exponential backoff and added before_sleep hook.
    • Added _tenacity_before_sleep method.
  • src/giskard/agents/generators/retry.py
    • Imported sys and defined MAX_WAIT_SECONDS.
    • Added max_delay field to RetryPolicy.
  • tests/test_generator.py
    • Updated test_generator_with_params_overwrite to include and assert on the timeout parameter.
  • tests/test_generator_retry.py
    • Imported tenacity.
    • Modified MockGenerator to record sleep times via _tenacity_before_sleep for testing.
    • Added assertions for _sleep_times in existing retry tests.
    • Added new tests test_retries_with_max_delay, test_retries_exponential_backoff, and test_retries_exponential_backoff_with_max_delay to validate retry logic and max delay.

@gemini-code-assist bot left a comment
Code Review

This pull request adds retry and timeout policies to the LiteLLMGenerator, bringing it to parity with other components. The changes include adding timeout to GenerationParams, and max_delay to RetryPolicy. The implementation correctly uses tenacity for exponential backoff with a configurable max delay. The changes are well-tested, covering various scenarios for the retry logic. I've suggested a refactoring in the test suite to reduce code duplication and improve maintainability by using pytest.mark.parametrize.

@kevinmessiaen kevinmessiaen marked this pull request as ready for review February 3, 2026 10:13
# Before:
wait=t.wait_exponential(multiplier=self.retry_policy.base_delay),

# After:
wait=t.wait_exponential(
    multiplier=self.retry_policy.base_delay,
    max=self.retry_policy.max_delay or MAX_WAIT_SECONDS,

Falsy max_delay value incorrectly treated as unset

Low Severity

The expression self.retry_policy.max_delay or MAX_WAIT_SECONDS uses Python's truthiness evaluation, meaning if max_delay is explicitly set to 0 (a valid float), it will be treated as falsy and replaced with MAX_WAIT_SECONDS. Since max_delay is typed as float | None, setting it to 0 is valid and could be desired for testing scenarios with immediate retries. This would silently result in effectively unlimited wait times instead of zero delay.
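The difference can be shown with a stdlib-only sketch (the resolve_max_wait_* functions are hypothetical helpers, not library code):

```python
import sys

MAX_WAIT_SECONDS = sys.maxsize / 2

def resolve_max_wait_buggy(max_delay):
    # 0 is falsy, so an explicit zero delay silently becomes MAX_WAIT_SECONDS.
    return max_delay or MAX_WAIT_SECONDS

def resolve_max_wait_fixed(max_delay):
    # Only substitute the sentinel when max_delay is genuinely unset (None).
    return MAX_WAIT_SECONDS if max_delay is None else max_delay

print(resolve_max_wait_buggy(0))   # sys.maxsize / 2 -- effectively unlimited (wrong)
print(resolve_max_wait_fixed(0))   # 0 -- zero delay honored
```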


Member:
should we allow 0 as max_delay, here it will be changed to MAX_WAIT_SECONDS silently

Document retry policy behavior, no nested retries, and how per-call params
override generator defaults.
@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.



def _tenacity_retry_condition(self, retry_state: t.RetryCallState) -> bool:
return self._should_retry(retry_state.outcome.exception())


Retry condition passes None to exception handler

Medium Severity

The _tenacity_retry_condition method unconditionally calls self._should_retry(retry_state.outcome.exception()) without first checking if the attempt failed. When the request succeeds, outcome.exception() returns None, but _should_retry is documented and type-hinted to receive an Exception. This can cause crashes in custom generator implementations that don't defensively handle None. The method needs to check retry_state.outcome.failed() before accessing the exception.
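A minimal sketch of the defensive check (the Outcome class below is a simplified stand-in for tenacity's attempt outcome, not the real API, and should_retry is a placeholder predicate):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    """Simplified stand-in for a tenacity attempt outcome (assumption)."""
    exc: Optional[Exception] = None

    @property
    def failed(self) -> bool:
        return self.exc is not None

    def exception(self) -> Optional[Exception]:
        return self.exc

def should_retry(err: Exception) -> bool:
    # Placeholder predicate; real generators inspect the concrete error type.
    return isinstance(err, TimeoutError)

def retry_condition(outcome: Outcome) -> bool:
    # Successful attempts carry no exception; bail out before
    # calling should_retry(None) and crashing custom implementations.
    if not outcome.failed:
        return False
    return should_retry(outcome.exception())

print(retry_condition(Outcome()))                # False (success, no retry)
print(retry_condition(Outcome(TimeoutError())))  # True
```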


@Hartorn Hartorn requested a review from Inokinoki February 10, 2026 08:51
Member:
Let's update uv ;)

patch = {
    "max_retries": max_retries,
    "base_delay": base_delay,
    "max_delay": max_delay,
Member:
how about validating the numbers here?

e.g., "max_delay": max(0, max_delay), so that we do not have a negative value
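The suggested validation could be sketched as follows (clamp_non_negative is a hypothetical helper; it passes None through so that "unset" still means "keep the existing policy value"):

```python
from typing import Optional

def clamp_non_negative(value: Optional[float]) -> Optional[float]:
    """Clamp a delay to be non-negative, passing None (unset) through unchanged."""
    return value if value is None else max(0.0, value)

print(clamp_non_negative(-3.0))  # 0.0 -- negative delay silently clamped
print(clamp_non_negative(2.5))   # 2.5
print(clamp_non_negative(None))  # None -- unset stays unset
```

Note that silently clamping hides caller mistakes; raising a ValueError on negative input is an equally reasonable design choice here.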

@henchaves (Member) commented Feb 19, 2026

Changes integrated to Giskard-AI/giskard-oss#2239

@henchaves henchaves closed this Feb 19, 2026
3 participants