
Adding a configuration for max retries to handle system wide outages #789

Open
andrewgaun opened this issue Aug 17, 2023 · 0 comments

  • Algolia Client Version: 3.16.6
  • Language Version: Java

Description

Correct me if I am mistaken, but before sending a request, HttpTransport.executeWithRetry increases the timeout for the request based on the number of retries. After the response is received, it uses RetryStrategy to determine what to do with the result.

In the case of a request timeout, RetryStrategy.decide will always retry.

During an Indexer service outage on July 13th, it appears this was happening for the entire length of the outage, causing the Promises to hang and never fail. This becomes a pretty big issue when using BatchIndexingResponse.waitTask(), as in the API examples.

Proposed change

  1. Add a new configurable property to ConfigBase, called something along the lines of maxRetriesPerHost, and update RetryStrategy to save that value in its constructor.
  2. Update RetryStrategy.decide to something along the lines of:
} else if (response.isTimedOut()) {
    // Keep the host up only while its retry count is within the configured cap;
    // a null maxRetriesPerHost preserves the current unbounded behavior.
    boolean shouldKeepUp = maxRetriesPerHost == null || tryableHost.getRetryCount() <= maxRetriesPerHost;
    tryableHost.setUp(shouldKeepUp);
    tryableHost.setLastUse(AlgoliaUtils.nowUTC());
    tryableHost.incrementRetryCount();
    return RetryOutcome.RETRY;
}
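
To illustrate the proposed behavior, here is a minimal, dependency-free sketch of the decision logic. The class and field names are hypothetical stand-ins for the client's StatefulHost/RetryStrategy internals, not the actual Algolia classes; only the shape of the check mirrors the snippet above.

```java
// Hypothetical standalone model of the proposed bounded-retry decision.
// TryableHost stands in for the client's stateful host; decideOnTimeout
// mirrors the timeout branch of RetryStrategy.decide from the snippet above.
public class BoundedRetrySketch {
    enum RetryOutcome { RETRY }

    static final class TryableHost {
        boolean up = true;
        int retryCount = 0;
    }

    // A null maxRetriesPerHost preserves today's behavior: the host is never
    // marked down on timeouts, so retries (and timeouts) grow without bound.
    static RetryOutcome decideOnTimeout(TryableHost host, Integer maxRetriesPerHost) {
        boolean shouldKeepUp = maxRetriesPerHost == null || host.retryCount <= maxRetriesPerHost;
        host.up = shouldKeepUp;
        host.retryCount++;
        return RetryOutcome.RETRY;
    }

    public static void main(String[] args) {
        TryableHost host = new TryableHost();
        // With maxRetriesPerHost = 2, the check fails once retryCount exceeds 2,
        // so the host is marked down on the fourth consecutive timeout.
        for (int i = 0; i < 4; i++) {
            decideOnTimeout(host, 2);
            System.out.println("attempt " + (i + 1) + " host up: " + host.up);
        }
    }
}
```

With all hosts marked down, the transport can surface a failure instead of hanging the Promise indefinitely, which is the point of the cap.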

Why make the change

During the indexing outage, it was not immediately clear why our requests were stuck. Looking at the code, it seems that if the servers had actually responded with server errors, the StatefulHosts would have been turned off one by one, which is closer to what I would expect from an outage. The current logic has no way of detecting this type of issue, causing ever-expanding timeouts without any way of intervening.
