
Update indexer rev to sleep time between request and Hasura metadata loading #436

Merged: 19 commits into main on Sep 13, 2024

Conversation

@musitdev (Contributor) commented Aug 22, 2024

Summary

Define the env var SLEEP_TIME_BETWENN_REQUEST_MS to set the sleep time between indexer gRPC requests.
By default, it is set to 10 ms.
If SLEEP_TIME_BETWENN_REQUEST_MS is set to zero, no sleep is applied between requests.
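For illustration only, a minimal Rust sketch of how a request loop could honor this variable, assuming a synchronous loop with std::thread::sleep (the actual indexer may use an async sleep instead):

use std::{env, time::Duration};

// Read SLEEP_TIME_BETWENN_REQUEST_MS, falling back to the 10 ms default,
// and sleep only when the configured value is non-zero.
fn sleep_between_requests() {
    let sleep_ms: u64 = env::var("SLEEP_TIME_BETWENN_REQUEST_MS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(10);
    if sleep_ms > 0 {
        std::thread::sleep(Duration::from_millis(sleep_ms));
    }
}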

Add a command to load the Hasura metadata.
The metadata file is located in the folder networks/suzuka/indexer and is named hasura_metadata.json.

I've updated the Aptos one.
The file must contain the IP address of the PostgreSQL DB so that Hasura can connect to it. To update the file dynamically with the DB IP and load the metadata, I've created the load_metadata binary; a rough sketch of such a loader follows the list of env vars below. It can be run this way:
POSTGRES_DB_HOST=<POSTGRES_HOST_IP> INDEXER_API_URL=<hasura console url> cargo run -p suzuka-indexer-service --bin load_metadata

  • POSTGRES_DB_HOST: set to the IP of the PostgreSQL DB; default: postgres
  • INDEXER_API_URL: URL of the Hasura console; default: http://127.0.0.1:8085
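As referenced above, here is a minimal sketch of what such a loader could look like, assuming reqwest (blocking feature), a hypothetical POSTGRES_DB_HOST_PLACEHOLDER token inside hasura_metadata.json, and Hasura's /v1/metadata replace_metadata operation; the PR's actual load_metadata binary may differ:

use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Defaults match the PR description.
    let db_host = env::var("POSTGRES_DB_HOST").unwrap_or_else(|_| "postgres".into());
    let api_url = env::var("INDEXER_API_URL").unwrap_or_else(|_| "http://127.0.0.1:8085".into());

    // Inject the PostgreSQL host into the metadata file before loading it.
    // The placeholder token is hypothetical, not the PR's actual format.
    let metadata = std::fs::read_to_string("networks/suzuka/indexer/hasura_metadata.json")?
        .replace("POSTGRES_DB_HOST_PLACEHOLDER", &db_host);

    // Hasura exposes a replace_metadata operation on its /v1/metadata endpoint.
    let body = format!(r#"{{"type":"replace_metadata","args":{}}}"#, metadata);
    let resp = reqwest::blocking::Client::new()
        .post(format!("{}/v1/metadata", api_url))
        .header("Content-Type", "application/json")
        .body(body)
        .send()?;
    println!("Hasura metadata load status: {}", resp.status());
    Ok(())
}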

Changelog

  • Add SLEEP_TIME_BETWENN_REQUEST_MS to slow down indexer requests.
  • Add the load_metadata binary to load Hasura metadata.

Testing

To test the whole chain locally:
CELESTIA_LOG_LEVEL=FATAL nix develop --extra-experimental-features nix-command --extra-experimental-features flakes --command bash -c "just suzuka-full-node native build.celestia-local.indexer.hasura.indexer-test --keep-tui"

Outstanding issues

@musitdev changed the title from "Update indexer rev to sleep time between request version" to "Update indexer rev to sleep time between request and Hasura metadata loading" on Aug 26, 2024
@l-monninger (Collaborator) commented:
@musitdev Can you make sure this passes before I review?

@musitdev (Contributor, Author) commented:
@l-monninger the tests pass now. I made a mistake and one test never ended, so I split the two into separate tests.
I've updated the PR description for the second time; it had been reverted to the initial one!
This PR contains two changes: the sleep time between indexer requests, and the Hasura metadata loading plus the indexer test update.
You can review the PR.

@musitdev (Contributor, Author) commented:
@l-monninger I made an update and restarted the test. I'll tell you when it's done.

@musitdev (Contributor, Author) commented:
@l-monninger the test has passed.

@l-monninger (Collaborator) left a review comment:
Working for me. But did you use a dynamic environment to see if this improves those early crashes? Can you link that?

@@ -4,12 +4,14 @@ sleep 10
result=$(PGPASSWORD=password psql -h $POSTGRES_HOST_IP -U postgres -d postgres -t -c "SELECT COUNT(*) FROM public.transactions;")
result=$(echo $result | xargs)
if (( result >= 1 )); then
http_status=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:8085/console")
if [ "$http_status" -eq 200 ]; then
response=$(curl -s -X POST -H "Content-Type: application/json" -d '{"query":"query {user_transactions { block_height } }"}' "http://localhost:8085/v1/graphql")
@l-monninger (Collaborator) commented on the diff above:
Would be better to move this into a crate at some point.

@musitdev (Contributor, Author) replied:
You prefer to implement it in Rust?

@musitdev (Contributor, Author) commented:
Yes, I tested it to synchronize the current Suzuka DB. The difference is that the Suzuka node wasn't processing Tx. It synchronized more than 130M Tx and never crashed. I've tested with 10 ms and 1 ms delays.

@musitdev musitdev merged commit 4a8f644 into main Sep 13, 2024
167 of 169 checks passed