
Review and improve test executions #94

Open
3 tasks
donpui opened this issue Dec 2, 2022 · 11 comments
Labels
Tech Debt Technical debt, small improvements to code, libraries

Comments

@donpui
Contributor

donpui commented Dec 2, 2022

Review the test execution implementation. This should cover unit, integration, and E2E tests:

  • improve the stability of tests in GitHub CI
  • get rid of retries if possible
  • improve the speed of test execution

Related issue with some workarounds applied: #87

@donpui donpui changed the title Review test executions implementation Review and improve test executions Dec 2, 2022
@btlogy
Contributor

btlogy commented Dec 6, 2022

In #87, @donpui said:

Currently, when we commit and open a pull request, the same tests run twice.

I understand we should avoid retrying, but do we need concurrent test at all?
As I'm trying to figure this out for other repositories, I've found this to be pretty common in .github/actions:

on:
  push:
    branches:
    - main
  pull_request:

This should avoid the tests running twice when a pull_request is made from a branch in the same repository (and not from a forked one, as I understand it).
And if anyone still wants to manually trigger a test on a branch without creating a pull_request, we might want to add a workflow_dispatch event.
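
A minimal sketch of what that trigger block could look like with the manual trigger added (this is an illustration of the idea, not necessarily the exact workflow file in this repository):

```
on:
  push:
    branches:
    - main
  pull_request:
  # allow manual runs on any branch without opening a pull_request
  workflow_dispatch:
```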

I've created #101 for this. Could that one be enough to tick the first task of this issue @donpui?

@btlogy
Contributor

btlogy commented Dec 8, 2022

Now that #101 has been merged, tests should no longer run concurrently for the same content.
However, concurrent tests may still occur whenever two or more branches are updated with different commits around the same time.
Unless we want to ensure our integration tests support concurrency, I guess we need to dig into the documentation to avoid it.
After that, we could disable the retry introduced in #87.
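
One way to avoid overlapping runs is the GitHub Actions concurrency setting. A sketch (the group name is illustrative; whether runs should be cancelled or queued is a project decision):

```
# serialize workflow runs per branch; a newer run cancels an in-flight one
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```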

@btlogy btlogy self-assigned this Dec 8, 2022
@meejah
Collaborator

meejah commented Dec 8, 2022

I don't understand how two parallel runs can interfere? Is there some global service(s) some of the tests rely on or...?

@btlogy
Contributor

btlogy commented Dec 8, 2022

I don't understand how two parallel runs can interfere? Is there some global service(s) some of the tests rely on or...?

@donpui will correct me, but the problem I bumped into in #92 threw "Exceeded timeout of 5000 ms" for a test.

GitHub won't let me search the logs, but I suspect the tests failing after 6-7 minutes are good candidates.

@meejah, I suspect GitHub could be that global service.
Wouldn't it make sense if they were limiting the resources available to the runners per repository?
@wuan: do you know if this could have something to do with our 'spending limit'?

@wuan
Contributor

wuan commented Dec 8, 2022

@btlogy, the runner time of a public repository does not affect the spending limit.

@meejah
Collaborator

meejah commented Dec 8, 2022

IME parallel runs are fine (e.g. if two branches are pushed "at the same time") in lots of other repos we work on.

@donpui
Contributor Author

donpui commented Dec 9, 2022

@btlogy you are correct, we are talking about the same timeout error.

We are not using global services (I mean, our public services); however, we spin up local mailbox, relay, and client Docker containers inside the GitHub runner virtual machine. It can be that some tests start before the other services are fully operational. For example, when the winden Docker container starts, it takes some time while all the WASM and JS recompiles.
Maybe optimisation or refactoring of client-ci is needed.
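
If the flakiness comes from tests starting before the containers are ready, a workflow step that polls the services before the test step might help. A sketch, assuming a hypothetical local endpoint (the URL and port are illustrative, not the actual mailbox/relay/client addresses):

```
- name: Wait for local services
  run: |
    # poll the (hypothetical) client endpoint until it answers, up to ~60s
    for i in $(seq 1 30); do
      curl --silent --fail http://localhost:8080/ && exit 0
      sleep 2
    done
    echo "service did not become ready in time" >&2
    exit 1
```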

FYI. @JustusFT

@meejah
Collaborator

meejah commented Dec 11, 2022

If the tests are slow, okay they're slow -- but there's no reason to avoid concurrent runs (or stated the other way: how does avoiding concurrent runs help slow tests?).

@donpui donpui added the Tech Debt Technical debt, small improvements to code, libraries label Dec 22, 2022
@donpui
Contributor Author

donpui commented Dec 22, 2022

Adding a reference: there is a draft PR to try an alternative way to run tests: #105

@btlogy btlogy removed their assignment Oct 27, 2023
@btlogy
Contributor

btlogy commented Feb 1, 2024

While working on #171 and its fix #173, I noticed the e2e tests were randomly failing nearly two thirds of the time!
I had to re-build the same commit a couple of times to pass the check!

I'm not 100% sure the stability part of this issue already covered these failures (it was too long ago).
But this is rather annoying when trying to fix the other little issues I've found.

@btlogy
Contributor

btlogy commented Feb 2, 2024

In the context of #176, I've improved the workflow in #183 with some caching, which should speed up the tests...
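
For reference, workflow caching typically looks something like this (a sketch using actions/cache; the paths and key are illustrative and not necessarily what #183 does):

```
- uses: actions/cache@v4
  with:
    # cache npm downloads between runs; key changes when the lockfile changes
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```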
