Hi, we've been running a few experiments using Split and noticed nearly all of our variants were losing to the control version by pretty hefty margins. So we set up a test to see how Split performed when splitting an 'experiment' where the control and the variant were the exact same experience. This test found the variant losing to the control by nearly 20% at 95% confidence. Any suggestions as to what might be causing this, or how we can improve the accuracy of our results? This was using the default splitting algorithm. Would switching to the block randomization algorithm help us out at all?
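For reference, switching algorithms is a one-line configuration change. The sketch below is a minimal example, assuming the splitrb/split configuration API with `config.algorithm` set in an initializer; `Split::Algorithms::BlockRandomization` assigns participants so the arm counts stay as even as possible instead of drawing each assignment independently, which removes participation imbalance but not conversion-rate noise.

```ruby
# config/initializers/split.rb (typical Rails placement; assumed)
require "split"

Split.configure do |config|
  # Use block randomization instead of the default weighted sampling.
  # This keeps the number of participants per alternative balanced.
  config.algorithm = Split::Algorithms::BlockRandomization
end
```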
Similar problem here: with both control and T1 being the exact same behavior, there is a significant difference in conversion rate (> 1%) after about 10k participants and ~75% conversion. The other issue we are seeing is that there is also more than a 1% difference in participation, even though the split is set to 50:50. A quick sanity check of whether a gap like this exceeds what sampling noise alone would produce is sketched below.
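The following is a rough two-proportion z-test sketch using hypothetical numbers in the ballpark described above (roughly 5k participants per arm, ~75% conversion, a 1 percentage point gap); it is only meant to show how to judge whether an A/A difference is within the expected chance band, not to reproduce the actual experiment data.

```ruby
# Illustrative A/A sanity check -- all numbers below are assumed, not real data.
n1 = n2 = 5_000          # participants per arm (assumes an even split of ~10k)
p1 = 0.76                # observed conversion rate, control (hypothetical)
p2 = 0.75                # observed conversion rate, T1 (hypothetical)

# Pooled conversion rate and standard error of the difference in proportions.
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = Math.sqrt(p_pool * (1 - p_pool) * (1.0 / n1 + 1.0 / n2))
z = (p1 - p2) / se

puts format("pooled p = %.4f, SE = %.4f, z = %.2f", p_pool, se, z)
# |z| < 1.96 means the observed gap is inside the 95% band expected
# from pure chance in an A/A test of this size.
```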