Connection trouble with 0.10.0 #89
OK, thanks for the report. 0.10 changed a specific way in which connections are handled, so that's enough information to start looking. I wish there were a way to mark versions as beta / not-yet-well-tested to try and catch these kinds of issues before they make it into production...
@miccolis Do you mind trying out
I just tested
Can you share how you have
@mikemorris Any update here? I can't replicate this on my end or see any problems from staring at the code, so I won't be able to make progress without help.
Hi @alevy, we are using memjs version 0.8.5 in our Heroku app and we randomly get a "MemJS: Server <..> failed after (2) retries with error - This socket has been ended by the other party" error. The issue starts with a daily recycling of a server and goes away when we restart the server manually. As you mentioned, 0.10.0 handles connections differently; should we update our app to 0.10.0, and will that help with this error, given you pointed out that 0.10.0 is not well tested?
@azmsrwr Please try out 0.10.0, and report back if you run into issues with it.
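For anyone following along, here is a minimal sketch of creating a memjs client with the connection-related options spelled out, so failures like the one above surface quickly. The server address and option values are placeholders for illustration, not a recommended or confirmed configuration:

```js
// Sketch only: explicit memjs connection options (values are assumptions).
const memjs = require('memjs');

const client = memjs.Client.create('localhost:11211', {
  retries: 2,        // matches the "(2) retries" in the error message above
  retry_delay: 0.2,  // seconds to wait between retries
  timeout: 0.5,      // seconds to wait for a response on the socket
  keepAlive: true    // TCP keep-alives can help detect dead peers sooner
});

client.get('some-key', function (err, value) {
  if (err) {
    // e.g. "This socket has been ended by the other party"
    console.error('memcached error:', err.message);
    return;
  }
  console.log(value ? value.toString() : null);
});
```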
Another MemCachier customer reports:
Using
Any update on this issue?
@leohihimax No update, sorry; hopefully Amit or I will find some time soon to sit down with this one. The problem is we aren't able to reproduce it. Any extra information you can provide to help us there would be great.
We were randomly getting the following issue with memjs version 0.8.5 in our Heroku app: "MemJS: Server <..> failed after (2) retries with error - This socket has been ended by the other party". We imagine it was around some sort of connection pooling, where the server was occasionally restarted (a daily planned restart) but the connection was not being renewed. The issue used to go away when we restarted the server manually. After updating to memjs 0.10.0 we no longer see it.
@dterei
OK, I'll try to set up a fake network environment that simulates a poor network and see what happens.
The version we were using had an issue where too many timeout handlers were being defined: memcachier/memjs#86. This led to many warnings like the following in the production logs:
(node:9852) Warning: Possible EventEmitter memory leak detected. 11 timeout listeners added. Use emitter.setMaxListeners() to increase limit
The logs also show many `ECONNRESET` and similar errors, which may be related to this: memcachier/memjs#89. According to that thread, however, it seems like version 0.10.0 could also be suffering from issues, although others report it working for them.
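For context, the listener pile-up behind that warning is plain Node behavior, not memjs-specific: each `net.Socket#setTimeout` call with a callback adds another one-time `'timeout'` listener. A minimal repro, assuming any reachable TCP endpoint on localhost:11211 (the port and loop count are illustrative):

```js
// Demonstrates how repeated setTimeout(ms, cb) calls accumulate
// 'timeout' listeners; past ten, Node prints the leak warning above.
const net = require('net');

const socket = net.connect(11211, 'localhost');
socket.on('error', () => {}); // ignore connection errors for the demo

for (let i = 0; i < 12; i++) {
  socket.setTimeout(500, () => socket.end());
}

console.log(socket.listenerCount('timeout')); // 12
```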
I tried to reproduce this by connecting to a memcached server on the other side of the pond (I think it is called the Atlantic Ocean) and setting the timeout to a value very close to the ping times I got from the server. I get occasional timeouts (which will return a timeout error for all outstanding requests) and sometimes even an ECONNRESET error (which will also return an error for all outstanding requests), but in all cases memjs reconnects fine for future requests.
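A more controllable version of that experiment is a local latency-injecting TCP proxy. This is a rough sketch of such a harness, not the setup actually used here; the hosts, ports, and delay are assumptions:

```js
// Delay every chunk in both directions so requests straddle the
// client's timeout. Point the memjs client at 127.0.0.1:12345.
const net = require('net');

const LATENCY_MS = 450; // hold each chunk just under a 500ms client timeout
const UPSTREAM_HOST = 'memcached.example.com'; // hypothetical server
const UPSTREAM_PORT = 11211;

net.createServer((client) => {
  const upstream = net.connect(UPSTREAM_PORT, UPSTREAM_HOST);
  client.on('data', (chunk) => setTimeout(() => upstream.write(chunk), LATENCY_MS));
  upstream.on('data', (chunk) => setTimeout(() => client.write(chunk), LATENCY_MS));
  client.on('error', () => upstream.destroy());   // e.g. ECONNRESET
  upstream.on('error', () => client.destroy());
  client.on('close', () => upstream.destroy());
  upstream.on('close', () => client.destroy());
}).listen(12345, () => console.log('proxy on 127.0.0.1:12345'));
```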
After updating to 0.10.0 we started seeing connection issues which were hard to diagnose. We've downgraded back to 0.9.0 and it doesn't look like we're seeing them anymore.
What would happen was simply that a process would start spewing
Error: socket timed out waiting on response.
~10 times a minute and be unable to re-establish a connection to the memcache server (we're using AWS ElastiCache). New processes would connect with no issue. I wish I had more details to report, but so far I've been unable to reproduce this outside of a production environment. I'll update this post as I learn more.
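Since the wedged connection never recovers on its own while new processes connect fine, one possible application-level workaround is to recreate the client after repeated timeouts. A sketch only, not the reporter's code; the threshold, error-matching regex, and server variable are assumptions:

```js
// Wrap memjs gets and rebuild the client after N consecutive timeouts.
const memjs = require('memjs');

const SERVERS = process.env.MEMCACHIER_SERVERS || 'localhost:11211';
const MAX_CONSECUTIVE_TIMEOUTS = 5; // threshold is an assumption

let client = memjs.Client.create(SERVERS);
let consecutiveTimeouts = 0;

function get(key, callback) {
  client.get(key, (err, value) => {
    if (err && /timed out/i.test(err.message)) {
      if (++consecutiveTimeouts >= MAX_CONSECUTIVE_TIMEOUTS) {
        client.close(); // drop the wedged connection
        client = memjs.Client.create(SERVERS);
        consecutiveTimeouts = 0;
      }
    } else {
      consecutiveTimeouts = 0;
    }
    callback(err, value);
  });
}
```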