Fix messages are one-behind when using secure sockets (wss) #298
Conversation
This fix queries the sockets directly for pending data when poller.poll() returns no file descriptors. If pending data is found, a special flag is set on the websocket to ensure the whole buffer is processed in one go.
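The core idea: TLS is record-based, so OpenSSL may decrypt an entire record into its internal buffer while the kernel-level poll on the raw fd reports nothing, leaving the last message stuck until the next event arrives. The sketch below illustrates this under stand-in names; `FakeTLSSocket`, `FakeWebSocket`, and `fds_with_pending_data` are hypothetical mirrors of the PR's `get_fds_with_pending_data` helper, not the actual patch.

```python
# Illustrative sketch only: FakeTLSSocket/FakeWebSocket are stand-ins,
# and fds_with_pending_data approximates the PR's helper.

class FakeTLSSocket:
    """Stand-in for ssl.SSLSocket: pending() reports decrypted bytes
    sitting in the SSL layer's buffer, invisible to poll()/select()."""
    def __init__(self, buffered):
        self._buffered = buffered

    def pending(self):
        return self._buffered


class FakeWebSocket:
    def __init__(self, sock):
        self.sock = sock
        self._force_process_buffer = False  # flag introduced by this PR


def fds_with_pending_data(websockets):
    """Return fds whose SSL layer already holds decrypted data, and
    flag them so the next read drains the whole buffer in one go."""
    ready = []
    for fd, ws in websockets.items():
        if hasattr(ws.sock, 'pending') and ws.sock.pending() > 0:
            ws._force_process_buffer = True
            ready.append(fd)
    return ready
```

A plain TCP socket has no `pending()` method, so the `hasattr` guard keeps the extra check confined to TLS connections.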
Pull Request Overview
This PR fixes an issue where messages on secure sockets (wss) are processed one message behind by querying sockets directly when poller.poll() fails to return file descriptors. Key changes include introducing a special flag (_force_process_buffer) in the websocket to trigger full buffer processing, and adding a helper method in the manager to detect and handle pending data on secure sockets.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| ws4py/websocket.py | Introduces `_force_process_buffer` and extends the message processing loop in `once()`. |
| ws4py/manager.py | Adds `get_fds_with_pending_data` to check for pending socket data and set the flag. |
Comments suppressed due to low confidence (1)
ws4py/manager.py:297
- [nitpick] Ensure that ws.sock.pending() returns a boolean as expected. If it returns an integer or another type, consider making the check explicit (e.g. checking for a positive value) to improve code clarity.
```python
if hasattr( ws.sock, 'pending' ) and ws.sock.pending():
```
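On the nitpick above: `ssl.SSLSocket.pending()` returns an int (the number of decrypted bytes buffered), not a boolean, so the truthiness check happens to work because `0` is falsy, but an explicit comparison documents the semantics. `StubSSLSocket` below is a stand-in used only to demonstrate the point.

```python
class StubSSLSocket:
    """Stand-in for ssl.SSLSocket: pending() returns an int, the number
    of decrypted bytes buffered inside the SSL layer, not a boolean."""
    def __init__(self, buffered):
        self._buffered = buffered

    def pending(self):
        return self._buffered


empty, busy = StubSSLSocket(0), StubSSLSocket(5)
# The implicit check relies on 0 being falsy...
assert not empty.pending() and busy.pending()
# ...while the explicit form makes the int semantics obvious:
assert busy.pending() > 0 and not (empty.pending() > 0)
```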
```python
@@ -424,6 +429,15 @@ def once(self):
        if not self.process(self.buf[:requested]):
            return False
        self.buf = self.buf[requested:]
        if self.buf and self._force_process_buffer:
```
[nitpick] The while loop for processing the buffer under special handling mode duplicates some logic found earlier in the method. Consider refactoring this loop into a separate helper method to improve readability and maintainability.
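One possible shape of that refactor is sketched below. The names are hypothetical (`_drain_buffer` does not exist in ws4py; `process`, `buf`, and `reading_buffer_size` merely mirror identifiers from websocket.py), so treat this as an illustration of the suggestion, not the actual patch.

```python
# Hypothetical refactor: hoist the repeated "consume the buffer in
# chunks" loop out of once() into a single helper method.

class WebSocketSketch:
    def __init__(self, reading_buffer_size=2):
        self.buf = b""
        self.reading_buffer_size = reading_buffer_size
        self.processed = []

    def process(self, data):
        # Real code parses websocket frames here; we just record chunks.
        self.processed.append(data)
        return True

    def _drain_buffer(self):
        """Consume self.buf chunk by chunk, as both code paths in
        once() would, so the loop lives in exactly one place."""
        while self.buf:
            requested = self.reading_buffer_size
            if not self.process(self.buf[:requested]):
                return False
            self.buf = self.buf[requested:]
        return True
```

With this shape, both the normal read path and the `_force_process_buffer` path would call the same helper instead of duplicating the loop body.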
can you please add unit tests to verify the changes?
Sure. I've already prepared an integration test in pytest for our system. I will refactor the test to use …
thank you. will be looking forward to it
Added a unittest for the CherryPy ssl fix. Certificate and key file have been created using: ``openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt``
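For a test server, the generated pair can be loaded into an SSL context along these lines. This is a sketch assuming the ``server.crt``/``server.key`` fixtures produced by the openssl command above; `make_test_ssl_context` is a hypothetical helper name, not part of ws4py.

```python
import ssl

def make_test_ssl_context(certfile="server.crt", keyfile="server.key"):
    """Build a server-side TLS context from the self-signed pair
    generated by the openssl command above (paths are test fixtures)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

A client connecting to such a test server would need to disable verification or trust ``server.crt`` explicitly, since the certificate is self-signed.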
You might want to replace my server.crt and server.key files. They have been created on a clean Ubuntu container, so I don't mind if they are available on GitHub, but still. Btw, the server.crt and server.key files in the example folders seem to be too old and will raise a …
Seems that tox is using pytest, not unittest. Anyway, when I change my local setup to run pytest, I do not get an error:
All I can see in the failing job is … No idea what that is.
After some more testing, I can reproduce the error. The issue is with running multiple CherryPy tests in parallel. Moving the tests of … Removing … I currently have no idea how to fix this. I've looked into CherryPy's tests and it seems they are relying on some sort of helper magic: https://github.com/cherrypy/cherrypy/tree/main/cherrypy/test Help is appreciated.
Converted to draft, because I just cannot get the unit tests to pass. I have refactored the two CherryPy tests to use the unittest wrapper class, but no success. I've tried to narrow down the issue: it seems that starting the websocket manager (thread) causes the tests to fail once multiple CherryPy tests are run in the same (pytest) process space. Not starting the manager results in the tests passing (as long as they do not rely on the manager). Refactoring the two CherryPy test modules into one also failed, because of all the mocking in the existing test. I have no idea how to tell the manager thread to run in "CherryPy's unittest class wrapper process space", so I give up.
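One common pattern for this kind of cross-test thread leakage is to give every test its own manager thread and join it in teardown. The sketch below shows the pattern only; `ManagerThread` is a stand-in for ws4py's WebSocketManager (whose real API differs), so this is a hedged illustration, not a claim that it fixes the failures above.

```python
import threading
import unittest

class ManagerThread(threading.Thread):
    """Stand-in for ws4py's WebSocketManager: a background thread
    that runs until explicitly told to stop."""
    def __init__(self):
        super().__init__(daemon=True)
        self._stop_event = threading.Event()

    def run(self):
        self._stop_event.wait()

    def stop(self):
        self._stop_event.set()


class ManagedTestCase(unittest.TestCase):
    """Start a fresh manager per test and join it afterwards, so no
    thread state leaks between CherryPy test cases that share one
    pytest process."""
    def setUp(self):
        self.manager = ManagerThread()
        self.manager.start()

    def tearDown(self):
        self.manager.stop()
        self.manager.join(timeout=5)
```

If the real manager keeps module-level state, per-test isolation like this still may not be enough, which would be consistent with the behavior described above.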
Fixes #270