Fix messages are one-behind when using secure sockets (wss) #298


Draft · wants to merge 2 commits into base: master
Conversation

@mpf82 commented May 7, 2025

This fix tries to query the sockets directly for pending data if poller.poll() returns no file descriptors.

In case of pending data, a special flag is set on the websocket, to ensure the whole buffer is processed in one go.

Fixes #270
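A minimal sketch of this approach (a simplified illustration, not the PR's actual diff; only the names `get_fds_with_pending_data` and `_force_process_buffer` come from this PR, the surrounding manager internals are assumptions):

```python
def get_fds_with_pending_data(websockets):
    """Return the fds of sockets whose TLS layer holds unread decrypted data.

    poll()/select() only see bytes waiting on the raw socket; an
    ssl.SSLSocket may have already decrypted and buffered a full message
    internally, which pending() reports as a byte count.
    """
    fds = []
    for fd, ws in websockets.items():
        sock = getattr(ws, 'sock', None)
        if sock is not None and hasattr(sock, 'pending') and sock.pending() > 0:
            ws._force_process_buffer = True  # process the whole buffer in one go
            fds.append(fd)
    return fds
```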

@auvipy auvipy requested a review from Copilot May 7, 2025 11:42
@auvipy auvipy closed this May 7, 2025
@Copilot Copilot AI left a comment
Pull Request Overview

This PR fixes an issue where messages on secure sockets (wss) are processed one message behind: when poller.poll() returns no file descriptors, the sockets are queried directly for pending data. Key changes include introducing a special flag (`_force_process_buffer`) on the websocket to trigger full buffer processing, and adding a helper method in the manager to detect and handle pending data on secure sockets.

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

| File | Description |
| ---- | ----------- |
| ws4py/websocket.py | Introduces `_force_process_buffer` and extends the message processing loop in `once()`. |
| ws4py/manager.py | Adds `get_fds_with_pending_data` to check for pending socket data and set the flag. |
Comments suppressed due to low confidence (1)

ws4py/manager.py:297

  • [nitpick] `ws.sock.pending()` returns an integer (the number of decrypted bytes buffered by the TLS layer), not a boolean, so consider making the check explicit (e.g. testing for a positive value) to improve code clarity.

```python
if hasattr( ws.sock, 'pending' ) and ws.sock.pending():
```
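For illustration, the explicit form suggested here might look like this (a sketch; `_force_process_buffer` is the flag introduced by this PR):

```python
# ssl.SSLSocket.pending() -> int: bytes already decrypted and buffered
# inside the TLS object, and therefore invisible to poll()/select().
if hasattr(ws.sock, 'pending') and ws.sock.pending() > 0:
    ws._force_process_buffer = True
```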

```diff
@@ -424,6 +429,15 @@ def once(self):
         if not self.process(self.buf[:requested]):
             return False
         self.buf = self.buf[requested:]
+        if self.buf and self._force_process_buffer:
```
Copilot AI commented May 7, 2025
[nitpick] The while loop for processing the buffer under special handling mode duplicates some logic found earlier in the method. Consider refactoring this loop into a separate helper method to improve readability and maintainability.
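One possible shape for that refactor, as a sketch only: `_drain_buffer` is a hypothetical helper name, and the loop body mirrors just the lines visible in the diff above.

```python
def _drain_buffer(self):
    """Hypothetical helper: keep feeding self.buf to the frame parser
    until the buffer is exhausted, mirroring the main loop in once()."""
    while self.buf:
        requested = self.reading_buffer_size
        if not self.process(self.buf[:requested]):
            return False
        self.buf = self.buf[requested:]
    return True
```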


@auvipy auvipy reopened this May 7, 2025
@auvipy auvipy closed this May 7, 2025
@auvipy auvipy reopened this May 7, 2025
@auvipy (Collaborator) left a comment

can you please add unit tests to verify the changes?

@mpf82 (Author) commented May 7, 2025

> can you please add unit tests to verify the changes?

Sure. I've already prepared an integration test in pytest for our system.

I will refactor the test to use unittest.TestCase.

@auvipy (Collaborator) commented May 12, 2025

Thank you. Looking forward to it.

added unittest for CherryPy ssl fix

Certificate and key file have been created using:

```
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt
```
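For context, wiring such a self-signed pair into a CherryPy server uses the standard CherryPy SSL settings; this is a sketch with placeholder paths, not the PR's actual test code:

```python
import cherrypy

# Point CherryPy's built-in ssl support at the generated pair.
cherrypy.config.update({
    'server.ssl_module': 'builtin',
    'server.ssl_certificate': 'server.crt',
    'server.ssl_private_key': 'server.key',
})
```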
@mpf82 (Author) commented May 13, 2025

You might want to replace my server.crt and server.key files.

They have been created on a clean Ubuntu container, so I don't mind if they are available on GitHub, but still.

Btw, the server.crt and server.key files in the example folders seem to be too old and will raise an [SSL: EE_KEY_TOO_SMALL] error when used in my environment.

@mpf82 mpf82 requested a review from auvipy May 14, 2025 13:19
@mpf82 (Author) commented May 14, 2025

Seems that tox is using pytest, not unittest.

Anyway, when I change my local setup to run pytest, I do not get an error:

```
Running pytest with args: ['-p', 'vscode_pytest', '-vvv', '--rootdir=\mpf82\WebSocket-for-Python', '\mpf82\WebSocket-for-Python\test\test_cherrypy_ssl.py::CherryPySSLTest::test_ssl']

test/test_cherrypy_ssl.py::CherryPySSLTest::test_ssl PASSED [100%]
```

All I can see in the failing job is:

```
test/test_cherrypy_ssl.py py311: exit 70 (1.05 seconds) /home/runner/work/WebSocket-for-Python/WebSocket-for-Python> pytest pid=1888
py311: FAIL code 70 (11.27=setup[10.22]+cmd[1.05] seconds)
evaluation failed :( (11.38 seconds)
```

No idea what FAIL code 70 is supposed to mean.
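(Exit status 70 is EX_SOFTWARE, the "internal software error" code from sysexits.h; CherryPy's process bus, cherrypy.process.wspbus, calls os._exit(70) when an exception occurs while the engine shuts down, which may be why tox reports a failure even though every test passed.)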

@mpf82 (Author) commented May 14, 2025

After some more testing, I can reproduce the error.

The issue is with running multiple CherryPy tests in parallel.

Moving the tests of test_cherrypy.py to the end of the test run (via conftest.py, as sketched below) results in all tests passing, but the pytest process still fails.
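A minimal sketch of that kind of reordering hook, assuming a top-level conftest.py (the exact conftest used in this branch is not shown here):

```python
# conftest.py sketch: push test_cherrypy.py's tests to the end of the
# collected items so every other test runs first (the stable sort keeps
# the relative order of everything else).
def pytest_collection_modifyitems(config, items):
    items.sort(key=lambda item: item.fspath.basename == 'test_cherrypy.py')
```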

Removing test_cherrypy.py completely (and keeping only the new test_cherrypy_ssl.py) fixes the issue, but that's obviously not an option.

I currently have no idea how to fix this.

I've looked into CherryPy's tests, and it seems they rely on some sort of helper magic:

https://github.com/cherrypy/cherrypy/tree/main/cherrypy/test

Help is appreciated.

@mpf82 mpf82 marked this pull request as draft May 20, 2025 07:02
@mpf82 (Author) commented May 20, 2025

Converted to draft, because I just cannot get the unit tests to pass.

I have refactored the two CherryPy tests to use the unittest wrapper class, but without success.

I've tried to narrow down the issue, and it seems that starting the websocket manager (a Thread) causes the tests to fail once multiple CherryPy tests run in the same pytest process space.

Not starting the manager will result in the tests passing (as long as they do not rely on the manager).

Refactoring the two CherryPy test modules into one also failed, because of all the mocking in the existing test.

I have no idea how to tell the manager thread to run in "CherryPy's unittest class wrapper process space", so I give up.
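One direction that might help, sketched below as a hypothetical workaround (not part of this PR): run the CherryPy-backed test case in its own process so the manager thread dies with it. The dotted test name assumes the test/ directory is importable as a package; on POSIX, the pytest-forked plugin (--forked) offers similar per-test isolation.

```python
# Hypothetical workaround: execute the CherryPy SSL suite in a child
# process so the ws4py manager thread cannot leak into other tests
# sharing the pytest process.
import multiprocessing
import unittest


def _run_suite_in_child(result_queue):
    # Assumes the test/ directory is importable as the package "test".
    suite = unittest.defaultTestLoader.loadTestsFromName(
        'test.test_cherrypy_ssl.CherryPySSLTest')
    result = unittest.TextTestRunner().run(suite)
    result_queue.put(result.wasSuccessful())


class IsolatedCherryPySSLTest(unittest.TestCase):
    def test_ssl_in_subprocess(self):
        queue = multiprocessing.Queue()
        child = multiprocessing.Process(target=_run_suite_in_child,
                                        args=(queue,))
        child.start()
        child.join(timeout=60)
        self.assertTrue(queue.get(timeout=5),
                        'CherryPy SSL suite failed in the child process')
```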

Successfully merging this pull request may close these issues:

- Messages sent are received "one behind" when using WSS. (#270)