Conversation

@reiase (Contributor) commented Dec 27, 2025

Overview:

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

- Updated CLI and README to reflect changes in actor command syntax and added macOS Metal support parameters.
- Refactored vLLM Worker to initialize in the background and handle Metal environment setup for macOS.
- Improved error handling in the Router for stream responses and added checks for coroutine results in Python actor methods.
- Enhanced stream message handling to prevent repetition in generated outputs and ensure proper error reporting.
…y and error handling

- Cleaned up whitespace and formatting in actor methods to enhance code clarity.
- Improved coroutine handling in Python actor methods to ensure proper execution flow.
- Enhanced logging messages in vLLM Worker for better traceability during initialization and error handling.
- Streamlined message writing in the Router to maintain consistency and readability.
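The "checks for coroutine results in Python actor methods" bullet can be sketched as follows. This is a minimal, illustrative example; `ToyActor` and `dispatch` are hypothetical names, not the pulsing API:

```python
import asyncio
import inspect

# Hedged sketch: an actor method may be a plain function or an async def.
# Calling an async def returns a coroutine object, so the dispatcher must
# detect that case and await it before handing the value back.
class ToyActor:
    def ping(self):                 # plain method: returns a value directly
        return "pong"

    async def echo(self, msg):      # async method: the call returns a coroutine
        await asyncio.sleep(0)
        return msg

async def dispatch(actor, method, *args):
    result = getattr(actor, method)(*args)
    # If the method returned a coroutine, await it so the caller always
    # sees the final value, never a bare coroutine object.
    if inspect.iscoroutine(result):
        result = await result
    return result

sync_result = asyncio.run(dispatch(ToyActor(), "ping"))
async_result = asyncio.run(dispatch(ToyActor(), "echo", "hi"))
```

Without the `iscoroutine` check, the sync path works but the async path would silently return an un-awaited coroutine, which is the failure mode the refactor guards against.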
```python
except Exception as e:
    await response.write(f"data: {json.dumps({'error': str(e)})}\n\n".encode())
    await stream_response.write(
        f"data: {json.dumps({'error': str(e)})}\n\n".encode()
    )
```

Code scanning / CodeQL warning (Medium): Information exposure through an exception — stack trace information flows to this location and may be exposed to an external user.

Copilot Autofix


In general, to fix this kind of problem you should avoid sending raw exception messages or stack traces to clients. Instead, log the detailed error on the server (for debugging/monitoring) and return a generic, user‑friendly error message or a fixed error structure that does not reveal internals.

For this specific code, the best fix without changing existing functionality is to replace str(e) in the error response with a generic message such as "Internal server error" or "Stream generation failed". Optionally, we can log the full exception on the server side using Python’s standard logging module, which is a well‑known library and does not alter external behavior except for adding logs. Concretely:

  • Add an import for logging near the top of python/pulsing/actors/router.py.
  • In the except Exception as e: block around line 314, call logging.exception("Stream generation failed") to record the stack trace server‑side.
  • Change the await stream_response.write(...) call to send a constant error description instead of interpolating str(e).

This preserves the overall control flow and response format (still sending a JSON object with an "error" key via SSE), but removes potentially sensitive content from the client‑visible response.

Suggested changeset 1: python/pulsing/actors/router.py

Autofix patch — run the following command in your local git repository to apply it:

```sh
cat << 'EOF' | git apply
diff --git a/python/pulsing/actors/router.py b/python/pulsing/actors/router.py
--- a/python/pulsing/actors/router.py
+++ b/python/pulsing/actors/router.py
@@ -8,6 +8,7 @@
 from aiohttp import web
 
 from pulsing.actor import ActorSystem, Message
+import logging
 
 
 @dataclass
@@ -312,14 +313,16 @@
                 except json.JSONDecodeError:
                     continue
         except Exception as e:
+            # Log full exception details on the server, but return a generic message to the client
+            logging.exception("Error occurred during streaming response generation")
             await stream_response.write(
-                f"data: {json.dumps({'error': str(e)})}\n\n".encode()
+                f"data: {json.dumps({'error': 'Internal server error'})}\n\n".encode()
             )
 
         final = {
             "id": request_id,
             "object": obj_type,
             "created": created,
             "model": model or self.model_name,
             "choices": [{"index": 0, "finish_reason": "stop"}],
         }
EOF
```
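The fix pattern in the patch can be exercised standalone. The sketch below assumes a stand-in chunk generator; `render_stream` and `boom` are illustrative names, not the pulsing API, but the SSE framing and log-then-generic-error behavior mirror the suggested change:

```python
import json
import logging

logging.basicConfig(level=logging.WARNING)

def render_stream(generate):
    """Run a chunk generator, logging any exception with its full
    traceback server-side while emitting only a generic SSE error
    frame to the client (the CodeQL-recommended behavior)."""
    frames = []
    try:
        for chunk in generate():
            frames.append(f"data: {json.dumps(chunk)}\n\n".encode())
    except Exception:
        # Full details (message + traceback) stay in the server log.
        logging.exception("Stream generation failed")
        # The client sees a constant, non-sensitive error payload.
        frames.append(
            f"data: {json.dumps({'error': 'Internal server error'})}\n\n".encode()
        )
    return frames

def boom():
    yield {"delta": "hello"}
    raise RuntimeError("secret internal path: /srv/models/llama.bin")

frames = render_stream(boom)
```

Note the exception text (here containing an internal file path) never reaches `frames`, which is exactly what the CodeQL warning is about.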
@reiase reiase closed this Jan 1, 2026
