Following a comment from @samwaseda (basically "Tests are obscure here and there, add some `msg`.") and @liamhuber's post on going for 100% coverage and what he took away from trying to achieve it, I'll see how close I can come to full coverage for `courier` (almost) from the get-go. Along the way, I want to address some points that are more specific to what `courier` does and have implications for testing, i.e. the interaction with live services.
"generic" potential improvements
- Testing is not really "verbose"; almost no assertion uses `msg=...`.
  Strategy: go through each test and add a message whenever what a test does is not trivially self-explanatory.
- There are 14 uncovered lines (according to https://app.codecov.io/gh/pyiron/courier, 30.03.2026).
- For some code, there are definitely edge cases that are not yet covered.
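To illustrate the first point, here is a hedged sketch of the `msg=...` convention; the test name and the URL-joining logic are made up for illustration and are not from the actual courier suite:

```python
import unittest


class _ExampleMessageTest(unittest.TestCase):
    """Hypothetical test (not from courier) showing the msg= convention."""

    def test_url_join_keeps_single_slash(self):
        base, path = "https://host/api/", "/v1/endpoints"
        # Strip overlapping slashes before joining, so the result has exactly one.
        joined = base.rstrip("/") + "/" + path.lstrip("/")
        self.assertEqual(
            joined,
            "https://host/api/v1/endpoints",
            msg="joining must not produce a double slash between base and path",
        )


if __name__ == "__main__":
    unittest.main()
```

With the message, a failure report states the intent directly instead of forcing the reader to reverse-engineer it from the compared values.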
live service interactions
`courier` aims to connect to remotely running services (at the moment: services that expose endpoints for standard HTTP requests). So far, I have used the classes `_FakeSession`, `_FakeResponse`, and `_FakeRequest` to mimic the behavior of `requests.Session` instances "connected" to an actual live service.
We can divide the tests using mocks into two categories:
- Category A – Fake session/response to test wiring logic (HTTP method, URL construction, header passing, parameter forwarding). These are the `_FakeSession`/`_FakeResponse` patterns in test_http_client.py, test_ontodocker_client.py, and test_request.py. The mocks substitute `requests.Session`, so no network call is made.
- Category B – Pure validation/logic tests that happen to use `_FakeSession` only to instantiate a client, but never actually hit the session (e.g. test_create_validates_name, test_endpoint_validates_dataset). These don't really need a real service.
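For context, a minimal version of the Category A pattern might look like the following; the class names follow the post, but the recorded attributes and the `request` method shape are assumptions for illustration, not the actual courier test doubles:

```python
class _FakeResponse:
    """Minimal stand-in for requests.Response (sketch, not the real double)."""

    def __init__(self, text="", status_code=200):
        self.text = text
        self.status_code = status_code


class _FakeSession:
    """Records the wiring of each call instead of hitting the network."""

    def __init__(self, response=None):
        self.calls = []  # (method, url, kwargs) tuples for later assertions
        self._response = response or _FakeResponse()

    def request(self, method, url, **kwargs):
        self.calls.append((method, url, kwargs))
        return self._response


# Category A usage: assert on the wiring, not on real service behavior.
session = _FakeSession(_FakeResponse(text="ok"))
resp = session.request("PUT", "https://host/api/v1/foo", headers={"X-Token": "t"})
assert session.calls[0][0] == "PUT"
assert resp.text == "ok"
```

The key property is that assertions can only ever check what the client *sends*, never what a real service would *answer*, which is exactly the limitation discussed below.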
The mocks in Category A are genuinely problematic because:
- `_FakeResponse` doesn't replicate actual Ontodocker response formats, e.g. the Python-literal list for /api/v1/endpoints, the actual Turtle content, the actual SPARQL JSON results structure.
- `_FakeSession` doesn't model connection errors, TLS, redirects, timeouts, or auth rejection.
- Tests verify that `method="PUT"` is sent, but never verify that the real service accepts it and responds as expected.
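Short of a live service, one of these gaps can at least be narrowed with a fake that injects transport failures, so the error-handling paths get exercised at all. This is a sketch under assumptions: the error-injection hook is not existing courier code, and the built-in `ConnectionError` stands in for `requests.ConnectionError` to keep the sketch dependency-free:

```python
class _FailingSession:
    """Fake session that raises a transport-level error, so that
    connection-error handling can be exercised without a network.
    (Hypothetical; in practice requests.ConnectionError would be raised.)"""

    def __init__(self, exc=None):
        self._exc = exc or ConnectionError("simulated connection refused")

    def request(self, method, url, **kwargs):
        # Never returns a response: every call fails at the transport layer.
        raise self._exc


session = _FailingSession()
try:
    session.request("GET", "https://host/api/v1/endpoints")
except ConnectionError as err:
    caught = str(err)

assert caught == "simulated connection refused"
```

This still cannot validate TLS, redirects, or real auth rejection, which is why the table below argues for genuine integration tests.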
Tests that benefit most from a real running service:

| Test file | Specific area | Why |
| --- | --- | --- |
| test_ontodocker_client.py | `EndpointsResource.list_raw` / `list` | The response format is a quirky Python literal, not JSON. A real service test confirms the parsing/rectification actually works end-to-end. |
| test_ontodocker_client.py | `DatasetsResource.create` / `delete` | Confirms the real service accepts PUT/DELETE at those URLs and returns the expected text. |
| test_ontodocker_client.py | `DatasetsResource.fetch_turtle` / `download_turtle` | Confirms the returned content is valid Turtle. |
| test_ontodocker_client.py | `DatasetsResource.upload_turtlefile` / `upload_graph` | The multipart upload format (`files={"file": ...}`) is brittle: a real server might reject it if the field name, content-type, or encoding is wrong. |
| test_ontodocker_client.py | `SparqlResource.query_raw` / `query_df` | Confirms the SPARQL endpoint actually returns the expected JSON structure and that `make_dataframe` can parse it. |
| test_http_client.py | `get_text`/`get_json`/`post_text`/`put_text`/`delete_text` | Verifies the full request→response pipeline, including TLS, auth headers, and status code handling, against a real endpoint. |
| test_http_client.py | Error propagation (`test_http_error_is_propagated_as_http_error`) | A real 404/401 from the server confirms the error-wrapping logic works with real `requests.Response` objects. |
Tests where mocks are fine and a real service adds no value:
- All of test_auth.py, test_url.py, test_session.py, test_compat.py, test_exceptions.py: pure logic, no I/O required.
- Validation-only tests in test_http_client.py (`TestHttpClientValidation`) and test_ontodocker_client.py (`test_create_validates_name`, `test_endpoint_validates_dataset`, etc.).
In conclusion, this means that some of the unit tests - namely those where receiving the actual service's response is important - should be reconstructed and moved to integration tests.
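One possible shape for such integration tests is a suite that only runs when a live base URL is supplied and skips otherwise, so CI stays green without a service. This is a minimal standard-library sketch; the environment variable name and the endpoint path are assumptions, not existing courier conventions:

```python
import os
import unittest
import urllib.request

# Assumed switch name; the real suite may choose a different mechanism.
LIVE_BASE_URL = os.environ.get("COURIER_TEST_BASE_URL")


@unittest.skipUnless(LIVE_BASE_URL, "no live service configured")
class TestEndpointsIntegration(unittest.TestCase):
    """Runs only when a real service URL is provided via the environment."""

    def test_endpoints_listing_is_reachable(self):
        # Real round trip: TLS, redirects, and status handling all included.
        with urllib.request.urlopen(f"{LIVE_BASE_URL}/api/v1/endpoints") as resp:
            self.assertEqual(resp.status, 200, msg="live endpoint should answer 200")


if __name__ == "__main__":
    unittest.main()
```

In the courier client itself these requests would of course go through `requests.Session`; the point here is only the skip-unless-configured pattern that separates integration tests from the always-on unit tests.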