Conversation
Added a synchronous client and a synchronous test server that can serialize data into a struct and send it between client and server. An image can also be sent successfully, although not in the YUV420 format expected from the RPi side. `sync_client_mock.cpp` and `sync_server_mock.cpp` need to be run separately, and the server must be started first for the client to connect. Currently working on converting everything to asynchronous.

Asynchronous TODOs:
- Make the async client thread safe; need to learn how to use strands and similar tools to protect shared resources and the execution context.
- Ensure that requests to take images complete in the requested order, i.e. no images out of order. This isn't a problem in the synchronous version, but the asynchronous version needs to handle it.
- Decide what we do in the case any of the async functions fail.
- Once we receive the image in YUV420 format, does it need to be reassembled by `imgConvert()` or is that on the CV pipeline? Also need to save to local storage instead of keeping it in memory; what is the expected format to save as (i.e. name or title)?
- Handle errors.
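For the ordering TODO, one possible approach (a sketch with illustrative names, not existing obcpp code) is a reorder buffer: each capture request is tagged with a sequence number, async completions land in any order, and images are only released strictly in request order.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch: releases asynchronously completed captures
// strictly in the order they were requested. The image payload is a
// std::string here purely for illustration.
class ImageReorderBuffer {
 public:
    // Record a completed capture, tagged with the sequence number it
    // was requested under.
    void complete(uint32_t seq, std::string image) {
        pending_[seq] = std::move(image);
    }

    // Pop the next in-order image, or nullopt if it hasn't arrived yet.
    std::optional<std::string> popNext() {
        auto it = pending_.find(next_);
        if (it == pending_.end()) return std::nullopt;
        std::string img = std::move(it->second);
        pending_.erase(it);
        ++next_;
        return img;
    }

 private:
    uint32_t next_ = 0;                        // next sequence to release
    std::map<uint32_t, std::string> pending_;  // out-of-order completions
};
```

If handlers run on multiple threads, calls into this buffer would still need to be serialized (e.g. via a strand).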
…red functionality. Need to further change to support the RPI functionality.
AskewParity
left a comment
BIG! Thanks for doing this. I think the picam is a platform we will use for a decent while, so this not only is important for near term testing, but also will be the infrastructure for future camera stuff.
Some of the comments I left are more facetious than others. Overall, I don't have a lot of constructive comments to provide, but I have left some questions about the meaning of certain control flow statements.
The long-term solution is to add this to the Dockerfile, correct? If that is the case, I should start looking into implementing it.
Yes I think we can look into using libboost-dev from the apt repository instead.
include/camera/rpi.hpp
Outdated
```cpp
const std::uint8_t START_REQUEST = 's';
const std::uint8_t PICTURE_REQUEST = 'I';
const std::uint8_t END_REQUEST = 'e';
const std::uint8_t LOCK_REQUEST = 'l';

inline uint32_t IMG_WIDTH = 1456;
inline uint32_t IMG_HEIGHT = 1088;
inline uint32_t IMG_BUFFER = IMG_WIDTH * IMG_HEIGHT * 3 / 2;

// Libcamera Strides/Padding
const uint32_t STRIDE_Y = 1472;
const uint32_t STRIDE_UV = 736;

// Network Config
const char SERVER_IP[] = "192.168.77.2";
const int SERVER_PORT = 25565;

const int headerSize = 12;
const uint32_t EXPECTED_MAGIC = 0x12345678;
const size_t CHUNK_SIZE = 1024;

struct Header {
    uint32_t magic;
    uint32_t total_chunks;
    uint32_t mem_size;
};
```
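For context on the stride constants, here is a hedged sketch (an assumed helper, not the actual `rpi.cpp` code) of how they could be used to strip libcamera's per-row padding out of a received plane, e.g. `STRIDE_Y = 1472` rows trimmed down to `IMG_WIDTH = 1456` useful bytes:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative helper: copy `width` useful bytes out of each
// `stride`-byte row, dropping the trailing row padding.
std::vector<uint8_t> stripStride(const std::vector<uint8_t>& plane,
                                 size_t width, size_t stride, size_t rows) {
    std::vector<uint8_t> out;
    out.reserve(width * rows);
    for (size_t r = 0; r < rows; ++r) {
        const uint8_t* row = plane.data() + r * stride;
        out.insert(out.end(), row, row + width);  // keep only the pixels
    }
    return out;
}
```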
Should this be in config?
I think it depends. I can add all of these to a PiCamera-specific config, but it would result in a picamera-specific config that isn't used by the mock or anywhere else. I plan to implement a feature where certain network intrinsics are configured on the camera-things side and fetched via UDP instead of being hardcoded. I think this is fine for the MVP, but if you think a config is needed now, I can start working on that in this PR.
@AskewParity I've addressed the ENUM, byte order, and pruned the misc comments.
Closes #214 Closes #291 Closes #331
Changes
This PR introduces the MVP for the Pi-Camera Client. It includes several classes, mainly
`rpi.cpp/hpp` and `udp_client.cpp/hpp`. The former implements the `CameraInterface` spec, and the latter implements the `UDPClientInterface` based off how the `camera-things` project sends images.

Here is a quick explanation of the request lifecycle between the `obcpp` RPi client and the `camera-things` server:

- **Image Request:** The client (`rpi.cpp`) sends a single ASCII character `'I'` (Request Image) within a UDP packet to the `camera-things` server port.
- **Header Reception:** For each plane, the server first sends a header containing the `magic` number (`0x12345678`) for validation, the `total_chunks` that will follow, and the `mem_size` (total size of the plane in bytes).
- **Plane Transmission (Chunks):** The server then transmits the plane data split into multiple chunk packets to fit within network MTU limits (typically ~1024-byte data chunks). Each chunk is prefixed with a 4-byte sequential chunk index in network byte order.
- **Reconstruction:** The client receives the header, validates the `magic` number, and reconstructs the plane by receiving the specified number of data chunks. This process is repeated for all three planes (Y, U, and V).
- **Formatting:** Once all three planes are fully received, `rpi.cpp` strips any padding based on the specified image strides, aggregates them into a contiguous I420 buffer, and converts it to a standard OpenCV `cv::Mat` (BGR format) for further processing.

The RPi client also has several pre-defined behaviors to handle UDP unreliability.
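The header framing above can be sketched as follows. This is an illustrative reimplementation (the actual `rpi.cpp` may differ), assuming all three 32-bit header fields arrive in network byte order:

```cpp
#include <cstddef>
#include <cstdint>

// Mirrors the constants/struct declared in rpi.hpp.
struct Header {
    uint32_t magic;
    uint32_t total_chunks;
    uint32_t mem_size;
};

constexpr uint32_t EXPECTED_MAGIC = 0x12345678;

// Decode a 32-bit big-endian (network byte order) value without ntohl(),
// so the sketch is host-endianness independent.
static uint32_t readBE32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8) | uint32_t(p[3]);
}

// Returns true and fills `out` if the packet is a valid 12-byte header
// whose magic number matches.
bool parseHeader(const uint8_t* buf, size_t len, Header& out) {
    if (len < 12) return false;
    out.magic = readBE32(buf);
    if (out.magic != EXPECTED_MAGIC) return false;
    out.total_chunks = readBE32(buf + 4);
    out.mem_size = readBE32(buf + 8);
    return true;
}
```

The 4-byte chunk index prefix on each data packet could be decoded with the same `readBE32` helper.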
On top of these additions, I've pruned the deprecated `PiCamera.cpp` from the 2024 comp, which utilized GStreamer.
The main testing solution is `camera_integration_test.cpp`. It allows you to test the client, assuming that the pi-client is active.

The client/server lacks unit testing, so future issues will address implementation of the `udp_server` as a mock pi-server for better testing.
Future Improvements
Feedback
Any feedback is appreciated, but please focus on networking constructs and testing stability.