Add direct navigation commands (move, turn, strafe, stop, stand/sit)#41
Add 5 new action types (MoveRelative, TurnRelative, Strafe, Stop, StandSit) that bypass the PDDL planning pipeline for direct robot control via chat. Add pause/resume mechanism on ~/pause topic for non-LLM emergency stopping. Include unit tests and on-robot verification script.
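The five new action types might look roughly like the following. This is a hedged sketch, not the actual `robot_executor_interface` definitions; the real field names and units may differ.

```python
from dataclasses import dataclass

# Hypothetical sketch of the five new action dataclasses; the actual
# definitions in robot_executor_interface may use different field names.

@dataclass
class MoveRelative:
    dx_m: float        # forward (+) / backward (-) distance, body frame
    dy_m: float = 0.0  # optional lateral component

@dataclass
class TurnRelative:
    angle_rad: float   # counterclockwise (+) rotation about the body z-axis

@dataclass
class Strafe:
    dy_m: float        # left (+) / right (-) lateral distance

@dataclass
class Stop:
    pass               # halt current motion and hold pose

@dataclass
class StandSit:
    stand: bool        # True -> stand, False -> sit
```

Because these are plain dataclasses, they get field-wise `==` for free, which matters for the serialization round-trip discussion later in this review.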
```python
@to_viz_msg.register
def _(action: MoveRelative, marker_ns):
```
We should definitely add visualization conversion functions here. I know it's a little tricky to figure out where to actually call them from, but I think having that capability is pretty important for understanding what the robot's trying to do. I think we would want to add an extra lightweight node that just listens on the appropriate topic, and publishes the markers associated with these new actions.
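A ROS-free sketch of what such a conversion function could look like, using `functools.singledispatch`. The `Marker` and `MoveRelative` classes below are simplified stand-ins for `visualization_msgs.msg.Marker` and the real action type; the marker geometry chosen here is illustrative only.

```python
from dataclasses import dataclass
from functools import singledispatch

@dataclass
class MoveRelative:  # stand-in for the real action dataclass
    dx_m: float
    dy_m: float = 0.0

@dataclass
class Marker:  # stand-in for visualization_msgs.msg.Marker
    ns: str = ""
    type: str = ""
    scale: tuple = (0.0, 0.0, 0.0)

@singledispatch
def to_viz_msg(action, marker_ns):
    # Fall-through for action types without a registered visualization.
    raise NotImplementedError(f"no visualization for {type(action).__name__}")

@to_viz_msg.register
def _(action: MoveRelative, marker_ns):
    # One arrow marker whose length encodes the commanded displacement.
    length = (action.dx_m ** 2 + action.dy_m ** 2) ** 0.5
    return [Marker(ns=marker_ns, type="arrow", scale=(length, 0.05, 0.05))]
```

The lightweight relay node suggested above would then subscribe to the action topic, call `to_viz_msg` per action, and publish the resulting `MarkerArray`.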
```python
        timer_period_s, self.hb_callback, callback_group=heartbeat_timer_group
    )

    def pause_callback(self, msg):
```
@y-veys Can you confirm that the pause callback implementation is compatible with the interrupt functionality you were working on previously?
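For reference, one common shape for a pause/resume gate that holds the robot without cutting power is a `threading.Event` checked between actions. This is a ROS-free sketch, not the actual implementation under review; in the real node, `pause_callback` would set or clear the gate from messages on the `~/pause` topic.

```python
import threading

class PauseGate:
    """Sketch of a pause/resume gate an executor loop could consult
    between actions. Hypothetical; names do not come from the PR."""

    def __init__(self):
        self._resumed = threading.Event()
        self._resumed.set()  # start unpaused

    def pause(self):
        # Robot holds its current pose; no new actions are dispatched.
        self._resumed.clear()

    def resume(self):
        self._resumed.set()

    @property
    def paused(self):
        return not self._resumed.is_set()

    def wait_until_resumed(self, timeout=None):
        # Block the executor loop while paused; returns False on timeout.
        return self._resumed.wait(timeout)
```

Whether this composes cleanly with the existing interrupt mechanism is exactly the question raised above.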
```python
# --- Action Dataclass Tests ---

class TestActionDataclasses:
```
I don't think these tests are useful, and I think they should be removed -- they currently only test whether Python's dataclasses are implemented correctly.
```python
# --- Executor Dispatch Tests ---

class TestExecutorDispatch:
```
I also do not think these tests are useful -- they break the abstraction enabled by the executor. It would be useful to have a "closed loop" test that checks whether Spot actually ends up at the desired location (which we can more or less test, because the fake Spot has a step function), but not an assertion about which function gets called.
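A closed-loop test in the spirit of this comment might look like the following. The `FakeSpot` here is a minimal stand-in for the repo's fake-Spot simulator, with a guessed `step` signature; only the shape of the test is the point.

```python
import math

class FakeSpot:
    """Minimal stand-in for the fake-Spot simulator mentioned above."""

    def __init__(self):
        self.x = self.y = self.yaw = 0.0

    def step(self, dx, dy, dyaw):
        # Integrate a body-frame relative command into the world frame.
        self.x += dx * math.cos(self.yaw) - dy * math.sin(self.yaw)
        self.y += dx * math.sin(self.yaw) + dy * math.cos(self.yaw)
        self.yaw += dyaw

def test_move_then_turn_then_move():
    # Checks where the robot ends up, not which methods were called.
    spot = FakeSpot()
    spot.step(1.0, 0.0, 0.0)           # MoveRelative: 1 m forward
    spot.step(0.0, 0.0, math.pi / 2)   # TurnRelative: 90 deg left
    spot.step(1.0, 0.0, 0.0)           # MoveRelative: 1 m forward again
    assert math.isclose(spot.x, 1.0, abs_tol=1e-9)
    assert math.isclose(spot.y, 1.0, abs_tol=1e-9)
    assert math.isclose(spot.yaw, math.pi / 2, abs_tol=1e-9)
```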
```python
# --- ROS Message Serialization Tests ---

class TestMessageSerialization:
```
These tests should live in robot_executor_interface_ros (not here). Also, I would prefer to implement an equality operator (either as an actual override for ==, or as a multiple-dispatched function) to check equality, rather than relying on the asserts here. That makes things much more straightforward to extend in the future.
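One way to read this suggestion: since the actions are dataclasses, a single field-wise equality check can replace per-field asserts in the round-trip tests. The converter functions below are hypothetical stand-ins for the real `robot_executor_interface_ros` serializers.

```python
from dataclasses import dataclass

@dataclass
class MoveRelative:  # stand-in; dataclasses generate field-wise __eq__
    dx_m: float
    dy_m: float = 0.0

# Hypothetical round-trip helpers standing in for the real ROS converters.
def to_msg(action):
    return {"type": type(action).__name__, "dx": action.dx_m, "dy": action.dy_m}

def from_msg(msg):
    return MoveRelative(dx_m=msg["dx"], dy_m=msg["dy"])

def assert_round_trip(action):
    # One equality check replaces a pile of per-field asserts, and any
    # field added to the dataclass later is covered automatically.
    assert from_msg(to_msg(action)) == action
```

A multiple-dispatched equality function (e.g. via `functools.singledispatch`) would work the same way if comparing across types is needed.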
Seems like the discussion about implementing equality was dropped during the move of the tests
If we are going to have tests, we should run them in CI. It should be reasonably straightforward: we just need to uncomment this line and update the path. We might also need to change the line before that one to install robot_executor in addition to spot_tools.

- Add `@to_viz_msg` functions for MoveRelative (cyan arrow), TurnRelative (orange cylinder), Strafe (yellow arrow), Stop (red cube), StandSit (green/blue sphere)
- Remove TestActionDataclasses (tests Python, not our code)
- Remove TestExecutorDispatch (mock-based, breaks abstraction)
- Move TestMessageSerialization to robot_executor_interface_ros/tests
- Enable pytest in CI with proper dependencies
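The CI change being discussed might look something like this hypothetical GitHub Actions fragment; the real workflow file, install mechanism, and paths in this repo may differ.

```yaml
# Hypothetical CI fragment; adjust package names and test paths to the repo.
- name: Install packages under test
  run: pip install ./spot_tools ./robot_executor

- name: Run unit tests
  run: python -m pytest spot_tools/tests/ robot_executor_interface_ros/tests/ -v
```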
Work in progress: addressing review feedback from GoldenZephyr.
GoldenZephyr left a comment:
see also comment about equality checking in tests
```python
def _(action: MoveRelative, marker_ns):
    return []
    m = Marker()
    m.header.frame_id = "body"
```
I don't think this is the right way to visualize the relative commands. I think we want to be able to visualize a series of these commands in a way that represents the overall robot motion.
Representing all of the commands with respect to the current body frame does not match the semantics of what is being commanded of the robot (not to mention that hard-coding the frame here is very bad...). I think the most reasonable way of visualizing the trajectory is to transform the relative commands into the same fixed frame as the other actions, based on what the state of the robot should be when it executes each relative command. This will require a little extra logic in the function that visualizes the sequence of actions, since we have to roll the state forward from the previous actions. You could use an instance of SpotExecutor with the fake Spot to reuse the existing interpretation of these commands, although it might be slightly more straightforward to add the logic directly here.
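The roll-forward logic described above amounts to composing SE(2) poses: each relative command is applied in the body frame of the pose produced by the commands before it. A minimal sketch, assuming a 2D `(x, y, yaw)` state (names here are illustrative, not from the repo):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    yaw: float

def roll_forward(pose, dx, dy, dyaw):
    """Compose a body-frame relative command onto a fixed-frame pose."""
    return Pose2D(
        x=pose.x + dx * math.cos(pose.yaw) - dy * math.sin(pose.yaw),
        y=pose.y + dx * math.sin(pose.yaw) + dy * math.cos(pose.yaw),
        yaw=pose.yaw + dyaw,
    )

def marker_anchor_poses(start, relative_commands):
    """Fixed-frame pose at which each relative command would execute --
    i.e. where its visualization marker should be anchored."""
    poses, pose = [], start
    for dx, dy, dyaw in relative_commands:
        poses.append(pose)
        pose = roll_forward(pose, dx, dy, dyaw)
    return poses
```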
Summary

- Add `MoveRelative`, `TurnRelative`, `Strafe`, `Stop`, and `StandSit` action dataclasses to `robot_executor_interface`
- Execute them in `SpotExecutor` using `navigate_to_relative_pose` / `navigate_to_absolute_pose`
- Add a `~/pause` ROS topic to hold position and block new sequences without cutting power (robot stays standing)
- Add unit tests (`tests/test_direct_navigation.py`) and an on-robot verification script (`examples/test_direct_navigation_on_robot.py`)
- Add `spot_tools_ros/scripts/nav_test.sh` for quick interactive testing via `source nav_test.sh` + one-liner commands
- Extend `ActionMsg.msg` with new type constants and fields (`scalar_value`, `stand_sit_action`)
- Update `action_descriptions_ros.py`

Test plan
- `python -m pytest spot_tools/tests/test_direct_navigation.py -v` passes
- `source nav_test.sh`, then verify `forward`, `backward`, `turn_left`, `turn_right`, `strafe_left`, `strafe_right`, `sit`, `stand`, `stop`
- Verify `pause` halts motion and holds pose; `resume` accepts new commands