This repository contains the ROS2 implementation of the computer vision algorithm I designed to locate doors and windows in indoor LiDAR point clouds in real time. The feature is part of a broader project aimed at developing robots that can assist firefighters in indoor fire rescue operations, where cameras are ineffective due to smoke. The project was a collaboration between Saxion University of Applied Sciences, the University of Twente and four Dutch fire departments.
Quadruped and mobile robots used in the project.
Wait for the demo to load.
This README explains how to use the ROS2 package made to locate doors and openings in a Velodyne LiDAR point cloud. The implementation uses classical computer vision algorithms; however, work done with a deep learning model (PointNet++) is also provided. To learn how to train and test with it, refer to PointNet2/README.md.
- OS: Ubuntu 22.04 LTS (Used WSL2 but dual-boot should work as well)
- Python 3.12.4
- ROS2-humble
- Open3D 0.18.0
Clone this repository.
```shell
git clone https://github.com/SigmaAcehole/LiDAR_Semantic_Segmentation.git
```

Make sure to have ROS2-humble installed. Instructions for installation can be found here.
[Optional] To avoid dependency issues, it is always a good idea to set up a virtual environment where the required Python packages with the correct versions are installed. This will prevent conflicts with other packages. Follow the steps below to set up the virtual environment using Miniconda.
- Install Miniconda.
```shell
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
```

If there is any trouble installing Miniconda, refer to the instructions here.
- Refresh the terminal.
```shell
source ~/miniconda3/bin/activate
```

- Initialize conda on all terminals.
```shell
conda init --all
```

- Create a conda environment with Python 3.12.4.
```shell
conda create -n ros2 python=3.12.4
```

- Activate the environment.
```shell
conda activate ros2
```

- Install Open3D.
```shell
conda install conda-forge::open3d
```

- Install Open3D with pip if you are not using a virtual environment as mentioned above. Skip this step if you followed the steps for the virtual environment.
```shell
pip install open3d
```

- Build the package from the workspace.
```shell
cd ros2_ws
colcon build
```

- Open another terminal at this directory and source the overlay.
```shell
source install/setup.bash
```

- The ROS2 node can be run in two ways.
To run it with real-time visualization of the bounding boxes while publishing:

```shell
ros2 run lidar_seg cluster --ros-args -p visual:=1
```

To publish the bounding boxes without visualizing them:
```shell
ros2 run lidar_seg cluster
```

- If the Velodyne data is stored in a rosbag, play the rosbag.
```shell
ros2 bag play [bag_directory/bag_name.db3]
```

- [OPTIONAL] The `cluster` node publishes the coordinates of the 8 corner points of each bounding box as a PointCloud2 message. They are published on the topics `/door` and `/opening` for detected doors and openings respectively. If no bounding box is detected, a single point with coordinates [0, 0, 0] is published by default. Each bounding box is represented by an array of shape 8x3, so if two boxes are detected the shape of the published point cloud will be 16x3. A node `test` is provided that subscribes to these bounding boxes and prints the number of doors and openings detected. This node can be used as a reference for consuming the results of the `cluster` node.
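The corner-array convention can be sketched as a small helper in plain Python (the function name `count_boxes` is hypothetical, not part of the package): a single [0, 0, 0] point means no detection, otherwise every 8 rows form one box.

```python
import numpy as np

def count_boxes(corners: np.ndarray) -> int:
    """Count bounding boxes in the N x 3 corner array published by the
    cluster node. Hypothetical helper, not part of the package."""
    corners = np.asarray(corners, dtype=float).reshape(-1, 3)
    # A single all-zero point is the "nothing detected" sentinel.
    if corners.shape[0] == 1 and np.allclose(corners[0], 0.0):
        return 0
    # Otherwise each box contributes 8 corner points.
    return corners.shape[0] // 8

print(count_boxes(np.zeros((1, 3))))   # no detection -> 0
print(count_boxes(np.ones((16, 3))))   # two boxes (16 x 3 corners) -> 2
```

The same arithmetic is what the provided `test` node relies on when it prints the number of detected doors and openings.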
On a new terminal, source and run the `test` node.
```shell
source install/setup.bash
ros2 run lidar_seg test
```

A node `preproc` is provided which is not used in the traditional method. It subscribes to `/velodyne_points` published by the Velodyne LiDAR or a rosbag. It performs an intensity-based color transformation to add RGB detail, then uses a moving-window ICP method to make the point cloud denser. It publishes a point cloud of shape n x 7, where n is the number of points and the 7 columns are RGB + XYZ + intensity. This point cloud is published on the topic `/xyz_rgb`. This can be useful when running inference on the point cloud with the deep learning model.
- To use, source a new terminal and run the node.
```shell
source install/setup.bash
ros2 run lidar_seg preproc
```

- Run the rosbag.
- Results can be visualized using RViz. The frame id of the published point cloud is `rgb`.
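As a rough illustration of the n x 7 layout the `preproc` node publishes, here is a minimal intensity-to-grayscale colorization sketch in NumPy. It is an assumption for illustration only: the actual node's color mapping may differ, and the moving-window ICP densification step is omitted entirely.

```python
import numpy as np

def colorize_by_intensity(points_xyzi: np.ndarray) -> np.ndarray:
    """Build an n x 7 array (RGB + XYZ + intensity) from raw n x 4
    LiDAR points (XYZ + intensity). Illustrative sketch: maps the
    normalised intensity to a gray RGB triple."""
    xyz = points_xyzi[:, :3]
    intensity = points_xyzi[:, 3]
    # Normalise intensity to [0, 1], guarding against a constant column.
    span = intensity.max() - intensity.min()
    norm = (intensity - intensity.min()) / span if span > 0 else np.zeros_like(intensity)
    rgb = np.repeat(norm[:, None], 3, axis=1)  # gray: R = G = B
    return np.hstack([rgb, xyz, intensity[:, None]])  # shape (n, 7)

pts = np.array([[0.0, 0.0, 1.0, 10.0],
                [1.0, 0.0, 1.0, 60.0]])
print(colorize_by_intensity(pts).shape)  # (2, 7)
```

A downstream consumer (e.g. the deep learning model) would split the columns back into colors, coordinates and intensity in the same order.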

