Locating doors and windows from LiDAR point clouds

This repository contains the ROS2 implementation of the computer vision algorithm I designed to locate doors and windows in indoor LiDAR point clouds in real time. This feature is part of a broader project aimed at developing robots that can assist firefighters in indoor fire rescue operations, where cameras are ineffective due to smoke. The project was a collaboration between Saxion University of Applied Sciences, the University of Twente, and four Dutch fire departments.


Quadruped and mobile robots used in the project.


This README explains how to use the ROS2 package made to locate doors and openings in a Velodyne LiDAR point cloud. The implementation uses classical computer vision algorithms. However, work done with a deep learning model (PointNet++) is also provided; to learn how to train and test it, refer to PointNet2/README.md.

Tested Environment

  1. OS: Ubuntu 22.04 LTS (tested under WSL2, but a dual-boot installation should work as well)
  2. Python 3.12.4
  3. ROS2-humble
  4. Open3D 0.18.0

Setup the environment

Clone this repository.

git clone https://github.com/SigmaAcehole/LiDAR_Semantic_Segmentation.git

Make sure you have ROS2-humble installed. Installation instructions can be found in the official ROS2 documentation.

[Optional] To avoid dependency issues, it's always a good idea to set up a virtual environment where the required Python packages are installed with the correct versions. This prevents conflicts with other packages. Follow these steps to set up the virtual environment using Miniconda.

  1. Install Miniconda.
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh

If there is any trouble installing Miniconda, refer to the official installation instructions.

  2. Refresh the terminal.
source ~/miniconda3/bin/activate
  3. Initialize conda on all terminals.
conda init --all
  4. Create a conda environment with Python 3.12.4.
conda create -n ros2 python=3.12.4
  5. Activate the environment.
conda activate ros2
  6. Install Open3D.
conda install conda-forge::open3d

How to run it

  1. Install Open3D if you are not using a virtual environment. Skip this step if you followed the virtual environment setup above.
pip install open3d
  2. Build the package from the workspace.
cd ros2_ws
colcon build
  3. Open another terminal at this directory and source the overlay.
source install/setup.bash
  4. The ROS2 node can be run in two ways.
    To run it with real-time visualization of the bounding boxes while publishing:
ros2 run lidar_seg cluster --ros-args -p visual:=1

To run it so it only publishes the bounding boxes, without visualization:

ros2 run lidar_seg cluster
  5. If the Velodyne data is stored in a rosbag, play the rosbag.
ros2 bag play [bag_directory/bag_name.db3]
  6. [OPTIONAL] The cluster node publishes the coordinates of the 8 corner points of each bounding box as a PointCloud2 message. They are published on the topics /door and /opening for detected doors and openings, respectively. If no bounding box is detected, a single point with coordinates [0,0,0] is published by default. Each bounding box is represented by an array of shape 8x3, so if two boxes are detected the shape of the published point cloud will be 16x3. A node test is provided that subscribes to these bounding boxes and prints the number of doors and openings detected. This node can be used as a reference for consuming the results of the cluster node in the future.
    On a new terminal, source the overlay and run the test node.
source install/setup.bash
ros2 run lidar_seg test
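To make the bounding-box message layout concrete, the snippet below is a minimal sketch of how a subscriber could split the flat corner array back into individual boxes, assuming only the layout described above (8 corners per box, a single [0,0,0] sentinel when nothing is detected). The helper name parse_boxes is hypothetical and not part of the package.

```python
import numpy as np

def parse_boxes(points):
    """Split a flat (N, 3) corner array into a list of (8, 3) boxes.

    Assumes the layout described in the README: the cluster node
    publishes 8 corner points per detected box, and a single
    [0, 0, 0] point when no box is detected.
    """
    points = np.asarray(points, dtype=float)
    # A single point at the origin is the "no detection" sentinel.
    if points.shape == (1, 3) and np.allclose(points, 0.0):
        return []
    if points.shape[0] % 8 != 0:
        raise ValueError("corner count must be a multiple of 8")
    # Slice consecutive groups of 8 corners into separate boxes.
    return [points[i:i + 8] for i in range(0, points.shape[0], 8)]
```

In a real node, the (N, 3) array would come from reading the /door or /opening PointCloud2 message (e.g. with sensor_msgs_py.point_cloud2.read_points); a 16x3 array then yields two boxes.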

Additional information and another useful node

A node preproc is provided which is not used in the classical method. It subscribes to /velodyne_points published by the Velodyne LiDAR or a rosbag. It then performs an intensity-based color transformation to add RGB channels, and uses a moving-window ICP method to make the point cloud denser. It publishes a point cloud of shape n x 7, where n is the number of points and the 7 columns are RGB + XYZ + intensity. This point cloud is published on the topic /xyz_rgb. This can be useful when running inference on the point cloud with the deep learning model.

  1. To use it, source the overlay in a new terminal and run the node.
source install/setup.bash
ros2 run lidar_seg preproc
  2. Run the rosbag.
  3. Results can be visualized using RViz. The frame id of the published point cloud is rgb.
