Synchronizing LiDAR and Camera Data for Offline Processing Using ROS

Synchronized point cloud – image pair

Robots and other complex machines are usually equipped with a wide variety of sensors. Often, when performing tasks such as driving autonomously or collecting data for offline processing, it is useful to synchronize the different sensors. For example, in my work with 3D Machine Learning, I had to collect a dataset consisting of LiDAR point clouds as well as camera frames to serve as the context.

Why is it important to synchronize this data? There are several reasons. One is that different sensors collect data at different rates. For instance, my LiDAR produces point clouds at 10 Hz (10 samples per second) while my camera captures images at 3 Hz, so there are roughly three point clouds for every image. Another reason is how fast your machine is moving. If the sensors were mounted on a car, a point cloud and a context image captured even a fraction of a second apart could disagree significantly at speed: with a 3 Hz camera, the worst-case gap between a point cloud and the nearest image is about 167 ms, during which a car traveling at 30 m/s covers roughly 5 m. The image might show things that the LiDAR could not catch earlier (occluded objects, etc.).

My LiDAR’s ROS Topic with point clouds coming at 10 Hz
My camera’s ROS Topic with images coming at 3 Hz

Synchronizing sensor data has two main advantages. First, it lets us drop unnecessary data from the faster sensors, saving storage space and reducing complexity in the subsequent processing. Second, because all the data is collected at approximately the same time, we can be confident that it represents the same state of the world, as perceived by the sensors, at that moment. In the rest of this article, I will describe how I synchronized data from a LiDAR and a camera using the Robot Operating System (ROS). The same approach works for an arbitrary number and type of sources.

First, what is ROS, and why did I use it? ROS is an open-source robotics middleware suite. Its main goal is to provide services on top of an operating system (Ubuntu, in my case) that make it easier to develop robotics applications. The two main reasons to use it here are that it provides commonly needed functionality (in our case, synchronizing data) and message passing between processes (in my case, the manufacturer-provided SDK nodes for the LiDAR and the camera publish the messages). Collecting and synchronizing the data without a tool like ROS would require considerably more code and complexity. If you are working with robots, ROS is worth learning.

So, my robot’s computer runs a ROS node provided by my LiDAR’s manufacturer that publishes messages of type PointCloud2 at 10 Hz, and another node provided by my camera’s manufacturer that publishes Image messages at 3 Hz. My own ROS node subscribes to both topics (the named channels of ROS’s publish/subscribe messaging mechanism) to receive the data, and uses the message_filters package to synchronize the topics with an ApproximateTime policy synchronizer. This policy is useful when the data from the different sensors does not arrive at exactly the same time but falls within a small enough time delta.

The most important aspects of the code are creating subscribers to the LiDAR and camera data topics, creating a synchronizer with the ApproximateTime policy, and registering a callback for the synchronizer, as shown in the code snippet below.

// Create the subscribers
message_filters::Subscriber<Image> image_sub(nh, "/zed/rgb/image_rect_color", 1);
message_filters::Subscriber<CameraInfo> camera_info_sub(nh, "/zed/rgb/camera_info", 1);
message_filters::Subscriber<PointCloud2> point_cloud_sub(nh, "/rslidar_points", 1);

// Create the synchronizer
typedef sync_policies::ApproximateTime<Image, CameraInfo, PointCloud2> MySyncPolicy;
  
// ApproximateTime takes a queue size as its constructor argument, hence MySyncPolicy(10)
Synchronizer<MySyncPolicy> sync(MySyncPolicy(10), image_sub, camera_info_sub, point_cloud_sub);
sync.registerCallback(boost::bind(&callback, _1, _2, _3));
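By default, ApproximateTime only requires that matched messages be as close together in time as possible. If you also want to reject matches whose timestamps are too far apart, the policy exposes setMaxIntervalDuration; the 100 ms bound below is just an illustrative value, not one from my setup.

// Optional: drop matched sets whose timestamps span more than 100 ms
sync.setMaxIntervalDuration(ros::Duration(0.1));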

The registered callback is triggered whenever LiDAR and camera messages arrive at nearly the same time; all other data is dropped. Multiple things could be done within the callback, but in my case I simply save every synchronized point cloud-image pair for offline processing.

// Synchronizer callback; it will be called when camera and LiDAR data are in sync.
void callback(const ImageConstPtr& image, const CameraInfoConstPtr& camera_info, const PointCloud2ConstPtr& point_cloud)
{
  printf("%s\n", "All data in sync!" );

  // Process data

}

The main reason for collecting such data is that I can, for instance, use the context image to annotate the point cloud for object detection: it is much simpler to tell what is in front of the robot from the context image than from the point cloud. Finally, as described in a previous post, the point cloud can be filtered to keep only the points directly visible to the camera, as shown in the image below (a sketch of that projection step follows the image).

Top: filtered point cloud and context image. Bottom: full point cloud.
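For reference, here is a minimal sketch of such a visibility filter, not the exact code from that post. It assumes the point cloud has already been transformed into the camera’s optical frame (e.g. via tf) and uses the image_geometry package to project each 3D point into the image, keeping only the points that land inside the frame:

#include <image_geometry/pinhole_camera_model.h>

// Keep only the points that project into the camera image.
// Assumes `cloud` is already expressed in the camera's optical frame.
pcl::PointCloud<pcl::PointXYZI> filterVisiblePoints(
    const pcl::PointCloud<pcl::PointXYZI>& cloud,
    const sensor_msgs::CameraInfoConstPtr& camera_info)
{
  image_geometry::PinholeCameraModel model;
  model.fromCameraInfo(camera_info);

  pcl::PointCloud<pcl::PointXYZI> visible;
  for (const auto& point : cloud)
  {
    if (point.z <= 0) continue;  // skip points behind the image plane
    cv::Point2d uv = model.project3dToPixel(cv::Point3d(point.x, point.y, point.z));
    if (uv.x >= 0 && uv.x < camera_info->width &&
        uv.y >= 0 && uv.y < camera_info->height)
      visible.push_back(point);
  }
  return visible;
}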

The full code is presented below. Note that running it requires ROS plus several other libraries (PCL, OpenCV, and cv_bridge, among others); installing them is beyond the scope of this article.

#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>

#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/PointCloud2.h>

#include <ros/ros.h>
#include <ros/package.h>

#include <pcl_ros/point_cloud.h>
#include <pcl/io/pcd_io.h>  // pcl::io::savePCDFileASCII

#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <boost/filesystem.hpp>

#include <iostream>
#include <fstream>
#include <sstream>
#include <iomanip>  // std::setw, std::setfill
#include <ctime>    // std::time_t, localtime, strftime

static const std::string OPENCV_WINDOW = "Image window";

using namespace sensor_msgs;
using namespace message_filters;
using namespace std;


std::string root_output_dir, lidar_output_dir, camera_output_dir;
bool debug;

// Synchronizer callback; it will be called when camera and LiDAR data are in sync.
void callback(const ImageConstPtr& image, const CameraInfoConstPtr& camera_info, const PointCloud2ConstPtr& point_cloud)
{
  printf("%s\n", "All data in sync!" );

  // Convert ROS PointCloud2 to PCL point cloud
  pcl::PointCloud<pcl::PointXYZI> cloud;
  pcl::fromROSMsg(*point_cloud, cloud);

  // Create a date string from the point cloud's timestamp to use in the file name of the saved data
  const int output_size = 100;
  char output[output_size];
  std::time_t raw_time = static_cast<time_t>(point_cloud->header.stamp.sec);
  struct tm* timeinfo = localtime(&raw_time);
  std::strftime(output, output_size, "lidar_%Y_%m_%d_%H_%M_%S", timeinfo);

  // Build a fractional-seconds string from the nanosecond field: zero-pad to
  // 9 digits, then keep the first 4 digits to append to the date string
  std::stringstream ss;
  ss << std::setw(9) << std::setfill('0') << point_cloud->header.stamp.nsec;
  const size_t fractional_second_digits = 4;
  
  // Combine all of the pieces to get the output file name
  std::string output_file = lidar_output_dir + "/" + std::string(output) + "." + ss.str().substr(0, fractional_second_digits)+".pcd";

  // Save the point cloud as a PCD file
  pcl::io::savePCDFileASCII (output_file, cloud);
  printf("%s\n", output_file.c_str() );

  // Convert the ROS image to an OpenCV image
  cv_bridge::CvImagePtr cv_ptr;
  try{
    cv_ptr = cv_bridge::toCvCopy(image, sensor_msgs::image_encodings::BGR8);
  }catch (cv_bridge::Exception& e){
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  // Update GUI Window
  if (debug){
    cv::imshow(OPENCV_WINDOW, cv_ptr->image);
    cv::waitKey(3);
  }

  // Create the filename for the image
  output_file = camera_output_dir + "/" + std::string(output) + "." + ss.str().substr(0, fractional_second_digits)+".jpg";
  
  // Save the image
  cv::imwrite(output_file, cv_ptr->image);
  
}


int main(int argc, char** argv)
{

  // Initialize the ROS node
  ros::init(argc, argv, "lidar_and_cam_synchronizer");
  ros::NodeHandle nh("~");

  // Get the parameters or use default values
  nh.param("root_output_dir", root_output_dir, ros::package::getPath("shl_robot")+"/data/lidar"); 
  nh.param("debug", debug, false);

  // Create the subscribers
  message_filters::Subscriber<Image> image_sub(nh, "/zed/rgb/image_rect_color", 1);
  message_filters::Subscriber<CameraInfo> camera_info_sub(nh, "/zed/rgb/camera_info", 1);
  message_filters::Subscriber<PointCloud2> point_cloud_sub(nh, "/rslidar_points", 1);

  // Create the synchronizer
  typedef sync_policies::ApproximateTime<Image, CameraInfo, PointCloud2> MySyncPolicy;
  
  // ApproximateTime takes a queue size as its constructor argument, hence MySyncPolicy(10)
  Synchronizer<MySyncPolicy> sync(MySyncPolicy(10), image_sub, camera_info_sub, point_cloud_sub);
  sync.registerCallback(boost::bind(&callback, _1, _2, _3));

  // Create the folder for the current run. A new folder will be created each time the node runs
  const int output_size = 100;
  char output[output_size];
  std::time_t raw_time = static_cast<time_t>(ros::Time::now().sec);
  struct tm* timeinfo = localtime(&raw_time);
  std::strftime(output, output_size, "run_%Y_%m_%d_%H_%M_%S", timeinfo);

  // Combine all of the pieces to get the output folders for this run
  lidar_output_dir = root_output_dir + "/" + std::string(output)+"/pcd";
  camera_output_dir = root_output_dir + "/" + std::string(output)+"/jpg";

  boost::filesystem::create_directories(lidar_output_dir);
  boost::filesystem::create_directories(camera_output_dir);

  std::cout << lidar_output_dir << std::endl;

  // Call spin() to let the node run until stopped.
  ros::spin();

  return 0;
}
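Assuming the node is built in a catkin package (the code above looks up one called shl_robot for its default output path), it can be started with rosrun. The executable name below is my assumption based on the node name; the parameters are passed as private ROS parameters:

rosrun shl_robot lidar_and_cam_synchronizer _root_output_dir:=/tmp/sync_data _debug:=true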
