The Point Cloud Library (PCL) is an open-source, BSD-licensed framework for 2D/3D image and point cloud processing. Originally developed at Willow Garage (the robotics research lab that also gave rise to ROS), PCL has become the standard toolkit for 3D perception in both robotics and industrial machine vision. It provides algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting, and segmentation — all built around the pcl::PointCloud<T> container template.
PCL's modular architecture divides functionality into libraries: pcl_common (data structures and utilities), pcl_filters (voxel grids, statistical outlier removal, passthrough filters), pcl_features (FPFH, SHOT, VFH descriptors), pcl_registration (ICP, NDT, and feature-based alignment), pcl_segmentation (Euclidean cluster extraction, RANSAC plane/cylinder fitting), and pcl_visualization (PCLVisualizer, built on VTK). For industrial inspection, the most frequently used modules are filters, segmentation, and registration.
Point cloud registration — aligning two or more point clouds into a common coordinate frame — is foundational for 3D inspection applications. PCL's Iterative Closest Point (ICP) implementation (pcl::IterativeClosestPoint) works well for fine alignment when the initial pose estimate is within a few centimeters. For coarser initial alignment, Fast Point Feature Histograms (FPFH) combined with Sample Consensus Initial Alignment (SAC-IA) provide a robust feature-based registration approach that tolerates larger initial pose errors.
Euclidean cluster extraction is PCL's standard non-learning approach to scene segmentation. By first fitting a RANSAC plane model to remove the dominant supporting surface (conveyor belt, workpiece fixture) and then running Euclidean clustering on the residual points with an appropriate distance threshold, you can reliably isolate individual objects in a cluttered industrial scene. The key parameters are cluster tolerance (typically 1–5 mm for parts on a conveyor), minimum cluster size, and maximum cluster size, all tuned empirically for each application.
Integrating PCL with depth cameras requires understanding each camera's point cloud format and coordinate system. Intel RealSense depth cameras output organized point clouds (where the 2D pixel grid structure is preserved), which PCL can process more efficiently than unorganized clouds for operations like normal estimation. Structured-light scanners typically output unorganized clouds with higher accuracy but lower density. Converting between camera SDK formats and pcl::PointCloud<pcl::PointXYZRGB> is straightforward but requires careful handling of invalid depth pixels (represented as NaN in PCL) to avoid corrupting downstream algorithms.