As point feature representations go, surface normals and curvature estimates
are somewhat basic in their representations of the geometry around a specific
point. Though extremely fast and easy to compute, they cannot capture too much
detail, as they approximate the geometry of a point’s k-neighborhood with only
a few values. As a direct consequence, most scenes will contain many points
with the same or very similar feature values, thus reducing their informative
characteristics.
This tutorial introduces a family of 3D feature descriptors coined PFH (Point
Feature Histograms) for simplicity, presents their theoretical advantages and
discusses implementation details from PCL’s perspective. As a prerequisite,
please go ahead and read the Estimating Surface Normals in a PointCloud tutorial first, as PFH
signatures rely on both xyz 3D data as well as surface normals.
The goal of the PFH formulation is to encode a point’s k-neighborhood
geometrical properties by generalizing the mean curvature around the point
using a multi-dimensional histogram of values. This highly dimensional
hyperspace provides an informative signature for the feature representation, is
invariant to the 6D pose of the underlying surface, and copes very well with
different sampling densities or noise levels present in the neighborhood.
A Point Feature Histogram representation is based on the relationships between
the points in the k-neighborhood and their estimated surface normals. Simply
put, it attempts to capture as best as possible the sampled surface variations
by taking into account all the interactions between the directions of the
estimated normals. The resultant hyperspace is thus dependent on the quality of
the surface normal estimations at each point.
The figure below presents an influence region diagram of the PFH computation
for a query point (p_q), marked with red and placed in the middle of a
circle (sphere in 3D) with radius r, and all its k neighbors (points
with distances smaller than the radius r) are fully interconnected in a
mesh. The final PFH descriptor is computed as a histogram of relationships
between all pairs of points in the neighborhood, and thus has a computational
complexity of O(k²).
To compute the relative difference between two points p_s and p_t and their
associated normals n_s and n_t, we define a fixed coordinate frame at one of
the points (see the figure below):

u = n_s
v = u × (p_t − p_s) / ‖p_t − p_s‖₂
w = u × v

Using the above uvw frame, the difference between the two normals n_s and n_t
can be expressed as a set of angular features as follows:

α = v · n_t
φ = u · (p_t − p_s) / d
θ = arctan(w · n_t, u · n_t)
where d is the Euclidean distance between the two points p_s and p_t,
d = ‖p_t − p_s‖₂. The quadruplet ⟨α, φ, θ, d⟩ is computed for each pair of
points in the k-neighborhood, therefore reducing the 12 values (xyz and normal
information) of the two points and their normals to 4.
To estimate a PFH quadruplet for a pair of points, use:
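The call below is a sketch based on the pcl::computePairFeatures helper from
pcl_features; the header location (pcl/features/pfh_tools.h in recent PCL
releases) and the literal point and normal values are illustrative
assumptions. f1, f2 and f3 receive the three angular features and f4 the
distance d.

#include <pcl/features/pfh_tools.h>  // assumed location of pcl::computePairFeatures
#include <Eigen/Core>

// Illustrative inputs: two points and their unit normals, passed as
// homogeneous Eigen vectors (the 4th component is unused here and set to 0).
Eigen::Vector4f p1 (0.0f, 0.0f, 0.0f, 0.0f), n1 (0.0f, 0.0f, 1.0f, 0.0f);
Eigen::Vector4f p2 (0.1f, 0.0f, 0.0f, 0.0f), n2 (0.0f, 1.0f, 0.0f, 0.0f);

float f1, f2, f3, f4;
// The function returns a bool; false signals a degenerate pair (e.g., coincident points).
bool ok = pcl::computePairFeatures (p1, n1, p2, n2, f1, f2, f3, f4);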
To create the final PFH representation for the query point, the set of all
quadruplets is binned into a histogram. The binning process divides each
feature’s value range into b subdivisions, and counts the number of
occurrences in each subinterval. Since three out of the four features presented
above are measures of the angles between normals, their values can easily be
normalized to the same interval on the trigonometric circle. A binning example
is to divide each feature interval into the same number of equal parts, and
therefore create a histogram with b⁴ bins in a fully correlated space.
In this space, a histogram bin increment corresponds to a point having certain
values for all its 4 features. The figure below presents examples of Point
Feature Histograms representations for different points in a cloud.
In some cases, the fourth feature, d, is of limited significance for 2.5D
datasets, usually acquired in robotics, as the distance between neighboring
points increases with the distance from the viewpoint. Omitting d for scans
where the local point density influences this feature dimension has therefore
proved to be beneficial.
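As an illustration of the fully correlated binning scheme, the sketch below
maps three binned feature values to a single histogram index. The
normalization to [0, 1) and the bin boundaries are assumptions made for the
illustration, not necessarily PCL's exact implementation; with b = 5 this
yields the 5³ = 125 bins used by the default PCL implementation described
below.

#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative sketch: bin three features (each assumed normalized to [0, 1))
// into a fully correlated histogram with b subdivisions per feature,
// i.e. b * b * b bins in total.
std::vector<float> binTriplet (float f1, float f2, float f3, int b)
{
  std::vector<float> histogram (b * b * b, 0.0f);
  const float features[3] = {f1, f2, f3};

  int index = 0;
  int step = 1;
  for (int d = 0; d < 3; ++d)
  {
    // clamp so that a feature value of exactly 1.0 still falls into the last bin
    const int bin = std::min (static_cast<int> (std::floor (features[d] * b)), b - 1);
    index += step * bin;
    step *= b;
  }
  histogram[index] += 1.0f;  // one increment per point pair
  return histogram;
}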
Note
For more information and mathematical derivations, including an analysis of PFH signatures for different surface geometries, please see [RusuDissertation].
Point Feature Histograms are implemented in PCL as part of the pcl_features library.
The default PFH implementation uses 5 binning subdivisions (i.e., each of the
three angular features will use this many bins from its value interval) and
does not include the distances (as explained above, although the
computePairFeatures method can be called by the user to obtain the distances
too, if desired). This results in a 125-element array (5³) of float values,
stored in a pcl::PFHSignature125 point type.
The following code snippet will estimate a set of PFH features for all the
points in the input dataset.
#include <pcl/point_types.h>
#include <pcl/features/pfh.h>

{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal> ());

  ... read, pass in or create a point cloud with normals ...
  ... (note: you can create a single PointCloud<PointNormal> if you want) ...

  // Create the PFH estimation class, and pass the input dataset+normals to it
  pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
  pfh.setInputCloud (cloud);
  pfh.setInputNormals (normals);
  // alternatively, if cloud is of type PointNormal, do pfh.setInputNormals (cloud);

  // Create an empty kdtree representation, and pass it to the PFH estimation object.
  // Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
  //pcl::KdTreeFLANN<pcl::PointXYZ>::Ptr tree (new pcl::KdTreeFLANN<pcl::PointXYZ> ()); -- older call for PCL 1.5-
  pfh.setSearchMethod (tree);

  // Output datasets
  pcl::PointCloud<pcl::PFHSignature125>::Ptr pfhs (new pcl::PointCloud<pcl::PFHSignature125> ());

  // Use all neighbors in a sphere of radius 5cm
  // IMPORTANT: the radius used here has to be larger than the radius used to estimate the surface normals!!!
  pfh.setRadiusSearch (0.05);

  // Compute the features
  pfh.compute (*pfhs);

  // pfhs->size () should have the same size as the input cloud->size ()
}
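As a brief usage sketch that would sit just after the pfh.compute call inside
the snippet above: each pcl::PFHSignature125 stores its values in a
float histogram[125] member that can be read out directly.

// Sketch: read out the descriptor of each point in the output cloud.
for (std::size_t i = 0; i < pfhs->size (); ++i)
{
  const pcl::PFHSignature125 &descriptor = (*pfhs)[i];
  float sum = 0.0f;
  for (int j = 0; j < 125; ++j)
    sum += descriptor.histogram[j];
  // sum now holds the total of all bin values of the descriptor for point i
}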
The actual compute call from the PFHEstimation class does nothing internally but the following (shown in outline):
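for each point p in cloud P
  1. get the nearest neighbors of p
  2. for each pair of neighbors, compute the three angular values
  3. bin all the results in an output histogram

To compute a single PFH representation from a k-neighborhood, PFHEstimation
exposes the computePointPFHSignature method; the declaration below is a
sketch, and the exact parameter types (e.g., pcl::Indices versus
std::vector<int> for the indices) vary between PCL versions:

computePointPFHSignature (const pcl::PointCloud<PointInT> &cloud,
                          const pcl::PointCloud<PointNT> &normals,
                          const pcl::Indices &indices,
                          int nr_split,
                          Eigen::VectorXf &pfh_histogram);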
Where cloud is the input point cloud that contains the points, normals is
the input point cloud that contains the normals (could be equal to cloud if
PointInT=PointNT=PointNormal), indices represents the set of k-nearest
neighbors from cloud, nr_split is the number of subdivisions to use for the
binning process for each feature interval, and pfh_histogram is the output
resultant histogram as an array of float values.
Note
For efficiency reasons, the compute method in PFHEstimation does not check if the normals contain NaN or infinite values.
Passing such values to compute() will result in undefined output.
It is advisable to check the normals, at least during the design of the processing chain or when setting the parameters.
This can be done by inserting the following code before the call to compute():
for (int i = 0; i < normals->size (); i++)
{
  if (!pcl::isFinite<pcl::Normal> ((*normals)[i]))
  {
    PCL_WARN ("normals[%d] is not finite\n", i);
  }
}
In production code, preprocessing steps and parameters should be set so that normals are finite or raise an error.