libcvd
Functions and classes to support common computer vision concepts and operations. More...
Modules | |
Efficient Second Order Minimization (ESM) | |
This module implements the ESM template tracking algorithm for homography-based image transformations as described by Benhimane & Malis, "Real-time image-based tracking of planes using efficient second-order minimization", 2004. | |
Namespaces | |
CVD::Camera | |
Classes which represent camera calibrations. | |
CVD::Morphology | |
Image morphology operations. | |
Classes | |
class | CVD::Camera::Linear |
A linear camera with zero skew. More... | |
class | CVD::Camera::Cubic |
A camera with zero skew and cubic distortion. More... | |
class | CVD::Camera::Quintic |
A camera with zero skew and quintic distortion. More... | |
class | CVD::Camera::Harris |
A Camera with zero skew and Harris distortion. More... | |
class | CVD::Camera::ArcTan |
A Camera for lenses which attempt to maintain a constant view angle per pixel. More... | |
class | CVD::Camera::OldCameraAdapter< C > |
An adapter to make old-style cameras which remember the last projected point. More... | |
struct | CVD::Harris::HarrisScore |
Compute the corner score according to Harris. More... | |
struct | CVD::Harris::ShiTomasiScore |
Compute the score according to Shi-Tomasi. More... | |
struct | CVD::Harris::PosInserter |
Used to save corner positions from harrislike_corner_detect. More... | |
struct | CVD::Harris::PairInserter |
Used to save corner positions and scores from harrislike_corner_detect. More... | |
struct | CVD::Morphology::BasicGray< T, Cmp > |
A helper class for performing basic grayscale morphology on an image. More... | |
class | CVD::Morphology::Erode< T > |
Class for performing greyscale erosion. More... | |
class | CVD::Morphology::Dilate< T > |
Class for performing greyscale dilation. More... | |
class | CVD::Morphology::Percentile< T > |
Class for performing percentile filtering. More... | |
class | CVD::Morphology::Median< T > |
Class for performing median filtering. More... | |
struct | CVD::Morphology::BasicGrayByte |
A helper class for performing basic grayscale morphology on an image of bytes. More... | |
class | CVD::Morphology::Erode< byte > |
Class for performing greyscale erosion of bytes. More... | |
class | CVD::Morphology::Dilate< byte > |
Class for performing greyscale dilation of bytes. More... | |
class | CVD::Morphology::Percentile< byte > |
Class for performing percentile filtering of bytes. More... | |
class | CVD::Morphology::Median< byte > |
Class for performing median filtering of bytes. More... | |
struct | CVD::Morphology::BasicBinary< T > |
Class for performing binary morphology. More... | |
struct | CVD::Morphology::BinaryErode< T > |
Class for performing binary erosion. More... | |
struct | CVD::Morphology::BinaryDilate< T > |
Class for performing binary dilation. More... | |
struct | CVD::Morphology::BinaryMedian< T > |
Class for performing binary median filtering. More... | |
struct | CVD::multiplyBy< T > |
A functor that multiplies pixels by a constant value. More... | |
Functions | |
void | CVD::connected_components (const std::vector< ImageRef > &v, std::vector< std::vector< ImageRef >> &r) |
Find the connected components of the input, using 4-way floodfill. More... | |
template<class T > | |
T | CVD::gaussianKernel (std::vector< T > &k, T maxval, double stddev) |
creates a Gaussian kernel with given maximum value and standard deviation. More... | |
template<class S , class T > | |
T | CVD::scaleKernel (const std::vector< S > &k, std::vector< T > &scaled, T maxval) |
scales a GaussianKernel to a different maximum value. More... | |
template<class T > | |
void | CVD::convolveWithBox (const BasicImage< T > &I, BasicImage< T > &J, ImageRef hwin) |
convolves an image with a box of given size. More... | |
template<class T , class Q > | |
void | CVD::euclidean_distance_transform_sq (const BasicImage< T > &in, BasicImage< Q > &out) |
Compute squared Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm. More... | |
template<class T , class Q > | |
void | CVD::euclidean_distance_transform_sq (const BasicImage< T > &in, BasicImage< Q > &out, BasicImage< ImageRef > &lookup_DT) |
Compute squared Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm. More... | |
template<class T , class Q > | |
void | CVD::euclidean_distance_transform (const BasicImage< T > &in, BasicImage< Q > &out) |
Compute Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm. More... | |
template<class T , class Q > | |
void | CVD::euclidean_distance_transform (const BasicImage< T > &in, BasicImage< Q > &out, BasicImage< ImageRef > &lookup_DT) |
Compute Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm. More... | |
void | CVD::fast_nonmax (const BasicImage< byte > &im, const std::vector< ImageRef > &corners, int barrier, std::vector< ImageRef > &max_corners) |
Perform non-maximal suppression on a set of FAST features. More... | |
void | CVD::fast_nonmax_with_scores (const BasicImage< byte > &im, const std::vector< ImageRef > &corners, int barrier, std::vector< std::pair< ImageRef, int >> &max_corners) |
Perform non-maximal suppression on a set of FAST features, also returning the score for each remaining corner. More... | |
void | CVD::fast_corner_detect_7 (const BasicImage< byte > &im, std::vector< ImageRef > &corners, int barrier) |
Perform tree based 7 point FAST feature detection. More... | |
void | CVD::fast_corner_score_7 (const BasicImage< byte > &i, const std::vector< ImageRef > &corners, int b, std::vector< int > &scores) |
Compute the 7 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features. More... | |
void | CVD::fast_corner_detect_8 (const BasicImage< byte > &im, std::vector< ImageRef > &corners, int barrier) |
Perform tree based 8 point FAST feature detection. More... | |
void | CVD::fast_corner_score_8 (const BasicImage< byte > &i, const std::vector< ImageRef > &corners, int b, std::vector< int > &scores) |
Compute the 8 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features. More... | |
void | CVD::fast_corner_detect_9 (const BasicImage< byte > &im, std::vector< ImageRef > &corners, int barrier) |
Perform tree based 9 point FAST feature detection as described in: Machine Learning for High Speed Corner Detection, E. More... | |
void | CVD::fast_corner_score_9 (const BasicImage< byte > &i, const std::vector< ImageRef > &corners, int b, std::vector< int > &scores) |
Compute the 9 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features. More... | |
void | CVD::fast_corner_detect_9_nonmax (const BasicImage< byte > &im, std::vector< ImageRef > &max_corners, int barrier) |
Perform FAST-9 corner detection (see fast_corner_detect_9), with nonmaximal suppression (see fast_corner_score_9 and nonmax_suppression) More... | |
void | CVD::fast_corner_detect_10 (const BasicImage< byte > &im, std::vector< ImageRef > &corners, int barrier) |
Perform tree based 10 point FAST feature detection. If you use this, please cite the paper given in fast_corner_detect_9. More... | |
void | CVD::fast_corner_score_10 (const BasicImage< byte > &i, const std::vector< ImageRef > &corners, int b, std::vector< int > &scores) |
Compute the 10 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features. More... | |
void | CVD::fast_corner_detect_11 (const BasicImage< byte > &im, std::vector< ImageRef > &corners, int barrier) |
Perform tree based 11 point FAST feature detection. If you use this, please cite the paper given in fast_corner_detect_9. More... | |
void | CVD::fast_corner_score_11 (const BasicImage< byte > &i, const std::vector< ImageRef > &corners, int b, std::vector< int > &scores) |
Compute the 11 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features. More... | |
void | CVD::fast_corner_detect_12 (const BasicImage< byte > &im, std::vector< ImageRef > &corners, int barrier) |
Perform tree based 12 point FAST feature detection. If you use this, please cite the paper given in fast_corner_detect_9. More... | |
void | CVD::fast_corner_score_12 (const BasicImage< byte > &i, const std::vector< ImageRef > &corners, int b, std::vector< int > &scores) |
Compute the 12 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features. More... | |
template<class It > | |
void | CVD::haar1D (It from, It to) |
computes the 1D Haar transform of a signal in place. More... | |
template<class It > | |
void | CVD::inv_haar1D (It from, It to) |
computes the inverse 1D Haar transform of a signal in place. More... | |
template<class It > | |
void | CVD::haar1D (It from, int size) |
computes the 1D Haar transform of a signal in place. More... | |
template<class It > | |
void | CVD::inv_haar1D (It from, int size) |
computes the inverse 1D Haar transform of a signal in place. More... | |
template<class It > | |
void | CVD::haar2D (It from, const int width, const int height, int stride=-1) |
computes the 2D Haar transform of a signal in place. More... | |
template<class T > | |
void | CVD::haar2D (BasicImage< T > &I) |
computes the 2D Haar transform of an image in place. More... | |
template<class Score , class Inserter , class C , class B > | |
void | CVD::harrislike_corner_detect (const BasicImage< B > &i, C &c, unsigned int N, float blur, float sigmas, BasicImage< float > &xx, BasicImage< float > &xy, BasicImage< float > &yy) |
Generic Harris corner detection function. More... | |
template<class S , class D > | |
void | CVD::integral_image (const BasicImage< S > &in, BasicImage< D > &out) |
Compute an integral image. More... | |
double | CVD::interpolate_extremum (double d1, double d2, double d3) |
Interpolate a 1D local extremum by fitting a quadratic to the three data points and interpolating. More... | |
std::pair< TooN::Vector< 2 >, double > | CVD::interpolate_extremum_value (double I__1__1, double I__1_0, double I__1_1, double I_0__1, double I_0_0, double I_0_1, double I_1__1, double I_1_0, double I_1_1) |
Interpolate a 2D local maximum, by fitting a quadratic. More... | |
template<class I > | |
std::pair< TooN::Vector< 2 >, double > | CVD::interpolate_extremum_value (const BasicImage< I > &i, ImageRef p) |
Interpolate a 2D local maximum, by fitting a quadratic. More... | |
template<class Accumulator , class T > | |
void | CVD::morphology (const BasicImage< T > &in, const std::vector< ImageRef > &selem, const Accumulator &a_, BasicImage< T > &out) |
Perform a morphological operation on the image. More... | |
void | CVD::nonmax_suppression_strict (const std::vector< ImageRef > &corners, const std::vector< int > &scores, std::vector< ImageRef > &nmax_corners) |
Perform nonmaximal suppression on a set of features, in a 3 by 3 window. More... | |
void | CVD::nonmax_suppression (const std::vector< ImageRef > &corners, const std::vector< int > &scores, std::vector< ImageRef > &nmax_corners) |
Perform nonmaximal suppression on a set of features, in a 3 by 3 window. More... | |
void | CVD::nonmax_suppression_with_scores (const std::vector< ImageRef > &corners, const std::vector< int > &socres, std::vector< std::pair< ImageRef, int >> &max_corners) |
Perform nonmaximal suppression on a set of features, in a 3 by 3 window. More... | |
template<class C > | |
Image< TooN::Matrix< 2 > > | CVD::dense_tensor_vote_gradients (const BasicImage< C > &image, double sigma, double ratio, double cutoff=0.001, unsigned int num_divs=4096) |
This function performs tensor voting on the gradients of an image. More... | |
template<class C > | |
Image< C > | CVD::linearInterpolationDownsample (const BasicImage< C > &in, float scale) |
Downsample an image using linear interpolation. More... | |
template<class C > | |
Image< C > | CVD::fastApproximateDownSample (const BasicImage< C > &in, double scale) |
Downsample an image using some fast hacks. More... | |
template<class C > | |
void | CVD::twoThirdsSample (const BasicImage< C > &in, BasicImage< C > &out) |
Subsamples an image to 2/3 of its size by averaging 3x3 blocks into 2x2 blocks. More... | |
template<class C > | |
Image< C > | CVD::twoThirdsSample (const BasicImage< C > &from) |
Subsamples an image by averaging 3x3 blocks in to 2x2 ones. More... | |
template<class T > | |
void | CVD::halfSample (const BasicImage< T > &in, BasicImage< T > &out) |
subsamples an image to half its size by averaging 2x2 pixel blocks More... | |
template<class T > | |
Image< T > | CVD::halfSample (const BasicImage< T > &in) |
subsamples an image to half its size by averaging 2x2 pixel blocks More... | |
template<class T > | |
Image< T > | CVD::halfSample (Image< T > in, unsigned int octaves) |
subsamples an image repeatedly by half its size by averaging 2x2 pixel blocks. More... | |
template<class T > | |
void | CVD::threshold (BasicImage< T > &im, const T &minimum, const T &hi) |
thresholds an image by setting all pixel values below a minimum to 0 and all values above to a given maximum More... | |
template<class T > | |
void | CVD::stats (const BasicImage< T > &im, T &mean, T &stddev) |
computes mean and stddev of intensities in an image. More... | |
template<class S , class T > | |
void | CVD::gradient (const BasicImage< S > &im, BasicImage< T > &out) |
computes the gradient image from an image. More... | |
Variables | |
const ImageRef | CVD::fast_pixel_ring [16] |
The 16 offsets from the centre pixel used in FAST feature detection. More... | |
Functions and classes to support common computer vision concepts and operations.
void CVD::connected_components | ( | const std::vector< ImageRef > & | v, |
std::vector< std::vector< ImageRef >> & | r | ||
) |
Find the connected components of the input, using 4-way floodfill.
This is implemented as a graph-based algorithm. There is no restriction on the input except that positions cannot be INT_MIN or INT_MAX.
The pixels in the resulting segments are not sorted.
v | List of pixel positions |
r | List of segments. |
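For illustration, a minimal usage sketch (the header name is an assumption, not taken from this page):

#include <vector>
#include <cvd/connected_components.h> // assumed header name

void example()
{
    using CVD::ImageRef;
    // Positions of the "on" pixels; two 4-connected groups.
    std::vector<ImageRef> v = { ImageRef(0, 0), ImageRef(1, 0), ImageRef(5, 5), ImageRef(5, 6) };
    std::vector<std::vector<ImageRef>> r;
    CVD::connected_components(v, r);
    // r now holds two segments, one per connected component (pixels unsorted).
}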
void CVD::convolveWithBox | ( | const BasicImage< T > & | I, |
BasicImage< T > & | J, | ||
ImageRef | hwin | ||
) |
convolves an image with a box of given size.
I | input image |
J | output image |
hwin | window size, this is half of the box size |
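A minimal usage sketch (the header name is assumed):

#include <cvd/convolution.h> // assumed header name
#include <cvd/image.h>

// Smooth with a 5x5 box: hwin is the half-window, so ImageRef(2,2) gives 2*2+1 = 5 pixels per side.
void box_smooth(const CVD::BasicImage<float>& in, CVD::BasicImage<float>& out)
{
    CVD::convolveWithBox(in, out, CVD::ImageRef(2, 2));
}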
Image<TooN::Matrix<2> > CVD::dense_tensor_vote_gradients | ( | const BasicImage< C > & | image, |
double | sigma, | ||
double | ratio, | ||
double | cutoff = 0.001 , |
||
unsigned int | num_divs = 4096 |
||
) |
This function performs tensor voting on the gradients of an image.
The voting is performed densely at each pixel, and the contribution of each pixel is scaled by its gradient magnitude. The kernel for voting is computed as follows. Consider that there is a point at \((0,0)\), with gradient normal \((0,1)\). This will make a contribution to the point \((x,y)\).
The arc-length, \(l\), of the arc passing through \((0,0)\), tangent to the gradient at this point and also passing through \((x, y)\) is:
\[ l = 2 r \theta \]
Where
\[ \theta = \tan^{-1}\frac{y}{x} \]
and the radius of the arc, \(r\) is:
\[ r = \frac{x^2 + y^2}{2y}. \]
The scale of the contribution is:
\[ s = e^{-\frac{l^2}{\sigma^2} - \kappa\frac{\sigma^2}{r^2}}. \]
Note that this is achieved by scaling \(x\) and \(y\) by \(\sigma\), so \(\kappa\) controls the kernel shape independent of the size. The complete tensor contribution is therefore:
\[ e^{-\frac{l^2}{\sigma^2} - \kappa\frac{\sigma^2}{r^2}} \left[ \begin{array}{c} \cos 2\theta\\ \sin 2\theta \end{array} \right] [ \cos 2\theta\ \ \sin 2\theta] \]
image | The image on which to perform tensor voting |
sigma | \( \sigma \) |
ratio | \( \kappa \) |
cutoff | When \(s\) drops below the cutoff, it is set to zero. |
num_divs | The voting kernels are quantized by angle into this many divisions in the half-circle. |
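A minimal usage sketch (the header name and the chosen parameter values are assumptions):

#include <cvd/tensor_voting.h> // assumed header name
#include <cvd/image.h>
#include <TooN/TooN.h>

// Vote with sigma = 4 pixels and kappa = 0.2, using the default cutoff and quantization.
CVD::Image<TooN::Matrix<2>> vote(const CVD::BasicImage<float>& image)
{
    return CVD::dense_tensor_vote_gradients(image, 4.0, 0.2);
}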
void CVD::euclidean_distance_transform | ( | const BasicImage< T > & | in, |
BasicImage< Q > & | out | ||
) |
Compute Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm.
Example in examples/distance_transform.cc
in | input image: thresholded so anything > 0 is on the object |
out | output image is euclidean distance of input image. |
Exceptions::Vision::BadInput | Throws if the input contains no points. |
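A minimal usage sketch (the header name is assumed); see also examples/distance_transform.cc:

#include <cvd/distance_transform.h> // assumed header name
#include <cvd/image.h>
#include <cvd/byte.h>

// Distance from every pixel to the nearest non-zero pixel of mask.
CVD::Image<float> distances(const CVD::BasicImage<CVD::byte>& mask)
{
    CVD::Image<float> dt(mask.size());
    CVD::euclidean_distance_transform(mask, dt); // throws BadInput if mask contains no points
    return dt;
}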
void CVD::euclidean_distance_transform | ( | const BasicImage< T > & | in, |
BasicImage< Q > & | out, | ||
BasicImage< ImageRef > & | lookup_DT | ||
) |
Compute Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm.
Example in examples/distance_transform.cc
in | input image: thresholded so anything > 0 is on the object |
out | output image is euclidean distance of input image. |
lookup_DT | For each output pixel, this is the location of the closest input pixel. |
Exceptions::Vision::BadInput | Throws if the input contains no points. |
void CVD::euclidean_distance_transform_sq | ( | const BasicImage< T > & | in, |
BasicImage< Q > & | out | ||
) |
Compute squared Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm.
Example in examples/distance_transform.cc
in | input image: thresholded so anything > 0 is on the object |
out | output image holding the squared Euclidean distance transform of the input image. |
Exceptions::Vision::BadInput | Throws if the input contains no points. |
void CVD::euclidean_distance_transform_sq | ( | const BasicImage< T > & | in, |
BasicImage< Q > & | out, | ||
BasicImage< ImageRef > & | lookup_DT | ||
) |
Compute squared Euclidean distance transform using the Felzenszwalb & Huttenlocher algorithm.
Example in examples/distance_transform.cc
in | input image: thresholded so anything > 0 is on the object |
out | output image holding the squared Euclidean distance transform of the input image. |
lookup_DT | For each output pixel, this is the location of the closest input pixel. |
Exceptions::Vision::BadInput | Throws if the input contains no points. |
void CVD::fast_corner_detect_10 | ( | const BasicImage< byte > & | im, |
std::vector< ImageRef > & | corners, | ||
int | barrier | ||
) |
Perform tree based 10 point FAST feature detection. If you use this, please cite the paper given in fast_corner_detect_9.
im | The input image |
corners | The resulting container of corner locations |
barrier | Corner detection threshold |
void CVD::fast_corner_detect_11 | ( | const BasicImage< byte > & | im, |
std::vector< ImageRef > & | corners, | ||
int | barrier | ||
) |
Perform tree based 11 point FAST feature detection. If you use this, please cite the paper given in fast_corner_detect_9.
im | The input image |
corners | The resulting container of corner locations |
barrier | Corner detection threshold |
void CVD::fast_corner_detect_12 | ( | const BasicImage< byte > & | im, |
std::vector< ImageRef > & | corners, | ||
int | barrier | ||
) |
Perform tree based 12 point FAST feature detection. If you use this, please cite the paper given in fast_corner_detect_9.
im | The input image |
corners | The resulting container of corner locations |
barrier | Corner detection threshold |
void CVD::fast_corner_detect_7 | ( | const BasicImage< byte > & | im, |
std::vector< ImageRef > & | corners, | ||
int | barrier | ||
) |
Perform tree based 7 point FAST feature detection.
This is more like an edge detector. If you use this, please cite the paper given in fast_corner_detect_9
im | The input image |
corners | The resulting container of corner locations |
barrier | Corner detection threshold |
void CVD::fast_corner_detect_8 | ( | const BasicImage< byte > & | im, |
std::vector< ImageRef > & | corners, | ||
int | barrier | ||
) |
Perform tree based 8 point FAST feature detection.
This is more like an edge detector. If you use this, please cite the paper given in fast_corner_detect_9
im | The input image |
corners | The resulting container of corner locations |
barrier | Corner detection threshold |
void CVD::fast_corner_detect_9 | ( | const BasicImage< byte > & | im, |
std::vector< ImageRef > & | corners, | ||
int | barrier | ||
) |
Perform tree based 9 point FAST feature detection as described in: Machine Learning for High Speed Corner Detection, E. Rosten and T. Drummond.
Results show that this is both the fastest and the best of the detectors. If you use this in published work, please cite:
@inproceedings{rosten2006machine,
  title     = "Machine Learning for High Speed Corner Detection",
  author    = "Edward Rosten and Tom Drummond",
  year      = "2006",
  month     = "May",
  booktitle = "9th European Conference on Computer Vision",
}
im | The input image |
corners | The resulting container of corner locations |
barrier | Corner detection threshold |
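A minimal detection sketch combining FAST-9 with non-maximal suppression (the header name is assumed):

#include <cvd/fast_corner.h> // assumed header name
#include <cvd/image.h>
#include <cvd/byte.h>
#include <vector>

std::vector<CVD::ImageRef> detect(const CVD::BasicImage<CVD::byte>& im)
{
    std::vector<CVD::ImageRef> corners, max_corners;
    CVD::fast_corner_detect_9(im, corners, 30);     // threshold of 30
    CVD::fast_nonmax(im, corners, 30, max_corners); // keep only locally maximal corners
    // fast_corner_detect_9_nonmax(im, max_corners, 30) performs both steps in one call.
    return max_corners;
}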
void CVD::fast_corner_detect_9_nonmax | ( | const BasicImage< byte > & | im, |
std::vector< ImageRef > & | max_corners, | ||
int | barrier | ||
) |
Perform FAST-9 corner detection (see fast_corner_detect_9), with nonmaximal suppression (see fast_corner_score_9 and nonmax_suppression)
im | The input image |
corners | The resulting container of locally maximal corner locations |
barrier | Corner detection threshold |
void CVD::fast_corner_score_10 | ( | const BasicImage< byte > & | i, |
const std::vector< ImageRef > & | corners, | ||
int | b, | ||
std::vector< int > & | scores | ||
) |
Compute the 10 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features.
i | The input image |
corners | The input container of corner locations |
b | Initial corner detection threshold. Using the same threshold as for corner detection will produce the quickest results, but any lower value (e.g. 0) will produce correct results. |
scores | The resulting container of scores, one per corner |
void CVD::fast_corner_score_11 | ( | const BasicImage< byte > & | i, |
const std::vector< ImageRef > & | corners, | ||
int | b, | ||
std::vector< int > & | scores | ||
) |
Compute the 11 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features.
i | The input image |
corners | The input container of corner locations |
b | Initial corner detection threshold. Using the same threshold as for corner detection will produce the quickest results, but any lower value (e.g. 0) will produce correct results. |
scores | The resulting container of scores, one per corner |
void CVD::fast_corner_score_12 | ( | const BasicImage< byte > & | i, |
const std::vector< ImageRef > & | corners, | ||
int | b, | ||
std::vector< int > & | scores | ||
) |
Compute the 12 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features.
i | The input image |
corners | The input container of corner locations |
b | Initial corner detection threshold. Using the same threshold as for corner detection will produce the quickest results, but any lower value (e.g. 0) will produce correct results. |
scores | The resulting container of scores, one per corner |
void CVD::fast_corner_score_7 | ( | const BasicImage< byte > & | i, |
const std::vector< ImageRef > & | corners, | ||
int | b, | ||
std::vector< int > & | scores | ||
) |
Compute the 7 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features.
i | The input image |
corners | The input container of corner locations |
b | Initial corner detection threshold. Using the same threshold as for corner detection will produce the quickest results, but any lower value (e.g. 0) will produce correct results. |
scores | The resulting container of scores, one per corner |
void CVD::fast_corner_score_8 | ( | const BasicImage< byte > & | i, |
const std::vector< ImageRef > & | corners, | ||
int | b, | ||
std::vector< int > & | scores | ||
) |
Compute the 8 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features.
i | The input image |
corners | The input container of corner locations |
b | Initial corner detection threshold. Using the same threshold as for corner detection will produce the quickest results, but any lower value (e.g. 0) will produce correct results. |
scores | The resulting container of scores, one per corner |
void CVD::fast_corner_score_9 | ( | const BasicImage< byte > & | i, |
const std::vector< ImageRef > & | corners, | ||
int | b, | ||
std::vector< int > & | scores | ||
) |
Compute the 9 point score (as the maximum threshold at which the point will still be detected) for a std::vector of features.
i | The input image |
corners | The input container of corner locations |
b | Initial corner detection threshold. Using the same threshold as for corner detection will produce the quickest results, but any lower value (e.g. 0) will produce correct results. |
scores | The resulting container of scores, one per corner |
void CVD::fast_nonmax | ( | const BasicImage< byte > & | im, |
const std::vector< ImageRef > & | corners, | ||
int | barrier, | ||
std::vector< ImageRef > & | max_corners | ||
) |
Perform non-maximal suppression on a set of FAST features.
This cleans up areas where there are multiple adjacent features, using a computed score function to leave only the 'best' features. This function is typically called immediately after a call to fast_corner_detect() (or one of its variants). It uses the scoring function given in the paper referenced in fast_corner_detect_9.
im | The image used to generate the FAST features |
corners | The FAST features previously detected (e.g. by calling fast_corner_detect()) |
barrier | The barrier used to calculate the score, which should be the same as that passed to fast_corner_detect() |
max_corners | Vector to be filled with the new list of locally maximal corners. |
void CVD::fast_nonmax_with_scores | ( | const BasicImage< byte > & | im, |
const std::vector< ImageRef > & | corners, | ||
int | barrier, | ||
std::vector< std::pair< ImageRef, int >> & | max_corners | ||
) |
Perform non-maximal suppression on a set of FAST features, also returning the score for each remaining corner.
This function cleans up areas where there are multiple adjacent features, using a computed score function to leave only the 'best' features. This function is typically called immediately after a call to fast_corner_detect() (or one of its variants).
im | The image used to generate the FAST features |
corners | The FAST features previously detected (e.g. by calling fast_corner_detect()) |
barrier | The barrier used to calculate the score, which should be the same as that passed to fast_corner_detect() |
max_corners | Vector to be filled with the new list of locally maximal corners, and their scores. non_maxcorners[i].first gives the location and non_maxcorners[i].second gives the score (higher is better). |
Image<C> CVD::fastApproximateDownSample | ( | const BasicImage< C > & | in, |
double | scale | ||
) |
Downsample an image using some fast hacks.
The image is half-sampled as much as it can be, then twoThirdsSampled if possible and then finally resampled with linear interpolation. This ensures that the linear interpolation never goes further than a factor of about 1.5. Repeated area sampling and linear interpolation aren't perfect, so the resulting image won't be completely free from aliasing artefacts. However, it's pretty good, decently fast for small rescales and very fast for large ones.
in | input image |
scale | fraction to downsample |
T CVD::gaussianKernel | ( | std::vector< T > & | k, |
T | maxval, | ||
double | stddev | ||
) |
creates a Gaussian kernel with given maximum value and standard deviation.
All elements of the passed vector are filled, so the vector's size determines the size of the computed kernel. The normalizing value is returned.
k | vector of T's holds the kernel values |
maxval | the maximum value to be used |
stddev | standard deviation of the kernel |
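A minimal sketch of building and rescaling a kernel (the header name, the chosen sizes and values, and the assumption that scaleKernel sizes its output vector are all assumptions):

#include <cvd/convolution.h> // assumed header name
#include <vector>

void build_kernels()
{
    // The vector's size fixes the kernel size; the return value is the normalizer.
    std::vector<int> k(7);
    int norm = CVD::gaussianKernel(k, 255, 1.5);

    // Rescale the same kernel so that its peak is 1023 instead of 255.
    std::vector<int> scaled; // assumed to be sized by scaleKernel itself
    int norm2 = CVD::scaleKernel(k, scaled, 1023);
    (void)norm; (void)norm2;
}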
void CVD::gradient | ( | const BasicImage< S > & | im, |
BasicImage< T > & | out | ||
) |
computes the gradient image from an image.
The gradient image contains two components per pixel holding the x and y components of the gradient.
im | input image |
out | output image, must have the same dimensions as input image |
IncompatibleImageSizes | if out does not have same dimensions as im |
template<class It >
void CVD::haar1D | ( | It | from, |
It | to | ||
) |
inline
computes the 1D Haar transform of a signal in place.
This version takes two iterators, and the data between them will be transformed. Will only work correctly on 2^N data points.
from | iterator pointing to the beginning of the data |
to | iterator pointing to the end (after the last element) |
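A minimal round-trip sketch on 2^3 samples (the header name is assumed):

#include <cvd/haar.h> // assumed header name
#include <vector>

void haar_roundtrip()
{
    std::vector<float> v = { 1, 2, 3, 4, 5, 6, 7, 8 }; // length must be a power of two
    CVD::haar1D(v.begin(), v.end());     // v now holds the Haar coefficients
    CVD::inv_haar1D(v.begin(), v.end()); // v is back to the original samples
}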
template<class It >
void CVD::haar1D | ( | It | from, |
int | size | ||
) |
inline
computes the 1D Haar transform of a signal in place.
Will only work correctly on 2^N data points.
from | iterator pointing to the beginning of the data |
size | number of data points, should be 2^N |
template<class It >
void CVD::haar2D | ( | It | from, |
const int | width, | ||
const int | height, | ||
int | stride = -1 | ||
) |
inline
computes the 2D Haar transform of a signal in place.
Works only with data with power of two dimensions, 2^N x 2^ M.
from | iterator pointing to the beginning of the data |
width | columns of data, should be 2^N |
height | rows of data, should be 2^M |
stride | offset between rows, if negative will be set to width |
template<class T >
void CVD::haar2D | ( | BasicImage< T > & | I | )
inline
computes the 2D Haar transform of an image in place.
Works only with images with power of two dimensions, 2^N x 2^ M.
I | image to be transformed |
void CVD::halfSample | ( | const BasicImage< T > & | in, |
BasicImage< T > & | out | ||
) |
subsamples an image to half its size by averaging 2x2 pixel blocks
in | input image |
out | output image, must have the right dimensions versus input image |
IncompatibleImageSizes | if out does not have half the dimensions of in |
template<class T >
Image< T > CVD::halfSample | ( | const BasicImage< T > & | in | )
inline
subsamples an image to half its size by averaging 2x2 pixel blocks
in | input image |
IncompatibleImageSizes | if out does not have half the dimensions of in |
template<class T >
Image< T > CVD::halfSample | ( | Image< T > | in, |
unsigned int | octaves | ||
) |
subsamples an image repeatedly by half its size by averaging 2x2 pixel blocks.
This version will not create a copy for 0 octaves, because it already receives an Image and can reuse its data.
in | input image |
octaves | number of halfsamplings |
IncompatibleImageSizes | if out does not have half the dimensions of in |
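A minimal pyramid sketch (the header name is assumed):

#include <cvd/vision.h> // assumed header name
#include <cvd/image.h>
#include <cvd/byte.h>
#include <utility>

void pyramid(const CVD::BasicImage<CVD::byte>& im)
{
    CVD::Image<CVD::byte> half = CVD::halfSample(im);                   // 1/2 size
    CVD::Image<CVD::byte> eighth = CVD::halfSample(std::move(half), 2); // two further octaves: 1/8 size
}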
void CVD::harrislike_corner_detect | ( | const BasicImage< B > & | i, |
C & | c, | ||
unsigned int | N, | ||
float | blur, | ||
float | sigmas, | ||
BasicImage< float > & | xx, | ||
BasicImage< float > & | xy, | ||
BasicImage< float > & | yy | ||
) |
Generic Harris corner detection function.
This can use any scoring metric and can store corners in any container. The images used to hold the intermediate results must be passed to this function.
i | Input image. |
c | Container holding detected corners |
N | Number of corners to detect |
blur | Blur radius to use |
sigmas | Number of sigmas to use in blur. |
xx | Holds the result of blurred, squared X gradient. |
xy | Holds the result of blurred, X times Y gradient. |
yy | Holds the result of blurred, squared Y gradient. |
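A minimal sketch using the Harris score and the plain position inserter (the header name and parameter values are assumptions):

#include <cvd/harris_corner.h> // assumed header name
#include <cvd/image.h>
#include <cvd/byte.h>
#include <vector>

void harris(const CVD::BasicImage<CVD::byte>& im)
{
    std::vector<CVD::ImageRef> corners;
    // Scratch images for the blurred gradient products; same size as the input.
    CVD::Image<float> xx(im.size()), xy(im.size()), yy(im.size());
    CVD::harrislike_corner_detect<CVD::Harris::HarrisScore, CVD::Harris::PosInserter>(
        im, corners, 200, 1.0f, 3.0f, xx, xy, yy); // up to 200 corners, blur sigma 1.0 over 3 sigmas
}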
void CVD::integral_image | ( | const BasicImage< S > & | in, |
BasicImage< D > & | out | ||
) |
Compute an integral image.
In an integral image, pixel (x,y) is equal to the sum of all the pixels in the rectangle from (0,0) to (x,y) in the original image.
D | The destination image pixel type |
S | The source image pixel type |
in | The source image. |
out | The destination image. |
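As an illustration of why integral images are useful, the sum over any axis-aligned rectangle can then be read off with four lookups. A minimal sketch (the header name is assumed; border handling is simplified by requiring x0, y0 > 0):

#include <cvd/integral_image.h> // assumed header name
#include <cvd/image.h>
#include <cvd/byte.h>

// Sum of the pixels in the inclusive rectangle (x0, y0)..(x1, y1).
long box_sum(const CVD::BasicImage<CVD::byte>& im, int x0, int y0, int x1, int y1)
{
    CVD::Image<long> ii(im.size());
    CVD::integral_image(im, ii);
    // Four-corner identity; drop the out-of-range terms when x0 == 0 or y0 == 0.
    return ii[y1][x1] - ii[y0 - 1][x1] - ii[y1][x0 - 1] + ii[y0 - 1][x0 - 1];
}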
double CVD::interpolate_extremum | ( | double | d1, |
double | d2, | ||
double | d3 | ||
) |
Interpolate a 1D local extremum by fitting a quadratic to the three data points and interpolating.
The middle argument must be the most extreme, and the extremum position is returned relative to 0.
Arguments are checked for extremeness by means of assert.
d1 | Data point value for $x=-1$ |
d2 | Data point value for $x=0$ |
d3 | Data point value for $x=1$ |
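For reference, fitting \(f(x) = ax^2 + bx + c\) through the samples \((-1, d_1)\), \((0, d_2)\), \((1, d_3)\) and solving \(f'(\hat{x}) = 0\) gives the standard closed form (a textbook derivation, not quoted from the library source):
\[ a = \tfrac{1}{2}(d_1 - 2 d_2 + d_3), \qquad b = \tfrac{1}{2}(d_3 - d_1), \qquad \hat{x} = -\frac{b}{2a} = \frac{d_1 - d_3}{2(d_1 - 2 d_2 + d_3)}. \]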
std::pair<TooN::Vector<2>, double> CVD::interpolate_extremum_value | ( | double | I__1__1, |
double | I__1_0, | ||
double | I__1_1, | ||
double | I_0__1, | ||
double | I_0_0, | ||
double | I_0_1, | ||
double | I_1__1, | ||
double | I_1_0, | ||
double | I_1_1 | ||
) |
Interpolate a 2D local maximum, by fitting a quadratic.
This is done by using the 9 datapoints to compute the local Hessian using finite differences and finding the location where the gradient is zero. This version also returns the value of the extremum.
Given the grid of pixels:
a b c
d e f
g h i
The centre pixel (e) must be the most extreme of all the pixels.
I__1__1 | Pixel $(-1, -1)$ relative to the centre (a) |
I__1_0 | Pixel $(-1, 0)$ relative to the centre (b) |
I__1_1 | Pixel $(-1, 1)$ relative to the centre (c) |
I_0__1 | Pixel $( 0, -1)$ relative to the centre (d) |
I_0_0 | Pixel $( 0, 0)$ relative to the centre (e) |
I_0_1 | Pixel $( 0, 1)$ relative to the centre (f) |
I_1__1 | Pixel $( 1, -1)$ relative to the centre (g) |
I_1_0 | Pixel $( 1, 0)$ relative to the centre (h) |
I_1_1 | Pixel $( 1, 1)$ relative to the centre (i) |
std::pair<TooN::Vector<2>, double> CVD::interpolate_extremum_value | ( | const BasicImage< I > & | i, |
ImageRef | p | ||
) |
Interpolate a 2D local maximum, by fitting a quadratic.
i | Image in which to interpolate extremum |
p | Point at which to interpolate extremum |
template<class It >
void CVD::inv_haar1D | ( | It | from, |
It | to | ||
) |
inline
computes the inverse 1D Haar transform of a signal in place.
This version takes two iterators, and the data between them will be transformed. Will only work correctly on 2^N data points.
from | iterator pointing to the beginning of the data |
to | iterator pointing to the end (after the last element) |
template<class It >
void CVD::inv_haar1D | ( | It | from, |
int | size | ||
) |
inline
computes the inverse 1D Haar transform of a signal in place.
Will only work correctly on 2^N data points.
from | iterator pointing to the beginning of the data |
size | number of data points, should be 2^N |
Image<C> CVD::linearInterpolationDownsample | ( | const BasicImage< C > & | in, |
float | scale | ||
) |
Downsample an image using linear interpolation.
This will give horrendous aliasing if scale is more than 2, but not if it's substantially less, due to the low-pass filter nature of bilinear interpolation. Don't use this function unless you know what you're doing!
Image resampling has more or less the following meaning. The image is represented as a real valued signal of sample points, by one delta function per pixel. To linearly interpolate this to fill up the real domain, it's convolved with a triangle kernel which is 0 when it hits a neighbouring sample point, 1 at zero, and symmetric. Now we have a real valued signal, we can then sample it which is effectively a multiplication with a delta comb.
The triangle kernel isn't band limited (though it does fall off), so when you resample, you will alias some high frequency information. But not all that much.
in | input image |
scale | fraction to downsample |
void CVD::morphology | ( | const BasicImage< T > & | in, |
const std::vector< ImageRef > & | selem, | ||
const Accumulator & | a_, | ||
BasicImage< T > & | out | ||
) |
Perform a morphological operation on the image.
At the edge of the image, the structuring element is cropped to the image boundary. This function is for homogeneous structuring elements, so it is suitable for erosion, dilation and the like, but not for hit-and-miss and similar operations.
For example, a greyscale erosion with a disc-shaped structuring element can be written as in the sketch after the parameter list below.
Morphology is performed efficiently using an incremental algorithm. As the structuring element is moved across the image, only pixels on its edge are added and removed. Other morphological operators can be added by creating a class with the accumulator methods outlined in that sketch.
Grayscale erode could be implemented with a multiset to store and remove pixels. Get would simply return the first element in the multiset.
in | The source image. |
selem | The structuring element. See e.g. getDisc() |
a_ | The morphological operation to perform. See Morphology |
out | The destination image. |
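A minimal sketch of both uses mentioned above: calling morphology() with a ready-made operation, and the general shape of a custom accumulator. The header names and the exact accumulator method names are assumptions; check morphology.h for the precise interface.

#include <cvd/morphology.h> // assumed header names
#include <cvd/vision.h>
#include <cvd/image.h>
#include <cvd/byte.h>
#include <set>

// Greyscale erosion with a disc of radius 3; getDisc() builds the structuring element.
void erode_disc(const CVD::BasicImage<CVD::byte>& in, CVD::BasicImage<CVD::byte>& out)
{
    CVD::morphology(in, CVD::getDisc(3), CVD::Morphology::Erode<CVD::byte>(), out);
}

// Rough shape of a custom accumulator (method names are illustrative): pixels entering and
// leaving the window are inserted and removed incrementally, and get() returns the result.
struct MaxOfWindow
{
    std::multiset<CVD::byte> pixels;
    void clear() { pixels.clear(); }
    void insert(CVD::byte p) { pixels.insert(p); }
    void remove(CVD::byte p) { pixels.erase(pixels.find(p)); }
    CVD::byte get() const { return *pixels.rbegin(); } // maximum of the window, i.e. dilation
};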
void CVD::nonmax_suppression | ( | const std::vector< ImageRef > & | corners, |
const std::vector< int > & | scores, | ||
std::vector< ImageRef > & | nmax_corners | ||
) |
Perform nonmaximal suppression on a set of features, in a 3 by 3 window.
The test is non-strict: a point must be at least as large as its neighbours.
corners | The corner locations |
scores | The corners' scores |
nmax_corners | The locally maximal corners. |
void CVD::nonmax_suppression_strict | ( | const std::vector< ImageRef > & | corners, |
const std::vector< int > & | scores, | ||
std::vector< ImageRef > & | nmax_corners | ||
) |
Perform nonmaximal suppression on a set of features, in a 3 by 3 window.
The test is strict: a point must be greater than its neighbours.
corners | The corner locations |
scores | The corners' scores |
nmax_corners | The locally maximal corners. |
void CVD::nonmax_suppression_with_scores | ( | const std::vector< ImageRef > & | corners, |
const std::vector< int > & | socres, | ||
std::vector< std::pair< ImageRef, int >> & | max_corners | ||
) |
Perform nonmaximal suppression on a set of features, in a 3 by 3 window.
The test is non-strict: a point must be at least as large as its neighbours.
corners | The corner locations |
scores | The corners' scores |
max_corners | The locally maximal corners, and their scores. |
T CVD::scaleKernel | ( | const std::vector< S > & | k, |
std::vector< T > & | scaled, | ||
T | maxval | ||
) |
scales a Gaussian kernel to a different maximum value.
The new kernel is returned in scaled. The new normalizing value is returned.
k | input kernel |
scaled | output vector to hold the resulting kernel |
maxval | the new maximum value |
void CVD::stats | ( | const BasicImage< T > & | im, |
T & | mean, | ||
T & | stddev | ||
) |
computes mean and stddev of intensities in an image.
These are computed for each component of the pixel type, therefore the output are two pixels with mean and stddev for each component.
im | input image |
mean | pixel element containing the mean of intensities in the image for each component |
stddev | pixel element containing the standard deviation for each component |
void CVD::threshold | ( | BasicImage< T > & | im, |
const T & | minimum, | ||
const T & | hi | ||
) |
thresholds an image by setting all pixel values below a minimum to 0 and all values above to a given maximum
im | input image changed in place |
minimum | threshold value |
hi | maximum value for values above the threshold |
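A minimal binarization sketch (the header name is assumed):

#include <cvd/vision.h> // assumed header name
#include <cvd/image.h>
#include <cvd/byte.h>

// Pixels below 128 become 0 and the remaining pixels are set to 255 (in place).
void binarize(CVD::BasicImage<CVD::byte>& im)
{
    CVD::threshold(im, CVD::byte(128), CVD::byte(255));
}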
void CVD::twoThirdsSample | ( | const BasicImage< C > & | in, |
BasicImage< C > & | out | ||
) |
Subsamples an image to 2/3 of its size by averaging 3x3 blocks into 2x2 blocks.
in | input image |
out | output image (must be out.size() == in.size()/3*2 ) |
IncompatibleImageSizes | if out does not have the correct dimensions. |
Image<C> CVD::twoThirdsSample | ( | const BasicImage< C > & | from | ) |
Subsamples an image by averaging 3x3 blocks in to 2x2 ones.
Note that this is performed using lazy evaluation, so subsampling happens on assignment, and memory allocation is not performed if unnecessary.
from | The image to convert from |
const ImageRef CVD::fast_pixel_ring |
The 16 offsets from the centre pixel used in FAST feature detection.