OpenCV's hidden BEST corner detection algorithm

OpenCV corner detection algorithms, and the secret BEST one.


OpenCV, an open-source computer vision library, provides a range of corner detection algorithms essential for image processing and other computer vision applications. Corner detection plays a crucial role in identifying key feature points in an image, facilitating object recognition, image stitching, and tracking. Several common corner detection methods exist, each with its strengths and limitations.

Through my employer, I recently worked with researchers at UVU to create new open-source software called EMPATH (Electron Microscopy Program for Automated Tracking and High-resolution imaging), which, given an image of a row of cells, identifies the individual cells. I explored multiple corner detection algorithms and discovered a hidden one that finally did the trick.

If you'd like to skip to the secret algorithm and see how it works, jump ahead to The findContours Method section below. Before we get there, I'll explain my journey discovering this 'unknown' algorithm while developing EMPATH.


EMPATH

EMPATH is open-source object recognition software designed to interface with electron microscopes. It picks out individual cells from a row of cells, using object recognition algorithms for precise cellular imaging. This approach produces high-resolution, automatic, close-up images of each cell, allowing researchers to explore the microscopic world without spending hours at the microscope. EMPATH's open-source nature promotes collaboration and customization, accelerating advancements in electron microscopy and cellular analysis and positioning it as a pivotal tool at the intersection of computer vision and the life sciences.

Check out EMPATH on GitHub!

The goal


My primary goal was to detect and isolate individual cells within a low-quality image of a row of cells, and a crucial step in achieving this was detecting the corners of each cell. This process involves creating bounding boxes around each cell, providing a structured framework for subsequent analyses. Once these bounding boxes are established, it becomes feasible to zoom in on specific points within every cell, with each corresponding point representing the same location across all cells in the row.
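
To make this concrete, here is a rough sketch (not EMPATH's actual code) of how OpenCV's boundingRect can wrap a cell's corner points in an axis-aligned box; the corner coordinates and file name are made up for illustration.

    import cv2
    import numpy as np

    # Hypothetical corner points (x, y) for one cell; EMPATH derives these from the image.
    cell_corners = np.array([[120, 40], [230, 42], [231, 160], [119, 158]], dtype=np.int32)

    # Axis-aligned bounding box around the cell's corners.
    x, y, w, h = cv2.boundingRect(cell_corners)

    # Crop that region so a later step can zoom in on this cell.
    image = cv2.imread("cell_row.png")  # placeholder file name
    cell_crop = image[y:y + h, x:x + w]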

The challenge lies in the surprising difficulty of corner detection. To address the initial object recognition, the Meta SAM model comes into play, serving as a reliable tool for detecting the row of connected cells within the image. Subsequently, a Canny edge detection algorithm is applied to identify all edge pixels, providing a foundation for the corner detection process. Leveraging the capabilities of OpenCV, the software then systematically detects the corners of each cell, paving the way for the creation of bounding boxes around them.
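
For context, a minimal version of the Canny step with OpenCV might look like the following; the file name, blur kernel, and thresholds are placeholder assumptions rather than EMPATH's real settings.

    import cv2

    # Placeholder file name; the Meta SAM step is assumed to have already isolated
    # the row of connected cells in this image.
    gray = cv2.imread("cell_row.png", cv2.IMREAD_GRAYSCALE)

    # Light blur to suppress noise in the low-quality image before edge detection.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Canny edge detection; the 50/150 thresholds are illustrative and need tuning.
    edges = cv2.Canny(blurred, 50, 150)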


The journey

I tested common corner detection algorithms as follows:

  1. Harris Corner Detector
  2. Shi-Tomasi Corner Detector
  3. FAST

Harris Corner Detector


The Harris Corner Detector, a cornerstone of computer vision, approaches its analysis like an investigator at a crime scene. It evaluates local intensity variations across the image, identifying corners by examining each region for signs of irregularity, manifested as significant changes in pixel intensity in every direction. The detector is also robust to rotation: it finds the same corners even when the image's orientation changes, much like studying an object from different angles. This flexibility proves crucial in applications such as image recognition and tracking, where perfect alignment isn't guaranteed.
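
For reference, a minimal Harris example using OpenCV's cornerHarris might look like this; the parameters are common textbook defaults rather than values tuned for EMPATH.

    import cv2
    import numpy as np

    gray = cv2.imread("cell_row.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    gray = np.float32(gray)

    # Harris response map; blockSize, ksize (Sobel aperture), and k are typical defaults.
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

    # Keep pixels whose response is within 1% of the strongest corner.
    corners = np.argwhere(response > 0.01 * response.max())  # (row, col) coordinates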

However, this algorithm did not perform well on the edge data in my images, so I moved on to the supposedly improved algorithms.

Shi-Tomasi Corner Detector


The Shi-Tomasi Corner Detector is an improvement over the Harris Corner Detector, refining corner selection with a more sophisticated criterion: the minimum eigenvalue criterion. The Shi-Tomasi detector thus adeptly distinguishes between corners of varying importance, much as a discerning curator selectively highlights significant pieces in an art gallery. This ability to prioritize prominent corners proves invaluable in applications where precise feature detection is essential. Moreover, the minimum eigenvalue criterion makes the Shi-Tomasi method resilient across diverse image conditions, intricate patterns, and varying lighting scenarios, showcasing its robustness in corner detection.
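
A minimal Shi-Tomasi example via OpenCV's goodFeaturesToTrack is sketched below; the corner count, quality level, and spacing are illustrative, not EMPATH's settings.

    import cv2

    gray = cv2.imread("cell_row.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # Shi-Tomasi corners: up to 100 corners, quality level 0.01, at least 10 px apart.
    points = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01, minDistance=10)

    # goodFeaturesToTrack returns an N x 1 x 2 float array, or None if nothing is found.
    corners = [] if points is None else points.reshape(-1, 2).astype(int)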

Although promising, this method performed poorly in my use case. I was unsure why, and simply moved on to another method.

FAST Algorithm

The FAST algorithm, short for Features from Accelerated Segment Test, is a highly efficient corner detection method known for its exceptional speed, making it a preferred choice for scenarios requiring swift processing. Like a nimble scout navigating an image, FAST identifies corners rapidly, setting it apart in real-time applications such as video analysis, robotics, and augmented reality, where quick decision-making is critical. What makes FAST distinctive is its simplicity: it compares each candidate pixel's intensity against a ring of surrounding pixels to decide whether it looks like a corner. This simplicity not only accelerates computation but also enhances the algorithm's versatility, and in dynamic environments demanding real-time responsiveness, FAST's ability to swiftly and accurately identify corners makes it a valuable tool in computer vision.
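
For completeness, here is a minimal FAST sketch with OpenCV; the threshold is an illustrative value.

    import cv2

    gray = cv2.imread("cell_row.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # FAST detector: the threshold controls how much brighter or darker the ring of
    # pixels around a candidate must be; non-max suppression thins clustered hits.
    fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
    keypoints = fast.detect(gray, None)

    corners = [kp.pt for kp in keypoints]  # (x, y) coordinates of detected keypoints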

However, speed was not important for me. As I expected, it performed no better, and I moved on, eventually discovering how OpenCV's findContours method really works.

The findContours Method


While learning how the findContours method stores its data, I uncovered a hidden gem among corner detection strategies: what I believe to be an unconventional combination of functions that breathes new life into the process. I used findContours together with a contour-straightening step (straightenContours), and a novel approach to corner detection emerged. Although it initially sounds nothing like traditional corner detection, this method identifies contours and then straightens them, reducing the image to its most important, straight contours. Then I realized that these straightened contours are stored as pairs of endpoints, each point being a corner!

Simply by extracting each endpoint of the contours and removing duplicates, you have every important corner, in order of discovery! For whatever reason, this method worked beautifully for my use case. I hope others will find this useful as well!
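
Here is a minimal sketch of the idea in Python. Note that OpenCV itself does not ship a straightenContours function, so cv2.approxPolyDP stands in for the straightening step under that assumption; the file name, Canny thresholds, and epsilon are placeholders, and the return signature assumes OpenCV 4.x.

    import cv2

    gray = cv2.imread("cell_row.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Find every contour in the edge map; CHAIN_APPROX_SIMPLE already compresses
    # straight runs of pixels down to their endpoints.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    corners = []
    seen = set()
    for contour in contours:
        # Straighten the contour into its dominant line segments; approxPolyDP is
        # used here as a stand-in, with epsilon set to 2% of the arc length.
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        # Each vertex is an endpoint of a straight segment, i.e. a candidate corner.
        for x, y in approx.reshape(-1, 2):
            point = (int(x), int(y))
            if point not in seen:  # drop duplicates while keeping discovery order
                seen.add(point)
                corners.append(point)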