Feature Detection


To get a better idea of the image-processing side of our convolutional neural network, we took some test images of hallways and ran basic feature detection algorithms on them. The first two images are from Durham, and the third is from Atanasoff. Below are the results.

We sampled three different images. For each, we looked at three versions: the phone camera's original size (5248 x 2952 pixels), a reduced size (1000 x 563 pixels), and a blurred copy of the reduced image (using a 4 x 4 blur). On each version, we ran three feature detection algorithms. ORB is an OpenCV alternative to SIFT and SURF. Harris Corner is a basic corner detection algorithm. Hough Lines identifies straight lines in the image.
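For reference, the resizing and blurring can be reproduced in a few lines of OpenCV. This is just a sketch of the preprocessing: the filename is a placeholder, and we assume the 4 x 4 blur is a simple averaging (box) filter.

  import cv2
  
  # Load one full-size test image (the filename is a placeholder)
  img = cv2.imread('Hallway_1.jpg', cv2.IMREAD_COLOR)
  
  # Shrink 5248 x 2952 down to 1000 x 563; INTER_AREA is the usual
  # choice when reducing an image
  reduced = cv2.resize(img, (1000, 563), interpolation=cv2.INTER_AREA)
  
  # 4 x 4 averaging blur on the reduced image
  blurred = cv2.blur(reduced, (4, 4))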

A number of problems became apparent, even from these simple tests. The reduced-size images perform much better than the original size, which makes sense: these algorithms work by looking for large differences between neighboring pixels, and at the original resolution the image changes so little from one pixel to the next that features are hard to identify. In our network, the convolution step will identify features and eliminate unnecessary detail before passing the result along to the fully connected layers.
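As a rough illustration of what that convolution step does, the sketch below applies one hand-picked 3 x 3 edge kernel with cv2.filter2D. In the actual network the kernel weights are learned rather than chosen by hand, and the variable reduced is assumed from the preprocessing sketch above.

  import numpy as np
  import cv2
  
  # One fixed 3 x 3 edge-detection kernel, standing in for a single
  # learned convolution filter; a CNN learns many such kernels
  kernel = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=np.float32)
  
  # 'reduced' comes from the preprocessing sketch above
  gray = cv2.cvtColor(reduced, cv2.COLOR_BGR2GRAY)
  # The response is large where the neighborhood differs from its
  # center pixel, i.e. at edges and corners
  response = cv2.filter2D(gray, cv2.CV_32F, kernel)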

The second Durham example in particular shows a lot of noise generated by the patterns in the floor. This is a good issue to be aware of, and something we will inevitably see much more of. In this case, blurring the image handled the problem fairly well, and a similar smoothing step will be performed in our convolution. It will also be crucial to train our network in a variety of buildings. If we trained in Durham alone, for example, the network might learn to identify the lines created by the corners of the hallways; if it then associated any strong line with a boundary and was run in Atanasoff, the lines in the floor pattern would be incorrectly recognized as boundaries.
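One quick way to see the effect of the blur is to count the Hough line segments detected before and after blurring. This sketch assumes the reduced and blurred images from the preprocessing example above, and reuses the same Canny and Hough thresholds as our test script below.

  import numpy as np
  import cv2
  
  # 'reduced' and 'blurred' come from the preprocessing sketch above.
  # Fewer detected segments on the blurred image indicates that the
  # blur is suppressing floor-pattern noise.
  for name, image in [('reduced', reduced), ('blurred', blurred)]:
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      edges = cv2.Canny(gray, 50, 200)
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                              minLineLength=30, maxLineGap=10)
      print(name, 0 if lines is None else len(lines))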

Simplified Feature Detection Code

Here is a simplified version of how we tested feature detection:

  import numpy as np
  import cv2
  from matplotlib import pyplot as plt
  
  img = cv2.imread('Hallway_3.jpg', cv2.IMREAD_COLOR)
  # OpenCV loads images in BGR order, so convert with COLOR_BGR2GRAY
  grayscale_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
  
  # #### ORB ####
  # Initiate ORB detector
  orb = cv2.ORB_create()
  # Find the keypoints with ORB
  kp = orb.detect(img, None)
  # Compute the descriptors with ORB
  kp, des = orb.compute(img, kp)
  # Draw only keypoint locations, not size and orientation
  orb_img = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0), flags=0)
  
  # #### Harris Corner ####
  gray = np.float32(grayscale_img)
  dst = cv2.cornerHarris(gray, 2, 3, 0.04)
  # Result is dilated for marking the corners, not important
  dst = cv2.dilate(dst, None)
  # Threshold for an optimal value; it may vary depending on the image
  hc_img = img.copy()
  hc_img[dst > 0.005 * dst.max()] = [255, 0, 0]
  
  # #### Hough Lines using Canny Edge ####
  E = cv2.Canny(grayscale_img, 50, 200, apertureSize=3)
  lines = cv2.HoughLinesP(E, rho=1, theta=np.pi / 180, threshold=80,
                          minLineLength=30, maxLineGap=10)
  hough_img = img.copy()
  for i in range(lines.shape[0]):
      x1, y1, x2, y2 = lines[i][0]
      cv2.line(hough_img, (x1, y1), (x2, y2), (255, 0, 0), 3)
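The script imports pyplot but the display step is not shown above. One possible way to view the three results side by side, converting from OpenCV's BGR order to the RGB order matplotlib expects:

  # Show the ORB, Harris Corner, and Hough Lines results in one figure
  for i, (title, result) in enumerate([('ORB', orb_img),
                                       ('Harris Corner', hc_img),
                                       ('Hough Lines', hough_img)]):
      plt.subplot(1, 3, i + 1)
      plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB))
      plt.title(title)
      plt.axis('off')
  plt.show()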


Feature Detection Images

(Image gallery: ORB, Harris Corner, and Hough Lines results for each hallway image at the original, reduced, and blurred sizes.)