Spindle Alignment with Laser Pointer, Image Sensor and OpenCV

For the Optical Table Lathe, it is important to align the rotational axis of the lathe with the z axis. There are a number of common ways of doing this, usually using dial test indicators, but I am going to use an optical method. A laser pointer chucked in the spindle will sweep out a cone-like surface with its beam. To illustrate this, I made a model exaggerating the misalignment of the laser beam, where the laser pointer is both off axis and tilted:

When a line is rotated in 3D space around an arbitrary axis, it sweeps out a surface of revolution: a double cone if the line crosses the axis, a cylinder if it is parallel to it, and a hyperboloid in the general skew case. In every one of these cases, its intersection with a plane perpendicular to the axis of rotation is a circle centered on that axis. So the exact alignment of the laser pointer is immaterial; the beam will describe a circle on a plane perpendicular to the axis of rotation, and the circle's center marks where the axis of rotation pierces that plane.

Since the circle is a plane section of this surface, it gets smaller as the 'screen' gets closer to the laser pointer. If the screen is not perpendicular to the axis of rotation (tilted left-right or up-down), the light will describe an ellipse instead of a circle. For the small tilts involved here, however, the center of the ellipse still coincides with the axis of rotation to a very good approximation.

When the screen is moved along the linear rails, the center of the circle/ellipse will stay at the same coordinate on the screen only if the linear rails are parallel to the axis of rotation.
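
To convince myself of the geometry, here is a quick numerical check of the perpendicular-screen case (a minimal sketch; the offset and tilt values are arbitrary illustration numbers, not measurements). An off-axis, tilted line is rotated about the z axis and intersected with planes at two distances; the traced points form circles of different radii, both centered on the axis:

import numpy as np

def beam_hit(theta, z_plane, p0=np.array([0.5, 0.2, 0.0]), d=np.array([0.05, -0.03, 1.0])):
  # point where the rotated beam crosses the plane z = z_plane;
  # p0 is a point on the laser line (off axis), d its direction (tilted),
  # theta the spindle angle about the z axis -- all example values
  c, s = np.cos(theta), np.sin(theta)
  R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about z
  p, v = R @ p0, R @ d
  t = (z_plane - p[2]) / v[2]                        # solve p_z + t*v_z = z_plane
  return (p + t * v)[:2]                             # (x, y) where the beam hits the plane

for z_plane in (5.0, 20.0):
  hits = np.array([beam_hit(th, z_plane) for th in np.linspace(0, 2 * np.pi, 360, endpoint=False)])
  center = hits.mean(axis=0)                         # comes out at (0, 0): on the rotation axis
  radii = np.linalg.norm(hits - center, axis=1)      # constant radius, larger for the farther plane
  print(z_plane, center.round(6), radii.min(), radii.max())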

We can exploit the fact that modern digital cameras frequently have very small pixel sizes, on the order of 1 µm. Instead of a screen, we use the sensor of a camera from which the lens assembly has been removed, similar to this:


Then the circle/ellipse will be traced out on the sensor by the laser dot, which we should be able to track with computer vision using OpenCV.

This way (using some modeling for sub-pixel estimation) the axis can potentially be aligned with sub-micron precision over an arbitrary length. Also, or alternatively, a correction table can be built for the CNC controller.
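
As a sketch of what such a correction table could look like (every number below is a made-up placeholder, and the pixel pitch would have to come from the sensor datasheet): record the estimated dot center at a few carriage positions along z, convert the drift to microns, and interpolate between the measurements.

import numpy as np

pixel_pitch_um = 1.12                          # hypothetical sensor pixel pitch in microns
z_mm = np.array([0.0, 50.0, 100.0, 150.0])     # carriage positions along the rails (example values)
center_px = np.array([[812.0, 604.0],          # estimated (x, y) dot center at each position
                      [813.1, 603.6],          # (placeholder numbers, not real measurements)
                      [814.3, 603.1],
                      [815.4, 602.7]])

# drift of the rotation axis relative to the first measurement, converted to microns
drift_um = (center_px - center_px[0]) * pixel_pitch_um

def correction_at(z):
  # linearly interpolate the x/y correction for any commanded z position
  return np.array([np.interp(z, z_mm, drift_um[:, 0]),
                   np.interp(z, z_mm, drift_um[:, 1])])

print(correction_at(75.0))                     # x/y correction in microns halfway along the travel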

I started out using OpenCV with Python on Linux, with a camera like the small cube on the right. However, it turns out that my laser pointer is not well collimated, and the dot gets a bit too big for the small sensor at distances of more than about 6 inches. So I moved to a Sony NEX-5, similar to the camera on the left, since it has a much larger sensor. However, it only outputs over HDMI, and my HDMI capture device has no Linux drivers, so I moved development over to Windows...

I started with video_threaded.py from OpenCV/samples/python, along with the support files common.py, video.py, and tst_scene_render.py.

All the frame processing happens in this part of the code:

    def process_frame(frame, t0):
        # some intensive computation...
        frame = cv2.medianBlur(frame, 19)
        frame = cv2.medianBlur(frame, 19)
        return frame, t0

So let's delete the blur and see what we get with just the laser hitting the image sensor:

def process_frame(frame, t0):
  return frame, t0

Next, extract a single color channel for processing and threshold out the low intensity signal:

def process_frame(frame, t0):
  b, g, r = cv2.split(frame)                        # separate image into color channels
  singlechannelthreshold = cv2.inRange(g, 200, 255) # take a single channel and threshold it
  return singlechannelthreshold, t0

Then we 'erode' the image to keep only large shapes.

def process_frame(frame, t0):
  b, g, r = cv2.split(frame)                        # separate image into color channels
  singlechannelthreshold = cv2.inRange(g, 200, 255) # take a single channel and threshold it
  kernel = np.ones((5, 5), np.uint8)                # set up kernel for erode function
  eroded = cv2.erode(singlechannelthreshold, kernel, iterations=1)  # erode image
  return eroded, t0

Then, to estimate the center and track it, we can roughly follow the example here:

http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/

First find the contours

def process_frame(frame, t0):
  b, g, r = cv2.split(frame)                        # separate image into color channels
  singlechannelthreshold = cv2.inRange(g, 200, 255) # take a single channel and threshold it
  kernel = np.ones((5, 5), np.uint8)                # set up kernel for erode function
  eroded = cv2.erode(singlechannelthreshold, kernel, iterations=1)  # erode image
  # find contours in the mask and initialize the current (x, y) center of the ellipse
  im2, cnts, hierarchy = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
  center = None
  return frame, t0

Next, estimate the center of the largest contour and draw a circle around it

def process_frame(frame, t0):
  b, g, r = cv2.split(frame)                        # separate image into color channels
  singlechannelthreshold = cv2.inRange(g, 200, 255) # take a single channel and threshold it
  kernel = np.ones((5, 5), np.uint8)                # set up kernel for erode function
  eroded = cv2.erode(singlechannelthreshold, kernel, iterations=1)  # erode image
  # find contours in the mask and initialize the current (x, y) center of the ellipse
  im2, cnts, hierarchy = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
  center = None
  # only proceed if at least one contour was found
  if len(cnts) > 0:
    # find the largest contour in the mask, then use it to compute the minimum enclosing circle and centroid
    c = max(cnts, key=cv2.contourArea)
    ((x, y), radius) = cv2.minEnclosingCircle(c)
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
    # only proceed if the radius meets a minimum size
    if radius > 10:
      # draw the circle and centroid on the frame, then update the list of tracked points
      cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
      cv2.circle(frame, center, 5, (0, 0, 255), -1)
  return frame, t0
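
A side note on the sub-pixel estimation mentioned at the start: the moment-based centroid is already a sub-pixel quantity, and it is only the int() cast above that throws that resolution away. A minimal standalone illustration (the synthetic blob and the version-agnostic [-2] indexing of findContours are my own additions, not part of the sample):

import cv2
import numpy as np

# synthetic blob whose centroid does not fall on a pixel center
mask = np.zeros((100, 100), np.uint8)
cv2.rectangle(mask, (30, 40), (35, 44), 255, -1)     # filled 6 x 5 pixel block

# [-2] picks the contour list under both the OpenCV 3.x and 4.x return conventions
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
c = max(cnts, key=cv2.contourArea)

M = cv2.moments(c)
cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]    # sub-pixel (float) centroid
center = (int(cx), int(cy))                          # integer version, only needed for drawing
print((cx, cy), center)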

Keep a history of the estimated center coordinates

def process_frame(frame, t0):
  b, g, r = cv2.split(frame)                        # separate image into color channels
  singlechannelthreshold = cv2.inRange(g, 200, 255) # take a single channel and threshold it
  kernel = np.ones((5, 5), np.uint8)                # set up kernel for erode function
  eroded = cv2.erode(singlechannelthreshold, kernel, iterations=1)  # erode image
  # find contours in the mask and initialize the current (x, y) center of the ellipse
  im2, cnts, hierarchy = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
  center = None
  # only proceed if at least one contour was found
  if len(cnts) > 0:
    # find the largest contour in the mask, then use it to compute the minimum enclosing circle and centroid
    c = max(cnts, key=cv2.contourArea)
    ((x, y), radius) = cv2.minEnclosingCircle(c)
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
    # only proceed if the radius meets a minimum size
    if radius > 10:
      # draw the circle and centroid on the frame, then update the list of tracked points
      cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
      cv2.circle(frame, center, 5, (0, 0, 255), -1)
  # update the points queue
  pts.appendleft(center)
  return frame, t0
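
The pts queue used above is not defined inside process_frame; it has to be created once at module level before the frames start flowing. Something along these lines works, where the history length of 64 is an arbitrary choice:

from collections import deque

pts = deque(maxlen=64)   # rolling history of estimated centers; oldest entries fall off automatically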

And calculate the average over that history

def process_frame(frame, t0):
  b, g, r = cv2.split(frame)                        # separate image into color channels
  singlechannelthreshold = cv2.inRange(g, 200, 255) # take a single channel and threshold it
  kernel = np.ones((5, 5), np.uint8)                # set up kernel for erode function
  eroded = cv2.erode(singlechannelthreshold, kernel, iterations=1)  # erode image
  # find contours in the mask and initialize the current (x, y) center of the ellipse
  im2, cnts, hierarchy = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
  center = None
  # only proceed if at least one contour was found
  if len(cnts) > 0:
    # find the largest contour in the mask, then use it to compute the minimum enclosing circle and centroid
    c = max(cnts, key=cv2.contourArea)
    ((x, y), radius) = cv2.minEnclosingCircle(c)
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
    # only proceed if the radius meets a minimum size
    if radius > 10:
      # draw the circle and centroid on the frame, then update the list of tracked points
      cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
      cv2.circle(frame, center, 5, (0, 0, 255), -1)
  # update the points queue
  pts.appendleft(center)
  # loop over the set of tracked points
  count = 0
  xsum = 0
  ysum = 0
  for i in range(1, len(pts)):
    # if either of the tracked points are None, ignore them
    if pts[i - 1] is None or pts[i] is None:
      continue
    # otherwise, add the point to the running sums for the average
    count = count + 1
    (px, py) = pts[i]
    xsum = xsum + px
    ysum = ysum + py
  return frame, t0

And draw a line over the history, plus a marker and on-screen readout for the averaged center

def process_frame(frame, t0):
  b, g, r = cv2.split(frame)                        # separate image into color channels
  singlechannelthreshold = cv2.inRange(g, 200, 255) # take a single channel and threshold it
  kernel = np.ones((5, 5), np.uint8)                # set up kernel for erode function
  eroded = cv2.erode(singlechannelthreshold, kernel, iterations=1)  # erode image
  # find contours in the mask and initialize the current (x, y) center of the ellipse
  im2, cnts, hierarchy = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
  center = None
  # only proceed if at least one contour was found
  if len(cnts) > 0:
    # find the largest contour in the mask, then use it to compute the minimum enclosing circle and centroid
    c = max(cnts, key=cv2.contourArea)
    ((x, y), radius) = cv2.minEnclosingCircle(c)
    M = cv2.moments(c)
    center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
    # only proceed if the radius meets a minimum size
    if radius > 10:
      # draw the circle and centroid on the frame, then update the list of tracked points
      cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
      cv2.circle(frame, center, 5, (0, 0, 255), -1)
  # update the points queue
  pts.appendleft(center)
  # loop over the set of tracked points
  count = 0
  xsum = 0
  ysum = 0
  for i in range(1, len(pts)):
    # if either of the tracked points are None, ignore them
    if pts[i - 1] is None or pts[i] is None:
      continue
    # otherwise, draw the connecting line and add the point to the running sums
    count = count + 1
    (px, py) = pts[i]
    xsum = xsum + px
    ysum = ysum + py
    cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), 1)
  # draw the averaged center and print its coordinates
  if count > 0:
    cv2.circle(frame, (int(xsum / count), int(ysum / count)), 5, (255, 0, 0), -1)
    draw_str(frame, (20, 80), "x,y      :  " + str(int(xsum / count)) + "," + str(int(ysum / count)))
  return frame, t0

The full code, including saving a video capture file, can be found below.