Lane Line Detection

Click to view the advanced lane finding video

Advanced Lane Finding Project

The goals / steps of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify binary image ("birds-eye view").
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

Here I will consider the rubric points individually and describe how I addressed each point in my implementation.


Writeup

1. Provide a Writeup / README that includes all the rubric points and how you addressed each one.

You're reading it!

Camera Calibration

1. Briefly state how you computed the camera matrix and distortion coefficients. Provide an example of a distortion corrected calibration image.

The code for this step is contained in the second and third code cells of the notebook "Project.ipynb".

I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, objp is just a replicated array of coordinates, and objpoints will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. imgpoints will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.

I then used the output objpoints and imgpoints to compute the camera calibration and distortion coefficients using the cv2.calibrateCamera() function. I applied this distortion correction to the test image using the cv2.undistort() function and obtained this result:

(image: distortion-corrected chessboard calibration result)
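
A minimal sketch of this calibration step is below. The 9x6 inner-corner count and the camera_cal/ image path are assumptions for illustration, not necessarily what the notebook uses.

import glob
import cv2
import numpy as np

# Assumed chessboard layout and image location (not confirmed by the notebook)
nx, ny = 9, 6
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)  # (x, y) grid at z = 0

objpoints, imgpoints = [], []  # 3D points in world space, 2D points in image plane
for fname in glob.glob('camera_cal/*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    if found:  # only keep images where all corners were detected
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate once over all detections, then undistort any frame
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
undistorted = cv2.undistort(img, mtx, dist, None, mtx)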

Pipeline (single images)

1. Provide an example of a distortion-corrected image.

To demonstrate this step, I will describe how I apply the distortion correction to one of the test images like this one:
(image: distortion-corrected test image)

2. Describe how (and identify where in your code) you used color transforms, gradients or other methods to create a thresholded binary image. Provide an example of a binary image result.

I used a combination of color and gradient thresholds to generate a binary image. Here's an example of my output for this step.

I tried five color spaces:

HLS

(image: HLS channel comparison)

RGB

(image: RGB channel comparison)

HSV

(image: HSV channel comparison)

LAB

(image: LAB channel comparison)

LUV

(image: LUV channel comparison)

Finally, I chose the L channel of LUV and the B channel of LAB for the combination.

(image: combined LUV-L / LAB-B binary output)
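
A hedged sketch of this combination, assuming RGB input; the threshold ranges are illustrative guesses rather than the notebook's tuned values:

import cv2
import numpy as np

def combined_binary(rgb):
    # L of LUV responds strongly to white lines, B of LAB to yellow lines
    l_channel = cv2.cvtColor(rgb, cv2.COLOR_RGB2LUV)[:, :, 0]
    b_channel = cv2.cvtColor(rgb, cv2.COLOR_RGB2Lab)[:, :, 2]
    binary = np.zeros_like(l_channel)
    # Illustrative thresholds -- the notebook's tuned values may differ
    binary[(l_channel >= 215) | ((b_channel >= 145) & (b_channel <= 200))] = 1
    return binary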

3. Describe how (and identify where in your code) you performed a perspective transform and provide an example of a transformed image.

I applied the perspective transform before thresholding.

The code for my perspective transform includes a function called warp(), which appears in cell [9] of the file Project.ipynb. The warp() function takes as inputs an image (img), as well as source (src) and destination (dst) points. I chose to hardcode the source and destination points in the following manner:

h, w = res.shape[:2]
src = np.float32([(575, 465), (707, 465), (290, 660), (1010, 660)])
dst = np.float32([(260, 0), (w - 330, 0), (260, h), (w - 330, h)])
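
For reference, a minimal sketch of what warp() might look like, based on the description above (the actual cell [9] implementation may differ):

import cv2

def warp(img, src, dst):
    # Map the source quadrilateral onto the destination rectangle
    h, w = img.shape[:2]
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)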

This resulted in the following source and destination points, where h and w are the image height and width:

Source       Destination
575, 465     260, 0
707, 465     w-330, 0
290, 660     260, h
1010, 660    w-330, h

I verified that my perspective transform was working as expected by drawing the src and dst points onto a test image and its warped counterpart, and checking that the lane lines appear parallel in the warped image.

(image: src and dst points drawn on a test image and its warped counterpart)

Results for the test images:

(image: warped binary results for the test images)

I then took a histogram of the lower half of the binary warped image to locate the base x-position of each lane line:

(image: histogram of the binary warped image)

4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial?

Starting from the histogram peaks, I used a sliding-window search to collect the lane-line pixels and fit each line with a 2nd-order polynomial:

(image: sliding-window search over the binary warped image)

(image: lane pixels with fitted 2nd-order polynomials)
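
A condensed sketch of this search; nwindows=9, margin=100, and minpix=50 are common defaults for this project and are assumptions here, as are the function and variable names:

import numpy as np

def fit_lane_lines(binary_warped, nwindows=9, margin=100, minpix=50):
    h, w = binary_warped.shape[:2]
    # Histogram of the lower half gives the starting x of each line
    histogram = np.sum(binary_warped[h // 2:, :], axis=0)
    midpoint = w // 2
    left_current = int(np.argmax(histogram[:midpoint]))
    right_current = int(np.argmax(histogram[midpoint:])) + midpoint

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = h // nwindows
    left_inds, right_inds = [], []
    for window in range(nwindows):
        y_low = h - (window + 1) * window_height
        y_high = h - window * window_height
        good_left = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                     (nonzerox >= left_current - margin) &
                     (nonzerox < left_current + margin)).nonzero()[0]
        good_right = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                      (nonzerox >= right_current - margin) &
                      (nonzerox < right_current + margin)).nonzero()[0]
        left_inds.append(good_left)
        right_inds.append(good_right)
        # Re-center the next window on the mean x of the pixels just found
        if len(good_left) > minpix:
            left_current = int(nonzerox[good_left].mean())
        if len(good_right) > minpix:
            right_current = int(nonzerox[good_right].mean())

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)
    # Fit x = A*y^2 + B*y + C for each line
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit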

5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center.

I did this by calling:

l, r, d = get_curvature(leftx, lefty, rightx, righty, ploty, binary_warped.shape)

The get_curvature() function is defined in cell [21] of Project.ipynb; l and r are the radii of curvature of the left and right lines, and d is the vehicle's offset from the lane center.
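
A hedged sketch of what get_curvature() likely computes. The radius formula is \(R = (1 + (2Ay + B)^2)^{3/2} / |2A|\), and the pixel-to-meter scales (30/720 and 3.7/700) are the values commonly used for this project, assumed here rather than taken from the notebook:

import numpy as np

def get_curvature(leftx, lefty, rightx, righty, ploty, shape,
                  ym_per_pix=30 / 720, xm_per_pix=3.7 / 700):
    y_eval = np.max(ploty) * ym_per_pix  # evaluate at the bottom of the image

    # Refit both polynomials in world space (meters), then apply
    # R = (1 + (2Ay + B)^2)^(3/2) / |2A|
    left_fit = np.polyfit(lefty * ym_per_pix, leftx * xm_per_pix, 2)
    right_fit = np.polyfit(righty * ym_per_pix, rightx * xm_per_pix, 2)
    l = (1 + (2 * left_fit[0] * y_eval + left_fit[1]) ** 2) ** 1.5 / abs(2 * left_fit[0])
    r = (1 + (2 * right_fit[0] * y_eval + right_fit[1]) ** 2) ** 1.5 / abs(2 * right_fit[0])

    # Vehicle offset: image center minus the lane midpoint at the bottom row
    lane_center = (leftx[np.argmax(lefty)] + rightx[np.argmax(righty)]) / 2
    d = (shape[1] / 2 - lane_center) * xm_per_pix
    return l, r, d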

6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.

I implemented this step in the function draw_lanes_on_image() in cell [26] of Project.ipynb. Here is an example of my result on a test image:

(image: detected lane area drawn back onto the road)
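
A minimal sketch of a draw-back step like draw_lanes_on_image(); the inverse perspective matrix Minv and the fitted x-arrays left_fitx/right_fitx are assumed inputs, not names confirmed by the notebook:

import cv2
import numpy as np

def draw_lanes_on_image(img, binary_warped, left_fitx, right_fitx, ploty, Minv):
    # Paint the lane polygon in the warped (birds-eye) space
    color_warp = np.zeros((*binary_warped.shape[:2], 3), dtype=np.uint8)
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))
    cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))

    # Warp back to the original perspective and blend with the input frame
    newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
    return cv2.addWeighted(img, 1, newwarp, 0.3, 0)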


Pipeline (video)

1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (wobbly lines are ok but no catastrophic failures that would cause the car to drive off the road!).

Here's a link to my video result.


Discussion

1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?

While implementing this project, I ran into several problems and added a couple of safeguards:

  1. Optimizing the hardcoded perspective transform consumed a lot of time.
  2. Choosing a channel in some of the color spaces took some experimentation.
  3. The threshold values needed careful fine-tuning.
  4. I smooth the lane lines across frames by blending each new fit with the previous one as \(0.6 \times \mathrm{previous} + 0.4 \times \mathrm{current}\) (see the sketch after this list).
  5. If the radius of curvature is less than 400, the detection is treated as invalid and is replaced by the previous detection.
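
A minimal sketch of the smoothing and validity check from items 4 and 5; the function name and the prev_fit/curr_fit variables are illustrative:

import numpy as np

def smooth_fit(prev_fit, curr_fit, curvature, alpha=0.6, min_curvature=400):
    # Item 5: an implausibly tight curve means an invalid detection,
    # so keep the previous one instead
    if prev_fit is not None and curvature < min_curvature:
        return prev_fit
    if prev_fit is None:
        return curr_fit
    # Item 4: blend 0.6 * previous + 0.4 * current to smooth across frames
    return alpha * np.asarray(prev_fit) + (1 - alpha) * np.asarray(curr_fit)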

This pipeline is likely to fail when a lane line runs close to another parallel line, such as a fence or a car tire. Lighting conditions and sharp curvature also strongly affect this method.

I think the state of the art in lane line detection uses fully convolutional networks (FCNs) to produce an image segmentation, which should yield more robust results with the power of CNNs.