Car Speed Detector

(2016-12-18 Version 2 is now on github. It fixes one bug, makes the code style more consistent, and adds a few additional options for tracking the results.)

It started with a Facebook post from my brother-in-law:

“Okay Facebook, I need your help.
How do you deal with A$$hole’s that live on your street and constantly speed (75 in a 25) up and down the road?
I’ve called the sheriff and they are under manned. I can call the state highway patrol but they are probably in the same boat.
Short of shooting them as they drive by, what do you recommend to get the $#&@! from hell to slow the F down?”

Seventy-five? Really bro? But that got me thinking – could you document the speed of cars on a residential street to add support for police intervention?

Hmmm. I have a Raspberry Pi and a Pi Camera module. Ought to be able to use them to measure a car’s speed. What follows is my implementation of an application that records images with the speed of cars passing in front of the camera.

[Image: carspeed sample capture, labeled with the measured speed]

Requirements:

This is mostly a software implementation so the hardware required is simple:

Raspberry Pi Model 2  (the Pi Model 3 will work even better, but I don’t have one to test)
Pi Camera

The software required is pretty straightforward too:
Raspbian
OpenCV
Python

Steps:

Install Raspbian on the Raspberry Pi. This has been covered in many places on the net. At this point in time Jessie is the current release, and I recommend it for this project.

Install OpenCV 3 and Python 3 on the Pi. Thanks go to Adrian Rosebrock for these great instructions on getting OpenCV and Python installed on a Pi:
http://www.pyimagesearch.com/2015/10/26/how-to-install-opencv-3-on-raspbian-jessie/ My system is using Python 3.4.1 and OpenCV 3.1.0 on a Raspberry Pi 2 Model B.

Copy the carspeed.py program to your /home/pi directory. My program is based on the motion detection program found on the pyimagesearch.com site with modifications for speed detection. Not only does the program need to detect motion, it needs to time the moving car as it passes across the camera’s field of view, and it needs to know how far the car traveled.

Obviously the horizontal distance that the camera sees one foot from the lens is very different than the horizontal distance it sees 50 feet from the lens. Using the camera’s field of view and a bit of trigonometry, the ‘width’ at any distance from the lens can be computed:

[Diagram: Picamera field of view and distance-to-road geometry]

The field of view (FOV) of the Picamera is about 53.5 degrees. Let’s say the road is about 60 feet (D) from our camera. The horizontal distance (C) covered by the image at a distance 60 feet from the lens would be:

2*60*tan(53.5 * 0.5)
= 120*tan(26.75)
= 120*0.504
≈ 60 feet

So it just happens that the horizontal distance covered by the Picamera’s image is roughly equal to the distance from the lens. If you are 10 feet from the lens, the image is about 10 feet across, 47 feet from the lens, about 47 feet across, and so on.

Of course, other cameras may have a different field of view and won’t have this easy-to-determine correspondence.

Once the horizontal distance is known, dividing it by the number of pixels in the width of the frame gives the distance each pixel represents. The speed can be calculated from the time it takes for an object to traverse the pixels.
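
To make the arithmetic concrete, here is a minimal Python sketch of the whole calculation, using assumed numbers: a 640-pixel-wide frame, a road 60 feet from the lens, and a car that crosses 320 pixels in half a second.

import math

FOV = 53.5         # Picamera field of view in degrees
DISTANCE = 60      # feet from the lens to the road (assumed for this example)
IMAGEWIDTH = 640   # frame width in pixels

frame_width_ft = 2 * DISTANCE * math.tan(math.radians(FOV / 2))
ftperpixel = frame_width_ft / IMAGEWIDTH   # roughly 0.095 ft per pixel

pixels, secs = 320, 0.5                    # assumed pixel travel and elapsed time
fps = (pixels * ftperpixel) / secs         # feet per second
mph = fps * 0.681818                       # 1 ft/s is 0.681818 mph
print("%.0f mph" % mph)                    # prints: 41 mph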

This is what we will ask the motion detection part of the program to do:

  1. detect motion using the logic presented at the pyimagesearch site
  2. begin a timer
  3. track the moving object until it reaches the opposite side of the frame.
  4. calculate the speed
  5. save a picture of the image labeled with the speed calculated

The first version of the program was tested by pointing the camera at the street outside my front window. It immediately ran into a problem – the program worked too well. It tracked birds as they flew by, squirrels searching for food in the front yard, and my neighbor opening his garage door.

The program was supposed to watch the street, not the whole neighborhood!

I added logic allowing a mouse to be used to draw a box bounding the area of the image to monitor. That enhancement worked and eliminated nearly all of the extraneous motion detection.

But the program was having difficulty with cars traveling 35+ miles per hour. A car at that speed traveled through the monitored area so quickly that two or more readings were not possible. I profiled the program and found that displaying the image on the screen with the imshow and waitKey() commands was slowing the image processing. A quick code change stopped the image from updating while the car passed through the monitored area. It is a bit strange to see a car enter the frame, hit the edge of the monitored area, and disappear until it exits the monitored area, but that modification sped up the processing twofold and allowed measurement of higher speeds.

From my testing, the program is accurate to within +/- 1 mph for speeds up to 40 mph. Cars going faster than that are still recorded, but the speed is less accurate since the car passes through the frame so quickly. The program is limited to processing one car at a time, so if you live on a busy highway, it won’t give accurate results. The Raspberry Pi 3 was announced while this post was being written. Given its increased processing power, measurement should be more accurate at higher speeds.
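
If you want to check the display cost on your own hardware, a quick way is to time the imshow/waitKey pair with a monotonic clock. This is a debugging sketch, not part of carspeed.py; the window name and stand-in frame are arbitrary:

import time
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
cv2.namedWindow("timing test")

t0 = time.monotonic()
cv2.imshow("timing test", frame)
cv2.waitKey(1)
t1 = time.monotonic()
print("imshow + waitKey took %.1f ms" % ((t1 - t0) * 1000))
cv2.destroyAllWindows()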

Usage

Point the Picamera at the road. Before you run carspeed.py, modify the constant DISTANCE to the distance from the front of the Picamera lens to the middle of the road. You may also need to adjust the vflip and hflip settings to match your camera’s orientation.

Run from a terminal with:
python carspeed.py

Use a mouse to draw a rectangle around the area you wish to monitor. I recommend a height just sufficient to capture the whole car and a width about one half the frame, centered. Press ‘c’ and the program will begin monitoring the road.

As cars pass through the monitored area, an image will be written to disk with the speed.

Exit with a press of the ‘q’ key.

The Car Speed Program

The program code can be downloaded from here. What follows is a description of the carspeed.py logic.

# CarSpeed Version 2.0

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import math
import datetime
import cv2

The program starts with the import of packages used.


# place a prompt on the displayed image
def prompt_on_image(txt):
    global image
    cv2.putText(image, txt, (10, 35),
        cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

# calculate speed from pixels and time
def get_speed(pixels, ftperpixel, secs):
    if secs > 0.0:
        return ((pixels * ftperpixel)/ secs) * 0.681818
    else:
        return 0.0

# calculate elapsed seconds
def secs_diff(endTime, begTime):
    diff = (endTime - begTime).total_seconds()
    return diff

# record speed in .csv format
def record_speed(res):
    global csvfileout
    f = open(csvfileout, 'a')
    f.write(res+"\n")
    f.close

Next, a few methods and functions are defined. The method “prompt_on_image” simply formats and displays a message on the image. The function “get_speed” returns the speed based on the number of pixels traversed in a given time (substitute 3.6 for the 0.681818 value if you are working with meters and kph rather than feet and mph). The function “secs_diff” returns the number of seconds between two times. And finally, “record_speed” appends a result line to the .csv file.
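
A quick sanity check of the helpers, with made-up values (0.0945 ft per pixel corresponds roughly to a road 60 feet from the lens):

import datetime

beg = datetime.datetime(2016, 12, 18, 12, 0, 0)
end = datetime.datetime(2016, 12, 18, 12, 0, 0, 500000)   # half a second later

secs = secs_diff(end, beg)                 # 0.5
mph = get_speed(320, 0.0945, secs)         # 320 pixels traversed
print("%.2f s -> %.0f mph" % (secs, mph))  # prints: 0.50 s -> 41 mph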


# mouse callback function for drawing capture area
def draw_rectangle(event,x,y,flags,param):
    global ix,iy,fx,fy,drawing,setup_complete,image, org_image, prompt

    if event == cv2.EVENT_LBUTTONDOWN:
        drawing = True
        ix,iy = x,y

    elif event == cv2.EVENT_MOUSEMOVE:
        if drawing == True:
            image = org_image.copy()
            prompt_on_image(prompt)
            cv2.rectangle(image,(ix,iy),(x,y),(0,255,0),2)

    elif event == cv2.EVENT_LBUTTONUP:
        drawing = False
        fx,fy = x,y
        image = org_image.copy()
        prompt_on_image(prompt)
        cv2.rectangle(image,(ix,iy),(fx,fy),(0,255,0),2)

The draw_rectangle method handles the mouse events that are used to define the monitored area of the image. It simply lets the user draw a rectangle on the image. The image is refreshed from the original so that the rectangle expands as the mouse moves. If this isn’t done, the mouse moves result in a solid, filled-in rectangle as the rectangles are drawn one on top of the other.
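
If you want to experiment with this refresh-and-redraw pattern outside of carspeed.py, a self-contained sketch like the following works on a blank image (the window name and image size are arbitrary):

import cv2
import numpy as np

drawing = False
ix, iy = -1, -1

def on_mouse(event, x, y, flags, param):
    global drawing, ix, iy, image
    if event == cv2.EVENT_LBUTTONDOWN:
        drawing, ix, iy = True, x, y
    elif event == cv2.EVENT_MOUSEMOVE and drawing:
        image = org_image.copy()      # refresh so the rectangle expands cleanly
        cv2.rectangle(image, (ix, iy), (x, y), (0, 255, 0), 2)
    elif event == cv2.EVENT_LBUTTONUP:
        drawing = False
        image = org_image.copy()
        cv2.rectangle(image, (ix, iy), (x, y), (0, 255, 0), 2)

org_image = np.zeros((480, 640, 3), dtype=np.uint8)
image = org_image.copy()
cv2.namedWindow("rectangle test")
cv2.setMouseCallback("rectangle test", on_mouse)
while cv2.waitKey(1) & 0xFF != ord("q"):   # press 'q' to quit
    cv2.imshow("rectangle test", image)
cv2.destroyAllWindows()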


# define some constants
DISTANCE = 76  #<---- enter your distance-to-road value here
MIN_SPEED = 0  #<---- enter the minimum speed for saving images
SAVE_CSV = False  #<---- record the results in .csv format in carspeed_(date).csv

THRESHOLD = 15
MIN_AREA = 175
BLURSIZE = (15,15)
IMAGEWIDTH = 640
IMAGEHEIGHT = 480
RESOLUTION = [IMAGEWIDTH,IMAGEHEIGHT]
FOV = 53.5    #<---- Field of view
FPS = 30
SHOW_BOUNDS = True
SHOW_IMAGE = True

Next comes the definition of the constants used in the program. You will need to estimate the distance from the camera to the center of the road and enter it in the DISTANCE constant. You can specify the minimum speed required before images are saved, and whether the speeds should be saved in a .csv file for later analysis. The values in THRESHOLD, MIN_AREA and BLURSIZE were what worked best during testing.


# the following enumerated values are used to make the program more readable
WAITING = 0
TRACKING = 1
SAVING = 2
UNKNOWN = 0
LEFT_TO_RIGHT = 1
RIGHT_TO_LEFT = 2

I prefer using enumerated values to make a program easier to read. This section defines the ones used here. The first three track the current state of the tracking process. The next three define the direction of movement on the image. The values assigned are not significant.


# calculate the width of the image at the distance specified
frame_width_ft = 2*(math.tan(math.radians(FOV*0.5))*DISTANCE)
ftperpixel = frame_width_ft / float(IMAGEWIDTH)
print("Image width in feet {} at {} from camera".format("%.0f" % frame_width_ft,"%.0f" % DISTANCE))

That completes the initialization of the program’s constants and methods. Now it is time to calculate the frame width and the ft per pixel. Note: the same logic works for a distance defined in meters – it just results in meters per pixel.
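
With the default DISTANCE of 76 feet, the numbers work out like this in an interactive session:

>>> import math
>>> frame_width_ft = 2 * (math.tan(math.radians(53.5 * 0.5)) * 76)
>>> round(frame_width_ft, 1)
76.6
>>> round(frame_width_ft / 640.0, 3)
0.12

So at 76 feet, each of the 640 pixels across the frame represents roughly 0.12 feet.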


# state maintains the state of the speed computation process
# it starts as WAITING
# the first motion detected sets it to TRACKING

# if it is tracking and no motion is found or the x value moves
# out of bounds, state is set to SAVING and the speed of the object
# is calculated
# initial_x holds the x value when motion was first detected
# last_x holds the last x value before tracking was halted
# depending upon the direction of travel, the front of the
# vehicle is either at x, or at x+w
# (tracking_end_time - tracking_start_time) is the elapsed time
# from these the speed is calculated and displayed

state = WAITING
direction = UNKNOWN
initial_x = 0
last_x = 0

#-- other values used in program
base_image = None
abs_chg = 0
mph = 0
secs = 0.0
ix,iy = -1,-1
fx,fy = -1,-1
drawing = False
setup_complete = False
tracking = False
text_on_image = 'No cars'
prompt = ''

The initialization of global variables used throughout the program comes next.


# initialize the camera. Adjust vflip and hflip to reflect your camera's orientation
camera = PiCamera()
camera.resolution = RESOLUTION
camera.framerate = FPS
camera.vflip = True
camera.hflip = True

rawCapture = PiRGBArray(camera, size=camera.resolution)
# allow the camera to warm up
time.sleep(0.9)

And the initialization of the Picamera.


# create an image window and place it in the upper left corner of the screen
cv2.namedWindow("Speed Camera")
cv2.moveWindow("Speed Camera", 10, 40)

We’ll want to see that the program is processing, so a window is created and moved to the upper left corner of the display.


# call the draw_rectangle routines when the mouse is used
cv2.setMouseCallback('Speed Camera',draw_rectangle)

# grab a reference image to use for drawing the monitored area's boundary
camera.capture(rawCapture, format="bgr", use_video_port=True)
image = rawCapture.array
rawCapture.truncate(0)
org_image = image.copy()

if SAVE_CSV:
    csvfileout = "carspeed_{}.csv".format(datetime.datetime.now().strftime("%Y%m%d_%H%M"))
    record_speed('Date,Day,Time,Speed,Image')
else:
    csvfileout = ''

prompt = "Define the monitored area - press 'c' to continue"
prompt_on_image(prompt)

# wait while the user draws the monitored area's boundary
while not setup_complete:
    cv2.imshow("Speed Camera",image)

    # wait for 'c' to be pressed
    key = cv2.waitKey(1) & 0xFF

    # if the `c` key is pressed, break from the loop
    if key == ord("c"):
        break

To keep extraneous movement from triggering speed calculations, the user must define the area of the image that the program should monitor. This is accomplished by sending mouse events to the method draw_rectangle. After setting the mouse callback, the program captures one image to be used for drawing and duplicates it with the image.copy() statement. The image is refreshed from this original as the rectangle is drawn; if it weren’t, the user would see a solid green rectangle rather than a green outline. The program displays the image and then waits for the user to draw the rectangle around the monitored area. When the user is done, they press the ‘c’ key to continue.


# the monitored area is defined, time to move on
prompt = "Press 'q' to quit" 

# since the monitored area's bounding box could be drawn starting
# from any corner, normalize the coordinates

if fx > ix:
    upper_left_x = ix
    lower_right_x = fx
else:
    upper_left_x = fx
    lower_right_x = ix

if fy > iy:
    upper_left_y = iy
    lower_right_y = fy
else:
    upper_left_y = fy
    lower_right_y = iy

monitored_width = lower_right_x - upper_left_x
monitored_height = lower_right_y - upper_left_y

print("Monitored area:")
print(" upper_left_x {}".format(upper_left_x))
print(" upper_left_y {}".format(upper_left_y))
print(" lower_right_x {}".format(lower_right_x))
print(" lower_right_y {}".format(lower_right_y))
print(" monitored_width {}".format(monitored_width))
print(" monitored_height {}".format(monitored_height))
print(" monitored_area {}".format(monitored_width * monitored_height))

The user can start drawing the rectangle from any corner, so once the monitored area is defined, the initial and final x and y points are normalized so that they can be used in calculating direction and distance. The values that result are displayed in the terminal.
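
For reference, the four if/else branches above are equivalent to a compact min/max pair (an alternative phrasing, not what the program actually uses):

# normalize the rectangle corners regardless of drawing direction
upper_left_x, lower_right_x = min(ix, fx), max(ix, fx)
upper_left_y, lower_right_y = min(iy, fy), max(iy, fy)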


# capture frames from the camera using capture_continuous.
#   This keeps the picamera in capture mode - it doesn't need
#   to prep for each frame's capture.

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    #initialize the timestamp
    timestamp = datetime.datetime.now()

    # grab the raw NumPy array representing the image
    image = frame.array

    # crop area defined by [y1:y2,x1:x2]
    gray = image[upper_left_y:lower_right_y,upper_left_x:lower_right_x]

    # convert the frame to grayscale, and blur it
    gray = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, BLURSIZE, 0)

    # if the base image has not been defined, initialize it
    if base_image is None:
        base_image = gray.copy().astype("float")
        lastTime = timestamp
        rawCapture.truncate(0)
        cv2.imshow("Speed Camera", image)

    # compute the absolute difference between the current image and
    # base image and then turn everything lighter gray than THRESHOLD into
    # white
    frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(base_image))
    thresh = cv2.threshold(frameDelta, THRESHOLD, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in any holes, then find contours
    # on thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)

Finally we are at the meat of the program. Using capture_continuous, the program repeatedly grabs a frame and operates on it. capture_continuous is used so that the Picamera doesn’t go through the initialization process required when capturing one frame at a time. The image is cropped to the area that the user defined for monitoring. Using the logic discussed on the pyimagesearch site, the image is converted to grayscale and blurred. The first time through, the program saves the image as base_image. base_image is then compared to the current image to see what has changed. At this point, any differences between the captured image and base_image are represented by blobs of white in the thresholded image.
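
While tuning THRESHOLD and BLURSIZE it can help to watch each stage of the transformation. Dropping a few extra imshow calls into the loop does the trick (debug-only lines; they are not in the released program):

    # show the intermediate stages alongside the main window
    cv2.imshow("gray", gray)
    cv2.imshow("delta", frameDelta)
    cv2.imshow("thresh", thresh)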


    # look for motion
    motion_found = False
    biggest_area = 0

    # examine the contours, looking for the largest one
    for c in cnts:
        (x1, y1, w1, h1) = cv2.boundingRect(c)
        # get an approximate area of the contour
        found_area = w1*h1
        # find the largest bounding rectangle
        if (found_area > MIN_AREA) and (found_area > biggest_area):
            biggest_area = found_area
            motion_found = True
            x = x1
            y = y1
            h = h1
            w = w1

Next the program looks for the largest white blob in the thresholded image using findContours. Small white blobs are ignored, as they can happen at random or may represent a leaf or other small object traveling through the monitored area. The process of grabbing an image and looking for motion continues until motion is detected.
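
The loop above amounts to keeping the bounding rectangle with the largest area. Here is an equivalent sketch using max() (an alternative, not the original code):

    # keep only rectangles big enough to matter, then take the biggest
    rects = [cv2.boundingRect(c) for c in cnts]
    rects = [r for r in rects if r[2] * r[3] > MIN_AREA]
    if rects:
        (x, y, w, h) = max(rects, key=lambda r: r[2] * r[3])
        motion_found = True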


    if motion_found:
        if state == WAITING:
            # initialize tracking
            state = TRACKING
            initial_x = x
            last_x = x
            initial_time = timestamp
            last_mph = 0
            text_on_image = 'Tracking'
            print(text_on_image)
            print("x-chg    Secs      MPH  x-pos width")
        else:

The first time motion is detected, the state changes from WAITING to TRACKING and the initial values of the area-in-motion are recorded.

            # compute the elapsed time
            secs = secs_diff(timestamp,initial_time)

            if secs >= 15:
                state = WAITING
                direction = UNKNOWN
                text_on_image = 'No Car Detected'
                motion_found = False
                biggest_area = 0
                rawCapture.truncate(0)
                base_image = None
                print('Resetting')
                continue

If the camera gets bumped or the lighting in the monitored area changes dramatically, processing is reset after 15 seconds of garbage readings. (Thanks to RawLiquid for suggesting this change.)


            if state == TRACKING:
                if x >= last_x:
                    direction = LEFT_TO_RIGHT
                    abs_chg = x + w - initial_x
                else:
                    direction = RIGHT_TO_LEFT
                    abs_chg = initial_x - x
                mph = get_speed(abs_chg,ftperpixel,secs)
                print("{0:4d}  {1:7.2f}  {2:7.0f}   {3:4d}  {4:4d}".format(abs_chg,secs,mph,x,w))
                real_y = upper_left_y + y
                real_x = upper_left_x + x
                # is the front of the object outside the monitored boundary? Then write the date, time
                # and speed on the image and save it
                if ((x <= 2) and (direction == RIGHT_TO_LEFT)) \
                        or ((x+w >= monitored_width - 2) \
                        and (direction == LEFT_TO_RIGHT)):
                    if (last_mph > MIN_SPEED):    # save the image
                        # timestamp the image
                        cv2.putText(image, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                            (10, image.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 1)
                        # write the speed: first get the size of the text
                        size, base = cv2.getTextSize( "%.0f mph" % last_mph, cv2.FONT_HERSHEY_SIMPLEX, 2, 3)
                        # then center it horizontally on the image
                        cntr_x = int((IMAGEWIDTH - size[0]) / 2)
                        cv2.putText(image, "%.0f mph" % last_mph,
                            (cntr_x , int(IMAGEHEIGHT * 0.2)), cv2.FONT_HERSHEY_SIMPLEX, 2.00, (0, 255, 0), 3)
                        # and save the image to disk
                        imageFilename = "car_at_" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S") + ".jpg"
                        # use the following image file name if you want to be able to sort the images by speed
                        #imageFilename = "car_at_%02.0f" % last_mph + "_" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S") + ".jpg"

                        cv2.imwrite(imageFilename,image)
                        if SAVE_CSV:
                            cap_time = datetime.datetime.now()
                            record_speed(cap_time.strftime("%Y.%m.%d")+','+cap_time.strftime('%A')+','+\
                               cap_time.strftime('%H%M')+','+("%.0f" % last_mph) + ','+imageFilename)
                    state = SAVING
                # if the object hasn't reached the end of the monitored area, just remember the speed
                # and its last position
                last_mph = mph
                last_x = x
    else:
        if state != WAITING:
            state = WAITING
            direction = UNKNOWN
            text_on_image = 'No Car Detected'
            print(text_on_image)

With a state of TRACKING, the second and subsequent images with motion are processed to see how far the area-in-motion has moved. The calculation of the change in position depends upon the direction of movement. From right to left, the x value of the box bounding the area-in-motion represents the front of the car as it passes through the monitored area. But for motion from left to right, the x value won’t change until the entire car has entered the monitored area. The bounding box grows wider as more and more of the car enters, until finally it encloses the rear of the car. Thus the front of the car is at x+w, where w is the width of the bounding box enclosing the area-in-motion.

Once we have the current position of the front of the car, we calculate the absolute change in pixels from the initial x position. The time interval between the current frame and the initial frame provides the seconds that have elapsed. From time and distance, the speed is calculated by get_speed.

This process continues until the area-in-motion’s bounding box reaches the opposite end of the monitored area. At that point, the date and time are written to the image, the last speed is displayed centered on the image, the image is written to disk, and the state is changed to SAVING. The last speed is used because the front of the car will have traveled beyond the monitored boundary, corrupting the final x value. The program will continue to see motion as the car exits the monitored area, but since the state is not WAITING or TRACKING, that motion is ignored.

If no motion is detected, the state returns to WAITING.


    # only update image and wait for a keypress when waiting for a car
    # This is required since waitkey slows processing.
    if (state == WAITING):    

        # draw the text and timestamp on the frame
        cv2.putText(image, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
            (10, image.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 1)
        cv2.putText(image, "Road Status: {}".format(text_on_image), (10, 20),
            cv2.FONT_HERSHEY_SIMPLEX,0.35, (0, 0, 255), 1)

        if SHOW_BOUNDS:
            #define the monitored area right and left boundary
            cv2.line(image,(upper_left_x,upper_left_y),(upper_left_x,lower_right_y),(0, 255, 0))
            cv2.line(image,(lower_right_x,upper_left_y),(lower_right_x,lower_right_y),(0, 255, 0))

        # show the frame and check for a keypress
        if SHOW_IMAGE:
            prompt_on_image(prompt)
            cv2.imshow("Speed Camera", image)

        # Adjust the base_image as lighting changes through the day
        if state == WAITING:
            last_x = 0
            cv2.accumulateWeighted(gray, base_image, 0.25)

        state = WAITING
        key = cv2.waitKey(1) & 0xFF

        # if the `q` key is pressed, break from the loop and terminate processing
        if key == ord("q"):
            break

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

# cleanup the camera and close any open windows
cv2.destroyAllWindows()

The image window is only updated if no motion is detected. If the state is WAITING, the date, time and status are added to the current image. The cv2.accumulateWeighted call adjusts base_image slightly to account for lighting changes in the monitored area; these changes result from passing clouds, the changing angle of shadows, blowing leaves, and so on. The keyboard is checked for a press of “q” indicating the program should be terminated.
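
For reference, cv2.accumulateWeighted maintains a running average; with an alpha of 0.25, each call is equivalent to this update (a sketch of the underlying math, not a code change):

# what cv2.accumulateWeighted(gray, base_image, 0.25) computes, in effect
base_image = 0.25 * gray.astype("float") + 0.75 * base_image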



124 thoughts on “Car Speed Detector”

  1. Hi Greg. This fills a wonderful void that I am sure would interest a lot of people with a similar issue…speeding neighbors. Do you sell this as a “ready to use” kit?

    • This exercise was meant to prove that speed measurement was possible with a Pi. It was a learning experience for me and I share it so that others can benefit from the result. Once you have a Pi with OpenCV and Python installed, you only need to download the program to use it. Also, selling it as a kit would imply the results are accurate, which may not be the case, since so many factors can throw off the final reading.

    • Hi there, based on this excellent guide I bought a PiCam, upgraded the OS, and installed OpenCV, which was time-consuming but interesting – unfortunately I could never get the code to run. So I looked elsewhere and found there’s an iOS app that does much the same and works really well:
      https://itunes.apple.com/gb/app/speedclock-video-radar/id400876654?mt=8

      Hope that’s ok Greg – don’t want to detract from your excellent project!

      • Sorry to hear you couldn’t get the code to run. If you post questions here or on the carspeed github site, someone may be able to help you. The app looks interesting, and seems to use the same concept, but where is the fun (and education) in buying an app you can create yourself?

  2. Hi Greg, I don’t know if you still monitor or get these replies, but I was wondering: is it possible to set up a 2nd monitoring area and track both areas?

    • Since there was a bug in the initial version, I’ve replaced it completely. WordPress doesn’t give me a way to start a new comment thread. However, feel free to start a new topic on the Carspeed github site if you would like. I’ve closed the existing issue threads on github so that all new comments or questions have a starting point after the update.

    • Carspeed can only measure cars passing horizontally, perpendicular to the camera. If the camera is perpendicular to the road but the road isn’t horizontal in the frame, just rotate the camera to make it horizontal. If that isn’t satisfactory, you will need to enhance the program to track the motion of an object at an angle and calculate its speed.

  3. Hi Gregtinkers, I have two questions:
    1. How do I capture the image in high resolution? For example, when I do a manual capture with silvanmelchior’s RPi_Cam_Web_Interface, the captured image comes out in high resolution (“Image Res: 2592 x 1944”) at 2.44 MB, and the image is clear enough that the plate is visible. How do I get carspeed.py to capture its images at a similar resolution, so the plate can be read? The images it saves are only 256 KB. How and where do I make them clearer?

    2. How do I rotate the camera image 90 or 270 degrees through “vs.camera.rotation”, so that I can handle the rectangle drawn horizontally without having to physically move or rotate the camera?

  4. Unfortunately unreliable

    This is a very interesting project, but unfortunately about 15% of the cars going from left to right get measured wrong here. The speed is much too high, sometimes ridiculously high.

    I live on a narrow road with a speed limit of 30 km/h (approximately 20 mph). But this is the old main road through the small village I live in, and it is absolutely straight and the shortest connection between two inter-city routes. So many cars drive too fast, and I wanted to get some numerical values. Unfortunately the output of this program is so unreliable that it is of no use to me 😦
    When I look at the shell window, I see the different speed readings the program takes for every passing car, and the values range from -12 to 60 with everything in between. In most cases the value in the photo is quite close to what I would have guessed, but in too many cases the speed is obviously completely wrong.
    Can I somehow “see” what the image tracking tracks? I’m only 13 meters (about 43 feet) away from the road, and a car usually passes my window in about 0.8 seconds. Are these values a problem in any way? The background of the picture is mostly a house on the other side of the road.
    It would be useful to see the captured frames and the reference point that was taken for the speed measurement.

    • Are you using a Raspberry Pi 3? How large an area, in pixels, are you monitoring? Did you change the camera resolution? Remember, image processing taxes the Pi to its limits, so being accurate 85% of the time may be the best you can hope for. You are welcome to improve upon the logic to make it more efficient. You may want to check out this version, which uses threads to speed up the image processing.

      You can display the image at any point using the cv2.imshow command. During development I had several windows open to display the image transformations as they occurred.

      • Hello and thank you for your answer. I use a Raspberry Pi 2 (I don’t have a Model 3 here). The area I’m monitoring is probably half of the picture width (there are some poles I didn’t want in the picture, which is why I use such a small region). So the width is about 320 pixels; the height may be 20–25% of the width. A car fits into this window approximately 2–2.5 times in length.

        I used your project because I thought it would be easier, faster and cheaper than this one: http://blog.durablescope.com/post/BuildASpeedCameraAndTrafficLogger/

        Before I try to understand and modify your brilliant work, I will try to get information about the precision I can expect from the radar approach.

  5. Hello again,

    It is the weekend, so I spent some more time with this very interesting project. I observed another car that was measured as much too fast, and had a look at the shell window.

    As you can see, the first car was only tracked two times. But it wasn’t going faster than approximately 40 km/h (25 mph), quite a normal speed. As far as I understand from the comments in the code, the saved picture should show the car when it hits the border of the drawn frame. But this saved picture shows the front of the car in the middle of the drawn frame!

    No Car Detected
    Tracking
    x-chg Secs MPH x-pos width
    118 0.09 65 125 176
    243 0.23 53 0 301
    No Car Detected
    Tracking
    x-chg Secs MPH x-pos width
    47 0.14 17 189 112
    93 0.20 24 143 158
    156 0.36 22 80 221
    216 0.50 22 20 281
    236 0.65 19 0 301
    No Car Detected
    Tracking
    x-chg Secs MPH x-pos width
    46 0.09 26 196 105
    115 0.19 31 127 174
    242 0.33 37 0 301
    No Car Detected
    Tracking
    x-chg Secs MPH x-pos width
    45 0.09 25 223 78
    82 0.19 22 186 115
    114 0.30 20 154 147
    150 0.40 19 118 183
    185 0.50 19 83 207
    223 0.64 18 45 209
    259 0.72 19 9 213
    268 0.87 16 0 177
    No Car Detected
    Tracking
    x-chg Secs MPH x-pos width
    48 0.15 17 0 48
    85 0.21 21 0 85
    124 0.35 18 0 124
    164 0.41 21 0 164
    202 0.51 20 0 202
    239 0.61 20 7 232
    277 0.71 20 48 229
    301 0.81 19 89 212
    No Car Detected

    • The first vehicle must have been moving fairly quickly, since it covered the monitored area in 0.14 seconds. The last two vehicles passed through the monitored area in 0.87 and 0.81 seconds respectively.

      The image from the last complete reading is saved. Since a vehicle may have exited the monitored area on the last frame, it is the second from the last frame that is used. Since there were only two readings on that first vehicle, it showed the first frame.

      Since you are fairly close to the road, try making the height of the monitored area smaller. The monitored area doesn’t need to completely contain the vehicle – a thin sliver passing through the monitored area is sufficient to detect. That may reduce the processing time and give you more frames for computing the speed.

  6. Root cause of wrong measurements found

    I modified the code so that all cropped images are stored while fast cars are in the tracking state. In each and every wrong measurement, two cars were involved. This was not visible in the single photos that are usually taken.
