Using Raspberry Pi camera (NOIR) and the power of Python and OpenCV to build a home surveillance camera – Part 3 of 4 – Using OpenCV motion detection to trigger video recording, no PIR sensor needed

Motion detection script using the power of Python and OpenCV. Due to the problem of false triggering by the PIR motion sensor, I have decided to use OpenCV's motion detection feature to trigger the recording and storing process. The setup is even simpler: power, thumb drive and camera module. No PIR sensor. 👍

Simple setup, just power, camera module and USB thumb drive

To install a Python 3 library (e.g. pandas), type: sudo pip3 install pandas. Installing OpenCV on the Raspberry Pi, however, is not that straightforward, and it takes a few hours. I am using a Raspberry Pi Model 3B+ and it took me half a day to install OpenCV, following the steps from this website. (To me, this is the hardest and most tedious part of this project.) After that, we are ready to write the motion detection script.
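Before tackling the main script, a quick sanity check confirms the long installs actually succeeded. This is just a sketch: it only tests whether each library can be found, and `picamera` will naturally show as missing on anything other than a Raspberry Pi.

```python
import importlib.util

def module_available(name):
    """Return True if the named module can be imported."""
    return importlib.util.find_spec(name) is not None

# The motion detection script needs all three of these
for mod in ["cv2", "numpy", "picamera"]:
    print(mod, "OK" if module_available(mod) else "MISSING")
```

Run it with python3; if cv2 shows as MISSING, the OpenCV build or install did not complete.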

The script is from the Automatic Addison website, which is a fantastic site; please check it out. I made a few changes to trigger the video recording and to rotate the camera.

The Python OpenCV motion detection script ↓

from picamera.array import PiRGBArray # Generates a 3D RGB array
from picamera import PiCamera # Provides a Python interface for the RPi Camera Module
import time
import datetime
import cv2
import numpy as np

# Initialize the camera
camera = PiCamera()

# Rotate the camera 270 degrees (the same as -90)
camera.rotation = 270

# Set the camera resolution
camera.resolution = (640, 480)
#camera.resolution = (1280, 720)
#camera.resolution = (1640, 922)
#camera.resolution = (3280, 2464)

# Set the number of frames per second
# Resolution and frame rate affect the FOV
#camera.framerate = 60
camera.framerate = 30

# Generates a 3D RGB array from the camera object (source)
# and puts it into raw_capture
raw_capture = PiRGBArray(camera, size=(640, 480))
#raw_capture = PiRGBArray(camera, size=(1280, 720))
#raw_capture = PiRGBArray(camera, size=(1640, 922))
#raw_capture = PiRGBArray(camera, size=(3280, 2464))

# Create the background subtractor object
# Parameters are set to default values if left blank
# MOG2 comes with a detectShadows parameter, MOG does not.
#back_sub = cv2.createBackgroundSubtractorMOG2(history=150,
#  varThreshold=25, detectShadows=True)
back_sub = cv2.createBackgroundSubtractorMOG2(history=25,
  varThreshold=20, detectShadows=True)

# Wait a certain number of seconds to allow the camera time to warm up
time.sleep(1)

# Create kernel for morphological operation. You can tweak
# the dimensions of the kernel.
# e.g. instead of 20, 20, you can try 30, 30
# This is like a template running through fg_mask to smoothen the edges
kernel = np.ones((20, 20), np.uint8)

# Where the recorded clips are saved; change this to
# your own thumb drive's mount point
save_dir = '/media/pi/usbdrive/'

# Capture frames continuously from the camera
for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):
    # Grab the raw NumPy array representing the image
    image = frame.array

    # Get the foreground mask
    fg_mask = back_sub.apply(image)

    # Perform morphological operation (function cv2.morphologyEx aka Closing),
    # which is a combination of Dilation followed by Erosion.
    # Closing closes any small holes inside fg_mask, i.e. removes any small
    # black points in fg_mask. Closing is the opposite of Opening
    # (Erosion followed by Dilation).
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)

    # Remove the unnecessary details (salt-and-pepper noise); we need the outline only
    fg_mask = cv2.medianBlur(fg_mask, 5)

    # If a pixel is less than 127, it is considered black (background)
    # and set to 0; otherwise, it is white (foreground) and set to 255.
    # Modify the number after fg_mask as you see fit.
    # The 4th parameter is the threshold type: cv2.THRESH_BINARY means
    # "if less than the threshold set to black, else set to white".
    # The first return value (usually named ret) is not needed here.
    _, fg_mask = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)

    # Find the contours of the objects inside the binary image
    contours, hierarchy = cv2.findContours(fg_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
    areas = [cv2.contourArea(c) for c in contours]

    # If there are no contours
    if len(areas) < 1:
        # Display the resulting frame
        cv2.imshow('Frame', image)

        # Wait for a keypress for 1 millisecond
        key = cv2.waitKey(1) & 0xFF

        # Clear the stream in preparation for the next frame
        raw_capture.truncate(0)

        # If "q" is pressed on the keyboard, exit this loop
        if key == ord("q"):
            break

        # Go to the top of the for loop
        continue

    # Find the largest moving object in the image
    max_index = np.argmax(areas)

    # Draw the bounding box
    cnt = contours[max_index]
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 3)

    # Draw a dot at the centre of the bounding box and label its x-y
    # coordinates (these appear on the live preview only, not in the
    # recorded clip)
    x2 = x + int(w / 2)
    y2 = y + int(h / 2)
    cv2.circle(image, (x2, y2), 4, (0, 255, 0), -1)
    text = "x: " + str(x2) + ", y: " + str(y2)
    cv2.putText(image, text, (x2 - 10, y2 - 10),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # If the bounding box is tall enough, the moving object is probably a
    # person, so record a 3-second clip to the thumb drive with a
    # timestamped filename. The recording comes straight from the camera
    # (on a separate splitter port), so it has no green box overlay.
    #while w > 500 and h > 500:    # doesn't work
    if h > 200:
        date ="%d-%m-%Y_%H-%M-%S")
        camera.start_recording(save_dir + date + '.h264')
        camera.wait_recording(3)
        camera.stop_recording()

    # Display the resulting frame
    cv2.imshow('Frame', image)

    # Wait for a keypress for 1 millisecond
    key = cv2.waitKey(1) & 0xFF

    # Clear the stream in preparation for the next frame
    raw_capture.truncate(0)

    # If "q" is pressed on the keyboard, exit this loop
    if key == ord("q"):
        break

# Close down windows
cv2.destroyAllWindows()

Video explaining the workings of the script ↓. The recorded video (second video below) will not have the green rectangle box or the x-y coordinates.

This is the recorded video of the man walking by, saved to the thumb drive.
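Each clip is named with ``, so the filename carries the moment recording started. A small sketch of how that format string behaves (the example dates here are made up). One caveat worth knowing: because the day comes first in "%d-%m-%Y", a plain alphabetical sort of the filenames is not chronological across months, so sort by parsing the stamp back instead:

```python
import datetime

# Same format string as in the script: day-month-year_hour-minute-second
FMT = "%d-%m-%Y_%H-%M-%S"

stamp = datetime.datetime(2021, 5, 7, 14, 30, 5).strftime(FMT)
print(stamp + ".h264")  # 07-05-2021_14-30-05.h264

# Alphabetical order would put the June clip first; parsing sorts correctly
names = ["02-05-2021_10-00-00", "01-06-2021_09-00-00"]
ordered = sorted(names, key=lambda s: datetime.datetime.strptime(s, FMT))
print(ordered)  # May clip first, then June
```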

The two videos below show the continuity between two back-to-back recorded clips. ↓

First 3-second clip
Second 3-second clip