
Face Detection with Python using OpenCV

So I have been continuing on with the live edge detection I looked at a few weeks ago. I have made that code a lot more object oriented and hopefully reusable. I am now using both pygame and OpenCV built from SVN instead of from the Ubuntu repositories. I wanted independence between the image rendering, the webcam capturing and the image processing, so I needed to convert between a NumPy array (which pygame and any SciPy processing use) and cvMat, which is OpenCV's data type. This was not immediately obvious, as the opencv.adaptors module is full of routines for converting via the Python Imaging Library (PIL). Annoyingly, those functions rotated the images when going from NumPy to cvMat, then rotated them back the correct way when going from cvMat back to NumPy.
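For reference, the pygame half of that round trip is easy with pygame.surfarray. The helpers below are just my own sketch (the function names are mine, not from any library); the axis swap is there because surfarray arrays are (width, height, 3) while PIL-style libraries expect (height, width, 3), which is the usual culprit when images come back looking rotated.

import numpy
import pygame.surfarray as surfarray

def surface_to_array(surf):
    # copy the pixels into a (width, height, 3) NumPy array
    return surfarray.array3d(surf)

def swap_axes(arr):
    # surfarray uses (width, height, 3); PIL-style libraries use
    # (height, width, 3), so swap the first two axes when crossing over
    return numpy.swapaxes(arr, 0, 1)

def array_to_surface(arr):
    # back to a pygame surface for display; expects (width, height, 3)
    return surfarray.make_surface(arr)
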
First up is the VideoCapturePlayer class; it can be used to simply display a video feed. It uses pygame.camera, stores the images as a pygame.Surface and shows the video with pygame. The latest pygame has an option to force pygame.camera to use OpenCV. The class also takes an optional process function, something that takes and returns a surface, so in here one could put the edge detection functionality that I looked at in the last post.

import pygame
import pygame.camera
from pygame.locals import *
import numpy

class VideoCapturePlayer(object):
    """A VideoCapturePlayer object is an encapsulation of
    the display of a video stream. A process can be
    given (as a function) that is applied to every frame.
    For example a filter function that takes and returns a
    surface can be given. This player will take the webcam image,
    pass it through the filter, then display the result.
    If the function takes significant computation time (runs at
    less than 2 fps), the VideoCapturePlayer grabs 3 extra frames
    between calls; this ensures an up-to-date picture is used in
    the next computation.
    """
    size = width, height = 640, 480

    def __init__(self, processFunction=None, forceOpenCv=False, **argd):
        self.__dict__.update(**argd)
        super(VideoCapturePlayer, self).__init__()
        if forceOpenCv:
            # must be set before pygame.camera.init() or it has no effect
            import os
            os.environ["PYGAME_CAMERA"] = "opencv"
        pygame.init()
        pygame.camera.init()
        self.processFunction = processFunction
        # create a display surface. standard pygame stuff
        self.display = pygame.display.set_mode(self.size, 0)
        # gets a list of available cameras.
        self.clist = pygame.camera.list_cameras()
        if not self.clist:
            raise ValueError("Sorry, no cameras detected.")
        # creates the camera of the specified size and in RGB colorspace
        self.camera = pygame.camera.Camera(self.clist[0], self.size, "RGB")
        # starts the camera
        self.camera.start()
        self.waitForCam()
        self.clock = pygame.time.Clock()
        self.processClock = pygame.time.Clock()
        # create a surface to capture to. for performance purposes, you want the
        # bit depth to be the same as that of the display surface.
        self.snapshot = pygame.surface.Surface(self.size, 0, self.display)

    def get_and_flip(self):
        """We will take a snapshot, do some arbitrary process (eg in numpy/scipy)
        then display it.
        """
        # capture an image
        self.snapshot = self.camera.get_image(self.snapshot).convert()
        if self.processFunction:
            self.processClock.tick()
            if self.processClock.get_fps() < 2:
                print "Running your resource intensive process at %f fps" % self.processClock.get_fps()
                # flush the camera buffer to get a new image...
                # we have the time since the process is so damn slow...
                for i in range(3):
                    self.waitForCam()
                    self.snapshot = self.camera.get_image(self.snapshot).convert()
            self.snapshot = self.processFunction(self.snapshot)
        # blit it to the display surface. simple!
        self.display.blit(self.snapshot, (0, 0))
        pygame.display.flip()

    def waitForCam(self):
        # Wait until the camera is ready to take an image
        while not self.camera.query_image():
            pass

    def main(self):
        print "Video Capture & Display Started... press Escape to quit"
        going = True
        fpslist = []
        while going:
            events = pygame.event.get()
            for e in events:
                if e.type == QUIT or (e.type == KEYDOWN and e.key == K_ESCAPE):
                    going = False
            # if you don't want to tie the framerate to the camera, you can check and
            # see if the camera has an image ready. note that while this works
            # on most cameras, some will never return true.
            # note: seems to work on my camera at the HIT Lab - Brian
            if self.camera.query_image():
                self.get_and_flip()
                self.clock.tick()
                if self.clock.get_fps():
                    fpslist.append(self.clock.get_fps())
                    print fpslist[-1]
        print "Video Capture & Display complete."
        print "Average Frames Per Second "
        avg = numpy.average(fpslist)
        print avg

if __name__ == "__main__":
    vcp = VideoCapturePlayer(processFunction=None, forceOpenCv=True)
    vcp.main()
    pygame.quit()

Running the module standalone (from PyDev in Eclipse).
If you look behind me you can see it printing the frames per second. My iBook is sadly somewhat limited in this respect; it cannot keep up with the 15 fps that my machine at the HIT Lab could manage just displaying video.
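Before wiring in a real filter, the processFunction contract (a surface goes in, a surface comes out) can be sanity-checked with something trivial. This mirror filter is just a quick example of mine, not part of the original code; it assumes the class lives in VideoCapturePlayer.py, as the later mash-up script does.

import pygame
import pygame.transform
from VideoCapturePlayer import VideoCapturePlayer

def mirrorProcess(surf):
    # surface in, surface out: simply flip the frame horizontally
    return pygame.transform.flip(surf, True, False)

if __name__ == "__main__":
    vcp = VideoCapturePlayer(processFunction=mirrorProcess)
    vcp.main()
    pygame.quit()
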
Well, that is all good and fun. Say I wanted to add in the edge detection filter from the last post; I would write this:
(As a random aside, the indentation was lost importing this into Google Docs, so you will have to work out how the code was indented to run it. Stupid problem, I know! Actually, on second thought, I might remove most of the code from here - if anyone wants it just message me. Eventually I will put it into an SVN repo.)

import numpy
import pygame
from pygame import surfarray, transform
from VideoCapturePlayer import VideoCapturePlayer

# module level switches (set elsewhere): useScipy chooses the SciPy path
# over pygame.transform, scipySpline chooses the spline based edge detector
useScipy = True
scipySpline = True

def edgeDetectionProcess(surf):
    if useScipy:
        # convert to a single channel by averaging the R, G and B values
        imageArray1 = numpy.mean(surfarray.pixels3d(surf), 2)
        if scipySpline:
            imageArray2 = edgeDetect2(imageArray1)
        else:
            imageArray2 = edgeDetect1(imageArray1)
        surf = surfarray.make_surface(imageArray2)
    else:
        # use pygame transform
        surf = transform.laplacian(surf)
    return surf

def main():
    vcp = VideoCapturePlayer(processFunction=edgeDetectionProcess)
    vcp.main()
    pygame.quit()
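edgeDetect1 and edgeDetect2 come from the previous post and are not repeated here. Purely as a stand-in, a pair in the same spirit (a plain Laplacian convolution, and a spline-smoothed variant using scipy.signal.cspline2d) might look like this; these are my sketches, not the original functions:

import numpy
from scipy import signal

LAPLACIAN_KERNEL = numpy.array([[0,  1, 0],
                                [1, -4, 1],
                                [0,  1, 0]])

def edgeDetect1(imageArray):
    # straight Laplacian convolution on the single channel array
    edges = signal.convolve2d(imageArray, LAPLACIAN_KERNEL, mode="same")
    return numpy.clip(edges, 0, 255).astype(numpy.uint8)

def edgeDetect2(imageArray):
    # smooth with a B-spline first, then take the Laplacian
    smoothed = signal.cspline2d(imageArray, 8.0)
    edges = signal.convolve2d(smoothed, LAPLACIAN_KERNEL, mode="same")
    return numpy.clip(edges, 0, 255).astype(numpy.uint8)
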

Musing on the coolness of it all...
I also changed the edge detection to work on an average of the red, green and blue values instead of just one channel. Once I had everything working, I compared the performance of forcing pygame.camera to use OpenCV internally for the capturing:
opencv capture | edge detection | scipy | spline | result
---------------+----------------+-------+--------+-------
true           | false          | N/A   | N/A    | 66ms
false          | false          | N/A   | N/A    | 66ms
true           | true           | false | N/A    | 209ms  (opencv capture, pygame edge detection)
false          | true           | false | N/A    | 211ms
true           | true           | true  | false  | 553ms
false          | true           | true  | false  | 551ms
true           | true           | true  | true   | 790ms
false          | true           | true  | true   | 795ms
Might as well just throw it out there: next I thought face detection would be a good idea. Cue a lot of research into Haar cascades and object recognition. I eventually found some working code in the OpenCV samples and hacked away at it to come up with this:

def detect(img):
    # work on a downscaled, grayscale, histogram-equalised copy of the frame
    gray = cvCreateImage( cvSize(img.width, img.height), 8, 1 )
    small_img = cvCreateImage( cvSize( cvRound(img.width/image_scale),
                                       cvRound(img.height/image_scale) ), 8, 1 )
    cvCvtColor( img, gray, CV_BGR2GRAY )
    cvResize( gray, small_img, CV_INTER_LINEAR )
    cvEqualizeHist( small_img, small_img )
    cvClearMemStorage( storage )
    if cascade:
        t = cvGetTickCount()
        faces = cvHaarDetectObjects( small_img, cascade, storage,
                                     haar_scale, min_neighbors, haar_flags, min_size )
        t = cvGetTickCount() - t
        print "%i faces found, detection time = %gms" % (faces.total, t/(cvGetTickFrequency()*1000.))
        return faces
    else:
        print "no cascade"

def detect_and_draw( img ):
    """
    draw a box with opencv on the image around the detected faces.
    """
    faces = detect(img)
    if faces:
        for r in faces:
            print "Face found at (x,y) = (%i,%i)" % (r.x*image_scale, r.y*image_scale)
            pt1 = cvPoint( int(r.x*image_scale), int(r.y*image_scale) )
            pt2 = cvPoint( int((r.x+r.width)*image_scale), int((r.y+r.height)*image_scale) )
            cvRectangle( img, pt1, pt2, CV_RGB(255,0,0), 3, 8, 0 )
            cvShowImage( "result", img )  # TODO is this reqd if pygame renders?
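For completeness: detect() leans on a handful of module-level globals taken from the OpenCV facedetect.py sample (cascade, storage, image_scale, haar_scale, min_neighbors, haar_flags and min_size) which are not shown above. A rough sketch of how they might be initialised with the old SWIG-style bindings follows; the imports assume the opencv package the post is using, and the cascade path and tuning values are only examples, so adjust them for your own install:

from opencv.cv import *

# tuning values roughly following the OpenCV facedetect.py sample --
# treat them as starting points rather than gospel
image_scale = 1.3        # how much the frame is shrunk before detection
haar_scale = 1.2         # scale step between cascade passes
min_neighbors = 2        # overlapping detections needed to accept a face
haar_flags = 0           # e.g. CV_HAAR_DO_CANNY_PRUNING to speed things up
min_size = cvSize(20, 20)

# the cascade path is an example -- point it at your OpenCV install
cascade = cvLoadHaarClassifierCascade("haarcascade_frontalface_alt.xml",
                                      cvSize(1, 1))
storage = cvCreateMemStorage(0)
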

Now, the main function in the above file (edited out of this post) uses OpenCV for the capture, the analysis and the rendering. I also made the next script to interface with it through the VideoCapturePlayer class from above, so that pygame does the rendering.

def drawFacesOnSurface(surf, faces):
    """draw rectangles around detected cvObjects with pygame
    """
    ...Snip....

if __name__ == "__main__":
    vcp = VideoCapturePlayer(processFunction=locateFaces)
    vcp.main()
    pygame.quit()
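The body of drawFacesOnSurface was snipped above (message me for the full code). Purely as an illustration of how the drawing could be done with pygame rather than OpenCV, and not the snipped implementation itself, a sketch along these lines would work; it assumes faces is the sequence returned by cvHaarDetectObjects and that image_scale matches the value used when the frame was shrunk for detection:

import pygame

image_scale = 1.3  # must match the scale detect() used; value is just an example

def drawFacesOnSurface(surf, faces):
    """Draw a red rectangle on the surface around each detected face."""
    for r in faces:
        rect = pygame.Rect(int(r.x * image_scale), int(r.y * image_scale),
                           int(r.width * image_scale), int(r.height * image_scale))
        # width=3 draws an outline rather than a filled box
        pygame.draw.rect(surf, (255, 0, 0), rect, 3)
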
I suppose showing it working would be a good idea :-P

Yes it still works close up!
And further back...

And it works on more than one face at once!
Now, if you see the blue box at the top of my screen... yeah, that's my CPU usage. This is a rather intense process; it seems to take about a second for each loop that does a capture, a conversion, the analysis, a reconversion and the rendering.
Well, that should just about do it for today! Actually no, maybe I'll do a quick mash-up of the edge detection + face detection :-P
#!/usr/bin/python
from VideoCapturePlayer import *
import pygameFaceDetect
import edgeDetect

def process(surf):
    faces = pygameFaceDetect.getFaces(surf)
    surf = edgeDetect.edgeDetectionProcess(surf)
    if faces:
        pygameFaceDetect.drawFacesOnSurface(surf, faces)
    return surf

if __name__ == "__main__":
    vcp = VideoCapturePlayer(processFunction=process)
    vcp.main()
    pygame.quit()
Yes that is a book I am holding up...
Next maybe eye detection within a face...? Who knows! Then face recognition? Oh the possibilities!
