

Showing posts from September, 2009

Bring your hat!

So I thought I would make a really simple example of how pygame can be used with a webcam. This example uses OpenCV to detect a face, then pygame to draw a "hat".

#!/usr/bin/python
from pycam import VideoCapturePlayer
from pycam import pygameFaceDetect
import pygame
from pygame.locals import *

def process(surf):
    faces = pygameFaceDetect.getFaces(surf)
    if faces:
        s = pygameFaceDetect.faceDetect.image_scale
        for face in faces:
            pointsInHat = [
                (face.x*s, face.y*s),
                (face.x*s + face.width*s, face.y*s),
                (face.x*s + face.width*s/2, face.y*s - face.height*s/2)
            ]
            pygame.draw.polygon(surf, Color("red"), pointsInHat)
            pygame.draw.polygon(surf, Color("black"), pointsInHat, 10)
    return surf

if __name__ == "__main__":
    vcp = VideoCapturePlayer(processFunction=process)
    vcp.main()
    pygame.quit()

And the obligatory screen shot:

I …


Something used heaps in the film industry is the "greenscreen". I thought I would take a quick look at how to make a greenscreen that works fast enough to run on a live webcam stream - and in fact one that works with any coloured background. It has many, many limitations, but it was a fun experiment! To run this example you will need OpenCV with the SWIG Python bindings installed. You can get this code from my SVN repository here.

Firstly the background I started with:

Adding an object to the scene, and carrying out background subtraction:

So anyhow the code:
#!/usr/bin/env python
from VideoCapturePlayer import VideoCapturePlayer as VCP
from opencv import cv

def threshold_image(image, n=[]):
    """Record the first 5 images to get a background, then diff
    current frame with the last saved frame.
    """
    if len(n) < 5:
        # n[4] will be our background
        # First capture a few images
        n.append(cv.cvCloneMat(image))
        if len(n) == 5:
            # last time here
            # could do averaging here.
            pass
        return image
…
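The excerpt above is truncated, but the idea is clear: save the first few frames as a background, then mask out anything that matches it. A rough sketch of that idea in plain NumPy (the frame count, the `diff_threshold` value, and the masking behaviour are my assumptions, not from the original):

```python
import numpy as np

def make_thresholder(num_background_frames=5, diff_threshold=40):
    """Return a process(frame) function that collects the first few
    frames as a background, then blacks out pixels close to it."""
    saved = []  # captured background frames

    def process(frame):
        if len(saved) < num_background_frames:
            # Still capturing the background; pass the frame through.
            saved.append(frame.astype(np.int16))
            return frame
        background = saved[-1]  # could average the saved frames instead
        # Per-pixel absolute difference against the background,
        # taking the largest difference across colour channels.
        diff = np.abs(frame.astype(np.int16) - background).max(axis=-1)
        mask = diff > diff_threshold
        # Keep only pixels that differ from the background.
        return np.where(mask[..., None], frame, 0).astype(np.uint8)

    return process

process = make_thresholder()
```

Each frame is assumed to be a `(height, width, 3)` uint8 array; the signed cast before subtracting matters, as the Gaussian-blur post below found out the hard way.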

Kiwi Pycon

I have been looking at Harris feature detection lately, implementing it side by side in OpenCV and SciPy. Luckily for me, a SciPy implementation by Jan Solem was found on this blog. As I was going through the code, I wanted to get my head around what was happening. So I used these two lines:

from IPython.Shell import IPShellEmbed
IPShellEmbed()()

This piece of magic can be put anywhere - deep inside a nested loop, inside a function called from X via Y via Z, etc. - and it pops you right into the brilliant IPython shell, with all the usual IPython luxuries like timeit, history, autocomplete, pylab plotting... So I plotted a few images midway through processing, just to see what the program sees. First is the grayscale image taken from my webcam. No, I'm not colour blind - I realize I have plotted it in colour... Second and third are the two Gaussian derivatives of the image, one in X and one in Y.
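For reference, a minimal sketch of a Harris response along the lines of Solem's SciPy approach (the smoothing scale, the `k` constant, and the exact `det - k*trace**2` response formula are my choices here, not necessarily his):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(im, sigma=3, k=0.04):
    """Harris corner response for a grayscale float image."""
    # Gaussian derivatives in x and y (order=1 along one axis).
    imx = gaussian_filter(im, sigma, order=(0, 1))
    imy = gaussian_filter(im, sigma, order=(1, 0))
    # Entries of the Gaussian-smoothed structure tensor.
    Wxx = gaussian_filter(imx * imx, sigma)
    Wxy = gaussian_filter(imx * imy, sigma)
    Wyy = gaussian_filter(imy * imy, sigma)
    det = Wxx * Wyy - Wxy ** 2
    trace = Wxx + Wyy
    return det - k * trace ** 2
```

The `imx` and `imy` arrays are exactly the two Gaussian derivative images plotted above; corners show up as local maxima of the returned response.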

And a bit later after getting the thing going - the final output! Pretty cool to…

Gaussian Blur

Trying to work out why the Gaussian blur in OpenCV is different from that of SciPy... The differences are too small to see, but they are still there.
Following is an imshow of each channel of the difference image.
And by looking at a single row, we see that the difference spikes across the whole intensity range.

Edit: That was pretty much staring me in the face, wasn't it... uint8s are prone to integer overflow!
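The wrap-around is easy to demonstrate in a couple of lines - a pixel-for-pixel comparison needs a signed (or wider) dtype before subtracting:

```python
import numpy as np

a = np.array([10, 200], dtype=np.uint8)   # e.g. one blur's output
b = np.array([12, 199], dtype=np.uint8)   # e.g. the other blur's output

# Subtracting uint8 arrays wraps around: 10 - 12 becomes 254, not -2,
# which is why the difference image spiked the whole intensity range.
wrapped = a - b

# Cast to a signed type first to get the true difference.
true_diff = a.astype(np.int16) - b.astype(np.int16)
```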
Comparing the OpenCV implementation with two versions in SciPy now gives:
Doing a pixel for pixel comparison on each channel between the SciPy and OpenCV examples:

The second one is comparing an IIR filter implementation to an ndfilt.

Decoration to the rescue!


"""This decorator can be used to wrap a function that takes
and returns a numpy array into one that takes and retuns an
opencv CvMat.

# Convert CvMat to ndarray

# Call the original function

# Convert back to CvMat

"""Manual gaussian blur - Very very very slow!"""