
Publicly query Facebook using Python


In my COSC lab today a few students were asking about doing something "real" and "cool" with Python, something that isn't easy in Excel. After a bit of a think I came to the conclusion that getting data from the internet is a "real" enough problem. As for "cool", since most people seem to have Facebook open in the background during labs, I thought getting some real live data off Facebook could be interesting.
First, a disclaimer or two.
  • Don't just run random code without reading it and satisfying yourself that it's not trying to delete your operating system or do anything sinister!
  • I don't set the test or the assignment. This post is not related to either, except that it uses Python (okay, it uses a for loop to iterate over a list; if nothing else in this post is of interest to you, learn how to do that!)
So if anyone is still reading, let me introduce the problem, then look at how we can solve it. We will take a quick look at Facebook's Graph API, then finally see how it all ties together in a pretty short snippet of Python. There are two modules from the Python standard library that you probably haven't seen, so I'll briefly touch on those as well.
Onwards to the problem: let's say I'm very interested in comparing how many fans various public pages on Facebook have. Maybe I am very pedantic and I check all the time, at least once an hour. But I really hate searching for each page on Facebook every time just to check how many fans it has. My tech friend told me about bookmarks, so I bookmarked each of the pages I check, but it still takes me too long. All the other information on the page is irrelevant to me... what I really want is to write a program that cuts through the crap, so to speak, and gives me the data. All I require for each page I'm interested in is the current number of fans.
Facebook has an application programming interface (API) called the Graph API; basically, it connects everything on Facebook to everything else on Facebook. For example, the official Facebook page for the Facebook Platform has the id 19292868552, so you can fetch the object at https://graph.facebook.com/19292868552. Alternatively, if you know the username (and the page/user has one), you can fetch the object from https://graph.facebook.com/platform. If you clicked on those links you will notice the data looks very similar to a Python dictionary - this is data in the JSON format. If you know your own username on Facebook, try to see what is publicly known about you - just replace platform with your username (note: since accessing private data on Facebook requires secure authentication, we are just going to look at public pages). To read more about the Graph API, go to http://developers.facebook.com/docs/api
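To give you an idea of what comes back, a page object looks roughly like this (abridged - the exact fields vary from page to page and have changed over time, and the values here are only illustrative):

{
    "id": "19292868552",
    "name": "Facebook Platform",
    "fan_count": 100000,
    ...
}

The two fields our program will care about are name and fan_count.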
I imagine you have at least peeked at the code below by now, and line one should now make sense. If we want to decode JSON data, aren't we lucky that Python has an inbuilt json module - let's use it!
Oh wait, before we can decode the JSON data we need to get access to it in Python. Just like in our earlier labs where you open a CSV file by calling open with a filename, we open a website by its URL. The only function we use from urllib2 is urlopen; it takes an address (URL) as its parameter. Then, instead of calling readlines like we did in lab 6, we call read. At this stage we have the data, but as a single large string.[1]
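In isolation (the full program is further down), the fetching step is just this - a quick sketch you can try in the interactive interpreter, using the platform page from earlier:

import urllib2

# urlopen gives us a file-like object; read() slurps it into one big string
data = urllib2.urlopen('https://graph.facebook.com/platform').read()
print type(data)   # <type 'str'> - raw json text, not a dictionary yet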
Line 7 creates a Python dictionary out of the JSON data found at the address we specify. No, I didn't know how to do that off the top of my head, I looked it up - the documentation is your friend. Google "python json" and the first link will be the official documentation (including examples). Scan through that and you will find a function loads that will load Python data from a string of JSON, and a function load that will load JSON from a file-like object. A URL behaves very much like a file: if we wanted, we could call readlines() on an object created by urlopen, just like we could on a file object created with open.
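Here are both decoding routes side by side - a minimal sketch, again using the platform page; the second route is the shortcut the full program below uses:

import json
import urllib2

raw = urllib2.urlopen('https://graph.facebook.com/platform').read()
page = json.loads(raw)    # loads: decode json from a string
# or skip the intermediate string and decode straight from the file-like object:
page = json.load(urllib2.urlopen('https://graph.facebook.com/platform'))
print page['name']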

import json
import urllib2

def load_facebook_page(facebook_id):
    '''Return a dictionary of data from a facebook page id or username'''
    addy = 'https://graph.facebook.com/' + facebook_id
    return json.load(urllib2.urlopen(addy))

def print_fans(facebook_page_id):
    '''Print the name and number of fans of a facebook page'''
    facebook_page = load_facebook_page(facebook_page_id)
    print facebook_page['name'], 'fans:', facebook_page['fan_count']

page_ids = [ 'pythonlang', '62842406160', '63723325087' ]
for facebook_id in page_ids:
    print_fans(facebook_id)


Hmm, writing this description has taken about four or five times as long as writing the code! It really turned into an essay, oops. Luckily everything after line 8 should be very straightforward. Make up some problems and improvements for yourself. A simple one to start with would be printing out the address of the Facebook page as well as its name and the number of fans - one possible answer follows.
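If you want to check your attempt, here is one way to do it (a minimal sketch that reuses load_facebook_page from above and simply rebuilds the address from the id, assuming the page is reachable at facebook.com/<id>):

def print_fans_with_address(facebook_page_id):
    '''Print the address, name and number of fans of a facebook page'''
    facebook_page = load_facebook_page(facebook_page_id)
    print 'https://www.facebook.com/' + facebook_page_id
    print facebook_page['name'], 'fans:', facebook_page['fan_count']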


[1] The module was originally called urllib, but the people who wrote a new version decided it was so different from the original that they called it urllib2 and kept both around. In Python 3 the old urllib is thrown out and the two are reorganised into a single urllib package - urlopen, for example, lives in urllib.request. Complicated much?
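For the curious, the same helper written for Python 3 would look something like this (a sketch; note that in Python 3 read() returns bytes, so we decode them to text before handing them to json):

import json
from urllib.request import urlopen   # urllib2's urlopen lives here in Python 3

def load_facebook_page(facebook_id):
    '''Return a dictionary of data from a facebook page id or username'''
    raw = urlopen('https://graph.facebook.com/' + facebook_id).read()
    return json.loads(raw.decode('utf-8'))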
