Since moving my blog to Google App Engine a few days ago, I've been keeping a close eye on the logs. This is my first App Engine project that uses the datastore, so I wanted to make sure I hadn't done anything silly and wasn't getting a large number of timeouts. Although it's probably overkill for my blog, to learn the APIs I use memcache to avoid hitting the datastore repeatedly for the same data.

The caching I've implemented is fairly basic. When the app generates a page using Django templates, the entire page contents are put into memcache, using a key that includes the page URL. If somebody else requests the same page, I can serve the entire thing from memcache without any processing overhead. This works well because there is no dynamic (per-user) content on any page (unless you're an admin, but then memcache is bypassed entirely). When a comment is posted or a post is modified, the entire cache is wiped out, which ensures the comment counts on list pages are always up to date. Since posts and comments happen very infrequently on this blog (a handful per day at most), this shouldn't be an issue.
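To give an idea of the shape of it, here's a minimal sketch of that full-page caching, assuming a webapp-style handler (the PageHandler class and render_page helper are hypothetical, not my actual code):

from google.appengine.api import memcache
from google.appengine.ext import webapp

CACHE_TIME = 3600  # cache lifetime in seconds (an assumed value)

class PageHandler(webapp.RequestHandler):
	def get(self):
		# Key the cache on the URL so each page is cached separately
		key = 'page:%s' % self.request.path
		html = memcache.get(key)
		if html is None:
			html = self.render_page()  # hypothetical: renders the Django template
			memcache.set(key, html, CACHE_TIME)
		self.response.out.write(html)

When a comment or post changes, one call to memcache.flush_all() is enough to wipe the whole cache.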

So, after putting things live, I noticed that some of my page hits were still quite expensive (in terms of CPU) even with the caching. It turned out that when someone visited a page that wasn't in the cache, I had to execute three or four datastore queries: one to get the Tag or Archive being viewed (not needed for the homepage or direct post pages), one for the post(s), one for the tag list, and one for the archive list. Since the tags and archive list don't change much, these could be cached across the application, rather than just per page, which would make uncached pages much faster to load (they would then need only one or two datastore hits).

So, I implemented this extra caching as follows, and all was good.

import logging
from google.appengine.api import memcache

# TAGS: Check cache first
tags = memcache.get('tags')

# TAGS: Get and cache if not there
if not tags:
	logging.info('Saving tags to cache')
	tags = Tag.all().order('name_lower')
	memcache.set('tags', tags, CACHE_TIME)
else:
	logging.info('Got tags from cache')

# ARCHIVE: Check cache first
archive = memcache.get('archive')

# ARCHIVE: Get and cache if not there
if not archive:
	logging.info('Saving archive to cache')
	archive = Archive.all().order('-date')
	memcache.set('archive', archive, CACHE_TIME)
else:
	logging.info('Got archive from cache')

Or so I thought. Although I tested that the code ran with no errors, there's no nice "Datastore Profiler" like the SQL Server Profiler I'm used to in Microsoft-land, so I assumed everything was good.

Today, it occurred to me that the CPU values for my uncached page views were still pretty high. I was seeing up to half a second of CPU time for what should be a single datastore query. That seemed steep even allowing for my inefficient coding!

It turns out I made a rookie mistake. The problem is with this code right here:

archive = Archive.all().order('-date')
memcache.set('archive', archive, CACHE_TIME)

This code does not cache the entities returned by the query. Rather, it caches the query object itself. That meant every time I grabbed it from the cache and handed it to Django, the query was executed again as the template was rendered!
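Put another way, a db Query is lazy: it doesn't touch the datastore until you iterate over it, and iterating it again runs it again. A quick sketch, using the Archive query from above:

archive = Archive.all().order('-date')  # builds the query; no datastore call yet

for entry in archive:
	pass  # first iteration executes the query against the datastore

for entry in archive:
	pass  # iterating again re-executes the query, another datastore hit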

A quick change to call fetch() forces the query to execute, so now the entities themselves are cached instead.

archive = Archive.all().order('-date').fetch(1000)
memcache.set('archive', archive, CACHE_TIME)

I'm not so keen on the hard-coded 1000, but until I find a better way to force the query to execute, it'll do the job. My next task is to write some hooks for profiling the datastore, so I can catch this kind of issue in future!
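I haven't written those hooks yet, but a minimal sketch of what I have in mind is a timing wrapper around fetch() (the profiled_fetch name and log format are just placeholders):

import logging
import time

def profiled_fetch(query, limit, label):
	# Time the datastore fetch and log how long it took
	start = time.time()
	results = query.fetch(limit)
	elapsed_ms = (time.time() - start) * 1000
	logging.info('%s: fetched %d entities in %.1fms', label, len(results), elapsed_ms)
	return results

# e.g. archive = profiled_fetch(Archive.all().order('-date'), 1000, 'archive list')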

Update

After posting in the App Engine group, I was directed to the article Efficient Model Memcaching on Nick Johnson's blog, which shows a better way, avoiding some of the overhead of caching Model instances directly.
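For reference, the core of that technique as I understand it (a sketch based on the article, not code I've deployed yet): serialize the entities to protocol buffers before caching, rather than letting memcache pickle the Model instances directly.

from google.appengine.api import memcache
from google.appengine.datastore import entity_pb
from google.appengine.ext import db

def serialize_entities(models):
	# Encode each entity as a compact protocol buffer string
	return [db.model_to_protobuf(m).Encode() for m in models]

def deserialize_entities(data):
	# Rebuild the Model instances from the encoded protocol buffers
	return [db.model_from_protobuf(entity_pb.EntityProto(d)) for d in data]

cached = memcache.get('archive')
if cached is None:
	archive = Archive.all().order('-date').fetch(1000)
	memcache.set('archive', serialize_entities(archive), CACHE_TIME)
else:
	archive = deserialize_entities(cached)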