Wednesday, December 5, 2007

ZODB performance: a small change can make a big difference

Tuning Zope's ZODB performance can be a big challenge. Almost everything you can find on the web about it can be summed up as follows: use the Edisonian approach, trial-and-error discovery.

At La Jornada we are using ZEO to serve about 500,000 pages to more than 60,000 visitors every day. Some weeks ago I noticed we were facing performance issues on the servers holding most of the ZEO clients of our installation: at peak times the load was so high that we were unable to serve some pages.

At first I thought I had made a mistake with the amount of memory dedicated to the servers but, based on our previous experience with the main server, I started investigating how to decrease CPU usage, looking for bottlenecks with the usual Linux tools like vmstat and iostat.

I didn't find anything conclusive but, suddenly, I noticed that the clients on the main server were also acting strangely: one of them had very high CPU usage while the other had almost none. I started looking for a mistake in the load-balancing configuration, but after some time I discovered that one of the ZEO clients had a value of 20,000 in the cache-size directive while the other, the one running smoothly, had a value of 30,000.

I made the following change in the zope.conf file of all of my ZEO clients:

<zodb_db main>
    cache-size 30000
    <zeoclient>

    </zeoclient>
    mount-point /
</zodb_db>
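
For context, this cache-size is an object count: each ZODB connection (one per worker thread) keeps its own in-memory cache of up to that many objects. It is not the same setting as the cache-size that can appear inside the zeoclient section, which sizes the ZEO client's disk cache in bytes. The same knob can also be set and inspected from Python; the sketch below is mine, not part of our actual setup, and the ZEO server address is only a placeholder:

# Minimal sketch: cache_size passed to DB() is the same per-connection
# object count that cache-size sets in zope.conf.
from ZEO.ClientStorage import ClientStorage
from ZODB import DB

storage = ClientStorage(('localhost', 8100))   # assumed ZEO server address
db = DB(storage, cache_size=30000)             # same value as in zope.conf

conn = db.open()
try:
    # Total non-ghost objects currently held across all connection caches.
    print('objects cached:', db.cacheSize())
    # Per-connection breakdown: total vs. non-ghost ("ngsize") objects.
    for detail in db.cacheDetailSize():
        print(detail)
finally:
    conn.close()
    db.close()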

After restarting them I was shocked by the results: a minimal increase in memory usage and an amazing decrease in CPU usage. You can see below the behavior of one of the servers over the previous month (the configuration change occurred in the middle of the graph):

[Chart: processor load on a ZEO server over the previous month]


So, talking about cache-size, how big is big enough? According to Chris McDonough's excellent presentation at the Plone Symposium 2005 in New Orleans: as big as you can make it without causing your machine to swap.
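
A quick way to apply that rule on Linux (a rough sketch of my own, not a tool from the presentation) is to watch the pswpin/pswpout counters in /proc/vmstat, the same numbers vmstat reports in its si/so columns, while you raise cache-size:

# Sample the kernel's swap-in/swap-out counters over a short interval;
# a non-zero delta while the ZEO clients are busy means the box is swapping.
import time

def swap_activity(interval=5):
    def read_counters():
        counters = {}
        with open('/proc/vmstat') as f:
            for line in f:
                name, value = line.split()
                if name in ('pswpin', 'pswpout'):
                    counters[name] = int(value)
        return counters

    before = read_counters()
    time.sleep(interval)
    after = read_counters()
    return dict((name, after[name] - before[name]) for name in before)

if __name__ == '__main__':
    print(swap_activity())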

And how many objects can we store in a given amount of memory? According to Dieter Maurer, since the size of objects can vary without bound, an object-count limit gives only very imprecise control over the main memory used by the cache.
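
One way to put a number on that imprecision (a sketch of mine, assuming you have a FileStorage copy of the database that you can open read-only) is to measure the pickle sizes actually stored; keep in mind that an unpickled object in a connection cache takes noticeably more RAM than its pickle:

# Walk every data record in a FileStorage and report the average and largest
# pickle. On an unpacked storage this counts old revisions too, so pack first
# for a true per-object figure. The path is an assumption.
from ZODB.FileStorage import FileStorage

storage = FileStorage('Data.fs', read_only=True)

count = total = largest = 0
for txn in storage.iterator():        # every transaction in the storage
    for record in txn:                # every data record (one pickle each)
        if not record.data:           # skip undo/backpointer-only records
            continue
        size = len(record.data)
        count += 1
        total += size
        largest = max(largest, size)
storage.close()

print('records: %d, average pickle: %d bytes, largest: %d bytes'
      % (count, total // count, largest))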

In our case, with about 700,000 objects in the ZODB, memory consumption is around 1 GB on a cluster of 3 ZEO clients (each with 4 threads and 30,000 objects per cache).
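
A back-of-the-envelope check of those numbers (assuming the 1 GB figure covers the whole cluster and that every worker thread keeps its own connection cache full, which is the worst case):

clients = 3
threads_per_client = 4   # one ZODB connection, and thus one cache, per thread
cache_size = 30000       # objects per connection cache
memory = 1 * 1024 ** 3   # ~1 GB for the whole cluster (assumed)

cached_objects = clients * threads_per_client * cache_size   # 360,000
print('bytes per cached object: ~%d' % (memory // cached_objects))
# prints ~2982, i.e. roughly 3 KB per cached object including Python overhead

That works out to roughly 3 KB of RAM per cached object on our data, which is a handy rule of thumb when deciding how much further cache-size can grow.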

As we reserved 2 GB for this server, we still have some room to grow; we only need to find the time to test it.
