Performance, benchmarking

Every so often some individual or group decides they’re going to “invent” a new benchmark. They ignore all that has been done before (like PolePos), maybe due to NotInventedHereSyndrome, or maybe due to “oh, someone who helped write that was related to some datastore, therefore it must be dodgy”. While it is highly likely that existing benchmarks don’t cover their particular case, they never make any reference to what came before and what people are familiar with. They make minimal effort at configuring persistence solutions other than their favourite or the one they have experience with, and then publish their results to websites with a flourish.

Such benchmarks have been known to conclude that there is a uniform performance difference across all types of tests. Anyone who has ever looked at the persistence process would know that this conclusion is flawed: good persistence solutions each provide particular features, and by turning on these features you gain benefits at the expense of some performance, so certain operations will be better with one tool than with another. Hibernate has some good features, DataNucleus has some good features. If you enable particular features you get poorer performance in other areas.

The other aspect of their conclusion is that they simply want a headline-grabbing, black-and-white “this is better than that”. They seemingly aren’t interested in thinking about the different methods employed by the software under test, or its particular options. If I were exploring the topic of performance (and it’s an interesting topic, one that can be useful in influencing priorities) I’d want to think about what I was asking the software to do and, bearing in mind that these benchmarks use open source software, I’d have information available as to how a particular implementation attempts to do something. This could then be termed constructive benchmarking.

Recently we had one which took a flat class, no inheritance or relationships, and persisted it, … many times … and then called itself a “Detailed Comparison”. While this may be an operation that an application needs to do, the response time for persisting an object of this form would not differ by any amount perceptible to the end user of that software. If a user wants to persist a large number of objects (as in bulk loading), this would typically be performed in a background task anyway. DataNucleus has never been optimised for such a task (it would logically be better at complex object graphs, due to the process it uses for detecting changes), but even so it can be configured to give pretty good performance. As above, it’s good at some things and less good at others. Anyone claiming to make a “detailed comparison” would bother to try a range of cases, to look at the full capabilities, to look at outstanding issues, etc.

Here, to give you something to talk about, I just ran a flat class:

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class A
{
    @Id
    Long id;

    String name;

    public A()
    {
        // JPA requires a no-arg constructor
    }

    public A(long id, String name)
    {
        this.id = id;
        this.name = name;
    }
}

I ran it through the following persistence code:

// emf is the EntityManagerFactory created earlier
EntityManager em = emf.createEntityManager();
EntityTransaction txn = em.getTransaction();
txn.begin();
for (int i = 0; i < 10000; i++)
{
    A a = new A(i, "First");
    em.persist(a);
}
txn.commit();
em.close();

I used the config of the aforementioned (Hibernate) users for running Hibernate, and tuned DataNucleus myself. And the runtimes? Hibernate (3.6.1.Final) took 8164ms, and DataNucleus (SVN trunk on 3/Mar/2011) took 7227ms (using HSQLDB 1.8.0.4 embedded). So in that case DataNucleus was faster. Is this significant? Well, no, since as I already explained it’s one particular case, but it demonstrates the principle clearly enough. We can all turn particular options on or off and get some results. Besides which, persisting 10000 objects in just 7-8 seconds (on this PC!) is pretty impressive in anyone’s book, and never a bottleneck in a normal application.
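For reference, there is nothing exotic about taking such timings; a simple wall-clock measurement around the block is enough. This is just an illustrative sketch, not necessarily the exact harness used:

long start = System.currentTimeMillis();
// ... run the persistence code above ...
long elapsed = System.currentTimeMillis() - start;
System.out.println("Persisted 10000 objects in " + elapsed + "ms");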

If you are performing something of the nature of a flat-object bulk persist, you would avoid transactions (saving their overhead), and you would turn off various features that are not of interest: managed relationships, persistence-by-reachability-at-commit, even L2 caching. Then, if using a generator for your identity values, you would allocate large blocks of values in your metadata; both are sketched below. That said, if you are so serious about the performance of persisting flat objects to an RDBMS, then anyone sensible would use either JDBC directly, or a JDBC wrapper like MyBatis or SpringJDBC. This is “right tool for the job”. Since the benchmark was provided by a group of Hibernate users (they provide various Hibernate tutorials, and nothing for any other persistence implementation), we can only assume that this must be what they think is the “best tool”; if so, why not reproduce the benchmark with well-written JDBC and let us know what you find? I even posed this question about the applicability of a flat class on their blog, and my comment was deleted. Whether it was deleted by them or by the blogging system I’ve no idea, but it was there for an hour and at that point it was deleted. Our blog has never lost comments, and it’s on the same host system.
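To make that concrete, here is a minimal sketch of the sort of tuning meant above. The property names are as I recall them from the DataNucleus documentation, so verify them against the docs for your version; “MyUnit” and the generator settings are purely illustrative.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Assumed DataNucleus property names; check the documentation for your release
Map<String, String> props = new HashMap<String, String>();
props.put("datanucleus.manageRelationships", "false");               // skip managed-relationship processing
props.put("datanucleus.persistenceByReachabilityAtCommit", "false"); // skip the reachability pass at commit
props.put("datanucleus.cache.level2.type", "none");                  // no L2 cache for one-shot inserts
EntityManagerFactory emf = Persistence.createEntityManagerFactory("MyUnit", props);

And on the id field, a large allocation block so that value generation rarely goes back to the datastore (standard JPA annotations from javax.persistence):

@Id
@TableGenerator(name = "AGen", allocationSize = 1000) // grab ids in blocks of 1000
@GeneratedValue(strategy = GenerationType.TABLE, generator = "AGen")
Long id;

As for “well-written JDBC”, for this flat case it is little more than a batched PreparedStatement. The table and column names below are assumptions matching the entity above, and exception handling is omitted for brevity.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:test", "sa", "");
con.setAutoCommit(false);
PreparedStatement ps = con.prepareStatement("INSERT INTO A (ID, NAME) VALUES (?, ?)");
for (int i = 0; i < 10000; i++)
{
    ps.setLong(1, i);
    ps.setString(2, "First");
    ps.addBatch();
}
ps.executeBatch(); // one round trip for the whole batch, driver permitting
con.commit();
ps.close();
con.close();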

One of the (undeleted) comments on their benchmark was from the author of an ODBMS who seems to like to decide publicly what I ought to spend my spare time on. Is he a commercial client? No. Is his software open source? No. Or free? Not if I want to use it on anything serious. The response to him is simple: let me decide what I spend my time on, and you concentrate on your own software; I don’t tell you what to do. It’s an ancient custom called “respect”, and its absence is sadly symptomatic of attitudes in the IT profession.

A benchmark to use as the basis for choosing software for your own application needs to cover the different persistence operations that you will perform. If you have a web application that continually creates a few objects, deletes a few, updates a few, etc., then the likelihood is that the performance will not impact you or your end users one iota. What will impact you is whether the persistence solution allows you to do what you want to do, or whether it has a large number of unfixed bugs that force you to continually implement workarounds or compromise your design. Why not have a look through the issue tracker of the software and see what types of problems people are having, and how long the issues have existed?

Edit: to give an example of another benchmark for JPA providers, here is one that was presented in 2012, comparing the four best-known JPA providers on some more complicated models. DataNucleus comes out very well. Note that in this case the author actually bothered to investigate what was happening under the covers.

DataNucleus 3.0 development is initially focussing on architecture, since we believe in getting the architecture right first to take the software to the level we think it needs to be at. This means that in early milestones (the benchmark referenced above decided to use 3.0M1) we spend time on refactoring etc., not on performance. This doesn’t mean that performance isn’t important, just that we feel our users want to be able to perform their tasks first and foremost, with things speeded up later. This is the same methodology employed to much success by PostgreSQL, who for years had to listen to “MySQL is faster” comments. Even with that general philosophy, anyone using current SVN code would already see a very noticeable speed-up in non-transactional persistence, and anyone using the MongoDB plugin would also see much more optimised insert performance. These benefits come from extending the architecture to do some things that we’ve wanted to do for some time but didn’t, owing to backwards-compatibility constraints and limited resources.

Next time you look at some “performance benchmark” we suggest that you bear this in mind. We won’t be spending our time analysing their results or responding to their claims, because we’d rather spend it developing this software than on “mine is better than yours” negative-mentality discussions.

This entry was posted in JDO, JPA, Persistence.

3 Responses to Performance, benchmarking

  1. Marco Lopes says:

    Andy, are you talking about the JPAB tests? http://www.jpab.org/All/All/All.html
    DataNucleus even “fails” some of them… as you know, I use JDO, but these results make me wonder if JDO was a good choice.


  2. andy says:

    You mean some benchmark on a different API makes you wonder if JDO is a good choice? And that benchmark makes no reference to JDO either. That comment makes no sense. You know what JDO is, and what it is about. You know full well its advantages over JPA. Perhaps actually reading what I had to say above about benchmarking would help (and why the originator of that benchmark couldn't be bothered to get something working), and defining where *you* think there is some performance problem, based on *your* experience.


  3. Anonymous says:

    Great post, thank you

