Every time Java is discussed on Slashdot, someone says that the overheads of automatic memory management aren’t worth it because Java still has memory leaks.
After further discussion, it generally turns out that they’re not talking about memory leaks; rather, they are talking about failure to free up resources in a timely manner — resource hogging. It’s a subtle distinction. In a memory leak, the system loses track of the memory, so it never gets freed during the life of the program. In the case of Java resource hogging, the Java system is still keeping track of the resources, and will eventually free them — it just doesn’t do it soon enough.
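The distinction can be made concrete with a small sketch. This uses a stand-in `Resource` class of my own invention (not a real JDBC type) to show hogging: the resource is always closed, so nothing leaks, but it is held open across work that does not need it.

```java
// Illustrates resource hogging (as opposed to a leak): the resource is
// reliably freed, just later than necessary.
public class HoggingDemo {
    public static int openCount = 0;             // how many resources are open
    public static boolean heldDuringSlowWork = false;

    // Stand-in for an expensive resource such as a database connection.
    static class Resource implements AutoCloseable {
        Resource() { openCount++; }
        @Override public void close() { openCount--; }
    }

    // Hogging: close() always runs, but only after unrelated slow work,
    // so the resource is tied up longer than it needs to be.
    public static void hogging() {
        try (Resource r = new Resource()) {
            // ... use r here ...
            slowUnrelatedWork();   // r is still open at this point, needlessly
        }
    }

    // Prompt release: the resource is scoped tightly to the work that needs it.
    public static void prompt() {
        try (Resource r = new Resource()) {
            // ... use r here ...
        }
        slowUnrelatedWork();       // r is already closed here
    }

    static void slowUnrelatedWork() {
        heldDuringSlowWork = openCount > 0;
    }

    public static void main(String[] args) {
        hogging();
        System.out.println("held during slow work: " + heldDuringSlowWork); // true
        prompt();
        System.out.println("held during slow work: " + heldDuringSlowWork); // false
    }
}
```

In both versions every resource is eventually closed, so neither is a leak; the difference is purely one of timeliness.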
A common situation where resource hogging occurs is JDBC, when querying a SQL database from a Java application or servlet environment. The problem is that JDBC query code is surprisingly tricky to get completely correct. It's easy to write code where an exception causes active JDBC objects (connections, statements, and result sets) to be left unclosed, making the application unreliable, overloading the database server, or using more memory than it needs.
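Here is a minimal sketch of that failure mode, using a stand-in `AutoCloseable` class in place of real JDBC objects (the class and method names are illustrative, not part of any API). The buggy version loses the resource when an exception is thrown; the correct version uses try-with-resources (Java 7+; older code would use try/finally) so the close happens on every path.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Tracks which stand-in resources are currently open.
    public static final List<String> open = new ArrayList<>();

    // Stand-in for a JDBC Statement or ResultSet.
    static class FakeStatement implements AutoCloseable {
        final String name;
        FakeStatement(String name) { this.name = name; open.add(name); }
        @Override public void close() { open.remove(name); }
    }

    // Simulates a query that fails partway through.
    static void runQuery() {
        throw new RuntimeException("simulated SQL error");
    }

    // Buggy: the exception from runQuery() skips close() entirely.
    public static void buggy() {
        FakeStatement stmt = new FakeStatement("stmt-buggy");
        runQuery();        // throws, so the line below never runs
        stmt.close();      // never reached on the exception path
    }

    // Correct: try-with-resources closes stmt no matter how the block exits.
    public static void correct() {
        try (FakeStatement stmt = new FakeStatement("stmt-correct")) {
            runQuery();    // throws, but stmt is still closed
        }
    }

    public static void main(String[] args) {
        try { buggy(); } catch (RuntimeException e) { /* handled */ }
        try { correct(); } catch (RuntimeException e) { /* handled */ }
        System.out.println(open);   // prints [stmt-buggy]
    }
}
```

Only the statement from `buggy()` is left dangling; the one from `correct()` was closed even though the same exception was thrown.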
MySQL and PostgreSQL are extremely liberal about resource cleanup. For example, you can generally close a connection and rely on the database to implicitly close everything else, abandoning any uncommitted transactions along the way. This is not the case with IBM DB2, which will actually refuse to let you close a connection unless you have cleaned everything up properly. So it's not just a resource-usage issue: you can also suddenly find yourself doing a ton of debugging when your data load increases and you need to swap out your development database engine for something more scalable.
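One portable discipline is to release everything explicitly, innermost object first, and resolve any open transaction before closing the connection, so the code behaves the same on lenient and strict databases. Here is a sketch of such a helper against the standard `java.sql` interfaces; the class name and the choice to roll back (rather than commit) on cleanup are my assumptions, not something the original prescribes.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class JdbcCleanup {
    // Closes JDBC objects innermost-first (ResultSet, then Statement, then
    // Connection), rolling back any uncommitted work before the final close.
    // Each step is isolated in its own try block so that one failure cannot
    // skip the remaining cleanup.
    public static void cleanup(ResultSet rs, Statement st, Connection conn) {
        if (rs != null) {
            try { rs.close(); } catch (SQLException ignored) { }
        }
        if (st != null) {
            try { st.close(); } catch (SQLException ignored) { }
        }
        if (conn != null) {
            // rollback() may throw (e.g. in auto-commit mode); ignore and
            // still attempt the close.
            try { conn.rollback(); } catch (SQLException ignored) { }
            try { conn.close(); } catch (SQLException ignored) { }
        }
    }

    private JdbcCleanup() { }
}
```

Called from a `finally` block, this releases resources in an order that even a strict driver should accept, instead of relying on the database to tidy up implicitly.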
So, it pays to get your JDBC code right the first time. To illustrate the painful construction of some hopefully correct JDBC query code, I'm going to discuss the process of writing a simple example program in Eclipse.