In our recent discussions around Cloud Database offerings, we have noted that many of these services have dropped traditional RDBMS features such as user-defined transactions and referential integrity. For several decades these have been considered critical, non-negotiable database features. I can’t imagine someone implementing an enterprise database application 10 years ago that wasn’t ACID compliant; you would have been nailed by the DBA and laughed out of the client site as being non-enterprise ready.
What is interesting, though, is that over the last 3-5 years I have noticed a strong shift in application development away from many of the core RDBMS principles. More and more applications, while running on an RDBMS platform that fully supports ACID, are using read-uncommitted isolation levels, not wrapping statements in user-defined transactions and not implementing declarative referential integrity (DRI).
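To make the DRI point concrete, here is a minimal sketch using SQLite purely for illustration; the table names, the orphan row, and the audit query are all invented for this example, not taken from any particular application:

```python
import sqlite3

# Illustrative schema (names invented for this sketch).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = OFF")  # DRI switched off, as described
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE order_lines ("
             "order_id INTEGER REFERENCES orders(id), qty INTEGER)")

# With enforcement off, an orphan row is accepted silently; the
# application code is simply trusted never to produce one.
conn.execute("INSERT INTO order_lines (order_id, qty) VALUES (999, 1)")
conn.commit()
orphans = conn.execute(
    "SELECT COUNT(*) FROM order_lines ol "
    "LEFT JOIN orders o ON o.id = ol.order_id WHERE o.id IS NULL"
).fetchone()[0]
print(orphans)  # 1 -- the inconsistency is found only because we went looking

# The traditional stance: with enforcement on, the engine refuses the write.
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO order_lines (order_id, qty) VALUES (1000, 1)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

The trade the post describes is exactly this: skip the per-write constraint check (and its cost under concurrency), and accept that bad rows surface only when something later queries for them.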
The key reason? Performance (including scalability and concurrency). The most visible flaw in an application is poor performance, and with modern-day scalability requirements in terms of concurrent users and data volumes, application developers are avoiding as many of the ACID guarantees that hurt performance as possible. How many 300GB databases did we see 5 years ago? Now 100GB is an average-sized departmental database.
The justification is that system failures are relatively rare, application code is heavily tested (so as to avoid data inconsistencies) and some small level of data loss or “corruption” is acceptable in the wider scheme of processing ever-greater transaction volumes. Some applications even run batch jobs to “patch up” inconsistent data periodically. And so I put the question to you: does performance trump everything?
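One way such a periodic “patch up” pass can look, sketched here against an invented SQLite schema (the table names and the orphan rule are assumptions for illustration, not from any real system):

```python
import sqlite3

# Invented schema: order lines with no enforced foreign key to orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE order_lines (order_id INTEGER, qty INTEGER);
    INSERT INTO orders VALUES (1);
    INSERT INTO order_lines VALUES (1, 2);   -- valid line
    INSERT INTO order_lines VALUES (42, 5);  -- orphan left by a failed run
""")

# The nightly batch job: remove lines whose parent order no longer exists.
cur = conn.execute(
    "DELETE FROM order_lines WHERE order_id NOT IN (SELECT id FROM orders)"
)
conn.commit()
deleted = cur.rowcount
print(deleted)  # rows swept up by the cleanup pass
```

In other words, the consistency check that DRI would have done synchronously on every write is deferred to an asynchronous sweep, with the inconsistency simply tolerated in between.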