Tony Bain

Building products & teams that leverage data, analytics, AI & automation to do amazing things

What is the biggest challenge for Big Data?

September 09, 2011 | Tony Bain

Often I think about the challenges that organizations face with “Big Data”.  While Big Data is a generic and overused term, what I am really referring to is an organization’s ability to disseminate, understand and ultimately benefit from increasing volumes of data.  It is almost without question that in the future customers will be won/lost, competitive advantage will be gained/forfeited and businesses will succeed/fail based on their ability to leverage their data assets.

It may be surprising what I think the near-term challenges are.  Largely, I don’t think they are purely technical.  There are enough wheels in motion now to almost guarantee that data accessibility will continue to improve in line with the increase in data volume.  Sure, there will continue to be lots of interesting innovation in technology, but when organizations like Google are doing 10PB sorts on 8,000 machines in just over 6 hours, we know the technical scope for Big Data exists.  It will eventually flow down to the masses, and such scale will likely be achievable by most organizations in the next decade.

Instead, I think the core problem that needs to be addressed relates to people and skills.  There are lots of technical engineers who can build distributed systems, and orders of magnitude more who can operate them and fill them to the brim with captured data.  Where I think we are lacking is in people who know what to do with the data, people who know how to make it actually useful.  Sure, a BI industry exists today, but it is currently more focused on the engineering challenge of giving an organization faster and easier access to its existing knowledge than on reaching out into the distance and discovering new knowledge.  People with pure data analysis and knowledge discovery skills are much harder to find, and they are the ones who are going to be front and center driving the Big Data revolution: people you can give a few PB of data to, and they can give you back information, discoveries, trends, factoids, patterns, beautiful visualizations and needles you didn’t even know were in the haystack.

These are people who can make a real and significant impact on an organization’s bottom line, or help solve some of the world’s problems when applied to R&D.  Data geeks are the people to be revered in the future, and hopefully we will see a steady increase in people wanting to grow up to be Data Scientists.

NSA, Accumulo & Hadoop

September 08, 2011 | Tony Bain

I read yesterday that the NSA has submitted a proposal to Apache to incubate its Accumulo platform.  This, according to the description, is a key/value store built over Hadoop which appears to provide similar functionality to HBase, except that it adds “cell level access labels” to allow fine-grained access control.  That is something you would expect as a requirement for many applications built at government agencies like the NSA.  But it is also very important for organizations in health care, law enforcement and similar fields, where strict control over large volumes of privacy-sensitive data is required.
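
To make the idea of cell-level access labels concrete, here is a toy sketch of the concept.  It is my own simplified model, not Accumulo’s API (Accumulo is Java, and its visibility expressions support full boolean combinations rather than the plain label sets shown here): each cell carries a set of labels, and a scan only returns cells whose labels are covered by the reader’s authorizations.

```python
# Conceptual sketch only -- this is NOT the Accumulo API. It just illustrates
# the idea: every cell carries visibility labels, and a scan returns only the
# cells covered by the reader's authorizations.
from dataclasses import dataclass

@dataclass
class Cell:
    row: str
    column: str
    value: str
    visibility: frozenset   # labels required to read this cell

def scan(cells, authorizations):
    """Return only the cells whose labels are all held by the reader."""
    auths = set(authorizations)
    return [c for c in cells if c.visibility <= auths]

table = [
    Cell("patient-001", "name",      "J. Smith", frozenset({"admin"})),
    Cell("patient-001", "diagnosis", "...",      frozenset({"medical"})),
    Cell("patient-001", "invoice",   "$1,200",   frozenset({"billing"})),
]

print(scan(table, {"billing"}))   # a billing clerk sees only the invoice cell
print(scan(table, {"medical"}))   # a clinician sees only the diagnosis cell
```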

An interesting part of this is how it highlights the acceptance of Hadoop.  Hadoop is no longer just a new technology scratching at the edges of the traditional database market, and it is no longer used only by startups and web companies.  That is highlighted by contributions like this from organizations such as the NSA, and further underlined by the amount of research and focus on Hadoop from the data community at large (such as last week at VLDB).  Hadoop has become a proven and trusted platform and is now being used by traditional and conservative segments of the market.

 

SQL Server to discontinue support for OLE-DB

September 08, 2011 | Tony Bain

ODBC was first created in 1992 as a generic set of standards for providing access to a wide range of data platforms through a standard interface.  In the early days it was a common way of accessing SQL Server data.  However, over the last 15 years ODBC has played second fiddle for SQL Server application developers, who have usually favoured the platform-specific OLE-DB provider and the interfaces built on top of it, such as ADO.

Now, in an apparent reversal of direction, various Microsoft blogs have announced that the next version of SQL Server will be the last to support OLE-DB, with the emphasis returning to ODBC.  Why this is the case isn’t entirely clear, but various people have tried to answer it, the primary message being that ODBC is an industry standard whereas OLE-DB is Microsoft proprietary.  And as they are largely equivalent, it makes sense to continue supporting only the more generic of the two providers.

After years of developers moving away from ODBC to OLE-DB, this announcement is, as you would expect, being met with much surprise in the community.  To be fair, though, I suspect most developers won’t notice, as they use higher-level interfaces such as ADO.NET which abstract the specifics of the underlying providers.  C/C++ developers, on the other hand, may need to revisit their data access layer if they are accessing SQL Server directly via OLE-DB.
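
Part of ODBC’s appeal as the surviving provider is that the same interface is usable from just about any language.  Purely as an illustration (the driver name, server and credentials below are placeholders, and pyodbc is simply one ODBC binding among many), this is roughly what direct ODBC access to SQL Server looks like from Python:

```python
# A minimal sketch of ODBC access to SQL Server via pyodbc. Driver name,
# server and credentials are placeholders, not anything from the announcement.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name, create_date FROM sys.databases")
for name, create_date in cursor.fetchall():
    print(name, create_date)
conn.close()
```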

What Scales Best?

July 29, 2011 | Tony Bain

It is a constant, yet interesting debate in the world of big data.  What scales best?  OldSQL, NoSQL, NewSQL?

I have a longer post coming on this soon, but for now let me make the following comments.  Generally, most data technologies can be made to scale, somehow.  Scaling up tends not to be too much of an issue; scaling out is where the difficulties begin.  Yet most data technologies can be scaled in one form or another to meet a data challenge, even if the result isn’t pretty.

What is best?  Well, that comes down to the resulting complexity, cost, performance and other trade-offs.  Trade-offs are key, as there are almost always significant concessions to be made as you scale.

A recent example of mine: I was looking at the scalability of MySQL, in particular MySQL Cluster.  It is actually pretty easy to make it scale.  A 5 node cluster on AWS was able to sustain 371,000 insert transactions per second.  Good scalability, yes, but there were many trade-offs made around availability, recoverability and non-insert query performance to achieve it.  For the particular requirement I was looking at, though, it fitted very well.
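
For context, the load generator for that kind of test doesn’t need to be exotic.  The sketch below is a rough illustration of the approach rather than the actual harness I used; the host, credentials, schema and batch sizes are invented, and the real trade-offs came from how the cluster itself was configured rather than from the client code.

```python
# Rough illustration only: a handful of worker threads, each with its own
# connection, inserting rows in committed batches. All names and sizes here
# are invented; MySQL Connector/Python is used purely for illustration.
import threading
import mysql.connector

NUM_WORKERS = 16
ROWS_PER_WORKER = 100000
BATCH_SIZE = 1000

def worker(worker_id):
    conn = mysql.connector.connect(
        host="mysql-sql-node", user="bench", password="bench", database="benchdb")
    cur = conn.cursor()
    for i in range(ROWS_PER_WORKER):
        cur.execute(
            "INSERT INTO events (worker_id, seq, payload) VALUES (%s, %s, %s)",
            (worker_id, i, "x" * 64))
        if (i + 1) % BATCH_SIZE == 0:
            conn.commit()   # commit in batches to cut per-row round trips
    conn.commit()
    conn.close()

threads = [threading.Thread(target=worker, args=(w,)) for w in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```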

So what is this all about?  Well, if a social network is running MySQL in a sharded cluster to achieve the scale necessary to support its many millions of users, the fact that database technology X or database technology Y can also scale, with different “costs” or trade-offs, doesn’t necessarily make either any better for them.  If, for example, you have some of the smartest and most talented MySQL developers on your team and can alter the code at a moment’s notice to meet a new requirement, that alone might make your choice of MySQL “better” than using NoSQL database XYZ from a proprietary vendor, where there may be a loss of flexibility and control from soup to nuts.

So what is my point?  Well, I guess what I am saying is that physical scalability is of course an important consideration in determining what is best, but it is only one side of the coin.  What it “costs” you in terms of complexity, actual dollars, performance, flexibility, availability, consistency and so on is important too.  And these costs are often relative: what is complex for you may not be complex for someone else.

Reply to The Future of the NoSQL, SQL, and RDBMS Markets

July 12, 2011 | Tony Bain

Conor O'Mahony over at IBM wrote a good post on a favorite topic of mine, “The Future of the NoSQL, SQL, and RDBMS Markets”.  If this is of interest to you then I suggest you read his original post.  I replied in the comments but thought I would also repost my reply here.

———————————————————————————————–

Hi Conor, I wish it were as simple as SQL & RDBMS is good for this and NoSQL is good for that.  For me at least, the waters are much muddier than that.

The benefit of SQL & the RDBMS is that their general-purpose nature means they can be applied to a lot of problems, and because of that applicability they have become mainstream to the point where every developer on the planet can probably write basic SQL.  And it is justified: there aren’t many data problems you can’t throw an RDBMS at and solve.

As for the problems with SQL & the RDBMS, essentially I see two.  Firstly, distributed scale is a problem in a small number of cases.  This can be solved by losing some of the generic nature of the RDBMS while keeping SQL, as with MPP or attempts like Stonebraker’s NewSQL.  The other way is to lose the RDBMS and SQL altogether and achieve scale with alternative key/value approaches such as Cassandra, HBase and so on.  But those NoSQL databases don’t seem to be the ones gaining the most traction.  From my perspective, the most “popular” and fastest-growing NoSQL databases tend to be those which aren’t entirely focused on pure scale but instead focus first on the development model, such as Couch and MongoDB.  Which brings me to my second issue with SQL & the RDBMS.

Without a doubt, the way in which we build applications has changed dramatically over the last 20 years.  We now see much greater application volumes, much smaller developer teams, shorter development timeframes and faster-changing requirements.  Much of what the RDBMS has offered developers, such as strong normalization, enforced integrity, strong data definition and documented schemas, has become less relevant to applications and developers.  Today I suspect most applications use a SQL database purely as an application-specific dumb datastore.  Usually there aren’t multiple applications accessing that database, there aren’t lots of direct data imports/exports into other applications, there is no third-party application reporting and there are no ad-hoc user queries; the data store is just a repository for a single application to retain data purely for the purpose of making that application function.  Even several major ERP applications have fairly generic databases with soft schemas, without any form of constraints or referential integrity.  This is just handled better, from a development perspective, in the code that populates the database.

Now of course the RDBMS can meet this requirement, but the issue is that the cost of doing so is higher than it needs to be.  People write code with classes; the RDBMS speaks SQL.  The translation between these two structures, the plumbing code, can in some cases be 50% or more of an application’s code base (whether that is hand-written code or code generated automatically by a modeling tool).  Why write queries if you are just retrieving an entire row by key?  Why have a strict data model if yours is the only application using it and you maintain integrity in the code?  Why should a change in requirements force you to build a schema change script whose deployment has to be kept in sync with the application version?  Why have cost-based optimization when all the data access paths are 100% known at the time the code is compiled?
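
A toy contrast makes the plumbing point clearer.  Both snippets below are my own illustration, with invented table, collection and field names: the first fetches a row by key through SQL and then maps it to an object by hand, the second asks a document store for the whole object by key with no mapping layer.

```python
# Toy contrast, my own illustration (table, collection and field names invented).
# RDBMS style: a query plus hand-written mapping between a row and an object.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'ada@example.com')")

row = conn.execute("SELECT id, name, email FROM users WHERE id = ?", (1,)).fetchone()
user = {"id": row[0], "name": row[1], "email": row[2]}   # the plumbing step

# Document-store style: fetch the whole object by key, no mapping layer.
# (Commented out because it needs a running MongoDB; shown for comparison.)
# from pymongo import MongoClient
# users = MongoClient().appdb.users
# users.insert_one({"_id": 1, "name": "Ada", "email": "ada@example.com"})
# user = users.find_one({"_id": 1})
```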

Now, I am still largely undecided on all of this.  I get why NoSQL can be appealing and I get how it fits with today’s requirements; what I am unsure about is whether it is all very short-sighted.  Applications being built today with NoSQL will themselves grow over time.  What starts off today as simple gets/puts against a soft-schema’d datastore may over time gain reporting or analytics requirements that were unexpected when initial development began.  What might have taken a simple SQL query to satisfy in an RDBMS now might require data being extracted into something else, maybe Hadoop or MPP or maybe just a simple SQL RDBMS, where it can be processed and re-extracted back into the NoSQL store in a processed form.  That might make sense if you have huge volumes of data, but for a small-scale web app it could be a lot of cost and overhead just to summarize data for simple reporting needs.

Of course, this is all still evolving, and the RDBMS and NoSQL vendors are both on some form of convergence path.  We have already started hearing noises about RDBMSs looking to offer more NoSQL-like interfaces to their underlying data stores, as well as NoSQL databases looking to offer more SQL-like interfaces to their repositories.  They will meet up eventually, but by then we will all be talking about something new, like stream processing?

Thanks Conor for the thought-provoking post.

 

Realtime Data Pipelines

July 01, 2011 | Tony Bain

In life there are really two major types of data analytics.  Firstly, we don’t know what we want to know, so we need analytics to tell us what is interesting.  This is broadly called discovery.  Secondly, we already know what we want to know; we just need analytics to tell us that information, often repeatedly and as quickly as possible.  This covers anything from reporting and dashboarding through to more general data transformation and so on.

Typically we use the same techniques to achieve both.  We shove lots of data into a repository of some form (SQL, MPP SQL, NoSQL, HDFS etc.) and then run queries, jobs or processes across that data to retrieve the information we care about.

Now, this makes sense for data discovery.  If we don’t know what we want to know, having lots of data in a big pile that we can slice and dice in interesting ways is good.  But when we already know what we want to know, repeated batch processing across mounds of data to produce “updated” results, over data that is often changing constantly, can be highly inefficient.

Enter Realtime Data Pipelines.  Data is fed in one end, results are computed in real time as the data flows down the pipeline, and updated results come out the other end whenever relevant changes occur.  Data pipelines, workflows and streams are becoming much more relevant for processing massive amounts of data with real-time results.  Moving the relevant forms of analytics out of large repositories and into the actual data flow from producer to consumer will, I believe, be a fundamental step forward in big data management.
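
As a minimal sketch of the idea (the event shape and threshold are invented), the pipeline below keeps a running aggregate as events flow through it and emits a result only when a change we care about occurs, rather than re-running a batch query over the whole repository:

```python
# Minimal sketch with an invented event shape and threshold: keep a running
# aggregate as events flow through, and emit a result only when something we
# care about changes, instead of re-querying the whole store in batches.
from collections import defaultdict

def pipeline(events, alert_threshold=100):
    """Consume an event stream; yield (key, count) when a count reaches the threshold."""
    counts = defaultdict(int)
    for event in events:             # events can be any (unbounded) iterable
        key = event["page"]
        counts[key] += 1
        if counts[key] == alert_threshold:
            yield key, counts[key]   # only the relevant change leaves the pipeline

# Usage: feed it a stream of page-view events.
views = ({"page": "/home"} for _ in range(250))
for page, count in pipeline(views):
    print(page, "reached", count, "views")
```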

There are some emerging technologies looking to address this; more details to follow.

Ingres cleans up in TPC-H

May 05, 2011 | Tony Bain

Ingres Vectorwise has convincingly taken the lead spot for the 100GB, 300GB and 1TB scale factors in the “less interesting” non-clustered TPC-H benchmark, for both performance and price/performance.  How convincingly?  Over 3 times the TPC QphH rate of the previous #1 spot holders for all three results.

But while the non-clustered results may be of less interest to database performance rev-heads (yes, we exist), they are highly relevant for the key target segment for Ingres Vectorwise: enterprise data marts.

Well done Ingres (and the Vectorwise team); I know all the hard work you have been putting in to get this result.

What’s hot in Big Data startups?

March 18, 2011 | Tony Bain

There are so, so many big data platforms in play at the moment that it can be confusing for developers to know where to start.  For startups it used to be simple: MySQL.  But dust clouds were kicked up when all the NoSQL platforms started to crash the party 18 months or so ago.  Now the dust is beginning to settle and we are starting to see some market “leaders” appear.  A very unscientific approach is to list the technologies I hear about in the “big data startup” world on a daily basis.  These are, in no particular order:

  • MySQL – yes it is still very much hanging in there despite the Oracle acquisition.  MySQL has been helped by technologies such as AWS RDS and Xeround making it more digestible for big data startups who want to minimize operational overheads.
  • Redis – becoming very popular, surprising how much I hear about it.
  • MongoDB – being early, being focused and being well backed has its advantages.
  • Hadoop – well established, a staple in the big data analytics diet.

Of course there are many more that are doing well, and many more that will fill certain niches – but these are the big data technologies I could not go a day without hearing about a new project using in a core capacity.

Who/What to acquire next

March 18, 2011 | Tony Bain

Well, as predicted, with Aster Data recently being picked up by Teradata, most of the key new-generation MPP distributed analytics vendors have now been acquired (Aster Data, Vertica, Netezza & Greenplum).  This had to happen and was expected to happen.  The MPP analytics startup “revolution” is over, and these technologies will now be integrated into the mainstream.

So what’s next?  As we know, if you are a massive multi-national software company it is a lot less risky to innovate incrementally and leave the development of “game changing” technologies to startups that can be acquired after they prove both the tech and the market.  So what follows MPP?

NoSQL technologies seem the only likely candidates at the moment, although I think it is a few years too early for any major acquisitions to occur.  A key issue that would need to be worked through is what exactly is being acquired, as most NoSQL platforms are open source / free (most MPP platforms were proprietary).  But nonetheless, as the market grows and starts to eat away at some noticeable share of the existing RDBMS market, the major vendors will want a piece of that action and the frenzy will start again.  That is still quite a while away yet.

Truly Distributed Analytics

October 22, 2010 | Tony Bain

The growth and success of Hadoop is very interesting.  It is emerging as a highly significant technology for the data scientist: a platform that can scale and accommodate data exploration across even some of the largest datasets that exist today.  Yahoo, I’m told, has a 43,000 node Hadoop cluster.  The mind boggles at the volume of data being crunched with this cluster and ones like it.  Hadoop is distributed.  More specifically, it is a distributed system: a cluster of servers acting together to process a sequence of user-initiated jobs.
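
For anyone who hasn’t seen one of these jobs up close, the sketch below shows the flavour of the programming model via Hadoop Streaming, where the mapper and reducer are ordinary scripts reading stdin and writing stdout.  Word count is the stock example here, nothing specific to the clusters mentioned above.

```python
# wordcount.py -- the flavour of a Hadoop job in the Hadoop Streaming model,
# where the mapper and reducer are plain scripts reading stdin, writing stdout.
import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        for word in line.split():
            print(word + "\t1")

def reducer(lines):
    # Hadoop delivers the mapper output to the reducer sorted by key.
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(word + "\t" + str(sum(int(count) for _, count in group)))

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if role == "map" else reducer)(sys.stdin)
```

A job like this is submitted through the streaming jar (roughly: hadoop jar hadoop-streaming.jar with -mapper and -reducer pointing at the script, plus input and output paths), and the framework handles splitting the input across the cluster and shuffling and sorting between the two phases.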

While the system may be considered distributed, the data being analyzed is, for all intents and purposes, centralized.  The data at the centre of an analysis job must be located within your cluster and directly accessible by your local applications.  This means that as the volume of data under the microscope grows, the size of the analytics platform grows to accommodate the influx of information.

However, as data science expands, external data sources are becoming increasingly relevant for data analytics.  External data is data that is related to your business but not produced within your organization.  Examples include environmental data (weather), geographic data (maps, places, addresses etc.), shipping & delivery data and so on.  External data can provide insight into irregularity and opportunity within your own datasets that, without it, could be overlooked or misunderstood.

While I spoke about this the other day somewhat in jest, some silly but simple examples might be the discovery that it is beneficial to increase advertising targeted at those in their 30s to 50s when “The O.C.” is on TV, or that it pays to boost the advertising of certain novels in regions where it is currently pouring down outside.  Such opportunities can’t be discovered until your data is combined with externally sourced data (television scheduling, weather etc.).

External data at the moment tends to be quite small and discrete, so the current approach is to import it into the local analytics environment.  Organizations such as Infochimps are doing a great job of organizing these external data sets and providing APIs for importing the data into whatever localized analytics platform you are running.  However, as the importance and volume of external data grow, I believe the impact of “importing” this data will grow too, and in certain cases the volume of external data may become significantly greater than the local data.  Identifying which external data is relevant will also become a role of analytics itself.

While it is early days, one project I am very excited about is focused on how analytics can be distributed between systems and even organizations.  Rather than centralizing large sets of data, the analytics jobs themselves span organizations and data centers, and, of course, when doing so they respect the security and privacy expectations of all parties in the process.