Part 1: Governance in the Big Data Age

The revelation earlier this year that hundreds of thousands of Facebook users were unknowing subjects in a 2012 psychology experiment caused a widespread negative reaction. According to this WSJ article, “Researchers from Facebook and Cornell University manipulated the news feed of nearly 700,000 Facebook users for a week in 2012 to gauge whether emotions spread on social media.” Another interesting read is Doug Henschen’s InformationWeek article “Mining WiFi Data: Retail Privacy Pitfalls”, in which he speaks to the value retailers can realize by mining WiFi data, but also to the potential pitfalls of being able to track and store the minute behaviors of individuals.

Of course, Facebook is not the only organization with a burgeoning wealth of personal customer data; every business looking to gain an edge in its industry wants to store every piece of data it generates (including data on every single customer interaction) and, at some point, gain valuable insight from it. Every business with a Big Data initiative needs to carefully consider the privacy and security ramifications. And beyond the ethical decisions around the use of data, there is the question of how technology supports data governance: how is access to data limited and tracked, how do you know what personal data you are storing, and how do you mask it? The sketch below illustrates one common answer to that last question.
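To make the masking question concrete, here is a minimal Python sketch of one common approach: replacing personal identifiers with salted hashes so analysts can still join and count records without ever seeing the raw values. The field names and salt are hypothetical, chosen only for illustration; this is not a description of any particular product’s masking feature.

```python
import hashlib

# Hypothetical PII column names for illustration; a real list would come
# from your own data dictionary.
PII_FIELDS = {"email", "phone", "ssn"}

def mask_record(record: dict, salt: str = "change-me") -> dict:
    """Return a copy of the record with PII fields replaced by salted hashes."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS and value is not None:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
            # A truncated hash still lets analysts join/count on the field
            # without exposing the underlying personal value.
            masked[key] = digest[:12]
        else:
            masked[key] = value
    return masked

# Example usage
print(mask_record({"customer_id": 42, "email": "jane@example.com", "city": "Ottawa"}))
```

The same idea scales up: masking applied consistently (same salt, same algorithm) across data sets preserves analytic value while limiting who ever handles the raw personal data.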

The critical importance of governance to the success of a Big Data initiative is something IBM recognized very early and has invested in heavily for its BigInsights Hadoop offering. I wanted to use a few posts to take a closer look at the governance capabilities included in BigInsights – where they come from, how they work, and the business problems they address.


Part 1: SQL Server Parallel Data Warehouse – Best Thing Since Sliced Bread?

Update: As of an April 2014 announcement, Microsoft is calling the next iteration of its Parallel Data Warehouse Edition-based offerings the Analytics Platform System, and relative unknown Quanta joins Dell and HP as a hardware provider.

With my first post, I wanted to take a look at the capabilities of Microsoft’s SQL Server Parallel Data Warehouse offering and contrast it with a more established offering, IBM’s PureData System for Analytics – still probably better known today as Netezza.

Parallel Data Warehouse (PDW) is an offering you can order from HP, called the AppSystem for Parallel Data Warehouse, or from Dell, called the Dell Parallel Data Warehouse, both running SQL Server 2012 Parallel Data Warehouse edition. Parallel Data Warehouse Edition combines capabilities from Microsoft’s SMP-only SQL Server with those of its 2008 DATAllegro acquisition.

PDW Background/History

When general availability of PDW V1 was first announced in November 2010, the message seemed to me to be that the MPP (massively parallel processing), or shared-nothing, architecture of PDW was something new and revolutionary, rather than a technology other vendors had been leveraging for two decades for very large databases. IBM introduced DB2 Parallel Edition, today called the DB2 Database Partitioning Feature (DPF), in 1995; Netezza, today called PureData System for Analytics, came out in 2003; Teradata had an offering in the 1980s. While it is positive that Microsoft introduced an option for customers hitting the wall with BI on SQL Server (where Oracle, for example, persists with its RAC shared-data architecture for everything), many customers long ago recognized that shared nothing was the right approach for working with large data sets and have leveraged shared-nothing platforms to gain insights from their data. The small number of PDW case studies highlighted by Microsoft up to two years after the V1 release suggests adoption has been slow.
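For readers new to the term, the essence of shared nothing is that each node owns a slice of the data, assigned by hashing a distribution key, and does most of its work against only its local slice before a small combine step at the end. The toy Python sketch below illustrates that flow; the node count and column names are made up for illustration, and this is a conceptual sketch, not how PDW or any specific MPP engine is actually implemented.

```python
from collections import defaultdict

NUM_NODES = 4  # illustrative; a real appliance has a fixed node count per rack

def node_for(key) -> int:
    """Hash-distribute a row to a node based on its distribution key."""
    return hash(key) % NUM_NODES

# Distribute sales rows across nodes by customer_id
sales = [
    {"customer_id": 1, "amount": 100.0},
    {"customer_id": 2, "amount": 75.0},
    {"customer_id": 1, "amount": 20.0},
]
nodes = defaultdict(list)
for row in sales:
    nodes[node_for(row["customer_id"])].append(row)

# Each node aggregates only its own local data (the "shared nothing" part)...
partials = {n: sum(r["amount"] for r in rows) for n, rows in nodes.items()}
# ...and a coordinator combines the small partial results.
total = sum(partials.values())
print(total)
```

Because each node scans and aggregates only its own partition, adding nodes adds both storage and compute in step, which is why this approach has held up so well for very large analytic data sets.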

In the first half of 2013, PDW V2 came out with a significantly different architecture: moving from deploying the software directly on the servers in a rack to using Hyper-V virtualization, using JBODs (just a bunch of disks) rather than a SAN, and offering a one-rack starting format (vs. two racks in V1) with more CPU, memory, and disk. There were also a few database-level enhancements, the most notable being the use of columnstore indexes to improve query performance.
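As rough intuition for why columnstore indexes help scan-heavy analytic queries, the Python sketch below (with made-up column names) contrasts a row layout, where an aggregate walks every field of every row, with a columnar layout, where it touches only the column it needs. Real columnstore implementations add compression and batch execution on top of this basic idea; this is only an illustration of the storage concept.

```python
# Row store: each row holds all columns, so a SUM over one column
# still scans every field of every row.
row_store = [
    (1, "2013-01-05", 100.0),
    (2, "2013-01-06", 75.0),
    (3, "2013-01-07", 20.0),
]
sum_rows = sum(row[2] for row in row_store)

# Column store: each column is stored (and compressed) separately, so the
# same SUM reads only the one column it needs.
column_store = {
    "order_id": [1, 2, 3],
    "order_date": ["2013-01-05", "2013-01-06", "2013-01-07"],
    "amount": [100.0, 75.0, 20.0],
}
sum_columns = sum(column_store["amount"])

assert sum_rows == sum_columns
```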

Proof Points

Reading about a product offering’s benefits in a vendor’s solution brief is great. Hearing from actual customers is much better. PureData System for Analytics is used by over 1,000 customers, all of them echoing the same key points. Customers see excellent total cost of ownership, not only on the software and hardware cost side but in terms of ongoing management cost – big data volumes tend to create big complexity. They also invariably see excellent out-of-the-box performance for their most demanding analytic workloads. According to a January 2014 InformationWeek article on Big Data analytics platforms by Doug Henschen, “There’s no doubt that Microsoft is amassing all the pieces, but it’s early days for HDInsight, and we still don’t see many PDW deployments after three years in the market.” While proof points are not everything in a world of rapidly evolving technology, they are worth paying attention to.

Stay tuned for the parts that follow…