The total cost of ownership (TCO) of Business Intelligence (BI) systems is often measured in three categories: time-to-completion of projects, on-budget completion of projects, and cost per user of BI applications. One key process in every project impacts all three categories: Business Requirements Engineering. An effective requirements methodology ensures that project scope is clearly understood and costs are accurately estimated. At the same time, when we deliver what users want, usage and adoption grow the user base, which drives down the cost per user. Why, then, do so many programs not take a closer look at the effectiveness of their approach to this key part of the process?
BI professionals are used to working with a wide range of products and platforms, and typically carry a substantial tool belt that lets them work across many different technologies. Over the past couple of months I took the opportunity to experiment with technologies that are entering the data warehousing ecosystem, including the Cloudera Sandbox, the Hortonworks Sandbox, the IBM BigInsights sandbox, and Amazon Redshift.
The business world continues to evaluate and implement the cloud for some of its IT requirements. The concept of the cloud as a viable IT storage solution as well as a way to cut costs is gaining momentum. But it might prompt the question: is the cloud the right place for a data warehouse? This is an interesting question for many, and a problematic question for some.
We often work with clients to benchmark database performance during the vendor selection process, and as a result have become experts at working with the various publicly available standardized test data sets. Almost anyone who has been involved in a performance test is familiar with the Transaction Processing Performance Council (TPC) benchmarks.
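To make the benchmarking idea concrete, here is a minimal sketch of a query-timing harness in the spirit of a TPC-style run. The table layout and query are simplified, hypothetical stand-ins for the TPC-H `lineitem` table and its Query 1 pricing-summary aggregation, and SQLite is used only so the example is self-contained; a real benchmark would run the official TPC query set against the candidate warehouse.

```python
# Illustrative sketch only: a tiny stand-in for a TPC-H-style benchmark run.
# SQLite and the simplified schema/query below are assumptions for the demo,
# not part of any official TPC kit.
import sqlite3
import time

def run_benchmark(repetitions=3):
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    # Miniature stand-in for the TPC-H lineitem fact table.
    cur.execute(
        "CREATE TABLE lineitem (l_returnflag TEXT, l_quantity REAL, "
        "l_extendedprice REAL)"
    )
    rows = [("N", 10.0, 100.0), ("N", 5.0, 50.0), ("R", 2.0, 20.0)] * 1000
    cur.executemany("INSERT INTO lineitem VALUES (?, ?, ?)", rows)
    conn.commit()

    # Simplified echo of the TPC-H Query 1 pricing-summary aggregation.
    query = (
        "SELECT l_returnflag, SUM(l_quantity), SUM(l_extendedprice) "
        "FROM lineitem GROUP BY l_returnflag ORDER BY l_returnflag"
    )
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        result = cur.execute(query).fetchall()
        timings.append(time.perf_counter() - start)
    conn.close()
    return result, timings

result, timings = run_benchmark()
print(result)
```

The pattern is the same one a real vendor bake-off uses at scale: load a standardized data set once, run the standardized queries repeatedly, and compare wall-clock timings across platforms rather than trusting a single cold-cache run.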
Now and then new technologies, ideas, and even buzzwords come along that fundamentally change the way people look at the IT game. When Amazon first released Amazon Web Services (AWS), it changed the game of cloud-based data centers by introducing pay-as-you-go pricing for servers and storage. Large up-front capital infrastructure expenditures were replaced with much lower costs that customers could scale as their businesses grew. In doing so, Amazon grew its own business by fostering many more entrants into the e-commerce space, many of whom also turned to Amazon's logistics and fulfillment services. That was a game-changing moment, and definitely a win-win for all parties.
This blog entry was published on TIBCO Spotfire's Business Intelligence blog, by Linda Rosencrance, of the Spotfire Blogging Team. Here are 13 of her favorite data quotes – gathered from a number of sources on the web – from CEOs, statisticians, authors and even Sherlock Holmes. I thought it was really cool!
So, would you ever even consider putting a data warehouse in the cloud? With the cloud’s huge capacity, quick deployments and high availability—all at really low costs—it’s hard to ignore the possibilities. Ever since the iOLAP management team told me we were new integration partners with the Amazon Web Services (AWS) Redshift platform, I have been thinking about the cloud-based data warehouse concept.
Companies are being pressured on multiple fronts to do something amazing with Big Data. But should the brakes be applied, even slightly, to consider some lessons from the past? Historically, large data warehouse projects failed at surprisingly high rates. How do we learn from these collective past mistakes and increase the odds that Big Data efforts will provide Big Returns?
From a May 23, 2013 article in the NY Times Technology Section. "In the 1960s, mainframe computers posed a significant technological challenge to common notions of privacy. That’s when the federal government started putting tax returns into those giant machines, and consumer credit bureaus began building databases containing the personal financial information of millions of Americans. Many people feared that the new computerized databanks would be put in the service of an intrusive corporate or government Big Brother."
In the May 21, 2013 issue of BI This Week, TDWI author Steve Swoyer wrote an interesting article offering insights into both practitioner and vendor thinking when it comes to the pragmatics of Big Data, Hadoop, and relational databases.