A big data equivalent of the LAMP stack could emerge in 2014, according to Richard Daley, chief strategy officer of Pentaho, a specialist in business intelligence. He believes such a stack may be built as consensus develops around a few big data architectures, though its upper layers may contain more proprietary elements than LAMP does.

This is significant because the huge growth of interactive and dynamic websites in the late 1990s and early 2000s was driven, either completely or partially, by the LAMP stack: Linux, the Apache HTTP server, MySQL and PHP.

More than the sum of its parts

According to Daley, a large number of big data reference architectures are already in operation. Companies can put big data analytics and technologies to work swiftly, for purposes ranging from marketing to detecting network intrusions. It is more useful to analyze and act on big data than simply to store it.

Considered individually, open source components are free and powerful tools. Combined, they become more powerful than the sum of their parts. The components are easily available, and all carry open licenses with comparatively few restrictions. Another crucial advantage is that the source code is readily available, which gives developers maximum flexibility. This is where people like us at GrayMatter come into play at a very strategic level, with unique deployment methodologies and deep-dive analytics and data science expertise, irrespective of tools.

Whereas LAMP names specific individual components, the big data stack Daley envisions offers multiple options at each layer, depending on the application you want to build.

Layers

The foundation, or the stack’s bottom layer, is the data layer. It accommodates NoSQL databases, Hadoop distributions, and analytical and relational databases such as Teradata, Vertica, Greenplum and SAS.

According to Daley, any of the above technologies can be used for big data applications. NoSQL and Hadoop are more scalable and open and carry lower operational costs, but they cannot do everything. That is where products like Vertica and Greenplum come in: they can power extremely fast analytical applications.

The integration layer sits above the bottom layer; data preparation, cleansing, transformation and integration happen here. Next comes the analytics layer, where visualization and analytics take place. The top layer, prescriptive or predictive analytics, is where companies begin to realize the true value of big data: this is where opportunities and risks are identified.
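The flow through these layers can be sketched in miniature. The snippet below is purely illustrative: it uses plain Python lists and dicts in place of the real platforms (a NoSQL store or Hadoop at the data layer, an analytics engine above it), and all record fields, function names and the growth factor are hypothetical.

```python
# Data layer: raw records as they might land in a NoSQL store or HDFS.
raw_events = [
    {"user": "a", "spend": "10.5"},
    {"user": "b", "spend": None},   # dirty record, to be dropped
    {"user": "a", "spend": "4.5"},
]

def integrate(records):
    """Integration layer: cleanse and transform (drop bad rows, cast types)."""
    return [
        {"user": r["user"], "spend": float(r["spend"])}
        for r in records
        if r["spend"] is not None
    ]

def analyze(records):
    """Analytics layer: aggregate spend per user, ready for visualization."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["spend"]
    return totals

def predict(totals, growth=1.1):
    """Predictive layer: naive projection of next-period spend."""
    return {user: round(total * growth, 2) for user, total in totals.items()}

clean = integrate(raw_events)
totals = analyze(clean)
forecast = predict(totals)
```

Each function stands in for an entire layer of the stack; in practice each stage would be a separate product chosen from the options Daley describes, connected by the integration tooling between them.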