Tuesday, August 21, 2012
08:30 AM - 11:45 AM
Level: Technical - Introductory
This tutorial demonstrates how the Hadoop framework can be used to solve current big data problems in a fast, scalable, and cost-effective way. It is designed for technical personnel and managers who are evaluating Hadoop as a solution to their data scalability problems.
The tutorial will cover Hadoop basics and discuss best practices for using Hadoop in enterprises that deal with large data sets. We will look into the data problems you are facing today and potential use cases for Hadoop in your infrastructure. The presentation covers the Hadoop architecture and its main components: the Hadoop Distributed File System (HDFS) and MapReduce. We will present case studies of how other enterprises are using Hadoop and examine what it takes to get Hadoop up and running in your environment.
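The MapReduce component mentioned above is often introduced with the classic word-count example. The following is a minimal, single-machine Python sketch of the map, shuffle, and reduce phases; the function names are illustrative and do not correspond to the Hadoop API, which the tutorial itself would cover.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the input."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Two toy "input splits" standing in for blocks stored in HDFS.
docs = ["big data problems", "big data at scale"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])   # 2
print(counts["data"])  # 2
```

In a real Hadoop job the map and reduce functions run in parallel across the cluster, and the framework handles the shuffle, data locality, and failure recovery; this sketch only shows the data flow.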
Serge Blazhievsky is an experienced developer and architect with a rich background in C++/Java and distributed systems. His latest venture, LiveOps, Inc., uses a Hadoop infrastructure for all of its reporting needs; he designed the LiveOps Hadoop framework in its entirety to satisfy very strict performance and availability requirements. Serge's prior ventures include Attributor, Inc., where he designed the Hadoop infrastructure used for Internet crawling and web-page analysis. He holds a Master's degree in Computer Engineering from Santa Clara University, CA, located in the heart of Silicon Valley. Serge is a regular attendee of and contributor to various Hadoop conferences, including the Hadoop User Group at Yahoo, where Hadoop was created.