from 15:40 to 16:20
Stream processing is an approach for building high-performance applications that analyze and act on real-time streaming data. Its benefits include, among others, faster processing of and reaction to complex real-time event streams, and the flexibility to adapt quickly to changing business and analytics needs. Big data, cloud, mobile and the Internet of Things are the major drivers for stream processing and streaming analytics.
This session discusses the technical concepts behind stream processing and how it relates to big data, mobile, cloud and the Internet of Things. Use cases such as predictive fault management and fraud detection are used to illustrate and compare alternative frameworks and products for stream processing and streaming analytics.
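To give a flavor of the kind of logic a stream processing engine evaluates continuously, here is a minimal, framework-free Python sketch of a sliding-window spike detector in the spirit of the fraud-detection use case; all names, window sizes and thresholds are illustrative assumptions, not taken from any of the frameworks discussed:

```python
from collections import deque

def detect_spikes(events, window_size=5, threshold=3.0):
    """Flag events whose value exceeds `threshold` times the average
    of the preceding sliding window (illustrative sketch only)."""
    window = deque(maxlen=window_size)  # bounded buffer of recent events
    alerts = []
    for event in events:
        if len(window) == window_size:
            avg = sum(window) / window_size
            if avg > 0 and event > threshold * avg:
                alerts.append(event)  # anomalous relative to recent history
        window.append(event)
    return alerts

# A steady stream of transaction amounts with one anomalous value:
stream = [10, 12, 11, 9, 10, 95, 11, 10]
print(detect_spikes(stream))  # → [95]
```

Real engines such as Storm, Flink or StreamBase express the same pattern declaratively over distributed, unbounded streams, with windowing, state management and fault tolerance handled by the framework.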
The audience will learn when to use open source frameworks such as Apache Storm, Apache Flink or Spark Streaming, and when to use commercial engines from software vendors such as IBM InfoSphere Streams or TIBCO StreamBase. Live demos will give the audience a concrete sense of how to use these frameworks and tools.
The session will also discuss how stream processing relates to the Apache Hadoop ecosystem (frameworks such as MapReduce, Hive, Pig or Impala) and to machine learning tools (such as R, Apache Spark's MLlib, H2O or SAS).