
Apache Flink single node installation


What is Apache Flink?

Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams.

Flink includes several APIs for creating applications that use the Flink engine:

>DataSet API for static data embedded in Java, Scala, and Python,
>DataStream API for unbounded streams embedded in Java and Scala, and
>Table API with a SQL-like expression language embedded in Java and Scala.

Flink also bundles libraries for domain-specific use cases:

>Machine Learning library, and
>Gelly, a graph processing API and library.

You can easily integrate Flink with other well-known open-source systems, both for data input and output and for deployment.

(Figure: the Apache Flink software stack)

For more information, see the Apache Flink site.


Single-node installation

Step 1: Download Apache Flink from the Apache site.
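A sketch of the download from the command line; the version number and Scala build in the file name are examples, so substitute the current release listed on the Apache download page:

```shell
# Fetch a Flink binary release (version shown is illustrative).
wget https://archive.apache.org/dist/flink/flink-1.9.1/flink-1.9.1-bin-scala_2.11.tgz
```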

Step 2: Extract the downloaded archive.

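The extraction step, sketched below; the archive and directory names correspond to whichever release you downloaded in Step 1:

```shell
# Unpack the archive and enter the resulting directory
# (file names are illustrative and depend on your release).
tar -xzf flink-1.9.1-bin-scala_2.11.tgz
cd flink-1.9.1
```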

Step 3: Change into the extracted Flink directory and start Flink in local mode (single node).

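Starting the local cluster can be sketched as follows; the script launches a JobManager and a TaskManager on the local machine (very old releases shipped a separate `start-local.sh` instead):

```shell
# From the Flink installation directory, start a single-node cluster.
./bin/start-cluster.sh
# Stop it later with:
#   ./bin/stop-cluster.sh
```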

Step 4: Run the WordCount example program that ships with Flink.

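A sketch of submitting the bundled batch WordCount job; the input path here is hypothetical (without `--input` the example falls back to built-in sample data), and the output path matches the one checked in Step 6:

```shell
# Submit the example jar to the running local cluster.
./bin/flink run ./examples/batch/WordCount.jar \
  --input /home/hadoop/wordcountin.txt \
  --output /home/hadoop/wordcountout.txt
```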

Step 5: The Apache Flink web UI runs at http://localhost:8081.

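With the cluster running, a quick way to confirm the dashboard is up without a browser is to probe the port (8081 is the default; it is configurable in `conf/flink-conf.yaml`):

```shell
# Should return the dashboard's HTML if the JobManager is up.
curl -s http://localhost:8081 | head
```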

Step 6: Check the output at the path you specified; in this case the output path is /home/hadoop/wordcountout.txt.
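The batch WordCount example writes one word/count pair per line to the `--output` path (exact formatting can vary between releases). As a rough sanity check, the same counts for a tiny input can be reproduced with standard coreutils; this is a sketch of the logic, not Flink itself:

```shell
# Tokenize, group, and count, mirroring what WordCount computes.
printf 'to be or not to be\n' | tr ' ' '\n' | sort | uniq -c
# To inspect the actual job output:
#   cat /home/hadoop/wordcountout.txt
```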