There are rules and conventions to follow in any framework; only by following and mastering them can we use the framework with ease and achieve twice the result with half the effort. When we develop a Flink job, we are in fact using the APIs provided by Flink to write an executable program (one that must have a main() method) in the way Flink requires: the program reads data from various Connectors, applies a series of operator transformations, and finally sinks the data to the target storage through a Connector.

We call this method of step-by-step programming according to certain agreed rules the “programming paradigm”. In this chapter, we will talk about the “programming paradigm” of StreamPark and the development considerations.

Let’s start from the following aspects:

  • Architecture
  • Programming paradigm
  • Runtime Context
  • Life Cycle
  • Directory Structure
  • Packaged Deployment

Architecture

Programming Paradigm - Figure 1

Programming paradigm

streampark-core is positioned as a development-time framework and rapid-development scaffolding, created specifically to simplify Flink development. Developers use this module during the development phase. Let’s take a look at what the programming paradigms for DataStream and Flink SQL look like with StreamPark, and what the specifications and requirements are.

DataStream

StreamPark provides both Scala and Java APIs for developing DataStream programs; the skeleton code is as follows.

Scala

  import org.apache.streampark.flink.core.scala.FlinkStreaming
  import org.apache.flink.api.scala._

  object MyFlinkApp extends FlinkStreaming {

    override def handle(): Unit = {
      ...
    }

  }

Java

  public class MyFlinkJavaApp {

      public static void main(String[] args) {
          StreamEnvConfig javaConfig = new StreamEnvConfig(args, (environment, parameterTool) -> {
              // The user can set parameters for the environment...
              System.out.println("environment argument set...");
          });
          StreamingContext context = new StreamingContext(javaConfig);
          ....
          context.start();
      }
  }

To develop with the Scala API, the program must extend FlinkStreaming. The developer is then required to implement the handle() method, which is the entry point for user code; the StreamingContext is prepared by the framework for the developer to use.

Development with the Java API cannot omit the main() method because of the limitations of the language itself, so the job is a standard main() function, and the user needs to create the StreamingContext manually. StreamingContext is a very important class and will be introduced later.

The above lines of Scala and Java code form the basic skeleton needed to develop a DataStream program with StreamPark. Starting from this skeleton, the Java API additionally requires the developer to start the job manually by calling context.start().
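
To make the skeleton concrete, here is a minimal Scala sketch of what a handle() body might look like; the socket source and the word-count logic are purely illustrative and not something StreamPark prescribes:

  import org.apache.flink.api.scala._
  import org.apache.streampark.flink.core.scala.FlinkStreaming

  object WordCountApp extends FlinkStreaming {

    override def handle(): Unit = {
      // `context` is the StreamingContext prepared by the framework;
      // it extends StreamExecutionEnvironment, so sources can be created on it directly.
      context
        .socketTextStream("localhost", 9999) // hypothetical source, for illustration only
        .flatMap(_.toLowerCase.split("\\W+"))
        .filter(_.nonEmpty)
        .map((_, 1))
        .keyBy(_._1)
        .sum(1)
        .print()
      // no execute() call here: the framework starts the job in its own lifecycle
    }
  }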

The TableEnvironment is the entry point of Table & SQL programs and is used to create their contextual execution environment. Its main functions include: connecting to external systems, registering and retrieving tables and metadata, executing SQL statements, and providing more detailed configuration options.

The Flink community has been promoting the batch-processing capability of DataStream and unifying stream and batch processing; in Flink 1.12 stream-batch unification was truly achieved, and many historical APIs such as the DataSet API and BatchTableEnvironment were deprecated and retired from the stage of history, leaving TableEnvironment and StreamTableEnvironment as the entry environments.

StreamPark provides a more convenient API for the development of TableEnvironment and StreamTableEnvironment environments.

TableEnvironment

To develop Table & SQL jobs, TableEnvironment is the recommended entry class in Flink and supports both the Java API and the Scala API. The following code demonstrates how to develop a TableEnvironment-type job in StreamPark.

Scala

  import org.apache.streampark.flink.core.scala.FlinkTable

  object TableApp extends FlinkTable {

    override def handle(): Unit = {
      ...
    }

  }

Java

  import org.apache.streampark.flink.core.scala.TableContext;
  import org.apache.streampark.flink.core.scala.util.TableEnvConfig;

  public class JavaTableApp {

      public static void main(String[] args) {
          TableEnvConfig tableEnvConfig = new TableEnvConfig(args, null);
          TableContext context = new TableContext(tableEnvConfig);
          ...
          context.start("Flink SQL Job");
      }
  }

The above lines of Scala and Java code are the essential skeleton for developing a TableEnvironment job with StreamPark. The Scala API must extend FlinkTable; the Java API must construct a TableContext manually and start the job manually by calling context.start().
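
As an illustration, the handle() body of a FlinkTable job typically just registers tables and runs SQL through the context, which exposes the TableEnvironment API. A minimal sketch; the table names, schemas, and the datagen/print connectors are placeholders:

  import org.apache.streampark.flink.core.scala.FlinkTable

  object TableSqlApp extends FlinkTable {

    override def handle(): Unit = {
      // register a demo source table
      context.executeSql(
        """
          |CREATE TABLE orders (
          |  order_id STRING,
          |  amount   DOUBLE
          |) WITH ('connector' = 'datagen')
          |""".stripMargin)

      // register a print sink and copy the data over
      context.executeSql(
        """
          |CREATE TABLE order_print (
          |  order_id STRING,
          |  amount   DOUBLE
          |) WITH ('connector' = 'print')
          |""".stripMargin)

      context.executeSql("INSERT INTO order_print SELECT order_id, amount FROM orders")
    }
  }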

StreamTableEnvironment

StreamTableEnvironment is used in stream-computing scenarios, where the object of computation is a DataStream. Compared with TableEnvironment, StreamTableEnvironment provides interfaces for converting between DataStream and Table. If your application uses the DataStream API in addition to the Table API & SQL, you need the StreamTableEnvironment. The following code demonstrates how to develop a StreamTableEnvironment-type job in StreamPark.

Scala

  package org.apache.streampark.test.tablesql

  import org.apache.streampark.flink.core.scala.FlinkStreamTable

  object StreamTableApp extends FlinkStreamTable {

    override def handle(): Unit = {
      ...
    }

  }

Java

  import org.apache.streampark.flink.core.scala.StreamTableContext;
  import org.apache.streampark.flink.core.scala.util.StreamTableEnvConfig;

  public class JavaStreamTableApp {

      public static void main(String[] args) {
          StreamTableEnvConfig javaConfig = new StreamTableEnvConfig(args, null, null);
          StreamTableContext context = new StreamTableContext(javaConfig);
          ...
          context.start("Flink SQL Job");
      }
  }

The above lines of Scala and Java code are the essential skeleton for developing a StreamTableEnvironment job with StreamPark. Starting from this skeleton, the Java API must construct the StreamTableContext manually and start the job manually by calling context.start().
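
The point of the StreamTableEnvironment type is being able to move between SQL and the DataStream API within one job. A minimal sketch (the table, schema, and connector are placeholders; toDataStream assumes Flink 1.13 or later, older versions would use toAppendStream):

  import org.apache.flink.api.scala._
  import org.apache.streampark.flink.core.scala.FlinkStreamTable

  object StreamTableSqlApp extends FlinkStreamTable {

    override def handle(): Unit = {
      // register a demo source table with SQL
      context.executeSql(
        """
          |CREATE TABLE clicks (
          |  user_name STRING,
          |  url       STRING
          |) WITH ('connector' = 'datagen')
          |""".stripMargin)

      // query it with SQL, then continue processing with the DataStream API
      val result = context.sqlQuery("SELECT user_name, url FROM clicks")
      context
        .toDataStream(result)      // Table -> DataStream[Row]
        .map(row => row.toString)
        .print()
    }
  }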

Runtime Context

StreamingContext, TableContext, and StreamTableContext are three very important objects in StreamPark. Next, let’s look at the definition and role of each of these three Contexts.

Programming Paradigm - Figure 2

StreamingContext

StreamingContext inherits from StreamExecutionEnvironment, adding ParameterTool on top of StreamExecutionEnvironment, which can be simply understood as:

StreamingContext = ParameterTool + StreamExecutionEnvironment

The specific definitions are as follows:

  class StreamingContext(val parameter: ParameterTool, private val environment: StreamExecutionEnvironment)
    extends StreamExecutionEnvironment(environment.getJavaEnv) {

    /**
     * for scala
     *
     * @param args
     */
    def this(args: (ParameterTool, StreamExecutionEnvironment)) = this(args._1, args._2)

    /**
     * for Java
     *
     * @param args
     */
    def this(args: StreamEnvConfig) = this(FlinkStreamingInitializer.initJavaStream(args))
    ...
  }

This object is very important and is used throughout the lifecycle of a DataStream job. StreamingContext itself inherits from StreamExecutionEnvironment, and the configuration file is fully merged into the StreamingContext, so it is very easy to get various parameters from it.
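
For example, because the parsed configuration is exposed as the parameter field (a ParameterTool), job code can read its own settings directly from the context. A small sketch; the key names are hypothetical and would come from your application.yaml:

  // inside handle() of a FlinkStreaming job
  val brokers: String = context.parameter.get("kafka.bootstrap.servers", "localhost:9092")
  val interval: Long  = context.parameter.getLong("checkpoint.interval", 30000L)
  println(s"brokers=$brokers, checkpoint interval=${interval}ms")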

In StreamPark, StreamingContext is also the entry class for writing DataStream jobs with the Java API. One of the constructors of StreamingContext is built specifically for the Java API and is defined as follows:

  /**
   * for Java
   * @param args
   */
  def this(args: StreamEnvConfig) = this(FlinkStreamingInitializer.initJavaStream(args))

From the above constructor you can see that to create StreamingContext, you need to pass in a StreamEnvConfig object. StreamEnvConfig is defined as follows:

  class StreamEnvConfig(val args: Array[String], val conf: StreamEnvConfigFunction)

In the constructor of StreamEnvConfig,

  • args is the start parameter and must be the args in the main method
  • conf is a Function of type StreamEnvConfigFunction

The definition of StreamEnvConfigFunction is as follows.

  @FunctionalInterface
  public interface StreamEnvConfigFunction {
      /**
       * Used to initialize the StreamExecutionEnvironment; implement this function to customize the parameters to be set...
       *
       * @param environment
       * @param parameterTool
       */
      void configuration(StreamExecutionEnvironment environment, ParameterTool parameterTool);
  }

The purpose of this Function is to allow the developer to set additional parameters via a hook: the framework passes the parameter (a ParameterTool holding all parameters parsed from the configuration file) and the initialized StreamExecutionEnvironment object to the developer, so that more settings can be applied, e.g.:

  StreamEnvConfig javaConfig = new StreamEnvConfig(args, (environment, parameterTool) -> {
      System.out.println("environment argument set...");
      environment.getConfig().enableForceAvro();
  });
  StreamingContext context = new StreamingContext(javaConfig);

TableContext

TableContext inherits from TableEnvironment. On top of TableEnvironment, it adds ParameterTool, which is used to create the contextual execution environment for Table & SQL programs. It can be simply understood as :

TableContext = ParameterTool + TableEnvironment

The specific definitions are as follows:

  class TableContext(val parameter: ParameterTool,
                     private val tableEnv: TableEnvironment)
    extends TableEnvironment
      with FlinkTableTrait {

    /**
     * for scala
     *
     * @param args
     */
    def this(args: (ParameterTool, TableEnvironment)) = this(args._1, args._2)

    /**
     * for Java
     * @param args
     */
    def this(args: TableEnvConfig) = this(FlinkTableInitializer.initJavaTable(args))
    ...
  }

In StreamPark, TableContext is also the entry class for writing Table & SQL jobs of the TableEnvironment type with the Java API. One of the constructors of TableContext is built specifically for the Java API and is defined as follows:

  /**
   * for Java
   * @param args
   */
  def this(args: TableEnvConfig) = this(FlinkTableInitializer.initJavaTable(args))

From the above constructor you can see that to create a TableContext, you need to pass in a TableEnvConfig object. TableEnvConfig is defined as follows:

  class TableEnvConfig(val args: Array[String], val conf: TableEnvConfigFunction)

In the constructor method of TableEnvConfig,

  • args is the start parameter, and is the args in the main method.
  • conf is a Function of type TableEnvConfigFunction

The definition of TableEnvConfigFunction is as follows.

  @FunctionalInterface
  public interface TableEnvConfigFunction {
      /**
       * Used to initialize the TableEnvironment; implement this function to customize the parameters to be set...
       *
       * @param tableConfig
       * @param parameterTool
       */
      void configuration(TableConfig tableConfig, ParameterTool parameterTool);
  }

The purpose of this Function is to allow the developer to set additional parameters via a hook: the framework passes the parameter (a ParameterTool holding all parameters parsed from the configuration file) and the TableConfig of the initialized TableEnvironment to the developer, so that more settings can be applied, e.g.:

  TableEnvConfig config = new TableEnvConfig(args, (tableConfig, parameterTool) -> {
      tableConfig.setLocalTimeZone(ZoneId.of("Asia/Shanghai"));
  });
  TableContext context = new TableContext(config);
  ...

StreamTableContext

StreamTableContext inherits from StreamTableEnvironment and is used in stream-computing scenarios, where the object of computation is a DataStream. Compared with TableEnvironment, StreamTableEnvironment provides interfaces for converting between DataStream and Table. StreamTableContext adds ParameterTool on top of StreamTableEnvironment and also gives direct access to the StreamExecutionEnvironment API, so it can be simply understood as:

StreamTableContext = ParameterTool + StreamTableEnvironment + StreamExecutionEnvironment

The specific definitions are as follows:

  class StreamTableContext(val parameter: ParameterTool,
                           private val streamEnv: StreamExecutionEnvironment,
                           private val tableEnv: StreamTableEnvironment)
    extends StreamTableEnvironment
      with FlinkTableTrait {

    /**
     * Once the Table is converted to a DataStream,
     * the DataStream job must be executed using the execute method of the StreamExecutionEnvironment.
     */
    private[scala] var isConvertedToDataStream: Boolean = false

    /**
     * for scala
     *
     * @param args
     */
    def this(args: (ParameterTool, StreamExecutionEnvironment, StreamTableEnvironment)) =
      this(args._1, args._2, args._3)

    /**
     * for Java
     *
     * @param args
     */
    def this(args: StreamTableEnvConfig) = this(FlinkTableInitializer.initJavaStreamTable(args))
    ...
  }

In StreamPark, StreamTableContext is the entry class for writing Table & SQL jobs of the StreamTableEnvironment type with the Java API. One of the constructors of StreamTableContext is built specifically for the Java API and is defined as follows:

  /**
   * for Java
   *
   * @param args
   */
  def this(args: StreamTableEnvConfig) = this(FlinkTableInitializer.initJavaStreamTable(args))

From the above constructor you can see that to create StreamTableContext, you need to pass in a StreamTableEnvConfig object. StreamTableEnvConfig is defined as follows:

  class StreamTableEnvConfig(
      val args: Array[String],
      val streamConfig: StreamEnvConfigFunction,
      val tableConfig: TableEnvConfigFunction
  )

The constructor of StreamTableEnvConfig has three parameters:

  • args is the start parameter, and must be args in the main method
  • streamConfig is a Function of type StreamEnvConfigFunction.
  • tableConfig is a Function of type TableEnvConfigFunction

The definitions of StreamEnvConfigFunction and TableEnvConfigFunction have been described above and will not be repeated here.

The purpose of this Function is again to allow the developer to set additional parameters by means of hooks. Unlike the configs above, this one provides the opportunity to configure both the StreamExecutionEnvironment and the TableEnvironment: the framework passes the parameter, the initialized StreamExecutionEnvironment, and the TableConfig of the TableEnvironment to the developer for additional settings, e.g.:

  StreamTableEnvConfig javaConfig = new StreamTableEnvConfig(args, (environment, parameterTool) -> {
      environment.getConfig().enableForceAvro();
  }, (tableConfig, parameterTool) -> {
      tableConfig.setLocalTimeZone(ZoneId.of("Asia/Shanghai"));
  });
  StreamTableContext context = new StreamTableContext(javaConfig);
  ...

You can use the StreamExecutionEnvironment API directly on the StreamTableContext; methods prefixed with $ belong to the StreamExecutionEnvironment API.

Programming Paradigm - Figure 3

Life Cycle

The lifecycle concept is currently only available for the Scala API. The lifecycle explicitly defines the whole process of running a job; any program that extends FlinkStreaming, FlinkTable, or FlinkStreamTable is executed according to this lifecycle. The core methods of the lifecycle are as follows.

  final def main(args: Array[String]): Unit = {
    init(args)
    ready()
    handle()
    jobExecutionResult = context.start()
    destroy()
  }

  private[this] def init(args: Array[String]): Unit = {
    SystemPropertyUtils.setAppHome(KEY_APP_HOME, classOf[FlinkStreaming])
    context = new StreamingContext(FlinkStreamingInitializer.initStream(args, config))
  }

  /**
   * Users can override the following methods...
   */
  def ready(): Unit = {}

  def config(env: StreamExecutionEnvironment, parameter: ParameterTool): Unit = {}

  def handle(): Unit

  def destroy(): Unit = {}

The life cycle is as follows.

  • init: the stage where the configuration file is parsed and initialization takes place
  • config: the stage where the developer sets parameters manually
  • ready: the stage for executing custom actions before the job starts
  • handle: the stage where the developer's code is invoked
  • start: the stage where the program is started
  • destroy: the destruction stage

Life Cycle

Life Cycle - init

In the init phase, the framework automatically parses the incoming configuration file and initializes the StreamExecutionEnvironment according to the various parameters defined inside. This step is automatically executed by the framework and does not require developer involvement.

Life Cycle - config

The purpose of the config phase is to allow the developer to set additional parameters (beyond the agreed configuration file) by means of hooks. In the config phase, the framework passes the parameter (all parameters parsed from the configuration file in the init phase) and the StreamExecutionEnvironment object initialized in the init phase to the developer, so that more parameters can be configured.

The config stage is carried out by the developer and is optional.

Life Cycle - ready

The ready stage is an entry point for the developer to perform other actions after the parameters have been set; it runs after initialization is complete and before the program starts.

The ready stage is carried out by the developer and is optional.

Life Cycle - handle

The handle stage is where the code written by the developer is invoked; it is the entry point of the developer's code and the most important stage. The handle method must be implemented by the developer.

The handle stage is carried out by the developer and is mandatory.

Life Cycle - start

The start phase, which starts the task, is executed automatically by the framework.

Life Cycle - destroy

The destroy phase is the last phase before the JVM exits after the program has finished running, and is generally used for clean-up work.

The destroy stage is carried out by the developer and is optional.
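
Putting the stages together, a Scala job can override the optional hooks alongside the mandatory handle(). Below is a minimal sketch based on the hook signatures shown in the lifecycle code above (assuming the Scala StreamExecutionEnvironment); the checkpoint key and the printed messages are only illustrative:

  import org.apache.flink.api.java.utils.ParameterTool
  import org.apache.flink.api.scala._
  import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
  import org.apache.streampark.flink.core.scala.FlinkStreaming

  object LifeCycleApp extends FlinkStreaming {

    // config: optional, set extra environment parameters beyond the configuration file
    override def config(env: StreamExecutionEnvironment, parameter: ParameterTool): Unit = {
      env.enableCheckpointing(parameter.getLong("checkpoint.interval", 30000L))
    }

    // ready: optional, custom actions after initialization and before start
    override def ready(): Unit = println("environment initialized, about to start")

    // handle: mandatory, the actual job logic
    override def handle(): Unit = {
      context.fromElements(1, 2, 3).map(_ * 2).print()
    }

    // destroy: optional, clean-up work before the JVM exits
    override def destroy(): Unit = println("job finished, resources released")
  }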

Directory Structure

The recommended project directory structure is as follows; please refer to the directory structure and configuration in streampark-flink-quickstart.

  .
  ├── assembly
  │   ├── bin
  │   │   ├── startup.sh        // launch script
  │   │   ├── setclasspath.sh   // script for Java environment variables (used internally by the framework, developers do not need to care)
  │   │   ├── shutdown.sh       // task stop script (not recommended)
  │   │   └── flink.sh          // script used internally by the framework at startup (developers do not need to care)
  │   ├── conf
  │   │   ├── test
  │   │   │   ├── application.yaml   // configuration file for the test stage
  │   │   │   └── sql.yaml           // flink sql
  │   │   └── prod
  │   │       ├── application.yaml   // configuration file for the production (prod) stage
  │   │       └── sql.yaml           // flink sql
  │   ├── logs   // logs directory
  │   └── temp
  ├── src
  │   └── main
  │       ├── java
  │       ├── resources
  │       └── scala
  ├── assembly.xml
  └── pom.xml

assembly.xml is the configuration file needed for the assembly packaging plugin, defined as follows:

  <assembly>
      <id>bin</id>
      <formats>
          <format>tar.gz</format>
      </formats>
      <fileSets>
          <fileSet>
              <directory>assembly/bin</directory>
              <outputDirectory>bin</outputDirectory>
              <fileMode>0755</fileMode>
          </fileSet>
          <fileSet>
              <directory>${project.build.directory}</directory>
              <outputDirectory>lib</outputDirectory>
              <fileMode>0755</fileMode>
              <includes>
                  <include>*.jar</include>
              </includes>
              <excludes>
                  <exclude>original-*.jar</exclude>
              </excludes>
          </fileSet>
          <fileSet>
              <directory>assembly/conf</directory>
              <outputDirectory>conf</outputDirectory>
              <fileMode>0755</fileMode>
          </fileSet>
          <fileSet>
              <directory>assembly/logs</directory>
              <outputDirectory>logs</outputDirectory>
              <fileMode>0755</fileMode>
          </fileSet>
          <fileSet>
              <directory>assembly/temp</directory>
              <outputDirectory>temp</outputDirectory>
              <fileMode>0755</fileMode>
          </fileSet>
      </fileSets>
  </assembly>

Packaged Deployment

streampark-flink-quickstart uses the recommended packaging mode: run mvn package directly to generate a standard StreamPark project package; after unpacking, the directory structure is as follows.

  streampark-flink-quickstart-1.0.0
  ├── bin
  │   ├── startup.sh        // launch script
  │   ├── setclasspath.sh   // script for Java environment variables (used internally, developers do not need to care)
  │   ├── shutdown.sh       // task stop script (not recommended)
  │   └── flink.sh          // script used internally at startup (developers do not need to care)
  ├── conf
  │   ├── application.yaml  // the project's configuration file
  │   └── sql.yaml          // flink sql file
  ├── lib
  │   └── streampark-flink-quickstart-1.0.0.jar   // the project's jar
  └── temp

Start command

The application.yaml and sql.yaml configuration files need to be defined before starting. If the task to be started is a DataStream task, simply pass the configuration file to startup.sh:

  bin/startup.sh --conf conf/application.yaml

If the task to be started is a Flink SQL task, pass both the configuration file and sql.yaml:

  bin/startup.sh --conf conf/application.yaml --sql conf/sql.yaml