Intro to config file

In SeaTunnel, the most important thing is the config file, through which users can customize their own data synchronization requirements to maximize the potential of SeaTunnel. This section introduces how to configure the config file.

The main format of the config file is HOCON; for more details of this format you can refer to the HOCON-GUIDE. We also support the JSON format, but note that the name of a JSON config file must end with .json.
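For example, assuming the two files define the same job, both of the following invocations are valid (the file names here are only illustrative; the -c and -e options are the ones used later in this page):

```shell
# HOCON config file (the default format)
./bin/seatunnel.sh -c ./config/fake_to_clickhouse.conf -e local

# JSON config file -- the file name must end with .json
./bin/seatunnel.sh -c ./config/fake_to_clickhouse.json -e local
```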

Example

Before you read on, you can find config file examples here and in the config directory of the distribution package.

Config file structure

The Config file will be similar to the one below.

```hocon
env {
  job.mode = "BATCH"
}

source {
  FakeSource {
    result_table_name = "fake"
    row.num = 100
    schema = {
      fields {
        name = "string"
        age = "int"
        card = "int"
      }
    }
  }
}

transform {
  Filter {
    source_table_name = "fake"
    result_table_name = "fake1"
    fields = [name, card]
  }
}

sink {
  Clickhouse {
    host = "clickhouse:8123"
    database = "default"
    table = "seatunnel_console"
    fields = ["name", "card"]
    username = "default"
    password = ""
    source_table_name = "fake1"
  }
}
```

multi-line support

In hocon, multiline strings are supported, which allows you to include extended passages of text without worrying about newline characters or special formatting. This is achieved by enclosing the text within triple quotes """ . For example:

  1. var = """
  2. Apache SeaTunnel is a
  3. next-generation high-performance,
  4. distributed, massive data integration tool.
  5. """
  6. sql = """ select * from "table" """

```json
{
  "env": {
    "job.mode": "batch"
  },
  "source": [
    {
      "plugin_name": "FakeSource",
      "result_table_name": "fake",
      "row.num": 100,
      "schema": {
        "fields": {
          "name": "string",
          "age": "int",
          "card": "int"
        }
      }
    }
  ],
  "transform": [
    {
      "plugin_name": "Filter",
      "source_table_name": "fake",
      "result_table_name": "fake1",
      "fields": ["name", "card"]
    }
  ],
  "sink": [
    {
      "plugin_name": "Clickhouse",
      "host": "clickhouse:8123",
      "database": "default",
      "table": "seatunnel_console",
      "fields": ["name", "card"],
      "username": "default",
      "password": "",
      "source_table_name": "fake1"
    }
  ]
}
```

As you can see, the Config file contains several sections: env, source, transform, sink. Different modules have different functions. After you understand these modules, you will understand how SeaTunnel works.

env

Used to add engine-specific optional parameters. No matter which engine you use (Spark or Flink), the corresponding optional parameters should be filled in here.

Note that we have separated the parameters by engine; the common parameters can still be configured as before. For the Flink and Spark engines, the specific configuration rules of their parameters can be found in JobEnvConfig.
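As a minimal sketch, an env block might look like the following (all parameters shown here also appear in the examples later on this page; the job name is illustrative, and JobEnvConfig lists the full set of options):

```hocon
env {
  # common parameters, independent of the engine
  job.mode = "BATCH"
  job.name = "SeaTunnel_Job"
  parallelism = 2
}
```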

source

source is used to define where SeaTunnel fetches data from; the fetched data is then passed to the next step. Multiple sources can be defined at the same time. The currently supported sources are listed in Source of SeaTunnel. Each source has its own specific parameters that define how to fetch data, and SeaTunnel also extracts the parameters that every source uses, such as the result_table_name parameter, which names the data generated by the current source so that it can be referenced by subsequent modules.
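For example, a minimal FakeSource block (taken from the structure example above, with the result_table_name parameter highlighted by a comment) looks like this:

```hocon
source {
  FakeSource {
    # the name under which this source's data is registered for later modules
    result_table_name = "fake"
    row.num = 100
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}
```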

transform

When we have the data source, we may need to further process the data, so we have the transform module. Note the word 'may': the transform can also be omitted entirely, going directly from source to sink, like below.

```hocon
env {
  job.mode = "BATCH"
}

source {
  FakeSource {
    result_table_name = "fake"
    row.num = 100
    schema = {
      fields {
        name = "string"
        age = "int"
        card = "int"
      }
    }
  }
}

sink {
  Clickhouse {
    host = "clickhouse:8123"
    database = "default"
    table = "seatunnel_console"
    fields = ["name", "age", "card"]
    username = "default"
    password = ""
    source_table_name = "fake"
  }
}
```

Like source, transform has specific parameters that belong to each module. The currently supported transforms are listed in Transform V2 of SeaTunnel.

sink

Our purpose with SeaTunnel is to synchronize data from one place to another, so it is critical to define how and where data is written. With the sink module provided by SeaTunnel, you can complete this operation quickly and efficiently. Sink and source are very similar, but the difference is reading and writing. So go check out our supported sinks.
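As a minimal sketch (using the Console sink that also appears in the variable-substitution example below), a sink block can be as small as:

```hocon
sink {
  Console {
    # read the data registered by a previous source or transform
    source_table_name = "fake"
  }
}
```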

Other

When multiple sources and multiple sinks are defined, how do you know which data each sink reads, and which data each transform reads? We use two key configurations: result_table_name and source_table_name. Each source module can be configured with a result_table_name, which names the data it produces; other transform and sink modules can then use source_table_name to reference that name, indicating which data they want to read and process. A transform, as an intermediate processing module, can use both result_table_name and source_table_name at the same time. You may notice that not every module in the example configs above sets these two parameters. That is because SeaTunnel has a default convention: if these parameters are not configured, the data generated by the last module of the previous node is used. This is much more convenient when there is only one source.
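As a sketch of this wiring (reusing the FakeSource, Filter, Console and Clickhouse modules from the examples above; the table names are only illustrative), one source can feed two sinks, one directly and one through a transform:

```hocon
env {
  job.mode = "BATCH"
}

source {
  FakeSource {
    result_table_name = "fake"    # this source publishes its data as "fake"
    row.num = 100
    schema = {
      fields {
        name = "string"
        card = "int"
      }
    }
  }
}

transform {
  Filter {
    source_table_name = "fake"    # reads the data named "fake"
    result_table_name = "fake1"   # publishes the filtered data as "fake1"
    fields = [name]
  }
}

sink {
  Console {
    source_table_name = "fake"    # this sink consumes the raw source data
  }
  Clickhouse {
    host = "clickhouse:8123"
    database = "default"
    table = "seatunnel_console"
    fields = ["name"]
    username = "default"
    password = ""
    source_table_name = "fake1"   # this sink consumes the transformed data
  }
}
```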

Config variable substitution

In the config file we can define some variables and replace them at run time. This is only supported for hocon format files.

```hocon
env {
  job.mode = "BATCH"
  job.name = ${jobName}
  parallelism = 2
}

source {
  FakeSource {
    result_table_name = ${resName}
    row.num = ${rowNum}
    string.template = ${strTemplate}
    int.template = [20, 21]
    schema = {
      fields {
        name = ${nameType}
        age = "int"
      }
    }
  }
}

transform {
  sql {
    source_table_name = "fake"
    result_table_name = "sql"
    query = "select * from "${resName}" where name = '"${nameVal}"' "
  }
}

sink {
  Console {
    source_table_name = "sql"
    username = ${username}
    password = ${password}
    blankSpace = ${blankSpace}
  }
}
```

In the above config, we define some variables, like ${rowNum}, ${resName}. We can replace those parameters with this shell command:

```shell
./bin/seatunnel.sh -c <this_config_file> \
  -i jobName='st var job' \
  -i resName=fake \
  -i rowNum=10 \
  -i strTemplate=['abc','d~f','h i'] \
  -i nameType=string \
  -i nameVal=abc \
  -i username=seatunnel=2.3.1 \
  -i password='$a^b%c.d~e0*9(' \
  -i blankSpace='2023-12-26 11:30:00' \
  -e local
```

Then the final submitted config is:

```hocon
env {
  job.mode = "BATCH"
  job.name = "st var job"
  parallelism = 2
}

source {
  FakeSource {
    result_table_name = "fake"
    row.num = 10
    string.template = ["abc","d~f","h i"]
    int.template = [20, 21]
    schema = {
      fields {
        name = string
        age = "int"
      }
    }
  }
}

transform {
  sql {
    source_table_name = "fake"
    result_table_name = "sql"
    query = "select * from fake where name = 'abc' "
  }
}

sink {
  Console {
    source_table_name = "sql"
    username = "seatunnel=2.3.1"
    password = "$a^b%c.d~e0*9("
    blankSpace = "2023-12-26 11:30:00"
  }
}
```

Some Notes:

  • Quote the value with `'` if it contains a space or a special character (like `(`).
  • If the replacement variable appears inside `"` or `'`, like resName and nameVal above, you need to add `"` around it.

What’s More

If you want to know more details about this configuration format, please see HOCON.