# SftpFile Source Connector

## Support Those Engines

- Spark
- Flink
- SeaTunnel Zeta

## Key Features

## Description

Read data from an sftp file server.

## Supported DataSource Info

In order to use the SftpFile connector, the following dependencies are required. They can be downloaded via install-plugin.sh or from the Maven central repository.

| Datasource | Supported Versions | Dependency |
|------------|--------------------|------------|
| SftpFile   | universal          | Download   |

If you use Spark/Flink, you must ensure your Spark/Flink cluster is already integrated with Hadoop before using this connector. The tested Hadoop version is 2.x.

If you use SeaTunnel Engine, the Hadoop jar is integrated automatically when you download and install SeaTunnel Engine. You can check the jar packages under ${SEATUNNEL_HOME}/lib to confirm this.

To support more file types, we made some trade-offs: the connector uses the HDFS protocol internally to access Sftp, so it requires some Hadoop dependencies. It only supports Hadoop version 2.9.X+.

## Data Type Mapping

The file formats do not have a specific type list; you can indicate which SeaTunnel data type the corresponding data should be converted to by specifying the schema in the config.

| SeaTunnel Data Type |
|---------------------|
| STRING              |
| SHORT               |
| INT                 |
| BIGINT              |
| BOOLEAN             |
| DOUBLE              |
| DECIMAL             |
| FLOAT               |
| DATE                |
| TIME                |
| TIMESTAMP           |
| BYTES               |
| ARRAY               |
| MAP                 |
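
For example, a minimal `schema` sketch that maps fields onto several of the types above; the field names are hypothetical, chosen purely for illustration:

```hocon
schema {
  fields {
    order_id = bigint           # hypothetical field mapped to BIGINT
    price = "decimal(10, 2)"    # DECIMAL with precision and scale
    created_at = timestamp      # string parsed using datetime_format
    tags = "array<string>"      # complex types are written as quoted type strings
  }
}
```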

## Source Options

| Name                      | Type    | Required | Default value       | Description |
|---------------------------|---------|----------|---------------------|-------------|
| host                      | String  | Yes      | -                   | The target sftp host. |
| port                      | Int     | Yes      | -                   | The target sftp port. |
| user                      | String  | Yes      | -                   | The target sftp username. |
| password                  | String  | Yes      | -                   | The target sftp password. |
| path                      | String  | Yes      | -                   | The source file path. |
| file_format_type          | String  | Yes      | -                   | Please check #file_format_type below. |
| file_filter_pattern       | String  | No       | -                   | Filter pattern, used for filtering files (see the sketch after this table). |
| delimiter/field_delimiter | String  | No       | \001                | The `delimiter` parameter will be deprecated after version 2.3.5; please use `field_delimiter` instead.<br/>Field delimiter, used to tell the connector how to slice fields when reading text files.<br/>Default `\001`, the same as Hive's default delimiter. |
| parse_partition_from_path | Boolean | No       | true                | Control whether to parse the partition keys and values from the file path.<br/>For example, if you read a file from path `oss://hadoop-cluster/tmp/seatunnel/parquet/name=tyrantlucifer/age=26`, every record read from the file will have the two fields `name = tyrantlucifer` and `age = 26` added (see the sketch after this table).<br/>Tips: Do not define partition fields in the schema option. |
| date_format               | String  | No       | yyyy-MM-dd          | Date type format, used to tell the connector how to convert a string to a date. Supported formats: `yyyy-MM-dd` `yyyy.MM.dd` `yyyy/MM/dd`.<br/>Default `yyyy-MM-dd`. |
| datetime_format           | String  | No       | yyyy-MM-dd HH:mm:ss | Datetime type format, used to tell the connector how to convert a string to a datetime. Supported formats: `yyyy-MM-dd HH:mm:ss` `yyyy.MM.dd HH:mm:ss` `yyyy/MM/dd HH:mm:ss` `yyyyMMddHHmmss`.<br/>Default `yyyy-MM-dd HH:mm:ss`. |
| time_format               | String  | No       | HH:mm:ss            | Time type format, used to tell the connector how to convert a string to a time. Supported formats: `HH:mm:ss` `HH:mm:ss.SSS`.<br/>Default `HH:mm:ss`. |
| skip_header_row_number    | Long    | No       | 0                   | Skip the first few lines, but only for txt and csv files.<br/>For example, set `skip_header_row_number = 2`; SeaTunnel will then skip the first 2 lines of each source file. |
| read_columns              | List    | No       | -                   | The read column list of the data source; users can use it to implement field projection. |
| sheet_name                | String  | No       | -                   | Read the sheet of the workbook. Only used when file_format_type is excel. |
| xml_row_tag               | String  | No       | -                   | Specifies the tag name of the data rows within the XML file. Only used when file_format_type is xml. |
| xml_use_attr_format       | Boolean | No       | -                   | Specifies whether to process data using the tag attribute format. Only used when file_format_type is xml. |
| schema                    | Config  | No       | -                   | Please check #schema below. |
| compress_codec            | String  | No       | None                | The compress codec of files. Supported details:<br/>- txt: `lzo` `None`<br/>- json: `lzo` `None`<br/>- csv: `lzo` `None`<br/>- orc: `lzo` `snappy` `lz4` `zlib` `None`<br/>- parquet: `lzo` `snappy` `lz4` `gzip` `brotli` `zstd` `None`<br/>Tips: the excel type does not support any compression format. |
| encoding                  | String  | No       | UTF-8               | The encoding of the file to read. Only used when file_format_type is json/text/csv/xml; please check #encoding below. |
| common-options            |         | No       | -                   | Source plugin common parameters, please refer to Source Common Options for details. |
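
As a quick sketch of the file-selection options above, the following hypothetical source block reads only `.csv` files and parses partition fields from the path; the host, credentials, and path values are placeholders, not values from this document:

```hocon
source {
  SftpFile {
    host = "sftp.example.com"          # placeholder host
    port = 22
    user = "seatunnel"
    password = "pass"
    # assume a partitioned layout such as /tmp/data/name=tyrantlucifer/age=26/a.csv
    path = "/tmp/data"
    file_format_type = "csv"
    file_filter_pattern = ".*\\.csv"   # only pick up files ending in .csv
    parse_partition_from_path = true   # adds `name` and `age` fields from the path
  }
}
```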

### file_format_type [string]

File type. The following file types are supported: `text` `csv` `parquet` `orc` `json` `excel` `xml`.

If you assign the file type to `json`, you should also assign the `schema` option to tell the connector how to parse the data into the rows you want. For example, if the upstream data is:

```json
{"code": 200, "data": "get success", "success": true}
```

You can also save multiple pieces of data in one file and split them by newline:

```json
{"code": 200, "data": "get success", "success": true}
{"code": 300, "data": "get failed", "success": false}
```

then you should assign the schema as follows:

```hocon
schema {
  fields {
    code = int
    data = string
    success = boolean
  }
}
```

The connector will generate data as follows:

| code | data        | success |
|------|-------------|---------|
| 200  | get success | true    |

If you assign the file type to `parquet` or `orc`, the `schema` option is not required; the connector can find the schema of the upstream data automatically. If you assign the file type to `text` or `csv`, you can choose whether to specify the schema information. For example, if the upstream data is:

```
tyrantlucifer#26#male
```

If you do not assign the data schema, the connector will treat the upstream data as follows:

| content               |
|-----------------------|
| tyrantlucifer#26#male |

If you assign the data schema, you should also assign the `field_delimiter` option (except for the `csv` file type). You should assign the schema and delimiter as follows:

```hocon
field_delimiter = "#"
schema {
  fields {
    name = string
    age = int
    gender = string
  }
}
```

The connector will generate data as follows:

| name          | age | gender |
|---------------|-----|--------|
| tyrantlucifer | 26  | male   |

### compress_codec [string]

The compress codec of files. Supported details:

- txt: `lzo` `none`
- json: `lzo` `none`
- csv: `lzo` `none`
- orc/parquet: automatically recognizes the compression type, no additional settings required.
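
For example, to read lzo-compressed text files, set the codec inside the `SftpFile` source block (a sketch; `lzo` is one of the supported values listed above):

```hocon
file_format_type = "text"
compress_codec = "lzo"   # matches the lzo-compressed files on the server
```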

### encoding [string]

Only used when file_format_type is json, text, csv or xml. The encoding of the file to read. This param will be parsed by `Charset.forName(encoding)`.
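
For instance, if the files were written with a non-default charset (GBK here, purely as an example value), set the option inside the `SftpFile` source block:

```hocon
encoding = "gbk"   # resolved via Charset.forName("gbk")
```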

### schema [config]

#### fields [Config]

The schema of the upstream data.

## How to Create an Sftp Data Synchronization Job

The following example demonstrates how to create a data synchronization job that reads data from sftp and prints it on the local client:

```hocon
# Set the basic configuration of the task to be performed
env {
  parallelism = 1
  job.mode = "BATCH"
}

# Create a source to connect to sftp
source {
  SftpFile {
    host = "sftp"
    port = 22
    user = seatunnel
    password = pass
    path = "tmp/seatunnel/read/json"
    file_format_type = "json"
    result_table_name = "sftp"
    schema = {
      fields {
        c_map = "map<string, string>"
        c_array = "array<int>"
        c_string = string
        c_boolean = boolean
        c_tinyint = tinyint
        c_smallint = smallint
        c_int = int
        c_bigint = bigint
        c_float = float
        c_double = double
        c_bytes = bytes
        c_date = date
        c_decimal = "decimal(38, 18)"
        c_timestamp = timestamp
        c_row = {
          C_MAP = "map<string, string>"
          C_ARRAY = "array<int>"
          C_STRING = string
          C_BOOLEAN = boolean
          C_TINYINT = tinyint
          C_SMALLINT = smallint
          C_INT = int
          C_BIGINT = bigint
          C_FLOAT = float
          C_DOUBLE = double
          C_BYTES = bytes
          C_DATE = date
          C_DECIMAL = "decimal(38, 18)"
          C_TIMESTAMP = timestamp
        }
      }
    }
  }
}

# Console printing of the read sftp data
sink {
  Console {
    parallelism = 1
  }
}
```
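
By analogy, here is a sketch of a job that reads delimited text files instead of json, combining the `field_delimiter`, `skip_header_row_number`, and `schema` options described above; the host, credentials, path, and field names are assumptions for illustration:

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  SftpFile {
    host = "sftp.example.com"      # placeholder host
    port = 22
    user = "seatunnel"
    password = "pass"
    path = "/tmp/seatunnel/read/text"
    file_format_type = "text"
    field_delimiter = "#"          # required when a schema is set for text files
    skip_header_row_number = 1     # skip one header line (txt/csv only)
    schema = {
      fields {
        name = string
        age = int
        gender = string
      }
    }
  }
}

sink {
  Console {}
}
```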