MongoDB Sink Connector

Supported Engines

Spark
Flink
SeaTunnel Zeta

Key features

Tips

1. If you want to use the connector to write CDC (change data capture) events, it is recommended to enable the `upsert-enable` configuration.

Description

The MongoDB Connector provides the ability to read and write data from and to MongoDB. This document describes how to set up the MongoDB connector to run data writers against MongoDB.

Supported DataSource Info

In order to use the MongoDB connector, the following dependencies are required. They can be downloaded via install-plugin.sh or from the Maven central repository.

| Datasource | Supported Versions | Dependency |
|------------|--------------------|------------|
| MongoDB    | universal          | Download   |

Data Type Mapping

The following table lists the field data type mapping from MongoDB BSON type to SeaTunnel data type.

| SeaTunnel Data Type | MongoDB BSON Type |
|---------------------|-------------------|
| STRING              | ObjectId          |
| STRING              | String            |
| BOOLEAN             | Boolean           |
| BINARY              | Binary            |
| INTEGER             | Int32             |
| TINYINT             | Int32             |
| SMALLINT            | Int32             |
| BIGINT              | Int64             |
| DOUBLE              | Double            |
| FLOAT               | Double            |
| DECIMAL             | Decimal128        |
| DATE                | Date              |
| TIMESTAMP           | Timestamp[Date]   |
| ROW                 | Object            |
| ARRAY               | Array             |

Tips

1. When using SeaTunnel to write Date and Timestamp types to MongoDB, both are stored as the BSON Date type, but with different precision: values from the SeaTunnel Date type have second-level precision, while values from the SeaTunnel Timestamp type have millisecond-level precision.
2. When using the DECIMAL type in SeaTunnel, be aware that the maximum range cannot exceed 34 digits, so the widest usable declaration is decimal(34, 18).
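As an illustration of both tips, a sink schema exercising these mappings might look like the following sketch (field names are hypothetical):

```hocon
schema = {
  fields {
    c_date      = date               # stored as BSON Date with second-level precision
    c_timestamp = timestamp          # stored as BSON Date with millisecond-level precision
    c_decimal   = "decimal(34, 18)"  # maps to Decimal128; at most 34 digits in total
  }
}
```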

Sink Options

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| uri | String | Yes | - | The MongoDB standard connection URI, e.g. `mongodb://user:password@hosts:27017/database?readPreference=secondary&slaveOk=true`. |
| database | String | Yes | - | The name of the MongoDB database to read or write. |
| collection | String | Yes | - | The name of the MongoDB collection to read or write. |
| schema | String | Yes | - | The mapping between MongoDB's BSON documents and the SeaTunnel data structure. |
| buffer-flush.max-rows | String | No | 1000 | Specifies the maximum number of buffered rows per batch request. |
| buffer-flush.interval | String | No | 30000 | Specifies the maximum interval between batch requests, in milliseconds. |
| retry.max | String | No | 3 | Specifies the maximum number of retries if writing records to the database fails. |
| retry.interval | Duration | No | 1000 | Specifies the retry interval if writing records to the database fails, in milliseconds. |
| upsert-enable | Boolean | No | false | Whether to write documents via upsert mode. |
| primary-key | List | No | - | The primary keys for upsert/update, specified as a list of field names, e.g. `["id","name"]`. |
| transaction | Boolean | No | false | Whether to use transactions in MongoSink (requires MongoDB 4.2+). |
| common-options |  | No | - | Sink plugin common parameters; please refer to Sink Common Options for details. |

Tips

1. The data flushing logic of the MongoDB Sink Connector is jointly controlled by three parameters: `buffer-flush.max-rows`, `buffer-flush.interval`, and `checkpoint.interval`. A flush is triggered when any one of these conditions is met.
2. The connector is compatible with the historical parameter `upsert-key`. If `upsert-key` is set, do not also set `primary-key`.
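As a sketch of how the three flush-related parameters interact (connection values and field names below are placeholders):

```hocon
env {
  # Completing a checkpoint also forces buffered rows to be flushed
  checkpoint.interval = 5000
}

sink {
  MongoDB {
    uri = "mongodb://user:password@127.0.0.1:27017"
    database = "test_db"
    collection = "users"
    buffer-flush.max-rows = 1000   # flush once 1000 rows are buffered...
    buffer-flush.interval = 30000  # ...or after 30 seconds, whichever comes first
    schema = {
      fields {
        _id = string
        status = string
      }
    }
  }
}
```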

How to Create a MongoDB Data Synchronization Job

The following example demonstrates how to create a data synchronization job that writes randomly generated data to a MongoDB database:

```hocon
# Set the basic configuration of the task to be performed
env {
  parallelism = 1
  job.mode = "BATCH"
  checkpoint.interval = 1000
}

source {
  FakeSource {
    row.num = 2
    bigint.min = 0
    bigint.max = 10000000
    split.num = 1
    split.read-interval = 300
    schema {
      fields {
        c_bigint = bigint
      }
    }
  }
}

sink {
  MongoDB {
    uri = "mongodb://user:password@127.0.0.1:27017"
    database = "test"
    collection = "test"
    schema = {
      fields {
        _id = string
        c_bigint = bigint
      }
    }
  }
}
```

Parameter Interpretation

MongoDB Database Connection URI Examples

Unauthenticated single node connection:

```
mongodb://127.0.0.1:27017/mydb
```

Replica set connection:

```
mongodb://127.0.0.1:27017/mydb?replicaSet=xxx
```

Authenticated replica set connection:

```
mongodb://admin:password@127.0.0.1:27017/mydb?replicaSet=xxx&authSource=admin
```

Multi-node replica set connection:

```
mongodb://127.0.0.1:27017,127.0.0.2:27017,127.0.0.3:27017/mydb?replicaSet=xxx
```

Sharded cluster connection:

```
mongodb://127.0.0.1:27017/mydb
```

Multiple mongos connections:

```
mongodb://192.168.0.1:27017,192.168.0.2:27017,192.168.0.3:27017/mydb
```

Note: The username and password in the URI must be URL-encoded before being concatenated into the connection string.
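For example, assuming a hypothetical password `p@ss:w0rd`, its reserved characters must be percent-encoded (`@` becomes `%40`, `:` becomes `%3A`) before being placed in the URI:

```hocon
sink {
  MongoDB {
    # Raw password "p@ss:w0rd" is percent-encoded as "p%40ss%3Aw0rd"
    uri = "mongodb://admin:p%40ss%3Aw0rd@127.0.0.1:27017/mydb?authSource=admin"
    database = "mydb"
    collection = "users"
    schema = {
      fields {
        _id = string
      }
    }
  }
}
```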

Buffer Flush

```hocon
sink {
  MongoDB {
    uri = "mongodb://user:password@127.0.0.1:27017"
    database = "test_db"
    collection = "users"
    buffer-flush.max-rows = 2000
    buffer-flush.interval = 1000
    schema = {
      fields {
        _id = string
        id = bigint
        status = string
      }
    }
  }
}
```

Why Not Transactional Writes?

Although MongoDB has fully supported multi-document transactions since version 4.2, that does not mean they should be used indiscriminately. Transactions bring locking, node coordination, additional overhead, and a performance impact. The guiding principle should be to avoid them where possible; with a rational system design, the need for transactions can largely be eliminated.
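If transactional writes are nevertheless required and the cluster meets the prerequisites (MongoDB 4.2+ on a replica set, or 4.2+ for sharded clusters), the `transaction` option can be enabled. A minimal sketch with placeholder connection values and field names:

```hocon
sink {
  MongoDB {
    uri = "mongodb://user:password@127.0.0.1:27017"
    database = "test_db"
    collection = "orders"
    transaction = true   # requires MongoDB 4.2+
    schema = {
      fields {
        _id = string
        amount = bigint
      }
    }
  }
}
```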

Idempotent Writes

By specifying a clear primary key and using the upsert method, exactly-once write semantics can be achieved.

If primary-key and upsert-enable are defined in the configuration, the MongoDB sink will use upsert semantics instead of regular INSERT statements. The fields declared in primary-key (or the historical upsert-key) are combined into the document's key, and upsert mode is used for writing to ensure idempotent writes. In the event of a failure, SeaTunnel jobs recover from the last successful checkpoint and reprocess data, which may cause messages to be processed more than once during recovery. It is highly recommended to use upsert mode, as it avoids violating database primary key constraints and generating duplicate data when records are reprocessed.

```hocon
sink {
  MongoDB {
    uri = "mongodb://user:password@127.0.0.1:27017"
    database = "test_db"
    collection = "users"
    upsert-enable = true
    primary-key = ["name","status"]
    schema = {
      fields {
        _id = string
        name = string
        status = string
      }
    }
  }
}
```

Changelog

2.2.0-beta

  • Add MongoDB Source Connector

2.3.1-release

  • [Feature] Refactor MongoDB source connector (4620)

Next Version

  • [Feature] MongoDB support CDC sink (4833)