Devlive Open Source Community. This search took 2.463 seconds and found 172 relevant results.
  • Configuration

    6115 2025-03-14 《Redisson 3.45.0》
    Using Redisson API Programmatic configuration is performed by the Config object instance. For example: Config config = new Config(); config.setTransportMode(Transpo...
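    A minimal sketch of the programmatic setup described in this snippet, assuming a single-node Redis reachable at redis://127.0.0.1:6379 (the address and TransportMode.NIO, Redisson's default transport, are illustrative assumptions):

        import org.redisson.Redisson;
        import org.redisson.api.RedissonClient;
        import org.redisson.config.Config;
        import org.redisson.config.TransportMode;

        public class RedissonConfigExample {
            public static void main(String[] args) {
                // Build the configuration programmatically via the Config object
                Config config = new Config();
                config.setTransportMode(TransportMode.NIO); // Redisson's default transport
                config.useSingleServer()
                      .setAddress("redis://127.0.0.1:6379"); // assumed local Redis instance

                // Create the client, then release resources when done
                RedissonClient client = Redisson.create(config);
                client.shutdown();
            }
        }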
  • MongoDB

    Support Those Engines Key Features Description Supported DataSource Info Data Type Mapping Source Options Tips How to Create a MongoDB Data Synchronization Job Parameter I...
  • Deployment

    5859 2024-06-26 《Apache Amoro 0.6.1》
    System requirements Download the distribution Source code compilation Configuration Configure the service address Configure system database Configure high availability Config...
  • Getting Started

    5808 2024-07-05 《Apache Kyuubi 1.9.1》
    Requirements Installation Install Kyuubi Install Spark Configuration Start Kyuubi Operate Clients Open Connections Execute Statements Start Engines Close Connections Stop...
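    As a rough illustration of the Open Connections / Execute Statements / Close Connections flow listed above, a Java client can reach Kyuubi through its JDBC frontend; port 10009 is Kyuubi's default, and the sketch assumes a Hive JDBC driver on the classpath:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class KyuubiJdbcExample {
            public static void main(String[] args) throws Exception {
                // Open a connection to the Kyuubi server (assumed local, default port 10009)
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:hive2://localhost:10009/", "anonymous", "");
                     Statement stmt = conn.createStatement();
                     // Execute a trivial statement; Kyuubi starts an engine on demand
                     ResultSet rs = stmt.executeQuery("SELECT 1")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt(1));
                    }
                } // Connection closes here, ending the session
            }
        }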
  • Postgre CDC

    Support Those Engines Key features Description Supported DataSource Info Using Dependency Install Jdbc Driver For Spark/Flink Engine For SeaTunnel Zeta Engine Data Type Map...
  • Caching

    5778 2024-05-25 《Apache Superset 4.0.1》
    Dependencies Fallback Metastore Cache Chart Cache Timeout SQL Lab Query Results Caching Thumbnails Superset uses Flask-Caching for caching purposes. Flask-Caching supports v...
  • REST API v1

    5728 2024-07-05 《Apache Kyuubi 1.9.1》
    REST API v1 Session Resource GET /sessions Response Body GET /sessions/${sessionHandle} Response Body GET /sessions/${sessionHandle}/info/${infoType} Request Parameters Respon...
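    A small sketch of calling the Session resource above, assuming a Kyuubi server whose REST frontend listens on its default port 10099 (host and port are illustrative assumptions):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class KyuubiRestSessionsExample {
            public static void main(String[] args) throws Exception {
                // GET /sessions lists the sessions currently open on the server
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:10099/api/v1/sessions")) // assumed address
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode());
                System.out.println(response.body()); // JSON description of each session
            }
        }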
  • Flink K8s Integration Support

    Additional Environment Requirements Integration Preparation Kubernetes Connection Configuration Kubernetes RBAC Configuration Docker Remote Container Service Configuration Job Submission Application Job Deployment Session Job Deployment Related Parameter Configuration StreamPark Flink Kubernetes is implemented on top of Flink Native Kubernetes and supports the following F...
  • Deployment

    5674 2024-07-01 《Apache Hudi 0.15.0》
    Deploying Hudi Streamer Spark Datasource Writer Jobs Upgrading Downgrading Migrating This section provides all the help you need to deploy and operate Hudi tables at scale. ...
  • Creating your first interoperable table

    Pre-requisites Steps Initialize a pyspark shell Create dataset Running sync Conclusion Next steps Using OneTable to sync your source tables in different target formats invo...