Devlive 开源社区: search completed in 2.559 seconds, 804 matching results found.
  • MySQL

    Support Mysql Version Support Those Engines Description Using Dependency For Spark/Flink Engine For SeaTunnel Zeta Engine Key Features Supported DataSource Info Data Type M...
  • JDO 3.0 Overview

    4848 2024-05-25 《Apache JDO 3.2.1》
    Background Metadata API Enhancer API Query Cancel/Timeout API Control of read objects locking Background Java Data Objects (JDO) is a specification begun in 2000, with 2 maj...
  • Kafka

    Support Those Engines Key Features Description Supported DataSource Info Sink Options Parameter Interpretation Topic Formats Semantics Partition Key Fields Assign Partition...
  • Elasticsearch Connector

    Elasticsearch write dependencies Writing data to Elasticsearch based on the official docs Writing to Elasticsearch with Apache StreamPark™ 1. Configure the policy and connection info 2. Write to Elasticsearch Other configuration Handling failed Elasticsearch requests Configuring the internal bulk processor Apache StreamPark™ configuration E...
  • REST API v1

    4817 2024-07-05 《Apache Kyuubi 1.9.1》
    REST API v1 Session Resource GET /sessions Response Body GET /sessions/${sessionHandle} Response Body GET /sessions/${sessionHandle}/info/${infoType} Request Parameters Respon...
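    The endpoint names above come from the Kyuubi REST API v1 session resource. The TypeScript below is a hedged sketch of calling GET /sessions and GET /sessions/${sessionHandle}; the base URL, the /api/v1 prefix, and the port are assumptions about a local deployment, not details taken from this result.

        // Hedged sketch: query Kyuubi's REST API v1 session resource.
        // BASE_URL is an assumption (local server, /api/v1 prefix); adjust for your deployment.
        const BASE_URL = "http://localhost:10099/api/v1";

        async function listSessions(): Promise<unknown> {
          const res = await fetch(`${BASE_URL}/sessions`); // GET /sessions
          if (!res.ok) throw new Error(`Kyuubi returned HTTP ${res.status}`);
          return res.json(); // list of open sessions
        }

        async function getSession(sessionHandle: string): Promise<unknown> {
          const res = await fetch(`${BASE_URL}/sessions/${sessionHandle}`); // GET /sessions/${sessionHandle}
          if (!res.ok) throw new Error(`Kyuubi returned HTTP ${res.status}`);
          return res.json(); // details for a single session
        }

        listSessions().then(console.log).catch(console.error);

    Runs as-is on Node 18+ (global fetch); the session handle passed to getSession would come from the listing call.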
  • String

    4788 2024-06-14 《Lodash 3.10.1》
    _.camelCase([string=’’]) Arguments Returns Example _.capitalize([string=’’]) Arguments Returns Example _.deburr([string=’’]) Arguments Returns Example _.endsWith([strin...
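    The identifiers in this result are Lodash's String helpers. A minimal usage sketch follows, assuming lodash (plus @types/lodash for TypeScript) is installed from npm; expected outputs are shown in the comments.

        // Minimal sketch of the Lodash String helpers named above.
        import _ from "lodash";

        console.log(_.camelCase("Foo Bar"));  // "fooBar"
        console.log(_.capitalize("fred"));    // "Fred"
        console.log(_.deburr("déjà vu"));     // "deja vu"
        console.log(_.endsWith("abc", "c"));  // true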
  • High-Speed Ingest

    4780 2024-06-22 《Apache Accumulo 2.x》
    Pre-Splitting New Tables Multiple Ingest Clients Bulk Ingest Logical Time for Bulk Ingest MapReduce Ingest Accumulo is often used as part of a larger data processing and stor...
  • RHEL/CentOS/Oracle Linux 7

    Steps Next Step More Information On a server host that has Internet access, use a command line editor to perform the following Steps Before installing Ambari, you must upda...
  • 3. Trouble Shooting

    Trouble Shooting Common Issues java.lang.UnsupportedClassVersionError .. Unsupported major.minor version 52.0 org.apache.spark.SparkException: When running with master ‘yarn’ eith...
  • Creating your first interoperable table

    Pre-requisites Steps Initialize a pyspark shell Create dataset Running sync Conclusion Next steps Using OneTable to sync your source tables in different target format invo...