Devlive Open Source Community. This search took 0.238 seconds and returned 937 results.
  • 6.2 Pandoc options for LaTeX output

    608 2024-05-24 《R Markdown Cookbook》
    If you are using the default Pandoc template for LaTeX output, there are several options that you may set to adjust the appearance of the PDF output document. We list a few exampl...
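    Purely as an illustration (not taken from the book, which sets these options in the Rmd YAML header), here is a minimal Python sketch that passes a few of the documented LaTeX template variables to pandoc on the command line; the input.md / output.pdf file names are hypothetical and pandoc is assumed to be installed and on the PATH:

        import subprocess

        # -V sets Pandoc template variables; geometry, fontsize and linkcolor
        # are among the variables recognized by the default LaTeX template.
        subprocess.run(
            [
                "pandoc", "input.md",
                "-V", "geometry:margin=1in",
                "-V", "fontsize=12pt",
                "-V", "linkcolor=blue",
                "-o", "output.pdf",
            ],
            check=True,
        )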
  • AmazonSqs

    AmazonSqs AmazonSqs source connector Support Those Engines Spark Flink SeaTunnel Zeta Key Features batch stream exactly-once column projection parallelism su...
  • Choose Services

    Steps Next Step More Information Based on the Stack chosen during the Select Stack step, you are presented with the choice of Services to install into the cluster. A Stack com...
  • Deploy Kyuubi engines on Kubernetes

    606 2024-07-05 《Apache Kyuubi 1.9.1》
    Requirements Configurations Master Deploy Mode Docker Image Test Cluster ServiceAccount Volumes PodTemplateFile Other Requirements When you want to run Kyuubi’s Spark S...
  • 5. Data partitioning (sharding)

    606 2024-06-19 《Redisson》
    1. Partitioning (sharding) of Redis based Java objects 2. Partitioning (sharding) of Redis setup This feature is available only in the Redisson PRO edition. 1. Partitioning (shardin...
  • Google BigQuery

    Iceberg tables Using Iceberg JSON metadata file to create the Iceberg BigLake tables: Steps to add additional configurations to the Hudi writers: Using BigLake Metastore to crea...
  • Alert Configuration

    Added alert configuration E-mail DingTalk Wechat Lark Alert Test Modify alert configuration: Use alert configuration Delete alert configuration: StreamPark supports a va...
  • Encryption

    601 2024-07-01 《Apache Hudi 0.15.0》
    Encrypt Copy-on-Write tables Note Since Hudi 0.11.0, Spark 3.2 support has been added and accompanying that, Parquet 1.12 has been included, which brings encryption feature to H...
  • Flink DDL

    601 2024-06-26 《Apache Amoro 0.6.1》
    Create catalogs Flink SQL YAML configuration CREATE statement CREATE DATABASE CREATE TABLE PARTITIONED BY CREATE TABLE LIKE DROP statement DROP DATABASE DROP TABLE SHOW ...
  • Data Reading (数据读取)

    Nightingale writes all of the monitoring data it receives directly into the backend time-series database, so reading monitoring data does not need to go through Nightingale's own API; you can read straight from the backend TSDB's API. That is, if you use Prometheus, read the monitoring data through the Prometheus API; if you use VictoriaMetrics, read it through the VictoriaMetrics API. For Prometheus, for example, those are the /...
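    As a sketch of reading directly from the backend TSDB rather than through Nightingale, the following assumes a Prometheus server at http://localhost:9090 and the requests package, and issues an instant query against Prometheus's standard /api/v1/query endpoint (VictoriaMetrics exposes a compatible query endpoint, so only the base URL would change):

        import requests

        # Assumed address of the backend time-series database.
        PROMETHEUS_URL = "http://localhost:9090"

        def instant_query(expr: str) -> list:
            """Run a PromQL instant query via the Prometheus HTTP API."""
            resp = requests.get(
                f"{PROMETHEUS_URL}/api/v1/query",
                params={"query": expr},
                timeout=10,
            )
            resp.raise_for_status()
            body = resp.json()
            if body.get("status") != "success":
                raise RuntimeError(f"query failed: {body}")
            return body["data"]["result"]

        if __name__ == "__main__":
            # Current value of the 'up' metric for every scraped target.
            for series in instant_query("up"):
                print(series["metric"], series["value"])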