Devlive Open Source Community. This search took 0.478 seconds and found 604 relevant results.
  • Object

    777 2024-06-05 《Ramda 0.18.0》
    clone values eqProps keys omit pick pickAll project prop props keysIn path valuesIn toPairs toPairsIn propOr has hasIn assoc assocPath lens pickBy evolve inv...
  • Object

    767 2024-06-05 《Ramda 0.10.0》
    clone values eqProps keys omit pick pickAll project prop props keysIn path valuesIn toPairs toPairsIn propOr has hasIn assoc assocPath lens pickBy evolve inv...
  • Variable Management

    Background Introduction Create Variable Reference variables in Flink SQL Reference variables in args of Flink JAR jobs Background Introduction In the actual production enviro...
  • Configure Postgres to Allow Remote Connections

    About This Task Steps About This Task It is critical that you configure Postgres to allow remote connections before you deploy a cluster. If you do not perform these steps in ...
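The task in the result above (allowing remote connections to Postgres before deploying a cluster) can be sanity-checked from another host once the server-side changes are in place. Below is a minimal sketch, assuming psycopg2 is installed and that listen_addresses and pg_hba.conf have already been adjusted on the server; the hostname, database, and credentials are placeholders, not values from the linked page.

```python
# Minimal sketch: verify that a Postgres server accepts remote connections.
# Assumes the server-side steps from the linked task are already done
# (listen_addresses set in postgresql.conf, a "host" rule added to pg_hba.conf,
# and Postgres restarted). Host, database, and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="db.example.com",   # remote Postgres host (placeholder)
    port=5432,
    dbname="postgres",
    user="postgres",
    password="changeme",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # prints the server version if the remote connection works
conn.close()
```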
  • Trino

    Trino, just like Presto, allows you to query table formats like Hudi, Delta, and Iceberg using connectors. Users do not need additional configurations to work with OneTable syn...
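As a companion to the Trino result above, here is a minimal sketch of querying a connector-backed table from Python, assuming the `trino` client package is installed and the server already has an `iceberg` catalog configured; host, port, catalog, schema, and table names are placeholders.

```python
# Minimal sketch: query a table through a Trino connector from Python.
# Assumes `pip install trino` and a running Trino server with an "iceberg"
# catalog configured; all connection details below are placeholders.
import trino

conn = trino.dbapi.connect(
    host="localhost",
    port=8080,
    user="admin",
    catalog="iceberg",   # catalog names are deployment-specific; a Hudi or Delta catalog works the same way
    schema="default",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM my_table")  # placeholder table name
print(cur.fetchone()[0])
cur.close()
conn.close()
```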
  • Configure SAM and Schema Registry Metadata Stores in Postgres

    About This Task Steps About This Task If you have already installed MySQL and configured the SAM and Schema Registry metadata stores using MySQL, you do not need to configure additional metadata stores in Postgres. Steps Log in to Postgres: sudo su postgres psql Create a database named registry with the password registry: ...
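The truncated steps above end at creating the registry database. A minimal sketch of that step done programmatically, assuming psycopg2 and a local superuser connection; the CREATE USER and GRANT statements are an assumption about how the truncated instructions continue, and SAM would need an analogous database that is not shown here.

```python
# Minimal sketch of the step quoted above: create the "registry" database and
# role for Schema Registry. The SQL mirrors what the task runs interactively in
# psql; connection details are placeholders, and the CREATE USER / GRANT
# statements are an assumption about the rest of the truncated steps.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres",
                        user="postgres", password="changeme")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute("CREATE DATABASE registry;")
    cur.execute("CREATE USER registry WITH PASSWORD 'registry';")
    cur.execute("GRANT ALL PRIVILEGES ON DATABASE registry TO registry;")
conn.close()
```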
  • TiDB

    760 2024-07-05 《Apache Kyuubi 1.9.1》
    TiDB Integration Dependencies Configurations TiDB Operations TiDB is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloa...
  • PyHive

    751 2024-07-05 《Apache Kyuubi 1.9.1》
    Requirements Usage DB-API Use PyHive with Pandas Authentication PyHive is a collection of Python DB-API and SQLAlchemy interfaces for Hive. PyHive can connect with the Kyuub...
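Here is a minimal PyHive sketch for the Kyuubi connection described above, assuming pyhive is installed with its Hive extras and a Kyuubi server is listening on its default Thrift port 10009; the host, username, and query are placeholders.

```python
# Minimal sketch: connect to a Kyuubi (HiveServer2-compatible) endpoint with PyHive.
# Assumes `pip install pyhive[hive]` and a Kyuubi server on its default Thrift
# port 10009; host, username, and the query are placeholders.
from pyhive import hive

conn = hive.Connection(host="kyuubi.example.com", port=10009, username="anonymous")
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchall())   # -> [(1,)]
cur.close()
conn.close()
```

The same DB-API connection can also be handed to pandas.read_sql to get results as a DataFrame, which is presumably what the "Use PyHive with Pandas" heading in that result covers.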
  • MySQL/MariaDB Prerequisite

    You must change the variable log_bin_trust_function_creators to 1 during Ranger installation. From RDS Dashboard>Parameter group (on the left side of the page): Set the MySQL...
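Outside of RDS, where the parameter-group route above does not apply, the same prerequisite can be set from a SQL session. Below is a minimal sketch using pymysql; credentials are placeholders, and the connecting user is assumed to have the SUPER (or SYSTEM_VARIABLES_ADMIN) privilege.

```python
# Minimal sketch: set log_bin_trust_function_creators = 1 from a SQL session.
# On RDS this is done through the parameter group as described above; the direct
# SET GLOBAL shown here assumes a self-managed MySQL/MariaDB where the connecting
# user has sufficient privileges. Credentials are placeholders.
import pymysql

conn = pymysql.connect(host="mysql.example.com", user="root", password="changeme")
with conn.cursor() as cur:
    cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
    cur.execute("SHOW VARIABLES LIKE 'log_bin_trust_function_creators'")
    print(cur.fetchone())   # -> ('log_bin_trust_function_creators', 'ON')
conn.close()
```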
  • Performance

    Scan planning Metadata filtering Data filtering Iceberg is designed for huge tables and is used in production where a single table can contain tens of petabytes of data. Even ...