You can download the stable release package from the download page, or get the source code from GitHub and compile it according to the README.

System requirements

  • Java 8 is required. Java 17 is required for Trino.
  • Optional: MySQL 5.5 or higher
  • Optional: PostgreSQL 14.x or higher
  • Optional: ZooKeeper 3.4.x or higher
  • Optional: Hive (2.x or 3.x)
  • Optional: Hadoop (2.9.x or 3.x)

Download the distribution

All release packages can be downloaded from the download page. Download amoro-x.y.z-bin.zip (where x.y.z is the release number); you can also download the runtime packages for each engine version according to the engine you are using. Unzip it to create the amoro-x.y.z directory in the same directory, and then go to that directory.

Source code compilation

You can build from the master branch without compiling Trino. The build commands and the locations of the resulting artifacts are described below:

  git clone https://github.com/NetEase/amoro.git
  cd amoro
  base_dir=$(pwd)
  mvn clean package -DskipTests -pl '!Trino'
  cd dist/target/
  ls
  amoro-x.y.z-bin.zip # AMS release package
  dist-x.y.z-tests.jar
  dist-x.y.z.jar
  archive-tmp/
  maven-archiver/

  cd ${base_dir}/flink/v1.15/flink-runtime/target
  ls
  amoro-flink-runtime-1.15-x.y.z-tests.jar
  amoro-flink-runtime-1.15-x.y.z.jar # Flink 1.15 runtime package
  original-amoro-flink-runtime-1.15-x.y.z.jar
  maven-archiver/

  cd ${base_dir}/spark/v3.1/spark-runtime/target
  ls
  amoro-spark-3.1-runtime-x.y.z.jar # Spark v3.1 runtime package
  amoro-spark-3.1-runtime-x.y.z-tests.jar
  amoro-spark-3.1-runtime-x.y.z-sources.jar
  original-amoro-spark-3.1-runtime-x.y.z.jar

If you need to compile the Trino module as well, install JDK 17 locally, configure toolchains.xml in the ${user.home}/.m2/ directory, and then run mvn package -P toolchain to build the entire project.

  <?xml version="1.0" encoding="UTF-8"?>
  <toolchains>
    <toolchain>
      <type>jdk</type>
      <provides>
        <version>17</version>
        <vendor>sun</vendor>
      </provides>
      <configuration>
        <jdkHome>${YourJDK17Home}</jdkHome>
      </configuration>
    </toolchain>
  </toolchains>
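With the toolchains.xml above in place, the full build including Trino can then be run from the project root (adding -DskipTests, as in the non-Trino build, if you want to skip the tests):

```
  cd amoro
  mvn package -P toolchain
```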

Configuration

If you want to use AMS in a production environment, it is recommended to modify {AMORO_HOME}/conf/config.yaml by referring to the following configuration steps.

Configure the service address

  • The ams.server-bind-host configuration specifies the host to which AMS is bound. The default value, 0.0.0.0, indicates binding to all network interfaces.
  • The ams.server-expose-host configuration specifies the host exposed by AMS, which compute engines and optimizers use to connect to AMS. You can configure a specific IP address of the machine, or an IP prefix; when AMS starts up, it uses the first local address that matches this prefix.
  • The ams.thrift-server.table-service.bind-port configuration specifies the binding port of the Thrift Server that provides the table service. The compute engines access AMS through this port, and the default value is 1260.
  • The ams.thrift-server.optimizing-service.bind-port configuration specifies the binding port of the Thrift Server that provides the optimizing service. The optimizers access AMS through this port, and the default value is 1261.
  • The ams.http-server.bind-port configuration specifies the port to which the HTTP service is bound. The Dashboard and Open API are bound to this port, and the default value is 1630.
  ams:
    server-bind-host: "0.0.0.0" # The IP address for service listening, default is 0.0.0.0.
    server-expose-host: "127.0.0.1" # The IP address for service external exposure, default is 127.0.0.1.
    thrift-server:
      table-service:
        bind-port: 1260 # The port for accessing AMS table service.
      optimizing-service:
        bind-port: 1261 # The port for accessing AMS optimizing service.
    http-server:
      bind-port: 1630 # The port for accessing AMS Dashboard.

Make sure the port is not used before configuring it.
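As a quick sanity check before assigning a port, a sketch like the following can tell you whether something is already listening on it. It relies only on bash's built-in /dev/tcp pseudo-device, so no extra tools are needed; the ports checked here are the AMS defaults.

```shell
#!/usr/bin/env bash
# Sketch: check whether a TCP port on this host already has a listener.
# A successful connect via bash's /dev/tcp pseudo-device means the port
# is taken; a refused connection means it is free to assign to AMS.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 1260 1261 1630; do  # the default AMS ports
  if port_in_use "$p"; then
    echo "port $p is already in use"
  else
    echo "port $p is free"
  fi
done
```

If a port is reported as in use, pick a different one and adjust the corresponding bind-port entry in config.yaml.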

Configure system database

You can use MySQL/PostgreSQL as the system database instead of the default Derby.

If you would like to use MySQL as the system database, you need to manually download the MySQL JDBC Connector and move it into the {AMORO_HOME}/lib/ directory. You can use the following command to complete these operations:

  cd ${AMORO_HOME}
  MYSQL_JDBC_DRIVER_VERSION=8.0.30
  wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/${MYSQL_JDBC_DRIVER_VERSION}/mysql-connector-java-${MYSQL_JDBC_DRIVER_VERSION}.jar
  mv mysql-connector-java-${MYSQL_JDBC_DRIVER_VERSION}.jar lib

Create an empty database in MySQL/PostgreSQL; AMS will automatically create the table structures in this database when it starts for the first time.
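For example, assuming MySQL and the database name amoro used in the sample JDBC URL, the empty database could be created like this (the database name and character set are illustrative, match them to your own setup):

```
  -- Hypothetical database name; keep it consistent with the JDBC URL in config.yaml.
  CREATE DATABASE IF NOT EXISTS amoro DEFAULT CHARACTER SET utf8mb4;
```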

Then add the MySQL/PostgreSQL configuration to the config.yaml of AMS:

  # MySQL
  ams:
    database:
      type: mysql
      jdbc-driver-class: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://127.0.0.1:3306/amoro?useUnicode=true&characterEncoding=UTF8&autoReconnect=true&useAffectedRows=true&useSSL=false
      username: root
      password: root

  # PostgreSQL
  #ams:
  #  database:
  #    type: postgres
  #    jdbc-driver-class: org.postgresql.Driver
  #    url: jdbc:postgresql://127.0.0.1:5432/amoro
  #    username: user
  #    password: passwd

Configure high availability

To improve stability, AMS supports a one-master-multi-backup HA mode. ZooKeeper is used for leader election; you specify the AMS cluster name and the ZooKeeper address. The AMS cluster name distinguishes different AMS clusters sharing the same ZooKeeper cluster, so they do not interfere with each other.

  ams:
    ha:
      enabled: true # Enable HA
      cluster-name: default # Differentiates multiple AMS clusters bound to the same ZooKeeper.
      zookeeper-address: 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183 # ZooKeeper server address.

Configure optimizer containers

To scale out optimizers through AMS, container configuration is required. If you choose to start external optimizers manually, no additional container configuration is needed: AMS initializes a container named external by default to hold all externally started optimizers. AMS provides implementations of LocalContainer and FlinkContainer out of the box; the configuration for both container types is shown below:

  containers:
    - name: localContainer
      container-impl: com.netease.amoro.optimizer.LocalOptimizerContainer
      properties:
        export.JAVA_HOME: "/opt/java" # JDK environment
    - name: flinkContainer
      container-impl: com.netease.amoro.optimizer.FlinkOptimizerContainer
      properties:
        flink-home: "/opt/flink/" # The installation directory of Flink
        export.JVM_ARGS: "-Djava.security.krb5.conf=/opt/krb5.conf" # Extra JVM parameters for submitting Flink jobs, such as Kerberos settings.
        export.HADOOP_CONF_DIR: "/etc/hadoop/conf/" # Hadoop configuration file directory
        export.HADOOP_USER_NAME: "hadoop" # Hadoop user
        export.FLINK_CONF_DIR: "/etc/hadoop/conf/" # Flink configuration file directory

Configure terminal

The Terminal module in the AMS Dashboard allows users to execute SQL directly on the platform. Currently, the Terminal backend supports two implementations: local and kyuubi. In local mode, an embedded Spark environment is started inside AMS; in kyuubi mode, a separate Kyuubi service must be deployed. For the kyuubi mode configuration, refer to Using Kyuubi with Terminal. Below is the configuration for local mode:

  ams:
    terminal:
      backend: local
      local.spark.sql.iceberg.handle-timestamp-without-timezone: false
      # When the catalog type is Hive, the Spark session catalog is used automatically to access Hive tables.
      local.using-session-catalog-for-hive: true

Environment variables

The following environment variables take effect during AMS startup; you can set them to override the default values.

Environment variable   Default value        Description
AMORO_CONF_DIR         ${AMORO_HOME}/conf   Directory from which Amoro loads its configuration files
AMORO_LOG_DIR          ${AMORO_HOME}/logs   Directory to which Amoro writes its log files

Note: $AMORO_HOME cannot be overridden via an environment variable; it always points to the parent directory of ./bin.
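For example, to keep the configuration outside the installation directory, you could point AMORO_CONF_DIR at a custom path before starting AMS (the path here is purely illustrative):

```
  # Hypothetical location holding config.yaml and jvm.properties.
  export AMORO_CONF_DIR=/etc/amoro/conf
  bin/ams.sh start
```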

Configure AMS JVM

The following JVM options can be set in ${AMORO_CONF_DIR}/jvm.properties.

Property name     Related JVM option                             Description
xms               -Xms${value}m                                  Minimum heap size of the JVM
xmx               -Xmx${value}m                                  Maximum heap size of the JVM
jmx.remote.port   -Dcom.sun.management.jmxremote.port=${value}   Enables remote JMX monitoring on the given port
extra.options     JAVA_OPTS="${JAVA_OPTS} ${JVM_EXTRA_CONFIG}"   Additional JVM options appended to JAVA_OPTS
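As an illustration, a jvm.properties that gives AMS a fixed 4 GB heap and enables remote JMX might look like this (all values are examples, not recommendations):

```
  xms=4096
  xmx=4096
  jmx.remote.port=9999
  extra.options=-XX:+UseG1GC
```

Setting xms and xmx to the same value avoids heap resizing at runtime; the JMX port and the extra options are hypothetical and should be adapted to your environment.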

Start AMS

Enter the directory amoro-x.y.z and execute bin/ams.sh start to start AMS.

  cd amoro-x.y.z
  bin/ams.sh start

Then, access http://localhost:1630 through a browser to see the login page. If it appears, the startup was successful. The default username and password for login are both "admin".
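To script this check instead of using a browser, you could poll the dashboard port until it answers (the host and port must match your ams.http-server settings; the timeout is arbitrary):

```
  # Wait up to ~30 s for the AMS dashboard to answer on its HTTP port.
  for i in $(seq 1 30); do
    if curl -sf http://localhost:1630 >/dev/null; then
      echo "AMS is up"
      break
    fi
    sleep 1
  done
```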

You can also restart/stop AMS with the following command:

  bin/ams.sh restart/stop

Upgrade AMS

Upgrade system databases

You can find all the upgrade SQL scripts under {AMORO_HOME}/conf/mysql/ with name pattern upgrade-a.b.c-to-x.y.z.sql. Execute the upgrade SQL scripts one by one to your system database based on your starting and target versions.
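For instance, upgrading a MySQL system database would mean applying each matching script in version order against the system database (the connection details and database name here are illustrative; keep the doc's a.b.c/x.y.z placeholders for your actual versions):

```
  cd ${AMORO_HOME}/conf/mysql
  # Apply every upgrade script between the running version and the target version, in order.
  mysql -h 127.0.0.1 -u root -p amoro < upgrade-a.b.c-to-x.y.z.sql
```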

Replace all libs and plugins

Replace all contents in the original {AMORO_HOME}/lib directory with the contents in the lib directory of the new installation package. Replace all contents in the original {AMORO_HOME}/plugin directory with the contents in the plugin directory of the new installation package.

Back up the old content before replacing it, so that you can roll back the upgrade if necessary.

Configure new parameters

The old configuration file {AMORO_HOME}/conf/config.yaml is usually compatible with the new version, but the new version may introduce new parameters. Try to compare the configuration files of the old and new versions, and reconfigure the parameters if necessary.

Restart AMS

Restart AMS with the following commands:

  bin/ams.sh restart