Apache Spark applications can read from and write to Accumulo tables.

Before reading this documentation, it may help to review the MapReduce documentation, as the API created for MapReduce jobs is also used by Spark.

This documentation references code from the Accumulo Spark example.

General configuration

  1. Create a shaded jar with your Spark code and all of your dependencies (excluding Spark and Hadoop). When creating the shaded jar, you should relocate Guava as Accumulo uses a different version. The pom.xml in the Spark example is a good reference and can be used as a starting point for a Spark application.

  2. Submit the job by running spark-submit with your shaded jar. You should pass in the location of your accumulo-client.properties file, which will be used to connect to your Accumulo instance (a sketch of how the main class can read this argument appears after the command below).

    $SPARK_HOME/bin/spark-submit \
      --class com.my.spark.job.MainClass \
      --master yarn \
      --deploy-mode client \
      /path/to/spark-job-shaded.jar \
      /path/to/accumulo-client.properties
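
Below is a rough sketch (not taken from the example) of how the main class referenced in the command above might consume the accumulo-client.properties path passed as its first program argument. The class and application names are placeholders from the command:

  import java.util.Properties;

  import org.apache.accumulo.core.client.Accumulo;
  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaSparkContext;

  public class MainClass {
    public static void main(String[] args) throws Exception {
      // The first program argument is the properties path passed to spark-submit above
      String clientPropsPath = args[0];

      // Load the client properties once on the driver; this object is serializable,
      // so Spark can ship it to executors when closures capture it.
      Properties props = Accumulo.newClientProperties().from(clientPropsPath).build();

      SparkConf conf = new SparkConf().setAppName("accumulo-spark-job");
      try (JavaSparkContext sc = new JavaSparkContext(conf)) {
        // ... read from or write to Accumulo using the snippets below ...
      }
    }
  }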

Reading from an Accumulo table

Apache Spark can read from an Accumulo table by using AccumuloInputFormat.

  Job job = Job.getInstance();
  AccumuloInputFormat.configure().clientProperties(props).table(inputTable).store(job);
  JavaPairRDD<Key,Value> data = sc.newAPIHadoopRDD(job.getConfiguration(),
      AccumuloInputFormat.class, Key.class, Value.class);
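
The resulting RDD can then be processed with ordinary Spark operations. The following is a minimal sketch using the data RDD from the snippet above (what is computed here is purely illustrative):

  // Count all entries read from the table
  long entryCount = data.count();

  // Count distinct rows by mapping each Key to its row and de-duplicating
  long rowCount = data.map(kv -> kv._1.getRow().toString()).distinct().count();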

Writing to an Accumulo table

There are two ways to write to an Accumulo table in Spark applications.

Use a BatchWriter

Write your data to Accumulo by creating an AccumuloClient for each partition and writing all data in the partition using a BatchWriter.

  // Spark will automatically serialize this properties object and send it to each partition
  Properties props = Accumulo.newClientProperties()
      .from("/path/to/accumulo-client.properties").build();
  JavaPairRDD<Key, Value> dataToWrite = ... ;
  dataToWrite.foreachPartition(iter -> {
    // Create the client inside the partition so that Spark does not attempt to serialize it.
    try (AccumuloClient client = Accumulo.newClient().from(props).build();
        BatchWriter bw = client.createBatchWriter(outputTable)) {
      iter.forEachRemaining(kv -> {
        Key key = kv._1;
        Value val = kv._2;
        Mutation m = new Mutation(key.getRow());
        m.at().family(key.getColumnFamily()).qualifier(key.getColumnQualifier())
            .visibility(key.getColumnVisibility()).timestamp(key.getTimestamp()).put(val);
        try {
          bw.addMutation(m);
        } catch (MutationsRejectedException e) {
          // addMutation throws a checked exception, which cannot escape this lambda directly
          throw new RuntimeException(e);
        }
      });
    }
  });
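
The BatchWriter defaults are often sufficient, but each partition's writer can also be tuned. Below is an illustrative sketch (the values are arbitrary) of a BatchWriterConfig that could be built inside foreachPartition, next to the client, and passed to createBatchWriter:

  // Built inside foreachPartition, alongside the AccumuloClient
  BatchWriterConfig bwConfig = new BatchWriterConfig()
      .setMaxMemory(50 * 1024 * 1024)      // buffer up to ~50 MB of mutations
      .setMaxWriteThreads(4)               // threads used to send mutations to tablet servers
      .setMaxLatency(2, TimeUnit.MINUTES); // flush buffered mutations at least this often
  // BatchWriter bw = client.createBatchWriter(outputTable, bwConfig);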

Using Bulk Import

Partition your data and write it to RFiles. The AccumuloRangePartitioner found in the Accumulo Spark example can be used for partitioning data. After your data has been written as RFiles to an output directory using AccumuloFileOutputFormat, bulk import that directory into Accumulo.

  // Write Spark output to HDFS
  JavaPairRDD<Key, Value> dataToWrite = ... ;
  Job job = Job.getInstance();
  AccumuloFileOutputFormat.configure().outputPath(outputDir).store(job);
  Partitioner partitioner = new AccumuloRangePartitioner("3", "7");
  JavaPairRDD<Key, Value> partData = dataToWrite.repartitionAndSortWithinPartitions(partitioner);
  partData.saveAsNewAPIHadoopFile(outputDir.toString(), Key.class, Value.class,
      AccumuloFileOutputFormat.class, job.getConfiguration());

  // Bulk import RFiles in HDFS into Accumulo
  try (AccumuloClient client = Accumulo.newClient().from(props).build()) {
    client.tableOperations().importDirectory(outputDir.toString()).to(outputTable).load();
  }
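
AccumuloFileOutputFormat requires the keys in each RFile to be written in sorted order, which is why the data is repartitioned and sorted within partitions above, and bulk import works best when each partition covers a contiguous range of keys, ideally aligned with the destination table's splits. As a rough, illustrative sketch only (the real AccumuloRangePartitioner in the Accumulo Spark example may differ), a range partitioner over split rows could look like this:

  import java.util.Arrays;

  import org.apache.accumulo.core.data.Key;
  import org.apache.spark.Partitioner;

  // Illustrative sketch; see the Accumulo Spark example for the real AccumuloRangePartitioner
  public class ExampleRangePartitioner extends Partitioner {

    // Sorted split rows; keys are routed to the partition whose range contains their row
    private final String[] splits;

    public ExampleRangePartitioner(String... splits) {
      this.splits = splits;
    }

    @Override
    public int numPartitions() {
      // One partition per range: (-inf, s0], (s0, s1], ..., (sN-1, +inf)
      return splits.length + 1;
    }

    @Override
    public int getPartition(Object key) {
      String row = ((Key) key).getRow().toString();
      int index = Arrays.binarySearch(splits, row);
      // binarySearch returns (-(insertion point) - 1) when the row is not an exact split
      return index >= 0 ? index : -index - 1;
    }
  }

The important property is that the partitioning agrees with the key ordering used by repartitionAndSortWithinPartitions, so each generated RFile covers a contiguous, sorted range of keys.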

Reference

  • Spark example - Example Spark application that reads from and writes to Accumulo
  • MapReduce - Documentation on reading/writing to Accumulo using MapReduce
  • Apache Spark - Spark project website