MemOptimized RowStore in Oracle Database 18c with OCI

On June 10, 2018 I blogged about the MemOptimized RowStore in Oracle Database 18c. If you haven’t read that post, it is a good idea to catch up now. I showed that accessing a memoptimized table t4 via the MemOptimized RowStore was around 60% slower than accessing a heap-organized table t1. I suspected that this disappointing result was related to my unsupported Docker environment. But I was wrong.

The next day, Chris Antognini contacted me, since he planned to talk about this feature at the AOUG Anwenderkonferenz 2018. We exchanged our thoughts and shared our findings. Chris ran his tests in the Oracle Cloud and could reproduce my results. That’s interesting. Even more interesting: Chris translated my Java program to C and proved that the MemOptimized RowStore can be fast. That’s cool. But why didn’t it work in Java? It’s the same after all, right? No. The Java program used the JDBC thin driver, the C program used OCI.

In this blog post, I will show that OCI is a prerequisite for getting good performance out of the MemOptimized RowStore.

The Program

I use the Java program from my previous post, extended by three parameters: the JDBC URL, the username and the password. Here’s the program.

J.java – 100,000 queries
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import oracle.jdbc.driver.OracleDriver;
public class J {
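   // True when running inside the database's embedded JVM (OJVM); in that case
   // the internal default connection is used instead of an external one.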
   private static boolean isRunningInDatabase() {
      return System.getProperty("oracle.jserver.version") != null;
   }
   public static void m(String tableName, String url, String userName, String password) throws SQLException {
      Connection conn;
      if (isRunningInDatabase()) {
         conn = new OracleDriver().defaultConnection();
      } else {
         conn = DriverManager.getConnection(
           url, userName, password);
      }
      conn.setAutoCommit(false);
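      // Time 100,000 single-row lookups by primary key, including statement
      // preparation and all fetches.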
      long start = System.currentTimeMillis();
      String query = "SELECT value FROM " + tableName + " WHERE key = ?";      
      PreparedStatement ps = conn.prepareStatement(query);
      for (long i = 1; i <= 100000; i++) {
         ps.setLong(1, i);
         ResultSet rs = ps.executeQuery();
         while (rs.next()) {
            rs.getString("value");
         }
         rs.close();
      }
      ps.close();
      if (!isRunningInDatabase()) {
         conn.close();
      }
      long end = System.currentTimeMillis();
      System.out.println("read 100000 rows from " + tableName + " in " +
         String.valueOf((double) (end-start)/1000) + " seconds via " + url + ".");      
   }
   public static void main(String[] args) throws SQLException {
      m(args[0], args[1], args[2], args[3]);
   }
}

I copied this program into the directory $ORACLE_HOME/jdbc/lib in my Docker container and compiled it with the following script:

compile.sh
export CLASSPATH=.:./ojdbc8.jar
javac J.java

The Test Script

I ran the tests for my previous post from the Eclipse IDE because it was convenient to set a breakpoint on line 20 to identify the Oracle process for perf. This time I do not need to produce flame graphs, so I run the test script directly on the server. This also reduces the network overhead, especially when running without Oracle Net.

Here’s the test script:

run.sh
#!/bin/bash

run(){
    TABLE=${1}
    URL=${2}
    for ((i=1;i<=5;i++));
    do
        echo -n "run #${i}: "
        java J ${TABLE} ${URL} ${USERNAME} ${PASSWORD}
    done
    echo ""
} 

export CLASSPATH=.:./ojdbc8.jar
export USERNAME=tvdca
export PASSWORD=tvdca

# Thin driver
run t1 "jdbc:oracle:thin:@//localhost:1521/odb.docker"
run t2 "jdbc:oracle:thin:@//localhost:1521/odb.docker"
run t3 "jdbc:oracle:thin:@//localhost:1521/odb.docker"
run t4 "jdbc:oracle:thin:@//localhost:1521/odb.docker"

# OCI driver with Oracle Net
run t1 "jdbc:oracle:oci:@odb"
run t2 "jdbc:oracle:oci:@odb"
run t3 "jdbc:oracle:oci:@odb"
run t4 "jdbc:oracle:oci:@odb"

# OCI driver without Oracle Net
run t1 "jdbc:oracle:oci:@"
run t2 "jdbc:oracle:oci:@"
run t3 "jdbc:oracle:oci:@"
run t4 "jdbc:oracle:oci:@"

The Result

Here’s the output of the run.sh call:

Console output of run.sh
[oracle@odb180 lib]$ ./run.sh
run #1: read 100000 rows from t1 in 11.918 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #2: read 100000 rows from t1 in 11.625 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #3: read 100000 rows from t1 in 11.662 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #4: read 100000 rows from t1 in 11.574 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #5: read 100000 rows from t1 in 11.729 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.

run #1: read 100000 rows from t2 in 11.786 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #2: read 100000 rows from t2 in 12.071 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #3: read 100000 rows from t2 in 12.621 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #4: read 100000 rows from t2 in 11.913 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #5: read 100000 rows from t2 in 11.972 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.

run #1: read 100000 rows from t3 in 11.397 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #2: read 100000 rows from t3 in 11.429 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #3: read 100000 rows from t3 in 11.308 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #4: read 100000 rows from t3 in 11.793 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #5: read 100000 rows from t3 in 11.903 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.

run #1: read 100000 rows from t4 in 19.789 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #2: read 100000 rows from t4 in 19.461 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #3: read 100000 rows from t4 in 19.181 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #4: read 100000 rows from t4 in 19.211 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.
run #5: read 100000 rows from t4 in 19.242 seconds via jdbc:oracle:thin:@//localhost:1521/odb.docker.

run #1: read 100000 rows from t1 in 13.145 seconds via jdbc:oracle:oci:@odb.
run #2: read 100000 rows from t1 in 12.698 seconds via jdbc:oracle:oci:@odb.
run #3: read 100000 rows from t1 in 13.14 seconds via jdbc:oracle:oci:@odb.
run #4: read 100000 rows from t1 in 12.842 seconds via jdbc:oracle:oci:@odb.
run #5: read 100000 rows from t1 in 12.978 seconds via jdbc:oracle:oci:@odb.

run #1: read 100000 rows from t2 in 13.049 seconds via jdbc:oracle:oci:@odb.
run #2: read 100000 rows from t2 in 12.581 seconds via jdbc:oracle:oci:@odb.
run #3: read 100000 rows from t2 in 12.44 seconds via jdbc:oracle:oci:@odb.
run #4: read 100000 rows from t2 in 12.787 seconds via jdbc:oracle:oci:@odb.
run #5: read 100000 rows from t2 in 12.727 seconds via jdbc:oracle:oci:@odb.

run #1: read 100000 rows from t3 in 12.402 seconds via jdbc:oracle:oci:@odb.
run #2: read 100000 rows from t3 in 12.479 seconds via jdbc:oracle:oci:@odb.
run #3: read 100000 rows from t3 in 12.483 seconds via jdbc:oracle:oci:@odb.
run #4: read 100000 rows from t3 in 12.346 seconds via jdbc:oracle:oci:@odb.
run #5: read 100000 rows from t3 in 12.528 seconds via jdbc:oracle:oci:@odb.

run #1: read 100000 rows from t4 in 11.452 seconds via jdbc:oracle:oci:@odb.
run #2: read 100000 rows from t4 in 10.945 seconds via jdbc:oracle:oci:@odb.
run #3: read 100000 rows from t4 in 11.597 seconds via jdbc:oracle:oci:@odb.
run #4: read 100000 rows from t4 in 11.295 seconds via jdbc:oracle:oci:@odb.
run #5: read 100000 rows from t4 in 11.746 seconds via jdbc:oracle:oci:@odb.

run #1: read 100000 rows from t1 in 10.508 seconds via jdbc:oracle:oci:@.
run #2: read 100000 rows from t1 in 10.662 seconds via jdbc:oracle:oci:@.
run #3: read 100000 rows from t1 in 10.105 seconds via jdbc:oracle:oci:@.
run #4: read 100000 rows from t1 in 10.44 seconds via jdbc:oracle:oci:@.
run #5: read 100000 rows from t1 in 10.415 seconds via jdbc:oracle:oci:@.

run #1: read 100000 rows from t2 in 10.29 seconds via jdbc:oracle:oci:@.
run #2: read 100000 rows from t2 in 10.15 seconds via jdbc:oracle:oci:@.
run #3: read 100000 rows from t2 in 10.266 seconds via jdbc:oracle:oci:@.
run #4: read 100000 rows from t2 in 10.351 seconds via jdbc:oracle:oci:@.
run #5: read 100000 rows from t2 in 10.259 seconds via jdbc:oracle:oci:@.

run #1: read 100000 rows from t3 in 9.95 seconds via jdbc:oracle:oci:@.
run #2: read 100000 rows from t3 in 9.756 seconds via jdbc:oracle:oci:@.
run #3: read 100000 rows from t3 in 10.325 seconds via jdbc:oracle:oci:@.
run #4: read 100000 rows from t3 in 9.517 seconds via jdbc:oracle:oci:@.
run #5: read 100000 rows from t3 in 9.951 seconds via jdbc:oracle:oci:@.

run #1: read 100000 rows from t4 in 9.182 seconds via jdbc:oracle:oci:@.
run #2: read 100000 rows from t4 in 8.996 seconds via jdbc:oracle:oci:@.
run #3: read 100000 rows from t4 in 8.977 seconds via jdbc:oracle:oci:@.
run #4: read 100000 rows from t4 in 9.024 seconds via jdbc:oracle:oci:@.
run #5: read 100000 rows from t4 in 9.082 seconds via jdbc:oracle:oci:@.

As in my previous post, I ignore the slowest and fastest run and take the average of the remaining three results per test variant to produce a chart.

[Chart: average runtime per table (t1–t4) and connection type]
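
For reference, here are the averages behind the chart (slowest and fastest run dropped, remaining three averaged, in seconds, computed from the output above):

table   thin     OCI with Oracle Net   OCI without Oracle Net
t1      11.672   12.987                10.454
t2      11.985   12.698                10.272
t3      11.540   12.455                 9.886
t4      19.305   11.448                 9.034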

We see that the JDBC OCI driver delivers the fastest results when skipping Oracle Net. Furthermore, the MemOptimized RowStore delivers the fastest results of all via OCI. Accessing the MemOptimized RowStore via the JDBC thin driver, however, leads to bad performance: around 19.3 seconds versus 9.0 seconds via OCI without Oracle Net. This looks like a bug.

Conclusion

To benefit from the MemOptimized RowStore, you have to access the database via OCI.
