MemOptimized RowStore in Oracle Database 19c

Oracle Database 19c has been available since February 13, 2019. I blogged about this feature here and here. Time for an update. So, what’s new in 19c regarding the MemOptimized Rowstore?

Fast Lookup Works with JDBC Thin Driver

I listed 16 prerequisites for the MemOptimized Rowstore in this blog post. The last one – “The query must not be executed from a JDBC thin driver connection. You have to use OCI, otherwise the performance will be really bad.” – does not apply anymore. This “bug” is fixed in 19c. Here are the JDBC thin driver results of the test program listed in this blog post:
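
For context, here is a sketch of a table and query that satisfy the remaining prerequisites for fast lookup. The names t4, key and value are illustrative; the actual test tables are defined in the blog post linked above:

create table t4 (
   key    number       not null,
   value  varchar2(30) not null,
   constraint t4_pk primary key (key)
) segment creation immediate
  memoptimize for read;

-- load the rows into the memoptimize pool
exec dbms_memoptimize.populate(user, 'T4')

-- a primary-key-based point lookup like this one is a fast lookup candidate
select value from t4 where key = :key;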

See, the runs for t4 are the fastest, because they used memopt r lookups instead of consistent gets. BTW, the absolute runtime values of these tests are not important or representative; they vary a lot depending on the Docker environment that I use. However, I consider the relative difference between t1 and t4 relevant and conclusive. The next graph visualizes the results. I also added the results for the PL/SQL runs to this graph. It clearly shows that if you can do it within the database, you should do it.
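
If you want to verify which access path a session used, you can compare the session statistics before and after running the query. A quick sketch:

select sn.name, ms.value
  from v$mystat ms
  join v$statname sn
    on sn.statistic# = ms.statistic#
 where sn.name in ('memopt r lookups', 'consistent gets');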

I also ran this test against 18.5.0.0.0. It looks like this bug has not been fixed in 18c yet.

Fast Ingest

This new 19c feature consists of two parts: enabling a table for memoptimized writes and triggering fast ingest for insert statements. The usage is best described in the Database Performance Tuning Guide.

First, the table must be enabled for memoptimized write using the memoptimize_write_clause. You can do that in the create table or the alter table statement. Here’s an example:
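
A minimal sketch, assuming a table named t5 with the column key as its primary key:

create table t5 (
   key    number       not null,
   value  varchar2(30) not null,
   constraint t5_pk primary key (key)
) segment creation immediate
  memoptimize for write;

-- or enable an existing table for memoptimized writes
alter table t5 memoptimize for write;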

One way to trigger fast ingest is to use an insert hint. Here’s an example:
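
A sketch based on the t5 table assumed above, using the memoptimize_write hint:

begin
   for i in 1 .. 100
   loop
      insert /*+ memoptimize_write */
      into t5 (key, value)
      values (i, 'value ' || i);
   end loop;
end;
/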

There is no commit in this anonymous PL/SQL block. In this case an additional commit statement would just slow down the processing. The insert statements are treated as asynchronous transactions. This mechanism is called “delayed inserts”. The rows to be inserted are collected in the large pool and processed asynchronously in batches using direct path inserts. That happens eventually. However, you can call dbms_memoptimize_admin.writes_flush to force the rows in the large pool to be written to disk.
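
For example, in SQL*Plus or SQLcl:

exec dbms_memoptimize_admin.writes_flush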

Fast ingest is much more efficient than a series of conventional single transactions. But there are some disadvantages to consider:

  1. Data loss in case of a crash of the database instance
  2. Delayed visibility of inserted data
  3. Delayed visibility of errors

The first two are simply the price you pay for optimizing the insert performance of multiple clients. However, the last one is interesting. Where are errors reported in this case, and how do we deal with them?

Here’s an example.
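
The following is a sketch of the experiment, assuming the t5 table from above already contains a row with key 10001 and no row with key 20001:

insert /*+ memoptimize_write */ into t5 (key, value) values (10001, 'existing key');
insert /*+ memoptimize_write */ into t5 (key, value) values (20001, 'new key');

exec dbms_memoptimize_admin.writes_flush

select key, value from t5 where key in (10001, 20001);

-- the query returns only the row with key 20001;
-- the insert with the duplicate key 10001 left no visible trace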

Fast ingest persisted the previously non-existing row with the key 20001. But the row with the existing key 10001 was ignored. Somewhere an “ORA-00001: unique constraint violated” must have been thrown. But right now I do not know if it was just swallowed or stored somewhere (I have not found the trace file mentioned in Database Concepts). If ignoring is the right way to deal with such errors, then we are all set. Otherwise we have to think a bit more about it.
