For a customer, I had to check whether partitioning improves the performance of massive, concurrent inserts.
The goal is to execute several INSERTs in parallel via the dbms_parallel_execute package (my previous blog post "parallelize your Oracle Insert with DBMS_PARALLEL_EXECUTE" explains how to use dbms_parallel_execute).
The idea is to insert more than 20 million rows into 2 tables:
- One non-partitioned table –> DBI_FK_NOPART
- One table partitioned by HASH –> DBI_FK_PART
Both tables have the same columns and the same indexes, but the index types differ:
- All indexes on the partitioned table are global partitioned indexes:
- CREATE INDEX … GLOBAL PARTITION BY HASH (…) …
- All indexes on the non-partitioned table are normal (non-partitioned) indexes:
- CREATE INDEX … ON …
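As a sketch, the two index types look like this (the index names, indexed column, and partition count below are illustrative assumptions, not the customer's actual DDL):

```sql
-- Global hash-partitioned index on the partitioned table DBI_FK_PART
-- (index name, column and partition count are assumed for illustration)
CREATE INDEX idx_dbi_fk_part_pkey
  ON xxxx.DBI_FK_PART (pkey)
  GLOBAL PARTITION BY HASH (pkey) PARTITIONS 32;

-- Normal (non-partitioned) index on the non-partitioned table DBI_FK_NOPART
CREATE INDEX idx_dbi_fk_nopart_pkey
  ON xxxx.DBI_FK_NOPART (pkey);
```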
--Table DBI_FK_PART --> PARTITIONED
SQL> select TABLE_NAME,PARTITION_NAME from dba_tab_partitions where table_name = 'DBI_FK_PART';

TABLE_NAME          PARTITION_NAME
------------------- ---------------
DBI_FK_PART         SYS_P9797
DBI_FK_PART         SYS_P9798
DBI_FK_PART         SYS_P9799
DBI_FK_PART         SYS_P9800
DBI_FK_PART         SYS_P9801
DBI_FK_PART         SYS_P9802
DBI_FK_PART         SYS_P9803
DBI_FK_PART         SYS_P9804
DBI_FK_PART         SYS_P9805
DBI_FK_PART         SYS_P9806
DBI_FK_PART         SYS_P9807
DBI_FK_PART         SYS_P9808
DBI_FK_PART         SYS_P9809
DBI_FK_PART         SYS_P9810
DBI_FK_PART         SYS_P9811
DBI_FK_PART         SYS_P9812
DBI_FK_PART         SYS_P9813
DBI_FK_PART         SYS_P9814
DBI_FK_PART         SYS_P9815
DBI_FK_PART         SYS_P9816
DBI_FK_PART         SYS_P9817
DBI_FK_PART         SYS_P9818
DBI_FK_PART         SYS_P9819
DBI_FK_PART         SYS_P9820
DBI_FK_PART         SYS_P9821
DBI_FK_PART         SYS_P9822
DBI_FK_PART         SYS_P9823
DBI_FK_PART         SYS_P9824
DBI_FK_PART         SYS_P9825
DBI_FK_PART         SYS_P9826
DBI_FK_PART         SYS_P9827
DBI_FK_PART         SYS_P9828

32 rows selected.

--Table DBI_FK_NOPART --> NOT PARTITIONED
SQL> select TABLE_NAME,PARTITION_NAME from dba_tab_partitions where table_name = 'DBI_FK_NOPART';

no rows selected
Each table has more than 1.2 billion rows:
SQL> select count(*) from xxxx.dbi_fk_nopart;

  COUNT(*)
----------
1241226011

1 row selected.

SQL> select count(*) from xxxx.dbi_fk_part;

  COUNT(*)
----------
1196189234

1 row selected.
Let’s check the maximum primary key of both tables:
SQL> select max(pkey) from xxxx.dbi_fk_part;

 MAX(PKEY)
----------
9950649803

1 row selected.

SQL> select max(pkey) from xxxx.dbi_fk_nopart;

 MAX(PKEY)
----------
9960649804

1 row selected.
Let’s create 2 procedures:
- “test_insert_nopart”, which inserts into the non-partitioned table “DBI_FK_NOPART”
- “test_insert_part”, which inserts into the partitioned table “DBI_FK_PART”
create or replace NONEDITIONABLE procedure test_insert_nopart is
  v_sql_stmt varchar2(32767);
  v_pkey     number;
  l_chunk_id NUMBER;
  l_start_id NUMBER;
  l_end_id   NUMBER;
  l_any_rows BOOLEAN;
  l_try      NUMBER;
  l_status   NUMBER;
begin
  DBMS_OUTPUT.PUT_LINE('start : '||to_char(sysdate,'hh24:mi:ss'));

  begin
    DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
  exception when others then null;
  end;

  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');

  --We create the chunks
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(
    task_name => 'TASK_NAME',
    sql_stmt  => 'SELECT NUM_COL,NUM_COL+10 FROM TEST_TAB WHERE ROWNUM < 10001',
    by_rowid  => false);

  SELECT max(pkey) into v_pkey FROM XXXX.DBI_FK_NOPART;

  --Each chunk inserts 1000 rows and works with a different session_id
  v_sql_stmt := 'declare
    s varchar2(16000);
    vstart_id number := :start_id;
    vend_id number := :end_id;
    table_name varchar2(30);
    v_pkey number;
  begin
    EXECUTE IMMEDIATE ''SELECT max(pkey) FROM XXXX.DBI_FK_NOPART'' INTO v_pkey;
    for rec in 1..1000 loop
      s := ''INSERT /*TEST_INSERT_DBI_FK_NOPART*/ INTO XXXX.DBI_FK_NOPART
        ( pkey, boid, metabo, lastupdate, processid, rowcomment, created, createduser,
          replaced, replaceduser, archivetag, mdbid, itsforecast, betrag,
          itsopdetherkunft, itsopdethkerstprm, itsfckomppreisseq, clsfckomppreisseq,
          issummandendpreis, partitiontag, partitiondomain, fcvprodkomppkey,
          fckvprdankomppkey, session_id )
        VALUES
        ( 1 +'||v_pkey||' ,
          ''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
          -695, sysdate, ''''B.3142'''' , NULL, SYSDATE,
          ''''XXXX_DEV_DBITEST'''' , SYSDATE, NULL, NULL, NULL,
          ''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' , 0,
          ''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' , NULL,
          ''''5849f308-215b-486b-95bd-cbd7afe8440H'''', -251, 0, 201905, ''''E'''',
          :start_id, :end_id,
          SYS_CONTEXT(''''USERENV'''',''''SESSIONID'''') )'';
      execute immediate s using vstart_id, vend_id;
      commit;
    end loop;
  end;';

  dbms_output.put_line(v_sql_stmt);

  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'TASK_NAME',
    sql_stmt       => v_sql_stmt,
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_OUTPUT.PUT_LINE('end : '||to_char(sysdate,'hh24:mi:ss'));
end;
/

create or replace NONEDITIONABLE procedure test_insert_part is
  v_sql_stmt varchar2(32767);
  v_pkey     number;
  l_chunk_id NUMBER;
  l_start_id NUMBER;
  l_end_id   NUMBER;
  l_any_rows BOOLEAN;
  l_try      NUMBER;
  l_status   NUMBER;
begin
  DBMS_OUTPUT.PUT_LINE('start : '||to_char(sysdate,'hh24:mi:ss'));

  begin
    DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
  exception when others then null;
  end;

  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');

  --We create the chunks
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(
    task_name => 'TASK_NAME',
    sql_stmt  => 'SELECT NUM_COL,NUM_COL+10 FROM TEST_TAB WHERE ROWNUM < 10001',
    by_rowid  => false);

  SELECT max(pkey) into v_pkey FROM XXXX.DBI_FK_PART;

  --Each chunk inserts 1000 rows and works with a different session_id
  v_sql_stmt := 'declare
    s varchar2(16000);
    vstart_id number := :start_id;
    vend_id number := :end_id;
    table_name varchar2(30);
    v_pkey number;
  begin
    EXECUTE IMMEDIATE ''SELECT max(pkey) FROM xxxx.DBI_FK_PART'' INTO v_pkey;
    for rec in 1..1000 loop
      s := ''INSERT /*TEST_INSERT_DBI_FK_PART*/ INTO xxxx.DBI_FK_PART
        ( pkey, boid, metabo, lastupdate, processid, rowcomment, created, createduser,
          replaced, replaceduser, archivetag, mdbid, itsforecast, betrag,
          itsopdetherkunft, itsopdethkerstprm, itsfckomppreisseq, clsfckomppreisseq,
          issummandendpreis, partitiontag, partitiondomain, fcvprodkomppkey,
          fckvprdankomppkey, session_id )
        VALUES
        ( 1 +'||v_pkey||' ,
          ''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
          -695, sysdate, ''''B.3142'''' , NULL, SYSDATE,
          ''''xxxx_DBITEST'''' , SYSDATE, NULL, NULL, NULL,
          ''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' , 0,
          ''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' , NULL,
          ''''5849f308-215b-486b-95bd-cbd7afe8440H'''', -251, 0, 201905, ''''E'''',
          :start_id, :end_id,
          SYS_CONTEXT(''''USERENV'''',''''SESSIONID'''') )'';
      execute immediate s using vstart_id, vend_id;
      commit;
    end loop;
  end;';

  dbms_output.put_line(v_sql_stmt);

  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'TASK_NAME',
    sql_stmt       => v_sql_stmt,
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_OUTPUT.PUT_LINE('end : '||to_char(sysdate,'hh24:mi:ss'));
end;
/
Now let’s insert about 20 million rows into each table via the procedures we created above:
SQL> set timing on
SQL> set autotrace on
SQL> begin
  2    test_insert_nopart;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:06:30.34

SQL> begin
  2    test_insert_part;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:22.92

SQL> select max(pkey) from xxxx.dbi_fk_nopart;

 MAX(PKEY)
----------
9980650809

SQL> select 9980650809 - 9960649804 from dual;

9980650809-9960649804
---------------------
             20001005

SQL> select max(pkey) from xxxx.dbi_fk_part;

 MAX(PKEY)
----------
9980811483

SQL> select 9980811483 - 9950649803 from dual;

9980811483-9950649803
---------------------
             30161680
FIRST CONCLUSION:
- About 20 million rows were inserted into the non-partitioned table “DBI_FK_NOPART” in 6 min 30 s
- About 30 million rows were inserted into the partitioned table “DBI_FK_PART” in 23 s
In this test, a massive, concurrent INSERT on a huge table is dramatically faster on the partitioned table than on the non-partitioned one.
Now, let’s check the OEM graphs to understand why the INSERT is about 17 times faster into DBI_FK_PART than into DBI_FK_NOPART.
Between 03:40 PM and 03:46 PM, we can see the peak related to the INSERT into DBI_FK_NOPART.
At 03:49 PM, we can see a very small peak related to the INSERT into DBI_FK_PART.
If we focus only on the INSERT statements (1st and 4th lines), the one into DBI_FK_PART (partitioned table) waits less on CPU (green) and concurrency (purple) than the INSERT into DBI_FK_NOPART (non-partitioned table), where I/O is the dominant wait event.
Let’s look in more detail at which events the database waits on for both INSERTs:
For INSERT into DBI_FK_NOPART:
And if we click on the Concurrency event:
For INSERT into DBI_FK_PART:
If we click on the Concurrency event:
SECOND CONCLUSION
The event “db file sequential read” indicates that the difference in response time between the two tables seems due to the type of index created on each table (global partitioned indexes on the partitioned table vs. normal indexes on the non-partitioned table).
As it is possible to create a global partitioned index on a non-partitioned table, another interesting test (not done in this post) would be to replace the normal indexes with global partitioned indexes on the non-partitioned table and check whether the response time improves.
To conclude, if the Partitioning license is available, then in terms of performance we should always consider partitioning huge tables that are heavily accessed for reads (SELECT) or writes (INSERT).
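Such a follow-up test could start from DDL like the following (a sketch only; the index name, indexed column, and partition count are assumptions for illustration):

```sql
-- A global hash-partitioned index can also be created on a NON-partitioned table,
-- which would isolate the index type as the variable in the comparison
CREATE INDEX idx_dbi_fk_nopart_pkey_g
  ON xxxx.DBI_FK_NOPART (pkey)
  GLOBAL PARTITION BY HASH (pkey) PARTITIONS 32;
```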
The post Oracle Partition and Performance of massive/concurrent Inserts appeared first on Blog dbi services.