Now, that’s a really catchy title, isn’t it 🙂? Datapump is a great tool that Oracle introduced with version 10g of its flagship database product. It came with tons of features and was a very big enhancement (functionality-wise) over its older cousin, Export/Import.
Many times a new tool or feature is not even considered, simply because it might break something that is already working fine. This follows the golden rule of “why fix something that isn’t broken”! The fear that a change, or a new tool, might break things keeps many DBAs from even looking at new functionality that is almost always better than what they already have. They are right in one respect: if the new tool requires them to recode or restructure their existing (and working) setups, it is more of a problem and extra work than a help. So what’s the conclusion? Should one avoid new tools and features simply out of fear? And even when one decides to take a chance on a new tool, what about the pain of redesigning the existing process, and the incompatibility of the tool’s new options with the old ones? Sounds like a chicken-and-egg problem!
The same kind of issue arose when Datapump was introduced. It brought lots of new features, but also many challenges: understanding its new options, understanding that there are no more client-side files, that absolute paths are no longer possible for the dump file location, and many more. In addition to all this, what about the numerous cron jobs running on the system that call the export (or import) binaries and work without any issues at all? These things proved to be a show-stopper for many, even though Datapump is a ground-breaking tool and miles ahead of traditional export and import. So what’s the answer, what’s the solution to such issues? The answer is 11.2’s Legacy Mode of Datapump!
Legacy mode of Datapump>>
Legacy mode of Datapump was introduced in 11g Release 2. As the name implies, this mode is meant for those who are willing to try Datapump but are not ready to break their existing scripts, and who are not going to completely change the options they use. Legacy mode lets a DBA run Datapump with the options of traditional export, whether on the command line or in a script, so they can use Datapump without having to modify their scripts at all!
Datapump starts legacy mode when it detects one or more parameters that are specific to traditional export or import. Each such parameter must have an equivalent Datapump option available, perhaps not with the same name but with the same functionality. If it does, Datapump continues to run without throwing an error about the option(s) used and switches into legacy mode. The parameter mapping is done automatically by Datapump itself, with no intervention or modification needed from the DBA or the user running it with traditional export/import options. The output log of the Datapump operation shows that legacy mode was started and for which parameter, and also shows the converted, new parameter. This tells you exactly which option should have been used in place of your traditional one, so if time permits, you can later modify your existing script to use the new parameter.
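The detect-and-remap flow described above can be pictured as a simple lookup table. Below is a minimal Python sketch of the idea; the mapping entries are illustrative examples taken from this article, not Oracle’s internal implementation:

```python
# Illustrative sketch of legacy-mode parameter remapping.
# The mapping table is a small, hypothetical subset; the real
# translation is performed internally by expdp/impdp.
LEGACY_MAP = {
    # old boolean switch becomes the new list-valued parameter
    "transport_tablespace": lambda v, args: (
        {"transport_tablespaces": args.get("tablespaces", "")}
        if v.lower() in ("y", "true") else {}
    ),
    # ROWS=N becomes CONTENT=METADATA_ONLY
    "rows": lambda v, args: (
        {"content": "metadata_only"} if v.lower() in ("n", "false") else {}
    ),
}

def remap(args):
    """Return (new_args, legacy_hits) for a dict of command-line parameters."""
    new_args, hits = {}, []
    for name, value in args.items():
        if name in LEGACY_MAP:
            replacement = LEGACY_MAP[name](value, args)
            hits.append((f"{name}={value}", replacement))  # what the log reports
            new_args.update(replacement)
        else:
            new_args[name] = value                          # passed through as-is
    return new_args, hits
```

Anything that triggers an entry in the table is reported in the log (“Legacy Mode Parameter: … Replaced with: …”), exactly as the demos below show.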
Here is an example demonstrating how Datapump picks legacy mode when we attempt a tablespace transport using options unique to traditional export.
expdp aman/aman directory=dptest dumpfile=tbs_ts_exp.dmp transport_tablespace=y tablespaces=tbs1
Export: Release 11.2.0.1.0 - Production on Thu Jun 10 01:44: 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "transport_tablespace=TRUE" Location: Command Line, Replaced with: "transport_tablespaces=tbs1"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "AMAN"."SYS_EXPORT_TRANSPORTABLE_01": aman/******** directory=dptest dumpfile=tbs_ts_exp.dmp tablespaces=tbs1 reuse_dumpfiles=true
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "AMAN"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for AMAN.SYS_EXPORT_TRANSPORTABLE_01 is:
D:\DATAPUMP_TEST\TBS_TS_EXP.DMP
******************************************************************************
Datafiles required for transportable tablespace TBS1:
D:\ORACLE\ORADATA\ORCL\TBS1.DBF
Job "AMAN"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 01:45:54
You can see in this very simple demonstration that Oracle chose to go into legacy mode because the parameter we used for the tablespace transport belonged to traditional export. In Datapump the parameter is TRANSPORT_TABLESPACES, but in traditional export it is just TRANSPORT_TABLESPACE. Because both parameters imply the same functionality, no error was reported and the export ran successfully. Oracle also told us the new, correct parameter that should have been chosen in the first place when using Datapump export, so the calling script can be modified at a future point.
Parameter mapping in legacy mode>>
How a parameter is mapped in legacy mode depends on how it is used. If the old parameter (as above) has the same functionality under another name in Datapump, it is converted automatically and the new option is shown in the output (as above). This requires that the option used has a matching option in Datapump. The parameters need not share exactly the same name in traditional export and Datapump, but they must work in the same manner. Another example is GRANTS=N, an option of traditional export. If this is used with Datapump, it is automatically converted to EXCLUDE=GRANT, a Datapump-specific parameter! So you don’t need to make many changes if all you want is for your script to run with the Datapump binary and do just the same. We can see it in action in the demo below,
expdp aman/aman tables=scott.emp rows=N directory=dptest dumpfile=sample.dmp
Export: Release 11.2.0.1.0 - Production on Thu Jun 10 18:39:09 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "rows=FALSE" Location: Command Line, Replaced with: "content=metadata_only"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "AMAN"."SYS_EXPORT_TABLE_01": aman/******** tables=scott.emp content=metadata_only directory=dptest dumpfile=sample.dmp
reuse_dumpfiles=true
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Master table "AMAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
In the above demo, you can see that when we used the switch ROWS=N, which is essentially a traditional-export-only option, Datapump automatically mapped it to CONTENT=METADATA_ONLY.
If the parameter used matches a Datapump parameter exactly, no conversion is done. For example, FILESIZE defines the size of the dump file in both traditional export and Datapump, so no conversion is needed or performed. There can also be parameters that are similar but more capable in Datapump: QUERY is present in both traditional export and Datapump, but is handled better in Datapump than in the old export.
If the functionality of an old parameter is built right into Datapump, the parameter is simply ignored. For example, the BUFFER parameter is not considered by Datapump because Datapump’s default mode is direct path, so the parameter is not needed at all. Similarly, parameters such as CHARSET and COMMIT are ignored if used.
If the parameter used is not available in Datapump at all, the operation fails. For example, COMPRESS is a traditional export parameter that is not available in Datapump, so using it leads to an error. Mixing traditional export/import parameters with Datapump parameters in the same command also leads to an error.
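The four outcomes described above (remapped, used unchanged, ignored, error) can be summarised in a small sketch. The parameter lists here are just the examples mentioned in the text, not the exhaustive set from the Oracle documentation:

```python
# Illustrative classification of traditional exp/imp parameters in
# Data Pump legacy mode. The sets below are examples from this article
# only; the full lists live in the Oracle Utilities guide. This sketch
# assumes the input is a legacy-only parameter name.
REMAPPED  = {"rows", "transport_tablespace", "grants"}  # translated to a new name
UNCHANGED = {"filesize", "query"}                       # same name, used as-is
IGNORED   = {"buffer", "charset", "commit"}             # functionality built in

def classify(param):
    """Return how legacy mode treats a traditional export/import parameter."""
    p = param.lower()
    if p in REMAPPED:
        return "remapped"
    if p in UNCHANGED:
        return "unchanged"
    if p in IGNORED:
        return "ignored"
    return "error"  # legacy-only parameter with no Datapump equivalent
```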
Dumpfile location handling>>
The dump file location is one area where traditional export and Datapump really differ. In traditional export the dump file is created on the client itself, and there is complete freedom to use an absolute path to save it at the preferred destination; in Datapump this is not the case at all. Datapump uses a server-side directory object to store the dump files, and there is no option to use an absolute path for them. This distinct behavior makes the two binaries completely different. In legacy mode, when the FILE and LOG parameters are used to specify the dump and log file paths, Datapump tries a couple of options before throwing an error.
If an absolute path is specified for the LOG or FILE parameter, Datapump searches for a directory object, accessible to the schema, whose path matches it. If one is found, the operation succeeds; if not, it fails.
This can be seen from the following demo, where we use an absolute path and file name for the Datapump dump file. Note that Datapump maps it to a directory object whose permissions are granted to PUBLIC, i.e. to all the schemas of the database.
expdp hr/hr file=e:\expdpdir\schema.dmp tables=employees
Export: Release 11.2.0.1.0 - Production on Sat Jun 12 14:00:00 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "file=e:\expdpdir\schema.dmp" Location: Command Line, Replaced with: "dumpfile=EXPDIR:schema.dmp"
Database Directory Object "EXPDIR" has been added to file specification: "e:\expdpdir\schema.dmp".
Legacy Mode has set reuse_dumpfiles=true parameter.
Legacy Mode has set nologfile=true parameter.
Starting "HR"."SYS_EXPORT_TABLE_01": hr/******** dumpfile=EXPDIR:schema.dmp tables=employees reuse_dumpfiles=true nologfile=true
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."EMPLOYEES" 16.81 KB 107 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
E:\EXPDPDIR\SCHEMA.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 14:00:29
This won’t work if no directory object mapped to the path given in the FILE parameter is accessible to the user initiating Datapump. In the above case the directory object’s path and the path given in the command were the same, which we can verify from the DBA_DIRECTORIES view:
sys@orcl112> select * from dba_directories where directory_name like '%EXPDIR%';
OWNER   DIRECTORY_NAME   DIRECTORY_PATH
------  ---------------  ---------------
SYS     EXPDIR           E:\expdpdir
Now, if we change the path and run the same command again, ideally this should not work; but because directory access is granted to PUBLIC, the wrong path doesn’t matter and the publicly accessible directory object is automatically substituted for the given dump file path. We can see it in the output below,
expdp hr/hr file=e:\expdpdir1\schema.dmp tables=employees
Export: Release 11.2.0.1.0 - Production on Sat Jun 12 14:00:59 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "file=e:\expdpdir1\schema.dmp" Location: Command Line, Replaced with: "dumpfile=EXPDIR:schema.dmp"
Database Directory Object "EXPDIR" has been added to file specification: "e:\expdpdir1\schema.dmp".
Legacy Mode has set reuse_dumpfiles=true parameter.
Legacy Mode has set nologfile=true parameter.
Starting "HR"."SYS_EXPORT_TABLE_01": hr/******** dumpfile=EXPDIR:schema.dmp tables=employees reuse_dumpfiles=true nologfile=true
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."EMPLOYEES" 16.81 KB 107 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
E:\EXPDPDIR\SCHEMA.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 14:01:21
There are two important things to notice here. First, the publicly accessible directory object EXPDIR was used automatically. Second, even though the dump file name was the same as in the previous run, REUSE_DUMPFILES was set to TRUE automatically and we didn’t receive any errors.
If the access is now revoked, so that the schema initiating the Datapump operation has no access to any directory object, the operation fails with an error saying that no accessible directory was found.
So here we have revoked the access on the directory object,
SQL> revoke all on directory expdir from public;
Revoke succeeded.
And now, let’s try to run the same operation again.
expdp hr/hr file=yyyy.dmp tables=employees
Export: Release 11.2.0.1.0 - Production on Sat Jun 12 14:07:41 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39145: directory object parameter must be specified and non-null
If no path is specified with the file name but a directory object is specified in the command, Datapump uses that directory object to store the dump and log files for the operation. Let’s try this by first creating a separate directory object,
sys@orcl112> create directory expdir1 as 'e:\expdpdir\dir1';
Directory created.
sys@orcl112> grant read, write on directory expdir1 to hr;
Grant succeeded.
And now, if we explicitly mention this directory without specifying any path with the dump file, this directory object will be used.
expdp hr/hr file=yyyy.dmp directory=expdir1 tables=employees
Export: Release 11.2.0.1.0 - Production on Sat Jun 12 14:49:08 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "file=yyyy.dmp" Location: Command Line, Replaced with: "dumpfile=yyyy.dmp"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "HR"."SYS_EXPORT_TABLE_01": hr/******** dumpfile=yyyy.dmp directory=expdir1 tables=employees reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."EMPLOYEES" 16.81 KB 107 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
E:\EXPDPDIR\DIR1\YYYY.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 14:49:30
So we can see that this time our file went to the path of the directory EXPDIR1. But if we mention an absolute path in the FILE parameter and still mention a directory object, the directory object is ignored and Oracle uses the server-side directory object whose path matches the path specified in the FILE parameter.
If no path is specified with the FILE parameter and no directory object is specified either, Datapump looks for a directory object, available to or accessible by the calling schema, with the naming convention DATA_PUMP_DIR_schemaname. For example, if HR is the user initiating a Datapump job, the directory name should be DATA_PUMP_DIR_HR. If that is not there, the environment variable DATA_PUMP_DIR on the local client is checked for the name of an accessible server-side directory object. Finally, if the schema has access to the default DATA_PUMP_DIR directory object, or is a DBA schema, the default directory object is used. This directory object’s path is mapped at the time the software is installed. For example, on a Windows system running an 11.2 database, the path was this,
sys@orcl112> select * from dba_directories where directory_name like '%PUMP_DIR%';
OWNER   DIRECTORY_NAME   DIRECTORY_PATH
------  ---------------  ---------------------------------------
SYS     DATA_PUMP_DIR    D:\app\aristadba\admin\orcl112\dpdump\
This is the dpdump folder under the Oracle base installation folder.
If none of these methods work, the operation fails.
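Putting the fallbacks together, the overall resolution order for a legacy FILE parameter can be sketched as follows. The function, its arguments, and the dict standing in for the schema’s accessible directory objects are all illustrative, not Oracle internals:

```python
import os
import ntpath  # Windows-style paths, as in the demos above

def resolve_dump_location(file_spec, directory_param, accessible_dirs, schema):
    """Illustrative resolution order for the legacy FILE parameter.

    accessible_dirs: dict of directory-object name -> server path, a
    hypothetical stand-in for the directory objects the schema can access.
    Returns (directory_object, filename) or raises on failure.
    """
    head, tail = ntpath.split(file_spec)
    # 1. FILE carries an absolute path: find a directory object with that path.
    if head:
        for name, path in accessible_dirs.items():
            if ntpath.normcase(path) == ntpath.normcase(head):
                return name, tail
        raise ValueError("no directory object matches the given path")
    # 2. No path, but DIRECTORY= was given: use it.
    if directory_param:
        return directory_param.upper(), tail
    # 3. Schema-specific default object, e.g. DATA_PUMP_DIR_HR.
    candidate = f"DATA_PUMP_DIR_{schema.upper()}"
    if candidate in accessible_dirs:
        return candidate, tail
    # 4. Client environment variable DATA_PUMP_DIR naming a directory object.
    env = os.environ.get("DATA_PUMP_DIR", "").upper()
    if env in accessible_dirs:
        return env, tail
    # 5. The installation default DATA_PUMP_DIR (if accessible / DBA schema).
    if "DATA_PUMP_DIR" in accessible_dirs:
        return "DATA_PUMP_DIR", tail
    raise ValueError("no usable directory object found")
```

On a real system the failure case surfaces as ORA-39145, as seen in the demo above where the access had been revoked.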
Though it’s not really an internal feature at all, I didn’t find many people aware of it when I discussed 11.2’s feature list with them. I think Datapump’s legacy mode is an awesome feature. With it, there is no need to fiddle with your tested and proven scripts, yet you are still able to test Datapump, and you can surely keep the legacy alive 😉.
The download for the post is here.
References: Oracle Documentation (Utilities Guide) (11.2)
