Below are the differences between traditional exp/imp and Data Pump:
- Data Pump operates on a group of files called a dump file set, whereas traditional export operates on a single file.
- Data Pump accesses files on the server, using Oracle directory objects (created as shown in the sketch after this list). Traditional export can access files on both the client and the server, without using Oracle directories.
- exp/imp represents database metadata as DDL statements in the dump file, whereas Data Pump represents it in XML document format.
- Data Pump uses parallel execution rather than a single stream, for improved performance; exp/imp is single-stream only.
- Data Pump does not support sequential media such as tapes, but traditional export does.
- Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.
- In Data Pump, we can stop and restart jobs.
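For example, here is how a directory object is created and how a running export job can be stopped and then resumed by attaching to it (a minimal sketch; the path, the scott/tiger credentials, and the default job name are illustrative):

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;

$ expdp scott/tiger schemas=scott directory=dp_dir dumpfile=scott.dmp logfile=scott_exp.log
(press Ctrl+C while the job runs to reach the interactive prompt)
Export> STOP_JOB=IMMEDIATE

$ expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01
Export> START_JOB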
Why is expdp faster than exp (or) why is Data Pump faster than conventional export/import?
- Data Pump works in block mode; exp works in byte mode.
- Data Pump performs parallel execution.
- Data Pump uses the direct path API.
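For instance, a parallel schema export might look like this (a sketch; dp_dir, the credentials, and the degree of 4 are assumptions; the %U wildcard generates numbered dump files so each worker can write its own):

$ expdp system/manager schemas=scott directory=dp_dir dumpfile=scott_%U.dmp parallel=4 logfile=scott_par.log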
In Data Pump, where is the job information stored (or) if you restart a Data Pump job, how does it know where to resume?
Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME in the schema of the user running the job; the table is dropped once the job completes. From this table, Oracle determines how much of the job has completed and where to continue.
- The default export job name is SYS_EXPORT_XXXX_01, where XXXX can be FULL, SCHEMA, or TABLE.
- The default import job name is SYS_IMPORT_XXXX_01, where XXXX can be FULL, SCHEMA, or TABLE.
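Running jobs and their master tables can be checked from the data dictionary, for example:

SQL> SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;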
- Data Pump gives a 15-50% performance improvement over exp/imp.
- Export and import can be performed over the network using database links, even without generating a dump file, via the NETWORK_LINK parameter (see the example after the table below).
- The CONTENT parameter gives the freedom to choose what to export, with the options METADATA_ONLY, DATA_ONLY, and ALL (the default).
- A few parameter names changed in Data Pump, which often causes confusion with the normal exp/imp parameters:
SLNO | EXP/IMP Parameter     | EXPDP/IMPDP Parameter
1    | owner                 | schemas
2    | file                  | dumpfile
3    | log                   | logfile/nologfile
4    | IMP: fromuser, touser | IMPDP: remap_schema
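To illustrate the mapping above together with NETWORK_LINK and CONTENT (a sketch; the credentials, the dp_dir directory object, and the source_db database link are assumptions):

A traditional export of one schema:
$ exp system/manager owner=scott file=scott.dmp log=scott.log

The equivalent Data Pump export, metadata only:
$ expdp system/manager schemas=scott directory=dp_dir dumpfile=scott.dmp logfile=scott.log content=metadata_only

An import pulled straight over a database link, with no intermediate dump file:
$ impdp system/manager network_link=source_db schemas=scott remap_schema=scott:scott_copy directory=dp_dir logfile=net_imp.log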
How can you check whether a dump file was taken by exp or expdp?
For conventional export, the logfile ends with “Export terminated”:
$ tail -1 exp_user1.log
Export terminated successfully without warnings.
For Data Pump, the logfile ends with “Job”:
$ tail -1 expdp_user1.log
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed.
The simplest way would be to just run imp against the file: if it throws an error, the dump was created by expdp, because a dump from expdp cannot be used with imp and vice versa.
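Another quick check that avoids running an import (a heuristic based on typical dump file headers, not documented behavior): conventional exp dump files usually begin with the string EXPORT:V followed by the version, while Data Pump dump files carry the master table name (e.g. SYS_EXPORT_SCHEMA_01) near the top, which you can inspect with:

$ strings user1.dmp | head -3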