Monday, 12 September 2011

SQL Developer Data Pump Support - Part 1

In 3.1 we're adding support for Data Pump, which has replaced exp and imp.  In SQL Developer, we've added this to the DBA navigator, which supports several new things this release.  In this post, I'm only going through Data Pump export; in part 2, we'll visit the import.  We'll revisit several of the other new DBA features in the very near future.  For export, the demonstration will show that this is pretty easy to use.  However, grab a cup of coffee, cos we've a few screenshots to go through!

Ok, so first off, we need to fire up the DBA navigator, which can be reached from the View menu.  Select DBA and you get the navigator shown on the right.  This image shows the sys user selected as the current connection.  We're going to use the barry connection for Data Pump today.

From the Data Pump node, select the menu item for the Data Pump Export Wizard.

On the first page of this wizard, we are choosing schema today.  You also have the option of exporting the entire database, a set of tablespaces, or a set of tables.  When you're ready, press Next.
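
Behind the scenes, each choice in this wizard maps onto a call in the DBMS_DATAPUMP PL/SQL package, and you'll see the generated block in full at the summary step.  As a rough, hedged sketch (the handle variable h below is my naming, not necessarily what SQL Developer generates), the mode picked here becomes the job_mode argument when the job is opened:

-- h is a NUMBER job handle declared in the generated block;
-- job_mode is one of 'FULL', 'SCHEMA', 'TABLE' or 'TABLESPACE'
h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');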

As we said earlier, we are only choosing schema today, and in the image below, we are selecting just one schema for export: the 'BARRY' schema.
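
In API terms, that schema pick becomes a metadata filter on the job; something like this call would be added to the block:

-- only objects owned by BARRY are exported
DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''BARRY'')');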

Step 3 allows you to filter what is exported by including or excluding various object types.  You can choose none of them, or a selection of each with an appropriate filter string in the Value field.  We won't add any today, and will let it take everything out of the schema.
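
If you did want to filter here, each include/exclude row corresponds to a further metadata filter on the job.  For example (a hedged illustration we're not using today), skipping indexes entirely, or only taking tables whose names start with MD_:

-- leave indexes out of the export
DBMS_DATAPUMP.METADATA_FILTER(h, 'EXCLUDE_PATH_EXPR', 'IN (''INDEX'')');
-- or only take tables matching a name expression
DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR', 'LIKE ''MD_%''', 'TABLE');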

In Step 4, the Data tab, we can add data filters to all the tables, or to individual tables as required.  Again, for this demonstration, I'm not going to add any and will let Data Pump export everything.
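
Had we added one, a data filter maps to DBMS_DATAPUMP.DATA_FILTER.  For instance, a hedged sketch that would export only recent rows from one table (the created column here is invented purely for illustration):

-- rows from BARRY.MD_DERIVATIVES are restricted by the subquery's WHERE clause
DBMS_DATAPUMP.DATA_FILTER(handle => h, name => 'SUBQUERY',
                          value => 'WHERE created > SYSDATE - 30',
                          table_name => 'MD_DERIVATIVES',
                          schema_name => 'BARRY');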

Step 5 allows us to specify the options for the export.  Our primary interest is the directory used for logging.
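
That log setting becomes an ADD_FILE call with the log file type (the file name below is just this demo's guess; SQL Developer generates its own):

-- send the job's log file to our chosen directory object
DBMS_DATAPUMP.ADD_FILE(h, 'EXPORT_JOB_SQLDEV_327.LOG', 'BARRYS_DPUMP_DIR',
                       filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);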

Step 6 focuses on our output directory.  Specify the one you want to use here.  Remember, as in Step 5, this directory must exist, and your user must have the privileges to write to it.  Check the main Connections navigator for directories.  By default there is a DATA_PUMP_DIR set up, but make sure the directory it points to actually exists.  We've created a directory called 'BARRYS_DPUMP_DIR' for this example.
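
If you need to create one, it's plain SQL from a suitably privileged user (the filesystem path below is just an example):

-- map a directory object to a path on the database server
CREATE OR REPLACE DIRECTORY BARRYS_DPUMP_DIR AS '/u01/app/oracle/dpump';
-- and let BARRY read and write through it
GRANT READ, WRITE ON DIRECTORY BARRYS_DPUMP_DIR TO BARRY;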

Step 7 specifies the schedule on which the job will dump the data for you.  We're choosing 'Immediately' as our option, but you can have this job run whenever you like, repeatedly if required.
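
If you pick anything other than immediately, the job is handed to the database scheduler.  Only as a hedged sketch (the job name, interval, and wrapped procedure here are all made up), a nightly run might be set up like this:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'BARRY_DPUMP_NIGHTLY',            -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN barry_dpump_export; END;', -- stands in for the generated export block
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => TRUE);
END;
/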

Lastly, we have the summary screen, which is split into two parts.  The main summary panel shows what actions you are taking and how they will be carried out.  The second panel shows the actual PL/SQL which will be run on your behalf to create the Data Pump job and dump the data.
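
I won't reproduce the generated code verbatim here, but assembling the fragments from the earlier steps, a hedged approximation of that PL/SQL looks like this (file names are illustrative):

DECLARE
  h NUMBER;  -- job handle
BEGIN
  -- create a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                          job_name  => 'EXPORT_JOB_SQLDEV_327');
  -- dump file and log file go to our directory object
  DBMS_DATAPUMP.ADD_FILE(h, 'EXPORT_JOB_SQLDEV_327.DMP', 'BARRYS_DPUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(h, 'EXPORT_JOB_SQLDEV_327.LOG', 'BARRYS_DPUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- export only the BARRY schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''BARRY'')');
  -- start the job and detach; it carries on server-side
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/
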
When you click Finish and the job submission is successful, you'll get a new job listed in the DBA navigator under Data Pump Export Jobs.
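
You can also keep an eye on the job from SQL with a standard dictionary view:

-- your Data Pump jobs and their current state
SELECT job_name, operation, job_mode, state
  FROM user_datapump_jobs;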

If it doesn't appear, it's highly likely you'll get this error, usually caused by the directory you are writing the files or the logs to.  This is easily fixed by going back and making sure the directories you have chosen exist, and that you have access to read from and write to them.
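
A quick way to check what's available to you is the data dictionary:

-- the directory objects you can see, and where they point
SELECT directory_name, directory_path FROM all_directories;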

Finally, you can go to your database directory and see your exported file, together with the log file.

Your log file will contain something like this if you're successful (I've cut a lot out of it, as it is long):

Starting "BARRY"."EXPORT_JOB_SQLDEV_327":  
Estimate in progress using BLOCKS method...
.  estimated "BARRY"."MD_ADDITIONAL_PROPERTIES"              3 MB
.  estimated "BARRY"."STAGE_TERADATA_TABLETEXT"          2.062 MB
.  estimated "BARRY"."MD_DERIVATIVES"                        2 MB
.  estimated "BARRY"."MD_FILE_ARTIFACTS"                     2 MB
.  estimated "BARRY"."文化大革命"                                 0 KB
Total estimation using BLOCKS method: 17.75 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
. . exported "BARRY"."MD_ADDITIONAL_PROPERTIES"          13.54 KB      59 rows
. . exported "BARRY"."STAGE_TERADATA_TABLETEXT"          1.197 MB      25 rows
. . exported "BARRY"."MD_DERIVATIVES"                    49.60 KB     386 rows
. . exported "BARRY"."文化大革命"                                 0 KB       0 rows
Master table "BARRY"."EXPORT_JOB_SQLDEV_327" successfully loaded/unloaded
Dump file set for BARRY.EXPORT_JOB_SQLDEV_327 is:
Job "BARRY"."EXPORT_JOB_SQLDEV_327" successfully completed at 15:42:52

So, for today, that's exporting from the Oracle database using the Data Pump support built into Oracle SQL Developer 3.1, which will be available soon!  We'll have part 2, on importing this dump file, next.

