Wednesday, 7 December 2011

Using the Oracle Developer Days VirtualBox Image

As many of you know, @krisrice put together a great VM for the Oracle Developer Days, and while the content is great for education, I find us using it more and more as a default scratch database on laptops.  As usual, being 'networkly challenged', it took me some time to figure out which network adapters did what and why.  This is as much a note for me as it is to share with you :).

First of all, you need to install VirtualBox, which is found here.  Then download one of the prebuilt VirtualBox images for the Developer Days; you can choose yours here.  Download the ova file and then import the VM into VirtualBox.

To import, choose File > Import Appliance in VirtualBox and click Choose to select the directory where you want the VM to live, then select your ova file.  In this example, I'm using @krisrice's Oracle Developer Days VM. Once you click Finish and agree to the licenses, you'll see the 'Oracle Developer Days' VM in VirtualBox in a powered-off state.

Great.  We're ready to fire up the box and play inside it.  All passwords for this VM are 'oracle', so you can't go wrong. The database is all set up automatically, along with the following:

  • Oracle Linux 5
  • Oracle Database 11g Release 2 Enterprise Edition
  • Oracle TimesTen In-Memory Database Cache
  • Oracle XML DB
  • Oracle SQL Developer
  • Oracle SQL Developer Data Modeler
  • Oracle Application Express
  • Oracle JDeveloper
  • Hands-On-Labs (accessed via the Toolbar Menu in Firefox)

So now, log in to the VM with oracle/oracle and a terminal window will show you what's available, along with the network profiles you set up in the VM settings.  The network settings I set up for this are shown here.
These two adapters do separate things.  The bridged adapter will assign an IP address from the wireless NIC.  This is set up by default when you install the VM and allows you to access the internet from the VM.  The NAT adapter will allow you to access the VM from the host machine when you have no physical NIC or internet available.  This happened to me yesterday when presenting some SQL Developer functionality.  By enabling the NAT adapter and setting up a few port forwarding rules, we can ssh into the VM and connect SQL Developer to the normal LISTENER port.
The NAT adapter looks like this.

and setting up two port forwarding rules gives us this.
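The same two rules can also be added from the command line with VBoxManage; here's a minimal sketch, assuming the VM is named "Oracle Developer Days", the NAT adapter is adapter 1, and the VM is powered off.

```shell
# Forward host port 2222 to the guest's sshd on port 22,
# and host port 1521 to the guest's database listener on 1521.
VBoxManage modifyvm "Oracle Developer Days" --natpf1 "ssh,tcp,,2222,,22"
VBoxManage modifyvm "Oracle Developer Days" --natpf1 "listener,tcp,,1521,,1521"
```

The empty host-IP and guest-IP fields mean "bind to all host interfaces" and "the guest's NAT address" respectively.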
When you login to the VM, you'll see this on the terminal window.
This setup will allow you to spark up Firefox in the VM and connect to the internet using the local network IP.  Now, if we switch off the Airport and disable the bridged adapter, we should still be able to connect to the VM from outside.  Restarting the VM gives us this terminal window on login to the oracle account.
Now we have no external IP, but we have our port forwarding rules set up to get access to the VM from outside.  We have two rules, one of which maps anything on port 2222 on the host to port 22 on the guest.  This means we can ssh into the VM on port 2222 on the host. So, connecting with this
and we get the login terminal from the VM
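For reference, the ssh command that goes with that forwarding rule is just this sketch, assuming the guest's oracle account and connecting through the host's loopback address.

```shell
# Host port 2222 is forwarded to port 22 on the guest,
# so ssh to localhost:2222 lands inside the VM.
ssh -p 2222 oracle@localhost
```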
Brilliant.  Now, let's see SQL Developer connect to the VM too.  We set up the connection like a connection to XE on localhost, except the SID on the VM is orcl.
and we can look at the database version report which shows us what database we are connected to
And now you're connected and good to go.  This is great for doing demos in a canned environment, especially when you don't have access to an internet connection while showing something off.

Monday, 5 December 2011

Explain Plan, Autotrace and Diff

A SQL statement can be executed in many different ways, such as full table scans, index scans, nested loops, and hash joins. The query optimizer determines the most efficient way to execute a SQL statement after considering many factors related to the objects referenced and the conditions specified in the query. This determination is an important step in the processing of any SQL statement and can greatly affect execution time.

The EXPLAIN PLAN results let you determine whether the optimizer selects a particular execution plan, such as a nested loops join. They also help you understand the optimizer's decisions, such as why it chose a nested loops join instead of a hash join, and let you understand the performance of a query.

You can examine the execution plan chosen by the optimizer for a SQL statement by using the EXPLAIN PLAN button on the worksheet. When the statement is issued, the optimizer chooses an execution plan and then inserts data describing the plan into a database table. SQL Developer reads that table and displays the tree of operations.  Looking at the example, we can see the query is doing a Cartesian product.
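You can do the same thing by hand outside SQL Developer; here's a minimal sketch, assuming the classic SCOTT demo tables, of a query that produces a Cartesian product like the one in the example.

```sql
-- No join condition between emp and dept, so the plan
-- will show a Cartesian product between the two tables.
explain plan for
  select e.ename, d.dname
    from emp e, dept d;

-- Read the plan back out of the plan table.
select * from table(dbms_xplan.display);
```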

Autotrace behaves similarly; in SQL Developer, it is the button beside Explain Plan. And you can diff two plans to see the difference between them, as you can see below.

However, even though we can show you what a plan looks like, the plan on its own cannot differentiate between well-tuned statements and those that perform poorly.

For example, an EXPLAIN PLAN output that shows that a statement uses an index does not necessarily mean that the statement runs efficiently. Sometimes indexes can be extremely inefficient. In this case, you should examine the following:
  • The columns of the index being used
  • Their selectivity (fraction of table being accessed)
It is best to use EXPLAIN PLAN to determine an access plan, and then later prove that it is the optimal plan through testing. When evaluating a plan, examine the statement's actual resource consumption.

And from the doc!: 

In addition to running the EXPLAIN PLAN command and displaying the plan, you can use the V$SQL_PLAN views to display the execution plan of a SQL statement. After the statement has executed, you can display the plan by querying the V$SQL_PLAN view. V$SQL_PLAN contains the execution plan for every statement stored in the cursor cache. 

The V$SQL_PLAN_STATISTICS view provides the actual execution statistics for every operation in the plan, such as the number of output rows and elapsed time. All statistics, except the number of output rows, are cumulative. For example, the statistics for a join operation also include the statistics for its two inputs. The statistics in V$SQL_PLAN_STATISTICS are available for cursors that have been compiled with the STATISTICS_LEVEL initialization parameter set to ALL.

The V$SQL_PLAN_STATISTICS_ALL view enables side-by-side comparisons of the optimizer's estimates for the number of rows and elapsed time with the actual statistics. This view combines both V$SQL_PLAN and V$SQL_PLAN_STATISTICS information for every cursor.
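A quick sketch of that side-by-side comparison; &sql_id here is whichever cached cursor you're interested in, and the cursor must have been compiled with STATISTICS_LEVEL set to ALL for the actuals to be populated.

```sql
-- Estimated vs actual rows for each step of a cached plan.
select id, operation, options, object_name,
       cardinality      as estimated_rows,
       last_output_rows as actual_rows
  from v$sql_plan_statistics_all
 where sql_id = '&sql_id'
 order by id;
```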

Anyway, don't take my word for all this; try it in SQL Developer today, and you can follow Tom Kyte's process for tuning SQL statements.

Monday, 28 November 2011

UKOUG 2011

I'm speaking next week at the UK Oracle User Group conference at the ICC in Birmingham. The topic is one I've posted about several times: Tuning, Refactoring and Instrumentation.   Have a look at the agenda, and if you are in town, come along.  You can click on the image to go to the conference site and check out the agenda.

Friday, 18 November 2011

login.sql, SQLPROMPT and worksheets

SQL Developer has had support for a login.sql for several releases now.  You can set this in the preferences at

Tools -> Preferences -> Database

You can set your login.sql here.

Now, when SQL*Plus starts up, it looks for a global login script called glogin.sql in the $ORACLE_HOME/sqlplus/admin directory. If found, this script will be executed.
Thereafter, SQL*Plus will try to find a local login script called login.sql in the directory you start sqlplus from, or alternatively in the directories listed in the SQLPATH environment variable. When found, sqlplus will execute it.  Here's my login.sql for SQL*Plus:
define gname=idle
column global_name new_value gname
select lower(user) || '@' || substr( global_name, 1, decode( dot, 0, length(global_name), dot-1) ) global_name
  from (select global_name, instr(global_name,'.') dot from global_name );
set sqlprompt '&gname> '

and when I log in to sqlplus, I get this.
SQL*Plus: Release Beta on Mon Nov 21 11:05:58 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release - Beta




Obviously, in SQL Developer this won't mean anything, as the prompt is not shown; however, the variables you set and the column formats, titles, pagesizes etc. will be preserved.

For example, in the login2.sql file listed in the preferences above, we set a couple of column settings and, for fun, let's set the prompt variable too.

As shown above, we now have a login.sql defined in the preferences. When we make a connection, the login.sql will be run and any settings will be applied to the session.  We will also hold onto any SQL*Plus variables defined so they can be used in any worksheet that is started using this connection.

When you connect, you get the new worksheet, and any output from the SQL that was run goes to the messages log.  The reason for this is that you can connect to the database without spawning a worksheet, and this lets you see any output from the login script we put together earlier.

Now that we are connected, any worksheet created on that connection will have the context of the original script.

Lastly, you can also set your worksheet name to a substitution variable:

set worksheetname &gname

which will swap your default worksheet name to your connection credentials shown above.

Tuesday, 15 November 2011

Substitution Variables in SQL*Plus

Working through security issues uncovers some interesting things. Anyone who has developed scripts for building out schemas for an application will have had the issues of passing variables to subscripts, or of managing password visibility when creating users, building objects or granting permissions.

SQL Developer and SQL*Plus have substitution variables to solve this problem.  Basically, there are two types of substitution variables, & and &&, and both refer to a variable by name, e.g., &foo refers to the variable foo.  The main difference between the two is that the first time SQL*Plus comes across a variable referenced with &&, it prompts for the value and then uses that value for every other occurrence of the variable. A variable referenced with & will be prompted for, used and then discarded, so the next time it is seen, you will be prompted again. So, now an example.  Let's assume we want to create two users with some tables in each. We can define a simple script for each one.

define db1_password=&&db1
define db2_password=&&db2

Now, this script does three things.
  1. it defines two variables for the passwords of the two users we are going to create.
  2. @@users.sql sets up the users for us. Remember, in SQL*Plus, there is only ever one connection active, so whenever you have a connect statement in your script, that will be the user running the script until you change it. In this case above, we're starting with a privileged user to create the users.
  3. @@db1.sql and @@db2.sql create our users for us and we will create a table in each one. We'll see that script in a moment.

drop user user1 cascade;
prompt creating user user1
create user user1 identified by &&db1_password;
grant connect, resource to user1;
drop user user2 cascade;
prompt creating user user2
create user user2 identified by &&db2_password;
grant connect, resource to user2;

db1.sql and db2.sql are identical apart from user names.

connect user1/&&db1_password
create table table1 (id number, name varchar2(10));
insert into table1 values (1,'barry');

Substitution variables are passed to any subscript called from the main script.  The initial &&db1 and &&db2 are prompted for and set to the variables db1_password and db2_password.
From then on, the subscripts use the variables to connect to each user.

Finally, the output from all of this is below.  I have VERIFY set to ON here to show the substitutions going through, but for any real-world scenario you'll want to switch that off to avoid printing the passwords.

user USER1 dropped.
creating user user1
old:create user user1 identified by &&db1_password
new:create user user1 identified by db1
user USER1 created.
grant succeeded.
user USER2 dropped.
creating user user2
old:create user user2 identified by &&db2_password
new:create user user2 identified by db2
user USER2 created.
grant succeeded.
old:connect user1/&&db1_password
new:connect user1/db1
table TABLE1 created.
1 rows inserted.
old:connect user2/&&db2_password
new:connect user2/db2
table TABLE2 created.
1 rows inserted.
Connection created by CONNECT script command disconnected

With verify off, you get cleaner output with no passwords.  If you need more output here, you can also use the prompt command to identify which script you are in and what is running.

user USER1 dropped.
creating user user1
user USER1 created.
grant succeeded.
user USER2 dropped.
creating user user2
user USER2 created.
grant succeeded.
table TABLE1 created.
1 rows inserted.
table TABLE2 created.
1 rows inserted.
Connection created by CONNECT script command disconnected

Monday, 14 November 2011

SQL*Plus as an external tool in SQL Developer

@thatjeffsmith asked me today about running SQL*Plus from SQL Developer for his current SQL file.  This has been shown before, but there is a simple way to add it and get it to run your file under SQL*Plus.

The only caveat in this example is that we are using the bequeath adapter to connect to a local XE database. You can amend this to add a service after the username.

The main steps are to point the program executable at your SQL*Plus, which will populate the executable and the run directory.  Next you need to populate the arguments, which for sqlplus look like this:

 sqlplus [LOGIN] @[FILENAME]

where LOGIN can be any of these:

 {username[/password][@connect_identifier] | / }

So, our arguments will be

${sqldev.dbuser}/${promptl:label=Password} @${file.dir}/${}

You can check the variables you can use with the Insert button.  I am choosing a labelled prompt for the password here for security.

Once we have all that done, we can see our external tool in the list of tools.

Now, all we have to do is make sure our file is in focus and click the SQL*Plus button.  This will run the file as the current user of the connection on the file and prompt for the password.

My file looks like this:
clear screen
select username from all_users where username like 'B%';

and running it with my new SQL*Plus button gives me this.

C:\oraclexe\app\oracle\product\11.2.0\server\bin\sqlplus.exe barry/barry "@C:\Documents and Settings\bamcgill\Desktop/Untitled1.sql"

SQL*Plus: Release Beta on Mon Nov 14 15:38:25 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release - Beta




Now, that's not so secure either, since you can see my password, so we want to switch off the banner output too, which will suppress the login string and the header.  You can do this by editing your external tool and adding the -S (silent) flag to the arguments like this.
-S ${sqldev.dbuser}/${promptl:label=Password} @${file.dir}/${}

giving you what you want here.
C:\oraclexe\app\oracle\product\11.2.0\server\bin\sqlplus.exe -S barry/barry "@C:\Documents and Settings\bamcgill\Desktop/Untitled1.sql"


Tuesday, 18 October 2011

Proxy Connections

While looking at some issues with specific connection upgrades, I've been playing with proxy connections in SQL Developer, and while they are easy to set up, they can be interesting to get your head around. There are a number of things to do which are important. So, let's start with a proxy user called proxy and a target user called target. (Nice and original.)
drop user proxy cascade;
drop user target cascade;
create user proxy identified by proxy;
create user target identified by target;
alter user target grant connect through proxy;
grant create session to proxy;
grant connect, resource to target;
connect target/target;
create table target (id number);
insert into target values (1);
connect proxy[target]/proxy;
show user
select * from target;

This set of commands, run as SYS in the worksheet, will create the two users. The proxy privilege is granted using

alter user target grant connect through proxy;

The target user is granted the resource role to create a table; in this case, we call it target and put some data in it. Next, we can connect to the target user through the proxy using

connect proxy[target]/proxy;
select * from target;

This gives us the following feedback, including the user which is actually connected.

user PROXY dropped.
user TARGET dropped.
user PROXY created.
user TARGET created.
user TARGET altered.
grant succeeded.
grant succeeded.
Connected
table TARGET created.
1 rows inserted.
Connected
USER is TARGET
ID
--
 1
Connection created by CONNECT script command disconnected

We can also set this up in SQL Developer using the connection dialog. Given the users have been created as above and the appropriate privileges have been granted, we can set this up in the dialog

Once we make the connection, we can expand the table tree and see the target table from the user we proxied into.

The data is exactly the same as the data from the Worksheet script output as well.

One thing I did forget to mention was the ability to create distinguished proxies as well. You do the same thing with the connection panel but switch to distinguished name. You can set up a distinguished user by doing the following.

drop user dproxy cascade;
create user dproxy identified globally as 'CN=dproxy,OU=europe,O=oracle,L=omagh,ST=tyrone,C=ie';
alter user dproxy grant connect through barry authenticated using distinguished name;

Tuesday, 11 October 2011

SQL Developer 3.1 EA1 Available

SQL Developer 3.1 EA is now available for download here.   Have a look and try out several of the new features introduced as part of this release.  We have done a lot of work adding DBA functionality for RMAN, Data Pump and other features, and have vastly improved Database Diff, Database Copy and Database Export.
In Migration, we've introduced a TeraSQL translator and greatly enhanced our Sybase T-SQL parser. We've also done a lot of work on insight, including templates for most statements taken directly from the published documentation, and we've added column formatting for SQL*Plus.  These are only some of the highlights of this release, and we will publish a full list soon.
As usual, you can ask us any questions on the forum or add a feature request.  Let us know what you think!

Wednesday, 5 October 2011

Tuning, Refactoring and Instrumentation

For all those who attended the talk today, this is a promise partially fulfilled, in that I had said I would post in more detail about the talk.  At a high level, we talked about three topics: Tuning, Refactoring and Instrumentation.  I'll list out the main points here today and flesh them out over the next few days.

SQL Developer supports several types of tuning activities.  These are:

  • Explain Plan/ Autotrace and Diff
  • Monitoring SQL
  • SQL Tuning Advisor
  • PL/SQL Hierarchical Profiler
For Refactoring, there are several options as well, and these grow with every release.
  • Code Surrounding
  • Procedure extraction
  • Local variable renaming
  • Obfuscation
Instrumentation is a way of finding out what an application is doing, who is using it, how it's doing and how long it has taken.  In order to look at this from a database application point of view, we can break application instrumentation into a number of key areas.
  • Debugging
  • Logging
  • Runtime registration
  • Metric Collection
Each of these key areas has specific tools and processes which should be implemented as part of your application development, and we will get into those in the section on Instrumentation.

Monday, 12 September 2011

SQL Developer Data Pump Support - Part 2

Welcome to part 2 of this feature on our introduction of Data Pump functionality into SQL Developer.  Previously, we walked through exporting from the database.  This post will go through importing that export into a new schema in our database.
In the DBA navigator, go to the data pump node, and choose 'Data Pump Import Wizard'.

When the wizard appears, choose the type of import you want.   In our previous episode, we exported the 'BARRY' schema.  We'll now choose to do a schema import.

For the input directories, choose a directory that exists and that you have access to.  The dump files from the export session need to reside in this directory and conform to the filename as specified.
Hitting next will parse the files, and step 2 shows us the available schemas to choose from.  Since we only exported 'BARRY', we only have one to choose from in this case.
Step 3 involves remapping.  In this case, on my database, the BARRY schema already exists, so I need to create another schema to create these objects in.  I did this before we started the wizard.
Once the schema has been created and we're back in the wizard, we can set the destination as 'BARRY2'.

Step 4 has two parts.  One is for logging, and we choose an appropriate database directory for that which exists and which we have access to.  The second is for the actions on tables if they already exist; in my case, I want to replace them.  For this example, we know it's a fresh schema with nothing in it, but if it did contain the tables, we'd be replacing them.

Lastly, we can schedule the import, and like last time, I want to do this immediately.

The summary shows us what will be done on our behalf and once we hit the finish button, an import job will be created and kicked off immediately.

When the job starts, there will be an import job shown in the DBA navigator, and its corresponding editor will show the job executing.

Finally, we can create a connection for the BARRY2 schema and look at the data.

So, that's it in a nutshell.  Over two parts, we've shown you how to export any part of a database to file using the Data Pump utilities which have been integrated into Oracle SQL Developer.  In this part, we took the files from the previous post and imported them into Oracle.

SQL Developer Data Pump Support - Part 1

In 3.1 we're adding support for Data Pump, which has replaced exp and imp.  In SQL Developer, we've added this in the DBA navigator, which has support for several new things this release.  In this post, I'm only going through Data Pump export; in part 2, we'll visit the import.  We'll revisit several of the other new DBA features in the very near future.  For export, the demonstration will show that this is pretty easy to use; however, get a cup of coffee, cos we've a few screenshots to go through!

OK, so first off, we need to fire up the DBA navigator, which can be reached from the View menu.  Select DBA and you get the navigator shown on the right. This image shows the sys user, which has been selected as the current connection.  We're going to use the barry connection for Data Pump today.

From the data pump node, select the menu item for the Data Pump Export Wizard.  

On the first page of this wizard, we are choosing schemas today.  You also have the option of exporting the entire database, some tablespaces or a block of tables.  When you're ready, press Next.

As we said earlier, we are only choosing schemas today, and in the image below we are only selecting one schema for export: the 'BARRY' schema.

Step 3 allows you to filter what is exported by including or excluding various object types; you can choose none of them, or a selection of each with an appropriate filter string in the value field.
We won't add any today and will let it take everything out of the schema.

In step 4, the data tab, we can add data filters to all the tables, or to individual tables as required.  Again, for this demonstration, I'm not going to choose any of that and will let the data pump export everything.

Step 5 allows us to specify the options for the export.  Our primary interest is in the directory for the logging.

Step 6 focuses on our output directory.  Specify the one you want to use here.  Remember, as in step 5, this directory must exist and you must have the privileges to write to it from your user.  Check the main connections navigator for directories.  By default there is a DATA_PUMP_DIR set up, but make sure that the directory exists.  We've created a directory called 'BARRYS_DPUMP_DIR' for this example.
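If you need to create one, it's a couple of statements run as a privileged user; a quick sketch, where the OS path is made up for illustration and must already exist on the database server:

```sql
-- Map a database directory object to an existing OS path,
-- then let the exporting user read and write to it.
create or replace directory BARRYS_DPUMP_DIR as '/home/oracle/dpump';
grant read, write on directory BARRYS_DPUMP_DIR to BARRY;
```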

Step 7 specifies the schedule which will dump the data for you.  We're choosing Immediately as our option. You can specify whenever you like for this job to run, repeatedly if required.

Lastly, we have the summary screen, which is split in two parts. The main summary shows what actions you are taking and how they will be carried out.  The second panel shows the actual PL/SQL which will be run on your behalf to create the data pump job and dump the data.
When you click Finish and the job submission is successful, you'll get a new job listed in the DBA navigator under Data Pump Export Jobs.

If it doesn't appear, it's highly likely you'll get this error, usually caused by the directory that you are writing the files or the logs to.  This is easily fixed by going back and making sure the directories you have chosen exist and that you have access to read from and write to them.

Lastly, you can go to your database directory and see your exported file, together with the log file.

Your log file will contain something like this if you're successful. (I've cut a lot out of it, as it is long.)

Starting "BARRY"."EXPORT_JOB_SQLDEV_327":  
Estimate in progress using BLOCKS method...
.  estimated "BARRY"."MD_ADDITIONAL_PROPERTIES"              3 MB
.  estimated "BARRY"."STAGE_TERADATA_TABLETEXT"          2.062 MB
.  estimated "BARRY"."MD_DERIVATIVES"                        2 MB
.  estimated "BARRY"."MD_FILE_ARTIFACTS"                     2 MB
.  estimated "BARRY"."文化大革命"                                 0 KB
Total estimation using BLOCKS method: 17.75 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
. . exported "BARRY"."MD_ADDITIONAL_PROPERTIES"          13.54 KB      59 rows
. . exported "BARRY"."STAGE_TERADATA_TABLETEXT"          1.197 MB      25 rows
. . exported "BARRY"."MD_DERIVATIVES"                    49.60 KB     386 rows
. . exported "BARRY"."文化大革命"                                 0 KB       0 rows
Master table "BARRY"."EXPORT_JOB_SQLDEV_327" successfully loaded/unloaded
Dump file set for BARRY.EXPORT_JOB_SQLDEV_327 is:
Job "BARRY"."EXPORT_JOB_SQLDEV_327" successfully completed at 15:42:52

So, for today, that's exporting from the Oracle database using the Data Pump support built into Oracle SQL Developer 3.1, which will be available soon!  We'll have part 2 on importing this dump file next.