Oracle Scratchpad

February 15, 2011

ORA-29913

Filed under: Infrastructure,Troubleshooting — Jonathan Lewis @ 1:19 pm GMT Feb 15,2011

External tables as a substitute for loading tables through SQL*Loader have become more popular over the last couple of years – which means questions about Oracle error ORA-29913 have also become more common. So what do you do about something like this:


SQL> select * from extern;
select * from extern
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04063: unable to open log file EXTERN_17063.log
OS error No such file or directory
ORA-06512: at "SYS.ORACLE_LOADER", line 19


The commonest reason for ORA-29913 is simply that Oracle can’t open the operating system files it’s supposed to use. This may be the input data files or the various output files (the “bad”, “discard” and “log” files) that it needs to produce. The reason for failure may be that the directory doesn’t exist, or that the Oracle executable doesn’t have permission to access the directory appropriately, or that the file itself doesn’t have the necessary permissions.

In the example above it looks as if Oracle can’t write to the required log file which, as the “OS error” line suggests, may be a problem with the (unspecified) directory or with the named file (EXTERN_17063.log) in that directory.

So, a few quick checks when you see this error when trying to work with external tables:

    Check your ‘create directory … ‘ statements
    Check that the directories have appropriate permission for Oracle to read and write to them
    Check that there isn’t a problem with existing file permissions
    Be particularly careful about permissions and ownership of network directories
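
A quick way to start these checks from inside the database is to look at the directory objects and the privileges granted on them. A sketch, assuming you can see the relevant data dictionary views (the schema name scott is just a placeholder):

rem
rem     all_directories shows only the directories visible to you;
rem     a DBA could query dba_directories instead
rem

select directory_name, directory_path
from   all_directories
order by
       directory_name
;

rem     make sure the table owner can read and write the directories

grant read, write on directory ext_tab to scott;
grant read, write on directory log_dir to scott;

Remember, though, that the grants only cover the Oracle side – the operating system permissions on the directory and files still have to allow the Oracle executable in.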

I have a piece of framework code for external tables, and its most important feature is that it caters for all the output files (that I am aware of) and keeps the directories for the inputs and the outputs separate.

define m_filename = 'd11g_ora_1820.trc'
define m_file_dir = 'C:\oracle\diag\rdbms\d11g\d11g\trace'
define m_log_dir  = 'c:\working'

drop table ext;

create or replace directory ext_tab as '&m_file_dir';
create or replace directory log_dir as '&m_log_dir';

create table ext(text_line varchar2(4000))
organization external
(
	type oracle_loader
	default directory ext_tab
	access parameters (
		records delimited by newline
		discardfile	log_dir:'read_trace_dis.txt'
		badfile		log_dir:'read_trace_bad.txt'
		logfile		log_dir:'read_trace_log.txt'
		characterset us7ascii
		fields
			rtrim
			reject rows with all null fields
		(
			text_line	(1:4000)	char
		)
	)
	location ('&m_filename')
)	reject limit unlimited
;

As it stands this has been set up to define a standard trace file on my laptop as an external table; it’s a framework I use sometimes on customer sites where I have lots of database privileges but don’t have any rights to see the server: sometimes you really need to see the trace files. (A nice little feature that’s appeared in 11g, by the way, is that you don’t need to work out names for trace files any more, you just need to get at column v$process.tracefile)
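
For example, a sketch of that lookup for the current session (v$process.tracefile appears in 11g; sys_context('userenv','sid') returns the current session's SID):

select	p.tracefile
from	v$process p, v$session s
where	p.addr = s.paddr
and	s.sid  = to_number(sys_context('userenv','sid'))
;

Substitute the file name this reports into m_filename (and its directory into m_file_dir) and the framework above will let you read the trace file from SQL.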

A few footnotes:

  • If you don’t specify a “log” directory as I have then all the output files go into the “default” directory which, for many people, is likely to be the directory where they keep their data files. (You could have different directories for the “bad” and “discard” files, of course.)
  • The various log files are appended to (not replaced) every time you query the external table – and it’s very easy to forget this and let them become very large (which means that you could get ORA-29913 because the size of an output file exceeds the O/S limit, or has filled the device).
  • If you don’t specify names for the output files then they will contain the name of the external table and the process ID of the session’s shadow process. The presence of the process ID does reduce the risk of a single output file getting too big, of course. (I haven’t checked what this does with shared servers, but the typical use of external tables means you probably shouldn’t be querying them through shared servers.)
  • The default names of the output files vary with version – until 11g I think it was always {table_name}_{process_id}.{type}, but my 11g sandbox seems to introduce another number in the middle of the name to give:  {table_name}_3168_{process_id}.{type}
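
If you don’t need the output files at all, you can sidestep the ever-growing-file problem by suppressing them explicitly – the ORACLE_LOADER driver accepts the keywords nobadfile, nodiscardfile and nologfile. A fragment, sketching only the access parameters clause of the earlier example:

	access parameters (
		records delimited by newline
		nobadfile
		nodiscardfile
		nologfile
		characterset us7ascii
		fields
			rtrim
			reject rows with all null fields
		(
			text_line	(1:4000)	char
		)
	)

The obvious trade-off is that you lose the diagnostics when a load goes wrong, so this is better suited to tables you query routinely than to ones you are still debugging.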

Got to stop there, the plane’s about to push back – and it’s “all electronics off”. Spelling and grammar may be corrected this evening.

2 Comments »

  1. Jonathan,

    Thank you for sharing this excellent tip on external tables. As a DBA, I frequently run into this problem with developers who create external tables.

    Regards,
    Ben

    Comment by Ben Prusinski — February 17, 2011 @ 6:49 pm GMT Feb 17,2011 | Reply

  2. perfect

    Comment by Manohar — June 26, 2013 @ 6:55 pm BST Jun 26,2013 | Reply

