Hendry's "Oracle and SQL Server DBA Stuff"

Oracle and SQL Server database solutions for DBAs. Please visit http://hendrydba.com for the latest posts. Thanks!

Archive for the ‘Datapump10g’ Category

UDE-00008: operation generated ORACLE error 31626

Posted by Hendry chinnapparaj on June 12, 2012

Problem: During an Oracle Data Pump export, the client encounters the errors below.

. . exported "MZS_OWNER"."READING"                       3.269 GB 12277747 rows


UDE-00008: operation generated ORACLE error 31626

ORA-31626: job does not exist

ORA-39086: cannot retrieve job information

ORA-06512: at "SYS.DBMS_DATAPUMP", line 2772

ORA-06512: at "SYS.DBMS_DATAPUMP", line 3886

ORA-06512: at line 1


Solution: This behavior is documented in Oracle MOS:

DataPump Export (EXPDP) Client Gets UDE-8 ORA-31626 ORA-39086 [ID 549781.1]

Check the expdp logfile first. If the job completed successfully, as shown below, there is no issue:

Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:12:39
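As a quick sanity check, you can grep the export log for the completion message. The logfile name below is a placeholder; substitute whatever you passed to the expdp LOGFILE parameter:

```shell
# Check whether the server-side export finished cleanly.
# "expdp_schema.log" is a placeholder -- use your own expdp logfile name.
if grep -q "successfully completed" expdp_schema.log; then
    echo "Export job completed on the server side; client errors can be ignored."
else
    echo "No completion message found; investigate the log."
fi
```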


In other words, even though the client reported errors, reviewing the log file shows that the job successfully completed.


This issue is discussed in Bug 5969934: EXPDP CLIENT GETS UDE-00008 ORA-31626 WHILE THE SERVER SIDE EXPORT IS OK.


The expdp client makes calls to the DBMS_DATAPUMP package to start and monitor the export job. Once the export job is underway, the client just monitors the job status by issuing DBMS_DATAPUMP.GET_STATUS calls. Therefore, if the export logfile says "job successfully completed", the dump file generated by the job should be fine.
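You can also confirm the server-side job state independently of the client by querying the DBA_DATAPUMP_JOBS dictionary view; a job that has completed cleanly will no longer appear there. A minimal sketch, assuming you can connect as SYSDBA on the database server:

```shell
# Query Data Pump job state from the server side (run as a DBA user).
# The connection method is a placeholder; adjust for your environment.
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 120
COLUMN owner_name FORMAT A12
COLUMN job_name   FORMAT A30
COLUMN state      FORMAT A15
SELECT owner_name, job_name, state
FROM   dba_datapump_jobs;
EOF
```

If the export job is absent from this view (and the log shows the completion message), the client-side UDE-00008/ORA-31626 errors did not affect the export itself.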

You can simply ignore the errors, since the dump file is still valid for an import.
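Since the dump file set is valid, it can be imported as usual. The directory object, dumpfile name, and credentials below are illustrative placeholders for your environment:

```shell
# Import from the dump set produced by the "failed" export client.
# DATA_PUMP_DIR, the dumpfile name, and the schema are placeholders.
impdp system directory=DATA_PUMP_DIR dumpfile=expdp_schema.dmp \
      logfile=impdp_schema.log schemas=MZS_OWNER
```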

In that release, there were a number of problems that caused the expdp and impdp clients to exit prematurely, interpreting a nonfatal error as a fatal one and giving the appearance that the job had failed when it hadn't. In fact, inspection of the log file, if one was specified for the job, showed that the job ran successfully to completion. Often a trace file written by one of the Data Pump processes would provide more detail on the error that had been misinterpreted as fatal. Many of these errors involved the queues used for communication between the Data Pump processes, but there were other issues as well.

With each subsequent release, these problems have been addressed, and the client has become more robust and rarely, if ever, runs into situations like this. However, this is the result of many bug fixes in subsequent releases, some in Data Pump and some in supporting layers. It's impossible to know, at this point, what combination of bug fixes would address this specific failure, and even if that were possible, it wouldn't address other possible failures that look very similar on the client side.

Posted in Datapump10g