Real-life usage of Oracle Import and Export scenarios – a complete reference for the new DBA





You are a new Oracle DBA and you already know that Oracle has two tools, Export and Import. The Export tool is used to export data from the Oracle database into a binary file, generally called the 'dump file'.
The Import tool is used to import data from this dump file back into the Oracle database.

This article contains a mixture of export and import commands that I have used in my day-to-day production DBA life, along with some troubleshooting tips in case of errors.

Up to Oracle 9i the import tool was called imp and the export tool was called exp.
From 10g the Data Pump import tool is called impdp and the Data Pump export tool is called expdp.

-- command to export a table called purg_save_stat_sep08 which is present
-- in the ud2 schema

expdp ud2/ud2 DIRECTORY=imp_udtr DUMPFILE=imp_udtr.dmp TABLES=purg_save_stat_sep08

I had problems when I imported the nx.in_fa_te table into the nc3 schema.

The table I was importing lives in a different schema on production than the one I was importing into, so I had to add the parameters below, which then worked.

REMAP_SCHEMA=NX:NC3

tables=in_fa_te

table_exists_action=truncate

directory=nx_imp_dir

logfile=cs_in_fa_2009_1006.log

dumpfile=cs_in_fa_2009_1006_%u.dmp

parallel=4
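With those parameters saved in a parameter file, the import itself is then a one-liner. The parameter file name below is only an example, not the one actually used on that system:

impdp system/<password> parfile=nx_imp.par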

For compatibility rules for the enhanced utilities Export Data Pump (expdp) and Import Data Pump (impdp) which were introduced in Oracle10g, see:
Note 553337.1 "Export/Import DataPump Parameter VERSION – Compatibility of Data Pump Between Different Oracle Versions"

The export fails if you use a higher release export version.
For example, if you use the export 9.0.1 utility to export data from an 8.1.7 database, you will get the errors:

EXP-00056 Oracle error 942 encountered
ORA-00942 table or view does not exist
EXP-00000 Export terminated unsuccessfully

The recommendation is not to use a higher-version export utility against a lower-version database.

After upgrading the test and production databases to Oracle 10g, we perform the import from prod into test  in just one step using the following impdp parameters:

SCHEMAS=SCHEMAPROD

NETWORK_LINK=TEST_SCHEMA

REMAP_SCHEMA=SCHEMAPROD:SCHEMATEST

TABLE_EXISTS_ACTION=REPLACE

EXCLUDE=OBJECT_GRANT
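With those parameters in a parameter file, the one-step refresh is started while connected to the test (target) database; because of NETWORK_LINK no dump file is written at all. The parameter file name here is just a placeholder:

impdp system/<password> parfile=prod_to_test_refresh.par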

The name of the transform can be either SEGMENT_ATTRIBUTES or STORAGE. STORAGE removes the STORAGE clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespaces, logging, and storage attributes. The boolean_value can be Y or N; the default is Y. The type of object is optional; the valid values are TABLE and INDEX.

For example, if you want to ignore the storage characteristics during the import, and use the defaults for the tablespace, you may do:

impdp dumpfile=scott.dmp transform=storage:N:table exclude=indexes

When importing in 10g, if the table already exists, then do the following:

ec04*ec-/glo/ec/ora/ew/adm/export :->more  ew_impdp.par

schemas=ew

TABLE_EXISTS_ACTION=replace

logfile=ew_datapumpimport_apr22.log

directory=ew_datapump_dir

content=all

dumpfile=ew_datapumpexport_apr22_%u.dmp

parallel=2

to01*toi-/glo/toi-db/oracle/toi/adm/export :->more  expdatapump_top_schema.param

full=n

schemas=tp, tpuser

logfile=tp_schema_time_2008_0617.log

directory=dpump_dir1

content=all

dumpfile=tp_schema_time_2008_0617_%u.dmp

parallel=2

flashback_time="TO_TIMESTAMP('2008-06-17 15:35:00', 'YYYY-MM-DD HH24:MI:SS')"

to01*toi-/toporacledir/toi-db/oracle/toi/adm/export :->more  impdp_apr202009.parm

schemas=tp

logfile=tp_schema_time_2009_0420.log

directory=dpump_dir1

content=all

dumpfile=tp_schema_time_2009_0420_%u.dmp

parallel=2

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

exp system/systemprod FILE=ea_april14.dmp OWNER=ea GRANTS=y ROWS=y COMPRESS=y

Parameters for full export

FULL=Y

BUFFER=102400

ROWS=Y

INDEXES=Y

GRANTS=Y

CONSTRAINTS=Y

COMPRESS=N


When you do a full import, the SYSTEM and SYS passwords will be replaced from the dump.

When doing a user import, the user should have unlimited quota on the tablespace in which the objects are getting created; otherwise the import gives a 'no privileges' error for the user who owns the objects in that tablespace.

Also, sometimes the SYS procedures do not have execute granted to PUBLIC due to database hardening, and the import then throws errors in an infinite loop.

-- find out if the dump on production is bigger than on integration and by how much, and if space is available on the integration server

select owner, sum(bytes)/1024/1024 from dba_segments where owner='SUPPDB' group by owner;

-- find out on which disk all the tablespace files exist

select file_name, t.tablespace_name tablespace, bytes from dba_data_files d, dba_tablespaces t where t.tablespace_name = d.tablespace_name and d.tablespace_name='DATA';

-- find out the list of users and their tablespaces from dba_segments, which shows in which tablespaces their objects are located

select distinct owner,tablespace_name from dba_segments;

To export to a compressed file:

/etc/mknod pipename p

compress < pipename > newfilename.Z &

exp <username>/<password> file=pipename

rm -f pipename

To import from a compressed file:

/etc/mknod pipename p

uncompress < newfilename.Z > pipename &

imp <username>/<password> file=pipename

rm -f pipename

You may use gzip in the same way. gzip has a better compression ratio, but on some systems the gzip file size is limited to 2 GB.

# create a named pipe

mknod exp.pipe p

# read the pipe – output to zip file in the background

gzip < exp.pipe > exp_full.dmp.gz &

# feed the pipe

exp userid=scott/tiger file=exp.pipe …
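The import side works the same way in reverse: gunzip writes into a named pipe and imp reads from it. A rough sketch, assuming the dump is a scott schema export and reusing the example names above:

# create a named pipe
mknod imp.pipe p
# uncompress into the pipe in the background
gunzip < exp_full.dmp.gz > imp.pipe &
# feed imp from the pipe
imp userid=scott/tiger file=imp.pipe fromuser=scott touser=scott ignore=y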

exp \'/ as sysdba\' file=full_exp_no_rows.dmp log=full_exp_no_rows.log full=y rows=n buffer=1200000

How to handle import errors

How to handle ORA-00904 errors on Export
========================================

When running full database export the following errors will be returned:

exp80 user/password full=y file=D:\DB-Backup\ORCL\expORCL.dmp
log=D:\DB-Backup\ORCL\expORCL.log consistent=y buffer=40960000

    EXP-00008: ORACLE error 904 encountered
    ORA-00904: invalid column name

To get a clue about what caused these ORA-00904 messages, follow these
steps:

1. Connect as privileged user and run the following command inside an
   SQLPLUS session:
 
   SQL> alter system set events='904 trace name errorstack';

   ——————————– Note ————————————

   This event will work starting with Oracle8i. If running an Oracle database
   version below Oracle8i, you'll get ORA-02065 errors returned when
   specifying this event. In Oracle versions below 8i you'll have to insert
   the following entry into INIT.ORA

   event="904 trace name errorstack"
   
   ————————————————————————–
 
2. Retry the FULL export. When the ORA-00904 occurs, a trace file will be
   written to the storage location specified by the INIT.ORA parameter
   'user_dump_dest'. As soon as the ORA-00904 has been returned to your
   export session, you can abort the export and examine the trace information.

3. Disable event tracing by running the following command:
 
   SQL> alter system set events='904 trace name errorstack off';

   On Oracle versions below 8.1.5, remove the event entry from INIT.ORA.

4. Examine the trace file:

   *** SESSION ID:(9.3) 2001.11.21.15.28.00.494
   *** 2001.11.21.15.28.00.494
   ksedmp: internal or fatal error
   ORA-00904: invalid column name
   Current SQL statement for this session:
   SELECT fname,fsize,maxextend,inc FROM sys.exu8fil WHERE tsid = :1
                                         ===========

   a) a problem with object EXU8FIL has been detected
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In this example, there's a problem with the EXU8FIL view. To examine
   whether this object exists, run the following command:

   SQL> select owner,object_name,object_type,object_id,status
        from   dba_objects
        where  object_name='EXU8FIL';

   If the problem is with the EXU8FIL object, you can simply recreate the
   object by running the CATEXP.SQL script.

   b) a problem with object DBMS_JAVA or DBMS_JAVA_TEST
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   ksedmp: internal or fatal error
   ORA-00904: invalid column name
   Current SQL statement for this session:
   SELECT SYNNAM, DBMS_JAVA.LONGNAME(SYNNAM), DBMS_JAVA.LONGNAME(SYNTAB), …

   On systems where the Java Server has been installed the trace file could
   contain information on a failing select statement on DBMS_JAVA packages.

Take the exp/imp as an example. Here is the deal:

1. EXPORT your master database in full the first time around. This will serve as your baseline. Every time the instructors need to do some work on the master database, do an incremental export; this will export just the tables that have been changed since that point in time, which can be very small and limited to a few tables.
2. Whenever you need to bring all the other 6 databases back to the baseline, just drop all the user objects, then do a full import plus the necessary incremental imports.
3. You mentioned that dropping user objects is a pain, but you can automate this using scripts.
To find all the objects in the different user schemas:
SQL> select object_name, object_type from dba_objects where owner IN ('<user1>', '<user2>', '<user3>', ...);

To drop all the objects, you can spool the generated DROP statements to a file and run it; if you need such a script, they are easy to find online (a minimal sketch follows below).
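A minimal sketch of such a spool-based drop script; the schema name SCOTT is only an example, and the generated file may need a couple of runs because of object dependencies:

set heading off feedback off pagesize 0 linesize 200
spool drop_objects.sql
select 'drop ' || object_type || ' ' || owner || '.' || object_name ||
       decode(object_type, 'TABLE', ' cascade constraints', '') || ';'
from   dba_objects
where  owner in ('SCOTT')
and    object_type in ('TABLE','VIEW','SEQUENCE','SYNONYM','PROCEDURE','FUNCTION','PACKAGE');
spool off
@drop_objects.sql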

Using Export

Examples:

Parameter File Method
> exp SYSTEM/password PARFILE=params.dat
The params.dat file contains the following information:
FILE=dba.dmp
GRANTS=y
FULL=y
ROWS=y
LOG=export.log
Command-Line Method
> exp SYSTEM/password FULL=y FILE=dba.dmp GRANTS=y ROWS=y LOG=export.log
Parameter File Method
> exp scott/tiger PARFILE=params.dat
The params.dat file contains the following information:
FILE=scott.dmp
OWNER=scott
GRANTS=y
ROWS=y
COMPRESS=y
Command-Line Method
> exp scott/tiger FILE=scott.dmp OWNER=scott GRANTS=y ROWS=y COMPRESS=y
Parameter File Method
> exp SYSTEM/password PARFILE=params.dat
The params.dat file contains the following information:
FILE=expdat.dmp
TABLES=(scott.emp,blake.dept)
GRANTS=y
INDEXES=y
Command-Line Method
> exp SYSTEM/password FILE=expdat.dmp TABLES=(scott.emp,blake.dept) GRANTS=y INDEXES=y
 

Using Import

Examples:
Parameter File Method
> imp SYSTEM/password PARFILE=params.dat
The params.dat file contains the following information:
FILE=blake.dmp
SHOW=n
IGNORE=n
GRANTS=y
ROWS=y
FULL=y
LOG=/u01/imp.log
Command-Line Method
> imp SYSTEM/password FILE=blake.dmp SHOW=n IGNORE=n GRANTS=y ROWS=y FULL=y LOG=/u01/imp.log

Exporting individual users:
=============================

exp  system/<password>@<hoststring> file=myfile.dmp log=myfile.exp owner=<user1>, <user2>

Note:  If you are working on personal oracle then there would be no hoststring.


Create a parameter file for the export, let's call it expfile, with the following parameters:
  userid=your username /your password@database name
  owner = name of the schema you want to export
  *consistent=y
  file=filename.dmp
  log=logfile.log

*This is useful if the schema being exported will be modified during the export. Be careful, because it can dramatically slow down an export and cause the rollback segments to grow very large, depending on the transactions performed while the export is occurring. If you are sure no users are accessing the schema, this parameter can be left at its default, which is consistent=n.

on the command line type:
 exp parfile=expfile

Next, create the user in the database where you will be importing into (make sure to give proper rights and quotas).
 
  then import:
  again, use a parameter file, let’s call it impfile, with the following parameters:
   userid = your username / password@database name
   fromuser=name of the schema exported
   touser=same as fromuser schema
   file=export .dmp filename
   log=logfile.log filename

If the tnsnames are right, this should work fine.
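As a concrete illustration (every value below is a placeholder, not taken from a real system), expfile could look like:

userid=system/<password>@proddb
owner=appuser
consistent=y
file=appuser.dmp
log=appuser_exp.log

and impfile could look like:

userid=system/<password>@testdb
fromuser=appuser
touser=appuser
file=appuser.dmp
log=appuser_imp.log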

some important steps when doing import and export of a database.

A Full System Export and Import is a useful way to replicate or clean up
a database.  Please note the following guidelines when performing a Full
System Export and Import:
 
1.  When performing the import, ensure you are pointing at the correct  
    instance.  Always check values for the SID and two task driver before
    proceeding.
   
    *Note:   On Unix systems, just entering a sub-shell, such as using
             C-shell, can change the database that the import would work
             against.
 
2.  When possible, it is advisable to take a physical copy of the
    exported database and the database you intend to import into before
    starting the import.  This ensures any mistakes are reversible.
 
3.  *NEVER* run a FULL SYSTEM IMPORT on a machine that has more than one
    database UNLESS you are 100% sure all tablespaces have been
    pre-created.  A full import will create any undefined tablespaces
    using the same datafile names as the exported database.  There are
    situations where this is very dangerous:
 
    ->  If the datafiles belong to ANY other database, they will be
        CORRUPTED.  This is especially true if the exported database is
        on the same machine, because its datafiles will be re-used by
        the database you are importing into.
 
    ->  If the datafiles have names that clash with existing operating
        system files.
 
4.  When exporting, it is advisable to also produce a report showing
    information you may need before / after the import:
 
    ->  List the tablespaces and datafiles
 
    ->  List the rollback segments
 
    ->  List a count, by user, of each object type such as tables,
        indexes, etc.
 
    This enables you to ensure that tablespaces have been pre-created
    and to check the success of the import after it is complete.
 
5.  If you are creating a completely new database from an export, remember
    to create an extra rollback segment in SYSTEM, and make this available
    in the init.ora file before proceeding with the import.
 
6.  There are some versions of export that do not retain sequence numbers
    correctly.  It is advisable to select next_value for each sequence
    after the export so that these can be checked after import.
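One simple way to record the sequence values before the export is a query against DBA_SEQUENCES; note that LAST_NUMBER is only an upper bound (it includes cached values), so treat it as a sanity check rather than the exact next value:

select sequence_owner, sequence_name, last_number
from   dba_sequences
where  sequence_owner not in ('SYS','SYSTEM')
order  by sequence_owner, sequence_name;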

Exporting a 9i database and importing into 10g: below are the full steps needed.

These are the things that you need to do:

1) Export dump from 9i server using EXP
2) Copy the file to the 10g server
3) Import the dump into the 10g server using IMP

Can you please keep a log file while importing, using the "log=import.log" option, and paste the contents of the log file here?

Okay, I understand that you have manually created tablespaces and tables; is that correct? It is fine to create the necessary tablespaces; however, it is not necessary to create the tables. Oracle itself will take care of it, since the DMP file includes the table definitions as well.
So when you import back, if a table already exists, then Oracle will simply skip that table as the object already exists. Now you have 2 options:

1) Drop the table and run the import again
2) Use the imp command again with the option "ignore=y". This will import data into the tables even if the table already exists.

It looks like Oracle is just doing something without actually importing the data, similar to what happens when using the indexfile option. So could you please try importing using the actual command without making use of the par file?

This will be your command:
imp <system>/<oracle> file=camdba.dmp log=imp_camdba.log fromuser=camdba touser=camdba commit=y feedback=100000 show=y ignore=y

The problem is with: show=y

This tells imp to only SHOW what it would do, without actually doing anything.

Normally the best proven way is using the Export/Import utilities,
but with a 1 TB database it will run for a very long time.

For the usage of the Upgrade Assistant, study the Oracle recommendations
and read chapter 3 of
Oracle® Database
Upgrade Guide
10g Release 2 (10.2)
B14238-01
on
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14238.pdf

I wonder if it is possible to use 10g Data Pump against the 9i DB.
Data Pump with its parallel options can do the job 5 times faster
than the classical Import.

What I observed when I exported on the production system was that it exports all the users except SYS (SYSTEM and the other users are included), so it will definitely try to import the SYSTEM tables, but many of the system tables do not have any rows.

In any case I will take a backup of the database on the preprod and do a full import.

For importing using datapump in 10g

I created directory object on pre production

SQL> create directory dar_preprod_dir as '/toporacledir/merwin-db/oracle/merwin/import';

Directory created.

SQL> grant read, write on directory dar_preprod_dir to system;

Grant succeeded.

The import command I used was

impdp system/merwinsystem directory=dar_preprod_dir dumpfile=exportfilename full=y

For export using datapump in 10g:

I created directory object on Merwin production

SQL> create directory dpump_merwin1 as '/toporacledir/merwin-db/oracle/merwin/export';

Directory created.

Granted permissions to system user

SQL> grant read, write on directory dpump_merwin1 to system;

Grant succeeded.
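The export command itself is not shown above; assuming a full export written to that directory, it would be along these lines (the dump file name and password are placeholders):

expdp system/<password> directory=dpump_merwin1 dumpfile=merwin_full_%u.dmp logfile=merwin_full.log full=y parallel=2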

About using the COMPRESS option in 9i export

Additionally, by default, the Export utility will 'compress' the
extents of each table when creating the dump file. The actual function of the
COMPRESS option is often misunderstood. What actually happens is that the
Export utility modifies the value of INITIAL from what the table was originally
created using. For example, consider a table that was created with a storage of
(INITIAL 10K NEXT 10K) and the table currently has 100 extents. If an
export-compress operation is performed, the SQL create syntax stored in the
dump file would be (INITIAL 1000K NEXT 10K). The INITIAL value is made as large
as the sum of the existing extents. The NEXT value is unchanged.

When using the COMPRESS option, there are a couple of points to keep in mind.  
The INITIAL value chosen by Export is the sum of extents that a table currently
has, and has nothing to do with the amount of data in the table. For example,
I perform a DELETE FROM INVENTORY, where the 'inventory' table has four
100 megabyte (MB) extents. I then export with COMPRESS; the new table will have
an initial extent of 400 MB, even though the table is completely empty.

Side Note:  The size of the export dump file has very little to do with the
actual amount of data that exists in the database and/or the amount of space
required in the importing database. For example, I perform a table-level export
of a particular table that is empty, but has extents totalling 400 MB. The
export dump file will be tiny, as it will contain the table DDL and no data.  
However, on import, that dump file will require 400 MB of free space in the
importing database.

The second point is the size of the INITIAL extent created relative to datafile
sizes for the tablespace it is being placed in. As an example, I have the USERS
tablespace that has four 50 MB datafiles. In it I have the EMPLOYEES table
which consists of fifteen 10 MB extents. If I export this table with COMPRESS,
the resulting dump file will try to import the table back with a single 150 MB
initial extent. Neither the table nor the tablespace has changed size, but the
new table will not fit in the USERS tablespace because a single extent can not
be split among datafiles. Note that this is only a factor if the tablespace
consists of multiple datafiles. The only workarounds are to export without
compression or to pre-create this table prior to import and give it workable
storage options.
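As a sketch of the first workaround, assuming the EMPLOYEES table above belongs to the scott schema, the table would simply be exported with compression turned off:

exp scott/tiger file=employees.dmp log=employees_exp.log tables=employees compress=n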

Also, collecting the statistics (compute) will do basically the same and will also give you fresh statistics, especially if you use compress=y with export, which should reduce your table extent allocation. This depends on the INITIAL extent size of your tables in the database you export from.

I am running an imp job which takes 30 hours. Please suggest something to improve performance.
buffer=26214400 file=mgr_fpa.dmp constraints=y indexes=y fromuser=mgr_fpa touser=mgr_fpa ignore=y
log=mgr_fpa_imp.log

 

Followup   November 15, 2005 – 8am US/Eastern:

commit=n
 
don't have it commit each batch insert
 
 
and if possible, you would import in parallel (having exported in parallel, creating many dump
files).
 
and if possible use datapump with 10g as it can do parallel and direct path operations.

 

1. You mean I should put commit=n   right?
2. How to do export on parallel ?
 
Thanks!

 

Followup   November 15, 2005 – 12pm US/Eastern:

1) yes
 
2) by running many exports - each getting a separate set of schema objects (different sets of
tables, or different user accounts, whatever - do it yourself parallelism, something data pump in
10g does for you)
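A rough sketch of that do-it-yourself parallelism with the classic exp utility (schema names and file names are placeholders): start several exports at the same time, each covering a different set of schemas, and wait for them all to finish.

exp system/<password> owner=app1 file=app1.dmp log=app1.log &
exp system/<password> owner=app2 file=app2.dmp log=app2.log &
exp system/<password> owner=app3,app4 file=app3_app4.dmp log=app3_app4.log &
wait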

 

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:230214032357

It is possible to import data into a database without a dump file. Over the network, the data is retrieved from the source database and imported directly into the target database; there are no dump files involved.

How to do the Network Mode Import:
————————————-

1)Create a database link (with the source database) in the database where you will perform the import operation.

SQL> create database link THERAP.REGRESS.RDBMS.DEV.US.ORACLE.COM connect to arju identified by a using 'NEPTUNE';
Database link created.

I had to use this long name because of the database global naming settings.

2)Check the database link:

SQL> select table_name from user_tables@THERAP.REGRESS.RDBMS.DEV.US.ORACLE.COM;
TABLE_NAME
——————————
FIRST_TABLE
TEST
NAME
SQL> select table_name from user_tables;
no rows selected

3)Perform the network mode import operation.Specify database link parameter with NETWORK_LINK

SQL> !impdp arju/arju directory=dnet FLASHBACK_SCN=118740640 NETWORK_LINK=THERAP.REGRESS.RDBMS.DEV.US.ORACLE.COM

Note that there is no dumpfile parameter here. I created the dnet directory, into which the log file will be written.

Restrictions of Network Mode Import:
——————————————-

1.If the source database is read-only, then the user on the source database must have a locally-managed tablespace assigned as a default temporary tablespace. Otherwise, the job will fail.

2.If the USERID that is executing the import job has the IMP_FULL_DATABASE role on the target database, then that user must also have the EXP_FULL_DATABASE role on the source database.

3.The only types of database links supported by Data Pump Import are: public, fixed-user, and connected-user.

4.When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be imported (not partitions of tables).

5.Network imports do not support the use of evolved types.

Doing import using sql*net

You can use SQL*Net connection strings or connection descriptors when you invoke

the Import utility. To do so, the listener must be running (lsnrctl start). The

following example shows how to invoke Import using a SQL*Net connection:

> impdp hr/hr@inst1 DIRECTORY=dpump_dir DUMPFILE=hr.dmp TABLES=employees

The hr/hr@inst1 results in a SQL*Net connection. The inst1 refers to a service

name specified in the tnsnames.ora file. This means that the import client is

being run remotely from the server to import data from a dump file.
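For reference, the inst1 service name would come from an entry in tnsnames.ora roughly like the one below; the host, port and service name are placeholders:

INST1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = inst1.example.com)
    )
  )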

In 10g, using Data Pump to do an import remotely into serverB.

The following is assumed:

The export dump is in ServerA.

The directory where the dump is present is called DirA

The tnsnames entry for the remote database is RemB. This entry should be present in the tnsnames.ora file on serverA.

The username(having dba privileges) for the remote database is userB

The password for the remote database is passB

1)log into server A

2)go to the directory where the export dump file is present.

3)execute below command

impdp userB/passB@RemB directory=DirA dumpfile=dumpfilename schemas=schemaname TABLE_EXISTS_ACTION=REPLACE

My first guess is that DBMS_TRANSFORM_EXIMP doesn't exist or isn't valid. Run
this to find out:

select * from dba_objects where owner='SYS' and
object_name='DBMS_TRANSFORM_EXIMP';

The create package DBMS_TRANSFORM_EXIMP statement exists in
$ORACLE_HOME/rdbms/admin/dbmstxfm.sql, which is normally called by
catproc.sql. If it doesn’t already exist, you can try running that or you
can try running $ORACLE_HOME/rdbms/admin/catproc.sql to create. If it does
exist and its status is INVALID, try running
$ORACLE_HOME/rdbms/admin/utlrp.sql to re-compile (as SYS of course). If it
exists and is valid, I would suspect permissions issues. Try running these
commands (again as SYS) to create proper grants:

GRANT EXECUTE ON sys.dbms_transform_eximp TO SYSTEM WITH GRANT OPTION;
GRANT EXECUTE ON sys.dbms_transform_eximp TO imp_full_database;
GRANT EXECUTE ON sys.dbms_transform_eximp TO exp_full_database;
GRANT EXECUTE ON sys.dbms_transform_eximp TO execute_catalog_role;
GRANT EXECUTE ON sys.dbms_transform_eximp TO PUBLIC;
CREATE OR REPLACE PUBLIC SYNONYM dbms_transform_eximp for
sys.dbms_transform_eximp ;


Subject: Schema Import fails with PLS-201: Identifier ‘SYS.DBMS_EXPORT_EXTENSION’ Must Be Declared
  Doc ID: 445135.1 Type: PROBLEM
  Modified Date : 14-OCT-2008 Status: PUBLISHED



Applies to:

Oracle Server – Enterprise Edition – Version: 10.2.0.2
This problem can occur on any platform.

Symptoms

10gR2

Schema import fails with:

IMP-00058: ORACLE error 6550 encountered
ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_EXPORT_EXTENSION' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
IMP-00017: following statement failed with ORACLE error 6550:
"BEGIN SYS.DBMS_EXPORT_EXTENSION.SET_IMP_SKIP_INDEXES_OFF; END;"
IMP-00003: ORACLE error 6550 encountered
ORA-06550: line 1, column 8:
PLS-00201: identifier 'SYS.DBMS_EXPORT_EXTENSION' must be declared
ORA-06550: line 1, column 8:
PL/SQL: Statement ignored
IMP-00017: following statement failed with ORACLE error 6550:
"BEGIN SYS.DBMS_EXPORT_EXTENSION.SET_IMP_SKIP_INDEXES_OFF; END;"


Cause

In 10gR2, import uses DBMS_EXPORT_EXTENSION package and you need to grant execute on DBMS_EXPORT_EXTENSION to the schema user being imported into.


Solution

The main issue here is lack of privileges. There are 2 possible solutions:

1- Grant execute on DBMS_EXPORT_EXTENSION to public.
or
2- Grant execute on DBMS_EXPORT_EXTENSION to the user/schema you are importing into (not the user who is running the import).  For example, if you are importing into schema CDM_DB0 then grant execute to that user.
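For example, option 2 for the CDM_DB0 schema mentioned above would be:

SQL> GRANT EXECUTE ON SYS.DBMS_EXPORT_EXTENSION TO CDM_DB0;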

 

ORA-01466: unable to read data – table definition has changed


Symptom:

The error (possibly during an export):

ORA-01466: unable to read data - table definition has changed

Cause:

Oracle has detected a problem when trying to read the database table (or index). The creation timestamp of the table (or index) is in the future.

Possible remedies:

  • According to Oracle this could be caused by a long running snapshot. Try committing or rolling-back all outstanding transactions and try again.
  • The system time has been changed. Oracle suggest exporting the affected tables, dropping the tables and importing them.
  • If this error is encountered when trying to export tables then try exporting the tables without the flag 'consistent=Y'.

Below link has some more information about the table definition has changed type of errors.

http://laurentschneider.com/wordpress/2007/07/keep-your-client-system-time-accurate.html

Support note 167495.1 says
 
=================================
Reason
~~~~~~
There are objects inside the database that have an object creation
time-stamp that is greater than the current system time. Probably this
happened by once setting a wrong system date/time. To find out
the affected objects, run the following command:
 
SQL> select to_char(created,'dd-mm-yyyy hh24:mi:ss')
"CREATION TIME", object_name, object_type, object_id
from dba_objects where created > sysdate;
 
 
CREATION TIME       OBJECT_NAME     OBJECT_TYPE   OBJECT_ID
------------------- --------------- ------------- ----------
22-05-2006 10:55:35 PK_T1C1         INDEX         25490
22-05-2006 10:56:10 PK_T2C1         INDEX         25491
22-05-2006 10:56:10 PK_T3C1         INDEX         25492
22-05-2006 10:56:10 PK_T4C1         INDEX         25493
22-05-2006 10:56:56 PK_T5           INDEX         25494
22-05-2006 10:49:55 TAB1            TABLE         25485
22-05-2006 10:51:13 TAB2            TABLE         25486
22-05-2006 10:51:36 TAB3            TABLE         25487
22-05-2006 10:51:57 TAB4            TABLE         25488
22-05-2001 10:53:45 TAB5            TABLE         25489
 
 
Comment
~~~~~~~
The CONSISTENT=YES parameter is used to ensure that the data seen by Export
is consistent to a single point in time and doesn't change during the
execution of the export command.
 
 
fix:
 
If the object creation time-stamp is still greater
than the system time, then export the object's data
without using the CONSISTENT=YES parameter, drop the
object, recreate the object so it has a new creation time-stamp, import the object's data, and
resume work.
========================================
 
Here is what I did
 
scott@ORA9I> select sal from emp1;
 
       SAL