Oracle exp/imp Parameter File: Direct Path User-to-User Data Migration

Using a parameter file (parfile) with Oracle's legacy exp and imp utilities is a clean, repeatable way to move schema data between users, especially when you need speed, control over what gets transferred, and a consistent log trail. This post breaks down a real-world Oracle export/import parameter file that uses OS authentication, direct path mode, and selective object filtering.

The Parameter File

This is the complete parameter file referenced in this post:

userid=/
file=/u03/scr9/oradata/expimp/exppipe
log=/u03/scr9/oradata/expimp/imp_scr9.log
rows=y
constraints=n
ignore=y
indexes=n
grants=n
full=n
feedback=10000
direct=y
fromuser=andy,user2
touser=andy,user2

To use the file with export or import:

exp parfile=<file>
imp parfile=<file>

What This File Does

This parameter file instructs Oracle's legacy exp/imp utilities to export data rows from two schemas (andy and user2) using a fast direct-path read, and then import that data back into the same schema names in the target database, skipping indexes, constraints, and grants to keep the process lean and fast.


Line-by-Line Breakdown

userid=/

This uses OS-level authentication: no username or password is required in the file. The forward slash (/) tells Oracle to authenticate the current OS user against the database using OS authentication (typically an externally identified OPS$-style account, with the authentication method configured in sqlnet.ora). This is a common and secure pattern for DBA scripts that run as the oracle OS user, because it avoids storing plaintext passwords.
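
As a sketch, an externally identified account is usually set up like this (the OS user name oracle and the default OPS$ prefix are assumptions about the environment, not taken from the parfile above):

```sql
-- init parameter controlling the prefix (OPS$ is the default):
--   os_authent_prefix = "OPS$"

-- Map the OS user "oracle" to a database account with no password:
CREATE USER ops$oracle IDENTIFIED EXTERNALLY;
GRANT CREATE SESSION TO ops$oracle;

-- After this, "sqlplus /" or userid=/ logs the OS user oracle in
-- without any credentials stored on disk.
```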


file=/u03/scr9/oradata/expimp/exppipe

This is the dump file path: where the export writes data and where the import reads it. The filename exppipe strongly suggests this is a named pipe (a Unix FIFO file), which means the export and import can be run simultaneously, streaming data directly from the source database to the target without writing a large intermediate dump file to disk. This is a powerful technique for high-volume migrations with limited disk space.
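
A minimal sketch of the streaming pattern, with gzip/gunzip standing in for exp/imp so it runs anywhere (the /tmp paths are illustrative; with Oracle installed, the reader would be imp parfile=<file> and the writer exp parfile=<file>):

```shell
#!/bin/sh
# Demo of the named-pipe (FIFO) streaming pattern.
PIPE=/tmp/exppipe_demo
OUT=/tmp/exppipe_demo.out
mkfifo "$PIPE"

# Reader side: starts first and blocks until a writer opens the pipe.
gunzip -c < "$PIPE" > "$OUT" &

# Writer side: streams compressed data into the pipe. No large
# intermediate file ever lands on disk between the two processes.
printf 'demo rows\n' | gzip -c > "$PIPE"

wait               # wait for the background reader to finish
cat "$OUT"         # prints: demo rows
rm -f "$PIPE"
```

In the real migration, the two commands would run in separate terminal sessions against the same pipe path, exactly as the parfile's file= value implies.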


log=/u03/scr9/oradata/expimp/imp_scr9.log

This sets the log file where Oracle writes the full output of the operation. Keeping logs is critical for DBA audits, post-migration checks, and debugging any row or object failures. The _scr9 in the filename is likely a reference to the source database name or SID (scr9), making it easy to trace which database the import came from.


rows=y

Tells Oracle to export and import the actual data rows in the tables. Setting this to n would export only the schema structure (DDL), which is useful for schema-only clones. Here it is y, meaning full row data is transferred.
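
For a structure-only clone, the same idea works with the row transfer switched off. A sketch (the dump file name here is illustrative):

```
userid=/
file=/u03/scr9/oradata/expimp/ddl_only.dmp
rows=n
constraints=y
indexes=y
grants=y
full=n
fromuser=andy,user2
touser=andy,user2
```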


constraints=n

Skips all table constraints (foreign keys, check constraints, etc.) during the operation. This is a deliberate performance choice: it avoids constraint-violation errors mid-import, especially when loading tables in an order that might temporarily violate referential integrity. Constraints can be re-enabled or re-created manually after data is loaded.


ignore=y

Tells the import utility to ignore object creation errors, most commonly "table already exists" errors. Without this, if a target table already exists, the import would stop for that object. With ignore=y, it continues and loads rows even if the table structure is already present. This is standard practice when doing a data refresh into an existing schema.


indexes=n

Skips index creation during the import. Re-creating indexes during a large import dramatically slows performance. The best practice is to import the raw data first, then build indexes afterward. This parameter makes that two-step process easy.
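
For the second step, the legacy imp utility can generate the index DDL for you via its INDEXFILE parameter: a run with INDEXFILE writes CREATE INDEX statements to a script instead of importing anything. A sketch, assuming a regular dump file rather than the pipe (file names are illustrative):

```
userid=/
file=/u03/scr9/oradata/expimp/scr9.dmp
indexfile=/u03/scr9/oradata/expimp/create_indexes.sql
fromuser=andy,user2
touser=andy,user2
```

The resulting create_indexes.sql can then be run after the data load completes.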


grants=n

Skips object-level grants (privileges given to other users on these objects). This prevents unintentional privilege replication into the target environment, which is important when migrating between production, test, or development databases where access control must differ.


full=n

This is not a full database export/import. Setting full=n means the operation is scoped to specific users defined by fromuser/touser, not the entire database. A full export (full=y) requires the EXP_FULL_DATABASE role and exports all schemas, system objects, and tablespace definitions.


feedback=10000

Prints a progress dot every 10,000 rows to the screen and log. This is useful during long imports to confirm the process is still running and to estimate completion time. For very large tables, increasing this number reduces output noise.


direct=y

This enables direct path export, which bypasses the Oracle buffer cache and reads data blocks directly from the datafiles. It is significantly faster than conventional path export (direct=n) for large tables because it skips the SQL evaluation layer, using the same mechanism as SQL*Loader's direct path load. Note that direct=y has some restrictions: it cannot be used with certain column types (object types, for example, in older releases), and it is incompatible with the QUERY parameter, which applies only to conventional path exports.


fromuser=andy,user2

Specifies the source schemas to export data from. Both andy and user2 schemas are exported. Multiple users are separated by commas. The user running the export must have the EXP_FULL_DATABASE role or be exporting their own schema.


touser=andy,user2

Specifies the target schemas to import data into. The mapping is positional: andy maps to andy, and user2 maps to user2. If you wanted to remap schemas (for example, import andy's data into user3), you would write touser=user3,user2. This is a simple but powerful remapping feature of the legacy imp utility.
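
As a sketch, remapping andy's data into a hypothetical user3 while leaving user2 unchanged would alter only the last two lines of the parfile:

```
fromuser=andy,user2
touser=user3,user2
```

The target accounts must already exist in the destination database; imp will not create them.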


Key Points and Insights

  • Named pipe advantage: Using a Unix named pipe for the file= value allows exp and imp to run simultaneously in separate terminal sessions, with no large intermediate dump file needed on disk, ideal for storage-constrained environments.
  • Performance order: The combination of direct=y, indexes=n, and constraints=n represents the fastest possible import pattern: load raw data first, then add structure.
  • Security: userid=/ avoids storing credentials in the file, which matters for scripts committed to version control or stored in shared directories.
  • Selective migration: full=n combined with fromuser/touser gives precise control: only the specified schemas move, not the whole database.
  • Modern alternative: For Oracle 10g and above, Oracle recommends using Data Pump (expdp/impdp) instead of the legacy exp/imp. Data Pump is faster, supports parallel operations, and offers more granular filtering. Oracle Data Pump also supports a PARFILE parameter with the same concept.
  • Legacy compatibility: Oracle Data Pump includes a legacy mode that recognizes exp/imp-style parameters, so existing scripts using this pattern can often run without changes.
  • Parameter file naming: Oracle places no restriction on the file name or extension for a parfile, but using .par as the extension is a widely adopted convention that makes files easy to identify.
  • Command-line override: Any parameter passed on the command line after parfile=<file> will override the same parameter inside the file. For example, imp parfile=mypars.par indexes=y will override indexes=n in the file.
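
For comparison, a roughly equivalent Data Pump import parfile might look like this (the EXPIMP_DIR directory object and file names are illustrative assumptions; credentials are passed on the impdp command line, e.g. impdp / parfile=dp_imp.par, rather than inside the file):

```
directory=EXPIMP_DIR
dumpfile=scr9.dmp
logfile=imp_scr9.log
schemas=andy,user2
exclude=index
exclude=constraint
exclude=grant
table_exists_action=append
```

Schema remapping in Data Pump is explicit rather than positional, via remap_schema=andy:user3.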
