Delete the 500 Oldest Files in a Directory (Oracle DBA)
Purpose
Oracle's diagnostic destination (DIAGNOSTIC_DEST), audit file destination (AUDIT_FILE_DEST, also known as adump), and per-process trace directories collect thousands of small files over time. Inode counts, not bytes, are usually the limit that bites first — many filesystems cap directory entries or per-filesystem inodes well below the disk's byte capacity. The one-liner below — taken straight from the shutdownabort.com DBA Quick Guides (Andrew Barry, 2007–2013, preserved via the Wayback Machine as the anchor source for this post) — deletes the 500 oldest files in the current directory in one shot. It is the quickest way to free up enough headroom to keep the database running while a longer-term retention strategy is put in place.
Code
    rm -f `ls -tr | head -500`
Breakdown of Code
Three pieces, piped together via command substitution.
- `ls -tr` — list files in the current directory, sorted by modification time, reverse order. Oldest files first, newest files last. The `-t` flag sorts by mtime; the `-r` flag reverses the default newest-first order.
- `head -500` — take the first 500 lines from the input. Combined with the reversed sort above, this yields the 500 oldest filenames.
- `` `...` `` — backtick command substitution. The shell runs the inner pipeline first, captures its output, and substitutes that output as arguments to the outer command.
- `rm -f` — delete the listed files. The `-f` flag suppresses error messages for files that no longer exist (a race with another cleanup process) and skips the per-file confirmation prompt.
The end result: the 500 oldest files in the current directory are deleted in one rm call.
How It Works
ls -tr reads the directory entries, sorts them by st_mtime, reverses the order so the oldest is first, and writes the filenames one per line to standard output. head -500 takes the first 500 of those filenames. Backtick substitution turns those 500 filenames into one big argument list passed to rm -f.
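The mechanics can be exercised safely in a scratch directory before pointing the one-liner at production — a minimal sketch, with the count shrunk from 500 to 5 and all paths and filenames illustrative:

```shell
# Sketch: the one-liner's mechanics in a throwaway directory.
workdir=$(mktemp -d)
cd "$workdir"

# Ten files with staggered mtimes: file_01 is oldest, file_10 newest.
for i in 01 02 03 04 05 06 07 08 09 10; do
  touch -t "2024010100$i" "file_$i"   # -t CCYYMMDDhhmm sets the mtime
done

# Same shape as the one-liner, keeping the 5 newest instead of 495.
rm -f `ls -tr | head -5`

ls    # file_06 .. file_10 remain
```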
That last step has three practical limits worth knowing:
- `ARG_MAX` — every UNIX kernel caps the maximum length of an argument list (commonly 128 KB to 2 MB). At ~50 bytes per filename, 500 filenames is comfortably under the limit. For `head -50000`, the `rm` call may exceed `ARG_MAX` and the shell will report `Argument list too long`. Use `xargs` for the large case (see Common Variations).
- Hidden files are skipped — `ls` without `-a` does not list dotfiles. That is fine for trace dirs (Oracle does not write dotfiles there) but worth knowing on a generic directory.
- Whitespace splits filenames — backtick substitution word-splits its output, so a filename containing a space reaches `rm` as two bogus arguments. Oracle trace and audit filenames never contain whitespace, but on a generic directory prefer a `find`-based variant.
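Both the argument-list cap and the dotfile behavior are easy to confirm on a given host — a quick sketch, run in a throwaway directory with illustrative filenames:

```shell
# The kernel's argument-list ceiling, in bytes (value varies by platform).
getconf ARG_MAX

# Dotfiles never enter the pipeline: ls without -a skips them.
workdir=$(mktemp -d)
cd "$workdir"
touch .hidden_trace visible_trace
ls -tr    # prints only: visible_trace
```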
Key Points
- Run in the right directory first — `cd` to the trace directory or `audit_file_dest` before running. The one-liner has no path argument and operates on `.` only.
- Mtime, not creation time — UNIX traditionally tracks modification time, not creation time. For Oracle trace and audit files this is fine: each file is written once and never touched again, so mtime equals creation time.
- No safety net — there is no dry-run flag in the one-liner above. Always run `ls -tr | head -500` on its own first to inspect the list.
- Does not recurse — operates on the current directory only. Trace files in `<diag>/rdbms/<dbname>/<sid>/trace` are siblings, not nested, so flat operation is the correct behavior here.
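The dry-run habit can be made hard to skip by wrapping the one-liner in a small helper — a sketch only; `prune_oldest` is a hypothetical function name, not a standard utility:

```shell
# Hypothetical wrapper: show the candidate list, delete only on an explicit "yes".
prune_oldest() {
  count=${1:-500}                       # default matches the one-liner
  echo "Candidates for deletion (oldest first):"
  ls -tr | head -"$count"
  printf 'Delete these files? [yes/no] '
  read -r answer
  if [ "$answer" = yes ]; then
    rm -f `ls -tr | head -"$count"`
  else
    echo "Nothing deleted."
  fi
}
```

Running `prune_oldest 500` in the target directory prints the list before anything is removed, so the inspect-first step cannot be skipped by accident.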
Why a DBA cares — typical Oracle directories
This one-liner is most useful against three categories of Oracle output directory.
- `AUDIT_FILE_DEST` (adump) — every `connect / as sysdba` writes a file. On a busy host that runs hundreds of monitoring scripts a day, adump accumulates tens of thousands of files in a few weeks. Punchline: the database itself does not need any of these files for normal operation.
- `DIAGNOSTIC_DEST/diag/rdbms/<dbname>/<sid>/trace` — every error, every shared-server dump, every event-triggered trace lands here. ADR retention policies (`adrci`) handle this for current versions, but if the policy is unset (or set to a very long horizon), the directory grows without bound.
- Custom application log directories — anywhere a `UTL_FILE` PL/SQL routine, a Java external table writer, or an OS-level cron job is dropping per-run output without rotation.
Common Variations
Adjust the count for a heavier prune:
    # Delete the 5000 oldest files
    rm -f `ls -tr | head -5000`

    # Or use xargs for very large lists (avoids ARG_MAX)
    ls -tr | head -50000 | xargs rm -f
Restrict by file age instead of count — useful when you want a rolling time window:
    # Delete every .trc file older than 14 days
    find . -maxdepth 1 -name "*.trc" -mtime +14 -delete
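On a generic directory where filenames may contain spaces, a NUL-delimited variant of the same prune is safer — a sketch in a throwaway directory (the `-r` flag, which makes `xargs` skip the `rm` when nothing matches, is a GNU extension):

```shell
# Whitespace-safe prune: -print0 / xargs -0 delimit names with NUL bytes,
# so filenames containing spaces reach rm intact.
cd "$(mktemp -d)"
touch -t 202001010000 "spaced name.trc"   # backdated well past 14 days
touch fresh.trc                            # mtime now: inside the window

find . -maxdepth 1 -name "*.trc" -mtime +14 -print0 | xargs -0 -r rm -f

ls    # only fresh.trc remains
```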
Inspect before deleting (always a good idea on production):
    ls -tr | head -500 | head -20   # sanity-check the first 20 names
    ls -tr | head -500 | wc -l      # confirm 500 (or fewer)
For the diagnostic trace directory, prefer Oracle's own ADR retention rather than rm once the database is in a steady state (note that adrci manages the ADR tree, not adump):
    adrci> set home diag/rdbms/scr10/scr10
    adrci> show control                                   -- view current retention policies
    adrci> set control (SHORTP_POLICY = 168, LONGP_POLICY = 720)
    adrci> purge -age 10080 -type TRACE                   -- purge -age is in minutes; 10080 = 7 days
Important Considerations
Never run this in $ORACLE_HOME or any subdirectory of it. The one-liner does not distinguish "log file" from "binary I cannot afford to lose" — it deletes the 500 oldest files of any kind in the current directory. The cost of cd-ing to the wrong place before running it is a broken Oracle install.
For long-term, hands-off retention, switch to adrci (ADR-managed directories), the audit_trail parameter (database-resident audit instead of file-based), or a cron job built around find -mtime (covered in this site's Find Oracle Archive Logs Older Than N Days post). Use the 500-oldest one-liner when the directory is already full and you need it to not be full in the next 30 seconds.
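A hands-off version of the `find -mtime` approach is a nightly cron entry — a sketch only; the path follows this post's `<diag>/rdbms/<dbname>/<sid>/trace` placeholder convention and must be replaced with a real trace directory before use:

```shell
# Hypothetical crontab line: every night at 02:30, delete .trc files
# older than 14 days in one specific trace directory (non-recursive).
30 2 * * * find <diag>/rdbms/<dbname>/<sid>/trace -maxdepth 1 -name '*.trc' -mtime +14 -delete
```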
References
- ls(1) — UNIX `ls` command manual page — full description of the `-t` and `-r` sort flags
- Oracle Database Administrator's Guide — Managing Diagnostic Data — ADR retention, `adrci`, and the supported way to prune Oracle's own diagnostic directories
- Oracle Database Reference — AUDIT_FILE_DEST init parameter — controls where SYS audit files (adump) are written
- shutdownabort.com — Miscellaneous Useful UNIX (Wayback, 2013-01-15) — original source of the one-liner above