
Troubleshoot disk space issues

Is your Linux disk full? Learn how to find large files and directories (`df`, `du`, `find`) and high inode usage. This guide covers cleanup for common paths and cPanel specifics.

This guide provides comprehensive instructions for troubleshooting and resolving disk space issues on your Linux server. Running out of disk space can lead to service disruptions, performance degradation, and data loss. Following the steps outlined below will help you identify what is consuming disk space or inodes and how to safely clean it up.

Audience: This guide is intended for users with root or sudo access to their server via SSH. Basic familiarity with Linux command-line operations is recommended.

Before You Begin:

  • Backup Your Data: Before making any significant changes, especially those involving file deletion, ensure you have a recent backup of your important data. We recommend keeping off-server backups (for example, local copies on your own machine) of your most important files.
  • Use screen or tmux: Many disk analysis and cleanup commands can take a long time to run, especially on large filesystems or busy servers. It’s highly recommended to run these commands within a screen or tmux session to prevent them from being interrupted if your SSH connection drops. A minimal example follows this list.
  • Proceed with Caution: Deleting files can have unintended consequences. Always double-check commands and paths before execution. If unsure, consult with our support team.
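
For example, a minimal screen workflow might look like this (tmux is analogous):

    screen -S diskcheck        # start a named session
    # ...run your du/find commands here...
    # Detach with Ctrl+A then D; if your connection drops, reattach with:
    screen -r diskcheck
    # tmux equivalent: tmux new -s diskcheck, later tmux attach -t diskcheck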

Part 1: Identifying disk space utilization

The first step is to determine which partitions are full and what directories or files are consuming the most space.

Using df (disk free)

The df command displays an overview of filesystem disk space usage.

  • To see a human-readable summary of all mounted filesystems: df -h
  • To quickly find partitions over a specific threshold (e.g., 90% full): df -h | sed 1d | awk '{if (int($5) > 90) print $6 " is at " $5;}' (Change 90 to your desired percentage.)

If df shows critical disk usage on a partition, the next step is to use du to find out what’s taking up space within that partition.

Using du (disk usage)

The du command estimates file and directory space usage.

  • Basic Check from Root: To get a summary of disk usage for all directories directly under the root (/) and sort them by size (largest first): du -s /* | sort -rnk1
    • du -s: Display only a total for each specified file/directory.
    • /*: Check all directories directly under root.
    • sort -rnk1: Sort numerically (-n) in reverse order (-r) based on the first field (-k1).
  • Checking a Specific Partition/Directory (Recommended): To check total disk usage within the current directory, showing human-readable sizes, staying on the same filesystem, and limiting depth to the current directory: du -hx --max-depth=1 . If --max-depth=1 is not supported on your du version, try -d1 like this: du -hxd1 .
  • To sort the output by size (largest first): du -hx --max-depth=1 . | sort -rh
    • du -h: Display sizes in human-readable format (e.g., 1K, 234M, 2G).
    • du -x: Skip directories on different filesystems. This is important to avoid counting mounted backups or network shares as part of the local partition’s usage.
    • du --max-depth=1 (or -d1): Restrict the display to directories at the specified depth (1 means only the directories immediately within the target).
    • .: Represents the current directory. You can replace this with any path (e.g., /var, /home).
    • sort -r: Reverse the sort order (largest first).
    • sort -h: Sort by human-readable numbers (e.g., 1K, 2M, 3G).
    Example Output: If you run du -xh --max-depth=1 in /usr:
      root@beast [/usr]# du -xh --max-depth 1
      16k   ./lost+found
      101M  ./bin
      424M  ./lib
      320k  ./libexec
      15M   ./sbin
      449M  ./share
      26M   ./X11R6
      4.0k  ./dict
      8.0k  ./etc
      8.0k  ./games
      24M   ./include
      2.2G  ./local    <-- Majority of space in /usr
      7.3M  ./src
      3.2M  ./kerberos
      76k   ./doc
      604k  ./man
      8.8M  ./i386-glibc21-linux
      3.2G  .          <-- Total for /usr
    In this example, /usr/local uses 2.2G. You would then cd /usr/local and run the command again to drill down further.
  • Excluding virtfs on cPanel Servers: If you are seeing unusual results for /home on cPanel servers, virtfs (used for Jailed Shell environments) can skew the output. Exclude it:
    du -sh --exclude=/home/virtfs /home/* | sort -rh
    Or, for a more specific path:
    du -hx --max-depth=1 --exclude=/home/virtfs /home | sort -rh
  • Excluding Other Directories: To skip a specific directory (e.g., /backup/ when checking subdirectories of root): du -s /*/* --exclude=/backup/* | sort -rnk1
  • Summing Specific File Types: To find the total size of files matching a pattern (e.g., all .tar.gz files in /home): du -sch /home/*.tar.gz
    • -c: Produce a grand total.
  • Deeper Recursion to Find Largest Folders: This command starts in the current directory (./) and shows the largest folders with several layers of recursion. Note: This can take a long time to run.
    find ./ -type d -print0 | xargs -0 du -s | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {} | sort -rn
    To display absolute paths, replace ./ with `pwd`:
    find `pwd` -type d -print0 | xargs -0 du -s | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {} | sort -rn
    Sample Output (run from /home):
      434G .
      28G ./temp
      20G ./theshalo
      18G ./fbpickup
      17G ./theshalo/mail/theshalomgroup.com
      17G ./theshalo/mail
      17G ./johnsons
      16G ./theh2og7
      15G ./cuscore2
      13G ./dwihitp5
    This output indicates that the cPanel account theshalo is using significant space, primarily in its mail directory. You can then cd into /home/theshalo and run the command again or use du -sh * | grep G to find gigabyte-sized files/folders.
  • Quickly List Gigabyte-Sized Files/Folders: du -sh * | grep G Caution: This may show false positives if a file or folder name itself contains a capital “G”.
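
To avoid that false-positive issue entirely, you can skip the grep and simply sort by human-readable size; a minimal alternative:

    du -sh -- * 2>/dev/null | sort -rh | head -n 20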

Finding large files quickly with find

The find command is excellent for locating specific types of files.

  • Find Files Over a Certain Size: This command finds all files larger than 200MB in a specified $directory, lists them with details, and provides a grand total. Replace /$directory with the actual path (e.g., /home, /var).
find /$directory -type f -size +200M -exec ls -la '{}' \; | awk 'BEGIN{pref[1]="K";pref[2]="M";pref[3]="G";pref[4]="T"} {x = $5; y = 0; while( x > 1024 ) { x = x/1024; y++; } printf("%g%s\t%s %s %s\n",int(x*10+.5)/10,pref[y],$6,$7,$9); sum += $5} END {print "Total:" sum/1024**3 " G" }'
  • Find Large Archive Files: This command hunts for archive files (.tar, .zip, .tar.gz, .jpa) larger than 100MB in /home/* directories.
find /home/* -type f -size +100M -regex '.*\.\(tar\|zip\|tar\.gz\|jpa\)' -exec ls -la '{}' \; | awk 'BEGIN{pref[1]="K";pref[2]="M";pref[3]="G";pref[4]="T"} {x = $5; y = 0; while( x > 1024 ) { x = x/1024; y++; } printf("%g%s\t%s %s %s\n",int(x*10+.5)/10,pref[y],$6,$7,$9); sum += $5} END {print "Total:" sum/1024**3 " G" }'

Sample Output:

722.2M Apr 19 /home/foo2/public_html/cdb.zip
2G     Apr 19 /home/foo2/public_html/nat.zip
...
Total:46.0076 G
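
If the awk one-liner is more than you need, a hedged alternative on systems with GNU find and a coreutils version that includes numfmt lists the largest files with human-readable sizes:

    # List the 20 largest files over 100MB under /home (adjust path and size threshold as needed)
    find /home -xdev -type f -size +100M -printf '%s\t%p\n' | sort -rn | head -n 20 | numfmt --to=iec --field=1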

Understanding discrepancies: deleted files held open

Sometimes df shows high disk usage, but du doesn’t account for all of it. This can happen when files (often logs) are deleted, but a process still has them open. The space isn’t freed until the process closes the file handle or is restarted.

  • Identify Deleted Files Held Open: lsof | awk '$4 == "DEL" || $10 == "(deleted)" { print $0 }'
  • Sample Output (column headers added for clarity):
COMMAND  PID  USER  FD  TYPE  DEV  SIZE  NODE  NAME
httpd    546  root  3u  REG   8,8     0    46  /tmp/ZCUDfVfcQp (deleted)
mysqld   844  root  7u  REG   8,8  3000    25  /tmp/ibMnPjAD (deleted)
  • The first column shows the COMMAND and the second shows the PID (Process ID).
  • The SIZE column shows the space still occupied.
  • If you see large files, restarting the corresponding service (e.g., httpd, mysqld) will release the space. Caution: Restarting services will cause a brief interruption.
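
If restarting the service is not an option right away, the space held by a deleted-but-open file can often be reclaimed by truncating it through /proc. This is a hedged sketch; the PID and FD values are placeholders taken from the lsof output above (drop the trailing r/w/u letter from the FD column):

    PID=12345   # placeholder: process ID from the lsof output
    FD=7        # placeholder: file descriptor number (e.g., "7u" becomes 7)
    ls -l /proc/$PID/fd/$FD   # confirm it points at the "(deleted)" file
    : > /proc/$PID/fd/$FD     # truncate the open file to zero bytes, freeing the space
    df -h                     # verify the space was released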

Part 2: Identifying inode utilization

Disk space issues aren’t always about raw byte usage; they can also be due to running out of inodes. An inode is a data structure that stores information about a file or directory (metadata), except for its name and actual data. Each file and directory uses one inode. If you have a very large number of small files, you can exhaust inodes before running out of disk space.

Checking Inode Usage with df -i

  • To see inode usage for all mounted filesystems:
    df -i
  • Or, for human-readable inode counts:
    df -ih
  • Look for any partition showing 100% (or near 100%) in the IUse% column.

Finding directories with high Inode counts

The quick and simple method (directory file size)

A directory is a special file containing a list of its contents and their inode numbers. Directories with many files have larger directory file sizes (not the sum of the contents).

  • Check Directory File Size: ls -ld /path/to/directory
    Example:
      [root@host8 ~]# ls -lhd /home/c0000006/mail/new
      drwxr-x--x 2 c0000006 c0000006 366M Apr 21 16:20 /home/c0000006/mail/new/
    A directory file size in Megabytes (like 366M above) indicates a very large number of files.
  • Quickly Check Common Mail Directories: This command checks default mail inbox directories for large directory file sizes, often indicative of spam or unmanaged mailboxes: ls -lhd /home*/*/mail/{{new,cur},{.*_*/new,.*_*/cur},{.*cur,.*new}} 2>/dev/null | awk '{print $5, $9}' | sort -rh | head
  • Iterative Search for Large Directory Files: This loop searches progressively deeper into /home for directories with large file sizes. Press CTRL+C to stop once you find a culprit.
    for depth in {1..10}; do echo "Depth: $depth"; find /home -mindepth $depth -maxdepth $depth -not \( -path /home/virtfs -prune \) -type d -exec ls -lhd {} \; 2>/dev/null | awk '{print $5, $9}' | sort -rh | head; echo; done
    Example Hit:
      Depth: 5
      222M /home/douglasj/public_html/store/var/session   <---- HERE IS A HUGE SESSION DIRECTORY THAT NEEDS CLEANUP
    Note: This method is good for finding directories with many small files (high inode usage) but won’t find directories containing a few very large files.

The slow and reliable methods

  • Using du --inodes (if available): Some newer versions of du support the --inodes option. du --inodes -sch /*/* | sort -hk1 Adjust the path (/*/*) as needed.
  • Counting Files in Directories (Very Slow): If a partition (e.g., /) is full of inodes, this command will count files in each directory and sort them. This can be extremely slow on large filesystems. find . -xdev -type d | while read line; do echo "$( find "$line" -maxdepth 1 -type f | wc -l) $line"; done 2>/dev/null | sort -rn | less
    • . : Start in the current directory.
    • -xdev: Don’t descend directories on other filesystems.
    • You might want to redirect output to a file: ... > /root/inode_counts.txt
  • Search Specific Root Directories for High File Counts: find /var/ /opt/ /usr/ /tmp/ /etc/ /root/ /home* -xdev -type d | while read line; do echo "$(find "$line" -maxdepth 1 -type f | wc -l) $line"; done 2>/dev/null | sort -rn | head
  • Liquid Web Inode Script (if available or downloadable):
    wget -O /scripts/inodes.sh http://files.liquidweb.com/support/layer3-scripts/inodes.sh
    chmod +x /scripts/inodes.sh
    /scripts/inodes.sh
    Note: Availability of this script may vary. Always be cautious when downloading and running scripts from the internet.
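
A hedged, usually faster alternative with GNU find counts the files in each directory in a single pass instead of re-running find per directory:

    find /home -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -n 20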

Specific cPanel Inode issues

  • /.cpanel/comet: If this directory shows high inode usage, cPanel provides a script to clear unneeded files: /usr/local/cpanel/bin/purge_dead_comet_files
  • Default Email Accounts: Check user mail directories (e.g., /home/$user/mail/cur, /home/$user/mail/new).
  • Exim Mail Queue: Check the number of emails in the queue with exim -bpc. A very high count can contribute to inode issues indirectly.
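
If the queue count is high, a hedged inspection sketch (assuming the exiqsumm and exiqgrep helpers that ship with Exim are present, as they normally are on cPanel servers):

    exim -bpc                          # total number of queued messages
    exim -bp | exiqsumm | tail -n 20   # queue summary grouped by destination domain
    # Remove frozen (undeliverable) messages only after confirming they are unwanted:
    # exiqgrep -z -i | xargs -r exim -Mrm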

Part 3: Deleting large numbers of files

Removing a directory containing hundreds of thousands or millions of files can be problematic. ls might time out, and rm -rf can be slow and resource-intensive.

Warning

Removing files in bulk can be very dangerous and could cripple your server. Be sure you have good, off-server backups before proceeding.

  • Using find with xargs and rm (Safer and More Efficient): Run in a screen session. This command finds files and deletes them in batches of 1000.
    find /full/path/to/folder/ -type f -print0 | xargs -t -n 1000 -0 ionice -c3 rm -f
    • /full/path/to/folder/: Replace with the actual path.
    • -type f: Delete only files (safer than deleting directories this way initially).
    • -print0 and -0: Handle filenames with spaces or special characters correctly.
    • xargs -t: Print the command before executing it.
    • -n 1000: Process 1000 files at a time.
    • ionice -c3: Set I/O priority to idle to minimize impact on other processes.
    • rm -f: Force removal without prompting. Use with extreme caution.
  • Delete Files Older Than X Days: To delete files older than 30 days in a specific folder: find /full/path/to/folder/ -type f -mtime +30 -print0 | xargs -t -n 1000 -0 rm -f
    • -mtime +30: Files modified more than 30 days ago.
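
After the files are removed, the directory tree itself may still hold many now-empty subdirectories. A hedged follow-up that removes only empty directories, deepest first:

    find /full/path/to/folder/ -mindepth 1 -depth -type d -empty -print -delete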

Part 4: Cleaning specific partitions and directories (common culprits)

Once offending files/directories are identified, they can be cleaned. Below are common locations and strategies.


Cleaning the root (“/”) partition

The root partition is typically small.

  • whm-server-status files: Older WHM/cPanel versions might generate whm-server-status.(#) files in / due to service httpd fullstatus. These are safe to remove:
    rm -f /whm-server-status.*
    Update your monitoring scripts (e.g., loadwatch) to call the binary directly for fullstatus if this is an issue.
  • /lib/modules: Older systems might accumulate kernel module directories. You can remove modules for kernels you are not currently running.
    • 1. Check current kernel: uname -r
    • 2. List installed kernel packages (RPM-based systems like CentOS/AlmaLinux): rpm -qa kernel
    • 3. Remove old kernel packages (example):
      yum remove kernel-3.10.0-1062.el7.x86_64   # Replace with the actual old kernel version
      Caution: Do NOT remove the currently running kernel. After removal, check /boot/grub/grub.conf or /boot/grub2/grub.cfg to ensure it points to a valid kernel.
  • Non-standard directories: Sometimes data is mistakenly placed directly in /. This data should be moved to a larger partition (e.g., /home).
  • /root: Root’s home directory can fill with downloads, logs, etc.
    • Check for large files: du -sh /root/* | sort -rh
    • /root/loadMon: If using Loadmon, its logs can grow. Consider disabling written logs in its configuration.
  • /root/.cpanel/comet/: These files relate to cPanel’s mail queue manager interface.
    du -sh /root/.cpanel/comet/*
    To clear files older than 3 days:
    /usr/local/cpanel/bin/purge_dead_comet_files
    If the script doesn’t clear enough and files are recent, you may need to manually remove files from /root/.cpanel/comet/channels/. Make a backup first if / is critically full and you have space elsewhere.
    # Example: Move to a temporary location on /home if / is full
    # mkdir /home/comet_backup
    # mv /root/.cpanel/comet/channels/* /home/comet_backup/
    # After confirming system stability, you can remove the backup.
    Refer to cPanel documentation for the latest advice on this directory.

/tmp

This directory is for temporary files. Most files can be safely removed. Caution: Do not remove mysql.sock (or similar socket files) if present.

  • General Cleanup:
    # List files to see what’s there
    ls -lath /tmp
    # Carefully remove old/unneeded files
    # find /tmp -type f -mtime +7 -delete   # Example: delete files older than 7 days
  • PHP Session Files: PHP session files (often named sess_*) can accumulate.
    • 1. Confirm you’re deleting only session files older than 1 day by listing them first:
      find /tmp -type f -name 'sess_*' -mtime +1 | head
    • 2. Delete them:
      find /tmp -type f -name 'sess_*' -mtime +1 -delete
      Prevention: Ensure session.gc_maxlifetime in php.ini is set to a reasonable value (e.g., 1440 seconds for 24 minutes, or a few hours). Avoid excessively long lifetimes. cPanel handles session cleanup differently (see the /var/cpanel/php/sessions/ section below). A quick check of the current settings is sketched after this list.
  • APC/OpCache and Zend Conflicts: Conflicts between PHP caching extensions (like APC, OpCache) and Zend Guard Loader/Optimizer can lead to “hidden” disk use in /tmp by deleted-but-open files.
    • 1. Check for extensions: php -m | grep -E "apc|Zend Guard"
    • 2. If both are present, check for deleted APC temp files:
      lsof | grep '/tmp/apc..*(deleted)'
      Example Output:
        php 12306 accessco DEL REG 7,0 49546 /tmp/apc.jfcOJ6 (deleted)
      Resolution (with customer permission): Comment out Zend or ionCube lines in the global php.ini (e.g., /usr/local/lib/php.ini), then kill PHP processes and restart Apache.
      cp /usr/local/lib/php.ini{,.lwbkup_$(date +%F)}
      # Edit /usr/local/lib/php.ini and comment out Zend lines. Example:
      # ; zend_extension_manager.optimizer = "/usr/local/Zend/lib/Optimizer-3.3.3"
      # ; zend_extension_manager.optimizer_ts = "/usr/local/Zend/lib/Optimizer_TS-3.3.3"
      killall php               # Use with caution, may affect CLI scripts. A more targeted approach might be needed.
      service httpd restart     # or systemctl restart httpd
      php -m                    # Verify Zend is no longer loaded
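
As referenced above, a quick, hedged way to check the global PHP session settings before relying on garbage collection (the php.ini path matches the one used elsewhere in this guide; confirm the active file with php --ini):

    php --ini   # show which php.ini and conf.d files are actually loaded
    grep -E '^[[:space:]]*session\.(save_path|gc_maxlifetime|gc_probability|gc_divisor)' /usr/local/lib/php.ini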

/boot

Usually small and rarely fills up.

  • If full, it’s often due to too many old kernels. See /lib/modules cleanup or use yum autoremove (on Yum-based systems) if appropriate, or the kernel cleaner script mentioned earlier.
  • A full /boot can prevent kernel updates.
  • Refer to your distribution’s documentation for kernel management. For cPanel servers, you might use WHM’s Kernel Manager or command-line tools like package-cleanup (from yum-utils).
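
For example, on Yum-based cPanel servers with yum-utils installed, a hedged way to prune old kernels while keeping the newest two:

    uname -r                                  # note the running kernel first
    package-cleanup --oldkernels --count=2    # keep only the two newest kernels (yum-utils)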

/usr

Contains system software, libraries, and often cPanel components. Many items here are critical. If /usr is its own small partition and is full, an upgrade to a server with a different partitioning scheme might be necessary.

  • /usr/src and /usr/src/kernels: Kernel source directories. If not compiling custom kernels, these might be removable.
    • 1. Check current kernel: uname -a
    • 2. If kernel sources were installed via RPM (kernel-devel or kernel-source):
      rpm -qa kernel-devel kernel-source   # List packages
      # rpm -e <package_name>              # Remove specific old versions
      Caution: Be sure the server isn’t using a custom-compiled kernel dependent on these sources. Other source directories in /usr/src (not from Liquid Web) should be left alone. Consider tarring and gzipping them if space is critical.
  • /usr/local/apache/domlogs (cPanel): Domain access logs. Can grow very large.
    • Option 1: Configure Log Rotation in WHM: WHM » cPanel Log Rotation Configuration. Also, WHM » Tweak Settings for Apache log processing.
    • Option 2: Move to /home and Symlink (if /home has space):
      echo "Stopping Apache..."
      service httpd stop    # or systemctl stop httpd
      echo "Creating new directory and syncing logs..."
      mkdir -p /home/domlogs
      chown root:root /home/domlogs   # Or appropriate ownership for your system
      chmod 750 /home/domlogs         # Or appropriate permissions
      rsync -avHP /usr/local/apache/domlogs/ /home/domlogs/
      echo "Final sync after stopping Apache..."
      rsync -avHP /usr/local/apache/domlogs/ /home/domlogs/
      echo "Moving old domlogs and creating symlink..."
      mv /usr/local/apache/domlogs{,.bak_$(date +%F)}
      ln -s /home/domlogs /usr/local/apache/domlogs
      echo "Starting Apache..."
      service httpd start   # or systemctl start httpd
      echo "Verifying..."
      # Check that new logs appear in /usr/local/apache/domlogs (and thus in /home/domlogs)
      # After verification (e.g., after a day or two):
      # echo "Removing old domlogs backup (if verified)..."
      # rm -rf /usr/local/apache/domlogs.bak_YYYY-MM-DD
  • /usr/local/apache/logs (cPanel): Default Apache logs (e.g., error_log, access_log). Ensure log rotation is configured. The system logrotate utility often handles this via /etc/logrotate.d/httpd or /etc/logrotate.d/apache. Sample /etc/logrotate.d/httpd configuration:
    /usr/local/apache/logs/*log {
        compress
        weekly
        notifempty
        missingok
        rotate 4   # Keep 4 weeks of logs
        sharedscripts
        postrotate
            /bin/kill -HUP `cat /usr/local/apache/logs/httpd.pid 2>/dev/null` 2> /dev/null || true
        endscript
    }
    • Force rotation: logrotate -f /etc/logrotate.d/httpd
  • /usr/local/cpanel: CRITICAL CPANEL DIRECTORY. DO NOT SYMLINK THE ENTIRE /usr/local/cpanel DIRECTORY. This will break cPanel. Specific subdirectories can sometimes be managed.
  • /usr/local/cpanel-rollback: cPanel update rollback files. Check modification dates. Delete all but the latest 1 or 2 backup sets if space is needed.
    ls -lath /usr/local/cpanel-rollback/
    # rm -rf /usr/local/cpanel-rollback/YYYY-MM-DD_HH-MM-SS   # Example
  • /usr/local/cpanel/3rdparty/mailman: Mailman mailing list software.
    • logs/: Can be large. If logs are unneeded, they can be cleared (with caution).
    • archives/: Customer data (list archives). Do not delete. Can be symlinked to /home:
      echo "Stopping cPanel/Mailman..."
      service cpanel stop   # or /scripts/restartsrv_cpsrvd --stop; /scripts/restartsrv_mailman --stop
      echo "Creating new directory and syncing archives..."
      mkdir -p /home/mailman/archives
      rsync -avHl /usr/local/cpanel/3rdparty/mailman/archives/ /home/mailman/archives/
      echo "Final sync..."
      rsync -avHl /usr/local/cpanel/3rdparty/mailman/archives/ /home/mailman/archives/
      echo "Moving old archives and creating symlink..."
      mv /usr/local/cpanel/3rdparty/mailman/archives{,.bak_$(date +%F)}
      ln -s /home/mailman/archives /usr/local/cpanel/3rdparty/mailman/archives
      echo "Starting cPanel/Mailman..."
      service cpanel start  # or /scripts/restartsrv_cpsrvd --start; /scripts/restartsrv_mailman --start
      # After verification:
      # rm -rf /usr/local/cpanel/3rdparty/mailman/archives.bak_YYYY-MM-DD
    • data/: Can fill with pickled held messages. To discard:
      cd /usr/local/cpanel/3rdparty/mailman
      # Replace <listname> with the actual list name
      # bin/discard data/heldmsg-<listname>-*
      # If too many files for wildcard expansion:
      # find ./data -name 'heldmsg-<listname>-*' -print0 | xargs -0 bin/discard
      Note: The discard script may not always be reliable.
  • /usr/local/cpanel/logs: cPanel service logs. Can grow large.
    • Option 1: Configure Log Rotation in WHM: WHM » cPanel Log Rotation Configuration.
    • Option 2: Symlink to /home (similar to domlogs or mailman/archives):
      # Ensure cPanel services are stopped (service cpanel stop or individual services)
      # mkdir -p /home/cpanel/logs
      # rsync -avHl /usr/local/cpanel/logs/ /home/cpanel/logs/
      # mv /usr/local/cpanel/logs{,.bak_$(date +%F)}
      # ln -s /home/cpanel/logs /usr/local/cpanel/logs
      # service cpanel start
      # After verification: rm -rf /usr/local/cpanel/logs.bak_YYYY-MM-DD
      Append-only attribute: If rm -rf fails on the .bak directory due to the append-only attribute (a):
      lsattr /usr/local/cpanel/logs.bak_YYYY-MM-DD/*
      chattr -a /usr/local/cpanel/logs.bak_YYYY-MM-DD/*
      rm -rf /usr/local/cpanel/logs.bak_YYYY-MM-DD
  • /usr/local/cpanel/src: Source code for software cPanel builds. The 3rdparty/ subdirectory can be large. It’s generally safe to remove contents, as upcp will repopulate if needed. Moving is safer:
    mkdir -p /home/cpanel/src/
    rsync -aHlP /usr/local/cpanel/src/ /home/cpanel/src/
    rm -rf /usr/local/cpanel/src/   # Original path
    ln -s /home/cpanel/src/ /usr/local/cpanel/src
    Note: An upcp --force might delete this symlink and recreate the original directory.
  • /usr/local/jakarta (Tomcat): If Tomcat is used, catalina.out can grow large. Adjust paths if your Tomcat installation differs.
    /usr/local/jakarta/tomcat/bin/shutdown.sh
    rm /usr/local/jakarta/tomcat/logs/catalina.out
    /usr/local/jakarta/tomcat/bin/startup.sh
    Configure log rotation for Tomcat if possible.
  • /usr/share/clamav/: ClamAV antivirus definitions. If large and /usr/share is on a constrained partition:
    # Stop clamd/freshclam first
    # systemctl stop clamd@scan freshclam   # Example for systemd
    # mv /usr/share/clamav /home/usr_share_clamav
    # ln -s /home/usr_share_clamav /usr/share/clamav
    # systemctl start clamd@scan freshclam
    # /etc/init.d/exim restart

/var

This directory often contains logs, mail spools, temporary files, and databases.

  • /var/log/journal/ (systemd): Systemd journal logs.
    • 1. Check usage: journalctl --disk-usage
    • 2. Reduce size (destructive commands):
      • By size (e.g., to 200MB): journalctl --vacuum-size=200M
      • By time (e.g., keep last 1 year): journalctl --vacuum-time=1years (Units: years, months, weeks, days, h, m, s)
    • 3. Persistent Configuration (Recommended): Edit or create /etc/systemd/journald.conf or a file in /etc/systemd/journald.conf.d/:
    [Journal]
    # Example: Limit persistent storage to 500MB
    SystemMaxUse=500M
    # Example: Limit volatile /run/log/journal to 100MB
    # RuntimeMaxUse=100M
    Then reload: systemctl restart systemd-journald
  • /var/cpanel/bandwidth (cPanel): Bandwidth data. Can be moved to /home:
    echo "Stopping cpanellogd and tailwatchd..."
    killall cpanellogd   # Or pkill cpanellogd
    /usr/local/cpanel/bin/tailwatchd --stop
    echo "Creating new directory and syncing data..."
    mkdir -p /home/bandwidth
    chown root:wheel /home/bandwidth   # Adjust ownership as needed
    chmod 755 /home/bandwidth
    rsync -avHl /var/cpanel/bandwidth/ /home/bandwidth/
    echo "Final sync..."
    rsync -avHl /var/cpanel/bandwidth/ /home/bandwidth/
    echo "Moving old directory and creating symlink..."
    mv /var/cpanel/bandwidth{,.bak_$(date +%F)}
    ln -s /home/bandwidth /var/cpanel/bandwidth
    echo "Starting tailwatchd..."
    /usr/local/cpanel/bin/tailwatchd --start
    # cpanellogd is usually started by tailwatchd or on its own schedule
    # After verification:
    # rm -rf /var/cpanel/bandwidth.bak_YYYY-MM-DD
  • /var/cpanel/php/sessions/ea-php* (cPanel with EasyApache PHP): PHP session files. cPanel uses a script (/usr/local/cpanel/scripts/clean_user_php_sessions) run by cron to clean these based on global PHP settings (session.gc_maxlifetime, session.save_path). Problems can arise if:
    1. Local .ini overrides: PHP scripts use a local save_path not checked by the cPanel script, or a local gc_maxlifetime is ignored by the script.
    2. Frameworks rename session files: e.g., CodeIgniter uses ci_session*. The cPanel script only looks for sess_*.
    Solutions:
      • Correct framework settings (e.g., CodeIgniter sess_cookie_name to sess_).
      • Enable built-in PHP garbage collection (e.g., session.gc_probability = 1, session.gc_divisor = 100) if cPanel’s script is insufficient. This runs on PHP requests and can add overhead.
    Manual Cleanup (if needed): Replace ea-phpXX with the correct PHP version (e.g., ea-php74, ea-php81).
      1. List old session files (e.g., older than 24 hours):
      find /var/cpanel/php/sessions/ea-phpXX -type f -name 'sess_*' -mtime +1 -print | head
      # To generate a script for deletion:
      # find /var/cpanel/php/sessions/ea-phpXX -type f -name 'sess_*' -mtime +1 -exec echo rm -f {} \; > /root/sessions-to-delete.sh
      # less /root/sessions-to-delete.sh   # REVIEW CAREFULLY
      # sh /root/sessions-to-delete.sh     # EXECUTE CAREFULLY
      If deleting drives up load too much (run in screen):
      PHP_SESSION_PATH="/var/cpanel/php/sessions/ea-phpXX"   # Set correct path
      mv "${PHP_SESSION_PATH}" "${PHP_SESSION_PATH}-cleanup"
      mkdir "${PHP_SESSION_PATH}"
      chmod 1733 "${PHP_SESSION_PATH}"        # Set correct permissions (sticky bit)
      chown root:root "${PHP_SESSION_PATH}"   # Set correct ownership
      # Slow cleanup of old files
      cd "${PHP_SESSION_PATH}-cleanup"
      # This loop deletes 10,000 files every 5 minutes. Adjust as needed.
      for i in {1..1000}; do ls -U | head -n 10000 | xargs -r rm -f; echo "Batch $i done, sleeping 300s..."; sleep 300; done
  • /var/cache/yum (Yum/DNF based systems): Package manager cache. Safe to clean.
    yum clean all
    # or for DNF systems (CentOS 8+, AlmaLinux, Rocky, etc.):
    # dnf clean all
  • /var/log: General system logs. Ensure logrotate is configured and compressing logs. Check /etc/logrotate.conf and files in /etc/logrotate.d/. Example for /etc/logrotate.d/syslog:
    /var/log/messages /var/log/secure /var/log/maillog /var/log/spooler /var/log/boot.log /var/log/cron {
        sharedscripts
        compress   # Ensure this line is present
        postrotate
            /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true   # For older syslog
            # For rsyslog: systemctl kill -s HUP rsyslogd.service
            # For syslog-ng: systemctl kill -s USR1 syslog-ng.service
        endscript
    }
    • Compress existing uncompressed rotated logs, e.g., gzip /var/log/*.1 /var/log/*.2 (and so on for higher rotation numbers).
    • Force rotation: logrotate -f /etc/logrotate.d/syslog
  • /var/log/audit/: (Usually audit.log or files in /var/log/audit/) Audit daemon logs. If not needed and very large:
    service auditd stop       # or systemctl stop auditd
    chkconfig auditd off      # or systemctl disable auditd
    rm -rf /var/log/audit/*   # Removes historical logs
    Caution: Disabling auditd may affect security compliance if it’s required for your environment.
  • /var/tmp: Temporary space, similar to /tmp. Generally safe to delete old files. Avoid deleting active socket files.
  • /var/lib/mysql: MySQL/MariaDB data directory. If this partition is full due to large databases, the data directory needs to be moved to a larger partition. See our guide on “MySQL Move Datadir” or consult MariaDB/MySQL documentation for the official procedure. This is a complex operation requiring service downtime.
  • /var/lib/mysql/eximstats/ (cPanel): Exim mail statistics database. Can grow very large.
    • Option 1: Truncate tables (if database isn’t too large/disk too full):
      mysql eximstats -e "TRUNCATE defers; TRUNCATE failures; TRUNCATE sends; TRUNCATE smtp;"
      mysqlcheck -r eximstats   # Repair tables
      # Restart MySQL if needed
    • Option 2: Recreate schema (if TRUNCATE is problematic): Caution: Do not use this method for other databases without understanding the implications.
      # Stop services that might write to eximstats (Exim, cPanel tasks)
      mysqldump --no-data eximstats > /root/eximstats_schema.sql   # Dumps only table structure
      # Verify eximstats_schema.sql contains CREATE TABLE statements
      # Stop MySQL: service mysqld stop   # or systemctl stop mariadb
      # Remove data files (BE VERY CAREFUL WITH THIS PATH)
      # rm -f /var/lib/mysql/eximstats/*
      # Start MySQL: service mysqld start   # or systemctl start mariadb
      # Recreate tables:
      mysql eximstats < /root/eximstats_schema.sql
      # Restart Exim and cPanel services
      Prevention:
      • In WHM » Service Manager, disable “Exim Stats”.
      • In WHM » Tweak Settings, reduce “The number of days to keep Exim stats”.
  • /var/crash: System crash dumps. If not actively troubleshooting a crash, files here can be removed:
    rm -rf /var/crash/*
    If investigating, keep only the newest 1-2 files.

/home

Typically the largest partition, containing user data.

  • General Approach: Use du -sh /home/* | sort -rh (excluding virtfs if cPanel) to find large accounts. Drill down into large accounts. Look for:
    • Old backups (.tar.gz, .zip); a search sketch for these appears after this list.
    • Large log files not managed by system logrotate.
    • Unneeded site archives (e.g., public_html.old).
    • .trash directories. Provide a list of findings to the customer for approval before deletion.
  • cPanel Email Usage: To find large email accounts across all cPanel users:
    for cPUser in `ls -A1 /var/cpanel/users/ | egrep -v "^(system|nobody|root)$"`; do sudo -u "${cPUser}" /usr/local/cpanel/bin/doveadm -f table quota get -u "${cPUser}"; /usr/local/cpanel/bin/doveadm -f table mailbox status -u "${cPUser}" -t messages,vsize "*" ; done
    A simpler (but potentially slower if many users/mailboxes) du approach for mail directories:
    du -sch /home*/*/mail/*/*/.INBOX/   # Or other common mail paths, adjust as needed
    du -sch /home*/*/mail/*/*/cur/ /home*/*/mail/*/*/new/ | sort -rh | head -n 20
    Consider advising customers to set quotas for email accounts.
  • /home/temp: A directory sometimes created by Liquid Web. Check for old files:
    find /home/temp -mtime +14 -ls   # List files older than 2 weeks
    CRITICAL: Before deleting from /home/temp, check for symlinks pointing elsewhere:
    find /home/temp -type l -ls
    If symlinks exist, investigate where they point. Do not blindly delete if they point to important data. Remove only the symlink itself if it’s orphaned or unneeded:
    find /home/temp -type l -delete
  • /home/cprestore, /home/cpmove*: cPanel migration/restore files. Check dates; old ones can often be removed.
  • cPanel Build Directories (e.g., /home/cpapachebuild, /home/cpzendinstall): If present and space is an issue, these can usually be removed.
  • CPAN Directories (/home/.cpan/): Perl CPAN cache and build directories.
    rm -rf /home/.cpan/build/*
    rm -rf /home/.cpan/sources/*
    # Be cautious with ~/.cpan/prefs or other configuration files.
  • Directories Symlinked from Other Partitions: If /home is full and contains symlinks to directories that were moved from other (now less full) partitions, consider moving them back if it makes sense and space permits on the original partition.
  • Over-quota Accounts (Shared Hosting): Ensure filesystem quotas are enforced. To find files owned by a specific cPanel user:
    find / -mount -user cpanelusername -ls 2>/dev/null
    (Replace cpanelusername with the actual username. -mount keeps find on the same filesystem as /.)
    If virtfs mounts are causing quota reporting issues:
    cat /proc/mounts | grep virtfs
    # If orphaned virtfs mounts exist and no users are actively using Jailed Shell:
    # /scripts/clear_orphaned_virtfs_mounts   # cPanel script
  • /home/virtfs (cPanel): DO NOT DELETE FILES MANUALLY FROM /home/virtfs/! These are bind mounts for Jailed Shell environments and do not consume additional disk space themselves. Incorrectly removing them can break Jailed Shell access and cause other issues. Use the clear_orphaned_virtfs_mounts script if needed.
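
As a starting point for the "old backups" item above, a hedged search for large, old archive files across /home (virtfs is pruned as discussed; the size and age thresholds are examples, adjust to taste):

    find /home -xdev -not \( -path /home/virtfs -prune \) -type f \
      \( -name '*.tar.gz' -o -name '*.zip' -o -name '*.tar' \) \
      -size +100M -mtime +90 -exec ls -lh {} \; 2>/dev/null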

/backup

If your server has a dedicated backup partition/drive.

  • If full, verify backup retention settings (e.g., in WHM Backup Configuration for cPanel).
  • Delete old, unneeded backups.
  • If legitimate backups fill the drive, reduce retention (e.g., disable daily and keep weekly/monthly) or consider upgrading backup storage.
  • Sometimes broken/incomplete backups can fill space if a backup job fails repeatedly.

/run/log/journal (systemd on tmpfs)

This is systemd journald data stored in RAM (tmpfs).

  • Check usage:
    df -hT /run
    du -shx /run/log/journal
  • Limit usage (persistent configuration): Create/edit /etc/systemd/journald.conf.d/00-journal-size.conf (or similar):
    [Journal]
    # Example: Limit journal data in /run to 100MB
    RuntimeMaxUse=100M
    Then reload and restart:
    systemctl daemon-reload
    systemctl restart systemd-journald
  • Manual Pruning (if needed immediately):
    journalctl -m --vacuum-size=100M
    Setting RuntimeMaxUse is preferred for long-term management.

Part 5: Other useful commands

Disk usage by file extension

  • Using find and du (can be slow): Sums the size of files with a specific extension (e.g., .flv) in the current directory and subdirectories. A multi-extension loop based on this command is sketched after this list.
    # Example for .flv files
    LC_ALL=C find . -type f -iname '*.flv' -print0 | du -ch --files0-from=- | tail -n 1
  • Using find, stat, and awk (more precise sum):
    # Example for .flv files in a specific path
    find /home/username/public_html/videos -type f -iname '*.flv' -exec stat --format="%s" {} \; | awk '{sum += $1} END {printf "Total: %.2f GB\n", sum/1024/1024/1024}'
    You can use any find flags to filter by date, name, permissions, etc.
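
As noted above, the first command can be wrapped in a loop to compare several extensions at once; a minimal sketch (the extensions listed are only examples, and extensions with no matching files may print nothing):

    for ext in flv mp4 zip tar.gz; do
      printf '%-8s ' "$ext"
      LC_ALL=C find . -type f -iname "*.${ext}" -print0 | du -ch --files0-from=- | tail -n 1
    done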

This guide covers many common scenarios for disk space and inode issues. If you’ve followed these steps and are still unable to resolve the problem, or if you’re unsure about any command, please contact Liquid Web support for assistance.
