TL;DR

  • Problem: Oversized log file consumed all remaining space, causing service anomalies
  • Cause: MySQL had general_log enabled, with no scheduled cleanup of the log file
  • Investigation Method: Checked MySQL settings and disk space usage
  • Solution: Set up scheduled cleanup using logrotate

Scenario

The MySQL Slave in the production environment regularly generates analysis reports and emails them to relevant personnel.

We received a notification that reports hadn’t arrived for some time. When we logged into the host and tried to send one manually, the job failed due to insufficient disk space.

The database files showed no abnormalities, and the database was located on an external hard drive with ample remaining space.

Investigation

Checking Linux Disk Space

First, we used df -h to check the disk space status, only to find very little remaining space.

```shell
df -h

# Example output
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   50G     0 100% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  1.2M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sdb1       100G   60G   35G  64% /data
/dev/sdc1       200G  150G   50G  75% /backup
```
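A related check worth doing at this stage: a filesystem can also be effectively full when it runs out of inodes, even while `df -h` still shows free blocks. That wasn't the case here, but `df -i` rules it out quickly:

```shell
# A filesystem with free blocks can still fail writes if inodes are exhausted
# (e.g. millions of tiny files); -i reports inode usage instead of block usage
df -i /
```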

Checking Large Directories in Root

We can use the following command to list the 20 largest files and directories on the root filesystem (`-x` keeps du from crossing into other mounted filesystems).

The issue I encountered was with /var/log/mysql/mysql-gen.log. (The output below is for illustration)

```shell
du -ahx / | sort -rh | head -n 20

# Example output
50G     /
46G     /var/log
45G     /var/log/mysql/mysql-gen.log
1.2G    /home/user/videos
1G      /usr/lib
800M    /usr/share
500M    /var/cache
300M    /tmp
# ...
```

Analyzing the Cause

About six months ago, the client asked us to enable MySQL’s general_log and forward it through rsyslog.

The colleague who set this up was unfamiliar with general_log and didn’t realize that, since rsyslog forwards entries in real time, the local file could safely be rotated away; no scheduled cleanup was ever configured.

In short, general_log is MySQL’s most verbose log: it records every statement the server receives, so it grows very quickly.

In just six months it filled all the remaining space on the root filesystem.
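When diagnosing this kind of issue, the general log’s current state can be inspected from the mysql client, and the log can even be toggled off at runtime if it is no longer needed. The commands below are a sketch assuming a local server and a client with sufficient privileges:

```shell
# Check whether general_log is on and which file it writes to
mysql -e "SHOW GLOBAL VARIABLES LIKE 'general_log%';"

# general_log is a dynamic variable, so it can be disabled without a restart
mysql -e "SET GLOBAL general_log = 'OFF';"
```

In our case the client still wanted the log forwarded, so we kept it enabled and rotated it instead.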

Solution

Step 1: Use logrotate for periodic log file cleanup

Add the following configuration to /etc/logrotate.d/mysql:

```
/var/log/mysql/mysql-gen.log {
    # Rotate daily
    daily
    # Keep the 7 most recent rotated files
    rotate 7
    # Don't error if the log file is missing
    missingok
    # Skip rotation when the file is empty
    notifempty
    # Compress rotated files, delaying compression of the newest by one cycle
    compress
    delaycompress
    # Recreate the log with 640 permissions, owned by the mysql user and group
    create 640 mysql mysql
    # Run after each rotation
    postrotate
        # Flush MySQL's logs so the server starts writing to the new file
        /usr/bin/mysqladmin flush-logs
    endscript
}
```
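Before forcing a real rotation, logrotate’s debug mode can confirm that the rule parses and would do what we expect, without touching any files:

```shell
# -d implies a dry run: logrotate prints the actions it would take
# for this config but performs none of them
logrotate -d /etc/logrotate.d/mysql
```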

Step 2: Force execute the new configuration

```shell
# -f  force rotation even if it isn't due yet
# -v  display detailed information
logrotate -fv /etc/logrotate.d/mysql
```

Step 3: Clean up existing large log files

Remove the rotated mysql-gen.log.1 file to free up space.
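One caveat when freeing space this way: if a process still holds a deleted file open, `rm` removes the name but the blocks are not released until the process closes its handle. The rotated `.1` file is safe to delete, but the log MySQL is actively writing should be truncated in place rather than removed. A sketch (the path matches this incident; adjust to your setup):

```shell
# List files that are deleted but still held open -- their space is not freed yet
lsof +L1

# For a log the server is still writing, truncate in place instead of deleting
truncate -s 0 /var/log/mysql/mysql-gen.log
```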

Step 4: Verify results

Use df -h again to check disk usage and confirm that root directory space has been freed.

Conclusion

  1. When a Linux host behaves abnormally, start by checking basic resources such as disk space and memory usage.
  2. Use logrotate for scheduled log cleanup to prevent logs from growing too large and consuming space.
  3. Before making any configurations or running commands, it’s crucial to understand their purpose and potential consequences.