Reading Amazon RDS MySQL/Aurora log files from the terminal.

Introduction:

At Mydbops we support a good number of clients on AWS cloud (Aurora and RDS).

Amazon Relational Database Service (RDS) provides cloud-based database services. It is cost-efficient, resizable, and easy to manage. As with any other DBaaS, if you need to analyse the log files (error log / slow log), you have to log in to the console and manually download the files.

Logging in to the console sounds simple, but it becomes a cumbersome operation when you try to incorporate it into day-to-day operations and automation. In this blog I would like to share my experience of making this straightforward: downloading the log files directly from the command line, without the console GUI.

Prerequisites:

The following tools need to be installed for this operation:

  • awscli – CLI utility for the AWS cloud
  • jq – utility to parse and format JSON data, much like sed for JSON

For RHEL/CentOS:

yum install -y awscli jq

For Ubuntu/Debian:

apt-get install -y awscli jq
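Before moving on, it is worth confirming both tools are actually on the PATH; a small sketch:

```shell
# Quick check that both prerequisites are available before going further.
for tool in aws jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: missing"
  fi
done
```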


Step 1 – ( Configuring the AWS account )

After the installation, the server should be configured with the respective AWS account and region using the aws configure command.

Output –

[sakthi@mydbops_labs ~]$ aws configure
AWS Access Key ID : xxxxxxxxxxxxxx
AWS Secret Access Key : yyyyyyyyy/zzzzzzzzzz
Default region name : us-west-2
Default output format : json

To find the exact region name, refer to the region table in the AWS documentation. My server is in Oregon, so I have used “us-west-2”.
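For automation, note that aws configure simply writes an INI file; you can create that file directly and point the CLI at it with the standard AWS_CONFIG_FILE environment variable. A sketch with placeholder values:

```shell
# aws configure writes an INI config file; in automation you can create it
# yourself and point the CLI at it via AWS_CONFIG_FILE (a standard CLI
# environment variable). The region/output values here are placeholders.
export AWS_CONFIG_FILE=/tmp/aws_config_demo
cat > "$AWS_CONFIG_FILE" <<'EOF'
[default]
region = us-west-2
output = json
EOF
grep '^region' "$AWS_CONFIG_FILE"
```

Credentials would still go into the separate credentials file (or environment variables); never hard-code secrets into scripts.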


Step 2 – ( Check the available Instances and their Status )

The describe-db-instances subcommand is used to fetch instance details and status.

# aws rds describe-db-instances

Output:

[sakthi@mydbops_labs ~]$ aws rds describe-db-instances | jq ' .DBInstances[] | select( .Engine ) | .DBInstanceIdentifier + ": " + .DBInstanceStatus'
"localmaster: available"
"mysql-rds: available"
"slave-tuning: available"
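The jq filter above can be tried offline against a sample payload (the instance names and statuses below are made up). Note that select(.Engine) only checks that the field exists; to restrict the listing to one engine, compare it explicitly:

```shell
# Sample describe-db-instances payload; values are fabricated for illustration.
sample='{"DBInstances":[
 {"DBInstanceIdentifier":"localmaster","Engine":"mysql","DBInstanceStatus":"available"},
 {"DBInstanceIdentifier":"aurora-node","Engine":"aurora","DBInstanceStatus":"stopped"}]}'

# Same name/status filter as used above:
echo "$sample" | jq -r '.DBInstances[] | .DBInstanceIdentifier + ": " + .DBInstanceStatus'

# Restrict to a specific engine instead of select(.Engine), which only
# tests that the field is present:
echo "$sample" | jq -r '.DBInstances[] | select(.Engine == "mysql") | .DBInstanceIdentifier'
```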


Step 3 – ( Check the available log files for specific Instance )

The describe-db-log-files command is used to list the log files. Pass the DB instance identifier to get the log files for that instance.

# aws rds describe-db-log-files --db-instance-identifier localmaster

Output:

[sakthi@mydbops_labs ~]$  aws rds describe-db-log-files --db-instance-identifier localmaster | jq
{
  "DescribeDBLogFiles": [
    {
      "LastWritten": 1518969300000,
      "LogFileName": "error/mysql-error-running.log",
      "Size": 0
    },
    {
      "LastWritten": 1501165840000,
      "LogFileName": "mysqlUpgrade",
      "Size": 3189
    },
    {
      "LastWritten": 1519032900000,
      "LogFileName": "slowquery/mysql-slowquery.log",
      "Size": 1392
    }
  ]
}

Note: Size is given in bytes
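LastWritten is a Unix epoch in milliseconds, so jq can also sort the listing newest-first and render readable timestamps. A sketch against the sample listing above (runs offline, no AWS call):

```shell
# Sort log files newest-first and convert LastWritten (epoch milliseconds)
# to a UTC timestamp with jq's todate builtin.
echo '{"DescribeDBLogFiles":[
 {"LastWritten":1518969300000,"LogFileName":"error/mysql-error-running.log","Size":0},
 {"LastWritten":1519032900000,"LogFileName":"slowquery/mysql-slowquery.log","Size":1392}]}' |
jq -r '.DescribeDBLogFiles | sort_by(-.LastWritten)[] |
       "\(.LastWritten / 1000 | floor | todate)  \(.LogFileName)"'
```

In practice you would pipe the real describe-db-log-files output into the same filter.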

Step 4 – ( Download the specific log file )

The download-db-log-file-portion command is used to download a log file. It takes the db-instance-identifier, the log-file-name we identified in the previous step, and the output type.

[sakthi@mydbops_labs ~]$ aws rds download-db-log-file-portion --db-instance-identifier localmaster --log-file-name "slowquery/mysql-slowquery.log.13" --output text > rds_slow.log
[sakthi@mydbops_labs ~]$ ls -lrth | grep -i rds_slow.log
-rw-rw-r--.  1 sakthi sakthi  34K Feb 19 09:43 rds_slow.log

--output determines the format of the downloaded file and supports the following options:

  • json
  • table
  • text

Output Examples –

Table Output -
| LogFileData| 
2018-03-23 11:55:11 47633684932352 [Warning] Aborted connection 425753 to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got timeout reading communication packets)

JSON Output -
{
 "LogFileData": "2018-03-23 11:55:11 47633684932352 [Warning] Aborted connection 425753 to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got timeout reading communication packets)"
}

TEXT Output -
2018-03-23 11:55:11 47633684932352 [Warning] Aborted connection 425753 to db: 'unconnected' user: 'rdsadmin' host: 'localhost' (Got timeout reading communication packets)
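One caveat: by default download-db-log-file-portion returns only the latest portion of the file. Per the AWS CLI documentation, passing --starting-token 0 makes the CLI paginate from the beginning, so the entire file is retrieved. Since this needs a live RDS endpoint, the sketch below only assembles the command with a hypothetical helper:

```shell
# Hypothetical helper that builds the full-file download command;
# --starting-token 0 makes the CLI paginate from the start of the file
# instead of returning only the latest portion.
build_download_cmd() {
  local instance="$1" logfile="$2"
  printf 'aws rds download-db-log-file-portion --db-instance-identifier %s --log-file-name %s --starting-token 0 --output text\n' \
    "$instance" "$logfile"
}

build_download_cmd localmaster "slowquery/mysql-slowquery.log.13"
```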

 

Implementing In Production:

Below are a few use-case scenarios, with examples, showing how this can be put to work in production database operations.

Scenario 1: ( Integrating with pt-query-digest )

We can easily analyse the slow log files with Percona Toolkit's pt-query-digest via a simple automation script. Below is a sample script that analyses the slow logs from my local master server.

Script: digest_rds_slow_log.sh

#!/bin/bash

path="/home/sakthi"
echo -en "\navailable Instances - \n\n"
info=$(aws rds describe-db-instances | jq ' .DBInstances[] | select( .Engine ) | .DBInstanceIdentifier + ": " + .DBInstanceStatus' | awk '{ print $1, $2 }' | sed 's/\"//g' | sed 's/,//g' | sed 's/://g' | nl )
echo "$info"
echo -en "\n";read -p "enter instance name : " instance

if [[ $(echo "$info" | grep -sw "$instance" | wc -l) -eq 1 ]]; then
 rds_node=$(echo "$info" | grep -sw "$instance" | awk '{print $2}');
else
 echo "Instance not found";
 exit;
fi

echo -en "\navailable slow logs for $rds_node \n\n"
log_files=`aws rds describe-db-log-files --db-instance-identifier $rds_node | grep slowquery | awk '{ print $2 }' | sed 's/\"//g' | sed 's/,//g' | head -n5`

if [[ $(echo "$log_files" | wc -c) -gt 1 ]]; then
 echo "$log_files"
else
 echo -en "\nno slow log files .. exiting [exit]\n"
 exit
fi

echo -en "\n";read -p "enter slow log file : " log_file
aws rds download-db-log-file-portion --db-instance-identifier "$rds_node" --log-file-name "$log_file" --output text > $path/recent.log

if [[ -f /usr/bin/pt-query-digest || -f /bin/pt-query-digest ]]; then
 pt-query-digest --limit=95% --set-vars="ssl_verify_mode=SSL_VERIFY_NONE" $path/recent.log > $path/slow_pt_result.log
 echo -en "\n\nstatus ... [done] \n\no/p file - $path/slow_pt_result.log\n\nBye\n"
else
 echo -en "pt-query-digest not found [exit]\n"
fi

exit
Output:

[sakthi@mydbops_labs ~]$ bash digest_rds_slow_log.sh

available Instances -

 1 localmaster available
 2 slave-tuning available
 3 mysql-rds available

enter instance name : 2  #choosing the respective number

available slow logs for localmaster

slowquery/mysql-slowquery.log
slowquery/mysql-slowquery.log.0
slowquery/mysql-slowquery.log.1
slowquery/mysql-slowquery.log.10
slowquery/mysql-slowquery.log.11

enter slow log file : slowquery/mysql-slowquery.log.10
status ... [done]
o/p file - /home/sakthi/slow_pt_result.log
Bye

 

Scenario 2 – ( Accessing the latest error log from any server in the region )

We can check a server's recent error log in a single command. This is really helpful in case a MySQL crash happens.

We will be reading the running log:

[sakthi@mydbops_labs ~]$ cat read_rds_error

#!/bin/bash
path="/home/sakthi"
echo -en "\navailable Instances - \n\n"
aws rds describe-db-instances | jq ' .DBInstances[] | select( .Engine ) | .DBInstanceIdentifier + ": " + .DBInstanceStatus' | awk '{ print $1 }' | sed 's/\"//g' | sed 's/,//g' | sed 's/://g'
echo -en "\n";read -p "enter instance name : " instance
aws rds download-db-log-file-portion --db-instance-identifier $instance --log-file-name "error/mysql-error-running.log" --output text | less

Output -

[sakthi@mydbops_labs ~]$ read_rds_error

available Instances -
localmaster
mysql-rds
slave-tuning

enter instance name : localmaster
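Once the log text is downloaded (or streamed through less as above), ordinary text tools apply. For instance, counting warning and error entries; the sample lines below are fabricated stand-ins for real downloaded output:

```shell
# Count [Warning] / [ERROR] entries in an error log; the sample lines are
# stand-ins for real downloaded log output.
printf '%s\n' \
  "2018-03-23 11:55:11 47633684932352 [Warning] Aborted connection 425753" \
  "2018-03-23 11:56:02 47633684932352 [ERROR] Sample failure line" |
grep -cE '\[(Warning|ERROR)\]'
```

The same grep works directly on the file saved by download-db-log-file-portion.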

 

Conclusion –

I believe this blog post will help make your day-to-day DBA tasks easier in your RDS environment. We keep testing new things on Amazon RDS and will be back with more soon.
