Percona Audit Log Plugin and the Percona Monitoring and Management Security Threat Tool


The Security Threat Tool has been available since Percona Monitoring and Management (PMM) 2.6.0, and ever since that first release it has proven to be a sorely needed feature. If you’re not familiar with the Security Threat Tool (STT), it helps you ensure compliance: you can run checks daily, get alerts when non-compliance is found, and audit your security check history. Instructions on how to set up the STT can be found in this excellent post called Running Security Threat Tool In Percona Monitoring and Management For the First Time.

Now, one of the promises of PMM is customizability, and that is still true with the STT; this blog post is proof. If you attended our webinar “How Percona Monitoring and Management (PMM) Improves Database Security” you might have seen that we demonstrated a new dynamic check. Here’s how we made it work.

While thinking about ways to improve database security, one data source stood out: the Percona Audit Log Plugin. Our plugin is an alternative to the MySQL Enterprise Audit Log Plugin and provides detailed monitoring and logging of the activity performed on a specific database server.
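If the plugin isn’t already loaded, it can be installed at runtime; a minimal sketch (audit_log.so is the shared library shipped with Percona Server):

-- Load the Percona Audit Log Plugin (no server restart needed)
INSTALL PLUGIN audit_log SONAME 'audit_log.so';
-- Confirm the plugin is active
SELECT PLUGIN_STATUS FROM information_schema.PLUGINS WHERE PLUGIN_NAME = 'audit_log';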

And what exactly is the info that would be checked by the STT? Access attempts. Let’s talk about that briefly:

Failed Access Attempts

By failed access attempts I mean every connection that didn’t make it through the authentication stage (a great indication that bad things could be happening), like:

  • Wrong user, regardless of the password
  • Correct user, wrong password
  • Correct user, no password

One of the record types produced by the Audit Log is “Connect/Disconnect”, which adds a record with the NAME value of “Connect” for every successful or failed login. Perfect info for our purposes! Let’s create a custom check! …or not so fast.

The STT checks for MySQL in particular (remember that the STT works not only with MySQL but also with MongoDB, PostgreSQL, and DBaaS offerings like RDS) can only be SHOW statements (like SHOW STATUS) or read queries (SELECTs), while the Audit Log Plugin writes to either a file or syslog.

So what can we do to use the valuable info inside the Audit Log file? One can think of several options, but the one we liked was to stream the records to a MySQL table. Here’s how:

Audit Log Stream to MySQL

Before actually thinking about how to ingest the data into a table, the question to answer is “How do we want to store that data?” The audit log plugin supports four log formats: OLD, NEW, JSON, and CSV. The OLD and NEW formats are based on XML: the former outputs log record properties as XML attributes and the latter as XML tags. The information logged is the same in all four formats.

Now, JSON is a language both MySQL (since 5.7) and the Audit Log plugin speak natively. So JSON it is! A simple table was created with a JSON type field:

mysql> show create table audit_data\G
*************************** 1. row ***************************
       Table: audit_data
Create Table: CREATE TABLE `audit_data` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `info` json DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

And the way to instruct the plugin to use JSON as the output format is by changing the value of the variable audit_log_format.
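Note that audit_log_format is not a dynamic variable, so it has to be set at server startup; a minimal my.cnf sketch:

[mysqld]
# Audit Log Plugin output format (changing it requires a server restart)
audit_log_format = JSON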

Next step: data ingestion.

Streaming

The idea is to insert data into the table as soon as it’s available in the log file. The solution applied was to use named pipes on Linux (not to be confused with the named pipes used on Windows to connect to MySQL). A named pipe (also known as a FIFO) is just a way for processes to communicate through the filesystem.
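If you haven’t used one before, here’s a tiny standalone demo (the path is arbitrary):

mkfifo /tmp/demo.fifo             # Create the FIFO
echo "hello" > /tmp/demo.fifo &   # Writer blocks until a reader attaches
cat /tmp/demo.fifo                # Reader prints "hello" and both sides complete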

The whole process is this:

  • Create a fifo file
  • Tail the audit log file and send the output to the fifo file
  • Using a simple script, read from the fifo file and insert the record into MySQL

Steps:

mkfifo /root/danipipe  #Create the named pipe
exec 3<>/root/danipipe #Keep the pipe open on a spare file descriptor so it doesn't close after each read
tail -f /var/lib/mysql/audit.log 1>/root/danipipe 2>&1   #Send the log file contents to the pipe as soon as they are written to the log file

The script to continuously read the pipe is as follows:

#!/bin/bash
pipe=/root/danipipe

while true; do
    if read line <$pipe; then
        # Writing 'quit' to the pipe terminates the loop cleanly
        if [[ "$line" == 'quit' ]]; then
            break
        fi
        # Escape double quotes in the JSON record before embedding it in the INSERT
        query="INSERT INTO percona.audit_data (info) VALUES(\"${line//\"/\\\"}\")"
        echo $query > /tmp/dani
        mysql -uaudit_user -pXXXXXX < /tmp/dani
    fi
done

Run the script and it will keep running until it is explicitly terminated. As an alternative, you can daemonize it with systemd to guarantee respawn; a minimal unit file sketch follows (the service name and script path are hypothetical):
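# /etc/systemd/system/audit-pipe-reader.service (hypothetical)
[Unit]
Description=Stream Percona audit log records from a named pipe into MySQL
After=mysql.service

[Service]
ExecStart=/root/audit_pipe_reader.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

And now we have the audit data in a MySQL table. But before continuing, here’s the full audit log config used: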

mysql> show variables like 'audit_log%';
+-----------------------------+------------------------------------+
| Variable_name               | Value                              |
+-----------------------------+------------------------------------+
| audit_log_buffer_size       | 1048576                            |
| audit_log_exclude_accounts  | pmm@localhost,audit_user@localhost |
| audit_log_exclude_commands  |                                    |
| audit_log_exclude_databases |                                    |
| audit_log_file              | audit.log                          |
| audit_log_flush             | OFF                                |
| audit_log_format            | JSON                               |
| audit_log_handler           | FILE                               |
| audit_log_include_accounts  |                                    |
| audit_log_include_commands  |                                    |
| audit_log_include_databases |                                    |
| audit_log_policy            | LOGINS                             |
| audit_log_rotate_on_size    | 104857600                          |
| audit_log_rotations         | 10                                 |
| audit_log_strategy          | SYNCHRONOUS                        |
| audit_log_syslog_facility   | LOG_USER                           |
| audit_log_syslog_ident      | percona-audit                      |
| audit_log_syslog_priority   | LOG_INFO                           |
+-----------------------------+------------------------------------+

The relevant variables for this case are:

  • audit_log_format = JSON
  • audit_log_handler = FILE (instead of syslog)
  • audit_log_policy = LOGINS
  • audit_log_exclude_accounts = pmm@localhost,audit_user@localhost (to avoid records related to the PMM user and the user that inserts data from the audit log)
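Some of these, like audit_log_policy and audit_log_exclude_accounts, are dynamic and can be adjusted at runtime; for example:

-- Log only connection and disconnection events
SET GLOBAL audit_log_policy = 'LOGINS';
-- Keep the PMM agent and the ingestion user out of the log
SET GLOBAL audit_log_exclude_accounts = 'pmm@localhost,audit_user@localhost';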

And now, we need a query. Here’s what a record for a failed login looks like:

{"audit_record": {"db": "", "ip": "", "host": "localhost", "name": "Connect", "user": "root", "record": "77248798_2020-07-13T15:20:46", "status": 1045, "os_login": "", "priv_user": "root", "timestamp": "2020-07-14T18:36:42 UTC", "proxy_user": "", "connection_id": "112777"}}

We need to look for records where the status value is different from zero. Another condition we added is that the records must be from the last 24 hours. This is the query that retrieves that data:

SELECT 
        concat(json_extract(info, "$.audit_record.user"),"@",json_extract(info, "$.audit_record.host")," - ",json_extract(info, "$.audit_record.timestamp")) AS name
FROM 
        audit_data
WHERE 
        info->>'$.audit_record.name' = 'Connect'
        AND info->>'$.audit_record.status' != '0'
        AND DATE(info->>'$.audit_record.timestamp') > DATE(DATE_SUB(NOW(), INTERVAL 24 HOUR))
ORDER BY id DESC

Finally, we can write a custom check.

STT Custom Checks

Now I need to level with you: this part isn’t quite ready for public consumption yet, as the PMM development team is still finalizing how end-users will be able to create their own checks. But this is meant to show you what will be possible and to get you thinking about how YOU can create your own checks to ensure your company isn’t featured on the front page of “I Didn’t Take Security Seriously Weekly.”

For writing custom checks, when using Docker as the PMM Server deployment method, one needs to log in to the pmm-server container and edit the custom checks file.
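For example, to get a shell inside the container (the container name assumes the standard PMM Server docker deployment; the checks file location may still change while the feature is being finalized):

docker exec -it pmm-server /bin/bash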

With the help of the VP of Engineering – Platform, Steve Hoffman, who wrote the actual check code, we ended up with the following lines of code:

- version: 1
  name: mysql_failed_login
  type: MYSQL_SELECT
  query: json_extract(info, '$.audit_record.user') as name, json_extract(info, "$.audit_record.host") as host, json_extract(info, "$.audit_record.timestamp") as timestamp from percona.audit_data where info->>'$.audit_record.name' = 'Connect' and info->>'$.audit_record.status' != '0' and DATE(info->>'$.audit_record.timestamp') > DATE(DATE_SUB(NOW(), INTERVAL 24 HOUR)) order by id desc 
  script: |
    def check(rows):
        """
        This check returns a warning of a failed login attempt.
        """
        results = []
        for row in rows:
            results.append("{}@{} - {}".format(row.get("name"), row.get("host"), row.get("timestamp")))
        count = len(results)
        if count:
            desc = "1 user"
            if count > 1:
                desc = "{} users".format(count)
            return [{
                "summary": "Trivial - Failed Login Detected",
                "description": "{} - {}".format(desc, results),
                "severity": "notice",
                "labels": {
                    "count": str(count),
                },
            }]
        return []

Note that the query field omits the leading SELECT keyword; the MYSQL_SELECT check type supplies it automatically. We are ready to test the check and, in the next release of PMM, you’ll be able to click a button on the Check Results page to run the checks immediately.

Tests

Let’s deliberately execute some failed connection attempts:

[root@ip-192-168-1-200 ~]# mysql -uiexists
ERROR 1045 (28000): Access denied for user 'iexists'@'localhost' (using password: YES)
[root@ip-192-168-1-200 ~]# mysql -uroot -pbadpass
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
[root@ip-192-168-1-200 ~]# mysql -uroot -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

We have a user that doesn’t exist, a user that exists with a wrong password, and a user that exists with an empty password. All three failed, the audit log recorded those actions, and through the streaming process the records ended up in the MySQL table:

mysql> select concat(json_extract(info, "$.audit_record.user"),"@",json_extract(info, "$.audit_record.host")," - ",json_extract(info, "$.audit_record.timestamp")) as name from audit_data where info->>'$.audit_record.name' = 'Connect' and info->>'$.audit_record.status' != '0' and DATE(info->>'$.audit_record.timestamp') > DATE(DATE_SUB(NOW(), INTERVAL 24 HOUR)) order by id desc\G
*************************** 1. row ***************************
name: "root"@"localhost" - "2020-07-17T02:55:56 UTC"
*************************** 2. row ***************************
name: "root"@"localhost" - "2020-07-17T02:55:45 UTC"
*************************** 3. row ***************************
name: "iexists"@"localhost" - "2020-07-17T02:55:34 UTC"
3 rows in set, 6 warnings (0.00 sec)

Great! Streaming works and the query works, but did the STT check work? Checking the dashboard confirms it did:

A failed database check reporting that three failed login attempts were recorded in the last 24 hours. Works!

In Conclusion

The Security Threat Tool, like everything else in PMM, is highly flexible and can be completely customized. By integrating the tool with other available security measures, you take control of an incredibly powerful security solution. Download Percona Monitoring and Management for free today!


by Daniel Guzmán Burgos via Percona Database Performance Blog
