
Protect S3 buckets with AWS Macie

Let's discover how we can use machine learning and pattern matching at scale to find and report sensitive data stored in S3. Let's also notify our DevSec team about it by using EventBridge and Lambda.

Unfortunately, sloppy access behaviour and exposed PII data in Amazon S3 are responsible for 7% of all data breaches. This often happens due to a lack of knowledge or rushing the creation of an S3 bucket; it's easy to leave gaps in your security layer without having all the required knowledge. Let's figure out how we can minimize the risk of leaving such flaws in our access layer by using AWS Macie. The intention of this exercise is to set up, detect, and report anomalies in access behaviour and sensitive data that's exposed through S3. We'd also like to notify our DevSec team about it.


Let me introduce AWS Macie first

Macie is trained and designed to continually scan S3 buckets, looking for sensitive information using fine-grained machine learning algorithms. It also keeps track of anomalous behaviour on files, deletion of audit trails, and poor security practices defined in policies. As protecting data at scale can become increasingly complex, expensive, and time-consuming, this sounds like a great tool to have under your belt, right?

Before you plan to roll out AWS Macie, it's important to note that it only integrates with Amazon S3 so far, but more is on the way.

Enabling it is a piece of cake

To begin with, we need to set up an AWS Macie account. This is simple and straightforward: terraform provides us with the macie2 resources to create Macie infrastructure (be aware the integration is quite new, so not everything is supported yet).

resource "aws_macie2_account" "this" {  status = "ENABLED"}

Apply the preceding terraform gist to enable AWS Macie on your account.
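Since the macie2 integration is still young, it can also help to pin the AWS provider version. Here's a minimal sketch; the version constraint and region below are assumptions for this demo, adjust them to whatever you actually run:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Assumed constraint: any recent provider release with macie2 support will do.
      version = ">= 3.60"
    }
  }
}

provider "aws" {
  # Assumed region for this demo.
  region = "eu-west-1"
}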

Managed/Custom data identifiers

Before creating an AWS Macie detection job, it's good to know that by default a detection job inherits all the managed data identifiers. Macie uses certain criteria and techniques, including machine learning and pattern matching, to detect sensitive data or anomalies; these criteria and techniques are typically referred to as managed data identifiers. There are managed data identifiers for all kinds of categories: think of financial information, personal health information, credentials, etc. There's also the possibility to create your own data identifiers, the so-called custom data identifiers. A custom data identifier consists of a regex to perform a pattern match and a proximity rule to refine the result. I advise that you read up on these yourself.
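As a hedged sketch of such a custom data identifier (the name, regex, and keywords below are invented purely for illustration), it could look roughly like this in terraform, using the aws_macie2_custom_data_identifier resource:

# Hypothetical identifier matching internal employee IDs such as "EMP-12345".
resource "aws_macie2_custom_data_identifier" "employee_id" {
  name                   = "demo-employee-id-identifier"
  description            = "Matches internal employee IDs"
  regex                  = "EMP-\\d{5}"
  keywords               = ["employee", "employee id"]
  maximum_match_distance = 10

  depends_on = [aws_macie2_account.this]
}

If I read the provider docs correctly, you can then attach it to a detection job through the custom_data_identifier_ids argument of aws_macie2_classification_job; we won't use one in this exercise.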

AWS Macie detection jobs

With Macie, we have to create and run detection jobs to automate the detection, logging, and reporting of sensitive data in S3 buckets. Jobs are responsible for analyzing objects in S3 buckets to determine whether they contain sensitive data or anomalies.

Every job is associated with one or more S3 buckets, either by passing in a list of bucket names or by targeting specific tags or other bucket properties. We also have to set up the schedule and scope of the job, e.g. run once every 24 hours and refine the search to only focus on OpenSSH private keys and social insurance numbers.

Time to create a bucket and copy some sensitive data files over. I prepared a few files containing AWS secret keys and sensitive employee data.

resource "aws_s3_bucket" "this" {  bucket = "demo-macie-detection-bucket"  acl    = "private"}

After applying the above s3.tf we can upload some sensitive data to our bucket for the sake of testing. This should be simple for Macie to detect, I hope.

$ cat files/employee.txt
Jane Doe
100 Main Street, Anytown, USA
John Doe
123 Any Street, Any Town, USA
$ cat files/credentials.txt
# Fake AWS Credentials
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ aws s3 cp files s3://demo-macie-detection-bucket --recursive
upload: files/apikeys.txt to s3://demo-macie-detection-bucket/apikeys.txt
upload: files/credentials.txt to s3://demo-macie-detection-bucket/credentials.txt
upload: files/patent.txt to s3://demo-macie-detection-bucket/patent.txt
upload: files/financial.txt to s3://demo-macie-detection-bucket/financial.txt
upload: files/employee.txt to s3://demo-macie-detection-bucket/employee.txt

aws s3 cp --recursive is a useful command to recursively copy the contents of a directory over to an S3 bucket. Another neat trick to add to your toolbelt.

Now that we have our bucket and sample data in place, it's time to set up the Macie detection job. Let's take a stab at explaining the terraform resource below: job_type indicates that it's a scheduled job, schedule_frequency sets the interval it runs at (in our case daily), and s3_job_definition targets specific S3 buckets, in our case the one we defined in s3.tf above.

data "aws_caller_identity" "current" {}resource "aws_macie2_classification_job" "this" {  job_type = "SCHEDULED"  name     = "demo-macie-detection-job"  schedule_frequency {    daily_schedule = "true"  }  s3_job_definition {    bucket_definitions {      account_id = data.aws_caller_identity.current.account_id      buckets    = [aws_s3_bucket.this.id]    }  }  depends_on = [aws_macie2_account.this]}

This job will simply run against the buckets defined in bucket_definitions every day and use all managed data identifiers to analyze the objects in S3. Every anomaly is reported as a Macie finding and extensively described in a per-finding report.
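To illustrate the scope refinement mentioned earlier, here's a hedged variation of the job that excludes image files from analysis. I haven't applied this variant myself, so verify the scoping block names against the aws_macie2_classification_job docs for your provider version before relying on it:

# Hypothetical scoped variant of the detection job; only the scoping block differs.
resource "aws_macie2_classification_job" "scoped_example" {
  job_type = "SCHEDULED"
  name     = "demo-macie-scoped-detection-job"

  schedule_frequency {
    daily_schedule = "true"
  }

  s3_job_definition {
    bucket_definitions {
      account_id = data.aws_caller_identity.current.account_id
      buckets    = [aws_s3_bucket.this.id]
    }

    # Skip objects with these file extensions.
    scoping {
      excludes {
        and {
          simple_scope_term {
            comparator = "EQ"
            key        = "OBJECT_EXTENSION"
            values     = ["png", "jpg", "gif"]
          }
        }
      }
    }
  }

  depends_on = [aws_macie2_account.this]
}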

Tying Macie together with EventBridge

Now having Macie find and report these findings is great, but I also want to notify our internal DevSec team about them. Macie publishes an event to Amazon EventBridge for every finding it creates, which allows us to capture it with an EventBridge rule, nice! Time to set one up.

resource "aws_cloudwatch_event_rule" "this" {  name        = "demo-macie-slack-notifer-event-rule"  description = "Rule that captures AWS Macie findings"  event_pattern = <<EOF{  "source": ["aws.macie"],  "detail-type": ["Macie Finding"]}EOF}resource "aws_cloudwatch_event_target" "this" {  rule = aws_cloudwatch_event_rule.this.name  arn  = aws_lambda_function.this.arn  depends_on = [aws_lambda_function.this]}

This will intercept every Macie Finding event. We can use this aws_cloudwatch_event_rule resource later to make those findings end up in our DevSec team's Slack channel. I'm sure they would love that!

Lambda and Slack

Let's bring this exercise to a close. Everything is in place: the sample data, our detection job, and an event rule to capture the results of the detection job. As you know by now, I have a weakness for Lambda functions; I use them left and right in every project. Anyway, I created a Slack workspace and a Slack app, and added the app to a Slack channel called '#aws-macie-events'. I also took a little dive into the Slack API documentation to pull some Python code together that sends the details of a Macie finding to the channel.

#!/usr/bin/python
# -*- coding: utf-8 -*-
import requests
import json
import logging
import os

logger = logging.getLogger()
logger.setLevel(level=os.getenv("LOG_LEVEL", "INFO"))

globalVars = {}
globalVars["SLACK_TOKEN"] = os.getenv("SLACK_TOKEN")
globalVars["SLACK_CHANNEL"] = os.getenv("SLACK_CHANNEL")


def _format_divider():
    return {"type": "divider"}


def post_message_to_slack(text, blocks=None):
    # Sends a message to the configured channel via Slack's chat.postMessage API.
    return requests.post(
        "https://slack.com/api/chat.postMessage",
        {
            "token": globalVars["SLACK_TOKEN"],
            "channel": globalVars["SLACK_CHANNEL"],
            "text": text,
            "blocks": json.dumps(blocks) if blocks else None,
        },
    ).json()


def lambda_handler(event, context):
    logger.info(f'Received event: {json.dumps(event)}')
    if event["detail"]["category"] == "CLASSIFICATION":
        # Sensitive data finding: link to the affected S3 object.
        classification_blocks = [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": "Macie detected an anomaly at {timestamp}, take action! :rotating_light:".format(timestamp=event["time"])
                },
                "fields": [
                    {"type": "mrkdwn", "text": "*S3 Bucket*"},
                    {"type": "mrkdwn", "text": "*Severity*"},
                    {"type": "plain_text", "text": event["detail"]["resourcesAffected"]["s3Bucket"]["arn"]},
                    {
                        "type": "mrkdwn",
                        "text": ":red_circle: {severity}".format(severity=event["detail"]["severity"]["description"])
                        if event["detail"]["severity"]["description"] == "High"
                        else ":large_orange_circle: {severity}".format(severity=event["detail"]["severity"]["description"])
                    }
                ]
            },
            {
                "type": "section",
                "fields": [
                    {"type": "mrkdwn", "text": "*Type*"},
                    {"type": "mrkdwn", "text": "*S3 Object*"},
                    {"type": "plain_text", "text": event["detail"]["title"]},
                    {"type": "plain_text", "text": event["detail"]["resourcesAffected"]["s3Object"]["key"]}
                ]
            },
            {
                "type": "section",
                "fields": [
                    {"type": "mrkdwn", "text": "*File type*"},
                    {"type": "mrkdwn", "text": "*Last modified*"},
                    {"type": "mrkdwn", "text": event["detail"]["resourcesAffected"]["s3Object"]["extension"]},
                    {"type": "mrkdwn", "text": event["detail"]["resourcesAffected"]["s3Object"]["lastModified"]}
                ]
            },
            {
                "type": "actions",
                "block_id": "actionblock789",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Visit object"},
                        "url": "https://{bucket_name}.s3.{region}.amazonaws.com/{key}".format(
                            region=event["region"],
                            bucket_name=event["detail"]["resourcesAffected"]["s3Bucket"]["name"],
                            key=event["detail"]["resourcesAffected"]["s3Object"]["key"]
                        )
                    },
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Escalate"},
                        "url": "https://api.slack.com/block-kit"
                    }
                ]
            },
            _format_divider()
        ]
        response = post_message_to_slack(None, classification_blocks)
        logger.info(f'Response: {response}')
    elif event["detail"]["category"] == "POLICY":
        # Policy finding: link to the bucket policy instead of an object.
        policy_blocks = [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": "Macie detected an anomaly at {timestamp}, take action! :rotating_light:".format(timestamp=event["time"])
                },
                "fields": [
                    {"type": "mrkdwn", "text": "*S3 Bucket*"},
                    {"type": "mrkdwn", "text": "*Severity*"},
                    {"type": "plain_text", "text": event["detail"]["resourcesAffected"]["s3Bucket"]["arn"]},
                    {
                        "type": "mrkdwn",
                        "text": ":red_circle: {severity}".format(severity=event["detail"]["severity"]["description"])
                        if event["detail"]["severity"]["description"] == "High"
                        else ":large_orange_circle: {severity}".format(severity=event["detail"]["severity"]["description"])
                    }
                ]
            },
            {
                "type": "section",
                "fields": [
                    {"type": "mrkdwn", "text": "*Type*"},
                    {"type": "mrkdwn", "text": "*API Action*"},
                    {"type": "plain_text", "text": event["detail"]["title"]},
                    {"type": "plain_text", "text": event["detail"]["policyDetails"]["action"]["apiCallDetails"]["api"]}
                ]
            },
            {
                "type": "section",
                "fields": [
                    {"type": "mrkdwn", "text": "*AccessKeyId*"},
                    {"type": "mrkdwn", "text": "*IPV4 Address*"},
                    {"type": "mrkdwn", "text": event["detail"]["policyDetails"]["actor"]["userIdentity"]["assumedRole"]["accessKeyId"]},
                    {"type": "mrkdwn", "text": event["detail"]["policyDetails"]["actor"]["ipAddressDetails"]["ipAddressV4"]}
                ]
            },
            {
                "type": "actions",
                "block_id": "actionblock789",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Visit policy"},
                        "url": "https://s3.console.aws.amazon.com/s3/bucket/{bucket_name}/property/policy/edit?region={region}".format(
                            region=event["region"],
                            bucket_name=event["detail"]["resourcesAffected"]["s3Bucket"]["name"]
                        )
                    },
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Escalate"},
                        "url": "https://api.slack.com/block-kit"
                    }
                ]
            },
            _format_divider()
        ]
        response = post_message_to_slack(None, policy_blocks)
        logger.info(f'Response: {response}')


if __name__ == "__main__":
    # Allows local runs; you'd have to pass a sample Macie event instead of None.
    lambda_handler(None, None)

Take some time to go through the code; it's fairly straightforward. It reads some properties from the event and wraps them in a neat little message, which it then sends through the Slack API. Voilà, no rocket science.

Lambda and Terraform

There are a few things we need to create in order to tie all of this together and make it work: the Lambda function declared in terraform, an IAM role, IAM policies, a log group, and the hook-up of the EventBridge event to the Lambda function.

Let's start by defining the Lambda function. I'm using a module from @rojopolis called terraform-aws-lambda-python-archive to zip the source and install the dependencies. It only requires a path to the Python source code; the module generates a hash based on the source code and uses it to detect changes.

locals {
  function_name = "demo-macie-slack-notifier-function"
}

module "lambda-python-archive" {
  source  = "rojopolis/lambda-python-archive/aws"
  version = "0.1.6"

  src_dir              = "${path.module}/lambda/macie-slack-notifier"
  output_path          = "${path.module}/lambda/macie-slack-notifier.zip"
  install_dependencies = true
}

resource "aws_lambda_function" "this" {
  filename         = module.lambda-python-archive.archive_path
  source_code_hash = module.lambda-python-archive.source_code_hash
  function_name    = local.function_name
  description      = "Notifies a given slack channel with AWS Macie findings"
  role             = aws_iam_role.this.arn
  handler          = "lambda_function.lambda_handler"
  architectures    = ["arm64"]
  runtime          = "python3.9"
  timeout          = 300

  environment {
    variables = {
      LOG_LEVEL     = "INFO"
      SLACK_TOKEN   = ""                  # Recommended to resolve the slack token via ssm or vault.
      SLACK_CHANNEL = "#aws-macie-events" # Move this to variables if you feel like.
    }
  }

  depends_on = [
    aws_iam_role_policy.this_cloudwatch,
    aws_cloudwatch_log_group.this
  ]
}

Carefully look at the preceding lambda.tf gist. It should be easy to understand as long as you keep in mind that the module points to the directory containing the source code (just one file called lambda_function.py).
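The inline comment in lambda.tf recommends resolving the Slack token via SSM or Vault instead of hard-coding it. A minimal sketch of the SSM route, assuming you've created a SecureString parameter yourself (the /demo/macie/slack-token name is made up), could look like this:

# Hypothetical SecureString parameter holding the Slack bot token.
data "aws_ssm_parameter" "slack_token" {
  name            = "/demo/macie/slack-token"
  with_decryption = true
}

# Then, inside the environment block of aws_lambda_function.this:
#   SLACK_TOKEN = data.aws_ssm_parameter.slack_token.value

Keep in mind the value still ends up as a plain-text environment variable on the function, so treat it accordingly.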

As we log the event and the response of the Slack API call, we need to create a log group for the Lambda function in order to make this visible. I've made this part of the lambda.tf file, but I'm showing it separately here.

resource "aws_cloudwatch_log_group" "this" {  name              = "/aws/lambda/${local.function_name}"  retention_in_days = 14}

It's good to point out that CloudWatch log groups should always properly reflect the name of the Lambda function; therefore we are using the local.function_name reference defined at the top of lambda.tf.

Let's wrap it up by creating an IAM role for the Lambda function and the necessary IAM policies that will be attached to the IAM role.

resource "aws_iam_role" "this" {  name = "demo-macie-slack-notifer-lambda-role"  assume_role_policy = <<EOF{  "Version": "2012-10-17",  "Statement": [    {      "Action": "sts:AssumeRole",      "Principal": {        "Service": "lambda.amazonaws.com"      },      "Effect": "Allow",      "Sid": ""    }  ]}EOF}resource "aws_iam_role_policy" "this_lambda" {  name = "demo-macie-slack-notifer-lambda-policy"  role = aws_iam_role.this.name  policy = <<EOF{  "Version": "2012-10-17",  "Statement": [    {      "Sid": "AllowInvokeItself",      "Effect": "Allow",      "Action": [        "lambda:InvokeFunction"      ],      "Resource": [        "${aws_lambda_function.this.arn}"      ]    }  ]}EOF}resource "aws_iam_role_policy" "this_cloudwatch" {  name = "demo-macie-slack-notifer-cw-policy"  role = aws_iam_role.this.name  policy = <<EOF{  "Version": "2012-10-17",  "Statement": [    {      "Sid": "AllowCloudWatchLogs",      "Effect": "Allow",      "Action": [        "logs:CreateLogGroup",        "logs:CreateLogStream",        "logs:PutLogEvents"      ],      "Resource": [        "${aws_cloudwatch_log_group.this.arn}:*"      ]    }  ]}EOF}

The preceding iam.tf gist is responsible for creating the role and policies the Lambda function needs to invoke itself and write logs to CloudWatch. We'll also need to give the EventBridge rule permission to call our Lambda function; let's add that resource to the lambda.tf file. For the sake of the blog I've made a separate gist.

resource "aws_lambda_permission" "this" {  statement_id  = "AllowExecutionFromCloudWatch"  action        = "lambda:InvokeFunction"  function_name = aws_lambda_function.this.function_name  principal     = "events.amazonaws.com"  source_arn    = aws_cloudwatch_event_rule.this.arn}

Great, all of our resources are ready to be applied. Go ahead and run terraform apply to create everything. Once the Macie detection job is created, it will immediately start running, so be sure that you've already applied the S3 bucket earlier on and imported the sample data into it. If you forgot about this, just comment out some of the resources and apply the S3 bucket first.

After Macie detects its first finding, it should be written to your Slack channel as shown below (provided you've created a Slack bot and added it to the channel; be aware that you need to create an OAuth token with the chat:write.customize scope, which is passed down as an environment variable in lambda.tf):

[Image: Slack notification for a Macie sensitive data (classification) finding]

The preceding image shows a sensitive data finding from Macie; these are grouped under the 'classification' category. In the image below, Macie also found a policy issue: someone removed the encryption from a bucket. Policy findings are grouped under the 'policy' category. Feel free to alter the Python code to log other information in the Slack message.

[Image: Slack notification for a Macie policy finding]

Happy days, our DevSec team is notified about a security issue by a fancy message. Besides a working Macie setup that finds potential issues for us, it's always important to add this kind of visibility in order to make everyone aware.


Original Link: https://dev.to/aws-builders/protect-s3-buckets-with-aws-macie-16gm
