Configuring AWS Transfer for SFTP: IAM, Route 53 and CloudWatch

At its annual re:Invent conference recently, AWS announced AWS Transfer for SFTP. Don’t yawn and click away from this blog post just yet! 🙂

Despite years of attempts to retire inter-organization file transfers based on SFTP, the protocol is alive and well and continues to be deeply, deeply embedded in enterprises’ workflows. In the healthcare and finance industries, especially, the world might well come to an end without SFTP. And while there are some excellent SFTP servers out there (I’m a fan of Globalscape and worked at Ipswitch), there hasn’t been a true cloud offering until now.

Think of it: an SFTP service that requires no physical or virtualized server infrastructure, features low-cost, usage-based pricing, offers nearly unlimited storage (via S3) with automatic archiving to Glacier and is integrated with a sophisticated identity system. Now add scalability, reliability and cloud integration, like Lambda workflows triggered when data arrives. You can see why AWS’s entry into secure file transfer is of special importance to enterprises. AWS Transfer for SFTP may not have the glitz of some of AWS’s latest offerings but, trust me, it may have a more important impact on many enterprises’ migrations to the cloud, and on their ability to secure external communications, than the products that get headline attention at re:Invent and among tech journalists.

@jeffbarr, the most prolific explainer of AWS on the planet, has written an excellent tutorial for AWS SFTP. Jeff’s example describes how you could use AWS SFTP to isolate users into “home” directories but doesn’t actually include a detailed how-to for doing so. That’s the topic of this post, along with a couple of other hints. Building on Jeff’s post and the AWS SFTP documentation, I want to focus on the IAM setup required to direct individual users to a specific S3 “folder” in an S3 bucket. (S3 doesn’t really have folders — these are just objects in a bucket. But they can be made to look like a hierarchical file system, so most people think of and use them that way.)

Second, I discuss how to set up a custom domain name in a delegated Route 53 subdomain. (This technique uses the concepts described in how to set up Route 53 DNS — the most popular post of all time on this blog.) Third, I describe how to set up AWS SFTP logging to CloudWatch.

I assume you already know how to set up S3 buckets, IAM users and groups, how to generate keys for sftp users and how to set up the keys in a transfer client and/or command line sftp client.

There’s quite a lot here to understand, especially about the IAM setup. Even though the IAM policy examples are lifted directly from the AWS SFTP documentation, I don’t think their illustrations and explanations are clear enough. So, I will take you step-by-step through setting up two users (sftpuser1 and sftpuser2) to access S3 home directories in a bucket (air11sftpbucket). Of course, we’ll set AWS SFTP up so that neither user is able to view or manipulate the other’s home directory.

If you remember only one thing from this post, it should be that your AWS SFTP server must assume a role that gives it permissions to act on the bucket. In IAM, you don’t assign policies directly to AWS services. Instead, you assign the policies to roles that are account-number-specific and then establish which services may use the permissions in the policy attached to the role. This is called a “trust relationship.” Of course, you also have to give the users in the group access to the bucket.

Let’s start with a picture of the IAM relationships for AWS SFTP that implement our simple two-user setup.

AWS SFTP IAM configuration

Let’s step through the IAM definitions from left to right (inside the orange outline). First we set up two users and add them to a group. I know you know how to do this, so I won’t be showing the IAM definitions for these two steps.

Next, you need to create a policy for full access to the bucket. Here’s the JSON you can use. Just be sure to change the name of your bucket in the Resource statements. GetBucketLocation and ListAllMyBuckets apply to console operations and so we allow all resources for them. Otherwise, this policy applies only to bucket air11sftpbucket and the objects within that bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::air11sftpbucket"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::air11sftpbucket/*"
            ]
        }
    ]
}

Next, you need to add this policy to a role and edit the trust relationship for the role so that it can be assumed by the AWS SFTP server. Here are screenshots of the role definition, the trust relationship and the JSON code to use for the trust relationship. (I nearly went crazy finding out the service name, so if this post does nothing else for you, there’s that. 🙂) Creating a role for the account with the AWS SFTP server is a little tricky: in the console, select “Another AWS account” and specify this account’s own account number.

AWS Transfer for SFTP server role definition
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
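If you would rather script this than click through the console, here is a minimal boto3 sketch of the same role creation. It is not how I built the demo (I used the console), and the role name, policy name and JSON file name are placeholders I made up; the bucket-access JSON is simply the policy shown above, saved to a local file.

# Minimal sketch: create the role with the transfer.amazonaws.com trust
# relationship and attach the bucket-access policy inline.
# Names and file paths are placeholders; substitute your own.
import json
import boto3

iam = boto3.client("iam")

# The trust relationship that lets the AWS SFTP service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "transfer.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="air11-sftp-transfer-role",                      # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the bucket-access policy (the JSON shown earlier, saved to a file).
iam.put_role_policy(
    RoleName="air11-sftp-transfer-role",
    PolicyName="air11sftpbucket-full-access",                 # placeholder name
    PolicyDocument=open("air11sftpbucket-policy.json").read(),
)

print(role["Role"]["Arn"])  # you'll reference this ARN when defining SFTP users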

Note that you have given the users in the group as well as the AWS SFTP server full access to the bucket, the latter gaining access via a role that allows the sftp server to assume the permissions in the policy. It’s confusing at first but makes (typically elegant AWS) sense if you think hard about it. Be sure to specify this role for each user you define in the AWS SFTP console; there’s no option to attach it to the server itself — the only role assigned at the server level is the logging role described later in this post.

But there’s an issue: we want the AWS Transfer for SFTP service to associate our users with their specific home directories (actually S3 objects). Our S3 bucket looks like this:

|--air11sftpbucket
   |--sftpuser1
   |--sftpuser2

That means the role’s permissions for the AWS SFTP server are too broad. We need a policy that limits access to the object in the bucket that’s associated with the current AWS SFTP user. AWS SFTP implements this via what it calls a “scope-down” IAM policy. This is a simple policy that contains variables AWS SFTP passes at execution time to IAM to allow the policy to limit access to the correct object in the S3 bucket. This precisely mimics “home directories” in managed file transfer systems like Globalscape EFT and WS_FTP Server.

Here is a screenshot showing how to define a scope-down policy that limits the current user to a specific “subdirectory” (S3 object) in the S3 bucket, along with the JSON policy specified for that user. Note the variables in the JSON that resolve at policy check time to the bucket name and the home directory specified for the user in the AWS SFTP user configuration.

AWS Transfer for SFTP scope-down policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListHomeDir",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::${transfer:HomeBucket}"
        },
        {
            "Sid": "AWSTransferRequirements",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "*"
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
        }
    ]
}
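For the scripting-inclined, here is a hedged boto3 sketch of attaching the scope-down policy when you define the user (assuming the default, service-managed identity provider). The server ID, role ARN and key file are placeholders; the important point is that the policy is passed as a string and the ${transfer:...} variables are left as literal text for the Transfer service to resolve at session time.

# Sketch: define an SFTP user with a home directory and scope-down policy.
# Server ID, role ARN and key file are placeholders.
import boto3

transfer = boto3.client("transfer")

# Leave the ${transfer:...} variables alone; the service resolves them.
scope_down_policy = """{
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ListHomeDir",
         "Effect": "Allow",
         "Action": "s3:ListBucket",
         "Resource": "arn:aws:s3:::${transfer:HomeBucket}"},
        {"Sid": "AWSTransferRequirements",
         "Effect": "Allow",
         "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
         "Resource": "*"},
        {"Sid": "HomeDirObjectAccess",
         "Effect": "Allow",
         "Action": ["s3:PutObject", "s3:GetObject", "s3:GetObjectVersion",
                    "s3:DeleteObject", "s3:DeleteObjectVersion"],
         "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"}
    ]
}"""

transfer.create_user(
    ServerId="s-1234567890abcdef0",                                   # placeholder
    UserName="sftpuser1",
    Role="arn:aws:iam::123456789012:role/air11-sftp-transfer-role",   # placeholder
    HomeDirectory="/air11sftpbucket/sftpuser1",
    Policy=scope_down_policy,
    SshPublicKeyBody=open("sftpuser1.pub").read(),                    # placeholder key
)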

And here’s the result in FileZilla. We are able to connect to AWS SFTP using the user specified in IAM and are limited (“jailed”) to the folder specified for that user in the AWS SFTP console.

AWS Transfer for SFTP with FileZilla as transfer client
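If you want to verify the “jail” without a GUI client, a quick paramiko script works too. This is just a sketch; the endpoint hostname and key file name are placeholders for your server’s endpoint and sftpuser1’s private key.

# Quick check of the jail from code; endpoint and key file are placeholders.
import paramiko

key = paramiko.RSAKey.from_private_key_file("sftpuser1-private-key.pem")
transport = paramiko.Transport(
    ("s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com", 22)
)
transport.connect(username="sftpuser1", pkey=key)

sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir("."))        # lists only sftpuser1's home "folder"
# sftp.chdir("../sftpuser2")    # should fail with a permission error
transport.close()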

But wait! There’s more. As with any AWS service, integration with CloudWatch is built in, providing detailed, actionable info. This is ideal for security-conscious enterprises, as everything about the transfer is logged. Trust me, you really want to integrate AWS SFTP and CloudWatch.

But think about it: how do we get AWS Transfer for SFTP to log events into CloudWatch for the account associated with the running AWS SFTP server? Answer: we need a role that establishes a trust relationship between the AWS SFTP server in this account and CloudWatch. This is exactly what we did above for the S3 bucket. Now that you’re an expert at this, it’ll only take a minute to create a new role that permits AWS SFTP in this account to AssumeRole and assign an “all access” CloudWatch managed policy to the role. You can use the same JSON as above to create the trust relationship between this account and transfer.amazonaws.com, as shown here.

AWS Transfer for SFTP CloudWatch IAM policy
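Again, if you prefer to script it, here is a rough boto3 equivalent of those console steps: create the logging role with the same trust policy, attach a CloudWatch managed policy, and point the server at it. The role name and server ID are placeholders, and the AWS-managed CloudWatchLogsFullAccess policy is broader than you may want in production.

# Sketch: logging role for AWS SFTP plus attaching it to the server.
# Role name and server ID are placeholders.
import json
import boto3

iam = boto3.client("iam")
transfer = boto3.client("transfer")

# Same trust relationship as before, on a new role used only for logging.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "transfer.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

logging_role = iam.create_role(
    RoleName="air11-sftp-logging-role",                       # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Broad managed policy; fine for a demo, consider narrowing it for production.
iam.attach_role_policy(
    RoleName="air11-sftp-logging-role",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchLogsFullAccess",
)

# Tell the SFTP server to log with this role.
transfer.update_server(
    ServerId="s-1234567890abcdef0",                           # placeholder server ID
    LoggingRole=logging_role["Role"]["Arn"],
)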

This produces that lovely CloudWatch output that your CISO requires.

AWS SFTP CloudWatch logs
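You can also pull those log events back out programmatically. Here is a small sketch, assuming the log group is named /aws/transfer/&lt;server-id&gt; (verify the exact group name in your CloudWatch console); the server ID is a placeholder.

# Sketch: read the transfer logs back out of CloudWatch Logs.
import boto3

logs = boto3.client("logs")

events = logs.filter_log_events(
    logGroupName="/aws/transfer/s-1234567890abcdef0"   # placeholder server ID
)
for event in events["events"]:
    print(event["timestamp"], event["message"])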

And finally, you may have noticed I am using a custom DNS name for the AWS SFTP server. You can select Route 53 to host the zone for this SFTP server. When you make the selection in the screenshot below, the zone is automatically created. If, like me, your DNS is registered elsewhere, you can conveniently place the AWS SFTP servers in a subdomain of your domain by adding NS records to your base domain, using the Route 53 name servers as the targets for those NS records. This makes Route 53 authoritative for the subdomain (here, aws). I described how to do this over 4.5 years ago — and I still think this is a clever way to take advantage of Route 53 if you cannot otherwise migrate your entire DNS infrastructure to Route 53 DNS.

AWS Transfer for SFTP custom DNS name in Route 53
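For completeness, here is roughly what the DNS piece looks like in boto3 if you want to manage the record yourself: a CNAME in the delegated Route 53 zone pointing the custom hostname at the server’s endpoint. The zone ID, hostname and endpoint below are placeholders; choosing Route 53 in the AWS SFTP console sets up the equivalent record for you.

# Sketch: CNAME in the delegated subdomain's hosted zone pointing the
# custom SFTP hostname at the Transfer server endpoint. All values are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                          # the delegated zone's ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "sftp.aws.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{
                    "Value": "s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com"
                }],
            },
        }],
    },
)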

This is one of the more complex topics I’ve tackled and I hope it helps you. AWS Transfer for SFTP is very appealing on many levels and I am sure that if you try it, you’ll like it. If for no other reason, consider AWS Transfer for SFTP because it’s totally integrated into the AWS environment, meaning you can do things with it you’d never attempt with any other sftp server — like using Lambda to manage files on arrival and departure, or S3 lifecycle policies on your buckets to archive files at low cost.
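As a small example of that last point, here is a hedged boto3 sketch of an S3 lifecycle rule that transitions objects in the demo bucket to Glacier after 30 days; the rule ID and timing are arbitrary choices of mine, not something this post’s setup requires.

# Sketch: lifecycle rule archiving uploaded files to Glacier after 30 days.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="air11sftpbucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-transfers",     # arbitrary rule name
            "Filter": {"Prefix": ""},          # apply to the whole bucket
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }],
    },
)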

One final note: you’ll notice I didn’t obscure any names or account numbers. I did that so you could follow all the text, screenshots and code samples more easily. These resources have been deleted, so the black hats among you have nothing to pwn.

I look forward to your comments and feedback.



20 responses to “Configuring AWS Transfer for SFTP: IAM, Route 53 and CloudWatch”

  1. iain

    Great article, thanks.

    I would like to be able to connect to AWS SFTP Transfer via a public IP address. The vendor I work with uses a tool on their end that requires a fixed entry point via a public IP address to connect to the AWS SFTP Transfer server.

    Some recent documentation I have seen on the AWS site indicates that this is possible by setting up an NLB using a fixed IP address. I can’t find detailed documentation on it though, so I can only assume it would work, but I don’t know the effort involved to test this. Would you have any idea if this is possible and how to do it, please?

    Here is what the AWS site says:

    Q: Can my SFTP clients use fixed IP addresses to access my SFTP server’s VPC endpoint?

    A: Yes, you can enable fixed IPs by building on your SFTP server’s VPC endpoint. You can create a Network Load Balancer (NLB) with Elastic IP enabled, within your VPC, and specify your SFTP server’s VPC endpoint as its target. The associated Elastic IPs will give you one or more static IP addresses that will not change. These IPs can be used for firewall whitelisting purposes by your SFTP client users. To learn more about this setup, visit the Network Load Balancer documentation.

     

    Thanks, any assistance would be appreciated.

    1. Alex Neihaus

      Hello, Ian.

      While I’ve never connected my AWS Transfer for SFTP server to an NLB, it seems straightforward enough. Plus, you’ve already found the doc describing how to do so. So, you should just give it a try.

      Good luck.

      Alex

  2. Phil

    Thought I’d share some of our ongoing experience with AWS’ Transfer for SFTP.

    Here’s background on what we’ve set up / are building:

    Transfer for SFTP service in a public subnet (with VPC endpoint but no VPC endpoint service, we utilize a provider on top of AWS and their automated system for constraining / containing us doesn’t gel with AWS’ private link if memory serves correctly)

    Public facing NLB listening on port 22 and connecting to Transfer for SFTP service’s VPC endpoint by private IP (pretty sure using an IP in a Target Group is something rather recent, don’t recall seeing it late last year)… this has made the NLB to Transfer for SFTP frail but it’ll be no worse than our current solution where an individual EC2 is hosting a SFTP server.

    Transfer for SFTP service configured to use a custom identity provider (the standard AWS user management didn’t seem flexible enough)

    API Gateway configured to use an Endpoint Type of private, so we’ve also set up a VPC endpoint for the private API Gateway. This VPC endpoint is inside a private subnet since there’s no need for it to be publicly exposed, and a belt-and-braces approach to security never hurts.

    The API Gateway is setup as a trigger for a pretty basic lambda function that performs the identity provider function… the complexity if you can call it that is really in the configuration file for the identity provider lambda.

    Now, of course, at this date and time end to end it doesn’t all work 🙂 but that’s just a technicality. What this whole experience has, however, exposed me to is just how to go about troubleshooting all of this.

    First off, for the API Gateway to Lambda piece, as everyone no doubt knows, there’s the neat feature in the web console for testing the gateway-to-Lambda connection. Also, I’d note that the CloudFormation script AWS refers to in their documentation didn’t seem any good for automatically creating the whole setup when I tried it, but what it does nicely help with is detailing what your models need to be for passing the request to the Lambda and formatting the response back to Transfer for SFTP.

    I guess I’m slow to all this cloud bizzo. I’ve been working in AWS since around 2007, but my comfort zone is being on an EC2 with root permissions where I can run tcpdump… so when my auth attempts to the SFTP service were failing, I was seeing nothing hit the API Gateway and nothing logged by the SFTP service to CloudWatch. The lack of SFTP CloudWatch logs is actually a pending ticket with AWS. Pure speculation – I’m wondering if it’s that Transfer for SFTP doesn’t currently support CloudWatch logging when using a custom identity provider, as multiple people (AWS included) have confirmed all the necessary permissions are in place to support logging.

    The lack of visibility through SFTP CloudWatch logs finally led me to look at the flow logs for the appropriate ENIs, which showed no packets were ever being exchanged between the SFTP service and the API Gateway, so the issue wasn’t due to bad policy on the gateway. Then I finally noticed in the AWS CLI the ability to test the identity provider… see aws transfer test-identity-provider help for details. This immediately pointed out my mistake: the username didn’t conform to the AWS-imposed username requirement – ^[a-zA-Z0-9_][a-zA-Z0-9_-]{2,31}$

    Once I removed the period / full stop from the username, I hit my next and current issue: “Unable to call identity provider: Unable to execute HTTP request: XXXX: Name or service not known”, where XXXX is the hostname of the API Gateway.

    Not expecting anyone to read all this and tell me my wrong step… I haven’t even provided anywhere near the level of detail you’d need anyway to begin troubleshooting. Thought I’d just share.

    In case you were wondering though, yes EnableDnsSupport and EnableDnsHostnames are enabled in the VPC and PrivateDnsEnabled is enabled on the VPC endpoint.

    The takeaway for me, I think, has been not to forget about the AWS CLI (the web console lags behind the raft of APIs AWS offers), and you can always resort to the flow logs if you are really scratching your head and saying, “but if I were just on an EC2 I could do this, that and the other” 🙂 Admittedly the flow logs are nowhere near as detailed as what you’ll get from tcpdump, but it’s certainly better than nothing at all.

    When we finally get this all sorted, I’ll try and remember to share what it was that’s causing the SFTP CloudWatch logging issue and the SFTP service’s name resolution of the API Gateway.

    1. Phil

      Well, it turns out the simple reason the Transfer for SFTP service couldn’t reach the (private) API Gateway is, of course, that each Amazon service lives in an AWS VPC and not my VPC. Tsk tsk, my bad; I should have recalled this. So when I set the API Gateway as private, the hostname was only resolvable within my VPC. Once the API Gateway was set as regional, the Transfer for SFTP service was able to reach the gateway and the Transfer for SFTP CloudWatch logs showed up. I’d argue AWS has a little more work to do on their Transfer for SFTP CloudWatch logging, since they log nothing if the API Gateway can’t be reached.

      1. Alex Neihaus

        Thanks, Phil, for circling back with your learnings. Yes, it’s a subtlety that AWS services run in a “different” VPC and you must account for that. But I would argue that’s also an elegant approach. It means you either have an endpoint (really, an ENI) that appears in, or that you add to, your VPC to permit access, or you explicitly route traffic between your VPC and the service.

        That means you have to take a conscious action to permit access to your VPC and its resources — a very important security design point.

  3. Ryan Morris

    Is there a way to provide a user permission to multiple buckets or multiple subsets of folders?

    1. Alex Neihaus

      I’d imagine you’d do this via IAM roles.

      1. Alok

        But how would that work with a scope-down policy? I am assuming you would have to get the buckets you have access to and map them somehow to that user, so that every time the user logs in the list of folders they have access to can be retrieved.

  4. Enzo

    The scope-down policy doesn’t work for me. I’ve used the one you show as well as the one in the updated AWS doc here: https://docs.aws.amazon.com/transfer/latest/userguide/users-policies-scope-down.html. But, once I connect, I can cd back and go into another user’s directory with no problem. The S3 bucket is defined like /ftp-files/user1, but user1 can cd back and get to /ftp/user2. The policy is defined in the SFTP user’s profile and viewing it shows it correctly. In your example screenshot for sftpuser1, you have sftp1home set as the home dir, but the S3 bucket is air11sftpbucket/sftpuser1. I’m assuming that is a typo because you don’t show an sftp1home directory in S3.

    1. Alex Neihaus

      Hello, Enzo.

      Sorry, I can’t debug your policy since you didn’t post it — and I don’t provide technical support for blog posts anyway. As they say, YMMV.

      But I suspect you haven’t coded the scope-down policy for the user correctly. I tested this exact scenario and I am confident it jails the user, as I noted in the screenshot after the scope-down policy. If I had to guess, I’d take a look at the permissions in the scope-down policy.

      This isn’t easy stuff — but it’s worth it to get out of the business of managing your own sftp servers.

      Good luck and thanks for commenting on this post.

      1. Enzo

        Understood. What about the example screenshot for sftpuser1? You have sftp1home set as the home dir, but the S3 bucket is air11sftpbucket/sftpuser1. I’m assuming that is a typo because you don’t show an sftp1home directory in S3. Unless you have an sftp1home somewhere else?

        1. Alex Neihaus

          Ah! Yes, it’s a typo — I probably used an earlier screenshot for the image. The JSON is correct. The “home directory” should be air11sftpbucket/sftpuser1. “sftp1home” is a typo.

          I appreciate you taking such a good look at this. When (if) I get time, I’ll correct that image. It would mean rebuilding from scratch since I did this demo in a throw-away AWS account. But that might be kinda fun….

          Thanks again.

          1. Enzo

            Thanks. I actually figured it out. I had the S3 bucket policy directly on the S3 bucket itself. Once I removed that policy from the S3 bucket, I created a new standalone policy for it and attached it directly to the role. It works fine now. The user gets jailed into their own home directory as they should.

          2. Alex Neihaus

            Woot! It’s a pretty cool service, IMO. Good luck with it.

  5. rroddy

    “Be sure to assign this role to the server in the AWS SFTP console” – there does not seem to be an option to assign roles to the server on creation, other than a logging role, nor after it’s created. Can you clarify? Thanks for the rest of the detail; it cleared up lots for me.

    1. Alex Neihaus

      Hello, @rroddy. I misspoke. I was trying to make sure that people reading the post understood something I had already said: to create a role for the AWS SFTP Server, you have to associate the role with the account that contains the server. As you point out, there’s no way to assign a role directly to an SFTP server; instead you assign the role to the account. I’ll rework the post a little to clear up the confusion. Let me know if that works for you.

      Thanks.

      Alex

      1. rroddy

        Thanks Alex, you’re a gent. That wrinkle is just what I needed spelled out.

        1. Alex Neihaus

          My pleasure. I appreciate your feedback.

  6. John Roubanian

    Excellent article, saved me some time as the AWS documentation was somewhat vague. I appreciate it.

    1. Alex Neihaus

      Thank you, John!
