Categories
AWS S3

Backup solution with AWS S3 Sync

Recently I had the opportunity to upgrade a Windows computer. It was running Windows 8.1 and was not performing well: CPU and memory usage were sitting between 60% and 90%, and using Google Chrome made things even worse. After my initial analysis, I decided to upgrade to Windows 10 and start from scratch. This computer has 8GB of memory and a 1TB hard disk drive.

Before I started the upgrade process, I wanted to keep a backup of 1500 pictures and videos. In this post, I want to share how I used the AWS S3 sync command to back up these files. Ready? Let's get started.

First, make sure you have the AWS CLI installed. Follow this article for more details. Let's create a new IAM user with S3 permissions. Go to the AWS Management Console, search for IAM, and create a new user. Enter a user name and make sure you check Programmatic access. We will use the access key ID and secret access key to set up the CLI locally. Now click on Next: Permissions.

Select Attach existing policies directly and type s3 in the policy filter. Select AmazonS3FullAccess.

Click on Next: Tags and add any tags if needed. Click on Next: Review and finally create the user. Make sure you download the CSV file that contains the access key ID and secret access key. Another option is to copy these values from the confirmation page.

With the access key ID and secret access key, we can set up the CLI locally. Open up a command prompt or terminal and type "aws --version".

If you see output similar to the image above, your CLI is installed correctly. In the same terminal window, type aws configure and enter your access key ID, secret access key, default region name, and output format. For this article, I'm using us-east-1 for my default region and json for my output.
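The configure command walks you through four prompts, which look roughly like this (the key values below are the standard AWS documentation placeholders, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json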

With the CLI set up correctly, we can create a new bucket. Run this command in your terminal: "aws s3api create-bucket --bucket [yourbucketname] --region us-east-1". Let's verify our S3 bucket was created successfully with this command: "aws s3 ls".

With our S3 bucket in place, we can run the sync command to copy our local images and videos to this new bucket. Run this command: "aws s3 sync . s3://agileraymond-s3". The sync command copies any files in my current directory to the S3 bucket named agileraymond-s3. Since I needed to copy 1500 images and videos, I ran multiple sync commands in parallel using this format:

aws s3 sync . s3://agileraymond-s3/Pictures

aws s3 sync . s3://agileraymond-s3/Videos

aws s3 sync . s3://agileraymond-s3/Downloads

Don't forget to be in the right directory for this to work. After a couple of hours, I was able to verify all files made it to my bucket. I continued with the Windows 10 upgrade, and after the initial setup I reinstalled the AWS CLI and ran the sync command in the opposite direction. This time I needed to copy files from my S3 bucket to my local PC, so I used "aws s3 sync s3://agileraymond-s3/Pictures ./" to copy Pictures to my local Pictures folder.

In summary, I was able to back up 1500 images and videos with the aws s3 sync command. It was a very simple process in my opinion. If you want to explore different options with the sync command, take a look at the documentation page.

Categories
AWS EBS EC2 KMS

Easy way to encrypt EBS volume

Back in January 2019, I wrote an article on how to encrypt an EBS volume. It was a very tedious process. However, things have changed for the better: AWS has simplified this process. In this article, I want to share how easy it is to encrypt an EBS volume.

Before we launch a new EC2 instance, we need a key. Just as you lock and unlock your house door with a physical or digital key, the same principle applies to encrypted EBS volumes. To create a new KMS key, go to the Key Management Service console and select Customer managed keys from the left-side menu.

Now create a new key. Once we have a new KMS key, we can launch a new EC2 instance. When you get to the storage option, pay attention to the encryption option.

As you can see from the image above, your KMS keys appear in the encryption option. Go ahead and select a key and continue with your EC2 configuration wizard.
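If you prefer the CLI, you can get the same result by passing a block device mapping to run-instances. A rough sketch of that call is below; the AMI ID and key alias are placeholders for your own values:

aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":8,"Encrypted":true,"KmsKeyId":"alias/my-ebs-key"}}]'

If you set Encrypted to true and leave out KmsKeyId, AWS falls back to the default aws/ebs key.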

That’s it. We have to be thankful to AWS for making this process easier. 

Categories
AWS

AWS Solutions Architect – Associate

After a very long process, I was able to take and pass the AWS Certified Solutions Architect – Associate exam. In my current job, we don't use AWS, which made it more difficult to gain hands-on experience with AWS services. To prepare for the exam, I used 4 resources:

A Cloud Guru – I set aside at least 30 minutes to watch A Cloud Guru videos. I also paused the videos to take notes. There is so much material to cover that it's better to have notes for future use.

Frequently asked questions – There is so much information in the AWS FAQ documents. Based on the A Cloud Guru videos, I wrote down which services' FAQs I needed to read. I highly recommend taking additional notes as well; they will come in handy before you take the exam.

Re:Invent videos – During my commute to work and back, I used my phone to listen to re:Invent videos. I listened to EC2, S3, and ALB videos, and they were very helpful for reinforcing what I had learned in the past.

Hands-on experience – I also gained hands-on experience by using the AWS console and the SDK. If you search my blog, you will find many posts with detailed information on different AWS services.

I hope this post will help you prepare for any AWS certifications.

Categories
AWS Lambda

My First AWS Lambda Using .NET Core

As I prepare for the AWS Certified Solutions Architect – Associate exam, I need to play with more services. It's crucial to gain hands-on experience with these AWS services; it's not enough to just read whitepapers and FAQs. I've heard good things about AWS Lambda, and now it's time to build something with it. In this post I want to share how I created my first Lambda function using .NET Core.

Before we dive into AWS Lambda, let’s understand what it is. Lambda is a service that allows you to run code without thinking about provisioning or managing servers. You upload your code and AWS handles the rest. Nice! Here is the official summary, “AWS Lambda lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.”

Now that we know what Lambda is, let's install the required software to create a Lambda function using .NET Core. First, install the Lambda templates from NuGet: using a terminal or command prompt, type "dotnet new -i Amazon.Lambda.Templates". This installs the Lambda templates so you can get up and running quickly. To test it, type "dotnet new" and press enter. You should see the following templates:

As you can see from the screenshot above, there are 2 categories of templates: Lambda functions and Lambda serverless. To keep it simple, I'm going to use a simple Lambda function that integrates with S3. Now we need to install the AWS Lambda global tool. Using a terminal/command prompt, type "dotnet tool install -g Amazon.Lambda.Tools".

With the required software installed, it's time to create our first Lambda using .NET Core. Using a terminal/command prompt, create a new directory called "firstLambda" and cd into it. Now type "dotnet new lambda.S3" to create a new function from the AWS Lambda templates. After creating the function, we need to update a config file with a profile and region. Using a text editor or IDE, open up the new project and update the profile and region settings in aws-lambda-tools-defaults.json.
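The generated aws-lambda-tools-defaults.json already contains sensible defaults; the two values you really need to check are profile and region. As a rough example (the profile, region, and handler values here are placeholders, and the rest of the generated settings can stay as the template created them):

{
    "profile": "default",
    "region": "us-east-1",
    "configuration": "Release",
    "function-memory-size": 256,
    "function-timeout": 30,
    "function-handler": "firstLambda::firstLambda.Function::FunctionHandler"
}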

AWS Lambda will use these settings to deploy and run your function. Let’s take a look at the Function.cs file.
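Your generated file may differ slightly depending on the template version, but it looks something along these lines:

using System;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.S3;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace firstLambda
{
    public class Function
    {
        IAmazonS3 S3Client { get; set; }

        // Default constructor used by Lambda at runtime
        public Function()
        {
            S3Client = new AmazonS3Client();
        }

        // Constructor used by the tests to pass in a preconfigured or mocked client
        public Function(IAmazonS3 s3Client)
        {
            S3Client = s3Client;
        }

        // Triggered by an S3 event (put object, delete object, etc.);
        // returns the content type of the object referenced by the event
        public async Task<string> FunctionHandler(S3Event evnt, ILambdaContext context)
        {
            var s3Event = evnt.Records?[0].S3;
            if (s3Event == null)
            {
                return null;
            }

            try
            {
                var response = await S3Client.GetObjectMetadataAsync(s3Event.Bucket.Name, s3Event.Object.Key);
                return response.Headers.ContentType;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error getting object {s3Event.Object.Key} from bucket {s3Event.Bucket.Name}.");
                context.Logger.LogLine(e.Message);
                throw;
            }
        }
    }
}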

The constructor takes an IAmazonS3 object, and the async method FunctionHandler is where our main logic lives. Our Lambda function is triggered by an S3 event, like a delete object or put object event. Using the event information, we retrieve the object's metadata with GetObjectMetadataAsync and finally return the content type.

Let's deploy our first Lambda function to AWS using the CLI. Using a terminal/command window, type "dotnet lambda deploy-function agileraymond-1st-lambda". I'm using agileraymond-1st-lambda as my function name. This command uses the profile and region in our config file, so you have to make sure permissions are set correctly; otherwise you will get errors. The command will also ask you to provide a role or give you the option to create a new one. If you want to verify that your function made it to AWS, check the AWS Lambda console.

To test our new Lambda function locally, we can use the test project that was created along with our new function.

Go back to the terminal window and type "dotnet test" to run the integration test. If everything is set up correctly, you will see 1 passing test. That's it for this post. In a future post, I'm going to test the function using the AWS console.

Categories
AWS

Understanding IAM policies

One of the most critical components in any system is security. In AWS, security is a top priority. With Identity and Access Management (IAM), you can create users, roles, policies, and groups to secure your AWS resources. In this post, I'm going to share how to secure an S3 bucket by creating a new user with limited access. Let's get started.

Create a new user

To create a new user, sign in to the AWS console and select IAM. Select Users from the left menu and click Add user. Add a user name and select Programmatic access in the Access type section.

Click Next. Since we don’t have a policy in place, click Next again.

Now it's time to review our new user. Notice that AWS displays a warning message that this user has no permissions. Click Next.

We're in the final step of creating our new user. Click on the Download .csv button. This file contains the access key ID and secret access key; we'll use these values with the AWS CLI to access S3 buckets. You can also click on the Show link below the secret access key header.

Now that we have our user ready, it's time to create a new policy with limited permissions to an S3 bucket. Click on the Policies link in the left-side menu. Click on Create Policy.

There are 2 ways to create your policy: using the visual editor or using a JSON document. For this exercise, I'm going to use JSON to specify the policy. Click on the JSON tab next to the Visual editor tab and paste the JSON below.
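A minimal policy along these lines grants only the PutObject action on the agileraymond-s3 bucket (swap in your own bucket name):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPutObjectOnly",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::agileraymond-s3/*"
        }
    ]
}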

This simple policy allows only the S3 PutObject action on a bucket named agileraymond-s3. As you can see, this policy is limited in what it can perform. AWS recommends that you follow the principle of least privilege: only give access to the resources that your application needs. Click on Next and finally create your new policy.

With our new user and policy in place, we have to link our user to this new policy. Select your user and click on Add permissions button.

Click on the Attach existing policies directly tab and filter policies by selecting Customer managed from the filter menu next to the search input.

Click Next, review your changes, and finally add the permissions. We're ready to test our new user and its permissions. Let's use the AWS CLI to test. Using a terminal/command prompt, type aws configure and enter the access key, secret access key, region, and output format. Make sure you select the same region where your resources are; in my case, I selected us-east-1 because that's where my bucket resides.

Now, type "aws s3 ls" in your terminal window. You should see an error, since this user doesn't have permission to list buckets; we only have access to PutObject for one bucket. To upload a file to our S3 bucket, type "aws s3 cp myfile.txt s3://yourbucketname". If you go back to the AWS console, you should see myfile.txt inside your bucket.

In conclusion, secure your resources by default: create new users with limited permissions and give them access only to the resources they need. See you next time.

Categories
AWS General

Host a website using AWS S3

Simple Storage Service (S3) was one of the first services offered by AWS. With S3 you can store your files in the cloud. In addition to storing your files, S3 allows you to host a static website. In this post, I will share how to accomplish this task using the S3 console.

First, log in to the AWS console. Now go to the S3 console and create a bucket. To keep it simple, a bucket is like a folder or directory on your computer. For this example, I'm using agileraymond-web for my bucket name and US East (N. Virginia) for my region. Click the Create button to create your bucket. With our bucket in place, we can enable it to host a static site. Select your bucket and click on the Properties tab.

Now click anywhere in the Static website hosting section and select Use this bucket to host a website. I'm going to use index.html for my index page and error.html for my error page. Click Save. Go ahead and create these 2 HTML files. To upload them, click on the Overview tab and click Upload.

Add your files and click on the Upload button. In the overview section of your bucket, you will see the 2 files. Currently the bucket and these 2 files are private. Since we are hosting a static website and other people need access to this site, we have to update the bucket permissions. Go to the bucket's Permissions tab and select Bucket Policy. Copy and paste the policy below, making sure to update the resource name. In my case, my bucket name is agileraymond-web, but yours will be different.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::agileraymond-web/*"]
        }
    ]
}

Click Save. After saving your policy, you will see the following message: "This bucket has public access. You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket." For now, ignore this warning, since this bucket is intentionally acting as a public website. This policy allows read access to every object placed in the bucket. It is time to test our new website. To get the URL, go to the bucket properties and click on Static website hosting. Next to Endpoint you will find the URL. Copy and paste it into a new browser window and add /index.html to the end of the URL. If everything is set up correctly, you will see the index.html page.

To test the error page, go ahead and delete index.html, then try to browse to index.html again. You should now see the error page since index.html doesn't exist anymore. As you can see, it's very easy to create a static website using S3. See you soon!

Categories
.Net AWS CodeDeploy General

Creating AWS CodeDeploy Deployments Using .NET SDK

This is part 3 in a series dedicated to the AWS CodeDeploy API using the .NET SDK. In part 1, I created the CodeDeploy application using ASP.NET Core MVC, C#, and DynamoDB. In part 2, I created the deployment group, which has settings for alarms, load balancers, Auto Scaling groups, deployment styles, and other options.

In this post, I want to concentrate on creating the deployment. In the deployment request, you can specify revision information (GitHub or S3 settings), the deployment group, auto-rollback configuration, and other settings.

Talk is cheap. Show me the code!

If you want to follow along, you can visit my GitHub repo at https://github.com/agileraymond/DotNetDeployments. Now that you have a reference to the repo, let's modify our controller so we can display our view.

This controller action displays our AddDeployment view.

There is a limitation with this view since it only displays S3 settings. In a future post, I will revisit this view and add the GitHub option as well; I wanted to get something working in a short amount of time. When the user clicks on the Add Deployment button, a POST controller action is called to trigger a new deployment.
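The exact code lives in the repo linked above; a trimmed-down sketch of that POST action, using only the S3 revision settings, looks something like this (the controller name, bucket, and key below are illustrative):

using System.Threading.Tasks;
using Amazon.CodeDeploy;
using Amazon.CodeDeploy.Model;
using Microsoft.AspNetCore.Mvc;

public class DeploymentsController : Controller
{
    private readonly IAmazonCodeDeploy _codeDeployClient;

    public DeploymentsController(IAmazonCodeDeploy codeDeployClient)
    {
        _codeDeployClient = codeDeployClient;
    }

    // Displays the AddDeployment view
    [HttpGet]
    public IActionResult AddDeployment() => View();

    // Triggered when the user clicks Add Deployment; application name and
    // deployment group name come from the form via model binding
    [HttpPost]
    public async Task<IActionResult> AddDeployment(CreateDeploymentRequest request)
    {
        // Point the deployment at a revision bundle stored in S3
        request.Revision = new RevisionLocation
        {
            RevisionType = RevisionLocationType.S3,
            S3Location = new S3Location
            {
                Bucket = "my-deployments-bucket", // placeholder bucket name
                Key = "myapp/revision.zip",       // placeholder object key
                BundleType = BundleType.Zip
            }
        };

        // CreateDeploymentAsync starts the deployment and returns its id
        var response = await _codeDeployClient.CreateDeploymentAsync(request);
        ViewBag.DeploymentId = response.DeploymentId;
        return View("Index");
    }
}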

If the user entered all required information, a new deployment will be created in the AWS CodeDeploy console.

If you want to read more about the AWS .NET SDK, follow this link https://docs.aws.amazon.com/sdkfornet/v3/apidocs/Index.html.

If you have any questions or issues with this code, contact me via twitter @agileraymond.

Have a nice day!

Categories
.Net AWS CI Code Deployment CodeDeploy Continuous Delivery

Creating AWS CodeDeploy Deployment Groups Using .NET SDK

In a previous post, I shared how to create codedeploy applications using the AWS .NET SDK. Adding an application is the foundation to get codedeploy working correctly. In this post, I want to continue this series and show you how to add deployment groups.

To see what parameters we need to add deployment groups, I’m going to read the official documentation here. Find the Amazon.CodeDeploy documentation on the left of the page, and then click on AmazonCodeDeployClient. All codedeploy operations will be handled by the AmazonCodeDeployClient. The method we need is CreateDeploymentGroupAsync. Since we are using .NET Core 2, we need to use the Async methods. CreateDeploymentGroupAsync takes 2 parameters: CreateDeploymentGroupRequest and CancellationToken.

These are CreateDeploymentGroupRequest’s properties:

– AlarmConfiguration: Gets and sets the property AlarmConfiguration. Information to add about Amazon CloudWatch alarms when the deployment group is created.

– ApplicationName: Gets and sets the property ApplicationName. The name of an AWS CodeDeploy application associated with the applicable IAM user or AWS account.

– AutoRollbackConfiguration: Gets and sets the property AutoRollbackConfiguration. Configuration information for an automatic rollback that is added when a deployment group is created.

– AutoScalingGroups: Gets and sets the property AutoScalingGroups. A list of associated Auto Scaling groups.

– BlueGreenDeploymentConfiguration: Gets and sets the property BlueGreenDeploymentConfiguration. Information about blue/green deployment options for a deployment group.

– DeploymentConfigName: Gets and sets the property DeploymentConfigName. If specified, the deployment configuration name can be either one of the predefined configurations provided with AWS CodeDeploy or a custom deployment configuration that you create by calling the create deployment configuration operation. CodeDeployDefault.OneAtATime is the default deployment configuration. It is used if a configuration isn’t specified for the deployment or the deployment group. For more information about the predefined deployment configurations in AWS CodeDeploy, see Working with Deployment Groups in AWS CodeDeploy in the AWS CodeDeploy User Guide.

– DeploymentGroupName: Gets and sets the property DeploymentGroupName. The name of a new deployment group for the specified application.

– DeploymentStyle: Gets and sets the property DeploymentStyle. Information about the type of deployment, in-place or blue/green, that you want to run and whether to route deployment traffic behind a load balancer.

– Ec2TagFilters: Gets and sets the property Ec2TagFilters. The Amazon EC2 tags on which to filter. The deployment group will include EC2 instances with any of the specified tags. Cannot be used in the same call as ec2TagSet.

– Ec2TagSet: Gets and sets the property Ec2TagSet. Information about groups of tags applied to EC2 instances. The deployment group will include only EC2 instances identified by all the tag groups. Cannot be used in the same call as ec2TagFilters.

– LoadBalancerInfo: Gets and sets the property LoadBalancerInfo. Information about the load balancer used in a deployment.

– OnPremisesInstanceTagFilters: Gets and sets the property OnPremisesInstanceTagFilters. The on-premises instance tags on which to filter. The deployment group will include on-premises instances with any of the specified tags. Cannot be used in the same call as OnPremisesTagSet.

– OnPremisesTagSet: Gets and sets the property OnPremisesTagSet. Information about groups of tags applied to on-premises instances. The deployment group will include only on-premises instances identified by all the tag groups. Cannot be used in the same call as onPremisesInstanceTagFilters.

– ServiceRoleArn: Gets and sets the property ServiceRoleArn. A service role ARN that allows AWS CodeDeploy to act on the user’s behalf when interacting with AWS services.

– TriggerConfigurations: Gets and sets the property TriggerConfigurations. Information about triggers to create when the deployment group is created. For examples, see Create a Trigger for an AWS CodeDeploy Event in the AWS CodeDeploy User Guide.

To keep my code example concise, I'm going to use only the required properties to add a deployment group. Let's start by adding the controller actions. Take a look at the gist below:
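The full gist is in the DotNetDeployments repo; a stripped-down sketch of the two actions, using only the application name, deployment group name, and service role ARN, might look like this (controller and parameter names are illustrative):

using System.Threading.Tasks;
using Amazon.CodeDeploy;
using Amazon.CodeDeploy.Model;
using Microsoft.AspNetCore.Mvc;

public class DeploymentGroupsController : Controller
{
    private readonly IAmazonCodeDeploy _codeDeployClient;

    public DeploymentGroupsController(IAmazonCodeDeploy codeDeployClient)
    {
        _codeDeployClient = codeDeployClient;
    }

    // Displays the form with application name, deployment group name, and service role ARN
    [HttpGet]
    public IActionResult Add() => View();

    // Sends the request to CodeDeploy when the user submits the form
    [HttpPost]
    public async Task<IActionResult> Add(string applicationName, string deploymentGroupName, string serviceRoleArn)
    {
        var request = new CreateDeploymentGroupRequest
        {
            ApplicationName = applicationName,
            DeploymentGroupName = deploymentGroupName,
            ServiceRoleArn = serviceRoleArn
        };

        var response = await _codeDeployClient.CreateDeploymentGroupAsync(request);
        ViewBag.DeploymentGroupId = response.DeploymentGroupId;
        return View("Index");
    }
}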

The first action returns a view so we can fill out application name, deployment group name, and service role arn. Take a look at the view:

I'm only displaying the required fields to create a new deployment group. This is how I like to develop my applications: add small features that work, then add more features and keep improving them. It is very difficult to write perfect code at first; constant improvement is what yields better applications.

When the user clicks on the Add button, the POST action takes care of sending the request to the CodeDeploy client. If the call to CreateDeploymentGroupAsync is successful, we will see a new deployment group in the AWS console. To understand deployment groups, we have to understand development environments. We usually have dev, test, and production environments, and they are usually separated from each other. The dev environment is usually open to all developers, test might be used to verify actual deployments, and only a couple of engineers should have access to production. In CodeDeploy, deployment groups allow you to mirror your development environments when it comes to deployments. For 1 application, you can set up 3 deployment groups (dev, test, and production), and each group will be linked to EC2 instances or on-premises servers. In a future post, I will provide examples with all these properties. Stay tuned!

Next: Creating AWS CodeDeploy Deployments Using .NET SDK

Categories
.Net ASP.NET MVC AWS Code Deployment CodeDeploy Continuous Delivery

Creating AWS CodeDeploy Application Using .NET SDK

I'm a big fan of AWS and its cloud services. S3 has changed the way we store objects. EC2 has enabled us to spin up instances quickly and in a cost-effective way. CodeDeploy helps developers deploy applications to EC2 instances and also to on-premises servers. In this post, I want to share how to create a CodeDeploy application using the AWS .NET SDK.

First, create a new ASP.NET MVC project using Visual Studio or Visual Studio Code, and make sure to target .NET Core 2.0. Now that we have a new project, let's add the CodeDeploy NuGet package. If you are using VS Code, use the built-in terminal and type "dotnet add package AWSSDK.CodeDeploy". This adds the latest version of the CodeDeploy package. We also need an additional AWS NuGet package to inject the CodeDeploy service in our Startup.cs file, so run "dotnet add package AWSSDK.Extensions.NETCore.Setup" in the terminal.

Let’s modify our Startup.cs file to look like below:
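The full file is in the repo; the important part is registering the AWS options and the CodeDeploy client in ConfigureServices. A rough sketch:

using Amazon.CodeDeploy;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Reads the AWS profile and region from the "AWS" section of appsettings.json
        services.AddDefaultAWSOptions(Configuration.GetAWSOptions());

        // Registers IAmazonCodeDeploy so it can be injected into our controllers
        services.AddAWSService<IAmazonCodeDeploy>();

        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvcWithDefaultRoute();
    }
}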

In order to call the AWS CodeDeploy API, we need to set up our credentials. For this example, I'm using a profile to store the AWS access key ID and secret access key. I'm storing the credentials outside my source code in a profile file so I can keep them secure; those credentials will also be different between developers. To help you with this setup, follow this document to set up your AWS credentials, and pay special attention to the profile section.

With the AWS profile in place, we need to add a reference to it. Take a look at my appsettings.json file. I named mine dotnetdeployments-profile since you can have multiple profiles.
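The AWS extensions package reads an "AWS" section from appsettings.json. Mine looks roughly like this (use your own profile name and region):

{
    "AWS": {
        "Profile": "dotnetdeployments-profile",
        "Region": "us-east-1"
    }
}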

We are ready to start looking at the controller. Take a look at the controller below:
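The complete controller is in the DotNetDeployments repo; a minimal sketch of what it does looks like this (the controller and view names here are illustrative):

using System.Threading.Tasks;
using Amazon.CodeDeploy;
using Amazon.CodeDeploy.Model;
using Microsoft.AspNetCore.Mvc;

public class ApplicationsController : Controller
{
    private readonly IAmazonCodeDeploy _codeDeployClient;

    public ApplicationsController(IAmazonCodeDeploy codeDeployClient)
    {
        _codeDeployClient = codeDeployClient;
    }

    // Displays the Add view with a CreateApplicationRequest as its model
    [HttpGet]
    public IActionResult Add() => View(new CreateApplicationRequest());

    // Called when the user submits the form; creates the CodeDeploy application
    [HttpPost]
    public async Task<IActionResult> Add(CreateApplicationRequest request)
    {
        var response = await _codeDeployClient.CreateApplicationAsync(request);
        ViewBag.ApplicationId = response.ApplicationId;
        return View("Index");
    }
}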

In connection with the controller, we also need to look at the view:

The Add action in our controller simply displays the Add view. The view uses a model called CreateApplicationRequest, and this class has a property named ApplicationName. When the user clicks on the Add button, a POST is triggered back to our controller, which finally calls the API method CreateApplicationAsync. If everything is set up correctly, we receive a successful response and our application becomes visible in the AWS console.

If you want to see a fully functional example, go to the GitHub project DotNetDeployments and clone it locally. Creating a CodeDeploy application is only the first step in using this API; we also need to create deployment groups, deployment configs, and other settings. Stay tuned for the next post in this series.

Next: Create deployment groups

Categories
.Net AWS C# CI Github

3 Things I learned During Hacktoberfest

Hacktoberfest is a month-long initiative to promote collaboration using GitHub. DigitalOcean created this event a few years back. This year I decided to participate, and in this post I want to share the 3 things I learned during Hacktoberfest.

Learn something new

During our busy schedules at work, we're focused on maintaining existing products. On many occasions, these products are using old technology. We have a very small website running ASP.NET MVC 2. MVC 2 was released in March 2010, so this website is built on technology that is 7 years old. We tried to upgrade it to a more recent version, but we ran into migration issues and the effort was abandoned. Since this site is not a critical product in our company, we decided not to spend more time on it.

With the recent release of .NET Core 2, it's very important that .NET developers stay on top of these changes. Last month I created a new project on GitHub called DotNetDeployments. This project was created to automate .NET deployments: no more copying and pasting files between servers. In order to learn something new, I decided to base this project on .NET Core 2. Core 2 was released in August 2017 and there are major changes compared to previous versions. In addition to learning .NET Core 2, I also learned DynamoDB high-level operations using the AWS SDK.

Solve your own problems

Before Hacktoberfest took place, I started brainstorming ideas for a new project. I wrote down some ideas, but I was not happy with those projects; I wanted to solve bigger problems. I've worked in different industries and companies, and there are always areas to improve. In my current position, we use Jenkins as our continuous integration server and PowerShell scripts to deploy our applications. With this setup, we are able to deploy 95% of our projects; the other 5% are deployed by copy and paste, which is not fun. So I decided to create a new project to solve this problem. DotNetDeployments will handle our deployments using AWS CodeDeploy, and PowerShell will be used to create IIS sites and Windows services. The beauty of this project is that it can handle on-premises servers and also AWS EC2 instances. Since this is an open source project, I'm expecting the community to get involved and make it even better.

People are willing to help

After creating DotNetDeployments, I created GitHub issues to keep track of everything I wanted to accomplish. I added "hacktoberfest" and "help wanted" labels to my issues to communicate to the community that I needed help. It didn't take long before I started receiving small pull requests, and I was so excited that developers were willing to help a new project. I reviewed the code and accepted those pull requests. After the first pull requests, I decided to add AppVeyor to handle my automated builds. AppVeyor is really easy to use and their documentation is awesome. With CI in place, I created more issues to handle unit tests and to rearrange the folder structure. I received more pull requests and was happy to review and accept them. Some of these changes broke the build, but I merged them anyway since I had a separate issue to update the AppVeyor config file after the folder structure changed. I just want to thank all the contributors who are taking the time to make this project better. We're not done yet, but during Hacktoberfest we made a lot of progress.

In summary, Hacktoberfest was a very successful initiative by DigitalOcean and GitHub. During this month, I was able to learn new technologies and solve real problems that developers face every day. DotNetDeployments would not be possible without the help of the community. Thanks to all contributors.