Common Mistakes and Pitfalls when using AWS with WordPress

Before using Amazon Web Services (AWS) for large-scale websites and WordPress projects, there are some mistakes and pitfalls to be aware of. This article addresses the four key areas where most mistakes occur within AWS.

The four areas we will examine closely are:

- Billing Pitfalls
- Auto Scale Pitfalls
- Common Engineering Mistakes
- Security Issues

Billing Pitfalls

While using Amazon Web Services (AWS), you will notice how the cloud lets you work with infrastructure more easily. This flexibility in provisioning and running applications is a big improvement over traditional processes. However, AWS can become costly. To keep costs down, you need to decide which kinds of instances you want.

Instances will cost you money if they sit unused, are overused, or are not sized correctly for your needs.

AWS purchasing costs can be controlled in a few ways. One is to buy on-demand instances: you pay a fixed price per hour with no set conditions, which makes this the preferred option for unpredictable workloads. Another is reserved instances: a commitment to a long-term purchase in exchange for hourly discounts. Reserved instances are preferable if your site will run 24/7 and you want to know the total cost up front. The last option is spot instances, which require bidding a price for the instance type you need. They can be cost effective if your infrastructure is well managed and properly architected.
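
As a concrete illustration, here is a minimal boto3 sketch of the spot-bidding approach described above. The AMI ID, key pair name, and bid price are placeholder assumptions, not values from this article.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bid a maximum hourly price for one t3.medium; the request is only
# fulfilled while the current spot price stays below our bid.
response = ec2.request_spot_instances(
    SpotPrice="0.02",                            # max bid, USD/hour (placeholder)
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",      # placeholder AMI
        "InstanceType": "t3.medium",
        "KeyName": "my-key-pair",                # placeholder key pair
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```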

Another cost saver is managing your point-in-time snapshots, which back up your Amazon EBS volumes to S3. If too many snapshots accumulate, they become pricey, so be sure to have a retention strategy for your EBS snapshots.
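
A retention strategy can be as simple as deleting snapshots past a cutoff age. A minimal boto3 sketch, assuming a 30-day retention window:

```python
import datetime
import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=30)

# Page through all snapshots owned by this account.
for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if snap["StartTime"] < cutoff:
            print(f"Deleting {snap['SnapshotId']} from {snap['StartTime']:%Y-%m-%d}")
            # Note: this call fails for snapshots that back a registered AMI.
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```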

What else can drive your bill up? Elastic IPs (EIPs). They are a limited resource and can result in added charges: AWS bills an hourly rate for any EIP that is allocated but not associated with a running instance. Stopping an instance does not release its Elastic IP, which means you will still be billed for the address. Double-check your bill to make sure you aren't charged for unused EIPs.
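
Auditing for unattached addresses is straightforward with boto3. A sketch; the release call is commented out so the script only reports by default:

```python
import boto3

ec2 = boto3.client("ec2")

for addr in ec2.describe_addresses()["Addresses"]:
    # An address with no AssociationId is allocated but not attached
    # to a running instance, so it accrues the hourly idle charge.
    if "AssociationId" not in addr:
        print(f"Unassociated EIP: {addr['PublicIp']}")
        # Uncomment to release it (and stop the charge):
        # ec2.release_address(AllocationId=addr["AllocationId"])
```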

Leaked AWS keys may also result in a high bill. When using public Git repositories, developers accidentally commit AWS keys along with their code. Scanners quickly harvest those keys and use your account for someone else's AWS workloads. Using private Git repositories, and keeping keys out of source control entirely, helps ensure your keys stay secure.
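
A crude pre-push check can catch the most obvious leaks by scanning the working tree for the well-known access key ID pattern. This is only an illustrative sketch; purpose-built tools such as git-secrets do this far more thoroughly.

```python
import re
from pathlib import Path

# AWS access key IDs follow a well-known pattern: "AKIA" followed
# by 16 uppercase letters or digits.
ACCESS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")

for path in Path(".").rglob("*"):
    if not path.is_file():
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for match in ACCESS_KEY_RE.finditer(text):
        print(f"{path}: possible AWS key {match.group()}")
```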

Other hidden costs include using an incorrect class of Elastic Block Storage (EBS), or forgotten resources left running in AWS. There are plenty of reasons to double-check your account and keep it running optimally for a low monthly bill.

Auto Scale Pitfalls

Auto scaling is a popular selling point of cloud computing. However, the feature comes with its own challenges and misconceptions. There are four common misconceptions about auto scaling that need to be addressed. They often lead to confusion among IT professionals about cloud architecture; addressing them will clarify what is true and what needs a closer look.

The first misconception is that auto scaling is easy. There are platforms out there that make auto scaling very straightforward. However, if you use an IaaS such as AWS and spin up an instance, you will notice that the public cloud doesn't give you auto scaling out of the box.

You will have to invest a lot of time to build an automated, optimal architecture that can replace failed instances and scale out on its own. Setting up a load-balanced group across availability zones (AZs) appears straightforward, but creating instances automatically, with flawless configuration and minimal standup time, requires much more effort. Writing the custom scripts and templates for these instances takes time.

On top of that, you also must become familiar with the AWS tools. Keeping the templates and scripts that power auto scaling up to date is a tough job. It may take months for an experienced systems engineer to get acquainted with auto scaling, and not everyone has the time or the resources to master it properly.

Small engineering teams often rely on a mixture of elastic load balancing and manual configuration. Committing external and internal resources to template creation can slow your buildout significantly, which may be why IT companies have whole teams of engineers working on automation scripts.

The second misconception is that elastic scaling is more common than fixed-size auto scaling. Auto scaling does not automatically mean load-based scaling. In some opinions, the most useful aspects of auto scaling are about high availability rather than elastic scaling techniques.

The purpose of auto scaling groups is resiliency. Instances are put into a fixed-size auto scaling group, and if an instance fails, it is seamlessly replaced. A simple example is an auto scaling group with a minimum size of 1 and a maximum size of 1. There are also more ways to scale a cluster than just watching CPU load.
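
A minimal boto3 sketch of that single-instance, self-healing pattern; the group name and launch configuration are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# A "fixed-size" group of exactly one instance: if it fails its
# health check, the group terminates it and launches a replacement.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="wordpress-web",          # placeholder name
    LaunchConfigurationName="wordpress-web-lc",    # assumed to exist
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    AvailabilityZones=["us-east-1a"],
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,                    # seconds to wait after boot
)
```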

Auto scaling can also add capacity to work queues, which is helpful for data analysis projects. A group of worker servers in an auto scaling group can listen to a queue and process its jobs. The group may also start a spot instance when the queue reaches a certain size; like all spot instances, this only happens if the instance price is below a target dollar amount. Capacity is then added only when it would be nice to have extra.
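
One common way to wire this up is a CloudWatch alarm on SQS queue depth that triggers a scale-out policy. A sketch, assuming an existing group named worker-fleet and a queue named work-queue:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Simple scaling step: add one worker each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="worker-fleet",           # assumed to exist
    PolicyName="scale-out-on-queue-depth",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
)

# Fire when more than 100 messages are waiting in the queue.
cloudwatch.put_metric_alarm(
    AlarmName="worker-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "work-queue"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```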

The third misconception is that capacity should always match demand. In other words, load-based auto scaling is not a fit for every environment. Some cloud deployments are more robust without auto scaling at all. This especially affects small startups running fewer than 50 instances, where matching capacity tightly to demand has hidden costs.

Auto scaling doesn't necessarily behave so neatly in the real world.

For example, imagine a startup whose website traffic peaks at 5:00pm. The peak requires 10 instances, but for the rest of the day two are enough. To cut costs, they use their cloud's auto scaling tool and put their instances in an auto scaling group with a maximum size of 15 and a minimum size of two. Then one day a new traffic peak arrives at noon, just as high as the usual 5:00pm peak. The noon spike lasts only a few minutes, but their website and application go down. Why did the site go down if they have auto scaling? There are a few reasons.

First, the auto scaling group only adds instances at set intervals (e.g. every 5 minutes), and it can take up to 5 minutes for a new instance to be up and running, so the extra capacity is not ready in time for the noon spike. Meanwhile, because the existing instances cannot handle the load, the group keeps creating new (soon unnecessary) instances; the existing servers stay overloaded, and the health checks running against the new instances respond slowly.

Then, when the Elastic Load Balancer notices a failing health check, it drops the instance. This snowballs into a worse problem and increases the load further. In a perfect situation, demand would increase slowly and predictably, but that isn't how the real world works; with unpredictable jumps, auto scaling can't keep up. Think of the example above: even if the startup saves money by scaling down to two instances, it risks downtime for its website.

Time for a truth bomb: auto scaling pays off for those who scale to hundreds of servers. If your capacity falls below what you need, you will always risk downtime. Even with an auto scaling group in place, it takes time (about 5 minutes) for an instance to appear, and 5 minutes is plenty of time for a traffic peak to hit. Scaling down by 80-90% of peak capacity, as in the example above, is too aggressive; keeping around 20% of peak capacity in reserve is a sensible starting point. The lesson is to avoid downtime at all costs.

The fourth misconception concerns configuration management. It can be tricky to find the right balance: what gets baked into the AMI itself (to make a "Golden Master") versus what is done at launch time by a configuration management tool (starting from a "Vanilla AMI").

In the real world, how much you configure an instance at launch depends on a few variables. One is how fast the instance needs to be spun up. Another is how often auto scaling events happen and the average life of an instance. The benefit of using a configuration management tool and building off Vanilla AMIs quickly becomes apparent.

If you are running more than 100 machines, you only have to update packages in one place, and you keep a record of every configuration change. But during an auto scaling event, you do not want to wait for packages to download and install. Additionally, the more work the default installation procedure must do, the higher the likelihood that something goes wrong.

For example, a few things can go wrong with a Puppet script. Suppose it updates OpenSSL to the newest version every time it runs: even if failures are rare, issues will arise. A transient network problem can cause a momentary outage while connecting to the package repository. If the initialization process fails badly, it can cost a lot of money, because instances may keep dying and being recreated. In an hour, this can spin up 30 instances and cost you a large sum, especially if you run huge production instances.

Through trial and error, a balance between the two approaches can be found. Ideally, you would create a stock image after running Puppet on an instance. The test of an effective deploy process is whether an instance created from that existing stock image works the same as one made from "scratch", that is, from a Vanilla AMI configured by Puppet.
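
Baking that stock image is a single API call once a staging instance has been configured. A boto3 sketch; the instance ID and image name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# After Puppet has configured a staging instance, bake the result into
# a reusable "stock" AMI so auto scaled instances boot ready-made.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",      # placeholder staging instance
    Name="wordpress-stock-v1",
    Description="Web tier image baked after a Puppet run",
)
print("New AMI:", image["ImageId"])
```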

Auto scaling is a difficult and time-consuming project for any engineering team to set up. Within the next couple of years, more tools will become available to assist with it, but cloud management tools are usually slow to arrive, and tools such as Amazon's OpsWorks need to become more useful. Until then, the efficiency of any environment's auto scaling process will depend on the skills of its cloud automation engineers.

Common Engineering Mistakes

A typical web application has a load balancer, a scalable web backend, and a database. While engineering a website, store, or application, a few common mistakes can occur.

Mistake #1 is managing infrastructure manually. If your AWS setup was created through the web-based Management Console, your infrastructure is managed manually. The problem with this approach is that the setup is not reproducible, there is no record of changes, and plenty of mistakes can slip in. There is a way around this: AWS CloudFormation solves the problem for free. Rather than creating all the resources (like EC2 instances) manually, you define them in a template, and CloudFormation knows how to turn that template into a running stack.

CloudFormation also creates all the resources in the correct order for you, and you can update a template to apply modifications to a running stack. Remember: managing infrastructure manually is not worth the time. It usually results in a big mess and is an unprofessional way to run your site or store.
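
To make this concrete, here is a deliberately tiny template driven from boto3. A real WordPress stack would also define the load balancer, auto scaling group, and database; the AMI ID is a placeholder.

```python
import json
import boto3

# One EC2 instance only, to keep the sketch short.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
                "InstanceType": "t3.micro",
            },
        }
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="wordpress-demo",
    TemplateBody=json.dumps(template),
)
```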

Mistake #2 is not using auto scaling groups. Everyone assumes that auto scaling groups are only about auto scaling. This is not true! Every EC2 instance should be launched from inside an auto scaling group, even if that means launching a single EC2 instance. The auto scaling group takes care of watching the EC2 instance, and it also serves as a logical group for your virtual machines. Best of all, it's free!

A typical web application runs its web servers on virtual machines in an auto scaling group. You can use the group to scale the number of virtual machines to match the current workload, but remember, you need the auto scaling group in place as a precondition. Auto scaling is achieved by setting up alarms on specific metrics, such as the CPU usage of the logical group. If the alarm threshold is reached, you can define an action to address it, for example increasing the number of machines in the auto scaling group.
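
A compact way to express this on AWS is a target tracking policy, which adds and removes machines to hold a metric near a target. A sketch, assuming a group named wordpress-web already exists:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU near 60%; the service adds or removes
# instances (within the group's min/max) to hold that target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="wordpress-web",          # assumed to exist
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```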

Mistake #3 is not evaluating metrics in CloudWatch. Each AWS service reports many metrics to a service known as CloudWatch. Virtual machines report CPU usage, network usage, and disk activity; databases also report memory usage and IOPS. Engineers must examine this data and understand their usage.

CloudWatch is a useful tool for noticing when and where usage spikes. Without the data, you may not know when spikes occur or where problems arise from peak usage. It's up to you to look at your CloudWatch metrics! The step after analyzing your metrics is to set up alarms on them; then you'll be much better prepared in the future.
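
Pulling a metric out of CloudWatch takes only a few lines. A sketch that prints a day of hourly CPU statistics for one (placeholder) instance:

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.datetime.now(datetime.timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=3600,                       # one data point per hour
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```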

Mistake #4 is ignoring Trusted Advisor. This is not about your school guidance counselor: your AWS account includes a tool known as Trusted Advisor, which checks your account against best practices established by AWS. The areas it focuses on are:

- Cost optimization
- Performance
- Security
- Fault tolerance
- Service limits

Your Trusted Advisor dashboard shows where you can improve your use of your AWS account. Always check security first; it should be a priority. Trusted Advisor can also email you a weekly summary of any changes in its findings, which you can activate in the preferences section. And if you pay for AWS support, Trusted Advisor unlocks additional checks to help you optimize even further.
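
The findings can also be read programmatically through the support API, which requires a Business or Enterprise support plan. A sketch that prints every check currently in a warning or error state:

```python
import boto3

# The support API requires a paid support plan and is only
# served out of us-east-1.
support = boto3.client("support", region_name="us-east-1")

for check in support.describe_trusted_advisor_checks(language="en")["checks"]:
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])
    status = result["result"]["status"]
    if status in ("warning", "error"):
        print(f"{check['category']}: {check['name']} -> {status}")
```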

The final mistake, #5, is underutilizing virtual machines. If you find that your EC2 instances are underutilized, there is no reason not to decrease the instance size. One caveat is when you manage infrastructure manually (which is a big no-no). Checking your CloudWatch metrics will tell you whether you are underutilized. If you use auto scaling groups, also make sure your scaling rules are correct, that is, whether to scale up later or scale down earlier.
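
Rightsizing a single instance is a stop, modify, start sequence. A boto3 sketch with a placeholder instance ID and target size; note the instance is briefly offline:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"        # placeholder

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3.small"},    # the smaller target size
)

ec2.start_instances(InstanceIds=[instance_id])
```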

Security Issues

The final area of AWS where mistakes are likely to occur is security. This section offers some tips on preventing common security mistakes.

Lack of Separation

Be sure your own code is separated from others' and isolated. Most LAMP-stack hosts provide shared hosting, which is a cheap alternative to dedicated servers but not ideal. A common disaster arises when another site (not yours) gets overloaded with requests, overwhelming the server it runs on. The result? Every other site on that shared host, including yours, becomes unreachable to users. This is a huge problem, especially if you run a small business website.

Mutability

Projects like WordPress are built with the ability to modify themselves, which makes it easy to keep the application up to date on a single host. But the LAMP stack cannot carry that convenience across multiple hosts when your site needs to scale. To scale, you can't have the application update itself; for the application to update itself, it must stay on one host so it can upgrade all of its code in one place.

Insecurity

Insecurity is a close relative of mutability within a system. Code that can change itself invites problems: bugs and other software flaws allow attackers to alter the site in unplanned ways.

WordPress sites are hacked all the time, often as a result of poorly coded themes and plugins. If every program in the stack were perfectly written, there wouldn't be as many problems, but programming errors will occur no matter what. It's up to you to have methods and precautions in place to prevent this.

Another reason sites get hacked is incorrect file and directory permissions, along with improper web server configuration. Following the principle of least privilege and implementing configuration best practices prevents the majority of WordPress security incidents.
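
As one concrete precaution, the commonly recommended WordPress permissions (directories 755, files 644, wp-config.php locked down) can be enforced with a short script. A sketch, assuming the document root is /var/www/html and the script runs as the file owner; exact permission choices vary by hosting setup.

```python
from pathlib import Path

DOCROOT = Path("/var/www/html")    # assumed WordPress document root

for path in DOCROOT.rglob("*"):
    if path.is_dir():
        path.chmod(0o755)          # directories: owner rwx, group/other rx
    elif path.is_file():
        path.chmod(0o644)          # files: owner rw, group/other r

# wp-config.php holds database credentials; restrict it to the owner.
(DOCROOT / "wp-config.php").chmod(0o600)
```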
