m4.large vs. c4.large: Examining the Two AWS EC2 Instances

A closer look at the m4.large and c4.large AWS EC2 instances reveals significant differences in their specs and intended performance that users should consider. Let’s take a look at the most important insights from a cloud management perspective.
Looking to compare the newer family of EC2 instances? Check out the Amazon EC2 Comparisons: M5 vs. C5 post!

Both instance types are compelling choices for any cloud infrastructure. But depending on the cloud environments and performance expectations AWS users have (from general-purpose workloads to large clusters run on AWS EMR), understanding and choosing the best-fitting instance can make a difference in performance and cloud cost efficiency.

Provisioning right-sized EC2 instances also delivers cloud cost savings throughout a project. Here are a few performance and cost efficiency insights your engineering team should consider when deciding between the two instances.

Comparing their key specs

| Instance | Memory | Compute units | Cores | Storage | Dedicated EBS Throughput |
| --- | --- | --- | --- | --- | --- |
| m4.large | 8.0GB | 6.5 units | 2 | EBS only | 450 Mbps |
| c4.large | 3.75GB | 8.0 units | 2 | EBS only | 500 Mbps |

Pricing using Linux examples from U.S. West

| Instance | Linux On-Demand | Linux Reserved Instance | Windows On-Demand | Windows Reserved Instance |
| --- | --- | --- | --- | --- |
| m4.large | $0.120/hr | $0.083/hr | $0.246/hr | $0.184/hr |
| c4.large | $0.105/hr | $0.078/hr | $0.193/hr | $0.170/hr |

[Edit: 11/14/2016] AWS made a big pricing announcement during November 2016. It stated that as of December 1, 2016, there will be various M4, C4, and T2 price reductions (anywhere from 5 to 25 percent) within certain regions. We’ll reflect those changes throughout this article where applicable.

Both the m4.large and c4.large offer two vCPUs on a 64-bit architecture. The m4.large’s vCPUs run on 2.4 GHz Intel Xeon® E5-2676 v3 (Haswell) processors, while the c4.large’s run on high-frequency Intel Xeon E5-2666 v3 (Haswell) processors, which AWS markets as being “optimized for EC2.”

AWS users considering either instance should be aware that both require Amazon VPC and both require EBS provisioning. A general cloud cost management best practice is to tag EBS volumes so that costs can be allocated to each provisioned instance.
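As a rough sketch of that tagging practice, the boto3 (Python) snippet below looks up the EBS volumes attached to a given instance and applies cost-allocation tags. The region, instance ID, and tag keys are placeholder values.

```python
import boto3

# Placeholder region and instance ID; adjust for your environment.
ec2 = boto3.client("ec2", region_name="us-west-2")
instance_id = "i-0123456789abcdef0"

# Find the EBS volumes currently attached to the instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)
volume_ids = [v["VolumeId"] for v in volumes["Volumes"]]

# Tag them so EBS costs can be allocated back to the instance.
if volume_ids:
    ec2.create_tags(
        Resources=volume_ids,
        Tags=[
            {"Key": "attached-instance", "Value": instance_id},
            {"Key": "cost-center", "Value": "example-project"},  # placeholder tag
        ],
    )
```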

In terms of operating costs, the c4.large is the cheaper resource to provision. As with other EC2 instances, both the m4.large and c4.large cost more when Windows is required as the operating system.

AWS features both instances for use with their Elastic MapReduce service. Users wanting to take advantage of EMR with m4.large, c4.large, or any other instance type need to account for the additional AWS costs: EMR incurs its own charges on top of any EC2 instances launched as part of the EMR job.
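For illustration only, a minimal boto3 sketch of launching a small EMR cluster on m4.large instances might look like the following; the cluster name, log bucket, release label, and IAM roles are assumed placeholder values.

```python
import boto3

emr = boto3.client("emr", region_name="us-west-2")

# Minimal cluster definition; every name here is a placeholder.
response = emr.run_job_flow(
    Name="example-emr-cluster",
    ReleaseLabel="emr-5.2.0",
    LogUri="s3://example-emr-logs/",
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",  # swap in c4.large for compute-heavy jobs
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

# This cluster bills EMR charges on top of the underlying EC2 instances.
print(response["JobFlowId"])
```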

For consideration: the c4.large’s focus on compute and storage bandwidth

Both instances are Enhanced Networking-capable, but it’s worth noting that the c4.large has a focus on compute and storage bandwidth.

AWS features their c4 family as “the latest generation of Compute-optimized instances, featuring the highest performing processors and the lowest price/compute performance in EC2.”

This compute optimization trades a lower RAM allocation for more compute units, giving the c4.large a specialization distinct from the m4.large’s general-purpose role.

| Instance | Memory | Compute units | Dedicated EBS Throughput |
| --- | --- | --- | --- |
| m4.large | 8.0GB | 6.5 units | 450 Mbps |
| c4.large | 3.75GB | 8.0 units | 500 Mbps |

While both the m4.large and c4.large feature EBS optimization at no additional cost, the c4.large has a slightly higher dedicated throughput, which can be handy for more I/O-intensive workloads.
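One way to confirm those dedicated-throughput figures is to query the EC2 API directly. The sketch below uses boto3’s describe_instance_types call (a newer API than the pricing tables in this article) to print the baseline EBS bandwidth for each instance type.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Ask EC2 for the published specs of both instance types.
resp = ec2.describe_instance_types(InstanceTypes=["m4.large", "c4.large"])

for itype in resp["InstanceTypes"]:
    ebs = itype["EbsInfo"].get("EbsOptimizedInfo", {})
    print(
        itype["InstanceType"],
        "baseline EBS bandwidth:",
        ebs.get("BaselineBandwidthInMbps"),
        "Mbps",
    )
```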

For users who forecast scaling their environments up over time and want more granular control over compute jobs, a marketed feature of the C4 family is the ability to control processor C-state and P-state configuration on the c4.8xlarge instance type.

The bottom line: does the workload need the extra memory?

The m4.large is a great default choice between the two EC2 instances if the engineering team is unsure of what their environment workloads will be like. The m4.large’s balance of compute, memory, and EBS optimization makes it a solid choice for in-memory databases, gaming servers (single and multiplayer experiences), caching fleets, batch processing, and business applications like SAP and Microsoft SharePoint, as described by AWS.

Pricing per memory and compute units (ECU) using U.S. West examples

| Instance | Linux On-Demand cost per GB | Linux On-Demand cost per ECU | Linux Reserved cost per GB | Linux Reserved cost per ECU | Windows On-Demand cost per GB | Windows On-Demand cost per ECU | Windows Reserved cost per GB | Windows Reserved cost per ECU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| m4.large | $0.015/GB | $0.018/ECU | $0.010/GB | $0.013/ECU | $0.031/GB | $0.038/ECU | $0.023/GB | $0.028/ECU |
| c4.large | $0.028/GB | $0.013/ECU | $0.021/GB | $0.010/ECU | $0.051/GB | $0.024/ECU | $0.045/GB | $0.021/ECU |

Breaking down pricing per GB of memory and compute units shows the respective cost efficiencies of the m4 and c4 examples.
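The figures above are simple division: hourly rate over GB of memory, and hourly rate over ECU. A quick sketch of the calculation, using the Linux On-Demand rates from the earlier pricing table:

```python
# Hourly Linux On-Demand rates and specs from the tables above.
instances = {
    "m4.large": {"memory_gb": 8.0, "ecu": 6.5, "hourly": 0.120},
    "c4.large": {"memory_gb": 3.75, "ecu": 8.0, "hourly": 0.105},
}

for name, spec in instances.items():
    per_gb = spec["hourly"] / spec["memory_gb"]
    per_ecu = spec["hourly"] / spec["ecu"]
    print(f"{name}: ${per_gb:.3f}/GB, ${per_ecu:.3f}/ECU")

# m4.large: $0.015/GB, $0.018/ECU
# c4.large: $0.028/GB, $0.013/ECU
```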

Another ECU and GB comparison including December 2016 pricing examples

[Edit: 11/14/2016] Get more details about AWS’s EC2 pricing updates on their recent blog article. The table below is an initial comparison exercise to show the cost efficiency of ECU and memory once December 2016 pricing is active with maximum proposed discounts taken into consideration for respective regions. These are subject to change once AWS releases actual pricing.

| Instance | Linux On-Demand cost per GB: U.S. East (N. Virginia) | Linux On-Demand cost per ECU: U.S. East (N. Virginia) | Linux On-Demand cost per GB: EU (Ireland) | Linux On-Demand cost per ECU: EU (Ireland) | Linux On-Demand cost per GB: Specific Asia Pacific Regions | Linux On-Demand cost per ECU: Specific Asia Pacific Regions |
| --- | --- | --- | --- | --- | --- | --- |
| m4.large | $0.0135/GB | $0.0165/ECU | $0.01485/GB | $0.018/ECU | $0.0167/GB (Singapore) | $0.0205/ECU (Singapore) |
| c4.large | $0.0252/GB | $0.0117/ECU | $0.0285/GB | $0.0134/ECU | $0.0307/GB (Sydney) | $0.0144/ECU (Sydney) |

This is just one example that compares a few upcoming December 2016 EC2 region-specific price reductions. See the AWS blog article for more details.

Of the two instances, the m4.large’s larger memory capacity suits workloads that require more RAM. Users whose environments don’t consistently surpass the c4.large’s 3.75GB RAM allocation can provision the c4.large instead, which grants more compute power at a slightly cheaper hourly rate.

High-performing front-end fleets, web servers, and other compute-intensive environments can make the most of the c4.large. The c4.large also has a performance and cost advantage when used for distributed analytics and high-performance science and engineering applications. It’s also well suited to the high-demand workloads of MMO gaming services (massively multiplayer experiences), ad serving, and video encoding. But if the workload begins to hit the bottleneck of the lower RAM allocation, it’s time to consider a new instance.

As of December 2016, choosing regions where the m4.large and c4.large have discounted rates can bring in even more savings. That’s before any Reserved Instances are applied, too!

Investigate EC2 usage and utilization to determine the right fit

AWS users can determine which instance is the best fit by measuring their cloud usage and utilization and matching that data against the needs of their app or project. The goal is to check CPU, disk I/O, and bandwidth utilization to understand what meets project requirements, and to spot when the project might outgrow its existing cloud environment.
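As a starting point, CloudWatch already exposes those metrics. The minimal sketch below pulls two weeks of hourly CPU utilization for a single instance; the instance ID and region are placeholders, and disk I/O or network metrics can be fetched the same way by changing MetricName.

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

# Two weeks of hourly CPU utilization for one instance (placeholder ID).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"avg {point['Average']:.1f}%  max {point['Maximum']:.1f}%")
```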

Choosing EC2 instances that fit well to their workload is often the best way to get the most value out of AWS. Monitoring and adjusting EC2 provisioning as projects change and grow keeps the savings coming.

To see this kind of cloud cost and usage reporting at work, reach out to us for a free trial of IBM Cloudability. We’d be happy to show anyone how to better understand their cloud cost situation and identify ways to right-size those instances.
