TECHNOLOGY
Integrating cloud compute with Wasabi Hot Cloud Storage
Cloud storage has long run the risk of being an afterthought in infrastructure planning. An architect turned to a cloud manager and said, “Oh, yeah, we need some storage for that cloud compute instance. Order some, okay?” The cloud manager nodded and dutifully spun up whatever storage service was available, perhaps without investigating the long-term cost, performance, or security implications of the decision.
To be fair to such box checkers, there weren’t that many cloud storage options available. Cloud storage was mostly “cool,” while “hot,” higher-performing alternatives were either costly or not suitable for many use cases. This is all starting to change now that the cloud has become more sophisticated and critical. Solutions like Wasabi Hot Cloud Storage offer new cloud storage capabilities that cloud managers should consider as they figure out the best way to integrate cloud compute with storage.
What is cloud compute, really?
A discussion about integrating cloud compute with storage should necessarily start with a meaningful definition of cloud compute. Now, you may be thinking, we don’t need that. Everyone knows what cloud compute is all about. If you’ve spent any time talking to people in the tech industry, however, you should be aware of two problems that arise from making such an assumption. First, there is seldom consensus, even on facts as seemingly settled as the definition of cloud compute. Second, what we once considered to be the complete truth becomes less complete with every passing year.
The idea of cloud computing has its origins in the old “time sharing” model of the 1960s, an era when computers were rare and quite expensive. The idea evolved through the 1990s, when technologist David Hoffman first crafted the metaphor of computing resources being available as if “in a cloud,” remote and abstracted from the end user and from any identifiable physical data center.
The best definition of the cloud today comes from the National Institute of Standards and Technology (NIST): a technology environment that makes computer system resources available on an on-demand, self-service basis. A cloud computing platform should also provide broad network access, employ resource pooling, and enable rapid elasticity.
The “compute” in cloud compute refers to compute instances, which are almost always virtual machines (VMs) running in a cloud architecture. It can be a little confusing, because the phrase “cloud computing” generally means the entire cloud ecosystem, including storage, networking, and so forth.
Three types of cloud compute predominate:
Virtualized compute, i.e., VMs packaged and delivered as a service, typically running on x86 CPUs but also on Arm chips. Cloud users can select the number of cores and threads they need for a given workload.
Accelerated compute, which uses graphics processing units (GPUs) or accelerated processing units (APUs) to boost compute performance.
Bare metal, which means an actual piece of hardware rather than a VM. Bare metal translates into a dedicated compute instance, which may be preferable for performance or security reasons. It also provides more flexibility in terms of compute, memory, and configuration. The main downside is cost: bare metal tends to be far more expensive than virtualized or accelerated compute.
Cloud compute use cases
It is possible to put a cloud compute instance to work for almost any use case that can be run on-premises on a physical computer. There are thus myriad cloud compute use cases. However, some use cases are more sensitive to the nuances of the compute instance than others.
Cloud-native applications
A new generation of software is being built just for the cloud. Examples of such cloud-native applications include those composed of microservices and packaged in software containers, often orchestrated with platforms like Kubernetes. Cloud-native apps utilize compute resources differently from conventional software. The need for performance, for example, may be more granular and workload specific.
Data analytics
Data analytics is a popular workload for cloud compute. Reasons for this include the ability to select a specific type of compute instance, e.g., one with certain performance characteristics, versus using whatever compute is available on-premises. In addition, the tendency of data analytics workloads to manifest as spikes in compute load favors the cloud. If one has a massive data set to analyze, it’s usually faster and more economical to spin up cloud compute instances and spin them down when the data analytics job is over. Data sets aren’t getting smaller, either. In fact, today’s data analytics processes may involve massive data lakes and other unstructured forms of data, which are also well suited to the cloud.
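As a rough illustration of that spin-up, analyze, spin-down pattern, here is a minimal sketch in Python using the boto3 and pandas libraries to read a dataset straight from an S3-compatible object store on a short-lived compute instance. The endpoint, bucket, key, and column names are hypothetical placeholders, not a specific service's layout.

```python
# Minimal sketch: an ephemeral analytics job reading straight from
# S3-compatible object storage. Bucket, key, and column names are
# hypothetical; endpoint_url and credentials come from your provider.
import boto3
import pandas as pd

s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-object-storage-endpoint>",  # any S3-compatible service
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Stream the object body directly into a DataFrame; no local copy needed.
obj = s3.get_object(Bucket="analytics-lake", Key="events/2024/01/events.csv")
df = pd.read_csv(obj["Body"])

# Run the aggregation, then the compute instance can be spun back down.
print(df.groupby("event_type").size())
```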
Machine Learning/Artificial Intelligence
Machine learning (ML) and artificial intelligence (AI) are growing in popularity, especially as recent advances in AI/ML have made the technology useful for a broader range of applications. AI and ML are compute intensive. The “model training” stage of AI requires a tremendous amount of compute. Then, as AI/ML software goes to work, its day-to-day tasks of pattern recognition and analysis of multiple data streams tend to need a lot of compute power.
Cloud object storage for cloud compute
The discussion around what type of cloud storage is right for a given cloud compute use case has evolved over time. Users traditionally had three basic choices: object storage, file storage, and block storage. Object storage, whether in the cloud or on-premises, stores data as discrete objects, each pairing the data itself with a unique identifier and customizable metadata. The advantages of this approach include flexibility and the ability to attach custom metadata to every object. Cloud object storage is well suited to unstructured data.
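To make the object model concrete, here is a minimal sketch in Python using the boto3 library against an S3-compatible endpoint. The bucket name, key, and metadata values are hypothetical placeholders.

```python
# Sketch of the object storage model: each object is data plus metadata,
# stored and retrieved under a key. Names and metadata are hypothetical.
import boto3

# Point endpoint_url at any S3-compatible object storage service.
s3 = boto3.client("s3", endpoint_url="https://<your-object-storage-endpoint>")

# Store unstructured data as an object, attaching custom metadata.
with open("frame-0001.png", "rb") as data:
    s3.put_object(
        Bucket="media-archive",
        Key="projects/demo/frame-0001.png",
        Body=data,
        Metadata={"camera": "unit-7", "captured": "2024-01-15"},
    )

# Look the object up later by key; the custom metadata travels with it.
head = s3.head_object(Bucket="media-archive", Key="projects/demo/frame-0001.png")
print(head["Metadata"])
```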
Cloud file storage, analogous to network attached storage (NAS) on-premises, uses a hierarchical directory structure to manage stored data. Cloud block storage, analogous to an on-premises storage area network (SAN), stores data in fixed-size blocks tied to a specific storage volume.
Users selected from these three based on protocol, price, and performance service level agreements (SLAs). However, their decisions were based on a number of assumptions related to storage tiers. On the cold-to-warm-to-hot spectrum of cloud storage performance, object storage was viewed as “cool,” i.e., not useful for higher performing use cases.
Expectations are changing, though. Today, users expect cloud data to be available instantaneously. The notion of waiting for data because it’s in a low cost, low performance “cold storage” tier is less and less acceptable. Data- and compute-intensive workloads are forcing the issue.
The result is a push to make more, if not all, storage either warm or hot. This has put pressure on cloud object storage to be higher performing (but also not excessively costly). This is a problem that Wasabi can solve.
Wasabi makes all cloud storage available on the same “hot” basis. There are no warm or cool tiers. Wasabi is cost effective, too, with S3-compatible object storage. With this capability, users can build hybrid infrastructure that extends on-premises storage capacity into the cloud with consistently high performance. Wasabi is high performing enough for demanding workloads like AI, but economical enough for secondary storage tier use cases like backup and redundancy. Wasabi works well with cloud data lakes, as well.
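In practice, “S3-compatible” means existing S3 tooling can be pointed at Wasabi simply by swapping the endpoint. The sketch below uses Python’s boto3 library; the endpoint shown is Wasabi’s commonly documented us-east-1 service URL, so substitute the endpoint for your storage region along with your own access keys.

```python
# Minimal sketch: pointing standard S3 tooling at Wasabi's S3-compatible API.
# The endpoint is Wasabi's commonly documented us-east-1 URL; use the one for
# your region, plus your own access keys.
import boto3

wasabi = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",
    aws_access_key_id="YOUR_WASABI_ACCESS_KEY",
    aws_secret_access_key="YOUR_WASABI_SECRET_KEY",
)

# Existing S3-based code paths work unchanged once the client is swapped in.
for bucket in wasabi.list_buckets()["Buckets"]:
    print(bucket["Name"])
```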
In the case of backup and restore, Wasabi partners with the most popular backup and restore solutions. Users can leverage their favorite disaster recovery (DR) applications and spin up failover environments for critical applications. They can also create durable archival storage that preserves immediate access to content. Their data remains readily available to adjacent compute services and applications.
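Most teams will let their backup product target the bucket for them, but as a hedged sketch of what a direct upload looks like, the Python snippet below packages a directory into an archive and pushes it to a bucket. The paths, bucket, and key prefix are hypothetical, and boto3’s upload_file switches to multipart uploads for large archives automatically.

```python
# Hedged sketch of a direct backup upload to an S3-compatible bucket.
# Paths, bucket, and key prefix are hypothetical placeholders.
import tarfile
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")  # example endpoint

# Package the data to protect into a timestamped archive.
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
archive = f"app-data-{stamp}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add("/var/lib/app-data")

# upload_file transparently uses multipart uploads for large files.
s3.upload_file(archive, "nightly-backups", f"app-server-01/{archive}")
```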
Customers praise Wasabi as a storage solution for cloud compute partly because the pricing is predictable. Wasabi does not charge for data egress or API calls, fees that are common with other cloud storage services. Users also consider Wasabi easy to use, citing its simple console.
Wasabi’s cloud compute partners, including Equinix, Hivelocity, and Vultr, augment its utility for numerous cloud compute use cases. Wasabi can integrate with compute services from providers such as Equinix as a low-cost data storage option that eliminates API charges between services.
Cloud storage security comes from encryption capabilities as well as Wasabi’s immutability feature. Data stored in an immutable Wasabi bucket cannot be modified or deleted for the duration of its retention period. This countermeasure helps mitigate the risk of ransomware attacks.
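As a hedged sketch of how that can look through the S3-compatible API (Wasabi also exposes immutability settings in its management console), the snippet below creates a bucket with object locking enabled and writes an object under a retention period. The bucket name, key, and 90-day window are hypothetical.

```python
# Hedged sketch: write-once storage via the S3 Object Lock API on an
# S3-compatible service. Bucket, key, and retention window are hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com")  # example endpoint

# The bucket must be created with object lock enabled up front.
s3.create_bucket(Bucket="ransomware-safe-backups", ObjectLockEnabledForBucket=True)

# Until the retention date passes, this object cannot be modified or deleted.
with open("app-backup.tar.gz", "rb") as data:
    s3.put_object(
        Bucket="ransomware-safe-backups",
        Key="backups/app-backup.tar.gz",
        Body=data,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```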
Conclusion
Wasabi offers an attractive storage option for cloud compute. It speeds up cloud object storage, making object storage a good fit for a broader range of workloads than it could previously handle. It remains economical, however, and is not subject to unpredictable pricing swings from egress and other charges. Getting started is relatively easy: just switch on Wasabi and integrate it with cloud compute instances.
To learn more, visit our cloud computing page.