A growing number of organizations have deployed hyperconverged infrastructure (HCI) systems in an effort to simplify IT operations, better utilize resources, and lower costs. They might house the systems in their own data centers, colocation facilities, edge environments, or office closets. Regardless of the location, many of these organizations are running SQL Server on their HCI systems, often alongside other applications. Although running SQL Server on HCI means deploying it to a virtualized environment, that practice has become fairly common, especially with the advent of the cloud. This article looks at hyperconvergence as another option for hosting SQL Server.
Despite how common hyperconvergence has become, some IT teams might not be familiar with HCI, or they might know HCI but have never deployed SQL Server on an HCI platform. In either case, they might now be considering HCI for SQL Server and need to understand what such a deployment looks like before deciding on new infrastructure. Although HCI can make it easier to provide a platform for SQL Server, decision-makers should know what they’re getting into before going down this route.
The data center dilemma
Hyperconverged infrastructure was introduced to help address the challenges that IT teams faced in managing the traditional data center. Not only did they have to maintain a complex mix of compute, storage, and network hardware from a multitude of vendors, but they also had to contend with rapidly evolving conditions, as applications became more sophisticated and data volumes grew larger and more diverse, putting greater demands on hardware resources.
In such a climate, IT teams had to work overtime to keep up with everything they needed to do—maintaining equipment, protecting data, ensuring interoperability, and optimizing systems on a continuous basis—often with limited budgets and staff. Given these conditions, it’s not surprising that many data centers suffered from infrastructure silos, underutilized resources, and unnecessary costs, leaving IT teams little time to innovate or respond to changing business conditions.
To help streamline their operations, some organizations turned to converged infrastructure solutions, which consolidated hardware and software from different vendors into a preconfigured, preoptimized appliance that was easy to deploy and manage. These appliances helped eliminate many of the maintenance and interoperability issues that IT teams faced. Although converged infrastructure is still used extensively in the data center, it tends to be inflexible and specific to certain types of applications, causing many organizations to look for other options.
Introducing the HCI platform
Hyperconverged infrastructure tightly integrates compute, storage, and network resources into a unified platform made up of industry-standard x86 hardware. Unlike converged infrastructure, the HCI system also includes a software layer that abstracts and pools compute and storage resources and, in some cases, network resources. The hardware comes in preconfigured nodes (modules) that are housed in one or more racks to form a consolidated cluster of pooled resources. An HCI system also includes a management layer for monitoring and controlling the entire HCI environment.
In the early days of HCI, compute and storage resources resided in the same node, with components tightly integrated and optimized. In this way, organizations could start small, usually with a minimum of three nodes, and then scale out by adding nodes. Deploying a new node was quick and simple, with minimal impact on running workloads.
The challenge with this approach, however, was that it often forced customers to purchase more resources than necessary. For example, if they needed additional processing power, they would have to purchase an entire node, whether or not they also needed more storage. As a result, customers often had to overprovision resources to get what they needed—the same problem that plagued traditional data centers.
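To see why coupled scaling leads to overprovisioning, it helps to run a quick back-of-the-envelope calculation. The following Python sketch uses invented node sizes and workload figures (assumptions for illustration, not vendor specifications) to show how a CPU-heavy workload forces the purchase of storage that will sit idle:

```python
# Hypothetical illustration of why first-generation HCI's coupled
# compute/storage nodes force overprovisioning. All figures are
# assumptions for illustration, not vendor specifications.
from math import ceil

NODE_CORES = 32        # cores per node (assumed)
NODE_STORAGE_TB = 20   # raw storage per node (assumed)
MIN_NODES = 3          # typical minimum cluster size for resiliency

def nodes_required(cores_needed: int, storage_needed_tb: float) -> int:
    """Smallest node count that satisfies both resource demands."""
    return max(ceil(cores_needed / NODE_CORES),
               ceil(storage_needed_tb / NODE_STORAGE_TB),
               MIN_NODES)

# A CPU-heavy, storage-light workload:
n = nodes_required(cores_needed=200, storage_needed_tb=30)
print(f"nodes: {n}, cores bought: {n * NODE_CORES}, "
      f"storage bought: {n * NODE_STORAGE_TB} TB (only 30 TB needed)")
# nodes: 7, cores bought: 224, storage bought: 140 TB (only 30 TB needed)
```

In this example, a workload that needs only 30 TB of storage ends up with 140 TB, simply because the processing requirement dictates the node count.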
Newer HCI systems provide more flexibility by separating the compute and storage resources into different modules. Referred to as HCI 2.0 or disaggregated HCI (dHCI), these systems make it possible to scale compute and storage resources independently, enabling IT teams to deploy HCI systems that better accommodate their workloads, without having to invest in hardware they don’t need. At the same time, they still get modules that are preconfigured and preoptimized and that provide the same overall interoperability that came with the first generation of HCI.
Many HCI solutions on the market today are offered as appliances that include all the hardware and software necessary to be up and running in a matter of hours. Most of the hardware comes from the same vendor and, along with the software, is configured, optimized, and tested to ensure a high level of performance and interoperability. However, many of these appliances are built with proprietary hardware—leading to higher costs and the risk of vendor lock-in.
Some vendors offer software-only HCI solutions, leaving it up to IT teams to assemble the physical components. In this way, they have more control over their choice of hardware and how the components are configured, but it also requires more effort to assemble, deploy, and maintain these systems, offsetting some of the savings that come with the do-it-yourself approach.
Even so, software-only HCI can make it possible for customers to use their existing hardware, or they can purchase off-the-shelf commodity hardware. Vendors often validate certain hardware components to work with their software, helping ease the procurement and deployment process. Some vendors also provide reference architectures that describe how to assemble HCI systems using specific equipment.
How HCI works
Today’s HCI appliances are made up of modular compute, storage, and network blades installed in one or more racks. Although some appliances use proprietary servers, others use standard, off-the-shelf machines. Some HCI systems also include a software-defined networking layer that abstracts the physical network components, making it possible to virtualize the entire HCI environment.
A hypervisor delivers the server resources through virtual machines (VMs), where the hosted applications typically run. More recently, several vendors have also added support for containers, usually in conjunction with Kubernetes. However, many of these implementations still rely on VMs, rather than running the containers on bare metal.
An HCI system also provides a software-defined storage (SDS) layer that consolidates the physical storage devices into a shared resource pool that’s available to the VMs across all nodes. In many cases, the HCI platform also uses VMs to control the storage environment, but this is not a requirement. What is required is that the platform provides a flexible pool of storage resources that can accommodate different workloads, facilitate data maintenance and reduction, and optimize I/O operations.
HCI storage can include solid-state drives (SSDs), hard-disk drives (HDDs), or a combination of both. These days, the fastest systems use non-volatile memory express (NVMe), and some even support Intel Optane devices. One of the big advantages of an HCI system is that storage is in close proximity to the compute resources, eliminating many of the challenges that come with network-attached storage (NAS) or a storage area network (SAN). This can be especially beneficial to applications such as SQL Server that need to maximize IOPS and limit latency.
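To get a rough feel for the latency a given datastore delivers, a quick probe can serve as a sanity check before bringing in purpose-built tools such as fio or Microsoft’s DiskSpd. The following Python sketch is illustrative only: the file path is a placeholder, it assumes a Unix-like guest, and because it reads through the OS page cache, the numbers will flatter the storage. Tools that perform direct I/O are the right instruments for a real evaluation.

```python
# Rough random-read latency probe against a test file on the HCI datastore.
# Simplified sketch only; use fio or DiskSpd for real benchmarking.
import os
import random
import statistics
import time

PATH = "/mnt/hci-volume/testfile.bin"  # placeholder; create a large test file first
BLOCK = 8 * 1024                       # 8 KB, matching SQL Server's page size
READS = 1000

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []
for _ in range(READS):
    # Pick a random block-aligned offset within the file
    offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
    start = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
os.close(fd)

print(f"median read latency: {statistics.median(latencies):.3f} ms")
print(f"p99 read latency:    {statistics.quantiles(latencies, n=100)[98]:.3f} ms")
```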
An HCI system also comes with a unified management layer that provides visibility into both the physical and virtual components that make up the HCI environment. In this way, administrators have a central interface for deploying and troubleshooting components, as well as provisioning and managing workloads. In addition, the management layer usually provides monitoring capabilities for tracking systems and assessing performance.
Most HCI systems include built-in data protections that make it easy to restore data in the event of node failure. If a node should fail, other nodes can compensate until the failed node is restored. Some systems also offer features such as backups, clones, snapshots, or other disaster recovery mechanisms. In addition, some might include self-healing capabilities that can automatically identify and resolve issues.
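Many HCI platforms implement this resiliency by keeping two or three replicas of every data block spread across nodes, which means failure tolerance is paid for in raw capacity. The following sketch illustrates that trade-off. The replication factors mirror common defaults, but the node count and per-node capacity are assumptions for illustration:

```python
# How replica-based data protection trades raw capacity for failure
# tolerance. RF2/RF3 mirror common HCI defaults; node sizes are assumed.

def usable_capacity_tb(nodes: int, raw_per_node_tb: float, rf: int) -> float:
    """Usable capacity when every data block is stored rf times."""
    return nodes * raw_per_node_tb / rf

for rf in (2, 3):
    usable = usable_capacity_tb(nodes=4, raw_per_node_tb=20, rf=rf)
    # With rf copies of each block, rf - 1 nodes can fail without data loss
    print(f"RF{rf}: {usable:.1f} TB usable of 80 TB raw, "
          f"tolerates {rf - 1} node failure(s)")
# RF2: 40.0 TB usable of 80 TB raw, tolerates 1 node failure(s)
# RF3: 26.7 TB usable of 80 TB raw, tolerates 2 node failure(s)
```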
Pros and cons of HCI
For many organizations, the decision to implement HCI goes beyond SQL Server because they want infrastructure that can support other applications as well. Even so, SQL Server and other database systems are among the most commonly deployed applications on HCI. According to some estimates, database systems such as SQL Server represent 50% of the most common HCI workloads. In addition, a survey published by Computer Business Quarter in 2019 identifies SQL Server as the top database product for HCI, with 66% of the respondents reporting that they use HCI for their SQL Server deployments, far ahead of Oracle, MySQL, and other database systems.
Regardless of the exact numbers, HCI is clearly a common platform for deploying SQL Server and other database systems, yet IT teams new to HCI might not be aware of the many benefits that HCI can offer their organizations:
- HCI simplifies IT operations. Because components are preconfigured and pre-integrated, IT teams can more easily procure, deploy, and maintain infrastructure, as well as upgrade hardware and software. HCI also makes it easier to automate routine operations through the centralized management layer. Additionally, some HCI systems incorporate artificial intelligence and machine learning to provide real-time insights into the HCI environment.
- HCI offers a great deal of flexibility, despite its rigid node-centric architecture. IT teams can start small and scale out when needed by adding nodes, without downtime or interoperability issues. In addition, virtualization makes it easier to handle diverse workloads, move workloads between environments, and accommodate changing business requirements, while enabling teams to set up specialized environments for testing or development.
- HCI can improve workload performance because the data is closer to where it’s processed, helping to increase I/O throughput and reduce latency. In addition, many HCI systems now support all-flash storage, Optane SSDs, and NVMe. Plus, HCI’s software-defined capabilities can help accommodate changing requirements, making it easier to maintain performance as applications evolve.
- HCI’s multi-node architecture provides a high degree of resiliency and availability. If a node fails, administrators can replace it without downtime or data loss. Many HCI systems also provide data protections such as backups and snapshots.
- HCI offers a cloud-friendly environment that makes it easier to support modern applications and integrate with cloud services. Although HCI is not in itself a cloud platform, it is often used as the foundation for building on-premises private and hybrid clouds.
- HCI can help an organization save money by consolidating workloads and using resources more efficiently. It can also help free up IT personnel for other efforts while reducing the need for specialists to maintain infrastructure. In addition, an HCI appliance that comes from a single vendor can help simplify operations, leading to further savings. Some IT teams can realize even greater savings by building their own systems, especially when using commodity hardware, although the do-it-yourself effort offsets part of those savings.
Although HCI offers a number of worthwhile benefits, it should not be adopted blindly. In fact, HCI comes with several significant challenges:
- With some HCI appliances, especially earlier models, compute and storage resources must be scaled together, often forcing organizations to over-provision one or the other.
- Organizations risk vendor lock-in, especially with preconfigured HCI appliances.
- HCI appliances typically pack a lot of hardware into a small space. Although this can reduce the system’s footprint, it can also lead to power density and cooling issues.
- With HCI, most workloads run in a virtual environment. Some systems might allow containers to run on bare metal, but you can’t install applications such as SQL Server directly on the physical hosts.
- Although HCI can simplify operations, it’s still a highly complex system, making it difficult to identify and resolve issues when they arise, increasing operational risks and costs.
- In any virtual environment, the possibility of resource contention exists, which can have a negative impact on SQL Server workload performance (see the monitoring sketch after this list).
- HCI can sometimes result in unexpected costs, such as software licensing or maintenance contracts. In addition, an HCI system requires a minimum number of nodes (often three) to ensure resiliency and availability, which sets a floor on the initial investment.
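On the contention point, one practical check is to look at the instance’s wait statistics. The following Python sketch queries sys.dm_os_wait_stats through pyodbc and flags a high signal-wait ratio, which suggests runnable tasks are queuing for CPU, a classic symptom of pressure in a shared virtual environment. The server name is a placeholder, and a production script would filter a much longer list of benign wait types:

```python
# Hedged sketch: spot possible CPU contention on a virtualized SQL Server
# instance by examining wait statistics. Requires the pyodbc package and
# an ODBC driver for SQL Server; the server name is a placeholder.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 18 for SQL Server};"
            "SERVER=hci-sql01;DATABASE=master;"  # hypothetical host name
            "Trusted_Connection=yes;TrustServerCertificate=yes")

QUERY = """
SELECT TOP (5) wait_type, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'BROKER_TO_FLUSH')  -- trim benign waits
ORDER BY wait_time_ms DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for wait_type, wait_ms, signal_ms in conn.execute(QUERY):
        # A high signal-to-total ratio means tasks sat in the runnable
        # queue waiting for CPU rather than waiting on a resource.
        ratio = signal_ms / wait_ms if wait_ms else 0
        flag = "  <- possible CPU pressure" if ratio > 0.25 else ""
        print(f"{wait_type:<30} total {wait_ms:>12} ms  signal {ratio:.0%}{flag}")
```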
Despite these challenges, HCI can still benefit IT teams looking for an easier way to deploy their SQL Server instances, but they should know what they’re getting into before making any decisions.
Choosing an HCI solution for SQL Server
IT teams are under greater pressure than ever to ensure their SQL Server instances have the infrastructure necessary to support 24/7 operations and ever-increasing amounts of data. They must keep their mission-critical applications running while dealing with the challenges of data center sprawl, infrastructure silos, and over-provisioned resources. At the same time, IT teams must contend with smaller budgets, demands for greater flexibility, and the need to consolidate components to reduce the data center footprint.
For many IT teams, HCI offers an infrastructure solution to help meet today’s data center challenges. But it’s not enough to be sold on the idea of HCI. Decision-makers must be able to choose the right HCI system for hosting their SQL Server instances, and for that, they need to take into account a wide range of considerations.
For example, they must decide whether to purchase an HCI appliance such as Dell EMC VxRail or HPE SimpliVity or purchase a software-only solution from a vendor such as Nutanix or VMware. A software-only solution provides more flexibility and can reduce costs, but it puts a greater demand on IT resources. Before choosing between the two options, decision-makers should conduct a thorough cost analysis to determine how much they can actually save.
Decision-makers should also determine which HCI solutions can deliver the performance necessary to support their SQL Server instances. For this, they must evaluate the individual HCI platforms and determine what hardware options are available. For example, Cisco HyperFlex offers all-flash NVMe storage, Dell EMC VxRail can be configured with Intel Optane SSDs, and Azure Stack HCI solutions, such as those available through Lenovo, support both NVMe storage and Remote Direct Memory Access (RDMA), which can improve throughput and performance.
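Before comparing platforms on raw specifications, it helps to know the IOPS target a SQL Server workload actually requires. The following sketch shows one rough way to derive such a target; every figure is a placeholder, and a real sizing exercise would start from baseline measurements of your own workload:

```python
# Back-of-the-envelope IOPS sizing for a SQL Server workload. All inputs
# are placeholders; measure your own baseline before sizing real hardware.

def required_iops(batch_requests_per_sec: float,
                  reads_per_batch: float,
                  writes_per_batch: float,
                  headroom: float = 0.3) -> float:
    """Peak IOPS target, padded for growth and maintenance jobs."""
    base = batch_requests_per_sec * (reads_per_batch + writes_per_batch)
    return base * (1 + headroom)

# 2,000 batches/sec averaging 4 physical reads and 1.5 writes each:
print(f"target: {required_iops(2_000, 4, 1.5):,.0f} IOPS")  # target: 14,300 IOPS
```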
Management is another important consideration when choosing an HCI solution. The system should be easy to procure, deploy, and maintain, and workloads such as SQL Server should be simple to implement and manage. An HCI solution should help streamline IT operations while providing complete visibility into the physical and virtual components. The platform should also support automation capabilities such as resource discovery and provisioning.
When reviewing HCI platforms, decision-makers should evaluate each system’s management and monitoring capabilities to ensure that IT will be able to easily maintain the environment. For instance, VxRail includes end-to-end lifecycle management that enables administrators to deploy and scale the system as SQL Server database applications evolve. And HyperFlex comes with full network fabric integration that lets administrators create QoS policies and manage vSwitch configurations that scale throughout the entire fabric, resulting in greater data reliability and faster database performance.
But performance and management are only some of the considerations that you should take into account when choosing an HCI solution. There are, in fact, many factors to consider, including the following six:
- An HCI system should be able to interoperate with other systems and services in your environment, including cloud platforms. Look for an HCI solution that’s built on standards-based technologies and that exposes APIs to support interoperability and automation.
- Although SQL Server includes many built-in security features, the infrastructure that hosts SQL Server must be just as secure. An HCI system should provide out-of-the-box protections while offering extensive visibility into all components. Some systems also include features such as microsegmentation and role-based access controls for further safeguarding the environment.
- You should have a thorough understanding of the hardware and software components that make up the HCI environment and how they can be deployed and configured. For example, it’s important to know hardware limitations such as maximum capacities and achievable IOPS, but you should also know which hypervisor the platform uses, whether it supports multiple hypervisors, and whether hypervisor licensing fees are included in the price. An HCI system should be able to support all your anticipated workloads, as well as the amounts and types of data you’ll be handling.
- Some vendors back their HCI systems with advanced analytics that can help in provisioning resources and troubleshooting systems, as well as predict potential problems and future resource requirements. When evaluating HCI solutions, be sure to learn what advanced analytics might be available and whether they’re included with the platform or considered an add-on.
- Although HCI solutions often provide multiple data protections, they’re not always part of the basic package and may come at additional cost. And some HCI systems include only minimal data protections. You should determine exactly which protections are included, what it will cost to add more, and whether you can manage those protections centrally along with the rest of the infrastructure.
- For any HCI solution that you’re considering, you should evaluate the true total cost of ownership (TCO) over the system’s lifetime, taking into account licensing and support, training and migration, and the staff time necessary to deploy the new platform (a simple calculation sketch follows this list). Many vendors provide reference architectures and best-practices guides for implementing their products, which can help you arrive at a realistic TCO.
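To make that last point concrete, the following sketch shows the general shape of a lifetime TCO comparison between a preconfigured appliance and a software-only build. Every figure is a placeholder; the value is in seeing which cost categories belong in the calculation, not in the numbers themselves:

```python
# Simple lifetime-TCO comparison sketch. All figures are placeholders.

def lifetime_tco(hardware: float, annual_licenses: float,
                 annual_support: float, migration_and_training: float,
                 deploy_hours: float, hourly_rate: float,
                 years: int = 5) -> float:
    """Total cost of ownership over the system's expected lifetime."""
    one_time = hardware + migration_and_training + deploy_hours * hourly_rate
    recurring = (annual_licenses + annual_support) * years
    return one_time + recurring

appliance = lifetime_tco(hardware=250_000, annual_licenses=40_000,
                         annual_support=25_000, migration_and_training=30_000,
                         deploy_hours=80, hourly_rate=90)
diy = lifetime_tco(hardware=160_000, annual_licenses=55_000,
                   annual_support=15_000, migration_and_training=30_000,
                   deploy_hours=400, hourly_rate=90)
print(f"appliance 5-year TCO:     ${appliance:,.0f}")  # $612,200
print(f"software-only 5-year TCO: ${diy:,.0f}")        # $576,000
```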
Decision-makers should not assume that an HCI system will automatically be a good fit for their SQL Server deployments just because it carries the HCI label. They must do their homework and learn as much as possible about each HCI system that they’re evaluating, taking into account which of them will meet their workload and performance requirements now and in the foreseeable future.
SQL Server and hyperconvergence
SQL Server can support a wide range of workloads, but it requires infrastructure that can deliver the I/O operations per second (IOPS) necessary to keep those workloads running. For many organizations, HCI could prove a useful alternative to more traditional infrastructure. Although HCI virtualizes compute and storage resources, today’s advanced technologies, such as SSDs and NVMe, make virtualizing SQL Server a more viable option than ever. At the same time, many HCI systems can support the use of hybrid storage configurations, which can help reduce overall infrastructure costs.
Under the right circumstances, HCI can provide IT teams with a number of important advantages over traditional infrastructure, particularly when it comes to simplifying their operations. But decision-makers should also keep in mind the challenges that come with HCI, weighing them carefully against the benefits. In the end, it will come down to an organization’s individual circumstances and its short- and long-term needs. Although HCI should not be adopted lightly, its value is substantial enough that decision-makers should at least give it careful consideration when looking for ways to deploy SQL Server.