Which of the following is a benefit of using virtualization for server consolidation?

Configuring Windows Server Hyper-V and Virtual Machines

Tony Piltzecker, Brien Posey, in The Best Damn Windows Server 2008 Book Period (Second Edition), 2008

Migrating from Physical to Virtual Machines

Server consolidation is the process of migrating network services and applications from multiple computers to a single computer. This can include migrating multiple physical computers to multiple virtual machines running on one host computer. You might consolidate computers for several reasons, such as minimizing power consumption, simplifying administration duties, or reducing overall cost. Consolidation can also increase hardware resource utilization.

The planned guest partition (virtual machine) used for migration should include a uniprocessor or multiprocessor configuration, a virtualized network card, and a virtualized hard disk. Hardware that is not supported includes parallel port dongles, almost all universal serial bus (USB) devices, and hardware-based authentication devices. The physical computer you are migrating must also contain more than 96 MB of RAM and should run the NTFS file system.
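As a quick illustration of these requirements, here is a minimal pre-flight check for a P2V migration candidate. The thresholds (more than 96 MB of RAM, NTFS) and the unsupported device types come from the text above; the function and field names are hypothetical.

```python
# Minimal P2V candidate check. Requirements (RAM > 96 MB, NTFS, no
# parallel dongles / USB devices / hardware auth) follow the text above;
# the data structure itself is an illustrative assumption.

UNSUPPORTED_DEVICES = {"parallel_port_dongle", "usb_device", "hardware_auth_token"}

def p2v_blocking_issues(ram_mb: int, file_system: str, devices: set[str]) -> list[str]:
    """Return a list of blocking issues; an empty list means the machine qualifies."""
    issues = []
    if ram_mb <= 96:
        issues.append(f"insufficient RAM: {ram_mb} MB (must exceed 96 MB)")
    if file_system.upper() != "NTFS":
        issues.append(f"unsupported file system: {file_system}")
    for dev in devices & UNSUPPORTED_DEVICES:
        issues.append(f"unsupported hardware: {dev}")
    return issues

print(p2v_blocking_issues(2048, "NTFS", {"usb_device"}))
# ['unsupported hardware: usb_device']
```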

Virtualization technology is often deployed to make outages, planned or unplanned, invisible to users. Hyper-V can help in the areas of access virtualization, application virtualization, processing virtualization, network virtualization, and storage virtualization, and it provides tools to manage the virtualized environment.


URL: //www.sciencedirect.com/science/article/pii/B9781597492737000100

Introduction to Windows Server 2008 R2

Dustin Hannifin, ... Joey Alpern, in Microsoft Windows Server 2008 R2, 2010

Server consolidation

Server consolidation is not a new IT concept: fewer servers mean lower hardware, software, and management costs. Windows Server 2008 R2 Hyper-V allows you to run multiple virtual servers simultaneously on one physical server. Virtualization technology itself is not new either, and several companies now offer hypervisors; however, Microsoft provides Hyper-V free of charge as part of the Windows Server 2008 R2 operating system. If your organization has not yet implemented server virtualization, you should definitely consider making a business case to do so during your Windows Server 2008 R2 deployment. The money your organization saves from server virtualization can easily add up to thousands of dollars per year.

Notes from the field

Microsoft virtualization ROI calculator

Microsoft has created a free tool to help you determine potential cost savings by deploying virtualization technologies. The Microsoft integrated virtualization Return on Investment (ROI) calculator will help you build a detailed analysis of how your infrastructure can benefit from virtualization. This tool is found online at //roianalyst.alinean.com/msft/AutoLogin.do?d=307025591178580657.
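To make the savings argument concrete, the sketch below computes a deliberately simplified consolidation estimate. It is not the model used by the Microsoft/Alinean ROI calculator referenced above; all figures are illustrative placeholders.

```python
# Simplified consolidation-savings estimate (NOT the Alinean ROI model).
# All rates and ratios below are illustrative assumptions.

def annual_savings(physical_servers: int, consolidation_ratio: int,
                   cost_per_server_per_year: float) -> float:
    """Estimate yearly savings from consolidating physical servers onto hosts."""
    hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
    retired = physical_servers - hosts_needed                   # servers taken out of service
    return retired * cost_per_server_per_year

# 40 servers consolidated 8:1 at $1,200/year each (power, space, maintenance):
print(f"${annual_savings(40, 8, 1200):,.0f} per year")  # $42,000 per year
```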

Notes from the field

Do not virtualize everything

Virtualization is definitely the hot topic in the IT industry these days. However, not all servers and systems are good virtualization candidates. Typically, servers that already carry a large workload and consume most of their hardware's resources are not good candidates for virtualization. Additionally, Microsoft does not currently support virtualization for some products that require audio/video communications, such as Exchange Unified Messaging servers and Office Communications Server 2007 servers performing A/V functions.


URL: //www.sciencedirect.com/science/article/pii/B9781597495783000013

Sybase Migrations from a Systems Integrator Perspective, and Case Study

Tom Laszewski, Prakash Nauduri, in Migrating to the Cloud, 2012

Server Consolidation

To ensure an ROI for migration projects, most customers also evaluate and include server consolidation as part of the overall project. Server consolidation strategies help reduce application and database licensing and support costs. Newer hardware is faster, less expensive, and more reliable than before; it also consumes less power, which supports corporate green IT initiatives.

Five main areas should be evaluated:

Hardware server consolidation

Storage and disk system consolidation

Infrastructure consolidation including management system and backup systems

Database instance consolidation

Operational management consolidation (relocation of distributed operational and support personnel to a central location while maintaining the geographic distribution of the server estate using automated tools and technologies)

However, server consolidation should not look only at reducing the number of servers, but also at simplifying and optimizing the overall existing IT infrastructure. In addition, centralization and physical consolidation of hardware devices, data integration, and application integration should be key components of an overall server consolidation strategy. The following are some areas for consideration:

Usage patterns for the application: This includes OLTP, OLAP, mixed use, batch reporting, ad hoc reporting, data mining, advanced analytics, and so on. Can these workloads be combined on one database server?

Middle-tier application servers: Application server consolidation may also be performed along with database consolidation.

CPU utilization: Can CPU resources be combined or shared across the database server engines? How will performance be affected? How will runaway queries be contained?

Network and bandwidth requirements: Will database and application server consolidation increase your network bandwidth requirements? (A rough feasibility check over these considerations is sketched below.)
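The sketch below is one way to reason about the considerations above: sum peak CPU and network demand of the candidate workloads and compare against host headroom. The workload figures, the 80% CPU ceiling, and the NIC capacity are illustrative assumptions, and summing peaks is deliberately conservative since peaks rarely coincide.

```python
# Rough feasibility check for combining workloads on one database server.
# Numbers and thresholds are illustrative assumptions.

workloads = [
    {"name": "oltp",      "peak_cpu_pct": 35, "peak_net_mbps": 200},
    {"name": "batch_rpt", "peak_cpu_pct": 25, "peak_net_mbps": 150},
    {"name": "adhoc_rpt", "peak_cpu_pct": 15, "peak_net_mbps": 100},
]

CPU_CEILING_PCT = 80      # leave headroom for runaway queries
NET_CAPACITY_MBPS = 1000  # host NIC capacity

total_cpu = sum(w["peak_cpu_pct"] for w in workloads)
total_net = sum(w["peak_net_mbps"] for w in workloads)

print(f"combined peak CPU: {total_cpu}% (ceiling {CPU_CEILING_PCT}%)")
print(f"combined peak network: {total_net} Mbps of {NET_CAPACITY_MBPS} Mbps")
print("feasible" if total_cpu <= CPU_CEILING_PCT and total_net <= NET_CAPACITY_MBPS
      else "not feasible on one host")
```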

mLogica, in partnership with Oracle, has engineered the Data Consolidation Server (DCS) hardware and software appliance to address these very issues. DCS includes Oracle 11g and Oracle's leading TimesTen In-Memory Database (IMDB), augmented by server management software from mLogica. Figure 11.3 shows how all the pieces of this engineered data consolidation server work together. The objective is to provide a preconfigured, pretested, fine-tuned, and engineered hardware and software platform for high performance. IMDB, as the primary data source or as a cache option to Oracle 11g, can provide ultra-fast, subsecond responses for client applications.

FIGURE 11.3. Database Consolidation Server with IMDB

This solution, along with Oracle Exadata, offers Sybase customers two great options when consolidating Sybase databases to an Oracle platform.


URL: //www.sciencedirect.com/science/article/pii/B9781597496476000119

Oracle Database Cloud Infrastructure Planning and Implementation

Tom Laszewski, Prakash Nauduri, in Migrating to the Cloud, 2012

Workload Consolidation and Database Migrations to Oracle (PaaS or DBaaS)

Using virtualization as a way to offer PaaS provides excellent isolation between individual deployments as well as easier deployment, monitoring, and chargeback capabilities. But from the perspective of a cloud provider (public or private), this method of offering PaaS does little to reduce capital and operational expenditures or to improve efficiency, because the provider must manage and maintain a wide range of software products. Public cloud providers (both IaaS and PaaS) have little choice about supporting a wide range of products, since their service offerings are tied to customer demand. But when these providers consider PaaS offerings around an Oracle database (DBaaS) with few or no isolation concerns, or for a limited group of consumers (private clouds), there are opportunities to improve resource utilization and reduce capital and operational expenditures by consolidating workloads across operating systems and databases. In addition, consolidation can improve resource sharing at all layers (I/O, memory, CPU, and network) in a centralized environment.

Database migration to Oracle presents a unique opportunity for workload consolidation as well as server consolidation and virtualization. As we discussed in Chapter 3, the Oracle database differs from other databases in terms of schema layouts as well as support for multiple databases under a single database engine. Furthermore, with many databases it is common practice to deploy and maintain multiple database engines and databases for reporting and other requirements, using replication technologies to reduce the load on transactional systems. Therefore, when creating Oracle databases and schemas, organizations have two choices:

Database/instance-level consolidation: Create an Oracle database instance for each database engine in the source environment, and then create schemas in Oracle to map each database and schema that exists under a database engine at the source. This approach can cause conflicts, because the same schema name might be in use under different databases at the source, which results in a duplicate user/schema error in the Oracle environment. In such cases, one option is to assign a new name to one of the conflicting schemas. Another option is to create an additional Oracle database instance, create the conflicting schemas there, and continue creating unique schemas under the first instance. With this approach, the number of Oracle database instances can be reduced to just a few. Figure 10.1 illustrates a typical mapping between non-Oracle databases and schemas and Oracle database instances and schemas.

FIGURE 10.1. Database/Instance Mappings between Non-Oracle and Oracle Databases

When migrating databases to Oracle in the manner shown in Figure 10.1, it is possible to end up creating many Oracle database instances, one per source database engine.
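A minimal sketch of the mapping logic just described follows: map each (database, schema) pair from the source engine to an Oracle schema, renaming when the same schema name appears under different source databases. The renaming convention (qualifying with the database name) is one option among several; all names are illustrative.

```python
# Map source (database, schema) pairs to Oracle schema names, resolving
# duplicate-schema conflicts by qualifying with the source database name.

def map_schemas(source: dict[str, list[str]]) -> dict[tuple[str, str], str]:
    """source: {database_name: [schema, ...]} -> {(db, schema): oracle_schema}"""
    mapping, taken = {}, set()
    for db, schemas in source.items():
        for schema in schemas:
            target = schema
            if target in taken:            # conflict: same name under another DB
                target = f"{db}_{schema}"  # one option: assign a new, qualified name
            taken.add(target)
            mapping[(db, schema)] = target
    return mapping

src = {"sales": ["dbo", "audit"], "hr": ["dbo"]}
print(map_schemas(src))
# {('sales', 'dbo'): 'dbo', ('sales', 'audit'): 'audit', ('hr', 'dbo'): 'hr_dbo'}
```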

WARNING

Creating too many Oracle database instances will result in increased consumption of such resources as CPU/memory and storage on the servers, because each database instance will have its own set of files, background processes, shared memory allocations, and so on.

Oracle database instances created as a result of migration (as illustrated in Figure 10.1) can be deployed on one large symmetric multiprocessing (SMP) server, or on separate servers with database links created so that they can communicate seamlessly. All such instances can also be configured in one Oracle Real Application Clusters (RAC) environment, which allows users to run multiple database instances. Figure 10.2 illustrates the deployment of migrated Oracle databases in an Oracle RAC environment.

FIGURE 10.2. Oracle RAC Environment Supporting Multiple Oracle Databases

Schema-level consolidation: Instead of running many databases and instances in Oracle for transactional, reporting, and backup purposes, it is best to map databases and schemas from the source database into schemas in a single Oracle database. For workload separation and fine-grained management, an Oracle RAC database can be used so that different workloads are isolated to different nodes based on their profiles, such as online transaction processing (OLTP), data warehousing (DW), and so on. It can also be configured so that higher-priority workloads run on additional nodes during peak hours or to handle unexpected spikes. For simple database migrations (i.e., where only the database is being migrated to Oracle), it is best to map each database and schema from the source to a schema in Oracle. Figure 10.3 shows a simple mapping of multiple databases and schemas to an Oracle RAC database.

FIGURE 10.3. Mapping of Multiple Databases and Schemas to an Oracle RAC Database in a Consolidated Environment

Typically in an Oracle RAC environment, only one database is created, which is then accessed from multiple instances running on nodes in the cluster. Consolidating all the schemas from the source into one RAC database can improve server resource utilization.

NOTE

In Figures 10.1, 10.2, and 10.3, the boxes denoting Schemas A, B, and C do not indicate that these schemas are accessible from only one node in an Oracle RAC cluster. All schemas created in an Oracle RAC database are accessible from all RAC cluster nodes.

Creating a couple of database instances under an Oracle RAC cluster can provide isolation and management flexibility (e.g., rolling patch upgrades for database instances). Having separate database instances for applications with different priorities—for example, a mission-critical instance and a non-mission-critical instance—can improve environment availability and manageability. However, when the Oracle RAC cluster needs a major upgrade (e.g., due to a major version change), the whole cluster, including the databases, will be unavailable. Major upgrades and changes to the database software or to the databases in general are treated as planned downtime; to provide continuity in IT operations, end users are redirected to a standby database in a remote data center.

In addition to understanding the options for workload consolidation, it is also important to understand the process for implementing workload consolidation, outlined as follows:

1. Standardize on the operating systems and databases deployed in the data center. Reducing the types of software (versions and vendors) from half a dozen or more, which is typical in large organizations, to just a couple can yield substantial savings in capital and operational expenditures. It can also reduce the manpower required to maintain all the different databases, operating systems, and so on, since each requires different skill sets, even with the cross-training that most large organizations provide to their IT staff.

2. Consolidate disparate database instances into a few, using technologies such as Oracle RAC. Reducing the number of deployed database instances can reduce management costs, improve resource utilization, and reduce data duplication. It may also help you avoid costly replication technologies used to maintain separate databases for reporting, data warehousing, and so on. Instead of creating a new database instance for every developer or tester request for an Oracle environment, a new schema can be created in an existing database deployed in a server pool in the development and testing environment. By following this practice, the organization avoids installing many copies of the Oracle software on the same server and starting all the associated background processes. Of course, new database instances can still be created when isolation from other users and workloads on the same server is necessary, or when the instance must be deployed on another server.

3. Use Oracle database features such as instance caging and Oracle Database Resource Manager to restrict the amount of server resources used by multiple Oracle database instances, schemas, and users. This allows multiple applications and workloads to coexist on the same servers.

4. If it is important for the organization to reduce the manual effort involved in provisioning, deploy a self-service portal (a Web application) so that users can provision database environments themselves based on resource availability, priorities, and configuration requirements.

5. Optionally, if the organization wants to employ a chargeback mechanism for usage of servers, storage, and databases from a central pool, sufficient data needs to be gathered from the IT infrastructure and processed into chargeback reports. The type of data to gather (server, OS, database, storage, etc.) depends on the level of service granularity required. Since we are discussing DBaaS here, it is important that all data pertaining to CPU, memory, I/O, and database space consumption be gathered for chargeback, as in the sketch below.

As organizations move to consolidate various workloads into a centralized environment, concerns regarding security, availability, and reliability will sometimes come to the fore. When a given workload cannot tolerate downtime caused by databases and operating systems being brought down for maintenance due to issues with other workloads, it is recommended that these important workloads be assigned to a separate environment where high-availability and reliability measures such as Oracle Data Guard, Oracle GoldenGate, and Oracle RAC have been implemented. Workloads can be grouped separately based on their priorities, type of usage, performance, and high-availability requirements. There will certainly be exceptions to the established policies.


URL: //www.sciencedirect.com/science/article/pii/B9781597496476000107

Virtualization Challenges

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Networking

Virtual environments should be included in the network topology. Because VMs enable server consolidation, the principles of trusted zones should be applied just as they are in the existing security infrastructure. In Chapter 8, “Virtual Environments and Compliance,” we looked at implementing virtualization in a payment card industry (PCI) environment, where many areas require segregation and protection of data. In any physical environment, there are trust zones. According to a VMware white paper titled “Network Segmentation in Virtualized Environments,” a trust zone is loosely defined as a network segment within which data flows relatively freely, whereas data flowing in and out of the trust zone is subject to stronger restrictions. Examples of trust zones include the following:

Demilitarized zones (DMZs)

PCI cardholder data environment

Site-specific zones such as segmentation according to department or function

Application-defined zones such as the three tiers of a Web application

In a virtual environment, trust zones are often crossed because one server hosts many VMs, and an administrator may simply look for a server that has space to host the VM, thereby putting it in a vulnerable position. There is also a chance that the VM's virtual network interface card (NIC) is accidentally placed in the wrong trust zone. As more organizations move to virtualization and consolidate physical servers, the virtual aspect of security needs to be addressed: VMs need to be secured in the same manner as physical machines. The network topology should include mapping out which virtual servers will reside on which physical servers and then establishing the level of trust required for each system. In addition, administrators should be trained in how to configure, secure, and place VMs in the proper trust zone.
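One way to enforce this kind of mapping is a placement check like the sketch below: each host is tagged with the trust zones it may carry, and a VM may only be placed where its zone is allowed. Host names, zone labels, and the data model are illustrative assumptions, not part of any particular product.

```python
# Toy trust-zone placement check: a VM may only land on a host whose
# virtual network is approved for the VM's zone. Names are illustrative.

host_zones = {
    "esx01": {"dmz"},
    "esx02": {"pci", "internal"},
}
vm_zone = {"web01": "dmz", "carddata01": "pci"}

def can_place(vm: str, host: str) -> bool:
    """True if the host is approved to carry the VM's trust zone."""
    return vm_zone[vm] in host_zones.get(host, set())

print(can_place("web01", "esx01"))       # True  - DMZ VM on a DMZ host
print(can_place("carddata01", "esx01"))  # False - PCI VM must not land in the DMZ
```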


URL: //www.sciencedirect.com/science/article/pii/B9781597495578000096

Application Management in Virtualized Systems

Rick Sturm, ... Julie Craig, in Application Performance Management (APM) in the Digital Enterprise, 2017

Server Virtualization

The most common form of virtualization is server virtualization. Now considered a mainstream technology, server virtualization benefits organizations by enabling server consolidation: a single physical server can support multiple VMs, which in turn support applications that would normally require dedicated servers. In this way, resources are shared on a single server, reducing unnecessary server hardware costs. It is no wonder that recent estimates show most organizations have virtualized over half of their production workloads, in part to facilitate the mobility of applications and application systems. Fig. 5.2 shows how applications and their underlying operating systems in a virtualized environment are completely independent of the physical host machine. A layer of specialized software known as the virtual machine monitor (VMM), or hypervisor, creates a separation layer between the applications and the underlying hardware. The hypervisor provides a software abstraction layer that removes the applications' direct reliance on specific hardware capabilities; the guest communicates instead with the generalized virtual hardware interface the hypervisor presents. This allows applications to run on virtually any hardware system that can run the same hypervisor.

Figure 5.2. Comparison of traditional versus virtualized architectures.

A key benefit on the server side is that new virtual servers can be activated in hours, if not minutes, to meet the need for new applications or for existing applications that require additional capacity. Prior to virtualization, if a new or existing application needed more capacity, the only answer was to order new hardware and then physically install, test, and provision it. In some cases, this could take weeks or months. In a traditional IT architecture, it is not uncommon for an organization to be faced with the need to upgrade power requirements or build a new data center to meet its requirements. Neither option is easy, efficient, or inexpensive. With virtualized servers, these extreme measures are no longer necessary.

Applications that rely on specific hardware functions are less mobile than those without hardware-specific dependencies, as are applications that need direct hardware access for performance reasons. Virtualization can run most applications with little performance degradation, as if they were running natively on a server; even so, it is best to avoid deploying applications with hardware-specific dependencies whenever possible.

Server and desktop virtualization also create new ways to package, deploy, and manage applications. Single- and multitier applications can be installed on a system along with all of their dependent libraries and services. A copy of that virtual disk image can then be saved as a file, along with metadata about the image, and reused or cloned for future deployments.

Another advantage of virtualized servers is the ability to copy the disk image of a system. This allows application managers to make a replica of a system, with all of its operating system and application components, that can be deployed whenever another VM instance is needed. These images come in various formats, depending on the platform: VMware uses the virtual machine disk (VMDK) format, Microsoft and others use the virtual hard disk (VHD) format, and Amazon's cloud uses the Amazon Machine Image (AMI).

A packaging format for these images, called the Open Virtualization Format (OVF), was created to provide an industry standard for linking to the VM image along with metadata about the VM(s). For example, VMware packages its metadata in a VMX file that contains the description and resource requirements for the disk image to be loaded and booted. For multitier applications, the metadata may include information such as the sequence in which the machines should be started. Although these image formats differ, several tools allow you to easily convert one format to another; and although the conversion is not always perfect, it's close enough to get you going! (OVF is covered in greater detail in Chapter 16.)

Using metadata along with the disk images allows much quicker installation of new applications: disk images are distributed without running the install process, and applications are often up and running in minutes instead of hours or days. Of course, a certain amount of configuration and customization may still be needed before the application or application system is fully functional, normally via user prompts and configuration wizards. Many products today are delivered as VM images. This process also ensures that the necessary software dependencies are already installed at the appropriate level for the distributed application.
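To illustrate the start-sequence metadata mentioned above, here is a toy deployment loop that boots the VMs of a multitier package in the order the metadata specifies. The metadata layout and the start_vm() stand-in are hypothetical; real OVF/VMX metadata is considerably richer.

```python
# Boot a multitier application's VMs in the sequence given by package
# metadata. The metadata schema and start_vm() are illustrative stand-ins.

package_metadata = {
    "vms": [
        {"image": "db.vmdk",  "start_order": 1},   # database tier first
        {"image": "app.vmdk", "start_order": 2},   # then application tier
        {"image": "web.vmdk", "start_order": 3},   # web tier last
    ]
}

def start_vm(image: str) -> None:
    print(f"booting {image} ...")  # placeholder for a hypervisor API call

for vm in sorted(package_metadata["vms"], key=lambda v: v["start_order"]):
    start_vm(vm["image"])
```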

Virtualized server images are also used by cloud service providers to offer machine images or templates. These templates can provide prepackaged applications and services such as a web server, database server, or a load balancer. In this way, cloud service providers for either public or private clouds can offer their customers a comprehensive menu of preinstalled, packaged application services. Customers can also be assured that all the required components and dependencies were installed in the image. Templates also allow for version control of the applications and services: when newer versions of the software become available that are safer, faster, and provide the latest functions, templates make it possible to roll them out in a controlled way and to retire end-of-life versions of the software.


URL: //www.sciencedirect.com/science/article/pii/B978012804018800005X

Extended UTM Functionality

Kenneth Tam, ... Josh More, in UTM Security with Fortinet, 2013

Introduction

WAN optimization (WanOPT) should be a familiar concept to anyone who has set their Internet browser to use a caching proxy when “surfing the web.” This web caching proxy would typically also cache frequently used web page objects and other files, thereby saving bandwidth and appearing to speed up loading when a user visits a page that a previous user has also viewed.

WanOPT also connotes “getting more out of that Internet pipe,” allowing the same bandwidth savings and increased efficiency of caching, with two additional benefits: LAN-like behavior over the WAN and justification for server consolidation. WanOPT delivers these benefits by employing one or more of the following techniques:

Duplication reduction.

Bandwidth management.

Caching.

While the technical details of these techniques are beyond the scope of this book, this chapter focuses on the implementation and usage of the FortiGate WanOPT technologies.

Duplication reduction attempts to avoid resending the same data over a connection by sending reference tags that represent data block patterns, allowing the other end of a WAN connection to reconstruct the original packet. This process requires a “data dictionary” to be compiled, which can require significant training time and numerous data streams. Although similar to caching, duplication reduction works at the byte level and is application independent.
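Here is duplication reduction in miniature, to show the mechanism rather than any vendor's implementation: split a stream into fixed-size blocks, send each unique block once, and thereafter send only a short reference tag (here, a truncated SHA-256 digest) for blocks both ends have already seen. Real systems use smarter, content-defined chunking; the block size and tag length below are illustrative.

```python
# Minimal byte-level duplication reduction: first sighting of a block sends
# the raw bytes; repeats send only a short reference tag.

import hashlib

BLOCK = 64  # bytes per block; real systems vary

def encode(data: bytes, dictionary: dict[bytes, bytes]) -> list:
    out = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        tag = hashlib.sha256(block).digest()[:8]
        if tag in dictionary:
            out.append(("ref", tag))        # already in the shared dictionary
        else:
            dictionary[tag] = block
            out.append(("raw", block))      # first sighting: send the bytes
    return out

shared_dict: dict[bytes, bytes] = {}
payload = b"A" * 64 + b"B" * 64 + b"A" * 64  # third block repeats the first
wire = encode(payload, shared_dict)
print([kind for kind, _ in wire])            # ['raw', 'raw', 'ref']
```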

Bandwidth management directly improves the efficiency and usage of a path. A few commonly used methods are:

Compression: For certain known data types, data compression can be applied to greatly reduce the size of a data block. However, be aware that compression algorithms can be processor intensive and will incur latency at both ends of the WanOPT path.

Latency optimization: For certain protocols, the overhead of traffic passing over the WanOPT can be reduced by modifying aspects like window size and selective acknowledgements, or by combining multiple individual requests into a single message. This functions largely as client-side protocol spoofing or server-side reply buffering.

Forward error correction: Reduce retransmissions across a link through error-correcting codes.

Traffic shaping: Prioritize packets for latency-sensitive applications and apply bandwidth limiting to large data transfers.

Caching, particularly for web traffic, simply means answering client-side requests using local versions of webpage objects such as images, icons, and style sheet items. These items are stored from previous requests so WAN bandwidth does not need to be consumed resending data that the FortiGate has already received. They are usually set to expire automatically and to be prefetched when the caching system predicts that a user will need them.
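A toy object cache with expiry captures the behavior described above: serve repeated requests locally until the entry's time-to-live lapses, and only then go upstream. The TTL, URL, and fetch function are illustrative; this is not the FortiGate implementation.

```python
# Toy web-object cache with time-based expiry. TTL and fetch are assumptions.

import time

class ObjectCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self.store: dict[str, tuple[float, bytes]] = {}

    def get(self, url: str, fetch) -> bytes:
        entry = self.store.get(url)
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]                    # cache hit: no WAN traffic
        body = fetch(url)                      # miss or expired: go upstream
        self.store[url] = (time.monotonic(), body)
        return body

cache = ObjectCache(ttl_seconds=60)
fetches = []
fake_fetch = lambda url: fetches.append(url) or b"<png bytes>"
cache.get("http://example.com/logo.png", fake_fetch)
cache.get("http://example.com/logo.png", fake_fetch)  # served from cache
print(len(fetches))  # 1 upstream fetch for two requests
```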


URL: //www.sciencedirect.com/science/article/pii/B9781597497473000077

Virtualization

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

3.5.1 Advantages of virtualization

Managed execution and isolation are perhaps the most important advantages of virtualization. For techniques supporting the creation of virtualized execution environments, these two characteristics allow building secure and controllable computing environments. A virtual execution environment can be configured as a sandbox, preventing any harmful operation from crossing the borders of the virtual host. Moreover, allocation of resources and their partitioning among different guests is simplified, since the virtual host is controlled by a program. This enables fine-tuning of resources, which is very important in a server consolidation scenario and is also a requirement for effective quality of service.

Portability is another advantage of virtualization, especially for execution virtualization techniques. Virtual machine instances are normally represented by one or more files, which are far easier to transport than physical systems. They also tend to be self-contained, having no dependencies other than the virtual machine manager. Portability and self-containment simplify administration. Java programs are “compiled once, run everywhere” and require only that a Java virtual machine be installed on the host. The same applies to hardware-level virtualization: it is possible to build our own operating environment within a virtual machine instance and bring it with us wherever we go, as though we had our own laptop. This concept is also an enabler for migration techniques in a server consolidation scenario.

Portability and self-containment also help reduce maintenance costs, since the number of hosts is expected to be lower than the number of virtual machine instances, and there are correspondingly fewer virtual machine managers to maintain. Because the guest program executes in a virtual environment, there is also very limited opportunity for it to damage the underlying hardware.

Finally, virtualization makes it possible to use resources more efficiently. Multiple systems can securely coexist and share the resources of the underlying host without interfering with each other. This is a prerequisite for server consolidation, which allows the number of active physical resources to be adjusted dynamically according to the current system load, creating the opportunity to save energy and reduce environmental impact.
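The dynamic adjustment just described is, at its core, a packing problem: place the current VM loads onto as few hosts as possible so that idle hosts can be powered down. The sketch below uses first-fit decreasing as one simple heuristic; the capacities and load figures are illustrative.

```python
# First-fit-decreasing packing of VM loads onto hosts, so unused hosts can
# be powered down at low load. Loads are fractions of one host's capacity.

def consolidate(vm_loads: list[float], host_capacity: float) -> list[list[float]]:
    hosts: list[list[float]] = []
    for load in sorted(vm_loads, reverse=True):  # biggest VMs placed first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no existing host fits: power one on
    return hosts

packing = consolidate([0.5, 0.2, 0.4, 0.1, 0.3], host_capacity=1.0)
print(len(packing), packing)  # 2 active hosts instead of 5
```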


URL: //www.sciencedirect.com/science/article/pii/B9780124114548000036

Cloud Computing Data Center Networking

Carolyn J. Sher DeCusatis, Aparicio Carranza, in Handbook of Fiber Optic Data Communication (Fourth Edition), 2013

15.5.1.1 Server virtualization

There is a significant advantage to virtualizing servers: they are often underutilized, and equipment upgrades typically make this problem worse, as newer servers tend to have improved performance [13]. Therefore, several virtual servers may be hosted on a single physical server (which is called server consolidation), and these virtual servers may move between physical servers as needed to manage data center resources appropriately. While this helps control costs by using fewer servers, it also reduces power consumption, heating and cooling, and space. The full advantages of virtual machine mobility include the following [43]:

Data center maintenance without downtime

Disaster avoidance and recovery

Data center migration or consolidation

Data center expansion

Workload balancing across multiple sites

However, virtual servers require a hypervisor or virtual machine monitor (VMM) to keep track of virtual machine identities, local policies, and security, which is a complex task [43]. Integrating the hypervisor with existing network management software is problematic and requires changes to the switches [44,45].

Virtual machines migrate according to a number of heuristics. Events that trigger a virtual machine migration include the following [46]; a toy version of these trigger checks is sketched after the list:

Periodic time-based balancing

Detection of a hot spot

Excess spare capacity

Load imbalance

Addition/removal of a virtual machine or physical machine
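Two of the triggers just listed, hot-spot detection and excess spare capacity, can be sketched as simple threshold checks. The utilization figures and thresholds below are illustrative; real schedulers weigh many more signals, including the periodic and topology-change triggers above.

```python
# Toy migration triggers: a host over the hot threshold becomes a migration
# source; a host under the cold threshold is a candidate target.

HOT_THRESHOLD = 0.85   # detection of a hot spot
COLD_THRESHOLD = 0.30  # excess spare capacity

host_cpu = {"hostA": 0.92, "hostB": 0.25, "hostC": 0.55}

sources = [h for h, u in host_cpu.items() if u > HOT_THRESHOLD]
targets = [h for h, u in host_cpu.items() if u < COLD_THRESHOLD]

for src in sources:
    if targets:
        print(f"migrate a VM from {src} to {targets[0]}")  # rebalance load
# migrate a VM from hostA to hostB
```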

Hypervisors, virtualization, and networking are discussed in more detail in another chapter of this book.


URL: //www.sciencedirect.com/science/article/pii/B9780124016736000155

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Benefits of Virtualization

There are many benefits to virtualization. As mentioned earlier, with power becoming more expensive and a trend toward going green, virtualization offers cost benefits by decreasing the number of physical machines required within an environment. Virtualization offers additional benefits as well, such as more effective disaster recovery and better resource management. In “Rise of the Virtual Machine,” Orakwue (2009) describes further benefits, including testing, business continuity, and server consolidation. Several of those benefits are discussed here as well.

The cost and complexity of disaster recovery and business continuity management are greatly reduced by virtualization technology. Because operating systems can be encapsulated together with applications and data, they are much easier to move and bring online for immediate access. VMs consist entirely of software, so transporting one to an offsite location is as simple as transmitting a data file. This capability alone can reduce downtime in the event of system failure. With virtualization, organizations can fully mirror and back up primary systems without duplicating the hardware for those systems. Conversely, hosts running the same platform can access saved files of configured operating systems from a repository.

Virtualization management tools help manage server resources effectively. Because the machine hardware and OS are independent, an OS can readily be moved to run on another server. This comes in handy when server space gets low or memory capacity is reduced. Additionally, servers can readily be stood up and taken down in seasonal businesses such as tax return preparation: the need for more physical resources during times of high usage is easily accommodated with virtualization.

Virtualization is an ideal environment for testing applications. Multiple OS configurations can easily be stored, downloaded to build, verify, and validate software applications, and just as easily deleted when testing is completed. This type of environment can also be used to observe malware behavior. Additional benefits of virtualization include server consolidation; for example, organizations can run as many as 15 applications in a virtual environment (VE) on a single server. Virtualization technology can be used in technical support services to walk customers and clients through support issues in the environment they are experiencing. It can also be used to isolate business applications from portable devices that workers bring into the workplace. Most importantly, it can be used by forensic investigators to view the environment as the suspect used it.


URL: //www.sciencedirect.com/science/article/pii/B9781597495578000011

What are the benefits of using server virtualization?

Virtualization is not just an IT trend. The top five business benefits of server virtualization are:

Reduced hardware costs

Faster server provisioning and deployment

Greatly improved disaster recovery

Increased productivity

Energy cost savings

What are some of the benefits of virtual machine consolidation?

By collapsing physical servers into virtual servers and reducing the number of physical machines, your company will reap tremendous savings in power and cooling costs. Additionally, you will be able to reduce your IT footprint, including UPS costs, network switch costs, rack space, and floor space.


What are the main benefits of virtualization?

Some of the main benefits of virtualization include:

Server consolidation

Better uptime

Simpler server setup

Streamlined quality-assurance testing

Simplified desktop management

Improved user mobility
