Performance evaluation
We use the CloudSim toolkit to simulate the virtualized environment of cloud computing. CloudSim has been extended to support simulations of carbon- and energy-efficient VM placement. The extended tool models data-center characteristics including the carbon footprint rate, dynamic energy consumption, and PUE, and, most importantly, it is able to simulate different sorts of dynamic VM requests.
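For concreteness, the emission accounting this setup relies on can be summarized as: total carbon = IT energy × PUE × carbon footprint rate. A minimal sketch in Python (the function and parameter names are illustrative, not part of the CloudSim API):

```python
def carbon_emission_tons(it_energy_mwh: float, pue: float, carbon_rate: float) -> float:
    """Carbon emitted by a data center for a given amount of IT energy.

    it_energy_mwh -- dynamic energy drawn by the IT equipment (MWh)
    pue           -- power usage effectiveness (facility power / IT power)
    carbon_rate   -- site carbon footprint rate (tons CO2 per MWh)
    """
    facility_energy_mwh = it_energy_mwh * pue  # PUE scales IT energy up to facility energy
    return facility_energy_mwh * carbon_rate

# Example: 10 MWh of IT energy in DC2 (PUE 1.7, cluster rate 0.350 tons/MWh)
print(carbon_emission_tons(10.0, 1.7, 0.350))  # -> 5.95 tons of CO2
```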
To analyse the algorithm, we model an IaaS environment comprising four data centers, with 90 servers at each site. Each data center has a PUE value distinct from the others and contains two clusters that differ in their carbon footprint rates. The carbon footprint rates in Table 3.1 were obtained from secondary data published by the US Department of Energy; the rate for each site was calculated as the average carbon emission of the overall power-sector emissions in that region. This configuration is encoded in a short sketch after Table 3.1.
Data center site | PUE | Carbon footprint rate (Tons/MWh) |
DC1 - Oregon, USA | 1.56 | 0.124, 0.147 |
DC2 - California, USA | 1.7 | 0.350, 0.658 |
DC3 - Virginia, USA | 1.9 | 0.466, 0.782 |
DC4 - Dallas, USA | 2.1 | 0.678, 0.730 |
Table 3.1: PUE values and carbon footprint rates of the four data centers; the PUE values are obtained from the study by Greenberg et al.
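The configuration in Table 3.1 can be captured directly in code; the following sketch (the class and field names are ours, not CloudSim's) encodes each site's PUE and the carbon footprint rates of its two clusters:

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str                           # site label from Table 3.1
    pue: float                          # power usage effectiveness
    cluster_rates: tuple[float, float]  # carbon footprint rate per cluster (tons/MWh)

# The four sites modeled in this evaluation (values from Table 3.1)
DATA_CENTERS = [
    DataCenter("DC1 - Oregon, USA",     1.56, (0.124, 0.147)),
    DataCenter("DC2 - California, USA", 1.70, (0.350, 0.658)),
    DataCenter("DC3 - Virginia, USA",   1.90, (0.466, 0.782)),
    DataCenter("DC4 - Dallas, USA",     2.10, (0.678, 0.730)),
]
```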
Two power models are used across the five server platforms in order to capture hardware heterogeneity; a sketch of such a utilization-based power model follows Table 3.2.
Platform Type | Number of Cores | Core Speed (GHz) | Memory (GB) | Storage (GB) | Network Bandwidth (Mbps) | Bits | Power Model |
Platform1 | 2 | 2 | 16 | 2000 | 1000 | B32 | PowerModel1 |
Platform2 | 4 | 4 | 32 | 6000 | 1000 | B64 | PowerModel1 |
Platform3 | 8 | 4 | 32 | 7000 | 2000 | B64 | PowerModel2 |
Platform4 | 8 | 8 | 64 | 7000 | 4000 | B64 | PowerModel2 |
Platform5 | 8 | 16 | 128 | 9000 | 4000 | B64 | PowerModel2 |
Table 3.2: the five server platform types and their characteristics applied in this analysis.
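The section does not specify the parameters of PowerModel1 and PowerModel2. The sketch below assumes the common utilization-based linear form P(u) = P_idle + (P_max - P_idle) * u; the wattages are purely illustrative assumptions:

```python
class LinearPowerModel:
    """Utilization-based linear power model:
    P(u) = P_idle + (P_max - P_idle) * u, with u in [0, 1].
    """

    def __init__(self, idle_watts: float, max_watts: float):
        self.idle_watts = idle_watts
        self.max_watts = max_watts

    def power(self, utilization: float) -> float:
        if not 0.0 <= utilization <= 1.0:
            raise ValueError("utilization must be in [0, 1]")
        return self.idle_watts + (self.max_watts - self.idle_watts) * utilization

# Hypothetical parameterizations of the two models referenced in Table 3.2
POWER_MODEL_1 = LinearPowerModel(idle_watts=175, max_watts=250)
POWER_MODEL_2 = LinearPowerModel(idle_watts=93, max_watts=135)

print(POWER_MODEL_1.power(1.0))  # -> 250.0 W at full utilization
```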
VM requests are dispatched to servers according to the resources each VM requires, and every VM is assumed to operate at maximum utilization throughout its lifetime. The VM types and the probability that a client requests each type are listed in Table 3.3; a sketch of sampling VM types by these probabilities follows the table.
Category | VM Type | Number of Cores | Core Speed (GHz) | Memory (MB) | Storage (GB) | Network Bandwidth (Mbps) | Bits | Probability and User Type |
Standard Instances | M1Small | 1 | 1 | 1740 | 160 | 500 | B32 | 0.25-BT |
Standard Instances | M1Large | 2 | 4 | 7680 | 850 | 500 | B64 | 0.12-WR, 0.25-BT |
Standard Instances | M1XLarge | 4 | 8 | 15360 | 1690 | 1000 | B64 | 0.08-WR |
High Memory Instances | M2XLarge | 2 | 6.5 | 17510 | 420 | 1000 | B64 | 0.12-WR |
High Memory Instances | M22XLarge | 4 | 13 | 35020 | 850 | 1000 | B64 | 0.08-WR |
High CPU Instances | C1Medium | 2 | 5 | 1740 | 320 | 500 | B32 | 0.1-BT |
Table 3.3: the VM types and their corresponding request probabilities (WR = web request, BT = batch task).
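Request generation can then draw a VM type according to the Table 3.3 probabilities, which sum to 1.0 over all rows. A minimal sketch, with function and variable names of our own choosing:

```python
import random

# (vm_type, probability, user_type) rows taken from Table 3.3
VM_TYPES = [
    ("M1Small",   0.25, "BT"),
    ("M1Large",   0.12, "WR"),
    ("M1Large",   0.25, "BT"),
    ("M1XLarge",  0.08, "WR"),
    ("M2XLarge",  0.12, "WR"),
    ("M22XLarge", 0.08, "WR"),
    ("C1Medium",  0.10, "BT"),
]

def sample_vm_request(rng: random.Random):
    """Draw one (vm_type, user_type) pair weighted by the Table 3.3 probabilities."""
    choices, weights = zip(*[((t, u), p) for t, p, u in VM_TYPES])
    return rng.choices(choices, weights=weights, k=1)[0]

rng = random.Random(42)
print(sample_vm_request(rng))  # e.g. ('M1Large', 'BT')
```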
To generate a task, we require the arrival time and holding time of each VM request. To create the numerous requests in a task, we apply the Lublin-Feitelson workload model, which lets us set parameters controlling the number of requests, their arrival times, and their holding times. To generate VMs with longer holding times, we increase the first parameter of the model's gamma distribution while leaving the remaining parameters at their initial values. Web requests use the same arrival-time model as task requests, while their holding times are drawn from a hyper-gamma distribution with a mean of 73 and a variance of 165. The first and last 5% of the generated requests are omitted and treated as warm-up and cool-down periods, respectively. We use a varying number of requests over a 24-hour task. For efficiency and accurate readings, each experiment is repeated numerous times (20 in our case) and the mean of the runs is recorded.
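A sketch of the post-processing just described: trimming the 5% warm-up and cool-down windows and averaging over 20 repetitions. Since the exact hyper-gamma parameterization is not given, the holding-time sampler below moment-matches a single gamma distribution to the stated mean and variance as an admitted simplification:

```python
import random
import statistics

def web_holding_time(rng: random.Random, mean: float = 73.0, var: float = 165.0) -> float:
    """Draw a web-request holding time. The study uses a hyper-gamma
    (two-component gamma mixture); this sketch simplifies to a single
    gamma whose shape/scale are moment-matched to the stated mean and variance."""
    theta = var / mean   # scale: var = k * theta^2
    k = mean / theta     # shape: mean = k * theta
    return rng.gammavariate(k, theta)

def trim_warmup_cooldown(requests: list) -> list:
    """Discard the first and last 5% of requests (warm-up / cool-down periods)."""
    cut = int(0.05 * len(requests))
    return requests[cut:len(requests) - cut]

def run_experiments(run_once, repetitions: int = 20) -> float:
    """Repeat an experiment with different seeds and report the mean result."""
    return statistics.mean(run_once(seed) for seed in range(repetitions))
```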