Performance Optimization
Revision as of 13:43, 1 March 2020
After the initial creation of your Virtual Machine there are a number of performance tweaks you can make to your Guest's .XML file and/or the Host system to greatly increase the Guest's performance.
hugepages
Hugepages are a kernel feature that lets the system use larger pages when reading or writing memory (RAM). When this is enabled and the Guest is configured to use them, performance can be greatly increased. How to enable Hugepages depends on your GNU/Linux distribution.
Note: When hugepages are configured this portion of memory is taken away from the Host. This means the Host will no longer be able to use it. Keep this in mind.
Debian (Ubuntu/Mint/Lubuntu/PopOS/etc)
First check whether Linux is already using Hugepages with: cat /proc/meminfo | grep Huge
If the output resembles the following:
AnonHugePages: 2048 kB
ShmemHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
then Hugepages aren't enabled.
To enable Hugepages, first check /etc/sysctl.conf for the following entries:
vm.nr_hugepages=
vm.hugetlb_shm_group=
If they don't exist they can be appended to the end of the file.
The general rule of thumb is 1 Hugepage for every 2 MB of RAM to be assigned to the VM:
vm.nr_hugepages=8192
vm.hugetlb_shm_group=48
The above example will set aside approximately 16 GB (8192 pages × 2 MB). After saving the changes reboot the system.
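The rule of thumb above can be turned into a quick calculation; a minimal shell sketch (the variable names are illustrative, and the common 2048 kB hugepage size reported by /proc/meminfo is assumed):

```shell
# Derive vm.nr_hugepages from the desired guest RAM size.
vm_ram_gib=16                  # example guest RAM, in GiB
hugepage_kib=2048              # Hugepagesize from /proc/meminfo (2 MiB pages)
pages=$(( vm_ram_gib * 1024 * 1024 / hugepage_kib ))
echo "vm.nr_hugepages=$pages"  # -> vm.nr_hugepages=8192
```

Check the Hugepagesize line in /proc/meminfo first; on systems configured for 1 GiB pages the divisor changes accordingly.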
Now to verify the changes rerun: cat /proc/meminfo | grep Huge
The output should resemble the following:
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 8192
HugePages_Free: 8192
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 16777216 kB
Hugepages are now enabled.
Red Hat also recommends disabling the older transparent hugepages. This can be done as root with:
echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag
echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
These take effect immediately but do not persist across reboots, so they must be reapplied at boot (e.g. via the kernel command line).
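The echo commands above do not survive a reboot on their own. One way to persist the setting, a sketch assuming Debian's GRUB configuration lives at /etc/default/grub:

```shell
# In /etc/default/grub, append transparent_hugepage=never to the kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash transparent_hugepage=never"
# Then regenerate the GRUB configuration and reboot:
sudo update-grub
```

After the reboot, cat /sys/kernel/mm/transparent_hugepage/enabled should show [never].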
Assigning Hugepages to VM
To make the VM use Hugepages, edit the VM's .XML file and add <memoryBacking><hugepages/></memoryBacking> to the memory section:
...
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>
<memoryBacking>
<hugepages/>
</memoryBacking>
...
The VM will now use Hugepages.
hyperv
Hyperv has a number of variables that change the way the VM interacts with system resources. A few that help preserve resources for the system can be enabled by appending:
<vpindex state='on'/>
<runtime state='on'/>
<synic state='on'/>
<stimer state='on'/>
to the hyperv section of the VM's .XML file.
...
<hyperv>
    <relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
<vpindex state='on'/>
<runtime state='on'/>
<synic state='on'/>
<stimer state='on'/>
...
This edit doesn't help the VM perform better so much as it preserves system resources when the plan is to run multiple simultaneous instances.
vcpupin
vcpupin is the process wherein each vCPU assigned to the VM is tied to a physical core/thread. Configuring this has the most profound impact on systems with multiple NUMA Nodes because it forces memory accesses to stay on one node. It also helps by tying the vCPUs to the Node that is directly connected to the GPU.
Identifying CPU Affinity
There are a couple of options available to determine which PCIe devices and CPU threads are connected to which NUMA Node. On Debian, lscpu & lspci -vnn can both be used to determine which node a PCIe device is connected to and which threads belong to that node.
Another option is lstopo (from the hwloc package). This application provides a graphical overview of the cores/threads and which PCIe device(s) are connected to each node.
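As a sketch, the node for a specific device can also be read straight from sysfs; the PCI address below is a placeholder, so substitute your GPU's address as shown by lspci:

```shell
# Show the NUMA layout: node count and which CPU threads belong to each node.
lscpu | grep -i 'numa' || true

# Per-device node lookup via sysfs (hypothetical address; take yours from lspci).
dev="0000:41:00.0"
if [ -e "/sys/bus/pci/devices/$dev/numa_node" ]; then
    cat "/sys/bus/pci/devices/$dev/numa_node"   # prints the node number, or -1
fi
```

On single-node systems numa_node typically reads -1 or 0, in which case pinning choices matter much less.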
Assigning CPU Affinity
To tie cores/threads to a VM, open the VM's .XML file (e.g. with virsh edit) and add vcpupin entries to a <cputune> section.
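A minimal sketch of what that looks like in the VM's .XML file; the vCPU count and cpuset values here are example values and should match the threads of the node identified above:

```xml
...
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
...
```

Each vcpupin line ties one guest vCPU to one host thread; keeping all cpuset values on the GPU's node is what delivers the NUMA benefit described above.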