Performance Optimization
<span style="color: red;">'''Note: When hugepages are configured this portion of memory is taken away from the Host. This means the Host will no longer be able to use it. Keep this in mind.'''</span>
=== Debian (Ubuntu/Mint/Lubuntu/PopOS/etc) ===
First, check whether Linux is already using hugepages with: <code>cat /proc/meminfo | grep Huge</code>.
If the output resembles the following:
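On a host where hugepages have not yet been configured, all of the hugepage counters are typically zero. The sample below is representative only; exact field names and values vary by kernel version:

```
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
```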
== vcpupin ==
vcpupin is the process wherein each vCPU assigned to the VM is pinned to a physical core/thread. Configuring this has the most profound impact on systems with multiple NUMA nodes, because it forces memory requests to stay on one node. It also lets you tie the vCPUs to the node that is directly connected to the GPU.
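As an illustration, pinning is configured in the libvirt domain XML under <code>&lt;cputune&gt;</code>. The thread numbers below are hypothetical; substitute threads that belong to the NUMA node your GPU is attached to:

```xml
<!-- Sketch: pin 4 vCPUs to host threads 2, 10, 3, 11 (hypothetical values; -->
<!-- choose threads on the NUMA node the GPU is connected to) -->
<vcpu placement="static">4</vcpu>
<cputune>
  <vcpupin vcpu="0" cpuset="2"/>
  <vcpupin vcpu="1" cpuset="10"/>
  <vcpupin vcpu="2" cpuset="3"/>
  <vcpupin vcpu="3" cpuset="11"/>
</cputune>
```

On SMT/hyperthreaded hosts it is common to pin pairs of vCPUs to pairs of sibling threads so the guest's view of its topology matches the hardware underneath.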
− | |||
− | |||
=== Identifying CPU Affinity ===
There are a couple of options available to determine which PCIe devices and CPU threads are connected to which NUMA node. On Debian, <code>lscpu</code> and <code>lspci</code> can both be used to determine which node a PCIe device is connected to and which threads belong to that node.
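For example, the commands below list the thread-to-node mapping and query a device's node through sysfs. The PCIe address <code>0000:01:00.0</code> is a placeholder; substitute your GPU's address as reported by <code>lspci</code>:

```shell
# Show which CPU threads belong to which NUMA node (util-linux's lscpu)
lscpu | grep -i 'numa'
# Show the NUMA node of a specific PCIe device via sysfs
# (0000:01:00.0 is a placeholder address; -1 means the device has no NUMA affinity)
cat /sys/bus/pci/devices/0000:01:00.0/numa_node 2>/dev/null || true
```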
− | |||
− | |||
Another option is lstopo (from the <code>hwloc</code> package). This application provides a graphical overview of the cores/threads and which PCIe devices are connected to which node.
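Assuming <code>hwloc</code> is installed (e.g. <code>sudo apt install hwloc</code> on Debian-family systems), the topology can be rendered to an image or printed in the terminal:

```shell
# Render the full machine topology (NUMA nodes, caches, threads, PCIe devices) to a PNG
lstopo topology.png
# Text-only version, useful on headless hosts
lstopo-no-graphics
```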
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− |