Install Intel oneAPI

Note:

If you are using Redhat 8.6 or Rocky Linux 8.6, update the kernel to version 4.18.0-372.26.1 or later. Otherwise, some Intel oneAPI toolkits will fail to install on the server.
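
To check and, if necessary, update the kernel before installing (a minimal sketch for Redhat/Rocky; assumes the distribution repositories already carry a new enough kernel):

# The reported version should be 4.18.0-372.26.1 or later
uname -r

# Update the kernel if it is older, then reboot
dnf update -y kernel
reboot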

Install dependencies

Redhat:

dnf install at-spi2-core mesa-libgbm kernel-devel gtk3 xdg-utils libnotify libdrm

Ubuntu:

apt-get install libatspi2.0-0 libgbm1 linux-headers-$(uname -r) libgtk-3-0 xdg-utils libnotify4 libdrm2

Install Intel oneAPI Base Toolkit

Step 1: Download the installation file.

wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/7deeaac4-f605-4bcf-a81b-ea7531577c61/l_BaseKit_p_2023.1.0.46401_offline.sh

Step 2: Run the installer script as root to launch the GUI installer:

sudo sh ./l_BaseKit_p_2023.1.0.46401_offline.sh

Step 3: Follow the instructions in the installer to finish the Intel oneAPI Base Toolkit installation.
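
If the management node has no display, the installer can also run non-interactively; a sketch using the silent-mode flags documented for Intel's offline installers:

sudo sh ./l_BaseKit_p_2023.1.0.46401_offline.sh -a --silent --eula accept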

Note:

You can follow the instructions at https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html to install the Intel oneAPI Base Toolkit on the management node according to your own needs.

Install Intel oneAPI HPC Toolkit

Step 1: Download the installation file.

wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/1ff1b38a-8218-4c53-9956-f0b264de35a4/l_HPCKit_p_2023.1.0.46346_offline.sh

Step 2: Run the installer script as root to launch the GUI installer:

sudo sh ./l_HPCKit_p_2023.1.0.46346_offline.sh

Step 3: Follow the instructions in the installer to finish the Intel oneAPI HPC Toolkit installation.

Note:

You can follow the instructions at https://software.intel.com/content/www/us/en/develop/tools/oneapi/hpc-toolkit.html to install the Intel oneAPI HPC Toolkit on the management node according to your own needs.

Install Intel oneAPI AI Analytics Toolkit

Step 1: Download the installation file.

wget https://registrationcenter-download.intel.com/akdlm/irc_nas/19202/l_AIKit_p_2023.1.0.31760_offline.sh

Step 2: Run the installer script as root to launch the GUI installer:

sudo sh ./l_AIKit_p_2023.1.0.31760_offline.sh

Step 3: Follow the instructions in the installer to finish the Intel oneAPI AI Analytics Toolkit installation.

Note:

Before installing the AI Analytics Toolkit, make sure the Intel oneAPI Base Toolkit is installed. You can follow the instructions at https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html to install the Intel oneAPI AI Analytics Toolkit on the management node according to your own needs.

Note:

The TensorFlow GPU and PyTorch GPU environments do not work in the AI Analytics Toolkit 2023.1.0 release, so pip must be used to set up the latest Intel TensorFlow GPU and PyTorch GPU environments.

Install AI Framework for Intel GPU by pip

TensorFlow

# Install
source /opt/intel/oneapi/setvars.sh
# rename the environment shipped with the toolkit so a fresh one can be created
conda rename -n tensorflow-gpu tensorflow-gpu-old
conda create -n tensorflow-gpu python=3.9.15
conda activate tensorflow-gpu

pip install tensorflow==2.12.0
pip install --upgrade intel-extension-for-tensorflow[gpu]==1.2.0
pip install intel-optimization-for-horovod==0.5.0
# Verify
python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"

PyTorch

# Install
source /opt/intel/oneapi/setvars.sh
# rename the environment shipped with the toolkit so a fresh one can be created
conda rename -n pytorch-gpu pytorch-gpu-old
conda create -n pytorch-gpu python=3.9.15
conda activate pytorch-gpu
pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
pip install scikit-learn==1.2.2
# Verify
python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"

Initialize modulefiles

Run the following commands to initialize modulefiles for LiCO on the management node:

# path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

source $ONEAPI_PATH/setvars.sh
bash $ONEAPI_PATH/modulefiles-setup.sh
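
You can verify that the oneAPI modulefiles were generated, for example:

# The listing should include oneAPI entries such as compiler, mpi, and mkl
module use $ONEAPI_PATH/modulefiles
module avail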

Configure NFS

Create a shared directory based on the parent directory of the oneAPI installation path (taking /opt/intel as an example):

Step 1: On the management node, share /opt/intel for Intel oneAPI:

echo "/opt/intel *(rw,no_subtree_check,no_root_squash)" >> /etc/exports
exportfs -a
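
You can confirm the export is active, for example:

# The output should list /opt/intel
showmount -e localhost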

Step 2: Configure the shared directory on all nodes:

# IP address of management node in the compute intranet
MANAGER_NODE_IP="192.168.0.1"

nodeshell all "echo '${MANAGER_NODE_IP}:/opt/intel /opt/intel nfs nfsvers=4.0,nodev,noatime \
0 0' >> /etc/fstab"

Step 3: Mount the shared directory:

nodeshell all mkdir -p /opt/intel
nodeshell all mount /opt/intel
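
You can confirm the mount succeeded on every node, for example:

# Each node should report an nfs4 filesystem served by the management node
nodeshell all "df -hT /opt/intel"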

Step 4: Change the write permission for socwatch:

cd /opt/intel/oneapi/vtune/latest/socwatch
chmod 777 x64

Add module path

For Redhat, Rocky, CentOS Linux:

Add the LiCO module path on the login and compute nodes:

# path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

nodeshell all "sed -i s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g /etc/profile.d/lmod.sh"

nodeshell all "sed -i s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g /etc/profile.d/lmod.csh"

nodeshell all "source /etc/profile.d/lmod.sh"

Run the following commands on the management node:

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

# path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

sed -i s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g /etc/profile.d/lmod.sh

sed -i s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g /etc/profile.d/lmod.csh

source /etc/profile.d/lmod.sh

sed -i s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g  /etc/lico/lico.ini.d/template.ini

lico lmod_sync

For Ubuntu Linux:

Add the LiCO module path on the login and compute nodes:


ONEAPI_PATH="/opt/intel/oneapi"

nodeshell all "echo "$ONEAPI_PATH/modulefiles" >> /etc/lmod/modulespath"

nodeshell all  "source /etc/profile.d/lmod.sh"

Sync modules to the LiCO database on the management node:


ONEAPI_PATH="/opt/intel/oneapi"

echo "$ONEAPI_PATH/modulefiles" >> /etc/lmod/modulespath

source /etc/profile.d/lmod.sh

sed -i s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g  /etc/lico/lico.ini.d/template.ini

lico lmod_sync

Enable oneAPI in LiCO

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

vi /etc/lico/lico.ini.d/oneapi.ini 

INTEL_MODULE_PATH = "<oneAPI install dir>"
ENABLE = true

Create oneAPI system runtime

Note: If LiCO is deployed in a Docker container, run the following command in the LiCO container; otherwise, run it on the management node.

lico init

Configure kernel parameters for Intel VTune

# add two kernel parameters in /etc/sysctl.conf on the compute nodes

nodeshell compute "echo 'kernel.kptr_restrict=0' >> /etc/sysctl.conf"
nodeshell compute "echo 'kernel.perf_event_paranoid=0' >> /etc/sysctl.conf"
nodeshell compute sysctl -p /etc/sysctl.conf
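
You can verify that the new values took effect, for example:

nodeshell compute "sysctl kernel.kptr_restrict kernel.perf_event_paranoid"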

Configure VTune Backend Authentication

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

vim /opt/intel/oneapi/vtune/latest/backend/config.yml

# change the type from passphrase to reverse-proxy

# type: passphrase
type: reverse-proxy
header: Authorization

vim /opt/intel/oneapi/vtune/latest/backend/server.js
# navigate to the function sendIndexHtml
# replace urls.public().href with config.urlPathPrefix in the sendIndexHtml(res) function, as follows:

function sendIndexHtml(res) {
    const indexHtmlPath = path.join(__dirname, '../frontend/index.html');

    if (config.urlPathPrefix || config.baseUrl) {
        fs.readFile(indexHtmlPath, (err, data) => {
            if (err) {
                res.status(500);
                res.end();
                return;
            }
            const content = data.toString();
            res.send(content.replace('<base href="/">', `<base href="${config.urlPathPrefix}">`));
        });
    } else {
        res.sendFile(indexHtmlPath);
    }
}

Install the Intel driver for Intel VTune

# path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

nodeshell compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./rmmod-sep"
nodeshell -c 1 compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./build-driver -ni"

# The following errors may be reported:
# c3: ERROR: kernel source directory "/usr/src/linux-4.18.0-305.3.1.el8.x86_64" either does not exist or is not a valid kernel source directory.
# c3:
# c3: Please use the following command to install kernel header on CentOS:
# c3:      yum install kernel-devel-4.18.0-305.3.1.el8.x86_64
#
# If an error occurs, install the corresponding program on the corresponding node as prompted:
# e.g.: nodeshell c3 "sudo yum install -y kernel-devel-4.18.0-305.3.1.el8.x86_64"

nodeshell -c 1 compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./boot-script -i -g vtune -p 666"
nodeshell -c 1 compute "sed -i 's#^After.*#& network.target\nRequiresMountsFor=${ONEAPI_PATH}#g' /usr/lib/systemd/system/sep5.service"
# reload unit files after the edit above, then enable and start the service
nodeshell compute systemctl daemon-reload
nodeshell compute systemctl enable sep5.service
nodeshell compute systemctl start sep5.service
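
You can confirm the service is running on every compute node, for example:

nodeshell compute "systemctl is-active sep5.service"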

Install the software for Intel VTune

For Redhat, CentOS, Rocky Linux:

nodeshell compute "dnf install -y dstat"

For Ubuntu Linux:

nodeshell compute "apt-get install dstat pcp"

Configure kernel parameters for Intel GDB

# add two kernel parameters in /etc/sysctl.conf on the compute nodes to set the core file path
nodeshell compute "echo 'kernel.core_pattern = ./core-%e-%p-%s-%h-%t' >> /etc/sysctl.conf"
nodeshell compute "echo 'kernel.core_uses_pid = 0' >> /etc/sysctl.conf"
nodeshell compute sysctl -p /etc/sysctl.conf
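
To verify that core files are generated with the configured pattern, you can force a crash in a throwaway process (a minimal sketch; the core file size limit must be raised first):

ulimit -c unlimited
sleep 60 &
kill -SEGV $!
# A file such as core-sleep-<pid>-11-<host>-<timestamp> should appear in the current directory
ls core-*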

For Ubuntu Linux:

To allow GDB to attach to a running process, the following commands also need to be executed.

# change the value of kernel.yama.ptrace_scope for GDB 
nodeshell compute "sed -i s#kernel.yama.ptrace_scope = 1#kernel.yama.ptrace_scope = 0#g /etc/sysctl.d/10-ptrace.conf"
nodeshell compute  sysctl -p /etc/sysctl.d/10-ptrace.conf

Check

Run the following commands on the management node to check if the installation is successful:

# The output may be /opt/intel/oneapi/intelpython/latest/bin/mpirun
which mpirun

# The output may be /opt/intel/oneapi/intelpython/latest/bin/mpitune
which mpitune

# The output may be /opt/intel/oneapi/mpi/2021.1.1/bin/mpiicc
which mpiicc

Run the following command on the management node to check whether the modules are successfully configured:

# The output contains /opt/intel/oneapi/modulefiles information
module avail

Run the following commands on the management node to check whether Intel Python is installed:

[root@head ~]# source /opt/intel/oneapi/setvars.sh
[root@head ~]# conda env list
# conda environments:
#
base                  *  /opt/intel/oneapi/intelpython/latest
2022.1.0                 /opt/intel/oneapi/intelpython/latest/envs/2022.1.0
modin                    /opt/intel/oneapi/intelpython/latest/envs/modin
modin-0.13.3             /opt/intel/oneapi/intelpython/latest/envs/modin-0.13.3
pytorch                  /opt/intel/oneapi/intelpython/latest/envs/pytorch
pytorch-1.10.0           /opt/intel/oneapi/intelpython/latest/envs/pytorch-1.10.0
tensorflow               /opt/intel/oneapi/intelpython/latest/envs/tensorflow
tensorflow-2.8.0         /opt/intel/oneapi/intelpython/latest/envs/tensorflow-2.8.0

Run the following command on the management node to check whether the Intel driver is installed:

# The output might be the following:
#        c1: sep5                 2793472  0
#        c1: socperf3              602112  1 sep5
#        c2: sep5                 2793472  0
#        c2: socperf3              602112  1 sep5
nodeshell compute "lsmod |grep sep"

Optional

VTune Platform Analysis troubleshooting:

1. For the Slurm scheduler, to allow non-root users to log in to compute nodes, make sure the user whitelist is configured first.

If the message "Cannot get the driver.Please check it" is displayed when you run Platform Analysis on the administrator page, you can run the following command to view the background logs.

tail -f /var/log/lico/lico-core-django.log

If the following error occurs in the log, you need to add the current user to the Slurm whitelist.

Access denied: user <user> has no active jobs on this node.
Access denied by pam_slurm_adopt: you have no active jobs on this node

Step 1: Run the following commands on the management node, and sync the updated slurm.conf to the compute nodes.

# Edit the slurm.conf file
vi /etc/slurm/slurm.conf
# Add the following configuration items
PrologFlags=contain

Step 2: Run the following commands on the compute nodes:

# Edit the sshd file
vi /etc/pam.d/sshd
# Add the following configuration items; the order cannot be changed
account sufficient pam_listfile.so item=user sense=allow onerr=fail file=/etc/ssh/allowed_users
account required pam_slurm_adopt.so
# Create or modify the '/etc/ssh/allowed_users' file
vi /etc/ssh/allowed_users
# Add users according to the following example format
myuser1
myuser2

Step 3: Restart the following services:

# On the management node
systemctl restart slurmctld
# On the compute node
systemctl restart slurmd
systemctl restart sshd

2. Ftrace issue

The Linux Ftrace subsystem, located in the debugfs partition at /sys/kernel/debug/tracing, may be accessible to the root user only. In this case, VTune Profiler reports that Ftrace collection is not possible due to a lack of credentials and that root privileges are required:

vtune: Error: Unable to analyze interrupts. Ftrace is not available. For more information, see the Linux* and Android* Kernel Analysis User Guide.
vtune: Error: notErrorOrWarning
vtune: Error: Ftrace collection is not possible due to a lack of credentials. Make sure you have read/write access to debugFS. You may either run the analysis with root privileges (recommended) or follow the configuration instructions provided in the Linux and Android Kernel Analysis help topic.

To enable Ftrace event collection on such a system, you can change the permissions manually by using the chown command under the root account, for example:

# nodeshell compute chown -R <user>:vtune /sys/kernel/debug/tracing

Or you can change the permissions automatically by using the VTune script:

# nodeshell compute  /opt/intel/oneapi/vtune/latest/bin64/prepare-debugfs.sh --user <user>
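
Either way, you can check the resulting permissions afterwards, for example:

# nodeshell compute "ls -ld /sys/kernel/debug/tracing"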

Note: The permissions of /sys/kernel/debug/tracing must be changed on each compute node.