Install Intel oneAPI

Install dependencies

RedHat:

dnf install at-spi2-core mesa-libgbm kernel-devel gtk3 xdg-utils libnotify libdrm

Ubuntu:

apt-get install libatspi2.0-0 libgbm1 linux-headers-$(uname -r) libgtk-3-0 xdg-utils libnotify4 libdrm2
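The two package lists above can be wrapped in a small dispatch helper. This is a dry-run sketch: `deps_cmd` and the `ID` matching are illustration names, and on Ubuntu the kernel-headers package is assumed to follow the usual `linux-headers-$(uname -r)` naming; it only prints the command it would run.

```shell
#!/bin/sh
# Print (dry run) the dependency-install command for the detected distro,
# using the package lists from the steps above.
deps_cmd() {
  case "$1" in
    rhel|centos|rocky|almalinux)
      echo "dnf install -y at-spi2-core mesa-libgbm kernel-devel gtk3 xdg-utils libnotify libdrm" ;;
    ubuntu|debian)
      # Ubuntu kernel headers are named after the running kernel.
      echo "apt-get install -y libatspi2.0-0 libgbm1 linux-headers-\$(uname -r) libgtk-3-0 xdg-utils libnotify4 libdrm2" ;;
    *)
      echo "unsupported distro: $1" >&2
      return 1 ;;
  esac
}

# ID comes from /etc/os-release, e.g. "rhel", "rocky", "ubuntu".
deps_cmd "$(. /etc/os-release 2>/dev/null && echo "$ID")" || true
```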

Install Intel oneAPI Base Toolkit

Step 1: Download the installation file.

wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/fdc7a2bc-b7a8-47eb-8876-de6201297144/l_BaseKit_p_2024.1.0.596_offline.sh

Step 2: Run sudo sh ./<installer>.sh to launch the GUI installer as root.

sudo sh ./l_BaseKit_p_2024.1.0.596_offline.sh

Step 3: Follow the instructions in the installer to finish the Intel oneAPI Base Toolkit installation.

Note:

You can follow the instructions in https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html to install Intel oneAPI Base Toolkit on the management node according to your own needs.
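On headless management nodes without a GUI, the same installer can run unattended. This is a hedged sketch: the `-a --silent --eula accept` options are Intel's documented batch-mode flags for the offline installers, but confirm them with `sh ./l_BaseKit_p_2024.1.0.596_offline.sh -a --help` before relying on them.

```shell
#!/bin/sh
# Unattended (no-GUI) install sketch for the Base Toolkit.
INSTALLER="l_BaseKit_p_2024.1.0.596_offline.sh"

if [ -f "./$INSTALLER" ]; then
  # Batch mode: no installer UI, EULA accepted on the command line.
  sudo sh "./$INSTALLER" -a --silent --eula accept
else
  echo "installer $INSTALLER not found; run the wget from Step 1 first" >&2
fi
```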

Install Intel oneAPI HPC Toolkit

Step 1: Download the installation file.

wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/7f096850-dc7b-4c35-90b5-36c12abd9eaa/l_HPCKit_p_2024.1.0.560_offline.sh

Step 2: Run sudo sh ./<installer>.sh to launch the GUI installer as root.

sudo sh ./l_HPCKit_p_2024.1.0.560_offline.sh

Step 3: Follow the instructions in the installer to finish the Intel oneAPI HPC Toolkit installation.

Note:

You can follow the instructions in https://software.intel.com/content/www/us/en/develop/tools/oneapi/hpc-toolkit.html to install Intel oneAPI HPC Toolkit on the management node according to your own needs.

Install Intel Trace Analyzer and Collector (ITAC)

Step 1: Download the installation file.

wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/644eec67-83d9-4bdd-be0d-d90587ec72ed/l_itac_oneapi_p_2022.1.0.158_offline.sh

Step 2: Run sudo sh ./<installer>.sh to launch the GUI installer as root.

sudo sh ./l_itac_oneapi_p_2022.1.0.158_offline.sh

Step 3: Follow the instructions in the installer to finish the Intel Trace Analyzer and Collector (ITAC) installation.

Note:

You can follow the instructions in https://www.intel.com/content/www/us/en/developer/tools/oneapi/trace-analyzer.html to install Intel Trace Analyzer and Collector (ITAC) on the management node according to your own needs.

Install Intel oneAPI AI Tools

Step 1: Download the installation file.

wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/eb21f0d0-bb7a-44b3-84d1-6489b0138b2e-2/l_AITools.2024.0.2.1.sh

Step 2: Run sudo sh ./<installer>.sh to launch the GUI installer as root.

# Path where the Intel oneAPI Base Toolkit is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"
sudo sh ./l_AITools.2024.0.2.1.sh -p $ONEAPI_PATH/intelpython

Step 3: Follow the instructions in the installer to finish the Intel oneAPI AI Tools installation.

Note: Before installing AI Tools, make sure the Intel oneAPI Base Toolkit is installed, and install AI Tools into the same directory as the Intel oneAPI Base Toolkit.

During the installation, choose not to let the installer update your shell configuration file to automatically initialize Conda.

You can follow the instructions in https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html to install Intel oneAPI AI Tools on the management node according to your own needs.
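The note above can be enforced with a small guard before launching the installer. This is a sketch: `base_toolkit_present` is a helper name invented for this example, and it assumes a Base Toolkit install always ships setvars.sh at its root.

```shell
#!/bin/sh
# Only launch the AI Tools installer when the Base Toolkit is already present,
# since AI Tools must share its installation directory.
base_toolkit_present() {
  # setvars.sh sits at the root of a Base Toolkit install.
  [ -e "$1/setvars.sh" ]
}

ONEAPI_PATH="${ONEAPI_PATH:-/opt/intel/oneapi}"
if base_toolkit_present "$ONEAPI_PATH" && [ -f ./l_AITools.2024.0.2.1.sh ]; then
  sudo sh ./l_AITools.2024.0.2.1.sh -p "$ONEAPI_PATH/intelpython"
else
  echo "Base Toolkit (or the AI Tools installer file) not found; see the steps above" >&2
fi
```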

Install AI Framework by pip

tensorflow

# Path where the Intel oneAPI Base Toolkit is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"
source $ONEAPI_PATH/intelpython/bin/activate
rm -rf $ONEAPI_PATH/intelpython/envs/tensorflow
conda create -n tensorflow python=3.9.15 -y
conda activate tensorflow

pip install tensorflow==2.12.0
pip install intel-extension-for-tensorflow[cpu]==1.2.0
pip install protobuf==3.20.3 --upgrade
# Verify
python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"

tensorflow-gpu

# Path where the Intel oneAPI Base Toolkit is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"
source $ONEAPI_PATH/intelpython/bin/activate
rm -rf $ONEAPI_PATH/intelpython/envs/tensorflow-gpu
conda create -n tensorflow-gpu python=3.9.15 -y
conda activate tensorflow-gpu

pip install tensorflow==2.12.0
pip install --upgrade intel-extension-for-tensorflow[gpu]==1.2.0
pip install intel-optimization-for-horovod==0.5.0
pip install protobuf==3.20.3 --upgrade
# Verify
python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"

pytorch

# Path where the Intel oneAPI Base Toolkit is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"
source $ONEAPI_PATH/intelpython/bin/activate
# The pytorch env is created by the AI Tools installation above
conda activate pytorch
pip install defusedxml==0.7.1

modin

# Path where the Intel oneAPI Base Toolkit is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"
source $ONEAPI_PATH/intelpython/bin/activate
rm -rf $ONEAPI_PATH/intelpython/envs/modin
conda create -n modin python=3.9.18 -y
conda activate modin

pip install modin[ray]==0.30.0
pip install grpcio==1.64.0

Update VTune Profiler Module File

Step 1: Delete the Original Module File

# Path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

cd $ONEAPI_PATH/vtune/2024.1/etc/modulefiles/vtune/
rm -rf 2024.1

Step 2: Download the New Module File

Download the 2024.1 file from https://hpc.lenovo.com/lico/downloads/7.2.1/2024.1 and upload it to the $ONEAPI_PATH/vtune/2024.1/etc/modulefiles/vtune/ directory.

Initialize modulefiles

Run the following commands to initialize modulefiles for LiCO on the management node:

# Path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

source $ONEAPI_PATH/setvars.sh
bash $ONEAPI_PATH/modulefiles-setup.sh --output-dir=$ONEAPI_PATH/modulefiles

Configure NFS

Create a shared directory based on the parent directory of the oneAPI installation path (take /opt/intel as an example):

Step 1: On the management node, export /opt/intel for Intel oneAPI:

echo "/opt/intel *(rw,no_subtree_check,no_root_squash)" >> /etc/exports
exportfs -a

Step 2: Configure the shared directory:

# IP address of management node in the compute intranet
MANAGER_NODE_IP="192.168.0.1"

nodeshell all "echo '${MANAGER_NODE_IP}:/opt/intel /opt/intel nfs nfsvers=4.0,nodev,noatime \
0 0' >> /etc/fstab"
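Re-running Step 2 as written appends a duplicate fstab entry each time. A minimal idempotent variant is sketched below; `append_once` and `FSTAB` are illustration names, and on real nodes the target file is /etc/fstab (written via nodeshell as shown above).

```shell
#!/bin/sh
# Append the fstab entry only if it is not already present.
MANAGER_NODE_IP="192.168.0.1"
FSTAB="${FSTAB:-$(mktemp)}"   # stand-in for /etc/fstab in this sketch

append_once() {
  # $1: exact line, $2: target file; -x matches the whole line, -F literally.
  grep -qxF "$1" "$2" 2>/dev/null || echo "$1" >> "$2"
}

LINE="${MANAGER_NODE_IP}:/opt/intel /opt/intel nfs nfsvers=4.0,nodev,noatime 0 0"
append_once "$LINE" "$FSTAB"
append_once "$LINE" "$FSTAB"   # second call is a no-op
cat "$FSTAB"
```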

Step 3: Mount the shared directory:

nodeshell all mkdir -p /opt/intel
nodeshell all mount /opt/intel

Step 4: Change write permissions for socwatch:

cd /opt/intel/oneapi/vtune/latest/socwatch
chmod 777 x64

Add module path

For Redhat, Rocky, CentOS Linux:

Add the LiCO module path on the login and compute nodes:

# Path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

nodeshell all "echo 'MODULEPATH=$ONEAPI_PATH/modulefiles:\$MODULEPATH' >> /etc/profile.d/oneapi.sh"
nodeshell all "source /etc/profile.d/oneapi.sh"

Run the following commands on the management node:

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

# Path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

echo "MODULEPATH=$ONEAPI_PATH/modulefiles:\$MODULEPATH" >> /etc/profile.d/oneapi.sh

source /etc/profile.d/oneapi.sh

sed -i s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g  /etc/lico/lico.ini.d/template.ini

lico lmod_sync
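Before editing the real /etc/lico/lico.ini.d/template.ini, the sed substitution above can be exercised on a scratch copy. This dry-run sketch uses a minimal stand-in line; the real file's contents differ.

```shell
#!/bin/sh
# Dry-run of the MODULEPATH substitution on a scratch file.
ONEAPI_PATH="/opt/intel/oneapi"
tmp=$(mktemp)
echo "modulepath = /opt/ohpc/pub/modulefiles" > "$tmp"   # minimal stand-in content

sed -i "s#/opt/ohpc/pub/modulefiles#/opt/ohpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g" "$tmp"

cat "$tmp"
# prints: modulepath = /opt/ohpc/pub/modulefiles:/opt/intel/oneapi/modulefiles
```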

For Ubuntu Linux:

Add the LiCO module path on the login and compute nodes:

ONEAPI_PATH="/opt/intel/oneapi"
nodeshell all "source /etc/profile.d/lmod.sh"
nodeshell all "echo 'MODULEPATH=$ONEAPI_PATH/modulefiles:\$MODULEPATH' >> /etc/profile.d/oneapi.sh"
nodeshell all "source /etc/profile.d/oneapi.sh"

Sync module to LiCO database in management node:

ONEAPI_PATH="/opt/intel/oneapi"

source /etc/profile.d/lmod.sh

echo "MODULEPATH=$ONEAPI_PATH/modulefiles:\$MODULEPATH" >> /etc/profile.d/oneapi.sh
source /etc/profile.d/oneapi.sh

sed -i s#/opt/hpc/pub/modulefiles#/opt/hpc/pub/modulefiles:$ONEAPI_PATH/modulefiles#g  /etc/lico/lico.ini.d/template.ini

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

lico lmod_sync

Enable oneAPI in LiCO

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

vi /etc/lico/lico.ini.d/oneapi.ini 

INTEL_MODULE_PATH = "<oneAPI install dir>"
ENABLE = true

Create oneAPI system runtime

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

lico init

Configure kernel parameter for Intel VTune

# add two kernel parameters in /etc/sysctl.conf on the compute nodes

nodeshell compute "echo 'kernel.kptr_restrict=0' >> /etc/sysctl.conf"
nodeshell compute "echo 'kernel.perf_event_paranoid=0' >> /etc/sysctl.conf"
nodeshell compute sysctl -p /etc/sysctl.conf
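Re-running the echo commands above duplicates the two sysctl lines. A small idempotent helper is sketched below; `set_sysctl` and `SYSCTL_FILE` are illustration names, and on real compute nodes the target is /etc/sysctl.conf (written via nodeshell as shown above), followed by sysctl -p.

```shell
#!/bin/sh
# Update a kernel parameter in place if present, append it otherwise.
set_sysctl() {
  key="$1"; val="$2"; file="$3"
  if grep -q "^$key=" "$file" 2>/dev/null; then
    sed -i "s#^$key=.*#$key=$val#" "$file"
  else
    echo "$key=$val" >> "$file"
  fi
}

SYSCTL_FILE="${SYSCTL_FILE:-$(mktemp)}"   # stand-in for /etc/sysctl.conf
set_sysctl kernel.kptr_restrict 0 "$SYSCTL_FILE"
set_sysctl kernel.perf_event_paranoid 0 "$SYSCTL_FILE"
cat "$SYSCTL_FILE"
```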

Configure VTune Backend Authentication

Note: If LiCO is deployed in a Docker container, run the following commands in the LiCO container; otherwise, run them on the management node.

vim /opt/intel/oneapi/vtune/latest/backend/config.yml

# change the type from passphrase to reverse-proxy

# type: passphrase
type: reverse-proxy
header: Authorization

vim /opt/intel/oneapi/vtune/latest/backend/server.js
# Navigate to the sendIndexHtml function
# Replace urls.public().href with config.urlPathPrefix in sendIndexHtml(res) as follows:

function sendIndexHtml(res) {
    const indexHtmlPath = path.join(__dirname, '../frontend/index.html');

    if (config.urlPathPrefix || config.baseUrl) {
        fs.readFile(indexHtmlPath, (err, data) => {
            if (err) {
                res.status(500);
                res.end();
            }
            const content = data.toString();
            res.send(content.replace('<base href="/">', `<base href="${config.urlPathPrefix}">`));
        });
    } else {
        res.sendFile(indexHtmlPath);
    }
}
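After the manual edit, it is easy to verify that the patch is complete. This sanity-check sketch uses a helper name (`server_js_patched`) invented for this example: the patch is applied when config.urlPathPrefix is present and the old urls.public().href expression is gone.

```shell
#!/bin/sh
# Check whether the server.js edit described above has been applied.
server_js_patched() {
  grep -q 'config.urlPathPrefix' "$1" 2>/dev/null && \
    ! grep -q 'urls.public().href' "$1" 2>/dev/null
}

SERVER_JS="/opt/intel/oneapi/vtune/latest/backend/server.js"
if server_js_patched "$SERVER_JS"; then
  echo "server.js patch applied"
else
  echo "server.js patch missing or file not found" >&2
fi
```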

Install the Intel driver for Intel VTune

Note:

Make sure that the kernel header sources are present on your host system. The kernel version should be 2.6.28 or later. For details, please refer to: https://www.intel.com/content/www/us/en/docs/vtune-profiler/user-guide/2024-1/build-install-sampling-drivers-for-linux-targets.html

# Path where Intel oneAPI is installed; /opt/intel/oneapi is the default
ONEAPI_PATH="/opt/intel/oneapi"

nodeshell compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./rmmod-sep"
nodeshell -c 1 compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./build-driver -ni"

# The following errors may be reported:
# c3: ERROR: kernel source directory "/usr/src/linux-4.18.0-305.3.1.el8.x86_64" either does not exist or is not a valid kernel source directory.
# c3:
# c3: Please use the following command to install kernel header on CentOS:
# c3:      yum install kernel-devel-4.18.0-305.3.1.el8.x86_64
#
# If an error occurs, install the indicated package on that node as prompted:
# eg: nodeshell c3 "sudo yum install -y kernel-devel-4.18.0-305.3.1.el8.x86_64"

nodeshell -c 1 compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./insmod-sep -r -g vtune -p 666"
nodeshell -c 1 compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./boot-script -i -g vtune -p 666"

nodeshell -c 1 compute "sed -i 's#^After.*#& network.target\nRequiresMountsFor=${ONEAPI_PATH}#g' /usr/lib/systemd/system/sep5.service"
nodeshell compute systemctl daemon-reload
nodeshell compute systemctl restart sep5.service
# If the restart fails, please execute the following command first:
# nodeshell compute "cd ${ONEAPI_PATH}/vtune/latest/sepdk/src && sudo ./rmmod-sep"
# Then, run the restart command again.
nodeshell compute systemctl enable sep5.service

Add Users to the VTune Group:

To use VTune for Platform Analysis in LiCO, it's essential to add the user to the vtune group.

For example, if OpenLDAP is used for user management in LiCO, follow these steps:

lgroupadd vtune
lgroupmod -M hpcadmin vtune

If a different authentication service is used, please make sure to add the necessary users to the vtune group according to its usage requirements.

Install the software for Intel VTune

For RedHat, CentOS, Rocky Linux:

nodeshell compute "dnf install -y dstat"

For Ubuntu Linux:

nodeshell compute "apt-get install -y dstat pcp"

Configure kernel parameters for Intel GDB

# add two kernel parameters in /etc/sysctl.conf on the compute nodes to set the core dump path
nodeshell compute "echo 'kernel.core_pattern = ./core-%e-%p-%s-%h-%t' >> /etc/sysctl.conf"
nodeshell compute "echo 'kernel.core_uses_pid = 0' >> /etc/sysctl.conf"
nodeshell compute sysctl -p /etc/sysctl.conf

For Ubuntu Linux:

To allow GDB to attach to a running process, the following commands must also be executed.

# change the value of kernel.yama.ptrace_scope for GDB 
nodeshell compute "sed -i 's#kernel.yama.ptrace_scope = 1#kernel.yama.ptrace_scope = 0#g' /etc/sysctl.d/10-ptrace.conf"
nodeshell compute sysctl -p /etc/sysctl.d/10-ptrace.conf

Check

Run the following commands on the management node to check if the installation is successful:

# The output may be /opt/intel/oneapi/intelpython/latest/bin/mpirun
which mpirun

# The output may be /opt/intel/oneapi/vtune/2024.1/bin64/vtune
which vtune

# The output may be /opt/intel/oneapi/mpi/2021.1.1/bin/mpiicc
which mpiicc
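The three checks above can be batched into one pass that reports every expected tool. This is a sketch; `check_tools` is a helper name invented for this example.

```shell
#!/bin/sh
# Report each expected oneAPI tool and whether it resolves on PATH.
check_tools() {
  rc=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool -> $(command -v "$tool")"
    else
      echo "missing: $tool"
      rc=1
    fi
  done
  return $rc
}

check_tools mpirun vtune mpiicc || echo "some oneAPI tools are not on PATH; check that setvars.sh was sourced"
```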

Run the following commands on the management node to check whether the module is successfully configured:

# The output contains /opt/intel/oneapi/modulefiles information
module avail

Run the following commands on the management node to check whether Intel Python is installed.

[root@head ~]# source /opt/intel/oneapi/intelpython/bin/activate
[root@head ~]# conda env list
# conda environments:
#
base                  *  /opt/intel/oneapi/intelpython
modin                    /opt/intel/oneapi/intelpython/envs/modin
pytorch                  /opt/intel/oneapi/intelpython/envs/pytorch
pytorch-gpu              /opt/intel/oneapi/intelpython/envs/pytorch-gpu
tensorflow               /opt/intel/oneapi/intelpython/envs/tensorflow
tensorflow-gpu           /opt/intel/oneapi/intelpython/envs/tensorflow-gpu

Run the following commands on the management node to check whether the Intel driver is installed.

# The output might be the following:
#        c1: sep5                 2793472  0
#        c1: socperf3              602112  1 sep5
#        c2: sep5                 2793472  0
#        c2: socperf3              602112  1 sep5
nodeshell compute "lsmod |grep sep"

Optional

VTune Platform Analysis troubleshooting:

1. For the Slurm scheduler, to allow non-root users to log in to compute nodes, make sure to configure the user whitelist first.

If the message "Cannot get the driver.Please check it" is displayed when you run the Platform Analysis on the administrator page, you can run the following command to view the background logs.

tail -f /var/log/lico/lico-core-django.log

If the following error occurs in the log, you need to add the current user to the Slurm whitelist.

Access denied: user <user> has no active jobs on this node.
Access denied by pam_slurm_adopt: you have no active jobs on this node

Step 1: Run the following commands on the management node, and synchronize the changed slurm.conf to the compute nodes.

# Edit the slurm.conf file
vi /etc/slurm/slurm.conf
# Add the following configuration items
PrologFlags=contain

Step 2: Run the following commands on the compute nodes:

# Edit the sshd file
vi /etc/pam.d/sshd
# Add the following configuration items and the order cannot be changed
account sufficient pam_listfile.so item=user sense=allow onerr=fail file=/etc/ssh/allowed_users
account required pam_slurm_adopt.so
# Create or modify the '/etc/ssh/allowed_users' file
vi /etc/ssh/allowed_users
# Add users according to the following example format
myuser1
myuser2

Step 3: Restart the following services:

# On the management node
systemctl restart slurmctld
# On the compute node
systemctl restart slurmd
systemctl restart sshd

2. Ftrace issue

The Linux Ftrace subsystem, located in the debugfs partition at /sys/kernel/debug/tracing, may be accessible to the root user only. In this case, VTune Profiler reports the following error: Ftrace collection is not possible due to a lack of credentials. Root privileges are required.

vtune: Error: Unable to analyze interrupts. Ftrace is not available. For more information, see the Linux* and Android* Kernel Analysis User Guide.
vtune: Error: notErrorOrWarning
vtune: Error: Ftrace collection is not possible due to a lack of credentials. Make sure you have read/write access to debugFS. You may either run the analysis with root privileges (recommended) or follow the configuration instructions provided in the Linux and Android Kernel Analysis help topic.

To enable Ftrace events collection on such a system, you may change permissions manually by using the chown command under the root account, for example:

# nodeshell compute chown -R <user>:vtune /sys/kernel/debug/tracing

Or you can automate the permission change by using the VTune script:

# nodeshell compute  /opt/intel/oneapi/vtune/latest/bin64/prepare-debugfs.sh --user <user>

Note: The permissions of /sys/kernel/debug/tracing must be changed on each compute node.